FAQ

Common questions about PromptScore

This FAQ clarifies PromptScore's scope, privacy model, current capabilities, and overall product direction.

Is PromptScore an evaluation framework?

No. PromptScore analyzes the input prompt itself. It does not grade the quality of a model response or replace output evaluation.

Does PromptScore send prompts to external APIs?

Not by default. The deterministic workflow stays local. Prompt text is only sent to an external provider when you explicitly enable LLM-backed rules and configure a provider client or API key.
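As an illustration of what that opt-in might look like, here is a hypothetical configuration fragment. The key names below are assumptions for the sake of example, not PromptScore's documented schema:

```yaml
# Hypothetical config sketch: key names are illustrative, not the real schema.
llm_rules:
  enabled: true                 # off by default; nothing leaves the machine until set
  provider: example-provider    # placeholder provider name
  api_key_env: LLM_API_KEY      # read from the environment, not stored in the file
```

The point is the shape, not the keys: prompt text only crosses the network once a setting like this is explicitly turned on.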

Why have model profiles if the rules are deterministic?

Profiles let the same core engine adjust severity, weighting, suggestions, and references for different model families without forking the entire rule set.
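To make that concrete, a profile might override per-rule settings on top of the shared engine. This YAML fragment is a sketch only; the field names are assumptions, not the actual profile format:

```yaml
# Hypothetical profile sketch: field names are illustrative.
profile: example-model-family
overrides:
  vague-instruction:
    severity: warning    # downgraded for this model family
    weight: 0.5          # contributes less to the overall score
    suggestion: "Prefer explicit step-by-step instructions for this family."
```

The core rules stay identical; the profile only reshapes how their findings are weighted and presented.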

Can I use PromptScore in CI?

Yes. The CLI exits with code 1 when any finding meets the active failure threshold. The threshold defaults to errors and can be tightened to warnings or info through configuration or the --fail-on flag.
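For example, a CI step might look like the following. The --fail-on flag is from PromptScore itself; the invocation style, package runner, and prompt path are assumptions for illustration:

```yaml
# Hypothetical CI step (GitHub Actions style); paths and invocation are illustrative.
- name: Lint prompts
  run: npx promptscore ./prompts --fail-on warning   # exit code 1 fails the job
```

Because failure is signaled through the exit code, this works the same way in any CI system that treats non-zero exits as a failed step.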

Can I add my own rules?

Yes. The core library supports additional rules through the programmatic API, and the repository documents how to register and test them.
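As a rough sketch of what a custom rule could look like, the snippet below defines a minimal rule shape and runs it directly. The `Rule` and `Finding` interfaces here are assumptions for illustration, not PromptScore's actual programmatic API; consult the repository docs for the real registration mechanism:

```typescript
// Minimal self-contained sketch. The Rule/Finding shapes are assumptions,
// not PromptScore's actual API.
interface Finding {
  ruleId: string;
  severity: "info" | "warning" | "error";
  message: string;
}

interface Rule {
  id: string;
  check(prompt: string): Finding[];
}

// Example custom rule: flag prompts that never state an output format.
const requireOutputFormat: Rule = {
  id: "custom/require-output-format",
  check(prompt) {
    const mentionsFormat = /\b(json|markdown|bullet|table|format)\b/i.test(prompt);
    return mentionsFormat
      ? []
      : [{
          ruleId: "custom/require-output-format",
          severity: "warning",
          message: "Prompt does not specify an output format.",
        }];
  },
};

// With the real library you would register this through the programmatic API;
// here we just invoke it directly to show the contract.
const findings = requireOutputFormat.check("Summarize the article.");
console.log(findings.length); // 1 finding: no output format specified
```

A rule like this stays deterministic, so it composes cleanly with the default local workflow and with CI failure thresholds.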

Is there a paid product today?

No hosted paid product ships today. The current public product is the open-source local analysis experience.

Will browser and CLI results match?

That is the goal for the deterministic pipeline. The browser analyzer runs the same rule and scoring flow, with built-in profiles standing in for the filesystem-loaded YAML used by the CLI. Results from custom LLM-enabled integrations depend on the client you inject, so those are not guaranteed to match.