# Rules reference

Understand what the deterministic engine checks.
All currently shipped rules are deterministic. They score the structure and clarity of the prompt itself, not the quality of a model response.
## Rules in the current public release
| Rule ID | Category | What it checks |
|---|---|---|
| `min-length` | specificity | Prompt is not too short. |
| `max-length` | structure | Prompt is not excessively long. |
| `no-output-format` | specificity | The expected answer format is specified. |
| `no-examples` | best-practice | Few-shot examples are present. |
| `no-role` | best-practice | A role or persona is assigned. |
| `no-context` | specificity | Background context is provided. |
| `ambiguous-negation` | clarity | Negative instructions are not overly vague or stacked. |
| `no-constraints` | specificity | Explicit constraints are defined. |
| `all-caps-abuse` | clarity | ALL CAPS is not overused for emphasis. |
| `vague-instruction` | clarity | Qualifiers like "good" or "appropriate" are not left undefined. |
| `missing-task` | clarity | An explicit task or request is detectable. |
| `no-structured-format` | structure | Long prompts use visible structure such as sections or tags. |
## How scoring should be interpreted
- The score is a structural signal, not a guarantee of output quality.
- Rule weight and severity come from the active profile.
- `missing-task` is intentionally the most important rule in the default experience.
- Suggestions are sorted by likely impact rather than by their order in the file.
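The interplay between profile weights, the score, and suggestion ordering can be sketched as follows. The profile contents, the 0-100 deduction formula, and the function names are illustrative assumptions; only the general idea (weights come from the profile, `missing-task` weighs most, suggestions sort by impact) comes from the text above.

```python
# Hypothetical profile; weights and severities are invented for this sketch.
PROFILE = {
    "missing-task": {"weight": 3.0, "severity": "error"},
    "no-context": {"weight": 1.5, "severity": "warning"},
    "all-caps-abuse": {"weight": 0.5, "severity": "info"},
}

def score(findings: list[str], profile: dict = PROFILE) -> float:
    """Return a 0-100 structural score, deducting each finding's weight."""
    total_weight = sum(rule["weight"] for rule in profile.values())
    deducted = sum(profile[f]["weight"] for f in findings if f in profile)
    return round(100 * (1 - deducted / total_weight), 1)

def sort_suggestions(findings: list[str], profile: dict = PROFILE) -> list[str]:
    """Order findings by likely impact (profile weight), not file order."""
    return sorted(findings, key=lambda f: -profile[f]["weight"])

findings = ["all-caps-abuse", "missing-task"]
print(score(findings))             # heavy missing-task deduction dominates
print(sort_suggestions(findings))  # missing-task surfaces first
```

Under this sketch, swapping in a different profile changes both the score and the suggestion order without touching the rules themselves, which matches the idea that weight and severity live in the profile rather than in the rule.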
## What the rules do not check yet
PromptScore does not currently validate runtime grounding, output correctness, safety outcomes, or tool behavior. Those are different problems and should remain clearly separated from prompt linting.