Static analysis for LLM prompts. Analyze a prompt in the browser, use the CLI in CI, or embed the core library directly in your tooling with the same deterministic engine.
The browser analyzer below runs the same deterministic engine directly in the client, so the score and findings stay aligned with the CLI and the core library.
This analyzer uses the current deterministic engine from @promptscore/core. No API calls are made.
Run the analyzer to see a real prompt report here.
Analyze prompts directly in the browser on this page, run the CLI on a prompt file, or import @promptscore/core in your code.
PromptScore checks length, structure, output format, examples, constraints, vague language, and more. All offline.
Each finding includes a concrete fix suggestion, and supported profiles can attach Claude- or GPT-specific references.
Check length, structure, examples, output format, constraints, vague language, and more. No LLM calls, no API keys.
YAML profiles for Claude, GPT, and a universal baseline. Rules adjust severity and suggestions per model.
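As a rough illustration of what a per-model profile could look like, here is a hypothetical YAML sketch. The field names (`extends`, `rules`, `severity`, `suggestion`) are assumptions for illustration, not the shipped schema; consult the project's profile documentation for the real format.

```yaml
# Hypothetical profile sketch -- field names are assumptions, not the real schema.
name: claude
extends: universal          # start from the universal baseline, then override
rules:
  missing-output-format:
    severity: warning       # raise or lower severity per model
    suggestion: "State the expected output format explicitly, e.g. 'Respond in JSON.'"
  vague-language:
    severity: error
    suggestion: "Replace vague qualifiers ('some', 'a few') with concrete values."
```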
Use the browser analyzer for quick checks, import @promptscore/core in your code, or run promptscore from the terminal.
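For embedding, a report from the engine might be consumed like the sketch below. The types and field names (`Finding`, `Report`, `score`, `findings`) are assumptions based on the feature list above, not `@promptscore/core`'s actual exports; the helper shows one way to mirror the CLI's threshold behavior in your own tooling.

```typescript
// Hypothetical shapes -- @promptscore/core's real types may differ.
type Severity = "info" | "warning" | "error";

interface Finding {
  rule: string;        // e.g. "vague-language"
  severity: Severity;
  message: string;
  suggestion: string;  // each finding carries a concrete fix suggestion
}

interface Report {
  score: number;       // assumed 0-100, higher is better
  findings: Finding[];
}

// Fail a build when any finding meets a severity threshold,
// mirroring the CLI's non-zero exit behavior described below.
function shouldFail(report: Report, threshold: Severity): boolean {
  const rank: Record<Severity, number> = { info: 0, warning: 1, error: 2 };
  return report.findings.some((f) => rank[f.severity] >= rank[threshold]);
}

const report: Report = {
  score: 72,
  findings: [
    {
      rule: "vague-language",
      severity: "warning",
      message: "Prompt uses the vague qualifier 'a few'.",
      suggestion: "Replace 'a few' with a concrete number.",
    },
  ],
};

console.log(shouldFail(report, "warning")); // true: a warning-level finding exists
```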
Use the CLI in CI today. It scans prompt directories and exits non-zero when findings meet your configured failure threshold.
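A CI job could be as small as the sketch below. The invocation is an assumption based on the description above (directory scanning, non-zero exit at the failure threshold); check the CLI's own help output for the real flags.

```yaml
# Hypothetical GitHub Actions job -- the promptscore invocation is assumed, not verified.
name: prompt-lint
on: [pull_request]
jobs:
  promptscore:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Fails the job when findings meet the configured failure threshold
      - run: npx promptscore ./prompts
```

Because the engine is deterministic and fully offline, the job needs no API keys or secrets.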
Each failing rule includes a concrete fix suggestion, and supported profiles can link to official model docs.
Your prompts never leave your machine. No telemetry. No network calls. Fully offline.