Use PromptScore in the terminal and in CI
The CLI is the fastest way to lint prompts locally, automate checks in scripts, and enforce prompt quality in pipelines.
Core commands
Install the package as `@promptscore/cli`, then run the `promptscore` binary in your shell.
promptscore analyze prompt.txt
promptscore analyze prompts/
promptscore analyze "prompts/**/*.{txt,md}" --format json
promptscore analyze --inline "You are a helpful assistant."
promptscore analyze prompt.txt --model gpt --format json
promptscore analyze prompt.txt --rules missing-task,no-output-format
promptscore analyze prompt.txt --llm
promptscore rules
promptscore profiles

Directory and glob analysis
`analyze` accepts files, directories, and glob patterns. Directory inputs recurse through `.txt`, `.md`, `.markdown`, and `.prompt` files while skipping common build folders like `node_modules`, `.git`, `.next`, and `dist`. Use globs when you want custom file types or tighter matching.
promptscore analyze prompts/
promptscore analyze "prompts/**/*.{txt,md}" --fail-on warning
promptscore analyze prompts/ examples/reviews/*.md

Project config and policy
`analyze` can auto-load a project config file and lets CLI flags override it. Use `--config` when you want to point at a specific config file, and `--fail-on` when you want a one-off policy threshold in CI.
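As a sketch of what a project config might contain — the key names below are illustrative assumptions, not the documented schema — a hypothetical `team.yaml` could look like:

```yaml
# Hypothetical config sketch — key names here are assumptions,
# not the documented PromptScore schema.
failOn: warning          # policy threshold, overridable with --fail-on
format: json             # default output format for this project
rules:                   # restrict analysis to specific rules
  - missing-task
  - no-output-format
```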
promptscore analyze prompt.txt --config ./configs/team.yaml
promptscore analyze prompt.txt --fail-on warning

Supported formats
`analyze` supports `text`, `json`, and `markdown` output formats, selected with the `--format` flag.
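The `json` format is the natural choice for scripting. The snippet below shows one way to post-process a report in Python; the report shape used here (a top-level `findings` array with `rule`, `severity`, and `message` fields) is a hypothetical assumption, so adjust the field names to match the actual output:

```python
import json

# Hypothetical report shape — the real JSON schema may differ.
sample_report = json.loads("""
{
  "findings": [
    {"rule": "missing-task", "severity": "error", "message": "No task found"},
    {"rule": "no-output-format", "severity": "warning", "message": "No output format specified"}
  ]
}
""")

# Rank severities so a threshold check mirrors --fail-on semantics.
SEVERITY_RANK = {"info": 0, "warning": 1, "error": 2}

def count_at_or_above(report: dict, threshold: str) -> int:
    """Count findings at or above the given severity threshold."""
    rank = SEVERITY_RANK[threshold]
    return sum(
        1
        for finding in report.get("findings", [])
        if SEVERITY_RANK.get(finding.get("severity"), -1) >= rank
    )

print(count_at_or_above(sample_report, "warning"))  # 2 for this sample
```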
Opt-in LLM rules
Use `--llm` when you want PromptScore to run experimental LLM-backed rules in addition to the deterministic registry. This path remains explicit and requires provider config plus an API key environment variable.
promptscore analyze prompt.txt --llm
promptscore analyze prompt.txt --config ./promptscore.config.yaml --llm

Exit codes
- `0`: analysis completed and no findings met the active failure threshold. The default threshold is `error`.
- `1`: analysis completed and at least one finding met the active threshold from `--fail-on` or project config.
- `2`: PromptScore could not complete the command because of an input or runtime error.
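Because any nonzero exit code fails a CI step, the codes above map directly onto pipeline pass/fail behavior. A hypothetical GitHub Actions step (the step name, glob, and the assumption that the CLI is already installed are all illustrative) might look like:

```yaml
# Hypothetical CI step — step name and glob are illustrative.
- name: Lint prompts
  run: |
    # Exit 0 lets the job continue; exit 1 (threshold met) or
    # exit 2 (input/runtime error) fails the step.
    promptscore analyze "prompts/**/*.{txt,md}" --fail-on warning
```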
stdin support
If no file argument and no `--inline` value are provided, PromptScore reads from stdin when input is piped into the process.
cat prompt.txt | promptscore analyze
echo "You are a helpful assistant." | promptscore analyze --model _base

Current boundaries
- LLM-powered rules are experimental, opt-in, and require provider configuration.
- Directory inputs are intentionally conservative and focus on prompt-like text files.
- The CLI remains a thin wrapper around the shared core engine and rule registry.