Config

Configure PromptScore once per project

PromptScore can discover a project config automatically so teams can share model defaults, rule subsets, profile directories, opt-in LLM settings, and CI failure thresholds.

Supported file names

  • `promptscore.config.yaml`
  • `promptscore.config.yml`
  • `promptscore.config.json`
  • `.promptscorerc`, `.promptscorerc.yaml`, or `.promptscorerc.json`

Example config

model: claude
format: markdown
rules:
  - missing-task
  - no-output-format
include_llm: true
llm:
  provider: openai
  model: gpt-5-mini
  api_key_env: OPENAI_API_KEY
fail_on_severity: warning
profiles_dir: ./profiles
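
The same settings can also live in `promptscore.config.json`. The fragment below is a direct translation of the YAML above, assuming the JSON format mirrors the YAML keys one-to-one:

```json
{
  "model": "claude",
  "format": "markdown",
  "rules": ["missing-task", "no-output-format"],
  "include_llm": true,
  "llm": {
    "provider": "openai",
    "model": "gpt-5-mini",
    "api_key_env": "OPENAI_API_KEY"
  },
  "fail_on_severity": "warning",
  "profiles_dir": "./profiles"
}
```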

What it controls today

  • `model`: default profile for analysis.
  • `format`: default output format for CLI runs.
  • `rules`: restrict analysis to a specific rule subset.
  • `include_llm`: opt into experimental LLM-backed rules for CLI or Node runs.
  • `llm.provider`: choose the external provider. Today, `openai` is supported.
  • `llm.model`: choose the model used for opt-in LLM-backed rules.
  • `llm.api_key_env`: choose which environment variable stores the API key.
  • `llm.base_url`: override the provider base URL when needed.
  • `fail_on_severity`: treat `warning` or `info` findings as CI failures, in both single-file and batch runs.
  • `profiles_dir`: load profiles from a custom directory relative to the config file.

LLM activation model

`include_llm` does not send prompt text anywhere by itself. Prompt text only leaves the local runtime when you both enable LLM-backed rules and provide a configured provider client through the CLI or programmatic API.

Note: the hosted browser analyzer on `promptscore.dev` stays deterministic by default and does not inject a provider client.

Override precedence

CLI flags win over config values. Config values win over built-in defaults. For example, you can keep `model: claude` in the project config and still run `promptscore analyze prompt.txt --model gpt` for a one-off comparison.
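
The precedence chain can be sketched as a layered merge. The names below (`resolveSettings`, the `"generic"`/`"text"` defaults) are illustrative assumptions, not PromptScore's actual internals:

```typescript
// Sketch of the precedence rule: CLI flags > config file > built-in defaults.
type Settings = { model?: string; format?: string };

const builtinDefaults: Settings = { model: "generic", format: "text" }; // assumed defaults

function resolveSettings(config: Settings, cliFlags: Settings): Settings {
  // Later spreads win, so CLI flags override config, which overrides defaults.
  return { ...builtinDefaults, ...config, ...cliFlags };
}

// Project config pins claude; a one-off run passes --model gpt.
const resolved = resolveSettings({ model: "claude" }, { model: "gpt" });
// resolved.model === "gpt"; resolved.format falls back to the default.
```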

Explicit config paths

PromptScore auto-discovers config files by walking up parent directories, starting from the current working directory or from the directory of the analyzed file. You can also point at a specific file with `--config`.

promptscore analyze prompt.txt --config ./configs/team.yaml
promptscore profiles --config ./configs/team.yaml
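
The upward walk can be sketched as follows. This is a minimal model, not the real implementation: `findConfig` is a hypothetical name, the file-name order is assumed to match the list above, and an `exists` predicate is injected so the logic stays testable without touching the disk:

```typescript
// File names checked at each directory level, in assumed priority order.
const CONFIG_NAMES = [
  "promptscore.config.yaml",
  "promptscore.config.yml",
  "promptscore.config.json",
  ".promptscorerc",
  ".promptscorerc.yaml",
  ".promptscorerc.json",
];

function findConfig(startDir: string, exists: (path: string) => boolean): string | null {
  let dir = startDir;
  while (true) {
    for (const name of CONFIG_NAMES) {
      const candidate = dir === "/" ? `/${name}` : `${dir}/${name}`;
      if (exists(candidate)) return candidate; // first match wins
    }
    if (dir === "/") return null; // reached the filesystem root without a hit
    dir = dir.slice(0, dir.lastIndexOf("/")) || "/"; // step up one directory
  }
}
```

For example, starting at `/repo/packages/app` with a config at the repo root, the walk checks `app`, then `packages`, then finds `/repo/promptscore.config.yaml`.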