AI analysis context
A short paragraph that tells the AI about your stack, critical paths, and known quirks — the single biggest lever on diagnosis quality
The AI analysis context is a free-form paragraph (up to 2,000 characters) that's prepended to every analysis the AI runs against this project. It tells the model what kind of system it's looking at so the diagnosis is grounded in your stack, not generic.
This is the single highest-leverage configuration choice for diagnosis quality. A good context paragraph routinely turns a vague "server errors increased" result into "Kafka consumer lag in the notifications service caused by retry storm after the 9:42am deploy".
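Conceptually, the context works like a preamble: it is read by the model before any telemetry. A minimal sketch of that shape, in TypeScript to match the example stack below — the function name, prompt wording, and limit handling here are illustrative assumptions, not the product's actual implementation:

```typescript
// Hypothetical sketch: how a per-project context paragraph might be
// combined with analysis input. Names and prompt shape are illustrative.
const MAX_CONTEXT_CHARS = 2000; // the documented hard limit

function buildAnalysisPrompt(context: string, logExcerpt: string): string {
  if (context.length > MAX_CONTEXT_CHARS) {
    throw new Error(`context exceeds ${MAX_CONTEXT_CHARS} characters`);
  }
  // The context is prepended, so the model reads it before any log data.
  return [
    "Project context (provided by the team):",
    context.trim(),
    "",
    "Analyze the following telemetry:",
    logExcerpt,
  ].join("\n");
}

const prompt = buildAnalysisPrompt(
  "Stack: Node.js 20, Postgres 16. Critical path: /checkout.",
  'ERROR duplicate key value violates unique constraint "orders_pkey"'
);
```

The point of the shape: because the context comes first, every stack trace and error message is interpreted through it, which is why a specific paragraph changes diagnosis quality so much.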
Where to set it
Project → General → AI analysis context. Edit, save, done. The next analysis uses the new context.
What to include
Tech stack
Languages, frameworks, key infrastructure. Helps interpret error messages and stack traces correctly.
Critical paths
Which services or flows are revenue-critical. Name them so the AI flags issues on those paths more prominently in its diagnoses.
Known quirks
Recurring transient issues that aren't bugs. Prevents false-positive root-cause guesses.
Architecture
High-level shape — monolith vs microservices, message broker, primary database. Provides context for service-to-service issues.
Deploy cadence
Continuous, weekly, monthly. Helps connect incidents to deploy timing.
Example
Stack: Node.js 20 (TypeScript), Postgres 16, Redis 7, Kafka, deployed on
AWS ECS via GitHub Actions. Continuous deploys on weekday business hours.
Critical paths: /checkout (order placement, calls Stripe, writes orders +
order_items), /webhook/stripe (revenue-critical, must respond < 5s).
Architecture: monorepo with three services — api-gateway (fronts everything),
checkout-service (handles orders), notifications-service (Kafka consumer
for emails + SMS).
Known quirks:
- notifications-service occasionally lags 30–60s during marketing email blasts —
this is normal, not a bug
- /webhook/stripe is rate-limited to 100 RPS per IP, expect 429s during bursts
- our checkout flow uses optimistic locking; "duplicate key" errors on order
inserts are a retried success path, not a true failure
Best practices
Be specific about your stack
"Node.js + Postgres" is OK. "Node.js 20, Postgres 16, Drizzle ORM, Fastify framework" is better — the AI catches version-specific issues.
Document quirks the AI couldn't infer
Anything that looks like an error in logs but is actually expected behavior. Without these, the AI may flag normal behavior as a root cause.
Keep it under 1,500 characters
The hard limit is 2,000, but tight context outperforms padded context. Cut adjectives and explanation; keep facts.
Update it after major changes
New service? New deploy pipeline? New known issue? Edit the context. Otherwise the AI keeps reasoning about a system that no longer exists.
Anti-patterns
Don't paste your README. Marketing copy and onboarding text don't help the AI. Stick to operational facts: stack, paths, quirks.
Don't include credentials, internal hostnames, or PII. The context is sent to the AI on every analysis. Anything you wouldn't put in a support ticket shouldn't be in here.
Don't restate what the logs already show. The AI sees the logs directly. Telling it "we have errors sometimes" wastes space without adding signal.
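A rough pre-save scan can catch the most obvious secrets before the context is stored. The patterns below are deliberately loose illustrations, assuming AWS-style access keys and a `.internal` hostname convention; adapt them to your own secret and hostname formats:

```typescript
// Illustrative scan for content that should never be in the context:
// obvious credentials and internal hostnames. Patterns are examples only.
const RISK_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["bearer token", /Bearer\s+[A-Za-z0-9\-_.]{20,}/],
  ["password assignment", /password\s*[:=]\s*\S+/i],
  ["internal hostname", /\b[\w-]+\.internal\b/i],
];

function flagRiskyContent(context: string): string[] {
  return RISK_PATTERNS
    .filter(([, pattern]) => pattern.test(context))
    .map(([label]) => label);
}
```

A scan like this is a safety net, not a substitute for judgment: anything you wouldn't put in a support ticket still shouldn't go in the context, flagged or not.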
When context isn't helping
If diagnoses still feel off-base after a strong context paragraph:
| Symptom | Try |
|---|---|
| Generic root causes | Add more architecture detail (which services talk to which) |
| Wrong service blamed | Document service-to-service dependencies more explicitly |
| Missing the deploy correlation | Add deploy cadence and tooling to context |
| Recurring false positives | Add the symptom to the quirks list |
