Reading the result
Walking through the structured diagnosis — summary, severity, root cause, services, timeline, code correlation, and suggested actions
Every analysis returns the same shape. Once you're familiar with it, scanning a result takes 10–20 seconds.
Top-of-page: summary, severity, confidence
The first thing you see in the dashboard or chat card:
| Field | What it tells you |
|---|---|
| Summary | One paragraph. What happened, in plain language. Read this first. |
| Severity | critical / high / medium / low / info. Drives notification routing — see Severity. |
| Confidence | 0.0–1.0. How sure the AI is. Low confidence is a hint to verify; high confidence is enough to act on. |
Low confidence doesn't mean the diagnosis is wrong — it usually means there were multiple plausible causes and the AI couldn't pick a clear winner. Read the timeline and root cause for context.
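If you consume results programmatically, the header reduces to a small shape. This is a sketch only — the interface name and exact schema are assumptions, but the three fields and their values match the table above:

```ts
// Illustrative sketch; not the published schema.
interface AnalysisHeader {
  summary: string;    // one paragraph, plain language: read this first
  severity: "critical" | "high" | "medium" | "low" | "info";
  confidence: number; // 0.0–1.0; low = verify, high = enough to act on
}
```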
Root cause
A structured guess at what actually broke:
| Field | Example |
|---|---|
| description | "Connection pool exhaustion in checkout-api caused by a leaked transaction in processPayment" |
| service | checkout-api |
| errorType | ConnectionPoolExhausted |
| category | database, network, timeout, memory, authentication, rate_limiting, etc. |
The category is one of: authentication, authorization, database, network, timeout, rate_limiting, validation, null_reference, configuration, dependency, memory, disk, unknown.
Useful for filtering analyses by category in the activity log.
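Sketched as a type, the root-cause block looks roughly like this. The field names come from the table above; treating them all as strings is an assumption:

```ts
// Sketch based on the documented fields; not the exact published schema.
interface RootCause {
  description: string; // plain-language cause, e.g. the leaked-transaction example above
  service: string;     // e.g. "checkout-api"
  errorType: string;   // e.g. "ConnectionPoolExhausted"
  category:
    | "authentication" | "authorization" | "database" | "network"
    | "timeout" | "rate_limiting" | "validation" | "null_reference"
    | "configuration" | "dependency" | "memory" | "disk" | "unknown";
}
```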
Affected services
A list of every service that showed errors in the time window, with a role for each:
| Role | Meaning |
|---|---|
origin | Where the problem started — usually maps to the root-cause service |
propagator | Mid-chain — the problem passed through this service |
affected | End of the chain — this service saw symptoms but isn't the cause |
Each entry includes:
- Service name
- Error count in the window
- First seen / last seen timestamps
Reading this top-down tells you the blast radius. "Origin: payments-service. Propagators: order-service, fulfillment-service. Affected: notifications-service." That's a clear story.
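One entry might look like the sketch below. Only the role values are documented above; the key names for the service, count, and timestamps are assumptions:

```ts
// One affected-services entry (key names assumed for illustration).
interface AffectedService {
  service: string;                             // service name
  role: "origin" | "propagator" | "affected";  // position in the chain
  errorCount: number;                          // errors in the window (name assumed)
  firstSeen: string;                           // ISO timestamp (name assumed)
  lastSeen: string;                            // ISO timestamp (name assumed)
}
```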
Timeline
A reconstructed sequence of the relevant log events:
| Field | Notes |
|---|---|
| Timestamp | When the line appeared |
| Service | Which service emitted it |
| Level | error / warn / info |
| Message | The log line |
| isRootCause | Boolean — flagged when the line is most directly tied to the diagnosed root cause |
The timeline shows the most relevant log events for this incident, in chronological order — not every error in the window.
The line(s) flagged isRootCause are usually the most direct evidence of what broke. Click any line to see the raw log entry it was extracted from.
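Putting the table together, one timeline event looks roughly like this sketch. isRootCause is the documented field name; the other key names are assumed from the table labels:

```ts
// Sketch of one timeline event (most key names assumed).
interface TimelineEvent {
  timestamp: string;    // when the line appeared (name assumed)
  service: string;      // which service emitted it (name assumed)
  level: "error" | "warn" | "info";
  message: string;      // the log line (name assumed)
  isRootCause: boolean; // true on the line(s) most tied to the root cause
}
```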
Suggested actions
A prioritized list of what to do next:
| Field | Notes |
|---|---|
| priority | 1 = do this first |
| action | Short imperative — "Increase the database connection pool ceiling" |
| rationale | Why this action — "Pool size 10 is below the steady-state load of 14" |
| type | fix / investigate / mitigate / escalate |
Action types help triage:
- fix — actionable code or config change
- investigate — needs more digging before acting
- mitigate — short-term workaround while you fix the underlying issue
- escalate — this is bigger than your team; loop someone else in
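As a sketch, an action entry and a hypothetical triage one-liner (the suggestedActions property name is a guess, not a documented field):

```ts
// Sketch of one suggested action, per the table above.
interface SuggestedAction {
  priority: number;  // 1 = do this first
  action: string;    // short imperative
  rationale: string; // why this action
  type: "fix" | "investigate" | "mitigate" | "escalate";
}

// e.g. grab the highest-priority concrete fix
// (assumes the list lives on `analysis.suggestedActions`):
declare const analysis: { suggestedActions: SuggestedAction[] };
const firstFix = analysis.suggestedActions
  .filter((a) => a.type === "fix")
  .sort((a, b) => a.priority - b.priority)[0];
```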
Code correlation
Available only when a code repo is connected to the project. When present:
| Field | Notes |
|---|---|
| suspectedCode.repo | Repository name |
| suspectedCode.filePath | File the AI thinks is responsible |
| suspectedCode.functionName | Function inside that file |
| suspectedCode.lineRange | Start and end line |
| suspectedCode.snippet | The actual code lines |
| causalPR | (Optional) The pull request the AI thinks introduced the issue, with author, merge time, and explanation |
| recentCommits | Recent commits to the file, in case the PR pointer is wrong |
If causalPR is present and confidence is high, you have a likely direct answer to "who shipped this?" — clickable through to the PR.
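Roughly, as a type sketch: the nested suspectedCode key names are from the table, while the shapes of lineRange, causalPR, and recentCommits are assumptions made for illustration:

```ts
// Sketch of the code-correlation block (nested shapes assumed).
interface CodeCorrelation {
  suspectedCode: {
    repo: string;
    filePath: string;
    functionName: string;
    lineRange: { start: number; end: number }; // shape assumed
    snippet: string;                           // the actual code lines
  };
  causalPR?: {          // optional; inner field names are assumptions
    url: string;
    author: string;
    mergedAt: string;
    explanation: string;
  };
  recentCommits: string[]; // recent commits to the file (shape assumed)
}
```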
Bottom-of-page metadata
Useful for sanity-checking the diagnosis:
| Field | What it tells you |
|---|---|
logsScanned | How many log lines the analysis considered. Very low (< 10) suggests your time window or filters were too narrow. |
timeWindow | The exact window that was searched, in case the trigger normalized it |
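A minimal programmatic guard along the same lines — the analysis variable and the timeWindow shape are assumptions; the threshold comes from the note above:

```ts
// Under ~10 scanned lines, the search was probably too narrow.
declare const analysis: {
  logsScanned: number;
  timeWindow: { from: string; to: string }; // shape assumed
};

if (analysis.logsScanned < 10) {
  console.warn(
    `Only ${analysis.logsScanned} log lines scanned between ` +
      `${analysis.timeWindow.from} and ${analysis.timeWindow.to}; ` +
      "widen the time window or loosen the filters and re-run."
  );
}
```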
Status field
The status field on an analysis evolves:
- pending — queued, hasn't started yet
- processing — running
- complete — full result available
- partial — diagnosis returned but with missing pieces (e.g., one log source unreachable). The diagnosis is still usable; the partialReasons field explains what was missing
- failed — analysis errored. The error message is in the analysis detail page
The dashboard updates the analysis view live while it's pending or processing.
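For code that consumes analyses, the five documented states reduce to a small union. This sketch shows one way to branch on them; the function names are illustrative:

```ts
// The five documented states as a union type.
type AnalysisStatus = "pending" | "processing" | "complete" | "partial" | "failed";

// Both complete and partial analyses carry a usable diagnosis.
function hasUsableDiagnosis(status: AnalysisStatus): boolean {
  return status === "complete" || status === "partial";
}

// The dashboard keeps updating while the analysis is in these states.
function isInFlight(status: AnalysisStatus): boolean {
  return status === "pending" || status === "processing";
}
```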
