SigSentry

Reading the result

Walking through the structured diagnosis — summary, severity, root cause, services, timeline, code correlation, and suggested actions

Every analysis returns the same shape. Once you're familiar with it, scanning a result takes 10–20 seconds.

Top-of-page: summary, severity, confidence

The first thing you see in the dashboard or chat card:

Field | What it tells you
Summary | One paragraph. What happened, in plain language. Read this first.
Severity | critical / high / medium / low / info. Drives notification routing — see Severity.
Confidence | 0.0–1.0. How sure the AI is. Low confidence is a hint to verify; high confidence is enough to act on.

Low confidence isn't a bad diagnosis — it usually means there were multiple plausible causes and the AI couldn't pick a clear winner. Read the timeline and root cause for context.
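If you read analyses through the API rather than the dashboard, the top-of-page fields can be pictured as a small TypeScript shape. This is a sketch only: the field names come from this page, but the exact JSON layout and the type names are assumptions.

```ts
// Hypothetical shape of the top-of-page fields; AnalysisHeader and Severity
// are illustrative names, not the documented schema.
type Severity = "critical" | "high" | "medium" | "low" | "info";

interface AnalysisHeader {
  summary: string;     // one paragraph, plain language
  severity: Severity;  // drives notification routing
  confidence: number;  // 0.0–1.0, how sure the AI is
}
```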

Root cause

A structured guess at what actually broke:

Field | Example
description | "Connection pool exhaustion in checkout-api caused by a leaked transaction in processPayment"
service | checkout-api
errorType | ConnectionPoolExhausted
category | database, network, timeout, memory, authentication, rate_limiting, etc.

The category is one of: authentication, authorization, database, network, timeout, rate_limiting, validation, null_reference, configuration, dependency, memory, disk, unknown.

Useful for filtering analyses by category in the activity log.
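As a sketch, assuming the field names above map directly onto JSON keys (the type names and the shape of the surrounding analysis object are guesses), the root cause block and a category filter could look like this:

```ts
// The category values come from this page; everything else is illustrative.
type RootCauseCategory =
  | "authentication" | "authorization" | "database" | "network"
  | "timeout" | "rate_limiting" | "validation" | "null_reference"
  | "configuration" | "dependency" | "memory" | "disk" | "unknown";

interface RootCause {
  description: string;          // plain-language guess at what broke
  service: string;              // e.g. "checkout-api"
  errorType: string;            // e.g. "ConnectionPoolExhausted"
  category: RootCauseCategory;
}

// Example: keep only database-related analyses from an activity log export.
const databaseIncidents = (analyses: { rootCause: RootCause }[]) =>
  analyses.filter(a => a.rootCause.category === "database");
```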

Affected services

A list of every service that showed errors in the time window, with a role for each:

Role | Meaning
origin | Where the problem started — usually maps to the root-cause service
propagator | Mid-chain — the problem passed through this service
affected | End of the chain — this service saw symptoms but isn't the cause

Each entry includes:

  • Service name
  • Error count in the window
  • First seen / last seen timestamps

Reading this top-down tells you the blast radius. "Origin: payments-service. Propagators: order-service, fulfillment-service. Affected: notifications-service." That's a clear story.
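To make the blast-radius reading concrete, here is a hedged sketch of the per-service entries. The keys `name`, `errorCount`, `firstSeen`, and `lastSeen` are assumed names for the three bullet points above, not the documented schema.

```ts
type ServiceRole = "origin" | "propagator" | "affected";

interface AffectedService {
  name: string;        // service name
  role: ServiceRole;
  errorCount: number;  // errors in the analysis window
  firstSeen: string;   // timestamp of the first error seen
  lastSeen: string;    // timestamp of the last error seen
}

// Group services by role to read the blast radius top-down.
function blastRadius(services: AffectedService[]): Record<ServiceRole, string[]> {
  const byRole: Record<ServiceRole, string[]> = { origin: [], propagator: [], affected: [] };
  for (const s of services) byRole[s.role].push(s.name);
  return byRole;
}
```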

Timeline

A reconstructed sequence of the relevant log events:

Field | Notes
Timestamp | When the line appeared
Service | Which service emitted it
Level | error / warn / info
Message | The log line
isRootCause | Boolean — flagged when the line is most directly tied to the diagnosed root cause

The timeline shows the most relevant log events for this incident, in chronological order — not every error in the window.

The line(s) flagged isRootCause are usually the most direct evidence of what broke. Click any line to see the raw log entry it was extracted from.
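A small sketch of pulling that evidence out programmatically. Only isRootCause appears on this page exactly as written; the lowercase keys for the other columns are assumptions.

```ts
interface TimelineEvent {
  timestamp: string;                 // when the line appeared
  service: string;                   // which service emitted it
  level: "error" | "warn" | "info";
  message: string;                   // the log line
  isRootCause: boolean;              // most directly tied to the diagnosed root cause
}

// Pull out the line(s) flagged as the most direct evidence of what broke.
const rootCauseLines = (timeline: TimelineEvent[]) =>
  timeline.filter(event => event.isRootCause);
```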

Suggested actions

A prioritized list of what to do next:

Field | Notes
priority | 1 = do this first
action | Short imperative — "Increase the database connection pool ceiling"
rationale | Why this action — "Pool size 10 is below the steady-state load of 14"
type | fix / investigate / mitigate / escalate

Action types help triage:

  • fix — actionable code or config change
  • investigate — needs more digging before acting
  • mitigate — short-term workaround while you fix the underlying issue
  • escalate — this is bigger than your team; loop in someone
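A sketch of how the action list might be triaged in code. The interface mirrors the fields above, but the exact schema and the grouping helper are assumptions for illustration.

```ts
type ActionType = "fix" | "investigate" | "mitigate" | "escalate";

interface SuggestedAction {
  priority: number;    // 1 = do this first
  action: string;      // short imperative
  rationale: string;   // why this action
  type: ActionType;
}

// Walk the list in priority order, surfacing escalations separately.
function triage(actions: SuggestedAction[]) {
  const ordered = [...actions].sort((a, b) => a.priority - b.priority);
  return {
    doNow: ordered.filter(a => a.type === "fix" || a.type === "mitigate"),
    digIn: ordered.filter(a => a.type === "investigate"),
    escalate: ordered.filter(a => a.type === "escalate"),
  };
}
```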

Code correlation

Available only when a code repo is connected to the project. When present:

Field | Notes
suspectedCode.repo | Repository name
suspectedCode.filePath | File the AI thinks is responsible
suspectedCode.functionName | Function inside that file
suspectedCode.lineRange | Start and end line
suspectedCode.snippet | The actual code lines
causalPR | (Optional) The pull request the AI thinks introduced the issue, with author, merge time, and explanation
recentCommits | Recent commits to the file, in case the PR pointer is wrong

If causalPR is present and confidence is high, you have a likely direct answer to "who shipped this?" — clickable through to the PR.
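Sketched as a type, assuming nested keys that mirror the dotted paths above. The causalPR key names for the link, author, merge time, and explanation are guesses, as is the shape of lineRange and recentCommits.

```ts
interface SuspectedCode {
  repo: string;                               // repository name
  filePath: string;                           // file the AI thinks is responsible
  functionName: string;                       // function inside that file
  lineRange: { start: number; end: number };  // start and end line (shape assumed)
  snippet: string;                            // the actual code lines
}

interface CausalPR {
  url: string;          // link through to the PR (key name assumed)
  author: string;
  mergedAt: string;     // merge time
  explanation: string;  // why the AI thinks this PR introduced the issue
}

interface CodeCorrelation {
  suspectedCode: SuspectedCode;
  causalPR?: CausalPR;       // optional
  recentCommits: unknown[];  // recent commits to the file (shape assumed)
}
```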

Bottom-of-page metadata

Useful for sanity-checking the diagnosis:

Field | What it tells you
logsScanned | How many log lines the analysis considered. Very low (< 10) suggests your time window or filters were too narrow.
timeWindow | The exact window that was searched, in case the trigger normalized it
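A quick sanity-check sketch. The < 10 threshold comes from the table above; the key layout of timeWindow and the helper name are assumptions.

```ts
interface AnalysisMetadata {
  logsScanned: number;                         // log lines the analysis considered
  timeWindow: { start: string; end: string };  // exact window searched (shape assumed)
}

// A very small sample suggests the window or filters were too narrow to trust.
const sampleTooSmall = (meta: AnalysisMetadata) => meta.logsScanned < 10;
```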

Status field

The status field on an analysis evolves:

  • pending — queued, hasn't started yet
  • processing — running
  • complete — full result available
  • partial — diagnosis returned but with missing pieces (e.g., one log source unreachable). Diagnosis is still usable; the partialReasons field explains what was missing
  • failed — the analysis errored. The error message is shown on the analysis detail page

The dashboard updates the analysis view live while it's pending or processing.
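If you watch for results yourself instead of relying on the dashboard, the lifecycle can be captured as a small union type. A sketch only, with isFinished as an illustrative helper rather than anything the product ships:

```ts
type AnalysisStatus = "pending" | "processing" | "complete" | "partial" | "failed";

// pending and processing are live states; the other three are terminal.
const isFinished = (status: AnalysisStatus) =>
  status === "complete" || status === "partial" || status === "failed";
```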