Analyses Deep-dive
Everything about running analyses, reading results, follow-ups, postmortems, and feedback
An analysis is a single diagnosis run — you give SigSentry a description and a time window, and you get back a structured result. This section covers how to trigger analyses effectively, how to read what comes back, and how to follow up.
Triggering an analysis
The five ways to start an analysis and what each is good for.
Reading the result
Understanding the summary, severity, root cause, timeline, and suggested actions.
Time windows
Picking the right window — the single highest-leverage choice for diagnosis quality.
Screenshots
When and how to attach a screenshot to give the AI more context.
Follow-up questions
Threaded Q&A on an existing analysis without re-running it.
Similar incidents
Pro+. Surface past analyses that look like the current one.
Postmortem generation
Generate a Markdown postmortem from any complete analysis.
Feedback
Rate diagnosis accuracy. Feedback drives your team's quality metrics over time.
The result schema, at a glance
Every analysis returns the same shape, regardless of how it was triggered:
| Field | What it is |
|---|---|
| Summary | One paragraph describing what happened |
| Severity | critical / high / medium / low / info |
| Confidence | 0.0–1.0 — how sure the AI is about its diagnosis |
| Root cause | Service, error type, category |
| Affected services | Services involved, each tagged with a role: origin, propagator, or affected |
| Timeline | Reconstructed sequence of relevant log events |
| Suggested actions | Prioritized list with type: fix, investigate, mitigate, escalate |
| Code correlation | (When repo connected) file, function, suspected PR |
| Logs scanned | How many log lines the analysis considered |
| Time window | The window that was searched |
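The shape above can be sketched as a plain dictionary with a light structural check. This is a hypothetical illustration of the fields in the table — the field names, nesting, and values are assumptions for the sketch, not SigSentry's actual payload format.

```python
# Hypothetical sketch of an analysis result, mirroring the table above.
# Field names and nesting are illustrative, not SigSentry's real API shape.

sample_result = {
    "summary": "Checkout latency spiked after a config rollout.",
    "severity": "high",          # critical / high / medium / low / info
    "confidence": 0.82,          # 0.0-1.0
    "root_cause": {
        "service": "checkout",
        "error_type": "timeout",
        "category": "configuration",
    },
    "affected_services": [
        {"name": "checkout", "role": "origin"},
        {"name": "payments", "role": "affected"},
    ],
    "timeline": [
        {"ts": "2024-05-01T12:03:00Z", "event": "error rate begins rising"},
    ],
    "suggested_actions": [
        {"type": "mitigate", "description": "Roll back the config change"},
    ],
    "logs_scanned": 148_212,
    "time_window": {
        "from": "2024-05-01T11:30:00Z",
        "to": "2024-05-01T12:30:00Z",
    },
}

SEVERITIES = {"critical", "high", "medium", "low", "info"}
ROLES = {"origin", "propagator", "affected"}
ACTION_TYPES = {"fix", "investigate", "mitigate", "escalate"}

def validate(result: dict) -> bool:
    """Check the enum fields and confidence range from the schema table."""
    return (
        result["severity"] in SEVERITIES
        and 0.0 <= result["confidence"] <= 1.0
        and all(s["role"] in ROLES for s in result["affected_services"])
        and all(a["type"] in ACTION_TYPES for a in result["suggested_actions"])
    )

print(validate(sample_result))  # True for this sample
```

Because every trigger path returns this same shape, a check like this can live in one place regardless of how the analysis was started.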
The Reading the result page walks through each field in detail.
