# Quick start

A 5-minute walkthrough from a fresh SigSentry account to a diagnosed incident.
This walkthrough takes about five minutes and ends with a real diagnosis on your screen. We'll use AWS CloudWatch as the example log source — substitute Datadog, Loki, or any other supported provider if you prefer.
## Sign up
Visit dashboard.sigsentry.com/signup and create a new organization. For a detailed walkthrough, see Sign up.
You'll land in the dashboard with a default project and an active 14-day Pro trial — full features, no card required.
## Connect CloudWatch
In the left sidebar, click Log Sources under your project, then Add log source. Fill in the form:
| Field | Value |
|---|---|
| Type | AWS CloudWatch |
| Name | Something descriptive, e.g. prod-cloudwatch |
| Access Key ID | AKIA... |
| Secret Access Key | ... |
| Region | e.g. us-east-1 |
| Log groups | One per line, or click Load log groups to autocomplete |
Click Test connection before saving. You should see a green Connected indicator plus a few sample log lines from the last five minutes.
Need a least-privilege IAM policy? See CloudWatch credentials.
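The linked page has the authoritative policy. As a rough sketch, read-only access over CloudWatch Logs is typically enough — something like the following (in production, scope Resource down to the specific log group ARNs rather than using a wildcard):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:GetLogEvents",
        "logs:FilterLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

logs:DescribeLogGroups is what powers the Load log groups autocomplete; the other three cover reading and searching log events.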
## Add a code repository (optional)
You can skip this step and the smoke test will still work, but with a repository connected the diagnosis can also identify the offending file and pull request.
Click Code Repos, then Connect repository. Pick GitHub, authorize the SigSentry GitHub App on the GitHub side, and select one or two repos. Leave service mappings blank for now — Service mappings covers tuning later.
## Run your first analysis
Click Analyses → New analysis and fill out the form:
- Description — anything specific to a recent issue, even vague. "Checkout flow returning 500s" or even "smoke test" works
- Time range — pick a window where you know logs exist. Use Last 1 hour if unsure
- Screenshot — optional; drag in an error screenshot if you have one
Click Run analysis.
The status moves through pending → processing → complete over about 30–60 seconds. Once complete, you'll see a structured diagnosis including a one-paragraph summary, severity, root cause, affected services, timeline, and suggested actions. If a repo was connected, there's a code correlation section pointing at the likely culprit file or PR.
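The dashboard flow above needs no code, but if you automate this later, the polling pattern is the same regardless of transport. A minimal sketch — note that the status values are from this walkthrough, while everything else (how you fetch an analysis) is left to an injected callable so nothing about SigSentry's API is assumed:

```python
import time

def wait_for_analysis(fetch_status, timeout_s=120, interval_s=5):
    """Poll until an analysis reaches a terminal status, then return it.

    fetch_status: a zero-argument callable returning the analysis as a dict
    with at least a "status" key: "pending" | "processing" | "complete" | "failed".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        analysis = fetch_status()
        if analysis["status"] in ("complete", "failed"):
            return analysis
        time.sleep(interval_s)  # analyses typically finish in 30-60 seconds
    raise TimeoutError("analysis did not finish within the timeout")
```

Because the fetcher is injected, the same helper works whether you wrap a future API client, a CLI, or a test double.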
Click Generate Postmortem to produce a Markdown writeup ready to drop into your incident management tool.
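To give a feel for the kind of Markdown a postmortem writeup contains, here is an illustrative renderer. The field names (title, severity, summary, root_cause, suggested_actions) are assumptions for this sketch; the real Generate Postmortem button produces its own layout:

```python
def render_postmortem(diagnosis: dict) -> str:
    """Render a diagnosis dict as a Markdown postmortem skeleton.

    Field names here are illustrative assumptions, not SigSentry's schema.
    """
    lines = [
        f"# Postmortem: {diagnosis['title']}",
        "",
        f"**Severity:** {diagnosis['severity']}",
        "",
        "## Summary",
        diagnosis["summary"],
        "",
        "## Root cause",
        diagnosis["root_cause"],
        "",
        "## Suggested actions",
    ]
    lines += [f"- {action}" for action in diagnosis.get("suggested_actions", [])]
    return "\n".join(lines)
```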
## Where to go next
- **Notification channels**: deliver diagnoses to Slack, Teams, Discord, Google Chat, webhook, or email.
- **Chat triggers**: run analyses from chat with a slash command.
- **Watchdog**: proactive monitoring that alerts on error spikes or pattern matches.
- **Core concepts**: a deeper grounding in tenants, projects, roles, and the data model.
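If you pick webhook delivery, you'll need something listening on your end. A minimal standard-library receiver looks like the sketch below; the payload fields it reads (severity, summary, affected_services) are assumptions for illustration — check the Notification channels docs for the authoritative schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_diagnosis(payload: dict) -> str:
    """One-line summary of an incoming diagnosis payload.

    Field names are illustrative assumptions, not SigSentry's actual schema.
    """
    severity = payload.get("severity", "unknown")
    summary = payload.get("summary", "(no summary)")
    services = ", ".join(payload.get("affected_services", [])) or "n/a"
    return f"[{severity.upper()}] {summary} (services: {services})"

class DiagnosisHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body and log a one-line summary.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print(summarize_diagnosis(json.loads(body)))
        self.send_response(204)  # acknowledge with no body
        self.end_headers()

# To run the receiver:
#   HTTPServer(("", 8080), DiagnosisHandler).serve_forever()
```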
