First-time setup checklist
The minimum configuration to run your first useful analysis, plus optional integrations that significantly improve diagnosis quality
After signing up, work through this list in order. The first two items are required for analyses to work. The rest are optional, but each one materially improves diagnosis quality.
Required steps
These are the bare minimum. Without them, analyses cannot run.
Connect at least one log source
Without a log source, an analysis has nothing to read. Pick the aggregator your team already uses and connect it. Each integration is two to five minutes of credential setup.
| Log source | Credentials |
|---|---|
| AWS CloudWatch | IAM access keys + log groups |
| Datadog | API key + application key |
| Grafana Loki | URL + optional bearer token |
| Splunk | Auth token + your indexes |
| Elastic / OpenSearch | Auth + the indices to query |
| GCP Cloud Logging | Service account credentials |
After saving any log source, click Test connection to confirm credentials and preview a few sample log lines.
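Before a live test, it can help to see what "complete credentials" means per source. The sketch below mirrors the table above; the field names are illustrative assumptions, not SigSentry's actual configuration schema:

```typescript
// Hypothetical precheck mirroring the credential table above.
// Field names are illustrative, not SigSentry's real config schema.
type LogSourceType =
  | "cloudwatch" | "datadog" | "loki" | "splunk" | "elastic" | "gcp";

const requiredFields: Record<LogSourceType, string[]> = {
  cloudwatch: ["accessKeyId", "secretAccessKey", "logGroups"],
  datadog: ["apiKey", "appKey"],
  loki: ["url"],            // bearer token is optional
  splunk: ["authToken", "indexes"],
  elastic: ["auth", "indices"],
  gcp: ["serviceAccountJson"],
};

// Report which required fields are missing or empty before
// attempting a live "Test connection" call.
function missingFields(
  type: LogSourceType,
  config: Record<string, unknown>,
): string[] {
  return requiredFields[type].filter(
    (f) => config[f] === undefined || config[f] === "",
  );
}
```

For example, `missingFields("datadog", { apiKey: "x" })` returns `["appKey"]`, matching the most common Datadog setup mistake: supplying the API key but not the application key.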
Verify your project name and slug
Every account starts with a default project. Rename it under
Project → General if you'd like. The slug becomes part of the URL and
chat commands, so pick something short and lowercase: prod, api, or
backend work well.
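If you are unsure whether a name makes a clean slug, the guidance above (short, lowercase, URL-safe) amounts to something like this illustrative helper, which is not part of SigSentry itself:

```typescript
// Illustrative only: normalize a project name into the kind of slug
// the guidance above recommends. Not part of SigSentry's API.
function toSlug(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to dashes
    .replace(/^-+|-+$/g, "")     // trim leading/trailing dashes
    .slice(0, 24);               // keep it short
}
```

So a project named "Prod API (US-East)" would slug to `prod-api-us-east`.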
Strongly recommended
You can run analyses without these, but the diagnoses will be noticeably shallower.
Connect a code repository
Without a repo, an analysis returns findings based on logs alone. With
one connected, the AI can actually read your source files and pull
request diffs — turning generic suggestions into specific ones like "the
regression appears to be in PR #482, which changed how expired auth
tokens are handled in services/billing/charge.ts".
| Provider | Connection method |
|---|---|
| GitHub | Install the SigSentry GitHub App, or use a personal access token |
| GitLab | OAuth (recommended) or a personal access token |
| Bitbucket | OAuth (recommended) or an app password |
You'll also configure service mappings — which service name in your logs maps to which repo path.
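Conceptually, a service mapping ties a log-side name to a repo and path. The shape below is purely illustrative (the real format lives in the integration settings), but it shows the idea:

```yaml
# Illustrative shape only — check the repository integration
# settings for SigSentry's actual mapping format.
serviceMappings:
  - serviceName: billing        # service name as it appears in your logs
    repository: acme/platform   # hypothetical repo
    path: services/billing/
  - serviceName: checkout-api
    repository: acme/checkout
    path: src/
```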
Add custom AI context for the project
Under Project → General, you'll find an AI Analysis Context textarea. A short paragraph here is the single biggest lever on diagnosis accuracy — it tells the AI about your stack, critical paths, and known quirks before it ever reads a log line.
A working example:
Next.js 14 frontend on Vercel, Node 20 API on AWS ECS Fargate,
PostgreSQL via Neon, Redis via Upstash. Auth via Clerk.
Critical paths: /api/checkout, /api/webhooks/stripe.
Known quirk: Stripe occasionally returns 429 with rate-limit headers —
treat as warning, not error.
Deploys via GitHub Actions on merge to main, ~10 deploys/day.

The textarea is capped at 2,000 characters. See AI analysis context for best practices and patterns that work well.
Optional but high-value
Adding any of these changes how SigSentry shows up in your team's day-to-day workflow.
A notification channel
So that high-severity diagnoses are delivered automatically, and so your team learns about analyses they didn't personally trigger.
Each channel has a severity threshold, so you can route only what matters to noisy shared channels. See Notification Channels.
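The threshold behaves like an "at or above this level" filter. A minimal sketch of that logic, assuming a four-level severity scale and a channel shape that are not SigSentry's actual data model:

```typescript
// Sketch of threshold routing as described above. The severity names
// and channel shape are assumptions, not SigSentry's real model.
const severities = ["low", "medium", "high", "critical"] as const;
type Severity = (typeof severities)[number];

interface Channel {
  name: string;
  threshold: Severity; // deliver only diagnoses at or above this level
}

function shouldNotify(channel: Channel, diagnosis: Severity): boolean {
  return severities.indexOf(diagnosis) >= severities.indexOf(channel.threshold);
}
```

Under this scheme, a shared channel with a `high` threshold receives `high` and `critical` diagnoses but stays quiet for `low` and `medium` ones.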
A chat trigger
So your team can run analyses without leaving Slack, Teams, Discord, or
Google Chat. Once installed, anyone in an authorized channel can run
/sigsentry analyze <description> last 30 minutes and the diagnosis
posts back as a threaded reply.
See Chat Triggers.
A watchdog rule
For proactive monitoring — fire alerts when error counts or rates cross a threshold, optionally auto-running an analysis. See Watchdog.
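The core of a watchdog rule is a windowed threshold check with an optional follow-up action. A hypothetical sketch (names and shape are assumptions, not SigSentry's rule schema):

```typescript
// Hypothetical watchdog evaluation: fire when the error count in a
// window crosses a threshold, optionally triggering an analysis.
// Field names are illustrative, not SigSentry's actual rule schema.
interface WatchdogRule {
  maxErrors: number;    // error-count threshold for the window
  windowMinutes: number;
  autoAnalyze: boolean; // also kick off a full analysis on fire?
}

function evaluate(rule: WatchdogRule, errorsInWindow: number) {
  const fired = errorsInWindow > rule.maxErrors;
  return { fired, runAnalysis: fired && rule.autoAnalyze };
}
```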
A support desk integration
If your support team uses Zendesk, Freshdesk, or Intercom, every inbound ticket gets a severity classification automatically — with the option to auto-trigger a full analysis on high-severity tickets. See Support Desk.
Invite the rest of your team
Under Team, invite your engineering, on-call, and support folks. Match the role to the responsibility:
| Role | Best for |
|---|---|
| Admin | Engineering managers, tech leads |
| Member | Engineers (most common choice) |
| Viewer | Stakeholders who only need to read |
Smoke-testing what you've set up
Once you've connected at least a log source, run a test analysis to confirm the pipeline. Navigate to your project, click Analyses → New analysis, give it any description, set the time range to Last 1 hour, and submit.
If you get a valid result back, the pipeline is working.
"No logs found" usually means the time window contains no log activity, or the log source is configured for the wrong environment. Try widening the window. If it persists, see Troubleshooting.
When you're ready to walk through it end to end, the Quick start takes about five minutes.
