Troubleshooting & FAQ
Common issues across SigSentry — connection failures, OAuth errors, webhook signatures, quota, analysis failures
A consolidated runbook for the issues that come up most often. Use the shortcuts below to jump to a topic, or scan the symptom tables — most problems have a one-line fix.
- Connection failures: 401 / 403 / 404 / timeouts when SigSentry talks to your log source
- OAuth callback errors: redirect URI, scopes, and workspace policies during channel install
- Webhook signatures: verifying outgoing webhooks on your receiver
- No logs found: empty results when you know logs exist
- Quota exceeded: what to do when you hit your monthly analysis limit
- Analysis failing: status `failed`, unreachable sources, log-source rate limits
- Plan downgrade: what happens to your data when you move to a smaller plan
- Trial ended: recovering access and data after the trial expires
Connection failures
If a log source test fails, or analyses come back with "log source unreachable", start here.
| Symptom | Likely cause | Fix |
|---|---|---|
| 401 Unauthorized | Credentials wrong or revoked | Re-paste the API key / access key in the log source. For AWS, confirm the access key is active in IAM. |
| 403 Forbidden | Credentials valid but missing permission | Check that the IAM policy / API role grants the read scopes the adapter needs. The setup walkthrough for each log source lists the required permissions. |
| 404 Not Found | Wrong host or region | Confirm the host (Datadog site, Elastic URL, Loki endpoint) and region (CloudWatch, GCP). A log source configured for us-east-1 cannot read eu-west-1 log groups. |
| Empty results despite log activity | Query, label, or selector doesn't match | Widen the test window, double-check log group / index / label values, and confirm the service is logging during the window you're testing. |
| Timeouts during test | Log source unreachable from SigSentry | Public-cloud log sources (Datadog, CloudWatch, GCP, Splunk Cloud, Elastic Cloud) work out of the box. Self-hosted log infra needs to allow SigSentry's outbound IPs through your firewall. |
For self-hosted Loki, Elastic, or Splunk, make sure the endpoint is reachable from the public internet and your IP allowlist permits SigSentry's outbound addresses. Contact support for the current IP range.
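To check reachability yourself, hit the endpoint's health route from a machine outside your network (a cloud shell works well). A minimal sketch, assuming a self-hosted Loki at a placeholder URL — Elastic and Splunk expose their own health endpoints:

```typescript
// Run this from outside your VPN; if it times out here, SigSentry can't reach it either.
const LOKI_URL = "https://loki.example.com"; // placeholder: your self-hosted endpoint

try {
  const res = await fetch(`${LOKI_URL}/ready`, { signal: AbortSignal.timeout(10_000) });
  console.log(res.status, await res.text()); // Loki answers 200 "ready" when healthy
} catch (err) {
  console.error("unreachable:", err); // DNS failure, refused connection, or firewall timeout
}
```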
Don't paste credentials into screenshots, support tickets, or chat messages. If you need to share a config, redact the access key and secret. Rotate any key that's been exposed.
OAuth callback errors
When you install Slack, Microsoft Teams, Google Chat, or a Git provider, you're sent through that platform's OAuth flow. Errors at the callback step almost always come down to one of these.
| Symptom | Likely cause | Fix |
|---|---|---|
| "Redirect URI mismatch" | The OAuth app's allowed redirect URIs don't include SigSentry's callback | Add https://dashboard.sigsentry.com/oauth/callback (and any region-specific variant the install screen shows) to the OAuth app's redirect list. This is by far the most common cause. |
| "Invalid scope" or missing data after install | One or more required scopes weren't granted during install | Reinstall and approve every requested scope. The setup page for each channel lists exactly which scopes are required. |
| "App blocked by workspace admin" | Your Slack / Teams / Google Workspace blocks third-party apps | A workspace admin needs to approve SigSentry. Slack: Settings → Manage apps → Approved apps. Teams: Admin Center → Manage apps. Google Workspace: Apps → App access control. |
| Install completes but no test message arrives | Bot wasn't added to the target channel | Invite the SigSentry bot to the channel you're routing to, then resend. |
Webhook signature verification failures
These tips are for receivers you build to consume SigSentry's outgoing webhooks (the Generic Webhook channel).
| Symptom | Likely cause | Fix |
|---|---|---|
| Every request fails verification | Wrong secret | Double-check the HMAC secret saved in SigSentry matches the one your receiver uses. Rotating one without the other breaks verification immediately. |
| Verification works locally, fails in production | Body modified before verification | Some frameworks (Express with express.json(), Next.js route handlers, Lambda with API Gateway) parse the JSON body before exposing it. Sign and verify the raw bytes, not the re-serialized JSON. |
| Random failures, mostly correct | Clock skew (if your scheme uses a timestamp) | If your verifier rejects requests outside a small time window, ensure both sides run NTP. SigSentry's standard signature is body-only (no timestamp) — see the webhook channel doc for the exact algorithm. |
| Signature mismatch only on some payloads | Charset or whitespace transformation | Trailing newlines, content-type rewrites, or proxy decompression can change body bytes. Log the byte length your verifier sees and compare with Content-Length. |
Verify HMAC signatures in constant time (`crypto.timingSafeEqual` in Node, `hmac.compare_digest` in Python). String equality leaks timing information that can be used to forge signatures.
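If you're building the receiver in Node/Express, a minimal sketch looks like this. The header name (`X-SigSentry-Signature`) and hex-encoded HMAC-SHA256 are assumptions for illustration — the webhook channel doc has the exact header and algorithm:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";
import express from "express";

const app = express();
const SECRET = process.env.SIGSENTRY_WEBHOOK_SECRET ?? ""; // same secret saved in SigSentry

// Use express.raw() on this route so req.body is the untouched byte buffer;
// express.json() would parse and re-serialize it, breaking verification.
app.post("/webhooks/sigsentry", express.raw({ type: "application/json" }), (req, res) => {
  // Assumed header name and hex-encoded HMAC-SHA256 -- confirm against the channel doc.
  const received = Buffer.from(req.get("X-SigSentry-Signature") ?? "", "hex");
  const expected = createHmac("sha256", SECRET).update(req.body).digest();

  // Constant-time compare; timingSafeEqual throws if lengths differ, so check that first.
  const valid = received.length === expected.length && timingSafeEqual(received, expected);
  if (!valid) {
    res.status(401).send("signature verification failed");
    return;
  }

  const event = JSON.parse(req.body.toString("utf8"));
  // ...handle the event...
  res.sendStatus(200);
});

app.listen(3000);
```

The two details that matter are the raw body and the constant-time compare; everything else is ordinary routing.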
"No logs found" — diagnostic checklist
The analysis ran, but came back saying no logs were found in your window. Walk this list top to bottom — most cases are time-window or naming issues, not a real problem with the connection.
Widen the time window
A 5-minute window catches very little. If you're testing or backfilling, try a 1-hour window first to confirm logs are flowing, then narrow down.
Check the timezone
Time windows are interpreted in your tenant timezone (shown on the analysis form). A window that ends "now" but appears in the future relative to your log source — or shifted by several hours — is almost always a timezone mismatch. Confirm both sides are using the same clock reference.
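If you suspect an offset, a quick sanity check is to print the same window in UTC and in the tenant timezone shown on the form (the timezone below is just an example):

```typescript
// Print "the last hour" both ways to see how far apart the two clocks are.
const end = new Date();
const start = new Date(end.getTime() - 60 * 60 * 1000);

const inZone = (d: Date, tz: string) =>
  d.toLocaleString("en-GB", { timeZone: tz, hour12: false });

console.log("UTC:          ", start.toISOString(), "->", end.toISOString());
console.log("Europe/Berlin:", inZone(start, "Europe/Berlin"), "->", inZone(end, "Europe/Berlin"));
// If the window you entered matches the second line but your log source indexes in UTC,
// the query is effectively shifted by the offset (1-2 hours in this example).
```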
Confirm the log source filter matches
CloudWatch log groups, Datadog queries, Elastic indices, Loki labels, Splunk searches — each is a filter. If the filter doesn't include the service you care about, no logs come back. Open the log source in the dashboard and run Test connection to see what shape of data is returned.
Check the service name
The AI matches services by the name it sees in your logs. If your checkout service logs as `checkout-api` but you're asking about `checkoutService`, results will be empty. Use the project's AI analysis context to spell out service naming so the AI knows what to look for.
Confirm logs are actually flowing
Push a test log line directly to your log source and confirm it appears in that source's own UI within the same window. If it doesn't, the problem is upstream of SigSentry.
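For a self-hosted Loki source, for example, you can push a line straight at its HTTP push API and then look for it in Loki's own UI or Grafana Explore. A sketch with a placeholder URL and example labels:

```typescript
// Push a single test line to a (placeholder) self-hosted Loki endpoint.
const LOKI_URL = "https://loki.example.com"; // your Loki host

const payload = {
  streams: [
    {
      stream: { service: "checkout-api", source: "sigsentry-smoke-test" }, // example labels
      values: [[(BigInt(Date.now()) * 1_000_000n).toString(), "SigSentry connectivity test line"]],
    },
  ],
};

const res = await fetch(`${LOKI_URL}/loki/api/v1/push`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
console.log(res.status); // Loki responds 204 on success
```

If the line shows up in Loki but SigSentry still reports no logs, the problem is the query or label filter on the SigSentry side; if it doesn't show up at all, fix the ingestion pipeline first.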
Quota exceeded
You've hit your monthly analysis limit. You have three levers.
| Action | Where |
|---|---|
| Enable overage to keep running analyses past the cap | Billing → Overage |
| Upgrade to a higher plan | Billing → Upgrading |
| Reduce auto-analyze rule firing | Project → Watchdog → adjust thresholds, lookback windows, or disable noisy rules |
If a Watchdog rule is firing too often and eating quota, the fastest fix is to widen its lookback window or raise its threshold. Pattern rules with overly broad regexes are the most common offender — test with dry-run before re-enabling.
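As a rough illustration of how much difference the pattern makes (the log lines and regexes below are made up):

```typescript
const sampleLines = [
  "INFO  request completed in 84ms",
  "WARN  upstream timeout, will retry (non-fatal error)",
  "ERROR payment-service: charge failed: card_declined",
];

const broad = /error|fail/i;               // also catches the benign WARN line
const narrow = /^ERROR\s+payment-service/; // only the failure you actually want analyzed

console.log(sampleLines.filter((l) => broad.test(l)).length);  // 2 -> two rule firings
console.log(sampleLines.filter((l) => narrow.test(l)).length); // 1
```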
Why is my analysis failing?
When an analysis ends in status `failed` instead of `completed`, open its detail page first — the error reason is shown at the top. Common causes:
| Reason shown | What it means | Fix |
|---|---|---|
| `log_source_unreachable` | All log sources errored before any data was returned | Open each log source and run Test connection. Fix whichever fails. |
| `log_source_rate_limited` | Your log source provider throttled the query | Wait a few minutes and retry, or upgrade your log source plan if this happens often. CloudWatch and Datadog both have per-account query rate limits. |
| `no_logs_in_window` | Connection succeeded, but the window had zero matching logs | Use the no-logs checklist above. |
| `analysis_timeout` | The analysis took longer than the maximum allowed | Narrow the time window or reduce the number of log sources queried per analysis. |
| `internal_error` | Something on our side went wrong | Retry. If it persists, contact support with the analysis ID. |
If every analysis on a project is failing the same way, the issue is almost always one specific log source — disable it temporarily to confirm, then fix that source.
Plan downgrade — what happens to data?
When you move to a smaller plan, the change takes effect at the end of your current billing period. See Downgrading for the full flow.
| What | What happens |
|---|---|
| Existing analyses | Retained for the new plan's retention window. Older analyses past that window become inaccessible (not deleted immediately, but not visible). |
| Watchdog rules | If you have more rules than the new plan's per-project limit allows, the excess rules are auto-disabled. Edit which ones stay active before the downgrade takes effect. |
| Channels and log sources | Unaffected — these aren't quota-gated. |
| In-flight analyses | Complete normally on the old plan. |
Trial ended — recovering access
If your free trial ran out and the dashboard is now read-only, you have two paths. See Free trial for details.
- Upgrade: pick a paid plan from Billing → Plans and complete checkout. Full access is restored immediately.
- Need more time: contact support before the 30-day grace window ends. Your data (analyses, projects, rules, channels) is preserved for 30 days after trial expiry. After that, it's permanently removed.
Still stuck?
If none of the above resolves your issue, contact support with:
- Your tenant slug and project slug
- The analysis ID (if it's analysis-related)
- The exact error message shown
- A rough timestamp of when it started
The more specific you are, the faster we can diagnose.
