# Log Sources

Connect your existing log aggregator so SigSentry can read logs on demand during analyses.
A log source is a connection to wherever your logs already live. SigSentry doesn't ingest your logs continuously — it queries the source on demand, scoped to the time window of an analysis, and discards the content once the diagnosis is generated.
## Supported platforms
- **AWS CloudWatch**: IAM access keys, log groups, query syntax.
- **Datadog**: API key plus an Application key; optional EU site.
- **Grafana Loki**: URL, optional bearer token, multi-tenant org IDs.
- **Splunk**: auth token plus indexes.
- **Elastic / OpenSearch**: auth plus the indices to query.
- **GCP Cloud Logging**: service account JSON, log filter patterns.
## What's the same across all platforms
Each platform's setup page covers credentials and quirks particular to that adapter, but a few things are consistent everywhere.
Each log source belongs to one project. You can have several on the same project — CloudWatch for the API and Datadog for the gateway, for example — and they're queried in parallel during an analysis. The plan-level `logSourcesPerProject` quota applies.
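The parallel fan-out during an analysis can be sketched as follows. The fetcher functions and event shapes below are illustrative stand-ins, not SigSentry's actual adapters:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-platform fetchers; each returns events for the analysis window.
def fetch_cloudwatch(window):
    return [{"message": "GET /api 200", "source": "cloudwatch"}]

def fetch_datadog(window):
    return [{"message": "gateway timeout", "source": "datadog"}]

def fetch_all(window):
    """Query every active source for the same window in parallel,
    then flatten the results into one list for the diagnosis step."""
    fetchers = [fetch_cloudwatch, fetch_datadog]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: f(window), fetchers)
    return [event for batch in results for event in batch]

events = fetch_all(("2024-01-01T00:00:00Z", "2024-01-01T01:00:00Z"))
```

Because sources are independent, one slow or failing source doesn't block the others from returning data.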
Credentials are encrypted and only used when an analysis or test query runs.

Each log source has a list of sources inside it — log groups, indexes, indices, or label selectors, depending on the platform — and the dashboard lets you load these from your account to autocomplete.
Every source can be tested before saving. The Test connection button runs a lightweight query against a short, recent time window and shows you sample lines. If you can't get green there, no analysis will succeed either.
## Choosing the right time window
When you trigger an analysis, you supply a `timeStart` and `timeEnd`. The log fetcher queries each active source for events strictly inside that window — even if the platform's API supports broader queries, we cap to the window you asked for.
The time window must contain log activity. Pick a window where you know there were events. "Last 1 hour" is a safe default while you're getting started.
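A "last 1 hour" window is straightforward to compute. This is a minimal sketch, assuming the API accepts ISO 8601 UTC timestamps; the helper name is illustrative:

```python
from datetime import datetime, timedelta, timezone

def last_hour_window():
    """Return (timeStart, timeEnd) for the 'last 1 hour' default as ISO 8601 UTC."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)
    to_iso = lambda t: t.replace(microsecond=0).isoformat().replace("+00:00", "Z")
    return to_iso(start), to_iso(end)

time_start, time_end = last_hour_window()
```

With both bounds in the same fixed-width format, the start always sorts before the end, which makes the pair easy to validate before submitting an analysis.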
## What gets normalized
The log fetcher normalizes whatever the platform returns into a common shape:
| Field | Source |
|---|---|
| `timestamp` | The platform's timestamp field, parsed to an ISO date |
| `level` | Inferred from severity, status, or message prefix |
| `service` | Extracted from the log stream, source field, or labels |
| `message` | The actual log line |
| `metadata` | Platform-specific extras (host, container, attributes) |
Per-platform notes on how each field is mapped live in the individual setup pages.
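The mapping can be sketched for a single CloudWatch-style event. The raw field names (`timestamp_ms`, `log_stream`, and so on) are assumptions for illustration; real adapters differ per platform:

```python
from datetime import datetime, timezone

def normalize(raw):
    """Map a raw platform event into the common shape from the table above."""
    ts = datetime.fromtimestamp(raw["timestamp_ms"] / 1000, tz=timezone.utc)
    message = raw.get("message", "")
    # Infer level from a message prefix when no explicit severity is present.
    level = raw.get("severity")
    if level is None:
        for prefix in ("ERROR", "WARN", "INFO", "DEBUG"):
            if message.startswith(prefix):
                level = prefix
                break
        else:
            level = "INFO"
    consumed = ("timestamp_ms", "severity", "log_stream", "message")
    return {
        "timestamp": ts.isoformat().replace("+00:00", "Z"),
        "level": level,
        "service": raw.get("log_stream", "unknown"),
        "message": message,
        # Everything not mapped above lands in metadata.
        "metadata": {k: v for k, v in raw.items() if k not in consumed},
    }

event = normalize({"timestamp_ms": 1700000000000,
                   "message": "ERROR connection refused",
                   "log_stream": "api", "host": "ip-10-0-0-1"})
```

The point of the common shape is that the diagnosis step never needs to know which platform an event came from.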
## Common operations
Every log source supports the same CRUD operations from the dashboard:

- Create under Project → Log Sources
- Test connection inline or in the form
- Edit, with the option to update credentials separately
- Toggle active / inactive (inactive sources are skipped during analyses)
- Delete, with a confirmation
- Purge all under Danger Zone (admin only)
The same operations are available through the REST API when you want to manage sources programmatically.
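Programmatic management might look like the sketch below, using only the standard library. The base URL, endpoint paths, and payload fields are assumptions for illustration, not the documented API contract:

```python
import json
import urllib.request

BASE = "https://api.sigsentry.example/v1"  # hypothetical base URL

def build_request(method, path, token, body=None):
    """Assemble an authenticated request for the log-sources API.
    Calling urllib.request.urlopen(req) would actually send it."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Authorization", f"Bearer {token}")
    if data:
        req.add_header("Content-Type", "application/json")
    return req

# Create a Datadog source on a project, then toggle another source inactive.
create = build_request("POST", "/projects/proj_123/log-sources", "TOKEN",
                       {"platform": "datadog", "name": "gateway"})
toggle = build_request("PATCH", "/projects/proj_123/log-sources/src_456", "TOKEN",
                       {"active": False})
```

Toggling `active` rather than deleting keeps the credentials around for when you want the source back.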
## Troubleshooting
The Troubleshooting page covers the usual suspects. The most common ones:
| Symptom | Most likely cause |
|---|---|
| "Test connection failed: invalid credentials" | Typo in keys, expired token, wrong region |
| "Test connection succeeded but no sample logs" | Time window outside log activity |
| Analysis returns "no logs found" | Time window mismatch; multiple sources but only one returning data |
| Authentication errors after a while | Token rotation; renew via Edit |
