How DownForAI Works
Transparency about what we measure, how we measure it, and what we don't claim.
Who we are
DownForAI is an independent monitoring project for AI services, operated as a solo technical effort. We have no affiliation, sponsorship, or financial relationship with any AI provider listed on this site. We are not paid to report any particular status.
How we detect incidents
We run automated HTTP probes every 2 to 5 minutes against each monitored service from multiple geographically distributed locations. For each probe, we record (see the sketch after this list):
- HTTP status code (200, 429, 5xx, etc.)
- Response latency in milliseconds
- TLS handshake success / failure
- Response body signals (content-type, error patterns)
- Regional path / CDN edge that served the request
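To make the shape of that data concrete, here is a minimal sketch of what a single probe record might look like. The field names are illustrative, not our actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProbeRecord:
    """One probe result. All field names are illustrative, not our real schema."""
    service: str                 # monitored service identifier
    surface: str                 # e.g. "api", "web", "auth"
    region: str                  # probe location, e.g. "eu-west"
    status_code: Optional[int]   # HTTP status; None if the request never completed
    latency_ms: Optional[float]  # total response time; None on connection failure
    tls_ok: bool                 # TLS handshake succeeded
    body_signal: Optional[str]   # content-type / error-pattern heuristic result
    cdn_edge: Optional[str]      # regional path / CDN edge observed, if any
```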
We classify each service surface as Operational, Degraded, or Outage based on a weighted combination of probe results over a rolling time window. We also compute latency baselines (median and median absolute deviation, MAD) to detect statistical anomalies distinct from hard errors.
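As a rough illustration of the baseline check, here is how a median/MAD anomaly test could work. The window size, threshold multiplier, and floor value are illustrative assumptions, not our production values:

```python
import statistics

def is_latency_anomaly(window_ms: list[float], latest_ms: float, k: float = 5.0) -> bool:
    """Flag a latency sample that sits far outside the rolling baseline.

    Median and MAD are robust to the occasional slow probe, unlike
    mean / standard deviation. The multiplier k is an illustrative choice.
    """
    median = statistics.median(window_ms)
    mad = statistics.median(abs(x - median) for x in window_ms)
    # A MAD of 0 (perfectly flat history) would flag any change at all,
    # so fall back to a small absolute floor.
    spread = max(mad, 1.0)
    return abs(latest_ms - median) > k * spread
```

For example, with a recent window of [120, 130, 125, 140] ms, a 600 ms response is flagged as anomalous while a 150 ms response is not.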
How we classify incidents
When a service shows degraded or failed probes, we apply a diagnosis classifier with four possible outcomes (sketched in code after this list):
- Global outage — Multiple surfaces across multiple probe locations report failure. Concurrent community reports increase confidence.
- Partial incident — A single surface degraded (e.g. API fails while web works) or isolated regional failure.
- Local / client-side issue — All probes healthy, no significant volume of community reports. The issue is likely on the user's end (network, credentials, rate limits).
- Inconclusive — Insufficient signal. We prefer honesty over false positives.
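A simplified sketch of how such a classifier can be wired up; the inputs and thresholds here are assumptions for illustration, not our production logic:

```python
from enum import Enum

class Diagnosis(Enum):
    GLOBAL_OUTAGE = "global outage"
    PARTIAL_INCIDENT = "partial incident"
    LOCAL_ISSUE = "local / client-side issue"
    INCONCLUSIVE = "inconclusive"

def diagnose(failing_surfaces: int, failing_regions: int, community_reports: int) -> Diagnosis:
    """Simplified decision logic; the report threshold is illustrative."""
    if failing_surfaces == 0:
        # Probes are healthy. Without a significant report volume, the
        # problem is most likely on the user's end.
        return Diagnosis.LOCAL_ISSUE if community_reports < 10 else Diagnosis.INCONCLUSIVE
    if failing_surfaces > 1 and failing_regions > 1:
        # Broad failure across surfaces and probe locations.
        return Diagnosis.GLOBAL_OUTAGE
    # A single failing surface or a single failing region points to a partial incident.
    return Diagnosis.PARTIAL_INCIDENT
```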
Our signal sources
- Live monitoring — Our own automated probes. Primary signal.
- Status page sync — When a provider's official status page reports an incident, we surface it. Secondary signal.
- Community reports — User-submitted reports on this site. Tertiary signal; used to corroborate probe data, not to declare outages on their own. A sketch of how the three tiers combine follows this list.
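One way to picture how these tiers interact, in priority order. The report threshold is an illustrative assumption:

```python
def combine_signals(probes_down: bool, official_incident: bool,
                    community_reports: int, report_threshold: int = 25) -> str:
    """Combine the three signal tiers in priority order. Illustrative only."""
    if probes_down and (official_incident or community_reports >= report_threshold):
        return "confirmed incident"          # primary signal with corroboration
    if probes_down:
        return "probe-detected incident"     # primary signal stands on its own
    if official_incident:
        return "provider-reported incident"  # secondary signal, surfaced as-is
    # Community reports alone are never enough to declare an outage.
    return "no incident detected"
```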
What we don't claim
DownForAI is not an SLA, an insurance product, or an official source. We show you the signal we see and the data behind it. We don't:
- Claim authority over any provider's true service status
- Replace the provider's official status page
- Guarantee that our classification is always correct
- Detect incidents that do not affect our probes or that resolve faster than our probe cadence
When our signal conflicts with the provider's official status page, check both. When our signal matches community reports but the official page is green, that's exactly the kind of gap we try to surface.
How to interpret what you see
- Operational but slow — The service responds, but latency is above baseline. Make sure your integrations set sensible timeouts.
- Degraded — Some probes fail. If only one surface is affected, the issue is likely partial. Consider a fallback (sketched after this list).
- Outage — Multiple probes fail across regions. Cross-check the provider's status page.
- No data — Either monitoring is paused for this surface, or probes haven't had time to run yet.
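If you are consuming an affected API, a client-side fallback can bridge a partial incident. A minimal sketch using the third-party requests library; the endpoint URLs are hypothetical placeholders, and real code would also handle auth, retries with backoff, and provider-specific payload shapes:

```python
import requests

# Hypothetical endpoints; substitute your real primary and fallback providers.
PRIMARY = "https://api.primary-provider.example/v1/chat"
FALLBACK = "https://api.fallback-provider.example/v1/chat"

def call_with_fallback(payload: dict, timeout_s: float = 10.0) -> dict:
    """Try the primary endpoint; on timeout, connection failure, or a 5xx
    response, fall through to the fallback."""
    for url in (PRIMARY, FALLBACK):
        try:
            resp = requests.post(url, json=payload, timeout=timeout_s)
            if resp.status_code >= 500:
                continue                 # server-side failure: try the next provider
            resp.raise_for_status()      # still surface 4xx errors (likely client-side)
            return resp.json()
        except (requests.Timeout, requests.ConnectionError):
            continue                     # slow or unreachable: try the next provider
    raise RuntimeError("all providers unavailable")
```

Note that the timeout matters as much as the fallback: an "Operational but slow" incident will stall any client that waits indefinitely.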
Browse incidents
All major incidents detected by our monitoring are archived publicly. Browse by month or by service in the incidents archive.