DownForAI

OpenAI status: API, auth, latency & outage reports

Operational
Last probe 6 min ago · 4 surfaces
Likely your side · MEDIUM confidence
  • 24h Uptime: 100.0%
  • p50 Latency: 379ms
  • p95 Latency: 488ms
  • Incidents (30d): 0

Having issues with OpenAI?

Report problems quickly and help the community stay informed.

0 reports in the last 24 hours

Surface Health

OpenAI API — Operational (HTTP 200, p50 379ms, checked 6m ago)
ChatGPT — Operational (HTTP 200, p50 379ms, checked 6m ago)
DALL-E — Operational (HTTP 200, p50 379ms, checked 6m ago)
Sora — Operational (HTTP 200, p50 379ms, checked 6m ago)

Uptime β€” last 24h

100.0%

Latency β€” last 24h (p50 per 30 min slot)


Is OpenAI down for everyone?

Likely local or client-side issue
Our probes see normal responses. The issue is likely on your end or in your network path.
Moderate confidence
Probe summary (4 surfaces)
All surfaces operational as of last probe.
Signals detected
  • All monitored surfaces operational
  • No recent user reports
  • Check your network, credentials, or rate limits

Incident history (30d)

✓ No incidents recorded in the past 30 days.

Reported symptoms

No user reports for OpenAI in the last 24 hours.

Known error signatures

Common failure patterns and how to diagnose them
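This snapshot captured no signature entries. As a rough guide, the common OpenAI API failure patterns follow its documented HTTP error codes; the helper below is an illustrative sketch (the function name and phrasing are ours, not part of any SDK):

```python
# Map an HTTP status from the OpenAI API to a likely cause and a first
# diagnostic step. Status-code meanings follow OpenAI's documented error
# codes; the helper itself is illustrative.
KNOWN_SIGNATURES = {
    401: ("Invalid or missing API key",
          "Verify OPENAI_API_KEY and the Authorization header"),
    403: ("Access not permitted",
          "Check org/project permissions and supported regions"),
    404: ("Unknown model or endpoint",
          "Check the model name and URL path"),
    429: ("Rate limit or quota exhausted",
          "Inspect x-ratelimit-* headers; honor retry-after"),
    500: ("Provider-side error",
          "Retry with backoff; check the official status page"),
    503: ("Engine overloaded",
          "Retry with backoff; consider a fallback provider"),
}

def diagnose(status_code: int) -> str:
    cause, action = KNOWN_SIGNATURES.get(
        status_code, ("Unrecognized signature",
                      "Capture x-request-id and escalate"))
    return f"HTTP {status_code}: {cause}. Next step: {action}"
```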

Provider details

The OpenAI API provides programmatic access to OpenAI models (chat completions, responses, realtime, images, audio, embeddings, batch, files, assistants). It runs on infrastructure separate from the ChatGPT web app.

What we monitor
Chat Completions API — chat-based text generation
Responses API — unified generation endpoint
Realtime API — streaming audio/text
Images API — image generation
Audio API — TTS/Whisper
Embeddings API — text embeddings
Batch API — async bulk requests
Assistants API — stateful assistants (tools, files)
Files API — file upload/storage
Status page segments
APIs · ChatGPT · Codex · Sora
Model families
GPT-5, GPT-4o, GPT-4o mini · o1, o3 (reasoning) · DALL-E 3 (images) · Whisper (STT), TTS
Common limits & quotas
  • API access uses tiered rate limits (Tier 1-5) based on payment history and usage
  • Rate limits expressed per-model in requests-per-minute (RPM) and tokens-per-minute (TPM)
  • Tier thresholds and exact limits are published on platform.openai.com
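Since limits are expressed per-model in RPM, a client can throttle itself before the provider does. A minimal sliding-window limiter, assuming a simple "at most N calls in any 60-second window" model (the class and its clock-injection pattern are ours, not an OpenAI SDK feature):

```python
import time
from collections import deque

class RpmThrottle:
    """Client-side sliding-window limiter: allow at most `rpm` calls in
    any 60-second window. Illustrative sketch, not an official SDK API."""

    def __init__(self, rpm: int, clock=time.monotonic):
        self.rpm = rpm
        self.clock = clock    # injectable for testing
        self.calls = deque()  # timestamps of recent calls

    def acquire(self) -> float:
        """Return seconds to wait before the next call is allowed (0 if now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the 60s window.
        while self.calls and now - self.calls[0] >= 60.0:
            self.calls.popleft()
        if len(self.calls) < self.rpm:
            self.calls.append(now)
            return 0.0
        return 60.0 - (now - self.calls[0])
```

TPM budgeting works the same way with token counts instead of call counts.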
Ecosystem dependencies
  • Many third-party apps depend on OpenAI APIs; dependency patterns vary
  • Azure OpenAI Service is a distinct infrastructure, not a proxy
  • Assistants API depends on Files API for attached resources
Operator notes
  • The Azure OpenAI/direct OpenAI split is the most important fallback for production users
  • Rate limit headers (x-ratelimit-*) return real quota state on every response — log them proactively, don't wait for 429
  • Tier upgrades are often automatic based on spending; programmatic tier probing is unnecessary
  • For Realtime API, connection drops are expected; implement reconnect logic, don't flag as outage
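The Realtime reconnect advice above reduces to a jittered exponential-backoff loop. A generic sketch (illustrative: `connect_fn` stands in for whatever WebSocket client you use, and the backoff constants are assumptions, not OpenAI guidance):

```python
import random
import time

def reconnect_with_backoff(connect_fn, max_attempts=6, base=0.5, cap=30.0,
                           sleep=time.sleep):
    """Retry connect_fn with jittered exponential backoff.

    connect_fn: any callable that raises on failure and returns a
    connection on success. sleep is injectable for testing."""
    for attempt in range(max_attempts):
        try:
            return connect_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: only now treat it as a real outage
            delay = min(cap, base * (2 ** attempt))
            sleep(delay + random.uniform(0, delay / 2))  # add jitter
```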
Diagnostic signals
Response headers:
  • x-request-id
  • x-ratelimit-limit-requests
  • x-ratelimit-remaining-requests
  • x-ratelimit-reset-requests
  • x-ratelimit-limit-tokens
  • x-ratelimit-remaining-tokens
  • retry-after

Quick probes:
  $ curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY" — basic reachability + lists models
  $ curl https://api.openai.com/v1/chat/completions -H "Authorization: Bearer $OPENAI_API_KEY" -H "Content-Type: application/json" -d '{"model":"gpt-4o","messages":[{"role":"user","content":"ping"}]}' — real inference test
  $ curl -I https://api.openai.com — TLS/DNS sanity check
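These headers can be captured on every response for structured logging. A small illustrative helper (the function name is ours; the header names are the documented ones):

```python
# Documented OpenAI diagnostic/rate-limit response headers.
RATELIMIT_KEYS = (
    "x-request-id",
    "x-ratelimit-limit-requests",
    "x-ratelimit-remaining-requests",
    "x-ratelimit-reset-requests",
    "x-ratelimit-limit-tokens",
    "x-ratelimit-remaining-tokens",
    "retry-after",
)

def extract_quota_state(headers: dict) -> dict:
    """Pull the diagnostic headers out of a response's header mapping
    (case-insensitive), keeping only those present. Illustrative helper."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {k: lowered[k] for k in RATELIMIT_KEYS if k in lowered}
```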

Fallback alternatives

What to use if this service is down

Direct OpenAI API is degraded
  • Azure OpenAI Service can reduce downtime for prod workloads (separate infrastructure)
  • Cost: low-medium if already provisioned · Effort: easy switch

Full OpenAI ecosystem unavailable
  • Anthropic API, Google Gemini API, Mistral API can reduce downtime for general chat
  • Cost: low with abstraction layer · Effort: easy switch

Embeddings API down
  • Voyage AI or Cohere Embed are drop-in alternatives
  • Effort: easy switch

Latency-sensitive workloads
  • Groq (fast inference on open-weight models) can reduce latency tail
  • Tradeoff: different model quality · Effort: moderate
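The "easy switch" scenarios above amount to ordered failover across provider clients. A minimal sketch of the pattern (names and structure are ours; each callable would wrap one provider's real SDK client):

```python
def call_with_fallback(providers, request):
    """Try each (name, call_fn) pair in order; return (name, result) from
    the first provider that succeeds. Illustrative sketch of the failover
    pattern, not a library API."""
    errors = []
    for name, call_fn in providers:
        try:
            return name, call_fn(request)
        except Exception as exc:
            errors.append((name, exc))  # remember why each one failed
    raise RuntimeError(f"all providers failed: {errors}")
```

In practice an abstraction layer also has to normalize request/response schemas and model names across providers, which is where most of the switching cost lives.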

How we monitor

downforai.com probes each AI service every 2–5 minutes from multiple independent locations. We measure HTTP response codes, latency (p50 & p95), and endpoint availability across the surfaces listed above. Status is classified as Operational, Degraded, or Outage based on a weighted combination of probe results. Uptime is calculated over 30-minute buckets and the last 24 hours. User reports are factored into our diagnosis as a secondary signal. We are independent of all providers listed and receive no compensation to report any particular status.
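The uptime and percentile figures above can be computed from raw probe results roughly as follows (a sketch under our own assumptions: nearest-rank percentiles and "2xx/3xx counts as up"; the site's exact weighting is not published here):

```python
import math

def percentile(samples, q):
    """Nearest-rank percentile (q in 0..100) over latency samples (ms)."""
    s = sorted(samples)
    rank = max(1, math.ceil(q / 100 * len(s)))
    return s[rank - 1]

def summarize(probes):
    """probes: list of (status_code, latency_ms) for one window.
    Returns uptime % and p50/p95 latency. Illustrative sketch."""
    ok = [lat for code, lat in probes if 200 <= code < 400]
    uptime = 100.0 * len(ok) / len(probes)
    return {"uptime_pct": round(uptime, 1),
            "p50_ms": percentile(ok, 50),
            "p95_ms": percentile(ok, 95)}
```

Running this per 30-minute bucket and again over the trailing 24 hours yields the figures shown on the page.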

Community Discussion

No comments yet. Be the first to share your experience!