Iterable incident

Customers on c102 are experiencing degraded performance

Severity: Minor · Status: Resolved
Started: Mar 17, 2026, 06:07 PM UTC
Resolved: Mar 17, 2026, 07:21 PM UTC
Duration: 1h 13m
Detected by Pingoru: Mar 17, 2026, 06:07 PM UTC

Affected components

Email Sends · Journey Processing · Push Sends · SMS Sends · User Updates · List Updates · User Deletions

Update timeline

  1. investigating Mar 17, 2026, 06:07 PM UTC

We are currently investigating an issue with one of our Elasticsearch (ES) clusters, c102. The cluster is in a red state, leading to very slow ingestion and widespread API 500 errors. Customers are experiencing failed searches and errors on multiple API endpoints. Ingestion rates are near zero, with lag of up to 20 minutes and increasing across the internal, realtime, and bulk data pipelines.

  2. investigating Mar 17, 2026, 06:43 PM UTC

Engineering is continuing to investigate this issue. At this time there is no improvement yet for customers on C102. Next update at 12:15 PDT or sooner.

  3. monitoring Mar 17, 2026, 07:00 PM UTC

Engineering has restored C102 to a "green" state, and ingestion and API calls are succeeding again on this cluster. We will continue to monitor until we are fully through the accumulated backlog. Next update at 1 PM PDT or sooner.

  4. resolved Mar 17, 2026, 07:21 PM UTC

We have fully caught up on the backlog on C102, and all queries and activity through C102 are functioning normally. If you continue to experience any issues, please contact Support.
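The "red" and "green" states in the timeline above are Elasticsearch's cluster health colors, reported by its standard `_cluster/health` API. A minimal sketch of how an operator might interpret that response during an incident like this one (the helper function and example values are illustrative, not Iterable's tooling):

```python
# Sketch: interpreting an Elasticsearch _cluster/health response.
# _cluster/health is a standard Elasticsearch API whose JSON body
# includes a "status" color; the helper below is illustrative only.

def describe_health(health: dict) -> str:
    """Map the cluster health color to what it means operationally."""
    meanings = {
        "green": "all primary and replica shards are allocated",
        "yellow": "all primaries allocated, but some replicas are not",
        "red": "at least one primary shard is unassigned; "
               "reads and writes against it will fail",
    }
    status = health["status"]
    return f"{status}: {meanings[status]}"

# Abridged example of the fields such a response would carry:
snapshot = {"status": "red", "unassigned_shards": 12}
print(describe_health(snapshot))
```

A red status matches the symptoms described above: failed searches, API 500s, and near-zero ingestion, since writes targeting an unassigned primary shard cannot succeed.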

Looking to track Iterable downtime and outages?

Pingoru polls Iterable's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Iterable reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Iterable alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
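The poll-and-alert loop described above can be sketched in a few lines: fetch the incident feed periodically and flag anything not seen on the previous poll. The feed shape here (a list of incidents with `id` and `status` fields) is an assumption modeled on Statuspage-style `summary.json` feeds, not Pingoru's actual implementation:

```python
# Sketch of status-page polling: compare the current incident list
# against the previous poll and alert on anything new. The incident
# schema ("id", "status") is an assumed Statuspage-like shape.

def new_incidents(previous: list[dict], current: list[dict]) -> list[dict]:
    """Return incidents present in `current` but absent from `previous`."""
    seen = {inc["id"] for inc in previous}
    return [inc for inc in current if inc["id"] not in seen]

earlier = [{"id": "abc123", "status": "resolved"}]
latest = earlier + [{"id": "c102-outage", "status": "investigating"}]
for inc in new_incidents(earlier, latest):
    print(f"ALERT: new incident {inc['id']} ({inc['status']})")
```

Running this comparison every five minutes bounds detection latency at one polling interval, which is why a new incident can be surfaced within minutes of the vendor posting it.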
Start monitoring Iterable for free

5 free monitors · No credit card required