- Detected by Pingoru
- Apr 22, 2026, 03:39 PM UTC
- Resolved
- Apr 22, 2026, 03:39 PM UTC
- Duration
- —
Affected: Other
Timeline · 4 updates
-
investigating Apr 22, 2026, 02:50 PM UTC
Status: Investigating We are currently investigating evidence of issues related to payment processing. We will publish further details when the problem's scope and impact are clear. Affected components Other (Degraded performance)
-
monitoring Apr 22, 2026, 03:01 PM UTC
Status: Monitoring As of 14:55 UTC, we have observed error rates return to baseline, and are continuing to investigate the root cause. Affected components Other (Operational)
-
monitoring Apr 22, 2026, 03:11 PM UTC
Status: Monitoring We have identified the root cause: a misconfiguration in our billing system delayed the provisioning of subscriptions. Errors remain at baseline. We are continuing work to fully address the root cause and prevent recurrence. Affected components Other (Operational)
-
resolved Apr 22, 2026, 03:39 PM UTC
Status: Resolved We have confirmed full issue resolution. Affected components Other (Operational)
- Detected by Pingoru
- Apr 17, 2026, 12:37 PM UTC
- Resolved
- Apr 17, 2026, 12:37 PM UTC
- Duration
- —
Affected: UI, Conversations
Timeline · 4 updates
-
investigating Apr 16, 2026, 05:08 PM UTC
Status: Investigating Starting around 13:00:00 UTC on 2026-04-14, some call transfers with the Twilio Native integration started to get categorized as failures in call transcripts, despite these transfers succeeding. Both custom webhook transfer tools and the native transfer_to_number tool are affected. Calls using SIP numbers are not currently affected. All information indicates that the actual transfer failure rate is at baseline. Our engineering team is currently working to correct the mislabeling. Affected components UI (Degraded performance)
-
identified Apr 16, 2026, 06:51 PM UTC
Status: Identified We have identified the root cause of the issue and are currently preparing a release to address it. Affected components UI (Degraded performance)
-
monitoring Apr 17, 2026, 09:49 AM UTC
Status: Monitoring We have released an update to address the issue. We are currently monitoring the situation. Affected components UI (Degraded performance) Conversations (Degraded performance)
-
resolved Apr 17, 2026, 12:37 PM UTC
Status: Resolved The issue has been resolved, and call transfers are now labeled correctly in call transcripts. Affected components Conversations (Operational) UI (Operational)
- Detected by Pingoru
- Apr 13, 2026, 09:25 PM UTC
- Resolved
- Apr 13, 2026, 09:25 PM UTC
- Duration
- —
Affected: Telephony
Timeline · 2 updates
-
investigating Apr 13, 2026, 07:50 PM UTC
Status: Investigating Starting at 19:50 UTC, calls to a portion of phone numbers imported into the ElevenLabs Agent Platform are failing. We are currently working to isolate the root cause and restore full functionality. Affected components Telephony (Partial outage)
-
resolved Apr 13, 2026, 09:25 PM UTC
Status: Resolved We have taken measures to restore full SIP functionality and tracked error rates as they returned to baseline. Affected components Telephony (Operational)
- Detected by Pingoru
- Apr 02, 2026, 10:38 AM UTC
- Resolved
- Apr 02, 2026, 10:38 AM UTC
- Duration
- —
Affected: ElevenCreative
Timeline · 3 updates
-
investigating Apr 02, 2026, 09:34 AM UTC
Status: Investigating We are investigating an issue affecting website access and API usage for a small percentage of ElevenLabs users. Only workspaces created today are impacted; existing users are unaffected. Affected components ElevenCreative (Degraded performance)
-
monitoring Apr 02, 2026, 09:42 AM UTC
Status: Monitoring Mitigating actions started rolling out, and we are monitoring the error rate. Affected components ElevenCreative (Degraded performance)
-
resolved Apr 02, 2026, 10:38 AM UTC
Status: Resolved The issue has been fixed; it affected only accounts created today, fewer than 100 workspaces in total. Affected components ElevenCreative (Operational)
- Detected by Pingoru
- Apr 02, 2026, 10:28 AM UTC
- Resolved
- Apr 02, 2026, 10:28 AM UTC
- Duration
- —
Timeline · 2 updates
-
investigating Apr 02, 2026, 10:06 AM UTC
Status: Investigating We’re investigating an issue where voices created recently may not show up in “My voices” or search results. This only affects workspaces with 50,000 or more voices.
- Voices are not lost and can still be accessed if you have the voice ID or a direct link.
- The issue is limited to visibility/search indexing for newly created voices.
We’re working on a fix and will share another update as soon as search results are fully up to date.
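The direct-access workaround described above can be sketched as a client-side fallback. Note that `search_voices` and `fetch_voice_by_id` are hypothetical wrappers around the search and by-ID lookup calls, not actual SDK names:

```python
def resolve_voice(voice_id, search_voices, fetch_voice_by_id):
    """Fall back to a direct ID lookup when search indexing lags behind.

    `search_voices` returns a {voice_id: voice} mapping from the search index;
    `fetch_voice_by_id` fetches a single voice directly by its ID.
    """
    indexed = search_voices()
    if voice_id in indexed:
        return indexed[voice_id]
    # Newly created voices may be missing from search during the incident,
    # but remain accessible by ID or direct link.
    return fetch_voice_by_id(voice_id)
```

This keeps callers working unchanged while the search index catches up.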
-
resolved Apr 02, 2026, 10:28 AM UTC
Status: Resolved The issue has been resolved, and all the voices should be visible now. This only affected workspaces with 50,000 or more voices.
- Detected by Pingoru
- Mar 31, 2026, 02:15 PM UTC
- Resolved
- Mar 31, 2026, 02:15 PM UTC
- Duration
- —
Affected: Telephony
Timeline · 3 updates
-
investigating Mar 31, 2026, 09:28 AM UTC
Status: Investigating We are currently investigating an issue with SIP calls that is impacting the transfer-to-human flow. We are working on a fix. Affected components Telephony (Degraded performance)
-
monitoring Mar 31, 2026, 09:58 AM UTC
Status: Monitoring We have pushed an update, and we are currently monitoring the situation. Affected components Telephony (Degraded performance)
-
resolved Mar 31, 2026, 02:15 PM UTC
Status: Resolved The root cause was an incompatibility in how ElevenLabs handled mid-dialog re-INVITEs. When SIP providers sent re-INVITEs, ElevenLabs responded with 488, which some providers treat as fatal: they issue a BYE and terminate the call. Per RFC 3261, a 488 should not necessarily result in session termination, but provider behaviour varied. Affected components Telephony (Operational)
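The RFC 3261 behaviour described in this update can be sketched as decision logic on the provider side. This is an illustrative model of the spec's intent, not ElevenLabs' or any provider's actual implementation:

```python
def should_terminate_dialog(is_reinvite: bool, status_code: int) -> bool:
    """Decide whether a failed request should tear down an established SIP dialog.

    Per RFC 3261, a rejected mid-dialog re-INVITE (e.g. 488 Not Acceptable Here)
    leaves the session unchanged under its existing parameters; only a failed
    initial INVITE, or an explicit BYE, should end the call.
    """
    if is_reinvite:
        # Session modification failed, but the dialog itself survives.
        return False
    # A failed initial INVITE means no session was ever established.
    return status_code >= 300
```

Providers that instead treat 488 as fatal and send a BYE produce exactly the dropped calls described above.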
- Detected by Pingoru
- Mar 30, 2026, 09:03 AM UTC
- Resolved
- Mar 30, 2026, 09:03 AM UTC
- Duration
- —
Affected: Telephony
Timeline · 6 updates
-
investigating Mar 30, 2026, 06:16 AM UTC
Status: Investigating On March 29th around 15:33 UTC, some inbound SIP calls began failing intermittently. We are investigating the root cause in parallel with mitigation. The issue affects only calls in the EU residency region. Affected components Telephony (Partial outage)
-
resolved Mar 30, 2026, 06:26 AM UTC
Status: Resolved Since applying mitigations, we're no longer seeing call failures. We're continuing to look into the root cause. Affected components Telephony (Operational)
-
investigating Mar 30, 2026, 06:40 AM UTC
Status: Investigating We are reopening the investigation regarding the SIP calls. Affected components Telephony (Partial outage)
-
investigating Mar 30, 2026, 07:28 AM UTC
Status: Investigating Our team is continuing to actively investigate the issue. Affected components Telephony (Partial outage)
-
monitoring Mar 30, 2026, 08:23 AM UTC
Status: Monitoring A fix has been released. We are currently monitoring the situation, and we have not observed any errors at this time. Affected components Telephony (Partial outage)
-
resolved Mar 30, 2026, 09:03 AM UTC
Status: Resolved The issue has been fully resolved; it affected only calls in the EU residency region. Affected components Telephony (Operational)
- Detected by Pingoru
- Mar 19, 2026, 10:07 PM UTC
- Resolved
- Mar 19, 2026, 10:07 PM UTC
- Duration
- —
Affected: Speech to Text, Other, Text to Speech
Timeline · 2 updates
-
investigating Mar 19, 2026, 07:21 PM UTC
Status: Investigating Between 19:01 and 19:07 UTC, we observed increased error rates across the platform. We are currently investigating the root cause. Affected components Text to Speech (Partial outage)
-
resolved Mar 19, 2026, 10:07 PM UTC
Status: Resolved We identified the root cause of the increased errors and have implemented a fix to prevent recurrence. Affected components Speech to Text (Operational) Other (Operational) Text to Speech (Operational)
- Detected by Pingoru
- Mar 16, 2026, 06:52 PM UTC
- Resolved
- Mar 16, 2026, 04:00 PM UTC
- Duration
- —
- Detected by Pingoru
- Mar 15, 2026, 03:55 PM UTC
- Resolved
- Mar 15, 2026, 03:55 PM UTC
- Duration
- —
Affected: Speech to Text, Other API endpoints
Timeline · 2 updates
-
investigating Mar 15, 2026, 03:40 PM UTC
Status: Investigating Dubbing and ASR services are currently down. Our team is investigating the issue. Affected components Other API endpoints (Full outage) Speech to Text (Full outage)
-
resolved Mar 15, 2026, 03:55 PM UTC
Status: Resolved Our team has resolved the issue, and services are back to normal. Affected components Other API endpoints (Operational) Speech to Text (Operational)
- Detected by Pingoru
- Mar 09, 2026, 06:42 PM UTC
- Resolved
- Mar 09, 2026, 06:42 PM UTC
- Duration
- —
Affected: UI, ElevenCreative
Timeline · 3 updates
-
investigating Mar 09, 2026, 06:17 PM UTC
Status: Investigating Currently, https://eu.residency.elevenlabs.io/ and https://in.residency.elevenlabs.io/ are not accessible. We are investigating and working to resolve this. Affected components ElevenCreative (Partial outage) UI (Partial outage)
-
monitoring Mar 09, 2026, 06:30 PM UTC
Status: Monitoring The EU and IN residency pages are currently in a functional state. We are continuing to monitor their health to ensure no further interruptions. Affected components ElevenCreative (Operational) UI (Operational)
-
resolved Mar 09, 2026, 06:42 PM UTC
Status: Resolved We have confirmed full functionality of both EU and IN residency webpages. Affected components ElevenCreative (Operational) UI (Operational)
- Detected by Pingoru
- Mar 08, 2026, 05:11 PM UTC
- Resolved
- Mar 08, 2026, 05:11 PM UTC
- Duration
- —
Affected: Conversations
Timeline · 2 updates
-
monitoring Mar 08, 2026, 04:58 PM UTC
Status: Monitoring Starting at 03:50 UTC today, a portion of SIP Trunks imported into ElevenLabs saw failures when SIP calls were initiated. As of 16:45 UTC, mitigation measures were implemented, and monitoring continues to ensure full resolution. Affected components Conversations (Partial outage)
-
resolved Mar 08, 2026, 05:11 PM UTC
Status: Resolved We have confirmed with affected clients and in our analytics that SIP trunking functionality is now fully restored. Affected components Conversations (Operational)
- Detected by Pingoru
- Mar 07, 2026, 12:13 AM UTC
- Resolved
- Mar 07, 2026, 12:13 AM UTC
- Duration
- —
Affected: Conversations, Text to Speech
Timeline · 2 updates
-
identified Mar 06, 2026, 10:28 PM UTC
Status: Identified Requests made to the voice "George" (ID: JBFqnCBsd6RMkjVDRZzb) started failing at 19:30 UTC today. We have identified the cause of the issue and are working to release a fix. As a temporary workaround, adding the voice with ID 6WwXjDDEMyNmFG95zycZ from the library will allow generations that were previously directed at the voice of George to complete. The voice of George will soon be accessible again. If you are making requests to legacy voices (https://help.elevenlabs.io/hc/en-us/articles/26928417254801-What-are-Legacy-voices) and receiving the same error, please switch to a non-legacy voice. Affected components Text to Speech (Degraded performance) Conversations (Degraded performance)
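The temporary workaround above can be sketched as a retry with the substitute voice. Here `synthesize` is a placeholder for the actual TTS request (e.g. POST /v1/text-to-speech/{voice_id}) and is assumed to raise on failure:

```python
GEORGE_VOICE_ID = "JBFqnCBsd6RMkjVDRZzb"    # failing during the incident
FALLBACK_VOICE_ID = "6WwXjDDEMyNmFG95zycZ"  # library voice suggested as a stand-in

def tts_with_fallback(synthesize, text):
    """Try George first; fall back to the workaround voice if the request fails."""
    try:
        return synthesize(GEORGE_VOICE_ID, text)
    except Exception:
        return synthesize(FALLBACK_VOICE_ID, text)
```

Once the fix ships, callers revert to the original voice ID with no other changes.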
-
resolved Mar 07, 2026, 12:13 AM UTC
Status: Resolved We have confirmed that the voice "George" is fully restored. Affected components Text to Speech (Operational) Conversations (Operational)
- Detected by Pingoru
- Mar 04, 2026, 11:57 AM UTC
- Resolved
- Mar 04, 2026, 11:57 AM UTC
- Duration
- —
Affected: Conversations
Timeline · 2 updates
-
identified Mar 04, 2026, 07:31 AM UTC
Status: Identified Since March 3 at 7:00 AM UTC, an issue has been affecting some calls using the Conversational v3 Model, which is in the alpha stage. We have identified the root cause, and a fix is being deployed. Conversation recordings, transcripts, and analytics are unavailable for affected calls. Affected calls are currently displayed as "in progress" and will be transitioned to "failed" automatically within the next 48 hours. We will provide a further update once the fix has been fully rolled out. Affected components Conversations (Degraded performance)
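A client consuming call statuses during this window could apply the 48-hour rule described above locally. This is a hypothetical client-side heuristic mirroring the automatic transition, not official API behaviour:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=48)

def effective_status(status, started_at, now=None):
    """Treat 'in progress' calls older than 48 hours as failed, matching the
    automatic transition described in the incident update."""
    now = now or datetime.now(timezone.utc)
    if status == "in progress" and now - started_at > STALE_AFTER:
        return "failed"
    return status
```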
-
resolved Mar 04, 2026, 11:57 AM UTC
Status: Resolved The issue has been resolved. Affected components Conversations (Operational)
- Detected by Pingoru
- Feb 27, 2026, 03:02 PM UTC
- Resolved
- Feb 27, 2026, 03:02 PM UTC
- Duration
- —
Affected: Conversations
Timeline · 4 updates
-
investigating Feb 27, 2026, 01:50 PM UTC
Status: Investigating We are currently investigating increased conversation latency affecting the Gemini models. Conversations are not failing due to fallbacks; however, we are observing elevated response times. We are actively working with the cloud provider to diagnose and resolve the issue. Affected components Conversations (Degraded performance)
-
identified Feb 27, 2026, 02:20 PM UTC
Status: Identified The issue has been identified as originating from the cloud provider. We are actively working with them to resolve it. Affected components Conversations (Degraded performance)
-
monitoring Feb 27, 2026, 02:25 PM UTC
Status: Monitoring Error rates have decreased, and we are currently awaiting confirmation from the cloud provider. Affected components Conversations (Degraded performance)
-
resolved Feb 27, 2026, 03:02 PM UTC
Status: Resolved Gemini model performance has fully recovered. Our cloud provider has informed us that the upstream issue is fully resolved, and our metrics confirm the recovery. Affected components Conversations (Operational)
- Detected by Pingoru
- Feb 26, 2026, 11:26 PM UTC
- Resolved
- Feb 26, 2026, 11:26 PM UTC
- Duration
- —
Affected: Speech to Text
Timeline · 2 updates
-
investigating Feb 26, 2026, 10:48 PM UTC
Status: Investigating We are seeing an increased rate of 429 errors from the Speech-to-Text API. Affected components Speech to Text (Degraded performance)
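Clients hitting the elevated 429 rate can mitigate with standard exponential backoff; `send_request` below is a placeholder for the actual Speech-to-Text call, not an SDK function:

```python
import random
import time

def call_with_backoff(send_request, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry on HTTP 429 with exponential backoff plus jitter.

    `send_request` returns a (status_code, body) tuple; any non-429 status
    is returned to the caller immediately.
    """
    for attempt in range(max_attempts):
        status, body = send_request()
        if status != 429:
            return status, body
        # Back off 0.5s, 1s, 2s, ... plus up to 1s of jitter, capped at 30s.
        sleep(min(base_delay * (2 ** attempt) + random.random(), 30))
    return status, body
```

Honoring a `Retry-After` header, when present, is preferable to a fixed schedule.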
-
resolved Feb 26, 2026, 11:26 PM UTC
Status: Resolved Error rates have returned to the baseline. Affected components Speech to Text (Operational)
- Detected by Pingoru
- Feb 25, 2026, 07:45 PM UTC
- Resolved
- Feb 25, 2026, 07:45 PM UTC
- Duration
- —
Affected: UI, Speech to Text, Other, Other API endpoints, ElevenCreative
Timeline · 11 updates
-
investigating Feb 25, 2026, 02:36 PM UTC
Status: Investigating We are currently investigating an issue with website and API requests and are working to identify the root cause as quickly as possible. We will provide an update as soon as more information becomes available. TTS API requests and Agents conversations are working. There may be issues with API services that affect conversation initiation. Affected components ElevenCreative (Degraded performance)
-
identified Feb 25, 2026, 03:04 PM UTC
Status: Identified We have identified an underlying issue with our cloud provider that is impacting service availability. Affected components ElevenCreative (Partial outage)
-
identified Feb 25, 2026, 05:05 PM UTC
Status: Identified We have isolated the components responsible for performance degradation and are attempting to implement workarounds to bypass the affected cloud provider services. Affected components UI (Partial outage) Speech to Text (Degraded performance) Other (Degraded performance) ElevenCreative (Partial outage) Other API endpoints (Degraded performance)
-
identified Feb 25, 2026, 05:23 PM UTC
Status: Identified Our engineering team is continuing to investigate issues with the cloud provider components. Multiple engineering teams and leadership are involved in the effort. We'll continue monitoring closely and provide timely updates as the situation evolves. Affected components UI (Partial outage) Speech to Text (Degraded performance) Other (Degraded performance) ElevenCreative (Partial outage) Other API endpoints (Degraded performance)
-
identified Feb 25, 2026, 05:44 PM UTC
Status: Identified Our engineering team is actively exploring multiple remediation paths in parallel while working to isolate the root cause. We have escalated this issue both to our infrastructure provider and to internal leadership to ensure it receives the highest level of attention and resources. A definitive root cause has not yet been identified, but we are making iterative changes and closely monitoring their impact. We will continue to provide updates as our investigation progresses. Affected components Other (Degraded performance) ElevenCreative (Partial outage) Other API endpoints (Degraded performance) UI (Partial outage) Speech to Text (Degraded performance)
-
identified Feb 25, 2026, 06:01 PM UTC
Status: Identified Our team continues to actively investigate the degraded performance affecting services. We are working through several lines of investigation in parallel and methodically testing potential contributing factors. This remains escalated with our infrastructure provider and internal leadership, with additional engineering resources dedicated to this effort. While a definitive root cause has not yet been identified, each step is helping us build a clearer picture of the underlying issue. We understand the impact this is having and are treating this with the highest priority. We will provide another update in 15-20 minutes. Affected components Other (Degraded performance) ElevenCreative (Partial outage) Other API endpoints (Degraded performance) UI (Partial outage) Speech to Text (Degraded performance)
-
identified Feb 25, 2026, 06:27 PM UTC
Status: Identified We are continuing to investigate and eliminate potential root causes. We are also clarifying the affected products below.
What's affected:
- Agents Platform - degraded performance due to issues around conversation initiation. Conversations that do start function as normal, so API customers should retry failed conversation initiations.
- Website - intermittent slowness
What's working normally:
- Text-to-Speech (TTS) API
- Speech-to-Text (STT) API
- Dubbing API
Our engineering team remains fully engaged and is continuing to work through active lines of investigation alongside our infrastructure provider. We are iterating on potential mitigations and monitoring the results closely. We will continue to provide updates every 15-20 minutes. Affected components Other (Degraded performance) ElevenCreative (Partial outage) Other API endpoints (Degraded performance) UI (Partial outage) Speech to Text (Degraded performance)
-
identified Feb 25, 2026, 06:54 PM UTC
Status: Identified Our team remains fully engaged and continues to work through active lines of investigation. All previously noted affected and operational components remain in the same state. We will provide another update in 15-20 minutes. Affected components Speech to Text (Operational) Other (Operational) ElevenCreative (Partial outage) Other API endpoints (Degraded performance) UI (Partial outage)
-
monitoring Feb 25, 2026, 07:10 PM UTC
Status: Monitoring We have implemented changes to the affected services that have resulted in signs of recovery and improved performance. We are closely monitoring to confirm stability and will provide another update in 15–20 minutes. Affected components ElevenCreative (Partial outage) Other API endpoints (Degraded performance) UI (Partial outage) Speech to Text (Operational) Other (Operational)
-
monitoring Feb 25, 2026, 07:27 PM UTC
Status: Monitoring Affected services continue to show sustained improvement following the changes implemented earlier. We are continuing to monitor key metrics and will confirm full resolution once we are satisfied with stability across baseline error rates. Next update in 15-20 minutes. Affected components Other API endpoints (Degraded performance) UI (Partial outage) Speech to Text (Operational) Other (Operational) ElevenCreative (Partial outage)
-
resolved Feb 25, 2026, 07:45 PM UTC
Status: Resolved As of 7:45 PM UTC, all affected services have fully recovered and are operating normally. Agents Platform and Audio Platform (Website) have returned to full health. Text-to-Speech (TTS), Speech-to-Text (STT), and Dubbing APIs were unaffected throughout the incident. The root cause has been identified and remediated. We will be conducting a thorough post-incident review and will share further details in a follow-up report. We apologize for the disruption and appreciate your patience throughout this incident. If you are still experiencing any issues, please don't hesitate to contact our support team. Affected components Other (Operational) ElevenCreative (Operational) Other API endpoints (Operational) UI (Operational) Speech to Text (Operational)
- Detected by Pingoru
- Feb 23, 2026, 01:01 PM UTC
- Resolved
- Feb 23, 2026, 01:01 PM UTC
- Duration
- —
Affected: Conversations
Timeline · 3 updates
-
investigating Feb 23, 2026, 10:53 AM UTC
Status: Investigating We are currently investigating a potential issue affecting the v3 conversational TTS model on the Agents platform. Affected components Conversations (Degraded performance)
-
monitoring Feb 23, 2026, 12:34 PM UTC
Status: Monitoring We have released a fix. The error rate has returned to normal levels, and we are actively monitoring the situation. Affected components Conversations (Degraded performance)
-
resolved Feb 23, 2026, 01:01 PM UTC
Status: Resolved The issue has been resolved. Affected components Conversations (Operational)
- Detected by Pingoru
- Feb 17, 2026, 02:03 PM UTC
- Resolved
- Feb 17, 2026, 01:26 PM UTC
- Duration
- —
- Detected by Pingoru
- Feb 14, 2026, 09:52 AM UTC
- Resolved
- Feb 14, 2026, 09:52 AM UTC
- Duration
- —
Affected: Text to Speech
Timeline · 2 updates
-
identified Feb 14, 2026, 09:51 AM UTC
Status: Identified We detected that approximately 1.5% of v3 requests produced static noise. Affected components Text to Speech (Degraded performance)
-
resolved Feb 14, 2026, 09:52 AM UTC
Status: Resolved The error rates have now returned to normal levels. Affected components Text to Speech (Operational)
- Detected by Pingoru
- Feb 11, 2026, 02:12 AM UTC
- Resolved
- Feb 11, 2026, 02:12 AM UTC
- Duration
- —
Affected: RAG
Timeline · 3 updates
-
identified Feb 10, 2026, 11:36 PM UTC
Status: Identified We have identified degraded RAG performance, which may cause increased latency. Affected components RAG (Degraded performance)
-
monitoring Feb 11, 2026, 01:53 AM UTC
Status: Monitoring We have pushed a fix, which has resulted in latency returning to baseline. Affected components RAG (Degraded performance)
-
resolved Feb 11, 2026, 02:12 AM UTC
Status: Resolved The RAG performance issue is now resolved. Affected components RAG (Operational)
- Detected by Pingoru
- Feb 10, 2026, 10:15 PM UTC
- Resolved
- Feb 10, 2026, 10:15 PM UTC
- Duration
- —
Affected: Other API endpoints
Timeline · 2 updates
-
identified Feb 10, 2026, 08:13 PM UTC
Status: Identified At 18:30 UTC, an unintentional change resulted in conversation_id not being included as a field within Post Call Transcript webhooks. As a temporary workaround, we advise consuming it from conversation_initiation_client_data.dynamic_variables.system__conversation_id, if possible. We are currently working on a fix to address this. Affected components Other API endpoints (Partial outage)
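The workaround described above can be sketched for webhook consumers; the field names are taken directly from the update itself:

```python
def get_conversation_id(payload: dict):
    """Read conversation_id from a Post Call Transcript webhook payload,
    falling back to the dynamic variable noted in the incident update."""
    cid = payload.get("conversation_id")
    if cid:
        return cid
    return (
        payload.get("conversation_initiation_client_data", {})
        .get("dynamic_variables", {})
        .get("system__conversation_id")
    )
```

Once the fix is deployed, the first branch takes over and the fallback becomes dead code that can be removed.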
-
resolved Feb 10, 2026, 10:15 PM UTC
Status: Resolved The fix to resolve the missing conversation_id in Post Call Transcripts has been successfully deployed. A detailed RCA will be published later. Affected components Other API endpoints (Operational)
- Detected by Pingoru
- Feb 10, 2026, 05:04 PM UTC
- Resolved
- Feb 10, 2026, 05:04 PM UTC
- Duration
- —
Affected: Conversations
Timeline · 11 updates
-
identified Feb 03, 2026, 05:37 PM UTC
Status: Identified Starting at ~14:00 UTC today, some conversations have been affected by a disruption in our data extraction and call summary mechanisms, and a subset of these may have seen increased conversation latency. We have identified the issue as stemming from reduced availability of the Gemini 2.0 and Gemini 2.5 Flash models, and are currently working to resolve the issue. Affected components Conversations (Degraded performance)
-
identified Feb 03, 2026, 08:14 PM UTC
Status: Identified We have observed a significant reduction in errors, and continue to work on a solution to fully mitigate the core issue. Affected components Conversations (Degraded performance)
-
monitoring Feb 03, 2026, 10:23 PM UTC
Status: Monitoring We have observed errors return to baseline. Currently, call summaries and data extraction are fully functional. At this point in time, we are following through with improvements to data extraction and call summaries to ensure they are not impacted by similar availability issues in the future. Affected components Conversations (Operational)
-
resolved Feb 04, 2026, 10:13 AM UTC
Status: Resolved The issue has been fully resolved, and safeguards have been implemented to prevent recurrence. Affected components Conversations (Operational)
-
monitoring Feb 04, 2026, 03:53 PM UTC
Status: Monitoring We have observed data extraction and evaluation failures reoccur, and are following through to ensure complete mitigation. Affected components Conversations (Degraded performance)
-
resolved Feb 04, 2026, 10:10 PM UTC
Status: Resolved We have fully released a fix that implements increased redundancy in our Data Collection and Evaluation mechanisms via multiple backup LLMs, and confirmed complete mitigation. Affected components Conversations (Operational)
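The multi-backup redundancy described in this update follows a common fallback-chain pattern; `call_llm` and the model ordering below are illustrative, not ElevenLabs' actual internals:

```python
def extract_with_fallback(prompt, models, call_llm):
    """Try each model in order, returning the first successful result.

    `models` is an ordered list of model identifiers (primary first);
    `call_llm(model, prompt)` is assumed to raise on unavailability.
    """
    last_error = None
    for model in models:
        try:
            return call_llm(model, prompt)
        except Exception as exc:
            last_error = exc
    raise last_error
```

This degrades gracefully: a single-provider outage only affects results if every backup fails as well.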
-
identified Feb 05, 2026, 01:55 PM UTC
Status: Identified We have observed an increase in latency affecting the Gemini models and are actively investigating. We are also improving the formatting of the recently implemented Data Collection backups. Affected components Conversations (Degraded performance)
-
resolved Feb 06, 2026, 11:31 PM UTC
Status: Resolved Following our recent patch, we have tracked unexpected Data Collection values and Evaluation failures as they decreased to negligible levels, and have confirmed restored functionality with affected users. Moving forward, we will continue to improve our system with effective fallbacks and increased redundancy. Affected components Conversations (Operational)
-
identified Feb 09, 2026, 04:44 PM UTC
Status: Identified We have observed a reoccurrence of Data Collection failures, and are working on additional measures to mitigate them. Affected components Conversations (Degraded performance)
-
monitoring Feb 09, 2026, 11:03 PM UTC
Status: Monitoring We have implemented additional safeguards, and will continue to monitor through peak traffic to ensure complete resolution. Affected components Conversations (Operational)
-
resolved Feb 10, 2026, 05:04 PM UTC
Status: Resolved We have confirmed that our Data Extraction fix has continued to mitigate the issue during peak traffic and that Data Extraction failure rates are below the pre-incident baseline. Affected components Conversations (Operational)