ElevenLabs experienced a minor incident affecting Conversations between January 27 and January 29, 2026. The incident has been resolved; the full update timeline is below.
Affected components
- Conversations
Update timeline
- Investigating (Jan 27, 2026, 05:44 PM UTC)
  Currently, some conversations are affected by increased latency due to elevated LLM generation failures. We are investigating the root cause and working to mitigate this.
  Affected components: Conversations (Degraded performance)
- Identified (Jan 27, 2026, 06:30 PM UTC)
  We have identified that the issue is isolated to the Gemini-2.5-Flash model. We are working with our cloud provider to resolve this.
  Affected components: Conversations (Degraded performance)
- Monitoring (Jan 27, 2026, 09:10 PM UTC)
  We have observed a significant decrease in error occurrence and are continuing to monitor the availability of the resources serving Gemini-2.5-Flash.
  Affected components: Conversations (Degraded performance)
- Resolved (Jan 27, 2026, 10:18 PM UTC)
  LLM failures have now returned to baseline, and conversation latency using Gemini-2.5-Flash is back to expected levels. For future mitigation, we plan to improve our fallback methods to better handle reduced cloud provider availability.
  Affected components: Conversations (Operational)
- Monitoring (Jan 28, 2026, 03:49 PM UTC)
  We have observed a recurrence of Gemini-2.5-Flash failures and are continuing to monitor the issue.
  Affected components: Conversations (Degraded performance)
- Resolved (Jan 29, 2026, 12:30 AM UTC)
  LLM error rates have normalized, and Gemini-2.5-Flash conversation latency is back within expected ranges. We'll continue monitoring closely. Improvements accommodating reduced cloud provider availability have been implemented and will be live shortly.
  Affected components: Conversations (Operational)