Dixa incident

Degraded performance - AI Voice transcript

Minor · Resolved

Dixa experienced a minor incident on January 27, 2026 affecting AI Voice transcripts, lasting 55m. The incident has been resolved; the full update timeline is below.

Started
Jan 27, 2026, 11:41 AM UTC
Resolved
Jan 27, 2026, 12:36 PM UTC
Duration
55m
Detected by Pingoru
Jan 27, 2026, 11:41 AM UTC

Affected components

AI Voice transcripts

Update timeline

  1. investigating Jan 27, 2026, 11:41 AM UTC

    We are experiencing some difficulties with AI Voice transcripts. We are investigating the matter. Next update: 13:00 CET.

  2. identified Jan 27, 2026, 12:10 PM UTC

    We have identified the issue as related to one of our providers. We continue to monitor the situation closely. Next update: 14:00 CET.

  3. resolved Jan 27, 2026, 12:36 PM UTC

    All known issues related to this incident have been resolved by our partner. We thank you for your patience and cooperation. A post-mortem for this incident will be posted within 5 business days.

  4. postmortem Feb 02, 2026, 12:37 PM UTC

    Dixa experienced issues with transcription on Tuesday, 27 January 2026, between 11:27 and 13:15 CET.

    **Timeline**

    * At 10:22 CET / 4:22 EST, Azure reports issues on the Azure OpenAI Service.
    * At 11:27 CET / 5:27 EST, the first transcription issues appear at Dixa. At 11:46 CET / 5:46 EST, Dixa investigates unrelated issues with live chats and calls.
    * At 12:16 CET / 6:16 EST, while investigating the unrelated issues, Dixa notices the transcription issues.
    * At 13:15 CET / 7:15 EST, the transcription issues disappear.

    **Impact**

    Transcriptions were unavailable or delayed during the timeframe above.

    **Root cause**

    Our provider for AI Voice transcriptions, Azure, reported issues. According to their [status history](https://azure.status.microsoft/en-us/status/history/): _Between 09:22 UTC and 16:12 UTC on 27 January 2026, and again between 11:14 UTC and 13:35 UTC on 29 January 2026, a platform issue resulted in an impact to the Azure OpenAI Service in the Sweden Central region. Impacted customers may have experienced HTTP 500/503 errors, failed inference requests, and/or issues with model deployment metadata. This issue also affected the Agent Service and other downstream AI Services dependent on Azure OpenAI in this region._

    **Immediate solution**

    We monitored the situation until it was fixed.

    **Long-term solution (where applicable)**

    We could consider changing the region for our service if these issues become more frequent (a sketch of such a failover appears below the timeline). Otherwise, Azure has been reliable enough to keep the current infrastructure. Furthermore, Azure made improvements on their side to prevent this situation from happening again.
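
As an illustration of the region-failover idea mentioned in the long-term solution, here is a minimal sketch of how a transcription call could retry against a secondary Azure region when the primary returns the transient 500/503 errors described in Azure's status notice. The endpoint URLs, deployment name, API version, and the `transcribe_with_failover` helper are hypothetical placeholders, not Dixa's actual integration.

```python
import time
import requests  # assumed HTTP client; any client that exposes status codes works

# Hypothetical per-region endpoints; real resource names, deployments, and API
# versions would differ in an actual Azure OpenAI setup.
REGION_ENDPOINTS = [
    "https://example-swedencentral.openai.azure.com/openai/deployments/whisper/audio/transcriptions",
    "https://example-westeurope.openai.azure.com/openai/deployments/whisper/audio/transcriptions",
]
TRANSIENT_STATUSES = {500, 503}  # the error codes mentioned in Azure's status notice


def transcribe_with_failover(audio_path: str, api_key: str, retries_per_region: int = 2) -> dict:
    """Try each region in order, retrying briefly on transient 500/503 errors."""
    last_error: Exception | None = None
    for endpoint in REGION_ENDPOINTS:
        for attempt in range(retries_per_region):
            try:
                with open(audio_path, "rb") as audio:
                    response = requests.post(
                        endpoint,
                        headers={"api-key": api_key},
                        params={"api-version": "2024-06-01"},  # placeholder version
                        files={"file": audio},
                        timeout=60,
                    )
                if response.status_code in TRANSIENT_STATUSES:
                    # Transient platform issue: back off and retry, then fall through
                    # to the next region once the retries are exhausted.
                    last_error = RuntimeError(f"{endpoint} returned {response.status_code}")
                    time.sleep(2 ** attempt)
                    continue
                response.raise_for_status()
                return response.json()
            except requests.RequestException as exc:
                last_error = exc
                time.sleep(2 ** attempt)
    raise RuntimeError("Transcription failed in all configured regions") from last_error
```

During an outage like this one, the secondary region would absorb traffic instead of surfacing failed or delayed transcripts; whether that trade-off is worth the added cost and complexity is the judgment call the post-mortem describes.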