Dixa incident
Degraded performance for Analytics and Real-time Dashboard
Dixa experienced a minor incident on September 9, 2025 affecting Analytics, lasting 2h 3m. The incident has been resolved; the full update timeline is below.
Affected components
- Analytics
- Real-time Dashboard
Update timeline
- identified Sep 09, 2025, 01:45 PM UTC
We are experiencing delays in Analytics data ingestion, which affects the Analytics and Real-time Dashboard modules. We have identified and fixed the issue, and data ingestion is catching up quickly.
- monitoring Sep 09, 2025, 01:52 PM UTC
All data has now caught up, and we'll continue to monitor the service closely.
- resolved Sep 09, 2025, 03:49 PM UTC
All known issues related to this incident have been resolved. We thank you for your patience and cooperation. A post-mortem for this incident will be posted within 5 business days.
- postmortem Sep 15, 2025, 06:29 AM UTC
## Summary

On September 9, 2025, a deployment at 2:45 PM CEST caused a data ingestion delay affecting the Intelligence and Realtime Dashboards modules. The incident was detected at 2:47 PM CEST by internal alerts as well as reports coming in to our Customer Support. A fix was released at 3:25 PM CEST, and all data had caught up by 3:51 PM CEST. The total incident duration was 66 minutes, with an impact on customers relying on our Realtime Dashboards and Intelligence module.

## Timeline

* **2:45 PM CEST** - A deployment was released that delayed ingestion of data to the Intelligence and Realtime Dashboards modules
* **2:47 PM CEST** - Internal alerts notified relevant stakeholders at Dixa of the incident, and a fix was worked on
* **3:25 PM CEST** - A fix was released, and delayed data started catching up
* **3:51 PM CEST** - All delayed data had caught up

**Total Duration:** 66 minutes

## Root Cause

The incident was caused by a deployment that resulted in a delay in the ingestion of data.

## Impact

* Realtime Dashboards
* Intelligence/Analytics

## Resolution

**Immediate Resolution:**

* The engineering team started working on a fix as soon as internal alarms notified relevant stakeholders
* Service functionality was restored following the fix, and the delayed data started catching up

**Prevention Measures:**

* Expand test coverage