Zaptec experienced a major incident on September 4, 2025 affecting the API, the Charger backend, and one more component, lasting 5h 52m. The incident has been resolved; the full update timeline is below.
Update timeline
- investigating Sep 04, 2025, 02:33 PM UTC
We are seeing problems on the infrastructure side that cause our backend to not process requests in time. We are in contact with the vendor and are working to resolve the issue as soon as possible. We will post an update as soon as we have more information.
- investigating Sep 04, 2025, 03:09 PM UTC
We are continuing to investigate this issue.
- investigating Sep 04, 2025, 04:06 PM UTC
We’re seeing a slight improvement in our systems, so we’re moving in the right direction. We will keep investigating, and update as soon as we have more information.
- investigating Sep 04, 2025, 04:32 PM UTC
Our systems are back to normal. We will keep monitoring the situation throughout the evening.
- monitoring Sep 04, 2025, 04:33 PM UTC
Our systems are back to normal. We will keep monitoring the situation throughout the evening.
- resolved Sep 04, 2025, 08:26 PM UTC
This incident has been resolved.
- postmortem Sep 05, 2025, 10:52 AM UTC
On September 4, 2025, Zaptec Cloud experienced a service disruption that resulted in increased latency and data processing delays. This incident affected our customers' ability to interact with their chargers in real time. We sincerely apologize for the inconvenience and frustration this may have caused. This report provides a transparent overview of the incident, our response, and the steps we are taking to improve our system's resilience.

**Timeline of events**

- 15:30: We first detected performance degradation and increased latency in our cloud services.
- 15:45: Our engineering team began an active investigation into the cause of the issue.
- 15:55: The investigation indicated that the disruption was related to our underlying cloud infrastructure.
- 16:08: The event was officially classified as a critical incident, and our response teams were fully engaged.
- 16:45: Service performance began to show signs of stabilization. We formally engaged our cloud provider to address the root cause.
- 17:22: The system began a more consistent recovery process.
- 18:12: All Zaptec Cloud services were fully restored and operating normally.

**Root Cause Analysis**

Following the incident, we received a report from our cloud service provider confirming that they had experienced major network issues in the region. The timeline provided by the vendor directly corresponds with the period of our service degradation. We have concluded with high confidence that the root cause of this incident was a failure in our cloud provider's infrastructure, which was outside of our immediate control.

**Next Steps and Mitigation**

While the root cause was external, we are committed to improving our system's resilience to such failures. Our key action item is to review and consider redesigning our infrastructure to enable faster and more effective failover. We are dedicated to ensuring the reliability of our services and will work diligently to implement these improvements.