Zaptec incident

Investigation Ongoing Into Delayed Observation History on Some Devices

Minor · Resolved

Zaptec experienced a minor incident on January 16, 2026, affecting OCPP and lasting 17h 2m. The incident has been resolved; the full update timeline is below.

Started
Jan 16, 2026, 05:06 PM UTC
Resolved
Jan 17, 2026, 10:08 AM UTC
Duration
17h 2m
Detected by Pingoru
Jan 16, 2026, 05:06 PM UTC

Affected components

OCPP

Update timeline

  1. investigating Jan 16, 2026, 05:06 PM UTC

    We are currently investigating an issue where a small percentage of devices may show a delayed observation history. This does not affect the charging process. No action is required from users, and all data will appear as expected once the system catches up.

  2. identified Jan 16, 2026, 09:08 PM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Jan 16, 2026, 09:31 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Jan 17, 2026, 10:08 AM UTC

    This incident has been resolved.

  5. postmortem Jan 19, 2026, 08:46 AM UTC

    On January 16, 2026, we experienced a delay in processing historical observation data for a small subset of devices. This affected how historical data was displayed in the portal, but did not impact charging functionality or real-time device status. We want to provide a transparent overview of what happened, how we responded, and the steps we are taking to prevent this from happening again.

    Impact

    Approximately 5% of devices experienced delays in displaying historical observation data in the portal. Importantly:

    * Charging sessions and charging performance were not affected
    * Real-time device status remained accurate
    * There was no impact on safety or system stability
    * All delayed data has since been fully processed and is now available in the portal

    Timeline

    14:40 – A customer reported that historical observation data for their device appeared delayed in the portal, while real-time status was accurate.
    15:30 – Our team identified that the issue was isolated to the historical data processing pipeline for a small subset of devices.
    16:30 – Investigation revealed the issue had begun approximately 24 hours earlier, with delays gradually accumulating over time.
    18:00 – An incident response team was assembled and a status notification was posted on the portal.
    18:00–21:00 – The team investigated multiple potential causes and tested various solutions.
    22:00 – An optimized version of the data processing service was deployed, and the system immediately began recovering.
    22:30 – All delayed data was fully processed and historical observations were up to date.

    Root Cause

    Under normal conditions, the system processed device observations without issue. However, when certain devices began transmitting duplicate observations at a higher rate than usual, the data processing pipeline was unable to keep pace. This caused a backlog that grew over time, resulting in delayed historical data for approximately 5% of devices (see the deduplication sketch after this timeline).

    Resolution

    We deployed an optimized data processing method that handles higher volumes of observations more efficiently. The backlog was fully cleared and all historical data is now current.

    Preventive Measures

    To prevent similar issues in the future, we have:

    * Improved the efficiency of our data processing pipeline
    * Enhanced monitoring to detect processing delays earlier
    * Implemented automated alerts to notify our team of any ingestion delays (see the lag-check sketch after this timeline)

    We apologize for any inconvenience this may have caused.
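The root cause above describes duplicate observations arriving faster than the pipeline could process them, and deduplication is one natural mitigation. Below is a minimal, hypothetical sketch in Python of deduplicating a batch of observations before further processing. Zaptec has not published its actual implementation; the `Observation` type, its fields, and the `deduplicate` function are illustrative assumptions, not Zaptec's schema or API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Observation:
    """A single device observation (illustrative fields, not Zaptec's schema)."""
    device_id: str
    observation_type: str
    timestamp: float  # epoch seconds
    value: str


def deduplicate(batch: list[Observation]) -> list[Observation]:
    """Keep only the first occurrence of each (device, type, timestamp) triple.

    Repeated transmissions of the same observation are dropped here, so a
    device re-sending data at a high rate does not multiply the work done
    by the rest of the pipeline.
    """
    seen: set[tuple[str, str, float]] = set()
    unique: list[Observation] = []
    for obs in batch:
        key = (obs.device_id, obs.observation_type, obs.timestamp)
        if key not in seen:
            seen.add(key)
            unique.append(obs)
    return unique


if __name__ == "__main__":
    batch = [
        Observation("ZAP-001", "Temperature", 1_737_040_000.0, "21.5"),
        Observation("ZAP-001", "Temperature", 1_737_040_000.0, "21.5"),  # duplicate
        Observation("ZAP-002", "Temperature", 1_737_040_000.0, "19.8"),
    ]
    print(len(deduplicate(batch)))  # -> 2
```

Dropping duplicates at the intake boundary keeps the cost of a misbehaving device proportional to its unique data rather than its transmission rate, which addresses the backlog mechanism the postmortem describes.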
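The preventive measures also mention automated alerts for ingestion delays. Here is a minimal sketch of such a lag check, assuming the pipeline can report the timestamp of its oldest unprocessed observation; the 15-minute threshold and the function name are hypothetical choices for illustration, not Zaptec's configuration.

```python
import time

# Hypothetical threshold: alert once the oldest unprocessed observation
# is more than 15 minutes behind real time.
MAX_INGESTION_LAG_SECONDS = 15 * 60


def ingestion_lag_exceeded(oldest_unprocessed_ts: float, now: float | None = None) -> bool:
    """Return True when the processing backlog exceeds the lag threshold.

    A gap between `now` and the oldest unprocessed observation that keeps
    growing is exactly the accumulating backlog described in this incident,
    so alerting on it surfaces the problem well before customers notice.
    """
    current = time.time() if now is None else now
    return (current - oldest_unprocessed_ts) > MAX_INGESTION_LAG_SECONDS


if __name__ == "__main__":
    # Example: an observation queued 20 minutes ago should trigger an alert.
    if ingestion_lag_exceeded(time.time() - 20 * 60):
        print("ALERT: observation ingestion is falling behind")
```

In this incident the delay accumulated for roughly 24 hours before a customer report surfaced it; a check like this, run periodically against the pipeline's oldest pending item, would have fired within minutes of the backlog forming.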