PubNub incident
Delay in Publishing Messages to Storage Globally
PubNub experienced a minor incident on January 1, 2026 affecting the North America Points of Presence, the Storage and Playback Service, and 1 more component, lasting 34 minutes. The incident has been resolved; the full update timeline is below.
Update timeline
- investigating Jan 01, 2026, 12:25 AM UTC
Starting at 12:13 AM GMT, we began seeing delays in published messages being written to history storage. Users may be unable to access recently stored messages through History API calls. These messages will eventually be written to storage. Our engineers are actively working to address the issue and we will provide updates here. If you feel your service has been impacted and you would like to discuss it, please email [email protected].
- monitoring Jan 01, 2026, 12:44 AM UTC
A fix has been applied and we are monitoring the results. Published messages that were queued to be written to history have now been written successfully. We will continue to monitor for the next 15 minutes.
- resolved Jan 01, 2026, 01:00 AM UTC
The incident has been resolved. Published messages continue to be written to history storage normally. No messages were lost during the incident. If you believe you were impacted by the incident and wish to discuss it with our team, please contact us by email at [email protected].
- postmortem Jan 05, 2026, 05:01 PM UTC
### **Problem Description, Impact, and Resolution**

On January 1, 2026 at 00:00 UTC, we observed elevated latency in our History service across multiple regions. Customers may have experienced delays in message persistence and history availability during this period.

The issue was caused by a schema mismatch in newly created persistence tables. Specifically, required columns for message metadata were missing from the new tables, resulting in failed write operations and backed-up queues. This created downstream pressure on our storage systems, leading to higher latency in history processing.

We mitigated the issue by manually applying the correct schema updates across all affected persistence spaces. After the updates were applied, message processing returned to normal and queue latency cleared. This issue occurred because we did not have proper controls in place to ensure schema consistency for newly generated monthly persistence tables.

### **Mitigation Steps and Recommended Future Preventative Measures**

To resolve the issue, we manually applied the required schema updates globally. In the coming days, we will update our change management processes to ensure schema changes are correctly applied to all future monthly tables. We are also auditing our schema tracking and automating validation to prevent inconsistencies across environments. These improvements will ensure that future table generation includes all necessary columns and reduce the risk of similar issues impacting History service performance.
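For illustration, below is a minimal sketch of the kind of schema-consistency check the postmortem describes automating: it compares the columns of a newly generated monthly persistence table against an expected column set and flags any missing message-metadata columns before the table is put into service. The table name, column names, and the SQLite backend are assumptions made for this example only; they are not details of PubNub's storage systems.

```python
import sqlite3

# Hypothetical expected schema for a monthly persistence table (assumed
# column names; the real schema is not described in the postmortem).
EXPECTED_COLUMNS = {
    "channel",
    "timetoken",
    "payload",
    "meta",       # the message-metadata column whose absence caused failed writes
    "stored_at",
}


def table_columns(conn: sqlite3.Connection, table: str) -> set[str]:
    """Return the set of column names for a table (SQLite catalog shown;
    any engine's information schema would work the same way)."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return {row[1] for row in rows}  # row[1] is the column name


def validate_monthly_table(conn: sqlite3.Connection, table: str) -> list[str]:
    """Return the columns missing from a newly generated monthly table."""
    return sorted(EXPECTED_COLUMNS - table_columns(conn, table))


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    # Simulate a newly generated monthly table created without metadata columns.
    conn.execute(
        "CREATE TABLE history_2026_01 (channel TEXT, timetoken INTEGER, payload TEXT)"
    )
    missing = validate_monthly_table(conn, "history_2026_01")
    if missing:
        # In a real rollout pipeline this would block the table from taking
        # traffic or alert the on-call engineer instead of printing.
        print(f"history_2026_01 is missing columns: {missing}")
```

Run as part of the monthly table-generation job, a check like this would catch a missing metadata column before writes start failing and queues back up.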