Nebula incident
Call Issues and Load Issues on Dashboards and Apps
Nebula experienced a minor incident on July 30, 2025, affecting Core Network and Dashboard and lasting 22h 2m. The incident has been resolved; the full update timeline is below.
Affected components
- Core Network
- Dashboard
Update timeline
- investigating Jul 30, 2025, 09:54 AM UTC
We are currently investigating reports of the call and load issues above on some accounts, and will provide further updates within 20 minutes. Inbound calls are unaffected.
- investigating Jul 30, 2025, 10:02 AM UTC
Our team has also identified reports of issues with receiving calls. We are working to identify the cause and will provide a further update within 20 minutes.
- monitoring Jul 30, 2025, 10:19 AM UTC
We have identified the issue and are working to restore all services. We are seeing traffic routing correctly on affected accounts, and further work is underway to restore service fully. Outbound calls have recovered and we will provide a further status update within 10 minutes.
- monitoring Jul 30, 2025, 10:29 AM UTC
We have seen services restored to normal across the affected part of our network. We will continue to monitor the situation and close the incident after a successful period of monitoring.
- resolved Jul 31, 2025, 08:05 AM UTC
After a period of successful monitoring, this incident has been resolved.
- postmortem Aug 01, 2025, 10:49 AM UTC
**Incident Analysis**

At 10:46 AM BST, we identified issues with outbound call authentication and navigation within online dashboards. After approximately 10 minutes, the issue expanded, affecting some inbound calls and the ability to log into softphone applications.

Our Engineering Team quickly identified the cause as a lock within an integral database that handles our API layer, triggered by an inefficient query. A database lock prevents new traffic and data from being written, causing a rapid backlog of processes such as placing and taking calls, presence, and application authentication. Inbound calls became affected later as a result of the delays across the overall network.

At approximately 11:05 AM BST, our team restarted the affected database and traffic resumed within minutes. However, due to the volume of queued messages, it took an additional 25 minutes for them all to be accepted into the database and cleared. As a result, service was gradually restored across all affected areas over this period.

**Next Steps**

Our team has since addressed this particular query, and all other queries that had been flagged have been suspended until they have been optimised in the same way. They will remain suspended until the fixes are implemented, within the next 5 working days.
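To make the failure mode concrete, the sketch below simulates a single long-held write lock backing up routine writes until the blocking transaction finishes. It is an illustration only: SQLite, the table name, and the timings are assumptions chosen for the example, not Nebula's actual API-layer database or query.

```python
# Minimal illustration of the failure mode described above: one transaction
# holding a database write lock forces every other writer to queue, and the
# backlog only drains once the lock is released.
# SQLite is used purely for the sketch; the incident involved an internal
# API-layer database whose engine is not named in this report.

import sqlite3
import threading
import time

DB = "lock_demo.db"  # hypothetical scratch database for the example


def setup() -> None:
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
    con.commit()
    con.close()


def inefficient_query(hold_seconds: float) -> None:
    """Stand-in for the unoptimised query: opens a write transaction and holds it."""
    con = sqlite3.connect(DB, isolation_level=None)  # manage the transaction explicitly
    con.execute("BEGIN IMMEDIATE")                   # acquires the database write lock
    con.execute("INSERT INTO events (payload) VALUES ('slow work')")
    time.sleep(hold_seconds)                         # simulates the long-running work
    con.execute("COMMIT")                            # lock is released only here
    con.close()


def routine_write(label: str) -> None:
    """Stand-in for normal traffic: call setup, presence, and authentication writes."""
    con = sqlite3.connect(DB, timeout=30)            # writers wait rather than fail
    start = time.time()
    con.execute("INSERT INTO events (payload) VALUES (?)", (label,))
    con.commit()
    con.close()
    print(f"{label} committed after waiting {time.time() - start:.1f}s")


if __name__ == "__main__":
    setup()
    blocker = threading.Thread(target=inefficient_query, args=(5.0,))
    blocker.start()
    time.sleep(0.5)  # ensure the lock is already held before routine traffic arrives
    # These writes represent the backlog: none of them complete until the
    # blocking transaction commits, then they clear in quick succession.
    for i in range(3):
        routine_write(f"queued-write-{i}")
    blocker.join()
```

In this toy version, ending the blocking transaction plays the role that restarting the affected database played in the real incident: once the lock is gone, the queued writes drain, which corresponds to the roughly 25 minutes it took for the backlog to clear.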