Affected components
SKS, SOS, Block Storage (ch-dk-2)
Update timeline
- investigating Apr 07, 2026, 12:45 PM UTC
We are currently experiencing an issue with SKS in ch-dk-2. We are investigating and will communicate when we have more information.
- investigating Apr 07, 2026, 12:53 PM UTC
The issue has been escalated to a partial outage.
- investigating Apr 07, 2026, 12:56 PM UTC
We are continuing to investigate the root cause.
- investigating Apr 07, 2026, 01:07 PM UTC
The issue may be related to underlying network issues. We are still investigating.
- investigating Apr 07, 2026, 01:14 PM UTC
The issue appears to be related to a partial IPv6 connectivity problem. We are still investigating.
- investigating Apr 07, 2026, 01:16 PM UTC
The impact of the incident has been extended to the following services: SOS and Block Storage, as a side effect of the underlying connectivity issue.
- investigating Apr 07, 2026, 01:21 PM UTC
The API of some SKS clusters is fully unavailable. We are still investigating.
- investigating Apr 07, 2026, 01:36 PM UTC
We are still investigating the origin of the IPv6 network issue.
- investigating Apr 07, 2026, 01:49 PM UTC
We are still investigating the origin of the IPv6 network issue. During the investigation, brief connection resets may be experienced.
- investigating Apr 07, 2026, 02:16 PM UTC
We are applying a set of mitigations, which are improving the situation.
- monitoring Apr 07, 2026, 02:30 PM UTC
Mitigations have been applied and affected services are converging. We are monitoring the recovery.
- monitoring Apr 07, 2026, 02:36 PM UTC
All services are available again. We are monitoring the situation.
- monitoring Apr 07, 2026, 02:46 PM UTC
Services are nominal. We continue to monitor the situation.
- resolved Apr 07, 2026, 03:00 PM UTC
The incident has been resolved. The exact root cause remains to be identified at this stage. The issue mostly affected the IPv6 connectivity of a subset of hypervisor hosts. While we are still evaluating the exact impact of this incident, the following services have been affected:
- SKS control planes: some SKS control plane backends were hosted on the affected hosts, resulting in downtime for a subset of SKS control planes.
- SOS: experienced an increased number of 500 errors. The issue was fully mitigated by 16:15 CET.
- Block Storage: experienced a brief connection drop, which may have resulted in I/O errors being returned to a subset of volumes. As a result, some of the affected volumes may have been switched to read-only mode by the instance kernel. In that case, a manual remount is required to bring the affected volumes back into write mode.
Some IPv4/IPv6 connection resets may have been experienced on instances while we were applying the mitigation.
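For Block Storage volumes that the instance kernel switched to read-only, a remount restores write access. A minimal sketch for a Linux instance, assuming a hypothetical device /dev/vdb mounted at /mnt/data (substitute your volume's actual device and mount point):

```sh
# Check whether the kernel remounted the volume read-only
# (look for "ro" in the mount options).
mount | grep /dev/vdb

# Remount the filesystem in read-write mode.
# /dev/vdb and /mnt/data are placeholders for your setup.
sudo mount -o remount,rw /dev/vdb /mnt/data

# If the remount fails, unmount and check the filesystem for
# errors left by the interrupted I/O before mounting again.
sudo umount /mnt/data
sudo fsck -y /dev/vdb
sudo mount /dev/vdb /mnt/data
```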