Exoscale Outage History

Exoscale is up right now

There were 6 Exoscale outages since March 31, 2026, totaling 13h 56m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://exoscalestatus.com

Minor · April 12, 2026

[DE-MUC-1] Elevated error rate for SOS

Detected by Pingoru
Apr 12, 2026, 09:08 AM UTC
Resolved
Apr 12, 2026, 10:30 AM UTC
Duration
1h 21m
Affected: DE-MUC-1 · Object Storage SOS
Timeline · 4 updates
  1. investigating Apr 12, 2026, 09:08 AM UTC

    We’re seeing degraded performance of our object storage stack in the entire DE-MUC-1 zone.

  2. investigating Apr 12, 2026, 09:47 AM UTC

    We’re currently applying a mitigation. We’ll update the incident as soon as we have new information.

  3. monitoring Apr 12, 2026, 09:54 AM UTC

    We’ve applied a mitigation; we’re seeing improvement across the entire zone.

  4. resolved Apr 12, 2026, 10:30 AM UTC

    Incident resolved.

Read the full incident report →

Minor · April 8, 2026

[Network] Increased internet network latencies and packet loss

Detected by Pingoru
Apr 08, 2026, 11:08 AM UTC
Resolved
Apr 08, 2026, 07:10 PM UTC
Duration
8h 1m
Affected: HR-ZAG-1 · Network Internet Transit Connectivity
Timeline · 4 updates
  1. investigating Apr 08, 2026, 11:08 AM UTC

    We are investigating increased internet network latencies and packet loss. We’ll post an update as soon as we have more information.

  2. monitoring Apr 08, 2026, 11:09 AM UTC

    One of our internet transit providers is experiencing routing issues. Traffic has been re-routed to alternate paths. We are monitoring the situation.

  3. monitoring Apr 08, 2026, 11:24 AM UTC

    Our transit provider has confirmed an outage on their end. The situation is stable while traffic is routed to alternate paths.

  4. resolved Apr 08, 2026, 07:10 PM UTC

    The issue with our transit provider has been resolved.

Read the full incident report →

Minor · April 7, 2026

ch-dk-2 connectivity issue

Detected by Pingoru
Apr 07, 2026, 12:45 PM UTC
Resolved
Apr 07, 2026, 03:00 PM UTC
Duration
2h 15m
Affected: CH-DK-2 · Block Storage · Managed Kubernetes SKS · Object Storage SOS
Timeline · 14 updates
  1. investigating Apr 07, 2026, 12:45 PM UTC

    We are currently experiencing an issue with SKS in ch-dk-2. We’re investigating the issue and will communicate when we have more information.

  2. investigating Apr 07, 2026, 12:53 PM UTC

    The issue is being escalated to a partial outage.

  3. investigating Apr 07, 2026, 12:56 PM UTC

    We are continuing to investigate the root cause.

  4. investigating Apr 07, 2026, 01:07 PM UTC

    The issue may be related to some underlying network issues. We are still investigating.

  5. investigating Apr 07, 2026, 01:14 PM UTC

    The issue seems to be related to a partial IPv6 connectivity issue. We are still investigating.

  6. investigating Apr 07, 2026, 01:16 PM UTC

    As a side effect of the underlying connectivity issue, the impact of the incident has been extended to the following services: SOS and Block Storage.

  7. investigating Apr 07, 2026, 01:21 PM UTC

    Some SKS clusters have their API fully unavailable. We are still investigating.

  8. investigating Apr 07, 2026, 01:36 PM UTC

    We are still investigating the origin of the IPv6 network issue.

  9. investigating Apr 07, 2026, 01:49 PM UTC

    We are still investigating the origin of the IPv6 network issue. During the investigation, some brief connection resets may be experienced.

  10. investigating Apr 07, 2026, 02:16 PM UTC

    We are applying a set of mitigations, which is improving the current situation.

  11. monitoring Apr 07, 2026, 02:30 PM UTC

    Mitigation has been applied. Affected services are converging. We are monitoring the recovery.

  12. monitoring Apr 07, 2026, 02:36 PM UTC

    All services are available again. We are monitoring the situation.

  13. monitoring Apr 07, 2026, 02:46 PM UTC

    Services are nominal. We are continuing to monitor the situation.

  14. resolved Apr 07, 2026, 03:00 PM UTC

    The incident has been resolved. The exact root cause remains to be identified at this stage. The issue mostly affected IPv6 connectivity on a subset of hypervisor hosts. While we are still evaluating the exact impact of this incident, the following services have been affected:
      • SKS control planes: some SKS control plane backends were hosted on the affected hosts, which resulted in downtime for a subset of SKS control planes.
      • SOS: an increased number of 500 errors. The issue was fully mitigated by 16:15 CET.
      • Block Storage: a brief connection drop, which may have resulted in I/O errors being returned to a subset of volumes. As a result, some of the affected volumes may have been switched to read-only mode by the instance kernel; in that situation a manual remount is required to bring the affected volumes back into write mode.
    Some IPv4/IPv6 connection resets may have been experienced on instances while the mitigation was being applied.

Read the full incident report →
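
The Block Storage note in the resolution above mentions that volumes switched to read-only by the instance kernel need a manual remount to restore write access. Below is a minimal sketch of how that check-and-remount could be done from inside a Linux instance; it is illustrative only, the mount point is a hypothetical example, and depending on the filesystem you may want to run a filesystem check before remounting.

    #!/usr/bin/env python3
    """Check whether a mount point was flipped to read-only and remount it read-write.

    Illustrative sketch: MOUNT_POINT is a hypothetical example, and the remount
    requires root privileges on the instance.
    """
    import subprocess

    MOUNT_POINT = "/mnt/data"  # hypothetical mount point of the affected volume

    def is_read_only(mount_point: str) -> bool:
        """Return True if mount_point is currently mounted with the 'ro' option."""
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _device, mnt, _fstype, options, *_ = line.split()
                if mnt == mount_point:
                    return "ro" in options.split(",")
        raise RuntimeError(f"{mount_point} not found in /proc/mounts")

    if __name__ == "__main__":
        if is_read_only(MOUNT_POINT):
            # Remount read-write; equivalent to `mount -o remount,rw /mnt/data`.
            subprocess.run(["mount", "-o", "remount,rw", MOUNT_POINT], check=True)
            print(f"Remounted {MOUNT_POINT} read-write")
        else:
            print(f"{MOUNT_POINT} is already writable")

Note that the kernel may refuse to remount read-write if the underlying I/O errors persist, so checking the kernel log and the volume's state in the Exoscale console first is a sensible precaution.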

Minor · April 6, 2026

Increased Error Rates and Latencies

Detected by Pingoru
Apr 06, 2026, 12:49 PM UTC
Resolved
Apr 06, 2026, 02:04 PM UTC
Duration
1h 14m
Affected: CH-DK-2 · Network Load Balancer NLB
Timeline · 3 updates
  1. investigating Apr 06, 2026, 12:49 PM UTC

    We’re experiencing an increased error rate on some NLBs in CH-DK-2.

  2. monitoring Apr 06, 2026, 01:17 PM UTC

    We have applied a mitigation and we’re no longer seeing dropped traffic. We’ll keep monitoring the situation and update accordingly.

  3. resolved Apr 06, 2026, 02:04 PM UTC

    Issue has been resolved.

Read the full incident report →

Minor · April 1, 2026

Increased Latency on Object Storage

Detected by Pingoru
Apr 01, 2026, 08:52 AM UTC
Resolved
Apr 01, 2026, 09:19 AM UTC
Duration
27m
Affected: CH-GVA-2 · Object Storage SOS
Timeline · 2 updates
  1. investigating Apr 01, 2026, 08:52 AM UTC

    We are experiencing increased latency on Object Storage in ch-gva-2.

  2. resolved Apr 01, 2026, 09:19 AM UTC

    The issue seems to be resolved; latency is back to normal.

Read the full incident report →

Minor · March 31, 2026

[NLB] Increased Error Rates and Latencies

Detected by Pingoru
Mar 31, 2026, 04:54 PM UTC
Resolved
Mar 31, 2026, 05:30 PM UTC
Duration
35m
Affected: DE-FRA-1 · Network Load Balancer NLB
Timeline · 5 updates
  1. investigating Mar 31, 2026, 04:54 PM UTC

    We’re experiencing an increased error rate on some NLBs in DE-FRA-1.

  2. monitoring Mar 31, 2026, 05:12 PM UTC

    We have applied a mitigation and are currently monitoring the situation.

  3. investigating Mar 31, 2026, 05:18 PM UTC

    We are still experiencing an increased error rate and continue to investigate.

  4. monitoring Mar 31, 2026, 05:25 PM UTC

    Further mitigation has been applied; we are monitoring the situation, which seems to be back to normal.

  5. resolved Mar 31, 2026, 05:30 PM UTC

    Issue has been resolved.

Read the full incident report →

Looking to track Exoscale downtime and outages?

Pingoru polls Exoscale's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Exoscale reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Exoscale alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Exoscale for free

5 free monitors · No credit card required