Wasabi incident

System Errors in EU-CENTRAL-2


Wasabi experienced a minor incident on July 8, 2025, affecting EU-Central-2 (Frankfurt) and lasting 6h 21m. The incident has been resolved; the full update timeline is below.

Started: Jul 08, 2025, 12:28 PM UTC
Resolved: Jul 08, 2025, 06:50 PM UTC
Duration: 6h 21m
Detected by Pingoru: Jul 08, 2025, 12:28 PM UTC

Affected components

EU-Central-2 (Frankfurt)

Update timeline

  1. Investigating Jul 08, 2025, 12:28 PM UTC

    We are currently experiencing system errors in our EU-CENTRAL-2 (Frankfurt) region. Customers may experience elevated HTTP 5XX error responses when interacting with their Wasabi bucket(s). We will update this page as we have more information.

  2. Identified Jul 08, 2025, 01:10 PM UTC

    Our Operations Team is working to restore services in the region. Customers may continue to experience HTTP 5XX error responses when interacting with their Wasabi bucket(s) at this time.

  3. Identified Jul 08, 2025, 02:26 PM UTC

    We are continuing to work on restoring service in the region. Some customers may continue to experience HTTP 5XX error responses when interacting with their Wasabi bucket(s) at this time.

  4. Monitoring Jul 08, 2025, 05:10 PM UTC

    Our EU-CENTRAL-2 region is now fully operational, and we will continue to monitor the service. For any questions or concerns, please reach out to our Support Team at [email protected].

  5. Resolved Jul 08, 2025, 06:50 PM UTC

    This incident has been resolved.

  6. Postmortem Jul 15, 2025, 10:31 AM UTC

    On 08 July 2025 from 12:10 UTC to approximately 16:36 UTC, Wasabi S3 services in our Frankfurt (eu-central-2) region were unavailable due to a partial power issue in our datacenter. Wasabi’s Operations Team was alerted by our automated alert system that multiple racks were experiencing power issues; however, not all racks within the datacenter were affected, because the power issue did not impact every circuit in the building. While our Operations Team was bringing up all server hardware and running system checks and diagnostics, a second power issue at 16:18 UTC took all impacted racks offline again. By 16:36 UTC, our Operations Team had brought all impacted racks back to a fully operational state.
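For clients of the affected region, elevated HTTP 5XX responses like those described in the timeline are typically handled client-side with retries and exponential backoff. The sketch below is illustrative only, not Wasabi's or any SDK's actual retry implementation; the function and parameter names are our own:

```python
import random
import time

# Statuses treated as transient server-side failures (HTTP 5XX) worth retrying.
RETRYABLE_STATUSES = {500, 502, 503, 504}

def call_with_retries(request_fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Invoke request_fn() -> (status, body); retry retryable 5XX responses
    with exponential backoff plus jitter, up to max_attempts tries."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status not in RETRYABLE_STATUSES:
            return status, body
        if attempt < max_attempts - 1:
            # Backoff doubles each attempt; jitter avoids synchronized retries.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            sleep(delay)
    return status, body
```

In practice, S3-compatible SDKs such as boto3 ship similar logic behind their retry configuration, so most applications only need to verify that retries are enabled rather than hand-roll them.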