Linode incident

Service Issue - Object Storage API

Major · Resolved
Started
Mar 07, 2026, 12:02 PM UTC
Resolved
Mar 07, 2026, 09:08 PM UTC
Duration
9h 5m
Detected by Pingoru
Mar 07, 2026, 12:02 PM UTC

Affected components

US-East (Newark) Object Storage, Linode Manager and API, US-Southeast (Atlanta) Object Storage, US-IAD (Washington) Object Storage, US-ORD (Chicago) Object Storage, EU-Central (Frankfurt) Object Storage, AP-South (Singapore) Object Storage, FR-PAR (Paris) Object Storage, SE-STO (Stockholm) Object Storage, US-SEA (Seattle) Object Storage, JP-OSA (Osaka) Object Storage, IN-MAA (Chennai) Object Storage, ID-CGK (Jakarta) Object Storage, BR-GRU (Sao Paulo) Object Storage, ES-MAD (Madrid) Object Storage, GB-LON (London 2), AU-MEL (Melbourne), NL-AMS (Amsterdam) Object Storage, IT-MIL (Milan) Object Storage, US-MIA (Miami) Object Storage, US-LAX (Los Angeles) Object Storage, GB-LON (London 2) Object Storage, AU-MEL (Melbourne) Object Storage, IN-BOM-2 (Mumbai 2) Object Storage, DE-FRA-2 (Frankfurt 2) Object Storage, SG-SIN-2 (Singapore 2) Object Storage, JP-TYO-3 (Tokyo 3) Object Storage

Update timeline

  1. investigating Mar 07, 2026, 12:02 PM UTC

    This issue is impacting Object Storage access globally. During this time, customers may encounter issues with managing buckets, access keys, or Object Storage policies. Our team is continuing to investigate.

  2. investigating Mar 07, 2026, 12:34 PM UTC

    We are continuing to investigate this issue.

  3. investigating Mar 07, 2026, 01:35 PM UTC

    We are actively investigating an issue affecting the Object Storage service. Users may experience connection timeouts and errors when accessing this service. We will provide updates as we learn more and work toward a resolution.
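Client applications facing transient timeouts like the ones described above can often ride out an incident with retries and exponential backoff rather than failing immediately. A minimal, generic sketch (not Linode-specific; the `flaky` operation below is a stand-in for any Object Storage API call):

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a zero-argument callable with exponential backoff and jitter.

    `operation` raises on failure, e.g. a wrapper around an Object
    Storage API request that can hit a connection timeout.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise
            # Wait base_delay * 2^(attempt-1), plus a little jitter so
            # many clients don't retry in lockstep during an outage.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)

# Illustrative stand-in: fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated Object Storage timeout")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
```

Backoff only smooths over brief blips; during a multi-hour incident like this one, surfacing the failure after the final attempt is the right behavior.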

  4. investigating Mar 07, 2026, 02:31 PM UTC

    We are continuing to investigate this issue. Thank you for your patience as we work toward a resolution.

  5. investigating Mar 07, 2026, 03:31 PM UTC

    Our team is continuing to investigate the Object Storage API issue, which affects all Object Storage regions. This issue is limited to interacting with the Object Storage service, such as managing buckets, access keys, or Object Storage policies. The underlying Object Storage service itself remains operational. We appreciate your patience and will provide further updates as soon as possible.

  6. identified Mar 07, 2026, 04:08 PM UTC

    Our team has identified the issue affecting the Object Storage service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.

  7. identified Mar 07, 2026, 05:07 PM UTC

    We are still working to implement the fix for the Object Storage service issue. We will share another update as soon as progress is made.

  8. monitoring Mar 07, 2026, 06:14 PM UTC

    At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.

  9. resolved Mar 07, 2026, 09:08 PM UTC

    We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.

  10. postmortem Mar 13, 2026, 11:59 AM UTC

    On March 7, 2026, at approximately 10:05 UTC, the Object Storage API (OSA) service experienced an elevated rate of errors in one of the production clusters. During this period, customers may have experienced intermittent issues while performing operations such as loading or managing buckets, accessing keys, updating Object Storage policies, or making modifications to Object Storage resources.

    The issue was traced to increased pressure on one of the underlying clusters following a recent configuration update. After identifying the contributing factor, the change was rolled back and service configurations were adjusted to restore normal system behavior. The issue was mitigated at 18:15 UTC on March 7, 2026.

    Following the mitigation, system performance indicators improved and service stability was restored. Verification checks confirmed that request success rates returned to normal levels, system queues drained significantly, latency stabilized, and overall platform traffic appeared healthy.

    We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements that strengthen our systems and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.

Looking to track Linode downtime and outages?

Pingoru polls Linode's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Linode reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Linode alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
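Polling like this typically means fetching a provider's public status feed on a schedule and flagging anything unresolved. A hedged sketch of the parsing step, using field names from the common Statuspage-style summary JSON (assumed, not taken from Pingoru's internals) applied to an in-memory sample rather than a live request:

```python
def active_incidents(summary):
    """Extract unresolved incidents from a Statuspage-style summary dict.

    The field names (incidents, name, impact, status) follow the widely
    used Statuspage JSON layout; other status-page formats differ.
    """
    return [
        (inc["name"], inc["impact"])
        for inc in summary.get("incidents", [])
        if inc.get("status") != "resolved"
    ]

# Sample payload shaped like the incident above, mid-lifecycle.
sample = {
    "incidents": [
        {"name": "Service Issue - Object Storage API",
         "impact": "major",
         "status": "monitoring"},
        {"name": "Completed maintenance",
         "impact": "none",
         "status": "resolved"},
    ]
}

open_now = active_incidents(sample)
```

A monitor would run this against each fetched payload and fire notifications only when the set of open incidents changes between polls.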