Linode Outage History

Linode is up right now

There have been 30 Linode outages since February 4, 2026, totaling 26h 13m of downtime. Each incident is summarized below with its details, duration, and resolution information.

Source: https://status.linode.com

Notice February 18, 2026

Connectivity Issue - FR-PAR (Paris), FR-PAR-2 (Paris 2) and ES-MAD (Madrid) Data Centers

Detected by Pingoru
Feb 18, 2026, 03:08 PM UTC
Resolved
Feb 18, 2026, 03:08 PM UTC
Duration
Affected: FR-PAR (Paris), ES-MAD (Madrid), FR-PAR-2 (Paris 2)
Timeline · 2 updates
  1. resolved Feb 18, 2026, 03:08 PM UTC

    We became aware of an issue that affected connectivity in our FR-PAR (Paris), FR-PAR-2 (Paris 2), and ES-MAD (Madrid) data centers. The issue lasted between 21:25 UTC and 22:50 UTC on February 17, 2026, and between 11:44 UTC and 12:16 UTC on February 18, 2026. During this time, users may have experienced intermittent connection timeouts and errors for all services deployed in those data centers. We can confirm that the issue has been resolved and service has resumed normal operations. If you continue to experience problems, please open a Support ticket for assistance.

  2. postmortem Feb 25, 2026, 06:35 PM UTC

    On February 17th, 2026, between 21:25 UTC and 22:50 UTC, and on February 18th, 2026, between 11:44 UTC and 12:16 UTC, we experienced connectivity issues in our FR-PAR (Paris), FR-PAR-2 (Paris 2), and ES-MAD (Madrid) data centers. During the impact windows, customers may have experienced intermittent connection timeouts and errors for all services deployed in those data centers.

    Our investigation identified the cause as an unstable interface link between the Washington and Paris data centers. To resolve the issue, we removed the affected interface from service at 12:16 UTC on February 18th, 2026.

    To help prevent similar issues in the future, Akamai will review and enhance our monitoring and procedures.

    We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence.

    This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.


Minor February 17, 2026

Emerging Service Issue - Network Connectivity - EU-West (London)

Detected by Pingoru
Feb 17, 2026, 01:59 PM UTC
Resolved
Feb 18, 2026, 12:26 PM UTC
Duration
22h 26m
Affected: EU-West (London)
Timeline · 6 updates
  1. investigating Feb 17, 2026, 01:59 PM UTC

    Our team is investigating an emerging service issue affecting Network Connectivity in EU-West (London). We will share additional updates as we have more information.

  2. identified Feb 17, 2026, 04:13 PM UTC

    We've identified the cause of network performance degradation in EU-West (London) and are working to mitigate the impact.

  3. monitoring Feb 17, 2026, 05:45 PM UTC

    At this time we have been able to correct the issues affecting connectivity in our EU-West (London) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.

  4. identified Feb 17, 2026, 06:43 PM UTC

    We have found continued network performance degradation in EU-West (London) affecting traffic and are working to mitigate the impact.

  5. monitoring Feb 17, 2026, 10:58 PM UTC

    We have been able to mitigate an additional issue that was affecting a subset of users' connectivity in our EU-West (London) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.

  6. resolved Feb 18, 2026, 12:26 PM UTC

    We haven’t observed any additional connectivity issues in our EU-West (London) data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.


Notice February 12, 2026

Service Issue - Block Storage - SE-STO (Stockholm)

Detected by Pingoru
Feb 12, 2026, 01:26 PM UTC
Resolved
Feb 12, 2026, 03:19 PM UTC
Duration
1h 52m
Affected: SE-STO (Stockholm) Block Storage
Timeline · 3 updates
  1. monitoring Feb 12, 2026, 01:26 PM UTC

    Our team is aware of an issue that affected the Block Storage service in Stockholm between 07:40 and 10:52 UTC on February 12, 2026. During this time, users may have experienced stuck operations on attached volumes. At this time we have been able to correct the issues affecting the Block Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.

  2. resolved Feb 12, 2026, 03:19 PM UTC

    We haven’t observed any additional issues with the Block Storage service in SE-STO (Stockholm), and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.

  3. postmortem Feb 25, 2026, 11:39 AM UTC

    Between 07:40 and 10:52 UTC on February 12, 2026, some customers experienced issues accessing their Block Storage volumes in the Stockholm region. During this period, storage operations stalled, resulting in a temporary denial of service for affected users. Monitoring detected a complete drop in storage throughput, and impacted customers were unable to access data on their volumes during the impact window.

    The incident was traced to a gap in the execution sequence during a recent storage system upgrade, which resulted in the storage environment entering a degraded state. The scenario was documented; however, it was not clearly incorporated into the upgrade workflow, as it had not been encountered in previous upgrades. To mitigate the impact, Akamai initiated recovery actions and restored normal service. In response, the upgrade process has been updated to include clearer guidance for handling similar situations.

    This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.


Notice February 12, 2026

Service Issue - Longview

Detected by Pingoru
Feb 12, 2026, 02:29 AM UTC
Resolved
Feb 12, 2026, 04:23 AM UTC
Duration
1h 54m
Affected: Longview
Timeline · 3 updates
  1. monitoring Feb 12, 2026, 02:29 AM UTC

    Starting around 11:04 UTC on February 10, 2026, the Longview graph dashboard became unavailable. The investigation revealed that an internal certificate expiry caused the issue. The impact was limited to reading existing reporting data, and there was no permanent reporting data loss. The affected certificate was rotated to mitigate the impact; the impact was mitigated at 23:44 UTC on February 10, 2026. We will continue to monitor to ensure that the impact has been fully mitigated.

  2. resolved Feb 12, 2026, 04:23 AM UTC

    We haven’t observed any additional issues with the Longview service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.

  3. postmortem Feb 16, 2026, 01:52 AM UTC

    Starting around 11:04 UTC on February 10, 2026, some customers were unable to access the Longview graph dashboard. Longview is a system data graphing service that tracks metrics for CPU, memory, and network bandwidth on both an aggregate and per-process basis. It also provides real-time graphs that can help expose performance problems (more details about Longview are available [here](https://techdocs.akamai.com/cloud-computing/docs/longview)).

    The investigation revealed that the issue was caused by an internal certificate expiry for the [longview.linode.com](http://longview.linode.com) hostname. The impact was limited to reading existing reporting data; write operations remained unaffected, and there was no permanent reporting data loss. Akamai's internal automated tool rotated the expiring certificate before it expired; however, the internal certificate tool, which was supposed to auto-reload the system, did not reload the Longview servers after the previous certificate was rotated. The issue started when the previous certificate expired.

    To mitigate the impact, we manually reloaded the Longview servers for the new certificate to take effect. The impact was mitigated at 23:44 UTC on February 10, 2026, following this action. We will continue to investigate the root cause and will take appropriate preventive actions.

    We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
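
Expired-certificate outages like this one are typically caught ahead of time by an external expiry check that is independent of the rotation tooling. As a rough illustration only (not part of the incident report), here is a minimal Python standard-library sketch; the `longview.linode.com` hostname is taken from the report above, and the alert threshold is an arbitrary example value:

```python
import socket
import ssl
import time

def fetch_not_after(host: str, port: int = 443) -> str:
    """Fetch the 'notAfter' field of the certificate actually served by host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

def days_remaining(not_after: str) -> float:
    """Days until a certificate 'notAfter' timestamp, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

# Example (makes a live TLS connection):
#   left = days_remaining(fetch_not_after("longview.linode.com"))
#   if left < 14:  # 14-day threshold is an arbitrary example
#       print(f"warning: certificate expires in {left:.1f} days")
```

Checking the certificate the server actually serves (rather than the one on disk) would also have caught this incident's failure mode, where a rotated certificate never took effect because the servers were not reloaded.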


Notice February 4, 2026

Service Issue - Object Storage - gb-lon (London)

Detected by Pingoru
Feb 04, 2026, 02:41 PM UTC
Resolved
Feb 04, 2026, 02:41 PM UTC
Duration
Timeline · 2 updates
  1. resolved Feb 04, 2026, 03:35 PM UTC

    Starting at 14:43 UTC on February 4, 2026, users in our London (gb-lon) region may have encountered 500 errors when accessing object storage. We identified the issue quickly and resolved it by 15:19 UTC the same day. We apologize for any inconvenience this may have caused. If you are still experiencing issues, please open a Support ticket for assistance.

  2. postmortem Feb 09, 2026, 07:30 PM UTC

    On February 4, 2026, starting at 14:43 UTC, users accessing [gb-lon-1.linodeobjects.com](http://gb-lon-1.linodeobjects.com) experienced increased error rates due to an issue with one of the storage backends supporting the London Object Storage service. During this period, success rates dropped to between 93% and 95%. Our team identified that one of the six storage backends was experiencing problems, which affected service availability. The affected components recovered on their own, and service success rates returned to normal by 15:19 UTC. We are continuing to investigate the underlying cause of the backend issue and will implement additional safeguards to help prevent similar incidents. Thank you for your patience as we work to enhance service reliability. This summary reflects our current understanding of the incident. Our investigation is ongoing, and details may be updated as more information becomes available.


Looking to track Linode downtime and outages?

Pingoru polls Linode's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
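
Linode's status page is hosted on Atlassian Statuspage, which exposes a machine-readable JSON endpoint. Assuming the standard `/api/v2/status.json` path (an assumption about the hosted-page API, not something stated on this page), a 5-minute polling check like the one described above can be sketched in Python as:

```python
import json
import time
import urllib.request

# Assumed standard Statuspage endpoint for Linode's status page.
STATUS_URL = "https://status.linode.com/api/v2/status.json"

def parse_status(payload: bytes) -> dict:
    """Extract the page-level status; 'indicator' is none/minor/major/critical."""
    return json.loads(payload)["status"]

def fetch_status(url: str = STATUS_URL) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_status(resp.read())

def poll(interval_seconds: int = 300) -> None:
    """Poll every 5 minutes and print a line whenever the indicator changes."""
    last = None
    while True:
        status = fetch_status()
        if status["indicator"] != last:
            print(f"{status['indicator']}: {status['description']}")
            last = status["indicator"]
        time.sleep(interval_seconds)
```

A production monitor would add retries, component-level checks (`/api/v2/components.json` on Statuspage-hosted sites), and real notification delivery instead of `print`.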

  • Real-time alerts when Linode reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Linode alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Linode for free

5 free monitors · No credit card required