- Detected by Pingoru
- Apr 29, 2026, 09:22 AM UTC
- Resolved
- Apr 29, 2026, 07:19 PM UTC
- Duration
- 9h 57m
Affected: US-East (Newark) Object Storage, US-Southeast (Atlanta) Object Storage, US-IAD (Washington) Object Storage, US-ORD (Chicago) Object Storage, EU-Central (Frankfurt) Object Storage, AP-South (Singapore) Object Storage, FR-PAR (Paris) Object Storage, SE-STO (Stockholm) Object Storage, US-SEA (Seattle) Object Storage, JP-OSA (Osaka) Object Storage, IN-MAA (Chennai) Object Storage, ID-CGK (Jakarta) Object Storage, BR-GRU (Sao Paulo) Object Storage, ES-MAD (Madrid) Object Storage, GB-LON (London 2), AU-MEL (Melbourne), NL-AMS (Amsterdam) Object Storage, IT-MIL (Milan) Object Storage, US-MIA (Miami) Object Storage, US-LAX (Los Angeles) Object Storage, GB-LON (London 2) Object Storage, AU-MEL (Melbourne) Object Storage, IN-BOM-2 (Mumbai 2) Object Storage, DE-FRA-2 (Frankfurt 2) Object Storage, SG-SIN-2 (Singapore 2) Object Storage, JP-TYO-3 (Tokyo 3) Object Storage
Timeline · 6 updates
-
investigating Apr 29, 2026, 09:22 AM UTC
We are aware that some customers have been experiencing 403 (InvalidAccessKeyId) errors when attempting to access object storage, beginning yesterday evening. We will share additional updates as we have more information.
-
investigating Apr 29, 2026, 10:30 AM UTC
Our team is investigating an issue affecting the Object Storage service. During this time, users may experience connection timeouts and errors with this service.
-
investigating Apr 29, 2026, 11:42 AM UTC
We are continuing to investigate this issue.
-
investigating Apr 29, 2026, 12:41 PM UTC
We are continuing to investigate this issue.
-
resolved Apr 29, 2026, 07:19 PM UTC
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Apr 30, 2026, 01:05 PM UTC
Between approximately 09:15 UTC on April 28, 2026, and 16:54 UTC on April 29, 2026, customers could have experienced issues with their Object Storage key pairs, including key creation, key validation, and Object Storage access with their keys. The impact spanned multiple data centers. Our investigation established that the issue arose from a combination of factors: a background process failure that cascaded from one cluster to others, a reconciler appliance that was not running due to missing metrics, and abnormal system load that led to call failures. To resolve the issue, we restarted the reconciler, and pending key operations were processed for customers. To help prevent similar issues in the future, Akamai will address the cascading failure scenario, add reconciler metrics to prevent this type of outage, and enhance queue management to improve reconciler performance. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
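The missing-metrics failure mode described above, where the reconciler stopped without anything noticing, is the kind of gap a simple liveness signal closes. Below is a minimal illustrative sketch in Python; the metric names and the injected `reconcile_pending_keys` and `emit_metric` hooks are hypothetical, not Akamai's actual implementation.

```python
import time

HEARTBEAT_STALE_AFTER = 120.0  # seconds without a heartbeat before alerting

def reconciler_loop(reconcile_pending_keys, emit_metric) -> None:
    # Emitting a heartbeat on every pass lets an external alert fire when
    # the loop stops running, instead of the reconciler failing silently.
    while True:
        processed = reconcile_pending_keys()
        emit_metric("reconciler.heartbeat", time.time())
        emit_metric("reconciler.keys_processed", processed)
        time.sleep(5)

def heartbeat_is_stale(last_heartbeat: float) -> bool:
    # Evaluated by a separate monitor against the last emitted heartbeat.
    return time.time() - last_heartbeat > HEARTBEAT_STALE_AFTER
```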
Read the full incident report →
- Detected by Pingoru
- Apr 24, 2026, 07:27 PM UTC
- Resolved
- Apr 28, 2026, 03:16 PM UTC
- Duration
- 3d 19h
Affected: US-East (Newark) NodeBalancers, US-East (Newark) Linode Kubernetes Engine, US-Central (Dallas) NodeBalancers, US-Central (Dallas) Linode Kubernetes Engine, US-West (Fremont) NodeBalancers, US-West (Fremont) Linode Kubernetes Engine, US-Southeast (Atlanta) NodeBalancers, US-Southeast (Atlanta) Linode Kubernetes Engine, US-IAD (Washington) NodeBalancers, US-IAD (Washington) Linode Kubernetes Engine, US-ORD (Chicago) NodeBalancers, US-ORD (Chicago) Linode Kubernetes Engine, CA-Central (Toronto) NodeBalancers, CA-Central (Toronto) Linode Kubernetes Engine, EU-West (London) NodeBalancers, EU-West (London) Linode Kubernetes Engine, EU-Central (Frankfurt) NodeBalancers, EU-Central (Frankfurt) Linode Kubernetes Engine, FR-PAR (Paris) NodeBalancers, FR-PAR (Paris) Linode Kubernetes Engine, AP-South (Singapore) NodeBalancers, AP-South (Singapore) Linode Kubernetes Engine, AP-Northeast-2 (Tokyo 2) NodeBalancers, AP-Northeast (Tokyo 2) Linode Kubernetes Engine, AP-West (Mumbai) NodeBalancers, AP-West (Mumbai) Linode Kubernetes Engine, AP-Southeast (Sydney) NodeBalancers, AP-Southeast (Sydney) Linode Kubernetes Engine, SE-STO (Stockholm) NodeBalancers, SE-STO (Stockholm) Linode Kubernetes Engine, US-SEA (Seattle) NodeBalancers, US-SEA (Seattle) Linode Kubernetes Engine, JP-OSA (Osaka) NodeBalancers, JP-OSA (Osaka) Linode Kubernetes Engine, IN-MAA (Chennai) NodeBalancers, IN-MAA (Chennai) Linode Kubernetes Engine, ID-CGK (Jakarta) Linode Kubernetes Engine, BR-GRU (São Paulo) NodeBalancers, NL-AMS (Amsterdam) NodeBalancers, BR-GRU (São Paulo) Linode Kubernetes Engine, ES-MAD (Madrid) NodeBalancers, NL-AMS (Amsterdam) Linode Kubernetes Engine, IT-MIL (Milan) NodeBalancers, ES-MAD (Madrid) Linode Kubernetes Engine, US-MIA (Miami) NodeBalancers, IT-MIL (Milan) Linode Kubernetes Engine, ID-CGK (Jakarta) NodeBalancers, US-MIA (Miami) Linode Kubernetes Engine, US-LAX (Los Angeles) NodeBalancers, US-LAX (Los Angeles) Linode Kubernetes Engine, GB-LON (London 2) NodeBalancers, GB-LON (London 2) Linode Kubernetes Engine, AU-MEL (Melbourne) NodeBalancers, AU-MEL (Melbourne) Linode Kubernetes Engine, IN-BOM-2 (Mumbai 2) NodeBalancers, IN-BOM-2 (Mumbai 2) Linode Kubernetes Engine, DE-FRA-2 (Frankfurt 2) NodeBalancers, DE-FRA-2 (Frankfurt 2) Linode Kubernetes Engine, SG-SIN-2 (Singapore 2) NodeBalancers, SG-SIN-2 (Singapore 2) Linode Kubernetes Engine, JP-TYO-3 (Tokyo 3) NodeBalancers, JP-TYO-3 (Tokyo 3) Linode Kubernetes Engine
Timeline · 9 updates
-
investigating Apr 24, 2026, 07:27 PM UTC
Our team is investigating an emerging service issue affecting the Linode Kubernetes Engine (LKE) service across multiple regions. We will share additional updates as we have more information.
-
identified Apr 24, 2026, 08:19 PM UTC
Our team has identified the issue affecting the LKE service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Apr 24, 2026, 08:48 PM UTC
Our team has identified that, due to this issue, customers may also experience issues creating new NodeBalancers. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Apr 24, 2026, 09:46 PM UTC
Our team has identified the issue affecting LKE and newly created or updated NodeBalancer configurations. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Apr 24, 2026, 10:07 PM UTC
We are continuing work to apply the identified fix for the issue affecting LKE and newly created or updated NodeBalancer configurations, and we will provide an update as soon as the solution is in place.
-
identified Apr 24, 2026, 11:06 PM UTC
We are continuing work to apply the identified fix for the issue affecting LKE and newly created or updated NodeBalancer configurations, and we will provide an update as soon as the solution is in place.
-
monitoring Apr 24, 2026, 11:53 PM UTC
We were able to correct the issues affecting the NodeBalancer service at 23:36 UTC on April 24, 2026. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a ticket with our Support Team.
-
resolved Apr 28, 2026, 03:16 PM UTC
We haven’t observed any additional issues with the LKE and NodeBalancer services and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Apr 30, 2026, 07:42 PM UTC
On April 24, 2026, between 17:57 UTC and 23:36 UTC, NodeBalancer infrastructure experienced a service degradation caused by NodeBalancer configuration identifiers exceeding a programmed limit. This issue degraded functionality for newly created and updated NodeBalancers and rendered Linode Kubernetes Engine (LKE) clusters inaccessible via the UI. Any autoscaling activity, NodeBalancer creation, or modification (such as adding or removing nodes) triggered the generation of configuration identifiers beyond the supported threshold, resulting in further degradation. The impact extended to downstream services such as LKE. Akamai identified the root cause and deployed a fix across all data centers by 23:36 UTC on April 24, 2026. To prevent this issue from recurring, Akamai will investigate, document, and update related NodeBalancer behaviors where applicable. Other software load balancer (SLB) and NodeBalancer-related programmatic limits will also be investigated. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
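The postmortem attributes the degradation to configuration identifiers exceeding a programmed limit. As an illustration only, here is a minimal sketch of an allocator guard that alerts before an identifier space is exhausted and fails closed at the hard limit; the limit value, alert hook, and allocator shape are hypothetical, not Akamai's code.

```python
MAX_CONFIG_ID = 2**31 - 1   # hypothetical programmed limit
ALERT_HEADROOM = 0.10       # alert when <10% of the ID space remains

class ConfigIdExhausted(RuntimeError):
    pass

def next_config_id(current: int, alert) -> int:
    nxt = current + 1
    if nxt > MAX_CONFIG_ID:
        # Fail closed rather than emitting an out-of-range identifier.
        raise ConfigIdExhausted(f"config id {nxt} exceeds {MAX_CONFIG_ID}")
    if MAX_CONFIG_ID - nxt < MAX_CONFIG_ID * ALERT_HEADROOM:
        # Page before exhaustion instead of discovering it in an outage.
        alert(f"config id space nearly exhausted: {nxt}/{MAX_CONFIG_ID}")
    return nxt
```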
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 11:49 AM UTC
- Resolved
- Apr 21, 2026, 01:08 AM UTC
- Duration
- 13h 19m
Affected: EU-Central (Frankfurt), DE-FRA-2 (Frankfurt 2)
Timeline · 5 updates
-
investigating Apr 20, 2026, 11:49 AM UTC
Our team is investigating an emerging service issue impacting connectivity in Frankfurt (eu-central). We will share additional updates as we have more information.
-
investigating Apr 20, 2026, 12:25 PM UTC
We are continuing to investigate this issue.
-
monitoring Apr 20, 2026, 01:57 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 21, 2026, 01:08 AM UTC
This incident has been resolved.
-
postmortem Apr 21, 2026, 02:11 PM UTC
On April 20, 2026, between approximately 11:05 UTC and 13:10 UTC, the Frankfurt (DE-FRA-2) data center experienced degraded connectivity and performance issues for Compute services, including Linode and Object Storage. During this time, customers in the Frankfurt region may have experienced packet loss or connectivity issues. Initially, we suspected ongoing power maintenance at the data center to be the cause; however, this was later determined not to be a contributing factor. Further investigation revealed the connectivity issues were affecting certain compute hosts within the site, as well as causing host job completion issues site-wide. Akamai identified incorrect route advertisements coming from a host within the data center, which contributed to the disruption. The affected host was identified and isolated, which mitigated the issue and restored service stability. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 01:20 PM UTC
- Resolved
- Apr 17, 2026, 01:30 PM UTC
- Duration
- 1d
Affected: US-East (Newark) Linode Kubernetes Engine, US-Central (Dallas) Linode Kubernetes Engine, US-West (Fremont) Linode Kubernetes Engine, US-Southeast (Atlanta) Linode Kubernetes Engine, US-IAD (Washington) Linode Kubernetes Engine, US-ORD (Chicago) Linode Kubernetes Engine, CA-Central (Toronto) Linode Kubernetes Engine, EU-West (London) Linode Kubernetes Engine, EU-Central (Frankfurt) Linode Kubernetes Engine, FR-PAR (Paris) Linode Kubernetes Engine, AP-South (Singapore) Linode Kubernetes Engine, AP-Northeast (Tokyo 2) Linode Kubernetes Engine, AP-West (Mumbai) Linode Kubernetes Engine, AP-Southeast (Sydney) Linode Kubernetes Engine, SE-STO (Stockholm) Linode Kubernetes Engine, US-SEA (Seattle) Linode Kubernetes Engine, JP-OSA (Osaka) Linode Kubernetes Engine, IN-MAA (Chennai) Linode Kubernetes Engine, ID-CGK (Jakarta) Linode Kubernetes Engine, BR-GRU (São Paulo) Linode Kubernetes Engine, NL-AMS (Amsterdam) Linode Kubernetes Engine, ES-MAD (Madrid) Linode Kubernetes Engine, IT-MIL (Milan) Linode Kubernetes Engine, US-MIA (Miami) Linode Kubernetes Engine, US-LAX (Los Angeles) Linode Kubernetes Engine, GB-LON (London 2) Linode Kubernetes Engine, AU-MEL (Melbourne) Linode Kubernetes Engine, IN-BOM-2 (Mumbai 2) Linode Kubernetes Engine, DE-FRA-2 (Frankfurt 2) Linode Kubernetes Engine, SG-SIN-2 (Singapore 2) Linode Kubernetes Engine, JP-TYO-3 (Tokyo 3) Linode Kubernetes Engine
Timeline · 9 updates
-
investigating Apr 16, 2026, 01:20 PM UTC
Our team is investigating an emerging service issue affecting Linode Kubernetes Engine Standard and Enterprise in all regions. We will share additional updates as we have more information.
-
investigating Apr 16, 2026, 02:12 PM UTC
Our team continues to investigate the issue with the Linode Kubernetes Engine. Affected customers may be unable to deploy new clusters or add new nodes to a cluster. We will share additional updates as we have more information. Customers can find more information on the Akamai Community at https://community.akamai.com/customers/s/feed/0D5a7000013vgM6CAI
-
investigating Apr 16, 2026, 02:41 PM UTC
Our team continues to investigate the issue affecting the Linode Kubernetes Engine (LKE). We will share additional updates as we have more information. Customers can find more information on the Akamai Community at https://community.akamai.com/customers/s/feed/0D5a7000013vgM6CAI
-
identified Apr 16, 2026, 03:10 PM UTC
Our team has identified the issue affecting the LKE service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place. Customers can find more information on the Akamai Community at https://community.akamai.com/customers/s/feed/0D5a7000013vgM6CAI
-
identified Apr 16, 2026, 03:40 PM UTC
Our team continues to work on implementing a fix for the issue affecting the Linode Kubernetes Engine. We will continue to provide updates until a solution is in place. Customers can find more information on the Akamai Community at https://community.akamai.com/customers/s/feed/0D5a7000013vgM6CAI
-
identified Apr 16, 2026, 04:09 PM UTC
As of 15:53 UTC on April 16, 2026, we are starting to see nodes provision successfully; however, the service is not fully restored yet. Some customers may remain impacted at this time. We are working to stabilize the service, and will provide another update within the next 30 minutes. Customers can find more information on the Akamai Community at https://community.akamai.com/customers/s/feed/0D5a7000013vgM6CAI
-
monitoring Apr 16, 2026, 04:53 PM UTC
As of 16:27 UTC on April 16, 2026 we have corrected the issues affecting the LKE service. The issue was caused by an expired certificate, which has been updated. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
resolved Apr 17, 2026, 01:30 PM UTC
We haven’t observed any additional issues with the LKE and LKE-E services, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Apr 21, 2026, 02:24 AM UTC
Starting at 11:45 UTC on April 16, 2026, customers using Linode Kubernetes Engine (LKE) and Linode Kubernetes Engine Enterprise (LKE-E) began to experience issues deploying LKE and LKE-E clusters and nodes. The affected customers were unable to deploy new nodes, which also prevented them from recycling nodes and autoscaling clusters. The investigation revealed that a credential expiry caused the authentication failures. Our subject matter experts (SMEs) created new credentials and deployed them to mitigate the impact. The issue was mitigated at around 16:27 UTC on April 16, 2026, following the credential rotation. We are continuing to investigate the root cause of what led to this credential expiry and will take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
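Since the root cause was an expired credential, one generic safeguard is to monitor expiry dates and alert well in advance. A minimal sketch follows, assuming the credential is surfaced as a TLS certificate on a reachable endpoint; the host and threshold are placeholders, and this is illustrative rather than Akamai's tooling.

```python
import socket
import ssl
import time

WARN_BEFORE_DAYS = 30  # placeholder alerting threshold

def days_until_cert_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl.cert_time_to_seconds parses the certificate's 'notAfter' field.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

if __name__ == "__main__":
    remaining = days_until_cert_expiry("example.com")  # placeholder host
    if remaining < WARN_BEFORE_DAYS:
        print(f"WARNING: certificate expires in {remaining:.1f} days")
```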
Read the full incident report →
- Detected by Pingoru
- Apr 15, 2026, 04:57 PM UTC
- Resolved
- Apr 15, 2026, 08:21 PM UTC
- Duration
- 3h 23m
Affected: Linode Manager and API
Timeline · 5 updates
-
identified Apr 15, 2026, 04:57 PM UTC
Our team has identified the issue affecting the Cloud Manager and API. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place. During this time, customers may be unable to create new Linodes, make changes to Cloud Firewalls, initiate migrations or resize operations. LKE autoscaling may also be delayed during this time.
-
identified Apr 15, 2026, 06:28 PM UTC
We are continuing to implement a fix, and we will provide an update as soon as the solution is in place.
-
monitoring Apr 15, 2026, 06:35 PM UTC
At this time we have been able to correct the issue affecting the Cloud Manager and API. We will be monitoring this to ensure that the service remains stable. If you are still experiencing issues and unable to open a Support ticket, please call us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected].
-
resolved Apr 15, 2026, 08:21 PM UTC
We haven't observed any additional issues with the Cloud Manager or API, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
-
postmortem Apr 17, 2026, 07:41 PM UTC
On April 15, 2026, at 16:07 UTC, we identified an issue affecting the Linode platform, where multiple hosts began experiencing Linode VM guest orchestrator service failures. This issue disrupted new Linode jobs, Linode Kubernetes Engine (LKE) scaling, and related operations. Customers may have noticed slow or incomplete processing of compute-related tasks, including delays or failures when resizing Linodes or clusters, deploying new resources, updating Cloud Firewall rules, or making other changes through the Cloud Manager or API. These symptoms resulted in sluggish or partially completed operations during the issue timeframe.

The initial investigation traced the root cause to a problematic configuration file pertaining to the Linode VM guest orchestrator service. In response, we engaged the relevant subject matter experts to place temporary restrictive measures on fleet management services, preventing further spread of the issue. The team began reverting the problematic configuration change and submitting code reversions to restore the correct configuration file. Between approximately 17:00 and 18:07 UTC, we worked to proactively fix impacted hosts and restore job processing.

At 18:07 UTC, mitigation efforts were complete, and all hosts alerting for failed statuses had been addressed. The reverted configuration changes were pushed out to the remaining compute hosts, and change propagation was verified across each data center. Additional host state tests confirmed that all necessary updates took effect. At approximately 19:40 UTC, we resumed all normal internal operations and systems. Host status monitoring continued to ensure ongoing stability. At 20:21 UTC, no further issues were identified following monitoring, and the issue was considered resolved. Work now shifts to identifying long-term corrective and preventive actions; this remains a work in progress. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Apr 14, 2026, 06:33 PM UTC
- Resolved
- Apr 16, 2026, 07:32 PM UTC
- Duration
- 2d
Affected: US-Southeast (Atlanta), US-IAD (Washington), US-ORD (Chicago), US-SEA (Seattle), JP-OSA (Osaka), NL-AMS (Amsterdam), ES-MAD (Madrid), GB-LON (London 2)
Timeline · 5 updates
-
investigating Apr 14, 2026, 06:33 PM UTC
Our team is investigating a service issue that affects the Dedicated CPU offering. Some users may experience failed Linode boots, errors related to insufficient resource allocation, unsuccessful migrations of Dedicated CPU plans or scaling of LKE clusters, and difficulty deploying or scaling workloads that require Dedicated CPU resources. We will share additional updates as we have more information.
-
identified Apr 14, 2026, 08:45 PM UTC
Our team has identified the issue affecting the Dedicated CPU offering. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Apr 15, 2026, 10:40 PM UTC
We have identified a fix for the issue affecting the Dedicated CPU offering and are currently testing it. We expect to include the fix in the next software release, planned by the end of May. Our investigation shows that approximately 0.2% of hosts were impacted by this issue. In the meantime, a workaround is available by contacting Support. We will provide additional updates as we approach the deployment date.
-
resolved Apr 16, 2026, 07:32 PM UTC
We haven't observed any additional issues with the Dedicated CPU offering, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
-
postmortem Apr 17, 2026, 05:03 PM UTC
On March 9, 2026, at 20:02 UTC, we identified an issue impacting Dedicated CPU plan Linodes, resulting in boot failures and resource allocation errors across multiple data centers. The problem was first reported by a customer who was unable to boot a newly provisioned Dedicated CPU Linode. We began investigating the issue on March 10, 2026, and escalated it internally on March 17, 2026, after receiving additional reports. By April 14, 2026, the incident had affected 35 Dedicated CPU plan hosts across eight data centers: Amsterdam, London, Madrid, Atlanta, Chicago, Seattle, Washington, D.C., and Osaka. The issue was traced to a known software defect that caused validation failures during the boot process, leading to insufficient resource errors and preventing fallback allocations. Subject matter experts identified a missing configuration file as the root cause of the Dedicated Linode plan issues. To address this, we implemented a workaround script and began developing a long-term platform fix. We also introduced proactive alerting to detect similar issues in the future and updated our runbook with clear mitigation steps. Temporary monitoring will remain in place until the permanent fix is deployed. The permanent platform fix is scheduled for release by the end of May as part of our regular software update cycle. With the workaround script, runbook updates, and proactive alerting in place, we mitigated the broader customer impact by 14:42 UTC on April 16, 2026. If customers experience similar issues before the May platform release, we encourage them to contact Akamai Compute Support for assistance. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
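The proactive alerting mentioned above maps to a simple check: verify the presence of the configuration file whose absence caused the boot failures. A minimal sketch with a hypothetical path and alert hook; not Akamai's actual workaround script.

```python
from pathlib import Path

# Hypothetical path to the config file whose absence caused the failures.
REQUIRED_CONFIG = Path("/etc/hypervisor/dedicated-cpu.conf")

def check_required_config(alert) -> bool:
    # Run periodically on each host; page before the next boot attempt
    # fails rather than after a customer reports it.
    if not REQUIRED_CONFIG.exists():
        alert(f"missing required config: {REQUIRED_CONFIG}")
        return False
    return True
```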
Read the full incident report →
- Detected by Pingoru
- Apr 09, 2026, 11:32 PM UTC
- Resolved
- Apr 10, 2026, 02:31 AM UTC
- Duration
- 2h 58m
Affected: US-IAD (Washington)
Timeline · 5 updates
-
investigating Apr 09, 2026, 11:32 PM UTC
Our team is investigating an emerging service issue affecting networking in US-IAD (Washington). We will share additional updates as we have more information.
-
investigating Apr 10, 2026, 12:27 AM UTC
Our team continues to investigate the issue affecting networking in the US-IAD (Washington) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates within the next 60 minutes.
-
investigating Apr 10, 2026, 01:04 AM UTC
We are continuing to investigate the issue. We will provide the next update as we make progress.
-
resolved Apr 10, 2026, 02:31 AM UTC
We have successfully resolved the networking issues in the US-IAD (Washington) data center. The cause was identified as a specific compute host that generated abnormal network traffic upon being powered on, which impacted the local management network and a subset of customer connectivity. Our engineering team has isolated the hardware responsible, and network stability was restored immediately. All management services are fully operational, and the backlog of pending host jobs has been completely processed. We will continue to monitor the region to ensure ongoing stability. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
-
postmortem Apr 13, 2026, 04:51 AM UTC
Starting around 22:55 UTC on April 9, 2026, customers were unable to access services hosted in the US-IAD (Washington) data center. The investigation revealed that the issue was caused by network connectivity problems, which resulted in connection timeouts and/or errors for services deployed in this data center. Analysis confirmed that the disruption was triggered by a single compute host that was incorrectly configured and generated abnormal network traffic upon being powered on. To mitigate the impact, we isolated the host, and network stability was restored following this action. The impact was mitigated at 02:01 UTC on April 10, 2026. All management services are fully operational, and the backlog of pending host jobs has been completely processed. We will continue to investigate the root cause and will take appropriate preventive measures. We apologize for the impact and thank you for your patience and continued support. If you have further questions, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Apr 07, 2026, 09:02 AM UTC
- Resolved
- Apr 08, 2026, 06:32 AM UTC
- Duration
- 21h 30m
Affected: GB-LON (London 2)
Timeline · 3 updates
-
investigating Apr 07, 2026, 09:02 AM UTC
Our team is investigating an emerging service issue affecting connectivity to our Linode infrastructure in the London 2 (gb-lon) region, likely caused by an upstream AS. We will continue to share more details as we verify this.
-
monitoring Apr 07, 2026, 10:17 AM UTC
We've determined that the impact of the issue caused by an upstream AS has been mitigated. We will continue to monitor the situation until we receive further confirmation of the resolution.
-
resolved Apr 08, 2026, 06:32 AM UTC
We haven’t observed any additional connectivity issues and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Read the full incident report →
- Detected by Pingoru
- Apr 07, 2026, 12:50 AM UTC
- Resolved
- Apr 07, 2026, 01:39 AM UTC
- Duration
- 48m
Affected: US-East (Newark) Object Storage
Timeline · 3 updates
-
investigating Apr 07, 2026, 12:50 AM UTC
Our team is investigating an emerging service issue affecting Object Storage in us-east-1. We will share additional updates as we have more information.
-
identified Apr 07, 2026, 01:13 AM UTC
The issue has been identified and a fix is being implemented.
-
resolved Apr 07, 2026, 01:39 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 05:59 PM UTC
- Resolved
- Mar 24, 2026, 09:25 PM UTC
- Duration
- 3h 25m
Affected: Linode Manager and API, Managed Databases
Timeline · 2 updates
-
investigating Mar 24, 2026, 05:59 PM UTC
Our team is investigating an emerging service issue affecting the deployment of new compute instances where the swap disk size is set to 0. This is also impacting our Managed Databases service. We will share additional updates as we have more information.
-
resolved Mar 24, 2026, 09:25 PM UTC
A fix was implemented at 19:20 UTC. After monitoring the results, this incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 20, 2026, 11:29 AM UTC
- Resolved
- Mar 24, 2026, 01:46 PM UTC
- Duration
- 4d 2h
Affected: ID-CGK (Jakarta), AU-MEL (Melbourne) Linode Kubernetes Engine, SG-SIN-2 (Singapore 2) Linode Kubernetes Engine
Timeline · 11 updates
-
investigating Mar 20, 2026, 11:29 AM UTC
Our team has identified the issue affecting connectivity in our Jakarta data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
investigating Mar 20, 2026, 12:11 PM UTC
Our team has identified the issue affecting connectivity in our Jakarta, ID data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Mar 20, 2026, 12:57 PM UTC
We are continuing to implement a fix for the connectivity issue in our Jakarta, ID data center.
-
identified Mar 20, 2026, 01:31 PM UTC
We are still working to resolve the connectivity issue at our Jakarta, ID data center. While most services are recovering, some compute services in Jakarta may still be affected. Due to backend dependencies, there may also be impacts to the nearby Singapore Expansion, SP (sg-sin-2) and Melbourne, AU (au-mel) data centers. Linode Kubernetes Engine (LKE) in these regions could experience disruptions because of a dependency with Jakarta, ID (id-cgk). We appreciate your patience and will provide further updates as we make progress.
-
identified Mar 20, 2026, 02:23 PM UTC
We are continuing to address the connectivity issue at our Jakarta, ID (id-cgk) data center. Most services have recovered, but some compute services in Jakarta remain affected. Due to backend dependencies, customers in the Singapore Expansion, SP (sg-sin-2) and Melbourne, AU (au-mel) data centers may also experience disruptions, particularly with Linode Kubernetes Engine (LKE) deployments. Our teams are actively working to resolve these issues and restore full service. We appreciate your patience and will share further updates as progress is made.
-
identified Mar 20, 2026, 03:13 PM UTC
Our teams are continuing to mitigate the connectivity issue in Jakarta, ID and are working to restore full service. We appreciate your patience as we resolve the remaining disruptions.
-
identified Mar 20, 2026, 04:07 PM UTC
We are making steady progress toward full recovery of the connectivity issue at our Jakarta, ID (id-cgk) data center. Most services in Jakarta are now operational. However, some Linodes and Linode Kubernetes Engine (LKE) clusters may still be affected, especially if nodes are provisioned or attempt to provision on impacted Linodes. The dependency issues that previously affected Singapore Expansion, SP (sg-sin-2) and Melbourne, AU (au-mel) have been resolved. We continue to work on restoring all services and appreciate your patience as we address the remaining issues. Further updates will be provided as progress continues.
-
identified Mar 20, 2026, 04:58 PM UTC
We are continuing to work on resolving the connectivity issue in our Jakarta, ID (id-cgk) data center. Thank you for your patience as we restore full service.
-
monitoring Mar 20, 2026, 05:28 PM UTC
At this time we have been able to correct the issues affecting connectivity in our Jakarta, ID data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
-
resolved Mar 24, 2026, 01:46 PM UTC
We haven’t observed any additional connectivity issues in our Jakarta, ID (id-cgk) data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 25, 2026, 03:48 PM UTC
On March 20, 2026, at approximately 10:40 UTC, the Jakarta, ID (id-cgk) data center experienced power fluctuations that caused outages across all services in Jakarta, with downstream impacts to services in the Singapore Expansion, SP (sg-sin-2), and Melbourne, AU (au-mel) regions. The facility switched to backup power, which triggered a simultaneous reboot of all data center network equipment and machines. Although switching to generator power should not have caused disruption, unexpected issues with the uninterruptible power supplies (UPS) led to outages.

The data center's facilities engineering team identified that a transient under-voltage condition (voltage flicker) occurred on the utility power sources of our third-party data center. During this event, two of the data center's UPS systems did not respond as designed, resulting in a power outage for the affected racks. The local data center team engaged a manual bypass of the affected UPS systems to generator power. Once utility power stabilized, the site transitioned off generator power and is now working with the UPS system vendor on further analysis.

Content Delivery services experienced some performance degradation but recovered by 11:10 UTC. By 12:00 UTC, physical compute hosts were gradually becoming operational, but many Linode instances remained unavailable, resulting in service interruptions for workloads hosted in Jakarta. Ongoing reliance on backup power led to intermittent performance issues until full restoration. Customers in the Singapore Expansion, SP (sg-sin-2), and Melbourne, AU (au-mel) regions reported issues with Linode Kubernetes Engine (LKE) and Object Storage deployments. Investigation revealed that Object Storage and LKE were impacted in the cgk1, sin2, and mel1 regions, affecting the deployment of new clusters, nodes, and node pools.

The incident was initially traced to under-voltage on the data center's power sources. Akamai teams prioritized restoring data center networking, followed by Akamai products and services operating in Jakarta, ID. The data center returned to grid power at approximately 13:20 UTC. All services were restored, and the incident transitioned to monitoring at 17:28 UTC. Teams closely monitored all products and services in the affected data center and assessed the likelihood of another power event. With no issues identified, the incident was considered fully resolved at 18:27 UTC.

Efforts are ongoing to address dependencies between services and data centers to improve resilience and prevent recurrences. The data center is also working with their UPS system vendor to further analyze the UPS response during the event. We are committed to making continuous improvements to our systems to prevent similar incidents in the future. We apologize for the impact and thank you for your patience and continued support. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 01:04 PM UTC
- Resolved
- Mar 17, 2026, 02:22 PM UTC
- Duration
- 1h 17m
Affected: US-ORD (Chicago) Linode Kubernetes Engine
Timeline · 5 updates
-
investigating Mar 17, 2026, 01:04 PM UTC
Our team is investigating an issue affecting the Linode Kubernetes Engine Enterprise (LKE-E) service. We will share additional updates as we have more information.
-
identified Mar 17, 2026, 01:14 PM UTC
Our team has identified the issue affecting the LKE-E service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
monitoring Mar 17, 2026, 01:27 PM UTC
At this time we have been able to correct the issues affecting the LKE-E service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
resolved Mar 17, 2026, 02:22 PM UTC
We haven’t observed any additional issues with the LKE-E service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 17, 2026, 06:43 PM UTC
Between approximately 01:00 and 14:12 UTC on March 17, 2026, customers were unable to deploy new Linode Kubernetes Engine Enterprise (LKE-E) clusters in the Chicago (us-ord) region. This was caused by exceeding the allowable number of DNS records in the DNS zone used for provisioning, which prevented the creation of new records required for cluster deployment. Existing LKE-E clusters and standard LKE clusters in the region continued to operate normally. After receiving the first monitoring alert at 12:00 UTC, we investigated and identified the underlying issue. We found that DNS records associated with deleted LKE-E clusters were not being properly cleaned up, which led to a gradual buildup of unused records. This accumulation eventually reached the limit for the DNS zone, preventing new records from being created and blocking new cluster deployments. Between 13:00 and 13:20 UTC, we deleted approximately 500 obsolete domain records, which relieved the record limit and allowed new clusters to provision successfully. We restored impacted clusters to a healthy state and confirmed that deployments were functioning as expected. After a brief period to monitor these fixes, the incident was considered fully mitigated at 14:12 UTC the same day. We are continuing to clean up additional obsolete domain records and estimate that about 11,000 records will qualify for deletion based on the number of active clusters. To help prevent similar incidents and ensure reliable cluster provisioning in the future, we are enhancing our record cleanup mechanisms and monitoring. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
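The cleanup described above amounts to diffing zone records against live clusters and deleting the leftovers. A minimal sketch under assumed conventions; the record-naming scheme and the dns_client calls in the usage comment are hypothetical placeholders, not Akamai's provisioning system.

```python
from typing import Iterable

def cluster_id_from_record(record_name: str) -> str | None:
    # Assume provisioning records look like "<cluster-id>.lke.example.com".
    parts = record_name.split(".")
    return parts[0] if parts else None

def find_orphaned_records(zone_records: Iterable[str],
                          active_cluster_ids: set[str]) -> list[str]:
    # A record is orphaned when its cluster ID no longer exists.
    orphans = []
    for record in zone_records:
        cid = cluster_id_from_record(record)
        if cid is not None and cid not in active_cluster_ids:
            orphans.append(record)
    return orphans

# Usage sketch: records = dns_client.list_records(zone);
# ids = api.active_cluster_ids()
# for record in find_orphaned_records(records, ids): dns_client.delete(record)
```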
Read the full incident report →
- Detected by Pingoru
- Mar 13, 2026, 05:58 PM UTC
- Resolved
- Mar 13, 2026, 07:55 PM UTC
- Duration
- 1h 56m
Affected: US-IAD (Washington)
Timeline · 6 updates
-
investigating Mar 13, 2026, 05:58 PM UTC
Our team is investigating an emerging service issue affecting networking in our Washington, DC location (US-IAD). We will share additional updates as we have more information.
-
investigating Mar 13, 2026, 06:42 PM UTC
We are continuing to investigate the issue. We will share additional updates as we have more information.
-
investigating Mar 13, 2026, 06:59 PM UTC
Our team has identified the issue affecting connectivity in our Washington data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Mar 13, 2026, 07:02 PM UTC
Our team has identified the issue affecting connectivity in our Washington data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
resolved Mar 13, 2026, 07:55 PM UTC
We haven’t observed any additional connectivity issues in our Washington data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 17, 2026, 08:45 PM UTC
On March 13, 2026, between approximately 17:35 UTC and 18:54 UTC, our Washington (US-IAD) data center experienced connectivity issues. During this time, customers may have noticed disruptions across all services hosted in this data center. Our investigation revealed that three servers previously used for testing were released back to inventory without the drives being wiped. As a result, one of these servers was sent to the warehouse and later returned to the Washington data center. This led to a host broadcasting an invalid path, which caused the service disruption. We promptly isolated the affected host, and service began to improve around 18:54 UTC. After thorough monitoring and system checks, we confirmed that the issue was fully resolved by 19:01 UTC. To help prevent similar issues in the future, we will conduct a detailed investigation to determine why this host was incorrectly configured and why our network propagated the invalid path. Based on our findings, we will implement corrective measures to prevent this type of misconfiguration from recurring. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 13, 2026, 12:45 AM UTC
- Resolved
- Mar 13, 2026, 02:41 AM UTC
- Duration
- 1h 55m
Affected: JP-OSA (Osaka) Linode Kubernetes Engine
Timeline · 5 updates
-
investigating Mar 13, 2026, 12:45 AM UTC
Our team is investigating an emerging service issue affecting the deployment of new LKE Enterprise clusters in JP-OSA (Osaka). We will share additional updates as we have more information.
-
investigating Mar 13, 2026, 01:31 AM UTC
We are continuing to investigate this issue.
-
monitoring Mar 13, 2026, 01:53 AM UTC
At this time we have been able to correct the issues affecting the LKE service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
resolved Mar 13, 2026, 02:41 AM UTC
We haven’t observed any additional issues with the LKE service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 13, 2026, 02:59 AM UTC
Starting around 19:15 UTC on March 12, 2026, customers were unable to create Linode Kubernetes Engine Enterprise (LKE-E) clusters and unable to perform administrative tasks, such as LKE-E version upgrades and Control Plane ACL changes, in the Osaka data center (JP-OSA). The creation process stalled indefinitely when attempting to deploy LKE-E clusters in this data center. Akamai's investigation revealed that the issue started following a phased rollout of the LKE software release in the Osaka data center. Akamai deployed a software fix to mitigate the impact. The impact was mitigated around 01:55 UTC on March 13, 2026. The clusters that were created during the impacted window resumed their creation. Our subject matter experts will continue to investigate the root cause and will take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 09, 2026, 06:39 PM UTC
- Resolved
- Mar 11, 2026, 07:29 PM UTC
- Duration
- 2d
Affected: US-IAD (Washington)
Timeline · 5 updates
-
investigating Mar 09, 2026, 06:39 PM UTC
Our team is investigating an emerging service issue affecting compute hosts in IAD (Washington, DC). We will share additional updates as we have more information.
-
investigating Mar 09, 2026, 07:46 PM UTC
Our team has identified the issue affecting connectivity in our IAD (Washington, DC) data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
monitoring Mar 09, 2026, 09:40 PM UTC
At this time we have been able to correct the issues affecting connectivity in our IAD (Washington, DC) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
-
resolved Mar 11, 2026, 07:29 PM UTC
We haven’t observed any additional connectivity issues in our IAD (Washington, DC) data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 11, 2026, 07:29 PM UTC
Beginning at 18:09 UTC on March 9, 2026, approximately 25% of Compute hosts in our IAD3 (Washington, DC) data center experienced degraded network connectivity due to the application of an updated routing configuration intended to improve network performance. This updated router configuration was extensively tested and had been running in production in another region for several weeks without any issues. An investigation revealed that application of this change inadvertently caused loss of connectivity to these hosts due to a network configuration specific to the IAD3 data center. This resulted in loss of access and control of Linodes and clusters hosted on those machines until the issue was mitigated. Mitigation steps took longer than expected due to the specific networking nature of the issue and degradation of internal service visibility to impacted devices, which required additional planning to effectively apply a rollback to the prior configuration. We were able to successfully roll back the routing configuration and mitigate impact at 21:16 UTC on March 9, 2026. To prevent a recurrence of this issue, we are developing a new routing configuration deployment plan that avoids the identified failure modes and others inferred from our observations. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 09, 2026, 04:04 AM UTC
- Resolved
- Mar 09, 2026, 03:18 PM UTC
- Duration
- 11h 13m
Affected: Linode Manager and API
Timeline · 5 updates
-
investigating Mar 09, 2026, 04:04 AM UTC
Our team is investigating an emerging issue where some customers are receiving delayed event notifications or notification emails for activities performed on Cloud Manager. We will share additional updates as we have more information.
-
investigating Mar 09, 2026, 05:48 AM UTC
We are continuing to investigate this issue. The appropriate subject matter experts are engaged. Subsequent updates will be posted as progress is made.
-
investigating Mar 09, 2026, 06:31 AM UTC
We are continuing to investigate this issue. As a workaround, customers can log in to https://cloud.linode.com/ and manually check their notifications until the issue is mitigated. Subsequent updates will be posted as progress is made.
-
resolved Mar 09, 2026, 03:18 PM UTC
Our team has identified that the issue affecting the Cloud Manager and API is related to a previously communicated incident from March 4, 2026 (https://status.linode.com/incidents/yzlp8ykymmhm). Today's issue was caused by the backlog of notifications that accumulated due to the previous incident, and it has cleared now. We haven't observed any additional issues with the Cloud Manager or API, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
-
postmortem Mar 13, 2026, 09:10 PM UTC
On March 3, 2026, at approximately 20:00 UTC, a cron job responsible for email notifications from Cloud Manager began failing due to timeouts. As the job exceeded the request timeout interval, it repeatedly failed and restarted, causing the process to resume from the beginning of its queue each time. This resulted in customers receiving multiple duplicate email notifications and some delay in those notifications. This issue only affected event-based notification emails related to host jobs such as VM shutdowns, startups, deletions, or deployments. Other notification emails set up by the customer, such as CPU usage thresholds, and the host jobs themselves remained unaffected. To address the issue, the cron job was temporarily stopped, associated services were restarted, and a processing limit was introduced to control the number of records handled per run. These actions stabilized the system, and the incident was considered mitigated at approximately 19:00 UTC on March 4, 2026. After mitigation of the root cause, a backlog of approximately 2 million pending events had to be processed, which caused further delays in notification delivery over the following days for some customers. The system worked through this backlog and gradually returned to normal processing. As of March 10, 2026, the backlog has been cleared and notification emails are being delivered as expected. Since this incident, we have tested and implemented additional cron job optimizations to ensure consistent performance. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
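The two mitigations described, a per-run processing limit and restart behavior that does not re-send from the top of the queue, can be sketched as a checkpointed batch worker. The store, fetch_events, and send_email interfaces below are hypothetical placeholders, not the actual Cloud Manager job.

```python
BATCH_LIMIT = 1000  # max records processed per cron invocation

def run_notification_job(store, fetch_events, send_email) -> int:
    # Resume from the last durable checkpoint instead of the queue head,
    # so a timed-out or crashed run cannot duplicate sent notifications.
    last_id = store.get("checkpoint", 0)
    processed = 0
    for event in fetch_events(after_id=last_id, limit=BATCH_LIMIT):
        send_email(event)
        # Persist progress per event; the limit bounds each run's length
        # so it finishes well inside the request timeout interval.
        store.set("checkpoint", event.id)
        processed += 1
    return processed
```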
Read the full incident report →
- Detected by Pingoru
- Mar 07, 2026, 12:02 PM UTC
- Resolved
- Mar 07, 2026, 09:08 PM UTC
- Duration
- 9h 5m
Affected: US-East (Newark) Object Storage, Linode Manager and API, US-Southeast (Atlanta) Object Storage, US-IAD (Washington) Object Storage, US-ORD (Chicago) Object Storage, EU-Central (Frankfurt) Object Storage, AP-South (Singapore) Object Storage, FR-PAR (Paris) Object Storage, SE-STO (Stockholm) Object Storage, US-SEA (Seattle) Object Storage, JP-OSA (Osaka) Object Storage, IN-MAA (Chennai) Object Storage, ID-CGK (Jakarta) Object Storage, BR-GRU (Sao Paulo) Object Storage, ES-MAD (Madrid) Object Storage, GB-LON (London 2), AU-MEL (Melbourne), NL-AMS (Amsterdam) Object Storage, IT-MIL (Milan) Object Storage, US-MIA (Miami) Object Storage, US-LAX (Los Angeles) Object Storage, GB-LON (London 2) Object Storage, AU-MEL (Melbourne) Object Storage, IN-BOM-2 (Mumbai 2) Object Storage, DE-FRA-2 (Frankfurt 2) Object Storage, SG-SIN-2 (Singapore 2) Object Storage, JP-TYO-3 (Tokyo 3) Object Storage
Timeline · 10 updates
-
investigating Mar 07, 2026, 12:02 PM UTC
This issue is impacting Object Storage access globally. During this time customers may encounter issues with managing buckets, access keys, or Object Storage policies. Our team is continuing to investigate.
-
investigating Mar 07, 2026, 12:34 PM UTC
We are continuing to investigate this issue.
-
investigating Mar 07, 2026, 01:35 PM UTC
We are actively investigating an issue affecting the Object Storage service. Users may experience connection timeouts and errors when accessing this service. We will provide updates as we learn more and work toward a resolution.
-
investigating Mar 07, 2026, 02:31 PM UTC
We are continuing to investigate this issue. Thank you for your patience as we work toward a resolution.
-
investigating Mar 07, 2026, 03:31 PM UTC
Our team is continuing to investigate the Object Storage API issue, which affects all Object Storage regions. This issue is limited to interacting with the Object Storage service, such as managing buckets, access keys, or Object Storage policies. The underlying Object Storage service itself remains operational. We appreciate your patience and will provide further updates as soon as possible.
-
identified Mar 07, 2026, 04:08 PM UTC
Our team has identified the issue affecting the Object Storage service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Mar 07, 2026, 05:07 PM UTC
We are still working to implement the fix for the Object Storage service issue. We will share another update as soon as progress is made.
-
monitoring Mar 07, 2026, 06:14 PM UTC
At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
resolved Mar 07, 2026, 09:08 PM UTC
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 13, 2026, 11:59 AM UTC
On March 7, 2026, at approximately 10:05 UTC, the Object Storage API (OSA) service experienced an elevated rate of errors in one of the production clusters. During this period, customers may have experienced intermittent issues while performing operations such as loading or managing buckets, accessing keys, updating Object Storage policies, or making modifications to Object Storage resources. The issue was traced to increased pressure on one of the underlying clusters following a recent configuration update. After identifying the contributing factor, the change was rolled back and service configurations were adjusted to restore normal system behavior. The issue was mitigated at 18:15 UTC on March 7, 2026. Following the mitigation, system performance indicators improved and service stability was restored. Verification checks confirmed that request success rates returned to normal levels, system queues reduced significantly, latency stabilized, and overall platform traffic appeared healthy. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 05, 2026, 11:34 AM UTC
- Resolved
- Mar 05, 2026, 09:00 AM UTC
- Duration
- —
Timeline · 2 updates
-
resolved Mar 05, 2026, 11:34 AM UTC
We haven’t observed any additional connectivity issues in our Frankfurt data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 05, 2026, 09:32 PM UTC
On March 5, 2026, between approximately 08:45 and 10:00 UTC, we experienced connectivity issues in our Frankfurt data center. During the impact window, customers may have experienced intermittent connection timeouts and errors for services deployed there. Our investigation established that during planned maintenance in the data center, router cables were connected incorrectly due to human error. To resolve the issue, we corrected the data center cabling. To help prevent similar issues in the future, Akamai will reinforce maintenance procedures by ensuring technicians clearly understand the scope of work and adhere to documented maintenance instructions and verification steps during planned maintenance. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Mar 05, 2026, 02:07 AM UTC
- Resolved
- Mar 05, 2026, 06:11 PM UTC
- Duration
- 16h 4m
Affected: US-ORD (Chicago) Linode Kubernetes Engine, US-SEA (Seattle) Linode Kubernetes Engine, JP-OSA (Osaka) Linode Kubernetes Engine, SG-SIN-2 (Singapore 2) Linode Kubernetes Engine
Timeline · 10 updates
-
investigating Mar 05, 2026, 02:07 AM UTC
We are investigating a critical service issue affecting NVIDIA RTX 4000 Ada GPU nodes across multiple regions, including Osaka (osa1), Seattle (sea1), and Chicago (ord1). Affected GPU nodes may report an unrecoverable error state, leading to failures in Vulkan initialization and GPU-accelerated workloads. Additionally, some LKE clusters in the Osaka region are currently experiencing Control Plane connectivity issues, resulting in timed-out API requests and errors. Our engineering teams are currently investigating the root cause, focusing on a potential regression in the underlying host hypervisor or GPU firmware. We will provide more information as it becomes available.
-
investigating Mar 05, 2026, 05:48 AM UTC
Our subject matter experts are actively investigating the issue. We will provide the next update as progress is made.
-
investigating Mar 05, 2026, 06:55 AM UTC
We are continuing to investigate the issue. We will provide the next update as progress is made.
-
monitoring Mar 05, 2026, 07:34 AM UTC
Our team has identified the issue affecting the service and implemented a fix. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
investigating Mar 05, 2026, 10:50 AM UTC
We are aware of a recurrence of this issue across multiple regions. We are continuing to investigate and will provide the next update as progress is made.
-
investigating Mar 05, 2026, 02:48 PM UTC
We are continuing to investigate and will provide the next update as progress is made.
-
identified Mar 05, 2026, 04:23 PM UTC
Our team has identified the issue affecting the service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
monitoring Mar 05, 2026, 05:01 PM UTC
At this time we have been able to correct the issues affecting the service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
-
resolved Mar 05, 2026, 06:11 PM UTC
We haven’t observed any additional issues with the service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 09, 2026, 01:32 AM UTC
Starting at approximately 21:00 UTC on March 4, 2026, customers utilizing NVIDIA RTX 4000 Ada GPU-backed Linodes began experiencing lockups. At first, the issue was believed to be isolated to worker nodes on the Linode Kubernetes Engine (LKE) platform, but it was later confirmed to impact all Linodes using this hardware. Standard Compute and non-RTX 4000 GPU instances were unaffected. After ruling out recent software releases, our subject matter experts isolated the root cause to a recently deployed telemetry script. During a routine system improvement initiative, our teams had repaired a broken, legacy monitoring script to restore a missing metric on our internal observability dashboards. The script, originally written for an earlier GPU generation, issued a firmware inspection query whose side effects were not apparent from the scope of the fix being made. On the RTX 4000 Ada architecture, this class of query against an active GPU triggers a race condition in the GPU System Processor (GSP), causing the GPU to enter a protective lockup state and become unavailable to running workloads. We disabled the monitoring script across the GPU fleet and rebooted the affected nodes to mitigate the impact. The issue was fully mitigated around 17:16 UTC on March 5, 2026. We sincerely apologize for the disruption this caused to your GPU-accelerated applications and services. We will take appropriate improvement and prevention measures to prevent recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
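The failure mode described in this postmortem, a telemetry probe that is harmless on one GPU generation but disruptive on another, is why fleet monitoring typically favors read-only NVML state queries. The sketch below shows that style of polling using the standard pynvml bindings; it is not the script from the incident, whose contents are not described in the report.

```python
# Illustrative sketch: read-only GPU health polling via NVML (pynvml).
# This is NOT the monitoring script from the incident (its contents are
# not public); it only shows the non-intrusive query style that avoids
# firmware-level inspection of an active GPU.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        # Plain state reads like these stay on well-trodden driver paths.
        print(f"gpu{i} {name}: {temp}C, {util.gpu}% util, {util.memory}% mem")
finally:
    pynvml.nvmlShutdown()
```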
- Detected by Pingoru
- Mar 04, 2026, 09:21 AM UTC
- Resolved
- Mar 04, 2026, 11:50 PM UTC
- Duration
- 14h 28m
Affected: Linode Manager and API
Timeline · 4 updates
-
investigating Mar 04, 2026, 09:21 AM UTC
We are aware that some customers are receiving duplicate notification emails at fixed intervals. Additional updates will be shared once we have more information.
-
investigating Mar 04, 2026, 12:48 PM UTC
We are continuing to investigate the issue. We will share additional updates as we have more information and once progress is made.
-
investigating Mar 04, 2026, 06:14 PM UTC
Our team is actively investigating the issue affecting Linode Manager and API, which is causing duplicate notification emails for some customers. We appreciate your patience as we work to resolve this. We will provide further updates as soon as we have more information.
-
resolved Mar 04, 2026, 11:50 PM UTC
We haven't observed any additional issues, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to [email protected] for assistance.
Read the full incident report →
- Detected by Pingoru
- Feb 27, 2026, 09:00 PM UTC
- Resolved
- Mar 03, 2026, 05:30 PM UTC
- Duration
- 3d 20h
Affected: AP-West (Mumbai)
Timeline · 7 updates
-
investigating Feb 27, 2026, 09:00 PM UTC
Our team is investigating an issue affecting connectivity in our Mumbai, IN (in-mum) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
-
monitoring Feb 27, 2026, 09:34 PM UTC
At this time we have been able to correct the issues affecting connectivity in our Mumbai, IN (in-mum) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
-
identified Feb 28, 2026, 11:11 AM UTC
We are aware of an ongoing recurrence of this issue starting from 09:17 UTC on February 28, 2026, which may cause degraded networking performance and packet loss affecting multiple Akamai services in the Mumbai, IN region. Our team is investigating and we will share additional updates as we have more information. If you are experiencing issues, please open a Support ticket for assistance.
-
investigating Feb 28, 2026, 12:16 PM UTC
At this time we have been able to apply mitigation actions that improve the experience for customers connecting to our Mumbai, IN (in-mum) data center. In the meantime, we are actively investigating the issue with our third-party provider and will share additional updates as we have more information. If you are experiencing issues, please open a Support ticket for assistance.
-
monitoring Feb 28, 2026, 05:43 PM UTC
We have taken mitigation steps and escalated to our third-party service provider in our Mumbai, IN (in-mum) data center, who confirmed the impact was caused by ongoing restoration activities on their cable system during that period, which have since been completed. Based on current observations as of 13:40 UTC on February 28, 2026, the service is resuming normal operations. We will continue to monitor to ensure that the impact has been fully mitigated.
-
resolved Mar 03, 2026, 05:30 PM UTC
We haven’t observed any additional connectivity issues in our Mumbai, IN (in-mum) data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 07, 2026, 05:19 AM UTC
Between February 27, 2026, 11:50 UTC and February 28, 2026, 17:40 UTC, Akamai observed degraded performance, intermittent connectivity issues, and packet loss affecting Edge Delivery and Cloud services for customers in India. The issue impacted multiple Edge locations and connectivity to Mumbai and Chennai data centers. The most significant impact occurred around 12:20 UTC on February 27, with a second window of disruption in the Mumbai region between 19:45 UTC and 21:00 UTC. A recurrence was identified on February 28, 2026, at 09:17 UTC. The recurrence was partially mitigated by 09:46 UTC and fully resolved by 11:08 UTC the same day. The root cause was a chain of unexpected subsea cable outages that occurred during repair work by the third-party network provider, resulting in intermittent disruptions to internet traffic between India and Europe. During the event, traffic was rerouted through unstable alternate paths by the third-party provider, resulting in packet loss and congestion. Additionally, isolation of a key network point prevented effective traffic management, leading to service interruptions. Akamai took several actions to mitigate the impact, including suspending traffic to the affected Edge locations, temporarily removing traffic from the affected ISP link, and closely monitoring the platform for stability. Once the third-party provider restored normal routing, Akamai removed temporary mitigations and confirmed service stability. The incident was fully mitigated by the third-party provider at 15:34 UTC on March 3, 2026, after no further recurrences were observed. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
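During rerouting events like this one, customers often want their own measurement of loss and latency toward an affected region rather than relying solely on provider updates. A small timed-TCP-connect probe, sketched below, is one provider-agnostic way to do that; the target host and port are placeholders, not endpoints named in the report.

```python
# Illustrative sketch: rough latency/loss probe using timed TCP connects.
# Host and port are placeholders; substitute an endpoint you actually
# host in the affected region.
import socket
import statistics
import time

def probe(host: str, port: int = 443, attempts: int = 20, timeout: float = 2.0) -> None:
    rtts, failures = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            failures += 1
        time.sleep(0.5)  # pace the probes
    loss = 100.0 * failures / attempts
    if rtts:
        print(f"{host}: {loss:.0f}% failed, median {statistics.median(rtts):.1f} ms")
    else:
        print(f"{host}: all {attempts} connection attempts failed")

probe("app.example-chennai.example.com")  # hypothetical endpoint in the region
```

TCP connect time is a coarser signal than ICMP-based tools, but it exercises the same path application traffic takes and needs no elevated privileges.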
- Detected by Pingoru
- Feb 27, 2026, 08:18 PM UTC
- Resolved
- Feb 27, 2026, 09:01 PM UTC
- Duration
- 43m
Timeline · 2 updates
-
investigating Feb 27, 2026, 08:18 PM UTC
Our team is investigating an issue affecting connectivity in our Singapore data center starting at approximately 19:05 UTC, February 27, 2026. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
-
resolved Feb 27, 2026, 09:01 PM UTC
We haven’t observed any additional connectivity issues in our Singapore data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Read the full incident report →
- Detected by Pingoru
- Feb 27, 2026, 01:38 PM UTC
- Resolved
- Feb 27, 2026, 07:45 PM UTC
- Duration
- 6h 7m
Affected: IN-MAA (Chennai), IN-BOM-2 (Mumbai 2)
Timeline · 5 updates
-
investigating Feb 27, 2026, 01:38 PM UTC
Our team is investigating an issue affecting connectivity in our Chennai, IN (in-maa) and Mumbai Expansion, IN (in-bom-2) data centers. During this time, users may experience intermittent connection timeouts and errors for all services deployed in these data centers. We will share additional updates as we have more information.
-
monitoring Feb 27, 2026, 02:35 PM UTC
At this time we have been able to correct the issues affecting connectivity in our Chennai, IN (in-maa) and Mumbai Expansion, IN (in-bom-2) data centers. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
-
monitoring Feb 27, 2026, 03:48 PM UTC
We are continuing to monitor for any further issues.
-
resolved Feb 27, 2026, 07:45 PM UTC
We haven’t observed any additional connectivity issues in our Chennai, IN (in-maa) and Mumbai Expansion, IN (in-bom-2) data centers, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 07, 2026, 05:18 AM UTC
Between February 27, 2026, 11:50 UTC and February 28, 2026, 17:40 UTC, Akamai observed degraded performance, intermittent connectivity issues, and packet loss affecting Edge Delivery and Cloud services for customers in India. The issue impacted multiple Edge locations and connectivity to Mumbai and Chennai data centers. The most significant impact occurred around 12:20 UTC on February 27, with a second window of disruption in the Mumbai region between 19:45 UTC and 21:00 UTC. A recurrence was identified on February 28, 2026, at 09:17 UTC. The recurrence was partially mitigated by 09:46 UTC and fully resolved by 11:08 UTC the same day. The root cause was a chain of unexpected subsea cable outages that occurred during repair work by the third-party network provider, resulting in intermittent disruptions to internet traffic between India and Europe. During the event, traffic was rerouted through unstable alternate paths by the third-party provider, resulting in packet loss and congestion. Additionally, isolation of a key network point prevented effective traffic management, leading to service interruptions. Akamai took several actions to mitigate the impact, including suspending traffic to the affected Edge locations, temporarily removing traffic from the affected ISP link, and closely monitoring the platform for stability. Once the third-party provider restored normal routing, Akamai removed temporary mitigations and confirmed service stability. The incident was fully mitigated by the third-party provider at 15:34 UTC on March 3, 2026, after no further recurrences were observed. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Feb 27, 2026, 07:59 AM UTC
- Resolved
- Feb 27, 2026, 09:33 AM UTC
- Duration
- 1h 33m
Affected: EU-West (London)
Timeline · 3 updates
-
investigating Feb 27, 2026, 07:59 AM UTC
Our team is investigating an emerging service issue affecting connectivity in EU-West (London). We will share additional updates as we have more information.
-
resolved Feb 27, 2026, 09:33 AM UTC
We haven’t observed any additional connectivity issues in our EU-West (London) data center, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
-
postmortem Mar 03, 2026, 08:29 AM UTC
On February 27, 2026, between approximately 04:25 and 09:01 UTC, we experienced connectivity issues in our EU-West (London) data center. During the impact window, customers may have experienced intermittent connection timeouts and errors for all services deployed in the London data center. Our investigation established that traffic going via one of the ISPs was dropped within the ISP’s Autonomous System. As a result, customers could have experienced connectivity issues. To resolve the issue, we have moved traffic away from the problematic ISP’s Autonomous System. To help prevent similar issues in the future, Akamai will work with the ISP to ensure that the underlying issue within their Autonomous System is addressed before returning traffic. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent recurrence. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
Read the full incident report →
- Detected by Pingoru
- Feb 21, 2026, 03:20 PM UTC
- Resolved
- Feb 21, 2026, 09:07 PM UTC
- Duration
- 5h 47m
Affected: Linode.com
Timeline · 8 updates
-
investigating Feb 21, 2026, 03:20 PM UTC
Our team is investigating an emerging service issue affecting the ability to log in to linode.com or Cloud Manager in all regions. We will share additional updates as we have more information.
-
investigating Feb 21, 2026, 05:41 PM UTC
We are continuing to investigate an issue impacting the ability to log in to linode.com and Cloud Manager using Google TPA in all regions. We will share additional updates as we have more information and once progress is made.
-
investigating Feb 21, 2026, 07:06 PM UTC
We are continuing to investigate an issue impacting the ability to log in to linode.com and Cloud Manager using Google TPA across all regions. We will share additional updates as we have more information and once progress is made.
-
identified Feb 21, 2026, 07:35 PM UTC
Our team has identified the issue affecting the ability to log in to linode.com and Cloud Manager using Google TPA across all regions. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
-
identified Feb 21, 2026, 07:59 PM UTC
We are continuing to work on a fix for this issue. Subsequent updates around mitigation status will be posted as progress is made.
-
identified Feb 21, 2026, 08:36 PM UTC
We are continuing to work on a fix for this issue. Subsequent updates around mitigation status will be posted as progress is made.
-
resolved Feb 21, 2026, 09:07 PM UTC
At this time, we have been able to correct the issues affecting the login to linode.com and Cloud Manager. We haven't observed any additional issues with Linode.com and Cloud Manager, and will now consider this incident resolved. If you are still experiencing additional issues, please open a Support ticket for assistance.
-
postmortem Mar 05, 2026, 04:39 PM UTC
On February 21, 2026, at approximately 13:00 UTC, customers were unable to log in to linode.com and Cloud Manager using Google Third-Party Authentication (TPA) across all regions. Users encountered "Access Block: Authorization Error" and "The OAuth client was deleted. Error 401: deleted_client" messages. Our investigation found that on February 18, 2026, a client ID was deleted from the Google Console. This client ID was associated with Google Single Sign-On login, which was affected by recent changes to Akamai’s Google Cloud Platform configuration. To address the issue, we initially tried to restore the deleted configuration, but the error messages continued. We then set up a new configuration with a new client ID. After restarting all Virtual Machines, login access was fully restored at 20:24 UTC on February 21, 2026. Akamai’s team is continuing to investigate the root cause and will implement measures to prevent similar issues in the future. We apologize for any inconvenience this caused and appreciate your patience as we work to improve our services. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Read the full incident report →
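For teams debugging the kind of OAuth failure described above, the token endpoint's JSON error body is usually the quickest signal that the client registration itself is broken rather than the user's credentials. A hedged sketch follows; the client ID, secret, and refresh token are placeholders, and treating deleted_client alongside invalid_client as a registration-level failure is an assumption based on the error text quoted in the report.

```python
# Illustrative sketch: surface registration-level OAuth errors during a
# token refresh against Google's token endpoint. The client credentials
# and refresh token passed in are placeholders.
import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"

def refresh_access_token(client_id: str, client_secret: str, refresh_token: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "client_id": client_id,
            "client_secret": client_secret,
            "refresh_token": refresh_token,
        },
        timeout=10,
    )
    if resp.status_code != 200:
        error = resp.json().get("error", "unknown_error")
        # Assumption: these codes mean the client registration itself is
        # gone or invalid, so retrying with the same credentials won't help.
        if error in ("deleted_client", "invalid_client"):
            raise RuntimeError(f"OAuth client registration problem: {error}")
        raise RuntimeError(f"token refresh failed: {error}")
    return resp.json()["access_token"]
```

Distinguishing registration-level errors from transient failures matters operationally: the former require re-provisioning a client ID, as happened here, while the latter can simply be retried.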