Linode incident
Service Issue - Linode Kubernetes Engine Enterprise (LKE-E) - Chicago, IL (us-ord)
Update timeline
- investigating Mar 17, 2026, 01:04 PM UTC
Our team is investigating an issue affecting the Linode Kubernetes Engine Enterprise (LKE-E) service. We will share additional updates as we have more information.
- identified Mar 17, 2026, 01:14 PM UTC
Our team has identified the issue affecting the LKE-E service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
- monitoring Mar 17, 2026, 01:27 PM UTC
At this time we have been able to correct the issues affecting the LKE-E service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
- resolved Mar 17, 2026, 02:22 PM UTC
We haven’t observed any additional issues with the LKE-E service, and we now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
- postmortem Mar 17, 2026, 06:43 PM UTC
Between approximately 01:00 and 14:12 UTC on March 17, 2026, customers were unable to deploy new Linode Kubernetes Engine Enterprise (LKE-E) clusters in the Chicago (us-ord) region. This was caused by exceeding the allowable number of DNS records in the DNS zone used for provisioning, which prevented the creation of new records required for cluster deployment. Existing LKE-E clusters and standard LKE clusters in the region continued to operate normally.

After receiving the first monitoring alert at 12:00 UTC, we investigated and identified the underlying issue: DNS records associated with deleted LKE-E clusters were not being properly cleaned up, leading to a gradual buildup of unused records. This accumulation eventually reached the record limit for the DNS zone, preventing new records from being created and blocking new cluster deployments.

Between 13:00 and 13:20 UTC, we deleted approximately 500 obsolete domain records, which relieved the record limit and allowed new clusters to provision successfully. We restored impacted clusters to a healthy state and confirmed that deployments were functioning as expected. After a brief monitoring period, the incident was considered fully mitigated at 14:12 UTC the same day.

We are continuing to clean up additional obsolete domain records and estimate that about 11,000 records will qualify for deletion based on the number of active clusters. To help prevent similar incidents and ensure reliable cluster provisioning in the future, we are enhancing our record cleanup mechanisms and monitoring.

This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
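The cleanup the postmortem describes — reconciling DNS records against the set of active clusters and deleting the orphans in batches — can be sketched roughly as follows. This is a minimal illustration, not Linode's actual tooling: the record shape, the `cluster_id` field, and the `delete_record` callback are all hypothetical assumptions standing in for a real DNS provider API.

```python
# Hypothetical sketch of an orphaned-DNS-record cleanup pass, reconciling a
# zone's records against the inventory of active clusters. Field names and
# the delete callback are assumptions, not Linode's real API.

def find_orphaned_records(zone_records, active_cluster_ids):
    """Return records whose owning cluster no longer exists."""
    return [r for r in zone_records if r["cluster_id"] not in active_cluster_ids]


def cleanup(zone_records, active_cluster_ids, delete_record, batch_size=100):
    """Delete orphaned records in batches; return how many were removed.

    Batching keeps each pass small so a large backlog (e.g. ~11,000 stale
    records) can be drained without hammering the DNS provider's API.
    """
    orphans = find_orphaned_records(zone_records, active_cluster_ids)
    for i in range(0, len(orphans), batch_size):
        for record in orphans[i:i + batch_size]:
            delete_record(record)  # stand-in for a DNS provider API call
    return len(orphans)


if __name__ == "__main__":
    records = [
        {"name": "a1.lke-e.example.com", "cluster_id": "a1"},
        {"name": "b2.lke-e.example.com", "cluster_id": "b2"},
        {"name": "c3.lke-e.example.com", "cluster_id": "c3"},
    ]
    active = {"b2"}  # only cluster b2 still exists
    deleted = []
    removed = cleanup(records, active, deleted.append)
    print(removed)  # 2
```

Running such a pass on cluster deletion (rather than only on demand) is the kind of enhancement the postmortem alludes to: it keeps the zone's record count bounded by the number of live clusters instead of growing without limit.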
Looking to track Linode downtime and outages?
Pingoru polls Linode's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
- Real-time alerts when Linode reports an incident
- Email, Slack, Discord, Microsoft Teams, and webhook notifications
- Track Linode alongside 5,000+ providers in one dashboard
- Component-level filtering
- Notification groups + maintenance calendar
5 free monitors · No credit card required