Anaplan experienced a major incident on October 20, 2025 affecting us1: Data Center - US East, us2: Data Center - US West, and one more component, lasting 14h 45m. The incident has been resolved; the full update timeline is below.
Update timeline
- investigating Oct 20, 2025, 07:37 AM UTC
We are currently investigating an issue impacting customers’ ability to access the Anaplan Platform. We are working to resolve this issue as quickly as possible and will provide updates every 30 minutes or upon resolution.
- investigating Oct 20, 2025, 08:08 AM UTC
Thank you for your patience as we continue to investigate this issue. Currently, we do not yet have a time to resolution. We will continue to provide updates every 30 minutes as we work to resolve this issue as quickly as possible.
- identified Oct 20, 2025, 08:18 AM UTC
We are currently experiencing a global AWS outage, which is impacting login capabilities for Anaplan customers. Our team is actively monitoring the situation and collaborating with AWS engineers as they work to resolve the issue on their end. Restoring your access is our highest priority. We will post our next update within 30 minutes, or as soon as a resolution is underway.
- identified Oct 20, 2025, 08:53 AM UTC
We remain impacted by a global AWS outage, which continues to affect login capabilities for Anaplan customers. Our team is in communication with AWS and we are doing everything we can to restore service as quickly as possible. We will post our next update within 30 minutes, or as soon as a resolution is underway.
- identified Oct 20, 2025, 09:01 AM UTC
We are continuing to work closely with our cloud provider, Amazon Web Services (AWS), who have identified that the issue is related to DNS resolution of their DynamoDB API endpoint. They are actively working on a fix. At this time, we do not have an ETA for resolution, but we are working with AWS to provide this information as soon as it becomes available. We will provide a further update within 30 minutes, or as soon as the issue is resolved.
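For context on what a failure of "DNS resolution of their DynamoDB API endpoint" means for clients: every API call fails before a single byte reaches AWS, because the service hostname cannot be translated to an IP address. A minimal illustrative check using only the Python standard library (the hostname is AWS's public DynamoDB endpoint for US-EAST-1; this is a diagnostic sketch, not part of Anaplan's tooling):

```python
import socket

# Public DynamoDB API endpoint for the affected AWS region.
ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

try:
    # If this lookup fails, every SDK call to the service fails
    # before reaching AWS, regardless of the service's actual health.
    addresses = socket.getaddrinfo(ENDPOINT, 443, proto=socket.IPPROTO_TCP)
    print(f"{ENDPOINT} resolves to {sorted({a[4][0] for a in addresses})}")
except socket.gaierror as err:
    print(f"DNS resolution failed for {ENDPOINT}: {err}")
```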
- identified Oct 20, 2025, 09:23 AM UTC
We are continuing to work on a fix for this issue.
- identified Oct 20, 2025, 09:55 AM UTC
We are continuing to work with AWS, who have implemented a fix, and services are starting to recover. Teams are monitoring internally to ensure our services recover and the platform can become available to customers. We will send another update in 30 minutes or upon resolution.
- identified Oct 20, 2025, 10:12 AM UTC
We are continuing to work with AWS, who have implemented a fix, and services are starting to recover. Teams are monitoring internally to ensure our services recover and the platform can become available to customers. We will send another update in 30 minutes or upon resolution.
- identified Oct 20, 2025, 10:24 AM UTC
AWS have implemented a fix which has allowed services to recover. We are continuing to monitor this internally, as our services start to recover and the platform becomes fully accessible to customers again. We will provide a further update in 30 minutes or upon resolution.
- identified Oct 20, 2025, 10:39 AM UTC
While platform access has recovered, we are still managing downstream impacts from the earlier AWS outage. Cloudworks integrations are currently still impacted. Services in our US4 and US7 regions remain in a degraded state due to allocation issues stemming from the ongoing AWS issue. Our teams are actively working to restore all services to full functionality and are continuing to monitor the situation closely with AWS. We will provide a further update in 30 minutes, or as soon as we have more information.
- identified Oct 20, 2025, 10:55 AM UTC
While platform access has recovered, we are still managing downstream impacts from the earlier AWS outage. Cloudworks integrations are currently still impacted. Services in our US4 and US7 regions remain in a degraded state due to allocation issues stemming from the ongoing AWS issue. Our teams are actively working to restore all services to full functionality and are continuing to monitor the situation closely with AWS. We will provide a further update in 30 minutes, or as soon as we have more information.
- identified Oct 20, 2025, 10:58 AM UTC
While platform access has recovered, we are still managing downstream impacts from the earlier AWS outage. Cloudworks integrations are currently still impacted. Services in our US4 and US7 regions remain in a degraded state due to allocation issues stemming from the ongoing AWS issue. Our teams are actively working to restore all services to full functionality and are continuing to monitor the situation closely with AWS. We will provide a further update in 30 minutes, or as soon as we have more information.
- identified Oct 20, 2025, 11:11 AM UTC
We are continuing to manage the downstream impacts from the earlier AWS outage. Cloudworks integrations are currently still impacted but starting to recover. Services in our US4 and US7 regions remain in a degraded state due to allocation issues stemming from the ongoing AWS issue. Our teams are actively working to restore all services to full functionality and are continuing to monitor the situation closely with AWS. We will provide a further update in 30 minutes, or as soon as we have more information.
- identified Oct 20, 2025, 11:48 AM UTC
We are continuing to resolve the downstream effects of the earlier AWS outage. Here is the current status: Cloudworks: Integrations are recovering but are not yet fully stable. US4 & US7 Services: These remain in a degraded state due to resource allocation issues stemming from the AWS incident. Our teams are actively working to restore all services and are monitoring the situation closely with AWS. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 12:37 PM UTC
We are continuing to resolve the downstream effects of the earlier AWS outage. Here is the current status: Cloudworks: Integrations are still recovering and we are working on getting these stable. US4 & US7 Services: Due to ongoing recovery work by AWS, the workspace allocations remain in a degraded state, but they have started to scale and recover. Our teams are actively working to restore all services and are monitoring the situation closely with AWS. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 01:10 PM UTC
We are continuing to work to resolve the remaining effects from the earlier AWS outage. Here is the current status: For our US4 and US7 data centers, AWS is implementing a fix aimed at restoring stability to workspace allocations. They expect this work to be complete within the next 30-40 minutes. Meanwhile, our teams remain focused on resolving the priority Cloudworks integration issues in the following regions: US2, US5, US7, and EU4. We are working closely with AWS to bring all services back to full health and will continue to monitor the situation. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 01:53 PM UTC
We are continuing to work to resolve the remaining effects from the earlier AWS outage. Here is the current status: We have made progress on Cloudworks. The integration issues in US5 and EU4 have been resolved, though jobs may run slower than normal as the backlog clears. Our teams are continuing their focus on stabilizing the remaining Cloudworks issues in US7 and US2. For our US4 and US7 data centers, AWS has implemented a fix aimed at restoring stability to workspace allocations. We are monitoring this closely to confirm a full recovery. We are working closely with AWS to bring all services back to full health. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 02:37 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Here is the current status: We have made further progress on Cloudworks. The integration issues in the US2 region have now also been resolved, though jobs may run slower than normal as the backlog clears. Our teams are continuing their focus on stabilizing the remaining integration issues in the US7 region. For our US4 and US7 data centers, the situation remains impacted. The mitigations previously implemented by our cloud provider, AWS, have not resolved the underlying issue. AWS has confirmed they are again seeing significant API errors and connectivity issues across their US-EAST-1 Region and are actively investigating. We are treating this with the highest priority and are in direct communication with AWS. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 02:42 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Here is the current status: We have made further progress on Cloudworks. The integration issues in the US2 region have now also been resolved, though jobs may run slower than normal as the backlog clears. Our teams are continuing their focus on stabilizing the remaining integration issues in the US7 region. For our US4 and US7 data centers, the situation remains impacted. The mitigations previously implemented by our cloud provider, AWS, have not resolved the underlying issue. AWS has confirmed they are again seeing significant API errors and connectivity issues across their US-EAST-1 Region and are actively investigating. We are treating this with the highest priority and are in direct communication with AWS. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 03:22 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Here is the current status: Cloudworks: We have made further progress, and integrations are now running. However, due to the significant backlog, job processing will be slower than normal. Our teams are monitoring this closely and focusing on the remaining stabilization work in the US7 region. US4 & US7 Data Centers: These services remain impacted. Our provider, AWS, has traced the root cause to a network connectivity issue within their EC2 internal network in the US-EAST-1 region. This is causing the significant API errors we are observing. AWS is actively investigating and identifying potential mitigation options. We are treating this with the highest priority and are in constant communication with AWS. We will provide another update in 30 minutes or sooner if we have new information.
- identified Oct 20, 2025, 04:05 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Here is the current status: Cloudworks: While integrations are running in most regions, performance remains slower than normal as the system processes a significant backlog. The full stabilization of the US7 region is pending resolution of the underlying AWS issue. US4 & US7 Data Centers: These services also remain impacted. Our provider, AWS, has identified the root cause as an issue with an internal subsystem that monitors their network load balancers. To aid recovery, AWS is currently throttling requests for new EC2 instance launches while they actively work on mitigations. All of our impacted services are awaiting the resolution from AWS. We are treating this with the highest priority and are in constant communication with them. We will provide another update in 30 minutes or sooner if we have new information.
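For context on the EC2 launch throttling described above: while AWS throttles new instance launches, API clients typically see errors such as RequestLimitExceeded, and the conventional client-side mitigation is exponential backoff with jitter. A hedged sketch using boto3; the AMI ID and instance type are placeholders, and this illustrates the general retry pattern rather than Anaplan's internal tooling:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_with_backoff(max_attempts: int = 6):
    """Retry a throttled instance launch with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return ec2.run_instances(
                ImageId="ami-00000000000000000",  # placeholder AMI ID
                InstanceType="m5.large",          # placeholder instance type
                MinCount=1,
                MaxCount=1,
            )
        except ClientError as err:
            code = err.response["Error"]["Code"]
            # Retry only on throttling/capacity errors; surface everything else.
            if code not in ("RequestLimitExceeded", "InsufficientInstanceCapacity"):
                raise
            time.sleep(min(60, 2 ** attempt) + random.random())
    raise RuntimeError("instance launch still throttled after retries")
```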
- identified Oct 20, 2025, 04:43 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Here is the current status: Our cloud provider, AWS, has reported they have taken additional mitigation steps for the underlying issue. They are seeing initial signs of recovery for connectivity and API calls on their end and are also working to reduce the throttling of new EC2 instance launches. Our teams are now closely monitoring our systems to validate the effect of these AWS mitigations on: the full stabilization of Cloudworks in the US7 region, and the recovery of our US4 and US7 regions, where the ability to allocate and access workspaces has been intermittent. We are treating this with the highest priority and will provide another update in 30 minutes, or as soon as we can confirm the outcome of these recovery efforts.
- identified Oct 20, 2025, 05:24 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. AWS has developed a potential fix and is currently in the process of validating it to ensure it can be deployed safely. The resolution for these Anaplan services is dependent on the successful validation and deployment of these fixes by AWS. Our teams are in constant communication with them and are monitoring the situation to confirm recovery of Cloudworks in the US7 region and to restore reliable workspace allocation in our US4 and US7 regions. We will provide another update in 30 minutes, or as soon as we have new information.
- identified Oct 20, 2025, 05:55 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. Our cloud provider, AWS, has reported that their recovery efforts are progressing and showing early signs of success in their impacted infrastructure. This is a significant step forward, and we expect this progress to resolve the workspace allocation issues in our US4 and US7 regions and support recovery of Cloudworks in the US7 region. Our teams are monitoring our systems in real-time to confirm the positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the restoration of services.
- identified Oct 20, 2025, 06:31 PM UTC
We are continuing to work through the effects of the ongoing AWS service disruption. AWS have reported that their recovery efforts are progressing and we have seen positive indications of this from our internal checks. We are now observing workspace allocations actively coming back up in our US4 and US7 regions. This will also support recovery of Cloudworks in the US7 region. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 07:09 PM UTC
We continue to work through the effects of the ongoing AWS service disruption. AWS have reported that their recovery efforts are progressing and we have seen positive indications of this from our ongoing internal testing and monitoring. AWS have indicated that full recovery is estimated to take about 2 hours. We continue to see workspace allocations coming back up in our US4 and US7 regions, with 20% of allocations remaining. We expect to see CloudWorks improvements in the US7 region post recovery. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 07:40 PM UTC
We continue to work through the effects of the ongoing AWS service disruption. AWS recovery efforts are progressing and we have seen positive indications of this from our ongoing internal testing and monitoring. We continue to see workspace allocations coming back up in our US4 and US7 regions, with 10% of allocations remaining. We expect to see CloudWorks improvements in the US7 region post recovery. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 08:06 PM UTC
We continue to work through the effects of the ongoing AWS service disruption. AWS recovery efforts are progressing and we have seen positive indications of this from our ongoing internal testing and monitoring. We continue to see workspace allocations coming back up in our US4 and US7 regions, with 5% of allocations remaining. We expect to see CloudWorks improvements in the US7 region post recovery. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 08:36 PM UTC
We continue to work through the effects of the ongoing AWS service disruption. We are seeing significant improvements as a result of AWS recovery efforts. We continue to see workspace allocations coming back up in our US4 and US7 regions. CloudWorks Integrations in the US7 region are now running; however, the jobs may run slower than normal as the backlog clears. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 09:08 PM UTC
We continue to work through the effects of the ongoing AWS service disruption. We are seeing some issues with basic authentication and login for customers with multiple login options via the pre-login page, a side effect of the ongoing recovery activities. This may be impacting users in multiple regions and is being investigated. Workspace allocations are back up in our US4 and US7 regions. CloudWorks integrations in the US7 region are now running; however, jobs may run slower than normal as the backlog clears. Our teams are monitoring our systems in real-time to confirm the full and positive impact of these recovery efforts. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 09:56 PM UTC
Most services have now recovered from the earlier AWS disruption. However, we are investigating a specific login issue: we are seeing some issues for customers attempting to log in via the Anaplan login page. We have identified the cause and are now implementing an alternative recovery option. In the meantime, workarounds for affected users are: you should be able to log in as expected when accessing via your SSO provider (recommended); alternatively, retrying the basic login 3-5 times may be successful. Other service status: Workspace allocations: Services in our US4 and US7 regions are back up and remain stable. Cloudworks: Integrations in the US7 region are running; however, jobs may run slower than normal as the backlog clears. We continue to monitor the stability of all recovered services. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
- identified Oct 20, 2025, 10:28 PM UTC
Most services have now recovered from the earlier AWS disruption. However, we are investigating a specific login issue: we continue to see authentication issues for customers attempting to log in via the Anaplan login page. We are now implementing an alternative recovery option. In the meantime, workarounds for affected users are: you should be able to log in as expected when accessing via your SSO provider (recommended); alternatively, retrying the basic login 3-5 times may be successful. Other service status: Workspace allocations: Services in our US4 and US7 regions are back up and continue to remain stable. Cloudworks: Integrations in the US7 region are running, but jobs may run slower than normal as we work to clear the backlog. We continue to monitor the stability of all recovered services. We will provide another update in 30 minutes, or as soon as we can validate the full restoration of services.
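For users scripting against the platform, the "retry the basic login a few times" workaround above can be automated. A minimal sketch in Python, assuming Anaplan's standard token authentication endpoint (verify the URL against current Anaplan API documentation); it retries only on server-side or network failures and backs off between attempts:

```python
import base64
import time
import urllib.error
import urllib.request

# Assumed: Anaplan's basic-auth token endpoint; confirm against current API docs.
AUTH_URL = "https://auth.anaplan.com/token/authenticate"

def login_with_retries(user: str, password: str, attempts: int = 5) -> bytes:
    """Automate the 'retry basic login 3-5 times' workaround with a short backoff."""
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    for attempt in range(1, attempts + 1):
        req = urllib.request.Request(
            AUTH_URL,
            method="POST",
            headers={"Authorization": f"Basic {cred}"},
        )
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read()  # payload containing the auth token
        except urllib.error.HTTPError as err:
            if err.code < 500:
                raise  # e.g. 401 bad credentials: retrying will not help
        except urllib.error.URLError:
            pass  # transient network/DNS failure: retry
        time.sleep(2 * attempt)
    raise RuntimeError("login did not succeed after retries")
```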
- monitoring Oct 20, 2025, 11:08 PM UTC
Service has now been restored; you should now be able to resume normal activities. We will continue to monitor the platform to ensure no additional issues arise. If you have any questions, concerns, or continue to experience issues, please do not hesitate to contact Anaplan Support. We will provide a final update to you when we consider this situation fully resolved.
- monitoring Oct 20, 2025, 11:34 PM UTC
Service has now been restored; you should now be able to resume normal activities. US7 Cloudworks: All integrations are running. Please note that jobs may still run slower than normal as the system continues to process the backlog accumulated during the incident. Performance will improve as this backlog clears. We will continue to monitor the platform to ensure no additional issues arise. If you have any questions, concerns, or continue to experience issues, please do not hesitate to contact Anaplan Support. We will provide a final update to you when we consider this situation fully resolved.
- resolved Oct 21, 2025, 12:07 AM UTC
We have confirmed that the issue is now resolved. We deeply apologize for any impact this issue may have caused. We appreciate your patience and partnership as we worked through this issue. We will follow up within 7 business days with a detailed root cause analysis (RCA) that will be shared on our Status Page. If you have any questions or concerns, please do not hesitate to contact us at Anaplan Support.
- postmortem Oct 29, 2025, 07:44 PM UTC
On October 20, 2025, starting at 07:06 UTC, the Anaplan platform experienced a major service disruption. The service disruption was caused by two separate incidents with our third-party providers occurring at the same time. First, a major regional outage at Amazon Web Services (AWS) impacted infrastructure in US4 and US7. This was immediately followed by a failure in the system of our third-party identity provider, which manages user logins. This second failure caused the disruption to cascade to other Anaplan regions that were not directly affected by the initial AWS outage. This dual failure resulted in a period of complete service unavailability for a subset of Anaplan services. This was followed by a period of service degradation. During the degradation period, the platform was available, but CloudWorks™ integrations processed at a slower rate than usual.

| Region | Unavailability (UTC) | Functional unavailability (UTC) | Degradation (UTC) |
| --- | --- | --- | --- |
| US1 – Data Center (US East) | 07:06–09:55 | — | 09:55–13:53 |
| US2 – Data Center (US West) | 07:06–09:55 | — | 09:55–14:37 |
| EU1 – Data Center (Netherlands) | 07:06–09:55 | — | 09:55–10:39 |
| EU2 – Data Center (Germany) | 07:06–09:55 | — | 09:55–13:53 |
| US3 – Cloud (US) | 07:06–09:55 | — | 09:55–13:53 |
| AP1 – Cloud (Japan) | 07:06–09:55 | — | 09:55–13:53 |
| AU1 – Cloud (Australia) | 07:06–09:55 | — | 09:55–13:53 |
| US9 – Cloud (US) | 07:06–09:55 | — | 09:55–13:53 |
| US5 – Cloud (US East) | 07:06–09:55 | — | 09:55–14:37 |
| EU4 – Cloud (Europe) | 07:06–09:55 | — | 09:55–14:37 |
| US4 – Cloud (US) | 07:06–09:55 | 09:55–21:08 | 21:08–22:28 |
| US7 – Cloud (US) | 07:06–09:55 | 09:55–21:08 | 21:08–22:28 |

**Note:** For US4 and US7, the extended period represents functional unavailability due to persistent issues with workspace allocation because of the ongoing AWS outage. From about 21:08–23:08 UTC, there was an intermittent impact on users who were trying to log in to Anaplan using basic authentication. While our teams worked to deploy a permanent fix for this service, we communicated two immediate workarounds to affected customers.

**Root causes**

This incident had two distinct but related root causes involving failures with two of our key service providers.

1. The incident was initiated by a considerable service disruption in AWS's US-EAST-1 (Northern Virginia) region. A technical issue in a core system led to a cascading failure across other foundational AWS services, which impacted our US4 and US7 regions.
2. The impact of the AWS outage was magnified by the failure of our third-party identity provider. Anaplan invests in a premium High Availability (HA) service from our provider, designed to act as an automatic backup system in the event of a regional outage. This backup system did not operate as designed, and the failure expanded the impact to multiple regions outside the affected AWS region.
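To make the second root cause concrete, below is a minimal, purely illustrative Python model of the failover deadlock detailed in the identity-provider review further down: activating the backup requires a fresh credential, but the credential issuer itself resides in the failed region. All class, function, and region names here are hypothetical; this is not our provider's implementation.

```python
class TokenService:
    """Hypothetical credential issuer; in the flawed design it lives in the primary region."""

    def __init__(self, region: str, failed_regions: set):
        self.region = region
        self.failed_regions = failed_regions

    def issue_token(self) -> str:
        # A service hosted inside a failed region cannot answer requests.
        if self.region in self.failed_regions:
            raise ConnectionError(f"token service in {self.region} is unreachable")
        return "fresh-credential"


def fail_over(primary: str, backup: str, issuer: TokenService, failed_regions: set) -> str:
    if primary not in failed_regions:
        return f"primary {primary} healthy; no failover needed"
    # Deadlock: activating the backup requires a credential from a
    # service that is itself located in the failed region.
    token = issuer.issue_token()
    return f"backup {backup} activated with {token}"


failed = {"us-east-1"}
issuer = TokenService(region="us-east-1", failed_regions=failed)  # the architectural flaw
try:
    print(fail_over("us-east-1", "us-west-2", issuer, failed))
except ConnectionError as err:
    print(f"failover deadlocked: {err}")  # a regional outage escalates globally
```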
**Our response**

Upon detecting the issue, our global engineering teams immediately initiated a multi-track response. Our teams worked in partnership with AWS and our identity provider to track recovery efforts and to hold them accountable for their service commitments. In parallel, our teams worked to prepare our systems for a safe and orderly restoration as both providers' services were restored. As the services recovered, our engineers worked to clear system backlogs to restore performance. They also identified and restarted any customer models that became stuck because of the outage.

**Recovery timeline**

The recovery from this complex, dual-vendor failure was a methodical, multi-threaded effort conducted by our global engineering and incident response teams. The initial focus was on the failure of our identity provider. Following the successful restoration of the identity service and the clearing of system backlogs, our teams conducted final health checks across all services. Access was restored to all regions at 09:55 UTC.

From 09:55–14:37 UTC, a dedicated team worked to clear a backlog of Cloudworks integrations as the system processed hours of queued jobs. We also manually restarted several customer models that had become stuck in an unresponsive state due to the initial outage.

At 21:59 UTC, our teams identified a specific login failure and communicated a workaround to customers. In parallel, we deployed our own targeted update to the Anaplan login service. This update fixed the specific login failure by 22:51 UTC.

The US4 and US7 environments, which are hosted directly in the failed AWS region, experienced extended degradation due to AWS's inability to launch new compute instances. Our teams employed targeted strategies in these regions, including manual resource allocation and draining workloads from unstable nodes. We restored full service at 22:28 UTC.

**Corrective and preventative actions**

This incident was caused by the failures of two of our providers. We are taking immediate and long-term actions to prevent a recurrence of this incident. We are holding our vendors accountable for their service commitments and are working with them to strengthen the resilience of our platform.

**Amazon Web Services (AWS):** We have received and reviewed the official root cause analysis (RCA) from AWS. The RCA details their architectural failures and their corresponding corrective actions. The AWS report confirms that the initial failure originated in their core database networking system. AWS is implementing key preventative measures:

* AWS has disabled the automated system that caused the initial failure across their infrastructure.
* They are deploying a permanent fix for a specific software bug that allowed an incorrect network configuration to be published, adding new architectural safeguards to prevent a recurrence.
* AWS is adding new velocity controls to their Network Load Balancers, limiting the rate at which capacity can be removed, and improving the throttling mechanisms in their core compute (EC2) systems so these systems can better handle high loads.
* AWS is building a new suite of large-scale tests to continuously validate recovery workflows for their critical EC2 management systems.
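To picture the "velocity controls" in the third bullet above: they act as a rate limiter on destructive actions, so that capacity cannot be drained from a fleet faster than a set budget allows. A purely illustrative sketch of the idea (a toy model, not AWS's implementation):

```python
import time
from collections import deque

class RemovalVelocityControl:
    """Illustrative velocity control: cap how much capacity may be
    removed within a sliding time window."""

    def __init__(self, max_removals: int, window_seconds: float):
        self.max_removals = max_removals
        self.window = window_seconds
        self.events = deque()  # monotonic timestamps of recent removals

    def try_remove_capacity(self) -> bool:
        now = time.monotonic()
        # Drop removal events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_removals:
            return False  # defer the removal instead of draining the fleet
        self.events.append(now)
        return True

# Allow at most 2 capacity removals per 60-second window.
control = RemovalVelocityControl(max_removals=2, window_seconds=60.0)
print([control.try_remove_capacity() for _ in range(4)])  # [True, True, False, False]
```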
**Identity provider:** Following the incident, we conducted an immediate and thorough review with our third-party identity provider to understand the failure and ensure it will not happen again.

We have confirmed the root cause from our identity provider. The automatic failover system, which we invested in for exactly this type of scenario, did not function as designed. To activate the backup, the system needed to obtain a fresh security credential. However, due to an architectural flaw, the service that issues that credential was itself located in the failed region. This created a deadlock that prevented the failover from executing, allowing a regional issue to have a much broader impact.

While the initial trigger was external, we hold ourselves accountable for the resilience of the Anaplan platform and the providers we choose. Our partnership requires complete transparency and proven resilience. We are taking the following steps to hold our provider accountable and strengthen our platform:

* Our partner has implemented a permanent fix to correct the architectural flaw. We are now conducting our own comprehensive validation and testing to ensure this solution meets our standards for reliability.
* We have initiated a comprehensive, top-to-bottom architectural review with our partner to proactively identify and eliminate potential single points of failure. This strategic review includes a joint assessment of migrating key services to a different infrastructure region to ensure our platform's foundation is as robust and resilient as possible.

**In closing**

We apologize for the significant impact this outage has had on your business. We hold ourselves accountable for the performance of our platform and the vendors we choose. We are taking all necessary steps to build a more resilient platform for the future. If you have further questions or concerns, please visit the Anaplan [Support website](https://support.anaplan.com/). Thank you for your patience and for being an Anaplan customer.