Octopus Outage History

Octopus is up right now

There have been 5 Octopus outages since February 2, 2026, totaling 217h 35m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.octopus.com

Major April 1, 2026

AWS Deployment Failures for Cloud Customers

Detected by Pingoru
Apr 01, 2026, 08:36 AM UTC
Resolved
Apr 08, 2026, 04:11 AM UTC
Duration
6d 19h
Affected: Octopus Cloud
Timeline · 3 updates
  1. investigating Apr 01, 2026, 08:06 AM UTC

    We are aware of an issue affecting Octopus Cloud customers running deployments that interact with AWS, including EKS/Kubernetes targets and AWS health checks. Impacted customers may see authentication errors when running these deployments. Our team is actively investigating and working to resolve the issue. Workaround: if you are affected, adding your AWS region as an environment variable in your script steps may resolve the issue. Use export AWS_REGION= for Bash, or $env:AWS_REGION = "" for PowerShell (e.g. export AWS_REGION=ap-southeast-2 or $env:AWS_REGION = "ap-southeast-2"). We apologise for the disruption and will share further updates as our investigation progresses. If you need help in the meantime, please contact us at [email protected]

  2. investigating Apr 01, 2026, 08:36 AM UTC

    We are aware of an issue affecting Octopus Cloud customers running deployments that interact with AWS, including EKS/Kubernetes targets and AWS health checks. Impacted customers may see authentication errors when running these deployments. Our team is actively investigating and working to resolve the issue. Workaround: if you are affected, adding your AWS region as an environment variable in your script steps may resolve the issue. Use export AWS_REGION=[REGION] for Bash, or $env:AWS_REGION = "[REGION]" for PowerShell (e.g. export AWS_REGION=ap-southeast-2 or $env:AWS_REGION = "ap-southeast-2"). We apologise for the disruption and will share further updates as our investigation progresses. If you need help in the meantime, please contact us at [email protected]

  3. resolved Apr 08, 2026, 04:11 AM UTC

    This incident has been resolved.
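
The published workaround can be sketched as a Bash script step; the region value is only an example, so substitute your own:

```shell
#!/usr/bin/env bash
# Script step applying the workaround from the incident updates above:
# pin the AWS region via an environment variable so AWS-dependent steps
# (EKS/Kubernetes targets, health checks) can resolve a regional endpoint.
# ap-southeast-2 is an example region; substitute your own.
export AWS_REGION=ap-southeast-2
echo "AWS_REGION is set to: $AWS_REGION"
```

The PowerShell equivalent from the same update is $env:AWS_REGION = "ap-southeast-2".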

Read the full incident report →

Major March 30, 2026

Deployments using AWS-dependent resources may fail with “No RegionEndpoint or ServiceURL configured.”

Detected by Pingoru
Mar 30, 2026, 12:43 PM UTC
Resolved
Mar 30, 2026, 12:43 PM UTC
Duration
Affected: Octopus Cloud
Timeline · 1 update
  1. resolved Mar 30, 2026, 12:43 PM UTC

    We have identified the cause of this issue and have produced a fix in Octopus version 2026.2.3825. Please email [email protected] if you are seeing AWS region endpoint errors in your deployments similar to the one in the title and we can discuss upgrade options.

Read the full incident report →

Major March 24, 2026

gRPC port (8443) shows Octopus Cloud instance is Undergoing Maintenance

Detected by Pingoru
Mar 24, 2026, 04:13 AM UTC
Resolved
Mar 25, 2026, 06:09 AM UTC
Duration
1d 1h
Affected: Octopus Cloud
Timeline · 4 updates
  1. investigating Mar 24, 2026, 04:13 AM UTC

    We have identified the cause and are working to resolve the issue as quickly as possible. We will provide another update once it is resolved.

  2. identified Mar 24, 2026, 04:13 AM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Mar 24, 2026, 05:58 AM UTC

    We have a resolution for the issue, and it should be rolled out to customer instances during their next maintenance windows. If this blocks your deployments in the meantime, please reach out to support.

  4. resolved Mar 25, 2026, 06:09 AM UTC

    This incident has been resolved. Please don't hesitate to reach out to our support team if you are still having any related problems.

Read the full incident report →

Major February 26, 2026

Emails are currently not being delivered

Detected by Pingoru
Feb 26, 2026, 01:51 AM UTC
Resolved
Feb 26, 2026, 05:47 AM UTC
Duration
3h 55m
Affected: Octopus Cloud · Control Center (billing.octopus.com) · Sign-in/Sign-up (octopus.com)
Timeline · 4 updates
  1. identified Feb 26, 2026, 01:51 AM UTC

    Our upstream email provider is currently experiencing issues delivering emails from Octopus. This impacts all emails, including email verification during authentication, Octopus Cloud subscription invitations, and billing notifications. We're currently investigating and will update this page when we know more.

  2. identified Feb 26, 2026, 05:11 AM UTC

    We are continuing to work on a fix for this issue.

  3. identified Feb 26, 2026, 05:32 AM UTC

    Sign-in/sign-up emails have been restored. Note: emails will now be sent from [email protected]

  4. resolved Feb 26, 2026, 05:47 AM UTC

    Email services have now been restored. Please contact [email protected] if you encounter any issues with email delivery.

Read the full incident report →

Critical February 2, 2026

Ubuntu Dynamic Workers are failing to lease

Detected by Pingoru
Feb 02, 2026, 09:31 PM UTC
Resolved
Feb 03, 2026, 09:38 PM UTC
Duration
1d
Affected: Octopus Cloud
Timeline · 6 updates
  1. investigating Feb 02, 2026, 09:31 PM UTC

    Octopus Cloud instances are failing to lease Ubuntu Dynamic Workers due to an issue with our upstream provider. We are currently investigating and working on mitigating the issue.

  2. identified Feb 02, 2026, 10:09 PM UTC

    Azure have identified the following issue on their side that affects the Dynamic Workers: "Virtual Machines and dependent services - Service management issues in multiple regions". They are actively working to mitigate impact and expect it to be resolved by approximately 00:00 UTC. See the Azure status page for additional details: https://azure.status.microsoft/en-gb/status

  3. monitoring Feb 02, 2026, 11:46 PM UTC

    Azure have rolled out a fix for this issue and we are seeing Dynamic Workers return to normal operation across all regions.

  4. monitoring Feb 02, 2026, 11:49 PM UTC

    We are continuing to monitor for any further issues.

  5. resolved Feb 03, 2026, 09:38 PM UTC

    Azure has resolved the issue that was causing our Dynamic Workers lease failures and we haven't seen any additional failures since yesterday.

  6. postmortem Feb 13, 2026, 04:58 AM UTC

    # Summary

    On 2 Feb 2026, between 20:13:34 and 22:56:04 UTC, Octopus Cloud customers in `West US 2` and `West Europe` may have experienced failed deployments or failed runbook runs due to `Ubuntu Dynamic Worker` steps failing on leasing timeout. This disruption was caused by Azure failing to provision Virtual Machines across multiple regions - see Azure Issue `FNJ8-VQZ` on [Azure Status History](https://azure.status.microsoft/en-us/status/history/).

    # Background

    Octopus Cloud [Dynamic Workers](https://octopus.com/docs/octopus-cloud/dynamic-worker) are isolated virtual machines that we provide as part of our Octopus Cloud Subscription offering as a way to execute deployment and runbook steps and scripts without needing to run on the Octopus Server or deployment targets themselves. Customers can use both Windows and Ubuntu Dynamic Workers. Octopus provides a [dynamic worker pool](https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools) of these virtual machine types from which, as required by your deployment/runbook steps, your Octopus Cloud instance will exclusively lease a freshly provisioned dynamic worker VM for a limited time.

    ## Dynamic Workers Lifecycle

    1. **Provisioning** - a new Azure Virtual Machine is provisioned, using the requested [Dynamic Worker image](https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools#dynamic-worker-images) (Windows/Ubuntu with a set of pre-installed tools).
    2. **Pool** - the newly created Dynamic Worker (VM) is placed in a worker pool until an instance requests a Dynamic Worker in one of its deployment/runbook steps. Each region (US, Europe and Australia) has separate pools for Windows and Ubuntu workers, with additional standby pools that can be turned on during a temporary outage in an Azure region (see below). Octopus Cloud continuously monitors the pools' levels and provisions new workers automatically to keep them full.
    3. **Leasing** - when a [deployment/runbook that uses a Dynamic Worker](https://octopus.com/docs/infrastructure/workers#where-steps-run) starts, the Octopus Server requests a new worker from the appropriate pool, unless the instance already has a leased worker (in which case it continues using that worker, extending its lease for the new run). The worker is exclusively leased to that instance until it is no longer needed (i.e. unused for an hour) or until its maximum lifespan is reached (3 days by default).
    4. **Deletion** - after a worker is no longer needed or has reached its maximum lifespan, it is considered expired and is deleted automatically. The next time the same instance requires a worker, it leases a new one from the pool (see above).

    ## System Resilience & Safeguards

    Octopus Cloud implements multiple layers of protection to ensure Dynamic Workers' availability and minimize customer impact during service disruptions:

    * **Pre-provisioned Worker Pools:** we maintain multiple [dynamic worker pools](https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools) (for the different virtual machine OSes and sizes) with ready-to-use workers, providing immediate availability when deployments/runbooks are triggered rather than waiting for on-demand provisioning. This also provides a safety buffer when we can't provision new Dynamic Workers due to temporary outages.
    * **Standby Services in Multiple Regions:** we maintain standby Dynamic Worker services in alternate Azure regions that we can activate to provide continuity when a primary region experiences issues.

    # Timeline and Impact

    All dates and times below are in UTC.

    **Feb 2 2026:**

    * **18:52:** 1st failed Dynamic Worker provisioning. At this point our Dynamic Workers Service continued supplying workers successfully from the pools; however, since we couldn't provision new workers, the pools started depleting.
    * **20:13:** 1st Ubuntu Dynamic Worker lease failed in the `West US 2` Azure region once its pool was depleted - start of customer impact.
    * **20:47:** Octopus on-call was paged after 3 Dynamic Worker lease requests failed in `West US 2`. The on-call started the incident to investigate the issue.
    * **20:47-21:20:**
      * Octopus engineers started setting up a Dynamic Workers Service in a different US Azure region to mitigate the issue.
      * During the investigation we saw that Dynamic Workers were also failing to provision in the `West Europe` and `East Australia` Azure regions. At this point we realized this was a multi-region Azure outage and decided to open a support ticket with them.
      * Octopus engineers turned off non-essential services (e.g. instance upgrades) to preserve the available Dynamic Workers for customer use.
    * **21:24:** An on-call engineer opened a Sev A support ticket with Azure.
    * **21:31:** We published the initial partial outage alert for Octopus Cloud on [https://status.octopus.com/](https://status.octopus.com/).
    * **21:33:** 1st Dynamic Worker lease failed in `West Europe`.
    * **21:42:** Azure acknowledged the multi-region issue and reported that they were investigating it.
    * **22:56:** We saw the last Dynamic Worker lease failure. After this time we were able to provision the required Virtual Machines and fulfil all lease requests successfully.
    * **23:46:** After verifying that all Dynamic Worker pools had been restored and seeing no additional provisioning failures, we updated the incident status to "Mitigated" and updated the status page.

    **Feb 3 2026:**

    * **6:05:** Azure confirmed that the issue was fully resolved on their side.
    * **21:38:** Incident was resolved and the status page updated.

    # Technical Details

    Octopus Cloud uses Azure Virtual Machines to supply Dynamic Workers to customers. During this Azure outage we couldn't provision new Azure Virtual Machines for our Ubuntu Dynamic Workers. Our pre-provisioned Dynamic Worker pools continued supplying workers for an additional:

    * 1 hour 21 minutes in `West US 2`, and
    * 2 hours 41 minutes in `West Europe`

    before they were depleted and customer requests for new Dynamic Workers started failing. It's worth noting that `East Australia` customers were not impacted because the pools in that region didn't deplete during the incident. Additionally, customers that already had a Dynamic Worker leased at the time of the outage were not impacted (unless their Dynamic Worker expired and a new one was requested during the incident).

    We made preparations to switch to our Dynamic Workers Standby Service in different Azure regions. However, once we realized this was a multi-region outage, we reverted the switch since it wouldn't have resolved the issue. Once we saw that we couldn't supply Dynamic Workers from our standby regions, we turned off non-essential services (e.g. instance upgrades) to preserve the available Dynamic Workers for customer use. Once Azure mitigated the issue on their side, our system recovered automatically and resumed providing new Dynamic Workers successfully.

    # Remediation

    Octopus takes service availability seriously. Despite the difficulty of upstream cloud provider outages, especially ones that are widespread across multiple regions, we fully review and remediate any outages that occur, so that we're continuously improving and maintaining the best possible service we can. Following our post mortem, we identified the following improvements to our system to help identify and mitigate (where possible) similar issues earlier:

    * **Page an on-call earlier** - add an alert to page the on-call when multiple Dynamic Workers in the same region fail to provision. This will allow us to detect similar incidents more quickly and give us more time to mitigate, before any Dynamic Worker leases fail and customers are impacted.
    * **Improve our Dynamic Workers Incident Playbook** to identify multi-region incidents more quickly, so we can engage Azure support earlier to resolve the root cause of similar incidents.

    # Conclusion

    We apologize to our customers for any disruption and inconvenience as a result of this incident. We have started work on the identified remediations to ensure that we can detect similar incidents more quickly and reduce the impact on our customers as much as possible.
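
The buffering behaviour the postmortem describes, where pre-provisioned pools keep serving leases during a provisioning outage until they deplete, can be sketched as a toy shell model (the pool size and request count are illustrative, not Octopus internals):

```shell
#!/usr/bin/env bash
# Toy model: leases draw from a pre-provisioned buffer of workers; once
# upstream provisioning fails, leases keep succeeding only until that
# buffer is depleted, and then customer lease requests start failing.
available=3        # illustrative pre-provisioned workers in the pool
results=""
for request in 1 2 3 4 5; do      # five lease requests during the outage
  if [ "$available" -gt 0 ]; then
    available=$((available - 1))  # lease succeeds from the buffer
    results="$results ok"
  else
    results="$results fail"       # buffer depleted: the lease times out
  fi
done
results="${results# }"
echo "$results"   # ok ok ok fail fail
```

This is the dynamic the incident timeline shows: provisioning first failed at 18:52, but customer impact only began at 20:13, once the `West US 2` buffer ran dry.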

Read the full incident report →

Looking to track Octopus downtime and outages?

Pingoru polls Octopus's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Octopus reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Octopus alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Octopus for free

5 free monitors · No credit card required