Thycotic incident
Secret Server Cloud: US - Intermittent Application errors
Update timeline
- identified Apr 24, 2026, 05:58 PM UTC
We have identified the cause of the current issues impacting Secret Server Cloud in the US region. An ongoing cloud provider incident is affecting an availability zone hosting workloads in our US East region. Customers may encounter HTTP 500 errors when accessing the Secret Server UI, with messages similar to the following:

"Http failure response for https://.secretservercloud.com/... : 500 OK"

To restore service, we are failing over traffic to our secondary region. Customers may experience brief connectivity errors during the failover. We will continue to provide updates as the failover progresses. We apologize for the disruption and thank you for your patience.
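Scripted API clients hitting these transient 500s can often ride out a failover like this with retries rather than failing outright. A minimal sketch in Python, assuming the third-party `requests` library; the function name, parameters, and commented-out URL/endpoint are illustrative, not part of any Delinea tooling:

```python
import time

import requests


def get_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0) -> requests.Response:
    """Retry transient HTTP 5xx responses with exponential backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            # 5xx responses during a failover are usually transient;
            # anything else (2xx, 3xx, 4xx) is returned to the caller.
            if resp.status_code < 500:
                return resp
            last_error = f"HTTP {resp.status_code}"
        except requests.ConnectionError as exc:
            # Brief connection drops are expected while traffic moves.
            last_error = str(exc)
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError(f"{url} still failing after {attempts} attempts: {last_error}")


# Hypothetical usage -- substitute your own tenant subdomain and endpoint:
# resp = get_with_retry("https://<tenant>.secretservercloud.com/healthcheck")
```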
- monitoring Apr 24, 2026, 07:06 PM UTC
After further investigation, we determined that a failover to our secondary region was not required. We have moved the affected resources to a different availability zone within the primary region and are observing improvement in service health. Application errors are no longer being reported. We will continue to monitor the environment closely and post a further update once the incident is fully resolved. We apologize for the disruption and thank you for your patience.
- monitoring Apr 24, 2026, 11:06 PM UTC
We are continuing to monitor for any further issues. We have not observed any application errors since 15:31 Eastern.
- resolved Apr 25, 2026, 08:56 PM UTC
This incident has been resolved.
- postmortem May 01, 2026, 01:35 PM UTC
**Incident Overview**

On April 24, 2026, a subset of Secret Server Cloud customers in the US region experienced intermittent errors or connectivity issues when accessing their tenants. The disruption was caused by an outage at our cloud infrastructure provider's East US data center. The issue originated in a single availability zone but spread to additional zones as traffic was automatically redistributed, extending the scope and duration of the incident. Full service recovery was confirmed at 00:15 UTC on April 25, 2026.

**Root Cause**

The outage impacted internal networking components that our platform depends on, disrupting connectivity across the environment. These failures caused the intermittent HTTP 500 errors that some Delinea customers experienced during the incident window. The issue originated in one of the availability zones in the East US data center and expanded to other zones as traffic was automatically redistributed, broadening the impact.

Our team responded promptly, identified the root cause at our cloud infrastructure provider, and redistributed traffic to other zones. As a precautionary measure, we initiated a failover to a secondary region, but later determined it was not needed and safely rolled it back with no additional impact. Service was fully restored after our infrastructure team redirected traffic away from the affected nodes within the primary environment, with full recovery confirmed at approximately 00:15 UTC on April 25, 2026.

**Preventive Actions**

* Evaluate multi-availability-zone node pool configurations to improve platform resilience and reduce the impact of localized data center failures.
* Establish clear thresholds and decision criteria for triggering regional failover versus in-region mitigation, improving response speed during incidents (a sketch of such a decision rule follows this list).
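The second preventive action can be made concrete with explicit, pre-agreed thresholds. A minimal sketch in Python of such a decision rule; the zone names, error-rate thresholds, and function are hypothetical illustrations under assumed SLO values, not Delinea's actual criteria:

```python
from dataclasses import dataclass


@dataclass
class ZoneHealth:
    zone: str
    error_rate: float  # fraction of requests returning 5xx
    healthy: bool      # provider-reported zone status

# Illustrative thresholds -- real values would come from SLOs.
MAX_ZONE_ERROR_RATE = 0.05  # a zone above this is considered degraded
MIN_HEALTHY_ZONES = 2       # fail over regionally below this count


def choose_mitigation(zones: list[ZoneHealth]) -> str:
    """Decide between in-region rebalancing and regional failover."""
    usable = [z for z in zones if z.healthy and z.error_rate < MAX_ZONE_ERROR_RATE]
    if len(usable) >= MIN_HEALTHY_ZONES:
        # Enough capacity remains in-region to absorb redirected traffic.
        return "rebalance-in-region"
    return "failover-to-secondary-region"


# Example mirroring this incident: one zone degraded, two still healthy,
# so in-region rebalancing suffices and no regional failover is needed.
status = [
    ZoneHealth("eastus-1", error_rate=0.40, healthy=False),
    ZoneHealth("eastus-2", error_rate=0.01, healthy=True),
    ZoneHealth("eastus-3", error_rate=0.02, healthy=True),
]
print(choose_mitigation(status))  # -> rebalance-in-region
```

Codifying the rule this way removes the mid-incident judgment call that, in this case, led to a precautionary failover that later had to be rolled back.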