Centrify incident
Privilege Manager Cloud: US - 502 Bad Gateway or Sign-in prompt
Centrify experienced a major incident affecting Privilege Manager Cloud on November 11, 2025, lasting 2h 9m. The incident has been resolved; the full update timeline is below.
Affected components
- Privilege Manager Cloud: US
Update timeline
- investigating Nov 11, 2025, 04:31 AM UTC
We are currently investigating a service disruption in the US region affecting Privilege Manager Cloud tenants. Our engineering teams are fully engaged and working to restore normal service as quickly as possible. We apologize for the inconvenience and appreciate your patience while we investigate.
- identified Nov 11, 2025, 05:37 AM UTC
The issue has been identified and a fix is being implemented.
- monitoring Nov 11, 2025, 05:54 AM UTC
A fix has been implemented and we are monitoring the results.
- resolved Nov 11, 2025, 06:41 AM UTC
We are pleased to inform you that the incident affecting Privilege Manager Cloud in the US region has been resolved. Our team has implemented a fix, and all systems are now operating normally. We apologize for any inconvenience this incident may have caused, and we appreciate your understanding and support. For any questions or concerns, please reach out to our support team at https://support.delinea.com.
- postmortem Dec 05, 2025, 07:15 AM UTC
**Incident Overview**

On November 11, 2025, customers in the US region experienced a service disruption when accessing Privilege Manager Cloud (PMC). Affected users encountered error messages or unexpected sign-in prompts when attempting to access their accounts. The impact occurred during the following time window:

* November 11, 2025, 03:11–05:49 UTC

**Root Cause**

The issue was caused by a disruption in the database infrastructure that stores customer information. Our application services were unable to connect to the database, preventing users from accessing their accounts. The disruption occurred because:

* Unplanned maintenance activities created an overload on our system
* A timing issue in our internal load management process caused it to stop responding
* The system became stuck in an unhealthy state and required a manual reset to recover

Our monitoring systems detected the database connectivity issues shortly after they began. Standard recovery procedures, such as increasing system capacity and restarting services, did not resolve the problem, indicating a deeper issue with the database infrastructure.

**Resolution**

Our team performed a system failover to restore service availability. Normal operations resumed once the database connections were re-established and all services returned to a healthy state.

**Preventive Actions**

To strengthen reliability and minimize future impact, we are implementing the following measures:

* Fixing the underlying issue to prevent our system from becoming stuck during high-load situations
* Adding safeguards to ensure our load management process handles stress conditions more gracefully
* Enhancing our monitoring and alerting systems to detect database connectivity problems more quickly
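As a rough illustration of the kind of faster database-connectivity detection the last preventive action describes, the sketch below probes a database endpoint with a short timeout and alerts after a few consecutive failures. This is a minimal sketch only; the host, port, thresholds, and alerting path are hypothetical placeholders, not Delinea's actual infrastructure or tooling.

```python
"""Illustrative sketch: a minimal database-connectivity probe.
All endpoints and thresholds are hypothetical, not Delinea's configuration."""

import socket
import time

DB_HOST = "db.example.internal"   # hypothetical database endpoint
DB_PORT = 5432                    # hypothetical port (PostgreSQL default)
PROBE_TIMEOUT_S = 3.0             # fail fast instead of hanging on a stuck service
ALERT_AFTER_FAILURES = 3          # consecutive failures before raising an alert
PROBE_INTERVAL_S = 15             # seconds between probes


def probe_once(host: str, port: int, timeout: float) -> bool:
    """Return True if a TCP connection to the database endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers timeouts, refused connections, and DNS failures alike.
        return False


def run_probe_loop() -> None:
    consecutive_failures = 0
    while True:
        if probe_once(DB_HOST, DB_PORT, PROBE_TIMEOUT_S):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= ALERT_AFTER_FAILURES:
                # A real system would page on-call or open an incident here.
                print(f"ALERT: {consecutive_failures} consecutive connectivity "
                      f"failures to {DB_HOST}:{DB_PORT}")
        time.sleep(PROBE_INTERVAL_S)


if __name__ == "__main__":
    run_probe_loop()
```

The short probe timeout matters for an incident like this one: the load management process hung rather than returning errors, so a check that waits on default TCP timeouts would surface the stuck state far more slowly than one that fails fast and counts consecutive failures.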