Cornerstone experienced an incident on January 28, 2026 affecting Uptime and Response Time, lasting 4h 24m. The incident has been resolved; the full update timeline is below.
Affected components
Uptime, Response Time
Update timeline
- investigating Jan 28, 2026, 07:49 PM UTC
Some customers on the US SL2 swimlane may experience intermittent timeouts while accessing or navigating the application. We are actively investigating the issue and will share another update as soon as more information becomes available.
- investigating Jan 28, 2026, 08:42 PM UTC
Our investigation into the disruption is ongoing, and teams remain actively engaged in assessing the issue. We will provide another update as more information becomes available.
- identified Jan 28, 2026, 08:59 PM UTC
The issue has been identified, and teams are actively working on implementing a fix to restore normal swimlane operations. We will share an additional update as we make progress.
- identified Jan 28, 2026, 11:06 PM UTC
Cornerstone engineering has identified the issue and is currently developing a fix. We will share updates here at least every 2 hours or as they become available.
- monitoring Jan 28, 2026, 11:16 PM UTC
Cornerstone engineering has applied a fix, and the issue is no longer reproducible. We will leave this incident in a monitoring status for 1 hour; if no further instances occur, we will update it to Resolved.
- resolved Jan 29, 2026, 12:13 AM UTC
After a period of monitoring with no repeat occurrences, we are considering this issue resolved.
- postmortem Feb 12, 2026, 07:49 PM UTC
**Issue Summary:** On January 28, 2026, clients hosted in the US SL2 Production environment experienced periods of degraded portal performance. During the affected window, users encountered intermittent latency and occasional login failures impacting portal accessibility.

**Root Cause:** The incident was caused by unintended application behavior that generated an unusually high volume of repeated internal service requests. This excessive request activity created resource contention across the platform, reducing overall system responsiveness and leading to intermittent timeouts during user login attempts.

**Corrective Action:** Engineering teams identified and mitigated the source of the repeated service activity. Impacted application components were recycled, and stabilization measures were applied to restore system performance. Following remediation, portal access and login functionality were validated and confirmed to be operating normally.

**Preventive Measures:** To reduce the likelihood of recurrence, the following actions were implemented:

* Enhanced Monitoring & Alerting: Strengthened detection of abnormal request patterns to enable earlier identification and intervention.
* Improved Resource Visibility: Expanded monitoring of system resource utilization and contention to support swifter response.
* Application Behavior Review: Reviewed and refined application logic and recovery procedures to prevent excessive internal request generation.