Cornerstone incident

Service Disruption: CDG SL1: 'Service Unavailable' Error while accessing/working on the platform

Severity: Critical · Status: Resolved
Started: Feb 17, 2026, 02:17 PM UTC
Resolved: Feb 17, 2026, 05:39 PM UTC
Duration: 3h 22m
Detected by Pingoru: Feb 17, 2026, 02:17 PM UTC

Affected components

Uptime, Response Time

Update timeline

  1. investigating Feb 17, 2026, 02:17 PM UTC

    The CDG SL1 swimlane is experiencing a service disruption. Users may encounter the error "Service Unavailable: HTTP Error 503. The service is unavailable." This is our top priority and we are working to resolve the problem as soon as possible. Please check back periodically for additional updates, which will be posted as they become available.

  2. investigating Feb 17, 2026, 02:19 PM UTC

    We are continuing to investigate this issue.

  3. monitoring Feb 17, 2026, 02:35 PM UTC

    The issue has been identified and resolved, and services have now been restored to normal operations. We will continue to monitor this closely.

  4. resolved Feb 17, 2026, 05:39 PM UTC

    After careful monitoring, the issue has now been resolved. The CSOD Technology Team identified a service disruption impacting the CDG SL1 swimlane, which was successfully restored as of 06:19 AM Pacific Time today. A full Root Cause Analysis (RCA), including preventive measures, will be published on the Status Page within 7–10 business days. Thank you for your patience and understanding.

  5. postmortem Mar 06, 2026, 08:22 AM UTC

    **Incident Summary:** On February 17th, 2026, users were unable to log in to the portal hosted in the CDG SL1 (AWS) environment. During the impact window, users encountered Service Unavailable errors.

    **Impact:** During the incident window, portal login functionality was temporarily unavailable. Users attempting to access the portal received Service Unavailable errors. The issue was limited to the affected environment.

    **Root Cause Analysis (RCA):** The disruption was caused by multiple long-running database sessions executing the same query concurrently. The increased database resource utilization degraded overall performance, which in turn caused the application layer to return errors.

    **Resolution:** Upon identification of the issue, the engineering team:

      • Terminated the long-running database sessions causing resource contention and updated database statistics to improve query optimization and execution efficiency.
      • Temporarily adjusted database replication settings to reduce commit latency during stabilization.
      • Closely monitored system resource utilization to confirm normalization.

    Following these actions, database performance stabilized and portal access was fully restored.

    **Preventive Measures:** To reduce the likelihood of recurrence, the following measures are being implemented:

      • Reviewing and optimizing the identified query to prevent prolonged execution.
      • Enhancing monitoring to proactively detect long-running database sessions.
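Both the root cause and the preventive measures in the postmortem above come down to spotting long-running database sessions before they starve the rest of the workload. As a rough illustration of what that detection step could look like, here is a minimal sketch; the postmortem does not name the database engine, so it assumes a PostgreSQL-style catalog (pg_stat_activity, pg_terminate_backend) with the psycopg2 driver, and the connection string and 15-minute threshold are hypothetical placeholders rather than values from the incident.

```python
"""Sketch: surface (and optionally terminate) long-running database sessions.

Assumes PostgreSQL + psycopg2; the DSN and threshold are placeholders.
"""
import psycopg2

DSN = "dbname=portal user=ops_monitor host=db.internal.example"  # hypothetical
THRESHOLD_MINUTES = 15  # arbitrary cutoff chosen for this sketch

FIND_LONG_RUNNING = """
    SELECT pid, now() - query_start AS runtime, left(query, 120) AS query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND pid <> pg_backend_pid()
      AND now() - query_start > %s * interval '1 minute'
    ORDER BY runtime DESC;
"""

def report_long_running(terminate: bool = False) -> None:
    """Print sessions exceeding the threshold; terminate them only if asked."""
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(FIND_LONG_RUNNING, (THRESHOLD_MINUTES,))
        for pid, runtime, query in cur.fetchall():
            print(f"pid={pid} runtime={runtime} query={query!r}")
            if terminate:
                # Ends the offending session; the postmortem pairs this step with
                # refreshing planner statistics on the affected tables.
                cur.execute("SELECT pg_terminate_backend(%s);", (pid,))

if __name__ == "__main__":
    report_long_running(terminate=False)  # alert-only by default
```

In practice a check like this would run on a schedule as part of the enhanced monitoring the postmortem describes, with session termination left as a deliberate, operator-approved action rather than an automatic one.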

Looking to track Cornerstone downtime and outages?

Pingoru polls Cornerstone's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Cornerstone reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Cornerstone alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
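As a rough sketch of the polling model described above (not Pingoru's actual implementation), the loop below fetches a status endpoint every 5 minutes and flags indicator changes; the URL is a hypothetical statuspage.io-style endpoint, and the print call stands in for whichever email, Slack, Teams, or webhook notifier you wire up.

```python
"""Minimal sketch of a 5-minute status poller (hypothetical endpoint)."""
import time
import requests

STATUS_URL = "https://status.example.com/api/v2/status.json"  # placeholder URL
POLL_SECONDS = 300  # 5 minutes, matching the cadence described above

def current_indicator() -> str:
    """Return the page-level indicator, e.g. 'none', 'minor', 'major', 'critical'."""
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]["indicator"]

def main() -> None:
    last = "none"
    while True:
        try:
            indicator = current_indicator()
        except requests.RequestException as exc:
            print(f"poll failed: {exc}")  # transient fetch errors are logged, not alerted
        else:
            if indicator != last:
                # Replace this print with your email/Slack/Teams/webhook notifier.
                print(f"status changed: {last} -> {indicator}")
                last = indicator
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```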