Affected components
Update timeline
- investigating Apr 29, 2026, 03:22 PM UTC
We are currently investigating an issue where some runs are stuck in a pending approval state without the ability to approve them.
- identified Apr 29, 2026, 03:22 PM UTC
The issue has been identified and a fix is being implemented.
- monitoring Apr 29, 2026, 05:04 PM UTC
A fix has been implemented and we are monitoring the results.
- identified Apr 29, 2026, 05:08 PM UTC
The issue has been identified and a fix is being implemented.
- monitoring Apr 29, 2026, 05:11 PM UTC
We are continuing to monitor for any further issues.
- resolved Apr 29, 2026, 06:57 PM UTC
This incident has been resolved.
- postmortem Apr 30, 2026, 06:03 PM UTC
**Update #1** On April 28 at approximately 15:00 UTC and again on April 29 at approximately 15:22 UTC, a load spike affected our internal task queue, which slowed down the mechanism Scalr uses to transition runs between pipeline stages. In some cases, the transition task failed before it could complete, leaving runs stuck in a waiting state with no automatic recovery path. Both times, the queues cleared on their own while our engineering team investigated. We have already shipped an improvement (released April 30) that makes this transition task significantly more resilient to lock contention, greatly reducing the likelihood of runs getting stuck. We are continuing to investigate the underlying cause of the queue spikes and will share further updates as the investigation progresses.
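The postmortem does not describe the fix in detail, but a common way to make a task "more resilient to lock contention" is to retry it with exponential backoff and jitter instead of failing once and leaving the run stranded. The sketch below is purely illustrative and assumes a hypothetical `LockContentionError` and transition callable; it is not Scalr's actual implementation.

```python
import random
import time


class LockContentionError(Exception):
    """Hypothetical error raised when the queue row lock cannot be acquired."""


def transition_run_with_retries(transition, max_attempts=5, base_delay=0.1):
    """Run a stage-transition task, retrying on lock contention.

    Uses exponential backoff with jitter so that, during a load spike,
    contending workers spread out their retries instead of the task
    failing once and leaving the run stuck with no recovery path.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return transition()
        except LockContentionError:
            if attempt == max_attempts:
                raise  # surface the failure instead of silently stalling
            # delay doubles each attempt; jitter desynchronizes workers
            time.sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random()))
```

With this pattern, a transient spike that causes a few contention failures is absorbed by the retries, and only a sustained failure is surfaced to operators.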