Update timeline
- resolved Apr 20, 2026, 07:30 PM UTC
Failures were caused by an internal key rotation process.
- postmortem Apr 20, 2026, 07:31 PM UTC
### Postmortem

##### Impact

At least one user experienced disruptions to automated security scan executions, preventing reattack requests and scanner-driven workflows from completing during the affected window. The issue started on 2026-04-13 at 22:19 (UTC-5) and was proactively discovered 18.0 hours later (TTD) by a staff member who noticed that automated executions were not completing as expected. The problem was resolved in 58.9 minutes (TTF), for a total window of exposure of 19.0 hours (WOE).

##### Cause

A planned infrastructure change intended to consolidate our platform's internal configuration management was inadvertently scoped to resources that exist only in the testing environment. When the change was deployed to production, those dependencies were absent, preventing the backend scan-execution services from initializing; the services fell into repeated restart attempts, leaving all dependent workflows non-functional.

##### Solution

The configuration change was rolled back, restoring the previous setup. All affected services restarted successfully and automated executions resumed normal operation.

##### Conclusion

This incident led us to establish a clear rule for where each type of configuration may be applied in our infrastructure, ensuring that future consolidation efforts are validated against the live service environment before deployment.

**INFRASTRUCTURE_ERROR < INCOMPLETE_PERSPECTIVE**
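The prevention rule described in the conclusion could, in principle, be enforced with a pre-deployment check that verifies every resource a configuration change references actually exists in the target environment. The sketch below is purely illustrative and assumes a hypothetical resource naming scheme and lookup; it is not Fluid Attacks' actual tooling.

```python
# Illustrative sketch only: block a configuration deployment when it
# references resources absent from the target environment. The resource
# names and the validation entry point are hypothetical stand-ins.

from typing import Iterable


def find_missing_resources(referenced: Iterable[str], available: set[str]) -> list[str]:
    """Return every referenced resource that the target environment lacks."""
    return sorted(set(referenced) - available)


def validate_deployment(referenced: Iterable[str], available: set[str]) -> None:
    """Fail fast before deploying: missing dependencies would prevent services from initializing."""
    missing = find_missing_resources(referenced, available)
    if missing:
        raise RuntimeError(
            f"Configuration references resources absent from the target environment: {missing}"
        )


if __name__ == "__main__":
    # Example: a credential that exists only in the testing environment.
    referenced = ["scan-queue", "scanner-credentials-testing"]
    production_resources = {"scan-queue", "scanner-credentials-prod"}
    try:
        validate_deployment(referenced, production_resources)
    except RuntimeError as err:
        print(f"Blocked deployment: {err}")
```

Running such a check against the production inventory before rollout would have caught the testing-only scoping described above instead of surfacing it as failed service starts.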