Fluid Attacks incident

Platform access failure

Critical · Resolved
Started: Mar 18, 2026, 01:30 PM UTC
Resolved: Mar 18, 2026, 07:45 PM UTC
Duration: 6h 15m
Detected by Pingoru: Mar 18, 2026, 01:30 PM UTC

Affected components

Platform

Update timeline

  1. identified Mar 25, 2026, 01:44 PM UTC

    Some users could not log into the platform: the authentication process remained stuck in an indefinite loading state.

  2. resolved Mar 25, 2026, 01:46 PM UTC

    The incident has been resolved and users can now log into the platform successfully. The authentication process is working as expected.

  3. postmortem Mar 25, 2026, 01:48 PM UTC

    **Impact**

    The WAF misconfiguration affected all platform and API users of [db.fluidattacks.com](http://db.fluidattacks.com/). The issue started on 2026-03-17 21:22 UTC-5 and was discovered reactively 11 hours later (TTD) by a customer who reported through our help desk [[1]](https://help.fluidattacks.com/agent/fluid4ttacks/fluid-attacks/tickets/details/944043000065432186) that they could not access the platform. Even after two partial policy adjustments, internal users remained unable to upload evidence or create new weaknesses until the full revert was applied. The problem was resolved in 6.2 hours (TTF), for a total window of exposure of 17.2 hours (WOE) [[2]](https://gitlab.com/fluidattacks/universe/-/work_items/21597).

    **Cause**

    The issue was caused by a WAF configuration change that switched the enforcement mode from logging to Cloudflare Managed Challenges. While logging mode passively records non-compliant requests, Managed Challenges actively blocks or challenges them. The change was applied without a full assessment of its impact on production traffic patterns, so a large volume of legitimate requests from both external users and internal staff was blocked or challenged by Cloudflare [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/98392).

    **Solution**

    Two intermediate policy adjustments were applied on 2026-03-18 at 10:27:35 and 11:50:07 UTC-5, progressively reducing the scope of blocked traffic, but internal users continued to experience degraded functionality. Full service was restored at 2026-03-18 14:37:19 UTC-5 by reverting the WAF enforcement mode back to logging, which eliminated the blocking entirely. A proper evaluation of WAF rules and their impact on production traffic should be conducted before any active enforcement mode is reintroduced [[4]](https://gitlab.com/fluidattacks/universe/-/merge_requests/98462).
**Conclusion**

This incident highlights the importance of understanding the production impact of WAF enforcement modes before applying them. Switching from logging to an active challenge/block mode without a staged rollout, impact assessment, or real-time alerting led to a prolonged disruption for all users. Future WAF changes must include pre-deployment traffic analysis, staged rollouts, post-change validation, and alerting on blocked-request rates so regressions are detected within minutes rather than hours. **INFRASTRUCTURE\_ERROR < INCOMPLETE\_PERSPECTIVE**
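The pre-deployment traffic analysis the postmortem calls for can be approximated by replaying logging-mode WAF data before enforcement is switched on: if a rule in logging mode is already matching a meaningful share of legitimate traffic, promoting it to Managed Challenge will disrupt users. A minimal sketch in Python; the log shape (`matched_rules` field) and the 1% disruption budget are illustrative assumptions, not Fluid Attacks' or Cloudflare's actual tooling:

```python
# Dry-run check: estimate how much production traffic a WAF rule would
# disrupt if its action were switched from "log" to an active mode such
# as "managed_challenge". Log format and threshold are hypothetical.

def blocked_fraction(log_entries, rule_id):
    """Fraction of sampled requests that matched `rule_id` while the
    rule was still in passive logging mode."""
    total = len(log_entries)
    if total == 0:
        return 0.0
    matched = sum(1 for e in log_entries if rule_id in e.get("matched_rules", []))
    return matched / total

def safe_to_enforce(log_entries, rule_id, max_fraction=0.01):
    """Only promote the rule to active enforcement if fewer than
    `max_fraction` of recent requests would be challenged or blocked."""
    return blocked_fraction(log_entries, rule_id) < max_fraction

# Example: 2 of 4 sampled requests match the rule -> far too disruptive.
sample = [
    {"matched_rules": ["rule-123"]},
    {"matched_rules": []},
    {"matched_rules": ["rule-123"]},
    {"matched_rules": []},
]
print(safe_to_enforce(sample, "rule-123"))  # False: 50% would be challenged
```

Had a check like this gated the merge request that flipped the enforcement mode, the volume of legitimate traffic about to be challenged would have been visible before deployment.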

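The blocked-request alerting the conclusion recommends can be as simple as a sliding-window rate check on WAF block/challenge events. A sketch with assumed window and threshold values (not Pingoru's or Cloudflare's real alerting):

```python
from collections import deque

class BlockedRequestAlarm:
    """Fires when blocked requests within the last `window_s` seconds
    exceed `threshold`. Window and threshold are illustrative values."""

    def __init__(self, window_s=300, threshold=100):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()  # timestamps of blocked requests

    def record_block(self, ts):
        """Record one blocked request at time `ts` (seconds).
        Returns True when the alarm should fire (page the on-call)."""
        self.events.append(ts)
        cutoff = ts - self.window_s
        while self.events and self.events[0] < cutoff:
            self.events.popleft()
        return len(self.events) > self.threshold
```

Feeding more than 100 blocked requests within five minutes trips the alarm, turning the 11-hour, customer-reported TTD seen in this incident into a detection time of minutes.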
Looking to track Fluid Attacks downtime and outages?

Pingoru polls Fluid Attacks's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Fluid Attacks reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Fluid Attacks alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Fluid Attacks for free

5 free monitors · No credit card required