Fluid Attacks incident

Service disruption affecting multiple functions

Major incident · Resolved

Fluid Attacks experienced a major incident on November 27, 2025, affecting the Platform, Cloning, and Scanning components and lasting 7h 24m. The incident has been resolved; the full update timeline is below.

Started
Nov 27, 2025, 05:15 PM UTC
Resolved
Nov 28, 2025, 12:39 AM UTC
Duration
7h 24m
Detected by Pingoru
Nov 27, 2025, 05:15 PM UTC

Affected components

Platform, Cloning, Scanning

Update timeline

  1. identified Nov 28, 2025, 12:50 PM UTC

    We have identified that several processes responsible for core services, such as updating Git roots, executing scans, and generating reports, are currently failing. Our engineering team is actively working to restore normal functionality.

  2. identified Nov 28, 2025, 12:50 PM UTC

    We have traced the source of the issue and are actively working on the necessary corrections.

  3. identified Nov 28, 2025, 12:54 PM UTC

    The solution is currently being implemented. We will provide another update once the services are fully stabilized.

  4. resolved Nov 28, 2025, 12:55 PM UTC

    The incident has been resolved, and all affected services are operating as expected.

  5. postmortem Nov 28, 2025, 01:29 PM UTC

    **Impact**

    At least one user observed failures in several features of the platform. The issue started on 2025-11-27 at 09:25 (UTC-5) and was proactively discovered 1.9 hours later (TTD) by a staff member who noticed that some processes were not running as expected. Shortly after this internal detection, customer reports began to arrive, confirming the problem. The problem was resolved in 7.6 hours (TTF), resulting in a total window of exposure of 9.6 hours (WOE) [[1]](https://gitlab.com/fluidattacks/universe/-/issues/19189). A worked check of these figures follows the timeline.

    **Cause**

    The infrastructure used by some services was decommissioned, even though those services still depended on it. This led to interruptions in functionalities related to repository cloning, reattacks, report generation, and other operations handled by the affected components [[2]](https://gitlab.com/fluidattacks/universe/-/merge_requests/89521).

    **Solution**

    New infrastructure definitions were created, and the systems were updated to use them. This included refreshing the internal configurations so that all processes pointed to the correct, active infrastructure [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/89638).

    **Conclusion**

    To prevent similar issues in the future, we are improving how infrastructure ownership is structured. Each component will become clearly responsible for the infrastructure it depends on, making those relationships visible and reducing the chances of accidental removal of shared resources; a minimal sketch of such a check appears after the timeline.

    **INFRASTRUCTURE_ERROR < INCOMPLETE_PERSPECTIVE**
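
As a note on the impact figures: TTD, TTF, and WOE are related by simple timestamp arithmetic. The sketch below reconstructs them from the reported start time, assuming WOE is measured from failure start to fix; 1.9 + 7.6 = 9.5, so the published 9.6 h was presumably rounded independently of TTD and TTF.

```python
from datetime import datetime, timedelta

# Timestamps from the postmortem, interpreted in UTC-5 (assumed).
started = datetime(2025, 11, 27, 9, 25)    # failure begins
detected = started + timedelta(hours=1.9)  # internal detection (TTD = 1.9 h)
resolved = detected + timedelta(hours=7.6) # fix deployed (TTF = 7.6 h)

ttd = (detected - started).total_seconds() / 3600
ttf = (resolved - detected).total_seconds() / 3600
woe = (resolved - started).total_seconds() / 3600

print(f"TTD = {ttd:.1f} h, TTF = {ttf:.1f} h, WOE = {woe:.1f} h")
# TTD = 1.9 h, TTF = 7.6 h, WOE = 9.5 h (the report rounds this to 9.6 h)
```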
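
The postmortem does not show how the new ownership model will be implemented, but the idea in the conclusion can be illustrated with a minimal sketch: keep an explicit map from each component to the infrastructure it depends on, and refuse to decommission anything still referenced. All component and resource names below are hypothetical, not taken from the universe repository.

```python
# Hypothetical ownership map: component -> infrastructure it depends on.
DEPENDENCIES = {
    "cloning":   {"batch-queue", "git-roots-bucket"},
    "scanning":  {"batch-queue", "scan-workers"},
    "reporting": {"report-generator"},
}

def safe_to_decommission(resource: str) -> bool:
    """Return True only if no component still depends on the resource."""
    owners = sorted(c for c, deps in DEPENDENCIES.items() if resource in deps)
    if owners:
        print(f"BLOCKED: {resource!r} is still used by {', '.join(owners)}")
        return False
    return True

# A removal like the one behind this incident would be rejected up front:
assert not safe_to_decommission("batch-queue")  # shared, still referenced
assert safe_to_decommission("legacy-cache")     # truly unused, safe to remove
```

Making the relationship explicit in one place is what turns an accidental removal into a visible, reviewable failure at change time rather than an outage at runtime.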