Fluid Attacks Outage History

Fluid Attacks is up right now

There have been 9 Fluid Attacks outages since February 9, 2026, totaling 229h 59m of downtime. Each is summarized below: incident details, duration, and resolution information.

Source: https://status.fluidattacks.com

Notice April 27, 2026

Unable to Create GitHub Organization Credentials on the Platform

Detected by Pingoru
Apr 27, 2026, 11:30 PM UTC
Resolved
Apr 27, 2026, 11:30 PM UTC
Duration
Timeline · 2 updates
  1. resolved Apr 28, 2026, 09:16 PM UTC

    GitHub integration features were temporarily unavailable due to credentials being unintentionally removed during a scheduled rotation. The issue was identified and resolved by the engineering team.

  2. postmortem Apr 28, 2026, 09:19 PM UTC

    ## Postmortem

    ### Impact

    Service availability was temporarily affected for users attempting to access GitHub-integrated features on the platform. Specifically, operations that depend on GitHub organization credentials, such as connecting GitHub accounts or repositories, were unavailable during the incident window. The issue began on April 27, 2026 at 18:26 (UTC-5) and was identified proactively by our engineering team within minutes of onset. Full service was restored by April 28, 2026 at 10:22 (UTC-5), resulting in a total window of exposure of approximately 15.8 hours (WOE).

    ### Cause

    During a scheduled maintenance task involving the rotation of internal service credentials, a subset of credentials required for GitHub integrations was unintentionally removed from our secrets management system. The removal was based on an incorrect assessment that those credentials were no longer in active use. As a result, several internal processes that depend on GitHub API access stopped functioning, which in turn affected the availability of GitHub-related features on the platform.

    ### Solution

    Our engineering team identified the affected credentials and restored them as part of an emergency fix, with values rotated as a security best practice. Service was fully restored upon deployment of the fix. No credential values were exposed as part of this incident.

    ### Conclusion

    This incident underscores the importance of validating credential usage across all consumers before removal during rotation procedures. We are strengthening our rotation workflows to include automated checks that confirm a credential is fully decommissioned across all dependent systems prior to deletion, reducing the likelihood of a similar issue in the future.

    **ROTATION_FAILURE < MISSING_TEST < INCOMPLETE_PERSPECTIVE**

Read the full incident report →
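
The conclusion above calls for automated checks that confirm a credential has no remaining consumers before it is deleted during rotation. A minimal sketch of that kind of guard in Python, with an invented consumer inventory and secret names (nothing here reflects Fluid Attacks' actual tooling):

```python
# Hypothetical pre-deletion guard for a credential rotation workflow.
# The consumer inventory and secret names are illustrative only.

from dataclasses import dataclass


@dataclass
class Consumer:
    service: str
    secret_name: str


# In a real system this inventory would come from config scanning,
# secrets-manager access logs, or a service registry.
KNOWN_CONSUMERS = [
    Consumer("integrations-api", "github-org-credentials"),
    Consumer("scheduler", "smtp-password"),
]


def consumers_of(secret_name: str) -> list[str]:
    """Return the services that still reference a secret."""
    return [c.service for c in KNOWN_CONSUMERS if c.secret_name == secret_name]


def safe_delete(secret_name: str, delete_fn) -> bool:
    """Delete a secret only if no consumer still depends on it."""
    active = consumers_of(secret_name)
    if active:
        print(f"Refusing to delete {secret_name!r}: still used by {active}")
        return False
    delete_fn(secret_name)
    return True


if __name__ == "__main__":
    # The GitHub credentials are still referenced, so deletion is blocked.
    safe_delete("github-org-credentials", delete_fn=lambda name: print(f"deleted {name}"))
```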

Notice April 20, 2026

Reattack Requests Failing

Detected by Pingoru
Apr 20, 2026, 07:30 PM UTC
Resolved
Apr 20, 2026, 07:30 PM UTC
Duration
Timeline · 2 updates
  1. resolved Apr 20, 2026, 07:30 PM UTC

    Failures caused by an internal key rotation process.

  2. postmortem Apr 20, 2026, 07:31 PM UTC

    ### Postmortem

    ##### Impact

    At least one user experienced disruptions to automated security scan executions, preventing reattack requests and scanner-driven workflows from completing during the affected window. The issue started on UTC-5 26-04-13 22:19 and was proactively discovered 18.0 hours (TTD) later by a staff member who noticed that automated executions were not completing as expected. The problem was resolved in 58.9 minutes (TTF), resulting in a total window of exposure of 19.0 hours (WOE).

    ##### Cause

    A planned infrastructure change designed to consolidate our platform's internal configuration management was inadvertently scoped to resources available only in the testing environment. When deployed to production, these dependencies were absent, preventing the backend scan-execution services from initializing. The repeated restart attempts rendered all dependent workflows non-functional.

    ##### Solution

    The configuration change was rolled back, restoring the previous setup. All affected services restarted successfully and automated executions resumed normal operation.

    ##### Conclusion

    This incident led us to establish a clear rule for where different types of configuration can be applied in our infrastructure, ensuring that future consolidation efforts are validated against the live service environment before deployment.

    **INFRASTRUCTURE_ERROR < INCOMPLETE_PERSPECTIVE**

Read the full incident report →
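
The rule this postmortem establishes, validating configuration against the live environment before deployment, can be approximated with a pre-deploy check. The sketch below is illustrative only; the key names and environments are invented:

```python
# Illustrative pre-deployment check: verify that every configuration key a
# service requires actually exists in the target environment before deploying.

REQUIRED_KEYS = {"SCANNER_QUEUE_URL", "RESULTS_BUCKET", "EXECUTION_ROLE_ARN"}

ENVIRONMENTS = {
    "testing": {
        "SCANNER_QUEUE_URL": "...",
        "RESULTS_BUCKET": "...",
        "EXECUTION_ROLE_ARN": "...",
    },
    # Production is missing a key that only ever existed in testing.
    "production": {
        "SCANNER_QUEUE_URL": "...",
        "RESULTS_BUCKET": "...",
    },
}


def validate(environment: str) -> None:
    """Fail fast if the target environment lacks any required key."""
    missing = REQUIRED_KEYS - ENVIRONMENTS[environment].keys()
    if missing:
        raise SystemExit(f"{environment}: missing config keys {sorted(missing)}")
    print(f"{environment}: configuration complete")


if __name__ == "__main__":
    validate("testing")      # passes
    validate("production")   # exits with an error, blocking the deploy
```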

Notice March 31, 2026

Platform, Agent and API are down

Detected by Pingoru
Mar 31, 2026, 05:13 PM UTC
Resolved
Mar 31, 2026, 07:59 PM UTC
Duration
2h 45m
Affected: Platform, Agent
Timeline · 4 updates
  1. identified Mar 31, 2026, 05:13 PM UTC

    An issue was found in the platform, agent, and API. Click for details: https://availability.fluidattacks.com

  2. identified Mar 31, 2026, 05:39 PM UTC

    We are continuing to work on a fix for this issue.

  3. resolved Mar 31, 2026, 07:59 PM UTC

    This incident has been resolved and all 3 components are now operational.

  4. postmortem Apr 06, 2026, 10:55 PM UTC

    ### Impact

    At least 5 client organizations experienced complete unavailability of the platform at `app.fluidattacks.com`. In addition to the main platform, 2 companion services (AI SAST and the conversational AI assistant) were equally unavailable, as they share the same server infrastructure and container image format. The issue started on UTC-5 26-03-31 12:07 and was reactively discovered 1 minute (TTD) later by an internal monitoring tool that noticed that services were failing. The problem was resolved in 2.7 hours (TTF), resulting in a total window of exposure of 2.7 hours (WOE).

    ### Cause

    The platform became unavailable because the application servers could not start. A component responsible for running our application containers — managed by Amazon as part of our cloud server infrastructure — was automatically updated to a new version that introduced stricter compatibility rules. This update was not announced as a breaking change and was applied to our servers without intervention on our part, as the server image version had never been pinned during the cluster's entire lifetime. Our servers recycle automatically every 24 hours and, upon recycling, adopt the latest available server image. Starting on March 19, nodes began adopting the updated image, which included a container runtime incompatible with a configuration pattern used in our images. The incompatibility went undetected for 12 days because no alert existed for pod initialization failures in this specific failure mode [[1]](https://github.com/awslabs/amazon-eks-ami/pull/2653) [[2]](https://github.com/containerd/containerd/issues/12683).

    ### Solution

    Two categories of fixes were applied to restore service and prevent recurrence:

    **Immediate remediation:** The image construction process was updated to eliminate the incompatible file structure — replacing the problematic internal links with direct file copies that are compatible with any version of the container runtime [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99443) [[4]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99450) [[5]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99456). A secondary update propagated this fix to the platform's main service [[6]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99447) [[7]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99451). Service was fully restored at UTC-5 26-03-31 14:55.

    **Preventive controls:** The server image version is now pinned, meaning future Amazon updates will only apply when we explicitly choose to adopt them after compatibility verification [[8]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99483). Container images are now identified by a unique version identifier per code commit, enabling immediate rollback if a future issue arises. Unnecessary dependencies between deployment pipeline steps were also removed to prevent unrelated failures from blocking production deployments.

    ### Conclusion

    This incident was caused entirely by an upstream, third-party change that arrived through an automatic infrastructure update channel we had not controlled. No internal code change, deployment, or configuration modification triggered the failure. The 12-day gap between the server image adoption (March 19) and detection (March 31) reflects a missing alert for this class of failure. The corrective actions now in place — pinned server image versions, runtime-agnostic container images, and commit-tagged image identifiers — close the primary vectors that allowed this incident to occur and to persist undetected. Ongoing work to simplify the platform's operational infrastructure aims to reduce the number of moving parts that could interact unexpectedly in future upstream changes.

    **INFRASTRUCTURE_ERROR < MISSING_ALERT < THIRD_PARTY_CHANGE**

Read the full incident report →
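
The preventive control described in the postmortem is pinning the server image version so upstream updates apply only after verification. The report does not say how the pin was implemented, and the cluster may not use managed node groups at all; as one illustrative possibility, boto3 can set an explicit `releaseVersion` on an Amazon EKS managed node group so node images change only when an update is requested. The cluster name, node group name, and version string below are placeholders:

```python
# Sketch: pin an EKS managed node group to an explicit AMI release version so
# that new Amazon-published node images are adopted only deliberately.
# This is one possible approach, not necessarily the change in the postmortem.

import boto3

eks = boto3.client("eks", region_name="us-east-1")

CLUSTER = "example-cluster"
NODEGROUP = "example-nodegroup"

# Inspect the release version the node group is currently running.
current = eks.describe_nodegroup(clusterName=CLUSTER, nodegroupName=NODEGROUP)
print("current releaseVersion:", current["nodegroup"]["releaseVersion"])

# Upgrades now happen only when this call is made with a vetted version,
# rather than whenever recycled nodes pick up the newest available image.
eks.update_nodegroup_version(
    clusterName=CLUSTER,
    nodegroupName=NODEGROUP,
    releaseVersion="1.31.4-20250305",  # example value; use a verified release
)
```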

Notice March 30, 2026

Platform access failure

Detected by Pingoru
Mar 30, 2026, 06:45 PM UTC
Resolved
Mar 30, 2026, 08:58 PM UTC
Duration
2h 13m
Affected: Platform
Timeline · 4 updates
  1. identified Mar 30, 2026, 06:45 PM UTC

    Some users experienced issues when attempting to log into the platform, resulting in blank pages and failed requests. As a temporary workaround, users can enter the platform in incognito mode or guest mode and allow all cookies (not just necessary ones).

  2. monitoring Mar 30, 2026, 08:19 PM UTC

    A fix has been implemented and we are monitoring results for users.

  3. resolved Mar 30, 2026, 08:58 PM UTC

    This incident has been resolved, and access for all users is back to normal.

  4. postmortem Mar 30, 2026, 11:50 PM UTC

    ### Impact

    At least one user experienced a complete inability to access the application, resulting in a blank white screen upon interaction with the privacy settings. The issue started on UTC-5 26-03-30 13:44 and was reactively discovered 1 minute (TTD) later by one of our monitoring tools. An hour later, a staff member reproduced the problem: when users visited [app.fluidattacks.com](http://app.fluidattacks.com) and selected the "Use necessary cookies only" option on the banner, the page stayed blank and users were effectively locked out of the platform, while choosing to "Allow all cookies" permitted normal operation. The problem was resolved in 2.2 hours (TTF), resulting in a total window of exposure of 2.2 hours (WOE).

    ### Cause

    The disruption was caused by an automated security feature within our third-party cookie management service, Cookiebot. This service uses an "auto-blocking" mode designed to prevent tracking scripts from running until a user provides consent. During a routine automated scan, the service misidentified our primary application code—the "engine" that runs the interface—as a marketing-related tracking tool. Consequently, when a user declined marketing cookies, the third-party service intercepted our application's main script and changed its instructions so the browser would ignore it. This prevented the application from starting, leaving the user with an empty page. This classification error occurred within the third party's logic due to outdated or incorrect scan data and was not triggered by any internal changes to our code.

    ### Solution

    The immediate issue was temporarily resolved by manually triggering a fresh scan of our web address within the Cookiebot management dashboard. This forced the service to re-evaluate our application scripts, correctly identifying the main bundle as essential rather than promotional. Minutes later, however, another scan reclassified the main bundle as promotional. The problem was ultimately solved by moving away from the "automatic" blocking mode in favor of a "manual" configuration, where we explicitly tell the service exactly which scripts are for tracking and which are essential for the application to function [[1]](https://gitlab.com/fluidattacks/universe/-/merge_requests/99348).

    ### Conclusion

    This incident highlights the risks of relying on automated third-party classification systems for critical application delivery. By moving toward a more fine-grained and explicit configuration of our privacy layers, we will ensure that security and usability remain consistent regardless of a user's cookie preferences. Our systems are currently stable, and we remain dedicated to refining our external integrations to maintain the highest levels of availability.

Read the full incident report →

Critical March 9, 2026

Fluid Attacks DB portal unavailable

Detected by Pingoru
Mar 09, 2026, 03:30 PM UTC
Resolved
Mar 09, 2026, 06:17 PM UTC
Duration
2h 47m
Affected: Criteria
Timeline · 3 updates
  1. identified Mar 18, 2026, 08:23 PM UTC

    The Fluid Attacks DB portal is currently unavailable and cannot be accessed by users.

  2. resolved Mar 18, 2026, 08:26 PM UTC

    The incident has been resolved and the Fluid Attacks DB portal is now fully available and operating normally. Users can access the platform without issues and resume their regular activities.

  3. postmortem Mar 18, 2026, 08:31 PM UTC

    **Impact**

    At least one user experienced issues accessing [db.fluidattacks.com](http://db.fluidattacks.com), which was not loading correctly in the browser. The issue started on UTC-5 26-03-06 21:48 and was proactively discovered 2.5 days (TTD) later by a staff member who noticed that the application failed to render properly due to blocked resources, making the platform unusable. The problem was resolved in 2.4 hours (TTF), resulting in a total window of exposure of 2.6 days (WOE) [[1]](https://gitlab.com/fluidattacks/universe/-/work_items/19249).

    **Cause**

    The issue was caused by an incompatibility between a newly implemented caching mechanism and the existing Content Security Policy (CSP) headers. While the change worked correctly in the local environment, differences between local and production environments (specifically CSP enforcement) caused resources to be blocked in production, preventing proper rendering [[2]](https://gitlab.com/fluidattacks/universe/-/merge_requests/97294).

    **Solution**

    The change was reverted to restore normal operation of the DB. After stabilizing the platform, the caching implementation approach was revised to ensure compatibility with the CSP configuration before being reintroduced [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/97355).

    **Conclusion**

    This incident highlights the importance of understanding and validating CSP behavior in production environments. Future changes must consider CSP constraints to avoid introducing features that are incompatible with existing security policies.

    **INFRASTRUCTURE_ERROR < INCOMPLETE_PERSPECTIVE**

Read the full incident report →
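
The takeaway here is to validate CSP behavior before a change reaches production. A minimal sketch of that kind of check, with example header values and origins rather than the actual Fluid Attacks policy:

```python
# Minimal sketch: given the Content-Security-Policy a deployment will serve,
# confirm that the origins a new feature depends on are actually allowed.
# Directive and origin values are examples only.

def allowed_sources(csp: str, directive: str) -> set[str]:
    """Parse one directive out of a CSP header value."""
    for part in csp.split(";"):
        tokens = part.split()
        if tokens and tokens[0] == directive:
            return set(tokens[1:])
    return set()


def check_csp(csp: str, directive: str, required: set[str]) -> None:
    """Fail if any required source would be blocked by the policy."""
    missing = required - allowed_sources(csp, directive)
    if missing:
        raise SystemExit(f"{directive} would block: {sorted(missing)}")
    print(f"{directive} allows all required origins")


if __name__ == "__main__":
    production_csp = "default-src 'self'; script-src 'self' https://cdn.example.com"
    # An asset served from a new origin must appear in script-src, otherwise it
    # will be blocked in production even if it works in a local environment.
    check_csp(production_csp, "script-src", {"'self'", "https://assets.example.com"})
```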

Major March 4, 2026

Invalid token error in IntelliJ plugin

Detected by Pingoru
Mar 04, 2026, 07:00 PM UTC
Resolved
Mar 13, 2026, 09:15 PM UTC
Duration
9d 2h
Affected: Extensions
Timeline · 4 updates
  1. identified Mar 20, 2026, 12:40 PM UTC

    Users are experiencing issues when using the IntelliJ plugin, as it throws an "Invalid token" error during usage. This blocks expected functionality within the plugin.

  2. identified Mar 20, 2026, 06:53 PM UTC

    We are continuing to work on a fix for this issue.

  3. resolved Mar 24, 2026, 04:29 PM UTC

    The incident has been resolved, and the IntelliJ plugin is now working correctly.

  4. postmortem Mar 26, 2026, 04:14 PM UTC

    **Impact**

    At least one user encountered a "The token entered is invalid" error message when opening the IntelliJ plugin or adding a token. The issue started on UTC-5 25-11-12 14:13 and was reactively discovered 3.7 months (TTD) later by a customer who reported through our help desk [[1]](https://help.fluidattacks.com/agent/fluid4ttacks/fluid-attacks/tickets/details/944043000064180963) that the plugin remained unusable as vulnerabilities failed to load. The problem was resolved in 12 days (TTF), resulting in a total window of exposure of 4.1 months (WOE) [[2]](https://gitlab.com/fluidattacks/universe/-/work_items/21395).

    **Cause**

    The problem was caused by how the plugin handled large amounts of data and secure connections. The internal "engine" of the plugin was trying to perform too many tasks at once on a single pathway, causing it to freeze or "starve" for resources. Additionally, there was no set time limit for how long the plugin should wait for a response from the servers; if the server took too long to organize the data, the plugin would simply hang indefinitely. Because the plugin didn't provide any updates on what it was doing in the background, it eventually showed a generic "invalid token" error as a default failure message, even when the token itself was perfectly fine [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/88533).

    **Solution**

    We restructured the plugin's internal workflow to allow it to handle multiple tasks simultaneously without freezing the user interface. We also set a strict 30-second time limit for server requests to ensure the plugin doesn't get stuck waiting forever. To make the process faster, we changed how the plugin gathers information so it can ask for multiple pieces of data at the same time rather than one after another. Finally, we added clear status messages so users can see exactly what the plugin is doing while it loads, providing immediate feedback instead of a confusing error message [[4]](https://gitlab.com/fluidattacks/universe/-/merge_requests/98189).

    **Conclusion**

    By improving how the plugin manages its internal tasks and communication with our servers, we have made the login and loading process more reliable. Users with large amounts of data can now access their information without the plugin freezing or showing misleading errors. We will continue to monitor the loading speeds to ensure the experience remains smooth as we add more features.

    **UNHANDLED_EXCEPTION < MISSING_TEST < INCOMPLETE_PERSPECTIVE**

Read the full incident report →
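
The plugin itself runs on the JVM, but the pattern the solution describes (concurrent requests, a strict 30-second timeout, and specific progress and error messages instead of a generic token error) can be sketched in a few lines of Python with asyncio. The endpoint names and simulated latency are illustrative:

```python
# Illustrative sketch of the pattern described above: fetch several pieces of
# data concurrently, apply a hard timeout to each request, and report a real
# error instead of a generic "invalid token" message. Endpoints are made up.

import asyncio

REQUEST_TIMEOUT_SECONDS = 30  # the strict limit mentioned in the postmortem


async def fetch(endpoint: str) -> str:
    """Stand-in for one API call made while the workspace loads."""
    await asyncio.sleep(0.1)  # simulated network latency
    return f"data from {endpoint}"


async def fetch_with_timeout(endpoint: str) -> str:
    try:
        return await asyncio.wait_for(fetch(endpoint), timeout=REQUEST_TIMEOUT_SECONDS)
    except asyncio.TimeoutError:
        # Fail with a specific message rather than blaming the token.
        raise RuntimeError(f"request to {endpoint} timed out after {REQUEST_TIMEOUT_SECONDS}s")


async def load_workspace() -> list[str]:
    endpoints = ["organizations", "groups", "vulnerabilities"]
    print("Loading:", ", ".join(endpoints))  # user-visible progress instead of silence
    # Requests run concurrently instead of one after another on a single pathway.
    return await asyncio.gather(*(fetch_with_timeout(e) for e in endpoints))


if __name__ == "__main__":
    print(asyncio.run(load_workspace()))
```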

Major February 25, 2026

Docs access unavailable for authenticated users

Detected by Pingoru
Feb 25, 2026, 02:19 AM UTC
Resolved
Feb 25, 2026, 05:15 AM UTC
Duration
2h 56m
Affected: Docs
Timeline · 3 updates
  1. identified Feb 27, 2026, 12:18 PM UTC

    Authenticated users are currently unable to access documentation content despite having valid credentials, preventing them from viewing and using Docs as expected.

  2. resolved Feb 27, 2026, 12:22 PM UTC

    The incident has been resolved, and authenticated users can now access the documentation content as expected. Access controls have been restored to normal operation.

  3. postmortem Feb 27, 2026, 12:38 PM UTC

    **Impact**

    At least one user encountered an access failure where authenticated sessions were unable to load documentation content, resulting in server-side errors, while the same content remained fully accessible to unauthenticated users. The issue started on UTC-5 26-02-24 18:12 and was proactively discovered 57.6 minutes (TTD) later by a staff member who observed that authenticated users were unable to access documentation due to repeated server errors, indicating a problem with how the backend handled authenticated requests. No customer reports were received. The problem was resolved in 4.9 hours (TTF), resulting in a total window of exposure of 5.9 hours (WOE) [[1]](https://gitlab.com/fluidattacks/universe/-/issues/20805).

    **Cause**

    The error was caused by content that the client expected to receive from the server but was never sent. The root cause was the complexity introduced by dynamic tag filtering from the UI, where the server had to reconstruct all existing documentation tags in order to enable filtering [[2]](https://gitlab.com/fluidattacks/universe/-/merge_requests/95254), [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/96030).

    **Solution**

    Support for tag filtering from the UI was removed, while preserving the same functionality in the search bar. By reducing the use case to only displaying tags related to each article, server-side complexity was significantly reduced, and the bug was eliminated at its root [[4]](https://gitlab.com/fluidattacks/universe/-/merge_requests/96068).

    **Conclusion**

    By removing UI-based tag filtering, the issue was fully resolved, and the overall complexity of the tagging feature was substantially reduced. In the future, we will implement unit tests for Docs to proactively detect and prevent similar issues.

    **INCOMPLETE_PERSPECTIVE < MISSING_TEST**

Read the full incident report →

Minor February 10, 2026

Partial outage in the Fluid Attacks DB portal

Detected by Pingoru
Feb 10, 2026, 04:07 PM UTC
Resolved
Feb 10, 2026, 04:40 PM UTC
Duration
33m
Affected: Docs
Timeline · 3 updates
  1. identified Feb 10, 2026, 10:45 PM UTC

    The Fluid Attacks DB portal is experiencing intermittent availability issues.

  2. resolved Feb 10, 2026, 10:45 PM UTC

    The issue has been resolved, and the portal is now fully operational.

  3. postmortem Feb 11, 2026, 07:47 PM UTC

    **Impact**

    At least one user experienced degraded performance when accessing [db.fluidattacks.com](http://db.fluidattacks.com). The issue started on UTC-5 26-02-09 23:43 and was proactively discovered 11.2 hours (TTD) later by a staff member, who reported through our internal channels that the docs site was responding significantly slower than expected. Only the performance of the documentation was affected; there was no total service outage, and no customer reports were received. The problem was resolved in 18.7 minutes (TTF), resulting in a total window of exposure of 11.5 hours (WOE) [[1]](https://gitlab.com/fluidattacks/universe/-/issues/20558).

    **Cause**

    A highly complex MDX document parsing process to support tags in the new documentation system was introduced, which negatively impacted performance and led to slower response times [[2]](https://gitlab.com/fluidattacks/universe/-/merge_requests/94410).

    **Solution**

    As an immediate mitigation, the change was reverted to restore acceptable performance levels [[3]](https://gitlab.com/fluidattacks/universe/-/merge_requests/94445). After availability was recovered, a new solution was implemented that handles MDX document parsing more efficiently by leveraging caching mechanisms and extracting only the required metadata [[4]](https://gitlab.com/fluidattacks/universe/-/merge_requests/94472).

    **Conclusion**

    As we continue stabilizing the documentation platform, we will introduce basic performance tests aimed at detecting and preventing similar performance regressions in the future.

    **PERFORMANCE_DEGRADATION < MISSING_TEST**

Read the full incident report →
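
The fix described above combines caching with extracting only the metadata each MDX document actually needs to expose. A rough Python sketch of that idea, with an assumed frontmatter layout and a cache keyed by file modification time (the real implementation lives in the linked merge request and may differ):

```python
# Sketch of the mitigation described above: instead of fully parsing every MDX
# document on each request, pull only the needed metadata (here, a "tags:"
# line in the frontmatter) and cache it keyed by the file's mtime.

from pathlib import Path

_cache: dict[tuple[str, float], list[str]] = {}


def extract_tags(path: Path) -> list[str]:
    """Return a document's tags, re-reading only when the file changes."""
    key = (str(path), path.stat().st_mtime)
    if key in _cache:
        return _cache[key]

    tags: list[str] = []
    for line in path.read_text(encoding="utf-8").splitlines():
        # Only the metadata line is extracted; the MDX body is never parsed here.
        if line.startswith("tags:"):
            tags = [t.strip() for t in line.removeprefix("tags:").split(",") if t.strip()]
            break

    _cache[key] = tags
    return tags


if __name__ == "__main__":
    doc = Path("example.mdx")
    doc.write_text("---\ntitle: Example\ntags: sast, cspm\n---\n# Body\n")
    print(extract_tags(doc))  # reads the file once
    print(extract_tags(doc))  # served from the cache
```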

Notice February 9, 2026

Platform Google SSO failure

Detected by Pingoru
Feb 09, 2026, 11:11 PM UTC
Resolved
Feb 09, 2026, 11:37 PM UTC
Duration
26m
Affected: Platform
Timeline · 3 updates
  1. identified Feb 09, 2026, 11:36 PM UTC

    The app.fluidattacks.com platform is experiencing access issues affecting users attempting to log in via Google SSO.

  2. resolved Feb 09, 2026, 11:37 PM UTC

    The incident has been resolved and access to the platform via Google SSO has been restored.

  3. postmortem Feb 10, 2026, 02:08 PM UTC

    **Impact**

    At least one internal user experienced failed login attempts while using Google SSO on the platform. The issue started on UTC-5 26-02-09 17:55 and was proactively discovered 14.4 minutes (TTD) later by a staff member who reported through our internal channels that users trying to sign in could not access their accounts, although no external customers reported problems. The problem was resolved in 5.7 minutes (TTF), resulting in a total window of exposure of 20.1 minutes (WOE) [[1]](https://gitlab.com/fluidattacks/universe/-/issues/20397).

    **Cause**

    During a deployment, the old Google OAuth token was removed before a critical post-deployment job finished. This caused the platform to temporarily reject Google SSO logins [[2]](https://gitlab.com/fluidattacks/universe/-/merge_requests/94377).

    **Solution**

    The fix was to wait for the post-deployment job to complete. Once the job finished, Google SSO login was restored [[3]](https://gitlab.com/fluidattacks/universe/-/jobs/13048056424).

    **Conclusion**

    To prevent this from happening again, the OAuth token should not be removed, invalidated, or deactivated until the related post-deployment processes have fully completed.

    **COMMUNICATION_FAILURE < INFRASTRUCTURE_ERROR < INCOMPLETE_PERSPECTIVE**

Read the full incident report →
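
The conclusion amounts to an ordering constraint: the old OAuth token must stay valid until the post-deployment job finishes. A small Python sketch of such a guard, with hypothetical job-status and revocation helpers standing in for the real pipeline API:

```python
# Illustrative ordering guard for the takeaway above: do not revoke the old
# OAuth credential until the post-deployment job has finished. The job-status
# lookup and revocation calls are hypothetical stand-ins.

import time


def job_status(job_id: int) -> str:
    """Stand-in for querying the CI system; returns 'running', 'success', or 'failed'."""
    return "success"


def revoke_credential(name: str) -> None:
    print(f"revoked {name}")


def rotate_oauth_token(job_id: int, old_credential: str, poll_seconds: int = 30) -> None:
    # Block rotation cleanup until the dependent job reports completion.
    while job_status(job_id) not in {"success", "failed"}:
        time.sleep(poll_seconds)
    if job_status(job_id) == "success":
        revoke_credential(old_credential)
    else:
        print("post-deployment job failed; keeping the old credential active")


if __name__ == "__main__":
    rotate_oauth_token(job_id=12345, old_credential="google-oauth-previous")
```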

Looking to track Fluid Attacks downtime and outages?

Pingoru polls Fluid Attacks's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Fluid Attacks reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Fluid Attacks alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Fluid Attacks for free

5 free monitors · No credit card required