Flow Swiss Outage History

Flow Swiss is up right now

There have been 2 Flow Swiss outages since February 12, 2026, totaling 6h 9m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.flow.com

Critical · February 24, 2026

Flow Mainnet Network Issue

Detected by Pingoru: Feb 24, 2026, 01:15 AM UTC
Resolved: Feb 24, 2026, 05:13 AM UTC
Duration: 3h 57m
Affected: Block Sealing
Timeline · 5 updates
  1. investigating Feb 24, 2026, 01:15 AM UTC

    Block sealing on Flow mainnet has stopped. We are investigating the issue.

  2. identified Feb 24, 2026, 04:07 AM UTC

    Flow is still in maintenance as the team works to restore full operations. The issue was caused by an incomplete node update in preparation for tomorrow’s height coordinated upgrade, which triggered a consensus safety mechanism that halted block sealing. Flow engineers are actively working on a fix, and the next update will be shared at 10 PM PT.

  3. monitoring Feb 24, 2026, 04:59 AM UTC

    The root cause of the issue has been resolved, and block production has resumed, though it is currently progressing slowly. Consensus node operators are actively upgrading their nodes, and once a majority of the consensus nodes in the network have completed the update, block sealing is expected to return to normal performance.

  4. resolved Feb 24, 2026, 05:13 AM UTC

    The incident has been resolved and the network has resumed.

  5. postmortem Feb 25, 2026, 01:43 AM UTC

    ### **Summary**

    On February 23, 2026, Flow Mainnet experienced an outage that temporarily halted block production. The network entered a safe state after detecting inconsistent results between nodes during preparations for a scheduled height coordinated upgrade. As designed, the protocol stopped sealing new blocks to protect network integrity. No user funds were impacted or at risk. The issue has since been resolved and normal operations have resumed.

    ### **What Happened**

    In preparation for a scheduled height coordinated upgrade, a new software version was rolled out to one execution node ahead of the coordinated update. That node generated results which did not match those produced by the other execution nodes. At the same time, one verification node was running an older software version. Running an older verification node version by itself would not have caused an outage. However, in this specific situation, the combination of an execution node and a verification node operating on mismatched versions contributed to inconsistent approvals being produced. The consensus safety mechanism detected that the required agreement conditions for sealing a block were not met and halted block production to prevent inconsistent state from being finalized. While this safety mechanism worked as intended, it resulted in temporary network downtime until the mismatch was identified and corrected.

    ### **Resolution**

    The version mismatches were identified and corrected, and nodes were aligned to the expected software versions. Once consistency was restored, block production resumed. Performance returned to normal as consensus node operators completed their updates.

    ### **Action Items**

    To reduce the likelihood of similar incidents in the future, the following actions are being implemented:

    1. Height coordinated upgrade protections for verification nodes: Verification nodes will soon adopt upgrade enforcement behavior similar to that of execution nodes. If a verification node is not running the expected version after a coordinated upgrade, it will automatically crash and restart rather than continue participating.
    2. CI and deployment safeguards for Cadence versions: The deployment process will be strengthened to ensure that a node software upgrade cannot deploy a lower version of the node software. Automated checks will prevent accidental deployment of incorrect or older versions, reducing the risk of human error.

    ### **Closing**

    Network reliability and safety remain our top priorities. In this case, the protocol’s safety mechanisms correctly prevented inconsistent state from being finalized. The improvements outlined above will further strengthen Flow Mainnet’s resilience and upgrade processes going forward.

Read the full incident report →
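
To make the sealing halt described in the postmortem above a little more concrete, here is a minimal Go sketch of that kind of agreement check: a block is only sealed when all execution results match and enough verification approvals reference the same result, otherwise sealing stops. This is an illustration only, not Flow's flow-go implementation; the `ExecutionResult` and `Approval` types, the threshold, and the function names are assumptions.

```go
// Minimal sketch of a sealing safety condition like the one described in the
// postmortem. Illustrative only; it does not mirror flow-go's real data
// structures or thresholds.
package main

import "fmt"

type ExecutionResult struct {
	NodeID     string
	ResultHash string // commitment to the block's execution output
}

type Approval struct {
	VerifierID string
	ResultHash string // the result this verifier attests to
}

// canSeal reports whether a block may be sealed: every execution node must
// have produced the same result, and at least `required` approvals must
// reference that result. Any mismatch (as in the mixed-version incident)
// blocks sealing instead of finalizing inconsistent state.
func canSeal(results []ExecutionResult, approvals []Approval, required int) (bool, string) {
	if len(results) == 0 {
		return false, "no execution results"
	}
	canonical := results[0].ResultHash
	for _, r := range results {
		if r.ResultHash != canonical {
			return false, "execution results disagree; halting sealing"
		}
	}
	matching := 0
	for _, a := range approvals {
		if a.ResultHash == canonical {
			matching++
		}
	}
	if matching < required {
		return false, fmt.Sprintf("only %d/%d matching approvals", matching, required)
	}
	return true, "seal"
}

func main() {
	results := []ExecutionResult{
		{NodeID: "exec-1", ResultHash: "0xaaa"},
		{NodeID: "exec-2", ResultHash: "0xbbb"}, // node upgraded early, diverging result
	}
	approvals := []Approval{{VerifierID: "verif-1", ResultHash: "0xaaa"}}
	ok, reason := canSeal(results, approvals, 2)
	fmt.Println(ok, reason) // false execution results disagree; halting sealing
}
```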

Critical · February 12, 2026

Flow Testnet Network Issue

Detected by Pingoru: Feb 12, 2026, 03:21 AM UTC
Resolved: Feb 12, 2026, 05:33 AM UTC
Duration: 2h 11m
Affected: Flow Testnet
Timeline · 4 updates
  1. investigating Feb 12, 2026, 03:21 AM UTC

    Block production on testnet has halted. We are currently investigating the issue. No impact on mainnet.

  2. investigating Feb 12, 2026, 04:17 AM UTC

    Flow Testnet will be undergoing a network upgrade (spork) to recover from the current issue.

  3. resolved Feb 12, 2026, 05:33 AM UTC

    The issue has been resolved with a full network upgrade (spork) on testnet.

  4. postmortem Feb 12, 2026, 09:29 PM UTC

    Yesterday, 2/11, Flow Testnet experienced an outage during routine maintenance.

    ### **What Happened**

    The Flow team was performing maintenance to ensure testnet consensus nodes had enough disk space. During this work, a disk space reclamation process was run. There was a flaw in this process when applied to consensus nodes that caused the DKG file to be deleted. Once a majority of the nodes were updated, they were no longer able to finalize blocks. As a result, finalization halted and the network stopped progressing.

    _Please note: Mainnet was not impacted._

    ### **Recovery**

    Possible recovery paths were evaluated. The safest and least disruptive way to restore the network was to perform a full network upgrade on testnet. The upgrade restored consensus and returned the network to a healthy state.

    ### **EVM Follow Up**

    After the upgrade, EVM nodes encountered a separate issue and were not able to smoothly transition to the new network state. Additional remediation was required to bring EVM services back into sync.

    ### **Next Steps**

    * Updating and testing the disk space reclamation process, adding more safety checks to ensure critical consensus state is preserved, along with clearer documentation.
    * Improving the EVM gateway to make it more robust during network upgrades and transitions.
    * Reviewing alerting and operational safeguards to surface issues more quickly.

Read the full incident report →
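
As a rough illustration of the safeguard mentioned in the next steps above, the Go sketch below shows a disk space reclamation pass that refuses to delete protected consensus state such as DKG key material. The data directory, protected path fragments, and function names are assumptions for illustration, not Flow's actual operational tooling.

```go
// Illustrative reclamation pass with a deny-list for critical consensus state.
// File names and layout are assumptions, not Flow's real on-disk format.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// protected lists path fragments that must never be reclaimed on a consensus
// node; the DKG key material lost in this incident is the motivating case.
var protected = []string{"dkg", "random-beacon", "secretsdb"}

func isProtected(path string) bool {
	lower := strings.ToLower(path)
	for _, p := range protected {
		if strings.Contains(lower, p) {
			return true
		}
	}
	return false
}

// reclaim walks dataDir and deletes reclaimable files, skipping anything
// protected. dryRun lets operators preview deletions before running them.
func reclaim(dataDir string, dryRun bool) error {
	return filepath.Walk(dataDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		if isProtected(path) {
			fmt.Println("skip (protected):", path)
			return nil
		}
		fmt.Println("reclaim:", path)
		if dryRun {
			return nil
		}
		return os.Remove(path)
	})
}

func main() {
	// Preview only; the data directory path is a placeholder.
	_ = reclaim("/var/flow/data", true)
}
```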

Looking to track Flow Swiss downtime and outages?

Pingoru polls Flow Swiss's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
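
As a rough sketch of what such polling involves, the Go example below checks a Statuspage-style JSON endpoint on an interval and flags any non-normal indicator. The endpoint URL, response shape, interval, and alerting hook are assumptions for illustration; Pingoru's actual implementation is not described here.

```go
// Hedged sketch of status-page polling. The endpoint and JSON shape below are
// assumed (Statuspage-style), not a documented API of status.flow.com.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type statusResponse struct {
	Status struct {
		Indicator   string `json:"indicator"` // e.g. "none", "minor", "major", "critical"
		Description string `json:"description"`
	} `json:"status"`
}

// checkOnce fetches the status endpoint and returns the current indicator.
func checkOnce(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var s statusResponse
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return "", err
	}
	return s.Status.Indicator, nil
}

func main() {
	const url = "https://status.flow.com/api/v2/status.json" // assumed endpoint
	for range time.Tick(5 * time.Minute) {
		indicator, err := checkOnce(url)
		if err != nil {
			fmt.Println("poll failed:", err)
			continue
		}
		if indicator != "none" {
			fmt.Println("incident reported:", indicator) // hook alerting (email, Slack, webhook) here
		}
	}
}
```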

  • Real-time alerts when Flow Swiss reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Flow Swiss alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Flow Swiss for free

5 free monitors · No credit card required