- Detected by Pingoru
- Apr 08, 2026, 08:38 AM UTC
- Resolved
- Apr 08, 2026, 12:51 PM UTC
- Duration
- 4h 12m
Affected: Mainnet — JSON-RPC API, Mainnet — Websockets API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 3 updates
-
investigating Apr 08, 2026, 08:38 AM UTC
We are currently experiencing a service disruption on Morph Mainnet. Block production has halted due to an issue with the sequencer. This incident has been acknowledged by the Morph Foundation, who are actively investigating. We will provide updates as the situation develops.
-
monitoring Apr 08, 2026, 09:18 AM UTC
Block production on Morph Mainnet has resumed and nodes are currently catching up and syncing. The Morph Foundation has acknowledged an issue with the sequencer as the root cause of the halt. We are continuing to monitor until full sync is confirmed.
-
resolved Apr 08, 2026, 12:51 PM UTC
Block production on Morph Mainnet has fully resumed and nodes are synced. The issue was caused by a sequencer problem acknowledged by the Morph Foundation, who have since resolved it on their end. We will continue to monitor to confirm stability.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 06:15 AM UTC
- Resolved
- Apr 08, 2026, 06:58 AM UTC
- Duration
- 42m
Affected: Mainnet — JSON-RPC API, Mainnet — Websockets API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 4 updates
-
investigating Apr 08, 2026, 06:19 AM UTC
We are currently investigating this issue.
-
identified Apr 08, 2026, 06:28 AM UTC
This is a chain-level incident. Our engineers are actively monitoring updates from the Morph Foundation. We will provide the next update by 8 AM UTC.
-
monitoring Apr 08, 2026, 06:56 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 08, 2026, 06:58 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 07, 2026, 02:41 PM UTC
- Resolved
- Apr 08, 2026, 12:50 AM UTC
- Duration
- 10h 8m
Affected: Mainnet — JSON-RPC API, Mainnet — REST API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 2 updates
-
investigating Apr 07, 2026, 02:41 PM UTC
The Quicknode team is investigating the degraded performance of Hedera Mainnet - we will update this page as we acquire new information. Users may experience API errors and 503 responses during this time.
-
resolved Apr 08, 2026, 12:50 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 05, 2026, 06:51 PM UTC
- Resolved
- Apr 05, 2026, 08:00 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 05, 2026, 06:51 PM UTC
The HyperCore events WebSocket stream experienced intermittent interruptions due to unusually large hourly funding rate blocks, which caused the stream to stall for some consumers. Engineering deployed a fix to handle these large blocks more safely, and the stream is now stable.
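On the consumer side, this class of stall can be guarded against independently of the server-side fix. The sketch below is a hypothetical mitigation, not QuickNode's deployed change: it treats a stream that goes quiet for longer than a timeout as stalled and reconnects, giving up after a bounded number of consecutive silent connections. The `connect`/`recv(timeout)` interface is assumed for illustration; real WebSocket clients expose an equivalent.

```python
def consume_with_stall_guard(connect, handle, stall_timeout=30.0, max_reconnects=3):
    """Drain a message stream, reconnecting whenever it stalls.

    `connect()` returns a connection whose recv(timeout) yields the next
    message, or None if nothing arrived before the timeout (hypothetical
    interface). A quiet stream is assumed stalled and dropped; a message
    on any connection resets the reconnect budget. Raises once the stream
    has stalled on max_reconnects consecutive connections.
    """
    attempts = 0
    while attempts <= max_reconnects:
        conn = connect()
        while True:
            msg = conn.recv(timeout=stall_timeout)
            if msg is None:       # stream went quiet: assume stalled
                attempts += 1
                break             # drop this connection and reconnect
            attempts = 0          # healthy traffic resets the budget
            handle(msg)
    raise RuntimeError("stream stalled repeatedly; giving up")
```

Tune `stall_timeout` to the expected cadence of the stream; for an hourly funding-rate feed a liveness ping (e.g. WebSocket ping/pong) is a better quiet-detection signal than message arrival alone.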
Read the full incident report →
- Detected by Pingoru
- Apr 03, 2026, 12:42 AM UTC
- Resolved
- Apr 03, 2026, 01:49 AM UTC
- Duration
- 1h 6m
Affected: Sepolia — JSON-RPC API, Sepolia — WebSocket API, Sepolia — Streams, Sepolia — Webhooks
Timeline · 2 updates
-
investigating Apr 03, 2026, 12:42 AM UTC
B3 Sepolia is experiencing a network-wide stall at block height 61,001,532. This is not isolated to Quicknode. We will provide the next update as soon as new information becomes available. Explorer: https://sepolia.explorer.b3.fun/
-
resolved Apr 03, 2026, 01:49 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 10:25 PM UTC
- Resolved
- Apr 03, 2026, 08:18 AM UTC
- Duration
- 9h 52m
Affected: Sui Testnet - JSON RPC API
Timeline · 4 updates
-
investigating Apr 02, 2026, 10:25 PM UTC
Sui Testnet is experiencing a network-wide stall at block height 321,596,099. This is not isolated to Quicknode. We will provide the next update as soon as new information becomes available. Explorer: https://testnet.suivision.xyz/
-
investigating Apr 03, 2026, 08:16 AM UTC
Our engineers are closely monitoring updates from the Sui Foundation.
-
monitoring Apr 03, 2026, 08:17 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 03, 2026, 08:18 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 01, 2026, 04:16 PM UTC
- Resolved
- Apr 01, 2026, 08:47 PM UTC
- Duration
- 4h 31m
Affected: Galileo — JSON-RPC API, Galileo — WebSocket API, Galileo — Streams, Galileo — Webhooks
Timeline · 2 updates
-
investigating Apr 01, 2026, 04:16 PM UTC
The 0G Galileo testnet network appears to have stalled at block 25181554. The stall is network-wide and is not isolated to Quicknode. Our team has reached out to the foundation for further information, and this page will be updated as we learn more.
-
resolved Apr 01, 2026, 08:47 PM UTC
The 0G Galileo testnet network stall has been resolved, and the chain is progressing normally again. The issue was network wide and not isolated to QuickNode. We will continue monitoring for stability.
Read the full incident report →
- Detected by Pingoru
- Apr 01, 2026, 03:09 PM UTC
- Resolved
- Apr 01, 2026, 05:31 PM UTC
- Duration
- 2h 21m
Affected: Cosmos Mainnet — REST API
Timeline · 3 updates
-
investigating Apr 01, 2026, 03:09 PM UTC
On Cosmos, customers may experience degraded performance for both HTTP and WebSocket traffic, including increased error rates (e.g., 503 responses), elevated latency, and intermittent request failures. To resolve this, we are performing an upgrade and working to restore normal service as quickly as possible. We will share the next update as soon as we confirm the cause and mitigation steps, and we will continue to provide updates as more information becomes available.
-
monitoring Apr 01, 2026, 05:08 PM UTC
We have implemented a fix, and service is recovering. We are actively monitoring the situation and will share additional updates as we confirm continued stability.
-
resolved Apr 01, 2026, 05:31 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 01, 2026, 02:50 PM UTC
- Resolved
- Apr 01, 2026, 04:14 PM UTC
- Duration
- 1h 23m
Affected: Fluent Testnet — Websockets API, Fluent Testnet — JSON RPC API
Timeline · 3 updates
-
investigating Apr 01, 2026, 02:50 PM UTC
We are aware of a block height stall affecting Fluent Testnet. Requests may return stale or inconsistent data during this time. We are coordinating closely with the Fluent chain team to identify and resolve the underlying cause. Our engineering team is actively working to restore normal service. Recommended action: No action is required on your end. Requests to Fluent Testnet may return outdated block data until the issue is resolved. We will provide a further update within the next 60 minutes.
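Stale-data windows like this one can be detected client-side by sampling the reported block height over time. The helper below is a hypothetical sketch, not part of QuickNode's tooling: it assumes an EVM-style endpoint where you periodically record `(unix_time, block_number)` pairs (e.g. from `eth_blockNumber`) and flags a stall when the height stops advancing over a meaningful window.

```python
def looks_stalled(samples, min_span=60.0):
    """Heuristic stall check over chronological (unix_time, block_number) samples.

    Returns True when the oldest and newest samples span at least
    `min_span` seconds yet the block number has not advanced, i.e.
    requests are likely returning stale block data.
    """
    if len(samples) < 2:
        return False
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    return (t1 - t0) >= min_span and b1 <= b0
```

For example, `looks_stalled([(0, 100), (120, 100)])` flags a stall, while `looks_stalled([(0, 100), (120, 105)])` does not. Comparing the flagged height against an independent source (such as a public explorer) distinguishes a provider-side sync lag from a chain-wide halt.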
-
monitoring Apr 01, 2026, 03:44 PM UTC
The block height stall on Fluent Testnet has been addressed and all nodes are now synced and producing blocks normally. We are monitoring to ensure stability. We will provide a final update shortly to confirm full resolution.
-
resolved Apr 01, 2026, 04:14 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 31, 2026, 12:28 PM UTC
- Resolved
- Mar 31, 2026, 12:38 PM UTC
- Duration
- 10m
Affected: Sepolia — JSON-RPC API, Sepolia — Websockets API, Sepolia — Streams, Sepolia — Webhooks
Timeline · 2 updates
-
investigating Mar 31, 2026, 12:28 PM UTC
We are currently experiencing service disruptions on Ink Sepolia testnet due to backend node degradation. Multiple backend nodes are failing health checks and not providing block data within expected timeframes. Ink Sepolia testnet RPC endpoints are currently affected. Customers may experience request failures, timeouts, or inconsistent block data until our backend nodes are restored. Our engineering team is actively investigating and working to resolve the backend pool issues. We will provide the next update by 2:30pm or sooner.
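During windows like this, transient 503s and timeouts are usually best absorbed with bounded retries. The sketch below is a generic client-side pattern, not QuickNode-provided code: it retries a request on 503 with exponential backoff and jitter, and the `send()` callable returning `(status, body)` is a hypothetical shape to adapt to your HTTP client.

```python
import random
import time

def call_with_backoff(send, retries=4, base=0.5, cap=8.0):
    """Retry a request on 503 responses with capped, jittered backoff.

    `send()` performs one request and returns (status, body). Any
    non-503 result is returned immediately; otherwise the call is
    retried up to `retries` times, sleeping base * 2**attempt seconds
    (capped at `cap`, scaled by random jitter) between attempts.
    """
    for attempt in range(retries + 1):
        status, body = send()
        if status != 503 or attempt == retries:
            return status, body
        delay = min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)
        time.sleep(delay)
```

The jitter spreads retries from many clients apart in time, which avoids hammering a recovering backend pool in synchronized waves.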
-
resolved Mar 31, 2026, 12:38 PM UTC
The issue affecting Ink Sepolia testnet has been resolved. Backend nodes have returned to normal operation and are now passing health checks. All Ink Sepolia testnet RPC endpoints are now fully operational. If you continue to experience any issues, please contact our support team.
Read the full incident report →
- Detected by Pingoru
- Mar 30, 2026, 07:31 PM UTC
- Resolved
- Mar 30, 2026, 09:01 PM UTC
- Duration
- 1h 29m
Affected: Mainnet — JSON-RPC API, Mainnet — Websockets API, Mainnet — Webhooks, Mainnet — Streams
Timeline · 4 updates
-
investigating Mar 30, 2026, 07:31 PM UTC
We are investigating an issue impacting Gnosis (GNO) Mainnet Archive RPC where some requests are returning elevated HTTP 503 errors and may experience degraded availability. The team is actively working to identify the cause and restore normal service. We will follow up with an update by 20:00 UTC.
-
identified Mar 30, 2026, 08:03 PM UTC
Our team has identified the issue with the affected nodes and is currently working on recovery. We will follow up with an update by 20:30 UTC.
-
monitoring Mar 30, 2026, 08:31 PM UTC
A fix has been implemented and we are monitoring the results. We will follow up with an update by 21:00 UTC.
-
resolved Mar 30, 2026, 09:01 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 30, 2026, 11:53 AM UTC
- Resolved
- Mar 30, 2026, 01:30 PM UTC
- Duration
- 1h 37m
Affected: Testnet — JSON-RPC API, Testnet — WebSocket API, Testnet — Webhooks, Testnet — Streams
Timeline · 2 updates
-
investigating Mar 30, 2026, 11:53 AM UTC
We are currently investigating a block production stall on the Bitcoin Cash (BCH) mainnet. The network appears to have stopped producing new blocks, which is also confirmed by public explorers showing the same block height. We are monitoring the situation closely and will provide updates as they become available.
-
resolved Mar 30, 2026, 07:34 PM UTC
Bitcoin Cash (BCH) mainnet block production has resumed and the network is producing new blocks again. We will continue monitoring for stability.
Read the full incident report →
- Detected by Pingoru
- Mar 26, 2026, 11:47 PM UTC
- Resolved
- Mar 27, 2026, 12:45 AM UTC
- Duration
- 57m
Affected: Testnet — JSON-RPC API, Testnet — Streams, Testnet — Webhooks
Timeline · 2 updates
-
investigating Mar 26, 2026, 11:47 PM UTC
BCH Testnet is experiencing slow block production. We are investigating and will update with more information once it is available.
-
resolved Mar 27, 2026, 12:45 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 25, 2026, 10:29 AM UTC
- Resolved
- Mar 27, 2026, 11:24 AM UTC
- Duration
- 2d
Affected: Mainnet — JSON-RPC API, Mainnet — Websockets API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 4 updates
-
investigating Mar 25, 2026, 10:29 AM UTC
We are currently experiencing service disruptions on Sonic mainnet due to an upstream network-wide block stall. The network is stalled at block 66,026,794. This is a network-wide issue and is not isolated to Quicknode. The Sonic Foundation has been notified. All Sonic mainnet RPC endpoints are currently affected by the upstream network stall. New blocks are not being produced until the network resumes. We will provide the next update by 11:30am or sooner as information becomes available.
-
identified Mar 25, 2026, 01:26 PM UTC
We are currently experiencing service disruptions on Sonic mainnet RPC endpoints due to validator syncing issues. The Sonic team has clarified this is NOT a network stall: the Sonic network remains fully operational, continues to produce blocks, and transactions are processing as expected. The Sonic team has identified the issue and provided a fix requiring client upgrades to v2.1.6 and database resyncing. Our engineering team is actively applying this update to restore service. Sonic mainnet RPC endpoints are currently affected; customers may experience request failures or timeouts until our nodes complete the upgrade and resync process.
The native Sonic explorer remains operational: https://explorer.soniclabs.com/blocks
For official updates from Sonic: https://x.com/SonicLabs/status/2036763965041021347
We will provide the next update by 3PM or sooner.
-
identified Mar 25, 2026, 03:48 PM UTC
Our engineering team is actively applying the Sonic client upgrades to v2.1.6 and resyncing databases to restore service. Sonic Mainnet RPC endpoints are currently affected; customers may experience request failures or timeouts until our nodes complete the upgrade and resync process.
The native Sonic explorer remains operational: https://explorer.soniclabs.com/blocks
For official updates from Sonic: https://x.com/SonicLabs/status/2036763965041021347
-
resolved Mar 27, 2026, 11:24 AM UTC
The issue affecting Sonic mainnet RPC endpoints has been resolved. Our engineering team successfully completed the Sonic client upgrades to v2.1.6 and database resyncing at 00:00 UTC on 27 March. All Sonic mainnet RPC endpoints are now fully operational. If you continue to experience any issues, please contact our support team. https://portal.usepylon.com/quicknode/forms/submit-a-support-ticket
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 04:31 PM UTC
- Resolved
- Mar 24, 2026, 08:40 PM UTC
- Duration
- 4h 9m
Affected: Mainnet — JSON-RPC API, Mainnet — REST API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 4 updates
-
investigating Mar 24, 2026, 04:31 PM UTC
The Quicknode team is investigating the degraded performance of Tron Mainnet - we will update this page as we acquire new information. Users may experience 503 errors during this time.
-
investigating Mar 24, 2026, 05:11 PM UTC
We are continuing to investigate degraded performance on Tron Mainnet. At this time, we are seeing elevated 503 errors primarily in the EU region. Traffic is being shifted away from impacted capacity while we work to restore full service. We will share another update by 18:10 UTC (or sooner if there is a significant change).
-
monitoring Mar 24, 2026, 05:30 PM UTC
Our team has deployed a fix to mitigate the 503 errors. Users should now see successful responses when making requests. We are monitoring for stability.
-
resolved Mar 24, 2026, 08:40 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 09:16 AM UTC
- Resolved
- Mar 25, 2026, 01:53 AM UTC
- Duration
- 16h 36m
Affected: Fuji — JSON-RPC API, Fuji — WebSocket API, Fuji — Webhooks, Fuji — Streams
Timeline · 4 updates
-
investigating Mar 24, 2026, 09:16 AM UTC
We are investigating intermittent 503 errors affecting the Avalanche Fuji Testnet. Some customers may experience temporary request failures or degraded performance. Our team is actively working to mitigate the issue and restore full service stability. We will provide an update as more information becomes available.
-
identified Mar 24, 2026, 09:53 AM UTC
We have identified this as an upstream network-wide issue affecting Avalanche Fuji Testnet infrastructure. The C-chain (EVM) is currently affected and blocks have stopped being produced. This is a network-wide issue and is not isolated to Quicknode. The Avalanche team has been notified and is investigating. All Avalanche Fuji Testnet RPC endpoints are currently affected. Customers may experience 503 errors, request failures, or timeouts until the network stabilizes. We will provide the next update by 11am or sooner as information becomes available.
-
monitoring Mar 24, 2026, 02:55 PM UTC
The upstream network issue has stabilized and blocks are now being produced. Our team is actively monitoring to ensure full service restoration. Customers should see improved performance, though some intermittent issues may still occur until all nodes are fully synchronized.
-
resolved Mar 25, 2026, 01:53 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 02:45 AM UTC
- Resolved
- Mar 24, 2026, 03:20 AM UTC
- Duration
- 35m
Affected: Testnet — JSON-RPC API, Testnet — WebSocket API, Testnet — Webhooks, Testnet — Streams
Timeline · 4 updates
-
investigating Mar 24, 2026, 03:02 AM UTC
We have detected an issue on the BSC-Testnet Archive which may cause you to see 503 errors. Our team is currently looking into it and will fix the situation as soon as possible.
-
identified Mar 24, 2026, 03:20 AM UTC
The issue has been identified and a fix is being implemented.
-
monitoring Mar 24, 2026, 03:20 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 24, 2026, 03:20 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 11:45 PM UTC
- Resolved
- Mar 25, 2026, 12:33 AM UTC
- Duration
- 1d
Affected: Sepolia — JSON-RPC API, Sepolia — Websockets API, Sepolia — Streams, Sepolia — Webhooks
Timeline · 2 updates
-
investigating Mar 23, 2026, 11:45 PM UTC
We are investigating intermittent 503 errors affecting the Mantle Sepolia Testnet. Some customers may experience temporary request failures or degraded performance. Our team is actively working to mitigate the issue and restore full service stability. We will provide an update as more information becomes available.
-
resolved Mar 25, 2026, 12:33 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 05:17 PM UTC
- Resolved
- Mar 23, 2026, 10:20 PM UTC
- Duration
- 5h 3m
Affected: Sepolia — JSON RPC, Sepolia — WebSocket API, Sepolia — Webhooks, Sepolia — Streams
Timeline · 3 updates
-
investigating Mar 23, 2026, 05:17 PM UTC
We are investigating reports that Arbitrum Sepolia is stalled at block height 252,909,123 and may not be progressing normally. The Arbitrum foundation team is actively looking into the issue, and we are monitoring closely while coordinating updates. Customers may see stale “latest block” data until block production resumes.
-
monitoring Mar 23, 2026, 10:10 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 23, 2026, 10:20 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 01:10 PM UTC
- Resolved
- Mar 23, 2026, 10:04 PM UTC
- Duration
- 8h 54m
Affected: Mainnet — JSON-RPC API, Mainnet — Websockets API, Mainnet — Streams, Mainnet — Webhooks
Timeline · 2 updates
-
investigating Mar 23, 2026, 01:10 PM UTC
We are currently investigating a block production stall on Peaq Mainnet. The same issue is reflected on the public block explorer, confirming this is a network-level incident. The Peaq Foundation has acknowledged the issue and is actively working on a resolution. We will provide further updates as they become available.
-
resolved Mar 23, 2026, 10:04 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 22, 2026, 09:04 PM UTC
- Resolved
- Mar 23, 2026, 06:46 PM UTC
- Duration
- 21h 41m
Affected: 0G Mainnet — JSON-RPC API, 0G Mainnet — WebSocket API, 0G Mainnet — Streams, 0G Mainnet — Webhooks
Timeline · 2 updates
-
investigating Mar 22, 2026, 09:04 PM UTC
The 0G Mainnet network appears to have stalled at block 28080128. The Quicknode team is investigating and has reached out to the foundation for further information. This issue is network-wide and is not isolated to Quicknode. We will update with more information once it is available.
-
resolved Mar 23, 2026, 06:46 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 22, 2026, 09:29 AM UTC
- Resolved
- Mar 23, 2026, 06:40 PM UTC
- Duration
- 1d 9h
Affected: Pacific-1 Mainnet — EVM JSON-RPC API, Pacific-1 Mainnet — Tendermint JSON-RPC/REST API, Pacific-1 Mainnet — Cosmos REST API, Pacific-1 Mainnet — Websockets API, Pacific-1 Mainnet — Streams, Pacific-1 Mainnet — Webhooks
Timeline · 7 updates
-
investigating Mar 22, 2026, 09:29 AM UTC
We are currently experiencing service disruptions on Sei mainnet due to an upstream network-wide issue causing block stalls and elevated latency. This is a network-wide issue and is not isolated to Quicknode. All Sei mainnet RPC endpoints are currently affected. Customers may experience slower response times and blocks are not being produced consistently until the network stabilizes. We will provide the next update by 11AM or sooner as information becomes available.
-
investigating Mar 22, 2026, 01:03 PM UTC
The Sei Foundation has informed us that they are actively investigating this issue further. Customers may still experience slower response times. We will provide the next update by 2PM UTC or sooner as information becomes available.
-
monitoring Mar 22, 2026, 03:32 PM UTC
Block production has resumed, and the network stall appears resolved. Customers may continue to experience elevated latency on Sei Mainnet requests, particularly for heavy/debug methods. We are continuing to monitor.
-
monitoring Mar 22, 2026, 05:49 PM UTC
Sei Mainnet has recovered and block production is stable. The upstream network stall impacting Sei Mainnet has been resolved.
-
monitoring Mar 22, 2026, 08:30 PM UTC
Our team is currently working on the ongoing latency being reported on the Sei Pacific network. A potential fix has been pushed and we are monitoring for stability. The next update will be at 21:00 UTC.
-
monitoring Mar 22, 2026, 09:23 PM UTC
Our team is continuing to monitor our Sei infrastructure for stability following the fixes pushed by our engineers. Customers should now see reduced latency.
-
resolved Mar 23, 2026, 06:40 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 22, 2026, 07:13 AM UTC
- Resolved
- Mar 23, 2026, 02:20 AM UTC
- Duration
- 19h 6m
Affected: Testnet — JSON-RPC API, Testnet — Websockets API, Testnet — Streams, Testnet — Webhooks
Timeline · 2 updates
-
investigating Mar 22, 2026, 07:13 AM UTC
Hemi Testnet appears to have stalled at block 5,503,922. This is a chain-wide stall. Quicknode teams are actively working with the Foundation to restore normal block progression. We will provide an update as more information becomes available.
-
resolved Mar 23, 2026, 02:20 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 22, 2026, 06:44 AM UTC
- Resolved
- Mar 23, 2026, 09:57 PM UTC
- Duration
- 1d 15h
Affected: Sepolia — JSON-RPC API, Sepolia — WebSocket API, Sepolia — Webhooks, Sepolia — Streams
Timeline · 2 updates
-
investigating Mar 22, 2026, 06:44 AM UTC
Blast Sepolia appears to have stalled at block 34,714,475. This is a chain-wide stall. Quicknode teams are actively working with the Foundation to restore normal block progression. We will provide an update as more information becomes available.
-
resolved Mar 23, 2026, 09:57 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 22, 2026, 05:40 AM UTC
- Resolved
- Mar 22, 2026, 05:45 PM UTC
- Duration
- 12h 4m
Affected: Sepolia — JSON-RPC API, Sepolia — Websockets API, Sepolia Beacon — REST API, Sepolia — Webhooks, Sepolia — Streams
Timeline · 4 updates
-
investigating Mar 22, 2026, 05:40 AM UTC
We are investigating intermittent 503 errors affecting the Ethereum Sepolia Testnet. Some customers may experience temporary request failures or degraded performance. Our team is actively working to mitigate the issue and restore full service stability. We will provide an update as more information becomes available.
-
identified Mar 22, 2026, 06:19 AM UTC
Quicknode teams have identified the issue and are actively working to restore nodes. The next update will be provided before 08:00 AM UTC or sooner.
-
monitoring Mar 22, 2026, 08:03 AM UTC
Some nodes have recovered, and 503s have been mitigated. Quicknode teams are continuing to work to recover full capacity and monitor performance to ensure stability. We will provide the next update before 14:00 UTC or sooner.
-
resolved Mar 22, 2026, 05:45 PM UTC
This incident has now been resolved, and nodes have recovered.
Read the full incident report →