Elevated 520 Errors on Devnet WebSockets
Timeline · 2 updates
- investigating Apr 29, 2026, 01:31 PM UTC
We are currently investigating this issue.
- resolved Apr 29, 2026, 01:47 PM UTC
This incident has been resolved.
There were 21 Helius outages since February 7, 2026, totaling 285h 9m of downtime. Each is summarised below, with incident details, duration, and resolution information.
Devnet is currently returning {"code":-32401,"message":"Bad request, please try again later."}
The issue has been resolved; Devnet RPC calls are back to normal.
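During windows like the one above, a client-side guard can surface the -32401 error cleanly and retry instead of failing outright. A minimal sketch, assuming a generic JSON-RPC call over fetch; the retry counts and backoff values are illustrative assumptions, not Helius guidance.

```ts
// Sketch: retry a JSON-RPC call when Devnet returns the transient
// -32401 "Bad request, please try again later." error seen above.
// Retry count and backoff are assumptions, not documented values.
async function rpcWithRetry(
  url: string,
  method: string,
  params: unknown[],
  maxRetries = 3,
): Promise<unknown> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
    });
    const body = await res.json();
    // -32401 signals a transient rejection: back off and retry.
    if (body.error?.code === -32401 && attempt < maxRetries) {
      await new Promise((r) => setTimeout(r, 2 ** attempt * 500));
      continue;
    }
    if (body.error) throw new Error(`RPC ${body.error.code}: ${body.error.message}`);
    return body.result;
  }
  throw new Error("retries exhausted");
}
```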
We are investigating increased latency on some RPC requests on Devnet.
The issue has been identified and we have applied a fix; you should see gradual improvement within the next 20-30 minutes.
This incident has been resolved; latencies have returned to normal.
Frankfurt nodes were timing out incoming requests from the beta.helius-rpc.com endpoint (Gatekeeper Beta).
Frankfurt traffic will now fall back to Amsterdam.
Traffic is no longer rerouted away from Frankfurt, and the region is operational.
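A client can mirror this kind of regional fallback by trying an ordered list of endpoints. A sketch under stated assumptions: the URLs below are hypothetical placeholders, not documented Helius regional endpoints.

```ts
// Hypothetical regional endpoints: placeholders, not real Helius URLs.
const ENDPOINTS = [
  "https://fra.example-rpc.com", // primary (e.g. Frankfurt)
  "https://ams.example-rpc.com", // fallback (e.g. Amsterdam)
];

// Try each endpoint in order, moving on when a request times out,
// mirroring the Frankfurt-to-Amsterdam fallback described above.
async function callWithFallback(body: string, timeoutMs = 5000): Promise<Response> {
  let lastErr: unknown;
  for (const url of ENDPOINTS) {
    try {
      return await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body,
        signal: AbortSignal.timeout(timeoutMs), // abort slow regions
      });
    } catch (err) {
      lastErr = err; // timed out or unreachable; try next region
    }
  }
  throw lastErr;
}
```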
LaserStream Preprocessed Transactions is currently down in FRA. Connecting to the FRA endpoint will fail.
A fix has been implemented. Customers connected to Preprocessed Transactions (Beta) in FRA might see slightly elevated latency. All transactions are now being streamed.
We have identified the root cause. A fix has been implemented. Latency for Preprocessed Transactions (Beta) in FRA has returned to normal levels.
This is due to routing issues at a Cloudflare PoP affecting a subset of traffic. We strongly recommend switching to https://beta.helius-rpc.com, which bypasses Cloudflare. Note: the Webhooks REST API (/v0/webhooks) is not yet supported on beta.
This incident has been resolved.
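The endpoint switch recommended above is a one-line change for JSON-RPC traffic, while webhook management stays on the main host. A minimal sketch; the ?api-key= query parameter and the api.helius.xyz webhooks host are assumptions drawn from Helius's usual addressing, so verify against current docs.

```ts
// JSON-RPC traffic can point at the beta endpoint, which bypasses
// Cloudflare. The ?api-key= parameter is shown as an assumption.
const RPC_URL = "https://beta.helius-rpc.com/?api-key=<key>";

// The Webhooks REST API (/v0/webhooks) is not yet supported on beta,
// so webhook management must keep using the main host (assumed here).
const WEBHOOKS_URL = "https://api.helius.xyz/v0/webhooks?api-key=<key>";
```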
The Solana Devnet cluster has been halted due to a planned restart to roll back some feature gates. You may see stale data or increased errors as the restart progresses.
This incident has been resolved.
We have identified an issue impacting the shared Devnet cluster; we have deployed a fix and are waiting for the issue to resolve.
This incident has been resolved.
The issue has been identified and a fix is being implemented.
This incident has been resolved.
We have identified an issue impacting DAS that causes higher latency and 504 errors in the US-West region.
A fix has been implemented and the errors have subsided; we are monitoring the situation.
This incident has been resolved; the issue lasted from 17:13 to 17:26 UTC, and the root cause was a network issue in the Pitt region.
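For 504 windows like the one above, retrying with backoff keeps DAS reads resilient. A sketch assuming the DAS getAsset method over JSON-RPC; the endpoint, asset id, and backoff values are placeholders.

```ts
// Sketch: retry a DAS request on HTTP 504 with exponential backoff.
// Endpoint, asset id, and retry parameters are placeholders.
async function getAssetWithBackoff(url: string, assetId: string): Promise<unknown> {
  for (let attempt = 0; attempt < 4; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: 1,
        method: "getAsset",       // DAS method
        params: { id: assetId },
      }),
    });
    if (res.status === 504) {     // gateway timeout: back off, retry
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250));
      continue;
    }
    return (await res.json()).result;
  }
  throw new Error("DAS request kept timing out (504)");
}
```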
The issue has been identified.
This incident has been resolved as of 14:03 ET
We are monitoring intermittent connectivity issues affecting the Pitt region. Requests to and from the region may experience increased latency.
This incident has been resolved.
LaserStream's historical replay window is temporarily reduced to 9h as we perform maintenance on the system.
This incident has been resolved.
There was an incident that caused elevated latency for account-based calls in US East, such as (but not limited to) getProgramAccounts, getTokenAccountsByOwner, getAccountInfo, and getMultipleAccounts. Most of the impact was from 4:30 PM to 4:50 PM (ET).
There was an incident that caused elevated latency for account-based calls in US East, such as (but not limited to) getProgramAccounts, getTokenAccountsByOwner, getAccountInfo, and getMultipleAccounts.
The issue has been identified and a fix is being implemented.
This incident has been resolved.
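During latency spikes like this, bounding each account-based call with a hard timeout makes requests fail fast instead of hanging. A minimal sketch around getMultipleAccounts; the endpoint, addresses, and timeout value are placeholders.

```ts
// Sketch: bound an account-based call (here getMultipleAccounts)
// with a timeout so latency spikes fail fast rather than hang.
// Endpoint, pubkeys, and timeout are placeholders.
async function getMultipleAccounts(
  url: string,
  pubkeys: string[],
  timeoutMs = 3000,
): Promise<unknown> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getMultipleAccounts",
      params: [pubkeys, { encoding: "base64" }],
    }),
    signal: AbortSignal.timeout(timeoutMs), // abort if the call stalls
  });
  return (await res.json()).result;
}
```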
Data was temporarily delayed for historical RPCs (e.g., getTransaction, getBlock) for roughly 7 minutes, from 23:16 to 23:23 EST. The root cause has been fixed. This incident only impacted the US East region.
LaserStream historical replay is temporarily limited to the past 1,000 slots.
30 minutes of replay is now available. The full 24h of replay will be available by Sunday, 1:30 AM UTC.
This incident has been resolved.
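To translate the 1,000-slot replay window mentioned above into wall-clock time: Solana targets roughly 400 ms per slot, so 1,000 slots is about 400 seconds, or under 7 minutes of history. A sketch that derives the earliest replayable slot from the current slot via the standard getSlot RPC; the endpoint is a placeholder, and the window constant mirrors the figure above.

```ts
// Sketch: compute the earliest slot covered by a 1,000-slot replay
// window. At Solana's ~400 ms slot target, 1000 * 0.4 s = 400 s,
// i.e. roughly 6.7 minutes of history.
const REPLAY_WINDOW_SLOTS = 1000;

async function earliestReplayableSlot(url: string): Promise<number> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "getSlot", params: [] }),
  });
  const currentSlot: number = (await res.json()).result;
  return Math.max(0, currentSlot - REPLAY_WINDOW_SLOTS);
}
```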
Devnet RPC requests may experience intermittent 504s. This issue is unique to Devnet; Mainnet RPC is healthy and not affected.
A fix has been implemented, and the system is recovering. The root cause is related to degraded connectivity between the RPC nodes and Google Bigtable.
This incident has been resolved.
We are currently investigating increased 504 errors and timeouts on DAS requests. We have identified the root cause and applied fixes.
We are continuing to work on a fix for this issue.
A fix has been implemented and everything has recovered; we will keep monitoring.
This incident has been resolved.
We are working on a fix. Please inform us if you're having issues.
We are continuing to work on a fix for this issue.
This incident has been resolved.