There were 8 Daily outages since February 4, 2026, totaling 28h 56m of downtime. Each is summarized below with incident details, duration, and resolution information.
We're receiving reports of some users having problems connecting to Daily calls. It seems to be localized to the Texas area. We're investigating.
We've identified an issue that's causing some users to fail to join calls. The daily-js library has to download a bundle of additional JavaScript as part of joining a call. This bundle is downloaded from c.daily.co, which uses Amazon's CloudFront CDN. CloudFront isn't reporting issues yet, but we're engaging with AWS support to figure out why this is happening.
We're receiving reports of a few other regions experiencing similar connection issues. If you're monitoring client errors and you see messages that start with "Failed to load call object bundle https://c.daily.co/....", you're being affected by this. We're working with AWS to get to the bottom of this.
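If you're instrumenting clients for this, a minimal detection sketch follows. It assumes the failure surfaces through daily-js's 'error' event with an errorMsg string; the exact event payload may vary by daily-js version, so verify against your own error reports.

```ts
import DailyIframe from '@daily-co/daily-js';

// Stable prefix of the error message described above; the full message
// includes the bundle URL, so match on the prefix only.
const BUNDLE_ERROR_PREFIX = 'Failed to load call object bundle';

const call = DailyIframe.createCallObject();

call.on('error', (ev) => {
  if (ev?.errorMsg?.startsWith(BUNDLE_ERROR_PREFIX)) {
    // Report to your error tracker to gauge how many users are affected.
    console.warn('CDN bundle download failed:', ev.errorMsg);
  }
});
```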
We've confirmed through several different end users that this is a DNS issue. Affected users from multiple regions have been able to join calls by pointing their DNS to Google (8.8.8.8) or Cloudflare (1.1.1.1). Obviously, this workaround doesn't scale; we're working with AWS to identify the root cause of the DNS resolution issue for c.daily.co.
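To check which resolver paths are affected from your own network, here is a small Node.js sketch. Nothing in it is Daily-specific; it simply queries c.daily.co through each public resolver mentioned in this update.

```ts
import { Resolver } from 'node:dns/promises';

// Query the hostname through a specific recursive resolver and report
// whether it returns any A records.
async function canResolve(server: string, hostname = 'c.daily.co'): Promise<boolean> {
  const resolver = new Resolver();
  resolver.setServers([server]);
  try {
    return (await resolver.resolve4(hostname)).length > 0;
  } catch {
    return false;
  }
}

// The resolvers mentioned in this update: Cloudflare (1.1.1.1) and Google (8.8.8.8).
for (const server of ['1.1.1.1', '8.8.8.8']) {
  canResolve(server).then((ok) => console.log(`${server}: ${ok ? 'resolves' : 'FAILS'}`));
}
```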
Cloudflare has posted an incident concerning intermittent DNS failures for .co domains: https://www.cloudflarestatus.com/incidents/z3b5zxjtp6g1 This aligns with the troubleshooting we've done so far. Some internet discussions suggest the problem may be somewhat ISP-dependent, but this is unconfirmed. If this is indeed related to the .co domain itself, it would affect not only participants' ability to join calls but also the ability to make API requests to api.daily.co. Updates to follow.
The .co registry appears to be experiencing issues. Cloudflare's recursive DNS is affected (and they've acknowledged it). Many regional ISPs, such as AT&T in the southeast US, rely on Cloudflare's DNS to power their own DNS. Unlike Cloudflare, other DNS providers, like Google and Quad9, are serving .co records from stale cache. If they stop doing this, then anyone using those DNS services will start to experience these same failures. This won't be fully resolved until the .co registry comes back online. In the meantime, different DNS providers may see problems come and go. All of Daily's services remain online and functional, so if your users can resolve .co hostnames, they can use our services without problems.
Cloudflare has posted that they've implemented a fix, and they are monitoring the results. We're still not certain that Cloudflare is the root cause for all affected users, so we'll be monitoring error rates.
Cloudflare has resolved their status incident, but we're unsure if that means the underlying issue is resolved. We're continuing to monitor our own error rates and health checks.
The rate of bundle download failures has decreased, but it's also tracking our overall usage volume through the day. The number of affected users appears to be very small, but we still have tests that can replicate the failure. Unfortunately, this is completely out of our control. If you have users who are continuing to be affected, you can suggest that they use Google's or Quad9's DNS servers at 8.8.8.8 or 9.9.9.9.
Our tests just confirmed that DNS resolution is no longer returning errors for .co domains. This issue has been resolved!
This incident has been resolved.
We're seeing an elevated rate of errors for API requests and room joins. We're addressing it right now.
There was a sudden increase in activity across several databases. We've addressed the cause of the issue, and platform metrics are returning to normal. We're continuing to monitor for any further issues.
This incident has been resolved.
We're investigating elevated error rates for incoming PSTN/SIP calls.
SignalWire has identified an issue causing problems with incoming and outgoing PSTN/SIP calls for Daily customers. If you're making dialout calls, you may see an error that says "DialOut stopped: Remote busy or Remote did not answer or Remote ended the call". If you're using dialin, you may hear a message that says "the number you have dialed is not configured correctly and cannot receive calls" or something similar.
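If you log call failures, a simple sketch for separating these carrier-side errors from application bugs follows. The strings are taken verbatim from this update; where they surface (event payload, webhook, log line) depends on your integration.

```ts
// Error text quoted in this incident. The dial-in message is matched on a
// stable substring, since callers may hear slight variations of it.
const CARRIER_INCIDENT_MARKERS = [
  'DialOut stopped: Remote busy or Remote did not answer or Remote ended the call',
  'the number you have dialed is not configured correctly',
];

// Returns true when an error message matches the failures described above.
function looksLikeCarrierIncident(message: string): boolean {
  return CARRIER_INCIDENT_MARKERS.some((marker) => message.includes(marker));
}
```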
It seems that SignalWire has resolved their issue. We're monitoring for any other problems.
This incident has been resolved. We're still collecting more information on the root cause here; we'll update this status post when we have more info.
We're investigating an issue that may be causing delayed delivery for some webhooks.
We've identified an issue in the webhook system that was causing brief delays in detecting meeting start and end events for webhook delivery. We've addressed that issue. Webhook delivery time was only minimally impacted during the event, and it returned to normal as of about 15 minutes ago.
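If you want visibility into delays like this on your side, here is a minimal sketch of a receiver that measures delivery lag. The `type` and `event_ts` payload fields are assumptions for illustration, so check them against the payloads your endpoint actually receives.

```ts
import express from 'express';

const app = express();
app.use(express.json());

app.post('/daily-webhook', (req, res) => {
  // Hypothetical payload fields; verify against your actual webhook payloads.
  const { type, event_ts } = req.body ?? {};
  if (typeof event_ts === 'number') {
    const delaySeconds = Date.now() / 1000 - event_ts;
    // Alert when delivery lags well beyond normal, e.g. more than a minute.
    if (delaySeconds > 60) {
      console.warn(`webhook "${type}" delivered ${Math.round(delaySeconds)}s late`);
    }
  }
  res.sendStatus(200); // acknowledge quickly; do heavy processing asynchronously
});

app.listen(3000);
```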
This incident has been resolved.
We are currently investigating an issue affecting SIP/PSTN inbound calls. Customers may encounter 480 “Temporarily Unavailable” errors when attempting to receive calls. Our engineering team is actively working to identify the root cause and restore service.
Our upstream carrier partner has identified the root cause of the issue impacting SIP/PSTN dial-in (inbound) calling and has implemented a fix. Service is beginning to recover, though some degradation may still be present while the fix fully propagates. We are actively monitoring and working to confirm full restoration of inbound calling. Outbound (dial-out) calling remains unaffected.
The fix from our upstream carrier partner has been applied, and service has been restored for SIP/PSTN dial-in (inbound) calling. We are seeing successful inbound call connections again, and our internal testing confirms recovery. We are continuing to monitor closely to ensure stability and full recovery.
This incident has been resolved.
We've seen an increase in "Failed to load call object bundle" errors within the past hour. This happens when the browser is unable to download our JavaScript bundle from the CDN. If this happens to you or your users, you should be able to refresh the page and join your call successfully. We're working with our CDN provider to identify the issue.
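A programmatic version of the refresh advice above is to retry the join before asking the user to reload. A sketch, assuming the bundle failure rejects the join() promise (behavior may vary by daily-js version):

```ts
import DailyIframe from '@daily-co/daily-js';

// Retry the join a few times with a short backoff; transient CDN failures
// like the one above often succeed on a fresh attempt.
async function joinWithRetry(roomUrl: string, attempts = 3): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    const call = DailyIframe.createCallObject();
    try {
      await call.join({ url: roomUrl });
      return; // joined successfully
    } catch (err) {
      await call.destroy(); // release the failed instance before creating another
      if (i === attempts - 1) throw err; // out of retries: prompt a page refresh
      await new Promise((r) => setTimeout(r, 1000 * (i + 1)));
    }
  }
}
```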
The initial increase in error rates has subsided. Based on reports from a few customers, it seems like the impact was primarily in the Texas area. We're continuing to work with AWS to determine the root cause.
The issue has been resolved. We’ve seen the number of “Failed to load call object bundle” errors return to normal levels, and call joins are operating as expected.