Bunny Outage History

Bunny is up right now

Bunny had 34 outages in the last 2 years, totaling 374h 28m of downtime, an average of 1.4 incidents per month.

The 34 Bunny outages since May 20, 2025 are summarised below, with incident details, duration, and resolution information for each.

Source: https://status.bunny.net

Minor May 14, 2026

Degraded Performance in Singapore

Detected by Pingoru
May 14, 2026, 02:23 PM UTC
Resolved
May 14, 2026, 03:41 PM UTC
Duration
1h 17m
Affected: CDN, Edge Storage, Edge Scripting, Magic Containers, Bunny Database
Timeline · 5 updates
  1. investigating May 14, 2026, 02:23 PM UTC

    We are currently experiencing a network issue in our Singapore region, and as a result there may be degraded performance on CDN, Storage, Magic Containers and Database from the region. Our Engineering team are investigating the problem.

  2. investigating May 14, 2026, 02:23 PM UTC

    We are continuing to investigate this issue.

  3. investigating May 14, 2026, 02:33 PM UTC

    We are continuing to investigate this issue.

  4. monitoring May 14, 2026, 02:42 PM UTC

    We identified an outage in relation to an upstream network provider that resulted in partial degradation in Singapore, this is now resolved. We are currently monitoring the situation.

  5. resolved May 14, 2026, 03:41 PM UTC

    This incident has now been cleared.

Minor May 1, 2026

Configuration change delays

Detected by Pingoru
May 01, 2026, 03:51 AM UTC
Resolved
May 01, 2026, 11:12 AM UTC
Duration
7h 20m
Affected: CDN
Timeline · 3 updates
  1. investigating May 01, 2026, 03:51 AM UTC

    We are currently investigating issues related to delayed configuration of Pull Zones.

  2. identified May 01, 2026, 08:43 AM UTC

    We have identified the core issue at hand and our Engineering team are currently reviewing the situation. Configuration delays/syncs are slowly recovering.

  3. resolved May 01, 2026, 11:12 AM UTC

    The configuration delay has now been resolved. At around 00:15, we observed a large-scale domain sync that caused configuration delays. This was resolved at 06:00 UTC, and the backlog of configuration changes was cleared by 09:30 UTC. Our Engineering team have identified the core issue and will take action to prevent future recurrence of the problem.

Minor March 30, 2026

Increased Response Times for Statistics Reporting

Detected by Pingoru
Mar 30, 2026, 09:05 AM UTC
Resolved
Mar 30, 2026, 09:51 PM UTC
Duration
12h 46m
Affected: Dashboard, API
Timeline · 3 updates
  1. identified Mar 30, 2026, 09:05 AM UTC

    We are working on an issue with our Statistics API endpoint. Response times are increased when querying CDN and DNS statistics, resulting in delayed reports or timeouts. This affects both dashboard and direct API requests for statistics.

  2. identified Mar 30, 2026, 03:29 PM UTC

    Our Engineering team has identified the cause of the statistics response delays and is preparing a fix.

  3. resolved Mar 30, 2026, 09:51 PM UTC

    Our Engineering team identified an overloaded process on our statistics endpoint. This was rectified and monitored, and the issue has now been resolved.

Minor March 23, 2026

Delays in Video Processing on Stream

Detected by Pingoru
Mar 23, 2026, 07:50 PM UTC
Resolved
Mar 27, 2026, 09:00 AM UTC
Duration
3d 13h
Affected: Stream - Transcoding Service
Timeline · 2 updates
  1. investigating Mar 23, 2026, 07:50 PM UTC

    We are currently experiencing high demand for our free encoding services. Video processing times may be longer than usual. Premium encoding, existing videos, and delivery are not affected. Our team is actively monitoring the situation. Thank you for your patience.

  2. resolved Mar 27, 2026, 09:00 AM UTC

    We have expanded our transcoding capacity, transcoding delays have now cleared, and processing has normalized.

Minor March 19, 2026

Container Application Update Issues

Detected by Pingoru
Mar 19, 2026, 11:41 AM UTC
Resolved
Mar 19, 2026, 03:05 PM UTC
Duration
3h 24m
Affected: Magic Containers
Timeline · 2 updates
  1. investigating Mar 19, 2026, 11:41 AM UTC

    Our Magic Containers team are currently investigating an issue with application updates. There may be deployment delays or failures on the platform as a result.

  2. resolved Mar 19, 2026, 03:05 PM UTC

    Our Containers team found that a recent deployment had resulted in unintended CPU throttling on new pod or app deployments. This has now been resolved, and the team has made mitigating changes to ensure this does not resurface.

Minor March 5, 2026

Log Forwarding Failures

Detected by Pingoru
Mar 05, 2026, 03:29 PM UTC
Resolved
Mar 06, 2026, 04:43 AM UTC
Duration
13h 13m
Affected: CDN Logging
Timeline · 3 updates
  1. investigating Mar 05, 2026, 03:29 PM UTC

    Our Engineering team are aware of and currently working to address Log Forwarding failures. Any zones configured with Log Forwarding are affected. Standard logging is not affected.

  2. monitoring Mar 05, 2026, 11:45 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Mar 06, 2026, 04:43 AM UTC

    This incident has been resolved.

Minor March 3, 2026

CDN Log Delivery Delay

Detected by Pingoru
Mar 03, 2026, 10:12 AM UTC
Resolved
Mar 04, 2026, 02:32 PM UTC
Duration
1d 4h
Affected: CDN Logging
Timeline · 2 updates
  1. identified Mar 03, 2026, 10:12 AM UTC

    Over the past few weeks, we've grown beyond the scale our previous CDN log delivery platform was originally designed to handle. As a result, customers are experiencing delays in log delivery. We've been working on a more robust, better-scaling ingest system, and that work is now complete. We expect the transition to be fully rolled out by Friday evening (UTC), March 6th.

  2. resolved Mar 04, 2026, 02:32 PM UTC

    Migration to our new Logging Platform is now complete, and near real-time logs should be visible again. Unfortunately, we had to accelerate the migration timeline as our legacy platform began struggling much sooner than expected. Because of this, we made the difficult decision to complete the migration before fully backfilling the expected 3 days of retained logging history. At the moment, approximately 1.5 days of logs are available on the new platform. This will gradually increase and return to the full 3-day retention by Friday. The raw, unprocessed logs remain stored on the legacy logging platform. If you require access to these logs and have a valid justification, please submit a support ticket, and we will do our best to recover them on a case-by-case basis.

Minor March 2, 2026

Delayed Zone Configuration Updates

Detected by Pingoru
Mar 02, 2026, 09:18 AM UTC
Resolved
Mar 02, 2026, 11:21 AM UTC
Duration
2h 3m
Affected: Stream Service, API, CDN
Timeline · 3 updates
  1. investigating Mar 02, 2026, 09:18 AM UTC

    We are currently looking into pull zone configuration delays. Existing configurations and platform traffic are not affected.

  2. monitoring Mar 02, 2026, 10:15 AM UTC

    Our Engineering team has applied a fix, and the backlog of delayed pull zone configurations is now processing. We will monitor the situation.

  3. resolved Mar 02, 2026, 11:21 AM UTC

    This issue is now resolved.

Major February 20, 2026

Disk Failure in Frankfurt

Detected by Pingoru
Feb 20, 2026, 04:13 PM UTC
Resolved
Feb 22, 2026, 07:11 PM UTC
Duration
2d 2h
Affected: Edge Storage
Timeline · 2 updates
  1. identified Feb 20, 2026, 04:13 PM UTC

    We are currently experiencing an outage on one server within our German storage array. There may be isolated timeouts from the Storage API, as well as timeouts on the CDN where no replication is enabled or where assets are uncached on the CDN network. Our Engineering team are working to restore the server.

  2. resolved Feb 22, 2026, 07:11 PM UTC

    The issue affecting one server in our German storage array has been fully resolved. All services are now operating normally. We will continue monitoring to ensure stability.

Minor February 6, 2026

Pullzone Creation Failures

Detected by Pingoru
Feb 06, 2026, 02:02 PM UTC
Resolved
Feb 06, 2026, 02:10 PM UTC
Duration
8m
Affected: API
Timeline · 2 updates
  1. investigating Feb 06, 2026, 02:02 PM UTC

    We are currently investigating an issue pertaining to creating new pull zones. Existing configurations and traffic are not affected.

  2. resolved Feb 06, 2026, 02:10 PM UTC

    Our API team identified the root cause and performed a deployment change on our network to resolve the issue.

Minor January 28, 2026

Stream API/Upload, Storage and CDN Partial Outage

Detected by Pingoru
Jan 28, 2026, 11:39 PM UTC
Resolved
Jan 29, 2026, 12:35 AM UTC
Duration
55m
Affected: Stream Service, Dashboard, Stream - Transcoding Service, API, CDN, Edge Storage
Timeline · 2 updates
  1. investigating Jan 28, 2026, 11:39 PM UTC

    We identified a network issue pertaining to our Stream, API and Storage (Frankfurt) platform. CDN services with Frankfurt storage may have been partially affected. We are currently investigating the problem.

  2. resolved Jan 29, 2026, 12:35 AM UTC

    This was caused by an upstream network provider experiencing a network outage. While CDN traffic was not affected, supporting services such as Stream, Storage and our Core API may have been intermittently returning timeout errors. This has now been rectified and plans are in motion to further build redundancy into these products.

Major January 15, 2026

Singapore CDN & Storage Network Outage

Detected by Pingoru
Jan 15, 2026, 10:28 PM UTC
Resolved
Jan 15, 2026, 10:58 PM UTC
Duration
30m
Affected: CDN, Edge Storage
Timeline · 3 updates
  1. identified Jan 15, 2026, 10:28 PM UTC

    We are currently investigating connectivity issues affecting our Singapore location. CDN traffic will be re-balancing to other locations, but non-replicated storage services in Singapore will be offline. We will provide an update ASAP.

  2. monitoring Jan 15, 2026, 10:39 PM UTC

    A fix has been implemented, and we are monitoring.

  3. resolved Jan 15, 2026, 10:58 PM UTC

    This incident has been resolved.

Major December 11, 2025

Storage Issues in Brazil Region

Detected by Pingoru
Dec 11, 2025, 08:56 AM UTC
Resolved
Dec 11, 2025, 09:02 AM UTC
Duration
6m
Affected: Edge Storage
Timeline · 2 updates
  1. investigating Dec 11, 2025, 08:56 AM UTC

    We are currently investigating a Storage outage within our Brazil storage nodes.

  2. resolved Dec 11, 2025, 09:02 AM UTC

    A transient networking issue was addressed, and the issue is now resolved.

Notice December 5, 2025

Ticketing System & Documentation Unavailable - Third-Party Outage

Detected by Pingoru
Dec 05, 2025, 09:01 AM UTC
Resolved
Dec 05, 2025, 09:29 AM UTC
Duration
28m
Timeline · 2 updates
  1. identified Dec 05, 2025, 09:01 AM UTC

    We’re currently experiencing an outage affecting our ticketing system and documentation portal. This disruption is caused by an ongoing third-party incident impacting multiple internet services and third-party tools, including our ticketing system. As a result, you may be unable to submit or view tickets, and some documentation pages may fail to load. We’re monitoring the status and will restore full functionality as soon as upstream services stabilize.

  2. resolved Dec 05, 2025, 09:29 AM UTC

    The incident appears to be resolved.

Notice December 1, 2025

Delayed Billing Process

Detected by Pingoru
Dec 01, 2025, 02:56 PM UTC
Resolved
Dec 02, 2025, 08:45 AM UTC
Duration
17h 49m
Timeline · 2 updates
  1. identified Dec 01, 2025, 02:56 PM UTC

    We have identified an issue where a number of users may not have been billed correctly since 11:30 AM CET on Friday. Affected users will have the delayed billing processed over the next 24 hours or so. Any accounts moved into a negative balance as a result of the correction will not be disabled for a period of time thereafter, to avoid disruption to services.

  2. resolved Dec 02, 2025, 08:45 AM UTC

    This incident has been resolved.

Minor November 28, 2025

API increased error rates

Detected by Pingoru
Nov 28, 2025, 01:31 PM UTC
Resolved
Nov 28, 2025, 05:24 PM UTC
Duration
3h 52m
Affected: API
Timeline · 6 updates
  1. identified Nov 28, 2025, 01:31 PM UTC

    We are aware of increased error rates related to the bunny.net API. CDN traffic delivery is not affected.

  2. identified Nov 28, 2025, 02:37 PM UTC

    Our Engineering team are continuing to work on the API issue. Platform delivery remains unaffected.

  3. investigating Nov 28, 2025, 03:33 PM UTC

    Our Engineering team are continuing to investigate the matter.

  4. investigating Nov 28, 2025, 04:40 PM UTC

    Our Engineering team are still working to resolve this issue.

  5. monitoring Nov 28, 2025, 05:05 PM UTC

    We have implemented a fix, affected API services are now restored. We will monitor the situation.

  6. resolved Nov 28, 2025, 05:24 PM UTC

    This incident has now been resolved.

Minor November 23, 2025

[Bunny DNS] Intermittent DNS Resolution Issues (Coco/Kiki)

Detected by Pingoru
Nov 23, 2025, 10:23 PM UTC
Resolved
Nov 27, 2025, 10:01 AM UTC
Duration
3d 11h
Affected: DNS
Timeline · 6 updates
  1. investigating Nov 23, 2025, 10:23 PM UTC

    We are currently investigating an issue where some DNS queries are failing. Please note that the CDN is not impacted, and content delivery remains unaffected.

  2. identified Nov 23, 2025, 11:22 PM UTC

    We have identified the issue. Coco has mostly recovered, but Kiki is still experiencing some minor issues. Our team is continuing to work toward full recovery and will provide further updates as they become available.

  3. identified Nov 24, 2025, 01:11 AM UTC

    Coco is now stable and functioning as expected. However, Kiki is experiencing degraded performance, which may result in higher latency and should resolve on its own. Our team is actively investigating and working on a solution to mitigate these issues. We will continue to provide updates as new information becomes available. Thank you for your patience and understanding.

  4. monitoring Nov 24, 2025, 08:33 AM UTC

    A fix has been implemented and we are monitoring the results.

  5. monitoring Nov 24, 2025, 08:34 AM UTC

    We are continuing to monitor for any further issues.

  6. resolved Nov 27, 2025, 10:01 AM UTC

    This issue has now been fully resolved.

Notice November 18, 2025

Payment Gateway Timeouts

Detected by Pingoru
Nov 18, 2025, 12:12 PM UTC
Resolved
Nov 18, 2025, 02:46 PM UTC
Duration
2h 33m
Affected: Dashboard
Timeline · 2 updates
  1. investigating Nov 18, 2025, 12:12 PM UTC

    We are currently affected by an outage from our payment provider gateway, causing payment failures on our dashboard.

  2. resolved Nov 18, 2025, 02:46 PM UTC

    This incident has been resolved.

Minor November 7, 2025

Increased CDN Response Times in Some Regions

Detected by Pingoru
Nov 07, 2025, 12:02 PM UTC
Resolved
Nov 07, 2025, 03:21 PM UTC
Duration
3h 18m
Affected: CDN
Timeline · 3 updates
  1. identified Nov 07, 2025, 12:02 PM UTC

    We have identified an issue within isolated regions that is causing sporadic slowdowns on CDN requests. Affected regions are limited to: UK, NY, NG, GA, CA, WA, PL, TX, SE, and AT.

  2. monitoring Nov 07, 2025, 12:03 PM UTC

    A fix has been deployed and the issue has now subsided.

  3. resolved Nov 07, 2025, 03:21 PM UTC

    This incident has now been resolved.

Minor October 2, 2025

Sporadic 502 responses

Detected by Pingoru
Oct 02, 2025, 03:48 PM UTC
Resolved
Oct 02, 2025, 06:26 PM UTC
Duration
2h 37m
Affected: Stream Service, CDN
Timeline · 3 updates
  1. investigating Oct 02, 2025, 03:48 PM UTC

    We are currently investigating 502s being returned on the network.

  2. monitoring Oct 02, 2025, 04:32 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Oct 02, 2025, 06:26 PM UTC

    This incident has now been resolved.

Minor September 24, 2025

Billing Recharge Failures

Detected by Pingoru
Sep 24, 2025, 11:49 AM UTC
Resolved
Sep 24, 2025, 01:19 PM UTC
Duration
1h 30m
Affected: Dashboard, API
Timeline · 2 updates
  1. investigating Sep 24, 2025, 11:49 AM UTC

    We are currently investigating issues pertaining to account recharges/top-ups.

  2. resolved Sep 24, 2025, 01:19 PM UTC

    This issue is now resolved.

Minor September 23, 2025

Intermittent Storage Upload failures to Frankfurt

Detected by Pingoru
Sep 23, 2025, 02:18 PM UTC
Resolved
Sep 23, 2025, 05:34 PM UTC
Duration
3h 15m
Affected: Edge Storage
Timeline · 3 updates
  1. investigating Sep 23, 2025, 02:18 PM UTC

    We are currently investigating an intermittent upload issue. This is isolated to the Frankfurt location.

  2. monitoring Sep 23, 2025, 04:00 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Sep 23, 2025, 05:34 PM UTC

    The issue has been identified and a fix has been implemented.

Minor September 19, 2025

API Degraded Performance/Timeouts

Detected by Pingoru
Sep 19, 2025, 08:27 AM UTC
Resolved
Sep 19, 2025, 09:28 AM UTC
Duration
1h
Affected: Dashboard, API
Timeline · 4 updates
  1. investigating Sep 19, 2025, 08:27 AM UTC

    We are currently investigating API degradation across multiple endpoints. Platform delivery is not affected.

  2. monitoring Sep 19, 2025, 08:44 AM UTC

    The API backend has now been restored and we are monitoring the situation.

  3. monitoring Sep 19, 2025, 08:44 AM UTC

    We are continuing to monitor for any further issues.

  4. resolved Sep 19, 2025, 09:28 AM UTC

    This issue is now resolved. A transient network issue caused performance degradation on our API processes, which was addressed.

Notice September 9, 2025

Dashboard/API Logging Delay

Detected by Pingoru
Sep 09, 2025, 10:01 AM UTC
Resolved
Sep 11, 2025, 09:29 AM UTC
Duration
1d 23h
Timeline · 5 updates
  1. identified Sep 09, 2025, 10:01 AM UTC

    We are aware of and currently investigating a logging issue. This affects the dashboard Log Explorer and the API Logging endpoint. Log Forwarding is not affected.

  2. monitoring Sep 09, 2025, 04:01 PM UTC

    Our Engineering team has implemented a fix on Logging. However, there is a queue backlog that may take some time to fully process; we will continue to monitor the situation.

  3. monitoring Sep 09, 2025, 10:20 PM UTC

    We have processed approximately 50% of the log queue backlog. We will continue to monitor the situation and update accordingly.

  4. monitoring Sep 10, 2025, 01:48 PM UTC

    Logs have now almost fully normalized. We will continue to monitor the situation.

  5. resolved Sep 11, 2025, 09:29 AM UTC

    The logging backlog has fully cleared and delivery is optimal again.

Minor September 5, 2025

Isolated 502s in Brazil - Storage

Detected by Pingoru
Sep 05, 2025, 02:09 PM UTC
Resolved
Sep 05, 2025, 02:49 PM UTC
Duration
40m
Affected: CDN, Edge Storage
Timeline · 3 updates
  1. investigating Sep 05, 2025, 02:09 PM UTC

    We are currently investigating an issue with our Storage platform in Brazil. Permacache and Storage as origin in the region are partially affected.

  2. monitoring Sep 05, 2025, 02:44 PM UTC

    We have identified a routing issue within the backend process of CDN to Storage. Our Engineering team have made some mitigation changes and are monitoring the situation.

  3. resolved Sep 05, 2025, 02:49 PM UTC

    Our Engineering team has resolved the Storage routing issues, and the service is now fully operational.
