Bitbucket Outage History

Bitbucket is up right now

There have been 4 Bitbucket outages since March 6, 2026, totaling 9h 21m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://bitbucket.status.atlassian.com

Minor · April 16, 2026

Bitbucket Pipelines degraded performance

Detected by Pingoru
Apr 16, 2026, 08:23 PM UTC
Resolved
Apr 16, 2026, 08:44 PM UTC
Duration
21m
Affected: Website, API, Git via SSH, Authentication and user management, Git via HTTPS, Webhooks, Source downloads, Pipelines, Git LFS, Email delivery, Purchasing & Licensing, Signup
Timeline · 3 updates
  1. monitoring Apr 16, 2026, 08:23 PM UTC

    We are investigating cases of degraded performance for Atlassian Bitbucket Pipelines customers. We mitigated an issue preventing new pipelines from starting and have scaled up services, which are now processing the backlog. Customers should see recovery shortly.

  2. monitoring Apr 16, 2026, 08:44 PM UTC

    We are investigating cases of degraded performance for Atlassian Bitbucket Pipelines customers. We mitigated an issue preventing new pipelines from starting and have scaled up services, which are now processing the backlog. Customers should see recovery shortly.

  3. resolved Apr 16, 2026, 08:44 PM UTC

    On April 16 at 8:00 PM UTC, Bitbucket Pipelines users may have experienced performance degradation when running new pipelines. The issue has now been resolved, and the service is operating normally for all affected customers.

Read the full incident report →

Minor · April 13, 2026

Users experiencing issues with login across Atlassian products

Detected by Pingoru
Apr 13, 2026, 07:29 AM UTC
Resolved
Apr 13, 2026, 10:17 AM UTC
Duration
2h 47m
Affected: Website, API, Git via SSH, Authentication and user management, Git via HTTPS, Webhooks, Source downloads, Pipelines, Git LFS, Email delivery, Purchasing & Licensing, Signup
Timeline · 4 updates
  1. monitoring Apr 13, 2026, 07:29 AM UTC

    Our team is aware that some users were unable to log in to Atlassian products with their Atlassian accounts. While we believe this issue is now resolved, we are continuing to monitor all products and services for any ongoing impact. Our team is investigating with urgency, and we will provide an update within 1 hour.

  2. monitoring Apr 13, 2026, 08:33 AM UTC

    Atlassian account login services are now operating as expected, and we are not observing new errors. We continue to closely monitor our systems to ensure they remain stable. We will provide another update within 60 minutes or sooner if we detect a change in status.

  3. resolved Apr 13, 2026, 10:17 AM UTC

    On April 13, 2026, between 05:49 a.m. and 06:25 a.m. UTC, some users were unable to log in to Atlassian products with their Atlassian accounts. The underlying issue has been addressed, and authentication services have remained stable with no new impact observed.

  4. postmortem Apr 21, 2026, 02:32 AM UTC

    ### Summary

    On April 13, 2026, between 05:49 and 06:29 UTC, customers experienced failures when attempting to log in, sign up, reset passwords, and complete multi-factor authentication flows across Atlassian cloud products. Approximately 90% of authentication requests failed during the peak impact window, affecting users in the US East and EU regions. The incident was mitigated within 40 minutes through manual intervention, and full service was restored by 06:29 UTC.

    ### IMPACT

    * **Duration**: ~40 minutes (05:49–06:29 UTC, April 13, 2026)
    * **Affected regions**: US East and EU (authentication infrastructure serves EU traffic from US East, with traffic primarily from the EU at this time of day).
    * **Affected products**: All Atlassian cloud products requiring authentication, including Jira, Confluence, Jira Service Management, and Trello.
    * **Customer experience**: Users attempting to log in, sign up, reset passwords, or complete MFA flows received errors. Users already logged in with active sessions were unaffected.

    ### ROOT CAUSE

    This incident had several contributing factors that combined to produce a failure the system could not recover from without manual intervention.

    **The primary cause** was a recently enabled change that caused our authentication infrastructure to retry requests to a downstream identity service when those requests were slow to respond. This retry behaviour was rolled out to 100% of traffic earlier the same day. Under normal conditions this would be benign, but it meant that any slowness in the downstream service was amplified. Since multiple upstream services were also independently retrying their own failed requests, the amplification compounded further into a retry storm.

    **The trigger** was a burst of legitimate user traffic. A pattern of many parallel link preview requests for a single user caused a concentrated load spike on a downstream identity service, pushing its response times above the retry threshold. On its own, this kind of spike had occurred many times before and always recovered. With the retry amplification now in effect, the spike instead created a runaway feedback loop: slow responses caused retries, retries increased load, and increased load caused slower responses, preventing recovery (see the retry sketch after this incident entry).

    The incident was mitigated by manually scaling up the downstream identity service to provide sufficient capacity to absorb the amplified load. Once scaled, the service recovered immediately, bringing authentication error rates to zero within one minute.

    ### REMEDIAL ACTIONS PLAN & NEXT STEPS

    We are taking the following actions designed to prevent recurrence and improve our resilience:

    1. **Immediate**: The retry-on-timeout change has been disabled.
    2. **Load shedding and self-healing**: We are adding load shedding capabilities to our authentication services so that they can automatically shed excess load and self-recover during traffic spikes, without requiring manual action before automatic scaling kicks in.
    3. **Reducing request fan-out**: We are reviewing patterns where a single user action can generate many parallel downstream requests, and will introduce methods where possible to reduce the amplification potential.

    We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform’s reliability.

    Thanks,
    Atlassian Customer Support

Read the full incident report →
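
The feedback loop in that postmortem is a classic retry storm: each timed-out request spawns another, so load grows instead of draining. As an illustration only (none of this is Atlassian's code; `call_identity_service` and the limits are made-up stand-ins), here is a minimal Python sketch of the usual mitigation: cap the number of attempts and space retries out with exponential backoff and jitter:

```python
import random
import time

# Hypothetical sketch, not Atlassian's implementation. Unbounded
# retry-on-timeout amplifies load on a slow downstream service; capping
# attempts and adding jittered backoff keeps retries from stampeding.

MAX_ATTEMPTS = 3
BASE_DELAY = 0.2   # seconds
MAX_DELAY = 2.0    # seconds


class DownstreamTimeout(Exception):
    """Raised when the downstream identity service is too slow to respond."""


def call_identity_service(request_id: int) -> str:
    # Stand-in for a network call; fails ~30% of the time to simulate load.
    if random.random() < 0.3:
        raise DownstreamTimeout(f"request {request_id} timed out")
    return f"ok:{request_id}"


def call_with_backoff(request_id: int) -> str:
    """Retry with capped attempts and full jitter, instead of retrying
    immediately on every timeout (the behaviour that caused the storm)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return call_identity_service(request_id)
        except DownstreamTimeout:
            if attempt == MAX_ATTEMPTS:
                raise  # give up; let the caller shed or queue the work
            # Full jitter: sleep a random amount up to the exponential cap,
            # so concurrent clients don't retry in lockstep.
            delay = min(MAX_DELAY, BASE_DELAY * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
    raise AssertionError("unreachable")


if __name__ == "__main__":
    for i in range(5):
        try:
            print(call_with_backoff(i))
        except DownstreamTimeout as exc:
            print(f"dropped: {exc}")
```

Capping attempts bounds the worst-case amplification, and jitter keeps concurrent clients from retrying in lockstep, which is exactly the runaway feedback loop the postmortem describes.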

Minor · March 12, 2026

Degraded performance of Bitbucket cloud

Detected by Pingoru
Mar 12, 2026, 08:48 AM UTC
Resolved
Mar 12, 2026, 12:33 PM UTC
Duration
3h 45m
Affected: Website, API, Git via SSH, Authentication and user management, Git via HTTPS, Webhooks, Source downloads, Pipelines, Git LFS, Email delivery, Purchasing & Licensing, Signup
Timeline · 5 updates
  1. investigating Mar 12, 2026, 08:48 AM UTC

    We are actively investigating a service disruption impacting Bitbucket for some customers. We'll share updates here as more information is available.

  2. investigating Mar 12, 2026, 09:40 AM UTC

    We are actively investigating reports of a service disruption affecting Bitbucket Cloud. We'll share updates here within the next hour or as more information is available.

  3. investigating Mar 12, 2026, 10:52 AM UTC

    Our engineers are actively working to resolve the Bitbucket Cloud disruption. No new updates at this moment. We will share more details within the next hour or as soon as more information is available.

  4. investigating Mar 12, 2026, 12:05 PM UTC

    Our engineers continue to work actively to identify the root cause. Meanwhile, the affected service is now stable and the error rate has decreased. We will share more details within the next hour, or sooner if more information becomes available.

  5. resolved Mar 12, 2026, 12:33 PM UTC

    We have successfully mitigated the incident, and the affected service is now fully operational. Our teams have verified that normal functionality has been restored and the service is performing as expected.

Read the full incident report →

Critical · March 6, 2026

Disrupted Bitbucket availability

Detected by Pingoru
Mar 06, 2026, 02:44 AM UTC
Resolved
Mar 06, 2026, 05:11 AM UTC
Duration
2h 27m
Affected: Website, API, Git via SSH, Authentication and user management, Git via HTTPS, Webhooks, Source downloads, Pipelines, Git LFS, Email delivery, Purchasing & Licensing, Signup
Timeline · 6 updates
  1. investigating Mar 06, 2026, 02:44 AM UTC

    We are actively investigating a service disruption impacting Bitbucket for some customers. We'll share updates here as more information is available.

  2. identified Mar 06, 2026, 03:10 AM UTC

    We have identified the cause of the issue, and our teams are diligently working on a mitigation. Affected users will experience Bitbucket being unavailable. We'll continue to share additional updates here as more information is available, with our next update to be posted within 1 hour.

  3. identified Mar 06, 2026, 04:04 AM UTC

    As restoration activities continue, we are now starting to see services recover for customers on Bitbucket Web and Git over HTTPS. We are continuing to work on mitigation across all remaining Bitbucket services. Our next update will be provided within one hour, or sooner if full recovery occurs before then.

  4. identified Mar 06, 2026, 05:00 AM UTC

    Our team has now put mitigations in place across the majority of Bitbucket services, and all are now recovering except for Pipelines, which is still being actively investigated. We are continuing to monitor these services and will provide a further update within 1 hour or when services have fully recovered.

  5. resolved Mar 06, 2026, 05:11 AM UTC

    On 06 March 2026 UTC, Bitbucket experienced a disruption, and services were unavailable to affected users. The issue has now been resolved, and the service is operating normally for all affected customers.

  6. postmortem Mar 26, 2026, 05:09 PM UTC

    ### Summary

    On March 6, 2026, between 02:19 UTC and 04:00 UTC, Bitbucket Cloud experienced an incident impacting the web app, API, CLI, and Pipelines operations. This was caused by the Bitbucket application hitting a regional provisioning API rate limit with our hosting provider, preventing application workers from handling website traffic. The incident was detected within 1 minute by automated monitoring and mitigated by scaling systems down and then back up to full capacity, which put Atlassian systems into a known good state.

    ### IMPACT

    The incident resulted in Bitbucket Cloud services being unavailable for 1 hour and 6 minutes on March 6, 2026, between 02:19 UTC and 03:25 UTC, followed by degraded website performance until 04:00 UTC. During this time, customers were unable to access Bitbucket services including the web app, Git operations (clone, push, pull over HTTPS and SSH), the API, and running builds in Pipelines.

    ### ROOT CAUSE

    The issue stemmed from a change to an internal deployment system that increased use of a platform credential service, hitting a quota with our hosting provider. This blocked Bitbucket services from deploying additional capacity, because new application nodes call the credential service on startup and were being rate limited. This caused degradation of Bitbucket experiences and more failed requests to Bitbucket Cloud’s website and public APIs (see the rate-limiting sketch after this incident entry).

    ### REMEDIAL ACTIONS PLAN & NEXT STEPS

    The incident response team manually scaled down Bitbucket services, then gradually scaled them back up while closely monitoring our quota. We simultaneously engaged with our hosting provider to temporarily increase this limit and unblock bringing more Bitbucket service capacity online.

    We know that outages impact your productivity. While we have a number of testing and preventative processes in place, Bitbucket services lacked the necessary boundaries to be resilient to upstream platform system changes. To help minimise the impact of breaking changes to our environments, we will implement additional preventative measures such as:

    * Improve monitoring of shared Atlassian platform resources.
    * Update Bitbucket application bootstrapping to prevent new capacity from failing during resource contention of shared platform services.
    * Reduce Bitbucket’s dependency on shared hosting provider services.
    * Deploy Bitbucket services across multiple regions to reduce single-region failure risk.

    We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

    Thanks,
    Atlassian Customer Support

Read the full incident report →
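
The root cause here is a shared quota exhausted by bursty bootstrapping: many new nodes calling a credential API at once. One common guard, sketched below under stated assumptions (the `TokenBucket` class, the 5 req/s rate, and `fetch_credentials` are all hypothetical illustrations, not Bitbucket's implementation), is a client-side token bucket in front of the provider's API so a startup burst gets smoothed out rather than rate-limited:

```python
import threading
import time

# Hypothetical sketch, not Atlassian's configuration. A token bucket in
# front of the hosting provider's credential API keeps a burst of booting
# application nodes under the regional quota instead of tripping it.


class TokenBucket:
    """Allow at most `rate` calls per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        # Block until a token is available, refilling at `rate` per second.
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
            time.sleep(0.05)


bucket = TokenBucket(rate=5.0, capacity=10)  # assumed quota: 5 req/s


def fetch_credentials(node_id: int) -> str:
    bucket.acquire()  # wait for quota instead of failing the node at startup
    return f"credentials-for-node-{node_id}"  # stand-in for the real API call


if __name__ == "__main__":
    # Simulate 20 nodes bootstrapping at once; calls are smoothed, not dropped.
    for node in range(20):
        print(fetch_credentials(node))
```

Smoothing the burst client-side keeps new capacity coming online under the provider's limit, which is the failure mode the postmortem's bootstrapping action item targets.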

Looking to track Bitbucket downtime and outages?

Pingoru polls Bitbucket's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
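
If you would rather script a quick check yourself, Atlassian status pages are built on Statuspage, which exposes a public JSON summary. A minimal polling sketch in Python, assuming the standard /api/v2/status.json route and the 5-minute cadence mentioned above:

```python
import json
import time
import urllib.request

# Minimal sketch: poll the public Statuspage endpoint that Atlassian
# status pages expose and flag anything other than "all operational".

STATUS_URL = "https://bitbucket.status.atlassian.com/api/v2/status.json"
POLL_INTERVAL = 300  # seconds (5 minutes)


def check_status() -> str:
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    # Statuspage reports an overall indicator: none / minor / major / critical.
    return payload["status"]["indicator"]


if __name__ == "__main__":
    while True:
        indicator = check_status()
        if indicator != "none":
            print(f"Bitbucket is reporting an issue: {indicator}")
        time.sleep(POLL_INTERVAL)
```

A monitoring service adds alert routing, deduplication, and component-level filtering on top of this basic loop: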

  • Real-time alerts when Bitbucket reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Bitbucket alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Bitbucket for free

5 free monitors · No credit card required