Confluence Outage History

Confluence is up right now

There have been 9 Confluence outages since February 3, 2026, totaling 37h 20m of downtime. Each incident is summarised below with details, duration, and resolution information.

Source: https://confluence.status.atlassian.com

Critical April 14, 2026

Disrupted Rovo availability for Automation rules

Detected by Pingoru
Apr 14, 2026, 12:25 PM UTC
Resolved
Apr 14, 2026, 04:11 PM UTC
Duration
3h 46m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 5 updates
  1. investigating Apr 14, 2026, 12:25 PM UTC

    We are actively investigating reports of a partial service disruption affecting Rovo, specifically for customers using automation rules that invoke Rovo agents. Some customers may find that these automations are not completing as expected. We'll share updates here as more information becomes available.

  2. investigating Apr 14, 2026, 12:25 PM UTC

    We have identified that this incident also affects automation rules in Confluence that use Rovo agents. We are investigating and will provide updates as we learn more.

  3. identified Apr 14, 2026, 02:11 PM UTC

    We have identified the issue, and our teams are working to resolve it and restore normal operations as quickly as possible. We will provide further updates as they become available.

  4. monitoring Apr 14, 2026, 03:46 PM UTC

    The issue has been resolved, and services are now operating normally for all affected customers. We will continue to monitor closely to confirm stability.

  5. resolved Apr 14, 2026, 04:11 PM UTC

    On April 14, 2026, affected users may have experienced some service disruption with automation rules that use Rovo agents. The issue has now been resolved, and the service is operating normally for all affected customers.


Minor April 13, 2026

Users experiencing issues with login across Atlassian products

Detected by Pingoru
Apr 13, 2026, 07:29 AM UTC
Resolved
Apr 13, 2026, 10:17 AM UTC
Duration
2h 47m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
  1. monitoring Apr 13, 2026, 07:29 AM UTC

    Our team is aware that some users were unable to log in to Atlassian products with their Atlassian accounts. While we believe this issue is now resolved, we are continuing to monitor all products and services for any ongoing impact. Our team is investigating with urgency, and we will provide an update within 1 hour.

  2. monitoring Apr 13, 2026, 08:33 AM UTC

    Atlassian account login services are now operating as expected, and we are not observing new errors. We continue to closely monitor our systems to ensure they remain stable. We will provide another update within 60 minutes or sooner if we detect a change in status.

  3. resolved Apr 13, 2026, 10:17 AM UTC

    On April 13, 2026, between 05:49 a.m. and 06:25 a.m. UTC, some users were unable to log in to Atlassian products with their Atlassian accounts. The underlying issue has been addressed, and authentication services have remained stable with no new impact observed.

  4. postmortem Apr 21, 2026, 02:33 AM UTC

    ### Summary

    On April 13, 2026, between 05:49 and 06:29 UTC, customers experienced failures when attempting to log in, sign up, reset passwords, and complete multi-factor authentication flows across Atlassian cloud products. Approximately 90% of authentication requests failed during the peak impact window, affecting users in the US East and EU regions. The incident was mitigated within 40 minutes through manual intervention, and full service was restored by 06:29 UTC.

    ### IMPACT

    * **Duration**: ~40 minutes (05:49–06:29 UTC, April 13, 2026)
    * **Affected regions**: US East and EU (authentication infrastructure serves EU traffic from US East, with traffic primarily from the EU at this time of day).
    * **Affected products**: All Atlassian cloud products requiring authentication, including Jira, Confluence, Jira Service Management, and Trello.
    * **Customer experience**: Users attempting to log in, sign up, reset passwords, or complete MFA flows received errors. Users already logged in with active sessions were unaffected.

    ### ROOT CAUSE

    This incident had several contributing factors that combined to produce a failure the system could not recover from without manual intervention.

    **The primary cause** was a recently enabled change that caused our authentication infrastructure to retry requests to a downstream identity service when those requests were slow to respond. This retry behaviour was rolled out to 100% of traffic earlier the same day. Under normal conditions this would be benign, but it meant that any slowness in the downstream service was amplified. Since multiple upstream services were also independently retrying their own failed requests, the amplification compounded further into a retry storm.

    **The trigger** was a burst of legitimate user traffic. A pattern of many parallel link preview requests for a single user caused a concentrated load spike on a downstream identity service, pushing its response times above the retry threshold.

    On its own, this kind of spike had occurred many times before and always recovered. With the retry amplification now in effect, the spike instead created a runaway feedback loop: slow responses caused retries, retries increased load, and increased load caused slower responses, preventing recovery. The incident was mitigated by manually scaling up the downstream identity service to provide sufficient capacity to absorb the amplified load. Once scaled, the service recovered immediately, bringing authentication error rates to zero within one minute.

    ### REMEDIAL ACTIONS PLAN & NEXT STEPS

    We are taking the following actions designed to prevent recurrence and improve our resilience:

    1. **Immediate**: The retry-on-timeout change has been disabled.
    2. **Load shedding and self-healing**: We are adding load shedding capabilities to our authentication services so that they can automatically shed excess load and self-recover during traffic spikes, without requiring manual action before automatic scaling takes effect.
    3. **Reducing request fan-out**: We are reviewing patterns where a single user action can generate many parallel downstream requests, and will introduce methods where possible to reduce the amplification potential.

    We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform's reliability.

    Thanks, Atlassian Customer Support
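
The runaway feedback loop described in this postmortem is a well-known failure mode of naive retry-on-timeout logic. A common mitigation, shown here as a minimal Python sketch rather than Atlassian's actual implementation, is to cap retry attempts and spread them out with jittered exponential backoff (function and parameter names are hypothetical):

```python
import random
import time

def call_with_backoff(request_fn, max_attempts=3, base_delay=0.1, cap=2.0):
    """Call request_fn with capped retries and full-jitter exponential backoff.

    Capping attempts bounds the worst-case amplification: with
    max_attempts=3, a struggling dependency sees at most 3x the original
    traffic, rather than an unbounded retry storm.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: fail fast instead of piling on load
            # Full jitter spreads retries out in time so many callers
            # don't retry in lockstep against an already-slow service.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            time.sleep(delay)
```

A retry budget (e.g. retries may be at most 10% of total requests) is a stronger variant: it stops retrying entirely when the whole fleet is already retrying heavily, which is exactly the condition that turned this spike into an outage.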


Critical April 8, 2026

Multiple products impacted by search failures

Detected by Pingoru
Apr 08, 2026, 05:41 AM UTC
Resolved
Apr 08, 2026, 11:43 AM UTC
Duration
6h 2m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 14 updates
  1. investigating Apr 08, 2026, 05:41 AM UTC

    Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.

  2. investigating Apr 08, 2026, 06:01 AM UTC

    Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.

  3. investigating Apr 08, 2026, 06:01 AM UTC

    This incident is now understood to also be impacting search in Jira and Confluence, as well as causing additional downstream impacts to Rovo Chat, User Management, Administration, and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.

  4. investigating Apr 08, 2026, 06:24 AM UTC

    Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.

  5. investigating Apr 08, 2026, 06:25 AM UTC

    Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.

  6. identified Apr 08, 2026, 06:25 AM UTC

    We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.

  7. identified Apr 08, 2026, 07:40 AM UTC

    We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.

  8. identified Apr 08, 2026, 07:40 AM UTC

    Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.

  9. identified Apr 08, 2026, 08:33 AM UTC

    We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.

  10. identified Apr 08, 2026, 08:33 AM UTC

    We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.

  11. identified Apr 08, 2026, 09:03 AM UTC

    We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.

  12. monitoring Apr 08, 2026, 09:03 AM UTC

    The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.

  13. resolved Apr 08, 2026, 11:43 AM UTC

    The issue has now been resolved, and the service is operating normally for all affected customers.

  14. postmortem Apr 17, 2026, 04:24 PM UTC

    ### Summary

    On April 8, 2026, between 04:46 UTC and 12:09 UTC, search functionality was unavailable or degraded across several Atlassian Cloud products, including Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management, and Atlassian Administration. A configuration change increased the resources reserved for a core system component that runs on nodes in our compute platform. On a subset of clusters configured for high-density workloads, the increased reservations exceeded available node capacity, interrupting search and related experiences for affected customers. The root cause was identified and a rollback was merged at 05:42 UTC, with some systems seeing recovery by 07:33 UTC. Core search functionality was restored by approximately 08:55 UTC, and full downstream recovery completed by 12:09 UTC.

    ### IMPACT

    During the impact period, some customers experienced outages or degradation in search across Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management, and Atlassian Administration. Other experiences that rely on search, such as quick find, navigation, AI assistants, and dashboards, were also intermittently affected. Impacted customers may have been unable to find pages or recordings; experienced degraded performance when finding issues; received empty or delayed search results; or found that AI assistants and dashboards could not retrieve relevant context.

    **Jira, Jira Service Management and Customer Service Management:** Search, and experiences that depend on search such as finding issues and agent responses in CSM, remained available but with degraded performance in fallback mode. By 12:09 UTC, search indexes and search performance were fully restored from fallback to full capacity across all regions.

    **Guard Standard and Atlassian Administration:** Search functionality was unavailable for parts of the incident window. As a result, Domain Claims, usage tracking, and managed accounts were degraded for portions of the window. These services were restored to operational status by 07:33 UTC. Guard Premium was not impacted by this issue.

    **Confluence:** Search functionality was unavailable for parts of the incident window. Recovery began at 07:30 UTC as backend search clusters were restored. Full recovery, including search index replay, completed at 11:37 UTC.

    **Loom:** Search functionality, and some experiences that rely on Confluence Search (such as sharing to spaces), were unavailable for portions of the window and fully restored at 11:37 UTC.

    **Rovo and Rovo Dev:** Rovo agents remained responsive but experienced degraded functionality due to the loss of search capabilities in underlying services. They were unable to reliably return context about work items or pages. Functionality was fully restored at 11:37 UTC.

    ### ROOT CAUSE

    Atlassian products rely on OpenSearch clusters to power their search capabilities, including issue search, content search, and AI-powered search features. An infrastructure configuration change increased resource reservations (CPU and memory) for a system component that runs across our compute platform. On a subset of clusters configured for high-density workloads, the increased reservations exceeded available node capacity. This caused search workloads to be evicted; in some clusters they could not reschedule onto any available nodes, impacting search functionality across affected products.

    The change was deployed across multiple production clusters in a short time frame, limiting the opportunity to detect the capacity conflict in a smaller subset of clusters before it reached the wider fleet. Automated scaling systems attempted to recover by provisioning additional capacity, but in the worst-affected clusters this led to runaway node scaling and exhaustion of available network resources, prolonging recovery time.

    ### REMEDIAL ACTIONS PLAN & NEXT STEPS

    We understand that service disruptions impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritizing the following actions to help reduce the likelihood and impact of similar incidents in the future and to speed up recovery when issues occur:

    * **Enforce smaller deployment cohorts and longer soak periods for critical platform changes on these cluster types**: Implement smaller deployment cohorts, mandatory soak periods between environments, and automated health gates so that changes are validated on a limited set of clusters before being promoted more broadly.
    * **Strengthen automated pre-deploy validation for resource changes**: Add validation checks to ensure resource changes for system components are compatible with node capacity and reserved headroom, preventing system workloads from crowding out customer workloads.
    * **Improve post-deploy verification and alerting**: Enhance monitoring and post-deployment verification to detect patterns such as spikes in pending pods, runaway node scaling, and low pod-IP headroom closely correlated with newly rolled-out configuration.
    * **Align autoscaling behavior with capacity and safety limits**: Align autoscaling capacity calculations with node reservations, and introduce safeguards and circuit breakers to prevent runaway scaling and to enforce safe limits on node and pod IP counts.
    * **Enhance recovery automation**: Improve automation and runbooks so we can safely disable autoscaling, remove empty nodes in bulk, and restore normal operations faster across multiple clusters in parallel.

    We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability and to reduce the risk and impact of similar issues in the future.

    Thanks, Atlassian Customer Support
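
The capacity conflict at the heart of this incident, reservations that exceed what high-density nodes can actually spare, is the kind of condition the pre-deploy validation action above is meant to catch. Here is a minimal Python sketch of such a check, with hypothetical data shapes and numbers, not Atlassian's actual tooling:

```python
def validate_reservation(nodes, new_system_reservation):
    """Flag nodes where a proposed system-resource reservation would
    not leave room for the workloads already running on them.

    nodes: list of dicts with total "capacity" and current
           "workload_usage" (in abstract resource units, e.g. CPU cores).
    new_system_reservation: resources the change would set aside for
           system components on every node.
    Returns the names of nodes where applying the change would force
    workload evictions, i.e. clusters where rollout should be blocked.
    """
    conflicts = []
    for node in nodes:
        # Headroom is what the node can still give up without evicting
        # the customer workloads currently scheduled on it.
        headroom = node["capacity"] - node["workload_usage"]
        if new_system_reservation > headroom:
            conflicts.append(node["name"])
    return conflicts
```

On a densely packed node (say 15 of 16 cores in use) even a 2-core increase in system reservations fails this check, while a half-full node passes, which mirrors why only the high-density clusters were affected here.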


Major March 12, 2026

Static macro rendering is failing in Confluence Cloud

Detected by Pingoru
Mar 12, 2026, 11:42 AM UTC
Resolved
Mar 13, 2026, 12:26 AM UTC
Duration
12h 43m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
  1. investigating Mar 12, 2026, 11:42 AM UTC

    Confluence Cloud users may experience errors when accessing certain Marketplace macros. Affected customers are seeing the message "Error rendering macro 'static-macro': Page loading failed." Our team is actively investigating this issue. We will provide another update within 2 hours, or sooner.

  2. investigating Mar 12, 2026, 01:38 PM UTC

    Confluence Cloud users may experience errors when accessing certain Marketplace macros. Affected customers are seeing the message "Error rendering macro 'static-macro': Page loading failed." Our team is actively investigating this issue. We will provide another update within 2 hours, or sooner.

  3. identified Mar 12, 2026, 01:38 PM UTC

    Our team has identified the cause of this issue, and a fix is currently being deployed. We are monitoring the deployment, and we will provide another update within 2 hours, or sooner if needed.

  4. resolved Mar 13, 2026, 12:26 AM UTC

    On March 12, 2026, affected Confluence Cloud users may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.


Major March 10, 2026

Automation events delayed for some customers in the APAC region

Detected by Pingoru
Mar 10, 2026, 12:15 AM UTC
Resolved
Mar 10, 2026, 12:59 AM UTC
Duration
43m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
  1. investigating Mar 10, 2026, 12:15 AM UTC

    We are aware of customers experiencing delays with their automations within Jira, Jira Service Management, Jira Work Management, Jira Product Discovery and Confluence. Our team is investigating with urgency and we will provide an update within one hour.

  2. identified Mar 10, 2026, 12:15 AM UTC

    Our team has identified the cause of the delayed automation events and has put a hotfix in place to help restore automation performance. We will continue to monitor performance as the backlog of events is now being processed.

  3. monitoring Mar 10, 2026, 12:51 AM UTC

    The performance degradation of automations has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor performance closely to confirm stability.

  4. resolved Mar 10, 2026, 12:59 AM UTC

    On 10 March 2026 (UTC), Automation users in the APAC region may have experienced performance degradation within Jira, Jira Product Discovery, Jira Service Management, Jira Work Management, and Confluence. The issue has now been resolved, and the service is operating normally for all affected customers.


Minor February 20, 2026

Degraded performance of Cloud Products when selecting users or teams

Detected by Pingoru
Feb 20, 2026, 04:14 AM UTC
Resolved
Feb 20, 2026, 12:26 PM UTC
Duration
8h 11m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
  1. identified Feb 20, 2026, 04:14 AM UTC

    Customers using Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields. We are continuing to work towards mitigating this issue for all users and products. We anticipate our next update to be posted within 6 hours as we continue our investigation.

  2. identified Feb 20, 2026, 07:13 AM UTC

    We understand that customers using Customer Service Management, Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields. We are continuing to work towards mitigating this issue for all users and products. We anticipate our next update to be posted within 6 hours or sooner based on significant progress.

  3. identified Feb 20, 2026, 12:26 PM UTC

    We understand that customers using Customer Service Management, Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields. We are continuing to work towards mitigating this issue for all users and products. We anticipate our next update to be posted within 6 hours or sooner based on significant progress.

  4. resolved Feb 20, 2026, 12:26 PM UTC

    This issue has been resolved and all services are functional.


Major February 19, 2026

Disrupted availability of Confluence Cloud in several regions

Detected by Pingoru
Feb 19, 2026, 03:44 PM UTC
Resolved
Feb 19, 2026, 05:47 PM UTC
Duration
2h 2m
Affected: View Content
Timeline · 2 updates
  1. monitoring Feb 19, 2026, 03:44 PM UTC

    We have received and investigated reports of a partial service disruption affecting Confluence Cloud for some customers in AP South, APSE, APSE2, East, and Central. The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.

  2. resolved Feb 19, 2026, 05:47 PM UTC

    On February 19, 2026, affected Confluence Cloud users may have experienced some service disruption in the AP South, APSE, APSE2, East, and Central regions. The issue has now been resolved, and the service is operating normally for all affected customers.


Major February 18, 2026

Groups are not clickable from Groups Page in Atlassian Administration

Detected by Pingoru
Feb 18, 2026, 10:57 AM UTC
Resolved
Feb 18, 2026, 11:30 AM UTC
Duration
32m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 3 updates
  1. investigating Feb 18, 2026, 10:57 AM UTC

    Org Admins are unable to click on any group from the Groups page in Atlassian Administration (admin.atlassian.com). Admins are unable to add or remove users and app access for groups when working from admin.atlassian.com. We are currently investigating the problem. The next update will be shared in 60 minutes or sooner.

  2. monitoring Feb 18, 2026, 11:28 AM UTC

    We have identified the root cause, and a fix has been rolled out to mitigate the problem. We are monitoring progress.

  3. resolved Feb 18, 2026, 11:30 AM UTC

    We have implemented the fix and the group page in the Admin Hub is now functioning as expected.


Minor February 3, 2026

Confluence, Jira Mobile and Forge users may experience authentication issues

Detected by Pingoru
Feb 03, 2026, 10:19 AM UTC
Resolved
Feb 03, 2026, 10:50 AM UTC
Duration
30m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 3 updates
  1. investigating Feb 03, 2026, 10:19 AM UTC

    Customers using Confluence and Jira Mobile may experience disruptions in OAuth authentication flows. Forge installations and invocations might also be disrupted. Our team is actively investigating, and we will keep you informed of progress within the next 60 minutes or sooner.

  2. investigating Feb 03, 2026, 10:24 AM UTC

    Customers using Confluence and Jira Mobile may experience disruptions in OAuth authentication flows. Forge installations and invocations might also be disrupted. Our team is actively investigating, and we will keep you informed of progress within the next 60 minutes or sooner.

  3. resolved Feb 03, 2026, 10:50 AM UTC

    Our team has identified the root cause, and the issue has now been resolved; the impacted services are operating normally. We will continue to monitor them.


Looking to track Confluence downtime and outages?

Pingoru polls Confluence's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Confluence reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Confluence alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Confluence for free

5 free monitors · No credit card required