- Detected by Pingoru: Apr 13, 2026, 07:29 AM UTC
- Resolved: Apr 13, 2026, 10:17 AM UTC
- Duration: 2h 47m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
- monitoring Apr 13, 2026, 07:29 AM UTC
Our team is aware that some users were unable to log in to Atlassian products with their Atlassian accounts. While we believe this issue is now resolved, we are continuing to monitor all products and services for any ongoing impact. We are investigating with urgency and will provide an update within 1 hour.
- monitoring Apr 13, 2026, 08:33 AM UTC
Atlassian account login services are now operating as expected, and we are not observing new errors. We continue to closely monitor our systems to ensure they remain stable. We will provide another update within 60 minutes or sooner if we detect a change in status.
- resolved Apr 13, 2026, 10:17 AM UTC
On April 13, 2026, between 05:49 a.m. and 06:25 a.m. UTC, some users were unable to log in to Atlassian products with their Atlassian accounts. The underlying issue has been addressed, and authentication services have remained stable with no new impact observed.
- postmortem Apr 21, 2026, 02:33 AM UTC
### Summary

On April 13, 2026, between 05:49 and 06:29 UTC, customers experienced failures when attempting to log in, sign up, reset passwords, and complete multi-factor authentication flows across Atlassian cloud products. Approximately 90% of authentication requests failed during the peak impact window, affecting users in the US East and EU regions. The incident was mitigated within 40 minutes through manual intervention, and full service was restored by 06:29 UTC.

### **IMPACT**

* **Duration**: ~40 minutes (05:49–06:29 UTC, April 13, 2026)
* **Affected regions**: US East and EU (authentication infrastructure serves EU traffic from US East, and traffic was primarily from the EU at this time of day).
* **Affected products**: All Atlassian cloud products requiring authentication, including Jira, Confluence, Jira Service Management, and Trello.
* **Customer experience**: Users attempting to log in, sign up, reset passwords, or complete MFA flows received errors. Users already logged in with active sessions were unaffected.

### **ROOT CAUSE**

This incident had several contributing factors that combined to produce a failure the system could not recover from without manual intervention.

**The primary cause** was a recently enabled change that caused our authentication infrastructure to retry requests to a downstream identity service when those requests were slow to respond. This retry behaviour was rolled out to 100% of traffic earlier the same day. Under normal conditions it would be benign, but it meant that any slowness in the downstream service was amplified. Because multiple upstream services were also independently retrying their own failed requests, the amplification compounded into a retry storm (the sketch below illustrates the arithmetic).

**The trigger** was a burst of legitimate user traffic. A pattern of many parallel link preview requests for a single user caused a concentrated load spike on a downstream identity service, pushing its response times above the retry threshold. On its own, this kind of spike had occurred many times before and always recovered. With the retry amplification now in effect, the spike instead created a runaway feedback loop: slow responses caused retries, retries increased load, and increased load caused slower responses, preventing recovery.

The incident was mitigated by manually scaling up the downstream identity service to provide sufficient capacity to absorb the amplified load. Once scaled, the service recovered immediately, bringing authentication error rates to zero within one minute.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We are taking the following actions, designed to prevent recurrence and improve our resilience:

1. **Immediate**: The retry-on-timeout change has been disabled.
2. **Load shedding and self-healing**: We are adding load-shedding capabilities to our authentication services so that they can automatically shed excess load and self-recover during traffic spikes, without requiring manual intervention while automatic scaling catches up.
3. **Reducing request fan-out**: We are reviewing patterns where a single user action can generate many parallel downstream requests, and will introduce methods where possible to reduce the amplification potential.

We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform's reliability.

Thanks,
Atlassian Customer Support
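To make the retry arithmetic concrete, here is a minimal sketch in Python. The layer counts, retry limits, and the `RetryBudget` class below are hypothetical illustrations of the general technique, not details taken from Atlassian's systems:

```python
# Hypothetical illustration of retry amplification - not Atlassian's code.
# Each layer that independently retries a slow downstream call multiplies
# the worst-case number of requests arriving at the deepest service.

def worst_case_amplification(retries_per_hop: list[int]) -> int:
    """Worst-case requests hitting the deepest service per user request,
    assuming every attempt times out and every hop retries independently."""
    total = 1
    for retries in retries_per_hop:
        total *= 1 + retries  # one original attempt plus its retries
    return total

# Three layers, each retrying twice on timeout: 3 * 3 * 3 = 27x load.
print(worst_case_amplification([2, 2, 2]))  # -> 27

# One common mitigation is a retry *budget*: retries are only allowed
# while they remain a small fraction of recent traffic, so a latency
# spike cannot snowball into a retry storm.
class RetryBudget:
    def __init__(self, ratio: float = 0.1) -> None:
        self.ratio = ratio    # retries permitted per original request
        self.requests = 0
        self.retries = 0

    def record_request(self) -> None:
        self.requests += 1

    def can_retry(self) -> bool:
        if self.retries < self.ratio * self.requests:
            self.retries += 1
            return True
        return False  # shed the retry instead of amplifying load
```

With a 10% budget, retries add at most 10% extra load at a hop during a slowdown, rather than the 27x worst case shown above.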
- Detected by Pingoru: Apr 08, 2026, 06:55 AM UTC
- Resolved: Apr 08, 2026, 07:45 AM UTC
- Duration: 50m
Affected: Bitbucket Cloud APIs, App listing management, App Deployment, App listings, Confluence Cloud APIs, Artifactory (Maven repository), Atlassian Support contact form, App pricing, Jira Cloud APIs, Create and manage apps, Developer community, Developer documentation, Developer service desk, App submissions, Authentication and user management, Product Events, Marketplace service desk, User APIs, Category landing pages, Forge App Installation, Webhooks, Evaluations and purchases, Atlassian Support, Forge CDN (Custom UI), In-product Marketplace and app installation (Cloud), Forge Function Invocation, Web Triggers, Vulnerability management [AMS], In-product Marketplace and app installation (Server), aui-cdn.atlassian.com, Notifications, Forge App Logs, Private listings, Forge App Monitoring, Reporting APIs and dashboards, Developer console, Search, Forge direct app distribution, Vendor management, Hosted storage, Vendor Home Page, Forge CLI, End-user consent, Forge App Alerts, App Data Residency, Forge SQL, Forge Monetisation, Forge Feature Flags Evaluations
Timeline · 4 updates
- investigating Apr 08, 2026, 06:55 AM UTC
We are investigating a failure in the Forge App logs experience, impacting Forge Ecosystem Cloud developers and vendors. We will provide more details once we identify the root cause.
- identified Apr 08, 2026, 06:55 AM UTC
We have identified a failure in the Forge App logs experience, impacting Forge Ecosystem Cloud developers and vendors. Our teams are working on its resolution.
- identified Apr 08, 2026, 07:45 AM UTC
We have identified a failure in the Forge App logs experience, impacting Forge Ecosystem Cloud developers and vendors. Our teams are working on its resolution.
- resolved Apr 08, 2026, 07:45 AM UTC
The outage affecting Forge app logs between 05:00 UTC and 07:00 UTC on 8 April 2026 has been resolved, and systems are now stable. We are currently working on recovery efforts, including retrieving logs from this downtime window and validating data completeness. During this process, you may temporarily see duplicate log entries; however, this will not result in duplicate usage or additional costs. We will continue to monitor the system closely and provide further updates as needed.
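For consumers processing these recovered logs, duplicates are straightforward to drop if entries carry a stable identifier. A hypothetical sketch follows; the `entry_id` field name is an illustrative assumption, not a documented Forge log schema:

```python
# Hypothetical sketch: drop replayed duplicates from a recovered log stream.
# Assumes each entry carries a stable unique identifier ("entry_id" here is
# an illustrative field name, not a documented Forge log schema).

def dedupe_log_entries(entries):
    seen = set()
    for entry in entries:
        key = entry["entry_id"]
        if key in seen:
            continue  # replayed duplicate from the backfill window
        seen.add(key)
        yield entry

recovered = [
    {"entry_id": "a1", "message": "invocation started"},
    {"entry_id": "a1", "message": "invocation started"},  # duplicate
    {"entry_id": "a2", "message": "invocation finished"},
]
print(list(dedupe_log_entries(recovered)))  # two unique entries remain
```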
- Detected by Pingoru: Apr 08, 2026, 06:24 AM UTC
- Resolved: Apr 08, 2026, 11:43 AM UTC
- Duration: 5h 18m
Affected: Jira Service Management Web, Service Portal, Opsgenie Incident Flow, Opsgenie Alert Flow, Jira Service Management Email Requests, Authentication and User Management, Purchasing & Licensing, Signup, Automation for Jira, Assist
Timeline · 12 updates
- investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. We are investigating with urgency and will provide an update within 1 hour.
- investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. We are investigating with urgency and will provide an update within 1 hour.
- investigating Apr 08, 2026, 06:25 AM UTC
This incident is now understood to also impact search in Jira and Confluence, with additional downstream impact to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
- identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures on Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause and is working with urgency on a resolution. We will provide a further update within 1 hour.
- identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures on Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause and is working with urgency on a resolution. We will provide a further update within 1 hour.
- identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
- identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures on Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause and is working with urgency on a resolution. We will provide a further update within 1 hour.
- identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
- identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures on Customer Service Management, Asset Reports, Jira Service Management, and dashboards within Focus. Our team has identified the potential root cause and is working with urgency on a resolution. We will provide a further update within 1 hour.
- monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
- resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
- postmortem Apr 17, 2026, 04:23 PM UTC
### Summary

On April 8, 2026, between 04:46 UTC and 12:09 UTC, search functionality was unavailable or degraded across several Atlassian Cloud products, including Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management and Atlassian Administration. A configuration change increased the resources reserved for a core system component that runs on nodes in our compute platform. On a subset of clusters configured for high-density workloads, the increased reservations exceeded available node capacity, interrupting search and related experiences for affected customers. The root cause was identified and a rollback was merged at 05:42 UTC, with some systems seeing recovery by 07:33 UTC. Core search functionality was restored by approximately 08:55 UTC, and full downstream recovery completed by 12:09 UTC.

### **IMPACT**

During the impact period, some customers experienced outages or degradation in search across Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management and Atlassian Administration. Other experiences that rely on search, such as quick find, navigation, AI assistants, and dashboards, were also intermittently affected. Impacted customers may have been unable to find pages or recordings, experienced degraded performance when finding issues, received empty or delayed search results, or found that AI assistants and dashboards could not retrieve relevant context.

**Jira, Jira Service Management and Customer Service Management:** Search, and experiences that depend on it such as finding issues and agent responses in CSM, remained available but with degraded performance in fallback mode. By 12:09 UTC, search indexes and search performance were fully restored from fallback to full capacity across all regions.

**Guard Standard and Atlassian Administration:** Search functionality was unavailable for parts of the incident window. As a result, Domain Claims, usage tracking, and managed accounts were degraded for portions of the window. These services were restored to operational status by 07:33 UTC. Guard Premium was not impacted by this issue.

**Confluence:** Search functionality was unavailable for parts of the incident window. Recovery began at 07:30 UTC as backend search clusters were restored. Full recovery, including search index replay, completed at 11:37 UTC.

**Loom:** Search functionality, and some experiences that rely on Confluence search (such as sharing to spaces), was unavailable for portions of the window and fully restored at 11:37 UTC.

**Rovo and Rovo Dev:** Rovo agents remained responsive but experienced degraded functionality due to the loss of search capabilities in underlying services. They were unable to reliably return context about work items or pages. Functionality was fully restored at 11:37 UTC.

### **ROOT CAUSE**

Atlassian products rely on OpenSearch clusters to power their search capabilities, including issue search, content search, and AI-powered search features. An infrastructure configuration change increased resource reservations (CPU and memory) for a system component that runs across our compute platform. On a subset of clusters configured for high-density workloads, the increased reservations exceeded available node capacity. This caused search workloads to be evicted; in some clusters they could not be rescheduled onto any available nodes, impacting search functionality across affected products.

The change was deployed across multiple production clusters in a short time frame, limiting the opportunity to detect the capacity conflict in a smaller subset of clusters before it reached the wider fleet. Automated scaling systems attempted to recover by provisioning additional capacity, but in the worst-affected clusters this led to runaway node scaling and exhaustion of available network resources, prolonging recovery time.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We understand that service disruptions impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritizing the following actions to help reduce the likelihood and impact of similar incidents in the future and to speed up recovery when issues occur:

* **Enforce smaller deployment cohorts and longer soak periods for critical platform changes on these cluster types:** Implement smaller deployment cohorts, mandatory soak periods between environments, and automated health gates so that changes are validated on a limited set of clusters before being promoted more broadly.
* **Strengthen automated pre-deploy validation for resource changes:** Add validation checks to ensure resource changes for system components are compatible with node capacity and reserved headroom, preventing system workloads from crowding out customer workloads (a sketch of such a check appears below).
* **Improve post-deploy verification and alerting:** Enhance monitoring and post-deployment verification to detect patterns such as spikes in pending pods, runaway node scaling, and low pod-IP headroom closely correlated with new configuration being rolled out.
* **Align autoscaling behavior with capacity and safety limits:** Align autoscaling capacity calculations with node reservations and introduce safeguards and circuit breakers to prevent runaway scaling and to enforce safe limits on node and pod IP counts.
* **Enhance recovery automation:** Improve automation and runbooks so we can safely disable autoscaling, remove empty nodes in bulk, and restore normal operations faster across multiple clusters in parallel.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability and to reduce the risk and impact of similar issues in the future.

Thanks,
Atlassian Customer Support
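As a rough illustration of the pre-deploy validation idea, here is a minimal sketch in Python. The node shapes, reservation figures, and function names are hypothetical assumptions for illustration, not Atlassian's actual tooling:

```python
# Hypothetical pre-deploy capacity check - not Atlassian's actual tooling.
# Rejects a resource-reservation change if system components plus required
# workload headroom would exceed what a node can actually allocate.

from dataclasses import dataclass

@dataclass
class NodeShape:
    allocatable_cpu_m: int    # allocatable CPU in millicores
    allocatable_mem_mi: int   # allocatable memory in MiB

@dataclass
class Reservation:
    cpu_m: int
    mem_mi: int

def validate_change(node: NodeShape,
                    system_reservations: list[Reservation],
                    workload_headroom: Reservation) -> list[str]:
    """Return a list of violations; an empty list means the change fits."""
    sys_cpu = sum(r.cpu_m for r in system_reservations)
    sys_mem = sum(r.mem_mi for r in system_reservations)
    errors = []
    if sys_cpu + workload_headroom.cpu_m > node.allocatable_cpu_m:
        errors.append(f"CPU over-committed: {sys_cpu}m system + "
                      f"{workload_headroom.cpu_m}m headroom > "
                      f"{node.allocatable_cpu_m}m allocatable")
    if sys_mem + workload_headroom.mem_mi > node.allocatable_mem_mi:
        errors.append(f"Memory over-committed: {sys_mem}Mi system + "
                      f"{workload_headroom.mem_mi}Mi headroom > "
                      f"{node.allocatable_mem_mi}Mi allocatable")
    return errors

# A high-density node shape: increased system reservations leave too little
# room for the search workloads the node is expected to host.
violations = validate_change(
    NodeShape(allocatable_cpu_m=15_000, allocatable_mem_mi=60_000),
    system_reservations=[Reservation(2_000, 8_000), Reservation(1_500, 6_000)],
    workload_headroom=Reservation(12_500, 48_000),
)
for v in violations:
    print(v)  # both CPU and memory are over-committed in this example
```

Running the same comparison per cluster type would let a change that is safe on standard nodes still be blocked for high-density shapes like the ones affected here.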
- Detected by Pingoru
- Apr 08, 2026, 06:24 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 5h 18m
Affected: Customer experienceCustomer Service Management AI agentSupport websiteEmail requestCustomer Service Management spacesCustomer profilesAutomation for Jira
Timeline · 12 updates
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
The impact of this incident is now understood to also be impacting search in Jira and Confluence, as well as additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency and we will provide further update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact as a result of these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team have identified the potential root cause of the issue and our teams are working with urgency on a resolution. We will provide further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact as a result of these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team have identified the potential root cause of the issue and our teams are working with urgency on a resolution. We will provide further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact as a result of these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team have identified the potential root cause of the issue and our teams are working with urgency on a resolution. We will provide further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact as a result of these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team have identified the potential root cause of the issue and our teams are working with urgency on a resolution. We will provide further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:23 PM UTC
### Summary On April 8, 2026, between 04:46 UTC and 12:09 UTC, search functionality was unavailable or degraded across several Atlassian Cloud products, including Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management and Atlassian Administration. A configuration change increased the resources reserved for a core system component that runs on nodes in our compute platform. On a subset of clusters configured for high‑density workloads, the increased reservations exceeded available node capacity interrupting search and related experiences for affected customers. The root cause was identified and a rollback was merged at 05:42 UTC with some systems seeing recovery by 07:33 UTC**.** Core search functionality was restored approximately by 08:55 UTC, and full downstream recovery completed by 12:09 UTC. ### **IMPACT** During the impact period, some customers experienced outages or degradation in search across Jira, Confluence, Jira Service Management, Rovo, Rovo Dev, Loom, Guard Standard, Customer Service Management and Atlassian Administration. Other experiences that rely on search such as quick find, navigation, AI assistants, dashboards, were also intermittently affected during this period. Impacted customers may have been unable to find pages or recordings and experienced degraded performance in finding issues; received empty or delayed search results; or experienced AI assistants and dashboards that could not retrieve relevant context. **Jira, Jira Service Management and Customer Service Management:** Search and experiences that depend on search like finding issues and agent responses in CSM remained available but with degraded performance in fallback mode. By 12:09 UTC, search indexes and search performance was fully restored from fallback to full capacity across all regions. **Guard Standard and Atlassian Administration:** Search functionality was unavailable for parts of the incident window. As a result, Domain Claims, usage tracking, and managed accounts were degraded for portions of the window. These services were restored to operational status by 07:33 UTC. Guard Premium was not impacted by this issue. **Confluence:** Search functionality was unavailable for parts of the incident window. Recovery began at 07:30 UTC as backend search clusters were restored. Full recovery, including search index replay, completed at 11:37 UTC. **Loom:** Search functionality and some experiences that rely on Confluence Search, such as sharing to spaces\) was unavailable for portions of the window and fully restored at 11:37 UTC. **Rovo and Rovo Dev:** Rovo agents remained responsive but experienced degraded functionality due to loss of search capabilities in underlying services. They were unable to reliably return context about work items or pages. Functionality was fully restored at 11:37 UTC. ### **ROOT CAUSE** Atlassian products rely on OpenSearch clusters to power their search capabilities including issue search, content search, and AI-powered search features. An infrastructure configuration change increased resource reservations \(CPU & Memory\) for a system component that runs across our compute platform. On a subset of clusters configured for high-density workloads, the increased reservations exceeded available node capacity. This caused search workloads to be evicted and, in some clusters, could not reschedule onto any available nodes impacting search functionality across affected products. 
The change was deployed across multiple production clusters in a short time frame, limiting the opportunity to detect the capacity conflict in a smaller subset of clusters before it reached the wider fleet. Automated scaling systems attempted to recover by provisioning additional capacity but in the worst‑affected clusters this led to runaway node scaling and exhaustion of available network resources, prolonging recovery time. ### **REMEDIAL ACTIONS PLAN & NEXT STEPS** We understand that service disruptions impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritizing the following actions to help reduce the likelihood and impact of similar incidents in the future and to speed up recovery when issues occur: * **Enforce smaller deployment cohorts and larger soak for critical platform changes for these cluster types** Implement smaller deployment cohorts, mandatory soak periods between environments, and automated health gates so that changes are validated on a limited set of clusters before being promoted more broadly. * **Strengthen automated pre‑deploy validation for resource changes** Add validation checks to ensure resource changes for system components are compatible with node capacity and reserved headroom, preventing system workloads from crowding out customer workloads. * **Improve post‑deploy verification and alerting** Enhance monitoring and post‑deployment verification to detect patterns such as spikes in pending pods, runaway node scaling, and low pod‑IP headroom closely correlated with new configuration being rolled out. * **Align autoscaling behavior with capacity and safety limits** Align autoscaling capacity calculations with node reservations and introduce safeguards and circuit breakers to prevent runaway scaling and to enforce safe limits on node and pod IP counts. * **Enhance recovery automation** Improve automation and runbooks so we can safely disable autoscaling, remove empty nodes in bulk, and restore normal operations faster across multiple clusters in parallel. We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability and to reduce the risk and impact of similar issues in future. Thanks, Atlassian Customer Support
- Detected by Pingoru
- Apr 08, 2026, 06:24 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 5h 18m
Affected: Atlassian product integrations, Viewing content, Create and edit, Data upload and download, Data visualization, Notifications, Search, Authentication and User management, Purchasing & Licensing, Signup
Timeline · 12 updates
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
This incident is now understood to also be impacting search in Jira and Confluence, with additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:23 PM UTC
This postmortem is identical to the April 8, 2026 search-incident report reproduced in full earlier on this page.
- Detected by Pingoru
- Apr 08, 2026, 06:01 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 5h 41m
Affected: Knowledge Discovery, Search, 3P Connectors, Chat, Agents, Authentication and User management, Purchasing & Licensing, Signup, Studio, MCP
Timeline · 13 updates
-
investigating Apr 08, 2026, 06:01 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
This incident is now understood to also be impacting search in Jira and Confluence, with additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:24 PM UTC
This postmortem is identical to the April 8, 2026 search-incident report reproduced in full earlier on this page.
- Detected by Pingoru
- Apr 08, 2026, 06:01 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 5h 41m
Affected: Domain Claims, Data Classification, SAML-based SSO, Data Security Policies, User Provisioning, Guard Detect, Account Management, Audit Logs, Signup, API tokens
Timeline · 13 updates
-
investigating Apr 08, 2026, 06:01 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
This incident is now understood to also be impacting search in Jira and Confluence, with additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:24 PM UTC
This postmortem is identical to the April 8, 2026 search-incident report reproduced in full earlier on this page.
- Detected by Pingoru
- Apr 08, 2026, 05:41 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 6h 2m
Affected: Viewing content, Create and edit, Authentication and User Management, Search, Notifications, Administration, Marketplace, Mobile, Purchasing & Licensing, Signup, Automation for Jira
Timeline · 14 updates
-
investigating Apr 08, 2026, 05:41 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
This incident is now understood to also be impacting search in Jira and Confluence, with additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:25 PM UTC
This postmortem is identical to the April 8, 2026 search-incident report reproduced in full earlier on this page.
- Detected by Pingoru
- Apr 08, 2026, 05:41 AM UTC
- Resolved
- Apr 08, 2026, 11:43 AM UTC
- Duration
- 6h 2m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 14 updates
-
investigating Apr 08, 2026, 05:41 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:01 AM UTC
This incident is now understood to also be impacting search in Jira and Confluence, with additional downstream impacts to Rovo Chat, User Management, Administration and Guard. Our team is continuing to investigate with urgency, and we will provide a further update within 1 hour.
-
investigating Apr 08, 2026, 06:24 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
investigating Apr 08, 2026, 06:25 AM UTC
Our team is aware that users are currently experiencing errors while attempting to search in Confluence and Jira. Our team is investigating with urgency and will be providing an update within 1 hour.
-
identified Apr 08, 2026, 06:25 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 07:40 AM UTC
Confluence and Jira search reliability is improving, and we expect full recovery shortly; we will continue to closely monitor the services to ensure they remain stable.
-
identified Apr 08, 2026, 08:33 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
identified Apr 08, 2026, 08:33 AM UTC
We have restored core search functionality and services are operating again, but some customers may still experience delays when searching for data changed within the last hour while new data continues to be indexed. We are actively working to complete reindexing and will update this page as performance fully recovers.
-
identified Apr 08, 2026, 09:03 AM UTC
We are also aware of impact from these search failures to Customer Service Management, Asset Reports, Jira Service Management and dashboards within Focus. Our team has identified the potential root cause of the issue and is working with urgency on a resolution. We will provide a further update within 1 hour.
-
monitoring Apr 08, 2026, 09:03 AM UTC
The issue has been resolved, and services are now operating normally. Some customers may still experience delays when searching for data changed within the last hour, while new data continues to be indexed. We'll continue to monitor closely to confirm stability.
-
resolved Apr 08, 2026, 11:43 AM UTC
The issue has now been resolved, and the service is operating normally for all affected customers.
-
postmortem Apr 17, 2026, 04:24 PM UTC
This postmortem is identical to the April 8, 2026 search-incident report reproduced in full earlier on this page.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 04:29 AM UTC
- Resolved
- Apr 08, 2026, 07:34 AM UTC
- Duration
- 3h 4m
Affected: Purchasing and Licensing, Code Review, Code Review, Signup, Administration, Rovo Dev CLI, Rovo Dev in VS Code
Timeline · 4 updates
-
identified Apr 08, 2026, 04:29 AM UTC
We are aware of an issue that is causing Rovo Dev code review to be unavailable for users. Our team has identified the cause of this issue and is working with urgency to resolve it, and we will provide an update within 1 hour.
-
identified Apr 08, 2026, 05:22 AM UTC
Our team is continuing to work with urgency on restoring the Rovo Dev code review functionality. We will provide a further update within 1 hour, or sooner if we see reviews beginning to run successfully before then.
-
identified Apr 08, 2026, 06:33 AM UTC
Our team has now identified the root cause of this issue and is deploying a change to resolve it. We will provide a further update within 1 hour.
-
resolved Apr 08, 2026, 07:34 AM UTC
This issue has been resolved and Rovo Dev code review is now working as expected.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 08:46 PM UTC
- Resolved
- Apr 02, 2026, 10:36 PM UTC
- Duration
- 1h 49m
Affected: Bitbucket Cloud APIs, App listing management, App Deployment, App listings, Confluence Cloud APIs, Artifactory (Maven repository), Atlassian Support contact form, App pricing, Jira Cloud APIs, Create and manage apps, Developer community, Developer documentation, Developer service desk, App submissions, Authentication and user management, Product Events, Marketplace service desk, User APIs, Category landing pages, Forge App Installation, Webhooks, Evaluations and purchases, Atlassian Support, Forge CDN (Custom UI), In-product Marketplace and app installation (Cloud), Forge Function Invocation, Web Triggers, Vulnerability management [AMS], In-product Marketplace and app installation (Server), aui-cdn.atlassian.com, Notifications, Forge App Logs, Private listings, Forge App Monitoring, Reporting APIs and dashboards, Developer console, Search, Forge direct app distribution, Vendor management, Hosted storage, Vendor Home Page, Forge CLI, End-user consent, Forge App Alerts, App Data Residency, Forge SQL, Forge Monetisation
Timeline · 2 updates
-
investigating Apr 02, 2026, 08:46 PM UTC
We are actively investigating reports of a partial service disruption affecting Forge Remote applications integrating with Jira. We will share updates here as more information becomes available.
-
resolved Apr 02, 2026, 10:36 PM UTC
On April 2, 2026, affected users may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 01:49 AM UTC
- Resolved
- Apr 02, 2026, 02:39 AM UTC
- Duration
- 50m
Affected: Support Portal, Ticketing, Knowledge Base, Community, API Docs, Public Issues Tracker, Training, Blogs, Careers, Content Delivery, Search, Downloads Access, Preferences Center
Timeline · 3 updates
-
investigating Apr 02, 2026, 01:49 AM UTC
Our team is investigating an issue causing 503 errors to be displayed when attempting to load confluence.atlassian.com. We are treating this with urgency and will provide an update within 1 hour.
-
monitoring Apr 02, 2026, 02:23 AM UTC
We are now seeing recovery across confluence.atlassian.com and ja.confluence.atlassian.com sites. We are continuing to monitor the recovery and will update when we are satisfied that this incident is fully resolved.
-
resolved Apr 02, 2026, 02:39 AM UTC
Our team has successfully recovered all services on confluence.atlassian.com and ja.confluence.atlassian.com, and these sites are now performing as expected. This incident is now resolved. We apologize for any inconvenience this issue may have caused.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 07:59 AM UTC
- Resolved
- Mar 24, 2026, 08:44 AM UTC
- Duration
- 45m
Affected: Bitbucket Cloud APIs, App listing management, App Deployment, App listings, Confluence Cloud APIs, Artifactory (Maven repository), Atlassian Support contact form, App pricing, Jira Cloud APIs, Create and manage apps, Developer community, Developer documentation, Developer service desk, App submissions, Authentication and user management, Product Events, Marketplace service desk, User APIs, Category landing pages, Forge App Installation, Webhooks, Evaluations and purchases, Atlassian Support, Forge CDN (Custom UI), In-product Marketplace and app installation (Cloud), Forge Function Invocation, Web Triggers, Vulnerability management [AMS], In-product Marketplace and app installation (Server), aui-cdn.atlassian.com, Notifications, Forge App Logs, Private listings, Forge App Monitoring, Reporting APIs and dashboards, Developer console, Search, Forge direct app distribution, Vendor management, Hosted storage, Vendor Home Page, Forge CLI, End-user consent, Forge App Alerts, App Data Residency, Forge SQL, Forge Monetisation
Timeline · 2 updates
-
identified Mar 24, 2026, 07:59 AM UTC
We are receiving reports of degraded performance on the Connect APIs impacting Connect apps. We have identified the cause and are rolling out a fix now. We are monitoring the deployment and will provide another update within two hours, or sooner.
-
resolved Mar 24, 2026, 08:44 AM UTC
On March 24, 2026, between 04:14 AM UTC and 08:40 AM UTC, Connect apps using the Connect APIs experienced performance degradation. The issue has now been resolved, and the service is operating normally for all affected customers.
Read the full incident report →
- Detected by Pingoru
- Mar 18, 2026, 11:32 AM UTC
- Resolved
- Mar 19, 2026, 04:08 AM UTC
- Duration
- 16h 35m
Affected: Jira Service Management Web, Service Portal, Opsgenie Incident Flow, Opsgenie Alert Flow, Opsgenie Incident Flow, Opsgenie Alert Flow, Jira Service Management Email Requests, Authentication and User Management, Purchasing & Licensing, Signup, Automation for Jira, Assist
Timeline · 11 updates
-
investigating Mar 18, 2026, 11:32 AM UTC
We are investigating reports of intermittent Automation trigger failures for some Jira Cloud customers. We will provide more details once we identify the root cause.
-
investigating Mar 18, 2026, 11:32 AM UTC
Some customers may experience automations not triggering for specific actions such as work item creation, updates, and comment additions. However, manually triggered automations are operating as expected.
-
investigating Mar 18, 2026, 12:24 PM UTC
Customers using Jira and Jira Service Management are currently experiencing an issue where automations do not trigger for actions such as work item creation, field updates, and comment additions, although manual triggers remain functional. We'll share updates here as more information is available.
-
investigating Mar 18, 2026, 01:07 PM UTC
We continue to investigate the issue, and the next communication will be issued in 60 minutes, or sooner if a significant milestone is achieved.
-
investigating Mar 18, 2026, 02:01 PM UTC
We continue to investigate the issue, and the next communication will be issued in 60 minutes, or sooner if a significant milestone is achieved.
-
monitoring Mar 18, 2026, 02:03 PM UTC
We have identified the cause of the issue and deployed changes that have restored automation processing for Jira and Jira Service Management. Automations are now triggering as expected. We will continue to monitor closely to confirm stability.
-
monitoring Mar 18, 2026, 02:53 PM UTC
We are replaying historical events for affected customers to rerun Automation rules that failed to execute between March 18, 10:00 AM UTC and 1:45 PM UTC. We are closely monitoring the results of this replay and will post an update in a few hours when this work is complete.
-
monitoring Mar 18, 2026, 09:44 PM UTC
We have successfully completed the replay for most regions, with the remainder expected to finish within the next 3 hours.
-
monitoring Mar 18, 2026, 11:44 PM UTC
Processing of the historical events is still occurring, with our team monitoring progress closely. We will provide a further update either when these tasks are fully completed or within 4 hours, whichever is earlier.
-
resolved Mar 19, 2026, 04:08 AM UTC
The vast majority of customers' historical events have now been processed successfully. A small group of customers with very high automation volumes is still being processed, and this work is actively underway. We expect these remaining processes to be completed within approximately two hours, if not sooner.
-
postmortem Apr 02, 2026, 07:07 AM UTC
### Summary

On March 18, 2026, between 10:06 and 13:47 UTC, customers experienced delays in Automation rules executing when triggered by Jira events such as Work Item creation, Work Item updates, and comments. Automation rules using other trigger types, including scheduled triggers, manual triggers, and incoming webhooks, continued to operate normally.

The incident was caused by an internal configuration change that inadvertently disabled the event delivery pathway used to notify the automation platform of changes in Jira. The incident was identified through customer support tickets and verified through our monitoring; engineering teams were engaged for resolution. Once the root cause was identified, the configuration was corrected and normal automation processing resumed.

Following restoration, the delayed events began flowing to the automation platform for processing. This backlog took approximately 14 hours to fully clear. During this recovery window, some automation rules ran on Work Items whose data had changed due to user actions, customer mitigation, or other causes. Since rule execution usually follows triggering events closely, many customer rules assume immediate execution on the Work Item. The delay allowed other changes to occur in the meantime, such as updates or actions customers took to mitigate the incident's impact, causing unintended consequences when the rule eventually executed.

### **IMPACT**

During the impact window, Jira Cloud customers were unable to rely on timely execution of event-triggered automation rules. Rules that depended on Jira Work Item events, including Work Item created, Work Item updated, comment added, sprint changes, and version changes, ran with significant delays. This affected automated workflows responsible for Work Item routing, notifications, field updates, and other rule-driven actions throughout the outage. Automation rules that used scheduled, manual, or incoming webhook triggers remained unaffected.

Following mitigation, a recovery period of approximately 14 hours was required to process the backlog of delayed events. During this window, processing delays peaked at approximately 12 hours from event occurrence to rule completion. In some cases, rules executed against Work Item data several hours after the trigger occurred, which caused problems because the rules had been built with the expectation that little time would pass between trigger and execution. This resulted in Work Items ending up in an unintended state, especially where customers had intervened manually in the meantime.

### **ROOT CAUSE**

The incident was caused by a configuration change to an internal feature flag used to control event delivery to the automation platform. A code change had been prepared to remove a feature flag from the event delivery system; however, this code change had not yet been deployed to production. When the feature flag was subsequently retired through our feature flag management system, the retirement process relied on usage telemetry that incorrectly indicated the flag was no longer active. This created a blind spot in which the flag appeared unused when it was in fact being actively evaluated. When the flag was retired, the event delivery system interpreted its absence as an instruction to stop delivering Jira events to the automation platform, causing all event-triggered automation rules to stop firing.
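To illustrate the root cause pattern, here is a minimal sketch in Python; the flag name, flag store, and telemetry hook are hypothetical, not Atlassian's feature flag system. It shows how a fail-closed default turns a retired (and therefore missing) flag into a silent kill switch, alongside one safer variant.

```python
# Illustrative sketch only: all names and APIs here are invented for
# this example. The failure mode is the fail-closed default below.

FLAG_STORE: dict[str, bool] = {}  # the flag has been retired, so the key is gone

def flag_enabled(name: str, default: bool) -> bool:
    return FLAG_STORE.get(name, default)

def publish(event: dict) -> None:
    print("delivered:", event)

def record_flag_evaluation(name: str) -> None:
    pass  # in a real system this would emit a usage metric (hypothetical hook)

def deliver_event(event: dict) -> None:
    # Fail-closed default: once the flag is retired, its absence reads as
    # "disabled", and event delivery silently stops with no error or alert.
    if not flag_enabled("automation-event-delivery", default=False):
        return  # events dropped, as during the incident window
    publish(event)

def deliver_event_safely(event: dict) -> None:
    # Safer pattern: record every evaluation so retirement tooling sees real
    # usage, and default a missing flag to the intended steady-state behaviour.
    record_flag_evaluation("automation-event-delivery")
    if flag_enabled("automation-event-delivery", default=True):
        publish(event)

deliver_event({"type": "work-item-created"})         # prints nothing: event dropped
deliver_event_safely({"type": "work-item-created"})  # delivered
```

The safer variant also closes the telemetry blind spot described above, since every evaluation is recorded regardless of which branch the flag check takes.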
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We understand that outages impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritising the following actions to help reduce the likelihood and impact of similar incidents in the future:

* **Strengthen feature flag lifecycle safety controls**
  * Address the telemetry gap by removing feature-flagged code from the hot path of our event processing framework, where flag-usage telemetry was being suppressed, ensuring we have reliable telemetry on feature flag use across our systems. This will prevent inadvertent archival of active feature flags in future.
  * Refine feature flag retirement processes so that flags cannot be retired without verifying actual production usage, independent of standard telemetry signals.
* **Improve event delivery monitoring and alerting**
  * Refine monitoring to detect drops in automation event delivery rates within minutes, and add automated alerting on automation execution volume anomalies to enable faster detection of disruptions (a minimal sketch of such a detector appears below).
* **Improve our ability to clear delayed events faster, and deliver controls that let customers choose alternative workflows for events based on delay**
  * Refine replay infrastructure to recover faster from the pent-up backlog that delays can create.
  * Provide workflow components that allow users to decide what should happen for varying delays between the triggering event and rule execution, reducing the risk of unintended changes during recovery.

We recognise the importance of Jira Automation to our customers' workflows and are committed to ongoing improvements to the reliability and resilience of our platform. We sincerely apologize for the disruption this incident caused, and we will continue to invest in measures that support a stable and dependable service. Thanks, Atlassian Customer Support
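As a rough illustration of the monitoring improvement named in the actions above, here is a minimal sketch with assumed thresholds, window sizes, and class names (not Atlassian's monitoring stack): a rolling-baseline detector that flags a sustained drop in per-minute event delivery within a few minutes.

```python
# Illustrative sketch only (assumed thresholds and metric semantics).
from collections import deque

class DeliveryRateMonitor:
    """Alert when per-minute event delivery drops well below a rolling baseline."""

    def __init__(self, baseline_window: int = 60, drop_ratio: float = 0.5,
                 sustained_minutes: int = 3, min_history: int = 10):
        self.baseline = deque(maxlen=baseline_window)  # recent per-minute counts
        self.drop_ratio = drop_ratio
        self.sustained_minutes = sustained_minutes
        self.min_history = min_history
        self.sustained = 0

    def observe(self, events_this_minute: int) -> bool:
        """Record one minute of throughput; return True when an alert should fire."""
        if len(self.baseline) >= self.min_history:
            avg = sum(self.baseline) / len(self.baseline)
            if events_this_minute < avg * self.drop_ratio:
                self.sustained += 1  # another minute well below baseline
            else:
                self.sustained = 0   # recovered; reset the streak
        self.baseline.append(events_this_minute)
        return self.sustained >= self.sustained_minutes

monitor = DeliveryRateMonitor()
traffic = [1000] * 15 + [40, 35, 30]  # healthy baseline, then a sudden collapse
for minute, count in enumerate(traffic):
    if monitor.observe(count):
        print(f"alert: delivery rate collapsed at minute {minute}")
```

A production version would also guard the baseline against pollution by degraded samples, for example by freezing it while an alert condition is active.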
Read the full incident report →
- Detected by Pingoru
- Mar 18, 2026, 11:29 AM UTC
- Resolved
- Mar 19, 2026, 04:08 AM UTC
- Duration
- 16h 38m
Affected: Viewing content, Create and edit, Authentication and User Management, Search, Notifications, Administration, Marketplace, Mobile, Purchasing & Licensing, Signup, Automation for Jira
Timeline · 11 updates
-
investigating Mar 18, 2026, 11:29 AM UTC
We are investigating reports of intermittent Automation trigger failures for some Jira Cloud customers. We will provide more details once we identify the root cause.
-
investigating Mar 18, 2026, 11:32 AM UTC
Some customers may experience automations not triggering for specific actions such as work item creation, updates, and comment additions. However, manually triggered automations are operating as expected.
-
investigating Mar 18, 2026, 12:24 PM UTC
Customers using Jira and Jira Service Management are currently experiencing an issue where automations do not trigger for actions such as work item creation, field updates, and comment additions, although manual triggers remain functional. We'll share updates here as more information is available.
-
investigating Mar 18, 2026, 01:07 PM UTC
We continue to investigate the issue, and the next communication will be issued in 60 minutes, or sooner if a significant milestone is achieved.
-
investigating Mar 18, 2026, 02:01 PM UTC
We continue to investigate the issue, and the next communication will be issued in 60 minutes, or sooner if a significant milestone is achieved.
-
monitoring Mar 18, 2026, 02:03 PM UTC
We have identified the cause of the issue and deployed changes that have restored automation processing for Jira and Jira Service Management. Automations are now triggering as expected. We will continue to monitor closely to confirm stability.
-
monitoring Mar 18, 2026, 02:53 PM UTC
We are replaying historical events for affected customers to rerun Automation rules that failed to execute between March 18, 10:00 AM UTC and 1:45 PM UTC. We are closely monitoring the results of this replay and will post an update in a few hours when this work is complete.
-
monitoring Mar 18, 2026, 09:44 PM UTC
We have successfully completed the replay for most regions, with the remainder expected to finish within the next 3 hours.
-
monitoring Mar 18, 2026, 11:44 PM UTC
Processing of the historical events is still occurring, with our team monitoring progress closely. We will provide a further update either when these tasks are fully completed or within 4 hours, whichever is earlier.
-
resolved Mar 19, 2026, 04:08 AM UTC
The vast majority of customers' historical events have now been processed successfully. A small group of customers with very high automation volumes is still being processed, and this work is actively underway. We expect these remaining processes to be completed within approximately two hours, if not sooner.
-
postmortem Apr 02, 2026, 07:08 AM UTC
### Summary

On March 18, 2026, between 10:06 and 13:47 UTC, customers experienced delays in Automation rules executing when triggered by Jira events such as Work Item creation, Work Item updates, and comments. Automation rules using other trigger types, including scheduled triggers, manual triggers, and incoming webhooks, continued to operate normally.

The incident was caused by an internal configuration change that inadvertently disabled the event delivery pathway used to notify the automation platform of changes in Jira. The incident was identified through customer support tickets and verified through our monitoring; engineering teams were engaged for resolution. Once the root cause was identified, the configuration was corrected and normal automation processing resumed.

Following restoration, the delayed events began flowing to the automation platform for processing. This backlog took approximately 14 hours to fully clear. During this recovery window, some automation rules ran on Work Items whose data had changed due to user actions, customer mitigation, or other causes. Since rule execution usually follows triggering events closely, many customer rules assume immediate execution on the Work Item. The delay allowed other changes to occur in the meantime, such as updates or actions customers took to mitigate the incident's impact, causing unintended consequences when the rule eventually executed.

### **IMPACT**

During the impact window, Jira Cloud customers were unable to rely on timely execution of event-triggered automation rules. Rules that depended on Jira Work Item events, including Work Item created, Work Item updated, comment added, sprint changes, and version changes, ran with significant delays. This affected automated workflows responsible for Work Item routing, notifications, field updates, and other rule-driven actions throughout the outage. Automation rules that used scheduled, manual, or incoming webhook triggers remained unaffected.

Following mitigation, a recovery period of approximately 14 hours was required to process the backlog of delayed events. During this window, processing delays peaked at approximately 12 hours from event occurrence to rule completion. In some cases, rules executed against Work Item data several hours after the trigger occurred, which caused problems because the rules had been built with the expectation that little time would pass between trigger and execution. This resulted in Work Items ending up in an unintended state, especially where customers had intervened manually in the meantime.

### **ROOT CAUSE**

The incident was caused by a configuration change to an internal feature flag used to control event delivery to the automation platform. A code change had been prepared to remove a feature flag from the event delivery system; however, this code change had not yet been deployed to production. When the feature flag was subsequently retired through our feature flag management system, the retirement process relied on usage telemetry that incorrectly indicated the flag was no longer active. This created a blind spot in which the flag appeared unused when it was in fact being actively evaluated. When the flag was retired, the event delivery system interpreted its absence as an instruction to stop delivering Jira events to the automation platform, causing all event-triggered automation rules to stop firing.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We understand that outages impact your productivity. In addition to our existing testing and preventative processes, Atlassian is prioritising the following actions to help reduce the likelihood and impact of similar incidents in the future:

* **Strengthen feature flag lifecycle safety controls**
  * Address the telemetry gap by removing feature-flagged code from the hot path of our event processing framework, where flag-usage telemetry was being suppressed, ensuring we have reliable telemetry on feature flag use across our systems. This will prevent inadvertent archival of active feature flags in future.
  * Refine feature flag retirement processes so that flags cannot be retired without verifying actual production usage, independent of standard telemetry signals.
* **Improve event delivery monitoring and alerting**
  * Refine monitoring to detect drops in automation event delivery rates within minutes, and add automated alerting on automation execution volume anomalies to enable faster detection of disruptions.
* **Improve our ability to clear delayed events faster, and deliver controls that let customers choose alternative workflows for events based on delay**
  * Refine replay infrastructure to recover faster from the pent-up backlog that delays can create.
  * Provide workflow components that allow users to decide what should happen for varying delays between the triggering event and rule execution, reducing the risk of unintended changes during recovery.

We recognise the importance of Jira Automation to our customers' workflows and are committed to ongoing improvements to the reliability and resilience of our platform. We sincerely apologize for the disruption this incident caused, and we will continue to invest in measures that support a stable and dependable service. Thanks, Atlassian Customer Support
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 10:59 AM UTC
- Resolved
- Mar 17, 2026, 11:37 AM UTC
- Duration
- 38m
Affected: Knowledge Discovery, Search, 3P Connectors, Chat, Agents, Authentication and User management, Purchasing & Licensing, Signup, Studio, MCP
Timeline · 3 updates
-
investigating Mar 17, 2026, 10:59 AM UTC
Impact: The outage is impacting our Forge apps and the Developer Console, specifically due to the cs-apps APIs, resulting in 500 error messages for customers attempting to use these services. This incident is disrupting creating, updating, and deploying applications, although direct app usage is not affected. The Developer Console is similarly impacted.
Current Status: Our teams are actively working to restore full functionality. The current focus is on addressing a database connectivity issue that appears to be causing the errors. Investigation is ongoing, with efforts concentrated on isolating the root cause through various technical checks.
Next Steps: The incident team is prioritising the investigation of database connectivity to rectify the errors impacting the cs-apps APIs.
-
investigating Mar 17, 2026, 10:59 AM UTC
We are investigating an incident impacting Forge app management and related developer tooling. Customers may experience errors when attempting to:
- Install, upgrade, or deploy Forge apps
- Use Forge app tunnelling (particularly newly created tunnels and tunnels older than 30 minutes)
- Access or manage apps via the Atlassian Developer Console
In addition, some underlying app platform capabilities used by Forge and related services are affected, including:
- Certain app data residency operations
- Rovo MCP consent flows and other app management operations that rely on the app platform
At this time, existing Forge apps continue to run, and app invocations are not impacted. Our engineering teams are actively investigating the issue and working to restore normal operation. We will provide the next update in 60 minutes, or sooner if we have significant progress to share.
-
resolved Mar 17, 2026, 11:37 AM UTC
On March 17th, 2026, between 9:05 AM UTC and 11:10 AM UTC, Forge app management and related developer tooling experienced some service disruption. The issue has now been resolved, and the service is operating normally for affected customers.
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 10:05 AM UTC
- Resolved
- Mar 17, 2026, 11:37 AM UTC
- Duration
- 1h 32m
Affected: Bitbucket Cloud APIs, App listing management, App Deployment, App listings, Confluence Cloud APIs, Artifactory (Maven repository), Atlassian Support contact form, App pricing, Jira Cloud APIs, Create and manage apps, Developer community, Developer documentation, Developer service desk, App submissions, Authentication and user management, Product Events, Marketplace service desk, User APIs, Category landing pages, Forge App Installation, Webhooks, Evaluations and purchases, Atlassian Support, Forge CDN (Custom UI), In-product Marketplace and app installation (Cloud), Forge Function Invocation, Web Triggers, Vulnerability management [AMS], In-product Marketplace and app installation (Server), aui-cdn.atlassian.com, Notifications, Forge App Logs, Private listings, Forge App Monitoring, Reporting APIs and dashboards, Developer console, Search, Forge direct app distribution, Vendor management, Hosted storage, Vendor Home Page, Forge CLI, End-user consent, Forge App Alerts, App Data Residency, Forge SQL
Timeline · 4 updates
-
investigating Mar 17, 2026, 10:05 AM UTC
We are actively investigating reports of performance degradation affecting Forge apps and the Developer Console, and will share updates here as more information becomes available.
-
investigating Mar 17, 2026, 10:34 AM UTC
We are continuing to investigate this issue.
-
investigating Mar 17, 2026, 10:59 AM UTC
We are investigating an incident impacting Forge app management and related developer tooling. Customers may experience errors when attempting to:
- Install, upgrade, or deploy Forge apps
- Use Forge app tunnelling (particularly newly created tunnels and tunnels older than 30 minutes)
- Access or manage apps via the Atlassian Developer Console
In addition, some underlying app platform capabilities used by Forge and related services are affected, including:
- Certain app data residency operations
- Rovo MCP consent flows and other app management operations that rely on the app platform
At this time, existing Forge apps continue to run, and app invocations are not impacted. Our engineering teams are actively investigating the issue and working to restore normal operation. We will provide the next update in 60 minutes, or sooner if we have significant progress to share.
-
resolved Mar 17, 2026, 11:37 AM UTC
On March 17th, 2026, between 9:05 AM UTC and 11:10 AM UTC, Forge app management and related developer tooling experienced some service disruption. The issue has now been resolved, and the service is operating normally for affected customers.
Read the full incident report →
- Detected by Pingoru
- Mar 12, 2026, 11:42 AM UTC
- Resolved
- Mar 13, 2026, 12:26 AM UTC
- Duration
- 12h 43m
Affected: View Content, iOS App, Create and Edit, Android App, Comments, Authentication and User Management, Search, Administration, Notifications, Marketplace Apps, Purchasing & Licensing, Signup, Confluence Automations, Cloud to Cloud Migrations - Copy Product Data, Server to Cloud Migrations - Copy Product Data
Timeline · 4 updates
-
investigating Mar 12, 2026, 11:42 AM UTC
Confluence Cloud users may experience errors when accessing certain Marketplace macros. Affected customers are seeing the message "Error rendering macro 'static-macro': Page loading failed". Our team is actively investigating this issue. We will provide another update within 2 hours, or sooner.
-
investigating Mar 12, 2026, 01:38 PM UTC
Confluence Cloud users may experience errors when accessing certain Marketplace macros. Affected customers are seeing the message "Error rendering macro 'static-macro': Page loading failed". Our team is actively investigating this issue. We will provide another update within 2 hours, or sooner.
-
identified Mar 12, 2026, 01:38 PM UTC
Our team has identified the cause of this issue, and a fix is currently being deployed. We are monitoring the deployment, and we will provide another update within 2 hours, or sooner if needed.
-
resolved Mar 13, 2026, 12:26 AM UTC
On March 12, 2026, affected Confluence Cloud users may have experienced some service disruption. The issue has now been resolved, and the service is operating normally for all affected customers.
Read the full incident report →