UiPath Outage History

UiPath is up right now

There have been 65 UiPath outages since February 23, 2026, totaling 70h 32m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.uipath.com

Major March 10, 2026

Multiple Regions - Maestro, Studio Web - Degraded Agentic Process Loading

Detected by Pingoru
Mar 10, 2026, 03:46 PM UTC
Resolved
Mar 10, 2026, 09:54 PM UTC
Duration
6h 8m
Affected: Studio Web, Agentic Orchestration
Timeline · 7 updates
  1. investigating Mar 10, 2026, 03:46 PM UTC

    We are investigating reports of an outage impacting Maestro Agentic Process loading in Studio Web across multiple regions. Impact: Users may be unable to load the Maestro Agentic Process in Studio Web. Next update: Our teams are working to understand the cause and scope and will share updates as available. Additional notes: We’re seeing a rise in errors when loading the Maestro Agentic Process in Studio Web. Clearing the browser cache may temporarily resolve the issue while we work on a long-term fix.

  2. investigating Mar 10, 2026, 04:42 PM UTC

    We are continuing to investigate reports of an outage impacting Maestro Agentic Process loading in Studio Web across multiple regions. Impact: Users may be unable to load the Maestro Agentic Process in Studio Web. Next update: Our teams are continuing to investigate the cause and scope and will share updates as available. Additional notes: We’re seeing a rise in errors when loading the Maestro Agentic Process in Studio Web. Clearing the browser cache may temporarily resolve the issue while we work on a long-term fix.

  3. investigating Mar 10, 2026, 05:39 PM UTC

    We are continuing to investigate reports of an outage impacting Maestro Agentic Process loading in Studio Web across multiple regions. Impact: Users may be unable to load the Maestro Agentic Process in Studio Web. Next update: Our teams are continuing to investigate the cause and scope and will share updates as available. Additional notes: We’re seeing a rise in errors when loading the Maestro Agentic Process in Studio Web. Clearing the browser cache may temporarily resolve the issue while we work on a long-term fix.

  4. investigating Mar 10, 2026, 06:36 PM UTC

    We are continuing to investigate reports of an outage impacting Maestro Agentic Process loading in Studio Web across multiple regions. Impact: Users may be unable to load the Maestro Agentic Process in Studio Web. Next update: Our teams are continuing to investigate the cause and scope and will share updates as available. Additional notes: We’re seeing a rise in errors when loading the Maestro Agentic Process in Studio Web. Clearing the browser cache may temporarily resolve the issue while we work on a long-term fix.

  5. identified Mar 10, 2026, 09:25 PM UTC

    We are continuing to investigate reports of an outage impacting Maestro Agentic Process loading in Studio Web across multiple regions. Impact: Users may be unable to load the Maestro Agentic Process in Studio Web. Current Status: We have identified the issue and are currently implementing a fix. Workaround: Clearing the browser cache may temporarily resolve the issue while we work toward a permanent resolution. Next Update: Our teams are actively working on the fix and will share further updates as they become available.

  6. resolved Mar 10, 2026, 09:54 PM UTC

    The outage has been resolved and Maestro in Studio Web is fully operational. Impact: No ongoing user impact.

  7. postmortem Apr 17, 2026, 05:58 AM UTC

    ### Customer Impact

    Beginning on March 9, 2026 at 9:49:28 UTC, some customers using Studio Web were unable to load the Maestro canvas. Affected users encountered an error state instead of the expected experience, which blocked them from opening and working with Maestro-based workflows in Studio Web. The issue affected a limited set of organizations across multiple cloud rings, including customer tenants. Other Studio Web capabilities remained available, but Maestro access was disrupted for impacted organizations. Mitigation was completed on March 10, 2026 at 21:55 UTC, resulting in an impact window of approximately 36 hours.

    ### Root Cause

    A licensing entitlement change intended to limit Maestro availability for Test Cloud licenses was applied too broadly. As a result, the entitlement required to access Maestro was removed from additional organizations that should have retained access. When Studio Web attempted to load Maestro for those affected organizations, the backend treated the service as unavailable, which caused the Maestro frontend module to fail to load and users to see an error instead of the canvas. The issue was not caused by infrastructure instability or a frontend deployment failure; it was caused by an incorrect entitlement state for the impacted organizations.

    ### Detection

    The issue was first identified through reports from internal users and subsequent investigation of telemetry showing repeated Maestro load failures for a distinct set of organizations. Because the failure was tied to an entitlement state rather than a full-service outage, existing alerting did not immediately classify the problem as a customer-facing incident. After the reports were correlated with telemetry and tenant-level impact, engineering confirmed the issue’s scope and declared an incident on March 10, 2026 at 15:15:09 UTC. A public status update was then posted to notify affected customers.

    ### Response

    Once the scope of the issue was understood, engineering teams reviewed entitlement and audit data to determine why affected organizations were being treated as ineligible for Maestro access. Initial mitigation attempts focused on validation and client-side recovery steps, but these did not address the underlying entitlement issue for impacted tenants. Investigation then confirmed that a recent licensing entitlement refresh had removed the required Maestro entitlement from organizations that should have remained enabled. The team reversed the entitlement change to restore the Maestro entitlement for Test Cloud licenses. After the change was rolled back and access was restored for affected organizations, telemetry and manual verification confirmed recovery. In a smaller number of cases, some users continued seeing the error briefly because the prior service-unavailable response had been cached; those cases were resolved as caches expired or were cleared.

    ### Follow-Up

    To reduce the likelihood of similar incidents, we are implementing the following improvements:

    1. Improve change review and coordination for entitlement-related updates that can affect customer access to product capabilities.
    2. Strengthen validation and rollout safeguards for entitlement changes so that tenant-targeting errors are caught before they cause production impact (sketched below).
    3. Improve Studio Web error handling so entitlement-related failures are surfaced more clearly and are easier to distinguish from generic frontend loading issues.
    4. Expand alerting and monitoring for Maestro load failures so canvas load issues are detected earlier.
    5. Reduce cache persistence for service-unavailable responses where appropriate, to shorten residual impact after recovery.
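The second follow-up item lends itself to a concrete illustration. Below is a minimal Python sketch of a pre-rollout safeguard for entitlement changes; the `Org` type, field names, and `validate_targets` function are hypothetical, not UiPath's actual licensing code.

```python
# Hypothetical safeguard for entitlement changes: verify the computed target
# set matches the stated intent ("remove Maestro only from Test Cloud
# licenses") and abort the rollout if the selector over-matched.
from dataclasses import dataclass

@dataclass(frozen=True)
class Org:
    org_id: str
    license_type: str  # e.g. "test-cloud", "enterprise"

def validate_targets(targets: list[Org], intended_license: str) -> list[Org]:
    """Reject the whole change if any out-of-scope org would be affected."""
    out_of_scope = [o for o in targets if o.license_type != intended_license]
    if out_of_scope:
        ids = ", ".join(o.org_id for o in out_of_scope)
        raise ValueError(f"entitlement change over-matched; out of scope: {ids}")
    return targets

if __name__ == "__main__":
    # A selector bug like this incident's: the query matched extra orgs.
    selected = [Org("org-1", "test-cloud"), Org("org-2", "enterprise")]
    try:
        validate_targets(selected, intended_license="test-cloud")
    except ValueError as err:
        print("rollout blocked:", err)  # caught before any production impact
```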

Read the full incident report →

Major March 10, 2026

DataFabric Service Degraded in EU Region

Detected by Pingoru
Mar 10, 2026, 12:39 PM UTC
Resolved
Mar 10, 2026, 02:39 PM UTC
Duration
1h 59m
Affected: Data Service
Timeline · 4 updates
  1. investigating Mar 10, 2026, 12:39 PM UTC

    We are currently investigating reports of an issue in DataFabric in the Europe region. Impact: Customers using Data Fabric in the EU Production Automation Cloud may experience intermittent degraded performance and failed requests. Our teams are actively working to identify the cause and assess the scope of the issue. Further updates will be shared as soon as more information becomes available.

  2. monitoring Mar 10, 2026, 01:43 PM UTC

    A fix has been deployed and the service is recovering. Current status: We are monitoring the system to ensure stability and full recovery. Further updates will be shared soon.

  3. resolved Mar 10, 2026, 02:39 PM UTC

    The issue has been resolved. The system has remained stable during the monitoring period.

  4. postmortem Mar 16, 2026, 07:39 AM UTC

    ## _Customer Impact_

    Between March 10, 2026, 10:15 and 13:36 UTC, Data Fabric customers in Europe experienced increased latency and intermittent failures on Data Fabric requests across consumption surfaces (Automation Cloud UI and Workflow Activities). During this period, some customers were unable to load Data Fabric UI pages or successfully run workflow activities; affected requests frequently timed out or were canceled by the client (observed as HttpClient timeouts / `TaskCanceledException`). No durable impact was introduced.

    ## _Root Cause_

    * A large-scale background maintenance job in the Europe ring temporarily increased demand on the database, slowing responses.
    * At the same time, a deployment and the resulting transient increase in user requests exposed that a subset of service instances did not have enough capacity to absorb the increased, sustained load.
    * The combination of slower database responses and insufficient transient capacity increased the time requests remained open and caused more work to accumulate on active service instances, which further degraded performance.

    This resulted in worsening performance and intermittent failures for some customers.

    ## _Detection_

    The issue was first detected by a customer report, and automated alerts were subsequently triggered.

    ## _Response_

    Upon receiving the report, the team immediately triaged the issue and identified elevated dependency latency and unhealthy runtime instances. Around 10:52 UTC, we received the customer report, engaged, and started investigating the issue. We initiated a rollback of the recent deployment at around 12:29 UTC and increased runtime capacity as a stabilizing measure to curb customer impact. These steps together returned the service to a healthy state by 13:36 UTC.

    ## _Follow-Up_

    * Review and align service instance resource allocations across regions so instances have sufficient headroom for transient load.
    * Improve operational observability to capture runtime diagnostics and memory/health signals earlier.
    * Add proactive alerts for sustained high resource usage and service restarts.
    * Implement operational guardrails for background jobs to prevent database overload and pressure.
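The `TaskCanceledException` symptom above is what a client-side deadline looks like when a dependency slows down. As a rough illustration of why bounded timeouts matter in this failure mode, here is a minimal Python sketch; the URL and limits are placeholders, and this is not the Data Fabric client.

```python
# A minimal sketch of bounded timeouts with capped retries: when a dependency
# slows down, requests fail fast instead of staying open and piling work onto
# already-busy instances.
import time

import requests

def call_with_deadline(url: str, attempts: int = 3, timeout_s: float = 5.0):
    """Give up after `attempts` tries instead of holding requests open."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=timeout_s)  # connect/read deadline
            resp.raise_for_status()
            return resp.json()
        except requests.exceptions.Timeout:
            # The symptom in this incident: the server held the request past
            # the client deadline (surfaced as TaskCanceledException in .NET).
            if attempt == attempts:
                raise
            time.sleep(0.5 * 2 ** (attempt - 1))  # exponential backoff
```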

Read the full incident report →

Major March 10, 2026

India - AI Center - AI Center Currently Unavailable

Detected by Pingoru
Mar 10, 2026, 11:41 AM UTC
Resolved
Mar 10, 2026, 01:12 PM UTC
Duration
1h 31m
Affected: AI Center
Timeline · 4 updates
  1. investigating Mar 10, 2026, 11:41 AM UTC

    We are currently investigating reports of an outage affecting AI Center accessibility in the India region. Impact: Users may be unable to access AI Center. Our teams are actively working to identify the cause and assess the scope of the issue. Further updates will be shared as soon as more information becomes available.

  2. monitoring Mar 10, 2026, 12:35 PM UTC

    We have identified the root cause of the issue affecting AI Center accessibility in the India region. A fix has been deployed and the service is recovering. Current status: We are monitoring the system to ensure stability and full recovery. Further updates will be shared soon.

  3. resolved Mar 10, 2026, 01:12 PM UTC

    The issue has been resolved. A fix was deployed and the system has remained stable during the monitoring period.

  4. postmortem Mar 16, 2026, 04:14 PM UTC

    ## _Customer Impact_

    Customers using AI Center with tenants located in **India** experienced **degraded service** between **March 10, 2026, 11:05 am UTC** and **12:40 pm UTC**. During this time, some operations related to skills and training pipelines were unavailable. Customers were unable to start new skills, resume or stop existing skills, or start new training pipelines. Skills that were already running continued to process requests normally, and training pipelines that were already in progress were not affected. Users saw error messages in the user interface when attempting these operations. Total duration: **approximately 95 minutes**.

    ## _Root Cause_

    A configuration change related to network settings prevented the AI Center services responsible for managing skills and training pipelines from communicating with the infrastructure that runs them. Because of this, requests to start or manage skills and training jobs could not be completed successfully.

    ## _Detection_

    The issue was detected through automated testing systems that identified failures in skill and training operations shortly after the problem began. These tests reported errors in both the user interface and backend operations, which indicated a service disruption.

    ## _Response_

    Once the issue was detected, engineers began investigating the failures affecting AI Center operations.

    March 10, 2026:

    * Automated tests detected failures in skill and training operations.
    * Engineers investigated multiple services and confirmed the issue was not temporary.
    * The investigation identified a network configuration change that blocked communication between AI Center components.
    * The configuration was corrected to restore communication between services.
    * AI Center services recovered automatically once the configuration update was applied.
    * Service functionality returned to normal shortly after the change was deployed.

    ## _Follow-Up_

    We are taking steps to reduce the likelihood of similar issues in the future.

    Short-term improvements:

    * Review configuration changes affecting infrastructure connectivity before rollout.
    * Validate cluster configuration updates using automated checks (sketched below).

    Long-term improvements:

    * Improve configuration management processes to better detect dependency changes.
    * Introduce automated validation and rollback mechanisms for infrastructure configuration changes.

    We remain committed to improving the reliability of AI Center and the UiPath Platform™.
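One way to read the short-term follow-ups is as a connectivity preflight run before a network configuration change is applied. A minimal sketch, assuming hypothetical endpoint names rather than AI Center's actual topology:

```python
# Hypothetical preflight for a network configuration rollout: confirm the
# control plane can still reach the infrastructure that runs skills and
# training pipelines. Hostnames are placeholders.
import socket

REQUIRED_ENDPOINTS = [
    ("ml-runtime.internal.example", 443),
    ("pipeline-scheduler.internal.example", 443),
]

def preflight(endpoints, timeout_s: float = 3.0) -> bool:
    """Return True only if every required dependency is reachable."""
    ok = True
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                print(f"reachable: {host}:{port}")
        except OSError as err:
            print(f"BLOCKED: {host}:{port} ({err}); abort the config rollout")
            ok = False
    return ok

preflight(REQUIRED_ENDPOINTS)  # run before, not after, applying the change
```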

Read the full incident report →

Major March 9, 2026

Multiple Regions - Agents - Web Summary Tool Outage

Detected by Pingoru
Mar 09, 2026, 04:41 PM UTC
Resolved
Mar 09, 2026, 08:41 PM UTC
Duration
4h
Affected: Agents
Timeline · 8 updates
  1. investigating Mar 09, 2026, 04:41 PM UTC

    We are investigating reports of an outage impacting the Web Summary tool for Agents in Multiple Regions. Impact: Users may be unable to run deployed or debug Agents that rely on the Web Summary tool, causing those workflows to fail. Next update: Our teams are working to understand the cause and scope and will share updates as available.

  2. investigating Mar 09, 2026, 05:12 PM UTC

    We are investigating reports of an outage impacting the Web Summary tool used in RPA workflows and Agents in Multiple Regions. Impact: Users may be unable to run RPA workflows or deployed/debug Agents that rely on the Web Summary tool, which may cause those automations to fail. Next update: Our teams are working to understand the cause and scope and will share updates as available.

  3. identified Mar 09, 2026, 05:46 PM UTC

    We have identified the cause of the outage impacting the Web Summary tool for RPA workflows and Agents in Multiple Regions and are working on a fix. Impact: Users may continue to be unable to run RPA workflows or deployed/debug Agents that rely on the Web Summary tool, which may cause those automations to fail. Next update: Our focus is on restoring service as quickly as possible.

  4. identified Mar 09, 2026, 06:46 PM UTC

    We have identified the cause of the outage impacting the Web Summary tool for RPA workflows and Agents in Multiple Regions and are continuing to work on a fix. Impact: Users may continue to be unable to run RPA workflows or deployed/debug Agents that rely on the Web Summary tool, which may cause those automations to fail. Next update: Our teams are actively working to restore service and will share updates as progress is made.

  5. identified Mar 09, 2026, 07:36 PM UTC

    We have identified the cause of the outage impacting the Web Summary tool for RPA workflows and Agents in Multiple Regions and are continuing to work on a fix. Impact: Users may continue to be unable to run RPA workflows or deployed/debug Agents that rely on the Web Summary tool, which may cause those automations to fail. Next update: Our teams are actively working to restore service and will share updates as progress is made.

  6. monitoring Mar 09, 2026, 08:37 PM UTC

    We have identified the cause of the outage impacting the Web Summary tool for RPA workflows and Agents across multiple regions and have implemented a fix. Our teams are currently monitoring the environment to ensure the fix is stable and that services continue to operate normally. Next update: We will continue monitoring and provide further updates as we confirm sustained service stability.

  7. resolved Mar 09, 2026, 08:41 PM UTC

    The issue impacting the Web Summary tool for RPA workflows and Agents across multiple regions has been resolved. The root cause was identified and a fix was successfully implemented. Services have been restored and users should now be able to run RPA workflows and deployed or debug Agents that rely on the Web Summary tool without failure. Thank you for your patience while our teams worked to resolve the issue.

  8. postmortem Mar 18, 2026, 11:54 PM UTC

    **Customer Impact**

    Between 16:28 UTC and 18:12 UTC (about 1 hour 44 minutes), the Web Summary tool experienced a service disruption. During this window, users were unable to execute summary-dependent tasks within Agents and RPA workflows, resulting in failed process executions.

    **Root Cause**

    The incident was triggered by an upstream API authentication failure. Specifically, the integration credentials for the third-party summary provider became invalidated due to an unexpected account configuration change on the provider's side. This led to the rejection of all outbound requests from our service.

    **Detection**

    The issue was first reported by a customer.

    **Resolution**

    To restore service, our engineering team provisioned a new API endpoint configuration and refreshed the associated security credentials. This successfully re-established the handshake with the provider and restored full functionality.

    **Timeline**

    * **16:28 UTC:** Issue identified and investigation started.
    * **16:56 UTC:** Incident officially declared.
    * **17:00 UTC:** Root cause identified as an upstream credential failure.
    * **17:30 UTC:** Deployment of new API configuration started.
    * **17:53 UTC:** Deployment completed. Validation successful across production regions.
    * **18:12 UTC:** Incident resolved. Monitoring confirmed stability.

    **Follow-Up Actions**

    * Use detailed error-rate alerting to spot 4xx/5xx responses from upstream providers right away, reducing reliance on customer reports (sketched below).
    * Conduct a post-mortem with the service provider to ensure account configuration stability and prevent future credential invalidation.
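The first follow-up action can be made concrete with a sliding-window error-rate monitor on upstream responses. The sketch below is illustrative; the window, threshold, and sample minimum are assumptions, not the alerting actually deployed.

```python
# Illustrative sliding-window error-rate alert on upstream responses.
import time
from collections import deque

class ErrorRateAlert:
    def __init__(self, window_s: float = 300.0, threshold: float = 0.2,
                 min_samples: int = 20):
        self.window_s = window_s
        self.threshold = threshold
        self.min_samples = min_samples
        self.samples: deque[tuple[float, bool]] = deque()  # (time, is_error)

    def record(self, status_code: int) -> bool:
        """Record one upstream response; return True if the alert should fire."""
        now = time.monotonic()
        self.samples.append((now, status_code >= 400))  # 4xx/5xx count as errors
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        if len(self.samples) < self.min_samples:
            return False
        rate = sum(is_err for _, is_err in self.samples) / len(self.samples)
        return rate >= self.threshold

alert = ErrorRateAlert()
for code in [200] * 15 + [401] * 10:  # a credential failure shows up as 401s
    if alert.record(code):
        print("error-rate alert: upstream provider rejecting requests")
        break
```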

Read the full incident report →

Minor March 9, 2026

Multiple Regions - Orchestrator - Issues Executing Processes Using Target Execution Bindings

Detected by Pingoru
Mar 09, 2026, 01:21 PM UTC
Resolved
Mar 10, 2026, 01:15 PM UTC
Duration
23h 53m
Affected: Orchestrator
Timeline · 7 updates
  1. identified Mar 09, 2026, 01:21 PM UTC

    We have identified the cause and are working on mitigation. Impact: Some users may experience issues when executing processes using target execution bindings. Additional updates will be provided as we move toward resolution. Affected Regions: Europe, US, Japan, Australia, Canada, Singapore, India, UK, Switzerland, Delayed US, Delayed EU

  2. identified Mar 09, 2026, 01:27 PM UTC

    We have identified the cause and are working on mitigation. Impact: Some users may experience issues when executing any process using target execution bindings. Additional updates will be provided as we move toward resolution. Affected Regions: Europe, US, Japan, Australia, Canada, Singapore, India, UK, Switzerland, Delayed US, Delayed EU

  3. identified Mar 09, 2026, 02:26 PM UTC

    Our team is actively working toward a resolution. We will provide the next update as soon as more information becomes available.

  4. identified Mar 09, 2026, 07:34 PM UTC

    A fix has been developed and deployed to our lower environments for validation. We are currently testing the changes to ensure stability before proceeding with a wider rollout. We plan to deploy the fix across production environments once validation is complete. Additional updates will be provided as we make progress toward full resolution.

  5. monitoring Mar 10, 2026, 10:45 AM UTC

    The fix has been successfully applied in the following regions and is functioning as expected: European Union, United States, India, Australia, Canada, UK, Switzerland, Delayed-US, and UAE. We are in the process of rolling out the changes to the remaining regions and validating them.

  6. resolved Mar 10, 2026, 01:15 PM UTC

    The issue has been resolved. A fix was deployed to the remaining regions and the system has remained stable during the monitoring period.

  7. postmortem Mar 16, 2026, 09:36 AM UTC

    ## _Customer Impact_

    Region(s) affected: Global (all regions)
    Service(s) affected: Orchestrator

    Customers using automation workflows with multiple manually configured Package Requirements execution settings could experience job failures with the following error:

    > "_Machine doesn't exist_" (Error code: 1002)

    The underlying issue was a pre-existing bug that surfaced under a specific sequence of process edits. It was reported on March 6, 2026 and fully resolved on March 10, 2026.

    ## _Root Cause_

    When updating a process that had execution settings configured, without modifying the execution settings themselves, some settings with empty default values could be incorrectly overwritten.

    ## _Detection_

    The problem was found on March 6, 2026, when a customer service ticket was opened. A customer reported that their automation workflow was failing with a "Machine doesn't exist" error upon execution. The issue had gone undetected until this point, as the bug manifests under a specific sequence of edits and produces no warnings at the time the corrupted data is saved.

    ## _Response_

    Upon receiving the report, engineering triaged the issue and identified the root cause. A workaround was provided to affected customers while a proper fix was developed. The hotfix was rolled out across all regions between March 9 and 10, 2026, and a database cleanup was applied to remediate all affected processes, fully restoring job execution.

    ## _Follow-Up_

    The bug has been fixed and a database cleanup has been performed to repair all affected processes. To prevent recurrence, we are taking the following preventative measures:

    * Adding checks to reject incorrect machine reference values before they are saved (sketched below).
    * Incorporating this scenario into our test suite to detect similar issues in future releases.
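The first preventative measure might look something like the following pre-save check; the field names and validation hook are hypothetical, not the Orchestrator API.

```python
# Illustrative pre-save validation: reject an execution-settings update if
# any machine reference points at a machine that does not exist, instead of
# persisting the corrupted value.
def validate_machine_refs(execution_settings: dict, known_machine_ids: set) -> None:
    for requirement, settings in execution_settings.items():
        machine_id = settings.get("machine_id")
        # An empty default must stay empty rather than being overwritten,
        # which was the corrupting write in this incident.
        if machine_id and machine_id not in known_machine_ids:
            raise ValueError(
                f"{requirement}: machine {machine_id!r} doesn't exist (error 1002)")

validate_machine_refs(
    {"PackageRequirementA": {"machine_id": "m-123"}},
    known_machine_ids={"m-123", "m-456"},
)  # passes; an unknown or corrupted id would be rejected before save
```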

Read the full incident report →

Major March 6, 2026

US - Agents - Agent Traces Are Not Appearing for Some Customers

Detected by Pingoru
Mar 06, 2026, 04:06 PM UTC
Resolved
Mar 06, 2026, 04:32 PM UTC
Duration
26m
Affected: Agents
Timeline · 4 updates
  1. investigating Mar 06, 2026, 04:06 PM UTC

    Agent traces are missing for some customers and we are investigating.

  2. investigating Mar 06, 2026, 04:08 PM UTC

    Agent traces are not appearing for some customers and we are investigating.

  3. monitoring Mar 06, 2026, 04:29 PM UTC

    Mitigation has been applied and performance is improving for the issue that impacted traces in Agents in the US. We are monitoring closely to ensure stability.

  4. resolved Mar 06, 2026, 04:32 PM UTC

    The issue has been resolved and Agents performance has returned to expected levels after degraded performance impacted traces in the US. Impact: No ongoing user impact.

Read the full incident report →

Major March 5, 2026

Multiple Regions - Multiple Services - Some Services May Be Unavailable

Detected by Pingoru
Mar 05, 2026, 12:38 AM UTC
Resolved
Mar 05, 2026, 01:01 AM UTC
Duration
22m
Affected: Orchestrator
Timeline · 3 updates
  1. monitoring Mar 05, 2026, 12:38 AM UTC

    Affected Regions: Europe, US, Japan, Australia, Canada, Singapore, Delayed US, Delayed EU, India, UK, Switzerland

  2. resolved Mar 05, 2026, 01:01 AM UTC

    Affected Regions: Europe, US, Japan, Australia, Canada, Singapore, Delayed US, Delayed EU, India, UK, Switzerland

  3. postmortem Apr 20, 2026, 07:37 PM UTC

    ## Customer Impact

    On 2026-03-05 between 00:00 and 00:10 UTC, customers in the **United States** region experienced approximately 10 minutes of sign-in delays, temporary access issues, and intermittent functionality errors across Orchestrator, Automation Cloud, Test Manager, and Studio Web. Customers whose organization data is hosted in other regions were not affected. Service behavior returned to normal automatically at 00:10 UTC. No data was lost and no customer action was required.

    Total customer-impacting duration: **~10 minutes**, 00:00 to 00:10 UTC.

    ## Root Cause

    During a routine software update, a temporary reduction in capacity occurred at a transition point in the update. For a brief window, the available capacity could not keep up with normal request volume, which produced the sign-in delays and intermittent errors customers observed. Capacity was restored automatically and service returned to normal behavior within ten minutes.

    ## Detection

    The incident was detected through automated monitoring. Alerts covering customer-facing response codes, request latency, and underlying service health fired within three minutes and paged the on-call team immediately. No monitoring gaps contributed to the incident.

    ## Response

    On-call engineering and site reliability engaged within one minute of the alerts firing and convened on an incident bridge. By the time the team was assembled, monitoring showed the service recovering on its own, so no manual mitigation was needed. The team held the bridge to verify stability, confirmed a clean 30-minute observation window with no recurrence, marked the incident mitigated at 00:37 UTC, updated the public status page to resolved at 01:01 UTC, and closed the incident at 01:29 UTC. A full post-incident investigation opened the next day identified the underlying cause.

    Timeline:

    * 00:13 UTC: Automated alerts fired; on-call teams paged.
    * 00:14 UTC: Engineers acknowledged and joined the incident bridge.
    * 00:10 to 00:37 UTC: Self-recovery confirmed; stability verified against live traffic.
    * 00:37 UTC: Incident marked mitigated.
    * 01:01 UTC: Public status page set to resolved.
    * 01:29 UTC: Incident closed.

    ## Follow-Up

    We have already increased the minimum capacity running in production so there is more headroom during future updates. As a longer-term improvement, we are moving this workload onto our modern deployment platform, which rolls out new versions gradually and supports automatic rollback if any issue is detected. This migration is a top engineering priority and is the change that fully addresses the underlying cause.
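The root cause above reduces to simple arithmetic: a rolling update temporarily removes instances from service, and if the remaining fleet cannot absorb normal load, requests fail until the transition completes. A toy Python illustration with made-up numbers:

```python
# Toy capacity-dip arithmetic: during a rolling update, the fleet momentarily
# serves with fewer instances, so the minimum capacity must still cover
# normal load at that low point.
def worst_case_capacity(instances: int, per_instance_rps: float,
                        max_unavailable: int) -> float:
    """Serving capacity at the lowest point of a rolling update."""
    return (instances - max_unavailable) * per_instance_rps

normal_load_rps = 900.0
# 10 instances of 100 rps, replacing 2 at a time: dips to 800 rps < load.
assert worst_case_capacity(10, 100.0, max_unavailable=2) < normal_load_rps
# Raising the minimum fleet to 12 keeps headroom through the transition.
assert worst_case_capacity(12, 100.0, max_unavailable=2) >= normal_load_rps
```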

Read the full incident report →

Major March 3, 2026

Multiple Regions - IXP - Some Customers Are Unable to Access IXP

Detected by Pingoru
Mar 03, 2026, 08:29 PM UTC
Resolved
Mar 03, 2026, 11:36 PM UTC
Duration
3h 7m
Affected: IXP
Timeline · 6 updates
  1. investigating Mar 03, 2026, 08:29 PM UTC

    Affected Regions: Europe, India, UK, Switzerland

  2. identified Mar 03, 2026, 09:36 PM UTC

    We have identified the issue and the team is working toward a resolution.

  3. identified Mar 03, 2026, 09:37 PM UTC

    We have identified the issue and the team is working toward a resolution.

  4. monitoring Mar 03, 2026, 10:31 PM UTC

    A fix has been deployed and we are monitoring it.

  5. resolved Mar 03, 2026, 11:36 PM UTC

    A fix has been deployed and we are monitoring it.

  6. postmortem Mar 05, 2026, 11:59 AM UTC

    ## Customer impact

    Between March 3, 2026, at 3:20 pm UTC and March 3, 2026, at 11:49 pm UTC, some customers were unable to access the IXP Platform. During this period, users making cross-region requests to IXP received "Not Found" responses. This encompassed the design-time UI, the API, and consumers via Activities. The disruption lasted approximately 8 hours and 27 minutes.

    ## Root cause

    The incident was triggered by a configuration change to the service routing layer for IXP, specifically a modification to the service routing configuration deployed on March 3, 2026, at 3:20 pm UTC. This change was intended to prepare for an upcoming feature release but inadvertently disrupted cross-region request handling. As a result, requests originating from one region and targeting IXP in another region were blocked or misrouted, causing service unavailability for affected customers. Attempts to roll back the change initially worsened the issue, as the rollback did not restore the previous stable state and introduced additional routing inconsistencies. Recovery required building and deploying a targeted hotfix to restore correct routing behavior across all impacted regions.

    ## Detection

    The incident was first detected on March 3, 2026, at 6:21 pm UTC, when clients advised that they were unable to access the IXP design-time UI. Automated monitoring confirmed an issue with cross-region requests since the deployment. The issue was classified as an incident and escalated within minutes, with the status page updated to reflect the outage. Initial investigation began immediately, with engineering teams coordinating in real time.

    ## Response

    On March 3, 2026, at 6:21 pm UTC, engineering teams began investigating the root cause, focusing on recent configuration changes to the service routing layer. An initial attempt to fix the issue by reverting a minimal set of changes was unsuccessful at 7:55 pm UTC, and the engineering team started work on a more robust fix. By 9:07 pm UTC, the issue was identified as a problematic service routing configuration deployed earlier that afternoon. A hotfix was developed and tested in a smaller environment, then deployed to the Japan region to verify resolution. By 10:00 pm UTC, the hotfix was pushed to the EU and US regions, where the majority of affected customers were located. Full recovery was achieved by March 3, 2026, at 11:49 pm UTC, when monitoring confirmed normal operation and customers reported restored access to IXP. The status page was updated to resolved, and no further customer cases were reported. Recovery was verified through automated checks, customer confirmations, and monitoring data, ensuring that cross-region requests to IXP were functioning as expected.

    ## Follow up

    To prevent similar incidents, we are implementing several technical and process improvements:

    * Automated validation checks will be added to pre-production environments, ensuring that changes to service routing layers are tested for cross-region compatibility before rollout (sketched below).
    * Enhanced monitoring will be deployed to detect anomalies in cross-region request handling, providing earlier warning and more granular visibility into service health.
    * We will improve how quickly and precisely we can revert recently deployed changes so that issues can be resolved as soon as possible once they have been diagnosed.

    These measures build on lessons learned from similar past events, such as the incident involving email delivery disruptions due to configuration changes. We are committed to systematically addressing this pattern by strengthening our deployment safeguards, improving detection capabilities, and ensuring rapid, reliable recovery for our customers. Technical improvements will be rolled out over the coming weeks, with ongoing review to ensure effectiveness and resilience.
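The first follow-up item amounts to probing every region pair before promoting a routing change. A skeletal Python sketch with placeholder regions and a stubbed probe, not UiPath's deployment tooling:

```python
# Skeletal cross-region routing validation: before promoting a routing
# change, exercise every (source, target) region pair and fail the rollout
# on any non-200 response (the incident surfaced as "Not Found").
import itertools

REGIONS = ["eu", "us", "jp"]

def probe(source: str, target: str) -> int:
    """Stand-in for a synthetic cross-region request returning an HTTP status.
    A real check would issue the request from infrastructure in `source`."""
    return 200

def validate_cross_region_routing() -> bool:
    failures = [(s, t) for s, t in itertools.permutations(REGIONS, 2)
                if probe(s, t) != 200]
    for source, target in failures:
        print(f"cross-region routing broken: {source} -> {target}")
    return not failures

assert validate_cross_region_routing()  # gate the deployment on this
```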

Read the full incident report →

Major February 27, 2026

Multiple Regions - Integration Service

Detected by Pingoru
Feb 27, 2026, 04:33 PM UTC
Resolved
Feb 27, 2026, 05:12 PM UTC
Duration
38m
Affected: Integration Service
Timeline · 3 updates
  1. monitoring Feb 27, 2026, 04:33 PM UTC

    From 2/26/2026 1:30 PM UTC to 2/27/2026 3:40 PM UTC, we experienced an incident affecting Integration Service connectors used by the AI Trust Layer “Bring Your Own” (BYO) feature. Customers using BYO LLM/LLM Configurations to power Agents, GenAI Activities, or other agentic products may have encountered workflow failures during the impacted period. The issue occurred because we were unable to retrieve required connection details at runtime, resulting in failed requests to customer-configured external models. The issue has been identified and resolved. Service is fully restored, though workflows that failed during the incident window may need to be re-run. We continue to monitor the system to ensure stability.

  2. resolved Feb 27, 2026, 05:12 PM UTC

    From 2/26/2026 1:30 PM UTC to 2/27/2026 3:40 PM UTC, we experienced an incident affecting Integration Service connectors used by the AI Trust Layer “Bring Your Own” (BYO) feature. Customers using BYO LLM/LLM Configurations to power Agents, GenAI Activities, or other agentic products may have encountered workflow failures during the impacted period. The issue occurred because we were unable to retrieve required connection details at runtime, resulting in failed requests to customer-configured external models. The issue has been identified and resolved. Service is fully restored, though workflows that failed during the incident window may need to be re-run. We continue to monitor the system to ensure stability.

  3. postmortem Mar 18, 2026, 08:00 AM UTC

    ## _Customer Impact_

    Between February 26, 2026 at 1:30 pm UTC and February 27, 2026 at 3:40 pm UTC, a small number of customers experienced workflow failures when using Bring Your Own (BYO) large language model (LLM) configurations. Workflows that connected to external AI providers, such as Vertex AI and other supported LLM services, returned HTTP 401 (Unauthorized) or HTTP 500 (Internal Server Error) responses and failed to run successfully. Any workflows that failed during this period needed to be manually re-run after the issue was resolved.

    ## _Root Cause_

    A recent update to Integration Service introduced a serialization and deserialization issue in how requests to external LLM providers were processed. Because of this issue, some requests to external AI providers failed, which caused workflows using BYO LLM configurations to stop running successfully.

    ## _Detection_

    The issue was discovered through monitoring that detected an increase in failed requests related to BYO LLM connections. Because the issue affected only certain workflows and external model connections, the overall service remained available, which delayed detection.

    ## _Response_

    Once the issue was identified, engineers investigated the failures affecting BYO LLM connections.

    **February 27, 2026**

    * **Shortly after detection**: Engineers confirmed the issue and began reverting the recent change.
    * A fix was deployed to restore the previous behavior for configuration handling.
    * Service improvements were observed as error rates began to decrease.
    * Engineers validated the fix by confirming that workflows connecting to external LLM providers were running successfully.
    * **3:40 pm UTC**: Service recovery was confirmed and all affected functionality returned to normal.

    Throughout the incident, customer updates were shared on the status page and monitoring continued to ensure stability.

    ## _Follow-Up_

    We are implementing improvements to reduce the likelihood of similar incidents.

    _Short-term improvements_

    * Add additional validation before deployments that affect external AI integrations, including serialization and deserialization handling (sketched below).
    * Improve monitoring so issues with external model connections are detected faster.

    _Long-term improvements_

    * Expand testing for supported external AI providers such as Vertex AI, OpenAI, and Azure OpenAI.
    * Introduce gradual rollouts for changes affecting critical integrations to detect issues earlier.
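A bug of this class, a lossy serialize/deserialize step on the request path, is exactly what a round-trip regression test catches before deployment. A minimal sketch with an invented payload shape, not the actual Integration Service schema:

```python
# Round-trip regression test: serialization followed by deserialization must
# be lossless, or requests to external providers get mangled in transit.
import json

def serialize(payload: dict) -> str:
    return json.dumps(payload, sort_keys=True)

def deserialize(raw: str) -> dict:
    return json.loads(raw)

def test_round_trip():
    payload = {
        "provider": "vertex-ai",  # hypothetical BYO connection fields
        "model": "example-model",
        "headers": {"Authorization": "Bearer <redacted>"},
    }
    # A lossy step here is what turned into 401/500 responses downstream.
    assert deserialize(serialize(payload)) == payload

test_round_trip()
```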

Read the full incident report →

Minor February 27, 2026

Multiple Regions - Data Service - VDO Entity Create/Modify Temporarily Restricted

Detected by Pingoru
Feb 27, 2026, 06:40 AM UTC
Resolved
Feb 27, 2026, 02:12 PM UTC
Duration
7h 31m
Affected: Data Service
Timeline · 10 updates
  1. investigating Feb 27, 2026, 06:40 AM UTC

    We are currently investigating the issue. Affected Regions: Europe, US, Japan, Australia, Canada, Singapore, Delayed US, Delayed EU, India, UK, Switzerland, UAE

  2. identified Feb 27, 2026, 07:42 AM UTC

    We have identified the scope of the issue. The impact is limited to the Create Entity preview feature. This action is currently not functioning as expected for a subset of connectors. Affected connectors: SAP C4C, Oracle NetSuite, Zendesk, HubSpot CRM, BambooHR, Zoho Desk, UiPath Data Fabric, QuickBooks Online, FreshDesk, DocuSign

  3. identified Feb 27, 2026, 08:20 AM UTC

    We have identified the scope of the issue. The impact is limited to the Create VDO Entity preview feature. This action is currently not functioning as expected for a subset of connectors.

  4. identified Feb 27, 2026, 09:24 AM UTC

    We are continuing to work on a fix for this issue.

  5. identified Feb 27, 2026, 10:23 AM UTC

    We have identified a fix and are actively validating it in our testing environments. We will continue to provide updates as we make progress.

  6. identified Feb 27, 2026, 11:18 AM UTC

    We are continuing to validate the fix in our testing environments. We will provide further updates as progress is made.

  7. identified Feb 27, 2026, 12:20 PM UTC

    We have completed validation of the fix in our testing environments and will begin rolling it out shortly.

  8. monitoring Feb 27, 2026, 01:58 PM UTC

    A fix has been implemented and we are monitoring the results.

  9. resolved Feb 27, 2026, 02:12 PM UTC

    The incident has been fully resolved. We identified that a recent update unintentionally caused some previously available connections to not appear. The issue has been corrected, and all connections are now restored. There was no data loss or long-term impact. Thank you for your patience.

  10. postmortem Mar 03, 2026, 03:06 PM UTC

    ## Customer Impact

    Between 2026-02-27 06:25 UTC and 2026-02-27 14:12 UTC (approximately 7 hours 47 minutes), customers across multiple regions were unable to create or modify Entity definitions using certain connectors through the Data Service. The Create Entity preview feature was not functioning as expected for a subset of connectors. Existing Entities capabilities in Data Explorer and activities were not affected.

    **Affected Connectors:** SAP C4C, Oracle NetSuite, Zendesk, HubSpot CRM, BambooHR, Zoho Desk, UiPath Data Service, QuickBooks Online, FreshDesk, DocuSign

    There was no data loss or long-term impact. All connections were restored after the fix was deployed.

    ## Root Cause

    A code change introduced a feature flag check for listing connections, but the base master connection list was incomplete: it was missing some connectors that previously existed in Data Service. As a result, the affected connectors did not appear when the feature flag was evaluated, preventing customers from creating or modifying Entity definitions through those connectors. This was a shallow code issue with no deep architectural impact. The fix involved updating the master connection list to include all previously available connectors and adjusting the feature flag accordingly.

    ## Detection

    The issue was detected at 2026-02-27 06:25 UTC via an automated alert created through the incident workflow in Rootly. The alert was acknowledged by the DataFabric team within minutes. A customer-impacting incident was declared at 06:41 UTC, and the status page was created at 06:40 UTC. By 06:48 UTC, the DataFabric team confirmed the scope: Entity definition create/modify operations were restricted, while existing Entity queries and behaviour remained unaffected.

    ## Response

    * **06:25 UTC**: Issue detected and investigation started.
    * **06:41 UTC**: Customer-impacting incident declared.
    * **~07:40 UTC**: Root cause identified and fix prepared.
    * **~09:45 UTC**: Fix validated in pre-production environment.
    * **~10:20 UTC**: Deployment initiated across production regions.
    * **13:53 UTC**: Deployment completed across all affected regions.
    * **14:12 UTC**: Incident fully resolved and monitoring completed.

    ## Follow-Up Actions

    To prevent similar incidents in the future, we are implementing the following improvements:

    * Strengthening validation checks to ensure all supported connectors are correctly included during feature updates (sketched below).
    * Improving pre-release verification processes for changes affecting connector configuration or visibility.
    * Enhancing monitoring to proactively detect unexpected connector availability issues.
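The root cause suggests a cheap guardrail: treat the previously shipped connector set as a baseline and fail fast if a new master list drops any of it. A sketch using the connector names from this report; the check itself is illustrative, not UiPath's build tooling.

```python
# Baseline-superset check: a new master connection list may add connectors
# but must never silently drop ones that were previously available.
PREVIOUSLY_AVAILABLE = {
    "SAP C4C", "Oracle NetSuite", "Zendesk", "HubSpot CRM", "BambooHR",
    "Zoho Desk", "UiPath Data Service", "QuickBooks Online", "FreshDesk",
    "DocuSign",
}

def check_master_list(new_master_list: set) -> None:
    missing = PREVIOUSLY_AVAILABLE - new_master_list
    if missing:
        raise AssertionError(
            f"master connection list dropped connectors: {sorted(missing)}")

check_master_list(PREVIOUSLY_AVAILABLE | {"NewConnector"})  # additions are fine
# check_master_list(PREVIOUSLY_AVAILABLE - {"Zendesk"}) would fail the build
```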

Read the full incident report →

Looking to track UiPath downtime and outages?

Pingoru polls UiPath's status page every 5 minutes and alerts you the moment it reports an issue, before your customers do.
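For a feel of what this polling involves, here is a minimal sketch, assuming status.uipath.com exposes a Statuspage-style `/api/v2/status.json` endpoint (it appears to be Statuspage-hosted; verify the URL before relying on it):

```python
# Minimal status-page poller: fetch the overall indicator every 5 minutes
# and alert on any transition away from "none".
import time

import requests

STATUS_URL = "https://status.uipath.com/api/v2/status.json"

def poll_once() -> str:
    data = requests.get(STATUS_URL, timeout=10).json()
    return data["status"]["indicator"]  # "none", "minor", "major", "critical"

last = None
while True:
    indicator = poll_once()
    if indicator != "none" and indicator != last:
        print(f"UiPath is reporting an issue: {indicator}")  # alert hook here
    last = indicator
    time.sleep(300)  # poll every 5 minutes
```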

  • Real-time alerts when UiPath reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track UiPath alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring UiPath for free

5 free monitors · No credit card required