UiPath incident

Delayed US - Autopilot for Everyone - Sign-in issues

Major · Resolved
Started
Apr 20, 2026, 06:55 PM UTC
Resolved
Apr 20, 2026, 08:13 PM UTC
Duration
1h 18m
Detected by Pingoru
Apr 20, 2026, 06:55 PM UTC

Affected components

Autopilot for Everyone

Update timeline

  1. investigating Apr 20, 2026, 06:55 PM UTC

    Users may be unable to sign in to Autopilot for Everyone, and may be unable to send messages even if already logged in.

  2. monitoring Apr 20, 2026, 07:13 PM UTC

    We have applied a mitigation and are seeing traffic return to normal. We will continue to monitor the situation as it improves.

  3. resolved Apr 20, 2026, 08:13 PM UTC

    We have applied a mitigation and are seeing traffic return to normal. We will continue to monitor the situation as it improves.

  4. postmortem Apr 20, 2026, 10:16 PM UTC

    ## Customer Impact

    Between approximately 6:00 pm UTC and 7:14 pm UTC on April 20, 2026, customers using the Autopilot for Everyone service in the United States Delayed region were unable to sign in or send chat messages. All attempts to access the service returned errors, rendering the service effectively unavailable. Customers with cached credentials were also unable to send messages, and no workaround was available during the outage.

    Scope: The impact was limited to customers accessing Autopilot for Everyone via Portal > AI Trust Layer and those using Autopilot for Everyone via the Assistant in the United States Delayed region.

    ## Root Cause

    A service update deployed to the Autopilot for Everyone service on April 20, 2026 triggered an unexpected modification to the traffic routing configuration in the United States Delayed infrastructure. The routing rules that direct incoming customer requests were altered to an unsupported configuration. This mismatch meant that the routing layer could not match incoming requests to any valid destination, resulting in HTTP 404 errors for all service traffic.

    Recovery was achieved by manually patching the routing configuration to include both the original correct address and the modified address, allowing incoming customer requests to be properly routed and restoring service availability. The exact mechanism by which the infrastructure-level process introduced the incorrect routing address has been determined, and we are working on a long-term fix.

    ## Detection

    The incident was first detected at 6:02 pm UTC on April 20, 2026, when automated monitoring reported failures in the Autopilot for Everyone service. Multiple alerts were triggered simultaneously, including service availability checks and automated browser-based tests targeting the affected region, all indicating that the service was unreachable. Engineers observed HTTP 404 errors on all service endpoints, including health checks, along with routing-level "no route" errors confirming that traffic could not reach the service.

    ## Response

    The service update that triggered the issue completed at approximately 6:00 pm UTC. The engineering team attempted to revert the change, but the rollback did not resolve the issue. Approximately 43 minutes elapsed between the deployment and the attempted rollback. The persistent errors after the revert prompted the team to escalate, formally declare an incident, and begin a coordinated response.

    Upon detection, our engineering team began investigating the routing configuration and identified that it referenced an incorrect internal address that did not match the address used by incoming customer traffic. To confirm the diagnosis, the team performed targeted tests against both the incorrect and correct addresses. Requests sent to the incorrect address returned successfully, while requests using the expected address failed, confirming that the routing configuration was functional but pointing to an unsupported destination.

    By 7:06 pm UTC, the team had formulated a plan to manually patch the routing configuration by adding the correct address alongside the existing incorrect entry. This additive approach was chosen as a safe, non-destructive fix that would restore service without risking disruption to any processes that might depend on the existing configuration.

    At 7:11 pm UTC, the routing configuration was patched. Health checks immediately returned successful responses, and automated service monitoring confirmed that the service was recovering. By 7:14 pm UTC, the incident was marked as mitigated and normal functionality was restored. The team continued monitoring the service for approximately one hour, confirming a full recovery with no further failures. The incident was marked as fully resolved at 8:14 pm UTC.

    ## Follow-up

    To prevent similar incidents in the future, we are implementing the following improvements:

      • Infrastructure process investigation: We have identified why an infrastructure-level process modified the routing configuration. We will follow up with safeguards to ensure that modifications are backwards-compatible with deployments running older releases.

    We are committed to making our systems more resilient and transparent. These improvements will help us detect configuration issues earlier, reduce recovery times, and deliver a more reliable experience for all customers.
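The failure mode and the additive mitigation described in the postmortem can be illustrated with a small sketch. The hostnames and backend name below are hypothetical (the postmortem does not disclose the real addresses); the sketch only models the mechanism: a host-based routing table that returns HTTP 404 for any request it cannot match, repaired by adding the correct address alongside the bad entry rather than replacing it.

```python
# Minimal model of a host-based routing layer, assuming the routing
# rules map an incoming Host to a backend. All names are hypothetical.

def route(routes: dict, host: str) -> int:
    """Return an HTTP status: 200 if the host matches a route, else 404."""
    return 200 if host in routes else 404

# After the faulty update, only the unsupported internal address matches.
routes = {"internal.autopilot.example": "backend-pool-1"}

assert route(routes, "autopilot.example") == 404           # customer traffic fails
assert route(routes, "internal.autopilot.example") == 200  # diagnostic probe succeeds

# The additive mitigation: patch in the correct address alongside the
# existing entry instead of replacing it (non-destructive).
routes["autopilot.example"] = "backend-pool-1"

assert route(routes, "autopilot.example") == 200           # service restored
```

The two assertions before the patch mirror the team's targeted tests: the routing layer was functional (the unsupported address answered 200) but pointed at a destination customer traffic never uses, which is why every real request saw a 404.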

Looking to track UiPath downtime and outages?

Pingoru polls UiPath's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when UiPath reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track UiPath alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring UiPath for free

5 free monitors · No credit card required