UiPath incident

Europe - Integration Service - Degraded Performance

Major · Resolved
Started
Apr 15, 2026, 07:37 AM UTC
Resolved
Apr 15, 2026, 07:48 AM UTC
Duration
10m
Detected by Pingoru
Apr 15, 2026, 07:37 AM UTC

Affected components

Integration Service

Update timeline

  1. Investigating · Apr 15, 2026, 07:37 AM UTC

    We are currently investigating an issue where customers in the Europe region are unable to create new Integration Service Connections. Existing connections are not impacted. Our engineering team is actively working to identify the root cause and restore full functionality.

  2. Resolved · Apr 15, 2026, 07:48 AM UTC

    The issue has been resolved. The system is stable.

  3. Postmortem · Apr 29, 2026, 08:49 AM UTC

    ## Customer Impact

    Between April 15, 2026 at 7:00 am UTC and April 15, 2026 at 7:50 am UTC, customers in the Europe region were unable to create new connections using the Integration Service. Existing connections and all other regions remained fully operational throughout the incident. Customers who attempted to create new connections received error messages. A public status page update was published at 7:40 am UTC to keep affected customers informed.

    ### Scope

    Only customers in the Europe region who attempted to create new Integration Service connections during this window were affected. All other regions and existing connections continued to function normally.

    ## Root Cause

    A recent backend configuration update to the Integration Service in the Europe region required corresponding database schema updates to be applied as part of the same deployment. Due to a conditional check in the deployment pipeline (introduced by an earlier, unrelated change), the schema migration step was skipped during rollout. As a result, the service ran against a schema that did not match the expected configuration, causing new connection creation requests to fail with internal errors. Existing connections were unaffected because they did not rely on the updated schema paths.

    Once the team identified the mismatch, the deployment was rolled back, which restored the service to its previous working state, and normal operation resumed. Analysis of service logs and error patterns confirmed the root cause, with failures aligning precisely with the timing and scope of the deployment.

    ## Detection

    Automated monitoring detected the incident at 7:24 am UTC, when error rates for new connection attempts in the Europe region exceeded normal thresholds. The alert was acknowledged within one minute, and incident response procedures began immediately. By 7:25 am UTC, the responsible team had assembled and initiated its investigation. The interval between the onset of customer impact and detection was under a minute, enabling a rapid response.

    ## Response

    At 7:05 am UTC, engineers received the automated alert and joined the incident response call to investigate error logs and recent deployments to the Integration Service. By 7:15 am UTC, the team had scoped the issue to new connection attempts in the Europe region and identified the recent deployment as the likely cause. At 7:33 am UTC, the incident was formally classified as customer-impacting, and at 7:40 am UTC a public status page update was published. By 7:48 am UTC, the team had confirmed the root cause, rolled back the deployment, and verified that new connection creation had returned to normal. Monitoring confirmed full recovery by 7:51 am UTC, and the incident was declared resolved.

    ## Follow-up

    To prevent similar incidents, we are implementing the following improvements:

    * **Deployment pipeline fix:** Correcting the pipeline condition that caused the database schema migration step to be skipped, and adding guardrails to ensure required schema updates are always applied alongside the corresponding service changes.
    * **Pre-deployment validation:** Adding automated pre-deployment checks that verify service and schema compatibility, so mismatches are caught before any customer-facing rollout.
    * **Canary traffic coverage:** Adding automated test calls that exercise the new connection API against canary instances on every deployment, so schema or configuration mismatches are caught in the canary phase regardless of live traffic levels.
    * **Enhanced connection monitoring:** Expanding monitoring to include targeted alerts for failed new-connection attempts, enabling even faster detection of issues affecting new connections.

    We are also reviewing recent changes to our deployment and validation processes to identify additional safeguards.

    We sincerely appreciate your patience and understanding as we continue to work to make our services more resilient and dependable.
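The "pre-deployment validation" follow-up in the postmortem boils down to a gate that fails a rollout when the service and database schema disagree. The sketch below is a minimal illustration only, assuming a hypothetical versioned schema; the names (`EXPECTED_SCHEMA_VERSION`, `deployment_gate`) are invented, and UiPath's actual pipeline internals are not public.

```python
# Hypothetical sketch of a pre-deployment schema-compatibility gate.
# All names are illustrative; the real pipeline is not public.

EXPECTED_SCHEMA_VERSION = 7  # version this service build was developed against


def applied_schema_version(db: dict) -> int:
    """Read the schema version recorded in the target database.

    A plain dict stands in for a real database connection in this sketch.
    """
    return db["schema_version"]


def deployment_gate(db: dict, expected: int = EXPECTED_SCHEMA_VERSION) -> str:
    """Abort the rollout loudly instead of silently skipping the migration."""
    applied = applied_schema_version(db)
    if applied != expected:
        raise RuntimeError(
            f"schema mismatch: service expects v{expected}, "
            f"database is at v{applied}; aborting rollout"
        )
    return "rollout approved"
```

The design point this illustrates: a version mismatch fails the deployment outright, rather than relying on a conditional migration step that can be skipped, which is the failure mode described in the root cause above.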

Looking to track UiPath downtime and outages?

Pingoru polls UiPath's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when UiPath reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track UiPath alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
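At its core, this kind of status-page monitoring is a poll-and-compare loop: fetch the vendor's published status on an interval, and alert when it changes. The sketch below is hypothetical (Pingoru's internals are not public) and assumes a statuspage.io-style `status.json` payload, a format many vendors use; the `StatusWatcher` name and the notifier callback are illustrative.

```python
# Minimal sketch of status-page change detection: the poll-and-compare
# core behind a status monitor. Payload format and names are assumptions.
import json


def parse_status(payload: str) -> str:
    """Extract the overall indicator from a statuspage.io-style JSON body."""
    return json.loads(payload)["status"]["indicator"]


class StatusWatcher:
    """Remembers the last seen status and emits an alert on any transition."""

    def __init__(self, notify):
        self.notify = notify  # e.g. a function posting to a chat webhook
        self.last = None

    def observe(self, payload: str) -> None:
        current = parse_status(payload)
        if self.last is not None and current != self.last:
            self.notify(f"UiPath status changed: {self.last} -> {current}")
        self.last = current
```

In practice the `observe` call would run on a timer against the vendor's status endpoint, and `notify` would fan out to email, Slack, or webhooks; only transitions produce alerts, so a steady state stays quiet.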
Start monitoring UiPath for free

5 free monitors · No credit card required