Affected components
Update timeline
- investigating Mar 10, 2026, 12:39 PM UTC
We are currently investigating reports of an issue with Data Fabric in the Europe region. Impact: customers using Data Fabric in the EU production Automation Cloud may experience intermittent degraded performance and failed requests. Our teams are actively working to identify the cause and assess the scope of the issue. Further updates will be shared as soon as more information becomes available.
- monitoring Mar 10, 2026, 01:43 PM UTC
A fix has been deployed and the service is recovering. Current status: we are monitoring the system to ensure stability and full recovery. Further updates will be shared soon.
- resolved Mar 10, 2026, 02:39 PM UTC
The issue has been resolved. The system has remained stable during the monitoring period.
- postmortem Mar 16, 2026, 07:39 AM UTC
## _Customer Impact_

Between March 10, 2026, 10:15 UTC and 13:36 UTC, Data Fabric customers in Europe experienced increased latency and intermittent failures on Data Fabric requests across consumption surfaces (Automation Cloud UI and Workflow Activities). During this period, some customers were unable to load Data Fabric UI pages or successfully run workflow activities; affected requests frequently timed out or were canceled by the client (observed as HttpClient timeouts / `TaskCanceledException`). No durable impact was introduced.

## _Root Cause_

* A large-scale background maintenance job in the Europe ring temporarily increased demand on the database, slowing responses.
* At the same time, a deployment and the resulting transient increase in user requests exposed that a subset of service instances did not have enough capacity to absorb the increased, sustained load.
* The combination of slower database responses and insufficient transient capacity increased the time requests remained open and caused more work to accumulate on active service instances, which further degraded performance. This resulted in worsening performance and intermittent failures for some customers.

## _Detection_

The issue was first detected by a customer report; automated alerts were subsequently triggered.

## _Response_

Upon receiving the report, the team immediately triaged the issue and identified elevated dependency latency and unhealthy runtime instances. We received the customer report around 10:52 UTC, engaged, and began investigating. We initiated a rollback of the recent deployment at around 12:29 UTC and increased runtime capacity as a stabilizing measure to curb customer impact. These steps together returned the service to a healthy state by 13:36 UTC.

## _Follow-Up_

* Review and align service instance resource allocations across regions so instances have sufficient headroom for transient load.
* Improve operational observability to capture runtime diagnostics and memory/health signals earlier.
* Add proactive alerts for sustained high resource usage and service restarts.
* Implement operational guardrails for background jobs to prevent database overload/pressure.