Workflows Runtime recovered
Timeline · 1 update
- Investigating · May 01, 2026, 01:41 AM UTC
Workflows Runtime went down
There were 9 Langdock outages since February 15, 2026, totaling 37h 56m of downtime. Each is summarised below with incident details, duration, and resolution information.
Workflows Runtime went down
We're currently experiencing issues and Langdock is unavailable for all users. We're investigating and will keep you updated.
We have identified the root cause and fixed the issue. A failed database migration prevented some services from starting correctly. Langdock is available again for all users. We will continue to monitor the situation.
We are investigating a high error rate across Workflow runs.
We have identified the issue and released a fix. This issue is resolved.
api.langdock.com recovered
app.langdock.com recovered
Our workflow execution and file processing services are currently under heavy load due to the incident earlier in the day. We expect the execution backlog to clear over the next few hours.
The backlog has been processed. File uploads and workflows are running normally again.
Performance of the platform is currently degraded.
Post-Mortem: Service Disruption – March 30, 2026
Duration: 8:00 AM – 4:36 PM CEST
Severity: Major
Affected: app.langdock.com
Status: Resolved

What Happened
On March 30, 2026, Langdock experienced a major service disruption caused by a platform-level degradation on Azure Container Apps (ACA). Containers that normally start in under 2 minutes took over 15 minutes during the incident, preventing our autoscaler from bringing new capacity online and leading to a full platform outage from approximately 11:30 AM CEST.

Timeline (CEST)
- 8:00 AM — Performance degradation detected
- 9:15 AM — Issue identified; hotfix deployed
- 9:43 AM — Performance stabilised, partial recovery
- 9:55 AM — Downstream issues discovered; investigation continues
- 10:45 AM — Core platform restored; workflows & file uploads still affected
- 11:30 AM — Full platform outage begins due to cascading downstream effects of the increased ACA startup times
- 12:11 PM — Platform taken fully offline for a controlled reboot
- 3:34 PM — Platform restored and accepting traffic
- 4:36 PM — Workflows and file uploads fully restored
- 4:37 PM — Incident fully resolved

Root Cause
Azure Container Apps experienced a platform-level degradation affecting container startup times. The same container image started in under 30 seconds on AKS, confirming the issue was with ACA and not our application.

Resolution & Next Steps
We relaxed startup probes, raised the minimum container count, and temporarily disabled readiness/liveness probes to restore service. We are now implementing improved failover paths, improved startup monitoring, and better baseline capacity provisioning to prevent a recurrence.

We sincerely apologize for the disruption. Please reach out to [email protected] with any questions.
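The probe and capacity mitigations described in the post-mortem could look roughly like the following Azure Container Apps template fragment. All values here (container name, image, port, thresholds, replica counts) are illustrative assumptions, not Langdock's actual configuration:

```yaml
properties:
  template:
    containers:
      - name: app                               # hypothetical container name
        image: myregistry.azurecr.io/app:v1     # hypothetical image
        probes:
          - type: Startup
            httpGet:
              path: /healthz                    # hypothetical health endpoint
              port: 8080
            periodSeconds: 10
            failureThreshold: 90                # relaxed: tolerate ~15-minute cold starts
          # Readiness/Liveness probes omitted here, mirroring the temporary
          # disablement described in the resolution.
    scale:
      minReplicas: 6                            # raised baseline capacity
      maxReplicas: 30
```

A fragment like this can be applied with `az containerapp update --yaml`; the key idea is that a generous startup `failureThreshold` keeps the platform from killing containers that are merely slow to boot, while a higher `minReplicas` keeps warm capacity online when scaling out is slow.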
We are aware of an issue where the web search in chat and workflows is not producing results. We are currently working on a fix for this issue.
We have identified the root cause of the issue and released a fix.
At the moment, login is only possible via Microsoft. We are working on restoring the usual login flow and will post updates as we have more information.
We have identified the root cause and restored the normal login flow.
Pingoru polls Langdock's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
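A monitor with this behaviour can be sketched in Python. The status URL, keyword heuristic, and alert callback below are illustrative assumptions, not Pingoru's implementation:

```python
import time
import urllib.request

# Hypothetical endpoint and cadence; only the "poll every 5 minutes" behaviour
# comes from the description above.
STATUS_URL = "https://status.langdock.com"
POLL_INTERVAL_SECONDS = 5 * 60

# Words that typically appear on a status page during an active incident.
INCIDENT_KEYWORDS = ("investigating", "degraded", "outage", "unavailable")


def is_operational(page_text: str) -> bool:
    """Heuristic: the page looks healthy unless it mentions an incident keyword."""
    lowered = page_text.lower()
    return not any(word in lowered for word in INCIDENT_KEYWORDS)


def fetch_status(url: str = STATUS_URL) -> str:
    """Download the status page body as text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def monitor(alert, polls: int) -> None:
    """Poll `polls` times, calling `alert(message)` when the page looks unhealthy."""
    for _ in range(polls):
        try:
            if not is_operational(fetch_status()):
                alert("status page reports an issue")
        except OSError as exc:
            alert(f"status page unreachable: {exc}")
        time.sleep(POLL_INTERVAL_SECONDS)
```

For example, `monitor(print, polls=12)` would watch the page for about an hour and print an alert line whenever the heuristic trips.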