Lokalise incident

Lokalise application is unavailable

Critical · Resolved

Lokalise experienced a critical incident on January 26, 2026, affecting the Lokalise API and Lokalise App, lasting 40m. The incident has been resolved; the full update timeline is below.

Started
Jan 26, 2026, 03:52 PM UTC
Resolved
Jan 26, 2026, 04:32 PM UTC
Duration
40m
Detected by Pingoru
Jan 26, 2026, 03:52 PM UTC

Affected components

Lokalise API, Lokalise App

Update timeline

  1. investigating Jan 26, 2026, 03:56 PM UTC

    We are currently investigating this issue.

  2. investigating Jan 26, 2026, 04:06 PM UTC

    We are continuing to investigate this issue.

  3. resolved Jan 26, 2026, 04:32 PM UTC

    This incident has been resolved.

  4. postmortem Feb 02, 2026, 04:59 PM UTC

    On January 26, 2026, Lokalise experienced a service outage and performance degradation between 15:52 UTC and 16:32 UTC. During this time, the Lokalise application was unavailable, and users of the API experienced intermittent connectivity and high latency.

    **What happened?**

    **The cause:** The incident was triggered during a migration of our monitoring systems. A configuration mismatch caused a high volume of internal network requests to fail, which subsequently overwhelmed our internal DNS services. This prevented various parts of our infrastructure from communicating with one another, including our primary databases and application services.

    **The fix:** Our engineering team identified the source of the traffic and disabled the legacy monitoring configuration. We also rotated affected infrastructure nodes and adjusted our service scaling to alleviate pressure on our databases. These actions restored normal communication between our services, bringing the platform back to full operational status.

    **Timeline (UTC):**

    * **15:52:** Service degradation and unavailability detected; investigation initiated.
    * **16:24:** Root cause identified as internal network congestion impacting service connectivity.
    * **16:27:** Corrective actions implemented; service begins to stabilize.
    * **16:32:** Full service restored; performance monitored for stability.

    **What we are doing to prevent this in the future**

    * **Enhancing scaling capabilities:** We are upgrading our internal DNS services to scale horizontally, ensuring they can handle unexpected spikes in traffic without impacting the wider platform.
    * **Improving resource monitoring:** We are implementing additional alerting for resource exhaustion to identify and mitigate infrastructure bottlenecks before they impact service availability.
    * **Refining deployment procedures:** We are updating our internal documentation and validation steps to ensure infrastructure dependencies are strictly coordinated during system migrations.

    We sincerely apologize for the disruption this incident caused to your workflow. We understand how critical Lokalise is to your operations and are committed to improving our system's resilience to prevent a recurrence. If you have any questions or require further assistance, please contact us at [[email protected]](mailto:[email protected]).
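A note on client-side resilience: during the outage window, API callers saw intermittent connectivity rather than a hard outage, which retries with exponential backoff and jitter can often ride out. The sketch below is a generic illustration, not an official Lokalise recommendation; the `request_with_backoff` helper and its parameters are hypothetical names, and the defaults are assumptions.

```python
import time
import random

def request_with_backoff(call, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky zero-argument network call with exponential backoff.

    `call` should raise on transient failure (e.g. a timeout or a 5xx
    response from an HTTP API). `sleep` is injectable for testing.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the error to the caller
            # Exponential backoff (0.5s, 1s, 2s, ...) plus a little jitter
            # so many clients do not retry in lockstep.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Wrapping API requests this way would mask brief failure bursts during an incident like this one, while still surfacing a sustained outage after the retry budget is spent.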