- Detected by Pingoru
- May 02, 2026, 12:55 PM UTC
- Resolved
- May 02, 2026, 07:15 PM UTC
- Duration
- 6h 20m
Affected: Experiment results processing, Warehouse native experimentation
Timeline · 4 updates
-
investigating May 02, 2026, 12:55 PM UTC
We are investigating an issue causing delayed experimentation results processing and delayed experimentation data warehouse export.
-
identified May 02, 2026, 01:17 PM UTC
We've identified the root cause of the experimentation results processing and data export delays and are working on implementing a fix.
-
monitoring May 02, 2026, 01:51 PM UTC
We have implemented a fix and are catching up on backlogged experimentation results and data export. We estimate that we will be fully caught up in about 8 hours.
-
resolved May 02, 2026, 07:15 PM UTC
Experimentation results processing and data warehouse export were caught up as of 12:28 PM PT.
Read the full incident report →
- Detected by Pingoru
- May 01, 2026, 04:06 AM UTC
- Resolved
- May 01, 2026, 02:52 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved May 01, 2026, 04:06 AM UTC
We've identified elevated error rates with server-side streaming initialization requests in the APAC region between 19:52 and 19:56 PT. Some customers may have experienced initialization timeouts despite SDK retries.
Read the full incident report →
- Detected by Pingoru
- Apr 30, 2026, 11:14 AM UTC
- Resolved
- Apr 30, 2026, 11:58 AM UTC
- Duration
- 43m
Affected: Data Export
Timeline · 3 updates
-
investigating Apr 30, 2026, 11:14 AM UTC
We are investigating a delay with streaming data export delivery. No data is lost, but customers using streaming data export may notice a multi-minute lag in receiving data.
-
monitoring Apr 30, 2026, 11:45 AM UTC
We've identified latency in Pub/Sub export that was causing streaming data export delays; the latency has since resolved. We're catching up on the export backlog now.
-
resolved Apr 30, 2026, 11:58 AM UTC
We've caught up on the streaming data export backlog.
Read the full incident report →
- Detected by Pingoru
- Apr 30, 2026, 12:30 AM UTC
- Resolved
- Apr 29, 2026, 11:45 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 30, 2026, 12:30 AM UTC
We've identified an infrastructure issue that caused some 503 responses in feature flagging streaming client initialization on the US LaunchDarkly instance between 16:45 and 17:05 PT. SDKs should automatically retry initialization in most cases.
Read the full incident report →
- Detected by Pingoru
- Apr 29, 2026, 08:19 PM UTC
- Resolved
- May 01, 2026, 12:09 AM UTC
- Duration
- 1d 3h
Affected: Experiment results processing, Experiment results analysis
Timeline · 4 updates
-
investigating Apr 29, 2026, 08:19 PM UTC
We are aware of an issue affecting some customers that may result in sample ratio mismatches (SRMs) or empty experiment results if flag variations are edited between experiment iterations.
-
identified Apr 29, 2026, 08:49 PM UTC
We've identified the issue and are implementing a fix for this behavior.
-
monitoring Apr 30, 2026, 12:04 AM UTC
We've released a fix to prevent this issue in new experiments and iterations. We're working to resolve the issue for recently processed experiment data.
-
resolved May 01, 2026, 12:09 AM UTC
We have validated a fix to resolve all remaining experiments with inaccurate sample ratio mismatches (SRMs) or empty experiment results. The vast majority of experiments and experimentation customers were not impacted.
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 06:34 PM UTC
- Resolved
- Apr 28, 2026, 07:25 PM UTC
- Duration
- 51m
Affected: Emails and notifications
Timeline · 4 updates
-
investigating Apr 28, 2026, 06:34 PM UTC
We are currently investigating an issue preventing Observability alert notifications from being delivered.
-
identified Apr 28, 2026, 06:38 PM UTC
We've identified the root cause preventing Observability alert notifications and are deploying a fix. All other notification types are operational.
-
monitoring Apr 28, 2026, 07:10 PM UTC
A fix has been deployed for Observability alerting and we are working to redeliver delayed notifications.
-
resolved Apr 28, 2026, 07:25 PM UTC
Observability alerts are functioning and delayed alerts have been redelivered.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 03:39 PM UTC
- Resolved
- Apr 27, 2026, 05:37 PM UTC
- Duration
- 1h 58m
Affected: Metrics
Timeline · 4 updates
-
investigating Apr 27, 2026, 03:39 PM UTC
We are investigating an issue causing under-reporting of customer metrics in the EU region.
-
identified Apr 27, 2026, 03:45 PM UTC
We've identified that metrics ingest was delayed between 11:11 and 11:27 AM ET. No metric data has been lost. We're working on catching up on the delayed data.
-
monitoring Apr 27, 2026, 04:43 PM UTC
We're catching up on backlogged metric data in EU and are monitoring progress. No metric data has been lost.
-
resolved Apr 27, 2026, 05:37 PM UTC
We've fully caught up on the metrics data backlog.
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 07:00 PM UTC
- Resolved
- Apr 23, 2026, 07:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 29, 2026, 10:22 AM UTC
We're addressing an issue that prevented warehouse data export from 12pm to 6pm PT on April 23 and are working to backfill that data to customer warehouses.
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 10:04 PM UTC
- Resolved
- Apr 20, 2026, 10:27 PM UTC
- Duration
- 22m
Affected: Docs (launchdarkly.com/docs)
Timeline · 4 updates
-
investigating Apr 20, 2026, 10:04 PM UTC
We are currently investigating this issue.
-
investigating Apr 20, 2026, 10:13 PM UTC
We have linked this issue to an outage on our docs provider's side.
-
identified Apr 20, 2026, 10:13 PM UTC
The issue has been identified and a fix is being implemented.
-
resolved Apr 20, 2026, 10:27 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 09:27 PM UTC
- Resolved
- Apr 20, 2026, 11:44 PM UTC
- Duration
- 2h 17m
Affected: Emails and notifications
Timeline · 3 updates
-
investigating Apr 20, 2026, 09:27 PM UTC
We are currently investigating an issue where observability alerts are not delivering for Slack notifications to all channels. DM alerts are still functional.
-
identified Apr 20, 2026, 10:01 PM UTC
The issue has been identified and a fix is being implemented.
-
resolved Apr 20, 2026, 11:44 PM UTC
The issue has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 10, 2026, 04:30 AM UTC
- Resolved
- Apr 10, 2026, 04:30 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 10, 2026, 05:41 AM UTC
Between approximately 9:37 PM and 9:46 PM PT on April 9, 2026, some customers using server-side SDKs in the EU region may have experienced longer than normal initialization times or timeouts when connecting to LaunchDarkly. The issue has since resolved. No action is required from customers; server-side SDKs automatically reconnect and recover from transient connectivity issues.
Read the full incident report →
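The update above notes that server-side SDKs recover from transient initialization failures automatically, which is a standard retry-with-exponential-backoff pattern. Below is a minimal, hypothetical sketch of that pattern in Python; this is not LaunchDarkly SDK code, and `init_with_retries` and `flaky_connect` are illustrative names only.

```python
import time

def init_with_retries(connect, attempts=4, base_delay=0.5):
    """Call `connect` up to `attempts` times with exponential backoff.

    `connect` is any callable that raises on a transient failure
    (e.g. a timeout or a 503 response) and returns a client on success.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Illustrative flaky connection: fails twice, then succeeds.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient initialization timeout")
    return "client"

client = init_with_retries(flaky_connect, base_delay=0)  # base_delay=0 keeps the demo fast
```

Real SDKs typically add jitter to the delay and cap it, so that many clients reconnecting after a brief outage (like the 9-minute window above) do not retry in lockstep.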
- Detected by Pingoru
- Apr 06, 2026, 05:58 PM UTC
- Resolved
- Apr 06, 2026, 07:02 PM UTC
- Duration
- 1h 3m
Affected: Observability Data Ingest (Sessions, Errors)
Timeline · 4 updates
-
identified Apr 06, 2026, 05:58 PM UTC
We are currently experiencing a delay in processing Session Replay and Error event data. Incoming data is being queued and will be processed — no data has been lost. Customers may observe a temporary lag in session and error data appearing in the LaunchDarkly UI. We have identified the root cause as elevated database utilization and are actively working to resolve the backlog. We will provide updates as processing returns to normal.
-
identified Apr 06, 2026, 06:15 PM UTC
We have identified the cause of the delayed Session Replay and Error event processing and a fix is in progress. The backlog is actively decreasing. No data has been lost — all queued events will be processed.
-
monitoring Apr 06, 2026, 06:40 PM UTC
The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. Thank you for your patience.
-
resolved Apr 06, 2026, 07:02 PM UTC
The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. Thank you for your patience.
Read the full incident report →
- Detected by Pingoru
- Mar 19, 2026, 04:58 PM UTC
- Resolved
- Mar 19, 2026, 04:58 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 19, 2026, 04:58 PM UTC
Server-side streaming began rejecting new connections across all commercial regions, causing 500/503 errors for customers attempting to establish new SDK streaming connections for a brief period of time. Detailed timelines of the incident:
- us-east-1: 7:53 AM PST to 8:24 AM PST
- ap-southeast-1: 7:53 AM PST to 7:55 AM PST
- eu-west-1: 7:53 AM PST to 8:35 AM PST
The team was able to deploy the mitigation quickly, before an incident could be declared on our status page, which is why we are posting this retroactively. By 8:35 AM PST connections were being re-established successfully in all commercial regions.
Read the full incident report →
- Detected by Pingoru
- Mar 18, 2026, 02:07 AM UTC
- Resolved
- Mar 20, 2026, 12:29 AM UTC
- Duration
- 1d 22h
Affected: Past experiment iterations
Timeline · 6 updates
-
investigating Mar 18, 2026, 02:07 AM UTC
We are aware of an issue affecting experiment reporting data for a subset of accounts. Affected users may see incomplete data for some experiment iterations in the UI.
-
identified Mar 18, 2026, 02:08 AM UTC
The issue has been identified and a fix is being implemented.
-
identified Mar 18, 2026, 04:17 AM UTC
We have implemented a mitigation that restores data availability for all current experiment iterations. Affected accounts should now see up-to-date reporting and exposure data in the UI. Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.
-
identified Mar 18, 2026, 10:37 PM UTC
Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.
-
identified Mar 19, 2026, 04:52 AM UTC
Our team continues to work on restoring data for previously affected experiment iterations. Complete data restoration expected by tomorrow.
-
resolved Mar 20, 2026, 12:29 AM UTC
Our team completed the data restoration for previously affected experiment iterations. Complete data is now available for all experiment iterations in the UI.
Read the full incident report →
- Detected by Pingoru
- Mar 10, 2026, 10:59 PM UTC
- Resolved
- Mar 11, 2026, 04:07 AM UTC
- Duration
- 5h 7m
Affected: Experiment results processing, Warehouse native experimentation
Timeline · 4 updates
-
investigating Mar 10, 2026, 10:59 PM UTC
Event processing is currently delayed, and several product areas will show stale data, including:
- Autogenerated metric creation
- Data Export
- Experimentation
No data has been lost.
-
monitoring Mar 10, 2026, 11:38 PM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Mar 10, 2026, 11:42 PM UTC
We are continuing to monitor for any further issues.
-
resolved Mar 11, 2026, 04:07 AM UTC
The issue with event processing has been resolved, and impacted services have returned to normal operation. Flag Delivery was not impacted, and no data was lost.
Read the full incident report →
- Detected by Pingoru
- Mar 03, 2026, 08:40 PM UTC
- Resolved
- Mar 04, 2026, 01:13 AM UTC
- Duration
- 4h 32m
Timeline · 3 updates
-
investigating Mar 03, 2026, 08:40 PM UTC
We're investigating reports of inaccurate usage reporting for some Observability products.
-
identified Mar 03, 2026, 09:15 PM UTC
We've identified the root cause affecting the Observability "Errors" usage for some customers and are working to correct it.
-
resolved Mar 04, 2026, 01:13 AM UTC
We've corrected the issues affecting the Observability "Errors" usage reporting for February and are finalizing reporting corrections for March.
Read the full incident report →
- Detected by Pingoru
- Feb 26, 2026, 03:43 PM UTC
- Resolved
- Feb 26, 2026, 05:06 PM UTC
- Duration
- 1h 22m
Affected: Authentication, Web app (app.launchdarkly.com)
Timeline · 5 updates
-
investigating Feb 26, 2026, 03:43 PM UTC
Some customers are experiencing issues with accessing the web app and authentication. We are investigating and will provide updates as they become available.
-
identified Feb 26, 2026, 04:17 PM UTC
Some customers are experiencing issues with accessing the web app and authentication. Some customers may see a low number of errors with flag evaluation, as well, but generally our Flag Delivery Network is functional. We have identified the issue and are continuing our work to resolve it.
-
monitoring Feb 26, 2026, 04:42 PM UTC
The issue with our application and authentication has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.
-
monitoring Feb 26, 2026, 05:04 PM UTC
The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.
-
resolved Feb 26, 2026, 05:06 PM UTC
The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.
Read the full incident report →
- Detected by Pingoru
- Feb 20, 2026, 10:02 PM UTC
- Resolved
- Feb 20, 2026, 11:04 PM UTC
- Duration
- 1h 1m
Affected: OpenTelemetry (Logs, Traces, Metrics)
Timeline · 4 updates
-
investigating Feb 20, 2026, 10:02 PM UTC
Beginning around 1:25 PM PST, we are investigating degraded ingest performance. Some customers may experience delays or gaps in observability data. Updates to follow.
-
identified Feb 20, 2026, 10:12 PM UTC
We have identified the cause of the degraded ingest performance and are applying mitigation. We are seeing signs of stabilization and continuing to monitor recovery.
-
monitoring Feb 20, 2026, 10:13 PM UTC
Mitigation has been applied and ingest performance has stabilized. We are monitoring to ensure continued stability.
-
resolved Feb 20, 2026, 11:04 PM UTC
Ingest performance has remained stable following mitigation. We are no longer observing impact.
Read the full incident report →
- Detected by Pingoru
- Feb 19, 2026, 03:39 PM UTC
- Resolved
- Feb 19, 2026, 04:09 PM UTC
- Duration
- 29m
Affected: Observability Data Ingest (Sessions, Errors)
Timeline · 3 updates
-
investigating Feb 19, 2026, 03:39 PM UTC
Sessions and errors may be delayed by up to 1 hour. We are investigating the root cause. No data has been lost.
-
identified Feb 19, 2026, 03:54 PM UTC
We've identified the root cause and have deployed a fix. We're catching up on the session/error data backlog and should be caught up in ~15 minutes. No data has been lost.
-
resolved Feb 19, 2026, 04:09 PM UTC
We've caught up on the session/error data backlog.
Read the full incident report →
- Detected by Pingoru
- Feb 12, 2026, 11:30 PM UTC
- Resolved
- Feb 13, 2026, 03:39 AM UTC
- Duration
- 4h 8m
Affected: Experiment results analysis
Timeline · 4 updates
-
investigating Feb 13, 2026, 01:31 AM UTC
We are investigating an issue where certain experiment iterations are receiving incorrectly attributed data.
-
identified Feb 13, 2026, 01:32 AM UTC
The team has identified the root cause and is working on a fix.
-
identified Feb 13, 2026, 02:38 AM UTC
We have fixed the data attribution issue for any newly created experiments, so new experiment iterations will no longer receive incorrectly attributed data. We are now fixing the attribution error for active experiments.
-
resolved Feb 13, 2026, 03:39 AM UTC
We have fixed the data attribution issues where running experiment iterations were receiving incorrectly attributed data. This incident is now resolved and results for active experiments are now accurate.
Read the full incident report →
- Detected by Pingoru
- Feb 11, 2026, 01:03 PM UTC
- Resolved
- Feb 12, 2026, 03:53 AM UTC
- Duration
- 14h 50m
Affected: OpenTelemetry (Logs, Traces, Metrics)
Timeline · 2 updates
-
investigating Feb 12, 2026, 03:50 AM UTC
We are currently investigating an issue affecting OTel telemetry ingestion in LaunchDarkly Observability. Some telemetry data may not be processed as expected. Our team is actively working to identify the root cause and mitigate impact.
-
resolved Feb 12, 2026, 03:53 AM UTC
This incident has been resolved. Telemetry data was dropped between 5:03 PM and 7:30 PM PT. All systems are now operating normally.
Read the full incident report →
- Detected by Pingoru
- Feb 10, 2026, 07:30 AM UTC
- Resolved
- Feb 10, 2026, 09:01 AM UTC
- Duration
- 1h 30m
Affected: Observability Data Ingest (Sessions, Errors), OpenTelemetry (Logs, Traces, Metrics)
Timeline · 2 updates
-
investigating Feb 10, 2026, 08:42 AM UTC
Some customers using LaunchDarkly Observability with AWS CloudWatch Metric Stream and/or CloudWatch Firehose log export may experience delayed ingestion of metrics/logs into Observability. The impact is limited to Observability data ingestion and related dashboards/alerts.
-
resolved Feb 10, 2026, 09:01 AM UTC
This issue has been resolved. Observability ingestion from AWS CloudWatch Metric Streams and CloudWatch Firehose log export has returned to normal. Data sent during the incident window may not have been ingested, and customers may see gaps in metrics or logs for that period.
Read the full incident report →
- Detected by Pingoru
- Feb 06, 2026, 11:55 AM UTC
- Resolved
- Feb 06, 2026, 12:44 PM UTC
- Duration
- 49m
Affected: Web app (app.launchdarkly.com)
Timeline · 2 updates
-
investigating Feb 06, 2026, 11:55 AM UTC
We are investigating an issue where a small number of customers may experience errors when accessing the LaunchDarkly UI and API. Feature flag delivery is not impacted. We will provide updates as the investigation continues.
-
resolved Feb 06, 2026, 12:44 PM UTC
Customers are seeing expected behavior when accessing the LaunchDarkly UI and API.
Read the full incident report →