LaunchDarkly Outage History

LaunchDarkly is up right now

There have been 23 LaunchDarkly outages since February 6, 2026, totaling 121h 42m of downtime. Each incident is summarised below with its timeline, duration, and resolution details.

Source: https://status.launchdarkly.com
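Each duration below is simply the resolved timestamp minus the detected timestamp. A minimal sketch of that arithmetic (the `duration` helper is illustrative, not part of any API):

```python
from datetime import datetime, timezone

FMT = "%b %d, %Y, %I:%M %p"  # e.g. "May 02, 2026, 12:55 PM"

def duration(detected: str, resolved: str) -> str:
    """Return downtime as 'Xh Ym' from two UTC timestamps."""
    start = datetime.strptime(detected, FMT).replace(tzinfo=timezone.utc)
    end = datetime.strptime(resolved, FMT).replace(tzinfo=timezone.utc)
    minutes = int((end - start).total_seconds() // 60)
    return f"{minutes // 60}h {minutes % 60}m"

# First incident below: detected 12:55 PM UTC, resolved 07:15 PM UTC
print(duration("May 02, 2026, 12:55 PM", "May 02, 2026, 07:15 PM"))  # 6h 20m
```

Published durations may differ from this by a minute, since the status page records seconds that the displayed timestamps round away.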

Minor May 2, 2026

Delay in Experimentation Results and Experimentation Warehouse Data Export

Detected by Pingoru
May 02, 2026, 12:55 PM UTC
Resolved
May 02, 2026, 07:15 PM UTC
Duration
6h 20m
Affected: Experiment results processing, Warehouse native experimentation
Timeline · 4 updates
  1. investigating May 02, 2026, 12:55 PM UTC

    We are investigating an issue causing delayed experimentation results processing and delayed experimentation data warehouse export.

  2. identified May 02, 2026, 01:17 PM UTC

    We've identified the root cause of the experimentation results processing and data export delays and are working on implementing a fix.

  3. monitoring May 02, 2026, 01:51 PM UTC

    We have implemented a fix and are catching up on backlogged experimentation results and data export. We estimate that we will be fully caught up in about 8 hours.

  4. resolved May 02, 2026, 07:15 PM UTC

    Experimentation results processing and data warehouse export were caught up as of 12:28 PM PT.

Read the full incident report →

Notice May 1, 2026

Elevated Server SDK Initialization Errors in APAC region

Detected by Pingoru
May 01, 2026, 04:06 AM UTC
Resolved
May 01, 2026, 02:52 AM UTC
Duration
Timeline · 1 update
  1. resolved May 01, 2026, 04:06 AM UTC

    We've identified elevated error rates with server-side streaming initialization requests in the APAC region between 19:52 and 19:56 PT. Some customers may have experienced initialization timeouts despite SDK retries.

Read the full incident report →
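The failure mode above — initialization timeouts despite SDK retries — is typically handled by bounding how long the application waits for the client before serving flag defaults. A minimal sketch of that pattern, using a toy `FlagClient` stand-in rather than the real LaunchDarkly SDK:

```python
import threading
import time

class FlagClient:
    """Toy stand-in for a server-side SDK client (hypothetical; not the
    LaunchDarkly API). Initialization happens on a background thread."""

    def __init__(self, init_delay: float):
        self._ready = threading.Event()
        threading.Thread(target=self._init, args=(init_delay,), daemon=True).start()

    def _init(self, delay: float) -> None:
        time.sleep(delay)   # stands in for the streaming handshake
        self._ready.set()

    def variation(self, flag: str, default: bool, timeout: float) -> bool:
        # Bound the wait: if initialization hasn't finished within the
        # timeout, serve the default rather than blocking the request path.
        if not self._ready.wait(timeout):
            return default
        return True  # pretend the flag evaluates to True once initialized

fast = FlagClient(init_delay=0.01)
slow = FlagClient(init_delay=5.0)
print(fast.variation("new-checkout", default=False, timeout=1.0))   # True
print(slow.variation("new-checkout", default=False, timeout=0.05))  # False
```

The design choice is that a slow control plane degrades to known defaults instead of failing requests, which is why a four-minute window like this one mostly shows up as timeouts rather than errors.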

Minor April 30, 2026

Streaming Data Export Delay

Detected by Pingoru
Apr 30, 2026, 11:14 AM UTC
Resolved
Apr 30, 2026, 11:58 AM UTC
Duration
43m
Affected: Data Export
Timeline · 3 updates
  1. investigating Apr 30, 2026, 11:14 AM UTC

    We are investigating a delay with streaming data export delivery. No data is lost, but customers using streaming data export may notice a multi-minute lag in receiving data.

  2. monitoring Apr 30, 2026, 11:45 AM UTC

    We've identified latency in Pub/Sub export that was causing streaming data export delays; it has since resolved. We're catching up on the export backlog now.

  3. resolved Apr 30, 2026, 11:58 AM UTC

    We've caught up on the streaming data export backlog.

Read the full incident report →

Notice April 30, 2026

Errors on streaming SDK initialization

Detected by Pingoru
Apr 30, 2026, 12:30 AM UTC
Resolved
Apr 29, 2026, 11:45 PM UTC
Duration
Timeline · 1 update
  1. resolved Apr 30, 2026, 12:30 AM UTC

    We've identified an infrastructure issue that caused some 503 responses in feature flagging streaming client initialization on the US LaunchDarkly instance between 16:45 and 17:05 PT. SDKs should automatically retry initialization in most cases.

Read the full incident report →
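When streaming initialization returns transient 503s as above, SDK retries are commonly spaced with capped exponential backoff plus jitter so that reconnecting clients don't stampede the recovering service. A minimal sketch of such a schedule (a common pattern, not LaunchDarkly's documented retry policy):

```python
import random

def backoff_schedule(attempts: int, base: float = 1.0, cap: float = 30.0,
                     seed: int = 0) -> list:
    """Delays (seconds) before each reconnect attempt: exponential growth
    with full jitter, capped. Parameters here are illustrative."""
    rng = random.Random(seed)  # seeded only to make the sketch reproducible
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * 2 ** attempt)   # 1, 2, 4, 8, ... up to cap
        delays.append(rng.uniform(0, ceiling))    # full jitter within ceiling
    return delays

print(backoff_schedule(5))
```

Full jitter spreads retries across the whole window, which is what lets a 20-minute incident like this one resolve itself for most clients "automatically".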

Minor April 29, 2026

Issue with Experiment Results in case of Flag Variation edits

Detected by Pingoru
Apr 29, 2026, 08:19 PM UTC
Resolved
May 01, 2026, 12:09 AM UTC
Duration
1d 3h
Affected: Experiment results processing, Experiment results analysis
Timeline · 4 updates
  1. investigating Apr 29, 2026, 08:19 PM UTC

    We are aware of an issue affecting some customers that may result in sample ratio mismatches (SRMs) or empty experiment results if flag variations are edited between experiment iterations.

  2. identified Apr 29, 2026, 08:49 PM UTC

    We've identified the issue and are implementing a fix for this behavior.

  3. monitoring Apr 30, 2026, 12:04 AM UTC

    We've released a fix to prevent this issue in new experiments and iterations. We're working to resolve this issue for experiment data recently processed.

  4. resolved May 01, 2026, 12:09 AM UTC

    We have validated a fix to resolve all remaining experiments with inaccurate sample ratio mismatches (SRMs) or empty experiment results. The vast majority of experiments and experimentation customers were not impacted.

Read the full incident report →
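A sample ratio mismatch (SRM) like the one in this incident can be flagged with a chi-squared goodness-of-fit test comparing observed assignment counts against the configured traffic split. A minimal sketch (the p = 0.001 threshold is a conventional choice for SRM detection, not LaunchDarkly's stated method):

```python
def chi_squared(observed, expected_ratio):
    """Goodness-of-fit statistic comparing observed assignment counts
    to the experiment's configured traffic split."""
    total = sum(observed)
    expected = [total * r for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 50/50 experiment: 10,000 vs 10,000 is fine; 10,000 vs 9,000 is suspect.
# 10.828 is the chi-squared critical value at p = 0.001 for df = 1.
ok = chi_squared([10_000, 10_000], [0.5, 0.5])
bad = chi_squared([10_000, 9_000], [0.5, 0.5])
print(ok < 10.828, bad > 10.828)  # True True
```

Editing flag variations between iterations, as described above, can shift which events count toward which variation, producing exactly this kind of lopsided count.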

Minor April 28, 2026

Issues with Observability Alert Notification

Detected by Pingoru
Apr 28, 2026, 06:34 PM UTC
Resolved
Apr 28, 2026, 07:25 PM UTC
Duration
51m
Affected: Emails and notifications
Timeline · 4 updates
  1. investigating Apr 28, 2026, 06:34 PM UTC

    We are currently investigating an issue preventing Observability alert notifications from being delivered.

  2. identified Apr 28, 2026, 06:38 PM UTC

    We've identified the root cause preventing Observability alert notifications and are deploying a fix. All other notification types are operational.

  3. monitoring Apr 28, 2026, 07:10 PM UTC

    A fix has been deployed for Observability alerting and we are working to redeliver delayed notifications.

  4. resolved Apr 28, 2026, 07:25 PM UTC

    Observability alerts are functioning and delayed alerts have been redelivered.

Read the full incident report →

Minor April 27, 2026

Delayed Metrics ingest in EU

Detected by Pingoru
Apr 27, 2026, 03:39 PM UTC
Resolved
Apr 27, 2026, 05:37 PM UTC
Duration
1h 58m
Affected: Metrics
Timeline · 4 updates
  1. investigating Apr 27, 2026, 03:39 PM UTC

    We are investigating an issue causing under-reporting of customer metrics in the EU region.

  2. identified Apr 27, 2026, 03:45 PM UTC

    We've identified that metrics ingest was delayed between 11:11 - 11:27am ET. No metric data has been lost. We're working on catching up on the delayed data.

  3. monitoring Apr 27, 2026, 04:43 PM UTC

    We're catching up on backlogged metric data in EU and are monitoring progress. No metric data has been lost.

  4. resolved Apr 27, 2026, 05:37 PM UTC

    We've fully caught up on the metrics data backlog.

Read the full incident report →

Minor April 23, 2026

Issue with Warehouse Data Export

Detected by Pingoru
Apr 23, 2026, 07:00 PM UTC
Resolved
Apr 23, 2026, 07:00 PM UTC
Duration
Timeline · 1 update
  1. resolved Apr 29, 2026, 10:22 AM UTC

    We're addressing an issue that prevented warehouse data export from 12pm to 6pm PT on April 23 and are working to backfill that data to customer warehouses.

Read the full incident report →

Critical April 20, 2026

Docs site is down

Detected by Pingoru
Apr 20, 2026, 10:04 PM UTC
Resolved
Apr 20, 2026, 10:27 PM UTC
Duration
22m
Affected: Docs (launchdarkly.com/docs)
Timeline · 4 updates
  1. investigating Apr 20, 2026, 10:04 PM UTC

    We are currently investigating this issue.

  2. investigating Apr 20, 2026, 10:13 PM UTC

    We have linked this issue to an outage on our docs provider's side.

  3. identified Apr 20, 2026, 10:13 PM UTC

    The issue has been identified and a fix is being implemented.

  4. resolved Apr 20, 2026, 10:27 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor April 20, 2026

Observability Alerts are not Delivering Slack Notifications

Detected by Pingoru
Apr 20, 2026, 09:27 PM UTC
Resolved
Apr 20, 2026, 11:44 PM UTC
Duration
2h 17m
Affected: Emails and notifications
Timeline · 3 updates
  1. investigating Apr 20, 2026, 09:27 PM UTC

    We are currently investigating an issue where observability alerts are not delivering for Slack notifications to all channels. DM alerts are still functional.

  2. identified Apr 20, 2026, 10:01 PM UTC

    The issue has been identified and a fix is being implemented.

  3. resolved Apr 20, 2026, 11:44 PM UTC

    The issue has been resolved.

Read the full incident report →

Notice April 10, 2026

Degraded connectivity for server-side SDKs in EU region

Detected by Pingoru
Apr 10, 2026, 04:30 AM UTC
Resolved
Apr 10, 2026, 04:30 AM UTC
Duration
Timeline · 1 update
  1. resolved Apr 10, 2026, 05:41 AM UTC

    Between approximately 9:37 PM and 9:46 PM PT on April 9, 2026, some customers using server-side SDKs in the EU region may have experienced longer than normal initialization times or timeouts when connecting to LaunchDarkly. The issue has since resolved and no action is required from customers: server-side SDKs automatically reconnect and recover from transient connectivity issues.

Read the full incident report →

Minor April 6, 2026

Delayed processing of Session Replay and Error data

Detected by Pingoru
Apr 06, 2026, 05:58 PM UTC
Resolved
Apr 06, 2026, 07:02 PM UTC
Duration
1h 3m
Affected: Observability Data Ingest (Sessions, Errors)
Timeline · 4 updates
  1. identified Apr 06, 2026, 05:58 PM UTC

    We are currently experiencing a delay in processing Session Replay and Error event data. Incoming data is being queued and will be processed — no data has been lost. Customers may observe a temporary lag in session and error data appearing in the LaunchDarkly UI. We have identified the root cause as elevated database utilization and are actively working to resolve the backlog. We will provide updates as processing returns to normal.

  2. identified Apr 06, 2026, 06:15 PM UTC

    We have identified the cause of the delayed Session Replay and Error event processing and a fix is in progress. The backlog is actively decreasing. No data has been lost — all queued events will be processed.

  3. monitoring Apr 06, 2026, 06:40 PM UTC

    The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. Thank you for your patience.

  4. resolved Apr 06, 2026, 07:02 PM UTC

    The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. Thank you for your patience.

Read the full incident report →

Major March 19, 2026

Server-side streaming rejected new connections across all commercial regions

Detected by Pingoru
Mar 19, 2026, 04:58 PM UTC
Resolved
Mar 19, 2026, 04:58 PM UTC
Duration
Timeline · 1 update
  1. resolved Mar 19, 2026, 04:58 PM UTC

    Server-side streaming began rejecting new connections across all commercial regions, causing 500/503 errors for customers attempting to establish new SDK streaming connections for a brief period of time. Detailed timelines of the incident:
    - us-east-1: 7:53 AM PST to 8:24 AM PST
    - ap-southeast-1: 7:53 AM PST to 7:55 AM PST
    - eu-west-1: 7:53 AM PST to 8:35 AM PST
    The team deployed the mitigation quickly, before an incident could be declared on our status page, which is why we are posting this retroactively. By 8:35 AM PST connections were re-established successfully in all commercial regions.

Read the full incident report →

Minor March 18, 2026

Experiment Data Missing for Some Experiment Iterations

Detected by Pingoru
Mar 18, 2026, 02:07 AM UTC
Resolved
Mar 20, 2026, 12:29 AM UTC
Duration
1d 22h
Affected: Past experiment iterations
Timeline · 6 updates
  1. investigating Mar 18, 2026, 02:07 AM UTC

    We are aware of an issue affecting experiment reporting data for a subset of accounts. Affected users may see incomplete data for some experiment iterations in the UI.

  2. identified Mar 18, 2026, 02:08 AM UTC

    The issue has been identified and a fix is being implemented.

  3. identified Mar 18, 2026, 04:17 AM UTC

    We have implemented a mitigation that restores data availability for all current experiment iterations. Affected accounts should now see up-to-date reporting and exposure data in the UI. Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.

  4. identified Mar 18, 2026, 10:37 PM UTC

    Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.

  5. identified Mar 19, 2026, 04:52 AM UTC

    Our team continues to work on restoring data for previously affected experiment iterations. Complete data restoration expected by tomorrow.

  6. resolved Mar 20, 2026, 12:29 AM UTC

    Our team completed the data restoration for previously affected experiment iterations. Complete data is now available for all experiment iterations in the UI.

Read the full incident report →

Minor March 10, 2026

Event processing delays

Detected by Pingoru
Mar 10, 2026, 10:59 PM UTC
Resolved
Mar 11, 2026, 04:07 AM UTC
Duration
5h 7m
Affected: Experiment results processing, Warehouse native experimentation
Timeline · 4 updates
  1. investigating Mar 10, 2026, 10:59 PM UTC

    Event processing is currently delayed and will show stale data for several product areas, including:
    - Autogenerated metric creation
    - Data Export
    - Experimentation
    No data has been lost.

  2. monitoring Mar 10, 2026, 11:38 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. monitoring Mar 10, 2026, 11:42 PM UTC

    We are continuing to monitor for any further issues.

  4. resolved Mar 11, 2026, 04:07 AM UTC

    The issue with Event processing has been resolved. Impacted services have returned to normal operation. Flag Delivery was not impacted. There was no data loss experienced.

Read the full incident report →

Notice March 3, 2026

Observability Usage Reporting

Detected by Pingoru
Mar 03, 2026, 08:40 PM UTC
Resolved
Mar 04, 2026, 01:13 AM UTC
Duration
4h 32m
Timeline · 3 updates
  1. investigating Mar 03, 2026, 08:40 PM UTC

    We're investigating reports of inaccurate usage reporting for some Observability products.

  2. identified Mar 03, 2026, 09:15 PM UTC

    We've identified the root cause affecting the Observability "Errors" usage for some customers and are working to correct it.

  3. resolved Mar 04, 2026, 01:13 AM UTC

    We've corrected the issues affecting the Observability "Errors" usage reporting for February and are finalizing reporting corrections for March.

Read the full incident report →

Major February 26, 2026

Investigating Issues with LaunchDarkly Application and Authentication

Detected by Pingoru
Feb 26, 2026, 03:43 PM UTC
Resolved
Feb 26, 2026, 05:06 PM UTC
Duration
1h 22m
Affected: Authentication, Web app (app.launchdarkly.com)
Timeline · 5 updates
  1. investigating Feb 26, 2026, 03:43 PM UTC

    Some customers are experiencing issues with accessing the web app and authentication. We are investigating and will provide updates as they become available.

  2. identified Feb 26, 2026, 04:17 PM UTC

    Some customers are experiencing issues with accessing the web app and authentication. Some customers may see a low number of errors with flag evaluation, as well, but generally our Flag Delivery Network is functional. We have identified the issue and are continuing our work to resolve it.

  3. monitoring Feb 26, 2026, 04:42 PM UTC

    The issue with our application and authentication has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.

  4. monitoring Feb 26, 2026, 05:04 PM UTC

    The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.

  5. resolved Feb 26, 2026, 05:06 PM UTC

    The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.

Read the full incident report →

Notice February 20, 2026

Degraded Observability Ingest

Detected by Pingoru
Feb 20, 2026, 10:02 PM UTC
Resolved
Feb 20, 2026, 11:04 PM UTC
Duration
1h 1m
Affected: OpenTelemetry (Logs, Traces, Metrics)
Timeline · 4 updates
  1. investigating Feb 20, 2026, 10:02 PM UTC

    Beginning around 1:25 PM PST, we are investigating degraded ingest performance. Some customers may experience delays or gaps in observability data. Updates to follow.

  2. identified Feb 20, 2026, 10:12 PM UTC

    We have identified the cause of the degraded ingest performance and are applying mitigation. We are seeing signs of stabilization and continuing to monitor recovery.

  3. monitoring Feb 20, 2026, 10:13 PM UTC

    Mitigation has been applied and ingest performance has stabilized. We are monitoring to ensure continued stability.

  4. resolved Feb 20, 2026, 11:04 PM UTC

    Ingest performance has remained stable following mitigation. We are no longer observing impact.

Read the full incident report →

Minor February 19, 2026

Observability Data Delay

Detected by Pingoru
Feb 19, 2026, 03:39 PM UTC
Resolved
Feb 19, 2026, 04:09 PM UTC
Duration
29m
Affected: Observability Data Ingest (Sessions, Errors)
Timeline · 3 updates
  1. investigating Feb 19, 2026, 03:39 PM UTC

    Sessions and errors may be delayed by up to 1 hour. We are investigating the root cause. No data has been lost.

  2. identified Feb 19, 2026, 03:54 PM UTC

    We've identified the root cause and have deployed a fix. We're catching up on the session/error data backlog and should be caught up in ~15 minutes. No data has been lost.

  3. resolved Feb 19, 2026, 04:09 PM UTC

    We've caught up on the session/error data backlog.

Read the full incident report →

Major February 12, 2026

Data attribution issues with Experimentation

Detected by Pingoru
Feb 12, 2026, 11:30 PM UTC
Resolved
Feb 13, 2026, 03:39 AM UTC
Duration
4h 8m
Affected: Experiment results analysis
Timeline · 4 updates
  1. investigating Feb 13, 2026, 01:31 AM UTC

    We are investigating an issue where certain experiment iterations are receiving incorrectly attributed data.

  2. identified Feb 13, 2026, 01:32 AM UTC

    The team has identified the root cause and is working on a fix.

  3. identified Feb 13, 2026, 02:38 AM UTC

    We have fixed the data attribution issue that caused certain experiment iterations to receive incorrectly attributed data for any newly created experiments. We are now fixing the attribution error for active experiments.

  4. resolved Feb 13, 2026, 03:39 AM UTC

    We have fixed the data attribution issues where running experiment iterations were receiving incorrectly attributed data. This incident is now resolved and results for active experiments are now accurate.

Read the full incident report →

Minor February 11, 2026

Observability – Degraded OTel Telemetry Processing

Detected by Pingoru
Feb 11, 2026, 01:03 PM UTC
Resolved
Feb 12, 2026, 03:53 AM UTC
Duration
14h 50m
Affected: OpenTelemetry (Logs, Traces, Metrics)
Timeline · 2 updates
  1. investigating Feb 12, 2026, 03:50 AM UTC

    We are currently investigating an issue affecting OTel telemetry ingestion in LaunchDarkly Observability. Some telemetry data may not be processed as expected. Our team is actively working to identify the root cause and mitigate impact.

  2. resolved Feb 12, 2026, 03:53 AM UTC

    This incident has been resolved. Telemetry data was dropped between 5:03 PM and 7:30 PM PT. All systems are now operating normally.

Read the full incident report →

Minor February 10, 2026

Degraded Observability for Customers Using AWS CloudWatch Metric Streams or Firehose Log Export

Detected by Pingoru
Feb 10, 2026, 07:30 AM UTC
Resolved
Feb 10, 2026, 09:01 AM UTC
Duration
1h 30m
Affected: Observability Data Ingest (Sessions, Errors), OpenTelemetry (Logs, Traces, Metrics)
Timeline · 2 updates
  1. investigating Feb 10, 2026, 08:42 AM UTC

    Some customers using LaunchDarkly Observability with AWS CloudWatch Metric Stream and/or CloudWatch Firehose log export may experience delayed ingestion of metrics/logs into Observability. The impact is limited to Observability data ingestion and related dashboards/alerts.

  2. resolved Feb 10, 2026, 09:01 AM UTC

    This issue has been resolved. Observability ingestion from AWS CloudWatch Metric Streams and CloudWatch Firehose log export has returned to normal. Data sent during the incident window may not have been ingested, and customers may see gaps in metrics or logs for that period.

Read the full incident report →

Minor February 6, 2026

Increased error rate in our UI and API endpoints

Detected by Pingoru
Feb 06, 2026, 11:55 AM UTC
Resolved
Feb 06, 2026, 12:44 PM UTC
Duration
49m
Affected: Web app (app.launchdarkly.com)
Timeline · 2 updates
  1. investigating Feb 06, 2026, 11:55 AM UTC

    We are investigating an issue where a small number of customers may experience errors when accessing the LaunchDarkly UI and API. Feature flag delivery is not impacted. We will provide updates as the investigation continues.

  2. resolved Feb 06, 2026, 12:44 PM UTC

    Customers are seeing expected behavior when accessing the LaunchDarkly UI and API.

Read the full incident report →

Looking to track LaunchDarkly downtime and outages?

Pingoru polls LaunchDarkly's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when LaunchDarkly reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track LaunchDarkly alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring LaunchDarkly for free

5 free monitors · No credit card required
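Status pages hosted on Atlassian Statuspage, including status.launchdarkly.com, expose a small JSON summary endpoint that a poller like the one described above can check. A minimal sketch that parses a representative payload (the sample data below is illustrative, not a real response):

```python
import json

# The Statuspage summary endpoint (e.g.
# https://status.launchdarkly.com/api/v2/status.json) returns a small
# JSON document; this sample mirrors its shape.
sample = json.dumps({
    "page": {"name": "LaunchDarkly", "updated_at": "2026-05-02T19:15:00Z"},
    "status": {"indicator": "minor", "description": "Partially Degraded Service"},
})

def is_healthy(payload: str) -> bool:
    """A monitor only needs the 'indicator' field: 'none' means healthy;
    'minor', 'major', and 'critical' all warrant an alert."""
    return json.loads(payload)["status"]["indicator"] == "none"

print(is_healthy(sample))  # False
```

Polling this endpoint every few minutes and alerting on any non-`none` indicator is the core of the monitoring loop described above.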