Sauce Labs Outage History

Sauce Labs is up right now

There have been 10 Sauce Labs outages since February 23, 2026, totaling 13h 44m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://status.saucelabs.com

Major April 24, 2026

2026-April-23 Resolved Service Incident

Detected by Pingoru
Apr 24, 2026, 04:33 PM UTC
Resolved
Apr 24, 2026, 04:33 PM UTC
Timeline · 1 update
  1. resolved Apr 24, 2026, 04:33 PM UTC

    Between April 23rd 22:44 and April 24th 15:25 UTC, a technical issue affected video recordings for tests running on macOS 15 and iOS within our EU and US-West Data Centers. We identified the issue and deployed a fix. All systems are now fully operational.

Read the full incident report →

Notice April 16, 2026

2026-April-16 Resolved Service Incident

Detected by Pingoru
Apr 16, 2026, 10:10 AM UTC
Resolved
Apr 16, 2026, 10:10 AM UTC
Timeline · 1 update
  1. resolved Apr 16, 2026, 10:10 AM UTC

    Between 02:00 and 11:15 CEST, live and automated tests on iOS 17.0 simulators were failing to start in the EU and US-West Data Centers. We executed a deployment rollback, which restored services. All systems are now fully operational.

Read the full incident report →

Major April 7, 2026

2026-April-07 Service Incident

Detected by Pingoru
Apr 07, 2026, 03:02 PM UTC
Resolved
Apr 07, 2026, 04:13 PM UTC
Duration
1h 10m
Affected: US-West, EU-Central
Timeline · 3 updates
  1. investigating Apr 07, 2026, 03:02 PM UTC

    We are currently investigating reports of test failures affecting users running tests with saucectl in our US-West-1 and EU-Central-1 Data Centers.

  2. resolved Apr 07, 2026, 04:13 PM UTC

    We have identified the root cause and have deployed a fix for this issue. All services are fully operational.

  3. postmortem Apr 10, 2026, 09:56 PM UTC

    Dates: Monday, April 7th 2026, ~11:00 – 15:55 UTC
    What happened: Some customers experienced 503 errors when running tests via saucectl. The test-composer service was intermittently unavailable, preventing framework-based test execution.
    Why it happened: A stale Docker image was deployed to the test-composer service due to a packaging issue that arose during an internal container registry migration. This caused service pods to crash.
    How we fixed it: We identified the stale image and redeployed the correct version, restoring the service.
    What we are doing to prevent it from happening again: We are hardening our image deployment pipeline and adding validation checks to ensure container registry migrations do not result in stale or incorrect images being deployed to production.
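
The prevention step above comes down to verifying, before rollout, that the tag being deployed still resolves to the digest the build produced. A minimal sketch of such a check against the standard Docker Registry HTTP API v2; the registry URL, image tag, and expected digest are hypothetical placeholders, and registry authentication is omitted:

```python
# Pre-deploy sanity check: does the tag still resolve to the digest our
# build pipeline recorded? A stale image (as in this incident) would show
# up as a digest mismatch before anything reaches production.
import urllib.request

REGISTRY = "https://registry.example.com"  # hypothetical internal registry
IMAGE = "test-composer"                    # the service named in the postmortem
TAG = "2026.04.07"                         # hypothetical release tag
EXPECTED_DIGEST = "sha256:..."             # hypothetical digest recorded at build time

def resolve_digest(registry: str, image: str, tag: str) -> str:
    """Ask the registry (Docker Registry HTTP API v2) which digest a tag points at."""
    req = urllib.request.Request(
        f"{registry}/v2/{image}/manifests/{tag}",
        method="HEAD",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Docker-Content-Digest"]

if __name__ == "__main__":
    actual = resolve_digest(REGISTRY, IMAGE, TAG)
    if actual != EXPECTED_DIGEST:
        raise SystemExit(f"stale image: {TAG} resolves to {actual}, expected {EXPECTED_DIGEST}")
    print("image digest verified; safe to deploy")
```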

Read the full incident report →

Notice March 24, 2026

2026-March-24 Resolved Service Incident

Detected by Pingoru
Mar 24, 2026, 05:36 PM UTC
Resolved
Mar 24, 2026, 05:36 PM UTC
Timeline · 2 updates
  1. resolved Mar 24, 2026, 05:36 PM UTC

    Between 09:32 and 15:13 UTC, we identified a technical issue affecting iOS tests when running with network capture enabled. We've resolved the underlying cause and tests are working as expected. All services are fully operational.

  2. postmortem Apr 10, 2026, 11:06 PM UTC

    Dates: Tuesday, March 24th 2026, 09:32 UTC – 15:13 UTC
    What happened: Network calls failed on iOS devices during Real Device Cloud sessions where network capture was enabled. Approximately 12-13% of iOS sessions were affected. Android was not impacted.
    Why it happened: A deployment introduced a DNS resolution change that was incompatible with the iOS platform, causing network capture to break.
    How we fixed it: We rolled back the deployment to restore service.
    What we are doing to prevent it from happening again: We are adding synthetic tests to catch network capture regressions before production and implementing monitoring alerts for faster detection after deployments.
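
The synthetic tests mentioned above reduce to routing a known-good request through the network-capture path after each deployment and alerting on failure. A minimal sketch, assuming a hypothetical capture-proxy address and probe URL; a real check would run on devices in each data center:

```python
# Synthetic probe: send a known-good request through the capture proxy and
# alert if it fails, catching regressions like this one before customers do.
import urllib.request

PROXY = "http://capture-proxy.internal:8080"  # hypothetical capture-proxy address
PROBE_URL = "https://example.com/"            # any stable, known-good endpoint

def probe_via_proxy(proxy: str, url: str, timeout: float = 10.0) -> bool:
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    opener = urllib.request.build_opener(handler)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, timeout, connection refused, ...
        return False

if __name__ == "__main__":
    if not probe_via_proxy(PROXY, PROBE_URL):
        raise SystemExit("synthetic network-capture probe failed; page the on-call")
    print("network-capture path healthy")
```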

Read the full incident report →

Major March 19, 2026

2026-March-19 Service Incident

Detected by Pingoru
Mar 19, 2026, 09:51 AM UTC
Resolved
Mar 19, 2026, 10:54 AM UTC
Duration
1h 2m
Affected: US-West
Timeline · 3 updates
  1. investigating Mar 19, 2026, 09:51 AM UTC

    Around 4:45 AM UTC we started experiencing lower iOS device availability in the US-West data center. Our team is actively investigating the root cause and working toward a resolution.

  2. resolved Mar 19, 2026, 10:54 AM UTC

    This incident has been resolved and our services are fully operational.

  3. postmortem Apr 10, 2026, 10:59 PM UTC

    Dates: Wednesday, March 19th 2026, 04:45 UTC – 10:47 UTC
    What happened: Approximately 15% of iOS devices in our US-West data center were temporarily unavailable for customer test sessions due to failed internet connectivity checks.
    Why it happened: An automated wireless network optimization feature adjusted transmit power levels on access points serving the affected devices, degrading wireless connectivity and causing devices to fail their availability checks.
    How we fixed it: The affected access points were identified and restarted, restoring normal wireless connectivity.
    What we are doing to prevent it from happening again: We are evaluating the automated optimization tools and improving our monitoring.
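
One way to read the monitoring improvement: alert on the available fraction of a device pool rather than on individual devices, so a gradual 15% drop like this one pages someone. A sketch of that idea; the pool name, threshold, and alerting hook are assumptions, not Sauce Labs internals:

```python
# Pool-level availability alarm: individual device flaps are normal, but a
# sustained drop in the pool's available fraction is actionable.
from typing import Iterable

AVAILABILITY_THRESHOLD = 0.90  # assumed threshold; tune per pool

def availability(statuses: Iterable[str]) -> float:
    statuses = list(statuses)
    if not statuses:
        return 0.0
    return sum(s == "available" for s in statuses) / len(statuses)

def check_pool(pool_name: str, statuses: Iterable[str]) -> None:
    frac = availability(statuses)
    if frac < AVAILABILITY_THRESHOLD:
        # a real monitor would page the on-call here instead of printing
        print(f"ALERT: {pool_name} availability {frac:.0%} < {AVAILABILITY_THRESHOLD:.0%}")

if __name__ == "__main__":
    # example shaped like this incident: ~15% of a 40-device iOS pool offline
    check_pool("us-west-ios", ["available"] * 34 + ["offline"] * 6)
```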

Read the full incident report →

Notice March 18, 2026

2026-March-13 Resolved Service Incident

Detected by Pingoru
Mar 18, 2026, 05:15 PM UTC
Resolved
Mar 18, 2026, 05:15 PM UTC
Timeline · 2 updates
  1. resolved Mar 18, 2026, 05:15 PM UTC

    Between 14:43 and 15:11 UTC on March 13, a small subset of Real Devices (iOS and Android) became unavailable across all of our data centers. The issue was identified and resolved after remedial action was taken. All services are fully operational.

  2. postmortem Apr 10, 2026, 10:57 PM UTC

    Dates: Friday, March 13th 2026, 14:43 UTC – 15:11 UTC
    What happened: Real Device (iOS and Android) availability gradually decreased across all data centers.
    Why it happened: A product defect was introduced, resulting in a small subset of Real Devices (~10%) failing to maintain required connectivity.
    How we fixed it: We rolled back to a stable version.
    What we are doing to prevent it from happening again: We are improving monitoring and alerting and enhancing post-deployment validation.

Read the full incident report →

Major March 10, 2026

2026-March-10 Service Incident

Detected by Pingoru
Mar 10, 2026, 06:46 PM UTC
Resolved
Mar 10, 2026, 11:34 PM UTC
Duration
4h 48m
Affected: US-West, EU-Central, US-East
Timeline · 3 updates
  1. investigating Mar 10, 2026, 06:46 PM UTC

    We are experiencing device unavailability in the US West 1, EU Central 1, and US East 4 data centers and have found that the issue is caused by a third-party service disruption. We are investigating.

  2. resolved Mar 10, 2026, 11:34 PM UTC

    This incident has been resolved.

  3. postmortem Apr 09, 2026, 09:07 PM UTC

    Dates: Tuesday, March 10th 2026, 17:52 – 23:34 UTC
    What happened: The majority of iOS devices across all regions became unavailable.
    Why it happened: Apple's ppq.apple.com app verification endpoint was down, causing internal device monitoring checks to fail and bringing devices offline.
    How we fixed it: We temporarily disabled these device monitoring checks.
    What we are doing to prevent it from happening again: We improved external monitoring to catch outages of Apple's ppq.apple.com endpoint and loosened device monitoring so that live iOS devices are not taken offline when ppq.apple.com is down.
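
The "loosened" check described above boils down to asking, when a device health check fails, whether the shared external dependency itself is down before pulling the device from rotation. A sketch of that decision logic; the wiring is hypothetical and only illustrates the shape of the fix:

```python
# When a device check fails, distinguish "device is broken" from "the shared
# external dependency (ppq.apple.com) is broken" before taking action.
import urllib.error
import urllib.request

def endpoint_reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, even if with an error status
    except OSError:
        return False  # DNS failure, timeout, connection refused, ...

def evaluate_device(device_check_passed: bool) -> str:
    if device_check_passed:
        return "keep-online"
    if not endpoint_reachable("http://ppq.apple.com"):
        # Apple's verification endpoint is down; the device is likely fine.
        return "keep-online-and-alert"
    return "take-offline"

if __name__ == "__main__":
    print(evaluate_device(device_check_passed=False))
```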

Read the full incident report →

Notice March 6, 2026

2026-March-06 Resolved Service Incident

Detected by Pingoru
Mar 06, 2026, 11:28 PM UTC
Resolved
Mar 06, 2026, 11:28 PM UTC
Timeline · 2 updates
  1. resolved Mar 06, 2026, 11:28 PM UTC

    Between 21:38 UTC and 23:11 UTC, our virtual iOS and macOS live and automated device tests were failing to start in the EU Data Center. We executed a deployment rollback, which restored services. All systems are now fully operational.

  2. postmortem Apr 10, 2026, 10:53 PM UTC

    Dates: Friday, March 6th 2026, 21:38 UTC – 23:11 UTC
    What happened: During the incident window, customers running virtual iOS simulator tests on ARM or macOS ARM desktop tests in the EU Data Center were unable to start new live or automated sessions.
    Why it happened: There was a sequencing issue in the release of the ARM side disk images in the EU.
    How we fixed it: The image reference for the ARM side disk was rolled back to the previous reference to restore service.
    What we are doing to prevent it from happening again: The tests that validate image syncing have been completed in each region.

Read the full incident report →

Minor February 27, 2026

2026-February-27 Service Incident

Detected by Pingoru
Feb 27, 2026, 06:42 PM UTC
Resolved
Feb 27, 2026, 08:15 PM UTC
Duration
1h 33m
Affected: US-West
Timeline · 3 updates
  1. investigating Feb 27, 2026, 06:42 PM UTC

    Live and automated real device test results are not being displayed on the test results page in the US-West-1 data center. We are investigating.

  2. resolved Feb 27, 2026, 08:15 PM UTC

    After taking remedial action, we are now seeing real device test results display in the US-West-1 data center. All services are fully operational.

  3. postmortem Apr 15, 2026, 07:06 PM UTC

    Dates: Friday, February 27th 2026, 16:15 UTC – 19:50 UTC
    What happened: Requests made using API client authentication would return 500 errors.
    Why it happened: An internal data structure became corrupted due to a race condition.
    How we fixed it: The affected service was restarted, and a long-term fix was applied.
    What we are doing to prevent it from happening again: Thread locking has been applied to the affected service.
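
For illustration, the classic shape of a thread-locking fix is to serialize access to the shared structure. The postmortem does not name the structure involved, so the token cache below is purely hypothetical:

```python
# Hypothetical shared structure guarded by a lock so concurrent auth
# requests cannot interleave mid-update and leave it corrupted.
import threading

class TokenCache:
    """A mapping that is safe to read and mutate from many request threads."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._tokens: dict[str, str] = {}

    def put(self, client_id: str, token: str) -> None:
        with self._lock:  # serialize writers; this is the race-condition fix
            self._tokens[client_id] = token

    def get(self, client_id: str) -> str | None:
        with self._lock:
            return self._tokens.get(client_id)
```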

Read the full incident report →

Major February 23, 2026

2026-February-23 Service Incident

Detected by Pingoru
Feb 23, 2026, 05:22 PM UTC
Resolved
Feb 23, 2026, 10:31 PM UTC
Duration
5h 9m
Affected: US-West
Timeline · 4 updates
  1. investigating Feb 23, 2026, 05:22 PM UTC

    We are seeing an increase in automated tests using Sauce Connect failing with “Misconfigured -- No active tunnel found for provided identifier” errors in the US-West-1 data center. We are actively investigating.

  2. investigating Feb 23, 2026, 07:40 PM UTC

    We have identified the root cause and are working on implementing a fix. We are continuing to investigate.

  3. resolved Feb 23, 2026, 10:31 PM UTC

    We have identified the root cause and deployed a fix for this issue. All services are fully operational.

  4. postmortem Apr 15, 2026, 06:58 PM UTC

    Dates: Monday, February 23rd 2026, 17:22 UTC – 22:31 UTC
    What happened: WDIO-based tests run by customers using Sauce Connect 4 could not be started.
    Why it happened: A misconfiguration caused health checks to fail in some cases, shutting down customer tunnels.
    How we fixed it: The misconfiguration was corrected.
    What we are doing to prevent it from happening again: We are evaluating the underlying software stack. Customers who are still using Sauce Connect 4 and can migrate to Sauce Connect 5 should do so at their earliest convenience.
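
In the meantime, test suites can fail fast by checking for an active tunnel before launching WDIO runs instead of surfacing "No active tunnel found" errors mid-suite. A sketch of such a pre-flight check; the REST path and response shape below are from memory of Sauce Labs' tunnels API and should be verified against the current docs:

```python
# Pre-flight check: bail out with a clear message if no Sauce Connect tunnel
# is active, rather than letting every test fail with a tunnel error.
# Assumption: GET /rest/v4/{username}/tunnels returns a JSON list of tunnel IDs.
import base64
import json
import os
import urllib.request

USERNAME = os.environ["SAUCE_USERNAME"]
ACCESS_KEY = os.environ["SAUCE_ACCESS_KEY"]

def active_tunnels() -> list:
    url = f"https://api.us-west-1.saucelabs.com/rest/v4/{USERNAME}/tunnels"
    req = urllib.request.Request(url)
    creds = base64.b64encode(f"{USERNAME}:{ACCESS_KEY}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    tunnels = active_tunnels()
    if not tunnels:
        raise SystemExit(f"no active Sauce Connect tunnels for {USERNAME}; start one first")
    print(f"{len(tunnels)} active tunnel(s): {tunnels}")
```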

Read the full incident report →

Looking to track Sauce Labs downtime and outages?

Pingoru polls Sauce Labs' status page every 5 minutes and alerts you the moment it reports an issue, before your customers do.
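
For reference, if status.saucelabs.com is a standard Atlassian Statuspage (such pages expose a public /api/v2/status.json endpoint), a bare-bones poller looks roughly like the sketch below. Pingoru's actual implementation is not public, so treat this as an illustration of the polling idea only:

```python
# Minimal status poller: fetch the page's summary indicator every 5 minutes
# and flag anything other than "none". Assumes a standard Statuspage API.
import json
import time
import urllib.request

STATUS_URL = "https://status.saucelabs.com/api/v2/status.json"

def current_indicator() -> str:
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["indicator"]  # "none", "minor", "major", or "critical"

if __name__ == "__main__":
    while True:
        indicator = current_indicator()
        if indicator != "none":
            print(f"Sauce Labs is reporting a {indicator} incident")
        time.sleep(300)  # match the 5-minute cadence described above
```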

  • Real-time alerts when Sauce Labs reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Sauce Labs alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Sauce Labs for free

5 free monitors · No credit card required