Oso Outage History

Oso is up right now

Oso had 12 outages in the last 2 years, all since June 3, 2025, totaling 298h 45m of downtime, an average of 0.5 incidents per month. Each is summarized below with incident details, duration, and resolution information.
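
The headline figures are simple aggregates of the per-incident data listed below. As a minimal sketch of that arithmetic, using the durations exactly as printed (the printed values are truncated to whole minutes or hours, and two incidents list no duration at all, so the sum lands slightly under the quoted 298h 45m):

```python
# Durations as printed in the incident list below; the two entries without a
# listed duration (the Oct 20 and Oct 5, 2025 notices) are omitted.
durations_h_m = [
    (5, 10),          # May 8, 2026  - intermittent 5xx errors in us-east-1
    (8 * 24 + 5, 0),  # Apr 15, 2026 - fallback snapshots delayed (8d 5h)
    (15, 32),         # Apr 10, 2026 - Oso Sync processing paused
    (10, 20),         # Apr 10, 2026 - degraded fallback snapshot freshness
    (21, 4),          # Mar 5, 2026  - Oso Sync degraded
    (1, 4),           # Nov 17, 2025 - elevated write errors in eu-west-1
    (24 + 9, 0),      # Oct 20, 2025 - upstream AWS degradation (1d 9h)
    (5, 14),          # Sep 22, 2025 - increased write latency
    (1, 55),          # Jul 8, 2025  - stale reads in us-east-1
    (7, 5),           # Jun 3, 2025  - website unavailable
]

total_minutes = sum(h * 60 + m for h, m in durations_h_m)
print(f"total downtime: {total_minutes // 60}h {total_minutes % 60}m")  # 297h 24m
print(f"average incidents per month over 2 years: {12 / 24}")           # 0.5
```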

Source: https://oso.statuspage.io

Notice May 8, 2026

Intermittent 5xx errors in us-east-1 region

Detected by Pingoru
May 08, 2026, 05:28 PM UTC
Resolved
May 08, 2026, 10:39 PM UTC
Duration
5h 10m
Timeline · 3 updates
  1. investigating May 08, 2026, 05:28 PM UTC

    Users in our us-east-1 region may observe HTTP 5xx errors on write requests or when using the Oso UI. We are investigating.

  2. monitoring May 08, 2026, 07:16 PM UTC

    We have deployed a fix and are monitoring the results.

  3. resolved May 08, 2026, 10:39 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice April 15, 2026

Fallback snapshots are delayed

Detected by Pingoru
Apr 15, 2026, 02:35 PM UTC
Resolved
Apr 23, 2026, 08:22 PM UTC
Duration
8d 5h
Timeline · 3 updates
  1. identified Apr 15, 2026, 02:35 PM UTC

    The issue has been identified and a fix is being implemented.

  2. monitoring Apr 15, 2026, 03:15 PM UTC

    The issue has been resolved. We are keeping this incident in a Monitoring status while we work on a longer-term fix to address the underlying root cause.

  3. resolved Apr 23, 2026, 08:22 PM UTC

    We have implemented a fix to mitigate future occurrences of this issue.

Read the full incident report →

Minor April 10, 2026

Oso Sync Processing Temporarily Paused

Detected by Pingoru
Apr 10, 2026, 05:05 AM UTC
Resolved
Apr 10, 2026, 08:37 PM UTC
Duration
15h 32m
Timeline · 2 updates
  1. identified Apr 10, 2026, 05:05 AM UTC

    We have temporarily paused Oso Sync processing while we resolve an infrastructure issue. Syncs submitted during this window will not be processed. Authorization requests via Oso Cloud are unaffected. We will provide updates as the situation progresses.

  2. resolved Apr 10, 2026, 08:37 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice April 10, 2026

Degraded Fallback Snapshot Freshness

Detected by Pingoru
Apr 10, 2026, 03:23 AM UTC
Resolved
Apr 10, 2026, 01:43 PM UTC
Duration
10h 20m
Timeline · 2 updates
  1. identified Apr 10, 2026, 03:23 AM UTC

    We are investigating an issue affecting fallback snapshot freshness. Our team has identified the cause and is actively working on remediation. We will provide updates as the situation progresses.

  2. resolved Apr 10, 2026, 01:43 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice March 5, 2026

Oso Sync Degraded

Detected by Pingoru
Mar 05, 2026, 05:18 PM UTC
Resolved
Mar 06, 2026, 02:22 PM UTC
Duration
21h 4m
Timeline · 3 updates
  1. identified Mar 05, 2026, 05:18 PM UTC

    We have identified an issue with our integration with S3 that is causing Oso Sync jobs to intermittently fail or time out. We have escalated the issue to AWS and are exploring additional mitigations on our side.

  2. monitoring Mar 06, 2026, 04:24 AM UTC

    We've deployed a fix and are monitoring the results.

  3. resolved Mar 06, 2026, 02:22 PM UTC

    Since deploying the fix, we have not observed additional failures or timeouts related to our integration with S3.

Read the full incident report →

Minor November 17, 2025

Elevated errors processing writes in eu-west-1

Detected by Pingoru
Nov 17, 2025, 04:25 PM UTC
Resolved
Nov 17, 2025, 05:29 PM UTC
Duration
1h 4m
Timeline · 4 updates
  1. investigating Nov 17, 2025, 04:25 PM UTC

    We are investigating occurrences of failed writes in the eu-west-1 region.

  2. identified Nov 17, 2025, 04:32 PM UTC

    We have identified the affected cloud API resources and are applying remediation.

  3. monitoring Nov 17, 2025, 04:42 PM UTC

    We are seeing a recovery in write traffic following the mitigation. We are continuing to monitor this region.

  4. resolved Nov 17, 2025, 05:29 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice October 20, 2025

Platform degradation for some upstream AWS services

Detected by Pingoru
Oct 20, 2025, 10:14 AM UTC
Resolved
Oct 20, 2025, 10:14 AM UTC
Timeline · 1 update
  1. resolved Oct 20, 2025, 10:14 AM UTC

    Between Oct 20 07:11 AM UTC and Oct 20 09:27 AM UTC, our system alarms for AWS started paging the On-Call team. We confirmed from the AWS status page that AWS was having an incident (https://health.aws.amazon.com/health/status). AWS has reported that they have found the root cause and customers should be seeing recovery. Our internal monitoring dashboards have been available throughout this time and report that the Oso Cloud platform has been handling authorization decisions and write traffic without disruption. Given this, we currently believe there is no visible impact for Oso customers. We will continue to monitor and share any relevant status updates.

Read the full incident report →

Minor October 20, 2025

Platform degradation in some upstream AWS services

Detected by Pingoru
Oct 20, 2025, 07:15 AM UTC
Resolved
Oct 21, 2025, 04:47 PM UTC
Duration
1d 9h
Affected: Authorization APIs
Timeline · 6 updates
  1. monitoring Oct 20, 2025, 10:20 AM UTC

    Between Oct 20 07:11 AM UTC and Oct 20 09:27 AM UTC, our system alarms for AWS started paging the On-Call team. We confirmed from the AWS status page that AWS was having an incident (https://health.aws.amazon.com/health/status). AWS has reported that they have found the root cause and customers should be seeing recovery. Our internal monitoring dashboards have been available throughout this time and report that the Oso Cloud platform has been handling authorization decisions and write traffic without disruption. Customers using Oso Fallback nodes may have observed stale authorization decisions from Fallback instances during this time, but traffic should not have been routed to Fallback instances because Oso Cloud was responsive. Fallback instances should now be up to date. We currently believe this was the only visible impact for Oso customers. We will continue to monitor and share any relevant status updates.

  2. investigating Oct 20, 2025, 03:42 PM UTC

    AWS has reported that they are seeing network issues, which has coincided with customer reports of gateway timeouts attempting to reach the Oso Service in us-east-1. We are investigating the impact and exploring failover options.

  3. investigating Oct 20, 2025, 04:49 PM UTC

    AWS continues to experience networking issues in us-east-1, resulting in increased latency for some requests. We have applied an update to Oso Cloud to temporarily route traffic to alternate regions.

  4. monitoring Oct 20, 2025, 05:20 PM UTC

    We have removed `us-east-1` from our DNS traffic routing policy for `api.osohq.com` and `cloud.osohq.com`. We have confirmed that all API traffic is now being routed to other nearby regions. We did not observe errors from our service during this incident, but we proceeded with the failover out of caution and due to inconsistent visibility. We have some shared services in `us-east-1` that continue to report healthy status, and we have not seen any other impacts. We will continue to monitor and update on any changes.

  5. monitoring Oct 20, 2025, 10:32 PM UTC

    AWS is continuing to stabilize us-east-1. Out of an abundance of caution, we will continue to avoid routing traffic to this region until their incident is fully resolved.

  6. resolved Oct 21, 2025, 04:47 PM UTC

    The AWS incident has been resolved and their us-east-1 region appears to be stable. We have reverted our temporary mitigations to allow Oso Cloud traffic to flow to us-east-1 once more.

Read the full incident report →
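
Update 4 above describes the mitigation in DNS terms: us-east-1 was removed from the traffic routing policy for api.osohq.com and cloud.osohq.com so resolvers stopped handing out that region's endpoints. Oso has not published how its DNS is managed, so purely as an illustrative sketch, here is roughly what draining one region from a weighted Route 53 policy could look like with boto3; the hosted zone ID, record names, record type, and the use of weighted records are all assumptions, not Oso's actual configuration.

```python
# Hypothetical sketch only: drain one region from a weighted Route 53 routing
# policy by setting its weight to 0. Zone ID, record name, and target are
# placeholders, not Oso's real DNS configuration.
import boto3

# Route 53 is a global service; its API endpoint lives in us-east-1.
route53 = boto3.client("route53", region_name="us-east-1")

def drain_region(zone_id: str, record_name: str, region: str, target: str) -> None:
    """Set the weight of one regional record set to 0 so DNS stops routing to it."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": f"Drain {region} during regional incident",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": record_name,
                        "Type": "CNAME",
                        "SetIdentifier": region,  # one record set per region
                        "Weight": 0,              # 0 = stop sending traffic here
                        "TTL": 60,
                        "ResourceRecords": [{"Value": target}],
                    },
                }
            ],
        },
    )

# Hypothetical usage with placeholder values:
# drain_region("Z0000000000000", "api.osohq.com.", "us-east-1",
#              "api-us-east-1.example.internal.")
```

Setting the weight to 0 rather than deleting the record keeps the record set in place, so restoring traffic once the region recovers is a single inverse change, which matches the "reverted our temporary mitigations" step in the final update.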

Notice October 5, 2025

Unavailability in us-west-1

Detected by Pingoru
Oct 05, 2025, 12:55 PM UTC
Resolved
Oct 05, 2025, 12:55 PM UTC
Timeline · 2 updates
  1. resolved Oct 05, 2025, 12:55 PM UTC

    From 12:32-12:52 UTC, customers connecting to Oso from the us-west-1 region may have received HTTP errors or experienced increased latency. Postmortem to follow.

  2. postmortem Oct 07, 2025, 07:22 PM UTC

    Link to postmortem: https://osohq.notion.site/2025-10-05-unavailability-in-us-west-1

Read the full incident report →

Minor September 22, 2025

Increased latency of write application for some customers

Detected by Pingoru
Sep 22, 2025, 04:57 PM UTC
Resolved
Sep 22, 2025, 10:11 PM UTC
Duration
5h 14m
Affected: Authorization APIs
Timeline · 5 updates
  1. monitoring Sep 22, 2025, 04:57 PM UTC

    We have identified an increased delay in write processing for some customers. The source is a backlog of writes in our processing queue and a sequential bottleneck for some environments. The backlog of writes has completed processing and we are monitoring the replication lag as it returns to normal levels.

  2. monitoring Sep 22, 2025, 06:01 PM UTC

    We are continuing to monitor the replication recovery. Some affected customers should be seeing their workloads returning to normal, others should continue to see a reduction in write delay.

  3. monitoring Sep 22, 2025, 07:37 PM UTC

    Over the next hour environments should see gradual recovery in write delays, with full recovery estimated in 2 hours.

  4. resolved Sep 22, 2025, 10:11 PM UTC

    Write latencies to all affected Oso Cloud environments have recovered.

  5. postmortem Sep 24, 2025, 02:40 PM UTC

    Link to postmortem: https://osohq.notion.site/2025-09-22-Write-delays-for-certain-customers-2769f1471f2b80e3b36fd94ada7f34ce

Read the full incident report →

Notice July 8, 2025

Stale reads in us-east-1

Detected by Pingoru
Jul 08, 2025, 01:48 PM UTC
Resolved
Jul 08, 2025, 03:44 PM UTC
Duration
1h 55m
Timeline · 4 updates
  1. identified Jul 08, 2025, 01:48 PM UTC

    We have identified that certain nodes stopped processing writes, which resulted in them returning stale results. We have replaced the affected nodes and are preemptively replacing all of the nodes in the region to remediate the issue.

  2. monitoring Jul 08, 2025, 02:35 PM UTC

    We have confirmed that all affected nodes are no longer in use and reads are no longer stale. We are continuing with the regional restart.

  3. resolved Jul 08, 2025, 03:01 PM UTC

    The regional restart is complete. We continue to observe that reads are accurate.

  4. postmortem Jul 11, 2025, 02:43 PM UTC

    Link to postmortem: https://osohq.notion.site/Stale-reads-in-us-east-1-22a9f1471f2b80bd8808f8b6cb76a44d

Read the full incident report →

Notice June 3, 2025

Website unavailable

Detected by Pingoru
Jun 03, 2025, 04:27 PM UTC
Resolved
Jun 03, 2025, 11:32 PM UTC
Duration
7h 5m
Timeline · 4 updates
  1. monitoring Jun 03, 2025, 04:27 PM UTC

    Our website hosting provider is currently experiencing an ongoing incident. We are actively monitoring. Oso Cloud services are unaffected. If you're looking for our docs, you can access them here: https://www.osohq.com/docs.

  2. monitoring Jun 03, 2025, 04:29 PM UTC

    We are continuing to monitor for any further issues.

  3. monitoring Jun 03, 2025, 07:09 PM UTC

    Our website hosting provider is seeing improvements, and we are continuing to monitor.

  4. resolved Jun 03, 2025, 11:32 PM UTC

    Our website hosting provider has resolved the incident. Oso Cloud services remain unaffected.

Read the full incident report →