Quay Outage History

Quay is up right now

There have been 12 Quay outages since February 4, 2026, totaling 542h 25m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://status.quay.io

Major April 21, 2026

Quay.io Push/Pull Degraded

Detected by Pingoru
Apr 21, 2026, 10:50 PM UTC
Resolved
Apr 22, 2026, 12:09 AM UTC
Duration
1h 19m
Affected: API
Timeline · 3 updates
  1. investigating Apr 21, 2026, 10:50 PM UTC

    We are experiencing degraded performance on quay.io push/pull API. We are actively investigating.

  2. identified Apr 21, 2026, 10:55 PM UTC

    Pull API is working again. We are actively working on bringing push API back online.

  3. resolved Apr 22, 2026, 12:09 AM UTC

    Pull and Push APIs are fully up again. Service is working as expected.

Read the full incident report →

Minor April 20, 2026

Migration of china.cdn.redhat.com providers

Detected by Pingoru
Apr 20, 2026, 02:37 PM UTC
Resolved
Apr 20, 2026, 09:01 PM UTC
Duration
6h 23m
Affected: cdn.redhat.com
Timeline · 1 update
  1. scheduled Apr 20, 2026, 02:37 PM UTC

    Our CDN provider is ceasing operations of its delivery network inside China, which requires us to migrate china.cdn.redhat.com. This migration should be seamless for most end users in both access and performance. If you experience issues after the migration has completed, please reach out to Red Hat Support for assistance. This change has no effect on our other CDN properties.

Read the full incident report →

Major April 9, 2026

Erroneous Advisory email notifications

Detected by Pingoru
Apr 09, 2026, 05:10 PM UTC
Resolved
Apr 09, 2026, 05:10 PM UTC
Duration
16s
Affected: Red Hat Lightspeed - Patch
Timeline · 2 updates
  1. monitoring Apr 09, 2026, 05:10 PM UTC

    On April 8th, 2026, an update to the Patch application caused Advisory notifications to be sent excessively and without any advisories listed. Within two hours of notification, we were able to resolve the issue in production. Emails may still be delivered due to delays in email delivery, but new erroneous emails are no longer being generated.

  2. resolved Apr 09, 2026, 05:10 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice April 3, 2026

Subscription threshold exceeded notifications for non-pay-as-you-go products are showing misleading values

Detected by Pingoru
Apr 03, 2026, 02:04 PM UTC
Resolved
Apr 16, 2026, 02:33 PM UTC
Duration
13d
Affected: Subscription Watch
Timeline · 3 updates
  1. identified Apr 03, 2026, 02:04 PM UTC

    The backend service responsible for checking the overusage condition is processing the wrong data for non-PAYGO products. This is leading to unexpected "subscription threshold exceeded" notifications with misleading values.

  2. identified Apr 03, 2026, 02:34 PM UTC

    We want customers to know that the notifications capability within Subscription Watch is currently turned off because overusage notifications were being sent out in error. The Jira issue is https://redhat.atlassian.net/browse/SWATCH-4870.

  3. resolved Apr 16, 2026, 02:33 PM UTC

    Notifications for non-PAYGO products should now work correctly.

Read the full incident report →

Major March 30, 2026

quay.io API failures

Detected by Pingoru
Mar 30, 2026, 08:11 PM UTC
Resolved
Mar 30, 2026, 11:15 PM UTC
Duration
3h 4m
Affected: Registry Account Management, API, registry.redhat.io, registry.access.redhat.com, Registry, registry.connect.redhat.com, Frontend
Timeline · 5 updates
  1. identified Mar 30, 2026, 08:11 PM UTC

    We are seeing 502s on quay.io pushes and pulls.

  2. identified Mar 30, 2026, 08:58 PM UTC

    The team has identified the issue and is investigating. We've shifted Quay to read-only for now, so pulls should gradually begin recovering, but pushes will still fail.

  3. monitoring Mar 30, 2026, 11:03 PM UTC

    We've implemented a fix for the issue, and pushes have now been restored.

  4. resolved Mar 30, 2026, 11:15 PM UTC

    Quay.io functionality has been restored.

  5. postmortem Apr 20, 2026, 02:42 PM UTC

    On March 30, 2026, Quay.io experienced a service disruption affecting image pulls and pushes, caused by an unexpected burst of resource-intensive operations that temporarily overloaded our primary database. Our engineering team mitigated the issue by failing over read traffic, restoring image pull capabilities within 45 minutes, and subsequently restored full push functionality once the database stabilized. We have since implemented enhanced traffic filtering and are actively optimizing our systems to prevent this from happening in the future.
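
The mitigation the postmortem describes, shifting read traffic off the overloaded primary database, is a standard read/write-splitting move. Here is a minimal sketch of the idea in Python with SQLAlchemy; the hostnames, the READS_FAILED_OVER flag, and the routing helper are illustrative assumptions, not Quay's actual implementation.

```python
# Minimal read/write-splitting sketch (illustrative only): route reads to a
# replica during a failover while writes stay on the primary.

from sqlalchemy import create_engine, text

# Separate engines for the primary and a read replica (hypothetical DSNs).
primary = create_engine("postgresql://quay@db-primary.example.internal/quay")
replica = create_engine("postgresql://quay@db-replica.example.internal/quay")

READS_FAILED_OVER = True  # operators flip this during an incident


def engine_for(statement: str):
    """Route SELECTs to the replica while failover is active; writes
    always go to the primary."""
    is_read = statement.lstrip().lower().startswith("select")
    return replica if (is_read and READS_FAILED_OVER) else primary


def run(statement: str):
    # Open a connection on whichever engine the statement should use.
    with engine_for(statement).connect() as conn:
        return conn.execute(text(statement))
```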

Read the full incident report →

Minor March 12, 2026

access.redhat.com - SSO Timeouts

Detected by Pingoru
Mar 12, 2026, 08:16 PM UTC
Resolved
Mar 12, 2026, 08:39 PM UTC
Duration
22m
Affected: SSO Authentication
Timeline · 2 updates
  1. identified Mar 12, 2026, 08:16 PM UTC

    The issue has been identified and a fix is being implemented.

  2. resolved Mar 12, 2026, 08:39 PM UTC

    This incident has been resolved.

Read the full incident report →

Major March 10, 2026

Intermittent issues with RHEL 9 and RHUI downloads

Detected by Pingoru
Mar 10, 2026, 05:03 PM UTC
Resolved
Mar 18, 2026, 02:36 PM UTC
Duration
7d 21h
Affected: cdn.redhat.com
Timeline · 2 updates
  1. identified Mar 10, 2026, 05:03 PM UTC

    Update: Incident Reopened. We are seeing a recurrence of the intermittent failures previously reported at https://status.redhat.com/incidents/6yxp7syh0z9v. Red Hat has confirmed intermittent failures when users attempt to download specific packages in Red Hat Enterprise Linux 9. The RHEL team is investigating. This was originally reported as "RHEL 9 AppStream repository synchronization on Red Hat Satellite 6 fails with duplicate key value error" in this public Knowledge Base article: https://access.redhat.com/solutions/7139264. Red Hat Update Infrastructure (RHUI) has also been identified as impacted by this issue.

  2. resolved Mar 18, 2026, 02:36 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor February 18, 2026

quay.io intermittent API failures

Detected by Pingoru
Feb 18, 2026, 08:51 PM UTC
Resolved
Feb 18, 2026, 09:40 PM UTC
Duration
49m
Affected: API, Registry
Timeline · 4 updates
  1. identified Feb 18, 2026, 08:51 PM UTC

    We are observing an increase in 500s on pulls and pushes; the team is currently investigating.

  2. monitoring Feb 18, 2026, 09:16 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 18, 2026, 09:40 PM UTC

    The incident has been resolved and service has been restored.

  4. postmortem Feb 26, 2026, 06:02 PM UTC

    This incident was caused by a database change that was improperly rolled out. Quay engineers were deploying an update that dropped a no-longer-used table; however, it was deployed all at once rather than in phases. That meant our existing deployment began crashing once the table was dropped, causing API failures, since the table was queried on every request to the API. The SRE team had to manually scale down the old deployment to allow the newer one to progress, which restored API functionality. Going forward, we are reviewing our process for database migrations and implementing additional checks to ensure that these migrations happen in two steps: application update first and database migration second.
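
The two-step process the postmortem commits to is the classic expand/contract migration pattern: first ship an application release that stops querying the table, then drop the table in a later step once no running code references it. Below is a minimal sketch, assuming SQLAlchemy and a hypothetical legacy_builds table; neither reflects Quay's actual schema or tooling.

```python
# Two-phase ("expand/contract") schema change, sketched with SQLAlchemy.
# The table name legacy_builds and the DSN are hypothetical.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://quay@db.example.internal/quay")


def phase_one_application_release():
    # Ships with the application deploy: the new code no longer queries
    # legacy_builds. No schema change happens yet, so pods still running
    # the old code keep working against the intact table.
    pass


def phase_two_database_migration():
    # Runs only after every replica of the old deployment has been
    # replaced. Dropping the table is now safe because nothing queries
    # it, avoiding the crash loop a single-step rollout can cause.
    with engine.begin() as conn:
        conn.execute(text("DROP TABLE IF EXISTS legacy_builds"))
```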

Read the full incident report →

Major February 10, 2026

Hybrid Cloud Console Notifications partial outage

Detected by Pingoru
Feb 10, 2026, 10:29 AM UTC
Resolved
Feb 10, 2026, 04:29 PM UTC
Duration
6h
Affected: Red Hat Lightspeed - Notifications
Timeline · 3 updates
  1. investigating Feb 10, 2026, 10:29 AM UTC

    We are investigating a partial outage of the Hybrid Cloud Console Notifications service. All notifications sent by email or to Event-Driven Ansible, Google Chat, Microsoft Teams, PagerDuty, ServiceNow, Slack, Splunk, or webhooks are currently interrupted. The Notifications UI remains operational. Our team is working to resolve this and will provide updates as they become available.

  2. investigating Feb 10, 2026, 01:45 PM UTC

    All notifications - including Email, Event-Driven Ansible, Google Chat, Microsoft Teams, PagerDuty, ServiceNow, Slack, Splunk and Webhook - have been restored to normal operation. Notifications from 09:22 UTC (03:22 ET) – 13:28 UTC (07:28 ET): These messages were delayed but have now been successfully delivered. Notifications from 09:02 UTC (03:02 ET) – 09:21 UTC (03:21 ET): Our team is still working to resolve an issue affecting delivery for this specific timeframe. We will provide a final update once the remaining gap is addressed.

  3. resolved Feb 10, 2026, 04:29 PM UTC

    This incident has been resolved.

Read the full incident report →

Critical February 9, 2026

Quay.io UI Could Not Be Loaded

Detected by Pingoru
Feb 09, 2026, 09:43 PM UTC
Resolved
Feb 09, 2026, 10:24 PM UTC
Duration
41m
Affected: Frontend
Timeline · 4 updates
  1. investigating Feb 09, 2026, 09:43 PM UTC

    We are currently investigating this issue.

  2. identified Feb 09, 2026, 10:01 PM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Feb 09, 2026, 10:11 PM UTC

    An issue was identified with a recent production deploy. We have reverted the change and initiated a new deploy. The changes are rolling out now and we have observed a decrease in failures.

  4. resolved Feb 09, 2026, 10:24 PM UTC

    This incident has been resolved.

Read the full incident report →

Major February 5, 2026

ROSA [Classic & HCP] Y-Stream Upgrades Blocked

Detected by Pingoru
Feb 05, 2026, 03:41 AM UTC
Resolved
Feb 05, 2026, 12:08 PM UTC
Duration
8h 27m
Affected: OpenShift Cluster Manager
Timeline · 2 updates
  1. investigating Feb 05, 2026, 03:41 AM UTC

    We are currently investigating an issue where ROSA Classic and HCP clusters cannot schedule Y-Stream upgrades. Our team is working to resolve this and will provide updates as they become available. Note: Any Y-Stream upgrades already scheduled will be run.

  2. resolved Feb 05, 2026, 12:08 PM UTC

    This incident has been resolved.

Read the full incident report →

Critical February 4, 2026

Customer Portal Downloads Timeouts

Detected by Pingoru
Feb 04, 2026, 12:13 AM UTC
Resolved
Feb 04, 2026, 01:26 PM UTC
Duration
13h 13m
Affected: Downloads
Timeline · 2 updates
  1. investigating Feb 04, 2026, 12:13 AM UTC

    We're experiencing latency issues with a download service used by the Downloads portion of the Customer Portal. Pages may fail to load while that service is triaged.

  2. resolved Feb 04, 2026, 01:26 PM UTC

    This incident has been resolved.

Read the full incident report →

Looking to track Quay downtime and outages?

Pingoru polls Quay's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Quay reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Quay alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Quay for free

5 free monitors · No credit card required