Is GitHub Packages down?

Last checked 8h ago
Current status
GitHub Packages is up

No incidents right now.

Official status page: https://www.githubstatus.com · Polled every 5 minutes · 1 component tracked

GitHub Packages is operational right now. Last checked 8h ago; the most recent incident resolved 12d ago.

Real-time GitHub Packages status, recent outages, and incident history, pulled directly from the official GitHub status page at https://www.githubstatus.com every 5 minutes. Pingoru tracks 1 GitHub Packages service and has captured 4 incidents in the last 90 days (95.56% uptime). Get email, Slack, Discord, or webhook alerts the moment GitHub Packages reports a new incident: free for 5 monitors, no credit card required.
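The same check can be scripted by hand: githubstatus.com is a Statuspage-hosted site, so it exposes the standard Statuspage v2 JSON API. A minimal Python sketch, assuming the standard `/api/v2/components.json` endpoint and payload shape; `fetch_components` and `packages_status` are illustrative names, not Pingoru's code:

```python
import json
from urllib.request import urlopen

# Standard Statuspage v2 endpoint (assumed: githubstatus.com is a
# Statuspage-hosted site, so this path follows the public format).
STATUS_URL = "https://www.githubstatus.com/api/v2/components.json"


def fetch_components(url: str = STATUS_URL) -> dict:
    """Fetch the components payload (requires network access)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)


def packages_status(payload: dict) -> str:
    """Return the reported status string for the Packages component.

    The payload shape is {"components": [{"name": ..., "status": ...}]},
    where status is e.g. "operational" or "degraded_performance".
    """
    for component in payload.get("components", []):
        if component.get("name") == "Packages":
            return component.get("status", "unknown")
    return "unknown"


# Usage (live): print(packages_status(fetch_components()))
```

Polling this endpoint every 5 minutes, as Pingoru does, is how a status change such as "degraded_performance" would be detected.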

GitHub Packages uptime: 95.56% over the past 90 days (Feb–May daily uptime calendar)

Recent outages & incidents

Past 90 days
  1. Resolved 54m
    Started May 04, 2026, 03:45 PM UTC · Resolved May 04, 2026, 04:40 PM UTC
    Git Operations · Webhooks · Issues · Pull Requests · Actions · Packages · Pages · Codespaces
    Timeline · 19 updates
    • investigating · May 04, 2026, 03:45 PM UTC

      We are investigating reports of degraded performance for Issues and Webhooks

    • investigating · May 04, 2026, 03:48 PM UTC

      We are investigating increased latency and timeouts across multiple GitHub services.

    • investigating · May 04, 2026, 03:48 PM UTC

      Git Operations is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 03:50 PM UTC

      Packages is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 03:51 PM UTC

      Pull Requests is experiencing degraded availability. We are continuing to investigate.

    • investigating · May 04, 2026, 03:51 PM UTC

      Actions is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 03:56 PM UTC

      Pull Requests is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 04:05 PM UTC

      Codespaces is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 04:06 PM UTC

      Pages is experiencing degraded performance. We are continuing to investigate.

    • investigating · May 04, 2026, 04:25 PM UTC

      Git Operations is operating normally.

    • investigating · May 04, 2026, 04:28 PM UTC

      Actions and Packages are operating normally.

    • investigating · May 04, 2026, 04:29 PM UTC

      Latency across services has normalized. We are continuing to investigate the root cause and prevent recurrence.

    • investigating · May 04, 2026, 04:29 PM UTC

      Pages is operating normally.

    • investigating · May 04, 2026, 04:32 PM UTC

      Pull Requests is operating normally.

    • investigating · May 04, 2026, 04:34 PM UTC

      The degradation affecting Issues has been mitigated. We are monitoring to ensure stability.

    • investigating · May 04, 2026, 04:35 PM UTC

      The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.

    • investigating · May 04, 2026, 04:35 PM UTC

      Webhooks is operating normally.

    • monitoring · May 04, 2026, 04:36 PM UTC

      The degradation has been mitigated. We are monitoring to ensure stability.

    • resolved · May 04, 2026, 04:40 PM UTC

      This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

  2. Resolved 6h 15m
    Started Apr 27, 2026, 04:31 PM UTC · Resolved Apr 27, 2026, 10:46 PM UTC
    Issues · Pull Requests · Actions · Packages
    Timeline · 15 updates
    • investigating · Apr 27, 2026, 04:31 PM UTC

      We are investigating reports of degraded performance for Actions

    • investigating · Apr 27, 2026, 04:33 PM UTC

      Customers across GitHub are experiencing failures with searches. Examples include: workflow run failures, projects failing to load, and timed out search requests. This is due to an ongoing infrastructure issue that we have been investigating.

    • investigating · Apr 27, 2026, 04:36 PM UTC

      Issues is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 27, 2026, 04:39 PM UTC

      Packages is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 27, 2026, 04:53 PM UTC

      Pull Requests is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 27, 2026, 05:35 PM UTC

      Users are experiencing intermittent failures to view issues, pull requests, projects and Actions workflow runs. We are still investigating and attempting mitigations. We will provide further updates.

    • investigating · Apr 27, 2026, 06:17 PM UTC

      We're continuing to see connectivity issues reaching Elasticsearch. Impact on downstream services will be intermittent as we find the root cause.

    • investigating · Apr 27, 2026, 06:19 PM UTC

      Pull Requests is experiencing degraded availability. We are continuing to investigate.

    • investigating · Apr 27, 2026, 07:50 PM UTC

      We have identified the source of the additional load causing stress on our Elasticsearch clusters. We have disabled the source of that load and are seeing signs of recovery.

    • investigating · Apr 27, 2026, 08:06 PM UTC

      Pull Requests is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 27, 2026, 09:32 PM UTC

      We've applied a mitigation and are continuing to monitor.

    • monitoring · Apr 27, 2026, 09:33 PM UTC

      The degradation affecting Actions, Issues, Packages and Pull Requests has been mitigated. We are monitoring to ensure stability.

    • monitoring · Apr 27, 2026, 10:35 PM UTC

      The degradation has been mitigated. We are monitoring to ensure stability.

    • monitoring · Apr 27, 2026, 10:44 PM UTC

      The degradation has been mitigated. We are monitoring to ensure stability.

    • resolved · Apr 27, 2026, 10:46 PM UTC

      This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

  3. Resolved 38m
    Started Apr 23, 2026, 02:40 PM UTC · Resolved Apr 23, 2026, 03:18 PM UTC
    Actions · Packages · Codespaces · Copilot
    Timeline · 8 updates
    • investigating · Apr 23, 2026, 02:40 PM UTC

      We are investigating reports of degraded performance for Actions

    • investigating · Apr 23, 2026, 02:41 PM UTC

      Packages is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 23, 2026, 02:42 PM UTC

      Codespaces is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 23, 2026, 02:44 PM UTC

      Copilot is experiencing degraded performance. We are continuing to investigate.

    • investigating · Apr 23, 2026, 02:51 PM UTC

      Users are experiencing errors loading various web pages on github.com. Actions and Copilot Cloud Agent runs will be delayed.

    • investigating · Apr 23, 2026, 03:02 PM UTC

      A mitigation was applied and services have recovered. Actions is working through queued work before fully recovering.

    • monitoring · Apr 23, 2026, 03:02 PM UTC

      The degradation affecting Actions, Codespaces, Copilot and Packages has been mitigated. We are monitoring to ensure stability.

    • resolved · Apr 23, 2026, 03:18 PM UTC

      This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

  4. Resolved 1h 2m
    Started Mar 13, 2026, 03:12 PM UTC · Resolved Mar 13, 2026, 04:15 PM UTC
    Issues · Actions · Packages
    Timeline · 6 updates
    • investigating · Mar 13, 2026, 03:12 PM UTC

      We are investigating reports of degraded performance for Actions and Issues

    • investigating · Mar 13, 2026, 03:14 PM UTC

      We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.

    • investigating · Mar 13, 2026, 03:20 PM UTC

      Packages is experiencing degraded performance. We are continuing to investigate.

    • investigating · Mar 13, 2026, 03:47 PM UTC

      We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.

    • investigating · Mar 13, 2026, 04:02 PM UTC

      We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.

    • resolved · Mar 13, 2026, 04:15 PM UTC

      On March 13, 2026, between 13:35 UTC and 16:02 UTC, a configuration change to an internal authorization service reduced its processing capacity below what was needed during peak traffic. This caused intermittent timeouts when other GitHub services checked user permissions, resulting in four to five waves of errors over roughly two hours and forty minutes. In total, 0.4% of users were denied access to actions they were authorized to perform.

      The root cause was a resource right-sizing change deployed to the authorization service the previous day. It reduced CPU allocation below what was required at peak, causing the service's network gateway to throttle under load. Because the change was deployed after peak traffic on March 12, the reduced capacity wasn't surfaced until the next day's peak.

      The incident was mitigated by manually scaling up the authorization service and reverting the configuration change. To prevent recurrence, we are adding further resource utilization monitors across our entire stack to detect throttling, and improving error handling so transient infrastructure timeouts are distinguished from authorization failures, enabling quicker detection of the root issue.

Outage history

Past 90 days · 4 incidents