GitHub Outage History

GitHub is up right now

There have been 46 GitHub outages since February 27, 2026, totaling 144h 41m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://www.githubstatus.com

Minor April 28, 2026

Incomplete pull request results in repositories

Detected by Pingoru
Apr 28, 2026, 02:17 PM UTC
Resolved
May 01, 2026, 04:15 AM UTC
Duration
2d 13h
Affected: Pull Requests
Timeline · 10 updates
  1. investigating Apr 28, 2026, 02:17 PM UTC

    We are investigating reports of degraded performance for Pull Requests

  2. investigating Apr 28, 2026, 02:51 PM UTC

    After yesterday’s incident, we are investigating cases where /pulls and /repo/pulls pages are not showing all indexed pull requests. This is because our Elasticsearch cluster does not currently contain all indexed documents. No pull request data has been lost. As pull requests are updated, they will be reindexed. We are also working on accelerating a full reindex so these pages return complete results again.

  3. investigating Apr 28, 2026, 03:58 PM UTC

    We are actively reindexing the remaining Elasticsearch indexes. Our priority is ensuring correctness and avoiding further impact. We are taking a measured approach to safely backfill data and will share additional updates as progress continues.

  4. investigating Apr 28, 2026, 09:43 PM UTC

    Elasticsearch reindexing of pull requests is continuing. All data is preserved, but may not be available on pages relying on Elasticsearch until the reindex is complete. Pages and APIs that do not rely on Elasticsearch, including the GitHub CLI (gh pr list) and API (/repos/{owner}/{repo}/pulls), are not impacted and can be used to retrieve pull request data in the interim (see the example after this entry).

  5. investigating Apr 28, 2026, 10:46 PM UTC

    We have made an interim mitigation to improve availability for some impacted repositories while reindexing continues, and we are actively monitoring the indexing progress.

  6. investigating Apr 29, 2026, 12:40 AM UTC

    Mitigation is in progress, with full recovery of impacted pull request listings expected within approximately 24 hours.

  7. investigating Apr 29, 2026, 10:22 PM UTC

    We have restored search/indexing functionality for over 99% of impacted pull requests. We are continuing to address the remaining affected pull requests and are reviewing outstanding gaps as part of the restoration process.

  8. investigating Apr 30, 2026, 03:49 AM UTC

    We have repaired the missing search records for affected Pull Requests and are working to identify and repair records left in a stale state after the recovery.

  9. investigating May 01, 2026, 04:11 AM UTC

    This incident has been resolved. Search and indexing functionality for pull requests are now fully restored. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

  10. resolved May 01, 2026, 04:15 AM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →
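
As noted in update 4 above, the REST API path /repos/{owner}/{repo}/pulls is served without the Elasticsearch index, so it remained a reliable way to list pull requests while the reindex ran. Below is a minimal sketch in Go of calling that endpoint; the octocat/hello-world repository and the GITHUB_TOKEN environment variable are illustrative placeholders, not part of the incident report.

    // list_prs.go: fetch open pull requests directly from the REST API,
    // bypassing the search index that was degraded during this incident.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
    )

    type pullRequest struct {
        Number int    `json:"number"`
        Title  string `json:"title"`
        State  string `json:"state"`
    }

    func main() {
        // Placeholder repository; substitute your own owner/repo.
        url := "https://api.github.com/repos/octocat/hello-world/pulls?state=open&per_page=100"
        req, err := http.NewRequest("GET", url, nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Accept", "application/vnd.github+json")
        // A token is only required for private repositories or higher rate limits.
        if token := os.Getenv("GITHUB_TOKEN"); token != "" {
            req.Header.Set("Authorization", "Bearer "+token)
        }

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var prs []pullRequest
        if err := json.NewDecoder(resp.Body).Decode(&prs); err != nil {
            log.Fatal(err)
        }
        for _, pr := range prs {
            fmt.Printf("#%d [%s] %s\n", pr.Number, pr.State, pr.Title)
        }
    }

The GitHub CLI command mentioned in the same update, gh pr list, returns equivalent data from the command line and was likewise unaffected.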

Minor April 28, 2026

Disruption with some GitHub services

Detected by Pingoru
Apr 28, 2026, 01:59 PM UTC
Resolved
Apr 28, 2026, 05:09 PM UTC
Duration
3h 9m
Affected: Actions
Timeline · 9 updates
  1. investigating Apr 28, 2026, 01:59 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. monitoring Apr 28, 2026, 01:59 PM UTC

    Actions is experiencing capacity constraints with hosted ubuntu-latest and ubuntu-24.04, leading to high wait times. Other hosted labels and self-hosted runners are not impacted.

  3. investigating Apr 28, 2026, 02:02 PM UTC

    Actions is experiencing degraded performance. We are continuing to investigate.

  4. investigating Apr 28, 2026, 02:49 PM UTC

    We're still investigating the root cause of run start delays and failures for Actions hosted Ubuntu jobs; around 5% of jobs are impacted as of now.

  5. investigating Apr 28, 2026, 03:20 PM UTC

    We've applied a mitigation to unblock running Actions. We're continuing to monitor.

  6. investigating Apr 28, 2026, 03:41 PM UTC

    Currently less than 2% of hosted ubuntu-latest and ubuntu-24.04 runs are delayed or failing. We are continuing to monitor for full recovery.

  7. investigating Apr 28, 2026, 04:36 PM UTC

    Less than 1% of hosted ubuntu-latest runs are delayed. We’re working through remaining steps to restore runner capacity.

  8. monitoring Apr 28, 2026, 05:08 PM UTC

    Actions is operating normally.

  9. resolved Apr 28, 2026, 05:09 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Critical April 27, 2026

GitHub search is degraded

Detected by Pingoru
Apr 27, 2026, 04:31 PM UTC
Resolved
Apr 27, 2026, 10:46 PM UTC
Duration
6h 15m
Affected: Issues, Pull Requests, Actions, Packages
Timeline · 15 updates
  1. investigating Apr 27, 2026, 04:31 PM UTC

    We are investigating reports of degraded performance for Actions

  2. investigating Apr 27, 2026, 04:33 PM UTC

    Customers across GitHub are experiencing failures with searches. Examples include: workflow run failures, projects failing to load, and timed out search requests. This is due to an ongoing infrastructure issue that we have been investigating.

  3. investigating Apr 27, 2026, 04:36 PM UTC

    Issues is experiencing degraded performance. We are continuing to investigate.

  4. investigating Apr 27, 2026, 04:39 PM UTC

    Packages is experiencing degraded performance. We are continuing to investigate.

  5. investigating Apr 27, 2026, 04:53 PM UTC

    Pull Requests is experiencing degraded performance. We are continuing to investigate.

  6. investigating Apr 27, 2026, 05:35 PM UTC

    Users are experiencing intermittent failures to view issues, pull requests, projects and Actions workflow runs. We are still investigating and attempting mitigations. We will provide further updates.

  7. investigating Apr 27, 2026, 06:17 PM UTC

    We're continuing to see connectivity issues reaching Elasticsearch. Impact on downstream services will be intermittent as we find the root cause.

  8. investigating Apr 27, 2026, 06:19 PM UTC

    Pull Requests is experiencing degraded availability. We are continuing to investigate.

  9. investigating Apr 27, 2026, 07:50 PM UTC

    We have identified the source of the additional load causing stress on our Elasticsearch clusters. We have disabled the source of that load and are seeing signs of recovery.

  10. investigating Apr 27, 2026, 08:06 PM UTC

    Pull Requests is experiencing degraded performance. We are continuing to investigate.

  11. investigating Apr 27, 2026, 09:32 PM UTC

    We've applied a mitigation and are continuing to monitor.

  12. monitoring Apr 27, 2026, 09:33 PM UTC

    The degradation affecting Actions, Issues, Packages and Pull Requests has been mitigated. We are monitoring to ensure stability.

  13. monitoring Apr 27, 2026, 10:35 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  14. monitoring Apr 27, 2026, 10:44 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  15. resolved Apr 27, 2026, 10:46 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Minor April 24, 2026

Delays with Actions Jobs for Larger Runners using VNet Injection in the East US region

Detected by Pingoru
Apr 24, 2026, 07:02 PM UTC
Resolved
Apr 25, 2026, 12:36 AM UTC
Duration
5h 33m
Timeline · 4 updates
  1. investigating Apr 24, 2026, 07:02 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 24, 2026, 07:09 PM UTC

    We are investigating reports of degraded performance for Larger Runners with vnet injection in East US and we are working with our service provider on mitigation.

  3. investigating Apr 24, 2026, 07:14 PM UTC

    This is related to the public Azure incident "Multiservice impact for Azure Workloads in East US" shared at https://azure.status.microsoft/

  4. resolved Apr 25, 2026, 12:36 AM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Minor April 23, 2026

Incident with Pull Requests

Detected by Pingoru
Apr 23, 2026, 09:43 PM UTC
Resolved
Apr 23, 2026, 09:43 PM UTC
Duration
Affected: Pull Requests
Timeline · 5 updates
  1. investigating Apr 23, 2026, 07:50 PM UTC

    We are investigating reports of degraded performance for Pull Requests

  2. investigating Apr 23, 2026, 07:58 PM UTC

    We have identified a regression in merge queue behavior when squash merging or rebasing. We have identified the root cause and are in the process of reverting the change.

  3. investigating Apr 23, 2026, 08:47 PM UTC

    Pull Requests is operating normally.

  4. investigating Apr 23, 2026, 09:18 PM UTC

    We have resolved a regression present when using merge queue with either squash merges or rebases. If you use merge queue in this configuration, some pull requests may have been merged incorrectly between 2026-04-23 16:05-20:43 UTC. This behavior is still present in GitHub Enterprise Cloud with Data Residency, and we are rolling out the same fix.

  5. resolved Apr 23, 2026, 09:43 PM UTC

    This incident has been resolved and we've contacted repository administrators via email who have impacted branches. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will also be shared as soon as it is available.

Read the full incident report →

Minor April 23, 2026

Disruption with users unable to start Claude and Codex agent tasks from the web

Detected by Pingoru
Apr 23, 2026, 07:28 PM UTC
Resolved
Apr 23, 2026, 07:42 PM UTC
Duration
13m
Timeline · 3 updates
  1. investigating Apr 23, 2026, 07:28 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 23, 2026, 07:33 PM UTC

    We have identified the root cause of the issue and are working on mitigation.

  3. resolved Apr 23, 2026, 07:42 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Critical April 23, 2026

Incident with multiple GitHub services

Detected by Pingoru
Apr 23, 2026, 04:12 PM UTC
Resolved
Apr 23, 2026, 05:30 PM UTC
Duration
1h 18m
Affected: Webhooks, Actions, Copilot
Timeline · 8 updates
  1. investigating Apr 23, 2026, 04:12 PM UTC

    We are investigating reports of degraded availability for Copilot and Webhooks

  2. investigating Apr 23, 2026, 04:19 PM UTC

    We are investigating multiple unavailable services.

  3. investigating Apr 23, 2026, 04:34 PM UTC

    Actions is experiencing degraded performance. We are continuing to investigate.

  4. investigating Apr 23, 2026, 04:52 PM UTC

    We have identified the root problem and are working on mitigation.

  5. investigating Apr 23, 2026, 05:03 PM UTC

    The degradation affecting Actions and Copilot has been mitigated. We are monitoring to ensure stability.

  6. investigating Apr 23, 2026, 05:04 PM UTC

    Many services are mitigated and we are validating the remaining services.

  7. investigating Apr 23, 2026, 05:10 PM UTC

    Webhooks is operating normally.

  8. resolved Apr 23, 2026, 05:30 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Minor April 23, 2026

Investigating errors on GitHub

Detected by Pingoru
Apr 23, 2026, 02:40 PM UTC
Resolved
Apr 23, 2026, 03:18 PM UTC
Duration
38m
Affected: Actions, Packages, Codespaces, Copilot
Timeline · 8 updates
  1. investigating Apr 23, 2026, 02:40 PM UTC

    We are investigating reports of degraded performance for Actions

  2. investigating Apr 23, 2026, 02:41 PM UTC

    Packages is experiencing degraded performance. We are continuing to investigate.

  3. investigating Apr 23, 2026, 02:42 PM UTC

    Codespaces is experiencing degraded performance. We are continuing to investigate.

  4. investigating Apr 23, 2026, 02:44 PM UTC

    Copilot is experiencing degraded performance. We are continuing to investigate.

  5. investigating Apr 23, 2026, 02:51 PM UTC

    Users are experiencing errors loading various web pages on github.com. Actions and Copilot Cloud Agent runs will be delayed.

  6. investigating Apr 23, 2026, 03:02 PM UTC

    A mitigation was applied and services have recovered. Actions is working through queued work before fully recovering.

  7. monitoring Apr 23, 2026, 03:02 PM UTC

    The degradation affecting Actions, Codespaces, Copilot and Packages has been mitigated. We are monitoring to ensure stability.

  8. resolved Apr 23, 2026, 03:18 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Critical April 22, 2026

Disruption with Copilot chat and Copilot Coding Agent

Detected by Pingoru
Apr 22, 2026, 03:35 PM UTC
Resolved
Apr 22, 2026, 07:18 PM UTC
Duration
3h 42m
Timeline · 8 updates
  1. investigating Apr 22, 2026, 03:35 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 22, 2026, 03:43 PM UTC

    We are aware of users seeing errors interacting with Copilot chat on github.com and Copilot cloud agent. We have identified the cause and are investigating remediations.

  3. investigating Apr 22, 2026, 04:24 PM UTC

    We continue to work on mitigation for Copilot chat and cloud agent.

  4. investigating Apr 22, 2026, 04:58 PM UTC

    Mitigation is progressing for Copilot chat and cloud agent.

  5. investigating Apr 22, 2026, 05:40 PM UTC

    Mitigation is progressing for Copilot chat and cloud agent recovery.

  6. investigating Apr 22, 2026, 05:49 PM UTC

    We are now seeing recovery for Copilot cloud agent.

  7. investigating Apr 22, 2026, 06:05 PM UTC

    Copilot cloud agent and chat are mitigated for github.com.

  8. resolved Apr 22, 2026, 07:18 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Minor April 21, 2026

Disruption with projects service

Detected by Pingoru
Apr 21, 2026, 03:03 PM UTC
Resolved
Apr 22, 2026, 01:24 AM UTC
Duration
10h 20m
Timeline · 11 updates
  1. investigating Apr 21, 2026, 03:03 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 21, 2026, 03:07 PM UTC

    We are investigating reports of delays with GitHub Projects. Users may notice that changes made to projects are not reflected immediately. Our team has identified the source of the delays and is actively working to resolve the issue.

  3. investigating Apr 21, 2026, 03:42 PM UTC

    We continue to investigate delays with GitHub Projects where changes may not be reflected immediately. Our team has identified the cause and applied mitigations to address the issue. We are seeing initial signs of recovery, though some delays may persist as the system works through a backlog of pending updates.

  4. investigating Apr 21, 2026, 04:18 PM UTC

    We are deploying a fix to relieve the queue of delayed data. Some users may still experience delays with GitHub Projects where changes are not reflected immediately as remaining backlogs are processed.

  5. investigating Apr 21, 2026, 04:45 PM UTC

    The mitigation is deployed and we are seeing recovery in the queues. We will provide an update on when full recovery is expected.

  6. investigating Apr 21, 2026, 05:21 PM UTC

    The queues are continuing to decrease and we are working to accelerate the rate of processing through the queues.

  7. investigating Apr 21, 2026, 07:41 PM UTC

    Recovery from the delays affecting GitHub Projects continues to progress. We have deployed additional mitigations that are accelerating processing of the backlog. Users may still experience delays where changes to projects are not reflected immediately. We expect full recovery within approximately six hours.

  8. monitoring Apr 21, 2026, 09:15 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  9. monitoring Apr 21, 2026, 10:49 PM UTC

    The issue remains mitigated. Users may still experience delays in changes to projects while we process the backlog of events. We expect a full recovery in approximately three hours.

  10. monitoring Apr 22, 2026, 12:00 AM UTC

    The issue remains mitigated. Users may still experience small delays in changes to projects while we process the backlog of events. We expect a full recovery in approximately two hours.

  11. resolved Apr 22, 2026, 01:24 AM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Major April 20, 2026

Partial degradation for code scanning default setup and for code quality

Detected by Pingoru
Apr 20, 2026, 01:28 PM UTC
Resolved
Apr 21, 2026, 05:04 AM UTC
Duration
15h 36m
Timeline · 15 updates
  1. investigating Apr 20, 2026, 01:28 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 20, 2026, 01:57 PM UTC

    We are actively working to mitigate an issue affecting code scanning default setup and code quality features on pull requests. Users may experience pull request code scanning and code quality analyses not being triggered on new pull requests. Our engineering team has identified the root cause and is working on mitigating the issue.

  3. investigating Apr 20, 2026, 02:38 PM UTC

    We are currently deploying mitigations that should unblock code scanning default setup and code quality features on pull requests.

  4. investigating Apr 20, 2026, 03:20 PM UTC

    We are continuing to work on a mitigation to unblock code scanning default setup and code quality features on pull requests.

  5. investigating Apr 20, 2026, 04:16 PM UTC

    Code scanning default setup and Code Quality triggers are back up and running. PRs not processed before or during this incident will require a new push to trigger code scanning or code quality analysis. We are seeing problems with new issues not showing on project boards and are working on mitigation.

  6. investigating Apr 20, 2026, 04:48 PM UTC

    We continue to work on mitigation regarding new issues not showing on project boards.

  7. investigating Apr 20, 2026, 05:32 PM UTC

    We continue to work on mitigation regarding new issues not showing on project boards.

  8. investigating Apr 20, 2026, 06:08 PM UTC

    A deployment to fix the issue of new issues not showing up in projects is underway.

  9. investigating Apr 20, 2026, 06:20 PM UTC

    The issue has been mitigated. Newly created issues linked to projects should now function as expected. Issues that were linked to projects during the incident may take approximately five hours to render correctly while we complete a re-index.

  10. monitoring Apr 20, 2026, 06:20 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  11. monitoring Apr 20, 2026, 06:21 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  12. monitoring Apr 20, 2026, 09:36 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  13. monitoring Apr 21, 2026, 03:10 AM UTC

    The issue remains mitigated. Issues that were linked to projects during the incident may take approximately three more hours to render correctly while we complete a re-index.

  14. monitoring Apr 21, 2026, 04:18 AM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  15. resolved Apr 21, 2026, 05:04 AM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Read the full incident report →

Major April 16, 2026

Incident with Codespaces

Detected by Pingoru
Apr 16, 2026, 03:06 PM UTC
Resolved
Apr 16, 2026, 06:28 PM UTC
Duration
3h 21m
Affected: Codespaces
Timeline · 7 updates
  1. investigating Apr 16, 2026, 03:06 PM UTC

    We are investigating reports of degraded performance for Codespaces

  2. investigating Apr 16, 2026, 03:08 PM UTC

    We are experiencing degraded performance in Codespaces related to creating a new Codespace or starting an existing Codespace from the VS Code editor. SSH connections to Codespaces are not impacted. We are working toward mitigation and will continue to keep you updated on progress.

  3. investigating Apr 16, 2026, 03:41 PM UTC

    Codespaces is experiencing degraded availability. We are continuing to investigate.

  4. investigating Apr 16, 2026, 03:49 PM UTC

    We found an issue that impacts 70% of Codespaces. We are engaged with the provider and working towards mitigation.

  5. investigating Apr 16, 2026, 04:37 PM UTC

    Our provider is implementing a mitigation and we are seeing signs of recovery.

  6. monitoring Apr 16, 2026, 06:22 PM UTC

    The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.

  7. resolved Apr 16, 2026, 06:28 PM UTC

    On April 16, 2026 between 09:30 UTC and 17:15 UTC, users experienced failures when attempting to connect to GitHub Codespaces via the VS Code editor. During this time, approximately 40% of codespace start operations failed. Users connecting via SSH were not impacted. The issue was caused by a failure in an upstream download service that prevented the VS Code Server from being retrieved during codespace startup. The impact was mitigated by implementing a workaround to use an alternative download path when the primary endpoint is degraded. We are working with the upstream dependency to address the root cause of the download service failure, and we are improving our fallback mechanisms to reduce the impact of similar upstream failures in the future.

Read the full incident report →

Major April 13, 2026

Incident with Pages

Detected by Pingoru
Apr 13, 2026, 07:56 PM UTC
Resolved
Apr 13, 2026, 08:35 PM UTC
Duration
39m
Affected: Pages
Timeline · 5 updates
  1. investigating Apr 13, 2026, 07:56 PM UTC

    We are investigating reports of degraded availability for Pages

  2. investigating Apr 13, 2026, 07:57 PM UTC

    We are investigating reports of issues with Pages. We will continue to keep users updated on progress towards mitigation.

  3. monitoring Apr 13, 2026, 08:30 PM UTC

    The degradation affecting Pages has been mitigated. We are monitoring to ensure stability.

  4. monitoring Apr 13, 2026, 08:32 PM UTC

    We have mitigated the issue with Pages.

  5. resolved Apr 13, 2026, 08:35 PM UTC

    On Sunday April 13th, 2026, between 18:53 UTC and 20:30 UTC, the GitHub Pages service experienced elevated error rates. On average, the error rate was 10.58% and peaked at 12.77% of requests to the service, resulting in approximately 17.5 million failed requests returning HTTP 500 errors. This was due to an automated DNS management tool (octodns) erroneously deleting a DNS record for a Pages backend storage host after its upstream data source intermittently failed to return the record, causing the tool to treat it as stale and remove it. We mitigated the incident by re-creating the deleted DNS record. To prevent future incidents, we are implementing availability-zone-tolerant routing in the Pages frontend so that an unresolvable backend host triggers failover to healthy hosts rather than returning errors, adding safeguards to prevent automated deletion of DNS records owned by other systems, and improving logging and alerting for DNS resolution failures in the Pages serving path.

Read the full incident report →
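
The remediation items above include logging and alerting for DNS resolution failures in the Pages serving path. As a rough illustration of that kind of check (not GitHub's implementation, and with a placeholder hostname), a small Go probe might look like this:

    // dns_probe.go: hypothetical sketch of a DNS-resolution health check.
    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // Placeholder hostname; not an actual Pages backend host.
        host := "pages-backend.example.internal"
        for {
            addrs, err := net.LookupHost(host)
            if err != nil {
                // A real check would page on-call rather than just log.
                log.Printf("ALERT: %s failed to resolve: %v", host, err)
            } else {
                log.Printf("%s resolves to %v", host, addrs)
            }
            time.Sleep(30 * time.Second)
        }
    }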

Minor April 10, 2026

Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard

Detected by Pingoru
Apr 10, 2026, 01:07 PM UTC
Resolved
Apr 10, 2026, 01:28 PM UTC
Duration
21m
Timeline · 3 updates
  1. investigating Apr 10, 2026, 01:07 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 10, 2026, 01:08 PM UTC

    We are investigating third party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.

  3. resolved Apr 10, 2026, 01:28 PM UTC

    On April 9, 2026, between 22:59 UTC and April 10, 2026, 13:24 UTC, the Copilot Mission Control service was degraded and did not display Claude and Codex Cloud Agent sessions in the agents tab dashboard. Customers were unable to see, list, or manage their third party agent sessions during this period. The underlying agent sessions continued to function normally. This was a visibility and management issue only, and no HTTP errors were generated. The API returned successful responses with incomplete results, with an average error rate of 0% and a maximum error rate of 0%. This was due to a code change that introduced a filter which inadvertently excluded third party agent sessions. We mitigated the incident by reverting the problematic code change and deploying the fix to production. We are working to add automated monitoring for dashboard content visibility and improve integration test coverage for third party agent session listing to reduce our time to detection and mitigation of issues like this one in the future.

Read the full incident report →

Minor April 9, 2026

Disruption with GitHub notifications

Detected by Pingoru
Apr 09, 2026, 04:42 AM UTC
Resolved
Apr 09, 2026, 04:57 AM UTC
Duration
15m
Timeline · 3 updates
  1. investigating Apr 09, 2026, 04:42 AM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. monitoring Apr 09, 2026, 04:57 AM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  3. resolved Apr 09, 2026, 04:57 AM UTC

    On April 9, 2026, between 03:22 UTC and 04:49 UTC, GitHub Notifications experienced degraded availability. During this time, approximately 45% of requests to the notifications service returned errors, with a peak error rate of approximately 54%, preventing affected users from successfully viewing or interacting with their notifications service. The issue was identified and resolved, restoring the service to full availability. We are working to improve our metrics to reduce time to detection and mitigation for similar issues in the future.

Read the full incident report →

Minor April 2, 2026

Copilot Coding Agent failing to start some jobs

Detected by Pingoru
Apr 02, 2026, 04:18 PM UTC
Resolved
Apr 02, 2026, 04:30 PM UTC
Duration
12m
Timeline · 3 updates
  1. investigating Apr 02, 2026, 04:18 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 02, 2026, 04:28 PM UTC

    When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.

  3. resolved Apr 02, 2026, 04:30 PM UTC

    Between 15:20 and 20:18 UTC on Thursday April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs that were already in the queue were able to clear, we didn't see additional instances of rate limiting afterwards. This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k

Read the full incident report →

Major April 1, 2026

GitHub audit logs are unavailable

Detected by Pingoru
Apr 01, 2026, 04:06 PM UTC
Resolved
Apr 01, 2026, 04:10 PM UTC
Duration
4m
Timeline · 3 updates
  1. investigating Apr 01, 2026, 04:06 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 01, 2026, 04:07 PM UTC

    A routine credential rotation has failed for our audit logs service; we have re-deployed our service and are waiting for recovery.

  3. resolved Apr 01, 2026, 04:10 PM UTC

    On April 1, 2026, between 15:34 UTC and 16:02 UTC, our audit log service lost connectivity to its backing data store due to a failed credential rotation. During this 28-minute window, audit log history was unavailable via both the API and web UI. This resulted in 5xx errors for 4,297 API actors and 127 github.com users. Additionally, events created during this window were delayed by up to 29 minutes in github.com and event streaming. No audit log events were lost; all audit log events were ultimately written and streamed successfully. Customers using GitHub Enterprise Cloud with data residency were not impacted by this incident. We were alerted to the infrastructure failure at 15:40 UTC — six minutes after onset — and resolved the issue by recycling the affected environment, restoring full service by 16:02 UTC. We are conducting a thorough review of our credential rotation process to strengthen its resiliency and prevent recurrence. In parallel, we are strengthening our monitoring capabilities to ensure faster detection and earlier visibility into similar issues going forward.

Read the full incident report →

Major April 1, 2026

Disruption with GitHub's code search

Detected by Pingoru
Apr 01, 2026, 03:02 PM UTC
Resolved
Apr 01, 2026, 11:45 PM UTC
Duration
8h 42m
Timeline · 7 updates
  1. investigating Apr 01, 2026, 03:02 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 01, 2026, 04:00 PM UTC

    We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While we were fixing the ingestion pipeline issue, a deployment caused a loss of dynamic configuration, which is causing most requests for Code Search results to fail. We are working to restore the service and to re-ingest the misaligned data.

  3. investigating Apr 01, 2026, 05:48 PM UTC

    We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline, and synchronizing the indexed data. We will update again within 2 hours.

  4. investigating Apr 01, 2026, 07:37 PM UTC

    We are still working on recovering to a serviceable state and expect to have a more substantial update within another two hours.

  5. investigating Apr 01, 2026, 10:00 PM UTC

    We have stabilized Code Search infrastructure, and are in the final stages of validation before slowly reintroducing production traffic.

  6. investigating Apr 01, 2026, 11:45 PM UTC

    Code search has recovered and is serving production traffic.

  7. resolved Apr 01, 2026, 11:45 PM UTC

    On April 1st, 2026 between 14:40 and 17:00 UTC the GitHub code search service had an outage which resulted in users being unable to perform searches. The issue was initially caused by an upgrade to the code search Kafka cluster ZooKeeper instances which caused a loss of quorum. This resulted in application-level data inconsistencies which required the index to be reset to a point in time before the loss of quorum occurred. Meanwhile, an accidental deploy resulted in query services losing their shard-to-host mappings, which are typically propagated by Kafka. We remediated the problem by performing rolling restarts in the Kafka cluster, allowing quorum to be reestablished. From there we were able to reset our index to a point in time before the inconsistencies occurred. The team is working on ways to improve our time to respond and mitigate issues relating to Kafka in the future.

Read the full incident report →

Minor April 1, 2026

Incident with Copilot

Detected by Pingoru
Apr 01, 2026, 09:58 AM UTC
Resolved
Apr 01, 2026, 12:41 PM UTC
Duration
2h 43m
Affected: Copilot
Timeline · 9 updates
  1. investigating Apr 01, 2026, 09:58 AM UTC

    We are investigating reports of degraded performance for Copilot

  2. investigating Apr 01, 2026, 10:00 AM UTC

    We are investigating reports of issues with service(s): Copilot Dotcom Agents. We will continue to keep users updated on progress towards mitigation.

  3. investigating Apr 01, 2026, 10:31 AM UTC

    Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.

  4. monitoring Apr 01, 2026, 10:56 AM UTC

    The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.

  5. monitoring Apr 01, 2026, 11:24 AM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  6. monitoring Apr 01, 2026, 11:37 AM UTC

    The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.

  7. monitoring Apr 01, 2026, 12:02 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  8. monitoring Apr 01, 2026, 12:10 PM UTC

    The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels. We are continuing to monitor recovery.

  9. resolved Apr 01, 2026, 12:41 PM UTC

    On April 1, 2026, between 07:29 and 12:41 UTC, some customers experienced elevated 5xx errors and increased latency when using GitHub Copilot features that rely on `/agents/sessions` endpoints (including creating or viewing agent sessions). The issue was caused by resource exhaustion in one of the Copilot backend services handling these requests, which in turn caused timeouts and failed requests. We mitigated the incident by increasing the service’s available compute resources and tuning its runtime concurrency settings. Service health returned to normal and the incident was fully resolved by 12:41 UTC.

Read the full incident report →

Minor March 31, 2026

Incident with Pull Requests: High percentage of 500s

Detected by Pingoru
Mar 31, 2026, 03:05 PM UTC
Resolved
Mar 31, 2026, 09:23 PM UTC
Duration
6h 18m
Affected: Pull Requests
Timeline · 11 updates
  1. investigating Mar 31, 2026, 03:05 PM UTC

    We are investigating reports of degraded performance for Pull Requests

  2. investigating Mar 31, 2026, 03:06 PM UTC

    We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.

  3. investigating Mar 31, 2026, 03:39 PM UTC

    We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.

  4. investigating Mar 31, 2026, 04:15 PM UTC

    We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.

  5. investigating Mar 31, 2026, 04:35 PM UTC

    We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.

  6. investigating Mar 31, 2026, 05:16 PM UTC

    We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.

  7. investigating Mar 31, 2026, 06:42 PM UTC

    We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.

  8. investigating Mar 31, 2026, 07:28 PM UTC

    Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.

  9. investigating Mar 31, 2026, 09:12 PM UTC

    We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.

  10. monitoring Mar 31, 2026, 09:16 PM UTC

    The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.

  11. resolved Mar 31, 2026, 09:23 PM UTC

    On Monday March 31st, 2026, between 13:53 UTC and 21:23 UTC the Pull Requests service experienced elevated latency and failures. On average, the error rate was 0.15% and peaked at 0.28% of requests to the service. This was due to a change in garbage collection (GC) settings for a Go-based internal service that provides access to Git repository data. The changes caused more frequent GC activity and elevated CPU consumption on a subset of storage nodes, increasing latency and failure rates for some internal API operations. We mitigated the incident by reverting the GC changes. To prevent future incidents and improve time to detection and mitigation, we are instrumenting additional metrics and alerting for GC-related behavior, improving our visibility into other signals that could cause degraded impact of this type, and updating our best practices and standards for garbage collection in Go-based services.

Read the full incident report →
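
The GC settings mentioned in the summary above are the standard knobs Go exposes for its garbage collector. A brief, hypothetical illustration of the two main ones, GOGC and the soft memory limit, with made-up values rather than GitHub's actual configuration:

    // gc_tuning.go: illustration only; the values are hypothetical.
    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // Equivalent to the GOGC environment variable: collect once the heap
        // grows 50% beyond the live set. Returns the previous setting.
        prev := debug.SetGCPercent(50)
        fmt.Println("previous GOGC:", prev)

        // Equivalent to GOMEMLIMIT (Go 1.19+): a soft memory ceiling that makes
        // the runtime collect more aggressively as it is approached.
        debug.SetMemoryLimit(4 << 30) // 4 GiB
    }

Lowering GOGC makes collections run more often, which matches the pattern described above of more frequent GC activity and elevated CPU consumption on the affected nodes.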

Minor March 31, 2026

Issues with metered billing report generation

Detected by Pingoru
Mar 31, 2026, 01:47 PM UTC
Resolved
Mar 31, 2026, 03:10 PM UTC
Duration
1h 22m
Timeline · 7 updates
  1. investigating Mar 31, 2026, 01:47 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 31, 2026, 01:56 PM UTC

    We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page. We have identified an issue with a data store, and are working on mitigations.

  3. investigating Mar 31, 2026, 02:39 PM UTC

    We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.

  4. investigating Mar 31, 2026, 02:56 PM UTC

    We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and we expect this will allow us to recover.

  5. investigating Mar 31, 2026, 02:59 PM UTC

    We have applied mitigations to a data store related to billing reports, and are seeing partial recovery to billing report generation. We continue to monitor for full recovery.

  6. monitoring Mar 31, 2026, 03:01 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  7. resolved Mar 31, 2026, 03:10 PM UTC

    On March 31, 2026, between 06:15 UTC and 15:30 UTC, the GitHub billing usage reports feature was degraded due to reduced server capacity. Customers requesting billing usage reports and loading the top usage by organization and repository on the billing overview and usage pages were impacted. The average error rate for usage report requests was 15%, peaking at 98% over an eight-minute window. For the billing pages, an average of 56% of requests failed to load the top usage cards. The root cause was an increase in billing usage report requests with large datasets, which exhausted the capacity of the nodes responsible for reporting data. There was no impact on billing charges. We mitigated the incident by adjusting our auto-scaling thresholds to better meet our capacity needs. We are working to improve our metrics to reduce time to detection and mitigation for similar issues in the future.

Read the full incident report →

Minor March 30, 2026

Elevated delays in Actions workflow runs and Pull Request status updates

Detected by Pingoru
Mar 30, 2026, 01:02 PM UTC
Resolved
Mar 30, 2026, 01:25 PM UTC
Duration
23m
Affected: Pull Requests, Actions
Timeline · 4 updates
  1. investigating Mar 30, 2026, 01:02 PM UTC

    We are investigating reports of degraded performance for Actions and Pull Requests

  2. monitoring Mar 30, 2026, 01:20 PM UTC

    The degradation affecting Actions and Pull Requests has been mitigated. We are monitoring to ensure stability.

  3. monitoring Mar 30, 2026, 01:25 PM UTC

    The degradation has been mitigated. We are monitoring to ensure stability.

  4. resolved Mar 30, 2026, 01:25 PM UTC

    On March 30, 2026, between 10:11 UTC and 13:25 UTC, GitHub Actions experienced degraded performance. During this time, approximately 2.65% of workflow jobs triggered by pull request events experienced start delays exceeding 5 minutes. The issue was caused by replication lag on an internal database cluster used by Actions, which triggered write throttling in our database protection layer and slowed job queue processing. The replication lag originated from planned maintenance to scale the internal database. Newly added database hosts triggered guardrails in the throttling layer, restricting write throughput. The incident was mitigated by excluding the new hosts from replication delay calculations. To prevent recurrence, we have updated our maintenance procedures to ensure new hosts are excluded from throttling assessments during scaling operations. Additionally, we are investing in automation to streamline this type of maintenance activity.

Read the full incident report →

Major March 24, 2026

Teams GitHub Notifications App is down

Detected by Pingoru
Mar 24, 2026, 04:59 PM UTC
Resolved
Mar 24, 2026, 07:51 PM UTC
Duration
2h 51m
Timeline · 5 updates
  1. investigating Mar 24, 2026, 04:59 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 24, 2026, 05:09 PM UTC

    We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on mitigation and will keep users updated on progress.

  3. investigating Mar 24, 2026, 05:43 PM UTC

    We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. We are working with Azure to resolve the issue.

  4. investigating Mar 24, 2026, 06:50 PM UTC

    We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.

  5. resolved Mar 24, 2026, 07:51 PM UTC

    On March 24, 2026, between 15:57 UTC and 19:51 UTC, the Microsoft Teams Integration and Teams Copilot Integration services were degraded and unable to deliver GitHub event notifications to Microsoft Teams. On average, the error rate was 37.4% and peaked at 90.1% of requests to the service; approximately 19% of all integration installs failed to receive GitHub-to-Teams notifications in this time period. This was due to an outage at one of our upstream dependencies, which caused HTTP 500 errors and connection resets for our Teams integration. We coordinated with the relevant service teams, and the issue was resolved at 19:51 UTC when the upstream incident was mitigated. We are working to update observability and runbooks to reduce time to mitigation for issues like this in the future.

Read the full incident report →

Major March 20, 2026

Disruption with Copilot Coding Agent Sessions

Detected by Pingoru
Mar 20, 2026, 12:58 AM UTC
Resolved
Mar 20, 2026, 01:58 AM UTC
Duration
1h
Timeline · 4 updates
  1. investigating Mar 20, 2026, 12:58 AM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 20, 2026, 01:00 AM UTC

    We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.

  3. investigating Mar 20, 2026, 01:26 AM UTC

    We are rolling out our mitigation and are seeing recovery.

  4. resolved Mar 20, 2026, 01:58 AM UTC

    On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore. We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first. We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.

Read the full incident report →

Minor March 19, 2026

Git operations for users on the west coast are experiencing an increase in latency

Detected by Pingoru
Mar 19, 2026, 04:25 PM UTC
Resolved
Mar 20, 2026, 12:05 AM UTC
Duration
7h 40m
Affected: Git Operations
Timeline · 9 updates
  1. investigating Mar 19, 2026, 04:25 PM UTC

    We are investigating reports of degraded performance for Git Operations

  2. investigating Mar 19, 2026, 05:01 PM UTC

    We are redirecting traffic back to our Seattle region and customers should see a decrease in latency for Git operations.

  3. investigating Mar 19, 2026, 05:49 PM UTC

    We are still seeing elevated latency for Git operations on the west coast and are continuing to investigate.

  4. investigating Mar 19, 2026, 06:27 PM UTC

    We are working to enable a new network path on the west coast to reduce load and will monitor the impact on latency for Git Operations.

  5. investigating Mar 19, 2026, 09:59 PM UTC

    We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast. We will provide another update when the rollout is complete.

  6. investigating Mar 19, 2026, 10:57 PM UTC

    We have completed the rollout of our new network path and are monitoring its impact.

  7. investigating Mar 19, 2026, 11:52 PM UTC

    We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.

  8. investigating Mar 20, 2026, 12:05 AM UTC

    We have reached stability with Git operations through the changes deployed today.

  9. resolved Mar 20, 2026, 12:05 AM UTC

    On March 19, 2026 between 16:10 UTC and 00:05 UTC (March 20), Git operations (clone, fetch, push) from the US west coast experienced elevated latency and degraded throughput. Users reported clone speeds dropping to under 1 MiB/s in extreme cases. The root cause was network transport link saturation at our Seattle edge site, where a fiber cut affecting our backbone transport resulted in saturation and packet loss. We had a planned scale-up in progress for the site that was accelerated to resolve the backbone capacity pressure. We also brought online additional edge capacity in a cloud region and redirected some users there. Current scale with the upgraded network capacity is sufficient to prevent recurrence, as we upgraded from 800Gbps to 3.2Tbps total capacity on this path. We will continue to monitor network health and respond to any further issues.

Read the full incident report →

Looking to track GitHub downtime and outages?

Pingoru polls GitHub's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
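
For teams that prefer to script a check themselves, GitHub's status page exposes the Statuspage-standard JSON endpoint at /api/v2/status.json (assumed here; verify the exact path against the page). A rough polling sketch in Go:

    // status_poll.go: poll GitHub's public status endpoint every 5 minutes
    // and log whenever an incident indicator is reported.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
        "time"
    )

    type statusResponse struct {
        Status struct {
            Indicator   string `json:"indicator"`   // "none", "minor", "major", "critical"
            Description string `json:"description"` // e.g. "All Systems Operational"
        } `json:"status"`
    }

    func main() {
        for {
            resp, err := http.Get("https://www.githubstatus.com/api/v2/status.json")
            if err != nil {
                log.Printf("poll failed: %v", err)
            } else {
                var s statusResponse
                if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
                    log.Printf("decode failed: %v", err)
                } else if s.Status.Indicator != "none" {
                    log.Printf("GitHub reports an incident: %s", s.Status.Description)
                }
                resp.Body.Close()
            }
            time.Sleep(5 * time.Minute)
        }
    }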

  • Real-time alerts when GitHub reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track GitHub alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring GitHub for free

5 free monitors · No credit card required