GitHub Enterprise Cloud EU Outage History

GitHub Enterprise Cloud EU is up right now

There were 20 GitHub Enterprise Cloud EU outages since February 3, 2026, totaling 47h 11m of downtime. Each incident is summarized below with its details, duration, and resolution information.

Source: https://eu.githubstatus.com

Critical April 23, 2026

EU - Some data not showing in Elastic Search indexes

Detected by Pingoru
Apr 23, 2026, 11:58 PM UTC
Resolved
Apr 24, 2026, 01:20 AM UTC
Duration
1h 22m
Affected: Issues
Timeline · 5 updates
  1. investigating Apr 23, 2026, 11:58 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 24, 2026, 12:33 AM UTC

    Issues is experiencing degraded availability. We are continuing to investigate.

  3. investigating Apr 24, 2026, 12:51 AM UTC

    We have identified the issue with missing search data and are working to restore it.

  4. investigating Apr 24, 2026, 01:16 AM UTC

    Issues is operating normally.

  5. resolved Apr 24, 2026, 01:20 AM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Notice April 23, 2026

EU - Disruption with some GitHub services

Detected by Pingoru
Apr 23, 2026, 08:22 PM UTC
Resolved
Apr 23, 2026, 08:22 PM UTC
Duration
n/a (posted retroactively; impact window 06:07–06:57 UTC)
Timeline · 1 update
  1. resolved Apr 23, 2026, 08:22 PM UTC

    Between 06:07 and 06:57 UTC, during a routine OS upgrade to our search infrastructure, a pre-existing corruption on one of our coordinator nodes in a single availability zone caused it to stop serving requests. As a result, search query requests routed to that availability zone were dropped, affecting approximately 142,000 requests over the course of the incident. The issue was resolved by provisioning additional coordinator nodes in the availability zone to restore capacity, after which the corrupted node was decommissioned.
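
The report doesn't say how the dropped coordinator was detected, but the failure mode above (one node silently refusing requests in a single availability zone) is exactly what a periodic cluster-health probe catches. A minimal sketch in Go against Elasticsearch's public _cluster/health API; the URL, expected node count, and polling interval are illustrative assumptions, not details from the incident:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// clusterHealth maps the fields we care about from Elasticsearch's
// public _cluster/health API.
type clusterHealth struct {
	Status        string `json:"status"`          // "green", "yellow", or "red"
	NumberOfNodes int    `json:"number_of_nodes"` // drops when a node stops serving
}

func main() {
	const expectedNodes = 9 // assumption: expected node count across all zones
	client := &http.Client{Timeout: 5 * time.Second}

	for {
		// Placeholder URL; not GitHub's real infrastructure.
		resp, err := client.Get("http://search.internal.example:9200/_cluster/health")
		if err != nil {
			log.Printf("ALERT: health endpoint unreachable: %v", err)
		} else {
			var h clusterHealth
			if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
				log.Printf("bad health response: %v", err)
			} else if h.Status != "green" || h.NumberOfNodes < expectedNodes {
				fmt.Printf("ALERT: cluster status=%s nodes=%d (expected %d)\n",
					h.Status, h.NumberOfNodes, expectedNodes)
			}
			resp.Body.Close()
		}
		time.Sleep(30 * time.Second)
	}
}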

Minor April 23, 2026

EU - Incident with Pull Requests

Detected by Pingoru
Apr 23, 2026, 07:50 PM UTC
Resolved
Apr 23, 2026, 09:43 PM UTC
Duration
1h 52m
Affected: Pull Requests
Timeline · 4 updates
  1. investigating Apr 23, 2026, 07:50 PM UTC

    We are investigating reports of degraded performance for Pull Requests.

  2. investigating Apr 23, 2026, 07:58 PM UTC

    We have identified a regression in merge queue behavior that occurs when squash merging or rebasing, found the root cause, and are in the process of reverting the change.

  3. investigating Apr 23, 2026, 09:38 PM UTC

    Pull Requests is operating normally.

  4. resolved Apr 23, 2026, 09:43 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Minor April 23, 2026

EU - Disruption with users unable to start Claude and Codex agent tasks from the web

Detected by Pingoru
Apr 23, 2026, 07:29 PM UTC
Resolved
Apr 23, 2026, 07:42 PM UTC
Duration
13m
Timeline · 3 updates
  1. investigating Apr 23, 2026, 07:29 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 23, 2026, 07:33 PM UTC

    We have identified the root cause of the issue and are working on mitigation.

  3. resolved Apr 23, 2026, 07:42 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Critical April 22, 2026

EU - Disruption with Copilot chat and Copilot Coding Agent

Detected by Pingoru
Apr 22, 2026, 03:35 PM UTC
Resolved
Apr 22, 2026, 07:18 PM UTC
Duration
3h 42m
Timeline · 8 updates
  1. investigating Apr 22, 2026, 03:35 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 22, 2026, 03:43 PM UTC

    We are aware of users seeing errors when interacting with Copilot chat on github.com and with the Copilot cloud agent. We have identified the cause and are investigating remediations.

  3. investigating Apr 22, 2026, 04:24 PM UTC

    We continue to work on mitigation for Copilot chat and cloud agent.

  4. investigating Apr 22, 2026, 04:58 PM UTC

    Mitigation is progressing for Copilot chat and cloud agent.

  5. investigating Apr 22, 2026, 05:40 PM UTC

    Mitigation is progressing for Copilot chat and cloud agent recovery.

  6. investigating Apr 22, 2026, 05:49 PM UTC

    We are now seeing recovery for Copilot cloud agent.

  7. investigating Apr 22, 2026, 06:05 PM UTC

    Copilot cloud agent and chat are mitigated for github.com.

  8. resolved Apr 22, 2026, 07:18 PM UTC

    This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Minor April 10, 2026

EU - Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard

Detected by Pingoru
Apr 10, 2026, 01:07 PM UTC
Resolved
Apr 10, 2026, 01:28 PM UTC
Duration
21m
Timeline · 3 updates
  1. investigating Apr 10, 2026, 01:07 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 10, 2026, 01:08 PM UTC

    We are investigating third-party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.

  3. resolved Apr 10, 2026, 01:28 PM UTC

    On April 9, 2026, between 22:59 UTC and April 10, 2026, 13:24 UTC, the Copilot Mission Control service was degraded and did not display Claude and Codex Cloud Agent sessions in the agents tab dashboard. Customers were unable to see, list, or manage their third-party agent sessions during this period. The underlying agent sessions continued to function normally. This was a visibility and management issue only, and no HTTP errors were generated. The API returned successful responses with incomplete results, with an average error rate of 0% and a maximum error rate of 0%. This was due to a code change that introduced a filter which inadvertently excluded third-party agent sessions. We mitigated the incident by reverting the problematic code change and deploying the fix to production. We are working to add automated monitoring for dashboard content visibility and improve integration test coverage for third-party agent session listing to reduce our time to detection and mitigation of issues like this one in the future.
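
As a rough illustration of this bug class (not GitHub's actual code), a listing endpoint with an overly narrow filter keeps returning successful responses, so error-rate alerting sees 0% while users see missing data. All names here are invented:

package main

import "fmt"

// Session is a stand-in for an agent session record.
type Session struct {
	ID    string
	Agent string // e.g. "copilot", "claude", "codex"
}

// listSessions models the dashboard query. The buggy predicate admits only
// first-party sessions; the fixed path admits every supported agent.
func listSessions(all []Session, buggy bool) []Session {
	var visible []Session
	for _, s := range all {
		if buggy && s.Agent != "copilot" {
			continue // the inadvertent filter: third-party sessions vanish
		}
		visible = append(visible, s)
	}
	return visible // always a "successful" result, even when incomplete
}

func main() {
	all := []Session{{"1", "copilot"}, {"2", "claude"}, {"3", "codex"}}
	fmt.Printf("%d of %d sessions visible\n", len(listSessions(all, true)), len(all)) // 1 of 3
}

Because every call still succeeds, only a content-level check (for example, asserting that known third-party sessions appear in the listing) would have tripped an alert, which is what the promised dashboard-visibility monitoring amounts to.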

Minor April 2, 2026

EU - Copilot Coding Agent failing to start some jobs

Detected by Pingoru
Apr 02, 2026, 04:18 PM UTC
Resolved
Apr 02, 2026, 04:30 PM UTC
Duration
12m
Timeline · 3 updates
  1. investigating Apr 02, 2026, 04:18 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Apr 02, 2026, 04:28 PM UTC

    When tasks are assigned to Copilot Cloud Agent, they may appear to be working without actually running. We are investigating.

  3. resolved Apr 02, 2026, 04:30 PM UTC

    Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to be suspended for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected. Once the cause of the rate limiting was identified, we were able to disable the new CCR feature via a feature flag. Once the jobs already in the queue had cleared, we saw no additional instances of rate limiting. This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k
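
A hypothetical sketch of the mechanics described above: two producers share one rate-limited queue, the newer producer's extra load exhausts the shared budget, and a feature flag serves as the killswitch. The names, limits, and timeouts are invented for illustration:

package main

import (
	"fmt"
	"time"
)

var ccrFeatureEnabled = true // the flag that gets disabled to mitigate

func main() {
	const limitPerSec = 5
	tokens := make(chan struct{}, limitPerSec)

	// Refill tokens at the shared internal rate limit.
	go func() {
		for range time.Tick(time.Second / limitPerSec) {
			select {
			case tokens <- struct{}{}:
			default: // bucket already full
			}
		}
	}()

	submit := func(source string) {
		select {
		case <-tokens:
			fmt.Println("running job from", source)
		case <-time.After(2 * time.Second):
			fmt.Println("job from", source, "timed out waiting for capacity")
		}
	}

	for i := 0; i < 20; i++ {
		submit("coding-agent")
		if ccrFeatureEnabled { // the extra load that exhausted the shared limit
			submit("code-review")
			submit("code-review")
		}
	}
}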

Minor March 31, 2026

EU - Incident with Pull Requests: High percentage of 500s

Detected by Pingoru
Mar 31, 2026, 03:05 PM UTC
Resolved
Mar 31, 2026, 09:23 PM UTC
Duration
6h 18m
Affected: Pull Requests
Timeline · 11 updates
  1. investigating Mar 31, 2026, 03:05 PM UTC

    We are investigating reports of degraded performance for Pull Requests.

  2. investigating Mar 31, 2026, 03:06 PM UTC

    We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.

  3. investigating Mar 31, 2026, 03:39 PM UTC

    We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.

  4. investigating Mar 31, 2026, 04:15 PM UTC

    We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.

  5. investigating Mar 31, 2026, 04:35 PM UTC

    We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.

  6. investigating Mar 31, 2026, 05:16 PM UTC

    We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.

  7. investigating Mar 31, 2026, 06:42 PM UTC

    We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.

  8. investigating Mar 31, 2026, 07:28 PM UTC

    Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.

  9. investigating Mar 31, 2026, 09:12 PM UTC

    We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.

  10. monitoring Mar 31, 2026, 09:16 PM UTC

    The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.

  11. resolved Mar 31, 2026, 09:23 PM UTC

    On Tuesday, March 31st, 2026, between 13:53 UTC and 21:23 UTC, the Pull Requests service experienced elevated latency and failures. On average, the error rate was 0.15% and peaked at 0.28% of requests to the service. This was due to a change in garbage collection (GC) settings for a Go-based internal service that provides access to Git repository data. The changes caused more frequent GC activity and elevated CPU consumption on a subset of storage nodes, increasing latency and failure rates for some internal API operations. We mitigated the incident by reverting the GC changes. To prevent future incidents and improve time to detection and mitigation, we are instrumenting additional metrics and alerting for GC-related behavior, improving our visibility into other signals that could cause degraded impact of this type, and updating our best practices and standards for garbage collection in Go-based services.
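
The report doesn't name the exact setting, but the standard knob for this in Go is the GC target percentage (the GOGC environment variable, or debug.SetGCPercent at runtime): lowering it makes collections more frequent and spends more CPU on GC, which matches the behavior described. A small self-contained sketch of that trade-off; the workload and values are illustrative, not GitHub's configuration:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var sink []byte // global sink keeps allocations on the heap

// churn allocates roughly 1 GB of short-lived garbage.
func churn() {
	for i := 0; i < 1_000_000; i++ {
		sink = make([]byte, 1024)
	}
}

// gcCount reads the cumulative number of completed GC cycles.
func gcCount() uint32 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.NumGC
}

func main() {
	for _, pct := range []int{100, 10} { // 100 is Go's default; 10 is aggressive
		debug.SetGCPercent(pct) // same knob as the GOGC environment variable
		before := gcCount()
		churn()
		fmt.Printf("GOGC=%d: %d collections for the same workload\n", pct, gcCount()-before)
	}
}

Watching fields like MemStats.NumGC and MemStats.GCCPUFraction over time is one way to build the GC-related metrics and alerting the report commits to.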

Major March 24, 2026

EU - Teams GitHub Notifications App is down

Detected by Pingoru
Mar 24, 2026, 05:00 PM UTC
Resolved
Mar 24, 2026, 07:51 PM UTC
Duration
2h 50m
Timeline · 5 updates
  1. investigating Mar 24, 2026, 05:00 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 24, 2026, 05:09 PM UTC

    We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on a mitigation and will keep users updated on progress.

  3. investigating Mar 24, 2026, 05:43 PM UTC

    We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. We are working with Azure to resolve the issue.

  4. investigating Mar 24, 2026, 06:51 PM UTC

    We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.

  5. resolved Mar 24, 2026, 07:51 PM UTC

    On March 24, 2026, between 15:57 UTC and 19:51 UTC, the Microsoft Teams Integration and Teams Copilot Integration services were degraded and unable to deliver GitHub event notifications to Microsoft Teams. On average, the error rate was 37.4% and peaked at 90.1% of requests to the service; approximately 19% of all integration installs failed to receive GitHub-to-Teams notifications in this time period. This was due to an outage at one of our upstream dependencies, which caused HTTP 500 errors and connection resets for our Teams integration. We coordinated with the relevant service teams, and the issue was resolved at 19:51 UTC when the upstream incident was mitigated. We are working to update observability and runbooks to reduce time to mitigation for issues like this in the future.

Major March 20, 2026

EU - Disruption with Copilot Coding Agent Sessions

Detected by Pingoru
Mar 20, 2026, 12:58 AM UTC
Resolved
Mar 20, 2026, 01:58 AM UTC
Duration
1h
Timeline · 4 updates
  1. investigating Mar 20, 2026, 12:58 AM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 20, 2026, 01:00 AM UTC

    We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.

  3. investigating Mar 20, 2026, 01:26 AM UTC

    We are rolling out our mitigation and are seeing recovery.

  4. resolved Mar 20, 2026, 01:58 AM UTC

    On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests, with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore. We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. Mitigation was in place at 01:24 UTC. The second occurrence was due to an incomplete remediation of the first. We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.
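
The remediation mentions automated monitoring for credential lifecycle events. A minimal sketch of what such a check might look like: warn ahead of expiry instead of discovering a lapsed credential from datastore connection failures. The credential records, names, and thresholds are hypothetical:

package main

import (
	"fmt"
	"time"
)

type credential struct {
	Name      string
	ExpiresAt time.Time
}

// checkCredentials pages on expired credentials and warns on ones
// approaching expiry, so rotation happens before impact.
func checkCredentials(creds []credential, warnWindow time.Duration) {
	now := time.Now()
	for _, c := range creds {
		switch remaining := c.ExpiresAt.Sub(now); {
		case remaining <= 0:
			fmt.Printf("PAGE: %s expired %v ago\n", c.Name, -remaining)
		case remaining < warnWindow:
			fmt.Printf("WARN: %s expires in %v; rotate now\n", c.Name, remaining)
		}
	}
}

func main() {
	creds := []credential{
		{"datastore-primary", time.Now().Add(6 * time.Hour)},
		{"datastore-replica", time.Now().Add(30 * 24 * time.Hour)},
	}
	checkCredentials(creds, 24*time.Hour) // warn a full day ahead
}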

Major March 19, 2026

EU - Issues with Copilot Coding Agent

Detected by Pingoru
Mar 19, 2026, 10:41 AM UTC
Resolved
Mar 19, 2026, 02:32 PM UTC
Duration
3h 51m
Affected: Copilot
Timeline · 9 updates
  1. investigating Mar 19, 2026, 10:41 AM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Mar 19, 2026, 10:43 AM UTC

    We are investigating reports of errors when accessing Copilot Coding Agent features. Users may be unable to view or start coding agent tasks through the Agents interface. Our engineers are actively working to restore full functionality.

  3. investigating Mar 19, 2026, 11:19 AM UTC

    We believe we have identified an underlying credential issue and are working to resolve that across impacted environments.

  4. investigating Mar 19, 2026, 12:00 PM UTC

    We have resolved the underlying credential issue and have verified that users can see and interact with their Copilot Coding Agent tasks. We are now investigating reports of some Coding Agent tasks not completing successfully.

  5. investigating Mar 19, 2026, 12:32 PM UTC

    We are investigating reports of Copilot coding agent session logs not loading and sessions intermittently not starting. Users are able to see their tasks and create new ones.

  6. investigating Mar 19, 2026, 01:45 PM UTC

    Copilot is experiencing degraded performance. We are continuing to investigate.

  7. investigating Mar 19, 2026, 02:02 PM UTC

    We are investigating reports that Copilot Coding Agent session logs are not available in the UI.

  8. monitoring Mar 19, 2026, 02:32 PM UTC

    Copilot is operating normally.

  9. resolved Mar 19, 2026, 02:32 PM UTC

    On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests, with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore. We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. Mitigation was in place at 01:24 UTC. The second occurrence was due to an incomplete remediation of the first. We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.

Minor March 5, 2026

EU - Some OpenAI models degraded in Copilot

Detected by Pingoru
Mar 05, 2026, 12:47 AM UTC
Resolved
Mar 05, 2026, 01:13 AM UTC
Duration
25m
Affected: Copilot
Timeline · 4 updates
  1. investigating Mar 05, 2026, 12:47 AM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Mar 05, 2026, 12:53 AM UTC

    We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

  3. investigating Mar 05, 2026, 01:13 AM UTC

    The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.

  4. resolved Mar 05, 2026, 01:13 AM UTC

    On March 5th, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT-5.3-Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT-5.3-Codex, impacting approximately 30% of requests. No other models were impacted. The issue was resolved by a mitigation put in place by our provider.

Minor March 3, 2026

EU - Claude Opus 4.6 Fast not appearing for some Copilot users

Detected by Pingoru
Mar 03, 2026, 08:31 PM UTC
Resolved
Mar 03, 2026, 09:11 PM UTC
Duration
39m
Affected: Copilot
Timeline · 3 updates
  1. investigating Mar 03, 2026, 08:31 PM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Mar 03, 2026, 09:05 PM UTC

    We believe all expected users still have access to Claude Opus 4.6 and have confirmed that no users have lost access.

  3. resolved Mar 03, 2026, 09:11 PM UTC

    On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access. We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.

Major March 3, 2026

EU - Incident with Copilot and Actions

Detected by Pingoru
Mar 03, 2026, 06:59 PM UTC
Resolved
Mar 03, 2026, 08:09 PM UTC
Duration
1h 9m
Affected: Copilot
Timeline · 4 updates
  1. investigating Mar 03, 2026, 06:59 PM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Mar 03, 2026, 07:17 PM UTC

    We've identified the issue and have applied a mitigation. We're seeing recovery of services and continue to monitor for full recovery.

  3. investigating Mar 03, 2026, 07:32 PM UTC

    Copilot is operating normally.

  4. resolved Mar 03, 2026, 08:09 PM UTC

    On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact.

    This incident shared the same underlying cause as an incident in early February where we saw a large volume of writes to the user settings caching mechanism. While deploying a change to reduce the burden of these writes, a bug caused every user’s cache to expire, get recalculated, and get rewritten. The increased load caused replication delays that cascaded down to all affected services. We mitigated this issue by immediately rolling back the faulty deployment.

    We understand these incidents disrupted the workflows of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, we acknowledge we have more work to do. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:
      • We have added a killswitch and improved monitoring to the caching mechanism to ensure we are notified before there is user impact and can respond swiftly.
      • We are moving the cache mechanism to a dedicated host, ensuring that any future issues will solely affect services that rely on it.
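
The RCA describes a cache stampede: one bug expired every user's cache at once, and the simultaneous recalculation and rewriting overloaded replication. A common defense, sketched here with invented names, pairs the killswitch mentioned above with jittered TTLs so entries written together don't all expire together:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

var cacheWritesEnabled = true // the killswitch the report says was added

// jitteredTTL spreads expirations so entries written at the same moment do
// not all expire (and get recalculated and rewritten) in the same instant.
func jitteredTTL(base time.Duration) time.Duration {
	jitter := time.Duration(rand.Int63n(int64(base / 5))) // up to +20%
	return base + jitter
}

func writeUserSettingsCache(userID string) {
	if !cacheWritesEnabled {
		return // fall through to the source of truth instead of writing
	}
	fmt.Printf("cache set user=%s ttl=%v\n", userID, jitteredTTL(time.Hour))
}

func main() {
	for _, u := range []string{"alice", "bob", "carol"} {
		writeUserSettingsCache(u)
	}
}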

Minor February 26, 2026

EU - Incident with Copilot

Detected by Pingoru
Feb 26, 2026, 10:22 AM UTC
Resolved
Feb 26, 2026, 11:06 AM UTC
Duration
44m
Affected: Copilot
Timeline · 3 updates
  1. investigating Feb 26, 2026, 10:22 AM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Feb 26, 2026, 11:06 AM UTC

    Copilot is operating normally.

  3. resolved Feb 26, 2026, 11:06 AM UTC

    On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. During this time, 5-15% of requests to the service returned errors. The incident was resolved by infrastructure rebalancing. We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.

Minor February 25, 2026

EU - Incident with Copilot Agent Sessions impacting CCA/CCR

Detected by Pingoru
Feb 25, 2026, 04:38 PM UTC
Resolved
Feb 25, 2026, 04:44 PM UTC
Duration
6m
Affected: Copilot
Timeline · 2 updates
  1. investigating Feb 25, 2026, 04:38 PM UTC

    We are investigating reports of degraded performance for Copilot.

  2. resolved Feb 25, 2026, 04:44 PM UTC

    On February 25, 2026, between 15:05 UTC and 16:34 UTC, the Copilot coding agent service was degraded, resulting in errors for 5% of all requests and impacting users starting or interacting with agent sessions. This was due to an internal service dependency running out of allocated resources (memory and CPU). We mitigated the incident by adjusting the resource allocation for the affected service, which restored normal operations for the coding agent service. We are working to implement proactive monitoring for resource exhaustion across our services, review and update resource allocations, and improve our alerting capabilities to reduce our time to detection and mitigation of similar issues in the future.

Minor February 20, 2026

EU - Incident with Copilot GPT-5.1-Codex

Detected by Pingoru
Feb 20, 2026, 10:02 AM UTC
Resolved
Feb 20, 2026, 11:41 AM UTC
Duration
1h 39m
Affected: Copilot
Timeline · 5 updates
  1. investigating Feb 20, 2026, 10:02 AM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Feb 20, 2026, 10:02 AM UTC

    We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.

  3. investigating Feb 20, 2026, 10:36 AM UTC

    We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

  4. investigating Feb 20, 2026, 11:19 AM UTC

    The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains). We will continue monitoring to ensure stability, but mitigation is complete.

  5. resolved Feb 20, 2026, 11:41 AM UTC

    On February 20, 2026, between 07:30 UTC and 11:21 UTC, the Copilot service experienced a degradation of the GPT 5.1 Codex model. During this time period, users encountered a 4.5% error rate when using this model. No other models were impacted. The issue was resolved by a mitigation put in place by the external model provider. GitHub is working with the external model provider to further improve the resiliency of the service to prevent similar incidents in the future.

Minor February 13, 2026

EU - Disruption with some GitHub services regarding file upload

Detected by Pingoru
Feb 13, 2026, 10:30 PM UTC
Resolved
Feb 13, 2026, 10:58 PM UTC
Duration
28m
Timeline · 2 updates
  1. investigating Feb 13, 2026, 10:30 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. resolved Feb 13, 2026, 10:58 PM UTC

    On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service. We mitigated the incident by reverting the code change that introduced the issue. We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.
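
For readers unfamiliar with the mechanism: CORS failures happen in the browser, before the request ever reaches the server, which is why uploads simply stopped arriving rather than erroring out server-side. A minimal sketch of the server side of that contract in Go; the origin and header names are illustrative, not GitHub's actual values:

package main

import (
	"log"
	"net/http"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	h := w.Header()
	h.Set("Access-Control-Allow-Origin", "https://github.example.com")
	h.Set("Access-Control-Allow-Methods", "POST, OPTIONS")
	// Every non-simple header the browser sends must be listed here;
	// omitting one fails the preflight and the upload never starts.
	h.Set("Access-Control-Allow-Headers", "Content-Type, X-Upload-Token")

	if r.Method == http.MethodOptions {
		w.WriteHeader(http.StatusNoContent) // answer the preflight
		return
	}
	w.Write([]byte("upload accepted\n"))
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

A client-side change that adds an unlisted header or switches the request method is enough to trip this check, which matches the report's description of requests being blocked before reaching the upload service.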

Minor February 12, 2026

EU - Intermittent disruption with Copilot completions and inline suggestions

Detected by Pingoru
Feb 12, 2026, 02:06 PM UTC
Resolved
Feb 12, 2026, 04:50 PM UTC
Duration
2h 43m
Timeline · 4 updates
  1. investigating Feb 12, 2026, 02:06 PM UTC

    We are investigating reports of impacted performance for some GitHub services.

  2. investigating Feb 12, 2026, 02:08 PM UTC

    We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.

  3. investigating Feb 12, 2026, 03:33 PM UTC

    We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.

  4. resolved Feb 12, 2026, 04:50 PM UTC

    Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency. The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

Minor February 9, 2026

EU - Copilot Policy Propagation Delays

Detected by Pingoru
Feb 09, 2026, 04:29 PM UTC
Resolved
Feb 10, 2026, 10:01 AM UTC
Duration
17h 31m
Affected: Copilot
Timeline · 10 updates
  1. investigating Feb 09, 2026, 04:29 PM UTC

    We are investigating reports of degraded performance for Copilot.

  2. investigating Feb 09, 2026, 04:30 PM UTC

    We’ve identified an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them. The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.

  3. investigating Feb 09, 2026, 05:23 PM UTC

    We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them.

  4. investigating Feb 09, 2026, 06:06 PM UTC

    We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.

  5. investigating Feb 09, 2026, 06:49 PM UTC

    We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

  6. investigating Feb 09, 2026, 08:39 PM UTC

    We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

  7. investigating Feb 09, 2026, 10:09 PM UTC

    We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

  8. investigating Feb 10, 2026, 12:26 AM UTC

    We're continuing to address an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. The issue is understood and we are working to get the mitigation applied. Next update in one hour.

  9. investigating Feb 10, 2026, 12:52 AM UTC

    Copilot is operating normally.

  10. resolved Feb 10, 2026, 10:01 AM UTC

    GitHub experienced degraded Copilot policy propagation from enterprise to organizations from February 3 at 21:00 UTC through February 10 at 16:00 UTC. During this period, policy changes could take up to 24 hours to apply. We mitigated the issue on February 10 at 16:00 UTC after rolling back a regression that caused the delays. The propagation queue fully caught up on the delayed items by February 11 at 10:35 UTC, and policy changes now propagate normally.

    During this incident, whenever an enterprise updated a Copilot policy (including model policies), there were significant delays before those policy changes reached their child organizations and assigned users. The delay was caused by a large backlog in the background job queue responsible for propagating Copilot policy updates. Our investigation determined the incident was caused by a code change shipped on February 3 that increased the number of background jobs enqueued per policy update, in order to accommodate upcoming feature work. When new Copilot models launched on February 5th and 7th, triggering policy updates across many enterprises, the higher job volume overwhelmed the shared background worker queue, resulting in prolonged propagation delays. No policy updates were lost; they were queued and processed once the backlog cleared.

    We understand these delays disrupted policy management for customers using Copilot at scale and have taken the following immediate steps:
      1. Restored the optimized propagation path and put tests in place to avoid a regression.
      2. Ensured upcoming features are compatible with this design.
      3. Added alerting on queue depth to detect propagation backlogs immediately.

    GitHub is critical infrastructure for your work, your teams, and your businesses. We are focused on these mitigations and continued improvements so Copilot policy changes propagate reliably and quickly.
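
A minimal sketch of the queue-depth alerting in step 3: page on backlog growth instead of waiting for propagation delays to surface as customer reports. The queue interface, threshold, and cadence are hypothetical:

package main

import (
	"fmt"
	"time"
)

type jobQueue interface{ Depth() int }

// fakeQueue stands in for the real propagation job queue.
type fakeQueue struct{ depth int }

func (q *fakeQueue) Depth() int { return q.depth }

// watchQueue alerts whenever the backlog exceeds the threshold.
func watchQueue(q jobQueue, threshold int, interval time.Duration, checks int) {
	for i := 0; i < checks; i++ {
		if d := q.Depth(); d > threshold {
			fmt.Printf("ALERT: propagation backlog at %d jobs (threshold %d)\n", d, threshold)
		}
		time.Sleep(interval)
	}
}

func main() {
	q := &fakeQueue{depth: 250_000} // simulated backlog
	watchQueue(q, 10_000, time.Second, 3)
}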

Looking to track GitHub Enterprise Cloud EU downtime and outages?

Pingoru polls GitHub Enterprise Cloud EU's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when GitHub Enterprise Cloud EU reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track GitHub Enterprise Cloud EU alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring GitHub Enterprise Cloud EU for free

5 free monitors · No credit card required
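
If you'd rather script a basic check yourself, here is a minimal polling sketch in Go. It assumes eu.githubstatus.com exposes the standard Statuspage v2 JSON API (the /api/v2/status.json path and the indicator values are that platform's conventions, not something verified from this page):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// statusResponse maps the top-level status object returned by a
// Statuspage v2 status.json endpoint.
type statusResponse struct {
	Status struct {
		Indicator   string `json:"indicator"` // "none", "minor", "major", "critical"
		Description string `json:"description"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Matches the 5-minute cadence described above; first poll fires
	// after the first interval elapses.
	for range time.Tick(5 * time.Minute) {
		resp, err := client.Get("https://eu.githubstatus.com/api/v2/status.json")
		if err != nil {
			log.Printf("poll failed: %v", err)
			continue
		}
		var s statusResponse
		err = json.NewDecoder(resp.Body).Decode(&s)
		resp.Body.Close()
		if err == nil && s.Status.Indicator != "none" {
			fmt.Printf("incident: %s (%s)\n", s.Status.Description, s.Status.Indicator)
		}
	}
}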