GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
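
The component statuses above are also exposed programmatically. Below is a minimal sketch for polling them, assuming the standard Statuspage v2 JSON endpoints that githubstatus.com serves; the endpoint paths and response fields follow the status page provider's public API conventions rather than anything stated on this page.

    import requests

    # Poll githubstatus.com for the overall status and per-component states.
    # /api/v2/summary.json and /api/v2/components.json are standard Statuspage
    # endpoints; treat their availability and exact shape here as an assumption.
    BASE = "https://www.githubstatus.com/api/v2"

    summary = requests.get(f"{BASE}/summary.json", timeout=10).json()
    print("Overall:", summary["status"]["description"])  # e.g. "All Systems Operational"

    components = requests.get(f"{BASE}/components.json", timeout=10).json()
    for component in components["components"]:
        # component["status"] is one of: operational, degraded_performance,
        # partial_outage, major_outage
        print(f'{component["name"]}: {component["status"]}')
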
Nov 1, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 1, 06:14 UTC
Update - Actions is operating normally.
Nov 1, 06:14 UTC
Update - We have mitigated the issue with manually dispatching workflows via the UI.
Nov 1, 06:14 UTC
Update - We have identified the cause of the issue and are working towards a mitigation.
Nov 1, 05:35 UTC
Update - We are investigating issues with manually dispatching workflows via the GitHub UI for Actions. The Workflow Dispatch API is unaffected (a sketch of calling it directly follows this incident's timeline).
Nov 1, 05:05 UTC
Investigating - We are investigating reports of degraded performance for Actions
Nov 1, 04:43 UTC
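
As noted in the update above, the Workflow Dispatch API stayed available while UI-based dispatch was degraded. A minimal sketch of triggering a run through that REST endpoint follows; the owner, repository, workflow file name, and token variable are hypothetical placeholders, and the target workflow must declare an "on: workflow_dispatch" trigger.

    import os
    import requests

    # Trigger a workflow run via the REST API instead of the web UI:
    # POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches
    # OWNER, REPO, and WORKFLOW_FILE are placeholders for illustration only.
    OWNER, REPO, WORKFLOW_FILE = "my-org", "my-repo", "ci.yml"

    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW_FILE}/dispatches",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        json={"ref": "main"},  # placeholder branch or tag to run the workflow on
        timeout=10,
    )
    resp.raise_for_status()  # a successful dispatch returns HTTP 204 with no body
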
Oct 31, 2025

No incidents reported.

Oct 30, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 30, 23:00 UTC
Update - Links on GitHub's landing page, http://github.com/home, are not working. Primary user workflows (PRs, Issues, Repos) are not impacted.
Oct 30, 22:54 UTC
Update - Dotcom main navigation links are broken.
Oct 30, 22:47 UTC
Investigating - We are currently investigating this issue.
Oct 30, 22:47 UTC
Oct 29, 2025
Resolved - On October 29th, 2025 between 14:07 UTC and 23:15 UTC, multiple GitHub services were degraded due to a broad outage in one of our service providers:

- Users of Codespaces experienced failures connecting to new and existing Codespaces through VSCode Desktop or Web. On average the Codespace connection error rate was 90% and peaked at 100% across all regions throughout the incident period.
- GitHub Actions larger hosted runners experienced degraded performance, with 0.5% of overall workflow runs and 9.8% of larger hosted runner jobs failing or not starting within 5 minutes. These recovered by 20:40 UTC.
- The GitHub Enterprise Importer service was degraded, with some users experiencing migration failures during git push operations and most users experiencing delayed migration processing.
- Initiation of new trials for GitHub Enterprise Cloud with Data Residency was also delayed during this time.
- Users of Copilot Metrics via the API could not access the downloadable report link during this time. Approximately 100 requests made during the incident would have failed to download. Recovery began around 20:25 UTC.

We were able to apply a number of mitigations to reduce impact over the course of the incident, but we did not achieve 100% recovery until our service provider’s incident was resolved.

We are working to reduce critical path dependencies on the service provider and gracefully degrade experiences where possible so that we are more resilient to future dependency outages.

Oct 29, 23:15 UTC
Update - Codespaces is operating normally.
Oct 29, 23:15 UTC
Update - Codespaces continues to recover
Oct 29, 22:06 UTC
Update - Actions is operating normally.
Oct 29, 21:02 UTC
Update - Actions has fully recovered.

Codespaces continues to recover. Regions across Europe and Asia are healthy, so US users may want to choose one of those regions following these instructions: http://docs.github.com/en/codespaces/setting-your-user-preferences/setting-your-default-region-for-github-codespaces.

We expect full recovery across the board before long.

Oct 29, 21:01 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Oct 29, 20:56 UTC
Update - We are beginning to see small signs of recovery, but connections are still inconsistent across services and regions. We expect to see gradual recovery from here.
Oct 29, 20:34 UTC
Update - We continue to see improvement in Actions larger runner jobs. Larger runner customers may still experience longer-than-normal queue times, but we expect this to improve rapidly across most runners. (A sketch for listing queued runs via the REST API appears after this incident's timeline.)

ARM64 runners, GPU runners, and some runners with private networking may still be impacted.

Usage of Codespaces via VS Code (but not via SSH) is still degraded.

GitHub and Azure teams continue to collaborate towards full resolution.

Oct 29, 19:21 UTC
Update - Codespaces is experiencing degraded availability. We are continuing to investigate.
Oct 29, 19:05 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Oct 29, 18:58 UTC
Update - Impact to most larger runner jobs should now be mitigated. ARM64 runners are still impacted. GitHub and Azure teams continue to collaborate towards full resolution.
Oct 29, 18:12 UTC
Update - Codespaces is experiencing degraded availability. We are continuing to investigate.
Oct 29, 17:40 UTC
Update - Additional impact from this incident:

We’re currently investigating an issue causing the Copilot Metrics API report URLs to fail for 28-day and 1-day enterprise and user reports. We are collaborating with Azure teams to restore access as soon as possible.

Oct 29, 17:26 UTC
Update - We are seeing ongoing connection failures across Codespaces and Actions, including Enterprise Migrations.

Linux ARM64 standard hosted runners are failing to start, but Ubuntu latest and Windows latest are not affected at this time.

SSH connections to Codespaces may be successful, but connections via VS Code are consistently failing.

GitHub and Azure teams are coordinating to mitigate impact and resolve the root issues.

Oct 29, 17:11 UTC
Update - Actions impact is primarily limited to larger runner jobs at this time. This also impacts enterprise migrations.
Oct 29, 16:31 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Oct 29, 16:19 UTC
Investigating - We are investigating reports of degraded performance for Actions
Oct 29, 16:17 UTC
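
Several updates above describe larger runner jobs queueing or failing to start. Below is a minimal sketch for listing runs stuck in the queue through the Actions REST API; the owner, repository, and token variable are hypothetical placeholders.

    import os
    import requests

    # List workflow runs currently queued for a repository, which makes stuck
    # larger-runner jobs visible even when the UI is degraded:
    # GET /repos/{owner}/{repo}/actions/runs?status=queued
    OWNER, REPO = "my-org", "my-repo"  # placeholders for illustration only

    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        params={"status": "queued", "per_page": 50},
        timeout=10,
    )
    resp.raise_for_status()

    for run in resp.json()["workflow_runs"]:
        print(run["id"], run["name"], run["created_at"])
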
Resolved - A cloud resource used by the Copilot bing-search tool was deleted as part of a resource cleanup operation. Once this was discovered, the resource was recreated. Going forward, more effective monitoring will be put in place to catch this issue earlier.
Oct 29, 21:49 UTC
Investigating - We are currently investigating this issue.
Oct 29, 21:34 UTC
Oct 28, 2025
Resolved - From October 28th at 16:03 UTC until 17:11 UTC, the Copilot service experienced degradation due to an infrastructure issue which impacted the Claude Haiku 4.5 model, leading to a spike in errors affecting 1% of users. No other models were impacted. The incident was caused by an outage at an upstream provider. We are working to improve redundancy for future occurrences.
Oct 28, 17:11 UTC
Update - The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Oct 28, 17:11 UTC
Update - Usage of the Haiku 4.5 model with Copilot experiences is currently degraded. We are investigating and working to remediate. Other models should be unaffected.
Oct 28, 16:42 UTC
Investigating - We are currently investigating this issue.
Oct 28, 16:39 UTC
Oct 27, 2025
Resolved - Between October 23, 2025 19:27:29 UTC and October 27, 2025 17:42:42 UTC, users experienced timeouts when viewing repository landing pages. We observed the timeouts for approximately 5,000 users across fewer than 1,000 repositories, including forked repositories. The impact was limited to logged-in users accessing repositories in organizations with more than 200,000 members. Forks of repositories from affected large organizations were also impacted. Git operations were functional throughout this period.

This was caused by feature-flagged changes impacting organization membership. The changes caused unintended timeouts in organization membership count evaluations, which led to repository landing pages not loading. (An illustrative sketch of bounding such a count with a timeout and a fallback appears after this incident's timeline.)

The flag was turned off and a fix addressing the timeouts was deployed, including additional optimizations to better support organizations of this size. We are reviewing related areas and will continue to monitor for similar performance regressions.

Oct 27, 17:51 UTC
Update - We have deployed the fix and resolved the issue.
Oct 27, 17:51 UTC
Update - The fix for this issue has been validated and is being deployed. This fix will also resolve related timeouts on the Access settings page of the impacted repositories and forks.
Oct 27, 17:16 UTC
Update - Repositories in, or forked from, very large organizations (200k+ members) are not loading in the desktop web UI, showing a unicorn error page instead. A fix has been identified and is being tested. Access via git and access to specific pages within the repository, such as pull requests, are not impacted, nor is accessing the repository via the mobile web UI.
Oct 27, 16:35 UTC
Investigating - We are currently investigating this issue.
Oct 27, 16:25 UTC
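
The summary above attributes the timeouts to an expensive organization membership count blocking repository page loads. The sketch below is purely illustrative of the general pattern of bounding such a query with a time budget and falling back to a cheaper value; none of the names, numbers, or structure come from GitHub's codebase.

    from concurrent.futures import ThreadPoolExecutor, TimeoutError
    import time

    # Hypothetical illustration: bound an expensive membership count by a time
    # budget and fall back to a cached value instead of letting one slow query
    # block the whole page render.
    CACHED_COUNTS = {42: 200_000}   # stale-but-cheap cache, made up for this example
    QUERY_BUDGET_SECONDS = 0.5

    def exact_member_count(org_id: int) -> int:
        # Stand-in for the expensive count query against the membership store.
        time.sleep(2)               # simulate a query that blows past the budget
        return 200_123

    def member_count_with_fallback(org_id: int) -> int:
        pool = ThreadPoolExecutor(max_workers=1)
        future = pool.submit(exact_member_count, org_id)
        try:
            return future.result(timeout=QUERY_BUDGET_SECONDS)
        except TimeoutError:
            return CACHED_COUNTS.get(org_id, 0)
        finally:
            pool.shutdown(wait=False)   # do not block the response on the slow query

    print(member_count_with_fallback(42))   # falls back to the cached 200000
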
Oct 26, 2025

No incidents reported.

Oct 25, 2025

No incidents reported.

Oct 24, 2025
Resolved - On October 24 between 02:55 and 03:15 UTC, githubstatus.com was unreachable due to a service interruption with our status page provider.
During this time, GitHub systems were not experiencing any outages or disruptions.
We are working with our vendor to understand how to improve the availability of githubstatus.com.

Oct 24, 14:17 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 24, 10:10 UTC
Update - We have found the source of the slowness and mitigated it. We are watching recovery before returning the status to green, but no user impact is currently observed.
Oct 24, 10:07 UTC
Investigating - We are currently investigating this issue.
Oct 24, 09:31 UTC
Oct 23, 2025
Resolved - On October 23, 2025, between 15:54 UTC and 19:20 UTC, GitHub Actions larger hosted runners experienced degraded performance, with 1.4% of overall workflow runs and 29% of larger hosted runner jobs failing to start or timing out within 5 minutes.

The full set of contributing factors is still under investigation, but the customer impact was due to database performance degradation: routine database changes produced a load profile that triggered a bug in the underlying database platform used for larger runners.

Impact was mitigated through a combination of scaling up the database and reducing load. We are working with partners to resolve the underlying bug and have paused similar database changes until it is resolved.

Oct 23, 20:25 UTC
Update - Actions is operating normally.
Oct 23, 20:25 UTC
Update - Actions larger runner job start delays and failure rates are recovering. Many jobs should be starting as normal. We're continuing to monitor and confirm full recovery.
Oct 23, 19:33 UTC
Update - We continue to investigate problems with Actions larger runners. We're continuing to see signs of improvement, but customers are still experiencing jobs queueing or failing due to timeout.
Oct 23, 18:17 UTC
Update - We continue to investigate problems with Actions larger runners. We're seeing limited signs of recovery, but customers are still experiencing jobs queueing or failing due to timeout.
Oct 23, 17:36 UTC
Update - We continue to investigate problems with Actions larger runners. Some customers are experiencing jobs queueing or failing due to timeout.
Oct 23, 16:59 UTC
Update - We're investigating problems with larger hosted runners in Actions. Our team is working to identify the cause. We'll post another update by 17:03 UTC.
Oct 23, 16:36 UTC
Investigating - We are investigating reports of degraded performance for Actions
Oct 23, 16:33 UTC
Oct 22, 2025
Resolved - On October 22, 2025, between 14:06 UTC and 15:17 UTC, less than 0.5% of web users experienced intermittent slow page loads on GitHub.com. During this time, API requests showed increased latency, with up to 2% timing out.

The issue was caused by elevated load on one of our databases from a poorly performing query, which impacted performance for a subset of requests.

We identified the source of the load and optimized the query to restore normal performance. We’ve added monitors for early detection of query performance issues, and we continue to monitor the system closely to ensure ongoing stability.

Oct 22, 15:53 UTC
Update - API Requests is operating normally.
Oct 22, 15:53 UTC
Update - We have identified a possible source of the issue, and there is currently no user impact. We are continuing to investigate and will not resolve this incident until we have more confidence in our mitigations and investigation results.
Oct 22, 15:17 UTC
Update - Some users may see slow requests, timeouts, or not-found errors when browsing repositories. We have identified slowness in our platform and are investigating.
Oct 22, 14:37 UTC
Investigating - We are investigating reports of degraded performance for API Requests
Oct 22, 14:29 UTC
Oct 21, 2025
Resolved - On October 21, 2025, between 13:30 and 17:30 UTC, GitHub Enterprise Cloud Organization SAML Single Sign-On experienced degraded performance. Customers may have been unable to successfully authenticate into their GitHub Organizations during this period. Organization SAML recorded a maximum of 0.4% of SSO requests failing during this timeframe.

This incident stemmed from a failure in a read replica database partition responsible for storing license usage information for GitHub Enterprise Cloud Organizations. This partition failure resulted in users from affected organizations, whose license usage information was stored on this partition, being unable to access SSO during the aforementioned window. A successful SSO requires an available license for the user who is accessing a GitHub Enterprise Cloud Organization backed by SSO.
The failing partition was subsequently taken out of service, thereby mitigating the issue.

Remedial actions are currently underway to ensure that a read replica failure does not compromise the overall service availability.

Oct 21, 17:39 UTC
Update - Mitigation continues; the impact is limited to Enterprise Cloud customers who have configured SAML at the organization level.
Oct 21, 17:18 UTC
Update - We are continuing to work on mitigating this issue.
Oct 21, 17:11 UTC
Update - We’ve identified the issue affecting some users with SAML/OIDC authentication and are actively working on mitigation. Some users may not be able to authenticate during this time.
Oct 21, 16:33 UTC
Update - We're seeing issues with SAML/OIDC authentication for a small number of GitHub.com customers. We are investigating.
Oct 21, 16:03 UTC
Investigating - We are currently investigating this issue.
Oct 21, 16:00 UTC
Resolved - On October 21, 2025, between 07:55 UTC and 12:20 UTC, GitHub Actions experienced degraded performance. During this time, 2.11% of workflow runs failed to start within 5 minutes, with an average delay of 8.2 minutes. The root cause was increased latency on a node in one of our Redis clusters, triggered by resource contention after a patching event became stuck.

Recovery began once the patching process was unstuck and normal connectivity to the Redis cluster was restored at 11:45 UTC, but it took until 12:20 UTC to clear the backlog of queued work. We are implementing safeguards to prevent this failure mode and enhancing our monitoring to detect and address problems like this more quickly in the future.

Oct 21, 12:28 UTC
Update - We were able to apply a mitigation and we are now seeing recovery.
Oct 21, 11:59 UTC
Update - We are seeing about 10% of Actions runs taking longer than 5 minutes to start. We're still investigating and will provide an update by 12:00 UTC.
Oct 21, 11:37 UTC
Update - We are still seeing delays in starting some Actions runs and are currently investigating. We will provide updates as we have more information.
Oct 21, 09:59 UTC
Update - We are seeing delays in starting some Actions runs and are currently investigating.
Oct 21, 09:25 UTC
Investigating - We are investigating reports of degraded performance for Actions
Oct 21, 09:12 UTC
Oct 20, 2025
Resolved - From October 20th at 14:10 UTC until 16:40 UTC, the Copilot service experienced degradation due to an infrastructure issue which impacted the Grok Code Fast 1 model, leading to a spike in errors affecting 30% of users. No other models were impacted. The incident was caused by an outage at an upstream provider.
Oct 20, 16:40 UTC
Update - The issues with our upstream model provider continue to improve, and Grok Code Fast 1 is once again stable in Copilot Chat, VS Code and other Copilot products.
Oct 20, 16:39 UTC
Update - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1, which is impacting 6% of users. We’ve been informed they are implementing fixes, but users can expect some requests to fail intermittently until all issues are resolved.

Oct 20, 16:07 UTC
Update - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Oct 20, 14:47 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Oct 20, 14:46 UTC
Resolved - On October 20, 2025, between 08:05 UTC and 10:50 UTC the Codespaces service was degraded, with users experiencing failures creating new codespaces and resuming existing ones. On average, the error rate for codespace creation was 39.5% and peaked at 71% of requests to the service during the incident window. Resume operations averaged 23.4% error rate with a peak of 46%. This was due to a cascading failure triggered by an outage in a 3rd-party dependency required to build devcontainer images.

The impact was mitigated when the 3rd-party dependency recovered.

We are investigating opportunities to remove this dependency from the critical path of our container build process, and we are working to improve our monitoring and alerting systems to reduce our time to detect issues like this in the future.

Oct 20, 11:01 UTC
Update - We are now seeing sustained recovery. As we continue to make our final checks, we hope to resolve this incident in the next 10 minutes.
Oct 20, 10:56 UTC
Update - We are seeing early signs of recovery for Codespaces. The team will continue to monitor and keep this incident active as a line of communication until we are confident of full recovery.
Oct 20, 10:15 UTC
Update - We are continuing to monitor Codespaces error rates and will report further as we have more information.
Oct 20, 09:34 UTC
Update - We are seeing increased error rates with Codespaces generally. This is due to a third party provider experiencing problems. This impacts both creation of new Codespaces and resumption of existing ones.

We continue to monitor and will report with more details as we have them.

Oct 20, 09:01 UTC
Investigating - We are investigating reports of degraded availability for Codespaces
Oct 20, 08:56 UTC
Oct 19, 2025

No incidents reported.

Oct 18, 2025

No incidents reported.