- Detected by Pingoru
- Apr 28, 2026, 09:26 PM UTC
- Resolved
- Apr 28, 2026, 11:31 PM UTC
- Duration
- 2h 4m
Affected: API Gateway
Timeline · 6 updates
-
investigating Apr 28, 2026, 09:26 PM UTC
We are investigating 403 errors for PostgREST requests across multiple regions.
-
identified Apr 28, 2026, 09:36 PM UTC
The issue has been identified and we are working on a fix.
-
identified Apr 28, 2026, 09:40 PM UTC
We are continuing to work on a fix for this issue.
-
identified Apr 28, 2026, 10:24 PM UTC
We are continuing to work on a fix. We’ll share further updates as progress is made.
-
monitoring Apr 28, 2026, 11:03 PM UTC
The exceptions fix has been rolled out, and we are monitoring for continued stability.
-
resolved Apr 28, 2026, 11:31 PM UTC
This is now resolved. Thank you for your patience while we worked through this issue.
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 08:34 PM UTC
- Resolved
- Apr 24, 2026, 10:00 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 28, 2026, 08:34 PM UTC
Incident Summary
A health check designed to prevent Out-of-Memory (OOM) conditions began closing some incoming connections, which led to 503 errors for a subset of users. These errors were initially surfaced under a generic SUPABASE_EDGE_RUNTIME_ERROR code.
Timeline & Actions
- Sunday, 26: Issue detected as user reports of 503 errors increased. Investigation into root cause began.
- Monday, 27: Introduced a more accurate error code (SUPABASE_EDGE_RUNTIME_SERVICE_DEGRADED) and implemented infrastructure changes to mitigate impact.
- Tuesday, 28: Adjusted conditions for returning 503 responses to be less aggressive. Prepared a “retry-on-degraded” mechanism (not yet deployed).
Current Status
Mitigations are in place and improvements to error handling are ongoing. Further resilience enhancements will be deployed shortly.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 12:39 PM UTC
- Resolved
- Apr 27, 2026, 07:04 PM UTC
- Duration
- 6h 24m
Affected: Auth, Dashboard, Database
Timeline · 7 updates
-
identified Apr 27, 2026, 12:39 PM UTC
We are seeing an increase in projects unavailable in eu-west-3 following an upstream issue with EC2 instances in the region. The team is working on restoring access to these projects.
-
identified Apr 27, 2026, 01:31 PM UTC
The team is continuing to work through affected projects; however, a project restart is also effective. This can be performed from the dashboard for your own projects at any time.
-
identified Apr 27, 2026, 01:38 PM UTC
We have identified this issue across multiple regions, not just eu-west-3 as originally suspected. We are expanding the scope of efforts to bring affected projects back online. Users can also resolve this, in most cases, on their own via a project restart. This can be performed from the dashboard for your own projects at any time.
-
identified Apr 27, 2026, 03:00 PM UTC
The team is still working to bring these projects back online and has a fix under way. Many users can also resolve this on their own via a project restart, which can be performed from the dashboard for your own projects at any time. For those who are still seeing issues after a restart, we will be pushing a fix soon.
-
identified Apr 27, 2026, 04:22 PM UTC
The team is continuing their mitigation efforts. We've fixed and restarted most of the affected projects. We're continuously looking for any others that are affected so we can be sure to get them all fixed.
-
monitoring Apr 27, 2026, 06:34 PM UTC
We believe all projects affected by this particular issue have been restored. The team will keep an eye on error rates to catch any affected projects that didn't surface initially and to ensure no new issues arise. We appreciate your patience as we worked through this issue.
-
resolved Apr 27, 2026, 07:04 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 10:54 AM UTC
- Resolved
- Apr 27, 2026, 08:05 PM UTC
- Duration
- 9h 11m
Affected: Dashboard
Timeline · 8 updates
-
identified Apr 27, 2026, 10:54 AM UTC
We’ve identified that a recent change to the Postgres systemd service has a dependency on a prestart script. This script is not present on some older projects, and when those projects restart, Postgres may fail to start successfully. This can result in projects remaining offline. We are working on a fix to restore service.
-
identified Apr 27, 2026, 11:11 AM UTC
We’ve identified that a recent change to the Postgres systemd service has a dependency on a prestart script. This script is not present on some older projects, and when those projects restart, Postgres may fail to start successfully. This can result in projects remaining offline. We are working on a fix to restore service. If you are running a version of Postgres older than 15.1.1.57, please avoid restarting your project until this issue is resolved.
-
identified Apr 27, 2026, 11:40 AM UTC
Our team is actively working on a fix and is manually patching impacted projects to restore service as quickly as possible. If you are running a version of Postgres older than 15.1.1.57, please avoid restarting your project until this issue is resolved.
-
monitoring Apr 27, 2026, 01:12 PM UTC
All impacted projects with open support tickets have now been manually patched and restored. We are currently rolling out a fix across the fleet to prevent further impact. We will continue to monitor progress and provide updates.
-
identified Apr 27, 2026, 02:21 PM UTC
We’ve identified additional impacted projects and are currently working to manually patch and restore them. We are also continuing to roll out a fix across the fleet to prevent further impact.
-
identified Apr 27, 2026, 04:05 PM UTC
The permanent fix has now been rolled out across the fleet. We are continuing to manually patch any remaining impacted projects to restore service.
-
monitoring Apr 27, 2026, 05:31 PM UTC
The fix has been rolled out, and we are monitoring for continued stability. Any remaining impacted projects are being manually patched.
-
resolved Apr 27, 2026, 08:05 PM UTC
All projects are patched and this is now resolved. Thank you for your patience while we ensured all projects were updated.
Read the full incident report →
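The failure mode in this incident — a systemd service whose prestart dependency is missing on some hosts — can be sketched with a minimal, hypothetical unit file. The script path, unit layout, and binary paths below are illustrative assumptions, not Supabase's actual configuration:

```ini
# Hypothetical sketch: a Postgres unit with a prestart dependency.
# If the ExecStartPre script is absent (as on some older images),
# the prestart step fails and systemd never starts the service.
[Unit]
Description=PostgreSQL database server

[Service]
Type=notify
# Illustrative path: if this script does not exist on the host,
# this step fails and the unit fails to start.
ExecStartPre=/usr/local/bin/postgres-prestart.sh
ExecStart=/usr/lib/postgresql/15/bin/postgres -D /var/lib/postgresql/data
Restart=on-failure
```

One common way to make such a dependency safe on hosts where the script may not exist is to prefix the ExecStartPre command with `-`, which tells systemd to treat a failure of that step as non-fatal.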
- Detected by Pingoru
- Apr 26, 2026, 05:52 PM UTC
- Resolved
- Apr 26, 2026, 05:52 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 26, 2026, 05:52 PM UTC
A modification to existing projects was deployed due to the recently communicated Data API and pg_graphql changes: https://github.com/orgs/supabase/discussions/45329 This modification was supposed to disable pg_graphql for projects that had not seen usage in the last 30 days. Due to a misconfiguration, it targeted more projects than intended. We are deeply sorry for the inconvenience and have since addressed the issue. If you use pg_graphql actively, you may have seen “pg_graphql extension is not enabled” in the logs for your project, and you can safely re-enable the extension. If you have been impacted, or you are unable to re-enable pg_graphql, please contact [email protected]
Read the full incident report →
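For users re-enabling pg_graphql as described above, a single SQL statement should suffice; this is a hedged sketch assuming you run it from the project's SQL editor or another sufficiently privileged connection:

```sql
-- Re-enable the pg_graphql extension for the current database.
-- "if not exists" makes the statement safe to re-run.
create extension if not exists pg_graphql;
```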
- Detected by Pingoru
- Apr 25, 2026, 04:16 AM UTC
- Resolved
- Apr 25, 2026, 05:47 AM UTC
- Duration
- 1h 30m
Affected: ap-northeast-1, us-east-2
Timeline · 5 updates
-
identified Apr 25, 2026, 04:16 AM UTC
We are seeing new project creation, resize requests, and project restart failures due to capacity issues in us-east-2 and ap-northeast-1. We are disabling project creation, project resize actions, and project restarts in these regions. We have already reached out to our provider for additional capacity, and will update here as we have additional information.
-
identified Apr 25, 2026, 04:26 AM UTC
US-East-2 and AP-Northeast-1 have been disabled for project creation and configuration change actions, and the team is currently working to free additional capacity in these regions.
-
identified Apr 25, 2026, 04:38 AM UTC
Capacity has been freed, and the regions have been re-enabled. The team is currently working on resolving any failed project starts, resizes, restarts, or other configuration changes during this event.
-
monitoring Apr 25, 2026, 05:14 AM UTC
Projects have recovered and capacity has been restored. All previously affected regions have been re-enabled, and we will continue to monitor to ensure stability.
-
resolved Apr 25, 2026, 05:47 AM UTC
This issue has now been resolved and projects have returned to normal.
Read the full incident report →
- Detected by Pingoru
- Apr 24, 2026, 11:21 AM UTC
- Resolved
- Apr 24, 2026, 02:36 PM UTC
- Duration
- 3h 15m
Affected: Dashboard
Timeline · 4 updates
-
investigating Apr 24, 2026, 11:21 AM UTC
We are currently investigating reports that newly created projects may be unreachable. Initial findings indicate DNS resolution failures. We will provide an update as soon as more information is available.
-
identified Apr 24, 2026, 11:56 AM UTC
We are currently experiencing an issue propagating new DNS records for our zone. We are working closely with our upstream network provider's technical team to implement a fix and will share updates as we learn more.
-
monitoring Apr 24, 2026, 12:43 PM UTC
A fix has been implemented, and we are seeing improvements in DNS resolution. We’re closely monitoring to ensure the issue is fully resolved and services remain stable.
-
resolved Apr 24, 2026, 02:36 PM UTC
The issue has been resolved and DNS resolution is now operating normally.
Read the full incident report →
- Detected by Pingoru
- Apr 17, 2026, 02:53 PM UTC
- Resolved
- Apr 18, 2026, 02:01 AM UTC
- Duration
- 11h 8m
Affected: API Gateway, Auth, Database, Edge Functions, Realtime, Storage
Timeline · 18 updates
-
investigating Apr 17, 2026, 01:02 PM UTC
We are currently investigating reports of users experiencing login issues when using Supabase Auth. At this time, the impact appears to be limited to a subset of users, primarily in South America. The exact cause and scope are still being determined. Our team is actively working to identify the root of the issue and will provide updates as more information becomes available. We appreciate your patience while we investigate.
-
investigating Apr 17, 2026, 01:31 PM UTC
We are currently investigating reports of users experiencing login issues when using Supabase Auth. At this time, the impact appears to be limited to a subset of users, primarily in North and South America. The exact cause and scope are still being determined. Our team is actively working to identify the root of the issue and will provide updates as more information becomes available. We appreciate your patience while we investigate.
-
investigating Apr 17, 2026, 01:51 PM UTC
We are actively investigating this issue and working with relevant partner organizations. At this time, we do not believe this issue is specific to any particular network provider. A subset of users continue to be impacted in both South and North America.
-
investigating Apr 17, 2026, 02:16 PM UTC
We have further clarified the scope and impact of the issues currently affecting users. A subset of users across North and South America are experiencing DNS resolution failures and HTTP 530 errors. These symptoms indicate a networking-related issue impacting the availability of Supabase projects. We are actively working with both internal teams and external network providers to isolate the root cause and determine the most effective path to resolution. We will continue to provide updates as more information becomes available.
-
investigating Apr 17, 2026, 02:30 PM UTC
We continue to work with partners and investigate.
-
investigating Apr 17, 2026, 02:53 PM UTC
We continue to have active lines of investigation open with our upstream network provider partners, and are still working toward a resolution. This issue affects network-level access to projects; the projects themselves and the data in them are safe and unaffected. Users are seeing two separate symptoms of these networking issues:
1. DNS lookup failures for Supabase project URLs
2. 530 responses to HTTP requests to Supabase project API endpoints
-
identified Apr 17, 2026, 03:09 PM UTC
Our upstream networking provider has discovered an issue and has declared an incident on their side. They are currently working toward resolution. We are actively working with them and will provide updates on progress as more information is available.
-
identified Apr 17, 2026, 03:41 PM UTC
Our upstream networking partner is continuing their investigation. We will continue to post regular updates as we have more information.
-
identified Apr 17, 2026, 04:31 PM UTC
Our upstream networking provider has identified a fix and is currently implementing it. We are seeing preliminary improvements to error rates for users in North America, but are still following this issue closely with our provider.
-
identified Apr 17, 2026, 05:44 PM UTC
Upstream provider mitigations are still under way, and we are seeing improvements incrementally as routing regions have the mitigations applied. We are still seeing particular impact in South America, but some users in North America may still be affected as well. We anticipate continued incremental improvements and will update as we get more information.
-
identified Apr 17, 2026, 06:34 PM UTC
Our upstream provider has rolled the fix out to most regions. While we are seeing improvement, we are still seeing increased error rates in Mexico and Brazil and are continuing to work with them until this is fully resolved.
-
identified Apr 17, 2026, 07:33 PM UTC
The fix that our upstream provider implemented has taken effect. Service has significantly improved across all previously impacted regions. There are still residual elevated error rates in a handful of regions and we will continue to work with our provider to eliminate them.
-
monitoring Apr 17, 2026, 08:10 PM UTC
Our upstream provider has reported their incident resolved. We will continue to monitor for any errors our customers may be experiencing for another 30 minutes.
-
identified Apr 17, 2026, 08:35 PM UTC
Some users continue to experience errors. We are raising the issue with our provider.
-
identified Apr 17, 2026, 09:31 PM UTC
We recognize that some of our users, especially in parts of North and South America, are continuing to experience DNS and 5XX errors when trying to connect to their projects. We have escalated this issue with our network provider and will work with them continuously until the problems are resolved.
-
identified Apr 17, 2026, 11:08 PM UTC
We've seen a marked improvement in error rates across all regions in the last few minutes. We are still waiting on confirmation of status from our provider, but we believe things have improved at the moment. We'll keep working on this until we are certain a resolution has been reached.
-
monitoring Apr 18, 2026, 12:37 AM UTC
We've received confirmation from our networking provider that there is recovery across all regions. We are able to corroborate this with our own metrics collections. We will continue to monitor for a bit, but we believe this to be resolved at this time. We apologize for the trouble, and we appreciate your patience.
-
resolved Apr 18, 2026, 02:01 AM UTC
All metrics have been nominal and stable for several hours now, and our upstream provider has resolved their incident.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 07:05 PM UTC
- Resolved
- Apr 17, 2026, 05:00 PM UTC
- Duration
- 21h 54m
Timeline · 4 updates
-
monitoring Apr 16, 2026, 07:05 PM UTC
We are experiencing delays in delivery of some of our transactional email to customers, including welcome, oauth-approved, project-shutdown, and payment-failure notifications. This has caused some customers to receive notifications after their issue has been resolved (for example, payment-failure notifications after payment has been made). We are actively monitoring the email delivery queue and expect the delays to be eliminated over time. We will update this status as appropriate.
-
monitoring Apr 16, 2026, 07:37 PM UTC
We have made progress investigating the specific causes of the email spike and are determining appropriate next steps to improve clearance of the backlog.
-
monitoring Apr 16, 2026, 08:51 PM UTC
Customers will continue to see these delayed transactional emails while we clear the backlog. We will provide another update once these queued emails are sent.
-
resolved Apr 17, 2026, 05:00 PM UTC
Email queues have returned to normal.
Read the full incident report →
- Detected by Pingoru
- Apr 12, 2026, 03:51 PM UTC
- Resolved
- Apr 12, 2026, 05:03 PM UTC
- Duration
- 1h 12m
Affected: ap-south-1, ap-southeast-1
Timeline · 3 updates
-
investigating Apr 12, 2026, 03:51 PM UTC
We are currently investigating an issue affecting project creation in some APAC regions. Users may experience failures when attempting to create new projects in impacted regions. Existing projects are not impacted. Our team is actively working to mitigate the issue and restore normal availability. We will share further updates as more information becomes available.
-
monitoring Apr 12, 2026, 04:19 PM UTC
We are recovering capacity in the impacted APAC regions and are seeing improvements in project creation success rates. We will continue to monitor to ensure stability.
-
resolved Apr 12, 2026, 05:03 PM UTC
Capacity has been fully restored in the impacted APAC regions, and project creation is operating normally.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 07:10 PM UTC
- Resolved
- Apr 08, 2026, 08:05 PM UTC
- Duration
- 54m
Affected: Analytics
Timeline · 4 updates
-
identified Apr 08, 2026, 07:11 PM UTC
Users may see errors when attempting to retrieve logs via the Supabase dashboard or may not see new logs arriving via log drains. The team has identified the issue and is working on a fix. Underlying projects are unaffected; this only affects the logging service.
-
identified Apr 08, 2026, 07:24 PM UTC
The underlying logging platform has stabilized, and some log sources have been restored and are successfully ingesting logs. The team is working to restore the rest of the log sources now.
-
monitoring Apr 08, 2026, 07:41 PM UTC
All log sources have been restored and logs should be available to users again now. We are keeping an eye on things to ensure ongoing stability.
-
resolved Apr 08, 2026, 08:05 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 08:28 AM UTC
- Resolved
- Apr 02, 2026, 09:24 AM UTC
- Duration
- 56m
Affected: Dashboard
Timeline · 4 updates
-
investigating Apr 02, 2026, 08:28 AM UTC
We are currently investigating increased error rates affecting plan upgrades. Our team is actively working to identify the root cause and restore normal functionality. We will provide updates as more information becomes available.
-
identified Apr 02, 2026, 08:40 AM UTC
We have identified the cause of increased error rates affecting plan upgrades and are working on a fix. We will provide further updates as progress is made.
-
monitoring Apr 02, 2026, 09:05 AM UTC
A fix has been applied, and we are monitoring to ensure stability.
-
resolved Apr 02, 2026, 09:24 AM UTC
The issue causing increased error rates during plan upgrades has been resolved. All systems are operating normally.
Read the full incident report →
- Detected by Pingoru
- Apr 01, 2026, 12:13 AM UTC
- Resolved
- Apr 01, 2026, 01:11 AM UTC
- Duration
- 58m
Affected: Analytics
Timeline · 3 updates
-
investigating Apr 01, 2026, 12:13 AM UTC
We are seeing degradation of logs and are currently investigating the cause of this issue.
-
monitoring Apr 01, 2026, 12:47 AM UTC
We have identified the issue and a fix is in place. We will continue to monitor to ensure stability.
-
resolved Apr 01, 2026, 01:11 AM UTC
Log ingestion has recovered and the incident is considered resolved. Some log data was lost during the incident, between approximately 11:00 PM and 11:45 PM UTC on March 31.
Read the full incident report →
- Detected by Pingoru
- Mar 27, 2026, 05:58 PM UTC
- Resolved
- Mar 27, 2026, 06:43 PM UTC
- Duration
- 44m
Timeline · 3 updates
-
identified Mar 27, 2026, 05:58 PM UTC
We have experienced problems with project creation; the issue has been identified and mitigation steps have been implemented.
-
monitoring Mar 27, 2026, 06:18 PM UTC
Project creation has returned to normal. We continue to actively monitor and operate our platform for stability and consistency.
-
resolved Mar 27, 2026, 06:43 PM UTC
We have confirmed that project creation has returned to normal.
Read the full incident report →
- Detected by Pingoru
- Mar 27, 2026, 03:13 PM UTC
- Resolved
- Mar 27, 2026, 05:03 PM UTC
- Duration
- 1h 50m
Timeline · 6 updates
-
investigating Mar 27, 2026, 03:13 PM UTC
We are investigating a problem with log ingestion. Users may experience delays in current log data.
-
identified Mar 27, 2026, 03:33 PM UTC
We have identified the issue with log ingestion. We are testing mitigation options, including bringing up additional capacity. We will continue to update this page with our progress.
-
identified Mar 27, 2026, 03:52 PM UTC
We have increased capacity and logs are returning to normal. It will take some time for the ingestion backlog to be eliminated. Some services, such as realtime, storage, and api-gateway, will have lost logs during the ingestion incident.
-
identified Mar 27, 2026, 04:12 PM UTC
We continue to increase capacity in targeted areas to mitigate log ingestion degradation. It will take some time for the ingestion backlog to be eliminated.
-
monitoring Mar 27, 2026, 04:17 PM UTC
Our capacity mitigations have taken effect and ingestion has returned to normal. Customers should see logs returning to normal.
-
resolved Mar 27, 2026, 05:03 PM UTC
Log ingestion has returned to normal
Read the full incident report →
- Detected by Pingoru
- Mar 26, 2026, 11:41 PM UTC
- Resolved
- Mar 27, 2026, 02:58 AM UTC
- Duration
- 3h 17m
Affected: Realtime
Timeline · 7 updates
-
investigating Mar 26, 2026, 11:41 PM UTC
We are investigating network connectivity issues with Realtime.
-
investigating Mar 27, 2026, 12:20 AM UTC
We're still investigating network connectivity issues with Realtime.
-
identified Mar 27, 2026, 12:46 AM UTC
We've identified the issue and are currently working on a fix; we will provide an update on our progress soon.
-
identified Mar 27, 2026, 01:13 AM UTC
We are working on a fix and we’ll continue to provide updates as progress is made.
-
identified Mar 27, 2026, 01:44 AM UTC
We appreciate your patience as we continue to work on a fix. We will provide updates as they become available.
-
monitoring Mar 27, 2026, 02:48 AM UTC
The fix has been implemented and we’re monitoring to ensure stability.
-
resolved Mar 27, 2026, 02:58 AM UTC
The issue has been resolved and Realtime services have returned to normal.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 03:13 PM UTC
- Resolved
- Mar 24, 2026, 03:56 PM UTC
- Duration
- 43m
Timeline · 3 updates
-
identified Mar 24, 2026, 03:13 PM UTC
We are aware of an issue affecting users creating/updating branches. A permission error in our branching workflow is causing failures for all branches. We have identified the root cause and a fix is being deployed. We will provide an update once resolved.
-
monitoring Mar 24, 2026, 03:27 PM UTC
The fix has been implemented and branching access has returned to normal.
-
resolved Mar 24, 2026, 03:56 PM UTC
Branching access has returned to normal, this issue is resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 04:28 PM UTC
- Resolved
- Mar 23, 2026, 06:56 PM UTC
- Duration
- 2h 27m
Affected: Analytics
Timeline · 4 updates
-
investigating Mar 23, 2026, 04:28 PM UTC
We are currently investigating an issue affecting Supabase Logs, resulting in partial log ingestion for some services. Users may experience delays or errors when accessing logs. In some cases, logs may be partially ingested or not ingested at all. Projects remain fully functional. This issue is limited to logging only. Our team is actively working to identify the root cause and restore normal performance as quickly as possible. We will provide further updates as more information becomes available.
-
identified Mar 23, 2026, 04:38 PM UTC
The cause of the issue has been identified. Our engineering team is working on a fix.
-
monitoring Mar 23, 2026, 05:05 PM UTC
A fix has been implemented. The system is now operating normally, and we will continue to monitor.
-
resolved Mar 23, 2026, 06:56 PM UTC
This issue is now resolved. All logging ingestion has resumed in all regions.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 02:16 PM UTC
- Resolved
- Mar 23, 2026, 03:42 PM UTC
- Duration
- 1h 25m
Affected: Edge Functions
Timeline · 3 updates
-
investigating Mar 23, 2026, 02:16 PM UTC
We are currently investigating elevated error rates affecting Edge Functions in the eu-central-1 (Frankfurt) region.
-
monitoring Mar 23, 2026, 02:32 PM UTC
We carried out a controlled redeployment of the affected services to restore stability. The system is now operating normally, and we will continue to monitor.
-
resolved Mar 23, 2026, 03:42 PM UTC
Error rates have returned to normal. This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 19, 2026, 08:55 PM UTC
- Resolved
- Mar 19, 2026, 10:00 PM UTC
- Duration
- 1h 4m
Affected: Management API
Timeline · 4 updates
-
investigating Mar 19, 2026, 08:55 PM UTC
We are currently investigating project creation failures across multiple regions.
-
identified Mar 19, 2026, 09:08 PM UTC
We have identified the issue and a fix is underway.
-
monitoring Mar 19, 2026, 09:21 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 19, 2026, 10:00 PM UTC
Project creation is now succeeding in all regions.
Read the full incident report →
- Detected by Pingoru
- Mar 18, 2026, 04:27 PM UTC
- Resolved
- Mar 18, 2026, 06:21 PM UTC
- Duration
- 1h 54m
Affected: Storage
Timeline · 5 updates
-
investigating Mar 18, 2026, 04:27 PM UTC
We are currently investigating elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region. Newly created projects may experience issues connecting to storage, while existing projects remain unaffected at this time. We are actively working to understand the full scope of the impact and will provide further updates as more information becomes available.
-
investigating Mar 18, 2026, 04:58 PM UTC
We are continuing to investigate elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region. Newly created projects may still experience issues connecting to storage, while existing projects remain unaffected at this time. Our team is actively working to determine the root cause and assess the full impact. We will provide further updates as more information becomes available.
-
investigating Mar 18, 2026, 05:30 PM UTC
We are continuing to investigate elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region. While we are still observing a higher-than-normal level of errors, initial findings indicate that the overall impact is lower than first expected. Many affected requests are succeeding upon retry. Our team is actively working to determine the root cause and will provide further updates as more information becomes available.
-
identified Mar 18, 2026, 05:53 PM UTC
The root cause, involving task instances and batch sending events, has been identified. Our engineering team is working on a fix.
-
resolved Mar 18, 2026, 06:21 PM UTC
The fix was implemented. All impacted projects and newly created projects in the region are now successfully connecting to storage services.
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 04:50 AM UTC
- Resolved
- Mar 17, 2026, 07:41 AM UTC
- Duration
- 2h 50m
Timeline · 4 updates
-
investigating Mar 17, 2026, 04:50 AM UTC
Some custom Postgres configurations applied through the Supabase CLI are not currently taking effect for projects running Postgres version 17.6.1.084. Projects that do not use CLI-managed Postgres configurations, or that are running earlier Postgres versions, are not affected.
-
identified Mar 17, 2026, 05:11 AM UTC
We have identified the root cause and are currently working on a fix.
-
monitoring Mar 17, 2026, 07:07 AM UTC
A fix has been implemented and we are monitoring.
-
resolved Mar 17, 2026, 07:41 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 13, 2026, 08:54 AM UTC
- Resolved
- Mar 13, 2026, 10:19 AM UTC
- Duration
- 1h 24m
Affected: Connection Pooler
Timeline · 4 updates
-
investigating Mar 13, 2026, 08:54 AM UTC
We are currently investigating increased error rates relating to the connection pooler in ap-northeast-2 (South Korea). We will provide further updates as information becomes available.
-
investigating Mar 13, 2026, 09:25 AM UTC
We continue to investigate increased error rates relating to the connection pooler in ap-northeast-2 (South Korea). We will provide further updates as information becomes available.
-
monitoring Mar 13, 2026, 09:40 AM UTC
A fix has been implemented and error rates have reduced. Our team is actively monitoring the system to ensure stability.
-
resolved Mar 13, 2026, 10:19 AM UTC
Error rates have returned to normal, and connectivity has stabilised.
Read the full incident report →
- Detected by Pingoru
- Mar 11, 2026, 06:08 PM UTC
- Resolved
- Mar 11, 2026, 07:09 PM UTC
- Duration
- 1h
Affected: Management API
Timeline · 4 updates
-
investigating Mar 11, 2026, 06:08 PM UTC
We are currently investigating reports of projects failing to be created.
-
identified Mar 11, 2026, 06:25 PM UTC
We have identified the root cause and are currently working on a fix.
-
monitoring Mar 11, 2026, 06:47 PM UTC
A fix has been implemented and we are seeing project creation recovery across regions. We're moving the incident to monitoring while we confirm full stability.
-
resolved Mar 11, 2026, 07:09 PM UTC
Project creation recovery has been confirmed across all regions. Regional dashboards show error rates returned to normal, and previously failing project creation operations are succeeding again.
Read the full incident report →
- Detected by Pingoru
- Mar 07, 2026, 08:12 PM UTC
- Resolved
- Mar 07, 2026, 09:56 PM UTC
- Duration
- 1h 43m
Affected: Storage
Timeline · 5 updates
-
investigating Mar 07, 2026, 08:12 PM UTC
We have noticed that storage uploads and downloads have become degraded in EU-Central-2. We are investigating.
-
identified Mar 07, 2026, 08:22 PM UTC
We have identified the cause of the storage issue in EU-Central-2. The upstream provider is working to resolve the issue.
-
identified Mar 07, 2026, 08:44 PM UTC
We are observing storage error rates returning to normal.
-
monitoring Mar 07, 2026, 09:16 PM UTC
The upstream incident has been resolved and we have not observed any additional errors. We will continue to monitor our storage infrastructure for continued stable operations.
-
resolved Mar 07, 2026, 09:56 PM UTC
All storage operations have returned to normal.
Read the full incident report →