- Detected by Pingoru
- Mar 31, 2026, 02:32 PM UTC
- Resolved
- Mar 31, 2026, 02:32 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 31, 2026, 02:32 PM UTC
This is a retroactive status page linked to the following incident: https://status.grafana.com/incidents/38wwbz50ggrp. It is meant to clarify the time of impact. This issue first started at ~2026-03-30 18:00 UTC and is now resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 31, 2026, 09:48 AM UTC
- Resolved
- Mar 31, 2026, 10:24 AM UTC
- Duration
- 35m
Affected: AWS Australia - prod-ap-southeast-2, AWS Brazil - prod-sa-east-1, AWS Canada - prod-ca-east-0, AWS Germany - prod-eu-west-2, AWS Germany - prod-eu-west-4, AWS India - prod-ap-south-1, AWS Japan - prod-ap-northeast-0, AWS UAE - prod-me-central-1, AWS Singapore - prod-ap-southeast-1, AWS Sweden - prod-eu-north-0, AWS US East - prod-us-east-0, AWS US East - prod-us-east-2, AWS US West - prod-us-west-0, AWS Australia - prod-au-southeast-1, AWS UK - prod-gb-south-1, AWS Ireland - prod-eu-west-6, Azure US Central - us-central2, AWS Switzerland - prod-eu-central-0, Azure Netherlands - prod-eu-west-3, GCP Australia - prod-au-southeast-0, GCP Belgium - prod-eu-west-0, GCP Brazil - prod-sa-east-0, GCP India - prod-ap-south-0, GCP Singapore - prod-ap-southeast-0, GCP UK - prod-gb-south-0, GCP US Central - prod-us-central-0, GCP US Central - prod-us-central-3, GCP US Central - prod-us-central-4, GCP US East - prod-us-east-1, play.grafana.org, Federal Cloud - AWS US Gov West
Timeline · 3 updates
-
monitoring Mar 31, 2026, 09:48 AM UTC
Some CloudWatch queries were failing. The issue started at 08:37 UTC; we have been monitoring since 09:21 UTC.
-
monitoring Mar 31, 2026, 09:49 AM UTC
We are continuing to monitor for any further issues.
-
resolved Mar 31, 2026, 10:24 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 30, 2026, 06:34 PM UTC
- Resolved
- Mar 30, 2026, 04:30 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 30, 2026, 06:34 PM UTC
We encountered an issue impacting only a small subset of customers in the prod-us-central-0 region. The incident occurred between 16:20 and 17:50 UTC on 3/30/26. This incident is now resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 27, 2026, 01:36 PM UTC
- Resolved
- Mar 27, 2026, 08:48 PM UTC
- Duration
- 7h 12m
Affected: AWS Australia - prod-ap-southeast-2, AWS Brazil - prod-sa-east-1, AWS Canada - prod-ca-east-0, AWS Germany - prod-eu-west-2, AWS Germany - prod-eu-west-4, AWS India - prod-ap-south-1, AWS Japan - prod-ap-northeast-0, AWS UAE - prod-me-central-1, AWS Singapore - prod-ap-southeast-1, AWS Sweden - prod-eu-north-0, AWS US East - prod-us-east-0, AWS US East - prod-us-east-2, AWS US West - prod-us-west-0, AWS Australia - prod-au-southeast-1, AWS UK - prod-gb-south-1, AWS Ireland - prod-eu-west-6, Azure US Central - us-central2, AWS Switzerland - prod-eu-central-0, Azure Netherlands - prod-eu-west-3, GCP Australia - prod-au-southeast-0, GCP Belgium - prod-eu-west-0, GCP Brazil - prod-sa-east-0, GCP India - prod-ap-south-0, GCP Singapore - prod-ap-southeast-0, GCP UK - prod-gb-south-0, GCP US Central - prod-us-central-0, GCP US Central - prod-us-central-3, GCP US Central - prod-us-central-4, GCP US East - prod-us-east-1, play.grafana.org, Federal Cloud - AWS US Gov West
Timeline · 6 updates
-
investigating Mar 27, 2026, 01:36 PM UTC
We’re currently investigating an issue primarily affecting users on the Free tier. Impacted users will be met with a "your Grafana instance is loading" message indefinitely. Our team is actively working to identify the cause and will share an update within 1-2 hours. Thank you for your patience.
-
investigating Mar 27, 2026, 02:51 PM UTC
We’re continuing to investigate the issue with Grafana instances. While we don’t have new information to share yet, our team is working to identify the root cause. Next update in 1-2 hours.
-
investigating Mar 27, 2026, 04:36 PM UTC
We’re continuing to investigate the issue with Grafana instances. While we don’t have new information to share yet, our team is working to identify the root cause. Next update in 1-2 hours.
-
identified Mar 27, 2026, 06:10 PM UTC
We’ve identified the cause of the issue impacting the instances. Our team is currently implementing a fix. We’ll provide another update in 1–2 hours, or sooner, if the situation changes.
-
monitoring Mar 27, 2026, 08:16 PM UTC
We’ve implemented a fix and are monitoring the results to confirm the issue is fully resolved. Services may start to recover during this time. We’ll update again in 1 hour.
-
resolved Mar 27, 2026, 08:48 PM UTC
This incident has been resolved. Thank you for your patience.
Read the full incident report →
- Detected by Pingoru
- Mar 25, 2026, 02:11 PM UTC
- Resolved
- Apr 23, 2026, 08:07 PM UTC
- Duration
- 29d 5h
Affected: Azure Netherlands - prod-eu-west-3: Ingestion
Timeline · 10 updates
-
investigating Mar 25, 2026, 02:11 PM UTC
The metric writes issue reported in https://status.grafana.com/incidents/gfshj17lxj5z is still ongoing. Our Engineering team is actively investigating this and we will provide further updates as our investigation progresses.
-
investigating Mar 25, 2026, 09:35 PM UTC
We are continuing to investigate this issue.
-
monitoring Mar 26, 2026, 12:04 PM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Mar 26, 2026, 05:45 PM UTC
We are continuing to monitor the previously impacted environments.
-
monitoring Mar 27, 2026, 09:05 PM UTC
We are continuing to monitor this through the weekend.
-
monitoring Apr 02, 2026, 09:38 PM UTC
We are continuing to monitor for any further issues.
-
monitoring Apr 08, 2026, 08:32 PM UTC
We are still seeing intermittent issues and continue to seek a resolution.
-
monitoring Apr 14, 2026, 08:11 PM UTC
We have deployed mitigation and seen improvement in write failures over the past week. We are still seeing intermittent spikes in latency and continue to monitor.
-
monitoring Apr 20, 2026, 03:08 PM UTC
We are continuing to monitor for any further issues.
-
resolved Apr 23, 2026, 08:07 PM UTC
This incident has been resolved. Thank you for your patience.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 02:00 PM UTC
- Resolved
- Mar 24, 2026, 02:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 25, 2026, 10:30 AM UTC
An issue affecting Grafana Cloud instances was diagnosed yesterday, the 24th of March, that prevented dashboards from loading correctly. The incident impacted the following clusters:
- GCP US Central (us-central-0) between 13:15 and 13:50 UTC
- AWS US East (us-east-0) between 13:43 and 13:55 UTC
- AWS US West (us-west-2) between 14:05 and 14:06 UTC
The issue has been identified and corrective measures have been applied.
Read the full incident report →
- Detected by Pingoru
- Mar 24, 2026, 09:08 AM UTC
- Resolved
- Mar 25, 2026, 12:52 PM UTC
- Duration
- 1d 3h
Affected: Azure Netherlands - prod-eu-west-3, Azure Netherlands - prod-eu-west-3: API, Azure Netherlands - prod-eu-west-3: Ingestion, Azure Netherlands - prod-eu-west-3: Public Probes
Timeline · 6 updates
-
investigating Mar 24, 2026, 09:08 AM UTC
We have been experiencing degraded writes for mimir-prod-22 in prod-eu-west-3 since 08:45 UTC.
-
monitoring Mar 24, 2026, 09:19 AM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Mar 24, 2026, 09:23 PM UTC
We have not observed any recent errors, but we will continue to monitor while we work with our CSP.
-
investigating Mar 25, 2026, 07:04 AM UTC
We are moving this back to 'Investigating' as we are now observing a substantial drop in successful ingestion, an increase in write path errors, and elevated rule evaluation latency and errors. Reads are mostly unaffected. Our Engineering team is actively investigating this and we will provide further updates as our investigation progresses.
-
investigating Mar 25, 2026, 07:43 AM UTC
This is also now impacting Logs and Synthetic Monitoring in prod-eu-west-3. For Synthetic Monitoring, users might observe errors pushing check execution metrics, which can eventually lead to missing data. Users might also observe errors in Synthetic Monitoring provisioned alert rule evaluations, which can lead to missed alerts. For Logs, there is no immediate impact on alerts; however, remote writes to Mimir are delayed, which means users may see gaps in their recording rules.
-
resolved Mar 25, 2026, 12:52 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 23, 2026, 05:03 PM UTC
- Resolved
- Mar 23, 2026, 06:48 PM UTC
- Duration
- 1h 44m
Affected: AWS US East - prod-us-east-0, AWS US East - prod-us-east-2, GCP US East - prod-us-east-1
Timeline · 5 updates
-
investigating Mar 23, 2026, 05:03 PM UTC
We are aware of an issue currently impacting Grafana Assistant. Impacted users are met with a request to accept the TOS; however, the plugin fails upon acceptance. Our engineering team is currently investigating this issue.
-
investigating Mar 23, 2026, 06:01 PM UTC
We are continuing to investigate this issue.
-
investigating Mar 23, 2026, 06:07 PM UTC
The impact extends beyond the TOS check. Assistant is completely unavailable in the impacted region.
-
identified Mar 23, 2026, 06:25 PM UTC
The issue has been identified, and we are implementing a fix.
-
resolved Mar 23, 2026, 06:48 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 20, 2026, 03:00 PM UTC
- Resolved
- Mar 20, 2026, 03:41 PM UTC
- Duration
- 40m
Affected: AWS Germany - prod-eu-west-2, AWS Germany - prod-eu-west-4
Timeline · 3 updates
-
investigating Mar 20, 2026, 03:00 PM UTC
We are currently investigating an issue impacting the main database for Authentication APIs in the prod-eu-west-2 region. Writes are currently failing, but reads are operational.
-
investigating Mar 20, 2026, 03:08 PM UTC
We have observed impact in prod-eu-west-4 as well.
-
resolved Mar 20, 2026, 03:41 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 19, 2026, 04:46 PM UTC
- Resolved
- Mar 19, 2026, 06:44 PM UTC
- Duration
- 1h 57m
Affected: Grafana Cloud: Integrations
Timeline · 5 updates
-
investigating Mar 19, 2026, 04:46 PM UTC
We are currently investigating an issue impacting the CloudWatch Datasource causing failures.
-
monitoring Mar 19, 2026, 05:13 PM UTC
We have identified the issue, and are rolling out the fix. We are already seeing improvements and will continue to monitor progress.
-
monitoring Mar 19, 2026, 05:56 PM UTC
We have observed recovery for the Cloudwatch Datasource. We are now seeing failures for the following Datasources: Aurora, Opensearch, X-Ray, Timestream, Redshift, Sitewise. A fix for the above is being rolled out now, and we will monitor progress. We will also change the name of this incident from "Cloudwatch Datasource Issues" to "Various Datasource Issues" to more accurately reflect impact.
-
monitoring Mar 19, 2026, 05:56 PM UTC
We are continuing to monitor for any further issues.
-
resolved Mar 19, 2026, 06:44 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 19, 2026, 11:17 AM UTC
- Resolved
- Mar 19, 2026, 06:11 PM UTC
- Duration
- 6h 54m
Affected: Cloud Test Runs
Timeline · 2 updates
-
investigating Mar 19, 2026, 11:17 AM UTC
Some customers are seeing degraded performance and errors from certain v6 API endpoints. We are investigating the issue.
-
resolved Mar 19, 2026, 06:11 PM UTC
Our engineering team has deployed a fix and we have observed a continued period of recovery. At this time, we are considering this issue resolved. No further updates.
Read the full incident report →
- Detected by Pingoru
- Mar 13, 2026, 10:28 AM UTC
- Resolved
- Mar 18, 2026, 07:13 AM UTC
- Duration
- 4d 20h
Affected: Azure Netherlands - prod-eu-west-3
Timeline · 3 updates
-
investigating Mar 13, 2026, 10:28 AM UTC
We are seeing issues on the write path for Loki in the Azure Netherlands cluster (eu-west-3). The impact is degraded logs ingestion on that cluster. Our engineering team is already working on restoring the service.
-
investigating Mar 13, 2026, 09:22 PM UTC
We are continuing to investigate this issue with our CSP, and will provide updates as they become available.
-
resolved Mar 18, 2026, 07:13 AM UTC
We have observed stability for a period of time and are marking this incident as resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 13, 2026, 07:41 AM UTC
- Resolved
- Mar 13, 2026, 06:11 PM UTC
- Duration
- 10h 30m
Affected: Cloud Test Runs
Timeline · 4 updates
-
investigating Mar 13, 2026, 07:41 AM UTC
We are seeing an increased number of Aborted-by-System runs with a k6 binary building error. We are investigating the issue. The first occurrence happened back on March 9 and has now been identified as a blocking issue for some customers.
-
identified Mar 13, 2026, 08:45 AM UTC
The issue has been identified and a fix is being implemented.
-
monitoring Mar 13, 2026, 12:49 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 13, 2026, 06:11 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 11, 2026, 05:10 PM UTC
- Resolved
- Mar 13, 2026, 06:15 PM UTC
- Duration
- 2d 1h
Affected: AWS US West - prod-us-west-0: Rule Evaluation
Timeline · 3 updates
-
investigating Mar 11, 2026, 05:10 PM UTC
We are currently investigating an issue impacting rule evaluation for a subset of customers in the prod-us-west-0 region. We will provide updates as they become available.
-
monitoring Mar 11, 2026, 06:02 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 13, 2026, 06:15 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 10, 2026, 06:06 PM UTC
- Resolved
- Mar 10, 2026, 07:17 PM UTC
- Duration
- 1h 10m
Affected: AWS Australia - prod-ap-southeast-2, AWS Brazil - prod-sa-east-1, AWS Canada - prod-ca-east-0, AWS Germany - prod-eu-west-2, AWS Germany - prod-eu-west-4, AWS India - prod-ap-south-1, AWS Japan - prod-ap-northeast-0, AWS UAE - prod-me-central-1, AWS Singapore - prod-ap-southeast-1, AWS Sweden - prod-eu-north-0, AWS US East - prod-us-east-0, AWS US East - prod-us-east-2, AWS US West - prod-us-west-0, AWS Australia - prod-au-southeast-1, AWS UK - prod-gb-south-1, AWS Ireland - prod-eu-west-6, Azure US Central - us-central2, Azure Netherlands - prod-eu-west-3, GCP Australia - prod-au-southeast-0, GCP Belgium - prod-eu-west-0, GCP Brazil - prod-sa-east-0, GCP India - prod-ap-south-0, GCP Singapore - prod-ap-southeast-0, GCP UK - prod-gb-south-0, GCP US Central - prod-us-central-0, GCP US Central - prod-us-central-3, GCP US Central - prod-us-central-4, GCP US East - prod-us-east-1, play.grafana.org, Federal Cloud - AWS US Gov West
Timeline · 2 updates
-
investigating Mar 10, 2026, 06:06 PM UTC
We are noticing issues with various HG pages. Our engineering team is actively looking into it.
-
resolved Mar 10, 2026, 07:17 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 10, 2026, 06:00 PM UTC
- Resolved
- Mar 11, 2026, 09:48 PM UTC
- Duration
- 1d 3h
Affected: Azure Netherlands - prod-eu-west-3: Ingestion
Timeline · 5 updates
-
investigating Mar 10, 2026, 06:00 PM UTC
We are currently investigating an issue impacting a subset of users in the prod-eu-west-3 region. Impacted users are experiencing elevated transient write failures, with no degradation to the read path.
-
monitoring Mar 10, 2026, 06:42 PM UTC
A fix has been implemented, and we are monitoring.
-
identified Mar 11, 2026, 01:35 AM UTC
There are ongoing intermittent elevated transient write failures. We will continue to provide additional updates as more information becomes available.
-
monitoring Mar 11, 2026, 03:51 PM UTC
Things have been stable, and we have a potential mitigation should this issue arise again. We are monitoring the issue in the meantime.
-
resolved Mar 11, 2026, 09:48 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 10, 2026, 03:26 PM UTC
- Resolved
- Mar 10, 2026, 08:39 PM UTC
- Duration
- 5h 12m
Affected: AWS US West - prod-us-west-0
Timeline · 2 updates
-
identified Mar 10, 2026, 03:26 PM UTC
There has been a recurrence of the issues on the read path of Loki services on AWS US West since yesterday, the 9th, at ~17:15 UTC. The issue has been identified, and resolution steps have been taken to restore full service. We are currently monitoring the service status. The impact includes timeouts and 5xx errors when querying logs for customers on this cluster.
-
resolved Mar 10, 2026, 08:39 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 09, 2026, 06:03 PM UTC
- Resolved
- Mar 10, 2026, 09:17 PM UTC
- Duration
- 1d 3h
Affected: GCP US Central - prod-us-central-0
Timeline · 2 updates
-
monitoring Mar 09, 2026, 06:03 PM UTC
From 15:30 to 15:45 UTC and from 16:53 to 17:03 UTC, the prod-us-central-0 and prod-us-central-5 regions saw elevated latency and error rates on the write path. We're monitoring now.
-
resolved Mar 10, 2026, 09:17 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 09, 2026, 02:20 PM UTC
- Resolved
- Mar 10, 2026, 08:54 PM UTC
- Duration
- 1d 6h
Affected: Grafana Cloud: Fleet Management
Timeline · 3 updates
-
investigating Mar 09, 2026, 02:20 PM UTC
Some users in prod-us-central-0 may be seeing an elevated rate of errors when fetching configurations. Our engineers are currently investigating this issue.
-
investigating Mar 10, 2026, 06:11 PM UTC
Our engineering team continues to work towards a resolution for this issue.
-
resolved Mar 10, 2026, 08:54 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 07, 2026, 08:07 PM UTC
- Resolved
- Mar 09, 2026, 08:59 AM UTC
- Duration
- 1d 12h
Timeline · 5 updates
-
investigating Mar 07, 2026, 08:07 PM UTC
We are seeing elevated error rates and outages across many of our services in prod-eu-central-0, due to an ongoing AWS S3 outage in that region.
-
investigating Mar 07, 2026, 08:10 PM UTC
We are continuing to investigate this issue.
-
investigating Mar 07, 2026, 08:10 PM UTC
Since about 20:03 UTC we have seen AWS S3 recover, and our services are also recovering; we are monitoring.
-
monitoring Mar 08, 2026, 11:30 AM UTC
Since about 20:03 UTC we have seen AWS S3 recover, and our services are also recovering; we are monitoring.
-
resolved Mar 09, 2026, 08:59 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 05, 2026, 10:27 PM UTC
- Resolved
- Mar 05, 2026, 11:36 PM UTC
- Duration
- 1h 8m
Affected: GCP Belgium - prod-eu-west-0: Querying, GCP Belgium - prod-eu-west-0: Ingestion
Timeline · 3 updates
-
investigating Mar 05, 2026, 10:27 PM UTC
A recent incident affecting the data read path and rule execution within prod-eu-west-0 began at ~21:05 UTC on March 5, 2026. Customers with instances in this region may experience write failures and delays in rule evaluation. Engineering is actively engaged and assessing the issue. We will provide updates accordingly.
-
monitoring Mar 05, 2026, 10:41 PM UTC
Engineering has released a fix and as of 22:00 UTC, customers should no longer experience write failures and delays in rule evaluation. We will continue to monitor for recurrence and provide updates accordingly.
-
resolved Mar 05, 2026, 11:36 PM UTC
We continue to observe a period of recovery. At this time, we are considering this issue resolved. No further updates.
Read the full incident report →
- Detected by Pingoru
- Mar 04, 2026, 07:47 AM UTC
- Resolved
- Mar 04, 2026, 09:29 AM UTC
- Duration
- 1h 41m
Affected: Grafana Cloud: Fleet Management
Timeline · 3 updates
-
investigating Mar 04, 2026, 07:47 AM UTC
We are currently experiencing an issue with Fleet Management in prod-us-central-0. Users in prod-us-central-0 may observe an elevated rate of errors when fetching configurations.
-
monitoring Mar 04, 2026, 08:46 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 04, 2026, 09:29 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 03, 2026, 06:35 PM UTC
- Resolved
- Mar 03, 2026, 01:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 03, 2026, 06:35 PM UTC
Test run browser screenshot uploads experienced failures from 13:12 to 14:51 UTC. The issue has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 02, 2026, 07:37 AM UTC
- Resolved
- Mar 02, 2026, 03:48 PM UTC
- Duration
- 8h 11m
Affected: Azure Netherlands - prod-eu-west-3
Timeline · 3 updates
-
investigating Mar 02, 2026, 07:37 AM UTC
We are experiencing increased write latency for logs in prod-eu-west-3. Our Engineering team is aware and currently investigating this. We will provide further updates accordingly.
-
investigating Mar 02, 2026, 08:08 AM UTC
We are now experiencing a write outage for logs in prod-eu-west-3. Our Engineering team is aware and currently investigating this. We will provide further updates accordingly.
-
resolved Mar 02, 2026, 03:48 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Feb 27, 2026, 04:25 PM UTC
- Resolved
- Feb 27, 2026, 04:25 PM UTC
- Duration
- —
Read the full incident report →