- Detected by Pingoru
- May 01, 2026, 05:26 AM UTC
- Resolved
- May 01, 2026, 08:36 AM UTC
- Duration
- 3h 10m
Affected: fr-par-1, fr-par-2, fr-par-3, nl-ams-1, nl-ams-2, nl-ams-3, pl-waw-1, pl-waw-2, pl-waw-3, Instances
Timeline · 4 updates
-
investigating May 01, 2026, 05:26 AM UTC
Due to an internal issue, it is currently not possible to create new instances across all regions from the console. However, you can create them using the Scaleway CLI available here: https://cli.scaleway.com/instance/
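As a sketch of the CLI workaround, assuming `scw` is installed and configured via `scw init` (the type, image, zone, and name values below are illustrative, not prescribed by this incident):

```shell
# Create an instance from the CLI instead of the console.
# DEV1-S / ubuntu_jammy / fr-par-1 are illustrative values.
scw instance server create \
  type=DEV1-S \
  image=ubuntu_jammy \
  zone=fr-par-1 \
  name=console-workaround
```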
-
identified May 01, 2026, 08:21 AM UTC
The issue has been identified and a fix is being implemented.
-
monitoring May 01, 2026, 08:22 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved May 01, 2026, 08:36 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 30, 2026, 09:55 AM UTC
- Resolved
- Apr 30, 2026, 09:55 AM UTC
- Duration
- —
Affected: Elastic Metal
Timeline · 1 update
-
resolved Apr 30, 2026, 09:55 AM UTC
At 10h43 CET, customers may have experienced network interruptions on Elastic Metal servers located in rack 430B/110 in nl-ams-2.
Read the full incident report →
- Detected by Pingoru
- Apr 29, 2026, 09:11 AM UTC
- Resolved
- Apr 30, 2026, 08:58 AM UTC
- Duration
- 23h 46m
Timeline · 2 updates
-
investigating Apr 29, 2026, 09:11 AM UTC
Emails sent to ProtonMail (including emails sent to a custom domain associated with ProtonMail) are being processed very slowly.
-
resolved Apr 30, 2026, 08:58 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 12:06 PM UTC
- Resolved
- Apr 28, 2026, 01:50 PM UTC
- Duration
- 1h 43m
Affected: fr-par-1, fr-par-2, fr-par-3, nl-ams-1, nl-ams-2, nl-ams-3, pl-waw-1, pl-waw-2, pl-waw-3, Serverless Functions
Timeline · 4 updates
-
investigating Apr 28, 2026, 12:06 PM UTC
Customers may experience DNS resolution failures for their Serverless Functions in the nl-ams and pl-waw regions, due to an error that occurred during the scheduled maintenance: https://status.scaleway.com/incidents/qhh06fhhn47r. Client-side errors may look like: "lookup [...] on 10.32.15.25:53: read udp 100.96.xx.xx:xx->10.32.15.25:53: i/o timeout". We have identified the root cause. We apologize for any inconvenience.
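The "i/o timeout" above surfaces as a name-resolution failure on the client side. As a minimal, hedged sketch (not Scaleway-specific; `getent` is assumed available, as on most Linux hosts), a workload can self-check DNS resolution before making outbound calls:

```shell
# Return success if the given hostname resolves, failure otherwise.
# Uses getent, which queries the system resolver like applications do.
dns_ok() {
  getent hosts "$1" > /dev/null 2>&1
}

# "localhost" is an illustrative hostname; substitute a real dependency.
if dns_ok localhost; then
  echo "DNS OK"
else
  echo "DNS FAILED"
fi
```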
-
investigating Apr 28, 2026, 12:11 PM UTC
We are continuing to investigate this issue.
-
investigating Apr 28, 2026, 12:16 PM UTC
After investigation, this also affects functions created on new namespaces on fr-par.
-
resolved Apr 28, 2026, 01:50 PM UTC
The root cause has been fixed: all functions now have working DNS again. The incident is now closed. Impacted functions were:
- on fr-par, only functions not attached to a private network that belong to a namespace created after 2026-04-28 8:00 AM UTC, until 1:05 PM UTC
- on nl-ams, all functions not attached to a private network, between 8:41 AM UTC and 1:46 PM UTC
- on pl-waw, all functions not attached to a private network, between 9:30 AM UTC and 1:05 PM UTC
The incident on nl-ams and pl-waw is related to the scheduled maintenance https://status.scaleway.com/incidents/zrhfpqjvjyrl. A mismatch between DNS service IPs caused the issue. We deeply apologize for the inconvenience.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 09:18 AM UTC
- Resolved
- Apr 27, 2026, 03:15 PM UTC
- Duration
- 5h 57m
Affected: APIs
Timeline · 2 updates
-
investigating Apr 27, 2026, 09:18 AM UTC
Customers may experience timeouts when performing IPMI calls, including stop, start, and reboot actions, on BareMetal servers.
-
resolved Apr 27, 2026, 03:15 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 08:19 AM UTC
- Resolved
- Apr 27, 2026, 12:19 PM UTC
- Duration
- 3h 59m
Affected: DC5
Timeline · 2 updates
-
investigating Apr 27, 2026, 08:19 AM UTC
We have detected a switch down in room 1, rack c57, in DC5. Servers in that rack currently have no private network access and are unreachable. 27.04.2026, 10h00 UTC: The issue has been forwarded to our team for resolution.
-
resolved Apr 27, 2026, 12:19 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 07:24 AM UTC
- Resolved
- Apr 27, 2026, 07:49 AM UTC
- Duration
- 25m
Affected: DC5
Timeline · 2 updates
-
investigating Apr 27, 2026, 07:24 AM UTC
Since 8:50am CET, customers may experience network connectivity issues on servers attached to the rpn switch s1-c57-3.rpn.dc5 in DC5. We are currently investigating this issue.
-
resolved Apr 27, 2026, 07:49 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 01:04 PM UTC
- Resolved
- Apr 23, 2026, 02:07 PM UTC
- Duration
- 1h 3m
Affected: Databases, Jobs, Serverless Container, Serverless Functions, Kafka
Timeline · 3 updates
-
monitoring Apr 23, 2026, 01:04 PM UTC
Due to a database issue, some requests to the registry API timed out.
-
monitoring Apr 23, 2026, 01:15 PM UTC
We are continuing to monitor for any further issues.
-
resolved Apr 23, 2026, 02:07 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 08:32 AM UTC
- Resolved
- Apr 23, 2026, 08:33 AM UTC
- Duration
- 46s
Affected: Dedibox, DC3
Timeline · 2 updates
-
investigating Apr 23, 2026, 08:32 AM UTC
We have detected a switch down in DC3; 4 4-6; F2. Servers in that rack currently have no public network access and are unreachable. 23/04/2026, 10h20 UTC: The issue has been forwarded to our team for resolution.
-
resolved Apr 23, 2026, 08:33 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 22, 2026, 11:51 AM UTC
- Resolved
- Apr 30, 2026, 12:57 PM UTC
- Duration
- 8d 1h
Affected: Container Registry, Jobs, Object Storage, Serverless Container, Serverless Functions
Timeline · 4 updates
-
investigating Apr 22, 2026, 11:51 AM UTC
Customers may experience timeouts when uploading objects to Object Storage in the fr-par region due to an issue with an internal component.
-
investigating Apr 22, 2026, 12:39 PM UTC
We are continuing to investigate this issue.
-
monitoring Apr 22, 2026, 03:41 PM UTC
We have implemented corrective measures. Performance should return to normal. We have planned further actions to resolve the issue long-term.
-
resolved Apr 30, 2026, 12:57 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 22, 2026, 08:12 AM UTC
- Resolved
- Apr 22, 2026, 08:12 AM UTC
- Duration
- —
Affected: Web Hosting
Timeline · 1 update
-
resolved Apr 22, 2026, 08:12 AM UTC
Email reception on the pf-012.whm.fr-par.scw.cloud platform was degraded. Emails remained in the queue and were not delivered due to a malfunction. The problem has been fixed and the pending emails have been successfully delivered.
Read the full incident report →
- Detected by Pingoru
- Apr 21, 2026, 07:30 PM UTC
- Resolved
- Apr 21, 2026, 10:41 PM UTC
- Duration
- 3h 11m
Affected: Dedibox, DC2
Timeline · 2 updates
-
investigating Apr 21, 2026, 07:30 PM UTC
We have detected a switch down in DC2; room 205; H8. Servers in that rack currently have no public network access and are unreachable. 21.04.2026, 21:30 UTC: The issue has been forwarded to our team for resolution.
-
resolved Apr 21, 2026, 10:41 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 21, 2026, 01:07 PM UTC
- Resolved
- Apr 21, 2026, 01:07 PM UTC
- Duration
- —
Affected: Observability
Timeline · 1 update
-
resolved Apr 21, 2026, 01:07 PM UTC
Customers may have experienced ingestion errors on product metrics due to a load increase on an internal component.
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 02:18 PM UTC
- Resolved
- Apr 20, 2026, 02:57 PM UTC
- Duration
- 38m
Affected: Kubernetes Kapsule
Timeline · 3 updates
-
identified Apr 20, 2026, 02:18 PM UTC
Due to an internal error, we're unable to provision new nodes.
-
monitoring Apr 20, 2026, 02:56 PM UTC
We have found a problem in our CI pipeline that builds the image for Kapsule nodes. We have fixed the issue and are now monitoring to ensure everything is working fine.
-
resolved Apr 20, 2026, 02:57 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 02:08 PM UTC
- Resolved
- Apr 20, 2026, 04:37 PM UTC
- Duration
- 2h 28m
Timeline · 2 updates
-
identified Apr 20, 2026, 02:08 PM UTC
We are currently experiencing some issues with our phone system. We will post an update here as soon as we have additional information.
-
resolved Apr 20, 2026, 04:37 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 02:59 PM UTC
- Resolved
- Apr 17, 2026, 07:05 AM UTC
- Duration
- 16h 6m
Affected: Observability
Timeline · 2 updates
-
investigating Apr 16, 2026, 02:59 PM UTC
Customers may experience elevated 5xx errors on logs ingestion in fr-par.
-
resolved Apr 17, 2026, 07:05 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 02:22 PM UTC
- Resolved
- Apr 16, 2026, 02:24 PM UTC
- Duration
- 1m
Affected: APIs
Timeline · 2 updates
-
investigating Apr 16, 2026, 02:22 PM UTC
Customers may have experienced connection issues with Managed OpenSearch due to an expired certificate. We apologize for the inconvenience. Start time: 1:30 PM.
-
resolved Apr 16, 2026, 02:24 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 09:55 AM UTC
- Resolved
- Apr 16, 2026, 10:19 AM UTC
- Duration
- 23m
Affected: Instances
Timeline · 2 updates
-
investigating Apr 16, 2026, 09:55 AM UTC
Due to an ongoing issue with an internal API service, Cloud-init may fail to load the Scaleway datasource, which means Instances may end up in an unconfigured state. We have identified the culprit and are actively working on a resolution.
-
resolved Apr 16, 2026, 10:19 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 15, 2026, 09:44 AM UTC
- Resolved
- Apr 15, 2026, 09:44 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 15, 2026, 09:44 AM UTC
[10:00] Clients facing issues: an issue with the update of Imunify on cPanel broke the Cloudflare proxy feature and prevented access to the websites using this feature. The issue is directly caused by: https://imunify360.statuspage.io/incidents/810xhk4nbqnr
[10:50] Issue acknowledged: an automatic update of Imunify caused the issue.
[11:15] Issue fixed: we forced the update to the fixed version of Imunify. All websites are now available.
Read the full incident report →
- Detected by Pingoru
- Apr 15, 2026, 09:36 AM UTC
- Resolved
- Apr 15, 2026, 09:36 AM UTC
- Duration
- —
Affected: Public Gateway
Timeline · 1 update
-
resolved Apr 15, 2026, 09:36 AM UTC
The Public Gateway API experienced a temporary issue between 7:17 UTC and 8:25 UTC.
Read the full incident report →
- Detected by Pingoru
- Apr 14, 2026, 09:23 AM UTC
- Resolved
- Apr 14, 2026, 11:30 AM UTC
- Duration
- 2h 6m
Affected: Network
Timeline · 2 updates
-
investigating Apr 14, 2026, 09:23 AM UTC
Customers may experience network connectivity issues on Room 108 Rack A7 in DC2 due to a configuration error on an internal component.
-
resolved Apr 14, 2026, 11:30 AM UTC
The issue has been resolved and all services are now operating normally.
Read the full incident report →
- Detected by Pingoru
- Apr 13, 2026, 07:25 AM UTC
- Resolved
- Apr 13, 2026, 07:25 AM UTC
- Duration
- —
Affected: BookMyName
Timeline · 1 update
-
resolved Apr 13, 2026, 07:25 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 11, 2026, 02:51 PM UTC
- Resolved
- Apr 13, 2026, 02:06 PM UTC
- Duration
- 1d 23h
Affected: Databases
Timeline · 4 updates
-
investigating Apr 11, 2026, 02:51 PM UTC
We are currently experiencing increased latencies on one of our clusters in the PAR-1 region since 13:20 UTC. This situation may also impact other services relying on this infrastructure: MongoDB, PostgreSQL, MySQL, Serverless, Data Warehouse, Cockpit, and ServerlessDB. Our teams are actively working on it. The situation has slightly improved since the beginning of the mitigation actions, and additional measures will be implemented shortly to further reduce the impact.
-
investigating Apr 11, 2026, 04:22 PM UTC
The Block Storage issue is now corrected. We are still working on managing possible impacts on other products.
-
monitoring Apr 11, 2026, 08:50 PM UTC
All side effects have been managed. We continue to monitor the situation.
-
resolved Apr 13, 2026, 02:06 PM UTC
No disruption for the past 48 hours. The cluster has returned to a fully healthy state. No data loss was detected on the affected cluster.
Read the full incident report →
- Detected by Pingoru
- Apr 09, 2026, 02:50 PM UTC
- Resolved
- Apr 09, 2026, 03:50 PM UTC
- Duration
- 1h
Affected: Serverless Database
Timeline · 2 updates
-
investigating Apr 09, 2026, 02:50 PM UTC
Customers may experience access issues with newly generated S3 API keys due to insufficient bucket permissions. We are currently investigating the root cause.
-
resolved Apr 09, 2026, 03:50 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 11:34 AM UTC
- Resolved
- Apr 08, 2026, 03:21 PM UTC
- Duration
- 3h 47m
Affected: Instances
Timeline · 3 updates
-
identified Apr 08, 2026, 11:34 AM UTC
Customers may experience issues with Instance Orchestration APIs due to an internal component failure during routine deployments. We are working on restoring the APIs as soon as possible.
-
monitoring Apr 08, 2026, 11:50 AM UTC
The issue has been identified and resolved, and we are monitoring the situation to ensure stability.
-
resolved Apr 08, 2026, 03:21 PM UTC
The issue has been identified and resolved, and we are monitoring the situation to ensure stability.
Read the full incident report →