Elastic.io Outage History

Elastic.io major outage · View live status →

There have been 18 Elastic.io outages since February 6, 2026, totaling 113h 13m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.elastic.co

Major April 30, 2026

Delayed AutoOps Metrics in AWS us-east-1

Detected by Pingoru
Apr 30, 2026, 09:46 PM UTC
Resolved
Apr 30, 2026, 10:36 PM UTC
Duration
50m
Affected: AutoOps
Timeline · 3 updates
  1. identified Apr 30, 2026, 09:46 PM UTC

    We have identified and are working to mitigate an issue which is causing delayed AutoOps metrics for customers in AWS us-east-1.

  2. monitoring Apr 30, 2026, 10:18 PM UTC

    This issue has now been resolved and we are continuing to monitor the system.

  3. resolved Apr 30, 2026, 10:36 PM UTC

    This issue has now been resolved.

Read the full incident report →

Major April 28, 2026

Missing Metrics in Cloud Console — US East (us-east-1)

Detected by Pingoru
Apr 28, 2026, 02:46 PM UTC
Resolved
Apr 28, 2026, 06:40 PM UTC
Duration
3h 53m
Timeline · 5 updates
  1. investigating Apr 28, 2026, 02:46 PM UTC

    Some users are seeing missing metrics visualizations in the Cloud console due to a disruption in metric data ingestion affecting deployments in the us-east-1 (US East) region. Some recent metric data may not appear in charts or monitoring views. We are actively investigating and will provide another update within 30 minutes.

  2. investigating Apr 28, 2026, 03:57 PM UTC

    Our investigation is ongoing. Some early signs of stabilisation have been observed, but metrics visualizations in the Cloud console remain impacted. We will provide a further update within 30 minutes.

  3. investigating Apr 28, 2026, 04:32 PM UTC

    Our investigation is ongoing. We continue to see early signs of stabilization, but metrics visualizations in the Cloud console remain impacted. We will provide a further update in 1 hour, or when there is a change in status.

  4. identified Apr 28, 2026, 05:39 PM UTC

    We are continuing to see signs of stabilization and are monitoring as the system recovers, but metrics visualizations in the Cloud console remain impacted until recovery is complete. We will provide a further update in 1 hour, or when there is a change in status.

  5. resolved Apr 28, 2026, 06:40 PM UTC

    The system has recovered and functionality has returned to normal.

Read the full incident report →

Major April 27, 2026

Elevated Error Rates and Latency — EU West Region

Detected by Pingoru
Apr 27, 2026, 11:49 AM UTC
Resolved
Apr 27, 2026, 12:40 PM UTC
Duration
51m
Affected: Elasticsearch connectivity: AWS eu-west-3 · Kibana connectivity: AWS eu-west-3 · APM connectivity: AWS eu-west-3
Timeline · 4 updates
  1. identified Apr 27, 2026, 11:49 AM UTC

    We are currently investigating connectivity issues affecting a subset of customers in our AWS EU West 3 (Paris) region. Some requests may be experiencing elevated latency or failures. Our team is actively investigating, and we are working with our cloud infrastructure provider on the underlying issue. Other regions are not affected at this time. We will provide updates as the situation develops.

  2. identified Apr 27, 2026, 11:57 AM UTC

    We are continuing to work on a fix for this issue.

  3. monitoring Apr 27, 2026, 12:08 PM UTC

    Our monitoring system reports that AWS EU West 3 (Paris) region ingress layer traffic has recovered to normal levels as of 11:36 UTC and our SLO alerts have resolved. We are continuing to monitor and will update once we are confident the issue is fully resolved.

  4. resolved Apr 27, 2026, 12:40 PM UTC

    The connectivity issues affecting a subset of customers in our AWS EU West 3 (Paris) region have been resolved. Failed probes recovered at 11:36 UTC following remediation of an issue with our underlying cloud infrastructure provider. We are no longer observing elevated error rates or latency in the region. We apologise for any inconvenience caused and will conduct a post-incident review.

Read the full incident report →

Major April 20, 2026

EIS elevated 5xx error rates for model google-gemini-embedding-001

Detected by Pingoru
Apr 20, 2026, 06:37 AM UTC
Resolved
Apr 20, 2026, 10:41 AM UTC
Duration
4h 4m
Affected: Elastic Inference Service
Timeline · 3 updates
  1. investigating Apr 20, 2026, 06:37 AM UTC

    We're seeing elevated 5xx error rates for the Gemini Embedding v1 model in EIS. The following default inference endpoint is affected: `.google-gemini-embedding-001`. We're investigating and will update again in 2 hours or if there's a change in status.

  2. investigating Apr 20, 2026, 09:50 AM UTC

    Errors are coming from the provider API (Google). The incident has been escalated to them, and we are still waiting for a response.

  3. resolved Apr 20, 2026, 10:41 AM UTC

    An incident with an upstream service provider has been resolved, and access to the model is now restored.

Read the full incident report →

Major April 9, 2026

Privatelink hostnames reported by API are incorrect

Detected by Pingoru
Apr 09, 2026, 08:46 PM UTC
Resolved
Apr 10, 2026, 07:54 PM UTC
Duration
23h 7m
Timeline · 4 updates
  1. identified Apr 09, 2026, 08:46 PM UTC

    A recent change to our Privatelink implementation resulted in the URLs reported by the deployment API being incorrect in some cases, causing connectivity issues for customers who relied on those URLs. We have identified the issue and are working on a fix.

  2. identified Apr 09, 2026, 10:44 PM UTC

    We have merged a fix and are working on deploying it to production.

  3. identified Apr 10, 2026, 12:18 PM UTC

    We are still working on moving the fix to production as we experienced some testing issues.

  4. resolved Apr 10, 2026, 07:54 PM UTC

    We have successfully deployed a fix for the PrivateLink hostname issue to the User Console. Customers who experienced incorrect PrivateLink URLs or connectivity issues with their deployments should now see correct hostnames. We apologize for the inconvenience.

Read the full incident report →

Major April 9, 2026

Connection issue to Kibana via the Cloud UI SSO

Detected by Pingoru
Apr 09, 2026, 03:37 PM UTC
Resolved
Apr 10, 2026, 11:51 AM UTC
Duration
20h 14m
Timeline · 5 updates
  1. investigating Apr 09, 2026, 03:37 PM UTC

    We are aware of an issue when connecting to Kibana through the Elastic Cloud UI via SAML SSO, impacting hosted Cloud deployments. Our team is actively investigating. We will post an update to this status within the next hour.

  2. investigating Apr 09, 2026, 04:21 PM UTC

    We have identified that only PrivateLink customers are impacted and no other Hosted Cloud Deployments should see any issues.

  3. investigating Apr 09, 2026, 04:41 PM UTC

    We have identified the issue and are working on a fix. We will update again in 3 hours or earlier.

  4. investigating Apr 09, 2026, 07:50 PM UTC

    We have rolled out the fix, and there should be no more customer impact.

  5. resolved Apr 10, 2026, 11:51 AM UTC

    We have confirmed no further impact from this incident, and are now marking it as resolved.

Read the full incident report →

Major April 6, 2026

Synthetics service may not run on schedule (us-east-4)

Detected by Pingoru
Apr 06, 2026, 05:58 PM UTC
Resolved
Apr 06, 2026, 08:43 PM UTC
Duration
2h 45m
Timeline · 3 updates
  1. investigating Apr 06, 2026, 05:58 PM UTC

    We are investigating an issue in our Synthetics service on us-east-4. Some customer monitor jobs may not run on their expected schedule. We will provide an update in an hour or earlier.

  2. identified Apr 06, 2026, 07:16 PM UTC

    We have identified the problem and are working on a solution. We should have an update within the next 2-3 hours.

  3. resolved Apr 06, 2026, 08:43 PM UTC

    The issue has been fixed, and there should be no more customer impact.

Read the full incident report →

Major March 30, 2026

Elevated error rates for Claude Sonnet 4.5 EIS inference endpoints

Detected by Pingoru
Mar 30, 2026, 06:46 PM UTC
Resolved
Mar 30, 2026, 07:57 PM UTC
Duration
1h 10m
Affected: Elastic Inference Service
Timeline · 2 updates
  1. investigating Mar 30, 2026, 06:46 PM UTC

    We're seeing elevated 5xx error rates for the Claude Sonnet 4.5 model in EIS. The following default inference endpoints are affected: `.anthropic-claude-4.5-sonnet-chat_completion`, `.anthropic-claude-4.5-sonnet-completion`, `.gp-llm-v2-chat_completion`, and `.gp-llm-v2-completion`. We're investigating and will update again in 2 hours or if there's a change in status.

  2. resolved Mar 30, 2026, 07:57 PM UTC

    Endpoints are operating normally.

Read the full incident report →

Major March 27, 2026

AutoOps deployments marked as Inactive in AWS us-east-1 region

Detected by Pingoru
Mar 27, 2026, 04:33 AM UTC
Resolved
Mar 27, 2026, 07:23 AM UTC
Duration
2h 49m
Affected: AutoOps
Timeline · 4 updates
  1. investigating Mar 27, 2026, 04:33 AM UTC

    We are currently investigating an outage of AutoOps in the AWS us-east-1 region. Customer deployments in the region may be marked as inactive, and recent metrics may not be available. We will provide an update when one is available or within the hour, whichever comes first.

  2. monitoring Mar 27, 2026, 05:37 AM UTC

    We have identified the issue and applied mitigations to bring AutoOps in the AWS us-east-1 region back to functionality. Customer deployments may still be marked as inactive, and recent metrics may still not be available. We expect AutoOps to return to full functionality within the next hour. We will provide an update when one is available or within the hour, whichever comes first.

  3. monitoring Mar 27, 2026, 07:13 AM UTC

    We have mitigated the issue, and AutoOps is back to full functionality in the AWS us-east-1 region. We will continue monitoring the signals from the region to ensure AutoOps remains fully functional, and will share another update once the issue is fully resolved.

  4. resolved Mar 27, 2026, 07:23 AM UTC

    AutoOps is back to full functionality in the AWS us-east-1 region and the incident has been resolved.

Read the full incident report →

Major March 26, 2026

Issue creating new projects in GCP europe-west3

Detected by Pingoru
Mar 26, 2026, 10:51 AM UTC
Resolved
Mar 26, 2026, 05:42 PM UTC
Duration
6h 51m
Affected: Project Orchestration (Create/Edit/Restart/Delete) · Elastic Cloud Serverless
Timeline · 3 updates
  1. identified Mar 26, 2026, 10:51 AM UTC

    We are aware of problems creating new Serverless projects in the GCP europe-west3 region. The engineering team has identified the problem, and we are working on mitigating it. We will post the next update in 2 hours.

  2. identified Mar 26, 2026, 01:06 PM UTC

    The engineering team is validating the fix to restore project creation in the GCP europe-west3 region. We will post another update in 2 hours or sooner if needed.

  3. resolved Mar 26, 2026, 05:42 PM UTC

    This issue is resolved. Customers are now able to create new Serverless projects in the GCP europe-west3 region.

Read the full incident report →

Minor March 9, 2026

GCP us-central1 hosting problems

Detected by Pingoru
Mar 09, 2026, 11:32 AM UTC
Resolved
Mar 09, 2026, 08:29 PM UTC
Duration
8h 57m
Affected: Deployment orchestration (Create/Edit/Restart/Delete): GCP us-central1 · Elastic Docker Registry
Timeline · 5 updates
  1. investigating Mar 09, 2026, 11:32 AM UTC

    We have identified an issue impacting the provisioning of Kibana resources for new projects. Requests for new projects may be delayed or may fail. We are investigating the root cause and will post updates shortly.

  2. investigating Mar 09, 2026, 11:46 AM UTC

    Further investigation has identified additional impact across services hosted in the GCP us-central1 region. We are working internally, and with our hosting providers, to determine the root cause.

  3. identified Mar 09, 2026, 12:05 PM UTC

    The issue has been identified, and we are working with our hosting providers to reach a solution. Users may see an impact on the availability of the Elastic Docker Registry (browsing and pulling images). Users may also see an impact on the orchestration of their hosted Serverless projects, leading to slower provisioning and scaling tasks.

  4. monitoring Mar 09, 2026, 01:16 PM UTC

    We have seen an improvement regarding this service disruption. The Docker registry is available, and project orchestration has been restored. We will continue to monitor the situation to ensure a full service recovery.

  5. resolved Mar 09, 2026, 08:29 PM UTC

    This incident has been resolved.

Read the full incident report →

Major February 26, 2026

fleet-server rejecting new incoming connections

Detected by Pingoru
Feb 26, 2026, 11:29 AM UTC
Resolved
Feb 26, 2026, 05:07 PM UTC
Duration
5h 37m
Affected: Elastic Cloud Hosted · Elastic Cloud Serverless
Timeline · 3 updates
  1. investigating Feb 26, 2026, 11:29 AM UTC

    We're seeing some errors from fleet-server in Elastic Cloud that are affecting new incoming connections for some projects. We will provide an update within the next 2 hours.

  2. monitoring Feb 26, 2026, 12:24 PM UTC

    We have identified the root cause of the sporadic errors observed and have successfully mitigated the issue. We will continue to observe the situation and provide an update within the next 2 hours.

  3. resolved Feb 26, 2026, 05:07 PM UTC

    This incident is resolved.

Read the full incident report →

Major February 25, 2026

Intermittent 404s when accessing elastic.co webpage

Detected by Pingoru
Feb 25, 2026, 02:33 PM UTC
Resolved
Feb 25, 2026, 02:38 PM UTC
Duration
5m
Affected: Elastic website - elastic.co
Timeline · 2 updates
  1. investigating Feb 25, 2026, 02:33 PM UTC

    We are currently investigating an issue causing intermittent 404 errors when visiting the elastic.co webpage. We will provide additional updates as they become available.

  2. resolved Feb 25, 2026, 02:38 PM UTC

    404 errors have been resolved on the elastic.co webpage.

Read the full incident report →

Major February 18, 2026

Issues provisioning new Elastic Cloud Hosted capacity in Azure regions

Detected by Pingoru
Feb 18, 2026, 05:56 PM UTC
Resolved
Feb 18, 2026, 07:12 PM UTC
Duration
1h 15m
Affected: Elastic Cloud Hosted
Timeline · 2 updates
  1. investigating Feb 18, 2026, 05:56 PM UTC

    We are aware of and currently investigating an issue causing failures when provisioning new capacity in Elastic Cloud Hosted Azure regions. We will provide additional updates as they become available.

  2. resolved Feb 18, 2026, 07:12 PM UTC

    This issue is now resolved.

Read the full incident report →

Major February 12, 2026

Old Clusters Appearing on Billing Usage Page

Detected by Pingoru
Feb 12, 2026, 06:21 PM UTC
Resolved
Feb 13, 2026, 12:46 AM UTC
Duration
6h 25m
Timeline · 2 updates
  1. identified Feb 12, 2026, 06:21 PM UTC

    We've identified an issue on the billing usage pages where some hosted deployments might be showing incorrect usage. We're working on a fix and will update again in 6 hours.

  2. resolved Feb 13, 2026, 12:46 AM UTC

    We have identified and fixed an issue on the billing usage pages that resulted in the pages showing incorrect billing totals. There should be no more customer impact.

Read the full incident report →

Major February 10, 2026

Degradation with BYOK deployment creation in Azure regions

Detected by Pingoru
Feb 10, 2026, 06:35 PM UTC
Resolved
Feb 10, 2026, 10:51 PM UTC
Duration
4h 16m
Timeline · 3 updates
  1. investigating Feb 10, 2026, 06:35 PM UTC

    We are investigating a degradation in BYOK deployment creation in Azure regions. We will update again in one hour.

  2. identified Feb 10, 2026, 07:49 PM UTC

    We have identified a misconfiguration in our Azure setup and are working on pushing out the fix. We will update once it has been released.

  3. resolved Feb 11, 2026, 01:01 PM UTC

    This issue has been resolved, and BYOK deployment creation has been confirmed as fully functional.

Read the full incident report →

Major February 9, 2026

Degradation in the `openai-gpt-oss-120b` model in EIS

Detected by Pingoru
Feb 09, 2026, 04:53 PM UTC
Resolved
Feb 10, 2026, 10:22 AM UTC
Duration
17h 29m
Timeline · 3 updates
  1. investigating Feb 09, 2026, 04:53 PM UTC

    We've noticed a degradation in the `openai-gpt-oss-120b` model in EIS. We're investigating the root cause and will update again in 2 hours.

  2. investigating Feb 09, 2026, 07:35 PM UTC

    We've identified the issue and have merged a fix. We are in the process of rolling it out but are blocked by a GitHub degradation. We will update as soon as we have progress on the deploy.

  3. resolved Feb 10, 2026, 10:22 AM UTC

    We have completed our work to restore access to the `openai-gpt-oss-120b` model in EIS.

Read the full incident report →

Major February 6, 2026

Issue creating new projects in AWS us-east-1

Detected by Pingoru
Feb 06, 2026, 01:56 AM UTC
Resolved
Feb 06, 2026, 04:22 AM UTC
Duration
2h 26m
Timeline · 3 updates
  1. investigating Feb 06, 2026, 01:56 AM UTC

    We are currently investigating an issue that has resulted in provisioning delays affecting the creation of new Elastic Cloud Serverless projects within the AWS us-east-1 region. While existing projects remain operational and unaffected, customers may encounter errors or extended wait times when attempting to spin up new projects in this specific region. Our engineering team is actively working to identify the root cause and restore full functionality. We apologize for the inconvenience and will provide further updates as more information becomes available.

  2. identified Feb 06, 2026, 02:43 AM UTC

    We have successfully identified the root cause of the provisioning delays in the AWS us-east-1 region and are now beginning the remediation phase. A fix has been developed and validated through internal testing, and we are currently rolling out the fix across the affected infrastructure. Customers may still see intermittent failures for a short period as the fix propagates.

  3. resolved Feb 06, 2026, 04:22 AM UTC

    The issues affecting Elastic Cloud Serverless project creation in the AWS us-east-1 region have been resolved. Our engineering team identified a DNS configuration error as the root cause; once the records were corrected and propagated, service functionality returned to normal. All systems are now operating as expected, and customers should no longer experience delays or errors when creating new projects. We appreciate your patience while we worked to clear this up.

Read the full incident report →

Looking to track Elastic.io downtime and outages?

Pingoru polls Elastic.io's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Elastic.io reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Elastic.io alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
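
Status pages like status.elastic.co are typically hosted on Atlassian Statuspage, which exposes a public JSON endpoint (commonly `/api/v2/status.json`) that a poller can check on an interval. The sketch below shows what the parsing half of such a poller might look like; the payload shape and indicator values are assumptions based on the common Statuspage format, not anything Pingoru documents:

```python
def parse_status(payload: dict) -> tuple[str, str]:
    """Extract (indicator, description) from a Statuspage-style payload.

    Assumed shape: {"status": {"indicator": "...", "description": "..."}}.
    An indicator of "none" means all systems operational.
    """
    status = payload.get("status", {})
    return status.get("indicator", "unknown"), status.get("description", "")


def is_outage(indicator: str) -> bool:
    # Statuspage indicators escalate: none < minor < major < critical.
    return indicator in ("minor", "major", "critical")


if __name__ == "__main__":
    # A real poller would fetch the payload every few minutes, e.g. with
    # urllib.request.urlopen("https://status.elastic.co/api/v2/status.json"),
    # then alert when the indicator changes from "none" to something worse.
    sample = {"status": {"indicator": "major",
                         "description": "Partial System Outage"}}
    indicator, description = parse_status(sample)
    if is_outage(indicator):
        print(f"ALERT: {indicator} - {description}")
```

Keying alerts off the top-level indicator rather than individual components keeps the poller simple; component-level filtering (as in the list above) would instead read the per-component endpoint and match on component names.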
Start monitoring Elastic.io for free

5 free monitors · No credit card required