Confluent Outage History

Confluent is up right now

There have been 12 Confluent outages since February 3, 2026, totaling 538h 29m of downtime. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.confluent.cloud

Notice April 24, 2026

Some customers using Confluent Cloud Flink on Azure with PrivateLink connectivity are experiencing issues submitting new Flink statements.

Detected by Pingoru
Apr 24, 2026, 05:18 PM UTC
Resolved
Apr 25, 2026, 12:14 AM UTC
Duration
6h 56m
Affected: Confluent Cloud
Timeline · 5 updates
  1. investigating Apr 24, 2026, 05:18 PM UTC

    We are currently investigating this issue.

  2. identified Apr 24, 2026, 08:14 PM UTC

The issue has been identified and a fix is being rolled out. The rollout is expected to complete across all impacted regions within 2 hours.

  3. identified Apr 24, 2026, 10:27 PM UTC

Rollout of the fix is progressing; we expect all clusters to complete in approximately 30 minutes.

  4. monitoring Apr 24, 2026, 11:06 PM UTC

A fix has been implemented and we are monitoring for the next hour.

  5. resolved Apr 25, 2026, 12:14 AM UTC

    This incident has been resolved.

Major April 21, 2026

Experiencing delays in Tableflow external catalog syncs in AWS

Detected by Pingoru
Apr 21, 2026, 09:34 AM UTC
Resolved
Apr 21, 2026, 09:49 AM UTC
Duration
14m
Affected: Confluent Cloud
Timeline · 2 updates
  1. monitoring Apr 21, 2026, 09:34 AM UTC

We are experiencing delays in Tableflow external catalog syncs in AWS. Our team has deployed a fix to resolve the situation and is monitoring the results.

  2. resolved Apr 21, 2026, 09:49 AM UTC

    This incident has been resolved.

Major March 5, 2026

Metrics API was experiencing failures from 03:23 UTC to 03:58 UTC

Detected by Pingoru
Mar 05, 2026, 04:40 AM UTC
Resolved
Mar 05, 2026, 07:22 AM UTC
Duration
2h 42m
Affected: Confluent Cloud
Timeline · 2 updates
  1. monitoring Mar 05, 2026, 04:40 AM UTC

Confluent Cloud Metrics API was unavailable from 2026-03-05 03:23 UTC to 2026-03-05 03:58 UTC. The incident has been mitigated and the systems are functioning well. We are continuing to monitor and will provide an update on or before 2026-03-05 06:40 UTC.

  2. resolved Mar 05, 2026, 07:22 AM UTC

The Confluent Cloud Metrics API is now fully functional. The incident has been resolved and systems are functioning normally.

Major March 2, 2026

Elevated error rates in AWS me-south-1 and me-central-1 regions

Detected by Pingoru
Mar 02, 2026, 07:19 AM UTC
Resolved
Mar 20, 2026, 12:55 AM UTC
Duration
17d 17h
Affected: Confluent Cloud
Timeline · 6 updates
  1. investigating Mar 02, 2026, 07:19 AM UTC

We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-south-1 and me-central-1 regions. This began at approximately 05:00 AM UTC today and is linked to disruptions in AWS’s availability zones in these regions (mec1-az1, mec1-az3, and mes1-az2). Our team is currently working on mitigation and will provide updates as the situation evolves.

  2. investigating Mar 02, 2026, 08:32 AM UTC

Confluent Cloud services in the AWS me-central-1 region are experiencing a major outage, linked to AWS's me-central-1 regional outage. Confluent Cloud services in AWS me-south-1 are being mitigated.

  3. monitoring Mar 02, 2026, 10:35 AM UTC

Confluent Cloud services in AWS me-south-1 are mitigated and are currently being monitored. Confluent Cloud services in AWS me-central-1 are still disrupted owing to the regional outage at AWS.

  4. monitoring Mar 02, 2026, 03:37 PM UTC

    Confluent Cloud services in AWS me-south-1 remain stable and operating normally following mitigation. We continue to monitor for any residual issues. Confluent Cloud services in AWS me-central-1 continue to experience a complete outage due to the ongoing AWS regional infrastructure failure in that region.

  5. monitoring Mar 03, 2026, 05:34 AM UTC

    AWS regional recovery is expected to be extended in both me-central-1 and me-south-1 regions. Customers requiring immediate restoration in these two regions are encouraged to review regional failover options.

  6. resolved Mar 20, 2026, 12:55 AM UTC

    This incident has been resolved.

Major March 1, 2026

Elevated Error Rates in AWS me-central-1 region

Detected by Pingoru
Mar 01, 2026, 06:08 PM UTC
Resolved
Mar 01, 2026, 10:05 PM UTC
Duration
3h 57m
Affected: Confluent Cloud
Timeline · 4 updates
  1. investigating Mar 01, 2026, 06:08 PM UTC

    We are experiencing increased error rates in some of our Confluent Cloud services in the AWS me-central-1 region. This began at approximately 12:51 PM UTC today and is linked to a disruption in one of AWS’s availability zones (mec1-az2). Our team is actively applying mitigation steps and will provide updates as the situation evolves.

  2. identified Mar 01, 2026, 06:48 PM UTC

    We have identified the cause of the problem to be disruption in one of AWS’s availability zones (mec1-az2). We are taking steps to confirm the safest path to mitigation for any impacted Confluent Cloud services in this region.

  3. monitoring Mar 01, 2026, 09:41 PM UTC

The problem has been mitigated as of 21:15 UTC today. All Confluent Cloud services are now healthy in the me-central-1 region, and we will monitor for any residual issues before resolving this incident in 1 hour.

  4. resolved Mar 01, 2026, 10:05 PM UTC

This incident has been resolved. All Confluent Cloud services are now healthy in the me-central-1 region.

Major February 26, 2026

All new dedicated Kafka cluster provisioning and expansion operations in Azure westus3, southcentralus, and eastus2 are failing

Detected by Pingoru
Feb 26, 2026, 07:56 PM UTC
Resolved
Feb 27, 2026, 01:57 AM UTC
Duration
6h 1m
Affected: Confluent Cloud
Timeline · 3 updates
  1. investigating Feb 26, 2026, 07:56 PM UTC

    Azure is investigating the issue and we will post an update soon.

  2. identified Feb 26, 2026, 11:07 PM UTC

    The issue has been identified and a fix is being implemented.

  3. resolved Feb 27, 2026, 01:57 AM UTC

    This incident has been resolved.

Major February 26, 2026

Network provisioning service in Azure East US region is degraded

Detected by Pingoru
Feb 26, 2026, 01:59 PM UTC
Resolved
Feb 26, 2026, 10:24 PM UTC
Duration
8h 24m
Affected: Confluent Cloud
Timeline · 3 updates
  1. investigating Feb 26, 2026, 01:59 PM UTC

    Provisioning new networks in this region can potentially fail. We are investigating the issue and will post an update soon.

  2. monitoring Feb 26, 2026, 10:23 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 26, 2026, 10:24 PM UTC

    This incident has been resolved.

Major February 25, 2026

Some Confluent Cloud customers accessing services in the GCP us-south1 region might experience elevated error rates

Detected by Pingoru
Feb 25, 2026, 05:01 AM UTC
Resolved
Feb 27, 2026, 06:16 PM UTC
Duration
2d 13h
Affected: Confluent Cloud
Timeline · 6 updates
  1. investigating Feb 25, 2026, 05:01 AM UTC

We are currently investigating the issue.

  2. identified Feb 25, 2026, 12:02 PM UTC

    GCP has identified the issue and is actively working on applying mitigation.

  3. identified Feb 26, 2026, 01:11 AM UTC

    At this time, single zone clusters in us-south1 Zone-a are impacted.

  4. monitoring Feb 26, 2026, 05:41 PM UTC

A fix has been implemented and we are monitoring the results.

  5. monitoring Feb 27, 2026, 03:25 AM UTC

    We are continuing to monitor for any further issues.

  6. resolved Feb 27, 2026, 06:16 PM UTC

    This incident has been resolved.

Minor February 24, 2026

Elevated Kafka REST API Errors in AWS us-west-2

Detected by Pingoru
Feb 24, 2026, 09:15 PM UTC
Resolved
Feb 25, 2026, 01:07 AM UTC
Duration
3h 52m
Affected: Confluent Cloud
Timeline · 5 updates
  1. investigating Feb 24, 2026, 09:15 PM UTC

We are experiencing an elevated level of Kafka REST API errors with error code 429 in AWS us-west-2 and are currently looking into the issue. This issue impacts Kafka eSKU clusters in this region.

  2. investigating Feb 24, 2026, 09:16 PM UTC

    We are currently working on a mitigation.

  3. investigating Feb 24, 2026, 09:16 PM UTC

    We are continuing to investigate this issue.

  4. identified Feb 25, 2026, 12:33 AM UTC

    Issue has been identified and we are currently deploying the fix.

  5. resolved Feb 25, 2026, 01:07 AM UTC

    This incident has been resolved.

Minor February 23, 2026

Elevated Kafka Latency in GCP asia-southeast1

Detected by Pingoru
Feb 23, 2026, 10:12 PM UTC
Resolved
Feb 24, 2026, 09:58 AM UTC
Duration
11h 45m
Affected: Confluent Cloud
Timeline · 6 updates
  1. investigating Feb 23, 2026, 10:12 PM UTC

    We are currently investigating this issue.

  2. investigating Feb 23, 2026, 10:17 PM UTC

Starting February 21, 2026 at 21:00 UTC, we have been observing elevated Kafka latency in the GCP asia-southeast1 region for less than 0.1% of produce and fetch requests for some customers. Median latencies for both produce and fetch requests are not impacted. We are actively working with GCP to identify the root cause of the issue. We will provide the next update in 2 hours or earlier.

  3. investigating Feb 23, 2026, 10:18 PM UTC

    We are continuing to investigate this issue.

  4. investigating Feb 24, 2026, 12:23 AM UTC

    We are continuing to investigate this issue with GCP. GCP is actively working on applying mitigation.

  5. monitoring Feb 24, 2026, 07:27 AM UTC

GCP has identified and fixed the root cause. We are currently monitoring.

  6. resolved Feb 24, 2026, 09:58 AM UTC

The issue was resolved successfully. All systems are working as expected.

Major February 11, 2026

All new cluster creations in Azure westus3, southcentralus, and eastus2 are failing

Detected by Pingoru
Feb 11, 2026, 10:14 PM UTC
Resolved
Feb 12, 2026, 02:45 AM UTC
Duration
4h 30m
Affected: Confluent Cloud
Timeline · 5 updates
  1. investigating Feb 11, 2026, 10:14 PM UTC

    Confluent has been experiencing failures creating all new clusters in Azure westus3, southcentralus, and eastus2. The impact started at 09:50 UTC on Wednesday, February 11, 2026. We are investigating and will provide another update in 30 minutes or sooner if mitigation has been achieved.

  2. investigating Feb 11, 2026, 10:50 PM UTC

    We are continuing to investigate. Our next update will be in 90 minutes or sooner if mitigation has been achieved.

  3. identified Feb 12, 2026, 12:07 AM UTC

The issue has been identified with Azure support, and we are working on mitigating failed cluster creations. We will provide another update in the next 90 minutes or sooner if mitigation has been achieved.

  4. identified Feb 12, 2026, 01:58 AM UTC

    We are continuing to work on mitigation for new cluster creations. We will provide another update in the next 60 minutes or sooner.

  5. resolved Feb 12, 2026, 02:45 AM UTC

The incident has been resolved. New cluster creation in the Azure westus3, southcentralus, and eastus2 regions has been restored.

Major February 3, 2026

Confluent Cloud Service Outage

Detected by Pingoru
Feb 03, 2026, 09:33 PM UTC
Resolved
Feb 04, 2026, 12:48 AM UTC
Duration
3h 14m
Affected: Confluent Cloud
Timeline · 4 updates
  1. investigating Feb 03, 2026, 11:10 PM UTC

    We are currently investigating this issue.

  2. identified Feb 03, 2026, 11:25 PM UTC

    We’ve identified the cause of the connection issues and are reverting a recent configuration change. Service stability is improving as the rollback progresses. We’ll continue monitoring and provide an update once all services are confirmed restored.

  3. monitoring Feb 04, 2026, 12:08 AM UTC

    Confluent has identified the cause of the issue and reverted the related configuration change. During the incident, some Kafka clusters experienced intermittent connectivity issues, and some control plane services, including metrics, logging, authentication, and new cluster provisioning, were briefly impacted. The issue has been mitigated, services are stable, and we continue to monitor.

  4. resolved Feb 04, 2026, 12:48 AM UTC

    The issue has been resolved and services are operating normally. The root cause was a networking configuration change in the us-west-2 region that caused client connection issues starting at approximately 21:33 UTC, resulting in intermittent service disruption across several Confluent Cloud services, including Kafka, Flink, Metrics API, Logging, Provisioning, and Authentication. The change was reverted, and as of 00:20 UTC, services have been fully restored.

Looking to track Confluent downtime and outages?

Pingoru polls Confluent's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Confluent reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Confluent alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Confluent for free

5 free monitors · No credit card required
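The polling approach described above can be sketched in a few lines of Python. Note this is an illustrative sketch, not Pingoru's implementation: the `api/v2/status.json` endpoint path and the `status.indicator` field follow the common Statuspage format that pages like status.confluent.cloud typically use, and are assumptions rather than details stated on this page.

```python
import json
import time
import urllib.request

# Assumed Statuspage-style summary endpoint (not stated on this page).
STATUS_URL = "https://status.confluent.cloud/api/v2/status.json"
POLL_INTERVAL_SECONDS = 300  # the 5-minute cadence mentioned above


def indicator_from(payload: dict) -> str:
    """Extract the overall status indicator ('none', 'minor', 'major', ...)
    from a parsed Statuspage-style summary payload."""
    return payload["status"]["indicator"]


def fetch_indicator(url: str = STATUS_URL) -> str:
    """Fetch the status page summary and return its current indicator."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return indicator_from(json.load(resp))


def poll_forever() -> None:
    """Poll on a fixed interval and report only when the state changes."""
    last = None
    while True:
        try:
            current = fetch_indicator()
        except OSError as exc:
            # Network hiccups should not kill the poller.
            print(f"poll failed: {exc}")
        else:
            if current != last:
                print(f"status changed: {last} -> {current}")
                last = current
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    poll_forever()
```

A real monitor would add retries, alert routing (email, Slack, webhooks), and per-component filtering on top of this change-detection loop.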