dbt Cloud Outage History

dbt Cloud is up right now

There have been 292 dbt Cloud outages since February 4, 2026. Each is summarised below with incident details, duration, and resolution information.

Source: https://status.getdbt.com

Minor February 25, 2026

[US AWS] Gitlab 403 and 429 Errors

Detected by Pingoru
Feb 25, 2026, 09:20 PM UTC
Resolved
Feb 25, 2026, 09:20 PM UTC
Timeline · 3 updates
  1. identified Feb 25, 2026, 07:49 PM UTC

    Status: Identified There is currently a GitLab outage that may have some impact on git operations. We are monitoring on our end; please refer to the following status page for more information and to stay up to date: https://status.gitlab.com/ Affected components IDE (US AWS) (Degraded performance) dbt platform CLI (US AWS) (Degraded performance)

  2. monitoring Feb 25, 2026, 07:52 PM UTC

    Status: Monitoring It appears that a fix has been shipped, and GitLab will continue to provide updates. More information: https://status.gitlab.com/ Affected components IDE (US AWS) (Degraded performance) dbt platform CLI (US AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 09:20 PM UTC

    Status: Resolved This incident has now been resolved. Please refer to GitLab for more details: https://status.gitlab.com/ Affected components dbt platform CLI (US AWS) (Operational) IDE (US AWS) (Operational)


Minor February 25, 2026

[US Cell 1 AWS] Gitlab 403 and 429 Errors

Detected by Pingoru
Feb 25, 2026, 09:20 PM UTC
Resolved
Feb 25, 2026, 09:20 PM UTC
Timeline · 3 updates
  1. identified Feb 25, 2026, 07:49 PM UTC

    Status: Identified There is currently a GitLab outage that may have some impact on git operations. We are monitoring on our end; please refer to the following status page for more information and to stay up to date: https://status.gitlab.com/ Affected components dbt platform CLI (US Cell 1 AWS) (Degraded performance) IDE (US Cell 1 AWS) (Degraded performance)

  2. monitoring Feb 25, 2026, 07:52 PM UTC

    Status: Monitoring It appears that a fix has been shipped, and GitLab will continue to provide updates. More information: https://status.gitlab.com/ Affected components IDE (US Cell 1 AWS) (Degraded performance) dbt platform CLI (US Cell 1 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 09:20 PM UTC

    Status: Resolved This incident has now been resolved. Please refer to GitLab for more details: https://status.gitlab.com/ Affected components dbt platform CLI (US Cell 1 AWS) (Operational) IDE (US Cell 1 AWS) (Operational)


Minor February 25, 2026

[US Cell 2 AWS] Gitlab 403 and 429 Errors

Detected by Pingoru
Feb 25, 2026, 09:20 PM UTC
Resolved
Feb 25, 2026, 09:20 PM UTC
Timeline · 3 updates
  1. identified Feb 25, 2026, 07:49 PM UTC

    Status: Identified There is currently a GitLab outage that may have some impact on git operations. We are monitoring on our end; please refer to the following status page for more information and to stay up to date: https://status.gitlab.com/ Affected components IDE (US Cell 2 AWS) (Degraded performance) dbt platform CLI (US Cell 2 AWS) (Degraded performance)

  2. monitoring Feb 25, 2026, 07:52 PM UTC

    Status: Monitoring It appears that a fix has been shipped, and GitLab will continue to provide updates. More information: https://status.gitlab.com/ Affected components IDE (US Cell 2 AWS) (Degraded performance) dbt platform CLI (US Cell 2 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 09:20 PM UTC

    Status: Resolved This incident has now been resolved. Please refer to GitLab for more details: https://status.gitlab.com/ Affected components dbt platform CLI (US Cell 2 AWS) (Operational) IDE (US Cell 2 AWS) (Operational)


Minor February 25, 2026

[US Cell 3 AWS] Gitlab 403 and 429 Errors

Detected by Pingoru
Feb 25, 2026, 09:20 PM UTC
Resolved
Feb 25, 2026, 09:20 PM UTC
Timeline · 3 updates
  1. identified Feb 25, 2026, 07:49 PM UTC

    Status: Identified There is currently a GitLab outage that may have some impact on git operations. We are monitoring on our end; please refer to the following status page for more information and to stay up to date: https://status.gitlab.com/ Affected components IDE (US Cell 3 AWS) (Degraded performance) dbt platform CLI (US Cell 3 AWS) (Degraded performance)

  2. monitoring Feb 25, 2026, 07:52 PM UTC

    Status: Monitoring It appears that a fix has been shipped, and GitLab will continue to provide updates. More information: https://status.gitlab.com/ Affected components dbt platform CLI (US Cell 3 AWS) (Degraded performance) IDE (US Cell 3 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 09:20 PM UTC

    Status: Resolved This incident has now been resolved. Please refer to GitLab for more details: https://status.gitlab.com/ Affected components IDE (US Cell 3 AWS) (Operational) dbt platform CLI (US Cell 3 AWS) (Operational)


Minor February 25, 2026

[US Cell 4 AWS] Gitlab 403 and 429 Errors

Detected by Pingoru
Feb 25, 2026, 09:20 PM UTC
Resolved
Feb 25, 2026, 09:20 PM UTC
Timeline · 3 updates
  1. identified Feb 25, 2026, 07:49 PM UTC

    Status: Identified There is currently a GitLab outage that may have some impact on git operations. We are monitoring on our end; please refer to the following status page for more information and to stay up to date: https://status.gitlab.com/ Affected components IDE (US Cell 4 AWS) (Degraded performance) dbt platform CLI (US Cell 4 AWS) (Degraded performance)

  2. monitoring Feb 25, 2026, 07:52 PM UTC

    Status: Monitoring It appears that a fix has been shipped, and GitLab will continue to provide updates. More information: https://status.gitlab.com/ Affected components IDE (US Cell 4 AWS) (Degraded performance) dbt platform CLI (US Cell 4 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 09:20 PM UTC

    Status: Resolved This incident has now been resolved. Please refer to GitLab for more details: https://status.gitlab.com/ Affected components IDE (US Cell 4 AWS) (Operational) dbt platform CLI (US Cell 4 AWS) (Operational)


Minor February 25, 2026

[APAC AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (APAC AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (APAC AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (APAC AWS) (Operational)


Minor February 25, 2026

[EMEA AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (EMEA AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (EMEA AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (EMEA AWS) (Operational)


Minor February 25, 2026

[EMEA Cell 1 AZURE] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (EMEA Cell 1 AZURE) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (EMEA Cell 1 AZURE) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (EMEA Cell 1 AZURE) (Operational)


Minor February 25, 2026

[EU Cell 1 GCP] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (EU Cell 1 GCP) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (EU Cell 1 GCP) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (EU Cell 1 GCP) (Operational)


Minor February 25, 2026

[EU4 Cell 1 GCP] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (EU4 Cell 1 GCP) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (EU4 Cell 1 GCP) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (EU4 Cell 1 GCP) (Operational)


Minor February 25, 2026

[JP Cell 1 AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (JP Cell 1 AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (JP Cell 1 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (JP Cell 1 AWS) (Operational)


Minor February 25, 2026

[US AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US AWS) (Operational)


Minor February 25, 2026

[US Cell 1 AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 1 AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 1 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 1 AWS) (Operational)


Minor February 25, 2026

[US Cell 1 AZURE] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 1 AZURE) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 1 AZURE) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 1 AZURE) (Operational)


Minor February 25, 2026

[US Cell 1 GCP] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 1 GCP) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 1 GCP) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 1 GCP) (Operational)


Minor February 25, 2026

[US Cell 2 AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 2 AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 2 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 2 AWS) (Operational)


Minor February 25, 2026

[US Cell 3 AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 3 AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 3 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 3 AWS) (Operational)


Minor February 25, 2026

[US Cell 4 AWS] Compare changes job step failing for Databricks users

Detected by Pingoru
Feb 25, 2026, 06:46 PM UTC
Resolved
Feb 25, 2026, 06:46 PM UTC
Timeline · 3 updates
  1. investigating Feb 24, 2026, 10:27 PM UTC

    Status: Investigating We're investigating an issue with advanced CI that is causing the compare changes step of CI jobs to fail for users of the Databricks adapter. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 4 AWS) (Degraded performance)

  2. identified Feb 24, 2026, 11:07 PM UTC

    Status: Identified We have identified an issue with partial success handling in advanced CI that caused the compare changes step of CI jobs to fail for users of the Databricks adapter. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 4 AWS) (Degraded performance)

  3. resolved Feb 25, 2026, 06:46 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 17:25 UTC. We have put measures in place to prevent a recurrence, and the compare changes step of advanced CI jobs for Databricks users has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 4 AWS) (Operational)


Minor February 24, 2026

[US AWS] Platform instability affecting runs, Studio, and CLI

Detected by Pingoru
Feb 24, 2026, 09:53 PM UTC
Resolved
Feb 24, 2026, 09:53 PM UTC
Timeline · 2 updates
  1. identified Feb 24, 2026, 09:52 PM UTC

    Status: Identified We have identified an issue impacting run execution and Studio and Cloud CLI functionality. A fix is currently being implemented. We will share another update as soon as more information becomes available. Affected components Semantic Layer API (US AWS) (Operational) Metadata Ingestion (US AWS) (Operational) dbt platform CLI (US AWS) (Full outage) Web Application (US AWS) (Operational) IDE (US AWS) (Full outage) dbt Copilot (US AWS) (Operational) Admin API (US AWS) (Operational) Scheduled Jobs (US AWS) (Full outage) Metadata API (US AWS) (Operational)

  2. resolved Feb 24, 2026, 09:53 PM UTC

    Status: Resolved Between 21:52 and 21:53 UTC on Feb 24, we experienced platform instability that caused newly triggered runs to time out and be cancelled, Studio to fail to launch, and Cloud CLI commands to fail. The issue has since been resolved. We have implemented measures to help prevent a recurrence, and all platform features are operating normally. Please contact Support via email at [email protected] with any further questions. Affected components Metadata Ingestion (US AWS) (Operational) dbt platform CLI (US AWS) (Operational) Metadata API (US AWS) (Operational) Scheduled Jobs (US AWS) (Operational) IDE (US AWS) (Operational) dbt Copilot (US AWS) (Operational) Admin API (US AWS) (Operational) Semantic Layer API (US AWS) (Operational) Web Application (US AWS) (Operational)


Minor February 24, 2026

[US Cell 1 AWS] Cloud CLI server impacted by transient DNS resolution error

Detected by Pingoru
Feb 24, 2026, 04:05 PM UTC
Resolved
Feb 24, 2026, 04:05 PM UTC
Timeline · 2 updates
  1. monitoring Feb 24, 2026, 04:02 PM UTC

    Status: Monitoring We're investigating an issue with the Cloud CLI that caused a transient DNS resolution failure for the Cloud CLI server for a small number of customers. This impacted Cloud CLI servers in the us-c1, us-c2, and us-c3 instances between 10:00 AM and 11:00 AM EST. Affected components dbt platform CLI (US Cell 1 AWS) (Degraded performance)

  2. resolved Feb 24, 2026, 04:05 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 11:00 AM EST. Between 10:00 AM and 11:00 AM EST, an issue with the Cloud CLI caused a small number of customer requests to fail. We have put measures in place to prevent a recurrence, and the Cloud CLI has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components dbt platform CLI (US Cell 1 AWS) (Operational)


Minor February 24, 2026

[US Cell 2 AWS] Cloud CLI server impacted by transient DNS resolution error

Detected by Pingoru
Feb 24, 2026, 04:05 PM UTC
Resolved
Feb 24, 2026, 04:05 PM UTC
Timeline · 2 updates
  1. monitoring Feb 24, 2026, 04:02 PM UTC

    Status: Monitoring We're investigating an issue with the Cloud CLI that caused a transient DNS resolution failure for the Cloud CLI server for a small number of customers. This impacted Cloud CLI servers in the us-c1, us-c2, and us-c3 instances between 10:00 AM and 11:00 AM EST. Affected components dbt platform CLI (US Cell 2 AWS) (Degraded performance)

  2. resolved Feb 24, 2026, 04:05 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 11:00 AM EST. Between 10:00 AM and 11:00 AM EST, an issue with the Cloud CLI caused a small number of customer requests to fail. We have put measures in place to prevent a recurrence, and the Cloud CLI has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components dbt platform CLI (US Cell 2 AWS) (Operational)


Minor February 24, 2026

[US Cell 3 AWS] Cloud CLI server impacted by transient DNS resolution error

Detected by Pingoru
Feb 24, 2026, 04:05 PM UTC
Resolved
Feb 24, 2026, 04:05 PM UTC
Timeline · 2 updates
  1. monitoring Feb 24, 2026, 04:02 PM UTC

    Status: Monitoring We're investigating an issue with the Cloud CLI that caused a transient DNS resolution failure for the Cloud CLI server for a small number of customers. This impacted Cloud CLI servers in the us-c1, us-c2, and us-c3 instances between 10:00 AM and 11:00 AM EST. Affected components dbt platform CLI (US Cell 3 AWS) (Degraded performance)

  2. resolved Feb 24, 2026, 04:05 PM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally as of 11:00 AM EST. Between 10:00 AM and 11:00 AM EST, an issue with the Cloud CLI caused a small number of customer requests to fail. We have put measures in place to prevent a recurrence, and the Cloud CLI has returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components dbt platform CLI (US Cell 3 AWS) (Operational)

Read the full incident report →

Minor February 23, 2026

[US Cell 2 AWS] Studio and CLI Invocation Failures

Detected by Pingoru
Feb 23, 2026, 08:32 PM UTC
Resolved
Feb 23, 2026, 08:32 PM UTC
Duration
Timeline · 1 update
  1. resolved Feb 23, 2026, 08:32 PM UTC

    Status: Resolved Starting at approximately 19:03 UTC, some customers may have experienced dbt invocation failures. This impacted some Studio IDE and CLI invocations, resulting in "internal error" messages. The underlying issue has been identified and resolved. Systems are now operating normally, and invocations should now run successfully. Affected components IDE (US Cell 2 AWS) (Operational) dbt platform CLI (US Cell 2 AWS) (Operational)

Read the full incident report →

Minor February 22, 2026

[US Cell 1 AWS] Run Failures and Studio IDE not loading in dbt Platform

Detected by Pingoru
Feb 22, 2026, 03:19 AM UTC
Resolved
Feb 22, 2026, 03:19 AM UTC
Duration
Timeline · 12 updates
  1. investigating Feb 21, 2026, 04:37 PM UTC

    Status: Investigating We're investigating an issue with orchestration that is causing run failures. This is impacting Scheduled Jobs anytime after 11:00 AM Eastern. The team is working on a resolution and we will provide updates at approximately 15 minute intervals or as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  2. investigating Feb 21, 2026, 05:05 PM UTC

    Status: Investigating We're continuing to investigate an issue with orchestration resulting in run failures for deployment jobs on the dbt Platform. The team is working on a resolution and we will provide updates as soon as new information becomes available. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  3. identified Feb 21, 2026, 05:39 PM UTC

    Status: Identified We have identified an issue with Scheduled Jobs that is resulting in run failures. A fix is being implemented, and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  4. identified Feb 21, 2026, 06:34 PM UTC

    Status: Identified The identified issue with Scheduled Jobs is also causing run delays in addition to run failures with the output of "Internal Error." We are working on implementing a fix and we will provide an update shortly. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  5. identified Feb 21, 2026, 07:39 PM UTC

    Status: Identified We are continuing to work on implementing a fix, and we will provide additional updates as more information becomes available. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  6. monitoring Feb 21, 2026, 07:51 PM UTC

    Status: Monitoring We have deployed a fix for Scheduled Jobs that was causing run start delays and run failures. Runs are starting to return to a normal state and error rates are decreasing. We are continuing to monitor. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  7. identified Feb 21, 2026, 08:28 PM UTC

    Status: Identified We continue to see impacts on Scheduled Jobs resulting in run cancellations, run failures, and run delays due to the ongoing incident. A fix is being implemented, and we will continue to provide updates as soon as more information becomes available. Affected components Scheduled Jobs (US Cell 1 AWS) (Partial outage)

  8. identified Feb 21, 2026, 09:19 PM UTC

    Status: Identified We have identified an issue with the Studio IDE as part of this ongoing incident that is resulting in IDE sessions not starting. A fix is being implemented, and we will provide an update shortly. Affected components IDE (US Cell 1 AWS) (Full outage) Scheduled Jobs (US Cell 1 AWS) (Full outage)

  9. identified Feb 21, 2026, 11:01 PM UTC

    Status: Identified We are continuing to work on implementing a fix, and we will provide an update as soon as more information is available. Affected components IDE (US Cell 1 AWS) (Full outage) Scheduled Jobs (US Cell 1 AWS) (Full outage)

  10. identified Feb 22, 2026, 12:20 AM UTC

    Status: Identified We are continuing to work on implementing a fix, and we will provide an update as soon as more information is available. Affected components IDE (US Cell 1 AWS) (Full outage) Scheduled Jobs (US Cell 1 AWS) (Full outage)

  11. monitoring Feb 22, 2026, 02:11 AM UTC

    Status: Monitoring We have deployed a fix for Scheduled Jobs that was causing run failures, run delays, run cancellations, and IDE sessions not starting. Scheduled Jobs and the Studio IDE have returned to their normal state and we are continuing to monitor. Affected components Scheduled Jobs (US Cell 1 AWS) (Operational) IDE (US Cell 1 AWS) (Operational)

  12. resolved Feb 22, 2026, 03:19 AM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally. An issue with Scheduled Jobs resulted in run failures, run delays, run cancellations, and IDE sessions not starting. We have put measures in place to prevent a recurrence, and Scheduled Jobs and the Studio IDE have returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components IDE (US Cell 1 AWS) (Operational) Scheduled Jobs (US Cell 1 AWS) (Operational)

Read the full incident report →

Minor February 22, 2026

[US Cell 2 AWS] Scheduled Job Run Cancellations

Detected by Pingoru
Feb 22, 2026, 02:07 AM UTC
Resolved
Feb 22, 2026, 02:07 AM UTC
Duration
Timeline · 3 updates
  1. identified Feb 22, 2026, 12:09 AM UTC

    Status: Identified We have identified an issue with Scheduled Jobs in the US Cell 2 AWS environment, resulting in run cancellations with the following error: "This run timed out after 10 minutes of inactivity." A fix is actively being implemented, and we will provide an update as soon as more information becomes available. Affected components Scheduled Jobs (US Cell 2 AWS) (Degraded performance)

  2. monitoring Feb 22, 2026, 12:15 AM UTC

    Status: Monitoring We have deployed a fix for Scheduled Jobs in the US Cell 2 AWS environment that was causing run cancellations. Scheduled Jobs have returned to their normal state and we are continuing to monitor. Affected components Scheduled Jobs (US Cell 2 AWS) (Operational)

  3. resolved Feb 22, 2026, 02:07 AM UTC

    Status: Resolved The issue has been resolved, and all affected systems are now functioning normally. An issue with Scheduled Jobs in the US Cell 2 AWS environment resulted in run cancellations. We have put measures in place to prevent a recurrence, and Scheduled Jobs have returned to normal. Please contact Support via email at [email protected] if you continue to experience delays and are unsure of the root cause. Affected components Scheduled Jobs (US Cell 2 AWS) (Operational)

Read the full incident report →

Looking to track dbt Cloud downtime and outages?

Pingoru polls dbt Cloud's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when dbt Cloud reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track dbt Cloud alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
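This kind of monitoring can be sketched in a few lines. The sketch below polls a status-page JSON endpoint on a 5-minute cadence and fires an alert callback when the overall indicator changes away from healthy. It assumes status.getdbt.com follows the common Statuspage convention of exposing `/api/v2/status.json`; the endpoint path, the `alert` callback, and the helper names are illustrative, not Pingoru's actual implementation.

```python
# Minimal status-page polling sketch (assumes a Statuspage-style
# /api/v2/status.json endpoint; not Pingoru's real implementation).
import json
import time
import urllib.request

STATUS_URL = "https://status.getdbt.com/api/v2/status.json"  # assumed endpoint
POLL_INTERVAL_SECONDS = 300  # 5-minute cadence, as described above


def indicator_from_payload(payload: dict) -> str:
    """Extract the overall indicator (e.g. 'none', 'minor', 'major')."""
    return payload.get("status", {}).get("indicator", "unknown")


def fetch_indicator(url: str = STATUS_URL) -> str:
    """Fetch the status page and return its overall indicator."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return indicator_from_payload(json.load(resp))


def poll(alert) -> None:
    """Poll forever; call `alert` when a new incident indicator appears."""
    last = "none"
    while True:
        try:
            current = fetch_indicator()
        except OSError:
            current = last  # transient network failure: keep last reading
        if current not in ("none", last):
            alert(f"dbt Cloud reports a {current} incident")
        last = current
        time.sleep(POLL_INTERVAL_SECONDS)
```

A real monitor would add deduplication, per-component filtering, and delivery to channels like Slack or webhooks, but the polling loop is the core of the idea.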
Start monitoring dbt Cloud for free

5 free monitors · No credit card required