Update timeline
- monitoring Feb 23, 2026, 08:37 AM UTC
A fix has been implemented and we are monitoring the results.
- resolved Feb 23, 2026, 08:52 AM UTC
This incident has been resolved.
- postmortem Mar 02, 2026, 05:47 PM UTC
### **Summary**

On **February 23, 2026**, between **00:00 and 00:30 PST** (**08:00–08:30 UTC / 13:30–14:00 IST**), a configuration change to **VPC Service Controls (VPC-SC)** was enforced at the **organization level** with the intent of restricting **BigQuery** usage to approved projects only. Shortly after enforcement, multiple unrelated GCP services began failing due to unexpected API denials, impacting CI workflows and internal service operations (e.g., access to storage, container images, and some console/project visibility behaviors). Service was fully restored after the change was reverted, and normal operations resumed within approximately 30 minutes. **No data loss occurred, and there was no security breach.**

### **Root Cause**

A regression in configuration scope occurred when an **org-level VPC-SC perimeter** was enforced to restrict BigQuery access. While the intent was to limit BigQuery specifically, applying the perimeter at the organization level changed service communication boundaries more broadly than expected. This resulted in wider GCP API denials affecting dependent services and cross-project interactions.

Contributing factors:

* **Org-level enforcement increased the blast radius** beyond the intended BigQuery restriction.
* Some internal service-to-service calls and cross-project dependencies were impacted in ways that were not clearly surfaced by dry-run visibility.
* Validation in a non-production environment did not expose this behavior due to lower workload volume and fewer real-world cross-service / cross-project integrations.

### **Impact**

During the incident window, the following symptoms were observed:

**CI / External-facing impact**

* Hosted builds and some CI jobs were unable to complete operations that rely on GCP access (for example, launching compute resources using service accounts and/or accessing required dependencies).
**Customer Impact (CD)**

During the incident window, customers were unable to execute CD pipelines and encountered the error: _"Cannot generate token for the accountId."_

Additionally:

* The Harness File Store was unable to fetch existing files from, or store new files to, the GCS backend.
* Logs were not visible in the Harness UI.

### **Remediation**

**Immediate**

* **Reverted** the organization-level VPC-SC enforcement, returning the environment to the prior access state.
* Confirmed recovery across impacted services and workflows after rollback.

**Permanent**

* A safer enforcement approach is being developed to meet the original governance goal (restricting BigQuery usage to approved projects) without applying an organization-wide perimeter in a single step.
* Working with cloud provider support to validate an alternative design and ensure the expected service dependency behavior is understood before re-enforcement.

### **Action Items**

To prevent similar incidents, we have implemented (or are implementing) the following:

1. **Staged rollout for org-level security boundary changes**
2. **Dependency mapping before perimeter enforcement**
3. **Improved observability for VPC-SC denials**
4. **Strengthened pre-production validation**
5. **Evaluation of alternative BigQuery governance controls**
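The safer enforcement approach described above (scoped resources plus a dry-run soak before enforcement) can be sketched with the `gcloud access-context-manager` CLI. This is a minimal illustration, not Harness's actual configuration: the perimeter name, policy ID, and project numbers below are hypothetical placeholders, and the exact rollout steps would depend on the validated design.

```shell
# Hypothetical identifiers -- replace with real policy/project values.
POLICY_ID=0123456789
APPROVED_PROJECTS="projects/111111111111,projects/222222222222"

# 1. Create the perimeter in dry-run mode: would-be denials are logged to
#    Cloud Audit Logs but nothing is blocked, so cross-service and
#    cross-project breakage surfaces before enforcement. Note the perimeter
#    resources are the approved projects only -- not the whole organization.
gcloud access-context-manager perimeters dry-run create bq-governance \
  --perimeter-title="BigQuery governance" \
  --perimeter-type=regular \
  --perimeter-resources="${APPROVED_PROJECTS}" \
  --perimeter-restricted-services=bigquery.googleapis.com \
  --policy="${POLICY_ID}"

# 2. While the dry run soaks, review logged denials, then inspect the
#    pending (dry-run) configuration before committing to it.
gcloud access-context-manager perimeters dry-run describe bq-governance \
  --policy="${POLICY_ID}"

# 3. Only after a clean dry-run window, promote the dry-run configuration
#    to enforced mode.
gcloud access-context-manager perimeters dry-run enforce bq-governance \
  --policy="${POLICY_ID}"
```

Because the dry-run configuration logs denials without enforcing them, this flow would have surfaced the unexpected cross-service denials from the incident as audit-log entries rather than outages.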