Update timeline
- resolved Apr 07, 2026, 02:15 AM UTC
**Type:** Incident · **Duration:** 13 hours · **Affected Components:** Captive Portals

**Apr 7, 02:15:00 GMT+0 - Investigating**

**Investigating: Service Issues with Zscaler Integration (APAC)**

We are currently investigating reports of connectivity issues affecting customers integrated with the [**zscloud.net**](http://Zscloud.net) cloud in the **APAC region**. We are working directly with Zscaler to identify the cause and will provide updates as they become available.

**Apr 7, 07:00:00 GMT+0 - Identified**

**Monitoring: Investigation & Remediation in Progress**

Zscaler has officially acknowledged our support ticket and is investigating the issue. Simultaneously, our internal engineering teams are evaluating potential remediation steps to restore service for **APAC customers**. We will share more details as our joint efforts progress.

**Apr 7, 08:00:00 GMT+0 - Identified**

**Monitoring: Expanded Scope & Investigation**

Our investigation has confirmed that the impact extends beyond the [zscloud.net](http://Zscloud.net) environment to include [zscaler.net](http://zscaler.net) and other Zscaler clouds within the APAC region. Zscaler has acknowledged our high-priority ticket, and we are working closely with their team while our internal engineers continue to pursue a parallel remediation path.

**Apr 7, 08:40:00 GMT+0 - Identified**

**Monitoring: Multiple Impacted Nodes Identified (Singapore & Chennai)**

Our ongoing investigation has confirmed that the issue is not limited to the Singapore node. We have identified that the Chennai 2 node (India) is also experiencing disruptions, contributing to the connectivity issues across the APAC region.

## Recommended Workaround

To restore connectivity immediately, we suggest customers manually switch their traffic to one of the following stable nodes:

* **Hong Kong 3**
* **Tokyo 4**

We continue to work with Zscaler to resolve the specific issue in Singapore and will provide a final update once the node is stabilized.

**Apr 7, 11:00:00 GMT+0 - Identified**

**Monitoring: Additional Impacted Node (Chennai 2) & Verified Workaround**

Our ongoing investigation has confirmed that the disruption is not limited to Singapore. The Chennai 2 node (India) is also experiencing significant instability. We have successfully verified a workaround with impacted customers in the region.

## Verified Workaround

If you are experiencing connectivity issues or tunnel drops on the affected nodes, please implement the following failovers:

* **For India-based traffic (Chennai 2):** Redirect tunnels to the **Hyderabad 1** node. This has been confirmed to restore stable connectivity in live testing.
* **For Singapore-based traffic:** Redirect to **Hong Kong 3** or **Tokyo 4**.

We remain in constant communication with Zscaler support as they work toward a permanent fix for the Singapore and Chennai data centers.
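For teams that want to sanity-check a candidate failover node before repointing tunnels or forwarding rules, a minimal reachability probe such as the sketch below can help. It is a hedged illustration using only the Python standard library; the hostnames are placeholders, not verified Zscaler data-center addresses, so substitute the gateway hostnames or VIPs from your own tenant configuration.

```python
import socket
import ssl
import time

# Illustrative placeholders only -- substitute the gateway hostnames/VIPs
# from your own Zscaler tenant configuration.
CANDIDATE_NODES = {
    "Hyderabad 1": "hyderabad1.example-gateway.net",
    "Hong Kong 3": "hongkong3.example-gateway.net",
    "Tokyo 4": "tokyo4.example-gateway.net",
}


def probe(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Open a TCP connection and complete a TLS handshake; return elapsed seconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(sock, server_hostname=host):
            return time.monotonic() - start


if __name__ == "__main__":
    for name, host in CANDIDATE_NODES.items():
        try:
            elapsed = probe(host)
            print(f"{name:<12} {host:<40} OK   {elapsed * 1000:.0f} ms")
        except OSError as exc:
            print(f"{name:<12} {host:<40} FAIL {exc}")
```

A node that fails the handshake, or shows sharply higher latency than its peers, is a reasonable signal to keep traffic on the failover until stability is confirmed.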
**Apr 7, 13:45:00 GMT+0 - Identified**

**Monitoring: Expanded Impact to US Nodes & Escalation**

The scope of this incident has broadened beyond the APAC region. We have now received reports of similar connectivity issues affecting Zscaler nodes in the US. We have escalated this matter to senior Zscaler engineering leadership for an urgent, comprehensive investigation into what appears to be a multi-region disruption.

## **Current Status & Workarounds**

While we await a global resolution from Zscaler, please continue to use the previously verified node failovers:

* **India:** Move from Chennai 2 to **Hyderabad 1**.
* **APAC:** Move from Singapore to **Hong Kong 3** or **Tokyo 4**.
* **US:** We are currently identifying stable target nodes for US traffic and will provide specific recommendations shortly.

Our internal teams remain on high alert and are working alongside Zscaler to stabilize service globally.

**Apr 7, 15:15:00 GMT+0 - Monitoring**

**Monitoring: Global Mitigation Applied by Zscaler**

Zscaler has implemented a global fix to address the connectivity issues affecting multiple nodes (including Singapore, Chennai 2, and US-based data centers). We have received initial confirmation from customers in both the **US and APAC regions** that service has been restored and they are now able to connect via their original primary nodes.

Zscaler has also published an official update regarding this resolution on their Trust Portal: [Zscaler Status Post 28871](https://trust.zscaler.com/zscaler.net/posts/28871). We are awaiting the RCA from Zscaler's engineering team.

**Apr 7, 16:15:00 GMT+0 - Resolved**

**Resolved: Global Service Restoration**

We have confirmed that service has been fully restored across all Zscaler clouds and regions, including the previously impacted nodes in **Singapore**, **Chennai 2**, and the **US**. Zscaler has implemented a global fix, and our internal telemetry shows that all integration traffic is now flowing normally.

**Next Steps**

Our engineering leadership is currently awaiting the formal **Root Cause Analysis (RCA)** from Zscaler's engineering team. Once we have reviewed their technical breakdown of the multi-region disruption, we will provide a comprehensive summary to all affected customers.

**Apr 14, 16:15:00 GMT+0 - Postmortem**

# RCA From Zscaler: Incident Summary

* On April 7th, 2026, Zscaler received customer reports of Identity Provider (IdP)-initiated authentication failures affecting the [zscaler.net](http://zscaler.net) and [zscloud.net](http://zscloud.net) cloud environments.
* Customers using IdP-initiated authentication (notably Cloudi-Fi) experienced login failures (Error 581000 / 403 Forbidden): the authentication redirect flow failed at the final redirect to [gateway.zscloud.net](http://gateway.zscloud.net), preventing users from accessing resources.
* Zscaler's investigation determined the root cause was a recently enabled feature bit. Zscaler disabled the feature bit on the control plane for impacted cloud environments, and customers reported the issue was mitigated.
* On April 8th, 2026, additional reports were received. Zscaler confirmed the feature bit remained disabled on the control plane. However, further analysis found the disablement had not propagated to some Service Edges that form the data plane, resulting in continued impact for a subset of customers.
* Zscaler performed a full cloud audit to identify affected Service Edges and executed a Central Authority (CA) cache flush on the impacted Service Edges to restore normal configuration synchronization. After the flush, the change propagated successfully, and service returned to normal.
* As part of Zscaler's Post-Incident Review (PIR) process, Zscaler is evaluating monitoring, alerting thresholds, and resiliency controls to identify improvement opportunities and further reduce the likelihood and impact of similar events. This document contains preliminary findings; a final Root Cause Analysis (RCA) will be issued following validation of long-term corrective actions.
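As context for how the failure presented from a client's perspective, the sketch below follows an IdP-initiated login redirect chain and reports where it breaks. It is a hedged illustration: the starting URL is a placeholder for your own captive-portal or IdP login link, and matching on the "581000" code in the response body is an assumption based on the error quoted above, not a documented Zscaler response format.

```python
import requests

# Placeholder: replace with the IdP-initiated login URL used by your captive
# portal (the flow that ultimately redirects to gateway.zscloud.net).
START_URL = "https://idp.example.com/sso/start?sp=zscaler"


def trace_redirect_chain(url: str, timeout: float = 10.0) -> None:
    """Follow the redirect chain and print each hop, flagging a failing final hop."""
    resp = requests.get(url, allow_redirects=True, timeout=timeout)
    for hop in resp.history:
        print(f"{hop.status_code}  {hop.url}")
    print(f"{resp.status_code}  {resp.url}  (final)")

    if resp.status_code == 403:
        # During the incident the final redirect returned 403 Forbidden;
        # checking for "581000" in the body is an assumption based on the
        # error code quoted in the RCA summary.
        if "581000" in resp.text:
            print("Final hop returned 403 with error 581000 -- matches the reported failure.")
        else:
            print("Final hop returned 403 Forbidden.")


if __name__ == "__main__":
    trace_redirect_chain(START_URL)
```

During the incident window, a trace like this would show the earlier IdP hops succeeding and only the last redirect failing, consistent with the redirect-validation feature bit described in the sections below.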
# Incident Trigger

* A newly implemented security feature bit was introduced to enhance protections against open redirects during authentication handoffs. The feature validates redirect tokens and targets to safeguard Zscaler's service against abuse of open redirect paths in authentication redirect flows.

## Impact (Medium)

* Customers using IdP-initiated authentication (notably Cloudi-Fi) experienced login failures (Error 581000 / 403 Forbidden): the authentication redirect flow failed at the final redirect to [gateway.zscloud.net](http://gateway.zscloud.net), preventing users from accessing resources.

## Incident Window

| **Time (UTC)** | **Event** |
| --- | --- |
| April 7th, 2026 06:59 | Incident start (INC-000001256) |
| April 7th, 2026 15:09 | Incident mitigated (feature bit disabled) |
| April 8th, 2026 05:58 | Further impact reported (INC-000001257) |
| April 8th, 2026 12:27 | Incident resolved (CA flush completed) |

# Resolution Details

* Zscaler implemented corrective actions to resolve the incident: it disabled the feature bit on the control plane for the impacted cloud environments and completed a CA cache flush on the impacted Service Edges to restore normal configuration synchronization.

# Strategic Improvements

* As part of Zscaler's Post-Incident Review (PIR) process, Zscaler is evaluating monitoring, alerting thresholds, and resiliency controls to identify improvement opportunities and further reduce the likelihood and impact of similar events. This document contains preliminary findings; a final Root Cause Analysis (RCA) will be issued following validation of long-term corrective actions.
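The failure pattern described in the RCA, a flag disabled on the control plane while some data-plane edges kept serving from a stale cached copy, is a general configuration-propagation problem. The toy sketch below illustrates that mechanism with entirely hypothetical names; it is a conceptual model, not a representation of Zscaler's Central Authority or Service Edge internals.

```python
from dataclasses import dataclass, field

# Hypothetical model: a control plane holds the authoritative feature flags,
# and edges enforce whatever is in their locally cached copy.


@dataclass
class ControlPlane:
    flags: dict[str, bool] = field(default_factory=dict)


@dataclass
class ServiceEdge:
    name: str
    cached_flags: dict[str, bool] = field(default_factory=dict)

    def sync(self, cp: ControlPlane) -> None:
        """Refresh the local cache from the control plane (the 'cache flush' step)."""
        self.cached_flags = dict(cp.flags)

    def enforces(self, flag: str) -> bool:
        """Edges act on their cached copy, not the live control-plane value."""
        return self.cached_flags.get(flag, False)


cp = ControlPlane(flags={"strict_redirect_validation": True})
edges = [ServiceEdge("edge-a"), ServiceEdge("edge-b")]
for edge in edges:
    edge.sync(cp)

# Mitigation: the flag is disabled centrally...
cp.flags["strict_redirect_validation"] = False

# ...but only the edge that re-synced picks up the change.
edges[0].sync(cp)
print(edges[0].enforces("strict_redirect_validation"))  # False -> fixed
print(edges[1].enforces("strict_redirect_validation"))  # True  -> still impacted

# Remediation: flush/re-sync the stale cache on the remaining edge.
edges[1].sync(cp)
print(edges[1].enforces("strict_redirect_validation"))  # False -> resolved
```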