TeraSwitch Outage History

TeraSwitch is up right now

There have been 9 TeraSwitch outages since February 3, 2026, totaling 1097h 52m of downtime. Each is summarized below, with incident details, duration, and resolution information.
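
As a quick sanity check, the headline total can be recomputed from the detection and resolution timestamps in the entries below. A minimal sketch in Python (timestamps transcribed from this page at minute precision, which is why the result lands a couple of minutes off the 1097h 52m figure):

```python
from datetime import datetime, timedelta

# (detected, resolved) timestamps, UTC, transcribed from the entries below.
incidents = [
    ("2026-04-30 23:33", "2026-05-01 17:30"),  # TYO1 switch failure
    ("2026-03-26 22:08", "2026-04-02 22:38"),  # AMS1 loss of connectivity
    ("2026-03-19 21:43", "2026-03-25 22:00"),  # EWR2 connectivity issues
    ("2026-03-17 20:05", "2026-03-25 22:03"),  # PIT1 sporadic connectivity
    ("2026-02-20 07:47", "2026-02-20 11:32"),  # VAN1 connectivity issues
    ("2026-02-20 07:01", "2026-02-24 15:44"),  # APAC backbone paths
    ("2026-02-19 02:08", "2026-03-09 02:08"),  # Subsea cable fault
    ("2026-02-04 10:45", "2026-02-05 16:46"),  # US backbone links
    ("2026-02-03 19:54", "2026-02-03 22:37"),  # Console inaccessible
]

fmt = "%Y-%m-%d %H:%M"
total = sum(
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
     for start, end in incidents),
    timedelta(),
)
hours, seconds = divmod(int(total.total_seconds()), 3600)
print(f"{hours}h {seconds // 60}m")  # 1097h 54m at minute-level precision
```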

Source: https://www.teraswitchstatus.com

Major April 30, 2026

TYO1 - Network Connectivity Issue - Single Rack

Detected by Pingoru
Apr 30, 2026, 11:33 PM UTC
Resolved
May 01, 2026, 05:30 PM UTC
Duration
17h 57m
Affected: TYO1 - Tokyo, Japan
Timeline · 4 updates
  1. identified Apr 30, 2026, 11:33 PM UTC

    Teraswitch has identified a top-of-rack switch failure at our TYO1 site affecting a small subset of customers there. Services single-homed to this switch are currently offline. We are working on an immediate replacement and will provide updates as more information becomes available.

  2. identified May 01, 2026, 04:01 AM UTC

    The failed switch has been replaced and is being finalized for production. We will provide an update once affected services are fully restored.

  3. monitoring May 01, 2026, 05:44 AM UTC

    Our replacement switch is in service and all affected services should be restored at this time. If you have any further issues, please contact Teraswitch Support at [email protected].

  4. resolved May 01, 2026, 05:30 PM UTC

    This incident has been resolved.

Read the full incident report →

Critical March 26, 2026

AMS1 - Loss of Connectivity

Detected by Pingoru
Mar 26, 2026, 10:08 PM UTC
Resolved
Apr 02, 2026, 10:38 PM UTC
Duration
7d
Affected: AMS1 - Amsterdam, Netherlands
Timeline · 5 updates
  1. identified Mar 26, 2026, 10:08 PM UTC

    Teraswitch is working to resolve simultaneous backbone fiber cuts affecting AMS1 connectivity.

  2. identified Mar 26, 2026, 10:53 PM UTC

    We are working to resolve cuts across multiple diverse fiber paths that are impacting AMS1 connectivity. The fiber vendor has identified the issue, and we are working on an alternative fiber solution to restore connectivity.

  3. monitoring Mar 27, 2026, 12:14 AM UTC

    We have implemented a temporary fix and AMS1 connectivity is now restored. We will follow up with a permanent fix and RCA.

  4. resolved Apr 02, 2026, 10:38 PM UTC

    An RCA has been posted for this issue.

  5. postmortem Apr 02, 2026, 10:38 PM UTC

    # Root Cause Analysis: AMS1 Connectivity Outage

    **Incident Date:** March 26, 2026
    **Duration:** ~2h 6m (22:08 UTC – 00:14 UTC +1)
    **Severity:** Critical – Full site connectivity loss
    **Affected Site:** AMS1 – Amsterdam, Netherlands
    **Status:** Resolved

    ## Executive Summary

    On March 26, 2026, Teraswitch's AMS1 facility experienced a complete loss of backbone connectivity lasting approximately 2 hours and 6 minutes. The outage was caused by a scheduled fiber vendor maintenance window that simultaneously impacted both the primary and what was believed to be a diverse redundant fiber path between AMS1 and the rest of the backbone.

    Investigation revealed that, due to a documentation and handoff error at the fiber vendor dating back over a year, the AMS1–AMS2 fiber span had never been migrated to the intended diverse path as part of the AMS3 ring buildout. As a result, both affected spans shared physical infrastructure, eliminating the redundancy intended to protect against exactly this type of event.

    Connectivity was restored within the maintenance window after a rapid joint audit with the fiber vendor confirmed the provisioning discrepancy, and the AMS1–AMS2 span was moved to its correct diverse path.

    ## Background

    Teraswitch's Amsterdam backbone originally consisted of a single dark fiber span between AMS1 and AMS2. When AMS3 was later brought online, the network design called for a three-node fiber ring with fully diverse physical paths between all sites to provide redundant backbone connectivity. To support this design, the fiber vendor was engaged to:

    1. Provision a new AMS1–AMS3 span
    2. Provision a new AMS2–AMS3 span
    3. Reroute the existing AMS1–AMS2 span onto a physically diverse path

    Due to an internal handoff and documentation error within the fiber vendor, step 3 was not completed. The AMS1–AMS2 span remained on its original physical route. Teraswitch was not made aware of this omission, and the span continued to operate normally for over a year. Because it carried live traffic and appeared correctly in topology, it was not identified as incorrectly provisioned during subsequent audits.

    ## Timeline of Events

    | Time (UTC) | Event |
    | --- | --- |
    | Prior to March 26 | Fiber vendor schedules routine maintenance affecting Amsterdam infrastructure |
    | ~22:08 | AMS1 backbone connectivity lost. Both AMS1–AMS2 and AMS1–AMS3 paths go down simultaneously. Teraswitch NOC begins triage. |
    | ~22:53 | Fiber vendor engaged and confirms maintenance is impacting both spans. Root cause identified as shared physical infrastructure due to the original provisioning error. |
    | ~00:14 +1 | Fiber vendor migrates AMS1–AMS2 span to the correct diverse physical path. Connectivity restored. Monitoring confirmed stable. |

    ## Root Cause

    **Primary cause:** An internal documentation and handoff failure at the fiber vendor resulted in the AMS1–AMS2 dark fiber span never being migrated to its intended physically diverse route during the AMS3 ring buildout. Both the AMS1–AMS2 and AMS1–AMS3 spans shared common physical infrastructure, making the designed ring topology's redundancy ineffective.

    **Contributing factor:** Because the span was operationally active and traffic was flowing normally, the provisioning error went undetected across both Teraswitch and vendor records for over a year. When the scheduled maintenance affected the shared physical infrastructure, both paths were impacted simultaneously, leaving AMS1 with no available backbone connectivity.

    ## Impact

    * **AMS1 customers** experienced a complete loss of inbound and outbound connectivity for approximately 2 hours and 6 minutes.
    * No data loss or hardware damage occurred.
    * All other Teraswitch sites were unaffected.

    ## Resolution

    Working jointly with the fiber vendor during the incident, Teraswitch engineers and vendor technicians audited the physical path assignments for all Amsterdam spans. The discrepancy between the intended and actual routing of the AMS1–AMS2 span was identified. The vendor migrated the span to the correct physically diverse path, restoring independent redundant connectivity across the AMS1–AMS2–AMS3 ring as originally designed.

    ## Corrective Actions

    | Action | Owner | Status |
    | --- | --- | --- |
    | Confirm and document physical path diversity for all three Amsterdam spans with fiber vendor | Teraswitch / Fiber Vendor | Complete |
    | Obtain updated as-built fiber records from vendor reflecting correct path assignments | Fiber Vendor | Complete |
    | Audit all other Teraswitch sites for similar provisioning discrepancies against vendor records | Teraswitch | Complete |
    | Establish a fiber path verification checklist for all future vendor provisioning work prior to accepting new spans | Teraswitch | Planned |
    | Add physical diversity validation to change management process for any future ring or redundancy buildouts | Teraswitch | Planned |
    | Incorporate optical span latency validation against fiber path build sheets as an acceptance criterion for new span provisioning | Teraswitch | Planned |

    ## Lessons Learned

    * **Operational traffic is not proof of correct provisioning.** A span can carry live traffic for an extended period while still being routed incorrectly relative to its intended physical diversity design.
    * **Redundancy assumptions must be periodically verified against vendor as-built records**, not solely inferred from operational status.
    * **Fiber vendor handoffs require explicit acceptance criteria**, including documented physical path confirmation, before provisioning work is considered complete.
    * **Span latency is a low-cost signal for path verification.** In post-incident review, Teraswitch noted that the measured propagation latency on the AMS1–AMS2 span was slightly lower than expected based on the fiber path build sheet for the intended diverse route. This discrepancy, while subtle, was consistent with the span still traversing the shorter original path. Validating measured latency against estimated values from build sheets at provisioning acceptance could have surfaced this error significantly earlier. This check will be incorporated into the span acceptance process going forward.

    _RCA prepared by Teraswitch Network Engineering. For questions contact the NOC or network architecture team._
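
The final lesson in the RCA suggests a concrete check: compare a span's measured round-trip time against the RTT implied by the fiber length on the vendor's build sheet. Below is a minimal sketch of that idea, not Teraswitch's actual tooling; the ~4.9 us/km constant is the standard one-way propagation delay in single-mode fiber (group index around 1.468), and the span length and measured RTT are hypothetical values chosen so a too-short physical path stands out.

```python
US_PER_KM = 4.9   # one-way propagation delay in single-mode fiber, microseconds per km
TOLERANCE = 0.10  # flag spans deviating more than 10% from the build-sheet expectation

def check_span(name: str, build_sheet_km: float, measured_rtt_us: float) -> None:
    """Compare a span's measured round-trip time against the RTT expected
    from the fiber path length on the vendor's build sheet."""
    expected_rtt_us = 2 * build_sheet_km * US_PER_KM
    deviation = (measured_rtt_us - expected_rtt_us) / expected_rtt_us
    status = "OK" if abs(deviation) <= TOLERANCE else "MISMATCH"
    print(f"{name}: expected ~{expected_rtt_us:.0f}us, "
          f"measured {measured_rtt_us:.0f}us ({deviation:+.1%}) -> {status}")

# Hypothetical example: the diverse route on the build sheet is 95 km, but
# the measured RTT is consistent with a much shorter path -- the signature
# of a span still riding its original, non-diverse route.
check_span("AMS1-AMS2", build_sheet_km=95.0, measured_rtt_us=640.0)
```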

Read the full incident report →

Major March 19, 2026

EWR2 - Network Connectivity Issues

Detected by Pingoru
Mar 19, 2026, 09:43 PM UTC
Resolved
Mar 25, 2026, 10:00 PM UTC
Duration
6d
Affected: EWR2 - Newark, NJ
Timeline · 2 updates
  1. monitoring Mar 19, 2026, 09:43 PM UTC

    Teraswitch is investigating reports of network connectivity issues affecting some services at our EWR2 site. We have identified the likely cause and implemented a fix, and are monitoring to verify that the issue is resolved.

  2. resolved Mar 25, 2026, 10:00 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor March 17, 2026

PIT1 - Sporadic Internet Connectivity Issues

Detected by Pingoru
Mar 17, 2026, 08:05 PM UTC
Resolved
Mar 25, 2026, 10:03 PM UTC
Duration
8d 1h
Affected: PIT1 - Pittsburgh, PA
Timeline · 4 updates
  1. investigating Mar 17, 2026, 08:05 PM UTC

    Teraswitch is investigating reports of select markets having issues connecting to PIT1.

  2. identified Mar 17, 2026, 08:36 PM UTC

    We have located a network device with software issues, and our team is about to reload it. This issue mostly affects colocation customers at PIT1, but traffic from various sources may have passed through this device depending on the direction of network flow.

  3. monitoring Mar 17, 2026, 08:43 PM UTC

    The reload is complete and network operation appears to have returned to normal. We will follow up this notification with an RCA and repair plan.

  4. resolved Mar 25, 2026, 10:03 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice February 20, 2026

VAN1 - Connectivity Issues

Detected by Pingoru
Feb 20, 2026, 07:47 AM UTC
Resolved
Feb 20, 2026, 11:32 AM UTC
Duration
3h 45m
Affected: VAN1 - Vancouver, Canada
Timeline · 2 updates
  1. investigating Feb 20, 2026, 07:47 AM UTC

    Teraswitch is investigating a loss of the majority of VAN1 internet connectivity, possibly related to issues in Seattle.

  2. resolved Feb 20, 2026, 11:32 AM UTC

    Cogent has recovered as of 4:33am Eastern. Services should be normalized at this time.

Read the full incident report →

Minor February 20, 2026

APAC - Loss of Tokyo to Seattle Backbone Paths

Detected by Pingoru
Feb 20, 2026, 07:01 AM UTC
Resolved
Feb 24, 2026, 03:44 PM UTC
Duration
4d 8h
Affected: Intra-Market Connectivity, Global External Internet
Timeline · 2 updates
  1. identified Feb 20, 2026, 07:01 AM UTC

    TeraSwitch is aware of and tracking a loss of two paths from Seattle to Tokyo. This has caused much internet traffic to divert via public Internet Transit.

  2. resolved Feb 24, 2026, 03:44 PM UTC

    This incident has been resolved.

Read the full incident report →

Notice February 19, 2026

Subsea Cable Fault - Singapore to Europe

Detected by Pingoru
Feb 19, 2026, 02:08 AM UTC
Resolved
Mar 09, 2026, 02:08 AM UTC
Duration
18d
Affected: SGP1 - Singapore, SGP2 - Singapore
Timeline · 2 updates
  1. identified Feb 19, 2026, 02:08 AM UTC

    We have reached out to the vendor asking for an update. Latency from Singapore to Europe will be elevated until the outage is resolved.

  2. resolved Mar 09, 2026, 02:08 AM UTC

    We have successfully brought online a second diverse subsea cable path, and latency has returned to normal. Once the original severed cable comes back online, we will again have full path redundancy.

Read the full incident report →

Minor February 4, 2026

Multiple US Backbone Connection Losses

Detected by Pingoru
Feb 04, 2026, 10:45 AM UTC
Resolved
Feb 05, 2026, 04:46 PM UTC
Duration
1d 6h
Affected: Intra-Market Connectivity
Timeline · 3 updates
  1. identified Feb 04, 2026, 02:26 PM UTC

    Teraswitch has identified three core backbone links that are down within the central US. This is causing US-East traffic to wrap around the world to reach US-West. The affected links are Chicago/Seattle, Chicago/Salt Lake City, and Dallas/Los Angeles. We are working with our fiber vendors on an ETR.

  2. monitoring Feb 04, 2026, 04:04 PM UTC

    The Chicago to Salt Lake City path has been restored, which has normalized operations with 1 of 3 links back in service. We will close this status once 2 of 3 links are restored, which will also restore redundancy.

  3. resolved Feb 05, 2026, 04:46 PM UTC

    This incident has been resolved.

Read the full incident report →

Critical February 3, 2026

Teraswitch Console (console.tsw.io) - Inaccessible

Detected by Pingoru
Feb 03, 2026, 07:54 PM UTC
Resolved
Feb 03, 2026, 10:37 PM UTC
Duration
2h 43m
Affected: Portal (console.tsw.io)
Timeline · 3 updates
  1. identified Feb 03, 2026, 07:54 PM UTC

    Teraswitch is investigating reports that our console (console.tsw.io) is currently inaccessible. This issue has been identified as the result of an outage in an underlying cloud provider. We will provide an update as more information becomes available.

  2. identified Feb 03, 2026, 09:16 PM UTC

    This issue is due to an ongoing Cloudflare outage: https://www.cloudflarestatus.com/incidents/m1xvmqf37z97. Cloudflare has implemented a fix and we are monitoring for recovery. Our API is unaffected by this issue and remains accessible.

  3. resolved Feb 03, 2026, 10:37 PM UTC

    Cloudflare has resolved the issue, and console.tsw.io is accessible once again as of 22:28 UTC. This incident has been resolved.

Read the full incident report →

Looking to track TeraSwitch downtime and outages?

Pingoru polls TeraSwitch's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when TeraSwitch reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track TeraSwitch alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
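
Under the hood, this kind of monitoring reduces to a simple poll-and-alert loop. A minimal sketch, assuming a Statuspage-style JSON endpoint and a generic incoming-webhook receiver (both URLs are placeholders; this illustrates the approach, not Pingoru's implementation):

```python
import json
import time
import urllib.request

STATUS_URL = "https://www.teraswitchstatus.com/api/v2/status.json"  # hypothetical endpoint
WEBHOOK_URL = "https://example.com/hooks/alerts"                    # your alert receiver

def fetch_status() -> dict:
    # Pull the current status document from the provider's status page.
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        return json.load(resp)

def notify(message: str) -> None:
    # POST a simple JSON payload to an incoming webhook (Slack-style "text" field).
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

last_indicator = None
while True:
    try:
        # Statuspage-style documents expose an overall severity indicator,
        # e.g. "none", "minor", "major", "critical".
        indicator = fetch_status()["status"]["indicator"]
        if indicator != last_indicator and indicator != "none":
            notify(f"TeraSwitch status changed to: {indicator}")
        last_indicator = indicator
    except OSError:
        pass  # transient fetch error; try again next cycle
    time.sleep(300)  # poll every 5 minutes, as described above
```
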
Start monitoring TeraSwitch for free

5 free monitors · No credit card required