- Detected by Pingoru
- Apr 30, 2026, 11:33 PM UTC
- Resolved
- May 01, 2026, 05:30 PM UTC
- Duration
- 17h 57m
Affected: TYO1 - Tokyo, Japan
Timeline · 4 updates
-
identified Apr 30, 2026, 11:33 PM UTC
Teraswitch has identified a top-of-rack switch failure at our TYO1 site affecting a small subset of customers there. Services single-homed to this switch are currently offline. We are working on an immediate replacement and will provide updates as more information becomes available.
-
identified May 01, 2026, 04:01 AM UTC
The failed switch has been replaced and is being finalized for production. We will provide an update once affected services are fully restored.
-
monitoring May 01, 2026, 05:44 AM UTC
Our replacement switch is in service and all affected services should be restored at this time. If you have any further issues, please contact Teraswitch Support at [email protected].
-
resolved May 01, 2026, 05:30 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 26, 2026, 10:08 PM UTC
- Resolved
- Apr 02, 2026, 10:38 PM UTC
- Duration
- 7d
Affected: AMS1 - Amsterdam, Netherlands
Timeline · 5 updates
Read the full incident report →
- Detected by Pingoru
- Mar 19, 2026, 09:43 PM UTC
- Resolved
- Mar 25, 2026, 10:00 PM UTC
- Duration
- 6d
Affected: EWR2 - Newark, NJ
Timeline · 2 updates
-
monitoring Mar 19, 2026, 09:43 PM UTC
Teraswitch is investigating reports of network connectivity issues affecting some services at our EWR2 site. We have identified the likely cause and implemented a fix, and are monitoring to verify that the issue is resolved.
-
resolved Mar 25, 2026, 10:00 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 08:05 PM UTC
- Resolved
- Mar 25, 2026, 10:03 PM UTC
- Duration
- 8d 1h
Affected: PIT1 - Pittsburgh, PA
Timeline · 4 updates
-
investigating Mar 17, 2026, 08:05 PM UTC
Teraswitch is investigating reports that select markets are having issues connecting to PIT1.
-
identified Mar 17, 2026, 08:36 PM UTC
We have located a network device with software issues, and our team is about to reload it. This issue primarily affects colocation customers at PIT1, but traffic from various sources may have passed through this device depending on the direction of network flow.
-
monitoring Mar 17, 2026, 08:43 PM UTC
The reload is complete and network operations appear to have returned to normal. We will follow up this notification with an RCA and repair plan.
-
resolved Mar 25, 2026, 10:03 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Feb 20, 2026, 07:47 AM UTC
- Resolved
- Feb 20, 2026, 11:32 AM UTC
- Duration
- 3h 45m
Affected: VAN1 - Vancouver, Canada
Timeline · 2 updates
-
investigating Feb 20, 2026, 07:47 AM UTC
Teraswitch is investigating a loss of the majority of VAN1 internet connectivity - possibly related to issues in Seattle.
-
resolved Feb 20, 2026, 11:32 AM UTC
Cogent has recovered as of 4:33am Eastern. Services should be normalized at this time.
Read the full incident report →
- Detected by Pingoru
- Feb 20, 2026, 07:01 AM UTC
- Resolved
- Feb 24, 2026, 03:44 PM UTC
- Duration
- 4d 8h
Affected: Intra-Market Connectivity; Global External Internet
Timeline · 2 updates
-
identified Feb 20, 2026, 07:01 AM UTC
TeraSwitch is aware of and tracking the loss of two paths from Seattle to Tokyo. This has caused much of the internet traffic to divert via public Internet transit.
-
resolved Feb 24, 2026, 03:44 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Feb 19, 2026, 02:08 AM UTC
- Resolved
- Mar 09, 2026, 02:08 AM UTC
- Duration
- 18d
Affected: SGP1 - Singapore; SGP2 - Singapore
Timeline · 2 updates
-
identified Feb 19, 2026, 02:08 AM UTC
We have reached out to the vendor asking for an update. Latency from Singapore to Europe will be elevated until the outage is resolved.
-
resolved Mar 09, 2026, 02:08 AM UTC
We have successfully brought online a second, diverse sub-sea cable path, and latency has returned to normal. Once the original severed cable comes back online, we will have full path redundancy.
Read the full incident report →
- Detected by Pingoru
- Feb 04, 2026, 10:45 AM UTC
- Resolved
- Feb 05, 2026, 04:46 PM UTC
- Duration
- 1d 6h
Affected: Intra-Market Connectivity
Timeline · 3 updates
-
identified Feb 04, 2026, 02:26 PM UTC
Teraswitch has identified three core backbone links that are down within the central US. This is causing US-East traffic to wrap around the world to reach US-West. The affected links are Chicago to Seattle, Chicago to Salt Lake City, and Dallas to Los Angeles. We are working with our fiber vendors for an ETR.
-
monitoring Feb 04, 2026, 04:04 PM UTC
The Chicago to Salt Lake City path is restored, which has normalized operations with 1 of 3 links back up. We will close this incident when 2 of 3 are restored, which also restores redundancy.
-
resolved Feb 05, 2026, 04:46 PM UTC
This incident has been resolved.
Read the full incident report →
Critical February 3, 2026
- Detected by Pingoru
- Feb 03, 2026, 07:54 PM UTC
- Resolved
- Feb 03, 2026, 10:37 PM UTC
- Duration
- 2h 43m
Affected: Portal (console.tsw.io)
Timeline · 3 updates
-
identified Feb 03, 2026, 07:54 PM UTC
Teraswitch is investigating reports that our console (console.tsw.io) is currently inaccessible. This issue has been identified as the result of an outage in an underlying cloud provider. We will provide an update as more information becomes available.
-
identified Feb 03, 2026, 09:16 PM UTC
This issue is due to an ongoing Cloudflare outage: https://www.cloudflarestatus.com/incidents/m1xvmqf37z97. Cloudflare has implemented a fix and we are monitoring for recovery. Our API is unaffected by this issue and remains accessible.
-
resolved Feb 03, 2026, 10:37 PM UTC
Cloudflare has resolved the issue, and console.tsw.io is accessible once again as of 22:28 UTC. This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Jan 30, 2026, 09:53 AM UTC
- Resolved
- Jan 30, 2026, 10:30 AM UTC
- Duration
- 37m
Affected: LAX1 - Los Angeles, CA; SEA1 - Seattle, WA
Timeline · 3 updates
-
investigating Jan 30, 2026, 09:53 AM UTC
We are investigating significant packet loss on the US West Coast.
-
investigating Jan 30, 2026, 10:06 AM UTC
We are continuing to investigate this issue.
-
resolved Jan 30, 2026, 10:30 AM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Jan 28, 2026, 02:17 PM UTC
- Resolved
- Jan 28, 2026, 02:17 PM UTC
- Duration
- —
Affected: Intra-Market Connectivity; Ultra-Low Latency Connectivity
Timeline · 1 update
-
resolved Jan 28, 2026, 02:17 PM UTC
At 2PM UTC, Teraswitch reloaded software related to our route servers to fix "stuck" transport tunneling sessions. This may have caused interruptions to ULL/HFT and transport services over our network. We believe the first stuck session started approximately 6 hours before, with more dropping off over the next few hours. As of this moment, all stuck sessions appear to be resolved, and traffic is passing normally and via ULL links.
Read the full incident report →
- Detected by Pingoru
- Dec 28, 2025, 02:19 AM UTC
- Resolved
- Jan 26, 2026, 01:56 PM UTC
- Duration
- 29d 11h
Affected: Intra-Market Connectivity; SGP1 - Singapore; SGP2 - Singapore
Timeline · 2 updates
-
identified Dec 28, 2025, 02:19 AM UTC
Due to multiple undersea cable cuts, SGP1 and SGP2 are no longer operating on the Teraswitch backbone. SGP1/2 networking was unreachable for a period while network routes shifted and Internet routes adjusted.
-
resolved Jan 26, 2026, 01:56 PM UTC
This incident has been resolved.
Read the full incident report →
- Detected by Pingoru
- Dec 05, 2025, 09:53 PM UTC
- Resolved
- Dec 05, 2025, 11:46 PM UTC
- Duration
- 1h 52m
Affected: Portal (console.tsw.io)
Timeline · 2 updates
-
investigating Dec 05, 2025, 09:53 PM UTC
Teraswitch is investigating increased errors in our service portal (console.tsw.io). Teraswitch services themselves are unaffected. We will provide an update as more information becomes available.
-
resolved Dec 05, 2025, 11:46 PM UTC
The cause of the increased errors was identified and fixes were put in place. This issue is resolved.
Read the full incident report →
- Detected by Pingoru
- Nov 29, 2025, 05:21 AM UTC
- Resolved
- Nov 29, 2025, 01:02 PM UTC
- Duration
- 7h 40m
Affected: Intra-Market Connectivity; TYO1 - Tokyo, Japan; TYO2 - Tokyo, Japan
Timeline · 2 updates
-
monitoring Nov 29, 2025, 05:21 AM UTC
Teraswitch Network Operations has observed a subsea cable fault between our Seattle and Tokyo POPs. Latency to Tokyo may be increased while the system is down. We have reached out to our cable provider for an update.
-
resolved Nov 29, 2025, 01:02 PM UTC
The fault has cleared and the cable has returned to service.
Read the full incident report →
Critical November 18, 2025
- Detected by Pingoru
- Nov 18, 2025, 12:08 PM UTC
- Resolved
- Nov 18, 2025, 06:09 PM UTC
- Duration
- 6h
Affected: API; Portal (console.tsw.io)
Timeline · 3 updates
-
investigating Nov 18, 2025, 12:08 PM UTC
Teraswitch is investigating monitoring alerts/errors in our console (console.tsw.io) and API (api.tsw.io). This appears to be due to an outage at an underlying major cloud provider. We will provide an update as more information becomes available.
-
identified Nov 18, 2025, 12:15 PM UTC
This issue has been isolated to an ongoing Cloudflare network outage - see their incident page for more details: https://www.cloudflarestatus.com/incidents/8gmgl950y3h7
-
resolved Nov 18, 2025, 06:09 PM UTC
Cloudflare has updated their incident report to state that their services are now operating normally; they will post a final update once their investigation has concluded. Teraswitch monitoring confirms our console/API services have stabilized with no further errors. This incident is now resolved.
Read the full incident report →
- Detected by Pingoru
- Nov 14, 2025, 03:17 PM UTC
- Resolved
- Nov 17, 2025, 01:45 PM UTC
- Duration
- 2d 22h
Affected: Intra-Market Connectivity; Global External Internet; Internet Exchanges and Peering
Timeline · 4 updates
-
investigating Nov 14, 2025, 03:17 PM UTC
Teraswitch is investigating intermittent Internet connectivity instability.
-
monitoring Nov 14, 2025, 05:25 PM UTC
Teraswitch was hit by a DDoS attack that specifically targeted our infrastructure and blockchain services. While our network has the capacity and mitigation capability, it took longer than expected to distinguish good traffic from attack traffic without causing worse issues. Some attack traffic came from very abnormal and specific sources, such as AT&T (a US ISP) and a group of Brazilian ISPs. This traffic primarily landed in our Dallas data center, overloading select network links there on its way toward the attack targets. We have tightened our rule sets, shifted traffic around, and are working to ensure impacts remain mitigated.
-
identified Nov 15, 2025, 01:29 PM UTC
We are observing further Internet connectivity instability due to the attack and are continuing to apply mitigations. We will continue to monitor the situation until resolution and update on any further changes.
-
resolved Nov 17, 2025, 01:45 PM UTC
This incident is now considered resolved.
Read the full incident report →
- Detected by Pingoru
- Oct 29, 2025, 12:20 AM UTC
- Resolved
- Oct 29, 2025, 12:36 AM UTC
- Duration
- 16m
Affected: Intra-Market Connectivity
Timeline · 2 updates
-
investigating Oct 29, 2025, 12:20 AM UTC
Our trans-Atlantic link from London to New York has dropped offline. Traffic is diverted via Ashburn to Frankfurt.
-
resolved Oct 29, 2025, 12:36 AM UTC
The detour has already cleared; this was a momentary disruption. TeraSwitch will engage our transport vendor to better understand this incident.
Read the full incident report →
- Detected by Pingoru
- Oct 01, 2025, 09:52 PM UTC
- Resolved
- Oct 01, 2025, 11:30 PM UTC
- Duration
- 1h 38m
Affected: Portal (console.tsw.io)
Timeline · 2 updates
-
investigating Oct 01, 2025, 09:52 PM UTC
Teraswitch is investigating rising error rates and reports that the Console is not working. During this time, managing and ordering services may be unavailable. Customer services/servers and network operations are unaffected.
-
resolved Oct 01, 2025, 11:30 PM UTC
All Console operations have been restored. Users should not see any issues managing their accounts and services.
Read the full incident report →
Notice September 30, 2025
- Detected by Pingoru
- Sep 30, 2025, 07:01 PM UTC
- Resolved
- Oct 01, 2025, 02:13 PM UTC
- Duration
- 19h 11m
Affected: SLC1 - Salt Lake City, UT
Timeline · 2 updates
-
investigating Sep 30, 2025, 07:01 PM UTC
Teraswitch is currently investigating and monitoring rising temperatures at SLC1. We are awaiting comment from our facility vendor about what may be the cause. At this time, we are aware of no impacts to services or customer systems.
-
resolved Oct 01, 2025, 02:13 PM UTC
This temperature issue was quickly resolved yesterday and was identified as being caused by maintenance work on a nearby HVAC unit; the temperature increased due to lower airflow. All HVAC maintenance work is now complete. Our facility vendor is working to increase airflow so that specific units aren't required to supply the proper airflow in the future.
Read the full incident report →
- Detected by Pingoru
- Sep 26, 2025, 12:00 AM UTC
- Resolved
- Jan 26, 2026, 01:57 PM UTC
- Duration
- 122d 13h
Affected: Intra-Market Connectivity; SGP1 - Singapore; SGP2 - Singapore
Timeline · 3 updates
-
monitoring Sep 29, 2025, 07:18 PM UTC
Due to a sub-sea cable fault, our transport provider estimates a 10/17/2025 restoration of capacity between SGP1/2 and our backbone via Tokyo.
-
monitoring Dec 23, 2025, 04:01 PM UTC
We received an update from our vendor that this will be repaired prior to Jan 31st. We have also purchased another diverse path to be installed in Jan 2026 to prevent the increased latency in the future.
-
resolved Jan 26, 2026, 01:57 PM UTC
An alternative undersea cable has been established to restore this route. SGP to TYO connectivity is now via our backbone.
Read the full incident report →
Notice September 24, 2025
- Detected by Pingoru
- Sep 24, 2025, 07:58 PM UTC
- Resolved
- Sep 24, 2025, 08:10 PM UTC
- Duration
- 11m
Affected: IAD1 - Ashburn, VA; EWR1 - Newark, NJ; EWR2 - Newark, NJ
Timeline · 2 updates
-
investigating Sep 24, 2025, 07:58 PM UTC
Due to a probable fiber cut, EWR1/2 to IAD1 traffic has been detoured through alternative paths.
-
resolved Sep 24, 2025, 08:10 PM UTC
The path has been restored and operations are normal between NJ and Ashburn. We are awaiting the provider's response about the cause of the disturbance.
Read the full incident report →
- Detected by Pingoru
- Sep 04, 2025, 09:43 PM UTC
- Resolved
- Sep 05, 2025, 12:59 AM UTC
- Duration
- 3h 16m
Affected: Intra-Market Connectivity; AMS1 - Amsterdam, Netherlands; AMS2 - Amsterdam, Netherlands; AMS3 - Amsterdam, Netherlands; FRA2 - Frankfurt, Germany
Timeline · 2 updates
-
identified Sep 04, 2025, 09:43 PM UTC
Due to a fiber cut, our backbone has diverted Amsterdam to Frankfurt traffic via London. There is no impact to operations other than higher latency between cities.
-
resolved Sep 05, 2025, 12:59 AM UTC
The cut fiber has been repaired and routing is now normalized.
Read the full incident report →
- Detected by Pingoru
- Sep 01, 2025, 11:18 PM UTC
- Resolved
- Sep 02, 2025, 03:26 PM UTC
- Duration
- 16h 7m
Affected: Intra-Market Connectivity; Global External Internet; FRA2 - Frankfurt, Germany
Timeline · 2 updates
-
monitoring Sep 01, 2025, 11:18 PM UTC
Due to an issue with a network transport provider's routine maintenance, Teraswitch has elected to reduce risk by diverting backbone traffic away from unnecessary paths through Frankfurt. At this time, Frankfurt traffic will route locally, and the global backbone will use Frankfurt only when required.
-
resolved Sep 02, 2025, 03:26 PM UTC
Our network transport provider was able to roll back their changes with no impact. At a later date they will replace the malfunctioning hardware. Teraswitch has normalized operations in Frankfurt.
Read the full incident report →
- Detected by Pingoru
- Sep 01, 2025, 05:15 AM UTC
- Resolved
- Oct 17, 2025, 06:38 PM UTC
- Duration
- 46d 13h
Affected: Intra-Market Connectivity; Ultra-Low Latency Connectivity; TYO1 - Tokyo, Japan; TYO2 - Tokyo, Japan
Timeline · 3 updates
-
identified Sep 01, 2025, 05:15 AM UTC
Teraswitch was alerted to a loss of our path between Seattle and Tokyo. We have directed traffic to external paths and are working with our providers to resolve this. ULL traffic between Frankfurt and Tokyo may have seen ~5 minutes of degraded connectivity.
-
identified Sep 03, 2025, 05:50 PM UTC
Our sub-sea cable vendor has confirmed that repairs should be completed around 9/19/2025. At this time, Tokyo backbone traffic remains diverted.
-
resolved Oct 17, 2025, 06:38 PM UTC
Undersea cable operations have been restored, and traffic is stable and normalized along this path. Tokyo to Seattle is restored.
Read the full incident report →
- Detected by Pingoru
- Aug 26, 2025, 02:22 PM UTC
- Resolved
- Aug 26, 2025, 06:30 PM UTC
- Duration
- 4h 7m
Affected: SGP1 - Singapore
Timeline · 2 updates
-
identified Aug 26, 2025, 02:22 PM UTC
Teraswitch is rerouting traffic due to high latency seen with multiple internet providers in Singapore.
-
resolved Aug 26, 2025, 06:30 PM UTC
This incident has been resolved.
Read the full incident report →