ThinkOn Outage History

ThinkOn is up right now

There have been 9 ThinkOn outages since February 5, 2026, totaling 256h 21m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://status.thinkon.com

Notice April 26, 2026

[OTT2] vCloud Access is Unavailable

Detected by Pingoru
Apr 26, 2026, 01:15 AM UTC
Resolved
Apr 26, 2026, 03:25 AM UTC
Duration
2h 9m
Affected: OTT2 Virtual Compute
Timeline · 2 updates
  1. investigating Apr 26, 2026, 01:15 AM UTC

    We are currently investigating an issue affecting vCloud accessibility. At this time, we do not believe any workloads are impacted. Our team is working diligently to restore access as quickly as possible.

  2. resolved Apr 26, 2026, 03:25 AM UTC

    This incident has been resolved.

Notice April 20, 2026

[OTT] Service Disruption

Detected by Pingoru
Apr 20, 2026, 02:18 PM UTC
Resolved
Apr 20, 2026, 02:53 PM UTC
Duration
34m
Affected: OTT2 Virtual Compute
Timeline · 2 updates
  1. investigating Apr 20, 2026, 02:18 PM UTC

    We are currently experiencing connectivity issues at our Ottawa Data Center. As a result, some customers may be experiencing service disruptions at this time. Our technical teams are actively investigating and working to restore full service as quickly as possible. We will continue to provide updates as more information becomes available. We appreciate your patience and understanding.

  2. resolved Apr 20, 2026, 02:53 PM UTC

    This incident has been resolved.

Notice April 15, 2026

Service Disruption – GUBOV Data Centre

Detected by Pingoru
Apr 15, 2026, 10:42 PM UTC
Resolved
Apr 17, 2026, 09:37 PM UTC
Duration
1d 22h
Affected: GUBOV Internet Connectivity, GUBOV Object Storage, GUBOV Virtual Compute, GUBOV Veeam Cloud Connect, GUBOV Veeam for O365, GUBOV Virtual Compute Backups, GUBOV Commvault Endpoint Backup, GUBOV Zerto Virtual Replication, GUBOV Veeam M365
Timeline · 24 updates
  1. investigating Apr 15, 2026, 10:42 PM UTC

    Dear Customers, We are currently experiencing a widespread outage at the GUBOV data centre, which is also impacting our Service Desk operations. As a result, response times for both new and existing tickets may be delayed while our teams work to manage and resolve the situation. Next Update: We are actively investigating the issue and will provide further updates as more information becomes available. We appreciate your patience and understanding.

  2. investigating Apr 15, 2026, 11:24 PM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our technicians are currently on-site and actively investigating the underlying physical hardware. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  3. investigating Apr 15, 2026, 11:44 PM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our on-site technicians are actively assessing the physical hardware, while our networking engineers are working in tandem to troubleshoot any potential hardware or network-related issues. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  4. investigating Apr 16, 2026, 12:12 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. At this time, we believe the root cause has been identified and are currently working to confirm our findings. Our on-site technicians and networking engineers remain actively engaged in validating the issue and determining the appropriate remediation steps. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  5. identified Apr 16, 2026, 12:23 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. We have identified the underlying issue and are currently working to validate and implement corrective actions. Our teams are actively engaged in restoring services as quickly as possible. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  6. identified Apr 16, 2026, 12:56 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our teams have identified the issue and are currently performing corrective actions, including controlled reboots of affected equipment, to restore services. Our technicians and engineers remain actively engaged in resolving the issue as quickly as possible. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  7. identified Apr 16, 2026, 01:22 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. The corrective actions performed thus far have not yet resolved the issue, and our teams are continuing to investigate and implement additional remediation steps. Our technicians and engineers remain fully engaged and are working to restore services as quickly as possible. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  8. identified Apr 16, 2026, 01:56 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our teams are actively performing both physical (on-site) and logical inspections to further isolate the issue and determine the appropriate remediation steps. At this time, there are no significant changes to report; however, our technicians and engineers remain fully engaged and are continuing their investigation. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  9. identified Apr 16, 2026, 02:35 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our teams are actively performing both physical (on-site) and logical inspections, reviewing each device to further isolate the issue and determine the appropriate remediation steps. At this time, there are no significant changes to report; however, our technicians and engineers remain fully engaged and continue their detailed investigation. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  10. identified Apr 16, 2026, 03:09 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. Our teams are actively performing detailed manual checks and verifying configurations across both primary and secondary network paths to further isolate the issue. At this time, there are no significant changes to report; however, our technicians and engineers remain fully engaged and continue their investigation. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  11. identified Apr 16, 2026, 03:56 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. At this time, replacement hardware is being brought on-site. Our teams will be working to install, configure, and perform validation checks as part of the recovery effort. This process will take some time to complete, and our technicians and engineers remain fully engaged in restoring services as quickly and safely as possible. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  12. identified Apr 16, 2026, 04:48 AM UTC

    Dear Customers, We continue to investigate the widespread outage at the GUBOV data centre. At this time, we are proceeding with the next remediation step, which involves replacing the affected hardware. A technician is expected to be on-site with replacement equipment within the next 20 minutes. Once on-site, our team will begin racking, cabling, installing, and configuring the devices, followed by validation checks. This process will take some time to complete. We understand the significant impact this incident is having on your production environments and your customers, and we want to assure you that our teams are working with urgency to restore services as quickly as possible. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  13. identified Apr 16, 2026, 05:27 AM UTC

    Dear Customers, We continue to work on restoring services at the GUBOV data centre. As part of the recovery effort, our teams are proceeding with the installation and configuration of replacement network equipment. The following activities are currently underway:
    • Installation and racking of replacement switches
    • Firmware updates on the new devices
    • Application of site-specific configurations
    • Migration of existing cabling from the affected devices to the new equipment
    • Testing of connections and dependent systems
    • Ongoing monitoring and validation to ensure services are fully restored
    This process will take some time to complete, and our technicians and engineers remain fully engaged to ensure a safe and stable recovery. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  14. identified Apr 16, 2026, 06:35 AM UTC

    Dear Customers, We are currently in the final stages of configuring the replacement equipment at the GUBOV data centre. If no issues are encountered, we expect to restore site connectivity shortly and begin verification testing. Our teams remain actively engaged to ensure a stable restoration of services. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  15. identified Apr 16, 2026, 10:14 AM UTC

    Dear Customers, We are currently validating the recent changes and completing the reconnection of all systems at the GUBOV data centre to confirm full restoration of connectivity. Our teams are actively monitoring the environment to ensure services are stable as they come back online. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  16. identified Apr 16, 2026, 10:41 AM UTC

    Dear Customers, As we continue to stabilize the environment at the GUBOV data centre, you may experience brief, intermittent periods of connectivity while we complete the restoration of site services. Our teams are actively monitoring and working to ensure a full and stable recovery. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  17. identified Apr 16, 2026, 10:59 AM UTC

    Dear Customers, We are currently completing core system validation as site connectivity is being restored at the GUBOV data centre. Our teams are working to confirm that all services are functioning as expected. Service Desk operations may still experience delays, and response times for new and existing tickets may be impacted during this time. Next Update: We will provide further updates as more information becomes available. We appreciate your continued patience and understanding.

  18. monitoring Apr 16, 2026, 11:30 AM UTC

    Dear Customers, Site connectivity has been restored, and we are continuing to validate all services at the GUBOV data centre. At this time, we ask that you please attempt to connect to your environments and confirm that your services are functioning as expected. If you encounter any issues, please contact ThinkOn Support so our teams can assist promptly. Service Desk operations may still experience minor delays as we complete final validations. Next Update: We will provide further updates as needed. We appreciate your continued patience and support.

  19. monitoring Apr 16, 2026, 11:45 AM UTC

    We are continuing to monitor for any further issues.

  20. monitoring Apr 16, 2026, 11:52 AM UTC

    Dear Customers, Services have been restored, and we are currently monitoring the environment to ensure ongoing network stability at the GUBOV data centre. We kindly ask that customers continue their validation efforts and confirm that services are operating as expected. If you encounter any issues, please contact ThinkOn Support so our teams can assist promptly. Service Desk operations may still experience minor delays as we complete final monitoring and validation. Next Update: We will provide further updates as needed. We appreciate your continued patience and support.

  21. monitoring Apr 16, 2026, 12:54 PM UTC

    Dear Customers, We are continuing to monitor the environment at the GUBOV data centre to ensure ongoing stability and to identify any residual issues. We kindly ask that customers continue to verify their services and confirm that everything is operating as expected. If you encounter any issues, please contact ThinkOn Support so our teams can assist promptly. Service Desk operations may still experience minor delays as we complete final monitoring and validation. Next Update: We will provide further updates as needed. We appreciate your continued patience and support.

  22. monitoring Apr 16, 2026, 10:46 PM UTC

    Dear Customers, We continue to actively monitor the environment at the GUBOV data centre to ensure ongoing stability and confirm that all services remain fully operational. At this time, there are no new issues identified; however, our teams remain engaged and vigilant in observing the environment. We kindly ask that customers continue to validate their services and confirm that everything is functioning as expected. If you encounter any issues, please contact ThinkOn Support so our teams can assist promptly. Service Desk operations may still experience minor delays as we complete final monitoring and validation activities. Next Update: We will provide further updates as needed. We appreciate your continued patience and support.

  23. monitoring Apr 17, 2026, 03:10 PM UTC

    Dear Customers, Most outstanding issues related to the GUBOV data centre incident have now been resolved. Our teams remain actively engaged and will continue to closely monitor the network to ensure ongoing stability. Monitoring systems are fully functional and operational. We kindly ask that customers continue to report any issues they may encounter so our teams can investigate and assist promptly. Next Update: We will provide further updates as needed. We appreciate your continued patience and support.

  24. resolved Apr 17, 2026, 09:37 PM UTC

    Dear Customers, All services at the GUBOV data centre have been restored, and there are currently no known outstanding customer issues. While services are stable, our teams will continue to perform follow-up work, including the replacement of certain hardware components, as part of the full remediation process. This work is not expected to impact customer services. We will continue to closely monitor the environment throughout the weekend to ensure ongoing stability and proactively address any potential issues. If you experience any service disruptions or have concerns, please contact ThinkOn Support for assistance. Next Update: This incident will be considered resolved, and no further updates are planned unless new issues arise. We appreciate your patience and support throughout this incident.

Major April 4, 2026

TOR10 - Network Interruption

Detected by Pingoru
Apr 04, 2026, 03:47 PM UTC
Resolved
Apr 04, 2026, 05:42 PM UTC
Duration
1h 55m
Affected: TOR10 Internet Connectivity, TOR10 Virtual Compute, OTT2 Virtual Compute
Timeline · 2 updates
  1. investigating Apr 04, 2026, 03:47 PM UTC

    We are currently experiencing an issue affecting access to our Critical Compute environment, which may impact connectivity to vCloud services. At this time, there is no indication that customer workloads are impacted. Our team is actively investigating and working to restore full functionality as quickly as possible. We understand the importance of these services and appreciate your patience while we resolve the issue. Further updates will be provided as more information becomes available. If you have any urgent concerns, please reach out to our support team.

  2. resolved Apr 04, 2026, 05:42 PM UTC

    This incident has been resolved.

Major April 1, 2026

[CMH1] vCloud Service Disruption

Detected by Pingoru
Apr 01, 2026, 02:34 PM UTC
Resolved
Apr 01, 2026, 04:49 PM UTC
Duration
2h 14m
Affected: NUBAV Virtual Compute
Timeline · 3 updates
  1. investigating Apr 01, 2026, 02:34 PM UTC

    We are currently experiencing an issue accessing the vCloud portal. Our team is actively investigating the problem.

  2. monitoring Apr 01, 2026, 02:38 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Apr 01, 2026, 04:49 PM UTC

    This incident has been resolved.

Notice March 24, 2026

CGY4 - Network Degradation

Detected by Pingoru
Mar 24, 2026, 03:19 PM UTC
Resolved
Apr 01, 2026, 05:30 PM UTC
Duration
8d 2h
Affected: CGY4 Virtual Compute, CGY4 Zerto Virtual Replication, CGY4 Veeam M365
Timeline · 4 updates
  1. identified Mar 24, 2026, 03:19 PM UTC

    We are currently experiencing a network issue impacting Zerto replication, Veeam backup operations, and overall system performance. Engineering teams are actively investigating and working to mitigate the issue.

  2. identified Mar 25, 2026, 06:34 PM UTC

    We are still experiencing slowness with Zerto, VCC, and VBO. Engineering teams are actively investigating and working to mitigate the issue.

  3. monitoring Mar 30, 2026, 03:00 PM UTC

    Performance has improved across affected services, including Zerto, VCC, VBO, and overall system operations. Customer reports and system metrics indicate services are stabilizing. ThinkOn engineers will continue to monitor closely to ensure stability is maintained. Updates will be provided if required.

  4. resolved Apr 01, 2026, 05:30 PM UTC

    This incident has been resolved.

Notice March 17, 2026

[CMH1] Service Disruption with Data Protect 365 portal

Detected by Pingoru
Mar 17, 2026, 05:07 PM UTC
Resolved
Mar 17, 2026, 06:42 PM UTC
Duration
1h 35m
Affected: CMH1 Veeam M365
Timeline · 2 updates
  1. investigating Mar 17, 2026, 05:07 PM UTC

    We are currently experiencing an issue accessing the Data Protect portal. Our team is actively investigating the problem.

  2. resolved Mar 17, 2026, 06:42 PM UTC

    The service disruption with the Data Protect portal is now resolved.

Notice February 20, 2026

CGY4 - Network Interruption

Detected by Pingoru
Feb 20, 2026, 09:20 PM UTC
Resolved
Feb 21, 2026, 12:11 AM UTC
Duration
2h 50m
Affected: NUBAV Internet Connectivity, NUBAV Virtual Compute, NUBAV Veeam Cloud Connect, NUBAV Zerto Virtual Replication
Timeline · 2 updates
  1. investigating Feb 20, 2026, 09:20 PM UTC

    Please be advised that we are currently investigating a network issue at CGY4. While we troubleshoot and address the root cause, connections to a few services, such as the vCloud portal and Zerto replications, may become intermittent. We apologize for any inconvenience this may cause and appreciate your patience while we work to resolve the problem. Further updates will be provided as they become available.

  2. resolved Feb 21, 2026, 12:11 AM UTC

    This incident has been resolved.

Critical February 9, 2026

MTL2 - Service Interruption

Detected by Pingoru
Feb 09, 2026, 06:57 PM UTC
Resolved
Feb 09, 2026, 10:52 PM UTC
Duration
3h 54m
Affected: MTL2 Virtual Compute, MTL2 Virtual Compute Backups
Timeline · 4 updates
  1. investigating Feb 09, 2026, 06:57 PM UTC

    We are receiving multiple reports of service issues at the MTL2 site. Our team is actively working to identify the scope and root cause. Updates to follow.

  2. investigating Feb 09, 2026, 07:06 PM UTC

    We are continuing to investigate this issue.

  3. identified Feb 09, 2026, 09:34 PM UTC

    We are working on resolving the issue.

  4. resolved Feb 09, 2026, 10:52 PM UTC

    Services have been restored.

Looking to track ThinkOn downtime and outages?

Pingoru polls ThinkOn's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when ThinkOn reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track ThinkOn alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
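The polling approach described above can be sketched in a few lines of Python. The payload shape below is a hypothetical Statuspage-style JSON summary for illustration only, not ThinkOn's or Pingoru's actual API; in a real monitor you would fetch the summary over HTTPS on a timer (e.g. every 5 minutes) and alert when the unresolved list changes:

```python
def unresolved_incidents(summary: dict) -> list[str]:
    """Return the names of incidents not yet marked resolved.

    `summary` is assumed to follow a Statuspage-style shape:
    {"incidents": [{"name": ..., "status": ...}, ...]}
    where status is one of investigating/identified/monitoring/resolved.
    """
    return [
        inc["name"]
        for inc in summary.get("incidents", [])
        if inc.get("status") != "resolved"
    ]


# Sample payload modeled on the incidents above (hypothetical shape).
sample = {
    "incidents": [
        {"name": "[OTT2] vCloud Access is Unavailable", "status": "investigating"},
        {"name": "[OTT] Service Disruption", "status": "resolved"},
    ]
}

# Only the unresolved incident should trigger an alert.
print(unresolved_incidents(sample))  # ['[OTT2] vCloud Access is Unavailable']
```

A production monitor would wrap this in a loop with `time.sleep(300)`, remember the previously seen set of unresolved incidents, and notify (email, Slack, webhook) only on transitions, so a long-running incident like the GUBOV outage does not re-alert every poll.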
Start monitoring ThinkOn for free

5 free monitors · No credit card required