- Detected by Pingoru
- Mar 26, 2026, 03:21 PM UTC
- Resolved
- Mar 26, 2026, 05:59 PM UTC
- Duration
- 2h 37m
Affected: Concord (US East)
Timeline · 3 updates
-
investigating Mar 26, 2026, 03:21 PM UTC
We are aware of a problem where creating or saving Monitoring Policies fails with a "could not execute statement" error. The Kaseya R&D Team is investigating the issue. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 26, 2026, 03:39 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 26, 2026, 05:59 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 25, 2026, 03:04 PM UTC
- Resolved
- Mar 25, 2026, 01:30 PM UTC
- Duration
- —
Timeline · 2 updates
-
resolved Mar 25, 2026, 03:04 PM UTC
Starting at 9:30 AM Eastern time, ConnectBooster Live Portals were inaccessible. Users could not log in and customers were not able to make payments. The Kaseya R&D Team resolved the issue at 10:10 AM Eastern time. All users should now be able to access their Live Portals as expected. Subscribe to this Kaseya Status Page to be notified when an RCA is posted.
-
postmortem Mar 27, 2026, 05:22 PM UTC
**Summary:** Between 2026-03-25 13:30 UTC and 2026-03-25 14:10 UTC, about 66% of ConnectBooster partners were unable to log in to their instance(s) or process payments.

**Root Cause:** ConnectBooster attempted to auto-scale and provision new instances of the application to accommodate more load; however, these new instances failed to start due to a dependency failure.

**Incident Timeline:**
* Identified: 2026-03-25T13:46 UTC
* Resolved: 2026-03-25T14:10 UTC

**Preventative Measures:** To reduce the likelihood and impact of similar incidents in the future, we are taking the following steps:
* Improving our independent monitoring systems to ensure these failures are caught and remediated more quickly.
* Modifying startup dependency requirements to be less stringent. In the future, in the event of a failure in certain parts of the product or in the event of certain external vendor failures, unaffected parts of the product will continue to function.
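The last preventative measure describes a common resilience pattern: treat only truly critical dependencies as fatal at startup and degrade the features behind optional ones. A minimal sketch of that pattern in Python (all names are hypothetical; this is not ConnectBooster's actual code):

```python
# Sketch of "less stringent" startup dependencies: the service starts as long
# as critical dependencies are healthy, and merely degrades the features backed
# by optional ones. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Dependency:
    name: str
    check: Callable[[], bool]   # returns True if the dependency is reachable
    critical: bool              # True: refuse to start; False: degrade feature

def start_service(dependencies: list[Dependency]) -> set[str]:
    """Return the set of degraded features; raise only on critical failures."""
    degraded: set[str] = set()
    for dep in dependencies:
        try:
            healthy = dep.check()
        except Exception:
            healthy = False
        if healthy:
            continue
        if dep.critical:
            raise RuntimeError(f"critical dependency {dep.name} unavailable; aborting startup")
        degraded.add(dep.name)  # keep serving everything else
    return degraded

if __name__ == "__main__":
    deps = [
        Dependency("primary-db", check=lambda: True, critical=True),
        Dependency("reporting-vendor", check=lambda: False, critical=False),
    ]
    print("degraded features:", start_service(deps))
```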
- Detected by Pingoru
- Mar 25, 2026, 02:43 PM UTC
- Resolved
- Mar 27, 2026, 02:17 PM UTC
- Duration
- 1d 23h
Affected: Off-Site Synchronization, Off-Site Recovery
Timeline · 3 updates
-
identified Mar 25, 2026, 02:43 PM UTC
We are aware of a problem where BCDR users paired to server5856 in the US-East region can experience degraded offsite capabilities. The Kaseya R&D Team has identified the issue and is working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 27, 2026, 01:18 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 27, 2026, 02:17 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 24, 2026, 11:44 PM UTC
- Resolved
- Mar 25, 2026, 02:41 PM UTC
- Duration
- 14h 56m
Affected: Backup, Verification, Recovery
Timeline · 2 updates
-
investigating Mar 24, 2026, 11:44 PM UTC
We are aware of an issue where Datto Endpoint Backup for PC users paired with Cloud Siris 'use1-dtc-server-143' may experience degraded performance, which can result in slower-than-average backup and restore speeds. The Kaseya R&D Team has identified the issue and is working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
resolved Mar 25, 2026, 02:41 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 24, 2026, 02:24 PM UTC
- Resolved
- Mar 24, 2026, 05:45 PM UTC
- Duration
- 3h 21m
Affected: Agent Registration, Backup, Verification, Recovery
Timeline · 3 updates
-
identified Mar 24, 2026, 02:24 PM UTC
We are aware of a problem where users paired to server 'gbe2-dtc-server-157' in the UK region can experience degraded offsite performance. The Kaseya R&D Team has identified the issue and is working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 24, 2026, 04:04 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 24, 2026, 05:45 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 23, 2026, 04:39 PM UTC
- Resolved
- Mar 23, 2026, 07:30 PM UTC
- Duration
- 2h 50m
Affected: Pinotage (EU1), Merlot (EU2)
Timeline · 3 updates
-
investigating Mar 23, 2026, 04:39 PM UTC
We are aware of a problem where some device filters on Merlot and Pinotage are failing to load with an error pop-up saying: "Device Filter results were not able to be returned." The Kaseya R&D Team is investigating the issue. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 23, 2026, 06:31 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 23, 2026, 07:30 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 20, 2026, 11:39 AM UTC
- Resolved
- Mar 23, 2026, 02:10 PM UTC
- Duration
- 3d 2h
Affected: SaaS Protection Backups
Timeline · 8 updates
-
investigating Mar 20, 2026, 11:39 AM UTC
We are currently investigating an issue affecting some customers hosted on pods 'use1-saas-p7' and 'use1-saas-p8', where performance degradation has been observed across all backups. The Kaseya R&D team has identified the root cause and is actively working on a resolution. Further updates will be provided as progress is made.
-
identified Mar 20, 2026, 12:05 PM UTC
The R&D team has identified the root cause of the problem and is working to resolve it.
-
identified Mar 20, 2026, 02:16 PM UTC
Backups are processing normally again on pod 'use1-saas-p7' and backup success rate metrics are improving. The R&D team has identified another issue still affecting backups on pod 'use1-saas-p8' and they are actively working to resolve it.
-
identified Mar 20, 2026, 05:37 PM UTC
Our R&D team is continuing to implement changes to address the backup issues affecting customers hosted on pod 'use1-saas-p8'. Backups continue to process normally on pod 'use1-saas-p7', and backup success rates should fully recover in the next couple of hours.
-
identified Mar 20, 2026, 05:53 PM UTC
Our R&D team has implemented a fix to address the backup issues affecting pod 'use1-saas-p8' and backups have started to process normally again.
-
monitoring Mar 20, 2026, 07:09 PM UTC
Backup success rates for pod 'use1-saas-p7' have returned to normal levels and remain stable. Customer backups on pod 'use1-saas-p8' are continuing to process normally and our R&D team is monitoring recovery as backup success rates improve.
-
monitoring Mar 22, 2026, 07:20 PM UTC
Backup success rates for both pods 'use1-saas-p7' and 'use1-saas-p8' have returned to normal levels. Our R&D team is continuing to monitor for any other issues.
-
resolved Mar 23, 2026, 02:10 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 20, 2026, 10:48 AM UTC
- Resolved
- Mar 20, 2026, 01:54 PM UTC
- Duration
- 3h 6m
Affected: Merlot (EU2)
Timeline · 4 updates
-
investigating Mar 20, 2026, 10:48 AM UTC
We are aware of a problem where agent-based alerts are not being raised for Datto RMM on the Merlot platform. The Kaseya R&D Team is aware of the issue and is investigating the root cause. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 20, 2026, 12:21 PM UTC
The R&D team identified the cause of the issue and implemented a fix. New alerts are now being created on the platform in real time, and previous alerts impacted by this issue are being processed and will be raised in the accounts within the next 2 hours.
-
resolved Mar 20, 2026, 01:54 PM UTC
This incident has been resolved.
-
postmortem Apr 30, 2026, 07:53 AM UTC
**Summary:** Between approximately 23:00 UTC on 18 March 2026 and 12:07 UTC on 20 March 2026, partners on the Merlot (EU2) platform experienced an issue whereby alerts raised by devices were not processed by the service and did not appear in the Web Application within their accounts.

**Root Cause:** The root cause was identified as a defect in the alert-processing mechanism, which prevented the service from processing alerts containing certain special characters. When alert processing fails, a retry mechanism is triggered to attempt reprocessing a set number of times. However, an additional defect caused the retry mechanism to enter an infinite loop, never discarding the affected message. The combined effect of these issues caused alert processing to stall on the affected alerts, resulting in a continuously growing message queue.

Initial standard remediation measures were unsuccessful; therefore, the R&D and Operations teams force-cleared the queue and restarted the processing service to recover from the incident. Following remediation, the service returned to a healthy state, and both historical and new alerts began processing in near real time for devices where alert conditions were still met. The R&D team is addressing the identified defects at the earliest opportunity. In the interim, configuration changes have been applied to minimize the risk of the issue reoccurring.

**Incident Timeline:**
* Identified: 2026-03-20 09:12 UTC
* Public Notification: 2026-03-20 10:48 UTC
* Resolved: 2026-03-20 12:07 UTC

**Preventative Measures:** To reduce the likelihood and impact of similar incidents in the future, the following actions are being taken:
* The Operations team has implemented direct monitoring of the alert-processing queue to notify responders if an abnormal state develops that could impact service performance.
* The Operations team is also working on improving monitoring of alert-processing failures to ensure unexpected states are reviewed and addressed in a timely manner.
* R&D and Operations playbooks, along with escalation procedures for standard remediation measures, have been updated based on lessons learned during the response to this incident.
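The retry defect described above is the classic poison-message failure mode. As a hedged illustration (hypothetical names, Python's standard `queue` module; not Datto RMM's actual implementation), the intended behavior is a bounded retry that dead-letters the message instead of looping forever:

```python
# Bounded retry with a dead-letter queue: a message that keeps failing is
# retried at most MAX_ATTEMPTS times, then parked, so the main queue cannot
# grow without bound. Names and the failure condition are illustrative.
import queue

MAX_ATTEMPTS = 5

def handle(alert: str) -> None:
    if "\x00" in alert:  # stand-in for the special-character parsing defect
        raise ValueError("unparseable alert")

def drain(main_q: queue.Queue, dead_letter_q: queue.Queue) -> None:
    """Process queued alerts; poison messages are dead-lettered, not looped."""
    while not main_q.empty():
        attempts, alert = main_q.get()
        try:
            handle(alert)
        except ValueError:
            if attempts + 1 >= MAX_ATTEMPTS:
                dead_letter_q.put(alert)           # park for offline analysis...
            else:                                  # ...instead of retrying forever
                main_q.put((attempts + 1, alert))  # bounded retry

main_q, dlq = queue.Queue(), queue.Queue()
for alert in ("disk full", "bad\x00payload"):
    main_q.put((0, alert))
drain(main_q, dlq)
print("dead-lettered:", dlq.qsize())  # -> 1
```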
- Detected by Pingoru
- Mar 18, 2026, 10:49 AM UTC
- Resolved
- Mar 19, 2026, 01:12 PM UTC
- Duration
- 1d 2h
Affected: SaaS Protection Backups
Timeline · 4 updates
-
identified Mar 18, 2026, 10:49 AM UTC
We are currently investigating an issue affecting some customers hosted on pod use1-saas-p5, where backup performance degradation has been observed for certain SharePoint services. The Kaseya R&D team has identified the root cause and is actively working on a resolution. Further updates will be provided as progress is made.
-
monitoring Mar 18, 2026, 12:30 PM UTC
We are seeing continued improvement in backup performance on pod use1-saas-p5. The R&D team will continue monitoring the environment until full recovery is confirmed.
-
monitoring Mar 18, 2026, 02:07 PM UTC
Backups are processing normally and backup success rates continue to improve. Our R&D team is continuing to monitor.
-
resolved Mar 19, 2026, 01:12 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 16, 2026, 03:16 PM UTC
- Resolved
- Mar 16, 2026, 04:30 PM UTC
- Duration
- 1h 13m
Affected: Backup, Agent Registration, Restore
Timeline · 2 updates
-
investigating Mar 16, 2026, 03:16 PM UTC
We are aware of a problem where Endpoint Backup v2 registration, backup, and restore requests are not initiating successfully. The Kaseya R&D Team is currently investigating the issue and working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
resolved Mar 16, 2026, 04:30 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 16, 2026, 11:08 AM UTC
- Resolved
- Mar 16, 2026, 08:08 PM UTC
- Duration
- 9h
Affected: SaaS Protection Backups
Timeline · 3 updates
-
identified Mar 16, 2026, 11:08 AM UTC
We are aware of an issue affecting backup performance for some customers hosted on pod use1-saas-p9. Our R&D team is actively working to restore the service to full capacity. In the meantime, backup processing may be delayed, but the team is working to ensure daily backups are completed within the 24-hour window. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 16, 2026, 11:47 AM UTC
Operations have been restored by the R&D team. We are continuing to monitor the service to ensure full recovery.
-
resolved Mar 16, 2026, 08:08 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 11, 2026, 01:33 PM UTC
- Resolved
- Mar 12, 2026, 05:09 PM UTC
- Duration
- 1d 3h
Affected: Routers
Timeline · 3 updates
-
identified Mar 11, 2026, 01:33 PM UTC
We are aware of a problem where Datto Networking DNA devices' dynamic DNS addresses have been inadvertently configured with an incorrect private IP address. The Kaseya R&D Team has identified the issue and is working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
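For background, one generic safeguard against this class of misconfiguration is to validate a candidate address before publishing it to dynamic DNS; Python's standard `ipaddress` module can reject private (RFC 1918) addresses outright. This is an illustrative sketch only, not Datto Networking's implementation:

```python
# Guard a dynamic DNS updater against publishing a device's private LAN IP.
# The function name is a hypothetical example.
import ipaddress

def safe_ddns_target(candidate: str) -> str:
    """Refuse private (RFC 1918), loopback, and link-local addresses."""
    addr = ipaddress.ip_address(candidate)
    if addr.is_private:
        raise ValueError(f"{candidate} is not publicly routable; skipping DDNS update")
    return candidate

print(safe_ddns_target("8.8.8.8"))   # OK: public address
# safe_ddns_target("192.168.1.10")   # raises ValueError: private address
```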
-
monitoring Mar 11, 2026, 02:35 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 12, 2026, 05:09 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 09, 2026, 07:17 PM UTC
- Resolved
- Mar 20, 2026, 01:13 PM UTC
- Duration
- 10d 17h
Affected: Off-Site Synchronization, Off-Site Recovery
Timeline · 3 updates
-
identified Mar 09, 2026, 07:17 PM UTC
We are aware of a problem where Datto BCDR users in the Germany region associated with server3818 may experience reduced offsite performance. The Kaseya R&D Team has identified the issue and is working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 16, 2026, 05:13 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 20, 2026, 01:13 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 06, 2026, 08:45 PM UTC
- Resolved
- Mar 06, 2026, 10:04 PM UTC
- Duration
- 1h 19m
Affected: Zinfandel (US West)
Timeline · 4 updates
-
investigating Mar 06, 2026, 08:45 PM UTC
We are aware of a problem where Jobs and Audits are experiencing delays on the Zinfandel Platform. The Kaseya R&D Team is investigating the issue. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 06, 2026, 09:04 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 06, 2026, 10:04 PM UTC
This incident has been resolved.
-
postmortem Mar 18, 2026, 10:18 AM UTC
**Summary:** Around 2026-03-06 9:40 AM EST, partners on the Zinfandel platform started experiencing delays in both Quick and Scheduled Job execution. The issue was initially mitigated by 12:00 PM EST; however, the steps taken to restore service inadvertently caused the problem to reoccur later that afternoon at approximately 2:13 PM EST. The R&D and Operations teams fully resolved the issue by 2:47 PM EST.

**Root Cause and Resolution:** The initial incident was triggered by an unusually large-scale alert resolution operation, which created a significant backlog of processing tasks within the database. The high volume of queued work caused processing times to exceed the allowable execution window. This resulted in repeated retries, which continually saturated the database and prevented other operations from running normally.

To alleviate the load, the task scheduling service was scaled down and the queuing services were recycled, which reduced database pressure and restored normal operation by approximately 12:00 PM EST. However, at around 2:13 PM EST, these earlier mitigation steps produced an unintended side effect: they constrained the throughput of the service responsible for processing device audits. This limitation caused additional downstream delays in Job execution across the platform. The service was subsequently scaled back up to full capacity, and all services were confirmed healthy by 2:47 PM EST.

**Preventative Measures:** To reduce the likelihood and impact of similar incidents in the future, the following steps are being taken:
* **Resolution of Related Product Issues:** The R&D team has identified a backend software defect that contributed to the incident. A fix is scheduled for the 14.9 release.
* **Enhanced Monitoring, Alerting, and Response:** The Kaseya R&D team is reviewing additional monitoring capabilities to provide deeper insight into application performance at the component level for key services.
* **Improved Incident Management and Response:** Global Kaseya teams will continue to receive training and coaching on Incident Management playbooks to ensure that all internal stakeholders are promptly informed and take coordinated action when events occur.
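Two generic techniques map onto the failure mode described here: chunking an unusually large bulk operation so each task fits its execution window, and backing off between retries so failures don't immediately re-saturate the database. A minimal sketch under those assumptions (function and parameter names are hypothetical, not Kaseya's code):

```python
# Chunked bulk operation with exponential backoff between retries, so no
# single task exceeds its execution window and retries don't hammer the DB.
import time

def resolve_alerts_in_chunks(alert_ids, resolve_batch, chunk_size=500,
                             max_retries=3, base_delay=1.0):
    """resolve_batch(ids) commits one small transaction and raises on failure."""
    for start in range(0, len(alert_ids), chunk_size):
        chunk = alert_ids[start:start + chunk_size]
        for attempt in range(max_retries):
            try:
                resolve_batch(chunk)
                break  # chunk committed; move on to the next one
            except Exception:
                if attempt == max_retries - 1:
                    raise  # surface the failure instead of retrying forever
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```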
- Detected by Pingoru
- Mar 05, 2026, 02:05 PM UTC
- Resolved
- Mar 05, 2026, 08:09 PM UTC
- Duration
- 6h 3m
Affected: SaaS Protection Restores/Exports
Timeline · 6 updates
-
identified Mar 05, 2026, 02:05 PM UTC
We are aware of a problem currently affecting Restores and Exports for Datto SaaS Protection that requires the Restore/Export service to be temporarily disabled. The Kaseya R&D Team has identified the issue and is actively working on a fix. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
identified Mar 05, 2026, 04:18 PM UTC
The Kaseya R&D team is actively testing a fix for this issue. We will provide additional updates on progress as they become available.
-
identified Mar 05, 2026, 05:10 PM UTC
Testing of the fix was successful for V3 customers in all regions. Restore/Export services should now be functioning normally again for all V3 customers. The Kaseya R&D team is now actively working to apply the fix and test for all V2 customers. We will continue to provide additional updates as they become available.
-
identified Mar 05, 2026, 06:15 PM UTC
The Kaseya R&D team is continuing to work on applying the fix for customers hosted on V2 nodes in all regions. SaaS Protection services for customers hosted on V3 pods in all regions continue to function normally.
-
monitoring Mar 05, 2026, 07:35 PM UTC
The Kaseya R&D team has successfully deployed the fix to all V2 nodes in all regions and Restores and Exports were re-enabled. Restores and Exports for customers in all regions are functioning normally again. We will continue to monitor for any other issues.
-
resolved Mar 05, 2026, 08:09 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 04, 2026, 02:33 AM UTC
- Resolved
- Mar 04, 2026, 06:27 PM UTC
- Duration
- 15h 54m
Affected: SaaS Protection Backups
Timeline · 3 updates
-
investigating Mar 04, 2026, 02:33 AM UTC
We are aware of a problem where SaaS Protection customers hosted on node 'usw3-pod-1' are experiencing backup degradation. The Kaseya R&D Team is currently investigating. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Mar 04, 2026, 01:40 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 04, 2026, 06:27 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 04, 2026, 01:44 AM UTC
- Resolved
- Mar 04, 2026, 07:43 AM UTC
- Duration
- 5h 58m
Affected: AU5
Timeline · 2 updates
-
investigating Mar 04, 2026, 01:44 AM UTC
We are investigating an issue impacting AU5 partners where some DWP Report Files are not opening in the browser or agent. Other regions are not affected. The Kaseya R&D Team is actively investigating. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
resolved Mar 04, 2026, 07:43 AM UTC
This incident has been resolved.
- Detected by Pingoru
- Mar 02, 2026, 04:35 PM UTC
- Resolved
- Mar 06, 2026, 02:31 PM UTC
- Duration
- 3d 21h
Affected: Off-Site Synchronization, Off-Site Recovery
Timeline · 3 updates
-
investigating Mar 02, 2026, 04:35 PM UTC
We are aware of a problem where Datto BCDR users associated with Server3935 may experience disruptions affecting offsite synchronization and offsite restore operations. The Kaseya R&D Team is currently investigating the issue and working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
identified Mar 02, 2026, 06:15 PM UTC
The issue has been identified and a fix is being implemented.
-
resolved Mar 06, 2026, 02:31 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 27, 2026, 04:53 PM UTC
- Resolved
- Feb 27, 2026, 09:08 PM UTC
- Duration
- 4h 14m
Affected: SaaS Protection Backups, SaaS Protection Console Login, SaaS Protection Seat Management
Timeline · 3 updates
-
investigating Feb 27, 2026, 04:53 PM UTC
We are aware of a problem where SaaS Protection accounts hosted on node 'use1-bfyii-2378' are currently unreachable, and customers are receiving an error when attempting to access their accounts. The Kaseya R&D Team is currently investigating. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Feb 27, 2026, 05:53 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Feb 27, 2026, 09:08 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 26, 2026, 06:09 PM UTC
- Resolved
- Mar 02, 2026, 02:21 PM UTC
- Duration
- 3d 20h
Affected: SaaS Protection Backups
Timeline · 3 updates
-
identified Feb 26, 2026, 06:09 PM UTC
We are aware of a problem where some Datto SaaS Protection customers hosted on pod 'aue1-saas-p0' are currently experiencing degraded backup performance for SharePoint and Teams services. The Kaseya R&D team has identified the issue and is actively working to resolve it. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Feb 27, 2026, 07:05 PM UTC
The Kaseya R&D team has implemented configuration changes on the pod to address the issue. SharePoint and Teams success rates are improving and the Kaseya R&D team is actively monitoring.
-
resolved Mar 02, 2026, 02:21 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 26, 2026, 02:11 PM UTC
- Resolved
- Mar 05, 2026, 08:10 PM UTC
- Duration
- 7d 5h
Affected: SaaS Protection Backups
Timeline · 6 updates
-
identified Feb 26, 2026, 02:11 PM UTC
We are aware of a problem where some Datto SaaS Protection customers hosted on pod 'des1-saas-p0' are currently experiencing degraded backup performance for SharePoint and Teams services. The Kaseya R&D team has identified the issue and is actively working to resolve it. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Feb 26, 2026, 05:36 PM UTC
While the Kaseya R&D team continues to address the issue, it has temporarily decreased the daily backup frequency on pod 'des1-saas-p0' from 2x per day to 1x per day to maintain high backup success rates amidst ongoing spikes in Microsoft throttling. The Kaseya R&D team is closely monitoring success rates after the configuration change and will increase backup frequency to 2x backups per day when it is safe and reliable for all customers hosted on the pod.
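For context, Microsoft signals Graph API throttling with HTTP 429 responses carrying a `Retry-After` header, which a client can honor rather than retrying immediately. A generic sketch using the `requests` library (not SaaS Protection's actual code; the function name is a hypothetical example):

```python
# Honor Microsoft's throttling signal: on HTTP 429, sleep for the duration
# given in the Retry-After header before retrying the request.
import time
import requests

def get_with_throttle_retry(url: str, headers: dict, max_retries: int = 5) -> requests.Response:
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code != 429:
            return resp
        # Retry-After is in seconds; default conservatively if it is absent.
        time.sleep(int(resp.headers.get("Retry-After", "10")))
    raise RuntimeError("still throttled after retries; defer this backup run")
```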
-
monitoring Feb 27, 2026, 07:05 PM UTC
The Kaseya R&D team has implemented additional configuration changes on the pod to address the issue further. SharePoint and Teams success rates are improving and the Kaseya R&D team is actively monitoring.
-
monitoring Mar 02, 2026, 01:40 PM UTC
SharePoint and Teams success rates on the pod have stabilized and the Kaseya R&D team has increased backup frequency back to 2x backups per day. We are continuing to monitor.
-
monitoring Mar 03, 2026, 01:46 PM UTC
Backup success rates for Teams and SharePoint services on pod 'des1-saas-p0' have stabilized at 2x backups per day. The Kaseya R&D team is continuing to monitor.
-
resolved Mar 05, 2026, 08:10 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 26, 2026, 08:42 AM UTC
- Resolved
- Feb 26, 2026, 02:58 PM UTC
- Duration
- 6h 15m
Affected: US - VSA136
Timeline · 3 updates
-
identified Feb 26, 2026, 08:42 AM UTC
We are currently experiencing a service disruption for US - VSA136, where administrators may be unable to log in to the UI. Our team is actively working to restore functionality at this time. We apologize for any inconvenience. - Cloud Operations Team
-
monitoring Feb 26, 2026, 10:24 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Feb 26, 2026, 02:58 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 22, 2026, 11:13 AM UTC
- Resolved
- Feb 22, 2026, 12:30 PM UTC
- Duration
- 1h 17m
Affected: US - IAD2VSA01, US - IAD2VSA02, US - IAD2VSA03, US - IAD2VSA04, US - IAD2VSA06, US - IAD2VSA07, US - IAD2VSA08, US - IAD2VSA09, US - IAD2VSA10, US - IAD2VSA12, US - IAD2VSA33, US - NA1VSA01, US - NA1VSA02, US - NA1VSA03, US - NA1VSA04, US - NA1VSA05, US - NA1VSA06, US - NA1VSA07, US - NA1VSA08, US - NA1VSA09, US - NA1VSA10, US - NA1VSA105, US - NA1VSA106, US - NA1VSA107, US - NA1VSA108, US - NA1VSA11, US - NA1VSA112, US - NA1VSA113, US - NA1VSA115, US - NA1VSA116, US - NA1VSA117, US - NA1VSA12, US - NA1VSA13, US - NA1VSA14, US - NA1VSA16, US - NA1VSA17, US - NA1VSA18, US - NA1VSA19, US - NA1VSA20, US - NA1VSA21, US - NA1VSA22, US - NA1VSA23, US - NA1VSA24, US - NA1VSA25, US - NA1VSA26, US - NA1VSA27, US - NA1VSA28, US - NA1VSA29, US - NA1VSA30, US - NA1VSA31, US - NA1VSA32, US - NA1VSA33, US - NA1VSA34, US - NA1VSA35, US - NA1VSA36, US - NA1VSA37, US - NA1VSA38, US - NA1VSA39, US - NA1VSA40, UK - EMEAVSATRIAL07, US - NA1VSATRIAL03, UK - SAAS02, UK - SAAS04, UK - SAAS05, UK - SAAS06, UK - SAAS08, UK - SAAS09, UK - SAAS12, UK - SAAS16, UK - SAAS17, UK - SAAS19, UK - SAAS20, UK - SAAS22, UK - SAAS24, UK - SAAS25, UK - SAAS26, UK - SAAS27, UK - SAAS28, UK - SAAS29, UK - SAAS34, UK - SAAS35, UK - SAAS36, UK - SAAS37, UK - SAAS38, UK - SAAS39, UK - SAAS40, UK - SAAS41, UK - SAAS42, UK - SAAS43, UK - SAAS44, UK - SAAS46, UK - SAAS47, UK - SAAS48, UK - SAAS49, UK - UKVSA109, UK - UKVSA110, UK - UKVSA111, UK - VSA120, UK - VSA129, Aquila, Lynx01, Andromeda02 - EU01, EU - EUVSA01, EU - EUVSA02, EU - EUVSA03, EU - EUVSA04, EU - EUVSA05, EU - EUVSA06, EU - EUVSA07, EU - EUVSA08, EU - EUVSA09, EU - EUVSA10, EU - EUVSA11, EU - EUVSA12, EU - EUVSA13, EU - EUVSA14, EU - EUVSA15, EU - EUVSA16, EU - EUVSA17, EU - EUVSA20, EU - EUVSA21, EU - SAAS01, EU - SAAS03, EU - SAAS07, EU - SAAS10, EU - SAAS11, EU - SAAS14, EU - SAAS18, EU - SAAS21, EU - SAAS23, EU - SAAS30, EU - SAAS32, EU - SAAS33, EU - SAAS45, EU - VSA114, EU-TRIAL1002, EU-VSA128, ELOQUIO, SYD2 - RSSPRM, SYD2 - VSA139, SYD2 - VSA140, SYD2 - VSA144, SYD2 - VSA145, SYD2-TRIAL1003, AU - Genesis, AU - GONG001, AU-OTA, US - Noc-Assist, US - Phoenix, US - RIM CE, US - RIM DQ, US - SAAS15, US - TRIAL1001, US - VSA118, US - VSA119, US - VSA121, US - VSA122, US - VSA125, US - VSA126, US - VSA130, US - VSA131, US - VSA132, US - VSA133, US - VSA134, US - VSA135, US - VSA136, US - VSA138, US - VSA141, US - VSA142, US - VSA143, US - VSA146, US - Wakanda, US-EBS01, US-EDATECH01, US-GSENTKASEYA01, US-VSA123, US-VSA124, US-VSA127, Leo06, Leo45, PHILKAS01, Tatooine01, Tatooine02, Tatooine03, Andromeda02 - US01, Andromeda02 - US02, Andromeda02 - US03, Andromeda01, Andromeda04, CyberTek, Draco01, US - RIM CE2, US - CGSINC, US - ABCPES, US - Revan01, US - Revan02, US - ABCPES02, US - RIM CE3
Timeline · 3 updates
-
investigating Feb 22, 2026, 11:13 AM UTC
We are experiencing a Partial Service Disruption on Liveconnect.me. Our team is working to restore functionality at this time. We apologize for any inconvenience. - Cloud Operations Team
-
monitoring Feb 22, 2026, 11:42 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Feb 22, 2026, 12:30 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 17, 2026, 10:39 PM UTC
- Resolved
- Feb 20, 2026, 02:07 PM UTC
- Duration
- 2d 15h
Affected: Off-Site Recovery
Timeline · 3 updates
-
investigating Feb 17, 2026, 10:39 PM UTC
We are aware of a problem where Datto BCDR users paired to server5856 can experience degraded offsite capabilities. The Kaseya R&D Team is investigating the issue and working towards a resolution. Subscribe to the Kaseya Status Page for up-to-date information at https://status.kaseya.com/
-
monitoring Feb 18, 2026, 02:52 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Feb 20, 2026, 02:07 PM UTC
This incident has been resolved.
- Detected by Pingoru
- Feb 17, 2026, 01:31 PM UTC
- Resolved
- Feb 18, 2026, 09:39 PM UTC
- Duration
- 1d 8h
Affected: KaseyaOne
Timeline · 3 updates
-
investigating Feb 17, 2026, 01:31 PM UTC
We are currently experiencing an issue with push notifications not working as expected. Our team is actively investigating and working on a resolution. You can still log in to KaseyaOne using a 2FA authenticator app by selecting "Try Another Way" on the Approve Log In push MFA screen. Thank you for your patience; we'll provide updates as soon as we have more information.
-
monitoring Feb 18, 2026, 08:25 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Feb 18, 2026, 09:39 PM UTC
This incident has been resolved.