DRACOON experienced a minor incident on August 13, 2025 affecting Virus Protection, lasting 4h 59m. The incident has been resolved; the full update timeline is below.
Affected components
- Virus Protection
Update timeline
- identified Aug 13, 2025, 10:00 AM UTC
We have identified an issue with degraded performance of our virus protection feature for some customers. Our engineers are working on a resolution and we are confident that we will have a fix in place soon. We will provide more information and updates as soon as they become available.
- monitoring Aug 13, 2025, 01:45 PM UTC
The issue with degraded performance of our virus protection feature has been resolved, and we are monitoring the situation to ensure it remains stable. We apologize for any inconvenience this may have caused and appreciate your patience.
- resolved Aug 13, 2025, 02:59 PM UTC
The issue with degraded performance of our virus protection feature has been fully resolved. All systems are now operating normally. We apologize for any inconvenience this may have caused and appreciate your patience. If you continue to experience any issues, please don't hesitate to reach out to our support team for assistance.
- postmortem Sep 02, 2025, 11:28 AM UTC
We experienced an issue with our Virus Protection feature on Aug 13, 2025 at around 12:00 CEST. Our team worked diligently to identify the root cause and implement a resolution. In this post-mortem, we want to share what happened, why it happened, what we did to resolve it, and what we will do to prevent similar incidents in the future.

**What happened?**
At 12:00 CEST, monitoring alerted the team to increased latency in the Virus Protection service. Investigation revealed that an unusually high load from a spike in file scanning requests had overwhelmed the system's processing capacity, leading to degraded performance for a subset of customers.

**Why did this happen?**
The Virus Protection service itself was able to keep up with the increased scanning load, but its responses were not processed fast enough by a downstream service, leading to missing or delayed verdicts for the files that had to be scanned.

**What did we do?**
The team manually scaled out the affected service to restore capacity. At 15:45 CEST, latency metrics returned to normal and functionality was fully restored.

**What can we do to improve?**
We are investigating the root cause of the slow response processing and will reconfigure the service in question to eliminate the bottleneck.

We apologize for any inconvenience this incident may have caused. We are committed to ensuring the stability and reliability of our services and will continue to take proactive measures to prevent similar incidents in the future. If you have any questions or concerns, please don't hesitate to reach out to our support team for assistance.