Hoxhunt incident

Background task issues affecting multiple Hoxhunt components

Major Resolved

Hoxhunt experienced a major incident on January 2, 2026 affecting Hoxhunt Training, Hoxhunt Admin, and three other components, lasting 2d 23h. The incident has been resolved; the full update timeline is below.

Started
Jan 02, 2026, 12:15 PM UTC
Resolved
Jan 05, 2026, 11:45 AM UTC
Duration
2d 23h
Detected by Pingoru
Jan 02, 2026, 12:15 PM UTC

Affected components

Hoxhunt Training, Hoxhunt Admin, Hoxhunt Response, Hoxhunt Insights, Hoxhunt Sense

Update timeline

  1. identified Jan 02, 2026, 12:15 PM UTC

    We have identified an issue where some background tasks are not being processed correctly. This is impacting multiple components, including incident orchestration and data exports. Our team is investigating and working to resolve the issue. We will provide updates as more information becomes available.

  2. identified Jan 02, 2026, 01:21 PM UTC

    The situation has somewhat improved, but background tasks are still heavily congested. Our team continues to work on resolving the issue with the highest priority. — Hoxhunt Support

  3. monitoring Jan 02, 2026, 06:28 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Jan 05, 2026, 11:45 AM UTC

    This incident has been resolved. All background tasks are running normally. We apologize for the inconvenience caused by this incident.

  5. postmortem Jan 05, 2026, 12:19 PM UTC

    **Incident window (UTC):** Jan 1, 2026 02:30 – Jan 2, 2026 16:30

    ### Summary

    Between January 1st and January 2nd, we experienced a disruption in our background processing system. During this period, background jobs were not completing as expected, which resulted in delayed execution of several automated features across the platform.

    ### Customer Impact

    During the incident window, background tasks were queued but not processed. This caused **delays** (not data loss) in the following areas:

    * Training and SAT reminders
    * Training and benchmark delivery
    * Training leaderboards
    * Training simulation delivery
    * Training automation runs
    * Respond threat feedback reporting
    * Respond threat evaluation
    * Respond incident escalation processing

    User access to the platform itself was unaffected, but the above features experienced degraded availability due to delayed background processing.

    ### Root Cause

    A large batch of scheduled background tasks triggered a performance issue in our task allocation logic. Under this unusually large workload, the system was unable to distribute tasks within required time limits.

    ### Resolution

    On January 2nd at 16:30 UTC, we restored normal background task processing by resolving the backlog that had accumulated.

    ### Preventive Actions

    We are actively working on improvements to prevent similar incidents in the future, including:

    * Improving background task handling to better support large task bursts
    * Introducing pacing and safeguards for large task submissions
    * Strengthening monitoring to detect queue degradation earlier
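The pacing safeguard for large task submissions mentioned in the preventive actions can be sketched as a simple batched submitter. This is a hypothetical illustration only: the `submit` callable, batch size, and pause are assumptions for the sketch, not Hoxhunt's actual implementation.

```python
import time


def submit_in_paced_batches(tasks, submit, batch_size=100, pause_s=0.0):
    """Submit tasks in fixed-size batches with an optional pause between them.

    Pacing a large submission like this keeps a single burst from
    overwhelming the task allocator. `submit` is a hypothetical callable
    that enqueues one batch of tasks; returns the number of batches sent.
    """
    batches = 0
    for i in range(0, len(tasks), batch_size):
        submit(tasks[i:i + batch_size])
        batches += 1
        if i + batch_size < len(tasks):
            time.sleep(pause_s)  # brief gap between bursts
    return batches


# Example: 250 tasks submitted in batches of 100 -> 3 submissions
sent = []
n = submit_in_paced_batches(list(range(250)), sent.append, batch_size=100)
```

A real system would likely combine this client-side pacing with server-side safeguards, such as rejecting or deferring submissions once the queue depth crosses a threshold.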