Lakera incident

Ingestion failures — Infinity Portal Data Services (DataTube)

Major · Resolved

Lakera experienced a major incident on December 5, 2025 affecting Check Point Portal Data Services (DataTube) across multiple regions (including the EU and AUS regions), lasting 1h 16m. The incident has been resolved; the full update timeline is below.

Started
Dec 05, 2025, 04:59 PM UTC
Resolved
Dec 05, 2025, 06:15 PM UTC
Duration
1h 16m
Detected by Pingoru
Dec 05, 2025, 04:59 PM UTC

Affected components

- Check Point Portal Data Services (DataTube)
- Check Point Portal Data Services (DataTube) - EU region
- Check Point Portal Data Services (DataTube) - AUS region

Update timeline

  1. Investigating — Dec 05, 2025, 04:59 PM UTC

    Starting at 15:45 UTC on December 5th, we have been experiencing a cross-region issue affecting data ingestion from the Infinity Portal. Our team is actively investigating and will provide updates as more information becomes available.

  2. Identified — Dec 05, 2025, 05:49 PM UTC

    The issue has been identified and a fix is being implemented.

  3. Monitoring — Dec 05, 2025, 05:59 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. Resolved — Dec 05, 2025, 06:15 PM UTC

    This incident has been resolved.

  5. Postmortem — Dec 07, 2025, 04:23 PM UTC

    ### **Summary**

    On **Friday, December 5th, 2025, between 15:45 and 18:00 UTC**, Infinity Portal Data Services experienced a temporary disruption affecting **data ingestion and query operations**.

    ### **Incident Timeline**

    * **15:45** – Alerts triggered for Infinity Portal Data Services; investigation began immediately.
    * **16:00** – War room initiated with engineering teams.
    * **16:15** – Issue narrowed down to a backend network security component impacting service communication.
    * **17:00** – Fix identified and implemented in staging for validation.
    * **17:45** – Fix successfully tested and rolled out to production environments.
    * **18:00** – All services fully restored and stable.

    ### **Root Cause**

    The disruption was caused by an internal **network security component misconfiguration** that required adjustments to restore proper communication between services. The issue has been fully resolved.

    ### **Next Steps**

    We are improving monitoring and automation for these critical components to prevent similar issues in the future.