Recorded Future incident

Delay in content processing and alerting.


Recorded Future experienced a notice-level incident on October 20, 2025 affecting Alerts, API, and one other component, lasting 59 minutes. The incident has been resolved; the full update timeline is below.

Started
Oct 20, 2025, 07:20 PM UTC
Resolved
Oct 20, 2025, 08:19 PM UTC
Duration
59m
Detected by Pingoru
Oct 20, 2025, 07:20 PM UTC

Affected components

Alerts, API, Collection and Processing

Update timeline

  1. investigating Oct 20, 2025, 07:20 PM UTC

    Dear Customer, We are currently experiencing a delay in our content collection and analysis processes exceeding 1 hour in duration as a result of a service disruption with our hosting provider, AWS. This means that there is a delay between when content is published and when it is available in Recorded Future’s web application, API, and alerts. Other systems are running normally, historical data is still available, and we do not anticipate any data loss from this issue. We know the root cause, and are working to mitigate the issue, but do not have an estimate for resolution at this time. We will follow up when the system is back to normal, but please contact our support team at [email protected] if you have any questions or concerns. Regards, Recorded Future Platform Operations

  2. identified Oct 20, 2025, 07:42 PM UTC

    Our team has mitigated the issue, and the processing delay is reducing. We will provide an updated estimated resolution time once we are able to confirm the speed of the reduction.

  3. resolved Oct 20, 2025, 08:19 PM UTC

    Dear Customer, We are now seeing the analysis processing returning to normal timeframes. While delays may still be present, delay times are within expected ranges. Regards, Recorded Future Platform Operations

  4. postmortem Nov 03, 2025, 07:33 PM UTC

    * **Issue:**
      * On the day of the major [Amazon Web Services (AWS) outage](https://aws.amazon.com/message/101925/), our systems experienced a significant delay in data processing. Specifically, at 19:20, a normal surge in data arrived for processing by Recorded Future. However, due to the ongoing AWS outage, neither automatic nor manual scaling of our processing capacity was technically possible. This lack of available compute resources caused the data processing queues to suffer unexpected delays.
    * **Cause:**
      * The primary cause was the [AWS regional issue in US-EAST-1](https://aws.amazon.com/message/101925/), which directly inhibited the ability of our Platform to dynamically increase resource capacity (scaling) to meet the incoming data load.
    * **Remediation / Path Forward:**
      * Once AWS resolved the underlying issue that was preventing scaling, our systems automatically increased capacity and processed the queued data as designed, bringing the queues back to normal levels.
      * Recorded Future is actively discussing a strategic initiative to migrate our systems to alternative regions to enhance our resilience against future single-region AWS outages.
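
The failure mode the postmortem describes can be sketched with a toy queue model (this is an illustration only, not Recorded Future's actual architecture; all numbers and names here are hypothetical): a surge of incoming items arrives while the autoscaler is blocked, so the backlog grows; once scaling is possible again, added capacity drains the queue back to normal.

```python
# Hypothetical illustration of a backlog building up while autoscaling is
# blocked (as during the AWS outage), then draining once scaling resumes.
# None of these figures come from the incident report.

def simulate(minutes, arrivals_per_min, base_capacity, scaled_capacity,
             scaling_blocked_until):
    """Return the queue backlog (items) at the end of each minute."""
    backlog = 0
    history = []
    for t in range(minutes):
        capacity = base_capacity
        # The autoscaler would add capacity when a backlog exists,
        # but it only works once the blocking outage is over.
        if backlog > 0 and t >= scaling_blocked_until:
            capacity = scaled_capacity
        backlog = max(0, backlog + arrivals_per_min[t] - capacity)
        history.append(backlog)
    return history

# A 20-minute surge of 200 items/min against a base capacity of 100/min,
# with scaling unavailable for the first 30 minutes.
arrivals = [200] * 20 + [100] * 40
history = simulate(60, arrivals,
                   base_capacity=100,
                   scaled_capacity=300,
                   scaling_blocked_until=30)
```

In this sketch the backlog peaks while scaling is blocked and sits there until minute 30, then the extra capacity works it off, mirroring the "queues back to normal levels" outcome described above.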