Runpanther experienced a minor incident on October 20, 2025 affecting Panther Console (Web App) and Data Ingestion into Panther (Log Processing) and 1 more component, lasting 22h 44m. The incident has been resolved; the full update timeline is below.
Update timeline
- identified Oct 20, 2025, 06:55 PM UTC
AWS is continuing to experience service disruptions in the us-east-1 region, which may still cause errors or delays when accessing or using Panther. We’ve confirmed that data ingestion and alert processing are not impacted; events are being queued and will continue to process downstream as services stabilize. However, it may take some time for ingestion pipelines to fully catch up.

You can follow AWS’s progress directly on their Service Health Dashboard: https://health.aws.amazon.com/health/status

We’ll continue to post updates here as we confirm full recovery and return to normal operation. Thank you for your patience while we monitor and validate system stability.
- monitoring Oct 21, 2025, 12:46 AM UTC
Panther services experienced temporary degradation due to a regional AWS outage in us-east-1 and have now returned to normal operation. During the incident, some users saw intermittent authentication errors and delays in data ingestion, enrichment, detection processing, and alert generation. Events are being queued and will continue to process downstream as services stabilize. However, it may take some time for ingestion pipelines to fully catch up.

Here is the current summary of how each Panther component was impacted and is actively being remediated:

Data Ingestion and Enrichment:
- HTTP Sources: If you received 5xx errors when attempting to send data to a Panther HTTP source, please send the data again, as it was neither properly ingested nor stored in a queue.
- Data is currently being recovered from the Dead-Letter Queue (DLQ). We expect the data to continue flowing in over the next 24 hours as failed jobs are requeued.

Detections:
- Detection processing may be delayed. Panther is requeuing all events that failed to run through their associated detections.

Alerting:
- Alerts were generated as detections were processed, which could have been delayed.
- Delivery of alerts to destinations may have been delayed.
- If a destination itself was down, Panther retried delivery a number of times, then marked the delivery as failed in the Console and generated a System Error.

You can review AWS’s incident details on the AWS Service Health Dashboard: https://health.aws.amazon.com/health/status

We’ll continue to post updates here as we confirm full recovery and return to normal operation. Thank you for your patience while we monitor and validate system stability.
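For senders hit by the HTTP Source guidance above, a client-side retry with exponential backoff is one way to resend data that failed with a 5xx. This is a minimal sketch, not Panther's API: `send` stands in for whatever call your pipeline makes to your source URL with your token, and the function names and parameters here are illustrative.

```python
import time

def send_with_retry(send, payload, max_attempts=5, base_delay=1.0):
    """Resend `payload` via `send` (a callable returning an HTTP status code),
    retrying with exponential backoff on 5xx responses.

    `send` is a hypothetical wrapper around your HTTP client call to the
    source endpoint; it is not part of any Panther SDK.
    """
    status = None
    for attempt in range(max_attempts):
        status = send(payload)
        if status < 500:
            # 2xx means the events were accepted; a 4xx will not
            # succeed on retry, so stop either way.
            return status
        if attempt < max_attempts - 1:
            # Back off 1s, 2s, 4s, ... between attempts.
            time.sleep(base_delay * (2 ** attempt))
    return status
```

Backoff matters here because, per the update above, queues were still draining: immediate hammering of a recovering endpoint only prolongs the 5xx window.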
- resolved Oct 21, 2025, 05:39 PM UTC
Panther services have fully recovered from the regional AWS outage in us-east-1, and all systems are now operating normally. During the incident, some customers experienced intermittent authentication errors and delays in data ingestion, enrichment, detection processing, and alert generation. All impacted components have since stabilized, and ingestion pipelines have caught up. No further action is required from customers.