Flexera incident

Flexera One – SaaS Manager – NA – Service Outage

Critical · Resolved

Flexera experienced a critical incident on December 17, 2025, affecting IT Asset Management - US SaaS Manager, lasting 1 hour and 32 minutes (5:16 AM to 6:48 AM PST, per the postmortem). The incident has been resolved; the full update timeline follows.

Started: Dec 17, 2025, 05:09 PM UTC
Resolved: Dec 17, 2025, 05:09 PM UTC
Duration: 1 hour 32 minutes (impact window per the postmortem)
Detected by Pingoru: Dec 17, 2025, 05:09 PM UTC

Affected components

IT Asset Management - US SaaS Manager

Update timeline

  1. Resolved: Dec 17, 2025, 05:09 PM UTC

    Incident Description: A service outage impacted Flexera One SaaS Manager in the North America region. During the impact window, the application may have been unavailable, affecting access to Managed SaaS Applications.

    Priority: P1
    Impact Start Time: December 17, 2025, 5:16 AM PST
    Impact End Time: December 17, 2025, 6:48 AM PST

    Restoration Activity: The disruption was caused by an unexpected backlog of internal processing activity that increased load on the service, resulting in temporary unavailability. Technical teams restored the service and adjusted system capacity to safely process the backlog. The application has been fully restored, and monitoring remains in place to ensure continued stability.

  2. Postmortem: Dec 30, 2025, 10:21 AM UTC

    **Description:** Flexera One – SaaS Manager – NA – Service Disruption
    **Timeframe:** December 17, 2025, 5:16 AM PST to December 17, 2025, 6:48 AM PST

    **Incident Summary**

    On Wednesday, December 17, 2025, at 5:16 AM PST, our teams detected a service outage affecting Flexera One SaaS Manager in the North America region. During this period, customers may have experienced service unavailability and reduced performance when accessing Managed SaaS Applications. Prior to the outage, the issue had been observed intermittently with low impact, but it later escalated into a complete service disruption.

    Once the issue was identified, our technical teams began investigating the cause. They determined that the outage resulted from a significant backlog of message volume following a previous service disruption. This accumulation placed excessive load on the SaaS Manager service component and its underlying cloud compute resources, ultimately leading to an unhealthy service state. The issue was resolved after we provisioned additional capacity to handle the accumulated message load. By 6:48 AM PST, full service functionality was restored and confirmed once the capacity stabilized.

    **Root Cause**

    The issue arose from a substantial buildup of messages following an earlier service disruption. This backlog placed persistent strain on the SaaS Manager service component and its underlying cloud compute resources. As message volume continued to increase, the affected component became resource-constrained and entered an unhealthy state, ultimately leading to a production outage. The service was unable to recover automatically because it lacked sufficient capacity to process the accumulated workload.

    **Remediation Actions**

    - Scaled cloud compute capacity to process the accumulated message backlog and restored the impacted SaaS Manager service component.
    - Verified successful message processing and service health post-restoration.
    - Confirmed full recovery through monitoring and customer validation.

    **Future Preventative Measures**

    - Auto-Scaling Enablement: Auto-scaling has been enabled for the affected service component to dynamically adjust capacity during sudden increases in message volume, allowing the service to recover automatically under similar load conditions (see the first sketch after this timeline).
    - Enhanced Monitoring and Early Detection: Monitoring improvements are being implemented to detect abnormal message accumulation and service degradation earlier, and alert thresholds are being refined to trigger proactive intervention before service health is impacted (see the second sketch after this timeline).
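As an illustration of the auto-scaling measure above, here is a minimal sketch of backlog-driven scaling: a control loop that sizes a worker pool in proportion to queue depth. It assumes a pollable queue and a resizable pool; every name and number here (`get_queue_depth`, `set_replicas`, `MESSAGES_PER_REPLICA`) is a hypothetical placeholder, not Flexera's actual implementation.

```python
import math
import time

# Illustrative tuning values (assumptions, not Flexera's real numbers):
# how many queued messages one replica can drain per interval, and hard
# bounds so the pool can never scale to zero or run away.
MESSAGES_PER_REPLICA = 5_000
MIN_REPLICAS = 2
MAX_REPLICAS = 20
INTERVAL_SECONDS = 60


def get_queue_depth() -> int:
    """Placeholder: return the number of unprocessed messages."""
    raise NotImplementedError


def set_replicas(count: int) -> None:
    """Placeholder: resize the worker pool to `count` replicas."""
    raise NotImplementedError


def desired_replicas(depth: int) -> int:
    """Scale in proportion to the backlog, clamped to safe bounds."""
    wanted = math.ceil(depth / MESSAGES_PER_REPLICA)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))


def run() -> None:
    # One scaling decision per interval; a message backlog like the one
    # in this incident would drive the pool toward MAX_REPLICAS instead
    # of leaving a fixed-size pool to fall over.
    while True:
        set_replicas(desired_replicas(get_queue_depth()))
        time.sleep(INTERVAL_SECONDS)
```

With these example numbers, a 60,000-message backlog would yield 12 replicas; the clamp at `MAX_REPLICAS` is what keeps a runaway backlog from provisioning unbounded capacity.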
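The early-detection measure can be sketched the same way: an alert that fires only when the backlog is both large and consistently growing, which is roughly the intermittent, low-impact accumulation pattern the postmortem says preceded the outage. The class and thresholds below are assumptions for illustration, not Flexera's monitoring stack.

```python
from collections import deque

# Illustrative thresholds (assumptions): the absolute backlog size worth
# paging on, and how many consecutive growing samples count as a trend.
DEPTH_THRESHOLD = 50_000
GROWTH_WINDOW = 5


class BacklogMonitor:
    """Alert when the backlog is both large and steadily growing."""

    def __init__(self) -> None:
        self._samples: deque[int] = deque(maxlen=GROWTH_WINDOW)

    def should_alert(self, current_depth: int) -> bool:
        self._samples.append(current_depth)
        window = list(self._samples)
        # Require a full window of strictly increasing samples so a
        # single spike does not page anyone.
        growing = len(window) == GROWTH_WINDOW and all(
            earlier < later for earlier, later in zip(window, window[1:])
        )
        return current_depth > DEPTH_THRESHOLD and growing
```

Fed one sample per monitoring interval, this fires only on sustained accumulation, matching the stated goal of triggering intervention before service health is impacted.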