Flexera incident
Flexera One - IT Visibility - EU - Third-party data ingestion delay
Affected components
- Flexera One - IT Visibility - EU
Update timeline
- identified Mar 17, 2026, 04:56 AM UTC
Incident Description: We are currently investigating an issue affecting inventory ingestion within Flexera One – IT Visibility (ITV) in the EU production environment. The service is experiencing a processing backlog, which may result in delays in inventory updates from third-party data sources. ITV user interfaces and APIs remain fully accessible, and core functionality is not impacted.

Priority: P3

Restoration Activity: Our technical teams have identified the likely cause as resource constraints in a managed caching service. We have increased capacity and are actively evaluating system recovery while working to clear the backlog. We are closely monitoring the situation and will provide further updates as progress is made.
- resolved Mar 17, 2026, 08:11 AM UTC
Inventory ingestion has been fully restored, and data processing is operating normally. We are proactively replaying customer third-party data to ensure completeness and prevent any missed updates. All other third-party data sources are already up to date and processing as expected.
- postmortem Apr 01, 2026, 04:38 AM UTC
**Description:** Flexera One – IT Visibility – EU – Delayed Inventory Updates from Third-Party Data Sources

**Timeframe:** March 16, 2026, 7:40 PM PDT – March 17, 2026, 1:30 AM PDT

**Incident Summary**

On March 16, 2026, at approximately 7:40 PM PDT, an issue was identified affecting inventory ingestion within the Flexera One IT Visibility service in the EU production environment. During this period, inventory data received from certain third-party data sources was delayed in processing, resulting in backlogged inventory updates for some customers. Throughout the incident, the IT Visibility user interface and APIs remained accessible; the impact was limited to delays in the processing and reflection of inventory updates from third-party integrations.

Technical teams began investigating immediately and implemented corrective actions to restore processing and recover the affected backlog. As part of the recovery effort, processing capacity was increased and affected third-party data was replayed to ensure that no updates were missed. Processing subsequently resumed and the backlog was cleared. By March 17, 2026, at approximately 1:30 AM PDT, recovery had been confirmed and the incident was considered resolved.

**Root Cause**

The incident was associated with resource constraints affecting the inventory ingestion flow, which caused processing delays and backlog growth for inventory data received from third-party data sources in the EU region. At the time of writing, internal review is continuing to determine the precise underlying cause of the condition that led the service into this state.

**Remediation Actions**

The following actions were taken during the incident response:

1. Incident Investigation Initiated: Technical teams began investigating the delayed processing affecting third-party inventory ingestion in the EU region.
2. Capacity Increase Applied: Processing capacity was increased to relieve the constraint contributing to the backlog.
3. Processing Recovery Monitored: Processing behavior was monitored to confirm that data flow had resumed successfully.
4. Data Replay Performed: Third-party data was replayed as part of recovery to ensure that delayed updates were processed completely.
5. Service Restoration Validated: Technical teams confirmed that backlog processing had caught up and that inventory ingestion had returned to expected operation.

**Future Preventative Measures**

This incident highlighted the need for continued review of ingestion performance and earlier detection of backlog-related degradation affecting third-party inventory processing. Based on the information currently available, the following follow-up activities are being pursued:

1. Underlying Cause Review: Complete a deeper review to determine the precise underlying cause of the processing constraint that led to the backlog condition.
2. Alerting and Detection Improvements: Review and improve alerting and detection thresholds to identify similar ingestion slowdowns sooner (a minimal illustration of such a check follows this list).
3. Recovery Process Review: Review the additional issues identified during incident response that affected the resolution effort and determine whether follow-up improvements are needed.
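On the alerting point above: one common pattern for catching ingestion slowdowns earlier is to alert on sustained backlog lag rather than single spikes. The sketch below is illustrative only and is not Flexera's implementation; the thresholds and the `LagSample` helper are hypothetical, assuming a monitor that periodically samples the age of the oldest unprocessed third-party inventory message.

```python
from dataclasses import dataclass
import time

# Hypothetical thresholds; the real values would come from the underlying-cause review.
BACKLOG_LAG_WARN_SECONDS = 15 * 60   # warn if the oldest message is >15 min old
SUSTAINED_SAMPLES = 3                # require 3 consecutive breaches before alerting

@dataclass
class LagSample:
    taken_at: float       # unix timestamp when the sample was taken
    oldest_age_s: float   # age of the oldest unprocessed message, in seconds

def should_alert(samples: list[LagSample]) -> bool:
    """Alert only when ingestion lag breaches the threshold on several
    consecutive samples, filtering out momentary spikes."""
    recent = samples[-SUSTAINED_SAMPLES:]
    if len(recent) < SUSTAINED_SAMPLES:
        return False
    return all(s.oldest_age_s > BACKLOG_LAG_WARN_SECONDS for s in recent)

# Example: a backlog that keeps growing trips the alert on the third sample.
now = time.time()
history = [
    LagSample(now - 600, 16 * 60),
    LagSample(now - 300, 22 * 60),
    LagSample(now, 31 * 60),
]
print(should_alert(history))  # True
```

Requiring several consecutive breaches keeps the alert actionable during brief spikes while still surfacing a genuinely growing backlog well before it becomes a multi-hour delay.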
Looking to track Flexera downtime and outages?
Pingoru polls Flexera's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
- Real-time alerts when Flexera reports an incident
- Email, Slack, Discord, Microsoft Teams, and webhook notifications
- Track Flexera alongside 5,000+ providers in one dashboard
- Component-level filtering
- Notification groups + maintenance calendar
5 free monitors · No credit card required
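As background on the polling approach described above: many hosted status pages expose a JSON summary endpoint, so a monitor only needs to poll it on a fixed cadence and notify when the overall indicator changes. The sketch below is a minimal illustration, not Pingoru's implementation; the `STATUS_URL` placeholder and the `status.indicator` payload shape are assumptions based on the common Statuspage format.

```python
import json
import time
import urllib.request

# Assumed Statuspage-style endpoint; substitute the provider's real status URL.
STATUS_URL = "https://status.example.com/api/v2/status.json"
POLL_INTERVAL_S = 300  # 5 minutes, matching the cadence described above

def fetch_indicator(url: str) -> str:
    """Return the overall status indicator, e.g. 'none', 'minor', 'major'."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["indicator"]

def poll_forever() -> None:
    last = None
    while True:
        indicator = fetch_indicator(STATUS_URL)
        if indicator != last:  # notify only on a change, not on every poll
            print(f"status changed: {last!r} -> {indicator!r}")
            # a real monitor would fan out to email/Slack/webhooks here
            last = indicator
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    poll_forever()  # runs until interrupted
```

Notifying only on indicator changes is what keeps a five-minute cadence quiet in steady state yet immediate when an incident opens.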