Flexera One - IT Visibility - NAM - Data Update Delays
Update timeline
- identified Mar 20, 2026, 03:34 AM UTC
**Incident Description:** We are currently investigating an issue affecting data update operations in the North America (NAM) production environment. While the ITV platform remains accessible, customers may experience delays in workflows that rely on data updates.

**Priority:** P2

**Restoration Activity:** Our technical teams have identified an issue within the database cluster and are actively working with the service provider to restore stable operations. Write activity has been temporarily paused as a precaution while recovery actions are in progress. We continue to monitor the situation closely and will provide further updates as progress is made.
- identified Mar 20, 2026, 05:37 AM UTC
Our technical teams are actively working to restore full stability of the affected database cluster. Write activity remains paused to ensure data consistency and prevent further impact while recovery actions are in progress. We continue to monitor the environment closely and will provide further updates as progress is made.
- resolved Mar 20, 2026, 08:18 AM UTC
Our technical teams have restored stability of the affected database cluster and resumed write operations. Services are operating normally, and we will continue to monitor the environment closely.
- postmortem Apr 06, 2026, 04:24 AM UTC
**Description:** Flexera One – IT Visibility – NAM – Data Update Delays

**Timeframe:** March 19, 2026, 7:35 PM PDT – March 20, 2026, 1:13 AM PDT

**Incident Summary**

On March 19, 2026, at approximately 7:35 PM PDT, technical teams identified an issue affecting data update activity in the North America (NAM) production environment for Flexera One IT Visibility. During the incident window, write operations were paused while technical teams assessed service stability and recovery progress. As a result, customers may have experienced delays in workflows that depend on data updates.

As the incident progressed, technical teams confirmed that one of the production data clusters in the NAM environment had become unstable. Recovery began during the incident, and a primary was re-established within the affected cluster. However, write operations remained unstable for a period, so write activity stayed paused while the environment recovered further. Once the affected cluster stabilized and recovery was complete, write activity was resumed and the platform began catching up on delayed data updates. By March 20, 2026, at approximately 1:13 AM PDT, write operations had resumed, delayed updates had caught up, and the incident was considered resolved.

**Root Cause**

The incident was caused by instability in one of the production data clusters in the NAM environment, which disrupted normal write activity. Although recovery started during the incident and a primary was re-established, write operations remained unstable until the affected cluster had fully recovered. To protect the environment during recovery, write activity remained paused until service stability was restored, which delayed customer data updates during the incident window.
**Remediation Actions**

The following actions were taken during the incident response:

- Issue Identification and Response: Technical teams identified the issue affecting data update activity in the NAM production environment and began recovery efforts.
- Protective Write Pause: Write operations were paused as a precaution while service stability and recovery progress were assessed.
- Recovery Monitoring: Technical teams continued recovery efforts after a primary was re-established, monitoring the environment until it was stable enough to resume write activity.
- Controlled Write Restoration: Write activity was resumed once the affected cluster had recovered and stable processing could safely continue.
- Catch-Up Validation: Technical teams monitored the environment as delayed updates were processed and confirmed that the platform had caught up before closing the incident.

**Future Preventative Measures**

Based on this incident, the following follow-up actions are being taken:

- Cluster Recovery Review: We are continuing to work with the technical teams to review what occurred with the affected production cluster and to identify improvements that may reduce the likelihood of similar disruption in the future.
- Recovery Process Improvements: We are reviewing the recovery approach used during this incident to strengthen stabilization and restoration activities in similar scenarios.
- Service Provider Engagement: We are continuing to engage our external service provider to support the follow-up review and help ensure long-term stability of the affected environment.
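The protective write pause and controlled restoration described in the postmortem follow a common recovery pattern: gate writes off, poll cluster health, and resume only after the cluster has stayed healthy for several consecutive checks. The sketch below illustrates that pattern generically; the health predicate, function names, and thresholds are illustrative assumptions, not Flexera's actual implementation.

```python
import time


def cluster_is_healthy(status: dict) -> bool:
    """Hypothetical health predicate: the cluster reports an elected
    primary and write latency is back under a safe threshold."""
    return status.get("has_primary", False) and status.get("write_latency_ms", float("inf")) < 50


def recover_with_write_gate(get_status, pause_writes, resume_writes,
                            poll_interval_s=1.0, stable_polls_required=3):
    """Pause writes as a precaution, then resume only after the cluster
    has reported healthy for `stable_polls_required` consecutive polls.

    Requiring several consecutive healthy polls avoids resuming writes
    during a transient recovery (e.g. a primary that is re-established
    but not yet stable), mirroring the incident timeline above.
    """
    pause_writes()
    stable = 0
    while stable < stable_polls_required:
        if cluster_is_healthy(get_status()):
            stable += 1
        else:
            stable = 0  # any unhealthy poll resets the stability counter
        time.sleep(poll_interval_s)
    resume_writes()
```

In a real deployment `get_status` would query the cluster (for example, a replica-set status call), and `pause_writes`/`resume_writes` would toggle an ingestion gate upstream of the database.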