Flexera incident
Flexera One - IT Asset Management - NAM - Menu items not loading
Flexera experienced a major incident on October 19, 2025 affecting IT Asset Management menu items in the North America (NAM) region, lasting 46 minutes (5:26 AM to 6:12 AM PDT). The incident has been resolved; the full update timeline is below.
Affected components
- Flexera One - IT Asset Management (NAM)
Update timeline
- investigating Oct 19, 2025, 12:55 PM UTC
Incident Description: We are currently investigating a service degradation impacting IT Asset Management (ITAM) services in the North America region. While other pages, such as Inventory, continue to function normally, affected customers may experience difficulties accessing certain menu items within the ITAM application.
Priority: P2
Restoration Activity: Our teams are actively investigating the issue and working toward resolution. We are continuing to evaluate mitigation options and will provide further updates as progress is made.
- resolved Oct 19, 2025, 01:18 PM UTC
Our teams identified a database node in an unhealthy state and performed a restart to restore functionality. All services are now fully operational and performing as expected.
- postmortem Oct 31, 2025, 12:48 PM UTC
**Description:** Flexera One - IT Asset Management - NAM - Menu items not loading

**Timeframe:** October 19, 2025, 5:26 AM PDT to October 19, 2025, 6:12 AM PDT

**Incident Summary**

On Sunday, October 19, 2025, at 5:26 AM PDT, monitoring systems detected a service degradation impacting IT Asset Management (ITAM) services in the North America region. During the incident, other functionality, such as Inventory pages, continued to operate normally, but affected customers experienced difficulties accessing certain menu items within the ITAM application. The technical teams promptly began investigating and identified that a database service component was in an unhealthy state, causing the partial service degradation. A restart of the affected component was performed to restore functionality. By 6:12 AM PDT, all ITAM services were fully operational and performing as expected.

**Root Cause**

The investigation determined that the root cause was a database service component that entered a bad state due to a one-off network issue. The transient nature of the issue prevented the teams from reproducing it during post-incident validation. There was no evidence of a persistent fault or systemic configuration issue contributing to the incident.

**Remediation Actions**

- The affected database service component was restarted, restoring normal functionality.
- Health checks and monitoring alerts were validated to ensure all services were stable following recovery.
- A thorough post-incident review was conducted to confirm no residual impact or data inconsistency.

**Future Preventative Measures**

- **Redundancy Review:** Review and enhance redundancy measures for critical database service components to improve resilience against similar transient network issues.
- **Monitoring Enhancements:** Evaluate improvements to monitoring and auto-recovery mechanisms for quicker detection and recovery from unhealthy service states; a sketch of one such mechanism follows below.
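To make the auto-recovery idea concrete, here is a minimal sketch of a health-probe loop that restarts a database node after repeated failed checks, automating the manual restart described in the postmortem. This is an illustration only, not Flexera's actual tooling: the health endpoint, thresholds, and restart command (`db-node-1.internal`, `systemctl restart db-node`) are all hypothetical assumptions.

```python
"""Illustrative auto-recovery sketch; endpoint and commands are hypothetical."""
import subprocess
import time
import urllib.request

HEALTH_URL = "http://db-node-1.internal:8080/health"  # hypothetical endpoint
CHECK_INTERVAL_S = 30         # seconds between probes
FAILURE_THRESHOLD = 3         # consecutive failures before restarting
RESTART_CMD = ["systemctl", "restart", "db-node"]  # hypothetical unit name


def node_is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the health endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, and URLError
        return False


def monitor() -> None:
    failures = 0
    while True:
        if node_is_healthy(HEALTH_URL):
            failures = 0
        else:
            failures += 1
            print(f"health check failed ({failures}/{FAILURE_THRESHOLD})")
            if failures >= FAILURE_THRESHOLD:
                # Restart the unhealthy node, mirroring the manual
                # remediation performed during the incident.
                subprocess.run(RESTART_CMD, check=False)
                failures = 0
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    monitor()
```

Requiring several consecutive failures before acting is a deliberate choice here: since the root cause was itself a one-off network blip, restarting on a single failed probe would risk unnecessary restarts on exactly the kind of transient condition the measure is meant to absorb.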