Flexera incident

Flexera One – NAM – Service degradation

Critical · Resolved

Flexera experienced a critical incident on November 18, 2025 affecting IT Asset Management - US Beacon Communication, IT Asset Management - US Inventory Upload, and six more components, lasting roughly 39 minutes (12:54 AM to 1:33 AM PST, per the postmortem). The incident has been resolved; the full update timeline is below.

Started: Nov 18, 2025, 06:15 AM UTC
Resolved: Nov 18, 2025, 06:15 AM UTC
Duration: —
Detected by Pingoru: Nov 18, 2025, 06:15 AM UTC

Affected components

IT Asset Management - US Beacon Communication
IT Asset Management - US Inventory Upload
IT Asset Management - US Login Page
IT Asset Management - US Batch Processing System
IT Asset Management - US Business Reporting
IT Asset Management - US SaaS Manager
Cloud Cost Optimization - US
IT Asset Management - US Restful APIs

Update timeline

  1. resolved Nov 18, 2025, 06:15 AM UTC

    **Incident Description:** On 17 November at 00:54 AM PST, our teams identified an issue impacting Flexera One in the NAM region. Affected customers were unable to access Flexera One using their credentials.

    **Priority:** P1

    **Restoration Activity:** Our teams were immediately engaged and initiated an investigation. The root cause was traced to a recent release that, despite being successfully tested in staging and deployed in other production regions, inadvertently caused a service outage in NAM due to human error. Upon identifying the issue, the team promptly reverted the change to the last known good state, restoring services at 01:33 AM PST. A comprehensive root cause analysis will be conducted, and we will share the post-mortem report once it is available.

  2. postmortem Dec 02, 2025, 12:51 PM UTC

    **Description:** Flexera One – NAM – Service degradation

    **Timeframe:** November 17, 2025, 12:54 AM PST to November 17, 2025, 01:33 AM PST

    **Incident Summary**

    On 17 November 2025 at 00:54 AM PST, teams identified an issue preventing customers and internal users in the NAM region from accessing Flexera One. Users attempting to log in via SSO were unable to authenticate successfully, resulting in complete access disruption for affected customers. Other regions (EU and APAC) remained fully functional.

    Investigation determined that the outage began immediately following a recent application release deployed in NAM. This release had passed full staging validation and was already operating successfully in EU and APAC. However, although the deployment process completed without errors, the application in NAM began failing as soon as the new build was activated.

    Teams discovered that the failure was caused by a missing database field required by the new release. As part of the recent change, a new field was introduced in one of the database tables. The database migrations had been completed in EU and APAC, enabling successful deployments in those regions. However, the same migration had not been executed in NAM, resulting in a schema mismatch. When the new application code attempted to read from the missing field, the service in NAM failed, preventing all login traffic from completing successfully.

    The deployment was rolled back at 01:33 AM PST on 17 November 2025, after which user access returned to normal. Teams continued monitoring afterward and confirmed no further stability issues.

    **Root Cause**

    A database schema mismatch occurred in the NAM region during the deployment of a recent application release:

    · The release introduced a new field in one of the database tables; this change was fully tested in staging and successfully deployed in the EU and APAC regions.
    · In every region except NAM, the required database migration had been completed, ensuring schema alignment with the new application code.
    · The corresponding migration had not yet been executed in NAM, leaving the database schema out of sync with the deployed application.
    · The deployment process assumed all regions were aligned and proceeded without validating the NAM schema before deployment.
    · As a result, when the updated application attempted to access the missing database field, the server failed, triggering authentication and SSO access outages for all users in the NAM region.

    **Remediation Actions**

    · **Deployment Rollback** - Teams executed an immediate rollback of the release in the NAM region at 01:33 AM PST, restoring the previous stable version of the application and reestablishing access for both customers and internal users.
    · **Comprehensive System Validation** - Post-rollback, functional and authentication tests were performed to verify that Flexera One access, SSO workflows, and background services were operating correctly. Extended monitoring confirmed no additional failures.

    **Future Preventative Measures**

    · **Pre-deployment Verification** - The standard operating procedure has been updated so that teams explicitly confirm all required database migrations have been successfully executed in the target region before any deployment. Deployment will not proceed unless the necessary fields and schema changes are verified as present.
    · **Stronger Coordination Between Application and Migration Teams** - Communication and confirmation processes will be strengthened to ensure database migrations are fully completed before any release dependent on schema updates is deployed.
    · **Enhanced Deployment Observability** - Monitoring and alerting will be added to detect schema-related errors such as missing-field access, invalid column references, or failed queries immediately upon deployment.
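The pre-deployment verification measure above can be sketched as a simple schema gate: before activating a new build in a region, confirm that every column the release depends on actually exists in that region's database. The sketch below is illustrative only, not Flexera's actual tooling; it uses SQLite for portability, and the table and column names (`users`, `sso_provider`) are hypothetical stand-ins for the migration-added field described in the postmortem.

```python
import sqlite3


def missing_columns(conn, table, required):
    """Return the required columns absent from `table`'s current schema."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    present = {row[1] for row in rows}  # row[1] is the column name
    return sorted(set(required) - present)


def predeploy_check(conn, expectations):
    """Gate a deployment: report any migration-added columns still missing.

    `expectations` maps table name -> list of columns the new release reads.
    An empty result means the schema is aligned and the rollout may proceed.
    """
    problems = {}
    for table, cols in expectations.items():
        gap = missing_columns(conn, table, cols)
        if gap:
            problems[table] = gap
    return problems


# Simulate a region (like NAM here) where the migration that adds
# `sso_provider` was never executed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
gaps = predeploy_check(conn, {"users": ["id", "email", "sso_provider"]})
if gaps:
    print(f"Blocking deployment: missing columns {gaps}")
```

In a real pipeline this check would run per region against the production schema (e.g. via `information_schema.columns` on PostgreSQL or MySQL) and fail the deploy job on a non-empty result, which would have flagged the NAM mismatch before the new build was activated.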