- Detected by Pingoru: Apr 29, 2026, 06:33 PM UTC
- Resolved: Apr 29, 2026, 06:33 PM UTC
- Duration: —
Affected: IT Asset Management - US SaaS Manager
Timeline · 1 update
- resolved Apr 29, 2026, 06:33 PM UTC
Incident Description: We experienced an issue affecting Flexera One SaaS Manager in the NAM region. During the impact window, affected customers may have been unable to access the application.
Priority: P1
Restoration Activity: Our technical teams identified the issue and completed the required restoration activities. Access has been restored, and we will continue to monitor the environment to ensure continued stability. A formal retrospective will be conducted to review the root cause, actions taken during the incident, and any long-term preventative measures. A postmortem report summarizing those findings will be shared following the retrospective.
- Detected by Pingoru: Apr 28, 2026, 03:31 PM UTC
- Resolved: Apr 28, 2026, 05:50 PM UTC
- Duration: 2h 19m
Affected: IT Asset Management - US Batch Processing System
Timeline · 2 updates
- investigating Apr 28, 2026, 03:31 PM UTC
Incident Description: We are currently investigating an issue affecting reconciliation processing for Flexera One IT Asset Management in the NA region. While the application remains accessible, affected customers may experience delays in reconciliation completion and related processing activities.
Priority: P2
Restoration Activity: Our technical teams are actively engaged and are investigating the cause of the reconciliation processing delays. We are reviewing the affected processing components and working to restore normal processing. Further updates will be provided as more information becomes available.
- resolved Apr 28, 2026, 05:50 PM UTC
Our technical teams identified that the reconciliation processing delays were caused by a deployment that occurred while reconciliation tasks were already running. This caused some affected reconciliation jobs to enter a repeated retry state instead of completing as expected, which delayed processing for impacted customers. The issue has been addressed, and newly started reconciliation jobs are completing successfully. Jobs that were stopped as part of recovery will run again based on their normal schedule. We will continue monitoring processing health and are tracking longer-term improvements to reduce the risk of a similar issue occurring again.
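As an illustration of the failure mode described above, the sketch below shows a job runner that caps retries with exponential backoff and surfaces a terminal error instead of looping indefinitely. This is a generic sketch; the exception names and limits are hypothetical, not Flexera's implementation.

```python
import time

class TransientError(Exception):
    """A failure that may succeed on retry (e.g. a dependency mid-deploy)."""

class RetryExhausted(Exception):
    """Raised when a job gives up instead of retrying indefinitely."""

def run_with_bounded_retries(job, max_attempts=5, base_delay=2.0):
    """Run `job`, retrying transient failures with exponential backoff.

    After max_attempts the error is surfaced so a scheduler can
    reschedule or alert, rather than the job sitting in a repeated
    retry state like the one described in this incident.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except TransientError:
            if attempt == max_attempts:
                raise RetryExhausted(f"gave up after {attempt} attempts")
            time.sleep(base_delay * 2 ** (attempt - 1))
```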
- Detected by Pingoru: Apr 21, 2026, 09:01 AM UTC
- Resolved: Apr 21, 2026, 09:23 AM UTC
- Duration: 21m
Affected: Snow Atlas - Australia, Snow Atlas API - Australia
Timeline · 2 updates
- investigating Apr 21, 2026, 09:01 AM UTC
Incident Description: We are currently experiencing an issue affecting SAM Core on Snow Software in the Australia region. Affected users may encounter errors or experience difficulties when attempting to load certain pages within the application.
Priority: P2
Restoration Activity: Our technical teams are actively investigating the issue. Initial analysis indicates a potential problem within the messaging system that may be contributing to the observed behavior. Further updates will be provided as more information becomes available.
- resolved Apr 21, 2026, 09:23 AM UTC
Our teams identified the issue as being caused by a recent deployment. The change has been rolled back, and services have been successfully restored to normal operation.
- Detected by Pingoru: Apr 14, 2026, 12:15 PM UTC
- Resolved: Apr 14, 2026, 12:15 PM UTC
- Duration: —
Affected: IT Asset Management - US Beacon Communication, IT Asset Management - US Inventory Upload
Timeline · 1 update
- resolved Apr 14, 2026, 12:15 PM UTC
Incident Description: Our teams previously identified an issue impacting the IT Asset Management service in the NAM region, affecting a subset of customers. The issue began on April 12 and was fully resolved on April 13 at 2:21 AM PDT. While the platform remained accessible throughout, some customers experienced intermittent failures when their beacons attempted to communicate with the Flexera One platform. During this time, certain requests returned internal server errors. As a result, impacted customers may have encountered delays or failures in data retrieval processes that rely on beacon connectivity.
Priority: P3
Restoration Activity: Our teams identified the underlying cause of the intermittent internal server errors as a resource constraint on a server. Corrective actions were implemented to restore system stability and ensure adequate capacity. Following these actions, beacon communications returned to normal operation. The platform continues to be closely monitored to confirm sustained stability, and additional safeguards are being evaluated to reduce the recurrence of similar issues in the future.
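One safeguard of the kind mentioned above is a periodic host-capacity check that flags resource saturation before requests start failing. The sketch below is a generic illustration using the third-party psutil library; the thresholds and polling approach are assumptions, not details from this incident.

```python
import psutil  # third-party system-metrics library, assumed available

CPU_LIMIT = 85.0   # percent; illustrative thresholds only
MEM_LIMIT = 90.0

def check_capacity():
    """Return warnings when the host nears saturation.

    Polled by a scheduler, a check like this can surface the kind of
    resource constraint behind intermittent 500s before it bites.
    """
    warnings = []
    cpu = psutil.cpu_percent(interval=1.0)
    mem = psutil.virtual_memory().percent
    if cpu > CPU_LIMIT:
        warnings.append(f"CPU at {cpu:.0f}% (limit {CPU_LIMIT:.0f}%)")
    if mem > MEM_LIMIT:
        warnings.append(f"memory at {mem:.0f}% (limit {MEM_LIMIT:.0f}%)")
    return warnings
```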
- Detected by Pingoru: Apr 08, 2026, 11:07 AM UTC
- Resolved: Apr 08, 2026, 02:11 PM UTC
- Duration: 3h 3m
Affected: IT Asset Management - EU Inventory Upload
Timeline · 3 updates
- investigating Apr 08, 2026, 11:07 AM UTC
Incident Description: We are investigating an issue impacting inventory data uploads for IT Asset Management (ITAM) in the EU region. Affected customers may experience upload failures and related error messages.
Priority: P2
Restoration Activity: Our technical teams are actively investigating the cause and working to restore normal upload processing. We are monitoring the environment closely and will continue to share updates as progress is made toward full restoration.
- resolved Apr 08, 2026, 02:11 PM UTC
We have identified the issue affecting inventory uploads in the EU region. The disruption was caused by a service responsible for handling communication between components becoming unavailable following an unexpected system event. The service has since been successfully restored, and inventory uploads are now processing as expected. There has been no data loss, as affected uploads will automatically retry and complete. Our teams are actively monitoring the system to ensure stability as processing returns to normal and are investigating the underlying cause of the event. Additional safeguards are also being implemented to help prevent recurrence.
- postmortem Apr 22, 2026, 03:43 AM UTC
**Description:** Flexera One – IT Asset Management – EU – Inventory Upload Failures
**Timeframe:** April 7, 2026, 10:53 AM PDT – April 8, 2026, 7:45 AM PDT
**Incident Summary**
On April 7, 2026, at approximately 10:53 AM PDT, an issue began affecting inventory data uploads for Flexera One IT Asset Management in the EU production environment. During this period, customers in the EU region experienced failures when attempting to upload inventory data, which also resulted in delays in downstream data processing.
Technical teams began investigating the issue after reports were received of upload failures affecting the EU region. During the investigation, it was confirmed that the upload endpoint was reachable and responding, while customer upload attempts were still failing. Further analysis continued to identify the source of the disruption and determine why uploads were not being processed successfully.
The issue was subsequently identified as a required service in the EU production environment being unexpectedly stopped. As a result, upload requests were not processed successfully during the affected period, which caused authentication failures and prevented data from being received for processing. Once the service was restarted, uploads began processing successfully again and recovery activity was monitored.
By April 8, 2026, at approximately 7:45 AM PDT, successful uploads had resumed and the incident was considered resolved. It was confirmed during the incident that there was no data loss, as affected uploads would automatically re-upload after service restoration. Technical teams continued monitoring following restoration to ensure recovery progressed as expected.
**Root Cause**
The incident was caused by a required service in the EU production environment being unexpectedly stopped. This interruption prevented inventory upload requests from being processed successfully, resulting in authentication failures and no data being received for processing during the affected period. Further analysis is underway to determine the cause of the unexpected service interruption.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Investigation Initiated: Technical teams began investigating reports of inventory upload failures affecting the EU region.
2. Endpoint Behavior Reviewed: Technical teams confirmed that the upload endpoint was reachable and responding while further analysis continued to isolate the source of the failure.
3. Cause Identified: Investigation determined that a required service in the EU production environment had been unexpectedly stopped.
4. Service Restarted: The affected service was restarted in the EU production environment.
5. Upload Recovery Validated: Technical teams confirmed that uploads were processing successfully again following service restoration.
6. Post-Restoration Monitoring Performed: Recovery activity and backlog behavior were monitored after restoration.
**Future Preventative Measures**
The following follow-up actions were identified during the incident:
1. Automatic Service Restart Measures: We are implementing measures to help ensure the affected service starts automatically in the event of a similar occurrence.
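The "automatic service restart" measure above is usually delegated to a process manager (for example, a restart-on-failure policy); the sketch below approximates that behavior with a minimal supervisor loop. This is a generic illustration, not Flexera's tooling, and the service path is hypothetical.

```python
import subprocess
import time

def supervise(cmd, poll_seconds=5):
    """Keep a service process running, restarting it if it exits.

    Stand-in for a process manager's restart-on-failure policy; real
    deployments would add backoff, restart limits, and alerting.
    """
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(poll_seconds)
        if proc.poll() is not None:  # process exited unexpectedly
            print(f"service exited with {proc.returncode}; restarting")
            proc = subprocess.Popen(cmd)

# Example (hypothetical service binary):
# supervise(["/usr/local/bin/upload-broker"])
```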
- Detected by Pingoru: Apr 02, 2026, 08:50 PM UTC
- Resolved: Apr 07, 2026, 09:36 PM UTC
- Duration: 5d
Affected: CloudCheckr US region, CloudCheckr - EU (European Region), CloudCheckr - AU (Australia & New Zealand region), CloudCheckr GOV region, CloudCheckr Federal
Timeline · 6 updates
- investigating Apr 02, 2026, 08:50 PM UTC
Incident Description: We have identified an issue affecting CloudCheckr inventory and cost collection across all regions. As a result, some customers may experience delays in inventory and cost data collection.
Priority: P2
Restoration Activity: The disruption has been linked to service issues affecting our service provider. Our technical teams are monitoring the situation closely, assessing any downstream impact, and will provide further updates as more information becomes available.
- identified Apr 03, 2026, 12:12 AM UTC
Our technical teams continue to monitor the issue affecting CloudCheckr inventory and cost collection across all regions and are assessing the potential for broader impact over the next couple of days as billing-related processing activity increases. We will continue to provide updates as more information becomes available.
- identified Apr 06, 2026, 06:04 PM UTC
Our technical teams continue to work on the issue affecting CloudCheckr inventory and cost collection across all regions. A product-side mitigation is now being deployed across CloudCheckr regions to help address the ongoing impact on collection workflows when communication with the affected external region fails. Testing is being finalized, and deployments are in progress across CloudCheckr regions. We will continue to provide updates as more information becomes available.
- monitoring Apr 07, 2026, 05:42 AM UTC
Fix deployment has been successfully completed across all regions, and our teams are actively monitoring the services to ensure continued stability and expected performance.
- resolved Apr 07, 2026, 09:36 PM UTC
Following the fix deployment across all CloudCheckr regions, the environment has remained stable and technical team validation has confirmed recovery. This incident is now considered resolved.
- postmortem Apr 22, 2026, 03:42 AM UTC
**Description:** CloudCheckr – All Regions – Inventory and Cost Collection Delays
**Timeframe:** April 1, 2026, 1:00 PM PDT – April 7, 2026, 7:21 AM PDT
**Incident Summary**
On April 1, 2026, at approximately 1:00 PM PDT, an issue was identified affecting inventory and cost collection within CloudCheckr across all regions. During this period, some customers experienced delays in inventory and cost data collection.
During the investigation, technical teams confirmed impact across the US, EU, AU, GOV, and HSE regions. The issue was linked to service disruptions affecting an external cloud service provider in Middle East regions. As the incident progressed, technical teams confirmed that the issue was causing failures in discovery workflows and was also impacting billing and invoicing for customers.
Technical teams monitored the environment closely while developing and validating a mitigation. A product-side mitigation was then deployed across all CloudCheckr regions to bypass the affected Middle East region on a per-customer basis when communication with that region failed. Following completion of the deployments, technical teams continued monitoring and validating service behavior across regions.
By April 7, 2026, at approximately 7:21 AM PDT, recovery had been confirmed, with no remaining concerns at that time regarding data processing, billing, or invoicing, and the incident was considered resolved.
**Root Cause**
The incident was caused by service disruptions affecting an external cloud service provider in Middle East regions. As a result, CloudCheckr collection workflows encountered timeouts when processing customers with usage in the affected regions, which led to delays in inventory and cost data collection and also impacted billing and invoicing.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Investigation Initiated: Technical teams began investigating delays affecting inventory and cost collection across CloudCheckr regions.
2. External Service Disruption Identified: The issue was linked to service disruptions affecting an external cloud service provider in Middle East regions.
3. Impact Assessment Performed: Technical teams assessed the effect on collection workflows, including discovery, billing, and invoicing.
4. Monitoring Continued: Teams continued to monitor service behavior and customer impact while evaluating mitigation options.
5. Product-Side Mitigation Deployed: A mitigation was deployed across CloudCheckr regions to bypass the affected Middle East region on a per-customer basis when communication with that region failed.
6. Recovery Validated: Following deployment, technical teams monitored the environment and confirmed recovery across all regions before resolving the incident.
**Future Preventative Measures**
* Resilient Regional Collection Handling: We will further strengthen the collection logic to reduce dependency on a single affected region during external service disruptions. This includes improving how collection workflows detect repeated regional communication failures and continue processing in a way that helps minimize delays to inventory, cost, billing, and invoicing activities across other unaffected regions.
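The per-customer bypass described above resembles a circuit-breaker pattern: after repeated timeouts for one customer/region pair, collection skips that region for a cooldown period so the remaining regions still complete on schedule. Below is a minimal sketch of that idea; all names and thresholds are illustrative, not the actual mitigation code.

```python
import time

FAILURE_THRESHOLD = 3        # consecutive timeouts before bypassing
BYPASS_SECONDS = 30 * 60     # how long to skip the region before retrying

_bypass_until = {}           # (customer, region) -> epoch seconds
_failures = {}               # (customer, region) -> consecutive failures

def collect_region(customer, region, fetch):
    """Collect one region for one customer, bypassing regions that
    keep timing out so other regions keep completing."""
    key = (customer, region)
    if time.time() < _bypass_until.get(key, 0):
        return None  # region temporarily bypassed for this customer
    try:
        data = fetch(customer, region)  # caller-supplied collector call
        _failures[key] = 0
        return data
    except TimeoutError:
        _failures[key] = _failures.get(key, 0) + 1
        if _failures[key] >= FAILURE_THRESHOLD:
            _bypass_until[key] = time.time() + BYPASS_SECONDS
        return None
```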
- Detected by Pingoru: Apr 02, 2026, 08:02 PM UTC
- Resolved: Apr 02, 2026, 08:47 PM UTC
- Duration: 44m
Affected: IT Asset Management - US Beacon Communication, IT Visibility US, Cloud License Management - US, Cloud Commitment Management - US, IT Asset Management - US Inventory Upload, IT Asset Management - US Login Page, IT Asset Management - US Batch Processing System, IT Asset Management - US Business Reporting, IT Asset Management - US SaaS Manager, Cloud Cost Optimization - US, IT Asset Management - US Restful APIs, Cloudscape
Timeline · 4 updates
- identified Apr 02, 2026, 08:02 PM UTC
Incident Description: We identified an issue affecting access to the Flexera One application in the North America (NAM) region. During the incident window, customers may have experienced difficulties accessing the application.
Priority: P1
Restoration Activity: Technical teams are engaged and actively implementing mitigation steps to restore access in the NAM region. We will continue to provide updates as more information becomes available.
- monitoring Apr 02, 2026, 08:22 PM UTC
Service has been restored, and customer access in the NAM region is operating normally at this time. Our technical teams are continuing to monitor closely to ensure sustained stability following the mitigation steps that were implemented.
- resolved Apr 02, 2026, 08:47 PM UTC
Technical teams identified a recent service change as the source of the disruption and reverted the affected changes during the incident. Following those actions, customer access in the NAM region was restored. Services have remained stable since recovery, and our technical teams continue to monitor closely to ensure sustained stability.
- postmortem Apr 17, 2026, 05:21 AM UTC
**Description:** Flexera One – NAM – Access Disruption
**Timeframe:** April 2, 2026, 12:39 PM PDT – April 2, 2026, 1:02 PM PDT
**Incident Summary**
On April 2, 2026, at approximately 12:39 PM PDT, an issue was identified affecting access to the Flexera One application in the North America (NAM) region. During this period, customers may have experienced difficulties accessing the Flexera One application.
Technical teams began investigating immediately and identified elevated load affecting an access-related backend service involved in authentication and request processing. This degraded service behavior temporarily impacted customer access in the NAM region.
Mitigation actions were initiated during the incident response, including reverting recent changes associated with the affected service. Following these actions, system performance improved and customer access was restored.
By April 2, 2026, at approximately 1:02 PM PDT, access to the Flexera One application in NAM had returned to expected operation. After validation and monitoring confirmed stable recovery, the incident was considered resolved.
**Root Cause**
The incident was caused by a recent service change that introduced inefficient database request behavior within an access-related backend service. This created elevated load in the production environment and temporarily disrupted customer access to the Flexera One application.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Detection and Response Initiated: Technical teams were alerted to the access disruption affecting the NAM region and began immediate investigation.
2. Impact Isolation: It was confirmed that the customer-facing impact was limited to the NAM region.
3. Service Review and Diagnosis: Technical teams reviewed recent changes and identified elevated load affecting the related backend service.
4. Corrective Action Applied: Recent changes associated with the affected service were reverted to restore normal operation.
5. Service Restoration Verification: Customer access and application performance were validated following the mitigation actions.
6. Post-Recovery Monitoring: Services were monitored after restoration to confirm continued normal operation.
**Future Preventative Measures**
This incident highlighted the importance of validating performance-related service changes under production-scale demand conditions. Based on this experience, the following measures are being applied:
1. Performance Validation Enhancements: We are reviewing pre-production validation approaches to better identify query efficiency issues under higher traffic conditions.
2. Deployment Safeguards: We are strengthening staged rollout and monitoring practices for changes affecting access-related services.
3. Database Access Optimization: Following service restoration through rollback of the recent change, technical teams are addressing the inefficient database access behavior identified during the investigation before similar changes are reintroduced.
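Inefficient database request behavior of the kind named in the root cause often takes the form of per-item queries on a hot path. The sketch below contrasts that pattern with a batched equivalent; the `db.query` interface, table, and SQL dialect are hypothetical, used only to illustrate the difference in load.

```python
# Hypothetical failure mode: one query per user on a hot
# authentication path multiplies database round trips under load.
def load_sessions_n_plus_one(db, user_ids):
    return [db.query("SELECT * FROM sessions WHERE user_id = %s", (uid,))
            for uid in user_ids]  # N round trips for N users

# Batched equivalent: one round trip regardless of len(user_ids).
def load_sessions_batched(db, user_ids):
    return db.query(
        "SELECT * FROM sessions WHERE user_id = ANY(%s)",
        (list(user_ids),),
    )
```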
- Detected by Pingoru: Mar 31, 2026, 10:12 PM UTC
- Resolved: Apr 04, 2026, 06:13 AM UTC
- Duration: 3d 8h
Affected: IT Visibility US, IT Visibility EU, IT Visibility - APAC
Timeline · 9 updates
- identified Mar 31, 2026, 10:12 PM UTC
Incident Description: We are currently investigating an issue affecting third-party inventory imports in Flexera One IT Visibility across all regions. As a result, some customers may experience delays in inventory processing and timeouts during imports.
Priority: P2
Restoration Activity: Our technical teams have identified the issue and are actively implementing mitigation steps to restore normal processing. Recovery actions are underway across the affected processing path. As processing resumes, some customers may continue to experience residual delays and timeouts while backlog processing is worked through.
- identified Mar 31, 2026, 10:39 PM UTC
Our technical teams have identified the root cause and implemented a hotfix to restore normal data notification flow across the affected regions. Recovery activity is now focused on replaying missed notifications generated during the affected period. As this work continues, some customers may still experience residual delays and timeouts while backlog processing completes.
- identified Apr 01, 2026, 06:58 AM UTC
Backlog processing continues to progress at a steady rate following the hotfix implementation. Our teams are actively monitoring the recovery to ensure complete resolution and system stability.
- identified Apr 01, 2026, 04:15 PM UTC
Our technical teams continue recovery efforts following the hotfix, and newly submitted inventory packages are progressing through processing as expected. Recovery work is currently focused on addressing missed messages from the affected period and validating the backfill approach for those items. As this work continues, some customers may still experience delays in processing.
- identified Apr 02, 2026, 04:30 PM UTC
Our technical teams have completed generation of the retroactive messages needed for recovery and are now validating the replay approach before reingestion begins. This work is being carried out carefully to minimize excessive downstream strain during recovery. Newly submitted inventory packages continue to process as expected, while recovery for missed messages remains in progress.
- identified Apr 03, 2026, 12:34 PM UTC
Our technical teams are continuing to validate the replay approach required for reingestion of previously missed messages. Newly submitted inventory packages are processing as expected, and recovery efforts for missed messages remain actively in progress.
- identified Apr 03, 2026, 06:08 PM UTC
Our technical teams have completed further validation and confirmed that newly submitted data is now flowing as expected for the affected processing path. Based on this validation, retroactive replay of previously missed messages is not required. Recovery efforts are now focused on confirming full restoration of the remaining impact, and some customers may continue to experience temporary delays until that validation is complete.
- resolved Apr 04, 2026, 06:13 AM UTC
Our teams have completed the validation for all remaining organizations and confirmed that services have returned to normal. The incident has been closed, and additional details will be shared in a post-mortem report.
- postmortem Apr 20, 2026, 05:27 AM UTC
**Description:** Flexera One – IT Visibility – All Regions – Third-Party Inventory Import Processing Disruption
**Timeframe:** March 31, 2026, 3:00 PM PDT to April 3, 2026, 11:12 PM PDT
**Incident Summary**
On Tuesday, March 31, 2026, at 3:00 PM PDT, an issue was identified affecting third-party (external) inventory imports within Flexera One IT Visibility across all regions. During the impact window, customers using these external inventory connections experienced delays in inventory processing and timeouts during import operations. The issue was observed across all regions (NAM, EU, and APAC), while other inventory ingestion methods remained fully operational.
Multiple customer cases were reported, and technical teams initiated an investigation promptly. The issue was traced to disruptions in the inventory processing pipeline, where uploaded data was not advancing through the workflow as expected. After the issue was identified, a hotfix was deployed, restoring normal data flow for newly incoming inventory.
While new data began processing successfully, some customers continued to experience delays as previously missed processing events were analyzed and system backlogs were cleared. Subsequent validation confirmed that newer inventory data superseded earlier missed data, removing the need for full reprocessing. Services were fully validated and confirmed to be operating as expected prior to incident closure on April 3, 2026, at 11:12 PM PDT.
**Root Cause**
The issue was caused by a missing configuration setting introduced during a recent release, which impacted inventory processing across all regions. This configuration gap resulted in certain processing events not being triggered, preventing uploaded inventory data from progressing through the expected pipeline.
Contributing Factors:
* Configuration Gap in Release: A required environment setting was not present in production following deployment.
* Processing Pipeline Interruption: Missing triggers prevented inventory data from progressing to downstream systems.
* Regional Impact Variations: While the issue affected all regions, some regions experienced additional symptoms such as delayed processing queues.
* Detection Delay: The issue primarily affected asynchronous processing, which delayed immediate visibility.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Deployed a hotfix to restore the missing configuration across all regions.
* Restored normal processing of inventory data for all newly submitted uploads.
* Investigated and validated the status of previously impacted data.
* Confirmed that newer data submissions correctly updated downstream systems.
* Monitored system performance to ensure stability and full recovery.
**Future Preventative Measures**
* Implement Enhanced Release Validation Controls: Ensure all required environment configurations are validated and present prior to production deployment.
* Strengthen End-to-End Pipeline Monitoring: Introduce comprehensive monitoring across asynchronous processing flows to detect failures or delays in real time.
* Enforce Configuration Consistency Across Regions: Establish safeguards to prevent configuration drift and ensure uniform deployments across all environments.
* Processing Gap Detection Alerts: Review and upgrade alerting mechanisms to identify missing or delayed processing events within the ingestion pipeline.
* Optimize Recovery and Backlog Handling Procedures: Improve recovery strategies to efficiently handle missed processing events and reduce impact from backlog accumulation.
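The first preventative measure above (validating required configuration before production deployment) can be enforced with a simple pre-deployment gate. A minimal sketch follows; the setting names are placeholders, since the real configuration keys are not public.

```python
import os
import sys

# Placeholder names standing in for whatever the pipeline requires.
REQUIRED_SETTINGS = [
    "IMPORT_QUEUE_URL",
    "NORMALIZE_TRIGGER_TOPIC",
    "REGION_NAME",
]

def validate_environment():
    """Fail a deployment early if any required setting is absent,
    instead of discovering the gap when asynchronous processing
    silently stalls."""
    missing = [name for name in REQUIRED_SETTINGS if not os.environ.get(name)]
    if missing:
        sys.exit(f"deployment blocked; missing settings: {', '.join(missing)}")

if __name__ == "__main__":
    validate_environment()
```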
- Detected by Pingoru: Mar 31, 2026, 08:08 PM UTC
- Resolved: Mar 31, 2026, 09:38 PM UTC
- Duration: 1h 30m
Affected: IT Visibility US
Timeline · 4 updates
- investigating Mar 31, 2026, 08:08 PM UTC
Incident Description: We are investigating an issue affecting Data Explorer within Flexera One IT Visibility in the NA region. Customers may encounter errors when using Data Explorer, and requests may fail to return results as expected.
Priority: P2
Restoration Activity: Our technical teams are actively investigating the issue and working to restore normal functionality. We are closely monitoring progress and will provide further updates as they become available.
- identified Mar 31, 2026, 08:53 PM UTC
Our technical teams have identified a potential cause and are actively implementing mitigation steps to restore Data Explorer functionality in the NA region. Current investigation indicates the failure is related to a service-side issue affecting request processing, and rollback actions are currently in progress. We are monitoring the outcome of these changes closely and will provide a further update as soon as more information becomes available.
- resolved Mar 31, 2026, 09:38 PM UTC
Rollback actions have been completed successfully, and Data Explorer functionality has been restored in the NA region. Our investigation determined the issue was related to a service-side deployment/configuration problem affecting request processing. Service has been validated following mitigation, and requests are now completing as expected. We will complete a full root cause analysis with the teams involved and provide a post-mortem report outlining the underlying cause and preventive measures identified.
- postmortem Apr 20, 2026, 06:07 AM UTC
**Description:** Flexera One – IT Visibility – NA – Data Explorer Request Failures
**Timeframe:** March 31, 2026, 12:41 PM PDT to March 31, 2026, 1:56 PM PDT
**Incident Summary**
On Tuesday, March 31, 2026, at 12:41 PM PDT, our teams detected an issue affecting the Data Explorer feature within Flexera One IT Visibility in the NA region. Customers in this region encountered errors when using Data Explorer; requests failed to return results as expected and, in some cases, resulted in HTTP 500 errors, preventing successful query execution.
The impact was isolated to Data Explorer functionality in the NA region. Other regions, including EU and APAC, were not affected, and the broader Flexera One platform remained fully accessible throughout the incident.
Engineering teams began investigating immediately and determined that the issue originated in a backend service responsible for processing Data Explorer queries. Initial analysis suggested a potential problem with request handling. Subsequent validation confirmed that customer requests were valid and that the failure was occurring service-side within this backend component.
The issue was resolved by reverting the affected configuration changes in the backend service, which restored service stability. Post-recovery validation confirmed that Data Explorer queries completed successfully and that normal functionality was fully restored for customers in the NA region.
**Root Cause**
During their investigations, our technical teams identified that the issue was caused by a partial or inconsistent deployment in the NA region. Configuration changes were applied without the corresponding service components being fully deployed. This mismatch caused runtime failures during query processing, resulting in HTTP 500 errors when Data Explorer queries were executed. The issue was not related to authentication or customer-submitted queries, despite initial error messages suggesting otherwise.
Contributing Factors:
* Partial Deployment State: Configuration updates were applied without matching service binaries.
* Service Runtime Failures: The mismatch led to failures during query generation in the backend service.
* Misleading Error Messages: UI errors suggested request issues, which delayed precise identification of the service-side cause.
* Regional Isolation: The issue was limited to NA due to differences in deployment state across regions.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Reverted the affected configuration changes in the NA environment.
* Restored alignment between deployed configuration and service components.
* Validated successful execution of Data Explorer queries across affected organizations.
* Monitored system performance to confirm stability post-recovery.
**Future Preventative Measures**
* Review Post-Deployment Consistency Checks: Ensure configuration and service components are aligned across all regions after deployment.
* Enhance Error Handling and Messaging: Improve system behavior so backend failures are accurately reflected in user-facing error messages.
* Strengthen Regional Deployment Monitoring: Expand monitoring coverage for critical services to detect issues earlier.
* Improve Rollback Validation Processes: Establish more robust validation steps to ensure faster and more reliable recovery following rollback actions.
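A post-deployment consistency check of the kind proposed above could compare the version a running service actually reports against the version its applied configuration expects. The sketch below assumes a hypothetical `/version` endpoint and payload; the real endpoint and schema would be service-specific.

```python
import json
import urllib.request

def check_deployment_consistency(service_url, expected_version):
    """Post-deployment smoke check: confirm the running service reports
    the version the applied configuration was written for, so a
    config/binary mismatch is caught before traffic is affected."""
    with urllib.request.urlopen(f"{service_url}/version", timeout=5) as resp:
        reported = json.load(resp).get("version")
    if reported != expected_version:
        raise RuntimeError(
            f"config expects {expected_version} but service runs {reported}; "
            "halt the rollout or roll back"
        )
```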
- Detected by Pingoru: Mar 31, 2026, 03:04 PM UTC
- Resolved: Mar 31, 2026, 08:44 PM UTC
- Duration: 5h 40m
Affected: IT Visibility EU
Timeline · 6 updates
- investigating Mar 31, 2026, 03:04 PM UTC
Incident Description: We are investigating an issue affecting Flexera One IT Visibility in the EU region. Customers may experience errors when accessing the Evidence UI and may also encounter failures with ZIP exports and Query exports.
Priority: P2
Restoration Activity: Our technical teams are actively investigating the issue and working to restore full functionality as quickly as possible.
- identified Mar 31, 2026, 04:02 PM UTC
Our technical teams have identified a recent change as a likely contributor to the issue and are actively working to restore affected functionality in the EU region. As part of mitigation, teams are preparing an alternate recovery path while continuing efforts to resolve the issue as quickly as possible. We will share further updates as soon as more information becomes available.
- identified Mar 31, 2026, 04:55 PM UTC
Our technical teams have completed mitigation and moved traffic to the restored service path. Early indications show the issue has been resolved, though some customers may continue to experience temporary impact while network updates propagate. We are continuing to validate recovery and will provide a further update once full restoration has been confirmed.
- monitoring Mar 31, 2026, 05:29 PM UTC
Our technical teams have completed mitigation steps, and current indicators point to recovery of affected services in the EU region. Some customers may continue to experience temporary impact for a limited period while network updates complete in their environment. If the issue persists, restarting the affected machine may help. We are continuing to monitor restoration closely.
- resolved Mar 31, 2026, 08:44 PM UTC
Mitigation steps were completed successfully, and full recovery has been confirmed. Affected services in the EU region are now operating normally. We will conduct a full root cause analysis and provide a post-mortem report highlighting the underlying cause and the preventative measures identified as part of our follow-up to this incident.
- postmortem Apr 16, 2026, 06:30 AM UTC
**Description:** Flexera One – IT Visibility – EU – Errors Affecting Evidence UI and Export Functions
**Timeframe:** March 31, 2026, 7:48 AM PDT to March 31, 2026, 1:20 PM PDT
**Incident Summary**
On Tuesday, March 31, 2026, at 7:48 AM PDT, an issue was identified affecting IT Visibility (ITV) functionality in the EU region. Customers experienced errors when accessing the Evidence UI, as well as failures when performing ZIP exports and Query exports. Requests returned HTTP 503 errors, resulting in incomplete or unsuccessful operations.
During the impact window, customers in the EU region experienced failures across key ITV workflows, including accessing the Evidence UI and performing ZIP and Query exports. While the Flexera One platform remained accessible, these specific functionalities did not operate as expected.
The issue was detected through monitoring alerts and internal investigation. Engineering teams from multiple groups engaged immediately and worked in parallel to identify the cause and restore service. Initial mitigation efforts, including rollback of recent changes, did not fully resolve the issue. As part of recovery, a new infrastructure cluster was provisioned and traffic was redirected to it. Following this action, services began recovering, and full functionality was restored after DNS propagation completed. Some customers may have experienced brief residual impact due to caching before full recovery was realized.
**Root Cause**
The issue was caused by a deployment-related configuration inconsistency affecting secure communication settings in the EU region. During a recent infrastructure deployment, a critical configuration responsible for secure service communication was unintentionally recreated. This led to intermittent failures in request routing, resulting in HTTP 503 errors for affected ITV functionalities. Although the deployment initially appeared successful, the issue manifested under specific conditions and impacted multiple organizations within the EU region.
Contributing Factors:
* Configuration Difference: Differences between deployed configurations in regions led to inconsistent behavior, with EU being uniquely impacted.
* Deployment Side Effects: Infrastructure changes unintentionally modified critical communication settings.
* Limited Functional Monitoring: Existing monitoring focused on service health, delaying detection of user-facing issues.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Rolled back the impacted deployment changes in the EU region.
* Provisioned a new infrastructure cluster and redirected traffic to stabilize services.
* Verified recovery of affected UI, ZIP export, and Query export functionality.
* Monitored system behavior and confirmed stability as configuration updates propagated.
**Future Preventative Measures**
* Stronger Deployment Consistency Controls: Ensuring configuration changes are applied consistently across all regions to prevent drift.
* Enhanced Post-Deployment Validation: Review and implement functional (end-to-end) validation tests to verify key workflows after deployments.
* Improved Monitoring and Alerting: Expanding monitoring to detect user-impacting failures (such as API and export failures).
* Deployment Safeguards and Review: Strengthening change review processes to identify and prevent unintended configuration changes during deployments.
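Configuration drift across regions, the first contributing factor above, can be detected by fingerprinting each region's effective configuration and flagging outliers. A minimal sketch follows; the region names and config shape are placeholders, not the actual deployment layout.

```python
import hashlib
import json

def config_fingerprint(config):
    """Stable hash of a region's effective configuration."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def detect_drift(configs_by_region):
    """Return regions whose configuration differs from the majority."""
    fingerprints = {r: config_fingerprint(c)
                    for r, c in configs_by_region.items()}
    counts = {}
    for fp in fingerprints.values():
        counts[fp] = counts.get(fp, 0) + 1
    majority = max(counts, key=counts.get)
    return [r for r, fp in fingerprints.items() if fp != majority]

# Example (hypothetical configs):
# detect_drift({"NA": na_cfg, "EU": eu_cfg, "APAC": apac_cfg}) -> ["EU"]
```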
- Detected by Pingoru: Mar 27, 2026, 09:14 PM UTC
- Resolved: Mar 27, 2026, 09:14 PM UTC
- Duration: —
Affected: IT Asset Management - US SaaS Manager, IT Asset Management - APAC SaaS Manager
Timeline · 2 updates
- resolved Mar 27, 2026, 09:14 PM UTC
Incident Description: An issue was identified that affected access to the Managed SaaS Applications page within the Flexera One SaaS Manager application. Customers encountered errors when attempting to load this page, impacting visibility into managed SaaS applications.
Priority: P2
Impact Start Time: March 27, 2026, 12:37 PM PDT
Impact End Time: March 27, 2026, 1:59 PM PDT
Impact Duration: 1 hour 22 minutes
Restoration Activity: The issue was traced to a subset of services supporting this functionality entering an error state. Corrective actions were taken, including restarting affected services and validating recovery across impacted regions. Full functionality has been restored, and the service continues to be monitored closely.
- postmortem Apr 16, 2026, 06:21 AM UTC
**Description:** Flexera One – SaaS Manager – NAM/APAC – Unable to Load Managed SaaS Applications
**Timeframe:** March 27, 2026, 12:37 PM PDT to March 27, 2026, 1:59 PM PDT
**Incident Summary**
On Friday, March 27, 2026, at 12:37 PM PDT, an issue was identified that affected access to the Managed SaaS Applications page within the Flexera One SaaS Manager application. Customers encountered errors when attempting to load this page, resulting in a temporary loss of visibility into managed SaaS applications.
During the impact window, customers in the NAM and APAC regions experienced errors when accessing the Managed SaaS Applications page. Users may have seen the error message: “An unexpected error occurred while loading managed applications.” The issue was limited to this specific functionality, and the rest of the Flexera One platform remained accessible and operational. No impact was observed in the EU region, and no customer cases were reported during the incident.
Service was restored by restarting the affected component, which stabilized the impacted functionality, and normal operation resumed within the stated timeframe.
**Root Cause**
* The issue occurred due to a temporary disruption in communication with a supporting internal service required for application functionality. During this time, certain application components became unstable and were unable to process requests correctly, resulting in errors when loading the affected page.
* The impacted service dependency was briefly unavailable but recovered after a restart, which restored normal operations. No code changes or permanent fixes were required.
Contributing Factors:
* Temporary unavailability of a supporting service dependency.
* Limited resilience of affected components to short-lived connectivity interruptions.
* Monitoring gaps, where alerts were triggered in one environment but not consistently across all affected components.
* Minimal error logging, with only health check failures observed, which reduced early visibility into the issue.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Restored normal operation by restarting the affected supporting service.
* Stabilized application components and verified successful page functionality.
* Confirmed service health across impacted regions following recovery.
**Future Preventative Measures**
* Improved Service Resilience: Enhancing application behavior to better tolerate brief interruptions in dependent services without impacting user experience.
* Enhanced Monitoring Coverage: Expanding monitoring to ensure consistent alerting across all environments and components.
* Alerting Improvements: Reviewing and refining alert mechanisms to ensure issues are detected promptly and reliably across all clusters.
- Detected by Pingoru: Mar 24, 2026, 06:12 PM UTC
- Resolved: Mar 24, 2026, 07:30 PM UTC
- Duration: 1h 18m
Affected: IT Visibility - APAC
Timeline · 4 updates
- investigating Mar 24, 2026, 06:12 PM UTC
Incident Description: We are investigating an issue affecting IT Visibility in the APAC region. Customers may experience intermittent failures when performing actions such as running queries or exporting/downloading data, with requests returning 503 errors. The Flexera One UI remains accessible; however, some features may not function as expected due to these service disruptions.
Priority: P2
Restoration Activity: Our technical teams are actively investigating the issue and working to identify the underlying cause. Further updates will be provided as more information becomes available.
- monitoring Mar 24, 2026, 07:21 PM UTC
We have implemented a mitigation and are observing signs of recovery in the APAC region. Initial validation indicates that previously impacted operations, including queries and data exports/downloads, are now completing successfully. Our teams are continuing to monitor the environment and perform validation to ensure full restoration of service.
- resolved Mar 24, 2026, 07:30 PM UTC
The issue affecting IT Visibility in the APAC region has been resolved. Customers should no longer experience errors when running queries or exporting/downloading data. The disruption was caused by an unintended configuration change introduced during a recent update, which impacted request routing for certain services. Our teams identified the issue and implemented corrective actions to restore normal service behavior. All services have been validated and are operating normally. We will continue to monitor the environment to ensure sustained performance.
- postmortem Apr 10, 2026, 12:23 PM UTC
**Description:** Flexera One – IT Visibility – APAC – Intermittent Errors During Queries and Data Exports
**Timeframe:** March 24, 2026, 11:12 AM PDT to March 24, 2026, 12:30 PM PDT
**Incident Summary**
On March 24, 2026, our monitoring systems and internal alerts identified intermittent failures impacting IT Visibility services in the APAC region. The impacted customers experienced errors when executing queries and exporting or downloading data, with some requests returning HTTP 503 responses. While the Flexera One service remained accessible, select functionalities relying on backend services were intermittently unavailable.
Initial investigation indicated that requests were intermittently failing before reaching backend services, pointing to an issue within the request routing. The behavior was observed across multiple endpoints and request types, confirming that the issue was not isolated to a specific feature.
Our technical teams identified that a recent configuration change had unintentionally impacted request routing behavior in the APAC region. Although similar changes had been deployed in other regions without impact, this change resulted in inconsistent routing behavior in APAC.
To mitigate the issue, the team provisioned new infrastructure with a corrected configuration and redirected traffic accordingly. Following this action, API responses returned to normal, and system health checks stabilized. All services were validated, and functionality was fully restored by 12:30 PM PDT. Continuous monitoring confirmed sustained recovery.
**Root Cause**
The issue was caused by an unintended configuration change within the gateway layer that affected request routing behavior for IT Visibility services in the APAC region.
Contributing Factors:
* The change was introduced as part of a routine configuration update and was not expected to impact production behavior.
* Variations in how the configuration was applied across environments led to inconsistent outcomes.
* Pre-deployment validation did not fully capture this behavior in the APAC environment.
* The nature of the change made its production impact not immediately apparent during review.
**Remediation Actions**
The following actions were taken by our technical teams to restore the service:
* Reverted the impacted configuration to a known stable state.
* Provisioned a new gateway cluster with the corrected configuration.
* Redirected traffic to the updated cluster to restore normal request routing.
* Validated API responses and service health across all affected endpoints.
* Continued monitoring to ensure sustained stability and normal operation.
**Future Preventative Measures**
* Controlled Deployment Governance for Gateway Changes: Introduce stricter controls to ensure that changes are clearly gated and promoted correctly.
* Enhanced Change Review, Testing & PR Standards: Review and implement a standardized PR template requiring clear documentation of change intent, expected impact, associated tracking, and validation steps.
* Environment Consistency & Validation Improvements: Strengthen validation processes to ensure consistent behavior across regions and environments before production rollout.
- Detected by Pingoru: Mar 20, 2026, 03:34 AM UTC
- Resolved: Mar 20, 2026, 08:18 AM UTC
- Duration: 4h 44m
Affected: IT Visibility US
Timeline · 4 updates
- identified Mar 20, 2026, 03:34 AM UTC
Incident Description: We are currently investigating an issue affecting data update operations in the North America (NAM) production environment. While the ITV platform remains accessible, customers may experience delays in workflows that rely on data updates.
Priority: P2
Restoration Activity: Our technical teams have identified an issue within the database cluster and are actively working with the service provider to restore stable operations. Write activity has been temporarily paused as a precaution while recovery actions are in progress. We continue to monitor the situation closely and will provide further updates as progress is made.
- identified Mar 20, 2026, 05:37 AM UTC
Our technical teams are actively working to restore full stability of the affected database cluster. Write activity remains paused to ensure data consistency and prevent further impact while recovery actions are in progress. We continue to monitor the environment closely and will provide further updates as progress is made.
- resolved Mar 20, 2026, 08:18 AM UTC
Our technical teams have restored stability of the affected database cluster and resumed write operations. Services are operating normally, and we will continue to monitor the environment closely.
- postmortem Apr 06, 2026, 04:24 AM UTC
**Description:** Flexera One – IT Visibility – NAM – Data Update Delays
**Timeframe:** March 19, 2026, 7:35 PM PDT – March 20, 2026, 1:13 AM PDT
**Incident Summary**
On March 19, 2026, at approximately 7:35 PM PDT, technical teams identified an issue affecting data update activity in the North America (NAM) production environment for Flexera One IT Visibility. During the incident window, write operations were paused while technical teams assessed service stability and recovery progress. As a result, customers may have experienced delays in workflows that depend on data updates.
As the incident progressed, technical teams confirmed that one of the production data clusters in the NAM environment had become unstable. Recovery began during the incident, and a primary was re-established within the affected cluster. However, write operations remained unstable for a period, so write activity continued to remain paused while the environment recovered further.
Once the affected cluster stabilized and recovery was completed, write activity was resumed and the platform began catching up on delayed data updates. By March 20, 2026, at approximately 1:13 AM PDT, write operations had resumed, delayed updates had caught up, and the incident was considered resolved.
**Root Cause**
The incident was caused by instability affecting one of the production data clusters in the NAM environment, which disrupted normal write activity. Although recovery started during the incident and a primary was re-established, write operations remained unstable until the affected cluster recovered fully. To protect the environment during recovery, write activity remained paused until service stability was restored. This resulted in delays to customer data updates during the incident window.
**Remediation Actions**
The following actions were taken during the incident response:
* Issue Identification and Response: Technical teams identified the issue affecting data update activity in the NAM production environment and began recovery efforts.
* Protective Write Pause: Write operations were paused as a precaution while service stability and recovery progress were assessed.
* Recovery Monitoring: Technical teams continued recovery efforts after a primary was re-established, while monitoring the environment until it was stable enough to resume write activity.
* Controlled Write Restoration: Write activity was resumed once the affected cluster had recovered and stable processing could safely continue.
* Catch-Up Validation: Technical teams monitored the environment as delayed updates were processed and confirmed that the platform had caught up before closing the incident.
**Future Preventative Measures**
Based on this incident, the following follow-up actions are being taken:
* Cluster Recovery Review: We are continuing to work with the technical teams to review what occurred with the affected production cluster and identify improvements that may help reduce the likelihood of similar disruption in the future.
* Recovery Process Improvements: We are reviewing the recovery approach used during this incident to help strengthen stabilization and restoration activities in similar scenarios.
* Service Provider Engagement: We are continuing to engage our external service provider to support follow-up review and help ensure long-term stability of the affected environment.
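The protective write pause described above can be modeled as a gate that write paths must pass through. The sketch below is a simplified in-memory illustration, not the actual mechanism used; production systems would persist the gate state and expose it to operators.

```python
import threading

class WriteGate:
    """Pause and resume write operations while a datastore recovers."""

    def __init__(self):
        self._open = threading.Event()
        self._open.set()  # writes allowed by default

    def pause(self):
        """Block new writes, e.g. while the cluster re-elects a primary."""
        self._open.clear()

    def resume(self):
        """Allow writes again once the cluster is stable."""
        self._open.set()

    def write(self, apply_fn, timeout=None):
        # Block (or time out) while the gate is paused, then apply the write.
        if not self._open.wait(timeout):
            raise TimeoutError("writes are paused for recovery")
        return apply_fn()
```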
- Detected by Pingoru: Mar 19, 2026, 02:20 AM UTC
- Resolved: Mar 20, 2026, 12:13 AM UTC
- Duration: 21h 52m
Affected: IT Visibility EU
Timeline · 6 updates
- identified Mar 19, 2026, 02:20 AM UTC
Incident Description: Our teams have identified and are actively investigating an issue impacting inventory processing in the EU production environment. While the ITV platform remains accessible, some customers may experience delays in processing newly uploaded inventory, and status pages may intermittently show timeouts during the normalization stage.
Priority: P2
Restoration Activity: Our teams have identified resource constraints within the database cluster and are currently upgrading the capacity to improve throughput. Our teams continue to monitor the environment closely while these mitigation actions are in progress.
- monitoring Mar 19, 2026, 05:31 AM UTC
The infrastructure upgrade to improve inventory normalization performance is currently in progress. At this time, the system is able to handle incoming data at the expected rate; however, the existing backlog is still being processed. We continue to closely monitor progress as the upgrade completes and additional improvements take effect.
- monitoring Mar 19, 2026, 06:24 AM UTC
Progress continues on the infrastructure improvements to restore normal processing performance. The primary cluster upgrade is now largely complete and is beginning to gradually reduce the existing backlog. Our teams have taken additional actions to further accelerate backlog clearance and improve overall processing times. We will continue to provide updates as performance continues to improve.
- monitoring Mar 19, 2026, 03:44 PM UTC
Progress continues on infrastructure improvements to restore normal processing performance. The primary cluster upgrade has now been completed, and backlog reduction is ongoing. Additional capacity has been brought online, and workload is being redistributed to further accelerate backlog clearance. We will continue to monitor progress closely and provide further updates as processing performance improves.
- resolved Mar 20, 2026, 12:13 AM UTC
The backlog affecting inventory processing in the EU production environment has been fully cleared. Processing has returned to expected levels, and all services are functioning as expected.
- postmortem Apr 03, 2026, 05:24 AM UTC
**Description:** Flexera One – IT Visibility – EU – Inventory Processing Delay
**Timeframe:** March 18, 2026, 5:58 PM PDT – March 19, 2026, 4:27 PM PDT
**Incident Summary**
On March 18, 2026, at approximately 5:58 PM PDT, technical teams identified a backlog affecting inventory normalization in the EU production environment for Flexera One IT Visibility. During this period, ITV UIs and APIs remained accessible and operational; however, some customers experienced delays in the processing of newly uploaded inventory, and status pages could intermittently display timeouts during the Normalize stage. Technical teams confirmed that the issue was limited to the EU region and that a majority of customers were expected to be impacted to some extent.
Recovery began with increasing resources on the existing cluster to improve normalization throughput. As the upgrade progressed, services kept up with the incoming data rate, but the backlog was not yet shrinking. Additional actions were then taken to accelerate recovery, including bringing a second cluster online and migrating high-traffic organizations to that environment to distribute load more effectively. These actions improved processing throughput and allowed the backlog to shrink steadily until it was fully cleared.
By March 19, 2026, at approximately 4:27 PM PDT, inventory normalization had returned to expected levels in the EU region, and the incident was considered resolved. Cleanup activities continued separately after service restoration.
**Root Cause**
The incident was caused by inventory normalization processing in the EU production environment falling behind the incoming workload, which resulted in a backlog. As a result, some newly uploaded inventory took longer than expected to process, and customers could intermittently see timeouts during the normalization stage while the backlog was being cleared.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Detection and Response Initiated: Technical teams were notified of the normalization backlog and began investigating the issue affecting inventory processing in the EU region.
2. Impact Isolation: It was confirmed that the issue was limited to the EU region and that ITV UIs and APIs remained accessible while normalization processing was delayed.
3. Cluster Resource Upgrade: Resources on the existing cluster were increased to improve normalization throughput.
4. Additional Cluster Provisioned: A second cluster was brought online to help distribute processing load.
5. Workload Redistribution: High-traffic organizations were migrated to the second cluster to accelerate backlog reduction.
6. Service Restoration Verification: Processing throughput was monitored and validated until the backlog was fully cleared and normalization returned to expected levels.
**Future Preventative Measures**
This incident highlighted the importance of maintaining sufficient throughput and monitoring within the inventory normalization process. Based on this experience, the following measures are being applied:
1. Capacity and Throughput Review: We are reviewing approaches to better support normalization demand and reduce the likelihood of similar backlogs.
2. Workload Distribution Readiness: We are strengthening readiness to distribute workload more effectively when processing demand increases.
3. Monitoring and Early Detection: We are reviewing monitoring and alerting approaches to help identify backlog growth sooner and support earlier remediation.
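The preventative measures above all depend on noticing backlog growth before it becomes customer-visible. As a rough illustration of that idea (not Flexera's tooling; every name and threshold below is invented), a monitor can compare arrival rate against Normalize-stage throughput and treat only a deep, monotonically growing, non-draining queue as a scale-out signal:

```python
# Hypothetical sketch: detect sustained normalization backlog growth and
# recommend scale-out. Not Flexera's implementation; all names are invented.
from dataclasses import dataclass

@dataclass
class QueueSample:
    enqueued_per_min: float   # inventory items arriving per minute
    processed_per_min: float  # items completing the Normalize stage per minute
    depth: int                # current queue depth

def should_scale_out(samples: list[QueueSample],
                     growth_windows: int = 3,
                     min_depth: int = 10_000) -> bool:
    """Flag scale-out when the queue is deep and has grown for N consecutive samples."""
    if len(samples) < growth_windows + 1 or samples[-1].depth < min_depth:
        return False
    recent = samples[-(growth_windows + 1):]
    growing = all(b.depth > a.depth for a, b in zip(recent, recent[1:]))
    # Matching the incoming rate is not enough: the backlog only shrinks
    # when processing rate exceeds arrival rate (as seen in this incident).
    draining = samples[-1].processed_per_min > samples[-1].enqueued_per_min
    return growing and not draining

samples = [QueueSample(900, 700, d) for d in (12_000, 15_500, 19_200, 23_800)]
print(should_scale_out(samples))  # True: deep queue, monotonic growth, not draining
```

The detail this incident surfaced is encoded in the `draining` check: the first cluster upgrade let services keep up with arrivals, yet the backlog held steady until extra capacity pushed throughput above the incoming rate.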
Read the full incident report →
- Detected by Pingoru
- Mar 18, 2026, 03:36 PM UTC
- Resolved
- Mar 18, 2026, 03:43 PM UTC
- Duration
- 7m
Affected: IT Asset Management - US Beacon Communication, IT Asset Management - EU Beacon Communication, IT Visibility US, IT Visibility EU, Cloud Cost Optimization - EU, IT Asset Management - APAC Beacon Communication, IT Visibility - APAC, Cloud Cost Optimization - APAC, Cloud License Management - US, Cloud License Management - EU, Cloud License Management - APAC, Cloud Commitment Management - US, IT Asset Management - US Inventory Upload, IT Asset Management - EU Inventory Upload, IT Asset Management - APAC Inventory Upload, IT Asset Management - US Login Page, IT Asset Management - EU Login Page, IT Asset Management - APAC Login Page, IT Asset Management - EU Batch Processing System, IT Asset Management - US Batch Processing System, IT Asset Management - APAC Batch Processing System, IT Asset Management - US Business Reporting, IT Asset Management - EU Business Reporting, IT Asset Management - APAC Business Reporting, IT Asset Management - US SaaS Manager, IT Asset Management - APAC SaaS Manager, Cloud Cost Optimization - US, IT Asset Management - EU SaaS Manager, IT Asset Management - APAC Restful APIs, Cloudscape
Timeline · 3 updates
-
investigating Mar 18, 2026, 03:36 PM UTC
Incident Description: We experienced an issue affecting access to the Flexera One UI across North America (NA), Europe (EU), and Asia-Pacific (APAC) regions. During this time, customers attempting to start new sessions or log in to the UI may have encountered access errors. Customers who already had an active session open may have continued to access the UI, while new login attempts could fail. As a result, some users may have been unable to access the UI during the incident window. Priority: P1 Incident Start Time: March 18, 2026, 02:39 AM CT Incident End Time: March 18, 2026, 04:35 AM CT Incident Duration: 1 hour 56 minutes Restoration Activity: Our technical teams investigated the issue and identified a configuration change affecting UI access controls. The change was reverted, restoring normal access to the Flexera One UI across NA, EU, and APAC. The service has been operating normally since the fix was applied, and we continue to monitor the environment.
-
resolved Mar 18, 2026, 03:43 PM UTC
The issue affecting access to the Flexera One UI across NA, EU, and APAC has been resolved. The service has been operating normally since the fix was implemented, and we continue to monitor the environment.
-
postmortem Mar 31, 2026, 06:34 AM UTC
**Description:** Flexera One – UI – All Regions – Access Disruption (HTTP 403 Errors)
**Timeframe:** March 18, 2026, 2:39 AM PDT – March 18, 2026, 4:35 AM PDT
**Incident Summary**
On March 18, 2026, at approximately 2:39 AM PDT, an issue was identified affecting access to the Flexera One UI across North America (NA), Europe (EU), and Asia-Pacific (APAC). During this period, customers attempting to start new sessions or log in to the UI may have encountered HTTP 403 errors when attempting to access the service. Customers who already had an active session open may have continued to access the UI, while new login attempts could fail. As a result, some users may have been unable to access the UI during the incident window.
Technical teams began investigating immediately and confirmed that the issue was related to a recently introduced security control change that affected legitimate access requests more broadly than intended. By March 18, 2026, at approximately 4:35 AM PDT, the change had been reverted and access to the Flexera One UI was restored across all affected regions. Following validation that login and access behavior had returned to expected operation, the incident was considered resolved.
**Root Cause**
The incident was caused by a recently introduced security control change that behaved more broadly than intended and unintentionally blocked valid customer access requests. This prevented some customers from successfully starting new sessions or logging in to the Flexera One UI and resulted in HTTP 403 errors during the incident window.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Detection and Response Initiated: Technical teams were notified of the access issue and began investigating the disruption affecting the Flexera One UI across all regions.
2. Impact Isolation: It was confirmed that the issue was affecting new login attempts and session initiation, while some existing active sessions could continue operating during the incident window.
3. Change Review and Validation: The recently introduced control change was reviewed to determine why legitimate customer access requests were being blocked.
4. Corrective Change Applied: The change responsible for the unintended blocking behavior was reverted to restore normal access.
5. Service Restoration Verification: Following the corrective action, access to the Flexera One UI was validated across affected regions and confirmed to be functioning normally.
**Future Preventative Measures**
This incident highlighted the importance of strong validation, coordination, and monitoring when implementing access-related control changes within a centralized configuration. Based on this experience, the following measures are being applied:
1. Change Validation and Deployment Controls: We are strengthening review, validation, and staged deployment practices for access-related control changes before they are applied more broadly.
2. Rollback and Communication Readiness: We are reinforcing rollback preparedness, stakeholder communication, and post-change monitoring to reduce the likelihood and duration of similar issues.
3. Alerting Sensitivity Review: We are reviewing alerting sensitivity to help identify unintended access disruptions sooner following similar changes.
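A staged-deployment practice like the one described in measure 1 is often implemented as a shadow evaluation: run the candidate access rule alongside the current one on live traffic, and refuse promotion if it would deny meaningfully more requests. A minimal sketch, with invented rule shapes and thresholds (not Flexera's access-control system):

```python
# Hypothetical sketch of shadow-mode evaluation for an access-control change.
# Invented names and thresholds; shown only to illustrate the staged-rollout idea.
from typing import Callable

Request = dict  # e.g. {"session": bool, "path": "..."}

def safe_to_promote(requests: list[Request],
                    current_rule: Callable[[Request], bool],
                    candidate_rule: Callable[[Request], bool],
                    max_extra_deny_ratio: float = 0.01) -> bool:
    """Replay live traffic through both rules; refuse promotion if the
    candidate denies noticeably more requests than the rule it replaces."""
    extra_denies = sum(
        1 for r in requests if candidate_rule(r) and not current_rule(r)
    )
    return extra_denies <= max_extra_deny_ratio * max(len(requests), 1)

# Toy usage: the candidate rule accidentally denies every new session (HTTP 403s).
current = lambda r: False               # denies nothing
candidate = lambda r: not r["session"]  # denies all new logins
traffic = [{"session": False}] * 40 + [{"session": True}] * 60
print(safe_to_promote(traffic, current, candidate))  # False -> do not roll out
```

Because denial counts are compared before enforcement, a rule that "behaves more broadly than intended" is caught while it can still only log, not block.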
Read the full incident report →
- Detected by Pingoru
- Mar 17, 2026, 04:56 AM UTC
- Resolved
- Mar 17, 2026, 08:11 AM UTC
- Duration
- 3h 14m
Affected: IT Visibility EU
Timeline · 3 updates
-
identified Mar 17, 2026, 04:56 AM UTC
Incident Description: We are currently investigating an issue affecting inventory ingestion within Flexera One – IT Visibility (ITV) in the EU production environment. The service is experiencing a processing backlog, which may result in delays in inventory updates from third-party data sources. ITV user interfaces and APIs remain fully accessible, and core functionality is not impacted. Priority: P3 Restoration Activity: Our technical teams have identified the likely cause as resource constraints in a managed caching service. We have increased capacity and are actively evaluating system recovery while working to clear the backlog. We are closely monitoring the situation and will provide further updates as progress is made.
-
resolved Mar 17, 2026, 08:11 AM UTC
Inventory ingestion has been fully restored, and data processing is operating normally. We are proactively replaying customer third-party data to ensure completeness and prevent any missed updates. All other third-party data sources are already up to date and processing as expected.
-
postmortem Apr 01, 2026, 04:38 AM UTC
**Description:** Flexera One – IT Visibility – EU – Delayed Inventory Updates from Third-Party Data Sources
**Timeframe:** March 16, 2026, 7:40 PM PDT – March 17, 2026, 1:30 AM PDT
**Incident Summary**
On March 16, 2026, at approximately 7:40 PM PDT, an issue was identified affecting inventory ingestion within the Flexera One IT Visibility service in the EU production environment. During this period, inventory data received from certain third-party data sources was delayed in processing, resulting in backlogged inventory updates for some customers. Throughout the incident, the IT Visibility user interface and APIs remained accessible; the impact was limited to delays in the processing and reflection of inventory updates from third-party integrations.
Technical teams began investigating immediately and implemented corrective actions to restore processing and recover the affected backlog. As part of the recovery effort, processing capacity was increased and affected third-party data was replayed to ensure that no updates were missed. Processing subsequently resumed and the backlog was cleared. By March 17, 2026, at approximately 1:30 AM PDT, recovery had been confirmed and the incident was considered resolved.
**Root Cause**
The incident was associated with resource constraints affecting the inventory ingestion flow, which resulted in processing delays and backlog growth for inventory data received from third-party data sources in the EU region. At the time of writing, further internal review is continuing to determine the precise underlying cause of the condition that led the service into this state.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Investigation Initiated: Technical teams began investigating the delayed processing affecting third-party inventory ingestion in the EU region.
2. Capacity Increase Applied: Processing capacity was increased to relieve the constraint contributing to the backlog.
3. Processing Recovery Monitored: Processing behavior was monitored to confirm that data flow had resumed successfully.
4. Data Replay Performed: Third-party data was replayed as part of recovery to ensure that delayed updates were processed completely.
5. Service Restoration Validated: Technical teams confirmed that backlog processing had caught up and that inventory ingestion had returned to expected operation.
**Future Preventative Measures**
This incident highlighted the importance of continued review of ingestion performance conditions and earlier detection of backlog-related degradation affecting third-party inventory processing. Based on the current information available, the following follow-up activities are being pursued:
1. Underlying Cause Review: Complete a deeper review to determine the precise underlying cause of the processing constraint that led to the backlog condition.
2. Alerting and Detection Improvements: Review and improve alerting and detection thresholds to help identify similar ingestion slowdowns sooner.
3. Recovery Process Review: Review the additional issues identified during incident response that affected the resolution effort and determine whether any follow-up improvements are needed.
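Replaying third-party data safely, as described in remediation step 4, generally requires the apply step to be idempotent so replayed records cannot double-apply. A minimal sketch of that pattern, with invented names and an in-memory dict standing in for durable state:

```python
# Hypothetical sketch: idempotent replay of delayed third-party inventory updates.
# `applied_versions` stands in for durable state; all names are invented.
applied_versions: dict[str, int] = {}  # record_id -> last applied version

def apply_update(record_id: str, version: int, payload: dict) -> bool:
    """Apply an update only if it is newer than what was already processed,
    so replaying a backlog (possibly overlapping live traffic) is safe."""
    if applied_versions.get(record_id, -1) >= version:
        return False  # duplicate or stale: the replay skips it harmlessly
    # ... write payload to the inventory store here ...
    applied_versions[record_id] = version
    return True

# Replaying the same batch twice applies each record exactly once.
batch = [("host-1", 7, {"os": "linux"}), ("host-2", 3, {"os": "windows"})]
print([apply_update(*u) for u in batch])  # [True, True]
print([apply_update(*u) for u in batch])  # [False, False]
```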
Read the full incident report →
- Detected by Pingoru
- Mar 16, 2026, 04:06 AM UTC
- Resolved
- Mar 16, 2026, 06:17 AM UTC
- Duration
- 2h 11m
Affected: IT Visibility - APAC
Timeline · 3 updates
-
investigating Mar 16, 2026, 04:06 AM UTC
Incident Description: We are currently investigating an issue affecting inventory upload status reporting within Flexera One – IT Visibility (ITV) for customers in the APAC region. Inventory ingestion is processing successfully; however, the system responsible for reporting upload processing status is failing. As a result, inventory uploads may incorrectly appear as “Timed Out” even though the underlying data ingestion may have completed successfully. Priority: P3 Restoration Activity: Our technical teams are actively investigating the issue impacting upload status reporting and are working to identify the underlying cause. We are monitoring the situation closely and will provide further updates as progress is made.
-
resolved Mar 16, 2026, 06:17 AM UTC
Our teams identified the issue as an incorrect routing configuration and implemented the necessary updates to correct it. Following the change, routing behavior, traffic flow, and overall service health were validated, and all components are now confirmed to be operating normally.
-
postmortem Mar 30, 2026, 03:41 AM UTC
**Description:** Flexera One – IT Visibility – APAC – Inventory Upload Status Reporting Failure
**Timeframe:** March 15, 2026, 8:20 PM PDT – March 15, 2026, 11:08 PM PDT
**Incident Summary**
On March 15, 2026, at approximately 8:20 PM PDT, technical teams identified an issue affecting inventory upload status reporting in the APAC region. During this period, inventory uploads continued to be ingested and processed, but the status reporting associated with those uploads did not update correctly. As a result, affected uploads could appear as timed out within the application even though processing was continuing successfully.
Technical teams immediately began investigating the issue and worked to isolate the source of the reporting failure. The investigation confirmed that the issue was limited to the APAC region and was specific to the status reporting path rather than the underlying inventory upload processing itself. By March 15, 2026, at approximately 11:08 PM PDT, the underlying issue had been corrected and service behavior was validated as functioning normally. Following verification that traffic flow and status reporting had returned to expected behavior, the incident was considered resolved.
**Root Cause**
The incident was caused by an incorrect routing configuration within the service communication path used for inventory upload status reporting. This prevented status-related requests from reaching the intended service endpoint, which caused status reporting to fail even though the underlying upload processing continued to operate normally. Further investigation determined that the deployed configuration did not match the intended configuration.
**Remediation Actions**
The following actions were taken during the incident response:
1. Incident Detection and Response Initiated: Technical teams were notified of the issue and began investigating the failure affecting inventory upload status reporting in APAC.
2. Impact Isolation: It was confirmed that the issue was limited to the APAC region and that inventory uploads were continuing to process successfully while status reporting was not updating correctly.
3. Configuration Review and Validation: The relevant service configuration was reviewed to identify where status-related requests were not reaching the intended endpoint.
4. Corrective Configuration Update: A corrective routing change was applied to direct traffic to the proper service endpoint used for status communication.
5. Service Restoration Verification: Following the corrective update, service traffic and status reporting behavior were verified as functioning as expected.
**Future Preventative Measures**
This incident highlighted the importance of ensuring configuration consistency and correct service communication paths within the platform. Based on this experience, the following measures are being applied:
1. Configuration Consistency Controls: We are reviewing controls to ensure deployed configurations remain aligned with intended configuration standards.
2. Post-Change Validation: We are evaluating validation steps following configuration changes to ensure service communication paths are functioning as expected.
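Since the deployed configuration "did not match the intended configuration", one concrete form the consistency controls in measure 1 could take is an automated drift diff between the intended route map and what is actually deployed. A sketch under that assumption; the route and endpoint shapes below are invented:

```python
# Hypothetical sketch: detect drift between intended and deployed routing config.
# Field names and shapes are invented for illustration.
def routing_drift(intended: dict[str, str], deployed: dict[str, str]) -> list[str]:
    """Return human-readable differences between two route->endpoint maps."""
    problems = []
    for route, endpoint in intended.items():
        live = deployed.get(route)
        if live is None:
            problems.append(f"missing route: {route}")
        elif live != endpoint:
            problems.append(f"{route}: expected {endpoint}, deployed {live}")
    for route in deployed.keys() - intended.keys():
        problems.append(f"unexpected route: {route}")
    return problems

intended = {"/upload-status": "status-svc:443"}
deployed = {"/upload-status": "legacy-svc:443"}  # the kind of mismatch seen here
print(routing_drift(intended, deployed))
# ['/upload-status: expected status-svc:443, deployed legacy-svc:443']
```

Run as a post-deploy gate or a periodic job, a check like this turns silent drift into an alert before a dependent feature (here, status reporting) fails for customers.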
Read the full incident report →
- Detected by Pingoru
- Mar 07, 2026, 05:07 PM UTC
- Resolved
- Mar 08, 2026, 03:35 AM UTC
- Duration
- 10h 27m
Affected: IT Visibility US, IT Visibility EU, IT Visibility - APAC
Timeline · 5 updates
-
investigating Mar 07, 2026, 05:07 PM UTC
Incident Description: We are currently investigating an issue affecting export functionality within Flexera One – IT Visibility for customers in the North America (NAM), Europe (EU), and Asia-Pacific (APAC) regions. As a result, customers may experience failures when attempting to generate or download exports. Priority: P2 Restoration Activity: Our technical teams have been engaged and are actively working to identify the underlying cause and implement remediation to restore normal export functionality. We are monitoring the situation closely and will provide further updates as progress is made.
-
identified Mar 07, 2026, 06:39 PM UTC
Our technical teams continue to investigate the issue. A fix has been identified and is currently undergoing testing. We will provide further updates as progress is made.
-
identified Mar 07, 2026, 08:56 PM UTC
Our technical teams have deployed a fix to address the export failures. Early validation indicates that export functionality has improved in some regions, and we are continuing to monitor as the changes propagate across all regions. Further updates will be provided as validation progresses.
-
resolved Mar 08, 2026, 03:35 AM UTC
Our technical teams have implemented corrective measures to address the issue affecting export functionality within Flexera One – IT Visibility. Post-deployment validation confirms that exports and related query services are operating normally across all regions. This incident has now been resolved. A detailed root cause analysis report will be shared in the coming days.
-
postmortem Apr 06, 2026, 11:57 AM UTC
**Description:** Flexera One – IT Visibility – All Regions – Export Failures
**Timeframe:** March 7, 2026, 8:52 AM PST – March 7, 2026, 5:13 PM PST
**Incident Summary**
On Saturday, March 7, 2026, at 8:52 AM PST, our teams identified an issue affecting Flexera One – IT Visibility in all regions: monitoring alerts and customer reports indicated failures in export functionality. Our technical teams were promptly engaged, and the issue was isolated to a query where export requests were failing during execution. Initial investigation determined that the failures were caused by a recent deployment that had passed staging validation; however, its behavior in production unexpectedly differed, resulting in errors when customers attempted to generate or download export files.
Our technical teams identified a flaw in the query design and developed a fix to refine the query logic, significantly improving execution reliability. The fix was validated in staging and then deployed in a phased manner across regions. APAC was restored first, confirming the effectiveness of the solution. Deployment to EU followed and completed successfully, while NAM required additional time for full propagation before export operations returned to normal. Continuous monitoring after deployment confirmed that export functionality was fully restored across all regions by 5:13 PM PST.
**Root Cause**
The issue was introduced during a planned deployment where a query behavior change, although validated in staging, did not function as expected in the production environment, resulting in export failures.
Contributing Factors:
* A discrepancy between staging and production environments led to differences in query execution behavior.
* The query design required adjustments to align with production data handling patterns.
* Pre-deployment validation did not fully capture this variation in behavior.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Refined the query logic to align with expected production behavior and ensure consistent execution.
* Validated the updated query in a controlled staging environment prior to release.
* Deployed the fix in a phased manner across the APAC, EU, and NAM regions to ensure stability.
* Performed post-deployment validation to confirm successful export functionality across all regions.
* Continued monitoring to ensure sustained stability and expected system behavior.
**Future Preventative Measures**
* Production-Scale Validation Enhancement - Improve pre-release validation to better simulate real-world production data volumes and usage patterns.
* Query Performance & Scalability Testing Improvements - Introduce stricter performance benchmarks and scalability testing for queries used in high-volume operations such as exports.
* Enhanced Monitoring & Proactive Alerting - Expand monitoring coverage and implement proactive alerting for export failures to reduce detection and response time.
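The phased APAC, then EU, then NAM rollout described above generalizes to a small pattern: deploy to one region, gate on validation, and halt on the first failure. A schematic sketch; `deploy` and `exports_healthy` are invented stand-ins for real pipeline and monitoring calls:

```python
# Hypothetical sketch of a phased, health-gated regional rollout.
# deploy()/exports_healthy() are stand-ins for real pipeline and monitoring APIs.
import time

def deploy(region: str) -> None:
    print(f"deploying fix to {region}")

def exports_healthy(region: str) -> bool:
    print(f"validating exports in {region}")
    return True  # stand-in: a real check would query export success-rate metrics

def phased_rollout(regions: list[str], settle_seconds: int = 0) -> bool:
    """Deploy region by region, stopping at the first failed validation so a
    bad fix never reaches all regions at once."""
    for region in regions:
        deploy(region)
        time.sleep(settle_seconds)  # allow changes to propagate before checking
        if not exports_healthy(region):
            print(f"halting rollout: validation failed in {region}")
            return False
    return True

phased_rollout(["APAC", "EU", "NAM"])
```

The settle delay matters in practice: as the incident notes for NAM, propagation can lag the deployment itself, so validating too early produces false failures.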
Read the full incident report →
- Detected by Pingoru
- Mar 02, 2026, 08:46 PM UTC
- Resolved
- Mar 03, 2026, 05:29 PM UTC
- Duration
- 20h 43m
Affected: IT Asset Management - US SaaS Manager, IT Asset Management - APAC SaaS Manager, IT Asset Management - EU SaaS Manager
Timeline · 6 updates
-
monitoring Mar 02, 2026, 08:46 PM UTC
Incident Description: Our teams have identified and are actively investigating an issue impacting SaaS inventory imports across all regions. Affected customers may experience import failures and timeouts while processing uploads. Priority: P2 Restoration Activity: Our teams identified the root cause as an issue with the authentication service and implemented a configuration update to resolve it. The service has now returned to normal operation, and our teams are closely monitoring the environment to ensure continued stability.
-
investigating Mar 03, 2026, 10:30 AM UTC
After an extended period of monitoring, our teams have observed that FSM imports continue to fail. The issue remains under active investigation, with technical teams fully engaged to identify and address the underlying cause. We will share further updates as more information becomes available.
-
investigating Mar 03, 2026, 10:30 AM UTC
We are continuing to investigate this issue.
-
monitoring Mar 03, 2026, 11:40 AM UTC
Our teams identified that an associated service had not been updated. As a result, traffic was not being routed to the cluster. The required service updates have now been completed across all regions and environments, and traffic routing has been restored accordingly. We will continue to monitor the services to ensure stability.
-
resolved Mar 03, 2026, 05:29 PM UTC
The issue affecting Flexera SaaS Manager and related inventory import processes has been resolved. Our team identified and corrected a service configuration issue that impacted authentication requests required for certain import and reconciliation jobs. The affected service has been restored across all regions. Validation confirms that inventory imports are executing successfully, and no further impact has been observed. We will continue to monitor the environment closely as a precaution. A detailed review is underway to ensure additional safeguards are in place to prevent recurrence.
-
postmortem Mar 16, 2026, 07:26 PM UTC
**Description:** Flexera One – SaaS Manager – All Regions – Inventory Import Failures
**Timeframe:** March 2, 2026, 1:09 AM PST – March 3, 2026, 9:05 AM PST
**Incident Summary**
On March 2, 2026, at 1:09 AM PST, reports were received indicating that Flexera SaaS Manager inventory imports were failing for multiple customers. During this period, affected imports were unable to complete successfully and returned an error indicating that the system was unable to obtain an access token required for processing the import workflow. The issue affected customers across multiple regions and resulted in scheduled SaaS Manager imports failing to execute as expected. Other Flexera platform functionality remained available during this time.
Technical teams engaged to investigate the issue and identified that the failures were related to a service responsible for issuing authentication tokens used during the SaaS Manager import process. Recovery activities were initiated to restore normal service behavior and ensure the affected service was functioning correctly across all environments. Following the corrective actions, successful SaaS Manager imports were observed across multiple regions, confirming that the issue had been resolved. Additional monitoring and validation were performed to ensure that the service continued operating as expected. By March 3, 2026, at 9:05 AM PST, successful imports had been confirmed across regions and the incident was formally closed.
**Root Cause**
The disruption occurred due to a configuration issue affecting the service responsible for issuing authentication tokens used during the SaaS Manager import process. As a result, the service was unable to provide the access tokens needed for import processing. Because SaaS Manager imports depend on this authentication service, import jobs were unable to obtain the required access token and therefore failed after retry attempts were exhausted.
**Remediation Actions**
1. Service Configuration Correction: The configuration affecting the authentication service was corrected to restore its ability to issue access tokens required by SaaS Manager imports.
2. Service Update Across Environments: The service responsible for authentication was updated across all regions and environments to ensure consistent operation.
3. Infrastructure Validation: After the update, the authentication service was verified to be running correctly across environments.
4. Functional Verification: Successful SaaS Manager imports were confirmed across multiple regions following the fix, verifying that the import process was functioning normally.
5. Post-Restoration Monitoring: Additional monitoring and validation activities were performed to ensure imports continued to succeed after the service update.
**Future Preventative Measures**
This incident highlighted the importance of monitoring and resilience for services responsible for authentication and import processing within the platform. Based on this experience, the following measures are being applied:
1. Monitoring and Alerting Improvements: Additional alerting and monitoring capabilities are being introduced to help detect similar service disruptions earlier and enable faster remediation.
2. Configuration Safeguards: The service configuration has been updated to prevent the behavior that caused the authentication service disruption observed during this incident.
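The note that imports "failed after retry attempts were exhausted" describes a standard bounded-retry pattern around token acquisition. The sketch below shows the general shape, with an invented `fetch_token` stand-in and a specific exception so exhaustion surfaces as an alertable event rather than a silent failure (illustrative only, not the actual service code):

```python
# Hypothetical sketch: bounded retry with exponential backoff for token fetch.
# fetch_token() is a stand-in for the real authentication call.
import time
from typing import Callable

class TokenUnavailableError(RuntimeError):
    """Raised when the auth service stays unavailable; callers should alert,
    not retry forever (this is how the failure surfaced for imports here)."""

def get_access_token(fetch_token: Callable[[], str],
                     attempts: int = 4, base_delay: float = 1.0) -> str:
    for attempt in range(attempts):
        try:
            return fetch_token()
        except ConnectionError as exc:
            if attempt == attempts - 1:
                raise TokenUnavailableError(
                    f"auth service unreachable after {attempts} attempts"
                ) from exc
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise AssertionError("unreachable")

def always_down() -> str:
    raise ConnectionError("token endpoint refused connection")

try:
    get_access_token(always_down, attempts=3, base_delay=0.01)
except TokenUnavailableError as e:
    print(e)  # auth service unreachable after 3 attempts
```

Pairing the terminal exception with an alert is what the "Monitoring and Alerting Improvements" measure implies: the retries buy time for transient blips, while exhaustion becomes an immediate signal instead of a missed scheduled import.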
Read the full incident report →
Critical
- Detected by Pingoru
- Feb 24, 2026, 03:33 PM UTC
- Resolved
- Feb 25, 2026, 06:53 AM UTC
- Duration
- 15h 20m
Affected: IT Asset Management - APAC Beacon Communication, api.flexera.au, IT Visibility - APAC, Cloud Cost Optimization - APAC, Cloud License Management - APAC, IT Asset Management - APAC Inventory Upload, IT Asset Management - APAC Login Page, IT Asset Management - APAC Batch Processing System, IT Asset Management - APAC Business Reporting, IT Asset Management - APAC SaaS Manager
Timeline · 9 updates
-
investigating Feb 24, 2026, 03:33 PM UTC
Incident Description: We have identified an issue affecting login access to Flexera One in the APAC region. Customers may receive errors when attempting to log in to Flexera One. Priority: P1 Restoration Activity: Our technical teams are actively investigating the issue and working to implement corrective actions to restore normal login functionality. Investigation and stabilization efforts remain ongoing. We will continue to monitor the situation closely and provide further updates as progress continues.
-
investigating Feb 24, 2026, 04:08 PM UTC
Our technical teams continue investigating the login access issue affecting Flexera One in the APAC region. Ongoing analysis has identified service request failures impacting the login process, and additional diagnostic measures have been enabled to assist with remediation efforts. Investigation and stabilization activities remain in progress as teams work to restore normal access. We continue to monitor service health closely and will provide further updates as additional progress is made.
-
investigating Feb 24, 2026, 07:09 PM UTC
Our technical teams continue to investigate the login access issue affecting Flexera One in the APAC region. Diagnostic and stabilization activities remain ongoing as teams work to identify the underlying cause and restore normal access. We are closely monitoring service health and will provide further updates as soon as additional information becomes available.
-
investigating Feb 24, 2026, 08:48 PM UTC
Our technical teams continue investigating the login access issue affecting Flexera One in the APAC region. Ongoing analysis has identified service communication issues impacting requests required for successful login. Configuration adjustments have been applied, and validation activities are currently in progress as teams work toward restoring consistent access. Some service responses have begun recovering; however, customers may still experience intermittent login errors while stabilization efforts continue. We remain actively engaged in remediation and monitoring service health closely. Further updates will be provided as progress continues.
-
monitoring Feb 24, 2026, 09:40 PM UTC
Remediation activities for the login access issue affecting Flexera One in the APAC region have been implemented, and service recovery has been observed. Customers may now be able to access the Flexera One UI. Some functionality may continue to experience intermittent errors while final stabilization and validation activities continue across affected services. Our technical teams remain engaged and are monitoring the environment closely. Further updates will be provided as restoration efforts progress.
-
monitoring Feb 25, 2026, 12:35 AM UTC
Service access has been restored for customers accessing the Flexera One platform in the APAC region. We are continuing to investigate an issue affecting certain backend services. Customers may still experience intermittent errors when navigating within the platform, loading reports or dashboards, or accessing some functionality. We remain actively engaged and are working toward full restoration. Further updates will be provided as progress continues.
-
monitoring Feb 25, 2026, 05:30 AM UTC
Our technical teams continue to investigate the issue impacting certain backend services in the APAC region. Customers may still encounter intermittent errors while navigating the platform, loading custom reports or dashboards, or accessing specific functionality. Investigation and stabilization efforts remain actively in progress as we work toward full service restoration. We will continue to provide updates as additional progress is made.
-
resolved Feb 25, 2026, 06:53 AM UTC
We have successfully implemented corrective measures to address the service disruption impacting customers in the APAC region. As part of the resolution, our teams made configuration changes to ensure proper upstream service accessibility. Post-validation confirms that services have returned to normal operation. We will conduct a detailed retrospective analysis to determine the underlying root cause. A comprehensive post-mortem report outlining our findings and preventive actions will be shared once the review is complete.
-
postmortem Mar 16, 2026, 10:02 AM UTC
**Description:** Flexera One – APAC – Login Issue
**Timeframe:** February 24, 2026, 07:33 AM PST – February 24, 2026, 10:50 PM PST
**Incident Summary**
On Tuesday, February 24, 2026, at 07:33 AM PST, our teams identified login failures affecting customers attempting to access Flexera One in the APAC region. The affected users encountered authentication errors during login, preventing access to the platform and associated workflows.
Initial investigation confirmed that the core authentication infrastructure and identity services were operating as expected. Further analysis isolated the issue to the API gateway layer, where requests were failing due to connection issues affecting specific backend services. As a result, authentication calls and dependent APIs returned errors when routed through the gateway. Our technical teams reviewed the gateway routing configuration and discovered that the affected APAC services were using a specialized gateway path configured with a non-standard port. Traffic using this configuration encountered handshake failures at the gateway layer.
To restore service, configuration updates were applied to the API gateway, and the impacted service endpoints were migrated to standard HTTPS communication over the recommended port. After the updated routing configuration was deployed and validated, authentication requests and dependent APIs returned to normal operation. Login access to Flexera One was fully restored, and the incident was closed following confirmation of platform stability.
**Root Cause**
The incident was caused by a gateway routing configuration that failed to properly handle secure service communication for certain APAC endpoints. Some APAC services were configured to communicate through the API gateway using a non-standard port. Requests routed through this configuration experienced handshake failures within the gateway layer, which prevented authentication-related API calls from completing successfully. As a result, login attempts and certain backend service calls failed.
Contributing Factors:
* Non-Standard Port Dependency - Certain APAC services relied on communication through a non-standard gateway port, which introduced additional complexity and region-specific failure conditions.
* Regional Gateway Configuration Differences - Gateway routing behavior and configurations differed slightly across regions, making the issue isolated to APAC and increasing diagnostic complexity.
* Gateway-Only Failure Condition - The issue occurred only when requests were routed through the API gateway, while direct service communication remained healthy, which initially complicated root cause isolation.
**Remediation Actions**
The following remediation steps were implemented to restore service functionality:
* Gateway Configuration Updates - Updated API gateway routing and mapping configurations within the APAC environment.
* Endpoint Migration to Standard HTTPS - Migrated affected services from the non-standard port configuration to standard HTTPS communication over the designated port.
* End-to-End Service Validation - Performed validation of authentication flows and dependent API services to ensure normal operation following configuration changes.
* Post-Deployment Stability Monitoring - Monitored system behavior and login activity to confirm platform stability after the remediation was deployed.
**Future Preventative Measures**
* Regional Configuration Standardization - Standardize API gateway routing and service communication patterns across all regions.
* Reduction of Non-Standard Port Usage - Review and eliminate dependencies on non-standard service communication ports where possible.
* Improved Gateway Observability - Enhance monitoring and alerting for gateway-level handshake failures and routing issues.
* Post-Incident Retrospective Review - Conduct a retrospective review of the incident to identify additional improvements in platform reliability and operational processes.
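One inexpensive guard against the non-standard port dependency named in the contributing factors is a lint over gateway route definitions that flags any upstream not using plain HTTPS on 443. A sketch with invented route entries; a real check would read the gateway's own configuration API:

```python
# Hypothetical sketch: flag gateway routes that use non-standard ports.
# Route entries below are invented for illustration.
from urllib.parse import urlparse

STANDARD_HTTPS_PORT = 443

def nonstandard_routes(routes: dict[str, str]) -> list[str]:
    """Return route names whose upstream URL is not plain HTTPS on 443."""
    flagged = []
    for name, upstream in routes.items():
        url = urlparse(upstream)
        port = url.port or (443 if url.scheme == "https" else None)
        if url.scheme != "https" or port != STANDARD_HTTPS_PORT:
            flagged.append(f"{name} -> {upstream}")
    return flagged

routes = {
    "auth-apac": "https://auth.internal:8543",   # the failure mode seen here
    "reports-apac": "https://reports.internal",  # standard HTTPS, passes
}
print(nonstandard_routes(routes))  # ['auth-apac -> https://auth.internal:8543']
```

Run in CI against each region's gateway config, a lint like this also addresses the "regional configuration differences" factor, since every region is held to the same standard rather than diverging quietly.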
Read the full incident report →
- Detected by Pingoru
- Feb 20, 2026, 04:59 PM UTC
- Resolved
- Feb 24, 2026, 01:44 PM UTC
- Duration
- 3d 20h
Affected: IT Visibility US, IT Visibility EU, IT Visibility - APAC, IT Asset Management - US Business Reporting, IT Asset Management - EU Business Reporting, IT Asset Management - APAC Business Reporting
Timeline · 7 updates
-
identified Feb 20, 2026, 04:59 PM UTC
Incident Description: We are currently investigating an issue in Flexera One impacting capability updates. The issue affects a subset of our customers across all regions intermittently. Affected customers may observe missing Dashboards in the Flexera One UI. Customers whose capabilities are active and not expired are not affected. Priority: P2 Restoration Activity: Our teams are actively investigating the issue, which has been identified as a defect. Our technical teams are working toward implementing a permanent resolution. A temporary workaround is available. Where necessary, our teams can manually apply corrections in the backend. For urgent requests, please raise an SR (Service Request), and our support teams will prioritize assistance accordingly. We will continue to provide updates as more information becomes available.
-
identified Feb 21, 2026, 07:23 AM UTC
Our technical teams are progressing toward a permanent fix. Given the complexity of the defect, full resolution may take additional time. We will share further updates as progress continues. A temporary workaround is in place, with backend corrections applied manually where required.
-
identified Feb 23, 2026, 09:23 AM UTC
We are continuing to work on restoring full health to the capability update process. Our immediate priority was to stabilize customers known to be impacted. This was achieved by manually correcting configurations to return those environments to a healthy state. While our teams continue to make steady progress towards a more permanent solution, the automated onboarding process remains partially impacted. We appreciate your patience and will provide further updates as progress continues.
-
identified Feb 24, 2026, 12:37 AM UTC
We continue investigating this issue and are implementing a targeted fix. Customers with existing active capabilities are not expected to be impacted. A subset of customers performing capability updates or new organization onboarding may experience intermittent issues, including missing Dashboards within the Flexera One UI. Our technical teams have identified a potential root cause and are currently validating a remediation approach while stabilization efforts continue across environments. Work remains ongoing to restore full reliability of the automated onboarding process. We will provide further updates as progress continues.
-
identified Feb 24, 2026, 07:15 AM UTC
Onboarding has been fully restored in NAM and APAC, with no known issues at this time. For the EU region, our teams have devised a fix, and deployment and validation activities are currently in progress. Our teams continue to work toward implementing a permanent resolution and will share further updates as progress is made.
-
resolved Feb 24, 2026, 01:44 PM UTC
Our teams have successfully applied the fix across all regions. Following post-deployment validations, we have confirmed that services have been fully restored and are operating normally.
-
postmortem Mar 10, 2026, 02:01 PM UTC
**Description:** Flexera One – All Regions – Missing Dashboards and Tabular View Reports
**Timeframe:** February 20, 2026, 7:02 AM PST – February 24, 2026, 5:30 AM PST
**Incident Summary**
On February 20, 2026, at 7:02 AM PST, our teams identified an issue impacting capability provisioning and onboarding workflows within Flexera One. The issue affected a subset of customers across multiple regions and resulted in intermittent failures during capability updates and new organization creation initiated through Salesforce order management processes. As a result, impacted customers could experience missing dashboards or unavailable functionality in the Flexera One interface. The issue affected only customers whose capabilities were being created or updated during the incident window; customers with already active capabilities were not impacted.
During the investigation, technical teams identified a defect within an internal onboarding component responsible for processing capability updates. A code fix addressing this issue was deployed across regions on February 23, 2026, which resolved the issue for the North America and APAC environments. Following this deployment, onboarding workflows in the EU region continued to experience intermittent failures. Further analysis revealed a separate, region-specific infrastructure issue in the EU environment that was preventing certain provisioning workflows from completing successfully. Our technical teams implemented a configuration change to restore service connectivity in the EU environment. After deployment of this fix and completion of validation testing, services were confirmed to be operating normally across all regions.
During the incident, Flexera teams also performed manual backend remediation for affected customers requiring urgent capability updates to restore access while permanent fixes were being implemented. All regions were confirmed healthy following completion of remediation and validation activities.
**Root Cause**
Root Cause #1 – Defect in Capability Provisioning Logic (All Regions): A defect in an internal backend component responsible for processing capability provisioning caused intermittent failures during onboarding and capability update workflows. The issue affected a service responsible for managing capability state updates triggered by Salesforce order events. When this defect was encountered, the provisioning workflow could fail before completing the capability update process, resulting in missing dashboards or inaccessible features for affected organizations.
Root Cause #2 – Network Connectivity Issue in EU Environment (EU Region Only): A separate infrastructure issue existed within the EU environment where an IAM service cluster was unable to establish reliable connectivity with an internal audit service cluster. The communication path relied on a service configuration that experienced network communication errors. This prevented certain capability provisioning workflows from completing successfully within the EU region even after the initial code fix was deployed.
Contributing Factors:
* The failures were intermittent, making the issue difficult to reproduce during early investigation.
* Two independent issues occurred simultaneously, which extended the investigation timeline.
* The EU infrastructure issue masked the effectiveness of the initial code fix until further regional validation was performed.
**Remediation Actions**
The following actions were taken to restore the service to normal operation:
* Code Fix Deployment: Corrected the defect in the capability provisioning logic used during onboarding workflows.
* Infrastructure Configuration Update: Restored internal service connectivity in the EU environment.
* Customer Remediation: Performed manual backend capability corrections for impacted customers requiring urgent access.
* Service Validation: Conducted cross-region testing to verify onboarding and capability provisioning were functioning normally.
**Future Preventative Measures**
To help prevent similar issues in the future, our teams are working on the following improvements:
* Improved Capability Provisioning Validation: Introduce automated checks to detect provisioning failures earlier.
* Enhanced Monitoring and Alerting: Expand monitoring coverage for onboarding and capability lifecycle workflows.
* Infrastructure Connectivity Monitoring: Implement additional alerts for regional service communication failures.
* Expand Testing: Increase automated testing coverage for capability lifecycle and onboarding workflows.
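Root Cause #2 was a regional connectivity fault between service clusters, the kind of failure a provisioning workflow can fail fast on with a dependency preflight check instead of failing mid-workflow. A sketch of that idea; the hostnames are invented placeholders, and a production probe would more likely hit mTLS-aware health endpoints than raw TCP:

```python
# Hypothetical sketch: preflight connectivity check for a provisioning
# workflow's dependencies (e.g., an IAM service reaching an audit service).
# Hostnames and ports are invented placeholders.
import socket

def dependency_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Cheap TCP reachability probe; run before starting a provisioning batch
    so a regional connectivity fault fails fast instead of mid-workflow."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

DEPENDENCIES = [("audit.internal.example", 443), ("iam.internal.example", 443)]

def preflight() -> list[str]:
    return [f"{h}:{p}" for h, p in DEPENDENCIES if not dependency_reachable(h, p)]

unreachable = preflight()
if unreachable:
    print("aborting provisioning run; unreachable:", unreachable)
```

A check like this also feeds the "Infrastructure Connectivity Monitoring" measure directly: the same probe, run on a schedule per region, would have alerted on the EU communication path independently of any provisioning attempt.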
Read the full incident report →
- Detected by Pingoru
- Feb 19, 2026, 06:08 PM UTC
- Resolved
- Feb 19, 2026, 08:57 PM UTC
- Duration
- 2h 48m
Affected: IT Visibility US
Timeline · 5 updates
-
investigating Feb 19, 2026, 06:08 PM UTC
Incident Description: We are investigating an issue affecting access to Custom Reports within IT Visibility in the NAM region. Customers may experience errors when attempting to access Custom Reports. At this time:
• Out-of-the-box reports remain available
• EU and APAC regions are not impacted
• The issue is isolated to Custom Reports functionality
Priority: P2 Restoration Activity: Our technical teams are actively investigating to determine the scope and root cause. We will provide further updates as more information becomes available.
-
identified Feb 19, 2026, 07:07 PM UTC
Our investigation has identified a behavior affecting access controls for Custom Reports in the NAM region. This condition is preventing reports from loading for certain organizations. We are actively testing a targeted adjustment to validate the behavior and are working toward resolution. We will provide another update once validation is complete.
-
identified Feb 19, 2026, 07:27 PM UTC
Our investigation has identified the cause of the issue impacting Custom Reports in the NAM region. The behavior is related to how report access is being evaluated for certain organizations. We have validated a corrective action and are working to implement a broader resolution. We will provide another update once the permanent fix is confirmed.
-
resolved Feb 19, 2026, 08:57 PM UTC
The issue impacting access to Custom Reports within IT Visibility in the NAM region has been resolved. A corrective update has been deployed, and validation confirms that reports are loading as expected. We will continue to monitor the environment to ensure stability. A formal post-mortem report outlining the detailed root cause, corrective actions, and future preventative measures will be published in the coming days.
-
postmortem Mar 06, 2026, 05:32 AM UTC
**Description:** Flexera One – IT Visibility – NAM – Custom Reports Access Issue
**Timeframe:** February 19, 2026, 9:00 AM PST – February 19, 2026, 12:30 PM PST
**Incident Summary**
On Thursday, February 19, 2026, at 9:00 AM PST, we identified an issue affecting customers in the NAM region who encountered errors when accessing Custom Reports within IT Visibility. Impacted users were unable to load Custom Reports through the application interface, while out-of-the-box reports remained available and customers in the EU and APAC regions were not affected. Investigation confirmed that backend services were operating normally, isolating the problem to the loading of the Custom Reports page.
The issue was traced to application capability validation logic combined with an unexpected expiration of a capability associated with the reporting feature, caused by a defect. When the application evaluated this state, the validation logic incorrectly blocked the Custom Reports interface from loading. To restore functionality, the affected capability state was corrected and a fix was implemented to adjust the UI capability validation logic, ensuring the Custom Reports page could load correctly for entitled organizations. The change was deployed to production and validated across impacted organizations. After deployment, Custom Reports loaded successfully and affected customers confirmed that functionality was restored. The issue was fully resolved at 12:30 PM PST after post-deployment validation and monitoring confirmed that normal operation had been restored.
**Root Cause**
During the investigation, our engineering teams determined that the incident was caused by an unexpected capability expiration resulting from a defect in an internal automation process responsible for managing organization capability states. Due to this defect, a reporting-related capability was incorrectly marked as expired for certain organizations. When users attempted to access Custom Reports, the application evaluated the organization’s capability state during page initialization. Because the capability appeared expired, the page-level validation logic incorrectly prevented the Custom Reports interface from loading, resulting in the errors experienced by customers.
Contributing Factors:
• Automation Defect: A bug in the internal process incorrectly updated the capability state for certain organizations.
• Page-Level Validation Behavior: The validation logic did not correctly handle certain organization configuration conditions during access evaluation.
• Handling of Validation Outcome: When the condition was triggered, the application blocked the page from loading rather than allowing the interface to render.
**Remediation Actions**
• Investigation and Pattern Identification: Our technical teams analyzed impacted organizations and identified the configuration pattern triggering the access issue.
• Validation Testing: Targeted testing on affected organizations confirmed that adjusting the configuration restored access to Custom Reports and helped isolate the root cause.
• Configuration Update: A corrective update was implemented to adjust how page-level validation is performed when loading Custom Reports.
• Production Deployment and Verification: The update was deployed to production and validated across impacted organizations to confirm successful report access.
• Customer Confirmation: Confirmation was received from the impacted customers that Custom Reports functionality had been restored.
**Future Preventative Measures**
• Capability Validation Review: Perform a comprehensive review of capability checks to ensure they produce consistent, predictable behavior across the application.
• Configuration State Handling Improvements: Refine how organization configuration states are handled during access validation to avoid unintentionally blocking user interface pages.
• Monitoring and Safeguards: Identify and implement additional measures to detect and prevent similar validation issues going forward.
• Defect Investigation: Conduct a detailed RCA to identify and fix the root cause of the defect.
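The "Handling of Validation Outcome" factor, where an anomalous capability state hard-blocked the page, suggests a fail-soft validation shape: treat only an expected, definitive state as grounds to block, and render a degraded view plus an internal alert for surprising states. A speculative sketch with invented states, not the actual UI logic:

```python
# Hypothetical sketch: fail-soft capability validation for page load.
# States and the decision table are invented for illustration.
from enum import Enum

class CapabilityState(Enum):
    ACTIVE = "active"
    EXPIRED = "expired"            # in this incident, set wrongly by an automation defect
    INCONSISTENT = "inconsistent"  # e.g. state disagrees with entitlement records

def page_decision(state: CapabilityState) -> str:
    """Render whenever reasonable: an unexpectedly expired or inconsistent
    capability yields a degraded view and an internal alert, instead of
    blocking the whole Custom Reports page for the user."""
    if state is CapabilityState.ACTIVE:
        return "render"
    if state is CapabilityState.EXPIRED:
        return "render degraded + show renewal notice + alert"
    return "render degraded + alert"

for s in CapabilityState:
    print(s.value, "->", page_decision(s))
```

The alert half of each degraded branch is what connects this to the "Monitoring and Safeguards" measure: a wrongly expired capability would have paged an operator while users still saw a usable, if reduced, page.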
Read the full incident report →