One Identity Starling incident

PAM Essentials RDP sessions down.

One Identity Starling experienced a critical incident on October 10, 2025 affecting PAM Essentials, lasting 3d 5h. The incident has been resolved; the full update timeline is below.

Started
Oct 10, 2025, 04:45 PM UTC
Resolved
Oct 13, 2025, 10:22 PM UTC
Duration
3d 5h
Detected by Pingoru
Oct 10, 2025, 04:45 PM UTC
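
For context on detection: an external uptime monitor such as Pingoru typically flags this class of failure with a simple reachability probe against the RDP endpoint. The sketch below is a minimal, hypothetical illustration only; the hostname, port 3389, and timeout are assumptions, not details from this incident:

```python
import socket

def rdp_reachable(host: str, port: int = 3389, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds.

    A failed connection does not prove the service is down, only that
    this vantage point cannot reach it -- which is exactly the signal
    an external uptime monitor alerts on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical endpoint; substitute a real RDP gateway address.
    print("reachable" if rdp_reachable("rdp.example.com") else "unreachable")
```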

Affected components

PAM Essentials

Update timeline

  1. investigating Oct 10, 2025, 04:45 PM UTC

    We are currently seeing issues with RDP sessions not connecting. We are investigating and will provide updates as we work to identify and resolve the issue.

  2. investigating Oct 10, 2025, 05:55 PM UTC

    We are continuing to investigate and will provide updates as we work to identify and resolve the issue.

  3. investigating Oct 10, 2025, 06:58 PM UTC

    We are continuing to investigate and will provide updates as we work to identify and resolve the issue.

  4. investigating Oct 10, 2025, 08:27 PM UTC

    We are continuing to investigate and will provide updates as we work to identify and resolve the issue.

  5. monitoring Oct 10, 2025, 09:57 PM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  6. monitoring Oct 11, 2025, 05:32 AM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  7. monitoring Oct 11, 2025, 01:59 PM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  8. monitoring Oct 11, 2025, 06:17 PM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  9. monitoring Oct 12, 2025, 05:43 AM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  10. monitoring Oct 12, 2025, 03:19 PM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  11. monitoring Oct 12, 2025, 08:00 PM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  12. monitoring Oct 13, 2025, 04:00 AM UTC

    We have identified that the issue lies with a third-party vendor experiencing a network-related outage. We have taken steps to mitigate the impact on our customers. We will continue to monitor the situation and await final resolution from the vendor.

  13. resolved Oct 13, 2025, 10:22 PM UTC

    We are setting the status to resolved: the third-party vendor has confirmed that the issue has been resolved.

  14. postmortem Dec 18, 2025, 02:23 PM UTC

    **Service Incident Summary**

    **Product:** One Identity Starling PAM Essentials
    **Region:** United States
    **Date:** October 9, 2025

    **What happened**

    On October 9, 2025, at 6:05 PM UTC, customers using One Identity Starling PAM Essentials in the US region experienced an interruption to Remote Desktop session connectivity. Service was fully restored by 9:18 PM UTC.

    **What caused the issue**

    The disruption was caused by an outage within **Microsoft Azure’s West US data center**, triggered by a power-related failure during scheduled maintenance. This event impacted several Azure networking services. As a result, our service experienced an unusually high volume of network requests, which caused application components to exceed memory limits and temporarily stop responding.

    **How we resolved the issue**

    Once the root cause was identified, we scaled the affected application components to handle the increased request volume. This restored service while Microsoft worked to resolve the underlying data center issue.

    **What we are doing to prevent future impact**

    While this type of cloud infrastructure failure is rare, we are taking steps to reduce the impact of similar events in the future by:

    * Continuing to monitor system health and memory utilization in real time
    * Reviewing scaling strategies and resilience options as our infrastructure evolves
    * Applying lessons learned from this incident to ongoing platform improvements
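
As a rough illustration of the prevention items above (real-time memory monitoring, and scaling before components hit their limits), the sketch below shows one generic way such a watchdog could be wired. It is not One Identity's implementation; the `psutil` dependency, the 85% threshold, and the `scale_out()` hook are all assumptions made for illustration:

```python
import time

import psutil  # third-party: pip install psutil


def scale_out() -> None:
    """Hypothetical hook: ask an orchestrator for an extra replica of the
    affected component. Stubbed out for this sketch."""
    print("memory pressure detected -> requesting scale-out")


def watch_memory(threshold_pct: float = 85.0, interval_s: float = 10.0) -> None:
    """Poll overall memory utilization and trigger a scale-out when the
    threshold is crossed, rather than letting components exceed their
    memory limits and stop responding."""
    while True:
        if psutil.virtual_memory().percent >= threshold_pct:
            scale_out()
        time.sleep(interval_s)


if __name__ == "__main__":
    watch_memory()
```

A production version would act per component, debounce repeated triggers, and cap the replica count; the point here is only the monitor-then-scale loop.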