Philips HealthSuite incident

Ongoing AWS issues impacting multiple services (Amazon Elastic Compute Cloud) in the US East (N. Virginia) Region on 2026-MAY-08

Philips HealthSuite is currently experiencing a minor incident that began about 19 hours ago. The vendor's full update timeline is below; illustrative recovery sketches follow the timeline.

Started: May 08, 2026, 08:09 AM UTC
Resolved: Ongoing
Duration: 18h 44m
Detected by Pingoru: May 08, 2026, 08:09 AM UTC

Update timeline

  1. investigating May 08, 2026, 08:09 AM UTC

    Operational issue – Amazon Elastic Compute Cloud (N. Virginia)
    Service: Amazon Elastic Compute Cloud – Increased Error Rate and Latency

    May 07 10:11 PM PDT – We are observing early signs of recovery. We continue to work towards restoring temperatures to normal levels and bringing impacted racks back online in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. We have been able to get additional cooling system capacity online, which has allowed us to recover some affected racks, and we are actively working to recover additional racks in a controlled and safe manner. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services may continue to experience elevated error rates and latencies for some workflows until full recovery is achieved. We will provide an update by 11:30 PM PDT, or sooner if we have additional information to share.

    May 07 8:06 PM PDT – We are actively working to restore temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, though progress is slower than originally anticipated. Since our last update we have made incremental progress restoring cooling systems within the affected AZ; this work is not visible to external customers but is required for the restoration of affected services. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services are also experiencing elevated error rates and latencies for some workflows. As part of our recovery effort, we have shifted traffic away from the impacted Availability Zone for most services. We recommend customers utilize one of the other Availability Zones in the US-EAST-1 Region, as existing instances in other AZs remain unaffected by this issue. If immediate recovery is required, we recommend customers restore from EBS snapshots and/or replace affected resources by launching new replacement resources in one of the unaffected zones. We will provide an update by 10:00 PM PDT, or sooner if we have additional information to share.

    May 07 6:47 PM PDT – We continue to work towards restoring the increased temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We have shifted traffic away from the impacted Availability Zone for most services at this time. We recommend customers utilize one of the other Availability Zones in the US-EAST-1 Region at this time, as existing instances in other AZs remain unaffected by this issue. Customers may experience longer than usual provisioning times. We will provide an update by 7:45 PM PDT, or sooner if we have additional information to share.

    May 07 5:53 PM PDT – We continue to investigate instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We have experienced an increase in temperatures within a single data center, which in some cases has caused impairments for instances in the Availability Zone. EC2 instances and EBS volumes hosted on impacted hardware are affected by the loss of power during the thermal event. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We will continue to provide updates as recovery continues.

    May 07 5:25 PM PDT – We are investigating instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. Other Availability Zones are not affected by the event and we are working to resolve the issue.

    Affected AWS services – the following 8 services have been affected by this issue:

    - AWS IoT Core
    - Amazon ElastiCache
    - Amazon Elastic Kubernetes Service
    - Amazon Elastic Load Balancing
    - Amazon Managed Streaming for Apache Kafka
    - Amazon OpenSearch Service
    - Amazon Redshift
    - Amazon SageMaker

  2. investigating May 08, 2026, 08:11 AM UTC

    Update: May 07 11:38 PM PDT – We continue to make progress in resolving the impaired EC2 instances in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, and are working towards full recovery. We are actively working to bring additional cooling system capacity online, which will enable us to recover the remaining affected racks in a controlled and safe manner. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services may continue to experience elevated error rates and latencies for some workflows. Customers will continue to see some of their affected EC2 instances and EBS volumes as impaired until we achieve full recovery. We will provide an update by May 8, 1:30 AM PDT, or sooner if we have additional information to share.

  3. investigating May 08, 2026, 09:02 AM UTC

    May 08 1:32 AM PDT – Mitigation efforts remain underway to resolve the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. These EC2 instances and EBS volumes were impacted due to a loss of power during the thermal event. The work to bring additional cooling system capacity online, which will enable us to recover the remaining affected infrastructure in a controlled and safe manner, is taking longer than we had initially anticipated. Some services, such as IoT Core, ELB, NAT Gateway, and Redshift, have seen significant improvements in the recovery of their workflows. However, some customers will continue to see their affected EC2 instances and EBS volumes as impaired until we achieve full recovery. While we do not currently have an ETA for full recovery, we are prioritizing this issue and will provide another update by 3:30 AM PDT or sooner if additional information becomes available.

  4. investigating May 08, 2026, 11:06 AM UTC

    May 08 3:54 AM PDT – We continue to make progress towards resolving the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. At this time, we wanted to provide some more details on the issue.

    Beginning on May 7 at 4:20 PM PDT, we began experiencing an increase in instance impairments within the affected zone due to the loss of power during a thermal event. Engineers were automatically engaged within minutes and immediately began investigating multiple mitigations. By 9:12 PM PDT, we restored power to a subset of the affected infrastructure and observed some signs of recovery, which have remained stable. We continue working to bring additional cooling system capacity online, which will enable us to recover the remaining affected hardware in the impacted zone in a controlled and safe manner. Some AWS services, such as IoT Core, ELB, NAT Gateway, and Redshift, continue to see significant improvements in the recovery of their workflows. However, some customers will continue to see their affected EC2 instances and EBS volumes as impaired until we achieve full recovery. If immediate recovery is required, we recommend customers restore from EBS snapshots and/or replace affected resources by launching new replacement resources in one of the unaffected zones. Based on our current mitigation efforts, we expect full recovery to take several hours. We are prioritizing this issue and will provide another update by 6:30 AM PDT or sooner if additional information becomes available.
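
A practical note on the zone identifiers in the timeline: the vendor names the affected zone by its AZ ID (use1-az4). AZ IDs are consistent across AWS accounts, while zone names such as us-east-1a are shuffled per account, so scoping impact starts with resolving the ID to the zone name your account sees. Below is a minimal sketch of that lookup using boto3 (the AWS SDK for Python); it is illustrative tooling, not part of the vendor's updates, and assumes configured AWS credentials.

```python
import boto3

# EC2 client for the affected region (US-EAST-1 per the vendor updates).
ec2 = boto3.client("ec2", region_name="us-east-1")

# AZ IDs (use1-az4) are stable across accounts; zone names (us-east-1a, ...)
# are randomized per account, so resolve the ID to this account's zone name.
resp = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-id", "Values": ["use1-az4"]}]
)
for az in resp["AvailabilityZones"]:
    print(f'{az["ZoneId"]} maps to {az["ZoneName"]} in this account')
```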
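
The updates also note that affected EC2 instances will show as impaired until full recovery. One way to enumerate impaired instances in the affected zone is to filter instance status checks by the zone name resolved above; again a sketch under the same assumptions, with the zone name a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical placeholder: the account-specific zone name resolved
# for use1-az4 in the previous sketch.
zone_name = "us-east-1c"

# Page through status checks for every instance in the zone, including
# instances that are stopped or failing checks.
paginator = ec2.get_paginator("describe_instance_status")
pages = paginator.paginate(
    IncludeAllInstances=True,
    Filters=[{"Name": "availability-zone", "Values": [zone_name]}],
)
for page in pages:
    for status in page["InstanceStatuses"]:
        system = status["SystemStatus"]["Status"]
        instance = status["InstanceStatus"]["Status"]
        # Anything other than "ok" on either check warrants a look.
        if system != "ok" or instance != "ok":
            print(status["InstanceId"], f"system={system}", f"instance={instance}")
```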
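
Finally, several updates recommend that customers needing immediate recovery restore from EBS snapshots and launch replacement resources in an unaffected zone. A minimal boto3 sketch of that path follows; the snapshot ID, target zone, instance ID, and device name are all hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All identifiers below are hypothetical placeholders.
SNAPSHOT_ID = "snap-0123456789abcdef0"        # recent snapshot of the degraded volume
TARGET_AZ = "us-east-1b"                      # any us-east-1 zone other than the affected one
REPLACEMENT_INSTANCE = "i-0abcdef1234567890"  # healthy instance already in TARGET_AZ

# Create a new volume from the snapshot in an unaffected zone.
vol = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone=TARGET_AZ,
    VolumeType="gp3",
)
volume_id = vol["VolumeId"]

# Wait until the volume is ready, then attach it to the replacement instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId=REPLACEMENT_INSTANCE,
    Device="/dev/sdf",
)
```

Snapshots are regional while EBS volumes are zonal, which is why a snapshot is the standard bridge for moving data out of an impaired Availability Zone; the new volume must be created in the same zone as the instance it will attach to.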