Gurobi Optimization incident

Instant Cloud Connectivity issues in AWS Regions

Major · Resolved

Gurobi Optimization experienced a major incident on December 7, 2021, affecting the Gurobi AWS us-east-1 and Instant Cloud Provisioning - Global components and lasting 7h 52m. The incident has been resolved; the full update timeline is below.

Started
Dec 07, 2021, 04:25 PM UTC
Resolved
Dec 08, 2021, 12:17 AM UTC
Duration
7h 52m
Detected by Pingoru
Dec 07, 2021, 04:25 PM UTC

Affected components

Gurobi AWS us-east-1
Instant Cloud Provisioning - Global

Update timeline

  1. investigating Dec 07, 2021, 04:25 PM UTC

    We are currently experiencing connectivity issues in the AWS us-east-1 region, starting at 7:30 AM PST. We strongly recommend that our customers use Azure until the issue is resolved. We appreciate your patience.

  2. investigating Dec 07, 2021, 04:27 PM UTC

    We are currently experiencing connectivity issues in the AWS us-east-1 region, starting at 7:30 AM PST. We strongly recommend that our customers use Azure until the issue is resolved. We appreciate your patience.

  3. investigating Dec 07, 2021, 04:56 PM UTC

    AWS has identified elevated error rates for EC2 APIs and console issues in the US-EAST-1 Region. A root cause has been identified, and they are actively working towards recovery.

  4. investigating Dec 07, 2021, 05:00 PM UTC

    AWS has identified elevated error rates for EC2 APIs and console issues in the US-EAST-1 Region. A root cause has been identified, and they are actively working towards recovery.

  5. investigating Dec 07, 2021, 07:47 PM UTC

    AWS has identified impact to multiple AWS APIs in the US-EAST-1 Region. A root cause has been identified, and they are actively working towards recovery.

  6. resolved Dec 08, 2021, 12:17 AM UTC

    AWS has resolved the underlying issue.

  7. postmortem Dec 08, 2021, 12:18 AM UTC

    Here are the updates from AWS:

    **[9:37 AM PST]** We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. We have identified the root cause and are actively working towards recovery.

    **[10:12 AM PST]** We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. We have identified the root cause of the issue causing service API and console issues in the US-EAST-1 Region, and are starting to see some signs of recovery. We do not have an ETA for full recovery at this time.

    **[11:26 AM PST]** We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. Services impacted include EC2, Connect, DynamoDB, Glue, Athena, Timestream, Chime, and other AWS services in US-EAST-1. The root cause of this issue is an impairment of several network devices in the US-EAST-1 Region. We are pursuing multiple mitigation paths in parallel and have seen some signs of recovery, but we do not have an ETA for full recovery at this time. Root logins for consoles in all AWS regions are affected by this issue; however, customers can log in to consoles other than US-EAST-1 by using an IAM role for authentication.

    **[12:34 PM PST]** We continue to experience increased API error rates for multiple AWS services in the US-EAST-1 Region. The root cause of this issue is an impairment of several network devices. We continue to work toward mitigation and are actively pursuing a number of different mitigation and resolution actions. While we have observed some early signs of recovery, we do not have an ETA for full recovery. For customers experiencing issues signing in to the AWS Management Console in US-EAST-1, we recommend retrying via a separate Management Console endpoint (such as [https://us-west-2.console.aws.amazon.com/](https://us-west-2.console.aws.amazon.com/)). Additionally, if you are attempting to log in using root credentials, you may be unable to do so, even via console endpoints outside US-EAST-1. If you are impacted by this, we recommend using IAM users or roles for authentication. We will continue to provide updates here as we have more information to share.

    **[2:04 PM PST]** We have executed a mitigation which is showing significant recovery in the US-EAST-1 Region. We are continuing to closely monitor the health of the network devices, and we expect to continue to make progress towards full recovery. We still do not have an ETA for full recovery at this time.

    **[2:43 PM PST]** We have mitigated the underlying issue that caused some network devices in the US-EAST-1 Region to be impaired. We are seeing improvement in availability across most AWS services. All services are now independently working through service-by-service recovery. We continue to work toward full recovery for all impacted AWS services and API operations. In order to expedite overall recovery, we have temporarily disabled Event Deliveries for Amazon EventBridge in the US-EAST-1 Region. These events will still be received, accepted, and queued for later delivery.

    **[3:03 PM PST]** Many services have already recovered; however, we are working towards full recovery across services. Services like SSO, Connect, API Gateway, ECS/Fargate, and EventBridge are still experiencing impact. Engineers are actively working on resolving impact to these services.
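The sign-in workaround from the 12:34 PM PST update (retry through a non-impaired region's Management Console endpoint) can be sketched as a small helper. This is an illustrative sketch only: the URL pattern follows the us-west-2 endpoint quoted in the update, but the function names and the impaired-region set are assumptions, not an AWS API.

```python
# Illustrative sketch: choose a regional AWS Management Console sign-in
# URL, falling back to another region when the preferred one (here
# us-east-1) is known to be impaired. The URL pattern matches the
# us-west-2 endpoint AWS cited; the helper names are hypothetical.

def console_endpoint(region: str) -> str:
    """Regional AWS Management Console URL (assumed pattern)."""
    return f"https://{region}.console.aws.amazon.com/"

def signin_endpoint(preferred: str, impaired: set, fallback: str = "us-west-2") -> str:
    """Use the preferred region's console unless it is impaired."""
    region = fallback if preferred in impaired else preferred
    return console_endpoint(region)

print(signin_endpoint("us-east-1", impaired={"us-east-1"}))
# → https://us-west-2.console.aws.amazon.com/
```

Note that, per the same update, root-credential logins could fail even on endpoints outside US-EAST-1, so IAM users or roles were the recommended authentication path regardless of which console endpoint was used.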