VitalSource incident

Severity: Minor · Status: Resolved

VitalSource experienced a minor incident on August 25, 2025 affecting the V3/V4 APIs, VitalSource Launch (LTI, LMS, SAML, Office365, OAuth2, and other integrations), and additional components listed below, lasting 6h 23m. The incident has been resolved; the full update timeline is below.

Started
Aug 25, 2025, 02:45 PM UTC
Resolved
Aug 25, 2025, 09:09 PM UTC
Duration
6h 23m
Detected by Pingoru
Aug 25, 2025, 02:45 PM UTC

Affected components

V3/V4 APIs
VitalSource Launch - LTI, LMS, SAML, Office365, OAuth2 and other integrations
Bookshelf Online
Manage Platform
Store, Sampling and Ecommerce Platform
VitalSource Explore
Analytics
Acrobatiq
Verba Collect
Verba Connect

Update timeline

  1. investigating Aug 25, 2025, 02:45 PM UTC

    We are currently investigating this issue.

  2. identified Aug 25, 2025, 04:00 PM UTC

    We have made changes that improved performance and are monitoring closely for any further issues.

  3. identified Aug 25, 2025, 05:59 PM UTC

    We are continuing to work on this issue. The Manage Platform UI is offline for maintenance.

  4. identified Aug 25, 2025, 06:23 PM UTC

    We are continuing to work on a fix for this issue.

  5. identified Aug 25, 2025, 06:44 PM UTC

    We have been experiencing an intermittent service degradation affecting VitalSource systems. A subset of users and API integrations may encounter slow performance or error messages, while others remain unaffected. Our engineering teams are actively investigating and working on remediation. We will provide the next update as soon as more information is available.

  6. monitoring Aug 25, 2025, 07:09 PM UTC

    A fix has been implemented and we are monitoring the results.

  7. identified Aug 25, 2025, 09:05 PM UTC

    We are continuing to work on a fix for this issue.

  8. resolved Aug 25, 2025, 09:09 PM UTC

    This incident has been resolved.

  9. postmortem Aug 25, 2025, 10:33 PM UTC

    Today, we experienced intermittent service interruption across VitalSource systems. During the incident, users and API integrations encountered slow performance, error messages, and intermittent access issues.

    The cause was significant request queuing in our core API infrastructure. This queuing added latency to some of our critical API endpoints, and that latency was then felt across several of our end-user-facing applications as well as the APIs that power our partner applications.

    Our engineering team resolved the request queuing by adjusting database configurations and Kubernetes cluster settings. This added capacity, and we were able to return to normal operations. All systems are now operating normally. Our engineering team's highest priority is to understand exactly why we saw severe request queuing, and we will continue to monitor performance closely.
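The postmortem gives no concrete rates, but the dynamic it describes — requests queuing in a near-saturated API tier until extra capacity restores headroom — can be sketched with a basic M/M/1 queueing estimate. All numbers below are hypothetical and are not taken from the incident:

```python
def mm1_avg_latency(arrival_rate: float, service_rate: float) -> float:
    """Average time a request spends in an M/M/1 queue (waiting + service),
    in seconds. Rates are in requests per second.

    Latency is 1 / (service_rate - arrival_rate): as arrivals approach
    capacity, the denominator shrinks and latency blows up.
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue grows without bound at or beyond capacity")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical: 95 req/s arriving against 100 req/s of capacity leaves
# almost no headroom, so requests queue and latency spikes.
near_saturation = mm1_avg_latency(95, 100)   # 0.2 s per request

# Adding capacity (e.g. scaling the cluster to 140 req/s) restores headroom.
after_scaling = mm1_avg_latency(95, 140)     # ~0.022 s per request

print(f"before: {near_saturation * 1000:.0f} ms, after: {after_scaling * 1000:.0f} ms")
```

This is why a modest capacity increase can produce a dramatic latency improvement: near saturation, average queue delay is extremely sensitive to the gap between arrival rate and service rate.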