Alpaca incident

JNLCs (cash journals) are getting stuck due to large batch processing.

Alpaca experienced a major incident on November 18, 2025 affecting broker.journals.get, Broker Dashboard, and broker.events.journals.status.get, lasting 1h 46m. The incident has been resolved; the full update timeline is below.

Started: Nov 18, 2025, 11:31 AM UTC
Resolved: Nov 18, 2025, 01:17 PM UTC
Duration: 1h 46m
Detected by Pingoru: Nov 18, 2025, 11:31 AM UTC

Affected components

broker.journals.get
Broker Dashboard
broker.events.journals.status.get
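
The affected components correspond to the Broker API's journal list endpoint and its journal-status event stream. As an illustrative sketch only, not a pattern from Alpaca's documentation, a partner hit by this kind of backlog could poll the journal list and flag JNLCs that have sat in a non-terminal status for too long. The sandbox base URL, the environment-variable credentials, the status names, and the `created_at` field are all assumptions to verify against the Broker API docs:

```python
"""Minimal sketch: flag JNLC journals stuck in a non-terminal status.

Assumes the Broker API sandbox host and HTTP Basic auth; the status
names, query parameters, and response fields below should be checked
against Alpaca's Broker API documentation.
"""
import os
from datetime import datetime, timedelta, timezone

import requests

BASE_URL = "https://broker-api.sandbox.alpaca.markets"  # assumption: sandbox host
AUTH = (os.environ["APCA_API_KEY_ID"], os.environ["APCA_API_SECRET_KEY"])

# Assumption: anything not in this set still counts as "in flight".
TERMINAL_STATUSES = {"executed", "rejected", "canceled", "deleted"}
STUCK_AFTER = timedelta(minutes=30)  # tolerance before calling a journal "stuck"


def find_stuck_jnlcs() -> list:
    """Return JNLC journals still non-terminal past the tolerance."""
    resp = requests.get(
        f"{BASE_URL}/v1/journals",
        params={"entry_type": "JNLC"},  # cash journals only
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    now = datetime.now(timezone.utc)
    stuck = []
    for journal in resp.json():
        if journal.get("status") in TERMINAL_STATUSES:
            continue
        # Assumption: the journal object carries an ISO-8601 creation timestamp.
        created = datetime.fromisoformat(journal["created_at"].replace("Z", "+00:00"))
        if now - created > STUCK_AFTER:
            stuck.append(journal)
    return stuck


if __name__ == "__main__":
    for j in find_stuck_jnlcs():
        print(j["id"], j["status"])
```

The third affected component, broker.events.journals.status.get, suggests a server-sent-events stream of journal status changes, which would be the push-based alternative to a polling loop like this one.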

Update timeline

  1. investigating Nov 18, 2025, 11:31 AM UTC

    JNLCs are getting delayed due to the high volume of batches being processed in our ledger system. We are continuously monitoring the system during this period.

  2. monitoring Nov 18, 2025, 12:48 PM UTC

    We have processed the large batches and the system is back to normal. All backlogs are cleared.

  3. resolved Nov 18, 2025, 01:17 PM UTC

    Manual batches are cleared and the system is back to normal.

  4. postmortem Nov 19, 2025, 01:45 PM UTC

    We are providing an update on a service interruption that impacted transaction processing on **November 18, 2025**.

    ### **What Happened**

    During a period of heavy transaction load, a specific internal maintenance process temporarily blocked data synchronization between our core accounting and reconciliation systems. This blockage caused a backlog, temporarily delaying the completion status for a segment of executed financial transactions (journals).

    ### **Impact**

    * Approximately **40,000 transactions** were processed but temporarily showed an intermediary, "stuck" status.
    * **Partners who rely on real-time transaction status updates or derived balance fields may have experienced brief delays or temporary inconsistencies in reporting.**
    * **No customer funds or executed transactions were lost.** All transactions were successfully processed by our core ledger system.

    ### **Resolution**

    Our engineering team identified the degradation in our database synchronization immediately. We performed a controlled remediation process to reconcile the data, manually updating the status of the backlogged transactions and verifying consistency across all affected systems. **The issue has been fully resolved, and normal transaction processing is fully restored.**

    ### **Preventative Measures**

    We have identified the root cause, a long-running internal job that held a database resource for too long (see the sketch after this timeline), and are implementing immediate fixes to ensure this does not recur:

    * **Process Optimization:** Hardening the internal job to prevent future resource contention.
    * **Resilience and Monitoring:** Enhancing our tools for faster recovery and strengthening alerts around transaction backlogs and database health to detect similar issues instantly.

    We appreciate your understanding and thank you for your patience. We remain committed to the highest standards of reliability.
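
The postmortem attributes the backlog to a long-running internal job holding a database resource, but does not name the database. Purely as a hedged illustration, the sketch below shows how that class of problem surfaces in PostgreSQL (an assumed choice made only for concreteness): `pg_blocking_pids()` joined against `pg_stat_activity` lists every blocked session alongside the transaction blocking it and that transaction's age. The DSN is a placeholder.

```python
"""Sketch: surface sessions blocked by a long-running job holding a lock.

PostgreSQL is an assumption here (the postmortem does not name the
database); pg_blocking_pids() and pg_stat_activity are standard
Postgres facilities. The DSN is a placeholder.
"""
import psycopg2

BLOCKED_QUERY = """
SELECT
    blocked.pid                AS blocked_pid,
    blocked.query              AS blocked_query,
    blocker.pid                AS blocking_pid,
    blocker.query              AS blocking_query,
    now() - blocker.xact_start AS blocker_xact_age
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocker
  ON blocker.pid = ANY (pg_blocking_pids(blocked.pid))
ORDER BY blocker_xact_age DESC;
"""


def report_lock_contention(dsn: str) -> None:
    """Print each blocked session alongside the transaction blocking it."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(BLOCKED_QUERY)
        for blocked_pid, blocked_q, blocking_pid, blocking_q, age in cur.fetchall():
            print(f"pid {blocked_pid} blocked by pid {blocking_pid} "
                  f"(blocker transaction age: {age})")
            print(f"  blocked: {blocked_q[:80]}")
            print(f"  blocker: {blocking_q[:80]}")


if __name__ == "__main__":
    report_lock_contention("dbname=ledger user=monitor")  # placeholder DSN
```

Alerting when the blocker's transaction age crosses a threshold, combined with a count of journals stuck in non-terminal status as in the earlier sketch, covers both signals the preventative measures call out: database health and transaction backlogs.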