Alpaca Outage History

Alpaca is up right now

There were 11 Alpaca outages since February 4, 2026, totaling 17h 0m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://status.alpaca.markets

Minor April 27, 2026

Dashboard SMS 2FA

Detected by Pingoru
Apr 27, 2026, 12:13 PM UTC
Resolved
Apr 27, 2026, 07:54 PM UTC
Duration
7h 40m
Affected: Dashboard
Timeline · 4 updates
  1. investigating Apr 27, 2026, 12:13 PM UTC

    We are currently experiencing issues with SMS two-factor authentication for the Trading Dashboard. If you cannot log in, please contact support. This does not affect our APIs or Broker Dashboard.

  2. identified Apr 27, 2026, 12:14 PM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Apr 27, 2026, 07:20 PM UTC

    We believe the issue with SMS-based two-factor authentication for the Trading Dashboard is resolved. We are monitoring the service to validate the fix.

  4. resolved Apr 27, 2026, 07:54 PM UTC

    Login for the Trading Dashboard is working normally again.

Read the full incident report →

Minor April 7, 2026

JIT Reports are not available for April 6, 2026

Detected by Pingoru
Apr 07, 2026, 02:45 AM UTC
Resolved
Apr 07, 2026, 03:30 AM UTC
Duration
44m
Affected: Just in Time
Timeline · 4 updates
  1. investigating Apr 07, 2026, 02:45 AM UTC

    Just In Time reports are not being generated.

  2. investigating Apr 07, 2026, 03:09 AM UTC

    The team is still working to identify the root cause.

  3. identified Apr 07, 2026, 03:22 AM UTC

    We have identified the underlying cause. We are working on the recovery.

  4. resolved Apr 07, 2026, 03:30 AM UTC

    We have fixed the issue and reports are now available.

Read the full incident report →

Notice March 31, 2026

Service degradation across multiple APIs

Detected by Pingoru
Mar 31, 2026, 01:25 PM UTC
Resolved
Mar 31, 2026, 02:10 PM UTC
Duration
44m
Affected: broker.accounts.getJNLC
Timeline · 5 updates
  1. investigating Mar 31, 2026, 01:25 PM UTC

    We are currently investigating a service degradation that is impacting account information access and transaction processing. Client Impact: Users may experience intermittent timeouts or delays when attempting to access their account details. Current Status: Our engineering teams are actively engaged and working to address the underlying issues related to processing capacity and system delays. We are prioritizing a resolution and will provide an update when more information is available.

  2. identified Mar 31, 2026, 01:34 PM UTC

    We are seeing improvement in API response times, and journals are no longer getting stuck. We are continuing to monitor the situation and will post further updates as needed.

  3. monitoring Mar 31, 2026, 01:43 PM UTC

    The service has been restored to normal operation. We are no longer seeing issues with any API endpoints or journals. We will continue monitoring to ensure stability.

  4. monitoring Mar 31, 2026, 01:56 PM UTC

    All services are operating normally with no issues detected. We are continuing to monitor and will provide updates if anything changes.

  5. resolved Mar 31, 2026, 02:10 PM UTC

    The issue has been fully resolved and we are marking this incident as closed. If you observe any further issues, please don't hesitate to reach out to us.

Read the full incident report →

Major March 23, 2026

Journals Delayed

Detected by Pingoru
Mar 23, 2026, 07:45 PM UTC
Resolved
Mar 23, 2026, 08:10 PM UTC
Duration
25m
Affected: JNLC
Timeline · 4 updates
  1. investigating Mar 23, 2026, 07:45 PM UTC

    We are seeing an issue with journal processing due to Redpanda.

  2. identified Mar 23, 2026, 07:48 PM UTC

    We have identified an issue with the Redpanda cluster and are in the process of restarting it.

  3. monitoring Mar 23, 2026, 07:55 PM UTC

    The cluster is up and journals are being processed.

  4. resolved Mar 23, 2026, 08:10 PM UTC

    Journal processing is back to normal.

Read the full incident report →
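
The updates above attribute the stall to the Redpanda cluster. Redpanda speaks the Kafka protocol, so a pipeline that depends on it can run a cheap liveness probe and alert early instead of silently stalling journal processing. Below is a minimal sketch using the kafka-python client; the broker address and topic name are hypothetical placeholders, not Alpaca's actual configuration.

```python
# Liveness probe for a Kafka-compatible (e.g. Redpanda) cluster.
# pip install kafka-python; broker address and topic are placeholders.
from kafka import KafkaConsumer
from kafka.errors import KafkaError

BOOTSTRAP = "redpanda.internal:9092"  # hypothetical broker address
TOPIC = "journals"                    # hypothetical topic name

def cluster_is_healthy() -> bool:
    try:
        consumer = KafkaConsumer(
            bootstrap_servers=BOOTSTRAP,
            request_timeout_ms=15000,  # fail fast instead of hanging
        )
        topics = consumer.topics()  # forces a metadata round trip to the brokers
        consumer.close()
        return TOPIC in topics
    except KafkaError:
        return False

if __name__ == "__main__":
    print("healthy" if cluster_is_healthy() else "unreachable or topic missing")
```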

Notice March 17, 2026

Marketing website down

Detected by Pingoru
Mar 17, 2026, 04:03 PM UTC
Resolved
Mar 17, 2026, 04:17 PM UTC
Duration
13m
Timeline · 2 updates
  1. investigating Mar 17, 2026, 04:03 PM UTC

    We are investigating the above issue. Updates will follow.

  2. resolved Mar 17, 2026, 04:17 PM UTC

    The website is back up and there was no impact.

Read the full incident report →

Minor March 12, 2026

Orders are intermittently not filled

Detected by Pingoru
Mar 12, 2026, 04:54 PM UTC
Resolved
Mar 12, 2026, 08:18 PM UTC
Duration
3h 23m
Affected: Orders API, Fractional Orders
Timeline · 17 updates
  1. investigating Mar 12, 2026, 04:54 PM UTC

    We are currently seeing intermittent issues with orders being filled. Our team is actively working on the issue.

  2. investigating Mar 12, 2026, 04:55 PM UTC

    We are continuing to investigate this issue.

  3. identified Mar 12, 2026, 05:05 PM UTC

    We are still looking into the issue and will provide the next update shortly.

  4. identified Mar 12, 2026, 05:13 PM UTC

    This is due to a crash in our main messaging system (RabbitMQ). We have restarted the messaging system and are now restarting the associated programs to re-establish stable connections. We are actively monitoring the system to ensure it is back to normal as quickly as possible.

  5. identified Mar 12, 2026, 05:25 PM UTC

    While we were able to restart the main messaging infrastructure, we are currently managing some recurring connection instability and have identified a backlog of orders that have not yet been processed. Our engineering team is fully engaged and focused on clearing this backlog and addressing the underlying causes of the intermittent system disruptions to ensure a complete and stable resolution. We will provide another update once the system has fully stabilized.

  6. identified Mar 12, 2026, 05:38 PM UTC

    We are currently experiencing intermittent delays in order processing due to a system connectivity issue. Our team is actively working to resolve this. Here's what's happening: (1) orders stuck in "pending" status are being manually processed and re-routed; (2) for orders pending cancellation, we are coordinating directly with our trading partners to ensure they are properly canceled; (3) crypto order processing is being restored as part of our ongoing recovery efforts. Our engineering team has identified the underlying cause and is working on a permanent fix. We will share another update once the issue is fully resolved. We apologize for the inconvenience and appreciate your patience.

  7. identified Mar 12, 2026, 05:48 PM UTC

    The root cause of the instability has been traced to a specific node within our messaging cluster. A targeted restart of that component is being prepared and will be executed shortly. In the meantime, the order backlog is actively being cleared: pending orders are decreasing and filled orders are increasing as manual re-routing continues. The crypto exchange component restart has been completed successfully. The next step to restore full system stability is the execution of this remaining component restart. We will confirm once it is complete.

  8. identified Mar 12, 2026, 06:01 PM UTC

    After restarting the problematic component in the messaging cluster, the messaging system has stabilized and key trading services are back online and operating normally. A final reconnection step for our crypto trading service is being completed to ensure all systems are fully restored. We expect full recovery shortly and will confirm once everything is back to normal. Thank you for your patience.

  9. identified Mar 12, 2026, 06:13 PM UTC

    The majority of services have been restored. Our team has identified one remaining synchronization issue within the messaging infrastructure and is applying a targeted fix. As a precaution, affected services have been temporarily scaled down during the repair. A small number of orders that did not reach their intended venues are being re-processed. We will confirm resolution once this final fix is verified.

  10. identified Mar 12, 2026, 06:26 PM UTC

    The fix for the primary affected queue has been successfully applied: messages are flowing and the backlog is cleared. During this process, two additional queues were found to have similar synchronization issues. Our team is now applying the same fix to these remaining components to restore full stability. In parallel, we are evaluating longer-term improvements to our messaging infrastructure to prevent recurrence. We will provide an update once all components are fully restored.

  11. identified Mar 12, 2026, 06:37 PM UTC

    We are still working on the issue and monitoring the system closely. We will post the next update soon.

  12. identified Mar 12, 2026, 06:56 PM UTC

    Our messaging system is operating normally as of now. Our team is now focused on cleaning up the remaining orders that were impacted during the disruption. We appreciate your continued patience.

  13. identified Mar 12, 2026, 07:10 PM UTC

    We are still cleaning up the backlog of pending-cancel and pending-replace orders.

  14. identified Mar 12, 2026, 07:22 PM UTC

    The current phase is operational cleanup and client impact mitigation: normal order flow is working, and the focus is on systematically clearing pending replace/pending cancel states (including crypto) while continuing elevated monitoring.

  15. identified Mar 12, 2026, 07:40 PM UTC

    Operational cleanup of pending_cancel orders is still in progress.

  16. resolved Mar 12, 2026, 08:18 PM UTC

    This incident has been resolved.

  17. postmortem Mar 13, 2026, 04:40 PM UTC

    **Impact:** On March 12, 2026, our trading platform experienced intermittent connectivity to multiple trading venues. Consequently, orders were either not successfully delivered to venues or remained in a pending status.

    **Incident overview and remediation:** Our investigation found instability within our core queueing system and a crash loop in the order consumer components of our routing service, causing cascading latency across the platform. To resolve the issue, we restarted the queueing system, restoring connectivity and clearing the order backlog. Any remaining "stuck" orders were manually reconciled by our operations team.

    **Next steps:**
    * We have extended this weekend's maintenance window (March 14–15, 2026) to two hours to upgrade system capacity, restart key services for stability, and apply critical infrastructure updates both internally and at one of our third-party vendors.
    * We are validating our remediation plan against the root cause to prevent recurrence.
    * We are maintaining heightened surveillance of platform stability and will provide updates via our status page.

    If you have questions or believe your orders were impacted, please contact your account manager or our support team through your usual support channel.

Read the full incident report →
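
The postmortem above traces the outage to a RabbitMQ crash plus crash-looping order consumers. A common defensive pattern on the consumer side is a reconnect loop that survives broker restarts rather than dying with the connection. The sketch below uses the pika client; the host, queue name, and handler are hypothetical stand-ins, not Alpaca's internals.

```python
# Reconnecting RabbitMQ consumer sketch. pip install pika
# Host, queue name, and message handler are hypothetical placeholders.
import time
import pika
from pika.exceptions import AMQPConnectionError

RABBIT_HOST = "rabbitmq.internal"  # placeholder
QUEUE = "orders"                   # placeholder

def handle(ch, method, properties, body):
    # ... process one order message here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume_forever():
    while True:
        try:
            conn = pika.BlockingConnection(pika.ConnectionParameters(host=RABBIT_HOST))
            channel = conn.channel()
            channel.queue_declare(queue=QUEUE, durable=True)
            channel.basic_qos(prefetch_count=10)  # bound the unacked backlog per consumer
            channel.basic_consume(queue=QUEUE, on_message_callback=handle)
            channel.start_consuming()
        except AMQPConnectionError:
            # Broker restarted or the connection dropped: back off and
            # reconnect instead of crash-looping with the broker.
            time.sleep(5)

if __name__ == "__main__":
    consume_forever()
```

The prefetch limit also bounds how many unacknowledged messages a single consumer holds while the broker is unstable, which keeps a restart from re-delivering a huge in-flight batch at once.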

Major March 9, 2026

Journal processing is stuck in "sent_to_clearing" state

Detected by Pingoru
Mar 09, 2026, 03:23 AM UTC
Resolved
Mar 09, 2026, 04:15 AM UTC
Duration
51m
Affected: JNLC
Timeline · 10 updates
  1. investigating Mar 09, 2026, 03:23 AM UTC

    We are currently investigating this issue.

  2. identified Mar 09, 2026, 03:30 AM UTC

    We have identified an issue with one of the infrastructure components that is causing replication to fail. The team is currently working on it.

  3. identified Mar 09, 2026, 03:33 AM UTC

    The team has fixed the underlying root cause and new journals are being executed. We are working on the backfill.

  4. identified Mar 09, 2026, 03:34 AM UTC

    We are continuing to work on a fix for this issue.

  5. identified Mar 09, 2026, 03:47 AM UTC

    The team is still working on the backfill.

  6. identified Mar 09, 2026, 04:00 AM UTC

    The team is still working on backfilling the journals.

  7. identified Mar 09, 2026, 04:13 AM UTC

    Journal backfill completed.

  8. monitoring Mar 09, 2026, 04:13 AM UTC

    A fix has been implemented and we are monitoring the results.

  9. resolved Mar 09, 2026, 04:15 AM UTC

    This incident has been resolved.

  10. postmortem Mar 11, 2026, 02:55 PM UTC

    **Service Status Update: Transaction Processing Delay (March 8–9, 2026)**

    **Root Cause:** On the evening of March 8, 2026, our platform experienced a disruption in the processing of journal entries (internal account transfers). The issue was triggered by a high-volume internal data job related to annual tax reporting. This job generated a very large volume of database writes, which caused WAL (Write-Ahead Log) growth to reach the configured limit for replication slots. Once this threshold was reached, our database marked the affected logical replication slots as lost, which interrupted the Change Data Capture (CDC) pipeline needed to advance journal processing from "sent to clearing" to "executed".

    **Impact:**
    * **Data integrity:** We can confirm that **no data was lost**, and all funds remain fully accounted for. The issue was limited strictly to a delay in processing time, not the accuracy of the transactions.
    * **Affected partners:** The delay impacted 31 correspondent firms, with the majority of the volume concentrated in specific high-activity accounts.
    * **Current status:** All backlogged transactions were successfully processed and finalized by 11:10 PM ET on March 8.

    **Resolution:** Once the delay was identified, our engineering team took the following immediate actions:
    * **Terminated the data job:** The heavy internal tax-processing task was halted to relieve pressure on the database.
    * **Recreated replication slots:** The affected logical replication slots were recreated.
    * **Backfilled CDC data:** Engineers performed a backfill to restore the replication pipeline and synchronize the missing changes.

    **Commitment to reliability:** To prevent a recurrence, we are implementing the following safeguards:
    1. **Job optimization:** Future large-scale data tasks (such as tax reporting) will be broken into smaller batches to prevent system saturation.
    2. **Increased capacity:** We are reviewing our synchronization storage limits to provide a larger buffer during peak periods of internal data activity.
    3. **Enhanced monitoring:** We have updated our internal playbooks to ensure faster automated alerting if data synchronization lags in the future.

    We apologize for any inconvenience this delay may have caused to your operations.

Read the full incident report →
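
The root cause here is a well-known PostgreSQL failure mode: once WAL retention for a logical replication slot exceeds the configured limit (max_slot_wal_keep_size), the server marks the slot lost and the CDC pipeline behind it stops. On PostgreSQL 13 and later, the pg_replication_slots view exposes exactly the signals the "Enhanced monitoring" step calls for. A hedged sketch of such a check using psycopg2; the DSN, threshold, and alert hook are placeholders.

```python
# Watch logical replication slots for WAL retention pressure (PostgreSQL 13+).
# pip install psycopg2-binary; the DSN and threshold below are placeholders.
import psycopg2

DSN = "dbname=broker host=db.internal user=monitor"  # placeholder
HEADROOM_BYTES = 2 * 1024**3  # alert when < 2 GiB of safe WAL remains (placeholder)

QUERY = """
SELECT slot_name,
       wal_status,        -- 'reserved' | 'extended' | 'unreserved' | 'lost'
       safe_wal_size      -- bytes of WAL writable before the slot is in danger
FROM pg_replication_slots
WHERE slot_type = 'logical';
"""

def alert(msg: str):
    print("ALERT:", msg)  # stand-in for a real pager or webhook

def check_slots():
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for slot_name, wal_status, safe_wal_size in cur.fetchall():
            if wal_status == "lost":
                alert(f"slot {slot_name} lost: CDC pipeline is broken")
            elif wal_status == "unreserved" or (
                safe_wal_size is not None and safe_wal_size < HEADROOM_BYTES
            ):
                alert(f"slot {slot_name} near WAL limit ({safe_wal_size} bytes left)")

if __name__ == "__main__":
    check_slots()
```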

Minor February 9, 2026

Timeouts on Options Market Data Snapshots

Detected by Pingoru
Feb 09, 2026, 05:01 PM UTC
Resolved
Feb 09, 2026, 05:36 PM UTC
Duration
35m
Affected: https://data.alpaca.markets/v1beta1/options/snapshots
Timeline · 3 updates
  1. investigating Feb 09, 2026, 05:01 PM UTC

    We are currently investigating this issue.

  2. investigating Feb 09, 2026, 05:16 PM UTC

    We are continuing to investigate this issue.

  3. resolved Feb 09, 2026, 05:36 PM UTC

    This issue has been resolved.

Read the full incident report →
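
The affected component is a REST endpoint and the symptom was timeouts, which is worth defending against on the client side: a call without an explicit timeout will hang through an incident like this one, while a hard timeout plus bounded retries with backoff degrades gracefully. A sketch using requests, assuming Alpaca's documented APCA-API-KEY-ID / APCA-API-SECRET-KEY header scheme; treat the exact path suffix and parameters as illustrative.

```python
# Defensive call to the options snapshots endpoint named above.
# pip install requests; credentials are read from the environment.
import os
import time
import requests

BASE = "https://data.alpaca.markets/v1beta1/options/snapshots"
HEADERS = {
    "APCA-API-KEY-ID": os.environ["APCA_API_KEY_ID"],
    "APCA-API-SECRET-KEY": os.environ["APCA_API_SECRET_KEY"],
}

def get_snapshots(underlying: str, attempts: int = 3, timeout: float = 5.0):
    """Fetch option snapshots with a hard timeout and exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = requests.get(f"{BASE}/{underlying}", headers=HEADERS, timeout=timeout)
            if resp.status_code == 200:
                return resp.json()
            # non-200 (e.g. 429/5xx): fall through and retry
        except requests.exceptions.Timeout:
            pass  # exactly the failure mode reported in this incident
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    raise RuntimeError(f"snapshots for {underlying} unavailable after {attempts} attempts")

# usage: data = get_snapshots("AAPL")
```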

Minor February 6, 2026

Timeouts on accounts

Detected by Pingoru
Feb 06, 2026, 03:37 PM UTC
Resolved
Feb 06, 2026, 04:46 PM UTC
Duration
1h 9m
Affected: Account API
Timeline · 6 updates
  1. investigating Feb 06, 2026, 03:37 PM UTC

    We are investigating the issue.

  2. investigating Feb 06, 2026, 03:42 PM UTC

    We are currently seeing timeouts on our accounts_ID endpoint. Our team is looking into the issue.

  3. identified Feb 06, 2026, 03:48 PM UTC

    We have identified the issue causing the timeouts and are addressing it.

  4. identified Feb 06, 2026, 04:02 PM UTC

    We are implementing a fix now

  5. monitoring Feb 06, 2026, 04:24 PM UTC

    A fix has been implemented and we are continuing to monitor.

  6. resolved Feb 06, 2026, 04:46 PM UTC

    The endpoint is now stabilized and we are not seeing any more errors.

Read the full incident report →

Minor February 4, 2026

Account activation is delayed

Detected by Pingoru
Feb 04, 2026, 07:01 PM UTC
Resolved
Feb 04, 2026, 07:59 PM UTC
Duration
57m
Affected: New Account Onboarding
Timeline · 7 updates
  1. investigating Feb 04, 2026, 07:01 PM UTC

    We are currently investigating this issue.

  2. investigating Feb 04, 2026, 07:01 PM UTC

    We are continuing to investigate this issue.

  3. investigating Feb 04, 2026, 07:06 PM UTC

    We are continuing to investigate this issue.

  4. identified Feb 04, 2026, 07:24 PM UTC

    An invalid message pushed to the queue was causing the service to crash.

  5. identified Feb 04, 2026, 07:30 PM UTC

    The invalid messages have been removed from the queue. We are monitoring the queue and the service.

  6. monitoring Feb 04, 2026, 07:30 PM UTC

    A fix has been implemented and we are monitoring the results.

  7. resolved Feb 04, 2026, 07:59 PM UTC

    The issue is fixed and the service is back to normal.

Read the full incident report →
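
Per the timeline, a single invalid message on the queue repeatedly crashed the onboarding service until it was removed by hand. The usual defense is poison-message handling: reject a message that fails validation without requeueing it, so the broker dead-letters it for inspection instead of redelivering it into the same crash. The incident does not name the queueing technology; the sketch below assumes a RabbitMQ-style broker via pika, and all names are hypothetical.

```python
# Poison-message handling sketch for a RabbitMQ-style broker. pip install pika
# Queue and exchange names are hypothetical placeholders.
import json
import pika

QUEUE = "account-activation"  # placeholder

def on_message(ch, method, properties, body):
    try:
        event = json.loads(body)  # the validation step that previously crashed
    except (json.JSONDecodeError, UnicodeDecodeError):
        # Invalid payload: nack without requeue so the broker routes it to the
        # dead-letter exchange for triage instead of redelivering it forever.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return
    process(event)
    ch.basic_ack(delivery_tag=method.delivery_tag)

def process(event):
    ...  # normal account-activation handling

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
# x-dead-letter-exchange sends rejected messages to "dlx" rather than dropping them.
channel.queue_declare(queue=QUEUE, durable=True,
                      arguments={"x-dead-letter-exchange": "dlx"})
channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
channel.start_consuming()
```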

Looking to track Alpaca downtime and outages?

Pingoru polls Alpaca's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
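
status.alpaca.markets is a hosted Statuspage instance, and Statuspage publishes a public JSON feed at /api/v2/status.json whose indicator field moves from "none" to "minor", "major", or "critical" during an incident. Pingoru's own implementation is not public, but a minimal self-hosted poller along the lines described above could look like this; the webhook URL is a placeholder.

```python
# Poll a Statuspage-backed status page on an interval. pip install requests
import time
import requests

STATUS_URL = "https://status.alpaca.markets/api/v2/status.json"
WEBHOOK = "https://example.com/alerts"  # placeholder notification target

def poll_forever(interval: int = 300):  # 300 s = the 5-minute cadence above
    last = "none"
    while True:
        try:
            indicator = requests.get(STATUS_URL, timeout=10).json()["status"]["indicator"]
        except (requests.RequestException, KeyError):
            indicator = last  # status page unreachable or malformed; skip this cycle
        if indicator != last:  # e.g. "none" -> "minor", or recovery back to "none"
            requests.post(WEBHOOK, json={"provider": "Alpaca", "status": indicator},
                          timeout=10)
            last = indicator
        time.sleep(interval)

if __name__ == "__main__":
    poll_forever()
```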

  • Real-time alerts when Alpaca reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Alpaca alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Alpaca for free

5 free monitors · No credit card required