Vapi Outage History

Vapi is up right now

There were 19 Vapi outages since February 12, 2026, totaling 361h 8m of downtime. Each is summarized below: incident details, duration, and resolution information.

Source: https://status.vapi.ai/

Major April 20, 2026

Documentation resources are...

Detected by Pingoru
Apr 20, 2026, 10:12 PM UTC
Resolved
Apr 20, 2026, 10:12 PM UTC
Duration
Affected: Vapi Docs
Timeline · 1 update
  1. resolved Apr 20, 2026, 10:12 PM UTC

    Our documentation provider is experiencing an outage. We’re in touch with the team and will provide communication when services are back up

Read the full incident report →

Minor April 16, 2026

Degraded Performance on Inb...

Detected by Pingoru
Apr 16, 2026, 02:45 PM UTC
Resolved
Apr 16, 2026, 05:00 PM UTC
Duration
2h 15m
Affected: Vapi Numbers Inbound, SIP Inbound, Twilio Inbound, Telnyx Inbound, Vonage Inbound
Timeline · 2 updates
  1. investigating Apr 16, 2026, 02:45 PM UTC

    We are investigating reports of degraded performance affecting a subset of calls. Our team is actively working to determine the root cause and will provide updates as we learn more.

  2. resolved Apr 16, 2026, 05:00 PM UTC

    We have now resolved this incident. Between ~7:30AM PT and ~9:30AM PT, a new query pattern caused database slowness that increased API latency and led to dropped inbound calls with certain providers. We mitigated by rolling back the deployment and are following up with a deeper review of the query change.

Read the full incident report →

Major April 14, 2026

SIP Infrastructure Outage

Detected by Pingoru
Apr 14, 2026, 01:32 PM UTC
Resolved
Apr 14, 2026, 03:27 PM UTC
Duration
1h 55m
Affected: Vapi SIP
Timeline · 2 updates
  1. investigating Apr 14, 2026, 01:32 PM UTC

    We are currently experiencing degradation in our SIP infrastructure, resulting in in-call failures including call transfers, increased latency, and other related issues. Our team is actively investigating and working to resolve the problem. We will provide updates as more information becomes available.

  2. resolved Apr 14, 2026, 03:27 PM UTC

    The issue has been resolved. After applying the remediation, we monitored the affected systems and confirmed stable operation. All services are functioning normally.

Read the full incident report →

Minor April 2, 2026

Elevated Error Rates with S...

Detected by Pingoru
Apr 02, 2026, 11:30 AM UTC
Resolved
Apr 02, 2026, 11:30 AM UTC
Duration
Affected: Providers (Soniox)
Timeline · 1 update
  1. resolved Apr 02, 2026, 11:30 AM UTC

    We are currently observing elevated error rates affecting calls that use the Soniox transcriber. Impacted calls may terminate unexpectedly with the ended reason call.in-progress.error-vapifault-soniox-transcriber-failed. While we work to resolve this, we recommend switching to an alternative transcriber or configuring a transcriber fallback plan to ensure call continuity. You can set up fallbacks by following the guide here: https://docs.vapi.ai/customization/transcriber-fallback-plan We are actively monitoring the situation and will provide updates as more information becomes available.

Read the full incident report →
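
The update above recommends configuring a transcriber fallback plan. As a rough illustration, here is a minimal sketch of what that might look like when creating an assistant through the Vapi API, assuming the `fallbackPlan` shape described in the linked guide; the endpoint and exact field names are assumptions taken from that doc, not verified against this incident.

```typescript
// Hypothetical sketch: create an assistant whose transcriber falls back
// from Soniox to another provider if Soniox fails mid-call. Treat the
// exact field names as assumptions rather than a verified schema.
const VAPI_API_KEY = process.env.VAPI_API_KEY; // your private API key

async function createAssistantWithTranscriberFallback() {
  const response = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "Support Agent", // hypothetical assistant name
      transcriber: {
        provider: "soniox",
        // Tried in order if the primary transcriber errors mid-call.
        fallbackPlan: {
          transcribers: [{ provider: "deepgram", model: "nova-2" }],
        },
      },
    }),
  });
  if (!response.ok) throw new Error(`Vapi API error: ${response.status}`);
  return response.json();
}
```

With a plan like this in place, a Soniox outage degrades to a provider switch instead of calls ending with `error-vapifault-soniox-transcriber-failed`.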

Minor March 25, 2026

Degraded API performance on...

Detected by Pingoru
Mar 25, 2026, 08:53 PM UTC
Resolved
Mar 25, 2026, 08:53 PM UTC
Duration
Affected: Vapi API, Vapi API [Weekly]
Timeline · 1 update
  1. resolved Mar 25, 2026, 08:53 PM UTC

    We observed an elevated rate of API errors from 1:28pm to 1:39pm PDT. The errors have since resolved. We are closely monitoring API performance and investigating the root cause.

Read the full incident report →

Minor March 19, 2026

Elevated Call Failure Rate ...

Detected by Pingoru
Mar 19, 2026, 11:17 PM UTC
Resolved
Mar 21, 2026, 05:46 AM UTC
Duration
1d 6h
Affected: Vapi API [Weekly]
Timeline · 2 updates
  1. investigating Mar 19, 2026, 11:17 PM UTC

    We're seeing elevated call failures on the Weekly channel, and the team is actively looking into it.

  2. resolved Mar 21, 2026, 05:46 AM UTC

    Incident Report, March 19, 2026

    **Impact:** A service disruption affected inbound and outbound call reliability on the Daily and Weekly channels. Some calls failed to connect with `transport-never-connected`, `worker-not-available`, `worker-died`, and `deepgram-transcriber-failed` end reasons.

    **Timeline (all times PDT):**

    - **12:20 PM** - We detected elevated call failure rates on the Weekly production cluster.
    - **12:22 PM** - We published a status page incident and began investigating.
    - **12:25 PM** - We identified the trigger as an unanticipated surge in call volume that exceeded our provisioned cluster capacity and downstream rate limits with a model provider.
    - **12:30 PM** - We applied traffic controls and began working with the model provider to increase capacity. Call failures began declining.
    - **1:40 PM** - Call success rates returned to normal and held stable. First incident window closed.
    - **~4:00 PM** - A separate traffic spike re-triggered infrastructure constraints, leading to elevated failures. We began investigating immediately.
    - **4:00 PM to 4:40 PM** - We rebalanced traffic and migrated affected workloads to dedicated infrastructure to restore headroom on shared clusters.
    - **4:50 PM** - All mitigations took effect. Call success rates returned to normal.
    - **4:50 PM to 8:10 PM** - We continued active monitoring. No further failures observed.
    - **8:10 PM** - Second incident window closed.

    **Immediate Action Items:** Improve workload isolation and per-account capacity guardrails to prevent resource contention from cascading across the platform.

    **Note:** A full root cause analysis is underway and will be available upon request. We sincerely apologize for the disruption and thank you for your patience.

Read the full incident report →

Major March 19, 2026

Elevated errors in daily an...

Detected by Pingoru
Mar 19, 2026, 07:21 PM UTC
Resolved
Mar 19, 2026, 08:40 PM UTC
Duration
1h 19m
Affected: Vapi API, Vapi API [Weekly], Providers (Deepgram)
Timeline · 2 updates
  1. investigating Mar 19, 2026, 07:21 PM UTC

    We are aware of elevated call failure rates on the Weekly cluster with the worker-not-available ended reason, and deepgram-transcriber-unavailable errors in both the Daily and Weekly channels. Our team is actively investigating the issue.

  2. resolved Mar 19, 2026, 08:40 PM UTC

    Resolved — The issue causing degraded performance has been identified and mitigated as of 13:45 PDT. All services are operating normally. We will continue to monitor and provide an update if needed.

Read the full incident report →
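
Many of these incidents are identified by their ended reasons (`worker-not-available`, `deepgram-transcriber-unavailable`, and so on). As a minimal sketch, one could tally ended reasons across recent calls to spot a spike; this assumes a `GET /call` list endpoint and an `endedReason` field on call objects, inferred from the reasons quoted in these updates rather than confirmed here.

```typescript
// Hypothetical sketch: tally ended reasons across recent calls to spot
// failure spikes like the one above. Assumes `GET /call` returns a list
// of call objects carrying an `endedReason` string; both are assumptions.
const VAPI_API_KEY = process.env.VAPI_API_KEY;

async function tallyEndedReasons(limit = 100): Promise<Record<string, number>> {
  const response = await fetch(`https://api.vapi.ai/call?limit=${limit}`, {
    headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
  });
  if (!response.ok) throw new Error(`Vapi API error: ${response.status}`);
  const calls: Array<{ endedReason?: string }> = await response.json();

  // Count occurrences of each ended reason.
  const counts: Record<string, number> = {};
  for (const call of calls) {
    const reason = call.endedReason ?? "unknown";
    counts[reason] = (counts[reason] ?? 0) + 1;
  }
  return counts;
}

// A sudden jump in reasons like "worker-not-available" is the kind of
// signal that surfaced this incident.
tallyEndedReasons().then((counts) => console.table(counts));
```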

Minor March 12, 2026

Daily channel - Calls & das...

Detected by Pingoru
Mar 12, 2026, 03:23 PM UTC
Resolved
Mar 12, 2026, 04:22 PM UTC
Duration
59m
Affected: Vapi API
Timeline · 2 updates
  1. investigating Mar 12, 2026, 03:23 PM UTC

    We are seeing a high failure rate connecting calls and serving API requests on the Daily channel. The team is working to resolve it.

  2. resolved Mar 12, 2026, 04:22 PM UTC

    Resolved — The issue causing degraded performance has been identified and mitigated as of 8:22 AM. All services are operating normally. We will continue to monitor and provide an update if needed.

Read the full incident report →

Minor March 10, 2026

Some GPT 5.2 calls failing ...

Detected by Pingoru
Mar 10, 2026, 12:46 PM UTC
Resolved
Mar 19, 2026, 06:51 PM UTC
Duration
9d 6h
Affected: Providers (OpenAI)
Timeline · 2 updates
  1. investigating Mar 10, 2026, 12:46 PM UTC

    We're noticing a small percentage of calls using GPT 5.2 fail with an internal error during inference from OpenAI's side. We've reached out to the team and are closely monitoring the situation. In the meantime, we recommend switching to another model as we're seeing the degradation only on 5.2 currently.

  2. resolved Mar 19, 2026, 06:51 PM UTC

    The issue has been resolved; we're not seeing any further degradation.

Read the full incident report →

Minor March 4, 2026

Call Degradation on Daily

Detected by Pingoru
Mar 04, 2026, 04:03 PM UTC
Resolved
Mar 04, 2026, 04:17 PM UTC
Duration
14m
Affected: Vapi API
Timeline · 2 updates
  1. investigating Mar 04, 2026, 04:03 PM UTC

    We're noticing some call degradation primarily on the Daily channel. We are monitoring the situation and will update as we know more.

  2. resolved Mar 04, 2026, 04:17 PM UTC

    The issue has been resolved and all systems are operational.

Read the full incident report →

Minor March 4, 2026

Vapi Voice "Emma" is tempor...

Detected by Pingoru
Mar 04, 2026, 02:31 AM UTC
Resolved
Mar 04, 2026, 04:04 PM UTC
Duration
13h 33m
Affected: Vapi API
Timeline · 2 updates
  1. investigating Mar 04, 2026, 02:31 AM UTC

    We're noticing issues with calls using the Vapi voice "Emma". If you are using this voice, we recommend switching to another voice while this is resolved and adding voice fallbacks to prevent complete failures - https://docs.vapi.ai/voice-fallback-plan No other voices are currently impacted. We will update the status page as we know more.

  2. resolved Mar 04, 2026, 04:04 PM UTC

    The issue has been resolved and the voice is currently usable. We still recommend setting up fallbacks for any voices for the future to avoid call drops - https://docs.vapi.ai/voice-fallback-plan

Read the full incident report →
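
Both updates point at the voice fallback plan guide. Below is a minimal sketch of attaching a fallback to an assistant's voice via a PATCH to the Vapi API, assuming the `fallbackPlan.voices` shape from that guide; the fallback provider and voice ID are placeholders, not recommendations.

```typescript
// Hypothetical sketch: attach a fallback to an assistant's voice so a
// voice outage (like "Emma" above) degrades to a provider switch rather
// than dropped calls. Field names follow the linked guide and are
// assumptions; the fallback provider/voiceId are placeholders.
const VAPI_API_KEY = process.env.VAPI_API_KEY;

async function setVoiceFallback(assistantId: string) {
  const response = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      voice: {
        provider: "vapi",
        voiceId: "Emma",
        // Tried in order if the primary voice fails mid-call.
        fallbackPlan: {
          voices: [{ provider: "11labs", voiceId: "placeholder-voice-id" }],
        },
      },
    }),
  });
  if (!response.ok) throw new Error(`Vapi API error: ${response.status}`);
  return response.json();
}
```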

Minor February 25, 2026

Calls are degraded on Daily...

Detected by Pingoru
Feb 25, 2026, 03:27 PM UTC
Resolved
Feb 26, 2026, 01:59 AM UTC
Duration
10h 32m
Affected: Vapi API
Timeline · 2 updates
  1. investigating Feb 25, 2026, 03:27 PM UTC

    We are seeing calls being dropped on the Daily channel with ended reason "call.in-progress.error-vapifault-worker-died". The team is looking into it and will update here.

  2. resolved Feb 26, 2026, 01:59 AM UTC

    We have not seen the issue since ~3:50pm today. We have determined the root cause and are rolling out a fix to improve stability in the `daily` channel.

    # Incident Report — Daily Channel Call Failures (February 25, 2026)

    ## Impact

    Between 7:27 AM – 3:59 PM PST, approximately 19,527 calls failed on the Daily channel due to call worker failures. All Daily channel users were impacted. The Weekly channel was not affected.

    ## Timeline (all times in PST)

    **7:27 AM** — Degraded call reliability detected on the Daily channel. Status page updated and investigation begins immediately.
    **8:33 AM** — Issue escalated. Team recommends affected customers switch to the Weekly channel while investigation continues. Status page updated.
    **9:04 AM** — Team begins proactive outreach to guide affected customers to the Weekly channel.
    **10:55 AM** — Additional call failures observed on Daily after a brief period of stability. Investigation continues.
    **11:00 AM** — Rolled back previous deployment. Did not observe any significant improvement.
    **1:30 PM** — Continued investigating the issue.
    **3:59 PM** — No further issues observed.
    **6:00 PM** — Released a fix to improve stability in the Daily channel. Incident resolved.

    ## What Went Well

    - The issue was detected and acknowledged quickly.
    - A dedicated incident response was organized promptly to focus investigation.
    - Teams were notified early and guided affected customers to switch to the Weekly channel.

    ## Action Items

    - Isolate background operations from call handling
    - Strengthen deployment validation
    - Improve resilience under load
    - Expand monitoring and alerting

    ## Note

    This report is intended as a summary of the incident timeline, impact, and immediate action items. A deeper root cause analysis is available upon request. This issue impacted the Daily channel only. Customers desiring increased stability (at the cost of delayed access to features) can switch to the Weekly channel by navigating to **Organization Settings** on the Vapi Dashboard and changing the Channel to **"weekly"**.

Read the full incident report →

Minor February 24, 2026

Call degradation on Weekly ...

Detected by Pingoru
Feb 24, 2026, 07:25 PM UTC
Resolved
Feb 25, 2026, 05:50 PM UTC
Duration
22h 25m
Affected: Vapi APIVapi API [Weekly]
Timeline · 2 updates
  1. investigating Feb 24, 2026, 07:25 PM UTC

    We are seeing calls degraded on Weekly channel. The team is looking into the issue and will share updates here.

  2. resolved Feb 25, 2026, 05:50 PM UTC

    Incident report:

    **Impact:** A service disruption affected call reliability on the Weekly channel. Some calls ended unexpectedly with `worker-not-available` or `worker-died` end reasons.

    **Timeline (all times PT):**

    - **8:07 AM** - We detected a burst of call failures across the platform.
    - **8:16 AM** - Automated monitoring alert fired. We acknowledged and began investigating.
    - **8:42 AM** - We scoped the impact across affected accounts.
    - **8:47 AM** - The issue self-resolved. We identified the root cause as resource contention in our call processing infrastructure during a traffic spike.
    - **9:18 AM** - We completed an initial root cause analysis and identified an underlying bottleneck in our call queue infrastructure.
    - **11:13 AM** - A related issue resurfaced due to cascading effects from the earlier contention. We began investigating immediately.
    - **11:25 AM** - We published a status page to notify customers.
    - **11:38 AM** - We confirmed the root cause as CPU contention between infrastructure components.
    - **11:39 AM** - We applied a mitigation. Call queue metrics began recovering.
    - **11:45 AM** - We updated the status page with the identified cause and fix.
    - **11:46 AM** - Error rates began declining. We continued active monitoring.
    - **1:11 PM** - We declared resolution on the status page.
    - **~1:35 PM** - A brief secondary spike occurred during an infrastructure resource adjustment. We responded immediately.
    - **3:21 PM** - All systems fully stabilized.

    **Action Items:** Enforce resource limits across processing components and improve infrastructure isolation for critical call processing.

    **Note:** A full root cause analysis is underway and will be available upon request. We sincerely apologize for the disruption and thank you for your patience.

Read the full incident report →

Minor February 24, 2026

Authentication Provider Deg...

Detected by Pingoru
Feb 24, 2026, 06:33 PM UTC
Resolved
Feb 26, 2026, 05:08 PM UTC
Duration
1d 22h
Affected: Vapi Auth
Timeline · 2 updates
  1. investigating Feb 24, 2026, 06:33 PM UTC

    Our auth service provider is reporting a degradation specific to India. Some customers in that region may see issues with login. See our provider's status page for live updates: https://status.supabase.com/incidents/xmgq69x4brfk.

  2. resolved Feb 26, 2026, 05:08 PM UTC

    Our provider has confirmed the issue is specific to accessing projects and does not appear to include authentication. They are still resolving the issue on their end, but we are marking this incident as resolved. If customers continue to see issues with authentication, please reach out to [email protected]

Read the full incident report →

Minor February 23, 2026

Call Degradation in Daily C...

Detected by Pingoru
Feb 23, 2026, 05:30 PM UTC
Resolved
Feb 23, 2026, 06:09 PM UTC
Duration
39m
Affected: Vapi API
Timeline · 2 updates
  1. investigating Feb 23, 2026, 05:30 PM UTC

    We are seeing a decreased success rate in calls on the Daily channel. The team is investigating and will post updates here. In the meantime, we highly recommend switching to the Weekly channel to mitigate service disruption.

  2. resolved Feb 23, 2026, 06:09 PM UTC

    The issue is resolved as of 10:05am PST.

    ## Incident Report

    ### Impact

    Between 9:10–10:05 AM, 37,806 calls were dropped due to call worker failures. All Daily users were impacted.

    ### Timeline (all times in PST)

    9:02 AM — On-call engineer notices pods crashing in the Daily cluster.
    9:11 AM — Black box probe alert fires; acknowledged by on-call engineer, triggering investigation.
    9:27 AM — Issue escalates to the point of impacting all calls on Daily.
    9:30 AM — Status page created to inform users of impact and request they switch to the Weekly channel.
    9:34 AM — Incident team assembles.
    9:37 AM — Rollback to previous deployment is initiated. Due to a large backlog of unprocessed jobs, rollback is delayed waiting for an excessive number of pods to become ready.
    10:05 AM — Forceful cutover is initiated and service is restored.

    ### What Went Well

    - Monitoring detected the issue before it became widespread.
    - On-call engineer assembled the incident team quickly.

    ### Action Items

    - Improve emergency rollback procedure to bypass or relax pod readiness checks during incidents, enabling faster cutover.
    - Continue ongoing observability improvements to reduce MTTD.

    ### Note

    A full root cause analysis is underway and available upon request. This report is intended as a summary of the incident timeline, impact, and immediate action items. Note that this issue impacted the Daily cluster only. Customers desiring increased stability (at the cost of delayed access to features) should switch to the Weekly channel by navigating to Organization Settings on the Vapi Dashboard and changing the Channel to "weekly".

Read the full incident report →

Minor February 12, 2026

Unable to Sign in to the Va...

Detected by Pingoru
Feb 12, 2026, 08:57 PM UTC
Resolved
Feb 13, 2026, 05:05 AM UTC
Duration
8h 8m
Affected: Vapi Auth
Timeline · 2 updates
  1. investigating Feb 12, 2026, 08:57 PM UTC

    Email authentication for the Vapi dashboard was experiencing issues. Users were unable to sign in using their email credentials.

  2. resolved Feb 13, 2026, 05:05 AM UTC

    Email authentication for the Vapi dashboard has been restored. Users should now be able to sign in normally using their email credentials.

Read the full incident report →

Looking to track Vapi downtime and outages?

Pingoru polls Vapi's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do. (A rough sketch of this kind of polling loop follows the list below.)

  • Real-time alerts when Vapi reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Vapi alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
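
For reference, a polling loop in this spirit can be sketched in a few lines. This assumes status.vapi.ai is an Atlassian Statuspage-style site exposing `/api/v2/status.json`; if it is not, substitute whatever machine-readable endpoint the page actually provides.

```typescript
// Hypothetical sketch: poll a status page every 5 minutes, in the spirit
// of what Pingoru does. The endpoint and payload shape are assumptions
// based on the common Statuspage JSON API, not verified for this page.
const STATUS_URL = "https://status.vapi.ai/api/v2/status.json";

async function checkStatus(): Promise<void> {
  const response = await fetch(STATUS_URL);
  if (!response.ok) {
    console.error(`Status page unreachable: HTTP ${response.status}`);
    return;
  }
  const data = await response.json();
  // Statuspage-style payloads report an indicator: none|minor|major|critical.
  const indicator: string = data?.status?.indicator ?? "unknown";
  if (indicator !== "none") {
    // This is where alerts would fan out to email, Slack, or a webhook.
    console.warn(`Vapi reports an issue: ${data?.status?.description}`);
  }
}

checkStatus();
setInterval(checkStatus, 5 * 60 * 1000); // re-check every 5 minutes
```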