Inngest Outage History

Inngest is up right now

There have been 16 Inngest outages since February 3, 2026, totaling 2h 26m of downtime. Each incident is summarised below, with details, duration, and resolution information.

Source: https://status.inngest.com

Minor April 28, 2026

Increased function execution latency

Detected by Pingoru: Apr 28, 2026, 10:48 PM UTC
Resolved: Apr 28, 2026, 10:48 PM UTC
Affected: Function execution
Timeline · 3 updates
  1. investigating Apr 28, 2026, 07:11 PM UTC

    Status: Investigating. We are actively investigating increased function execution latency on a subset of customer shards starting around 5pm UTC. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. monitoring Apr 28, 2026, 10:27 PM UTC

    Status: Monitoring. Function execution latency has returned to normal for affected customers. Some customer shards were affected and we've pinpointed the cause of a slow degradation that compounded over time. We are working on adding new monitoring to catch performance regressions in this part of the system more quickly. Affected components: Function execution (Operational)

  3. resolved Apr 28, 2026, 10:48 PM UTC

    Status: Resolved. Performance on all shards is back to normal levels. The degradation was caused by a change that aimed to improve concurrency metrics for Inngest accounts. The change, while out for a full 24 hours and fairly benign, began to compound and produced slowness for some queue shards within our system earlier today. We have reverted that change and are working to understand its performance impact. Affected components: Function execution (Operational)

Read the full incident report →

Minor April 25, 2026

Degraded performance for REST API (runs/events data)

Detected by Pingoru: Apr 25, 2026, 06:04 PM UTC
Resolved: Apr 25, 2026, 06:04 PM UTC
Timeline · 3 updates
  1. identified Apr 23, 2026, 08:06 PM UTC

    Status: Identified. We are currently experiencing degraded performance affecting a subset of users utilizing our REST API for retrieving runs and events data. Responses may be delayed or temporarily return incomplete data, with recent updates not appearing immediately. We have identified the cause of the issue. We're actively working on implementing a fix to resume normal operation of the REST API. Function execution is not affected and is working as expected. Affected components: API (REST and GraphQL) (Degraded performance)

  2. monitoring Apr 25, 2026, 01:54 AM UTC

    Status: Monitoring. We have fixed the issue affecting a subset of users retrieving recent runs and events data via our REST API, and we are monitoring for issues with the availability of historical data over the REST API. Affected components: API (REST and GraphQL) (Degraded performance)

  3. resolved Apr 25, 2026, 06:04 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: API (REST and GraphQL) (Operational)
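
For context, this is the shape of REST API call that was affected. Below is a minimal TypeScript sketch, assuming a v1 runs endpoint and Bearer signing-key auth; the base URL, endpoint path, and run ID placeholder are illustrative, not taken from the incident report.

```ts
// Minimal sketch of a REST API read of the kind affected by this incident.
// Assumes a v1 runs endpoint and Bearer signing-key auth; verify the exact
// paths against Inngest's current API docs before relying on this.
const API_BASE = "https://api.inngest.com";

async function getRun(runId: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}/v1/runs/${runId}`, {
    headers: { Authorization: `Bearer ${process.env.INNGEST_SIGNING_KEY}` },
  });
  if (!res.ok) throw new Error(`Inngest API returned ${res.status}`);
  // During the incident, responses could be delayed or temporarily
  // incomplete, so callers should tolerate stale data and retry.
  return res.json();
}

getRun("<run-id>").then(console.log).catch(console.error);
```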

Read the full incident report →

Minor April 15, 2026

Delays in function run scheduling

Detected by Pingoru: Apr 15, 2026, 11:21 PM UTC
Resolved: Apr 15, 2026, 11:21 PM UTC
Affected: Function execution, Observability
Timeline · 11 updates
  1. investigating Apr 15, 2026, 12:15 PM UTC

    Status: Investigating. We are actively investigating delays with function run scheduling for a subset of customers. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. identified Apr 15, 2026, 02:23 PM UTC

    Status: Identified. We have scaled up several resources across the system to handle a large increase in load. Services are scaled up and we have also added new function state shards, but rollout of those new shards can take up to ~30m. We are also working on networking improvements to improve the efficiency of the system under this significantly higher load. Affected components: Function execution (Degraded performance)

  3. identified Apr 15, 2026, 02:49 PM UTC

    Status: Identified. We're deploying an in-memory optimization within the part of the system that schedules new function runs. This optimization will alleviate pressure on underlying systems and increase throughput. The change will be rolled out momentarily. We're also working in parallel on a system change to create a dedicated service for event batch processing, which is the cause of the overall backlog on the system. Affected components: Function execution (Degraded performance)

  4. identified Apr 15, 2026, 03:34 PM UTC

    Status: Identified. Changes have increased throughput, but not yet to typical levels. We are actively testing the new system change to decouple batch processing before enabling it for all accounts. Affected components: Function execution (Degraded performance)

  5. identified Apr 15, 2026, 03:55 PM UTC

    Status: Identified. Throughput has been increasing since 15:37 UTC (~15 min ago). We are continuing to apply changes and prepare a larger change to decouple parts of the system. Event observability: events may be delayed in appearing in the dashboard, as the database ingestion for these events is related to the same part of the system that handles function scheduling. Events continue to be ingested and the Event API remains unaffected. Affected components: Function execution (Degraded performance), Observability (Degraded performance)

  6. identified Apr 15, 2026, 05:03 PM UTC

    Status: Identified. The system is consuming the event backlog as fast as possible, with an ETA of ~10-15 minutes until function scheduling is caught up. After function scheduling is caught up, function execution in your account may still be limited by your account concurrency or a given function's own flow control settings (concurrency, rate limit, etc.). We will continue to share more updates as soon as we can. Affected components: Function execution (Degraded performance), Observability (Degraded performance)

  7. identified Apr 15, 2026, 05:42 PM UTC

    Status: Identified. Function scheduling delays should be caught up. With events processed and new runs scheduled, your system may still see backlogs based on your functions' flow control (e.g. concurrency) config and your account's concurrency. We still see backlogs in processing step.waitForEvent, step.invoke and cancelOn event expressions, and we are continuing to work on this. We are also continuing our rollout of isolated batch processing, as previously mentioned, to further isolate parts of our system. EDIT: This update was edited to include step.invoke for completeness. Affected components: Function execution (Degraded performance), Observability (Operational)

  8. identified Apr 15, 2026, 06:36 PM UTC

    Status: Identified. Function execution scheduling and processing throughput is at normal levels. Async "pause" operations (step.waitForEvent, step.invoke, cancelOn) are severely backlogged, which may cause delays in any of these operations completing. This may cause issues with your function execution if you rely on them. The team is working on fixes to clear this backlog and address the key issues. We do not yet have an ETA on resolving this specific issue. Affected components: Function execution (Degraded performance), Observability (Operational)

  9. monitoring Apr 15, 2026, 08:03 PM UTC

    Status: Monitoring. The function run backlog has been resolved as of 11:10 AM PT. Batched functions and `step.waitForEvent` may face delays as the backlog continues to process. Affected components: Observability (Operational), Function execution (Degraded performance)

  10. monitoring Apr 15, 2026, 09:34 PM UTC

    Status: Monitoring. Async "pause" operations (step.waitForEvent, step.invoke, cancelOn) should be running again at typical throughput. Event batching still has a backlog that we are actively working through. Status: • Function scheduling: running as expected • Function execution: running as expected, no queue backlogs • Async "pause" ops (waitForEvent, invoke, cancelOn): running as expected • Event batching: significant backlog. Affected components: Function execution (Degraded performance), Observability (Operational)

  11. resolved Apr 15, 2026, 11:21 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: Function execution (Operational), Observability (Operational)
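
For readers unfamiliar with the features named throughout this timeline, the sketch below shows roughly how they appear in the Inngest TypeScript SDK: per-function flow control (concurrency, rate limits), cancelOn expressions, the async "pause" operation step.waitForEvent, and event batching. All IDs, event names, and limits are illustrative, not taken from any affected system.

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "my-app" });

// Hypothetical function combining the flow-control settings and the async
// "pause" operation that were backlogged during this incident.
export const handleSignup = inngest.createFunction(
  {
    id: "handle-signup",
    concurrency: { limit: 10 },              // flow control: concurrency cap
    rateLimit: { limit: 100, period: "1m" }, // flow control: rate limit
    cancelOn: [{ event: "app/signup.cancelled", match: "data.userId" }],
  },
  { event: "app/signup.completed" },
  async ({ event, step }) => {
    // An async "pause": the run suspends until a matching event arrives or
    // the timeout elapses.
    const verified = await step.waitForEvent("wait-for-verification", {
      event: "app/email.verified",
      timeout: "24h",
      match: "data.userId",
    });
    return { userId: event.data.userId, verified: verified !== null };
  }
);

// Event batching, the feature behind the backlog in the later updates:
// events accumulate until maxSize or timeout is hit, then run as one batch.
export const processActivity = inngest.createFunction(
  { id: "process-activity", batchEvents: { maxSize: 100, timeout: "5s" } },
  { event: "app/activity.logged" },
  async ({ events }) => ({ processed: events.length })
);
```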

Read the full incident report →

Minor April 2, 2026

Dashboard down

Detected by Pingoru: Apr 02, 2026, 05:49 PM UTC
Resolved: Apr 02, 2026, 05:49 PM UTC
Affected: Inngest Dashboard
Timeline · 4 updates
  1. identified Apr 02, 2026, 05:07 PM UTC

    Status: Identified. The Inngest dashboard is down due to an issue with our downstream provider, Vercel. We are working quickly to bring it back up. Affected components: Inngest Dashboard (Full outage)

  2. identified Apr 02, 2026, 05:13 PM UTC

    Status: Identified. We are pushing a hotfix to the dashboard as recommended by Vercel's incident report. The rest of the Inngest system (function execution, API, etc.) remains functional. Affected components: Inngest Dashboard (Full outage)

  3. monitoring Apr 02, 2026, 05:15 PM UTC

    Status: Monitoring. A fix has been deployed and the dashboard is back online. We will continue to monitor the Vercel status page for any changes to the incident itself. Affected components: Inngest Dashboard (Operational)

  4. resolved Apr 02, 2026, 05:49 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Related to the Vercel incident, we updated our dashboard to Node 22.x to solve the issue. We continue to monitor Vercel's incident and will react accordingly. https://www.vercel-status.com/incidents/5r9bp5y8rql2 Affected components: Inngest Dashboard (Operational)

Read the full incident report →

Minor March 31, 2026

Increased failures with step.fetch, step.ai.infer

Detected by Pingoru: Mar 31, 2026, 11:59 AM UTC
Resolved: Mar 31, 2026, 11:59 AM UTC
Timeline · 3 updates
  1. investigating Mar 31, 2026, 11:14 AM UTC

    Status: Investigating. We are actively investigating an issue with proxied requests via step.fetch or step.ai.infer. We will provide further updates as we identify the cause and resolve the issue. If you are not using these features, your system should remain unaffected. Affected components: API (REST and GraphQL) (Partial outage)

  2. monitoring Mar 31, 2026, 11:45 AM UTC

    Status: Monitoring. A fix has been deployed for step.fetch and step.ai.infer and we're monitoring to ensure the system is fully operational. We continue to investigate the root cause. Affected components: API (REST and GraphQL) (Operational)

  3. resolved Mar 31, 2026, 11:59 AM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. During this incident step.fetch and step.ai.infer were failing due to a bug causing empty request bodies to be returned. The root cause was determined, the system was rolled back, and a fix will be rolled out today. Affected components: API (REST and GraphQL) (Operational)
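
For context on the affected features: step.fetch (and step.ai.infer) proxy requests through Inngest's infrastructure as durable steps, which is why a proxy-side bug could surface as empty request bodies even when the destination API was healthy. A rough sketch using the TypeScript SDK, with a placeholder URL and hypothetical event and function names:

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "my-app" });

// Hypothetical function using step.fetch, one of the two proxied-request
// features that failed during this incident. step.fetch mirrors the
// standard fetch API but executes the request as a durable step.
export const enrichOrder = inngest.createFunction(
  { id: "enrich-order" },
  { event: "shop/order.created" },
  async ({ event, step }) => {
    // The URL is a placeholder for any third-party API you might call.
    const res = await step.fetch("https://api.example.com/orders", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ id: event.data.orderId }),
    });
    // During the incident, calls like this failed because the proxy
    // returned empty request bodies.
    return await res.json();
  }
);
```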

Read the full incident report →

Minor March 26, 2026

Function run scheduling delays

Detected by Pingoru: Mar 26, 2026, 11:45 PM UTC
Resolved: Mar 26, 2026, 11:45 PM UTC
Affected: Function execution
Timeline · 5 updates
  1. investigating Mar 26, 2026, 10:42 PM UTC

    Status: Investigating. We are actively investigating delays with function run scheduling. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. identified Mar 26, 2026, 10:58 PM UTC

    Status: Identified. We have identified the cause of the issue affecting a core system queue. We are rolling out a mitigation now and preparing follow-up changes. Affected components: Function execution (Degraded performance)

  3. identified Mar 26, 2026, 11:15 PM UTC

    Status: Identified. We have deployed an additional hotfix. The earlier changes rolled out have addressed the core issue and the system is now processing the backlog. The backlog is decreasing. We will provide another update when we have an estimate on time to recovery. Affected components: Function execution (Degraded performance)

  4. monitoring Mar 26, 2026, 11:35 PM UTC

    Status: Monitoring. The mitigations have fixed the issue and the event backlog is now caught up. We continue to monitor and evaluate other short- and long-term mitigations to add. Affected components: Function execution (Operational)

  5. resolved Mar 26, 2026, 11:45 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. This was related to an issue caused by the part of the system powering the debounce feature. The internal event backlog is fully caught up and the two mitigations deployed have addressed the issue. The team is preparing a post-mortem to ensure this issue does not recur. Affected components: Function execution (Operational)
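
The debounce feature implicated in this incident collapses bursts of events into a single run. In the TypeScript SDK it looks roughly like the sketch below; the event name, key, and period are illustrative.

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "my-app" });

// Hypothetical use of debounce: the function runs once per key after events
// stop arriving for the configured period, so only the last event in each
// five-minute burst triggers an execution.
export const syncProfile = inngest.createFunction(
  {
    id: "sync-profile",
    debounce: { key: "event.data.userId", period: "5m" },
  },
  { event: "app/profile.updated" },
  async ({ event }) => ({ synced: event.data.userId })
);
```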

Read the full incident report →

Minor March 26, 2026

Degraded function execution performance

Detected by Pingoru: Mar 26, 2026, 02:55 PM UTC
Resolved: Mar 26, 2026, 02:55 PM UTC
Affected: Function execution
Timeline · 4 updates
  1. investigating Mar 26, 2026, 11:12 AM UTC

    Status: Investigating. We are actively investigating an issue with function execution and other core system health. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Partial outage)

  2. investigating Mar 26, 2026, 11:18 AM UTC

    Status: Investigating. We are actively investigating an issue with internal networking. We will provide further updates as we identify the cause and resolve the issue. Function execution is picking up again and was degraded between 11:02 and 11:10 AM UTC. Affected components: Function execution (Degraded performance)

  3. monitoring Mar 26, 2026, 11:36 AM UTC

    Status: Monitoring. Function execution has returned to normal levels as of 11:12 UTC. We are actively looking into the root cause and taking further measures to stabilize the system. Affected components: Function execution (Operational)

  4. resolved Mar 26, 2026, 02:55 PM UTC

    Status: Resolved. After an extended monitoring period, we are resolving this incident. The system is fully operational. Affected components: Function execution (Operational)

Read the full incident report →

Minor March 10, 2026

Reduced throughput on function execution

Detected by Pingoru: Mar 10, 2026, 02:31 AM UTC
Resolved: Mar 10, 2026, 02:31 AM UTC
Affected: Function execution
Timeline · 4 updates
  1. monitoring Mar 10, 2026, 01:54 AM UTC

    Status: Monitoring. We are actively investigating an issue with reduced throughput for function execution. The system experienced a short reduction and has recovered. We are continuing to investigate while monitoring the system. Affected components: Function execution (Degraded performance)

  2. investigating Mar 10, 2026, 02:03 AM UTC

    Status: Investigating. We are experiencing networking issues preventing function execution workers from operating. We are in contact with our infrastructure provider as we work on active mitigations. Affected components: Function execution (Full outage)

  3. identified Mar 10, 2026, 02:23 AM UTC

    Status: Identified. We have identified the cause of the networking issue and are working quickly to fix it. Affected components: Function execution (Full outage)

  4. resolved Mar 10, 2026, 02:31 AM UTC

    Status: Resolved. The networking fix has been applied and all systems are operational. We have identified the root cause. Function execution has returned to normal. Any backlogs incurred during the incident will be executed. Affected components: Function execution (Operational)

Read the full incident report →

Minor March 3, 2026

Function execution delayed

Detected by Pingoru: Mar 03, 2026, 11:44 PM UTC
Resolved: Mar 03, 2026, 11:44 PM UTC
Affected: Function execution
Timeline · 3 updates
  1. investigating Mar 03, 2026, 11:27 PM UTC

    Status: Investigating. We are actively investigating an issue with function execution affecting all accounts. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. monitoring Mar 03, 2026, 11:33 PM UTC

    Status: Monitoring. We identified and fixed the issue with the backlog. Function execution delays are caught up. We continue to monitor the system. Affected components: Function execution (Operational)

  3. resolved Mar 03, 2026, 11:44 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational after monitoring. Affected components: Function execution (Operational)

Read the full incident report →

Minor February 26, 2026

Elevated latency for function execution

Detected by Pingoru: Feb 26, 2026, 02:07 AM UTC
Resolved: Feb 26, 2026, 02:07 AM UTC
Affected: Function execution
Timeline · 4 updates
  1. investigating Feb 26, 2026, 01:14 AM UTC

    Status: Investigating. We are actively investigating an issue with elevated latency for function execution on some queue shards. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. identified Feb 26, 2026, 01:35 AM UTC

    Status: Identified. We have identified the issue: some problematic server groups reached saturation. These servers have been isolated and removed from the usage pool. The system is stable now as we bring additional capacity online for redundancy and overhead. Affected components: Function execution (Degraded performance)

  3. monitoring Feb 26, 2026, 02:06 AM UTC

    Status: Monitoring. The new servers have been brought online to increase capacity and we are monitoring as they are introduced into our usage pool. Function execution on all shards is performing as expected, but we are continuing to monitor this closely throughout the coming hours. Affected components: Function execution (Operational)

  4. resolved Feb 26, 2026, 02:07 AM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: Function execution (Operational)

Read the full incident report →

Minor February 19, 2026

Degraded dashboard availability

Detected by Pingoru: Feb 19, 2026, 06:47 PM UTC
Resolved: Feb 19, 2026, 06:47 PM UTC
Affected: Inngest Dashboard
Timeline · 3 updates
  1. investigating Feb 19, 2026, 04:21 PM UTC

    Status: Investigating. We are actively investigating an issue with our app at https://app.inngest.com. This is caused by downtime in an upstream provider. We will provide further updates as we identify the cause and resolve the issue. Affected components: Inngest Dashboard (Partial outage)

  2. monitoring Feb 19, 2026, 05:21 PM UTC

    Status: Monitoring. Availability issues with our upstream auth provider are decreasing. We are monitoring the system and will close out the incident if auth remains available. Affected components: Inngest Dashboard (Partial outage)

  3. resolved Feb 19, 2026, 06:47 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: Inngest Dashboard (Operational)

Read the full incident report →

Minor February 10, 2026

Delays on some function execution

Detected by Pingoru: Feb 10, 2026, 03:57 PM UTC
Resolved: Feb 10, 2026, 03:57 PM UTC
Affected: Function execution
Timeline · 2 updates
  1. identified Feb 10, 2026, 03:16 PM UTC

    Status: Identified. We have identified the cause of the issue. We're actively working on implementing a fix to resume normal operation of the system. Our internal networking teams are improving the scalability of NAT64 as we scale our services. Affected components: Function execution (Degraded performance)

  2. resolved Feb 10, 2026, 03:57 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: Function execution (Operational)

Read the full incident report →

Minor February 9, 2026

Delayed function execution

Detected by Pingoru: Feb 09, 2026, 10:50 PM UTC
Resolved: Feb 09, 2026, 10:50 PM UTC
Affected: Function execution
Timeline · 5 updates
  1. investigating Feb 09, 2026, 03:31 PM UTC

    Status: Investigating. We are actively investigating delayed function execution. Affected components: Function execution (Degraded performance)

  2. monitoring Feb 09, 2026, 03:52 PM UTC

    Status: Monitoring. We scaled up the infrastructure serving the affected subset of customers, which will begin to reduce latency to normal levels. Affected components: Function execution (Degraded performance)

  3. resolved Feb 09, 2026, 06:13 PM UTC

    Status: Resolved. The incident is now resolved and the system is fully operational. Affected components: Function execution (Operational)

  4. identified Feb 09, 2026, 09:39 PM UTC

    Status: Identified. Function execution delays have returned for a subset of users. As we mitigated the earlier issues, some queue-related slowness returned, affecting a subset of users running on certain shards. Affected components: Function execution (Degraded performance)

  5. resolved Feb 09, 2026, 10:50 PM UTC

    Status: Resolved. After an extended monitoring period, function execution has returned to normal rates across all queue shards. During this issue, only a subset of users on part of our infrastructure were affected. Our infrastructure team is rolling out additional system capacity going forward. Affected components: Function execution (Operational)

Read the full incident report →

Minor February 9, 2026

Delayed function run status

Detected by Pingoru: Feb 09, 2026, 07:51 AM UTC
Resolved: Feb 09, 2026, 07:51 AM UTC
Affected: Observability
Timeline · 3 updates
  1. monitoring Feb 09, 2026, 06:58 AM UTC

    Status: Monitoring. Run status and traces are currently delayed. The system has been scaled and is catching up. Function execution is unaffected. Affected components: Observability (Degraded performance)

  2. monitoring Feb 09, 2026, 07:26 AM UTC

    Status: Monitoring. The run trace and event history ingestion pipeline is nearly caught up. We further increased capacity here to catch up on the backlog caused by very high load. Affected components: Observability (Degraded performance)

  3. resolved Feb 09, 2026, 07:51 AM UTC

    Status: Resolved. Runs, traces and events data are all caught up from their temporary backlog. The dashboard metrics are all being processed with no backlog. Affected components: Observability (Operational)

Read the full incident report →

Minor February 3, 2026

Subset of customers experiencing function execution delays

Detected by Pingoru: Feb 03, 2026, 11:36 PM UTC
Resolved: Feb 03, 2026, 11:36 PM UTC
Affected: Function execution
Timeline · 5 updates
  1. investigating Feb 03, 2026, 04:41 PM UTC

    Status: Investigating. We are actively investigating an issue with one of our queue shards experiencing higher than usual delays with function execution. We will provide further updates as we identify the cause and resolve the issue. Affected components: Function execution (Degraded performance)

  2. investigating Feb 03, 2026, 08:40 PM UTC

    Status: Investigating. We're working to mitigate the slowness by re-allocating workloads across our queue shards. Additionally, we're provisioning more capacity for workloads to alleviate pressure on the system queues. Affected components: Function execution (Degraded performance)

  3. investigating Feb 03, 2026, 09:41 PM UTC

    Status: Investigating. We have made a configuration change in the system to unlock additional throughput in an attempt to reduce the bottleneck. System throughput is increasing in the affected parts of the system. Affected components: Function execution (Degraded performance)

  4. monitoring Feb 03, 2026, 10:54 PM UTC

    Status: Monitoring. The configuration change made earlier has increased throughput and reduced latency for affected users. The impact of this change takes up to an hour to roll out. Our internal metrics show p75 and p90 latencies returning to normal levels, with some anomalies in p95 and p99 execution latency, but generally closer to normal. We continue to monitor and investigate long-term mitigations. Affected components: Function execution (Degraded performance)

  5. resolved Feb 03, 2026, 11:36 PM UTC

    Status: Resolved. System latency for function execution has returned to normal levels for the affected users. The incident has been resolved. The incident was caused by increased load creating congestion. We applied changes to the system to reduce congestion, resulting in increased throughput. We also re-distributed some affected users in an effort to mitigate impact. Our team had planned to roll out new infrastructure in the coming weeks and is accelerating that plan, aiming to roll it out later this week to increase overall capacity. Affected components: Function execution (Operational)

Read the full incident report →

Looking to track Inngest downtime and outages?

Pingoru polls Inngest's status page every 5 minutes and alerts you the moment it reports an issue, before your customers do. (A sketch of what that polling looks like follows the list below.)

  • Real-time alerts when Inngest reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Inngest alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
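
Under the hood, this kind of check is straightforward to sketch. The example below assumes status.inngest.com exposes an Atlassian Statuspage-style JSON endpoint; the path and response shape are assumptions, and Pingoru's actual implementation may differ.

```ts
// Minimal sketch of status-page polling, assuming a Statuspage-style
// /api/v2/status.json endpoint. Pingoru's real pipeline may differ.
const STATUS_URL = "https://status.inngest.com/api/v2/status.json";

async function checkInngest(): Promise<void> {
  const res = await fetch(STATUS_URL);
  const { status } = (await res.json()) as {
    status: { indicator: string; description: string };
  };
  // Statuspage indicators are "none" | "minor" | "major" | "critical".
  if (status.indicator !== "none") {
    console.warn(`Inngest reports an issue: ${status.description}`);
    // Fan out to email/Slack/Discord/Teams/webhooks here.
  }
}

// Poll every 5 minutes, matching the cadence described above.
setInterval(checkInngest, 5 * 60 * 1000);
```
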
Start monitoring Inngest for free

5 free monitors · No credit card required