Nanonets Outage History

Nanonets had 27 outages in the last 2 years (since May 23, 2025), totaling 23h 9m of downtime — averaging 1.1 incidents per month. Each incident is summarised below with details, duration, and resolution information.

Source: https://status.nanonets.com

Major May 15, 2026

RCA: Performance Degradation for LLM Nano Models on May 14

Detected by Pingoru
May 15, 2026, 07:14 AM UTC
Resolved
May 14, 2026, 10:00 PM UTC
Timeline · 1 update
  1. resolved May 15, 2026, 07:14 AM UTC

    From 22:00 UTC to 23:15 UTC on May 14th, we observed performance degradation in the oneshot service affecting LLM Nano models. The issue was primarily caused by a network issue with our GPU service provider. Our team worked with the provider to stabilize the service and performance has since recovered. As an additional preventive measure, we are introducing stronger fallback mechanisms to route requests to more accurate backup models during such events. We apologize for the inconvenience caused and appreciate your patience.

Read the full incident report →
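The preventive measure described in this RCA — routing requests to a backup model when the primary degrades — can be sketched roughly as follows. This is a minimal illustration, not Nanonets' actual implementation; the model names, error threshold, and window size are all assumptions.

```python
class ModelRouter:
    """Route requests to a primary model, falling back to a backup
    when the primary's recent error rate crosses a threshold.
    Illustrative sketch only; names and thresholds are hypothetical."""

    def __init__(self, primary, backup, error_threshold=0.2, window=50):
        self.primary = primary          # e.g. "llm-nano" (hypothetical name)
        self.backup = backup            # e.g. "llm-mini" (hypothetical name)
        self.error_threshold = error_threshold
        self.window = window
        self.recent = []                # 1 = failure, 0 = success

    def _record(self, failed):
        self.recent.append(1 if failed else 0)
        self.recent = self.recent[-self.window:]

    def _error_rate(self):
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def choose_model(self):
        # Prefer the backup while the primary looks unhealthy.
        if self._error_rate() >= self.error_threshold:
            return self.backup
        return self.primary

    def handle(self, request, call_model):
        model = self.choose_model()
        try:
            result = call_model(model, request)
            if model == self.primary:
                self._record(failed=False)
            return result
        except Exception:
            if model == self.primary:
                self._record(failed=True)
                # Retry once on the backup instead of failing the request.
                return call_model(self.backup, request)
            raise
```

The design choice here is passive health tracking: failures on the primary both trigger an immediate per-request retry on the backup and shift subsequent traffic away from the degraded model until its error rate recovers.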

Major April 28, 2026

Elevated 5xx Errors & Timeouts for Instant Learning (LLM Nano)

Detected by Pingoru
Apr 28, 2026, 05:16 AM UTC
Resolved
Apr 28, 2026, 06:02 AM UTC
Duration
46m
Affected: API
Timeline · 3 updates
  1. investigating Apr 28, 2026, 05:16 AM UTC

    We are currently investigating an issue affecting our GPU service provider infrastructure, which is causing elevated 5xx errors and timeouts for Instant Learning models using LLM Nano. Our team is actively working with the provider to identify and resolve the underlying network issues. We will share further updates as soon as we have more information.

  2. monitoring Apr 28, 2026, 05:37 AM UTC

    We’ve temporarily routed LLM Nano traffic to LLM Mini, a more stable, accurate and higher-capacity variant, to mitigate errors. File processing should now be faster and more reliable while we continue working on resolving the underlying issue.

  3. resolved Apr 28, 2026, 06:02 AM UTC

    This incident has been resolved.

Read the full incident report →

Major April 10, 2026

Intermittent 503 Errors for Subset of Users on app.nanonets.com

Detected by Pingoru
Apr 10, 2026, 07:13 AM UTC
Resolved
Apr 09, 2026, 06:30 AM UTC
Timeline · 1 update
  1. resolved Apr 10, 2026, 07:13 AM UTC

    Between 7:19 UTC and 7:29 UTC, a subset of users experienced intermittent 503 errors when accessing app.nanonets.com. A backend node went down and was replaced by a new node. Due to a DNS caching issue, traffic for some users could not be routed to the new node, resulting in failed requests. Other users were unaffected. The issue was identified and resolved by updating the load balancer configuration to ensure proper traffic routing. Preventive measures have been put in place to avoid recurrence.

Read the full incident report →
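A common guard against this failure mode — not necessarily what Nanonets deployed — is to have the load balancer actively health-check its backends and drop unreachable nodes, rather than trusting a cached DNS answer that may still point at a replaced node. A minimal sketch, with hypothetical addresses:

```python
class LoadBalancer:
    """Round-robin over backends, skipping any that fail a health check.

    Active health checks make the pool self-correcting even when stale
    DNS or config entries still list a node that has been replaced.
    Illustrative sketch only.
    """

    def __init__(self, backends, health_check):
        self.backends = list(backends)
        self.health_check = health_check  # callable: addr -> bool
        self._i = 0

    def healthy_backends(self):
        return [b for b in self.backends if self.health_check(b)]

    def pick(self):
        pool = self.healthy_backends()
        if not pool:
            raise RuntimeError("503: no healthy backends")
        b = pool[self._i % len(pool)]
        self._i += 1
        return b
```

With this approach, a dead node still listed in the backend pool simply stops receiving traffic instead of returning 503s to a subset of users.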

Minor March 30, 2026

Delayed file processing for ILM models in app.nanonets.com

Detected by Pingoru
Mar 30, 2026, 01:41 PM UTC
Resolved
Mar 30, 2026, 02:03 PM UTC
Duration
21m
Affected: API
Timeline · 3 updates
  1. investigating Mar 30, 2026, 01:41 PM UTC

    We are currently investigating this issue.

  2. identified Mar 30, 2026, 01:52 PM UTC

    The issue has been identified and a fix is being implemented.

  3. resolved Mar 30, 2026, 02:03 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor February 11, 2026

Post-Processing Failures Impacting app.nanonets.com Models

Detected by Pingoru
Feb 11, 2026, 09:10 AM UTC
Resolved
Feb 11, 2026, 09:26 AM UTC
Duration
15m
Affected: API
Timeline · 3 updates
  1. investigating Feb 11, 2026, 09:10 AM UTC

    We are currently investigating this issue.

  2. monitoring Feb 11, 2026, 09:15 AM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 11, 2026, 09:26 AM UTC

    This incident has been resolved.

Read the full incident report →

Minor February 3, 2026

Delayed File Processing and Intermittent Failures Across Regions

Detected by Pingoru
Feb 03, 2026, 03:13 PM UTC
Resolved
Feb 03, 2026, 03:35 PM UTC
Duration
22m
Affected: API
Timeline · 3 updates
  1. identified Feb 03, 2026, 03:13 PM UTC

One of our GPU service providers is experiencing issues. We are working with them to resolve it.

  2. monitoring Feb 03, 2026, 03:22 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 03, 2026, 03:35 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor January 23, 2026

Delayed File Processing in EU & EU-Open Region

Detected by Pingoru
Jan 23, 2026, 05:55 PM UTC
Resolved
Jan 23, 2026, 07:04 PM UTC
Duration
1h 8m
Affected: API
Timeline · 4 updates
  1. investigating Jan 23, 2026, 05:55 PM UTC

    We are currently investigating this issue.

  2. identified Jan 23, 2026, 06:25 PM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Jan 23, 2026, 06:48 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Jan 23, 2026, 07:04 PM UTC

    This incident has been resolved.

Read the full incident report →

Major January 21, 2026

Async file processing and Extract Data section file visibility issues for app.nanonets.com users

Detected by Pingoru
Jan 21, 2026, 08:58 AM UTC
Resolved
Jan 21, 2026, 09:34 AM UTC
Duration
35m
Affected: API, Web App
Timeline · 3 updates
  1. investigating Jan 21, 2026, 08:58 AM UTC

    We are currently investigating this issue.

  2. monitoring Jan 21, 2026, 09:07 AM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Jan 21, 2026, 09:34 AM UTC

    This incident has been resolved.

Read the full incident report →

Major December 11, 2025

Delay in file processing for Instant learning models in US region

Detected by Pingoru
Dec 11, 2025, 07:56 AM UTC
Resolved
Dec 11, 2025, 09:52 AM UTC
Duration
1h 55m
Affected: API
Timeline · 4 updates
  1. investigating Dec 11, 2025, 07:56 AM UTC

    We are currently investigating this issue.

  2. monitoring Dec 11, 2025, 09:30 AM UTC

    Our sync API is now operating normally. For async uploads, most results are already available and any remaining pending results will be processed within the next few minutes as we clear the backlog. Our team is addressing the issue on priority. We sincerely apologize for the inconvenience caused.

  3. resolved Dec 11, 2025, 09:52 AM UTC

    This incident has been resolved.

  4. postmortem Dec 12, 2025, 07:45 AM UTC

    One of our core processing services experienced an unexpected surge in load, which slowed down parts of our system and led to a backlog in processing. Our engineering team identified the underlying bottleneck and implemented fixes to stabilize performance. We are also rolling out improvements to make our platform more resilient to sudden spikes in usage. We sincerely apologize for the inconvenience caused.

Read the full incident report →

Critical December 8, 2025

IN region Outage

Detected by Pingoru
Dec 08, 2025, 07:52 AM UTC
Resolved
Dec 08, 2025, 07:59 AM UTC
Duration
6m
Affected: API, Web App, Website
Timeline · 3 updates
  1. investigating Dec 08, 2025, 07:52 AM UTC

Service disruption on https://in.nanonets.com. US and EU regions are not affected.

  2. identified Dec 08, 2025, 07:58 AM UTC

    The issue has been identified and a fix is being implemented.

  3. resolved Dec 08, 2025, 07:59 AM UTC

    This incident has been resolved.

Read the full incident report →

Minor November 8, 2025

Failures in some instant learning models

Detected by Pingoru
Nov 08, 2025, 06:33 PM UTC
Resolved
Nov 08, 2025, 08:46 PM UTC
Duration
2h 13m
Affected: API, Web App, Website
Timeline · 4 updates
  1. identified Nov 08, 2025, 06:33 PM UTC

    The issue has been identified and a fix is being implemented.

  2. monitoring Nov 08, 2025, 08:04 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. monitoring Nov 08, 2025, 08:04 PM UTC

    We are continuing to monitor for any further issues.

  4. resolved Nov 08, 2025, 08:46 PM UTC

    This incident has been resolved.

Read the full incident report →

Critical October 20, 2025

Service Disruption Due to Global AWS Outage

Detected by Pingoru
Oct 20, 2025, 09:11 PM UTC
Resolved
Oct 20, 2025, 09:43 PM UTC
Duration
32m
Affected: API, Web App
Timeline · 4 updates
  1. identified Oct 20, 2025, 09:11 PM UTC

    On 20th Oct, a global AWS outage impacted one of our feature flag service providers, resulting in intermittent failures for some of our models. The issue originated from the provider's infrastructure dependency on affected AWS regions, which caused disruptions in feature flag evaluations within our system. Our engineering team promptly identified the root cause and implemented mitigation measures to restore functionality. We are continuously monitoring the situation and working with our partners to ensure full stability. We apologize for the inconvenience caused and appreciate your patience and understanding.

  2. monitoring Oct 20, 2025, 09:28 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. monitoring Oct 20, 2025, 09:29 PM UTC

    We are continuing to monitor for any further issues.

  4. resolved Oct 20, 2025, 09:43 PM UTC

    This incident has been resolved.

Read the full incident report →
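The mitigation pattern implied by this incident — degrading gracefully when a feature flag provider is unreachable — is commonly implemented by falling back to locally bundled defaults. A hypothetical sketch (the client shape, flag names, and defaults are illustrative, not the actual provider's API):

```python
class FlagClient:
    """Evaluate feature flags from a remote provider, falling back to
    bundled defaults when the provider is unreachable (e.g. during a
    cloud outage affecting the flag service). Illustrative sketch."""

    def __init__(self, fetch_remote, defaults):
        self.fetch_remote = fetch_remote  # callable: name -> bool, may raise
        self.defaults = defaults          # safe local defaults

    def is_enabled(self, name):
        try:
            return self.fetch_remote(name)
        except Exception:
            # Provider down: fall back to the bundled default so
            # requests keep working instead of failing intermittently.
            return self.defaults.get(name, False)
```

The key design decision is that flag evaluation never raises: an unreachable provider yields a known-safe default rather than propagating the failure into model requests.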

Critical September 22, 2025

app.nanonets.com down

Detected by Pingoru
Sep 22, 2025, 12:01 PM UTC
Resolved
Sep 22, 2025, 12:17 PM UTC
Duration
15m
Affected: API, Web App
Timeline · 5 updates
  1. investigating Sep 22, 2025, 12:01 PM UTC

    We are currently investigating this issue.

  2. identified Sep 22, 2025, 12:07 PM UTC

    The issue has been identified and a fix is being implemented.

  3. monitoring Sep 22, 2025, 12:09 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Sep 22, 2025, 12:17 PM UTC

    This incident has been resolved.

  5. postmortem Sep 22, 2025, 02:09 PM UTC

**Temporary Service Disruption on Sept 22nd (US Region)** We experienced a temporary outage affecting our US region application (app.nanonets.com) between **11:58 and 12:09 UTC on Sept 22nd**. Other regions remained unaffected. The disruption was caused by an unexpected resource issue on one of our database nodes, which impacted application stability. Our team quickly identified the issue, applied a fix, and restored normal operations. We sincerely apologize for the inconvenience, and we are implementing additional safeguards to prevent such issues in the future. Thank you for your understanding.

Read the full incident report →

Notice September 4, 2025

Clarification: No Platform-wide Downtime on September 4th

Detected by Pingoru
Sep 04, 2025, 05:51 AM UTC
Resolved
Sep 04, 2025, 05:51 AM UTC
Affected: API
Timeline · 1 update
  1. resolved Sep 04, 2025, 05:51 AM UTC

    At 1:00 AM UTC on September 4th, we posted an incident titled “Zero learning model file processing failures.” Upon further investigation, we identified that the issue was isolated to a single model and was not platform-wide. No other models or regions were impacted. The previously posted incident has been removed. We would like to clarify that there has been no platform-wide downtime in any region. Thank you for your understanding.

Read the full incident report →

Major August 20, 2025

app.nanonets.com – User Logout Issue

Detected by Pingoru
Aug 20, 2025, 12:13 PM UTC
Resolved
Aug 20, 2025, 12:13 PM UTC
Affected: API, Web App, Website
Timeline · 1 update
  1. resolved Aug 20, 2025, 12:13 PM UTC

    During a planned migration activity, users were unexpectedly logged out of the platform between 11:50 UTC and 12:05 UTC. The disruption occurred because database reads failed for old pods during the migration, while new pods came up late, leading to temporary session validation failures. The issue was resolved once the new pods were fully operational. We sincerely apologize for the inconvenience caused and will ensure future migrations are executed with better coordination to avoid service disruption.

Read the full incident report →

Major July 26, 2025

India region services affected

Detected by Pingoru
Jul 26, 2025, 09:07 AM UTC
Resolved
Jul 26, 2025, 09:13 AM UTC
Duration
6m
Affected: API, Web App
Timeline · 3 updates
  1. identified Jul 26, 2025, 09:07 AM UTC

    The issue has been identified and a fix is being implemented.

  2. monitoring Jul 26, 2025, 09:08 AM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Jul 26, 2025, 09:13 AM UTC

    This incident has been resolved.

Read the full incident report →

Minor July 23, 2025

Intermittent Prediction Failures on Instant Learning Models

Detected by Pingoru
Jul 23, 2025, 04:02 PM UTC
Resolved
Jul 23, 2025, 04:02 PM UTC
Affected: API
Timeline · 1 update
  1. resolved Jul 23, 2025, 04:02 PM UTC

    Between 15:05 UTC and 15:40 UTC on July 23, 2025, a small subset of users using the Sync API for Instant Learning models on app.nanonets.com may have experienced intermittent prediction failures. We identified an edge case in our system that triggered this issue under specific conditions. The issue was resolved promptly with a fix, and the system has been stable since. All other regions and services remained unaffected and continue to operate normally. We apologize for the inconvenience caused and appreciate your understanding.

Read the full incident report →

Major July 22, 2025

Temporary Slowness on app.nanonets.com due to High Load on DB

Detected by Pingoru
Jul 22, 2025, 10:21 AM UTC
Resolved
Jul 22, 2025, 10:21 AM UTC
Affected: API, Web App
Timeline · 1 update
  1. resolved Jul 22, 2025, 10:21 AM UTC

    On July 22nd, 2025, users of app.nanonets.com may have experienced intermittent slowness during the periods 08:45–08:56 UTC and 10:03–10:11 UTC. This issue was caused by an unusually high load on our primary database. Our engineering team promptly identified the root cause and deployed a fix. The system has since stabilized and is functioning normally. We appreciate your patience during this time. All other regions remained fully operational without any downtime.

Read the full incident report →

Notice June 24, 2025

EU Region Prediction API Failures

Detected by Pingoru
Jun 24, 2025, 01:58 PM UTC
Resolved
Jun 24, 2025, 01:58 PM UTC
Affected: API
Timeline · 1 update
  1. resolved Jun 24, 2025, 01:58 PM UTC

    Between 13:33 UTC and 13:44 UTC on June 24th 2025, we observed intermittent failures in the Prediction API for our EU region due to a critical infrastructure issue. Our alerting systems flagged the anomaly, and our engineers quickly identified and resolved the problem. Services in the EU region were fully restored by 13:44 UTC. Other regions remained stable and were not impacted during this period. We appreciate your patience and understanding.

Read the full incident report →

Notice June 12, 2025

GCP Global Outage – No Impact to Our Systems

Detected by Pingoru
Jun 12, 2025, 07:08 PM UTC
Resolved
Jun 12, 2025, 07:41 PM UTC
Duration
33m
Affected: API
Timeline · 3 updates
  1. monitoring Jun 12, 2025, 07:08 PM UTC

There is a major GCP global outage ongoing at the moment. We have ensured that all our critical dependencies have been redirected to fallbacks, and we are currently not impacted. GCP incident: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW. We are closely monitoring the situation and will take any necessary actions if it evolves. Thanks for your continued support and vigilance. We'll keep you posted with any updates.

  2. monitoring Jun 12, 2025, 07:10 PM UTC

    We are continuing to monitor for any further issues.

  3. resolved Jun 12, 2025, 07:41 PM UTC

    This incident has been resolved.

Read the full incident report →