Elevated 5xx Errors & Timeouts for Instant Learning (LLM Nano)
Timeline · 3 updates
- investigating Apr 28, 2026, 05:16 AM UTC
We are currently investigating an issue with our GPU service provider's infrastructure that is causing elevated 5xx errors and timeouts for Instant Learning requests using LLM Nano. Our team is actively working with the provider to identify and resolve the underlying network issues. We will share further updates as soon as we have more information.
- monitoring Apr 28, 2026, 05:37 AM UTC
We've temporarily routed LLM Nano traffic to LLM Mini, a more stable, more accurate, and higher-capacity variant, to mitigate these errors. File processing should now be faster and more reliable while we continue working to resolve the underlying issue.
- resolved Apr 28, 2026, 06:02 AM UTC
This incident has been resolved.