Transifex incident

API Issue

Started
Apr 02, 2026, 12:50 PM UTC
Resolved
Apr 02, 2026, 07:25 PM UTC
Duration
6h 35m
Detected by Pingoru
Apr 02, 2026, 12:50 PM UTC

Affected components

Transifex API

Update timeline

  1. investigating Apr 02, 2026, 12:50 PM UTC

    We are currently investigating the issue.

  2. investigating Apr 02, 2026, 03:16 PM UTC

    We are still actively investigating the issue and working to identify the root cause. We sincerely apologize for the delays and the API errors you are experiencing, and we understand the impact this may have on your workflows. Our team is fully engaged, and we will share another update as soon as we have more information.

  3. identified Apr 02, 2026, 05:54 PM UTC

    The issue has been identified, and a fix has been applied. Service has been restored, and performance is returning to normal.

  4. monitoring Apr 02, 2026, 06:18 PM UTC

    The fix has been successfully applied, and system performance has stabilized. We are continuing to monitor all systems closely to ensure sustained stability.

  5. resolved Apr 02, 2026, 07:25 PM UTC

    The issue has been fully resolved, and services are now operating normally. We apologize for any inconvenience caused and thank you for your patience and understanding.

  6. postmortem Apr 07, 2026, 08:23 AM UTC

    **API v3 Partial Downtime**

    **Date:** 02 April 2026
    **Service Impacted:** api-v3
    **Impact Scope:** Services depending on api-v3 experienced degraded functionality

    **Summary**

    On April 2nd, api-v3 experienced partial downtime due to a database performance issue. A database query caused contention within the database, leading to increased resource utilization and degraded responsiveness of api-v3 pods. As a result, api-v3 instances were intermittently failing health checks and restarting, which further amplified database load and connection pressure.

    **Impact**

    * Degraded performance and intermittent unavailability of api-v3
    * Increased error rates for dependent services
    * Elevated database CPU usage and connection counts

    **Root Cause**

    The incident was caused by a database query that partially blocked operations within the database. This led to:

    * Query contention and locking behavior
    * Increased execution times for other queries
    * Accumulation of database connections from api-v3 pods

    This combination resulted in reduced system responsiveness and instability in api-v3.

    **Mitigation & Resolution**

    * Problematic long-running queries were identified and terminated
    * The affected query was optimized to improve performance and reduce locking behavior

    Following these actions, system performance returned to normal and stability was restored.

    **Follow-up Actions**

    * Optimize database query performance related to the incident
    * Improve monitoring and alerting for long-running or blocking queries
    * Evaluate database connection handling and limits in api-v3
    * Consider implementing specific query timeouts and safeguards to prevent similar issues

    **Current Status**

    The issue has been resolved, and the system has remained stable since the fix was applied.
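    The mitigation and follow-up steps above (finding and terminating long-running queries, then adding query timeouts as a safeguard) can be sketched as follows. The postmortem does not name the database engine, so this minimal sketch assumes PostgreSQL, where `pg_stat_activity` exposes running statements; the 30-second threshold is a hypothetical value, not one taken from the incident.

    ```python
    from datetime import datetime, timedelta

    # Assumption: a PostgreSQL backend. Diagnostic query an operator might
    # run to surface long-running active statements (hypothetical 30s cutoff):
    FIND_LONG_RUNNING_SQL = """
    SELECT pid, now() - query_start AS runtime, state, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '30 seconds'
    ORDER BY runtime DESC;
    """

    def should_terminate(query_start: datetime, now: datetime,
                         threshold: timedelta = timedelta(seconds=30)) -> bool:
        """True when a query has run past the threshold and is a candidate
        for termination (in PostgreSQL, via pg_terminate_backend(pid))."""
        return (now - query_start) > threshold

    # Safeguard from the follow-up actions: a per-session statement timeout
    # so one slow query cannot hold locks indefinitely.
    SET_TIMEOUT_SQL = "SET statement_timeout = '30s';"
    ```

    The timeout trades occasional cancelled queries for bounded lock hold times, which directly addresses the contention and connection pile-up described in the root cause.
    
    
    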

Looking to track Transifex downtime and outages?

Pingoru polls Transifex's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Transifex reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Transifex alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
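The polling approach described above can be sketched in a few lines. This is a minimal illustration only, not Pingoru internals: the endpoint URL and JSON shape follow the common Statuspage-style `status.json` format and are assumptions.

```python
import json

# Assumed endpoint and payload shape (Statuspage-style), for illustration:
STATUS_URL = "https://status.transifex.com/api/v2/status.json"  # assumed

def parse_indicator(payload: str) -> str:
    """Extract the overall status indicator ('none', 'minor', 'major', ...)."""
    return json.loads(payload)["status"]["indicator"]

def has_new_incident(previous: str, current: str) -> bool:
    """Alert-worthy when the page moves from healthy to any degraded state."""
    return previous == "none" and current != "none"
```

A poller would fetch `STATUS_URL` on a fixed interval, compare the parsed indicator against the previous value, and fire a notification when `has_new_incident` returns true.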
Start monitoring Transifex for free

5 free monitors · No credit card required