Maropost incident

Intermittent 503 errors on merchant webstores


Maropost experienced an incident on January 8, 2026, lasting —. The incident has been resolved; the full update timeline is below.

Started: Jan 08, 2026, 12:20 AM UTC
Resolved: Jan 07, 2026, 12:00 AM UTC
Duration: —
Detected by Pingoru: Jan 08, 2026, 12:20 AM UTC

Update timeline

  1. resolved Jan 08, 2026, 12:20 AM UTC

    Some merchants experienced intermittent website access issues, where pages returned a 503 error (for example “We will be back online shortly”), and in some cases inconsistent access to cPanel. The issue was intermittent, which meant some refreshes worked while others failed.

  2. postmortem Jan 08, 2026, 12:20 AM UTC

**Incident:** Intermittent 503 errors on merchant webstores
**Date:** Tuesday, 7 January 2026 (AEST)
**Severity:** P1 (High)

### What happened

On the morning of 7 January 2026, some merchants experienced intermittent website access issues, where pages returned a **503 error** (for example "We will be back online shortly"), and in some cases inconsistent access to cPanel. The issue was intermittent, which meant some refreshes worked while others failed.

### Customer impact

* **Impacted services:** Merchant webstores (and intermittent access issues reported for cPanel)
* **Symptom:** Pages timing out and returning 503 errors
* **Duration:** Approximately **10:00 AM to 12:30 PM AEST** (around 2.5 hours)
* **Scope:** Multiple merchants across the platform

### Timeline (AEST)

* **10:00 AM:** Incident begins; reports of 503 errors start coming in
* **Morning:** Team investigates platform performance and confirms the issue is intermittent
* **Around 12:15 PM:** 503 errors are no longer observed in edge logging (stabilising)
* **12:30 PM:** Service considered resolved

### Root cause (plain English)

The incident was caused by a problem affecting **one instance within a cache layer** in the production stack. Backend threads on that cache instance increased and did not scale down as expected, which led to resource saturation and contributed to the observed 503 errors.

Contributing factor: an **automated daily process** that refreshes/replaces instances (to prevent backend threads from continually increasing) **failed on the day**, which allowed older instances to keep running instead of being replaced.

### Resolution

To restore service, we **rolled out new cache layer instances**, which resolved the abnormal backend thread behaviour and returned webstore responsiveness to normal.
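The replacement logic described above can be sketched as a simple watchdog that flags instances whose backend thread counts have grown past a saturation point. This is a minimal illustrative sketch, not Maropost's actual tooling; the `THREAD_LIMIT` value and the `instances_to_replace` helper are assumptions for the example.

```python
# Hypothetical sketch: flag cache instances whose backend thread counts
# have grown instead of scaling down, so they can be replaced before
# resource saturation causes 503s.
THREAD_LIMIT = 500  # assumed saturation threshold, not a real Maropost value

def instances_to_replace(thread_counts):
    """Return instance IDs whose backend thread count exceeds the limit.

    thread_counts: dict mapping instance ID -> current backend thread count.
    """
    return sorted(
        instance_id
        for instance_id, count in thread_counts.items()
        if count > THREAD_LIMIT
    )

if __name__ == "__main__":
    sample = {"cache-a": 120, "cache-b": 875, "cache-c": 510}
    # cache-b and cache-c exceed the assumed limit and would be replaced
    print(instances_to_replace(sample))
```

In practice a check like this would run on a schedule and feed the daily refresh pipeline, rather than being invoked by hand.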
### What we are doing to prevent this happening again

We are strengthening detection and recovery around the rollout process, including:

* Enhancing alerting so that if tests fail during instance rollout, notifications are raised to operational channels
* Improving safeguards around the daily refresh pipeline to reduce the chance of older instances remaining active when they should be replaced

### What merchants need to do

No action is required. If you continue to experience intermittent 503 errors after **12:30 PM AEST on 7 January 2026**, please contact Support via [https://share.hsforms.com/1PF2VNMWnS2al3GnLh8msNQrp8ms](https://share.hsforms.com/1PF2VNMWnS2al3GnLh8msNQrp8ms) with:

* Your store URL
* Approximate time of occurrence (including timezone)
* Any screenshots or error messages

### Apology

We apologise for the disruption and any impact this may have had on your trading. Thank you for your patience while we investigated and resolved the issue.
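The rollout safeguard described in the prevention steps can be sketched as follows: replace an instance only after its rollout test passes, and raise an operational alert (rather than failing silently) when a test fails. This is a hedged sketch under assumed interfaces; `run_tests`, `replace`, and `alert` are hypothetical injected callables, not real Maropost APIs.

```python
# Hypothetical sketch: a daily instance-refresh loop that alerts on
# rollout test failures instead of silently leaving old instances running.

def refresh_instances(instances, run_tests, replace, alert):
    """Replace each instance only if its rollout test passes.

    run_tests(instance) -> bool, replace(instance), and alert(message)
    are assumed interfaces supplied by the caller. Failed tests trigger
    an alert and leave the old instance in place for investigation.
    """
    replaced, kept = [], []
    for instance in instances:
        if run_tests(instance):
            replace(instance)
            replaced.append(instance)
        else:
            alert(f"rollout test failed for {instance}; old instance kept running")
            kept.append(instance)
    return replaced, kept
```

The key design choice is that a test failure produces a visible notification in an operational channel, so an unreplaced instance is noticed the same day rather than accumulating backend threads unobserved.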