Thread experienced a critical incident on September 3, 2025 affecting ConnectWise Pods, Thread Inbox, Messenger, & Admin, and one additional component, lasting 6h 28m. The incident has been resolved; the full update timeline is below.
Update timeline
- investigating Sep 03, 2025, 07:31 PM UTC
One of our third-party services is experiencing issues, which is impacting the loading of chats in the Inbox and Messenger. Our engineering team is working on it.
- investigating Sep 03, 2025, 07:40 PM UTC
We are continuing to investigate this issue.
- investigating Sep 03, 2025, 10:08 PM UTC
We are continuing to investigate this issue.
- identified Sep 03, 2025, 10:13 PM UTC
We have partially identified the cause of the issue. Inbox/Messenger are gradually loading properly again. The ConnectWise chat pod has been disabled, as it appears to be the root cause of the issue.
- identified Sep 03, 2025, 10:52 PM UTC
We released a temporary fix. We're still working on a permanent fix.
- identified Sep 03, 2025, 10:58 PM UTC
Inbox, Messenger, AutoTask Pods, and HaloPSA Pods are fully operational. We're still analyzing the issue with the ConnectWise Pods and will update you as soon as we have more information.
- resolved Sep 04, 2025, 02:00 AM UTC
This incident has been resolved.
- postmortem Sep 09, 2025, 02:25 PM UTC
**Incident Report – Twilio Outage**

**Date:** September 3, 2025
**Duration:** ~4 hours
**Impact:** Messaging services (Inbox, Messenger, PSA Pods) were unavailable during the outage window. No tickets or messages were lost.

**What Happened**

Our messaging platform experienced an outage when we exceeded an undocumented Twilio concurrency limit on API requests. This caused services to become unresponsive until the limit was raised.

**Resolution**

We worked with Twilio to raise the limit and restored services after restarting the affected systems.

**Next Steps**

* Add monitoring and alerts for API usage.
* Implement request throttling and batching to avoid spikes (a minimal sketch follows below).
* Incorporate vendor limits into our capacity planning.
* Continue working with Twilio to architect a solution for further scaling.

**Conclusion**

We’ve addressed the immediate issue and put safeguards in place to prevent a recurrence. Thank you for your patience as we strengthen our systems for future reliability and scale.
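To illustrate the throttling step in the next steps above, here is a minimal TypeScript sketch of a concurrency limiter that caps in-flight requests to an external API. This is not Thread's actual implementation: `Semaphore`, `MAX_CONCURRENT`, `withThrottle`, and `sendMessage` are hypothetical names, and the real limit would come from capacity planning with the vendor.

```typescript
// Hypothetical example: cap the number of concurrent outbound API requests
// so traffic spikes queue locally instead of exceeding a vendor concurrency limit.

const MAX_CONCURRENT = 10; // illustrative value, not Thread's real configuration

class Semaphore {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private readonly limit: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++;
      return;
    }
    // Wait until a running call hands its slot over in release().
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) {
      // Hand the slot directly to the next waiter; the active count is unchanged.
      next();
    } else {
      this.active--;
    }
  }
}

const apiSlots = new Semaphore(MAX_CONCURRENT);

// Wrap every outbound call so at most MAX_CONCURRENT run at once.
async function withThrottle<T>(call: () => Promise<T>): Promise<T> {
  await apiSlots.acquire();
  try {
    return await call();
  } finally {
    apiSlots.release();
  }
}

// Example usage: a placeholder "send" standing in for a real messaging API request.
async function sendMessage(body: string): Promise<string> {
  return withThrottle(async () => {
    // A real HTTP call to the messaging provider would go here.
    return `sent: ${body}`;
  });
}
```

The batching mentioned in the same bullet would sit on top of a limiter like this: queued requests could be collected and issued in groups rather than one at a time, further flattening spikes.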