One‑Way / No Audio at Call Start and During Internal Transfers
Timeline · 9 updates
- investigating Apr 14, 2026, 11:01 AM UTC
We are currently investigating an issue affecting Call2Teams across multiple regions. At this time, the specific regions and the full extent of the impact are not yet confirmed. This issue may cause intermittent loss of audio at the start of calls or during internal call transfers, impacting both inbound and outbound calls. Our teams are actively working to identify the root cause and implement a resolution. Updates will be provided every 60 minutes as we learn more. We apologise for any inconvenience caused and appreciate your patience during this time. Dstny Support
- identified Apr 14, 2026, 11:47 AM UTC
We have identified an issue affecting Call2Teams in the EU West region. This issue is causing one‑way or no audio at the start of calls and during internal call transfers, impacting users in the affected area. Our teams are working to identify the root cause and implement a resolution. In parallel, our Platform Engineering team is actively working on implementing an urgent fix, with further updates to follow. Updates will be provided every 60 minutes as we learn more. We apologise for any inconvenience caused and appreciate your patience during this time. Dstny Support
- monitoring Apr 14, 2026, 12:55 PM UTC
We are writing to inform you that an issue occurred earlier affecting Call2Teams in the EU West region. A mitigation has now been successfully implemented, and services have been restored. This issue may have caused one‑way or no audio at the start of calls or during internal call transfers for users in the affected area. We will continue to monitor the situation closely over the next 24 hours to ensure there is no further impact. Thank you for your understanding, and we appreciate your continued patience. If you have any questions or concerns, please don’t hesitate to contact our support team. Dstny Support
- identified Apr 15, 2026, 09:37 AM UTC
We are currently investigating an issue caused by a bug within our application code. Recent changes introduced by Microsoft appear to have exacerbated the behaviour, which brought the issue to our attention. Our engineering teams are actively working to identify the root cause and implement a fix, with the aim of addressing this today where feasible. As a temporary workaround, limiting usage to a single codec should help minimise impact in the interim. We will provide further updates as more information becomes available. We apologise for any inconvenience caused and appreciate your patience. Dstny Support
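For illustration only, the single‑codec workaround described above amounts to narrowing the codecs a phone system offers at call setup. The sketch below is hypothetical and not part of the Call2Teams platform: it filters an SDP audio offer down to one payload type (PCMA, payload type 8, is assumed here purely as an example).

```python
# Hypothetical sketch: restrict an SDP audio offer to a single codec so that
# negotiation cannot produce a codec-ordering mismatch. Not Dstny's code.

def restrict_sdp_to_codec(sdp: str, payload_type: str) -> str:
    """Keep only the given payload type in the audio m-line, and drop
    rtpmap/fmtp attributes belonging to every other payload type."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("m=audio"):
            parts = line.split()
            # m=audio <port> <proto> <payload types> -> keep just one type
            out.append(" ".join(parts[:3] + [payload_type]))
        elif line.startswith(("a=rtpmap:", "a=fmtp:")):
            pt = line.split(":", 1)[1].split()[0]
            if pt == payload_type:
                out.append(line)
        else:
            out.append(line)
    return "\n".join(out)

# Example offer advertising G.722 (9), PCMA (8), and PCMU (0)
offer = "\n".join([
    "v=0",
    "m=audio 49170 RTP/AVP 9 8 0",
    "a=rtmap:9 G722/8000".replace("rtmap", "rtpmap"),
    "a=rtpmap:8 PCMA/8000",
    "a=rtpmap:0 PCMU/8000",
])
print(restrict_sdp_to_codec(offer, "8"))
```

In practice the equivalent change is made in the PBX or SBC codec configuration rather than by rewriting SDP in code; the sketch only shows the effect of the workaround on the offer.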
- identified Apr 15, 2026, 10:44 AM UTC
Following our previous update, we have now tested a new solution that addresses the identified issue and early results are positive. The fix is currently being validated with a small number of targeted customer accounts. Subject to the outcome of this testing, we expect to provide a further update regarding a wider rollout. In the meantime, the previously shared workaround remains applicable to minimise impact. We will continue to keep you informed as we progress. Thank you for your continued patience. Dstny Support
- identified Apr 15, 2026, 12:15 PM UTC
Due to our deployment pipeline setup, the rollout is expected to take up to one hour from this point, rather than the 30 minutes previously advised. We will provide a further update once the rollout has completed. Thank you for your continued patience and understanding. Dstny Support
- monitoring Apr 15, 2026, 01:25 PM UTC
Our Platform team has identified the root cause and implemented corrective measures to restore normal service. Customers are not expected to have experienced any impact from the deployment itself. A small number may have seen a brief call setup failure during deployment, which would have cleared on redial, or short‑lived re‑registrations as part of normal update behaviour. Service is now stable, and we are moving into a monitoring phase. We will continue to closely monitor availability over the next 24 hours and do not anticipate any further impact at this time. Customers may also revert to their previous codec settings if required. Thank you for your patience and understanding. Dstny Support
- resolved Apr 16, 2026, 02:15 PM UTC
We are pleased to confirm that this incident has now been fully resolved. Over the past 24 hours, we have closely monitored the service and observed no recurrence or further impact. The root cause has been identified, and preventative measures have been implemented to reduce the risk of a similar issue occurring in future. To provide transparency, a detailed post‑mortem report will be shared within the next five business days. We sincerely apologise for any inconvenience caused and thank you for your patience and understanding throughout this incident. Should you have any further questions or concerns, please do not hesitate to contact our Support team. Kind regards, Dstny Support
- postmortem Apr 21, 2026, 02:54 PM UTC
**Incident Summary**

From 13:45 UTC on 13th April 2026 until 11:29 UTC on 15th April 2026, a subset of Call2Teams customers experienced intermittent call quality issues. Reported symptoms included one‑way audio, silence at the start of calls or during internal transfers, intermittent static, and in some cases early call termination shortly after connection. While initial reports suggested a possible regional limitation, investigation confirmed that affected scenarios could occur across all regions. Following a series of targeted fixes and monitoring, full service was restored and confirmed stable by 11:29 UTC on 15th April 2026.

**Root Cause**

The incident was caused by an interaction between a change in Microsoft’s call setup behaviour and a previously latent issue in the Call2Teams call setup logic. Microsoft’s updated codec selection exposed a defect in how our platform handled codec ordering in certain responses, which could create mismatches between the codec signalled to customer phone systems and the codec used towards Microsoft. In parallel, G.722 was being advertised to Microsoft in line with SBC vendor guidance and earlier interoperability assumptions, leading Microsoft to select G.722 in cases where some customer PBXs did not support it. Together, these factors increased real‑time transcoding and contributed to the intermittent audio problems observed.

**Incident Resolution**

Engineers first reduced unnecessary transcoding by adjusting configuration so that G.722 is only offered to Microsoft when it is explicitly supported by the customer’s phone system. Outside of the standard default codecs (PCMA and PCMU), advertised codecs were aligned with each customer’s PBX capabilities, which immediately improved platform stability. A software update was then deployed to correct call setup handling so that codec selections negotiated with Microsoft are maintained consistently when signalled back to customer systems. Following rollout, targeted testing, and monitoring, call quality was confirmed to be stable across all affected scenarios.

**Mitigative Actions**

We have corrected the call setup logic to prevent incorrect codec re‑ordering from being signalled to customer systems, ensuring that codec selections negotiated with Microsoft are preserved end‑to‑end. We have also reduced unnecessary transcoding by limiting advertised codecs to standard defaults (PCMA and PCMU), while conditionally including other codecs, such as G.722, only where they are explicitly supported by the customer’s phone system. In addition, we are reviewing our SBC configuration and internal processes to improve early detection of, and response to, third‑party behavioural changes, strengthening overall service resilience and reducing the likelihood of similar incidents recurring.

### **Timeline**
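The corrective logic described in the postmortem can be sketched as two rules: advertise G.722 to Microsoft only when the customer's PBX explicitly supports it, and pass the codec negotiated with Microsoft back to the PBX unchanged. The sketch below is an assumed illustration of that logic, not Call2Teams source code; the function names and capability sets are hypothetical.

```python
# Hypothetical sketch of the postmortem's fix: conditional codec
# advertisement plus end-to-end preservation of the negotiated codec.

DEFAULT_CODECS = ["PCMA", "PCMU"]   # standard defaults, always offered
CONDITIONAL_CODECS = ["G722"]       # offered only when the PBX supports them

def codecs_to_advertise(pbx_capabilities: set[str]) -> list[str]:
    """Build the codec list offered to Microsoft: defaults first, then any
    conditional codecs the customer's phone system explicitly supports."""
    extras = [c for c in CONDITIONAL_CODECS if c in pbx_capabilities]
    return DEFAULT_CODECS + extras

def answer_for_pbx(negotiated: str, pbx_capabilities: set[str]) -> str:
    """Signal the codec negotiated with Microsoft back to the PBX
    unchanged; re-ordering at this step was the latent defect."""
    if negotiated not in pbx_capabilities:
        raise ValueError(f"PBX cannot handle negotiated codec {negotiated}")
    return negotiated

# A PBX without G.722 support is never offered it, so Microsoft
# cannot select a codec the PBX would need transcoding for.
print(codecs_to_advertise({"PCMA", "PCMU"}))          # ['PCMA', 'PCMU']
print(codecs_to_advertise({"PCMA", "PCMU", "G722"}))  # ['PCMA', 'PCMU', 'G722']
```

Keeping the advertised set a function of each PBX's capabilities removes the transcoding path, while the pass-through answer guarantees both legs of the call agree on the same codec.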