Dstny Outage History

Dstny is up right now

There were 5 Dstny outages since February 14, 2026, totaling 209h 46m of downtime. Each incident is summarised below with details, duration, and resolution information.

Source: https://dstnystatus.statuspage.io

Major April 14, 2026

One‑Way / No Audio at Call Start and During Internal Transfers

Detected by Pingoru
Apr 14, 2026, 11:01 AM UTC
Resolved
Apr 16, 2026, 02:15 PM UTC
Duration
2d 3h
Affected: EU
Timeline · 9 updates
  1. investigating Apr 14, 2026, 11:01 AM UTC

    We are currently investigating an issue affecting Call2Teams across multiple regions. At this time, the specific regions and the full extent of the impact are not yet confirmed. This issue may cause intermittent loss of audio at the start of calls or during internal call transfers, impacting both inbound and outbound calls. Our teams are actively working to identify the root cause and implement a resolution. Updates will be provided every 60 minutes as we learn more. We apologise for any inconvenience caused and appreciate your patience during this time. Dstny Support

  2. identified Apr 14, 2026, 11:47 AM UTC

    We are currently investigating an issue affecting Call2Teams in the EU West region. This issue is causing one‑way or no audio at the start of calls and during internal call transfers, impacting users in the affected area. Our teams are working to identify the root cause and implement a resolution. In parallel, our Platform Engineering team is actively working on implementing an urgent fix, with further updates to follow. Updates will be provided every 60 minutes as we learn more. We apologise for any inconvenience caused and appreciate your patience during this time. Dstny Support

  3. monitoring Apr 14, 2026, 12:55 PM UTC

    We are writing to inform you that an issue occurred earlier affecting Call2Teams in the EU region. A mitigation has now been successfully implemented, and services have been restored. This issue may have caused one‑way or no audio at the start of calls or during internal call transfers for users in the affected area. We will continue to monitor the situation closely over the next 24 hours to ensure there is no further impact. Thank you for your understanding, and we appreciate your continued patience. If you have any questions or concerns, please don’t hesitate to contact our support team. Dstny Support

  4. identified Apr 15, 2026, 09:37 AM UTC

    We are currently investigating an issue caused by a bug within our application code. Recent changes introduced by Microsoft appear to have exacerbated the behaviour, which brought the issue to our attention. Our engineering teams are actively working to identify the root cause and implement a fix, with the aim of addressing this today where feasible. As a temporary workaround, limiting usage to a single codec should help minimise impact in the interim. We will provide further updates as more information becomes available. We apologise for any inconvenience caused and appreciate your patience. Dstny Support

  5. identified Apr 15, 2026, 10:44 AM UTC

    Following our previous update, we have now tested a new solution that addresses the identified issue and early results are positive. The fix is currently being validated with a small number of targeted customer accounts. Subject to the outcome of this testing, we expect to provide a further update regarding a wider rollout. In the meantime, the previously shared workaround remains applicable to minimise impact. We will continue to keep you informed as we progress. Thank you for your continued patience. Dstny Support

  6. identified Apr 15, 2026, 12:15 PM UTC

    Due to our deployment pipeline setup, the rollout is expected to take up to one hour from this point, rather than the 30 minutes previously advised. We will provide a further update once the rollout has completed. Thank you for your continued patience and understanding. Dstny Support

  7. monitoring Apr 15, 2026, 01:25 PM UTC

    Our Platform team has identified the root cause and implemented corrective measures to restore normal service. Customers are not expected to have experienced any impact. A small number may have seen a brief call setup failure during deployment, which would have cleared on redial, or short‑lived re‑registrations as part of normal update behaviour. Service is now stable, and we are moving into a monitoring phase. We will continue to closely monitor availability over the next 24 hours and do not anticipate any further impact at this time. Customers may also revert to their previous codec settings if required. Thank you for your patience and understanding. Dstny Support

  8. resolved Apr 16, 2026, 02:15 PM UTC

    We are pleased to confirm that this incident has now been fully resolved. Over the past 24 hours, we have closely monitored the service and observed no recurrence or further impact. The root cause has been identified, and preventative measures have been implemented to reduce the risk of a similar issue occurring in future. To provide transparency, a detailed post‑mortem report will be shared within the next five business days. We sincerely apologise for any inconvenience caused and thank you for your patience and understanding throughout this incident. Should you have any further questions or concerns, please do not hesitate to contact our Support team. Kind regards, Dstny Support

  9. postmortem Apr 21, 2026, 02:54 PM UTC

    **Incident Summary**

    From 13:45 UTC on 13th April 2026 until 11:29 UTC on 15th April 2026, a subset of Call2Teams customers experienced intermittent call quality issues. Reported symptoms included one‑way audio, silence at the start of calls or during internal transfers, intermittent static, and in some cases early call termination shortly after connection. While initial reports suggested a possible regional limitation, investigation confirmed that affected scenarios could occur across all regions. Following a series of targeted fixes and monitoring, full service was restored and confirmed stable by 11:29 UTC on 15th April 2026.

    **Root Cause**

    The incident was caused by an interaction between a change in Microsoft’s call setup behaviour and a previously latent issue in the Call2Teams call setup logic. Microsoft’s updated codec selection exposed a defect in how our platform handled codec ordering in certain responses, which could create mismatches between the codec signalled to customer phone systems and the codec used towards Microsoft. In parallel, G.722 was being advertised to Microsoft in line with SBC vendor guidance and earlier interoperability assumptions, leading Microsoft to select G.722 in cases where some customer PBXs did not support it. Together, these factors increased real‑time transcoding and contributed to the intermittent audio problems observed.

    **Incident Resolution**

    Engineers first reduced unnecessary transcoding by adjusting configuration so that G.722 is only offered to Microsoft when it is explicitly supported by the customer’s phone system. Outside of the standard default codecs (PCMA and PCMU), advertised codecs were aligned with each customer’s PBX capabilities, which immediately improved platform stability. A software update was then deployed to correct call setup handling so that codec selections negotiated with Microsoft are maintained consistently when signalled back to customer systems. Following rollout, targeted testing, and monitoring, call quality was confirmed to be stable across all affected scenarios.

    **Mitigative Actions**

    We have corrected the call setup logic to prevent incorrect codec re‑ordering from being signalled to customer systems, ensuring that codec selections negotiated with Microsoft are preserved end‑to‑end. We have also reduced unnecessary transcoding by limiting advertised codecs to standard defaults (PCMA and PCMU), while conditionally including other codecs, such as G.722, only where they are explicitly supported by the customer’s phone system. In addition, we are reviewing our SBC configuration and internal processes to improve early detection of, and response to, third‑party behavioural changes, strengthening overall service resilience and reducing the likelihood of similar incidents recurring.
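
The codec policy described in this postmortem can be made concrete with a short sketch: advertise only the safe defaults (PCMA and PCMU) upstream, include G.722 only when the customer's PBX explicitly supports it, and pass the negotiated codec list back unmodified. Everything below (the `PbxProfile` type, function names, capability model) is a hypothetical illustration, not Dstny's actual implementation.

```python
# Hypothetical sketch of the codec-advertisement policy from the postmortem.
from dataclasses import dataclass

DEFAULT_CODECS = ["PCMA", "PCMU"]   # standard defaults, always offered
CONDITIONAL_CODECS = ["G722"]       # offered only when the PBX supports them

@dataclass
class PbxProfile:
    name: str
    supported_codecs: set[str]      # capabilities learned from the customer's PBX

def build_offer_to_microsoft(pbx: PbxProfile) -> list[str]:
    """Offer defaults plus any conditional codec the PBX explicitly supports.

    Advertising G.722 to Microsoft when the PBX cannot handle it forces
    real-time transcoding, which the postmortem names as a contributing factor.
    """
    return DEFAULT_CODECS + [c for c in CONDITIONAL_CODECS
                             if c in pbx.supported_codecs]

def answer_to_pbx(negotiated_with_microsoft: list[str]) -> list[str]:
    """Preserve the codec ordering negotiated with Microsoft when signalling
    back to the customer's system; re-ordering it at this step is the latent
    defect the software update corrected."""
    return list(negotiated_with_microsoft)

# A PBX without G.722 support never sees it offered upstream:
legacy = PbxProfile(name="legacy", supported_codecs={"PCMA", "PCMU"})
assert build_offer_to_microsoft(legacy) == ["PCMA", "PCMU"]

modern = PbxProfile(name="modern", supported_codecs={"PCMA", "PCMU", "G722"})
assert build_offer_to_microsoft(modern) == ["PCMA", "PCMU", "G722"]
```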

Read the full incident report →

Major April 2, 2026

ConnectMe outbound calls intermittently failing

Detected by Pingoru
Apr 02, 2026, 08:23 AM UTC
Resolved
Apr 02, 2026, 08:23 AM UTC
Affected: EU
Timeline · 2 updates
  1. resolved Apr 02, 2026, 08:23 AM UTC

    We are writing to inform you that an issue occurred earlier affecting ConnectMe in EU-West. The incident has been fully resolved, and services have been restored. This may have caused outbound calls to fail for users in the affected areas. We will continue to monitor the situation for the next 24 hours to ensure there is no further impact. Thank you for your understanding, and we appreciate your continued patience. If you have any questions or concerns, please don’t hesitate to contact our support team.

  2. postmortem Apr 09, 2026, 11:03 AM UTC

    **Incident Summary**

    On 2nd April 2026, between 04:05 UTC and 07:37 UTC, a routine software update in the EU West region led to intermittent call failures for a subset of ConnectMe users. The issue was proactively detected, and service was fully restored after the update was rolled back at 07:37 UTC. All systems were subsequently verified as stable.

    **Root Cause**

    The incident was triggered by a routine software update aimed at enhancing service recovery within the ConnectMe platform. During deployment, the update caused an unexpected interaction within the system, leading to a key component becoming unresponsive and resulting in intermittent outbound call failures for some users.

    **Incident Resolution**

    The issue was resolved by rolling back the software update to the previous stable version at 07:37 UTC. This action restored normal service for all users. The rollback was performed promptly after the root cause was identified, and service stability was closely monitored to confirm full recovery.

    **Mitigative Actions**

    We are refining monitoring to better detect outbound call failures and reviewing the upgrade process to address identified gaps. Backend adjustments will also be made to improve stability in future deployments.
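
The mitigations above centre on detecting outbound call failures quickly enough to roll back. As a hedged illustration only (none of these function names reflect ConnectMe's real tooling), a post-deploy guard might sample the outbound failure rate and trigger the same rollback that resolved this incident:

```python
# Hypothetical post-deploy guard: watch the outbound call failure rate for a
# window after rollout and roll back automatically if it crosses a threshold.
import time

FAILURE_THRESHOLD = 0.05   # roll back if >5% of outbound calls fail
CHECK_INTERVAL_S = 60      # sample once a minute
WATCH_WINDOW_S = 1800      # guard the first 30 minutes after deployment

def sample_failure_rate() -> float:
    """Placeholder: fraction of outbound calls failing over the last
    interval, e.g. derived from call-detail records or platform metrics."""
    raise NotImplementedError

def rollback_to_previous_version() -> None:
    """Placeholder: redeploy the last known-good version."""
    raise NotImplementedError

def guard_after_deploy() -> bool:
    """Return True if the rollout stayed healthy for the whole window."""
    deadline = time.monotonic() + WATCH_WINDOW_S
    while time.monotonic() < deadline:
        if sample_failure_rate() > FAILURE_THRESHOLD:
            rollback_to_previous_version()
            return False
        time.sleep(CHECK_INTERVAL_S)
    return True
```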

Read the full incident report →

Major March 3, 2026

Call2Teams Registration Failure - US West

Detected by Pingoru
Mar 03, 2026, 04:40 PM UTC
Resolved
Mar 04, 2026, 05:33 PM UTC
Duration
1d
Affected: US
Timeline · 4 updates
  1. investigating Mar 03, 2026, 04:40 PM UTC

    We are currently investigating an issue affecting Call2Teams in the US West region. This is resulting in registration loss and failed inbound and outbound calls for users in the affected areas. Our teams are working to identify the root cause and implement a resolution. We have received reports that some users are already regaining service while engineering teams continue to apply mitigating actions. We will provide updates every 60 minutes as more information becomes available. We apologise for any inconvenience caused and appreciate your patience while we work to restore full service.

  2. monitoring Mar 03, 2026, 05:29 PM UTC

    Our Platform Team has identified the root cause of the issue and implemented corrective actions to fully restore application services. We will continue to closely monitor service availability over the next 24 hours, and no further impact is expected at this time. Thank you, Dstny Support

  3. resolved Mar 04, 2026, 05:33 PM UTC

    We are pleased to confirm that this incident has now been fully resolved. Over the past 24 hours, we have monitored the service closely and have seen no recurrence or further impact. We have identified the root cause and implemented measures to prevent similar incidents in the future. For full transparency, a detailed post‑incident report will be made available within the next five business days. We sincerely apologise for any inconvenience caused and appreciate your patience and understanding throughout this incident. If you have any further questions or concerns, please contact our support team. Thank you, Dstny Support

  4. postmortem Mar 09, 2026, 04:27 PM UTC

    **Incident Summary**

    On 3rd March 2026, a subset of Call2Teams users in the US region experienced disruption to their calling services. Between 12:55 UTC and 16:50 UTC, users served by one of our Edge signalling SBCs were unable to register successfully, which prevented them from making or receiving calls during this period. Full service was restored for all affected users by 16:50 UTC.

    **Root Cause**

    The root cause was introduced during routine maintenance on one of our Edge signalling SBCs. Following a software upgrade, essential services were not restarted as required, leaving the SBC unable to process user registrations. This resulted in affected users being unable to establish calls during the disruption window.

    **Incident Resolution**

    Initial recovery actions began at 16:22 UTC, when key services on the affected Edge signalling SBC were restarted, restoring functionality for the majority of impacted users. A small number of users required an additional corrective action, and full service was restored for them at 16:50 UTC by temporarily redirecting their traffic to an alternative Edge to ensure continuity. The following day, all relevant services were comprehensively restarted to confirm stable and correct operation across the platform.

    **Mitigative Actions**

    * Strengthening maintenance procedures and post‑upgrade validation to prevent incomplete steps during routine operations.
    * Implementing enhanced monitoring to alert teams immediately if registration success rates drop to zero on any Edge (sketched below).
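
The second mitigation, alerting the moment registration success drops to zero on any Edge, is simple to express. The sketch below is an assumption-laden illustration (metric source and alert transport are placeholders), not Dstny's monitoring stack:

```python
# Hypothetical per-Edge check for the "registration success rate is zero"
# condition called out in the mitigative actions.

def registration_success_rate(edge: str) -> float:
    """Placeholder: fraction of successful registrations on this Edge over
    the last few minutes, taken from the SBC's metrics."""
    raise NotImplementedError

def page_on_call(message: str) -> None:
    """Placeholder: deliver the alert (pager, chat, email...)."""
    raise NotImplementedError

def check_edges(edges: list[str]) -> None:
    for edge in edges:
        # A flat-zero success rate is exactly what an SBC whose services were
        # not restarted after an upgrade looks like: the host is up, but no
        # registration completes.
        if registration_success_rate(edge) == 0.0:
            page_on_call(f"0% registration success on Edge {edge}; "
                         "check SBC services and recent maintenance")
```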

Read the full incident report →

Minor February 26, 2026

ConnectMe – Login Issues and Inbound/Outbound Call Failures

Detected by Pingoru
Feb 26, 2026, 10:21 AM UTC
Resolved
Mar 03, 2026, 04:41 PM UTC
Duration
5d 6h
Affected: EU
Timeline · 4 updates
  1. investigating Feb 26, 2026, 10:21 AM UTC

    We are currently investigating an issue affecting ConnectMe in EU West. This is impacting a subset of users and includes login issues as well as inbound and outbound call failures in the affected areas. Our teams are working to identify the root cause and implement a resolution. Updates will be provided every 60 minutes as we learn more. We apologise for any inconvenience caused and appreciate your patience during this time. Dstny Support

  2. monitoring Feb 26, 2026, 10:23 AM UTC

    Our Platform team has identified the root cause of the issue and implemented corrective measures to restore application services. We will continue to monitor service availability for the next 24 hours and do not anticipate any further impact at this time. Thank you, Dstny Support

  3. resolved Mar 03, 2026, 04:41 PM UTC

    This incident has been resolved.

  4. postmortem Mar 05, 2026, 10:42 AM UTC

    **Major Incident Category** Service Degradation
    **Post Mortem Owner** Ant Hurlock
    **Date Post Mortem Completed (UTC)** 04 Mar 2026, 17:30

    **Incident Summary**

    On 26th February 2026, a small percentage of ConnectMe users in the EU West region experienced service degradation between 08:19 and 09:22 UTC. Affected users were connected to a single platform node and encountered varied symptoms, including blank screens after login and disruptions to calling and related functionality. Although the overall number of impacted users was limited, the disruption for those affected was significant. Engineers applied mitigation steps, and all services were fully restored and confirmed stable by 09:22 UTC.

    **Root Cause**

    The incident was triggered by a software fault within a core platform component. A planned update containing improvements and fixes for two related issues had not yet been deployed, meaning these enhancements were not available in the production environment. The first issue caused the affected platform node to become unresponsive, and the diagnostic information required to quickly identify the cause was insufficient. The upcoming update will introduce enhanced logging to provide the visibility needed to detect and analyse node deadlock scenarios in real time. The second issue involved the node’s health‑monitoring probe. A related software defect meant the probe could take significantly longer than expected to detect and recover from failures, extending the automated recovery window to as much as two hours. Although the platform is designed to self‑recover within this timeframe, in this case recovery did not occur until engineers intervened manually to restore service.

    **Incident Resolution**

    The issue was first detected by automated monitoring at 08:19 UTC, prompting immediate investigation by our engineering team. Initial recovery efforts were delayed due to an incorrect procedure, which terminated the unhealthy pod but left the health‑check waiting on internal conditions that were not relevant to this type of failure. As a result, the health‑check began a recovery cycle that could have taken up to two hours to complete. Engineers then intervened manually, restarting the affected component and restoring normal service for all impacted users at 09:22 UTC. Service stability was confirmed following this intervention.

    **Mitigative Actions**

    * Accelerating deployment of the planned software update.
    * Improving the speed and reliability of automatic recovery by resolving the issue that previously caused the health‑monitoring probe to delay self‑recovery by up to two hours.
    * Introducing enhanced logging to support faster diagnosis of node deadlock scenarios and progress towards a permanent fix.
    * Updating runbooks to provide clearer guidance for rapid recovery and validation of service restoration.
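
To see why the automated recovery window could stretch to two hours, note that failure-detection time for a liveness-style probe is bounded by its period, timeout, and consecutive-failure threshold. The arithmetic below uses purely illustrative numbers (the real probe configuration is not published in this report):

```python
# Illustrative arithmetic: how probe settings bound worst-case self-recovery.
def worst_case_detection_s(period_s: int, failure_threshold: int,
                           timeout_s: int) -> int:
    """A liveness-style probe must fail `failure_threshold` consecutive
    times, each attempt taking up to period + timeout, before the platform
    restarts the component."""
    return failure_threshold * (period_s + timeout_s)

# Overly tolerant settings: probe every 10 min, 2 min timeout, 10 failures.
print(worst_case_detection_s(600, 10, 120) / 3600, "hours")  # -> 2.0 hours

# Tighter settings catch the same deadlock in under a minute.
print(worst_case_detection_s(10, 3, 5), "seconds")           # -> 45 seconds
```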

Read the full incident report →

Critical February 14, 2026

ConnectMe & SMP – Nordics Production – Users unable to login

Detected by Pingoru
Feb 14, 2026, 03:35 AM UTC
Resolved
Feb 14, 2026, 10:54 AM UTC
Duration
7h 19m
Affected: EU
Timeline · 8 updates
  1. investigating Feb 14, 2026, 03:35 AM UTC

    We are currently investigating an issue affecting ConnectMe & SMP in the Nordics production environment. Users will be unable to log into these services. Additional services including CRM Connect, Teams Connect and Omnichannel are also showing service degradation. Our teams are working to identify the root cause and implement a resolution. Updates will be provided every 60 minutes as we learn more. We apologize for any inconvenience caused and appreciate your patience during this time. Dstny Support

  2. investigating Feb 14, 2026, 03:57 AM UTC

    We are continuing to investigate this issue.

  3. investigating Feb 14, 2026, 04:00 AM UTC

    We continue to investigate this incident in collaboration with our Platform team and are implementing measures to reduce user impact wherever possible. We will provide an additional update within the next 60 minutes. Thank you for your continued patience and understanding during this time. Dstny Support

  4. investigating Feb 14, 2026, 04:51 AM UTC

    We continue to investigate this incident in conjunction with our Platform team. There has been no change in the current service impact, and mitigation activities remain in progress. The next update will be provided within the next 60 minutes. Thank you. Dstny Support

  5. investigating Feb 14, 2026, 05:46 AM UTC

    Our teams continue to work on this incident with the highest priority. While service remains impacted and there has been no change in status, we are making progress in narrowing down the underlying cause and are actively exploring additional mitigation paths to restore stability as quickly as possible. We will provide a further update within the next 60 minutes. Dstny Support

  6. monitoring Feb 14, 2026, 06:16 AM UTC

    Our Platform team has identified the root cause of the issue and implemented corrective measures to restore application services. We will continue to monitor service availability for the next 24 hours and do not anticipate any further impact at this time. Thank you, Dstny Support

  7. resolved Feb 14, 2026, 10:54 AM UTC

    This incident has been resolved.

  8. postmortem Mar 09, 2026, 03:05 PM UTC

    **Incident Summary**

    On 14th February 2026, a critical outage affected all users in the Nordics production environment. The authentication service became unavailable, resulting in a complete loss of access to all authentication‑dependent services including ConnectMe, SMP, and others. The incident began at 00:27 UTC and was resolved for end-users by 05:15 UTC.

    **Root Cause**

    The outage was caused by an unintended configuration that allowed a point‑in‑time recovery backup file to grow unexpectedly. This rapid growth consumed the storage allocated to the authentication database. Once the storage was exhausted, the database crashed, rendering the authentication service (Keycloak) unavailable. As a result, all dependent services relying on authentication were unable to process login operations, leading to a widespread service outage.

    **Incident Resolution**

    Service was restored by increasing the storage capacity for the authentication database and identifying that an automated cleanup process was rapidly consuming the new space. The authentication service was temporarily scaled down to halt the excessive data growth, and an index was added to improve the cleanup process. Once these steps were completed, the authentication service was brought back online and user access was restored at 05:15 UTC. Work to restore full database redundancy continued after user impact ended.

    **Mitigative Actions**

    * The team is reviewing whether point-in-time recovery should be implemented for the authentication database.
    * The recovery procedure for the database is being overhauled and comprehensively updated to ensure a faster and more effective response in future incidents.
    * Steps are being taken to confirm that the bastion server is fully isolated and dedicated to production, ensuring separation from development environments.
    * Ongoing investigation into periodic CPU and network spikes related to Keycloak database queries is underway.
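
A storage-exhaustion failure like this one is easier to catch with a headroom projection than with a fixed percent-used alarm, because the backup file grew quickly. The sketch below is a hypothetical monitor (both metric functions are placeholders), not Dstny's actual tooling:

```python
# Hypothetical headroom check: alert on projected time-to-full rather than
# on current usage, so fast-growing files (like the point-in-time recovery
# backup in this incident) trip the alarm while there is still time to act.

def volume_usage_bytes() -> tuple[int, int]:
    """Placeholder: return (used, total) bytes for the database volume."""
    raise NotImplementedError

def growth_rate_bytes_per_s() -> float:
    """Placeholder: recent growth rate, e.g. from two usage samples."""
    raise NotImplementedError

def hours_until_full() -> float:
    used, total = volume_usage_bytes()
    rate = growth_rate_bytes_per_s()
    if rate <= 0:
        return float("inf")    # flat or shrinking: no projected exhaustion
    return (total - used) / rate / 3600.0

def check_headroom(alert_below_hours: float = 24.0) -> None:
    remaining = hours_until_full()
    if remaining < alert_below_hours:
        print(f"ALERT: database volume projected full in {remaining:.1f} h")
```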

Read the full incident report →

Looking to track Dstny downtime and outages?

Pingoru polls Dstny's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Dstny reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Dstny alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Dstny for free

5 free monitors · No credit card required