Redox Outage History

Redox is up right now

There were 10 Redox outages since February 6, 2026, totaling 127h 23m of downtime. Each is summarized below with incident details, duration, and resolution information.

Source: https://status.redoxengine.com

Minor April 13, 2026

Increased rate of errors for Carequality queries

Detected by Pingoru
Apr 13, 2026, 04:12 PM UTC
Resolved
Apr 13, 2026, 09:11 PM UTC
Duration
4h 58m
Affected: RLS
Timeline · 2 updates
  1. investigating Apr 13, 2026, 04:12 PM UTC

    We are aware of an issue with organizations querying the Redox Gateway over the Carequality network. Queries may receive an increased rate of HTTP 500 responses. All traffic not related to Carequality is unaffected. If you have further questions regarding this issue, please reach out to [email protected]

  2. resolved Apr 13, 2026, 09:11 PM UTC

    A fix has been implemented and we are monitoring the results. Error rates have returned to a normal range. If you have further questions, please reach out to [email protected]

Read the full incident report →
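
When a gateway intermittently returns HTTP 500s like this, callers can usually ride out the spike with bounded retries. A minimal sketch in Python, assuming a hypothetical query endpoint (the URL and payload shape are illustrative, not Redox's actual API):

```python
import time

import requests

# Hypothetical endpoint for illustration only; not Redox's actual API.
GATEWAY_URL = "https://gateway.example.com/carequality/patient-query"

def query_with_retry(payload: dict, max_attempts: int = 5, base_delay: float = 1.0) -> dict:
    """POST a query, retrying on HTTP 5xx with exponential backoff."""
    for attempt in range(max_attempts):
        resp = requests.post(GATEWAY_URL, json=payload, timeout=30)
        if resp.status_code < 500:
            resp.raise_for_status()  # surface 4xx immediately; retrying won't help there
            return resp.json()
        # 5xx: wait 1s, 2s, 4s, ... before trying again
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still receiving 5xx after {max_attempts} attempts")
```

Capping attempts and backing off exponentially keeps client retries from amplifying load on a gateway that is already struggling.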

Minor March 16, 2026

Partial Log Delay

Detected by Pingoru
Mar 16, 2026, 02:51 PM UTC
Resolved
Mar 16, 2026, 03:37 PM UTC
Duration
46m
Affected: Traffic Processing
Timeline · 3 updates
  1. investigating Mar 16, 2026, 02:51 PM UTC

    Redox is currently experiencing a delay in some logs. About an eighth of logs are delayed by under an hour. What this means: you may experience a delay before receiving traffic from Redox. However, please be aware that the actual processing of logs is unaffected; message flow is still occurring, albeit at a slower pace. We are actively looking into this issue and will have a fix available as soon as possible. If you are concerned that your logs have been impacted or have any additional questions, please reach out to [email protected]

  2. monitoring Mar 16, 2026, 03:06 PM UTC

    A fix has been implemented for the delay issues and Redox is actively monitoring our logs. You may continue to experience delays as queues catch up to the present. If you have questions please reach out to [email protected]

  3. resolved Mar 16, 2026, 03:37 PM UTC

    The incident has been resolved and log delays are at nominal levels. If you have any additional questions, please reach out to [email protected]

Read the full incident report →

Major March 5, 2026

Unexpected Dashboard Alerting Outage

Detected by Pingoru
Mar 05, 2026, 08:48 PM UTC
Resolved
Mar 10, 2026, 04:36 PM UTC
Duration
4d 19h
Affected: Alerting
Timeline · 10 updates
  1. investigating Mar 05, 2026, 08:48 PM UTC

    Our automated monitoring tools detected a partial outage affecting the dashboard at ~2:00 PM Central. Alert rules are intermittently failing to load within the dashboard, and a subset of alerts are not triggering as expected. Our team is currently investigating the cause and working on a solution. Please work with your teams to implement downtime procedures. Traffic processing is not impacted. If you have any additional questions, please notify us at [email protected].

  2. identified Mar 05, 2026, 10:25 PM UTC

    We're continuing to investigate the root cause of the issue. It appears to be related to a third-party vendor. We are seeing degraded performance rather than an outage at this time. We are reaching out to this partner to assist with additional troubleshooting. Additionally, we are taking steps to put failovers in place to ensure reliability of our dashboard alerts. A fix is also being put in place to ensure the dashboard still displays as expected.

  3. identified Mar 05, 2026, 11:40 PM UTC

    A fix has been put in place for the dashboard so it now reflects current alert statuses as expected. We are continuing to work toward more reliable alert notification services.

  4. identified Mar 06, 2026, 02:26 PM UTC

    We have identified the cause for the ongoing notification issue and are actively implementing a fix. We will update the status of this incident when this fix has been fully deployed.

  5. identified Mar 06, 2026, 04:48 PM UTC

    The fix was deployed but we are still seeing intermittent errors on our end. We have been communicating with our 3rd party vendor to assist further. We will continue to provide updates as they become available.

  6. identified Mar 06, 2026, 04:49 PM UTC

    The fix was deployed but we are still seeing intermittent errors on our end. We have been communicating with our 3rd party vendor to assist further. We will continue to provide updates as they become available.

  7. identified Mar 06, 2026, 09:33 PM UTC

    Our third-party vendor is in the process of recovering from an incident on their end, and we are doing everything within our power on the Redox side to mitigate the impact. We will continue to provide updates as they become available and will have more information on Monday. Alerting within the customer dashboard will be inconsistent for now and through the weekend. We recommend reviewing any alerts with scrutiny to ensure they are accurate. Redox's own internal alerting remains unimpacted, including automated destination-level error retry and failure alerts.

  8. identified Mar 09, 2026, 02:56 PM UTC

    Our third-party vendor has completed their recovery and is monitoring, as are we. Alerts are working as expected and we are closely monitoring the situation to ensure this continues. If you have any additional questions, please contact [email protected]

  9. monitoring Mar 09, 2026, 02:56 PM UTC

    Our third-party vendor has completed their recovery and is monitoring, as are we. Alerts are working as expected and we are closely monitoring the situation to ensure this continues. If you have any additional questions, please contact [email protected]

  10. resolved Mar 10, 2026, 04:36 PM UTC

    This issue has been resolved: our third-party vendor has resolved the problem on their end, and our alerts remained operational throughout our monitoring. If you have any additional questions, please reach out to [email protected]

Read the full incident report →
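
Update 2 above mentions putting failovers in place so dashboard alerts survive a vendor incident. Redox hasn't published that design; the sketch below only illustrates the general pattern of a primary notification provider with a secondary delivery path, where both send functions are hypothetical stand-ins:

```python
def send_via_primary_vendor(alert: str) -> None:
    """Hypothetical call to the primary notification vendor's API."""
    raise ConnectionError("vendor unavailable")  # simulating the Mar 5 failure mode

def send_via_fallback(alert: str) -> None:
    """Hypothetical secondary path, e.g. direct email or a second vendor."""
    print(f"fallback delivery: {alert}")

def deliver_alert(alert: str) -> None:
    """Try the primary vendor first; on any delivery failure, fail over
    so alerts keep flowing during a vendor outage."""
    try:
        send_via_primary_vendor(alert)
    except Exception:
        send_via_fallback(alert)

deliver_alert("destination error rate above threshold")
```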

Notice February 27, 2026

Delays in message processing for Data on Demand

Detected by Pingoru
Feb 27, 2026, 06:14 PM UTC
Resolved
Feb 27, 2026, 06:26 PM UTC
Duration
12m
Affected: Traffic Processing
Timeline · 3 updates
  1. investigating Feb 27, 2026, 06:14 PM UTC

    Redox is aware of an issue with our message processing for Data on Demand. What this means: We are seeing a ~15 min delay in processing. We’ve found the source of the trouble and the fix is in place. Systems are returning to normal, and you should see full functionality restored shortly. If you have any additional questions, please reach out to us at [email protected]

  2. monitoring Feb 27, 2026, 06:16 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 27, 2026, 06:26 PM UTC

    Messages are now processing normally. All affected queues have been resumed. If you have any additional questions or are not seeing expected logs, please reach out to [email protected]

Read the full incident report →

Minor February 26, 2026

Log processing is delayed

Detected by Pingoru
Feb 26, 2026, 05:33 PM UTC
Resolved
Feb 26, 2026, 05:47 PM UTC
Duration
14m
Affected: Traffic Processing
Timeline · 4 updates
  1. identified Feb 26, 2026, 05:33 PM UTC

    We are aware of an issue with our message processing. What this means: Messages are presently not processing as expected, and logs are taking longer to process than usual. Please expect delays in message receipt until we have identified the issue and implemented a fix. What we will do: After a fix has been implemented, Redox will go through affected messages and determine what action can be taken to rectify the situation. If you have any additional questions, please reach out to us at [email protected]

  2. monitoring Feb 26, 2026, 05:39 PM UTC

    A fix has been implemented for the traffic processing issue and we are monitoring the results. Message processing latency is catching up. If you have any questions, please contact us at [email protected]

  3. resolved Feb 26, 2026, 05:47 PM UTC

    Messages are now processing normally. If you have any additional questions or are not seeing expected logs, please reach out to [email protected]

  4. postmortem Mar 10, 2026, 10:56 PM UTC

    **What Happened**

    On **February 26, 2026** at approximately **10:16 AM CT**, our production environment experienced elevated latency across all asynchronous traffic processing. The incident was triggered when core processing workers were unintentionally scaled down to a minimal replica count, below the capacity we normally run to handle day-to-day traffic. This significantly reduced our processing capacity. The impact varied across connections, with durations ranging from 20-42 minutes and maximum average latencies between 9.6 and 18.2 minutes.

    **How We Fixed It**

    Our team quickly identified the root cause and executed a script to scale all affected workers back to their proper capacity. By **10:45 AM CT**, all async processing had returned to nominal latency levels.

Read the full incident report →
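
The postmortem credits recovery to a script that scaled the affected workers back up. That tooling isn't public; the sketch below shows one plausible shape for it, assuming the workers run as Kubernetes deployments (the deployment names, namespace, and replica counts are hypothetical):

```python
from kubernetes import client, config

# Hypothetical worker deployments and their normal replica counts;
# Redox's actual worker names and scale are not public.
EXPECTED_REPLICAS = {
    "async-processing-worker": 40,
    "log-indexing-worker": 20,
}

def restore_worker_scale(namespace: str = "production") -> None:
    """Scale any under-provisioned worker deployment back to its expected replica count."""
    config.load_kube_config()  # use load_incluster_config() when run inside the cluster
    apps = client.AppsV1Api()
    for name, want in EXPECTED_REPLICAS.items():
        current = apps.read_namespaced_deployment(name, namespace).spec.replicas or 0
        if current < want:
            apps.patch_namespaced_deployment_scale(
                name, namespace, body={"spec": {"replicas": want}}
            )
            print(f"scaled {name}: {current} -> {want}")
```

Pinning expected replica counts in one place also gives monitoring something concrete to compare against, which is how an unintended scale-down like this one gets caught early.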

Major February 23, 2026

VPN Degradation for a subset of VPN tunnels

Detected by Pingoru
Feb 23, 2026, 08:31 PM UTC
Resolved
Feb 23, 2026, 09:14 PM UTC
Duration
43m
Affected: Traffic Processing
Timeline · 6 updates
  1. identified Feb 23, 2026, 08:31 PM UTC

    We have identified the issue with our VPN gateways and are currently working on a fix.

  2. identified Feb 23, 2026, 08:38 PM UTC

    We have identified the issue with our VPN gateways and are currently working on a fix. A subset of HL7 over VPN traffic will be impacted.

  3. monitoring Feb 23, 2026, 08:55 PM UTC

    Our gateways have been restarted and VPN tunnels are coming back up. We are currently monitoring results.

  4. monitoring Feb 23, 2026, 08:59 PM UTC

    As of 02/23/2026 at ~2:15 PM Central Time, the Redox MLLP Listener service was experiencing degraded performance for processing inbound MLLP transactions. Inbound and outbound HL7 messages to and from Redox were delayed while we investigated the root cause. Web requests (HTTPS) from Redox were unaffected. We have mitigated the issue and VPN tunnels are back online. If you have any additional questions, please notify us at [email protected].

  5. resolved Feb 23, 2026, 09:14 PM UTC

    This incident has been resolved.

  6. postmortem Mar 02, 2026, 03:28 PM UTC

    ### **Summary**

    On **February 23, 2026**, between **1:05 AM to 2:05 AM CST** and **1:53 PM to 3:03 PM CST**, a subset of customers experienced elevated latency and connectivity issues across a number of our MLLP VPN Gateways. Service was fully restored once primary gateway instances were cycled and traffic successfully failed over to secondary nodes.

    ### **What Happened**

    The outage was triggered by a memory exhaustion event on specific VPN Gateway instances. A memory leak in a third-party library led to a steady rise in memory consumption, eventually causing the primary instances to become unresponsive.

    ### **What We Are Doing**

    To prevent a recurrence and improve our response time, our engineering team is implementing the following measures:

    * **Stability:** We are deploying updates that mitigate the third-party memory leak, preventing resource exhaustion and non-responsive primary gateway instances. We filed a ticket with the third-party library's maintainers, and they have resolved the issue in the library.
    * **Enhanced Monitoring:** We are deploying more granular alerts that monitor MLLP gateway traffic health and reduce time to detection for unhealthy gateways.
    * **Service Resilience:** We are reviewing our automated health monitoring and failover protocols to ensure the system responds effectively to instance health changes.

Read the full incident report →
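
The "Enhanced Monitoring" measure above amounts to detecting memory exhaustion before a gateway instance becomes unresponsive. A minimal watchdog sketch using psutil, where the threshold, poll interval, and alert hook are illustrative rather than anything Redox has described:

```python
import time

import psutil

MEMORY_ALERT_THRESHOLD = 85.0  # percent of system memory; illustrative value

def send_alert(message: str) -> None:
    """Placeholder alert hook; in practice this would page on-call or post to an incident channel."""
    print(f"ALERT: {message}")

def watch_memory(interval_seconds: int = 60) -> None:
    """Poll system memory usage and alert when it crosses the threshold,
    giving operators time to cycle a gateway before it locks up."""
    while True:
        used = psutil.virtual_memory().percent
        if used >= MEMORY_ALERT_THRESHOLD:
            send_alert(f"memory at {used:.1f}% (threshold {MEMORY_ALERT_THRESHOLD}%)")
        time.sleep(interval_seconds)
```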

Minor February 23, 2026

Latency in viewing logs

Detected by Pingoru
Feb 23, 2026, 04:16 PM UTC
Resolved
Feb 23, 2026, 05:31 PM UTC
Duration
1h 14m
Affected: Logs (view/search)
Timeline · 2 updates
  1. investigating Feb 23, 2026, 04:16 PM UTC

    At approximately 10:15 ET, Redox became aware of an issue in which logs are not properly displaying on the dashboard. You may notice up to an hour of latency in viewing logs. What this means: While logs are still processing properly, the dashboard is not displaying them as expected. Search within the dashboard will not work as expected until a fix has been implemented. If you have questions about specific logs, or about whether your connection has been affected, please contact us at [email protected]

  2. resolved Feb 23, 2026, 05:31 PM UTC

    A fix has been implemented for the log visibility issues and the latency is fully caught up. Logs should now be visible across the entire Redox platform. If you have any additional questions or are still experiencing issues, please reach out to [email protected]

Read the full incident report →

Minor February 23, 2026

Issue With Message Processing

Detected by Pingoru
Feb 23, 2026, 07:09 AM UTC
Resolved
Feb 23, 2026, 07:51 AM UTC
Duration
41m
Affected: Traffic Processing
Timeline · 4 updates
  1. investigating Feb 23, 2026, 07:09 AM UTC

    At approximately 12:30 AM CT, Redox became aware of an issue with our message processing for a subset of customers. What we will do: After a fix has been implemented, Redox will go through affected messages and determine what action can be taken to rectify the situation. If you have any additional questions, please reach out to us at [email protected]

  2. monitoring Feb 23, 2026, 07:30 AM UTC

    A fix has been implemented for the traffic processing issue. Messages are processing properly again. If you have any questions, please contact us at [email protected]

  3. resolved Feb 23, 2026, 07:51 AM UTC

    Messages are now processing normally. If you have any additional questions or are not seeing expected logs, please reach out to [email protected]

  4. postmortem Mar 02, 2026, 03:29 PM UTC

    This incident shares its postmortem with the VPN Degradation incident above. The February 23 gateway postmortem there covers both impact windows (1:05 AM to 2:05 AM and 1:53 PM to 3:03 PM CST), the root cause (a third-party library memory leak exhausting memory on primary VPN Gateway instances), and the remediation measures.

Read the full incident report →

Notice February 10, 2026

Polling traffic delayed

Detected by Pingoru
Feb 10, 2026, 12:50 PM UTC
Resolved
Feb 10, 2026, 03:13 PM UTC
Duration
2h 23m
Timeline · 2 updates
  1. monitoring Feb 10, 2026, 12:50 PM UTC

    At approximately 6:25 AM ET, Redox became aware of an issue with polling workflows for third-party APIs. What this means: Any messages generated by these polling workflows could be delayed in delivery to customers by up to an hour. What we're doing: We have implemented a fix and are monitoring the results. You may continue to experience delays as queues catch up. Please contact us at [email protected] if you have any questions.

  2. resolved Feb 10, 2026, 03:13 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor February 6, 2026

Increased errors on Carequality Location queries

Detected by Pingoru
Feb 06, 2026, 07:19 PM UTC
Resolved
Feb 06, 2026, 07:40 PM UTC
Duration
20m
Affected: Carequality
Timeline · 3 updates
  1. investigating Feb 06, 2026, 07:19 PM UTC

    We are currently experiencing an issue with increased error rates for location queries to Carequality. What this means: If you query Carequality, you may see an increased error rate. What we're doing: We are actively investigating the issue and will report updates as we have them. If you have questions, please contact us at [email protected]

  2. monitoring Feb 06, 2026, 07:32 PM UTC

    A fix has been implemented to resolve the errors we were seeing. We're actively monitoring the situation to ensure error rates return to nominal levels. If you have any additional questions, please contact [email protected]

  3. resolved Feb 06, 2026, 07:40 PM UTC

    The incident has been resolved and the Redox Engine has resumed normal operations. If you have any additional questions please reach out to [email protected]

Read the full incident report →

Looking to track Redox downtime and outages?

Pingoru polls Redox's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.
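
For a sense of the mechanism, a status poller boils down to a loop like the following sketch. It assumes the page follows the common Statuspage /api/v2/status.json convention, which is an assumption here, not a documented guarantee:

```python
import time

import requests

# Assumes the common Statuspage API layout; not a documented Redox guarantee.
STATUS_URL = "https://status.redoxengine.com/api/v2/status.json"

def poll_status(interval_seconds: int = 300) -> None:
    """Poll the status page every 5 minutes and report when the overall
    indicator changes to anything other than "none" (all operational)."""
    last_indicator = None
    while True:
        status = requests.get(STATUS_URL, timeout=10).json()["status"]
        indicator = status["indicator"]  # "none", "minor", "major", or "critical"
        if indicator != "none" and indicator != last_indicator:
            print(f"Redox incident ({indicator}): {status['description']}")
        last_indicator = indicator
        time.sleep(interval_seconds)
```

A production monitor layers component-level filtering, deduplication, and multi-channel delivery on top of this loop: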

  • Real-time alerts when Redox reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Redox alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Redox for free

5 free monitors · No credit card required