Mews Outage History

Mews is up right now

Since February 3, 2026, there have been 24 Mews outages, totaling 160h 23m of downtime. Each incident is summarized below with its details, duration, and resolution information.

Source: https://status.mews.li
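Durations in the entries below use the status page's compact format ("2h 13m", "2d 4h", "20m"). As a worked example of how such figures can be parsed and totaled, here is a small illustrative script (the helper names are mine, not part of any Mews tooling):

```python
import re

def parse_duration(text: str) -> int:
    """Parse a status-page duration like '2d 4h', '2h 13m', or '20m' into minutes."""
    units = {"d": 1440, "h": 60, "m": 1}
    minutes = 0
    for value, unit in re.findall(r"(\d+)\s*([dhm])", text):
        minutes += int(value) * units[unit]
    return minutes

def total_downtime(durations: list[str]) -> str:
    """Sum a list of duration strings and format the total as 'Hh Mm'."""
    total = sum(parse_duration(d) for d in durations)
    return f"{total // 60}h {total % 60}m"

# Example using a few of the durations listed below:
print(total_downtime(["2h 13m", "4h 1m", "20m"]))  # → 6h 34m
```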

Minor April 28, 2026

Investigating issues with Mews Services

Detected by Pingoru
Apr 28, 2026, 10:35 AM UTC
Resolved
Apr 28, 2026, 12:49 PM UTC
Duration
2h 13m
Timeline · 4 updates
  1. investigating Apr 28, 2026, 10:35 AM UTC

    We are currently investigating reports of an issue affecting Mews. The Mews team is actively working to identify the cause, and we will provide updates as soon as possible.

  2. monitoring Apr 28, 2026, 11:22 AM UTC

    The team has resolved the issue, and we are closely monitoring the system to ensure stability.

  3. monitoring Apr 28, 2026, 12:37 PM UTC

    We think the impact from the issue is over.

  4. resolved Apr 28, 2026, 12:49 PM UTC

    System metrics remain healthy after the fix.

Read the full incident report →

Notice April 27, 2026

Investigating issues with Mews Guest In House Report

Detected by Pingoru
Apr 27, 2026, 10:45 AM UTC
Resolved
Apr 27, 2026, 02:46 PM UTC
Duration
4h 1m
Timeline · 2 updates
  1. investigating Apr 27, 2026, 10:45 AM UTC

    We are currently investigating reports of an issue affecting Guest In House report. The Mews team is actively working to identify the cause, and we will provide updates as soon as possible.

  2. resolved Apr 27, 2026, 02:46 PM UTC

    We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.

Read the full incident report →

Minor April 26, 2026

Downgraded Performance

Detected by Pingoru
Apr 26, 2026, 08:09 AM UTC
Resolved
Apr 26, 2026, 08:29 AM UTC
Duration
20m
Affected: Mews POS
Timeline · 2 updates
  1. investigating Apr 26, 2026, 08:09 AM UTC

    We have noticed downgraded performance of the system for some Swedish customers. We are currently looking into it. Thank you for your patience.

  2. resolved Apr 26, 2026, 08:29 AM UTC

    We have resolved the issue affecting some Swedish customers. The system is now operating as expected. Thank you for your patience.

Read the full incident report →

Notice April 21, 2026

Investigating issues with changing dates on reservations

Detected by Pingoru
Apr 21, 2026, 12:12 PM UTC
Resolved
Apr 21, 2026, 02:58 PM UTC
Duration
2h 46m
Affected: Mews Operations
Timeline · 4 updates
  1. investigating Apr 21, 2026, 12:09 PM UTC

    We are aware that customers may be unable to change dates on reservations. We are working to resolve the issue.

  2. investigating Apr 21, 2026, 12:12 PM UTC

    We are currently investigating reports of an issue affecting Mews. The Mews team is actively working to identify the cause, and we will provide updates as soon as possible.

  3. monitoring Apr 21, 2026, 12:20 PM UTC

    We have fixed the issue. Users should be able to change dates on reservations again in the reservations module of the PMS.

  4. resolved Apr 21, 2026, 02:58 PM UTC

    We've confirmed the impact from the issue is over.

Read the full incident report →

Minor April 10, 2026

Cannot create quotes in EMS quotation app

Detected by Pingoru
Apr 10, 2026, 10:44 AM UTC
Resolved
Apr 10, 2026, 02:08 PM UTC
Duration
3h 23m
Affected: Mews Events
Timeline · 3 updates
  1. identified Apr 10, 2026, 10:44 AM UTC

    A recent change to the EMS booking engine is preventing new quotes from being created. We are reverting the booking engine to its previous stable version and will provide an update once quote creation is functioning normally.

  2. monitoring Apr 10, 2026, 12:51 PM UTC

    We reverted the app to its previous stable version and have confirmed that quote creation is functioning normally. We are continuing to monitor the service to ensure no further errors occur. We apologize for the disruption to your operations.

  3. resolved Apr 10, 2026, 02:08 PM UTC

    We have confirmed the issue has been resolved. In case it persists, please refresh the application to apply the latest changes.

Read the full incident report →

Major April 9, 2026

Properties cannot print fiscalization bills

Detected by Pingoru
Apr 09, 2026, 06:18 AM UTC
Resolved
Apr 09, 2026, 09:05 AM UTC
Duration
2h 47m
Timeline · 4 updates
  1. investigating Apr 09, 2026, 06:18 AM UTC

    We are currently investigating an issue affecting some properties, where invoices cannot be printed due to a Fiscal Registry error message.

  2. monitoring Apr 09, 2026, 07:52 AM UTC

    We have unblocked the jobs for printing fiscalized bills for both Germany and France. We are now in contact with properties to confirm the issue has been resolved.

  3. resolved Apr 09, 2026, 09:05 AM UTC

    We are no longer seeing any problems with fiscalization jobs in production; printing of fiscalized bills should be working normally now.

  4. postmortem Apr 14, 2026, 06:31 PM UTC

    # Problem

    On April 9, 2026, properties in France and Germany were unable to reliably print or download invoices due to issues with fiscal record processing.

    * In France, printing bills and invoices was blocked.
    * In Germany, printing was delayed, with documents available after a two-minute fallback.

    # Causes

    A database performance issue caused fiscalization jobs in France and Germany to time out. The database selected a suboptimal execution plan for a key query, which prevented fiscal records from being processed and blocked invoice generation. This was an infrastructure issue and was not caused by a recent code release.

    # Action

    Our engineering team identified the affected queries and restored normal database performance, unblocking fiscalization jobs in both countries. All pending fiscal records for France and Germany were processed by 09:41 CEST on April 9, 2026. We continued monitoring to confirm stable processing before resolving the incident.

    # Solutions

    We are implementing the following measures to prevent recurrence:

    * **Decouple fiscal processing**: introduce a new table to separate fiscal record ingestion from downstream processing, reducing the risk of database bottlenecks blocking fiscalization.
    * **Improve alerting**: enhance fiscalization job alerts so repeated timeouts trigger immediate investigation, with a focus on markets where operations can be fully blocked.
    * **Strengthen monitoring and resilience**: continue improving monitoring and query performance to detect and mitigate similar issues earlier.

Read the full incident report →
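The postmortem above proposes decoupling fiscal record ingestion from downstream processing via a new table. As a rough illustration of that pattern (not Mews' actual schema: the table, column names, and worker logic here are invented), a queue table keeps the fast INSERT path unblocked even when downstream processing times out:

```python
import sqlite3

# Hypothetical sketch of a "decoupled fiscal processing" queue table.
# All names are invented for illustration; Mews' real schema is not public.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE fiscal_record_queue (
        id INTEGER PRIMARY KEY,
        bill_id TEXT NOT NULL,
        country TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending'  -- pending | processed
    )
""")

def ingest(bill_id: str, country: str) -> None:
    """Fast path: a single INSERT, so slow downstream queries cannot block it."""
    db.execute(
        "INSERT INTO fiscal_record_queue (bill_id, country) VALUES (?, ?)",
        (bill_id, country),
    )

def process_pending(batch_size: int = 100) -> int:
    """Worker drains pending records; a timeout here delays, but never blocks, ingestion."""
    rows = db.execute(
        "SELECT id FROM fiscal_record_queue WHERE status = 'pending' LIMIT ?",
        (batch_size,),
    ).fetchall()
    for (record_id,) in rows:
        db.execute(
            "UPDATE fiscal_record_queue SET status = 'processed' WHERE id = ?",
            (record_id,),
        )
    return len(rows)

ingest("bill-001", "FR")
ingest("bill-002", "DE")
print(process_pending())  # → 2
```

The design point is that invoice generation only depends on the cheap ingestion step, while the expensive processing runs separately and can be retried after a timeout.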

Notice April 1, 2026

Upsells in Booking Engine display all products

Detected by Pingoru
Apr 01, 2026, 12:13 PM UTC
Resolved
Apr 03, 2026, 04:30 PM UTC
Duration
2d 4h
Affected: Mews Guest Experience
Timeline · 3 updates
  1. identified Apr 01, 2026, 12:13 PM UTC

    We identified issues inside Booking Engine where all products were displayed as available for Upsells. We're working on resolving the issue.

  2. monitoring Apr 01, 2026, 12:14 PM UTC

    We've resolved the problem and are monitoring the situation.

  3. resolved Apr 03, 2026, 04:30 PM UTC

    We think the impact from the issue is over.

Read the full incident report →

Minor April 1, 2026

Barcelona city taxes have not been updated correctly

Detected by Pingoru
Apr 01, 2026, 08:17 AM UTC
Resolved
Apr 01, 2026, 09:16 AM UTC
Duration
58m
Affected: Mews Operations
Timeline · 3 updates
  1. identified Apr 01, 2026, 08:17 AM UTC

    We are currently aware of an issue resulting in the incorrect application of the city tax in Barcelona. We are working on a resolution. We will provide a further update shortly.

  2. resolved Apr 01, 2026, 09:16 AM UTC

    We have successfully identified and deployed a fix for the incorrect application of the city tax in Barcelona. Systems are now calculating the tax correctly, and normal operations have resumed. We apologize for any inconvenience caused.

  3. postmortem Apr 13, 2026, 01:28 PM UTC

    **Problem**

    On April 1st during the morning, we identified that the city tax rates for Barcelona were not being correctly applied to reservations. The new tax rates, which were legally required to take effect from April 1st, had been configured in our system but had not yet been fully propagated to all services at the time reservations were being created.

    **Action**

    Our team was alerted to the issue through property reports and immediately began investigating. We expedited a system deployment to propagate the updated tax configuration and ran corrective processes to fix all reservations that had been created with the incorrect tax amounts. The issue was fully resolved and confirmed by affected properties within approximately one hour.

Read the full incident report →

Notice March 25, 2026

Data Processing Delays - Mews BI Affected

Detected by Pingoru
Mar 25, 2026, 11:08 AM UTC
Resolved
Mar 25, 2026, 12:24 PM UTC
Duration
1h 16m
Timeline · 2 updates
  1. identified Mar 25, 2026, 11:08 AM UTC

    Our data processing infrastructure is running behind, which is causing lagged data in Mews BI. No data has been lost, and the system should be caught up within 1 hour.

  2. resolved Mar 25, 2026, 12:24 PM UTC

    We confirmed the Mews BI data is being refreshed as expected.

Read the full incident report →

Minor March 19, 2026

Data Processing Delays - Mews BI Affected

Detected by Pingoru
Mar 19, 2026, 09:09 AM UTC
Resolved
Mar 19, 2026, 11:09 AM UTC
Duration
2h
Affected: Mews Business Intelligence
Timeline · 2 updates
  1. monitoring Mar 19, 2026, 09:09 AM UTC

    Our data processing infrastructure is running behind, which is causing data delays in Mews BI. No data has been lost. Data is expected to be refreshed within 1 hour.

  2. resolved Mar 19, 2026, 11:09 AM UTC

    All data in Mews BI has been refreshed.

Read the full incident report →

Notice March 13, 2026

POS App version 4.4.9 crashing

Detected by Pingoru
Mar 13, 2026, 09:38 AM UTC
Resolved
Mar 13, 2026, 11:43 AM UTC
Duration
2h 5m
Affected: Mews POS
Timeline · 2 updates
  1. identified Mar 13, 2026, 09:38 AM UTC

    Version 4.4.9 was published yesterday. Clients have reported failures (crashes) when opening an order for a table. We have halted the rollout to prevent new issues. If you have already received the new version, downgrade by uninstalling the app and reinstalling it through the store.

  2. resolved Mar 13, 2026, 11:43 AM UTC

    If you have received the new version, downgrade to 4.4.8 by uninstalling the app and reinstalling it through the store.

Read the full incident report →

Major March 5, 2026

Price derivations of multi-level dependent rates not working as expected

Detected by Pingoru
Mar 05, 2026, 01:51 PM UTC
Resolved
Mar 06, 2026, 06:41 AM UTC
Duration
16h 49m
Affected: Mews Marketplace
Timeline · 7 updates
  1. identified Mar 05, 2026, 01:51 PM UTC

    We noticed that some rate prices were not properly synchronized to channel managers. We are working on a fix right now.

  2. identified Mar 05, 2026, 02:19 PM UTC

    Yesterday's incident caused incorrect prices to be sent to our channel manager connections. Our resynchronization did not resolve the issue, so incorrect prices were still present. We're actively working to restore the correct prices and will update you once this is resolved.

  3. identified Mar 05, 2026, 03:18 PM UTC

    We will begin restoring the correct prices shortly. The price updates generated between 4 March 2026, 13:00 and 16:20 (UTC) were affected, and incorrect prices were synced during this period. We will notify you once the restoration process has started and will also provide an ETA for completion.

  4. identified Mar 05, 2026, 07:56 PM UTC

    We have started the restorer job. It will recover the affected price items between 4 March 2026, 13:00 and 16:20 (UTC). We will notify you once the process is completed. The estimated time of completion (ETA) is 7 hours.

  5. identified Mar 05, 2026, 09:23 PM UTC

    The re-synchronization operation is still ongoing, with an estimated completion time of 4:00 AM. Some partners may already be showing corrected prices in the meantime. We will post an update once it is fully complete.

  6. resolved Mar 06, 2026, 06:41 AM UTC

    We are pleased to confirm that the re-synchronization operation has been completed. All affected price items from 4 March 2026, 13:00 – 16:20 UTC have been fully recovered. Thank you for your patience throughout this incident.

  7. postmortem Apr 02, 2026, 01:40 PM UTC

    # Problem

    On 4 March 2026 at 15:15 CET, a change to room price calculations caused some complex dependent rates to ignore differences between room categories. For affected properties, prices for room types became flat. Between 14:00–17:30 CET, incorrect prices were shown in Mews Operations and sent to channel manager integrations, so some guests booking through online channels might have seen rooms at incorrect prices. Direct bookings in the system using these rate configurations were also affected during the same period.

    # Action

    The Mews team detected the issue at 15:15 CET after a customer report, confirmed the impact, and reverted the change; at 16:35 CET pricing calculations were correct for all new updates. The team then ran a recovery job to recreate and resend corrected prices to channel manager integrations in a controlled way until all inventory was synchronised on 6 March 2026.

    # Causes

    The issue came from a change in dependent rate pricing that was not fully separated from existing pricing behaviour. For some complex rate setups, this caused all room categories on an affected rate to use the same base value.

    # Solution

    All prices have been re-synchronised, and pricing services are fully operational. The Mews team continues to monitor pricing accuracy. We apologise for the disruption this incident caused to you and your guests. Pricing accuracy is critical, and we are committed to strengthening our testing, rollout safeguards, monitoring, alerting, and recovery tooling to reduce the risk of similar issues and resolve them faster.

Read the full incident report →
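For context on what "multi-level dependent rates" means in the incident above: a derived rate takes its prices from a parent rate with a relative adjustment, and derivations can chain. The sketch below is illustrative only (rate names, categories, and multipliers are invented); it shows the expected behavior, where per-category price differences survive each derivation step, which is what the flattening bug broke:

```python
# Illustrative multi-level dependent (derived) rate calculation.
# All rates, categories, and prices are made up. In the bug described above,
# per-category base differences were lost, so every category on an affected
# rate flattened to the same value.
base_rate = {"Standard": 100.0, "Deluxe": 140.0, "Suite": 220.0}

def derive(parent: dict[str, float], multiplier: float) -> dict[str, float]:
    """Apply a relative adjustment per room category, preserving differences."""
    return {category: round(price * multiplier, 2) for category, price in parent.items()}

# Level 1: a non-refundable rate derived from the base rate (-10%).
non_refundable = derive(base_rate, 0.90)
# Level 2: a member rate derived from the non-refundable rate (-5%).
member = derive(non_refundable, 0.95)

print(member["Standard"])  # → 85.5
print(member["Suite"])     # → 188.1
```

A correct derivation keeps the Standard/Suite gap at every level; the faulty change effectively collapsed all categories onto a single base value.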

Minor March 4, 2026

Online check-in journey not loading

Detected by Pingoru
Mar 04, 2026, 10:21 AM UTC
Resolved
Mar 04, 2026, 01:58 PM UTC
Duration
3h 36m
Affected: Mews Guest Experience
Timeline · 2 updates
  1. monitoring Mar 04, 2026, 10:21 AM UTC

    Online check-in was degraded, mainly at the nationality step. The issue has been fixed, and we are monitoring its impact.

  2. resolved Mar 04, 2026, 01:58 PM UTC

    The impact of the issue is over.

Read the full incident report →

Minor February 23, 2026

Elevated Demo API Errors

Detected by Pingoru
Feb 23, 2026, 03:30 PM UTC
Resolved
Feb 23, 2026, 03:47 PM UTC
Duration
17m
Timeline · 2 updates
  1. investigating Feb 23, 2026, 03:30 PM UTC

    We’re currently investigating an issue affecting the Mews Demo API. Some users may encounter failures, primarily HTTP 525 errors. Our engineering team is working to restore normal service and will share an update as soon as possible.

  2. resolved Feb 23, 2026, 03:47 PM UTC

    The issue is now fully resolved. Mews is now operating as expected.

Read the full incident report →

Major February 19, 2026

Channel Managers are delayed in processing; Space status page cannot be opened

Detected by Pingoru
Feb 19, 2026, 11:53 AM UTC
Resolved
Feb 19, 2026, 12:33 PM UTC
Duration
40m
Affected: Mews Operations
Timeline · 6 updates
  1. investigating Feb 19, 2026, 11:53 AM UTC

    We are currently investigating reports of an issue affecting Mews. The Mews team is actively working to identify the cause, and we will provide updates as soon as possible.

  2. identified Feb 19, 2026, 12:02 PM UTC

    We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.

  3. monitoring Feb 19, 2026, 12:11 PM UTC

    The fix has been deployed, and we are monitoring it in production.

  4. monitoring Feb 19, 2026, 12:18 PM UTC

    We have identified and rolled back the root cause of two related issues affecting Mews Operations:

    * Channel Manager delays: reservation synchronization via Channel Managers is delayed.
    * Space Status page: the Space Status page in Mews Operations is currently unavailable.

    We are actively monitoring Channel Manager processing to confirm full recovery and will provide an update once normal operation is restored. We apologize for the disruption and appreciate your patience.

  5. resolved Feb 19, 2026, 12:33 PM UTC

    We have confirmed that this incident is fully resolved. Both affected services have been restored:

    * Channel Manager synchronization: reservation processing is working as expected. All remaining reservations in the queue will be processed correctly.
    * Space Status page: the Space Status page in Mews Operations is accessible again.

    Please note: due to the backlog accumulated during the incident, there may be delays of up to 30 minutes before all reservations are fully synchronized. No action is needed from your side.

  6. postmortem Mar 06, 2026, 02:52 AM UTC

    ### Problem

    On 19 February 2026, between 10:53 and 12:10 UTC, a code change caused two issues:

    1. Users were unable to open the detail page of individual rooms from the timeline, reservation preview, or the space status report.
    2. Reservation synchronization via channel managers stopped processing, causing delays in updates from connected distribution channels.

    The total duration of customer impact was approximately 1 hour and 17 minutes.

    ### Action

    The engineering team identified the faulty code change and rolled back the affected release. The backlog of updates from channel managers was then fully processed within 18 minutes.

    ### Causes

    A recent code change caused failures in querying the history of changes of individual rooms, making both the pages displaying such data and reservation synchronization fail.

    ### Solutions

    The faulty code change was rolled back and the underlying issue is being fixed. We are also strengthening our deployment safeguards and quality gates to reduce the likelihood and impact of similar incidents in the future.

Read the full incident report →

Major February 18, 2026

Marketplace - Channel Manager ARI updates issues

Detected by Pingoru
Feb 18, 2026, 11:06 AM UTC
Resolved
Feb 20, 2026, 02:31 PM UTC
Duration
2d 3h
Affected: Mews Marketplace
Timeline · 6 updates
  1. investigating Feb 18, 2026, 11:06 AM UTC

    We are currently investigating issues with processing and distributing Availability and Price updates.

  2. identified Feb 18, 2026, 02:58 PM UTC

    We have identified the issue and are actively working on a fix. During this time, ARI updates may not synchronize correctly, potentially leading to overbookings.

  3. identified Feb 18, 2026, 04:19 PM UTC

    We have implemented a partial mitigation to improve synchronization of availability, rates, and restrictions. Work on a full resolution is ongoing.

  4. identified Feb 19, 2026, 12:56 PM UTC

    The issue has been resolved. We are now in the process of recovering missed critical availability updates for the upcoming 6 months.

  5. monitoring Feb 20, 2026, 09:33 AM UTC

    The incident has been resolved, and Inventory updates are synchronizing correctly. Previously missed Inventory updates are in the process of being automatically synchronized.

  6. resolved Feb 20, 2026, 02:31 PM UTC

    A recovery for missed inventory updates is currently running and will continue over the weekend.

Read the full incident report →

Critical February 12, 2026

Performance drops of Mews web app

Detected by Pingoru
Feb 12, 2026, 03:34 PM UTC
Resolved
Feb 12, 2026, 04:58 PM UTC
Duration
1h 24m
Affected: Mews Operations
Timeline · 4 updates
  1. investigating Feb 12, 2026, 03:34 PM UTC

    We are currently experiencing an unplanned outage. We are working on solving this problem as quickly as possible. Please continue checking this page for updates on the status.

  2. investigating Feb 12, 2026, 03:59 PM UTC

    We have scaled our infrastructure and the service is now operational, but we are still investigating the root cause.

  3. resolved Feb 12, 2026, 04:58 PM UTC

    We observed localised spikes in latency; customers might have experienced slow page loads and low responsiveness. Mews is now operating as expected.

  4. postmortem Feb 23, 2026, 09:30 AM UTC

    ## Overview

    In February 2026, Mews experienced an outage lasting 47 minutes and two shorter periods of degraded performance that affected access to the Mews PMS app. There was no data loss or data corruption. The impact was on availability and speed, not on the correctness of your data.

    Even short interruptions are unacceptable, especially at critical times when your teams are checking guests in and out, running reports, and handling payments. In this postmortem we explain, in straightforward terms:

    * What happened
    * How you were affected
    * What caused it
    * What we are changing to prevent a repeat

    We sincerely apologize for the disruption and are treating the symptoms of these incidents with the highest priority to ensure they do not happen again.

    ## What happened

    ### 5 February 2026 – Major outage

    * Between 06:40–07:27 UTC, many users could not log in to Mews. Pages loaded very slowly or showed errors, and key functions were unavailable.
    * A shorter performance degradation occurred at around 10:00 UTC, affecting a smaller group of properties.
    * Behind the scenes, several of the servers that handle Mews PMS traffic became overloaded at the same time as a brief issue on our primary database in the Germany West Central region (Azure Cloud). Together, these made the app unstable until we restarted the affected servers and increased capacity.

    ### 12 February 2026 – Short performance drops

    * On 12 February, there were four short windows where Mews PMS was noticeably slower or briefly unavailable for some users.
    * These were much shorter than the 5 February outage, but still visible as slow page loads and occasional login problems.

    ### 17 February 2026 – Mews is slow

    * On the morning of 17 February, support received multiple reports from different regions that Mews was slow or not loading at all for short periods.
    * In our monitoring, one of the servers handling Mews PMS traffic was clearly struggling and needed to be restarted. After a restart, performance returned to normal.

    ## How you were affected

    Across the three events, properties experienced:

    * Difficulty logging in to Mews PMS
    * Slow pages, sometimes ending in timeouts or errors
    * The app being slow and/or unresponsive during the instances of performance degradation

    ## Why it happened (high-level)

    Although the three incidents were triggered by different technical details, the overall pattern was the same.

    ### 1. The majority of our backend instances were degraded

    * Servers became overloaded by traffic;
    * A background database task put extra pressure on our main database;
    * Internal checks meant to verify that a server is healthy did more work than they should and themselves became slow.

    In each case, part of our system was under more strain than it could handle, and some servers started responding very slowly or failing requests.

    ### 2. The "recycling" of instances was not responsive enough

    We rely on a combination of:

    * Internal health checks inside our application; and
    * The cloud provider's ability to stop sending traffic to unhealthy instances

    In this case, that combination did not work as well as it needs to:

    * Our health-check endpoint was too complex and depended on responses from several other downstream dependencies.
    * It also continued to say "I'm OK" often enough that the cloud platform, which looks only at that signal, kept the struggling servers in rotation.
    * The platform behaved exactly as designed, but our own health signal and thresholds were not strict enough to protect you from partial failures, hence the degraded performance you were experiencing.

    ### 3. Our monitoring focused too much on averages

    Our automatic scaling and some of our dashboards look at averages across all servers. Those averages can look acceptable even when a few individual servers are in serious trouble. This made it harder to see, quickly and clearly, that "one or two machines are having a very bad time and need to be taken out of service now".

    In simple terms: local problems on specific servers and supporting systems exposed weaknesses in how we check health and remove bad servers, and that turned into outages and slowness for you.

    ## What we did during the incidents

    During each incident, our teams:

    * Declared and managed a formal incident, with clear ownership and regular updates.
    * Restarted or removed problematic servers and temporarily increased capacity to stabilize performance.
    * Worked with our cloud provider where there were signs of underlying platform issues.
    * Collected detailed logs and metrics to reconstruct exactly what happened, confirm that there was no data loss, and identify the design gaps that need to be fixed.

    These actions restored service each time, but they also showed that we need deeper changes, not just tactical fixes.

    ## What we are changing

    We are now implementing structural improvements in three key areas.

    ### 1. Simpler, more trustworthy health checks

    We are redesigning the internal "health check" that decides whether a server should receive traffic:

    * The new check will be much lighter and will not depend on multiple other systems being fully healthy just to answer "can this server safely handle requests?".
    * It will have strict time limits, so a slow response does not get mistaken for a healthy server.
    * When a server is clearly unable to serve traffic, the health check will say so clearly and consistently, so the platform can remove it from rotation quickly.

    This reduces the chance that a partially broken server continues to handle your traffic.

    ### 2. Faster "recycling" of unhealthy instances and smarter scaling

    We are improving how we detect and react to unhealthy servers:

    * Looking at each server individually (errors, slow responses, health-check results), not just at global averages.
    * Tuning our automatic scaling rules so we react faster when a subset of servers is overloaded.
    * Adjusting how we use the cloud provider's automatic recovery tools so that they help with diagnostics and healing, without introducing extra instability.

    Our goal is that if one server misbehaves, you do not notice it because it is taken out of service and replaced before it affects users.

    ### 3. Safer behavior around the database

    We are also tightening up how we work with the database:

    * For the 5 February event, we are working with our cloud provider on the formal root cause analysis for the brief database issue in our primary region, and we are improving our monitoring of that database so we have earlier and clearer warning signals.
    * For the 12 February event, we have changed an internal maintenance job so that it no longer makes heavy changes during busy hours. Future adjustments of that type will follow a controlled, manual process.

    ## Looking ahead

    These incidents underline how important predictable performance and availability are to your operations. Our focus is on:

    * Reducing the chance that local technical issues ever become visible to you
    * Limiting the impact and duration if something does go wrong
    * Giving ourselves better visibility and clearer signals so we can act quickly and confidently

Read the full incident report →
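The health-check redesign described in the postmortem above (a lighter probe, strict time limits, slow answers treated as unhealthy) can be sketched as follows. This is a hedged illustration, not Mews' implementation; the probe function, deadline, and status values are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as CheckTimeout

# Illustrative health check: minimal local work, a strict deadline, and a
# timeout reported as "unhealthy" rather than left hanging. The threshold
# below is an assumption for the sketch, not a real Mews setting.
HEALTH_CHECK_DEADLINE_S = 0.5  # a slow answer is treated as an unhealthy answer

def local_probe() -> bool:
    """Cheap local check only -- no calls to downstream dependencies."""
    return True  # e.g. event loop responsive, worker pool not saturated

def health_status(probe=local_probe, deadline=HEALTH_CHECK_DEADLINE_S) -> str:
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(probe)
        try:
            return "healthy" if future.result(timeout=deadline) else "unhealthy"
        except CheckTimeout:
            # Slow == unhealthy: let the platform take the instance out of rotation.
            return "unhealthy"

print(health_status())                                     # → healthy
print(health_status(probe=lambda: time.sleep(2) or True))  # → unhealthy
```

The key design choice matches the postmortem: the probe never queries downstream systems, so a struggling dependency cannot make a healthy server look sick, and a struggling server cannot keep answering "I'm OK" slowly enough to stay in rotation.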

Minor February 12, 2026

Creation of payment requests fails

Detected by Pingoru
Feb 12, 2026, 02:30 PM UTC
Resolved
Feb 12, 2026, 07:25 PM UTC
Duration
4h 54m
Affected: Mews Payments
Timeline · 5 updates
  1. investigating Feb 12, 2026, 02:30 PM UTC

    We're investigating the issue.

  2. investigating Feb 12, 2026, 03:58 PM UTC

    We have identified the cause and are in the process of fixing it. We will update again once we have confirmation of a fix.

  3. monitoring Feb 12, 2026, 05:24 PM UTC

    The reverted changes have been deployed to production. We are waiting for confirmation that the issue is resolved.

  4. resolved Feb 12, 2026, 07:25 PM UTC

    The issue has been resolved. Payment requests should be working as expected. Thank you for your patience.

  5. postmortem Feb 18, 2026, 09:04 AM UTC

    ### Overview

    Less than 0.5% of properties were unable to create payment requests. All affected customers had multicurrency enabled. The issue was caused by incorrect handling of multicurrency status checks introduced in the release.

    ### Action

    The issue was resolved by reverting the update that introduced the faulty logic. After the rollback, payment request creation returned to normal for all affected properties.

    ### Causes

    Due to the update, payment gateway accounts data did not load correctly. This caused an error when retrieving multicurrency status, which resulted in payment requests being blocked.

    ### Solution

    The hotfix was deployed by reverting the multicurrency status changes introduced in the update. Improvements to test coverage and alerting are underway to help prevent similar issues in the future.

Read the full incident report →

Major February 11, 2026

Mews Events page is not loading

Detected by Pingoru
Feb 11, 2026, 08:08 AM UTC
Resolved
Feb 11, 2026, 08:36 AM UTC
Duration
28m
Affected: Mews Events
Timeline · 2 updates
  1. investigating Feb 11, 2026, 08:08 AM UTC

    We are currently investigating an issue related to an expired security certificate that may impact access to parts of our services on Mews Events. Our engineering teams are actively working to restore normal operation as quickly as possible. We will provide another update as soon as more information becomes available. We apologize for the inconvenience and appreciate your patience.

  2. resolved Feb 11, 2026, 08:36 AM UTC

    The issue related to an expired security certificate has now been resolved. Our engineering teams have restored normal operation, and services should be functioning as expected. We are reviewing what happened to help prevent this from recurring. Thank you for your patience while we worked to fix the problem. If you continue to experience any issues, please contact our support team.

Read the full incident report →

Minor February 10, 2026

Missing split reservation panel in reservation management

Detected by Pingoru
Feb 10, 2026, 04:09 PM UTC
Resolved
Feb 10, 2026, 04:09 PM UTC
Duration
Affected: Mews Operations
Timeline · 2 updates
  1. monitoring Feb 10, 2026, 04:08 PM UTC

    We identified an issue where the split reservation panel was not visible in the reservation management screen for a subset of users. This was caused by a gradual feature rollout affecting up to 10% of users, which has since been rolled back.

    Impact window: February 9, 3:34 UTC – February 10, 3:36 UTC

    Affected users should now see the split reservation panel as expected. No further action is required on your end.

  2. resolved Feb 10, 2026, 04:09 PM UTC

    The issue has been resolved.

Read the full incident report →

Critical February 5, 2026

Unplanned Outage - Resolved

Detected by Pingoru
Feb 05, 2026, 07:05 AM UTC
Resolved
Feb 05, 2026, 08:05 AM UTC
Duration
1h
Affected: Mews Operations
Timeline · 7 updates
  1. investigating Feb 05, 2026, 07:05 AM UTC

    We are currently experiencing an unplanned outage. We are working on solving this problem as quickly as possible. Please continue checking this page for updates on the status.

  2. monitoring Feb 05, 2026, 07:34 AM UTC

    We upscaled our infrastructure, and the system is stabilizing. We are closely monitoring the system status.

  3. monitoring Feb 05, 2026, 07:43 AM UTC

    We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.

  4. monitoring Feb 05, 2026, 07:47 AM UTC

    We upscaled our infrastructure, and the system is stabilizing. We are closely monitoring the system status.

  5. monitoring Feb 05, 2026, 07:51 AM UTC

    We upscaled our infrastructure, and the system is stabilizing. We are closely monitoring the system status.

  6. resolved Feb 05, 2026, 08:05 AM UTC

    Earlier today, between 06:45 UTC and 07:27 UTC, we encountered an issue where some of our web server instances became unhealthy, resulting in degraded service. This caused login disruptions and in-session issues on the Mews Commander app. The root cause was identified as a set of web service instances that stopped functioning correctly overnight. We’ve already taken corrective action and stabilized the environment. We apologize for any inconvenience and thank you for your understanding.

  7. postmortem Feb 23, 2026, 09:15 AM UTC

    ## Overview

    In February 2026, Mews experienced an outage lasting 47 minutes and two shorter periods of degraded performance that affected access to the Mews PMS app. There was no data loss or data corruption. The impact was on availability and speed, not on the correctness of your data.

    Even short interruptions are unacceptable, especially at critical times when your teams are checking guests in and out, running reports, and handling payments. In this postmortem we explain, in straightforward terms:

    * What happened
    * How you were affected
    * What caused it
    * What we are changing to prevent a repeat

    We sincerely apologize for the disruption and are treating these incidents with the highest priority to ensure they do not happen again.

    ## What happened

    ### 5 February 2026 – Major outage

    * Between 06:40–07:27 UTC, many users could not log in to Mews. Pages loaded very slowly or showed errors, and key functions were unavailable.
    * A shorter performance degradation occurred at around 10:00 UTC, affecting a smaller group of properties.
    * Behind the scenes, several of the servers that handle Mews PMS traffic became overloaded at the same time as a brief issue on our primary database in the Germany West Central region (Azure Cloud). Together, these made the app unstable until we restarted the affected servers and increased capacity.

    ### 12 February 2026 – Short performance drops

    * On 12 February, there were four short windows where Mews PMS was noticeably slower or briefly unavailable for some users.
    * These were much shorter than the 5 February outage, but still visible as slow page loads and occasional login problems.

    ### 17 February 2026 – Mews is slow

    * On the morning of 17 February, support received multiple reports from different regions that Mews was slow or not loading at all for short periods.
    * In our monitoring, one of the servers handling Mews PMS traffic was clearly struggling and needed to be restarted. After a restart, performance returned to normal.

    ## How you were affected

    Across the three events, properties experienced:

    * Difficulty logging in to Mews PMS
    * Slow pages, sometimes ending in timeouts or errors
    * The app being slow and/or unresponsive during the instances of performance degradation

    ## Why it happened (high-level)

    Although the three incidents were triggered by different technical details, the overall pattern was the same.

    ### 1. The majority of our backend instances were degraded

    * Servers became overloaded by traffic;
    * A background database task put extra pressure on our main database;
    * Internal checks meant to verify that a server is healthy did more work than they should and themselves became slow.

    In each case, part of our system was under more strain than it could handle, and some servers started responding very slowly or failing requests.

    ### 2. The “recycling” of instances was not responsive enough

    We rely on a combination of:

    * Internal health checks inside our application; and
    * The cloud provider’s ability to stop sending traffic to unhealthy instances.

    In this case, that combination did not work as well as it needs to:

    * Our health-check endpoint was too complex and depended on responses from several other downstream dependencies.
    * It also continued to say “I’m OK” often enough that the cloud platform, which looks only at that signal, kept the struggling servers in rotation.
    * The platform behaved exactly as designed, but our own health signal and thresholds were not strict enough to protect you from partial failures, hence the degraded performance you were experiencing.

    ### 3. Our monitoring focused too much on averages

    Our automatic scaling and some of our dashboards look at averages across all servers. Those averages can look acceptable even when a few individual servers are in serious trouble. This made it harder to see, quickly and clearly, that “one or two machines are having a very bad time and need to be taken out of service now”.

    In simple terms: local problems on specific servers and supporting systems exposed weaknesses in how we check health and remove bad servers, and that turned into outages and slowness for you.

    ## What we did during the incidents

    During each incident, our teams:

    * Declared and managed a formal incident, with clear ownership and regular updates.
    * Restarted or removed problematic servers and temporarily increased capacity to stabilize performance.
    * Worked with our cloud provider where there were signs of underlying platform issues.
    * Collected detailed logs and metrics to reconstruct exactly what happened, confirm that there was no data loss, and identify the design gaps that need to be fixed.

    These actions restored service each time, but they also showed that we need deeper changes, not just tactical fixes.

    ## What we are changing

    We are now implementing structural improvements in three key areas.

    ### 1. Simpler, more trustworthy health checks

    We are redesigning the internal “health check” that decides whether a server should receive traffic:

    * The new check will be much lighter and will not depend on multiple other systems being fully healthy just to answer “can this server safely handle requests?”.
    * It will have strict time limits, so a slow response does not get mistaken for a healthy server.
    * When a server is clearly unable to serve traffic, the health check will say so clearly and consistently, so the platform can remove it from rotation quickly.

    This reduces the chance that a partially broken server continues to handle your traffic.

    ### 2. Faster “recycling” of unhealthy instances and smarter scaling

    We are improving how we detect and react to unhealthy servers:

    * Looking at each server individually (errors, slow responses, health-check results), not just at global averages.
    * Tuning our automatic scaling rules so we react faster when a subset of servers is overloaded.
    * Adjusting how we use the cloud provider’s automatic recovery tools so that they help with diagnostics and healing, without introducing extra instability.

    Our goal is that if one server misbehaves, you do not notice it because it is taken out of service and replaced before it affects users.

    ### 3. Safer behavior around the database

    We are also tightening up how we work with the database:

    * For the 5 February event, we are working with our cloud provider on the formal root cause analysis for the brief database issue in our primary region, and we are improving our monitoring of that database so we have earlier and clearer warning signals.
    * For the 12 February event, we have changed an internal maintenance job so that it no longer makes heavy changes during busy hours. Future adjustments of that type will follow a controlled, manual process.

    ## Looking ahead

    These incidents underline how important predictable performance and availability are to your operations. Our focus is on:

    * Reducing the chance that local technical issues ever become visible to you
    * Limiting the impact and duration if something does go wrong
    * Giving ourselves better visibility and clearer signals so we can act quickly and confidently
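The "lighter health check with strict time limits" idea from the postmortem can be sketched in a few lines. This is a hypothetical illustration, not Mews's actual implementation: the probe names and the 0.5-second budget are assumptions. The key property is that a probe which is slow, raises, or fails is always counted as unhealthy, so a struggling server takes itself out of rotation instead of answering "I'm OK" late.

```python
import concurrent.futures

# Hypothetical sketch, not Mews's real health check: probe names and the
# time budget below are illustrative assumptions.
HEALTH_CHECK_BUDGET_S = 0.5  # hard ceiling: a slow answer must never pass as healthy


def check_health(probes):
    """Run dependency probes under a hard time budget.

    probes: dict mapping a name to a zero-argument callable that returns
    True when that dependency looks usable. A probe that raises, returns
    a falsy value, or fails to answer within the budget counts as failed.
    Returns (healthy, detail) where detail maps each probe name to a bool.
    """
    detail = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(probes)) as pool:
        futures = {pool.submit(probe): name for name, probe in probes.items()}
        done, not_done = concurrent.futures.wait(
            futures, timeout=HEALTH_CHECK_BUDGET_S)
        for future in done:
            try:
                detail[futures[future]] = bool(future.result())
            except Exception:
                detail[futures[future]] = False  # a crashing probe counts as failed
        for future in not_done:
            detail[futures[future]] = False  # timed out: slowness is treated as failure
    return all(detail.values()), detail
```

With this shape, a load balancer that only looks at the HTTP status of a `/health` endpoint gets a fast, unambiguous signal: return 200 when `healthy` is true and 503 otherwise, and the platform can pull the instance from rotation quickly.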

Read the full incident report →

Minor February 4, 2026

Reset 2FA option is missing in Mews Operations

Detected by Pingoru
Feb 04, 2026, 11:12 AM UTC
Resolved
Feb 04, 2026, 03:52 PM UTC
Duration
4h 40m
Affected: Mews Operations
Timeline · 2 updates
  1. identified Feb 04, 2026, 11:12 AM UTC

    The Mews team identified the issue and is working on a resolution. Further updates will follow as we make progress.

  2. resolved Feb 04, 2026, 03:52 PM UTC

    The issue has been resolved.

Read the full incident report →

Notice February 4, 2026

Configuration of age‑based adjustments for products

Detected by Pingoru
Feb 04, 2026, 10:26 AM UTC
Resolved
Feb 04, 2026, 11:01 AM UTC
Duration
35m
Affected: Mews Operations
Timeline · 3 updates
  1. identified Feb 04, 2026, 10:26 AM UTC

    The Mews team identified the issue and is working on a resolution. Further updates will follow as we make progress.

  2. identified Feb 04, 2026, 10:28 AM UTC

    The Mews team has pinpointed the cause and a fix is currently underway. We’re in the final stages of resolving this and will provide another update shortly.

  3. resolved Feb 04, 2026, 11:01 AM UTC

    The issue is resolved: Users are now able to add or modify age-based price adjustments for products charged per person or per night.

Read the full incident report →

Major February 3, 2026

Reservation Group Creation Issue

Detected by Pingoru
Feb 03, 2026, 02:10 PM UTC
Resolved
Feb 03, 2026, 02:35 PM UTC
Duration
24m
Affected: Mews Operations
Timeline · 3 updates
  1. identified Feb 03, 2026, 02:10 PM UTC

    We've identified an issue affecting how reservation groups are created:

    - Availability blocks using "All in one group" pickup distribution are currently failing with a validation error
    - For all other reservation types, new reservation groups are being created incorrectly instead of adding reservations to existing groups when appropriate

    We've identified the root cause and are currently rolling back the change that introduced this issue.

  2. resolved Feb 03, 2026, 02:35 PM UTC

    We have successfully rolled back the change and confirmed that reservation groups are now being created correctly. A page reload is required to apply the changes. Note: Reservations created during the incident may still be in separate reservation groups instead of the intended shared group. Please review reservations created between 10:55 UTC and 14:23 UTC for any regrouping needs.

  3. postmortem Feb 19, 2026, 08:56 AM UTC

    ## Problem

    On February 3, 2026, properties using Mews Operations were unable to add new reservations to availability blocks configured to accept only a single booking group. Any attempt to do so resulted in an error. As a result, staff were unable to create new reservations within those availability blocks for the duration of the incident. In addition, approximately 2,000 reservations that should have been added to an existing booking group were instead incorrectly placed into separate, newly created groups.

    ## Action

    Our team was notified of the issue and began investigating immediately. We identified the source of the problem, verified a fix on our test environment, and restored Mews Operations to the previous working version on production. The issue was fully resolved by 15:03 UTC, with error rates returning to zero shortly after.

    ## Causes

    The incident was triggered by a software update released earlier that day. The update inadvertently caused Mews Operations to stop correctly identifying which booking group a new reservation belonged to. As a result, the system treated each new reservation as belonging to a brand new group, which conflicted with the single-group rule set on those availability blocks.

    ## Solutions

    We restored the previous working version of Mews Operations to immediately stop the impact. A targeted fix addressing the underlying cause has since been deployed. We are also introducing additional automated checks to this area of the system to prevent similar issues from reaching production in the future.

Read the full incident report →

Looking to track Mews downtime and outages?

Pingoru polls Mews's status page every 5 minutes and alerts you the moment it reports an issue — before your customers do.

  • Real-time alerts when Mews reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Mews alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
Start monitoring Mews for free

5 free monitors · No credit card required