- Detected by Pingoru: May 08, 2026, 07:24 PM UTC
- Resolved: May 08, 2026, 08:05 PM UTC
- Duration: 41m
Affected: Uptime (Uptime)
Timeline · 2 updates
- investigating · May 08, 2026, 07:24 PM UTC
We're mitigating an ongoing DDoS attack. Need help? Let us know at [email protected].
- resolved · May 08, 2026, 08:05 PM UTC
The attack has been successfully mitigated by our DDoS protection. Sincere apologies for the inconvenience. Need help? Let us know at [email protected].
Read the full incident report →
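Each incident's listed duration is simply the difference between its Detected and Resolved timestamps. As a quick illustration of that arithmetic (a sketch only, using the timestamp format shown on this page, not part of any status-page tooling):

```python
from datetime import datetime

# Timestamp format used by the entries on this page, e.g. "May 08, 2026, 07:24 PM UTC".
FMT = "%b %d, %Y, %I:%M %p %Z"

def incident_duration(detected: str, resolved: str) -> str:
    """Return a human-readable duration between two status-page timestamps."""
    delta = datetime.strptime(resolved, FMT) - datetime.strptime(detected, FMT)
    hours, rem = divmod(int(delta.total_seconds()), 3600)
    minutes = rem // 60
    return f"{hours}h {minutes}m" if hours else f"{minutes}m"

# The first incident above: detected 07:24 PM, resolved 08:05 PM -> "41m"
print(incident_duration("May 08, 2026, 07:24 PM UTC", "May 08, 2026, 08:05 PM UTC"))
```

Parsing the full date also handles incidents that cross midnight, such as the Apr 25–26, 2025 entry (7h 15m).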
- Detected by Pingoru: Feb 13, 2026, 12:26 AM UTC
- Resolved: Feb 13, 2026, 02:53 AM UTC
- Duration: 2h 27m
Affected: Better Stack (Better Stack)
Timeline · 2 updates
- investigating · Feb 13, 2026, 12:26 AM UTC
Bunny CDN, one of our CDN providers, is experiencing a software bug on their UK-located PoP. We're in touch with them and the issue should be resolved within a few minutes. We apologize for the inconvenience. If you urgently need to use the dashboard, please use a VPN to connect to any US location. Thank you.
- resolved · Feb 13, 2026, 02:53 AM UTC
Bunny CDN resolved the issue. We apologize for the inconvenience.
Read the full incident report →
- Detected by Pingoru: Jan 14, 2026, 03:05 PM UTC
- Resolved: Jan 14, 2026, 06:35 PM UTC
- Duration: 3h 30m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Jan 14, 2026, 03:05 PM UTC
We’re currently seeing delayed data processing for some sources located in our US East cluster. The underlying root cause has been resolved; however, it may take some time for all historical data to fully appear in Live tail. Please note there's no data loss — all incoming data is ingested and will become visible as the system catches up. If you have any questions, feel free to reach out at [email protected].
- resolved · Jan 14, 2026, 06:35 PM UTC
The earlier delay affecting some sources in our US East cluster has been fully resolved. All data has now been processed and is visible in Live tail, with no data loss. If you have any questions, please contact us at [email protected].
Read the full incident report →
- Detected by Pingoru: Nov 26, 2025, 01:20 PM UTC
- Resolved: Nov 26, 2025, 01:55 PM UTC
- Duration: 35m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Nov 26, 2025, 01:20 PM UTC
We're currently seeing elevated query latency and temporary failures in our eu-nbg-2 cluster. Incoming data is being ingested, but processing might be delayed for some of the sources, and some queries might also fail or return incomplete results. Our engineering team is actively working on this; the backlog is already shrinking, and we expect full recovery shortly. We'll keep this page updated as we learn more.
- resolved · Nov 26, 2025, 01:55 PM UTC
Query issues in eu-nbg-2 are now mostly resolved. The ingestion lag has also fully caught up, and data is being processed as expected. We're continuing to monitor the cluster closely and are reviewing the ingestion pipeline to ensure everything is stable and performing normally. Thanks a lot for your patience, and let us know at [email protected] if anything still seems off!
Read the full incident report →
- Detected by Pingoru: Nov 18, 2025, 11:35 AM UTC
- Resolved: Nov 18, 2025, 12:30 PM UTC
- Duration: 55m
Affected: Better Stack (Better Stack), Uptime (Uptime), Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Nov 18, 2025, 11:35 AM UTC
We’re currently experiencing an issue where the Better Stack UI is temporarily inaccessible. Our backend services, monitoring infrastructure, APIs, and alerting are fully operational — however, Cloudflare is undergoing a global outage that is impacting routing and preventing our dashboard from loading. You can track the upstream incident here: https://www.cloudflarestatus.com/incidents/8gmgl950y3h7 We’re actively monitoring the situation and will restore full access as soon as Cloudflare resolves their routing issues. All core functionality — checks, heartbeats, alerts, log ingestion, and metrics — continues to function normally in the background. Thank you for your patience. 🙏
- resolved · Nov 18, 2025, 12:30 PM UTC
Cloudflare has mostly resolved the underlying routing issues related to their global outage, and their systems are now operating normally. With their network stabilized, the Better Stack UI and all related services are fully accessible again. Everything on our side is now running as expected, and we are no longer seeing elevated error rates or degraded performance. We’ll continue to monitor the situation, but no further impact is expected. If you have any questions or notice anything unusual, please feel free to reach out to us at [email protected] — we’re happy to help. Thank you for your patience throughout the incident. 🙏
Read the full incident report →
- Detected by Pingoru: Nov 16, 2025, 01:55 PM UTC
- Resolved: Nov 16, 2025, 07:10 PM UTC
- Duration: 5h 15m
Affected: Uptime (Uptime)
Timeline · 2 updates
- investigating · Nov 16, 2025, 01:55 PM UTC
We're currently seeing intermittent timeout issues affecting some custom status pages. The first errors began appearing around 1:55 PM UTC, with certain requests returning no response across multiple regions. We're looking into this now and will share another update shortly.
- resolved · Nov 16, 2025, 07:10 PM UTC
We've resolved an issue that caused some custom status pages to fail to load or return timeout errors. The problem was related to our certificate-handling workflow, which temporarily prevented certain pages from obtaining or refreshing their SSL certificates. All affected pages are now loading normally again. Thank you very much for your patience and understanding. Please let us know at [email protected] if you have any further questions.
Read the full incident report →
- Detected by Pingoru: Oct 20, 2025, 07:31 AM UTC
- Resolved: Oct 20, 2025, 09:29 AM UTC
- Duration: 1h 58m
Affected: Uptime (Uptime), Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Oct 20, 2025, 07:31 AM UTC
We are seeing elevated error rates in our email delivery service due to an ongoing AWS outage. As a result, some notification emails may be delayed. Uptime monitoring and data processing are fully operational.
- resolved · Oct 20, 2025, 09:29 AM UTC
Email deliveries are back to normal.
Read the full incident report →
- Detected by Pingoru: Oct 01, 2025, 07:41 AM UTC
- Resolved: Oct 01, 2025, 08:58 AM UTC
- Duration: 1h 17m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Oct 01, 2025, 07:41 AM UTC
We’re currently experiencing a network outage in our provider’s data centre that is impacting a portion of queries and causing delays in data ingestion. Our team is actively working with the provider to resolve the issue as quickly as possible. We’ll keep you updated as soon as we have any more details. Thank you for your patience while we work on bringing everything back to normal.
- resolved · Oct 01, 2025, 08:58 AM UTC
The issue has been resolved; all pending data has been ingested, and queries are fully operational.
Read the full incident report →
- Detected by Pingoru: Aug 20, 2025, 11:17 AM UTC
- Resolved: Aug 20, 2025, 08:42 PM UTC
- Duration: 9h 25m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Aug 20, 2025, 11:17 AM UTC
We're currently investigating an issue where logs sent via RSyslog from Digital Ocean are not reaching our ingestion endpoint. This started after a planned migration of the old RSyslog sources to our new cloud infrastructure and only seems to affect the Digital Ocean platform. We're cooperating with the Digital Ocean team on a fix and will provide updates as soon as possible.
- resolved · Aug 20, 2025, 08:42 PM UTC
We identified an issue with DNS resolution between Digital Ocean and Better Stack. This has now been resolved, and logs are being processed normally again.
Read the full incident report →
- Detected by Pingoru: May 28, 2025, 02:50 PM UTC
- Resolved: May 28, 2025, 03:55 PM UTC
- Duration: 1h 5m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · May 28, 2025, 02:50 PM UTC
Some customers are experiencing delays in log ingestion and slower query performance once again. We sincerely apologize for the repeated disruption. Our team is actively working to identify the root cause and will provide further updates as soon as we have more information. Thank you for your patience and understanding.
- resolved · May 28, 2025, 03:55 PM UTC
This issue has now been resolved. Log ingestion delays and query performance have returned to normal for all affected customers. We apologize for the repeated disruptions and appreciate your patience as we worked to restore service. Our team continues to monitor the situation closely to ensure ongoing stability. If you continue to experience any issues, please reach out to us right away.
Read the full incident report →
- Detected by Pingoru: May 27, 2025, 12:50 PM UTC
- Resolved: May 27, 2025, 03:00 PM UTC
- Duration: 2h 10m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · May 27, 2025, 12:50 PM UTC
Some customers are once again experiencing delays in log ingestion and slower query performance. We apologize for the recurrence and understand how disruptive this can be. Our team is actively investigating the underlying cause, and we’ll share more details as soon as we have them. Thank you for your continued patience and understanding!
- resolved · May 27, 2025, 03:00 PM UTC
This issue has now been resolved. Log ingestion and query performance are now stable. We apologize again for the disruption and appreciate your patience while we addressed this recurrence. If you notice any ongoing issues, please reach out to us right away.
Read the full incident report →
- Detected by Pingoru: May 16, 2025, 09:33 AM UTC
- Resolved: May 16, 2025, 12:10 PM UTC
- Duration: 2h 37m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · May 16, 2025, 09:33 AM UTC
We're aware of reports of intermittent delays in log ingestion and slower-than-normal query performance affecting some customers. Our engineering team is actively investigating the root cause and working to restore full performance as soon as possible. We'll post another update once we have more details. Thank you for your patience!
- resolved · May 16, 2025, 12:10 PM UTC
We’ve identified and resolved the root cause of the issue. Log ingestion delays are cleared and query performance has returned to normal. Should you continue to experience any problems, please reach out to us at [email protected]. Thank you for your patience!
Read the full incident report →
- Detected by Pingoru: May 06, 2025, 02:34 PM UTC
- Resolved: May 06, 2025, 02:48 PM UTC
- Duration: 14m
Affected: Uptime (Uptime), Telemetry (Telemetry)
Timeline · 2 updates
- investigating · May 06, 2025, 02:34 PM UTC
We're seeing degraded performance for the Uptime dashboard and Live tail loading times in some regions. Our engineers are looking into it. We apologize for the inconvenience.
- resolved · May 06, 2025, 02:48 PM UTC
All services are back to normal. Sincere apologies for the inconvenience! Please message us at [email protected] if we can help with anything.
Read the full incident report →
- Detected by Pingoru: Apr 25, 2025, 08:50 PM UTC
- Resolved: Apr 26, 2025, 04:05 AM UTC
- Duration: 7h 15m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Apr 25, 2025, 08:50 PM UTC
We're currently experiencing an issue with log processing that may result in slower queries and delayed log ingestion for some users. We're actively investigating and working to resolve the issue as quickly as possible.
- resolved · Apr 26, 2025, 04:05 AM UTC
All delayed ingestion has been cleared; logs and metrics should now be available as normal. Thank you for your patience.
Read the full incident report →
- Detected by Pingoru: Apr 20, 2025, 08:29 PM UTC
- Resolved: Apr 20, 2025, 08:41 PM UTC
- Duration: 12m
Affected: Uptime (Uptime)
Timeline · 2 updates
- investigating · Apr 20, 2025, 08:29 PM UTC
Uptime and status page performance is degraded in some regions. Our engineers are looking into it. We apologize for the inconvenience.
- resolved · Apr 20, 2025, 08:41 PM UTC
All services are back to normal. Sincere apologies for the inconvenience! Please message us at [email protected] if we can help with anything.
Read the full incident report →
- Detected by Pingoru: Mar 08, 2025, 11:15 AM UTC
- Resolved: Mar 08, 2025, 11:49 AM UTC
- Duration: 34m
Affected: Uptime (Uptime), Uptime (Uptime backend processing health)
Timeline · 2 updates
- investigating · Mar 08, 2025, 11:15 AM UTC
The Uptime dashboard is unavailable in certain regions due to a hardware failure. We're performing a failover.
- resolved · Mar 08, 2025, 11:49 AM UTC
We've successfully performed a hardware failover. As a result of this incident, we'll be migrating our Redis cluster to new hardware over the coming weeks. Sincere apologies for the inconvenience! Please let us know at [email protected] if you have any questions.
Read the full incident report →
- Detected by Pingoru: Mar 07, 2025, 06:10 AM UTC
- Resolved: Mar 07, 2025, 08:28 AM UTC
- Duration: 2h 18m
Affected: Uptime (Uptime backend processing health)
Timeline · 2 updates
- investigating · Mar 07, 2025, 06:10 AM UTC
Some customers might not be receiving phone call alerts due to an incident with our voice provider. We have all hands on deck. Please let us know at [email protected] if we can help.
- resolved · Mar 07, 2025, 08:28 AM UTC
All alerts should now work again for all customers. We apologize for the inconvenience and will re-evaluate our primary voice call provider going forward. Have questions? Please let us know at [email protected].
Read the full incident report →
- Detected by Pingoru: Feb 06, 2025, 10:34 AM UTC
- Resolved: Feb 06, 2025, 10:38 AM UTC
- Duration: 4m
Affected: Telemetry (Telemetry)
Timeline · 2 updates
- investigating · Feb 06, 2025, 10:34 AM UTC
One of our object storage providers is experiencing availability issues, causing some queries to temporarily not load. There's no delay in processing logs or metrics data.
- resolved · Feb 06, 2025, 10:38 AM UTC
All services are back to normal. Sincere apologies for the inconvenience!
Read the full incident report →
- Detected by Pingoru: Jan 09, 2025, 12:04 PM UTC
- Resolved: Jan 09, 2025, 12:31 PM UTC
- Duration: 27m
Affected: Uptime (Uptime backend processing health)
Timeline · 2 updates
- investigating · Jan 09, 2025, 12:04 PM UTC
Uptime and status page performance is degraded in some regions. Our engineers are looking into it.
- resolved · Jan 09, 2025, 12:31 PM UTC
All services are back to normal. Sincere apologies for the inconvenience! Please message us at [email protected] if we can help.
Read the full incident report →
- Detected by Pingoru: Dec 09, 2024, 11:44 AM UTC
- Resolved: Dec 09, 2024, 11:49 AM UTC
- Duration: 5m
Affected: Uptime (Uptime backend processing health)
Timeline · 2 updates
- investigating · Dec 09, 2024, 11:44 AM UTC
Dashboard & API performance has been reported as degraded in some regions. We're looking into it and should be back to full health soon.
- resolved · Dec 09, 2024, 11:49 AM UTC
The degradation has been resolved.
Read the full incident report →