- Detected by Pingoru
- Apr 02, 2026, 10:29 AM UTC
- Resolved
- Apr 02, 2026, 05:40 PM UTC
- Duration
- 7h 11m
Affected: Twitter Ads
Timeline · 5 updates
-
identified Apr 02, 2026, 10:20 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 02, 2026, 10:29 AM UTC
We have identified that this issue was caused by the 'campaigns' endpoint, which is returning HTTP 500 (Internal Server Error) responses. We are looking into potential workarounds to address these failures and will provide further updates as soon as more information is available.
-
monitoring Apr 02, 2026, 02:40 PM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Apr 02, 2026, 02:51 PM UTC
A fix has been implemented and the connections are recovering from the failure. We'll continue to monitor the results.
-
resolved Apr 02, 2026, 05:40 PM UTC
We have resolved this incident.
- Detected by Pingoru
- Apr 01, 2026, 07:30 PM UTC
- Resolved
- Apr 01, 2026, 11:19 PM UTC
- Duration
- 3h 49m
Affected: Reddit Ads
Timeline · 4 updates
-
identified Apr 01, 2026, 07:30 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 01, 2026, 07:50 PM UTC
We are seeing HTTP 500 (Internal Server Error) responses from Reddit’s Ads API when attempting to create reporting jobs. This occurs on the report creation endpoint and is affecting multiple connectors.
-
monitoring Apr 01, 2026, 10:26 PM UTC
The issue has been resolved on Reddit's side, and a hotfix has also been released to avoid sync failures due to 500 errors. We will continue monitoring for any further failures.
-
resolved Apr 01, 2026, 11:19 PM UTC
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Reddit Ads which resulted in syncs failing with an "HTTP 500 (Internal Server Error)" error.
Timeline: This issue began on April 1st at 16:00 UTC and was resolved on the same day at 21:30 UTC.
Cause: The Reddit Ads API started returning 500 failures to connector requests.
Resolution: The 500 errors were resolved source-side, and a hotfix has been implemented to catch similar failures in the future.
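The report does not describe the hotfix beyond "catch similar failures," but the usual pattern is to treat transient HTTP 500s as retryable a bounded number of times before surfacing a failure. A minimal Java sketch of that pattern, with the class name, attempt limit, and delays as illustrative assumptions rather than Fivetran's actual implementation:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustrative only: retry transient 5xx responses from a reporting endpoint
// a bounded number of times, so a brief source-side outage delays the sync
// instead of failing it outright.
public class TransientErrorRetry {
    private static final int MAX_ATTEMPTS = 3;

    public static HttpResponse<String> fetchWithRetry(HttpClient client, URI uri)
            throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(uri)
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();
        for (int attempt = 1; ; attempt++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() < 500) {
                return response; // success, or a non-retryable client error
            }
            if (attempt >= MAX_ATTEMPTS) {
                // Give up for this run; the caller can reschedule the sync.
                throw new IOException("HTTP " + response.statusCode()
                        + " after " + attempt + " attempts");
            }
            Thread.sleep(Duration.ofSeconds(5L * attempt).toMillis());
        }
    }
}
```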
- Detected by Pingoru
- Apr 01, 2026, 04:13 PM UTC
- Resolved
- Apr 01, 2026, 01:30 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 01, 2026, 04:13 PM UTC
This incident has been resolved. We observed error rates returning to normal levels, and all affected connections are now running successfully.
Incident Summary
Description: We identified an issue where GitHub connection syncs were intermittently failing with the error: "No server is currently available to service your request."
Timeline: The issue began on April 1st at 10:00 UTC and was resolved by 13:00 UTC.
Cause: The failures were caused by intermittent unavailability of the GitHub API, which led to sync disruptions.
Resolution: The issue was resolved automatically once the GitHub API resumed normal operation.
- Detected by Pingoru
- Mar 31, 2026, 03:40 PM UTC
- Resolved
- Mar 31, 2026, 04:47 PM UTC
- Duration
- 1h 6m
Affected: Pipedrive
Timeline · 3 updates
-
identified Mar 31, 2026, 03:40 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 31, 2026, 04:15 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 31, 2026, 04:47 PM UTC
Description: We identified an issue affecting the Pipedrive connection, which resulted in syncs failing with the error: Null exception: Cannot invoke "java.time.temporal.Temporal.until(java.time.temporal.Temporal, java.time.temporal.TemporalUnit)" because "startInclusive" is null
Timeline: This issue began on March 31, 2026 at 11:06 UTC and was resolved on March 31, 2026 at 16:20 UTC.
Cause: A configuration issue impacted existing connections.
Resolution: We applied a fix to improve handling and prevent the issue from recurring.
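The error text is the NullPointerException that java.time.Duration.between(startInclusive, endExclusive) raises when its first argument is null, for example when a connection's stored sync cursor was never initialized. A minimal sketch of that failure mode and a defensive guard; the cursor field and fallback are hypothetical, and the actual fix is not described in the report:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative only: Duration.between dereferences "startInclusive", so a
// null cursor reproduces the exception quoted above. Guarding with a
// fallback start keeps the computation from throwing.
public class CursorGuard {
    public static Duration elapsedSince(Instant lastSyncedAt) {
        // Duration.between(null, Instant.now()) throws:
        //   Cannot invoke "java.time.temporal.Temporal.until(...)"
        //   because "startInclusive" is null
        Instant start = (lastSyncedAt != null) ? lastSyncedAt : Instant.EPOCH;
        return Duration.between(start, Instant.now());
    }
}
```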
- Detected by Pingoru
- Mar 30, 2026, 10:00 PM UTC
- Resolved
- Mar 31, 2026, 05:25 PM UTC
- Duration
- 19h 24m
Affected: Twitter Organic, Twitter Ads
Timeline · 6 updates
-
identified Mar 30, 2026, 05:55 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 30, 2026, 06:09 PM UTC
The issue appears to be related to missing rate limit headers in the response. As a result, requests are repeatedly hitting HTTP 429 (rate limit) and, in some cases, resulting in HTTP 503 errors. We have reached out to X support, and X has also acknowledged degraded performance on their status page: https://docs.x.com/status
-
identified Mar 31, 2026, 03:56 AM UTC
We are still seeing the issue at the source end and are continuing to monitor their status page. We have reached out to X support, and X has acknowledged degraded performance on their status page: https://docs.x.com/status.
-
identified Mar 31, 2026, 11:56 AM UTC
We are continuing to monitor the status page, as there have been no new updates from X. In the meantime, we are exploring potential workarounds to help mitigate the issue until it is fully resolved on the source side.
-
monitoring Mar 31, 2026, 03:54 PM UTC
The issue appears to have been resolved from the source side. Rate limit errors have subsided, and affected connectors have returned to normal sync operations.
-
resolved Mar 31, 2026, 05:25 PM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the Twitter Organic and Twitter Ads services are operating as expected.
Incident Summary
Description: We identified an issue affecting multiple Twitter Organic and Twitter Ads connections, which were failing with the error: "HTTP 503 Service Unavailable."
Timeline: The issue began on March 30th at 00:00 UTC and was resolved on March 31st at 16:30 UTC.
Cause: The X API stopped returning rate limit information in the response headers. This caused Twitter Organic and Twitter Ads connections to encounter 429 errors, leading to sync failures.
Resolution: X has resolved their API issue. Additionally, we deployed a hotfix to ensure that if the source hits account-level rate limits and does not return response headers, it will no longer cause sync failures. Instead, the connections will be automatically rescheduled.
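As a rough illustration of the rescheduling behavior the hotfix describes, the sketch below reads an X-style rate-limit reset header and falls back to a fixed delay when the header is missing, rather than treating its absence as fatal. The header name follows the X API convention; the default backoff value and class are assumptions, not Fivetran's actual implementation:

```java
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;

// Illustrative only: derive a retry delay from the rate-limit headers, and
// reschedule after a default interval when the source omits them (the
// failure mode in this incident).
public class RateLimitHandler {
    private static final Duration DEFAULT_BACKOFF = Duration.ofMinutes(15);

    public static Duration delayUntilRetry(HttpResponse<?> response) {
        Optional<String> reset = response.headers().firstValue("x-rate-limit-reset");
        if (reset.isEmpty()) {
            // Headers missing: do not fail the sync, just wait a default interval.
            return DEFAULT_BACKOFF;
        }
        Instant resetAt = Instant.ofEpochSecond(Long.parseLong(reset.get()));
        Duration untilReset = Duration.between(Instant.now(), resetAt);
        return untilReset.isNegative() ? Duration.ZERO : untilReset;
    }
}
```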
- Detected by Pingoru
- Mar 30, 2026, 10:15 AM UTC
- Resolved
- Mar 30, 2026, 10:15 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 30, 2026, 10:43 AM UTC
Description: We identified an issue affecting the Criteo connector, which resulted in syncs failing with 503 Service Unavailable and 504 Gateway Time-out errors.
Timeline: This issue began on March 30, 2026 at 08:17 UTC and was resolved on March 30, 2026 at 09:17 UTC.
Cause: The failures were caused by an outage on the third-party side, as confirmed on their official status page: https://status.criteo.com/incidents/dj24tx2vf4w3
Resolution: The issue was resolved after the third party restored their services. Syncs resumed normal operation once the upstream outage was mitigated.
- Detected by Pingoru
- Mar 26, 2026, 04:05 PM UTC
- Resolved
- Mar 26, 2026, 05:12 PM UTC
- Duration
- 1h 7m
Affected: General Services
Timeline · 3 updates
-
identified Mar 26, 2026, 04:05 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 26, 2026, 05:00 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 26, 2026, 05:12 PM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the service is operating as expected.
Incident Summary
Description: We identified an issue with OAuth authentication that was affecting multiple connections, resulting in 404 errors.
Timeline: The issue began on 26/03/2026 at 09:09 UTC and was resolved on 26/03/2026 at 16:42 UTC.
Cause: During an ingress upgrade, an error caused OAuth callback requests to return HTTP 404 responses. This led to authentication failures for connections relying on OAuth.
Resolution: The change was reverted, which restored normal OAuth functionality for Fivetran connections.
- Detected by Pingoru
- Mar 26, 2026, 10:00 AM UTC
- Resolved
- Mar 26, 2026, 10:00 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 26, 2026, 02:00 PM UTC
Incident Summary
Description: We identified an issue where some Fivetran REST API requests were timing out.
Timeline: The issue began on 26/03/2026 at 10:17 UTC and was resolved on 26/03/2026 at 12:05 UTC.
Cause: The issue was caused by resource constraints in our infrastructure, which led to timeouts when resources were unavailable.
Resolution: We increased resource limits to mitigate the issue, and we have not seen connection failures since that change.
- Detected by Pingoru
- Mar 26, 2026, 05:43 AM UTC
- Resolved
- Mar 26, 2026, 08:10 AM UTC
- Duration
- 2h 27m
Affected: General Services, Google Cloud SQL for PostgreSQL, PostgreSQL
Timeline · 4 updates
-
identified Mar 26, 2026, 06:20 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 26, 2026, 06:24 AM UTC
We have identified the root cause of the current errors as a recent code change. Our engineering team is currently developing a fix to restore full service.
-
monitoring Mar 26, 2026, 08:10 AM UTC
We have released a fix to resolve the identified issues. Affected services are beginning to recover, and we are closely monitoring the system to ensure continued stability.
-
resolved Mar 26, 2026, 08:10 AM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the service is operating as expected.
Incident Summary
Description: We identified an issue where PostgreSQL connections using RDS Proxy were failing with the error: "The source database does not support a provided lock_timeout parameter."
Timeline: The issue began on 26/03/2026 at 05:40 UTC and was resolved on 26/03/2026 at 07:45 UTC.
Cause: The issue was caused by a bug introduced in a recent Fivetran code change.
Resolution: The bug was fixed and the connection services were restored.
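The error message suggests the connector treated a rejected lock_timeout session parameter as fatal. One tolerant alternative, sketched below in JDBC, is to attempt the SET and continue when the server (or a proxy in front of it, as with RDS Proxy here) rejects it; this is a hedged illustration of the pattern, not Fivetran's actual fix:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative only: lock_timeout is a safety optimization, so a source that
// rejects it should not fail the whole sync.
public class SessionSetup {
    public static void applyLockTimeout(Connection conn, int millis) {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("SET lock_timeout = " + millis); // milliseconds in PostgreSQL
        } catch (SQLException e) {
            // Log and continue rather than aborting the sync.
            System.err.println("lock_timeout not supported, continuing: " + e.getMessage());
        }
    }
}
```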
- Detected by Pingoru
- Mar 26, 2026, 04:35 AM UTC
- Resolved
- Mar 26, 2026, 08:25 AM UTC
- Duration
- 3h 49m
Affected: Fivetran API
Timeline · 5 updates
-
identified Mar 26, 2026, 04:35 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 26, 2026, 05:47 AM UTC
We are continuing to investigate the root cause of the issue. Our Engineering team is actively working to identify the underlying problem and will share further updates as more information becomes available. The team is also reviewing recent changes to determine if they may have contributed to the issue.
-
identified Mar 26, 2026, 06:47 AM UTC
We have identified the root cause of the issue causing 404 errors in the Fivetran REST API and are actively working on a fix. We will continue to share updates as we make progress.
-
monitoring Mar 26, 2026, 07:05 AM UTC
We have deployed a fix for the issue causing 404 errors in the Fivetran REST API. We will continue to monitor the service to ensure normal operation. If you continue to experience any issues, please reach out to our Support Team.
-
resolved Mar 26, 2026, 08:25 AM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the service is operating as expected.
Incident Summary
Description: We identified an issue with the Fivetran REST API that caused certain endpoint calls to fail with a 404 error.
Timeline: The issue began on 25/03/2026 at 14:04 UTC and was resolved on 26/03/2026 at 06:55 UTC.
Cause: The issue was caused by a recent change on our end.
Resolution: The change was reverted, restoring normal API functionality.
- Detected by Pingoru
- Mar 26, 2026, 03:30 AM UTC
- Resolved
- Mar 26, 2026, 01:40 PM UTC
- Duration
- 10h 10m
Affected: Twitter Ads
Timeline · 7 updates
-
identified Mar 26, 2026, 03:30 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 26, 2026, 04:09 AM UTC
The issue has been identified as being caused by a problem on the third-party side. They have acknowledged it and posted an update on their status page: https://docs.x.com/status. We are closely monitoring the situation and will provide further updates as soon as more information becomes available.
-
identified Mar 26, 2026, 05:56 AM UTC
The third-party is still working on this incident, and there have been no further updates on their status page at this time. We are continuing to closely monitor the situation and will keep you informed as soon as we receive any new information.
-
identified Mar 26, 2026, 08:35 AM UTC
We have reached out to the third-party to obtain the latest status on this issue and are currently awaiting their response. Additionally, we have started observing "HTTP 500 – Internal Server Error" for a few connections. At this time, there are no further updates available on their status page. We are continuing to closely monitor the situation and will share updates as soon as we receive more information.
-
monitoring Mar 26, 2026, 10:40 AM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Mar 26, 2026, 11:41 AM UTC
Many Twitter and Twitter Ads connections have recovered, and the rate of severe errors has dropped significantly over the past 2 hours. We are still seeing intermittent 500 and 503 errors for a small subset of connections. X (Twitter) has not yet marked the incident as resolved on their status page, and we will continue to monitor for full recovery. https://docs.x.com/status
-
resolved Mar 26, 2026, 01:40 PM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the service is operating as expected.
Incident Summary
Description: We identified an issue affecting multiple Twitter and Twitter Ads connections, which were failing with the error "HTTP 503 Service Unavailable".
Timeline: The issue began on 26/03/2026 at 01:00 UTC and was resolved on 26/03/2026 at 14:30 UTC.
Cause: A third-party issue caused Twitter and Twitter Ads connections to fail.
Resolution: X (Twitter) has fixed their API.
- Detected by Pingoru
- Mar 25, 2026, 01:20 AM UTC
- Resolved
- Mar 25, 2026, 02:59 AM UTC
- Duration
- 1h 38m
Affected: Destinations
Timeline · 3 updates
-
identified Mar 25, 2026, 01:20 AM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 25, 2026, 01:37 AM UTC
Connections syncing to BigQuery destinations were impacted by an internal configuration error. The issue has now been resolved, and we are monitoring syncs.
-
resolved Mar 25, 2026, 02:59 AM UTC
This incident has been resolved. We have observed that error rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue causing sync failures, with multiple connectors in the BigQuery destination failing due to a "GCS_BG_STORAGE" error.
Timeline: This issue began on March 25, 2026, at 00:18 UTC and was resolved on March 25, 2026, at 01:49 UTC.
Cause: Connections syncing to BigQuery destinations were impacted by an internal configuration error.
Resolution: The necessary steps were taken to resolve the internal configuration issue, after which error rates returned to normal and the affected connectors resumed syncing successfully.
- Detected by Pingoru
- Mar 24, 2026, 05:30 AM UTC
- Resolved
- Mar 24, 2026, 05:30 AM UTC
- Duration
- —
Affected: Mixpanel
Timeline · 3 updates
-
identified Mar 24, 2026, 05:30 AM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 24, 2026, 05:30 AM UTC
The issue was caused by an intermittent problem on the third-party side, which was resolved automatically. We are monitoring the sync.
-
resolved Mar 24, 2026, 05:30 AM UTC
We have resolved this incident.
Description: We identified an issue where multiple Mixpanel connections were failing their syncs with a socket timeout error due to an issue on the third-party side.
Timeline: The incident started on 2026-03-23 at 18:50 UTC and ended on 2026-03-23 at 19:15 UTC.
Cause: The issue was caused by an intermittent problem on the third-party side.
Resolution: The issue was automatically resolved from the third-party side without any intervention from the Fivetran end.
- Detected by Pingoru
- Mar 23, 2026, 08:10 PM UTC
- Resolved
- Mar 23, 2026, 11:40 PM UTC
- Duration
- 3h 29m
Affected: Box
Timeline · 3 updates
-
identified Mar 23, 2026, 08:10 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 23, 2026, 08:40 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 23, 2026, 11:40 PM UTC
Description: Some Box connections unexpectedly failed their sync runs, but recovered on subsequent runs.
Timeline: The incident started on 2026-03-23 at 16:30 UTC and ended on 2026-03-23 at 18:00 UTC.
Cause: The issue appears to have been caused by a temporary error in our credential handling service; we are continuing to investigate post-resolution.
Resolution: Affected connections recovered on subsequent sync runs, and syncs are succeeding normally.
- Detected by Pingoru
- Mar 22, 2026, 07:40 AM UTC
- Resolved
- Mar 22, 2026, 08:40 AM UTC
- Duration
- 1h
Affected: General Services
Timeline · 2 updates
-
identified Mar 22, 2026, 07:40 AM UTC
We identified an issue with the itunes_connect connections, where syncs were failing with an "Unknown failure" error.
-
resolved Mar 22, 2026, 08:40 AM UTC
Description: We identified an issue with Apple App Store Connect that caused sync failures due to intermittent unavailability of the App Store Connect Reporter.
Timeline: This issue began on 22 March 2026 at 06:17 UTC and was resolved on 22 March 2026 at 07:14 UTC.
Cause: The issue was caused by a temporary API failure on the App Store Connect side.
Resolution: The App Store Connect Reporter is now available, and syncs are succeeding normally.
- Detected by Pingoru
- Mar 21, 2026, 09:37 AM UTC
- Resolved
- Mar 21, 2026, 10:30 AM UTC
- Duration
- 53m
Affected: Destinations, General Services
Timeline · 5 updates
-
identified Mar 21, 2026, 07:05 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 21, 2026, 07:07 AM UTC
Sync failures were caused by an ongoing Snowflake issue impacting Snowflake destinations. Snowflake has reported recovery for US regions, and we are monitoring to ensure all syncs return to normal. More details: https://status.snowflake.com/
-
monitoring Mar 21, 2026, 07:30 AM UTC
Snowflake has reported recovery for US regions, and we are monitoring to ensure all syncs return to normal. More details: https://status.snowflake.com/
-
monitoring Mar 21, 2026, 09:38 AM UTC
We are continuing to monitor to ensure all syncs return to normal. More details: https://status.snowflake.com/
-
resolved Mar 21, 2026, 10:30 AM UTC
This incident has been resolved. We have observed that error rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue where multiple connection syncs to the Snowflake destination were failing due to Snowflake downtime.
Timeline: This issue began on 21-03-2026 at 05:26:30 UTC and was resolved on 21-03-2026 at 06:17:30 UTC.
Cause: An external Snowflake incident affected the impacted AWS region(s), leading to elevated Snowflake exceptions and downstream sync failures.
Resolution: The issue was resolved on the Snowflake side, and connections are now syncing successfully.
- Detected by Pingoru
- Mar 21, 2026, 05:58 AM UTC
- Resolved
- Mar 21, 2026, 05:58 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 21, 2026, 05:58 AM UTC
Description: We identified an issue for HubSpot which resulted in syncs failing due to increased API rate limiting.
Timeline: This issue began on 20th March, 2026 at 20:45 UTC and was resolved on 20th March, 2026 at 23:00 UTC.
Cause: A recent update intended to improve how our connection manages HubSpot authentication led to an unintended increase in how frequently access tokens were refreshed. This resulted in a higher number of authentication requests than expected, which triggered HubSpot's rate limits and caused some syncs to fail.
Resolution: We rolled back the change, which immediately resolved the issue. Syncs are now succeeding normally.
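The cause describes access tokens being refreshed far more often than necessary. The conventional pattern the rollback restored is to cache a token until shortly before it expires; a minimal sketch follows, with the TTL, refresh margin, and refresh() placeholder as assumptions rather than HubSpot's or Fivetran's actual values:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative only: reuse a cached OAuth access token until near expiry so
// the auth endpoint is hit once per token lifetime, not once per request.
public class TokenCache {
    private static final Duration REFRESH_MARGIN = Duration.ofMinutes(5);

    private String accessToken;
    private Instant expiresAt = Instant.EPOCH;

    public synchronized String getToken() {
        if (Instant.now().isAfter(expiresAt.minus(REFRESH_MARGIN))) {
            accessToken = refresh();                              // one network call
            expiresAt = Instant.now().plus(Duration.ofHours(1));  // from the token's TTL
        }
        return accessToken; // cached on the hot path
    }

    private String refresh() {
        // Placeholder for the real OAuth refresh-token exchange.
        return "new-access-token";
    }
}
```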
- Detected by Pingoru
- Mar 20, 2026, 07:30 PM UTC
- Resolved
- Mar 20, 2026, 10:40 PM UTC
- Duration
- 3h 9m
Affected: General Services
Timeline · 3 updates
-
identified Mar 20, 2026, 07:30 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 20, 2026, 07:40 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 20, 2026, 10:40 PM UTC
We have resolved this incident.
- Detected by Pingoru
- Mar 18, 2026, 06:00 PM UTC
- Resolved
- Mar 18, 2026, 02:30 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 18, 2026, 06:00 PM UTC
Incident Summary
Description: A few Amazon S3 connections experienced intermittent failures due to an AWS-side issue, with the error: "Unable to execute HTTP request."
Timeline: The issue began on March 18, 2026, at 10:30 UTC and was resolved by 13:40 UTC.
Root Cause: The intermittent sync failures were caused by AWS-side timeout issues.
Resolution: The issue was automatically resolved once connectivity from the AWS side was restored.
- Detected by Pingoru
- Mar 18, 2026, 05:50 AM UTC
- Resolved
- Mar 18, 2026, 06:49 AM UTC
- Duration
- 59m
Affected: Snapchat Ads
Timeline · 3 updates
-
identified Mar 18, 2026, 05:50 AM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Mar 18, 2026, 06:13 AM UTC
We experienced an intermittent issue with a third-party API service that resulted in 500 errors. Connections are now recovering, and we are continuing to closely monitor sync performance.
-
resolved Mar 18, 2026, 06:49 AM UTC
This incident has been resolved. We observed that Snapchat Ads connection sync success rates have returned to normal levels, and syncs are now running as expected.
Description: We identified an issue where Snapchat Ads connections started failing with the error "500 Internal Server Error".
Timeline: The issue began on March 18, 2026, at 04:03 UTC and was resolved on March 18, 2026, at 05:01 UTC.
Cause: The issue was caused by an intermittent problem with a third-party API service that resulted in 500 errors.
Resolution: No changes were required on our side. The issue was automatically resolved on the third-party side, and syncs are back to normal.
- Detected by Pingoru
- Mar 17, 2026, 06:18 PM UTC
- Resolved
- Mar 17, 2026, 06:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 17, 2026, 06:18 PM UTC
This incident has been resolved. We observed that HubSpot connector sync success rates have returned to normal levels, and syncs are now running as expected.
Description: We identified an issue where some HubSpot connector syncs were failing with the error: "Unknown failure."
Timeline: The issue began on March 17, 2026, at 17:10 UTC and was resolved at 17:26 UTC.
Cause: The issue was caused by timeouts while attempting to reach HubSpot's OAuth endpoint.
Resolution: No changes were required on our side. The issue was automatically resolved once connectivity to HubSpot's OAuth endpoint was restored.
- Detected by Pingoru
- Mar 16, 2026, 11:23 PM UTC
- Resolved
- Mar 16, 2026, 11:21 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 16, 2026, 11:23 PM UTC
Description: We identified an issue with our public REST API that caused API calls to fail with a "column api.disabled_at does not exist" exception.
Timeline: This issue began at 22:15 UTC on March 16, 2026 and was resolved at 22:50 UTC on March 16, 2026.
Cause: A database migration change caused the issue.
Resolution: A fix was implemented to address the migration issue, after which error rates returned to normal and the Fivetran public REST API resumed normal operation.
- Detected by Pingoru
- Mar 15, 2026, 07:35 AM UTC
- Resolved
- Mar 15, 2026, 12:12 PM UTC
- Duration
- 4h 36m
Affected: Transformations, General Services
Timeline · 4 updates
-
identified Mar 15, 2026, 07:35 AM UTC
We have identified an issue where multiple connection syncs and transformations are delayed.
-
identified Mar 15, 2026, 09:00 AM UTC
We are continuing to work on a fix for this issue.
-
monitoring Mar 15, 2026, 09:56 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 15, 2026, 12:12 PM UTC
This incident has been resolved. We have observed that error rates are returning to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue that caused delays in connection syncs and transformations. During this time, new syncs in AWS and Azure were delayed, while ongoing syncs continued to run normally. In the GCP regions, ongoing syncs failed, and new syncs were delayed.
Timeline: This issue began on March 15, 2026, at 06:20 UTC and was resolved on March 15, 2026, at 11:00 UTC.
Cause: A recent infrastructure change caused the issue.
Resolution: A fix was implemented to address the infrastructure issue, after which error rates returned to normal and affected connectors resumed syncing successfully.
- Detected by Pingoru
- Mar 14, 2026, 10:55 PM UTC
- Resolved
- Mar 15, 2026, 05:01 AM UTC
- Duration
- 6h 6m
Affected: Qualtrics
Timeline · 4 updates
-
identified Mar 14, 2026, 10:55 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Mar 14, 2026, 11:12 PM UTC
We identified an issue where the Distribution Contact endpoint was retrying requests too frequently, which caused some syncs to fail with rate limit errors (429). A hotfix has been raised to adjust the retry behavior so that retries are spaced out more effectively. We will continue to monitor the syncs after deployment.
-
monitoring Mar 14, 2026, 11:22 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 15, 2026, 05:01 AM UTC
This incident has been resolved. We have observed that error rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Qualtrics which resulted in syncs failing with HTTP error code 429.
Timeline: This issue began on 10th March at around 10:00 PM UTC and was resolved on 14th March at 10:30 PM UTC.
Cause: The issue was caused by third-party 429 responses from the Distribution History endpoint, which started spiking after 10th March at around 10:00 PM UTC. We suspect that Qualtrics enforced additional rate limits at the app level, in addition to the brand-level API limit of 300 requests per minute.
Resolution: We increased the number of retries and added exponential sleeps between API calls to reduce the load on the Qualtrics servers.
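The resolution names two standard mitigations: more retry attempts and exponentially growing sleeps between calls. A minimal Java sketch of that pattern with jitter; the base delay, cap, and exception type are illustrative assumptions rather than the values actually deployed:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Illustrative only: retry a rate-limited call with exponential backoff and
// jitter, so retries spread out instead of hammering the source in lockstep.
public class Backoff {
    public static <T> T callWithBackoff(Supplier<T> call, int maxAttempts)
            throws InterruptedException {
        long delayMs = 1_000; // base delay
        for (int attempt = 1; ; attempt++) {
            try {
                return call.get();
            } catch (RateLimitedException e) { // i.e. an HTTP 429 response
                if (attempt >= maxAttempts) throw e;
                long jitter = ThreadLocalRandom.current().nextLong(delayMs / 2);
                Thread.sleep(Math.min(delayMs + jitter, 60_000)); // capped sleep
                delayMs *= 2; // exponential growth
            }
        }
    }

    public static class RateLimitedException extends RuntimeException {}
}
```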
- Detected by Pingoru
- Mar 14, 2026, 09:00 AM UTC
- Resolved
- Mar 14, 2026, 12:11 PM UTC
- Duration
- 3h 11m
Affected: HubSpot
Timeline · 4 updates
-
identified Mar 14, 2026, 09:00 AM UTC
We identified an issue with the HubSpot connections where syncs were failing with an HTTP 477 RefreshCredentials error.
-
identified Mar 14, 2026, 10:15 AM UTC
A recent migration of the Fivetran public app by HubSpot caused the issue. A hotfix is being deployed to resolve it.
-
monitoring Mar 14, 2026, 10:45 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Mar 14, 2026, 12:11 PM UTC
This incident has been resolved, and we have observed that syncs are now completing successfully.
Incident Summary
Description: We identified an issue where some HubSpot connector syncs were failing with the error: Failed to sync 1 endpoint(s) with error: {CAPTURE_DELETES_USING_WEBHOOKS=com.fivetran.platform.interfaces.security.credentials.RefreshCredentialsException: HTTP 477 }
Timeline: The issue began on March 14, 2026, at 09:01 UTC and was resolved on March 14, 2026, at 10:43 UTC.
Cause: HubSpot's recent migration of the Fivetran public app caused the issue.
Resolution: A fix was implemented to resolve the issue.