- Detected by Pingoru
- May 01, 2026, 01:10 AM UTC
- Resolved
- May 01, 2026, 04:03 AM UTC
- Duration
- 2h 53m
Affected: HubSpot
Timeline · 3 updates
-
identified May 01, 2026, 01:10 AM UTC
The issue has been identified and we are working to resolve it.
-
monitoring May 01, 2026, 03:16 AM UTC
We have identified that the issue is related to the events endpoint and have reached out to HubSpot support for more details on the failure. The failure rate is decreasing without any changes on the Fivetran side, and we are monitoring the syncs.
-
resolved May 01, 2026, 04:03 AM UTC
Instance rates have returned to normal levels, and affected connections are now syncing successfully.
Incident Summary
Description: We identified an issue impacting HubSpot connections, where syncs started failing with the error "Your HubSpot source is not responding at this time. We'll retry the sync later."
Timeline: The issue began on May 1, 2026 at 00:10 UTC and was resolved on May 1, 2026 at 03:30 UTC.
Cause: The failures were caused by intermittent 502/504 errors returned by the HubSpot events API endpoint, which led to sync disruptions.
Resolution: The issue resolved automatically without any intervention from the Fivetran side.
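The pattern described above — intermittent 502/504 responses that clear on their own — is typically absorbed by bounded retries with backoff rather than an immediate sync failure. Below is a minimal, illustrative Java sketch of that approach; the endpoint URL, retry budget, and backoff values are assumptions, not Fivetran's actual connector code.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustrative only: bounded retry with exponential backoff for transient
// 502/504 responses, the failure mode described in this incident.
public class TransientRetrySketch {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    static HttpResponse<String> getWithRetry(String url, int maxAttempts)
            throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();
        long backoffMillis = 1_000;
        for (int attempt = 1; ; attempt++) {
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            int status = response.statusCode();
            // 502/504 are treated as transient; anything else is returned as-is.
            if ((status != 502 && status != 504) || attempt == maxAttempts) {
                return response;
            }
            Thread.sleep(backoffMillis);
            backoffMillis *= 2; // exponential backoff between attempts
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint, standing in for the HubSpot events endpoint.
        HttpResponse<String> r = getWithRetry("https://api.example.com/events", 5);
        System.out.println("Final status: " + r.statusCode());
    }
}
```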
Read the full incident report →
- Detected by Pingoru
- Apr 30, 2026, 06:40 PM UTC
- Resolved
- Apr 30, 2026, 05:30 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 30, 2026, 06:40 PM UTC
Incident Summary
Description: We identified an issue where services were intermittently failing with the error "UNAUTHENTICATED: Token", resulting in connector sync failures during specific time windows.
Timeline: This issue began on 2026-04-30 at 10:00 UTC and was resolved on 2026-04-30 at 16:30 UTC. Customers might have experienced failures around scheduled deployment times (10:00, 13:00, and 16:00 UTC), with recovery shortly after.
Root Cause: A recent internal change led to intermittent timeouts during authentication token validation, which triggered "UNAUTHENTICATED: Token" errors and caused some syncs to fail.
Resolution: We implemented a fix to ensure sufficient time for token validation, preventing further sync failures. All services are now operating normally.
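The root cause above is a timeout during token validation being surfaced as an authentication failure. As a hedged illustration of the distinction (not Fivetran's implementation), the sketch below validates a token under an explicit deadline and treats a timeout as a retryable condition rather than an invalid token; validateToken is a hypothetical stand-in.

```java
import java.util.concurrent.*;

// Minimal sketch, not Fivetran's code: distinguish "token validation timed out"
// from "token is invalid" so a slow validation path does not surface as an
// UNAUTHENTICATED error.
public class TokenValidationSketch {
    enum Result { VALID, INVALID, TIMED_OUT }

    static Result validateWithDeadline(ExecutorService pool, String token,
                                       long deadlineMillis) throws InterruptedException {
        Future<Boolean> future = pool.submit(() -> validateToken(token));
        try {
            return future.get(deadlineMillis, TimeUnit.MILLISECONDS)
                    ? Result.VALID : Result.INVALID;
        } catch (TimeoutException e) {
            future.cancel(true);
            return Result.TIMED_OUT; // retry later instead of failing the sync
        } catch (ExecutionException e) {
            return Result.INVALID;
        }
    }

    // Placeholder for a real token-introspection call.
    static boolean validateToken(String token) {
        return token != null && !token.isEmpty();
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println(validateWithDeadline(pool, "example-token", 5_000));
        pool.shutdown();
    }
}
```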
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 06:15 PM UTC
- Resolved
- Apr 28, 2026, 10:32 PM UTC
- Duration
- 4h 17m
Affected: Amazon Selling Partner
Timeline · 4 updates
-
identified Apr 28, 2026, 06:15 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 28, 2026, 06:35 PM UTC
We are currently seeing an increase in 500 Internal Server Error responses due to degraded performance in the Amazon Selling Partner API, as indicated on their status page. This is a third-party issue on ASP's side, and we are continuing to monitor the connections. ASP Status Page: https://sellercentral.amazon.com/sp-api-status
-
monitoring Apr 28, 2026, 06:54 PM UTC
The connections are recovering and we are currently monitoring the syncs.
-
resolved Apr 28, 2026, 10:32 PM UTC
This incident has been resolved. We observed that syncs have recovered and are completing successfully without further failures.
Incident Summary
Description: We identified an issue with Amazon Selling Partner connections where syncs were failing with 500 Internal Server Error responses.
Timeline: This issue began on 2026-04-28 at 17:39:00 UTC and was resolved on 2026-04-28 at 19:38:00 UTC.
Cause: The failures were caused by a source-side outage in the Amazon Selling Partner API, resulting in 500 Internal Server Error responses.
Resolution: No action was taken from our end. The issue was resolved on the source side, and syncs recovered automatically.
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 03:05 PM UTC
- Resolved
- Apr 28, 2026, 05:54 PM UTC
- Duration
- 2h 48m
Affected: Qualtrics
Timeline · 4 updates
-
identified Apr 28, 2026, 03:05 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 28, 2026, 03:41 PM UTC
Sync failures are occurring due to an outage in Qualtrics’ Ticket Export API. We are monitoring the situation. For more details, please visit: https://status.qualtrics.com/incidents/wj65y7rwfx2j
-
monitoring Apr 28, 2026, 04:25 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 28, 2026, 05:54 PM UTC
Instance rates have returned to normal levels, and affected connectors are now syncing successfully.
Incident Summary
Description: We identified an issue with Qualtrics connections where syncs were failing with a "Ticket export failed" error.
Timeline: The issue began on April 28, 2026 at 11:30 AM UTC and was resolved at 4:10 PM UTC.
Cause: The failures were caused by an issue with the Ticket Export endpoint on the Qualtrics side, leading to sync disruptions. More details: https://status.qualtrics.com/incidents/wj65y7rwfx2j
Resolution: The issue was resolved once Qualtrics fixed the problem on their end.
Read the full incident report →
- Detected by Pingoru
- Apr 28, 2026, 01:10 AM UTC
- Resolved
- Apr 28, 2026, 02:57 PM UTC
- Duration
- 13h 46m
Affected: Microsoft Lists, SharePoint
Timeline · 5 updates
-
identified Apr 28, 2026, 09:50 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 28, 2026, 09:51 AM UTC
Fivetran's OAuth app secret key has expired, and the team is working on creating new secret key credentials to fix the issue.
-
monitoring Apr 28, 2026, 09:52 AM UTC
A new secret key credential has been created to resolve the issue. Syncs are now recovering, and we are monitoring the results.
-
monitoring Apr 28, 2026, 10:26 AM UTC
We have updated all the regions with the new secret keys and will continue to monitor the connections.
-
resolved Apr 28, 2026, 02:57 PM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for SharePoint and Microsoft Lists connections which resulted in syncs failing with an "Authentication failed: The client secret or certificate has expired" error.
Timeline: This issue began on 2026-04-27 at 22:20:00 UTC and was resolved on 2026-04-28 at 09:30:00 UTC.
Cause: Fivetran's Microsoft OAuth app credentials expired because internal workflows failed to rotate the secret keys in time.
Resolution: A fix has been implemented to update these secrets.
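The cause above is a client secret that expired because rotation did not happen in time. A hedged sketch of the kind of ahead-of-time expiry check that guards against this failure mode is shown below; the Secret type, lead time, and dates used in the example are assumptions, with only the expiry timestamp taken from the summary.

```java
import java.time.Duration;
import java.time.Instant;

// Hedged sketch, not Fivetran's workflow: flag a client secret for rotation
// (or alerting) well before it expires, instead of discovering the expiry
// through failed syncs.
public class SecretExpirySketch {
    record Secret(String id, Instant expiresAt) {}

    static boolean needsRotation(Secret secret, Duration leadTime, Instant now) {
        return now.plus(leadTime).isAfter(secret.expiresAt());
    }

    public static void main(String[] args) {
        Secret secret = new Secret("sharepoint-oauth-app",           // hypothetical id
                Instant.parse("2026-04-27T22:20:00Z"));               // expiry from the summary
        Instant checkTime = Instant.parse("2026-04-20T00:00:00Z");
        // With a 14-day lead time, this secret is flagged a week before it lapses.
        System.out.println(needsRotation(secret, Duration.ofDays(14), checkTime));
    }
}
```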
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 08:38 AM UTC
- Resolved
- Apr 27, 2026, 04:00 PM UTC
- Duration
- 7h 21m
Affected: Amazon Selling Partner
Timeline · 4 updates
-
identified Apr 27, 2026, 08:35 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 27, 2026, 08:38 AM UTC
We have observed that all Amazon Selling Partner connections across the Fivetran app, custom app, and customer_oauth app are rescheduling due to a change in rate limits on the third-party side.
-
monitoring Apr 27, 2026, 12:15 PM UTC
We have implemented a mitigation on our end to temporarily skip the SALES_AND_TRAFFIC_BUSINESS_* tables while we await further updates from the third party. We will continue to monitor the connections.
-
resolved Apr 27, 2026, 04:00 PM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Amazon Selling Partner connections which resulted in syncs rescheduling due to rate limits on the GET_SALES_AND_TRAFFIC_REPORT.
Timeline: This issue began on 2026-04-24 at 09:00:00 UTC and was resolved on 2026-04-27 at 14:15:00 UTC.
Cause: A rate-limit change on the third-party side caused our connectors to reschedule syncs of the GET_SALES_AND_TRAFFIC_REPORT.
Resolution: A fix has been implemented to skip the tables associated with this report, allowing the connections to sync successfully.
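The mitigation above amounts to excluding a family of tables from the sync until the third-party rate limits allow them again. A minimal, illustrative sketch of that kind of prefix-based table filter (not Fivetran's actual scheduling code) follows; the table list is hypothetical, and only the prefix comes from the incident summary.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: temporarily exclude tables whose names match a prefix
// so the rest of the sync can proceed while the rate limit persists.
public class TableSkipSketch {
    private static final String SKIP_PREFIX = "SALES_AND_TRAFFIC_BUSINESS_";

    static List<String> tablesToSync(List<String> allTables) {
        return allTables.stream()
                .filter(name -> !name.startsWith(SKIP_PREFIX))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tables = List.of(
                "ORDERS",
                "SALES_AND_TRAFFIC_BUSINESS_DATE",
                "SALES_AND_TRAFFIC_BUSINESS_ASIN",
                "LISTINGS");
        // ORDERS and LISTINGS still sync; the rate-limited report tables are skipped.
        System.out.println(tablesToSync(tables));
    }
}
```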
Read the full incident report →
- Detected by Pingoru
- Apr 26, 2026, 07:00 AM UTC
- Resolved
- Apr 26, 2026, 07:02 AM UTC
- Duration
- 2m
Affected: QuickBooks
Timeline · 2 updates
-
identified Apr 26, 2026, 07:00 AM UTC
The issue has been identified and we are working to resolve it.
-
resolved Apr 26, 2026, 07:02 AM UTC
Incident Summary
Description: QuickBooks connections were failing due to a source-side maintenance window.
Timeline: April 25, 2026, from 9:00 PM to 11:30 PM PT.
Cause: Source-side maintenance.
Resolution: Automatically resolved after the maintenance window ended.
Read the full incident report →
- Detected by Pingoru
- Apr 25, 2026, 10:55 PM UTC
- Resolved
- Apr 26, 2026, 01:30 PM UTC
- Duration
- 14h 35m
Affected: Twitter Ads
Timeline · 5 updates
-
identified Apr 25, 2026, 10:55 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 26, 2026, 12:09 AM UTC
We are receiving 500 Internal Server Errors from the Twitter API when querying the accounts endpoint.
-
identified Apr 26, 2026, 05:48 AM UTC
We are continuing to work on a fix for this issue.
-
monitoring Apr 26, 2026, 12:26 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 26, 2026, 01:30 PM UTC
Incident Summary
Description: Multiple Twitter Ads connectors failed due to the Twitter Ads API returning persistent server errors (HTTP 500) on reporting endpoints.
Timeline (UTC): Started on 2026-04-25 at 21:00 and resolved on 2026-04-26 at 11:50.
Cause: The Twitter Ads API experienced an outage, returning repeated HTTP 500 Internal Server Error responses on the stats reporting endpoint (POST https://ads-api.twitter.com/12/stats/jobs/accounts/).
Resolution: Deployed a fix/workaround that enables the connector to gracefully handle HTTP 500 errors from the Twitter Ads API. Affected accounts are now skipped with a customer-facing warning on the connector dashboard, sync progress is preserved for successful accounts, and skipped data is automatically retried on the next sync.
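The resolution above describes account-level degradation: when the stats endpoint persistently returns HTTP 500 for one account, that account is skipped with a warning and retried on the next sync, while other accounts keep their progress. The sketch below is an illustrative approximation under those assumptions; all class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the behavior described above: skip an account on a
// persistent server error, emit a customer-facing warning, keep progress for
// successful accounts, and retry skipped accounts on the next sync.
public class AccountSkipSketch {

    static class ServerErrorException extends RuntimeException {
        ServerErrorException(String msg) { super(msg); }
    }

    static List<String> syncAccounts(List<String> accountIds) {
        List<String> skipped = new ArrayList<>();
        for (String accountId : accountIds) {
            try {
                syncStatsForAccount(accountId);
            } catch (ServerErrorException e) {
                // Surface a warning instead of failing the whole sync.
                System.out.println("WARNING: skipping account " + accountId
                        + " (" + e.getMessage() + "); will retry next sync");
                skipped.add(accountId);
            }
        }
        return skipped; // persisted so the next sync retries these accounts
    }

    // Placeholder for a real stats-report request.
    static void syncStatsForAccount(String accountId) {
        if (accountId.endsWith("_bad")) {
            throw new ServerErrorException("HTTP 500 from stats endpoint");
        }
    }

    public static void main(String[] args) {
        System.out.println("Skipped: "
                + syncAccounts(List.of("acct_1", "acct_2_bad", "acct_3")));
    }
}
```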
Read the full incident report →
- Detected by Pingoru
- Apr 24, 2026, 06:35 PM UTC
- Resolved
- Apr 24, 2026, 10:50 PM UTC
- Duration
- 4h 14m
Affected: General Services
Timeline · 5 updates
-
identified Apr 24, 2026, 06:35 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 24, 2026, 06:58 PM UTC
The issue was caused by an outage on Azure. You can find more details at the following link: https://azure.status.microsoft/en-us/status
-
identified Apr 24, 2026, 08:47 PM UTC
Azure has identified the issue and they expect recovery within the next hour. Fivetran engineering will continue to monitor as Azure works through this incident.
-
identified Apr 24, 2026, 09:49 PM UTC
Azure is continuing to roll out a fix in the East US region. We have observed that our backlog of syncs has caught up, but pod scaling is still affected. Once Azure confirms they have completed their fix, we will test pod scaling to ensure it works as expected.
-
resolved Apr 24, 2026, 10:50 PM UTC
Syncs have returned to normal, and Azure appears to have completed the rollout of their fix. Azure will post a post-mortem covering the root cause of this incident to their status page: https://azure.status.microsoft/en-us/status
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 08:04 PM UTC
- Resolved
- Apr 23, 2026, 06:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 23, 2026, 08:04 PM UTC
We have observed that instance rates have returned to normal levels, and the affected connectors are now syncing successfully.
Incident Summary
Description: We identified an issue with Recharge connections, where syncs were failing with a 502 error.
Timeline: The issue began on 2026-04-23 at 15:45 UTC and was resolved at 17:38 UTC.
Cause: The failures were caused by the intermittent unavailability of the Recharge API, leading to sync disruptions. More details are available on Recharge's status page: https://status.getrecharge.com/incidents/cbz3mcgf2lx5
Resolution: The issue was resolved automatically once the Recharge API resumed normal operation.
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 07:46 PM UTC
- Resolved
- Apr 23, 2026, 06:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 23, 2026, 07:46 PM UTC
Description: We identified an issue affecting ServiceTitan connectors, which resulted in syncs failing with an HTTP 500 error.
Timeline: This issue began on 2026-04-23 at 17:55 UTC and was resolved on 2026-04-23 at 18:34 UTC.
Cause: The failures were due to a temporary service outage on ServiceTitan’s end, disrupting sync processes.
Resolution: The issue was resolved once ServiceTitan deployed a fix on their end.
Read the full incident report →
- Detected by Pingoru
- Apr 23, 2026, 05:20 PM UTC
- Resolved
- Apr 23, 2026, 07:40 PM UTC
- Duration
- 2h 19m
Affected: General Services
Timeline · 4 updates
-
identified Apr 23, 2026, 05:20 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Apr 23, 2026, 05:20 PM UTC
A fix has been implemented and we are monitoring the results.
-
monitoring Apr 23, 2026, 07:00 PM UTC
In the last hour, we have observed sync failure rates drop to normal rates. We are continuing to monitor.
-
resolved Apr 23, 2026, 07:40 PM UTC
We have resolved this incident.
Read the full incident report →
- Detected by Pingoru
- Apr 21, 2026, 10:00 PM UTC
- Resolved
- Apr 22, 2026, 03:25 AM UTC
- Duration
- 5h 24m
Affected: Twitter Ads
Timeline · 3 updates
-
identified Apr 21, 2026, 10:00 PM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Apr 22, 2026, 12:25 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 22, 2026, 03:25 AM UTC
This incident has been resolved. We have observed that error rates have returned to normal levels, and the service is operating as expected.
Incident Summary
Description: We identified an issue affecting multiple Twitter Ads connections, where syncs failed due to an access level/permission error (code 453).
Timeline: The issue began on 2026-04-21 at 20:43 UTC and was resolved on 2026-04-22 at 02:23 UTC.
Cause: The X API started failing authenticated download requests for report GZIP files.
Resolution: Authentication has been removed from GZIP file download requests.
Read the full incident report →
- Detected by Pingoru
- Apr 21, 2026, 04:30 AM UTC
- Resolved
- Apr 21, 2026, 04:41 AM UTC
- Duration
- 10m
Affected: General Services
Timeline · 3 updates
-
identified Apr 21, 2026, 04:30 AM UTC
The issue has been identified and we are working to resolve it.
-
monitoring Apr 21, 2026, 04:30 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 21, 2026, 04:41 AM UTC
We have resolved this incident.
Incident Summary
Description: We identified an issue affecting Apple App Store connections, which were failing with the error: “Unknown failure. This request can not be processed right now Reporter is currently unavailable.”
Timeline: The issue began on April 20th at 22:45 UTC and was resolved on April 21st at 01:20 UTC.
Cause: The sync failure was caused by an intermittent issue on the third-party side.
Resolution: The issue was automatically resolved from the third-party side without any intervention from the Fivetran end.
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 06:00 PM UTC
- Resolved
- Apr 16, 2026, 06:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 17, 2026, 12:58 AM UTC
This incident has been resolved.
Incident Summary
Description: We identified an issue where connectors using the Hybrid Deployment agent were unable to schedule new sync jobs.
Timeline: This issue began on 2026-04-15 at 18:30 UTC and was resolved on 2026-04-16 at 23:20 UTC.
Cause: The issue was caused by a recent code change.
Resolution: The changes have been reverted, and the affected HD Agents will require a restart. During the restart, the latest update will be automatically applied, resolving the issue and preventing recurrence.
Steps to restart your agent:
For Docker and Podman deployments:
./hdagent.sh stop
./hdagent.sh start
For Kubernetes deployments (placeholders stand for your agent's namespace and pod name):
1. Get the HD Agent pod: kubectl get pod -n <namespace> -l app.kubernetes.io/name=hd-agent
2. Delete the pod, which will force it to restart and pull the latest image: kubectl delete pod <pod-name> -n <namespace>
Read the full incident report →
- Detected by Pingoru
- Apr 14, 2026, 04:40 PM UTC
- Resolved
- Apr 15, 2026, 12:01 AM UTC
- Duration
- 7h 21m
Affected: Twitter Ads
Timeline · 4 updates
-
identified Apr 14, 2026, 04:40 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 14, 2026, 07:21 PM UTC
We have identified an issue where the source API is returning null values in a primary key field (segment) for some reports, resulting in sync failures. We are actively working on a fix to skip these rows so that syncs can continue without interruption. We will provide another update as more information becomes available.
-
monitoring Apr 14, 2026, 10:48 PM UTC
Twitter has identified an outage and services have now returned to normal: https://devcommunity.x.com/t/x-api-service-outage/262778 A hotfix has also been deployed to prevent further failures due to null primary keys. We have observed connections begin to recover and will continue monitoring.
-
resolved Apr 15, 2026, 12:01 AM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Twitter Ads which resulted in syncs failing with a "Null primary key found" error.
Timeline: This issue began on 2026-04-14 at 16:30 UTC and was resolved at 23:30 UTC.
Cause: The Twitter API started returning null values in a primary key field (segment) for some reports, causing sync failures.
Resolution: Twitter resolved the issue at the source (https://devcommunity.x.com/t/x-api-service-outage/262778), and Fivetran deployed a hotfix that prevents further failures by skipping rows where the primary key is not present.
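The hotfix described above drops report rows whose primary-key field came back null so the remaining rows can still be loaded. A minimal illustrative sketch of that row-level filter follows; the Row type is hypothetical, and only the null check on the segment field reflects the incident summary.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

// Minimal sketch of the hotfix behavior described above: skip report rows whose
// primary-key field ("segment") is null instead of failing the whole sync.
public class NullPrimaryKeySketch {
    record Row(String segment, long impressions) {}

    static List<Row> dropNullPrimaryKeys(List<Row> rows) {
        return rows.stream()
                .filter(row -> Objects.nonNull(row.segment()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> rows = Arrays.asList(
                new Row("age_25_34", 120),
                new Row(null, 45),        // would previously fail the sync
                new Row("age_35_44", 98));
        System.out.println(dropNullPrimaryKeys(rows)); // keeps the two valid rows
    }
}
```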
Read the full incident report →
- Detected by Pingoru
- Apr 11, 2026, 10:45 PM UTC
- Resolved
- Apr 12, 2026, 02:36 AM UTC
- Duration
- 3h 51m
Affected: Twitter Ads
Timeline · 4 updates
-
identified Apr 11, 2026, 10:45 PM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 11, 2026, 11:18 PM UTC
Connectors are failing with a “Bad Authentication data” error (code 215) while fetching successful async job results.
-
monitoring Apr 12, 2026, 12:07 AM UTC
A fix has been implemented, and we are monitoring the results.
-
resolved Apr 12, 2026, 02:36 AM UTC
This incident has been resolved.
Incident Summary
Description: The Twitter API started returning a "Bad Authentication data" error (code 215) while fetching successful async job results.
Timeline: The impact started around April 11th at 9:30 PM UTC and ended on April 12th at 01:57 UTC.
Cause: Third-party error on Twitter's side.
Resolution: We have implemented a workaround that skips the problematic endpoints for problematic accounts when this error occurs, and connectors are now back to normal.
Read the full incident report →
- Detected by Pingoru
- Apr 11, 2026, 11:55 AM UTC
- Resolved
- Apr 11, 2026, 02:48 PM UTC
- Duration
- 2h 53m
Affected: Twitter Ads
Timeline · 4 updates
-
identified Apr 11, 2026, 11:55 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 11, 2026, 01:08 PM UTC
The failure, "URI is not absolute" is a cause of failing Twitter APIs while trying to fetch reports with code 500 and message "internal server error"
-
monitoring Apr 11, 2026, 01:14 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 11, 2026, 02:48 PM UTC
Incident Summary
Description: The Twitter API started returning Internal Server Error responses with code 500 when we tried to fetch reports.
Timeline: The impact started around April 10th at 21:54:44 UTC and ended on April 11th at 12:47:37 UTC.
Cause: Third-party error on Twitter's side.
Resolution: We have implemented a workaround by skipping reports for problematic accounts, and connectors are back to normal.
Read the full incident report →
- Detected by Pingoru
- Apr 10, 2026, 12:34 AM UTC
- Resolved
- Apr 10, 2026, 01:45 AM UTC
- Duration
- 1h 11m
Affected: Transformations
Timeline · 4 updates
-
identified Apr 10, 2026, 12:20 AM UTC
The issue has been identified and we are working to resolve it.
-
identified Apr 10, 2026, 12:34 AM UTC
We are actively working on a fix for this issue. This issue is currently impacting dbt core and Quickstart transformations
-
monitoring Apr 10, 2026, 12:45 AM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 10, 2026, 01:45 AM UTC
This incident has been resolved.
Incident Summary
Description: We identified an issue where dbt Core and Quickstart transformation jobs were failing with an "UnsatisfiedDependencyException" error.
Timeline: The issue began on April 10, 2026 at 00:40 UTC and was resolved on April 10, 2026 at 01:30 UTC.
Cause: The issue was caused by a recent code change to transformations.
Resolution: Engineering identified the root cause and reverted the recent changes. After deploying the revert, transformations resumed normal operation.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 08:38 PM UTC
- Resolved
- Apr 07, 2026, 11:30 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 08, 2026, 08:38 PM UTC
This incident has been resolved. We have observed that instance rates have returned to normal levels, and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue with Twitter Ads and Twitter Organic connections, which resulted in sync failures with an "HTTP 503 (Service Unavailable)" error.
Timeline: This issue began on 2026-04-07 at 15:00 UTC and was resolved at 23:00 UTC.
Cause: The Twitter Ads API '/stats/jobs/accounts/' endpoint started returning HTTP 503 (Service Unavailable) errors.
Resolution: A hotfix was deployed to handle 503 errors, as well as to retry failed endpoints.
Read the full incident report →
- Detected by Pingoru
- Apr 08, 2026, 12:50 PM UTC
- Resolved
- Apr 08, 2026, 03:52 PM UTC
- Duration
- 3h 2m
Affected: Amazon Selling Partner
Timeline · 3 updates
-
identified Apr 08, 2026, 12:50 PM UTC
We have identified that the connections are failing with the following error:
all_listings_report endpoint sync failed with error: Cannot invoke "com.fivetran.platform.interfaces.connector.SharedContext.safeFileBufferFactory()" because "this.sharedContext" is null
-
monitoring Apr 08, 2026, 02:07 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 08, 2026, 03:52 PM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Amazon Selling Partner connections which resulted in syncs failing with the error: Cannot invoke "com.fivetran.platform.interfaces.connector.SharedContext.safeFileBufferFactory()" because "this.sharedContext" is null.
Timeline: This issue began on April 8, 2026 at 11:00 AM UTC and was resolved on April 8, 2026 at 2:30 PM UTC.
Cause: A bug introduced on our end caused syncs to fail with a null shared-context error.
Resolution: A fix has been implemented on our end to mitigate the issue.
Read the full incident report →
- Detected by Pingoru
- Apr 07, 2026, 10:24 AM UTC
- Resolved
- Apr 07, 2026, 04:30 AM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 07, 2026, 10:24 AM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue with our external logging services which resulted in delayed log delivery to customers' external logs.
Timeline: This issue began on 2026-04-07 at 07:50 UTC and was resolved on 2026-04-07 at 10:00 UTC.
Cause: An outage in the external logging API caused a buildup of log messages in our internal queue, impacting external logging services.
Resolution: The issue has been resolved on the third-party end, and our instances have returned to normal.
Read the full incident report →
- Detected by Pingoru
- Apr 04, 2026, 08:05 AM UTC
- Resolved
- Apr 04, 2026, 01:10 PM UTC
- Duration
- 5h 5m
Affected: Awin
Timeline · 4 updates
-
identified Apr 04, 2026, 08:05 AM UTC
The issue has been identified as originating from the source side and we are working to resolve it.
-
identified Apr 04, 2026, 08:44 AM UTC
As a temporary workaround, our development team is skipping the Transaction endpoint. We will contact the source to investigate the issue and work on a fix.
-
monitoring Apr 04, 2026, 10:52 AM UTC
We have successfully implemented a workaround by skipping the Transaction endpoint sync for now. We will continue monitoring all affected connections and have also reached out to the Awin source team.
-
resolved Apr 04, 2026, 01:10 PM UTC
This incident has been resolved. We have observed that instance rates are returning to normal levels and affected connectors are syncing successfully.
Incident Summary
Description: We identified an issue for Awin connections which resulted in syncs failing with a 500 Internal Server Error for the Transaction endpoint.
Timeline: This issue began on 2026-04-04 at 06:40 AM UTC and was resolved on 2026-04-04 at 10:00 AM UTC.
Cause: This was determined to be a source-side issue.
Resolution: We’ve implemented a workaround by skipping the Transaction endpoint sync, and the connections are now syncing successfully. We’ve also reached out to Awin Support for more details on these failures. Once the issue on the source side is resolved, the Transaction table will automatically resume syncing data. Additionally, the Transaction table runs a weekly override re-sync, so any skipped data will be backfilled during that process.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 07:29 PM UTC
- Resolved
- Apr 02, 2026, 07:00 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Apr 02, 2026, 07:29 PM UTC
This incident has been fully resolved. Error rates have returned to normal levels, and all affected connections are now running successfully.
Incident Summary
Description: We identified an issue where Snapchat Ads connection syncs were intermittently failing with the error "javax.net.ssl.SSLHandshakeException".
Timeline: The issue began on April 2nd at 17:55 UTC and was resolved by 18:20 UTC.
Root Cause: The failures were caused by an intermittent SSL validation issue on the source side affecting the Snapchat API endpoint.
Resolution: No action or code changes were required from our end. The issue resolved automatically once the Snapchat API resumed normal operation.
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 05:45 PM UTC
- Resolved
- Apr 02, 2026, 09:22 PM UTC
- Duration
- 3h 37m
Affected: SQL Server, Epic Clarity
Timeline · 4 updates
-
identified Apr 02, 2026, 05:45 PM UTC
We identified a potential issue affecting the SQL Server (Teleport) and Epic CSA connectors, where syncs are failing with the following error:
The action failed after 1 attempt(s), the failures were: 1) java.lang.IllegalArgumentException: No enum constant com.fivetran.integrations.sql_server.connector.schema.SqlServerType.SYSNAME
-
identified Apr 02, 2026, 06:37 PM UTC
We are continuing to work on a fix for this issue.
-
monitoring Apr 02, 2026, 09:03 PM UTC
A fix has been implemented and we are monitoring the results.
-
resolved Apr 02, 2026, 09:22 PM UTC
This incident has been resolved.
Incident Summary
Description: We identified an issue affecting the SQL Server (Teleport) and Epic CSA connectors, where syncs were failing with the following error: java.lang.IllegalArgumentException: No enum constant com.fivetran.integrations.sql_server.connector.schema.SqlServerType.SYSNAME
Timeline: The issue began on April 2, 2026, at 15:07 UTC and was resolved on April 2, 2026, at 20:49 UTC.
Cause: A recent deployment introduced an issue impacting views in the Epic CSA and SQL Server (Teleport) connectors.
Resolution: The deployment was rolled back, and a hotfix was applied, restoring normal operations.
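For context, the error above is what java.lang.Enum.valueOf raises when a type name (here SYSNAME, SQL Server's alias for nvarchar(128)) has no matching constant. The sketch below shows a tolerant lookup that maps unknown names to a fallback instead of throwing; the enum values are illustrative, not the connector's real type list.

```java
import java.util.Locale;

// Illustrative only: Enum.valueOf throws IllegalArgumentException for an
// unmapped name (the "No enum constant ... SYSNAME" failure above). A tolerant
// lookup can fall back to a catch-all type instead of failing the sync.
public class EnumFallbackSketch {
    enum ColumnType { INT, NVARCHAR, DATETIME, UNSUPPORTED }

    static ColumnType fromSqlServerName(String typeName) {
        try {
            return ColumnType.valueOf(typeName.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            // e.g. sysname has no constant in this illustrative enum.
            return ColumnType.UNSUPPORTED;
        }
    }

    public static void main(String[] args) {
        System.out.println(fromSqlServerName("nvarchar")); // NVARCHAR
        System.out.println(fromSqlServerName("sysname"));  // UNSUPPORTED, not a crash
    }
}
```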
Read the full incident report →