- Detected by Pingoru
- Apr 30, 2026, 08:18 PM UTC
- Resolved
- Apr 30, 2026, 08:18 PM UTC
- Duration
- —
Affected: Central 1 Services · Payment Services · Treasury Services · Digital Banking Services · Incident Alerting
Timeline · 1 update
-
resolved Apr 30, 2026, 08:18 PM UTC
Please be advised that Central 1 experienced a Client Centre (https://clients.central1.com/) outage today between 12:44 and 12:57 p.m. PT (3:44 and 3:57 p.m. ET). You may have experienced issues logging into Client Centre or Central 1 applications during that time. A postmortem will be provided within the next two weeks. Central 1 - [email protected] – 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 27, 2026, 02:38 PM UTC
- Resolved
- Apr 27, 2026, 06:54 PM UTC
- Duration
- 4h 16m
Affected: Payment Services
Timeline · 2 updates
-
identified Apr 27, 2026, 02:38 PM UTC
Payments Canada has notified Central 1 that BNC (National Bank of Canada) did not send one of the two files in the morning exchange. The larger of the two AFT files was successfully received and processed. We will provide an update once we have received and processed the remaining BNC file. Central 1 – [email protected] – 1.888.889.7878, press 1
-
resolved Apr 27, 2026, 06:54 PM UTC
We have received this morning's missing AFT file from BNC. The 160 transactions missed in this morning's file will be included in this afternoon's AFT CR03/ON03. Central 1 – [email protected] – 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 25, 2026, 06:28 PM UTC
- Resolved
- Apr 25, 2026, 06:28 PM UTC
- Duration
- —
Affected: Payment Services · Digital Banking Services
Timeline · 1 update
-
resolved Apr 25, 2026, 06:28 PM UTC
Central 1 is aware that users were not seeing the OAS production login link on the Client Centre. The link for OAS has been restored. Central 1 - [email protected] - 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 22, 2026, 08:56 PM UTC
- Resolved
- Apr 23, 2026, 11:23 AM UTC
- Duration
- 14h 27m
Affected: Payment Services · Treasury Services · Incident Alerting
Timeline · 5 updates
-
investigating Apr 22, 2026, 08:56 PM UTC
We are currently investigating an issue impacting Wires. At approximately 1:25 p.m. PT (4:25 p.m. ET), we observed incoming and outgoing wires not moving in our system. At this time, we are working to validate scope. We will provide an update within 30 minutes or sooner as more information becomes available. Central 1 – [email protected] - 1.888.889.7878, press 1
-
investigating Apr 22, 2026, 09:22 PM UTC
Fiserv has raised a critical incident and has engaged their team to review a core issue within their infrastructure that is impacting our wires system. As a workaround, Central 1 is currently manually releasing incoming and outgoing wires. The workaround will prevent any further delays in wires processing while Fiserv continues to triage their incident. We will provide an update within 1 hour or sooner as more information becomes available. Central 1 – [email protected] - 1.888.889.7878, press 1
-
monitoring Apr 22, 2026, 10:26 PM UTC
Central 1 has been able to manually process incoming and outgoing wires up to the 3 p.m. PT (6 p.m. ET) wires close time (SWIFT gateway closed). Fiserv continues to triage their critical incident and has Central 1 teams engaged to ensure we can establish connectivity with their system. Central 1 and Fiserv will work to ensure there is system stability for tomorrow morning's 5 a.m. PT (8 a.m. ET) wires opening. We will provide an update tomorrow at 5:30 a.m. PT (8:30 a.m. ET). Central 1 – [email protected] - 1.888.889.7878, press 1
-
monitoring Apr 23, 2026, 03:08 AM UTC
Fiserv has provided an update to Central 1 that they have resolved the technical issues within their Azure environment. Central 1 and Fiserv will complete additional validation steps to ensure system stability prior to the 5 a.m. PT (8 a.m. ET) wires opening tomorrow morning, before confirming the incident is resolved. We will provide another update tomorrow by 5:30 a.m. PT (8:30 a.m. ET).
-
resolved Apr 23, 2026, 11:23 AM UTC
Central 1 has confirmed that both outgoing and incoming wire transfers have resumed normal processing. Central 1 – [email protected] - 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 11:45 PM UTC
- Resolved
- Apr 20, 2026, 11:45 PM UTC
- Duration
- —
Affected: Central 1 Services · Payment Services · Digital Banking Services
Timeline · 2 updates
-
resolved Apr 20, 2026, 11:45 PM UTC
Central 1 received alerts starting around 3:34 p.m. PT (6:34 p.m. ET) for UCS service degradation, impacting e-Transfers 3.4, Bill Payments, Remote Deposit Capture (RDC) Payments Services, and login services (OIDC) to Digital Banking (Forge and MemberDirect). The service degradation lasted from 3:34 to 4:17 p.m. PT (6:34 to 7:17 p.m. ET) (43-minute degradation). Services have recovered and have remained stable. Central 1 - [email protected] - 1.888.889.7878, press 1
-
postmortem Apr 29, 2026, 08:24 PM UTC
**Postmortem: INC220338 – Universal Connectivity Services (UCS) Service Degradation**

On April 20, 2026, at approximately 3:50 p.m. PT, Central 1 experienced a service degradation affecting Universal Connectivity Services (UCS) within the Vancouver Hosting Centre (VAHC). The disruption lasted approximately 43 minutes and impacted connectivity across several payment and digital banking services. During the incident window, most client traffic successfully failed over to the Toronto Hosting Centre (TOHC); however, a subset of clients and services that are pinned to VAHC experienced connectivity issues. Impacted services included MemberDirect and Forge login, e-Transfer 3.4, Remote Deposit Capture (RDC), and bill payments. Service was restored once the underlying network issue was identified and corrected.

**Point of Failure:** The incident was caused by an unintended disruption to a live network cable/port during Smart Hands work performed by an Equinix technician. While completing a requested cabling change, an adjacent active connection was inadvertently disturbed, resulting in a loss of network connectivity within VAHC. Although the majority of services are designed to fail over to TOHC, applications and client connections that are statically pinned to VAHC did not automatically transition, which led to the observed service degradation for those clients. The event was detected through Dynatrace monitoring, which includes a built-in validation window to reduce false positives. The total detection and notification timeline of approximately 13 minutes was within expected operational thresholds. Post-incident review confirmed that monitoring, escalation, and response processes functioned as designed.

The incident resulted in a temporary disruption to UCS-dependent services, including digital banking access and payment processing for a subset of clients. While most traffic failed over successfully, clients with dependencies tied specifically to VAHC experienced a service interruption during the 43-minute window. No data loss or security concerns were identified.

**Corrective Actions -** [PRB011719](https://central1.service-now.com/problem.do?sys_id=8452f90f3bd04350b117ebc964e45ad7&sysparm_record_target=problem&sysparm_record_row=1&sysparm_record_rows=1&sysparm_record_list=parent%3D682d46b23b904f10b117ebc964e45a26%5EORDERBYDESCsys_updated_on): Central 1 has completed a review with Equinix and obtained a formal incident report to validate the sequence of events and reinforce handling expectations for Smart Hands activities. Change governance practices have also been clarified to ensure stronger alignment between operational tasks and formal change management, even for activities perceived as low risk.

* [PTASK0010546](https://central1.service-now.com/problem_task.do?sys_id=1a391203339ccf10bbaf35bb9d5c7bd7&sysparm_record_target=problem_task&sysparm_record_row=4&sysparm_record_rows=5&sysparm_record_list=problem%3D8452f90f3bd04350b117ebc964e45ad7%5EORDERBYnumber) - Equinix Incident Report - COMPLETED
* [PTASK0010543](https://central1.service-now.com/problem_task.do?sys_id=df5656c3331ccf10bbaf35bb9d5c7bda&sysparm_record_target=problem_task&sysparm_record_row=1&sysparm_record_rows=5&sysparm_record_list=problem%3D8452f90f3bd04350b117ebc964e45ad7%5EORDERBYnumber) - Change Governance Clarity - COMPLETED: clearer alignment between operational activities and formal change management, even when activities are perceived as low risk.
* [PTASK0010545](https://central1.service-now.com/problem_task.do?sys_id=37481a4b335ccf10bbaf35bb9d5c7b23&sysparm_record_target=problem_task&sysparm_record_row=3&sysparm_record_rows=5&sysparm_record_list=problem%3D8452f90f3bd04350b117ebc964e45ad7%5EORDERBYnumber) - Investigate Port Monitoring – Due end of May 2026: investigate whether port monitoring on high-traffic ports would further strengthen our detection process.
* [PTASK0010554](https://central1.service-now.com/problem_task.do?sys_id=212a42ff33dccf50bbaf35bb9d5c7b7a&sysparm_record_target=problem_task&sysparm_record_row=5&sysparm_record_rows=5&sysparm_record_list=problem%3D8452f90f3bd04350b117ebc964e45ad7%5EORDERBYnumber) - Review failover capabilities for clients pinned to specific UCS locations – Due end of May 2026

We apologize for the disruption experienced during this incident and are taking these steps to further strengthen resiliency and change governance controls.

Jason Seale | Director of Client Support Services | [[email protected]](mailto:[email protected]) | 778.558.5627
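The detection story above hinged on a monitoring validation window, which alerts only on sustained failure so that transient blips do not page anyone. As an illustration only (the window length and samples below are hypothetical, not Central 1's Dynatrace configuration), the idea can be sketched as:

```python
def should_alert(samples: list[bool], window: int = 3) -> bool:
    """Alert only when the most recent `window` health samples all failed.

    `samples` is a chronological list of check results (True = healthy).
    A single failed sample never alerts; a sustained run of failures does,
    trading a few minutes of detection delay for fewer false positives.
    """
    return len(samples) >= window and all(not ok for ok in samples[-window:])

# A sustained outage alerts; a transient blip does not.
print(should_alert([True, False, False, False]))  # True
print(should_alert([False, True, False]))         # False
```

The trade-off is visible in the postmortem's own numbers: a validation window adds to the ~13-minute detection-and-notification timeline, which the review judged within operational thresholds.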
Read the full incident report →
- Detected by Pingoru
- Apr 20, 2026, 12:15 PM UTC
- Resolved
- Apr 20, 2026, 02:49 PM UTC
- Duration
- 2h 33m
Affected: Central 1 Services · Payment Services · Treasury Services
Timeline · 5 updates
-
investigating Apr 20, 2026, 12:15 PM UTC
Central 1 is currently unable to send or receive CAD, USD, or FX wires. Wires can still be created and reviewed in PS-Wires. Central 1’s technical resources are actively working to restore full service. We will provide an update by 5:45 a.m. PT (8:45 a.m. ET). Central 1 – [email protected] - 1.888.889.7878, press 1
-
investigating Apr 20, 2026, 01:00 PM UTC
Central 1 continues to experience a technical issue preventing incoming and outgoing wires from being processed. Wires can be created and approved in PS-Wires. We have engaged our technical teams along with our 3rd-party providers, Fiserv and IBM. An update will be provided by 6:45 a.m. PT (9:45 a.m. ET) or sooner if the incident is resolved. Central 1 – [email protected] - 1.888.889.7878, press 1
-
identified Apr 20, 2026, 02:00 PM UTC
Central 1 continues to experience a technical issue preventing the processing of incoming and outgoing wires. Wires can be created and approved in PS-Wires. The technical issue has been identified, and we are waiting for our vendor to implement a fix to their system. The fix will be applied by 7:30 a.m. PT (10:30 a.m. ET). As soon as the wires are processed, we will send a confirmation via this channel. Central 1 – [email protected] - 1.888.889.7878, press 1
-
identified Apr 20, 2026, 02:32 PM UTC
Our vendor addressed their technical issue at 7:02 a.m. PT (10:02 a.m. ET), and after a server restart, service recovered at 7:04 a.m. PT (10:04 a.m. ET). Inbound and outbound wires are now processing. Wires in pending status could take an additional 30 minutes to process. We continue to monitor the system and will send an update on system performance within the next hour. Central 1 – [email protected] - 1.888.889.7878, press 1
-
resolved Apr 20, 2026, 02:49 PM UTC
After further monitoring of the wire system, we can confirm that both incoming and outgoing wires are now processing. Central 1 – [email protected] - 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 19, 2026, 07:04 PM UTC
- Resolved
- Apr 19, 2026, 07:47 PM UTC
- Duration
- 42m
Affected: Payment Services · Incident Alerting
Timeline · 3 updates
-
investigating Apr 19, 2026, 07:04 PM UTC
Central 1 is aware that PaymentStream Direct is not accessible and has initiated an immediate investigation. Users will receive an error message that the server is unavailable after entering the 2-step token code. Keeping you informed during this incident is important to us. We'll share another update with you by 2 p.m. PT (5 p.m. ET) today, or as soon as we have significant developments to report. Thank you for your patience. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Apr 19, 2026, 07:47 PM UTC
Service has been restarted and PaymentStream Direct is now fully accessible. Thank you for your patience during this service outage. Central 1 - [email protected] - 1.888.889.7878, press 1
-
postmortem Apr 29, 2026, 08:21 PM UTC
**Postmortem: INC220235 – PaymentStream Direct (PSD) Service Outage**

On Sunday, April 19, between 3:00 a.m. and 12:40 p.m. PT, PaymentStream Direct (PSD) experienced a service outage. During this time, the few users attempting to access PSD were unable to complete two-step verification, encountering a "server is unavailable" error. The issue was resolved with a service restart, restoring the application to a healthy state.

**Point of Failure:** The outage was caused by a scheduled restart of the PSD application, during which the service failed to recover properly. Although the application appeared to restart, it did not return to a functional state, as the required application port was not actively listening. This resulted in the application being unreachable despite no immediate indication of failure at the infrastructure level.

The duration of the incident was extended due to gaps in monitoring for this specific unhealthy state. The automated alerting in place has a built-in bypass of the 2SV step, owing to the nature of the monitoring tool, which prevented the tool from detecting the outage. The existing synthetic monitoring only validated a partial authentication flow without confirming successful access into the PSD application itself. The synthetic monitoring is also being enhanced to better detect systemic failures in the 2SV workflow and raise earlier awareness.

PSD is primarily used by branch staff to perform wire transfers, CRA bill payments, and AFT transactions, but it also supports some digital functions that do not require step-up login. The platform has very low usage on Sundays, with roughly 1% of the typical traffic of a weekday business hour. During the outage, the payment functions were unavailable; however, branch payments are not expected to originate during the impact period. Some related services, including PS AFT business member transactions, were also impacted, though affected volumes are very low. A small number of credit unions did report service issues, which helped confirm the broader scope of the incident. Overall client impact was limited due to low transaction volumes during the outage window, and no data loss or security concerns were identified.

**Corrective Actions -** [PRB011716](https://central1.service-now.com/problem.do?sys_id=9a5be8ae3b184710b117ebc964e45a13&sysparm_record_target=problem&sysparm_record_row=1&sysparm_record_rows=1&sysparm_record_list=parent%3Ddda01bd63358c310bbaf35bb9d5c7b97%5EORDERBYDESCsys_updated_on): Monitoring improvements are being implemented to ensure full end-to-end validation of PSD availability, including updates to synthetic monitoring to confirm successful application access rather than partial authentication checks. In parallel, application-level monitoring will be enhanced to detect and alert on port availability failures, with automated escalation through PagerDuty.

* [PTASK0010552](https://central1.service-now.com/problem_task.do?sys_id=cb3b1b573b540b50b117ebc964e45acf&sysparm_record_target=problem_task&sysparm_record_row=2&sysparm_record_rows=3&sysparm_record_list=problem%3D9a5be8ae3b184710b117ebc964e45a13%5EORDERBYnumber) - Improve Site24x7 Synthetic Monitoring - COMPLETED: re-record the Site24x7 monitor to validate full PSD access and add keyword checks.
* [PTASK0010551](https://central1.service-now.com/problem_task.do?sys_id=d0ba9bdf3b140b50b117ebc964e45aed&sysparm_record_target=problem_task&sysparm_record_row=1&sysparm_record_rows=3&sysparm_record_list=problem%3D9a5be8ae3b184710b117ebc964e45a13%5EORDERBYnumber) - Enable Zenoss Monitoring – End of May 2026: add application health and port-listening checks for PSD and ensure PagerDuty triggers automatically on failure.
* [PTASK0010553](https://central1.service-now.com/problem_task.do?sys_id=aaebdb133b940b50b117ebc964e45afe&sysparm_record_target=problem_task&sysparm_record_row=3&sysparm_record_rows=3&sysparm_record_list=problem%3D9a5be8ae3b184710b117ebc964e45a13%5EORDERBYnumber) - Clarify Service Ownership – End of May 2026: reinforce that PSD is owned by the Digital/Platform team, despite the payments naming. Anthony will provide coaching to on-call resources.

We sincerely apologize for the disruption caused by this incident and are taking these steps to strengthen detection and response to prevent recurrence.

Enquiries: Jason Seale | Director of Client Support Services | [[email protected]](mailto:[email protected]) | 778.558.5627
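The corrective actions above add port-listening checks, because during the outage the PSD process appeared to be running while its application port was not accepting connections. As a rough sketch of what such a check does (the hostname and port below are hypothetical placeholders, not Central 1 endpoints), a plain TCP probe distinguishes "process running" from "port actually listening":

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This catches the PSD failure mode: a process that restarted but never
    bound its port passes process-level checks yet fails this probe.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoint for illustration; in practice the result would feed
# an alerting tool (the postmortem names Zenoss with PagerDuty escalation).
if not port_is_listening("psd.example.internal", 443):
    print("ALERT: application port not accepting connections")
```

A probe like this is deliberately shallow; the paired Site24x7 work goes further and validates the full login flow, since a listening port alone does not prove the application is healthy.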
Read the full incident report →
- Detected by Pingoru
- Apr 16, 2026, 07:28 PM UTC
- Resolved
- Apr 29, 2026, 01:26 PM UTC
- Duration
- 12d 17h
Affected: Digital Banking Services
Timeline · 7 updates
-
investigating Apr 16, 2026, 07:28 PM UTC
Please be advised that International transfer services are currently unavailable in online banking. We are conducting a thorough analysis of the systems involved and working towards resolving the issue. We will provide an update on or before Friday, April 17, 2 p.m. PT / 5 p.m. ET. [email protected] - 1.888.889.7878, option 2
-
investigating Apr 17, 2026, 09:24 PM UTC
We are continuing to investigate the root cause of the international transfer service interruption. Our technical teams remain focused on the analysis and resolution process. We will provide the next update on or before Monday, April 20, by 2 p.m. PT / 5 p.m. ET.
-
investigating Apr 21, 2026, 06:17 PM UTC
We are continuing to investigate the root cause of the international transfer service interruption. Our technical teams remain focused on the analysis and resolution process. We will provide the next update on or before Tuesday, April 21, by 2 p.m. PT / 5 p.m. ET.
-
investigating Apr 24, 2026, 12:33 AM UTC
We believe we have identified the root cause of the International Transfer Service interruption. A fix has been applied in our lower environments and is currently undergoing testing. Our technical teams are closely monitoring results as we progress toward resolution. We will provide the next update on or before Monday, April 27, by 2 p.m. PT / 5 p.m. ET [email protected] - 1.888.889.7878, option 2
-
investigating Apr 27, 2026, 10:43 PM UTC
Our teams continue to validate the fix identified for the international transfer service interruption. We are tentatively targeting a resolution later this week, pending successful testing. We will provide the next update tomorrow, on or before 2 p.m. PT / 5 p.m. ET. [email protected] - 1.888.889.7878, option 2
-
investigating Apr 28, 2026, 08:45 PM UTC
Please be advised that a fix will be deployed with Urgent Digital Banking Core Release 768 (CHG170247). The release has been scheduled for Wednesday, April 29, 2026 at 1 a.m. PT (4 a.m. ET). [email protected] - 1.888.889.7878
-
resolved Apr 29, 2026, 01:26 PM UTC
Please be advised that the urgent release is deployed and International Transfer is available in online banking.
Read the full incident report →
- Detected by Pingoru
- Apr 15, 2026, 06:02 PM UTC
- Resolved
- Apr 15, 2026, 08:27 PM UTC
- Duration
- 2h 24m
Affected: Digital Banking Services · Incident Alerting
Timeline · 3 updates
-
investigating Apr 15, 2026, 06:02 PM UTC
Please be advised that certain TeamSite components, including the Find a Branch/ATM tool, are experiencing inconsistent behaviour. Our internal platform team and OpenText are conducting an immediate investigation and are working to resolve the issue as quickly as possible. [email protected] - 1.888.889.7878, option 2
-
investigating Apr 15, 2026, 06:41 PM UTC
We are continuing to investigate this issue.
-
resolved Apr 15, 2026, 08:27 PM UTC
Please be advised that the reported performance issues have been resolved. The fix involved applying a necessary certificate to the environment. Following diligent work by the OpenText team, all services are now fully operational.
Read the full incident report →
- Detected by Pingoru
- Apr 14, 2026, 05:30 PM UTC
- Resolved
- Apr 14, 2026, 06:16 PM UTC
- Duration
- 45m
Affected: Digital Banking Services
Timeline · 3 updates
-
investigating Apr 14, 2026, 05:30 PM UTC
Please be advised that members may have intermittent login failures on Retail and Small Business banking. We are actively investigating to identify the root cause and assess the impact. Members using biometric login or unchallenged two-step verification (2SV) are not affected. We are conducting a thorough analysis of the systems involved and working toward resolving the issue. [email protected] - 1.888.889.7878, option 2
-
resolved Apr 14, 2026, 06:16 PM UTC
The incident has been resolved. We will complete a postmortem and share the results when available. [email protected] - 1.888.889.7878, option 2
-
postmortem Apr 21, 2026, 09:28 PM UTC
**Postmortem: INC219954 – Intermittent Two-Step Verification (2SV) Login Errors**

On Monday, April 14, 2026, beginning at approximately 12:30 a.m. PT, some credit union customers experienced intermittent login failures when accessing Retail and Small Business Online Banking. The issue persisted until 10:10 a.m. PT, when full service was restored. Customers affected by the issue may have been unable to complete logins requiring Two-Step Verification (2SV). Customers using biometric login or unchallenged 2SV flows were not impacted.

**Point of Failure:** One of the newly added MD Authentication (MDAuth) servers entered a degraded state following its routine nightly restart. Although the server appeared operational, it was unable to reliably process 2SV authentication requests, resulting in intermittent login failures for users routed to that server. Under normal circumstances, Dynatrace monitoring alerts the team when an authentication server enters a degraded state. However, this newly added server was configured with an incorrect monitoring mode, preventing the expected alerts from triggering. As a result, the server remained in rotation longer than expected while serving failed authentication requests.

**Resolution:** The affected MDAuth server was temporarily removed from the load balancer, immediately stabilizing 2SV login functionality. No further issues were observed following its removal.

**Corrective Actions:**

* Server Removed from Load Balancer – COMPLETED: the affected authentication server was suspended from traffic to restore service stability.
* Monitoring Configuration Corrected – COMPLETED: Dynatrace monitoring has been updated to ensure the server is tracked under the correct monitoring mode and generates alerts for this failure condition.
* Post-Deployment Validation Review – COMPLETED: monitoring configuration checks have been reinforced for newly added authentication servers to ensure alerting is active prior to placing them into production rotation.

We apologize for the disruption this caused to your operations and your customers. If you have any questions or would like additional detail, please contact Digital Banking Support.

Digital Banking Support | [[email protected]](mailto:[email protected]) | 1.888.889.7878 (press 2)
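The failure mode above, a nightly restart that leaves a server looking "up" while unable to serve requests, is commonly mitigated by gating a server's re-entry into the load balancer on a passing health probe rather than on the restart completing. A minimal sketch, assuming a hypothetical `check()` callable that exercises the authentication path end to end (nothing here reflects Central 1's actual tooling):

```python
import time

def wait_until_healthy(check, attempts: int = 5, delay: float = 2.0) -> bool:
    """Return True only once `check()` passes; otherwise give up after
    `attempts` tries.

    Used as a gate after a restart: the server is only returned to the
    load balancer rotation if this returns True, so a process that came
    back in a degraded state never receives live traffic.
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Demo with a stand-in probe that fails twice before passing.
checks = iter([False, False, True])
print(wait_until_healthy(lambda: next(checks), delay=0))  # True
```

The key design point is that the probe must exercise the same path users do (here, 2SV), since the incident showed a server can pass shallow process checks while failing real authentication requests.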
Read the full incident report →
- Detected by Pingoru
- Apr 14, 2026, 03:51 PM UTC
- Resolved
- Apr 14, 2026, 03:51 PM UTC
- Duration
- —
Affected: Payment Services
Timeline · 1 update
-
resolved Apr 14, 2026, 03:51 PM UTC
Central 1’s ORS (Online Return System) batch return job did not run at 4 p.m. PT (7 p.m. ET) or 11 p.m. PT (2 a.m. ET) yesterday. The files have been uploaded for processing, and the transactions will be recorded in tomorrow’s ICBS and ICBR reports. Central 1 - [email protected] - 1.888.889.7878, press
Read the full incident report →
- Detected by Pingoru
- Apr 06, 2026, 04:15 PM UTC
- Resolved
- Apr 06, 2026, 05:37 PM UTC
- Duration
- 1h 22m
Affected: Central 1 Services · Payment Services · Treasury Services
Timeline · 2 updates
-
investigating Apr 06, 2026, 04:15 PM UTC
Central 1 has advised Fiserv that we are experiencing latency processing incoming wires through their EPP service. There is no impact on creating wires in PaymentStream Direct, and Central 1 has not observed any failed wires currently. Fiserv has initiated an active investigation into the performance issue. A further update will be provided within the next hour. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Apr 06, 2026, 05:37 PM UTC
The latency issue impacting the wires repair queue has been resolved. There was no observed impact to incoming or outgoing wire processing during this morning’s event. Fiserv will continue their investigation into the underlying performance issue to support ongoing product stability. Central 1 – [email protected] – 1.888.889.7878, press 1
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 02:26 PM UTC
- Resolved
- Apr 02, 2026, 04:06 PM UTC
- Duration
- 1h 40m
Affected: Payment Services
Timeline · 4 updates
-
identified Apr 02, 2026, 02:26 PM UTC
Central 1 is experiencing a gateway issue that has stopped processing of all incoming and outgoing wires traffic. Central 1's 3rd-party supplier is working with IBM to resolve the gateway incident. An update will be provided by 8:30 a.m. PT (11:30 a.m. ET). Central 1 - [email protected] - 1.888.889.7878, press 1
-
identified Apr 02, 2026, 03:32 PM UTC
Central 1 continues to experience a service disruption impacting incoming and outgoing wire processing. The issue began at approximately 6:00 a.m. PT (9:00 a.m. ET) and is affecting all wires initiated after this time. We are completing a manual workaround to release pending wires as we wait for our third-party provider to resolve their gateway-related issue with IBM. Work is underway to restore MQ services and resume normal processing. Wire creation in PaymentStream Direct remains available; however, transmission and processing are currently delayed. Our teams remain fully engaged, and we will provide the next update within 30 minutes. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Apr 02, 2026, 04:06 PM UTC
The issue impacting incoming and outgoing wire processing has now been resolved. Service was restored, and wire processing resumed at approximately 8:50 a.m. PT (11:50 a.m. ET). During the incident, wires initiated after 6:00 a.m. PT experienced delays due to a third-party gateway issue. All services are now operating normally, and any remaining queued wires are being processed. We apologize for the disruption and appreciate your patience. If you continue to experience any issues, please contact our support team. Central 1 - [email protected] - 1.888.889.7878, press 1
-
postmortem Apr 21, 2026, 01:15 PM UTC
**INC219290 Postmortem: Wire Transfer Service Disruption**

On April 2, 2026, between 5:25 a.m. and 10:21 a.m. PT (8:25 a.m. to 1:21 p.m. ET), Central 1 experienced a service disruption impacting incoming and outgoing wire transfers. During this time, wire creation through PaymentStream Direct remained available; however, transmission, processing, and transaction inquiry functions were unavailable or significantly delayed due to an issue within a third-party service provider. The disruption was caused by failures within Fiserv's Enterprise Payments Platform (EPP), which supports wire processing and regulatory screening. As a result, wires could not be transmitted or processed, leading to delays in time-sensitive payments and increased client escalations. Service was fully restored after the vendor applied corrective actions, and normal processing resumed.

**Point of Failure:** The incident originated within Fiserv's infrastructure across multiple layers. An automated process within Fiserv's environment incorrectly modified hostname configurations on IBM MQ gateway servers, preventing MQ services from starting and halting wire message processing. In parallel, the required database configuration settings were not consistently applied in the older environment, leading to degraded performance, timeouts, and instability in the Enterprise Payments Platform. Together, these issues led to a complete interruption of wire processing services.

**Corrective Actions:** Fiserv implemented emergency changes to restore MQ services and correct database configuration settings, stabilizing the platform and resuming normal wire processing. As part of their follow-up actions, Fiserv is standardizing database configurations across all environments, reinforcing installation and configuration practices with their teams, and introducing validation controls to ensure required settings are consistently applied. At Central 1, we are enhancing monitoring to improve early detection of third-party processing failures and reviewing our vendor support and escalation processes to ensure faster resolution. We are also assessing our internal incident handling procedures to strengthen coordination and response effectiveness during third-party outages.

* PTASK0010523 - Fiserv Incident and Root Cause Report – COMPLETED
* PTASK0010524 - Improved Wires Monitoring - COMPLETED
* PTASK0010526 – Assess and Document C1 Incident Handling Procedures – Due end of April 2026
* PTASK0010525 - C1 Vendor Support Process Review - Due end of Q2 2026

We sincerely apologize for the disruption and the impact this incident had on your operations, particularly for time-sensitive wire transactions, and we appreciate your patience while services were restored.
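Fiserv's follow-up work includes validation controls to ensure required settings are consistently applied across environments, since the outage combined a wrongly modified MQ hostname with missing database settings. In general terms (the baseline keys and values below are invented for illustration, not Fiserv's or Central 1's configuration), such a control compares deployed configuration against an expected baseline and flags drift before it causes an outage:

```python
def config_drift(expected: dict, actual: dict) -> dict:
    """Return {key: (expected_value, actual_value)} for every baseline key
    whose deployed value differs from, or is missing versus, the baseline."""
    return {
        key: (expected[key], actual.get(key))
        for key in expected
        if actual.get(key) != expected[key]
    }

# Hypothetical baseline vs. a deployment where an automated process
# overwrote the gateway hostname (the kind of change behind this incident).
baseline = {"mq_hostname": "mqgw01.example.internal", "db_timeout_s": 30}
deployed = {"mq_hostname": "localhost", "db_timeout_s": 30}
print(config_drift(baseline, deployed))
# → {'mq_hostname': ('mqgw01.example.internal', 'localhost')}
```

Run as a pre-start or scheduled check, a non-empty result would block startup or raise an alert, which is the "validation control" idea in miniature.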
Read the full incident report →
- Detected by Pingoru
- Apr 02, 2026, 01:01 AM UTC
- Resolved
- Apr 02, 2026, 01:01 AM UTC
- Duration
- —
Affected: Central 1 Services · Payment Services · Digital Banking Services
Timeline · 2 updates
-
resolved Apr 02, 2026, 01:01 AM UTC
Central 1 received alerts starting around 4:20 p.m. PT (7:20 p.m. ET) for UCS service degradation (around a 50% reduction in traffic), impacting e-Transfers, Bill Payments, Remote Deposit Capture (RDC) Payments Services, and login services (OIDC) to Digital Banking (Forge and MemberDirect). The service degradation lasted from 4:17 to 4:34 p.m. PT (7:17 to 7:34 p.m. ET) (17-minute degradation), when services recovered; they have remained stable since. Central 1 teams are working on assessing the point of failure and will continue to monitor for service stability throughout the evening. A postmortem will be completed within the next two weeks. Central 1 - [email protected] - 1.888.889.7878, press 1
-
postmortem Apr 09, 2026, 05:56 PM UTC
**Postmortem: INC219263 – UCS Service Degradation Impacting Payments and Digital Banking** On April 1, 2026, between 4:17 p.m. and 4:34 p.m. PT \(7:17 p.m. to 7:34 p.m. ET\), Central 1 experienced a service degradation affecting multiple payment and authentication services. During this 17-minute window, clients may have experienced intermittent issues with Interac e-Transfers \(3.4/Iso8583\), Bill Payments, Remote Deposit Capture \(RDC\), and login services supporting the Forge and MemberDirect Digital Banking platforms. Service performance degraded, resulting in 50% traffic failing in the impact window. Service self-recovered at 4:34 p.m. PT \(7:34 p.m. ET\) , with all services remaining stable thereafter. We recognize the importance of these services to your operations and sincerely apologize for the disruption experienced during this time. **Point of Failure**: The degradation coincided with an unauthorized infrastructure event involving Central 1’s virtual firewall environment in Microsoft Azure. A hot-plug activity \(adding or removing a component or device from a server while it is active\) performed by our vendor, Palo Alto Networks, triggered an unexpected firewall reboot during business hours. This resulted in a brief interruption to network traffic, impacting upstream services. The issue was resolved by the vendor without requiring intervention from Central 1, and services recovered immediately following the event. **PRB011705 - Corrective Actions:** * PTASK0010529 – Vendor Patching Maintenance Enhancement - COMPLETED Central 1 has formally escalated this incident with Palo Alto Networks to reinforce adherence to agreed maintenance windows and change management expectations. We are working with the vendor to ensure that any future infrastructure activities are either conducted within approved maintenance windows or communicated in advance to prevent unintended service impact. 
Internally, we have opened a high-priority problem record \(PRB011705\) to further assess monitoring, vendor coordination, and response readiness, ensuring improved visibility and faster validation of root cause in similar scenarios. We recognize how important these services are to your operations, apologize for the disruption, and appreciate your patience during this event. Jason Seale | Director of Client Support Services | [[email protected]](mailto:[email protected]) 778.558.5627
- Detected by Pingoru
- Mar 31, 2026, 02:27 PM UTC
- Resolved
- Mar 31, 2026, 05:35 PM UTC
- Duration
- 3h 8m
Affected: Payment Services
Timeline · 3 updates
-
monitoring Mar 31, 2026, 02:27 PM UTC
Central 1 has been advised by Payments Canada that BNC experienced a delay in delivering its AFT file during the evening exchange on March 30 and the morning AFT exchange on March 31. We will provide an update for the BNC AFT files as soon as we receive them from Payments Canada. Central 1 - [email protected] - 1.888.889.7878, press 1
-
monitoring Mar 31, 2026, 02:56 PM UTC
BNC confirmed with Payments Canada that there were no issues with this morning's outbound AFT files. The issue strictly affected AFT files destined for yesterday's evening AFT exchange cut-off. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Mar 31, 2026, 05:35 PM UTC
Central 1 received the evening AFT exchange files from BNC before the Payments Canada AFT exchange cut-off. The transactions were included in the March 30 CR01/ON01 AFT files. There was no issue with any of BNC’s AFT files this morning. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Mar 31, 2026, 11:51 AM UTC
- Resolved
- Mar 31, 2026, 11:51 AM UTC
- Duration
- —
Affected: Payment Services
Timeline · 1 update
-
resolved Mar 31, 2026, 11:51 AM UTC
Central 1 has been advised by Payments Canada that BNC experienced a delay in delivering its AFT file during the evening exchange. The AFT files missed in the evening exchange will be transmitted in the first AFT exchange window and will be included in this morning’s CR02/ON02 AFT file. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Mar 30, 2026, 02:11 PM UTC
- Resolved
- Mar 30, 2026, 02:36 PM UTC
- Duration
- 25m
Affected: Central 1 ServicesPayment ServicesTreasury ServicesDigital Banking ServicesIncident Alerting
Timeline · 3 updates
-
investigating Mar 30, 2026, 02:11 PM UTC
Users are experiencing issues accessing the Client Centre (clients.central1.com) and the services and applications behind it. We are conducting a thorough analysis of the systems involved and working toward resolving the issue. Central 1’s technical team is investigating and will provide another update by 8:00 a.m. PT (11:00 a.m. ET). Central 1 - [email protected] - 1.888.889.7878
-
resolved Mar 30, 2026, 02:36 PM UTC
The Client Centre (clients.central1.com) access issues have been resolved. Central 1 - [email protected] - 1.888.889.7878
-
postmortem Apr 22, 2026, 01:52 PM UTC
**INC219102 – Error When Accessing the Client Centre**

On March 30, 2026, between 3:00 a.m. PT \(6:00 a.m. ET\) and 4:15 a.m. PT \(7:15 a.m. ET\), Central 1 customers experienced intermittent access issues \(approximately 50%\) when accessing Client Centre and related services and applications. The issue was caused by how login requests were being directed within the system. While traffic is normally spread across available servers, some login attempts were sent to servers that were not available. Because the system tried to keep users on the same path for consistency, repeated login attempts continued to be directed to the same unavailable server.

**Point of Failure**

This incident was triggered by a routine system change made on Sunday, March 29, 2026. A background process responsible for managing system capacity did not complete successfully. This did not have an immediate impact but became apparent once normal business traffic increased, at which point the system reached its limits. As a result of this condition, some login requests could not be properly processed, leading to login failures for approximately half of the users. In some cases, affected users continued to experience the issue on repeated attempts until they refreshed their session \(for example, by closing and reopening their browser\). To restore service, the two impacted login servers were removed from service, allowing traffic to be handled by the remaining healthy servers and resolving the issue.

**PRB011701 – Actions & Lessons Learned**

Central 1 has reinforced the requirement for all teams to create change tickets and adhere to the established change management process.

* **PTASK0010527** – Monitoring did not alert on disk usage prior to the server reaching full capacity – _Complete_
* **PTASK0010532** – Investigate enhancing ADFS probes beyond basic HTTP checks – _Target: Q2 2026_

We apologize for the disruption this caused to your operations and your members.
If you have any questions or require additional details, please contact Support. Central 1 – Support | [[email protected]](mailto:[email protected]) | 1.888.889.7878 \(press 1\)
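The routing behaviour described in this postmortem can be illustrated with a small sketch. This is a hypothetical model, not Central 1's actual infrastructure: server names, the hash-based affinity scheme, and health states are all assumptions made for illustration. It shows why session affinity pins a user's repeated login attempts to the same unavailable server, why a fresh session (e.g. reopening the browser) can succeed, and why removing unhealthy servers from the pool resolves the issue.

```python
# Hypothetical sketch of session-affinity routing; names and hashing are illustrative.
import hashlib

SERVERS = ["adfs-01", "adfs-02", "adfs-03", "adfs-04"]
HEALTHY = {"adfs-01": True, "adfs-02": True, "adfs-03": False, "adfs-04": False}

def pick_server(session_id: str, pool: list[str]) -> str:
    """Deterministic affinity: the same session always maps to the same server."""
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

def login(session_id: str, pool: list[str]) -> str:
    server = pick_server(session_id, pool)
    return "ok" if HEALTHY[server] else "login failed"

# A session hashed to an unhealthy server fails on *every* retry, because
# affinity keeps routing it to the same server. A new session ID (new browser
# session) may map elsewhere and succeed.
# Removing the unhealthy servers from the pool restores service for everyone:
healthy_pool = [s for s in SERVERS if HEALTHY[s]]
assert all(login(f"user-{i}", healthy_pool) == "ok" for i in range(100))
```

The fix applied in the incident corresponds to the last step: shrinking the pool to the healthy servers so every session re-maps onto a working backend.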
- Detected by Pingoru
- Mar 13, 2026, 03:43 PM UTC
- Resolved
- Mar 13, 2026, 03:43 PM UTC
- Duration
- —
Affected: Payment Services
Timeline · 1 update
-
resolved Mar 13, 2026, 03:43 PM UTC
We have identified that the ORS (Online Return System) batch return job did not run at 4 p.m. PT (7 p.m. ET) or 11 p.m. PT (2 a.m. ET) yesterday. Some organizations received the reports for yesterday, but they are incomplete. The files have been uploaded for processing, and the transactions will be recorded in tomorrow’s ICBS and ICBR reports. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Mar 13, 2026, 02:52 PM UTC
- Resolved
- Mar 13, 2026, 02:51 PM UTC
- Duration
- —
Timeline · 1 update
-
resolved Mar 13, 2026, 02:52 PM UTC
Central 1 has identified that some organizations are missing their ICBS (BATCH RETURNS USER SUMMARY LIST) and ICBR (BATCH RETURNS USER LIST) reports this morning. Our technical team is investigating, and we will provide an update by 9 a.m. PT (12 p.m. ET). Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Mar 06, 2026, 10:08 PM UTC
- Resolved
- Mar 06, 2026, 10:08 PM UTC
- Duration
- —
Affected: Payment Services
Timeline · 1 update
-
resolved Mar 06, 2026, 10:08 PM UTC
Please be advised that the settlement to Central 1 accounts for the March 5 CR04 AFT Clearing file was delayed. The settlement has been posted today, with an effective date of March 5. Note, this is a delay in Central 1 account settlement only; the files were delivered to FTP directories at the usual time yesterday afternoon. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Mar 05, 2026, 04:05 PM UTC
- Resolved
- Mar 05, 2026, 04:58 PM UTC
- Duration
- 52m
Affected: Digital Banking Services
Timeline · 4 updates
-
investigating Mar 05, 2026, 04:05 PM UTC
We are investigating intermittent problems with SMS delivery for 2‑Step Verification. Affected members might encounter messages like “code cannot be delivered” or “invalid code.” As a temporary workaround, please ask members to retry the action—additional attempts typically succeed. We will provide updates as more information becomes available. [email protected] - 1.888.889.7878, option 2
-
investigating Mar 05, 2026, 04:05 PM UTC
We are continuing to investigate this issue.
-
resolved Mar 05, 2026, 04:58 PM UTC
Please be aware that we experienced a service interruption impacting approximately a quarter of SMS-challenged logins and transactional step-ups between 12:39 a.m. PT (3:39 a.m. ET) and 8:31 a.m. PT (11:31 a.m. ET). During this time, users may have been unable to complete login attempts or transactions that required additional authentication. The incident has been resolved. We will complete a postmortem and share the results when available. [email protected] - 1.888.889.7878, option 2
-
postmortem Apr 21, 2026, 06:59 PM UTC
**Postmortem: INC217878 – Intermittent Two‑Step Verification \(2SV\) SMS Issues**

On March 5, 2026, beginning at approximately 12:39 a.m. PT \(3:39 a.m. ET\), some customers experienced intermittent issues when attempting to complete Two‑Step Verification \(2SV\) via SMS. Approximately 25% of SMS‑challenged logins and transactional step‑ups were impacted during this period. Full service was restored by 8:31 a.m. PT \(11:31 a.m. ET\). Customers affected by the issue may have received error messages such as “code cannot be delivered” or “invalid code.” In most cases, retrying the action resulted in a successful authentication. Other authentication methods were unaffected.

**Point of Failure:** One of the MD authentication servers entered a degraded state following a restart. Although the server appeared online, it was unable to reliably process SMS‑based 2SV requests, resulting in intermittent delivery and validation failures for users routed to that server. The error observed in system logs differed from previously encountered authentication server failures and was not immediately detected by existing alerting. Because the server was not fully offline, it remained in rotation longer than expected, impacting approximately one‑quarter of SMS‑challenged authentication attempts. The affected authentication server was removed from the service pool, immediately stabilizing 2SV SMS functionality.

**Corrective Actions:**

* Affected Server Removed – COMPLETED. The degraded MD authentication server was removed from the authentication pool, restoring normal service.
* Enhanced Alerting for New Error Pattern – COMPLETED. Monitoring and alerting have been updated to detect this specific log error earlier, enabling faster identification and response in similar scenarios.
* Operational Review – COMPLETED. The incident was reviewed against existing restart and recovery procedures to ensure alignment and identify any additional improvements.
We apologize for the inconvenience this caused to your operations and your customers. If you have any questions or would like additional detail, please contact Digital Banking Support. Digital Banking Support | [[email protected]](mailto:[email protected]) | 1.888.889.7878 \(press 2\)
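The core gap this postmortem describes, a server that "appeared online" yet could not process SMS 2SV, can be sketched as the difference between a basic liveness probe and a functional probe. This is an illustrative model only; the class names, fields, and probe logic are assumptions, not Central 1's monitoring implementation.

```python
# Hypothetical sketch: basic "is it up?" checks keep a degraded server in
# rotation, while a functional probe that exercises the 2SV path ejects it.
from dataclasses import dataclass

@dataclass
class AuthServer:
    name: str
    online: bool          # answers basic HTTP (what existing alerting saw)
    sms_2sv_works: bool   # can actually deliver/validate SMS codes

def basic_probe(s: AuthServer) -> bool:
    return s.online

def functional_probe(s: AuthServer) -> bool:
    # Exercise the real dependency, not just the listener.
    return s.online and s.sms_2sv_works

pool = [
    AuthServer("md-auth-01", online=True, sms_2sv_works=True),
    AuthServer("md-auth-02", online=True, sms_2sv_works=False),  # degraded after restart
]

in_rotation_basic = [s.name for s in pool if basic_probe(s)]
in_rotation_functional = [s.name for s in pool if functional_probe(s)]
# The basic probe keeps both servers in rotation; the functional probe
# removes the degraded one -- mirroring the manual pool removal that
# stabilized 2SV SMS in this incident.
```

The "Enhanced Alerting for New Error Pattern" action amounts to making the deployed checks behave more like `functional_probe` for this failure mode.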
- Detected by Pingoru
- Mar 02, 2026, 06:30 PM UTC
- Resolved
- Mar 03, 2026, 06:21 PM UTC
- Duration
- 23h 51m
Affected: Payment Services
Timeline · 2 updates
-
identified Mar 02, 2026, 06:30 PM UTC
Please be advised that cash parcel (CP) settlements for parcels delivered on Friday, February 27 did not include the February 26 files. The cash parcel data missing from Central 1 reports and the PREC/OREC file has been traced to an issue originating on RBC’s end. We have escalated the matter with RBC and are actively following up. We will provide further updates as soon as more information becomes available. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Mar 03, 2026, 06:21 PM UTC
The cash settlement for February 26 has been processed. The transactions have been included in this morning's PREC/OREC file. Contact Central 1 Support if you have any questions. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Feb 24, 2026, 06:53 PM UTC
- Resolved
- Feb 24, 2026, 10:58 PM UTC
- Duration
- 4h 4m
Affected: Treasury Services
Timeline · 2 updates
-
investigating Feb 24, 2026, 06:53 PM UTC
Globex 2000 has notified Central 1 that they are experiencing system issues and that users are receiving a “User has been disabled” or "Error. No branches assigned. Please contact administrator" message when trying to access the FX Notes application. Central 1 and Globex 2000 are investigating, and we will provide an update by 2 p.m. PT (5 p.m. ET) or sooner if the incident is resolved. Central 1 - [email protected] - 1.888.889.7878, press 1
-
resolved Feb 24, 2026, 10:58 PM UTC
Users can now access the FX Notes application. If your organization continues to experience access issues, contact Central 1 Support. Central 1 - [email protected] - 1.888.889.7878, press 1
- Detected by Pingoru
- Feb 24, 2026, 05:07 AM UTC
- Resolved
- Feb 24, 2026, 05:07 AM UTC
- Duration
- —
Affected: Payment Services
Timeline · 2 updates
-
resolved Feb 24, 2026, 05:07 AM UTC
Central 1 experienced a service disruption on e-Transfer version 3.5 (ISO 20022) between 7:35 p.m. PT (10:35 p.m. ET) and 8:47 p.m. PT (11:47 p.m. ET) that would have affected all transactions. Service has since been restored. Autodeposits received during that time will automatically be retried, requiring no further action. There was no impact to 3.4/ISO8583 transactions. We apologize for any inconvenience caused by this service interruption. Central 1 - [email protected] - 1.888.889.7878, press 1
-
postmortem Mar 20, 2026, 05:18 PM UTC
**Postmortem: INC217374 – e-Transfer 3.5/ISO20022 Outage**

On Monday, February 23, 2026, Central 1 experienced a service disruption to e-Transfer version 3.5 \(ISO 20022\) between 7:35 p.m. PT \(10:35 p.m. ET\) and 8:47 p.m. PT \(11:47 p.m. ET\), which affected both send and receive transactions. When service was restored, Auto-deposits received during that time were automatically retried, requiring no further action. There was no impact to 3.4/ISO8583 transactions.

**Point of Failure:** The outage was caused by a failure within the messaging infrastructure supporting Interac e-Transfer 3.5, specifically the Amazon MQ \(ActiveMQ\) PSA Trace broker reaching full storage capacity. This occurred after two trace consumer instances experienced a sustained, unexplained degradation in network throughput beginning on February 19, significantly reducing their ability to process trace messages. As a result, messages accumulated in both the primary trace queue and the dead-letter queue, preventing the system from clearing processed data and ultimately exhausting available storage. Once the broker reached capacity, built-in flow control mechanisms halted all message publishing across the platform, which in turn blocked payment processing and led to downstream application failures, including loss of connectivity within the EMT service. Although the issue originated in a non-functional trace processing component, it had a cascading impact on core payment flows. Service was restored by clearing the accumulated backlog and restarting affected services, while AWS continues to investigate the underlying cause of the EC2 network degradation.

**Corrective Actions \(PRB011685\):**

* Add monitoring & alerts for PSA trace broker – Complete
* Evaluate disabling PSA trace queues \(partially or fully\) – Q2 2026
* AWS Investigation – Ongoing by vendor

We apologize for the disruption and appreciate your patience while Central 1 works internally to improve system functionality.
Central 1 - [[email protected]](mailto:[email protected]) - 1.888.889.7878
- Detected by Pingoru
- Feb 12, 2026, 11:20 PM UTC
- Resolved
- Feb 13, 2026, 12:36 AM UTC
- Duration
- 1h 15m
Affected: Digital Banking ServicesIncident Alerting
Timeline · 2 updates
-
identified Feb 12, 2026, 11:20 PM UTC
Please be advised that OpenText is restarting Teamsite. This will cause a 2-minute full outage of the authoring/editing interface. There will be no impact on live sites (your customers or members will not be impacted). [email protected] - 1.888.889.7878, option 2
-
resolved Feb 13, 2026, 12:36 AM UTC
The incident has been resolved.