Pipefy Outage History

Pipefy is up right now

Pipefy has had 10 outages since February 12, 2026, totaling 89h 37m of downtime. Each incident is summarized below with its details, duration, and resolution information.

Source: https://status.pipefy.com

Notice April 1, 2026

Custom Integrations/iPaaS: Rate Limiting Issues

Detected by Pingoru
Apr 01, 2026, 07:40 PM UTC
Resolved
Apr 02, 2026, 02:23 PM UTC
Duration
18h 42m
Affected: Integrations
Timeline · 3 updates
  1. monitoring Apr 02, 2026, 01:15 PM UTC

    We have identified that a recent internal infrastructure update caused unexpected rate limiting on our iPaaS platform, resulting in temporary delays or 429 errors in custom integration workflows. Our Infrastructure team has identified the root cause and applied a fix to restore normal traffic. We are closely monitoring the environment to ensure stability. Normal service operation is expected across all affected organizations.

  2. resolved Apr 02, 2026, 02:23 PM UTC

    This incident has been resolved. Additional information will be provided shortly in our postmortem report.

  3. postmortem Apr 06, 2026, 07:14 PM UTC

    **Root Cause** Recently, efforts were made to separate the infrastructure to better handle increased demand. This caused previously set exception rules to fail: an update was applied to a specific host or domain without verifying its use in the integration, causing the rate-limiting issues. **Resolution** The issue was resolved by modifying the configuration rules in the iPaaS app. **Action Plan** - An anomaly alert will be created for the iPaaS platform to prevent this scenario from happening again.

Read the full incident report →
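Clients that hit 429 responses like the ones in this incident can soften the impact with retries and exponential backoff, honoring the standard Retry-After header when the server sends one. A minimal sketch, assuming nothing Pipefy-specific (the retry counts and delays are illustrative):

```python
import time
import urllib.error
import urllib.request


def backoff_delay(attempt, retry_after=None, base_delay=1.0):
    """Seconds to wait before retry `attempt` (0-based): honor a
    Retry-After header value if the server sent one, otherwise
    back off exponentially."""
    if retry_after is not None:
        return float(retry_after)
    return base_delay * (2 ** attempt)


def get_with_backoff(url, max_retries=5):
    """GET `url`, retrying only on HTTP 429 (rate limited)."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # non-rate-limit errors are not retried
            time.sleep(backoff_delay(attempt, err.headers.get("Retry-After")))
    raise RuntimeError(f"still rate limited after {max_retries} attempts")
```

With `base_delay=1.0` the waits grow 1s, 2s, 4s, and so on, unless the server specifies its own delay via Retry-After.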

Notice March 30, 2026

Filters and search interface degradation

Detected by Pingoru
Mar 30, 2026, 12:53 PM UTC
Resolved
Mar 30, 2026, 04:53 PM UTC
Duration
4h
Affected: Dashboards, Filters
Timeline · 2 updates
  1. investigating Mar 31, 2026, 07:50 PM UTC

    On March 30th, the platform experienced a degradation that impacted filter and search functionality. This incident was opened retroactively to give customers visibility. The problem is no longer occurring. Thank you for your understanding.

  2. resolved Mar 31, 2026, 07:51 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor March 6, 2026

Integrations slowness

Detected by Pingoru
Mar 06, 2026, 03:44 PM UTC
Resolved
Mar 06, 2026, 05:28 PM UTC
Duration
1h 44m
Affected: Integrations
Timeline · 5 updates
  1. investigating Mar 06, 2026, 03:44 PM UTC

    We are currently investigating this issue.

  2. investigating Mar 06, 2026, 03:45 PM UTC

    We are continuing to investigate this issue.

  3. monitoring Mar 06, 2026, 04:12 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Mar 06, 2026, 05:28 PM UTC

    This incident has been resolved.

  5. postmortem Mar 10, 2026, 09:02 PM UTC

    **Root Cause** The issue was caused by a failure in the integration feature, which led to an overload in the system. An unoptimized job created thousands of subflows, quickly filling up memory and affecting system availability. The misuse of the integration was identified as the root cause, prompting the disabling of the problematic flow as an initial step to mitigate the issue. **Resolution** To resolve the issue, the team deactivated the problematic flow and partially cleared the queues. This initial solution aimed to prevent new calls and address the items stuck in the queue. **Action Plan** **We will continue to monitor the system closely to ensure stability and prevent future issues. Our focus remains on continuously improving the customer experience by optimizing our processes and enhancing communication between teams.**

Read the full incident report →

Minor March 6, 2026

Integrations failure

Detected by Pingoru
Mar 06, 2026, 03:00 PM UTC
Resolved
Mar 06, 2026, 03:09 PM UTC
Duration
8m
Timeline · 5 updates
  1. investigating Mar 06, 2026, 03:08 PM UTC

    We are currently investigating this issue.

  2. monitoring Mar 06, 2026, 03:08 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. monitoring Mar 06, 2026, 03:09 PM UTC

    We are continuing to monitor for any further issues.

  4. resolved Mar 06, 2026, 03:09 PM UTC

    This incident has been resolved.

  5. postmortem Mar 10, 2026, 09:10 PM UTC

    **Root Cause** The issue was caused by a failure in the integration feature, which led to an overload in the system. An unoptimized job created thousands of subflows, quickly filling up memory and affecting system availability. The misuse of the integration was identified as the root cause, prompting the disabling of the problematic flow as an initial step to mitigate the issue. **Resolution** To resolve the issue, the team deactivated the problematic flow and partially cleared the queues. This initial solution aimed to prevent new calls and address the items stuck in the queue. **Action Plan** **We will continue to monitor the system closely to ensure stability and prevent future issues. Our focus remains on continuously improving the customer experience by optimizing our processes and enhancing communication between teams.**

Read the full incident report →

Notice March 4, 2026

Dashboards Outage

Detected by Pingoru
Mar 04, 2026, 09:23 PM UTC
Resolved
Mar 04, 2026, 09:50 PM UTC
Duration
27m
Affected: Dashboards
Timeline · 4 updates
  1. investigating Mar 04, 2026, 09:23 PM UTC

    We are currently investigating this issue.

  2. investigating Mar 04, 2026, 09:23 PM UTC

    We are continuing to investigate this issue.

  3. monitoring Mar 04, 2026, 09:45 PM UTC

    A fix has been implemented and we are monitoring the results.

  4. resolved Mar 04, 2026, 09:50 PM UTC

    This incident has been resolved.

Read the full incident report →

Minor March 4, 2026

Third-party provider, Oracle Cloud (OCI) Platform Instability

Detected by Pingoru
Mar 04, 2026, 01:34 PM UTC
Resolved
Mar 05, 2026, 10:46 AM UTC
Duration
21h 11m
Affected: Application, Dashboards
Timeline · 14 updates
  1. investigating Mar 04, 2026, 01:34 PM UTC

    We are currently investigating this issue.

  2. identified Mar 04, 2026, 01:54 PM UTC

    Customers may notice that certain requests are taking longer than usual to process.

  3. identified Mar 04, 2026, 02:03 PM UTC

    We have identified the queue processing delays as being caused by an ongoing infrastructure issue on the side of our third-party provider, Oracle Cloud (OCI).

  4. identified Mar 04, 2026, 02:04 PM UTC

    We have identified the queue processing delays as being caused by an ongoing infrastructure issue on the side of our third-party provider, Oracle Cloud (OCI).

  5. identified Mar 04, 2026, 02:51 PM UTC

    We are continuing to work on a fix for this issue.

  6. identified Mar 04, 2026, 03:04 PM UTC

    Platform Instability

    We are currently experiencing platform instability affecting Pipefy. Our team has identified that the issue is related to a network problem in the Oracle Cloud Infrastructure (OCI) region where our servers are hosted. Due to this infrastructure issue, we are temporarily unable to allocate enough machines to keep our multi-tenant environment fully stable. As a result, some users may experience slowness or intermittent access to the platform.

    What we are doing: Our engineering team is actively working together with OCI to resolve the issue as quickly as possible. We are dedicating all necessary resources and efforts to restore full platform stability.

    Impact: Some customers may experience degraded performance or intermittent availability while the incident is ongoing. At this moment, there is no indication of data loss or data exposure.

    Next steps: We will continue to monitor the situation closely and provide updates as progress is made. Once the incident is fully resolved, we will conduct a detailed Root Cause Analysis (RCA) and share the findings and preventive actions with our customers.

    We sincerely apologize for the disruption and appreciate your patience and understanding while we work to restore normal service.

    Next update: in 2 hours.

  7. identified Mar 04, 2026, 05:01 PM UTC

    Major update: Performance Improvement and Ongoing Monitoring

    We would like to share an update regarding the platform instability affecting Pipefy. Our engineering team has implemented mitigation measures that have significantly improved platform performance and stability. The service is currently operating in a more stable state compared to earlier during the incident. We continue to closely monitor the environment together with the third party to ensure stability is maintained and to quickly address any remaining impact.

    Current status: Most platform functionalities should now be operating normally, although some users may still experience occasional performance fluctuations while we complete our monitoring and stabilization efforts.

    Data integrity: There is no indication of data loss, exposure, or compromise.

    We will continue to monitor the situation and will provide another update once we confirm the platform is fully stable. Next update in 2 hours.

  8. identified Mar 04, 2026, 07:36 PM UTC

    Major update: Performance Improvement and Ongoing Monitoring

    We would like to share an update regarding the platform instability affecting Pipefy. The service is now operating in a more stable state. We continue to closely monitor the environment together with the third party to ensure stability is maintained and to quickly address any remaining impact.

    Current status: Most platform functionalities should now be operating normally while we complete our monitoring and stabilization efforts and await the official update from our third-party provider.

    We will continue to monitor the situation and will provide another update once we confirm the platform is fully stable. Follow the third-party updates at: https://ocistatus.oraclecloud.com/#/incidents/ocid1.oraclecloudincident.oc1.phx.amaaaaaavwew44aahmtvmmxusawis4baxh73alctlnntgb3jtf2lakwtp27a

  9. monitoring Mar 04, 2026, 08:54 PM UTC

    A fix has been implemented and we are monitoring the results.

  10. monitoring Mar 04, 2026, 09:13 PM UTC

    We are continuing to monitor for any further issues.

  11. monitoring Mar 04, 2026, 09:14 PM UTC

    We are continuing to monitor for any further issues.

  12. monitoring Mar 04, 2026, 09:23 PM UTC

    We are continuing to monitor for any further issues.

  13. resolved Mar 05, 2026, 10:46 AM UTC

    This incident has been resolved.

  14. postmortem Mar 13, 2026, 07:01 PM UTC

    **Summary** On March 4, 2026, an incident occurred affecting multiple services in Oracle Cloud Infrastructure (OCI). Pipefy identified the disruption at 13:20 UTC; it impacted connectivity and service management operations for resources. Because the Pipefy infrastructure relies on OCI, the availability and normal behavior of certain platform functionalities were affected for some customers. Oracle engineers investigated the issue, identified its source, and implemented mitigation actions. Full service functionality was confirmed as restored on March 4, 2026 at 19:40 UTC. Although Pipefy was back to normal at 19:40 UTC, we chose to wait for the provider to resolve the incident on its side, which occurred on March 5, 2026 at 04:00 UTC.

    **Impact** During the incident window, customers may have experienced intermittent latency or connection failures due to an underlying issue within the infrastructure. Because certain Pipefy components rely on OCI networking, services were temporarily degraded. We would like to emphasize that there was no loss or exposure of data. Full functionality was restored once the infrastructure issues were resolved.

    **Root Cause** The root cause was a major, widespread incident in Oracle's infrastructure that impacted the Pipefy platform. The disruption was caused by an issue affecting multiple Oracle Cloud Infrastructure (OCI) services, particularly components related to the Virtual Cloud Network (VCN), which affected the ability of services and resources to communicate reliably. The incident was triggered by OCI routine maintenance: the Virtual Cloud Network Control Plane (VCNCP) was unable to propagate network configuration to the network dataplane services due to performance degradation of VCNCP's internal database.

    At Pipefy, we identified the disruption at 13:20 UTC; it resulted in an inability to perform lifecycle actions such as provisioning or scaling compute resources. OCI service teams were then informed and the escalation process started. Oracle engineering teams identified the source of the disruption and implemented corrections to stabilize the affected infrastructure, restoring normal connectivity and service operations for Pipefy on March 4 at 16:18 UTC, with full control plane recovery at OCI on March 5 at 04:40 UTC.

    **Preventive Measures Performed**
    * To prevent this scenario from happening again, OCI implemented rigorous improvements to its infrastructure, such as migrating VCNCP services to hosts with greater memory capacity. [Done]
    * OCI also implemented performance optimizations in the VCNCP internal database, along with improvements in service monitoring. [Done]

    **Next Steps**
    * Improvements in the monitoring and observability process of the Pipefy platform.

    **Additional Information** All information regarding incidents will be available in our support tool, but official communication is done through our status page [https://status.pipefy.com](https://status.pipefy.com). We will continue to monitor the system closely to ensure stability and prevent future issues, while continuously working to enhance the customer experience.

Read the full incident report →

Minor March 2, 2026

[Oracle Cloud / OCI] Third-Party Service Degradation

Detected by Pingoru
Mar 02, 2026, 10:00 PM UTC
Resolved
Mar 04, 2026, 11:47 AM UTC
Duration
1d 13h
Affected: Application
Timeline · 5 updates
  1. investigating Mar 03, 2026, 03:02 AM UTC

    We are currently investigating an issue causing degraded performance and slowness in services that require background queues. Customers may notice that certain requests are taking longer than usual to process.

  2. identified Mar 03, 2026, 03:03 AM UTC

    The root cause of the queue processing delays has been identified as an ongoing infrastructure issue on the side of our third-party provider, Oracle Cloud (OCI).

  3. monitoring Mar 03, 2026, 03:05 AM UTC

    We are currently monitoring the services to ensure full stability as the backlog of requests clears up. Please note that some intermittent slowness might still be experienced until all queued tasks are fully processed.

  4. resolved Mar 04, 2026, 11:47 AM UTC

    This incident has been resolved.

  5. postmortem Mar 13, 2026, 07:25 PM UTC

    **Root Cause** The incident was caused by a cloud provider's routine maintenance which led to a problem with the network control system. This issue prevented the network from updating its configuration due to a slowdown in the internal database. Despite this, the Pipefy platform's performance remained unaffected. **Resolution** The incident was resolved after a series of actions were taken by a cloud provider’s team, which closed the issue.

Read the full incident report →

Major February 25, 2026

Custom Integrations | Slowness when executing any action

Detected by Pingoru
Feb 25, 2026, 02:32 PM UTC
Resolved
Feb 25, 2026, 03:56 PM UTC
Duration
1h 23m
Affected: Integrations
Timeline · 5 updates
  1. investigating Feb 25, 2026, 02:32 PM UTC

    We are currently investigating this issue.

  2. monitoring Feb 25, 2026, 02:50 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. monitoring Feb 25, 2026, 03:31 PM UTC

    We are continuing to monitor for any further issues.

  4. resolved Feb 25, 2026, 03:56 PM UTC

    This incident has been resolved.

  5. postmortem Mar 02, 2026, 09:14 PM UTC

    **Root Cause** The issue was caused by exceeding the execution limit in integration queues, leading to a loop in automation that triggered numerous webhooks. This resulted in an overload of a specific flow, creating a large processing queue. Consequently, clients experienced delays in their process executions. **Resolution** To resolve the issue, the customer service account was deactivated, which helped alleviate the problem. **Action Plan** To prevent future occurrences, a rate limit will be activated on the system, and the timeout for execution flows will be reduced.

Read the full incident report →
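The action plan above mentions activating a rate limit to stop runaway automation loops from flooding the webhook queues. A common way to implement such a limit is a token bucket; below is a minimal sketch (the rate and capacity values are hypothetical, not Pipefy's actual limits):

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: sustains `rate` calls per second,
    allowing bursts of up to `capacity` calls."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum stored tokens
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if a call may proceed now, consuming one token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A webhook dispatcher would call `allow()` before each delivery and queue or drop the call when it returns False, which bounds how fast a misbehaving automation loop can re-trigger itself.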

Major February 14, 2026

Slowness in Pipefy functionalities

Detected by Pingoru
Feb 14, 2026, 12:43 PM UTC
Resolved
Feb 14, 2026, 02:50 PM UTC
Duration
2h 6m
Affected: Advanced Reports, Filters, Reports
Timeline · 4 updates
  1. identified Feb 14, 2026, 12:43 PM UTC

    The issue has been identified and a fix is being implemented.

  2. monitoring Feb 14, 2026, 02:30 PM UTC

    A fix has been implemented and we are monitoring the results.

  3. resolved Feb 14, 2026, 02:50 PM UTC

    This incident has been resolved.

  4. postmortem Feb 24, 2026, 06:08 PM UTC

    **Root Cause** We encountered a significant delay due to a high volume of card deletions, which affected the processing queues. **Resolution** The queue issue resolved itself without any intervention; the system returned to normal operations on its own, under team supervision.

Read the full incident report →

Notice February 12, 2026

Cards are not being created by the pipe email

Detected by Pingoru
Feb 12, 2026, 03:16 PM UTC
Resolved
Feb 12, 2026, 05:21 PM UTC
Duration
2h 5m
Affected: Email Inbox
Timeline · 4 updates
  1. investigating Feb 12, 2026, 03:16 PM UTC

    Hello Pipefy customers! We've identified a slowdown in receiving emails for card creation; during our analysis, we determined the slowness was due to our email provider. A ticket has been opened with our partner, and we are working to resolve the issue as quickly as possible.

  2. identified Feb 12, 2026, 05:01 PM UTC

    The issue has been identified and a fix is being implemented.

  3. resolved Feb 12, 2026, 05:21 PM UTC

    This incident has been resolved.

  4. postmortem Feb 13, 2026, 08:01 PM UTC

    **Summary:** The issue involved the inability to create cards via the pipe email in Pipefy. This was due to a rule on our firewall provider that was inadvertently blocking incoming emails, which are essential for the automatic creation of cards.

    **Impact:** The impact was significant, as it disrupted the workflow of users relying on email-triggered card creation. This could have led to delays in task management and project tracking, affecting overall productivity and efficiency for users who depend on this feature for seamless operations.

    **Root Cause:** The root cause was identified as a specific rule on Cloudflare that was blocking incoming emails. This blockage prevented the emails from reaching the system, thereby halting the automatic card creation process.

    **Corrective Actions:**
    * The team disabled the problematic rule on Cloudflare, which restored the normal flow of incoming emails and resumed the card creation process.
    * A ticket was opened with SendGrid to request a range of IP addresses to ensure smoother operations in the future.
    * An alert system is to be created to monitor incoming emails from the SendGrid endpoint to prevent similar issues.

    **Next Steps/Conclusion:** We will continue to monitor the system closely to ensure stability and prevent future occurrences of similar issues. Our ongoing efforts are focused on enhancing the customer experience by ensuring that all features function seamlessly and efficiently.

Read the full incident report →

Looking to track Pipefy downtime and outages?

Pingoru polls Pipefy's status page every 5 minutes and alerts you the moment it reports an issue, before your customers do.

  • Real-time alerts when Pipefy reports an incident
  • Email, Slack, Discord, Microsoft Teams, and webhook notifications
  • Track Pipefy alongside 5,000+ providers in one dashboard
  • Component-level filtering
  • Notification groups + maintenance calendar
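This kind of polling can also be scripted directly. Assuming status.pipefy.com is hosted on Atlassian Statuspage (which exposes a standard `/api/v2/status.json` endpoint), a minimal sketch of a 5-minute poller:

```python
import json
import time
import urllib.request

# Standard Atlassian Statuspage endpoint; assumes status.pipefy.com uses it.
STATUS_URL = "https://status.pipefy.com/api/v2/status.json"


def parse_indicator(payload):
    """Extract the overall indicator ('none', 'minor', 'major', 'critical')
    from a Statuspage status.json payload."""
    return payload["status"]["indicator"]


def poll(interval=300):
    """Check the status page every `interval` seconds and report changes."""
    last = None
    while True:
        with urllib.request.urlopen(STATUS_URL) as resp:
            indicator = parse_indicator(json.load(resp))
        if indicator != last:
            print(f"Pipefy status indicator: {indicator}")
            last = indicator
        time.sleep(interval)
```

A dedicated monitoring service adds what a loop like this lacks: delivery channels, deduplication, component-level filtering, and history across providers.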
Start monitoring Pipefy for free

5 free monitors · No credit card required