Slack incident

Issue loading activity feed in Slack

Minor · Resolved

Slack experienced a minor incident on January 23, 2026 affecting Messaging. User-facing impact was resolved the same day; the incident record spans 6d 12h because a final post-incident update was posted on January 29. The full update timeline is below.

Started: Jan 23, 2026, 05:03 AM UTC
Resolved: Jan 29, 2026, 05:16 PM UTC
Duration: 6d 12h
Detected by Pingoru: Jan 23, 2026, 05:03 AM UTC

Affected components

Messaging

Update timeline

  1. investigating Jan 23, 2026, 05:03 AM UTC

    We’re investigating a feature degradation affecting a number of users. Some users may be unable to load their activity feed and may encounter the following error message: “Something’s fishy! Slack can’t seem to load your activity feed.” We’ll provide updates as more information becomes available. We apologize for any inconvenience this may cause.

  2. identified Jan 23, 2026, 05:43 AM UTC

    Our work on this issue is still ongoing. Current status and actions being taken: We have identified the source of the issue affecting the activity feed and message functionality and are actively working on a fix. We’ll continue to provide updates until the impact is resolved for all users.

  3. resolved Jan 23, 2026, 06:47 AM UTC

    This issue is now resolved for all users. Current status and recent actions taken: The issue affecting the activity feed and message functionality has been resolved. We’ve taken steps to prevent this from happening again, and all systems are now operating normally. Previous impact to end users: During the impact, some users experienced issues with the activity feed and messaging, including degraded performance and errors. We apologize for any disruptions to your day.

  4. resolved Jan 29, 2026, 05:16 PM UTC

    From 8:27 PM to 10:01 PM PST on January 22, 2026, some users experienced issues loading the activity feed in Slack. We traced this to a backend resource that unexpectedly hit its capacity limits, which caused activity feed processes to stop serving traffic. We adjusted the configuration and increased the capacity limits for the backend resource, resolving the issue for all affected users.
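
    For context on this failure mode, the sketch below models how a hard capacity limit on a shared backend resource can cause dependent processes to stop serving traffic entirely, and how raising the limit restores service. It is a minimal, hypothetical illustration: the class name, connection counts, and reject-at-limit behavior are assumptions made for this example, not details of Slack's systems.

        # Hypothetical sketch only; names and numbers are illustrative
        # assumptions, not a description of Slack's infrastructure.
        import threading

        class CapacityLimitedResource:
            """A shared backend resource that rejects work at its connection limit."""

            def __init__(self, max_connections: int):
                self.max_connections = max_connections
                self._in_use = 0
                self._lock = threading.Lock()

            def acquire(self) -> bool:
                # At the limit, new requests fail outright rather than queue,
                # so dependent processes stop serving traffic.
                with self._lock:
                    if self._in_use >= self.max_connections:
                        return False
                    self._in_use += 1
                    return True

            def release(self) -> None:
                with self._lock:
                    self._in_use -= 1

        def serve_feed_request(resource: CapacityLimitedResource) -> str:
            # A request either gets a slot on the backend resource or fails,
            # mirroring the hard errors users saw when loading the feed.
            if not resource.acquire():
                return "error: can't load activity feed"
            try:
                return "ok: activity feed loaded"
            finally:
                resource.release()

        if __name__ == "__main__":
            backend = CapacityLimitedResource(max_connections=2)
            backend.acquire()
            backend.acquire()                   # prior load saturates the resource
            print(serve_feed_request(backend))  # error: at the capacity limit

            backend.max_connections = 100       # the remediation: raise the limit
            print(serve_feed_request(backend))  # ok: traffic is served again

    Rejecting rather than queueing at the limit is one plausible reading of “no longer serving traffic”; a resource that queued instead would surface as timeouts rather than immediate errors.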