LogDNA experienced a minor incident on February 26, 2021 affecting Log Ingestion (Agent/REST API/Code Libraries), Log Ingestion (Heroku), and one other component, lasting 7h 51m. The incident has been resolved; the full update timeline is below.
Update timeline
- investigating Feb 26, 2021, 06:43 AM UTC
We are currently investigating this issue.
- identified Feb 26, 2021, 07:08 AM UTC
Customers may experience delays with newly ingested logs and searching. The issue has been identified and a fix is being implemented.
- identified Feb 26, 2021, 09:12 AM UTC
We are continuing to work towards restoring search of recently ingested logs. At this time, users will find that searches, boards, and screens do not return results for recently ingested logs.
- identified Feb 26, 2021, 09:36 AM UTC
We are continuing to work on a fix for this issue.
- monitoring Feb 26, 2021, 12:50 PM UTC
We resolved the issue and the service has returned to normal. We are closely monitoring the environment at this time.
- resolved Feb 26, 2021, 08:42 PM UTC
We resolved the issue and all services are operational.
- postmortem Mar 11, 2021, 07:18 PM UTC
## Dates:

Start Time: Friday, February 26, 2021, at 06:43 UTC
End Time: Friday, February 26, 2021, at 20:42 UTC

## What happened:

The insertion of newly submitted logs stopped entirely for all accounts for about 3 hours. Logs were still available in Live Tail but not for searching, graphing, and timelines. The ingestion of logs from clients was not interrupted and no data was lost.

For more than 95% of newly submitted logs, log processing returned to normal speeds within 3 hours. All logs submitted during the 3-hour pause were available again about 30 minutes later.

For less than 5% of newly submitted logs, log processing returned to normal speeds gradually. Logs submitted during the 3-hour pause also gradually became available. This impact was limited to about 12% of accounts.

The incident was closed when logs from all time periods for all accounts were entirely available.

## Why it happened:

Our service ran out of a set of resources that manage pre-sharding on the clusters that store logs, an operation that ensures new logs are promptly inserted into the clusters. This happened because of several simultaneous changes to our infrastructure that didn't account for the need for more resources, particularly on clusters with a large number of shards relative to their overall storage capacity. The insertion of new logs slowed down and the backlog of unprocessed logs grew. Eventually, the portion of our service that processes new logs was unable to keep up with demand.

## How we fixed it:

We restarted the portion of our service that processes newly submitted logs. During the recovery, we prioritized restoring logs submitted in the last day. 95% of accounts were fully recovered after 3.5 hours.

## What we are doing to prevent it from happening again:

We've increased the scale of the set of resources that ensure logs are processed promptly by adding more servers for these resources to run on. We've also added alerting for when these resources are reaching their limit.
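The prevention step above — alerting when a resource pool nears its limit — can be sketched as a simple utilization check. This is a hypothetical illustration, not LogDNA's actual tooling; the pool names and the 80% threshold are assumptions for the example.

```python
def utilization_alerts(pools, threshold=0.8):
    """Return alert messages for resource pools at or above the threshold.

    pools: mapping of pool name -> (in_use, capacity).
    threshold: fraction of capacity that triggers an alert (assumed 80%).
    """
    alerts = []
    for name, (in_use, capacity) in pools.items():
        if capacity and in_use / capacity >= threshold:
            ratio = in_use / capacity
            alerts.append(
                f"{name}: {in_use}/{capacity} ({ratio:.0%}) at or above {threshold:.0%}"
            )
    return alerts


# Illustrative pools: the pre-sharding workers are nearly exhausted and
# should fire an alert well before insertion stalls.
pools = {
    "pre-shard-workers": (9, 10),  # near exhaustion -> alert
    "ingest-buffers": (3, 10),     # healthy -> no alert
}
print(utilization_alerts(pools))
```

Alerting on a fraction of capacity, rather than on outright exhaustion, leaves time to add servers before the backlog starts to grow.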