JustCall incident

JustCall loading issues for US region | Elevated API errors

JustCall experienced a notice-level incident on January 21, 2021. The incident has been resolved; the full update timeline is below.

Started: Jan 21, 2021, 05:00 PM UTC
Resolved: Jan 21, 2021, 05:00 PM UTC
Duration: —
Detected by Pingoru: Jan 21, 2021, 05:00 PM UTC

Update timeline

  1. resolved Jan 21, 2021, 06:48 PM UTC

    Today at 12 pm EST, our monitoring systems alerted our engineers to latency and load issues on our websites. We identified an attack by a bot army attempting to get past our firewalls, which drove up traffic on our customer-facing website. We immediately took steps to mitigate this: our automated load-balancing systems spread internal traffic across multiple instances, keeping our customers able to make calls and send text messages. We acknowledge that customers may have seen blank pages and had trouble accessing their dashboard and dialer during this time. The issue was severe for the first 12 minutes, i.e. until 12:12 pm EST, and was completely under control by 12:32 pm EST. A detailed postmortem will be available on this status page within the next 60 minutes. If anything still seems broken, please clear your browser cache and cookies for the JustCall page and try again; if things still don't look right, contact us and we'll investigate further. (A rough sketch of this kind of load-triggered traffic diversion appears after the timeline.)

  2. postmortem Jan 21, 2021, 06:51 PM UTC

    On January 21st, at 12:00 pm EST, JustCall was attacked by a large group of bots from all over the world that flooded our site all at once with the intention of taking it down. This kind of attack (a distributed denial-of-service, or DDoS, attack) isn't meant to steal or destroy anyone's data; it's only meant to make the site inaccessible. At 12:01 pm EST, our internal notification system alerted our engineering operations team. This system monitors server load and, when load spikes, starts diverting traffic to always-on copies of our codebase on different servers. By 12:12 pm EST, our Google Cloud servers had identified the fake traffic and were taking it down automatically. Our engineers, meanwhile, made sure that resource-intensive tasks and background jobs were immediately diverted to the other servers designed to handle them (see the job-routing sketch after the timeline). By this point latency had dropped substantially, and our customers were able to access the website, dialer, and mobile apps without any issues. Around 12:30 pm EST we brought server load back down to its normal average, and at the time of this writing our servers are performing just fine.
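
JustCall has not published implementation details for the notification and diversion system described above. As a rough illustration only, a load-triggered alert that shifts traffic weight from a primary server to always-on replicas might look like the sketch below. The threshold, server names, and load source are all assumptions, not JustCall's actual stack.

```python
# Hypothetical sketch of load-triggered alerting and traffic diversion.
# Nothing here reflects JustCall's real infrastructure; the threshold,
# server names, and load source are illustrative assumptions.
import random
import time

LOAD_THRESHOLD = 0.80  # fraction of capacity that triggers diversion
PRIMARY = "web-primary"
REPLICAS = ["web-replica-1", "web-replica-2"]  # always-on copies of the codebase


def sample_load(server: str) -> float:
    """Stand-in for a real metric source (CPU, request-queue depth, etc.)."""
    return random.uniform(0.0, 1.0)


def alert_operations(server: str, load: float) -> None:
    """Stand-in for paging; a real system would notify the on-call engineer."""
    print(f"ALERT: {server} at {load:.0%} load; diverting traffic")


def diverted_weights() -> dict:
    """Shift most traffic off the primary and spread it over the replicas."""
    share = 0.8 / len(REPLICAS)
    return {PRIMARY: 0.2, **{r: share for r in REPLICAS}}


def monitor(cycles: int = 10, interval_s: float = 0.1) -> dict:
    """Poll the primary's load and divert traffic once it crosses the threshold."""
    weights = {PRIMARY: 1.0, **{r: 0.0 for r in REPLICAS}}
    for _ in range(cycles):
        load = sample_load(PRIMARY)
        if load > LOAD_THRESHOLD:
            alert_operations(PRIMARY, load)
            weights = diverted_weights()
        time.sleep(interval_s)
    return weights


if __name__ == "__main__":
    print(monitor())
```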
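The job diversion mentioned in the postmortem can likewise be sketched as simple queue routing: resource-intensive work goes to dedicated background-job servers so the latency-sensitive web tier stays responsive. Again, the job types and queue names here are illustrative assumptions, not JustCall's.

```python
# Hypothetical sketch of routing resource-intensive jobs to dedicated
# workers. Job types and queue names are illustrative, not JustCall's.
from queue import Queue

HEAVY_JOBS = {"call_recording_transcode", "bulk_export"}  # assumed examples

web_queue = Queue()     # handled by latency-sensitive web servers
worker_queue = Queue()  # handled by dedicated background-job servers


def enqueue(job_type: str, payload: dict) -> None:
    """Send heavy work to the worker tier; light work stays on the web tier."""
    target = worker_queue if job_type in HEAVY_JOBS else web_queue
    target.put((job_type, payload))


enqueue("call_recording_transcode", {"call_id": 42})
enqueue("send_text", {"to": "+15550100"})
print(worker_queue.qsize(), web_queue.qsize())  # -> 1 1
```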