‘Software glitch’ to blame for global Cloudflare outage

Cloudflare has resolved an issue that caused websites served by the networking and internet security firm to show '502 Bad Gateway' errors en masse for half an hour yesterday.

From 2:42pm BST the networking giant suffered a massive spike in CPU utilisation across its network, which Cloudflare is blaming on a bad software deployment. Websites in territories across the entire world were affected.

Once this faulty deployment was rolled back, its CTO John Graham-Cumming explained, service resumed normal operation and all domains using Cloudflare returned to normal traffic levels.

"This was not an attack (as some have speculated) and we are incredibly sorry that this incident occurred," Graham-Cumming said.

"Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again."

The incident affected several major industries, including cryptocurrency markets, with users unable to access platforms such as CoinMarketCap and Coinbase.

Cloudflare issued an update last night stating that the global outage was caused by a single misconfigured rule within its Web Application Firewall (WAF), pushed out during a routine deployment. The change had been aimed at improving the blocking of inline JavaScript used in cyber attacks.

One of the newly deployed rules caused CPU usage to spike to 100% on its machines worldwide, which in turn produced the 502 errors seen on sites across the web. Web traffic dropped by 82% at the worst point during the outage.
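The article does not detail the offending rule itself, but one well-known way a single pattern-matching rule can pin a CPU is catastrophic backtracking in a regular expression. The Python sketch below is purely illustrative rather than Cloudflare's actual rule: the pattern, the fake inspect_request check and the input sizes are all invented to show how a near-matching input can make a backtracking regex engine burn dramatically more CPU time as it grows.

```python
import re
import time

# Illustrative only -- not Cloudflare's actual WAF rule. Nested unbounded
# quantifiers such as (a+)+ force a backtracking regex engine to try an
# exponential number of ways to split the input when the match ultimately fails.
PATHOLOGICAL_RULE = re.compile(r"^(a+)+$")

def inspect_request(body: str) -> bool:
    """Pretend WAF check: True if the rule matches the request body."""
    return PATHOLOGICAL_RULE.match(body) is not None

if __name__ == "__main__":
    # A body that almost matches: all 'a's followed by one '!'. Each extra
    # character roughly doubles the CPU time burned before the engine gives up.
    for n in (18, 21, 24):
        body = "a" * n + "!"
        start = time.perf_counter()
        inspect_request(body)
        print(f"{n} chars: {time.perf_counter() - start:.2f}s of CPU time")
```

A rule like this can behave perfectly well against benign test inputs, which is why problems of this kind tend to surface only once real traffic hits it at scale.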

"We were seeing an unprecedented CPU exhaustion event, which was novel for us as we had not experienced global CPU exhaustion before," Graham-Cumming continued.

"We make software deployments constantly across the network and have automated systems to run test suites and a procedure for deploying progressively to prevent incidents.

"Unfortunately, these WAF rules were deployed globally in one go and caused today's outage."

At 3:02pm BST the company realised what was going on and issued a global kill on the WAF Managed Rulesets, which dropped CPU back to normal levels and restored traffic. It then fixed the underlying issue and re-enabled the rulesets approximately an hour later.
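A global kill of this kind is, in effect, a coarse feature flag: a single switch checked on every request that disables an entire rule set at once, without waiting for code to be rolled back or redeployed. The short sketch below illustrates the idea; the flag name and the toy waf_matches check are invented for the example and bear no relation to Cloudflare's internals.

```python
# Illustrative only: a coarse kill switch that turns a whole rule set off
# everywhere at once, trading protection for availability until a fix lands.
WAF_MANAGED_RULES_ENABLED = True    # flipped to False to perform a "global kill"

def waf_matches(body: str) -> bool:
    """Stand-in for evaluating the managed rule set against a request body."""
    return "<script>" in body.lower()

def handle_request(body: str) -> str:
    # With the switch off, requests bypass the rule set entirely.
    if WAF_MANAGED_RULES_ENABLED and waf_matches(body):
        return "403 Forbidden"
    return "200 OK"

if __name__ == "__main__":
    print(handle_request("<script>alert(1)</script>"))   # blocked while enabled
    WAF_MANAGED_RULES_ENABLED = False                     # the "global kill"
    print(handle_request("<script>alert(1)</script>"))   # now passes through
```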

Many on social media speculated during the outage that the 502 Bad Gateway errors might be the result of a distributed denial-of-service (DDoS) attack, but those suggestions were quickly quashed by the firm.

"The impact of the Cloudflare outage shows the sometimes-unexpected impact of massive success - much as with early outages at AWS and other cloud providers, it's a reminder of how dependent the internet ecosystem can become on the utility and expediency of a singular platform," analyst for the cloud transformation channel with 451 Research Carl Brooks told IT Pro.

"Cloudflare has a lot going for it: it effectively ended DDOS as an attack platform as we knew it, for instance, and it's a vital performance booster for extremely reasonable prices, but it has also quietly become a part of the backbone of the internet, and like every other provider out there, it will have hiccups."

Keumars Afifi-Sabet
Contributor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, these days he can be found at LiveScience, where he runs its Technology section.