Cost of critical application failures revealed
Financial losses are heightened during periods of peak customer demand
Critical application failures cost enterprises between $500,000 and $1 million per hour on average, according to recently published research from DataStax.
Sudden surges in customer demand are cited as a key cause of failures, with existing IT infrastructure unable to scale quickly enough to manage such events.
By critical applications, we mean a software program, or suite of related programs, that must run continuously for a business or a segment of it to succeed. If a critical application experiences downtime, however brief, financial consequences follow, as DataStax has quantified.
Failures can stem from various causes, from bugs embedded in code to faulty deployments and hardware failures. Increasingly, though, consumer trends are the catalyst, with spurts of sudden demand threatening to overload technology stacks.
Sudden rises in demand can be reliably pinpointed on the calendar, with several dates each year certain to bring spikes in consumer spending both online and in-store. Most sectors are affected, particularly around traditional sales events such as Black Friday, and newer occasions that emerged at the turn of the millennium, such as Cyber Monday, a term coined in 2005.
One notable example of the damage a critical application failure can cause arose on Prime Day. The e-commerce giant Amazon created the annual shopping holiday to offer a plethora of deals and packages exclusively to its Amazon Prime users. In 2018, however, Prime Day was marred by service disruptions caused by heavy online traffic, preventing users from finalising purchases.
In 2017, Amazon generated an estimated $1 billion in sales from its 30-hour Prime Day event, roughly $33 million per hour. While the 2018 outage was brief, downtime at that rate still equates to mammoth losses in revenue.
Application failures during peak periods are not only financially damaging in the short term, but can also be destabilising in the long term. DataStax's report found that 53% of potential customers leave a website if online performance lags by merely three seconds. With an abundance of choice available, customers take this as their cue to jump to an alternative, whether on the high street by popping next door, or online with a flick of the wrist and a click of a button.
Retailers need to provide positive customer experiences that result in smooth, repeat purchases, but this is difficult given fluctuating demand. If infrastructure is sized to handle the very peak of the workload, steep and sustained drops in demand throughout the rest of the year leave an expensive surplus of capacity.
Scalability is the key. By investing in a technology stack that can scale to meet demand, surpluses and deficiencies in capacity can be eliminated. To further minimise risk, database infrastructure can be simplified to lower stack complexity, distributed to improve uptime and elasticity, and open sourced to reduce security and operational risk.
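The trade-off between provisioning for peak demand and scaling elastically can be sketched with some simple arithmetic. The figures below are hypothetical assumptions for illustration only, not numbers from the DataStax research:

```python
# Illustrative comparison of fixed peak provisioning vs. elastic scaling.
# All capacity and cost figures are hypothetical assumptions.

PEAK_CAPACITY = 100       # capacity units needed on the busiest days
BASELINE_CAPACITY = 20    # capacity units needed on a typical day
COST_PER_UNIT_DAY = 10.0  # dollars per capacity unit per day
PEAK_DAYS = 10            # days per year running at peak demand
YEAR_DAYS = 365

# Fixed provisioning: pay for peak capacity all year round.
fixed_cost = PEAK_CAPACITY * COST_PER_UNIT_DAY * YEAR_DAYS

# Elastic scaling: pay for peak capacity only on peak days,
# and baseline capacity for the rest of the year.
elastic_cost = (PEAK_CAPACITY * COST_PER_UNIT_DAY * PEAK_DAYS
                + BASELINE_CAPACITY * COST_PER_UNIT_DAY * (YEAR_DAYS - PEAK_DAYS))

surplus_avoided = fixed_cost - elastic_cost
print(f"Fixed: ${fixed_cost:,.0f}")            # → Fixed: $365,000
print(f"Elastic: ${elastic_cost:,.0f}")        # → Elastic: $81,000
print(f"Surplus avoided: ${surplus_avoided:,.0f}")  # → Surplus avoided: $284,000
```

Even with these toy numbers, the surplus from sizing to peak dwarfs the cost of an elastic setup, which is the economic case the article's scalability argument rests on.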