Our 5-minute guide to distributed caching

Today's web, mobile and IoT applications need to operate at web scale: millions of users, terabytes of data and sub-millisecond response times, delivered to devices all around the world.

For applications that run in clustered environments, distributed caching is a vital requirement. It solves many common problems with data access, improving performance, manageability and scalability. But what is it, and how can it benefit businesses?

What is distributed caching?

Caching is a commonly used technique for boosting application performance and reducing costs. Its primary goal is to alleviate the bottlenecks that come with traditional databases: by keeping frequently used data in memory rather than making repeated database round trips, applications can dramatically improve their response times.
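
To make this concrete, here is a minimal sketch of the pattern, often called cache-aside, written in Python against a Redis cache. The connection details, the cache key format and the fetch_product_from_db helper are illustrative assumptions standing in for an application's own data access code.

    import json
    import redis

    # Connect to the cache; host and port here are illustrative defaults.
    cache = redis.Redis(host="localhost", port=6379)

    CACHE_TTL_SECONDS = 300  # how long a cached entry stays fresh

    def fetch_product_from_db(product_id: int) -> dict:
        # Placeholder for a real (and much slower) database query.
        return {"id": product_id, "name": "example product"}

    def get_product(product_id: int) -> dict:
        """Cache-aside read: try the cache first, fall back to the database."""
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)  # cache hit: no database round trip

        product = fetch_product_from_db(product_id)  # cache miss: query the database
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(product))  # store for next time
        return product

On a hit, the request never touches the database; on a miss, the result is stored in the cache so that subsequent requests are served from memory.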

Distributed caching is simply an extension of this concept: the cache is configured to span multiple servers. It's commonly used in cloud computing and virtualised environments, where servers each contribute a portion of their memory to a pooled cache that applications and virtual machines can then access. This pooling also makes it a far more scalable option.
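
From the application's point of view, little changes: the same calls are made, but the client routes each key to whichever server in the pool owns it. A sketch, assuming a Redis Cluster deployment and the redis-py client, with an illustrative node address:

    from redis.cluster import RedisCluster

    # Any reachable node can bootstrap the connection; the client discovers
    # the rest of the cluster and routes each key to the server that owns it.
    cache = RedisCluster(host="cache-node-1.internal", port=6379)

    cache.set("greeting", "hello")  # stored on whichever node owns this key
    print(cache.get("greeting"))    # fetched transparently from that node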

The data stored in a distributed cache is quite simply whatever the application accesses most, and the contents change over time: entries that haven't been requested in a while are typically evicted or allowed to expire, making room for hotter data.
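
One common way this happens is through time-to-live (TTL) values, where each entry expires unless it is refreshed; many caches also evict the least recently used entries when memory fills. A small illustration, again assuming Redis and a hypothetical key name:

    import time
    import redis

    cache = redis.Redis(host="localhost", port=6379)

    # Store a value that expires after 60 seconds unless refreshed.
    cache.setex("trending:item:42", 60, "widget")

    print(cache.ttl("trending:item:42"))  # seconds of life remaining
    time.sleep(61)
    print(cache.get("trending:item:42"))  # None: the entry has expired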

By serving frequently accessed data from memory, rather than fetching it from the backend database on every request, applications can deliver highly responsive experiences.

Distributed caching can also substantially lower capital and operating costs by reducing workloads on backend systems and cutting network usage. In particular, if an application runs on a relational database such as Oracle, which requires high-end, costly hardware to scale, a distributed cache running on low-cost commodity servers can reduce the need to add those expensive resources.

Common distributed caching use cases

Due to clear performance and cost benefits, distributed caching is used across numerous applications. Common use cases include:

Speeding up RDBMS: Many web and mobile applications need to access data from a backend relational database management system (RDBMS), such as inventory data for an online product catalogue. However, relational systems were not designed to operate at internet scale and can easily be overwhelmed by the volume of requests from web and mobile applications. Caching data from the RDBMS in memory, as in the cache-aside sketch above, is a widely used, cost-effective way to speed up access while taking load off the backend.

Managing usage spikes: Web and mobile applications often experience spikes in usage. In these cases, caching can prevent the application from being overwhelmed and can help avoid the need to add expensive backend resources.

Mainframe offloading: Mainframes are still widely used in many industries. A cache can offload workloads from a backend mainframe, reducing costs as well as enabling completely new services that wouldn't be possible using the mainframe alone.

Web session store: Session data and web history are kept in memory: for example, the contents of a shopping cart or the inputs to a real-time recommendation engine on an ecommerce site, or player history in a game (see the sketch below).
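
Here is a minimal sketch of the session store case, assuming Redis as the cache; the key format, field names and TTL are illustrative. Each user's cart lives in the cache as a hash, with an expiry so abandoned sessions clean themselves up:

    import redis

    cache = redis.Redis(host="localhost", port=6379)

    SESSION_TTL_SECONDS = 1800  # expire abandoned sessions after 30 minutes

    def add_to_cart(session_id: str, product_id: int, quantity: int) -> None:
        """Keep the shopping cart in the cache as a hash keyed by session."""
        key = f"session:{session_id}:cart"
        cache.hset(key, str(product_id), quantity)
        cache.expire(key, SESSION_TTL_SECONDS)  # refresh the session's lifetime

    def get_cart(session_id: str) -> dict:
        """Return the cart contents; redis-py returns fields and values as bytes."""
        return cache.hgetall(f"session:{session_id}:cart")

    add_to_cart("abc123", 42, 2)
    print(get_cart("abc123"))  # e.g. {b'42': b'2'}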

What makes distributed caching effective?

The requirements for effective distributed caching are fairly straightforward. Six key criteria are listed here, though the weight given to each will depend on an organisation's specific situation.

Performance: For a given workload, the cache must meet and sustain the application's required steady-state performance targets for latency and throughput. How efficiently it does so is a related factor, since it affects cost, complexity and manageability.

Scalability: As more users, more data requests and more operations increase the workload, the cache should still deliver the same performance. It should also be able to scale easily and affordably without impacting availability.

Availability: The cache must keep data available around the clock, through both planned maintenance and unplanned outages.

Manageability: A cache should be quick to deploy, and easy to monitor and manage, without creating unnecessary extra work for the operations team.

Simplicity: Done properly, adding a cache to a deployment shouldn't make things unnecessarily complex or create extra work for developers.

Affordability: As with any IT decision, both upfront implementation costs and ongoing costs should be considered. An evaluation should look at the total cost of ownership, including licence fees as well as hardware, services, maintenance and support.

Esther Kezia Thorpe

Esther is a freelance media analyst, podcaster, and one-third of Media Voices. She previously worked as a content marketing lead for Dennis Publishing and the Media Briefing. She writes frequently on topics such as subscriptions and tech developments for industry sites including Digital Content Next and What’s New in Publishing. She is co-founder of the Publisher Podcast Awards and Publisher Podcast Summit, the first conference and awards dedicated to celebrating and elevating publisher podcasts.