What is cache memory?
We explain the different categories of cache memory and how it differs from RAM
Memory is a key element of computing, and there are a number of different types that keep your machine running smoothly. Some are designed for short-term tasks and others for long-term use, such as storage, but both are equally important to a computer's software and hardware.
While the most obvious function of memory is to store files and information, it is also responsible for many other applications, such as encoding and retrieving data. This is where cache memory is important as it is designed to make a device run more efficiently by working with other components. On its own, it may not be all that useful, but cache memory plays a key role in computing when coupled with other parts.
Holding recently accessed data in cache memory helps operations run faster: when software needs the same data in rapid succession, it can be served from the cache rather than fetched again from slower main memory. This makes memory caching a key part of modern computing.
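The idea can be illustrated with a short software analogy (hypothetical names; real CPU caches operate on memory lines in hardware, not dictionary keys): keep recently fetched results in a small fast store so repeated requests avoid the slow path.

```python
# Minimal sketch of the caching idea. "slow_store" stands in for main
# memory (RAM); "cache" stands in for the small, fast cache memory.
slow_store = {"user:1": "Alice", "user:2": "Bob"}
cache = {}

def read(key):
    if key in cache:           # cache hit: served from the fast store
        return cache[key]
    value = slow_store[key]    # cache miss: go to the slower store
    cache[key] = value         # keep it for subsequent accesses
    return value

read("user:1")   # miss: fetched from the slow store and cached
read("user:1")   # hit: served straight from the cache
```

The second call returns the same value without touching the slow store, which is exactly the benefit the hardware cache provides to the CPU.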
Cache design involves a trade-off between capacity and speed. A larger cache can hold more data, but it generally takes longer to access; the fastest caches are kept small and sit closest to the CPU.
Cache memory can be complicated, however; not only is it different from the standard DRAM that most people are familiar with, but there are also several different kinds of cache memory.
There are three different categories, graded in levels: L1, L2 and L3. L1 cache is generally built into the processor chip and is the smallest in size, ranging from 8KB to 64KB. However, it is also the fastest type of memory for the CPU to read. Multi-core CPUs will generally have a separate L1 cache for each core.
L2 and L3 caches are larger than L1, but take longer to access. L2 cache is occasionally part of the CPU, but often a separate chip between the CPU and the RAM.
Graphics processing chips often have a separate cache memory to the CPU, which ensures that the GPU can still speedily complete complex rendering operations without relying on the relatively high-latency system RAM.
Cache memory generally operates in one of three configurations: direct mapping, fully associative mapping and set associative mapping.
Direct mapping features blocks of memory mapped to specific locations within the cache, while fully associative mapping lets any cache location be used to map a block, rather than requiring the location to be pre-set. Set associative mapping acts as a halfway-house between the two, in that every block is mapped to a smaller subset of locations within the cache.
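The three mapping schemes can be sketched as follows. This is an illustrative model with made-up parameters (64 lines of 64 bytes, 4-way associativity), not a description of any specific CPU: it shows how a memory address is reduced to a cache location under each scheme.

```python
# Hypothetical cache geometry for illustration only.
CACHE_LINES = 64   # total lines in the cache
LINE_SIZE = 64     # bytes per line
WAYS = 4           # lines per set in the set-associative case

def direct_mapped(addr):
    """Each memory block maps to exactly one fixed cache line."""
    block = addr // LINE_SIZE
    index = block % CACHE_LINES   # the one line this block may occupy
    tag = block // CACHE_LINES    # identifies which block is resident
    return index, tag

def set_associative(addr):
    """Each block maps to one set; any of the WAYS lines in that set may hold it."""
    block = addr // LINE_SIZE
    sets = CACHE_LINES // WAYS
    set_index = block % sets      # which set to search
    tag = block // sets
    return set_index, tag

# Fully associative: any block may occupy any line, so there is no index
# at all; the whole block number becomes the tag and every line is checked.
```

Note how two addresses that share a direct-mapped index must evict each other, whereas in the set-associative case they can coexist in the same set, which is the "halfway house" the text describes.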