The evolution of the data centre
Data centres have become the glue that holds modern society together.
Whether it's making a credit card purchase, messaging your friends or even simply ordering a pizza, virtually all of the things we do on a daily basis are powered and supported by data centres.
But the data centres we rely on today are a far cry from the technology of the past; they've changed almost immeasurably since digital computing took its first early steps in the 1950s and 60s. Processing power and capacity have increased exponentially over the years and the infrastructure needed to support modern applications has grown ever more complex.
These advances have been driven by the growing demands of both businesses and consumers. First, the birth of the internet led to an explosion in the number of people consuming online services, which demanded vast increases in the processing power and capacity that data centres had to offer.
Before long, the need for server capacity spawned third-party providers, who would host companies' servers in their own facilities, sparing those companies the initial expense and ongoing overheads of building an on-premises data centre. Eventually, as network technology and connectivity improved, this gave way to the cloud computing model, in which companies rent space not in a data centre but on the server itself.
Cloud computing has been a major catalyst for change in the data centre; not only have many operating models fundamentally shifted as a result of its rise, but it's also driven technological advancements like multi-tenant systems, lightning-fast storage and AI applications.
One of the most foundational changes in data centre technology was the advent of multi-core processors around the turn of the millennium. By fitting two or more processing cores onto a single die, chip manufacturers could radically boost the total performance of data centre hardware, allowing the same workloads to be run with fewer machines.
Multi-core processing also brought huge advantages to virtualisation, which has been a linchpin of the data centre's growth. Because each processing core runs in parallel with the others, multi-core systems can run large numbers of virtual machines simultaneously with minimal drops in performance, vastly increasing the number of applications that can be run at once.
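The core idea here can be sketched in a few lines of Python: independent, CPU-bound jobs (stand-ins for separate tenants or VMs) are farmed out to a pool of worker processes, so they run side by side on different cores rather than one after another. This is a generic illustration of multi-core parallelism, not data-centre-specific code.

```python
from concurrent.futures import ProcessPoolExecutor
import os


def cpu_bound_task(n):
    """A stand-in for one isolated workload (e.g. one tenant's job)."""
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    jobs = [200_000] * 8  # eight independent workloads
    # Each worker process can be scheduled onto its own core where
    # available, so the jobs execute in parallel rather than serially.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(cpu_bound_task, jobs))
    print(f"completed {len(results)} workloads")
```

The same pattern, one scheduling layer down, is what lets a hypervisor keep many virtual machines busy at once: as long as the workloads are independent, adding cores adds capacity almost linearly.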
Containerisation has had a similar impact: each VM can host multiple containers, each of which can run its own application, allowing data centres to multiply their application capacity many times over. As well as spearheading the continued advancement of multi-core processing, Intel has also been a leader in developing virtualisation and container technology, working with engineering partners to make containers and VMs lighter, faster and more resilient.
Data centre equipment is highly powerful, but all that power generates large amounts of heat. Unfortunately, server processors are highly sensitive, and need to be kept below a certain temperature to ensure optimal performance. To maintain this, data centres have to be very carefully climate-controlled, relying on complex and expensive cooling systems that are often a facility's second-largest consumer of power, after the IT equipment itself.
While these cooling systems are still very necessary, Intel's continued advancement in processor technology has made server processors more thermally efficient, generating less heat and therefore requiring less cooling. On top of that, the company also introduced sensors to its server chips in 2011 which allow data centre administrators to measure the temperature and airflow within a data centre. This enables them to better identify hot and cold spots, modelling the placement of new racks and equipment according to temperature conditions.
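On-die sensors like these are typically exposed to the operating system; on Linux, for instance, thermal readings surface through the standard sysfs thermal-zone interface. Below is a minimal polling sketch, with the caveat that zone names and availability vary by platform, and a fleet-monitoring tool would feed these readings into a central dashboard rather than print them.

```python
from pathlib import Path


def read_thermal_zones(base="/sys/class/thermal"):
    """Return {zone_type: temperature_in_celsius} for each readable zone.

    The kernel reports temperatures in millidegrees Celsius. On systems
    without this interface the function simply returns an empty dict.
    """
    readings = {}
    for zone in sorted(Path(base).glob("thermal_zone*")):
        try:
            zone_type = (zone / "type").read_text().strip()
            millideg = int((zone / "temp").read_text().strip())
        except (OSError, ValueError):
            continue  # zone missing or unreadable; skip it
        readings[zone_type] = millideg / 1000.0
    return readings


if __name__ == "__main__":
    for name, temp in read_thermal_zones().items():
        print(f"{name}: {temp:.1f} °C")
```

Collected across every rack, readings like these are what make the hot-spot mapping and placement modelling described above possible.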
Along with preventing costly outages, improving thermal efficiency throughout the data centre also prolongs the lifespan of the servers themselves and reduces the overall cooling required, cutting the substantial operational costs that cooling incurs.
Intel has also steadily improved the power efficiency of its data centre products. Newer chips like its Xeon Scalable range offer greater performance than previous generations, while consuming less electricity. As with improved cooling performance, this saves data centre operators money in operational costs, but it also allows more chips to be packed into the same physical space.
This means that companies can eke more computational power out of the same resources, without needing to invest in more cabinets, increased power consumption or more cooling. Space efficiency is a key concern, too; floor space within a data centre is often in high demand, so the more physical components that can be packed into a single rack, the better.
The move from traditional spinning-platter HDDs to SSDs was a huge leap forward in this regard, as it meant that storage drives could take up much less space inside a server, albeit at a higher cost. SSDs were also much faster than HDDs at accessing the data stored on them, greatly speeding up overall server operations and enabling much faster performance for tasks like data analytics.
Intel has been instrumental in advancing storage technology through its partnership with Micron, which involved introducing data striping for increased performance and pioneering high-reliability enterprise drives. It also led the workgroup that developed NVMe technology and, more recently, co-developed 3D XPoint memory technology, which offers unparalleled speeds for low-latency workloads. You may be familiar with Intel's Optane range of memory and storage products, all of which are powered by 3D XPoint.
The end result of all of these numerous changes, developments and advancements has been modern data centres, which are capable of supporting complex, cloud-native workloads. Gone are the days of monolithic mainframes supporting single applications; now, data centres play host to hundreds upon hundreds of sophisticated, multi-core, multi-processor servers, each making use of advanced software-defined networking and low-latency solid state storage drives to power millions of simultaneous applications and processes.
Intel has been at the heart of this change for decades, drawing on its engineering heritage and world-class research expertise to push the boundaries of what data centres are capable of. Whether it's Optane storage technology, high-performance Xeon Platinum processors or the intelligent software supporting virtualised and containerised applications, Intel remains at the bleeding edge of enterprise processing technology.