How do you build a green data centre?


Creating a more sustainable business has become a key priority for organisations – the majority (60%) of Fortune 500 companies now have climate and/or energy efficiency goals. They’re examining their operations, supply chains and offices, and how they make and deliver their products or services, to see where they can become more cost and energy efficient. Inevitably, the enterprise data centre has a huge part to play in this drive.

The International Energy Agency (IEA), an intergovernmental organisation, estimates that data centres now account for over 1% of global electricity demand. By 2025, they could produce 3.2% of global carbon emissions. Adopting green data centres – where construction and operations minimise energy and water consumption and use alternative energy sources and cooling methods where possible – can help drive this environmental impact down.

Public awareness of the environmental impact of data centres is growing, and governments are also taking notice. The EU is expanding or reviewing legislation and initiatives, including the Ecodesign Regulation on servers and data storage products, the EU Code of Conduct on Data Centre Energy Efficiency, and the EU Green Public Procurement criteria. Its Proposal for a Directive on energy efficiency introduces new elements designed to improve the energy efficiency of data centres. In the UK, the Carbon Reduction Commitment scheme sets out strict rules for data centre managers whose facilities use more than 6,000MWh of electricity per year, while the Department for Environment, Food & Rural Affairs (DEFRA) has described data centres as a ‘key area of focus.’

While data centres pose real challenges and opportunities for companies trying to operate sustainably, it is possible to make them greener.

Designing greener

How? Well, the ideal approach is to start from scratch, building sustainability into every aspect of the design and construction. This can even start with where you locate the data centre, selecting a site that’s naturally cool, with low humidity and a good chance of benefiting from natural airflow.

Of course, this isn’t always possible, but every data centre can be designed with airflow and cooling management in mind. Cooling can account for up to 40% of a data centre’s total energy consumption, and power usage effectiveness (PUE) metrics already take this into account. By using natural airflows in cooling and avoiding the recirculation of warm air, data centre operators can reduce energy use and associated emissions.
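To make that relationship concrete, PUE is simply total facility energy divided by the energy delivered to IT equipment, so a facility where cooling alone takes 40% of total consumption can do no better than a PUE of around 1.67. A minimal sketch (the function name and figures below are illustrative, not from the article):

```python
def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A perfect facility would score 1.0; every watt spent on cooling or
    other overheads pushes the ratio higher.
    """
    return (it_kw + cooling_kw + other_kw) / it_kw

# If cooling accounts for 40% of total facility power, IT equipment gets
# the remaining 60%, so PUE = 1 / 0.6, roughly 1.67.
print(round(pue(it_kw=600, cooling_kw=400), 2))
```

Every kilowatt of cooling shaved off the numerator moves the facility closer to the ideal score of 1.0.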

Dividing data centres into hot and cold aisles can make a difference, managing intakes and exhausts to ensure that hot air can be vented from the hot aisle and cool air introduced through the cool aisle, for the most efficient configuration. Doors and walls can also direct airflow, while isolating the equipment that produces the most heat can help, particularly with measures to get that heat out of the building as rapidly as possible. New air-conditioning systems may be more energy efficient than existing ones, not just reducing their environmental impact, but their long-term operating costs.

What’s more, immersion cooling systems, like those designed and manufactured by Asperitas, transfer heat to a synthetic oil through natural convection, reducing the energy waste and costs of cooling and enabling the heat to be used elsewhere. They also make it possible to reduce the physical footprint of the servers and infrastructure, allowing data centres to be located where they would previously have been impractical.

Greener power

There are also opportunities to cut waste and consumption in how power is supplied. Before reaching servers, network infrastructure and storage racks, power typically moves through multiple uninterruptible power supplies (UPS) and power distribution units (PDUs). What’s more, many data centres use multiple power sources and UPS systems to provide redundancy. While these generally run at a high level of efficiency, this can vary: a modern modular UPS or PDU might achieve 96% to 99% efficiency under full load, while older or less capable units might be closer to 94%, wasting significant amounts of energy over time. Units with energy-efficient modes and intelligent management are more likely to push this up into the 98% to 99% range.
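To see why a few percentage points of UPS efficiency matter, consider the energy dissipated inside the power chain over a year of continuous operation. A rough sketch, using a hypothetical 500kW IT load (the figures are illustrative, not vendor specifications):

```python
HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(load_kw: float, efficiency: float) -> float:
    """kWh dissipated in the UPS itself over a year at a constant load."""
    input_kw = load_kw / efficiency  # power drawn from the grid
    return (input_kw - load_kw) * HOURS_PER_YEAR

# The same 500kW load behind a 94%-efficient unit versus a 98%-efficient one:
legacy = annual_ups_loss_kwh(500, 0.94)
modern = annual_ups_loss_kwh(500, 0.98)
print(f"legacy: {legacy:,.0f} kWh/yr, modern: {modern:,.0f} kWh/yr")
```

On these assumptions the older unit wastes roughly three times as much energy per year, which is why replacement often pays for itself in operating costs alone.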

In some cases, shifting to an energy supplier that uses more renewable energy sources, or even connecting your own renewable sources, could also reduce greenhouse gas emissions (scope 2). Unlike Amazon, not every business can operate a wind or solar farm to power their data centre, but there may be a way to harness renewable resources in a more modest and accessible way.

Crucially, organisations need to make the most of tools and technology that allow them to monitor and measure power consumption, then optimise based on the data. Intelligent PDUs, data centre smart assistants, and smart power and temperature controls all have a part to play. Google works at the largest scales, but it has pioneered this approach, combining smart temperature controls with data-driven insights from its DeepMind AI technology. In doing so, Google was able to reduce the energy used to cool its data centres by 40% over 18 months.

Going deeper

All these measures will help to build a greener data centre, but to maximise their energy efficiency, organisations also need to go deeper at the server hardware level. You can start by looking at how servers and racks are cooled, using variable speed fans that react to temperature or even liquid cooling for devices running more demanding, high-performance applications. You can integrate energy efficiency into your storage strategy, using SSDs, with their reduced heat output and power consumption, where you need performance, and slower, more efficient hard drives where you don’t.

Most importantly, you need to use the most energy-efficient processors and server hardware and optimise how and when you use them as much as possible. Again, data and AI-based management tools can help, dropping servers into sleep states outside peak hours of demand, but the key here is really virtualisation. Consolidating workloads from physical servers onto virtual servers reduces power consumption for the servers themselves, alongside the power consumption of the systems that cool them. When the multinational bank DBS moved from virtualising 50% of workloads to 99% in its private cloud, it shrank the physical footprint of its servers in one data centre to a quarter of its original size and reduced its power consumption by half.
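The arithmetic behind consolidation is straightforward: every workload that moves off its own physical box onto a shared host removes one server’s worth of electrical draw, plus the cooling needed to carry away its heat. A simplified sketch (the consolidation ratio and per-server power figure are hypothetical, not DBS’s numbers):

```python
import math

def consolidation_savings(workloads: int, vms_per_host: int,
                          server_watts: float) -> tuple[int, float]:
    """Hosts needed and server power saved after virtualising every workload."""
    hosts = math.ceil(workloads / vms_per_host)
    saved_watts = (workloads - hosts) * server_watts
    return hosts, saved_watts

# 200 one-workload-per-box servers consolidated at 10 VMs per host:
hosts, saved = consolidation_savings(200, 10, server_watts=400)
print(hosts, saved)  # 20 hosts remain; 72,000W of server draw removed
```

In practice the savings are larger still, because each retired server also stops contributing to the cooling load counted in PUE.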

Part of this success came down to a shift to AMD’s EPYC server CPUs. With more cores and a higher RAM capacity per server, DBS was able to run more virtual servers on a single physical machine. AMD’s own research suggests that consolidating onto ten servers based on the AMD EPYC 7713 processor can reduce energy consumption by 32% and save an estimated 70 metric tons of greenhouse gas emissions. What’s more, with more cores per socket and more cores per server, it’s easier to fit more processing power into a nimble dual- or single-socket server, reducing server counts and footprints even further.

AMD’s current generation EPYC processors already deliver exceptional levels of performance per watt. Yet AMD has even bigger ambitions for the future, committing to a 30x increase in energy efficiency for AMD processors and Instinct GPU accelerators by 2025 – representing a 2.5x acceleration of industry trends from 2015-2020, and a 97% reduction in energy use per computation from 2020-2025. You only have to look at Oak Ridge National Laboratory’s Frontier supercomputer to see what might be possible. Each of its 9,408 HPE Cray EX nodes combines an AMD EPYC 7A53 CPU with four AMD Instinct MI250X GPUs, and the system offers 9.2 petabytes of memory in total. It’s the world’s fastest supercomputer, with a theoretical peak performance of 1.686 exaflops, yet also the world’s most energy-efficient, delivering 52.23 gigaflops of performance per watt. That’s 32% more efficient than the previous energy-efficiency leader.
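Performance per watt is simply sustained floating-point throughput divided by average power draw, which is how the Green500 list ranks systems. A quick sketch of the arithmetic, working backwards from the figures quoted above (the previous leader’s efficiency is derived from the 32% claim, not an official measurement):

```python
def gflops_per_watt(sustained_gflops: float, power_watts: float) -> float:
    """Energy efficiency as the Green500 measures it: throughput / power."""
    return sustained_gflops / power_watts

# Being 32% more efficient at 52.23 GF/W implies the previous leader
# managed roughly 39.6 GF/W.
previous_leader = 52.23 / 1.32
print(round(previous_leader, 1))
```

The same ratio works at any scale, from a single rack to an exascale system, which makes it a useful yardstick when comparing server hardware.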

And note the focus on accelerators. As more organisations ramp up investments in big data and AI applications, energy-efficient GPUs will be as vital as optimised CPUs in reducing power consumption and tackling climate change. If all AI and HPC server nodes matched AMD’s promised power reductions, billions of kilowatt-hours of electricity could be saved every year. As we move away from an approach to computing focused on the CPU to one that utilises a wider range of accelerators, adaptive compute engines, data processing units and FPGAs, we’ll see these more application or workload-specific processors deliver even more performance per watt.

Building a more sustainable data centre is a holistic enterprise, taking in everything from the physical space to how individual servers are configured, cooled and managed. But by building on the latest processor technology and optimising how it’s used through virtualisation, organisations can put themselves in the best position to meet their sustainability goals. Businesses need to look beyond cores, threads and raw performance and focus more on efficiency when making hardware choices, but AMD’s architecture means you can have both – and reduce TCO in the process.

If you’re looking for more detail on how, tools like AMD’s EPYC bare metal server TCO estimation tool can help.

ITPro

ITPro is a global business technology website providing the latest news, analysis, and business insight for IT decision-makers. Whether it's cyber security, cloud computing, IT infrastructure, or business strategy, we aim to equip leaders with the data they need to make informed IT investments.
