Next steps for the data centre

In the data centre, power no longer means what you can do with your servers; it's what the bill for plugging them in and running them adds up to. To stop services like shared calendars, software on demand and new Internet tools from driving those bills too high, the industry is looking for a new approach to supplying power.

Big data centres are getting bigger to cope with consumer demand for Web 2.0-style services; one large US ISP is predicting a fivefold increase in the data it will serve up over the next five years, which will mean nine times as many servers (and nine times the electricity bill). Enterprises investing in web services, software as a service and remote management will also need the servers to support those technologies.

It's not unusual to see data centres with 10,000 or even 50,000 processors, according to Intel chief technology officer Justin Rattner, and he expects that figure to keep rising. Around five per cent of servers go into what Intel dubs 'mega data centres' today, but by 2010 he believes around 25 per cent of all servers sold will go into data centres with more than 100,000 servers. And while server virtualisation means you're making the most of your hardware investment, higher CPU utilisation demands more power to run your servers and more power to cool them.

The capacity of the data centre

Server racks are typically designed to hold servers running at 15 to 20 per cent load. As new usage models push server load closer to 80 per cent, existing power and thermal management solutions are starting to struggle. If it means new racking, power supplies and cooling, those money-saving data centre consolidations could end up costing more than you'd expect.
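To put rough numbers on that shift, here's a minimal sketch using a simple linear power model - a hypothetical server drawing around 180W idle and 300W flat out, 40 of them to a rack. None of these figures come from the article; they're illustrative assumptions:

```python
# Rough sketch: how rack power draw rises with average server load.
# Simple linear model: a server draws idle_w at 0% load and peak_w at 100%.
# All figures here are illustrative assumptions, not from the article.

def server_power(load, idle_w=180.0, peak_w=300.0):
    """Estimated wall power (watts) for one server at a given load (0.0 to 1.0)."""
    return idle_w + (peak_w - idle_w) * load

servers_per_rack = 40  # assumed rack density

for load in (0.20, 0.80):
    rack_kw = servers_per_rack * server_power(load) / 1000.0
    print(f"Rack power at {load:.0%} average load: {rack_kw:.1f} kW")

# Roughly 8.2 kW per rack at 20% load versus 11.0 kW at 80% - before you pay
# again to remove that power from the room as heat.
```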

As electricity prices continue to increase, paying for power is a significant issue. A data centre with a million servers would need as much electricity as a quarter of a million homes. That's why Microsoft, Google and Yahoo are all building their next generation of data centres close to the hydroelectric power stations on the Columbia River. Short of sticking a wind turbine on the roof, what are your options for managing power costs? Intel, IBM and Google all have some ideas.
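That comparison is easy to sanity-check using the roughly 300W per-server figure quoted later in this article; the average household draw of around 1.2kW is an assumption added here for illustration:

```python
# Back-of-envelope check on "a million servers needs as much electricity as a
# quarter of a million homes". The 300W per-server figure appears later in
# the article; the ~1.2kW average household draw is an assumed figure.

servers = 1_000_000
server_watts = 300.0    # average rated draw per server
home_watts = 1_200.0    # assumed average continuous draw of one home

total_mw = servers * server_watts / 1e6
homes_equivalent = servers * server_watts / home_watts

print(f"Total server draw: {total_mw:.0f} MW")            # 300 MW
print(f"Equivalent households: {homes_equivalent:,.0f}")  # 250,000
```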

Processors that need less power are part of the answer, and cramming more cores into each processor could give you the same performance from fewer blades (because it's not just the processor that needs power). Within five years Intel expects to have a processor package with 80 cores. Google distinguished engineer Luiz Barroso warns that "power could essentially overtake the hardware costs if we don't do anything about it"; when Google plotted the cost trends for its data centres, it found that over the lifetime of a server, power costs more than the original hardware after three years. Barroso wants the industry to look at the whole system: "memory, I/O, cooling and power delivery and conversion as well."
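A minimal sketch of the trend Barroso describes, using assumed figures for hardware price, power draw, cooling overhead and electricity tariff (they are not Google's numbers), shows how cumulative electricity spend can catch up with the purchase price in about three years:

```python
# Sketch of cumulative power cost catching up with server purchase price.
# Every figure below is an illustrative assumption, not a number from Google.

HOURS_PER_YEAR = 24 * 365

hardware_cost = 1500.0   # assumed price of a commodity server ($)
server_watts = 250.0     # assumed average draw at the wall (W)
overhead = 2.0           # assumed multiplier for cooling and power delivery losses
price_per_kwh = 0.12     # assumed electricity tariff ($/kWh)

annual_power_cost = server_watts * overhead / 1000.0 * HOURS_PER_YEAR * price_per_kwh

for year in range(1, 6):
    cumulative = annual_power_cost * year
    note = "  <- power now costs more than the hardware" if cumulative > hardware_cost else ""
    print(f"Year {year}: cumulative power cost ${cumulative:,.0f}{note}")
```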

Where the power goes

A typical data centre power supply today delivers only a third of the power going through it to your processors; the other two thirds turn into heat you have to pay to get rid of. That's a huge amount when you consider that the average server is rated at well over 300W.
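The reason so little reaches the silicon is that every conversion stage between the utility feed and the chip loses a slice, and the losses multiply. The per-stage efficiencies in this sketch are illustrative assumptions rather than measured figures, but they show how a plausible-looking chain ends up delivering only around a third of the input:

```python
# How conversion losses stack up between the utility feed and the processor.
# The per-stage efficiencies are illustrative assumptions, not measurements.

stages = [
    ("UPS / backup",                0.85),
    ("power distribution",          0.90),
    ("server power supply (AC-DC)", 0.65),
    ("on-board voltage regulators", 0.70),
]

remaining = 1.0
for name, efficiency in stages:
    remaining *= efficiency
    print(f"after {name:<30} {remaining:.0%} of the input power remains")

print(f"\nDelivered to the processors: {remaining:.0%}; "
      f"the other {1 - remaining:.0%} becomes heat you pay to remove.")
```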

As Google builds its own servers from scratch, it has been able to address these problems by designing its own power supplies, and is currently getting better than 90 per cent efficiency - a big improvement on the 55 per cent efficiency of the power supplies in home and desktop systems, where the power supply can be the component that consumes the most power.
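Put in terms of a single machine, the gap between those two efficiency figures is stark. Here's what it looks like for an assumed 200W DC load (an illustrative figure, not one from Google):

```python
# What 55 per cent versus 90 per cent efficiency means for one machine.
# The 200W DC load is an assumed figure for illustration.

dc_load_watts = 200.0  # power the motherboard, drives and fans actually need

for label, efficiency in (("typical desktop PSU", 0.55), ("Google-style PSU", 0.90)):
    wall_watts = dc_load_watts / efficiency
    wasted = wall_watts - dc_load_watts
    print(f"{label}: {wall_watts:.0f}W at the wall, {wasted:.0f}W lost as heat")

# typical desktop PSU: 364W at the wall, 164W lost as heat
# Google-style PSU:    222W at the wall,  22W lost as heat
```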

The key to Google's success is its move to a single 12V power supply. Most server power supplies are designed to deliver several different voltages to the motherboard, which in practice means a separate conversion stage for each voltage - making one power supply really at least four. Voltage regulators on the motherboard handle the final conversions and stabilise the power. Google's design also improves voltage regulator efficiency, because it's easier not to overload a single power rail. A higher-efficiency single-rail power supply will actually be cheaper than its multi-voltage equivalent, as it uses fewer components.

While redesigning every power supply in the world would be overkill, there's certainly scope for improvement - and for considerable savings. Google estimates that if its power supplies were deployed in 100 million PCs running for an average of eight hours per day, the power saved would be around 40 billion kilowatt-hours over three years - a reduction of $5 billion in power costs at California's energy rates. Google has no plans to build hardware for anyone else, but it will be working with Intel to create an open standard for power supplies with a single 12V rail.
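Those headline figures hang together if you work backwards from them; the arithmetic below uses only the numbers quoted above, and prints what they imply per PC and per kilowatt-hour:

```python
# Working backwards from the figures quoted: 100 million PCs, eight hours a
# day, 40 billion kWh saved over three years, worth $5 billion.

pcs = 100_000_000
hours_per_day = 8
years = 3
kwh_saved_total = 40e9
dollars_saved = 5e9

powered_on_hours = hours_per_day * 365 * years
watts_saved_per_pc = kwh_saved_total / pcs / powered_on_hours * 1000.0
implied_rate = dollars_saved / kwh_saved_total

print(f"Implied saving per powered-on PC: ~{watts_saved_per_pc:.0f}W")   # ~46W
print(f"Implied electricity rate: ${implied_rate:.3f}/kWh")              # $0.125/kWh
```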

Switch to DC input and you could get even more efficiency; in tests at Lawrence Berkeley Labs, a prototype Intel power supply used 14 per cent less power. Depending on your needs, you could take the power saving, or use it to fit 60 per cent more servers into your data centre for the same power budget.

Power through the boards

Intel Research is also looking at what happens to the power once it reaches the motherboard. Tuning the voltage regulators for different power demands and switching to whichever is most efficient for the current load can save a significant amount of power; the research team will have its first test motherboard ready shortly.

If you don't want to replace any hardware, look for tools that show you what the power you're paying for is doing. BAPCo's new EECoMark benchmark measures the power efficiency of desktop PCs; Intel is collaborating on a version for servers due in the first half of 2007.

Mary Branscombe

Mary is a freelance business technology journalist who has written for the likes of ITPro, CIO, ZDNet, TechRepublic, The New Stack, The Register, and many other online titles, as well as national publications like the Guardian and Financial Times. She has also held editor positions at AOL’s online technology channel, PC Plus, IT Expert, and Program Now. In her career spanning more than three decades, the Oxford University-educated journalist has seen and covered the development of the technology industry through many of its most significant stages.

Mary has experience in almost all areas of technology but specialises in all things Microsoft and has written two books on Windows 8. She also has extensive expertise in consumer hardware and cloud services - from mobile phones to mainframes. Aside from reporting on the latest technology news and trends, and developing whitepapers for a range of industry clients, Mary also writes short technology mysteries and publishes them through Amazon.