Next steps for the data centre

Innovations in processor technology, power, cooling and software all mean that the traditional data centre is changing shape rapidly.

In the data centre, power no longer means what you can do with your servers; it's what the bill for plugging them in and running them adds up to. To stop services like shared calendars, software on demand and new Internet tools driving the bills up too far, the industry is looking for a new approach to supplying power.

Big data centres are getting bigger to cope with consumer demand for Web 2.0-style services; one large US ISP predicts a fivefold increase in the data it will serve up over the next five years, which will mean nine times as many servers (and nine times the electricity bill). Enterprises investing in web services, software as a service and remote management will also need the servers to support those technologies.

It's not unusual to see data centres with 10,000 or even 50,000 processors, according to Intel chief technology officer Justin Rattner, and he expects that figure to keep rising. Around five per cent of servers go into what Intel dubs 'mega data centres' today, but by 2010 he believes more like 25 per cent of all servers sold will go into data centres with more than 100,000 servers. And while server virtualisation means you're making the most of your hardware investment, higher CPU usage demands more power to run your servers and more power to cool them.

The capacity of the data centre


As electricity prices continue to increase, paying for power is a significant issue. A data centre with a million servers would need as much electricity as a quarter of a million homes. That's why Microsoft, Google and Yahoo are all building their next generation of data centres close to the hydroelectric power stations on the Columbia River. Short of sticking a wind turbine on the roof, what are your options for managing power costs? Intel, IBM and Google all have some ideas.
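To put the homes comparison in perspective, a quick back-of-the-envelope sketch can show what it implies per server. The annual household electricity consumption used here is an assumption, not a figure from the article:

```python
# Sanity check: a million-server data centre drawing as much power
# as a quarter of a million homes - what does that imply per server?
# Assumed figure (not from the article): an average household uses
# roughly 10,000 kWh of electricity a year.
HOURS_PER_YEAR = 24 * 365
household_kwh_per_year = 10_000                              # assumption
avg_household_kw = household_kwh_per_year / HOURS_PER_YEAR   # ~1.14 kW

servers = 1_000_000
homes = 250_000

# Average draw per server implied by the article's comparison
implied_server_watts = homes * avg_household_kw * 1000 / servers
print(f"implied average draw per server: {implied_server_watts:.0f} W")
```

On that assumption the comparison works out to an average draw of a few hundred watts per server, which is in the right ballpark for the hardware of the day once cooling overhead is ignored.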

Processors that need less power are part of the answer, and cramming more cores into each processor could give you the same performance from fewer blades (because it's not just the processor that needs power). In five years Intel expects to have a processor package with 80 cores. Google distinguished engineer Luiz Barroso warns that "power could essentially overtake the hardware costs if we don't do anything about it"; when Google plotted the cost trends for its data centre, it found that over the lifetime of a server, power costs exceed the original hardware cost after three years. Barroso wants the industry to look at the whole system: "memory, I/O, cooling and power delivery and conversion as well."

Where the power goes

Because Google builds its own servers from scratch, it has been able to address these problems by designing its own power supplies, and is currently getting better than 90 per cent efficiency; a big improvement over the 55 per cent efficient power supplies in home or desktop systems, where the power supply can waste more power than any other component.
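The difference between 55 and 90 per cent efficiency is easy to underestimate, because the waste shows up on the wall side of the supply. A short sketch makes it concrete; the 300W server load is a hypothetical figure, not one from the article:

```python
# What power-supply efficiency means at the wall socket.
# The 55% and 90% efficiencies are the article's figures;
# the 300 W DC-side server load is an assumption for illustration.
dc_load_watts = 300

def wall_draw(efficiency):
    """Power drawn from the mains to deliver dc_load_watts to the server."""
    return dc_load_watts / efficiency

typical = wall_draw(0.55)   # ~545 W from the wall
google = wall_draw(0.90)    # ~333 W from the wall

saving = typical - google
print(f"{saving:.0f} W saved per server ({saving / typical:.0%} of the wall draw)")
```

For the same server load, the more efficient supply pulls roughly two-fifths less power from the mains, before counting the cooling needed to remove the waste heat.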

The key to Google's success is its move to a single 12V power supply. Most server power supplies are designed to deliver several different voltages to the motherboard, which in practice means a separate supply for each voltage - so one power supply is really at least four in a single box. Voltage regulators on the motherboard handle the final conversions and stabilise the power. Google's design also increases voltage regulator efficiency, as it's easier not to overload a single power rail. A higher efficiency single-rail power supply will actually be cheaper than its multi-voltage equivalent, as it uses fewer components.

While redesigning every power supply in the world would be overkill, there's certainly scope for improvement - and for considerable savings. Google estimates that if its power supplies were deployed in 100 million PCs running for an average of eight hours per day, the power saved would be around 40 billion kilowatt-hours over three years - a reduction of $5 billion in power costs at California's energy rates. Google has no plans to build hardware for anyone else, but it will be working with Intel to create an open standard for power supplies with a single 12V rail.
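Google's headline numbers can be checked by working backwards to the per-PC saving and the electricity rate they imply:

```python
# Back out the per-PC figures implied by Google's estimate:
# 100 million PCs, 8 hours a day for three years, saving
# 40 billion kWh worth $5 billion.
pcs = 100_000_000
hours = 8 * 365 * 3                  # hours of use per PC over three years
kwh_saved = 40_000_000_000
dollars_saved = 5_000_000_000

watts_per_pc = kwh_saved * 1000 / (pcs * hours)   # average saving per PC
rate = dollars_saved / kwh_saved                  # implied price per kWh

print(f"implied saving per PC: {watts_per_pc:.0f} W")
print(f"implied electricity rate: ${rate:.3f}/kWh")
```

The estimate works out to a saving in the region of 45W per PC at 12.5 cents per kilowatt-hour - both plausible figures, which suggests the headline numbers are internally consistent.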


Switch to DC input and you could get even more efficiency: in tests at Lawrence Berkeley Labs, a prototype Intel power supply used 14 per cent less power. Depending on your needs, you could take the power saving, or use it to fit 60 per cent more servers into your data centre for the same power budget.

Power through the boards

If you don't want to replace any hardware, look for tools that show what the power you're paying for is actually doing. BAPCo's new EECoMark benchmark measures the power efficiency of desktop PCs; Intel is collaborating on a version for servers, due in the first half of 2007.

