Rebirth of the supercomputer

The name Cray is synonymous with supercomputing, as is the image of physically huge, room-filling computers with an equally huge price tag.

However, everything changed during the nineties, as the relentless progress described by Moore's Law delivered affordable workstations and servers capable of doing everything those early supercomputers could, and more. They cost less to buy, cost less to run in terms of power and cooling, and of course took up a lot less room.

Inevitably, many of the smaller companies in the business disappeared, along with the notion that supercomputing had a future. Yet Cray is still going strong, and the likes of HP and IBM bought up many of those smaller companies to exploit the skills and technology they represented, and have been active in the high performance computing field ever since.

Racing ahead

Indeed, supercomputers are alive and well and on the rise as demand for ever more calculation-intensive tasks continues, be that solving quantum mechanics problems, climate research (including global warming studies and weather forecasting), molecular modelling or simulating the detonation of nuclear weapons.

The fact is that universities, military agencies and scientific research laboratories are all demanding more power from their computers, and that's good news for the enterprise, because what is developed in academia and defence eventually filters down to the business sector, or at least the really useful bits do. Without the supercomputer we would not have vector processing, liquid cooling, parallel file systems or RAID arrays and striped disks, for example.
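Striping is the easiest of those trickle-down ideas to show in code. The snippet below is a purely conceptual Python sketch of RAID-0-style striping, dealing data blocks out round-robin across several "disks" so they can be read and written in parallel; the block size, disk count and function names are invented for the illustration.

```python
# Conceptual illustration of RAID-0 style striping: data blocks are
# distributed round-robin across several "disks" so reads and writes
# can proceed in parallel. Disk count and block size are example values.

BLOCK_SIZE = 4096          # bytes per stripe unit (example value)
NUM_DISKS = 4              # number of disks in the stripe set (example value)

def stripe(data: bytes, num_disks: int = NUM_DISKS, block_size: int = BLOCK_SIZE):
    """Split data into blocks and deal them out round-robin across disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        disks[(i // block_size) % num_disks].extend(block)
    return disks

if __name__ == "__main__":
    payload = bytes(20 * BLOCK_SIZE)                    # 20 blocks of dummy data
    striped = stripe(payload)
    print([len(d) // BLOCK_SIZE for d in striped])      # blocks per disk: [5, 5, 5, 5]
```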

Enterprise delivery

The personal supercomputer

Using Microsoft Compute Cluster Server 2003 you can even daisy-chain Typhoon PSCs together. "We see strong growth in the PSC market and will be adding to the Typhoon range of systems, making them capable of accommodating eight quad-core CPUs. Graphics-capable versions, with the addition of a dedicated head node with two further CPUs, will be available shortly," said Howard Wiblin, Northern European sales manager for Tyan.

The target market is not only scientific research institutions but also the finance sector, which has expressed an interest in performing real-time statistical analysis in dealing and back-room operations, for example. Considering the computing power on tap, the value equation is impressive: a configured Typhoon PSC starts at around 5,500, rising to 14,000, plus the cost of the OS.

The PFLOPS race

The speed of a supercomputer is traditionally measured in floating point operations per second (Flops), calculated against a benchmark that represents real-world applications and problems. As performance grew, the raw figure became unwieldy, and results are now usually quoted in TeraFlops (TFlops) instead; a TFlops, as the name suggests, is a trillion (tera) Flops. Currently, IBM has the fastest supercomputer in the world with its Blue Gene/L machine installed at the Lawrence Livermore National Laboratory in California, which uses its 131,072 processor cores to reach a staggering 207.3 TFlops.
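As a quick back-of-the-envelope check, using only the figures quoted above rather than any official benchmark data, that result works out at roughly 1.6 GFlops per processor core:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the article.
FLOPS_PER_TFLOPS = 1e12            # a TFlops is a trillion Flops

blue_gene_tflops = 207.3           # quoted Blue Gene/L result
blue_gene_cores = 131_072          # quoted processor core count

per_core = blue_gene_tflops * FLOPS_PER_TFLOPS / blue_gene_cores
print(f"~{per_core / 1e9:.2f} GFlops per core")   # ~1.58 GFlops per core
```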

Yet that isn't enough, and the race for the PFlops supercomputer is on. Cray has announced that it is to upgrade the US Department of Energy's Oak Ridge National Laboratory supercomputer at a cost of 100 million, a project that should be finished during 2008 and capable of the magic PFlops. The Japanese research institute RIKEN claims to have already got there with MDGRAPE-3, but this is discounted from the 'race' because its specialised architecture is designed not for general purpose high performance computing but specifically for simulating molecular dynamics, so its application outside that field is non-existent. Still, it shows it can be done, and in the case of the joint Hitachi/Intel/SGI driven MDGRAPE-3, done using 4,808 custom processors, 64 servers with 256 dual-core processors and 37 servers with 74 processors, bringing the total to just 40,314 cores.

And the winner is?

TOP500 supercomputer list

The list covers everything from 24 different Blue Gene systems, 31 AMD Opteron clusters and 47 System p-based machines to a 15 TFlops BladeCenter JS21-based supercomputer at Indiana University, currently the biggest university-based supercomputer in the US.

The increase in the number of IBM BladeCenter systems in the list, up from 71 to 132 at the last count, is indicative of the IBM drive towards high performance computing. "The rebirth of interest in supercomputing is down to two main factors: firstly, the end of Moore's Law; it is now no longer possible to achieve better performance systems by simply ramping up GHz, and organisations are being forced to look at new computing methods. Secondly, the pressure on fuel prices is well known and is inevitably driving us to examine new designs that give higher compute capacity for the electricity consumed," said Caroline Isaac, strategic growth business unit executive at IBM.

This has led to the development of Roadrunner, a joint project between IBM and the Los Alamos National Laboratory, where it is destined to be installed within 1,100 square metres of floor space towards the end of next year.

Roadrunner will be jointly powered by a Red Hat Linux 4.3-driven base cluster of IBM System x3755 servers using 16,000 AMD Opteron processors, and by 16,000 Cell processors of the kind designed for the PlayStation 3. Each Cell contains eight processing cores plus a master controller that can assign tasks to any member of the processing team as required, and with each Cell capable of 256 billion calculations per second, Roadrunner stands every chance of meeting the predicted 1.6 PFlops target. That is some four times faster than the predicted ultimate Blue Gene performance of 360 TFlops, which hasn't yet been achieved.
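Taking the article's figures at face value, a rough sanity check (nothing official, just arithmetic on the numbers quoted above) shows the Cell side alone has comfortable theoretical headroom for that target, and the "four times faster" comparison holds up:

```python
# Rough sanity check using only the figures quoted above.
cells = 16_000
cell_flops = 256e9                      # 256 billion calculations per second per Cell

cell_peak_pflops = cells * cell_flops / 1e15
print(f"Peak from Cells alone: ~{cell_peak_pflops:.1f} PFlops")   # ~4.1 PFlops

target_pflops = 1.6                     # predicted Roadrunner performance
blue_gene_pflops = 360e12 / 1e15        # predicted ultimate Blue Gene performance
print(f"Roadrunner vs Blue Gene: ~{target_pflops / blue_gene_pflops:.1f}x")  # ~4.4x
```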

The secret to this hoped-for success lies in the ability to offload highly repetitive, specialised calculations to the Cell processors while leaving the AMD Opteron-powered cluster to deal with everything else. The other secret will be in developing applications that can work across two different processor-based clusters, dividing the calculation load effectively. The hybrid design allows the system to segment complex mathematical equations, routing each segment to the part of the system that can most efficiently handle it. So the typical compute processes, file I/O and communication activity will be handled by the AMD Opteron processors, while the more complex and repetitive elements, the ones that traditionally consume the majority of supercomputer resources, will be directed to the Cell processors.
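The sketch below is a hypothetical Python illustration of that hybrid routing idea, not Roadrunner's actual software: a simple dispatcher sends repetitive numeric work to an "accelerator" pool standing in for the Cells and leaves housekeeping to a general-purpose pool standing in for the Opteron cluster. All the names, pool sizes and the classification rule are invented for the example.

```python
# Hypothetical sketch of the hybrid offload pattern described above:
# repetitive numeric kernels go to an "accelerator" pool (the Cells),
# everything else stays on the general-purpose pool (the Opterons).
# Pool names and the classification rule are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

accelerator_pool = ThreadPoolExecutor(max_workers=8)   # stands in for Cell processors
general_pool = ThreadPoolExecutor(max_workers=4)       # stands in for Opteron cluster

def dense_kernel(data):
    # Highly repetitive arithmetic: the kind of work offloaded to the Cells.
    return sum(x * x for x in data)

def housekeeping(name):
    # File I/O, communication and control: left to the general-purpose side.
    return f"handled {name} on the host cluster"

def submit(task_type, *args):
    """Route a task segment to whichever pool can handle it most efficiently."""
    if task_type == "compute":
        return accelerator_pool.submit(dense_kernel, *args)
    return general_pool.submit(housekeeping, *args)

if __name__ == "__main__":
    futures = [submit("compute", range(1_000_000)), submit("io", "checkpoint write")]
    for f in futures:
        print(f.result())
```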

HECTOR's House

Davey Winder
