High Performance Computing looks to the future

Take McNealy's comment together with the changes in processor technology expected as High Performance Computing (HPC) moves into the mainstream, and the potential emerges for some subtle but significant changes in the architecture of enterprise infrastructures.

Put at its simplest, the question becomes: `if the network is the computer, do we actually need individual computers any more in the way that we currently know and love them?' Do we need big data centres, for example? Do we need armies of application servers? Do we need PCs?

For the immediate future the short answer is yes, but there is an argument that many of those traditional computer systems could be replaced by something else - and that is the long-term view of John McHugh, vice president and general manager of HP's network systems division, ProCurve. "Networking is the underpinning of everything now, the fundamental enabler," he said. "I believe that if HP is to use networking as a strategic weapon then there has to be a complete and credible networking offering. In other words, we need the credibility as an open market network player."

The fundamental issue now starting to surface is that some of what we currently consider core processor logic - the logic found in a server together with memory, I/O and the rest - can now be integrated into the network. At its simplest: if it is sensible today for the I/O, the communications technology, to sit close to the logic, could it be just as sensible for the logic to be spread around, close to the I/O?

This, in effect, inverts the classic systems architecture: if the network is the nervous system, is there an argument that business-logic Blades could be distributed around the network, located in network chassis?

"That is exactly where our thinking is and it is the exact question we are asking," said McHugh. "The question is whether we could get every application or service that is running in a datacentre or the network. Ideally, you would have pools of processing, and also distributed processing. So if an application is better managed centrally you have a nice, giant, scalable pool there. But if it is something that works better in a distributed environment you'd love to be able to push that as far out as possible and make it as scalable and distributed as manageable."

Typical applications that would fit this distributed model, he suggested, would be application-specific devices such as web services accelerators, active management tools and security tools. But it would also fit well with many new applications, such as video pre-processing. HP has a customer using such a tool to monitor the input from some 6,000 security cameras in a hotel. Instead of a security operative facing a wall of screens, there is just one. The system monitors the activity from each camera and compares the images to pre-defined patterns and rules. So if one camera picks up people suddenly running in a location where that is unusual - a hotel corridor, say - that is the image displayed to the security operative.
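To make the idea concrete, the pattern McHugh describes can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of rule-based camera filtering only - the camera IDs, locations, motion scores and thresholds are assumptions for illustration, not details of HP's customer system.

    # Illustrative sketch: a rule-based filter that surfaces the one camera
    # feed whose activity breaks the expected pattern for its location.
    # Camera IDs, locations, thresholds and motion scores are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        location: str        # e.g. "corridor", "lobby"
        max_motion: float    # motion score considered normal for this location

    RULES = {
        "corridor": Rule("corridor", max_motion=0.2),  # corridors are usually quiet
        "lobby":    Rule("lobby",    max_motion=0.8),  # lobbies are usually busy
    }

    def select_feed(frames):
        """frames: iterable of (camera_id, location, motion_score).
        Returns the camera whose activity most exceeds what its location
        allows, or None if everything looks normal."""
        worst = None
        for camera_id, location, motion in frames:
            rule = RULES.get(location)
            if rule is None or motion <= rule.max_motion:
                continue                       # within normal bounds, ignore
            excess = motion - rule.max_motion
            if worst is None or excess > worst[1]:
                worst = (camera_id, excess)
        return worst[0] if worst else None

    # People suddenly running in a corridor trips the rule, so that single
    # feed is what the operator sees; the busy lobby is ignored.
    print(select_feed([("cam-0042", "corridor", 0.9), ("cam-0871", "lobby", 0.6)]))

The point of the sketch is the architectural one McHugh makes: logic of this kind is simple, stateless per frame, and sits naturally next to the I/O rather than in a central server farm.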

"The application can be written on any industry standard server running Windows or Linux, and then ported to the Blade server embedded into the network. That is the vision we have," he said.

Integrating business processes

Embedding business or process logic into devices such as network routers and switches, so that the processing is conducted as an integral part of the communication process, is being driven, at least in part, by the development of those ever-larger datacentre pools. The number of connections now needed in a datacentre, plus the dynamic switching required and the traffic that has to be handled, means there is practical sense in moving towards a Blade-based architecture centred on aggregating 10Gb services. "When you put networking in a server chassis you can give it indigestion," McHugh said. "Instead you can put a number of servers onto a single 10Gb trunk and not have it as a managed network node inside the box."

This does point to new architectures that locate the logic for any function at the point where it is most needed. That could reduce latencies for some critical processes, for example, and could also open up the opportunity to use alternative processor technologies, with applications written to run on platforms better suited to the specifics of the process or task being undertaken. The predicted growth of multicore processors - Intel is expected to offer x86-architected processors with 128 cores by 2012, and alternative technologies are already surpassing that number - means that HPC technologies are going to play a far more central role in infrastructure architectures, and could also open up some truly fascinating possibilities.

The development of service-based end-user delivery models is also pushing the networking vendors to expand their horizons and product portfolios. Cisco, for example, is now actively growing into the SOA, services and support area, while ProCurve, as part of HP, has the benefit of riding with a company already well established in services. This does not mean, however, that McHugh finds himself under any restrictions or strictures in the direction he wishes to take the company, particularly in terms of creating what he sees as a complete offering.

He does acknowledge, however, that there can be issues in defining the word `complete'. "ProCurve has been able to expand its market footprint while staying out of certain applications spaces, and that has not hampered us," he said. "But it has prevented the company from having access to parts of the network where those specific applications or technologies have been perceived by customers as representing `complete'. We have sufficient critical mass to be seen in many customers' eyes as complete, but not in all.

"There are no sacred cows ProCurve cannot push on if it wants to. The virtualisation of services and processing, and the virtualisation of the network means that there will be a lot of different philosophies and architectural battles to be fought over the next five to 10 years. It is the same with the way SOA has redefined datacentres. What's happening is that the whole operating system stack is being re-evolved into higher level datacentre management as opposed to the old systems management. HP already well established in this area. We have a table with all the pieces on it, regardless of what direction the market takes."

He has opinions on the way the future may develop in this area, but is not yet getting any messages from HP saying `you can't put a server in your network'. In fact, ProCurve has already done this with a full-function server that runs a standard O/S and allows the company to locate appliances and functionality around the network where they are required. It also gives ProCurve the flexibility to attack markets with the best, most appropriate architecture for a business need, rather than being bound to a specific technology or architecture.

McHugh goes to considerable lengths to highlight the difference he sees between data and information. One of the changes he sees coming is future networks being able to receive data and act as the first element in filtering and managing it as it progresses through the infrastructure and business processes. Architectures will need to become flexible enough to enable that process of turning data into information.
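To ground that distinction, here is a small, hypothetical sketch of the kind of first-hop filtering he describes: raw readings arrive at the edge of the network, and only readings that represent a significant change - information rather than raw data - are forwarded onward. The field names and the change threshold are assumptions for illustration, not a description of any HP or ProCurve product.

    # Hypothetical first-hop filter: raw readings enter at the network edge,
    # only summarised "information" events travel onward. The field names and
    # the 5% change threshold are illustrative assumptions.
    def edge_filter(readings, threshold=0.05):
        """readings: iterable of (sensor_id, value). Yields an event only when
        a sensor's value changes by more than `threshold` relative to its last
        reported value, turning a raw stream into a much smaller event stream."""
        last_reported = {}
        for sensor_id, value in readings:
            prev = last_reported.get(sensor_id)
            if prev is None or abs(value - prev) > threshold * max(abs(prev), 1e-9):
                last_reported[sensor_id] = value
                yield {"sensor": sensor_id, "value": value, "event": "significant_change"}

    raw = [("s1", 10.0), ("s1", 10.01), ("s1", 10.9), ("s2", 5.0), ("s2", 5.01)]
    for event in edge_filter(raw):
        print(event)  # only readings that carry new information are forwarded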

"HP is in a classic position because it has not been technology zealots," he suggested. "It has been prepared to be in all sectors/technologies and to wait to see what technologies users congeal round as the preferred options."