Intel and AMD to kill off affordable supercomputer graphics?

24 Sep, 2010

Nvidia’s subsidy for high-end graphics boards is at risk.

Intel’s and AMD’s moves to integrate graphics processors onto their CPU chips could be a bridge too far for supercomputer users and other researchers who depend on specialist graphics cards.

The warning bell is being rung by Greg Pfister, a former IBM distinguished engineer and now affiliated with Colorado State University as research faculty. He has blogged that, whilst the world is celebrating advances like Intel’s Sandy Bridge and AMD’s Zacate, others may not be so pleased.

In his blog, Pfister quotes Sharon Glotzer of the University of Michigan, who said: “Today you can get 2GFlops for $500 (£316). That is ridiculous.”

Pfister replied: “It is indeed. And it's only possible because CUDA [Nvidia’s parallel computing platform] is subsidised by sinking the fixed costs of its development into the high volumes of Nvidia's mass market low-end GPUs.”

In the US, Nvidia also has income from its participation in defence projects. The Defence Advanced Research Projects Agency (DARPA) has paid Nvidia $25 million to work on its Exascale project.

Exascale is a research project aimed at developing a supercomputer 100 to 1,000 times faster than anything around today. In addition, it has to consume as little power as possible – a lean, green computing machine.

However, the income from this project alone would be insufficient to make up for the loss of the subsidy that Nvidia’s substantial low-end, mass-market sales currently provide.

“For users, it's the loss of that subsidy that will hurt the most,” Pfister continued. “No more supercomputing for the masses, I'm afraid… So enjoy your ‘supercomputing for the masses’ while it lasts.”