Computing at the speed of light
One of the fastest ways we have to transfer data is via light – which is why fibre optic cables are so whizzy with broadband. So why don’t we build computers using the same idea?
Computing at the speed of light is a compelling idea and one that’s been proven in experiments. But it’s not easy to build a device small enough to interface with the electronic architecture that makes up traditional computers. As light has a relatively large wavelength, optical chips are much larger than their electronic counterparts.
Researchers at the universities of Oxford, Exeter and Münster may have found the beginnings of a solution, by shrinking light into nanoscopic dimensions. Nikolaos Farmakidis, a graduate student at the University of Oxford and co-author of a paper on the development of the electro-optical computing device, explains what all this means. He and co-author Nathan Youngblood both worked on the idea at Harish Bhaskaran’s lab in Oxford, alongside collaborators at the other universities.
What problems were you trying to solve?
We’ve long realised, mostly in communications, that light has very big advantages over electronics. The spread of communications onto the internet is largely due to the presence of optics, in the form of optical fibres. We thought this was something we could exploit in the computing field as well… but while electronics have their limitations, they are good at other tasks.
Our motivation was to capitalise on the advantages of both. We’re trying to bring in the speed with which light travels, the bandwidth it has and the low loss in transferring information. But at the same time, we want to use the already built network of electronics, which is highly scalable, to create structures that capitalise on the advantages of both of these modes of operation.
What were the challenges?
The biggest physical challenge in combining optics and electronics is the different length scales they operate at. Light has a length scale, or wavelength, in the low microns for communications, while electronics work most efficiently at nanoscale dimensions – as you can see in the latest integrated circuits – so it’s clear that these are two things that don’t naturally combine.
How did you solve that?
There’s almost like a trick that you can play with light. It’s confined into what we call a “surface plasmon”. We confine light into an electric field caused by an oscillation of electrons… which fundamentally allows us to scale things down a few orders of magnitude.
And that electro-optical device will allow us to have the best of both worlds for computing?
It depends a little bit on how you define “computing”. This device works for computing, but it’s still a building block. It colocates memory and processing, and we can store information on it using light or electronics, and use it to compute with. But it’s not something that you can upload code to yet; it doesn’t operate like a standard computer.
How will this be used?
The companies that are interested in this work are looking at technologies that are going to be implemented in the next five years or so. If I were to pinpoint the first application of this, I wouldn’t say that it would be personal computing. It’s more likely we will see it in servers, where multiplying many numbers together at a very high rate is required.
Electronics are very good at multiplying or doing operations on individual numbers. But the computing protocols we’ve used have changed a bit and now what’s needed is the multiplication of many sets of numbers together – it’s what we call “matrix multiplication”.
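To make the distinction concrete, here is a minimal Python sketch (purely illustrative, not tied to the device itself) contrasting a single scalar multiplication – the kind of operation electronics handle one signal at a time – with a matrix multiplication, where every entry of the result combines many input numbers at once.

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), giving an m x p result."""
    n = len(b)      # inner dimension shared by a's columns and b's rows
    p = len(b[0])   # number of columns in the result
    return [[sum(row[k] * b[k][j] for k in range(n)) for j in range(p)]
            for row in a]

# One scalar operation: a single pair of numbers, a single result.
scalar = 3 * 4  # 12

# One matrix operation: each of the four output entries sums products
# of whole rows and columns, so many numbers are combined per "step".
a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

Workloads such as machine-learning inference are dominated by exactly this batched pattern, which is why hardware that can operate on many numbers in parallel is so attractive.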
This is a device that lends itself to that. Light fundamentally has the possibility that you can send and operate on many numbers at the same time, which you can encode in the wavelength, whereas with electronics you are pretty much bound to a single signal. I would say that the device will be most useful in any applications where you require fast operation on large sets of numbers simultaneously.