(Tiny) Rings of Fire:

Sending data via light at the nano-scale

Because nano-photonic interconnects can also be used to link the cores in a multi-core processor, they also stand to alter how computer chips are both made and programmed.

By Simon Firth, December 2008


This is the second in a two-part series profiling recent research in HP's Information and Quantum Systems and Exascale Computing labs.

Photonics is the science of transmitting information via light.

First employed in fiber optic cables spanning the world's oceans, photonics has since enabled super-fast data networks on a national, local and – most recently – office-level scale.

Now researchers at HP Labs are helping put photonic connections inside computers themselves. While one Labs group is developing ways to optically connect the blades within a server, and even to run light between processors along small 'waveguides' on a server's circuit board [see the first article in this series], another is looking further out.

Sometime within the next decade, connections that employ conventional lasers (such as VCSELs, or vertical-cavity surface-emitting lasers) to convert electronic signals into light will have trouble transmitting data as fast as the processors they are connecting, says HP Labs physicist Ray Beausoleil.

That's because processors are set to keep expanding their data-crunching abilities exponentially into the future, following Moore's Law. In five to seven years, says Beausoleil, conventional lasers and the relatively large waveguides now being used won't be able to keep up.

The solution, he says, is to put photonics on the chip itself – something that can only be done by working with light on the nano-scale.


Not enough room

Moore's Law implies that processors will keep expanding their data-crunching abilities exponentially into the future. Today's processors need communication bandwidth of 10 gigabytes per second, for example, but within a decade some computer applications are expected to need in the range of 10 terabytes per second – a factor of 1,000 more.

Existing electronic connections between processors can only scale linearly at best, and even the photonic connections now on the drawing board are not expected to exceed a few hundred gigabits per second each. In ten years, then, hundreds of such links would be needed to feed a 10-terabyte-per-second chip without the connections becoming a data bottleneck.
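A back-of-the-envelope Python sketch makes that gap concrete. The 10-gigabyte and 10-terabyte figures are the article's; the 200 Gb/s per-link rate is an assumed mid-range value for "a few hundred gigabits per second":

    # Back-of-the-envelope sketch of the bandwidth gap described above.
    # The 10 GB/s and 10 TB/s figures come from the article; the 200 Gb/s
    # per-link rate is an assumption within "a few hundred gigabits".

    need_today_Bps = 10 * 10**9        # 10 gigabytes per second, today
    need_future_Bps = 10 * 10**12      # 10 terabytes per second, ~10 years out
    print(need_future_Bps / need_today_Bps)    # 1000.0 -- a factor of 1,000

    link_bps = 200 * 10**9             # one future photonic link, in bits/s
    chip_bps = need_future_Bps * 8     # chip demand converted to bits/s
    print(round(chip_bps / link_bps))  # 400 -- "hundreds" of links needed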

“And all the optics you need to emit the light, focus the light, refocus the light, and detect the light take up a certain amount of room,” Beausoleil points out. “You just have to look at how much area we have available in these servers for interconnect technology, and you see they won't fit.”


Many waves of many lengths

So how do you get a linearly expanding technology to catch up with one that's growing exponentially?

You can do a lot by massively reducing the size of the individual light sources and the detectors needed to turn their light back into data. But that won't get you far enough.

What also needs to happen, says Beausoleil, “is we need to pack more data into the frequency spectrum of the light they use.”

Telecommunications companies already do this in a process known as Dense Wavelength Division Multiplexing (DWDM), where light of many wavelengths – separated by about half of a billionth of a meter – is sent down a single optical fiber.

So what if you could do the same thing along an optical channel – called a waveguide – thousands of times thinner than a conventional optical fiber?

“If we sent, say, 64 different wavelengths down a nano-scale waveguide,” Beausoleil suggests, “and if each is operating at 10 Gigabits per second, that's like 640 Gigabits per second of data all of a sudden going through a single 'wire'.”
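To make the multiplexing arithmetic concrete, here is a minimal Python sketch. The 64 channels and 10 Gb/s per channel are Beausoleil's figures; the 1550 nm starting wavelength and 0.5 nm channel spacing are illustrative assumptions based on the DWDM spacing mentioned above:

    # Minimal DWDM sketch: many wavelength channels share one waveguide.
    # 64 channels at 10 Gb/s each are the article's figures; the 1550 nm
    # start wavelength and 0.5 nm spacing are illustrative assumptions.

    n_channels = 64
    gbps_per_channel = 10
    spacing_nm = 0.5    # "about half of a billionth of a meter"

    wavelengths_nm = [1550.0 + i * spacing_nm for i in range(n_channels)]

    print(n_channels * gbps_per_channel)          # 640 Gb/s in one 'wire'
    print(wavelengths_nm[0], wavelengths_nm[-1])  # 1550.0 to 1581.5 nm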


Rings of fire

That's the theory. But how do you do it in practice?

HP's solution is to utilize a particular property of silicon – its ability to function as a tunable resonator.

Silicon, that is, changes the speed of light in its interior by a predictable amount when a small electric charge is applied. So if you send light down minute silicon channels dotted with tiny loops, you can turn those loops into traffic stops for the light. Add a tiny bit of charge to a loop of a very specific size and it impedes the flow of light at a very specific frequency. Relax the charge and the light passes on again.

“When the light is stopped by the ring,” explains Beausoleil, “at the end of that waveguide you're seeing a zero. When the ring is tuned off-resonance and the light is allowed to pass through onto the waveguide, that's a one. So you've created an optical modulator.”

It's possible to do that very fast. Do it at 10 GHz, for example, and you can transmit data at 10 billion bits per second.
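A toy Python model captures the on/off keying Beausoleil describes: a charged ring sits on resonance and blocks its wavelength (a zero), while an uncharged ring lets the light pass (a one). The wavelength and linewidth values here are invented for illustration, not measured device parameters:

    # Toy model of a micro-ring modulator doing on/off keying.
    # Charged ring = on resonance = light stopped (bit 0); uncharged
    # ring = off resonance = light passes (bit 1). The wavelength and
    # linewidth are illustrative assumptions, not device values.

    RESONANCE_NM = 1550.0   # the wavelength this ring is sized to trap
    LINEWIDTH_NM = 0.1      # how close light must be to interact

    def transmits(light_nm, ring_charged):
        """True if light at this wavelength exits the waveguide."""
        near_resonance = abs(light_nm - RESONANCE_NM) < LINEWIDTH_NM
        if not near_resonance:
            return True              # other wavelengths pass untouched
        return not ring_charged      # charged ring stops its wavelength

    def modulate(bits):
        """Drive the ring once per bit period (10 GHz -> 10 Gb/s)."""
        return [transmits(RESONANCE_NM, ring_charged=(b == "0"))
                for b in bits]

    print(modulate("1011"))   # [True, False, True, True]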


World's smallest ring resonator

HP Labs researchers have also shown that it's possible to do this on an incredibly small scale.

“We've made high-quality rings with diameters as small as 3 micrometers, which is a world record,” reports Beausoleil.

Just as importantly, the HP Labs group has shown that you can put 64 of these micro-ring resonators in less than a millimeter along a single, tiny waveguide. Send light of 64 different wavelengths down the waveguide, then, and each ring can be 'tuned' to act as a switch – or modulator – for its own particular frequency of light.

“Each of those 64 rings would modulate light on that waveguide at 10 Gigabits per second,” Beausoleil notes. “So we'd be delivering 640 Gigabits per second using rings taking up just about a tenth of a millimeter.”
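A short Python sketch shows the idea of one such bank: each ring is sized for one wavelength and gates only that channel, so 64 independent streams ride the same waveguide. The wavelength grid is the same illustrative assumption as in the earlier sketch:

    # Sketch of one bank of 64 ring modulators on a shared waveguide.
    # Each ring gates only its own resonant wavelength, so 64 independent
    # 10 Gb/s streams share one 'wire'. The wavelength grid is an
    # illustrative assumption, as in the DWDM sketch above.

    N_RINGS = 64
    wavelength_of_ring = {i: 1550.0 + 0.5 * i for i in range(N_RINGS)}

    def drive_bank(bits_per_ring, t):
        """Snapshot of the waveguide at bit period t: wavelength -> bit."""
        return {wavelength_of_ring[ring]: int(bits[t])
                for ring, bits in bits_per_ring.items()}

    # Two example channels; the other 62 would be driven the same way.
    streams = {0: "1010", 63: "1100"}
    print(drive_bank(streams, t=1))   # {1550.0: 0, 1581.5: 1}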


Sorting out priorities

That's not the end of the story, though.

“What we really need to be able to do now,” says Beausoleil, “is to put on the same waveguide 64 banks of 64 modulators.”

But for these banks to work efficiently, one bank of 64 rings must sometimes get priority over the other 63. Deciding which output channel gets priority – a process called arbitration – is something processors already do. But the way they do it – by sending an electrical signal asking for permission to send and then waiting for a permission signal back – slows them down.

The HP Labs team has devised a way to do arbitration optically – essentially by using a tiny light detector to see whether a signal will get through.

“An optical arbitration system is really an analog optical computer,” says Beausoleil, “that runs in parallel with all the digital silicon and massively reduces the problems we have with latency.”
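In software terms, the job looks something like the fixed-priority arbiter sketched below; the article's point is that HP's version does this check optically and in parallel, avoiding the electrical request-and-grant round trip. The fixed-priority policy is an illustrative assumption:

    # Software sketch of arbitration among modulator banks.
    # HP's scheme does this optically, with a tiny detector, instead of
    # the slow electrical request/grant round trip; the fixed-priority
    # policy below is an illustrative assumption.

    def arbitrate(requests):
        """Grant the waveguide to the highest-priority requesting bank.

        Banks are ordered by priority: index 0 beats index 1, and so on.
        Returns the winning bank's index, or None if no bank is asking.
        """
        for bank, wants_to_send in enumerate(requests):
            if wants_to_send:
                return bank
        return None

    # Banks 2 and 5 both want the waveguide; bank 2 has higher priority.
    print(arbitrate([False, False, True, False, False, True]))   # 2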


Research challenges

Although the HP Labs photonics team has proofs of concept for many of its ideas, there's plenty of work to be done before we see nano-photonic connections in commercial servers.

Electric charge isn't the only thing that changes the speed of light in silicon; heat changes it too. So any processor that communicates via silicon micro-ring resonators needs a sophisticated temperature-regulation system to keep the rings properly tuned.
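Such a regulation system amounts to a feedback loop: measure how far each ring has drifted off its resonance and nudge it back. The proportional controller and every constant in the Python sketch below are assumptions for illustration only:

    # Sketch of a thermal-tuning feedback loop for one micro-ring.
    # Heat shifts the ring's resonance; the controller nudges it back so
    # the ring stays locked to its assigned wavelength. The proportional
    # scheme and all constants here are illustrative assumptions.

    TARGET_NM = 1550.0    # wavelength the ring must stay locked to
    DRIFT_PER_K = 0.08    # nm of resonance shift per kelvin (assumed)
    GAIN = 0.5            # proportional feedback gain (assumed)

    resonance_nm = 1550.4 # ring has drifted after the chip warmed up
    for step in range(6):
        error_nm = resonance_nm - TARGET_NM           # measured detuning
        heater_delta_k = -GAIN * error_nm / DRIFT_PER_K
        resonance_nm += heater_delta_k * DRIFT_PER_K  # resonance responds
        print(step, round(resonance_nm, 4))
    # The loop halves the error each step, converging on 1550.0 nm.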

Efficient detectors that can easily read the pulses of light produced by nano-scale micro-ring resonators also don't yet exist.


Big changes in chip programming possible

But if the team succeeds, they'll do more than keep the connections between processors running as fast as the processors themselves. Because nano-photonic interconnects can also be used to link the cores in a multi-core processor (the most common kind of processor today), they also stand to alter how computer chips are both made and programmed.

The physics of electrical interconnects makes it hard to fetch code or data from a cache elsewhere on a multi-core chip, which means that, as processors have gone from one, to two, to four, and now eight cores, they've become brutally hard to program.

“But with our architecture,” says Beausoleil, “all memory is equally distant from all cores. So the programmer does not have to worry about where code is or where data is and instead gets to operate with parallelism at the highest level. And that's going to, I think, revolutionize programming.”