Microprocessor designers at MIT are working on ways to make PC microprocessors more powerful using a completely different approach from the ones that have been doubling the power of processors every 18 months for years.

The approach – referred to as Internet on a chip or network on a chip – has been under development for years, but hasn't gone mainstream because simpler methods could deliver power boosts more efficiently.

It is getting to the point where that will no longer be possible, according to researchers at MIT.

Processor designers have hit plateaus with both traditional methods of increasing processor power: widening the data bus so the chip can process larger chunks of data on each cycle, and making the cycles shorter and faster so it can process more chunks of data per second.

The data bus on each chip, which allows the cores to exchange data, scales pretty well on chips with as many as eight cores, Li-Shiuan Peh said. Ten-core chips may use a second bus to keep performance high, but adding extra buses for each cluster of cores would quickly become impractical, long before being able to support hundreds of cores on one chip – a scale Peh said is not as far away as most of us would think.

The solution is to distribute the mechanism for data-transport in the same way multicores distribute the ability to process data.

Each core would get a tiny data connection – analogous to the Ethernet plug that goes into the back of each server in a cluster – and would divide data into packets, which can be transmitted and verified more effectively than the raw data streams used by PC data buses. To transmit, receive, and keep track of those packets, each core would also get a tiny router.
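The idea of packetizing a core's output can be sketched in a few lines. This is an illustrative model only – the payload size, header fields, and use of a CRC checksum are assumptions, not details from Peh's design – but it shows why packets are easier to verify than a raw stream: each one carries its own addressing and integrity check.

```python
import zlib

PAYLOAD_BYTES = 8  # hypothetical per-packet payload size

def packetize(data: bytes, src: int, dst: int):
    """Split a byte stream into self-describing packets, the way a
    network-on-chip would, instead of streaming raw bits on a bus."""
    packets = []
    for seq, i in enumerate(range(0, len(data), PAYLOAD_BYTES)):
        payload = data[i:i + PAYLOAD_BYTES]
        packets.append({
            "src": src, "dst": dst, "seq": seq,
            "payload": payload,
            "crc": zlib.crc32(payload),  # receiver can verify each packet
        })
    return packets

def reassemble(packets):
    # Packets may arrive out of order over different mesh paths;
    # sequence numbers restore the original order.
    packets = sorted(packets, key=lambda p: p["seq"])
    assert all(zlib.crc32(p["payload"]) == p["crc"] for p in packets)
    return b"".join(p["payload"] for p in packets)

msg = b"result of a core's computation"
assert reassemble(packetize(msg, src=0, dst=5)) == msg
```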

Networking each core would "lay a grid over all the cores, so there are many possible paths between nodes," said Peh. "Latency is much lower, with the disparity increasing as you scale up the core counts," Peh told EETimes. "Bandwidth is also much much higher because there are many possible paths to spread traffic across."
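Peh's point about path diversity is easy to quantify. Assuming a simple rectangular mesh where every hop moves one step toward the destination (an assumption for illustration, not a detail of the MIT design), the number of distinct minimal-hop routes between two cores grows combinatorially with their distance – while a bus always offers exactly one shared path.

```python
from math import comb

def shortest_paths(dx: int, dy: int) -> int:
    """Number of minimal-hop routes across a mesh between two cores
    that are dx columns and dy rows apart: choose which dx of the
    dx + dy hops go horizontally."""
    return comb(dx + dy, dx)

# Opposite corners of an 8x8 mesh are 7 hops apart in each direction:
print(shortest_paths(7, 7))  # 3432 distinct minimal routes to spread traffic over
```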

The network-on-a-chip design would save power because each core would only send data to the four cores nearest it, which would pass it on to other cores as needed.
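Hop-by-hop forwarding through nearest neighbors can be sketched with dimension-ordered ("XY") routing, a common scheme in on-chip mesh networks – used here as a plausible stand-in, since the article does not specify which routing algorithm Peh's design uses. Each router only ever hands the packet to one of its four immediate neighbors.

```python
def next_hop(cur, dst):
    """XY routing: move one step toward the destination,
    correcting the x coordinate first, then y."""
    (x, y), (dx, dy) = cur, dst
    if x != dx:
        return (x + (1 if dx > x else -1), y)
    if y != dy:
        return (x, y + (1 if dy > y else -1))
    return cur  # already at the destination

def route(src, dst):
    path = [src]
    while path[-1] != dst:
        path.append(next_hop(path[-1], dst))
    return path

print(route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)] -- three nearest-neighbor hops
```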

Among the big changes: Peh's calculations show that chipmakers will have to move to ring-networked or mesh-networked interconnect designs for chips with 16 cores or more.

Peh and colleagues will also demonstrate a packet-switched Internet-on-a-chip design that uses 38 percent less energy than it would using a standard data bus.

The chips, which are starting to be known as mini-internet chips, use two techniques impossible with data buses – low-swing signaling and "virtual bypassing."

Virtual bypassing is a way to reduce the amount of time each on-chip router holds a packet: the router at a packet's previous stop sends a message ahead, so the next router down the line can change its settings in advance and forward the packet without first holding and examining it.
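The latency saving can be sketched with a toy cycle count. The numbers below are assumptions chosen for illustration (the article does not give per-stage timings): a conventional router spends extra cycles buffering and examining each packet, while a pre-configured router lets the packet pass at wire speed.

```python
ROUTE_CYCLES = 2  # assumed cycles to buffer and examine a packet at a router
LINK_CYCLES = 1   # assumed cycles to traverse the link to the next router

def latency(hops: int, bypassing: bool) -> int:
    """Total cycles for a packet to cross `hops` routers. With virtual
    bypassing, a lookahead message has already configured each router's
    switch, so the packet skips the buffer-and-examine stage."""
    per_hop = LINK_CYCLES if bypassing else LINK_CYCLES + ROUTE_CYCLES
    return hops * per_hop

print(latency(8, bypassing=False))  # 24 cycles
print(latency(8, bypassing=True))   # 8 cycles
```

The gap widens with hop count, which matches Peh's observation that the latency advantage grows as core counts scale up.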

Low-swing signaling reduces the amount of change in voltage needed to transmit each data packet a core creates, cutting the energy spent on every transfer.