Bigger is better in pastries, paychecks and bank accounts, but not in electronics. A recent story in HPCwire caught my interest and got me thinking about what the end of the shrink road might portend – and the potential alternatives.

The ability to steadily shrink the size of the processor brains that drive computers – and pretty much everything else – has driven computer performance since the advent of the microprocessor.

But now that we’re at 32nm and moving toward 16nm and even 14nm (see Intel’s recent announcement), we don’t have all that many nanometers left before we hit the limits of what’s possible under the laws of physics. At those dimensions, you start running into problems at the atomic scale.

IBM Fellow and all-things-chip guru Bernie Meyerson explained this clearly and concisely several years ago when he predicted that Intel’s single-core 5GHz chip would never see the light of day. With images from an electron microscope, he showed how extremely small chip pathways can be reduced to the point where they’re just a few atoms thick. This sounds fine until you learn that atoms aren’t nice, round balls like they’re presented in textbooks.

Unfortunately, atoms can be kind of lumpy. When only a few of them form the guardrail along a chip’s electronic roadways, electricity leaks through, which means more heat and more energy use. Cranking up the GHz makes it worse: the heat generated rises to the point where it surpasses the ability of the materials to handle it.
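To see why cranking up the clock is so costly, it helps to look at the standard first-order model of CMOS dynamic power, P = C × V² × f. The numbers below are made up purely for illustration (they aren't measurements of any real chip), but the shape of the math is the point: frequency scales power linearly, and the higher voltage a faster clock typically demands scales it with the square.

```python
# First-order CMOS dynamic power model: P = C * V^2 * f.
# Capacitance and voltage values here are illustrative, not real chip specs.

def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Rough dynamic power in watts for switched capacitance C,
    supply voltage V, and clock frequency f."""
    return capacitance_f * voltage_v ** 2 * freq_hz

# Doubling frequency alone doubles power...
base = dynamic_power(1e-9, 1.0, 3e9)
fast = dynamic_power(1e-9, 1.0, 6e9)
print(fast / base)  # 2.0

# ...but a higher clock usually needs a higher voltage, and power
# grows with the *square* of voltage, so the real cost is steeper.
fast_hot = dynamic_power(1e-9, 1.2, 6e9)
print(round(fast_hot / base, 2))  # 2.88
```

That superlinear growth is the wall Meyerson was pointing at: the 5GHz single-core part wasn't impossible to design, just impossible to cool at sane power budgets.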

It was this physical limitation on processor frequency that led us to the multicore world we see now. The only way to get more performance out of processors is to use the real estate gained by shrinking on-die components to add more cores and run parallel workloads on them at reasonable frequencies.
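The back-of-the-envelope arithmetic behind that trade is simple. Using purely illustrative numbers (and assuming a perfectly parallel workload and a fixed instructions-per-cycle rate, which real workloads rarely deliver):

```python
# Illustrative throughput comparison: one fast core vs. several slower ones.
# Numbers are hypothetical, not tied to any real product.

def throughput(cores, ghz, ipc=1.0):
    """Aggregate instructions per second, assuming a perfectly
    parallel workload and a fixed instructions-per-cycle rate."""
    return cores * ghz * 1e9 * ipc

single_fast = throughput(cores=1, ghz=5.0)   # the hot chip that never shipped
quad_modest = throughput(cores=4, ghz=2.5)   # the multicore alternative
print(quad_modest / single_fast)  # 2.0
```

Four modest cores deliver twice the aggregate throughput of one screamer, at far lower power per core, which is exactly the bargain the industry took.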

A few alternatives for future chip designs are discussed in the HPCwire story, including HP’s compute-memory hybrid memristors, which could come to market as a flash substitute this year. Joint research by IBM and Samsung into carbon nanotubes is also mentioned. I think that we’ll see a combination of different technologies come into play as we bump up against the shrinking benefits of process shrinking. (Wow, that’s going out on a limb, isn’t it?)

The real problem isn’t that we’re not getting enough cycles out of processors; it’s that the speed at which data moves from memory to processor and back again hasn’t increased much over the past several years. That’s the biggest bottleneck we’re facing, and faster processors with more cores don’t really solve it unless the problem set is completely parallel.

What’s the solution? I have no idea… but people who are much better equipped than I are working on it. All I know is that it’s going to need a cool name… maybe something with ‘turbo’ or ‘fire’ in it.