I founded Endpoint Technologies Associates, Inc., an independent technology market intelligence company, in 2005. Previously, I was vice president of Client Computing at IDC, covering client PCs (desktop and mobile computers). Before that, I ran my own research and analysis firm, directed operations for a developer of multilingual text processing software, ran a technology analysis and publishing practice for a consulting company, managed international accounts for a data communications equipment manufacturer, and did new product development for a computerized trading network. I have published in a variety of forums and been quoted in a number of publications and other media outlets. I snagged a B.F.A. from Bennington College and an MBA from the University of Chicago Graduate School of Business. I am multilingual, world-traveled, and have bicycled over the Alps, but am now a family man.

The Microprocessor Powers Through Its 40th Birthday

Today, the chip often called the brains of the computer turns 40 years old, and Intel, one of the pioneering firms in microprocessors, is celebrating the birthday, as well it should, given that the company’s fortunes are largely built on it.

On Nov. 15, 1971, Intel brought out the Intel 4004 microprocessor, which had 2,300 transistors.

Intel’s big server chips today sport more than 2.5 billion, and the difference in sophistication is astonishing. Whereas early computers could barely light up enough pixels to create a primitive monowidth font on a black-and-white TV screen, today’s systems can do 3D visualizations of detailed simulations in millions of colors in real time.

In an earlier post, I talked about how Steve Jobs learned from Bob Noyce, one of Intel’s founders, enough about the likely trajectory of silicon microprocessor development to plan products for Apple years before they were actually feasible. This trajectory, so real you could almost walk on it, was the Milky Way that guided Jobs and other luminaries of the technology age.

The progression of development was so nearly inevitable that it has been called Moore’s Law, after Gordon Moore, another Intel founder. Even though the whole span had yet to be realized, the scientists and engineers working on it could for the most part see how to get to the next level each step along the way.
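As a back-of-the-envelope check (a sketch using the commonly quoted two-year doubling period, not Intel's actual roadmap), you can run the 4004's 2,300 transistors forward for 40 years and land remarkably close to today's billions:

```python
# Moore's Law sketch: transistor counts doubling roughly every two years.
# The 2,300-transistor starting point (1971) is from the article;
# the two-year doubling interval is the usual rule-of-thumb approximation.
transistors = 2300
year = 1971
while year < 2011:
    transistors *= 2   # one doubling...
    year += 2          # ...every two years
print(f"{year}: ~{transistors:,} transistors")
# → 2011: ~2,411,724,800 transistors
```

Twenty doublings take 2,300 to roughly 2.4 billion, right in the neighborhood of the 2.5-billion-transistor server chips mentioned above.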

Of course there have been setbacks and challenges. In mid-2003, Intel announced plans for its desktop and server roadmaps that included a 4GHz processor, but by May 2004, Intel had canceled the entire project. At 4GHz, the parts were too hot, used too much electricity, and leaked too much current. The company was forced to take a different route entirely to higher performance.

Luckily for Intel, in another division of the company, in Haifa, Israel, Mooly Eden and his team were working on Intel’s first dual-core processor (AMD actually brought the first dual-core processor to market), and Eden’s architecture became the heart of Intel’s next phase of development: multicore design. Multicore architecture spread work over several cores, allowing jobs to finish more quickly by running them in parallel. This benefit let the architects drop the clock frequency, which allowed these higher-performing chips to run on less electricity, a big jump in efficiency. Disaster was averted and the gains continued.
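The idea of spreading work over several cores can be illustrated with a toy sketch (my example, not Intel code): a CPU-bound job is split into chunks, and each chunk runs in its own process, so a multicore chip can work on them side by side rather than one after another.

```python
# A toy illustration of spreading CPU-bound work over several cores.
# The prime-counting job and chunk sizes are invented for the example.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` -- a stand-in for any CPU-bound job."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [10_000] * 4
    # Each chunk is dispatched to its own worker process, so on a
    # quad-core machine all four can run in parallel.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, chunks))
    print(results)
```

On a single core the four chunks would run back to back; on four cores the wall-clock time approaches that of a single chunk, which is exactly the trade that let designers back off the clock frequency.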

By far the greatest gains in microprocessors have come from something called “process node” development. That’s insider jargon for “line” or “feature” shrinkage. If you imagine a microprocessor as a tremendous number of circuits or wires laid down next to each other on a piece of glass, then the width of those wires or features is what we’re talking about. In fact, the feature that processor designers look at most closely is the “gate,” which is just what it sounds like. A gate either allows electricity through or not. Depending on how the logic is set up, the presence or absence of a current through that gate can be interpreted as a “yes” or a “no,” a zero or a one. And out of those humble zeros and ones are built all the complex structures of our data world, from the face of a Brazilian model in a high-resolution photo to the flashing of graphs on the screen of a lightning-quick trading desk on Wall Street.
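The gate-as-switch idea can be sketched in a few lines (an illustration, not how silicon actually computes): model a gate as a function from input signals to an output signal, and the zeros and ones fall out. NAND is a convenient starting point because every other logic gate can be built from it.

```python
# Toy model: a gate either passes current (1) or blocks it (0).
# NAND is "functionally complete" -- all other gates derive from it.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum_bit, carry_bit)."""
    sum_bit = or_(and_(a, not_(b)), and_(not_(a), b))  # XOR via AND/OR/NOT
    carry = and_(a, b)
    return (sum_bit, carry)

print(half_adder(1, 1))  # → (0, 1): one plus one is binary 10
```

Chain enough adders together and you have arithmetic; chain enough arithmetic and you have everything from photos to trading screens.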

The Intel 4004 was made in a 10 micron (µm) process. A micron is one-millionth of a meter, so 10µm is one hundred-thousandth of a meter. You can almost see something that big: Wikipedia’s examples of things 10µm across include a droplet of mist and a strand of spider silk.

Leading-edge products on the market today are made in a 32 nanometer (nm) process. A nanometer is one-billionth of a meter. So the reduction in feature width was more than two orders of magnitude, roughly 300 times, spread over a period of 40 years.
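The unit conversion is worth running as a quick sanity check: 10µm is 10,000nm, so the linear shrink from the 4004's process to 32nm works out to a factor of a bit over 300.

```python
# Quick unit check on the feature-size shrink described above.
micron = 1e-6            # one-millionth of a meter
nanometer = 1e-9         # one-billionth of a meter

old_feature = 10 * micron      # Intel 4004, 1971: 10 micron process
new_feature = 32 * nanometer   # leading-edge process today: 32 nm

shrink = old_feature / new_feature
print(f"Linear shrink: about {shrink:.0f}x")
```

Note that a 300× linear shrink means roughly 300² ≈ 100,000 times more features fit in the same area, which is how transistor counts could grow by a factor of a million while feature widths fell by only a few hundred.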


Comments

Someday I would not be surprised if somebody could find ways to connect Intel sub-nano microprocessors to an organic human brain through brainwave “bluetooth-like” technology. Maybe designers’ dreams could be made possible!

I’m interested in seeing the new cube-like chips from INTC and the kind of progress they will bring. Specifically, can they keep up the performance of Intel-branded chips while lowering power consumption to rival that of ARMH?

With rumors coming out that the next Xbox could use ARM technology, this Intel investor truly hopes that the gate tech lives up to the hype so that new markets can finally be opened to INTC.

Might also be good to note that the impact of the process node on performance has waned over the last decade. While shrinking transistor size is still relevant for economies of scale, many of Intel’s successful competitors have chosen to pursue performance-enhancing technologies like silicon-on-insulator transistors for low-power devices. Surely in the near-term future we will see other new technologies that aren’t related to scaling but have a huge impact. One such technology would be the realization of Si and III-V materials on the same wafer. Intel has made some efforts in this arena, but it will be interesting to see who gets there first and how it impacts the chip-maker landscape.