Technically, all semiconductors manufactured today are 3D chips. They're built using a process that deposits metal layers, with circuits then etched out according to the design required for each layer. AMD's current process uses about 10 metal layers; Intel uses only 8.

The reality of the assembly is not quite that simple, but for the sake of illustration you can envision it like this: it's a layered cake. Each layer gets added on top of the other, with parts of the cake being removed (and eaten), leaving circuit-like designs. Once all of the layers are added, they form a 3D circuit which exhibits known electrical properties in that configuration, allowing it to be harnessed.

The boffineroos making these things have gotten the process down so well that they can use 65 nm and 45 nm pathways in production to construct the uber-tiny structures needed for the circuits. And I don't care what any of the posters here at Geek.com say to the contrary, or how they try to minimize this reality: that is just plain awesome. I've said it before many times; it's absolutely amazing these things work at such small scales, thanks to the manufacturing "tricks" employed to bypass what seemed, just a few years ago, to be nearly impossible hurdles.

So, even today, we have these multi-layered circuits. Still, the purpose of the multiple layers is not really to develop 3D structures that work one on top of another, but simply to build the required circuit designs in a particular way so they can work as we know them to.

And because of that, there are hundreds of millions of these little constructed circuits which, while physically erected in 3D space, are still laid out as points on a grid. You can think of it as an apple orchard as seen from an airplane: lots of large structures jutting up, but with all kinds of wires and lines going everywhere. They provide power and allow functional units to be constructed, which convey electrical signals from start to finish, processing them through chains of these circuits to yield a computed electrical result at the far end. It's actually these chains that provide the real, usable work we see today.

These tiny structures (transistors, bitlines, wires, sinks, etc.) must all work perfectly for a chip to work at all. And in today's microprocessors, there are literally hundreds of billions of individual components involved in making a chip when you factor in all of the metal layers, all of the bit lines, wires, and everything else; and all of it has to be laid out perfectly to work perfectly.

Due to the physics of electromagnetism, these circuits cannot be placed arbitrarily close together. There are real physical laws that dictate how certain circuits, pathways, bitlines, etc., are actually laid out on a processor. If you move them closer together than those laws allow, side-effects of their proximity begin to show up and the circuits fail or operate more slowly. And because of this spacing requirement, the tiny wires hooking everything up are often extremely long (relatively speaking), just to convey information from point A to point B in a way that doesn't notably affect anything else along the way. The result is a large amount of silicon real estate consumed solely by data communication lines running from one place to another.

So, what is IBM's new creation? What IBM has done (and presumably others are pursuing) is to remove that hurdle of having extremely long communication lines by taking that layering process to a whole new level.

IBM's 3D process involves not just layering components to create an individual integrated circuit in 3D space, but stacking completely constructed circuit layers on top of other completely constructed circuit layers to achieve true 3D chip-space utilization.

While this does make the semiconductors thicker, and has most likely presented some real challenges for cooling, it has the strong benefit of shortening some of the communication lines (bit lines) between logical components to as little as 1/1,000th of their length in the flat layouts used today. This means less silicon real estate, lower costs, faster inter-unit communications, lower power, less heat, and overall solutions with more headroom. It's a win-win on every front.
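To get a feel for where a 1,000x figure can come from, here's a toy back-of-the-envelope sketch. The die size and via height below are my own illustrative assumptions, not IBM's numbers: a signal that once had to cross a flat die can instead hop straight down through a via to a block stacked directly beneath it.

```python
# Toy comparison (illustrative numbers, not IBM's): a cross-die route
# in a flat layout vs. a vertical via hop once the two blocks sit
# directly on top of each other in a stacked chip.

die_edge_um = 10_000   # assume ~10 mm across a flat die
via_height_um = 10     # assume a ~10 um hop through a thinned layer

flat_route_um = die_edge_um       # block A to block B across the die
stacked_route_um = via_height_um  # block A straight down to block B

print(flat_route_um / stacked_route_um)  # → 1000.0
```

The exact ratio depends entirely on how far apart the two blocks were in the flat layout and how thin the stacked layers are, but the geometry makes the order of magnitude plausible.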

IBM accomplishes this stacking by using something called a silicon via (pronounced with a long "i", as in "aye"). Vias are a well-known concept, employed today in multi-layer motherboards and other circuit boards to carry signals from one layer to another. If you've ever looked at a motherboard, you'll see little round holes here and there. That's what they're for: they take electrical signals from one layer to another (or to several simultaneously).

IBM's process is so amazing because it has done this via "magic" in silicon. The company created silicon-based vias, allowing pathways between multiple layers of normally etched chips to be connected vertically, thereby shortening the communication lines significantly. Apparently all it took was research to overcome the hurdles and, voila! We have moved almost instantly from the stone-tool primitive to the modernesque beautiful.

IBM hopes to have sample chips available by the second half of this year, with full-scale production coming next year.

another way to save space(10:12am EST Fri Apr 13 2007)If you could figure out how to fabricate gates at an angle between 0 and 90 degrees, you could fit more stuff in the same square area, although your layer would be thicker. My guess is that all circuits are flat right now because it's easy to make them that way. - by dskowTX

had to post link(11:30am EST Fri Apr 13 2007)Intel! Man, they just will not relent!! Aggression is not the word!

also(11:36am EST Fri Apr 13 2007)this is still using the FSB! I am blown away, though, that Alienware is not on the list. If IBM were still in the business of CPUs or PCs, I know IBM would be up there! I had an IBM PC. In fact, it was AMD! Great PC and, well, I always wondered why they pulled out of the PC business, at least as I remember it! - by Alan

Alan(11:44am EST Fri Apr 13 2007)

Did you get my emails?

- by RickGeek

Alan(11:50am EST Fri Apr 13 2007)

Where do you see such aggression? Did you read the last page? We're talking about $8-10K systems here. For a lot less than that you could buy four average Core 2 Duo machines and have more real computing power in a clustered environment. Or you could have four separate computers. :)

I was thinking of multiple cores stacked on top of each other with this tech.

That would work really well for AMD's and NVidia's planned split cores for future graphics cards, as well as for putting more than 4 cores on a single chip.

- by Headley

bigger caches(3:11pm EST Fri Apr 13 2007)I would think caches would be the easiest thing to pull off here. You could manufacture them separately, increasing yield and allowing for much more specialized processor configs.

I didn't see anything about the cooling aspect. How much distance is between the layers? Are there going to be heat pipes between the layers?

The Pentium Pro had its cache on a separate chip in the same packaging. This seems like a leap forward in multi-chip packaging: instead of side by side, you can stack them right on top of each other. Technically you could have stacked them back then and had a thicker package… cooling would have been the trick.

Are they really talking about being able to put twenty-plus layers on the same die? Or are we creating 2 dies and then gluing them together… with really fast/short interconnects in a 3D config? - by twiggy

twiggy(3:38pm EST Fri Apr 13 2007)

I think that's an excellent idea. The separate cache could even be constructed in much smaller components, such as 256KB banks, which would be very small to manufacture, greatly increasing yields. You simply add on more cache modules to gain more cache. If you could stack them up in multiple layers as well, that would be great.

This opens up a lot of possibilities for new chip designs made with physically constructed, isolated components that are assembled together for the purposes of creating a CPU. Since every component is much smaller, yields would go up accordingly. I would think companies like AMD would be jumping on this bandwagon, especially since they're partnered with IBM.

- by RickGeek
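The yield argument twiggy and RickGeek are making can be sketched with the standard Poisson defect model, Y = e^(−D·A): the chance a die survives falls off exponentially with its area. The defect density and die areas below are made-up illustrative values, not figures from IBM or AMD.

```python
import math

# Defect-limited yield under the simple Poisson model: Y = exp(-D * A).
# D (defects per cm^2) and the areas are assumed values for illustration.

def poisson_yield(defect_density_per_cm2, area_cm2):
    """Probability that a die of the given area has zero fatal defects."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.5                           # assumed defects per cm^2
monolithic_area = 0.8             # one big cache die, cm^2 (assumed)
bank_area = monolithic_area / 32  # one small 256KB bank, cm^2

print(f"monolithic die yield: {poisson_yield(D, monolithic_area):.3f}")
print(f"single bank yield:    {poisson_yield(D, bank_area):.3f}")
# Small banks can be tested individually, and only known-good ones get
# stacked, so a defect costs one tiny bank rather than a whole cache die.
```

That's why splitting a big structure into separately manufactured, stackable pieces raises effective yield even though the total silicon area is the same.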

Rick…(4:53pm EST Fri Apr 13 2007)Ah nutz. I actually sent you a link to this IBM 3D thing yesterday after reading an article at Science Daily. I sent an email to Rick@Geek.com. Then I got a mailer daemon today. Could you let me know the correct method of contacting you? Thanx - by MasterBlaster

MasterBlaster(8:02pm EST Fri Apr 13 2007)

I can still be reached at OSProject@ameritech.net

- by RickGeek

LOL(9:30pm EST Fri Apr 13 2007)Yup, how will this affect cooling?

Let's stack a 40-watt 1 cm^2 piece of silicon, on top of it stack a 60-watt 1 cm^2 GPU, and on top of that stack the DRAMs…

Then let's wonder why the system lasts about an hour before it fails due to thermally accelerated lifetime degradation.

Dude, IBM is going to leverage this for their SiGe mixed-mode communications. It doesn't work for high-speed, high-power digital…

But why am I not surprised it didn't dawn on some of you pretenders - by rocco

rocco(9:59pm EST Fri Apr 13 2007)

I appreciate the information you have to offer. I think if you didn't call other people names it might be better received.

- by RickGeek

pictures?(2:25am EST Sat Apr 14 2007)Would like to see pictures. I have seen 3D vias before, but they were quite big (microns), how tight are these through vias? - by fdc

Hmmm(3:42pm EST Sat Apr 14 2007)“I appreciate the information you have to offer. I think if you didn't call other people names it might be better received.”

10/10 for the effort, Rick… but I think your words are falling on ears connected to a mind that relies on brute force and ignorance.