Posted
by
timothy
on Sunday December 10, 2000 @09:28PM
from the wouldn't-cover-the-head-of-a-goose dept.

SirFlakey writes: "It appears Moore's law has been proven right yet again. According to a report in Fairfax's IT section, Intel has managed to create the world's smallest transistor(s). This, according to the article, would allow them to create CPUs with 10 times (420 million) the P4's transistor count. The transistors are only 3 atoms thick(!). They say they have come close to the limit of modern technology but also still have plenty of innovation left for the future. This announcement comes only a few days after Intel released an earnings warning for this quarter."

This will be like the first discussion of that kind.. oh, no, what am I thinking. Of course there WILL be Intel bashing... C'mon, the guys are trying to invent/achieve something. Give them some credit.

I think a more interesting comparison would be Super Mario World and Donkey Kong Country and Killer Instinct for the SNES. THAT was a big change...

But didn't many of the more impressive SNES games have an additional co-processor inside the cart? Not only can you not do that with a PSX CD-ROM, it also hasn't been necessary in order to see the same increase in apparent power.

That said, the improvement on the PSX is mostly not from tighter game code itself (not saying this hasn't happened at all though), but from ditching the Sony C libraries and coding directly to the PlayStation hardware. You almost certainly don't want to be doing that in a general-purpose OS.

With transistors with dimensions on the order of a few atoms, I would think Intel would run into all kinds of problems with quantum effects, uncertainty, etc. I'm no quantum physicist, but I seem to remember that the properties (electric, thermal, electronic, whatever) of a substance only hold as a statistical average; when you talk about a few atoms, all classical bets are off. Any given atom has at any given time a finite (although perhaps small) probability of jumping energy levels, spontaneously emitting electrons, decaying into something else, or other strange things. I guess it depends on how big the other two dimensions are (if they're not also 3 atoms). That would suck if 1/3 of the transistor suddenly split into some nitrogen or something.

So Intel has a transistor which is three atoms thick. According to Moore's Law, within 18 months Intel will come out with a transistor 1.5 atoms thick. Hmm, I guess the portable atom smasher isn't very far away!

But seriously, I don't see why /. editors have to ruin a great advance like this one by linking it to Intel's financial troubles. What are you guys saying? Intel won't be around in 18 months to top this achievement? Or is it a "But you're still losing marketshare to AMD so Nyah!" kind of mentality? Advances in science are advances in science. Just because they were made by a company with profit in mind doesn't mean their scientific discoveries won't be shared.

Keep in mind this article was probably aimed at the same people who buy iMacs. These people need real world comparisons to even begin to comprehend what's being said. Quoting nanometres and nanoseconds isn't going to help...

CPUs become faster, programmers throw more stuff at them, assuming that everybody will have the newest CPU. Why this is the norm I don't know -- I'm a coder myself, and I get a chuckle every time I hear other coders say, "Well... who cares? They can add more RAM or upgrade".

Anytime I see bootup times discussed I can't help but think of a PBS special I saw on Apple a couple of Thanksgivings ago, where "The Woz" talked about Steve Jobs asking him to make the first Apple boot up quicker. Woz was happy with the time... which says a lot to me... but Jobs wanted it faster. Rather than say "screw that", he took it as a challenge and made it boot quicker. I wish more coders in this day and age took more pride in making their stuff run faster and better... rather than just running at all.

why do you think a 33MHz PlayStation went from the original Mortal Kombat to Gran Turismo 2 without ANY change in hardware? tighter code. the fuckwits up at redmond could learn a thing or two from these folks.

I think a more interesting comparison would be Super Mario World and Donkey Kong Country and Killer Instinct for the SNES. THAT was a big change...

But PC programmers are starting to get their shit together, look at BeOS! Now if only we can get enough software ported to it...

They were going to have to cut back somewhere; heck, if they can get away with using less silicon, that should boost earnings. That will improve fourth quarter earnings, if not then first quarter next year.

Heck, look at cars: they used to make them good and durable, able to handle a 100MPH crash without being totaled. Then big business goes and cuts costs, making them with plastic and aluminum. Now cars are lucky if they survive a 5MPH bump in the driveway.

I can see the warning on the box now: "Overclockers beware, a 2 degree increase in temperature and these silicon atoms will fuse into a new hunk of silicon."

Long live the Pentium 166 (non-MMX), the only CPU I have had that will run for more than a year without a reboot, while running a print/file server.

Your ignorance of logic, scientific method, and the shortcomings of western thinking is frightening. As Paul Feyerabend states in his classic text Against Method:

Theories cannot be derived from facts. The demand to admit only those theories which follow from facts leaves us without any theory. Hence, science as we know it can exist only if we drop the demand and revise our methodology.

That's all I have to say on that.

I am disheartened by your bourgeois appeal to democracy and the government as agents for change. As any student of history can tell you, government is and always has been the tool of the rich. You cannot expect an institution founded and controlled by the rich to disturb the status quo. Only a bottom-up surge of populist fervor can rock the boat enough to make it overturn.

Moore's 'Law' is superstition; part of a belief system no more 'correct' or useful than Christianity in the Dark Ages. It's part of an economic and social system that keeps the down-trodden suppressed. Any defender of Moore's Law must be prepared to defend the entire capitalist religion. Are you prepared to do this?

Good point. So perhaps IBM can etch silicon wafers down to such lilliputian dimensions, but what about thermal instability? With 3-atom-wide transistors, I'm guessing the number of electrons needed to hold a charge in a flop ain't all that much, and the alpha radiation from nearby lead (e.g. solder) could become a big(ger) concern.

Or did they forget to mention such a device is really only reliable around absolute zero?

Actually, quantum computing is not (just) about making transistors very small. It's a totally different way of doing the computing, as many a Slashdot article [slashdot.org] has pointed out. :) This Scientific American article [sciam.com] is a good overview of the subject, as well.

Aid for the clueless: smaller transistors put off less heat, so you can run them faster. Smaller transistors can be packed more closely, so you can run them faster. And more of them can be fitted onto the same chip, allowing nifty architectures, so you can run things faster.

Actually, that should be you're a f@#$ing idiot, not your a f@#$ing idiot.

I must admit that I was quite surprised by the reaction my comments garnered. I didn't mean to imply that the transistor was a bad thing or a stupid invention. I think it's great that companies are continually improving products and making processors faster and smaller. I was simply pointing out the fact that the computer industry seems to sustain itself on evolutionary advances, not revolutionary advances. I consider the invention of the transistor to be a revolutionary breakthrough; I guess I'm just surprised that, to my knowledge, the computer industry hasn't embraced a wider array of research.

That's great, but what if you want to run software that was written in the last three years, or software that will be written in the next three years? Somehow, I don't think you'd like to wait a week to compile a new kernel or compress a movie into MPEG4 format. You were probably one of the ones who thought that "640K of RAM ought to be enough for anyone".

Considering quantum effects like tunneling, how exactly would you power such a 3-atom transistor processor?

It would apparently consume only very small amounts of electricity, but considering how thin the paths would be, perhaps internal resistance would rise, making temperature rise and demanding higher voltages. Higher voltages make quantum tunneling and sheer molecular structure reconfiguration much more likely.

The result would be either generalized short circuits or destruction of paths (with formation of others), I suppose.

Of course this is the first thing Intel thinks of, but it would be very interesting to know how they'd manage to pull such a feat off using real-world materials and at room temperature.

The lattice constant (distance between the centers of adjacent atoms) in silicon is 5.43 angstroms. Thus one would assume that 30nm (300 angstroms) is actually about 55 atoms thick.
Most likely the 30nm refers to the gate length and the 3 atom reference was a 'misguided' measure of the gate dielectric thickness. The reason I say misguided is because dielectrics tend to be molecules not atoms. Although 3 molecules is thin, such thicknesses have already been reported before.
So much spin. But I guess it makes sense since IEDM (International Electron Device Meeting) is occurring soon and everyone loves to get excited about the newest small transistors.
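The back-of-envelope math above checks out; here is the division spelled out, using only the numbers already quoted in the comment:

```python
# How many silicon lattice constants fit into a 30nm gate length?
lattice_constant_nm = 0.543   # 5.43 angstroms, per the comment above
gate_length_nm = 30.0

atoms_thick = gate_length_nm / lattice_constant_nm
print(round(atoms_thick))     # about 55, matching the estimate above
```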

"Reducing circuit size is the cornerstone of Moore's Law, which states that the number of transistors capable of being put on a processor should double every 18 months. Shrinking circuits allows manufacturers to put more transistors onto a wafer, which in turn increases power. Unfortunately, the current technique, called DUV lithography, will likely hit its limit around 2003.

Controlling small wavelength light, however, is not easy. Current lithography machines depend on lenses to focus light. Because EUV light would be absorbed by glass, the new system will use a series of four specially coated convex mirrors to capture the mask
image and reduce it. The mirrors each contain 80 separate metallic layers just 12 atoms thick.

The technology stems from work at Stanford University. The laser-light technique, meanwhile, derived from work on missile defense systems, said Dave Attwood, a professor at the University of California and a researcher on the project.

EUV machines will be able to process about 80 wafers an hour, approximately the same as current lithography machines, making the process economically feasible."

I wonder what it will cost for chipmakers to transition over to the EUV technology? Intel is huge and would obviously be more able to make a capital investment like this than competitors.

And thus the great economy of the 21st century continues to thrive...
If for nothing more than continued prosperity, there is value in the continued upgrading of sw/hw. However, for true productivity, most software (hello M$) is bloated, bug-ridden and has "features" above and beyond the ordinary user's needs.
So, is anyone coding their web pages with Word? Betcha there are plenty who use Notepad, though...

The really funny part is how many Linux-running meatheads think they can second-guess the most successful processor company ever. The history of the rise of open source software is really the story of the rise of Intel. People try to pretend like it isn't true, but Torvalds didn't write for the 68K or the PowerPC, he wrote for the 386.

Intel's gotten badly burned a couple of times lately trying to lead the market places it didn't want to go. Why on earth do you think we'd try to lead it away from "x86" any faster than the Itanium line can carry it?

Couldn't an OS take a hardware inventory and mirror its RAM to disk on shutdown, then at startup, if the BIOS didn't report any changes to the hardware configuration, simply load the last memory image and forget about having to go through the entire boot process?

Um, yes, it could. It can. It's called either ACPI S4 (Suspend to disk) [aopen.com] (when it works) or "Bloody F*#%@#@ C{+#" (when it doesn't)...
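The idea is simple enough to sketch. Here's a toy version of the scheme described above; the fingerprinting approach and file layout are made up for illustration (a real ACPI S4 implementation lives in the OS and firmware, not in Python):

```python
import hashlib
import pickle

def hardware_fingerprint(devices):
    """Hash the hardware inventory so we can tell if anything changed."""
    return hashlib.sha256(repr(sorted(devices)).encode()).hexdigest()

def suspend(memory_image, devices, path="hibernate.img"):
    """On shutdown: save the RAM image plus a fingerprint of the hardware."""
    with open(path, "wb") as f:
        pickle.dump({"fp": hardware_fingerprint(devices),
                     "ram": memory_image}, f)

def resume(devices, path="hibernate.img"):
    """On boot: restore the saved image only if the hardware still matches."""
    with open(path, "rb") as f:
        saved = pickle.load(f)
    if saved["fp"] == hardware_fingerprint(devices):
        return saved["ram"]      # skip the full boot
    return None                  # hardware changed: boot normally
```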

Too bad gate count doesn't necessarily = correctness or robustness of design. In fact, it's pretty much inevitable that higher complexity (in this case, by one or more magnitudes) will translate into buggier designs, unless Intel can find the time to check all circa (infinity)! operational permutations.

Whaa? "the real problem in the cache world is miss rate and branch prediction, not capacity" indeed.
How do you suppose things get kicked out of cache? Little elves? While conflicts are still a problem (particularly if you don't do I-cache optimizations and/or have low associativity), increasing the capacity of the cache makes a lot of problems go away. There's a reason Intel gets to charge more for the Xeon, you know, and it's not just the groovy name.

The factual part:
To make a transistor like that work, it would have to have incredibly low resistance. Gold, anyone? Actually, gold would be ideal, as it can easily be made into a 3-atom-thick surface. However, it could not be done with today's primitive photographic procedures. One solution would be to use a stream of electrons to shape the chip, but this is all smoke and mirrors for now.
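For what it's worth, a back-of-envelope number on that 3-atom gold film. This uses bulk resistivity, which is an optimistic assumption: at this thickness, surface scattering dominates and the real resistance would be far higher.

```python
# Sheet resistance of a hypothetical 3-atom-thick gold film,
# assuming BULK resistivity (a lower bound, not a real device number).
rho_gold = 2.44e-8            # ohm-metres, bulk gold at room temperature
atom_diameter = 0.288e-9      # metres, roughly one gold atom
thickness = 3 * atom_diameter

sheet_resistance = rho_gold / thickness   # ohms per square
print(round(sheet_resistance))            # ~28 ohms/square, at best
```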

The funny [offtopic] part:
New CueCat 2000b! Now with more features and the power of a new 1THZ (1,000,000 MHZ) processor thanks to Intel technology! Scan barcodes like never before - over 50% are scanned correctly! The most ultra-secure encryption technology protects your private information from everyone but us!

Later that day, hackers get ahold of CueCat 2000b.
Hacker: So they're still using Base64+XOR?
Hacker #2: Yep

The other funny [offtopic] part:
New I-Opener 3000a! Now with the power of a 1THZ (1,000,000 MHZ) processor thanks to new Intel technology! New ultra-high tech security measures dependent on the Intel processor make the I-Opener 3000a Unhackable! This product is rock solid, with Iron Clad Security (tm)!

Later that day, hackers get ahold of the I-Opener 3000a.
Hacker: Goop on the BIOS again?
Hacker #2: Yep.

I wish more coders in this day and age took more pride in making
their stuff run faster and better... rather than just running at all.

some do: they're called console programmers.

why do you think a 33mhz playstation went from the original mortal kombat to gran-tourismo 2 without ANY change in hardware? tighter code. the fuckwits up at redmond could learn a thing or two from these folks.

FluX
After 16 years, MTV has finally completed its de-evolution into the shiny things network

Intel has been using the same basic architecture for the past 20 years.

The question that must be posed after bitching about Intel's dogged adherence to the x86 architecture is how will you get the world to change from x86 when we are already heading towards the dream of one billion connected devices, all using x86? If we suddenly decide to change to a completely new way of processing then we are going to render all of these one billion connected devices entirely obsolete - and you thought you had enough trouble keeping up with clock speed changes!

It's the same problem with the oil industry. There are too many people who have invested too much time, people and money into petroleum fuel for it to be chucked away at a moment's notice. That's the reason we're not driving Hydrogen-fuelled fuel cell cars now. So it obviously seems that if Intel won't make the switch to the next level (whatever that is) then we're going to be using the same old shit for the next 20 years!

Self Bias Resistor
Computer: A device that multiplies a user's ability to make mistakes.

If they say they'll have them in devices within five years, they probably have a method. Besides, if you actually read the article you'll see that it doesn't mention being three atoms thick, but 30nm, which is realistic, considering today's technologies are in the 130-180nm range. 30nm is beyond the UV range, so they've devised a new etching technology. I believe the UV limit was based on the lattice spacing in what they use as a focusing lens. At any rate, the possibilities are staggering (real time ray tracing!!!), go intel, time to invest!!

Indeed, let us hope that the transistor count will help the P4's future processors not to suck as much as they have been lately. I sincerely hope that they won't lose their market completely to AMD. Competition = good.

It expects to sell 400 million-transistor processors able to do 400 million calculations in
the time it takes to blink.

Yeah. It took me one calculation to determine that they will need quite a few of those A Clockwork Orange eye-holder-opener thingies for this to happen. That would require that, of the 6 billion people on Earth, 400 million people, or about 7% of the population, buy one of these in .2 seconds. I'm not even sure if there are 800 million people worldwide with a desktop or laptop computer.
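Taking the quote literally for a moment instead of joking about it: "400 million calculations in the time it takes to blink" implies a fairly modest rate. The blink duration below is an assumed ballpark figure, not anything from the article:

```python
# What rate does "400 million calculations per blink" actually imply?
calcs = 400e6
blink_seconds = 0.3           # assumed typical blink, somewhere in 0.1-0.4 s

rate = calcs / blink_seconds
print(f"{rate / 1e9:.1f} billion calculations per second")  # about 1.3
```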

I assume they use some derivation of nano self-construction. I'm not exactly sure what all this entails, but it is used to create carbon nanotubes, and has been referenced in many other nano-scale references.

While this is cool in the fact that it will make Intel's chips cooler, which means more energy efficient, which translates to can-be-made-faster, I question how this is a major breakthrough in the technology. Intel has been using the same basic architecture for the past 20 years. I will admit that they have made many developments in size and relative speed, but all they seem to be doing is making the chip smaller and slapping a larger heatsink on. I'll think this is much more interesting when they develop a transistor that is less than 0.05 nm. (*disclaimer* - my spelling may suck, but take the time to look past mere grammatical errors)

This advance will allow the Pentium 5 to have an all-new 700-stage pipeline to give the architecture room to be clocked up to 10 GHz. Unfortunately, due to the length of the pipeline, a branch misprediction will cause a stall lasting approximately 20 seconds. To avoid this, Intel will dedicate 300 million transistors on the chip to the world's most advanced prediction unit...

Have you seen the reprint of K&R 2? It's the same number of pages as the copy I bought several years ago, but they used such a thick stock that the book is actually thicker than O'Reilly's "C++: The Core Language". It's nearly 3x as thick as my original copy. Absurd.

(Ok, so maybe the pages aren't 3mm thick, but still...)

Something tells me the author did the calculation for 3nm, not 30nm...

Getting somewhere, we just aren't there yet. Anyway, I remember hearing somewhere that the reason brain cells are so much better than transistors is that they have... many, I think it was 26, states as opposed to two. Wouldn't that be interesting if we could make a computer like that.

Is this all in aid of bringing us yet another implementation of that wonderful, anti-orthogonal 16/32-bit dinosaur instruction set? If they spent half as much effort on Merced, or better yet, revving a really good design like PA-RISC, we might get somewhere.

Considering that as far as our knowledge of the subatomic world is concerned we are nearly ignorant, it is conceivable that the depth to which we are able to engineer structures is only now beginning to scratch the surface. Perhaps splitting an atom IS the next step?

Aren't they building a 50-mile-long particle accelerator so they can smash these things apart and learn more about what they are?

Just like silicon being replaced by diamond - it's probably going to have a significantly (but not radically) different manufacturing process. I think that each process, such as diamond boards, or atomic transistors, will require a revolution in a particular technology, but since these benefits in process will all happen at different times I strongly believe the technology as a whole evolves. Thus, each little revolution in a particular piece of production results in the evolution of the technology as a whole.

According to a post some time ago, IBM achieved 10 nanometers, as described in a previous post. If Intel claims their 30nm is smaller than IBM's 10nm, they are smoking something.
http://slashdot.org/articles/00/08/12/1520241.shtml

I care about them so much that I believe they should be directed in useful and appropriate directions.

Every tool reaches a level of development after which no further development is necessary or useful. A framing hammer made today is essentially the same as a framing hammer made twenty years ago because there's no useful improvement to be made. A head machined to a nanometer's accuracy or a handle made of some wacky wundermaterial would not make my hammer any more useful to me.

Instead, progress goes into a different kind of tool. My wood and metal hammer is fine for my occasional homeowner projects, but someone with more carpentry ambition would also have a high-tech nailgun.

Same with computers. There comes a point where the typical consumer just doesn't need any more power. That's why you can still find new P-90 systems being sold - for a personal net access/word processing box, that's enough. There are many people who are no more interested in playing Quake III or doing video editing on their PC than I am in building an addition to my house as a DIY project.

For whatever reason, the faster the CPU is the slower the machine boots. Back in the 8-bit no-mass-storage days our machines booted instantly. Then they took seconds. Then they took minutes. Now they take several minutes. Two more generations from now, if your power fails you will have to wait 2 days for your machine to reboot (hell, it takes almost that long now if it decides to run scandisk or whatever).

The inverse proportion even runs to metaphors. I remember an ad or article or something a few years ago about how this speed-demon new CPU stole the poor engineer's coffee break -- well, now he'll get it back while the damn thing reboots. Maybe with a vacation thrown in for lagniappe.

My understanding of semiconductor design is a little shaky, but don't these devices work by localizing a charge somewhere inside them? If that's the case and the device is only the size of 3 atoms, won't it be extremely difficult to localize a charge into that space? Even getting the thing to hold a single electron seems unlikely because of the Coulomb repulsion, spin-orbit coupling, etc. that such a small device would have to overcome.

Also, if they are working with conventional processes, how will they deal with the diffraction and quantum effects of shooting electrons or photons through the mask which they use to create the chips? I'll be very interested to see the details that the article said would be released tomorrow, because this promises to be extraordinarily revolutionary physics if they have indeed succeeded in producing transistors this small.

Yeah, and 400 million transistors gives a lot of room for design slop--more space for slapping together pre-designed components.

Any idiot can make a circuit that adds two 1-bit numbers. Any idiot can also string 128 of those 1-bit adders together to make a 128-bit adder. That's how damn near *all* logic circuits are designed. Wash, rinse, and repeat. No big deal.

Sure, any idiot can string together 128 1-bit adders, but designing a 128-bit adder to run at that high a clock speed takes a bit more work. It would have to use some kind of carry-lookahead logic trick to get everything it needed done in one cycle. Point being, putting together a solid, optimized component like that DOES take some serious design time -- if for nothing else than to do the math using a CAD program or espresso. And if that takes effort, getting your stuff to play nice at a high enough clock speed must take more!

I'm far and away no pro [yet] at this sort of thing, but from what I've done myself so far (just introductory digital design stuff, building components and simple clocked machines) it would take a long time to put together something this complex and do it right. Witness the P4.
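The "string 128 1-bit adders together" construction really is trivial at the logic level; it's the carry chain's timing that forces carry-lookahead tricks in real silicon. A toy sketch in plain Python (not HDL):

```python
def full_adder(a, b, carry_in):
    """One 1-bit full adder: the classic XOR/AND/OR gate combination."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x, y, width=128):
    """String `width` 1-bit adders together. The carry ripples bit by
    bit, which is exactly why a real high-clock-speed design needs
    carry-lookahead: here, bit 127 can't settle until bit 0 has."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry
```

Note the functional correctness is the easy part; making 128 dependent gate delays fit in one clock cycle is where the serious design time goes.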

It seems like at that level of thickness you would have to start concerning yourself with the crystalline structures of whatever you're using to insulate between layers of transistors.

With it only 3 atoms thick, you'd think that there would be fab screwups causing bands in the transistors to narrow to an unusable level - probably happening quite frequently. Would play hell with your yield, that's for certain.

I wonder, though, if they're doing work with transistor area. If a reduction to 3 atoms thick bought them another 10 years of industry life I wonder what shrinking the sides by 1/2 would do.

With these chips, computers will be able to translate verbal commands or conversations from one language to another in real time, or search massive and complex optical databases.

Don't you just love the examples that are used to "show off" the speed of new chips to the masses? Is translating verbal commands in real time to another language really the killer app we've all been waiting for?

The very complaint against x86 architecture is that it is CISC: it is throwing architecture at the problem where you need to be throwing better technology. Well, here they are at least trying to get better technology.

Every major advance in the last 40 years has been due to increases in clock speed and switch density. Cute tricks like caching and dual-piping or whatever they're calling it this year are flea bites on the butt of real progress. Remember what an "advance" the 486 was over the 386? The corporate boojums need things to market so they make things up when there's nothing real in the pipe, but when something real comes along it doesn't have to be marketed to you because you sure as damn hell notice it.

I mean, my relatively non-obsolete PIII is real cool, but would it really be that much cooler than a machine with 486-level architecture running at the same 450 MHz? For that matter I have to wonder how my tired old 8-bit friends would fare if one could run them at a good fraction of a GHz. Sure, you buy some extra clocks with all those extra transistors trying to second-guess look-ahead your code, but I wonder if that's the best use of all that high-speed silicon. Maybe a *cough* beowulf cluster */cough* of, say, Z80-level CPUs all fabbed on one chip and running at 1GHz could do some really interesting things by comparison.

If this thing is real then great for Intel and for us, it doesn't really matter what architecture they apply it to; and if it isn't real it won't save them when something that is does come along, no matter how good their press releases are.

I was just talking with a colleague working on Bose-Einstein condensates (BEC) and I asked what some of the uses were. Due to the way BECs work statistically/quantum-mechanically, one can create any interference pattern within the BEC. He said that there are people working on trying to figure out ways of using this property to replace the etching processes used today to create things like computer chips, by creating an interference pattern in the form that one wants and then laying the BEC on the material (there is more to it than that, but you know that). This would allow for manufacture of things at the 3-atom level. Of course, as someone else mentioned, 30 nanometers is larger than 3 atoms thick. Lattice structures of silicides are roughly between .1 and .9 nm [1].

Theoretically this is possible, now whether this is practical is a whole different ball park.

Actually, we don't use the same thing that was invented in 1947 for ICs now. There are all kinds of transistors. BJTs, IGBTs, FETs, MOSFETs. The latter being the type used in modern semiconductor technology. Forgive any errors (I've not yet taken solid state), but whereas a conventional transistor emits a collector-emitter current proportional (the gain) to the base-emitter current, a MOSFET's gate is a capacitor (in fact the capacitors used for DRAM are just MOSFETs) where the current through them is proportional to the voltage across the gate. They are much more disposed to on-off operation than operation over a linear region, because it requires minimal (gate capacitor leakage current) energy to maintain a MOSFET gate state, whereas to represent a '1' on a BJT would take a constant supply of current, regardless of whether it had changed recently or not.
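The parent's point about holding state can be put in rough numbers. Everything below is an illustrative assumption (a ~1 fF gate, a 1.5 V supply, ~10 uA of base drive), not real device data; the contrast is what matters:

```python
# What does it cost to HOLD a '1'?
gate_capacitance = 1e-15      # farads: assumed ~1 fF gate
vdd = 1.5                     # volts: assumed supply

# MOSFET: pay once to charge the gate capacitor, then essentially
# nothing (just leakage) until the state changes.
switching_energy = 0.5 * gate_capacitance * vdd**2   # joules per flip

# BJT: pay base current continuously for as long as the '1' is held.
base_current = 1e-5           # amps: assumed ~10 uA base drive
hold_power = base_current * vdd                       # watts, constant

print(switching_energy)   # ~1.1e-15 J, paid only on a transition
print(hold_power)         # ~1.5e-5 W, paid the whole time
```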

Given the smaller and smaller transistors, I have always wondered what the effect of ionizing radiation is on these things. Granted, your average CPU is not out in outer space someplace, but even your everyday environment has its share of crud running around (radon from granite, f'rinstance). Are we going to have to be careful about protecting these teeny-weeny gates that use very, very few electrons, or are we going to have to build error detecting/correcting logic into the CPU itself?

Actually, they are probably also using a process created by a company called "Numerical Technologies" (here's a link to one of their press releases: http://www.numeritech.com/news/pressreleases/20000531nan.html)

Rather than looking into new and innovative ways to increase a CPU's power like Sun does with their UltraSPARC line or Digital did with their Alphas, or even AMD has done with their line of chips, they just try to keep shrinking the size of the transistors, pumping more of them into the CPU, and ramping up the clock speed. When are they going to learn that the x86 architecture is dead dead dead? I REALLY hope they don't screw up Merced.. err.. Itanium by keeping the prices too artificially high. I'd really like to see that technology move into consumer PCs instead of just servers. We need a stepping stone out of the x86 world while preserving the cost factor that makes x86-based systems a more palatable choice over the higher end and more expensive workstations.

Good lord, that's a lot. This should fill in nicely while molecular computing advances to the point of commercial feasibility as a technology.

However, one thing that amazes me even more is how much effort it's going to take to actually design a chip that uses 400 million transistors! I'm a computer engineering student: designing small stuff using just a few is enough for me.

Don't blame Intel on that... blame the vendor of whatever OS you happen to be using... Though, from what I've heard, BeOS boots within 10 or 15 seconds, and Mac OS X is supposedly going to power right on up as well, so it's doable.

Couldn't an OS take a hardware inventory and mirror its RAM to disk on shutdown, then at startup, if the BIOS didn't report any changes to the hardware configuration, simply load the last memory image and forget about having to go through the entire boot process?

This is not an issue for laptops, PDAs, or the physical size of the computer under your desk. What this does affect is VLSI (ULSI?) ICs. Reduced transistor size means lower operating voltages, which means you can scale down supply voltage, which reduces electric field strength and power dissipation in the transistors. This leads to boosted device density and switching speed. In short, this allows VLSI designers to create faster, more complex and powerful ICs and/or ones that require less power (this could affect your laptops and PDAs). As for powerful and complex, look at the IBM Power4 processor: it contains 170 million transistors. SIA (Semiconductor Industry Association) predicted in 1999 that by 2002 microprocessors would contain 76 million transistors!!! This is all because of the incredible shrinking transistor.
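The voltage-scaling argument above comes straight from the usual CMOS dynamic-power relation, P = alpha * C * V^2 * f. The numbers below are purely illustrative, chosen only to show why the quadratic voltage term is the one worth chasing:

```python
def dynamic_power(activity, capacitance, vdd, freq):
    """Classic CMOS dynamic power: P = alpha * C * V^2 * f."""
    return activity * capacitance * vdd**2 * freq

# Illustrative, assumed numbers: halve the supply voltage while
# DOUBLING the clock, and power still drops, because V enters squared.
before = dynamic_power(0.1, 1e-9, 3.3, 500e6)   # old: 3.3 V, 500 MHz
after = dynamic_power(0.1, 1e-9, 1.65, 1e9)     # new: 1.65 V, 1 GHz

print(after / before)   # the ratio comes out to roughly 0.5
```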

It's really nice that you think it's time for a major computing breakthrough. Personally, I think it's about time for a major transportation breakthrough, something that really catapults transportation into a new era, not unlike the invention of the wheel itself ;)

Considering that the first transistor was created in late 1947 [pbs.org], I guess we've come a long way. But have we? Really the only thing we've been able to do is decrease the size of the transistor, so we are able to pack more into the same amount of space. This may be an issue for laptops and PDAs, but I'm not really all that concerned about the size of the PC sitting under my desk. I think it's about time for a major computing breakthrough, something that really catapults computing into a new era, not unlike the invention of the transistor itself.

A traditional etch/deposition system works by putting on a layer (in this case 3 atoms thick), then etching off the stuff you don't want.

What I can't see is how one can lay down anything 3 atoms thick (or wide) reliably (in the sense of real-world mass manufacture, not one-at-a-time in-the-lab productions) using scaled versions of existing fab technologies and without some nano-assembler type technology. Worst case you'll get 3 atoms somewhere in the middle of the wafer and maybe 5 or 0 at the edges....

This sort of tech will come one day - but I believe it's going to have to be by revolution, not evolution....