
arcticstoat writes in with word that scientists at the Space Nanotechnology Laboratory at MIT have found a new way of extending Moore's law into the future — they have succeeded in etching a grid of 25nm lines into a silicon wafer. The article notes that this technique could be used for writing the grid on which chips are laid down, but that the electronic elements would have to be written using more complex techniques. "[Researchers] created an interference pattern using light from a laser with a wavelength of 351 nm. The pattern consists of alternating light and dark zones repeating every 200 nm. This allowed them to etch 25-nm lines into a silicon wafer, each 175 nm apart. They then repeated the process three times, each time shifting the interference pattern by 50 nm and etching another 25-nm groove. The resulting grid has alternating 25-nm stripes and grooves..."
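The four-exposure process described in the summary can be sketched in a few lines. This is a purely illustrative model (Python), using only the numbers quoted above; it just tracks which 1 nm slices of one 200 nm period end up etched:

```python
# geometry from the article: 200 nm fringe period, 25 nm groove per
# exposure, four exposures each shifted by 50 nm (illustrative model only)
PERIOD = 200   # nm, repeat distance of the interference pattern
GROOVE = 25    # nm, width etched per exposure
SHIFT = 50     # nm, shift applied before each repeat exposure
EXPOSURES = 4  # initial etch plus three shifted repeats

etched = [False] * PERIOD  # one flag per 1 nm slice of a single period
for e in range(EXPOSURES):
    start = e * SHIFT
    for x in range(start, start + GROOVE):
        etched[x % PERIOD] = True

# the result alternates 25 nm grooves and 25 nm stripes: a 50 nm pitch
grooves_nm = sum(etched)
print(grooves_nm)  # 100: half of each 200 nm period is etched
```

Running it confirms the pattern alternates 25 nm of groove with 25 nm of untouched stripe across the whole period.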

Actually, there is more innovative stuff that can make computers faster, but people insist on making them smaller. Take the conducting plastics [thefutureofthings.com] that are under development. If less conventional approaches like that have a much greater chance of making computers faster, I don't see why they don't use them.

But isn't the problem not so much the continued shrinking, but that at such tiny sizes you have a better chance of the electrons leaking? I am not an engineer, but IIRC there have been several articles in the past few years on the need to switch to something besides silicon due to electron leakage once you go below a certain size. And how small CAN you get before the electrons begin to jump the gates on a regular basis? Surely there has to be a size limit. Maybe someone here who IS an engineer can enlighten me.

25nm is nothing to write home about; companies are already planning for 25nm. What's exciting is that they created a feature smaller than the wavelength of the light used to etch it. Had they used 400nm light to create a 45nm feature, would the title have been "MIT breakthrough could lead to 45nm chips!!!"?

Not only did they make features smaller than the wavelength, they did it with a relatively simple and inexpensive setup. It would be interesting to see this combined with the memristor development in an attempt to create very cheap, high density storage or even cooler, hybrid analog/digital computers.

Well, the fact that they've been creating features smaller than the wavelength of the illuminating light is nothing to write home about either.

Current chips (since at least the 180nm node) are fabbed this way at microelectronics fabs all around the world. We already use 193nm light to create features as small as 22nm, using tricks like immersion, double exposure and OPC.

This will help us reach the resolutions that will make graphene come alive for us. After all, its useful semiconducting properties only begin to appear at scales of 10nm or below. I'm eagerly awaiting the graphene age.

I thought quantum interference was a problem with circuits and gates smaller than 40nm, so even the ability to etch the channels won't mean they'll work. Maybe I'm remembering incorrectly - can someone set the record straight?

They've been making features smaller than the wavelength of light for almost a decade now using interference lithography. It's nothing new, but it did scare people that Moore's Law would be over sooner than they thought (after all, how would they write features using extreme UV with the materials they have?).

What's interesting is that their interference lithography mask allows them to reach a minimum feature size limit of 25nm for silicon.

Altera (www.altera.com) is one of the many silicon companies announcing 42nm devices shipping in the next year or so. Xilinx fanboys - I'm sure they promise the same (picture an AMD/Intel bunfight if you will) - and though I must confess I am friendly towards them as an ex-employee of sorts, I am certain they are not the only ones planning to produce devices at this process node in the near future, with Intel and IBM very much at the front of the curve. The gap between theoretical limits being announced and actual manufacturing at the announced node seems to be getting a lot shorter. Is quantum really next, or is optical? As we get down to 32nm and beyond, the so-called Moore's Law (which mostly seems to serve journalism ;) ) really, genuinely seems to be nearing its limits. What IS next after silicon transistors on a die? Gallium is supposedly running out (due to flat panels), and that's only a doping chemical for speed - still in the silicon domain, not a real sea change of technology. What's going to happen to the size/power curve? Even multicore processors will suffer as long as they are still roadmapped out on the same substrate. Are we really running out of time now? I don't really hear of the 'next big thing' in any form other than conjecture at the moment.

IANA nanotechnology specialist, but IMO the 'next big thing' might be something like an i686 on the same die as a whopping-big Xilinx FPGA, so that you can do hardware encryption at memory-bus speeds, or things like that. When the hardware gets smaller you can be more creative about how you combine it with other hardware.

Personally, I'm looking forward to the ARM-23 running ARMLinux on a PDA with realtime encryption and DSL-sized wireless bandwidth. When you can jam a bunch of hardware in a tiny place, things

Sorry, just having re-read your post, y'know, just to make sure I'm not going mad... I should have marked it +5 hilarious. If you are not clued up about the subject matter then please don't hit reply. Dick Tracy my ass; waste your keystrokes elsewhere, luddite!

This is a direct consequence of your misunderstanding of Moore's Law, journalism or not. Moore's Law does not insist on miniaturisation, but rather on the degree of integration (DoI). Until relatively recently, miniaturisation was the main factor in increasing the DoI. It no longer is. And that is not a problem. The current trend is to increase the DoI by increasing the absolute size of the chip. This is a well-established trend already; just look at the multi-core CPUs. So, in

Hmm... Wonder how much performance gain there would be with 2GB of RAM with L2 cache performance. =)

That would lead to a situation similar to the Chip-RAM/Fast-RAM split in the old Amiga architecture, where there's one area of RAM the CPU has blazing-fast access to, and another that the CPU accesses as slowly as the other devices do.

Next, you optimize your bloaty software. WordPerfect 5.1 ran just fine on an 8MHz 286, and had a capability set not too different from current word processors. Any piece of software can be optimized to the point where most operations are instantaneous.

I've laid floating lines 1 nm apart with my boat in the past. 25 nm could be done if you wanted but that's getting pretty far apart. Who wants this grid, and why the heck would you use a laser to create light and dark zones every 200 nautical miles?

The process of making smaller features is only a small fraction of the problem in producing 25nm (or smaller) Si-based electronics. Leaving aside quantum effects, which start to dominate at length scales smaller than 10nm, stability and electrical leakage through the gate are the most significant problems. When Intel went from 65nm to 45nm, it wasn't just a "shrink"; an entirely new approach to materials design was needed to deal with gate leakage current. In simple words, the silicon oxide insulator was just too thin not to leak. The new high-k metal gate (hafnium-based) stack is the major step that allowed those chips to be made. This research is good, but it solves only a small fraction of the difficulties the electronics industry faces in keeping up with Moore's law.

Exactly. And it is not limited to the physical effects themselves; it also includes the limited ability of modern design and verification software to simulate these effects on inputs of any practical size in any practical time.

Designing these chips will be expensive. And that's exactly what Moore's Law is about. Not some stupid miniaturisation of the devices.

Though Gordon Moore certainly developed his law around the silicon chip, the interesting thing about his law is that it is retroactive and not restricted to silicon, leading to the possibility that even if there is a real limit to silicon, something else will come along to replace it and keep the law going through another iteration. Whether that turns out to be holographic, 3-D, biological, or whatever is anyone's guess at this point.

If you start out with the Hollerith census counting machines developed for the 1890 census (the ones that used cards the size of dollar bills because they had a bunch of dollar-bill boxes, hence the size of the punched card and, later, the 80-column screen), then move to electric relay switches, then to vacuum tubes, then to transistors, then to silicon, the whole thing is an exponential curve with a doubling every 18-24 months.
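That doubling claim is easy to put into numbers. A minimal sketch (Python, illustrative arithmetic only; the function name is mine):

```python
# A quantity that doubles every `doubling_months` months grows by this
# factor over `years` years.
def growth_factor(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

# Over the 118 years from 1890 to 2008, an 18-24 month doubling period
# implies total growth somewhere between 2**59 and roughly 2**78.7.
low = growth_factor(118, 24)   # 2**59
high = growth_factor(118, 18)  # ~2**78.7
```

Even at the slow end, that's more than seventeen orders of magnitude of growth across the whole relay-tube-transistor-silicon span.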

Every time I hear someone saying, "We're reaching the end of Moore's Law," I think: Not.

First, IAALE (I am a lithography engineer) working on Intel's 22nm process technology. Let's clear up a few misconceptions:

1) The name of a logic node is directly related to the size of the features being made. Those names (e.g. 65nm, 45nm, 32nm, etc.) used to refer to the half-pitch of the minimum pitch that was printed, but that is not true today. The 65nm node used a minimum pitch of ~200nm, 45nm used ~140nm and 32nm is using ~100nm. The next node, 22nm, is slated to use a minimum pitch of 72nm. The features discussed in this article have a pitch of 50nm, which would be equivalent to the node after 22nm, i.e. 16nm.

2) It's not hard to print features smaller than the wavelength of light. For the lens-based systems we use, the Rayleigh criterion gives the minimum half-pitch possible: k1*lambda/NA with k1=0.25, where lambda=wavelength (193nm) and NA=numerical aperture (1.35 for the best lenses). So ~72nm is the minimum pitch, already much smaller than the wavelength.

3) I hate to break it to these researchers, but interferometry has been used for a looong time to make gratings. Search for "interference lithography" on Google Scholar. The fourth link is called "Nanolithography using extreme ultraviolet lithography interferometry: 19 nm lines and spaces". That paper is from 1999. And they did it in one exposure, not three (using a smaller wavelength). You would actually need at least one more exposure to divide the grating into something that resembled a logic circuit. The technique in this article is not practical for a number of reasons, but we can do better than them using pitch-doubling techniques and only two exposures.
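A quick sanity check of the numbers in points 1) and 2) above. This is purely illustrative Python; the pitch figures are the ones quoted in this comment, and the names are mine:

```python
# Rayleigh resolution limit for lens-based projection lithography:
# minimum half-pitch = k1 * wavelength / NA, with k1 bottoming out at 0.25,
# so the minimum pitch is twice that.
def min_pitch_nm(wavelength_nm, na, k1=0.25):
    return 2 * k1 * wavelength_nm / na

print(min_pitch_nm(193, 1.35))  # ~71.5 nm, matching the ~72 nm above

# node name vs. the minimum pitch actually printed at that node (nm)
node_pitch = {65: 200, 45: 140, 32: 100, 22: 72}
for node, pitch in node_pitch.items():
    # if node names still meant half-pitch, pitch/2 would equal the name
    print(node, pitch / 2)

# the article's 50 nm pitch is a 25 nm half-pitch -- roughly the node
# after 22nm (i.e. "16nm"), not a "25nm"-class process
```

The loop makes the node-naming point concrete: every half-pitch comes out well above the number in the node's name.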

Here is a perspective on the size of these 25nm stripes and grooves. If a cross-hatch of these stripes and grooves, done both vertically and horizontally, had one pixel of a picture placed at each intersection, then the number of high-definition 1920x1080 pictures you could fit in just one square millimeter would be 20.833 pictures wide by 37.037 pictures high, for a total of 771.605 pictures per square millimeter... a half minute of video at 25 fps. For the metric-challenged, that's 529.166 pictures wide by 940.741 pictures high, for a total of 497808.642 pictures per square inch... over 4.6 hours of video at 30 fps.
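The arithmetic above checks out; here is the metric half of it as Python (illustrative only, assuming one pixel per 25 nm grid line as in the comment):

```python
PITCH_NM = 25                        # grid spacing from the article
px_per_mm = 1_000_000 // PITCH_NM    # 40,000 grid lines per millimetre
frames_wide = px_per_mm / 1920       # ~20.833 HD frames across one mm
frames_high = px_per_mm / 1080       # ~37.037 HD frames down one mm
frames_per_mm2 = frames_wide * frames_high  # ~771.6 frames per mm^2
seconds_at_25fps = frames_per_mm2 / 25      # ~30.9 s: "a half minute"
print(round(seconds_at_25fps, 1))
```

The per-inch figures follow the same way, just scaling the linear counts by 25.4 mm per inch before multiplying.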