
Roland Piquepaille writes "According to the semiconductor industry, maskless nanolithography is a flexible nanofabrication technique that suffers from low throughput. But engineers at the University of California at Berkeley have developed a new approach: by 'flying' an array of plasmonic lenses just 20 nanometers above a rotating surface, they can increase throughput by several orders of magnitude. The 'flying head' they've created looks like the stylus on the arm of an old-fashioned LP turntable. With this technique, the researchers were able to create line patterns only 80 nanometers wide at speeds up to 12 meters per second. The lead researcher said that by using 'this plasmonic nanolithography, we will be able to make current microprocessors more than 10 times smaller, but far more powerful' and that 'it could lead to ultra-high density disks that can hold 10 to 100 times more data than today's disks.'"

Well, think about it this way: they have to build one or a few really expensive machines, and those machines obviously don't consume many resources. So the mass-production effect will push the price down after a while.

Brownian motion isn't really relevant at this scale, but I imagine that if the channels or 'wires' or whatever were close enough, then tunneling could be an issue. The probability of tunneling falls off exponentially with distance, and the severity depends on the energy, but if the wires are put close enough it could be an issue -- though only if there were just a few atoms between channels.
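The exponential falloff mentioned above can be sketched with the textbook rectangular-barrier WKB estimate. This is an illustrative approximation only (the 3 eV barrier height is an assumed round number, not a real gate-stack value); real devices need a full band-structure treatment.

```python
import math

# WKB estimate for an electron tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * U) / hbar.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per eV

def tunnel_probability(barrier_ev, width_nm):
    """Rough transmission probability through a barrier of given height/width."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Each added nanometer of separation costs several orders of magnitude:
for d in (0.5, 1.0, 2.0, 5.0):
    print(f"{d:4.1f} nm barrier: T ~ {tunnel_probability(3.0, d):.2e}")
```

Doubling the barrier width squares the (tiny) transmission probability, which is exactly why a "few atoms between channels" matters so much.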

The researchers who make 200-400 GHz transistors today DO in fact worry very much about tunneling. (I'm thinking of InP/InGaAsP transistors.)

Quantum wells are around 5-10nm wide, so anything approaching ~20nm would at least have to account for that sort of quantum effect. So density may have a difficult limit to breach, but smaller lithography certainly makes high speed transistors easier to implement on CMOS.

Actually, with processors using 90 and 45 nanometer transistor sizes, there is a very high likelihood that a number of transistors will fail over the lifetime of the chip due to diffusion alone.
Modern processors have taken care of this by routing data through parts of the chip that are still active, though this has the interesting effect of slowing the processor down as it gets older.

http://www.extremetech.com/article2/0,1697,1994121,00.asp [extremetech.com]
Here is an article on it. Although it's from 2006, there has been more work done since, and there are more articles in the literature.
If you search for 'self-healing' microprocessors you can find a number of articles on it.

This is completely untrue. If a transistor fails on a CPU, that's it; there's no routing around the damage as you seem to imply.

If you'd actually read the article you referenced when queried by someone else, you'd see that it was a three-year study initiated in 2006. So even if that study bears fruit, it'd be 5-10 years at least before it showed up in the CPUs you buy from Intel or AMD.

I tend to think of Brownian motion as happening in a gas or liquid, which Wikipedia confirms: http://en.wikipedia.org/wiki/Brownian_motion [wikipedia.org]. Thermal diffusion of atoms in a device does cause problems and limits the temperature at which semiconductors can work. In fact, diffusion of dopants is one way a chip can 'wear out' with long-term use. No doubt the smaller the scale, the bigger a problem diffusion will be, but it tends to be very temperature sensitive, so keeping the device at some reasonable temperature would pr

Tunneling electrons and other quantum effects are already in effect in current devices. We just design around those effects instead of taking advantage of them currently. When we really get the ability to make reliable 5nm size scale parts, we'll just switch to quantum dot based transistors (single electron transistors).

Brownian motion isn't relevant here.

A big issue is that sharp features are thermodynamically unstable (lots of dangling surface bonds), so edges tend to "soften" over time due to surface diffusion. Also, at ohmic contacts you can get pits forming which can eventually degrade features.

Another issue is that at the size scales we're talking about, current insulators stop working. They're looking at switching to a variety of new materials for this purpose (for example, HfO2), but these are tricky. This is what they mean when they say "high dielectric constant" materials. Every MOS transistor has this oxide layer (between the Metal and the Semiconductor), and that layer's thickness defines many of the physical properties of the device.

Finally, you have to worry about inductors to a lesser extent. Current inductors aren't quite good enough, but we're working on that too =) Nanoscale metallic alloys are definitely the way to go.

In any event, this article is sort of sensationalist (surprise!). I was able to make 20nm features using physical embossing (stamping liquid metal precursors with a plastic stamp and then curing them) back in 2002. Making features at a small size scale is easy; it's keeping error rates low, making interconnects, etc., that's hard and annoying. Plasmonics is very neat, though; I can imagine it working in time.

Besides, hard disks already have magnetic domains of ~ only a few nanometers anyway.

Finally, you have to worry about inductors to a lesser extent. Current inductors aren't quite good enough, but we're working on that too =) Nanoscale metallic alloys are definitely the way to go.

Now my experience with electronics is quite limited at best, but I was under the impression that inductors were specifically avoided in electronic circuitry for a number of reasons, not the least of which is that they tend to be bulky. This is not a big problem because the effects of an inductor can be simulated wit

They avoid them as much as possible, as you say. I meant by "lesser extent" that they aren't as big of a deal because we avoid them.

In rare situations they are necessary, and the limiting factor is one of standard magnetic materials ceasing to function as expected at very high frequencies. You wouldn't necessarily have them patterned into a circuit, but say for instance you want to use an inductor to transformer-couple AC signals into an analog to digital converter.

What exactly is the problem with this term? Just too "fancy" and "technical" for you salt of the earth Anonymous Cowards? It makes perfect sense if you know the root words for it, and it succinctly describes the technology:

- Plasmonic: Of or using plasmons [wikipedia.org]
- Nano-: At the nanometer scale of operation
- Lithography: Lithography [wikipedia.org]

Maybe you can argue that the "nano" is superfluous, but it captures one of the two things that are significant about the new technique -- it uses plasmons instead of traditional light, and it can theoretically operate at a scale as small as 5-10 nm. ("Nano-" seems to be more significant, when you're at the point where you're talking single-digit nanometer resolution.)

Just because it's long and wordy doesn't mean that it's Star Trek nonsense. The phrase has a useful meaning.

Do current chip manufacturers like Intel and AMD work on new lithography techniques, or do they focus more on architectural changes?
It seems that they shrink their process at a fairly slow rate, and both companies seem to do it at about the same speed.

Also, if they both have been just advancing the standard techniques using high frequency light to etch all the chips, how easily could they change their manufacturing process over to something radically different?

Seeing chips with 100 times more density would offer incredible benefits for speed and power savings, seeing the recent changes that the 65nm to 45nm process has brought. Hopefully we'll actually be able to see this process being used inside the next 10 years though.

You have to distinguish between fabs, which produce ICs, and companies that produce fab equipment. Of course they're intertwined, but AMD and the like are architecture companies, while companies like ASML drive fab technology. The "slow rate" is set by industry agreements -- milestones -- to keep the cost of fab-tech R&D minimal. The shrink step is a factor of 2 in surface area, resulting in a factor of sqrt(2) in feature size. Litho-tech companies use this step because the market is not viable for developing fab tech that takes a different approach: litho is just a fraction of the hundreds of steps it takes to produce an IC. If you were to implement a new fab litho technique that differs from the roadmap, you wouldn't have customers, because the technology wouldn't be in sync with the other processes.
In other words: this new technology is only viable if the others jump on the bandwagon, and so far it's "only" proof of concept. The field of fab-tech R&D is filled with new concepts, but that's just a small part of the story.
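The factor-of-2-in-area / sqrt(2)-in-feature-size cadence described above can be checked in a few lines. The 90 nm starting point is just the familiar ITRS-era node used for illustration:

```python
import math

# A full node shrink halves area, so the linear feature size
# drops by sqrt(2) ~ 1.414x per generation.
start_nm = 90.0
node = start_nm
for _ in range(4):
    node /= math.sqrt(2)
    print(f"{node:5.1f} nm")
# ~63.6 -> ~45.0 -> ~31.8 -> ~22.5, tracking the 90/65/45/32/22 roadmap
```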

The quote means that the competent will always find solutions before resorting to violence, because for every possible situation there is an option better than violence. The incompetent can't find any of those options and use force, which is never the optimal solution.

I think that it's meant to mean, 'if you find yourself in a situation where you feel you must use violence, you have been or are being incompetent'. In other words, you've either done something wrong in the past or you aren't seeing all your current options.


While I see your point, Asimov meant to take it even further. Your interpretation implies that violence can be an acceptable solution to a problem. Asimov is saying that it never is, and if you think it is a valid solution, you're not seeing the whole picture.

Filling one's belly involves violence, even if one is one of the strictest forms of vegan.

Therefore, either we're all incompetent because we all eat, or there's a flaw in Asimov's logic, which I rightly pointed out in the GP.

If an economic or religious shutdown isn't violent, what is it, then? It most certainly is an exercise of force.

(Besides, we're talking about science fiction, here. If you can tell the future, of COURSE you're going to have an alternative to violence - it's like a rapist calling to set up a time and place for an appointment. The "Asimov" model here doesn't even come close to fitting realistic real-world scenarios because of this.)

It's my understanding that they work on both. It's really expensive to build the fabs to produce chips at a smaller process, so obviously they're going to profit off the ones they have as long as possible. Last I heard, AMD is one generation behind Intel right now. You can't just shrink a chip down with the new techniques, either; every time you have a process shrink, you run into new problems.

Perhaps this will make SSDs competitive now. You can get 4 GB microSD cards these days. If you could get j

It seems that they shrink their process at a fairly slow rate, and both companies seem to do it at about the same speed.

I have no idea what definition of "slow" you're using. Making a new process work is absurdly complicated and expensive, and they usually do it once every four years. By any standard I can think of, the computer industry is still moving at breakneck speed, setting new performance records, creating new device classes and entering new price brackets all the time. For older definitions of supercomputer, you're probably carrying one in your pocket. At this rate, it'll be a little chip under my watch in ten years.

For older definitions of supercomputer, you're probably carrying one in your pocket. At this rate, it'll be a little chip under my watch in ten years.

You can already get mobile-phone watches (CECT M800 and others) which have 2 gigabytes of memory and Bluetooth capability, and which can both record and play mp3/mp4 files, along with WAP internet access. There's even a watch with Wi-Fi detection built in.

I don't mean slow exactly, but the progression of lithography technology doesn't seem to be moving as fast as other areas. This could just be because I don't really understand the whole process of photolithography -- I understand that it is complicated -- but feature sizes are decreasing by a constant factor of about 1.4 every 2-3 years, while we can easily see hard drive density increasing exponentially.
That is why (well, maybe a possible reason why) companies have just been making multi-core machines

Do current chip manufacturers like Intel and AMD work on new lithography techniques, or do they focus more on architectural changes?

Yes. This research was funded by the National Science Foundation, a federal agency, but IBM, Intel, and AMD are all active in process technology research. I can't dig up much on what they're currently researching, but here are a few things I was aware of from the past few years (and some things I dug up while looking for them):

Intel is also funding research into computational lithography to avoid having to do immersion lithography, like IBM and others are doing for the next generation.

AMD & IBM were partnering on a test fab for EUV lithography in 2006 and had successfully demonstrated the ability to create transistors but were still working on metal interconnects at that time. I'd bet money they've gotten past that point by now.

IBM did a lot of pioneering work on strained silicon that they announced back in 2001.

Silicon-on-insulator (SOI) was another fab technology they pioneered in 1998, but it hasn't spread much in the industry beyond them, AMD, and Motorola / Freescale -- in other words, IBM and its partners.

And then again, back to IBM, they were the first company to come up with a viable process for laying down copper interconnects, using what's called a dual-damascene process, in the late 90's.

Hitachi has been actively developing electron-beam lithography for over a decade, but the technology has yet to really live up to its promise as a commercially viable competitor for photolithography AFAIK.

Some of the above research was about commercializing "pure" research done in independent labs like this experiment, but a lot of it was directly funded by the big fabrication companies and their clients and partners. Since I'm not in the fabrication industry myself, I can't really comment any further on who has done what (and how much each of the above deserves credit). This is just news I remember from years past.

Intel is also on the forefront of photonic interconnects for Processors. HP just jumped on board a year or two ago. Often they fund university research and then try to implement it viably in CMOS or current fab processes.

It actually says nothing about whether or not these microprocessors would be able to operate faster.

But assuming this is real, it means one of two things:

Maybe we'll have 200 cores which are about as fast as single cores we have now, in which case, nothing will be slower, and people who planned ahead (like Erlang developers) will find themselves running much faster. On top of that, embarrassingly parallel applications like raytracing will be that much more viable -- consider that it only took 16 cores to make a g

That, or we have 200 cores, each of which is tens or hundreds of times faster than what we've got now. In which case, WTF do I care that 198 of my cores are doing nothing, when the other two are running my Ruby and Python apps as though they were hand-optimized assembly?

All other things being equal, C or hand-optimized assembly will still be faster than Ruby or Python. Maybe the faster processors make Ruby and Python "fast enough", but they still won't be as fast as hand-optimized assembly language or C.

All other things being equal, C or hand-optimized assembly will still be faster than Ruby or Python.

True, and for some things, it will matter.

But take right now -- how many apps are Ruby or Python "too slow" for, on modern processors?

Of course, that's ignoring the possibility of a big breakthrough in interpreter and code-generation technology before these chips come out.

It seems to be pretty steadily moving along. Just look at the recent JavaScript improvements.

Granted, none of these will ever beat hand-optimized assembly, by definition, because a human can always output exactly the same program the compiler would (VM, runtime optimizations, and all), and additionally handle corner cases that the VM might be slower with.

I bet hand-optimized assembly would still be faster. (I do understand what you're driving at; even on the 'garbage' available today, a huge swath of programming tasks are 'fast enough', even if implemented in something like Ruby or Python.)

I am making one assumption, though: That RAM keeps up. It would really suck to have 198 cores sitting idle, and the other two mostly just waiting for your RAM.

Presumably, as chips get faster, larger caches and more intelligent caching will become ever more important. Latency for main memory access really hasn't improved much from my first computer (Mac SE) to my current computer. Happily, though, the entire contents of my first computer's hard drive can now fit in 1% of my current computer's main memory, and the entire contents of my first computer's RAM easily fits within the on-chip cache.
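The memory-wall worry above can be put in rough numbers: DRAM latency has barely moved while clock rates exploded, so a cache miss costs ever more cycles. The figures below are assumed order-of-magnitude values for illustration, not measurements:

```python
# Assumed round numbers: ~8 MHz for a late-80s machine vs ~3 GHz today;
# main-memory latency ~150 ns then vs ~60 ns now.
def cycles_per_miss(clock_hz, latency_s):
    """CPU cycles wasted waiting out one main-memory access."""
    return clock_hz * latency_s

print(cycles_per_miss(8e6, 150e-9))   # ~1 cycle per miss back then
print(cycles_per_miss(3e9, 60e-9))    # ~180 cycles per miss today
```

Latency improved maybe 2-3x while clocks improved ~400x, which is exactly why on-chip caches keep growing.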

Semiconductors are always a matter of quantum effects. The doping needed to get the desired effects is going down to single atoms, which complicates things, and tunneling can certainly also be an issue, but it's not as if these devices rely on the world being essentially Newtonian.

One of the difficulties with a scanning technology like this is throughput -- with mask-based lithography you can expose dice with great speed, while something like this will have to scan across the entire surface of the wafer. It sounds like there's good potential for parallelization (the article mentions packing ~100k of these lenses onto the floating head), so this technology won't necessarily be as slow as electron-beam lithography, but I can't imagine it'll be cheap either. Furthermore, the software and hardware involved must be much more complex than a conventional stepper; now you've got to modulate your light-source very rapidly, rotate your wafer, and keep track of the write-head's position to sub-nanometer precision. Tool design and maintenance costs will be pretty high, I imagine.
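A back-of-the-envelope estimate shows why the ~100k-lens parallelism is the whole ballgame for a scanning tool. All numbers below except the 80 nm linewidth and 12 m/s scan speed (which come from the summary) are assumptions for illustration -- a naive raster over a 300 mm wafer with no overhead:

```python
# Naive raster-scan time for a 300 mm wafer, assuming each lens writes
# an 80 nm-pitch track at 12 m/s and the lenses share the area evenly.
wafer_area_m2 = 3.14159 * 0.15 ** 2          # ~0.0707 m^2
track_pitch_m = 80e-9
scan_speed_m_s = 12.0

total_track_length = wafer_area_m2 / track_pitch_m   # meters of line to write

for lenses in (1, 100_000):
    seconds = total_track_length / (scan_speed_m_s * lenses)
    print(f"{lenses:>7} lens(es): {seconds:12.2f} s")
# A single lens needs on the order of a day per wafer; 100k lenses
# bring the raw scan time under a second (ignoring all overhead).
```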

Modern 40/45nm and the upcoming 32nm chips need very short wavelengths to be produced.
This is expensive.

The new technique uses relatively long ultraviolet light wavelengths.

There's certainly a cost advantage to using longer-wavelength light for the exposure -- cheaper lamps, mirrors, and optics -- but there's also a tradeoff in device complexity: the added mechanical complexity is going to add a lot to the design and maintenance cost of these tools.

A conventional stepper performs a series of mechanical and optical alignments before exposing a die on the wafer, then steps to the next die to continue the process. A lithogra

Here's an idea: fuck quality control of chips; just make them able to work around faults in hardware/firmware/software (hello, Solaris). That way there will never be any duds, just slightly slower CPUs and slightly faster ones. Production costs ought to rocket down. Heck, if not self-repairing, we can make them adaptable.

It's not that the interconnect isn't there; it's just higher resistance (for instance).

The real question is how cheap. Current generation lithography systems have become ridiculously expensive. Preparing a mask for a 65nm process costs in excess of $2M. This makes short-run production at not-even-cutting-edge technology levels extremely expensive, and basically discourages smaller chipmakers from considering any niche applications that might require higher density.

Even if the production process is slower, if this can cut the initial preparation costs significantly, it co

I just had 2 fail over the weekend. I didn't lose anything vital because I had backups, but everything I considered non-essential is gone (mostly just lots of VMware images of various distros). At some point it becomes a bitch to manage so much data.


How old were they? I would have thought that drives young enough to be around that capacity would be nowhere near their MTBF*. Is this a reflection of a general decline in manufacturing standards? Are manufacturing standards decreasing with increased capacity? Or is there something else about these high capacity drives that reduces their reliability?

* Yes I understand that the M stands for mean and that some units fail earlier than most in order to make up that particular average. Still, a few years is

I bought the system in June, so they were around 5 months old. One had a bad block; I pulled it out, and when I powered back on, the other one started ticking. I have no idea what happened. Static discharge? One drive affecting the other? Who knows. Anyway, they're very well ventilated (2 large case fans sitting in front of the 4 drives). Their temp had never exceeded 40 degrees (usually around 28-32 when chugging along, and 25 at idle, with some variation depending on the weather). This machine is always on, though.

At the risk of veering off-topic like you were modded, I had a 1 TB (Seagate!) drive fail in the past week myself, one of a purchase of 3. In the process of RMAing it. Luckily, like you, I hadn't decided to trust any data solely to it yet; so nothing was lost. Still, that purchase more than doubled the data storage in this house, in other words, those three drives together can store more data than the 40+ other drives I have in and out of machines here. (Did I just set myself up for a burglary?:) Prob

Do they have a solution for controlling overlay error between processing layers to less than 1.25nm?

If the answer is no, this technology is dead in the water as far as IC fabrication goes. (but may have very useful applications in other nanotech fields)

As someone who works in litho, I enjoy reading about any advances in resolution, but I know that any advance in resolution must be accompanied by an even larger improvement in the far-from-trivial task of placing each of the 10 to 50+ patterns needed to build a