petralynn writes to tell us the New York Times is reporting that Stanford engineers have discovered a method to modulate a beam of laser light up to 100 billion times a second. The new technology apparently uses materials that are already in wide use throughout the semiconductor industry. From the article: "The vision here is that, with the much stronger physics, we can imagine large numbers - hundreds or even thousands - of optical connections off of chips," said David A.B. Miller, director of the Solid State and Photonics Laboratory at Stanford University. "Those large numbers could get rid of the bottlenecks of wiring, bottlenecks that are quite evident today and are one of the reasons the clock speeds on your desktop computer have not really been going up much in recent years."

I was just about to point out a fun trick of searching Google News for the article to read it without registering. Then I clicked on the link and realized that it doesn't require registration. How odd...

NYT registration required to read this John Markoff (infamous at Slashdot because of his "sensational" coverage of Kevin Mitnick) article... but fortunately, BugMeNot [bugmenot.com] comes to the rescue with username/password of "twernt/twernt".

This work was funded by Intel and DARPA, with some assistance from an HP researcher, and uses something called the Quantum-Confined Stark Effect [google.com], with its primary application in optical networking gear... but hey, maybe we'll see a 100 GHz PC in the not-too-distant future.

and are one of the reasons the clock speeds on your desktop computer have not really been going up much in recent years

This sounds silly to me, since desktop power (say, a $500 system, discounting monitor and keyboard) is increasing exponentially, doubling every two years at a constant price. The machine I built this spring was twice as powerful as a system I built in 2003 for the same money, eight times as powerful as a machine I built just six years ago, and about 128 times as powerful as the machine I had when I went to college in '92. And I am only considering pure clock speed, not increases in the efficiency of chips, growth of RAM and disk for the price, etc. While Moore's Law concerning silicon chips will start faltering as we approach 2020, I have been nothing but impressed with how desktop performance continues to improve.

These new laser improvements, and things like molecular computing, will help us continue on after the 2020 mark with our current exponential growth.

I disagree; the slow progress in PC speedups over the last couple of years has been disappointing. Things really started to fall off at about 3 GHz. 3 GHz was released in 2002(!), over two and a half years ago, and we still haven't hit 4 GHz; that says it all with respect to clock speed.

More efficient processors are only just closing in on 3 GHz... pretty bad when the P3 (also with reasonably good IPC) came out at 1 GHz *five years* ago.

Intel and AMD have clearly indicated that the good old days are over by introducing dual-core processors.

I am not referring only to pure clock cycles. I actually think the trend towards multi-core is a good thing. First, in modern computing environments where multiple threads are running, multi-core systems should prove very effective. Second, multi-core systems will use less power than a single-core system with the same total processing power. This is simple EE: power consumption grows only linearly with core count, but faster than linearly (roughly as the square or cube) with clock frequency. It does not matter how the work is technically getting done.
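To put rough numbers on that EE point, here's a back-of-the-envelope sketch in Python. The f-cubed scaling (dynamic power P = C*V^2*f, with supply voltage rising roughly in proportion to frequency) and the baseline wattage are illustrative assumptions, not measured figures:

# Rough model: per-core dynamic power grows roughly as f^3
# (P = C * V^2 * f, with V scaling with f), while adding cores
# only scales power linearly.
def dynamic_power_watts(cores, freq_ghz, base_freq_ghz=1.0, base_watts=10.0):
    return cores * base_watts * (freq_ghz / base_freq_ghz) ** 3

print(dynamic_power_watts(1, 2.0))  # one core at 2 GHz  -> 80.0 W
print(dynamic_power_watts(2, 1.0))  # two cores at 1 GHz -> 20.0 W

Same nominal throughput, roughly a quarter of the power; that's the argument for multi-core in one inequality.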

Who cares? They're more efficient. They don't need to run at 3 GHz to be faster than the old stuff. Just because the clock speed isn't there yet doesn't mean the performance hasn't gone up. Look how many times AMD has pulled ahead of Intel in performance, and they've never even shipped a 3 GHz CPU. The only thing that has fallen off is the power of Intel's old marketing. The only reason there's a 3 GHz number to "catch up to" is that so much of Intel's marketing was built around raw clock speed.

I'm not so sure about that. Speeds may have only increased from 3 GHz to 3.8 GHz in the last year or two, but that's still a respectable 27% increase if it happened over one year. Of course you are right that the biggest increases have been the multi-core and 32-to-64-bit architecture changes. Clock speed may not be doubling every 18 months, but a 27% annual increase is nothing to sneeze at either.
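For anyone who wants to check the arithmetic, a two-line sketch (the only inputs are the clock speeds quoted above):

# 3.0 GHz -> 3.8 GHz
start, end = 3.0, 3.8
print(f"one-year reading:    {(end / start - 1) * 100:.1f}%")            # 26.7%
print(f"two-year annualized: {((end / start) ** 0.5 - 1) * 100:.1f}%")   # 12.5%

So the 27% figure holds only if the jump really took one year; over two years it annualizes to about 12.5%.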

Moore's Law has nothing to do with clock speed, I think. If I remember correctly, it states that the number of transistors on a chip will double every 18 months. Improved clock speeds are just a side effect.

Actually, clock speeds really have done just as much in the last two to three years, but let me explain in more detail exactly what my point is.

I only use Moore's Law as a side note here, so don't let that old argument take you away from my real point. I am talking about the PC you can buy for a fixed price, $500 in my case. I wish my scanner were not borked, or I would scan in a chart I have sitting right in front of me showing that this trend of being able to purchase twice the machine for the same price every couple of years is real.

My question is this: what would you rather have, a 100 MHz 486 with broadband, or the latest dual-core computer with dial-up? I know I would choose the 486 with broadband. Computers are faster, but it is only the increase in the speed of communications that is making them more interesting. Please give an example of a program that helps the common man, runs on today's computers, and would not run on a 486. Games are entertainment, and I consider them little help to the common man.

You have a valid point of your own. The growth of the Internet and the growth of bandwidth both follow the same model I discussed. I am not making an argument against the increase in bandwidth in my previous comments, so I am not sure how to take your comment. A more generalized version of the point I was making extends to bandwidth as well.

To address your question about benefits to mankind, I think you will soon find that a number of huge questions will be answered via distributed computing. Projects like SETI's screensaver are an early example.

I once paid almost $500 for a scanner; now Dell gives them away if you pay shipping and ink, or vice versa. Basically, you can easily buy a printer/scanner today for what you'd have paid for shipping alone a few years ago.

Clock speed was advancing more rapidly than process shrinks; now, it's probably going to be a small factor on top of the process shrink improvements.

We probably have at least 3 more generations of process shrink before any sort of "wall", and quite possibly several beyond that. The SIA roadmap isn't mostly red until after 2011, and many of those problems are solvable in less than the 6 years we have left.

A week or so ago, I mentioned decommissioning analog and digital TV broadcast spectrum to use for more wireless data. I mentioned how fiber was just one serendipitous discovery away from massive data rates. I was shunned, since "everyone knows" there are limits to light.

While this may not be THE discovery I was alluding to, it proves that the door surely isn't closed.

While science can find use in this discovery, I'm more interested in profitable consumer uses. What are the possibilities there?

It describes what I believe is the same breakthrough in considerable detail. The Big Deal is that lasers can now be made from standard CMOS silicon fab processes, meaning you can integrate the lasers and optoelectronics directly into the chip without needing radically new chip fab techniques. Really interesting stuff!

The problem isn't bandwidth, it's cost, getting those high data rates on and off the fiber at a reasonable price. Wavelength division multiplexing can be used to attain insanely high data rates, if you have enough money.

I've been wanting to know for some time: is there a material that can switch from transparent to reflective? It would need to be pretty fast (or slow, if you could also slow down the speed of light, which I have read somewhere can be done).

I only took three materials science classes in undergrad, so this won't be a full answer, but it might get you started on the right track.

I recall that some crystalline materials exhibit very different refractive and reflective properties when put under mechanical strain. Materials that do this with electricity instead are how we make accelerometers these days. So a crystal that either transmits the light or refracts it off into a random direction depending on strain may be what you're looking for. No clue what the switching speed would be, though.

Nope, that won't do it! I need something that can reflect normally, or be transparent. Although I suppose being able to reflect normally, or reflect at a slightly different angle, would also work (instead of being transparent). It needs to be pretty precise.

Yeah, I know about MEMS and DLP devices, but I want something solid state.

Heat from absorption of an optical signal (at the levels used in communications) is negligible. First, the signal powers are very low, measured in milliwatts, and the absorption per meter in fiber is incredibly small. (I can't remember the actual number, but you need to go on the order of ten miles before you even lose half your light.) As you can imagine, dissipating a few mW over a hundred miles doesn't generate any appreciable heat.
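A quick sanity check on that "half your light" figure, assuming the commonly quoted ~0.2 dB/km loss for single-mode fiber at 1550 nm:

import math
loss_db_per_km = 0.2                  # typical single-mode figure at 1550 nm
half_power_db = 10 * math.log10(2)    # ~3.01 dB = half the optical power
km = half_power_db / loss_db_per_km
print(f"~{km:.0f} km (~{km * 0.621:.0f} miles) to lose half the light")  # ~15 km, ~9 miles

So roughly ten miles per 3 dB, which is why milliwatt-level heating spread over such distances is unmeasurable.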

Not sure if this is exactly what you are looking for, but acousto-optic modulators use acoustic waves to change the refractive properties and diffract the incoming light to a known, specific angle. So by sending a pulse through the material, the beam changes angle, and you can then reflect this part of the beam back.

Interesting, thank you. But if I understand what you are saying, you are talking about a semiconductor that is either transparent or absorbing, not reflective. I need something that can literally be like a mirror in one state and like glass in the other. Alternately, it could be reflective in one state like a normal mirror, and reflect in a different, predictable direction in the other state. Although I imagine if such a material were known, they would be using it instead of MEMS in DLP devices already.

If I remember correctly, QCSE uses excitons to absorb light. What is the wavelength of the exciton absorption in SiGe? If it's significantly different from the 1.3-1.5 micron range, then this is a short-haul play, like inside a box. In any case, 100 Gb/s is generally fragile stuff over long distances anyway, so it's highly unlikely that this is part of some global supercomputer, as the article suggests.

That's OK, though. This might be great stuff for optical interconnection buses.

I'm reading the actual Nature article now (Vol. 437, 27 October 2005, doi:10.1038/nature04204; refer here [nature.com] for those who have access). The structure they have built is a multilayer of Si and SiGe (10 nm Si and 16 nm SiGe, repeated ten times). You are correct that there are exciton peaks in the range of 1.3 to 1.5 microns. Specifically, they state:

Clear quantum confinement is seen, with strong exciton peaks that we assign to electron-to-heavy-hole (e-hh; ~0.88 eV at 0 V) and electron-to-light-hole (e-lh) transitions...
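Converting that quoted 0.88 eV peak to a wavelength (lambda = hc/E, roughly 1240 nm*eV / E) confirms the range:

e_hh_ev = 0.88                        # e-hh exciton peak at 0 V, from the quote above
print(f"{1239.84 / e_hh_ev:.0f} nm")  # ~1409 nm, i.e. ~1.4 microns

That lands between 1.3 and 1.5 microns, though, as another poster notes below, 1.4 microns sits right around a water absorption peak.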

...by reducing the cost of fast switching. There's plenty of dark fiber http://en.wikipedia.org/wiki/Dark_fiber [wikipedia.org] out there for anyone who can afford the hardware, and this may take OC12 fiber cards from ~$6000 US down to a couple of hundred.

At the very least, it will make it possible for gigabit Ethernet switches to use an optical brain to handle much larger total loads, likely at lower cost. (No, I don't know if this is cheaper to make, but I figure the low-grade parts that don't run at full speed will be sold off cheaply.)

Like most unreleased technologies, I am skeptical of this one. Many research groups publicize the possible miracles their technology could enable while downplaying its downsides, in order to attract more research dollars. This sounds like publicity from a research group looking for more funding. In that respect, I think it is working.

I'll stick to journal articles to see if the technology actually works, though.

RTFA:"He acknowledged, however, that there is a significant gap between research results and commercial availability of devices based on scientific breakthroughs.
Other designers working in the field were also cautious about direct applications of the technology. Alex Dickenson, chief executive of Luxtera, a Carlsbad, Calif. start-up firm that announced a 10-billion bit per second optical modulator using a different silicon-based approach earlier this year, said that he believed there would significant hur

This tech will mean a new opportunity for a new kind of "PCB" maker. Circuit boards with embedded optical traces will replace (or be layered onto) traditional electronic circuit boards. New optical chip-to-board interconnects will also become a new, growing business. I know that people do make all-optical circuits (I've seen these at Lucent's museum in NJ), but it looks like the current tech is very expensive (etched channels in a sliced wafer).

The first company to develop a low-cost, high-quality tech for "printing" optical traces will make a mint once these interconnects become common. I'd bet that the ultimate technology will be a sandwich of resins with etched channels and vapor-deposited reflective layers, walls, and corners (or a high-index resin filling). For most applications, the optical interconnect can be single-layer, because crossing light beams don't interact, so two traces/channels can cross each other without interference.

Inventions like this one are a great start. But until they find a way to make cheap circuits to route optical connections on a board, this tech won't see widespread adoption.

That was all very interesting lay speculation, and I do wish I could take you up on your bet.

See, no one is going to "print" optical traces, unless you consider gluing fiber to a board "printing." Fiber optic cables are cheaper than PCB by a long shot, which is why they are used for optical interconnect now, and will be for the foreseeable future.

Yes, fiber is cheap for point-to-point routings, but I doubt it scales well. What happens when a motherboard becomes 100% optical interconnect, with virtually every chip and attached device using optics to communicate? Optical connections would run from the CPU (maybe each core of the CPU) to the memory controller, cache, main memory banks (perhaps one fiber per optically connected RAM card), I/O controllers, mass storage devices, I/O ports, expansion bus slots (again, one fiber per slot), etc. A single motherboard could need dozens of hand-routed fibers.

Not quite. You are only interested in the bandwidth (expressed in Hz). If you can turn that light on and off 100 times per second, you can also do it at 10 times per second (but maybe not at 1000). In other words, you can modulate it with a bandwidth of 100 Hz, so it makes sense to call this "100 Hz" and not "100 times per second." IMHO no engineer would ever write it the long way, because it takes some time to figure out how many GHz "100 billion times per second" represents (especially for non-native speakers).

Somewhere between the lab and the press release things got overstated. Since my PhD is in silicon-based optoelectronics, I am familiar with this kind of work. A few thoughts crossed my mind after reading the paper.

What these guys have found is a physical effect that possibly could lead to fast modulation of light. Neglected in the press release are a few fairly important issues:

They haven't demonstrated any time-resolved optical effect, and are inferring it strictly from what might be possible. I have no doubt they can modulate, but the operational speeds are still guesstimates.

The effect that was demonstrated is not within the 1550 nm wavelength window used for telecom traffic. Their current work shows the effect right in the middle of an H2O absorption peak. Can the effect be shifted? Probably, but these sorts of things are always more work than expected.

From a practical standpoint, other Quantum-Confined Stark Effect devices often show a strong sensitivity to the polarization of the input light. Ensuring a known input polarization is a major problem right now in optoelectronics. Lord knows it was (still is, actually) a major hassle in my research.

This device is not quite as CMOS compatible as might be hoped. Building strained germanium quantum wells on a silicon substrate requires depositing atoms layer by layer, and is a slow process. Process throughput will no doubt be an issue.

All that being said, this is still very exciting. It is a new physical effect demonstrated in a silicon-based material, and a physical effect that has been used elsewhere to do useful things. Hopefully a real modulation device will come along shortly.

Your points all seem valid, except for the polarization one. I think most modulators are polarization sensitive; you just polarize the input and accept the losses. In the case of a modulator attached to the laser (the usual case), the laser output is already polarized.

"Those large numbers could get rid of the bottlenecks of wiring, bottlenecks that are quite evident today and are one of the reasons the clock speeds on your desktop computer have not really been going up much in recent years."

I'm pretty sure the wiring "bottleneck" has, uh, absolutely nothing to do with why clock speeds haven't been going up. CPUs can run at whatever speed they like, independent of the bus (well, an arbitrary multiplier of the bus, so not independent strictly speaking). The problem is power and heat dissipation.

This has nothing to do with CPU speed, but rather the bus speed that connects the CPU to other components. The last "major" upgrade on a common bus was increasing the PCI frequency from 33 MHz to 66 MHz... and that took 10 years to accomplish, not the 18-month doubling of "Moore's Law" that everybody talks about. Even PCI-X is an "older technology" by many standards. And think about that too: if the bandwidth going to a peripheral card is limited by the fundamental bus architecture, why should peripheral designers bother building faster cards?
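For context on what those bus clocks mean in bytes, a tiny sketch (peak theoretical rates, ignoring protocol overhead):

def pci_peak_mb_per_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz     # 10^6 bytes per second, peak

print(pci_peak_mb_per_s(32, 33.33))   # classic PCI:  ~133 MB/s
print(pci_peak_mb_per_s(32, 66.66))   # 66 MHz PCI:   ~267 MB/s
print(pci_peak_mb_per_s(64, 133.33))  # PCI-X 133:   ~1067 MB/s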

I hate all the people that post that without knowing shit about it. As this applies to optics and not semiconductors, it really doesn't have anything to do with Moore's Law.

From the article:

Several industry executives said the advance was significant because it meant that optical data networks were now on the same Moore's Law curve of increasing performance and falling cost that has driven the computer industry for the past four decades.

It's not all that accurately worded, but it is relevant; the lack of accuracy is likely due to trying to keep the comment short. In any case, while Moore's Law is specific to transistor-based circuitry, the pattern is applicable to other technologies, such as Kryder's Law, which covers rigid magnetic media (hard drives). In fact, looking at these cases together suggests a more abstract pattern across fields of technology. After all, the component technologies with which Moore worked when he made his original observation were themselves new at the time.

Yeah, but the hope is that in 33 years, we have something newer. I mean, by then, we'll hopefully have three-dimensional chips, or quantum stuff, or something we haven't even thought up yet. And I'll be reading this article's clone on Slashdot, and we'll have the exact same discussion. Except I'll have taken over the world by then.

The speed of the electrons is on the order of cm/s, and is related to the current density.

Slightly more correctly, the drift velocity of electrons in standard copper cable is on the order of (tens of) cm/s at most. The instantaneous velocity of individual electrons, which bounce around between collisions, is far higher (the Fermi velocity in copper is on the order of 10^6 m/s), and drift velocities can reach roughly 10^7 m/s in some media, such as electron beams in a vacuum.

The speed of electricity in a wire is not really the issue (it's about half the speed of light, I think; I'm sure someone will correct me). The real issue is signal propagation. When a transistor switches from closed to open or back, the electrical signal travelling through the wire is not a perfect on/off. The voltage ramps up or down as some function of the length of the connection, the width of the wire, conductivity, leakage from the transistor, inductance, and so on. The system needs a bit of time to "settle" into the new high or low state. This is a big limiting factor in the clocking of modern CPUs. For communication off the chip, it's far worse: the lines are no longer 90 nm (or whatever the chip was made at) in width, and have to cover a far longer distance. That's why today's processors are limited to around 1 GHz to the outside world, while internally they can run faster.

Optical interconnects alleviate many of these problems. With a laser, the ramp-up time is significantly shorter, there's no capacitance in the system, and it is far less prone to interference. So on a 100 GHz optical link you can multiplex 100 1-GHz pins (essentially running a P4's FSB on two wires instead of something like 180), thereby significantly reducing the pin count. Or you could run the pins 100 times as fast, meaning much less processor waiting on RAM or bus data.
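As a toy illustration of that multiplexing idea (the framing here is made up, not any real link protocol), interleaving 100 slow channels onto one fast serial stream is just round-robin bit-slicing:

# Interleave bits from 100 slow channels onto one fast link, and undo it.
def mux(channels):
    return [bit for frame in zip(*channels) for bit in frame]

def demux(stream, n_channels):
    return [stream[i::n_channels] for i in range(n_channels)]

channels = [[(ch + t) % 2 for t in range(4)] for ch in range(100)]
stream = mux(channels)
assert demux(stream, 100) == channels    # lossless round trip
print(f"{len(stream)} bits serialized from 100 channels x 4 bits each")

The fast link just has to toggle 100 times faster than any one channel, which is exactly what a 100 GHz modulator buys you over 1 GHz pins.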

Yeah, that's not true. I don't know how fast an electron moves (I'm assuming not the speed of light, since they have mass, and quantum physics I know little about probably comes into play), but in a normal conductor they don't move very far before slamming into something. Individual electrons don't move that far or fast on their own; it's the aggregate motion and the resulting field that really move.

But that's not really the problem. Transmit time is still quite low (I've heard 1 ns per 6 inches of trace on a board). Latency isn't really the problem. The problem is: how fast can you change the signal? That's bandwidth. Here electrical conductors suffer from parasitic capacitance and inductance, skin effect, reflections, induced currents from nearby conductors, and a whole host of other signal-integrity issues. It gets worse the longer the channel is and the more things you have connected to it. If you're wondering why the MP Pentium 4s have been on a 100 MHz QDR front-side bus since they were released, this is why. It's also why even point-to-point interconnects like AMD's have only recently broken 1 GHz.

Optics don't really have this issue. Two fiber optic cables next to each other don't interfere with each other. You don't have to overcome the capacitance of the channel to change from one value to the next. You just send photons of one frequency, and then switch to the next. How fast you can switch determines how much bandwidth you get.

Alright, I'm not really liking this explanation anymore. To just directly answer your question: the advantage is 100 GHz interconnect in a way that could potentially be built into chips.

Nothing moves at the speed of light (in a vacuum), because the energy required to accelerate a massive particle grows without bound as it approaches c. Current is a measure of the speed of electrons in a conductor if the number of free electrons per unit volume is known: since that carrier density is essentially constant, current is proportional to the electrons' drift speed (I = n*q*v*A for a wire of cross-sectional area A).
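To make that concrete, here's the standard worked example with textbook numbers for copper (I = n*q*v*A, rearranged for v):

n = 8.5e28        # free electrons per cubic meter in copper (textbook value)
q = 1.602e-19     # electron charge in coulombs
area = 1e-6       # 1 mm^2 cross-section, in m^2
current = 1.0     # amps

v_drift = current / (n * q * area)
print(f"drift velocity: {v_drift * 1000:.3f} mm/s")   # ~0.073 mm/s

So at ordinary current densities the electrons themselves crawl along at a fraction of a millimeter per second, even though the signal moves at an appreciable fraction of c.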

My high school physics teacher would go on rants all the time about what actually moved down the line, the "holes" or the electrons themselves. His name was Dr. Troy Soos, and he worked for Los Alamos for a while. Then he decided to write baseball murder mysteries (apparently that makes more money than being a research scientist).

The speed at which an electric signal propagates in a transmission line is somewhat less than c. The value of 0.1c in a sibling post is a good rule of thumb. Think of your transmission line as a bunch of inductors in series and a bunch of capacitors in parallel (imagine a ladder with inductor legs and capacitor rungs). At each step along the way you need to charge up the capacitor before current will move to the next inductor, where your current will charge up the magnetic flux, and then on to the next cap, etc.

You can build what's called an "artificial transmission line" in just such a manner. It simulates the effect of a much longer pair of wires for lab purposes.
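A numeric sketch of such a ladder (component values invented for illustration): per-section delay is sqrt(L*C), and the characteristic impedance is sqrt(L/C):

import math
L = 1e-6      # 1 uH series inductor per section (illustrative)
C = 100e-12   # 100 pF shunt capacitor per section (illustrative)
sections = 20
print(f"delay: {sections * math.sqrt(L * C) * 1e9:.0f} ns, Z0 = {math.sqrt(L / C):.0f} ohms")
# -> 200 ns of simulated "cable" and a 100 ohm line from a handful of parts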

First off, the electron velocity in wire is much less than the propagation velocity through the same wire.

Now for the fun part - What is the velocity of propagation?

For frequencies where the inductive reactance of the conductor is significantly larger than its resistance at that frequency (think skin effect), the velocity of propagation is c divided by the square root of the effective relative dielectric constant. This is often referred to as an LC transmission line, since propagation is dominated by the series inductance and shunt capacitance. LC lines have a propagation velocity independent of frequency (at least to first order). As an example, coaxial cable with a solid polyethylene dielectric will have a propagation velocity of 0.66c, which is valid from a few hundred kHz to several GHz.
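The polyethylene example in numbers (eps_r of about 2.25 for solid polyethylene is the standard figure):

c = 299_792_458            # m/s
eps_r = 2.25               # solid polyethylene dielectric
print(f"v = {1 / eps_r ** 0.5:.2f} c")   # -> 0.67c, the familiar coax figure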

When the conductor resistance is greater than the inductive reactance, the line becomes an RC line, where the "propagation velocity" is dependent on frequency (dispersive) and the time for a transition to propagate is proportional to the square of the line length. The effective "propagation velocity" is going to be a lot less than c. It turns out that the interconnects on chips are RC lines, and it is often necessary to insert inverters along a line to speed things up (recall that propagation time varies with the square of the line length); a good rule of thumb is to space the inverters so that the propagation delay equals the gate delay.
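A sketch of why the repeaters help, using an Elmore-style estimate of delay as half of R_total times C_total (all per-unit values below are invented, not from a real process):

r_per_um = 1.0        # wire resistance, ohms per micron (illustrative)
c_per_um = 2e-16      # wire capacitance, farads per micron (illustrative)
gate_delay = 5e-12    # assumed inverter delay: 5 ps

def rc_delay(length_um):
    return 0.5 * (r_per_um * length_um) * (c_per_um * length_um)

length, k = 1000.0, 4   # 1 mm global wire, split into 4 buffered segments
print(f"unrepeated: {rc_delay(length) * 1e12:.0f} ps")                                   # 100 ps
print(f"repeated:   {(k * rc_delay(length / k) + (k - 1) * gate_delay) * 1e12:.0f} ps")  # 40 ps

Cutting the wire into k segments cuts each segment's quadratic term by k squared, so even after paying k-1 gate delays you come out well ahead.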

The RC problem is why loading coils were put on phone lines: the inductive reactance of the coils is larger than the resistance, and the line becomes an LC line. The loading coils are bad news for DSL, though; an unloaded line already looks like an LC line at the frequencies used by DSL modems.

A good reference for this is High-Speed Digital Design: A Handbook of Black Magic by Johnson and Graham.

Well, on that scale, the wire acts as a capacitor. You have to put x amount of charge into the wire before you reach your desired voltage on the far end. Ten years ago, the delay associated with charging a global wire (something on the order of a few millimeters long) was insignificant compared to the delay of charging a single logic gate. Today, the delay of that same wire is several times that of a single logic gate. In a 45 nm process it might be greater than an entire stage of logic (15-20 gate delays).

It helps to think of the wires as capacitors, which take a while to charge up to a trip voltage. Presently CPUs are trying to flip bits about as fast as the wires can be charged, so designers are making the wires smaller to charge faster and run cooler. Smaller wires mean more bad chips and more expensive CPUs; using light for signaling gets around big parts of the problem.
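In the capacitor picture, the time to reach a logic trip point follows directly from the RC charging curve, t = RC * ln(Vdd / (Vdd - Vtrip)). With made-up but plausible values:

import math
R, C = 100.0, 1e-12      # 100 ohm driver, 1 pF of wire (illustrative)
vdd, vtrip = 1.2, 0.6    # trip at half the supply voltage
print(f"{R * C * math.log(vdd / (vdd - vtrip)) * 1e12:.0f} ps to trip")   # ~69 ps

Shrink C (thinner, shorter wires) and the trip time falls proportionally, which is exactly the trade-off described above.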

You need to differentiate between the drift speed of the particular electrons (this can be quite slow, especially in AC) and the speed of propagation of energy, which if I recall is quite fast (near c, but not at it; granted, 1/10 of c is still astoundingly fast, so my poor memory of freshman physics may not contradict you, though I think your guess is off). The real advantage is that the switching speed is far beyond what we can do with current metal/electron-based circuits (RTFA). Additionally, this is big because it uses materials already in wide use in the semiconductor industry.

When sending signals electronically, you're not really moving electrons; it's more like shaking them. Electrons don't actually travel through the wire as a net current. The way signals propagate in an electrical transmission line is actually as an electromagnetic wave, with some motion of electrons in local currents. It's just a difference in frequency, that's all. I think RF waves (the electrical signals you're talking about) generally travel at around half the speed of light in vacuum. On the other hand, that's not all that much slower than light in an optical fiber, which is also well below c.

Quantum computers are great, in theory, but even if we figure out how to build one that actually works, they are only suited to solving certain types of problems. Our present understanding of quantum physics tells us that you can't design a quantum computer that can do all the same things as a generic Intel/AMD CPU (i.e., run Windows, play Counter-Strike, etc.).

That being said, the problems that can be solved by quantum computers tend to be the ones that would take a regular CPU until the end of the universe to perform (breaking strong encryption, large traveling-salesman problems, etc.). At some point, if we can make a quantum computer compact enough, we might end up having quantum co-processors built into our PCs, but we'll probably never see the CPU of our PC replaced by a quantum computer.

The tech being discussed in the article would be directly applicable to making generic PCs run faster (though it could also have the potential to improve communication speeds with a hypothetical quantum computer as well). Another tech that will probably be leveraged to make generic systems faster is the replacement of silicon in computer chips with diamond. Since diamond can handle vastly higher temperatures than silicon without breaking down, it is theoretically possible to push the clock speed on a diamond-based CPU much higher than on today's silicon CPUs.

This is an important point that is frequently overlooked: quantum computers will not speed up traditional computing; they will just let us solve classes of problems that are, at the moment, intractable.

I will admit, I'm not a physicist. However, while I was studying Computer Science at college, I had two occasions to study quantum computers. One was a guest professor in my parallel computing class who covered how software would be developed for a hypothetical quantum computer, and the other was a colloquium run by my CS department with another guest speaker discussing almost the same topic. The question of playing Doom on a quantum computer came up both times, and both speakers agreed that quantum computers are not suited to that kind of general-purpose computing.

Innovation?! C'mon! This is a culture in which people really do use words like "synergy" and "value-added" with straight faces! I know; I've worked with them!

Each time I've worked in a corporate environment, I've been thoroughly appalled. People don't pursue good ideas! Rather, they make sure that they have all the right "check marks" on their "report cards." At the last place I worked, there were so many half-assed, useless projects lying around, wasting everyone's time and money.