alphadogg writes to mention that Sun is attempting to move from the typical design of multiple small chips back to a unified single-wafer design. "The company is announcing today a $44 million contract from the Pentagon to explore replacing the wires between computer chips with laser beams. The technology, part of a field of computer science known as silicon photonics, would eradicate the most daunting bottleneck facing today's supercomputer designers: moving information rapidly to solve problems that require hundreds or thousands of processors."

Weird. The new discussion display system works just fine on my computer, and in fact on any computer that I've ever tested it on, running either Firefox or (shudder) IE. I keep hearing people bitch about how it takes forever to load, crashes their browser, and all sorts of other crap, but I've never seen it, and I've surfed /. on a very broad range of computer hardware.

Maybe your computer is infected with spyware, or something. Or maybe you've got a browser extension that screws something up.

I don't like the commenting system; it's quite slow and very annoying to use. Usually I don't log in, but I guess this is a good incentive :-)
I do hope that they will enhance it, perhaps even provide an option to disable it without logging in.

Cypress Semiconductor has already figured this out with their tech. Check out Silicon Light Machines: T. J. Rodgers picked up a Cypress Semiconductor alumnus in that acquisition, and all Sun needs to do is work with CY.

I wonder if the time saved transmitting information via light is offset by the conversion time used to translate it back into electric signals. On a single board, the distance travelled is on the order of decimeters. On a chip, micrometers. Are the time savings *that* significant? Even between peripherals, the time saved seems negligible.

I am not an expert in electricity by any means, but I have a fundamental understanding of it (or so I think). Energy is energy. With no resistance (don't overlook this point), whether light is traveling via laser or electrons are flowing over a wire, the speed would be the same. Now, in reality, there IS resistance... there is always a "friction" or resistance (ohms) when energy is passing over a wire. In a vacuum, a laser will move as fast as energy can possibly travel. At least on paper.

The electricity-water analogy saves the day again: If you have a pipe filled with water you could push on one side of the pipe, and at the other end of the pipe a person would very quickly notice the increase in pressure, as water would start flowing out of his side. This doesn't mean that a molecule of water in the pipe would move far at all.

(It's actually to do with electric fields, which do travel at the speed of light, but the water analogy works well)
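To put rough numbers on the field-vs-electron distinction, here is a quick sketch. All values below are assumed, textbook-style figures (1 A through a 1 mm^2 copper wire), not numbers from the article:

```python
# Rough comparison of signal propagation speed vs. electron drift speed
# in a copper wire. The signal (the electric field) moves at a sizable
# fraction of c; the electrons themselves barely crawl.

ELEMENTARY_CHARGE = 1.602e-19   # coulombs
ELECTRON_DENSITY_CU = 8.5e28    # free electrons per m^3 in copper (textbook value)
SPEED_OF_LIGHT = 3.0e8          # m/s

def drift_velocity(current_amps, wire_area_m2):
    """Average electron drift speed: v = I / (n * q * A)."""
    return current_amps / (ELECTRON_DENSITY_CU * ELEMENTARY_CHARGE * wire_area_m2)

# Assume 1 A through a wire with a 1 mm^2 cross-section.
v_drift = drift_velocity(1.0, 1e-6)
v_signal = 0.5 * SPEED_OF_LIGHT  # assume the field propagates at roughly half c

print(f"electron drift: {v_drift:.2e} m/s")  # on the order of 1e-4 m/s
print(f"signal speed:   {v_signal:.2e} m/s")
```

The drift speed comes out around a tenth of a millimeter per second, roughly twelve orders of magnitude slower than the field propagation, which is the point of the water-pipe analogy.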

No, but it depends on whether or not the receiver is current-steered or voltage-steered. If it's voltage-steered, then it's the propagation of the electric field that carries the signal, in which case it can be near the speed of light. Also, future chip-to-chip interconnects seem to be moving towards transmission lines rather than treating circuit paths like bulk interconnects. Wave-pipelining the signal will mean that data transfer rates will not be hindered by the time it takes a voltage swing from tran

When you look at a wire or printed trace on a PCB, it is not the resistance that limits how fast you can send a signal. It is inductance and capacitance, which act like a low-pass filter. We don't care how fast electrons travel in the wire; what we care about is how fast we can change the voltage on the wire. We send data by changing voltages, not by sending electrons.

And when you look at a PCB, it's not just the speed of the signal that determines the time it takes, it's also the distance it travels. Wires on a PCB can only cross by being at different heights (expensive) so it is common to route signals indirectly, which increases their distance quite a lot. When you have 64 wires coming from your RAM chips, and needing to get to your CPU, this sort of thing adds up quickly. Beams of light, in contrast, can cross without interfering with each other.

Very good. The other major issues are interference (e.g. capacitive interaction between lines, etc.) and sheer bandwidth -- you can modulate your carrier with plenty of other frequencies (referred to as wavelengths in the optical domain and frequencies in the audio/radio domain -- go figure).

I think the photons strike a really small sort of solar panel where the burst of light turns instantly into a burst of electricity. So there's no digital translation by a chip necessary. Of course you lose a lot of power converting it like that cuz let's say the solar sensor is 50% energy efficient, well you have to use 2x the electricity in the first place to get the desired 1x electricity at the end. So these chips are gonna be fast but they'll suck up energy faster than me eating 50% Walgreens Cocoa P

You're on the right track, but you're not quite there. Solar panels are more or less arrays of photodiodes. AFAIK most fiber systems use PIN photodiodes to convert the light intensity over a specific band of wavelengths in a fiber into electrical current. Note that I said current, not voltage. Typically a transimpedance amplifier and some kind of comparator circuit is then used to measure the intensity of the signal. The PIN diodes can convert very small quantities of light to very small currents, and tra
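As a sketch of the receiver chain the parent describes: a PIN photodiode turns optical power into current (I = responsivity x power), and a transimpedance amplifier turns that current into a voltage. The responsivity and feedback resistance below are assumed illustrative values, not figures from any real part:

```python
# Photodiode -> transimpedance amplifier chain, idealized.
# All component values are illustrative assumptions.

def photodiode_current(optical_power_w, responsivity_a_per_w=0.8):
    """PIN photodiode: output current = responsivity * optical power."""
    return responsivity_a_per_w * optical_power_w

def tia_output_voltage(current_a, feedback_ohms=10_000):
    """Ideal transimpedance amplifier: V_out = I_in * R_feedback."""
    return current_a * feedback_ohms

# Assume 100 microwatts of light reaches the diode.
i = photodiode_current(100e-6)  # 80 microamps
v = tia_output_voltage(i)       # 0.8 V
print(f"{i * 1e6:.1f} uA -> {v:.2f} V")
```

This is why the parent stresses "current, not voltage": the diode output is tiny, and the amplifier's feedback resistor sets how much voltage swing a comparator downstream gets to work with.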

The article doesn't make it clear whether using optical communications is intended to reduce latency or increase bandwidth.

With respect to latency: the electrical signals travel at ~30% the speed of light, whereas the optical signals travel at ~70% the speed of light (it depends on refractive index, etc.). Over the distances we're talking about (as you said, mm to dm), that's only fractions of a nanosecond of delay savings [google.com]. This is on the order of a modern computer's switching time [google.com]. All this complexity to get rid of one or two processor cycles of latency?
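Those back-of-the-envelope numbers check out. A sketch using the ~30%/~70% figures above and an assumed 10 cm trace length:

```python
# Latency difference between electrical (~0.3c) and optical (~0.7c)
# signalling over a short on-board distance. The speed fractions are
# the parent's estimates; the 10 cm distance is an assumption.

C = 3.0e8  # speed of light in vacuum, m/s

def transit_time(distance_m, fraction_of_c):
    """Time for a signal to cross the given distance at a fraction of c."""
    return distance_m / (fraction_of_c * C)

distance = 0.1  # 10 cm, a generous board-level trace
t_electrical = transit_time(distance, 0.3)
t_optical = transit_time(distance, 0.7)
saving_ns = (t_electrical - t_optical) * 1e9

print(f"electrical: {t_electrical * 1e9:.2f} ns")  # ~1.11 ns
print(f"optical:    {t_optical * 1e9:.2f} ns")     # ~0.48 ns
print(f"saving:     {saving_ns:.2f} ns")           # ~0.63 ns
```

At a few GHz, that ~0.6 ns saving is indeed only a cycle or two, which supports the view that bandwidth, not latency, is the prize.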

I suspect instead they are looking to increase bandwidth. An optical fiber can carry very high data rates. Moreover, a single physical fiber can carry multiple simultaneous channels (e.g. different wavelengths of light). So the intention may instead be to create high-bandwidth links between various processors. Using on-chip lasers can make the entire assembly smaller and faster than the equivalent built from electrical wires.
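The multiplexing point is where the numbers get interesting. A toy illustration (channel count and per-channel rate below are assumptions, not figures from the article):

```python
# Wavelength-division multiplexing (WDM): each wavelength is an
# independent channel sharing the same physical fiber, so aggregate
# bandwidth scales with the number of wavelengths.

def wdm_aggregate_gbps(num_wavelengths, gbps_per_channel):
    """Total link bandwidth across all wavelength channels."""
    return num_wavelengths * gbps_per_channel

# Assume 64 wavelengths, each carrying 10 Gbit/s.
total = wdm_aggregate_gbps(64, 10)
print(f"{total} Gbit/s over a single fiber")  # 640 Gbit/s
```

Matching that with copper would take a wide parallel bus of carefully length-matched traces, which is exactly the routing headache described elsewhere in this thread.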

Really what they want, I think, is to implement the same kind of high-speed optical switching we use for transcontinental fiber-optics into a single computer or computer cluster. If you can put all the switching and multiplexing components directly onto the silicon chips, then you can have the best of both worlds: well-established silicon microchips that interface directly into well-understood high-speed optical switching systems.

Not to make bigger chips, but to solve the interconnect problem when you use a lot of small chips in a big package.

Although, even on-chip, at 1 cm^2 and above, optical conversion might be able to beat the reactance+buffering on a channel that crosses the whole chip, especially when a single physical channel might be able to carry 64 logical channels.

It's not a new idea, it's just one that needs to be revisited from time to time, to see if the optical tech is up to the job yet.

This idea is absolutely correct. It is all about bandwidth. If you have several chips on the same board and want to send data between them, you either use board traces or you build a custom package, but either way you have to use metal, and you hit a wall. Even if you cover the entire surface of your chip in solder bumps, you will never get as much bandwidth as you would like. Think about where the bottlenecks are in your computer... memory and IO. You want a faster supercomputer, well you need more processor

Since the article quotes a bandwidth, "billions of bits of data a second," rather than a latency, I think it's fairly obvious that Sun is attempting to increase bandwidth between sections of the processor.

It's not so much transit time, as parallelization where the big advantage is. Many frequencies of light can share the same medium without interfering with each other. Imagine many processors and memory chips streaming data to each other simultaneously, over the same backplane.

I don't think it's about the time it takes to transfer a single bit but the amount of bits that can be transmitted at once with light rather than wires. If we can talk line-of-sight transmission between boards, it's easy to line up an array of about a million emitters with an array of a million detectors and send back and forth the same amount of data you would need a couple thousand wires (taking translation times into account) to do.

Sun is a very entertaining company to watch. Even when their gizmos never end up in products, they are always cool.

The second problem is that as the clock speed of these connections becomes faster, synchronisation becomes a problem. While CPUs are running at GHz frequencies, the system bus is still running in the hundreds of

Commentary on this, from an actual EE, not the pretend ones on Slashdot (you know who you are)?

Just look up any of the countless other "use light instead of wires" stories that have been widely reported over the past decade(s). I'm not saying it's not going to happen — I'm sure at some point it will — but barring additional information, preferably actual accomplishments, this is just more of the same.

To get you started, here's a search for you [google.ca]. It looks like IBM is only promising a 100-fold performance increase, but Sun got the contract (despite the possibly inaccurate story, it doesn't sound like they actually figured out anything thus far, besides "how to get some government loot") by promising a 1000x increase.

Hey DARPA — I'll give you a 1,000,000x improvement! Email and I'll tell you where to send the cash.

The article claims it will reduce energy usage. It's much faster, so it saves time. And because time is money, it also saves money. I'm going to make a wild guess that it'll be more expensive to manufacture, because wires and solder are very, very easy to put down.

If the "lasers" require an electrical signal to be generated, isn't this just adding a step? Also you need an optical sensor somewhere which converts the light back into an electrical signal, no?
Sounds like building a tunnel where there is already a bridge.

On chip they are pumping the signal over traces with mm-range lengths and um-range widths; off chip it's over traces with dm-range lengths and mm-range widths. Timing and power consumption are hard enough problems on chip; off chip they become much harder... not to mention that most of the power consumed either goes into EM or gets coupled into other signals.

Serial connections help with the timing, but do diddly for power and noise. That's where optical comes in.

To use the beloved transportation analogy: it's like moving your cargo off of trucks and onto a high-speed train. Yes, it takes time to transfer the cargo, but it's worth it if the time savings of the high-speed train are big enough (for long enough distances, the savings can be significant).

In this case, there may be a delay associated with signal processing, but if the optical transmission is sufficiently faster than an equivalent electrical one, then it's worth it. Considering that electrical signals themselves need to undergo various kinds of switching and processing anyway (data written or read from a bus), I don't know that converting to laser signals will add much of a delay.
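One way to make the train analogy concrete is a break-even calculation: given some fixed electro-optical conversion overhead, optical only wins beyond a certain distance. Every number below is an assumption (the ~0.3c/0.7c speeds quoted elsewhere in the thread, and a made-up 1 ns conversion cost):

```python
# Break-even distance: where does optical (faster propagation, but with
# a fixed conversion overhead) start to beat purely electrical signalling?

C = 3.0e8  # speed of light in vacuum, m/s

def break_even_distance_m(conversion_overhead_s, v_elec=0.3 * C, v_opt=0.7 * C):
    """Solve d / v_elec = overhead + d / v_opt for d."""
    return conversion_overhead_s / (1.0 / v_elec - 1.0 / v_opt)

# Assume 1 ns total overhead for laser drive + photodetection.
d = break_even_distance_m(1e-9)
print(f"optical wins beyond roughly {d * 100:.0f} cm")  # ~16 cm
```

Under these made-up numbers the crossover sits at board scale, which is consistent with the argument that the latency case is marginal and the real payoff is bandwidth.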

I don't know if this is a serious question or not, but one assumes that the lasers will operate in completely sealed environments (e.g. inside an IC package) or over optical fibers if they need to traverse free space. I think the intra-package situation is probably more common; you could communicate from one core to another on the same die using a laser rather than a wired interconnect and hopefully have less interference/RF/capacitance issues to deal with. This also makes sense given what I know about mo

I agree that dust will not be a problem, as the pathways through which the light signal would travel would probably be sealed in some way and I can't even begin to guess why the GP was concerned about a computer being knocked on its side. However, I would imagine that since the pickups for a hard drive are magnetic, dust would not make much of a difference. Now I don't know how big the gap is between a head and the platter, so I guess if this was close enough dust could scratch a platter? But our CD driv

The head is close enough to the platter that it would hit a piece of dust. In fact, the head is *so* close to the platter, that it would hit a fingerprint on the surface. It floats on a cushion of air created by the high speed of the spinning platter.

Scratching the surface renders that part of the surface unusable, but also creates pieces of shrapnel which cause more problems.

Will these be in the visible or infrared range? Will the laser beams terminate or leak outside the unpackaged chip? I ask because engineers are constantly looking at decapped chips or doing various types of testing under the microscope of live circuitry. I'd hate to get hit by a laser beam through a microscope.

I didn't read TFA, but I did read the headline... So you are telling me that the star at the center of our solar system (Sol, or as some people call it, "Sun") is somehow changing its rate of rotation to track lasers, and the side effect of this turning is to increase the production speed of inedible chips made out of computers? No wonder I don't read TFA... the headline is just plain silly.

You twat. Stop trying to be a pedantic prick. It says "Sun", the shortened name of a company called Sun Microsystems that's typically used in conversation by a large number of people who don't have shit for brains. Let's not forget the logo displayed to the side of the article summary.

Remember the article not long ago about micro transmitters/receivers on a chip?

Considering no special connections are needed for wireless, unlike light, which would likely need fiber or line of sight, chips equipped with that mini wireless tech would, in theory, only need to be powered and placed in proximity to each other.

Not as sexy as SPARCs with friggin' lasers, but certainly a plus from a computer design perspective.

Even a directed wireless transmitter through a waveguide only manages to send a fraction of its signal power over to the receiver. There's also the problem that it's much more susceptible to interference, it drains a lot of power because RF signals are not easy to generate at high speeds, the extra logic required and the fact that the bandwidth is just nowhere near what traditional wired links are capable of might not make it all that attractive.

Exactly. Hence the reason 802.x wireless is much slower than its wired counterpart or why fiber optics are used for high-speed networking over great distances (like between North America and Europe) (as opposed to satellites).

Agreed on all your points, although I don't think getting 100% of the signal power to the receiver is an issue. And maybe wired bandwidth is greater, but if you only need 1 gigabit, who cares if fiber can do terabit speeds? Sun's research is aimed at supercomputers... getting 1024 processors to all talk to each other. Simultaneously. That's a lot of cross connections, and some heavy duty switching gear. But as long as any two processors can switch to the same frequency, they could communicate. Meaning 512 pro

"This is a high-risk program," said Ron Ho, a researcher at Sun Laboratories who is one of the leaders of the effort. "We expect a 50 percent chance of failure, but if we win we can have as much as a thousand times increase in performance."

Whenever anyone says there is a 50% chance of something happening they really mean "I have no idea. No idea at all. I'm guessing."

In probability theory, "p" has a specific meaning which is roughly stated as "the ratio of the total number of positive outcomes to the total number of possible outcomes in a population". So for the number of 50% to be right, it must be known that if this research was repeated a million times, 500,000 times there would be success and 500,000 times there would be failure. But this makes no sense because the thing being measured is not a stochastic property. It is simply an unknown thing.
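The frequentist reading sketched above can be illustrated with a quick simulation: a probability "p" is read as the frequency an outcome approaches when the experiment is repeated many times. A purely illustrative sketch:

```python
# Frequentist illustration: "p = 0.5" read as a long-run frequency.
# Simulate many repetitions of an event with known probability p and
# check that the observed frequency converges toward p.

import random

def observed_frequency(p, trials, seed=42):
    """Fraction of trials in which the event occurred."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(trials) if rng.random() < p)
    return successes / trials

freq = observed_frequency(0.5, 1_000_000)
print(f"observed frequency after 1e6 trials: {freq:.4f}")  # close to 0.5
```

The parent's objection is exactly that "will this research program succeed?" admits no such repetition, which is the opening for the Bayesian reply further down the thread.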

What is probably vaguely intended when a number like this is given is that if you took all the things in the history of the world that "felt" like this in the beginning, half of them will have worked out and half will have not.

How on earth could any mortal human know that?

But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number. So really a person should say the probability of success of this endeavor is between 45% and 55%, and this estimate will be correct 19 times out of 20.

With that as background here is what I humbly suggest 50% really means: it means "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is because the error band could possibly stretch between 0% and 100%. So I'll split the difference and call it 50%". But that is wrong, the statement should be "I estimate the probability of success to be between 0% and 100%".

But nobody does that because it makes them look stupid.

So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words, the information content of their statement will be equivalent.

Absolutely. Personally, I do the same thing: if someone asks me about the likelihood of something happening about which I have no clue, I tell them flat out "50/50. Here, let me flip a coin." I expect the same thing to have happened here as well.

In probability theory, "p" has a specific meaning which is roughly stated as "the ratio of the total number of positive outcomes to the total number of possible outcomes in a population". So for the number of 50% to be right, it must be known that if this research was repeated a million times, 500,000 times there would be success and 500,000 times there would be failure. But this makes no sense because the thing being measured is not a stochastic property. It is simply an unknown thing.

This is true, if by "probability theory" you mean "Frequentism [wikipedia.org]". Frequentism is nice, for those cases where you are dealing with nice, neat ensembles. For a lot of real world situations which require probabilistic reasoning, there are no ensembles, only unique events which require prediction. For that, we often use Bayesian Probability [wikipedia.org].

Take the assertion "I'd say there's a 10% chance that there was once life on Mars." Well, from a Frequentist point of view, that's complete bullshit. Either we will find evidence of life, or we won't - either the probability is 100% or 0%. There's only one Mars out there.

In order to deal with this limitation, Bayesian Probability Theory was born. In it, probabilities reflect degrees of belief, rather than frequencies of occurrence. Despite meaning something quite different, Bayesian probabilities still obey the laws of probability (they sum/integrate to one, etc), thus making them mathematically compatible (and thus leading to confusion by those that don't study probability theory carefully.) Of course there are issues with paradoxes and the fact that prior distributions must be assumed rather than empirically gathered, but that does not prevent it from being very useful for spam filtering [wikipedia.org], machine vision [visionbib.com] and adaptive software [norvig.com].
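A toy version of the spam-filter case, just to show the mechanics of a Bayesian update (all numbers are made up for illustration):

```python
# One step of Bayes' rule: update a prior belief P(H) given evidence E.
# P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]

def bayes_posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """Posterior degree of belief in H after observing the evidence."""
    evidence = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    return likelihood_given_h * prior / evidence

# Assumed numbers: prior belief a message is spam is 20%; a trigger word
# appears in 60% of spam but only 1% of legitimate mail.
posterior = bayes_posterior(prior=0.2,
                            likelihood_given_h=0.6,
                            likelihood_given_not_h=0.01)
print(f"P(spam | word) = {posterior:.4f}")  # ~0.9375
```

The output is a degree of belief, not a long-run frequency: there is only one message, just as there is only one Mars and only one Sun research program.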

As someone who professionally uses statistics to model the future performance of a very large number of high-budget projects at a major U.S. defense contractor, I can assure you that his statement was much more in line with the Bayesian interpretation of probability than the Frequentist view you implicitly assume.

Sorry for the rant, I just get very annoyed when people assume that Frequentism is all there is to statistics - Frequentism is just the beginning.

But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number.

Of course! But where did the confidence interval come from, and how much confidence do we have in it? It's important to provide a meta-confidence score, so that we know how much to trust it! That too, however, should be suspect - indeed even more so, because it is a more complex quantity to measure! So a meta-2 confidence score is in order, for any serious statistician... But why stop there?!

With that as background here is what I humbly suggest 50% really means: it means "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is because the error band could possibly stretch between 0% and 100%. So I'll split the difference and call it 50%".

So, if someone does not give an error bound on an estimate, we should assume that the error is maximal?

So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words, the information content of their statement will be equivalent.

Or, it's entirely possible that that 50% number is somewhat accurate, because they know something about the subject that you do not.

Just my luck huh, here I go looking all smart then some uber Bayesian has to come along and spoil my party.

Anyway, with little expectation of anything good coming from this (for my ego I mean), here's why I don't usually think in Bayesian terms. Correct me if I'm wrong which I probably am.

While I have heard Bayesians talk about probability not meaning the same thing as "normal", I've never seen any Bayesian p which means anything other than a relative likelihood that I'm familiar with. If there is a bag

Just my luck huh, here I go looking all smart then some uber Bayesian has to come along and spoil my party.

I'm hardly a Bayesian in spirit, but it's useful enough when treated properly. I'm actually much more likely to say "Bayesian statistics is absolute bollocks - which just so happens to work very reliably in many cases". This is due to the well-known paradoxes with priors, and issues associated with the certainty of beliefs (which you referenced). I prefer Dempster-Shafer evidence combination when

My mod points expired recently, so could someone mod this up? I do machine learning and computer vision with Bayesian statistics, and the above poster is spot-on. The GP sounds like a frequentist trying to regain control over statistical vocabulary. FWIW, the frequentists can keep "confidence interval". We don't want to sully our theoretically sound vocabulary with its filthy connotations. :p But "probability" is something we'll lay uncompromising claim to, however much detractors say that subjective probabi

I think there are a lot of people who are not really taught Bayesian statistics, and so they are limited to think of probability solely in terms of frequentist terminology.

To be fair, many things about Bayesian statistics are odd, and possibly even unsound (yay prior distributions we just made up!) The confidence interval thing can get a bit ridiculous, but I prefer Dempster-Shafer theory for the precise reason that I emphatically DO NOT want to treat all evidence with equal weight.

If I understood correctly, this is not about single-wafer design but exactly the opposite: regaining the speed of a 'single wafer design' with multiple chips by using optical communications between the chips, increasing the inter-chip bandwidth (normally intra-chip bandwidth is much higher than inter-chip bandwidth, so this is a bottleneck).

The byline of the Seattle Times story is "John Markoff New York Times". Five seconds with Google's site:nytimes.com reveals the original story [nytimes.com] with a better explanation and more quotes from Sun personnel.

Because the NY Times used to require registration to read their articles?

Of course, the article still makes some bonehead errors. They do not cut wafers of identical chips apart to be able to eliminate the few failures in a circuit, but because we want a hundred CPU chips more than we want a single four-inch processor with about 100x4 or x8 cores. You do not need that many processors to do your own taxes (unless .Net is far more wast

Interesting, so what they want to do is to be able to create larger multi-chip packages where the chips are connected to each other optically rather than by the traditional wire-bonds on a SiP. I'm honestly not seeing the advantage here in terms of speed. A single LVDS pair across a chip pad and wire-bond can already carry "tens of billions of bits per second" of bandwidth. Many can be put in parallel. I can see this being an advantage if they've discovered some ultra-efficient electro-optical convers

Instead of goofing around with connections, why not build a chip occupying the entire 300mm wafer? Any local manufacturing problem would disable just one specific core out of the hundreds of cores on the wafer-chip. Isn't it done already? Cell, AMD tri-core, old Celerons...
Even the memory could be on the wafer, or at worst, one wafer for the cores and one for the memory, vertically stacked with through-silicon vias.

This needs one large cooler instead of hundreds of smaller ones.
You can do something useful with the concentrated heat, for example provide hot water. Better than letting it go useless.
But I think a good tradeoff would be to lower the frequency an order of magnitude, and use the massive parallelism - hundreds or thousands of cores on a die. Better, make it fully three-dimensional for a massive explosion of processing units. The brain is large and is 3D and still does not get really hot.

Because we don't have the fabrication technology to expose a whole wafer at once. Since we're essentially shining light on the surface, the wider we make the beam, the softer the features get, especially towards the edges (because the light hits the edge at a different angle).
There's a sweet spot of size-vs-yields. Trying to make bigger chips requires multiple exposures for the same die, and getting the exposures to line up properly is extremely tedious.

Yes, that's true, but that's not the focus of the article. The article is about replacing electrical lines on the PCBs. The biggest bottleneck in a PC is the front side bus. This is the connection between the memory, the HDDs, and the CPU. If you could switch these types of connections from electrical to optical then you could relieve the communication bottleneck between the chips. The next step would be faster RAM and then faster HDDs, next a faster CPU, then a faster bus, and the circle continues.

The area of photonics is largely related to physics and electrical engineering, not so much to computer science, which deals with information processing and computations.
As someone who works in the area of silicon photonics, I find this some pretty exciting news.

I have to wonder: if Sun is pursuing Defense contracts, does Sun know where its business is headed? Usually companies do the Defense contracts when they are small, need money, and don't really have a product yet. Since Sun made $740 million last year, you'd think they could afford to spend $40 million on this (probably over several years), and then they'd get to keep all the knowledge to themselves (including their R&D direction). So I can only assume that either Sun thinks this has too small a chan

"The company is announcing today a $44 million contract from the Pentagon to explore replacing the wires between computer chips with laser beams.

I hope this means that servers with the new chips will not actually cost 2-4x as much as an equivalent Dell server. IMHO, Sun needs to do something about the cost of their servers. I try to only use them when required because of their cost and I'm told the inflated price is due to the low yields of the SPARCs.

The technology...would *render useless* the most daunting fear of the Pentagon: *EMP weapons*."

And that's exactly why the Pentagon would be investing in such technology. Any additional performance or other geeky coolness is just a side benefit. Ultra-high-performance computing is the DoE's gig, not the Pentagon's.