Posted
by
Zonk
on Friday June 01, 2007 @10:03AM
from the next-up-a-skillion-core-system dept.

thejakebrain writes "Intel has built its 80-core processor as part of a research project, but don't expect it on your desktop any time soon. The company's CTO, Justin Rattner, held a demonstration of the chip for a group of reporters last week. Intel will be presenting a paper on the project at the International Solid State Circuits Conference in San Francisco this week. 'The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop. That's a level of performance that required 2,500 square feet of large computers a decade ago. Intel first disclosed it had built a prototype 80-core processor during last fall's Intel Developer Forum, when CEO Paul Otellini promised to deliver the chip within five years.'" Update: 06/01 14:37 GMT by Z: This article is about four months old. We discussed this briefly last year, but search didn't show that we had discussed it in February.

Seriously, this thing doesn't even support the x86 instruction set (yet?), and is not form-factor compatible with today's processors.

When it finally comes out, hopefully they will have integrated the "3D" chip-stacking tech into it as well, and HOPEFULLY it will run cooler and draw less power than today's offerings.

Oh, I get it! You spell "Microsoft" with a "$" replacing the "s" because Microsoft likes money! Then you write some shallow technical-sounding drivel around it to legitimize your flagrant adolescent fanboyism as Slashdot's trademark pseudo-intellectual circle-jerk! Clever!

Well, using XP I see that of the programs running on the system at any one time, most are in a wait state. Most programs are not multi-threaded, so 8 cores should be more than enough to run the current Vista design. There was an article yesterday saying that the next version of Windows will be designed to handle multi-core processors better, so I figure this design hasn't changed much. And no, the Mac Pro isn't the only system with 8 cores (2 quad-core CPUs); it's a common configuration.

Sure would be nice to have a play with it once they have worked out how to program it...

It's very likely you can get one at Best Buy before they have worked out how to program it. The fact is, current programming paradigms simply aren't suited to fine-grained parallelism - and in saying that, I don't mean to imply that such a paradigm definitely exists. Sure, there are many parallel research languages, but whether those could be adopted by mainstream programmers and used to achieve anywhere near

An operating system is not a single process with a single thread. A clean installation of Windows has at least 24 processes at boot time. That means that if the kernel is smart enough, it will run each of them on a separate core, giving you quite a speed boost and still leaving plenty of room.

Process-level parallelism is fine for servers, but nearly useless for personal computers. The number of processes is irrelevant; it's the number of runnable processes that matters. Most processes spend most of their time w

Do you remember the 80286 (which had an extra year's wait)? There are some of us here who do. Even the Z80 was a bit late, being a better 8080. I wasn't waiting for it (though I coded on it), but I would guess that one or two people here did.

Itanium isn't doing that badly, but it's been relegated to the "heavy iron" mainframe and supercomputer type systems, and that's a tough market. They made a gamble that didn't work as well as they hoped.

But CPU cores are going to be the MHz of the next 20 years. Remember in the late 80s when 8MHz was a lot and 33MHz and 66MHz were in the lab? 20 years from now we'll have 2-gigacore CPUs running between 2-3GHz.

One floating-point operation (say, an add) is a FLOP, not a FLO. Just like a No OPeration is a NOP (alternatively NOOP, but the assembly mnemonic is almost always NOP). If you want to know the rate at which a processor executes FLOPs, you say that it computes at X FLOPS.
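To make the terminology concrete, here's a crude sketch: count floating-point operations executed, divide by elapsed time, and the unit you get is FLOPS. (The function name and loop count are my own; interpreted Python measures mostly interpreter overhead, so treat the result as a very loose lower bound on the hardware.)

```python
import time

def estimate_flops(n=1_000_000):
    # Time n floating-point additions in a tight loop.
    # Interpreter overhead dominates, so this is a loose
    # lower bound on the hardware's true FLOPS rate.
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x += 1.0
    elapsed = time.perf_counter() - start
    return n / elapsed  # FLOPs per second, i.e. FLOPS

print(f"~{estimate_flops():.2e} FLOPS (one interpreted core)")
```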

Yeah, man. Or what if Intel codenamed their next processor Beowulf? *inhales, holds breath, exhales slowly, smoke twisting lazily* Can you imagine a Beowulf cluster of Beowulfs or did I just blow your mind?

If intel called the 80 cpu beast "Grendel", could it still be part of a Beowulf cluster? Or would it end up in a perpetual battle - cpu versus os - until the very fabric of the universe itself crumbled around us?

If you could work Grendel's mom into a Beowulf cluster of Grendels, you might just have aced the German pr0n market. For Japan, add tentacles. And if you can reverse the expected subject/verb/object order, you might have a market in Soviet Russia to boot.

80 cores means there are probably quite a lot of on-chip interconnects between the cores.

There has to be a typo hiding in there, but the whole thing is an empty set. It's hard to believe they can make 80 cores with 100E6 transistors when it takes 261E6 transistors to make two. Each core would have scarcely more than a million transistors in the 80-core model. You have to go all the way back to the 486 [answers.com] to see that kind of count from Intel. It's possible because the cores are not x86, there's no "ability to use

There has to be a typo hiding in there, but the whole thing is an empty set. It's hard to believe they can make 80 cores with 100E6 transistors when it takes 261E6 transistors to make two.

Yeah, when I read it I thought it must be a typo (especially given that the die area is bigger). Although the article says it's VLIW (it isn't specific, but I'd guess it's IA64-based), which means you can throw away a shed-load of transistors from the scheduler that would have to be present on an out-of-order superscalar device.

You have to go all the way back to the 486 to see that kind of count from Intel.

That might not be a terrible idea. Since the utility of this chip assumes fine-grained parallelism anyway, the new metric would have to be flops per transistor. Implement the 486 design with modern process technology, allowing you to put 250 of them on a single chip running at 100x the original 33MHz clock, and you might get something very nice. Or is that even possible?
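The thought experiment above is easy to put in numbers. Everything here comes from the comment itself except `cycles_per_flop`, which is a placeholder I made up; real 486 FPU instruction timings varied widely, so take the final figure as illustrative only.

```python
# Figures from the comment: 250 shrunk 486 cores at 100x the
# original 33 MHz clock. cycles_per_flop is hypothetical.
cores = 250
clock_hz = 33e6 * 100        # 3.3 GHz per core
cycles_per_flop = 10         # placeholder, not a measured 486 number
total_flops = cores * clock_hz / cycles_per_flop
print(f"{total_flops / 1e9:.1f} GFLOPS")  # 82.5 GFLOPS under these assumptions
```

Under those (generous) assumptions the array still lands an order of magnitude short of a teraflop, which suggests why Intel built simpler, FP-dense cores instead.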

Core 2 has 2 or 4 MB of L2 cache. One bit of cache takes 6 transistors, so more than 200 million of those 291 million transistors are high-density cache. (The density of cache is a lot higher than that of logic, which is nearly all the 80-core CPU is made of.)

(Btw, I fucking HATE the "millimeters squared" expression. It's square millimeters. 275 mm squared would be more than 750 cm^2.)
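The cache arithmetic a comment up checks out, assuming the 4 MB L2 variant and the standard 6-transistor SRAM cell (tag and control overhead ignored):

```python
cache_bytes = 4 * 2**20          # 4 MB L2 cache
bits = cache_bytes * 8
transistors_per_cell = 6         # classic 6T SRAM cell; tags/control ignored
cache_transistors = bits * transistors_per_cell
print(f"{cache_transistors:,}")  # 201,326,592 -- "more than 200 million"
```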

I thought that was a little weird, too. But the 80-core chip could simply have more wires (and therefore, fewer transistors). Given that they mention that there are routing elements between the cores, it's possible that a lot of the chip's real estate is taken up by massive busses between adjacent cores.

Another explanation might be that they didn't want to waste the time/expense to come up with an optimized layout, or that they intentionally spaced things out to make testing easier.

Would someone tell me how this happened? We were the fucking vanguard of shaving in this country. The Gillette Mach3 was the razor to own. Then the other guy came out with a three-blade razor. Were we scared? Hell, no. Because we hit back with a little thing called the Mach3Turbo. That's three blades and an aloe strip. For moisture. But you know what happened next? Shut up, I'm telling you what happened--the bastards went to four blades. Now we're standing around with our cocks in our hands, selling three blades and a strip. Moisture or no, suddenly we're the chumps. Well, fuck it. We're going to five blades.

Sure, we could go to four blades next, like the competition. That seems like the logical thing to do. After all, three worked out pretty well, and four is the next number after three. So let's play it safe. Let's make a thicker aloe strip and call it the Mach3SuperTurbo. Why innovate when we can follow? Oh, I know why: Because we're a business, that's why!

You think it's crazy? It is crazy. But I don't give a shit. From now on, we're the ones who have the edge in the multi-blade game. Are they the best a man can get? Fuck, no. Gillette is the best a man can get.

What part of this don't you understand? If two blades is good, and three blades is better, obviously five blades would make us the best fucking razor that ever existed. Comprende? We didn't claw our way to the top of the razor game by clinging to the two-blade industry standard. We got here by taking chances. Well, five blades is the biggest chance of all.

Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my ass with it. They don't tell me what to invent--I tell them. And I'm telling them to stick two more blades in there. I don't care how. Make the blades so thin they're invisible. Put some on the handle. I don't care if they have to cram the fifth blade in perpendicular to the other four, just do it!

You're taking the "safety" part of "safety razor" too literally, grandma. Cut the strings and soar. Let's hit it. Let's roll. This is our chance to make razor history. Let's dream big. All you have to do is say that five blades can happen, and it will happen. If you aren't on board, then fuck you. And if you're on the board, then fuck you and your father. Hey, if I'm the only one who'll take risks, I'm sure as hell happy to hog all the glory when the five-blade razor becomes the shaving tool for the U.S. of "this is how we shave now" A.

People said we couldn't go to three. It'll cost a fortune to manufacture, they said. Well, we did it. Now some egghead in a lab is screaming "Five's crazy?" Well, perhaps he'd be more comfortable in the labs at Norelco, working on fucking electrics. Rotary blades, my white ass!

Maybe I'm wrong. Maybe we should just ride in Bic's wake and make pens. Ha! Not on your fucking life! The day I shadow a penny-ante outfit like Bic is the day I leave the razor game for good, and that won't happen until the day I die!

The market? Listen, we make the market. All we have to do is put her out there with a little jingle. It's as easy as, "Hey, shaving with anything less than five blades is like scraping your beard off with a dull hatchet." Or "You'll be so smooth, I could snort lines off of your chin." Try "Your neck is going to be so friggin' soft, someone's gonna walk up and tie a goddamn Cub Scout kerchief under it."

I know what you're thinking now: What'll people say? Mew mew mew. Oh, no, what will people say?! Grow the fuck up. When you're on top, people talk. That's the price you pay for being on top. Which Gillette is, always has been, and forever shall be, Amen, five blades, sweet Jesus in heaven.

Attention, consumers with bodily hair: The razor industry has news for you! You will never in a million years guess what this news is, unless your IQ is higher than zero, in which case you're already thinking: "Not another blade! Don't tell me they're adding ANOTHER BLADE!!"

Shut up! Don't spoil the surprise for everybody else!

Before I tell you the news, let's put it in historical context by reviewing:

THE HISTORY OF SHAVING

Human beings are one of only two species of animals that shave themselves (the other one is salamanders). The Internet tells us that humans have been shaving since the Stone Age. Of course, the Internet also tells us that hot naked women want to befriend us, so we can't be 100 percent sure about everything we read there.

But assuming that www.quikshave.com/timeline.htm is telling the truth, Neanderthal Man used to pluck his facial hairs "using two seashells as tweezers." No doubt Neanderthal Woman found this very attractive. "You smell like a clam," were probably her exact words. It was during this era that the headache was invented.

By 30,000 B.C., primitive man was shaving with blades made from flint, which is a rock, so you had a lot of guys whose faces were basically big oozing scabs. The next shaving breakthrough came when the ancient Egyptians figured out how to make razors from sharp metal, which meant that, for the first time, the man who wanted to be well-groomed could, without any assistance or special training, cut an ear completely off.

This was pretty much the situation until the late 19th century, at about 2:30 p.m., when the safety razor was invented. This introduced a wonderful era known to historians as "The Golden Age of Not Having Razor Companies Introduce Some Ludicrously Unnecessary New Shaving Technology Every 10 Damn Minutes."

I, personally, grew up during this era. I got my first razor when I was 15, and I used it to shave my "beard," which consisted of a lone chin hair approximately one electron in diameter. (I was a "late bloomer" who did not fully experience puberty until many of my classmates, including females, were bald.) My beard would poke its wispy head out of its follicle every week or so, and I, feeling manly, would smother it under 14 cubic feet of shaving cream and lop it off with my razor. Then I would stand in front of the bathroom mirror, waiting for it to grow again. Mine was a lonely adolescence.

The razors of that era had one blade, and they worked fine; ask any older person who is not actively drooling. But then, in 1971, a very bad thing happened: Gillette, looking for a way to enhance the shaving experience (by which I mean "charge more") came out with a razor that had TWO blades. This touched off a nuclear arms race among razor companies, vying to outdo one another by adding "high-tech" features that made the product more expensive, but not necessarily better. This tactic is called "sneakerization," in honor of the sneaker industry, which now has people paying upwards of $200 a pair for increasingly weird-looking footwear boasting the durability of thinly sliced Velveeta.

Soon everybody was selling two-blade razors. So the marketing people put on their thinking caps, and, in an astounding burst of creativity, came up with the breakthrough concept of: THREE BLADES. Gillette, which is on the cutting edge (har!) of razor sneakerization, currently has a top-of-the-line three-blade razor -- excuse me, I mean "shaving system" -- called the "Mach3Turbo," which, according to the Gillette Web site (www.gillette.com) has more technology than a nuclear submarine, including "open cartridge architecture" and an "ergonomic handle" featuring "knurled elastomeric crescents." That's right: It has elastomeric crescents, and they have been knurled! By knurlers! No, I don't know what this means. But it sure sounds technological.

Which brings us to today's exciting news, which was brought to my attention by alert reader Jake Hamer. Gillette's arch-rival, Schick (maker of the Xtreme 3 shaving system) has announced that it's coming out with a new razor that has -- prepare to be floored by innovativeness -- FOUR BLADES. Yes! It will be called the "Quattro," which is Italian for "more expensive."

Of course it will not end there. I bet an urgent memo has already gone out in Gillette's marketing department. "Hold some focus groups immediately!" it says. "Find out what number comes after four!"

Yes, the razor-technology race shows no signs of slowing. And who knows what lies ahead? Razors with 10 blades? Twenty blades? A thousand blades? Razors that go backward in time and shave your ancestors? Exciting times lie ahead, shaving consumers!

The onion article was about competition, not performance. It was a satiric look at how one company feels pressured to innovate in an arena where continuously adding *features* will not have a significant impact on the performance of the product. You can't simply substitute Intel for Gillette and assume that the remainder of the article would mesh up.

It was an interesting parody. Perhaps not entirely on topic, but related.

Well shit, they should just rename the Onion to The Daily Prophet. Remember that little bit they did about Bush after the first time he was (s)elected, "Our Long National Nightmare of Peace and Prosperity Is Over?" http://www.godlessgeeks.com/BushNightmare.htm [godlessgeeks.com] Here it is with links to all the jokes that came true. Shit!

Not to mention that Slashdot (even Zonk) covered this LAST YEAR [slashdot.org]. But that's OK, I'm sure Slashdot gave insightful and cogent coverage of real events that actually matter to geeks on this site, you know, like the release of a new major version of GCC [gnu.org]. Oh wait... that (like a bunch of other actually interesting stories) would be in the aptly-named "sir not appearing on this website" category, due to it not making enough banner revenue.

Clearly, there is a demonstrable need for news sites to process dupes faster and in parallel with other dupes. The reason this one took so long is because there isn't a high-speed dupe instruction on the older generations of processors.

This is a nice move by Intel. I wonder what AMD's plans are...81 cores?

Besides, with most software being single-threaded, I don't know that a consumer will immediately need more than 4 cores for a while. I can still see software companies trying to come up with ways to keep all 80 cores busy... "Well, they need at least 20 anti-virus processes, 10 genuine advantage monitors, and we'll install 100 shareware applications with cute little icons in the task bar by default. There, that should keep all the cores nice and warm and busy -- our job is done!"

But in all seriousness, I would expect some extremely realistic environmental physics simulations (real-time large n-body interactions and perhaps real-time computational fluid dynamics)... now that's something to look forward to!
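To give a sense of that n-body workload, here's a toy O(n^2) gravitational acceleration step in plain Python (purely illustrative; function and parameter names are my own). It's this all-pairs loop, run every timestep, that eats the FLOPs, and since each body's sum is independent, it spreads naturally across many cores.

```python
def accelerations(positions, masses, G=6.674e-11, eps=1e-9):
    # All-pairs gravitational acceleration in 2D. Each body i is
    # computed independently of the others, which is what makes
    # this step embarrassingly parallel across cores.
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy + eps  # softening avoids divide-by-zero
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc

# Two equal masses pull toward each other along the x axis.
a = accelerations([[0.0, 0.0], [1.0, 0.0]], [1e10, 1e10])
print(a[0][0] > 0.0, a[1][0] < 0.0)  # True True
```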

If that's the case, we might as well connect all the unused heatsinks to a griddle and I'll fry my bacon and eggs on them in the morning. If some of them burn up or get too hot, it's "ok", I've got 60 others waiting...

Or... you could just have two and order a CPU to be delivered when one burns up. In fact that's what happens on some mainframes. If a part fries, the machine calls "home" and the company will send a replacement immediately. Sometimes the administrator will find out something went bad only when the rep

Virtualisation - dual and quad core don't really cut it for massive levels of virtualisation. You want to reserve at least a core for your host OS, then divvy up the rest. With dual core that means no SMP in your VMs (which sucks if they're compile boxes); with quad core you're down to only 3. Get 20-30 machines in there and it starts to look shaky. 80 cores would scale to hundreds of virtual machines without any particular slowdown.
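That core budgeting is just integer division; a sketch using the comment's numbers (the function name is my own):

```python
def vcpus_per_guest(total_cores, host_reserved, guests):
    # Whole cores left for each guest after the host's reservation.
    return (total_cores - host_reserved) // guests

print(vcpus_per_guest(4, 1, 30))   # 0: a quad core can't give 30 guests a core each
print(vcpus_per_guest(80, 4, 30))  # 2: every guest still gets SMP
```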

That makes sense. GPUs sometimes have a higher transistor count than CPUs...

The problem with Fusion is if they kill add-on graphics and you just buy one Fusion processor that costs, say, $400 to plug into the CPU slot. In the meantime, NVIDIA releases their new generation board and Intel releases a new generation CPU. That consumer can choose to upgrade one or the other or both, but an AMD customer is stuck with just one expensive part and would have to upgrade it as one piece.

Kilts becomes AMD CEO: 81 cores? Fuck that! We're going to 100 cores and putting a goddamn window on the CPU so all of the fanboys can watch the electrons flow. Then we're going to put the ethernet connection DIRECTLY on the chip. Yeah, you heard me right, a connector right on the damn chip. You disagree? Great, I need your soon-to-be-empty cube to store my prototypes, you pussy.

Wait! Brace yourself, I've got another one. A speaker slapped onto the CPU. You hear that? That is the sound of genius and

Besides, with most software being single-threaded I don't know if a consumer will immediately need more than 4 cores for a while. I can still see software companies trying to come up with ways to keep all 80 cores busy

You're ignoring the server market, where the applications tend to be highly multi-threaded, and it's not difficult at all to keep your CPUs busy.

This isn't a general-purpose processor. Think "Cell processor" on a larger scale. You wouldn't be running Firefox or your text editor on this thing. You'd load it up and have it do things like graphics processing, ray tracing, DSP work, chemical analysis, etc.

So stop saying "we don't even have multi-core software now!!!", because this isn't meant for most software anyway.

...Yet. You're right that these cores are incredibly simplistic, so much so that they make DSPs look functional, but really what's going on here is a science project to develop the on-chip network, not so much the CPU cores. Intel envisions lifting the networking component out of this design and applying it to various different cores, so that a general computing core can be mixed in with DSP cores and other "application-specific accelerator" cores.

So no, you're not going to be running Firefox or your text editor on this model (in fact, I doubt you even _could_; these cores are currently very, very stripped down in their capacity to do work, to the point where they're basically two MACs tied to a small SRAM and a "network adapter"), but never say never - this style of chip is right around the corner.

I really don't understand this idea that we "can't use multiple cores yet" because we don't have some magical, mythical necessary programming model that will make this come alive instantly. The fact is, we've had the necessary models for decades now. Adding multiple cores doesn't necessarily mean we need to change our programs at all, but rather it means we need to change our Operating Systems.

To the point: right now, we typically schedule applications to run on time-slices, to virtually expand one proce
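A minimal illustration of the point above: independent processes need no code changes to benefit from extra cores, because the kernel simply spreads them over whatever cores exist. (The worker count and busywork here are arbitrary choices of mine.)

```python
from multiprocessing import Pool

def burn(n):
    # CPU-bound busywork; each worker is a separate process that
    # the scheduler is free to place on its own core.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Four processes; on a multi-core box they run concurrently
    # without the program itself knowing or caring.
    with Pool(4) as pool:
        results = pool.map(burn, [200_000] * 4)
    print(len(results))  # 4
```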

"For most applications, 80 cores will probably hurt performance, and at the very least will be idling and wasting power doing NOPs most of the time."

We can throw away the whole idea of NOPs when it comes to this chip and the technologies we have today. If we're not using a core, simply turn it off; it's just going to be wasting energy, and you can turn it back on pretty damn quickly (~100s of ns) if you need it. Since these cores are virtually cache-less already, there's no huge to-memory pe

I like the claim that mathematical models show more than 16 cores hurt performance in most applications. Likewise, I bet there are models that show more than 14 cores hurt performance in a super-plurality of applications. Could you have been more vague? What models, what types of applications, and most importantly, why? You can use Amdahl's law to show the theoretical maximum speedup for different types of programs, and guess what: none of them go down after 16, they just flatten (essentially diminishing returns
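Amdahl's law is one line of code, and it shows exactly the flattening described: for a program that is, say, 95% parallel (an arbitrary example fraction), the speedup keeps rising past 16 cores, it just approaches a ceiling rather than turning back down.

```python
def amdahl_speedup(p, n):
    # p: parallel fraction of the work, n: number of cores.
    # Speedup = 1 / ((1 - p) + p / n)
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16, 32, 64, 80):
    print(n, round(amdahl_speedup(0.95, n), 2))
# Monotonically increasing toward the 1 / (1 - p) = 20x ceiling.
```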

[a teraflop is] a level of performance that required 2,500 square feet of large computers a decade ago.

Over a decade and a half ago, in 1990, I programmed parallel AT&T DSP32C boards (multiple DSPs per ISA board in an 80386 host). Up to 5 25MFLOPS chips on a 125MFLOPS board, up to 8 boards in a 1GFLOPS host PC. That PC, nearly double the "decade ago", had over 1GFLOPS (including its FPGA glue) in about 3 square feet.

And it actually ran applications (commercial image processing) in the field. This Intel

My concern isn't whether they can cram 80 processors of some capability onto a single chip (how many i486 processors could you fit in the current Core 2 Duo die and transistor budget at 65/45nm?), but how you can feed enough data onto and back off the chip to keep those processors running at near full speed. 8000 processors on a chip are worthless if they can't get to their data.

And I'm not impressed by the Flop rate. Not with the Cell Processor already out for a year.

Well, those shitty, basic computers that took up big rooms, remember those? No? Ok, well, if those were still here, this thing would be like 90239820 times smaller. Cool, huh?
How many of those are we going to have to hear before we come up with some new kind of comparison?
You know how fast a woman can plot a route around a detour using a map in a big city? Yeah? Well, this shit is like 939203902093902093092093 times faster.

Of course we won't see an 80-core processor any day soon! Intel will produce "new chips" every couple of years and just add cores. I suspect they have done this all along (e.g., the Pentium series). That is, it makes sense from a capitalist standpoint to throttle your technology to generate or maintain a certain level of demand. I have no evidence for this, but as cynical as it sounds, it sure makes sense.

Nobody believed me when I mentioned in a previous story comment that I was playing with an 8-core HP test system. Everyone went "No Wai!" and this was about three months or so ago. Lookie here - 80 fucking cores.

You do know XP has a dual-core OS (XP 64-bit edition), and simply expanding it to run on more than 2 cores doesn't sound like too much of a trick. Fully utilizing all of them is a different issue, but getting it simply to "run" a 64-bit program doesn't seem impossible.