Jordin Normisky writes to mention the news, via ZDNet Asia, that IBM's new Power6 processor will be unveiled next month at a conference in San Francisco. IBM is also planning to announce a second-generation Cell; both chips are expected to run faster than 5GHz. From the article: "In addition, the [Power6] chip 'consumes under 100 watts in power-sensitive applications,' a power range comparable to mainstream 95-watt AMD Opteron chips and 80-watt Intel Xeon chips. Power6 has 700 million transistors and measures 341 square millimeters, according to the program. The smaller that a chip's surface area is, the more that can be carved out of a single silicon wafer, reducing per-chip manufacturing costs and therefore making a computer more competitive. Power6, like the second-generation Cell, is built with a manufacturing process with 65-nanometer circuitry elements, letting more electronics be squeezed onto a given surface area."

This is IBM. They were the first to do dual core. Now that everyone is doing it, it's no longer worth talking about. Everyone else, however, is having problems getting past 3GHz, so this definitely is worth shouting about.

More cores means more threads, which is all fine and lovely, unless you really need a single thread to do something very quickly. Perhaps the algorithm that you are implementing doesn't parallelize well, for instance.
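A minimal sketch of such an algorithm (my example, not from the thread): each iteration depends on the previous one, so the work can't be split across cores, and only a faster single thread speeds it up.

    #include <stdio.h>

    /* Loop-carried dependency: x[i+1] needs x[i], so extra cores don't
     * help; only a higher-clocked (or higher-IPC) core does. */
    int main(void) {
        double x = 0.5;
        for (long i = 0; i < 100000000L; i++)
            x = 4.0 * x * (1.0 - x);   /* logistic map iteration */
        printf("%f\n", x);
        return 0;
    }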

Interesting. I had assumed that this bucket analogy was referring to the number of instructions per clock that the CPU was able to do, which has traditionally been the counterpoint to the "OMG FAST CLOCK SPEEDZ!!!" argument that companies like Intel made for a long time.

I suppose dual cores are sort of a crude way to try to double the IPC count for a chip, but with the added need for ambidextrous (i.e., multithreaded) programmes.

The older POWER3 chips (350 MHz) can compete with an Intel 2.0 GHz chip for our computations. However, because a lot of our stuff is very poorly written, it pages to disk all of the time, and the overall build of the RS/6000 machines (and their more current versions) was best at managing the heavy throughput from the disk to fast memory. When we finally got our stuff to use full 64-bit addressing and were able to use all of the fast memory, that advantage vaporized for the RS/6000 machines.

With Intel's chips that was becoming increasingly true. But for IBM's power processors more clock does indeed mean faster. The Power line already outperformed Intel per clock. With the increase in clock things may get very interesting.

The Power line hasn't outperformed Intel per clock since the Pentium 4, if you discount applications where the server-sized L3 cache and memory bus of the POWER series come into play. In SPECint, the G5's per-GHz performance is in the P4 range, maybe a little higher. Core 2's per-GHz performance is about 80% higher than that.

In general, more GHz means more performance for every processor, all else being equal. Any given design is the product of a set of trade-offs: power is traded against clock speed, IPC against pipeline depth, and so on.

No...we have advanced beyond seeing clock speed as the sole measure of performance. Obviously within the same processor type, a faster clock = better. I bumped my new E6600 up to 2.88GHz because it achieves significantly higher scores in synthetic and real benchmarks. Unless IBM's architecture totally sucks or is not useful for generic computing tasks, 5GHz is still pretty damn impressive.

K8L is going to bring IPC improvements to Opteron, along with L3 cache and native (single-die) quad-core. AMD is all about the platform now. That's why they purchased ATI. It's about bringing CPU, GPU, and other specialized processors together using a fast, flexible bus (HyperTransport).

AMD is also about low cost. Remember that current Athlon 64 CPUs have about half as many transistors as their Core 2 Duo counterparts. CPU + GPU + northbridge on a single chip (AMD Fusion) will have a huge impact at the low end.

I do understand that AMD approaches the multi-core issue, and SMP in general, a bit more elegantly than Intel, and that this has a lot to do with HyperTransport. But Intel just beat them at their own game, and they will have a lot of work to do in the *NEAR* future to get back to where they've been since the launch of the Athlon processor (first to 1GHz and first to seamless 64-bit x86 on the desktop, among their most shining achievements).

AMD wasn't very much about low cost for the last couple of years - FX and X2 chips were historically overpriced until Core 2 hit the scene - there was a 40%-60% price drop on the X2 dual-core chips at about that time, if you'll recall. That means two things to me: insane profit margins and no need to compete with the floundering NetBurst.

CPU performance matters tremendously. Application performance disk-bound? Don't make me laugh. My system has 2GB of system RAM, as I hope today's Vista-ready machines do - when I load a large program (like a game) that I've already loaded since my computer has been turned on, it doesn't even read the HDD, nor does it jitter when loading new areas in games like Oblivion. I turned off my page file a long time ago. User input bound? Maybe if you're writing INPUT N$ statements in BASIC. Don't forget that Vista is around the corner for most of the world, no matter how bad it is.

DDR2 didn't help or hurt AM2 very much so I don't think memory subsystem bandwidth (or latency) is your answer either. Don't forget that media encoding, scientific applications, CAD, and gaming are what sells the high-margin computers that both Intel and AMD care a great deal about, and what drives technology in general (they can't sell if it they can't market it). AMD still has a relative deathgrip on the 8-way server market but its hold on 2- and 4-way servers that it rightfully wrested from Intel's grasp is rapidly slipping away due to Woodcrest and Kentsfield's rather nice performance per watt.

HTX slots might be an interesting toy for the future, and perhaps wonderfully applicable to server/render farms, but I don't see a product or a killer app yet.

Killer product or app using an HTX-slot card? I can maybe answer the product part: HTX graphics cards. Pure HyperTransport bus access, huge clock and loads of bandwidth; it can literally be used as a universal bus across the entire system (using different pinouts for different types of devices, internal or external), and perhaps the bus has enough bandwidth (assuming programmers program cleverly and optimally) to allow for massive things to occur at once, like running a rendering server and playing a game at the same time.

The current version of SPECint really can't take advantage of anything bigger than a 4MB L2 cache. On the other hand, it should be noted that those POWER5 numbers are with IBM's XLC compilers, which are a lot better relative to GCC on PowerPC than Intel's C compiler is relative to GCC on x86. Also, POWER5 is very sensitive to workload and instruction ordering, so you're going to see lesser performance a lot more often than with Core, which is designed to handle the huge variety of poorly-scheduled code floating around.

Keep in mind that Power chips are used in high-end servers, not commodity PCs. Given the expense of these servers, it's likely that the "OFMG 5GHZ!!!!111" reaction that typifies the commodity PC fanboy market does not apply. I doubt that IBM is sacrificing performance just to market 5GHz speeds (like Intel did with NetBurst).

I think everyone is racing with Sun in the (high-end) server market; enterprise sites and existing server customers seem to love Sun's Niagara CPU, especially because of its heat and performance-per-watt numbers: "The entire chip consumes a maximum of 72 watts, considerably less than rivals such as Intel's Xeon, which consumes 110 to 165 watts."

Last stats I saw, IBM was winning in the Unix/Linux server market. I have not seen stats since the release of Niagara, which appears to be a nice CPU for multi-process/multi-threaded environments. Do you have sales/market-share stats to support your statement?

There are clock speeds and there are operations. I know what an operation is, but how are CPU clock speeds rated? Is it just something as silly as their clock source? By definition it is "cycles per second", but what exactly is cycling? I've always been confused by this and I think I just don't understand well enough how digital processors work.

That's all right, I still find myself stumped by analog processors, like the valve body in a GM 700R automatic transmission. *shudder* Anyway, here goes:

Basically they take a tiny wafer-thin piece of silicon, use chemicals to etch millions of little transistor shapes onto its surface, and strap a buckin' bronco of a clock crystal on it that shakes it like a salt shaker, or like jello jigglers on free-based cocaine.

Yup, just the clock speed. You want something like FLOPS (floating-point operations per second) or MIPS (millions of instructions per second) for a slightly more meaningful comparison, but that still suffers from neglecting to compare efficiency in parallel pipelines, accuracy of pre-fetches, etc.
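A toy illustration with made-up IPC and clock numbers (not any real chips): useful work per second is roughly IPC times clock, which is why comparing by GHz alone misleads.

    #include <stdio.h>

    /* Hypothetical figures for illustration only: a lower-clocked,
     * wider core can retire more instructions per second than a
     * higher-clocked, deeper-pipelined one. */
    int main(void) {
        double chip_a_ghz = 3.2, chip_a_ipc = 1.5;  /* deep pipeline, high clock */
        double chip_b_ghz = 2.0, chip_b_ipc = 3.0;  /* wide core, lower clock    */
        printf("A: %.1f billion instructions/sec\n", chip_a_ghz * chip_a_ipc);
        printf("B: %.1f billion instructions/sec\n", chip_b_ghz * chip_b_ipc);
        return 0;
    }

With these numbers the 2.0GHz chip wins (6.0 vs 4.8 billion instructions per second), which is exactly the Athlon XP situation the next post mentions.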

Makes AMD's "Athlon XP 2400" marketing seem a little less deceptive when you realize that their GHz were getting more things done than Intel's GHz, doesn't it?

So what are you suggesting? That we only optimize the "most important" aspect of a computer system? Please, could you define what that worthy aspect is?

Computing performance is based on many factors. Clock speed is WAY up there on the list of importance. Just because it might not be the most important in an objective sense is no reason to stop trying to improve it.

I would agree with you if these chips were being sold to the common user. As of right now, I'm not familiar with any "e-machines" that run the IBM Cell processor. I don't see what IBM has to gain if their 5GHz processor isn't an improvement on AMD or Intel, because both of those companies already have a substantial amount of the market for home users. I can only assume these chips will be used in high-end products only.

I wonder if IBM's fab plants can cash the check their PR department writes.

These are the engineers, including at least one IBM Fellow (the second author)... this is not the PR department. I expect these folks would not take their reputations in the engineering community lightly.

pSeries, iSeries, and zSeries are still hard at work doing the same things they have always done: running banks, distribution centers, and the like. The difference is that minis and mainframes don't need glossy magazines so that people know they get work done; they just do it. If you look at the direction AMD is going, you will see the architecture so common in the mini/mainframe arena coming down to the home.

It was always hilarious to hear the network guys brag about their 4-way network tower with its 8GB-plus of RAM.

Yes, but the complex x86 instructions (and many simpler ones as well) take more than one cycle to execute. The relevant measure isn't the number of instructions required to accomplish a task, but the number of cycles required. You can easily concoct examples for which x86 requires fewer instructions but more cycles.
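An illustrative sketch of that point; the instruction sequences are real syntax, but treat the cycle accounting as hypothetical, since it varies by microarchitecture.

    /* The same memory increment, two ways:
     *
     *   x86 (1 instruction, but a load + add + store under the hood):
     *       inc dword ptr [counter]
     *
     *   RISC-style PowerPC (3 instructions, each roughly one issue slot):
     *       lwz  r3, 0(r4)     ; load word from memory
     *       addi r3, r3, 1     ; add immediate
     *       stw  r3, 0(r4)     ; store word back
     *
     * Fewer instructions does not mean fewer cycles: the x86 form still
     * pays for the load latency and the store; it just hides them inside
     * one opcode, which the decoder breaks into micro-ops anyway. */
    volatile int counter;
    void bump(void) { counter++; }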

The smaller that a chip's surface area is, the more that can be carved out of a single silicon wafer, reducing per-chip manufacturing costs and therefore making a computer more competitive. Power6, like the second-generation Cell, is built with a manufacturing process with 65-nanometer circuitry elements, letting more electronics be squeezed onto a given surface area.

The biggest cost of making chips, by far, is the R&D cost. The "first" chip costs hundreds of millions to make. Once the first chip is made, the marginal cost is VERY low. Beyond recovering R&D costs... the rest is just distribution channel costs... then... PROFIT!

Boy, Howdy! are you out of the loop. I work on those suckers and believe you me, the chip cost is not trivial.

Do the math: the cost of a 300mm wafer in a 65nm process runs well over $5000 (how much is a Deep Dark Secret). Ignoring geometric yield loss, that's about 70,000 square millimeters of potential dice per wafer. If one chip is 350 square mm, you're getting about 200 per wafer, or $25 per chip in fab cost. Yield drops off steeply with size (think in terms of losing ten to twenty dice per wafer, regardless of die size) and that adds into the fab cost too.
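The same arithmetic, redone as a throwaway program (the $5000 wafer cost is the parent's estimate, and this ignores edge loss and yield, as they say):

    #include <math.h>
    #include <stdio.h>

    /* Back-of-envelope fab cost per die for a 300mm wafer. */
    int main(void) {
        double wafer_cost = 5000.0;                /* dollars, poster's guess */
        double wafer_area = M_PI * 150.0 * 150.0;  /* ~70,686 mm^2            */
        double die_area   = 350.0;                 /* mm^2, Power6-sized      */
        double dice       = wafer_area / die_area; /* ~200 candidate dice     */
        printf("dice per wafer:   %.0f\n", dice);
        printf("fab cost per die: $%.2f\n", wafer_cost / dice);
        return 0;
    }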

Look at it this way. To design a high-end chip:

* software for synthesis, implementation, timing/physical/formal verification, OPC, power/temp analysis and all the other stuff runs in the millions of dollars.
* 20 engineers working for 3 years + benefits/managers/other overhead: ~$10 million.
* masks cost hundreds of thousands of dollars.

So getting to the first chip runs at least $15-20 million, and for something like the Core 2 Duo it's closer to $500-1000 million.

You do realize that the CURRENT generation of POWER5+ CPUs are already quad-core [ibm.com], right? Honestly, guys, you all need to read up on what makes POWER [wikipedia.org] different from PowerPC [wikipedia.org]. One is a server or workstation class chip; the other is a desktop class one.

But do these chips come with 32MB of L3 cache, have the fastest Fibre Channel bus interconnect on the market, and allow for extremely flexible, multi-platform, true hardware OS virtualization?

Performance comparisons between x86 and RISC chips, in my opinion, are really not valid. What you really want to look at is system workload. Scalability is where the POWER chips really perform, and these chips are designed for exactly that kind of workload.

They're also planning to announce a second-generation Cell, both of which are expected to run faster than 5GHz.

Why don't they seem to be making any kind of performance comparisons? Talking about physical size and power consumption as compared to Intel and AMD is great, but it seems weird that there's no mention of real-world performance against those same competitors. Even a rough estimate would be interesting.

There've been no firm figures since the Frieza chip reached end-of-line. Exactly how much higher Cell-2 rates than Cell-1 is hard to say, although Piccolo/Kami in an SLI configuration falls somewhere between. The Bejita-SSJ+ beats both on all benchmarks, and Cell-3 'Perfect' beats everything unless you overclock a Gohan to the undocumented and unsupported SSJ2 setting.

Not exactly. IBM has only announced this chip and from what I have seen it's not even a PPC chip anyway. Apple is CURRENTLY shipping dual core Xeon systems and will more than likely announce quad-core systems next week, similar to systems already shipping from PC makers like Dell. By the time the Power6 makes the jump from vaporware to reality we might see an 8-core Intel chip shipping in the high end Macs.

Both the G5 and Intel's Pentium 4 had similar problems improving performance during the past few years. Intel actually had to go back to a variant of their old P6 core to get out of their hole... and anyone who bet on IA64 can tell you all ABOUT Intel's overpromising.

It would be ludicrous, but Kutaragi's talked before about never reducing the price of the PS3 but instead upgrading it with more memory, bigger hard drives, etc. It would be pretty damned amusing if, a year and a half after the PS3 launch, instead of cutting prices with a new easier-to-produce Cell and Blu-ray, they upgraded the PS3 with the Cell2 (and hosed everyone who'd already bought one). This would be so stupid and arrogant that it's only plausible because it's Sony.

Yeah, it's sort of expected in the computer world. When you buy a console, though, you expect your investment to last for 4-5 years or so. You don't expect to have to upgrade your PS2, or that the hottest new games won't play on it because it's last year's PS2. If they did it, it would be a whole new trend of badness, like MS started with the two levels of console pricing.

Consoles are supposed to be static platforms that last about five years. If Sony kept the PS3 updated, it would require games to have system requirements and everyone would have to keep up with hardware demands every year. Along with the downloadable game updates that are becoming common these days, it would be the final merging of the hell of PC gaming with the once-great console eden.

The entire point of having a console over a computer is that any game that is released for the console is guaranteed to run on it, and run well, whether it's released 6 months or years after the console's release. General purpose computers don't tend to work that way. Therefore, if Sony were to do that, all the newer, better games would be unplayable on the old consoles, thus the early adopters get screwed out of all newer games. Typical Sony.

Kutaragi specifically said they were considering the upgrade-instead-of-price-drop life cycle because it's a computer and not just a console. Once he said that, this is fair speculation. And PS2 and PS1 just don't apply any more. I love my PS2 and still think it's the best console on the market right now if you were only going to own one. I thought Sony was great and was looking forward to the PS3. But it's been a year of such stupidity from them that all bets are off, now that they think they own the market...

In the world of technology, a promise of more/better performance counts as much as a drunken "I love you." One reason Apple jumped from PPC is that IBM failed to deliver a 3.0 GHz chip within a reasonable time frame (in the PPC970 series) and completely failed on delivering a laptop chip. Believe it when you actually see shipping servers.

The chips are already in production; this is the very end of the cycle on these. Keep in mind that while many seem to relate this to PPC, this is really the POWER line targeting servers, and IBM has traditionally been pretty accurate with their statements regarding POWER4, POWER5, and POWER5+.

I just went through the IBM site, and it seems their shipping architecture is POWER5+ at about 2.0 to 2.5 GHz. While I'm not disputing that IBM is capable of jumping to 4GHz on the first series of Power6, I take a very skeptical approach to performance promises. Given the delays on the first series of the Cell, I'd definitely take a wait-and-see on that one.

I haven't checked the information yet, but here's an abstract of the rest, found through Google:

The Power6 processor will run between 4GHz and 5GHz, and it has been shown to chew through data at 6GHz in the lab.

IBM sees things a little differently, and they decided to raise the frequency in both cores of the processor.

For high-end models, four POWER6 MPUs will be packaged in a single multi-chip module, along with four L3 victim caches, each 32MB.

On the management side, IBM is also improving virtualization capabilities in the POWER6. In particular products, a single processor may be able to host 200-300 virtual instances, although theoretically up to 1024 VMs are possible. Memory partitioning and migration have been added as well, which reduces system downtime for repairs.

IBM is claiming a factor of two performance increase, which would be consistent with the vastly higher clockspeeds and increases in raw system bandwidth.

IBM's roadmaps currently include the POWER6+, which is presumably a 45nm derivative product. Judging by past practices, the POWER6+ will debut in the second half of 2008, probably just in time to dash the hopes of rivals.

The Power and PowerPC lines will grow one step closer together with Power6, which incorporates the AltiVec instruction set that speeds up many multimedia tasks. AltiVec, also known as VMX, increases efficiency by letting a single processing instruction be applied to multiple data elements. That's helpful for video and audio tasks on desktop machines, but servers will benefit as well in, for example, high-performance computing tasks such as genetic data processing, McCredie said.
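For the curious, a minimal sketch of that "single instruction, multiple data elements" idea, using the standard AltiVec intrinsics (this assumes a PowerPC toolchain with -maltivec; it's a generic VMX example, not Power6-specific code):

    #include <altivec.h>
    #include <stdio.h>

    /* One vec_add operates on four floats at once. */
    int main(void) {
        vector float a = {1.0f, 2.0f, 3.0f, 4.0f};
        vector float b = {10.0f, 20.0f, 30.0f, 40.0f};
        vector float c = vec_add(a, b);   /* four additions, one instruction */

        __attribute__((aligned(16))) float out[4];
        vec_st(c, 0, out);                /* store the vector to memory */
        printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }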

Where Power5 can transfer data on and off the chip at a rate of 150 gigabytes per second, Power6 can do so at 300GBps, McCredie said.

Oh, and it is also good for BCDs (binary-coded decimals), which obviously points to the expected customers (high-end financial firms, presumably).
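A hedged sketch (mine, not IBM's decimal hardware) of why decimal/BCD representation matters to financial code: binary floating point cannot represent most decimal fractions exactly, while BCD keeps each decimal digit intact.

    #include <stdio.h>

    int main(void) {
        /* Binary float: ten 10-cent additions miss a dollar slightly. */
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.10;
        printf("binary sum: %.17f\n", d);    /* 0.99999999999999989 */

        /* Packed BCD: the value 1234 stored one decimal digit per nibble. */
        unsigned char bcd[2] = {0x12, 0x34};
        int value = 0;
        for (int i = 0; i < 2; i++)
            value = value * 100 + (bcd[i] >> 4) * 10 + (bcd[i] & 0x0F);
        printf("decoded BCD: %d\n", value);  /* 1234, exactly */
        return 0;
    }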

Move back? They were never on them. POWER6 != PowerPC (though they are similar in more ways than not).

I think Apple is perfectly happy with the Intel move at this point. One of the reasons for the migration (if you can get past Jobs' reality distortion field of blah blah per watt or whatever) was that IBM wasn't able to keep up with demand, either with getting the speeds up, or with delivering the slow crappy ones they already had.

Well, no, Apple never used POWER6 specifically, but they did use PPC, and IBM supplied those chips.

This also puts Apple in a good situation: AT NO POINT do they have computers that are inferior to their competition. Before, if Motorola or IBM outmatched Intel, Apple had bragging rights; if Intel beat them, then they were at a disadvantage. Now, if there is a supply issue in x86 land, then Dell, HP, and Apple are all in the same boat. Apple now competes on its software, not on Motorola/IBM's interest in beating Intel. As PPC was often a disadvantage and only occasionally an advantage for Apple, they come out ahead.

The biggest effect the Intel switch has had is to put a stake in the heart of the horrid old OS 9 vampire. I'm more than half convinced that the reason Jobs timed the switch when he did was because he'd just - about six or so months earlier - been able to pull the last G4 Powermac that could boot into OS 9 off the Apple store without the usual storm of protests. With the Intel switch, the new Macs don't include Classic and won't even run the old OS 9 software. Even if IBM had a 5 GHz quad-core PowerPC, that wouldn't change.

First of all, switching to a Power6-based architecture is not something you simply do. It takes a LOT of effort in writing the OS to function on the new architecture, not to mention all the work by developers to make their programs function on it as well. Second, Apple didn't choose Intel because they were the "best at the moment" uP supplier. They chose Intel because Apple felt they had a better future than the PowerPC line. So even if someone, like Power6, does poke their head above Intel/x86 in performance, that alone wouldn't change the picture.

From an application's point of view, moving from the PPC to the POWER line would be a non-issue; just a recompile, if that. For the OS it may be a bit of a challenge, but far less than moving from PPC to Intel. POWER is often used in mid-range systems and workstations; it is big, fast, and usually expensive. This is a step to keep the POWER line above x86, not really to catch up. Apple didn't use the POWER line; it used the PPC line of CPUs. I do agree that this will not make any difference to Apple, which is committed to Intel now.

As a game developer, I can say with some confidence that Apple's decisions are NOT influenced by the difficulty incurred by developers in getting their software to run properly on any number of different versions floating around, which are in many ways incompatible with each other.

It was never about performance per se -- there are plenty of faster things out there than the Core 2 Duo. IBM will be happy to sell you some of them, as will Sun or Fujitsu. Or Cray. All for the low price of $600k a machine.

The issue is that IBM makes supercomputers, and Motorola makes cellphones, and they design their chips accordingly. Apple, making neither of these things, couldn't persuade either of them to make a low-power, fast, cheap CPU useful for a laptop and continue updating it with such a small market. Intel, on the other hand, spends most of their engineering effort trying to solve exactly this problem, and so has its business interests aligned with Apple's, as opposed to IBM or Motorola, who didn't really care about them at all, and would happily spend their R&D money on designing things like this chip instead of making a G5 that would fit in a laptop.

Also, the one area where IBM and AMD do not do well yet is mobile chips. Here Intel had a clear advantage. Coupled with Intel being able to deliver promised speeds and quantities, Apple made a good decision.

With 32MB of cache, hopefully cache misses won't be too frequent. IBM, as well as being first to market with dual- and quad-core, was first to market with SMT as well. The nice thing about SMT is that when you get a cache miss, you can just give the other thread a bit more time to run. With enough contexts (and a high enough degree of parallelism), cache misses become much less important. This is something the T1 does particularly well.
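A toy single-issue simulation of that effect (my own model with made-up penalty and miss-rate numbers, not IBM's design): while one thread waits out a miss, the other thread gets the issue slot, so with these numbers two threads retire twice the work in roughly the same number of cycles.

    #include <stdio.h>

    #define MISS_PENALTY 100  /* stall cycles per cache miss (made up)  */
    #define MISS_EVERY   20   /* instructions between misses (made up)  */
    #define WORK         1000 /* instructions each thread must retire   */

    typedef struct { long done; int stall; } thread_t;

    static long run(int nthreads) {
        thread_t t[2] = {{0, 0}, {0, 0}};
        long cycle = 0;
        for (;;) {
            int active = 0, issued = 0;
            for (int i = 0; i < nthreads; i++)
                if (t[i].done < WORK) active = 1;
            if (!active) break;
            for (int i = 0; i < nthreads; i++) {
                if (t[i].done >= WORK) continue;                 /* finished    */
                if (t[i].stall > 0) { t[i].stall--; continue; }  /* miss stall  */
                if (!issued) {                     /* one issue slot per cycle */
                    issued = 1;
                    t[i].done++;
                    if (t[i].done % MISS_EVERY == 0)
                        t[i].stall = MISS_PENALTY; /* model a cache miss       */
                }
            }
            cycle++;
        }
        return cycle;
    }

    int main(void) {
        printf("1 thread:  %ld cycles for %d instructions\n", run(1), WORK);
        printf("2 threads: %ld cycles for %d instructions\n", run(2), 2 * WORK);
        return 0;
    }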

IBM does not give a heck about the desktop market unless you are calling them about 10,000 terminals running an Enterprise Big Iron monster, and they may even suggest you buy Dell terminals/PCs if that fits their project better. What matters to them is the mainframe, the technologies used, the software used, and the entire consulting business that keeps such operations up. Motorola/Freescale lives happily in the embedded processor and telecom markets, too.

Sorry, but these announcements aren't much more optimistic than the ones that were made before the launch of the G5. Let's see IBM actually roll out those babies, and look at what yields they get, how cool they really run, and in what ways the design has suffered to allow them to reach that kind of clock speed.

So what happens in the near future when Intel brings out the 80-core microprocessor that does 1.28 trillion calculations a second? I do not understand how Intel can do that with only 100 million transistors while the Power6 has over 700 million.

For the millionth time, this is a POWER chip, not a PowerPC chip. It's a difference between server and workstation processors.

Apple DID switch because of Intel's better roadmap, as the Core 2 Duo and upcoming technologies prove. IBM's inability to get the heat down is just evidence of their inferior roadmap compared to Intel's, and I don't understand why you think it refutes Apple's motives when it actually proves them.