
MojoKid writes about some interesting news from AMD. From the article: "Advanced Micro Devices plans to use resonant clock mesh (PDF) technology developed by Cyclos Semiconductor to push its Piledriver processor architecture to 4GHz and beyond, the company announced at the International Solid State Circuits Conference (ISSCC) in San Francisco. Cyclos is the only supplier of resonant clock mesh IP, which AMD has licensed and implemented in its x86 Piledriver core for Opteron server processors and Accelerated Processing Units. Resonant clock mesh technology will not only lead to higher-clocked processors, but also significant power savings. According to Cyclos, the new technology is capable of reducing power consumption by 10 percent or bumping up clock speeds by 10 percent without altering the TDP."
Unfortunately, aside from a fuzzy whitepaper, actual technical details are all behind IEEE and other paywalls with useless abstracts.

Agreed. It's a breathlessly ebullient press-release sales pitch. That said, I hope AMD is able to get back into the game to keep Intel honest, even though I currently own an Intel processor (the last four or five machines I built before it were AMD-based).

The only workstation-class machine with which I have been completely happy is powered by a four-core AMD Phenom II. Quiet, powerful, cheap: pick all three. And looking around, I would say that its successor is highly likely to be a six-core, 45 nm AMD chip. Best value by far for my money.

Today I can choose slightly less latency with Intel or significantly more value with AMD. Call me cheap, but I will take the value, thank you.

Yep. This is where AMD lives and dies: the budget segment. That's where they stomp Intel, which prefers to keep its high margins and the mindshare that comes along with having the fastest chip of them all.

For myself, AMD would have to push out very affordable 4-socket and 8-socket Opteron solutions, like they did in the K8 days. These days, it's a better value for me to spend the big bucks on Intel workstations and ride them out for an extra year.

There was a time, 8-12 years ago, when it looked like AMD could snatch the performance crown.
But without the fab expertise to match Chipzilla, it never quite happened, and nothing short of a fantastic screwup by Intel or an astonishing breakthrough by AMD will close the gap.
Still, AMD has been rock-solid for my personal needs and makes it so easy to keep migrating to newer CPUs and mainboards that I haven't run an Intel desktop at home in 10 years.

Err, there was a time 8-12 years ago when AMD *did* snatch the performance crown.

Around the time of the Athlon 64's appearance, when Socket 939 came along, AMD was actually both faster and cheaper than Intel. Nothing Intel had could match the FX range on the desktop, and nothing Intel was doing in the server room could match the Opteron at the time. Intel was struggling with its NetBurst architecture (IIRC), which had high clock speeds and performed slightly better under some loads (video encoding, IIRC) but markedly worse for pretty much everything else.

It didn't last long; Intel took back the performance crown and, after a few years, made serious inroads into the budget sector as well. But for a brief, shining moment (around the time the FX-55 and FX-57 were released) AMD held the crown.

That's what I was referring to. Apart from solving the fab issues, I don't know what else AMD could have done.
They had ongoing problems with yields, and there were initial problems with power consumption. I vaguely recall that you had to use certified coolers or power supplies (or both) or the warranty was void.
Intel took FOREVER to get its version of HyperTransport and the Alpha-derived designs to market, but once it did, with Nehalem, it hasn't looked back. If they ever revive and perfect

"They had ongoing problems with yields, and there were initial problems with power consumption. I vaguely recall that you had to use certified coolers or power supplies (or both) or the warranty was void."

I have no idea about Opterons or the server room, but in the land of the desktop that wasn't so. The FX chips may well have needed a good power supply and decent cooling (especially if you were going to take advantage of their clock-unlocked features), but in general the high-end gamer PC world was

Actually, I'd say buying ATI was one of the smartest things they ever did. One can argue that if they had waited until the market tanked they could have gotten it cheaper, but hindsight and all that. Have you tried Bobcat? Less than 18W for a dual core with an HD 6310 GPU, and it often runs at less than 12W. Hell, AMD had to slow down its desktop production simply because it didn't have enough capacity to meet demand for the Brazos platform. If that's failure, I'll take two, please. Go to someplace like Tiger and see how many units you find with the E-350: we're talking netbooks and laptops, HTPCs, and all-in-ones; the OEMs are cranking out new designs using those chips as fast as they can. I walked into my local Wally World the other day and fewer than 4 units were Intel; the rest were all AMD Fusion. And don't forget this is still running on VLIW GPUs; the next revs will replace them with vector units, which should behave like a hyper-powerful FPU when not needed for graphics.

So I'd say that while AMD has made some SERIOUS mistakes (killing the AM3 line and the Stars arch before getting the bugs fixed in the BD/PD design, or better yet replacing it for the consumer chip, and trying to push a server chip like BD/PD as a desktop chip), frankly the APUs created thanks to the merger have been one of the few smart moves they've made. With Brazos they have a unit that stomps Intel+ION while often costing less than the Intel chip alone, and thanks to Intel shooting itself in the face by killing the Nvidia chipsets, there won't be any new ION designs. With Brazos you have a unit that sips power, is quiet, and runs cool enough to be passively cooled, while still able to do 1080p over HDMI. If you haven't tried one, you really should; it's a sweet chip.

The Athlon 64 was indeed awesome. I was a full-on raging AMD fan back then, eventually culminating in an 8-way Opteron workstation: the good old Tyan Thunder K8QW. The only problem was, AMD stagnated for way too long. When I upgraded from the A64 to the X2, it was a huge leap (obviously), stomping all over Intel's overpriced Pentium D. But then Intel came out with the Core 2 series, and AMD just kept releasing die-shrinks of the same old CPUs. I had nothing to upgrade to. I eventually tired of waiting fo

AMD *does* push out affordable 4-socket Opteron setups: the Opteron 6000 series CPUs. They sell for a whole lot less now than they did in the K8 days. The least expensive Opteron 6000s go for $266 each and the most expensive are around $1200-1500, compared to starting around $800 each and going up to close to $3000 for the K8-era 4-way-capable Opterons. Considering that a 4-way-capable Intel Xeon still costs close to $2000 and goes up to near $5000, and is based on two-year-old technology, the Opterons are that great deal you were wishing for.

However, on the desktop, Intel has gotten much better with its pricing (i.e., it doesn't cripple lower-end chips as severely as it used to) and is giving AMD a real run for its money.

The only workstation-class machine with which I have been completely happy is powered by a four-core AMD Phenom II.

My last box was a quad-core Phenom II. It served me well. There's no denying though, that Intel's current i7s (I have a 2600K) blow everything else out of the water. I fervently hope AMD will come up with something to challenge it. Competition is good.

May I make a suggestion? Tiger has been selling its remaining stock of 95W Thubans for around $100 (in case you haven't heard, in a serious "WTF are they thinking?" move, AMD has killed AM3 in favor of two sockets, FM1 and AM3+, that have less than a year of life in them). Sign up for their emails; that is where they have been offering them lately. I got one, and with the money I saved I upgraded my ECS board to a nicer ASRock, and I must say I couldn't be happier. The 1035T is not only around 40% faster than my 925 Deneb, but whereas the Deneb would max out at around 139°F doing transcodes, with the Hyper N520 cooler I paired the Thuban with I'm getting a max of 114°F, and that's after seven and a half hours of slamming the CPU with VirtualDub. At idle this baby is literally below room temp: no shit, looking at Core Temp my chip reads 67°F and the room is 72°F. Frankly, I've never been happier with a chip upgrade in my life, and it's just a damned shame AMD has killed AM3, but their loss is your gain if you jump on it and snatch one while they're cheap. I mean, 6 cores for $109? How can you beat that? Paired with 8GB of RAM and a CrossFire-enabled board, I figure this baby will last me until 2020 easy. What a sweet chip.

But for everyone who wants to save some money and have a nice chip: snatch one of the AM3s NOW, before the stock runs out, because when they're gone, that's it. I went ahead and built my GF a new Athlon X3 box and gave the Deneb to my youngest, and as soon as this next batch of laptops gets sold I'll be building the oldest an X3 or X4 before supplies run out. The really nice AM3 boards have never been cheaper, and paired with 4-8GB of DDR3 and a Hyper 212 or Hyper N520 they make pretty badass desktops: plenty of overclocking headroom if you desire, and easy to unlock, so that X3 can easily be the cheapest quad you'll ever buy. But for me, that X6 so cheap? Hell, how could you not love getting 6 cores for $109 shipped? That's a no-brainer.

Agreed, it's a sales pitch. But it's not vaporware at all; it's a very neat solution. (I saw another approach with similar properties a couple of years ago, but this one is way better.)

The issue is the power consumed by clocking the chip. Modern designs are primarily layers of D-type flip-flop registers separated by small amounts of random logic, and all the flip-flops are clocked simultaneously, all the time. The clock signal is an input to ALL the flip-flops and to a bit of the random logic. I'm guessing somewhere between one in five and one in ten gate inputs is driven, in roughly equal measure, by CLK or ~CLK. Further, the other signals flip between one and zero at most once per cycle, and only sometimes. ALL the CLK signals flip from zero to one and back to zero EVERY cycle. So there's a lot of activity on the clock.

In CMOS the load on the clock is primarily capacitive (the stray capacitance of the CMOS gates and wiring), plus some losses, mainly due to the resistance of the wiring. The stray capacitance has to be charged and discharged every cycle, and that charge represents energy. In a conventional design the clock drivers are essentially the same thing as logic gates (inverters). New energy is supplied from the power supply every cycle as the lines are charged (and about half of it, excluding signal-line resistive losses, is dumped as heat in the pull-up transistors of the drivers). Then the charge is dumped to ground (and the rest of the energy is dumped as heat in the pull-down transistors). All that energy is lost as heat every cycle, and it represents about 30% of the power consumed by the chip. It would be nice to scavenge it and reuse most of it for the next tick.
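To get a feel for the scale, the dynamic power of a purely capacitive clock load follows P = C·V²·f. A quick sketch; all the numbers here are illustrative assumptions, not AMD figures:

```python
# Dynamic power of a capacitive clock network: P = C * V^2 * f.
# All values below are illustrative assumptions, not published figures.
C_clock = 2e-9   # total clock-network capacitance, farads (~2 nF, assumed)
V = 1.2          # supply voltage, volts (assumed)
f = 4e9          # clock frequency, hertz (4 GHz)

P_clock = C_clock * V**2 * f
print(f"clock network dissipates about {P_clock:.1f} W")  # ~11.5 W
```

In a conventional tree, every one of those watts is charged from the rail and dumped to ground each cycle; scavenging most of that energy instead of throwing it away is exactly what the resonant schemes described below aim for.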

A previous invention used a half-wave transmission line looped around the chip and connected plus-to-minus: a big Möbius strip. The CLK and ~CLK loads acted as distributed capacitance along the transmission line. A clock waveform circulated continuously, traversing the loop twice per cycle. Instead of a sea of drivers providing new energy and then throwing it away every cycle, the transmission ring had a few drivers distributed around it, keeping the wave circulating and correctly formed and pumping in just enough energy to replace the resistive losses while the bulk of the energy went round and round. Result: most of the clock power requirements and heating load go away.

Unfortunately, the circulating clock wave meant the region completing a computation ALSO went round and round, rather than everything switching at the same time. Stock design tools assume CLK/~CLK is simultaneous (except for minor variations) across the whole chip. So using that earlier system would require a major rewrite of the stock tools and new design methodologies.

THIS system does a similar hack energetically, but with everything in sync. Instead of a sea of drivers driven by a carefully-balanced tree of pre-drivers, the CLK and ~CLK are constructed as a pair of heavy-conductor meshes - like two stacked layers of flattened-out window screens. These form two plates of a capacitor. These plates are connected by an inductor, forming a resonant "tank circuit". When this is "pumped up" by a few drivers and is "ringing", energy alternates between being an electric field between the screens and a magnetic field in the inductor coil, twice (once for each polarity) each cycle. Again the bulk of the energy is reused over and over while the drivers only have to replace the (mostly) resistive losses (and pump it up initially, over a number of cycles). Again the bulk of the clock power and heating is gone. But this time the whole chip is switching essentially simultaneously, so the stock design tools just work.
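The mesh-plus-inductor combination is just an LC tank, whose natural frequency is f = 1/(2π√(LC)). A sketch with assumed values (not Cyclos/AMD figures), showing the tiny inductance needed to resonate a nanofarad-scale mesh at 4 GHz:

```python
import math

# Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C)).
# Values are illustrative assumptions, not Cyclos/AMD figures.
C_mesh = 1e-9      # clock-mesh capacitance, farads (1 nF, assumed)
L_tank = 1.58e-12  # tank inductance, henries (~1.6 pH, assumed)

f_res = 1.0 / (2 * math.pi * math.sqrt(L_tank * C_mesh))
print(f"tank resonates near {f_res / 1e9:.2f} GHz")  # ~4 GHz
```

Drive the mesh much off f_res and the tank stops recycling energy, which is why such a clock can't be retuned arbitrarily.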

Neat!

Downside (of both inventions): You can't quickly start and stop the clock in a given area or run it more than a few percent off the speed set by the resonance of the tank circuit or transmission line. No overclocking. Also no clock gating to save power on quiesc

A 100W bulb uses 0.1 kWh in an hour, or 0.0000278 kWh in a second, or 0.000278 kWh in 10 seconds (i.e., 0.278 Wh).

Therefore, a 100W bulb running for 10 seconds uses about the same amount of energy as an average Google search. That's a lot higher than I thought it would be. Since I use 20W CFLs, each Google search is the equivalent of 50 seconds of light. Just while typing this reply, I did enough Google searches to light up my room for about 15 minutes.
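The arithmetic above can be checked in a couple of lines (the "one Google search ≈ one 100W bulb for 10 seconds" equivalence is the parent's assumption):

```python
# Energy of 100 W for 10 seconds, expressed in watt-hours.
bulb_w = 100.0
search_wh = bulb_w * 10.0 / 3600.0            # -> ~0.278 Wh per search

# How long that energy runs a 20 W CFL.
cfl_w = 20.0
seconds_of_cfl = search_wh / cfl_w * 3600.0   # -> 50 s of light per search
print(f"{search_wh:.3f} Wh per search, {seconds_of_cfl:.0f} s of 20 W light")
```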

Intel is already running at 4GHz+. OK, not officially, but it is almost impossible to find a Sandy Bridge K-series chip that won't easily overclock to 4GHz or more. I bumped my 2600K to 4GHz: no voltage increase, no messing around, just turned the multiplier up. Zero stability issues, and it doesn't even draw a ton more power. Basically they are just being conservative for thermal reasons.

The 22nm Ivy Bridge is soon to launch as well. Never mind any potentially better overclocking; it is faster per clock than SB. And SB is a good bit faster per clock than Bulldozer (whose architecture Piledriver uses), sometimes more than a bit (depending on what you are doing).

So no, they'd need way more speed to give Intel any kind of run for their money, unfortunately. What they really need is a better design, something that does better per clock, but of course new designs take a long time and BD itself was quite delayed.

Remember, the one and only time AMD did eclipse Intel was during Intel's P4 phase. Intel had decided to go for low work per clock and high clock speed. Speeds didn't scale as Intel had hoped, and the P4 was the less powerful for it; AMD chips were tops. However, the Core architecture turned all that around: it was very efficient per clock, and each generation just gets better. Meanwhile AMD stagnated on new architectures, and then released Bulldozer, which is not that great.

Also, they have to fight the losing fab battle. They spun off their fabs and as such aren't investing tons of R&D in them. Intel is, and thus is nearly a node ahead of everyone else. Other companies are only in the last few months getting their 32nm-node and 28nm-half-node production lines rolling out products to retail channels; Intel's 22nm node process is complete and is fabbing chips for retail release in a couple of months. So they've got that over AMD until the other fabs catch up, by which time Intel will probably have its 14nm half-node process online in Chandler (plant construction is in full swing).

Sadly, things are just not good in the x86 competition arena. AMD competes in only a few markets, and Intel seems to edge in more and more. Servers with lots of cores at reasonable prices seem to be the last place AMD really has an edge, and that is a small market.

I don't want to see a one player game, but AMD has to step it up and this unfortunately is probably not it. If they make it work, expect Intel to just release faster Core i chips with higher TDP specs. The massive OCing success shows they could do so with no problem.

I'm a diehard Intel fanboi. My last AMD was an 80286; I owned an AMD 80386DX-40 but never used it (acquired it at a swap meet after the P60s had just launched). Prescott had a use case where it outperformed AMD, but it was very narrow: if your load was highly predictable and did not cause cache misses or branch-prediction failures, it owned the AMD. Sadly, every workload except straight-up numerical number crunching was not so good. I used my 3.6GHz P4 for transcoding video. It was the first machine I owned where I could encode faster than real time (i.e., if a movie was 60 minutes, I could encode it in 50).

I really hope this pans out for AMD and brings them a little up into Intel's game. While, as you said, there has only been one time when AMD flat-out bested Intel, there have been several cases where AMD has nailed a particular segment:

* Low-cost many cores (data compute clusters).
* Low cost with reasonable performance for most end-user loads.
* Downright cheap CPUs for entry machines.

Every time they've done something like this, they have forced Intel to step up to that segment and improve. In this case I hope to see not a high-spec CPU improvement, but rather a very low-power option in the mid-range CPU segment: somewhere in the i5-equivalent range, giving desktop performance while sipping mobile levels of power. It would make building a poor man's compute cluster more feasible from a power and cooling standpoint.

-nB

It's not true that AMD's lead was that short. The Athlon came out and was immediately on par with or better than Intel's Pentium IIIs. By the time it was Thunderbird vs. Coppermine/Tualatin, the lead was pretty sizable. That lasted throughout the Athlon 64/Pentium 4 period and into the Core's run, until the Core 2 Duos arrived. The gap stayed close for a while, with Intel's multi-core processors generally superior, but as little as about a year and a half ago AMD had the better offering in the X3 versus Intel's Core i3. Competition is tight, which has been good for the rest of us.

If you're factoring cost in, AMD's lead dates back to the K6-2. Clock for clock they were slower, but back then I could get a 400MHz K6-2 and motherboard for less than a 266MHz Pentium II and motherboard, and the K6-2 was a lot faster, especially since it ran the memory 50% faster than the P2.

It's even more ridiculous than that. My motherboard automatically overclocked my 2500K to 4.3GHz. From what I can tell, that 1GHz increase over stock isn't even pushing it (temperatures are still ridiculously low, with a 7-Zip benchmark hitting 55°C). Granted, aftermarket coolers probably help, but I believe a 0.5-0.75GHz bump on a stock cooler is entirely reasonable.

I have a feeling that Intel might actually be downplaying their default clocks; even under the most terrible conditions, I can't see

Yes, but can a Xeon do it, and how many cores can you have in a box at a sane price? Currently there are AMD systems with 64 cores going for under US$10k. For some CPU-bound tasks such things are wonderful, and any speed increase makes them even better. Intel is catching up with 10-core CPUs that are faster than the currently available Opterons, but for tasks with a LOT of threads the AMD CPUs still outperform them for the same number of sockets. It may look like a "small market" to you, but there's still a huge number

Also they have to fight the losing fab battle.... Other companies are just in the last few months getting their 32nm node and 28nm half-node production lines rolling out products to retail channels. Intel has their 22nm node process complete and is fabbing chips for retail release in a couple months.

However this technology lets AMD get rid of most of the clock drivers and most of their power consumption and waste heat. That means the rest of the logic can be pulled closer together in a given technology, s

"might give Intel a run for their money"

I'm sorry to inform you, but you're a little (a lot) out of the loop on the current state of the Intel and AMD processors available. I'm sure most people here don't want to hear this, but the little guy is well and truly down on the ground being kicked in the stomach.

I wouldn't be surprised if one of these CPUs at 5GHz would barely compete with Intel's current top-shelf items, let alone at 4GHz.

Is AMD really doing that badly? Seriously, I am out of the loop from an AMD perspective*, but I assumed they were still rocking the cost/performance ratio on the low end of the CPU range, and I was hoping this would allow them to push into mid-range i5 territory.

-nB

*all I work on at work & at home is Intel stuff, so I don't have any relevant AMD info.

I got a Bulldozer 8250 for $179 and an AM3+ motherboard for $139. It runs everything I want well, and I never have to kill anything before starting a demanding game. It compiles speedily enough that my Vertex 2 is now the bottleneck when I run Maven.

So please, tell us what Intel could have offered me in terms of performance at a set price of $318?

You have to ignore the people who go on about AMD not being worth the money (though I have to admit that Bulldozer was a huge flop). Last year I got my 955BE and motherboard for $200 total. Nothing Intel offers comes close to that for a CPU and mobo; the CPU alone would be at least $150 to match the Phenom II X4 955BE. I got a high-quality motherboard and a high-quality CPU for about the cost of one of Intel's lower-end CPUs.

Llano, the A series, is actually a very solid product. For the cost of an i3, you get a quad core that is about a quarter slower overall but whose integrated graphics are about three times faster. It's actually selling very well.

Bulldozer is a disaster unless all you do is video encoding.

Now, here's the puzzling part: they want to use Bulldozer, the failure, as the new core for the A series, the success. I hope they find a way to fix it; otherwise my next rig will have an Intel for the first time in ten years.

You must not work in parallel programming or do any heavy engineering analysis/modeling. Taking advantage of all those threads and cores within Bulldozer, and utilizing them with OpenCL along with the GPGPUs, is a dream come true. More and more modeling environments are leveraging all that this architecture offers; but to you, if your game doesn't presently use it, it's worthless. To each their own.

Now, here's the puzzling part: they want to use Bulldozer, the failure, as the new core for the A series, the success. I hope they find a way to fix it; otherwise my next rig will have an Intel for the first time in ten years.

I think the people calling bulldozer a failure have the wrong expectations. The core used in the existing A series is a direct descendant of the original Athlon from 1999, which itself was very similar to (and designed by the same people as) the DEC Alpha introduced in 1992, predating even the Pentium Pro. Suffice it to say that there isn't a lot of optimizing left to be done on the design.

Bulldozer is a clean slate. The current implementation has some obvious shortcomings, not least of which that the cache architecture is lame. (The L1 is too small and the L2 latency is too high. They might actually do pretty well to make a smaller, lower latency, non-exclusive L2 and use the extra transistors for a bigger L3 or even an L4.) But that's not a bad thing. It's something they can fix and make future generations faster than the current generation. Which is the problem with the old K10 -- there are no easy little changes left to be made to make it substantially faster than it is now.

The other part of the problem is that people want Bulldozer to be something it's not. It isn't designed for first in class single thread performance. It's designed to have adequate single thread performance while reducing the number of transistors per core so that you can have a lot of cores. It's designed for the server market, in other words. And to a lesser extent the workstation market. They designed something that would let them compete in the space that has the highest margins. So now all the high-end gamers who only care about single thread performance are howling at the moon because AMD concluded it couldn't compete with Intel in that sector and stopped trying.

What you have to realize is that it isn't that the design is flawed. It's that you aren't the target market. They could have built something that achieved 90-100% of Intel's best on single threads instead of 60-80% by doubling the number of transistors per thread and halving the number of threads and cores, but think about who would buy that. PC enthusiasts who comprise about 0% of the market. It wouldn't sell in the server market because the performance per core * number of cores would be lower. It wouldn't sell in the budget market because it would require too many transistors per thread and therefore cost too much to manufacture.

Instead, with Bulldozer they can use more modules and sell to the server market or anyone else with threaded software, and use fewer modules in combination with a GPU and sell to the budget market and the midrange gaming market, leaving the six dozen howling high-end PC gamers to Intel.

Do the math here: if Bulldozer's cores were 60-80% of Intel's, then their 8-core chip should perform at 120-160% of Intel's quad-core chips in multithreaded performance.

Only if by "do the math" you mean "ignore the math." Bulldozer modules are neither a complete pair of cores nor a single core with hyperthreading, remember.

If you run a single thread on a module, it doesn't have to share the FPU or caches with any other threads and will have higher performance, hence 60-80% of Intel's on single-threaded workloads. If you run two threads on the same module, then they share some things, but not everything as HT would, so instead of having total performance go up by a pittance

What did happen is that management decided there SHOULD BE such cross-engineering, which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style. This means giving up a lot of performance, chip area, and efficiency. The reason DEC Alphas were always much faster than anything else is that they designed each transistor by hand. Intel and AMD had always done the same, at least for the critical parts of the chip. That changed before I left: they started to rely on synthesis tools, automatic place-and-route tools, etc. I had been in charge of our design flow in the years before I left, and I had tested these tools by asking the companies who sold them to design blocks (adders, multipliers, etc.) using their own tools. I let them take as long as they wanted. They always came back to me with designs that were 20% bigger and 20% slower than our hand-crafted designs, and which suffered from electromigration and other problems.

That is now how AMD designs chips. I'm sure it will turn out well for them [/sarcasm]

And that comment was back in 2010. No surprise, then, that Bulldozer is slower and uses more power, and its only advantage is more cores (meh; any idiot can add more cores, and in the worst case you just add another computer [1]).

[1] The same embarrassingly parallel tasks that do well on multiple cores will do well on multiple computers.

Yes, they are doing that badly. Bulldozer was a giant disappointment. They have nothing on the table for the desktop crowd. At almost all price points it's silly to buy AMD at this time, unfortunately, especially for heat/power usage, etc.

AMD's strategy was to switch to milling out units of two cores or so (the Bulldozer module architecture) and then stitching them together into a processor. I guess it makes the design more compact and easier to fab.

For a single executing thread of a specific bit width, GHz means everything. The trick is whether they can scale it to multiple cores/threads while lowering power enough to match Intel's performance/watt at the high end of the compute arena. If they can do that, they will once again pull in DC customers.

-nB

Single core performance is all that matters when processing a toolpath for CNC machining.

Rubbish. There is no way your CNC machining app will even get close to the minimum latency that a single AMD core is capable of. What you are really saying is that your vendor is slow to get a clue about parallel programming.

What you are really saying is that your vendor is slow to get a clue about parallel programming.

Maybe there are CNC algorithms that aren't easily parallelizable. Or (more likely) they can be paralellized, but the CNC development teams haven't got around to doing that yet. It doesn't really matter which as far as the consumer is concerned -- in either case, they will want a chip that maximizes single-threaded performance. Finger-pointing doesn't help them one bit, but fast CPUs might.

It doesn't really matter which as far as the consumer is concerned -- in either case, they will want a chip that maximizes single-threaded performance.

Speak for yourself. I prefer to keep the money in my pocket and spend it on more frequent full-box upgrades; this keeps me ahead of the curve on average. Example: at a past gig where money was no object, I started life with a Core 2-class desktop, which was state of the art at the time. But no, even when money is no object, the beancounters will reject the idea of a new box every six months. In short order my once-shiny Intel box was being smoked

It can't be made parallel; each pass depends on the previous one. It's dumb that people think everything can be made parallel. Nine women can't have a baby in a month; that's been known for quite a while.

Perhaps you overstate the difficulty. There are many methods of making things parallel. While serializing constraints may in fact exist, it is rare that a problem cannot be factored in such a way as to make most of an algorithm parallelizable in spite of them. Or to put it simply, Amdahl's law has proved far less limiting in practice than feared, as shown by example many times over. Or, equally simply: if you don't try, you can be sure of not succeeding.
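For reference, the bound being argued about here is Amdahl's: speedup = 1 / ((1 - p) + p/n) for a parallelizable fraction p on n workers. A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: p = parallelizable fraction, n = number of workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a job that is 95% parallelizable tops out well below 8x on 8 cores,
# and can never beat 1/(1-p) = 20x no matter how many workers you add.
print(f"{amdahl_speedup(0.95, 8):.2f}x on 8 cores")   # ~5.93x
print(f"{amdahl_speedup(0.95, 10**6):.1f}x at best")  # approaches 20x
```

Which is the crux of the disagreement: the bound is real, but how large p can be made through refactoring is the part people routinely underestimate.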

See Athlon vs. P4. Both were best for single-threaded stuff, owing to having a single core. However, the Athlon did more with less, getting better performance at lower clocks. Why? It could do more per clock, or more properly, took fewer clocks to execute an instruction.

IPC matters, and the Core i series is really good at it; Bulldozer, not as good. That means that, all other things being equal, BD needs to be clocked higher than SB to do the same calculations in the same time.
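That relationship is just: time = instructions / (IPC × frequency), so matching a higher-IPC chip means a proportionally higher clock. A sketch with assumed IPC numbers (illustrative only, not measured values for either chip):

```python
def runtime_s(instructions, ipc, freq_hz):
    """Single-thread execution time: instructions / (IPC * frequency)."""
    return instructions / (ipc * freq_hz)

# Assumed, illustrative IPC values; NOT published figures.
sb_ipc, bd_ipc = 2.0, 1.4
sb_freq = 3.4e9

# Clock BD would need to finish the same work in the same time as SB:
bd_freq_needed = sb_freq * sb_ipc / bd_ipc
print(f"BD needs ~{bd_freq_needed / 1e9:.2f} GHz to match SB at 3.4 GHz")
```

With those assumptions, a ~30% IPC deficit eats the entire 10% clock bump the resonant mesh is said to buy.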

Sure, there are still issues, but in addressing the GGP's claim that GHz doesn't mean squat: GHz still matters. If cost, power, thread count, and average per-clock performance are all the same, which would you rather have: 3.67GHz, or 4GHz (~10% higher)?

-nB

Quick background: clocks on most generic chips today are structured as trees. As you can imagine, the fan-out of the clock tree is pretty large and thus requires clock buffer/driver circuits, which need to be balanced so that the clock signal gets to the leaves at about the same time (in a typical design where you don't use a lot of physical-design tricks). To ease balancing the propagation delay, the clock tree often physically looks like a fractalized "H" (imagine the root clock driving the center of the crossbar, out toward the leaves at the corners of the "H"; the wire lengths of the clock-tree segments are the same; then the corners of the big H drive the centers of smaller "H"s, and so on). Of course, at the leaves there can be some residual imbalance due to small manufacturing variations and wire loading, and that has to be accounted for when closing timing for the chip (to avoid short paths); ultimately these imbalances limit the upper frequencies achievable by the chip.
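The fractal "H" layout described above is easy to sketch: each level draws one crossbar plus two strokes, then recurses into the four corners with half-length wires, so every leaf sits the same wire distance from the root. (Purely illustrative geometry, not a real clock-tree generator.)

```python
def h_tree(x, y, length, depth):
    """Yield (x1, y1, x2, y2) wire segments of a fractal H clock tree
    rooted at (x, y). Every root-to-leaf wire path has equal length."""
    if depth == 0:
        return
    half = length / 2.0
    yield (x - half, y, x + half, y)                # crossbar of the "H"
    yield (x - half, y - half, x - half, y + half)  # left stroke
    yield (x + half, y - half, x + half, y + half)  # right stroke
    for cx in (x - half, x + half):                 # recurse into the corners
        for cy in (y - half, y + half):
            yield from h_tree(cx, cy, half, depth - 1)

segments = list(h_tree(0.0, 0.0, 1.0, 3))
print(len(segments))   # 3 + 4*3 + 16*3 = 63 wire segments, 64 leaf corners
```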

Additional background: In any electrical circuit, there are some so-called resonant frequencies because of the distributed (or lumped) inductance and capacitance in the network. That is, some frequencies experience a lot less energy loss than average (for the car analogy buffs: you can get your car to "bounce" quite easily if you bounce it at its resonant frequency).
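For the curious, the resonant frequency of an LC tank is f0 = 1/(2*pi*sqrt(L*C)). A quick back-of-the-envelope in Python, with made-up component values (not AMD's actual mesh parameters), shows how on-chip-scale L and C land right in CPU clock territory:

```python
import math

# Resonant frequency of an LC "tank": f0 = 1 / (2*pi*sqrt(L*C)).
def resonant_frequency_hz(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A ~1 nH on-chip inductor against ~1.58 pF of mesh capacitance
# resonates near 4 GHz (illustrative values only):
f0 = resonant_frequency_hz(1e-9, 1.58e-12)
print(f"{f0 / 1e9:.2f} GHz")
```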

The basic idea of the Cyclos technology is to "short-circuit" the middle of the clock tree on the chip with a mesh, to make sure the whole middle of the clock tree is coordinated to the same clock (as opposed to a typical H-tree clock, where jitter builds up at every stage from the root). That way you avoid some of the imbalances that limit the upper frequencies achievable by the chip. The reason I put "short-circuit" in quotes is that it really isn't a short circuit. If you just arbitrarily put a mesh in the middle of a clock tree, although it would tend to get the clocks aligned, it would present a very large capacitive and inductive load to drive and would likely increase power greatly. **Except** if that mesh is designed so that it resonates at the frequency you are going to drive the clock at, then you can get the benefit of jitter reduction without the power cost. Since you get to pick the physical design parameters of the mesh (wire width, length, grid spacing, and external tank circuit inductance) and the target frequency, theoretically you can design that mesh to be resonant (well, that remains to be seen).

The reason this idea hasn't been used to date is that it's a hard problem to create the mesh with the proper parameters, and the processor then really has to run at that one frequency all the time (well, you can do clock-cycle eating to approximate lower frequencies). Designers have gotten better at these things now, and the area budgets for this sort of thing have become affordable as transistors have gotten smaller.

FWIW, in a pipeline design (like a CPU), it's sometimes advantageous to have a clock-follows-signal clocking topology, or even an async strategy, instead of a clock tree; but of course there is a complication if there is a loop or cycle in the pipeline (often this happens at, say, a register file or a bypass path), so that trick is limited in applicability, whereas the mesh idea is a more general solution to clock network jitter problems.

How can the mesh be resonant to a square wave (with lots of high frequency harmonics over a huge band)?

I can imagine it being resonant to a single frequency sine wave.

But if the clock mesh is powered by a sine wave, you have to turn it back into a square wave to drive gates, and to do that you have to compare the clock voltage level with some known voltage levels, and there you may have process inaccuracies.

[quote]How can the mesh be resonant to a square wave (with lots of high frequency harmonics over a huge band)?[/quote]

There's no such thing as a square wave at 4GHz. You can draw them like that on paper, but in reality the edges smear into a pretty good approximation of a sine wave.

Regardless, it will still have some higher frequency components, but you don't have to worry about them. The resonance won't help generate nice sharp edges, but that's the line driver's job. The resonance is just to save energy by helping pump the voltage at the fundamental frequency.
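You can actually put a number on that. Expanding an idealized +-1 square wave in a Fourier series (odd harmonics only, with amplitude 4/(pi*n)), the fundamental alone carries about 81% of the total signal power, so resonating the mesh at f0 recovers most of the energy even though the harmonics get no help:

```python
import math

# A +-1 square wave has total power 1. Its fundamental has amplitude 4/pi,
# so the power at the fundamental is (4/pi)^2 / 2 = 8/pi^2.
fundamental_power = (4.0 / math.pi) ** 2 / 2.0
print(f"{fundamental_power:.1%}")  # ~81% of the energy sits at f0
```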

(Disclaimer, not an EE, but I've looked over their shoulders a bunch of times)

Thanks to you and PatPending, I'm now read into something that seems mighty interesting and far over my head. I'm wondering if, or how, this might be applied to the "3d" chips that IBM is/was working on. (Btw, I re-read James P. Hogan's "Inherit the Stars" on Saturday - he mentions stacked chips with internal cooling channels, in 1978.)

and has been for at least 5 years. A theoretical 10% performance boost? Gimme a break. I upgraded from a Core 2 Duo E6600 @ 2.4GHz to a quad-core i7 2600K which runs overclocked at 4.5GHz on air... Day to day, the new rig delivers a *mostly* perceptible performance advantage, but nothing earth shattering... I give you several recent changes that felt bigger:

1. Moving from a hard drive to an SSD.
2. Moving from a DirectX 9 class GPU to a DirectX 11 GPU (at least in games).
3. Moving from a pre-JIT JS browser engine to a JIT-engined browser.

As far as desktop CPU development goes, I think the future is largely about optimizing software for the multi-core architectures, not adding Gigahertz.

Cyclos resonant clock mesh technology employs on-chip inductors to create an
electric pendulum, or "tank circuit", formed by the large capacitance of the
clock mesh in parallel with the Cyclos inductors. The Cyclos inductors and
clock control circuits "recycle" the clock power instead of dissipating it on
every clock cycle like in a clock tree implementation, which results in a
reduction in total IC power consumption of up to 10%.

Inductors save power because unlike most other circuit elements, inductors
are able to store energy in a magnetic field so it can be used later on.
This is part of how switching power supplies get their efficiency.
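For a rough sense of scale, here's the energy parked in the mesh capacitance on each swing, which the tank hands back instead of burning in the drivers (component values are made up for illustration, matching nothing in the whitepaper):

```python
# In an LC tank the energy sloshes between the capacitor (E = C*V^2/2)
# and the inductor (E = L*I^2/2) instead of being dissipated every cycle.
C = 1.58e-12   # farads -- illustrative mesh capacitance
V = 1.0        # volts  -- illustrative clock swing
E_cap = 0.5 * C * V**2
print(f"{E_cap * 1e12:.2f} pJ stored per swing, not dissipated")
```

Tiny per cycle, but at 4 billion cycles per second across a whole-chip mesh it adds up to the watts the press release is talking about.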

The Bulldozer and i7-2600K were about the same performance-wise, but that's an 8-core CPU vs. 4 cores + HT. Power usage of both machines at the wall was around 250 watts under load. When you overclocked them, the Bulldozer to 4.8GHz and the i7 to 5GHz, the i7 used 80 more watts, while the Bulldozer doubled its draw to over 500 watts; I think it was 550 watts.

And they still sell Power7 with 8 cores and issuing 6 instructions per cycle at 4GHz+. They're obscenely fast, but they're also not cheap unless you're comparing them to Itanium, SPARC, or Intel's -EX series Xeons.

Except they won't sell them to you unless you are Sony or a reseller that's used to defence pork contracts. The last time I finally got a price on a POWER system (after two annoying weeks of the sales guy building up a "relationship" and carefully weighing my wallet), I gave up and bought four Xeon systems, each almost as good, for less than the price of the single POWER system.

Whatever makes a better processor is a good thing, but I find it ironic that AMD is promoting higher clock speeds after renaming their processors because of the clock speed wars.

It is not ironic, rather it is because returns from superscalar design are diminishing while feature size keeps shrinking and other incremental technology improvements keep delivering higher practical clock rates.