
64 Comments

7 - 10 posts here are about the fact that today's software doesn't use more than 2 cores efficiently. Well, 2 - 3 years ago there was close to none. Did Valve, Epic and the others build frameworks for using multiple CPUs before the hardware base was in place? The answer is no. Do most big software houses today put a big effort into scaling over more cores? The answer is yes. Should Intel/AMD wait until the software houses catch up? I don't understand it, but the vocal majority seems to answer this with a yes.
My question: when the big software houses are done with their multi-CPU frameworks, don't you believe they will then scale over n CPUs? User input, rendering/GPU work, AI at n times today's depth, etc. All real-life architecture is parallel; software is not yet, but hopefully that will change.
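To make the "scale over n CPUs" idea concrete, here's a minimal Python sketch, not taken from any of the frameworks mentioned; `render_chunk` is a hypothetical stand-in for per-core work (AI, rendering, whatever), and the point is only that the same code spreads over however many cores the machine happens to have:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def render_chunk(chunk):
    # Hypothetical stand-in for per-core work: sum of squares.
    return sum(x * x for x in chunk)

def parallel_total(data, workers=None):
    # Default to every core present, so the same code scales over n CPUs.
    workers = workers or os.cpu_count()
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(render_chunk, chunks))

if __name__ == "__main__":
    data = list(range(100_000))
    print(parallel_total(data))
```

On a dual core this uses 2 workers, on a hexa-core 6, with no change to the calling code; that is the property the "framework first, hardware later" argument is about.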

----
If life is good and you have insanely much money, you stop developing, you don't need to prioritize, and you slowly sink back into your pillow. Yes, AMD is fighting uphill, but if they manage to survive, nature has proven that fighting against uneven odds gives you a medium- to long-term edge (if you survive, that is). Tons of money doesn't save anything. I'm not sure they'll survive, but if they don't, a new company with clever engineers will rise somewhere in the future. Yes, we need competition, and there always will be.

This is the end of AMD. Unless this turns out like the P4 (not likely), AMD will have to release their processor first or soon after [or better yet, a 16nm ultra-fast processor, and while I'm still dreaming, make it free] and have it perform better (also not likely). Poor AMD. I was going to buy a Phenom II, but Intel seems the way to go, future-wise. AMD will be liquidated, as will VIA, and Intel will go back to selling way overpriced processors that perform worse than an i386 [Windows 7 certified].

Intel doesn't make fast GPUs. Even when they tried with that AGP GPU, ATI and Nvidia killed it. They won't let a new player into the graphics market without a fight. Lastly, Intel has been trying to beat AMD for 40-something years, and they're still not even close. Now that AMD has acquired ATI, they have superior graphics patents.

What is really amazing is the shrink-process timetable. It looks like they will meet the timetable for our first quantum dot processors, theorized to arrive at the 1.5nm process node by the year 2020.

I guess I can't blame them for changing sockets all the time, but I'm not sure I'll be switching any time soon. My Q6600 hasn't gone past 50% usage yet, even under extreme multi-tasking (editing HD video, etc.)

On the mainstream quad-core side, it may not make sense to try to upgrade to 32nm quad-core until Sandy Bridge at the end of 2010. If you buy Lynnfield this year, chances are that you won’t feel a need to upgrade until late 2010/2011.

So if you buy a quad-core, 8-thread, 3.0 GHz processor, you will "NEED" to upgrade in one year?! What?! It doesn't make sense to upgrade just for the sake of having the latest. Upgrade when your computer can't run the programs you need it to anymore, or when you have the extra money and you'll see at least a 30 percent increase in performance. You should be good for at least 2 years with Lynnfield, and probably 4 or 5.

I watch roadmaps from time to time and I know where AMD has potential.

Simplify the damn roadmap, platforms, chipsets, sockets!

Seriously, I need a spreadsheet and a calculator to keep it all straight.

Glad Anand gave a kind of summary of where and when it makes sense to upgrade, but I just don't have the patience to filter through it all to the point where I get a working knowledge of it.

One thing AMD has been good at in the past, if they keep it up, is keeping upgrades simple. I don't want a new motherboard and a new socket with nearly every CPU upgrade. I'm not sure if mobo makers love it or hate it; obviously they get new sales, but it's kind of nuts.
This alone, knowing I have some future-proofing in the mobo, makes CPU upgrades appealing and easy, and something I would take advantage of.

As far as the GPU/CPU goes, it's nothing I will need for years to come. We will have to wait until it permeates the market before it gets used by devs, just like multicore. It will at least take consoles implementing it before game devs start utilizing it, and even then it's liable to take a lot of steps back in performance (it's only hype now)...

I fail to see the purpose of introducing the 6-core Gulftown. Most software can barely take advantage of 4 cores, let alone 6. It seems like Intel just wants to brag that they can cram that many cores into a single package, without evidence that 6 cores will improve performance. It's almost like the MHz wars of the 1990s. Instead of spending time on a 6-core chip, why couldn't they just bring out Sandy Bridge earlier?

Check back a couple pages, I think we posted exactly the same thing, as I completely agree with you. :)

The only thing I can think of is that since the server market pays the bills, they are in a sense tailoring the chip for that purpose, and just making a consumer-level chip that will still be tops, but probably not as nice in most instances as a faster quad.

If I remember right (living here in the Phoenix area), there are 3 buildings in Chandler at the site... 2 of them will be converted over to the 32nm process; the 3rd building is apparently no longer going to be used, or will be used for something else...

I am wondering about the integrated graphics in Clarkdale/Arrandale: will it be DirectX 11 compliant? Is it going to be better than the GMA X4500? What about H.264 acceleration, 8-channel LPCM support and working 24p?

11X is in New Mexico, as the caption on the pic says. Specifically Rio Rancho, NM, near Albuquerque. It's OK; you'd be surprised how many times I've spoken with someone in the US on the phone who told me I was calling the wrong number, since they don't support locations outside the US. Go American education!

Does anyone know what the lifespan of LGA 1156 will be? Is Intel expected to change sockets again when we reach Sandy Bridge? Is there any chance that I will be able to get one motherboard to last me several years?

Might depend on who you buy the motherboard from. My motherboard is a P965 and is not Penryn-compatible, though other P965 boards are. There may be both hardware (say, power delivery) and software (BIOS) obstacles to future-generation processors.

Very little chance. Intel changes their sockets whenever they do a new-architecture processor. The only reason not to would be if AMD were so competitive that there was a real price war... By making new sockets they can make more money!

"Now that isn’t to say that the six-core 32nm Gulftown will work in existing X58 motherboards; while that would be nice, Intel does have a habit of forcing motherboard upgrades, we’ll have to wait and see." Unfortunately, my trusty, nearly three-year-old E6600/ASUS P5W croaked, and I need a new build *now* (my PS3 is no real substitute for PC gaming :p ). I was going to just go cheap and build an E8500/P45 rig, but after reading this, I'm debating whether I should just throw down the extra several hundred on an i7 build for future upgrade insurance. I'm leaning towards the latter.

My question is this: I've got a QX9650 at 3.2 GHz on an X38 ASUS P5E3 Deluxe. Is it worth upgrading to the i7 anytime this year, or am I fast enough to hold out until Gulftown rolls around in early 2010?

Want the highest end? Go for i7 now and upgrade to the Gulftown hexa-core next year.

Want a mainstream quad? Buy Lynnfield at the end of this year and upgrade to Sandy Bridge at the end of the next.

Satisfied with your E8x00 or another dual core, and think quad-core is a waste of money? Go for Clarkdale at the end of this year.

Want to buy a notebook? The 32nm Arrandale will deliver excellent performance with great power savings, plus an on-package graphics processor for even more power saving.

Want to buy a powerful quad-core notebook? Go for the Nehalem-based 45nm Clarksfield, which should deliver quite a lot of performance over current mobile CPUs, with Nehalem's power-saving features as well, though not as much power saving as Arrandale.

I'm a bit disappointed that the next top-of-the-line chip will be a 6-core instead of a pumped-up quad. We are still in multi-core infancy, with very few programs taking advantage of anything over dual-core and almost nothing taking FULL advantage of quad-core. I just don't see how a 6-core will be more beneficial than a higher-clocked 4-core...

As it stands, however, if the power efficiency is legit, my next computer may very well be a laptop.

Take a look at your Program Menu and tell me which apps today that are not multithreaded would receive serious benefit from being multithreaded, besides games. Single-threaded apps do benefit from multiple cores in typical usage scenarios, because they can run on a (semi-)dedicated core and not interfere with other apps.
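As an aside, that "(semi-)dedicated core" arrangement can even be enforced by hand via CPU affinity. A minimal, Linux-only Python sketch; the `pin_to_core` helper is hypothetical, but `os.sched_setaffinity` is a real stdlib call on Linux:

```python
import os

def pin_to_core(core_id):
    """Pin the current process to one core (Linux-only; hypothetical helper)."""
    os.sched_setaffinity(0, {core_id})   # pid 0 = the calling process
    return os.sched_getaffinity(0)       # the set of cores we may now run on

# Example: dedicate core 0 to a single-threaded app so work on the
# remaining cores doesn't interfere with it (and vice versa).
if __name__ == "__main__":
    print(pin_to_core(0))
```

In practice the OS scheduler does this soft-dedication automatically; explicit pinning is only worth it for latency-sensitive workloads.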

Interesting thought. I'm hoping that with the mainstreaming of dual core, multi-threaded apps become more common and the single-to-dual jump turns out to be the biggest leap. But it's really just a hope on my part; I don't know if it will happen.

Isn't there a multitasking advantage with 4-core machines? Also, once we start ripping 720p and 1080p files, 6 cores is gonna be hot.

There are definite multitasking advantages with quad-core if you are heavily multitasking (I'd argue a tri-core is currently used more effectively than that final 4th core). Single to dual, however, was a much greater difference for multitasking on the whole.

I just don't see the quad-to-hex jump being more beneficial than a quad-to-juiced-quad jump in this case.

Yeah, can't say I'm real happy about the lack of a 32nm quad-core for 1366. If my motherboard supported Penryn I'd probably just buy one of those cheap, get an SSD, and wait for Sandy Bridge. Since it doesn't, the decision is more difficult. It probably depends on how much business I get this year.

Actually, they fully delivered on the marketing. It's just that when Nvidia/ATI delivered products in the same space, Intel's product looked rubbish. There is nothing wrong with the G45 other than it not being a 9400 or a 790GX.

I just don't see how AMD competes long term. With Intel moving to 32nm faster than expected, and with mainstream parts, that would put them 18 months ahead of AMD, unless somehow they manage to pull off a similar coup. But it doesn't look as though they will be able to.

We might remember that a bit over a year ago AMD stated quite boldly that they would move to within 6 months of Intel's process changes, but they are still a year behind. No progress there. Unless they can manage to switch around their roadmap the way Intel seems able to, they will fall further behind.

TSMC's fabs will always be a generation or so behind the likes of Intel's own, just as AMD (with IBM's assistance) were ahead of them in the past.

I can't see AMD's fab company getting much outside investment in the current economic climate; new state-of-the-art fab facilities are too expensive, and there is no guarantee of profitable contracts to keep them busy. The Foundry Company is never going to catch up with Intel unless a miracle happens, and TSMC etc. will likely be direct competitors.

Intel are speeding up their fab and process development because they have money in the bank and continued profits to fuel it. AMD are in dire straits financially and making a loss. Even with the risks hedge-fund managers take, they'd be mad to put money into AMD just now.

I wouldn't count AMD out just yet if I were you. One false move from Intel and an unexpected innovation from AMD and they're back on their feet. If in Q4 2007 you had said ATI would level the playing field with Nvidia the following year, most would have called you crazy, yet it still happened. So I still have hopes for AMD.

In all seriousness, I have a feeling AMD might pull a rabbit out of its hat with their new architecture, like ATI did with the 4000 series. Actually, technically they did with the Phenom II, but really it was just too late in the game to make the significant dent that ATI's 4000 series did (though I'd say the triple-cores this round are a big win).

At any rate, 2011 (Bulldozer, or whatever they're calling it now) had better be huge. The 65nm X2s were somewhat competitive with Conroe, but after that it just started going downhill. If Bulldozer doesn't do it, I don't think AMD is going to be able to get back up. =(

Let me see if I've got this straight: in 2H'09 (I would actually bet Q3'09) we will finally see the Core i5 quad-cores (Lynnfield/Clarksfield) on a new LGA-1156 socket, which should have been released in Dec '08.

So the 45nm Core i5 quads will be the highest-performing CPUs available for LGA-1156, positioned above the 32nm Clarkdale/Arrandale dual-cores (the 'Core i5 Duo', maybe?) which arrive in Q4'09.

How do they intend to make the LGA-1366 platform have better overclockability? The i7 and i5 are almost the same; are they going to actively prevent OC'ing on the i5? That would be ridiculous.

Somehow I don't think the artificial socket segmentation will herd a significant number of enthusiasts into LGA-1366, the higher-margin cash cow Intel has planned it to be.

Intel isn't going to artificially limit overclocking directly, but it is doing so indirectly by redirecting the better chips to 1366. So the i7 CPUs will be cherry-picked versions of the i5s and thus will overclock better. Besides that, the only socket with Extreme versions will be 1366. (Though that is a niche within a niche, really.)

Overclockers are a very small fraction of the market. I'm not even sure Intel is thinking about overclockability when they engineer chips. Overclockability is more an artefact of good engineering than a design goal from the outset. Overclockers are always paranoid that Intel or AMD is out to get them by intentionally crippling chips. There just aren't enough of us for Intel to be concerned; we're like 1% of the total CPU market.

Pretty much every chip Intel has released at any price point since the introduction of Core 2 has been wonderfully overclockable. I wouldn't worry that Intel is going to change that soon, especially since Core i5 is basically just a mainstream processor with the same design fundamentals as the excellent i7.

Although I understand it's a hobby, I don't care whether people can overclock or not. As long as we have fast chips at a good price and they're faster than what we have... I mean, why would you care? Isn't it all about SPEED?

I wouldn't be surprised if the opposite were true. I'm really sick of all the hype about shrinking creating less heat. Look at the GPU industry: ever since they started shrinking, things got hotter and hotter. And now, with i7, even though it's not a die shrink and we're used to 45nm by now, the new hardware supporting minor changes in the CPU architecture seems to make things run hotter.

But you're making an unfair comparison; for one thing, the latest GPUs have only ever been produced on the newest nodes. Now, if we take, for example, a Radeon 3870 vs. a Radeon 2900 XT, the former draws far less power and will overclock better on air, almost directly as a result of the shrink from an 80nm to a 55nm process, despite the two performing exactly the same. Another example is the Core 2 E8000 series vs. the E6000 series. Despite the increase in cache size, the E8000 dissipates little enough heat that it ships with a very tiny heatsink compared to the earlier 65nm cores, and objectively it draws much less power at the same clock speed because it runs at a lower voltage.

You can see this sort of thing again and again throughout the technology industry: Coppermine (180nm) -> Tualatin (130nm), GeForce 7800 -> 7900, G80 -> G92, etc.

If you were to compare, say, a GTX 280 to an 8800 GTX and point out that the former draws much more power even though it's produced on a smaller process, well, yes, but that's because they've clocked it higher and there are far more transistors (twice as many, in fact).
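The argument above follows from the textbook dynamic-power approximation P ≈ C·V²·f: a shrink lowers both the switched capacitance and the voltage, and voltage enters squared. A toy sketch with made-up illustrative numbers (not measured figures for any of the cards named):

```python
def dynamic_power(cap_rel, volts, freq_ghz):
    """Relative dynamic power via the standard approximation P ~ C * V^2 * f."""
    return cap_rel * volts ** 2 * freq_ghz

# Hypothetical same-design die shrink at an unchanged clock:
old = dynamic_power(cap_rel=1.0, volts=1.30, freq_ghz=2.6)  # larger node
new = dynamic_power(cap_rel=0.8, volts=1.10, freq_ghz=2.6)  # shrink: lower C and V
print(new / old)  # well below 1: the shrink draws less power
```

Pack in twice the transistors and raise the clock, though, and the C and f terms grow faster than the V² term falls, which is the GTX 280 case.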

That's because every time they shrink the chips, they pack in new features and push the clock speed to the bleeding edge. If all they did was die-shrink the old tech, we'd all be running something like an Atom CPU right now. Atoms closely resemble Pentium 3s, but on a modern process they only draw what, 5 watts?

GPUs run hotter because each new shrink packs in double the transistors of the previous hardware... Shrinking the manufacturing process lets us fit that many more transistors in the same area; of course it gets hot...

For the CPU market, the problem is the ever-growing amount of cache memory. Intel processors are designed with a large cache as their answer to the improvements AMD brings to the table.

I suspect that Intel will have more trouble after this move to the new fab process, because the difficulty of moving to a new process node grows at an exponential rate. We saw Intel hit a wall with the Pentium 3 line because they were not ready for a process shrink at that point, so the P4 came out. When Intel got their process technology back on track, they could go back to the Pentium 3 design (with improvements) to release the Core and Core 2 Duo.

There will come a time when an all-new design is needed to hold on to their lead, and that is when AMD will probably catch back up, if AMD can survive until then.

Even though my last three CPUs were all from AMD (they made sense at the time: K6-III/400, Athlon XP 1700+, Athlon 64 X2 4400+), I have to disagree with your comment about the improvements (presumably the integrated memory controller) which AMD brings to the table.

With Core i7, Intel has effectively removed the one last technological advantage AMD had: faster memory access. The fact that Intel chips still tend to have larger L3 caches is quite simply because they can afford to give it to them, as they are ahead of AMD on the fab process. For a high-end desktop chip with die space to spare, you could add some more cores which will probably sit idle (keeping four busy is hard enough, especially with HT), but adding more L3 cache (so long as its latency is not adversely affected) is a very cheap and easy way to use up the space and provide a bit of a speedup in almost everything.

AMD is currently fighting a losing game. The Phenom II (a bug-fixed Phenom) cannot compete with Core i7 on AMD's current fabs, and unlike Intel, who have the steady tick-tock of new process, then new design, with large teams working on each step, AMD seems to have one team working on a new design, which has to be made to work with whichever process looks like the best option at the time.

We need AMD to survive for the x86 (or x64, who came up with that :p ) CPU market to be competitive, but I think the head of AMD is going to have to get into bed with the head of IBM, else they are doomed to fall ever further behind Intel in chip design. The K10 is promising, but a long way off still, and AMD hasn't exactly been raking in billions of dollars of profit recently to fund that R&D. VIA have found an x86 CPU niche they can compete in; I fear that unless AMD pulls an elephant out of the hat with the K10, they'll have to slot in between VIA and Intel, providing CPUs specialising in a particular performance sector, with Intel being the undisputed leader.