That's what I've been thinking, Bear, but I wanted to get the next-gen CPU from AMD, not a Phenom II. It has to be a 4-12 core Llano or something.


I have been thinking about the 8-core Bulldozer, but I know it's going to be a while yet. As long as the unlocked versions of Sandy Bridge are not insanely expensive, I'd get one and then see how Bulldozer does against it, probably around the time of the 7xxx and 6xx cards.

I'm obviously a serial upgrader, but I like to buy expensive, quality, fast gear less often, and never a whole new PC all at once. I'd end up both broke and compromising; not good, I'm on beans as is.

PS: Bulldozer plus a graphics core though, mate; it might well be worth waiting for, as it has an upgraded maths co-processor as well as DirectCompute ability in the core. Now we just need the OS to use the thing all the time instead of now and again.

I'm obviously a serial upgrader, but I like to buy expensive, quality, fast gear less often, and never a whole new PC all at once. I'd end up both broke and compromising; not good, I'm on beans as is.


I'm the same; the only things I had at the start of this PC are the motherboard and RAM, which is why I want them gone.

Really, if your GPU is multiple generations ahead of your CPU it will limit you in many games, but the major problem is that when running multiple cards you need a lot more CPU power to feed them than with a single card.

PS: Bulldozer plus a graphics core though, mate; it might well be worth waiting for, as it has an upgraded maths co-processor as well as DirectCompute ability in the core. Now we just need the OS to use the thing all the time instead of now and again.


If I switch back to one card, with either a 6970 or a 580, I will just keep my Phenom until I can see how Bulldozer does against Sandy Bridge before I upgrade.

If I switch back to one card, with either a 6970 or a 580, I will just keep my Phenom until I can see how Bulldozer does against Sandy Bridge before I upgrade.


Probably very wise. I've been trying to hold firm on keeping my 5870, as it does just fine at 1050 core in all games on one screen, even a 32" 1080p, but I'm always tempted to blow my dough on some kind of tech; I nearly blew my rent on an OCZ RevoDrive this month.

PS: Luckily, or as I see it unluckily, my landlord phoned me first asking for dough. He's a lucky man, lol.

Probably very wise. I've been trying to hold firm on keeping my 5870, as it does just fine at 1050 core in all games on one screen, even a 32" 1080p, but I'm always tempted to blow my dough on some kind of tech; I nearly blew my rent on an OCZ RevoDrive this month.

PS: Luckily, or as I see it unluckily, my landlord phoned me first asking for dough. He's a lucky man, lol.


Darn bills, taking money away from our upgrades.

Really, you should have little reason to get rid of your 5870 yet. The only reason I got a pair of 6870s is that I was on a 4870 and wanted to go with a triple-monitor setup. If you're gaming at 1920x1080, an overclocked 5870 will handle basically any game; if you really can't resist upgrading, a new CPU should squeeze some more FPS out of many games.

A gamer friend of mine was getting lower FPS in Left 4 Dead 2 with a 5850 than I was getting with my 4870 at the same settings and resolution, just because he was on an old 2.2GHz dual core. L4D2 is a rather CPU-limited game, but there are many others that are as well.

Can't help but think that, sadly, I am not getting good feelings about any of this new stuff anymore, whether it be the 69XX or the 580. It seems that even with smaller fabrication processes, bigger transistor counts, etc., the only way both teams seem to be going with their solutions is to bump up voltages, increase thermals, and increase power draw. Don't people realise that you can actually increase efficiency without having to plug several 8-pin PCI-E connectors into a card? Just look at the diesel engine: in the UK, ten years ago a 2000cc diesel engine would give you about 90 HP; today it will give you 180+ AND be more fuel efficient in the process. Perhaps NVIDIA and AMD need to go on a factory tour of BMW.


I agree to a point. I'm sure you remember the days of 3dfx, where adding a second GPU meant you doubled your frame rate: 100% scaling, and no real need to keep drivers updated to get it. Sadly, though, the free cake was eaten a long time ago, as you, me, and others who remember know.

Where I don't agree, though, is that efficiency has in fact gone through the roof at the same performance level; it's just the top-performance part that makes it appear as though this is not true. Today we have more transistors in a Juniper chip than in anything 3dfx made, while performance has ploughed through the roof and power use has been kept in check. Get a 400W Corsair and you have enough juice for a 5770/GTS 450, an i5 760/Phenom II X4, 4GB of DDR3, and gigabytes of drive space. Even in the late nineties we were not using much more than (comparatively) poorly efficient 300W units. Sure, today's parts produce more heat, but we now have several orders of magnitude more transistors operating on much the same amperage. I remember using a 300W unit for an 800MHz Duron, a 3dfx 5500, 1GB of 133MHz SDRAM, and two 120GB 7200.7 drives in RAID 0, and I thought that was quick. I built that in 2000 when I graduated high school and didn't build another system until I went to college at Kansas State in 2007. Moving from that to a C2D, a 10k Raptor, and a 1950 Pro was a shock.

If we hold power usage and performance steady over, say, the last twelve years, the increase in efficiency is quite staggering. I think the real proof of this is that the same games I played as a kid (Doom, Quake, Unreal, C&C, etc.) are starting to show up again, but now played over the internet in your browser. IMO, "shocking" is an understatement.
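The 400W power-budget claim above is easy to sanity-check with rough arithmetic. A minimal sketch follows; the per-component wattages are ballpark assumptions for illustration, not measured figures:

```python
# Rough sketch of the power-budget reasoning above.
# All TDP figures are ballpark assumptions, not measured values.
components = {
    "HD 5770 / GTS 450 (GPU)": 110,   # approximate board power, watts
    "i5 760 / Phenom II X4 (CPU)": 95,
    "4GB DDR3 + motherboard": 40,
    "drives and fans": 30,
}

total = sum(components.values())
psu_watts = 400
headroom = psu_watts - total

print(f"Estimated load: {total}W, headroom on a {psu_watts}W unit: {headroom}W")
```

Even with these rough numbers the estimated load sits comfortably under 400W, which is the point being made about modern efficiency at mid-range performance levels.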

Random thing I noticed cruising Newegg earlier today: the retail price of 6850s and 6870s has crept up $5-10. Not a huge sum of money, of course, but who knows if it will go up some more along with other retailers. If you're thinking of getting one, sooner might be better than later.

I have been thinking about the 8-core Bulldozer, but I know it's going to be a while yet. As long as the unlocked versions of Sandy Bridge are not insanely expensive, I'd get one and then see how Bulldozer does against it, probably around the time of the 7xxx and 6xx cards.

Really, it's all about the speed/performance per MHz, as most games are limited to a couple of cores.


They will be pretty expensive on arrival, I would guess. The top-end models allegedly stock at 2.8GHz, and the 2600S, I think they were calling it, should be unlocked. Just take a look at how much the i7 965 Extreme was on first release, add 10-20% at a guess, and you'd probably be close to the mark. All that said, I would think that once Sandy Bridge does arrive you could probably pick up a 6-core Gulftown for as cheap if not cheaper by then... just my thoughts.

Is there any performance drawback to using 8 chips as opposed to 16 for 2GB of onboard memory?

I agree to a point. I'm sure you remember the days of 3dfx, where adding a second GPU meant you doubled your frame rate: 100% scaling, and no real need to keep drivers updated to get it. Sadly, though, the free cake was eaten a long time ago, as you, me, and others who remember know.

Where I don't agree, though, is that efficiency has in fact gone through the roof at the same performance level; it's just the top-performance part that makes it appear as though this is not true. Today we have more transistors in a Juniper chip than in anything 3dfx made, while performance has ploughed through the roof and power use has been kept in check. Get a 400W Corsair and you have enough juice for a 5770/GTS 450, an i5 760/Phenom II X4, 4GB of DDR3, and gigabytes of drive space. Even in the late nineties we were not using much more than (comparatively) poorly efficient 300W units. Sure, today's parts produce more heat, but we now have several orders of magnitude more transistors operating on much the same amperage. I remember using a 300W unit for an 800MHz Duron, a 3dfx 5500, 1GB of 133MHz SDRAM, and two 120GB 7200.7 drives in RAID 0, and I thought that was quick. I built that in 2000 when I graduated high school and didn't build another system until I went to college at Kansas State in 2007. Moving from that to a C2D, a 10k Raptor, and a 1950 Pro was a shock.

If we hold power usage and performance steady over, say, the last twelve years, the increase in efficiency is quite staggering. I think the real proof of this is that the same games I played as a kid (Doom, Quake, Unreal, C&C, etc.) are starting to show up again, but now played over the internet in your browser. IMO, "shocking" is an understatement.


I know where you are coming from and I cannot disagree with your comments. In essence you are saying that graphics cards are ten times more powerful today but don't draw ten times the power or produce ten times the heat; you are right, and no one can argue with that. Whilst my points were specifically aimed at the top-end cards, think of it this way: if a GTX 460 draws X amount of power for X amount of performance, how does the 480 perform in relation to that? We see that the GTX 460 1GB on average performs 30% better per watt than the 480. My original point was, and I may not have put it across well, that IMO manufacturers should be doing better than that.
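The performance-per-watt comparison is just a ratio of a performance index to power draw. Here's a minimal sketch of the arithmetic; the performance indices and wattages are placeholder numbers chosen only to roughly reproduce the cited 30% gap, not review data:

```python
def perf_per_watt(perf_index, watts):
    """Performance per watt: higher is better."""
    return perf_index / watts

# Placeholder figures purely for illustration -- not measured review data.
gtx460 = perf_per_watt(perf_index=100, watts=160)
gtx480 = perf_per_watt(perf_index=130, watts=270)

# How much better (in %) the smaller card does per watt.
advantage = (gtx460 / gtx480 - 1) * 100
print(f"GTX 460 perf/watt advantage: {advantage:.0f}%")
```

The point of the ratio is that a card can win on raw performance (the 480's higher index) while still losing on efficiency, which is exactly the pattern being criticised.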

Just so as not to cloud the issue by picking on the now very boring Fermi power and heat comments (which, strangely enough, I don't subscribe to): if we take a look at the HD 5970 in relation to the 5870, in this case AMD do manage better, as the 5870 wins on performance per watt by only 15%, and this may well be the benefit of a dual-GPU top-end card. Maybe in the future that's the way everyone will go, if only on the principle that perhaps NVIDIA would have saved a lot of development time and money if they had not bothered trying to get the numbers out of a 480 and then moving on to a 580, but had simply waited for the 460 to arrive and then thrown out a dual-GPU 460... I dunno.

Personally, I would be a hypocrite if I were to denounce high-consumption cards, as I have owned and enjoyed the use of a GX2 and a 4870X2, both of which evidence your point very well!

The HD 6000 series could and should be much cheaper, and AMD would still make a killing; those GPUs are so much cheaper to make than NVIDIA's cores. As AMD said, they aren't into money making, all they want is to sell cheaper hardware because they care about their customers... yeah, right.

Random thing I noticed cruising Newegg earlier today: the retail price of 6850s and 6870s has crept up $5-10. Not a huge sum of money, of course, but who knows if it will go up some more along with other retailers. If you're thinking of getting one, sooner might be better than later.


The 5800s did the same; they'll go back down once NVIDIA puts out a card in response.

They will be pretty expensive on arrival, I would guess. The top-end models allegedly stock at 2.8GHz, and the 2600S, I think they were calling it, should be unlocked. Just take a look at how much the i7 965 Extreme was on first release, add 10-20% at a guess, and you'd probably be close to the mark. All that said, I would think that once Sandy Bridge does arrive you could probably pick up a 6-core Gulftown for as cheap if not cheaper by then... just my thoughts.

I know where you are coming from and I cannot disagree with your comments. In essence you are saying that graphics cards are ten times more powerful today but don't draw ten times the power or produce ten times the heat; you are right, and no one can argue with that. Whilst my points were specifically aimed at the top-end cards, think of it this way: if a GTX 460 draws X amount of power for X amount of performance, how does the 480 perform in relation to that? We see that the GTX 460 1GB on average performs 30% better per watt than the 480. My original point was, and I may not have put it across well, that IMO manufacturers should be doing better than that.

Just so as not to cloud the issue by picking on the now very boring Fermi power and heat comments (which, strangely enough, I don't subscribe to): if we take a look at the HD 5970 in relation to the 5870, in this case AMD do manage better, as the 5870 wins on performance per watt by only 15%, and this may well be the benefit of a dual-GPU top-end card. Maybe in the future that's the way everyone will go, if only on the principle that perhaps NVIDIA would have saved a lot of development time and money if they had not bothered trying to get the numbers out of a 480 and then moving on to a 580, but had simply waited for the 460 to arrive and then thrown out a dual-GPU 460... I dunno.

Personally, I would be a hypocrite if I were to denounce high-consumption cards, as I have owned and enjoyed the use of a GX2 and a 4870X2, both of which evidence your point very well!

The HD 6000 series could and should be much cheaper, and AMD would still make a killing; those GPUs are so much cheaper to make than NVIDIA's cores. As AMD said, they aren't into money making, all they want is to sell cheaper hardware because they care about their customers... yeah, right.


Gotta remember the GF104 is pretty damn cheap to manufacture, and with the 460s not using all of the core, yields must be through the roof. But I suspect both sides could drop prices; it's just a matter of what they can get, and right now people are paying it, even despite the economy.

They will be pretty expensive on arrival, I would guess. The top-end models allegedly stock at 2.8GHz, and the 2600S, I think they were calling it, should be unlocked. Just take a look at how much the i7 965 Extreme was on first release, add 10-20% at a guess, and you'd probably be close to the mark. All that said, I would think that once Sandy Bridge does arrive you could probably pick up a 6-core Gulftown for as cheap if not cheaper by then... just my thoughts.


It sucks, but if I buy a CPU I want the ability to overclock, and preferably to change the multiplier, so it's either a K version from Sandy Bridge or a Black Edition from Bulldozer. As long as one or the other beats the i7 by a nice amount, I'll happily pay what's needed, as I'm hoping for my next major CPU upgrade to last me a long time (at least two years).

If all goes well, AMD's next-generation graphics cards, code-named "Southern Islands", the Radeon HD 7000 series, will be officially released at the end of the second quarter or the beginning of the third quarter of 2011.

Some have argued that Southern Islands would have to wait until the second half of next year, but in order to get ahead of NVIDIA on the new 28nm technology, AMD is working in deep cooperation with GlobalFoundries, hoping to conquer the new node in the first half of the year.

According to sources, the chances of Southern Islands being released in time for the second quarter of next year are very good, but this largely depends on the progress of GlobalFoundries' 28nm process.

Just yesterday, we heard that NVIDIA has produced its first trial chips on a 28nm process, still at TSMC's foundry, but its new architecture based on that process, "Kepler", will not debut until the second half of 2011.

The Radeon HD 6000 series family, code-named "Northern Islands", is a compromise product created after TSMC's 32nm process was cancelled, making it a relatively modest transition, unlike Southern Islands, which will switch to a new core architecture.

The HD 6000 series could and should be much cheaper, and AMD would still make a killing; those GPUs are so much cheaper to make than NVIDIA's cores. As AMD said, they aren't into money making, all they want is to sell cheaper hardware because they care about their customers... yeah, right.


I'd like to know when and where AMD said anything to that effect. They may have higher margins on GPUs than NVIDIA does, but that doesn't mean they immediately have to cut into them. The purpose of a business is to produce a profit.

At any rate, the 6850 and 6870 pricing is quite fair considering what you get.