It depends on the game and resolution; you can't generalize and say the GTX 2xx does much better.
There are lots of games where the GTX 260 doesn't do so well against a 4870 1GB at high resolutions and detail settings.
Please present some proof (google tons of reviews until you find one that suits your argument) that the GTX 2xx has a much better minimum framerate, and don't post just one or two games. You generalized as if it happens in most games, if not all of them, so please show us the proof.
I know you like your video card very much (Nvidia fan, stated as a good thing of course), but there have to be some limits to how far you defend your video card.


The GTX 260 is not even my video card.

I'm not going to argue with you today, and I'm definitely not going to spend one more minute showing you anything. The people who matter to me know what I'm talking about, or at least have enough brains to understand what a comment like mine above means and to act and decide accordingly.

Have a nice day.

PS: When you take the time to answer (and provide proof for) the questions you still owe me, maybe, and I mean maybe, I'll try to find some benchmarks that show what everybody who has seen both cards in action already knows, e.g. DrPepper.

In the meantime, you could troll me less and show me the opposite benchmarks: the ones that show the HD4870 having better minimum fps WHILE at the same time having the same averages. If the averages are higher, that just shows the card is faster in those games, and it doesn't contradict my point. Again. Have a nice day.
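To see why this point stands on its own, here's a minimal Python sketch with made-up frame-time traces (the numbers are purely illustrative, not benchmarks of either card): a card can win on average fps and still lose on minimum fps.

```python
# Hypothetical frame-time traces in milliseconds (illustrative only).
# card_a is fast most of the time but has one big spike;
# card_b is slightly slower overall but consistent.
card_a = [8, 8, 8, 8, 50]
card_b = [16, 16, 16, 16, 20]

def avg_fps(frame_times_ms):
    # Average fps = frames rendered / total seconds elapsed.
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def min_fps(frame_times_ms):
    # Minimum fps = the slowest single frame.
    return 1000 / max(frame_times_ms)

print(round(avg_fps(card_a), 1), min_fps(card_a))  # 61.0 20.0
print(round(avg_fps(card_b), 1), min_fps(card_b))  # 59.5 50.0
```

Card A is faster on average, yet card B has the much better minimum, so higher averages alone don't settle the minimum-framerate argument.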


I agree with that... I bought both the 4870 and the GTX 260, and the GTX had noticeably higher minimum framerates. I'm not a fan of one side or the other... which is why I bought two cards and RMA'd one...

I've fried plenty o' parts when not OCing. I need to start doing it again to avoid problems...

I think both parties are to blame in this pathetic "OC and rename" scheme! nVidia, of course, is king, with up to three names for one single card! AMD seems keen on beating that record too, after managing to grab the top spot in Vantage, so they are just as guilty!
The good news is my 4870 just lengthened its life cycle!
With both nVidia and AMD milking their cows for a bit more time, we might at least expect some decent driver support from them!

Releasing the same driver every month with little more than a few added Crossfire profiles is hardly enough for me.

Their driver support has gone to the complete crapper.

They didn't officially support the HD4670/50 until more than 3 months after it was released. That means that in the three driver releases after the HD4600 series' launch, they couldn't even be bothered to add support for a card they had officially launched. :shadedshu

The HD4850x2 was officially launched with the HD4870x2, and HD4850x2 cards started showing up on the market about 3 months later, but they only finally released official drivers for it last month. That's about 9 months without driver support for an official product. :shadedshu

If the monthly driver releases were actually improvements in any way beyond a few added Crossfire profiles, support for these cards would have been added in the very next driver release, not months later.

And all that just when Nvidia's drivers have been on a roll, like a friggen freight train.

Pretty much every release brings performance improvements and new PhysX drivers... they just keep getting better and better.

Honestly, drivers were a decent factor in my getting rid of my 4870s. Releases seem few and far between, and then you wait a month for one and it hasn't even got what you want in it :shadedshu

Another reason they annoy me is the lack of per-game profiles for image settings. Sure, I loved 24x edge-detect AA, but it's either ON or OFF, and naturally not all games will play well with such a high level of AA, and really, forget going into the CP every time to enable or disable it.

Relatively speaking, if the RV790 core is just a more power-efficient RV770, then the same clocks on both should produce the same results. But as for the RV790 OCing better, I wouldn't count on it. They can rework the RV770 all they want, but it will still be 10 lbs of beef in a 5 lb box. It may use slightly less power, which should produce slightly less heat. But to most people it will still be just an OCed RV770, and we all know the limits of that thing.

I know I won't be buying one, since I already have two 4870s that I'm going to build another rig around, not to mention the 280 and 260 that are in my old rigs.

Yes, it will be faster; both of your clock speeds exceed the new ones, unless of course core tweaks give it better clock-for-clock performance.


More clock on the same process (55nm) means more power draw. I must say I was hoping for a 40nm preview on an older architecture, like back in the RV430 days (a 110nm shrink of the R423) that prepared the way for the new R520 (X1800 XT). I'm more than disappointed with this news.

Let's just hope they don't f?ck us like back in the days of the R360-based graphics cards, which this RV790 release has often been compared to, when most 9800 XT/Pro (R360/R350) cards had a much shorter lifespan simply because power requirements were obviously neglected.

GDDR5 is a whole lot better; it's just moving with the stream. DDR2/GDDR3/GDDR4 are on their way to becoming obsolete, and DDR3/GDDR5(/6) are the shiny new technology.

GDDR3/4 differ from GDDR5 the same way DDR2 differs from DDR3.


Definitely, GDDR5 is a whole new class. I still wonder why it is even considered DDR, and not GDR. The problem is that it hasn't been properly applied. It has only been paired with the RV770, which really can't take advantage of what it is capable of. The two best situations for GDDR5 are on low-bus-width cards (128-bit) and extreme high-end single GPUs (which we haven't really seen yet; the next generation of GPUs should fit the bill though).
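As a back-of-the-envelope check on the bus-width point, here's a small sketch of the standard bandwidth arithmetic (the clock figures are illustrative round numbers; GDDR3 double-pumps data, GDDR5 effectively quad-pumps it):

```python
def bandwidth_gbs(clock_mhz, transfers_per_clock, bus_bits):
    # GB/s = MHz * transfers per clock * bus width in bytes / 1000
    return clock_mhz * transfers_per_clock * (bus_bits // 8) / 1000

print(bandwidth_gbs(900, 4, 256))   # HD4870-style GDDR5, 256-bit: 115.2 GB/s
print(bandwidth_gbs(1000, 2, 256))  # fast GDDR3 on the same 256-bit bus: 64.0 GB/s
print(bandwidth_gbs(900, 4, 128))   # GDDR5 on a budget 128-bit bus: 57.6 GB/s
```

So GDDR5 on a narrow 128-bit bus can get close to what fast GDDR3 needs a full 256-bit bus to deliver, which is exactly why it makes sense on cheap narrow-bus cards and on bandwidth-starved high-end GPUs.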

That was true when GDDR4 first came out. However, GDDR3 was able to catch up to GDDR4 in clock speeds. It still requires slightly more voltage, but is much, much cheaper to produce.


Not exactly. GDDR4 had little competition (Samsung only, AFAIR), which is the main reason nobody wanted to pay extra for something 'slightly better', especially since Samsung only showcased, and never really mass-produced, its 1600MHz GDDR4 chips. Those chips reached the lower end of the GDDR5 spectrum but consumed far more power, and ATi sensibly chose to skip them rather than put hot chips that had already hit their limits on its cards. Samsung's healthy financial situation also meant ATi couldn't get a deal as good as the one it struck with Qimonda, which was, and still is, in a financial squeeze. So in fact that better deal with Qimonda, and the marketing bump it produced, gave OC enthusiasts a big treat; GDDR5 was simply a must-have for a lot of people for almost a whole year.

Definitely, GDDR5 is a whole new class. I still wonder why it is even considered DDR, and not GDR. The problem is that it hasn't been properly applied. It has only been paired with the RV770, which really can't take advantage of what it is capable of. The two best situations for GDDR5 are on low-bus-width cards (128-bit) and extreme high-end single GPUs (which we haven't really seen yet; the next generation of GPUs should fit the bill though).


I don't really get you there. DDR1/2/3 is used on budget 64-bit cards, where the narrow bus configuration lets them use 8x8 or 4x16 chips. And it's far more affordable to use DDR3 than GDDR3, because DDR3 is newer, as I said, more comparable with GDDR5 in power consumption, and gives almost the same memory bandwidth on the same bus. DDR3 @ 1600 CL7 chips also had somewhat lower latencies than GDDR3 chips. Unfortunately, GDDR5 is still pretty uncommon when you want to produce lower-mainstream/HTPC-type cards (like the HD 4670) for far less money than enthusiast cards (3-4 times cheaper); it simply doesn't pay off, no matter what we wished for.