Why not just go with the big GPU anyway? Granted, the console will technically be more powerful than a PC built with the same specs because of the lack of overhead, but why not go bigger? If they went with a 7870 or 7950, they could build the system to use the on-die 6550D equivalent and only power up the big GPU when a game is loaded. It would take less than 1 W to keep on standby, so WTF not? It would also eliminate potential CrossFire issues and deliver more power overall. Hell, they could even use the on-die GPU as a physics processor while gaming, via DirectCompute or AMD APP or whatever.

I think both Sony and AMD are missing an opportunity here or there is more to come.

This is hilarious. That console will be worse than a year-old budget gaming build. But we PC gamers are going to see good things out of this, because there will be more development for the x86_64 platform and developers will start trying to make sure their games work well on dual-GPU setups.

Can't post-process until the render is done--the second GPU sits idle until it's given a task. Likewise, GPGPU tasks won't accelerate the graphics, nor will they put the second GPU to full use. In short, you're providing double the GPU power but using, at most, 1.5x of it. A single GPU with the power of two could do all of the above with few drawbacks.
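To put a rough number on that "1.5 of 2" claim, here's a back-of-the-envelope Amdahl-style estimate. The parallel fractions below are made-up illustrations, not measurements of any real game:

```python
# If only a fraction p of each frame's work can be split across both
# GPUs (the rest, e.g. post-processing, runs on one GPU while the
# other idles), the speedup of two GPUs over one is limited.
# All p values here are illustrative assumptions.

def dual_gpu_speedup(p):
    """Speedup of 2 GPUs over 1 when only fraction p parallelizes."""
    return 1.0 / ((1.0 - p) + p / 2.0)

for p in (0.5, 2 / 3, 0.9, 1.0):
    print(f"parallel fraction {p:.2f} -> {dual_gpu_speedup(p):.2f}x one GPU")
```

With a parallel fraction of 2/3 you land right at 1.5x, which is the "at most" figure above; only a perfectly parallel workload reaches 2x.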

Everything in computers translates to consoles. Look at the CELL as a prime example. The issues plaguing computer developers with multithreading are amplified on PS3, because developers not only have the PPE's two hardware threads to code for, but also seven SPEs. Developers simply don't use the SPEs unless what they are trying to achieve requires it. CELL was abandoned for that reason.

Console developers have the same push for profits as PC developers. If they can achieve their goals without using the intricacies of hardware, they're not going to go there.

Likewise, the issue of limited RAM in consoles translates back to computers.

Very few games go over 2 GiB of RAM usage. Consoles have ~1 GiB. They're developed to stay inside the console threshold. They only go north of that 1 GiB when you turn up the settings.


That comes down to opinion; many would say turning the settings up to max should fall within that 2 GB limit (since apps without LAA support often crash beyond it).

Sure, at console-level graphics they don't go over it.

Meh, this arguing is pointless. These consoles are still a MASSIVE improvement over the current gen, and most people are happy with current-gen console graphics. They'll be faster, more power-efficient, and have more features.

The GPU, on the other hand, will be based on the "Southern Islands" architecture, and the IGN.com report pinpoints it as resembling the Radeon HD 7670. The HD 7670 is a rebranded HD 6670, which is based on the 40 nm "Turks" GPU.

In English: two GPUs never produce 200% of one GPU's output. One chip with double the power can.

The only reason I can think of for them doing this is that they're going for something really slim. It's easier to cool two small GPUs than one big GPU. At the same time, you'd think they would rule that out quickly, because two GPUs cost more to manufacture and install than one big GPU. I think Microsoft and Sony got hit with a stupid stick, and a big one at that. Sense, this makes none.


Not trying to sound like an @ss, but I watch so many benchmark sites, and I'd say 95% of them show pretty much full scaling from adding a second GPU, provided they are benching GPUs newer than the HD 5000 series. I did a random search for current GPUs and got this: http://www.guru3d.com/article/radeon-hd-6850-6870-crossfirex-review/7 which shows a 6850 CrossFire setup vs. a 6870 CrossFire setup; the scaling they got was really close to two GPUs working perfectly in tandem.

I know this only concerns titles where the devs put the effort into making the scaling work better. But it still shows the GPUs are very capable of doing it.

Forgive me if I misunderstood what you were saying. This is how I see it, though: current and even last-gen GPUs are very capable of scaling well together.
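For what it's worth, "near full scaling" is easy to quantify from any review's FPS numbers. The FPS values below are made-up placeholders, not Guru3D's actual results:

```python
# Quantify multi-GPU scaling from benchmark FPS numbers.
# fps_single / fps_dual are illustrative placeholders, not real data.

def scaling_efficiency(fps_single, fps_dual):
    """Fraction of a perfect +100% (2x) speedup actually achieved."""
    return fps_dual / fps_single - 1.0  # 1.0 would mean perfect 2x

print(f"{scaling_efficiency(50, 95):.0%} of perfect scaling")  # 90%
```

Anything above ~0.8 on that measure is what reviews usually call "near perfect" scaling.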

That comes down to opinion; many would say turning the settings up to max should fall within that 2 GB limit (since apps without LAA support often crash beyond it).


By "turn up settings" I meant "to levels that exceed consoles." For example, a cross-platform game can be expected to use ~1 GiB at the same resolution it runs at on consoles (720p or lower, usually). More than double the pixel count (1920x1200) and you'll see that jump to the 2 GiB I mentioned before. If consoles were removed from the picture, we'd have games using 4+ GiB now at 1920x1200, and using more than ~2 cores.
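The arithmetic behind that jump is straightforward (720p is assumed as the console render target here):

```python
# Pixel-count comparison behind the RAM estimate above (illustrative).
console = 1280 * 720   # assumed console render target (720p)
pc      = 1920 * 1200  # the PC resolution mentioned above

print(f"{pc / console:.1f}x the pixels")  # 2.5x
```

Frame buffers and many render targets scale with pixel count, which is why going from 720p to 1920x1200 pushes memory use well past the console baseline, even before higher-resolution textures enter the picture.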

Meh, this arguing is pointless. These consoles are still a MASSIVE improvement over the current gen, and most people are happy with current-gen console graphics. They'll be faster, more power-efficient, and have more features.


Agreed. The only logical explanation I can come up with for opting to use two GPUs instead of one superior GPU is that they're buying surplus GPUs for cheap. The illogical explanation would be that hardware developers haven't figured out why multi-GPU hasn't been wildly successful on computer hardware, and they'll learn from this mistake in the next generation.

Graphics aside, does anyone else think the best upgrade for the next-gen consoles would be some form of SSD or hybrid storage? Perhaps something like Seagate's Momentus hybrid, or a cheaper version of OCZ's RevoDrive? Obviously a straight flash SSD would be epically fast, but next-gen games are only going to get bigger and bigger as these consoles get more PC-like. Hell, PC games are getting absurdly large: SWTOR is 35 GB, Shogun 2 is 30 GB, RAGE was 22 GB.