This is what the promise of Fusion really is. Putting a CPU and a GPU in the same MCM, or heck, even on the same die, is not revolutionary, and hardly even evolutionary. Thus far it has worked mostly as a cost-saving measure. The system architecture is still the same as a regular computer with separate CPU and GPU.

Now we're talking, though. Unfortunately this is more likely to mean the GPU will be a low-end one shackled with the inadequacies of DDR3 memory, rather than the amazing opportunity of letting a CPU and GPU share some horrendously fast GDDR5 memory.

> Now we're talking, though. Unfortunately this is more likely to mean the GPU will be a low-end one shackled with the inadequacies of DDR3 memory, rather than the amazing opportunity of letting a CPU and GPU share some horrendously fast GDDR5 memory.

Also, judging by the PS4 specs, will this really be the case? It's got 8 GB of GDDR5 unified memory.

This is pretty fantastic IMO; it will give AMD a considerable performance gain and, as stated earlier, makes the term Fusion much more true. What I find interesting is that you could potentially have several GB of memory go toward the GPU. With a little overclocking, this could probably handle six monitors easily, as long as they aren't doing anything GPU-intensive (such as HD video or 3D). If you want a multi-seat office or school computer, this would be ideal; many people overestimate the needs of office computers.

> Also, judging by the PS4 specs, will this really be the case? It's got 8 GB of GDDR5 unified memory.

GDDR5 isn't actually "faster" than DDR3; it's just optimized for graphics. Pretty sure it handles higher-volume transfers better at the cost of a bit of added latency, but I could be wrong, I haven't looked too far into it. Anyway, if they use GDDR5 memory, the CPU side of things will suffer while the graphics improve... so the best outcome would probably be DDR4 arriving in time for the APUs.

> Unfortunately this is more likely to mean the GPU will be a low-end one shackled with the inadequacies of DDR3 memory, rather than the amazing opportunity of letting a CPU and GPU share some horrendously fast GDDR5 memory.

GDDR5 isn't better than DDR3 across the board; at its core it IS DDR3, but optimised for the parallel workloads of GPUs. GDDR5 gets its high bandwidth because it can have multiple (high-latency, high-bandwidth) controllers per channel, and can read AND write during the same cycle, while DDR3 has a single (low-latency, low-bandwidth) controller per channel and can only read OR write during a cycle.

CPUs want DDR3 because they prefer low latency: they have multiple workloads all needing quick access so as not to hold up the current thread.
GPUs want GDDR5 because they want high bandwidth and care less about latency: they need to move a lot of data, but it is less time-critical.

These are two competing requirements. On the desktop you'll want DDR3 because you will have multiple workloads running simultaneously. Consoles such as the PS4 can get away with GDDR5 because they run a single, mainly GPU-related workload for which GDDR5 will suffice.

It should be noted that it is not GDDR5 itself that has high latency but the controllers, as high bandwidth and low latency are competing requirements.
Low-latency GDDR5 controllers should be doable; it's just that they haven't been needed for past, current, or upcoming AMD/Nvidia GPUs, which require high bandwidth. Perhaps a controller for APUs that can switch between high-bandwidth and low-latency modes is the answer.
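
To put rough numbers on the bandwidth half of that trade-off, here's some back-of-the-envelope math. This is just a sketch; the transfer rates and bus widths are typical 2013 figures I'm assuming for illustration, not any one part's spec sheet:

[code]
// Back-of-the-envelope peak-bandwidth comparison. The transfer rates and
// bus widths below are assumed, typical-for-2013 figures, not exact specs.
#include <stdio.h>

int main(void) {
    /* peak GB/s = effective transfer rate (GT/s) x bus width (bytes) */
    double ddr3  = 1.6 * (128 / 8);  /* DDR3-1600 on a dual-channel 128-bit bus */
    double gddr5 = 5.5 * (256 / 8);  /* 5.5 Gbps GDDR5 on a 256-bit bus */
    printf("DDR3-1600, dual channel: %.1f GB/s\n", ddr3);   /* 25.6 GB/s  */
    printf("GDDR5 5.5 Gbps, 256-bit: %.1f GB/s\n", gddr5);  /* 176.0 GB/s */
    return 0;
}
[/code]

Roughly a 7x gap in raw bandwidth, which is why GPUs chase GDDR5 while CPUs stick with the lower-latency option.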

If the integrated GPU has any real number-crunching power, this could be a huge deal. It won't need to be a GTX Titan; something with a few hundred compute cores that can significantly outperform a CPU on basic parallel tasks would do the trick. I can imagine computing clusters with a dozen APUs per blade server, offering huge throughput with relatively low power demands.

Of course, this is all marketing BS if the integrated GPU isn't big enough. Careful programming can mitigate the data-transfer overhead, which isn't so bad if you don't need to constantly load new gigabyte-scale blocks of data onto the GPU (bear in mind that the 'bottleneck' is the PCI-E bus, which is slow compared to the bus between the CPU and RAM, but it's not like we're moving 10 GB onto a USB drive).

I sure would love it if this turns out to be as good as it sounds: over the summer I'll be teaching researchers how to do GPGPU computing, and eliminating the data transfer step would make things way simpler when coding.
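
For anyone curious what that data-transfer step looks like in code, here's a minimal sketch using the CUDA runtime API. The explicit cudaMemcpy calls are the PCI-E overhead being discussed; the zero-copy mapped-memory variant at the end is the closest thing current discrete GPUs offer to the CPU/GPU shared memory this thread is about. The kernel and sizes are invented for illustration:

[code]
// Sketch of explicit copies vs. zero-copy mapped memory (CUDA runtime API).
// Kernel, names, and sizes are made up for illustration.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;              /* trivially parallel work */
}

int main(void) {
    const int n = 1 << 20;                   /* 1M floats, 4 MB */
    const size_t bytes = n * sizeof(float);
    cudaSetDeviceFlags(cudaDeviceMapHost);   /* enable mapped host memory */

    /* Discrete-GPU style: explicit copies across the PCI-E bus. */
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    float *dev;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  /* the overhead... */
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  /* ...and back */
    cudaFree(dev);
    free(host);

    /* Zero-copy style: the GPU reads/writes host memory directly. */
    float *shared, *sharedDev;
    cudaHostAlloc((void **)&shared, bytes, cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&sharedDev, shared, 0);
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(sharedDev, n);
    cudaDeviceSynchronize();                 /* wait before touching results */
    printf("shared[0] = %f\n", shared[0]);   /* 2.0 */
    cudaFreeHost(shared);
    return 0;
}
[/code]

Worth noting: on a discrete card, zero-copy still pushes every access over the bus, so it's often slower when data gets reused heavily. The appeal of unified memory on an APU is that the same no-copy pattern would hit RAM at full speed.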

> GDDR5 isn't actually "faster" than DDR3; it's just optimized for graphics. Pretty sure it handles higher-volume transfers better at the cost of a bit of added latency, but I could be wrong, I haven't looked too far into it. Anyway, if they use GDDR5 memory, the CPU side of things will suffer while the graphics improve... so the best outcome would probably be DDR4 arriving in time for the APUs.

Didn't say it was faster, just said that the PlayStation 4 specs showed unified memory.