AMD Kaveri APU details and release date announced

AMD has just kicked off its annual developer conference, APU13, and to get the ball rolling it has revealed a host of details about its upcoming APU, codenamed Kaveri, with the new chip set to arrive on 14 January 2014.

Kaveri will be the company's first chip to fully unify CPU and GPU on one die, an approach AMD calls Heterogeneous System Architecture (HSA).

The key to HSA is that both the CPU and GPU, for the first time, share the same memory space and have equal flexibility to create and dispatch workloads. This is in contrast to previous APUs, which required data for GPU workloads to be copied back and forth between the CPU and GPU memory spaces.
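As a rough illustration of why shared memory matters, here is a Python sketch. The function names are invented stand-ins for illustration only, not a real GPU API: the pre-HSA path pays for an explicit copy in each direction around every GPU dispatch, while the HSA path hands the "GPU" the same memory the CPU is already using.

```python
# Illustrative sketch (not a real GPU API): contrasts the pre-HSA
# copy-in/copy-out model with HSA's shared-memory model.

copies = {"count": 0}

def copy_to_device(data):
    copies["count"] += 1          # explicit transfer into GPU memory
    return list(data)             # device-side copy of the buffer

def copy_from_device(buf):
    copies["count"] += 1          # explicit transfer back to host memory
    return list(buf)

def gpu_kernel(buf):
    return [x * 2 for x in buf]   # stand-in for any GPU workload

def run_pre_hsa(data):
    dev = gpu_kernel(copy_to_device(data))  # copy host -> device first
    return copy_from_device(dev)            # then copy device -> host

def run_hsa(data):
    return gpu_kernel(data)       # GPU reads/writes host memory directly

result = run_pre_hsa([1, 2, 3])   # costs two copies
result = run_hsa([1, 2, 3])       # costs zero copies
print(copies["count"])            # 2
```

The point of HSA is that the second path becomes possible for real workloads, so small GPU-friendly tasks are no longer swamped by transfer overhead.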

AMD also revealed details of the highest-spec chip. Kaveri will have up to four CPU cores (two modules) based on the company's latest CPU architecture, Steamroller. It will also feature a GPU composed of eight GCN 1.1 Compute Units (CUs), each containing 64 stream processors, for a Stream Processor (SP) count of 512 - the equivalent of a Radeon HD 7750 desktop card.

AMD claims these numbers equate to a peak floating-point throughput of 856GFLOPS, which Anandtech (working from figures published by PCWorld) has calculated means the part will be called the A10-7850K, clocked at 3.7GHz and with a GPU speed of 720MHz.

Also confirmed is that Kaveri will support Mantle, AMD's proprietary low-level API for GCN, and that AMD's new TrueAudio technology will be incorporated into the chip.

The 14 January launch will, in contrast to previous APU launches, see desktop FM2+ chips arrive first, with mobile parts to follow soon after. The launch will be the week after the CES trade show, where AMD is expected to provide further details about the new chips, including pricing and specific SKUs.

Does anyone else find it odd that the GPU speed is 720MHz, when the 7750 is 825MHz and the next-gen consoles are around 800MHz? I'm assuming Kaveri shares some similarity with the way the custom-made Jaguar chips share GPU and CPU memory, or am I way off with my assumption?

I completely get that AMD want to go for the lower end of the build spectrum. I just wish there was a single voice from within the bowels of AMD that could stand up and say "Remember when we made stuff that was cool? Can we do that again?"

Mmmm, the CPU might be holding you back then, though. 2 modules/4 cores is not a whole lot of processing power. Though it might be enough to get good frame rates in games if you got a 7750 and stuck it in Crossfire with this thing.

Just...well, I guess it makes a fine budget machine.

The CPU is still just very, very sad (I think that works out to around 60-75% of an Ivy Bridge Core i3 in single-threaded performance, depending on Steamroller's exact gains, and around 90-110% in multithreaded integer workloads).

For a mATX or mITX build I would still rather save up and pay for an ASUS 760Ti Mini with pretty much any £80-£180 Intel CPU. Yes, it costs more, but you get much more performance and that system should last for years while still fitting in a small build.

For me, these APUs are only interesting when it comes to laptops and the potential for an NUC-sized build.

Originally Posted by azazel1024: Mmmm, the CPU might be holding you back then, though. 2 modules/4 cores is not a whole lot of processing power. Though it might be enough to get good frame rates in games if you got a 7750 and stuck it in Crossfire with this thing.

Just...well, I guess it makes a fine budget machine.

The CPU is still just very, very sad (I think that works out to around 60-75% of an Ivy Bridge Core i3 in single-threaded performance, depending on Steamroller's exact gains, and around 90-110% in multithreaded integer workloads).

Quote:

Originally Posted by SchizoFrog: For a mATX or mITX build I would still rather save up and pay for an ASUS 760Ti Mini with pretty much any £80-£180 Intel CPU. Yes, it costs more, but you get much more performance and that system should last for years while still fitting in a small build.

For me, these APUs are only interesting when it comes to laptops and the potential for an NUC-sized build.

It all depends on what you need. Why pay the Nvidia + Intel price if all you'll ever need is an APU?

Originally Posted by jrs77: The 512 SPs of the IGP might be the same as on an HD7750, but it only has access to shared DDR3 memory instead of GDDR5, so it won't come even close to an HD7750 in reality.

Also, we're quite possibly speaking of a 135W part for the A10-7850K, which isn't going to be cooled silently in a small box like a mITX HTPC.

I shouldn't think it would be that high. The HD7750 is a sub-75W part itself and can be passively cooled. The CPU is not a particularly high-wattage part either.

Quote:

Originally Posted by jrs77: It will never happen again that AMD develops and manufactures CPUs for high-end desktops. AMD's whole focus is on APUs and their new low-power ARM-based server chips.

They've basically given up competing with Intel.

Which is very sad indeed for the rest of the industry. In time, with profitability, and if there is still money to be made in the x86 business, AMD could return in force, but it would take a ballsy CEO and some outstanding engineers to pull the company out of the mire into which their x86 business has sunk.

I would have loved to have seen AMD continue pushing cores for example. Not the way that they have with the Bulldozer architectures, but real high IPC cores. You may say that we don't need such large numbers of cores, but servers do and it's a nice side benefit to get additional cores on the high end desktop.

Intel have stopped pushing cores so much on the desktop, instead focusing on mediocre 10% IPC improvements and 100MHz speed bumps year to year and integrated GPUs that are never used by most people that build custom machines. Even the new X79 parts are the lowest end of the IB-E parts.

Quote:

Originally Posted by bawjaws: How Mantle performs in general is the $64,000 question, isn't it? Personally, I think it looks promising, but anyone buying AMD for Mantle alone is taking a leap of faith at this point.

I took a punt on Mantle when I saw the HD7970 prices come down. It wasn't the only consideration, but it was certainly part of it, along with boosting my folding output massively.

Quote:

Originally Posted by Snips: I just wish there was a single voice from within the bowels of AMD that could stand up and say "Remember when we made stuff that was cool? Can we do that again?"

Originally Posted by Assassin8or: I shouldn't think it would be that high. The HD7750 is a sub-75W part itself and can be passively cooled. The CPU is not a particularly high-wattage part either.

If you look at the current A10-6800K and the current FX chips, then 125-135W isn't that high an estimate, tbh.

Quote:

Which is very sad indeed for the rest of the industry. In time, with profitability, and if there is still money to be made in the x86 business, AMD could return in force, but it would take a ballsy CEO and some outstanding engineers to pull the company out of the mire into which their x86 business has sunk.

Yes it is sad, as Intel doesn't need to improve much either, although they actually do quite a lot imho - especially shrinking their process nodes further and further, with 14nm to come in 2014.
And although the processing power doesn't increase much, the overall performance still gets better this way nevertheless. And they've shown with their Iris Pro chips that they can actually build a good APU to begin with.

Quote:

I would have loved to have seen AMD continue pushing cores for example. Not the way that they have with the Bulldozer architectures, but real high IPC cores. You may say that we don't need such large numbers of cores, but servers do and it's a nice side benefit to get additional cores on the high end desktop.

Intel have stopped pushing cores so much on the desktop, instead focusing on mediocre 10% IPC improvements and 100MHz speed bumps year to year and integrated GPUs that are never used by most people that build custom machines. Even the new X79 parts are the lowest end of the IB-E parts.

More than 4 cores isn't of much interest to the absolute majority of desktops. Only those who do a lot of rendering really need as many cores as possible, but these people usually work with render nodes to offload the work to a second machine purely meant for the task.

The things I'm mostly interested in are performance-per-watt and single-threaded performance, and in those areas Intel has beaten AMD since the introduction of the first Core 2 Duo.
AMD could, for example, develop more efficient CPUs to compete with Intel on performance-per-watt, but they don't seem to have any interest in that for desktop parts, and focus on that area only in the server market with their new multicore ARM-based chips.

Originally Posted by jrs77: More than 4 cores isn't of much interest to the absolute majority of desktops. Only those who do a lot of rendering really need as many cores as possible, but these people usually work with render nodes to offload the work to a second machine purely meant for the task.

Unless you're a Linux/BSD user, in which case the more cores the better. I use a quad-core chip, and would really like an eight-core when I next upgrade - because it has a direct impact on how quickly I can get things done.

Perfect example: let's say I'm compressing backups. While the traditional bzip2 application is single-threaded, I use lbzip2 - which is multi-threaded with a pretty nearly linear gain in performance, meaning what would have taken an hour is done in just 15 minutes. What about when I'm reprocessing PDFs to reduce the resolution of the embedded images for posting on the web? Again, normally that'd be a single-threaded operation - but using GNU Parallel to drive Ghostscript means I can run four instances at the same time on my list of PDFs to be processed, again finishing the job in around a quarter of the time it would normally take. If I had an eight-core chip, I'd be getting these jobs done in an eighth of the time.

Sure, if you're running *Windows* then anything above a quad-core might be a waste except for selected specialist scenarios, but don't tar all computer users with the same brush. My AMD chip might be weak in IPC, but it's a damn sight faster for my workloads than an equivalently-priced dual-core Intel part. S'why I bought it, after all.

As soon as AMD can offload its FPU computations to future GCN, there will be no concern over 'core' count. ALU will do the mundane tasks and you'll have tons of FPU for everything else.

It's a shame AMD hasn't committed to a '6-core' FM2+ though, and just whacked up the TDP for shits-n-giggles. They won't win awards but enthusiasts wouldn't care. FX9000 series still sold, and FM2+ has TrueAudio/PCIe/etc

Originally Posted by Bindibadgi: As soon as AMD can offload its FPU computations to future GCN, there will be no concern over 'core' count. ALU will do the mundane tasks and you'll have tons of FPU for everything else.

It's a shame AMD hasn't committed to a '6-core' FM2+ though, and just whacked up the TDP for shits-n-giggles. They won't win awards but enthusiasts wouldn't care. FX9000 series still sold, and FM2+ has TrueAudio/PCIe/etc

Originally Posted by Harlequin: Iris Pro good? Really? It's a larger die than the GTX 660!!

Horrendous cost as well - AMD have the market here, and tbh who actually cares about the latest i7 - games are really GFX-limited now.

Iris Pro isn't bigger than a GTX 660. The whole APU package is.

And yes, I actually do care a lot about single-threaded performance, and you can look at almost every rendering benchmark to get an idea of why an i5-4670K is most likely always better than any quad-core from AMD.

I never spoke of the latest i7 there, but about the reasonable $200 parts from Intel.

Quote:

Originally Posted by Gareth Halfacree: Unless you're a Linux/BSD user, in which case the more cores the better. I use a quad-core chip, and would really like an eight-core when I next upgrade - because it has a direct impact on how quickly I can get things done.

Perfect example: let's say I'm compressing backups. While the traditional bzip2 application is single-threaded, I use lbzip2 - which is multi-threaded with a pretty nearly linear gain in performance, meaning what would have taken an hour is done in just 15 minutes. What about when I'm reprocessing PDFs to reduce the resolution of the embedded images for posting on the web? Again, normally that'd be a single-threaded operation - but using GNU Parallel to drive Ghostscript means I can run four instances at the same time on my list of PDFs to be processed, again finishing the job in around a quarter of the time it would normally take. If I had an eight-core chip, I'd be getting these jobs done in an eighth of the time.

Sure, if you're running *Windows* then anything above a quad-core might be a waste except for selected specialist scenarios, but don't tar all computer users with the same brush. My AMD chip might be weak in IPC, but it's a damn sight faster for my workloads than an equivalently-priced dual-core Intel part. S'why I bought it, after all.

I speak generally, of course, and I always try to keep in mind what the absolute majority of desktop/notebook users are using their machines for. And in that case we're speaking almost exclusively of Windows-based PCs running nothing more than an office suite, a media player for HD content and maybe a few tools to edit a home video. PCs used for playing taxing games are already a very small minority of some 5-10%, and another 5-10% actually run demanding software like 3D packages.

We people on these forums usually forget that we're a negligible minority for the hardware vendors when it comes to volume sales.

Originally Posted by Harlequin: Can Mantle do it? Or can even this do it on an A88X board?

what about onboard GDDR5?

It's nothing to do with the motherboard chipset. Onboard GDDR5 isn't going to happen outside special orders like the PS4. Maybe Ultrabooks - because that's the only scenario where you'd have a fixed amount of memory - but GDDR is not exactly low-power, so that's a fail too.

Originally Posted by Gareth Halfacree: Eh? I scrape by quite nicely on my APU. How would an Intel chip - which would cost me several times as much to buy - earn me more money?

I think he meant: "those working in 3D rendering". But in that case, I'd build a small rendering farm using inexpensive, small nodes rather than fat, expensive X79 or Core Extreme CPUs.

For anything else, an APU is pretty much all anyone needs; throw in RAM according to your needs and an SSD to make it snappier. I saw the birth of the personal computer (read: consumer computer), and I'll probably see its death (in its current form). I really think the era of fat CPUs, GPUs and big boxes will end in the short to mid term. The SoC is the future; like it or not, this is where we are heading.

Demanding computing tasks will be offloaded to servers or farms. Home computing and entertainment will move to a single architecture. Consoles, tablets, smartphones, PCs and even connected TVs all offer much the same features (internet, gaming, YouTube, social networks and email). That's a lot of redundancy, and sooner or later all of this will fuse into something unique. Maybe I'm mistaken, but this is how I see the coming form of "computing".

The only difference between console, PC, tablet, etc. lies in the maximum achievable performance. Once, computers all had a sound card, a network card, a video card and so on; now computers are heading towards motherboard (which houses network and sound) + CPU (with onboard GPU, memory controller, etc.). The step to embedding the whole chipset inside the CPU is pretty small, and consoles are getting closer and closer to PC architecture. Time will tell :)

Those needing their computer for maths equations, 3D rendering, photo/video editing, code compiling. I don't know your own use of a computer, Gareth, to say whether an Intel chip would make you more money. If you are doing the news for bit-tech on a day-to-day basis then it would not.

What you said has already happened, guille, at least in the casual sector. The iPad has taken a lot of sales from the desktop PC sector. Even in my own household I'm the only one who still uses their PC on a regular basis; two laptops just collect dust. It's easier to just use an iPad for general browsing.

If AMD's APUs had launched before the whole tablet revolution in the casual sector, and had been sold to the two major brands in Lenovo and Dell, then you never know what would have happened.

Instead, the iPad feels quicker than most PCs without an SSD in the tasks it can do. I'll always say the biggest mistake companies made was not forcing SSDs into cheaper PCs, as they make the system feel so much quicker than any CPU would.

Originally Posted by rollo: Those needing their computer for maths equations, 3D rendering, photo/video editing, code compiling. I don't know your own use of a computer, Gareth, to say whether an Intel chip would make you more money. If you are doing the news for bit-tech on a day-to-day basis then it would not.

Correct, it wouldn't. That goes for a goodly chunk of the PCs around today, as well - and it's already been mentioned upthread how unlikely it is that markets where it makes a real and immediate difference to the bottom line are doing the rendering locally anyway. Just look at Nvidia's Grid: virtualised GPUs for offloading your rendering remotely, so you don't *need* a kick-ass workstation at your desk.

Video editing is a good example of the sort of workload where having a good wodge of local compute power is important, and where spending more now will save you money in the long run. Code compiling? Arguable. It's not like you can't be working on something else while your code compiles, and you spend far more time looking at the IDE with your CPU idling than actually burning code. Photo editing? That's RAM-dependent, not CPU - my APU copes quite admirably with me editing print-resolution images, and while certain intensive operations may complete marginally quicker on an Intel chip it would be many years before those savings add up to break-even on the cost difference.

You're dismissing the overwhelming majority of the PC market - those who *don't* do large amounts of video editing, local 3D rendering and the like. You're claiming that the edge-cases who do are the majority, which is so wrong-footed as to be ridiculous. It's not the case that "if you actually need to make cash from your computer then [Intel is] an auto buy," nor that only AMD fanboys buy AMD as you claimed. For the overwhelming majority of the market, an AMD chip will allow them to "make cash from [their] computer" exactly as quickly as an Intel chip - unless you're claiming an Intel chip will help me type faster - but, potentially, at a lower capital expenditure and total cost of ownership. For your edge cases, sure, but next time you fancy arguing the point have a quick look at the comparative sizes of the overall desktop market and the professional workstation market and you'll see just how small a percentage those edge cases make up.

TL;DR: Don't make sweeping generalisations that can be easily disproved.

This is a very, very marginal percentage of home computer usage. Even code compiling can be done on a "low end" processor. Most compilation is incremental now; there's no need for a full rebuild each time you hit the F9 key (or whatever it is :D).
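The incremental part is essentially a per-target timestamp comparison, which is the rule make applies. A minimal sketch of that rule in Python, where `compile_fn` is a hypothetical stand-in for the real compiler invocation:

```python
# Make-style incremental building: a unit is recompiled only if its
# source file is newer than its object file (or no object exists yet).
import os

def needs_rebuild(src, obj):
    if not os.path.exists(obj):
        return True                 # never compiled yet
    return os.path.getmtime(src) > os.path.getmtime(obj)

def incremental_build(pairs, compile_fn):
    # pairs: list of (source, object) paths; compile_fn does the work,
    # e.g. running `gcc -c src -o obj` for a C project.
    rebuilt = []
    for src, obj in pairs:
        if needs_rebuild(src, obj):
            compile_fn(src, obj)
            rebuilt.append(src)
    return rebuilt
```

On a typical edit-compile cycle only one or two sources trip the check, which is why the CPU barely matters for day-to-day compiles.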

Maths equations and 3D rendering can happily be offloaded to a farm, and that's what I'd do if I was making a living from this. I wouldn't want to wait for the render to finish before I could continue using the computer.

Photo and video editing is more about RAM. A huge proportion of filters are multi-threadable and thus offloadable. Video rendering/compression is also multi-threadable, but only locally, due to the amount of data to transfer.

I'd love to see an FPGA working alongside a SoC. The FPGA could be reprogrammable on the fly and thus provide a "CPU" matching whatever you're processing. Need a video-compressing CPU? Flash the FPGA. Need a specialised (de)crypting CPU? Flash the FPGA. And so on.

Edit: oh, and you can remote-develop too. I know this is ultra-marginal and people will think I'm mentally ill, but I've developed and compiled from my phone (a Motorola Droid, with full landscape keyboard) using SSH to connect to my desktop computer at home. Vi was perfectly usable; only the keyboard prevented fast typing. Compiling was then done as usual with the make command.

Ha, remote compiling - why would you want to do that? My Nokia N900 can run make, gcc, g++, etc. on-device, so you can code and run your code right there; quite useful at points when away from a desktop. While I've never tried it myself, it can reportedly also cross-compile for x86 as well (although at that point I don't see why you would compile on a desktop as well). Alternatively, just code in Python and be done with compiling :D

Originally Posted by PCBuilderSven: Ha, remote compiling - why would you want to do that? My Nokia N900 can run make, gcc, g++, etc. on-device, so you can code and run your code right there; quite useful at points when away from a desktop. While I've never tried it myself, it can reportedly also cross-compile for x86 as well (although at that point I don't see why you would compile on a desktop as well). Alternatively, just code in Python and be done with compiling :D

Since most source code files are archived and versioned (SVN, Git, etc.) on a server somewhere nowadays, we could imagine remote-compiling them too. Pretty useful when all you have is a thin client, when you are on the go, or when you lack the processing power.

Executables are pretty lightweight most of the time; compared to the compile time, sending one back to you is nothing. While your code is being remotely compiled, you can do something else with your computer.

Python is not what I need, sorry. It's a nice scripting language, but no use for what I do (C/C++ with no screen). Each object's compilation can be distributed between several cores/CPUs/machines; you then just gather all the .o files and link everything together (but maybe GCC already works this way).

If you're seriously into 3D rendering, then you go and buy a small server with as many cores as possible - something like a G34 board with two octa-cores for just under €1000 (board + two CPUs only).

Anyway, for the professional at home who does a lot of video editing or DTP (primarily Adobe CS), a combination of an Intel CPU + HD7750 is still the best option currently. Such a system isn't really expensive, and if you have a business you can write it off against tax anyway.
