The beta calculation speed seems way off - its just a simple OLS regression, which is basically instant on modern hardware. My guess is all of the speedup is in the graphing, rather than the beta calculation. Reply

Keep in mind that this is LibreOffice Calc, which is horrifically slow. So slow the developers will pretty much be the first people to tell you that it's too slow. As such I can totally believe that Calc really that slow right now.

The AMD/LibreOffice deal was announced back in July. It's basically a package deal that has AMD funding development of Calc so that they can finally refactor the code to improve its performance, while at the same time they'd also add OpenCL support for AMD. Consequently I'm far more interested in how the final, refactored version of Calc will stand up. If this is comparing old/slow Calc to OpenCL calc, then it's not a very useful comparison.Reply

I don't understand how AMD thinks this will work. The amount of dev time it would take to make your application offload to the iGPU is too large. Plus any workload that is performance starved realizes much larger benifit from a beefy CPU and/or beefy GPU, not gimped on both.Reply

I've been buying intel since the C2D hit the market, but I can still see where this processor would be handy. A emulation cab or a htpc come to mind. Or cases where price is the utmost concern. I don't see myself replacing intel in my two high end game rigs but there are use cases for these.Reply

Why? Intel still has the better single threaded performance and an i3 wins on multihreaded vs all the APUs and the iGPU is good enough for emulation. All your really doing by going APU on HTPC is wasting power.Reply

I have an old laptop that I use as a ghetto media center with xbmcbuntu, its got intel ironlake graphics. with a recent kernel (3.10 or newer I believe) I was able to get limited RGB over HDMI working :)Reply

first of all they call Graphic Processing Unit's specialized Co-Processors for a reason...in that they are specially constructed to perform lots of compute intensive thread-able tasksfor algorithms where processing of large blocks of data that can be done in parallel.

for instance Released in 1985, the Commodore Amiga was one of the first personal computers to come standard with a GPU. The GPU supported line draw, area fill, and included a type of stream processor called a blitter which accelerated the movement, manipulation, and combination of multiple arbitrary bitmaps. Also included was a coprocessor with its own (primitive) instruction set capable of directly invoking a sequence of graphics operations without CPU intervention.

it also had a Heterogeneous System Architecture in that it had "Chip RAM" that was shared between the central processing unit (CPU) and the dedicated chipset co-processors Agnus (DMA) controller (hence the name).Agnus operates a system where the "odd" clock cycle is allocated to time-critical custom chip access and the "even" cycle is allocated to the CPU, thus the CPU is not typically blocked from memory access and may run without interruption. However, certain chipset DMA, such as copper or blitter operations, can use any spare cycles, effectively blocking cycles from the CPU. In such situations CPU cycles are only blocked while accessing shared RAM, but never when accessing external RAM or ROM

Prior to this and for quite some time after, many other personal computer systems instead used their main, general-purpose CPU to handle almost every aspect of drawing the display, short of generating the final video signal...

you don't really believe this AMD PR "processor's IQ" do you , ,here's a hint, processors do not have any IQ , the worlds best engineers cant even simulate cognitive responses in real time yet never mind simulated IQ....

and for fun here's a little demo to show how a specialist gfx Co-Processor is good at fast gfx manipulation and not much else as it does not have a high sequential integer SIMD capability on a very high data interconnect such as todays CPU's and their AVX2 SIMD for instance.

There's actually quite a number of applications which are bottle-necked both by the limited parallel throughput of CPUs (even with SSE and AVX) and by the latency / overhead of moving smaller amounts of data over the PCIe bus. Those applications will benefit enormously from having access to what's effectively a smaller, but practically latency-free GPU.Reply

Also, there are plenty of places where it would be a big advantage to a company to enable big performance improvements with some resources spent developing the ability to use HSA. It's a direct competitive advantage in certain situations.Reply

LOL, it seems you didn't actually read that link so who;s the putz (noun1.a stupid or worthless person.) given it would help support your AMD HSA SOC stance if you actually understood even the basics...here it is again http://www.javaworld.com/article/2078856/mobile-ja..."HSA targets native parallel execution in Java virtual machines by 2015

The range of market segments that AMD's Kaveri addresses from 15w/35w notebooks, to micro HTPCs, all-in-ones to 65w/95w desktops, to powerful APU servers is more remarkable than other competing solutions because of the new HSA architecture that unleashes the power of the Steamroller cores and powerful Hawaii-based GCN engines.Reply

The major stumbling block for me is the lack of memory bandwidth. It has already been shown that the Richland series APUs scale well with increasing bandwidth for GPU intensive tasks. And yet here we are with a significantly more powerful GPU component, and yet still we have only dual channel DDR3?Reply

I hope they update their Dual Graphics compatibility--would be a nice feature if one was using a dGPU, to get a bit more bang for the buck out of the APU. Just a question of which cards I suppose? They tweeted about it: https://twitter.com/AMD/status/420366942763880448Reply

not great but ok... I managed to install their drivers and run a few things on ubuntu with a richland APU... Wasn't a walk in the park but it ultimately worked, plus the open source drivers are also ok-ish. All in all still far behind the windows optimizations but slowly getting thereReply

I compiled the latest kernel (includes the radeon kernel driver module), mesa (opengl), xf86-ati (user space driver) and drm directly from git (on Debian Testing distro). On Trinity (A8-5500) and Richland(A8-6500) the drivers run well, HD decoding is working great and Steam games such as the hl2/source engine based ones - L4D2, Day of Defeat, TF2 and other hl2 mods run perfectly with slight visual adjustments such as forced multi core rendering, no AA and vsync on.Reply

Very interesting! Now all AMD needs is to be on a competitive node for once. They need to jump to 14/16nm FinFET as soon as it's available in 2015 (with their next-gen chips, obviously).

Forget about cost-cutting. AMD needs some good PR, and it's going to get that PR if it has great performance and power consumption numbers, especially when compared to the competition. Always being 2 nodes behind Intel, isn't helping much with that...Reply

Do not forget that intel's 22 nm node is a marketing name and actually lies around 26 nm node by generally accepted worldwide fabs standards. GloFo is going to present true 20 nm node in the middle of this year. Next-gen Carrizo should be expected to be true 20 nm, but it depends on foundry readiness however. I'm little confused, why Carrizo is not going to have implemented DDR4 memory controller yet.Reply

Actually no. The process node number hasn't been the actual process node for a while. 16nm FinFet is actually just 20nm with FinFet as Gflo and TSMC and Samsung are trying to pretend they are tied with Intel. Currentl, Intel is 1.5 gens ahead but Broadwell will make it 2.5 by 20nm without FinFet is coming soon as well so it will be back to being 1.5 gen behind. Reply

I actually code on JVM based languages (think Java, Scala, Javascript) . There is a project which AMD is working on with Oracle which will offload certain instruction set to the GPU without the need for the coder to do anything special. If that project pans out and gives the gains it is supposed to deliver, it would be pretty awesome and would change my life for the better at least. Reply

20% IPC improvement from Steamroller is short of some lofty expectations from AMD fans, but still impressive. I also like how AMD committed to a hard release date that isn't months in the future, hopefully it performs well and we see what it is capable of in about a week.Reply

The grammar of this article drives me crazy, can you stop using the British style of referring to companies as groups of people? AMD is a singular entity. It is A corporation, not some random group of people. Beyond that you don't even keep it consistent. Just remember, "The United States is" not "The United States are." I cringe every time I see "AMD have" or "AMD are" and you don't even do it for "AMD does" or "AMD calls." Also I notice you store passwords as clear text, this is a disaster waiting to happen.Reply

Reset your password. They e-mail you your current password. If they are able to send you your password, that means it is stored as plain text. That means they have a database that has everyone's password stored with their e-mail address and username. This is how everyone gets their data/accounts compromised. Reply

It's bad practice to send someones password via email. Email is normally not encrypted, so sending someones password in an email will be in clear text and can be intercepted. Adding to the fact that most people re-use passwords (yes I know, a bad practice ... but happens a lot) you could be compromising your users at other sites as well.Reply

Incorrect assumptions aside, "This is how everyone gets their data/accounts compromised" isn't quite true either. While plain text storage of passwords would obviously be a no-no, password re-use is the real killer for large scale account/data hijacking. You shouldn't be re-using passwords anywhere, especially between important accounts (e.g. bank/email [which might have details of other accounts]/etc.) and a blog. Even if Anandtech was storing passwords in plain text (which they aren't), any compromise of their system should only allow an attacker to impersonate you here, not drain your bank accounts and ruin your life.Reply

Yes, I am British, hence the writing style. Setting the word processor to English (U.S.) doesn't catch everything. Regardless, surely 'The United States' is a collection of states, and thus plural? Just like a company is a collection of people, and thus plural as well. This is how I was raised :DReply

There is actually a lot of history behind using "United States is" vs "United States are" and I think that common usage now and the reasons behind it easily trump any purist grammar viewpoint (even though the purist result is even debatable). People think of the United States more as one country than as a collection of states nowadays. "United States" is the name of the country so it doesn't matter if it has an "s" on the end.http://en.wikipedia.org/wiki/United_States#Etymolo...Reply

Actually it is supposed to be "The United States are". Each state is a sovereign entity united under a central government. It wasn't until the 20th century largely that the nation was referred to in the singular. Reply

It's Ian Cutress, he's good at many things, grammar isn't one of them ;-)

A while ago, after he'd written one of his other technically brilliant but grammatically very poor pieces, I suggested in a comment that if he takes this job seriously (which I know is a side job for him), he should enroll in a writing course or something, but I got flamed by everyone who thought I was being rude.

This isn't even one of his bad pieces, sometimes I really struggle to even finish his sentences. Which is a pity cause he's full of very interesting knowledge...Reply

Whenever anything graphics related comes into play. The I5 is a great CPU with a very meh igpu, and terrible drivers to boot so that isn't helping anything, Richland currently smashes HD 4600, Kaveri will pull ahead even more so. The only thing Intel has thats faster than Richland is iris pro, and that only comes on $400+ processors. Plus its easy to be fast when you have 64 to 128mb of on die memory.Reply

If you're buying an overclockable Intel processor, it is not very likely that you'll stick with the integrated GPU, and when you stick a dedicated GPU into both systems, the Intel outperforms the AMD with one hand tied to its back.Reply

It's one of the few websites where I actually read the comments cause they can, on occasion, be useful and informative, but when things like Intellover happen, it becomes very difficult to stay focused on the conversation. Reply

Well, it's certainly an interesting way to go. It could be the way everything ends up in the future. We'll see when the final silicon is tested but I suspect that it will end up slotting in under Intel's i7 line somehow.Reply

Most likely will, these are main stream APU's not high end desktop enthusiast CPU's. Though, I really want to see how well the 20% x86 ipc holds up. IF they managed a solid 20% ipc improvement across the board, it could make it worthwhile for them to release an AM3+ replacement for the current FX series. It would at least put their CPU's real close to intels on ipc again. Reply

It's very much open to interpretation. If Steamroller is 20% faster per clock, is this single or multithreaded? If it's 20% faster per core, does this mean that despite the 400MHz drop, each core is 20% faster than Richland? One thing that appears to be set in stone is that the improvement is in the x86 hardware, regardless of HSA/hUMA and what-have-you.

If we end up with a situation where Kaveri is 20% faster per clock, the A10-7850K will beat the A10-6800K by barely 10%, assuming the 20% is a realistic average. The downclocking of the Kaveri range to bring down power usage makes sense with a significantly larger GPU than before; I'd be very much interested in seeing the power usage of the A10-7700K considering that it allegedly has 384 GPU cores.

I believe that Steamroller's main improvement in CPU terms is removing the multithreading bottleneck, something that has hampered the Bulldozer architecture. A 30% performance boost chip-wise over the original FX series (the FX-4130 may be an ideal comparison - 3.8GHz with 3.9GHz turbo) would be a decent jump over the course of two years.

If we were to expect a 20% jump over Piledriver (still mainly multithreaded, I expect), then the following comparison may be of some use:

Kaveri's aim should be to compete with the top i3s; each has four threads, the former having smaller cores with a view to parallelism and the latter having two fat cores with an ability to utilise them fully. I don't think we should expect Steamroller to compete with the i5s on a CPU core-for-core basis (especially not FP), and certainly not until Excavator. A lack of L3 will always hurt in such comparisons anyway.Reply

Yes, I saw that article when it came out, shook my head, and issued AMD a 'mental report card' of must try harder, and I made reference to the mobile parts and their lower performance in my comment.

However, my friend, as I also mentioned, doesn't play games, I do, hence my i7.

I was merely stating, like another poster later did, that there is not a bad DESKTOP CPU out there. They all pull their weight, and most people can get by with the modern performance they have on offer.

Hence the stagnation in the Desktop industry, many don't feel a need to upgrade so regularly anymore, having 'enough' performance for their everyday needs, some refused to accept Win 8 (no help either), whilst others moved over to tablets (that I cannot stand).

Pervioulsy this well-heeled friend of mine, upgraded regularly, handing down his 'old' (lol) motherboard & parts to me as he saw fit. It was a boon for me at the time really, and has stopped since this 6-core AMD came into his life.

Here is my example of what you said. I just "upgraded" my wife's gaming computer. She's been gaming fine on a Core 2 Duo for a few years, and I recently put in a better Core 2 Duo (E8500) from ebay, a new video card (GTX 760), and a little more DDR2 memory and she's off to play Call of Duty Ghosts :) I think it will last at least another year that way. The CPU purchase was $30 and put the system over the system requirement there. The memory I scrounged and was gifted to from work. And the video card will obviously be carried over to a new build whenever that happens. So the way I think of it, upgrade cost for this end of the road system was just $30 and that's worth it to me to stave off a new build, new purchases, full reinstall of OS and games etc for a little while longer. Oh yeah, I'll sell that old CPU on ebay as well so net is probably even less. Obviously this is not everybody's situation but it shows us playing a brand new FPS game on pretty old hardware which likely couldn't have been done in the past. Rest assured, when the new build is done eventually it will be screaming fast. I wonder if AMD will have FX CPUs by then.Reply

It took me a little while to figure this one out, and I hope it comes true, but I have serious doubts.

AMD's goal looks to be like this:

1) Release an APU with a powerful GPU and a good-enough CPU2) Develop an architecture that takes advantage of AMD's GPUs and leverages them like a CPU3) Develop an API that is light on the CPU to make up for AMD's historically-weak (FPU) CPUs4) Lean on said architecture and the API to make the weak CPU irrelevant because the GPU side of the APU is better at the relevant tasks anyway5) Profit

AMD is trying to upend both Nvidia and Intel by trying to make up for their weak CPUs by say "fuck it" and treating the GPU as a powerful vector-coprocessor, much better than AVX. If it works, and games take advantage of both Mantle and HUMA (you have both a Kaveri APU and maybe a Radeon R7 card), I shutter to think of the framerate.

I'd just like to make it clear that the primary goal of Mantle is probably to take the CPU out of the equation when it comes to graphics performance, making AMD's APUs just as competitive as the Core i7 -- because the CPU has been made irrelevant because of Mantle. You can argue there will come a point again where we're putting out ridiculous photo-realistic graphics and the CPU again becomes a bottleneck but maybe they've thought of that, too. In the mean time, using HUMA can also let AMD slap both Intel and Nvidia. You need to code a program to take advantage of MMX/SSE/AVX/whateverIntelComesUpWithNext (or at least change compilers) so assuming HUMA catches on (a very difficult way forward), this could get interesting.Reply

That's very much how I see it, too. But I see the gamble as being extremely problematic, because so far all of that only works in a very small niche: Where one APU is enough.With 1080p becoming the lower end in everything from smartphones to TVs or beamers, I don't see a single APU powerful enough to become mainstream.Yes, I can run Unigine Valley at 30FPS in a 1024x576 window on Trinity and Richland and perhaps with Kavery and Mantle it will work at 720p, but that still falls short of what most people will want. It’s half a PS4 and it needs to scale to twice a PS4 at least.Now if there was a *natural* scaling path, like the ability to simply add another APU or three to gain resolution (1080p and 4k), then they'd have me convinced.But currently the only way is to add a discrete GPU and HSA won’t scale with that.Well, compared to the current situation, where 50% of the APU die area become useless as soon as you add a dGPU (made Trinity/Richmond a hard sell for gaming IMHO) with HSA code could still use the iGPU portion of the APU for something useful, but basically a developer would again have to distribute their code into at least two distinct and individually sized buckets of compute resources and unless 90% of all PCs out there had it, nobody will very likely go through the effort.Kavery needs to be multi processor and designed with an interconnect which allows creating a single image SMP HAS system with at least four nodes in a gaming rig and perhaps a little more for HPC or even server use. I also believe that Kavery should be sold in GPU like modules with DRAM built in, soldered on and optimized for that specific module. GDDR5 or DDR4 depending on where you want to wind up in price and power. 
These modules should then either be mounted flat for the single or stuck into a passive backplane to create the dual, quad or whatever sized rigs.With Mantle AMD has game developers ready to invest some fundamental work to redo their engines, if now they could make it scale I could see this turn into a stampede.As a single APU only design, it could die because the size of the ecosystem is too small to sustain it.Reply

About scaling... they can actually just add another CPU module or two, or GPU cores. These would be bigger more expensive APUs but they'd be what you want. This is sort of what they did with the Jaguar APUs that are in the new consoles. With their module based architecture, and APU marriage, they know they have this flexibility. We'll see what they choose to do with it.Reply

Pushing everyone to adopt a different architecture only works if you control the market (i.e. in this case you are Intel). AMD are a small time player, for most companies it's simply not worth all the effort it would take to develop stuff for HUMA if 90%+ of your market can't use it. Hence while the tech may be great you know it will fail like the last few iterations of AMD cpu's which also had power point slides that promised great things for the cpu/gpu combo but have actually come to nothing.Reply

Actually people aren't taking something into consideration here. They do now have the ability to control the market when it comes to gaming. All XBOne, and PS4 games are running said AMD processors, with huma already built into them. Those 8 core jaguar apu's were designed with that in mind. Any games ported from those consoles, to the PC will have support for HSA by default. Just the same as any game designed to run on all 3 will have it by default. This is apparent when you look at the ps4 where both the cpu and gpu in the APU use the same bank of DDR5 memory. http://www.anandtech.com/show/5493/amd-outlines-hs... Something else people forgot about.

Something else to digest, Intel has been doing this for a while, it's called quick sync on their cpu's. So it's no surprise that AMD would make effort to utilize similar tech on their APU's as well.

To the person saying it'd have to be twice a jaguar apu. Those cpu cores in the jaguar apu are minimal function x86-64 cores. Plus, They run at about half the frequency of these full steamroller cores. Which would effectively make a 4 core / 2 module Kaveri APU equal to that console apu other than the gpu component.

Now about AMD's weak fpu. I have to look back at the older reviews with between the PII and the Bulldozer/piledriver. Every time I look at those, I realize that people were forgetting they were comparing 4 FPU's to 6 in the previous generation. Since BD/PD 8xxx CPU's were 4 module, they only had 4 FPU's. Where as the older PII X6, had 6 full cores, meaning 6 FPU's. On a per FPU basis, BD/PD was actually a lot stronger than Thuban/Deneb.Reply

The problem is screen resolution: 60FPS or even 30FPS on 1080p can't be done with a single 128-Bit DDR3 bus. And that's all APUs can offer today. PS4 using GDDR5 and Xbox using eDRAM should prove that to the less technically inclined. At the moment the *top* Trinity/Kaveri APUs are 720p or 1K only for reasonble gaming performance. And while AMD has a whole pethoria of APU bins going down, only dGPU is available going up and that doesn't include HSA.2K is the bare minimum you need today for anything stationary, consoles or PCs and this CES is about going from 4K to 8K for TV screens. So if you don't have a clear answer, vision and growth path today to address these resolutions any chance to come out of the niche is severely hampered.It doesn't mean you have to deliver 4K yet, 2K is enough, but unless developers know it will be there by the time they need it, they won't take the risk of jumping for HSA. Nor perhaps the consumers, who would certainly prefer a simple seamless upgrade for higher resolution or would like to play the same games on the differently sized screens around the house.Reply

Which is a terrible idea because as weak at compute as Kepler is, Nvidia can upend their roadmap and go back to some of the ideas behind Fermi which would wipe out the compute advantage really quickly.

People should just accept the fact that there are many varieties of English and don't act offended because a Brit uses British English, because they think it's strictly an american site and everything should ponder to them. He refers to companies in plural, so what? Does that hamper your reading comprehension? Reply

Well said! Languages are about effective communication. What's the matter with a British author writing in British style? Now shall we drop this utterly pointless patriotism thing and focus on the purpose of this site: technology?Reply

"AMD’s Tech Day at CES 2014, all of which was under NDA until the launch of Kaveri, AMD have supplied us with some information that we can talk about today." It's two weeks' until Kaveri will be available to the public, and at the US largest consumer electronics fair, there's no new information from AMD?? NDA? This, and the delay of Mantle. It's starting to feel bad. Reply

I wonder. Would it be possibly to utilize the APU for true audio if one has a dedicated card that doesn't support it? I currently recently got a 280x, which I recently found out doesn't support the new trueaudio, and I've been wanting to upgrade my CPU.Reply

I admit to be a mild AMD fan, so perhaps I am looking a bit too much into this, but I read very good news for all users in this review.First of all, starting off with the IPC disadvantage (mentioned by Anand) which is today in the order of 40% on the CPU side. This is a fair approximation, IMHO.With a 20% improvement, we're already cutting the distance by half, which is not bad, considering the noticeable advantage of AMD's APUs on the GPU side.Add to this that with Mantle the first data shows 45% increase in performance. All "free". That's tremendous, objectively.Add to this that the A10 6800K is almost half the price of the i5 4670k, which, roughly, is on par with in real life ... and you get a pretty nice picture painted for the 7850K.If the price stays the same ... it may be that finally, Steamroller is worth upgrading over the (extremely) aged Stars core of the Athlon/Phenom II with better performance across the board.Not a huge achievement, I must admit, but it was about time ... and better late than never.Reply

I agree. The new tech features are where I see the big upgrade advantage. I don't know if the CPU side will be as much of an improvement as everyone wants, IPC is up but clock speed down. However, the CPU performance has been quite good since the last flagship APU A10 6800K. Gamers are looking for another 6 or 8 core though to put any CPU disadvantage to rest, and no integrated GPU required. With AMD not having such processor in any plans right now, gamers building systems have to accept the APU or go Intel. AMD may be choosing to do this to push the new tech and get adoption. That's probably the focus they need and the long game they need to play. AMD will lose gamers, but they are a small percentage of customers. All just theories. Perhaps if AMD can fully move FPU work to the integrated graphics on APUs, then the need for more cores evaporates.Reply

Yep, I believe this has been part of the plan since their new architecture began (module based architecture and APUs with solid GPUs integrated on die go together). You have to admire their long term vision for being the underdog in the market.Reply

A couple of things to keep in mind. The current Iris pro Intel 4570R is actually pretty cheap. As cheap as these? Doubtful, but they are priced at $288 a tray. Not exactly $400+ to get in to Iris pro on the desktop.

Next, for performance disadvantage in single thread. Yeah, it is pretty bad. Kaveri makes up some of it. However, keep in mind, AMD is claiming a 20% boost in IPC, but it is also cut back 10% on clock speed. So actual performance is only advancing about 10% or so. Its something, but it isn't any more than Intel's move from Ivy to Haswell was in effect. Which means AMD hasn't really gained ground between generations from Richland to Kaveri as Intel went from Ivy to Haswell. There might have been a slight improvement, but slight.

Intel on the other hand has cut AMD's lead on iGPUs in the move from Ivy to Haswell. Intel generally seems to have gained more in the move from Ivy to Haswell than AMD is gaining in the move from Richland to Kaveri. AMD still generally has the lead, but Intel is cutting it down.

Broadwell is claimed to be a pretty huge gain in iGPU for Intel again, which if that proves to be true, Intel might actually have a lead in performance or be very, very close behind.

Unless Kaveri brings a lot better power use tech in to their APUs, Richland was pretty far behind Ivy and Haswell makes it darned right embaressing the difference in idle, light work load and performance per watt under heavy load. I don't see that Kaveri is making that much better. They have slightly better TDP, better efficiency under load, but likely they still won't be as efficient in performance per watt as Ivy in most work loads. Under light/idle its likely to be pretty bad...which factors in to the price advantage for AMD if you are looking at business machines or machines that are going to be on most/all day long.

A savings of $10 or 15 in power over a year doesn't sound like much, but if a machine has a 3 year expected service life, or even 4 or 5 that gets to be a lot of power savings. A $190 processor ends up being as cost effective as a $235 processor after 3 years.

Kaveri seems to be a good step forward, but it isn't a performance "win" to "dethrone" Intel. It pretty much leaves AMD in the same bucket. They can rule on entry level systems, gamers on a steep budget and some HTPC systems depending on the HTPC requirements (thinking gaming for an HTPC on a modest budget or very constrained chasis that can't accept a dGPU).

In general Kaveri doesn't seem to be better for workstations than higher end Intel offerings, and I don't mean Ivy Bridge-E either. Also for a machine where cost is slightly less of an issue, probably still better performance with a higher end i5 or i7 Haswell processor and a dGPU.

Or on a modest machine, a $130 odd Haswell i3 is probably going to give generally better system performance than a similar priced Kaveri will, and most users aren't going to care that the Kaveri might have 20-50% better iGPU performance. Still not a lot of stuff that is GPGPU capable through OCL and most of the stuff that is, either the user won't notice a hair of difference in performance or else it isn't stuff average users of modest machines are going to be using (and higher end machines again would benefit more from a $100 or 200 dGPU and something like an i54670).Reply

Very good analysis. Intel is behind in iGPU area, but not by much. HD Graphics 4600 and Iris Graphics show that Intel does have good potential for developing a good iGPU. If the Kaveri APU concept does take off, Intel won't take very long time to respond. Maybe just a year. And in the end, APU gamers is just a fraction of a PC gamer market, and PC gamer market is just a fraction of the overall PC market. Basically, the AMD APUs are marketed to folks who care enough about gaming to want an APU instead of Intel Core, but not enough to be willing to buy a discrete video card. How many are there? To be honest, a $170 APU that alone can power a capable gaming rig is intriguing, but I dount the leap in performance will be huge compared to Richland A10.

Another hurdle that AMD needs to overcome is Intel's aggressive pricing. In the end, the $130 Core i3 is a pretty damn good CPU for most folks who just run productivity and a few multimedia apps. The A10 APU would have been a steal if AMD offered it at the same price as the Core i3. But at $160-170, the "productivity" folks will walk away with the Core i3, while many gamers will scratch their heads over buying this APU vs. an Intel Core i3 with a dedicated low-end GPU. Power users will skip both and jump straight to a Core i5 or i7.Reply

If I may comment here... Intel's GPU offering is comparable to AMD only in terms of Iris Pro. Everything else is far behind (http://www.anandtech.com/show/6993/intel-iris-pro-...). The i7-4850HQ is >$300 more expensive than the 5800K, and I don't quite buy the yearly cost estimates: unless you play 24/7, the delta cost in electricity is going to be much lower. There are other savings in AMD's platform at the MoBo level, for various reasons, so the saving doesn't only come from the CPU. If Mantle really provides the 45% improvement, it is going to make the difference even larger. Basically, except for the few Iris Pro parts, it really makes no sense to compare Intel with AMD's APUs from a GPU perspective, while a 20% penalty on the CPU side seems much more manageable. If (and it's a big "if") HSA really gets some traction at the software level, the CPU's shortcomings can be easily offset by the GPU. If you think about it, most intensive everyday apps already leverage the GPU (web browsing, spreadsheets, Flash/Silverlight and even image processing). So for everyday tasks I doubt anyone can really see any speed advantage comparing a ~$130 AMD APU with a ~$500 Intel APU. Throw in the gaming advantage from AMD's platform and I see a fairly decent prospect for AMD. Of course, they need to execute: there have to be some decent PC/laptop offerings with good APUs and balanced configurations (no more of those 2GB craps, please). The real questions, from where I sit, are: does HSA really work as expected? Is the gap to Haswell CPUs really reduced by ~20%? Is the memory bandwidth going to bottleneck the GPU's performance, or will it really be able to hit the levels of the 7750? So I wouldn't say that Reply

It raises the question of why Intel doesn't bring their good integrated graphics to lower-end CPUs for an appropriate price. Maybe they don't see money in it from their perspective. Joe Schmo wouldn't buy a slower CPU for a little more money, and the system builders probably see that as well. AMD has some additional reasons to do this as part of their long game. They will eventually flip the switch to make their superior GPUs translate into much more powerful overall APUs. They get closer with each architecture update, and with more HSA adoption. It will be interesting to see how things play out. We'll certainly know more come Excavator. http://www.pcper.com/reviews/Editorial/AMDs-Proces...Reply

What I said is that Intel has more than enough in-house capability to build a good iGPU. If the APU market really takes off, which is not a given, Intel has the capacity to respond very fast. And Intel is not that far behind right now. The A10-6800K APU is on average about 30% faster than HD Graphics 4600. So the rift is there, but it isn't that big. AMD said previously that the Steamroller APU will improve GPU performance by 20-30%. So if you believe AMD's words, then the Kaveri A10s may be faster than Intel HD Graphics by some 50% on average. As for Mantle, it will probably take a long time to take off and become mainstream. Only new titles will use it, but I suspect people who intend to play on APUs have older games in mind as well, and those may not benefit from Mantle. It's possible that Intel will try to respond. It's also possible that the APU market will remain so small that Intel won't bother.Reply

Probably because most users and hardware vendors don't care. The bulk of enterprise PC users don't care about playing games or running multimedia apps on either desktops or laptops. Likewise, a lot of people don't play games on PCs these days. Those who do play on PCs are split into two groups: those who buy a separate video card and those who don't. So IMO, so far the APU user market is still kind of small. AMD is hyping it and hoping that APUs will take off big time. After all, AMD probably has an advantage over Intel in the GPU area. Although, I suspect Intel will be able to respond to AMD if they really have to.Reply

You can't do 1080p or better out of DDR3, no matter how much die space you give to the GPU: it's not a GPU limitation but a bandwidth issue. That's the whole problem with the APUs, too. Of course, once the entire GPU DRAM is on die and only video is streamed out, that will change. Don't know when that will happen, but perhaps not that far off.Reply
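The bandwidth argument can be sanity-checked with a back-of-the-envelope calculation. The figures below are typical published numbers for the era (dual-channel DDR3-2133 on an APU vs. the GDDR5 on an HD 7750-class card), my assumptions rather than anything stated in the comment:

```python
# Peak memory bandwidth = channels * bus width (bytes) * transfer rate.
# Transfer rate is in MT/s, so divide by 1000 to land in GB/s.

def bandwidth_gbs(channels, bus_width_bits, mt_per_s):
    """Peak theoretical bandwidth in GB/s."""
    return channels * (bus_width_bits // 8) * mt_per_s / 1000

ddr3_2133_dual = bandwidth_gbs(2, 64, 2133)   # shared with the CPU cores
gddr5_7750     = bandwidth_gbs(1, 128, 4500)  # HD 7750: 128-bit @ 4.5 GT/s
print(f"dual-channel DDR3-2133: {ddr3_2133_dual:.1f} GB/s")
print(f"HD 7750 GDDR5:          {gddr5_7750:.1f} GB/s")
```

Roughly 34 GB/s shared between CPU and iGPU versus about 72 GB/s dedicated to the discrete GPU, which is why die space alone can't close the gap.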

Sooner rather than later, it seems they (and Intel) will also have to use the lower-latency, lower-power 512-bit Wide IO 2 option (4x128-bit channels of DDR3) as 4K Rec 2020, and finally the real 8K UHDTV real-colour Rec 2020 spec, comes around in the 2016-2020 timeline...

As this makes very clear, the Radeon HD 7000 series and above obviously have some serious bottlenecks in their data paths, and extrapolating to the Kaveri SoCs with their single 128-bit bus to the gfx core, it probably won't end well here, but we shall see soon.Reply

You can, however, get lots of so-called "Android TV boxes" cheap today: a quad-core box for £50 that you can control from any Android phone/tablet, for instance, rather than the underpowered single-core Google Chromecast. http://www.amazon.co.uk/Quad-Android-1-8GHz-Output... "Quad Core Android 4.2 TV Box (MINI PC) "ATV" with 1.8GHz CPU, 2GB RAM, Full HD Output, HDMI DLNA WIFI 8GB HI718"

Of course, if you are not in a hurry as such, then you are far better off looking for the new octa-core ARM Cortex boxes with an integrated UHD1 real-colour Rec 2020 spec decoder as standard, to give you more options later on....

...The odd thing about AMD right now, ever since they announced working with ARM IP, is that they could actually bypass this limiting single-channel 128-bit interconnect to the gfx core and simply use the existing older ARM CoreLink CCN-504 Cache Coherent Network IP, delivering up to one terabit (128 GigaBytes/s) of usable system bandwidth per second in their latest APUs, alongside their existing ARM IP licence.

They'd get far better cache coherence, with massive extra data-throughput potential over today's APUs, almost for free (a few pennies for the extra IP licence), but they probably won't. Never mind them using the far better current CoreLink CCN-508, which can deliver up to 1.6 terabits of sustained usable system bandwidth per second, with a peak bandwidth of 2 terabits per second (256 GigaBytes/s), at processor speeds scaling all the way up to 32 processor cores total..... Plus some super-low-power, fast Wide IO 2 RAM as icing on the cake for 2014....Reply
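The unit conversion behind those CoreLink figures is worth making explicit, since ARM quotes terabits per second while the comment quotes GigaBytes per second. A minimal sketch (the comment's "1 Tbit/s = 128 GB/s" mapping only works with binary gigabytes, i.e. 1024/8; with decimal units it would be 125 GB/s):

```python
# Convert terabits/s to gigabytes/s: divide by 8 bits per byte,
# scaling by 1024 (binary GB, as the comment's figures imply) or 1000.

def tbits_to_gbytes(tbits, binary=True):
    """Terabits per second -> gigabytes per second."""
    factor = 1024 if binary else 1000
    return tbits * factor / 8

print(tbits_to_gbytes(1.0))   # CCN-504 usable:   128.0 GB/s
print(tbits_to_gbytes(2.0))   # CCN-508 peak:     256.0 GB/s
print(tbits_to_gbytes(1.6))   # CCN-508 sustained: 204.8 GB/s
```

Either way, any of these interconnects would dwarf the roughly 30 GB/s a dual-channel DDR3 APU has to share between CPU and GPU.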