AMD launches Ryzen

AMD’s benchmarks showed that the top Ryzen 7 1800X, compared to the 8-core Intel Core i7-6900K, both at out-of-the-box frequencies, scores identically in the single-threaded test and 9% higher in the multi-threaded test. AMD put this down to the way its multi-threading works compared to the Intel design. Also notable: the 1800X is half the price of the i7-6900K.

If these promises and benchmarks hold up, Intel will be facing some incredibly tough competition on the desktop/laptop side for the first time in a long, long time.

I’ll withhold judgement until some third party does a comprehensive benchmark, instead of taking AMD’s word for it. Even Bulldozer beat contemporary Intel CPUs in certain highly parallel benchmarks such as x264.

According to the link, they used Cinebench. Two things come to mind:

1) It’s highly parallel (an AMD advantage)

2) Did they also factor the OpenGL benchmark that comes with Cinebench into the score? (Also an AMD advantage, since Intel’s integrated GPUs are worse than AMD’s. It doesn’t matter for CPUs like these, since they will most likely be paired with a discrete GPU; it’s just a good way to inflate the score if GPU performance was taken into account.)

So… Let’s wait for some third party to do some independent, comprehensive benchmarking (including highly parallel tests like x264 and Cinebench, but also less parallel ones like browser tests and games) and we’ll see.

Just to be clear, anything from a 15% loss to Intel’s best CPU (in less parallel tasks) on up is a major win for AMD. The days when geeks bought the very best CPU are over; now they buy a powerful-enough CPU (and a powerful-enough GPU) and save some money to buy a high-end smartphone, which also gives better bragging rights.

I hope they hit a home run, but to be fair there is a lot of spin on this.

They compare it to the i7-6900K, but their real competition is actually the i7-7700K (a $349 processor). Almost no one buys 6900Ks outside of the workstation market.

Sure, with double the core count Ryzen should demolish the 7700K on heavily multi-threaded workloads – but I suspect the 7700K will still easily win on single/lightly threaded workloads like games and the stuff most people run at home (at least at stock clock speeds). At a similar TDP as well – for significantly less money.

Anyway, if you go into it wanting an 8-core monster this may end up being the way to go, but I think most of the market is quite content with 4 cores for most stuff. It isn’t that 8 cores is bad, of course; more that they are missing out on a big part of the market by not offering anything less (yet).

I think it will all come down to overclockability for the gamer/enthusiast market. If Ryzen can hit 4.5GHz or higher they will have a winner. If not, then at $500 it isn’t all that compelling to me. Unless your workload is video encoding or other heavily multi-threaded stuff, the 7700K will probably be a better choice for most people.

We’ll see, I guess. I’m pulling for them though. Competition is a good thing.

Which wins in ‘home’ usage really depends. While Windows software is still dragging its arse getting to the level of parallelization that pretty much every other modern OS has, there are still quite a few things that do a great job. Recent versions of Office actually do a decent job of using multiple cores, and web browsers other than IE have been doing it for years (and I’ve checked, Edge does use multiple cores much better than IE). The problem ends up being all the developers who perpetuate the view that multiple cores on client systems are for multi-tasking, not for getting things done more efficiently.

As far as games go, it depends on the specifics there too. There are quite a few games that do benefit from parallelization, and the good studios are moving more in that direction. In fact, the only new games I’ve seen for quite a while that don’t do a good job of using multiple cores are PC-only titles, since PS4 and XB1 games pretty much have to use multiple cores to do anything useful.

Now, all of that said, what I’m interested to see is memory bandwidth comparisons (which are for some reason always missing from these types of press releases even for server processors).

[q]Recent versions of Office actually do a decent job of using multiple cores, and web browsers other than IE have been doing it for years (and I’ve checked, Edge does use multiple cores much better than IE).[/q]

Sure. But we aren’t talking about single core vs multi core, we are talking about 4 cores vs 8 cores… There is literally nothing in Microsoft Office that will benefit significantly from more than 4 cores, outside of extremely complex Excel spreadsheets. Now sure, if you want to calc a big spreadsheet while you’re encoding a video the difference will be huge, but if you’re that guy then you probably want an 8-core; that is your market. I’m just saying that for most people 4 cores is enough, especially if they are 4 faster cores.

[q]The problem ends up being all the developers who perpetuate the view that multiple cores on client systems are for multi-tasking, not for getting things done more efficiently.[/q]

Some things can’t be done more efficiently by multiple cores, because some things can’t be parallelized effectively. That isn’t developers perpetuating anything; it is Amdahl’s law…
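Amdahl’s law is easy to see with a few lines of Python; the parallel fractions below are illustrative assumptions, not measurements from any real workload:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only `parallel_fraction` of the
    work can be spread across `cores` cores (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 90% parallel tops out far below the core count:
for cores in (2, 4, 8, 16):
    print(cores, "cores:", round(amdahl_speedup(0.90, cores), 2), "x")
# 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 16 -> 6.4x
```

Going from 4 to 8 cores buys only about 1.5x here; the serial 10% dominates very quickly.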

Again, I’m not arguing that multithreading isn’t beneficial. It’s just that, for most workloads, the payoff shrinks as the core count climbs. I don’t think we are quite at the point yet where 8 cores shows significantly better performance for the average user running average workloads. It’s certainly getting there though, so AMD is definitely forward-thinking here.

[q]As far as games, it depends on the specifics there too. There are quite a few games that do benefit from parallelization, and the good studios are moving more in that direction.[/q]

Right, but do they benefit significantly from more than 4? Most current games don’t, from the benchmarks I have seen. In fact, few benefit from more than 2…

[q]In fact, the only new games I’ve seen for quite a while that don’t do a good job of using multiple cores are PC-only games, since PS4 and XB1 games pretty much have to use multiple cores to do anything useful.[/q]

I think that might be their silver lining – recent and upcoming console ports to PC. Games coming from the Xbox One/PS4 originally targeted 8-core Jaguars, so Ryzen having 8 cores as well will likely result in lazy ports running much better on it than on a 4-core Intel chip. That said, it may not matter: Jaguars are pretty slow in the first place. I doubt a 7700K, even with only 4 cores, will end up being the bottleneck for any console port. Having almost double the clock speed more than makes up for it…

[q]Sure. But we aren’t talking about single core vs multi core, we are talking about 4 cores vs 8 cores… There is literally nothing in Microsoft Office that will benefit significantly from more than 4 cores, outside of extremely complex Excel spreadsheets. Now sure, if you want to calc a big spreadsheet while you’re encoding a video the difference will be huge, but if you’re that guy then you probably want an 8-core; that is your market. I’m just saying that for most people 4 cores is enough, especially if they are 4 faster cores.[/q] However, it does help with multitasking, which is a very common desktop usage pattern. More threads of execution means fewer context switches for a given over-committed workload (and everything is overcommitted these days in terms of CPU usage), which will in turn improve overall system performance.

There are also a lot of people who use way more plugins in their web browser than they should, and on at least Chrome (and its various derivatives) that usage will benefit from more cores, since Chrome runs each plugin as its own process.

[q]Some things can’t be done more efficiently by multiple cores, because some things can’t be parallelized effectively. That isn’t developers perpetuating anything; it is Amdahl’s law…[/q]

True, but there are quite a few things that could be better parallelized but aren’t. Compression and encryption are excellent examples of this. For compression and some types of encryption, you can get a serious performance boost by processing individual blocks of data in parallel, yet most compression and encryption tools don’t do this, simply because the developers are too lazy to implement it (and this isn’t just a Windows issue either).
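As a rough sketch of the block-parallel idea, using Python’s zlib and a process pool (this is roughly what pigz does for gzip; a real tool also needs framing so the blocks can be found again on decompression, and per-block compression costs a little ratio since no history is shared across blocks):

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1 << 20  # compress in 1 MiB blocks

def compress_block(block):
    # Each block becomes an independent deflate stream, so blocks can be
    # compressed on different cores and decompressed independently.
    return zlib.compress(block)

def parallel_compress(data):
    # Split the input into fixed-size blocks and compress them in parallel.
    blocks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))
```

Decompression is the same in reverse: decompress each block (in parallel if you like) and concatenate the results.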

[q]Again, I’m not arguing that multithreading isn’t beneficial. It’s just that, for most workloads, the payoff shrinks as the core count climbs. I don’t think we are quite at the point yet where 8 cores shows significantly better performance for the average user running average workloads. It’s certainly getting there though, so AMD is definitely forward-thinking here.[/q]

In general, I agree with your conclusion here, though not with all of the reasoning you give for it, and I would like to point out that the type of people who will be buying at least the 8-core Ryzen models don’t exactly qualify as ‘average’ users running ‘average’ workloads.

[q]Right, but do they benefit significantly from more than 4? Most current games don’t from benchmarks I have seen. In fact few benefit from more than 2…[/q]

This is largely a function of the design. AI in particular can really benefit from it if written correctly; most games just don’t do it. Resource loading can also benefit under certain circumstances, but that’s a much harder problem to solve than writing AI that makes efficient use of multiple threads of execution.

[q]I think that might be their silver lining – recent and upcoming console ports to PC. Games coming from the Xbox One/PS4 originally targeted 8-core Jaguars, so Ryzen having 8 cores as well will likely result in lazy ports running much better on it than on a 4-core Intel chip. That said, it may not matter: Jaguars are pretty slow in the first place. I doubt a 7700K, even with only 4 cores, will end up being the bottleneck for any console port. Having almost double the clock speed more than makes up for it…[/q]

Check the specs: a high-end 4-core Intel model is 8 threads of execution (4 cores with 2-way HT), an 8-core Jaguar CPU is 8 threads of execution (8 cores, no SMT), and an 8-core Ryzen is 16 threads of execution (8 cores with 2-way SMT). Those 16 threads of execution are part of why the Ryzen CPUs are such a big deal; the only way to get that on an Intel CPU these days is to buy a $1000+ Xeon.

Now, that said, 8 real cores are far superior for actual parallelization to 4 cores with 2-way multi-threading for anything but thread-pool-based workloads (which are rare on client systems), but I think the big impact will be that Ryzen’s SMT is probably more like Bulldozer’s SMT (shared-nothing unless you are doing 256-bit FPU operations) than Intel’s HyperThreading (almost everything except registers shared and multiplexed by hardware).

[q]However, it does help with multitasking, which is a very common desktop usage pattern. More threads of execution means fewer context switches for a given over-committed workload (and everything is overcommitted these days in terms of CPU usage), which will in turn improve overall system performance.[/q]

Overcommitted? Technically, sure – many more threads than cores. But unless you are running an actual workload, something that can come close to fully taxing at least a single core, it doesn’t matter. I read somewhere that the average home computer (in 2014, I think) had an average CPU utilization of less than 5%… The point is, overcommitting only matters if all those threads are actually doing something tangible.

I have an i7-4770K (4-core Haswell) overclocked to turbo up to 4.3GHz. I do some actual work on it (programming, office stuff, Photoshop, video encoding, etc.), it runs as a file server and MySQL backend for 5 Kodi boxes in my house 24/7, and I do quite a bit of everyday browsing and gaming on it.

Outside of video encoding or when I am running a game, I rarely see CPU usage go past 20%, even on a single core. In fact, most of the time it is running at 800MHz because it literally has nothing worth turboing up for. That is with a Chrome instance running 30 tabs and probably 5-10 other programs… Even in games, I usually have a core or two free (or mostly free) to service all the other stuff. It runs smooth as butter 99% of the time.

For someone like me, who I think is a “typical” power user running a fairly varied workload, 8 cores only offer a tangible benefit for one thing – video encoding. Sure, it may speed up a few things here and there by a few percentage points, but considering I have a 4-year-old processor that STILL performs within 15% or so of the fastest thing I can buy today on a core-by-core basis, and I STILL can’t figure out a way to keep it busy most of the time outside of one activity, it seems pointless to want an 8-core/16-thread monster – especially one that runs at a lower clock speed, so that it will probably perform worse 99% of the time. Now, if Ryzen can overclock to, say, 4.5GHz or more, it might be a completely different story. The point being that single-threaded performance matters a great deal more for most people, most of the time, because they rarely if ever even use the 4 cores they have…

I’m not saying that is true for everyone, but for me an 8-core machine means “video encoding at double the speed” and literally nothing else, and I don’t care about encoding speed enough to warrant it. There are of course other workloads that will actually use all those cores too; they are just not all that common for most people.

Anyway, I’m not trying to sway you, just explain my reasoning. You’re welcome to your own opinion of course.

[q]I think the big impact will be that Ryzen’s SMT is probably more like Bulldozer’s SMT (shared-nothing unless you are doing 256-bit FPU operations) than Intel’s HyperThreading (almost everything except registers shared and multiplexed by hardware).[/q]

From what I have read it is the opposite – it is more like HT (shared everything except the registers).

[q]Outside of video encoding or when I am running a game, I rarely see CPU usage go past 20%, even on a single core. In fact, most of the time it is running at 800MHz because it literally has nothing worth turboing up for. That is with a Chrome instance running 30 tabs and probably 5-10 other programs… Even in games, I usually have a core or two free (or mostly free) to service all the other stuff. It runs smooth as butter 99% of the time.[/q]

Having 30 tabs (or 1500 tabs for that matter) of idle web pages is going to cost you 0% CPU time. But point taken, the average computer today does virtually nothing multithreaded.

What you are missing is why that is: four cores simply doesn’t offer enough of a speed advantage to truly push applications to use more cores.

If we finally get some competition between AMD and Intel, maybe that will force them to start releasing 8- or 16-core CPUs. Suddenly a new range of things becomes realistic to do in real time. I’m sure those doing video editing will appreciate everything being 4 times as fast.

[q]Overcommitted? Technically, sure – many more threads than cores. But unless you are running an actual workload, something that can actually come close to fully taxing at least a single core, it doesn’t matter. I read somewhere that the average home computer (in 2014 I think) had an average CPU utilization of less than 5%… Point is overcommitting only matters if all those threads are actually doing something tangible.

I have an i7-4770K (4-core Haswell) overclocked to turbo up to 4.3GHz. I do some actual work on it (programming, office stuff, Photoshop, video encoding, etc.), it runs as a file server and MySQL backend for 5 Kodi boxes in my house 24/7, and I do quite a bit of everyday browsing and gaming on it.

Outside of video encoding or when I am running a game, I rarely see CPU usage go past 20%, even on a single core. In fact, most of the time it is running at 800MHz because it literally has nothing worth turboing up for. That is with a Chrome instance running 30 tabs and probably 5-10 other programs… Even in games, I usually have a core or two free (or mostly free) to service all the other stuff. It runs smooth as butter 99% of the time.

For someone like me, who I think is a “typical” power user running a fairly varied workload, 8 cores only offer a tangible benefit for one thing – video encoding. Sure, it may speed up a few things here and there by a few percentage points, but considering I have a 4-year-old processor that STILL performs within 15% or so of the fastest thing I can buy today on a core-by-core basis, and I STILL can’t figure out a way to keep it busy most of the time outside of one activity, it seems pointless to want an 8-core/16-thread monster – especially one that runs at a lower clock speed, so that it will probably perform worse 99% of the time. Now, if Ryzen can overclock to, say, 4.5GHz or more, it might be a completely different story. The point being that single-threaded performance matters a great deal more for most people, most of the time, because they rarely if ever even use the 4 cores they have…

I’m not saying that is true for everyone, but for me an 8-core machine means “video encoding at double the speed” and literally nothing else, and I don’t care about encoding speed enough to warrant it. There are of course other workloads that will actually use all those cores too; they are just not all that common for most people.[/q] Based on what you’ve said, though, you’re not in their target audience. CPUs with this many cores are targeted at a few very specific groups:

1. People who do professional design and engineering work (CAD software is _really_ good at using multi-core processors very efficiently).

2. People who do lots of bulk data processing.

3. People who run lots of VMs at the same time.

4. People who just want bragging rights.

The demos are directly targeted at group 4, just as such things always have been and almost always will be when talking about client-system CPUs.

In my case, I fall solidly into groups 2 and 3 (I run BOINC apps and almost a dozen VMs 24/7 on the only system I have with a socketed CPU). It’s a good buy for me because I want ECC RAM, which means I would have to get a Xeon if I went with an Intel CPU, and would therefore on average get less than half the processing power, with less energy efficiency, for the same price (oh, and a lower cap on the RAM speed I could use too, which is also a big deal for my usage since most of what I’m doing is memory-bound).

The availability of CPUs like this will likely help improve the ability of other software to use multiple cores more effectively.

Anyway, I’m not trying to sway you, just explain my reasoning. You’re welcome to your own opinion of course.

It’s rather refreshing to see someone with this attitude.

[q]From what I have read it is the opposite – it is more like HT (shared everything except the registers).[/q]

[q]Based on what you’ve said though, you’re not in their target audience. CPUs with this many cores are targeted at a few very specific groups:

1. People who do professional design and engineering work (CAD software is _really_ good at using multi-core processors very efficiently).

2. People who do lots of bulk data processing.

3. People who run lots of VMs at the same time.

4. People who just want bragging rights.

The demos are directly targeted at group 4, just as such things always have been and almost always will be when talking about client-system CPUs.[/q]

I think galvanash’s post speaks to your earlier point where you said “However, it does help with multitasking, which is a very common desktop usage pattern.”

Anyway, for #1 and #2, I have to wonder if GPGPU and vector processors are a better approach for most kinds of engineering workloads. Even running lots of CPUs in parallel still doesn’t get you the kind of performance you can get with dedicated vector processors. General-purpose processors are bad at scaling due to shared memory. A few years ago I conducted an SMP experiment iterating a simple “multiplication kernel” across a large data set. This operation is “embarrassingly parallel”, so in theory performance should grow linearly with the number of CPUs. However, I found that adding more processors added much less performance than that, because they were all bottlenecked by RAM. Granted, some kinds of operations should scale well on massive SMP, especially longer calculations that can operate entirely within cache, with less memory contention. But a GPU is far better designed for performing billions of simple operations efficiently. And since many engineering problems boil down to performing simple calculations on billions of data points, this is why I think most highly parallel applications are headed to vector processors rather than SMP.
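The shape of that experiment can be sketched in a few lines of Python (a sketch only: in pure Python the interpreter, not RAM, is the main cost, so to reproduce the memory-bandwidth wall described above you would swap in a C or numpy kernel streaming over one large shared array):

```python
import time
from concurrent.futures import ProcessPoolExecutor

N = 500_000  # elements each worker streams through

def multiply_kernel(_):
    # "Embarrassingly parallel" kernel: each worker streams through its own
    # block of numbers, multiplying each by a constant. No coordination is
    # needed, so the ideal speedup would be linear in the worker count.
    return sum(float(i) * 1.0001 for i in range(N))

def parallel_time(workers):
    # Wall-clock time to run `workers` independent kernel instances at once.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(multiply_kernel, range(workers)))
    return time.perf_counter() - start

def scaling_report():
    # Compare measured speedup against the ideal (linear) speedup.
    base = parallel_time(1)
    for workers in (2, 4, 8):
        speedup = base * workers / parallel_time(workers)
        print(f"{workers} workers: {speedup:.1f}x (ideal {workers}x)")
```

On a memory-bound kernel, the printed speedups flatten out well before the core count, which is the effect described above.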

For #3, SMP is very useful for VMs, although it’s not really something most desktop users need.

IMHO games and AI are the best candidates for large SMP for normal desktop users, although in practice most games I’ve seen don’t take advantage of it since graphics are left to the GPU and the game logic seems to be bound to one or two cores. It’s interesting to ponder the possibilities though.

I’ve been looking at upgrading my home server for a while, but now I’m glad I waited. I’d been looking at a decent quad-core Xeon E3, but I’ll now be able to get an 8-core/16-thread CPU with essentially the same MIPS rating per thread that also supports ECC RAM (the only reason I even considered shelling out the money for a Xeon) for essentially the same cost and a marginally lower TDP. I’ll probably end up having to get a discrete GPU, but I was going to anyway, since the Xeon I had been looking at didn’t have one built in and I’m not going to shell out extra money for a motherboard with an integrated GPU that Linux barely supports (sometimes I wonder whatever happened to Matrox server GPUs).

Note that a “performance vs. performance” comparison is mostly irrelevant.

The comparison that matters is more like “CPU_performance + GPU_performance + heat + price vs. CPU_performance + GPU_performance + heat + price” (with various weighting factors for each component that reflect the intended usage).

If Zen is slightly slower than an equivalent Intel CPU, but the price is lower or the integrated GPU is better, then maybe Zen “wins” despite being slightly slower.
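That weighting idea can be sketched in a few lines of Python; all the numbers below are made up purely for illustration:

```python
def weighted_score(metrics, weights):
    # Combine normalized metrics (higher is better) into one score; the
    # weights encode how much this particular buyer cares about each factor.
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical chips and a price-sensitive buyer (invented numbers):
buyer = {"cpu_perf": 0.2, "gpu_perf": 0.1, "heat": 0.1, "price": 0.6}
chip_a = {"cpu_perf": 0.95, "gpu_perf": 0.5, "heat": 0.7, "price": 0.4}
chip_b = {"cpu_perf": 0.90, "gpu_perf": 0.5, "heat": 0.7, "price": 0.9}

# The slightly slower but much cheaper chip "wins" for this buyer:
assert weighted_score(chip_b, buyer) > weighted_score(chip_a, buyer)
```

A gamer would weight `cpu_perf` much higher and `price` lower, which is exactly why a single “performance vs. performance” number can’t settle the question.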

“Slightly slower” is still outrageously more powerful than what was used to change mankind’s history. So I bet I can accept this “relatively” lower power to type up my CV, browse some pr0n and play Return to Na Pali.

Looking at the die photo, it’s clear the “eight-core” processor is really two quad-core processors on the same die with better sharing of non-processor resources. This is how these things go… most early quad-cores were just two dual-cores on the same die, often with no sharing beyond a common external bus.