
Multi-Core Scaling Performance Of AMD's Bulldozer

10-26-2011, 01:00 AM

Phoronix: Multi-Core Scaling Performance Of AMD's Bulldozer

There has been a lot of discussion in the past two weeks concerning AMD's new FX-Series processors and the Bulldozer architecture. In particular, the Bulldozer architecture consists of "modules", each of which contains two x86 engines that share much of the rest of the processing pipeline; as a result, the eight-core AMD FX-8150 has only four modules. This article looks at how well Bulldozer's multi-core performance scales when toggling these modules. The multi-core scaling performance is compared to AMD's Shanghai and Intel's Gulftown and Sandy Bridge processors.
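For readers who want to reproduce the comparison, the "scaling" in an article like this reduces to a simple calculation over benchmark run times. A minimal sketch (the times below are made up purely for illustration):

```python
# Multi-core scaling: speedup and efficiency from benchmark wall times.
# The times here are hypothetical, only to illustrate the arithmetic.

def scaling(times_by_threads):
    """Given {thread_count: wall_time}, return {thread_count: (speedup, efficiency)}."""
    t1 = times_by_threads[1]
    return {n: (t1 / t, (t1 / t) / n) for n, t in sorted(times_by_threads.items())}

# Example: a workload that scales well to 4 threads, then flattens out.
times = {1: 100.0, 2: 52.0, 4: 28.0, 8: 20.0}
for n, (speedup, eff) in scaling(times).items():
    print(f"{n} threads: {speedup:.2f}x speedup, {eff:.0%} efficiency")
```

Efficiency of 100% means perfectly linear scaling; anything below shows how much is lost to shared resources, memory bandwidth, or serial portions of the workload.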

I don't view it that way. If you're gonna have, say, 8MB of cache on 4 cores, it's better to make it shared rather than giving each core its own 2MB. That way, loads that involve fewer cores get more cache per thread (on a two-thread load you have 4MB per thread).

But of course that view comes from someone who doesn't know the details behind CPU cache memory :-P
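The cache argument above is just arithmetic. A quick sketch of effective cache per active thread under a shared pool versus fixed per-core slices (the 8MB/4-core figures are the poster's own example):

```python
# Effective cache per active thread: one shared pool vs. fixed per-core slices.
# Figures taken from the example above; this is illustration, not a cache model.
TOTAL_CACHE_MB = 8
CORES = 4

def shared_per_thread(active_threads):
    # One shared cache: idle cores effectively donate their share to busy ones.
    return TOTAL_CACHE_MB / active_threads

def private_per_thread(active_threads):
    # Fixed per-core caches: idle cores' slices simply go unused.
    return TOTAL_CACHE_MB / CORES

print(shared_per_thread(2))   # 4.0 MB each on a two-thread load
print(private_per_thread(2))  # still 2.0 MB each
```

Real caches complicate this (contention, associativity, shared-cache thrashing between threads), but the basic advantage of a shared pool under low thread counts is exactly this.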

Comment

Very nice test suite, but I would propose some changes:
1) To judge the scaling efficiency of each architecture, features like Turbo Core should be disabled. With them enabled, it is only natural that scaling with more threads looks worse, since the per-core clock drops as more cores become active.
2) I would change the graphs so that they are easier to interpret at a glance (so that linear scaling actually looks linear). The x-axis should be linear if the y-axis is linear, not like it is now, where the distance from 1 to 2 is the same as the distance from 2 to 4. It just looks weird.
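Point 2 is easy to see numerically: on a linear x-axis, ideal linear scaling is a straight line through the origin, while spacing the thread counts 1, 2, 4, 8 equally (a log2 axis) bends the same data into a curve. A sketch with hypothetical scores:

```python
# With a linear x-axis, ideal linear scaling satisfies score(n) = n * score(1),
# so the slope between every pair of consecutive points is constant. Spacing
# 1, 2, 4, 8 equally on the x-axis (a log2 axis) distorts that straight line.
base = 10.0                      # hypothetical single-thread score
threads = [1, 2, 4, 8]
ideal = [n * base for n in threads]

# Slope between consecutive ideal points on a linear axis:
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(zip(threads, ideal),
                                        zip(threads[1:], ideal[1:]))]
print(slopes)  # constant slope -> a straight line
```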

2RealNC> That is not the case with the BD modules. Whether one core per module is active or two, all of them share the whole L3 cache. If only one core per module is active, that core gets the full 2MB of L2, which it would otherwise have to share with its sibling core.

Comment

2RealNC> That is not the case with the BD modules. Whether one core per module is active or two, all of them share the whole L3 cache. If only one core per module is active, that core gets the full 2MB of L2, which it would otherwise have to share with its sibling core.

I'm afraid I didn't understand the above.

In my thinking, it seems better to have a larger, shared cache rather than multiple smaller, non-shared ones.

Comment

Yes, it may be a better solution to have a larger shared cache than a smaller dedicated cache per core (because a larger dedicated cache per core is more expensive), but I thought we were talking about Bulldozer as it is, and about which cores are better left enabled. Hope I cleared it up.

Comment

In any case, a direct comparison of "4 threads across 4 modules" against "4 threads crammed into 2 modules" would be interesting, to see how much Bulldozer's modules actually lose compared to discrete cores by sharing parts of the CPU pipeline. Of course this is only meaningful if they run at fixed frequencies, i.e. with Turbo Core and any dynamic frequency scaling disabled.
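That experiment can be set up with CPU affinity, without disabling modules in the BIOS. A sketch, assuming cores 2n and 2n+1 are the two siblings of module n (which is how Linux typically enumerates an FX-8150; verify against /sys/devices/system/cpu/cpu*/topology/ on a real system):

```python
# Sketch: pick CPU sets for "4 threads across 4 modules" vs. "4 threads
# in 2 modules" on a Bulldozer chip. ASSUMPTION: cores 2n and 2n+1 share
# module n; check the sysfs topology files before trusting this mapping.
import os

def module_cores(modules, cores_per_module):
    """Return the CPU ids covering `modules` modules with `cores_per_module` active each."""
    return {2 * m + c for m in range(modules) for c in range(cores_per_module)}

spread = module_cores(4, 1)   # one core in each of 4 modules -> {0, 2, 4, 6}
packed = module_cores(2, 2)   # both cores of 2 modules      -> {0, 1, 2, 3}

# Pin this process to one of the sets (Linux only), if those CPUs exist here:
if hasattr(os, "sched_setaffinity") and spread <= os.sched_getaffinity(0):
    os.sched_setaffinity(0, spread)
print(spread, packed)
```

Running the same 4-thread benchmark pinned to `spread` and then to `packed` (e.g. via `taskset` with the equivalent masks) would isolate exactly the cost of sharing the front-end, FPU, and L2 between sibling cores.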