
Phoronix: Intel Core i7 AVX GCC Compiler Tuning Results

For those owners of Intel's latest-generation Core i3/i5/i7 "Sandy Bridge" processors, here's a quick look at the impact of some GCC tuning options specific to these latest AVX-enabled Intel processors...

As many Gentoo users have found, compiler flags generally have one of the following effects:

1: No effect at all
2: The compiled binary does not work
3: The compiled binary is slower
4: The compiled binary is faster

In addition, performance increases often require certain combinations of compiler flags, making the tweak more complex than just adding "-march".

I once had a bash script for a number of CLI binaries (notably ffmpeg, faac, flac) which would iterate through CFLAGS combinations and compilers (gcc versus icc). After each iteration, the script would run an automated benchmark on the resulting binary. Results were dumped to a file and sorted. The issue I ran into was that the results would change depending on factors such as platform architecture, available memory, and CPU affinity. Other issues involved (pre)linking and libraries, killing misbehaving binaries, memory reclamation, etc.

Overall, a system with local optimizations performed approximately 50% better on average than a generic "-O2" build. The problem is that you will never be able to find and fix all of the minor issues caused by the optimizations across all binaries to the level required by a distro. My conclusion was that compiler optimizations are of great benefit to single-task servers (a transcoding server in my case), but are currently out of reach for a general desktop.


Then I bought a Mac....

Incredible. So the expected outcome is one option out of the entire set of possible options?! YOU DON'T SAY!!!!

I too then bought a Mac, and I love the little things, like sleep that works.


Did you try profile-guided optimization? My guess is that it may be the easiest way to get the best binary without resorting to potentially dangerous flags.


My friend, do you have those results posted anywhere? I would be interested in seeing the best flag combinations for those apps, as I use my PC mostly for transcoding. Perhaps you still have those automated benchmarks? I would greatly appreciate you sharing the knowledge.

Overall, a system with local optimizations performed approximately 50% better on average than a generic "-o2" solution.

That sounds extremely good. Do you remember which programs made up the bulk of this increase in performance? Personally, I see little to no point in optimizing an entire system, kernel included, unless you need extremely low latency. If you use computationally intense programs on a very regular basis, in tasks which span long periods of time, then yes, I think there's a good reason to compile these with more aggressive compiler optimizations; I'm talking encoders, compressors, renderers, that kind of stuff. Unless, of course, you think it's just fun to poke around and optimize your system as much as possible; then it's no better or worse a way to spend your own time than any other hobby.

Originally Posted by WorBlux

Did you try profile-guided optimization? My guess is that it may be the easiest way to get the best binary without resorting to potentially dangerous flags.

I agree. One of the most potent optimizations available is loop unrolling; however, because its benefit is difficult to estimate accurately at compile time, it is not turned on by default. When you use profile-guided optimization, though, the compiler has all the runtime data it needs to make accurate unrolling choices, and so it turns unrolling on by default. The downside to PGO is that you need to compile in two stages with a test run between them; this can of course be automated, as Firefox, x264, etc. do.

I too then bought a Mac and love the little things like, sleep that works.

Funny, I keep reading everywhere that macs are pieces of technological wonder, but my Mac Mini fails to boot properly 1 out of 3 boots. It just stays in the grey screen forever. Also, I can't let it turn off the screen through DPMS otherwise the screen is never going to wake up again unless I reboot. On the other hand all my cheap homebuilt PCs running Ubuntu suspend and hibernate perfectly fine all the time.


Opposite experience here... I've got a 13" Core 2 Duo Macbook Pro and it sleeps/wakes perfectly, as does my wife's Sandy Bridge MBP (13"), and her old iBook (G4), and her brother's and mother's systems (All of them either in or previously in the publishing industry).

My desktop (Athlon x2 5000+, then Phenom II x3, then an x6 and Radeon 4850, then a 4770, then 6850) hasn't woken from sleep properly in the last few years, not even once... even with an Ubuntu reinstall and then an eventual replacement with Mint.

I won't say that sleep is universally broken on my PCs in Linux (most of the other ones work), but it is for this one.


My guess is BIOS issues, or a particularly buggy chipset that doesn't reinitialize properly on resume.