
Deathspawner writes "In an unusual move, Advanced Micro Devices has issued a press release withdrawing its endorsement of the industry-recognized benchmark SYSmark 2012. The benchmark is developed by BAPCo and backed by industry heavyweights such as Dell, Intel and Hewlett-Packard, but AMD has stated both that BAPCo has tuned SYSmark to create bias in favor of its competitor, and that its benchmarks are not relevant for the audience it targets. Also noted is a complete lack of heterogeneous CPU+GPU testing. Techgage tears apart AMD's claims to see if they are valid, while also evaluating the overall usefulness of SYSmark and the impact it can have on consumers."

I have been commenting on /. for years; this account is the one I remembered. If I could find my old accounts, I would have a low four-digit user #. I also respond on my own forums all the time, but I have been on the road so much that I haven't had time for a few weeks.

Immaterial, if all I am using the article for is to reference the fact that Nvidia and VIA left as well. Unless you have some evidence to the contrary, I don't care what "marketing" department I'm presently using as a source.

I knew that VIA and Nvidia left, and Googled the first reference that popped up. So what? You're not questioning my facts, so I don't give a damn.

Agreed. I probably could have used a more impartial source, but it's still true that Nvidia and VIA both left.

I think VIA leaving is no big deal. I've actually bought a few of their boards for simple projects (can't find an Atom board with two onboard NICs to save my life), and while they don't suck, they're certainly nothing to write home about.

But Nvidia leaving is a big deal. And anyone claiming that ION/Atom or Fusion don't beat the crap out of plain Atom, user-experience-wise, has simply not been a user.

PTS is really only a bootstrapper that launches third-party benchmarks and compares the results. If the test it happens to be running is SYSmark, an open source bootstrapper won't help us evaluate the fairness of the test itself.

Of course, the more the people being tested know about how it's tested, the easier it is for them to cheat. (There's plenty of past history of both Nvidia and ATI doing exactly that with video cards.)

(Note: always investigate claims of benchmark cheating; sometimes it's a misunderstanding. One example involved a claim of cheating because an optimization routine found the same operation being hit constantly, so it cached it. There were screams of cheating and of 'tuning' the driver to trick the benchmark, when all it really was was caching doing what it's supposed to do, even though it did give artificially high scores in that one test. Once the issue was known, the benchmarkers changed their program so it no longer ran a stupid repetitive test that would just get cached.)

Of course this isn't an issue of cheating, but it sure feels like it. Makes you wonder what AMD is really worried about...

Well, if they used the Intel compiler then it is for all intents and purposes useless, as Intel has been rigging its compiler with a "GenuineIntel" check: if that string isn't detected, the compiled code drops all SSE-and-above optimizations and instead runs in slow-ass x87 mode. Last I read, despite being ordered to change this behavior, Intel is STILL putting out compilers with the evil bit on and hasn't done anything to alert previous customers of its douchebaggery.

So I wouldn't be so quick to dismiss this out of hand; after all, who would have thought that Intel would rig its own compiler to cheat? I can't even imagine how many programs out there have been compiled using the Intel compilers, which makes every single program built with that toolchain rigged against AMD and VIA.

The most recent versions of Intel's compiler clearly document the evil behavior in the description of the code generation options, so at least it's not hidden any more. You can also mostly disable the evil behavior, if you are willing to sacrifice the runtime code-path selection that allows you to use SSEx on hardware that supports it while retaining compatibility with earlier machines.

Still, any benchmark using Intel's compiler can't be trusted unless it is fully open-source, including the exact compiler f

So in other words you get the "choice" of crippling ALL CPUs, or just Intel's competitors? Wow, their douchebaggery just gets better and better, don't it? Especially when it would be trivial to check for the SSE flag (which EVERY CPU with SSE has implemented for nearly 8 years, a lifetime in CPUs) and say "if the CPU has SSE, then run the optimized path," end of story.

The sad part is that AMD does have its own compiler [amd.com], which is completely FOSS and, unlike Intel's, doesn't try to hamstring a CPU because it isn't made by them.

It seems like AMD's biggest complaint is that the benchmark isn't offloading CPU-intensive tasks to the GPU. It is pretty hard to take them seriously when they are complaining that the benchmark favors their competition by actually benchmarking the CPU.

Well, a benchmark with 2012 in the name certainly shouldn't be using two non-GPU-accelerated web browsers and Acrobat 9! They really do have a point that currently released software does a much better job of using their more well-rounded systems than the benchmark does. It's a system benchmark, not a CPU benchmark (we have SPEC for that).

The reason is that you are benchmarking, well, the CPU. You don't want the GPU interfering with that. While the day may come when CPU and GPU fuse, that day is not now. So it seems silly to bench a CPU with things that are accelerated by a GPU.

Nothing wrong with benchmarking a GPU, or benchmarking things that use both, but you need to be clear about what you are testing. If the test is CPU, then that is what you want to restrict it to.

They are currently interdependent. Design a 6-core system with 8GB of RAM, put a Rage 3D card in it, then run something modern -- not a game, just a browser or a spreadsheet -- and tell me the GPU doesn't improve the system.

No, the complaint is that it is supposed to be a benchmark of overall system performance, and that benchmarking just the CPU does not do this. For example, in running a typical OS, a lot of the screen-redrawing performance is dependent on the GPU if the GPU can handle it. If the GPU can't, a lot of that work gets offloaded onto the CPU, and performance suffers. Thus, by benchmarking only the CPU, you cannot get an accurate picture of the real-world performance of the system as a whole, or even of the pro

AMD is also betting big on Fusion and hardware-accelerated HTML5 with IE 10/Windows 8. They plan on making x86 tablets in which some of their CPUs are barely faster than an Intel Atom but have a GPU inside as powerful as an ATI HD 6xxx. These benchmarks will look like crap on the Llano chip, but in real-world use with Windows 8 and Flash 10.3 and higher you can run 1080p HD video fluidly without breaking a sweat.

I would be irritated and concerned too if I were AMD, as people would get a false impression of their low end

I've been using a Llano chip recently, and the performance has been a lot better than I was expecting. Because the APU uses a Radeon HD 6310 it's not suitable for recent games, but I've found that games from up until the last couple of years seem to do just fine on it.

I'll wait to see what the performance is like on Windows 8, but Llano has little trouble with Windows 7: the system remains responsive pretty much whatever I'm doing, and I rarely feel like I'm waiting around for things that should be done. There's

It seems like AMD's biggest complaint is that the benchmark isn't offloading CPU-intensive tasks to the GPU. It is pretty hard to take them seriously when they are complaining that the benchmark favors their competition by actually benchmarking the CPU.

Although once I would have agreed that, as a pure CPU benchmark it seems strange to allow offloading work to the GPU, the line has blurred rather a lot in the past year. AMD/ATI has put a ton of work into getting their OpenCL implementation as close to gen

This is one area where things become problematic for Intel, because hyperthreading shares some important resources, such as the SSE-related units. Therefore, if you are running lots of threads making use of features with these issues, you will actually see notably reduced performance as the threads compete for the same resources.

This can also be an increased problem with some optimization options in GCC, as it will make he

Hyperthreading doesn't share just some things; it shares everything. It is the ability for a single core to run two threads at the same time in hardware. Intel isn't the only one who does this: Sun does, as does IBM (with more than two threads per core). There are two benefits to this:

1) Less context-switching penalty. It actually takes a fair amount of resources for an OS to switch from one thread to another. So run more threads in parallel in hardware, and get more performance in heavily multi-threade

Imagine you have 4 hyperthreaded cores and a process running 7 CPU-intensive threads. Because of processor affinity, six threads will be paired up two to a core and one thread will have a core to itself. If they all have equal work, the one with a core to itself will complete first, but the other six still need to complete for the whole set to be considered done.

I think AMD is a bit lost on this one, or at least they don't understand the point of benchmarking.

AMD's largest complaint is that SM2012 doesn't represent the market well enough, employing high-end workloads that the regular consumer doesn't care about

As far as I know, benchmarking is about pushing the tech to its limits to see what it can do, or at least how it does against a standard heavy load. Your average user who's surfing the web, checking email, playing/streaming media or playing games probably isn't going to stress new hardware all that much. Aside from gaming, hardware from 5 years ago is still up to that task.

Well, having run the benchmark in question and many others (including open ones) on my newest system build, the current SYSmark scores lower than it should based on other tests that cover the same things. So I don't think it's just whining; something really is going on under the hood with SYSmark compared to other benchmarking apps that do the same sorts of things. I seem to recall this isn't abnormal either; this has been seen often in a variety of tools, with bias often programmed in treating one

I'm not sure you understand the point of benchmarking. You can benchmark a lot of stuff, but it's pointless to benchmark HD read/write speeds when what you're interested in is FLOPS. So there are benchmarks that measure how many floating-point ops your system can do, and there are benchmarks about hard drive performance. But there tends not to be one benchmark to rule them all.

BAPco say that SYSmark is a benchmark for real-world business app performance. But AMD say SYSmark doesn't utilize the GPU in any way.

To get right down to it, the real purpose of all benchmarks is to provide grist for geeks to argue about which toy is better at doing X. So, the more benchmarks the better, ideally each measuring not-quite-the-same-thing and not-quite-the-same-way. That way, everybody can have a favorite, and everyone can win! :)

I think they are pissy because they don't stand up well to the competition.

Before reading the article, that was my assumption as well, but we are both wrong. IMHO, the complaint is legitimate, as SYSmark's results seem to contradict even basic common-sense metrics like "how long does it take to perform task XYZ." Whatever measurements they use, it seems the weighting has been intentionally skewed to give misleading results -- like the example in TFA, where SYSmark says an old Core 2 QX9770 with DDR2 matches an i7-965 with DDR3, even though simple task-oriented tests prove the i7

Once upon a time [vanshardware.com] BAPCo didn't even bother pretending that they weren't actually Intel.

These days, BAPCo pretends that they aren't Intel. It's still Intel, though.

BAPCo is Intel.

The last time Intel so blatantly rigged the benchmark game was when the Athlon XPs were beating the shit out of the Pentium 4s. AMD has recently made a mockery of Intel's Atom solutions, and the one leaked benchmark [phoronix.com] for the Bulldozer design must have Intel more than a little worried about i

Quite a bit of Windows software is compiled using Intel's compilers, and they are intentionally made to sabotage performance on AMD chips. When looking at CPUID, instead of just checking for the features they want, they check for that _and_ the CPU being "GenuineIntel", and if not, the code chooses the worst possible implementation [agner.org]. This includes some major scientific math libraries and parts of popular benchmarks.

That is because the Pentium Dual-Core is a low-end CPU, and Intel optimises for the high-end ones, where people care about benchmarks.

That is also the reason why performance of code generated with the Intel C++ compiler is poor on other manufacturers' CPUs: they were required to stop looking for the "GenuineIntel" CPUID string but still optimise based on assumptions about the pipelines and available ALUs/FPUs of current-generation Intel processors. The Pentium Dual-Core is based on the older Core 2 architecture and of

They were required to stop looking for the "GenuineIntel" CPUID string but still optimise based on assumptions about the pipelines and available ALUs/FPUs of current-generation Intel processors.

This was actually not a requirement. The FTC ruled that Intel must inform people that the compiler does not optimize for non-Intel CPUs, not that they had to stop doing these things.

As of September of last year, when Intel released a new version, it was still doing the same things. Essentially, Open64 is beating ICC on the Intel T2370 (with the bog-standard -O2 optimizations enabled) even when ICC is generating code that does full processor-model detection at runtime.

I would love to compile my open source Windows binaries with Intel's compiler; however, it is unclear how I am supposed to get it for free, and the old versions I have found for free fail to compile my programs.

Then compile against LLVM/Clang trunk. The LLVM 3.0 release is nearing, and it has come a long way in a short period of time.

Quite a bit of Windows software is compiled using Intel's compilers...

Dear KiloByte,

You clearly just made that up. That is a patently untrue statement. Both Windows and Office are built with the Microsoft compiler.

Wow, I would've expected better from a low six-digit UID. Maybe, perhaps, by "Windows software" KiloByte meant programs made to run on Windows, not necessarily Windows itself? There are companies making Windows software who build it with Intel C++.

The reason Intel's compiler gets used so much is that it consistently generates the fastest code, period, when run on Intel processors (which are by far the majority). You see compiler shootouts where, among the others, it goes back and forth which is faster at what, and then at the top is ICC, which just kills all the others.

Well, maybe AMD should do something about that. Maybe they should make their own competing compiler. Make it generate optimized code for ALL chips. Hell, give it away fo

Didn't know they'd done that. Well, then I guess the question now is how it performs. If it gives results similar to ICC's, then they just need to market the thing and start convincing companies to use it.

You can get away with that sort of behavior when you're a bit player in the market, but when you've got most of the supply locked up, I don't think there's any way in which it isn't an antitrust violation. Developers are mostly going to use the compiler that produces the fastest code on Intel processors, because that's the larger part of the market. The odds are good that the machines they're developing on use Intel processors as well.

AMD has chosen an architectural roadmap that makes the GPU and CPU part of the same APU. SYSmark does not measure 3D graphics performance. At all. So while AMD is pursuing a path that will give its APUs greater overall performance than the CPUs they contain, they are actually hamstringing themselves in the CPU-only testing arena, because the CPU portion of their APUs will seem relatively lower in performance at the same price point.

AMD's proper course of action should have been to promote an APU-specific benchmark. Instead, it tried to change SYSmark to do something it doesn't do.

It was denied the right to twist the benchmark in its favor. Rather than coming up with the obvious solution of spinning off a new benchmark consortium to develop an APU-specific test, it started crying and ran to its room shouting, "I hate you! I hate you! I hate you!"

AMD is, really, behind a major 8-ball here. It has, again, put all of its eggs into a rather hopeful basket and come up with fewer than expected. At least this time, unlike with the Barcelona [xtremesystems.org] debacle, it isn't doing it while roller-skating blindfolded through a car wash. That time it cost them their fabs. [eweek.com] They don't have much left to sell.

So then the answer is to stop innovating unless everybody else is doing the same thing?

I recently bought a laptop with a Llano chip in it, and I love it: the battery life is great, and the performance in terms of things that people normally do is great as well. This isn't about sour grapes; this is about a benchmark that's lost its way and isn't of particular use. If it's focusing so heavily in the way that it is, I'm not sure how I'd use the scores to figure out what processor to get.

So then the answer is to stop innovating unless everybody else is doing the same thing?

Everybody else is doing the same thing.

Intel and nVidia both have APUs already.

This is definitely about sour grapes. SYSmark is a benchmark. If you keep moving the mark on the bench, it stops being a benchmark. If the mark is the same for everyone, and AMD keeps not measuring up to it, it's not scientifically sound for AMD to claim the mark is the part that's wrong. It's pure petulance. Even if nVidia and VIA did the same thing.

Sysmark is supposed to measure overall systems performance, not just be a CPU benchmark.

"SYSmark® 2012 is the latest version of the premier performance metric that measures and compares PC performance based on real world applications."

Real-world software uses the GPU: Aero, Flash, Chrome, IE, Firefox, Photoshop, just off the top of my head. Intel is moving into the combined CPU/GPU market too, though in a slightly different way, with Sandy Bridge. GPU acceleration of applications is here to stay, and e

Sysmark is supposed to measure overall systems performance, not just be a CPU benchmark.

Since it doesn't measure 3D performance, that hasn't been true since 3D hardware became standard equipment in all desktop computers. It's no reason to get all huffy and run off 15 years later claiming they're not playing fair.

I have a deodorant that claims to get me laid by gangs of supermodels at the bus stop.

The claims made by the benchmark are irrelevant. You can't tell what one benchmark does by looking at the art on the box. Anyone who doesn't analyze the benchmark's mix of tests doesn't understand the benchmark. Anyone who relies on just one benchmark doesn't understand the complexity of a computer.

However, this benchmark can be used by people who do a certain sort of computing. They can weight it higher than what other

The problem is that SYSmark claims to be a full-system benchmark, not a CPU benchmark.

The average hardware site runs a dozen different benchmarks on every part before making a comparison. The benchmarks have random degrees of orthogonality and overlap.

It hardly matters what the consortium says about its benchmark, once they're aggregated like that. Just so long as it's stable so that parts can be compared when tested in different times and locations.

Will it be AMD, the plucky underdog who always does what's best by the consumer vs Intel, the evil conglomerate who will stop at nothing to screw you over for profit?

Or will it be Intel, who are trying their best in the face of constant criticism simply for being number one vs AMD, who are just bitter about the fact that they've been playing catch-up ever since the Core2s were released?

AMD is certainly correct that most benchmarks these days are optimized for Intel CPUs. That has been known for years. However, they are also wrong, because Intel's Sandy Bridge architecture blows away AMD's Phenom II architecture by 30% or better while also using considerably less power, in tests with the more CPU-neutral GCC.

As far as I can tell, it basically comes down to main-memory management. The Phenom II architecture can run a system call a lot faster than an Intel i7, but the moment data has to b

I don't care about SYSmark telling me whether any given Intel CPU is better than any given AMD CPU or vice versa. What I really care about is finding out if the newly released Intel/AMD [insert arbitrary name here] CPU/GPU is truly better than the old Intel/AMD [insert arbitrary name here] CPU/GPU. By the time I get to the point of looking into concrete numbers from benchmarks, I've already decided whether I'm going to get an AMD or an Intel processor. The real problem that I have with all this benchmarking

ispc is a new compiler for "single program, multiple data" (SPMD) programs. Under the SPMD model, the programmer writes a program that mostly appears to be a regular serial program, though the execution model is actually that a number of program instances execute in parallel on the hardware. (See a more detailed example that illustrates this concept.) ispc compiles a C-based SPMD programming language to run on the SIMD units of CPUs; it frequently provides a 3x or more speedup on C