Posted
by
kdawson
on Monday October 12, 2009 @11:03PM
from the try-something-smarter-than-detecting-the-app-name dept.

EconolineCrush writes "3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark. Intel's current Windows 7 drivers appear to be in direct violation, offloading the graphics workload onto the CPU to artificially inflate scores for the company's integrated graphics chipsets. The Tech Report lays out the evidence, along with Intel's response, and illustrates that 3DMark scores don't necessarily track with game performance, anyway."

Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.

Except the article clearly shows that the name of the game's executable determines frame rates in some cases. It then goes on to state:

the very same 785G system managed 30 frames per second in Crysis: Warhead, which is twice the frame rate of the G41 with all its vertex offloading mojo in action. The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization.

This kind of offloading is definitely shady. I can't see how they'd get the driver approved.

Did you actually read the article? The driver was shown to be using the same cheats when the benchmark executable was renamed. This isn't about actual optimizations as far as the GPU is concerned; it's about falsifying results by using the CPU instead.

Well, if you RTFA (I know, but I got bored), it says they tested a 785G from ATI and found that renaming did nothing, and of course it kicked Intel's butt, surprise surprise. I mean, is there anyone at this point who doesn't know Intel IGP = big can of fail, and that pretty much the only reason you see them so much is that they're dirt cheap?

But if you would like some benchmarks of actual games and BD playback using both the ATI and Intel chips here you are [techreport.com], enjoy. As someone who recently switched from being a li

Optimizing things by application is a good thing. While benchmarks are nice and all I don't think ATI gave a shit about cheating on benchmarks. At the time the game art on the cover of every video card was Quake 3. It isn't shocking to see they did a few optimizations for it, I'd be surprised if they didn't. Cool read though, neat seeing groups hack little toys like that together.

optimizing != benchmark cheating. just optimizing for one thing does NOT mean that you are cheating. cheating is making the world think your card can do something it can't. like what intel did, i.e. offloading the load onto the cpu.

you seem to be a witless fanboi tho, since you haven't been able to muster the courage to post with your own userid. so never mind this reply. it's kinda wasted on you probably.

Both ATI and nVidia have been caught cheating [extremetech.com] (and by cheating I mean specifically targeting the FutureMark benchmarks to make their products look better than they actually are). The above link is only a single instance. A quick google will net you a good sampling over the last decade or two.

Optimizing a driver for a specific game is not cheating as long as it doesn't affect quality. Optimizing your driver to get inflated scores specifically in a benchmark is cheating.

The difference between true cheating and mere optimizing when benchmarking a GPU is something that many people do not seem to understand. Cheating, when it comes to GPUs, really comes down to intentionally degrading visual quality just to get higher scores while tricking the benchmark application into thinking the quality is as high as specified.

An example of this would be running a test with antialiasing set at 8x in the application, and antialiasing set to "application control" in the drivers, yet when the drivers

If and only if the Intel driver would make the necessary adjustments for a real world app (as opposed to one that just happens to have the same name as a popular benchmark), THEN it would be acceptable IF they also specified the CPU required to get that performance. Otherwise, it's doing something in the benchmarks that it will not do when you run your application and if your CPU is busy, it may not even run the benchmark as well on your machine.

I don't need to suggest anything. There are very clear rules defined for accepted benchmarking standards. As I said, I have no problem with a vendor tweaking their drivers to make some random game perform better, but when you specifically target a benchmark and then hide the fact that the work is actually being done by the CPU rather than the GPU, then I take issue.

If they allowed an option to prevent the CPU offload so that you could evaluate the GPU at face value, I wouldn't have a problem with that. That

I don't remember AMD cheating, but then I have only more recently become a big ATI fan. However, Nvidia has a long history of benchmark cheating in drivers in order to make their stuff look better than it is, and many times it was far more blatant than what Intel is doing here.

At the time of quack.exe ATI wasn't owned by AMD, cheating or no cheating we've got to be clear on that one.

I think the point is to benchmark the performance of the GPU. If your fav-game-of-the-month looks fabulous on your friend's hopped-up system with xyz graphics card, you expect to get the same graphics performance if you buy the same card, despite having a lower-class processor. If the game is already taxing your friend's CPU to play smoothly, imagine the reduced gameplay AND graphics you'll get when you try it on your system, since it's trying to offload GPU work to your already burdened CPU.

There's simply no excuse for changing your behavior when you detect a benchmark app is running. Fraud, fraud, fraud. That's no better than the driver software screwing with the benchmark app as it runs or modifying its output before it's displayed, tricking it into displaying completely made-up numbers of their choosing.

In any case, application-specific optimizations are a great tool. They got an extra 18% speed out of the chip with just application-specific tweaks. That's a pretty damn significant increase. Ignoring that would be a terrible decision. All graphics drivers should use this and update the drivers every few months as new games come out.

The app-specific optimizations actually made Crysis look like shit, and ate more CPU power (you need an extra core to play Crysis), and the damn thing was still smashed by an equivalent AMD chip that could play Crysis at twice the frame rate (which was 30fps, rather than an unusable 15fps). The benchmark showed that Intel's was about 30% faster than AMD's offering, which in real life use was actually twice as fast as Intel's.
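The mismatch described above is easy to quantify; here's a quick sketch using only the rough figures quoted in this thread (the ~30% 3DMark lead and the 15 fps vs. 30 fps Crysis: Warhead numbers — approximate, not measured here):

```python
# Rough sanity check of benchmark ratio vs. real-game frame-rate ratio,
# using the approximate figures quoted in the thread.
def relative_gap(bench_ratio, game_ratio):
    """How far the benchmark's Intel-vs-AMD ratio diverges from the game's."""
    return bench_ratio / game_ratio

intel_vs_amd_3dmark = 1.30     # G41 scored ~30% higher in 3DMark Vantage
intel_vs_amd_crysis = 15 / 30  # 15 fps vs. 30 fps in Crysis: Warhead

gap = relative_gap(intel_vs_amd_3dmark, intel_vs_amd_crysis)
print(round(gap, 2))  # 2.6 -- the benchmark flatters the G41 by ~2.6x
```

By that back-of-the-envelope math, the benchmark overstates the G41's standing relative to the 785G by a factor of roughly 2.6.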

Oh, ATI was one of the first to cheat on a graphics benchmark. quack.exe, anyone?

Oh this type of thing has been going on for a VERY long time. For example, there was the Chang Modification [pcmag.com] back in 1988 (It slowed down the system clock that was used as a timing base for the benchmark, resulting in higher benchmark scores).

Oh, ATI was one of the first to cheat on a graphics benchmark. quack.exe, anyone?

Oh this type of thing has been going on for a VERY long time.

I even remember teapot-based hacks (although not the details, unfortunately; probably something along the lines of having the teapot hardwired somewhere) back when displaying rotating GL teapots was all the rage to test graphics hardware (ancient history, obviously). Of course, something like Quake was still the stuff of science fiction at the time.

You cite a 2001 issue as one of the earliest examples? Poor form. ;) Go back another 4 years: nVidia released the Riva 128, which started winning most benchmarks against ATI's Rage Pro (and the Rendition Verite, etc.). Well, a few publications started noticing that the speed advantage was due to the image quality being much worse: no trilinear filtering, no fog (at least for a few driver iterations), and some sort of texture compression that made rendered text in some games illegible (a couple of games had the misfortune of having that problem even in their menu system). I remember the comparison images for the nicest benchmark/demo of the day, called "Final Reality", were quite telling of the IQ difference. However, most publications of the time just went with fps numbers, so that left ATI with no choice but to "optimize" their new driver set (called "Turbo") especially for 3D benchmarks. :)

I used to work for a video card manufacturer and game and video developers often did totally retarded things which just happened to work on the cards they developed on but made the software run like crap on ours. We routinely had to implement workarounds for individual games to make them run properly on our cards.

One particular example which springs to mind -- I won't mention the developer or the game -- was an engine which used a feature which we supported in hardware but a certain other card manufacturer whose cards they used performed in software. Rather than configuring said feature once as they should have done, retarded developer repeatedly reconfigured it numerous times in the course of a single video frame, which required us to reconfigure the hardware every time -- slow as heck over an AGP bus -- whereas other card manufacturer just had to execute a few CPU instructions. We had to detect the game and disable our hardware support, so that we would fall back to software and run the retarded code much faster; in that instance there were places in the game where, far from a measly 15%, we'd literally be going from seconds per frame to numerous frames per second.

So it's quite possible to need to detect individual games or applications in order to work around retarded coding which cripples performance on your hardware. The line you shouldn't cross -- and which I don't believe we ever did -- was to render something other than what the developer intended, for example by detecting a shader used by a benchmark and replacing it with one that looked similar but didn't do as much work.

Similarly, the issue here is not Intel punting processing to the CPU when the GPU is overloaded, but the fact that they do so by detecting the name of the benchmark rather than by monitoring the GPU loading and dynamically switching between hardware and software so that it would work on any application. General optimisation is fine, workarounds for retarded developers are fine, but special optimisations for benchmarks which don't affect real applications is getting pretty close to the line.

I remember some time ago, that people were upset that a video card manufacturer was optimizing drivers so certain games run fast (thus will score higher on benchmarks). I welcome this. I want my games to run faster, and if the manufacturer is putting a ton of effort to optimize their drivers so that some games will run faster, FOR FREE, then it's a boon for the customers. Although, optimizing for 3D Mark helps no one. But who actually cares about 3D Mark scores anyway?

Oh, so you like your video cards to sacrifice visual quality for frames per second, and "everything on ultra high + glassy smoothness" is not something you look for when selecting a shiny new video card?

they REDUCED image quality, boosted performance. there isn't ANYthing wrong with that technically. you are yourself allowed to reduce image quality and boost performance through settings on any graphics card.

it means that they TRADED OFF quality for performance. not pretended, through cheating, that their card was capable of delivering both.

It's linked at the bottom of that article, but here it is (in German, but the screenshots speak for themselves) [3dcenter.org]. The quoted ATI rep basically admits it too, saying that they optimize for the best "visual experience", where that's some mix of visual fidelity, framerate, etc.

I don't know all the details of when (in relation to AMD buying out ATi) but...

ATi was notorious for cheating on the IQ benchmarks - essentially using a different anisotropic filtering method for the IQ test (the good one), and then the cheating one during the other tests.

The ridiculous part was that Nvidia was caught doing a similar thing, and the outcry (in part driven by ATi calling out Nvidia) forced Nvidia to admit it and later include a driver option to select the optimization level used. When ATi was

They actually claimed that the 2000+ chip was equivalent to what the performance of a 2GHz Thunderbird (their older CPU) would be. That it was also similar to a 2GHz P4 was left to the imagination of the buyers.

Though the 2000+ Athlon XP and the 2GHz P4 were quite similar.

And now neither company uses the clock frequency for advertisement, since all the clock frequency can tell you is if the chip in question is faster/slower than other chips in the same family, and model numbers can do that too (for example Opteron 275 is

but the + performance thing is real. you can experience it live, as opposed to benchmarks. whereas the raw 'processing power' intel supposedly sells to people doesn't translate into real-world tasks as it should.

i'm an end user. i care about multimedia, video, games, internet, daily tasks. i'm not going to run long batches of arithmetic calculations or compile thousands of lines of code. i don't give a flying fuck about what number a cpu has on it - what i care about is what i SEE in front of my eyes as performance.

Effectively dividing tasks among CPUs is not the issue here. They want to benchmark the GPU, and they wanna make sure you don't enable optimizations that are targeted specifically at the benchmark, which Intel was doing shamelessly.

The behaviour their driver has in the benchmark is also used in several games, e.g. Crysis: Warhead. RTFA.

The issue is that the driver treats different games differently, based on filename. Some get this boost and some don't. Whether you put 3DMark into the boosted or unboosted category, its results will be indicative of some games and not of others.

Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped. So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.

It reminds me of the days when Quake3.exe would give you higher benchmarks, but worse video, than Quack3.exe.

Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped. So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.

A practice which is explicitly forbidden per the guidelines. I know lots of Slashdotters don't read the article but I am really beginning to wonder what part of that is so hard

No, they should either target behaviors, rather than executables, or make these available for the application to request. Or they should improve the overall performance for everything -- starting with, oh, making a better chipset.

Targeting specific executables, even if they do end up improving the performance of specific games, has the effect of raising the barrier of entry to that market -- I mean, it's hard enough to optimize a game engine without having to develop a business relationship with Intel.

Not so; an integrated GPU is simply an (often low-power) GPU which uses the system's RAM instead of its own RAM. Because system memory buses are usually much, much slower than the ones included on dedicated graphics cards, and because the IGP shares the bandwidth with the CPU, the IGP is in turn relatively slow.

It (normally) has nothing to do with using the CPU to do graphics computations.

While it makes some sense, triggering the behavior using certain filenames is peculiar to say the least.

I suppose, considering that the 3DMark tests are intended to test a hardware solution's peak performance, there is some rationale behind identifying the test executable on some list of "heavy" applications. The guidelines in which Futuremark explicitly forbids that sort of thing are clear, yes. However, in a sense the "spirit" of those guidelines is that they don't want companies trying to cheat by designing d

Well, if the GPU becomes saturated, I could imagine the rest of the load spilling over to the CPU (one or many cores). Obviously the GPU is more efficient at video tasks, but if the video task is priority for the user, why not offload to the CPU as well? Makes sense to me.

If you do that for a benchmark app then you are not really testing (just) the performance of the graphics hardware, so turning on that optimization without disclosing it is probably not really a fair comparison of the hardware. To make it 'fair' you really need to make the benchmark app aware of the feature and able to turn it on or off under software control, or at least know whether it is enabled. I wonder if similar optimisations could be made to any 3D video driver...

In the real world, if the user wants high graphics performance and there are CPU cores doing nothing then like you said, offloading to them makes perfect sense.

It's only half unfair though. In optimized games like Crysis, Call of Juarez, etc., they get a boost just like 3DMark Vantage shows. In other words, 3DMark's performance is indicative of how those games will perform. However, in any game not specifically mentioned in the drivers, the 3DMark results don't match up with actual games' performance.

As most people have stated, it would be much better if they could do this based on actual performance statistics, rather than just based on the filename. The flip

On the one hand, a mechanism that uses the CPU for some aspects of the graphics process seems perfectly reasonable (whether or not it is a good engineering decision is another matter, and would depend on whether it improves performance under desired workloads, what it does to energy consumption, total system cost, etc.), so I wouldn't blame intel for that alone.

On the other hand, though, the old "run 3Dmark, then run it again with the executable's name changed" test looks pretty incriminating. Historically, that has been a sign of dodgy benchmark hacks.

In this case, however, TFA indicates that the driver has a list of programs for which it enables these optimizations, which includes 3Dmark, but also includes a bunch of games and things. Is that just an extension of dodgy benchmark hacking, taking into account the fact that games are often used for benchmarking? Or is this optimization feature risky in some way (either unstable, or degrading performance) and so only enabled for whitelisted applications?

If the former, intel is being scummy. If the latter, I'm not so sure. From a theoretical purist standpoint, the idea that graphics drivers would need per-application manual tweaking kind of grosses me out; but, if in fact that is the way the world works, and intel can make the top N most common applications work better through manual tweaking, I can't really say that that is a bad thing (assuming all the others aren't suffering for it).

I'm inclined to give Intel the benefit of the doubt here. Few reasons:

1) Nobody buys Intel integrated chips because of how they do on 3DMark. Nobody thinks they are any serious kind of performance. Hell, most people are amazed to find out that these days they are good enough that you can, in fact, play some games on them (though not near as well as dedicated hardware). So I can't imagine they are gaining lots of sales out of this. Remember these are chips on the board itself. You either got a board with one or didn't. You don't pick one up later because you liked the numbers.

Mod the parent up: what his link shows is that Intel are not keeping it a secret that they offload to the processor; they have a published document saying that they do this for 3DMark as well as other software for the XP and Vista drivers. I don't know whether they have yet published a similar document for the Win7 driver, but Win7 is not yet on the shelves, so it's a bit hard to criticize them for not disclosing for that.

It's not really cheating is it, if you are open about what you are doing; I think the title

Just look at the pics. Changing the name of the executable changed the results dramatically. The driver is apparently detecting when it's running a 3DMark (or some other specific apps) and switches to some other mode to boost its scores/FPS markings.
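What the rename experiment implies is a dispatch keyed on the executable's file name, roughly like the sketch below. This is purely a hypothetical reconstruction: the application names come from the thread, but the exact executable file names and the function itself are invented for illustration.

```python
# Hypothetical sketch of filename-keyed offload dispatch, as implied by
# the rename experiment. The titles are those named in Intel's statement;
# the actual .exe names and this logic are assumptions.
OFFLOAD_WHITELIST = {
    "3dmarkvantage.exe",
    "crysis.exe",        # exe name assumed
    "cojgame.exe",       # Call of Juarez (exe name assumed)
    "lostplanet.exe",    # exe name assumed
    "reliccoh.exe",      # Company of Heroes (exe name assumed)
}

def vertex_offload_enabled(exe_name: str) -> bool:
    """Enable CPU vertex offloading only for recognized executables."""
    return exe_name.lower() in OFFLOAD_WHITELIST

print(vertex_offload_enabled("3DMarkVantage.exe"))  # True
print(vertex_offload_enabled("3DMarkVintage.exe"))  # False: rename defeats it
```

Anything dispatched this way passes or fails the rename test exactly as described in the article: same binary, different name, different behavior.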

We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.

And the rest of page 2 indicates that offloading some of the work to the CPU does, for certain games, improve performance significantly. Offhand, this doesn't necessarily seem like a bad thing. Intel is just trying to make the most out of the hardware of the whole machine. Also, one would also do well to bear in mind that the GPU in question is an integrated graphics chipset: they're not out to compete against a modern gaming video adapter and thus have little incentive to pump their numbers in a synthetic benchmark. Nobody buys a motherboard based on the capabilities of the integrated graphics.

The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?

It makes sense; you'd only want to perform these optimizations in games where you're significantly GPU bound. CPU-heavy games, such as Supreme Commander, are probably better off spending the CPU time on the game itself.

I'd see this as more laziness than anything else; it's easier to hard-code a list of GPU-bottlenecked games than it would be to have your driver auto-detect whether there is idle CPU time that could be better spent on offloading.
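The auto-detecting policy the parent describes could look something like this. Purely illustrative: the thresholds are invented, and in a real driver the utilization figures would come from hardware counters rather than plain parameters.

```python
# Illustrative load-based offload policy, in place of a hard-coded exe list.
# Thresholds are invented; real utilization data would come from the driver.
def should_offload_geometry(gpu_util: float, cpu_idle: float,
                            gpu_threshold: float = 0.95,
                            cpu_idle_threshold: float = 0.40) -> bool:
    """Offload geometry work to the CPU only when the GPU is saturated
    and there is meaningful idle CPU capacity to absorb it."""
    return gpu_util >= gpu_threshold and cpu_idle >= cpu_idle_threshold

print(should_offload_geometry(gpu_util=0.99, cpu_idle=0.60))  # True
print(should_offload_geometry(gpu_util=0.99, cpu_idle=0.10))  # False: CPU busy
print(should_offload_geometry(gpu_util=0.70, cpu_idle=0.90))  # False: GPU has headroom
```

A policy like this would apply to any application, benchmark or not, so it would survive the rename test and behave the same on the user's machine as on the reviewer's.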

It seems entirely reasonable to me for them to optimize the driver to run particular programs faster if at all possible.

Perhaps, but you definitely don't do it for the benchmark. The article quotes the 3DMark Vantage guidelines which are perfectly clear.

With the exception of configuring the correct rendering mode on multi-GPU systems, it is prohibited for the driver to detect the launch of 3DMark Vantage executable and to alter, replace or override any quality parameters or parts of the benchmark workload based on the detection. Optimizations in the driver that utilize empirical data of 3DMark Vantage workloads are prohibited.

it is prohibited for the driver to detect the launch of 3DMark Vantage executable and to alter, replace or override any quality parameters or parts of the benchmark workload based on the detection.

And was it actually doing this? What I mean is that if the visuals were identical with the hack as without, what difference does it make? I don't know if this is the case since the article is down, but if they were then I don't see Intel doing the wrong thing. I can see the justification for integrated graphics

Optimizing for games or rendering software makes sense. Optimizing for benchmarks seems like a pretty clear violation of the rules.

It does point out a weakness in benchmarks compared to in-game tests, though. If a company spends all of its time optimizing for specific applications, then it will get lower marks in a benchmark than it would in real life. But it isn't fair to apply these optimizations to benchmarks. Lends more credence to the 'top 5 games' benchmarks that tomshardware or whoever uses.

That's probably a matter of the optimization path in the games that are run and would probably result in an unstable system if done for general gaming. Tricking a game into running the incorrect codepath just seems to be asking for trouble IMHO.

You may be thinking of changing the CPUID on Via chips to GenuineIntel vs AuthenticAMD vs CentaurHauls.

There's one of the 'big' benchmark suites where the chip's score is roughly the same on AuthenticAMD and CentaurHauls, but gets a boost on GenuineIntel. Via's chips are the only ones with a (user-)changeable CPUID, so we don't know how differently-IDed AMD or Intel chips would do, but it's still interesting.
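The Via experiment works because the vendor string returned by CPUID leaf 0 is user-settable on those chips. A benchmark branching on that string behaves roughly like this toy model (pure simulation; the boost factor is invented, though the three vendor strings are the real CPUID identifiers):

```python
# Toy model of a benchmark whose score depends on the CPUID vendor string,
# as in the Via experiment described above. BOOST is invented for
# illustration; the vendor strings are the real CPUID leaf-0 identifiers.
BOOST = 1.5  # hypothetical speedup from a vendor-keyed fast path

def score(base: float, vendor: str) -> float:
    """Only GenuineIntel gets the hypothetical fast path."""
    return base * (BOOST if vendor == "GenuineIntel" else 1.0)

base = 1000.0
print(score(base, "AuthenticAMD"))   # 1000.0
print(score(base, "CentaurHauls"))   # 1000.0
print(score(base, "GenuineIntel"))   # 1500.0
```

On a chip with a settable CPUID, flipping the vendor string while holding the hardware constant isolates exactly this kind of branch, which is what made the Via result telling.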

Is Intel the 500 lb. gorilla in chipsets? Sure, and they got there by 'cheating.' Which is winning.

To be fair, I'm pretty sure that Intel has made the highest-performing chipsets for Intel processors for quite a long time now, occasionally competing with Nvidia (who recently gave up). The other makers like VIA and SiS never had chipsets that worked as well as Intel's. Of course, this is for chipsets which don't have built-in graphics.

Intel's entries into the 3D graphics market have never been very good, o

But why they even bother trying to rig benchmarks like this is beyond me. No one who's serious about graphics performance would use Intel's built-in video.

No, but there are a lot of 3D games that aren't FPS junkie, the-sky-is-the-limit craving games. For example, I liked King's Bounty which has minimum requirements of "Videocard nVidia GeForce 6600 with 128 Mb or equivalent ATI". Tales of Monkey Island says: "Video: 64MB DirectX 8.1-compliant video card (128MB rec.)". You won't find any of these in the latest AMD/nVidia review, but just pretending to have a little 3D performance can make a difference between "no 3D games at all" and "some non-intensive 3D gam

I'm seeing a potential other side to this that doesn't seem to be being explored (unless I've missed something) -- if the optimizations are specific to .exes listed in the driver's .inf file, has anyone tried adding other games to the list (or alternately, just renaming another executable to match one in the list)?

It would seem like an interesting turn if the optimizations are generic, but only enabled for games/applications that Intel has spent time testing them on.

You'd think you'd have logic in the GPU that could determine when a certain load was being achieved, certain 3D functionality was being called, etc., and offload some work to a multicore CPU if it was hitting a certain performance threshold (as long as the CPU itself wasn't being pounded...but most games are mainly picking on the GPU and hardly taking full advantage of a quad core CPU or whatever). That makes a degree of sense...using your resources more effectively is a good thing. If that improves your performance scores, well...so what? It measures the fact that your drivers are better than the other card's drivers. That seems like fair play, from a consumer's standpoint. If the competitors can't be bothered to write drivers that work efficiently, that's their problem. Great card + bad drivers = bad investment, as far as I'm concerned. That's the real point of these benchmarking tests, anyway. It's just product marketing.

But trapping a particular binary name to fix the results? That's being dishonest to customers. They're deliberately trying to trick gamers who just look at the 3DMark benchmarks into buying their hardware, but giving them hardware that won't necessarily perform at the expected level of quality. I generally stick up for Intel, having worked there in the past as a contractor and generally liking the company and people...but this is seriously bad form on their behalf. I'm surprised this stuff got through their validation process...I know I'd have probably choked on my coffee laughing if I were on that team and could see this in their driver code.

I would have thought by now that a standard tool in the benchmarker's repertoire was one that copied each benchmark exe to a different name and location and launched that, followed by a launch with the default name; and that the more popular benchmarks had options to tweak the test ordering and methodology slightly to make application profiling difficult.
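The countermeasure suggested above is trivial to script. A minimal sketch (the paths are placeholders, and the launch step is shown only as a comment; the interesting part is the copy-to-random-name step):

```python
# Minimal sketch of the "copy the benchmark to a random name" countermeasure.
# A driver keying on the executable's file name cannot recognize the copy.
import os
import shutil
import tempfile
import uuid

def copy_with_random_name(exe_path: str) -> str:
    """Copy an executable to a randomly named file in the temp directory,
    preserving only its extension, and return the new path."""
    ext = os.path.splitext(exe_path)[1]
    dest = os.path.join(tempfile.gettempdir(), uuid.uuid4().hex + ext)
    shutil.copy2(exe_path, dest)
    return dest

# Usage (hypothetical path):
# renamed = copy_with_random_name(r"C:\Benchmarks\3DMarkVantage.exe")
# subprocess.run([renamed])  # then run the original name and compare scores
```

Comparing the two runs' scores then flags any driver that changes behavior based on the name alone.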

Marketing execs change all the time. Each one says "Hey! I have an idea...." The programmer who is asked to put in the cheat is not wildly enthusiastic about the idea, knows it won't work and does a quick and dirty hack.

3DMark Vantage was never a legit benchmark. Heavily tuned for Intel CPU and nVidia GPU architectures, it never actually meant a damn thing.

Just compare performance of gf285/295 v. radeon 4870/5870 (any review) in 3DMark and in games. In 3DMark Vantage nVidia cards have close to 50% advantage while in real games radeons sometimes score higher.

The statistical anomaly alone is sufficient to dismiss 3DMark Vantage results as an outlier.

The article isn't loading for me, but: can't they simply measure the amount of CPU used during the benchmark and use that information in the benchmark? I don't think it's basically evil to perform that kind of offloading (except in this case when the rules of 3DMark forbid using empirical data on it to optimize performance; but then again, I would imagine many other pieces of software also get this treatment without bad effects on quality or game experience), but dynamically detecting the situation would de

"We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hard

I hope that new Apple systems don't get stuck with this crap video plus a dual-core CPU. The new iMac is thinner than ever, with an Intel Core i3 CPU and half-as-fast video, starting at $1200. To get a real video card, the starting price is $1800.

The Mac mini, with the slowest Core i3 and 2GB of RAM, starts at $500-$600.

APPLE, if you plan to pull that crap, at least offer a real desktop at $800-$1500+.

"they should be encouraged to release hand coded or special drivers to improve performance in specific games."

Games, sure - but it defeats the point of benchmarks by introducing a new, useless variable: how optimized the driver is for that particular benchmark. I mean, why should 3dMarkVintage.exe be 30% slower than 3dMarkVantage.exe? How does this help anyone except Intel?

It's not special drivers for specific games. It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards). This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.

No need for a car analogy on this one. It's like what happens when the public schools teach a generation or two in a way that optimizes them for performance on standardized tests, and when those students eventually enter the working world, they can't make change without a cash register or some other calculator. Or deconstruct an argument. Let alone understand the importance of things like living within your means.

I'm just guessing here, but maybe because offloading this work to the CPUs decreases CPU performance substantially, they don't want to make these changes generic because it'd make it look like systems with Intel video are slow, especially in any CPU-oriented benchmarks. After all, they pointed out in the article how Intel does this same thing for the "Crysis" game, but even with this offloading working, the game only got a measly 15fps with all extra effects off, which is downright unusable.

It's funny that Intel simply creates an INF file and uses it to detect apps and optimize for performance. I mean, if you're detecting a file name and enabling performance optimizations, why not detect the app's behaviour itself and make the optimizations generic? Clearly you know the app's behaviour, and you know the performance optimizations work. This seems to me a case where people were asked to ship fast, and instead of taking the time to plug the optimization into the tool properly, they made it a hack. A really bad one, too!
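The name-keyed shortcut being criticized amounts to something like this sketch (the set and function names are illustrative, not Intel's actual driver internals; the title list is from Intel's statement, the .exe file names are guesses):

```python
# Hypothetical sketch of name-based detection: the optimization is keyed
# on the process image name rather than on observed workload behaviour.
OFFLOAD_TARGETS = {
    "3dmarkvantage.exe",
    "crysis.exe",
    "callofjuarez.exe",
    "lostplanet.exe",
    "companyofheroes.exe",
}

def should_offload(exe_name):
    """Enable CPU vertex offload only for known executable names."""
    return exe_name.lower() in OFFLOAD_TARGETS
```

Which is exactly why renaming the binary makes the "optimization" vanish: 3dMarkVintage.exe isn't on the list.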

Sure, but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a) that it's a game and b) how it will behave when executed? I suppose they could model it on the heuristic and behavioral features of some antivirus/antispyware applications, but nothing about this problem sounds trivial. There's also the question of how bloated a graphics driver you're willing to accept.

My guess is that the above concerns explain why this was a poorly-executed hack.

I think he meant at runtime. It wouldn't be hard to detect that a running application was using only one thread of a quad-core CPU while thrashing the GPU, and then offload some work to the other cores.
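The "one busy thread, idle cores" half of that check is cheap to approximate on Linux by reading /proc, as in this sketch (function name mine; the GPU-saturation half is out of reach from userspace here and is not checked):

```python
import os

def runnable_thread_count(pid):
    """Linux-only sketch: count runnable ('R') threads of a process by
    reading /proc/<pid>/task/<tid>/stat. A driver doing this generically
    could pair 'one runnable thread, spare cores' with a saturated GPU
    queue to decide when CPU assistance is worthwhile, instead of keying
    off the executable name.
    """
    running = 0
    for tid in os.listdir(f"/proc/{pid}/task"):
        try:
            with open(f"/proc/{pid}/task/{tid}/stat") as f:
                # The field after the parenthesised comm is the state letter.
                state = f.read().rsplit(") ", 1)[-1].split()[0]
        except FileNotFoundError:
            continue  # thread exited while we were iterating
        running += state == "R"
    return running
```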

Detecting application behavior dynamically is non-trivial. Commonly it is done by instrumenting the binary, which _degrades_ the binary's performance. The act of observation destroys the behavior being observed, so to speak.
This is why 3DMark Vantage explicitly prohibits "use of empirical data of the application for optimization." _After_ you have the application's behavior, optimization is a lot easier.