Posted
by
CmdrTaco
on Wednesday November 17, 2010 @08:43AM
from the well-that's-not-nice dept.

dkd903 writes "A Mozilla engineer has uncovered something embarrassing for Microsoft – Internet Explorer is cheating in the SunSpider benchmark. SunSpider, although developed by Apple, has become a very popular benchmark for browsers' JavaScript engines."

um... I'll admit they've not been doing as well as they used to, but their stock [yahoo.com] would seem to indicate that they are not losing money. Individual divisions might be (they did get in the news yesterday for playing games with sales figures in one division--but not in a way that the money didn't come from somewhere in the company), but as a whole, their P/E is better than Apple's (even if they do have a lower market cap).

Insiders selling company stock is always a Red Flag for investors. Whether that is justified or not in this case is up to the individual to decide, but it's a Red Flag for a reason... very often it means problems within the company and bad news follows.

There are perfectly good reasons for insider selling, and it helps to be very straightforward about it... it's not like you can hide it anyway; it's reported and watched vigilantly.

Not being a Microsoft investor, and not particularly interested in their are

That's actually why there are blackout periods for insiders buying/selling shares, as part of the Sarbanes-Oxley rules. When I was at Dell, I wasn't allowed to buy/sell within 30 days (either way) of any public statement regarding earnings or future plans. These rules exist specifically to prevent insiders from gaming the market by buying up a large amount of stock right before an expected rise (higher than expected earnings announced, for example), or selling right before an expected fall (lower than expected earnings, for example).

You'll find numerous buildings scattered across the US that once belonged to IBM, but no longer do, because of IBM's shrinking profits (mainly due to their loss of the IBM PC business, but also other downsizing). Another analogy is that Microsoft might end-up like Kmart, who was once the #1 retailer but is now increasingly irrelevant.

anyone who says libertarian doesn't understand the conflict in the term. it's basically republicans who don't want to be called republicans, or someone who hates both parties but still leans republican. if someone declares themselves independent libertarian, then they're more acknowledging that they don't necessarily align with "libertarian" views. I have a friend like this, and it's basically republican but he doesn't want to admit it.


I like to think of a Libertarian as a Republican that smokes pot and/or downloads porn. It could also be a Democrat who hates paying taxes to a federal government that either wastes the money or gives to someone who does not deserve it.

We're not... true libertarians (small "L") believe people should be responsible for themselves and their own actions. Fiscal conservatism is a means to the end... where people can pay their own bills and take care of their own kids.

I usually self-identify as libertarian and describe my politics as "I believe in tax cuts and gay marriage". Although I will admit that as time passes I'm moving toward the American left on the tax cut side.

I think by "socially liberal" he means "nothing should be illegal unless it harms someone else." But as to fiscal conservatism, that's the Libertarian Party, not libertarians in general. Personally, I agree that anything that doesn't trample someone else's rights should be legal, but at the same time I'd like to see universal health care, and have the poor taken better care of. "There but for the grace of God go I".

And I'd like to see corporate power reined in, and I'd like the corporations to stop getting government welfare and getting away without paying taxes. So I seldom vote for a Libertarian candidate.

Well, I can't speak for right libertarians but for left libertarians it's easy. We don't think the solution to political problems is to throw money at them, we think there are underlying systemic problems that need to be addressed, in a fundamental way rather than a patchwork way.

Poverty doesn't exist because the government doesn't have adequate social programs to funnel money into the hands of poor people; poverty exists because wealth is power and is used to leverage more wealth and power. The genesis of

First of all, I (and I'll not speak for all left libertarians here, as there is some debate on the matter, but I think I'm expressing the majority opinion) distinguish "personal property" (that is, what you own and use) from "private property" (that is, property owned privately, usually by an organization, but not owned or used by an individual, and leveraged to extract profit). This distinction is important before any discussion of any wealth-distribution theory.

Personal property is personal property. I can think of scarce few real leftists (which is to say, true socialists, communists, anarcho-communists, left-libertarians, etc) who include personal property when they say "property is theft". Private property, on the other hand, being the spoils of a great and sustained theft from the public, belong to the public and should be returned to the public.

Note here that I do not mean the state when I say public. Which is to say that I'm not an advocate for systems like the Soviet Union, but I am an advocate for movements like the worker takeover of factories we see in some Latin American countries.

Well, yes, taking some of your money. But since the only way that you can make money is because the wider society sees that as a benefit, suck it up and pay your goddamned taxes. This illusion that somehow the money in your pocket came to you by yourself alone is the greatest lie of Libertarianism.

Any other definition necessarily requires taking my money and giving it to someone else.

Ah, the "anti-tax" argument. I'm happy with taxes. Honestly. Do I wish they were lower - of course. Do I think that we spend money on stupid things? Yep.

But taxes are still cheaper than having my own private doctor and hospital, my own roads, my own water towers and power generation, my own private library, swimming pool, and so on. Governments should do these things, because it's cheaper for everyone to pitch in.

Short story: Someone notices a perhaps too-fast result for a particular benchmark test with IE 9 and modifies the benchmark code which then throws IE 9 performance the other way. One *possible* conclusion is that MS have done some sort of hardcoding/optimisation for this test, which has been thrown out by the modifications.

Thanks to someone for pointing this out. I mean really, if they were going to throw this test, why would they throw it by quite this much? And is this the ONLY portion of the test that acts this way? If so, why in the world would they throw only this portion, and why by this much? The original result was uber fast, the result on the modified test pretty slow - if they were going to try to hide something, why make it uber fast and not just slightly better?

Something is weird, possibly hinky, but to outright declare cheating based just on this? Really? O_o

The purpose of a benchmark is to try to show how performance will be in the real world. If a given application has been programmed to do very well in a given benchmark yet does not do as well with a real-world situation, then the benchmark results are flawed. The idea of coding an application just to have good benchmark numbers that would not be seen in the real world is considered cheating. In this case, we are talking about JavaScript speeds, so you would be very surprised if you believed that IE 9

Optimisations done purely for a benchmark, to achieve far better results than normal, are the exact definition of cheating. Benchmarks are meant to test the browser with some form of real performance measure, not how good the programmers are at making the browser pass that one test. If the thing is getting thrown off by some very simple instructions to the tune of 20 times longer, then it is seriously broken, optimization or not.

It is like when ATI/Nvidia made their drivers do some funky shit on the benchmarks to make their products seem way better; this was also called cheating at the time.

The benchmark in question can be considerably optimized by dead code elimination, since a computationally expensive function in there (one that loops computing stuff) does not have any observable side effects, and does not return any values - so it can be replaced with a no-op. It is a perfectly legitimate optimization technique, but the one which tends to trip up naively written benchmark suites because they assume that "for(int i=0; i < 1000000; ++i) {}" is going to be executed exactly as written.
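The pattern in question can be sketched as follows (hypothetical code, not the actual SunSpider source):

```javascript
// Illustrative sketch: a benchmark kernel with no observable side effects.
function hotLoop() {
  var acc = 0;
  for (var i = 0; i < 1000000; ++i) {
    acc += i;   // "acc" is never read outside this function
  }
  // No return value, no writes to shared state: a dead-code eliminator
  // may legally replace the entire call with a no-op.
}
hotLoop();
```

A naive benchmark assumes the loop executes as written; a sufficiently aggressive engine is entitled to delete it.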

There was actually a similar case with artificial tests in the past - Haskell (GHC) scores on the Programming Language Shootout. Most tests there were also written as loops with some computations inside and no side effects, on the presumption that compilers would leave the computations intact even though their results are never used. Well, one thing that GHC has always had is a particularly advanced dead code eliminator, and it compiled most of those tests down to what is essentially equivalent to "int main() { return 0; }" - with corresponding benchmark figures. Once they changed the tests to print out the final values, this all went back to normal.

In this case it's not quite that simple, because seemingly trivial changes to benchmark code - changes which do not alter the semantics of the code in any way - trip up the dead code elimination analyzer in the IE9 JS engine. However, it is still an open question whether that is deliberate, or due to bugs in the analyzer. One plausible explanation was that the analyzer is written to deal with code which at least looks plausible, and neither of the suggested optimizer-breaking changes (inserting an extra statement consisting solely of "false;" in the middle of the function, or "return;" at the end of it) makes any sense in that context. Any dead code elimination is necessarily pessimistic - i.e. it tries to prove that the code is unused, but if there are any doubts (e.g. it sees some construct that it doesn't recognize as safe) it has to assume otherwise.
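For concreteness, the two optimizer-breaking edits have roughly this shape (a sketch with hypothetical kernels, not the real benchmark code; the exact diffs are linked from the article):

```javascript
// Variant A: an expression statement with no effect, inserted mid-function.
function kernelA() {
  var x = 0;
  false;   // a no-op; the function's semantics are unchanged
  for (var i = 0; i < 1000; ++i) { x += i; }
}

// Variant B: an explicit "return;" added at the end.
function kernelB() {
  var x = 0;
  for (var i = 0; i < 1000; ++i) { x += i; }
  return;  // identical in meaning to falling off the end of the function
}
```

Both bodies are still dead code, so a robust analyzer should eliminate them exactly as it eliminates the unedited original.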

The only true way to test this is to do two things:

1. Try to change the test in other ways and see if there are any significant diffs (such that they are not reasonably detectable as being the same as the original code) which will still keep the optimizer working.

2. Write some new tests specifically to test dead code elimination. Basically just check if it happens on completely different test cases.

By the way, the guy who found the discrepancy has opened a bug [microsoft.com] in MS bug tracker regarding it, in case you want to repro or track further replies.

Did you look at the diffs [mozilla.com]? The addition of the "true;" statement should make absolutely no difference to the output code. It's a NOP. The fact that it makes a difference indicates that either something fishy is going on, or there is a bug in the compiler that fails to recognise "true;" or "return" (at the end of a function) as dead code to optimise away - and yet the compiler can apparently otherwise recognise the entire function as dead code. Just to be clear, we are talking about a compiler that can apparently completely optimise away this whole function:
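Per the later discussion in the thread, the function is the heart of SunSpider's math-cordic test; a simplified sketch of its shape (illustrative, not the verbatim SunSpider source):

```javascript
// CORDIC-style kernel: rotates a vector in a loop, then drops the
// results on the floor.
function cordicsincos() {
  var X = 1.0, Y = 0.0;
  for (var i = 0; i < 25; i++) {
    var dx = Y / (1 << i);
    var dy = X / (1 << i);
    X -= dx;
    Y += dy;
  }
  // X and Y are never returned or stored anywhere observable,
  // so the entire body is eligible for dead-code elimination.
}
```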

but fails to optimise away the code when a single "true;" statement is added, or when "return" is added to the end of the function. Maybe it is just a bug, but it certainly is an odd one.

This shows the dangers of synthetic non-realistic benchmarks. I was amused to read Microsoft's comments on SunSpider: "The WebKit SunSpider tests exercise less than 10% of the API’s available from JavaScript and many of the tests loop through the same code thousands of times. This approach is not representative of real world scenarios and favors some JavaScript engine architectures over others." Indeed.

All JS functions return values. If no value is specified in the "return" statement, or if the return happens due to reaching the end of the function, "undefined" is returned. So adding a "return;" at the end of the function which does not otherwise return anything does not change its meaning in any way.

That said, it is quite possible that the optimizer does not know this, and treats any "return" as a signal that the function returns a meaningful value. Which then indicates a bug in the optimizer.
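That semantic point is easy to demonstrate (a minimal sketch; any conforming JS engine behaves this way):

```javascript
// Both functions return undefined when called; the trailing "return;"
// in the second one changes nothing about its semantics.
function implicitReturn() { var x = 1 + 1; }
function explicitReturn() { var x = 1 + 1; return; }

console.log(implicitReturn() === undefined);  // true
console.log(explicitReturn() === undefined);  // true
```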

The return statement was "return;" i.e. a return statement that did not return anything. Looking at the other JavaScript engines, that line added at most 1ms, while with the IE engine it added 19ms. If the IE9 JS engine is handling this function in a super-efficient way that is not due to cheating, the optimisation must be highly sensitive to variance.

One way to check if the IE9 engine is doing some sort of special casing (e.g. hashing the text for the function) would be to change the name of a variable. This should not change the behaviour of the engine as it is the same code (there are no extra elements in the tree, like additional returns). If the IE9 engine is cheating, this should jump from 1ms to 20ms like the other variances. If it is an optimisation bug, the performance should be 1ms for both cases.
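A sketch of what that rename experiment would look like (hypothetical kernels; the real test would edit the math-cordic source):

```javascript
// Identical structure, different identifiers.
function original() {
  var total = 0;
  for (var i = 0; i < 1000; ++i) { total += i; }
}
function renamed() {
  var sum = 0;
  for (var j = 0; j < 1000; ++j) { sum += j; }
}
// A structural dead-code analysis should treat these two identically;
// a source-text special case (e.g. hashing the function body) would
// match only one of them.
```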

A, who is not a standards organisation, develops it.
B, who is also not a standards organisation, uses it.
If sufficient numbers of Bs use it, it becomes a de-facto standard.
Sometimes C, who is a standards organisation, says it's a standard.
Then it becomes a de-jure standard.

For the record, I caught wind of this a month or two ago and posted it here in a firefox performance article. I was trolled and troll moderated despite pointing to the Mozilla team's own experiments.

The ONLY reasonable explanation, assuming you actually understand the implications of what it is you're (generalized readers, not you specifically) reading, which based on previous happenings is questionable, is that Microsoft is cheating their asses off by identifying the exact benchmark and returning a pre-computed value. Either that, or this is indicative of a horrible optimization bug which would negatively affect all JavaScript in their browser, and it would be impossible for them to be competitive in the least. Given there is no evidence to support the latter, the only reasonable conclusion is that they are cheating their asses off in these benchmarks.

Why don't you try reading it before you make that claim? The article is a few simple benchmark results and mild speculation as to what caused them. The summary may be inflammatory; the article goes out of its way not to be.

1) Microsoft beats everyone else by a factor of 10.
2) Making any of a number of effectively cosmetic changes to the function results in Microsoft taking twice as long as everyone else.
3) Making the inner loop 10x longer makes everyone else take 10x longer, except MS, who takes 180x longer.

Sorry, but if that counts as an optimization "bug", I have a bridge to sell you.

This is the nature of benchmarks... whenever people start caring about them enough, software/hardware designers optimize for the benchmark.

Next we're going to be shocked that 8th grade history students try to memorize the material they think will be on their test rather than seeking a deep and insightful mastery of the subject and its modern societal implications.

Fear not, for I have RTFA and the original article that the digitizor article is based on.

Fortunately for the ethics of Mozilla, the named Mozilla engineer (Rob Sayre) never claimed that IE9 cheated. Instead, he diplomatically refers to it as an "oddity" and "fragile analysis" and filed a bug with MSFT.

Most likely they are cheating. The other possibilities are far less plausible. Even if you discount that possibility, either they're not competent at optimization or they're not competent at writing a robust engine.

In none of the cases is MS doing something legitimate. Optimizing to one test is invariably a bad idea, no matter how well designed the test is, and quite honestly, at this point they should be able to code an engine that's a lot more resilient than that.

explain how this non-breaking modification suddenly means "slow." Is it following more (if(false-cond...)) etc and doing more processing than necessary just to find out there's nothing more to do? i.e. broken short circuiting?

1) If you actually read the article, you may have noticed that the engineer is named. It's right there at the beginning of paragraph 2: "While Mozilla engineer Rob Sayre"
2) The "cheating" stuff is all from the Hacker News thread and the fucking article. I suggest you further read item 1 under "Further Readings" on the fucking article, which is what Rob actually wrote. The link is: http://blog.mozilla.com/rob-sayre/2010/11/16/reporting-a-bug-on-a-fragile-analysis/ [mozilla.com]

Just to save you the trouble of reading it, if you don't want to: it's pretty clear that IE9 is eliminating the heart of the math-cordic loop as dead code. It _is_ dead code, so the optimization is correct. What's weird is that very similar code (in fact, code that compiles to identical bytecode in some other JS engines) that's just as dead is not dead-code eliminated. This suggests that the dead-code-elimination algorithm is somewhat fragile. In particular, testing has yet to turn up a single other piece of dead code it eliminates other than this one function in SunSpider. So Rob filed a bug about this apparent fragility with Microsoft and blogged about it. The rest is all speculation by third parties.

Next we're going to be shocked that 8th grade history students try to memorize the material they think will be on their test rather than seeking a deep and insightful mastery of the subject and its modern societal implications.

Some things to consider: 1) I'm not doing business with the 8th grader. Nor am I relying on his understanding and memorization of history to run Javascript that I write for clients. 2) You are giving Microsoft a pass by building an analogy between their javascript engine and an 8th grade history student.

Just something to consider when you say we shouldn't be shocked by this.

No it couldn't. Firefox has for a long time lagged on pretty much all the tests, including that stupid ACID test. They lagged specifically because they were more focused on real improvements over faking it or optimizing for conditions that one is unlikely to encounter.

Or, it could be that they're just incredibly incompetent at cheating. I suppose that's possible. But given the degree to which the real speed has improved with the 4.0b7, I think we can largely rule out that level of incompetence.

It shows that Microsoft is more concerned about getting a good score on the benchmark than they are about providing a good customer experience.

For that to be true, you'll need to demonstrate that they put more effort into scoring well on the benchmark than they did in improving performance in general. I don't think you can.

Improving performance in general is worth doing and I'm sure it's being done, but it's hard. Improving performance on a benchmark dramatically is often not that hard, and it's worth doing if it gets your product noticed.

I'm sure all browser makers are doing the exact same thing on both counts -- anonymous Mozilla guy is just

read the article. their js performance is quite suspect if their results are "too good to be true" when the benchmark is unmodified and then too bad to be true when it's very slightly modified. some more 3rd party testing should be done... and actually it would be pretty easy to do.

Benchmarks are great, for improving the performance of your code. Benchmarks are terrible, as soon as they start to get press and companies try to deceive users by gaming them. That's why it is important that we call out when they are caught so they get more bad press and maybe think twice about gaming the benchmark in the first place.

That's not a problem with benchmarks per se, that's a problem with the idiots that insist that benchmark performance is the same thing as good performance in general.

It really depends how the benchmark is set up, certain things are known to be costly in terms of IO, RAM and processing time. And a benchmark which measures things like that and gives some meaningful indication where the time is being spent is definitely valuable.

There is a difference between optimising for a benchmark and cheating at a benchmark. Optimising for a benchmark means looking at the patterns that are in a benchmark and ensuring that these generate good code. This is generally beneficial, because a well-constructed benchmark is representative of the kind of code that people will run, so optimising for the benchmark means that common cases in real code will be optimised too. I do this, and I assume that most other compiler writers do the same. Cheating at a benchmark means spotting code in a benchmark and returning a special case.

For example, if someone is running a recursive Fibonacci implementation as a benchmark, a valid optimisation would be noting that the function has no side effects and automatically memoising it. This would turn it into a linear-time, rather than exponential-time, function, at the cost of increased memory usage. A cheating optimisation would be to recognise that it's the Fibonacci benchmark and replace it with one that has precalculated the return values. The cheat would be a lot faster, but it would be a special case for that specific benchmark and would have no impact on any other code - it's cheating because you're not really using the compiler at all, you're hand-compiling that specific case, which is an approach that doesn't scale.
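A sketch of the legitimate version of that optimisation, done by hand in JavaScript (an engine would apply the same transformation automatically; `memoize` here is an illustrative helper, not part of any benchmark):

```javascript
// Naive recursive Fibonacci, as a benchmark might write it: exponential time.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Generic memoiser: sound for any side-effect-free function of one argument.
function memoize(f) {
  var cache = {};
  return function (n) {
    if (!(n in cache)) cache[n] = f(n);
    return cache[n];
  };
}

// Linear time, because each subproblem is computed once and then cached.
var fastFib = memoize(function (n) {
  return n < 2 ? n : fastFib(n - 1) + fastFib(n - 2);
});

// The cheat analogue would instead be: "if the source text matches the
// Fibonacci benchmark, return values from a precomputed table" - faster
// still, but useless for any other code.
```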

The Mozilla engineer is claiming that this is an example of cheating because trivial changes to the code (adding an explicit return; at the end, and adding a line saying true;) both make the benchmark much slower. I'm inclined to agree. The true; line is a bit difficult - an optimiser should be stripping that out, but it's possible that it's generating an on-stack reference to the true singleton, which might mess up some data alignment. The explicit return is more obvious - that ought to be generating exactly the same AST as the version with an implicit return.

That said, fair benchmarks are incredibly hard to write for modern computers. I've got a couple of benchmarks that show my Smalltalk compiler is significantly faster than GCC-compiled C. If you look at the instruction streams generated by the two, this shouldn't be the case, but due to some interaction with the cache the more complex code runs faster than the simpler code. Modify either the Smalltalk or C versions very slightly and this advantage vanishes and the results return to something more plausible. There are lots of optimisations that you can do with JavaScript that have a massive performance impact, but need some quite complex heuristics to decide where to apply them. A fairly simple change to a program can quite easily make it fall through the optimiser's pattern matching engine and run in the slow path.

You're assuming that DCE is actually working properly. If it isn't, then true; will be compiled to a load of the true singleton (or a constant value if it's not implemented as an object). This would result in some register churn, which (especially on x86) would cause some spills to the stack.

If they're not properly aligning stuff on the stack, then spilling true; could mean that every other spill is not word aligned anymore, which could cause some serious performance problems, especially if one of the v

There was a good reason for dismissing those possibilities. MS ought to be smart enough at this point to avoid those possibilities. If anything, he was overly generous in assuming that MS has the competence to know to use more than one benchmark when assessing speed and able to put together an engine which is resilient enough to deal with what should be a non-change.

Strictly speaking, neither of those lines should even appear when run; that's supposed to be more or less stripped out before the engine starts

Possibility one: Microsoft cheated. Presented as highly likely.
(I tend to agree that it's quite conceivable - other corporations have been caught doing similar things (like the NVIDIA/FutureMark debacle) and JavaScript execution speed is currently the most-hyped performance metric in the browser market.)

Possibility two: Microsoft have relied entirely on SunSpider when testing their JavaScript engine and over-optimized it to a point where it's now a SunSpider VM that happens to run JavaScript and doesn't work well with anything that isn't SunSpider. This is declared unlikely.
(Although I wouldn't put such a blunder past Microsoft, I do think that their tests extend beyond "how fast is SunSpider".)

Possibility three: The engine is legitimately ten times as fast as everyone else in this test but badly-written and so fragile that it experiences major slowdowns on code that meets currently-unknown criteria. Presented as unlikely.
(Note that in the Hacker News analysis [ycombinator.com] the general consensus now seems to be that IE indeed does something with the code that it shouldn't; an earlier theory of broken dead-code analysis couldn't stand up to the fact that any change that causes the bytecode to look different, even if functionally equivalent, causes slowdowns.)

Sure, but the suspicious thing about this particular optimization is that adding a no-op statement that merely expresses something that is otherwise implicit (the return at the end of the function) disables it. This makes it look like they are optimizing for code that looks exactly like the source code of this function... which is not a very useful thing to do unless you want to cheat at a benchmark.

Except in the sense that you can get a lot of good press / free advertising by stomping a mudhole in the other guy's performance in a benchmark. There's a clear incentive to improve your actual performance, because when real people get ahold of your benchmarked piece of software/hardware/whatever they're going to notice that actual performance -- but there's also some incentive to improve your benchmark performance for cheap advertising. The former is more valuable than the latter, and I'm sure

1) Microsoft cheated by optimizing Internet Explorer 9 solely to ace the SunSpider Benchmark. To me, this seems like the best explanation.
2) Microsoft engineers working on Internet Explorer 9 could have been using the SunSpider Benchmark and unintentionally over-optimized the JavaScript engine for the SunSpider Benchmark. This seems very unlikely to me.

I see no reason why explanation number one is more likely than explanation number two.

Accuse someone of something when fishing for information. Watch the reactions, watch people back-pedaling, listen for lies, listen for an overly reactive explanation, watch for the ultra-defensive, nose scratching, bullshitters, beads of sweat...

Does no one else use this trick in life? I doubt I've invented it; I'm sure it's taught somewhere and there's probably a fancy name for it.

I see no reason why explanation number one is more likely than explanation number two.

I do. Given the nature of the changes that were used to uncover this, to me (as a programmer) it seems very unlikely that such over-optimization could happen in such a way that it would degrade so severely with those changes. Here is what was changed (look at the 2 diff files linked near the bottom of the article):

1) A "true;" statement was added into the code. It was not an assignment or a function call, or anything complex. Just a simple true statement. Depending on the level of optimization by the interpreter

In my opinion, a useful benchmark reflects standard usage patterns. Therefore, optimizing for the benchmark can only benefit the end user. If shuffling the "return;" and "true;" is just as valid an option, perhaps the benchmark should include both approaches.

Maybe I'm a bit naive, but when I change my code, I expect the results to change as well.

The keyword in number two is "unintentionally". Happy accidents do happen, but they rarely go unrecognized and once recognized they should be reconciled. If recognized but not reconciled then you can't say it was unintentional and therefore I have to agree that number two seems unlikely.

Benchmarks are very nice and all, but in the end, users using different browsers for real should decide which *feels* faster or better (which isn't the same as being faster or better). If real-world users can't feel the difference, then benchmarks are just there for masturbation value, and quite frankly, on reasonably modern hardware, I've never felt any true difference in rendering speed between the various "big" browsers out there.

I reckon the only thing that truly matters is the speed at which a browser

There are three possible explanations for this weird result from Internet Explorer:

Microsoft cheated by optimizing Internet Explorer 9 solely to ace the SunSpider Benchmark. To me, this seems like the best explanation.
Microsoft engineers working on Internet Explorer 9 could have been using the SunSpider Benchmark and unintentionally over-optimized the JavaScript engine for the SunSpider Benchmark. This seems very unlikely to me.
A third option (suggested on Hacker News) might be that this is an actual bug and adding this trivial code misaligns cache tables and such, throwing off the performance entirely. If this is the reason, it raises a serious question about the robustness of the engine.

Everything in italics is unsupported opinion by the author, yet is treated as fact in the summary and title by CmdrTaco and Slashdot. Perhaps if Slashdot would stick to actual news sites (you know, NEWS for nerds and all that), this would be a balanced report with a good amount of information. Instead, it is just another Slashdot-supported hit piece against Microsoft.

So, instead the blogger should declare that MS cheated at the benchmarks with nothing more than his results for which he admits that there are at least three plausible explanations?

And, then Taco should treat the author's biased opinion as fact? Remember, the title of this post is "Internet Explorer 9 Caught Cheating in SunSpider."

I don't think so.

And, where is the response from MS? Did anyone ask MS, or did someone find this and go "MS is CHEATING!!11!!one!" without actually investigating or even asking MS? Because, it really looks like the latter, which would make this just more MS bashing blogspam.

There are three possible explanations for this weird result from Internet Explorer:

1. Microsoft cheated by optimizing Internet Explorer 9 solely to ace the SunSpider Benchmark. To me, this seems like the best explanation.
2. Microsoft engineers working on Internet Explorer 9 could have been using the SunSpider Benchmark and unintentionally over-optimized the JavaScript engine for the SunSpider Benchmark. This seems very unlikely to me.
3. A third option (suggested on Hacker News) might be that this is an actual bug and adding this trivial code misaligns cache tables and such, throwing off the performance entirely. If this is the reason, it raises a serious question about the robustness of the engine.

I'm not saying whether what they have done is right or wrong, but this is a sensationalist headline for a post that itself offers two other, "less evil" explanations for the outcome.

Headlines are supposed to be succinct summaries, and that is enforced by the character limit here. Maybe a better headline would be "Internet Explorer 9 Probably Cheating On SunSpider, But Maybe Just Horribly Written In Ways That Make SunSpider Apply Poorly". Of course, that is too long for the title.

The important takeaway is that this particular SunSpider test is not a valid measure of IE9's performance in that category, and that IE9 will do much, much worse in many real-world scenarios.

Meh, I think claiming they are cheating with no evidence seems a little too out there. I've never seen MS brag about how fast their browser is on this particular benchmark, and frankly it seems more like a bug than a cheat.

While the MS IE team has disclosed a lot of information lately on their blogs, if they're going to discuss SunSpider results (as they did on 28 October with the IE9 PP6 tests [msdn.com]), then using sleight of hand to sex them up is fair game for criticism.

But did modifying this one test to near-impossible speed make that much of a difference? It was obviously anomalous, right? What about the other test results? If tweaked, do things get screwy, and if so, what about the other browsers? So far I'm not convinced, although it's certainly possible. Frankly, if Joe Average user is who they are trying to woo to their browser, then this benchmark, commented on in a blog no Joe Average likely reads, seems silly IMO.

The article clearly states:
There are three possible explanations for this weird result from Internet Explorer:

1. Microsoft cheated by optimizing Internet Explorer 9 solely to ace the SunSpider Benchmark. To me, this seems like the best explanation.
2. Microsoft engineers working on Internet Explorer 9 could have been using the SunSpider Benchmark and unintentionally over-optimized the JavaScript engine for it. This seems very unlikely to me.
3. A third option (suggested on Hacker News) is that this is an actual bug: adding these trivial statements misaligns cache tables and the like, throwing off performance entirely. If this is the reason, it raises a serious question about the robustness of the engine.

So, what proof do we have that Microsoft actually cheated?

Does this part of the benchmark produce a result or output, and if so is it correct?

And if it doesn't produce any output, or the result isn't checked, there is plenty of scope for innocent explanations. It could be a bug that doesn't arise when the extra statements are added. Or it could be that part of the code is being optimised away (because the result isn't used) and the analysis isn't clever enough to handle it when the extra statements are present.
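A minimal sketch of the dead-code-elimination scenario described above (the function names and loop body here are invented for illustration; real engine behavior varies and is not verified):

```javascript
// Sketch of why an unobserved benchmark result invites "optimization":
// if nothing ever reads the value, an optimizer may legally delete the work.
function kernelDiscarded(n) {
  let acc = 0;
  for (let i = 1; i <= n; i++) acc += Math.atan2(i, i + 1);
  // acc is never returned or used: the whole loop is dead code, so an
  // engine that proves this can finish in near-zero time.
}

function kernelObserved(n) {
  let acc = 0;
  for (let i = 1; i <= n; i++) acc += Math.atan2(i, i + 1);
  return acc; // the result is observed, so the loop must actually run
}

console.log(kernelObserved(1000) > 0); // true
```

A benchmark whose kernel looks like the first function measures nothing once an engine learns to prove the result unused, which is why checking the output matters.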

If Microsoft is cheating, why wouldn't they cheat a bit better? Of the five browsers, including betas, IE is second from last [mozilla.com]. Last place is, of course, Firefox, even with the new JS engine. Oh, and that stats image? Taken from the same blog post [mozilla.com] that originally discovered the Sunspider IE9 issue over a month ago.

Rob Sayre, the Mozilla Engineer who discovered this, filed a bug [mozilla.com] with Microsoft to get them to look at this issue. However, he didn't file said bug until today, which is likely why this is in the news now rather than a month ago.

AC is right on the money there. Open-source software has come such a long way that Microsoft products and business practices are entirely avoidable these days, and therefore no longer a threat. Google is the true danger of the age, because they're well on the way to making offline applications obsolete altogether and rendering the open-source vs. closed-source debate moot, as we'll have to swallow their online-application shenanigans without being able to do a thing about it.

It is actually a couple of months old [mozilla.com]. The thing that makes me doubt the claims of cheating is that nobody has been able to find other examples of performance variations in this benchmark in all the time since this came to light. If they were going to cheat, why limit it to the cordic test? Nobody would base their browser choice on this obscure test.

I don't have the beta installed yet, but what I would like to see is the actual calculation changed and the tests run again. Don't just put in weird code like "true;" but make the JavaScript plausible. It could be that the addition of these unusual statements is enough to confuse the optimiser so that it falls back to a completely unoptimised version.

If you knew anything about JIT compilers, you would know that they have simple heuristics on purpose (compile speed is a strict constraint). Making something one statement longer can remove it as a candidate for quite a few optimizations (inlining, static loop evaluation, loop unrolling, dead-code elimination, etc.).

These simple heuristics use quickly evaluated metrics once the source is translated into an abstract syntax tree: the number of nodes in the tree, the depth of the tree, the number of conditional nodes, and so on.

JITs are not simply compilers that try to produce the best code possible; JITs make tradeoff decisions between compile time and the resulting code quality.
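The tradeoff the parent describes can be sketched with a toy heuristic. The metric and the limit below are made up for illustration; real engines use their own node counts and thresholds:

```javascript
// Toy JIT-style inlining gate: a function stays a candidate only while its
// "size" (here a plain statement count standing in for AST node count)
// is under a fixed limit. The limit of 4 is invented, not a real engine value.
const INLINE_NODE_LIMIT = 4;

function inlineCandidate(statementCount) {
  return statementCount < INLINE_NODE_LIMIT;
}

console.log(inlineCandidate(3)); // true: small body, eligible for inlining
console.log(inlineCandidate(4)); // false: one extra "true;" pushed it out
```

Under a gate like this, a single no-op statement is enough to flip a hot function from the optimized path to the slow path, which is exactly the kind of cliff the "bug, not cheat" explanation relies on.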

The data you reference shows 59%. However, 51% or 59%, it doesn't really matter. What is important is the trend: IE is consistently losing market share over time. Were they climbing up the ladder, that would be one thing. They're not. This doesn't say anything about whether or not they "cheated", only that your claim that their market share is so high they'd have no reason to cheat doesn't make much sense to me.

Mozilla (Netscape) used to have 91% share too, but I don't see them cheating.

I choose browsers based on features, not speed. I like Firefox's addons that let me download YouTube vids, SeaMonkey's built-in newsgroups/chat/email features, and Opera's Turbo for slow connections (dialup, cellular). As for speed, they all seem about the same, although FF 3.6 does have a memory leak that can be annoying.