Created to identify flaws in the way a browser renders its web
pages, WaSP's Acid tests throw down the gauntlet with difficult-to-display
graphics written to accentuate browsers' quirks. When the original Acid test
was released in 1998, it helped rein in browser inconsistencies and ensured
that Internet Explorer, Netscape, and others handled HTML code according to
specification, making web designers' lives easier and helping the web
render consistently in the future.

Acid2, with its focus on Cascading Style Sheets, seems
quaint compared with Acid3's objectives, which target major web standards
expected to see use today and in the future. Its tests draw on the last few
years' developments in the web's control languages, covering graphics embedded
in HTML code, CSS3 compliance, DOM compliance, CSS2 downloadable fonts, new
graphics formats, and Unicode support.
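Acid3 presents its result as a running counter of passed subtests out of 100. The following is a minimal sketch, not Acid3's actual code, of how such a scored harness works: each numbered subtest returns true or false (or throws), and the visible score is simply the count of passes. The three sample subtests are hypothetical stand-ins in the spirit of Acid3's ECMAScript and DOM checks.

```javascript
// Hypothetical subtests exercising ECMAScript corners, in the
// spirit of Acid3's JavaScript and DOM checks (not the real tests).
const subtests = [
  () => [1, 2, 3].indexOf(2) === 1,        // Array.prototype.indexOf exists and works
  () => "a\u00e9".length === 2,            // basic Unicode string handling
  () => encodeURIComponent(" ") === "%20", // URI escaping per ECMA-262
];

// Run every subtest; a test that returns false or throws scores nothing.
function runHarness(tests) {
  let score = 0;
  for (const test of tests) {
    try {
      if (test()) score++;
    } catch (e) {
      // a throwing subtest simply fails to increment the counter
    }
  }
  return score;
}

console.log(`${runHarness(subtests)}/${subtests.length}`);
```

The real Acid3 page adds a twist the sketch omits: some subtests are timed or asynchronous, and the animated counter updates as each one completes, which is why partial scores like 52/100 are visible mid-run.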

Currently, no known browser is able to correctly render the
Acid3 test, which displays an animated, incrementing score counter and a series
of colored boxes with some description text. Bloggers have already assembled galleries of
browsers' failing test results, with most of today's browsers scoring between
40 and 60 on the test's 100-point scale. The results shouldn't be too alarming,
as the Acid tests have always been forward-looking in nature, designed
to measure standards to aspire to rather than what's current. Also note that
more than six months elapsed between Acid2's release and the announcement
that Safari 2.0.2 was the first browser to pass it.

Anecdotal reports around the web indicate that nightly
builds of the next versions of Firefox and Safari are already achieving Acid3
scores in the 80-90 range.

Given the state of the web today, where web designers
often write two versions of a web site, one for Internet Explorer and one for
everyone else, Microsoft's announcement that Internet Explorer 8 passes Acid2
is all the more important. Currently, each new version of Internet Explorer
retains older versions' flaws for compatibility, resulting in a confusing state
of affairs for web developers.

The release of IE7 complicated matters further, as it
shipped with both an IE6-compatibility mode and a somewhat-standards-compliant
IE7 rendering mode, with an easily overlooked method for switching between the
two. As a result, Internet Explorer earned a nasty reputation in web
design circles, with developers writing safe, proven websites that worked
universally instead of rich websites that exercised their languages' full
features.

In time, it is hoped that Internet Explorer 8 will put an
end to this rift, as it will ship with the new Acid2-passing
standards-compliant mode switched on by default. For those who want
to try Internet Explorer 8 themselves, Microsoft has already released Beta
1 of the browser.

Comments


quote: If your logic was correct, then there would be one browser that always gets 100%

That's just silly. There are a hundred tests in Acid3, each containing several assumptions about how the standard should exactly be read. Unless a test was written specifically for a certain browser, it's going to fail at least some of them.

And for you MS haters who rate down anything you think is positive about the company: I never said IE wasn't worse than other browsers, or that it didn't have actual coding errors. But interpretation of ambiguous standards is part of the problem, and it explains why no browser will ever get 100% on a test like this unless they code specifically for it.

You're assuming the Acid test is just another browser's interpretation of the standards, which misses the intent. If there were one "correct" interpretation of the standards, and the test demonstrated its results, every real browser should strive toward those same results. If you don't believe the Acid tests reflect the standards, then you're correct that it's just another interpretation.

Given that the tests are supposed to be the benchmark for the standards, browser dev teams can use them as a visual goal for compliance with the standards text, which can often be confusing and open to differing interpretations.
