On Mon, Feb 13, 2012 at 7:41 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:
> On 2/13/12 7:27 PM, Simon Fraser wrote:
>> I do see a strong need for an API like this for the test suite, but I'd be
>> worried
>> about exposing something to content. I would never want a page on the open
>> web using a method like this.
>
> Yeah, that's why the Gecko method isn't exposed to random web pages. I can
> maybe see requiring a special configuration option to be set to run the test
> suite; setting that option would allow the API to be called from an
> untrusted page... But even that's somewhat scary.

So philosophically: is there really any point in having conformance
tests that depend on APIs that normal web pages can't access? What
are they actually testing? A browser could return whatever it wants
from those APIs and pass the tests, with no implication for its web
compatibility. Such APIs make sense for internal regression tests,
because browsers have an incentive to make them correct so that they
catch regressions. They don't make sense to me for conformance tests.

On the other hand, also philosophically: if there's no way for a
regular webpage to observe that the behavior is correct, why do we
care whether it's correct? If the computed values of something change
at the wrong rate during a transition or animation, could that
conceivably break any webpages? If not, why should we care about
testing it?

So how about this: have JS tests for whatever unprivileged JS can
reliably observe, such as start and end states, firing of events,
etc. And have animated reftests, which can be run manually by
flipping back and forth quickly between two iframes as the
transition/animation occurs (like
aryeh.name/tmp/css-test/contributors/aryehgregor/incoming/viewer.html),
with the test passing if there's no discernible flickering. Leave
everything else, such as intermediate computed values, untested.
Does that make sense to everyone?
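To make the "endpoints only" idea concrete, here's a minimal sketch of
such a check. The helper name and sample format are invented for
illustration; in a real browser test the samples would come from
getComputedStyle readings taken at the start of the transition and
after the transitionend event, but the comparison logic itself needs
no privileged API:

```javascript
// Hedged sketch: assert only the start and end states of a transition,
// deliberately ignoring all intermediate samples, per the proposal
// above. "samples" is a hypothetical array of { time, value } records
// an unprivileged page could collect (e.g. via getComputedStyle).
function checkTransitionEndpoints(samples, expectedStart, expectedEnd) {
  if (samples.length < 2) {
    throw new Error("need at least a start and an end sample");
  }
  const start = samples[0].value;
  const end = samples[samples.length - 1].value;
  // Everything between the first and last sample is left untested.
  return start === expectedStart && end === expectedEnd;
}

// Example: opacity transitioning from 0 to 1. The mid-transition value
// is whatever the engine computed; it is not checked.
const samples = [
  { time: 0, value: "0" },
  { time: 150, value: "0.4832" },
  { time: 300, value: "1" },
];
console.log(checkTransitionEndpoints(samples, "0", "1")); // true
```

The point of the sketch is what it leaves out: whether the engine
interpolated at the right rate in between is left to the animated
reftests (or simply untested).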