The assumption has been that an additional random sample will ensure
that a tester's initial sampling of pages has not left out pages that
may expose problems not present in the initial sample.
That aim in itself is laudable, but for this to work, the sampling would
need to be
1. independent of individual tester choices (i.e., automatic) -
which would require a definition, inside the methodology, of a
valid approach for truly random sampling. No one has even hinted at
a reliable way to do that - I believe there is none.
A mere calculation of sample size for a desired level of confidence
would need to be based on the total number of a site's pages *and*
page states - a number that will usually be unknown.
2. fairly representative not just of pages, but also of page states.
But crawling a site to derive a collection of URLs for
random sampling is not doable, since many states (and their URLs or
DOM states) only come about as a result of human input.
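To illustrate the sample-size point above: a standard way to compute a sample size for a desired confidence level is Cochran's formula with a finite population correction (my choice of formula here, not something the methodology prescribes). Note that the population parameter is exactly the number that is usually unknown - the total count of pages *and* page states.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite population correction.

    population: total number of pages AND page states - the quantity
                that is usually unknown for a real site.
    z:          z-score for the desired confidence (1.96 ~ 95%).
    margin:     acceptable margin of error.
    p:          assumed problem prevalence (0.5 is the worst case).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# Only computable if the population size is known:
print(sample_size(10_000))  # 370 pages/states for 95% confidence, 5% margin
```

The point being: even this textbook calculation cannot be started without the very population figure that crawling cannot reliably deliver.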
I hope I am not coming across as a pest if I say again that, in my
opinion, we are shooting ourselves in the foot if we make random
sampling a mandatory part of WCAG-EM. Academics will be happy;
practitioners working to a budget will simply stay away from it.
Detlev
--
---------------------------------------------------------------
Detlev Fischer PhD
DIAS GmbH - Daten, Informationssysteme und Analysen im Sozialen
Geschäftsführung: Thomas Lilienthal, Michael Zapp
Telefon: +49-40-43 18 75-25
Mobile: +49-157 7-170 73 84
Fax: +49-40-43 18 75-19
E-Mail: fischer@dias.de
Anschrift: Schulterblatt 36, D-20357 Hamburg
Amtsgericht Hamburg HRB 58 167
Geschäftsführer: Thomas Lilienthal, Michael Zapp
---------------------------------------------------------------