I agree with Vivienne. We need to ensure that we're still including
primary user scenarios and critical paths that are usually defined in
business test cases. For our larger websites, we use automated scanning on
the whole site, identify those items mentioned below for representative
sampling of page/technology types, and then model our manual testing after
the top critical paths/scenarios that a user needs to be able to complete.
If there's capacity to do more than that, we usually look at the most
heavily trafficked pages next.
Perhaps that isn't as random as it should be, but it seems to line up with
overall user experience goals.
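For illustration, that workflow could be sketched roughly as follows. This is only a minimal sketch of the idea; the URLs, traffic counts, core-page set, and helper names are all hypothetical, not from any real site or tool:

```python
import random

# Hypothetical inventory produced by an automated site scan:
# (url, monthly_visits) pairs. Values are illustrative only.
scanned_pages = [
    ("/", 90_000),
    ("/sitemap", 2_000),
    ("/products", 40_000),
    ("/products/widgets", 15_000),
    ("/contact", 8_000),
    ("/search", 25_000),
    ("/help", 5_000),
    ("/blog/post-1", 1_200),
    ("/blog/post-2", 900),
]

# Pages every evaluation sample should contain regardless of the
# random draw (home page, site map, key forms, etc.).
core_pages = {"/", "/sitemap", "/contact"}

def build_sample(pages, core, n_random, seed=None):
    """Return the core pages plus a uniform random draw from the rest."""
    rng = random.Random(seed)
    remainder = [url for url, _ in pages if url not in core]
    extra = rng.sample(remainder, min(n_random, len(remainder)))
    return sorted(core) + extra

def top_trafficked(pages, already_sampled, k):
    """If capacity remains, add the k most visited pages not yet sampled."""
    chosen = set(already_sampled)
    ranked = sorted(pages, key=lambda p: p[1], reverse=True)
    return [url for url, _ in ranked if url not in chosen][:k]

# Usage: a fixed seed makes the draw replicable for later comparison.
sample = build_sample(scanned_pages, core_pages, n_random=3, seed=42)
extras = top_trafficked(scanned_pages, sample, k=2)
```

A fixed seed keeps the random portion replicable, which matters if the same sample needs to be re-evaluated later for monitoring-style comparisons.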
Thanks,
Elle
On Thu, Dec 15, 2011 at 9:40 AM, Vivienne CONWAY <v.conway@ecu.edu.au> wrote:
> Hi all
>
> Unless I'm missing something, if we are talking about random sampling
> methods, how do we make sure they include those 'complete processes'? Do
> we look at doing random sampling, plus complete processes, plus core
> elements of the website (website purpose)?
>
>
> Regards
>
> Vivienne L. Conway, B.IT(Hons)
> PhD Candidate & Sessional Lecturer, Edith Cowan University, Perth, W.A.
> Director, Web Key IT Pty Ltd.
> v.conway@ecu.edu.au
> v.conway@webkeyit.com
> Mob: 0415 383 673
>
> This email is confidential and intended only for the use of the individual
> or entity named above. If you are not the intended recipient, you are
> notified that any dissemination, distribution or copying of this email is
> strictly prohibited. If you have received this email in error, please
> notify me immediately by return email or telephone and destroy the original
> message.
> ________________________________________
> From: Velleman, Eric [evelleman@bartimeus.nl]
> Sent: Thursday, 15 December 2011 7:35 PM
> To: RichardWarren; Eval TF
> Subject: RE: Sampling
>
> Hi,
>
> Yes, agree, the evaluation will need to specify the resources that have
> been evaluated.
>
> If the evaluation needs to be replicable and to allow synchronous or
> asynchronous comparisons (as in monitoring), the evaluation sample must be
> generated by a uniform random procedure, which is partly described by
> Richard in an earlier mail (see the bottom of this message). Only partly,
> because the situation for our uniform random procedure is a bit more
> complicated than it was with WCAG 1.0. There are some additional factors
> at work here that are described in the Scope section, such as
> accessibility support and the use of different technologies. This is
> covered in WCAG 2.0, as Alistair also described in an earlier mail, but
> we will have to check whether that is enough for the purpose of the
> evaluation report.
>
> Question: Can we make a list of what should minimally be in the core
> resource list (if available in the scope of the Website that is being
> evaluated)? We will discuss the size of the sample later.
>
> Using Richard's list, I come to:
>
> Home Page,
> Site Map,
> Section landing pages (is there a maximum?)
> Any sub-section landing pages (usually linked to from the section landing
> pages)
> Forms
> Data tables
> Multimedia (maybe we have to be more specific here)
>
> While reading, the following additions seem interesting to add:
> Help resource
> Contact information resource
> Search and extended search resources including resulting resources
> Distinct web technology pages (...)
> Pages with other programming languages
> CSS alternatives for mobile, (more..)
> Frames (are they still used?)
>
> Also:
> Resources representative of each category of resources having a
> substantially distinct "look and feel"