Hi Richard,
"It is good practice to specify the sets or sources of techniques that are intended to be used during the evaluation at this stage already, to ensure consistent expectation between the evaluator and the evaluation commissioner."
I actually think it does have something to do with the techniques used by the web developer - otherwise, I can just imagine the conversation between the evaluator and the evaluation commissioner going something like this:
Evaluation Commissioner - we have developed the website in accordance with our techniques which we believe deliver accessible content.
Evaluator - well that doesn't really matter, I'm going to test your website against my checks based on my interpretation of WCAG 2.0...
Not the way I hope things will go, but I'd be interested to hear how others interpret 1e...
All the best
Alistair
On 10 May 2012, at 13:28, Richard Warren wrote:
> Hi Alistair,
>
> Step 1e - Defining techniques - is in regard to the techniques used by the evaluator, not the web developer. The purpose of recording the techniques the evaluator used is so that any future evaluation or query can identify how the evaluator came to his/her conclusion. This step has nothing to do with any techniques used by the web designer, so there is no need to check their code, etc.
>
> Perhaps the second paragraph of step 1e is confusing the issue ("W3C/WAI provides a set of publicly documented Techniques for WCAG 2.0 but other techniques may be used too"), because most of the techniques listed relate to the website rather than the evaluation.
>
> Richard
>
> -----Original Message----- From: Alistair Garrison
> Sent: Thursday, May 10, 2012 9:48 AM
> To: Eval TF
> Subject: Step 1.e: Define the Techniques to be used
>
> Dear All,
>
> "Step 1.e: Define the Techniques to be used" - could we consider making this step non-optional?
>
> The first reason is that we really need to check their implementation of the techniques (W3C's, their own code of best practice, or whatever) they say they use.
>
> For example:
>
> - Case 1) If they have done something by using technique A, and we evaluate using technique B there could be an issue (they might fail B);
> - Case 2) If they have done something by using technique A, and we evaluate using technique A and B there still could be an issue (they might fail B);
> - Case 3) If they have done something by using technique A, and we evaluate using technique A - it seems to work.
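> A small sketch of Case 1, purely as an illustration (the page snippets and the `check_h37` function are hypothetical, not anything from WCAG-EM itself): a page whose text alternative is supplied via aria-label (roughly WCAG 2.0 technique ARIA6) will fail an evaluator's check that only recognises the alt-attribute technique (H37), even though the content may be accessible.
>
> ```python
> import re
>
> # Page built with "technique A": aria-label supplies the text alternative.
> page_a = '<img src="logo.png" aria-label="Company logo">'
>
> # Page built with "technique B": alt attribute supplies the text alternative.
> page_b = '<img src="logo.png" alt="Company logo">'
>
> def check_h37(html):
>     """Naive evaluator check that only recognises the alt-attribute
>     technique: every <img> must carry a non-empty alt attribute."""
>     imgs = re.findall(r"<img\b[^>]*>", html)
>     return all(re.search(r'alt="[^"]+"', img) for img in imgs)
>
> print(check_h37(page_a))  # False - built with technique A, fails the B-only check
> print(check_h37(page_b))  # True
> ```
>
> The point being: unless the evaluator knows which techniques were used, a check keyed to a different technique can report a failure that is really just a mismatch of expectations.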
>
> The second reason is that testing only seems truly replicable if we know which techniques they said they implemented - otherwise, two different teams could easily get two different results, as in the cases above.
>
> I would be interested to hear your thoughts.
>
> Very best regards
>
> Alistair