Short introduction to changes in the latest version of the Methodology

EV: described the update to the methodology in a
table
... email relating to the new version of the methodology and how he has put in
the changes that we discussed
... the main changes were in Clause #6 and #9
... item #9 is not settled yet, but we should discuss it

ME: at the university, the library has
electronic documents that are provided by 3rd party vendors and are not
accessible. So they are discussing how they can make a claim when the system
for providing the documents is accessible, but the documents themselves are not.
Also, an LMS, e.g. Blackboard, where there is 3rd party material but the system
itself is accessible. Thoughts?

VC: not comfortable with being too flexible;
we should hold 3rd party vendors accountable for their content

EV: the Netherlands have encountered this also.
The system is accessible, but the content is not, so it looks more accessible
than it really is.

Katie: strongly says you shouldn't be able to
scope out content - that makes the claim non-compliant with WCAG. Organisations
make a choice about which content they use, and we want to push them to get
accessible content. They should be choosing 3rd party content that is
accessible. If they want to say they used this methodology, that is one thing,
but they can't make a conformance claim.

Detlev: doesn't think anyone agreed to taking out
parts of complete processes. As for taking other things out of the scope, it
shouldn't be a matter of picking and choosing which pages conform. Scoping
should be limited to self-declared parts of sites, not just removing individual
pages - whole parts of websites, not bits and pieces.

EV: you shouldn't be able to scope out the
content. E.g. a CMS wants an evaluation and a conformance claim where there
would be no content; that works only if the content is not available.

Kerstin is not coming through

Shadi: don't underestimate the power of
reporting. Re the empty CMS: the owner can make a claim and doesn't need to
exclude anything, because there is no content. You may be able to use this
methodology for the university example, and state that the reason the
university website is not conformant is that the university library website
uses non-accessible third party providers.

EV: examples of exclusions for item #9: a Statement of
Partial Conformance, e.g. conformant with WCAG 2.0, but not for third party
content.

<kerstin> sorry, don't know what the problem
is

Mike: if the university is making everything online
accessible, it would be helpful to be able to say that the system is compliant.
That allows the focus to be put on the part that is not compliant.

<Mike_Elledge> +1

Shadi: agreed with Eric's statement. We discussed
whether the target website conforms or not, and in addition to that a figure
(not sure how to state it, e.g. 90%) to compare. Not saying that something is
90% conformant. We need to find a way to say which things work and which do
not.

Discussion on Item #10 - deleted Error Margin

EV: Clause 6 - the positive approach - describes
items which exceed the level of the conformance claim. However, there are
problems when the person who does the evaluation has to list the items which
met the higher level. This would add more cost and time to the evaluation. Is
that what people want?

Vivienne: developers won't want to pay for
something they haven't contracted to be done.

Detlev: you shouldn't have to look at all of the
success criteria for the level above. If you are looking at an issue such as
colour at Level A, and the contrast is almost at the AA standard, you can
mention that. You may find the website is better on some criteria than the
level you are testing for.

<Kathy> agree

agree

<MartijnHoutepen> agree

<Mike_Elledge> agree

I just don't want an assessor to have to look at all levels

<kerstin> I'm not sure, because unless you
have tested, you can't say something

Shadi: it is not actually changing the
definition, but could be a note under the definition or a clarification in the
scope. We should re-use what we already have.
... it is an addition to the definition.

EV: Clause 4, sampling of pages. Eric proposed to
take out the full sample section, but participants wanted it to stay in.
... the sample tells you what has been looked at, not just the individual
pages in the sample. You would still sample the section of the website that you
are looking at. [Scribe: not sure if I got that right, Eric.]
... you don't always need a 'contact us' page if you are looking at a specific
part of a website.
... people can choose their own scope, but within this scope they need to use
the 3 types of sampling where applicable - e.g. if a help page is in the
selection, it should be in the sample.
... random resources also need to be selected. Do we want to describe the
exact methods people have to use to get a random set of resources, or is
everyone free to do it in their own way?

Kathy: it would be good to have some suggestions.
Clients ask how you determine what the random sample is, and there are a number
of different ways to create one. W3C could provide and suggest ways to create a
random sample.

<Mike_Elledge> +1

+1

<richard> +1

Shadi: not aware of any proven method to do this
and it could get difficult.

EV: we need to make a split between manual
selection and automatic selection, or a combination of the two.

Shadi: what is UWEM saying?

<shadi> [[Unified Web Evaluation
Methodology]]

EV: UWEM defined a special method to select a random
sample, called Uniform Random Sampling Without Replacement
... under section 4.1.3.
... a technique developed by UWEM that is statistically verified and fairly
easy to describe
... will send a pointer to the mailing list
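A minimal sketch, in Python, of what uniform random sampling without replacement could look like; the function name and example URLs are illustrative, and UWEM section 4.1.3 defines the actual procedure:

```python
import random

def uniform_random_sample(urls, n, seed=None):
    # Uniform random sampling without replacement: every page has
    # the same chance of being selected, and no page is picked twice.
    rng = random.Random(seed)
    if n >= len(urls):
        return list(urls)
    return rng.sample(urls, n)

# Illustrative candidate pool and sample (not real URLs).
pages = ["https://example.org/page%d" % i for i in range(200)]
sample = uniform_random_sample(pages, 10, seed=1)
```

With a fixed seed the selection is reproducible, which can help when a client asks how the sample was drawn.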

Detlev: re: random sampling. We agree that we
need the core resources, e.g. home page, contact, search results etc. That will
give you 5-10 pages for your sample. If you add random sampling to that - which
may take frequency of visits into account - you may get a lot of redundancy:
pages that are the same as other pages in the sample. That could be a waste of
resources. Prefers a way to look for differences in content type, rather than
being specific about how you do random sampling. There can be things outside
the sample that have not been covered; the disclaimer shows that some content
types could be missed.
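Detlev's redundancy point could be sketched as grouping candidate pages by content type (template) and sampling from each group, so near-identical pages don't crowd the sample; this is a hypothetical helper, not part of the methodology:

```python
import random
from collections import defaultdict

def sample_per_content_type(pages, per_type=1, seed=None):
    # pages: iterable of (url, content_type) pairs.
    # Picks at most `per_type` pages from each content type, so the
    # sample is not dominated by many near-identical template pages.
    rng = random.Random(seed)
    groups = defaultdict(list)
    for url, ctype in pages:
        groups[ctype].append(url)
    sample = []
    for urls in groups.values():
        sample.extend(rng.sample(urls, min(per_type, len(urls))))
    return sample

# Illustrative page inventory (paths and types invented).
pages = [("/news/1", "article"), ("/news/2", "article"),
         ("/contact", "form"), ("/home", "landing")]
sample = sample_per_content_type(pages, per_type=1, seed=1)
```

Only one of the two identical "article" pages ends up in the sample, while the unique pages are always included.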

EV: Netherlands have a minimum of 50 pages

<kerstin> what customers would pay depends on
what they are used to

EV: UWEM also has stop criteria; section 4.3 shows this.
Instead of taking large numbers of pages that are all the same, we can stop and
look at specific content types after that.

Shadi: agree with Detlev. UWEM is good for
tool-based approaches. We should have a good structure and good advice on how
to select pages; thinking 50 pages. Take the benefit of the human evaluator,
who can decide if a page belongs to a complete process. Leans toward as manual
as possible. You would need a tool that can scan the pages and select pages for
you to evaluate manually. We need a good structure to document how we are going
to sample pages.

Shadi: depends on how you define core resources.
E.g. don't only take the first form you find. Don't just look for things that
are linked from the home page.
... we need to provide more details of how to select the pages

Martijn: agrees the sample does not have to be
very big, but need random sampling. Random provides good verification.

<ericvelleman> ?

<Zakim> shadi, you wanted to propose approach
for random sample

Shadi: go to a search engine, type something
random with the delimiter site: and you will come up with a page, from
which you can follow, say, the 5th link.
... there may be easy ways for an evaluator to find a random page easily.
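Shadi's trick might be scripted roughly like this; the search-engine URL is a placeholder, since each engine has its own query syntax for the site: operator:

```python
import random
import string
from urllib.parse import quote_plus

def random_query(length=6, seed=None):
    # A nonsense lowercase string to use as the "random" search term.
    rng = random.Random(seed)
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def build_search_url(site, query):
    # Placeholder engine address; substitute a real search engine
    # that supports the site: restriction.
    return "https://search.example/?q=" + quote_plus("site:%s %s" % (site, query))

url = build_search_url("example.org", random_query(seed=1))
```

The evaluator would open the resulting URL, then follow e.g. the 5th link on the results page to land on an unpredictable page within the target site.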

EV: it doesn't need to be scientific; it would be
interesting to give a few simple examples of how you could do it. Keep away
from an academic discussion.

Shadi: can point to things such as UWEM

Detlev: need to pick pages that are unpredictable,
so no one can know in advance which pages you will pick