28 February 2000 ER telecon

Summary of action items and resolutions

Resolved: short main ToC with GL text; follow one to get to a GL page that
lists all checkpoints (sub-ToC 1); from there click to get to Techniques; if
there is more than one technique, get a short table of contents. Perhaps have
navigation (previous/next) after each ToC that goes to the same level.

ACTION LK: Draw up plan for uniformity of evaluation sections

ACTION WC: work on script to regenerate table of contents

Participants

Harvey Bingham

Wendy Chisholm

Michael Cooper

Len Kasday

William Loughborough (WL)

Brian Metheny

Chris Ridpath

Gregory J. Rosmaita (scribe)

Agenda

Review of new ERT draft: http://www.w3.org/WAI/ER/IG/ert/

Discussion Points

How much detail in the table of contents. If we include every
technique for every checkpoint it will get pretty large. For
comparison, you can look at the table of contents in the W3C process
document http://www.w3.org/Consortium/Process/Process-19991111/, and
also the table of contents of WCAG
http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/

Discussion of open issues. Note that the issues are marked as "@@"
so you can find them quickly.

Table of contents and document formatting

LK: first issue: table of contents and formatting; how much information to
put into the table of contents; list of guidelines and checkpoints; include
techniques as well

WC: Ian and I have been working on scripts to generate the document; thinking
about what should be there -- currently hard to read and many pages when
printed out; links wrap

// Brian Metheny (BM) joins //

WC: want to generate ToC through scripts; asked Ian how to modify scripts;
compared old ToC against W3C Process Doc and WCAG; part of the concern is that
up to 21 December the draft had a full table of contents (every technique in
the ToC); hard to read because it included full text of Techniques; took out
Technique links 10 Feb 2000; ToC now only includes GL name and Checkpoint
name; what needs to be in the ToC? only Tech name? shortened version? WCAG
original ToC had full text of GLs and checkpoints; it was shortened with a
short heading underneath -- kept to 10 to 15 words; not sure what is most
useful

GJR: when I did the conformance eval for ATAG, I used TITLEs to put full text
of checkpoints and guidelines into the ToC to avoid bloat and hyperlink text
wrapping

HB: adds maybe 15 lines if there is a short ToC with just GLs and names, and
then the full ToC; is that a possibility?

WC: yeah; Ian's suggestion: just a short ToC; when you go to a GL, a sub-table
of contents that tells about techniques; problem is some have only 1
technique -- in that case, don't need a sub-ToC

BM: ToC just

WC: sub-ToC could be checkpoints or checkpoints and techniques; don't know
that need all techniques listed in ToC

BM: techniques as link in ToC, have header of Checkpoint at top of
sub-section

LK: how many people, when they used this document, actually utilized the ToC?

BM: tried to, but now just scroll to where I am going

LK: for just you personally, what would be most convenient

BM: short ToC -- probably just GLs at top, jump to section and get sub-ToC;
if trying to scan through doc, sub-ToCs would be annoying

WC: short main ToC with GL text; follow one to get to a GL page that lists all
checkpoints; from there click to get to Techniques; if more than one
technique, get a short table of contents; at 1.1 you have no idea that there
are 10 techniques in the sub-section; sub-ToC may help give people scope; 3
levels of ToC might work well; GL 8 has only one checkpoint and only 1
technique

GJR: like the nesting

MC: click navigators may find it a pain, but might be best balance between
what we began with

CR: like multiple level ToCs, too

HB: omit checkpoints from sub-ToC, make sure that GL phrase associated
with

HB: another dimension by which to find checkpoints? an index?

WC: thought of an HTML elements and attributes list/table as in the HTML 4
spec

GJR: put full text of GL, checkpoint, or technique in TITLE, as I
suggested

GJR: might want to give choice to go back to sub-ToC as well as main
ToC

LK: are people happy with this?

// general agreement //

Resolved: short main ToC with GL text; follow one to get to a GL page that
lists all checkpoints (sub-ToC 1); from there click to get to Techniques; if
there is more than one technique, get a short table of contents. Perhaps have
navigation (previous/next) after each ToC that goes to the same level.
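
The resolved three-level structure could be generated along these lines -- a
hypothetical sketch of the kind of script WC mentions, not the actual W3C
scripts; the nested-dict layout and the guideline/checkpoint names are
invented placeholders:

```python
# guidelines -> checkpoints -> techniques, as nested dicts (assumed layout)
doc = {
    "GL 1 (example guideline)": {
        "1.1 (example checkpoint)": ["Technique A", "Technique B"],
        "1.2 (example checkpoint)": ["Technique C"],
    },
    "GL 8 (example guideline)": {
        "8.1 (example checkpoint)": ["Technique D"],
    },
}

def main_toc(doc):
    """Short main ToC: guideline text only."""
    return list(doc)

def sub_toc(doc, gl):
    """Sub-ToC 1: all checkpoints under one guideline."""
    return list(doc[gl])

def technique_toc(doc, gl, checkpoint):
    """Short technique ToC, emitted only when a checkpoint
    has more than one technique (per the resolution)."""
    techniques = doc[gl][checkpoint]
    return techniques if len(techniques) > 1 else []
```

With this shape, a checkpoint like the placeholder "8.1" (one technique) gets
no sub-ToC, while "1.1" (two techniques) gets a short one.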

Format of the evaluation section for each technique

LK: format issues; one thing I noticed that makes it hard for me to read is
the evaluation section in each Technique; lots of different styles in
examples; some say do X, some say if X do Y; some are passive, some are
active; have trouble with the "evaluation" part -- just what does it mean?
need to make a pass and have a parallel structure throughout

WC: on issues list; lot of places where language inconsistent; need to
figure out which style is best

WL: how about "initiate" instead of "trigger"? you're initiating an
evaluation

HB: no, recognizing a condition that triggers an action or alert

LK: what about "Condition to test for"

WC: could just say "Conditions"

GJR: have "Conditions" for all, then have "Evaluation" for what needs user
input or thought; way of emphasizing what tool can do by itself, and when it
identifies a problem that needs human intervention to rectify

WC: interesting, like that it reinforces the auto-fix, human fix
dichotomy

HB: built into Bobby; human assessment part built into trigger

MC: trigger is things that tell you whether or not need to run an
evaluation; whether or not you need to evaluate for this technique; trigger is
looking for IMG, evaluation is "is there an ALT attribute, and is its value
valid"
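
MC's two-stage model can be sketched roughly as follows -- a purely
hypothetical illustration, not actual Bobby or A-Prompt code; only the IMG/ALT
example comes from the discussion, everything else is assumed:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Sketch of MC's split: a trigger (finding an IMG) followed by
    an evaluation (is there an ALT attribute, and is its value valid?)."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":              # trigger: only an IMG starts an evaluation
            return
        attrs = dict(attrs)
        if "alt" not in attrs:        # condition a machine can decide: an error
            self.problems.append(("error", "IMG missing ALT"))
        elif not (attrs["alt"] or "").strip():
            # present but empty: flag as suspicious; a human
            # has to judge whether that is appropriate here
            self.problems.append(("ask", "IMG has empty ALT -- intentional?"))

checker = AltTextChecker()
checker.feed('<p><img src="a.png"><img src="b.png" alt=""></p>')
```

The first image trips the machine-decidable error; the second only raises a
question for the user, which is the human-judgement case discussed below.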

LK: too technical a distinction

WL: look at what is there, try to find things that cause an action?

MC: don't know if overcomplicating, but evaluate by taking pass through
page, when find IMG, have to know which checkpoints apply, then run all
applicable checks; that may be implementation dependent

WL: get a lot of false positives from Bobby

GJR: exceptions dictionary, ALT registry can avoid overburdening user

LK: what GJR is talking about is organizational?

GJR: purely organizational?

LK: some condition machine can apply; judgement that user has to make, then
some inputs that user has to put in,

GJR: and mark clearly, this information can be recycled, etc.

LK: in the case of ALT text, the condition is an IMG with no ALT -- that is an
error, no need for further judgement; if there is ALT text on the IMG, then
ask the user: is this ALT text ok? guess we are not doing that

MC: ID as suspicious, but not outlined what to do with that

WL: is there a similarity between what a spell checker does?

WC: I think so

WL: ERT going through document source, encounters something

GJR: spell checker and grammar checker in a way

WL: a checker, nonetheless with parameters that notify you that something
might need your attention

BM: that might be a good metaphor for an evaluation tool, repair tool can
ask you about things that aren't there; issue is "how to limit noise", but
need to include as item

LK: example?

BM: example in future -- navigation; have you used navigation bars? eval
tool could help identify this is a nav bar if reused across entire site; down
the road, but

LK: look for navigational grouping? what if none?

WL: spell checker that flags every proper name and grammar checker that
tells you to do something a way you don't always do them; concerned going to
get away from things that matter and just deal with stuff that is easy to ID,
but which aren't important; when someone gets an actionable thing

LK: so where are we with this? document already has a number of sections;
are what we are discussing included in here; are we talking about fixing the
wording and making style consistent, or adding and subtracting sub-headers;
have "Evaluation", "Example Message", "Repair Techniques"; do we need to
change that organization

WL: to me, we're talking only about whether or not to use the term "trigger"

LK: distinction between human evaluation and automated; do you think

GJR: condition is what tool looks for, then "Example Message" would
indicate whether user needs to be alerted or not

WL: is it going to be more than "have you brushed your teeth" "did you wash
behind your ears" or a list of nags?

MC: full coverage for WCAG while limiting nags; how can we make algorithms
necessary

WL: so we're just discussing what to name that process; things that can
actually be looked for, rather than regenerating a list of guidelines

MC: what can be looked for currently under "Evaluation section"

LK: every GL, every checkpoint, then evaluation and suggestion; where
question that needs to be asked is a general question, question needs to be
general; concern is having too many false alarms

WL: did you write this clearly

LK: right; is there some way to address that-- for example attaching a
signal; in practice, designer might want to handle each thing differently;
handling ALT text going to be different than

WL: so many of these things are "keep it simple, keep it clear"; never
going to be able to automate that

LK: is there some rating that applies to these things that differentiate
between missing ALT text

MC: for Bobby 3.2, breaking them up into a few categories; some things are
"Full Support" -- if triggered, a problem; if not, no problem; "Partial
Support" -- something triggers it, but you don't know straight off if it's a
problem; sub-divided -- LONGDESC for this image, others asked once per page;
third category is "Ask" -- not even a trigger that we can devise; trying to
come up with ways to reduce the noise from Partial Support items -- can turn
them off (which makes it impossible to be Bobby Approved) but makes it easier
to use and scale for the user; CR expressed interest in putting into tips
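
The three categories MC describes could be modeled as below -- a hypothetical
sketch; the category names come from the discussion, but the data structure,
the sample checks, and the suppression options are assumed:

```python
from enum import Enum

class Support(Enum):
    FULL = "full"        # trigger fires => definite problem
    PARTIAL = "partial"  # trigger fires => maybe a problem; a human judges
    ASK = "ask"          # no machine trigger at all; always ask the user

# hypothetical checks, keyed by an invented checkpoint label
CHECKS = {
    "IMG without ALT": Support.FULL,
    "suspicious ALT text": Support.PARTIAL,
    "clearest language used": Support.ASK,
}

def visible_checks(suppress_partial=False, suppress_ask=False):
    """Reduce noise by letting the user turn categories off, at the
    cost of no longer covering every checkpoint (no full approval)."""
    hidden = set()
    if suppress_partial:
        hidden.add(Support.PARTIAL)
    if suppress_ask:
        hidden.add(Support.ASK)
    return [name for name, cat in CHECKS.items() if cat not in hidden]
```

Suppressing the Ask category, for instance, leaves only the two
machine-triggered checks visible.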

LK: in terms of underlying parameters, there is the importance of the
checkpoint and then the signal-to-noise ratio inherent in condition --
probability that there is a problem given that it was triggered
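
LK's two underlying parameters could be combined into a rough prioritization
score -- purely illustrative; neither the formula, the weights, nor the
example probabilities appear in the discussion:

```python
def prompt_priority(importance, p_problem_given_trigger):
    """Rank a check by checkpoint importance (e.g. a WCAG priority mapped
    to 0..1) times the probability that a triggered check is a real
    problem -- LK's signal-to-noise ratio for the condition."""
    return importance * p_problem_given_trigger

# invented numbers: missing ALT is almost always a real problem when
# triggered; a multimedia object merely being present often is not
checks = {
    "missing ALT": prompt_priority(1.0, 0.95),
    "multimedia object present": prompt_priority(1.0, 0.30),
}
ranked = sorted(checks, key=checks.get, reverse=True)
```

A tool builder scanning the document could use such a ranking to decide which
prompts to implement first and which to leave off to limit false alarms.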

WL: keeping it simple is a P1 -- is that a disability issue or usability
issue

LK: should we put into the document some indication of user overload?
cost-benefit analysis

WL: checklist is a resource, tool, while a resource, is also something
else

LK: but there are things in between -- can have a checkpoint that says "make
sure simplest language used" and a pass to ID all multimedia objects -- any
multimedia object is still general -- more general than missing ALT text

WL: really?

LK: presence of multimedia object triggers certain things that may not be
automatically checkable, but could trigger false alarm

WL: my problem with Bobby, MC all the things on the "Ask" list have a
vagueness to them

MC: can't think of a practical algorithm for some

WL: that's what I'm talking about

WC: not all tools will do that; don't think it's fair to ask all tools to do
everything; we're asking Bobby and A-Prompt to do everything because they are
the only extant tools; need to get as many tools out there as possible to fit
different user styles

LK: if have tool without every prompt -- would be useful to have some sort
of guidance as to what to put in prompt; how important is checkpoint? to what
extent can tool avoid false positive?

MC: rating each technique on a scale?

LK: yes,

WC: what if there is a tool that just focuses on one technique; should
encourage that; user-built tool kit; rating only works for someone
implementing the whole thing; Microsoft, could, for example, take the Word
spell and grammar checker and put it into FrontPage

LK: rating has to do with false alarms

WC: algorithms aren't going to be uniformly implemented; wary of rating
based on our perceived importance

LK: for algorithms we suggest, should indicate whether or not might issue
false alarms; would be a way for someone to scan through doc and try to build
a tool with less false positives

WC: these are the instances we know of that will produce a false
positive

LK: power of the test -- doesn't have to be in first release

GJR: no conformance statement, but need a usage statement to stress that
isn't necessary to implement all; maybe a configuration appendix, as well

WC: "how to use this document to do what you want"; should include a
section indicator

LK: how long before have draft ready for public commentary

WL: 2 months past deadline; think could stand up to outside scrutiny
now

LK: what do you think WC?

WC: would like to get the ToC fixed; will go through and create the ToC by
hand, as Ian is very busy so I don't think he could help work on scripts; CR,
are you available?

CR: yeah

WC: fix ToC and put it out publicly

LK: feel more comfortable if got format consistent before public review
call

WC: think ToC should be done last, what you do may cause ToC to change

HB: deserves to be a script; can work on script while LK editing
document

LK: plan -- go through and word everything the same way, turning everything
into "and" or "or" lists; only remaining issue is whether to call it
"Conditions" or "Evaluation", but that is a global search-and-replace

WC: let's put that issue on list

LK: will leave sub-headers alone and concentrate on getting wording in
parallel; where repair has snuck into Evaluation, will move it to Repair