We have already seen how the various parts of the task model are
interdependent and can be defined and obtained in terms of various
combinations of the others. This has implications for how we fill in
and validate the model, and how we obtain the detail required for
those parts of the model that we need in order to proceed with our
testing and reporting.

The error taxonomy and the proofed text model together are, in a
sense, primary in terms of our evaluation methods. All the other
taxonomies have aspects that are only fixed relative to the errors,
whether inherently or as a result of the practicalities of collecting
information, and the errors are only fixed relative to the proofed
text model. Thus, we find the following:

The error sources part of the writer model is likely to be built
up by examining a (large) set of unproofed and proofed texts from a
given class of writers, and allocating frequency measures to errors
that are classified by the researcher according to the pre-existing
taxonomy;

The unproofed text model (not to be confused with instances of
unproofed text, which have yet to be classified and hence cannot be
used for the same purposes) can be considered as the proofed text
model (which does not vary with any of the other factors) plus the
error taxonomy;

The end-user model tells us, for a given error type, what quality
of advice, of a given type, is necessary to correct it.
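The dependencies listed above can be pictured as a set of linked data structures. The following Python sketch is purely illustrative: all names (ErrorTaxonomy, ProofedTextModel, writer_error_sources, and so on) are hypothetical stand-ins for the model components, not part of the evaluation method itself.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class ErrorTaxonomy:
    """A fixed classification of error types (hypothetical structure)."""
    error_types: frozenset

@dataclass(frozen=True)
class ProofedTextModel:
    """Model of proofed text; does not vary with any of the other factors."""
    description: str

@dataclass(frozen=True)
class UnproofedTextModel:
    """Unproofed text model = proofed text model + error taxonomy."""
    proofed: ProofedTextModel
    errors: ErrorTaxonomy

def writer_error_sources(classified_errors, taxonomy):
    """Writer model, error-sources part: relative frequency measures over
    errors classified in the pre-existing taxonomy, as would be derived
    from a (large) set of unproofed/proofed text pairs."""
    counts = Counter(e for e in classified_errors if e in taxonomy.error_types)
    total = sum(counts.values())
    return {etype: n / total for etype, n in counts.items()} if total else {}

# End-user model: for each error type, the quality of advice of a given
# type needed to correct it (entries here are invented examples).
end_user_model = {
    "agreement": "full correction suggested",
    "spelling":  "flag only",
}

taxonomy = ErrorTaxonomy(frozenset({"agreement", "spelling"}))
freqs = writer_error_sources(
    ["spelling", "spelling", "agreement", "unknown"], taxonomy)
# "unknown" is not in the taxonomy, so it contributes nothing:
# freqs == {"spelling": 2/3, "agreement": 1/3}
```

Note how the circularity discussed below shows up even in this toy version: writer_error_sources can only count what the taxonomy already names, and the taxonomy itself was built by looking at texts of exactly this kind.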

However, as we have already discussed, the error taxonomy itself is
derived from examination of unproofed texts, implicit knowledge of
proofed text, and shrewd but unfounded ideas about error sources. Are
we going round in circles? Not quite: the appendix
Requirements Analysis for Linguistic Engineering
Evaluation contains
some preliminary discussion of how these structures can be
validated, but the problem does require more work.

In our evaluation method we use this linked structure of models in a
number of different ways; its different parts must be fully realised
to support the different purposes of the model, which are similar to
the purposes given for the error taxonomy:

To support the statement of detailed requirements;

To form the basis of reliable test methods for
the performance of systems;

To enable the results of testing to be mapped on to the
customer-driven presentation of results -- the reportable
attributes.
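The last of these purposes, mapping raw test results onto the customer-driven reportable attributes, can be sketched as a simple aggregation step. The attribute names below (recall, precision) and the raw counts are invented stand-ins for whatever attributes a particular customer requires; nothing here is prescribed by the method itself.

```python
# Hypothetical raw measurements from a test run.
raw_results = {
    "errors_detected": 80,   # true errors the system flagged
    "errors_present": 100,   # errors known to be in the test texts
    "flags_raised": 90,      # everything the system flagged
}

def reportable_attributes(raw):
    """Map raw test counts onto customer-facing reportable attributes.
    Recall and precision here are illustrative stand-ins only."""
    return {
        "recall": raw["errors_detected"] / raw["errors_present"],
        "precision": raw["errors_detected"] / raw["flags_raised"],
    }

attrs = reportable_attributes(raw_results)
# attrs == {"recall": 0.8, "precision": 80/90}
```

The point of keeping this mapping explicit and separate is that the same raw results can then be re-presented against different sets of reportable attributes for different customers, without re-running the tests.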