>Geoff has brought up an interesting point about the similarities
>between techcomm and the scientific method.
><snip>
>> The assertion that technical communication isn't a science
>> leads me to wonder why not. After all, the definition of
>> scientific inquiry is as follows:
>> 1. Based on an existing body of knowledge, form a
>> hypothesis.
>> 2. Test that hypothesis under known, (semi-?) controlled
>> conditions.
>> 3. Revise that hypothesis if necessary based on the results.
>> 4. Repeat as needed ("replication", "independent
>> confirmation", and "iteration").
>
>If anything technical communication can be regarded as a social
>science. It's a people thing. Without the people there is no
>communication in technical communication.
>
>When you're dealing with people there's no such thing as exact science.
<snip>

I think there's a continuum between science and engineering, and technical
communication is closer to the engineering end. It is about designing and
creating useful artifacts, and is only secondarily about discovering the
nature of the world, in service to the former goal. But just as there are
social sciences, I think technical communication qualifies as "social
engineering." (In computing, this term was popularized by crackers to
describe gaining entry to systems by exploiting human vulnerabilities
rather than technical ones.)

>This is what makes techcomm contentious. 'Standards' are constantly
>being refuted. For instance, if studies show that the MAJORITY of the
>research population finds all caps slower to read, that means there is
>a MINORITY that at least doesn't care, or could even find them easier
>to read. So what would writing for that minority mean?
>(Try telling your boss/client s/he's part of a minority ;-)

As with social sciences, there is no such thing as exact social
engineering. It is straightforward for engineers to have standards,
because if they violate certain principles, the bridge will fall down, or
the plane won't fly. But in technical communication, any principles are
limited in their application to certain groups, times, and places, and are
not necessarily universal even within their appropriate context. So you
can only claim that a given "standard" leads to greater utility in the
artifact, *for a certain group*, *in a certain context*. For a different
group in a different context, you have to investigate what "standards"
apply to them. Which leads us to the same conclusion:

>This is why your specific audience analysis should validate what you do.

>> 1. Based on audience analysis, determine which of several
>> "best practices" and "standards" should apply to our particular
>> audience.
>> 2. Create a document based on the hypotheses in 1 and
>> perform usability tests under a variety of (semi-?) controlled
>> conditions.
>> 3. Revise the document if necessary based on the results.
>> 4. Repeat as needed ("replication", "independent
>> confirmation", and "iteration").
><snip>
>I would say use the above four step procedure, that Geoff has defined
>as your 'best practice' and walk (don't run) with it.