Chapter 14 of Shneiderman is short, but it contains several statements that
strongly support the CSCW position, including

The introversion and isolation of early computer users has given
way to lively online communities of busily interacting dyads and bustling
crowds of chatty users. (p. 478)

Computing has become a social process. (p. 502)

Teams of people often work together and use complex shared
technology. (p. 494)

The first sentence above is the first sentence of this chapter, and the
second is the first sentence of the practitioner's summary. Although
Shneiderman may not realize how far-reaching the implications of such
statements really are, there are many valuable things in this chapter. The
table on page 481 (whose four classes are the product of two distinctions,
local vs. distributed, and synchronous vs. asynchronous) gives a basic
starting point for considering any communication system. The short list of
things that can cause cooperative systems to fail (on page 481) is good,
though necessarily incomplete:

disparity between who does the work and who gets the benefit;
threats to existing political power structures; insufficient critical mass of
users who have convenient access; violation of social taboos; and rigidity
that counters common practice or prevents exception handling.

Today there are many case studies supporting these (and other) points, and
much could be written on this area.
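The two distinctions behind the page 481 table can be sketched in code; the function name and the example classifications below are illustrative, not taken from Shneiderman.

```python
# Sketch of the time/place matrix: crossing local vs. distributed with
# synchronous vs. asynchronous yields four classes of communication system.

def classify(same_place: bool, same_time: bool) -> str:
    """Return the time/place class of a communication system."""
    place = "local" if same_place else "distributed"
    time = "synchronous" if same_time else "asynchronous"
    return f"{place} {time}"

# Illustrative examples:
print(classify(True, True))    # face-to-face meeting -> local synchronous
print(classify(False, True))   # video conference -> distributed synchronous
print(classify(False, False))  # electronic mail -> distributed asynchronous
```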

The Coordinator (by Flores, Winograd, et al.) is mentioned on p. 484, but
without citing the CSCW papers that analyze why it failed. One major reason
is that people very often want to mitigate speech acts in order not to seem
rude (see the webnote on Speech Acts and
Mitigation), and having explicit speech act labels on messages
forces them to either seem rude, or else risk being misunderstood. In fact,
many users really hated the system, despite having undergone extensive
"training" in its use.

"The dangers of junk electronic mail, ...users who fail to be polite,
... electronic snoopers, ... calculating opportunists" are mentioned (p.485)
but the importance of such difficult social factors deserves more emphasis.

For synchronous distributed systems, ownership and control are noted as
important concerns (p. 488) but without much analysis of why. Much more could
be said on both of these topics. MUDs are mentioned (p. 490) but their
growing use in business is not noted. For example, the ACS (Academic
Computing Service) at UCSD uses a MUD to coordinate its work.

The potential problems with video listed on page 491 should be noted:

slow session startup and exit; difficulty of identifying speakers (the issue
here is not just "distracting audio"); difficulty of making eye contact;
changed social status; small image size; and potential invasion of privacy.

The importance of a good audio channel is discussed on p. 493, but without
saying why it is important. The reason, confirmed by experimental studies,
is that oral narrative provides the context within which video is
interpreted; you can also verify this by observing how sound tracks are used
in movies. (One simple experiment is to put different sound tracks on the
same scene, so that it is interpreted in completely different, even
opposite, ways.)

The table on page 503 is very useful, but I would have liked to see more.
The pluses and minuses of anonymity in various situations are discussed in
several places and can be very important. The source of weakness in this
chapter may be suggested by a sentence at the very end (p. 504):

Although user-interface design of applications will be a necessary
component, the larger and more difficult research problems lie in studying the
social processes.

This suggests that Shneiderman separates user interface design from
the social processes it is supposed to support; this is not the
point of view that is taken by modern CSCW researchers, and indeed, similar
separations can be blamed for many of the failures that occur in developing
large and/or complex systems of all kinds.

Results about the structure of interaction from the area of
ethnomethodology known as conversation analysis (as discussed in Techniques for Requirements
Elicitation) have interesting applications to video conferencing
and similar media. Concepts of particular interest include negotiation for
turn taking, interrupts, repair, and discourse unit boundaries. One
important point is that a long enough delay (perhaps as little as 0.1 second)
can cause large disruptions, due to our expectations about the meaning of
gaps in conversation. Another point is that separate video images of
individuals or groups, especially when there are many of them, can frustrate
our expectations about co-presence, such as our expectation that we
and other participants have the ability to monitor attention, and
to conduct effective recipient design. Please note that all these concepts
from ethnomethodology concern the natural interaction of human beings; when
we consider them in the context of human computer interaction, we are only
using an analogy, since these sociological concepts do not apply to
non-humans.

The Importance of Social Issues for User Interface Design

The following is a general essay on the importance of social issues for
user interface design; this can be seen as part of the general point that it
is always artificial to separate social and technical issues. Let's start
with the importance of the social for engineering in general, looking at some
"wisdom" quotes from experienced engineers, who say things like

In real engineering, there are always competing factors, like
safety and cost. So real engineering is about finding the right trade-offs
among such factors.

It's the "people factors" that always cause the most trouble.

Engineering is all about finding the right compromises.

There are many interesting historical cases to illustrate these points. For
example, Edison and Westinghouse, in their fierce competition about whether AC
or DC should be used for mass power distribution, toured the country, giving
dramatic demonstrations in theatres. Also Pasteur, who today would be called
a "bioengineer", had to work very hard to get his "germ theory" accepted, and
then found that inoculation was even harder for the public to accept than
"pasteurization" had been. (Bruno Latour has written an excellent book on
Pasteur.)

Companies spend a great deal of effort training employees in social skills
such as how to conduct a meeting, ethics, management skills, group skills,
etc., and many say that such skills are essential for advancement. Also note
that there is a recent trend in engineering education to include courses like
"Engineering Ethics" and "Engineer in Society". For example, UCSD has
introduced a course in "Team Engineering" (ENG 101).

One can also cite many large computer disasters, such as the $8 billion FAA
air traffic control modernization default, the Atlanta Games sports data
fiasco, the Denver Airport baggage system failure, the default on database
modernization for DoD, the London Stock Exchange modernization collapse, and
the UK National Health Service information systems disgrace; in each case,
social issues were very deeply implicated. What all these disasters had in
common is that they were impossible to hide, which suggests that they are in
fact the tip of an enormous iceberg of failures which were never made public,
such as the near failure of Bank of America in the late 80s.

Another source of data is studies from "software economics" showing that
for large projects, most of the cost of error correction comes from errors in
requirements (approximately 50%), while the smallest part (maybe 5%) comes
from errors in coding, and the rest comes from errors in specifying modules
(maybe 20%) and in overall system design (maybe 25%). Of course, these
figures vary widely among projects. Another important fact is that most
projects are cancelled before completion, again usually due to requirements
problems. It is therefore very significant that the most serious problems
with requirements have a strong social component.
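As a hedged illustration of these rough proportions (the dollar figure is hypothetical, and as noted the percentages vary widely among projects):

```python
# Approximate shares of error-correction cost by phase, as cited above.
error_cost_share = {
    "requirements": 0.50,
    "system design": 0.25,
    "module specification": 0.20,
    "coding": 0.05,
}

# Sanity check: the shares account for the whole budget.
assert abs(sum(error_cost_share.values()) - 1.0) < 1e-9

# On a hypothetical $10 million error-correction budget:
for phase, share in error_cost_share.items():
    print(f"{phase:>20}: ${share * 10_000_000:>9,.0f}")
```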

Of course to see that all this is relevant to user interface design, you
have to accept that user interface design is a branch of engineering with a
great similarity to software engineering. To me this seems obvious, and it
is also very well supported by experience; I would say that these two fields
differ mainly in their scope. So the conclusion of all these arguments is
that user interface designers have to take account of social issues or they
are much more likely to fail. Note that to speak of "social issues" as
separate from "technical issues" is questionable, and many modern social
scientists (such as Bruno Latour and Leigh Star) claim that they really
cannot be separated. However, they are often separated "for some immediate
practical purpose," and of course it is convenient to speak of "social
factors" even though they never exist in isolation, as long as one realizes
the limitations of this way of speaking. Blended terms like
"socio-technical systems" are better, although they still suggest a
separation, and may also be more difficult to understand.

A couple of years ago, I attended a meeting at UCSD which discussed the
so-called "digital divide", and was surprised at some of the opinions
expressed. One person seemed to think that just throwing technology at the
problem could solve it, or at least, make a major contribution, noting that
very large amounts of grant money were available for this. Another wanted
an extensive survey of views of people currently without computer access on
what they wanted. (But another participant recalled a recent large survey
of this kind done in Chicago, which found that the top two items for what
poor people there wanted were: (1) some way to ensure that their trash would
be picked up; and (2) inexpensive child care. Alas, these are not the kinds
of thing that the internet is good at.) Several wanted a detailed survey of
organizations involved with computer literacy, and of facilities currently
available; in fact, these already existed for San Diego. Others wanted to
emphasize training through grass roots community organizations.

Only one participant seemed to realize that information and services on the
web in their current form are not suitable for many people who could certainly
benefit from them if they could access them. This person cited a study done
in Birmingham about a project to provide information about cancer over the
web. It seems that many people were intimidated or confused by the way all
this information was organized and the language and social conventions that
were used, and as a result jumped to sometimes unwarranted conclusions, such
as that they had learned for sure that they were going to die of cancer soon.

Brief Background on Modern Research in Sociology of Science and
Technology

In contrast to the 19th century trend of making heroes of a small number
of individuals (e.g., Einstein, Newton, or Mozart), recent research often
looks for the work that was done to make something happen, and in
particular, at the kind of "infrastructural" work that is usually left out
in traditional accounts, e.g., the work of people who actually build the
instruments that are used in physics experiments, or the people who somehow
obtain the money to build a skyscraper. The research strategy that
consciously looks for these omissions is called infrastructural
inversion (due to Leigh Star), and I have suggested that the omissions
themselves should be called infrastructural submersions. One major
example is that the hygiene infrastructure (especially sewers) of cities
like London and Paris made possible the medical advances of the mid to late
nineteenth century. And of course the experimental work of Newton would
have been impossible without the advances in technology that made his
experimental apparatus constructable. The same is true of high energy
physics today, where (for example) low temperature magnets are an important
and highly specialized infrastructure for particle accelerators.

Social Theories of Technology and Science

A sociological understanding of technology (and science, which cannot
really be separated from technology) must concern itself with what engineers
and scientists actually do, which is often different from what they say they
do. This is a special case of a very general problem in anthropology and
ethnography, called the say-do problem. Among the factors that produce
this discrepancy are tacit knowledge, false memory syndrome, and the myths
that all professions have about their work; very often the discrepancy is
not a deliberate deception. Tacit knowledge is knowledge of
how to do something, without the ability to say how we do it. Instances are
very common in everyday life and in professional life; for example, few people
can describe how they tie their shoes, brush their teeth, or organize their
schedule. As an illustration, numerous case studies have shown that a large
part of "routine" office work actually consists of handling exceptions, i.e.,
of doing things that by definition are not routine; but if you ask (for
example) a file clerk what he does, you will get only a description of the
most routine activities.

The ubiquity of the say-do problem has very serious methodological
implications for sociologists: in many cases they cannot just ask informants
to tell them the answers to the questions that they really want to have
answered; however, sometimes sociologists can ask other questions, and then
use their answers to get at what they really want to know. Thus designing good
questionnaires is a delicate art that must take account of how people tend
respond to various kinds of question.

In fact, much of today's sociology has a statistical flavor, being based on
questionnaires, structured interviews, and various kinds of demographics.
While this seems to work rather well for selling soap and politicians, it will
not help very much with understanding how technologies relate to society. In
general, better answers can be obtained in an interview if very specific
questions are asked, and if the researcher is immersed in the concrete details
of a particular project; general questions tend to produce general answers,
which are often misleading or wrong - though of course the same can happen
with specific questions. Concrete details are often much more useful than
statistical summaries.

A Remark on Algebraic Semiotics

Despite the mathematical character of the formal definitions of sign system
and semiotic morphism, these concepts can be used very informally in practice,
just as simple arithmetic is used in everyday life. For example, to see if we
have enough gas left to drive from San Diego to Los Angeles, we make some
assumptions, use some approximations, and only do the divisions and
multiplications roughly. It would not make much sense to first work out an
exact formula taking account of all contingencies, then do a careful analysis
of the likelihoods, and finally calculate the mean and variance of the
resulting probability distribution (though this is the sort of thing that NASA
does for space shuttle missions). In user interface design, our goal is often
just to get a rough understanding of why some design options may be better
than others, and for this purpose, assumptions, approximations, and rough
calculations are sufficient, especially when there is time pressure.
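The kind of rough calculation described above might look like this; every number is an assumption chosen for illustration.

```python
# Back-of-the-envelope range check: do we have enough gas to reach LA?
distance_miles = 120       # San Diego to Los Angeles, roughly
tank_gallons = 4.5         # rough reading of the gauge (assumed)
miles_per_gallon = 30      # rough highway mileage (assumed)

range_miles = tank_gallons * miles_per_gallon   # 135 miles
enough = range_miles >= distance_miles
print(enough)  # True, but with only a modest margin
```

As the text suggests, the point is not precision: the approximations are good enough to support the decision, and working out an exact formula with a probability distribution would rarely be worth the effort.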