Chapter one is what would ordinarily constitute an introduction or
preface to the book. Schneier states that the book is about trust:
the trust that we need to operate as a society. In these terms, trust
is the confidence we can have that other people will reliably behave
in certain ways, and not in others. In any group, there is a desire
to have people cooperate and act in the interest of all the members
of the group. In all individuals, there is a possibility that they
will defect and act against the interests of the group, either for
their own competing interest, or simply in opposition to the group.
(The author notes that defection is not always negative: positive
social change is generally driven by defectors.) Actually, the text
may be more about social engineering, because Schneier does a very
comprehensive job of exploring how confident we can be about trust,
and the ways we can increase (and sometimes inadvertently decrease)
that reliability.
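The cooperate/defect tension at the heart of the book is the classic
prisoner's dilemma from game theory. The book contains no code; the
sketch below is mine, using the standard textbook payoff ordering
(temptation > reward > punishment > sucker), with hypothetical values:

```python
# Illustrative prisoner's dilemma: why the individual incentive to
# defect conflicts with the group interest. Payoff values are the
# conventional textbook ones (T=5 > R=3 > P=1 > S=0), not Schneier's.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def payoff(a, b):
    """Return the (player_a, player_b) payoffs for one round."""
    return PAYOFFS[(a, b)]

# Whatever the other player does, defecting pays the individual more...
assert payoff("defect", "cooperate")[0] > payoff("cooperate", "cooperate")[0]
assert payoff("defect", "defect")[0] > payoff("cooperate", "defect")[0]
# ...yet mutual cooperation yields more in total than mutual defection,
# which is why a group benefits from pressures that discourage defection.
assert sum(payoff("cooperate", "cooperate")) > sum(payoff("defect", "defect"))
```

Schneier's four categories of societal pressure can be read as ways of
changing these effective payoffs so that cooperation becomes the
individually rational choice.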

Part I explores the background of trust, in both the hard and soft
sciences. Chapter two looks at biology and game theory for the
basics. Chapter three will be familiar to those who have studied
sociobiology, or other evolutionary perspectives on behaviour. A
historical view of sociology and scaling makes up chapter four.
Chapter five returns to game theory to examine conflict and societal
dilemmas.

Schneier says that Part II develops a model of trust. This may not be
evident on a cursory reading: the model consists of moral pressures,
reputational pressures, institutional pressures, and security systems,
and the author is very careful to explain each part in chapters seven
through ten; so careful, in fact, that it is sometimes hard to follow
the structure of the arguments.

Part III applies the model to the real world, examining competing
interests, organizations, corporations, and institutions. The
relative utility of the four parts of the model is analyzed with
respect to different scales (sizes and complexities) of society. The
author also notes, in a number of places, that distrust, and the
excessive institutional pressures or security systems it engenders,
are very expensive for individuals and for society as a whole.

Part IV reviews the ways societal pressures fail, with particular
emphasis on technology, and information technology. Schneier
discusses situations where carelessly chosen institutional pressures
can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models. There
are Ostrom's rules for managing commons (a model for self-regulating
societies), Dunbar's numbers, and other existing structures. But
Schneier has also created a categorization of reasons for defection, a
new set of security control types, a set of principles for designing
effective societal pressures, and a matrix relating these control
types to his trust model. Not all of them are perfect.
His list of control types has gaps and ambiguities (but then, so does
the existing military/governmental catalogue). In his figure of the
feedback loops in societal pressures, it is difficult to find a
distinction between "side effects" and "unintended consequences."
However, despite minor problems, all of these paradigms can be useful
in reviewing both the human factors in security systems, and in public
policy.

Schneier writes as well as he always does, and his research is
extensive; in Part I, possibly too extensive. A great many studies
and results are mentioned, but few are examined in any depth. This
does not help the central thrust of the book. After all, eventually
Schneier wants to talk about the technology of trust, what works, and
what doesn't. In laying the basic foundation, the question of the far
historical origin of altruism may be of academic philosophical
interest, but that does not necessarily translate into an
understanding of current moral mechanisms. It may be that God
intended us to be altruistic, and therefore gave us an ethical code to
shape our behaviour. Or, it may be that random mutation produced
entities that acted altruistically and more of them survived than did
others, so the population created expectations and laws to encourage
that behaviour, and God to explain and enforce it. But trying to
explore which of those options (and their many variants) might be right
only muddies the understanding of what options actually help us form a
secure society today.

Schneier has, as with "Beyond Fear" (cf. BKBYNDFR.RVW) and "Secrets
and Lies" (cf. BKSECLIE.RVW), not only made a useful addition to the
security literature, but created something of value to those involved
with public policy, and a fascinating philosophical tome for the
general public. Security professionals can use a number of the models
to assess controls in security systems, with a view to what will work,
what won't (and what areas are just too expensive to protect). Public
policy will benefit from examination of which formal structures are
likely to have a desired effect. (As I am finishing this review the
debate over SOPA and PIPA is going on: measures unlikely to protect
intellectual property in any meaningful way, and guaranteed to have
enormous adverse effects.) And Schneier has brought together a wealth
of ideas and research in the fields of trust and society, with his
usual clarity and readability.