Sigh. As a rule, I expect better from Russell. I suppose I have bad days
too.
Russell's line of argument may be summarized as:
1. I am totally ignorant about topic X.
2. Here is my (self-evidently uninformed) rant on topic X.
3. I refuse to learn about topic X because to do so will provide financial
support to the author of topic X and my completely unsubstantiated and
uninformed preconceptions lead me to believe that I do not wish to support
such an author.
<Kindly imagine suitable ASCII art inserted here of an ostrich with its
head buried firmly in the sand, mooning you.>
I've actually *read* the damn book.
Lessig is trying to initiate a dialogue about a set of concerns he has. The
sense of the book overall is not that he has arrived at final conclusions,
but rather that he is trying to pose some questions. The questions he poses
are ones that it is desperately important that we, collectively and as a
community, answer. To the extent that he proposes answers, he does so only
tentatively, and with a clear awareness that the balance of power in this
space is still in flux.
In stark summary, Lessig's book argues as follows:
The internet shifts the focus of power from governments to corporations.
Where law has mature models for dealing with the power disparity between
individuals and governments, its tools for dealing with the disparities in
power between individuals and corporations are comparatively poor or
non-existent. Where and how shall we embed the relationships that we want,
and what role should law play in this encoding?
It is almost never healthy to seek to silence someone who asks telling
questions.
Some particular points in the book that I felt were important:
Lessig observes that there are many ways to build buttresses that shore up
desired behavior. They can be built into the law. They can be built into the
software. There are others he mentions as well. He argues that protections
do *not* need to be entirely based in law, but he also argues that
protections embedded in software are less permanent than they might at first
appear.
In particular, and I think this observation is both accurate and crucial,
Lessig notes that software is much more amenable to co-option than we tend
to want to admit. The internet was built on "open" standards, but control of
those standards is slowly but steadily being assumed by corporate
interests -- ask anyone involved with IETF. The goals and objectives of
those corporations are not necessarily your goals or my goals, and there is
a grave danger inherent in the possibility that they will succeed in
controlling the infrastructure by which the very dialogue about those goals
occurs.
The privacy wars are an example. As an individual, I clearly want the
ability to reject random advertising. American corporations clearly view
this as undesirable. They argue that targeted advertising, if done
accurately, somehow does me a favor. Indeed, it may occasionally provide me
with an opportunity of which I was not aware. I am struck, however, by the
fact that when someone contrives circumstances in which the alleged
beneficiary cannot say "no," the allegation of benefit is almost invariably
false. There are cases in law where this is the price of membership in
society: one trades personal independence for collective protection, for
example. We may argue about the merit of a given trade. But note that in law
we *can* argue, whereas there is no social infrastructure that supports
binding negotiation over corporate objectives.
Are there flaws in Lessig's arguments? Certainly.
Are there flaws in pure libertarianism? Equally certainly! In the pure view
of libertarianism, he who has the money makes the rules. As an individual, I
find this prospect frightening. It is too easy to sell birthrights, and too
hard to buy them back.
shap