1 October 2007

We think we understand the rules of commerce. Manufacturers and
sellers advertise; we buy or not, as we choose. We have an
intuitive understanding of how advertising works, up to and including
a rather vague notion that advertisers try to target "suitable"
customers. Similarly, manufacturers and sellers have an understanding
of how people buy and use their products. However, technology has
been changing what's possible for all parties, frequently faster than
people's assumptions. This mismatch, between what we expect and what
is happening, is at the root of a lot of high-tech conflict, ranging
from peer-to-peer file-sharing to Apple's
iPhone.

Let's look at the iPhone first. People who have purchased one feel,
not unreasonably, that it's theirs and they can do whatever
they want with it, from making phone calls to
tossing
it into a blender. However, Apple has a different view.
They think they're selling a device and a service, and that
the requirements of the service preclude user control of the device.
It's not that Apple is necessarily wrong in their approach (though
I
think so); rather, it's the mismatch between consumers' expectations
and Apple's plans that has
caused trouble.
The result? Dueling news stories about
iBricks
versus
newer,
better "hacks".

The converse happened with peer-to-peer filesharing. Consumers were
the ones to adopt new technology — digital distribution
of songs — while the music industry was holding back.
This isn't a new reaction by the music industry to changing technology.
Twenty years ago, the RIAA tried to persuade Congress to mandate
Copycode, but tests by the National Bureau of Standards
(now NIST) showed that the
scheme
didn't work.
In 1942, in an incident that is now largely forgotten, the American Federation
of Musicians went
on
strike for royalty payments from the record industry; this incident is
generally known as the
Recording Ban
of 1942.

Fundamentally, these incidents are all the same: people had a mental
(and sometimes legal) model of what was "normal" and possible; technology
changed, and one party's behavior changed with it, to the shock
of the other. The reaction was the same, too: effectively, the creation
of a new crime or form of misbehavior I call "felony interference with
a business model".

The same dynamic has hit the Internet advertising market. Ad blocking
on the web
is catching on — after many years of slow adoption — so we
see accusations that using it is tantamount to theft. But note
the explanation given
here:

No flashing whack-a-mole banners. No highly targeted Google
ads based on the search terms you've entered.

to say nothing of pop-ups and pop-unders.

What happened? Both parties found their expectations weren't met.
Advertisers felt that ads modeled on newspaper ads weren't effective, so
they used technology to enhance ad visibility. Consumers resented the
distraction, annoyance, and bandwidth consumption; some have resorted
to ad blockers.

Turning again to the iPhone, we see that one party — Apple —
is trying to use technology to tilt the balance in its favor.
This is resented even by people who have no desire to switch phone
carriers or to install non-Apple applications, because they correctly
perceive a shift in their model of the world.

In one sense, there's no need to panic: there are always shifts.
People and institutions are remarkably flexible over time. Consider
"traditional" newspaper ads — which used to
appear on the front
page. The danger comes with enforcement of the existing
model, whether by law or by technology. That sort of thing
freezes innovation, by blocking technologies that threaten today's
behavioral model.

From my perspective, Apple's attempts to
lock down the iPhone via repeated updates won't work; people will
break
through each new locking technology.
Strange
excuses for their behavior will be seen as just that: strange
excuses. If nothing else, the market will have the final say; if
people really want a freer device, someone will build one that's
even better than the iPhone, though perhaps not cooler.

The danger will come if Apple succeeds on the legal front, via the DMCA
or the like. Technologies change, which upsets people; eventually, people
adapt, and a new norm is reached. We have to avoid artificial interference
with that dynamic.

5 October 2007

While trying to visit a baseball web site, I saw this:

Now, I don't blame the server for being unhappy, since I was trying
to look at the
score of a Mets game,
but let's look at it more closely.

Why does a config file have a login name and password? It turns
out that that's a documented feature — or rather,
misfeature — of .NET.
Some part of the server needs to invoke another subsystem with
different privileges; this is a documented and often-recommended way
of doing it. It's dubious, from a security perspective, but in many
cases it's necessary. More precisely, it's often necessary to
store credentials in some file — but why in a configuration file?
Why not let the configuration file — a file about which you
may want to display diagnostic messages — simply point to another
file that contains nothing but the password? (I should add that
'sportsrus' is a very bad password…)

Beyond that, why is the account that's being invoked
Administrator? That's the all-powerful, privileged
account on Windows systems. The principle of least privilege says
that applications should run with as few privileges as possible.
Is it really necessary to gain all privileges here? Why?

Finally, why is the detailed error message being displayed to users?
There's nothing I can do with the information. Certainly, write it to
the log file; by all means, tell the user that a system error occurred.
But the details are useful to end users only if those users happen to
be the system's testers. Detailed error display should have been
disabled on a production system.
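The fix is standard: catch the exception at the top level, log the details, and show the user a generic page. A minimal sketch (in Python rather than .NET, with a hypothetical log file name):

```python
import logging
import traceback

# Hypothetical log file; details go here, not to the browser.
logging.basicConfig(filename="app.log", level=logging.ERROR)

def handle_request(action):
    """Run a request handler; log the full error, show a generic message."""
    try:
        return action()
    except Exception:
        # The stack trace -- connection strings, passwords, and all --
        # is written only to the server-side log.
        logging.error("request failed:\n%s", traceback.format_exc())
        return "A system error occurred.  Please try again later."
```

The administrator still gets every detail; the user, and any attacker probing the site, gets nothing to work with.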

10 October 2007

It is worth noting that Teachers College is access-controlled: you have
to show a university ID card to enter. While that certainly isn't
foolproof security, it does suggest that the perpetrator was likely
a member of the university community.

From a security perspective, though, there's another problem:
everything
runs as root. That is, every application runs with full
privileges; if any application has a security hole — and
there have been
many
of them — the attacker has complete control over the device.
It is, frankly, rather unbelievable that Apple made such a mistake.
Microsoft effectively did this with every version of Windows up to
Vista, but at least they had the excuse of backwards compatibility. It
almost justifies Apple's claim that excluding other applications is
necessary
for security, save that
the Palm Pilot has always
behaved that way.

There is a silver lining, though. Running as root has one major advantage:
root can switch to other userids. This would permit each application to
run as a separate userid, thus separating each one from the others.
It's a solution I've been
advocating
for years. Microsoft has done something similar with
Internet
Explorer 7. Will Apple follow suit? It would be a good way
to benefit from a serious misfeature. Of course, they have to separate
their own applications that way, too; they've certainly had
their share of
security problems.
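The idea can be sketched in a few lines of Python, with made-up application names and a hypothetical uid range: a launcher runs as root just long enough to fork and drop to an application-specific userid, after which a compromised application cannot touch any other application's files.

```python
import os

BASE_UID = 20000   # hypothetical range reserved for per-application accounts

def uid_for_app(app_name, installed_apps):
    """Give every installed application its own userid, so a hole in one
    application does not expose the files of any other."""
    if app_name not in installed_apps:
        raise ValueError("unknown application: " + app_name)
    return BASE_UID + installed_apps.index(app_name)

def launch(app_name, installed_apps, main):
    """Fork; the child irrevocably drops root for the app's private uid.
    (Dropping privileges requires starting as root, so this part of the
    sketch is illustrative only.)"""
    pid = os.fork()
    if pid == 0:
        uid = uid_for_app(app_name, installed_apps)
        os.setgid(uid)   # shed group privileges before user privileges
        os.setuid(uid)   # after this call, there is no way back to root
        main()
        os._exit(0)
    return pid
```

The ordering matters: `setgid` must precede `setuid`, because once root is gone the process can no longer change its own group.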

Update: Apple has just
announced that in February,
they will offer a software development kit (SDK) for
the iPhone and iPod touch. This is very good news. However, the
note speaks approvingly of Nokia requiring applications to be digitally
signed by "known developers". This conflates authentication — who
wrote or published the code — and protection. They're not the
same. At best, authentication tells you whom to sue after the fact.
What's really needed is a strong security architecture that prevents
nasty things from happening.

It will be interesting to see what happens if Apple does decide to use
digital signatures. What will the criteria be for obtaining a
certificate? Will certificates need to be renewed frequently, effectively
forcing users into a software rental model? Is this the iPhone or the
iProfit?
(There are some good observations in the
New
York Times Bits blog.)
I'll post more when we know
some details.

The central question is who determines what runs on the Internet,
end system owners or ISPs. Traditionally, the Internet has fostered
the "smart host, dumb network" model, and it has succeeded brilliantly.
Rather than innovation being controlled by a small number of providers
— and for consumers, at least, the economics favor local monopolies
or duopolies — the smart host model draws on many small
entrepreneurs and technologists from around the world.

Is copyright the issue?
BitTorrent is
partnering
with content owners,
including Fox, Paramount, Warner Bros. and MGM.
Besides, Comcast is not a law enforcement agency. This isn't a simple
semantic complaint; when one is dealing with the legal process, there
are guarantees of due process and an opportunity to
contest the charges.

They may be concerned about bandwidth consumption. This is a legitimate
concern,
especially since the technology of cable ISPs makes upstream bandwidth
more expensive.
In that case, though, the remedies are first, to tell customers —
per the AP story, Comcast is not saying precisely what its policy is
— and second, to use traffic-shaping rather than simply sending
resets. Traffic shaping addresses the real problem (overconsumption
of expensive upstream bandwidth) without choking innovation.

22 October 2007

Comcast has finally
made
some statements
on what they're
doing to peer-to-peer traffic.
Briefly, they likened it to a telephone busy signal: when the network
is too busy — though they won't say what their criteria
are — some connections are interrupted. It's supposedly
almost transparent, since "the software automatically tries again".

That won't fly. Stating that the software will retry assumes
a certain model of software. Perhaps some particular
clients will retry. Others may not. The semantics of a TCP Reset
are quite well-defined; there's even an Internet Best Current Practice
that
warns against
other inappropriate TCP Resets.
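The point is easy to demonstrate. In the Python sketch below, a toy server aborts a connection with a genuine TCP Reset (setting SO_LINGER to a zero timeout makes close() send RST rather than the normal FIN); a client written without retry logic simply loses the transfer:

```python
import socket
import struct
import threading

def rst_server(ports, ready):
    """Accept one connection, read the request, then abort the
    connection with a TCP Reset: SO_LINGER with a zero timeout
    makes close() send RST instead of the normal FIN."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    ports.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.recv(4096)
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    srv.close()

def fetch_once(port):
    """A client with no retry logic: a mid-transfer Reset kills it."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET /big-file\n")
        return c.recv(4096)   # raises ConnectionResetError on RST
```

Whether any given application retries after the Reset is entirely up to its author; nothing in TCP promises it.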

It's worth noting that others have used TCP Resets to block traffic
they don't want. The most notorious offender is China, which
uses Resets
to implement its Great Firewall. Is this the model that Comcast
wishes to emulate?

A typical "Do Not Track" option works by letting people download
a special cookie. Doubleclick's
opt-out
service does just that:

Presumably, the AOL version would be more complex, because it would let
you specify your interests. That is, it's intended to permit targeted
advertising without tracking.

The problem is that today's best way to avoid tracking — regularly
cleaning out your cookie collection — will delete the "no-track"
cookies. (Doubleclick even warns about this.) Users will thus be
faced with a choice: defend against everyone, by frequently discarding
all cookies; defend against the more responsible marketers, by using
their no-track cookies; or try to remember to be selective about
discards and/or recreate many different no-track cookies very frequently.
None of these options sounds appealing.
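The dilemma is mechanical, and easy to show. The Python sketch below (with invented cookie names and a made-up tracker domain; real opt-out cookie contents differ) stores an opt-out cookie next to an ordinary tracking cookie; the standard "clear all cookies" defense destroys both:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    """Build a minimal cookie, roughly as a browser would store one."""
    return Cookie(0, name, value, None, False, domain, True,
                  domain.startswith("."), "/", True, False,
                  None, False, None, None, {})

jar = CookieJar()
jar.set_cookie(make_cookie("id", "OPT_OUT", ".doubleclick.net"))   # the no-track cookie
jar.set_cookie(make_cookie("uid", "a1b2c3", ".tracker.example"))   # an ordinary tracker

# "Defend against everyone": discard all cookies.  The opt-out
# preference is thrown away along with the trackers.
jar.clear()
```

Selective clearing (calling `jar.clear()` with a specific domain) would preserve the opt-out cookie, but only if you can name every tracking domain, which is precisely the bookkeeping burden just described.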

Update: a
New
York Times blog
has noted the same problem. It refers to some technology developed
by Tacoda to permit preferences to
persist even if cookies are deleted. It isn't clear to me what that
technology is; Tacoda and its subsidiary,
Advertising.com, have web
pages on cookie-based opt-out. Perhaps it uses a
Flash cookie?
Flash cookies are just about as useful for tracking people as ordinary
cookies, and they're seldom deleted because most people don't know about them.