16 July 2010

There's a report out of a new vulnerability in Windows. That alone isn't
particularly significant. There are, however, two interesting and scary
things about the malware that exploited this flaw.

First, the code included two drivers that were digitally signed
by a reputable company, Realtek. That is, the source of the code
was strongly identified.
Perhaps such schemes aren't that helpful as a security measure.
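The underlying point is easy to make concrete. Here is a minimal sketch in Python, using HMAC as a stand-in for the asymmetric signatures real code signing uses, with an entirely hypothetical key: verification proves only who held the signing key, not that the signed code is benign.

```python
import hmac
import hashlib

# Hypothetical signing key; a stolen or misused key signs malware just as well.
SIGNING_KEY = b"vendor-code-signing-key"

def sign(code: bytes) -> bytes:
    # Produce a signature over the code blob.
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()

def verify(code: bytes, sig: bytes) -> bool:
    # Verification checks only that the key holder signed this exact blob.
    return hmac.compare_digest(sign(code), sig)

benign = b"legitimate driver"
malware = b"rootkit driver"

# Both verify, because both were signed with the same (compromised) key;
# the signature says nothing about whether the code is safe to run.
print(verify(benign, sign(benign)))    # True
print(verify(malware, sign(malware)))  # True
```

The design choice to highlight: signature verification answers "who signed this?", which is a different question from "should I trust this code?".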

15 July 2010

Everyone knows why (some) publishers use DRM: they're afraid of
people stealing their content and hence costing them revenue. It
turns out, though, that DRM can itself have that effect.

I had a pair of Garmin GPS units, one for driving and one for hiking
and biking. I decided to upgrade the latter; I wanted a unit with
a more sensitive receiver and with a higher-resolution color screen.
Naturally, I bought another Garmin, since over the years I had
purchased a fair number of Garmin maps I wanted to reuse.

With another trip coming up and with better topographic
maps available, I decided to buy a new 1:24000 one covering the area where
we'd be hiking. It turns out, though, that many Garmin maps are locked
to a specific unit; if you replace your GPS, you have to buy
new maps. But that destroys the lock-in — someone who owns locked
maps has absolutely no incentive to stay with Garmin. They're
hurting their own future market for GPS receivers.

A similar phenomenon applies to e-book readers.
My comments in January notwithstanding, I did buy one recently. (I may write about that
some other time; briefly, prices had dropped enough that I was
willing to spend the money on something I might not use long-term.)
Thus far, however, I've confined myself to public domain books
(thank you, Project Gutenberg) and library
books. I have not bought any e-books. Why not? They're all locked with
some DRM scheme, and there are just too many scenarios that would cause
me to lose access to books I purchase. In short, the presence of DRM
has inhibited me from buying e-books. (Pricing is another issue.
While I don't expect e-books to be significantly cheaper than hard-copy editions,
I also don't expect them to be more expensive. That, however, is
what I've often seen.)

Some of this, of course, is fixable. Garmin could, perhaps, charge a
modest fee to retarget a map to a different receiver.
Publishers could deposit "unlock" keys and software with an escrow agent
people would expect to be around in 40 or 50 years. (I do have some
books in my house that (a) are that old, and (b) I reread on occasion. I
was quite amused to find that some of them have moved into the public
domain and are freely available online.) Even these are inconvenient,
and hence will cause some people to refrain from purchasing the items.
And that's the bottom line: DRM may (or may not) prevent piracy and hence
boost sales, but it can also cost sales.
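For what it's worth, the retargeting fix suggested above is technically trivial. A toy illustration, with a hypothetical per-device key derivation and a toy XOR wrap standing in for real encryption: the vendor (or an escrow agent) unwraps the map key from the old device and re-wraps it for the new one, so no new map purchase is needed.

```python
import hashlib

def device_key(device_id: str) -> bytes:
    # Hypothetical per-device key derivation.
    return hashlib.sha256(device_id.encode()).digest()

def wrap(map_key: bytes, dev_key: bytes) -> bytes:
    # Toy XOR "encryption" of a 32-byte map key under a device key.
    return bytes(a ^ b for a, b in zip(map_key, dev_key))

def unwrap(wrapped: bytes, dev_key: bytes) -> bytes:
    return wrap(wrapped, dev_key)  # XOR is its own inverse

map_key = hashlib.sha256(b"1:24000 topo map key").digest()

# The map as sold: its key is wrapped for the original unit.
old = wrap(map_key, device_key("GPS-unit-A"))

# Retargeting: unwrap with the old device's key, re-wrap for the new one.
new = wrap(unwrap(old, device_key("GPS-unit-A")), device_key("GPS-unit-B"))

assert unwrap(new, device_key("GPS-unit-B")) == map_key
```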

13 July 2010

I just finished reading Richard Clarke and Robert Knake's book Cyberwar.
Though the book has flaws, some of them serious, the authors make
some important points. They deserve to be taken seriously.

I should note that I disagree with some of my friends
about whether or not "cyberwar" is a real concept.
Earlier, I speculated that it might be a useful way to
conduct disinformation operations, but it need not be so limited. Truthfully, we do not
know for sure what can be done by cyber means.
Some tests indicate that the power grid
might be vulnerable. Clarke and Knake speak of disruptions to
military command and control networks,
damage to electrical generators,
and destruction of the
financial system. But we've never had a cyberwar, so we don't really
know.

I found the policy discussions stronger than the technical
ones. The latter contained a number of errors, some amusing
(the U.S. power grid runs at 60 hertz, not 60 megahertz;
MCI became part of Verizon, not AT&T), others considerably more serious for
the points the authors make (the Tier 1 ISPs talk to each
other via many different private peering interconnections, rather
than at "telecom hotels" — the latter was once true but hasn't
been for a fair number of years; consequently, there are
very many links that need protecting). Of course, I'm less
qualified to assess the correctness of policy discussions; however,
given that that is the authors' background, I will give them the benefit
of the doubt.

I suspect that the doomsday scenarios painted are overblown. Yes,
there are risks. However, as a Rand Corporation study
on cyberwarfare and cyberdeterrence
pointed out, there is a great deal of uncertainty inherent in offensive
cyberoperations. If everything goes just right for the attackers,
the results might be as portrayed, but the cyberfog of war is even
more opaque than the ordinary fog of war. The authors acknowledge
the uncertainty in passing, but don't draw the obvious conclusions.

A more serious failure in that vein occurs near the end of the book,
where they quote Ed Amoroso of AT&T as saying that software
is the real problem. Ed is precisely correct (and they also speak
highly of him), which makes me
wonder why they suggest that the Internet has to be reinvented
to achieve proper security. Similarly, they advocate "Govnet",
a separated network for running the Federal government, perhaps even
one that uses different operating systems and network protocols
than the Internet. It can't work. Apart from the many practical
difficulties of building, deploying, and
maintaining a new OS and application suite from scratch, and of
keeping up with changes in hardware,
the
government needs many interconnections to the private sector
(as they themselves point out),
just to get its routine work done.

The most serious problem with the book is their "Defensive Triad"
for solving the technical problem. It's threefold: have Tier 1 ISPs
scan their backbones for malware; separate the power grid (and perhaps
other critical infrastructure) from the public Internet; and secure DoD's
networks. It's hard to argue with the last one, save for the fact
that it's not clear how to do it enough better than has been done
in the past. The other two are much harder to accomplish.

It isn't clear to me that it's even possible to do deep packet
inspection (DPI) at the scale required. I don't think Clarke and Knake
appreciate just how fast the backbones run (some links run at
40 Gbps; even peering links are frequently 10 Gbps), nor how many interconnections
need to be scanned. Besides, DPI can't detect 0-day attacks
(a problem the authors note elsewhere but not here), nor can it see
through encryption. (Amusingly enough, the prospect of "three strikes"
laws forcing disconnection from the Internet for file-sharing has
the spooks worried that it may cause much more encryption to be used.)
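The encryption point is simple to demonstrate. A toy sketch, with a hypothetical signature list and a keystream as a stand-in for real transport encryption: a signature match that succeeds on plaintext fails on the ciphertext, because the bytes the scanner is looking for no longer appear on the wire.

```python
import hashlib

# Hypothetical malware byte pattern a DPI box might scan for.
SIGNATURES = [b"EVIL_PAYLOAD"]

def dpi_scan(packet: bytes) -> bool:
    # Signature-based DPI: look for known bad byte patterns.
    return any(sig in packet for sig in SIGNATURES)

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Keystream derived from the key; stands in for real encryption.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

packet = b"GET /download EVIL_PAYLOAD HTTP/1.1"
print(dpi_scan(packet))                           # True: pattern visible
print(dpi_scan(toy_encrypt(packet, b"session")))  # False: ciphertext hides it
```

The same logic explains why 0-days evade DPI: there is no signature to match in the first place.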

It is also not obvious how to truly separate the power grid networks
from the public Internet. Yes, one can mandate disconnection.
But if SIPRNET (Secret IP Router Network) and
JWICS (Joint Worldwide Intelligence Communications System)
can't maintain their air gaps against "sneakernet" connections —
and the authors assert that about the former and suggest it about the latter
— how is a power company supposed to manage?
Furthermore, some sort of sneakernet connection has to exist, or there will be
no way to get system patches and new anti-virus signatures installed on the
isolated machines. (The authors say that many machines on SIPRNET did not have
their own layers of protection because they trusted the isolation. They very
rightly decry this sort of sloppiness — but doing better requires some sort
of connection.)

The discussion of a possible arms control treaty was quite nuanced and interesting.
I started the book thinking that the idea was preposterous; I now think that
something along the lines they suggest is probably feasible and desirable.
Note that neither I nor the authors suggest that the negotiations would be easy
or that full compliance would be common. I won't try to summarize the discussion;
you'll have to read it for yourself.

Clarke and Knake make a loud call for open discussion of U.S. cyberwar policy,
much as was done for nuclear policy. They even made the obvious (to me)
allusion to Dr. Strangelove.
Their point is depressingly obvious; I really don't understand why the Powers That
Be in Washington don't see it that way.

So: given all of my complaints about the technical details, and given
some lingering concern about the accuracy of the policy sections, why do I
recommend the book? It's simple: I do think there is potential danger, and
this book is a clear recounting of how we got into this mess; it's also a clarion
call to fix it. Their specific prescriptions may not work, but if we don't
start, we're never going to solve it. To quote the Mishnah,
"it is not your part to finish the task, yet you are not
free to desist from it."

11 July 2010

The White House has recently released a draft of the
National Strategy for Trusted Identities in Cyberspace. Some of its ideas are good and some
are bad. However, I fear it will be a large effort that will do little,
and will pose a threat to our privacy. As I've written
elsewhere, I may be willing to sacrifice some privacy to help the
government protect the nation; I'm not willing to do so to help private
companies track me when it's quite useless as a defense.

The fundamental premise of the proposed strategy is that our serious
Internet security problems are due to lack of sufficient authentication.
That is demonstrably false. The biggest problem was and is buggy code.
All the authentication in the world won't stop a bad guy who goes around
the authentication system, either by
finding bugs exploitable before authentication is performed,
finding bugs in the authentication system itself,
or by hijacking your system and abusing the authenticated connection
set up by the legitimate user. All of these attacks have been known for
years.
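The first category, bugs reachable before authentication is performed, is worth a concrete illustration. Here is a deliberately buggy toy handler (entirely hypothetical): the privilege check runs before the input is canonicalized, so a leading space slips past the check yet still reaches the privileged code path, and no amount of stronger authentication would help.

```python
def handle(command: str, authenticated: bool) -> str:
    # BUG: the privilege check inspects the raw input, but the
    # command is only canonicalized (stripped) afterwards.
    if command.startswith("admin") and not authenticated:
        return "denied"
    cmd = command.strip()
    if cmd == "admin-reset":
        return "all passwords reset!"  # reachable with no credentials at all
    return "ok"

print(handle("admin-reset", authenticated=False))   # denied
print(handle(" admin-reset", authenticated=False))  # all passwords reset!
```

The fix, of course, is to canonicalize before checking; the point is that the flaw lives entirely outside the authentication system.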

What's new here is some detailed design principles. Fundamentally, the
current draft is proposing a federated authentication system, with many
different identity providers. But that's not new; it's been tried a
number of times in the past, by such groups as the Liberty Alliance.
Such efforts have been notable for their lack of success in the market.
If this system is to be truly voluntary, as the draft states, why
should this effort succeed? (Of course, whether or not the scheme proposed
will actually be voluntary is open to some debate. The draft says the
government will not
"require individuals to obtain high-assurance digital credentials if they
do not want to engage in high-risk online transactions with the government
or otherwise". In other words, you don't have to participate, as long
as you're willing to forgo things like online banking, electronic filing
of tax returns, perhaps working in certain jobs, etc.)

One very good thing the draft suggests is the use of attribute
credentials rather than identity credentials. If done properly, that
can provide very good privacy protection. To be effective, though,
the government needs mechanisms — yes, strong privacy laws and
regulations — that encourage use of attributes without identity
whenever possible. We need ways to discourage collection of identity
information unless identity is actually needed to deliver the requested
service.
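To make the attribute-versus-identity distinction concrete, here is a minimal sketch in Python; HMAC with a hypothetical issuer key stands in for the asymmetric (and, ideally, unlinkable) signatures a real system would use. The issuer certifies a single attribute, and the relying party learns nothing else about the holder.

```python
import hmac
import hashlib
import json

# Hypothetical issuer key; a real deployment would use asymmetric signatures.
ISSUER_KEY = b"state-dmv-issuing-key"

def issue_credential(attributes: dict) -> dict:
    # The issuer signs only the disclosed attributes; no name or ID attached.
    blob = json.dumps(attributes, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "sig": sig}

def verify_credential(cred: dict) -> bool:
    # The relying party checks the issuer's signature over the attributes.
    blob = json.dumps(cred["attributes"], sort_keys=True).encode()
    expect = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expect)

cred = issue_credential({"over_21": True})
# The verifier learns that *some* credential holder is over 21, nothing more.
assert verify_credential(cred)
assert "name" not in cred["attributes"]
```

Note that this sketch is still linkable across uses (the signature is a stable identifier); the Brands and Camenisch-Lysyanskaya constructions exist precisely to remove that linkage.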

There has been a lot of academic work on unlinkable credentials, such
as Stefan Brands' schemes and those by
Jan Camenisch and Anna Lysyanskaya. It is disappointing that
the White House draft did not allude to such schemes. In fact, I'm
concerned that there is no desire for true technical privacy
mechanisms; the mention of forensics as a major goal worries me.

If we're going to have multiple credentials, as the draft envisions, a
lot of attention needs to be paid to making these identities usable.
The report notes the problem but suggests that identity providers should
conduct studies on the subject, presumably to ensure that their offerings
are usable. That's wrong; users deal with their own authentication agent,
which in turn talks to providers without the user knowing or caring very
much about how that is done. But that means that the authentication agent,
in the computer, phone, or what have you, needs to be designed for usability.
Of course, by centralizing authentication you've created a new, critical
resource: the authentication manager. What better target for a malicious
hacker....

Given all this, should we be focusing on authentication?
Apart from the
forensics issue (and I think that that is a major goal, though
it is hardly stressed), I fear that people are looking under the
lamppost for their keys. While there are certainly some challenges
to doing authentication at such scale, it is a much simpler problem than
buggy code. I suspect that this is being proposed because it looks doable,
even though it will do little to solve the real problems and will create
other risks.