5 February 2012

A lot of people are fascinated by the news story that Anonymous
managed to listen to a conference call between the FBI and
Scotland Yard.
Some of the interest is due to amazement that two such sophisticated
organizations could be had,
some is due to schadenfreude,
and some is probably despair: if the bad guys can get at these
folks, is anyone safe? To me, though, the interesting thing is the
lessons we can learn about what's wrong with security.
Many of the failures that led to this incident are endemic in
today's world, and much of the advice we're given on what to do
is simply wrong or arguably even harmful.

The first issue is how Anonymous managed to record the call. The
ways we'd see it done in a movie — tapping a phone line
or listening to a law enforcement official's cell phone — are
comparatively difficult to do. They're not impossible, but they're
not the easy way for a task like this.
Rather, what appears to have happened is what most outside security
experts immediately suspected: Anonymous read an email giving the details
of the call, and simply dialed in, in the same way as the intended
participants. The message was sent to
"more
than three dozen people at the bureau, Scotland Yard, and agencies
in France, Germany, Ireland, the Netherlands and Sweden;"
a single security flaw anywhere along the chain could have
resulted in the leak.

Here we see the first flaw: the call details were, effectively, a shared
credential. It is quite probable that the conference call moderator
had no idea who had dialed in. We see the same phenomenon with role
accounts: many people share the password for the login, email access, etc.
It may happen in the large — postmaster@example.com; it
may happen when a vacationing executive gives a secretary the
password to his or her email account; it may happen when
spouses or romantic partners share passwords.
Whatever the reason, it creates a security risk.
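A minimal sketch of the alternative, issuing each invitee a unique
dial-in PIN so the bridge can attribute every caller, might look like
this (the names and the six-digit PIN format are illustrative
assumptions, not details from the actual incident):

```python
import secrets

def issue_pins(participants):
    """Give each invitee a unique random PIN; remember who holds which."""
    pin_to_person = {}
    for person in participants:
        pin = f"{secrets.randbelow(10**6):06d}"  # six-digit random PIN
        while pin in pin_to_person:              # retry on a rare collision
            pin = f"{secrets.randbelow(10**6):06d}"
        pin_to_person[pin] = person
    return pin_to_person

def admit(pin_to_person, entered_pin):
    """Look up a caller by PIN; None means the bridge should reject."""
    return pin_to_person.get(entered_pin)

# Hypothetical roster; each person gets their own credential.
roster = issue_pins(["agent-a@fbi.example", "officer-b@met.example"])
```

With per-participant credentials, a leaked invitation compromises only
one seat on the call, and the moderator can see exactly whose credential
was used to join.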

Reading further into the article, we see that "One recipient, a foreign
police official, evidently forwarded the notification to a private
account". At that point, it's tempting to blame that official, say he or
she was
poorly trained or disobedient, and stop worrying. Apart from the
self-evident fact that a single security lapse shouldn't compromise
everything (a proposition easier to state than to make happen), I strongly
suspect that this unnamed official was behaving very rationally: he or she
either wanted email access that was too inconvenient via the proper mail
servers, or wanted a different human interface. If this person had no
access to work email from home, or felt that, say,
gmail was sufficiently better that their
productivity was improved, it's not surprising that this would happen.
It shouldn't happen — and one would hope that a police official working
on cybercrime would understand the risks — but in a strong sense
the failing was organizational: if my hypothesis is correct, they may
have failed to make it easy for people
to do the right thing. Let me stress this:
a security mechanism that is so inconvenient that it tempts
employees to evade it is worse than useless; it's downright harmful.
(Note well: I'm not saying that this official did the right thing;
I'm saying that organizational policies or technologies may have
led to too much temptation for people who are trying to be
more productive.)

But how did Anonymous know which outside email account to monitor?
This article notes that assorted groups have made a habit of
targeting law enforcement email servers, with some success against
less-sophisticated police organizations. That would yield a list
of email addresses, and perhaps passwords. Perhaps more importantly,
it can show who was using an outside mail server, one that
isn't protected by VPNs, firewalls, one-time passwords, and the like.
At that point, the attackers have several ways to proceed.

First, they could try this law enforcement email password against the
outside mail server. The odds are high that it will succeed; far
too many people reuse passwords. And why do they do this? Because
they have too many passwords to remember, especially if they're
all "strong". And of course, people are
forbidden
to write them down.

Most of the advice we get on security starts with "pick a strong
password".
(Look at
CERT's
advice: the very first thing it tells people to do is
"always select and use strong passwords". Patches, a really
effective defensive measure, are mentioned fourth.)
Strong passwords are not a bad idea, but you're in much more trouble
if you reuse passwords. No one can possibly memorize
all of the passwords they have; reuse is the usual answer.
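The usual way out of this bind is what a password manager automates:
one strong, random password per service, so that a leak of one reveals
nothing about the rest. A minimal sketch (the service names are, of
course, made up):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length=20):
    """One strong random password, generated fresh and never reused."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent secret per service: cracking or phishing one of them
# tells an attacker nothing about any of the others.
vault = {site: new_password() for site in ("work-mail", "personal-mail")}
```

The point is not the 20-character length but the independence: trying a
password stolen from one server against another server, the attack
described above, then fails.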

A second way in which the attackers could have compromised the
official's account is via a spear-phishing message, booby-trapped
to install a keystroke logger. That's been seen, though more often in a
national
security context. If the attackers did this, even encrypting
the emails wouldn't have helped; the same malware that stole the
login password could probably steal the private key as well.
But I'm pretty sure that no encryption was employed;
most
encryption systems are too hard to use.
Smart-card-based decryption would have helped (though such devices are
far less convenient to use); there are still attacks, but they're
more involved, and arguably less available to a group like Anonymous.

It's clear that there wasn't a single failure involved; in particular,
the crucial mistake of forwarding work email to a personal account
was quite plausibly a rational response to organizational policies.
Preventing recurrences of this kind of incident will not be easy;
there are too many weak spots.