What's worrying the spooks?

If evading blocking systems becomes a mainstream activity (and there are said to be 6–7 million illegal file sharers in the UK) then it will be used, almost automatically, by subversive groups — preventing the spooks from examining the traffic patterns and comprehending the threat.

There seems to be some confusion about quite what is worrying the security services. Last October, The Times reported that “both the security services and police are concerned about the plans, believing that threatening to cut off pirates will increase the likelihood that they will escape detection by turning to encryption”, and this meme that the concern is encryption has been repeated ever since.

However, I think that Patrick Foster, the Times media correspondent, got hold of the wrong end of the stick. The issue isn’t encryption but traffic analysis.

Peer-to-peer file sharing is already widely encrypted (LimeWire has used encrypted transfers “out of the box” since version 4.13.11 in November 2007). This is done because it helps to prevent ISPs from detecting and blocking or “traffic shaping” (slowing down) data transfers.

However, imagine that in the near future those who illegally share copyright material are being disconnected, and websites such as The Pirate Bay are being blocked. Since this legal framework doesn’t do anything to provide alternatives to file sharing, millions of people will start to use protocols that hide the identity of peers (Tor would be suitable, but something slightly more special purpose will doubtless be rapidly adopted); along with software that evades blocking mechanisms (once again Tor fits the bill, but there are other alternatives).

Once this new generation of software is deployed (and it would be ubiquitous within weeks), not only are the rights-holders unable to determine who is nibbling away at their twentieth-century business model, but the spooks can no longer use traffic analysis to determine the members of conspiracies. That’s precisely why they were concerned last October about the disconnection aspects of the Bill, and that’s precisely why they are even more concerned now with the opposition amendment that has unexpectedly put “web blocking” onto the table.

The recently leaked BPI memo, setting out the rights holders’ lobbying position, shows how seriously this risk is being taken at the highest levels:

There has been a meeting between Number 10 officials and BIS special advisers today to discuss the way forward on Clause 18. I am told that “discussions continue” but that “the security services concerns are not being met”.

The BPI (who wrote the opposition amendment in the first place) further reported in their memo the expectation that the Government would bow to the security services’ concerns and simply remove Clause 18 from the Bill in “wash-up”, leaving the opposition whips (MPs hardly get a look in) with the choice of accepting a cut-down Bill or nothing.

Since evasion of web blocking needs more general-purpose mechanisms than hiding the identity of file-sharing peers, it’s obvious why the security services’ concerns have resurfaced so strongly now. However, one wonders why their concerns about disconnection have been stifled. Perhaps they’ve decided that the specialist peer-to-peer obfuscation systems will be too special-purpose to be used as the de facto means of communication by those they seek to surveil? Or perhaps they’ve just been told that helping old media is more important than tracking terrorists?

Diffie and Landau, in their book on wiretapping, said that “traffic analysis, not cryptanalysis, is the backbone of communications intelligence” … I suspect there are a number of Parliamentarians who are currently having the ramifications of this very carefully explained to them.

15 thoughts on “What's worrying the spooks?”

“Perhaps they’ve decided that the specialist peer-to-peer obfuscation systems will be too special-purpose to be used as the de facto means of communication by those they seek to surveil?”

Hard to see why. A pretty obvious means to transmit information is to encrypt the payload and then make it available as a peer-to-peer file, travelling over the standard mechanisms. You don’t need access control because anyone who obtains the file also needs the key.

Of course, this relies on you trusting your crypto and being able to generate good key material, and I believe there’s some evidence that the bad guys mistrust available crypto to the point that they roll their own bad crypto.

“A pretty obvious means to transmit information is to encrypt the payload and then make it available as a peer-to-peer file, travelling over the standard mechanisms. You don’t need access control because anyone who obtains the file also needs the key.”

If you’re watching the source of the file (or keeping tabs on the IPs that appear in the torrent tracker data), then you learn exactly who fetched the file. You don’t need to know its contents, merely that it appears to be worth worrying about.

Encryption doesn’t fix traffic analysis!
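The tracker point can be made concrete with a toy sketch (the IP addresses, infohashes, and record layout here are all invented for illustration): an observer who only collects peer lists still learns exactly who fetched which file.

```python
# Toy illustration of traffic analysis on BitTorrent tracker data: the
# observer never decrypts a transfer, yet peer lists alone pair an IP
# address with the infohash of the file being fetched.
from collections import defaultdict

# Hypothetical tracker observations: (peer_ip, infohash_of_file)
observations = [
    ("198.51.100.7", "a1b2c3"),   # the file being watched
    ("203.0.113.42", "a1b2c3"),   # the file being watched
    ("198.51.100.7", "ffee99"),   # some unrelated file
]

SUSPECT_INFOHASH = "a1b2c3"

fetched_by = defaultdict(set)
for ip, infohash in observations:
    fetched_by[infohash].add(ip)

# Everyone who touched the watched file, its contents unseen:
print(sorted(fetched_by[SUSPECT_INFOHASH]))
# → ['198.51.100.7', '203.0.113.42']
```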

One of Bletchley’s achievements was to work out, in 1941 by traffic analysis alone, not by reading the traffic, that the German air force was split into units of 9 planes, not 12 as previously thought, thereby giving a more accurate estimate of total strength.

I’m sure that this isn’t an original thought, but I’ve often wondered if the seemingly pointless spam messages that I get these days (content just garbled sentence fragments, no links, no images) couldn’t be an attempt to avoid traffic analysis. All you need is to hire time on a botnet, and include the intended recipient amongst the tens of thousands of people you spam the message to.
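A back-of-envelope sketch of that idea (every address here is invented): bury one real recipient in a large spam run, and from the delivery records alone every address is equally plausible.

```python
# Toy sketch of the "hide the recipient in the spam run" idea: the covert
# message is delivered identically to thousands of mailboxes, only one of
# which matters, so delivery records alone don't single anyone out.
import random

random.seed(1)  # deterministic for illustration

decoys = [f"user{i}@example.net" for i in range(10_000)]  # invented addresses
real_recipient = "contact@example.org"                    # invented address

# What a traffic observer sees: one identical fan-out to every address.
send_list = decoys + [real_recipient]
random.shuffle(send_list)

print(len(send_list))  # → 10001 indistinguishable deliveries
```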

Just a guess. The weakness with any anonymiser system (The Onion Router, Tor, is just one) is the server that does the anonymising. In the case of Tor it is the entry/exit points into the network: they know *exactly* who the packet is for and where the packet is from. Anyone can donate their machine to be part of the Tor network, and Dan Egerstad did this a couple of years back – providing five exit nodes and then harvesting the packets (he then published the email addresses of a hundred rather dim diplomats who confused anonymising with encryption). Perhaps the spooks have a plan to provide Tor exit points?

There are other anonymisers of course, and as you say, some other, more robust system may be invented to get over the Tor exit node issue.

One other thing to think about is the various routers that ISPs use. Your packets will pass through some before even entering the “cloud”. Perhaps the government has done some deal with ISPs to give the spooks access to these routers in exchange for, say, immunity from copyright owners suing them?

It is true that you can attack Tor by traffic correlation, but this requires that you can see both the entry and exit nodes at the same time. Otherwise (and unless you provide a LOT of exit nodes, this will be unusual) you either know that “this known person is contacting someone unknown” (which you could learn by bugging their house) or that “someone unknown is contacting this server” (which you could learn by inspecting the logs on the server).
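That requirement can be illustrated with a toy flow-correlation sketch (the flow records, names, and thresholds are all invented): with records from both ends you can pair flows by start time and volume, but either list on its own yields nothing.

```python
# Toy end-to-end correlation attack: pair flows seen at an entry node
# with flows seen at an exit node by matching start times and sizes.
entry_flows = [  # (client, start_seconds, bytes) -- hypothetical
    ("alice", 10.0, 52_000),
    ("bob",   11.5, 900_000),
]
exit_flows = [   # (server, start_seconds, bytes) -- hypothetical
    ("news-site", 10.3, 52_400),
    ("file-host", 11.9, 901_000),
]

def correlate(entries, exits, max_dt=1.0, size_tol=0.05):
    """Pair up flows whose start times and volumes roughly agree."""
    pairs = []
    for client, t1, b1 in entries:
        for server, t2, b2 in exits:
            if abs(t2 - t1) <= max_dt and abs(b2 - b1) <= size_tol * b1:
                pairs.append((client, server))
    return pairs

# With both ends visible the conversations fall out; with only one list
# you learn client-to-??? or ???-to-server, nothing more.
print(correlate(entry_flows, exit_flows))
# → [('alice', 'news-site'), ('bob', 'file-host')]
```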

The same restriction applies if you’re monitoring the traffic through the ISP’s router — you can see that the customer is connecting to a Tor node (but the encryption layer prevents you seeing what they are saying), but it doesn’t tell you who they are contacting. Even if you deduce from the traffic patterns that file sharing is occurring, you cannot know (apart from measuring the size) whether this is a Linux distro, a feature film, or a terrorist beheading.

The spooks will not want the general population using Tor-like systems on a de facto basis … it WILL affect their ability to do traffic analysis and that’s why they’re unhappy.

Now of course if you don’t think they should be snooping on anyone then you will not share their concerns — but that’s another story!

“If you’re watching the source of the file (or keeping tabs on the IPs that appear in the torrent tracker data), then you learn exactly who fetched the file.”

They cannot watch all the ways to a file…

A simple way to decouple the message source from the message recipient is through a third party that does the job for everybody via a distributed model…

1, Say hello to the world wide web of searchable “caches”.

2, Say hello to the world wide web of botnets.

For the first method, Google, among many others, has a large searchable cache of “open blog posts”.

If I want to send a low bandwidth stego message to you I can do the following.

1, Set up a list of time-based “One Time User Names” (OTUNs) with you in some manner.

2, About a day before the agreed time I post a relevant message (containing the stego channel) onto a randomly selected “open blog” that I know gets indexed by a Google bot very regularly.

3, After the appointed time the recipient does a Google search on the OTUN; if there is a hit then they get the detail out of the Google cache, not from the blog site.
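Step 1 could be as simple as deriving each day’s name from a shared secret, so nothing linking the parties crosses the network at lookup time. A minimal sketch, assuming a pre-shared secret (the secret and derivation scheme here are invented for illustration):

```python
# Minimal "One Time User Name" sketch: both parties derive the day's
# username from a pre-shared secret and the date, then never reuse it.
import hashlib
import hmac

SHARED_SECRET = b"agreed out of band"  # hypothetical pre-shared secret

def otun_for(date_iso: str) -> str:
    """Deterministic per-day username that both sides can compute."""
    digest = hmac.new(SHARED_SECRET, date_iso.encode(), hashlib.sha256)
    return "u" + digest.hexdigest()[:12]

# The sender posts under otun_for("2010-04-06"); a day or so later the
# recipient searches the engine's cache for the same string.
print(otun_for("2010-04-06") == otun_for("2010-04-06"))  # → True
```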

Another way to do this is to abuse an “open web proxy cache”.

For various reasons (universities and students) some organisations allow a user’s web browser to use their web cache from anywhere on the Internet (it’s a way of getting around IP-address-based service limitation rules). Once a page is in such a cache it is relatively easy to get it out again without going to the original source page.

Thus if a person has control of a botnet they can randomly cause a PC to request the page via an open web cache shortly before a pre-arranged time. At the pre-arranged time the person to whom the message is actually directed can make a request to the web cache to retrieve the page.

There are so many “open to post” places where a stego message can be put, and likewise so many “multiple transient user open caches” of one sort or another around, that you don’t have to use Tor or any other “suspicious services” to communicate stego messages covertly.

The simple fact is the Internet has too many ways to make a one-time “dead letter drop” for traditional traffic analysis to be workable with one or more “unknown players” any more.

I had occasion a while back to demonstrate this “decoupled comms path” in a variety of forms to a number of people who have, shall we say, an interest in following comms on the Internet. The “Oh F**k” look was not that well hidden on various faces, and the questions that followed tended to confirm the “we’re not happy about this” viewpoint.

Which was much the same response I saw in the “last century” when showing how a couple of “second hand” phones with “Pay as you Go” or “top up” SIMs can provide a one-time decoupled comms path.

The simple fact is that technology and the attendant “decoupled one-time” opportunities are moving forward faster than any governmental intel organisation can keep up with.

Which means that the intel weenies have to either know both illicit parties and have 100% coverage of their comms, or hope the illicit parties make the cardinal mistake of using the same one-time path twice or not having sufficient decoupling.

With “pay for” technology the “cardinal sins” were to be expected, not just due to the cost involved but the effort of making sufficient anonymous purchases, so the intel weenies could still play the game.

However, with the Internet and zero-cost walk-by access points (burger joints / coffee shops / pubs / travel hubs / etc.) the game has taken a turn towards “interesting times” for the intel weenies. With suitable preparation by the illicit parties, they can be badly hit by the use of “decoupled one-time communications”.

Thus they are not just relegated to sitting on the subs’ bench hoping for a mistake by a player to give them the chance to get on the pitch; they are not even spectators on the sidelines, in the stands, or watching on television, but are relegated to chance viewing of “action replays” well after the event…

The only current solution is HumInt, which can take significant time and resources to get into place; but the trouble with HumInt, as always, is “where is the place to be?”

Perhaps it’s not surprising some countries want to block access to the likes of Google and have their own search systems where they get access to the raw data in real time…

What you’ve failed to grok is that none of these are mainstream — or easy; not all that many people have a spare botnet to hand!

The concern of the spooks is that anonymising systems will become mainstream, and will be used as a matter of course for moving data around. Therefore, even with the most Internet-ignorant of targets, there will be limited opportunity to map out the members of the conspiracy.

It’s the deployment, and wide take-up, of systems such as that in comment #1 that is of concern, not the exotica you describe.

Tor was not mainstream (and many would argue it’s still not) but it has gained, and is gaining, traction.

P2P download software was, only a relatively short while ago, an “exotic method” but is now without a doubt mainstream.

We have seen with social networking that what appears “exotic” today will be “normal” next week and potentially “so last year” the week after.

Thus the rise of a technology can be very rapid and frequently unexpected; it depends on its utility to people.

Also, importantly, the technology just does not go away. The “so last year” aspect applies to specific implementations, which get rapidly out-evolved, usually by those with a little more imagination than just the technical aspects of a new technology.

One of the main problems malware writers had (and still do currently) is this “inability” to see beyond the technology. I, as others did, saw that they would become “guns for hire” for those with a little more “imagination”, and that the relationship would become symbiotic once basic money-laundering skills were learned by the malware writers.

The next stage, which we are beginning to see, is the exploitation of the less blunt, “in yer face” types of criminal activity. Which is again (from history) an expected result, as was seen with street crime when Robert Peel got his little idea going.

You then enter a protracted phase of development, much like the EW ECM/ECCM/ECCCM cycle or legislative regulation of a market. This usually only stops when a small change moves the “tipping point” and “another way” takes over.

What is not usually known in advance is what the small change will be.

Which brings me on to another aspect of your viewpoint,

“or easy, not all that many people have a spare botnet to hand”

It is noticeable that you go for what you see as the “weak leg attack” style of argument, which often counts against you.

There are many, many ways that can be used to get a “chosen” message into a cache. The simplest for most people to grasp is that of a “direct request” from a user or their PC.

For most people who think a little further, this would imply some kind of easily traceable link.

Thus, to get across the point that this too can be “disconnected”, and done apparently randomly and with ease, you need a simple example that they will recognise.

Almost the simplest for them to grasp is that of a botnet.

Technically there are simpler ways to do it, but they are not that well known and would need a lengthy explanation.

Also as noted by others you can rent small parts of botnets quite cheaply.

And conceivably, putting a botnet operator “to the test” (that they have bots in a given domain) would be a reasonable negotiation tactic as part of making a rental.

What simpler test could there be than asking them to make a bot request a “harmless” web document from within a given domain at a given time, as proof they have “bots to rent” in that domain…

After all, establishment of trust between two or more untrusting parties, who might further wish to remain anonymous, is a known research area.

Thus “putting to a test” to “establish trust” in a “malware for rent” negotiation would, I would have thought, have occurred to somebody involved with research into botnets.

However, I have wondered in the past why you appear to have a blind spot with botnets and their market potential. You have on a number of occasions come across as someone “behind the curve” in your comments.

Economically, what we currently see with botnets is “a failure to capitalise an asset”, not a failure of technology.

Although some would argue (and have) that botnets have an economic effect equivalent to the illegal drugs market, the realisation on the likes of the 13-million-bot Mariposa botnet was at best a few cents per bot (from what little information there is currently).

It takes little or no imagination to see the parallels between “cloud” and “botnet”; in fact Peter Gutmann coined “Malware as a Service” (MaaS) some time ago. So there is certainly no shortage of imagination to look at, and even a few good models.

As I’ve said before, most information about botnets comes from what is “obvious”, not from that which is less brash or even covert.

Only a short while ago a version of ZeuS (Kneber) was found to have been targeting document files in .mil and .gov domains.

It was found to have got past a considerable number of AV organisation products.

And like many of the current crop of older botnets, it was obvious by its noise level (i.e. lots of outbound traffic from each bot).

ZeuS appears in recent times to be changing its focus, in that it is now offering not just “information gathering” but remote shell access as well.

Thus it will enable botnet operators “to be”, for all intents and purposes, the compromised PC’s user at any level, which I’m sure you will recognise is a serious issue.

For instance, what is not currently clear is whether this development enables an “end run” around none, some, or all “secure VPN” products available on the susceptible machines, either whilst connected to the VPN or in a way that will be activated when using the VPN.

Likewise, Mariposa had an old “removable media” infection vector. This was originally developed to do “boot sector” etc. viral infections before LANs and later WANs became common. It allows the malware to cross “air gap” security.

Both should be “red light” issues for those wishing to protect information assets, especially as ZeuS appears to be able to get past a large number of AV companies’ anti-malware software, thus getting around the two main infrastructure “safeguards”.

One current argument area is “directed attacks” -v- “fire and forget”. It is felt by some in the security industry that a “directed attack” will always get through, and this is thus being used as a “lightning rod” argument.

Whilst this is true, significantly more damage is done each year by “flood” than by “lightning”.

A “fire and forget” infection vector implementing a wide-area covert information-gathering network would, like a flood, wash into many, many organisations, getting into VPNs and crossing air gaps.

Most people arguing that “directed” is the bigger threat appear not to have studied “spy craft” history. Most successful spies work their way into an organisation and, by remaining covert, work their way up, or become “sleepers” to be woken at appropriate times.

With an appropriate covert “disconnected” control channel and return channel, you could have a “sleeper botnet”.

All that the malware writer would need to crack is the “air gap” problem. Which has three aspects,

1, Getting the initial infection across the gap.
2, Getting control traffic in across the gap.
3, Getting the gathered information back out across the gap.

Mariposa appears to have solved the initial infection problem (1), and arguably the other two problems are thus technically possible using the same vector path. The only two real questions then become,

1, Timeliness.
2, Bandwidth.

With regard to timeliness, even if it was a one-time crossing of the air gap (i.e. infection) there is a very real possibility that “second hand” equipment, either via replacement or theft, will get attached by the new owner to an insecure network and thus release any residual information on the storage devices. There have been enough research projects where the researchers have gone to “computer fairs” or “computer breakers”, purchased a sample of drives, and found that significant numbers had not been reformatted and contained PII or financial information.

I could go on, but I think I’ve made the point that botnets are not just for “spamming and DoSing”, and that the malware writers are waking up to the potential for better capitalisation of the asset.

However, all that said, the use of botnets in my original argument was to simplify the explanation. I can certainly think of technically more advanced ways, and also many technically easier ways, of achieving the very minor issue of getting a page into a university or other “open web browser proxy cache”.

Personally I’d stick with the much simpler and more easily available “search engine caches”, as traffic to them is “expected” and thus not abnormal, making detection much more difficult.

“I can certainly think of technically more advanced ways, and also many technically easier ways, of achieving the very minor issue of getting a page into a university or other ‘open web browser proxy cache’.”

How many people will plough through all that is anyone’s guess (your own blog is a good place for comments that are longer than the original article) — but your remarks don’t seem to have all that much to do with why the spooks are opposing the DEB, and why fiddling with the injunctions power by adding yet more subclauses about national security is failing to assuage their fears.