
Tor security advisory: "relay early" traffic confirmation attack

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009: https://blog.torproject.org/blog/one-cell-enough

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
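To make the encoding concrete, here is a minimal Python sketch of the idea described above. This is not Tor's actual wire format, and the attackers' real encoding is unknown to us; cell framing and the layers of onion encryption are all simplified away. The point it illustrates is that the relay/relay-early distinction is a cell *command*, visible to every relay on the path, so a bit pattern carried in it survives all the encryption layers:

```python
# Hypothetical sketch: a malicious HSDir encodes a hidden service's onion
# address as a pattern of RELAY vs RELAY_EARLY cells sent down the circuit,
# and a colluding entry guard decodes it. Illustrative only; not Tor's
# actual cell format or the attackers' actual encoding.

RELAY, RELAY_EARLY = 0, 1

def encode_signal(onion_address: str) -> list:
    """Encode each byte of the address as 8 cell-type choices (one per bit)."""
    cells = []
    for byte in onion_address.encode("ascii"):
        for bit in range(7, -1, -1):
            cells.append(RELAY_EARLY if (byte >> bit) & 1 else RELAY)
    return cells

def decode_signal(cells: list) -> str:
    """The guard reconstructs the address from the observed cell commands."""
    out = bytearray()
    for i in range(0, len(cells) - 7, 8):
        byte = 0
        for cell in cells[i:i + 8]:
            byte = (byte << 1) | cell
        out.append(byte)
    return out.decode("ascii")

signal = encode_signal("duskgytldkxiuqc6.onion")
assert decode_signal(signal) == "duskgytldkxiuqc6.onion"
```

Because inbound relay-early cells should never occur in normal operation, a guard that sees any at all knows it is receiving a deliberate signal.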

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
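A rough back-of-envelope calculation shows why 6.4% of Guard capacity matters. Assuming (as a simplification) that guard selection is weighted purely by capacity and that clients pick 3 guards, as was the default at the time:

```python
# Back-of-envelope estimate, under simplifying assumptions: guard selection
# weighted purely by capacity, 3 guards per client (the default then).
malicious_fraction = 0.064  # ~6.4% of Guard capacity

# Chance that at least one of a client's 3 guards is malicious:
p_at_least_one = 1 - (1 - malicious_fraction) ** 3
print(f"{p_at_least_one:.1%}")  # roughly 18% of clients exposed
```

This is also the intuition behind the move from three entry guards to one: with a single guard, the exposed fraction of clients drops back toward 6.4%.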

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
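For those who want to check their logs programmatically, a small Python helper along these lines would do it. The log path is an assumption: where your Tor logs go depends on the Log lines in your torrc (e.g. "Log notice file /var/log/tor/notices.log"):

```python
# Scan a Tor log file for the warning described above. The path you pass
# depends on your own torrc Log configuration; this is just a sketch.

WARNING = "Received an inbound RELAY_EARLY cell"

def find_relay_early_warnings(log_path):
    """Return the log lines containing the RELAY_EARLY warning, if any."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if WARNING in line:
                hits.append(line.rstrip("\n"))
    return hits
```

An empty result doesn't prove you were unaffected (only that no relay injected the signal while this Tor version was logging), but any hit is worth taking seriously.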

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?

Q2) Did we find all the malicious relays?

Q3) Did the malicious relays inject the signal at any points besides the HSDir position?

Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

Research? That's just a guess. Tor guys don't know who and they don't know why.

"Together these relays summed to about 6.4% of the Guard capacity in the network. "

Does that sound like something Joe Blow could afford? Because the presentation that never was talked about doing this for 3k. But they didn't even have that. I don't think you run all these boxes for 6 months for that little anyway. I don't think 'researchers', i.e. guys in their basement, throw thousands of dollars at something so they can write a pdf.

Those network ranges coincide with fdcservers. Looking at the prices, yes, they could get 116 servers on fast connections for 3k. Whois says they're out of Chicago, which sounds like something a researcher might use (centralized, not hiding their tracks, etc.)

$30/mo * 115 VPSs * 5 months = $17k. Totally within the budget of some research group who decided that was a good use of their money.

The larger the Tor network gets (in terms of capacity), the more expensive it is to sign up a given fraction of it. Alas, bandwidth prices are very different depending on where the relay is, so getting good diversity is more expensive. I'm glad Hart Voor Internetvrijheid and other groups are working hard at the location diversity goal even though it's more expensive: https://www.torservers.net/partners.html

This is not "Joe Blow". CERT is one of the most well-funded computer security research organizations in the country. 30K, let alone 3K, is easily within the budget of the powers that be, if they feel it's worth spending that much. It's also easily within the budget of external funding providers (I'm sure your conspiracy theory-oriented brain could come up with some plausible ones).

These are not "guys in their basement". They are researchers in their well-funded computer lab.

And for those who do not follow general infosec issues closely, CMU/(US)CERT has a very close collaboration and funding association with US Homeland Security. Which means that it is nearly certain that all of the results of this research attack have been passed on to NSA.

Can Tor users check if they've been using one of the guards in the ranges that were removed from the network, or would those guard entries have been immediately removed from the client's state file upon learning that they'd been declared invalid?

(Of course, knowing that one *wasn't* using one of these guards would *not* mean you weren't affected, but it would still be interesting to know.)

I wonder how many people have obtained the attacker's data! Locations of a lot of hidden services and their users would be quite interesting to many people - operators of hidden services are quite diverse and even include hardened criminals like the GCHQ's JTRIG hacking/trolling department: https://firstlook.org/theintercept/2014/07/14/manipulating-online-polls… (their catalog of capabilities include several that use, rather than attack, Tor hidden services).

If the attacker is the CMU researchers and law enforcement seizes their data to selectively prosecute certain hidden services, perhaps that data could also be used to investigate and litigate against JTRIG? Sadly though, we probably would not hear about it if such seizure happens since everything would be parallel constructed for the public case... unless the researchers decide to tell us (which would probably be violating an NSL or something).

"If the attacker is the CMU researchers and law enforcement seizes their data to selectively prosecute certain hidden services" - seems like that would be fruit of the poisonous tree, but I am not a lawyer

The 'NSA' was using MIT (and others) to seed TOR for the SOD. Notice how the FOIA leaves the SOD out, they often masquerade behind the DEA title. Despite early news reports (circa 2009) it wasn't the DEA that busted Viktor Bout it was the SOD (I think it was a time article in 2011).

MuckRock also got the Hemisphere FOIA from LAPD or TacPD; combine that with the telecom immunity act of 9/2007.

If U.S. IP blocks are excluded in the torrc via 'ExcludeNodes {US}',
can there be any point in the connection from client to hidden service (rendezvous point, introduction point) that could be on U.S. IP blocks notwithstanding?

Does the rendezvous point know it connects the anonymous client to a specific hidden service? Does it know the server's .onion address?

Does 'DB' in the graphics on the explanation page stand for a Hidden Service Directory server? Why is 'DB' not drawn within the Tor cloud?

ExcludeNodes cannot be applied to HSDir selection because clients need to be able to construct the same list of HSDirs as the publisher (service) so that they can find the place where the descriptor is published.

If you had ExcludeNodes {US} and the two blocks listed are indeed identified as US by tor's geoip data source (I haven't checked about that), then at least you won't have one of them as a guard. But any other guard could also potentially be passively decoding the signals sent by the malicious HSDirs.
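For reference, the torrc fragment being discussed would look like this (the {us} country code requires a tor built with GeoIP data available, and, per the comment above, this cannot affect which HSDirs serve a descriptor):

```
# torrc: avoid US-geolocated relays in your circuits where possible.
# Note: this does not apply to HSDir selection, and any non-excluded
# guard could still passively decode an injected signal.
ExcludeNodes {us}
```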

The attacks observed were coming from HSDirs, which know the address of the hidden service they're serving a descriptor for. The message transmitted was the hidden service address. This message can be decoded by the guard, which knows the IP of the client (which is accessing or publishing the descriptor).

So, when the attacker is a hidden service's HSDir (which will probably happen eventually, as the position in the DHT rotates at some interval - it would be good to know how long it takes to cycle through 50% of the HSDirs) the guards for the hidden service can deanonymize it - meaning, they can link its IP address with its onion address. Clients using a malicious guard can also be deanonymized (their IP can be identified as one which accessed the service).

It is entirely possible that other guards which are not in the set of nodes mentioned above (and/or not controlled by the attacker running the nodes caught doing the active part of the attack) are or were also decoding these messages.

The same attack could also deanonymize non-hidden-service traffic if these messages were sent from exit nodes. There have not (yet) been exit nodes observed sending relay_early cells backwards.

you wrote: "their (the clients) IP can be identified as one which accessed the service".
Do you mean they know what specific hidden site the client has visited (worst case one can imagine!!), or do they only know that the client accessed the hidden service generally?

Just asking because they said in the article: "but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up)"

That is, he won't know directly. Since he is also your entry guard, he can watch the traffic over the circuit and get a good idea of how much traffic is passing, which, combined with knowing the site, could very well let him figure out some of those details.

Even if he didn't, it would likely be enough information over time to separate casual observers who find a site and check it out from serious users who may be more interesting targets.

Unless this is pure research, then I would assume this is not the end game but simply helping troll for targets.

Good point. The result of the attack in this advisory is that he knows which hidden service you looked up or published about. It's totally possible that he would then go on to do some other attack, like the website fingerprinting one you hint about, or just general "gosh she's using Tor a lot" observations.

"The attacker, if his relays are in the right place, could learn that you (that is, your IP address) did a lookup for the hidden service address (e.g. duskgytldkxiuqc6.onion)."

It's funny to think that some FBI IP could appear among the ones they de-anonymize.
Duh, the FBI regularly infiltrates cp and drug hidden services. (Think Silk Road; the FBI had several accounts on it from the beginning.)
I know, it doesn't change anything, but it makes me smile.

I have a quick release coordination question. Why wasn't the version of tor in TBB also bumped up, especially given how recently TBB 3.6.3 was released? Doesn't the current release cycle gap between TBB and tor potentially increase the likelihood that .22 (mostly TBB client) users will be distinguished from .23 (mostly relay) users?

I know it's not necessarily preferred/ideal practice, but some people run relays from TBB instances. I certainly have in the past...so if you agree with the sentiment, it might even be a good idea to append a notice to the most recent TBB blog post discouraging people from configuring TBB's tor as a relay until it gets bumped up to .23.

Perhaps I'm making too big a deal of this, and TBB 3.6.4 is already on its way...

Whereas the new Tor release isn't urgent for clients, since it only 1) adds a log entry (and the interface letting TBB users read their log lines sure isn't easy to use) and 2) prepares them to move from 3 guards to 1, but only once we set the consensus parameter to instruct them to switch (which I plan to do after the next TBB has been out for a while).

Hopefully it won't be too long until there's a TBB with the next Tor stable in it. But the TBB team have been working hard on a TBB 4.0 alpha, which will include the Tor 0.2.5.x tree, and I sure want to see that too. So much to do, and not enough people working on it all!

I would love to help on this sort of front as a volunteer, but this type of issue--as minor as I hope it will turn out to be--seems pretty staff-driven in terms of progress. So in terms of inviting volunteers to help, it unfortunately seems like one of the few areas where volunteers working mostly in *other* areas would be the precondition to open up staff bandwidth to address this type of issue.

You mentioned in your write-up that while their signing up all those relays triggered warnings in DocTor, they were left in the consensus since it was felt that they weren't too significant a portion of the network.

Hypothetically, what would be enough to make the authority operators say "Hey, these guys are bad news, let's get them out of the consensus ASAP"? Maybe this is a dumb question, and it depends on specific circumstances I don't know enough about.

Yeah. That's still not entirely resolved. I hope the answer is "it will take a lot less the next time we see it!" :)

But really, I worry about more subtle attacks, where the adversary signs up a few relays at a time over the course of months. Also, there are events (like the blocking in Egypt) where many people decide to run relays at once. If the adversary signs up a pile of relays on that day, will we say "oh good, people care about security and safety"?

The attack was possible to notice this time because the relays all came from the same netblocks at around the same time and running the same (kind of old by now) Tor version. Detecting this sort of attack in the general case seems like a really hard research problem.

Hi! There's a new alpha release available for download. If you build Tor from source, you can download the source code for 0.3.3.2-alpha from the usual place on the website. Packages should be available over the coming weeks, with a new alpha Tor Browser release some time in February.

Remember, this is an alpha release: you should only run this if you'd like to find and report more bugs than usual.

Tor 0.3.3.2-alpha is the second alpha in the 0.3.3.x series. It introduces a mechanism to handle the high loads that many relay operators have been reporting recently. It also fixes several bugs in older releases. If this new code proves reliable, we plan to backport it to older supported release series.

Changes in version 0.3.3.2-alpha - 2018-02-10

Major features (denial-of-service mitigation):

Give relays some defenses against the recent network overload. We start with three defenses (default parameters in parentheses). First: if a single client address makes too many concurrent connections (>100), hang up on further connections. Second: if a single client address makes circuits too quickly (more than 3 per second, with an allowed burst of 90) while also having too many connections open (3), refuse new create cells for the next while (1-2 hours). Third: if a client asks to establish a rendezvous point to you directly, ignore the request. These defenses can be manually controlled by new torrc options, but relays will also take guidance from consensus parameters, so there's no need to configure anything manually. Implements ticket 24902.
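For operators who want to see the knobs behind those defaults, the new torrc options look roughly like this. Option names are as I understand them from this release; check the tor manual for your version before relying on them, and remember that consensus parameters normally override these, so setting them manually should not be needed:

```
# DoS mitigation options introduced in 0.3.3.2-alpha; values shown mirror
# the defaults quoted in the changelog entry above.
DoSConnectionEnabled 1
DoSConnectionMaxConcurrentCount 100       # hang up past 100 concurrent conns
DoSCircuitCreationEnabled 1
DoSCircuitCreationMinConnections 3        # only clients with >=3 conns open
DoSCircuitCreationRate 3                  # circuits per second
DoSCircuitCreationBurst 90
DoSCircuitCreationDefenseTimePeriod 3600  # refuse creates for ~1 hour
DoSRefuseSingleHopClientRendezvous 1
```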

Major bugfixes (netflow padding):

Stop adding unneeded channel padding right after we finish flushing to a connection that has been trying to flush for many seconds. Instead, treat all partial or complete flushes as activity on the channel, which will defer the time until we need to add padding. This fix should resolve confusing and scary log messages like "Channel padding timeout scheduled 221453ms in the past." Fixes bug 22212; bugfix on 0.3.1.1-alpha.