On Thu, Jul 31, 2014 at 04:12:33PM -0400, Philipp Winter wrote:
> One good example is documented in a recent research paper [0]. Section
> 5.2 describes how we chased a group of related malicious exit relays
> over several months. At some point the attackers began to sample MitM
> attempts and target web sites. Publishing our actions would probably
> have helped the attackers substantially.
I think this is a really important point.
I'm usually on the side of transparency, and screw whether publishing
our methods and discussions impacts effectiveness.
But in this particular case I'm stuck, because the arms race is so
lopsidedly against us.
We can scan for whether exit relays handle certain websites poorly,
but if the list that we scan for is public, then exit relays can mess
with other websites and know they'll get away with it.
We can scan for incorrect behavior on various ports, but if the list
of ports and the set of behaviors we test for is public, then again
relays are free to mess with things we don't look for.
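At its core, each of those scans boils down to comparing a known-good copy
of a resource against the copy an exit hands back. A minimal sketch of just
that comparison step (the plumbing that builds a circuit through the exit
and fetches the two copies is assumed to exist elsewhere, and these function
names are hypothetical, not from any real scanner):

```python
import hashlib

# Hypothetical sketch of the comparison step in an exit scan. We assume
# other code has already fetched `baseline_body` over a direct connection
# and `exit_body` through the exit relay under test.

def digest(body: bytes) -> str:
    """Stable fingerprint of a fetched resource."""
    return hashlib.sha256(body).hexdigest()

def exit_is_suspect(baseline_body: bytes, exit_body: bytes) -> bool:
    """Flag the exit if its copy differs from the direct copy."""
    return digest(baseline_body) != digest(exit_body)
```

Note this only works for static targets, which is exactly why the secrecy
of the target list matters: an exit that knows the list can tamper with
everything not on it and expect to get away with it.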
One way forward is a community-grown set of tools that are easy to
extend, and a bunch of people actively extending them and trying out
new things to look for. And then when they find something, they let
people know and others can verify it.
But then what -- we add that particular test to the set of official
tests that we do? And other people keep doing their secret "I wonder if
I can catch a new one" tests in the background? Or do we add all tests
that anybody has implemented, and try to cover everything that matters,
whatever that means?
Another way forward is to design and deploy an adaptive test system,
which e.g. searches Google for some content, then fetches it with and
without Tor, and tries to figure out if the exit is messing with stuff.
That turns out to be a really tough research project, in that a lot of
web content is dynamic so your results will be mostly false positives
unless you account for that somehow. That's what SoaT was aiming to do:
https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/ExitAuthority/README.ExitScanning
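One toy way to picture the dynamic-content problem, sketched here under the
assumption that we can afford two direct fetches per page: anything that
differs between the two direct copies is treated as dynamic and discarded,
and only the remaining stable content is compared against the copy fetched
through the exit. This token-level version (my illustration, not SoaT's
actual algorithm) catches stable content an exit altered or removed, but
deciding whether *extra* content in the exit copy is an injection or just
more dynamic content is the hard part of the research problem:

```python
# Toy sketch: discard dynamic content before comparing an exit-fetched
# page against direct fetches. All three page bodies are assumed to have
# been fetched elsewhere (two directly, one through the exit under test).

def stable_tokens(direct_a: str, direct_b: str) -> set:
    """Tokens present in both direct fetches -- likely static content."""
    return set(direct_a.split()) & set(direct_b.split())

def exit_altered_page(direct_a: str, direct_b: str, via_exit: str) -> bool:
    """Flag the exit if its copy lost or changed any stable token."""
    return not stable_tokens(direct_a, direct_b) <= set(via_exit.split())
```

For example, a timestamp that changes on every fetch is automatically
ignored, but a rewritten link in otherwise-static content gets flagged.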
Some researchers like Micah Sherr ("Validating Web Content with Senser")
might have components to contribute here, to recognize and discard false
positives (but at the cost of introducing new false negatives too).
So in summary, if we're going to dabble here and there and notice bad
exits in an ad hoc way, I think secrecy is one of the few tools we have to
not totally lose the arms race. If we're going to play the arms race more
seriously, then secrecy should become an increasingly less relevant tool.
But as usual a lot of research remains if we want to get there from here.
--Roger