from the it's-not-HOW-we-got-it,-it's-what-we-HAVE dept

Ever since the government first declared it had located the Silk Road server linked to Dread Pirate Roberts (allegedly Ross Ulbricht) thanks to a leaky CAPTCHA, there have been questions about the plausibility of this explanation. Ulbricht's attorneys suggested it wasn't the FBI, but rather the NSA, who tracked the alleged Silk Road mastermind down. This suggested parallel construction, something federal agencies have done previously to obscure the origin of evidence and something the FBI actively encourages local law enforcement agencies to do when deploying cell tower spoofers.

Technical documents filed in response to discovery requests seem to solidify the parallel construction theory. Brian Krebs at Krebs on Security and Robert Graham at Errata Security have both examined the government's filings (the Tarbell Declaration [pdf]) and noted that what the government said it did doesn't match what's actually on display.

Krebs' article quotes Nicholas Weaver, a researcher at the International Computer Science Institute at Berkeley, who points out that where the FBI agents say they found the leak doesn't mesh with the server code and architecture.

“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.
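
As a rough illustration of the behavior Weaver is describing, here's a toy Python server that answers only localhost and the front-end IP from Weaver's quote and refuses everyone else. (The actual rules in 70-6.pdf are web server configuration directives, not Python; the port and page content here are made up.)

```python
# Toy illustration (not the actual Silk Road configuration): a back-end
# web server that serves pages only to localhost and the front-end server,
# per the rules Weaver describes. Port and page content are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED = {"127.0.0.1", "62.75.246.20"}  # localhost + front-end IP from Weaver's quote

class BackEndHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.client_address[0] not in ALLOWED:
            # Note the TCP connection itself is accepted; the refusal
            # happens at the HTTP layer (relevant to Graham's point below).
            self.send_error(403, "Forbidden")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>login page and CAPTCHA served here</html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BackEndHandler).serve_forever()
```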

Weaver says that FBI agents would have been served nothing at all when attempting to access the server without using Tor. The server simply wasn't leaking into the open web. The more likely explanation is that the FBI contacted the IP directly and accessed a PHPMyAdmin page.

Graham, examining the same filings, disputes one technical detail of Weaver's explanation:

Brian Krebs quotes Nicholas Weaver as claiming "This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server". This is wrong; the web server accepts all TCP connections, though it may give a "403 Forbidden" as the result.
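
The distinction Graham draws is observable from the outside: a firewalled or Tor-only host drops or refuses the TCP connection outright, while one that answers "403 Forbidden" is accepting connections and refusing only at the HTTP layer. A minimal sketch of how one would tell the difference (the address is an RFC 5737 placeholder, not the real server):

```python
# Telling a dropped/refused connection apart from an HTTP-layer refusal.
# The address is an RFC 5737 documentation placeholder, not the real server.
import socket
import urllib.error
import urllib.request

HOST = "192.0.2.1"

# Per Graham, the server accepted TCP connections from anywhere...
try:
    socket.create_connection((HOST, 80), timeout=5).close()
    print("TCP connect succeeded: the server accepts connections")
except OSError as exc:
    print("TCP connect failed:", exc)

# ...but a request from a non-front-end IP would be refused at the HTTP
# layer with something like "403 Forbidden" instead of page content.
try:
    with urllib.request.urlopen(f"http://{HOST}/", timeout=5) as resp:
        print(resp.status, "content served")
except urllib.error.HTTPError as exc:
    print("refused at the HTTP layer:", exc.code, exc.reason)
except urllib.error.URLError as exc:
    print("no HTTP response:", exc.reason)
```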

Even with this detail being off, the parallel construction theory still fits. Graham notes that the Tarbell Declaration (the filing that contains the official explanation of how the Silk Road server was accessed) is noticeably light on supporting documentation -- like screenshots, packet logs or code snippets.

Now that the government has been forced to hand over more technical documentation, its original story is falling apart.

Since the defense could not find in the logfiles where Tarbell had accessed the system, the prosecutors helped them out by pointing to entries that looked like the following:

However, these entries are wrong. First, they are for the phpmyadmin pages and not the Silk Road login pages, so they are clearly not the pages described in the Tarbell declaration. Second, they return "200 OK" as the status code instead of the "401 Unauthorized" login error one would expect from the configuration. This means either the FBI knew the password, or the configuration has changed in the meantime, or something else is wrong with the evidence provided by the prosecutors.
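
Graham's second point turns on how HTTP basic authentication is supposed to behave: a request without valid credentials should draw a "401 Unauthorized" challenge, and a "200 OK" normally means the client presented a working username and password. A quick sketch of both cases -- the host, path, and credentials below are placeholders, not anything from the case:

```python
# Sketch of expected basic-auth behavior: no credentials -> 401 challenge;
# correct credentials -> 200. Host, path, and credentials are placeholders.
import base64
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=5)

# 1. No credentials: a protected page should answer "401 Unauthorized"
#    along with a WWW-Authenticate challenge header.
conn.request("GET", "/phpmyadmin/")
resp = conn.getresponse()
print(resp.status, resp.reason, resp.getheader("WWW-Authenticate"))
resp.read()  # drain the body so the connection can be reused

# 2. With credentials: base64("user:password") in the Authorization header.
#    A "200 OK" here means the credentials actually worked.
token = base64.b64encode(b"user:password").decode("ascii")
conn.request("GET", "/phpmyadmin/", headers={"Authorization": "Basic " + token})
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()
```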

The NSA as the purposefully-missing link makes sense. First off, Ulbricht's back-end server was located in Iceland. Graham points out that this server offered basic authentication over port 80. If the NSA was monitoring traffic in and out of Iceland (as it is legally able to do), it could easily have captured a password for this server.
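
That's because basic authentication over plain HTTP sends the password effectively in cleartext -- it is only base64-encoded, not encrypted -- so a passive observer on the wire can recover it trivially. A sketch using the standard example credentials from RFC 7617, not anything from this case:

```python
# Why plain-HTTP basic auth is trivially sniffable: the credentials travel
# base64-encoded, not encrypted. The header below uses the standard example
# from RFC 7617 ("Aladdin" / "open sesame"), not anything from this case.
import base64

# An Authorization header as it would appear in a captured HTTP request:
captured_header = "Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="

token = captured_header.split("Basic ", 1)[1]
username, _, password = base64.b64decode(token).decode("utf-8").partition(":")
print(username, password)  # -> Aladdin open sesame
```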

Furthermore, the front-end server (located in Germany -- also within the NSA's established dragnet) would return "forbidden" errors when accessed outside of Tor, but not when PHP files were requested (as Weaver noted). To get to the admin page, other, possibly non-NSA-related tactics could have been used. (Graham suggests a couple of methods well within the FBI's technical grasp -- "scanning the entire Internet for SSL servers, then searching for the string "Silkroad" in the resulting webpage," or doing the same but correlating the results with traffic traveling across the Tor onion connection -- the first of which is sketched below.) However, none of the above is suggested by Tarbell's recounting of the events. In fact, the official narrative is vague enough that almost any explanation could fit.
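
A toy version of that scan-and-grep approach: pull the default page from candidate HTTPS servers and search the response for the telltale string. The addresses below are placeholders; an actual sweep of the IPv4 space would use a purpose-built scanner rather than a Python loop:

```python
# Toy version of the scan-and-grep technique Graham describes: fetch the
# default page from candidate HTTPS servers and look for "Silkroad".
# Placeholder addresses; a real sweep would use a purpose-built scanner.
import ssl
import urllib.request

candidates = ["192.0.2.1", "192.0.2.2"]  # RFC 5737 documentation addresses

# A hidden-service box would present a self-signed certificate, so
# certificate verification has to be disabled for the probe.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for ip in candidates:
    try:
        with urllib.request.urlopen(f"https://{ip}/", timeout=5, context=ctx) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        continue  # unreachable, refused, TLS handshake failure, etc.
    if "silkroad" in body.lower():
        print("possible hit:", ip)
```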

Tarbell doesn't even deny it was parallel construction. A scenario of an NSA agent showing up at the FBI offices and opening a browser to the IP address fits within his description of events.

Defendant has submitted a declaration from Joshua Horowitz in support of his motion and request for an evidentiary hearing.

If the Government has any response to the factual statements (and/or relevance of the factual statements) asserted therein, it should file such response by C.O.B., October 6, 2014 (if possible).

The government may not feel compelled to respond. A filing from earlier in September (but added to the docket on Oct. 1st) suggests it's pretty much done discussing Ulbricht's "NSA boogeyman." [pdf link]

In light of these basic legal principles, the Government objects to the September 17 Requests as a general matter on the ground that no adequate explanation has been provided as to how the requested items are material to the defense. Most of the requests appear to concern how the Government was able to locate and search the SR Server. Yet the Government has already explained why, for a number of reasons, there is no basis to suppress the contents of the SR Server:

(1) Ulbricht has not claimed any possessory or property interest in the SR Server as required to establish standing for any motion to suppress; (2) the SR Server was searched by foreign law enforcement authorities to whom the Fourth Amendment does not apply in the first instance; (3) even if the Fourth Amendment were applicable, its warrant requirement would not apply given that the SR Server was located overseas; and (4) the search was reasonable, given that the FBI had reason to believe that the SR Server hosted the Silk Road website and, moreover, Ulbricht lacked any expectation of privacy in the SR Server under the terms of service pursuant to which he leased the server.

Particularly given these circumstances, it is the defendant’s burden to explain how the contents of the SR Server were supposedly obtained in violation of the defendant’s Fourth Amendment rights and how the defendant’s discovery requests are likely to vindicate that claim. The defense has failed to do so, and the Government is unaware of any evidence – including any information responsive to the defense’s discovery requests – that would support any viable Fourth Amendment challenge. Instead, the defense’s discovery requests continue to be based on mere conjecture, which is neither a proper basis for discovery nor a proper basis for a suppression hearing.

The response document notes that it has already responded with several documents, won't be responding to a host of other requests, but most tellingly, says the government is "not aware" of any supporting documentation for Agent Tarbell's declaration. (As noted by Robert Graham, the declaration as written is "impossible to reconstruct," with the lack of technical details being a large part of that.)

5. The name of the software that was used to capture packet data sent to the FBI from the Silk Road servers.

Other than Attachment 1, the Government is not aware of any contemporaneous records of the actions described in paragraphs 7 and 8 of the Tarbell declaration. (Please note that Attachment 1 is marked “Confidential” and is subject to the protective order entered in this matter.)

6. A list of the “miscellaneous entries” entered into the username, password, and CAPTCHA fields on the Silk Road login page, referenced in SA Tarbell’s Declaration, at ¶ 7.

See response to request #5.

7. Any logs of the activities performed by SA Tarbell and/or CY-2, referenced in ¶ 7 of SA Tarbell’s Declaration.

See response to request #5.

8. Logs of any server error messages produced by the “miscellaneous entries” referenced in SA Tarbell’s Declaration.

See response to request #5.

9. Any and all valid login credentials used to enter the Silk Road site.

See response to request #5.

10. Any and all invalid username, password, and/or CAPTCHA entries entered on the Silk Road login page.

See response to request #5.

11. Any packet logs recorded during the course of the Silk Road investigation, including but not limited to packet logs showing packet headers which contain the IP address of the leaked Silk Road Server IP address [193.107.86.49].

See response to request #5.

Parallel construction matters, but the government claims it doesn't. It will probably continue to declare it a non-issue so long as the courts agree that Ulbricht's Fourth Amendment rights weren't violated. Ulbricht's Fourth Amendment defense is admittedly a disaster, making claims that have nearly no chance of holding up under judicial scrutiny. The Silk Road indictment is a lousy test case for challenging parallel construction.

But parallel construction spills over into purely domestic investigations where Fourth Amendment rights are supposedly guaranteed. As long as the "expectation of privacy" isn't violated -- according to the government's definition of what does and doesn't enjoy this "expectation" -- the origin of the evidence isn't really up for discussion, according to the government's own filing. And what the government says here is that what was ultimately obtained matters more than how it was obtained. Parallel construction covers up invasive surveillance and investigative tactics, providing courts with evidence that looks clean but was illicitly gathered.

from the urls-we-dig-up dept

Despite the staggering growth in computing power and capabilities over the history of the technology, there remains a line in the sand between what computers can do and what we think of as "true" artificial intelligence. This line has gotten blurrier as computers have succeeded in performing certain tasks that were formerly human-only, but even these instances often feel like a brute-force approach to simulating something our own brains seem to accomplish more genuinely and abstractly. On the flipside, the success of these simulations raises questions about what's really happening inside our own heads. Here are a few of the latest developments in artificial intelligence that try to approach that line:

from the because-i-love-being-talked-down-to-by-a-dialog-box dept

Fact: if you have a site with any amount of traffic and open comment threads, you're going to draw trolls. There's no method that's been proven to completely rid your site of trolls, though not for a lack of trying. (This one is particularly mischievous.) Various sites have tried everything from aggressive moderation to requiring Facebook logins... all to no avail. (Although the latter method has proven that certain people are more than willing to troll without the protection of anonymity.)

A human rights group is introducing a new take on CAPTCHAs, those little boxes that make you type in a word to prove you are human before you can comment or register for a site. Their version doesn’t just present a scrambled word to be deciphered, but instead forces a person to choose the right word to unscramble based on the proper emotional response to a human rights violation.

Civil Rights Defenders, the Swedish-based group that developed the tool, hopes the Civil Rights Captcha will help sites block spiders and bots, while letting humans in — and hopefully educating the humans at the same time...

But perhaps forcing a troll to repeatedly choose an empathetic response will, over time, soothe the ravages of comment sections around the net. Okay, that might also be asking too much, but at the very least spreading information about human rights abuses certainly can’t hurt, even if the jerks of the internet (see, for example, YouTube comments) remain beyond help.
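
Mechanically, there isn't much to this kind of captcha: a question, a few candidate words, and exactly one accepted answer -- which is part of the problem discussed below. A toy sketch (the question and word list are invented for illustration, not taken from the Civil Rights Captcha):

```python
# Toy sketch of a pick-the-right-word captcha with exactly one accepted
# answer. Question and choices are invented; this is not the Civil Rights
# Captcha's actual implementation.
import random

CHALLENGES = [
    {
        "question": "An activist is jailed for peaceful protest. How do you feel?",
        "choices": ["indifferent", "upset", "amused"],
        "answer": "upset",  # the single "right" emotional response
    },
]

def issue_challenge():
    challenge = random.choice(CHALLENGES)
    choices = challenge["choices"][:]
    random.shuffle(choices)
    return challenge, choices

def check_response(challenge, response):
    return response.strip().lower() == challenge["answer"]

challenge, choices = issue_challenge()
print(challenge["question"], choices)
print(check_response(challenge, "Upset"))  # -> True
```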

While its heart is certainly in the right place, the implementation still requires captchas, something most users would rather not encounter every time they make a comment. (Yes, I know. But sometimes, decent, non-trolling humans don't want to "create an account" or "enter an email address" in order to participate.) On top of that is the fact that each captcha has only one "right" answer, making the system more than a little heavy-handed in its moralizing. This assumes that your regular, non-troll commenters are going to be fine with being preached at while jumping through hoops. It also assumes that all dedicated trolls are morons incapable of deducing the (obviously) "right" reaction to each situation presented.

This particular captcha service might prove useful in limited situations, like being pre-loaded with questions related to a particular cause or event being discussed/promoted at the website deploying it. It also might prove popular with the sort of people who are willing to annoy a certain percentage of their community in order to "raise awareness." It will become a form of penance for those involved, much like forwarding "concerned" emails and switching Facebook statuses to show support. You know, the sort of thing that will morph into "I solved Captchas for world peace. What will YOU do?" t-shirts popping up on Cafepress.

I can't see this solving the troll problem, but I can see it annoying most of a user base, leaving the site deploying it with a smaller audience consisting of people who like being moralized at frequently. Like any other captcha, the spambots and trolls will find a way around it, with the only ones affected being decent human beings, which would seem to be the sort of "demographic" you'd want to annoy less. Pushing them through a "think our way or hit the road" filtering system doesn't make trolls any less prevalent or make non-decent human beings any more "decent."

from the urls-we-dig-up dept

Software is getting better and better at mimicking human behavior on the internet. People are fooled every day by automated messages that seem like they come from actual humans, and sometimes real human messages are mistakenly thought to be composed by computers. Here are just a few more examples of internet bots getting smarter.

from the urls-we-dig-up dept

Computers can be programmed to play all sorts of games, but these machines don't enjoy playing -- or even winning. It'll be quite the feat to create artificial intelligence that actually understands which games are fun to play... and what games are boring. Game designers aren't guaranteed to create fun games, so it's not exactly an easy task for humans to figure out. But when a game is fun, people seem to naturally know it. That's not to say that every popular game is fun for everyone, but there seems to be some quality of good games that can't just be replicated easily. Here are a few quick links on games designed just for us humans.

from the uh-oh... dept

Last year, we wrote about a troubling set of lawsuits filed by Craigslist that seemed very dangerous, as it was pushing the boundaries on a series of legal concepts, all of which could come back to haunt Craigslist (and others) at a later date. For example, we noted that there was a "weak" DMCA claim that said that the captchas used by Craigslist to get people to prove they were human were actually "technological protection measures," and circumventing them violated the anti-circumvention provisions of the DMCA. While it's not the same lawsuit (apparently Craigslist had filed even more such lawsuits), Ray Dowd has the details of Craigslist winning a default judgment in a similar lawsuit after the company it sued didn't bother to defend itself. This is why the concept of default judgments always concerns me. Now we have a ruling on the books that finds captchas are like DRM, and getting around them even if for perfectly legal purposes (can't read 'em?) may count as violating the DMCA.

from the fascinating dept

Slashdot points us to an interview with Luis von Ahn (who we're a big fan of), where he talks about how spammers who are frustrated by various types of CAPTCHA tests have set up their own sort of "innovation prize," offering up somewhere in the range of $500,000 for software that can automatically pass CAPTCHA and reCAPTCHA reading tests (the things where you have to fill in a series of letters to sign up for a service or post a comment). As von Ahn points out: "If [the spammers] are really able to write a programme to read distorted text, great -- they have solved an AI problem." It is, effectively, an "X Prize" for optical character recognition. Not that we like to encourage spammers, but it is rather fascinating how the underground business seems to mirror the above ground innovation world as well.