Saturday, December 15, 2007

The U.S. District Court for the District of Vermont has held that you can invoke the Fifth Amendment privilege against self-incrimination and refuse to give up the password you have used to encrypt data files.

Here are the essential facts in United States v. Boucher, 2007 WL 4246473 (November 29, 2007):

On December 17, 2006, defendant Sebastien Boucher was arrested on a complaint charging him with transportation of child pornography in violation of 18 U.S.C. § 2252A(a)(1). At the time of his arrest government agents seized from him a laptop computer containing child pornography. The government has now determined that the relevant files are encrypted, password-protected, and inaccessible. The grand jury has subpoenaed Boucher to enter a password to allow access to the files on the computer. Boucher has moved to quash the subpoena on the grounds that it violates his Fifth Amendment right against self-incrimination.
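As a technical aside, the reason the files are "inaccessible" without the password is that modern disk-encryption tools derive the decryption key from the passphrase itself; the key is never stored on the machine. Here is a minimal sketch in Python of that idea, using the standard library's PBKDF2 implementation (the salt, iteration count, and passphrases are purely illustrative, not drawn from the case):

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 32-byte encryption key from a passphrase via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

# The salt is stored on disk in the clear; it is not a secret.
salt = b"stored-on-disk-in-the-clear"

key_right = derive_key("correct horse battery staple", salt)
key_wrong = derive_key("correct horse battery stapl", salt)  # one char off

# Even a one-character difference yields an unrelated key, so without the
# exact passphrase -- which exists only in the user's mind -- the derived
# key is wrong and the ciphertext cannot be decrypted.
print(key_right == key_wrong)  # False
print(len(key_right))          # 32
```

This is why the government could not simply extract the password from the seized laptop: there is nothing on the hardware to extract.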

The district court held that Boucher could invoke the Fifth Amendment and refuse to comply.

I did an earlier post about this general issue and, as I explained there, in order to claim the Fifth Amendment privilege the government must be (i) compelling you (ii) to give testimony that (iii) incriminates you. All three of these requirements have to be met or you cannot claim the Fifth Amendment privilege. (And if you voluntarily comply by giving up your password, you can’t try to invoke the privilege later because a court will say that you were not compelled to do so – you did so voluntarily.)

In an earlier post or two on this issue, I analyzed a scenario that has come up in a few instances (though not in any reported cases I’m familiar with) in which someone is stopped by Customs officers while entering or leaving the U.S. In my scenario, which is the kind of circumstance I’ve heard about, the officers check the person’s laptop, find it’s encrypted and demand the password. The question then becomes whether the laptop’s owner can (i) invoke the Fifth Amendment privilege or (ii) invoke Miranda. As I’ve written before, to invoke Miranda you have to be in custody, and you arguably are not here. And to be “compelled” under the Fifth Amendment, you have to be commanded to do something by judicial process or some analogous type of official coercion (like losing your job); you probably (?) don’t have that here, either.

But in the Boucher case, he had been subpoenaed by a federal grand jury which was ordering him to give up the password, so he was being compelled to do so.

As to the second and third requirements, the district court held that giving up the password was a testimonial, incriminating act:

Compelling Boucher to enter the password forces him to produce evidence that could be used to incriminate him. Producing the password, as if it were a key to a locked container, forces Boucher to produce the contents of his laptop. . . .

Entering a password into the computer implicitly communicates facts. By entering the password Boucher would be disclosing the fact that he knows the password and has control over the files on drive Z. The procedure is equivalent to asking Boucher, “Do you know the password to the laptop?” . . .

The Supreme Court has held some acts of production are unprivileged such as providing fingerprints, blood samples, or voice recordings. Production of such evidence gives no indication of a person's thoughts . . . because it is undeniable that a person possesses his own fingerprints, blood, and voice. Unlike the unprivileged production of such samples, it is not without question that Boucher possesses the password or has access to the files.

In distinguishing testimonial from non-testimonial acts, the Supreme Court has compared revealing the combination to a wall safe to surrendering the key to a strongbox. The combination conveys the contents of one's mind; the key does not and is therefore not testimonial. A password, like a combination, is in the suspect's mind, and is therefore testimonial and beyond the reach of the grand jury subpoena.

United States v. Boucher, supra.

The government tried to get around the testimonial issue by offering “to restrict the entering of the password so that no one views or records the password.” The court didn’t buy this alternative:

While this would prevent the government from knowing what the password is, it would not change the testimonial significance of the act of entering the password. Boucher would still be implicitly indicating that he knows the password and that he has access to the files. The contents of Boucher's mind would still be displayed, and therefore the testimonial nature does not change merely because no one else will discover the password.

United States v. Boucher, supra.

So Boucher wins and the court quashes the subpoena, which means it becomes null and void and cannot be enforced.

I applaud the court’s decision. I’ve argued for this outcome in chapters I’ve written for a couple of books and in some short articles (and in discussions with my students). I think this is absolutely the correct result, but I strongly suspect the government will appeal the decision. Let’s hope the appellate court goes along with this court.

There is, again, a caveat: Remember that Boucher had been served with a grand jury subpoena so there was no doubt he was being compelled to give up the password. The airport scenario is much more difficult, because compulsion is not as obvious. We won’t know whether anyone can take the Fifth Amendment in that context unless and until someone refuses to provide their password to Customs officers and winds up litigating that issue in court.

Wednesday, December 12, 2007

I just ran across something I’d not seen before: a law enforcement (FBI) form called “Consent to Assume Online Presence.”

Before I get to the form, what it does and why it’s new to me (anyway), I should explain what I mean by “consent.”

As I wrote in an earlier post, the Fourth Amendment creates a right to be free from “unreasonable searches and seizures.” That means, among other things, that law enforcement officers do not violate the Fourth Amendment when they conduct a search or seizure that is “reasonable.”

As I also explained in that post, a search or seizure can be reasonable in either of two ways: (i) if it is conducted pursuant to a warrant (search warrants for searching and seizing evidence, arrest warrants for seizing people); or (ii) if it is conducted pursuant to a valid exception to the warrant requirement. As I explained in the earlier post, consent is an exception to the warrant requirement. With consent, you essentially waive your Fourth Amendment rights and let law enforcement search and/or seize you or your property.

Unlike many of the exceptions to the warrant requirement, consent does not require that the officer have probable cause to believe he or she will find evidence of criminal activity in the place(s) they want to search. Probable cause is irrelevant here because you’re voluntarily giving up your Fourth Amendment rights.

To be valid, consent must be voluntary (so police can’t threaten to beat you until you consent) and it must be knowing (which means you have to know you had the right not to consent . . . but courts presume we all know that, so an officer doesn’t have to tell you that you have the right NOT to consent for your consent to be valid).

Officers can rely on oral consent (they ask if you’ll consent to let them search, say, your car, you say “ok” and they proceed, having gotten your consent), but there’s really a preference in law enforcement for having the person sign a form. Consent is, after all, a kind of contract: You agree to give up your Fourth Amendment rights and that creates an agreement with law enforcement under which they will search the property for which you have given consent. If officers rely on oral consent, the person can always say later that they didn’t consent at all or didn’t consent to the scope of the search that was conducted (i.e., the officers searched more than the person agreed to having them do). So officers, especially federal officers, generally have the person sign a form, a “Consent to Search” form.

Enough background. Let’s get to the “Consent to Assume Online Presence.” As far as I can tell, the “Consent to Assume Online Presence” form has so far been mentioned in only two reported cases, both federal cases and both involving FBI investigations.

In United States v. Fazio, 2006 WL 1307614 (U.S. District Court for the Eastern District of Missouri, 2006), the FBI was conducting an online investigation of child pornography when they ran across an account (“salvatorejrf”) that was associated with the creation and posting of “four visual depictions of naked children.” United States v. Fazio, supra. They traced the account to Salvatore Fazio and, after some more investigation, obtained a warrant to search his home.

FBI agents executed the warrant, seizing computers, CDs and other evidence. One of the agents, Agent Ghiz, also wound up interviewing Fazio, who said “he was acting in an undercover capacity to identify missing and exploited children” and “admitted that he had downloaded images of children from the internet and uploaded them on other sites.” United States v. Fazio, supra. According to the opinion, during the interview

Agent Ghiz did not accuse the defendant of lying nor did he use any psychological ploys to encourage Mr. Fazio to talk. . . . According to Agent Ghiz, [Fazio] never attempted to leave during the execution of the search warrant or the interview. Toward the conclusion of the interview, Agent Ghiz asked [Fazio] if he would be willing to continue to help in the investigation by allowing the FBI to use his online identity to access other sites to help investigate other child pornography crimes. [Fazio] was willing to cooperate and gave consent to the FBI's assuming his online presence. Government's Exhibit 8, a copy of a form entitled Consent to Assume Online Presence, was introduced at the evidentiary hearing. It was signed by [Fazio] in the presence of Agent Ghiz.

United States v. Fazio, supra. The evidentiary hearing came when Fazio moved to suppress the evidence the agents had obtained.

The other case is much more recent. In United States v. Jones, 2007 WL 4224220 (U.S. District Court for the Southern District of Ohio, 2007) the FBI was conducting another investigation into the online distribution of child pornography. In the course of the investigation, they ran across an account that was registered to Joseph Jones. United States v. Jones, supra. They obtained a warrant to search his home and went there to execute it but no one was there. The agents and some local police officers then went looking for Jones, whom they eventually found talking to two other men at the end of a driveway in what seems to have been a rural area. United States v. Jones, supra.

An FBI agent, Agent White, explained to Jones why they were looking for him and, at his request, showed him the search warrant for the property they had identified earlier. I won’t go into all the details, but Jones wound up consenting to their searching another location with which he also had ties. United States v. Jones, supra. He gave his consent to the search of that property by, as I noted earlier, signing a “Consent to Search” form, a traditional form. The FBI agent also had brought a “Consent to Assume Online Presence” form and Jones wound up signing that, too:

[Agent] White and [Jones] completed the “Consent To Assume Online Presence” form. This form gave the FBI permission to take over [Jones’] “online presence” on Internet sites related to child pornography so agents could discover other offenders. [Jones] filled in the spaces on the form calling for his online accounts, screen names, and passwords, and he signed and dated the form at the bottom.

United States v. Jones, supra.

I find the “Consent to Assume Online Presence” form very interesting, for a couple of reasons. One is that it doesn’t act like a traditional consent in that it doesn’t conform to the usual dynamic of a Fourth Amendment search and seizure.

The usual dynamic, which goes back centuries, is that law enforcement officers get a warrant to search a place for specified evidence and seize the evidence when they find it. They then go to the place and, if the owner is there, give the owner a copy of the warrant (which is their “ticket” to be there), conduct the search and seizure, give the owner an inventory of what they’ve taken and then leave. This dynamic is structured, both spatially and temporally: It happens “at” a specific real-space place (or places). It has a beginning, a middle and an end.

The same thing is true of traditional consent searches. So if I consent to let police search my car for, say, drugs, they can search the car for drugs. The car is the “place,” so they can search that “place” and no other. And the search will last only as long as it takes to reasonably search the car (they can’t routinely take it apart). Here, too, the owner of the car is usually there and observes the search.

Now look at the “Consent to Assume Online Presence” search, as I understand it: Agents, or officers, obtain the consent to assume the person’s online identity, which they do at some later time (that not being convenient at the moment consent is given, as we see in these two cases). The “place” to be searched is, I gather, cyberspace, since the Consent to Assume Online Presence lets officers use the person’s online accounts to search cyberspace for other evidence, i.e., to find others involved in child pornography in the two cases described above. So the “place” to be searched is apparently unbounded, and I’m wondering if the temporal dimension of the consent is pretty much the same. I don’t see any indication that the “Consent to Assume Online Presence” form limits the length of time during which the consenting person’s online accounts can be used for this purpose. I suppose there’s a functional self-limitation, in that the consent expires when the accounts do or when they’re otherwise cancelled.

But even with that limitation, this is a pretty amazingly unbounded consent to search. It’s basically an untethered consent to search: As I said earlier, traditional consent searches have definite spatial and temporal limitations: “Yes, officer, you can search my car for drugs” lets an officer search the car (only that car) until he either finds drugs or gives up after not finding drugs. There, the search is tethered to the place being searched and is limited by the reasonable amount of time such a search would need. Here, the consent is untethered in that it apparently lets officers use the consenting person’s accounts to conduct open-ended online investigations, subject to no comparable spatial or temporal limits.

I’m not even sure this is a consent to search, in the traditional sense. In these two cases, law enforcement had already gained access to the persons’ online accounts, so there wasn’t going to be any additional, incremental invasion of their privacy. Law enforcement officers had already been in their online accounts and seen what there was to see. The consent in these cases picks up, as you can see from the facts summarized above, after the suspect has already been identified, after search warrants have been executed (and, in one case, a regular, spatial consent search executed) and after the suspect has effectively been transformed into a defendant. So that investigation is really over.

This is a consent to investigate other, unrelated cases. That’s why it doesn’t strike me as a traditional search. It’s really a consent to assume someone’s identity to investigate crimes committed by persons other than the one consenting. Now, there are cases in which law enforcement officers key in on a suspect, get the suspect to consent to letting them search property – a car, say – where they think they will find evidence of someone else’s being involved in the criminal activity they’re investigating the suspect for. There the officers are getting consent to carry on an investigation that at least partially impacts on someone other than the person giving consent. But there the consent search is a traditional consent search because it conforms to the dynamic I outlined above – it has defined spatial and temporal dimensions.

I could ramble on more about that aspect of the “Consent to Assume Online Presence” searches (or whatever they are) but I won’t. I’ll content myself with making one final point that seems interesting about them.

When I consent to a traditional search, I can take it back. That is, I can revoke my consent. So if the officer says, “Can I search your car for drugs?” and I (foolishly) say, “yes,” I can change my mind. If, while the officer is searching, I say, “I’ve changed my mind – stop searching right now”, then the officer has to do just that. If the officer has found drugs before I change my mind, then the officer can keep those drugs and they can be used in evidence against me because they were found legitimately, i.e., they were found while my consent was still in effect.

How, I wonder, do you revoke your “Consent to Assume Online Presence”? Do you email the agency to which you gave the consent, or call them or visit them or have your lawyer get in touch and say, “by the way, I changed my mind – quit using my account”?

Saturday, December 08, 2007

Over the last month or three, I’ve read several news stories about how IBM and Linden Lab, along with a number of IT companies, are working to develop “avatar interoperability.”

“Avatar interoperability,” as you may know, means that you, I or anyone could create an avatar on Second Life and use that same avatar in other virtual worlds, such as HiPiHi or World of Warcraft or Entropia.

The premise is that having created my avatar – my virtual self – I could then use that avatar to travel seamlessly among the various virtual worlds.

In a sense, I guess, the interoperable avatar becomes my passport to participate in as many virtual worlds as I like; I would no longer be tethered to a specific virtual world by my limited, idiosyncratic avatar.

Avatar interoperability seems to be one aspect of creating a new 3D Internet. One article I read said the ultimate goal is to replace our current, text-based Internet with “a galaxy of connected virtual worlds.” So instead of experiencing cyberspace as a set of linked, sequential “pages,” each of which features a combination of text, graphics and sound, I’d log on as my virtual self and experience cyberspace as a truly virtual place. Or, perhaps more accurately, I would experience cyberspace as a linked series of virtual places, just as I experience the real-world as a linked series of geographically-situated places.

Cyberspace would become an immersive, credible pseudo 3D reality – the evolved twenty-first-century analogue of the hardware-based virtual reality people experimented with fifteen years or so ago . . . the tethered-to-machinery virtual reality depicted in 1990s movies like The Lawnmower Man and Disclosure. That older kind of virtual reality was seen as something you used for a particular purpose – to play a game or access data.

The new 3D Internet featuring interoperable avatars is intended to make cyberspace a more immersive experience.

Our approach to privacy law in the United States is often described as sectoral; that is, instead of having general, all-encompassing privacy laws, we have discrete privacy laws each of which targets a distinct area of our lives. So we have medical privacy laws and law enforcement search privacy laws and wiretap privacy laws and so on.

I think our experience of cyberspace is currently sectoral, in this same sense: I go on, I check my email, I check some news sites, I might do a little shopping on some shopping sites, then I might watch some videos or check out some music or drop into Second Life to socialize a bit or schedule flights or do any of the many, many other things we all do online. I think my doing this is a sectoral activity because I move from discrete website to discrete website. I may log in multiple times, using different login information. I go to each site for a specific, distinct purpose.

I think, then, that the custom of referring to websites as “web pages” accurately captures the way I currently experience cyberspace. It really is much more analogous to browsing the pages in a book than it is to how we experience life in the real, physical world. In the real-world I do go to specific places (work, grocery, dry cleaner’s, restaurants, hotels, dog groomer, book store, mall, etc.) for distinct purposes. But I’m “in” the real-world the whole time. I don’t need to reconfigure my reality to move from discrete place to discrete place; the experience is seamless.

So that seems to be the goal behind the development of the 3D Internet. It seems to be intended to promote a more immersive, holistic experience of cyberspace while, at the same time, making it easier and more realistic to conduct work, commerce, education and other activities online. Avatars, currency and the other incidents of our online lives would all become seamlessly portable.

Personally, I really like the idea. I think it would make cyberspace much easier and much more interesting to use. It would also really give us the sense of “being” in another place when we’re online.

When I first heard about avatar interoperability, I wondered about what I guess you’d call the cultural compatibility of migrating avatars. It seemed, for example, incongruous to imagine a World of Warcraft warrior coming into Second Life or vice versa (Second Life winged sprite goes into WoW). And that’s just one example. I had basically the same reaction when I thought of other kinds of avatars leaving their respective environments and entering new and culturally very different worlds.

But then, as I thought about it, I realized that’s really what we do in the real world. We don’t have the radical differences in physical appearance and abilities (or inclinations) you see among avatars, but we definitely have distinct cultural differences. We may still have a way to go in some real-world instances (I’m personally not keen on going to Saudi Arabia, for example), but we’ve come a long way from where we were centuries ago when xenophobia was the norm.

And the ostensible cultural (and physical) differences among avatars will presumably be mitigated by the fact that an avatar is only a guise a human being uses to interact online. Since it seems humanity as a whole is becoming increasingly cosmopolitan and tolerant, the presumably superficial, virtual differences among avatars may not generate notable cultural incompatibilities as they move into the galaxy of interconnected virtual worlds.

I also wondered about what this might mean for law online. Currently, as you may know, the general operating assumption is that each virtual world polices itself. So Linden Lab deals with crimes and other “legal” issues in Second Life, and the other virtual worlds do the same. There have been, as I’ve noted in other posts, some attempts to apply real world laws to conduct occurring in virtual worlds. Earlier this year, the Belgian police investigated a claim of virtual rape on Second Life; I don’t know what happened with the investigation. As I’ve written elsewhere, U.S. law currently would not consider whatever occurs online to be a type of rape, because U.S. law defines rape as a purely real-world physical assault. Online rape cannot qualify as a physical assault and therefore cannot be prosecuted under U.S. law, even though it can inflict emotional injury. U.S. criminal law, anyway, does not really address emotional injury (outside harassment and stalking).

That, though, is a bit of a digression. My general point is that so far law generally treats online communities as separate, self-governing places. Second Life and other virtual worlds functionally have a status analogous to that of the eighteenth- and nineteenth-century colonies operated by commercial entities like the Hudson’s Bay Company or the British East India Company. That is, they are a “place” the population of which is under the governing control of a private commercial entity. As I and others have written, this makes a great deal of sense as long as each of these virtual worlds remains a separate, impermeable entity. As long as each remains a discrete entity, and as long as we only inhabit cyberspace by choice, we in effect consent to have the company that owns and operates a virtual world settle disputes and otherwise act as law-maker and law-enforcer in that virtual realm.

Things may become more complicated once avatars have the ability to migrate out of their virtual worlds of origin and into other virtual worlds and into a general cyberspace commons. We will have to decide if we want to continue the private, sectoral approach to law we now use for the inhabitants of discrete virtual worlds (so that, for example, if my Second Life avatar went into WoW she would become subject to the laws of WoW) or change that approach somehow.

It seems to me the most reasonable approach, at least until we have enough experience with this evolved 3D Internet to come up with a better alternative, is to continue to treat discrete virtual worlds as individual countries, each of which has its own law. This works quite well in our real, physical world: When I go to Italy, I submit myself to Italian law; when I go to Brazil I submit myself to Brazilian law and so on. At some point we might decide to adopt a more universal, more homogeneous set of laws that would generally govern conduct in cyberspace. Individual enclaves could then enforce special, supplemental laws to protect interests they deemed uniquely important.

One of my cyberspace law students did a presentation in class this week in which she told us about the British law firms that have opened up offices and, I believe, practices in Second Life. That may be just the beginning. Virtual law may become a routine feature of the 3D Internet.

I just received this email (from a source that will remain anonymous):

Good afternoon,

I have a wireless router (WiFi) which, for technical reasons I won’t bore you with, has no encryption. If a third party were to access the internet via my unencrypted router and then commit an illegal act, could I be held liable? I’m not sure if this question in any way broaches your area of expertise and if not please excuse the intrusion. I’ve asked some technical colleagues but they were not able to answer.

It’s a very good question. I’ve actually argued in several law review articles that people who do not secure their systems, wireless or otherwise, should be held liable – to some extent – when criminals use the networks they’ve left open to victimize others.

In those articles, as in nearly everything I do, I was analyzing the permissibility of using criminal liability to encourage people to secure their computer systems . . . which I think is the best way to respond to cybercrime. Since I’m not sure if the person who sent me this email is asking about criminal liability, about civil liability or about both, I’ll talk about the potential for both, but focus primarily on criminal liability.

There are essentially two ways in which one person (John Doe) can be held liable for crimes committed solely by another person – Jane Smith, we’ll say (with my apologies to any and all Jane Smiths who read this). One is that there is a specific provision of the law – a statute or ordinance or other legal rule – which holds someone in Doe’s position (operating an unsecured wireless network, say) liable for crimes Smith commits.

I’m not aware of any laws that currently hold people who run unsecured wireless networks liable for crimes the commission of which involves exploiting the insecurity of those networks. I seem to recall reading an article a while back about a town that had adopted an ordinance banning the operation of unsecured wireless networks, but I can’t find the article now. If such an ordinance, or such a law, existed, it would in effect create a free-standing criminal offense. That is, it would make it a crime (presumably a small crime, a misdemeanor, say) to operate an unsecured network.

That type of law goes to imposing liability on the person who violated it, which, in our hypothetical, would be John Doe, who left his wireless network unsecured. That approach, of course, simply holds Doe liable for what Doe, himself, did (or did not do). It doesn’t hold him criminally liable for what someone else was able to do because he did not secure his wireless network. And unless such a law explicitly created a civil cause of action for people victimized by cybercriminals, it would not expose Doe to civil liability, either. Some statutes, like the federal RICO statute, do create a civil cause of action for people who’ve been victimized by a crime (racketeering, under the RICO provision), but absent some specific provision to the contrary, statutes like this only let a person who’s been victimized sue the individual(s) who actually victimized them (Jane Smith).

As I wrote in an earlier post, there are essentially two ways one person (John Doe) can be held liable for the crimes another person (Jane Smith) commits: one is accomplice liability and the other is a type of co-conspirator liability. While these principles are used primarily to impose criminal liability, they could probably (note the qualifier) be used to impose civil liability under provisions like the RICO statute that let victims sue to recover damages from their victimizers.

So let’s consider whether John Doe could be held liable under either of those principles. Accomplice liability applies to those who “aid and abet” the commission of a crime. So, if I know my next-door neighbor is going to rob the bank where I work and I give him the combination to the bank vault, intending to assist his commission of the robbery, I can be held liable as an accomplice.

The requirements for such liability are, basically, that I (i) did something to assist in or encourage the commission of the crime and (ii) did so with the purpose of promoting or encouraging the commission of the crime. In my example above, I hypothetically provide the aspiring robber with the combination to the bank vault for the express purpose of helping him rob the bank. The law says that when I do this, I become criminally liable for the crime – here, the robbery – he actually commits. And the neat thing about accomplice liability, as far as prosecutors are concerned, is that I in effect step into the shoes of the robber. That is, I can be held criminally liable for aiding the commission of the crime someone else committed in the same way as, and to the same extent as, the one who actually committed it. In this hypothetical, my conduct establishes my liability as an accomplice to the bank robbery, so I can be convicted of bank robbery.

I don’t see how accomplice liability could be used to hold John Doe criminally liable for cybercrimes Jane Smith commits by exploiting his unsecured wireless network. Yes, he did in effect assist – aid and abet – the commission of those cybercrimes by leaving his network unsecured. I am assuming, though, that he did not leave it unsecured in order to assist the commission of those crimes – that, in other words, it was not his purpose to aid and abet them. Courts generally require that one charged as an accomplice have acted with the purpose of promoting the commission of the target crimes (the ones Jane Smith hypothetically commits), though a few have said you can be an accomplice if you knowingly aid and abet a crime.

If we took that approach here, John Doe could be held liable for aiding and abetting Jane Smith’s cybercrimes if he knew she was using his unsecured wireless network and did nothing to prevent that. It would not be enough, for the purpose of imposing accomplice liability, that he knew it was possible someone could use his network to commit cybercrimes; he’d have to know that Jane Smith was using it or was about to use it for that specific purpose. I don’t see that standard applying to our hypothetical John Doe – he was, at most, reckless in leaving the network unsecured, maybe just negligent in doing so. (As I’ve written before, recklessness means you consciously disregard a known risk that cybercriminals will exploit your unsecured network to commit crimes, while negligence means that an average, reasonable person would have known this was a possibility and would have secured the network.)

The other possibility is, as I wrote in that earlier post, what is called Pinkerton liability (because it was first used in a prosecution against a man named Pinkerton). To hold someone liable under this principle, the prosecution must show that the defendant (John Doe, here) entered into a conspiracy with another person (Jane Smith) the purpose of which was the commission of crimes (cybercrimes, here). The rationale for Pinkerton liability is that a criminal conspiracy is a type of contract, and all those who enter into the contract become liable for crimes their fellow co-conspirators commit.

Mr. Pinkerton (Daniel, I believe) was convicted of bootlegging crimes his brother (Walter, I think) committed while Daniel was in jail. The government’s theory was that the brothers had entered into a conspiracy to bootleg before Daniel went to jail, the conspiracy continued while he was in jail, so he was liable for the bootlegging crimes Walter committed. I don’t see how this could apply to our John Doe-Jane Smith hypothetical because there’s absolutely no evidence that Doe entered into a criminal conspiracy with Smith. He presumably doesn’t even know she exists and/or doesn’t know anything about her plans to commit cybercrimes by making use of his conveniently unsecured network.

In my earlier post, which was about a civil lawsuit, I talked about how these principles could, or could not, be used to hold someone civilly liable for crimes. I’ll refer you to that post if you’re interested in that topic.

Bottom line? I suspect (and this is just speculation, not legal advice) that it would be very difficult, if not impossible, to hold someone who left their wireless network unsecured criminally liable if an unknown cybercriminal used the vulnerable network to commit crimes.

According to the opinion, FleetPro Technical Services operated a program with Renault UK that let members of the British Air Line Pilots Association (BALPA) buy new Renaults at a discount. In the ten months the program was in effect, FleetPro sent 217 orders through the system, only 3 of which were submitted by members of BALPA. The opinion says that Russell Thoms, FleetPro’s director and employee, placed the other 214 orders and passed on the discounts to brokers who sold the cars to members of the public.

Renault discovered what had been going on and sued FleetPro and Thoms for fraud. At trial, the defense counsel argued that there was no fraud because there was, in effect, no fraudulent representation made by one human being to another. The court described the relevant facts as follows:

[W]hat happened when orders produced by Mr. Thoms and sent by e-mail as attachments to Mr. Johnstone [the Renault fleet sales executive who handled the orders] were received was that he opened them, printed them off and gave them to Fiona Burrows to input into a computer system information including the BALPA FON [the code used to process orders]. The evidence was that no human mind was brought to bear at the Importer's end on the information put into the computer system by Fiona Burrows. No human being at the Importer consciously received or evaluated the specific piece of information in respect of each relevant order that it was said to fall within the terms of the BALPA Scheme. . . . [T]he last human brain in contact with the claim that a particular order fell within the terms of the BALPA Scheme was that of Fiona Burrows at the Dealer. The point of principle which thus arises is whether it is possible in law to find a person liable in deceit if the fraudulent misrepresentation alleged was made not to a human being, but to a machine.

Renault UK Limited v. FleetPro Technical Services Limited, supra.

Judge Richard Seymour held that it is, in fact, possible to hold someone liable when a fraudulent misrepresentation is made to a machine:

I see no objection . . . to holding that a fraudulent misrepresentation can be made to a machine acting on behalf of the claimant, rather than to an individual, if the machine is set up to process certain information in a particular way in which it would not process information about the material transaction if the correct information were given. For the purposes of the present action, . . . a misrepresentation was made to the Importer when the Importer's computer was told that it should process a particular transaction as one to which the discounts for which the BALPA Scheme provided applied, when that was not in fact correct

Renault UK Limited v. FleetPro Technical Services Limited, supra.

After I read this decision, I did some research to see if I could find any reported American cases addressing the issue. I could not.

I’m not sure why. Maybe the argument simply has not been raised (which, of course, means that it may be, and some U.S. court will have to decide whether to follow this approach or not).

Or maybe the reason it hasn’t come up has to do with the way American statutes, or at least American criminal statutes, go about defining the use of a computer to defraud. Basically, the approach these statutes take is to make it a crime to access a computer “or any part thereof for the purpose of: . . . executing any scheme or artifice to defraud”. Idaho Code § 18-2202. You see very similar language in many state computer crime statutes, and the basic federal computer crime statute has language that is analogous. See 18 U.S. Code § 1030(a)(4) (crime to knowingly “and with intent to defraud” access a computer without authorization or by exceeding authorized access and thereby further “the intended fraud”).

So maybe the issue of defrauding a machine hasn’t arisen in U.S. criminal law because our statutes are essentially tool statutes. That is, they criminalize using a computer as a tool to execute a “scheme or artifice to defraud.”

In the U.K. case, Renault was claiming that Thoms had defrauded it by submitting false purchase orders for discounted cars. The defense’s position was that to recover Renault would have to show that Thoms had intentionally made a false statement of fact directly to Renault, intending that Renault rely on the representation to its detriment. And that is the classic dynamic of fraud. Historically, fraudsters lied directly to their victims to induce them to part with money or other valuables. That is why, as I’ve mentioned before, fraud was originally known as “larceny by trick:” The fraudster in effect stole property from the victim by convincing him to hand it over to the fraudster in the belief he would profit by doing so. Here, the distortion of fact is direct and immediate; the victim hands over the property because he believes what the perpetrator has told (or written) him.

Many American fraud statutes predicate their definition of fraud crimes on executing a “scheme or artifice to defraud,” language that comes from the federal mail fraud statute, 18 U.S. Code § 1341. Section 1341, which dates back to 1872, makes it a crime to send anything through the mail for the purpose of executing a “scheme or artifice to defraud.” It was enacted in response to activity that is functionally analogous to online fraud: After the Civil War, con artists were using the U.S. mails to defraud many people remotely and anonymously. The sponsor of the legislation said it was needed “to prevent the frauds which are mostly gotten up in the large cities . . . by thieves, forgers, and rapscallions generally, for the purpose of deceiving and fleecing the innocent people in the country.” McNally v. United States, 483 U.S. 350 (1987). So § 1341 is really a fraud statute; it merely utilizes the “use of the mail to execute a scheme or artifice to defraud” language as a way to let the federal government step in and prosecute people who are committing what is really a garden variety state crime: fraud.

But as I said, many modern state computer crime statutes also use the “scheme or artifice to defraud” terminology. To some extent, that may simply be an artifact, a result of the influence federal criminal law has on the states; we have grown accustomed to phrasing fraud provisions in terms of executing schemes or artifices to defraud, so that language migrated to computer crime statutes.

Does that language eliminate the problem the U.K. court dealt with? That is, by predicating the crime on using a computer to execute a scheme to defraud, rather than on making false representations directly to another person to induce them to part with their property, does it eliminate the need to consider whether it is possible to defraud a machine?

On the one hand, it might. Under computer crime statutes modeled upon the mail fraud statute, the crime is committed as soon as the perpetrator makes any use of a computer for the purposes of completing a scheme to defraud a human being. Courts have long held that you can be charged with violating the federal mail fraud statute as soon as you deposit fraudulent material into the mail; it’s not necessary that the material actually have reached the victim, been read by the victim and induced the victim to give the perpetrator her property.

I think the same approach applies to computer crime statutes based on the mail fraud statute: the computer fraud offense is committed as soon as the perpetrator makes use of a computer with the intent of furthering his goal of defrauding a human being out of their property. Under that approach, it doesn’t really matter whether a person was actually defrauded, or whether a computer was defrauded. It’s enough that the perpetrator used a computer in an effort to advance his goal of defrauding someone.

I suspect this accounts for the fact that I, anyway, can’t find any U.S. cases addressing the issue of whether or not it is possible to defraud a computer. It’s an issue that may not be relevant in criminal fraud cases. It may, however, arise in civil fraud cases where, I believe, you would actually have to prove that “someone” was defrauded out of their property by the defendant’s actions.