Monday, June 30, 2008

This is about a case the California Court of Appeals recently decided: People v. Wilkinson, 2008 WL 2441101 (California Court of Appeals, June 18, 2008).

The issue in the case is a Fourth Amendment issue, which I’ll get to. Mostly, I want to explain what happened to get Joseph Wilkinson charged with “unauthorized access and taking of computer data”. People v. Wilkinson, supra.

In September 2005, [Wilkinson] and Schultze, who had been friends for several years, were sharing an apartment. . . . Each . . . had his or her own room. In her room, Schultze had a computer with a webcam . . . which she used primarily for video conversations over the Internet. . . . [Her boyfriend] Sadler was either `spending a lot of time’ at the apartment or had moved into Schultze's room.

On September 4, Sadler discovered a video file on Schultze's computer that showed [Wilkinson] in Schultze's room. Suspicious that [he] was using the webcam to record them, Sadler conducted an investigation to determine `if things were being changed on the computer while [he and Schultze] were away.’ Over the next several days, he determined that someone was deleting video files on the computer that the webcam had recorded and moving the webcam so that it pointed at the bed.

On . . . September 7, 2005 . . . Officer James Walker responded to a complaint by Sadler and Schultze that [Wilkinson] was using a webcam to record them. . . . The officers then went inside to speak to [Wilkinson]. Officer Walker asked [him] if he could look around [his] room, but [Wilkinson] refused to give his consent.

After speaking with [Wilkinson], the officers took him to their patrol car. Walker told Sadler and Schultze that he did not have probable cause to arrest, but he was "willing to accept their citizens arrest," and . . . `there would probably be some follow-up. . . .' Sadler `was upset . . . [H]e thought there was gonna be more of an investigation. . . .' Walker explained . . . he could not search [Wilkinson]'s room because [he] had refused. . . . Sadler asked if he could go into [Wilkinson]'s room. Walker told Sadler, `you can do whatever you want. It's your apartment. . . . But . . . you cannot act as an agent of my authority. I cannot ask you to go into the room, nor can you go into the room believing that you're doing so for myself.' Walker also told Sadler . . . [Wilkinson] had asked that they not go into his room. . . . Officer Walker took [Wilkinson] to the jail for booking. . . .

After the[y] left, Sadler and Schultze discussed what they should do. . . . Sadler decided to go into [Wilkinson]'s room to look for more evidence. He entered [his] room and picked up about 15 to 20 compact discs he found strewn around the room. . . . He took them to Schultze's room where he viewed three to five of them on Schultze's computer. . . . [H]e found images of . . . himself and Schultze `hanging out' . . . and `being naked,' with some sexual content but no images of them having sexual intercourse. He went back to [Wilkinson]'s room . . . and took all the writable compact discs he could find.

Sadler returned to Schultze's computer and viewed about five to seven more of the discs. . . . Meanwhile, at the police station, Walker's sergeant “overruled” [Wilkinson]'s arrest. Walker brought [Wilkinson] home and left him in the patrol car while he explained to Sadler why [he] was no longer under arrest. Sadler told Walker he had found evidence of [Wilkinson] having taken images from Schultze's computer, put them on compact discs, and taken them back to his room. . . . Walker and Sadler went to Schultze's room, where Sadler showed the officer images on two of the compact discs he . . . viewed. Walker told Sadler he would need to see more explicit images of Sadler and Schultze having sexual intercourse, and Sadler looked through . . . more discs to find the images the officer wanted.

Walker took 36 compact discs Sadler had removed from [Wilkinson]'s room. At the police station, Detective Jimmy Vigon viewed images from `several' discs, which consisted of Sadler and Schultze `sitting around watching TV to actually having sex.' After viewing [them], he interviewed [Wilkinson] . . . [who] admitted obtaining the images from Schultze's computer [and] signed a consent form allowing the police to search his room.

People v. Wilkinson, supra.

After being charged with unauthorized access and "taking of computer data," Wilkinson moved to suppress the evidence. He argued that Sadler's taking the disks was an illegal, warrantless search because Sadler was acting as an agent of the police when he did so. Wilkinson also argued that Walker's warrantless viewing of the images was an illegal search. People v. Wilkinson, supra.

As to the first issue, as I've noted before, constitutional provisions like the 4th Amendment only protect us from "unreasonable" government action. They don't protect us from what other private citizens do. So if I illegally search your stuff (you didn't give me permission), that may be a trespass and give rise to other civil causes of action, but it doesn't violate the 4th Amendment. And that's what the court of appeals found as to Sadler's searching Wilkinson's room: Sadler was acting on his own, not as an agent of the police. To be an agent of the police, (i) you have to be acting with the intent to benefit the police (which Sadler was) AND (ii) the police have to have encouraged you to do that, which Officer Walker did not do. The court found Sadler searched on his own, so there was no 4th Amendment problem with his finding and looking at the disks.

The court of appeals also held there was no 4th Amendment violation in Walker’s looking at images on the disks Sadler had already seen. It’s a basic premise of 4th Amendment analysis that, as I noted above, private searches don’t violate the 4th Amendment, which means that police don’t violate the 4th Amendment, either, if they simply look at what a private party has already examined. People v. Wilkinson, supra.

There's a case, United States v. Jacobsen, 466 U.S. 109 (1984), in which a FedEx package started leaking white powder; FedEx employees opened the package to see what was going on and found what they thought was cocaine. There was no 4th Amendment problem because they were acting as private citizens. They called the police, who looked at what the employees had seen and concluded it was cocaine. The person who sent the package was charged with distributing cocaine and moved to suppress, claiming this was an illegal search. The court upheld the search because (i) the private citizens didn't violate the 4th Amendment (they can't, unless and until they become agents of the police), and (ii) the police didn't violate it either because they just looked at what the private citizens had already found.

This court of appeals reached the same conclusion as to some of Officer Walker’s conduct. When he simply looked at disks Sadler had already viewed, the rationale I outlined above applied and there was no 4th Amendment problem. The problem came when Walker told Sadler he would need more explicit images of sexual intercourse, and Sadler went back to find them. So the court of appeals held that Walker’s viewing of these images was an illegal search under the 4th Amendment (because he didn’t just stay within the scope of what Sadler had already seen, acting on his own). People v. Wilkinson, supra.

The court also found there was another illegal search: Detective Vigon looked at images on "several" of the disks taken from Wilkinson's room. The problem was that there was

no evidence in the record . . . as to . . . what discs he viewed or whether those discs were ones that Sadler had already viewed in his private search. As a result, it is impossible to determine whether Detective Vigon's viewing of the . . . discs exceeded the scope of the private search. Because the People failed to show that Detective Vigon's viewing was limited to discs that had been previously viewed during the private search, we conclude that Detective Vigon's viewing of the discs was an illegal search. . . .

People v. Wilkinson, supra. The court of appeals therefore remanded the case to the trial court, instructing it to go through the evidence and figure out precisely what should, and should not, be suppressed.

The 4th Amendment issues are really very straightforward. It’s the facts that are creepy. . . .

Friday, June 27, 2008

You’ve probably seen the news stories about the two Orange County teen-agers who have been charged with crimes – lots of crimes – for hacking into high school computer systems to change grades and cheat on exams.

As the Orange County Register reported last Wednesday, one of the students, 18-year-old Omar Khan, has been charged with 69 felony counts; the charges include altering public records, unauthorized access to computers, fraud, burglary, identity theft and conspiracy. According to the same source, the other student, Tanvir Singh, also 18, has been charged with four counts of conspiracy, burglary, computer fraud and attempted altering of a public record.

As the Orange County Register noted, Khan could be sent to prison for up to 38 years if he were to be convicted on all 69 counts, while Singh faces up to 3 years in prison if he were convicted of the charges against him.

Khan is charged with 34 counts of altering a public record, 11 counts of stealing a public record, 7 counts of unauthorized computer access, 6 burglary counts (he apparently physically broke into the school several times), 4 identity theft counts, 3 counts of altering a book of records, 2 counts of receiving stolen property, 1 count of conspiracy (to do pretty much all of the above) and 1 count of attempting to alter a public record. I obviously can't go through all those charges here.

So I'll focus on Khan because he's the one with the most counts and the most exposure. Here's a sample of the charges, at least the ones that focus on computer-predicated crimes. Count 2 of the Felony Complaint – an unauthorized access count – charges that on or about

and between January 23, 2008 and January 26, 2008, in violation of Section 502(c)(1) of the Penal Code (COMPUTER ACCESS AND FRAUD), a FELONY, OMAR SHAHID KHAN did knowingly and unlawfully access and without permission alter, damage, delete, destroy, and otherwise use data, a computer, a computer system, and a computer network belonging to Tesoro High School and Capistrano Unified School District, in order to devise and execute a scheme and artifice to defraud, deceive, and extort, and to wrongfully control and obtain money, property, and data.

Felony Complaint - People of the State of California v. Khan and Singh, Superior Court of California – County of Orange (Case No. 08HF1157). There are other counts like this, though they're each based on different instances of gaining unauthorized access (on different dates). As I read section 502(d)(1) of the California Penal Code, the offense charged above "is punishable by a fine not exceeding ten thousand dollars" or imprisonment "for 16 months, or two or three years, or by both".

Here’s an identity theft count:

On or about April 17, 2008, in violation of Section 530.5(a) of the Penal Code (IDENTITY THEFT), a FELONY, OMAR SHAHID KHAN did willfully and unlawfully obtain personal identifying information, as defined in Penal Code section 530.55 (b), of Tesoro High School Registrar Valerie D., and did unlawfully use and attempt to use that information for an unlawful purpose, specifically to wit, to access school computer network and grade database program, without the consent of Tesoro High School Registrar Valerie D.

Felony Complaint - People of the State of California v. Khan and Singh, supra. As I read section 530.5(a) of the California Penal Code, the offense charged above is punishable by “a fine, by imprisonment . . . not to exceed one year, or by both”. There are more of these, too; they tend to charge the same conduct as above, but the person whose identity is stolen in other counts is a teacher (different teachers).

And there are a number of counts like this one:

On or about April 17, 2008, in violation of Section 6201 of the Government Code (ALTER AND FALSIFY A PUBLIC RECORD), a FELONY, OMAR SHAHID KHAN did willfully and unlawfully alter and falsify the official permanent Topics in Calculus grade and transcript records of Omar Khan, filed and deposited in the Tesoro High School and Capistrano Unified School District, a public office located in Orange County California.

Felony Complaint - People of the State of California v. Khan and Singh, supra. I’m having a little trouble figuring out the sentence for this one, but I THINK it’s basically the same as the prior count, i.e., a fine and/or imprisonment for not more than a year.

I give you these highlights from what is a really long complaint (69 counts!) simply to illustrate that these guys are charged with some serious crimes. Looking at these charges and the possible sentences that can be imposed for each, I can certainly see why Mr. Khan is facing a possible sentence of 38 years, if he were to be convicted of all of them.

My first reaction on hearing about this was, why a criminal case? Why not handle this within the school system?

I can’t find much in the way of statements from the prosecutors who put the charges together, but I did see that Jim Amormino, of the Orange County Sheriff’s Department (who presumably investigated all this), said criminal charges were justified because this isn’t a “victimless” crime. According to a KTLA report, Amormino said that if Khan had gotten away with changing his grades and succeeded in getting into the university he wanted, some other, truly-deserving student would have lost the place he or she deserved.

I wonder whether this theory is the basis of the fraud allegations in the count I quoted above and others like it. I'm not sure whether the "fraud" is being cast as a fraud on (i) the universities Khan was applying to, (ii) the students he would have been cheating out of a place in one of those universities or (iii) something else.

I can certainly see the school’s, and the community’s, being outraged. It’s a really shabby thing to do . . . and it threatens to undermine faith in the integrity of the whole educational process. I don’t know about high schools, but law schools in general tend to be fraught with rumors about cheating because most of the grades are based on a single final exam. I honestly don’t think there is much cheating in any law school, but the belief that it goes on is something law school administrators have to deal with; they have to take pains to ensure that students believe in the honesty and integrity of the exam processes. I suspect high schools have to do something similar . . . especially as high school students become more adept at using computers.

Even if I can buy the need for a criminal prosecution – the need to “make an example” of these guys in an attempt to discourage other, similarly-talented and similarly-situated students from following their example – I really don’t see the need for so many charges. I may be wrong, but I can’t imagine this thing will go to trial. I have to assume there’ll be some kind of plea bargain (especially as the allegations in the complaint make it sound like the prosecution has its evidence down solid) . . . but I also suspect the prosecutor is going to want to see jail time. As one of the articles I read said, that’s pretty much going to kill any hope these two have of going to a good school (any school) and it’s probably not going to do much for their future job prospects, either.

Wednesday, June 25, 2008

In an opinion issued a few months ago, a federal district court in Nevada ruled on a defendant’s motion to suppress.

The motion challenged whether evidence that an Internet Protocol (IP) address has allegedly been used to access and download child pornography, combined with evidence regarding the IP address subscriber's identity and residential address, is sufficient to provide probable cause to believe that evidence of child pornography will be found at the subscriber's residence. U.S. v. Carter, 2008 WL 623600 (U.S. District Court – District of Nevada 2008).

Mr. Carter was charged with receiving and possessing child pornography in violation of federal law. The charges were based on evidence found during a search of his residence. U.S. v. Carter, supra. The search was conducted with a search warrant, the probable cause for which was based on an affidavit from an FBI agent, Agent Flaherty. U.S. v. Carter, supra.

In the federal system, as in, I believe, all states, an officer seeking a search warrant from a magistrate submits an application for the warrant and, to establish probable cause for the issuance of the warrant, usually submits an affidavit. Federal Rule of Criminal Procedure 41(d)(1), for example, states that "[a]fter receiving an affidavit or other information, a magistrate judge . . . must issue the warrant if there is probable cause to search for and seize . . . property".

The affidavit Agent Flaherty submitted recounted an investigation another FBI agent, Agent Luders, had conducted "of the Ranchi message board which is a hard core child pornography message board . . . in Japan." U.S. v. Carter, supra. Agent Luders was able to download "video and image files" from the board that contained child pornography, but because Japan's "child pornography laws are different than those of the United States" he was not able to get a search warrant for user logs "that would have enabled the Government to identify users who" had downloaded child pornography from the Ranchi site. U.S. v. Carter, supra.

To get around that, Agent Luders logged into the Ranchi message board and created a posting that advertised a video of a four-year-old girl engaging in sexual activity with an adult male. U.S. v. Carter, supra. Forty minutes later, he posted another message, which stated that he had inadvertently posted the wrong video the first time; this message sent Ranchi patrons to another website to download the “correct” video. U.S. v. Carter, supra. That site “also returned to the covert FBI computer in San Jose, California which captured the . . . IP addresses of the users who accessed the website . . . and attempted to download the advertised video.” U.S. v. Carter, supra.

According to Agent Flaherty’s affidavit, several hundred IP addresses tried to download the video, one of which was IP address 68.108.184.145. U.S. v. Carter, supra.

The Affidavit described the steps taken . . . to identify the user of 68.108.184.145. A search of the publicly available website arin.net revealed . . . [it] was controlled by Cox Communications. . . . [T]he Government served an administrative subpoena on Cox Communications to identify the . . . subscriber to IP address 68.108.184.145 on [the date it was used in an attempt to download the video file]. . . . Cox . . . responded by identifying Luana Carter, . . . Las Vegas, Nevada . . . as the subscriber to . . . 68.108.184.145. . . . On January 17, 2007, the Government conducted a search of the public records data base LexisNexis which indicated that Luana Carter resided at the above listed address and that Defendant Travis Carter was a household member at that address. . . . On January 17, 2007, the Government also checked Nevada Department of Motor Vehicle records which revealed a current driver's license for Luana Carter, with the same social security number, date of birth and physical address obtained through LexisNexis. On February 8, 2007, the Government also served an administrative subpoena on Nevada Power Company for subscriber information for [the address]. Nevada Power Company's response listed Luana Carter as having an active account at that address since June 22, 2001. . . .

U.S. v. Carter, supra.
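The first link in that chain, mapping the IP address to the provider that controls it, comes from ARIN's WHOIS records. As a purely illustrative sketch (the CIDR allocation block below is invented for the example, not Cox's actual allocation; real data would come from a WHOIS query to whois.arin.net), here is how an address can be checked against a provider's allocated blocks:

```python
import ipaddress

def ip_in_allocation(ip: str, cidr_blocks):
    """Return the first allocation block (if any) that contains the IP."""
    addr = ipaddress.ip_address(ip)
    for block in cidr_blocks:
        if addr in ipaddress.ip_network(block):
            return block
    return None

# Hypothetical allocation blocks, for illustration only; an actual
# investigation would pull these from an ARIN WHOIS response.
hypothetical_cox_blocks = ["68.104.0.0/13"]
print(ip_in_allocation("68.108.184.145", hypothetical_cox_blocks))  # → 68.104.0.0/13
```

Once the provider is identified this way, the subscriber's name and address (the next link in the affidavit's chain) can only come from the provider itself, which is why the Government needed the administrative subpoena to Cox.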

Agent Flaherty then surveilled the address and observed a vehicle registered to Travis Carter parked in front of it. At that point Agent Flaherty sought a search warrant:

Because the IP address returned to the Internet account of Luana Carter, whose address was [that identified above] and there was still an active account in her name for that address on the date of the Affidavit, Agent Flaherty . . . stated that she believed evidence of child pornography crimes would be found at that residence. . . . Magistrate Judge Leavitt issued a search warrant to search the premises, including computers and other data storage devices for evidence of child pornography.

U.S. v. Carter, supra. The agents seized a computer from Travis Carter’s bedroom, and found “thousands of child pornography images” on it. U.S. v. Carter, supra.

As noted above, Mr. Carter moved to suppress the evidence, arguing that the warrant was invalid because it was not based on probable cause. He claimed Agent Flaherty’s

Affidavit was misleading because it failed to inform the Magistrate Judge of material facts regarding Internet access through an Internet services provider such as Cox Communications and how IP addresses function. Defendant argues that if such information had been included in the affidavit, probable cause would have been lacking.

U.S. v. Carter, supra.

In support of his motion, Mr. Carter submitted an affidavit from an expert who, after explaining how Internet access works and how IP addresses can be spoofed (or faked), concluded that there are

many problems with using an IP address to decide the location of a computer allegedly using an IP address on the Internet. The IP address can be `spoofed.’ A single IP address can be used by multiple computers at multiple locations through a wireless router. The MAC address of a cable modem can be spoofed to allow access to another's Internet connection. A neighborhood with several houses can share one Internet connection and therefore have the same IP address.

U.S. v. Carter, supra.
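The expert's point about a single IP address serving multiple computers follows from how a home router does network address translation: every device behind the router (including a neighbor on an unsecured wireless connection) emerges onto the Internet under the one public address the ISP assigned. A minimal, purely illustrative sketch (the addresses, ports and class names here are invented for the example):

```python
# Toy model of a NAT translation table: the router rewrites each
# private (address, port) pair to the single public address with a
# fresh port, so distinct machines all appear as one IP externally.
class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}  # (private_ip, private_port) -> public_port

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

nat = NatTable("68.108.184.145")
# Three different machines on the home network, one shared public IP.
for private in [("192.168.1.10", 51000),
                ("192.168.1.11", 51000),
                ("192.168.1.12", 43210)]:
    print(nat.translate(*private))
```

This is why the subpoena response identifies only the subscriber whose account the public address was billed to, not which machine, or whose machine, actually made a given connection.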

He lost. The district court followed the reasoning of the U.S. Court of Appeals for the Fifth Circuit in U.S. v. Perez, 484 F.3d 735 (5th Cir. 2007). The argument in that case was essentially identical to Mr. Carter's argument. It too relied on a claim of IP spoofing to argue that a warrant, which resulted in officers' finding child pornography, was not based on probable cause. Here is what the Fifth Circuit said, in part:

[T]here was a substantial basis to conclude that evidence of criminal activity would be found at 7608 Scenic Brook Drive. The affidavit . . . included the information that the child pornography viewed by the witness in New York had been transmitted over the IP address 24.27.21.6, and that this IP address was assigned to Javier Perez, residing at 7608 Scenic Brook Drive. . . . Perez argues that the association of an IP address with a physical address does not give rise to probable cause to search that address. He argues that if he `used an unsecure wireless connection, then neighbors would have been able to easily use [Perez's] internet access to make the transmissions.' But though it was possible that the transmissions originated outside of the residence to which the IP address was assigned, it remained likely that the source of the transmissions was inside that residence. . . . `[P]robable cause does not require proof beyond a reasonable doubt.' [U.S. v.] Brown, 941 F.2d 1300, 1302 (5th Cir. 1991).

The Carter court therefore held that "even if the information set forth in" the testimony of Mr. Carter's experts "had been included in Agent Flaherty's affidavit, there would still have remained a likelihood or fair probability that the transmission emanated from the subscriber's place of residence and that evidence of child pornography would be found at that location." U.S. v. Carter, supra. It denied his motion to suppress the evidence. U.S. v. Carter, supra.

The IP spoofing argument is like the Trojan horse defense in that it, too, tries to claim that someone else committed the crime . . . the SODDI, or “some other dude did it” defense. I can see why these courts reached the conclusion they did, but it seems to me that the IP spoofing defense, like the Trojan horse defense, can be a valid defense (and/or a valid basis for suppressing evidence) in certain cases.

Since a SODDI defense can also be used at trial, I assume the IP spoofing defense can, as well. The only reported cases I find, so far, that refer to it all involve motions to suppress evidence.

Monday, June 23, 2008

I recently ran across an article that raised some of the same issues I’ve been thinking about with regard to charging minors with possessing and disseminating child pornography. You can find that article here.

Four years ago, a 15-year-old Pennsylvania girl who allegedly posted “photographs of herself in various states of undress and performing a variety of sexual acts” online was charged with sexual abuse of children, possession of child pornography and distributing child pornography. Teen Girl Charged with Posting Nude Photos on Internet, USA Today (March 29, 2004). I’ve read about similar cases being filed elsewhere in the U.S. and in other countries, as well.

Now, as you may know, the problem is being compounded by cell phones. According to the article I mentioned earlier, this past May a 17-year-old Wisconsin boy was charged with possessing child pornography and sexual exploitation of a child after he posted "naked pictures" of his 16-year-old girlfriend "from his cell phone onto MySpace." And you may have seen the stories that have been published recently noting how common it is for teen-agers to use their cell phones to send nude pictures of themselves to other teen-agers.

The problem, as that article I mention above notes, is that we don’t have a loophole, an exception for minors who create and disseminate what is literally child pornography. It quotes a Pittsburgh police detective who notes, quite correctly, that creating and disseminating child pornography is a crime, and the law “`doesn’t say anything about the age of the person who does it.’”

Should it? That’s something I’ve been thinking about for a while, and it seems to me there are two aspects of this issue, neither of which has been addressed by our law.

Before I get to the two issues, let me briefly review the nature of the "crime" we're talking about here. As I explained in an earlier post, child pornography is visual material (e.g., photos, videos) the contents of which would not be criminalized if they depicted adults, instead of minors. The Supreme Court held more than twenty-five years ago, in New York v. Ferber, 458 U.S. 747 (1982), that child pornography can be criminalized even though it is not obscene (obscenity is criminalized because of its content, even though it involves adults) because the production of the material victimizes children.

According to the Court, child pornography laws target two "harms": one is the physical and emotional abuse children suffer in the creation of child pornography; the other is the emotional injury they suffer as the material, which records their victimization, continues to circulate. The legal justification for criminalizing child pornography, then, is that its creation "harms" children.

That brings us to the two aspects of the issue I noted above: One is the situation in which child pornography is created by a child (production of child pornography) who then distributes it (dissemination of child pornography). Production and dissemination of child pornography are both crimes because of the rationale I noted above, i.e., when adults use children to create the stuff, and then disseminate it, the children are victimized.

The issue that I think is being raised now is whether that is true when it is a child who creates and disseminates material that is, literally, child pornography. The argument here is, or would be, that there is no “victim” because the child consensually creates the material and then distributes it to others. If there is no victim, the argument goes (or would go), then there is no need to bring charges and, indeed, no one to be charged.

If a prosecutor were so inclined, he or she could respond to that argument as follows: the child does not have the ability to consent to the creation of child pornography. We have the crime of statutory rape because the law does not consider children under the age of 18 (the age of consent is lower in some jurisdictions) mature enough to consent to sexual relations. Statutory rape, then, is sexual intercourse between two people, one of whom is over the age of consent (19, say) and the other of whom is not (17, say, in a jurisdiction where the age of consent is 18). A prosecutor could use this analogy to say that, by extrapolation, a child cannot consent to . . . what? . . . making child pornography?

I don’t think that counterargument works. I, personally, think our statutory rape laws are out of whack, an artifact of a different time and a different culture. But we have them, so I’ll assume they continue to exist and continue to be accepted as valid. The premise of statutory rape statutes is that minors (those under the age of consent) as a category do not have capacity to consent to sex. We therefore protect all of those in that category by presuming incapacity and prosecuting those who ignore the presumptive incapacity. We in a sense assume victimization here; that is, we in a sense assume that the person over the age of consent takes advantage of the younger partner (which is where I begin to have reservations, personally).

In the instances where a minor produces child pornography purely on their own (I’m not talking about instances in which an adult with whom they are chatting online persuades them to do so), we do not have that presumed victimizer. We have a child committing a crime against herself or against himself, which seems absurd.

I don’t know of any legal principle that says you can’t commit a crime against yourself. Suicide used to be a crime, so they used to prosecute people who tried but failed to kill themselves (how insane is that?). In the twentieth century, though, our society decided that was a really stupid way to approach things, and so decriminalized suicide. That might be somewhat relevant here. The other analogy that comes to my mind is a provision in the Model Penal Code which, as I’ve said before, is a template of criminal law that has influenced U.S. criminal law at the state, and even the federal, levels.

In its provisions on accomplice liability, the Model Penal Code says the victim can’t be an accomplice. An accomplice is someone whose conduct facilitates the commission of a crime; so if you tell me you want to rob a liquor store and I give you a gun you can use to do so, I’m an accomplice to your robbing the store. Even though I wasn’t there when it happened, I facilitated it and so I become liable for the crime as if I had committed it myself; as I tell my students, an accomplice stands in the shoes of the perpetrator, has the same criminal exposure as the one who carries out the crime.

The drafters of the Model Penal Code said that someone who is a victim of a crime is not an accomplice. So someone who is raped is not an accomplice of the rapist; someone who is robbed is not an accomplice of the robber, and so on. It seems to me one might argue by extrapolation that if a victim can’t be an accomplice, then they certainly shouldn’t be held liable as a perpetrator.

Is there a victim when a minor features himself or herself in nude or sexually explicit photos and puts them online? If there isn't, do we have a crime? Should we then create some kind of exclusion of liability for minors who create child pornography featuring their own images? Should we give them a pass or simply reduce the level of criminal liability they face? Or maybe we should come up with an entirely new crime?

I don’t have the answers to any of those questions, but I certainly think we should be asking them.

Before I end this post, I want to note the other aspect of this issue: Last year, an Arizona teen-ager named Matthew Bandy was charged with possessing child pornography in a high-profile case that caught the attention of ABC News, among others. His parents hired a computer forensics person and raised a version of the Trojan horse defense; that plus other circumstances resulted in his eventually pleading guilty to a lesser charge.

When I read about that case, I wondered: Bandy was 16 years old. If a 16-year-old (or a 15-year-old or a 14-year-old) looks at child pornography that involves images of girls not that much younger than he is, is that the same as an adult male’s looking at the material? In other words, does it matter if it’s teen-agers looking at other teen-agers? Does that somehow inflict a lesser “harm” (or no “harm”) than if an adult looks at the stuff? Should we institute some kind of lesser offense for this situation, one that would let the prosecution bring charges but that would not result in the teen-ager’s facing serious jail time and/or the possibility of being labeled as a sex offender?

Saturday, June 21, 2008

Last week, the U.S. Court of Appeals for the Ninth Circuit decided a case that deals with the privacy, or lack of privacy, in text messages sent via a pager. Quon v. Arch Wireless Operating Co., Inc., 2008 WL 2440559 (9th Cir. 2008).

The Ninth Circuit docket number is 07-55282; you can use it to find the opinion here. Click on the “opinions” button you’ll see at the top of the page, left-hand side, and use the docket number or the case name to find the opinion.

Last January I did a post on the district court’s decision in the case, so this is a follow-up to that one. Here’s how the case arose: In 2001, the City of Ontario contracted with Arch Wireless (AW) to provide wireless text-messaging services for the Police Department (OPD), among other city agencies. The OPD received “twenty-two alphanumeric pagers,” one of which it gave to Sergeant Quon, a member of the SWAT team. Quon v. Arch Wireless, supra. Messages sent via the pager went through AW receiving stations to its network, where they went to a server; the server archived a copy of each message and stored it in the system until “the recipient pager” was “ready to receive” the message. Quon v. Arch Wireless, supra.

Neither the City nor the OPD had a policy governing text-messaging via the pagers. The City had a “`general Computer Usage’” policy which stated that (i) personal use of email, networks, etc. was a violation of City policy; (ii) the City reserved the right to monitor use of its computer systems; (iii) users had “no expectation of privacy or confidentiality when using these resources”; and (iv) the use of “inappropriate” or “suggestive” language would “not be tolerated.” Quon v. Arch Wireless, supra. Before the City and OPD got the pagers, Quon had signed an “employee acknowledgment” which essentially reiterated the policy outlined above. Quon v. Arch Wireless, supra.

While the City didn’t have an official pager policy, it had “an informal policy governing their use.” Quon v. Arch Wireless, supra.

Under the City's contract with (AW) each pager was allotted 25,000 characters, after which the City was required to pay overage charges. Lieutenant Duke was `in charge of the purchasing contract and responsible for procuring payment for overages.’ He stated that `[t]he practice was, if there was overage, that the employee would pay for the overage that the City had. . . . [W]e would usually call the employee and say, “Hey, look, you're over X amount of characters. It comes out to X amount of dollars. Can you write me a check for your overage[?]”’

Quon v. Arch Wireless, supra. And that is apparently how things worked. At one point, Duke had a conversation with Quon which, of course, both remembered differently. Duke remembered telling Quon that text messages sent via the pagers could be audited under the City’s public records policy. Quon remembered the conversation this way:

When asked `if he ever recalled a discussion with Lieutenant Duke that if his text-pager went over, his messages would be audited . . . Sergeant Quon said, “No. In fact he . . . said . . . if you don't want us to read it, pay the overage fee.’ “

Quon went over the monthly character limit `three or four times’ and paid the City for the overages. Each time, `Lieutenant Duke would come and tell [him] that [he] owed X amount of dollars because [he] went over [his] allotted characters.’

Quon v. Arch Wireless, supra.
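The informal arrangement Duke describes reduces to simple arithmetic. A minimal sketch (the 25,000-character allotment comes from the contract described in the opinion; the one-cent-per-character rate is purely my hypothetical placeholder, since the opinion doesn’t give the rate):

```python
# Sketch of the informal overage practice the opinion describes.
# The 25,000-character monthly allotment is from the City's contract
# with Arch Wireless; the one-cent-per-character rate is hypothetical.

MONTHLY_ALLOTMENT = 25_000
RATE_CENTS_PER_CHAR = 1  # hypothetical rate

def overage_due_cents(chars_used: int) -> int:
    """Cents an officer would owe the City for one month's usage."""
    overage = max(0, chars_used - MONTHLY_ALLOTMENT)
    return overage * RATE_CENTS_PER_CHAR

# Per the opinion, Quon's August 2002 usage ran 15,158 characters over:
print(overage_due_cents(MONTHLY_ALLOTMENT + 15_158))  # 15158
```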

In August 2002, Quon and another officer exceeded their character limits and Duke let his superiors know he was “tired of being a bill collector.” Quon v. Arch Wireless, supra. The Chief ordered Duke to request transcripts of the messages sent via the pagers to determine if they were “`exclusively work related, thereby requiring an increase in the number of characters officers were permitted.’” Quon v. Arch Wireless, supra.

Duke contacted an AW representative who eventually sent him the transcripts. A review of Quon’s messages showed that he had exceeded his monthly allotment of characters by 15,158 characters “and that many of these messages were personal in nature and were often sexually explicit.” Quon v. Arch Wireless, supra. The Chief referred the matter to the OPD’s internal affairs unit so it could determine whether “`someone was wasting . . . City time not doing work when they should be.’” Quon v. Arch Wireless, supra.

I don’t know what, if anything, happened with the IA referral, but Quon sued AW and the City and the OPD for violating his rights under the 4th Amendment. (He also had a statutory claim, but I’m focusing on the 4th Amendment both because I have limited space and because as far as I’m concerned, constitutional issues always trump.)

To prevail on that argument, he has to show that he had a 4th amendment expectation of privacy in the messages and that the city violated that right. As I explained in an earlier post, to have a 4th amendment expectation of privacy (i) you have to believe that something (like text messages) is private and (ii) society has to agree with you. That is, you have to subjectively believe the thing was private and society (our objective factor) has to agree that yes, you’re right. We as a culture think that thing is private.

The Ninth Circuit found that Quon did have a reasonable expectation of privacy in the text messages: “That (AW) may have been able to access the contents of the messages for its own purposes is irrelevant. . . .[Quon] did not expect that (AW) would monitor [his] text messages, much less turn over the messages to third parties without [his] consent." Quon v. Arch Wireless, supra. It also found that he “reasonably relied on” the informal policy, i.e., the implicit agreement that the OPD would not audit his messages if he paid for overages in his use of characters.

And the Ninth Circuit found that the OPD’s searching of the messages violated the 4th Amendment. The court noted that the OPD did have (essentially) probable cause to check things out to see if Quon was wasting business time on personal matters. But it also found that the OPD could have used other, less intrusive means to check this out “without intruding” on Quon’s 4th Amendment rights.

[T]he (OPD) could have warned Quon that for . . . September he was forbidden from using his pager for personal communications, and that the contents of . . . his messages would be reviewed to ensure the pager was used only for work-related purposes during that time frame. Alternatively, if the (OPD) wanted to review past usage, it could have asked Quon to count the characters himself, or asked him to redact personal messages and grant permission to the (OPD) to review the redacted transcript. . . . These are just a few . . . ways in which the (OPD) could have conducted a search that was reasonable in scope. Instead, (it) opted to review the contents of all the messages, work-related and personal, without the consent of Quon. . . This was excessively intrusive . . .[B]ecause [Quon] had a reasonable expectation of privacy in those messages, the search violated [his] Fourth Amendment rights.

Quon v. Arch Wireless, supra.

So where does that leave us? It leaves Quon with some live claims against AW, the City and the OPD . . . which I assume will be settled.

Where does it leave us in the greater scheme of things, i.e., in terms of text-message (and even email) privacy? I really don’t think it changes things all that much. There are a number of state and federal cases which have held that whether employees have a 4th Amendment right to privacy in communications sent via workplace computers or via workplace-related systems (as in this case) depends on the policies the employer has in place. If an employer has a clearly articulated and widely disseminated policy stating, in essence, “abandon all privacy you who use this system for any type of communication,” then a 4th Amendment claim is pretty much toast. It’s logically difficult to argue that you thought the email you sent from an employer- (or university-) monitored email system was private when the system displayed various warnings of the type I just noted.

I agree with Mr. Wright, who submitted a comment on my earlier Quon post. I think this decision is going to motivate employers (and schools, and probably agencies and any other institution that isn’t already doing so) to put lots and lots of “abandon all privacy” warnings on their systems.
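For what it’s worth, the standard way to put such a warning in front of users on a Unix system is a pre-login banner. A sketch as an OpenSSH configuration fragment (the banner wording below is illustrative only, not vetted legal language):

```
# /etc/issue.net -- illustrative wording only, not vetted legal language
This system is provided for official use. Use of this system constitutes
consent to monitoring, and users have no expectation of privacy in any
communication sent via this system.

# /etc/ssh/sshd_config -- display the notice before every login
Banner /etc/issue.net
```

With the `Banner` directive set, every user sees the notice before authenticating, which is exactly the kind of "clearly articulated and widely disseminated" warning the cases look for.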

Thursday, June 19, 2008

In a post I read recently, the owner of what I understand is a very expensive coffeemaker says he’s discovered that the coffeemaker, which has the capacity “to communicate with the Internet via a PC,” can be hacked.

The author of the post says that the software vulnerabilities in the system (I gather they’re in the system the coffeemaker uses to communicate online, rather than in the coffeemaker itself) would let someone hack the coffeemaker and, say, alter the strength of coffee or tinker with the water settings and “make a puddle” or just break it and force a service call.

The notion of linking the thing to the Internet is apparently to allow it to be serviced remotely. (It is, apparently, a VERY expensive coffeemaker – when my inexpensive coffeemaker has problems, I just replace it.)

The last line of the post is particularly interesting. It says that the problems with the software would let a remote attacker “gain access to the Windows XP system” it’s running on. That could be interesting. Being merely a lawyer and not an expert in technology, I can’t speculate as to precisely what one could accomplish with that, but I assume it could be worth someone’s pursuing.

This post reminded me of what came to mind when I had a new AC/furnace system installed last year. It’s top of the line, very energy efficient . . . and when they installed it, they told me that, if I liked, we could hook a laptop up to it. The furnace company could then use the laptop to monitor the system and, if possible, fix at least some problems remotely. They also told me I could connect to it when I’m traveling and alter the settings remotely, from the road (using, of course, the Internet).

Not having any idea why I’d want to do that, I haven’t gone for that option. The furnace is air-gapped, and as far as I’m concerned, is going to stay that way. When they told me about that option, though, I started thinking of interesting things someone could do if they hacked a furnace. I’m sure you could make things pretty uncomfortable in my house (way too hot, way too cold). I wonder if you could compromise the system sufficiently to do some real damage . . . cause a fire, say?

This concept of putting appliances and home systems online is something I talked a bit about in my last book: Law in an Era of “Smart” Technology. It’s a book about law and how it has dealt with technology essentially since there has been technology, of any kind. The law’s approach to technology, I argue in the book, is to segment technology from other aspects of our life, so we get what are often called “technologically-specific” laws.

That has made sense, as long as “using” technology was a discrete, compartmentalized aspect of our lives. It makes sense, in that world, to have “car” laws – laws that define requirements for being able to operate a motor vehicle (of whatever type) lawfully, laws that define what you can and can’t do with one (e.g., no speeding) and laws that make it a crime to do certain things with them (e.g., drive drunk).

As I argue in the book, though, I think that world is rapidly coming to an end as technology begins to subtly and invisibly permeate all aspects of our lives. As interactive technology -- like this coffeemaker -- becomes an embedded part of our lives, we forget we're "using" technology. It recedes into the background of our consciousness, and that has a number of implications.

Many of those implications are great. I like (kind of) the fact that my smart new furnace nags me when it's time to clean its electronic air filters. Makes me much more conscientious when the thing itself keeps telling me what I need to do. I also like the fact that it does all kinds of neat things that improve my service and cut my bills. I like it when other technologies do things for me. I'm looking forward to more of that.

The major downside, of course, is that as we utilize these technologies but remain unaware of the fact that we are, essentially, opening access portals into our lives, we create all kinds of opportunities for attackers.

A while back, I wrote a post on the Trojan horse defense. It could just as easily be called the “malware defense,” since it lays the blame for computer-facilitated activity on malicious software.

As I explained in that post, the Trojan horse defense came to public notice back in 2003, when UK citizen Aaron Caffrey was prosecuted in Britain for a hack attack that shut down the Port of Houston in the U.S.

Caffrey’s defense was that he had been framed by other hackers, who installed Trojan horse programs on his laptop, used them to seize control of the laptop and launch the attack, thereby making it appear he was the one responsible, and then erased the Trojans so no trace remained on the laptop. Caffrey was acquitted. The jury bought his defense, even though no Trojan horses were found on the laptop (they were, on his theory, self-erasing) and many found the claim incredible.

I don’t know how old the notion of the Trojan horse defense is. I was watching (don’t ask why) a 1989 movie called She-Devil on TV, and was astonished to see that, at one point, a defense lawyer suggests blaming his client’s embezzlement of company funds on a computer virus. The idea has apparently been around for a long time.

The defense has been raised in the U.S. and, to my knowledge, has worked a few times, often by persuading the prosecution to negotiate a plea and a lesser sentence than it might otherwise have pursued. It has, I think, often been raised frivolously, by people who are simply trying to persuade the jury that they didn’t do whatever it is they’re charged with. But there’s a recent case from Boston in which the defense was not only valid, but seems to have prevented a major miscarriage of justice.

The case is the prosecution of Michael Fiola for allegedly having child pornography on his “state-issued laptop.” I won’t go into the facts here. You can read about them in this article: Police Show Kiddie Porn Rap Was Bogus, Boston Herald (June 16, 2008).

I find two things about this case scary. The first is thinking about what might have happened to Mr. Fiola if his lawyers had not been savvy enough to hire a good computer forensics person to investigate the possibility that, indeed, Mr. Fiola was the victim of computer circumstance: a Trojan horse, viruses, a combination of the above, etc. Had they not known to do that, and had they not been able to find a good forensics person, I hate to think what would have happened to an innocent man.

The other thing I find scary is that, unlike some of the cases in which the Trojan horse defense has been raised in the United Kingdom, the sad state of security on the laptop Mr. Fiola was using was not his fault (even though he apparently is not at all adept at using and protecting computers). No, Mr. Fiola got into trouble because of the poor state of security on the laptop his employer (the state of Massachusetts) gave him to use. As Mr. Fiola’s lawyer told the Boston Herald, “`Anybody who has a work laptop, this could happen to.’”

Tuesday, June 17, 2008

As I’ve noted before, the 4th Amendment prohibits “unreasonable” searches and seizures, and a search or seizure will be “reasonable” if it is conducted pursuant to a warrant.

The 4th amendment is interpreted as incorporating a preference for searches that are conducted pursuant to a search warrant, so that’s pretty much the best way for law enforcement officers to ensure that a search is constitutional.

But that does not exhaust the “reasonableness” required for a search. The search must also be “reasonable” in scope, i.e., it has to remain within the scope of what the warrant authorizes officers to search for.

So if police have a warrant to search a home for two stolen large-screen TVs, they can only search (i) in places where the TVs could be and (ii) until they find what they’re looking for. If they look in places where the object of the search – the TVs – could not be, like a dresser drawer, that search is unreasonable and the evidence it turned up will be suppressed. The same thing will happen if the officers keep searching after they find the two TVs the warrant authorized them to search for.

A recent case from the Air Force Court of Criminal Appeals illustrates how important the scope requirement can be. Here are the facts that led to a motion to suppress evidence:

On 12 February 2005, [appellant] attended a party with other airmen, and a game of strip poker ensued. On 25 March . . . Air Force Office of Special Investigations (OSI) received information that an alleged sexual assault had taken place during . . . the . . . party. The appellant was not the suspect. . . [but] . . . OSI discovered that [he] took pictures at the party, which included photographs of partially nude people who attended the party.

OSI agents approached the appellant. . . . [He] told the agents he had saved the party pictures on his laptop, and took the agents to his off-base apartment to show them the pictures. [He] offered to give the agents copies of the images saved on his computer, but he would not consent to turning over his laptop. After reviewing the pictures provided, the OSI, convinced that the computer may contain more pictures than provided by the appellant, sought and received search authorization from the military magistrate for the appellant's off-base quarters. Upon receiving search authorization, the agents went back to the appellant's apartment and seized the laptop and a digital memory card. During the course of the seizure the agents . . . advised the appellant that he had no choice but to provide the computer and memory card because they contained possible evidence. . . .

The following Monday, OSI, realizing that they had executed an off-base search improperly, contacted United States Magistrate Judge SO to obtain a valid search authorization. Judge SO asked if the items had been searched yet and SA AW informed the judge that the items were in a secure area and had not been searched. Judge SO issued the warrant. The search warrant authorized search and seizure for `one Toshiba laptop computer and one digital memory card used to record photographs taken on February 12, 2005.’

U.S. v. Osorio, 2008 WL 2149372 (A.F. Ct. Crim. App. May 9, 2008). The initial search for and seizure of his laptop and memory card was improper because the OSI agents obtained the warrant from a military magistrate, who was not authorized to issue warrants to search premises that were off-base, i.e., not on military property. Since they did not begin searching the seized items until they got a valid search warrant from a U.S. Magistrate, the subsequent search would have been “reasonable.” It seems to me that the seizure of the items was not “reasonable,” and might have been a basis for suppressing what was later found, but that doesn’t seem to have been an issue here.

After the OSI had obtained that valid search warrant, one of its agents, Special Agent “JL” – referred to in the opinion as SA JL – was asked to prepare a forensic mirror image of the hard drive on the laptop so it could be sent to the Defense Computer Forensic Analysis Laboratory for analysis. U.S. v. Osorio, supra. This is where it gets interesting:

[SA JL] was not . . . assigned to the case and was unaware of the . . . scope of the search warrant. . . . She was simply . . . asked to prepare the hard drive for shipment. In order to confirm she had made a correct functioning mirror image of the hard drive . . . SA JL used forensic software to view all the photos on the computer at once as thumbnails. Once she confirmed the mirror image, she had done everything necessary to fulfill her technical task.

Despite having completed her task, SA JL began reviewing the thumbnails, and noticed several . . . nude persons, and decided to open the thumbnails to make sure the pictures were not `contraband.’ Without opening the thumbnails, it was impossible for her to determine the true contents of the pictures. Therefore, she double-clicked on one thumbnail and saw what she believed to be the image of a nude minor. She continued to open thumbnails to see how many similar pictures were on the computer and noticed several more pictures of nude minors. She then searched to see if the pictures were saved to the computer or just stored in temporary internet files, the latter of which could show that the pictures’ existence on the hard drive may not have been intentional. She searched the computer for 20-30 minutes and then informed the OSI agents about the photos depicting nude minors.

U.S. v. Osorio, supra.
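For context, confirming a “correct functioning mirror image” doesn’t require looking at any pictures at all: the standard forensic practice is to compare cryptographic hashes of the original and the copy. A minimal sketch (the function names here are mine, not drawn from any particular forensic tool):

```python
# Verifying a forensic image without viewing any content: hash the source
# and the copy and compare digests. Matching digests confirm a bit-for-bit
# mirror; nothing in this process requires opening any photograph.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file (or raw device image) through SHA-256 in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_faithful_image(original: str, mirror: str) -> bool:
    """True if the mirror is a bit-for-bit copy of the original."""
    return sha256_of(original) == sha256_of(mirror)
```

Had the verification stopped at a digest comparison like this, the scope question in Osorio would never have arisen.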

The agents sent the mirror image to the lab; the lab got a second search warrant authorizing a search of the laptop for child pornography, which they found. Osorio was charged with and convicted of possessing child pornography. U.S. v. Osorio, supra. He appealed, arguing that SA JL’s search of the images on his laptop violated the 4th amendment because it was not within the scope of the search warrant that authorized her having access to the hard drive. Opening the thumbnails was a “search” because, as noted above, she couldn’t tell what they were without doing so. That means her opening them was an incremental, additional intrusion on Osorio’s 4th amendment right to privacy in the contents of his laptop.

So he was raising the scope issue noted above – he was essentially saying, to continue the analogy I used earlier, that SA JL went looking for TVs in dresser drawers, i.e., went way beyond what the warrant authorized her to do.

The Air Force Court of Criminal Appeals agreed with Osorio:

SA JL exceeded the scope of the search warrant the minute she opened the thumbnail to . . . `make sure it was not contraband.’ SA JL admitted on cross-examination that she opened the thumbnail to verify if the picture was child . . . pornography, not to verify it was a mirror image of the other computer or to review a photograph taken on February 12, 2005. Having testified that . . . once she opened the picture directory tree, her job was done, we find that SA JL was not acting within the scope of the warrant at the time of the discovery of the first suspect image.

U.S. v. Osorio.

Since SA JL was not acting within the scope of the warrant, her viewing of the images was an “unreasonable” search that violated the 4th amendment. And since her viewing those images provided the probable cause for the second warrant – the one the lab got before it analyzed the laptop – that warrant was invalid. The Air Force Court of Criminal Appeals held that the

seizure of evidence upon which the charge and conviction was based was a consequence of an unconstitutional general search and the military judge erred by refusing to suppress it. Accordingly, the findings and the sentence are set aside and the charge dismissed.

U.S. v. Osorio.

So, there's an object lesson for law enforcement here: Always, always stay within the scope of your warrant and, when in doubt, get a second warrant that specifically authorizes what you want to do.

There’s probably also some kind of object lesson for people who take photos at strip poker parties, but I’m not sure what it is.

Sunday, June 15, 2008

As I was checking legal databases to see what’s new in cybercrime, I found an opinion involving an insider attack.

In the opinion, the court rejects a defendant’s request to vacate the sentence it imposed after he pled guilty to “unauthorized computer intrusion . . . in violation of” 18 U.S. Code § 1030, the basic federal computer crime statute. Underwood v. U.S., 2008 WL 648459 (U.S. District Court – Western District of Missouri 2008).

The defendant – Henry Curtis Underwood – claimed his sentence should be set aside because of ineffective assistance of counsel.

Mr. Underwood lost, as defendants usually do when they raise this issue. The reason for that, basically, is that in order to prevail a defendant has to show that his attorney’s performance was “constitutionally deficient” and that this deficiency prejudiced the outcome of the case, i.e., resulted in his erroneously pleading guilty in a case like this. Strickland v. Washington, 466 U.S. 668, 687 (1984). The court in this case found that the arguments Mr. Underwood advanced as to why his attorney was ineffective were not well-grounded; one particular point that didn’t help was that at the hearing at which he pled guilty, he said his attorney “had neither done anything that Underwood did not want her to do nor had she failed to do anything [he] asked her to do.” Underwood v. U.S., supra.

This post, though, is not about Mr. Underwood’s trying to get his plea and sentence set aside. I thought the facts in the case were a good example of the kind of damage an “insider” can do. Here’s how a US Department of Justice Press Release described what led to his being charged with unauthorized access in violation of § 1030:

Underwood was employed as the [Northeast Nodaway R-V School District’s] technology coordinator, but had been placed on administrative leave at the time of the offense. Underwood had been convicted of bank robbery in 1995 in federal court in Texas and sentenced to five years and three months in federal prison, but Underwood did not reveal his criminal history in his job application.

In the course of investigating a $200 theft from the Parnell Elementary School in December 2004, a Nodaway County Deputy Sheriff uncovered Underwood's bank robbery conviction. Underwood was placed on administrative leave on Jan. 27, 2005, and the next day sent an instant message to the principal at Parnell Elementary saying that he could not understand why he was accused of taking the missing money. On Saturday, Jan. 29, 2005, while working in her office, the principal was abruptly logged off her school computer and she could not log back on. An investigation revealed that only two accounts were still functioning, the `cunderwood’ account and the `Administrator’ account. All other accounts on the school district's network had been disabled and could not be accessed, and all computer work stations at both Parnell Elementary and Ravenwood High School had been disabled.

At the time Underwood was suspended, school district officials were unaware he had provided himself with remote access to the district's computer network through a Virtual Private Network. Underwood had established a VPN link from his home, using a laptop computer, to the Ravenwood school.

Underwood admitted that he established a remote connection to the district's computer system on Jan. 29, 2005. Underwood used the unauthorized access to initiate a program that locked out or disabled every user of the system with the exception of the account `cunderwood’ and the administrator's account.

According to the Press Release, the lockout was “highly disruptive of the operations of the school district. Full access to the system was not restored until March 2005, and the school district . . . incurred remediation costs in the form of payments to consultants to repair the network and reestablish account access.”
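The investigators’ first question in an incident like this is which accounts can still log in. A toy sketch of that audit (the account-to-status mapping below is hypothetical; a real Windows domain would be queried with its own administrative tools):

```python
# Toy sketch: given a hypothetical mapping of account names to whether
# each account is still enabled, report which accounts can still log in.
# In the Underwood incident, such an audit turned up only two.

def still_enabled(accounts: dict[str, bool]) -> list[str]:
    """Return the names of accounts that remain enabled, sorted."""
    return sorted(name for name, enabled in accounts.items() if enabled)

# Hypothetical snapshot of the district network after the lockout:
network = {
    "principal": False,     # disabled by the lockout program
    "teacher01": False,     # disabled by the lockout program
    "cunderwood": True,     # still functioning
    "Administrator": True,  # still functioning
}
print(still_enabled(network))  # ['Administrator', 'cunderwood']
```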

On November 16, 2005, Underwood was charged with one count of violating 18 U.S. Code § 1030. Press Release, supra. On February 21, 2006, he pled guilty. Press Release, supra. On June 14, 2006, the judge who was assigned the case sentenced him “to one year and six months in federal prison without parole. The court also ordered Underwood to pay $15,600 in restitution to the school district.” Press Release, supra. That seems a reasonable sentence, I’d say, given the standards and factors that go into sentencing in general and sentencing for a § 1030 case.

What I think is notable about this case is that (assuming the facts alleged above are true), here we have a classic “insider” who is able to do a great deal of damage to a computer system. As I’ve noted before, people often tend to equate cybercrime with “outsiders,” with “hackers” (usually disgruntled teenagers) who “break into” computer systems. There are, of course, lots and lots of outsiders who do precisely that, most of whom are not teenagers, disgruntled or gruntled.

What many people tend to overlook, especially in educational institutions, small businesses and other environments that may not have had occasion to consider this problem, is the threat an unhappy employee, or a contractor, can pose. (A few years ago I spoke to a group of lawyers and judges. After I had described various kinds of cybercrime, including unauthorized intrusions of varying types, a judge raised his hand and asked me if the court systems at his court were secure. I suggested he take that up with their IT people. I hope he did.)

The U.S. Secret Service has done two very good studies of the insider threat, which I suggest you take a look at if you’re interested in this problem.

Dealing with insiders is, I think, much more difficult than dealing with the outsiders. The task of dealing with outsiders is to a great extent analogous to the task of fending off attackers in the real, physical world: You barricade points of entry and lock down as much as you can to try to prevent their getting inside. It's like being in a castle and fending off invaders.

You don't have that clear boundary with insider attacks. The insiders are, of course, already inside, and controlling them can be a very dicey undertaking. Obviously, one solution is to monitor everything everyone does, but that is probably going to be logistically impossible and will certainly not endear an organization to its employees. I could go on in that vein, but I’d recommend you check out the Secret Service studies, as they include some suggested “proactive practices” for dealing with the problem.

Friday, June 13, 2008

Until very recently, South Dakota had a telephone harassment/threat statute that looked pretty much like similar statutes in other states.

Here is what it said:

It is a Class 1 misdemeanor for a person to use a telephone for any of the following purposes:

(1) To call another person with intent to terrorize, intimidate, threaten, harass or annoy such person by using obscene or lewd language or by suggesting a lewd or lascivious act;
(2) To call another person with intent to threaten to inflict physical harm or injury to any person or property;
(3) To call another person with intent to extort money or other things of value;
(4) To call another person with intent to disturb him by repeated anonymous telephone calls or intentionally failing to replace the receiver or disengage the telephone connection.

South Dakota Codified Laws § 49-31-31.

In March, the South Dakota legislature passed a bill revising the statute. As amended, it reads:

It is a Class 1 misdemeanor for a person to use a telephone or other electronic communication device for any of the following purposes:

(1) To contact another person with intent to terrorize, intimidate, threaten, harass or annoy such person by using obscene or lewd language or by suggesting a lewd or lascivious act;
(2) To contact another person with intent to threaten to inflict physical harm or injury to any person or property;
(3) To contact another person with intent to extort money or other things of value;
(4) To contact another person with intent to disturb that person by repeated anonymous telephone calls or intentionally failing to replace the receiver or disengage the telephone connection.

South Dakota House Bill 1313 (approved March 12, 2008). As the amended text shows, the bill replaces "call" with "contact" and expands the statute so that it encompasses the use of "electronic communication devices" in addition to telephones. The South Dakota governor signed the bill into law on March 12, 2008, the day the legislature passed it.

One of the frustrating things about legislation at the state level is that it’s often difficult, or even impossible, to get what we in the law call “legislative history.” Legislative history, which tends to be abundant at the federal level, is a legislative body’s explanation of why it adopted a particular measure. It can take the form of committee reports on the proposed legislation, debates on the measure on the floor of the legislature, transcripts of hearings on the measure, etc. Most states don’t compile legislative history, so you often have to guess as to why they did something.

Why did South Dakota do this? Well, I think they actually did a very good, a very rational thing: They looked at their threatening/harassment communication statute and saw that it was technologically limited – it only criminalized the use of a TELEPHONE to threaten or harass someone. Statutes like this began to come into existence in the last century as phones became more popular. Every state has a statute similar to this, and many of them are still based on using a telephone.

To remedy this problem, states sometimes just enact law that creates a new crime. So some states still have phone harassment but they’ve also added a new crime: computer harassment. I happen to think that approach is wrong. As I’ve written before, criminal law is not about the method you use, it’s about the “harm” you inflict. So we outlaw homicide (the “harm” of causing the death of another human being), not the method you use. We don’t, in other words, break our homicide statutes out into (i) homicide by poison, (ii) homicide by strangulation, (iii) homicide by stabbing, (iv) homicide by gun . . . and so on.

I suspect the South Dakota legislators were responding, after the fact or proactively, to the issue that was recently raised before a New York court.

A New Yorker was charged with two counts of aggravated harassment for sending “approximately six text messages” to the victim’s “phone threatening” her “by stating that” he “was outside of [her] resident and [she] would end up in the hospital.” People v. Limage, 19 Misc.3d 395, 851 N.Y.S.2d 852 (Criminal Court – City of New York, Kings County, February, 2008). Limage moved to dismiss the charges against him arguing, in part, that what he was alleged to have done did not qualify as harassment under the applicable New York statute.

Here’s the statutory provision he was charged under:

The relevant portion of [New York] Penal Law § 240.30 provides that: “[a] person is guilty of aggravated harassment in the second degree when, with intent to harass, annoy, or alarm another person, he or she:

1. Either (a) communicates with a person . . . by telephone . . . or any form of written communication, in a manner likely to cause annoyance or alarm; or

(b) causes a communication to be initiated by . . . electronic means with a person . . . by telephone . . . or any form of written communication, in a manner likely to cause annoyance or alarm.”

People v. Limage, supra.

One of Limage’s arguments for dismissing the charges was, apparently, that text messages aren’t encompassed by the statute above because “text messages are brief, easy to ignore, and therefore not as serious as phone calls, letters, or e-mails”. People v. Limage, supra. The court disagreed:

With the advancement of technology, telephones have come to be used for more than simply placing and receiving calls. They now have the capability of sending and receiving messages and pictures, accessing the internet, playing music, and much more. . . . [T]ext messages are communicated in writing, just like letters or e-mails, and access the recipient often instantaneously, like a phone call directly to the person's cell phone. Additionally, the brevity of a text message has no impact on the severity of its meaning. A short text message can be more vicious and threatening than a lengthy, convoluted e-mail or letter. The defendant too easily dismisses the technological developments which have facilitated ever faster communication, and which, along with their many benefits, bring . . . ever greater potential for abuse.

People v. Limage, supra.

This issue will probably come up in cases in other states, because I don’t think any state’s harassment/threat statutes specifically mention using text messages . . . and I, personally, don’t think they should. This goes back to what I said above, about how criminal statutes should outlaw the infliction of particular “harm” (threats, harassment), not inflicting-a-particular-“harm”-by-a-specific-method. I think what the South Dakota legislature did is a pretty good approach to the situation.

I really think, though, that we need to focus on the “harm” and not on the method at all, because I’m sure email and text messages, as we currently understand them, will be quite obsolete in . . . what? . . . 10 years? Less? Why can’t we just make it a crime to threaten or harass someone?

Wednesday, June 11, 2008

This post isn’t about cybercrime. It’s about a kind-of digital evidence issue: a defendant’s right to obtain the source code of technology used to generate evidence against him or her.

The issue can, and will, I believe, come up in a variety of contexts, including the use of particular software programs to analyze seized hard drives and otherwise locate digital evidence. At least as far as I can tell, it hasn’t really come up except in one context: Attempts to get the source code used in various kinds of DUI testing machines.

I can see why this would be an area where a number of challenges arise, since it seems to be an area that spawns a lot of litigation, as people challenge their DUI convictions.

There are a number of cases on this issue and the analysis can become pretty lengthy, but I’m going to try to keep this short. I’m going to focus primarily on a Minnesota case, State v. Underdahl, 2008 WL 2107772 (Minnesota Court of Appeals, May 20, 2008).

Here’s what happened to bring the source code issue before the court of appeals:

These appeals . . . from the district court's decisions to grant respondents' motions to discover the source code for the . . . Intoxilyzer 5000EN (Intoxilyzer), the machine used to test respondents' breath for alcohol concentration. Respondents . . . Brunner and . . . Underdahl were each charged with driving while impaired after the Intoxilyzer tests registered an alcohol concentration above .08. During pretrial proceedings, respondents . . . moved for discovery of the computer source code, the original text of the computer program by which the instrument operates.

The state . . . [argued] that the source code was not relevant. . . . The district court . . . found that . . . Brunner `cannot assess the reliability of the testing method without access to the software that controls the testing process.’ . . . [R]egarding . . . Underdahl, the court stated that `[b]ecause the Intoxilyzer [ ] provides the only evidence of . . . alcohol concentration that may be used to prove his guilt, evidence regarding the operation of that instrument is relevant to this case.’ The state appeals from both decisions.

State v. Underdahl, supra.

The basis for Mr. Brunner and Mr. Underdahl’s challenge was Rule 9.01 of the Minnesota Rules of Criminal Procedure, which lets a court require the prosecution to provide information if a defendant shows that it “may relate to his guilt or innocence.” The prosecution argued that the challengers did not have a viable claim under the statute because “the results of an Intoxilyzer breath test are presumed to be reliable under Minn.Stat. § 634.16 (2006), which allows the results of a breath test to be admitted `in evidence without antecedent expert testimony that an ... approved breath-testing instrument provides a trustworthy and reliable measure of the alcohol in the breath.’” State v. Underdahl, supra.

The context in which the challenge comes up is evidence law. Every state (and the federal system) has “gate-keeping” rules of evidence, the purpose of which is to ensure that the trier of fact (which is usually a jury, but can be a judge in what is known as a bench trial) hears only evidence that meets some basic standard of reliability.

Evidence is essentially divided into two categories: witness testimony and physical evidence. Here, we’re not talking about physical evidence, as such, even though breathalyzers involve the analysis of physical artifacts. What is being offered into evidence is not, however, the alleged drunk driver’s breath (or blood, when a DUI charge is based on a blood test). Instead, it’s the result of a “scientific” procedure – an analysis of the amount of alcohol in someone’s breath. I’m not going to try to explain what they do, because I don’t understand it in any depth. There’s a Wikipedia entry on the subject, and you should check it out if you want to know more about the processes involved.

For the results of a breathalyzer test to be admissible at a DUI trial, they must meet the requirements of the applicable rules of evidence. In this case, they were the Minnesota Rules of Evidence. Minnesota Rule of Evidence 702 is the rule that sets the standard for admitting the results of scientific tests:

If scientific . . . or other specialized knowledge will assist the trier of fact to . . . determine a fact in issue, a witness qualified as an expert . . . may testify thereto in the form of an opinion or otherwise. The opinion must have foundational reliability. . . . [T]he proponent must establish that the underlying scientific evidence is generally accepted in the relevant scientific community.

The federal system and other states have similar rules, all of which are essentially based on common sense. When a regular witness – a lay witness – testifies, the other side can challenge the reliability of that witness’ testimony by cross-examining them, because they are testifying about matters we all understand. If you’ve seen the movie My Cousin Vinny, remember how Vinny undermines the reliability of the testimony of the lady who claims to have seen the guys who robbed the store by showing that her vision, even with her glasses on, is simply not good enough to have done so. That works because the jury (or the judge if it’s a bench trial) can understand the challenge the defense is making; it's essentially a matter of common sense.

The premise of the challenge in the Underdahl case (and of the apparently hundreds of cases like it) is that the defendants cannot effectively challenge the reliability of the tests performed on their breath by the Intoxilyzer (a particular kind of breathalyzer) unless they are given access to its source code. The Wikipedia entry I noted above outlines some of the errors that can arise in the administration of these tests. Defense attorneys, like those involved in the Underdahl case, are arguing that they cannot challenge the reliability of a particular breathalyzer/Intoxilyzer test unless they have access to the source code of the machine’s software.

Essentially, they’re saying that the test cannot be cross-examined like a person, so the only way they can challenge its reliability is to have access to how it works; if they and their experts can find some flaw in the source code, and if that flaw would undermine the functioning of the machine, then they would have a way to challenge its reliability in a particular instance. Absent access to the source code, they say, they have no way to challenge the accuracy of the testing . . . and if the testing is not challenged, the result will be a finding of guilt.
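To see why defense experts think source-code review could matter, consider a purely hypothetical sketch. Nothing here reflects the actual Intoxilyzer 5000EN firmware, which is not public; the point is simply that a small, invisible implementation choice, such as whether a raw reading is rounded or truncated to two decimal places, can determine which side of the .08 legal threshold a borderline result falls on:

```python
import math

# Hypothetical illustration only: two ways firmware might convert a raw
# sensor value into the two-decimal alcohol-concentration figure that is
# reported and offered into evidence.

def bac_rounded(raw: float) -> float:
    # Rounds the raw reading to the nearest hundredth.
    return round(raw, 2)

def bac_truncated(raw: float) -> float:
    # Drops everything past the hundredths place instead of rounding.
    return math.floor(raw * 100) / 100

raw_reading = 0.0796  # hypothetical raw sensor value near the legal limit

print(bac_rounded(raw_reading))    # 0.08 -- at the legal limit
print(bac_truncated(raw_reading))  # 0.07 -- below it
```

Only someone who can read the code would know which conversion the machine performs, which is exactly the kind of question the defendants say they cannot ask without discovery of the source.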

These defendants, like others, lost in their attempt to obtain the source code. The court of appeals held that they did not produce evidence to satisfy the requirements of Rule 9.01, i.e., they did not show how the evidence could relate to their guilt or innocence:

[R]espondents have not shown what an Intoxilyzer `source code’ is, how it bears on the operation of the Intoxilyzer, or what precise role it has in regulating the accuracy of the machine. Accordingly, there is no showing as to what possible deficiencies could be found in a source code, how significant any deficiencies might be to the accuracy of the machine's results, or that testing of the machine, which defendants are permitted to do, would not reveal potential inaccuracies without access to the source code.

State v. Underdahl, supra.

Other defendants in other states have lost for another reason: prosecutors often argue, and courts have agreed, that the prosecution cannot turn over the source code because it does not possess it; the code is a trade secret belonging to the company that makes the breathalyzer. Here is what a Nebraska court said on that issue:

Kuhl urges this court to balance his Sixth Amendment right of confrontation against . . . any trade secret right that the manufacturer of the machine in question might have. Kuhl argues that he should be assured the opportunity to examine the evidence against him and that this requires the State to turn over the source code to allow him to, `in a way, cross examine the machine and determine if it was in proper working order.’ . . . Section 29-1914 provides that discovery orders `shall be limited to items or information within the possession, custody, or control’ of the State. . . . The record is clear that the source code is not in the State's possession and that the manufacturer of the machine . . . considers the source code to be a trade secret and the proprietary information of the company.

State v. Kuhl, 16 Neb. App. 127, 741 N.W.2d 701 (Nebraska Court of Appeals 2007). The Nebraska court therefore upheld the trial court’s denying Mr. Kuhl’s motion to give him access to the source code of the breathalyzer used in his case.

A Kentucky court reached a rather different conclusion in House v. Commonwealth, 2008 WL 162212 (Kentucky Court of Appeals 2008). After being charged with DUI, Mr. House served a subpoena on the manufacturer of the breathalyzer used in his case; the subpoena sought the machine’s source code. The trial court quashed the subpoena and Mr. House appealed. The court of appeals reversed the order quashing the subpoena and remanded the case for further proceedings. Here is what it said about the trade secret issue:

The Commonwealth and CMI argue. . . that the computer code is a protected trade secret and that this should weigh against disclosure. However, House has expressed his willingness for he, his attorney, and his expert witness to enter into a protective order stipulating that the code or its contents are not to be shared with any party outside of the case. The district court is authorized to enter such . . . [T]he order may provide that any copies or work product generated as a result of the software engineer's review be returned to CMI upon completion of the review. As civil and/or criminal penalties could result from the disclosure of the code to other parties, such a protective order should obviate any concern CMI may have with respect to protection of its source code.

I’ve no idea where this is going in the breathalyzer context, but it seems to me the general issue – i.e., how do you cross-examine a technical process carried out by a machine? – will be with us for some time. And I think we will see it being raised in the context of machines and software being used to obtain and analyze digital evidence.