from the can-beings-of-light-cure-a-streisand? dept

A couple weeks ago, we wrote about faith healer Adam Miller's monumentally stupid and ridiculous SLAPP lawsuit against Stephanie Guttormson, filed because she posted a video critical of one Miller put together promoting his faith healing nonsense. We dissected how weak and laughable the lawsuit was -- and it's possible that Miller and his lawyer have now realized this, as they've dismissed it. The dismissal was without prejudice, however, meaning he could potentially file it again in the future. Given that (and the fact that Guttormson has raised a bunch of money in a crowdfunding campaign), I wonder if she'll now file for a declaratory judgment of non-infringement... Update: Guttormson notes that while she could go after him, she's going to let the matter drop.

Either way, Miller said he filed the lawsuit because he felt it was unfair that people were seeing this video that Guttormson put together:

When he filed the original lawsuit, it noted that the video had been viewed approximately 1,500 times since it first was posted back in December of last year. So, 1,500 views in a bit over four months. In the two weeks since the lawsuit was filed, it's now up to about 50,000 views. Nice work, Adam.

Separately, if you do a Google search on "adam miller healer" (as he likes to be known), it's full of stories about how he's suing Guttormson, most of them mocking his faith healing nonsense. One of these days people are going to understand the nature of the Streisand Effect, but apparently that day is not today.

from the democratizing-creation dept

Techdirt has written often enough about applications of DNA sequencing -- the elucidation of the four chemical "letters" A, C, G, T that go to make up genomes. But things have moved on: the new frontier is not just analyzing DNA, but synthesizing it. A fascinating article on SFGate describes the activities of one company working in this area, Cambrian Genomics, and some of the tricky ethical issues it raises:

In Austen Heinz's vision of the future, customers tinker with the genetic codes of plants and animals and even design new creatures on a computer. Then his startup, Cambrian Genomics, prints that DNA quickly, accurately and cheaply.

"Anyone in the world that has a few dollars can make a creature, and that changes the game," Heinz said. "And that creates a whole new world."

The 31-year-old CEO has a deadpan demeanor that can be hard to read, but he is not kidding. In a makeshift laboratory in San Francisco, his synthetic biology company uses lasers to create custom DNA for major pharmaceutical companies. Its mission, to "democratize creation" with minimal to no regulation, frightens bioethicists as deeply as it thrills Silicon Valley venture capitalists.

Printing the new DNA is the easy bit; increasingly, the hard bit is deciding what should -- and shouldn't -- be printed:

Right now, employees check each order to make sure that a customer isn't printing, say, base pairs of Ebola. But staff won’t have time to do that if, as Heinz predicts, orders dramatically increase in the next two years. In that case, he said, Cambrian might first ship the plates to an independent facility where experts would put the DNA inside cells, film and analyze it, and make sure that it is safe before releasing it.

This facility, he envisions, could be run by another company, not necessarily the government. Because Cambrian wants to keep government interference to an absolute minimum, its CEO insists that behaving well is in the company’s best interest.

That may be true, but as this technology spreads and becomes cheaper, more and more companies around the world will start to offer similar services, making it harder to oversee their work. And then there will be the backstreet labs that intentionally try to avoid any kind of control. Soon, if you can model it, you will be able to synthesize it. Cambrian Genomics is already helping to drive the spread of its tools and ideas:

Cambrian will also share its technology with startups in which it holds a 10 percent equity stake. One is Petomics, which is making a probiotic for cats and dogs that makes their feces smell like bananas. Another is SweetPeach, which hopes to take samples of users' vaginal microorganisms and send back personalized probiotics to promote vaginal health. (Contrary to Heinz's description of SweetPeach at a recent conference, the products will not make vaginas smell like peaches.)

Heinz seeks to help create "thousands" more startups in this vein. On top of that, he wants to replace lost limbs, fight viruses and develop alternatives to antibiotics. Maybe someday, he said, scientists will even print DNA on Mars. "It’s going to be an amazing next few hundred years."

Given the rapid advances in synthetic biology, that certainly seems likely. The question is: will the next few hundred years be good amazing, or bad amazing? Where -- and how -- do we draw the line here?

Pennsylvania man Anthony Elonis has historically enjoyed saying outrageous things on Facebook, such as how he would like to murder his estranged wife; shoot up an elementary school; sneak into an amusement park he was fired from to wreak havoc; slit the throats of a female co-worker and a female FBI agent; and use explosives on the state police, the sheriff's department, and any SWAT team that might come to his house. Elonis has never actually done any of these things, but he did spend the last three and a half years in prison for saying that he would. This week, the Supreme Court said it's going to re-examine the case, meaning we'll get a federal decision on whether online threats must be seriously intended by the person making them in order to send the threat-maker to jail, or merely need to be taken seriously by a reasonable person on the receiving end.

Elonis' argument is that his threats were just "rap lyrics" intended to be read by only his friends. He also argues he never targeted anyone (ex-wife, schools, FBI) with these comments (specifically pointing to the fact that he never "tagged" any of his "targets" using Facebook's notification system) and that the supposed threats were taken out of context -- that context being that Elonis was known for posting outrageous comments.

The lesson here seems to be that seeking negative attention from the internet also tends to net you additional attention from law enforcement, especially if your background isn't exactly clean. Elonis apparently harassed coworkers to the point of being fired from an amusement park job and some Facebook comments fantasizing/threatening violence towards his ex-wife prompted a real-life restraining order.

One of the problems with Elonis' case is that it asks the Court to find in favor of a very unsympathetic individual. It also asks it to ignore the objective standard so many courts have used and begin applying a subjective standard -- something more aligned with the reality of internet communication. There are other cases out there with more sympathetic protagonists, like Justin Carter, a teen who was arrested and thrown in jail (and held with a $500,000 bail) over some post-video game trash talking that included a mention of shooting up a school. To make the case against Carter, the comment was stripped of its context and presented as the teen's sincere desire to kill schoolchildren.

Social media interactions, when robbed of context, can often appear to be much more dangerous than they actually are. Simply holding that a reasonable person would view one specific comment or post as threatening hurts not only seemingly more "dangerous" people like Elonis, but also those who never truly uttered a threat (like Justin Carter). Since we can't expect the theoretical "reasonable person" to have access to the surrounding context, we should expect the court to consider that context alongside the reasonable person's point of view.

It has long been known that people are more willing to say divisive and controversial things on the internet -- stuff they certainly wouldn't say in person. To hold these interactions to a "reasonable person" standard ignores the fact that the internet isn't particularly known for "reasonable" interactions. There's likely no "bright line" to be found here. Not everything threatening said online should be treated as a threat, but on the other hand, the tendency of internet interactions to be more exaggerated than those in real life shouldn't be used as a shield against criminal charges.

Hanni Fakhoury, a lawyer for the Electronic Frontier Foundation, makes a very good point -- one that could head off a lot of high-level court discussions over the "reasonable person" viewpoint.

Fakhoury says threats made online should be where police investigations start, not where prosecutions start.

"We've tolerated stupid speech a long time in this country, and we shouldn't let the Internet shake that balance," says Fakhoury. "We need a holistic approach to problems, not just, 'If you say a threat on the Internet, you're going to jail.'"

As we've often stated here, supposed threats should very definitely be investigated. But these investigations not only need to take into account whether the person has the means to carry them out, but also the surrounding context. It's simply not enough to declare something a threat because someone felt threatened -- a word some people deploy when they actually mean "appalled" or "offended." But that's often how these prosecutions start -- a very subjective situation which is only held to a supposedly objective viewpoint long after someone's already been jailed and gone to trial.

The latest case involving the legal parameters of online speech before the justices concerns a Pennsylvania man sentenced to 50 months in prison after being convicted on four counts of the interstate communication of threats. Defendant Anthony Elonis' 2010 Facebook rant concerned attacks on an elementary school, his estranged wife, and even law enforcement.

"That's it, I've had about enough/ I'm checking out and making a name for myself/ Enough elementary schools in a ten mile radius/ to initiate the most heinous school shooting ever imagined/ and hell hath no fury like a crazy man in a Kindergarten class/ the only question is … which one?" read one of Elonis' posts.

Elonis' case is a bit more complicated. For one, Elonis is 30 years old. While growing older doesn't necessarily make you immune from stupidity, the expectations are a bit higher in terms of online discourse. It's a little harder to claim you're running on the same high-octane concoction of hormones and blood displacement that teenage boys are. Not that all youthful indiscretions are excusable, but given that age group's tendency towards disproportionate drama in all things, it does make it more understandable.

In addition, Elonis' statements were directed at a variety of targets, any of which would seem a plausible recipient for his anger. Not only did Elonis mention shooting up a school (specifically a kindergarten), but he also apparently had dire "plans" for his wife and local law enforcement. Again, the post-Sandy Hook law enforcement/judicial mentality further clouds the issue, raising the question: if Elonis had left out the part about the school shooting, would he still be facing the same prison sentence? (Of course, threatening law enforcement tends to create just as much of a legal mess, usually one far worse than simply threatening your estranged spouse does…)

But the odds are fairly long that the Supreme Court will find that the ability to carry out a threat matters as much as how everyone other than the person making the statement perceives it.

Only one federal appeals court has sided with Elonis' contention that the authorities must prove that the person who made the threat actually meant to carry it out. Eight other circuit courts of appeal, however, have ruled that the standard is whether a "reasonable person" would conclude the threat was real.

This long shot is also reliant on another long shot: that the administration will support this appeal. A similar case involving an Iraq War vet was greeted by the White House with a written petition asking the Supreme Court to reject the case. These two obstacles make it unlikely that the judicial system will start treating so-called "threats" any differently than they have in the past. And it's a very long past. David Kravets at Ars Technica points out that the statute being applied to these cases originated in 1932.

There are legitimate threats and these are rightly not treated as free speech. But there are others that are treated as legitimate threats even when there's no evidence the person uttering them has the ability, much less the intention to back up their unfortunate statements. Applying a 1932 statute to the wide open discourse platform that is the internet is doing little more than putting loudmouths and idiots in jail. Those who mean actual harm to others generally don't enlighten their future targets via Twitter, Facebook and forum posts.

By all means, potential threats should be investigated, but the courts need to come to the realization that these statements cannot be entirely robbed of their context (including intent and ability) and presented "as is" to the hypothetical "reasonable person." Reasonable people are completely capable of understanding that not every hurtful word can actually hurt someone, nor do they believe every "threat" is the sign of impending danger. Not only should the statute be reconsidered, but so should the courts' "reasonable person" ideal.

from the dangerous-step dept

My own representative in Congress, Jackie Speier, has apparently decided to introduce a federal "revenge porn" bill, which is being drafted, in part, by Prof. Mary Anne Franks, who has flat out admitted that her goal is to undermine Section 230 protections for websites (protecting them from liability for the actions of third parties) in order to make them liable for others' actions. Now, I've never written about Franks before, but the last time I linked to a story about her in a different post, she went ballistic on Twitter, attacking me in all sorts of misleading ways. So, let me just be very clear about this. Here's what she has said:

"The impact [of a federal law] for victims would be immediate," Franks said. "If it became a federal criminal law that you can't engage in this type of behavior, potentially Google, any website, Verizon, any of these entities might have to face liability for violations."

That makes it clear her intent is to undermine Section 230 and make third parties -- like "Google, any website, Verizon... face liability."

Now, her retort to all of this is likely that she's not seeking to undermine Section 230 in any way. Rather, she's attempting to do something of an end-run around it. Section 230 has never protected sites from liability for federal crimes -- only for civil claims and state crimes. So her goal is to make the amorphous concept of "revenge porn" a "federal crime," thereby suddenly making third-party websites liable. She will argue that does nothing to undermine Section 230. A more reasoned and thoughtful look at the issue, however, shows how this effort is fraught with dangerous consequences and potential First Amendment problems.

Taking a step back, though, let's be clear: revenge porn -- the practice of posting naked pictures of someone (who likely took those photos for an individual or themselves, rather than the public) along with that person's identifying information -- is odious. Those who are involved in the practice are morally repugnant individuals. And yet, what we've seen is that there do appear to be ways to deal with them. One of the most well-known creators of revenge porn, Hunter Moore, was recently arrested on charges that he conspired with another person to hack into email accounts to get more photos. Often, those engaged in revenge porn are also engaged in extortion over those images or other crimes on which they can be charged. Some revenge porn sites have been hit with lawsuits for copyright infringement -- though that creates a whole different set of problems.

Still, you can see why there's a temptation to create a new anti-revenge porn statute. The whole concept of revenge porn is itself repugnant, so it's tempting (especially as a lawmaker) to pull out that old hammer and create some regulations. But regulating in reaction to the odiousness of those sites risks obscuring the way such laws will inevitably -- as Prof. Franks herself admits -- impact companies that are clearly not engaged in revenge porn.

In the article about the legislation, EFF's Matt Zimmerman (who, actually, just left EFF) points out that using criminal law here is "dangerous" because it would likely lead lots of companies to reflexively delete all sorts of content, including plenty of perfectly legal and legitimate content, to avoid the sort of liability Franks describes. And that's the huge problem here. By spreading liability, you guarantee over-censorship. It's easy for people who are narrowly focused on a single issue to not recognize the wider impact that issue may have. Trying to accurately describe what "revenge porn" is for the sake of criminalizing its posting will almost certainly have chilling effects on third parties and undermine the very intent of the CDA's Section 230.

People who don't think through the details seem to assume that it must be easy to define what is "revenge porn," but the deeper you go, the more difficult it becomes to define -- and the more risks there are of both over-criminalizing and creating serious First Amendment issues. For example, you could say that sites should be forced to take down photos of individuals where those individuals insist that the photos are problematic. But then you'd have to deal with situations like that of Raanan Katz, who went after a blogger and Google (using civil copyright law) for posting an "unflattering" photo. Do we really want to bring criminal law into that arena?

And the First Amendment issue is not easy to get around. At all. As lawyer Mark Bennett discussed in trying to create a First Amendment-compliant anti-revenge-porn statute, it's not an easy challenge:

The First Amendment problem we face is that “posting nude or explicit images of former lovers online” is speech; a statute focused on such posting is a content-based regulation of speech; content-based regulations of speech are presumed to be invalid (that is, speech is presumed to be protected); and the Supreme Court in U.S. v. Stevens expressly rejected a balancing test for content-based criminal laws, instead applying a categorical test.

At best, Bennett tries, as an exercise, to see if it would be possible to extend obscenity laws to cover "revenge porn," but that would massively expand obscenity laws, again in potentially dangerous ways. The problem here is that pretty much everyone agrees that revenge porn is a really horrible thing -- but any attempt to criminalize it will have serious implications way beyond the targeted issue.

Instead of following Franks down that dangerous road, it would be wise to focus on ways to use existing laws to go after those who are clearly engaged in related questionable behaviors. Changing the laws to put the burden on third parties is only going to create significant new problems. I'm disappointed that my own Representative in Congress, Jackie Speier, appears to not realize this, and I will be contacting her office to express my concerns about the bill.

from the digitus-impudicus dept

The middle finger, or flipping the bird, or digitus impudicus, is a wonderfully universal way to let someone know what you think of them. We recently told you the story of a delightful woman who fashioned her Christmas lights into the gesture as a way to help her neighbors get into the holiday spirit. What I didn't realize is how many stories there are of people giving the bird to the police while driving by. Quite frankly, it has never occurred to me, while driving past someone with the ability to make me miserable in so many different ways, to give them the finger.

But that's exactly what Vietnam veteran John Swartz of New York did, flipping off an officer and his speed gun as he drove past in 2006. He was subsequently pulled over and arrested for disorderly conduct. He's apparently been fighting back ever since, and now his court case has been reinstated by a federal appeals court, which didn't believe the arresting officer's explanation that he pulled the car over because he thought the middle finger was meant as an alert that the female driver, Swartz's wife, needed assistance.

From the three judge panel:

Perhaps there is a police officer somewhere who would interpret an automobile passenger's giving him the finger as a signal of distress, creating a suspicion that something occurring in the automobile warranted investigation. And perhaps that interpretation is what prompted Insogna to act, as he claims. But the nearly universal recognition that this gesture is an insult deprives such an interpretation of reasonableness. This ancient gesture of insult is not the basis for a reasonable suspicion of a traffic violation or impending criminal activity. Surely no passenger planning some wrongful conduct toward another occupant of an automobile would call attention to himself by giving the finger to a police officer.

On the one hand, it's good that a court recognized that there is no law against flipping off the police and that free speech should be protected from hysterically reaching justifications for revenge arrests like this. On the other hand, it's a little sad that a federal appeals court has to delve into such territory at all. Of course, none of this should be read as an endorsement of flipping off police in general, but speech is speech and it should be protected. In any case, this isn't over yet and no date for trial has been set, so we'll have to wait for a verdict.

from the don't-do-as-you-would-be-done-by dept

Techdirt has noted before the hypocrisy of Disney in refusing to allow others to draw on its creativity in the same way that it has drawn on the art and ideas of the past. Here's another example, but this time it's an opera that's had difficulties:

A British-designed and directed opera about Walt Disney which premieres in Spain this month before coming to London has been forced to tell the great cartoonist's story without any of the images of the characters that made him a household name. Minnie, Donald, Pluto and Goofy, not to mention Mickey Mouse himself, will not be appearing on stage with the singers.

The Perfect American, the latest work by the acclaimed composer Philip Glass, concentrates on the last years of Disney's life, when he lay dying of lung cancer while planning to have his body frozen. It portrays Disney as a megalomaniac with McCarthyite, racist and misogynist tendencies, so it is clear why the global entertainment corporation has denied rights.

Rather weirdly (sour grapes?), the artistic director of the English National Opera, which will perform the opera in London, says that "we would probably not have used the real Disney characters in the production even if we had been allowed to," so in practice Disney's refusal hasn't turned out to be a big problem. But there's still an interesting issue here.

As Techdirt has discussed before, the famous "Mickey Mouse Curve" shows how copyright extensions always seem to come through just as Mickey Mouse is about to enter the public domain. So it's not entirely impossible that Disney will be pushing for yet another extension fairly soon, and for more after that.

What that means in practice is that creators of works like the Philip Glass opera presenting Walt Disney in a less than totally flattering light are likely to find themselves unable to use any of the iconic Disney images beyond what is permitted by fair use. And so a crucial facet of modern culture will not be available for artists to build upon as they wish for the foreseeable future -- hardly how the copyright bargain of a time-limited government-enforced monopoly in exchange for releasing works into the public domain is supposed to work.

from the no-respect dept

The EFF has a post up about how Victoria's Secret sent a legal nastygram to an ISP, taking down a parody campaign by an anti-rape organization, FORCE, called Pink Loves Consent. The campaign was a parody designed to raise awareness of these issues by mocking Victoria's Secret's "PINK" line of clothing, which includes underwear that says things like "sure thing" and "unwrap me." The parody campaign replaced those with things like "ask first" and "respect." The page showed what Victoria's Secret could have done to put forth a more positive, more respectful message... and the company's response was to go straight to the hosting company and demand the site be taken down (which it was, though they found a new host who was willing to put it back up). Parody is a key element of free speech -- and issuing a takedown over this seems like a pretty clear attempt to stifle free speech. And, really, it just makes Victoria's Secret look really, really obnoxious. Were its lawyers really so offended by positive messages, rather than pure sexual objectification?

from the #guinnessbookofhorribleworldrecords dept

India's somewhat schizophrenic relationship with privacy and freedom of speech has been discussed here before. The Indian government, on one hand, seems to want to do the right thing and safeguard its citizens from censorship and surveillance... but only up to a point. Once the going gets rough (i.e., outbreaks of violence, demonstrations), the government begins ramping up its surveillance and cracking down on free speakers.

On 20 October, he (Ravi Srinivasan) posted a tweet to his 16 followers saying that Karti Chidambaram, a politician belonging to India's ruling Congress party and son of Finance Minister P Chidambaram, had "amassed more wealth than Vadra".

This message ("got reports that karthick chidambaram has amassed more wealth than vadra") went out to all of 16 followers and somehow found its way to Karti himself, who responded like anyone else would when mildly insulted: by contacting law enforcement...

Karti Chidambaram (@KartiPC) did not take the tweet in good humour and filed a police complaint on 29 October.

… which immediately responded with the sort of speed reserved for appeasing angry politicians.

They arrested Mr Srinivasan early next morning, charged him under Section 66A of India's Information Technology [IT] Act, and demanded 15 days of police custody.

Srinivasan's single allegation could have been addressed through India's libel laws, but since that route takes time and money, the offended politician instead used the police department to take care of the "problem" by using the "sweeping power" of Section 66A of the IT Act of 2000.

[Section 66A] can send you to jail for three years for sending an email or other electronic message that "causes annoyance or inconvenience".

On the face of it, this protects citizens against online harassment.

In reality, the law is more often used by the state as a weapon against dissent. In each such case, police action has been swift and harsh.

In April, the West Bengal government led by Chief Minister Mamata Banerjee used Section 66A against a teacher who had emailed to friends a cartoon that was mildly critical of her.

Loosely worded laws, ostensibly designed to "protect" citizens, usually devolve into tools of censorship. For some strange reason, those with the most power are the ones who feel the most "threatened" by open criticism and dissent. It's little wonder that legislators are more than willing to push through open-ended "cyberlaws" that can be bent to fit any situation. The end result is this fact, which is perhaps least surprising of all:

And, interestingly, Section 66A has never been used against politicians.

To Srinivasan's credit, he refused to back down from his statement. In addition, his arrest and subsequent appearance on television led to him gaining another 2,300 followers, many of whom are wondering if his arrest was tied to his anti-corruption campaigning. Despite the public support for the arrested tweeter, the politician behind his arrest remains unrepentant, tweeting out this amazing statement in his own defense:

"Free speech is subject to reasonable restrictions. I have a right to seek constitutional/legal remedies over defamatory/scurrilous tweets."

There's nothing "reasonable" about arresting someone rather than following the "constitutional/legal remedies" set up by India's libel law. These are simple thug tactics, deployed by someone operating without fear of reprisal. Section 66A needs to be cleaned up if freedom of speech and privacy are going to be protected, rather than just paid lip service at convenient intervals.

from the questionable-platform-reliance dept

The Washington Post has an interesting story about Facebook's admission that it erroneously took down a widely shared image posted by an anti-Obama group over the weekend. The somewhat viral image (which, as the article notes, isn't exactly the most truthful of images -- but perhaps par for the course when it comes to political speech) was removed after Facebook said it "violated Facebook's Statement of Rights and Responsibilities." However, people going through Facebook's official list of Rights & Responsibilities didn't turn up anything that the content violated.

Leaving aside the question of exaggerated political speech, this raises the same question we've raised in the past: why do so many people rely on closed platforms that allow somewhat arbitrary removal of speech? While Facebook eventually admitted its error, this is hardly the first such case of Facebook deciding what you can or cannot talk about. That's a tremendously powerful position that Facebook's users have granted to Facebook in making it their communications platform of choice. Many people will say that this is "the price" that people pay to be on a platform where everyone else is -- and that the convenience of Facebook outweighs such costs. But it's also why so many people are a bit nervous about Facebook these days.