from the good-for-them dept

Wikimedia, like many other internet platforms these days, releases a transparency report that discusses various efforts to take down content or identify users. We're now all quite used to what such transparency reports look like. However, Wikimedia's latest is worth reading as a reminder that Wikipedia is a different sort of beast. Not surprisingly, it gets a lot fewer demands, but it also abides by very few of those demands. My favorite is the fact that people demand Wikimedia edit or remove content. It's a wiki. Anyone can edit it. But if your edits suck, the community will likely revert them. And yet, Wikimedia still receives hundreds of demands. And doesn't comply with any of them. Including ones from governments. Instead, Wikimedia explains to them just how Wikipedia works.

From July to December of 2017, we received 343 requests to alter or remove project content, seven of which came from government entities. Once again, we granted zero of these requests. The Wikimedia projects thrive when the volunteer community is empowered to curate and vet content. When we receive requests to remove or alter that content, our first action is to refer requesters to experienced volunteers who can explain project policies and provide them with assistance.

On the copyright front, they only received 12 requests. I actually would have expected more, but the community is pretty strict about making sure that only content that can be on the site gets there. Only 2 of the 12 takedowns were granted.

Wikimedia projects feature a wide variety of content that is freely licensed or in the public domain. However, we occasionally will receive Digital Millennium Copyright Act (DMCA) notices asking us to remove content that is allegedly copyrighted. All DMCA requests are reviewed thoroughly to determine if the content is infringing a copyright, and if there are any legal exceptions, such as fair use, that could allow the content to remain on the Wikimedia projects. From July to December of 2017, we received 12 DMCA requests. We granted two of these. This relatively low amount of DMCA takedown requests for an online platform is due in part to the high standards of community copyright policies and the diligence of project contributors.

This is actually really important, especially as folks in the legacy entertainment industry keep pushing for demands that platforms put in place incredibly expensive "filter" systems. Wikipedia is one of the most popular open platforms on the planet. But it would make no sense at all for it to invest millions of dollars in an expensive filtering system. But, since the whining from those legacy industry folks never seems to recognize that there's a world beyond Google and Facebook, they don't much consider how silly it would be to apply those kinds of rules to Wikipedia.

Also interesting is that Wikipedia has now been dealing with some "Right to be Forgotten" requests in the EU. It notes that in the six-month period covered by the transparency report, it received one such request (which was not granted):

From July to December of 2017, the Wikimedia Foundation received one request for content removal that cited the right to erasure, also known as the right to be forgotten. We did not grant this request. The right to erasure in the European Union was established in 2014 by a decision in the Court of Justice of the European Union. As the law now stands, an individual can request the delisting of certain pages from appearing in search results for their name. The Wikimedia Foundation remains opposed to these delistings, which negatively impact the free exchange of information in the public interest.

I don't envy whatever person eventually tries to go after Wikimedia in court over a Right to be Forgotten claim -- though it feels inevitable.

There's more to look at in the report, but it is interesting to look over this and be reminded that not every internet platform is Google or Facebook, and demanding certain types of solutions that would hit all platforms... is pretty silly.

from the proving-once-again-pickup-artists-are-not-to-be-trusted dept

Another case of YouTube's copyright notification system being abused has filtered down through social media. A YouTuber whose channel specializes in game reviews was targeted by the developer of the game after some back-and-forth on the internet over his negative review.

“[La Ruina] criticized my idea that people should be themselves when they’re talking to women, which I think is probably the right thing to do,” Hodgkinson said. “He left a comment on my video when it had 150 views, which I thought was weird.”

The comment from La Ruina suggested shy men could not be themselves and still avail themselves of women, which is apparently their God-given right. This spilled over from YouTube onto Twitter. The discussion there surprisingly took a turn for the better, with La Ruina admitting he was overreacting to the YouTuber's "worst game ever" schtick. (La Ruina also stated he was "Europe's top pick-up artist," which he apparently considers an accolade rather than a slur. No authoritative studies or r/redpill polls were cited in support of this declaration.)

Everything seemed to have calmed down until some helpful Twitter user expressed surprise that the DMCA system hadn't been abused by someone on the receiving end of criticism. La Ruina responded with a tweet noting his "DMCA subscription" that he had apparently "forgotten about."

Lo and behold (and completely expectedly), Hodgkinson's video was hit with a copyright claim from La Ruina. This is normal. This is the system the major labels/studios wanted: one that grants the accuser full credibility until proven otherwise, leaving the accused with minimal tools to fight bogus takedown requests that could see their channel removed and their source of income destroyed.

This incident, however, has plenty of weird to go with the YouTube DMCA normal.

Hodgkinson and La Ruina’s conversation on Twitter continued. La Ruina told Hodgkinson that the studio only issues DMCA takedowns to channels that rip off its content, essentially promising Hodgkinson that the team wouldn’t weaponize and abuse the DMCA takedown.

That all changed less than two days later. Hodgkinson received an email from La Ruina’s team apologizing for issuing a DMCA takedown request, way before Hodgkinson even knew that YouTube had accepted the strike.

Stranger still, La Ruina then sent Hodgkinson $50 via PayPal to cover any lost ad revenue while the video was removed. La Ruina still maintains he had nothing to do with the takedown, passing the buck to "the company."

“I didn’t know that he often makes those kinds of videos,” La Ruina told Polygon, speaking about Hodgkinson’s series. “He’s trying to profit with this type of clickbait-y, YouTube headline. All of that is okay, but when I replied to his tweet and we got into a little thing, he basically put me in a position where he said, ‘You wouldn’t dare DMCA me because you’re afraid of this,’ and I said, ‘No, I wouldn’t because it’s the wrong thing to do.’ And then someone in my company did it, we immediately retracted it and we’re waiting for the video to come back."

That would be fine (though still stupid), except La Ruina just kept talking.

“So yeah, we did it for one video, but in general I don’t think that’s the right thing to do.”

This sounds like he was directly involved and that he did it because the YouTuber "dared" him to do it. None of these statements say much for "Europe's top pick-up artist" or the company he keeps at the company he runs.

But that's the lesson here: someone completely in the wrong can destroy the work, if not the livelihood, of someone in the right. Fair use was supposed to be one of the things YouTube would take into account when handling copyright claims, but so far, it seems to be limited to a very small, select group of users. La Ruina may have retracted his claim, but it still killed Hodgkinson's video for a few days. Even if Hodgkinson doesn't rely on YouTube for his main source of income, it does supply him with money to purchase games to review. This bullshit move by La Ruina is nothing more than a bully targeting someone else's livelihood over some schticky criticism.

from the plus-more-internet-hobbling-guidelines dept

Once social media companies and websites began acquiescing to EU Commission demands for content takedown, the end result was obvious. Whatever was already in place would continually be ratcheted up. And every time companies failed to do the impossible, the EU Commission would appear on their virtual doorsteps, demanding they be faster and more proactive.

Facebook, Twitter, Google, and Microsoft all agreed to remove hate speech and other targeted content within 24 hours, following a long bitching session from EU regulators about how long it took these companies to comply with takedown orders. As Tim Geigner pointed out late last year, the only thing tech companies gained from this acquiescence was a reason to engage in proactive censorship.

Because if a week or so, often less, isn't enough, what will be? You can bet that if these sites got it down to 3 days, the EU would demand it be done in 2. If 2, then 1. If 1? Well, then perhaps internet companies should become proficient in censoring speech the EU doesn't like before it ever appears.

Even proactive censorship isn't enough for the EU Commission. It has released a new set of recommendations [PDF] for social media companies that sharply shortens the mandated response time. The Commission believes so-called "terrorist" content should be so easy to spot that companies will have no problem staying in compliance.

Given that terrorist content is typically most harmful in the first hour of its appearance online and given the specific expertise and responsibilities of competent authorities and Europol, referrals should be assessed and, where appropriate, acted upon within one hour, as a general rule.

Yes, the EU Commission wants terrorist content vanished in under an hour and proclaims, without citing authorities, that the expertise of government agencies will make compliance un-impossible. The Commission also says it should be easy to keep removed content from popping up somewhere else, because it's compiled a "Database of Hashes."
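To see why that confidence deserves skepticism, it helps to look at what a "Database of Hashes" actually does. Below is a minimal sketch (in Python, with made-up placeholder values) of the exact-match approach: fingerprint every removed file, then check new uploads against those fingerprints. Note that changing a single byte of a file produces a completely different hash, so an exact-match database is trivially evaded by re-encoding, which is why real systems lean on fuzzier perceptual matching, with all the false positives that entails.

```python
import hashlib

# Placeholder digests standing in for previously removed files; a real
# shared database would be populated by all participating platforms.
# (This value is illustrative only -- it's the SHA-256 of an empty file.)
BANNED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """SHA-256 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_is_blocked(path: str) -> bool:
    """True only if the upload is byte-for-byte identical to a banned file."""
    return sha256_of(path) in BANNED_SHA256
```

The narrowness of that last function is the whole problem: it catches verbatim re-uploads and nothing else, yet the Commission treats the database as if it solves re-posting outright.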

Another bad idea that cropped up a few years ago makes a return in this Commission report. The EU wants to create intermediary liability for platforms under the concept of "duty of care." It would hold platforms directly responsible for not preventing the dissemination of harmful content. This would subject social media platforms to a higher standard than that imposed on European law enforcement agencies involved in policing social media content.

In order to benefit from that liability exemption, hosting service providers are to act expeditiously to remove or disable access to illegal information that they store upon obtaining actual knowledge thereof and, as regards claims for damages, awareness of facts or circumstances from which the illegal activity or information is apparent. They can obtain such knowledge and awareness, inter alia, through notices submitted to them. As such, Directive 2000/31/EC constitutes the basis for the development of procedures for removing and disabling access to illegal information. That Directive also allows for the possibility for Member States of requiring the service providers concerned to apply a duty of care in respect of illegal content which they might store.

This would apply to any illegal content, from hate speech to pirated content to child porn. All of it is treated equally under certain portions of the Commission's rules, even when there are clearly different levels of severity in the punishments applied to violators.

In accordance with the horizontal approach underlying the liability exemption laid down in Article 14 of Directive 2000/31/EC, this Recommendation should be applied to any type of content which is not in compliance with Union law or with the law of Member States, irrespective of the precise subject matter or nature of those laws...

The EU Commission not only demands the impossible with its one-hour takedowns, but holds social media companies to a standard they cannot possibly meet. On one hand, the Commission is clearly pushing for proactive removal of content. On the other hand, it wants tech companies to shoulder as much of the blame as possible when things go wrong.

Given that fast removal of or disabling of access to illegal content is often essential in order to limit wider dissemination and harm, those responsibilities imply inter alia that the service providers concerned should be able to take swift decisions as regards possible actions with respect to illegal content online. Those responsibilities also imply that they should put in place effective and appropriate safeguards, in particular with a view to ensuring that they act in a diligent and proportionate manner and to preventing [sic] the unintended removal of content which is not illegal.

The Commission follows this by saying over-censoring of content can be combated by allowing those targeted to object to a takedown by filing a counter-notice. It then undercuts this by suggesting certain government agency requests should never be questioned, but rather complied with immediately.

[G]iven the nature of the content at issue, the aim of such a counter-notice procedure and the additional burden it entails for hosting service providers, there is no justification for recommending to provide such information about that decision and that possibility to contest the decision where it is manifest that the content in question is illegal content and relates to serious criminal offences involving a threat to the life or safety of persons, such as offences specified in Directive (EU) 2017/541 and Directive 2011/93/EU. In addition, in certain cases, reasons of public policy and public security, and in particular reasons related to the prevention, investigation, detection and prosecution of criminal offences, may justify not directly providing that information to the content provider concerned. Therefore, hosting service providers should not do so where a competent authority has made a request to that effect, based on reasons of public policy and public security, for as long as that authority requested in light of those reasons.

These recommendations will definitely cause all kinds of collateral damage, mainly through proactive blocking of content that may not violate any EU law. It shifts all of the burden (and the blame) to tech companies with the added bonus of EU fining mechanisms kicking into gear 60 minutes after a takedown request is sent. The report basically says the EU Commission will never be satisfied by social media company moderation efforts. There will always be additional demands, no matter the level of compliance. And this is happening on a flattened playing field where all illegal content is pretty much treated as equally problematic, even if the one-hour response requirement is limited to "terrorist content" only at the moment.

from the what-the-fuck-twitter? dept

It is something of an unfortunate Techdirt tradition that every time the Olympics roll around, we are alerted to some more nonsense by the organizations that put on the event -- mainly the International Olympic Committee (IOC) -- going out of their way to be completely censorial in the most obnoxious ways possible. And, even worse, watching as various governments and organizations bend to the IOC's will on no legal basis at all. In the past, this has included the IOC's ridiculous insistence on extra trademark rights that are not based on any actual laws. But, in the age of social media, it's gotten even worse. The Olympics and Twitter have a very questionable relationship, as the company has been all too willing to censor content on behalf of the Olympics, while Olympic committees, such as the USOC, continue to believe that merely mentioning the Olympics is magically trademark infringement.

So, it's only fitting that my first alert to the news that the Olympics are happening again was hearing how Washington Post reporter Anna Fifield, who covers North Korea for the paper, had her video of the unified Korean team taken off Twitter based on a bogus complaint by the IOC:

And Twitter complied even though the takedown is clearly bogus. Notice Fifield says that it is her video? The IOC has no copyright claim at all in the video, yet they filed a DMCA takedown over it. The copyright is not the IOC's and therefore the takedown is a form of copyfraud. Twitter should never have complied and shame on the company for doing so. Even more ridiculous: Twitter itself is running around telling people to "follow the Olympics on Twitter." Well, you know, more people might do that if you weren't taking down reporters' coverage of those very same Olympics.

Oh, and it appears that Facebook is even worse. They're pre-blocking the uploads of such videos:

This is fucked up and both the IOC and Facebook should be ashamed. The IOC can create rules for reporters and can expel them from the stadium if they break those rules, but there is simply no legal basis for them to demand such content be taken off social media, and Twitter and Facebook shouldn't help the IOC censor reporters.

from the the-automattic-doctrine dept

Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event. Between last week and this week, we're publishing a bunch of these essays, including this one.

WordPress.com is one of the most popular publishing platforms online. We host sites for bloggers, photographers, small businesses, political dissidents, and large companies. With more than 70 million websites hosted on our service, we unsurprisingly receive complaints about all types of content. Our terms of service define the categories of content that we don't allow on WordPress.com.

We try to be as objective as possible in defining the categories of content that we do not allow, as well as in our determinations about what types of content fall into, or do not fall into, each category. For most types of disputed content, we have the competency to make a judgment call about whether it violates our terms of service.

One notable and troublesome exception is content that is allegedly untrue or defamatory. Our terms prohibit defamatory content, but it's very difficult, if not impossible, for us, as a neutral, passive host, to determine the truth or falsity of a piece of content hosted on our service. Our services are geared toward the posting of longer-form content, and we often receive defamation complaints aimed at apparently well-researched, professionally written blog posts or pieces of journalism.

Defamation complaints put us in the awkward position of making a decision about whether the contents of a website are true or false. Moreover, in jurisdictions outside of the United States, these complaints put us on the hook for legal liability and damages if we don't take the content down after receiving an allegation that it is not true.

Making online hosts and other intermediaries like WordPress.com liable for the allegedly defamatory content posted by users is often criticized for burdening hosts and stifling innovation. But intermediary liability isn't just bad for online hosts. It's also terrible for online speech. The looming possibility of writing a large check incentivizes hosts like Automattic to do one thing when we first receive a complaint about content: Remove it. That decision may legally protect the host, but it doesn't protect users or their online speech.

The Trouble with "Notice and Takedown"

Taken at face value, the notice-and-takedown approach might seem to be a reasonable way to manage intermediary liability. A host isn't liable absent a complaint, and after receiving one, a host can decide what to do about the content.

Internet hosts like Automattic, however, are in no position to judge disputes over the truth of content that we host. Setting aside the marginal number of cases in which it is obvious that content is not defamatory—say, because it expresses an opinion—hosts are not at all equipped to determine whether content is (or is not) true. We can't know whether the subject of a blog post sexually assaulted a woman with whom he worked, if a company employs child laborers, or if a professor's study on global warming is tainted by her funding sources. A host does not have subpoena power to collect evidence. It does not call witnesses to testify and evaluate their credibility. And a host is not a judge or jury. This reality is at odds with laws imputing knowledge that content is defamatory (and liability) merely because a host receives a complaint that content is defamatory and doesn't remove it right away.

Nevertheless, the prospect of intermediary liability encourages hosts to make a judgment anyway, by accepting a complaint at face value and removing the disputed content without any vetting by a court. This process, unfortunately, encourages and rewards abuse. Someone who does not like a particular point of view, or who wants to silence legitimate criticism, understands that he or she has decent odds of silencing that speech by lodging a complaint with the website's host, who often removes the content in hopes of avoiding liability. That strategy is much faster than having the allegations tried in a court, and as a bonus, the complainant won't face the tough questions—Did he assault a co-worker? Did she know that the miners were children? Did he fudge his research?

The potential for abuse is not theoretical. We regularly see dubious complaints about supposedly defamatory material at WordPress.com. Here is a sampling:

A multi-national defense contractor lodged numerous defamation complaints against a whistleblower who posted information about corruption to a WordPress.com blog.

An international religious/charitable group brought defamation charges against a blogger who questioned the organization's leadership.

A large European pharmaceutical firm sought, on defamation grounds, to disable a WordPress.com blog that detailed negative experiences with the firm's products. A court later determined that this content was true.

It's notable that many of the worrying complaints we receive (including all of the examples above) come from large corporations or wealthy individuals and are aimed at small publishers or individual bloggers, who make up the core of our user base.

Of course, valid defamation complaints should be resolved and a system exists for doing so: the complainant can take legal action against the person who posted the content. This process keeps decisions about freedom of expression where they belong—with a court.

Our Approach at Automattic

The threat to legitimate speech posed by the notice and takedown process is behind our policy for dealing with defamation complaints. We do not remove user content based only on an allegation of defamation. We require a court order or judgment on the content at issue before taking action.

The third example above illustrates why we do not honor takedown demands that aren't accompanied by a court order. If we chose not to wait for a court order, but instead eliminated any potential liability by immediately disabling the site, we would have taken an important, and truthful, voice offline.

Our policy is the right one for us, but it can also be costly. We are often sued in defamation cases around the world based on our users' content. At any given time, we have upwards of twenty defamation cases pending against us around the globe. This is an inevitable side effect of our policies, and we try to be judicious about our involvement in the cases that we do see. Some cases result in a quick and straightforward judgment, but others require more fact-finding, and we often face a choice about what our level of involvement should be. Ideally, we want to spend our resources fighting cases that matter, either because there is a serious risk to the freedom of speech of users who want their content to remain online, or because there is a serious risk to the company or our people. We recognize that, as a host, we have the power not only to demand a court order before removing content, but also to play a part in ensuring a fairer adjudication of some disputes by being actively involved in a case. We view this as an important role, both for our users and for the values of free speech, especially in cases where important speech issues are at stake and/or there is a very clear differential in power between the complaining party and our user.

In each lawsuit, we ask ourselves a few questions: What is this case about? Does the user want the content to remain online, and could we make a difference on the user's behalf? What is the blog about? Are there any political or other important speech issues? Is there a potential monetary award against us?

We like to call our rubric for deciding when to step in and help defend our users "The Automattic Doctrine," and the answers to the questions above help us decide how actively to participate in a lawsuit. In our experience, the determinative question is most often whether the user wants to be involved in the defense and work with us to keep their ideas and opinions online.

Our approach ultimately puts the decision about whether content is defamatory, or instead, protected speech, in front of the right decision maker: a neutral court of law. Leaving such important decisions to the discretion of Internet hosts is misplaced and tilts the balance in favor of silencing often legitimate voices.

Paul Sieminski is General Counsel at Automattic. Holly Hogan is Associate General Counsel at Automattic.

from the messy-copyright dept

Today, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that are being discussed at this event. We've published a bunch of essays this week from the conference, and will continue with more next week.

Have you ever wondered why it can be hard to find out what some old paintings look like? Why there seem to be so few pictures of artistic works available from many countries even though they're filled with public sculptures and murals? Or why prices for books and movies can be so wildly different in different countries? The answer is that copyright law is different all over the world, and these differences can make figuring out what to do with these works so difficult or risky that most websites are not willing to have them around at all. This essay talks about a few of these works and why they add a major challenge to content moderation online.

To begin, Wikipedia and the Wikimedia Foundation that hosts it have a mission to host freely available educational content, which means that one of the areas that comes up for us quite often when we receive content moderation requests is whether something is truly free or not. This can come up in a bunch of different ways, and I'd like to talk about a few of them, and why they make it quite difficult to figure out what's really available to the public and what's not.

The first one is old pictures and manuscripts. It's generally accepted that if a work was published before 1923, then it's old enough that the author's rights have expired and the ability to freely copy, share, and remix the work shouldn't be limited by the law anymore. But that raises a couple questions. First, how do you know when something was published, especially back then? There's a whole swath of old pictures and writings that were prepared before 1923 but may have never been published at all until later, which then requires figuring out a different timing scheme or figuring out when the work was published: a sometimes very difficult affair due to records lost during the World Wars and various upheavals around the world over the last century. For just one example, a dispute about an old passport photo recently came down to whether it was taken in Egypt or Syria during a time when those national borders were very fluid. If it had been in Egypt, it would have been given U.S. copyright and protected because it was after 1923, but if it had been in Syria at the time, it would not have been protected because that country wasn't extended recognition for copyrights at the time.
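To make that decision rule concrete, here is a minimal sketch (in Python, with a hypothetical function name, and deliberately oversimplified compared to a real copyright analysis) of the first-pass check described above: published before 1923 means likely public domain in the U.S., while an unknown or missing publication date means the simple rule can't answer and a fuller analysis is needed.

```python
from typing import Optional

# U.S. rule of thumb at the time this piece was written: works
# published before 1923 had fallen into the public domain.
PUBLIC_DOMAIN_CUTOFF = 1923

def likely_us_public_domain(publication_year: Optional[int]) -> Optional[bool]:
    """First-pass check only; real determinations are far messier.

    Returns True/False when the simple publication-date rule applies,
    or None when the work is unpublished or its publication date is
    unknown, in which case a different timing scheme (often based on
    the author's death date) has to be worked out instead.
    """
    if publication_year is None:
        return None  # lost records, unpublished manuscripts, etc.
    return publication_year < PUBLIC_DOMAIN_CUTOFF
```

As the passport photo example shows, even this tiny function's input can be unknowable in practice, and sometimes the answer turns on which country's law applies at all, which is exactly the problem the essay describes.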

A second example is works from countries with broad moral rights. All the works on Wikimedia projects that were made recently are dedicated by their authors to the public domain or licensed under free culture licenses like Creative Commons. However, these sorts of promises only work in some countries. There are international copyright treaties that cover a certain agreed-upon set of protections for every country, but many countries add additional rights on top of the treaties such as what are called moral rights. Moral rights in many countries give the creator the power to rescind a license and they cannot give up that power no matter how hard they try. It ends up looking something like this: "I promise that you can use my work forever as long as you give me attribution, and anyone else can reuse it too, and I want this to be irrevocable so that the public can benefit without having to come back to me." And then a couple years later, it's "oh, sorry, I've decided that I changed my mind, just forget my earlier promise." In some places that works, and because of that possibility, people can't always be sure that the creative works being offered to them are reliable.

A third problem is pictures of artwork. This one applies, though a bit differently, to both new and old works. With new photos of old works, it's a question of creativity. Copyrights are designed to reward people for their original creativity: you don't get a new "life of the author plus 70 years" of protection for making a photocopy. But in some places, they again go past the international rights agreed upon in the copyright treaties and add extra protections. In this case, many countries offer a couple decades' worth of protection for taking a straight-on 2-D photograph of an old work of art. The Wikimedia Foundation is currently in a lawsuit about this with the Reiss Engelhorn Museum in Germany, where the museum argues that photographs on its website are copyrighted even though the only thing shown in the photo is a public domain painting, such as a portrait of Richard Wagner.

The other variation of problems with photos of art is photographs of more recent works out in the public. Did you know that in many places if you're walking in a park and you take a snapshot with a statue in it, you're actually violating someone's copyright? This varies from country to country: some places allow you to photograph artistic buildings but not sculptures or mosaics, other places let you take photographs of anything out in public, and others prohibit photographs of anything artistic even if it's displayed in public. This issue, called freedom of panorama, is one that many Wikimedians are concerned over, and is currently being debated in the European Parliament, but in the meantime can lead to very confused expectations about what sorts of things can be photographed as the answer varies depending on where you are.

The difficulty around so many of these types of works is that they put the public at risk. The works on Wikipedia, and works in the public domain or under free licenses more generally, are supposed to be free for everyone to use. Copyright is built on a balance that rewards authors and artists for their creativity by letting them have a monopoly on who uses their works and how they're used. But the system has become so strong that even when the monopoly has expired and the creator is long dead, or when the creator wants to give their work away for free, it's extremely difficult for the public to understand what is usable and to use it safely and freely as intended. The public always has to be worried that old records might not be quite accurate, or that creators in many places will simply change their minds no matter how many promises and assurances they provide that they want to make something available for the public good.

These kinds of difficulties are one of the reasons why the Wikimedia Foundation made the decision to defer to the volunteer editors. The Wikimedia movement consists of volunteers from all over the world, and they get to decide on the rules for each different language of Wikipedia. This often helps to avoid conflicts: for example, many language editions spoken primarily in Europe choose not to host images that might be allowed under U.S. fair use law, whereas the English-language Wikipedia does allow fair use images. It's difficult for a small company to know all the rules in hundreds of different countries, but individual volunteers from different places can often catch issues and resolve them even where the legal requirements are murky. As just one example, this has actually led the Wikimedia volunteers who deal with photographs to develop one of the most detailed policies for photographs of people of any website (and better than many law textbooks). In turn, volunteers handling so many of the content issues means that the Foundation is able to dedicate time from our lawyers to help clarify situations that do present a conflict, such as the Reiss Engelhorn case or the freedom of panorama issues already mentioned.

That said, even with efforts from many dedicated people around the world, issues like these international conflicts leave some amount of confusion and conflict. These issues often don't receive as much attention because they're not as large as, say, problems with pirated movies, but they present a more pernicious threat. As companies shy away from dealing with works that might be difficult to research or uncertain as to how the law applies to them, the public domain slowly shrinks over time and we are all poorer for it.

from the more-water-on-the-grease-fire-that-is-the-reputation-management-business dept

The reputation management tactic of filing bogus defamation lawsuits may be slowly coming to an end, but there will be a whole lot of reputational damage to be spread among those involved by the time all is said and done.

Richart Ruddie, proprietor of Profile Defenders, filed several lawsuits in multiple states fraudulently seeking court orders for URL delistings. The lawsuits featured fake plaintiffs, fake defendants, and fake admissions of guilt from the fake defendants. Some judges issued judgments without a second thought. Others had second thoughts but they were identical to their first one. And some found enough evidence of fraud to pass everything on to the US Attorney's office.

The Arizona lawsuits... were filed by lawyers Aaron Kelly and Daniel Warner of Kelly / Warner Law, a prominent Internet libel law firm (though some were also linked to Richart Ruddie, Profile Defenders, and a company connected to Profile Defenders).

These two attorneys are now facing a bar complaint because of their actions in the Chinnock v. Ivanski case, another lawsuit with fraudulent legal documents. These were delivered to the court by Kelly and Warner, with their apparent approval of the fraudulent contents. The bar complaint [PDF] details the many falsifications in the lawsuit documents, including the faking of notary public signatures.

[40.] The complaint states Ivanski resides in Turkey and Chinnock resides in Colorado. The complaint states, "[t]he parties purposefully availed themselves of the benefits of Arizona law," but does not explain how the state courts in Arizona have jurisdiction to hear the matter....

[42.] Respondent Warner knew that Krista Ivanski is not a real person. Krista Ivanski was fabricated to serve as defendant in the matter.

[44.] Respondent Warner knew that the 38 allegedly defamatory statements were not posted by the same person.

[45.] Respondent Warner knew that legal action regarding many of the allegedly defamatory statements was barred by the [Arizona one-year] statute of limitations ....

[46.] Alternatively, if Respondent Warner did not know the information in paragraphs 42-45, Respondent Warner failed to investigate the matter prior to filing the complaint.

[...]

[50.] The proposed order is signed by Ivanski and notarized by Amanda Sparks, a notary from Fulton County, Georgia. The Plaintiff's Verification attached to the original complaint and signed by Chinnock was also notarized in Fulton County, Georgia. According to the complaint, neither Ivanski nor Chinnock reside in Georgia.

[51.] There is no notary in Fulton County named Amanda Sparks. A search performed via the Georgia Superior Court Clerk's Cooperative Authority notary search shows no notary in Fulton County named Amanda Sparks. The notarization by Amanda Sparks is a forgery.

[52.] Respondent Warner knew that the notarization by "Amanda Sparks" from Fulton County, Georgia, was a forgery or failed to investigate the matter prior to filing the document.

And that's just the first of two forgeries included in documents in this case. The second forgery also pertains to a fake notary public in yet another state.

[54.] The proposed Amended Order For Permanent Injunction is signed by Ivanski and notarized by "Samantha Pierce," a notary from Colorado. According to the complaint, Chinnock resides in Colorado while Ivanski resides in Turkey.

[55.] There is no notary in Colorado named Samantha Pierce. A notary search performed via the Colorado Secretary of State's website returns "no records found" for notary Samantha Pierce. The notarization by Samantha Pierce is a forgery.

[56.] The notary ID used by Samantha Pierce is 20121234567. The sample notary seal displayed on the Colorado Secretary of State's general notary information page uses notary ID 20121234567.

This is not the end of the complaint's allegations. It also alleges Aaron Kelly's case (Lynd v. Hood) involved fake notarization and fake defendants. The same goes for Gottuso v. Marks, which was handled by Aaron Kelly and involved a fictitious defendant. And so it continues for several more cases handled by Kelly and Warner. It also details a case handled by this law firm involving Richart Ruddie directly. The connection between Ruddie and the law firm goes beyond falsified documents. In this case, Ruddie was targeting posts critical of the Kelly / Warner law firm hosted at Ripoff Report but pretended the posts targeted Ruddie himself.

The lawsuit filed by Ruddie was fraudulent. Jake Kirschner did not post the allegedly defamatory statements. At least one of the statements was posted by an individual named Charles Roderick.

The allegedly defamatory statements are about Respondent Warner, not about Ruddie as alleged in the complaint.

Ruddie filed a fraudulent lawsuit to remove online criticism of his business associate Respondent Daniel Warner.

Respondent Warner knew that Ruddie filed the fraudulent lawsuit to achieve Respondent Warner's goal of removing the online criticism without having to prove the elements of defamation.

Multiple violations of attorney rules of conduct are alleged. So far, it's nothing more than a complaint and is still in need of review by a disciplinary judge. Questions will be raised about Kelly and Warner's complicity in these fraudulent lawsuit schemes. Kelly / Warner have already released a statement claiming the named lawyers knew nothing about the fraudulent claims contained in the documents they served to the court, nor are they required to. (Emphasis in the original.)

Internet defamation attorneys cannot and will not be held to a higher standard of care than normal attorneys. After a quick reading of the ethical rules, the comments thereto, and a case filed by the Texas Attorney General against a reputation management company, it should be evident to any reasonable person that the old saying, “where there is smoke, there is fire” is not necessarily true in the digital age today.

“An advocate is responsible for pleadings and other documents prepared for litigation, but is usually not required to have personal knowledge of matters asserted therein, for litigation documents ordinarily present assertions by the client, or by someone on the client’s behalf, and not assertions by the lawyer.” ER 3.3 cmt 3 (emphasis added).

“The prohibition against offering false evidence only applies if the lawyer knows that the evidence is false. [And] [a] lawyer’s reasonable belief that evidence is false does not preclude its presentation to the trier of fact.” See ER 3.3 cmt 8. “[A] lawyer should resolve doubts about the veracity of testimony or other evidence in favor of the client . . . .” Id. Although the firm practices far within and from “the line,” the comments to the ethical rules indicate that “the line” extends rather far.

So, the law firm is claiming to be another victim of a shady reputation management firm engaged in fraudulent lawsuits for the purposes of removing critical posts from the internet. This defensive statement may let readers know just how far attorneys can wander from due diligence without being slapped with sanctions, but doesn't do much to assure readers the law firm won't turn a blind eye to sketchy legal paperwork if the price is right. The post also throws some shade at Eugene Volokh and Paul Alan Levy with its final sentence, asking clients to let the law firm know if they are contacted by "alleged 'reporters.'"

While this plays out, we can expect the flow of bogus lawsuits to continue to slow. These tactics flew under the radar for a few years, but there are multiple private entities actively engaged in tracking down perpetrators. In addition, the issue has gone federal thanks to a Connecticut judge's decision to forward allegations to the US Attorney's office. I get that people are often disappointed Section 230 immunity doesn't allow them to demand delisting of content they personally find objectionable. The problem isn't with disappointed people, but rather the sketchy reputation management firms that promise (and bill for) stuff they can't legally deliver.

from the sporadic-pushback-coupled-with-routine-acquiescence dept

Facebook continues to increase its stranglehold on news delivery, reducing pipelines of info to a nonsensically-sorted stream for its billions of users. Despite the responsibility it bears to its users to keep this pipeline free of interference, Facebook is ingratiating itself with local governments by acting as a censor on their behalf.

While Facebook has fought back against government overreach in the United States, it seems less willing to do so in other countries. The reporting tools it provides to users are abused by governments to stifle critics and control narratives. And that's on top of the direct line it opens to certain governments, which is used to expedite censorship. That's what's happening in Israel, as Glenn Greenwald reports:

[I]sraeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders:

Shortly after news broke earlier this month of the agreement between the Israeli government and Facebook, Israeli Justice Minister Ayelet Shaked said Tel Aviv had submitted 158 requests to the social media giant over the previous four months asking it to remove content it deemed “incitement.” She said Facebook had granted 95 percent of the requests.

She’s right. The submission to Israeli dictates is hard to overstate: As the New York Times put it in December of last year, “Israeli security agencies monitor Facebook and send the company posts they consider incitement. Facebook has responded by removing most of them.”

This is especially troubling given the context of the Palestinian-Israeli relationship. By favoring Israel's view of "incitement," Facebook is censoring news streams read by Palestinians, giving them a government-approved view of current events. While Facebook is apparently reluctant to take down pro-Israeli calls for violence, it's been moving quickly to delete almost everything Israeli security forces deem "incitement." The info Palestinians see -- filtered through Facebook -- provides a mostly one-sided depiction of ongoing unrest.

What makes this censorship particularly consequential is that “96 percent of Palestinians said their primary use of Facebook was for following news.” That means that Israeli officials have virtually unfettered control over a key communications forum of Palestinians.

This isn't just a "war-torn Middle East" problem. It's everyone's problem. As Greenwald points out, the company -- which was willing to fight for the rights of US citizens -- seems far less willing to do so when the government's target is a foreigner.

Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government. Earlier this week, the company deleted the Facebook and Instagram accounts of Ramzan Kadyrov, the repressive, brutal, and authoritarian leader of the Chechen Republic, who had a combined 4 million followers on those accounts. To put it mildly, Kadyrov — who is given free rein to rule the province in exchange for ultimate loyalty to Moscow — is the opposite of a sympathetic figure: He has been credibly accused of a wide range of horrific human rights violations, from the imprisonment and torture of LGBTs to the kidnapping and killing of dissidents.

But none of that dilutes how disturbing and dangerous Facebook’s rationale for its deletion of his accounts is. A Facebook spokesperson told the New York Times that the company deleted these accounts not because Kadyrov is a mass murderer and tyrant, but that “Mr. Kadyrov’s accounts were deactivated because he had just been added to a United States sanctions list and that the company was legally obligated to act.”

That's all it takes: being placed on a list by a government. It's not that Facebook should become a platform for evil people to spread their message, but that it should take more than a government saying it doesn't like someone for Facebook to start deleting accounts. On top of that, Facebook is handling this in classic Facebook moderation mode:

Others who are on the same sanctions list, such as Venezuelan President Nicolas Maduro, remain active on both Facebook and Instagram.

Sanctions list members should be punished by governments, not private companies. If the US government wants to claim an Instagram account equates to a sanctions violation, it's welcome to make that argument in court. The problem with Facebook is that its actions are consistently inconsistent. It points to a sanctions list it's not even following. It battles overbroad warrants in court, fighting back against baseless intrusions by the government, but grants the government enough credibility to disappear anyone nominated for sanctions by the administration.

Around the world, it continues to treat some governments as more equal than others, and it still seems to prefer access to users over protecting users, especially in countries where censorious actions are the norm. Facebook wants to be all things to all people, but mainly it just wants all people. Sacrificing a few ethical standards is the most expedient choice. While Facebook is welcome to inconsistently apply its moderation standards on its own, it's extremely troubling that it's willing to do the same on behalf of world governments. While both may look like censorship, only the latter actually is. And in the long run, it will be the latter that does the most permanent damage.

from the pressing-the-shut-up-button dept

Nothing But the Truth Films (NBT) has a credibility problem. Oh, the irony, I would normally say, except for the fact NBT deals mostly with this sort of "truth."

We present the black and white facts about the geopolitical climate which include Islam, Illuminati, Freemasonry, Cults and more. See how your freedoms are slowly eroding and spread the message with the help of our channel.

So… that's the kind of "truth" we're dealing with, often pronounced "conspiracy theory." J.K. Sheindlin is the person behind NBT Films and the author of a book that has supposedly blown the minds of Islam adherents everywhere, resulting in them renouncing their faith on camera.

One popular video on NBT's YouTube channel shows a supposed Islamic man angrily and bitterly decrying the religion after having his eyes opened by Sheindlin's book. But the video isn't what it seems: it's actually footage taken from somewhere else, dealing with an entirely different issue, with NBT's fabricated subtitles giving the impression Sheindlin's book has deconverted another follower of Islam.

While the video purports to tell the “black and white facts” about someone renouncing his faith because of Sheindlin’s book, the clip in reality does not capture an Arab’s reaction to a controversial book, nor does it capture that person renouncing his faith on live television. Sheindlin added fabricated captions to the video (while pledging to tell “nothing but the truth”) in order to generate buzz for his book The People vs Muhammad.

This footage is dated 2 July 2013, when Egyptian president Mohamed Morsi rejected the military’s ultimatum to leave office. Opposition activist Ihab al-Khouli, the “Arab guy” in the video displayed above, was reacting to Morsi’s speech…

Last month, the conspiracy channel filed a DMCA copyright complaint requesting that Google delist the Snopes debunking, written by Dan Evon, from its search results. That's according to the Lumen Database, which archives online takedown requests.

The copyrighted work is a video that our company produced, and has been embedded on the following website without our permission. You can see the video embedded on the page, under the section ‘Origin’. We did not give any authorisation for the website ‘Snopes’ to use our video for their news. Therefore, the company Snopes has infringed our copyright.

First off, no one needs permission to embed a YouTube video. If someone wants to prevent others from embedding their videos, they can always turn that option off. Second, Sheindlin's complaint about someone else using "his" video is especially rich considering he's using footage created by someone else without acknowledgment and, on top of that, adding his own subtitles to misconstrue the content of the footage he "borrowed."
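For anyone curious how to check that setting from the outside: YouTube exposes a public oEmbed endpoint that has historically answered with an error status for videos whose owners have turned embedding off. Here's a minimal sketch (Python, using the third-party requests library; the exact endpoint behavior is an assumption worth re-verifying, not a documented guarantee):

```python
import requests

def appears_embeddable(video_url: str) -> bool:
    """Rough check: YouTube's oEmbed endpoint returns HTTP 200 for
    videos that allow embedding, and an error status (e.g. 401) when
    the owner has disabled embedding."""
    resp = requests.get(
        "https://www.youtube.com/oembed",
        params={"url": video_url, "format": "json"},
        timeout=10,
    )
    return resp.status_code == 200

# Example usage:
#   appears_embeddable("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
```

The point being: embeddability is a switch the uploader controls, which makes a copyright complaint over an embed of your own still-public video all the more absurd.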

It appears Sheindlin is now warning people about his bogus subtitle work (he has more videos purporting to show people denouncing their Islamic faith after reading his book). This annotation has been added to the beginning of the bogus faith rejection video.

If you can't see it, the text box says: "SUBTITLES CHANGED FOR PROMOTIONAL PURPOSES."

At long last, Nothing But the Truth Films finally engages in a close approximation of honesty. Refreshing. And once again, someone looks at a tool created to stop copyright infringement and sees a way to silence a critic.

Finally, for comparison purposes, here's the legitimate, unaltered video with the correct translation:

from the wasn't-expecting-that dept

Here's one I certainly didn't expect. A group known for spreading a bunch of bogus RIAA talking points about the evils of YouTube seems to be admitting two odd things: (1) that it's impossible to expect YouTube to accurately police all the content on its site and (2) that sharing entire published news articles is clearly not copyright infringement. The group in question is the "Content Creators Coalition" -- last seen around these parts whining about the DMCA's safe harbors on a site that only exists because of them. And it seems that bizarre and self-contradictory publicity stunts are basically the norm for this group. They've specifically been whining about how one of their videos got taken down on YouTube over an apparent terms of service violation. They complained, YouTube reviewed it, and put the video back up. But the Content Creators Coalition is using this to argue... something about how YouTube is trying to censor criticism?

It really doesn't make much sense, because it actually seems to be a pretty blatant admission by the Content Creators Coalition that their other bugaboo -- about how YouTube doesn't take down infringing content fast enough -- is completely silly as well. Proactively policing the millions upon millions of videos uploaded to the site (for free, mind you) is nearly impossible to do correctly. The article itself (published by the Google-hating News Corp.-owned NY Post) tries to attack YouTube's moderation features, but actually makes the perfect argument for why it's silly to expect an open platform like YouTube to police everything:

While videos of ISIS beheadings somehow slipped past YouTube censors, the video streaming site didn’t have any problems finding a playful ad campaign by some indie musicians — and promptly pulling the plug on it.

Right. Which is why it's great that we can now add the Content Creators Coalition to those who think that forcing YouTube to police and filter content on its platform is silly and will lead to unnecessary and misguided takedowns. Glad to have them on board.

Now, the only reason I even know about this article is that it was sent to me by Eric Jotkoff at Law Media Group. If you don't remember Law Media Group, they're the secretive lobbying PR shop that seems to specialize in attacking Google with really sketchy practices, such as insisting that corn farmers will be hurt by Google partnering with Yahoo, or by publishing faked op-eds, such as one about how awful net neutrality was -- but "written" by a guy who actually was in favor of net neutrality.

And when I say that Jotkoff and Law Media Group sent me that NY Post article, I do mean sent it to me. He sent me the entire article in an email. So that appears to be Law Media Group, on behalf of the Content Creators Coalition, admitting that sending around entire news articles is not infringing. Now, I'd argue that there's a good fair use case to be made for sharing full articles via email in such situations. But I wouldn't really expect a group like Law Media Group, which regularly sends me emails about the importance of stronger copyright on behalf of a whole bunch of groups that all seem to parrot the RIAA's talking points (coincidence, I'm sure?), to basically admit that reposting full articles from companies like News Corp. is fair use.

I've asked Eric to confirm that this is the official stance of the organization, but, perhaps not too surprisingly, I have not heard back at the time of publication.