from the for-whom-the-bell-trolls dept

You will recall that several months back, Valve released a statement outlining what it considered to be sweeping changes to its game curation duties. While the company made a great deal of the forthcoming tools on the Steam store for filtering game searches, pretty much everyone focused on the platform's claim that it would no longer keep any game off its platform unless it was "illegal or a troll game." That, of course, still left all kinds of ambiguity as to what is and is not allowed on the platform, and it provided a wide avenue through which Steam could still drive its oversight truck. This led to our having a podcast discussion in which I pointed out repeatedly that this was every bit as opaque a policy as the one that preceded it, which was followed by the real-world example of developers across the spectrum pointing out that they in fact had no idea what the policy actually meant. In other words, the whole thing has generally been an unproductive mess.

A mess which Valve tried to clean up this past week in an extensive blog post on its site that attempted to define what it meant by "troll games." As the folks at Ars point out, this attempt at clarity is anything but. Much of what Valve lays out as "troll games" makes sense: scam games that exploit Steam's inventory system, or try to manipulate developer Steam keys, or games that are simply broken due to a lack of seriousness on the part of the developer. But then it also said the definition included what most people thought of in the original announcement: games that "just try to incite and sow discord."

Valve's Doug Lombardi said at the time that Active Shooter was removed from Steam because it was "designed to do nothing but generate outrage and cause conflict through its existence." That designation came despite the fact that the developer said the game was "a dynamic SWAT simulator in which dynamic roles are offered to players" and that he would "likely remove the shooter's role in the game by the release" after popular backlash to the idea.

As the developer noted at the time, too, "there are games like Hatred, Postal, Carmageddon and etc., which are even [worse] compared to Active Shooter and literally focuses on mass shootings/killings of people."

It's as good an example as any for pointing out what has always been true about art forms: one person's inflammatory content is another person's artistic genius. More worrisome, Valve's own words on its policy put the company squarely in the business of mind-reading, with its post suggesting that troll developers are those who aren't actually interested in making or selling a game. The determination relies on Valve's own analysis of a developer's "good faith" in putting forth the game.

While good-faith developer efforts can obviously lead to "crude or lower quality games" on Steam, Valve says that "it really does seem like bad games are made by bad people." And it's those bad games from bad people that Valve doesn't want on Steam.

Absent a mind-reading device, determining a developer's motives isn't an easy task. Defining what separates a good faith effort to sell a game from a "troll" involves a "deep assessment" of the developer, Valve says, including a look at "what they've done in the past, their behavior on Steam as a developer, as a customer, their banking information, developers they associate with, and more."

We could spend a great deal of time discussing how qualified Valve is in making these determinations, or what value such curation provides for a platform like Steam. Or we could talk instead about whether this treatment sets video games back a notch or two as an art form, with corporate oversight playing the role of evaluating each artist's intent.

But the real lesson here is that, whatever you think of Valve's definitions above, it is clear as day that these explanations are not in line with the overall message in Valve's original notice of the change in policy. The company explicitly said at that time that it didn't believe it should be in the business of deciding what types of games with what types of content users should see on the platform. The whole point of this was wide inclusion, whereas it is really hard to see any daylight between this updated explanation and Steam's historical curation policy. Valve still gets to decide what goes on the platform.

from the ignorance-or-purposeful-misdirection dept

As the EU gets ready to vote (again) on various amendments to the EU Copyright Directive, there has been an incredibly dishonest push by supporters of the original directive (often incorrectly claiming they're acting in creators' best interests) to argue that the warnings of those who think these proposals are dangerous are misleading. What they are doing is unfortunate, and it deserves to be called out for just how dishonest it is: their arguments usually involve misrepresenting the law and its impact in order to completely misrepresent what will happen.

There are numerous examples of this in practice, but I'll use this article on the German site FAZ as just one example of the kind of rhetoric being used, as it is an impressively intellectually bankrupt version of the argument I'm seeing quite a bit lately. It was written by a guy named Volker Rieck, who has shown up in a bunch of places attacking critics of the EU Copyright Directive. He apparently runs some sort of anti-piracy organization, which perhaps shouldn't be surprising. But that doesn't excuse the sheer dishonesty of his arguments.

Very early in the process, the only MEP from the Pirate Party, Julia Reda, began to fight the propositions. For her campaign, she made very strong use of distortion and simplification. The word "link tax"..., by way of which Reda wanted to stop Article 11 of the policy, may be catchy, but there is something unwittingly comical to the earnest suggestion that there is a tax, collected by the tax office, on using links to online pieces of writing.

This is... odd. The word "tax" is used in a variety of contexts to highlight the excess costs of certain proposals. Nothing about it deliberately suggests a "tax office" will be involved. But the "link tax" is quite real. The whole point of Article 11 is to create a new form of license, requiring certain sites to pay for nearly every use of media content. Let's be clear, because it often gets lost in the discussion: all of this content is already covered by copyright. At issue is whether or not one can link to it and include a short summary of the contents without first having to pay a license above and beyond what one would have to pay to license the content itself. And this is not an ambiguous issue. In the latest draft of the proposal from MEP Axel Voss, it's pretty explicit that the link tax is about "obtaining fair and proportionate remuneration for such uses." The following is directly from the text of Voss's proposed amendment (which is more or less the "default" plan for the Copyright Directive, as he's the main MEP behind the Directive):

Online content sharing service providers perform an act of communication to the public and therefore are responsible for their content and should therefore conclude fair and appropriate licensing agreements with rightholders.

It is absolutely a tax to require a license for such uses. And while Voss has included an escape clause saying that this "does not extend to acts of hyperlinking with respect to press publications," it is left entirely vague how to distinguish when a link with some basic link text is allowed without a license and when it needs to be licensed. Indeed, Voss's only real limitation is that the rules "shall not extend to mere hyperlinks, which are accompanied by individual words." Individual words. What goes beyond "individual"? Considering that "individual" means "single" or "one," it seems clear that under Voss's definition, accompanying a link with two words may subject you to a licensing requirement just to link. This is even worse than the awful German law, which only required licenses on something beyond "short" phrases (and even that was not clearly defined).

Back to the awful FAZ piece:

The polemical buzzword "upload filter", to oppose Article 13 of the policy, wasn't much better. Upload filters are not, and were never, part of the proposal, but the word works well in fueling fears. Indeed, Julia Reda managed to convince some of her supporters that if the policy on copyright law is passed, everything on the internet will be filtered, and memes – yes, those beloved memes – will be forbidden altogether.

The fact that the policy says something completely different was of no more than marginal interest. According to the actual proposal, web platforms – and only web platforms – would have been obliged to enter into license agreements with the individual right owners of user-uploaded content or the copyright collectives by which the content is maintained.

This is particularly galling in just how dishonest it is. Saying that this won't impact users, but merely platforms, is bullshit. How do most users communicate these days? On platforms. And saying that platforms have to license all content ignores that the "cost" of that is then passed along to the users. And that "cost" isn't just monetary. It will, undoubtedly, come in the form of perfectly non-infringing works being taken offline entirely, either because of accidental identification or malicious takedown efforts.

Sure, some people could try to post content on their own sites, but how long will it take until those who support Article 13 move down the stack and argue that hosting companies who allow users to host their own websites are in the same classification as the platforms who are required to obtain licenses under the law?

It gets worse:

In this scenario, it’s the platforms who are responsible for license payments; users have nothing to do with it.

I mean, come on. The platforms are the arbiters of end users' speech in this case. Of course users have everything to do with it. If it's too costly, the platforms will default to blocking the content rather than allowing it. And, again, any costs will be passed on from the platforms to the users in some form or another.

It would simply have meant a duty for the platforms to be transparent in order to comprehensively account for the licensing and to correctly forward the payments to the respective right owners. If a platform didn’t want to enter such a license agreement, the EU policy would at least hold that platform responsible to keep its own website clean. How it achieves that is up to the platform itself, as long as it prevents copyright infringements.

This is also particularly dishonest. If a platform doesn't want to enter into such a license... they would be responsible for keeping their website clean. And how would they possibly do that? They'd be required to pay for an incredibly expensive (and ineffective) upload filter. So to claim that this isn't a proposal for upload filters is utter nonsense.

Also, the whole "it's up to the platform, as long as it prevents copyright infringement" is fantasy land thinking, as if there's some solution that magically stops all copyright infringement. Whoever wrote this is incredibly dishonest or ignorant of how the world works. There is no solution that prevents all copyright infringement -- other than not existing at all.

Unfortunately, though, many of those who have joined the discussion have refused to put in the intellectual effort to read the proposal in its updated form and understand its intention. This goes for everyone all the way from web associations of political parties to journalist Sascha Lobo, who wrote of "censorship machines"... in "der Spiegel". If only they had read what they publicly decry! Then maybe they would have realised that for the first time, users of platforms that don’t license content would have had substantial leverage, including a right to mediation in the case of the blocking of content. At that point, at the latest, it should have become clear that the term "censorship" misses the mark. Perhaps it was simply too complicated to get hold of and understand the current version of the document?

Leverage? What leverage? If the law requires you not to allow any infringement, you have no leverage at all. Second, the concern about censorship is not at all made up. We know it's real because we see it happen all the time under existing notice-and-takedown regimes, which are significantly less extreme and less draconian than what's required under Article 13. The censorship comes from platforms seeking to avoid significant liability (and costly trials). They are incentivized (heavily) into taking down content to avoid the risk and liability. And thus, they will take down lots and lots of content rather than risk it -- especially when held to ridiculous standards like preventing all infringement from appearing on their platforms.

The dishonesty continues:

But let’s talk about the platforms, since they are the ones affected by this. More specifically, let’s talk about one of the most successful platforms: Youtube. It’s exclusively platforms like Youtube that the policy addresses. Not start-ups, not online shops, and not open source platforms.

This is blatantly untrue. As we noted back in July, those behind the EU Copyright Directive explicitly said the opposite. Here's what they said:

Any platform is covered by Article 13 if one of their main purposes is to give access to copyright protected content to the public.

It cannot make any difference if it is a “small thief” or a “big thief” as it should be illegal in the first place.

Small platforms, even a one-person business, can cause as much damage to right holders as big companies, if their content is spread (first on this platform and possibly within seconds throughout the whole internet) without their consent.

That's from the Committee that voted on the Directive. So to say it only targets platforms like YouTube, when the crafters of the law itself say that it applies to small platforms and even one-person businesses, shows just how dishonest supporters are concerning all of this. Separately, it's obvious that it doesn't just apply to YouTube, because YouTube already complies with Article 13 via things like ContentID. To argue that the law is targeting YouTube is ridiculous. Why write an entirely new law just to say "that thing you're already doing, yeah, keep that up"? The author of the FAZ piece then goes on to talk all sorts of nonsense about Content ID.

For years, Youtube has used a system called Content ID, which allows right owners who have uploaded their content to the platform to decide what happens to it if and when it’s used. This ranges from monetarisation – if, for instance, a user uploads a video which includes music, the right owner of that music receives a portion of the video’s ad revenue – to the blocking of the video. Above all else, it’s meant to prevent third parties from making money using other people’s content.

But it gets better still. A system called Copyright Match, which Youtube developed for its channel owners, is just now ready to be put into practice. It is, as it were, a "Content ID" light, and is mainly intended to assist Youtubers in reacting to identical videos. The user who uploaded the video first automatically receives a message and gets to decide what happens to the duplicate, including the possibility to block it.

Is there anybody out there who’d brand this "censorship"? Apparently not – after all, there have been no demonstrations against Content ID and Copyright Match. We haven’t seen public outrage against Youtube’s "censorship machine".

If someone is going to insist that (1) Article 13 only targets platforms like YouTube, even when the authors of the law insist that's not true, and (2) state that no one complains about ContentID takedowns, they have no business arguing that the attacks on the EU Copyright Directive are untruthful. They are ignorant or lying. Neither is a good look.

The rest of the article is out-and-out conspiracy theory talking, including (I kid you not) accusations of George Soros' involvement in fighting against the Copyright Directive. And yet, amazingly, some people are taking this shit seriously. It is not serious. It is blatantly dishonest and should be treated as such.

from the someone-said dept

Anonymity is back in the news in a big way, especially since the New York Times published an explosive opinion piece by an anonymous White House official. Here at Techdirt — proudly one of the few blogs that still allows completely anonymous comments with no sign-up — we've talked about anonymity for a long time in the context of the internet. On this week's episode, Mike and regular co-hosts Dennis Yang and Hersh Reddy talk about the benefits, challenges, and overall importance of anonymous speech.

(a) an offense that has as an element the use, attempted use, or threatened use of physical force against the person or prop­erty of another, or

(b) any other offense that is a felony and that, by its nature, involves a substantial risk that physical force against the person or property of another may be used in the course of committing the offense.

Before holding a lawful permanent resident alien . . . subject to removal for having committed a crime, the Immigration and Nationality Act requires a judge to determine that the ordinary case of the alien's crime of conviction involves a substantial risk that physical force may be used. But what does that mean? Just take the crime at issue in this case, California burglary, which applies to everyone from armed home intruders to door-to-door salesmen peddling shady products. How, on that vast spectrum, is anyone supposed to locate the ordinary case and say whether it includes a substantial risk of physical force? The truth is, no one knows.

The fix is in. And it's almost worse than doing nothing. As C.J. Ciaramella reports for Reason, the proposed fix would add a bunch of crimes not normally thought of as "crimes of violence" to the list of crimes of violence.

Republicans in the House passed a bill this morning that would reclassify dozens of federal crimes as "crimes of violence," making them deportable offenses under immigration law. Criminal justice advocacy groups say the bill, rushed to the floor without a single hearing, is unnecessary, is overbroad, and will intensify the problem of overcriminalization.

The Community Safety and Security Act of 2018, H.R. 6691, passed the House by a largely party-line vote of 247–152. Among the crimes that it would make violent offenses are burglary, fleeing, and coercion through fraud.

Burglary is normally committed when no one's around, separating it from robbery, in which stuff is taken directly from victims, often requiring the use or threat of force. It also adds stalking, arson, "interference with flight crew members and attendants," and "firearms use" [?] to the mix.

But the weirdest addition appears to be a bone tossed to law enforcement. From the bill [PDF]:

The term ‘fleeing’ means knowingly operating a motor vehicle and, following a law enforcement officer’s signal to bring the motor vehicle to a stop—

(A) failing or refusing to comply; or

(B) fleeing or attempting to elude a law enforcement officer.

Car chases are now crimes of violence. Suspects are better off ditching the vehicle and running like they sell drugs in the school zone. Pull over immediately or get evicted from the country. It's a weird thing to throw into a list of crimes known for their inherent violence. Then again, the list of "violent" crimes is already weird -- a seeming overcorrection by Congress to expel as many "permanent" residents from the country as possible. Then there's the insertion of "conspiracy," which makes thinking or talking about the "violent" criminal acts listed a violent crime itself.

The law was unconstitutionally vague prior to this. If this bill is passed, the problem shifts from vagueness to overbreadth. And it very likely will pass. It was rushed through the House on a party line vote, and the party controlling the House will be passing it on to a president (assuming the Senate likes the House's idea) aligned with the controlling party -- one who's partial to legislation that makes it easier to kick out non-Americans while also rubbing the belly of the nation's law enforcement agencies.

from the really-guys? dept

Blackberry, the Canadian company that briefly made semi-popular devices for people at companies thanks to their physical keyboards, has always been more of a patent troll. While the company was on the losing end of one of the most famous pure patent troll cases in the past few decades, we have noted in the past that the very reason the trolling operation NTP sued Blackberry (then RIM) was RIM/Blackberry's own ridiculously aggressive patent shakedowns of other companies, which caught the attention of NTP's principals in the first place. Since the demand for actual devices from Blackberry has shrunk to "wait, those guys still exist?" levels, it's focused again on patent shakedowns.

Back in March, the company sued Facebook, claiming that Facebook was infringing on some fairly basic concepts related to mobile messaging. While there were a number of different patents and claims in the original 117-page complaint, many of them are clearly bonkers. There is no reason why this stuff should be patented at all. Take, for example, US Patent 8,209,634 for "Previewing a new event on a small screen device." Believe it or not, Blackberry has patented adding a little dot showing you how many unread messages you have. Really.

The Blackberry complaint goes on at length about just how amazing and unknown this kind of thing was before this patent (which is utter nonsense):

Given the state of the art at the time of the invention of the ’634 Patent, the inventive concepts of the ’634 Patent were not conventional, well-understood, or routine. The ’634 Patent discloses, among other things, an unconventional and technological solution to an issue arising specifically in the context of wireless communication devices and electronic messaging received within those devices. The solution implemented by the ’634 Patent provides a specific and substantial improvement over prior messaging notification systems, resulting in an improved user interface for electronic devices and communications applications on those devices, including by introducing novel elements directed to improving the function and working of communications devices such as, inter alia, the claimed “visually modifying at least one displayed icon relating to electronic messaging to include a numeric character representing a count of the plurality of different messaging correspondents for which one or more of the electronic messages have been received and remain unread” (claims 1, 7, and 13), “displaying on the graphical user interface an identifier of the correspondent from whom at least one of the plurality of messages was received” (claim 5), and “displaying on the graphical user interface at least one preview of content associated with at least one of the received electronic messages” (claim 6), “[executable / machine-readable] instructions which, when executed, cause the wireless communication device to visually modify the graphical user interface to include an identifier of the correspondent from whom at least one of the plurality of messages was received” (claims 11 and 17), “[executable / machine-readable] instructions which, when executed, cause the wireless communication device to visually modify the graphical user interface to include at least one preview of content associated with at least one of the received electronic messages” (claims 12 and 18).

That's a load of claptrap. The reason icons didn't historically show a number for unread messages had nothing to do with an "unconventional and technological solution," but because the resolution of small screens wasn't good enough to make this viable. Once the technology caught up the very obvious way to display such information became fairly standard fairly quickly. But, alas, Blackberry claims that Facebook is clearly infringing because of this:

What a load of nonsense. There's a lot more like this in the complaint, with patents that clearly never should have been granted, and are likely invalid patents post-Alice.

Anyway, that was all back in March. The reason I'm bringing it up again now is that Facebook has now sued Blackberry for patent infringement in a strikingly similar lawsuit. Indeed, I'd almost think that Facebook's lawyers at Cooley were trolling Blackberry's lawyers by making their complaint 118 pages to Blackberry's 117-page complaint against Facebook. You may recall, back in 2012, that Facebook (with an assist from Microsoft) bought a bunch of patents from a struggling AOL in an effort to keep them out of the hands of trolls. The new suit involves claims that are suspiciously just as stupid and ridiculous as the ones in Blackberry's lawsuit against Facebook.

Again, if there weren't potentially billions of dollars at stake, I'd really think that Facebook was trolling Blackberry with these claims. Take, for example, the claims around US Patent 8,429,231 for "voice instant messaging." The heart of the patent is having a button on a text instant messaging app that allows you to shift the conversation to voice. An image example from the patent:

And... the corresponding image of how Blackberry is supposedly infringing on this patent:

I honestly can't decide which of the two examples above is a stupider patent -- the unread messages bubble or the click-to-call button.

It is entirely possible that, as was done in the good old days of patent nuclear wars, the intention here is just to get the two sides to agree to some sort of cross licensing deal and to walk away from the courthouse -- but what a massive waste of time, money and resources this is, all over some fairly basic UI features that never should have been patented in the first place.

from the good-deals-on-cool-stuff dept

At only half an inch thick, the VogDUO Triple-USB Travel Wall Charger can easily slip into your bag or pocket when you're on the move. Its triple-USB design delivers an impressive 30 watts of power to up to three devices for faster charging. And, with a 270-degree swivel plug, the VogDUO charger can easily fit into any outlet, even on a crowded power strip. It's available in 3 colors and is on sale for $49.99.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

from the TSA:-where-the-'A'-stands-for-'Assholes' dept

Apparently, the intensive training [waits for laughter to subside] TSA agents receive before hitting security checkpoints sends them the message that the more humiliating the search is for the passenger, the safer our skies are. TSA agents can find cash, but not bombs. They can find water bottles, but not weapons. And they can damn sure search the hell out of anyone with a medical condition because those citizens are the most terroristic citizens of all.

Here's the TSA getting into a wrestling match with a 19-year-old woman with a brain tumor on her way to treatment. Here's the thuggish agency searching a three-year-old with a rare cardiovascular disorder. Here are the boys in airport blue splattering the contents of a urostomy bag all over themselves and the person wearing it. Here's the thin blue line between us and air insecurity deciding a portable defib carried by an 85-year-old must be a bomb. Here's the agency deciding agents' inability to read a card informing them about breast implants' ability to set off scanners -- handed to them by a breast cancer patient -- is just part of the TSA's proper screening processes.

Heather Bowser says she travels several times a year and this is the first time she was asked to go through this type of security procedure.

[..]

She was at the Minneapolis/St. Paul International Airport when she said she was told agents would have to swab the top part of her prosthetic leg. “They swabbed the bottom of my leg and my hands,” Bowser said. “Then they said they needed to swab the top of my socket.”

She said she told the agent no. "I said if you have to swab the top of my leg that means I have to take my pants down and I don’t want to do that,” Bowser said. “That’s part of my body. I felt that was dehumanizing.”

Under threat of arrest, Bowser complied. The TSA agents actually told her she could be arrested for their stupidity. Since they couldn't wrap their minds around the concept that a prosthetic leg is just a necessary prosthesis, rather than a weapon of plane destruction, Bowser was forced to comply with their stupid, futile "search" or face losing her liberty on top of her dignity.

Bowser has filed a complaint with the TSA's redress black hole. Meanwhile, the agency has released a defensive statement ahead of the "proper procedures were followed" statement due to arrive sometime in the near future.

TSA is aware of an allegation made by a passenger who was screened at Minneapolis-St. Paul International Airport over the weekend. We are looking into the matter to ensure that our screening protocols were followed. We encourage the passenger to reach out to TSA so we can work with her directly to address her concerns.

Loosely translated, the statement says two things. First, the agency will say all protocols were followed. We know this because that's the conclusion it's reached in every other screening debacle. Second, it says the TSA would prefer humiliated victims of TSA searches file complaints that can be ignored, rather than force it to confront its failures in public.

Revel in your air safety, fellow Americans. All it's cost us is our freedoms and dignity.

from the disinformation-nation dept

We've talked at great length about Facebook's pretty transparent effort to dominate the advertising industry in developing markets. That has come largely via internet.org and the company's "Free Basics" service, which provides a curated selection of Facebook-approved content exempt from mobile usage caps (aka "zero rated"). While Facebook has often hyped this service as a wonderful way to connect impoverished third-world farmers to the internet, net neutrality and gatekeeper concerns resulted in the program being banned in India as part of a growing tide of criticism over the program's less noble aspects.

Many groups (like Mozilla) have pointed out that if Facebook really wants to connect poor people to the internet, they should just connect poor people to the internet, not some curated, AOL-esque version of it where Facebook dictates what content and services users get to see. Others have quite correctly pointed out the perils of conflating such a walled garden with the actual internet, especially in places like Myanmar just emerging from under the umbrella of violent dictatorship where the internet is a relatively new phenomenon with an even more profound impact than usual.

But as a recent BuzzFeed News report notes, Philippine President Rodrigo Duterte has used Facebook -- more specifically Facebook's Free Basics service -- to wage a major disinformation war against his political opponents, shore up support via a cacophony of fake user accounts, and amplify smear campaigns and any number of bogus news reports. And because only Facebook-approved content was exempt from usage caps, users quickly began to see Facebook as the end-all, be-all of connectivity and information, exactly as Facebook designed it.

But Facebook didn't do much of anything to help combat platform abuse, resulting in cultural and political chaos that may just look a little familiar:

"Alongside all this, Duterte and his administration have railed against the mainstream media in the Philippines. Duterte has repeatedly called local news outlets “fake news.” He’s suggested murdered journalists must have “done something” to deserve their fate. Such statements are chilling in a country where as many as 177 media workers have been killed since 1986, according to the National Union of Journalists of the Philippines.

This miasma of inflammatory rhetoric, propaganda, and real and fake news has made a mess of the Filipino political discourse and the Philippines itself. And it’s a mess we’ve seen before."

The difference, of course, is that in the States users at least tend to have access to the actual internet, allowing them to find alternative viewpoints. That's not quite as easy when your version of the "internet" is primarily a walled garden dictated by Facebook; a walled garden that at least in its earlier incarnations went so far as to ban access to encrypted services while giving governments a wonderful new repository for personal data. The end result was that all of the problems we've seen in the States were amplified and made even worse in the Philippines:

"Facebook’s Internet.org effort has floundered embarrassingly in more than half a dozen nations and territories. But in the Philippines, the social media capital of the world according to global media agency We Are Social, Facebook rushed into a culture that unquestioningly assimilated it.

"We were seduced, we were lured, we were hooked, and then, when we became captive audiences, we were manipulated to see what other people — people with vested interests and evil motives of power and domination — wanted us to see,” de Lima wrote to BuzzFeed News. “It was a slow takeover of our attention. We didn’t notice it until it was already too late."

Now that may have happened anyway, but when users can only afford to use Facebook's version of the internet, you can see how the problem could be compounded. On the plus side, Facebook does at least seem to be showing some indication that it now recognizes its own initial naivete as it bumbled toward attempting to dominate the developing world's ad markets:

"Facebook has made the world more connected than ever before, resulting in unprecedented ways for people to organize themselves in society,” a Facebook spokesperson said in response to a list of detailed questions sent by BuzzFeed News. “We know we were too idealistic about the nature of these connections and didn’t focus enough on preventing abuse or thinking through all the ways people could use the tools on the platform to do harm."

We're still pretty far from hammering out a solution to the global disinformation problem, or from determining the real breadth of such operations and their real impact on elections. But there does at least seem to be a growing consensus forming that when rich, white Westerners attempt to dominate markets they don't really understand with "help" that may not actually be all that helpful -- it's possible to do more harm than good.

from the be-careful-what-you-wish-for dept

On Wednesday, the EU Parliament will vote yet again on the EU Copyright Directive and a series of amendments that might fix some of the worst problems of the Directive. MEP Julia Reda has a detailed list of many of the proposals and what they would do to the current proposals on the table. While there are a few attempts to "improve" Articles 11 and 13, many of those improvements are, unfortunately, very limited in nature, and will still create massive problems for the way the internet works.

Unfortunately, as with the situation earlier this year, many groups claiming to represent content creators are arguing in support of the original proposals, and spreading pure FUD about the attempts to fix them. Author Cory Doctorow has a very thorough debunking of each of their talking points. Here's just a snippet:

Niall says that memes and other forms of parody will not be blocked by Article 13's filters, because they are exempted from European copyright. That's doubly wrong.

First, there are no EU-wide copyright exemptions. Under the 2001 Copyright Directive, European countries get to choose zero or more exemptions from a list of permissible ones.

Second, even in countries where parody is legal, Article 13's copyright filters won't be able to detect it. No one has ever written a software tool that can tell parody from mere reproduction, and such a thing is so far away from our current AI tools as to be science fiction (as both a science fiction writer and a Visiting Professor of Computer Science at the UK's Open University, I feel confident in saying this).

But there's an even larger point that makes it so incredibly frustrating that we've been seeing content creators claim to support the existing draft in order to get back at Google and Facebook. And it's that these rules will lock in the giant internet companies as the only major internet platforms and block out any new upstarts that might compete with them. Cory explains it this way:

Niall says Article 13 will not hurt small businesses, only make them pay their share. This is wrong. Article 13's copyright filters will cost hundreds of millions to build (existing versions of these filters, like Youtube's Content ID, cost $60,000,000 and only filter a tiny slice of the media Article 13 requires), which will simply destroy small competitors to the US-based multinationals.

What's more, these filters are notorious for underblocking (missing copyrighted works -- a frequent complaint made by the big entertainment companies...when they're not demanding more of these filters) and overblocking (blocking copyrighted works that have been uploaded by their own creators because they are similar to something claimed by a giant corporation).

Niall says Article 13 is good for creators' rights. This is wrong. Creators benefit when there is a competitive market for our works. When a few companies monopolise the channels of publication, payment, distribution and promotion, creators can't shop around for better deals, because those few companies will all converge on the same rotten policies that benefit them at our expense.

We've seen this already: once Youtube became the dominant force in online video, they launched a streaming music service and negotiated licenses from all the major labels. Then Youtube told the independent labels and indie musicians that they would have to agree to the terms set by the majors -- or be shut out of Youtube forever. In a market dominated by Youtube, they were forced to take the terms. Without competition, Youtube became just another kind of major label, with the same rotten deals for creators.

I'd argue that Cory's explanation even understates the problem here. The very design of these laws is to limit competition. What is often ignored in these discussions is that the record labels, movie studios and publishers pushing for these laws have always viewed the world in a particular way: one in which they "negotiate" against other big companies for how to best split up the pie. They don't want to negotiate with smaller companies. They want just a few companies they can negotiate with -- ideally with the law in their favor, so they can pressure that small list of companies to do their bidding. They certainly don't care what's in the best interests of actual creators, because their entire reason for being has been to take as much money out of actual creators' pockets and keep it for themselves.

The idea that Article 11 and Article 13 will, in any way, help creators rather than legacy gatekeepers is laughable. The idea that they will somehow harm the internet giants is equally laughable. Those companies can deal with it. What the Directive will do is take upstart competitors out of the equation entirely and significantly reduce negotiating leverage for creators. In the recent past, when creators didn't like the deals offered by the major labels, publishers and studios, internet platforms offered them an excellent alternative, giving them negotiating power. But, with the EU Copyright Directive, those third-party platforms will be limited, and thus actual creators will have much less negotiating leverage, many fewer options, and will get pushed back into exploitative contracts with the legacy gatekeepers. It's unfortunate, then, that at least some have been led to believe these rules are actually in their interest, when they will do significant harm to them instead.