from the opening-the-black-boxes dept

We've just written about calls for a key legal communications system to be open-sourced as a way of rebuilding confidence in a project that has been plagued by problems. In many ways, it's surprising that these moves aren't more common. Without transparency, there can be little trust that a system is working as claimed. In the past this was just about software, but today there's another aspect to the problem. As well as the code itself, there are the increasingly complex algorithms that the software implements. There is a growing realization that algorithms are ruling important parts of our lives without any public knowledge of how they work or make decisions about us. In Germany, for example, one of the most important algorithms determines a person's SCHUFA credit rating: the name is an abbreviation of the German "Schutzgemeinschaft für allgemeine Kreditsicherung", which means "Protection Association for General Credit Security". As a site called Algorithm Watch explains:

SCHUFA holds data on round about 70 million people in Germany. That's nearly everyone in the country aged 18 or older. According to SCHUFA, nearly one in ten of these people living in Germany (some 7 million people) have negative entries in their record. That's a lot!

SCHUFA gets its data from some 9,000 partners, such as banks and telecommunication companies. Incredibly, SCHUFA doesn't believe it has a responsibility to check the accuracy of data it receives from its partners.

In addition, the algorithm used by SCHUFA to calculate credit scores is protected as a trade secret so no one knows how the algorithm works and whether there are errors or injustices built into the model or the software.

So basically, if you are an adult living in Germany, it's a good chance your financial life is affected by a credit score produced by a multimillion euro private company using an automatic process that they do not have to explain and an algorithm based on data that nobody checks for inaccuracies.

A new crowd-sourced project called OpenSCHUFA aims to change that. It's being run by Algorithm Watch and Open Knowledge Foundation Germany (full disclosure: I am an unpaid member of the Open Knowledge International Advisory Council). As well as asking people for monetary support, OpenSCHUFA wants German citizens to request a copy of their credit record, which they can obtain free of charge from SCHUFA. People can then send the main results -- not the full record, and with identifiers removed -- to OpenSCHUFA. The project will use the data to try to understand what real-life variables produce good and bad credit scores when fed into the SCHUFA system. Ultimately, the hope is that it will be possible to model, perhaps even reverse-engineer, the underlying algorithm.
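The modeling step the project has in mind can be sketched in miniature: given enough anonymized (features, score) pairs from volunteers, you can fit an interpretable model and read the coefficients as clues to how the black box weights its inputs. Everything below is hypothetical -- the feature names, the toy "true" scoring rule, and all the numbers are invented for illustration, since the real SCHUFA inputs and weights are secret.

```python
# Toy sketch of reverse-engineering an opaque scoring algorithm from
# crowd-sourced submissions. All feature names and the "true" rule are
# hypothetical stand-ins, not actual SCHUFA variables.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of anonymized submissions

# Hypothetical per-person features that submitters might report
features = np.column_stack([
    rng.integers(0, 5, n),    # number of credit cards
    rng.integers(0, 3, n),    # negative entries on record
    rng.integers(1, 40, n),   # years of credit history
])

# Stand-in for the unknown scoring rule (plus noise) that produced the scores
score = 700 - 60 * features[:, 1] + 2 * features[:, 2] + rng.normal(0, 10, n)

# Fit a linear model: least squares on [intercept | features]
X = np.column_stack([np.ones(n), features])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

for name, c in zip(["cards", "negative_entries", "history_years"], coef[1:]):
    print(f"{name}: {c:+.1f}")
```

A large negative coefficient on `negative_entries` would suggest that variable drives scores down sharply -- exactly the kind of inference about real-life variables that OpenSCHUFA hopes to draw from the submitted records.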

This is an important attempt to pry open one of the major black boxes that are starting to rule our lives. Whether or not it manages to understand the SCHUFA algorithm, the exercise will provide useful experience for other projects to build on in the future. And if you are wondering whether it's worth expending all this money and effort, look no further than SCHUFA's response to the initiative, reported here by netzpolitik.org (original in German):

SCHUFA considers the project as clearly directed against the overarching interests of the economy, society and the world of business in Germany.

The fact that SCHUFA apparently doesn't want people to know how its algorithm works is a pretty good reason for trying to find out.

from the it's-just-entertainment dept

Violent video games have once again found themselves in the role of scapegoat after a recent spate of gun violence in America. After the Florida school shooting, and in the extended wake of the massacre in Las Vegas, several government representatives at various levels have leveled their ire at violent games, including Trump, who convened an insane sit-down between game company executives and those who blame them for all the world's ills. Amid this deluge of distraction, it would be easy to forget that study after study after study has detailed how bunk the notion is that you can tie real-world violence to violent games. Not to mention, of course, that more people are playing more violent video games right now than at any point in the history of the world, and at the same time research shows a declining trend in deviant behavior among teens rather than any sort of upswing.

But a recent study conducted by the Max Planck Institute and published in Molecular Psychiatry further demonstrates the point that violence and games are not connected, with a specific methodology that carries a great deal of weight. The purpose of the study was to move beyond measuring behavioral effects immediately after short, unsustained bursts of game-playing and into the realm of the effects of sustained, regular consumption of violent video games.

To correct for the "priming" effects inherent in these other studies, researchers had 90 adult participants play either Grand Theft Auto V or The Sims 3 for at least 30 minutes every day over eight weeks (a control group played no games during the testing period). The adults chosen, who ranged from 18 to 45 years old, reported little to no video game play in the previous six months and were screened for pre-existing psychological problems before the tests.

The participants were subjected to a wide battery of 52 established questionnaires intended to measure "aggression, sexist attitudes, empathy, and interpersonal competencies, impulsivity-related constructs (such as sensation seeking, boredom proneness, risk taking, delay discounting), mental health (depressivity, anxiety) as well as executive control functions." The tests were administered immediately before and immediately after the two-month gameplay period and also two months afterward, in order to measure potential continuing effects.

Participants in the experimental groups played GTA, The Sims, or no games at all, and the before-and-after tests demonstrated only three significant behavior changes across all participants. That equates to less than 10% of the survey results indicating any significant change. As the Ars post points out, you would expect at least 10% to show significant change just by random chance. Going through the data and the near-complete dearth of any significant behavior changes, the study fairly boldly concludes that there were "no detrimental effects of violent video game play" among the participants.
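The "random chance" point is just the multiple-comparisons problem, and it's easy to sanity-check: run a couple of hundred independent significance tests on pure noise and a conventional p < 0.05 threshold alone will hand you several "significant" results. The test count below is an illustrative round number, not the study's exact figure.

```python
# Back-of-the-envelope check on false positives under multiple comparisons.
# Tests run on pure noise produce p-values uniform on [0, 1], so at a
# threshold of alpha you expect n_tests * alpha spurious "hits".
import numpy as np

rng = np.random.default_rng(1)
n_tests, alpha = 200, 0.05   # illustrative numbers, not the study's exact count

p_values = rng.uniform(0, 1, n_tests)        # p-values under the null
false_positives = int((p_values < alpha).sum())

print(f"expected by chance: {n_tests * alpha:.0f}, observed: {false_positives}")
```

With roughly 200 comparisons you expect about 10 "significant" results from noise alone, so a battery of tests that turns up only three is, if anything, below what chance would predict -- which is the heart of the Ars post's point.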

Were this a fair and just world, this study would be seen as merely confirming what our common sense observations tell us: playing violent games doesn't make someone violent in real life. After all, were that not true, we would see violence rising commensurate with the availability of violent games across a collection of global societies. That simply isn't happening.

So, as America tries to work out its mass-shooting problem, one thing should be clear: whatever list you have in your head about what to blame for the violence, we should be taking video games off that list.

from the I-can't-do-that,-Dave dept

Despite worries about the reliability and safety of self-driving vehicles, the millions of test miles driven so far have repeatedly shown self-driving cars to be significantly safer than their human-piloted counterparts. Yet whenever accidents (or near-accidents) occur, they tend to be blown completely out of proportion by those terrified of (or financially disrupted by) an automated future.

A self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona, Sunday night, according to the Tempe police. The department is investigating the crash. A driver was behind the wheel at the time, the police said.

"The vehicle involved is one of Uber's self-driving vehicles," the Tempe police said in a statement. "It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel."

Uber, for its part, says it's working with Tempe law enforcement to understand what went wrong in this instance:

Our hearts go out to the victim’s family. We’re fully cooperating with @TempePolice and local authorities as they investigate this incident.

Bloomberg also notes that Uber has suspended its self-driving car program nationwide until it can identify what exactly went wrong. The National Transportation Safety Board is also opening an investigation into the death and is sending a small team of investigators to Tempe.

We've noted for years now how, despite a lot of breathless hand-wringing, self-driving car technology (even in its beta form) has proven to be remarkably safe. Millions of AI driver miles have been logged already by Google, Volvo, Uber and others with only a few major accidents. When accidents do occur, they most frequently involve human beings getting confused when a robot-driven vehicle actually follows the law. Google has noted repeatedly that the most common accidents it sees are drivers rear-ending its AI vehicles because the cars actually stopped before turning right on red.

And while there are some caveats for this data (such as the fact that many of these miles are logged with drivers grabbing the wheel when needed), self-driving cars have so far proven to be far safer than even many advocates projected. We've not even gotten close to the well-hyped "trolley problem," and engineers have argued that if we do, somebody has already screwed up in the design and development process.

It's also worth reiterating that early data continues to strongly indicate that self-driving cars will be notably safer than their human-piloted counterparts, who cause 33,000 fatalities annually (usually because they were drunk or distracted by their phones). It's also worth noting that 10 pedestrians have been killed by human drivers in the Phoenix area (including Tempe) in the last week alone, and Arizona had the highest rate of pedestrian fatalities in the country last year. And it's getting worse: Arizona pedestrian deaths rose from 197 in 2016 to 224 in 2017.

We'll have to see what the investigation reveals, but hopefully the tech press will view Arizona's problem in context before writing up their inevitable hyperventilating hot takes. Ditto for lawmakers eager to justify over-regulating the emerging self-driving car industry at the behest of taxi unions or other disrupted legacy sectors. If we are going to worry about something, those calories might be better spent on shoring up the abysmal security and privacy standards in the auto industry before automating everything under the sun.

For years, a slew of shadowy companies have sold so-called encrypted phones, custom BlackBerry or Android devices that sometimes have the camera and microphone removed and only send secure messages through private networks. Several of those firms allegedly cater primarily for criminal organizations.

Now, the FBI has arrested the owner of one of the most established companies, Phantom Secure, as part of a complex law enforcement operation, according to court records and sources familiar with the matter.

Phantom makes phones solely for criminals, unlike Apple or Android manufacturers, who only have a certain percentage of criminals in their userbases. All of these companies may provide the protection of encryption, but only one actively targets a criminal market. Encryption protects everyone, not just criminals, but that fact is usually paved over with subtle-as-10-tons-of-asphalt comments from the FBI director while portraying the FBI as the nation's white knight and cell phone manufacturers as profit-driven sociopaths.

These companies marketing directly to criminals do more to protect data and communications than vanilla smartphones. Remote wipe capability is built in. Often, cameras and microphones are removed, along with GPS software/hardware. It's more security than most people need, but then again, most people aren't cartel members.

The thing is, the FBI director doesn't care if you're law-abiding. He wants your encryption options limited and weakened so the contents can be accessed. This makes your smartphone more susceptible to being accessed by criminals, rather than just G-men. And these criminals accessing your phone will probably have phones the FBI can't even access, even with backdoors or key escrow or easily-cracked encryption. Chris Wray claims this is all about public safety, but he's willing to make the public less safe to gain the access he wants.

While I understand the concern about the inability to access evidence, the fact remains that no solution involving compromised encryption will make the public safer. And the concern itself is overstated and accompanied by smoke-and-mirrors presentations. The FBI points to stacks of locked phones, but says nothing about the many tools at its disposal: phone-cracking companies, judges, contempt charges, good old-fashioned consent requests. Nor does it say whether all cases involving these phones remain at a standstill. The FBI does not argue in good faith, and the access it wants can only be had by sacrificing the security and safety of law-abiding citizens.

from the you-know-it dept

Over at the EFF blog, Joe Mullin has an excellent discussion on why Hollywood is such a vocal supporter of SESTA, despite the bill having nothing to do with Hollywood. It's because the bill actually accomplishes a goal that Hollywood has dreamed about for years: mandatory filtering of all content on the internet.

For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.

The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”

That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet that’s policed by “copyright bots” is what major film studios and record labels have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.

There's a lot more in Mullin's post, but it actually goes much beyond that. Lift up any rock in looking at where SESTA's support has come from, and you magically find Hollywood people scurrying quietly around. We've already noted that much of the initial support for SESTA came from a group whose then board chair was a top lobbyist for News Corp. And, as we reported last month, after a whole bunch of people we spoke to suggested that much of the support for SESTA was being driven by former top News Corp. lobbyist Rick Lane, we noticed that a group of people who went around Capitol Hill telling Congress to support SESTA publicly thanked their "partner" Rick Lane for showing them around.

In other words, it's not just Hollywood seeing a bill that gets it what it wants and suddenly speaking up in favor of it... this is Hollywood helping to make this bill happen in the first place as part of its ongoing effort to remake the internet away from being a communications medium for everyone, and into a broadcast/gatekeeper-dominated medium where it gets to act as the gatekeeper.

And if you think that Hollywood big shots are above pumping up a bogus moral panic to get their way, you haven't been paying attention. Remember, for years Hollywood has also pushed the idea that the internet requires filters and censorship for basically any possible reason. Back during the SOPA days, it focused on "counterfeit pharmaceuticals." Again, not an issue that Hollywood is actually concerned with, but if it helped force filters and stopped user-generated content online, Hollywood was quick to embrace it.

Remember, after all, that the MPAA set up Project Goliath to attack Google, and a big part of that was paying its own lawyers at the law firm of Jenner & Block to write demand letters for state Attorneys General, like Mississippi Attorney General Jim Hood, who sent a bogus subpoena and demand letter to Google (written by the MPAA's lawyers and on the MPAA's bill). And what did Hood complain about to Google in that letter written by the MPAA's lawyers? You guessed it:

Hood accused Google of being “unwilling to take basic actions to make the Internet safe from unlawful and predatory conduct, and it has refused to modify its own behavior that facilitates and profits from unlawful conduct.” His letter cites not just piracy of movies, TV shows and music but the sale of counterfeit pharmaceuticals and sex trafficking.

The MPAA has cynically been using the fact that there are fake drugs and sex trafficking on the internet for nearly a decade to push for undermining the core aspects of the internet. They don't give a shit that none of this will stop sex trafficking (or that it will actually make life more difficult for victims of sex trafficking). The goal, from the beginning, was to hamstring the internet, and return Hollywood to what it feels is its rightful place as the gatekeeper for all culture.

Indeed, our post earlier about Senator Blumenthal's bizarre email against a basic SESTA amendment from Senator Wyden to fix the "moderator's dilemma" aspect was quite telling. He falsely claimed that adding in that amendment -- that merely states that the act of doing some moderation or filtering doesn't append liability to the site for content they fail to filter or moderate (which is the crux of CDA 230's "Good Samaritan" language) -- would create problems for Hollywood. Indeed, a key part of Blumenthal's letter is that this amendment "has the potential to disrupt other areas of the law, such as copyright protections."

But that makes zero sense at all. CDA 230 does not apply to copyright. It doesn't apply to any intellectual property law, as intellectual property is explicitly exempted from all of CDA 230 and has been from the beginning. Nothing in the Wyden amendment changes that. And... it does seem quite odd for Blumenthal to suddenly be bringing up copyright in a discussion about CDA 230, unless it's really been Hollywood pushing these bills all along, and thus in Blumenthal's mind, SESTA and copyright are closely associated. As Prof. Eric Goldman notes, talking nonsensically about copyright in this context appears to be quite a tell by Senator Blumenthal.

from the good-deals-on-cool-stuff dept

Everyone knows you shouldn't be fumbling with your phone while you're driving. Now, you can stay safer with Muse, the Bluetooth device that brings the power and convenience of Amazon Alexa into your car. Just pair it with your smartphone, plug it into your car, and tell Alexa what she can do for you. Muse performs more than 30,000 Alexa skills from playing music and audiobooks to opening the garage door, ordering food, and more. It provides hands-free entertainment with support for Amazon Music, iHeartRadio, and others. The Muse is on sale for $59.99.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

from the say-what-now? dept

Earlier today, in discussing a long list of possible fixes for SESTA, we noted that the only one that even has a remote chance (i.e., the only fix that actually has the potential of being considered by the Senate) is Senator Wyden's amendment, which is designed to solve the "moderator's dilemma" issue by clarifying that merely using a filter or doing any sort of moderation for the sake of blocking some content does not automatically append liability to the service provider for content not removed. Senator Portman -- the sponsor of the bill -- has insisted (despite the lack of such language in the bill) that this is how SESTA should be interpreted. Specifically, Portman stated that SESTA:

...does not amend, and thus preserves, the Communications Decency Act’s Good Samaritan provision. This provision protects good actors who proactively block and screen for offensive material and thus shields them from any frivolous lawsuits.

Except, that's not what the bill actually says. Which is why the language in the Wyden amendment is so important. It basically adds into the law what Portman pretends is already there.

Thus, you would think that Portman and the other Senators backing SESTA should also support the Wyden amendment. They do not. Senator Richard Blumenthal -- who has spent years attacking the internet, and who has already stated that if SESTA kills small internet businesses he would consider that a good thing -- is opposed to the amendment, and sent out a letter supposedly co-signed by other SESTA supporters:

Senators Blumenthal, McCaskill and the other bipartisan sponsors of SESTA oppose the Wyden amendments. These amendments threaten to derail the bill and they would make it even more difficult than current law to hold websites that sexually traffic minors like Backpage.com accountable….The safe harbor amendment would provide websites like Backpage.com with even stronger legal protections than they enjoy today. It also has the potential to disrupt other areas of the law, such as copyright protections. This “bad Samaritan” amendment is not a clarification or a protection for good actors–it is an additional tool to protect traffickers and illegal conduct online.

Here's the problem with that. Almost everything stated above is 100% factually wrong. And not just a little bit wrong. It's so wrong that it raises serious questions about whether Blumenthal understands some fairly fundamental issues in the bill he's backing. Professor Eric Goldman has a pretty concise explanation of everything that's wrong with the statement, noting that it -- somewhat incredibly -- shows that SESTA's main sponsors don't even understand the very basic aspects of CDA 230, as they insist on changing the law.

There are at least three obvious problems with this email. First, the amendment would indeed protect good actors because it would eliminate the Moderator’s Dilemma. The authors of this email still don’t understand, or have decided to ignore, the Moderator’s Dilemma. Second, the proposed amendment would not help Backpage–at all. The Senate Investigative Committee report highlighted voluminous facts about Backpage’s knowledge, so I can’t see how Backpage’s purported filtering would come up in any SESTA/FOSTA enforcement.

Third, the email indicates that the amendment “has the potential to disrupt other areas of the law, such as copyright protections.” This is where the screen freezes, the record scratches, and the narrator says in a deadpan, “No, it wouldn’t.” Section 230(e)(2) expressly carves out “intellectual property” claims–including copyright–from Section 230’s coverage. Anyone with even a basic understanding of Section 230 knows this. Yet, the sponsors, on the eve of a decisive vote with monumental stakes for Section 230, appear to be demonstrating a fundamental misunderstanding of what Section 230 says and does. That is very, very confidence-rattling.

Worse, the email has it precisely backwards. The amendment would HELP, not DISRUPT, copyright protection efforts. If services stuck in the Moderator’s Dilemma decide to turn off proactive moderation efforts, that will include turning off copyright filtering. In other words, SESTA/FOSTA may have the unwanted consequence of encouraging Internet services to do LESS copyright filtering. (This is just one of many examples of my claim that SESTA/FOSTA may counterproductively increase anti-social content). The amendment would fix that by not holding their copyright filtering efforts against Internet services for sex trafficking or prostitution promotion purposes, i.e., by filtering for copyright, the Internet services won’t fear that a court will ask why their filters missed promotions for sex trafficking or prostitution. So if Congress wants to avoid “disrupting” efforts to combat online copyright infringement, the amendment is essential.

This isn't a matter of differing opinions. This is the main backers of a bill to drastically change CDA 230 insisting that (1) their bill does something it does not and (2) a fix to their bill that would bring it into line with what they claim their bill does... actually does a bunch of things it absolutely does not.

At this point, you have to start wondering what the hell is happening in the Senate, and in particular in Senator Blumenthal's office. He is not just doing a big thing badly -- he is gleefully spouting the exact opposite of basic facts about both the existing law, and the bill he sponsored. I know that politicians aren't exactly known for their honesty, but he seems to be taking this to new levels -- and causing massive harm in the process.

from the this-is-bad dept

I'm going to assume that you weren't living in an internet-proof cave this weekend, and caught at least some of the stories about Cambridge Analytica and Facebook. The news first kicked off with the announcement of a data protection lawsuit filed against Cambridge Analytica in the UK on Friday evening (we'll likely have more on that lawsuit soon), followed quickly by an attempt by Facebook to get out ahead of the coming tidal wave by announcing that it was suspending Cambridge Analytica and some associated parties from its platforms, claiming terms of service violations. This was quickly followed on Saturday with two explosive stories. The first, from Carole Cadwalladr at The Guardian, revealing a "whistleblower" from the very early days of Cambridge Analytica (who more or less set up how it works with data profiles) named Christopher Wylie. This was quickly followed up by another story at the NY Times, which was a bit more newsy, providing more details on how Cambridge Analytica got data on about 50 million people out of Facebook.

Admittedly -- much of this isn't actually new. The Intercept had reported something similar a year ago, though it only said it was 30 million Facebook users, rather than 50 million. And that story built on the work of a 2015 (yes, 2015) story in the Guardian discussing how Cambridge Analytica was using data from "tens of millions" of Facebook users "harvested without permission" in support of Ted Cruz's presidential campaign.

There's a lot of heat on this story right now, and a lot of accusations being thrown around, and I'll admit that I'm not entirely sure where I come down on the details yet. I assume people on basically both sides of this issue will scream at me and call me names over this, but there's too much going on to fully understand what happened here. I will note that, in that Guardian story in 2015, Cruz told the publication that this data collecting and targeting effort was "very much the Obama model." And political consultant Patrick Ruffini has a Twitter thread well worth reading, arguing that people are overreacting to much of this, and that the 2012 Obama campaign did the exact same thing and was celebrated for its creative use of data and targeting on the internet. Ad tech guy Jay Pinho makes the same point as well. Here's a Time article from 2012 excitedly talking up how the Obama campaign used Facebook in the same way:

That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.

Of course, there is one major difference between the Obama one and the Cambridge Analytica one, which involves the level of transparency. With the Obama campaign, people knew they were giving their data (and their friends' data) to the cause of re-electing Obama. Cambridge Analytica got its data by having a Cambridge academic (who the new Guardian story revealed for the first time is also appointed to a position at St. Petersburg University) set up an app that was used to collect much of this data, and misled Facebook by telling them it was purely for academic purposes, when the reality is that it was set up and directly paid for by Cambridge Analytica with the intent of sucking up that data for Cambridge Analytica's database. Is that enough to damn the whole thing? Perhaps.

As for the claims that this is just the same old Facebook model of selling everyone's data... that wasn't true then and isn't accurate now. Facebook doesn't sell your data. It sells access to its users via the data it has on you. That may not seem different, but it is. Still, the lines do seem to get a bit blurry, as it appears that Cambridge Analytica, via its partnership with the professor Dr. Aleksandr Kogan (who apparently briefly changed his name to -- I kid you not -- Dr. Spectre) and his "Global Science Research," basically paid people via Amazon's Mechanical Turk to take a "personality assessment" on Facebook that, as part of the process, exposed information about their entire social graph, which GSR apparently hoovered up and passed along to Cambridge Analytica.
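It's worth spelling out the arithmetic of why this kind of friend-list harvesting scales so dramatically. The seed count, average friend-list size, and overlap factor below are hypothetical round figures chosen for illustration, not reported values:

```python
# Rough reach estimate for friend-list harvesting. Every value here is a
# hypothetical round number for illustration, not a reported figure.
seed_users = 270_000    # people paid to take the quiz (hypothetical)
avg_friends = 200       # average friend-list size (hypothetical)
unique_share = 0.92     # fraction of friend entries not duplicated (guess)

reachable_profiles = int(seed_users * avg_friends * unique_share)
print(f"{reachable_profiles:,} profiles exposed by {seed_users:,} quiz takers")
```

Even allowing for heavy overlap between friend lists, a few hundred thousand consenting quiz takers can expose tens of millions of profiles -- which is why the number of people whose data ended up in the database so vastly exceeds the number who ever touched the app.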

At the very least, it can be said that Facebook should have recognized much earlier that this could and would be done, and to understand the potential privacy problems related to it. Facebook has a fairly long and painful history of not quite realizing how what it does impacts people's privacy, and this is one more example.

But it's raising a bigger question as well, and it's one that caused Facebook to do something that I'll definitively call "incredibly stupid": it threatened to sue the Guardian over its story, mainly because the Guardian story refers to this whole mess as a "breach" of Facebook's data.

Facebook instructed external lawyers and warned us we were making 'false and defamatory' allegations. Today they said it was not correct to call this a data breach. We are calling it a data breach. https://t.co/Q8wrw0FDyr

And, of course, Facebook wasn't the only one who threatened to sue. Cambridge Analytica did too:

The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation.

There are issues of terminology here. Facebook, in its post, is adamant that what happened is not a "breach":

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

There are legal reasons why Facebook is so concerned about whether or not this is a "breach" and, let's face it, the company is about to face a million and a half lawsuits over this, not to mention government investigations (already Senator Amy Klobuchar has demanded Mark Zuckerberg's testimony before the Senate and Massachusetts' Attorney General Maura Healey has announced the opening of an investigation, and there have also been rumblings out of the UK and the EU, as well as the FTC). But there are also some fairly important legal obligations if this was a "breach" in the traditional sense, such as disclosing that to those impacted by the breach.

I'm not entirely sure where I come down on the breach question. It doesn't feel like a traditional breach. It wasn't that Facebook coughed up this info; it was its users who coughed up the info... and Facebook just made it easy for this outside "academic" to hoover it all up by paying a bunch of people to take dopey personality quizzes. However, as the Guardian's Alex Hern points out, how do you distinguish what Kogan/GSR/Cambridge Analytica did from social engineering to get information?

If you're having trouble thinking of today's story as a "breach", try rephrasing it in your head as "Facebook fell prey to a social engineering attack in which it was convinced to hand over user data by an attacker who told it what it wanted to hear".

Of course, there is something of a difference: it still wasn't Facebook per se coughing up the info. It was Facebook's own users. And you might even argue that, if you believe Facebook doesn't "own" all this data in the first place, it was actually those Facebook users coughing up a bunch of their own data -- including lots of data about their friends. Needless to say, this is a mess where a lot more transparency might help, and that transparency is going to be forced upon Facebook with a sledgehammer in the near future.

But, regardless of where you come down on all of this, Facebook threatening the Guardian with a defamation suit for calling this a data breach is ludicrous, and Facebook should be ashamed and apologize. Even if it clearly disagrees with how the Guardian characterized much of the story, that's no excuse to whip out defamation threats. Not only is it incredibly stupid from a PR perspective (and makes the company look like a giant bully), it suggests that the company still has absolutely no fucking clue how to communicate with the press and the public about how its own platform works.

It's actually quite incredible to recognize just how big Facebook has gotten in the face of how little it seems to understand about what its own platform does.

from the will-it-be-fixed? dept

It appears that sometime this week (or even possibly today), the Senate is unfortunately likely to vote (perhaps by an overwhelming margin) for SESTA, despite the fact that it's a terribly drafted bill, one whose own supporters cannot explain how it will actually stop sex trafficking. Indeed, it's a bill that many victims' advocates are warning will not just make problems worse, but will put lives in danger. And that's leaving aside all of the damage it will do to free speech and tons of websites on the internet.

Much of this could have been avoided if anyone in Congress were actually interested in understanding how the internet worked, and how to write a bill that actually addressed problems around sex trafficking -- rather than buying into a false narrative (pushed mainly by Hollywood) that the liability protections of CDA 230 were magically responsible for sex traffickers using the internet. Two academics who are probably the most knowledgeable experts on intermediary liability, Daphne Keller at Stanford and Eric Goldman at Santa Clara University, have each posted thoughts on how to "salvage" SESTA. If Congress were serious, it would listen to them. But that's a big "if."

Thread: Half a dozen ways #SESTA-#FOSTA could have been drafted to do less damage to small platforms and lawful speech. Many require zero trade-offs w the law's goal of protecting trafficking victims. (Though notably, many victims' orgs say the law will make things worse.) https://t.co/tjZG72giPV

First up, she takes on the problematic "knowledge" standard used in SESTA/FOSTA. Again, a key part of the bill is that internet sites can become liable if they have "knowledge" of sex trafficking activity on the platform. But what the hell is meant by "knowledge"? In other parts of the law, even where the term is spelled out more fully, there are examples of legal cases lasting years while everyone wrangles over what "knowledge" means. In the copyright context, Viacom sued YouTube and the two were in court for more than half a decade, with much of that fight over the simple question of whether knowledge meant "specific" knowledge or "general" knowledge. SESTA could solve many of its problems if it made its knowledge standard clear -- and, as Keller notes, one that wouldn't require "teams of lawyers."

Indeed, this is perhaps the largest problem with SESTA (and may also doom the bill in court). Prosecutors and the DOJ have already raised concerns about the standards in the bill, and even the politicians supporting it toss out very, very different definitions. Senator Rob Portman has claimed it requires "intent." Meanwhile, Rep. Cathy McMorris Rodgers claims that the standard is "knowingly turning a blind eye." That's... extremely different. Senator Cory Booker claims it's "a high standard" that requires "proving beyond a reasonable doubt." Those all mean very different things, and when the politicians backing the bill are spouting contradictory standards and the law itself doesn't clarify them, you're making a huge mess.

Keller's second suggestion is to add in real and meaningful penalties for bad faith accusers as well as an appeals process for the accused. This is also a big deal. Again, looking at the DMCA, we've talked about how the one part of that law dealing with bad faith accusations is basically toothless and almost never useful. And thus, the DMCA is abused all the time. We have all those lessons to learn from -- and it appears that Congress is ignoring them.

Up next would be a clear statement that the law does not require monitoring all speech. Such a mandatory monitoring system would have tremendous First Amendment issues -- but unfortunately it seems likely that some may read the bill to require mandatory filtering (oddly, others will read it as saying you shouldn't use filters at all to avoid knowledge -- and that dichotomy of results should just emphasize how poorly the bill was drafted).

Fourth, Keller suggests making it clear that merely monitoring should not be deemed knowledge (this could be seen as related to clarifying the knowledge standard as well). On that front, there may be an amendment on the table that could help (see below...).

Fifth: the bill should make it clear that it applies to service providers that are end-user facing, rather than those further up the stack. Again, here's a lesson we've learned from takedowns in the copyright space. As Hollywood got more and more upset about various things online, it moved further and further up the stack, beyond services to hosting companies, data centers, registrars and even ICANN itself. We shouldn't let SESTA enable the same nonsense.

Finally, Keller suggests that if we must go through with such a bad bill, there should at least be transparency requirements for both tech platforms and government agencies, so that we can look back on the bill and determine what it actually did -- both good and bad.

Will Congress take any of these steps? It doesn't look like it.

As for Goldman, his post focuses on an amendment that Senator Wyden is offering. Last I heard, the Senate may actually consider it. The amendment is similar to one that Goldman himself suggested -- a very modest addition to SESTA clarifying the whole question of whether "monitoring" equals "knowledge." Specifically, the amendment would add the following language:

The fact that a provider or user of an interactive computer service has undertaken any efforts (including monitoring and filtering) to identify, restrict access to, or remove material the provider or user considers objectionable shall not be considered in determining the criminal or civil liability of the provider or user for any material that the provider or user has not removed or restricted access to.

As Goldman notes, this one amendment would fix the worst problems of SESTA (while still leaving in place plenty of others). If you at least support making SESTA less horrible, he suggests calling your Senators and letting them know:

If you think this is a meritorious fix to a bad bill, then *immediately* call your Senators (you have 2, remember!) and tell them:

1) You oppose SESTA/FOSTA because it’s not clear the law actually helps sex trafficking victims; and

2) You want your Senator to support Sen. Wyden’s proposed content moderation amendment because it ensures online services will keep being the first line of defense in the fight against sex trafficking.

Note 1: This issue could be moot as early as Monday afternoon, so literally CALL NOW.

It seems quite likely the bill is going to pass very soon and then get signed into law. The fact that there are simple and reasonable ways to improve on the bill, which Congress is blatantly ignoring, is problematic.

from the the-word-on-the-blog dept

This week, following our coverage of the disturbing actions of a cop that led to a high-speed crash killing an infant, one commenter for some reason felt it was time to turn the blame around on the mother, suggesting the death must have been caused by her negligence. A reply from Alexander won first place for insightful:

As an Automotive Engineer who has engineered seats in cars I can tell you for certain that none of them in ordinary vehicles are designed to deal with a 94mph collision. Cars disintegrate at that speed.

Those videos you see for car safety, the super slow motion ones, they occur at ~20mph. Yes, that is how much the seats move at 20mph. At 94mph they disintegrate.

Fastening the straps correctly or not would likely not have changed the outcome at those speeds. The officer is clearly grossly negligent and the mother did not contribute in any significant way to the death of her infant. I say that with the confidence of someone whose signature is still on the approvals for seats still carrying children in cars today.

That cop should have his drivers license cancelled for reckless driving for a decade. If he loses his job, then stiff shit. Then talk about trying him for negligent homicide.
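For what it's worth, the commenter's physics checks out, and the reason is a simple square law: kinetic energy scales with the square of speed, so a 94 mph impact carries roughly 22 times the energy of a ~20 mph crash test. A quick back-of-the-envelope sketch (the function and numbers here are ours, purely illustrative -- not from the comment):

```python
def kinetic_energy_ratio(speed_a_mph: float, speed_b_mph: float) -> float:
    """Ratio of kinetic energy at speed_a vs speed_b.

    KE = 0.5 * m * v**2, so for the same vehicle the mass and the 0.5
    cancel out and only the squared speed ratio remains.
    """
    return (speed_a_mph / speed_b_mph) ** 2

# Compare the crash in the story (94 mph) to a typical slow-motion
# safety-test speed (~20 mph, per the commenter).
ratio = kinetic_energy_ratio(94, 20)
print(f"A 94 mph collision involves ~{ratio:.0f}x the energy of a 20 mph test")
```

That 22x figure is why safety equipment validated at test speeds simply doesn't translate to a 94 mph collision.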

In second place on the insightful side, we've got Dingledore the Previously Impervious using Microsoft's anger about a computer recycler offering Windows recovery disks to highlight the hypocrisy of "copying is theft":

Microsoft - Eats cake, yet still has cake.

Microsoft have spent years explaining that they sell licences, not DVDs of software.

Now, they're apparently selling the DVDs again. If it's a lost sale, where can we buy them?

This is actually self-consistent. The government believes that secure encryption with a Law Enforcement Agency Key ("LEAK") is possible if the technology companies would just "nerd harder," even as the government offers neither reference implementation nor convincing proof that this can be done. Likewise, the government now seemingly believes that the companies could identify, in real time, trolls that the government's own intelligence/surveillance agencies failed to spot. In both cases, the government:

1) Expects the private sector to solve the problem, and is actively demonizing anyone who fails to drop everything to work on the problem;

2) Provides no useful assistance in solving the problem; and

3) Provides no reasonable explanation for why, with its vast resources and supposed subject matter expertise, the government cannot offer useful assistance solving the problem.

Truly, the greatest sign that TD is not filled to the rafters with pure evil is that they have chosen to only apply the code that forces people to read articles they didn't want to on one site, rather than weaponizing it and taking over the world.