from the facebook-derangement-syndrome dept

And so we're back with Facebook Derangement Syndrome. As we've noted a few times in the past, many of the freakouts about Facebook's privacy practices involve completely misunderstanding or exaggerating what Facebook actually did -- and presenting things not just in the worst possible light, but in an actively misleading way. This is especially true in the context of privacy questions, where many people seem to interpret Facebook's good decision not to lock down YOUR OWN access to your own data as a bad thing, and then pressure the company to lock that access up, limiting what you can do with your own data.

Of course, there is some amount of inherent conflict between open systems and privacy. Indeed, going back eleven years, we had a post highlighting the potential privacy conflicts of Facebook's "open social graph." And, of course, at the time, Facebook was celebrated for being so open and not locking up everyone's data, but enabling it to be used more widely in other systems.

And that brings us to this week's big NY Times story on Facebook. As we already discussed, what it really highlighted is what a terrible job Facebook does in being open and transparent about how it uses data. But we were also left with some questions about some of the claims in the NYT report, especially regarding the claims that other companies had access to messages.

As more people have looked at it, it increasingly appears that the NY Times reporting on this was really, really bad and contributed to the hysteria rather than improving understanding. The companies that had access to Facebook messages were part of software integrations that let you access Facebook Messenger directly from their apps -- in the same way that if you want to use Facebook Messenger on your mobile phone, you have to give that phone access to your messages so that... you can use FB Messenger.

As Mathew Ingram notes in an article about this, early on, many people rightfully celebrated Facebook's open approach, which involved not locking down data but purposefully exposing it to make the rest of the internet more useful. It was the kind of openness and open integration most people used to celebrate. It was the opposite of building a locked-box silo of your data.

Will Oremus, over at Slate, further notes that the integrations Facebook is now being slammed for in the Times were ones that people were happy about in the past, though perhaps naively.

The companies’ Facebook integrations simply allowed existing customers to log into their Facebook accounts from within the streaming app and use its messaging features without having to navigate to Facebook proper. It’s the sort of arrangement that looks foolhardy or even sinister today but that many internet users took for granted at the time.

I know that because I was one of them. I thought nothing of using Facebook to log into Spotify, because I naïvely trusted Facebook to guard my data, probably more so than I trusted Spotify. I even tested for a while a Mozilla Firefox feature that brought a Facebook feed directly into your browser, as a sidebar, so that you could see what your friends were up to even when you were on other websites. It eventually dawned on me that this was imprudent, and certainly there were some activists at the time who were sounding alarms, but it was hardly a scandal.

None of that is to say that Facebook doesn't have serious problems. As I wrote when the NY Times piece first came out, the company seems to trip over itself to be sneaky and combative in explaining all of this, and it has always done a terrible job of transparently explaining how the data is and can be used.

But we should be focusing on the real issues regarding our privacy online, rather than cooking up bogus issues to argue about. When we focus on the wrong things, inevitably, whatever "solution" is proposed will make things much, much worse.

And, again, there are real issues here. Facebook letting Amazon look at who you know to determine whether or not reviews are allowed... that's a problem. No one was told about that. And that wasn't just about creating integrations to help users do something. That was a questionable sharing of information with a corporate partner, without user permission.

So, we should be able to admit that Facebook has a real privacy problem (and perhaps an even bigger transparency/honesty problem), without immediately jumping on every conspiracy theory about Facebook, when many are not actually accurate.

from the history-repeats-itself dept

As you probably have noticed, there's a growing tide of streaming video services popping up to feed users who want a cheaper, more flexible alternative to traditional cable. By and large this has been a very good thing. It's finally driving some competition for bumbling, apathetic giants like Comcast, forcing them to at least make a feeble effort to improve customer service. It also reflects a belated admission by the broadcast industry that you need to compete with piracy (instead of, say, suing the entire planet and hoping it goes away) by offering users access to cheaper, more flexible viewing options.

But the gold rush into streaming has come with a few downsides. Studies have suggested that every broadcaster on the planet will likely have their own streaming service by 2022. In a bid to drive more subscribers to their service, said broadcasters are increasingly developing their own content, or striking their own content exclusivity deals, and then locking that content in an exclusivity silo. For example, if you want to watch Star Trek: Discovery, you need to shell out $6 a month for CBS All Access. Can't miss House of Cards? You'll need Netflix. Bosch? Amazon Prime. The Handmaid's Tale? Hulu.

Again, on its face this impulse makes perfect sense: you want the kind of content that drives users to your platform. And at first it wasn't all that noticeable, because there were only a handful of services. Even if you subscribed to four of them, you still probably were saving money over your traditional cable bill.

The problem is, as more and more companies jump into the streaming market, users are being forced to subscribe to an ocean of discordant services to get access to the content they're looking for. As users pony up more and more cash for more and more services, it's going to start defeating the purpose of ditching overpriced traditional cable. But instead of going back to cable, back in March we noted that users are just as likely to consider piracy.

And of course that's already starting to happen, with BitTorrent usage seeing some modest but notable bumps, especially overseas. It's minor now, but if you've paid attention to several decades of piracy precedent, it's not hard to predict the outcome of this rush to cordon off everything into far too many exclusivity silos. Disney, for example, is preparing to pull all of its best content off of Netflix (Star Wars, Pixar, Marvel) and make it exclusive to its own streaming platform. In the wake of its acquisition of Time Warner, AT&T is contemplating doing the same thing with old episodes of shows like Friends. You may have noticed a trend:

"Before Netflix got into the Original series game, it made a name for itself by licensing content from other distributors like Warner Bros. TV, Paramount Television, and NBC Universal Television. Licensing deals are great for fans who don’t have cable or are looking to discover new series in full, but now that streaming is king, distributors and production companies have realized that they can make more money by consolidating their content on a single streaming service — hence why Disney, WarnerMedia, DC, and other media companies are creating their own platforms with original content."

You'd be pretty hard pressed to find many people in the streaming or broadcast sector who realize the pitfalls of this gold rush toward streaming exclusivity, even after all of the painful piracy and gatekeeper lessons learned thus far. After all, most industry executives are right that having must-watch exclusive content is necessary to drive subscriber adoption, and that developing original content in-house is a better financial proposition than paying skyrocketing broadcast licensing costs. But few have paused, taken a step back, and considered how the rush to exclusivity at scale could come back to bite the sector at large.

That's thanks, in part, to the weird aversion among most journalists and analysts to even mention piracy in their reports or stories. Most reporters and analysts see even mentioning piracy as some kind of bizarre cardinal sin that implies they somehow advocate for the behavior. This tendency to ignore the elephant in the room is a major reason the industry has such a hard time learning that you have to compete with piracy, not engage in idiotic, counter-productive and often harmful attempts to "cure" it with legislation, lawyers, or an endless parade of terrible ideas.

The old adage that those who fail to learn from history are doomed to repeat it will likely hold true here. If the current trend holds, by 2022 consumers will be forced to subscribe to an absolute universe of $10 to $15 per month services just to get all the content they're looking for, on the presumption the average household has an unlimited amount of disposable income.

If history is any indication, it will take another year or two for the industry to identify and admit this exclusivity parade is driving users back to piracy. At that point, they'll probably burn through a rotating crop of "solutions" (like waging war on password sharing) before coming to this central conclusion: that licensing your content to a sensible but not overwhelming crop of companies actually good at the technical and customer service aspects of streaming (like Netflix) -- instead of everybody and their mother launching their own streaming product -- wasn't such a terrible idea after all.

from the good-to-see dept

Here we go. For years I've been talking about how we really need to move the web to a world of protocols instead of platforms. The key concept is that so much of the web has been taken over by internet giants who have built data silos. There are all sorts of problems with this. For one, when those platforms are where the majority of people get their information, it makes them into the arbiters of truth, which should make us quite uncomfortable. Second, it creates a privacy nightmare where hugely valuable data stores are single points of failure for all your data (even when those platforms have strong security, just having so much data held by one source is dangerous). Finally, it really takes us far, far away from the true promise of cloud computing, which was supposed to be a situation where we separated out the data and the application layers and could point multiple applications at the same data. Instead, we got silos where you're relying on a single provider to host both the data and the application (which also raises privacy concerns).

Despite some people raising these issues for quite some time, there hasn't been much public discussion of them until just recently (in large part, I believe, driven by the growing worries about how the big platforms have become so powerful). A few companies here or there have been trying to move us towards a world of protocols instead of platforms, and one key project to watch is coming from the inventor of the web himself, Tim Berners-Lee. He had announced his project Solid a while back: an attempt to separate out the data layer, allowing end users to control that data and have much more control over what applications could access it. I've been excited about the project, but just last week I commented to someone that it wasn't clear how much progress had actually been made.

Then, last Friday, Berners-Lee announced that he's doubling down on the project, to the point that he's taken a sabbatical from MIT and reduced his involvement with the W3C to focus on a new company to be built around Solid called inrupt. inrupt's new CEO also has a blog post about this, which admittedly comes off as a bit odd. It seems to suggest that the reason to form inrupt was not necessarily that Solid has made a lot of forward progress, but rather that it needs money, and the only way to get some is to set up a company:

Solid as an open-source project had been facing the normal challenges: vying for attention and lacking the necessary resources to realize its true potential. The solution was to establish a company that could bring resources, process and appropriate skills to make the promise of Solid a reality. There are plenty of examples of a commercial entity serving as the catalyst for an open-source project, to bolster the community with the energy and infrastructure of a commercial venture.

And so we started planning inrupt - a company to do just that. Inrupt’s mission is to ensure that Solid becomes widely adopted by developers, businesses, and eventually … everyone; that it becomes part of the fabric of the web. Tim, as our CTO, has committed his time and talent to the company, and I am delighted to be its chief executive. We also have an exceptional investor as part of the team.

I'm certainly hopeful that something significant comes of this, as it truly is an opportunity to move the internet into that kind of more distributed, less centralized/silo'd world that shows off the true power of the web. I have heard some grousing among some people that this is just Tim Berners-Lee rebranding the concept of the Semantic Web that he started pushing nearly two decades ago, without any real traction. And, of course, there have been plenty of other attempts over the decades to build these kinds of systems. As it stands right now, there are a few other projects that are getting some traction, including the more distributed social platform Mastodon or some of the ideas that have come out of IndieWeb.

That said, we may finally be entering an era where both users and companies alike are recognizing the benefits of a more distributed web and the downsides of a more centralized one. So it really does feel like there's an opportunity to embrace these concepts, and it's good to see the founder of the world wide web ramping up his efforts on this. If it produces real, workable solutions, that would obviously be fantastic, but at the very least if it gets more people just thinking about these concepts, that would also be useful. So, this should be seen as big news for anyone concerned about the powers of the largest internet companies (especially if you're skeptical about government trying to step in to deal with those companies when they don't know what they're doing). While the details and implementation will matter quite a bit, it's exciting to see more movement towards a world in which the data layer is not just separated out, but where end users will be able to fully control that layer themselves, and potentially choose which apps can access what (and for how long). It certainly opens up a real opportunity to bring back the early promise of a truly decentralized web... and that would be a web built on protocols rather than centralized, silo'd platforms.

from the nerding-harder-won't-solve-complex-problems dept

Facebook is probably not having a very good week concerning its privacy practices. Just days after it came out that -- contrary to previous statements -- the company was using phone numbers that were submitted to Facebook for two-factor authentication as keys for advertising, earlier this morning the company admitted a pretty massive data breach in which its "view as" tool was allowing users to grab tokens of other users and effectively take over their accounts (even if those users had two-factor authentication enabled).

This is, as they say, "really, really bad." It turned the "view as" feature -- which lets you see how your own page looks to other users -- into a "take over someone else's account" feature. That's a pretty big mistake to make for a product used by approximately half of the entire population of the planet. I'm sure there will be much more on this, but a few hours after the announcement, Facebook had another headache to deal with: numerous reports said that people trying to post articles about this new security mess from either the Guardian or the AP were getting that action blocked, with Facebook's systems saying that the action looked like spam:


Action Blocked

Our security systems have detected that a lot of people are posting the same content, which could mean that it's spam. Please try a different post.

If you think this doesn't go against our Community Standards let us know.

It's not hard to see how this happened, of course. Many times, when a ton of people all start linking to the exact same story, there's a decent chance that it might just be a spam attack. I think even our own spam filter for the Techdirt comments takes something similar into account. Thus, with so many people all posting that link to Facebook, it tripped an algorithmic alarm, leading Facebook to block the posting as possible spam. It appears this block only lasted for a little while, as currently both articles can be posted to Facebook again.
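The kind of frequency heuristic described above can be sketched in a few lines. This is a toy illustration with made-up class names and thresholds -- not Facebook's (or Techdirt's) actual filter, which would weigh many more signals like account age and link reputation:

```python
from collections import deque
import time

class DuplicatePostFilter:
    """Flag a URL as possible spam when too many users post it
    within a short sliding window. Purely illustrative."""

    def __init__(self, max_posts=1000, window_seconds=3600):
        self.max_posts = max_posts
        self.window = window_seconds
        self.recent = {}  # url -> deque of post timestamps

    def allow_post(self, url, now=None):
        now = time.time() if now is None else now
        q = self.recent.setdefault(url, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_posts:
            return False  # "Action Blocked": looks like spam
        q.append(now)
        return True

f = DuplicatePostFilter(max_posts=3, window_seconds=60)
results = [f.allow_post("https://example.com/breach-story", now=t) for t in range(4)]
print(results)  # [True, True, True, False]
```

Note how a breaking news story behaves exactly like a spam run under this heuristic: a flood of identical links in a short window. That's the whole false positive.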

Obviously, given that the content was about a big Facebook security breach, this looks fishy, even if there's a perfectly "logical" explanation for how it happened. But this also gives us yet another opportunity to highlight how ridiculous it is for people to argue that algorithmic content moderation is a reasonable solution. It's always going to mess up, especially when used at scale, and sometimes will do so in incredibly embarrassing ways, such as here.

And, of course, it provides yet another opportunity to highlight the problems of having just a few giant silos collecting and keeping so much data about people. Even if they are very good at security -- and despite arguments to the contrary, Facebook has a strong security team -- there are always going to be vulnerabilities like this, and companies like Facebook are always going to represent huge targets. This seems like yet another reminder that we need to be looking for more solutions to decentralize the web, and move away from giant silos holding onto all of our data.

Tragically, the powers that be are often looking at this the other way: trying to magically "force" big companies to "lock down" data, which actually only increases the value and demands on the silo, while expecting magic algorithms to protect the data. If we're serious about protecting privacy, we need to start looking at very different solutions that don't mean letting the giant internet companies control all this data all the time. Move it out to the ends of the network, let individuals control their own data stores (or partner with smaller third parties who can help with security) and then let those users choose when, how and where to allow the large platforms access to that data (if at all). There are better solutions, but there seems to be little interest in actually making them work.

from the equal-and-opposite-reaction dept

When it comes to the type of traffic the content industries are worried about regarding piracy, the present is no longer the past. You can see this in many ways, such as anti-piracy efforts largely focusing on illicit streaming sites, the trend in laws and takedown notices also targeting streaming sites, and the overall messaging coming out of the copyright industries about how evil streaming sites are with little distinction between the legal and illegal. All of this has been built in part on the realization that bittorrent traffic, the piracy metric of a decade ago, has been steadily dropping in its traffic market share for several years. Combined with a drastic rise in streaming traffic share, the takeaway was that pirates weren't downloading any longer and were instead streaming.

The other side of that conversation is how good, convenient streaming services like Netflix and Amazon Prime Video have taken away some of the impulse for copyright infringement as well. It turns out that if you give the public access to what they want at a reasonable price and make the content easy to get, there's no longer a need to pirate that content. Who knew?

Unfortunately, the past few years have seen a drastic fragmentation of the streaming market. Where there was once the need to have only one or two streaming services to get most of the content you want, exclusivity deals and homegrown content created by the streaming companies themselves have carved out more borders in the streaming industry, oftentimes requiring many streaming services to get the content people now want. And, because every action has an equal and opposite reaction, Canadian broadband management company Sandvine is reporting that bittorrent traffic is suddenly on the rise.

Globally, across both mobile and fixed access networks file-sharing accounts for 3% of downstream and 22% of upstream traffic. More than 97% of this upstream is BitTorrent, which makes it the dominant P2P force. In the EMEA region, which covers Europe, the Middle East, and Africa there’s a clear upward trend. BitTorrent traffic now accounts for 32% of all upstream traffic. This means that roughly a third of all uploads are torrent-related.

Keep in mind that overall bandwidth usage per household also increased during this period, which means that the volume of BitTorrent traffic grew even more aggressively.

That trend is being mirrored in other regions around the world as well. Once thought to be a declining method of filesharing, bittorrent is suddenly making a traffic comeback. And the reason is the fragmentation in the streaming marketplace. Again, there are both real dollar costs and mental transaction costs that come along with signing up for multiple streaming services. Asking a person to subscribe to 3-6 streaming services to get the kind of content package that is now the norm is no different from a bulky cable bill with multiple, complicated packages. In addition, this streaming service bloat also mirrors what cord-cutters have been running away from for years: the cost associated with access to all kinds of content the customer has no interest in. In other words, the streaming industry risks making the exact same mistake the cable industry made, with the exact same result: pushing people to pirate content.
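To put rough numbers on that comparison: the service names and prices below are purely illustrative assumptions, not actual subscription rates, but they show how quickly a stack of "cheap" services adds up to a cable-sized bill:

```python
# Back-of-the-envelope comparison of stacked streaming subscriptions
# against a traditional cable bill. All figures are assumptions for
# illustration only.
streaming_services = {
    "Service A": 12.99,
    "Service B": 10.99,
    "Service C": 12.99,
    "Service D": 9.99,
    "Service E": 14.99,
    "Service F": 12.99,
}
cable_bill = 70.00  # assumed monthly cost of a basic cable package

total = sum(streaming_services.values())
print(f"Six services: ${total:.2f}/month vs. cable at ${cable_bill:.2f}/month")
```

At six services the hypothetical total already exceeds the hypothetical cable bill -- and that's before counting the mental overhead of managing six accounts, six apps, and six search boxes.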

And it's not just me saying so. Sandvine's report reaches the same conclusion.

“More sources than ever are producing ‘exclusive’ content available on a single streaming or broadcast service – think Game of Thrones for HBO, House of Cards for Netflix, The Handmaid’s Tale for Hulu, or Jack Ryan for Amazon. To get access to all of these services, it gets very expensive for a consumer, so they subscribe to one or two and pirate the rest.

“Since these numbers were taken in June for this edition, there were no Game of Thrones episodes coming out, so consider these numbers depressed from peak!” Cullen notes.

None of this is to say that streaming and content companies can't produce exclusive content, or that every content creator should make their content available on every streaming platform. However, when bittorrent traffic picks up, the industry also can't simply fall back on the tired "everybody just wants everything for free!" mantra that's been trotted out so often in the past. It's a direct result of the fragmentation, which is a business model issue.

from the it's-a-start dept

So, just last week we had a post by Kevin Bankston from the Open Technology Institute arguing for some basic steps towards much greater data portability on social media. The idea was that the internet platforms had to make it much easier to not just download your data (which most of them already do), but to make it useful elsewhere. Bankston's specific proposal included setting clear technical standards and solving the graph portability problem. In talking about standards, Bankston referenced Google's data transfer project, but that project has taken a big step forward today, announcing a plan to let users transfer data automatically between platforms.

The "headline" that most folks are focusing on is that Google, Facebook, Microsoft and Twitter are all involved in the project (along with a few smaller companies), meaning that it should lead to a situation where you could easily transfer data between them. As it stands right now, the various services let you download your data, but getting it into another platform is still a hassle, making the whole "download your data" thing not all that useful beyond "oh, look at everything this company has about me." Making a system where you can easily transfer all that data to another platform without having to manage the transition yourself or being left with a bunch of useless data is a big step forward -- and a huge step towards giving users much more significant control over their data.

But the really important thing that this may lead to is not so much about transferring your data between one of the giant platforms, but hopefully in opening up new businesses which would allow you to retain much greater control over your data, while limiting how much the platforms themselves keep. This is something we've talked about in the past concerning the true power of data portability. Rather than having it tied up in silos connected to the services you use, wouldn't it be much better if I could keep a "data bank" of my data in a place that is secure -- and where, if and when I want to, I can allow various services to access that data in order to provide the services I want?

In other words, for many years I've complained about how we've lost the promise of cloud computing by just building up giant silos of data connected to the various online services. If we can separate out the data layer from the service layer, then we can get tremendous benefits, including (1) more end-user control over their own data, (2) more competitive services, and (3) less ability for the biggest platforms to dominate everything. Indeed, we could even start to move towards a world of protocols instead of platforms.
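The "data bank" idea above can be sketched as a user-owned store that hands out scoped, revocable grants to individual apps. This is a toy model to make the separation concrete -- it is not Solid's or any real product's API:

```python
# Toy model of separating the data layer from the service layer:
# the user owns the store; each app gets only the keys it was granted,
# and the user can revoke access at any time.

class PersonalDataStore:
    def __init__(self):
        self._data = {}     # key -> value, held by the user
        self._grants = {}   # token -> set of keys the app may read

    def put(self, key, value):
        self._data[key] = value

    def grant(self, app_name, keys):
        token = f"token-for-{app_name}"
        self._grants[token] = set(keys)
        return token

    def revoke(self, token):
        self._grants.pop(token, None)

    def read(self, token, key):
        if key not in self._grants.get(token, set()):
            raise PermissionError(f"no grant for {key}")
        return self._data[key]

store = PersonalDataStore()
store.put("contacts", ["alice", "bob"])
store.put("messages", ["hi"])

token = store.grant("photo-app", ["contacts"])
print(store.read(token, "contacts"))  # app sees only what it was granted
store.revoke(token)                   # user cuts off access at any time
```

The key design point is that the app never holds the data; it holds a grant, and the grant is the user's to revoke.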

Of course, this is only one step in that direction, but it's a big one. And, yes, it's notable that the big platforms are all working on this together, since it has the potential to undermine their own powerful position. But it's absolutely the right thing for them to do, and hopefully we'll start to see much more interesting services pop up out of this. If it only ends up allowing people to shift between Google and Facebook that will be a failure. If it enables new services and more end user control over data -- forcing various services to compete and provide better value in exchange for accessing our data -- that would be a huge step forward in how the internet functions.

from the the-changing-web dept

I've been talking a lot lately about the unfortunate shift of the web from being more decentralized to being about a few giant silos and I expect to have plenty more to say on the topic in the near future. But I'm thinking about this again after Andy Baio reminded me that this past weekend was five years since Google turned off Google Reader. Though, as he notes, Google's own awful decision making created the diminished use that allowed Google to justify shutting it down. Here's Andy's tweeted thread, and then I'll tie it back to my thinking on the silo'd state of the web today:

Google Reader shut down five years ago today, and I’m still kind of pissed about it.

Google ostensibly killed Reader because of declining usage, but it was a self-inflicted wound. A 2011 redesign removed all its social features, replaced with Google+ integration, destroying an amazing community in the process.

The audience for Google Reader would never be as large or as active as modern social networks, but it was a critical and useful tool for independent writers and journalists, and for the dedicated readers who subscribed to their work.

There are great feedreaders out there — I use Feedly myself, but people love Newsblur, Feedbin, Inoreader, The Old Reader, etc. But Google Reader was a *community* and not easily replaced. Google fragmented an entire ecosystem, for no good reason, and it never recovered.

Many people have pointed to the death of Google Reader as a point at which news reading online shifted from things like RSS feeds to proprietary platforms like Facebook and Twitter. It might seem odd (or ironic) to bemoan a move by one of the companies now considered one of the major silos for killing off a product, but it does seem to indicate a fundamental shift in the way that Google viewed the open web. A quick Google search (yeah, yeah, I know...) is not helping me find the quote, but I pretty clearly remember, in the early days of Google, either Larry Page or Sergey Brin saying something to the effect that the most important thing for Google was to get you off its site as quickly as possible. The whole point of Google was to take you somewhere else on the amazing web. Update: It has been pointed out to me that the quote in question is most likely part of Larry Page's interview with Playboy, in which he responded to the fact that in the early days all of their competitors were "portals" that tried to keep you in with the following:

We built a business on the opposite message. We want you to come to Google and quickly find what you want. Then we’re happy to send you to the other sites. In fact, that’s the point. The portal strategy tries to own all of the information.

Somewhere along the way, that changed. It seems that much of the change was really an overreaction by Google leadership to the "threat" of Facebook. So many of Google's efforts from the late 2000s until now seem to have been designed to ward off Facebook. This includes not just Google's multiple (often weird) attempts at building a social network, but also Google's infatuation with getting users to sign in just to use its core search engine. Over the past decade or so, Google went very strongly from a company trying to get you off its site quickly to one that tried to keep you in. And it feels like the death of Reader was a clear indication of that shift. Reader started in the good old days, when the whole point of an RSS reader was to help you keep track of new stuff all over the web on individual sites.

But, as Andy noted above, part of what killed Reader was Google attempting desperately to use it as a tool to boost Google+, the exact opposite of what Google Reader stood for in helping people go elsewhere. I don't think Google Reader alone would have kept RSS or the open web more thriving than it is today, but it certainly does feel like a landmark shift in the way Google itself viewed its mission: away from helping you get somewhere else, and much more towards keeping you connected to Google's big data machine.

from the a-tale-of-two-clouds dept

The somewhat apocryphal purpose of the early internet was to have a system that could survive a nuclear war by building it in nodes, such that it couldn't be knocked out easily. That distributed and decentralized concept had many other benefits as well. Somewhat famously, 25 years ago, John Gilmore declared, "The Net interprets censorship as damage and routes around it." And there remains some truth to that... in part. But the internet has changed drastically over the decades, and we're now living in the age of the cloud -- which might better be described as the age of the large third party who can be influenced.

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can internet freedom last?

It's a good question, and one that I've been thinking a lot about in the past few years. I think it's an overreaction to blame the concept of "the cloud." Indeed, the idea of moving information onto the internet, rather than keeping it buried on local machines, has some massive benefits, including the ability to access information and services from any device, as well as being able to (sometimes) connect various services together to accomplish much more.

The real problem to me -- and one I've spoken about for many years -- is that today's "cloud" is not the "cloud" we should want. It's become a series of silos. Silos owned by large companies. But there's no reason it needs to remain that way. There is simply no reason we can't build a "cloud" in which end users retain full control over their data. They may allow third party services to access and interact with that data, but it's bizarre how the vision of the "cloud" has turned into a world where it basically just means Google, Microsoft, IBM, Rackspace, or whoever else hosting all your data and retaining all of the control over it, including the control to take it down and make it disappear.

Most of Schneier's piece focuses on Russia's somewhat quixotic effort to shut down Telegram, but notes that what happens next is almost entirely up to a few large internet companies, and how much they'll push back on pressure from Russia (or other governments):

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that internet freedom is increasingly in the hands of the world's largest internet companies. And while freedom may have its advocates—the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban—actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

But it's unfortunate that that is the end result. It's good that there are large companies who will (sometimes) fight these battles for smaller players, but they shouldn't be the last line of defense against the kind of censorship that Russia, China, and other countries seek. For years we've been saying that it's time to rethink the internet and move back toward a more decentralized, distributed world in which this kind of censorship isn't even an issue. It hasn't happened yet, but it feels like we're increasingly moving toward a world in which that's going to be necessary if we want to retain what is best about the internet.

from the meet-the-new-boss dept

By and large, the added competition being levied upon the traditionally apathetic pay TV industry has been a good thing. Though it has taken a decade longer than it probably would have in a healthier market, the rise of streaming competitors has forced incumbent cable companies like Comcast to up their game and at least consider lowering prices, improving abysmal customer service, and offering more flexible video options.

That's not to say the new streaming frontier is going to be without some pretty notable problems. Studies suggest that by 2022, nearly every broadcaster, cable channel, and their mother intends to offer a direct-to-consumer streaming video product. That includes Disney, which later this year will be pulling most of its most popular content (Star Wars, Marvel films, Pixar titles) from services like Netflix and Hulu and hosting it exclusively on its own, new streaming video platform.

On its surface, this improved level of competition is most assuredly a good thing. But as we've noted previously, there's a problem brewing here that most executives and analysts don't seem particularly keyed into. Namely, that once you've cordoned off each broadcaster and content creator's product into countless walled silos -- each requiring its own subscription -- you've not only undermined the biggest benefits of the streaming revolution (lower prices, greater flexibility), you've opened the door to customers getting frustrated and returning to piracy.

It's pretty rare that I see research firms point out this potential pitfall. The Diffusion Group's study is a notable exception: while it predicts every broadcaster will offer its own streaming service by 2022, it notes this could simply increase the number of annoying retrans and carriage feuds (and the resulting blackouts and price hikes) between cable companies and broadcasters:

"These are early signs of an emerging media tribalism," argues Berkley. "Major networks will increasingly reserve their best titles for their own direct-to-consumer services, which will help drive total network DTC subscriptions close to 50 million by 2022."

Berkley warns, however, that DTC strategies come with great risk, especially for TV networks. "The legacy model is built upon decades of comfortable relationships between networks and operators. If networks extract too much high-value content too quickly, channel conflicts are inevitable."

Said "conflicts" could prove particularly interesting now that we're doing away with net neutrality (for, you know, "freedom" or whatever). We've already watched as companies like Viacom blocked entire broadband IP ranges from accessing content during programming disputes with cable operators. News Corp also blocked Cablevision customers during a similar feud. These bad ideas are a two-way street, and without net neutrality protections in place, there's a universe of new, creative ways cable operators and broadcasters can hamstring one another in direct competition, annoying consumers.

Less talked about by analysts is the fact that customers might find piracy a better, simpler alternative if they're forced to subscribe to thirty different services (at $8 to $20 per month each) just to access the movies and TV shows they're looking for. Like CBS did with the new Star Trek: Discovery series, each player in this game wants to hide its best content behind an exclusivity paywall. But customers are already confused and frustrated by constantly shifting licensing deals impacting title availability, and may find piracy simpler than navigating an ocean of exclusivity silos.

Again, the rise of streaming competition and lower-priced, more flexible TV services is a good thing. But if many of these companies aren't careful, it wouldn't take much to shoot this progress squarely in the foot via exclusives or other anti-competitive behaviors emboldened in the wake of federal apathy on net neutrality. The end result could be progress in name only, with customers finally shaking off one bad idea (bloated, expensive bundles of over-priced channels), only to find themselves forced to subscribe to a dozen caveat-laden streaming services just to get the content they're looking for.

To be clear, Mark's statement on the issue is not bad. It's obviously been workshopped through a zillion high-priced PR people, and it avoids all the usual "I'm sorry if we upset you..." kind of tropes. Instead, it's direct, it takes responsibility, it admits error, does very little to try to "justify" what happened, and lists out concrete steps that the company is taking in response to the mess.

We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it.

It runs through the timeline, and appears to get it accurately based on everything we've seen (so no funny business with the dates). And, importantly, Zuckerberg notes that even if it was Cambridge Analytica that broke Facebook's terms of service on the API, that's not the larger issue -- the loss of trust on the platform is the issue.

This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.

The proactive steps that Facebook is taking are all reasonable steps as well: investigating all old apps prior to the closing of the old API to see who else sucked up what data, further restricting access to data, and finally giving more transparency and control to end users:

First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.

Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in 3 months. We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days.

Third, we want to make sure you understand which apps you've allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data. We already have a tool to do this in your privacy settings, and now we will put this tool at the top of your News Feed to make sure everyone sees it.

That's mostly good, though as I explained earlier, I do have some concerns about how the second issue -- locking down the data -- might also limit the ability of end users to export their data to other services.

Also, this does not tackle the better overall solution we mentioned yesterday, originally pitched by Cory Doctorow: open up the platform not to third party apps that suck up data, but to third party apps that help users protect and control their own data. That part is missing, and it's a big part.

If you already hated Zuckerberg and Facebook, this response isn't going to be enough for you (no response would be, short of shutting the whole thing down, as ridiculous as that idea is). If you already trusted him, then you'll probably find this to be okay. But a lot of people are going to fall in the middle and what Facebook actually does in the next few months is going to be watched closely and will be important. Unless and until the company also allows more end-user control of privacy, including by third party apps, it feels like this will fall short.

And, of course, it seems highly unlikely that these moves will satisfy the dozens of regulators around the world seeking their pound of flesh, nor the folks who are already filing lawsuits over this. Facebook has a lot of fixing to do. And Zuckerberg's statement is better than a bad statement, but that's probably not good enough.

Meanwhile, as soon as this response was posted, Zuckerberg went on a grand press tour, hitting (at least): CNN, Wired, the NY Times and Recode. It's entirely possible he did more interviews too, but that's enough for now.

There are some interesting tidbits in the various interviews, but most of what I said above stands. It's not going to be enough. And I'm not sure people will be happy with the results. In all of the interviews he does this sort of weird "Aw, shucks, I guess what people really want is to have us lock down their data, rather than being open" thing that is bothersome. Here's the Wired version:

I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.

But, of course, as we pointed out yesterday (and above), all this really does is lock in Facebook, and make it that much harder for individuals to really control their own data. It also limits the ability of upstarts and competitors to challenge Facebook. In other words, the more Facebook locks down its data, the more Facebook locks itself in as the incumbent. Are we really sure that's a good idea? Indeed, when Wired pushes him on this, he basically shrugs and says "Well, the people have spoken, and they want us to control everything."

I think the feedback that we’ve gotten from people—not only in this episode but for years—is that people value having less access to their data above having the ability to more easily bring social experiences with their friends’ data to other places. And I don’t know, I mean, part of that might be philosophical, it may just be in practice what developers are able to build over the platform, and the practical value exchange, that’s certainly been a big one. And I agree. I think at the heart of a lot of these issues we face are tradeoffs between real values that people care about.

In the Recode interview, he repeats some of these lines, and even suggests (incorrectly) that there's a trade-off between data portability and privacy:

“I was maybe too idealistic on the side of data portability, that it would create more good experiences — and it created some — but I think what the clear feedback from our community was that people value privacy a lot more.”

But... that's only true in the situation where Facebook controls everything. If they actually gave users more control and transparency, then the user can decide how her data is shared, and you can have both portability and privacy.

One other interesting point he raises in that interview: we should not be letting Mark Zuckerberg make all the decisions about what is and what is not okay:

“What I would really like to do is find a way to get our policies set in a way that reflects the values of the community, so I am not the one making those decisions,” Zuckerberg said. “I feel fundamentally uncomfortable sitting here in California in an office making content policy decisions for people around the world.”

“[The] thing is like, ‘Where’s the line on hate speech?’ I mean, who chose me to be the person that did that?,” Zuckerberg said. “I guess I have to, because of [where we are] now, but I’d rather not.”

But, again, that's a choice Facebook is making in becoming the data silo. If end users had more control, and the tools to exercise it, then it's no longer Mark's choice. Open up the platform not for "apps" that suck up users' data, but for tools that allow users to control their own data, and you get a very different result. But that's not where we're moving. At all.

So... so far this is moving in exactly the direction I feared when I wrote about this yesterday. "Solving" the problem isn't going to be solving the problem for real -- and it's just going to end up giving Facebook greater power over our data. That's an unfortunate result.

Oh. And for all of the apologizing and regrets Zuckerberg offers in these interviews, he has yet to explain why, if this is such a big deal, Facebook threatened to sue one of the journalists who broke the story. That seems like an important one for him to weigh in on.