It seems that the storm that has been gathering for several years between the government and the tech industry around privacy, encryption, and the proper role of law enforcement is upon us. Apple has chosen its ground to stand on, and has now been joined — at least in spirit — by many of the other heavyweights of Silicon Valley, including Google, Facebook (and WhatsApp), Twitter, Microsoft, and many more. Broadly, the Valley has closed ranks behind Apple's contention that it should not be compelled to cooperate with the FBI's request to decrypt a locked iPhone from the San Bernardino terrorists.

Since writing last week about why Apple was wrong, I've continued to follow this issue closely, and there are a few observations worth making about how this debate has begun to evolve.


I recently posted this on Medium, which I’ve begun gravitating to for my blogging. In the last 36 hours or so, it’s garnered a couple of thousand views, which kind of blows me away, and has also turned my Twitter Mentions feed into a smoking garbage fire. I’m re-posting it here for posterity.

Following a long fight with the feds, Apple’s Tim Cook issued a sharp public retort to the FBI yesterday in the form of “A Message to Our Customers”:

The United States government has demanded that Apple take an unprecedented step which threatens the security of our customers. We oppose this order, which has implications far beyond the legal case at hand.

The whole letter, which is not long and is worth reading in full, goes on to state Apple's objection to the FBI's request to essentially create a new, custom version of iOS that the agency could use to defeat the security on a recovered iPhone 5C from the San Bernardino terrorists.

The real crux of this question, of course, turns on the core of one of the tech industry’s biggest current controversies: government access (or “surveillance,” depending on your framing of the issue). Should the government — in whatever its form — be able to gain access to data on your smartphone?


The online publishing world is currently grappling with (and/or panicking about) the rise of widespread ad-blocking. Ad-blocking software is deeply problematic, because at its core, it’s essentially a classic shakedown scheme: offer your services to readers by vilifying advertisers and hyping paranoia about “tracking” and “malware,” and then turn around and charge advertisers to “whitelist” ads that they find acceptable.

It's hard to fault readers for not wanting to see ads that are, at a minimum, annoying, and can quickly eat up mobile bandwidth (as beautifully illustrated today by this NYT feature). On the other hand, publishers need a business model in order to create the content people like. Ad-blocker makers offer various rebuttals to this argument, most of which amount to shrugging and saying that isn't their problem.

One alternative that is often proposed involves direct reader micropayments. "Let me pay directly for content," the argument goes, "so you don't have to rely on ads for your site!" This is certainly an interesting idea — indeed, it's one that has been debated, experimented with, and ultimately rejected many times over the years. I have one such experience that, I hope, might prove illustrative.


You can always count on the New York Times to sell eyeballs with Serious Opinions about the chattering class's moral panic of the day. True to form, they published a red-meat op-ed today on how internet platforms like Facebook, Google, Instagram and Twitter are diabolically making money from their users' data, thereby corrupting democracy and otherwise destroying the world. Instead, says UNC's Zeynep Tufekci, "Internet sites" should all build direct subscription options for their users that would allow them to opt out of "tracking," enable encryption, and be treated as customers, not just "users."

This idea is completely unworkable. But understanding why requires you to understand what Facebook – and other social platforms – really are, and what they aren’t.


Two weeks ago (before taking some vacation), I wrote a critique of Glenn Greenwald’s recent TED talk on the importance of privacy in which I accused him of pillorying a silly straw man argument instead of substantively considering the issue. Having had some time to reflect, I feel like it’s only fair for me to better articulate my own view on the actual question that I wish Greenwald had used his platform to address: how is privacy going to work in our modern, wired society, and what will it actually mean for us in practical terms?

In preparation for a speaking engagement I have coming up, I spent some time recently organizing my thoughts on these questions. Like everything else on this blog, they are a work in progress; but here's a summary of what I've come up with so far. In short: individual privacy is certainly important, but not to the exclusion of other public interests.