Margaret Talbot’s brief note on the death of Justice Scalia is a postscript to her decade-old profile of him. In that profile, Scalia was the man. This was before his most apoplectic dissents in the Obamacare cases, as well as the Windsor and Obergefell cases, which ultimately recognized the universal right to marry. (He was pretty irate in Babbitt, though.)

That profile of Justice Scalia is a wonderful glimpse at Scalia near the zenith of his legal career, but this bit from the postscript really resonated with me:

I saw Justice Scalia speak a number of times, when I was profiling him for the magazine, in 2004 and 2005, and the question he hated most was how he would have ruled on Brown v. Board of Education. Scalia was committed to an originalist approach to jurisprudence, but a literal reading of the Fourteenth Amendment’s guarantee of equal protection would not seem to require a ruling to desegregate schools. […]

To law students who pointed out that it was the flexible, not the originalist approach that enabled Brown and other civil-rights breakthroughs, he’d reply that “Even Mussolini made the trains run on time,” or “Hitler developed a wonderful automobile. What does that prove? I’ll stipulate that you can reach some results you like with the other system. But that’s not the test.” In short, he never did reconcile originalism with Brown. And any legal philosophy that cannot be squared with that moral high point of the modern Supreme Court is fatally flawed.

That’s as beautiful and succinct a metric for any judicial philosophy as I’ve ever read. Of course, sometimes people also reach conclusions you like by applying a philosophy you may not like. As my friend Keith reminded me the other day, I concurred with Scalia’s recent raft of Fourth Amendment opinions. And in law school, there were certainly a handful of opinions in which I agreed with Scalia. It was always traumatic.

Jeffrey Toobin—also in the New Yorker—pulls fewer punches about Scalia’s philosophy and legacy. After a positively scathing indictment of the justice’s neolithic views on homosexuality, Toobin gets to Heller, a gun control case where Scalia read the original text of the Constitution and neatly sidestepped the whole bit about militias:

Scalia spent thousands of words plumbing the psyches of the Framers, to conclude (wrongly, as John Paul Stevens pointed out in his dissent) that they had meant that individuals, not just members of “well-regulated” state militias, had the right to own handguns. Even Scalia’s ideological allies recognized the folly of trying to divine the “intent” of the authors of the Constitution concerning questions that those bewigged worthies could never have anticipated.

None of this would have been remarkable if not for Scalia’s lifelong obsession with the plain language of the Constitution, and the legitimacy he claimed that focus lent his legal opinions. But his inability to explain why an originalist justice would have been on the right side of Brown, and the fact that Scalia abandoned that philosophy when the stakes were highest, mar his legacy.

Regardless, constitutional law classes will be less exciting for want of more Scalia dissents.

You may have heard that Google’s DeepMind, an artificial intelligence, has mastered Go. This is a big deal, because it’s hard to build a computer that’s good at games. In video games, there’s always one particular move that confuses the AI opponent: football games fall for trick plays over and over, racing games have AIs that don’t understand how to overtake other cars safely, and so on. Games are hard, humans are smart, and computers aren’t. Note that computers were perfectly average at traditional games like chess for literally decades.

Sure, computers are great at chess now. Everyone knows that IBM’s Deep Blue supercomputer won a chess match against the reigning world champion Garry Kasparov, but that match was a rematch. The year before, Kasparov handily won his match against Deep Blue. The Deep Blue machine only won the rematch after literally doubling its computing power to improve its brute-force analysis of the outcome of nearly every possible move at once. Deep Blue was one of the 250 most powerful supercomputers in the world at the time. A little more than a decade later, an underwhelming smartphone could run a chess program capable of trouncing all but a handful of players on the planet. Computers got way smarter in a hurry.
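That brute-force approach has a name: minimax search, where the machine plays out every line of the game and assumes both sides play perfectly. Deep Blue’s version was vastly more elaborate (pruning, opening books, custom chips), but the core idea fits in a few lines. Here’s a minimal sketch on a toy take-away game rather than chess; the game and function names are mine, purely for illustration:

```python
# Minimax on a toy game: players alternately take 1-3 stones from a
# pile, and whoever takes the last stone wins. The search exhaustively
# evaluates every possible line of play, just as Deep Blue's
# brute-force core did for chess positions.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    # Each side picks the line of play best for itself.
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best guaranteed outcome for the mover."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))
```

With a pile of five stones, the search finds that taking one stone leaves the opponent facing four, a guaranteed loss. The catch, and the reason Go resisted this approach for so long, is that the tree of possible moves explodes: chess was barely tractable for a supercomputer, and Go’s branching factor is an order of magnitude larger.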

So what happened with this Go thing? Are we in the ‘supercomputer ekes out a win’ stage, or the ‘cellphone checkmates you in thirty seconds’ stage? And how do machines go from one stage to the other?

Today, we have two different post-mortems of the mansplaining that occurs after a woman expresses an opinion. The first is a statistical analysis of the mansplaining prompted by Holly Wood’s rebuttal of some rich guy’s defense of income inequality. You should read Wood’s essay, as well as the analysis, which includes dialectical gems like this:

What is the best way to look like the smartest person in the room without actually saying anything worth noting? Say that both sides are wrong and that having a strong opinion is for overly passionate losers. This is often mixed with tone-policing and repeated efforts to make sure everyone understands they’re not on anyone’s side. You can’t be on a side in a public debate. That’d mean having an opinion that is potentially not just regurgitating the status quo!

“Both sides” is usually just intellectual cowardice disguised as nuance.

The second post-mortem, by Rebecca Solnit, is no less scathing. Solnit wrote an article called Men Explain Lolita to Me; men were apparently honor-bound to educate Solnit after she picked on Esquire for publishing a list of 80 Books Every Man Should Read. A full 79 of those books were written by men, and Solnit pointed out that this:

seemed to encourage this narrowness of experience and I was arguing not that everyone should read books by ladies—though shifting the balance matters—but that maybe the whole point of reading is to be able to explore and also transcend your gender (and race and class and nationality and moment in history and age and ability) and experience being others. Saying this upset some men. Many among that curious gender are easy to upset, and when they are upset they don’t know it (see: privelobliviousness). They just think you’re wrong and sometimes also evil.

It’s tempting to take the cheap shot, the sarcastic nihilistic poke, and say “well, of course. It’s Esquire. This is par for the course.” You could even link to something actually educational about Esquire’s sordid history to prove your point. But that’s still the lazy way out, and Solnit isn’t lazy. This is much better:

Scott Adams wrote last month that we live in a matriarchy because, “access to sex is strictly controlled by the woman.” Meaning that you don’t get to have sex with someone unless they want to have sex with you, which if we say it without any gender pronouns sounds completely reasonable. You don’t get to share someone’s sandwich unless they want to share their sandwich with you, and that’s not a form of oppression either. You probably learned that in kindergarten.

But if you assume that sex with a female body is a right that heterosexual men have, then women are just these crazy illegitimate gatekeepers always trying to get in between you and your rights. Which means you have failed to recognize that women are people, and perhaps that comes from the books and movies you have—and haven’t—been exposed to, as well as the direct inculcation of the people and systems around you. Art matters, and there’s a fair bit of art in which rape is celebrated as a triumph of the will. It’s always ideological, and it makes the world we live in.

From Twitter, some questions from Friend of the Blog Miranda regarding my last post on Zone Shifting:

How does one define location? Where are you “located”, for example, if you’re in the EU but have a credit card with an American address?

And what about a free market argument when you just want to watch something that’s not legally available at that time in that location? Or if it’s not available at all?

Location, location, location

The short answer to the first question is that you’re located in your physical location, and you’re getting that country’s version of Netflix with the stuff Netflix has licensed for that country.

The long answer: every nation sets its own copyright regime with its own copyright law. When you’re in Foreign Countrystan, they decide whether the movie you’re trying to watch has copyright protection or not. That sounds like a terrible idea, and it’s an incredibly terrible idea. In fact, the Western World realized this back in 1886, when people took like three baths a year.

Netflix announced this week that they’re cracking down on the use of VPNs. Among other uses for VPNs, they let users connect to web sites “from” other parts of the world. I’m in New York, but I can use a VPN in Sweden to connect to the Swedish version of Netflix, which has a different selection of TV shows and movies than the American version.

I frequently log in to my Netflix account from an Italian VPN. I like to watch movies in Italian. I am teaching my kids Italian, and I like them to watch their cartoons in Italian. The same cartoons that are on my Netflix USA account are also available on Netflix Italy. But, for some reason, Netflix does not give me the option to change the language to Italian, as it does if I log in through an IP address in Europe. Netflix could easily offer the same shows with the Italian language option in the USA, but for some reason, they would rather not.

Zone shifting is a legitimate use. I can understand that Netflix would rather not let me access “Better Call Saul” from my proxy server. They don’t have U.S. distribution rights to it yet, so technically, if I were to access Better Call Saul on that proxy server, I’d be violating someone’s rights.

A cursory Google search suggests that Randazza coined “Zone Shifting.” I love it.

Last year, I became fairly obsessed with superintelligent artificial intelligences. I dipped a toe into the Iain M. Banks Culture series of books, which are science fiction set in a distant future where humanity has created thousands of godlike AIs to fly their ships and terraform their worlds. I do recommend it.

The next book I read was “Superintelligence: Paths, Dangers, Strategies” by the philosopher Nick Bostrom. Bostrom actually gets paid to think (and write nonfiction!) about artificial intelligence, what it might look like, and when it might arrive. We’ve all seen The Terminator and The Matrix, so you get the gist of how scary the “what” could be.

Raffi Khatchadourian, writing in The New Yorker, has a great review of the book and interview with Bostrom. It’s called The Doomsday Invention, and it covers the “when” of AI. Note that expert consensus on AI is that we’re about twenty years away from being able to create it, and that we’ve been twenty years away for about sixty years.

For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”

In an array of fields—speech processing, face recognition, language translation—the approach was ascendant. Researchers working on computer vision had spent years to get systems to identify objects. In almost no time, the deep-learning networks crushed their records. In one common test, using a database called ImageNet, humans identify photographs with a five-per-cent error rate; Google’s network operates at 4.8 per cent. A.I. systems can differentiate a Pembroke Welsh Corgi from a Cardigan Welsh Corgi.

We’re not going to go extinct tomorrow, next year, or in ten years, but machines are getting exponentially smarter every day. It’s exciting, and only a little scary.