
MY ARTICLES

CHRISTIAN SCIENCE MONITOR

“It’s time for an about-face on facial recognition” (with Woodrow Hartzog), Christian Science Monitor, June 25, 2015

Until recently, concerns over facial recognition technologies were largely theoretical. Only a few companies could create databases of names and faces large enough to identify significant portions of the population by sight. These companies had little motivation to widely exploit this technology in invasive ways.

Unfortunately, things are changing – and fast. The tech industry appears poised to introduce new facial recognition products and does not seem to take seriously concerns over personal identification. In addition to downplaying the important role biometrics play in modern data security schemes, industry is ignoring the importance of maintaining obscurity in our day-to-day lives.

When parents and school officials in New Jersey discovered that educational publisher Pearson had recently monitored students’ Twitter accounts during standardized testing periods, an uproar ensued. Parents were alarmed, and one superintendent called it “disturbing.”

“If predictive algorithms can craft the best e-mails, we’re all in big trouble,” Christian Science Monitor, April 27, 2015

It claims to be “the biggest improvement to e-mail since spell-check.”

Sounds impressive, right?

The huge leap forward promised by Crystal Project Inc., maker of the newly released Crystal app, is to give its users real-time insight into the recipients of their e-mails. In the blink of an eye, it can examine the recipient’s online data trail and turn that information into a detailed personality profile. It will even offer suggestions for how to better communicate with the person you’re writing to.

“Why you have a right to obscurity” (with Woodrow Hartzog)
An interview with Julie Brill, Christian Science Monitor, April 15, 2015

Some people argue that the Digital Age has eviscerated obscurity. They say shifts in the technological and economic landscapes have forever changed society.

Their argument is that a tipping point has occurred; it’s now too late to stop others from collecting, aggregating, and analyzing nearly every aspect of our data trail, and profiting from a steady stream of intrusive privacy invasions.

Effective solutions for ending – or even lessening – the abuse and online harassment that happens on the Web have been elusive, to say the least. But the cause did receive a strong endorsement recently when Twitter, Reddit, and Facebook all took harder stances against deplorable behavior on their platforms. What’s more, over the past 18 months, 14 states have criminalized so-called “revenge porn,” the act of posting nude images online without someone’s consent.

“Why domestic drones stir more debate than ones used in warfighting abroad”
An interview with John Kaag

The use of drones domestically has sparked heated debate around the potential threats to both privacy and safety. The digital rights group Electronic Frontier Foundation warns that drones “raise significant issues for privacy and civil liberties” since they are capable of “highly advanced surveillance.” In terms of commercial use, the Federal Aviation Administration has proposed rules to limit where drones can fly.

“What is intellectual privacy and how yours is being violated”
An interview with Neil Richards, Christian Science Monitor, February 25, 2015

Privacy may be one of the biggest casualties of the Digital Age. If it’s not the government that’s storing records of cellphone calls, it’s advertisers tracking our every online click. While limiting government and corporate snooping is now a matter of heated debate, the notion of guarding intellectual privacy has yet to generate much fuss.

“Frank Pasquale unravels the new machine age of algorithms and bots”
An interview with Frank Pasquale, Christian Science Monitor, January 28, 2015

Slate recently said Frank Pasquale’s new book, “The Black Box Society: The Secret Algorithms That Control Money and Information,” attempts to “come to grips with the dangers of ‘runaway data’ and ‘black box algorithms’ more comprehensively than any other book to date.”

I recently spoke with Pasquale about his new book and about how algorithms play a major role in our everyday lives — from what we see and don’t see on the Web, to how companies and banks classify consumers, to influencing the risky deals made by investors.

Privacy advocates have long been pushing for laws governing how schools and companies treat data gathered from students using technology in the classroom. Most now applaud President Obama’s newly announced Student Digital Privacy Act to ensure “data collected in the educational context is used only for educational purposes.”

But while young students are vulnerable to privacy harms, things are tricky for college students, too. This is especially true as many universities and colleges gather and analyze more data about students’ academic — and personal — lives than ever before.

By now you’ve probably read something about the hacked Sony Pictures Entertainment e-mails containing confidential information, including unflattering remarks made by Sony executives. Stories with titillating gossip about celebrities are everywhere. Even Time peddled an “outrageous” schadenfreude listicle.

The voyeuristic coverage of dirty laundry not meant to be publicly aired is morally revolting. It’s a predictable, yet deplorable way to bring eyeballs to pages and screens. So long as ill-gotten, petty gossip stories run in high-profile places, there’s incentive for future hackers to get their kicks and settle beefs illegally.

“Google’s action on revenge porn opens the door to the right to be forgotten in the US” (with Woodrow Hartzog), The Guardian, June 25, 2015

Google’s recent decision to delist “revenge porn” from its search results is a big deal, and not just for victims. Beyond opposing harmful conduct that disproportionately targets women, Google has essentially demonstrated how something akin to the European Union’s right to be forgotten can, and should, work in the US.

“Automating walking is the first step in a dystopian nightmare,” Wired UK, May 20, 2015

Back in ancient Greece, Plato told the story of a man who gazed so intently at the stars that he walked right into a well. Supposedly, Thales was “so eager to know the things in the sky that he could not see what was there before him at his very feet”. Flash forward to today, and scientists have finally found a way for us to avoid the fate of an absent-minded professor. But before we get too excited about enhanced navigation, we should pause to consider why commercial applications might encourage us to embrace something awful: an alienated relation to our own embodiment.

“Robot Servants Are Going to Make Your Life Easy. Then They’ll Ruin It,” Wired, September 5, 2014

Jibo, the “world’s first family robot,” hit the media hype machine like a bomb. From a Katie Couric profile to coverage in just about every outlet, folks couldn’t get enough of this little robot with a big personality, poised to bring us a step closer to the world depicted in “The Jetsons,” where average families have maids like Rosie. In the blink of an eye, pre-orders climbed past $1.8 million and blew away the initial fundraising goal of $100,000.

“You’ve Been Obsessing Over Your Likes and Retweets Way Too Much,” Wired, June 9, 2014

The digital age version of the proverbial tree falling in the woods question is: Does something exist if it hasn’t been liked, favorited, linked to, or re-tweeted? According to many tech critics, the tragic answer is no. Like Lady Gaga, we live for the applause. But if constantly chasing other people’s approval is a shallow way to live that leads to time and energy being wasted over pleasing others and recurring feelings of insecurity and emptiness, how can we course correct?

The Golden Age of universities may be dead. And while much of the commentary around the online disruption of education ranges from cost-benefit analyses to assessing ideology of what drives MOOCs (massively open online courses), the real question becomes — what is the point of the university in this landscape?

It’s clear that universities will have to figure out the balance between commercial relevance and basic research, as well as how to prove their value beyond being vehicles for delivering content. But lost in the shuffle of commentary here is something arguably more important than and yet containing all of these factors: culture.

While I am far from a Luddite who fetishizes a life without tech, we need to consider the consequences of this latest batch of apps and tools that remind us to contact significant others, boost our willpower, provide us with moral guidance, and encourage us to be civil. Taken together, we’re observing the emergence of tech that doesn’t just augment our intellect and lives — but is now beginning to automate and outsource our humanity.

But let’s take a concrete example. Instead of doing the professorial pontification thing we tech philosophers are sometimes wont to do, I talked to the makers of BroApp, a “clever relationship wingman” (their words) that sends “automated daily text messages” to your significant other. It offers the promise of “maximizing” romantic connection through “seamless relationship outsourcing.”

“The ‘Mood Graph’: How Our Emotions Are Taking Over the Web,” Wired, August 19, 2013

Recently, URL shortener Bitly announced a beta version of its tool for “Feelings”, a “fun bookmarklet to express how you feel about the content you’re sharing”. Its tagline, however, is even more telling: “Because you don’t ‘like’ everything.” This is a subtle jab at Facebook’s Like button, even though Facebook, too, provided a way earlier this year for its users to more broadly express how they feel by selecting from a dropdown menu of options (happy, sad, tired, etc.) and emoji.

All of this is especially interesting when you consider the most recent research finding, released this past week, that Facebook may “provide an invaluable resource for fulfilling the basic human need for social connection,” but “rather than enhancing well-being … [it] may undermine it.”

Oh, the irony: Facebook keeps expanding the emotional bandwidth of its interface, yet its users are still depressed.

The new ads for Facebook Home are propaganda clips. Transforming vice into virtue, they’re social engineering spectacles that use aesthetic tricks to disguise the profound ethical issues at stake. This isn’t an academic concern: Zuckerberg’s vision (as portrayed by the ads) is being widely embraced — if the very recent milestone of half a million installations is anything to go by.

Let’s face it: Technology and etiquette have been colliding for some time now, and things have finally boiled over if the recent spate of media criticisms is anything to go by. There’s the voicemail, not to be left unless you’re “dying.” There’s the e-mail signoff that we need to “kill.” And then there’s the observation that what was once normal — like asking someone for directions — is now considered “uncivilized.”

Cyber-savvy folks are arguing for such new etiquette rules because in an information-overloaded world, time-wasting communication is not just outdated — it’s rude. But while living according to the gospel of technological efficiency and frictionless sharing is fine as a Silicon Valley innovation ethos, it makes for a downright depressing social ethic.

“Will autocomplete make you too predictable?” BBC Future, January 15, 2015

Do you know what you really want? Right now, there are computers all over the world busily trying to tell you the answer – often before you know yourself.

If you’ve bought books or music on Amazon, watched a film on Netflix or even typed a text message, then these mind-reading machines may have steered you to that choice by making recommendations. These predictive algorithms work by finding patterns in our previous behaviour and making inferences about our future desires – and they are everywhere.

“Google vs. our humanity: How the emerging ‘Internet of Things’ is turning us into robots,” Salon, May 22, 2014

According to a new Pew Research Center report, by the time 2025 rolls around the Internet of Things will dramatically improve our lives. Janna Anderson, co-author of the document, says experts expect “positive change in health, transportation, shopping, industrial production and the environment.” While these are genuine possibilities, I’m worried that insufficient attention is being paid to a troubling issue that goes beyond potential privacy problems: the moral cost of outsourcing our decisions to increasingly interconnected smart devices.

More than 20 years ago, David Foster Wallace lamented that television had co-opted irony, using the medium to flatter viewers into believing they were smarter than the rest of the naïve public – all the while lulling them into consuming more and more of the products advertised on television, just like everyone else. While irony perhaps has gotten an unduly bad rap, Wallace was absolutely right to worry about the manner in which the entertainment-industrial complex has been doling out winks to the viewer. Today the very tools that appear to dilute the power of advertising only reinforce its authority — an issue that’s especially troubling, given the fuzzy line protecting independent content in the digital age.

Critics haven’t been kind to Personal Dating Assistants, a new service that allows men to up their online dating game by outsourcing tasks to paid, clandestine wingmen who pimp profiles, locate prospects and ghostwrite correspondences. GQ calls it “creepy.” CNET says customers eventually will have to admit they are big fakes. And over at Jezebel, dudes who take advantage of the deception are called “human trash.”

Unfortunately, Personal Dating Assistants is a sign of things to come. Thanks to technology, we’ll be seeing more opportunities to degrade ourselves and others through outsourcing activities that are basic to our humanity.

“Fighting Facebook, A Campaign for a People’s Terms of Service” (with Ari Melber and Woodrow Hartzog), The Nation, May 22, 2013

Facebook is on the defensive again. Members of the social networking site sued the company for co-opting their identities in online ads, and Facebook agreed to revise its “Statement of Rights and Responsibilities” and offer a $20 million settlement. The case has drawn less attention than the dorm disputes portrayed in “The Social Network,” but the impact is far wider. An underpublicized aspect of the dispute concerns the power of online contracts, and ultimately, whether users or corporations have more control over life online.

When Facebook Inc. recently lifted its restriction on public posts by teenagers, some privacy scholars applauded the move as a win for parents — offering them a chance to teach their children about digital accountability. They may be overstating the case, however. If information and communication technologies aren’t designed to help users — especially younger ones — guard their information, appeals to good judgment and discipline won’t go very far.

The right way to answer the question of whether privacy is dead is to begin by pointing out that the question itself is poorly constructed. Yes, it’s an attention-grabber that gets asked all the time. Thirty-five years ago, it even was posed on the cover of Newsweek. The layout invites readers to identify with an illustration of a frightened couple being scrutinized by an anthropomorphized phone busily writing an investigative report.

“Don’t Let Nudges Become Shoves,” New Scientist, June 22, 2013, p. 37
Online version titled “Nudge: When does persuasion become coercion?”

NUDGES are born of good intentions and clever ideas. Alas, that’s not enough.

I once proposed a nudge to promote online civility. I suggested that magazines and newspapers should moderate comments using a variation of ToneCheck, an “emotional spell-checker” for email that prompts users to tone down angry messages.

Richard Thaler, one of the chief architects of nudge, loved it, tweeting: “A Nudge dream come true.” But my students saw a problem: legitimate opinions getting censored or watered down. The lesson I learned is that nudge designers must always consider the possibility of unintended consequences. In fact, that is only one of many concerns about nudging.

As I found, creating effective nudges is difficult. Thaler and Cass Sunstein’s influential book Nudge creates the impression that nearly anyone can do it. All you need is a basic understanding of how …

“There’s a widely shared image on the Internet of a teacher’s note that says: ‘Dear students, I know when you’re texting in class. Seriously, no one just looks down at their crotch and smiles.’

College students returning to class this month would be wise to heed such warnings. You’re not as clever as you think—your professors are on to you. The best way to stay in their good graces is to learn what behavior they expect with technology in and around the classroom.”

My grandfather died on Halloween. Thanks to Hurricane Sandy, none of the New York family members could attend the funeral in Massachusetts. Fortunately, another option became available: The ceremony was streamed online, and so my wife, daughter and I gathered around a laptop in our living room to watch the live webcast.

The rabbi began by giving technology center stage, poignantly acknowledging that the virtual participants played an important role in honoring the deceased’s memory. After that, technology receded into the background for the Massachusetts crowd. My grandmother looked like a bereaved widow. Online coverage didn’t affect her demeanor—or anyone else’s.

If you’re looking to add a digital spark to your relationship this Valentine’s Day, you can download the new app Romantimatic.

Romantimatic will send you scheduled reminders to contact your significant other and give you pre-set messages to fire off. The pre-set messages include simple, straightforward classics like “I love you” and “I miss you.”

Or maybe that doesn’t sound appealing. It sure doesn’t to me. In that case, I recommend you follow my lead: Take a solemn oath before the Greek god Eros and vow to never, ever go this far down the outsourced sentiment rabbit hole.

“I See You: The Databases that Facial Recognition Apps Need to Survive”
with Woodrow Hartzog, The Atlantic, January 23, 2014

Privacy concerns have been ignited by “NameTag,” a facial-recognition app designed to reveal personal information after analyzing photos taken on mobile devices. Many are concerned that Google Glass will abandon its prohibition on facial recognition apps. And, there are open questions about the proper protocols for opting customers in and out of services that identify people through facial comparisons in real time. These kinds of services are technically “face matching” services, though they are colloquially referred to here as “facial-recognition technologies.”

“How Not to Be a Jerk With Your Stupid Smart Phone,” The Atlantic, November 4, 2013

As technology expands our communicative reach, new opportunities to be rude inevitably arise. Some people overreact to this incivility by turning to uniform and mechanical etiquette rules, hoping to make things better by constraining choices and limiting situational judgment. But for societies that value diversity and autonomy, general mandates—like expecting everyone to turn off their cell phones in theaters—only work in exceptional cases.

“Quitters Never Win: The Costs of Leaving Social Media”
with Woodrow Hartzog, The Atlantic, February 15, 2013

Simple solutions have been proposed to help users cope with the vulnerability of disclosing information on the social web. These remedies are clear and decisive, but they demand significant trade-offs — perhaps greater sacrifice than typically is acknowledged.

“Obscurity: A Better Way to Think About Your Data Than ‘Privacy’”
with Woodrow Hartzog, The Atlantic, January 17, 2013

Facebook’s announcement of its new Graph search tool on Tuesday set off yet another round of rapid-fire analysis about whether Facebook is properly handling its users’ privacy. Unfortunately, most of the rapid-fire analysts haven’t framed the story properly. Yes, Zuckerberg appears to be respecting our current privacy settings. And, yes, there just might be more stalking ahead. Neither framing device, however, is adequate. If we rely too much on them, we’ll miss the core problem: the more accessible our Facebook information becomes, the less obscurity protects our interests.

While many debates over technology and privacy concern obscurity, the term rarely gets used. This is unfortunate, as “privacy” is an over-extended concept. It grabs our attention easily, but is hard to pin down. Sometimes, people talk about privacy when they are worried about confidentiality. Other times they evoke privacy to discuss issues associated with corporate access to personal information. Fortunately, obscurity has a narrower purview.

Huffington Post Live had me on as a guest for a follow-up show, “The End of Privacy,” on January 18, 2013.

“Augmented-Reality Racism,” The Atlantic, December 16, 2012

Racism is ugly to confront, and, like most people, I’ve got plenty of personal stories. My grandmother, bless her heart, was a wonderful grandmother, but like many Jewish people of her generation, she was incredibly racist, afraid of black people she didn’t know. This fear caused her anxiety when she got the urge to go to a favorite restaurant. She loved the food, but, as she would derisively say, so did the schvartze (Yiddish slur for a black person).

What if she didn’t have to see the black people at all? This possibility is what worries me about our augmented-reality future, which is (mostly) anticipated with optimism. If grandma had lived to see ubiquitous augmented reality, I suspect she’d put it to dehumanizing use, leaving for the restaurant with her goggles on (a less obtrusive artifact than the Coke bottle glasses she actually wore), programming them to make all dark-skinned people look like variations of Larry David and Rhea Perlman. As Brian Wassom — who regularly writes on augmented reality — notes, if apps can “recognize a particular shade of melanin, and replace it with another,” racists could one day “live in their own version of…utopia.”

“Can a Robot Learn to Cook?” co-authored with Evelyn Kim, The Atlantic, October 9, 2012

Everyone’s coming over to watch the big game. You’ve got beer, a giant high-definition television, and a well-deserved reputation for serving wings hotter than Dante’s eighth circle of hell. Unfortunately, you are pressed for time. Wouldn’t it be great if a machine like Rosey from The Jetsons could quickly prepare them? Maybe you could even pass off the dish as your own!

Then again, maybe not. Would Rosey’s version taste like yours, or would her rendition expose your duplicity? Could she cut the chicken into the right size parts and ensure your friends don’t choke on bone chips? Would Rosey know when the chicken pieces hit the ideal state of crispiness without being raw inside? Most importantly, could she discern when the spice Rubicon was crossed? These questions all revolve around one issue: Can Rosey acquire tacit knowledge?

Lance Armstrong’s decision not to fight the U.S. Anti-Doping Agency has drawn a mixed response: supporters and detractors wasted no time before airing their views. While some supporters maintain the lack of incriminating evidence is key, others have stated that Armstrong still deserves our sympathy even if he is guilty of using banned substances. It is crucial to understand why this might be the case, as the implications of the judgment extend well beyond feelings directed at a high-profile athlete.

The sympathy-for-a-possible-cheater argument is expressed clearly in “Pillorying Armstrong: Complete Nonsense,” a piece co-written by Arthur Caplan — one of the most famous bioethicists in the U.S. — and two other NYU professors. The authors write: “Shouldn’t Armstrong, especially because of the inspiration he is to cancer survivors or anyone on the short end of the advantage stick, get a pass for being no more dirty, but a whole lot better than everyone else in his sport? Armstrong isn’t being investigated as the only cheater. He is in all likelihood just the best, most talented one.” In other words, we should feel bad for Armstrong because LiveStrong promotes so much social good that it blunts part of the cheating stain, and because professional cycling is rotten to the core, filled with so many cheaters that breaking the rules is the only viable way to compete.

“Nudge, Nudge: Can Software Prompt Us Into Being More Civil?” The Atlantic, July 30, 2012

The closer we get to the presidential election, the more concern gets raised about how divided the country is and how acrimonious our discussions are over fundamental issues. Attack ads aren’t the only problem. The comments sections on web pages and blogs are overflowing with bitterness. The mood expressed there shows such heightened signs of technological influence, it seems ripped from the pages of the Marshall McLuhan playbook: the medium of communication is influencing the messages people send and receive. The best solution, then, might be for magazines, newspapers, and blogs to address the root problem by hacking the source: re-designing the structure of the forum to encourage civility. Before considering whether we want to go there, let’s quickly review why the medium matters.

At Scientific American, the hyperbolically titled “Why Is Everyone on the Internet So Angry?” asked why so many readers post hostile and rude comments on controversial Web stories. The answer? A “perfect storm of factors”: anonymity lessens personal accountability; distance from our conversation partners makes us treat them as abstractions, not human beings; it’s easier to be mean to someone when addressing them through writing rather than through speech; armchair commentary provides a false sense of accomplishment; and, a lack of real-time flow in the conversation encourages monologues.

Dobbs questions the role of gun culture in steering “certain unhinged or deeply a-moral people toward the sort of violence that has now become so routine that the entire thing seems scripted.” But what about “normal” people? Yes, plenty of people carry guns without incident. Yes, proper gun training can go a long way. And, yes, there are significant cultural differences about how guns are used. But, perhaps overly simplistic assumptions about what technology is and who we are when we use it get in the way of us seeing how, to use Dobbs’s theatrical metaphor, guns can give “stage directions.”

“What Happens When We Turn the World’s Most Famous Robot Test on Ourselves?” The Atlantic, June 20, 2012

This weekend marks the centenary of Alan Turing’s birth. Turing was one of the greatest computer scientists of all time. In a 1950 paper that outlined what has come to be known as the Turing Test, he offered a way out of endless philosophical speculation about whether computers could ever be classed as ‘intelligent.’ He said that if human judges ask interview questions of a hidden computer and a hidden person and cannot tell the difference after five minutes, the computer should be considered intelligent. Nowadays, programmers compete yearly for the Loebner Prize, which is won by the computer that is most often mistaken for a human.

But the Turing Test’s application is no longer limited to questions of artificial intelligence: Social scientists too are getting in on the action and using the test in a completely new way — to compare different human subjects and their ability to pass as members of groups to which they do not belong, such as religious and ethnic minorities or particular professional classes. With the Turing Test, sociologists can compare the extent to which subjects can understand people who are different from them in some way.

“Why It’s OK to Let Apps Make You a Better Person,” The Atlantic, March 9, 2012

In article after article, one theme emerges from the media coverage of people’s relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification–nudging, the quantified self, and gamification–and good old-fashioned financial incentive manipulation, are tackling weakness of will. They’re harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.

Skeptics might believe that while this trend will grow as significant gains occur in developing wearable sensors and ambient intelligence, it doesn’t point to anything new. After all, humans have always found creative ways to manipulate behavior through technology — whips, chastity belts, speed bumps, and alarm clocks all spring to mind. So, whether or not we’re living in unprecedented times is a matter of debate, but the trend still has multiple interesting dimensions.

“Why Occupy Wall Street Is So Hard to Understand,” The Atlantic, December 1, 2012

Our view is that the protesters should not be expected to offer clear public-policy guidance. In this essay, we’ll lay out why. The Occupy protesters have succeeded in creating emotionally impacting, ethically guided collective action. But they’re constrained by the forms of expressing political dissent that Americans are familiar with. Most people are habituated to express their political dissent by way of a limited number of options: individualism, token gestures of solidarity, joining an existing campaign, and partaking in a standard form of political participation. None of those forms are on the scale of action needed to deal with the problems to which OWS has addressed itself.

In other words, Occupy protesters have to create not just a set of demands, but a set of new ways of demanding. That sort of social experiment requires breaking from the status quo to find new leverage points on existing power structures. That’s what Occupy has attempted to do. This new type of emotionally impacting, ethically guided collective action is not incoherent, but it may be illegible.

Who has time anymore to manage their social media feeds? All the status updating, replying, and posting of smart takes on the day’s news is exhausting. Well, Google wants to help you out with that: The company recently submitted a patent for software that learns how users respond to social media posts and then automatically recommends updates and replies they can make for future ones. Consider it outsourcing for your social life — an amped-up, next-gen blend of automated birthday reminders and computer-generated, personalized remarks (more successful Turing Test than random word salad).

“Humans are Already More ‘Enhanced’ by Technology than We Realize,” Slate, October 3, 2013

Time recently ran a cover story titled, “Can Google Solve Death?” The wording was a bit much, as the subject of the piece, Google’s new firm Calico, has more modest ambitions, like using “tools like big data to determine what really extends lives.” But even if there won’t be an app for immortality any time soon, we’re increasingly going to have to make difficult decisions about when human limits should be pushed and how to ensure ethics keeps pace with innovation.

“When Nudge Comes to Shove” (reprint of New Scientist article), Slate, July 7, 2013

“Why We Need New Rights to Privacy,” Slate, November 2, 2012

Thanks to the real estate website Zillow, it’s now super easy to profit from your neighbor’s suffering. With a few easy clicks, you can find out “if a homeowner has defaulted on the mortgage and by how much, whether a house has been taken back by the lender, and what a house might sell for in foreclosure,” as the Los Angeles Times recently reported. After using the service, you can stop by the Johnsons’ to make them a low-ball offer, perhaps sweetening the exploitation with a plate of cookies.

Maybe that’s not fair. Zillow doesn’t let people opt out, but the company omits borrowers’ names, has a process for correcting mistakes, and uploads only legal information that was previously—albeit inconveniently—available.

“How to Make a Spy Exhibit Boring” (co-authored with John Mix), Slate, October 10, 2012

In the status update age, it may be hard to believe, but not every aspect of technology should cater to you and your experience. Museums are especially vulnerable to the dangers of user-centrism, and pressure is increasing for them to embrace the experience economy by offering interactive exhibits that “come alive.” Sure, visitors are learning and having fun, but in the long run, this attitude may threaten the durability of collections—you know, the reason why you go to the museum in the first place.

Today’s museums give the entertainment industry a run for its money. According to the Horizon Report: 2011 Museum Edition—a joint effort between the Marcus Institute for Digital Education in the Arts and the New Media Consortium—there’s an all-out digital love fest going on. In the near-term, its authors expect extensive efforts to focus on mobile apps. In the next two to three years, they predict “wide-spread adoptions” of augmented reality. Four to five years down the road, emphasis should shift to digital preservation (looking for ways to “future-proof” digital objects) and smart artifacts that “blur the line” between digital and physical things.

“Future of Privacy Forum Director: Browser Settings Should Be As Easy to Navigate as a Car,” Slate, August 23, 2012

We’re all concerned about privacy, but have a hard time separating hype from fact, hysteria from reasonable concerns, and peripheral from main issues. For insight into what’s really going on, I spoke with Jules Polonetsky, director and co-chair of the Future of Privacy Forum, a Washington, D.C.-based think tank that seeks to advance responsible data practices. His résumé includes a period of citizen advocacy, with Jules serving as legislative aide to Rep. Charles Schumer and as NYC consumer affairs commissioner under Mayor Rudolph Giuliani. Polonetsky has also worked in consumer advocacy for AOL.

“Why Do We Love to Call New Technologies ‘Creepy’?” Slate, August 22, 2012

What if you walked into a bar and everybody knew your name—except you’d never been there before?

A couple of weeks ago, we were introduced to Facedeals, which integrates Facebook’s APIs with facial recognition technology. When you enter a store, restaurant, or bar that uses Facedeals, your mug will be scanned so that you can be offered special deals and get automatically checked in to the location. “Creepy,” tech sites RedOrbit and TechCrunch both labeled it. That’s not surprising.

Creepy is the go-to term for broadcasting how technology unsettles us. Time and time again we’re asked to think in binary terms and identify a device or app either as good or its polar opposite, creepy. Although we’re often led to believe that creepy is an emotional response to things going horribly awry, our creepy radar isn’t nearly as reliable as Peter Parker’s danger determining spider sense.

As if we didn’t already have enough reasons to distrust Wall Street, a new study finds that a troubling number of financial services professionals would rather bury a moral compass than use one. Twenty-four percent of participants attested that “unethical or illegal behavior could help people in their industry be successful.” Would Main Street be better off if this greed were curtailed by behavioral-steering technology—digital Jiminy Crickets?

In the classic story Le avventure di Pinocchio, Pinocchio learns that the essential difference between machines—he is, after all, an animated puppet—and real people is moral conscience. Though insignificant in Collodi’s novel, Jiminy Cricket serves as an external moral compass for Disney’s Pinocchio, following our hero through his adventures to tell him right from wrong. Pinocchio only develops moral maturity when he frees himself from the cricket’s advice and grasps how to make ethical decisions on his own.

“Was Hitler a Bully? Teaching the Holocaust to Kids,” Slate, April 20, 2012

Should I allow my 5-year-old daughter to embrace the world of Disney, or break Prince Charming’s spell by pointing out that royalty got awesome castles by exploiting poor serfs? Answers to questions like this define a parent’s outlook on what childhood should be like. Despite my exposure to critical gender studies, I generally encourage my daughter to get her politically incorrect princess on. So, imagine my dismay at discovering that her kindergarten class planned to commemorate Yom Hashoah (Holocaust Remembrance Day) by discussing a person called “Bully Hitler.”

To be fair, the teachers did their best when comparing the worst criminal in history to a playground tormentor. By combining Chrysanthemum, a story about a young girl bullied because of her unusual name, with the forest-animal tale Terrible Things: An Allegory About the Holocaust, they avoided uttering any traumatic detail. Nobody mentioned concentration camps filled with emaciated prisoners and flesh-incinerating ovens. And that’s a good thing, because 5- and 6-year-olds just can’t grasp the complexity of the Holocaust.

According to a recent study, memory’s sharpness deteriorates earlier than we presumed: Forty-five is the new mental 60. Fortunately, there are practical ways to enhance mental agility: exercise, healthy diet, sufficient rest, learning new things. Increasingly, technology will play an important role in preserving cognitive function. From the sanctioned war on Alzheimer’s to widespread off-label use of Ritalin, Adderall, and Modafinil, one thing is clear: We’re intent on getting our memory enhancement on.

Ubiquitous information and communication technology is a major player in the memory enhancement game. I’m not alluding to products that target impairments, like the iPhone app for combating dementia. Rather, I mean commonplace software that people use to make recall less taxing, more extensive, or easier to visualize.

“How’d My Avatar Get Into That Sneaker Ad?” Slate, January 4, 2012
Co-authored with Shaun Foster

Let’s play a game—thought experiment. Imagine it’s the near future. You’re walking along a city street crowded with storefronts. As you walk past boutiques, cafes, and the Apple Store, your visage follows you. Thanks to advances in facial recognition and other technologies, behavioral marketers have developed the capacity to take your Facebook profile, transform it into a 3-D image, and insert it into ads. That sweater you’re eyeing? In the display, the mannequin wearing it takes on your face and shape. The screen showing a car commercial depicts you behind the wheel. At a travel agency (let’s pretend they still exist—after all, this is a thought experiment!), you see yourself sunning on a beach, while the real you is bundled up against the cold. The ads might show you with an attractive stranger or a lost love (after all, Facebook knows whom you used to date). Or they could contain scenes of you and your happy family. No longer do you have to picture yourself in the ad—technology has that covered.

Although the technology in our thought experiment doesn’t yet exist, many of the necessary components already do. There is Autodesk 123D Catch, a program that uses computer vision technology to transform simple photographs into 3-D objects. Facebook has its own recognition tools to help users identify and tag photos. Video games generate avatars using sophisticated motion capture techniques.

For the most part, media coverage of the Occupy Wall Street protest has been predictable. Stories are narrated according to the pro/con structure typical of—depending on whom you ask—balanced reporting or sensationalism. On the one hand, positive focus sympathetically explains why protesters have been demonstrating en masse since Sept. 17. These accounts place the activist mantra of “We are the 99%” in a historical and economic context that connects significant inequalities in wealth to violations of justice that should prompt people of conscience to demand rectification. On the other hand, negative reports argue against interpreting the protest as legitimate civil disobedience. Detractors’ opinions range from indictments of individual work ethic—contending that the problem at issue is poor individual decisions, not dysfunctional systems—to indignation over an unclear protest agenda that allows Dionysian energy to manifest in this millennium’s Woodstock.

In a previous post, I mentioned that exciting speakers are making guest appearances in my current “Technology, Privacy, and the Law” course. Jay Stanley, Senior Policy Analyst at the American Civil Liberties Union, just dropped by via Skype. The conversation was so interesting that I wanted to share some of the highlights with you here.

“Can Predictive Technology Make Us Less Predictable?”
September 27, 2014

Over at the NY Times, Anna North asks if we can become more creative by using an unusual search engine called Yossarian that purports to help us see things in new ways—ways that go beyond the predictable associations we’re inclined to make when thinking about people, things, ideas, events, etc. What fascinates me about this possibility is that in order for it to be true, prediction needs to be the antidote to predictability. Without inferring where your mind is prone to wandering, neither a person nor an algorithm stands a chance of presenting something to you in a new light.

Over at the LA Times, Maria Bustillos has a harsh review of Nicholas Carr’s new book, The Glass Cage: Automation and Us. Referring to Carr as one “of the Information Age’s chief scaredy-cats,” Bustillos characterizes his latest endeavor, an explanation of problems with automation, as expanding “the field of his paranoia to computers in general.”

Sure, Carr’s last book, The Shallows: What the Internet Is Doing to Our Brains, stirred up lots of debate about whether Google is making us stupid. Some said no and decided he’s too pessimistic—a negative judgment that’s absolutely appropriate to reach. You certainly can have good reasons for believing that Carr’s conclusions aren’t supported by all of the research he musters. But ‘paranoia’ has connotations of irrationality and delusion. It’s an unfair association when applied to Carr. It’s particularly troubling because versions of the rhetoric are routinely applied to technology critics to unduly strip their skepticism of legitimacy.

Over at the New York Times, Farhad Manjoo argued that smart phones should be designed to better protect people from the harms that can arise when their nude selfies end up in the wrong hands. Manjoo’s proposal entails nudging, and consequently has greater moral complexity than meets the eye. We think it’s a good and important idea, and will explain why to help make the case more persuasive.

More than a few people maintain that if we all knew everything about each other, the world would be a better place. The total transparency argument takes many forms, and shades of it can be seen in the surveillance policy and discourse that holds that “more information is always better than less information,” and information asymmetries should always be remedied by more disclosure and surveillance, not less.

“Why Predictive Shopping Might Be Bad For The Future,” Forbes, August 21, 2014

Harvard law professor Cass Sunstein presents some disturbing statistics about “predictive shopping” in his NY Times Op-Ed, “Shopping Made Psychic.” Unfortunately, Sunstein doesn’t emphasize the downside to his findings. Without sufficient critical commentary, it’s too easy to be too optimistic about the wrong way to build the future.

Next week, the new term begins and I’ll be teaching an undergraduate philosophy course called, “Technology, Privacy, and the Law.” The first order of business will be to explain why thinking critically about privacy—determining what it is, deciding when it should be protected, and pinpointing how it ought to be safeguarded—means doing philosophy. Given the practical stakes of these issues, you might not realize that getting into them involves philosophical thinking. But if you’ve got a principled bone to pick with corporate, peer, or governmental surveillance, or if you’ve good reasons for being displeased with the activists who are taking stands against it, you’ve got your philosopher’s cap on.

Over at The New York Times, Natasha Singer discusses the pros and cons of universities providing incoming students with online technology that helps them select roommates. She does a great job of identifying salient points. But I think it’s important to augment the story by adding some remarks on privacy and prejudice.

“Why We Should Be Careful About Adopting Social Robots,” Forbes, July 17, 2014

Although Jibo, designed by MIT professor Cynthia Breazeal to be the “world’s first family robot,” isn’t set to ship until 2015, folks are already excited about this little bot with a “big personality.” While there’s much to be said for Breazeal’s vision of “humanizing technology” so that the smart home of the future doesn’t “feel cold and computerized,” we might want to pause a bit before rushing to build the type of world depicted in the movie Her. Although it is easy to imagine we’ll be better off when we’ve got less to do, we don’t actually know the existential and social implications of outsourcing ever-more intimate tasks to technology.

The Los Angeles Times just updated the design of its online edition. One of the new features is called “sharelines,” and it’s basically summaries appearing at the top of articles that readers can click on to instantly tweet out. Even the editor’s super-succinct note introducing the changes begins with three of these talking points!

While this exercise in concise craftsmanship is informative and user-friendly, it’s also got disconcerting overtones. Seen in the larger context of technological development, it’s a wakeup call to examine how often we’re being asked to outsource labor at the expense of living up to our potential.

Normally, if you asked me to free associate what comes to mind when I hear words like “productivity app” and “life hack,” you’d be treated to an all-out vent session—a combination of skepticism and cynicism directed at overly hyped products, overesteem for efficiency, and overblown attempts to delegate responsibility and willpower. But then I read a gushing review of Full, an app for tracking and measuring “what’s important to you.” I actually think it’s a good product and an excellent prompt for thinking about why goal-tracking apps are so existentially provocative.

College can be a wonderful experience. But no environment is absolutely safe. Tragically, shootings, date rape, stalking, alcohol-induced fights, and other predatory and violent incidents occur on campuses. Some see guns as the solution—letting students carry firearms to protect themselves. Just look at what’s happening in Minnesota, Idaho, and Oklahoma. Maybe a better way forward, however, is to arm students with a different technology: smartphones loaded with safety apps.

“Watching You Play: Can A Dystopian Video Game Help Us Better Appreciate Privacy?” Forbes, March 4, 2014

The biggest challenge for privacy advocates is getting people to appreciate why privacy matters even if you don’t have anything to hide. Those of us who feel strongly about the topic tend to lean on arguments Daniel Solove made in a seminal article back in 2011. But there are other ways to explore the thesis that take us beyond privacy theory. Dystopian fiction is a powerful vehicle for considering the consequences of society placing too much value on transparency and over-sharing. So are dystopian video games, as is evidenced by the demo of Nicky Case’s “Nothing to Hide: An Anti-Stealth Game Where You Are Your Own Watchdog”—a crowd-funded and open-sourced endeavor (both code and art).

The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.

Is the deal linked to Google’s buying spree of seven robotics companies in December alone, including Boston Dynamics, “a company holding contracts with the US military”? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or, is this just busybody gossip that naturally happens in an information vacuum? The deal could simply be about improving search-engine functionality.

“5 Ways to Avoid Being Suckered by Unreliable Information,” Forbes, January 25, 2014

Without “noise makers”—folks spreading rumors, false information, hoaxes, and hearsay—markets and the blogosphere might grind to a halt. But as Vincent Hendricks argues in “When Twitter Storms Cause Financial Panic,” information bubbles can be immensely destructive. They can hurt the economy and damage society.

There’s no surefire way to use new media and only consume “correct information and convincing arguments.” Any consultant who tells you otherwise is, at best, exaggerating. Fortunately, there are simple things we all can do that can make a big difference. I reached out to Hendricks, Professor of Formal Philosophy at the University of Copenhagen and co-author of the new book Infostorms: How to Take Information Punches and Save Democracy. He offered the following basic recipe for determining if you’re stuck in an information bubble.

A well-intentioned grandmother accidentally hurt her grandkids’ feelings. She took screenshots of their delightful Instagram photos and proudly uploaded them to Facebook for all of her social network friends to see. If the younger generation didn’t set their accounts to private, could Grandma possibly have committed a faux pas? All she did was lovingly pass along publicly available information!

“Keep on Tweeting, There’s No Techno-Fix for Incivility or Injustice,” Forbes, January 2, 2014

It would be nice to believe that the road to civility could be paved by following simple formulae, like Frank Bruni’s New Year’s exhortation, “Tweet less, read more”. Unfortunately, uncomplicated Op-Ed advice doesn’t translate into effective results in the messy real world.

Apple’s latest television ad, “Misunderstood,” is leaving viewers with impassioned and conflicting interpretations. Giving Talmudic treatment to a short commercial might seem like overkill, especially given the Christmas theme. But I think we’re lucky the narrative has become a Rorschach test for discussing the social and ethical impact of technology.

“What You Don’t Say About Data Can Hurt You,” Forbes, November 21, 2013
Co-authored with Woodrow Hartzog

Big data generates big myths. To help society set realistic expectations, the right kind of skepticism is needed.

Kate Crawford, Principal Researcher at Microsoft Research and Visiting Professor at MIT’s Center for Civic Media, does a fantastic job of explaining why folks are too optimistic about the promise of what big data can offer. She rightly argues that too much faith in it inclines us to misunderstand what data reflects, overestimate the political efficacy of information, and become insensitive to civil rights concerns.

While privacy advocates have expressed concern about the phenomenon of massive data collection and analytics colloquially known as “big data,” most people are more familiar with social media anxiety, like inappropriate Facebook posts leading to embarrassing and reputation ruining incidents. This situation is likely to change, and in the near future society will have to confront a profound question.

What happens when everyone can get their curious, envious, and outraged hands on increasingly powerful surveillance tools and correlation-creating algorithms that have high predictive value, powerful aggregation potential, and can be put to discriminatory, manipulative, and exploitative use?

For the past few weeks, my six-year-old daughter has been obsessed with Selena Gomez reprising her role as Alex Russo on the Disney show Wizards of Waverly Place. Like many of her friends, Rory has seen every episode of Wizards and religiously listens to Selena’s music. While Alex–like so many of the current Disney lineup–is a snarky character, we haven’t had to worry much about the consequences of Selena fandom until now, when the complications of online information are smacking us in the face.

Equating more with better is an old advertising trick. The message is so deeply burrowed in our psyches that it sounds less like Madison Avenue and more like an ancestral call. Is it shallow? Yes. Is it easy to pick apart in academic discussions and stern parental lectures? Sure. Does it reek of the idealistic Internet coverage that we’ve been long bombarded with? Absolutely! But, let’s face it. The ideal wouldn’t persist if it didn’t work. We’re suckers for the supersized.

But what — beyond a willingness to endure gentle caricature — does Siri ask of us in return? The superficial answer is little but consumption: purchasing iPhones and data plans. But Michael Schrage, Research Fellow with the MIT Sloan School’s Center for Digital Business, argues the superficial answer misses something important. Siri — like other increasingly popular and pervasive technologies — asks us to participate in a fundamental re-design of our social sensibilities.

“Lab Rats in the Social Experiment of Personalized Advertising,” Huffington Post, August 29, 2012

Advances in biotechnology, nanotechnology, and nuclear energy have turned society into what Dutch ethicist Ibo van de Poel calls a large-scale laboratory for experimenting with the unforeseen consequences of new technologies. In comparison, personalized advertising — also called targeted and behavioral ads — doesn’t seem nearly so dangerous. It is easy to believe that the worst that can happen is we’ll buy a few unnecessary things, lose some privacy, or find some content off-limits (as in the case of a new London billboard that uses facial recognition technology to send male and female viewers different information). A more sober look suggests we should be worried about participating in a social experiment that gambles with our human agency and freedom.

My recent article in The Atlantic, “The Philosophy of the Technology of the Gun,” is provocative in part because it suggests tools like guns might have more power over us than meets the eye. Given widely held views about autonomy (e.g., the notion that “guns don’t kill people, people kill people”), this alternative way of looking at things can cause anxiety, especially when misunderstood and translated into terms like those offered by the first commenter, “Guns are magic mind control machines.” The article presented an account of how humans relate to technology, and to further illuminate those relations, I’ll briefly revisit media theorist Friedrich Kittler’s take on Friedrich Nietzsche’s use of the typewriter. Like my gun essay, this analysis challenges the “instrumentalist” conception of technology.

Earlier this year, controversy surrounded ultrasound legislation in Texas, Virginia, North Carolina, and Idaho. Lost in the critical commentaries on abuses of patients’ and physicians’ rights was concern over a fundamental violation of liberty. This issue hasn’t gone away, even though sonogram coverage isn’t currently grabbing headlines.

Medical experts routinely use ultrasound technology in ways that favor the Right to Life agenda, even in states that don’t have mandatory ultrasound laws. This problem goes unnoticed because the potential harm caused by the medical community is not the result of political ideology. Rather, it arises from inadvertent exploitation of patients’ natural human weaknesses and cognitive tendencies. To understand why, we need to grasp how typical conversations about ultrasound images can impede rather than foster informed consent.

“Are Millennials Less Green Than Their Parents?” 3 Quarks Daily, May 28, 2012
Co-authored with Thomas Seager and Jathan Sadowski

A highly publicized Journal of Personality and Social Psychology study depicts Millennials as more egoistic than Baby Boomers and Generation Xers. The research is flawed. The psychologists fail to see that kids today face new problems that previously weren’t imaginable and are responding to them in ways that older generations misunderstand.

The psychological study seems persuasive largely because the conclusions are supported by massive data. Investigators examined two nationally representative databases (Monitoring the Future and American Freshman surveys) containing information provided by 9.2 million high school and college students between 1966 and 2009. Such far-reaching longitudinal analysis seems to offer a perfect snapshot of generational attitudes on core civic issues.

When the media discovered the Homeless Hotspots “charitable experiment,” it responded with a torrent of moral condemnation. Critics wasted no time denouncing the initiative as a publicity stunt that cruelly objectified homeless people as technological infrastructure. Instead of equating the initiative with exploitation, perhaps a movement should be started advocating that Saneel Radia, head of innovation at BBH Labs, be given a Nobel Peace Prize. After all, in 2006 Muhammad Yunus and the Grameen Bank received one largely for helping create the Village Phone program—an initiative praised for hiring impoverished and marginalized women as “Phone Ladies.”

On the surface, Homeless Hotspots looks like a typical conscientious enterprise. BBH Labs, a private company, partnered with Front Steps, a non-profit shelter located in Austin, Texas, to create a new business niche. Building off the model of employing homeless people to sell newspapers on the street, the Homeless Hotspots participants (sometimes reported as totaling 13, other times tallied at 20) offered 4G Internet to South by Southwest Interactive Festival attendees in exchange for $20 per day plus customer donations. The suggested donation—which anyone could refuse to pay—was $2 for every 15 minutes of Internet use. BBH Labs, however, claims to have guaranteed the participants would earn a minimum of $50 per day.