Online advertising has traditionally focused on ad-targeting: given someone’s Internet activity, how can one select ads that are most relevant to their interests? Advertisers don’t want to waste their time showing an ad for a Big Mac to a vegetarian. They say that it is in the consumer’s best interest to see relevant ads, an argument that I always felt was rather paternalistic, like how my parents put sugar in my cough syrup when I was young. The syrup is going in my mouth whether I like it or not, so I guess it might as well be sweetened. If I had my way, though, I’d avoid the whole situation altogether.

But, like cough syrup, there are good reasons we have advertisements — they fund the Internet. This is hardly an exaggeration. Without them, blogs would disappear, Facebook would start charging its users, and pay-walls would become the norm. Many startups, initially ad-free, are only worth anything because they can eventually serve advertisements to a carefully cultivated audience. This industry has grown to such a scale that nearly everything you do online is tracked. Information about what you click and what you see is passed from server to server, stored between browsing sessions, and eventually recalled and updated the next time you are detected on a popular site. The trail of virtual breadcrumbs you scatter across the Internet says so much about who you are and, most importantly, what you might buy. Inferring a person’s interests solely from their browsing data is nontrivial. It is like the answers to a series of Rorschach tests — the data are much too nuanced and subtle for me, an untrained person, to conclude anything. But the answer may be clear in the hands of an expert or a well-built and properly trained learning algorithm.

Ad-targeting is relatively innocent — it simply selects an optimal advertisement from a premanufactured set. There is no manipulation. You still have the freedom to ignore the ad, to turn a blind eye to its broad, rather than personalized, persuasion.

Suppose instead of selecting an ad based on your inferred interests, a program uses the data it collects about you to take advantage of your behavior, to manipulate you. Imagine the following scenario introduced by Ryan Calo in Digital Market Manipulation:

What if an advertiser had a way to count how many decisions you had made, or determine your present emotional state? That advertiser might try to reach you at your most susceptible. An obese person trying to avoid snacking between meals could receive a text on his phone from the nearest donut shop exactly when he was least likely to resist.

In the same paper, Ryan Calo also introduces research that constructs an advertisement by morphing a picture of your face with the face in the ad so that it subliminally looks like you. The unconscious similarity elicits your empathetic response and, naturally, you click the advertisement. This is modern research, not science fiction [1]. It isn’t hard to believe Ryan Calo when he says that “surveillance gives the observer an increased power to persuade.” This isn’t a new idea. You know that when you are negotiating a deal you should never, ever reveal how much money you have, or how much you are truly willing to pay — that information gives the seller more power over the negotiation. There is a similar imbalance of power when the market knows more about you than even you do, when it can predict your state of mind and desires and use them against you. And we are giving the market this information quite willingly on our favorite sites.

Some do not see how insidious this situation can be. They argue for self-control. They believe if you fall for such petty tricks you deserve to be manipulated. In rebuttal, I like to introduce the thought experiment of the “perfect” psychologist.

Imagine you run an advertising company. In an effort to increase your sales, you hire a “perfect” psychologist. What do I mean by “perfect”? Well, given the right data about an individual, this psychologist can tell you how to personalize an advertisement that will convince them to buy your product, guaranteed. She can construct a pitch that exploits someone’s ignorance, their hopes and fears, their insecurities, and the very way they see the world; a pitch that rallies all of their cognitive biases against them. The “perfect” psychologist’s pitch always succeeds.

Is this ethical? The answer isn’t obvious. On one hand, your customers will probably walk away from the transaction happy about their purchase, whether the happiness stems from cognitive dissonance, or “true” satisfaction. You are obviously happy because your company’s sales record is pristine. This seems to benefit everyone. But, something seems off here: your customers had no control over their decision to buy the product. The “perfect” psychologist strips people of their free will. Would you be okay knowing that a company can give such a pitch? Even if you enjoy the product afterwards?

Let us return to our hypothetical advertising agency. The “perfect” psychologist is becoming popular. Every day the pile of requests on her desk grows, and she is finding herself overworked. She is fed up. She misses her family. Most of all, she knows this isn’t good for her mental health (she is a psychologist, after all). Luckily, she also happens to be a computer scientist, and she has a brilliant idea: why not create a program that does her job for her? She stays late at the office one last time to create this software. In tests she finds the software is fast, much faster than she is. While she can produce on average about ten pitches a day, this program can output hundreds. There is a trade-off, though: the program isn’t quite as accurate as she is. But it is accurate enough. With this software running back at her office, she can now go home at 5.

How accurate must the “perfect” psychologist’s software be for us to cry foul? If the power of online advertising lies somewhere on the spectrum between what we have now and a “perfect” psychologist, at what point are we no longer responsible for being duped? We can always say you should have known better in the case of pyramid schemes, sleazy salesmen, and Nigerian princes. It seems in these cases education will fix the issue. But if advertising becomes more individualized, if it learns to exploit each individual’s unique cocktail of cognitive and social biases to hide in their bullshit blind-spot, education can become impotent. It would warn “avoid these skeevy strategies” — useless advice if you cannot detect the strategies in the first place. It is like when you try to tell your grandparents not to click that sketchy pop-up that says “your computer is unprotected, download this anti-virus software NOW”. They look at you like you are crazy — why the hell would I not click it? There is something about how they see the world that nullifies your advice. Next time you show up to their house to fix their computer you find they fell for a different variation of the same damn trick.

To combat individualized manipulation you would have to eliminate each person’s cognitive biases, shatter their unconscious assumptions about the world and replace them with perfectly logical, rational, beliefs and thought processes. To me this seems impossible. As Alexander Hamilton says, we are “a reasoning rather than reasonable animal”. The very nature of our thought is highly irrational. We take shortcuts, use heuristics, and rely heavily on emotions; perfectly rational thought is not feasible in our limited cognitive capacity. Sometimes I even wonder if what we call rational thought is itself an insidious bias, one that we adopt as truth not because we can prove it is Absolutely True but because it seems to work and we have absolutely no clue why. Nevertheless: I suspect we will always be susceptible to clever, targeted, deception.

Okay, time for me to take off my tinfoil hat. There will never be a “perfect” psychologist, and we likely can never create software that even approximates one. However, Ryan Calo’s point still stands. By giving information about yourself to advertisers you open up new opportunities for manipulation, especially now that the web mediates most of our transactions. We rarely take a step back and think about how exactly our information is being used, whether it is used for us or against us. And sometimes the way companies use your information is invisible to you. I didn’t realize the subtle information bubble Google constructs around your browsing history until I saw this TED talk. We should realize we are giving away a lot with our Internet activity, and we should start thinking about what we are getting in return.

What do you think? Where would you draw the line for personalized advertising?

-JB

You should follow me on Twitter (@lbitsofinsanity), where I link my posts and other neat things.


Every four years, we look to the next guy and tell ourselves that he is going to do what he says he’ll do. This guy will change things. And every four years we are disappointed. We call him a crook, a liar. He leaves office, and the next guy walks in, giving us the same hopes and disappointment. On and on it goes. We conclude all politicians are corrupt, sociopathic people who lie for votes. I suspect we look at this the wrong way.

Think back to student government in junior high. Each year a kid would run for class president on a political platform of promises like “I will double the length of lunch period!”, or “Students should be able to give teachers detention!”. But, the kids know he can’t do these things. Even if elected, the administrators and the teachers, who hold the real power, will restrain him. So the student body decides to elect the reasonable kid with the speech about student solidarity and how much our district is better than the others. She promises a better homecoming. The kids don’t expect change because that is in the hands of the powerful adults; children are smart like that. Maybe we should lower the voting age.

What I am getting at is this. We should not be surprised when a president or congress doesn’t change anything. They are restrained by the “administrators and teachers” of the world, whoever they are. Plenty of those with power lurk behind the stage of national politics. We citizens should peek backstage.


Many ancient cultures recognize that harnessing fire marked the beginning of progress. In Greek mythology, Prometheus stole fire from the gods and gave it to the humans, along with writing, agriculture, and mathematics — the cornerstones of Greek civilization that, through the onward thrust of history, brought our modern technological marvels. This angered Zeus. He decided to punish Prometheus by chaining him to a rock where an eagle would devour his liver. Unfortunately for Prometheus, he was immortal, so his liver grew back and he had to endure the torture repeatedly.

Next, Zeus punished the humans by sending them Pandora and her infamous box. She opened the box out of curiosity, unleashing war, suffering, chaos, and evil upon the world. The message of the myth is clear: fire gave us both civilization and suffering. The city of Dresden burned down in an inferno in WWII. At least 100,000 miners died harvesting coal used to feed the fire that burned in the bellies of the locomotives during the industrial revolution. And, in Auschwitz alone, the Nazis burned hundreds of thousands of Jews; who knows how many they burned in the whole Holocaust.

The Greek myth of fire reminds us that there are two sides to human progress: technological progress and social progress. It is foolish to rely on technological innovation alone to improve our lives; powerful technology in the hands of an immature society is, like a child running around with scissors, at its worst destructive and at its best nerve-wracking. Society needs to deliberately mature at the same rate as it develops technologically. The amount of kindness we show one another should be proportional to the number of transistors we can fit on a chip.

I recognize that technology has brought us many great things. I also believe anything that makes our lives easier is worthwhile. Right now I am typing an essay that people around the world can read. By clicking a link on my blog, a reader in India requests an article that is fired through a series of wires to eventually rest on her computer — an exchange that happens in the span of milliseconds. I am always amazed how the Internet can help people: Khan Academy provides free, high quality educational videos, and Wikipedia organizes everything there is to know in a quick, easy to search, interface. These are Good Things.

But, it is too easy to ignore actual humans when you talk about the amazing feats of technology. Does the kid who can’t get clean water care how fast packets can get transmitted across a few wires? Does the single parent who is working 80 hours a week to support a family care about Khan Academy when there is hardly enough time to kiss the children goodnight, let alone get an education for a higher paying job? There are people that live life mostly untouched by the latest technology.

It seems many of us have forgotten the Promethean trade-off. The United States is a nation born out of the flames of the industrial revolution. Here, we worship technology and innovation, we idolize the Edisons and ignore the Debs. We throw money at the corporations whose employees work long hours in bad conditions in the name of economic efficiency because they give us affordable new toys and advanced technology. At the same time we ignore the kinder businesses that are killed off, overrun by the very same powerhouse corporations. We justify this by saying the weaker companies couldn’t compete in the marketplace. Consumers aren’t willing to pay for the overhead of a labor union and corporations aren’t willing to cut into their profit margin to pay their employees more. So, workers are stuck in the middle of the careless battle between supply and demand.

I feel like many people in power are like Felix Hoenikker from Cat’s Cradle: fascinated by technical possibilities, but weighed down by humanitarian concerns. They think you need to get the hell out of the way of progress, or else progress has the right to trample over you. The NSA is a model specimen. They built their massive PRISM and XKeyscore infrastructure in secret. They claim they wanted it hidden from US enemies, which I’m sure is partly true. But, I have a suspicion that they were most afraid of the public backlash against a blatant infringement of human rights, afraid of the national debate that will soon determine the fate of these programs. They didn’t want their own citizens getting in the way — a humanitarian concern. Sorry, NSA, for weighing you down.

At this rate, I fear we will become a technologically advanced society that doesn’t give a damn about those who live in it. I fear in a millennium we will have bionic limbs, but they will be affordable only to the wealthy, widening the gap between the haves and have-nots. I fear we will have intergalactic travel, but it will be used for trade that mimics the international imperialism of the early 1900s — a greedy struggle for profit and territory, culminating in a war financed by the deaths of millions. I fear we will have interactive movies, maybe even the Feelies of Brave New World, but we would rather watch them than fight for our rights. Technological advancement is pointless if the social system on which it rests remains unchanged; otherwise, we are just running with scissors.

Maybe we need to get back to working towards a Great Society. Maybe strike a New Deal. When someone asks me what I would like to see in a hundred years, I avoid saying flying cars or space travel. Rather, I say I would like to see a world where people are kinder to one another, where we live in a more egalitarian society, where we have more time to spend with friends, family, our community, and ourselves. A better life for everyone should be our goal. Perhaps flying cars and space travel are necessary to get there; although, I highly doubt it. But right now we need to realize that technology alone isn’t going to save us. We need to save ourselves.


What you do online is a goldmine. Every click, every “like”, and every status message reveals a little information about yourself. What it reveals may not be obvious to a human, but it is obvious to a computer program. A number of these computer programs are ensconced within the code of most popular websites, watching what you do and relaying it to advertisers [1], who use the data to form dossiers on customers: Are you conservative? Are you homosexual? A gamer? Middle-class? What you “like” on Facebook gives ample information to answer these questions with high accuracy [2]. The massive scale of this data collection is a testament to its power and value.

I wonder what my dossier looks like. Perhaps they know things about me that even I don’t, things I reveal subconsciously through my interactions on the web. Here and there I try to rebel by “liking” things I don’t, in fact, like. I realize this in itself can be informative; maybe the program detects my offending “like” as an outlier, easily dismissing it while learning that I am prone to petty acts of disobedience. Nevertheless, I don’t know what my dossier looks like, but I sure hope it doesn’t look like me.

I have a close friend who, for the longest time, refused to own a Facebook account. Something changed his mind a few weeks before the end of college. During those weeks he scrambled to amass a collection of flattering images and witty status messages while masterfully selecting his “likes” to paint a favorable picture of himself. I felt disgusted looking at his profile: this wasn’t the guy I knew in college. Where were his fears, his hopes, and his aspirations? All his character flaws were missing. Yes, these flaws would drive me crazy and sometimes make me want to leave the room when he walked in. But they were the background to his personality, the very frame which supported the rest of his persona. Without them, his Facebook profile felt like an odd caricature.

I was kidding myself that he was the only one who manufactured his online presence. Looking at my own profile I discovered I was equally at fault. I had pictures of my family vacations, my achievements, and good times with close friends; yet there was no sign of my disreputable escapades. And, I assure you, there are plenty. Upon looking at myself, and the rest of my Facebook friends, I realized how normal my friend’s profile was. We are all painting caricatures.

Perhaps I have foreground and background swapped, and the caricature is primary: our fears, hopes, and aspirations, along with our character flaws, are not fundamental but irrelevant details, just the romantic embellishments of the individual. I fear the notion that we are all simply variations on a set of possible human archetypes, that you can reduce everything there is to know about a person to a small group of facts. You like cream cheese? Well, that is because you are middle-class, male, and conservative.

That is what advertisers are trying to discover: what are the facts that make you tick? What type of person, of the possible archetypes, are you? We are developing better methods of understanding people through their behavior and perfecting machine learning algorithms so they paint better caricatures.

We are also understanding ourselves more and more in terms of neurons. By looking at neuron firing patterns, scientists can predict when you are going to lift a finger before you are even aware of your intention [3]. Using fMRI imaging and machine learning algorithms, researchers are able to predict what object you are thinking about [4]. Our top-down understanding of behavior and our bottom-up understanding of neurons eventually will connect and form a complete picture of ourselves. At that point, anything you do can be explained by the neurons in your brain, or, by the fact that you are a gay liberal who likes curly fries. The more we learn about ourselves the less room there seems for free will.

When Google Glass was announced, I imagined a similar hypothetical product. Just like Google Glass, it would sit innocently on your face mimicking a pair of glasses. The product would see everything you see, hear everything you say, and watch everything you do, learning and learning until it knows you better than you know yourself. Then, it starts to whisper advice in your ear. When you go to the ice-cream parlor it would whisper “you should try the pistachio.” When you go shopping it would suggest “you should get your friend this.” You are compelled to listen because it has never been wrong before and it has always made you happier; after all, it knows everything there is to know about you. Eventually, every person becomes an automaton that always follows the product’s advice because its advice is the best there is. Every person is as happy as possible meanwhile never making their own decisions. Free will is traded for a utopia.

I’d like to think I am more than my Internet activity. I’d like to think I am more than the neurons firing in my brain. I’d like to think I am more than a simple caricature. I’m not going to pretend to know what I am; I very well could be any of those. But I do know that by learning everything about ourselves, we may, in fact, lose everything.


I love Twitter. It gives me the power to tap directly into the information stream coming from dozens of prominent politicians, news organizations, and bloggers. With its 140-character limit, it is a perfect platform to read quick facts and news headlines.

But news isn’t the only thing I find on Twitter. I also find people participating in a national conversation. What does the conversation look like? Following the George Zimmerman trial, here is a tweet by Justin Bieber: “Adam Levine is engaged Zimmerman is not guilty Cory Monteith died Talia died Casey Anthony is pregnant The world’s obviously ending.” Another, by Slate magazine: “Why did an all-female jury let George Zimmerman go? White women are taught to fear black men: url redacted.” Finally, a tweet supporting the other side, by Donny Ferguson: “If George Zimmerman had killed Trayvon Martin 17 years ago with clamps and a vacuum, Democrats would arm themselves with feces to defend him.” Apparently Twitter’s 140-character limit also promotes irrational rhetoric and sophistry.

What do I do when I encounter such tweets? Naturally, like any human being, I bless the tweets that voice my own opinion with a retweet — regardless of how irrational their content — and I scorn the ones I disagree with by unfollowing the perpetrator of such unfair propagandizing. Then, I walk away. So much for Twitter being a conversation platform.

In real life, it is difficult to walk away when you encounter someone in person who disagrees with you. The person in front of you is a real human being, perhaps even someone you have agreed with before. You are compelled to uncover how another rational person could have come to a different conclusion, rather than to dismiss her as a liberal, conservative, anarchist, or hippie on the Internet. You must confront the difference of opinion. Yes, this could result in a shouting match involving nasty names; it could also result in a discussion through which you come to a conclusion neither of you has thought of before. Dialogue, not monologue, brings forth truth.

At least, that is my romantic and idealistic view of how such an encounter should play out. Maybe I am being unrealistic about real life. Maybe human vices are simply being reflected in how people use Twitter. Maybe Twitter is simply acting as a conduit through which people express their inherent irrationality. After all, surely we are masters of our technology — Twitter is only as dangerous as the society that wields it.

I don’t believe Twitter is entirely blameless, though. As McLuhan would say, Twitter calls out for certain forms of content while rejecting others. Technically, I could write a detailed argument on Twitter, cutting the long-form up into 140-character fragments and dumping it into tweet after tweet in reverse order. But my essay would be lost among the more concise opinions, slowly receding into the horizon of yesterday’s news to be replaced with the next issue bubbling to the foreground.

You can retort that Twitter is not designed for long-form, and I will completely agree with you, because that is the point. There is a growing disconnect between what Twitter is designed for and the ways people are compelled to use it. Twitter calls out for discourse and participation in a global conversation while paradoxically pruning anything rational from that conversation. For a tweet to become popular it needs to be witty, concise, and rhetorically powerful — none of which implies rationality.

I find reading tweets like walking through a funhouse: on the right you pass the appeal-to-authority mirror, stop and look at the selective-evidence mirror, and on the way out you laugh at the ad hominem mirror. You walk out of the funhouse without knowing reality and, more importantly, without realizing you don’t. Maybe you even latched onto one of the mirrors as the “real” one, not because it was less fallacious than the others, but because, like someone who wants to believe the mirror that makes them look skinnier, it showed you what you wanted to see. We are now conducting our national conversation on Twitter, the funhouse, while ignoring how it distorts our conversation.

Suppose instead of foolhardily posting an essay onto Twitter, I wise up. I condense my essay by taking its conclusion, call it A, and tweet “I think A”. Perhaps I opt to be witty and say “I think A because the liberals are BASTARDS”. Of course, I’d word it in a clever way and be less direct. But, when you boil it down to its essence, that is what it would say. No longer am I making an argument; rather, I am signaling my opinion to others. I am picking sides. Others who agree with me will tweet their assent and those who don’t agree will tweet how silly I am. Such a “conversation” is more like a room of noisy people talking over each other and not listening because no one is saying anything worth listening to.

I’m not going to pretend that abolishing Twitter will magically make us rational, eloquent thinkers; nor am I going to suggest we should all stop using our beloved tweeting technology. But I do think we need to take a step back as a society and ask ourselves what Twitter is for, what it is doing to our discourse, and how it is framing the way we look at things. From there we can decide its role in society.

This is very hard. Maybe not hard for an individual, but certainly hard as a society. If you look through history you will find that it is rare for a people to look at any technology in an unbiased light. Technology changes a society like heroin restructures the reward pathways in an addict’s brain. The person reconsidering the needle a second time is not the same person who considered it the first. In our case, the “reward pathways” are the news organizations, political groups, lobbyists, unions, and corporations who have adopted Twitter for the rhetorical content it carries, not to mention the general population who use it to stay in touch. So, for now, we are stuck in the Twitter funhouse.


Setting Jetty up to use SSL turns out to be easier than I thought. In fact, at least on Debian, it is done for you already. If you go to Jetty’s etc folder, you’ll see a keystore file and a jetty-ssl.xml file, both set with sensible default values. To enable SSL when you run Jetty, use

java -jar start.jar etc/jetty.xml etc/jetty-ssl.xml

You can now go to https://localhost:8443 and see your webapp using SSL (make sure you use https at the beginning instead of http. This is a common mistake). In your browser you will be warned that this is not a signed certificate, and you will have to add a security exception. This is expected when your project is in development; be sure to get a signed certificate if you plan on deploying!

So you’re done!…if you just want to use the defaults. Instead suppose we wanted to create our own keystore. Continue on, dear reader!

First you need to generate a keystore file that contains the private/public RSA key pair you will use for SSL communications. You can do this using a convenient command named keytool that is bundled with whatever JDK you are using.

Be very careful if you are using a Debian distribution, as the keytool in your PATH may not produce keystores of the right format! I had a blast trying to figure out what was causing the exception “invalid keystore format”. You should ensure that the keytool you are using comes from the *JDK*:
locate keytool

/usr/bin/keytool <-- THIS MAY BE WHAT YOU WANT, BUT...
/usr/lib/jvm/java-6-openjdk-amd64/bin/keytool <-- USE THIS TO BE SAFE
/usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool <-- OR THIS
/usr/lib/jvm/java-6-openjdk-amd64/jre/man/ja_JP.eucJP/man1/keytool.1.gz
/usr/lib/jvm/java-6-openjdk-amd64/jre/man/man1/keytool.1.gz
/usr/lib/jvm/java-6-openjdk-amd64/man/ja_JP.eucJP/man1/keytool.1.gz
/usr/lib/jvm/java-6-openjdk-amd64/man/man1/keytool.1.gz
/usr/share/man/man1/keytool.1.gz
/var/lib/dpkg/alternatives/keytool

When you are developing, and not yet deployed, you can generate a keystore containing a self-signed certificate. You can do this as follows:
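The command itself seems to be missing from the original post; a typical invocation looks something like the following sketch. The keystore file name (keystoreName, matching the name used below) and the alias (jetty) are assumptions, so substitute your own:

```shell
# Generate an RSA key pair inside a new keystore file named "keystoreName".
# keytool prompts for a keystore password, some identifying information
# (the "distinguished name"), and then a password for the key itself.
keytool -genkey -keyalg RSA -alias jetty -keystore keystoreName
```

You can add something like -validity 365 to control how long the self-signed certificate is valid.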

You will be asked to create a password for the keystore, and a password for the key. Remember these! You will need them to configure Jetty properly.

This will create a keystore in your working directory with the name keystoreName that you can then use for general SSL communications. But, remember, we wanted to set this up on Jetty.

Go to your Jetty folder. It is typically located at /usr/share/jetty if you are on a Debian based distribution. You should see something like the following:
contexts javadoc lib resources start.jar
etc jre1.5 logs start-daemon.jar webapps

What’s important is the etc folder, where you will find jetty-ssl.xml — the configuration file for SSL. In the field called “keystore” you should put the path to your keystore. I ended up putting the keystore in Jetty’s etc folder like the default setting, so I didn’t need to change this.

Now, remember the passwords you created when you ran keytool? Of course you do. The keystore password goes in the “password” field and the key password goes in the “keyPassword” field. Which is which, you ask? The first password you created was the keystore password, and the second was the key password.

You should remove the trustPassword fields. Set the port field to anything you want, but make sure that (1) it is different from your http port and (2) you have permissions to use that port.
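Putting the pieces together, the relevant part of jetty-ssl.xml ends up looking roughly like this sketch. The connector class name shown is the Jetty 6-era one and differs between Jetty versions, and the port and passwords are placeholders; check the defaults already in your file:

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.security.SslSocketConnector">
      <Set name="Port">8443</Set>
      <Set name="Keystore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
      <Set name="Password">myKeystorePassword</Set>
      <Set name="KeyPassword">myKeyPassword</Set>
    </New>
  </Arg>
</Call>
```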

Then, as above, you can visit your page using SSL with https://localhost:sslport, where of course you replace sslport with whatever port number you set in the configuration file. Enjoy your SSLed Jetty…but remember: this is not okay for a deployed webapp!


Quick tip. Many people suggest adding setxkbmap -option 'ctrl:nocaps' to your startup programs to turn your caps-lock key into another control key. But this only applies to whatever desktop environment you currently use. If you switch desktop environments, you must add that line all over again.

Instead, open up /etc/default/keyboard in your favorite text editor and change the line XKBOPTIONS="" to XKBOPTIONS="ctrl:nocaps". Now, whenever you start up Debian, the remapping is done for you, regardless of desktop environment.
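If you’d rather not open an editor, the edit can be sketched as a one-liner. This assumes the file already contains an XKBOPTIONS= line, and it needs root, so back up the file first:

```shell
# Back up the original, then rewrite the XKBOPTIONS line in place.
sudo cp /etc/default/keyboard /etc/default/keyboard.bak
sudo sed -i 's/^XKBOPTIONS=.*/XKBOPTIONS="ctrl:nocaps"/' /etc/default/keyboard
```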