Imagine you're building a recommendation algorithm for your new online site. How do you measure its quality, to make sure that it's sending users relevant and personalized content? Click-through rate may be your initial hope…but after a bit of thought, it's not clear that it's the best metric after all.

Take Google's search engine. In many cases, improving the quality of search results will decrease CTR! For example, the ideal scenario for queries like When was Barack Obama born? is that users never have to click, since the question should be answered on the page itself.

Or take Twitter, who one day might want to recommend you interesting tweets. Metrics like CTR, or even number of favorites and retweets, will probably optimize for showing quick one-liners and pictures of funny cats. But is a Reddit-like site what Twitter really wants to be? Twitter, for many people, started out as a news site, so users may prefer seeing links to deeper and more interesting content, even if they're less likely to click on each individual suggestion overall.

Or take eBay, who wants to help you find the products you want to buy. Is CTR a good measure? Perhaps not: more clicks may be an indication that you're having trouble finding what you're looking for. What about revenue? That might not be ideal either: from the user perspective, you want to make your purchases at the lowest possible price, and by optimizing for revenue, eBay may be turning you towards more expensive products that make you a less frequent customer in the long run.

And so on.

So on many online sites, it's unclear how to measure the quality of personalization and recommendations using metrics like CTR, or revenue, or dwell time, or whatever. What's an engineer to do?

Well, consider the fact that many of these are relevance algorithms. Google wants to show you relevant search results. Twitter wants to show you relevant tweets and ads. Netflix wants to recommend you relevant movies. LinkedIn wants to find you relevant people to follow. So why do we so rarely try to measure the relevance of our models directly?

I'm a big fan of man-in-the-machine techniques, so to get around this problem, I'm going to talk about a human evaluation approach to measuring the performance of personalization and discovery products. In particular, I'll use the example of related book suggestions on Amazon as I walk through the rest of this post.

Amazon, and Moving Beyond Log-Based Metrics

(Let's keep motivating why log-based metrics are often imperfect measures of relevance and quality, since this point is important but easy to miss.)

So take Amazon's Customers Who Bought This Item Also Bought feature, which tries to show you related books.

To measure its effectiveness, the standard approach is to run a live experiment and measure the change in metrics like revenue or CTR.

But suppose we replace all of Amazon's related books with shinier, more expensive items. CTR and revenue are likely to increase, as the flashier content draws eyeballs. Is this anything more than a short-term boost, though? Perhaps the change decreases total sales in the long run, as customers start to find Amazon too expensive for their tastes and move to other marketplaces.

Scenarios like these are the machine learning analogue of turning ads into blinking marquees. While they might increase clicks and views initially, they're probably not optimizing user happiness or site quality for the future. So how can we avoid them, and ensure that the quality of our suggestions remains consistently high? This is a related books algorithm, after all – so why, by sticking to a live experiment and metrics like CTR, do we never inspect the relatedness of our recommendations?

Human Evaluation

Solution: let's inject humans into the process. Computers can't measure relatedness (if they could, we'd be done), but people of course can.

For example, in the screenshot below, I asked a worker (on a crowdsourcing platform I've built on my own) to rate the first three Customers Who Bought This Also Bought suggestions for a book on barnesandnoble.com.

(Copying the text from the screenshot:

Fool's Assassin, by Robin Hobb. (original book) "I only just read this recently, but it's one of my favorites of this year's releases. This book was so beautifully written. I've always been a fan of the Fantasy genre, but often times it's all stylistically similar. Hobb has a wonderful way with characters and narration. The story was absolutely heartbreaking and left me wanting another book."

The Broken Eye (Lightbringer Series #3), by Brent Weeks. (first related book) "It's a good, but not great, recommendation. This book sounds interesting enough. It's third in a series though, so I'd have to see how I like the first couple of books first."

Dust and Light: A Sanctuary Novel, by Carol Berg. (second related book) "It's an okay recommendation. I'm not so familiar with this author, but I like some of the premise of the book. I know the saying 'Don't judge a book by its cover...' but the cover art doesn't really appeal to me and kind of turns me off of the book."

Tower Lord, by Anthony Ryan. (third related book) "It's a good, but not great, recommendation. Another completely unfamiliar author to me (kind of like this though, it shows me new places to look for books!) This is also a sequel, though, so I'd have to see how I liked the first book before purchasing this one.")

The recommendations are decent, but already we see a couple ways to improve them:

First, two of the recommendations (The Broken Eye and Tower Lord) are each part of a series, but not Book #1. So one improvement would be to display only the first book in a series, unless the suggestion is a follow-up to the original book itself.

Second, book covers matter! Indeed, the second suggestion looks more like a fantasy romance novel than the kind of fantasy that Robin Hobb tends to write. (So perhaps B&N should invest in some deep learning...)

CTR and revenue certainly wouldn't give us this level of information, and it's not clear that they could even tell us our algorithms are producing irrelevant suggestions in the first place. Nowhere does the related scroll panel make it clear that two of the books are part of a series, so the CTR on those two books would be just as high as if they were indeed the series introductions. And if revenue is low, it's not clear whether it's because the suggestions are bad or because, separately, our pricing algorithms need improvement.

So in general, here's one way to understand the quality of a Do This Next algorithm:

Take a bunch of items (e.g., books if we're Amazon or Barnes & Noble), and generate their related brethren.

Send these pairs off to a bunch of judges (say, by using a crowdsourcing platform like Hybrid), and ask them to rate their relevance.

Analyze the data that comes back.
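To make the analysis step concrete, here's a minimal sketch of how the ratings might be aggregated once they come back. The four-point scale, its numeric weights, and the judgment counts below are all made-up choices for illustration:

```python
from collections import Counter

# Hypothetical mapping from a four-point rating scale to numeric scores;
# both the scale and the weights are arbitrary choices.
SCORES = {
    "very_positive": 1.0,
    "mildly_positive": 0.5,
    "mildly_negative": -0.5,
    "very_negative": -1.0,
}

def relevance_score(ratings):
    """Average the per-judgment scores into one number in [-1, 1]."""
    return sum(SCORES[r] for r in ratings) / len(ratings)

def positive_fraction(ratings):
    """Fraction of judgments that were positive at all."""
    counts = Counter(ratings)
    return (counts["very_positive"] + counts["mildly_positive"]) / len(ratings)

# 100 made-up judgments for one algorithm's suggestions.
ratings = (["very_positive"] * 47 + ["mildly_positive"] * 29
           + ["mildly_negative"] * 14 + ["very_negative"] * 10)
print(relevance_score(ratings))    # 0.445
print(positive_fraction(ratings))  # 0.76
```

Either number can then be tracked over time or compared across algorithms, just like CTR.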

Algorithmic Relevance on Amazon, Barnes & Noble, and Google

Let's make this process concrete. Pretend I'm a newly minted VP of What Customers Also Buy at Amazon, and I want to understand the product's strengths and flaws.

I started by asking a couple hundred of my own human workers to take a book they enjoyed in the past year, and to find it on Amazon. They'd then take the first three related book suggestions from a different author, rate them on the following scale, and explain their ratings.

Great suggestion. I'd definitely buy it. (Very positive)

Decent suggestion. I might buy it. (Mildly positive)

Not a great suggestion. I probably wouldn't buy it. (Mildly negative)

Terrible suggestion. I definitely wouldn't buy it. (Very negative)

(Note: I usually prefer a three-point or five-point Likert scale with a "neutral" option, but I was feeling a little wild.)

For example, here's how a rater reviewed the related books for Anne Rice's The Wolves of Midwinter.

So how good are Amazon's recommendations? Quite good, in fact: 47% of raters said they'd definitely buy the first related book, another 29% said it was good enough that they might buy it, and only 24% of raters disliked the suggestion.

The second and third book suggestions, while a bit worse, seem to perform pretty well too: around 65% of raters rated them positive.

What can we learn from the bad ratings? I ran a follow-up task that asked workers to categorize the bad related books, and here's the breakdown.

Related but different-subtopic. These were book suggestions that were generally related to the original book, but that were in a different sub-topic that the rater wasn't interested in. For example, the first related book for Sun Tzu's The Art of War (a book nominally about war, but which nowadays has become more of a life hack book) was On War (a war treatise), but the rater wasn't actually interested in the military aspects: "I would not buy this book, because it only focuses on military war. I am not interested in that. I am interested in mental tactics that will help me prosper in life."

Completely unrelated. These were book suggestions that were completely unrelated to the original book. For example, a Scrabble dictionary appearing on The Art of War's page.

Uninteresting. These were suggestions that were related, but whose storylines didn't appeal to the rater. "The storyline doesn't seem that exciting. I am not a dog fan and it's about a dog."

Wrong audience. These were book suggestions whose target audiences were quite different from the original book's audiences. In many cases, for example, a related book suggestion would be a children's book, but the original book would be geared towards adults. "This seems to be a children's book. If I had a child I would definitely buy this; alas I do not, so I have no need for it."

Wrong book type. Suggestions in this category were the wrong type of book entirely: textbooks appearing alongside novels, for example.

Disliked author. These recommendations were by similar authors, but one that the rater disliked. "I do not like Amber Portwood. I would definitely not want to read a book by and about her."

Not first in series. Some book recommendations would be for an interesting series the rater wasn't familiar with, but they wouldn't be the first book in the series.

Bad rating. These were book suggestions that had a particularly low Amazon rating.

So to improve their recommendations, Amazon could try improving its topic models, add age-based features to its books, distinguish between textbooks and novels, and invest in series detectors. (Of course, for all I know, they do all this already.)

Competitive Analysis

We now have a general grasp of Amazon's related book suggestions and how they could be improved, and just like we could quote a metric like a CTR of 6.2% or whatnot, we can also now quote a relevance score of 0.62 (or whatever). So let's turn to the question of how Amazon compares to other online booksellers like Barnes & Noble and Google Play.

I took the same task I used above, but this time asked raters to review the related suggestions on those two sites as well.

In short,

Barnes & Noble's algorithm is almost as good as Amazon's: the first three suggestions were rated positive 58% of the time, compared to 68% on Amazon.

But Play Store recommendations are atrocious: a whopping 51% of Google's related book recommendations were marked terrible.

Why are the Play Store's suggestions so bad? Let's look at a couple examples.

Here's the Play Store page for John Green's The Fault in Our Stars, a critics-loved-it book about cancer and romance (and now also a movie).

Two of the suggestions are completely random: a poorly-rated Excel manual and a poorly-reviewed textbook on sexual health. The others are completely unrelated cowboy books, by a different John Green.

Here's the page for The Strain. In this case, all the suggestions are in a different language! And there are only four of them.

Once again asking raters to categorize all of the Play Store's bad recommendations...

45% of the time, the related book suggestions were completely unrelated to the original book in question. For example: displaying a physics textbook on the page for a romance novel.

32% of the time, there simply wasn't a book suggestion at all. (I'm guessing the Play Books catalog is pretty limited.)

14% of the time, the related books were in a different language.

So despite Google's state-of-the-art machine learning elsewhere, its Play Store suggestions couldn't really get much worse.

Side-by-Sides

Let's step back a bit. So far I've been focusing on an absolute judgments paradigm, in which judges rate how relevant a book is to the original on an absolute scale. This model is great for understanding the overall quality of Amazon's related book algorithm.

In many cases, though, we want to use human evaluation to compare experiments. For example, it's common at many companies to:

Launch a "human evaluation A/B test" before a live experiment, both to avoid accidentally sending out incredibly buggy experiments to users, as well as to avoid the long wait required in live tests.

Use a human-generated relevance score as a supplement to live experiment metrics when making launch decisions.

For these kinds of tasks, what's preferable is often a side-by-side model, wherein judges are given two items and asked which one is better. After all, comparative judgments are often much easier to make than absolute ones, and we might want to detect differences at a finer level than what's available on an absolute scale.

The idea is that we can assign a score to each rating (negative, say, if the rater prefers the control item; positive if the rater prefers the experiment), and we aggregate these to form an overall score for the side-by-side. Then in much the same way that drops in CTR may block an experiment launch, a negative human evaluation score should also give much cause for concern.

Unfortunately, I don't have an easy way to generate data for a side-by-side (though I could perform a side-by-side on Amazon vs. Barnes & Noble), so I'll omit an example, but the idea should be pretty clear.
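Still, the scoring itself is easy to sketch on made-up judgments. Everything below (the preference counts, the normal-approximation interval) is invented for illustration:

```python
import math

# Hypothetical side-by-side judgments: +1 if the rater prefers the
# experiment's suggestion, -1 if they prefer control, 0 for no preference.
judgments = [1] * 60 + [0] * 15 + [-1] * 25

def sxs_score(judgments):
    """Mean preference; positive favors the experiment."""
    return sum(judgments) / len(judgments)

def sxs_interval(judgments, z=1.96):
    """Rough 95% normal-approximation interval around the mean score."""
    n = len(judgments)
    mean = sxs_score(judgments)
    var = sum((j - mean) ** 2 for j in judgments) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

score = sxs_score(judgments)
lo, hi = sxs_interval(judgments)
print(round(score, 2))  # 0.35
# If the whole interval sits above zero, the experiment is preferred
# by more than what noise alone would explain.
```

A clearly negative score (or an interval straddling zero on a change we expected to win) is the human-evaluation analogue of a CTR drop blocking a launch.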

Personalization

Here's another subtlety. In my examples above, I asked raters to pick a starting book themselves (one that they read and loved in the past year), and then rate whether they personally would want to read the related suggestions.

Another approach is to pick the starting books for the raters, and then have them rate the related suggestions more objectively, by trying to put themselves in the shoes of someone who'd be interested in the starting item.

Which approach is better? As you can probably guess, there's no clear answer – it depends on the task and goals at hand.

Pros of the first approach:

It's much more nuanced. It can often be hard to put yourself in someone else's shoes: would someone reading Harry Potter be interested in Twilight? On the one hand, they're both fantasy books; on the other hand, Twilight seems a bit more geared towards female audiences.

Pros of the second approach:

Sometimes, objectivity is a good thing. (Should you really care if someone dislikes Twilight simply because Edward reminds her of her boyfriend?)

Allowing people to choose their starting items may bias certain metrics. For instance, people are much more likely to choose popular books to rate, whereas we might want to measure the quality of Amazon's suggestions across a broader, more representative slice of its catalog.

Recap

Let's review what I've discussed so far.

Online, log-based metrics like CTR and revenue aren't necessarily the best measures of a discovery algorithm's quality. Items with high CTR, for example, may simply be racy and flashy, not the most relevant.

So instead of relying on these proxies, let's directly measure the relevance of our recommendations by asking a pool of human raters.

There are a couple of different approaches for choosing which items get judged. We can let the raters choose the items (an approach which is often necessary for personalized algorithms), or we can generate the items ourselves (often useful for more objective tasks like search; this also has the benefit of making it easier to derive popularity-weighted or uniformly-weighted metrics of relevance). We can also take an absolute judgment approach, or use side-by-sides.

By analyzing the data from our evaluations, we can make better launch decisions, discover examples where our algorithms do very well or very poorly, and find patterns for improvement.

What are some of the benefits and applications?

As mentioned, log-based metrics like CTR and revenue don't always capture the signals we want, so human-generated scores of relevance (or other dimensions) are useful complements.

Human evaluations can make iteration quicker and easier. An algorithm change might normally require weeks of live testing before we gather enough data to know how it affects users, but we can easily have a task judged by humans in a couple of hours.

Imagine we're an advertising company, and we choose which ads to show based on a combination of CTR and revenue. Once we gather hundreds of thousands of relevance judgments from our evaluations, we can build a relevance-based machine learning model to add to the mix, thereby injecting a more direct measure of quality into the system.

How can we decide what improvements need to be made to our models? Evaluations give us very concrete feedback and specific examples about what's working and what's wrong.


And finally, I'll end with a call for information. Do you run any evaluations, or use crowdsourcing platforms like Mechanical Turk or Crowdflower (whether for evaluation purposes or not)? Or do you want to? I'd love to talk to you to learn more about what you're doing, so please feel free to send me an email and hit me up!

Imagine you just started a job at a new company. You watched World War Z recently, so you're in a skeptical mood, and given that your last two startups failed from what you believe to be a lack of data, you're giving everything an extra critical eye.

You start by thinking about the impact of the sales team. How much extra revenue are they generating for the company? The sales folks you've met say that over 90% of the leads they've talked to end up buying the company's product – but, you wonder, how many of those leads would have converted anyways?

You take a look at the logs, and notice something interesting: last week was hack week, and half the salesforce took time off from making calls to make Marauder's Maps instead, yet the rate of converted leads remained the same...*

Suddenly, one of your teammates drops by your desk. He's making a batch of Soylent, and he wants you to take a sip. It looks nasty, so you ask what the benefits are, and he responds that his friends who've been drinking it for the past few months just ran a marathon! Oh, did they just start running? Nope, they ran the marathon last year too!...

*Inspired by a true story.

Causal Inference

Causality is incredibly important, yet often extremely difficult to establish.

Do patients who self-select into taking a new drug get better because the drug works, or would they have gotten better anyways? Is your salesforce actually effective, or are they simply talking to the customers who already plan to convert? Is Soylent (or your company's million-dollar ad campaign) truly worth your time?

In an ideal world, we'd be able to run experiments – the gold standard for measuring causality – whenever we wish. In the real world, however, we can't. There are ethical qualms with giving certain patients placebos, or dangerous and untested drugs. Management may be unwilling to take a potential short-term revenue hit by assigning sales to random customers, and a team earning commission-based bonuses may rebel against the thought as well.

How can we understand causal lifts in the absence of an A/B test? This is where propensity modeling, or other techniques of causal inference, comes into play.

Propensity Modeling

So suppose we want to model the effect of drinking Soylent using a propensity model technique. To explain the idea, let's start with a thought experiment.

Imagine Brad Pitt has a twin brother, indistinguishable in every way: Brad 1 and Brad 2 wake up at the same time, they eat the same foods, they exercise the same amount, and so on. One day, Brad 1 happens to receive the last batch of Soylent from a marketer on the street, while Brad 2 does not, and so only Brad 1 begins to incorporate Soylent into his diet. In this scenario, any subsequent difference in behavior between the twins is precisely the drink's effect.

Taking this scenario into the real world, one way to estimate Soylent's effect on health would be as follows:

For every Soylent drinker, find a Soylent abstainer who's as close a match as possible. For example, we might match a Soylent-drinking Jay-Z with a non-Soylent Kanye, a Soylent-drinking Natalie Portman with a non-Soylent Keira Knightley, and a Soylent-drinking JK Rowling with a non-Soylent Stephenie Meyer.

We measure Soylent's effect as the average difference in outcomes within each matched pair.

However, finding closely matching twins is extremely difficult in practice. Is Jay-Z really a close match with Kanye, if Jay-Z sleeps one hour more on average? What about the Jonas Brothers and One Direction?

Propensity modeling, then, is a simplification of this twin matching procedure. Instead of matching pairs of people based on all the variables we have, we simply match all users based on a single number, the likelihood ("propensity") that they'll start to drink Soylent.

In more detail, here's how to build a propensity model.

First, select which variables to use as features. (e.g., what foods people eat, when they sleep, where they live, etc.)

Next, build a probabilistic model (say, a logistic regression) based on these variables to predict whether a user will start drinking Soylent or not. For example, our training set might consist of a set of people, some of whom ordered Soylent in the first week of March 2014, and we would train the classifier to predict which of them became Soylent drinkers.

The model's probabilistic estimate that a user will start drinking Soylent is called a propensity score.

Form some number of buckets, say 10 buckets in total (one bucket covers users with a 0.0 - 0.1 propensity to take the drink, a second bucket covers users with a 0.1 - 0.2 propensity, and so on), and place people into each one.

Finally, compare the drinkers and non-drinkers within each bucket (say, by measuring their subsequent physical activity, weight, or whatever measure of health) to estimate Soylent's causal effect.
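Here's a toy end-to-end sketch of the bucketing and within-bucket comparison on synthetic data. The propensity model is a fixed logistic curve standing in for the trained classifier, and all the numbers (ages, mileage, the true effect of 2.0 weekly miles) are invented, but the mechanics are the idea described above:

```python
import math
import random

random.seed(0)

def propensity(age):
    """Toy stand-in for the trained model: P(drinks Soylent | age).
    A real model would be a logistic regression over many features;
    these weights are made up for illustration."""
    return 1.0 / (1.0 + math.exp(-0.08 * (age - 35)))

# Synthetic population: older users are more likely to drink Soylent and
# (independently) run fewer miles -- a deliberate confounder.
TRUE_EFFECT = 2.0  # extra weekly miles caused by Soylent, by construction
people = []
for _ in range(20000):
    age = random.uniform(20, 50)
    drinker = random.random() < propensity(age)
    miles = 20 - 0.3 * age + TRUE_EFFECT * drinker + random.gauss(0, 1)
    people.append((propensity(age), drinker, miles))

def bucketed_effect(people, n_buckets=10):
    """Bucket by propensity score, compare drinkers vs. non-drinkers
    within each bucket, and average the per-bucket differences."""
    diffs = []
    for b in range(n_buckets):
        lo, hi = b / n_buckets, (b + 1) / n_buckets
        bucket = [(d, m) for p, d, m in people if lo <= p < hi]
        drinker_miles = [m for d, m in bucket if d]
        abstainer_miles = [m for d, m in bucket if not d]
        if drinker_miles and abstainer_miles:
            diffs.append(sum(drinker_miles) / len(drinker_miles)
                         - sum(abstainer_miles) / len(abstainer_miles))
    return sum(diffs) / len(diffs)

# The naive drinkers-vs-abstainers comparison is badly biased by age...
drinker_miles = [m for _, d, m in people if d]
abstainer_miles = [m for _, d, m in people if not d]
naive = (sum(drinker_miles) / len(drinker_miles)
         - sum(abstainer_miles) / len(abstainer_miles))

# ...while the bucketed estimate recovers something close to 2.0.
print(round(naive, 2), round(bucketed_effect(people), 2))
```

With this synthetic data, the naive difference lands well below the true effect (age confounds it downward), while the bucketed estimate comes out close to 2.0.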

For example, here's a hypothetical distribution of Soylent and non-Soylent ages. We see that drinkers tend to be quite a bit older, and this confounding fact is one reason we can't simply run a correlational analysis.

After training a model to estimate Soylent propensity and group users into propensity buckets, this might be a graph of the effect that Soylent has on a person's weekly running mileage.

In the above (hypothetical) graph, each row represents a propensity bucket of people, and the exposure week denotes the first week of March, when the treatment group received their Soylent shipment. We see that prior to that week, both groups of people track quite well. After the treatment group (the Soylent drinkers) start their plan, however, their weekly running mileage ramps up, which forms our estimate of the drink's causal effect.

Other Methods of Causal Inference

Of course, there are many other methods of causal inference on observational data. I'll run through two of my favorites. (I originally wrote this post in response to a question on Quora, which is why I take my examples from there.)

Regression Discontinuity

Quora recently started displaying badges of status on the profiles of its Top Writers, so suppose we want to understand the effect of this feature. (Assume that it's impossible to run an A/B test now that the feature has been launched.) Specifically, does the badge itself cause users to gain more followers?

For simplicity, let's assume that the badge was given to all users who received at least 5000 upvotes in 2013. The idea behind a regression discontinuity design is that the difference between those users who just barely receive a Top Writer badge (i.e., receive 5000 upvotes) and those who just barely don't (i.e., receive 4999 upvotes) is more or less random chance, so we can use this threshold to estimate a causal effect.

For example, in the imaginary graph below, the discontinuity at 5000 upvotes suggests that a Top Writer badge leads to around 100 more followers on average.
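That estimate is easy to sketch on synthetic data. The follower counts, noise, and bandwidth below are all made up; the technique (fit a line on each side of the cutoff, then difference the two predictions at the threshold) is the standard regression discontinuity recipe:

```python
import random

random.seed(1)

def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b * x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

THRESHOLD = 5000   # upvotes needed for a Top Writer badge
TRUE_JUMP = 100    # follower boost from the badge, by construction
BANDWIDTH = 1500   # how far from the cutoff we look

# Synthetic writers: followers grow smoothly with upvotes, plus a jump
# at the threshold. All numbers are invented.
upvotes = [random.uniform(3000, 7000) for _ in range(5000)]
followers = [200 + 0.02 * u
             + (TRUE_JUMP if u >= THRESHOLD else 0)
             + random.gauss(0, 20)
             for u in upvotes]

# Fit a line on each side of the cutoff, then difference the two
# predictions *at* the cutoff: that gap is the causal estimate.
left = [(u, f) for u, f in zip(upvotes, followers)
        if THRESHOLD - BANDWIDTH <= u < THRESHOLD]
right = [(u, f) for u, f in zip(upvotes, followers)
         if THRESHOLD <= u <= THRESHOLD + BANDWIDTH]
a_l, b_l = linear_fit([u for u, _ in left], [f for _, f in left])
a_r, b_r = linear_fit([u for u, _ in right], [f for _, f in right])
jump = (a_r + b_r * THRESHOLD) - (a_l + b_l * THRESHOLD)
print(round(jump, 1))  # close to the true jump of 100
```

The bandwidth matters in practice: too wide and the linear approximation breaks down away from the cutoff; too narrow and the estimate gets noisy.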

Natural Experiments

Understanding the effect of a Top Writer badge is a fairly uninteresting question, though. (It just makes an easy example.) A deeper, more fundamental question could be to ask: what happens when a user discovers a new writer that they love? Does the writer inspire them to write some of their own content, to explore more of the same topics, and through curation lead them to engage with the site even more? How important, in other words, is the connection to a great user as opposed to the reading of random, great posts?

I studied an analogous question when I was at Google, so instead of making up an imaginary Quora case study, I'll describe some of that work here.

So let's suppose we want to understand what would happen if we were able to match users to the perfect YouTube channel. How much is the ultimate recommendation worth?

Does falling in love with a new channel lead to engagement above and beyond activity on the channel itself, perhaps because users return to YouTube specifically for the new channel and stay to watch more? (a multiplicative effect) In the TV world, for example, perhaps many people stay at home on Sunday nights specifically to catch the latest episode of Real Housewives, and channel surf for even more entertainment once it's over.

Does falling in love with a new channel simply increase activity on the channel alone? (an additive effect)

Does a new channel replace existing engagement on YouTube? After all, maybe users only have a limited amount of time they can spend on the site. (a neutral effect)

Does the perfect channel actually cause users to spend less time overall on the site, since maybe they spend less time idly browsing and channel surfing once they have concrete channels they know how to turn to? (a negative effect)

As always, an A/B test would be ideal, but it's impossible in this case to run: we can't force users to fall in love with a channel (we can recommend them channels, but there's no guarantee they'll actually like them), and we can't forcibly block them from certain channels either.

One approach is to use a natural experiment (a scenario in which the universe itself somehow generates a random-like assignment) to study this effect. Here's the idea.

Consider a user who uploads a new video every Wednesday. One month, he lets his subscribers know that he won’t be uploading any new videos for a few weeks, while he goes on vacation.

How do his subscribers respond? Do they stop watching YouTube on Wednesdays, since his channel was the sole reason for their visits? Or is their activity relatively unaffected, since they only watch his content when it appears on the front page?

Imagine, instead, that the channel starts uploading a new video every Friday. Do his subscribers start to visit then as well? And now that they're on YouTube, do they merely stay for the new video, or does their visit lead to a sinkhole of searches and related content too?

As it turns out, these scenarios do happen. For example, here's a calendar of when one popular channel uploads videos. You can see that in 2011, it tended to upload videos on Tuesdays and Fridays, but it shifted to uploads on Wednesday and Saturday at the end of the year.

By using this shift as a natural experiment that "quasi-randomly" removes a well-loved channel on certain days and introduces it on others, we can try to understand the effect of successfully making the perfect recommendation.

(This is probably a somewhat convoluted example of a natural experiment. For an example that perhaps illustrates the idea more clearly, suppose we want to understand the effect of income on mental health. We can't force some people to become poor or rich, and a correlational study is clearly flawed. This NY Times article describes a natural experiment when a group of Cherokee Indians distributed casino profits to its members, thereby "randomly" lifting some of them out of poverty.

Another example, assuming there's nothing special about the period in which hack week occurs, is the use of hack week as an instrument that quasi-randomly "prevents" the sales team from doing their job, as in the scenario I described above.)

Discovering Drivers of Growth

Let's go back to propensity modeling.

Imagine that we're on our company's Growth team, and we're tasked with figuring out how to turn casual users of the site into users that return every day. What do we do?

The propensity modeling approach might be the following. We could take a list of features (installing the mobile app, logging in, signing up for a newsletter, following certain users, etc.), and build a propensity model for each one. We could then rank each feature by its estimated causal effect on engagement, and use the ordered list of features to prioritize our next sprint. (Or we could use these numbers in order to convince the exec team that we need more resources.) This is a slightly more sophisticated version of the idea of building an engagement regression model (or a churn regression model), and examining the weights on each feature.

Despite writing this post, though, I admit I'm generally not a fan of propensity modeling for many applications in the tech world. (I haven't worked in the medical field, so I don't have a strong opinion on its usefulness there, though I think it's a little more necessary there.) I'll save more of my reasons for another time, but in short: causal inference is extremely difficult, and we're never going to be able to control for all the hidden influencers that can bias a treatment. What's more, the mere fact that we have to choose which features to include in our model (and remember: building features is very time-consuming and difficult) means that we already hold strong prior beliefs about each feature's usefulness, whereas what we'd really like to do is discover hidden motivations of engagement that we've never thought of.

So what can we do instead?

If we're trying to understand what drives people to become heavy users of the site, why don't we simply ask them?

In more detail, let's do the following:

First, we'll run a survey on a couple hundred users.

In the survey, we'll ask them whether their engagement on the site has increased, decreased, or remained about the same over the past year. We'll also ask them to explain possible reasons for their change in activity, and to describe how they use the site currently. We can also ask for supplemental details, like their demographic information.

Finally, we can filter the responses down to users who heavily increased their engagement over the past year (or who heavily decreased it, if we're trying to understand churn), and analyze their responses for the reasons behind the change.
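That last step can be as simple as a filter and a tally once the free-text reasons have been coded into categories. A toy sketch, with entirely made-up survey rows:

```python
from collections import Counter

# Hypothetical survey rows: (engagement_change, coded_reason, free_text).
responses = [
    ("increased", "offline hobby", "Took up guitar; now I watch lessons"),
    ("decreased", "too busy", "New job, less free time"),
    ("increased", "offline hobby", "Started cooking; follow recipe channels"),
    ("increased", "new device", "Got a smart TV"),
    ("same", "", "No real change"),
]

# Keep only users whose engagement increased, then tally their reasons.
increased = [r for r in responses if r[0] == "increased"]
reasons = Counter(reason for _, reason, _ in increased)
print(reasons.most_common(1))  # [('offline hobby', 2)]
```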

For example, here's one interesting response I got when I ran this study at YouTube.

"I have always been a big music fan, but recently I took up playing the guitar. Because of my new found passion (playing the guitar) my desire to watch concerts has increased. I started watching a whole lot of music festivals and concerts that are posted on Youtube and other music videos. I have spent a lot of time also watching guitar lessons on Youtube (from www.justinguitar.com)."

This response was representative of a general theme the survey uncovered: one big driver of engagement seemed to come from people discovering a new offline hobby, and using YouTube to increase their appreciation of it. People who wanted to start cooking at home would turn to YouTube for recipe videos, people who started playing tennis or some other sport would go to YouTube for lessons or other great shots, college students would look for channels like Khan Academy to supplement their lectures, and so on. In other words, offline activities were driving online growth, and instead of trying to figure out what kinds of online content people were interested in (which articles did they like on Facebook, who did they follow on Twitter, what did they read on Reddit), perhaps we should have been focusing on bringing their physical hobbies into the digital world.

This "offline hobby" idea certainly wouldn't have been a feature I would have thrown into any engagement model, even if only because it's a very difficult feature to create. (How do we know which videos are related to real-world behavior?) But now that we suspect it's a potentially big driver of growth ("potentially" because, of course, surveys aren't necessarily representative), it's something we can spend a lot more time studying in the logs.

End

To summarize: propensity modeling is a powerful technique for measuring causal effects in the absence of a randomized experiment.

Purely correlational analyses on top of observational studies can be very dangerous, after all. To take my favorite example: if we find that cities with more policemen tend to have more crime, does this mean that we should try to reduce the size of our police forces in order to reduce the nation's amount of crime?

For another example, here's a post by Gelman on contradictory conclusions about hormone replacement therapy in the Harvard Nurses Study.

That said, remember that (as always) a model is only as good as the data you feed it. It's super difficult to account for all the hidden variables that might matter, and what you think is a well-designed causal model may in fact be missing many hidden factors. (I actually remember hearing that a propensity model on the nurses study generated a flawed conclusion, though I can't find any references at the moment.) So consider whether there are other approaches you can take, whether that's an easier-to-understand causal technique or even just asking your users; and even if a randomized experiment seems too difficult to run right now, the effort may be well worth the trouble in the end.

I love studying users and products, and think data science can be extremely useful in guiding product/strategy as a whole. So I thought it would be fun to depart from the usual machine learning and engineering things I write about, and do a quick study of Airbnb.

Think of this like business analysis, or strategy – from a data science point of view.

(It's in slide deck form, of course, because that's how these things roll.)

It turns out LSTMs are a fairly simple extension to neural networks, and they're behind a lot of the amazing achievements deep learning has made in the past few years. So I'll try to present them as intuitively as possible – in such a way that you could have discovered them yourself.

But first, a picture:

Aren't LSTMs beautiful? Let's go.

(Note: if you're already familiar with neural networks and LSTMs, skip to the middle – the first half of this post is a tutorial.)

Neural Networks

Imagine we have a sequence of images from a movie, and we want to label each image with an activity (is this a fight?, are the characters talking?, are the characters eating?).

How do we do this?

One way is to ignore the sequential nature of the images, and build a per-image classifier that considers each image in isolation. For example, given enough images and labels:

Our algorithm might first learn to detect low-level patterns like shapes and edges.

With more data, it might learn to combine these patterns into more complex ones, like faces (two circular things atop a triangular thing atop an oval thing) or cats.

And with even more data, it might learn to map these higher-level patterns into activities themselves (scenes with mouths, steaks, and forks are probably about eating).

This, then, is a deep neural network: it takes an image input, returns an activity output, and – just as we might learn to detect patterns in puppy behavior without knowing anything about dogs (after seeing enough corgis, we discover common characteristics like fluffy butts and drumstick legs; next, we learn advanced features like splooting) – in between it learns to represent images through hidden layers of representations.

Mathematically

I assume people are familiar with basic neural networks already, but let's quickly review them.

A neural network with a single hidden layer takes as input a vector x, which we can think of as a set of neurons.

Each input neuron is connected to a hidden layer of neurons via a set of learned weights.

The hidden layer is fully connected to an output layer, and the jth output neuron outputs \(y_j = \sum_i v_{ij} h_i\). If we need probabilities, we can transform the output layer via a softmax function.

(Note: to make the notation a little cleaner, I assume x and h each contain an extra bias neuron fixed at 1 for learning bias weights.)
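To make the forward pass concrete, here's a minimal numpy sketch of this network. It's a toy illustration with random, untrained weights, and the bias neurons mentioned above are omitted for brevity:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy dimensions: 4 input neurons, 3 hidden neurons, 2 output neurons.
W = rng.normal(size=(3, 4))  # input -> hidden weights
V = rng.normal(size=(2, 3))  # hidden -> output weights

x = rng.normal(size=4)       # input vector
h = np.tanh(W @ x)           # hidden layer: h_i = phi(sum_j w_ij x_j)
y = V @ h                    # output layer: y_j = sum_i v_ij h_i
p = softmax(y)               # probabilities, if we need them
```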

Remembering Information with RNNs

Ignoring the sequential aspect of the movie images is pretty ML 101, though. If we see a scene of a beach, we should boost beach activities in future frames: an image of someone in the water should probably be labeled swimming, not bathing, and an image of someone lying with their eyes closed is probably suntanning. If we remember that Bob just arrived at a supermarket, then even without any distinctive supermarket features, an image of Bob holding a slab of bacon should probably be categorized as shopping instead of cooking.

So what we'd like is to let our model track the state of the world:

After seeing each image, the model outputs a label and also updates the knowledge it's been learning. For example, the model might learn to automatically discover and track information like location (are scenes currently in a house or beach?), time of day (if a scene contains an image of the moon, the model should remember that it's nighttime), and within-movie progress (is this image the first frame or the 100th?). Importantly, just as a neural network automatically discovers hidden patterns like edges, shapes, and faces without being fed them, our model should automatically discover useful information by itself.

When given a new image, the model should incorporate the knowledge it's gathered to do a better job.

This, then, is a recurrent neural network. Instead of simply taking an image and returning an activity, an RNN also maintains internal memories about the world (weights assigned to different pieces of information) to help perform its classifications.

Mathematically

So let's add the notion of internal knowledge to our equations, which we can think of as pieces of information that the network maintains over time.

But this is easy: we know that the hidden layers of neural networks already encode useful information about their inputs, so why not use these layers as the memory passed from one time step to the next? This gives us our RNN equations:

$$h_t = \phi(Wx_t + Uh_{t-1})$$

$$y_t = Vh_t$$

Note that the hidden state computed at time \(t\) (\(h_t\), our internal knowledge) is fed back at the next time step. (Also, I'll use concepts like hidden state, knowledge, memories, and beliefs to describe \(h_t\) interchangeably.)
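A single RNN time step is only a small addition to a feedforward network; here's a sketch with toy dimensions and random weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 5, 3

W = rng.normal(size=(n_hid, n_in))   # input -> hidden
U = rng.normal(size=(n_hid, n_hid))  # hidden -> hidden (the recurrence)
V = rng.normal(size=(n_out, n_hid))  # hidden -> output

def rnn_step(x_t, h_prev):
    # h_t = phi(W x_t + U h_{t-1});  y_t = V h_t
    h_t = np.tanh(W @ x_t + U @ h_prev)
    return V @ h_t, h_t

h = np.zeros(n_hid)                      # initial knowledge: nothing
for x_t in rng.normal(size=(6, n_in)):   # a toy sequence of 6 inputs
    y_t, h = rnn_step(x_t, h)            # each step feeds h back in
```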

Longer Memories through LSTMs

Let's think about how our model updates its knowledge of the world. So far, we've placed no constraints on this update, so its knowledge can change pretty chaotically: at one frame it thinks the characters are in the US, at the next frame it sees the characters eating sushi and thinks they're in Japan, and at the next frame it sees polar bears and thinks they're on Hydra Island. Or perhaps it has a wealth of information to suggest that Alice is an investment analyst, but decides she's a professional assassin after seeing her cook.

This chaos means information quickly transforms and vanishes, and it's difficult for the model to keep a long-term memory. So what we'd like is for the network to learn how to update its beliefs (scenes without Bob shouldn't change Bob-related information, scenes with Alice should focus on gathering details about her), in a way that its knowledge of the world evolves more gently.

This is how we do it.

Adding a forgetting mechanism. If a scene ends, for example, the model should forget the current scene location, the time of day, and reset any scene-specific information; however, if a character dies in the scene, it should continue remembering that he's no longer alive. Thus, we want the model to learn a separate forgetting/remembering mechanism: when new inputs come in, it needs to know which beliefs to keep or throw away.

Adding a saving mechanism. When the model sees a new image, it needs to learn whether any information about the image is worth using and saving. Maybe your mom sent you an article about the Kardashians, but who cares?

So when a new input comes in, the model first forgets any long-term information it decides it no longer needs. Then it learns which parts of the new input are worth using, and saves them into its long-term memory.

Focusing long-term memory into working memory. Finally, the model needs to learn which parts of its long-term memory are immediately useful. For example, Bob's age may be a useful piece of information to keep in the long term (children are more likely to be crawling, adults are more likely to be working), but is probably irrelevant if he's not in the current scene. So instead of using the full long-term memory all the time, it learns which parts to focus on instead.

This, then, is a long short-term memory network. Whereas an RNN can overwrite its memory at each time step in a fairly uncontrolled fashion, an LSTM transforms its memory in a very precise way: by using specific learning mechanisms for which pieces of information to remember, which to update, and which to pay attention to. This helps it keep track of information over longer periods of time.

Mathematically

Let's describe the LSTM additions mathematically.

At time \(t\), we receive a new input \(x_t\). We also have our long-term and working memories passed on from the previous time step, \(ltm_{t-1}\) and \(wm_{t-1}\) (both n-length vectors), which we want to update.

We'll start with our long-term memory. First, we need to know which pieces of long-term memory to continue remembering and which to discard, so we want to use the new input and our working memory to learn a remember gate of n numbers between 0 and 1, each of which determines how much of a long-term memory element to keep. (A 1 means to keep it, a 0 means to forget it entirely.)

Naturally, we can use a small neural network to learn this remember gate:

$$remember_t = \sigma(W_r x_t + U_r wm_{t-1}) $$

(Notice the similarity to our previous network equations; this is just a shallow neural network. Also, we use a sigmoid activation because we need numbers between 0 and 1.)

Next, we need to compute the information we can learn from \(x_t\), i.e., a candidate addition to our long-term memory:

$$ ltm'_t = \phi(W_l x_t + U_l wm_{t-1}) $$

\(\phi\) is an activation function, commonly chosen to be \(\tanh\).

Before we add the candidate into our memory, though, we want to learn which parts of it are actually worth using and saving:

$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$

(Think of what happens when you read something on the web. While a news article might contain information about Hillary, you should ignore it if the source is Breitbart.)

Let's now combine all these steps. After forgetting memories we don't think we'll ever need again and saving useful pieces of incoming information, we have our updated long-term memory:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$

where \(\circ\) denotes element-wise multiplication.

Next, let's update our working memory. We want to learn how to focus our long-term memory into information that will be immediately useful. (Put differently, we want to learn what to move from an external hard drive onto our working laptop.) So we learn a focus/attention vector:

$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$

Our working memory is then

$$wm_t = focus_t \circ \phi(ltm_t)$$

In other words, we pay full attention to elements where the focus is 1, and ignore elements where the focus is 0.

And we're done! Hopefully this made it into your long-term memory as well.
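Putting the pieces together, here's a minimal numpy sketch of one LSTM step, with toy dimensions and random (untrained) weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n = 4, 5  # input size, memory size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One (W, U) weight pair per sub-mechanism, matching the equations above.
Wr, Ur = rng.normal(size=(n, n_in)), rng.normal(size=(n, n))  # remember gate
Ws, Us = rng.normal(size=(n, n_in)), rng.normal(size=(n, n))  # save gate
Wf, Uf = rng.normal(size=(n, n_in)), rng.normal(size=(n, n))  # focus gate
Wl, Ul = rng.normal(size=(n, n_in)), rng.normal(size=(n, n))  # candidate memory

def lstm_step(x_t, ltm_prev, wm_prev):
    remember  = sigmoid(Wr @ x_t + Ur @ wm_prev)   # which memories to keep
    save      = sigmoid(Ws @ x_t + Us @ wm_prev)   # which new info to store
    focus     = sigmoid(Wf @ x_t + Uf @ wm_prev)   # which memories to surface
    candidate = np.tanh(Wl @ x_t + Ul @ wm_prev)   # candidate addition
    ltm = remember * ltm_prev + save * candidate   # updated long-term memory
    wm  = focus * np.tanh(ltm)                     # updated working memory
    return ltm, wm

ltm, wm = np.zeros(n), np.zeros(n)
for x_t in rng.normal(size=(6, n_in)):  # a toy sequence of 6 inputs
    ltm, wm = lstm_step(x_t, ltm, wm)
```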

To summarize, whereas a vanilla RNN uses one equation to update its hidden state/memory:

$$h_t = \phi(Wx_t + Uh_{t-1})$$

An LSTM uses several:

$$ltm_t = remember_t \circ ltm_{t-1} + save_t \circ ltm'_t$$

$$wm_t = focus_t \circ \tanh(ltm_t)$$

where each memory/attention sub-mechanism is just a mini brain of its own:

$$remember_t = \sigma(W_r x_t+ U_r wm_{t-1}) $$

$$save_t = \sigma(W_s x_t + U_s wm_{t-1})$$

$$focus_t = \sigma(W_f x_t + U_f wm_{t-1})$$

$$ ltm'_t = \tanh(W_l x_t + U_l wm_{t-1}) $$

(Note: the terminology and variable names I've been using are different from the usual literature. Here are the standard names, which I'll use interchangeably from now on:

The long-term memory, \(ltm_t\), is usually called the cell state, denoted \(c_t\).

The working memory, \(wm_t\), is usually called the hidden state, denoted \(h_t\). This is analogous to the hidden state in vanilla RNNs.

The remember vector, \(remember_t\), is usually called the forget gate (despite the fact that a 1 in the forget gate still means to keep the memory and a 0 still means to forget it), denoted \(f_t\).

The save vector, \(save_t\), is usually called the input gate (as it determines how much of the input to let into the cell state), denoted \(i_t\).

The focus vector, \(focus_t\), is usually called the output gate, denoted \(o_t\).
)

Snorlax

I could have caught a hundred Pidgeys in the time it took me to write this post, so here's a cartoon.

Neural Networks

Recurrent Neural Networks

LSTMs

Learning to Code

Let's look at a few examples of what an LSTM can do. Following Andrej Karpathy's terrific post, I'll use character-level LSTM models that are fed sequences of characters and trained to predict the next character in the sequence.

While this may seem a bit toyish, character-level models can actually be very useful, even on top of word models. For example:

Imagine a code autocompleter smart enough to allow you to program on your phone. An LSTM could (in theory) track the return type of the method you're currently in, and better suggest which variable to return; it could also know without compiling whether you've made a bug by returning the wrong type.

NLP applications like machine translation often have trouble dealing with rare terms. How do you translate a word you've never seen before, or convert adjectives to adverbs? Even if you know what a tweet means, how do you generate a new hashtag to capture it? Character models can daydream new terms, so this is another area with interesting applications.
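As a toy sketch of what character-level training data looks like (this is just an illustration of next-character pairs, not the actual training pipeline):

```python
# Character-level modeling reduces to next-character prediction: each
# character in the text is an input, and the following character is the label.
text = "public static void main"

# Map each distinct character to an integer index for the model.
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

# Training pairs: predict text[i + 1] given everything up to text[i].
pairs = [(text[i], text[i + 1]) for i in range(len(text) - 1)]
```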

So to start, I spun up an EC2 p2.xlarge spot instance, and trained a 3-layer LSTM on the Apache Commons Lang codebase. Here's a program it generates after a few hours.

While the code certainly isn't perfect, it's better than a lot of data scientists I know. And we can see that the LSTM has learned a lot of interesting (and correct!) coding behavior:

It knows how to structure classes: a license up top, followed by packages and imports, followed by comments and a class definition, followed by variables and methods. Similarly, it knows how to create methods: comments follow the correct orders (description, then @param, then @return, etc.), decorators are properly placed, and non-void methods end with appropriate return statements. Crucially, this behavior spans long ranges of code – see how giant the blocks are!

It can also track subroutines and nesting levels: indentation is always correct, and if statements and for loops are always closed out.

It even knows how to create tests.

How does the model do this? Let's look at a few of the hidden states.

Here's a neuron that seems to track the code's outer level of indentation:

(As the LSTM moves through the sequence, its neurons fire at varying intensities. The picture represents one particular neuron, where each row is a sequence and characters are color-coded according to the neuron's intensity; dark blue shades indicate large, positive activations, and dark red shades indicate very negative activations.)

And here's a neuron that counts down the spaces between tabs:

For kicks, here's the output of a different 3-layer LSTM trained on TensorFlow's codebase:

Investigating LSTM Internals

Let's dig a little deeper. We looked in the last section at examples of hidden states, but I wanted to play with LSTM cell states and their other memory mechanisms too. Do they fire when we expect, or are there surprising patterns?

Counting

To investigate, let's start by teaching an LSTM to count. (Remember how the Java and Python LSTMs were able to generate proper indentation!) So I generated sequences of the form

aaaaaXbbbbb

(N "a" characters, followed by a delimiter X, followed by N "b" characters, where 1 <= N <= 10), and trained a single-layer LSTM with 10 hidden neurons.
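Generating this kind of training data is a one-liner; here's a sketch (the dataset size is an illustrative value, not necessarily what I used):

```python
import random

def make_sequence(n):
    # N "a"s, a delimiter, then N "b"s, e.g. "aaaXbbb" for n=3.
    return "a" * n + "X" + "b" * n

random.seed(0)
training_data = [make_sequence(random.randint(1, 10)) for _ in range(1000)]
```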

As expected, the LSTM learns perfectly within its training range – and can even generalize a few steps beyond it. (Although it starts to fail once we try to get it to count to 19.)

If we look at its internals, we expect to find a hidden state neuron that counts the number of a's. And we do:

I built a small web app to play around with LSTMs, and Neuron #2 seems to be counting both the number of a's it's seen, as well as the number of b's. (Remember that cells are shaded according to the neuron's activation, from dark red [-1] to dark blue [+1].)

What about the cell state? It behaves similarly:

One interesting thing is that the working memory looks like a "sharpened" version of the long-term memory. Does this hold true in general?

It does. (This is exactly as we would expect, since the long-term memory gets squashed by the tanh activation function and the output gate limits what gets passed on.) For example, here is an overview of all 10 cell state nodes at once. We see plenty of light-colored cells, representing values close to 0.

In contrast, the 10 working memory neurons look much more focused. Neurons 1, 3, 5, and 7 are even zeroed out entirely over the first half of the sequence.

Let's go back to Neuron #2. Here are the candidate memory and input gate. They're relatively constant over each half of the sequence – as if the neuron is calculating a += 1 or b += 1 at each step.

Finally, here's an overview of all of Neuron 2's internals:

If you want to investigate the different counting neurons yourself, you can play around with the visualizer here.

(Note: this is far from the only way an LSTM can learn to count, and I'm anthropomorphizing quite a bit here. But I think viewing the network's behavior is interesting and can help build better models – after all, many of the ideas in neural networks come from analogies to the human brain, and if we see unexpected behavior, we may be able to design more efficient learning mechanisms.)

Count von Count

Let's look at a slightly more complicated counter. This time, I generated sequences of the form

aaXaXaaYbbbbb

(N a's with X's randomly sprinkled in, followed by a delimiter Y, followed by N b's). The LSTM still has to count the number of a's, but this time needs to ignore the X's as well.

Here's the full LSTM. We expect to see a counting neuron, but one where the input gate is zero whenever it sees an X. And we do!

Above is the cell state of Neuron 20. It increases until it hits the delimiter Y, and then decreases to the end of the sequence – just like it's calculating a num_bs_left_to_print variable that increments on a's and decrements on b's.

If we look at its input gate, it is indeed ignoring the X's:

Interestingly, though, the candidate memory fully activates on the irrelevant X's – which shows why the input gate is needed. (Although, if the input gate weren't part of the architecture, the network would presumably have learned to ignore the X's some other way, at least for this simple example.)

This neuron is interesting as it only activates when reading the delimiter "Y" – and yet it still manages to encode the number of a's seen so far in the sequence. (It may be hard to tell from the picture, but when reading Y's belonging to sequences with the same number of a's, all the cell states have values either identical or within 0.1% of each other. You can see that Y's with fewer a's are lighter than those with more.) Perhaps some other neuron sees Neuron 10 slacking and helps a buddy out.

Remembering State

Next, I wanted to look at how LSTMs remember state. I generated sequences of the form

AxxxxxxYaBxxxxxxYb

(i.e., an "A" or "B", followed by 1-10 x's, then a delimiter "Y", ending with a lowercase version of the initial character). This way the network needs to remember whether it's in an "A" or "B" state.

We expect to find a neuron that fires when remembering that the sequence started with an "A", and another neuron that fires when remembering that it started with a "B". We do.

For example, here is an "A" neuron that activates when it reads an "A", and remembers until it needs to generate the final character. Notice that the input gate ignores all the "x" characters in between.

Here is its "B" counterpart:

One interesting point is that even though knowledge of the A vs. B state isn't needed until the network reads the "Y" delimiter, the hidden state fires throughout all the intermediate inputs anyways. This seems a bit "inefficient", but perhaps it's because the neurons are doing a bit of double-duty in counting the number of x's as well.

Copy Task

Finally, let's look at how an LSTM learns to copy information. (Recall that our Java LSTM was able to memorize and copy an Apache license.)

(Note: if you think about how LSTMs work, remembering lots of individual, detailed pieces of information isn't something they're very good at. For example, you may have noticed that one major flaw of the LSTM-generated code was that it often made use of undefined variables – the LSTMs couldn't remember which variables were in scope. This isn't surprising, since it's hard to use single cells to efficiently encode multi-valued information like characters, and LSTMs don't have a natural mechanism to chain adjacent memories to form words. Memory networks and neural Turing machines are two extensions to neural networks that help fix this, by augmenting with external memory components. So while copying isn't something LSTMs do very efficiently, it's fun to see how they try anyways.)

For this copy task, I trained a tiny 2-layer LSTM on sequences of the form

baaXbaaabcXabc

(i.e., a 3-character subsequence composed of a's, b's, and c's, followed by a delimiter "X", followed by the same subsequence).
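For reference, here's a sketch of how such sequences can be generated (illustrative only, not the exact script I used):

```python
import random

def make_copy_sequence(length=3, alphabet="abc"):
    # A random subsequence, a delimiter "X", then the same subsequence again.
    sub = "".join(random.choice(alphabet) for _ in range(length))
    return sub + "X" + sub

random.seed(0)
seq = make_copy_sequence()
```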

I wasn't sure what "copy neurons" would look like, so in order to find neurons that were memorizing parts of the initial subsequence, I looked at their hidden states when reading the delimiter X. Since the network needs to encode the initial subsequence, its states should exhibit different patterns depending on what they're learning.

The graph below, for example, plots Neuron 5's hidden state when reading the "X" delimiter. The neuron is clearly able to distinguish sequences beginning with a "c" from those that don't.

For another example, here is Neuron 20's hidden state when reading the "X". It looks like it picks out sequences beginning with a "b".

Interestingly, if we look at Neuron 20's cell state, it almost seems to capture the entire 3-character subsequence by itself (no small feat given its one-dimensionality!):

Here are Neuron 20's cell and hidden states, across the entire sequence. Notice that its hidden state is turned off over the entire initial subsequence (perhaps expected, since its memory only needs to be passively kept at that point).

However, if we look more closely, the neuron actually seems to be firing whenever the next character is a "b". So rather than being a "the sequence started with a b" neuron, it appears to be a "the next character is a b" neuron.

As far as I can tell, this pattern holds across the network – all the neurons seem to be predicting the next character, rather than memorizing characters at specific positions. For example, Neuron 5 seems to be a "next character is a c" predictor.

I'm not sure if this is the default kind of behavior LSTMs learn when copying information, or what other copying mechanisms are available as well.

States and Gates

To really home in on the purpose of the different states and gates in an LSTM, let's repeat the previous section with a small pivot.

Cell State and Hidden State (Memories)

We originally described the cell state as a long-term memory, and the hidden state as a way to pull out and focus these memories when needed.

So when a memory is currently irrelevant, we expect the hidden state to turn off – and that's exactly what happens for this sequence copying neuron.

Forget Gate

The forget gate discards information from the cell state (0 means to completely forget, 1 means to completely remember), so we expect it to fully activate when it needs to remember something exactly, and to turn off when information is never going to be needed again.

That's what we see with this "A" memorizing neuron: the forget gate fires hard to remember that it's in an "A" state while it passes through the x's, and turns off once it's ready to generate the final "a".

Input Gate (Save Gate)

We described the job of the input gate (what I originally called the save gate) as deciding whether or not to save information from a new input. Thus, it should turn off at useless information.

And that's what this selective counting neuron does: it counts the a's and b's, but ignores the irrelevant x's.

What's amazing is that nowhere in our LSTM equations did we specify that this is how the input (save), forget (remember), and output (focus) gates should work. The network just learned what's best.

Extensions

Now let's recap how you could have discovered LSTMs by yourself.

First, many of the problems we'd like to solve are sequential or temporal of some sort, so we should incorporate past learnings into our models. But we already know that the hidden layers of neural networks encode useful information, so why not use these hidden layers as the memories we pass from one time step to the next? And so we get RNNs.

But we know from our own behavior that we don't keep track of knowledge willy-nilly; when we read a new article about politics, we don't immediately believe whatever it tells us and incorporate it into our beliefs of the world. We selectively decide what information to save, what information to discard, and what pieces of information to use to make decisions the next time we read the news. Thus, we want to learn how to gather, update, and apply information – and why not learn these things through their own mini neural networks? And so we get LSTMs.

And now that we've gone through this process, we can come up with our own modifications.

For example, maybe you think it's silly for LSTMs to distinguish between long-term and working memories – why not have one? Or maybe you find separate remember gates and save gates kind of redundant – anything we forget should be replaced by new information, and vice-versa. And now you've come up with one popular LSTM variant, the GRU.
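Concretely, the GRU collapses things into a single state \(h_t\) and a pair of gates (this is a standard formulation, though conventions for which side of the update the gate multiplies vary across papers):

$$z_t = \sigma(W_z x_t + U_z h_{t-1})$$

$$r_t = \sigma(W_r x_t + U_r h_{t-1})$$

$$\tilde{h}_t = \phi(W x_t + U(r_t \circ h_{t-1}))$$

$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t$$

Here \(z_t\) plays the role of a combined remember/save gate (anything forgotten is replaced by new information), and \(r_t\) decides how much of the old state to use when forming the candidate.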

Or maybe you think that when deciding what information to remember, save, and focus on, we shouldn't rely on our working memory alone – why not use our long-term memory as well? And now you've discovered Peephole LSTMs.

Making Neural Nets Great Again

Let's look at one final example, using a 2-layer LSTM trained on Trump's tweets. Despite the tiny dataset, it's enough to learn a lot of patterns.

For example, here's a neuron that tracks its position within hashtags, URLs, and @mentions:

And here are some of the proclamations the LSTM generates (okay, one of these is a real tweet):

Unfortunately, the LSTM merely learned to ramble like a madman.

Recap

That's it. To summarize, here's what you've learned:

Here's what you should save:

And now it's time for that donut.

Thanks to Chen Liang for some of the TensorFlow code I used, Ben Hamner and Kaggle for the Trump dataset, and, of course, Schmidhuber and Hochreiter for their original paper. If you want to explore the LSTMs yourself, feel free to play around!

When each of these events happened, people instantly came to Twitter – and, in particular, Twitter search – to discover what was happening.

From a search and advertising perspective, however, these sudden events pose several challenges:

The queries people perform have never before been seen, so it’s impossible to know beforehand what they mean. How would you know that #bindersfullofwomen refers to politics, and not office accessories, or that people searching for “Horses and Bayonets” are interested in the debates?

Since these spikes in search queries are so short-lived, there’s only a short window of opportunity to learn what they mean.

So an event happens, people instantly come to Twitter to search for the event, and we need to teach our systems what these queries mean as quickly as we can, because in just a few hours those searches will be gone.

How do we do this? We’ll describe a novel real-time human computation engine we built that allows us to find search queries as soon as they’re trending, send these queries to real humans to be judged, and finally incorporate these human annotations into our backend models.

Overview

Before we dive into the details, here’s an overview of how the system works.

(1) First, we monitor for which search queries are currently popular.

Behind the scenes: we run a Storm topology that tracks statistics on search queries.

For example: the query “Big Bird” may be averaging zero searches a day, but at 6pm on October 3, we suddenly see a spike in searches from the US.

(2) Next, as soon as we discover a new popular search query, we send it to our human evaluation systems, where judges are asked a variety of questions about the query.

Behind the scenes: when the Storm topology detects that a query has reached sufficient popularity, it connects to a Thrift API that dispatches the query to Amazon’s Mechanical Turk service, and then polls Mechanical Turk for a response.

For example: as soon as we notice “Big Bird” spiking, we may ask human judges to categorize the query, or provide other information (e.g., whether there are likely to be interesting pictures of the query, or whether the query is about a person or an event) that helps us serve relevant tweets and ads.

Finally, after a response from a judge is received, we push the information to our backend systems, so that the next time a user searches for a query, our machine learning models will make use of the additional information. For example, suppose our human judges tell us that “Big Bird” is related to politics; the next time someone performs this search, we know to surface ads by @barackobama or @mittromney, not ads about Dora the Explorer.

Let’s now explore the first two sections above in more detail.

Monitoring for popular queries

Storm is a distributed system for real-time computation. In contrast to batch systems like Hadoop, which often introduce delays of hours or more, Storm allows us to run online data processing algorithms to discover search spikes as soon as they happen.

In brief, running a job on Storm involves creating a Storm topology that describes the processing steps that must occur, and deploying this topology to a Storm cluster. A topology itself consists of three things:

Tuple streams of data. In our case, these may be tuples of (search query, timestamp).

Spouts that produce these tuple streams. In our case, we attach spouts to our search logs, which get written to every time a search occurs.

Bolts that process tuple streams. In our case, we use bolts for operations like updating total query counts, filtering out non-English queries, and checking whether an ad is currently being served up for the query.

Here’s a step-by-step walkthrough of how our popular query topology works:

Whenever you perform a search on Twitter, the search request gets logged to a Kafka queue.

The Storm topology attaches a spout to this Kafka queue, and the spout emits a tuple containing the query and other metadata (e.g., the time the query was issued and its location) to a bolt for processing.

This bolt updates the count of the number of times we’ve seen this query, checks whether the query is “currently popular” (using various statistics like time-decayed counts, the geographic distribution of the query, and the last time this query was sent for annotations), and dispatches it to our human computation pipeline if so.
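The post doesn’t spell out the popularity statistics, but the time-decayed counts the bolt uses can be sketched with an exponentially decayed counter. The class name, half-life, and spike threshold below are illustrative assumptions, not Twitter’s actual values:

```python
import math

class DecayedCounter:
    """Exponentially time-decayed count of searches for one query.

    half_life_secs controls how quickly old searches stop counting;
    the value here is an illustrative guess.
    """

    def __init__(self, half_life_secs=3600.0):
        self.decay_rate = math.log(2) / half_life_secs
        self.count = 0.0
        self.last_update = 0.0

    def add(self, timestamp, weight=1.0):
        # Decay the running count down to `timestamp`, then add this search.
        elapsed = timestamp - self.last_update
        self.count *= math.exp(-self.decay_rate * elapsed)
        self.count += weight
        self.last_update = timestamp

def is_spiking(counter, baseline_per_window, threshold=10.0):
    # Flag a query whose decayed count far exceeds its historical baseline.
    return counter.count > threshold * max(baseline_per_window, 1.0)
```

A query like “Big Bird”, averaging zero searches a day, would cross this kind of threshold within minutes of a genuine burst, while steady background queries stay below it.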

One interesting feature of our popularity algorithm is that we often rejudge queries that have been annotated before, since the intent of a search can change. For example, perhaps people normally search for “Clint Eastwood” because they’re interested in his movies, but during the Republican National Convention users may have wanted to see tweets that were more political in nature.

Human evaluation of popular search queries

At Twitter, we use human computation for a variety of tasks. (See also Clockwork Raven, an open-source crowdsourcing platform we built that makes launching tasks easier.) For example, we often run experiments to measure ad relevance and search quality, we use it to gather data to train and evaluate our machine learning models, and in this section we’ll describe how we use it to boost our understanding of popular search queries.

So suppose that our Storm topology has detected that the query “Big Bird” is suddenly spiking. Since the query may remain popular for only a few hours, we send it off to live humans, who can help us quickly understand what it means; this dispatch is performed via a Thrift service that allows us to design our tasks in a web frontend, and later programmatically submit them to crowdsourcing platforms like Mechanical Turk using any of the different languages we use across Twitter.

On our crowdsourcing platforms, judges are asked several questions about the query that help us serve better ads. Without going into the exact questions, here are flavors of a few possibilities:

What category does the query belong to? For example, “Stanford” may typically be an education-related query, but perhaps there’s a football game between Stanford and Berkeley at the moment, in which case the current search intent would be sports.

Does the query refer to a person? If so, who, and what is their Twitter handle if they have one? For example, the query “Happy Birthday Harry” may be trending, but it’s hard to know beforehand which of the numerous celebrities named Harry it’s referring to. Is it One Direction’s Harry Styles, in which case the searcher is likely to be interested in teen pop? Harry Potter, in which case the searcher is likely to be interested in fantasy novels? Or someone else entirely?

Turkers in the machine

Since humans are core to this system, let’s describe how our workforce was designed to give us fast, reliable results.

To complete our tasks, we use a small custom pool of judges to ensure high quality. Other typical possibilities in the crowdsourcing world are to use a static set of in-house judges, to use the standard worker filters that Amazon provides, or to go through an outside company like Crowdflower. We’ve experimented with these other solutions, and while they have their own benefits, we found that a custom pool fit our needs best for a few reasons:

In-house judges can provide high-quality work as well, but they usually work standard hours (for example, 9 to 5 if they work onsite, or a relatively fixed and limited set of hours if they work from home), it can be difficult to communicate with them and schedule them for work, and it’s hard to scale the hiring of more judges.

Using Crowdflower or Amazon’s standard filters makes it easy to scale the workforce, but their trust algorithms aren’t perfect, so an endless problem is that spammy workers get through and many of the judgments are of very poor quality. Two methods of combating low quality are to seed gold-standard examples (for which you know the true response) throughout your task, or to use statistical analysis to determine which workers are the good ones; but gold-standard examples can be time-consuming and expensive to create, and we often run free-response, exploratory tasks for which these solutions don’t work. Another problem is that these filters give you a fluid, constantly changing set of workers, which makes them hard to train.

In contrast:

Our custom pool of judges works virtually all day. For many of them, this is a full-time job, and they’re geographically distributed, so our tasks complete quickly at all hours; we can easily ask for thousands of judgments before lunch and have them finished by the time we get back, which makes iterating on our experiments much easier.

We have several forums, mailing lists, and even live chatrooms set up, all of which makes it easy for judges to ask us questions and to respond to feedback. Our judges will even give us suggestions on how to improve our tasks; for example, when we run categorization tasks, they’ll often report helpful categories that we should add.

Since we only launch tasks on demand, and Amazon provides a ready source of workers if we ever need more, our judges are never idly twiddling their thumbs waiting for tasks or completing busywork, and our jobs are rarely backlogged.

Because our judges are culled from the best of the crowdsourcing world, they’re experts at the kinds of tasks we send, and can often provide higher quality at a faster rate than what even in-house judges provide. For example, they’ll often use the forums and chatrooms to collaborate amongst themselves to give us the best judgments, and they’re already familiar with the Firefox and Chrome scripts that help them be the most efficient at their work.

All the benefits described above are especially valuable in this real-time search annotation case:

Having highly trusted workers means we don’t need to wait for multiple annotations on a single search query to confirm validity, so we can send responses to our backend as soon as a single judge responds. This entire pipeline is designed for real-time, after all, so the lower the latency of the human evaluation step, the better.

The static nature of our custom pool means that the judges are already familiar with our questions, and don’t need to be trained again.

Because our workers aren’t limited to a fixed schedule or location, they can work anywhere, anytime – which is a requirement for this system, since global event spikes on Twitter are not beholden to a 9-to-5.

And with the multiple easy avenues of communication we have set up, it’s easy for us to answer questions that might arise when we add new questions or modify existing ones.

Thanks

Thanks to everyone on the Revenue and Storm teams, as well as our Turkers, for helping us launch this project.

(For some background: the contest provided a training dataset of edges and a test set of nodes, and contestants were asked to predict missing outbound edges on the test set, using mean average precision as the evaluation metric.)
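As a reference point for that metric: average precision for a single node rewards placing true edges near the top of the predicted list, and the leaderboard score is the mean over all test nodes. A minimal sketch (one common formulation; the contest’s exact truncation rules may differ):

```python
def average_precision(predicted, actual):
    """Mean of precision@k over the ranks k where a true edge appears."""
    actual = set(actual)
    if not actual or not predicted:
        return 0.0
    hits, total = 0, 0.0
    for k, p in enumerate(predicted, start=1):
        if p in actual:
            hits += 1
            total += hits / k
    return total / min(len(actual), len(predicted))

def mean_average_precision(predictions, truths):
    """predictions and truths map each test node to a list of destination nodes."""
    aps = [average_precision(predictions[node], truths[node]) for node in truths]
    return sum(aps) / len(aps)
```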

Exploration

What does the network look like? I wanted to play around with the data a bit first just to get a rough feel, so I made an app to interact with the network around each node.

The node in black is a selected node from the training set, and we perform a breadth-first walk of the graph out to a maximum distance of 3 to uncover the local network. Nodes are sized according to their distance from the center, and colored according to a chosen metric (a personalized PageRank in this case; more on this later).

We can see that the central node is friends with three other users (in red), two of whom have fairly large, disjoint networks.

There are quite a few dangling nodes (nodes at distance 3 with only one connection to the rest of the local network), though, so let’s remove these to reveal the core structure:

And here’s an embedded version you can manipulate inline:

Since the default view doesn’t encode the distinction between following and follower relationships, we can mouse over each node to see who it follows and who it’s followed by. Here, for example, is the following/follower network of one of the central node’s friends:

The moused-over node is highlighted in black, its friends (users who both follow the node and are followed back in turn) are colored in purple, its followees in teal, and its followers in orange. We can also see that the node shares a friend with the central user (triadic closure, holla!).

Here’s another network, this time of the friend at the bottom:

Interestingly, while the first friend had several only-followers (in orange), the second friend has none (which suggests, perhaps, a node-level feature that measures how follow-hungry a user is…).

And here’s one more node, a little further out (maybe a celebrity, given it has nothing but followers?):

The Quiet One

Let’s take a look at another graph, one whose local network is a little smaller:

A Social Butterfly

And one more, whose local network is a little larger:

Again, I encourage everyone to play around with the app here, and I’ll come back to the question of coloring each node later.

Distributions

Next, let’s take a more quantitative look at the graph.

Here’s the distribution of the number of followers of each node in the training set (cut off at 50 followers for a better fit; the maximum number of followers is 552), as well as the number of users each node is following (again cut off at 50; the maximum here is 1566).

Nothing terribly surprising, but that alone is good to verify. (For people tempted to mutter about power laws, I’ll hold you off with the bitter coldness of baby Gauss’s tears.)

Similarly, here are the same two graphs, but limited to the nodes in the test set alone:

Notice that there are relatively more test set users with 0 followees than in the full training set, and relatively fewer test set users with 0 followers. This information could be used to better simulate a validation set for model selection, though I didn’t end up doing this myself.

Preliminary Probes

Finally, let’s move on to the models themselves.

In order to quickly get up and running on a couple prediction algorithms, I started with some unsupervised approaches. For example, after building a new validation set* to test performance offline, I tried:

Recommending users who follow you (but you don’t follow in return)

Recommending users similar to you (when representing users as sets of their followers, and using cosine similarity and Jaccard similarity as the similarity metric)

Recommending users based on a personalized PageRank score

Recommending users that the people you follow also follow

And so on, combining the votes of these algorithms in a fairly ad-hoc way (e.g., by taking the majority vote or by ordering by the number of followers).
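One ad-hoc combination along those lines (majority vote across recommenders, breaking ties by follower count) might look like the sketch below; the function name and tie-breaking rule are illustrative, not the exact scheme used:

```python
from collections import Counter

def combine_votes(recommendation_lists, follower_counts, top_n=10):
    # Each recommender contributes one vote per candidate it suggests.
    votes = Counter()
    for recs in recommendation_lists:
        votes.update(set(recs))
    # Order by vote count, then by the candidate's follower count.
    ranked = sorted(votes,
                    key=lambda c: (votes[c], follower_counts.get(c, 0)),
                    reverse=True)
    return ranked[:top_n]
```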

This actually worked quite well, but I’d been planning from the beginning to move on to a model-based machine learning approach, so I did that next.

*My validation set was formed by deleting random edges from the full training set. A slightly better approach, as mentioned above, might have been to more accurately simulate the distribution of the official test set, but I didn’t end up trying this out myself.
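The deletion step in that footnote can be sketched as follows (the holdout fraction and seed are illustrative):

```python
import random

def make_validation_split(edges, holdout_frac=0.1, seed=0):
    """Delete a random fraction of edges from the training graph; the deleted
    edges become the validation set's positive examples."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    cut = int(len(edges) * holdout_frac)
    return edges[cut:], edges[:cut]  # (remaining graph, held-out edges)
```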

Candidate Selection

In order to run a machine learning algorithm to recommend edges (which would take two nodes, a source and a candidate destination, and generate a score measuring the likelihood that the source would follow the destination), it’s necessary to prune the set of candidates to run the algorithm on.

I used two approaches for this filtering step, both based on random walks on the graph.

Personalized PageRank

The first approach was to calculate a personalized PageRank around each source node.

Briefly, a personalized PageRank is like standard PageRank, except that when randomly teleporting to a new node, the surfer always teleports back to the given source node being personalized (rather than to a node chosen uniformly at random, as in the classic PageRank algorithm).

That is, the random surfer in the personalized PageRank model works as follows:

He starts at the source node $X$ that we want to calculate a personalized PageRank around.

At step $i$: with probability $p$, the surfer moves to a neighboring node chosen uniformly at random; with probability $1-p$, the surfer instead teleports back to the original source node $X$.

The limiting probability that the surfer is at node $N$ is then the personalized PageRank score of node $N$ around $X$.

Here’s some Scala code that computes approximate personalized PageRank scores and takes the highest-scoring nodes as the candidates to feed into the machine learning model:

import scala.collection.mutable.Map

/**
 * Calculate a personalized PageRank around the given user, and return
 * a list of the nodes with the highest personalized PageRank scores.
 *
 * @return A list of (node, probability of landing at this node after
 *         running a personalized PageRank for K iterations) pairs.
 */
def pageRank(user: Int): List[(Int, Double)] = {
  // This map holds the probability of landing at each node, up to the
  // current iteration.
  val probs = Map[Int, Double]()
  probs(user) = 1 // We start at this user.

  val pageRankProbs = pageRankHelper(user, probs, NumPagerankIterations)
  pageRankProbs.toList
               .sortBy { -_._2 }
               .filter { case (node, score) =>
                 !getFollowings(user).contains(node) && node != user
               }
               .take(MaxNodesToKeep)
}

/**
 * Simulates running a personalized PageRank for one iteration.
 *
 * Parameters:
 * start - the start node to calculate the personalized PageRank around
 * probs - a map from nodes to the probability of being at that node at
 *         the start of the current iteration
 * numIterations - the number of iterations remaining
 * alpha - with probability alpha, we follow a neighbor; with probability
 *         1 - alpha, we teleport back to the start node
 *
 * @return A map of node -> probability of landing at that node after the
 *         specified number of iterations.
 */
def pageRankHelper(start: Int, probs: Map[Int, Double], numIterations: Int,
                   alpha: Double = 0.5): Map[Int, Double] = {
  if (numIterations <= 0) {
    probs
  } else {
    // Holds the updated set of probabilities, after this iteration.
    val probsPropagated = Map[Int, Double]()

    // With probability 1 - alpha, we teleport back to the start node.
    probsPropagated(start) = 1 - alpha

    // Propagate the previous probabilities...
    probs.foreach { case (node, prob) =>
      val forwards = getFollowings(node)
      val backwards = getFollowers(node)

      // With probability alpha, we move to a neighbor, and each node
      // distributes its current probability equally to its neighbors.
      val probToPropagate = alpha * prob / (forwards.size + backwards.size)
      (forwards.toList ++ backwards.toList).foreach { neighbor =>
        if (!probsPropagated.contains(neighbor)) {
          probsPropagated(neighbor) = 0
        }
        probsPropagated(neighbor) += probToPropagate
      }
    }

    pageRankHelper(start, probsPropagated, numIterations - 1, alpha)
  }
}

Propagation Score

In the first iteration, the starting user is given an initial score, which it propagates equally to its neighbors.

In the second iteration, each user duplicates and keeps half of its score S. It then propagates S equally to its neighbors.

In subsequent iterations, the process is repeated, except that neighbors reached via a backwards link don’t duplicate and keep half of their score. (The idea is that we want the score to reach followees and not followers.)

import scala.collection.mutable.Map

/**
 * Calculate propagation scores around the current user.
 *
 * In the first propagation round, we
 *
 * - Give the starting node N an initial score S.
 * - Propagate the score equally to each of N's neighbors (followers
 *   and followings).
 * - Each first-level neighbor then duplicates and keeps half of its score
 *   and then propagates the original again to its neighbors.
 *
 * In further rounds, neighbors then repeat the process, except that neighbors
 * traveled to via a backwards/follower link don't keep half of their score.
 *
 * @return a sorted list of (node, propagation score) pairs.
 */
def propagate(user: Int): List[(Int, Double)] = {
  val scores = Map[Int, Double]()

  // We propagate the score equally to all neighbors.
  val scoreToPropagate = 1.0 / (getFollowings(user).size + getFollowers(user).size)

  (getFollowings(user).toList ++ getFollowers(user).toList).foreach { x =>
    // Propagate the score...
    continuePropagation(scores, x, scoreToPropagate, 1)
    // ...and make sure each neighbor keeps half of it for itself.
    scores(x) = scores.getOrElse(x, 0: Double) + scoreToPropagate / 2
  }

  scores.toList
        .sortBy { -_._2 }
        .filter { nodeAndScore =>
          val node = nodeAndScore._1
          !getFollowings(user).contains(node) && node != user
        }
        .take(MaxNodesToKeep)
}

/**
 * In further rounds, neighbors repeat the process above, except that neighbors
 * traveled to via a backwards/follower link don't keep half of their score.
 */
def continuePropagation(scores: Map[Int, Double], user: Int, score: Double,
                        currIteration: Int): Unit = {
  if (currIteration < NumIterations && score > 0) {
    val scoreToPropagate = score / (getFollowings(user).size + getFollowers(user).size)

    getFollowings(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...and make sure each followee keeps half of it for itself.
      scores(x) = scores.getOrElse(x, 0: Double) + scoreToPropagate / 2
    }

    getFollowers(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...but backward links (except for the starting node's immediate
      // neighbors) don't keep any score for themselves.
    }
  }
}

I played around with tweaking some parameters in both approaches (e.g., weighting followers and followees differently), but the natural defaults (as used in the code above) ended up performing the best.

Features

After pruning the set of candidate destination nodes to a more feasible level, I fed pairs of (source, destination) nodes into a machine learning model. From each pair, I extracted around 30 features in total.

As mentioned above, one feature that worked quite well on its own was whether the destination node already follows the source.

I also used a wide set of similarity-based features, for example, the Jaccard similarity between the source and destination when both are represented as sets of their followers, when both are represented as sets of their followees, or when one is represented as a set of followers while the other is represented as a set of followees.

abstract class SimilarityMetric[T] {
  def apply(set1: Set[T], set2: Set[T]): Double
}

object JaccardSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the Jaccard similarity between two sets, 0 if both are empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    val union = (set1.union(set2)).size
    if (union == 0) {
      0
    } else {
      (set1 & set2).size.toFloat / union
    }
  }
}

object CosineSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the cosine similarity between two sets, 0 if both are empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    if (set1.size == 0 && set2.size == 0) {
      0
    } else {
      (set1 & set2).size.toFloat / math.sqrt(set1.size * set2.size)
    }
  }
}

// ************
// * FEATURES *
// ************

/**
 * Returns the similarity between user1 and user2 when both are represented as
 * sets of followers.
 */
def similarityByFollowers(user1: Int, user2: Int)
                         (implicit similarity: SimilarityMetric[Int]): Double = {
  similarity.apply(getFollowersWithout(user1, user2),
                   getFollowersWithout(user2, user1))
}

// etc.

Along the same lines, I also computed a similarity score between the destination node and the source node’s followees, and several variations thereof.

Extended Similarity Scores


/**
 * Iterate over each of user1's followings, compute their similarity with
 * user2 when both are represented as sets of followers, and return the
 * sum of these similarities.
 */
def followerBasedSimilarityToFollowing(user1: Int, user2: Int)
                                      (implicit similarity: SimilarityMetric[Int]): Double = {
  getFollowingsWithout(user1, user2)
    .map { similarityByFollowers(_, user2)(similarity) }
    .sum
}

Other features included the number of followers and followees of each node, the ratio of these, the personalized PageRank and propagation scores themselves, the number of followers in common, and triangle/closure-type features (e.g., whether the source node is friends with a node X who in turn is a friend of the destination node).

If I had had more time, I would probably have tried weighted and more regularized versions of some of these features as well (e.g., downweighting nodes with large numbers of followers when computing cosine similarity scores based on followees, or shrinking the scores of nodes we have little information about).

Feature Understanding

But what are these features actually doing? Let’s use the same app I built before to take a look.

Here’s the local network of node 317 (different from the node above), where each node is colored by its personalized PageRank (higher scores are in darker red):

If we look at the following vs. follower relationships of the central node (recall that purple is friends, teal is followings, orange is followers):

…we can see that, as expected (because edges that represented both following and follower were double-weighted in my PageRank calculation), the darkest red nodes are those that are friends with the central node, while those in a following-only or follower-only relationship have a lower score.

How does the propagation score compare to personalized PageRank? Here, I colored each node according to the log ratio of its propagation score and personalized PageRank:

Comparing this coloring with the local follow/follower network:

…we can see that followed nodes (in teal) receive a higher propagation weight than friend nodes (in purple), while follower nodes (in orange) receive almost no propagation score at all.

Going back to node 1, let’s look at a different metric. Here, each node is colored according to its Jaccard similarity with the source, when nodes are represented by the set of their followers:

We can see that, while the PageRank and propagation metrics tended to favor nodes close to the central node, the Jaccard similarity feature helps us explore nodes that are further out.

However, if we look at the high-scoring nodes more closely, we see that they often have only a single connection to the rest of the network:

In other words, their high Jaccard similarity is due to the fact that they don’t have many connections to begin with. This suggests that some regularization or shrinking is in order.
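The exact regularization I used isn’t shown here, but one simple scheme is to add a pseudocount to the Jaccard denominator, shrinking the scores of nodes with few connections toward zero (the prior of 5 below is an arbitrary illustration):

```python
def jaccard(set1, set2):
    union = len(set1 | set2)
    return len(set1 & set2) / union if union else 0.0

def regularized_jaccard(set1, set2, prior=5.0):
    # The pseudocount barely affects large neighborhoods, but pulls
    # the scores of tiny, weakly connected ones toward zero.
    return len(set1 & set2) / (len(set1 | set2) + prior)
```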

So here’s a regularized version of Jaccard similarity, where we downweight nodes with few connections:

We can see that the outlier nodes are much more muted this time around.

For a starker difference, compare the following two graphs of the Jaccard similarity metric around node 317 (the first graph is an unregularized version, the second is regularized):

Notice, in particular, how the popular node in the top left and the popular nodes at the bottom have a much higher score when we regularize.

And again, there are other networks and features I haven’t mentioned here, so play around and discover them on the app itself.

Models

For the machine learning algorithms on top of my features, I experimented with two types of models: logistic regression (using both L1 and L2 regularization) and random forests. (If I had more time, I would probably have done some more parameter tuning and maybe tried gradient boosted trees as well.)

So what is a random forest? I wrote an old (layman’s) post on it here, but since nobody ever clicks on these links, let’s copy it over:

Suppose you’re very indecisive, so whenever you want to watch a movie, you ask your friend Willow if she thinks you’ll like it. In order to answer, Willow first needs to figure out what movies you like, so you give her a bunch of movies and tell her whether you liked each one or not (i.e., you give her a labeled training set). Then, when you ask her if she thinks you’ll like movie X or not, she plays a 20 questions-like game with IMDB, asking questions like “Is X a romantic movie?”, “Does Johnny Depp star in X?”, and so on. She asks more informative questions first (i.e., she maximizes the information gain of each question), and gives you a yes/no answer at the end.

Thus, Willow is a decision tree for your movie preferences.

But Willow is only human, so she doesn’t always generalize your preferences very well (i.e., she overfits). In order to get more accurate recommendations, you’d like to ask a bunch of your friends, and watch movie X if most of them say they think you’ll like it. That is, instead of asking only Willow, you want to ask Woody, Apple, and Cartman as well, and they vote on whether you’ll like a movie (i.e., you build an ensemble classifier, aka a forest in this case).

Now you don’t want each of your friends to do the same thing and give you the same answer, so you first give each of them slightly different data. After all, you’re not absolutely sure of your preferences yourself – you told Willow you loved Titanic, but maybe you were just happy that day because it was your birthday, so maybe some of your friends shouldn’t use the fact that you liked Titanic in making their recommendations. Or maybe you told her you loved Cinderella, but actually you *really really* loved it, so some of your friends should give Cinderella more weight. So instead of giving your friends the same data you gave Willow, you give them slightly perturbed versions. You don’t change your love/hate decisions, you just say you love/hate some movies a little more or less (you give each of your friends a bootstrapped version of your original training data). For example, whereas you told Willow that you liked Black Swan and Harry Potter and disliked Avatar, you tell Woody that you liked Black Swan so much you watched it twice, you disliked Avatar, and don’t mention Harry Potter at all.

By using this ensemble, you hope that while each of your friends gives somewhat idiosyncratic recommendations (Willow thinks you like vampire movies more than you do, Woody thinks you like Pixar movies, and Cartman thinks you just hate everything), the errors get canceled out in the majority. Thus, your friends now form a bagged (bootstrap aggregated) forest of your movie preferences.

There’s still one problem with your data, however. While you loved both Titanic and Inception, it wasn’t because you like movies that star Leonardo DiCaprio. Maybe you liked both movies for other reasons. Thus, you don’t want your friends to all base their recommendations on whether Leo is in a movie or not. So when each friend asks IMDB a question, only a random subset of the possible questions is allowed (i.e., when you’re building a decision tree, at each node you use some randomness in selecting the attribute to split on, say by randomly selecting an attribute or by selecting an attribute from a random subset). This means your friends aren’t allowed to ask whether Leonardo DiCaprio is in the movie whenever they want. So whereas previously you injected randomness at the data level, by perturbing your movie preferences slightly, now you’re injecting randomness at the model level, by making your friends ask different questions at different times.

And so your friends now form a random forest.

Moving on, I essentially trained scikit-learn’s classifiers on an equal split of true and false edges (sampled from the output of my pruning step, in order to match the distribution I’d get when applying my algorithm to the official test set), and compared performance on the validation set I made, with a small amount of parameter tuning:

Random Forest


########################################
# STEP 1: Read in the training examples.
########################################
truths = []  # A truth is 1 (for a known true edge) or 0 (for a false edge).
training_examples = []  # Each training example is an array of features.
for line in open(TRAINING_SET_WITH_FEATURES_FILENAME):
    values = [float(x) for x in line.split(",")]
    truth = values[0]
    training_example_features = values[1:]

    truths.append(truth)
    training_examples.append(training_example_features)

#############################
# STEP 2: Train a classifier.
#############################
rf = RandomForestClassifier(n_estimators=500, compute_importances=True,
                            oob_score=True)
rf = rf.fit(training_examples, truths)

So let’s look at the variable importance scores as determined by one of my random forest models, which (unsurprisingly) consistently outperformed logistic regression.

The random forest classifier here is one of my earlier models (using a slightly smaller subset of my full suite of features), where the targeting step consisted of taking the top 25 nodes with the highest propagation scores.

We can see that the most important variables are:

Personalized PageRank scores. (I put in both normalized and unnormalized versions, where the normalized versions consisted of taking all the candidates for a particular source node, and scaling them so that the maximum personalized PageRank score was 1.)

Whether the destination node already follows the source.

How similar the source node is to the people the destination node is following, when each node is represented as a set of followers. (Note that this is more or less measuring how likely the destination is to follow the source, which we already saw is a good predictor of whether the source is likely to follow the destination.) Plus several variations on this theme (e.g., how similar the destination node is to the source node’s followers, when each node is represented as a set of followees).
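The per-source normalization mentioned in the first bullet can be sketched as follows (a sketch; the helper name is mine):

```python
def normalize_scores(candidate_scores):
    """Scale one source node's candidate scores so the maximum is 1."""
    if not candidate_scores:
        return {}
    top = max(candidate_scores.values())
    if top == 0:
        return dict(candidate_scores)
    return {node: score / top for node, score in candidate_scores.items()}
```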

Model Comparison

How do all of these models compare to each other? Is the random forest model universally better than the logistic regression model, or are there some sets of users for which the logistic regression model actually performs better?

To enable these kinds of comparisons, I made a small module that allows you to select two models and then visualize their sliced performance.

Above, I grouped all test nodes into buckets based on (the logarithm of) their number of followers, and compared the mean average precision of two algorithms: one that recommends nodes to follow using a personalized PageRank alone, and one that recommends nodes that are following the source user but are not followed back in return.

We see that except for the case of 0 followers (where the “is followed by” algorithm can do nothing), the personalized PageRank algorithm gets increasingly better in comparison: at first, the two algorithms have roughly equal performance, but as the source node gets more followers, the personalized PageRank algorithm dominates.
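The slicing itself is straightforward; here is a sketch of bucketing per-node scores by the logarithm of follower count and averaging per bucket (the exact bucket boundaries in the post aren’t specified):

```python
import math
from collections import defaultdict

def mean_score_by_log_followers(follower_counts, node_scores):
    """Group each node's score (e.g., its average precision under one model)
    into buckets keyed by floor(log2(1 + followers)), and average per bucket."""
    buckets = defaultdict(list)
    for node, score in node_scores.items():
        key = int(math.log2(1 + follower_counts.get(node, 0)))
        buckets[key].append(score)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}
```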

And here’s an embedded version you can interact with directly:

Admittedly, building a slicer like this is probably overkill for a Kaggle competition, where the set of variables is fairly limited. But imagine having something similar for a real world model, where new algorithms are tried out every week and we can slice the performance by almost any dimension we can imagine (by geography, to make sure we don’t improve Australia at the expense of the UK; by user interests, to see where we could improve the performance of topic inference; by number of user logins, to make sure we don’t sacrifice the performance on new users for the gain of the core).

Mathematicians do it with Matrices

Let’s switch directions slightly and think about how we could rewrite our computations in a different, matrix-oriented style. (I didn’t do this in the competition – this is more a preview of another post I’m writing.)

Personalized PageRank in Scalding

Personalized PageRank, for example, is an obvious fit for a matrix rewrite. Here’s how it would look in Scalding’s new Matrix library:

// *************************************************
// STEP 1. Load the adjacency graph into a matrix.
// *************************************************

val following = Tsv(GraphFilename, ('user1, 'user2, 'weight))

// Binary matrix where cell (u1, u2) means that u1 follows u2.
val followingMatrix = following.toMatrix[Int, Int, Double]('user1, 'user2, 'weight)

// Binary matrix where cell (u1, u2) means that u1 is followed by u2.
val followerMatrix = followingMatrix.transpose

// Note: we could also form this adjacency matrix differently, by placing
// different weights on the following vs. follower edges.
val undirectedAdjacencyMatrix = (followingMatrix + followerMatrix).rowL1Normalize

// Create a diagonal users matrix (to be used in the "teleportation back
// home" step).
val usersMatrix =
  following.unique('user1)
           .map('user1 -> ('user2, 'weight)) { user1: Int => (user1, 1) }
           .toMatrix[Int, Int, Double]('user1, 'user2, 'weight)

// ***************************************************
// STEP 2. Compute the personalized PageRank scores.
// See http://nlp.stanford.edu/projects/pagerank.shtml
// for more information on personalized PageRank.
// ***************************************************

// Compute personalized PageRank by running for three iterations,
// and output the top candidates.
val pprScores = personalizedPageRank(usersMatrix, undirectedAdjacencyMatrix, usersMatrix, 0.5, 3)
pprScores.topRowElems(numCandidates).write(Tsv(OutputFilename))

/**
 * Performs a personalized PageRank iteration. The ith row contains the
 * personalized PageRank probabilities around node i.
 *
 * Note the interpretation:
 * - with probability 1 - alpha, we go back to where we started.
 * - with probability alpha, we go to a neighbor.
 *
 * Parameters:
 *
 * startMatrix - a (usually diagonal) matrix, where the ith row specifies
 *               where the ith node teleports back to.
 * adjacencyMatrix
 * prevMatrix - a matrix whose ith row contains the personalized PageRank
 *              probabilities around the ith node.
 * alpha - the probability of moving to a neighbor (as opposed to
 *         teleporting back to the start).
 * numIterations - the number of personalized PageRank iterations to run.
 */
def personalizedPageRank(startMatrix: Matrix[Int, Int, Double],
                         adjacencyMatrix: Matrix[Int, Int, Double],
                         prevMatrix: Matrix[Int, Int, Double],
                         alpha: Double,
                         numIterations: Int): Matrix[Int, Int, Double] = {
  if (numIterations <= 0) {
    prevMatrix
  } else {
    val updatedMatrix = startMatrix * (1 - alpha) + (prevMatrix * adjacencyMatrix) * alpha
    personalizedPageRank(startMatrix, adjacencyMatrix, updatedMatrix, alpha, numIterations - 1)
  }
}

Not only is this matrix formulation a more natural way of expressing the algorithm, but since Scalding (by way of Cascading) supports both local and distributed modes, this code runs just as easily on a Hadoop cluster of thousands of machines (assuming our social network is orders of magnitude larger than the one in the contest) as on a sample of data in a laptop. Big data, big matrix style, BOOM.

Cosine Similarity as L2-Normalized Multiplication

Here’s another example. Calculating cosine similarity between all users is a natural fit for a matrix formulation since, after all, the cosine similarity between two vectors is just their L2-normalized dot product:

Cosine Similarity, Matrix Style


// A matrix where the cell (i, j) is 1 iff user i is followed by user j.
val followerMatrix = ...

// A matrix where cell (i, j) holds the cosine similarity between
// user i and user j, when both are represented as sets of their followers.
val followerBasedSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose
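For readers who prefer to see the arithmetic spelled out, here's the same computation sketched in plain Ruby on small dense arrays (a toy stand-in for the Scalding matrices; the data below is invented):

```ruby
# Cosine similarity between users represented as follower vectors:
# L2-normalize each row, then take dot products between rows.

def row_l2_normalize(matrix)
  matrix.map do |row|
    norm = Math.sqrt(row.sum { |x| x * x })
    norm.zero? ? row : row.map { |x| x / norm }
  end
end

def cosine_similarities(follower_matrix)
  normalized = row_l2_normalize(follower_matrix)
  normalized.map do |row_i|
    normalized.map do |row_j|
      row_i.zip(row_j).sum { |a, b| a * b }
    end
  end
end

# Three users, four potential followers (1 = followed by that person).
followers = [
  [1.0, 1.0, 0.0, 0.0],
  [1.0, 1.0, 1.0, 0.0],
  [0.0, 0.0, 0.0, 1.0]
]

sims = cosine_similarities(followers)
```

Users 0 and 1 share two followers and get a high similarity (2/√6 ≈ 0.82), while user 2 shares none and gets 0.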

A Similarity Extension

But let’s go one step further.

To change examples for ease of exposition: suppose you’ve bought a bunch of books on Amazon, and Amazon wants to recommend a new book you’ll like. Since Amazon knows similarities between all pairs of books, one natural way to generate this recommendation is to score each candidate book by summing its similarity to every book you’ve already bought, and then recommend the books with the highest scores.

This, too, is a dot product! So it can also be rewritten as a matrix multiplication:


// A matrix where cell (i, j) holds the similarity between books i and j.
val bookSimilarityMatrix = ...

// A matrix where cell (i, j) is 1 if user i has bought book j,
// and 0 otherwise.
val userPurchaseMatrix = ...

// A matrix where cell (i, j) holds the recommendation score of
// book j to user i.
val recommendationMatrix = userPurchaseMatrix * bookSimilarityMatrix
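Again in toy form, here's what that multiplication is doing, sketched in plain Ruby with a hypothetical 3-book similarity matrix (all numbers invented):

```ruby
# Recommendation scores as a matrix product: each user's row of purchases
# (0/1) times the book-similarity matrix gives, for every book, the summed
# similarity to the books that user already bought.

def matrix_multiply(a, b)
  a.map do |row|
    (0...b.first.size).map do |j|
      row.each_with_index.sum { |x, k| x * b[k][j] }
    end
  end
end

# Hypothetical similarities among 3 books (symmetric, 1s on the diagonal).
book_similarity = [
  [1.0, 0.8, 0.1],
  [0.8, 1.0, 0.2],
  [0.1, 0.2, 1.0]
]

# One user who has bought only book 0.
purchases = [[1.0, 0.0, 0.0]]

scores = matrix_multiply(purchases, book_similarity)
```

The user's score row comes out as [1.0, 0.8, 0.1], so book 1 is the best unpurchased recommendation.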

Of course, there’s a natural analogy between this score and the feature I described above, where I compute a similarity score between a destination node and a source node’s followees (when all nodes are represented as sets of followers).

For people comfortable expressing their computations in a vector manner, writing your computations as matrix manipulations often makes experimenting with different algorithms much more fluid. Imagine, for example, that you want to switch from L1 normalization to L2 normalization, or that you want to express your objects as binary sets rather than weighted vectors. Both of these become simple one-line changes when you have vectors and matrices as first-class objects, but are much more tedious (especially in the MapReduce land this matrix library was designed for!) when you don’t.

Finish Line

By now, I think I’ve spent more time writing this post than on the contest itself, so let’s wrap up.

I often get asked what kinds of tools I like to use, so for this competition my kit consisted of:

Scala, for code that needed to be fast (e.g., extracting features) or that I was going to run repeatedly (e.g., scoring my validation set).

Python, for my machine learning models, because scikit-learn is awesome.

Ruby, for quick one-off scripts.

R, for some data analysis and simple plotting.

Coffeescript and d3, for the interactive visualizations.

Finally, I put up a Github repository containing some code, and here are a couple other posts I’ve written that people who like this entry might also enjoy:

Or how does language change as you travel to different regions? Recall the classic soda vs. pop. vs. coke question: some people use the word “soda” to describe their soft drinks, others use “pop”, and still others use “coke”. Who says what where?

Let’s take a look.

To make this map, I sampled geo-tagged tweets containing the words “soda”, “pop”, or “coke”, applied some state-of-the-art NLP to ensure the tweets were soft drink related (e.g., the tweets had to contain “drink soda” or “drink a pop”), and tried to filter out coke tweets that were specifically about the Coke brand (e.g., Coke Zero).

It’s a little cluttered, though, so let’s clean it up by aggregating nearby tweets.

Here, I bucketed all tweets within a 0.333 latitude/longitude radius, calculated the term distribution within each bucket, and colored each bucket according to the term whose frequency was furthest above its overall mean. I also sized each point according to the (log-transformed) number of tweets in the bucket.
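The bucketing step can be sketched roughly like this (grid-snapping for simplicity rather than a true radius search, and the coordinates and counts below are invented):

```ruby
# Bucketing sketch: snap each geo-tagged tweet to a 0.333-degree lat/long
# grid cell and tally the term counts within each cell.

BUCKET = 1 / 3.0

def bucket_tweets(tweets)
  counts = Hash.new { |h, k| h[k] = Hash.new(0) }
  tweets.each do |(lat, lng, term)|
    key = [(lat / BUCKET).round, (lng / BUCKET).round]
    counts[key][term] += 1
  end
  counts
end

# Two tweets near New York, one near Atlanta (made-up coordinates).
buckets = bucket_tweets([
  [40.71, -74.00, "soda"],
  [40.72, -74.01, "soda"],
  [33.75, -84.39, "coke"]
])
```

The two nearby tweets land in the same cell, so each cell's term distribution (and total count, for sizing) falls straight out of the tallies.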

We can see that:

The South is pretty Coke-heavy.

Soda belongs to the Northeast and far West.

Pop gets the mid-West, except for some interesting spots of blue around Wisconsin and the Illinois-Missouri border.

For comparison, here’s another map based on a survey at popvssoda.com.

We can see similar patterns, though interestingly, our map has less Coke in the Southeast and less pop in the Northwest.

Finally, here’s a world map of the terms, bucketed again. Notice that “pop” seems to be prevalent only in parts of the United States and Canada.

As some astute readers noted, though, the seeming dominance of coke is probably due to the difficulty in distinguishing the generic use of coke for soft drinks in general from the particular use of coke for referring to the Coca-Cola brand.

So let’s instead look at a world map of a couple other soft drink terms (“fizzy drink”, “mineral”, and “tonic”):

Notice that:

“Fizzy drink” shows up for the UK, New Zealand, and Maine.

“Tonic” appears in Massachusetts.

While South Africa gets “fizzy drink”, Nigeria gets “mineral”.

I’ve been getting a lot of questions lately about interesting things you can do with the Twitter API, so this was just one small project I’ve worked on to illustrate. This paper contains another awesome application of Twitter data to geographic language variation, and just for fun, here are a few other cute mini-projects:

What do people eat during the Super Bowl? (wings and beer, apparently)

What do people want for Christmas, compared to what they actually get?

What do guys and girls really say?

When were people losing and gaining power during Hurricane Sandy? (click the image to interact)

How does information of a geographic-specific nature spread? (click the image to see a dynamic visualization of when and where tweets related to surviving Hurricane Sandy were shared)

So how can you use the data you’ve gathered to discover different kinds of groups?

One way is to use a standard clustering algorithm like k-means or Gaussian mixture modeling (see this previous post for a brief introduction). The problem is that both assume a fixed number of clusters, which must be specified in advance. There are a couple of methods for choosing this number (e.g., the gap and prediction strength statistics), but the problem is more fundamental: most real-world data simply doesn’t have a fixed number of clusters.

That is, suppose we’ve asked 10 of our friends what they ate in the past day, and we want to find groups of eating preferences. There’s really an infinite number of foodie types (carnivore, vegan, snacker, Italian, healthy, fast food, heavy eaters, light eaters, and so on), but with only 10 friends, we simply don’t have enough data to detect them all. (Indeed, we’re limited to 10 clusters!) So whereas k-means starts with the incorrect assumption that our points come from a fixed, finite number of clusters, no matter how much data we feed it, what we’d really like is a method that posits an infinite number of hidden clusters, which naturally arise as we ask more friends about their food habits. (For example, with only 2 data points, we might not be able to tell the difference between vegans and vegetarians, but with 200 data points, we probably could.)

Luckily for us, this is precisely the purview of nonparametric Bayes.*

*Nonparametric Bayes refers to a class of techniques that allow some parameters to change with the data. In our case, for example, instead of fixing the number of clusters to be discovered, we allow it to grow as more data comes in.

A Generative Story

Let’s describe a generative model for finding clusters in any set of data. We assume an infinite set of latent groups, where each group is described by some set of parameters. For example, each group could be a Gaussian with a specified mean $\mu_i$ and standard deviation $\sigma_i$, and these group parameters themselves are assumed to come from some base distribution $G_0$. Each datapoint is then generated by first choosing a group and then sampling from that group’s distribution. Concretely:

When deciding what to eat when she woke up yesterday, Alice could have thought girl, I’m in the mood for pizza and her food consumption yesterday would have been a sample from the pizza Gaussian. Similarly, Bob could have spent the day in Chinatown, thereby sampling from the Asian Gaussian for his day’s meals. And so on.

The big question, then, is: how do we assign each friend to a group?

Assigning Groups

Chinese Restaurant Process

One way to assign friends to groups is to use a Chinese Restaurant Process. This works as follows: Imagine a restaurant where all your friends went to eat yesterday…

Initially the restaurant is empty.

The first person to enter (Alice) sits down at a table (selects a group). She then orders food for the table (i.e., she selects parameters for the group); everyone else who joins the table will then be limited to eating from the food she ordered.

The second person to enter (Bob) sits down at a table. Which table does he sit at? With probability $\alpha / (1 + \alpha)$ he sits down at a new table (i.e., selects a new group) and orders food for the table; with probability $1 / (1 + \alpha)$ he sits with Alice and eats from the food she’s already ordered (i.e., he’s in the same group as Alice).

…

The (n+1)-st person sits down at a new table with probability $\alpha / (n + \alpha)$, and at table k with probability $n_k / (n + \alpha)$, where $n_k$ is the number of people currently sitting at table k.

Note a couple things:

The more people (data points) there are at a table (cluster), the more likely it is that people (new data points) will join it. In other words, our groups satisfy a rich get richer property.

There’s always a small probability that someone joins an entirely new table (i.e., a new group is formed).

The probability of a new group depends on $\alpha$, so we can think of $\alpha$ as a dispersion parameter: the lower $\alpha$ is, the more tightly clustered our data points are; the higher it is, the more clusters we see in any finite set of points.

(Also notice the resemblance between table selection probabilities and a Dirichlet distribution…)

Just to summarize, given n data points, the Chinese Restaurant Process specifies a distribution over partitions (table assignments) of these points. We can also generate parameters for each partition/table from a base distribution $G_0$ (for example, each table could represent a Gaussian whose mean and standard deviation are sampled from $G_0$), though to be clear, this is not part of the CRP itself.

Code

Since code makes everything better, here’s some Ruby to simulate a CRP:

# Generate table assignments for `num_customers` customers, according to
# a Chinese Restaurant Process with dispersion parameter `alpha`.
#
# returns an array of integer table assignments
def chinese_restaurant_process(num_customers, alpha)
  return [] if num_customers <= 0

  table_assignments = [1] # first customer sits at table 1
  next_open_table = 2     # index of the next empty table

  # Now generate table assignments for the rest of the customers.
  1.upto(num_customers - 1) do |i|
    if rand < alpha.to_f / (alpha + i)
      # Customer sits at new table.
      table_assignments << next_open_table
      next_open_table += 1
    else
      # Customer sits at an existing table.
      # He chooses which table to sit at by giving equal weight to each
      # customer already sitting at a table.
      which_table = table_assignments[rand(table_assignments.size)]
      table_assignments << which_table
    end
  end

  table_assignments
end

> chinese_restaurant_process(num_customers = 10, alpha = 1)
1, 2, 3, 4, 3, 3, 2, 1, 4, 3 # table assignments from run 1
1, 1, 1, 1, 1, 1, 2, 2, 1, 3 # table assignments from run 2
1, 2, 2, 1, 3, 3, 2, 1, 3, 4 # table assignments from run 3

> chinese_restaurant_process(num_customers = 10, alpha = 3)
1, 2, 1, 1, 3, 1, 2, 3, 4, 5
1, 2, 3, 3, 4, 3, 4, 4, 5, 5
1, 1, 2, 3, 1, 4, 4, 3, 1, 1

> chinese_restaurant_process(num_customers = 10, alpha = 5)
1, 2, 1, 3, 4, 5, 6, 7, 1, 8
1, 2, 3, 3, 4, 5, 6, 5, 6, 7
1, 2, 3, 4, 5, 6, 2, 7, 2, 1

Notice that as we increase $\alpha$, the number of distinct tables increases as well.

Polya Urn Model

Another method for assigning friends to groups is to follow the Polya Urn Model. This is basically the same model as the Chinese Restaurant Process, just with a different metaphor.

We start with an urn containing $\alpha G_0(x)$ balls of “color” $x$, for each possible value of $x$. ($G_0$ is our base distribution, and $G_0(x)$ is the probability of sampling $x$ from $G_0$). Note that these are possibly fractional balls.

At each time step, draw a ball from the urn, note its color, and then drop both the original ball plus a new ball of the same color back into the urn.

Note the connection between this process and the CRP: balls correspond to people (i.e., data points), colors correspond to table assignments (i.e., clusters), alpha is again a dispersion parameter (put differently, a prior), colors satisfy a rich-get-richer property (since colors with many balls are more likely to get drawn), and so on. (Again, there’s also a connection between this urn model and the urn model for the (finite) Dirichlet distribution…)

To be precise, the difference between the CRP and the Polya Urn Model is that the CRP specifies only a distribution over partitions (i.e., table assignments), but doesn’t assign parameters to each group, whereas the Polya Urn Model does both.

Code

# Draw `num_balls` colored balls according to a Polya Urn Model
# with a specified base color distribution and dispersion parameter
# `alpha`.
#
# returns an array of ball colors
def polya_urn_model(base_color_distribution, num_balls, alpha)
  return [] if num_balls <= 0

  balls_in_urn = []
  0.upto(num_balls - 1) do |i|
    if rand < alpha.to_f / (alpha + balls_in_urn.size)
      # Draw a new color, put a ball of this color in the urn.
      new_color = base_color_distribution.call
      balls_in_urn << new_color
    else
      # Draw a ball from the urn, add another ball of the same color.
      ball = balls_in_urn[rand(balls_in_urn.size)]
      balls_in_urn << ball
    end
  end

  balls_in_urn
end

And here’s some sample output, using a uniform distribution over the unit interval as the color distribution to sample from:

> unit_uniform = lambda { (rand * 100).to_i / 100.0 }
> polya_urn_model(unit_uniform, num_balls = 10, alpha = 1)
0.27, 0.89, 0.89, 0.89, 0.73, 0.98, 0.43, 0.98, 0.89, 0.53 # colors in the urn from run 1
0.26, 0.26, 0.46, 0.26, 0.26, 0.26, 0.26, 0.26, 0.26, 0.85 # colors in the urn from run 2
0.96, 0.87, 0.96, 0.87, 0.96, 0.96, 0.87, 0.96, 0.96, 0.96 # colors in the urn from run 3

Here are some sample density plots of the colors in the urn, when using a unit normal as the base color distribution:

Notice that as alpha increases (i.e., we sample more new ball colors from our base; i.e., as we place more weight on our prior), the colors in the urn tend to a unit normal (our base color distribution).

And here are some sample plots of points generated by the urn, for varying values of alpha:

Each color in the urn is sampled from a uniform distribution over [0,10]x[0,10] (i.e., a [0, 10] square).

Each group is a Gaussian with standard deviation 0.1 and mean equal to its associated color, and these Gaussian groups generate points.

Notice that the points clump together in fewer clusters for low values of alpha, but become more dispersed as alpha increases.

Stick-Breaking Process

Imagine running either the Chinese Restaurant Process or the Polya Urn Model without stopping. For each group $i$, this gives the proportion $w_i$ of points that fall into group $i$.

So instead of running the CRP or Polya Urn model to figure out these proportions, can we simply generate them directly?

This is exactly what the Stick-Breaking Process does:

Start with a stick of length one.

Generate a random variable $\beta_1 \sim Beta(1, \alpha)$. By the definition of the Beta distribution, this will be a real number between 0 and 1, with expected value $1 / (1 + \alpha)$. Break off the stick at $\beta_1$; $w_1$ is then the length of the stick on the left.

Now take the stick to the right, and generate $\beta_2 \sim Beta(1, \alpha)$. Break off a fraction $\beta_2$ of this remaining stick. Again, $w_2$ is the length of the stick to the left, i.e., $w_2 = (1 - \beta_1) \beta_2$.

And so on.
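The steps above can be sketched in a few lines of Ruby (my own illustration; it uses the inverse-CDF fact that a $Beta(1, \alpha)$ draw equals $1 - u^{1/\alpha}$ for uniform $u$):

```ruby
# Stick-breaking sketch: generate the first `num_weights` stick lengths
# for dispersion parameter `alpha`. Each break takes a Beta(1, alpha)
# fraction of the remaining stick.

def stick_breaking(num_weights, alpha)
  remaining = 1.0
  num_weights.times.map do
    beta = 1.0 - rand**(1.0 / alpha) # a Beta(1, alpha) sample
    weight = remaining * beta        # length broken off to the left
    remaining -= weight              # the stick to the right
    weight
  end
end

weights = stick_breaking(20, 2.0)
```

The weights are positive and sum to less than 1 (the leftover stick holds the mass of all the groups we haven't generated yet); for small alpha the first few weights hog most of the stick, while larger alpha spreads the mass out.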

Thus, the Stick-Breaking process is simply the CRP or Polya Urn Model from a different point of view. For example, assigning customers to table 1 according to the Chinese Restaurant Process is equivalent to assigning customers to table 1 with probability $w_1$.

Notice that for low values of alpha, the stick weights are concentrated on the first few weights (meaning our data points are concentrated on a few clusters), while the weights become more evenly dispersed as we increase alpha (meaning we posit more clusters in our data points).

Dirichlet Process

Suppose we run a Polya Urn Model several times, where we sample colors from a base distribution $G_0$. Each run produces a distribution of colors in the urn (say, 5% blue balls, 3% red balls, 2% pink balls, etc.), and the distribution will be different each time (for example, 5% blue balls in run 1, but 1% blue balls in run 2).

For example, let’s look again at the plots from above, where I generated samples from a Polya Urn Model with the standard unit normal as the base distribution:

Each run of the Polya Urn Model produces a slightly different distribution, though each is “centered” in some fashion around the standard Gaussian I used as base. In other words, the Polya Urn Model gives us a distribution over distributions (we get a distribution of ball colors, and this distribution of colors changes each time) – and so we finally get to the Dirichlet Process.

Formally, given a base distribution $G_0$ and a dispersion parameter $\alpha$, a sample from the Dirichlet Process $DP(G_0, \alpha)$ is a distribution $G \sim DP(G_0, \alpha)$. This sample $G$ can be thought of as a distribution of colors in a single simulation of the Polya Urn Model; sampling from $G$ gives us the balls in the urn.

So here’s the connection between the Chinese Restaurant Process, the Polya Urn Model, the Stick-Breaking Process, and the Dirichlet Process. Suppose we want to generate a sequence of values $x_1, x_2, \ldots, x_n$:

Polya Urn Model: One way to generate these values $x_i$ would be to take a Polya Urn Model with color distribution $G_0$ and dispersion $\alpha$. ($x_i$ would be the color of the ith ball in the urn.)

Chinese Restaurant Process: Another way to generate $x_i$ would be to first assign tables to customers according to a Chinese Restaurant Process with dispersion $\alpha$. Every customer at a given table would then be given the same value (color) sampled from $G_0$. ($x_i$ would be the value given to the ith customer; it can also be thought of as the food at customer $i$’s table, or as the parameters of that table.)

Stick-Breaking Process: Finally, we could generate weights $w_k$ according to a Stick-Breaking Process with dispersion $\alpha$. Next, we would give each weight $w_k$ a value (or color) $v_k$ sampled from $G_0$. Finally, we would assign $x_i$ to value (color) $v_k$ with probability $w_k$.

Recap

Let’s summarize what we’ve discussed so far.

We have a bunch of data points $p_i$ that we want to cluster, and we’ve described four essentially equivalent generative models that allow us to describe how each cluster and point could have arisen.

In the Chinese Restaurant Process:

We generate table assignments $g_1, \ldots, g_n \sim CRP(\alpha)$ according to a Chinese Restaurant Process. ($g_i$ is the table assigned to datapoint $i$.)

We generate table parameters $\phi_1, \ldots, \phi_m \sim G_0$ according to the base distribution $G_0$, where $\phi_k$ is the parameter for the kth distinct group.

Given table assignments and table parameters, we generate each datapoint $p_i \sim F(\phi_{g_i})$ from a distribution $F$ with the specified table parameters. (For example, $F$ could be a Gaussian, and $\phi_i$ could be a parameter vector specifying the mean and standard deviation).

In the Polya Urn Model:

We generate colors $\phi_1, \ldots, \phi_n \sim Polya(G_0, \alpha)$ according to a Polya Urn Model. ($\phi_i$ is the color of the ith ball.)

Given ball colors, we generate each datapoint $p_i \sim F(\phi_i)$.

In the Stick-Breaking Process:

We generate group probabilities (stick lengths) $w_1, \ldots, w_{\infty} \sim Stick(\alpha)$ according to a Stick-Breaking process.

We generate group parameters $\phi_1, \ldots, \phi_{\infty} \sim G_0$ from $G_0$, where $\phi_k$ is the parameter for the kth distinct group.

Given group probabilities and group parameters, we assign each datapoint $i$ to a group $g_i$ (choosing group $k$ with probability $w_k$), and then generate the datapoint as $p_i \sim F(\phi_{g_i})$.

In the Dirichlet Process:

We generate a distribution $G \sim DP(G_0, \alpha)$ from a Dirichlet Process with base distribution $G_0$ and dispersion parameter $\alpha$.

We generate group-level parameters $x_i \sim G$ from $G$, where $x_i$ is the group parameter for the ith datapoint. (Note: this is not the same as $\phi_i$. $x_i$ is the parameter associated to the group that the ith datapoint belongs to, whereas $\phi_k$ is the parameter of the kth distinct group.)

Also, remember that each model naturally allows the number of clusters to grow as more points come in.

Inference in the Dirichlet Process Mixture

So we’ve described a generative model that allows us to calculate the probability of any particular set of group assignments to data points, but we haven’t described how to actually learn a good set of group assignments.

Let’s briefly do this now. Very roughly, the Gibbs sampling approach works as follows:

Take the set of data points, and randomly initialize group assignments.

Pick a point. Fix the group assignments of all the other points, and assign the chosen point a new group (which can be either an existing cluster or a new cluster) with a CRP-ish probability (as described in the models above) that depends on the group assignments and values of all the other points.

We will eventually converge on a good set of group assignments, so repeat the previous step until happy.
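The reassignment step can be sketched in Ruby. To be clear, this is a simplification of my own, not the full algorithm: the likelihoods here are unit-variance Gaussians around each cluster's current mean (rather than the full posterior predictive distribution), and the prior for a brand-new cluster is assumed to be centered at 0.

```ruby
# Unit-variance Gaussian density (a simplifying assumption).
def gaussian_pdf(x, mean)
  Math.exp(-0.5 * (x - mean)**2) / Math.sqrt(2 * Math::PI)
end

# One CRP-ish Gibbs step: reassign points[index], holding all other
# assignments fixed. Existing clusters get weight (size * likelihood);
# a brand-new cluster gets weight (alpha * likelihood under the prior).
def gibbs_reassign(points, assignments, index, alpha)
  x = points[index]

  # Cluster sizes and sums, excluding the point being reassigned.
  sizes = Hash.new(0)
  sums = Hash.new(0.0)
  assignments.each_with_index do |c, i|
    next if i == index
    sizes[c] += 1
    sums[c] += points[i]
  end

  candidates = sizes.keys
  weights = candidates.map { |c| sizes[c] * gaussian_pdf(x, sums[c] / sizes[c]) }

  # Option to open a brand-new cluster (prior mean assumed to be 0).
  candidates += [(assignments.max || 0) + 1]
  weights += [alpha * gaussian_pdf(x, 0.0)]

  # Sample a candidate proportionally to its weight.
  r = rand * weights.sum
  candidates.zip(weights).each do |c, w|
    r -= w
    return c if r <= 0
  end
  candidates.last
end
```

Sweeping this step over all points repeatedly is the Gibbs sampler; a point sitting near a large, well-fitting cluster will almost always rejoin it, while an outlier has a real chance of seeding a new cluster.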

For more details, this paper provides a good description. Philip Resnick and Eric Hardisty also have a friendlier, more general description of Gibbs sampling (plus an application to naive Bayes) here.

Fast Food Application: Clustering the McDonald’s Menu

Finally, let’s show an application of the Dirichlet Process Mixture. Unfortunately, I didn’t have a data set of people’s food habits offhand, so instead I took this list of McDonald’s foods and nutrition facts.

First, how does the number of clusters inferred by the Dirichlet Process mixture vary as we feed in more (randomly ordered) points?

As expected, the Dirichlet Process model discovers more and more clusters as more and more food items arrive. (And indeed, the number of clusters appears to grow logarithmically, which can in fact be proved.)
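This is easy to sanity-check in simulation: in a CRP, the (i+1)-st customer opens a new table with probability $\alpha / (\alpha + i)$, so the expected number of tables after n customers is $\sum_{i=0}^{n-1} \alpha / (\alpha + i)$, which grows like $\alpha \log n$. A quick Ruby sketch (parameters below are arbitrary choices of mine):

```ruby
# Number of tables opened in one simulated CRP run: the (i+1)-st
# customer opens a new table with probability alpha / (alpha + i).
def crp_table_count(num_customers, alpha)
  num_customers.times.count { |i| rand < alpha / (alpha + i) }
end

# The analytic expectation: sum of the new-table probabilities.
def expected_table_count(num_customers, alpha)
  num_customers.times.sum { |i| alpha / (alpha + i) }
end

alpha = 2.0
n = 1000
runs = 200
average = runs.times.sum { crp_table_count(n, alpha) } / runs.to_f
# `average` should land near expected_table_count(n, alpha), which is
# roughly alpha * log(n).
```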

How many clusters does the mixture model infer from the entire dataset? Running the Gibbs sampler several times, we find that the number of clusters tends to settle around 11:

Let’s dive into one of these clusterings.

Cluster 1 (Desserts)

Looking at a sample of foods from the first cluster, we find a lot of desserts and dessert-y drinks:

Caramel Mocha

Frappe Caramel

Iced Hazelnut Latte

Iced Coffee

Strawberry Triple Thick Shake

Snack Size McFlurry

Hot Caramel Sundae

Baked Hot Apple Pie

Cinnamon Melts

Kiddie Cone

Strawberry Sundae

We can also look at the nutritional profile of some foods from this cluster (after z-scaling each nutrition dimension to have mean 0 and standard deviation 1):

We see that foods in this cluster tend to be high in trans fat and low in vitamins, protein, fiber, and sodium.
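For concreteness, z-scaling a column just subtracts its mean and divides by its standard deviation, so dimensions measured in grams and milligrams become comparable. A small Ruby sketch with made-up nutrition rows:

```ruby
# Z-scale each column of a table of rows in place: after this, every
# column has mean 0 and (population) standard deviation 1.
def z_scale_columns(rows)
  num_cols = rows.first.size
  (0...num_cols).each do |j|
    column = rows.map { |r| r[j] }
    mean = column.sum / column.size.to_f
    std = Math.sqrt(column.sum { |x| (x - mean)**2 } / column.size.to_f)
    rows.each { |r| r[j] = std.zero? ? 0.0 : (r[j] - mean) / std }
  end
  rows
end

# Hypothetical rows: [calories, sodium_mg] for three menu items.
scaled = z_scale_columns([[100.0, 50.0], [200.0, 150.0], [300.0, 250.0]])
```

After scaling, the middle item sits at 0 on both dimensions, and the extremes sit symmetrically above and below it.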

Cluster 2 (Sauces)

Here’s a sample from the second cluster, which contains a lot of sauces:

Hot Mustard Sauce

Spicy Buffalo Sauce

Newman’s Own Low Fat Balsamic Vinaigrette

And looking at the nutritional profile of points in this cluster, we see that it’s heavy in sodium and fat:

Cluster 3 (Burgers, Crispy Foods, High-Cholesterol)

The third cluster is very burgery:

Hamburger

Cheeseburger

Filet-O-Fish

Quarter Pounder with Cheese

Premium Grilled Chicken Club Sandwich

Ranch Snack Wrap

Premium Asian Salad with Crispy Chicken

Butter Garlic Croutons

Sausage McMuffin

Sausage McGriddles

It’s also high in fat and sodium, and low in carbs and sugar:

Cluster 4 (Creamy Sauces)

Interestingly, even though we already found a cluster of sauces above, we discover another one as well. These sauces appear to be much more cream-based:

Creamy Ranch Sauce

Newman’s Own Creamy Caesar Dressing

Coffee Cream

Iced Coffee with Sugar Free Vanilla Syrup

Nutritionally, these sauces are higher in calories from fat, and much lower in sodium:

Cluster 5 (Salads)

Here’s a salad cluster. A lot of salads also appeared in the third cluster (along with hamburgers and McMuffins), but that’s because those salads also all contained crispy chicken. The salads in this cluster are either crisp-free or have their chicken grilled instead:

Premium Southwest Salad with Grilled Chicken

Premium Caesar Salad with Grilled Chicken

Side Salad

Premium Asian Salad without Chicken

Premium Bacon Ranch Salad without Chicken

This is reflected in the higher content of iron, vitamin A, and fiber:

Cluster 6 (More Sauces)

Again, we find another cluster of sauces:

Ketchup Packet

Barbeque Sauce

Chipotle Barbeque Sauce

These are still high in sodium, but much lower in fat compared to the other sauce clusters:

Cluster 7 (Fruit and Maple Oatmeal)

Amusingly, fruit and maple oatmeal is in a cluster by itself:

Fruit & Maple Oatmeal

Cluster 8 (Sugary Drinks)

We also get a cluster of sugary drinks:

Strawberry Banana Smoothie

Wild Berry Smoothie

Iced Nonfat Vanilla Latte

Nonfat Hazelnut

Nonfat Vanilla Cappuccino

Nonfat Caramel Cappuccino

Sweet Tea

Frozen Strawberry Lemonade

Coca-Cola

Minute Maid Orange Juice

In addition to high sugar content, this cluster is also high in carbohydrates and calcium, and low in fat.

Cluster 9 (Breakfast Foods)

Here’s a cluster of high-cholesterol breakfast foods:

Sausage McMuffin with Egg

Sausage Burrito

Egg McMuffin

Bacon, Egg & Cheese Biscuit

McSkillet Burrito with Sausage

Big Breakfast with Hotcakes

Cluster 10 (Coffee Drinks)

We find a group of coffee drinks next:

Nonfat Cappuccino

Nonfat Latte

Nonfat Latte with Sugar Free Vanilla Syrup

Iced Nonfat Latte

These are much higher in calcium and protein, and lower in sugar, than the other drink cluster above:

Cluster 11 (Apples)

Here’s a cluster of apples:

Apple Dippers with Low Fat Caramel Dip

Apple Slices

Vitamin C, check.

And finally, here’s an overview of all the clusters at once (using a different clustering run):

In the Chinese Restaurant Process, each customer sits at a single table. The Indian Buffet Process is an extension that allows customers to sample food from multiple tables (i.e., belong to multiple clusters).

The Chinese Restaurant Process, the Polya Urn Model, and the Stick-Breaking Process are all sequential models for generating groups: to figure out table parameters in the CRP, for example, you wait for customer 1 to come in, then customer 2, then customer 3, and so on. The equivalent Dirichlet Process, on the other hand, is a parallel model for generating groups: just sample $G \sim DP(G_0, \alpha)$, and then all your group parameters can be generated independently by sampling from $G$ at once. This duality is an instance of a more general phenomenon known as de Finetti’s theorem.

And you can easily switch which variables are getting plotted, and see all the information associated with each point.

(Same dataset, different aesthetic assignments.)

I’m thinking of adding more kinds of charts, support for categorical variables, more interactivity (sliders to interact with other dimensions?!), and making the UI even easier (e.g., simplify column naming). In the meantime, the code is here on Github, and tips and suggestions are welcome!

Scalding is an in-house MapReduce framework that Twitter recently open-sourced. Like Pig, it provides an abstraction on top of MapReduce that makes it easy to write big data jobs in a syntax that’s simple and concise. Unlike Pig, Scalding is written in pure Scala – which means all the power of Scala and the JVM is already built-in. No more UDFs, folks!

This is going to be an in-your-face introduction to Scalding, Twitter’s (Scala + Cascading) MapReduce framework.

In 140: instead of forcing you to write raw map and reduce functions, Scalding allows you to write natural code like

Not much different from the Ruby you’d write to compute tweet distributions over small data? Exactly.

Movie Similarities

Imagine you run an online movie business, and you want to generate movie recommendations. You have a rating system (people can rate movies with 1 to 5 stars), and we’ll assume for simplicity that all of the ratings are stored in a TSV file somewhere.

Let’s start by reading the ratings into a Scalding job.

You want to calculate how similar pairs of movies are, so that if someone watches The Lion King, you can recommend films like Toy Story. So how should you define the similarity between two movies?

One way is to use their correlation:

1. For every pair of movies A and B, find all the people who rated both A and B.
2. Use these ratings to form a Movie A vector and a Movie B vector.
3. Calculate the correlation between these two vectors.

Whenever someone watches a movie, you can then recommend the movies most correlated with it.

Let’s start with the first two steps.
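
These two steps amount to a self-join on the user: group the ratings by user, then emit every pair of movies that user rated. Here's a plain-Scala sketch of the logic (the function and sample data are mine; the Scalding version joins the ratings pipe against itself on the user field):

```scala
case class Rating(user: String, movie: String, rating: Double)

// For each user, pair up the movies they rated, producing
// (movieA, movieB, ratingA, ratingB) rows. Keeping only movieA < movieB
// avoids emitting each unordered pair twice.
def ratingPairs(ratings: Seq[Rating]): Seq[(String, String, Double, Double)] =
  ratings.groupBy(_.user).values.toSeq.flatMap { rs =>
    for {
      a <- rs
      b <- rs
      if a.movie < b.movie
    } yield (a.movie, b.movie, a.rating, b.rating)
  }

val pairs = ratingPairs(Seq(
  Rating("alice", "The Lion King", 5.0),
  Rating("alice", "Toy Story", 4.0),
  Rating("bob", "The Lion King", 4.0),
  Rating("bob", "Toy Story", 5.0)
))
```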

Before using these rating pairs to calculate correlation, let’s stop for a bit.

Since we’re explicitly thinking of movies as vectors of ratings, it’s natural to compute some very vector-y things like norms and dot products, as well as the length of each vector and the sum over all elements in each vector. So let’s compute these:
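
A plain-Scala sketch of these per-pair aggregates (helper names mine, chosen to match the field names below; in the real job, numRaters and numRaters2 come from a separate per-movie count joined in):

```scala
// Aggregates over all (ratingA, ratingB) pairs for one movie pair.
case class PairStats(size: Int, dotProduct: Double,
                     ratingSum: Double, rating2Sum: Double,
                     ratingNormSq: Double, rating2NormSq: Double)

def vectorCalcs(pairs: Seq[(Double, Double)]): PairStats =
  PairStats(
    size          = pairs.size,
    dotProduct    = pairs.map { case (a, b) => a * b }.sum,
    ratingSum     = pairs.map(_._1).sum,
    rating2Sum    = pairs.map(_._2).sum,
    ratingNormSq  = pairs.map { case (a, _) => a * a }.sum,
    rating2NormSq = pairs.map { case (_, b) => b * b }.sum
  )

val stats = vectorCalcs(Seq((5.0, 4.0), (4.0, 5.0)))
```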

To summarize, each row in vectorCalcs now contains the following fields:

movie, movie2

numRaters, numRaters2: the total number of people who rated each movie

size: the number of people who rated both movie and movie2

dotProduct: dot product between the movie vector (a vector of ratings) and the movie2 vector (also a vector of ratings)

ratingSum, rating2Sum: sum over all elements in each ratings vector

ratingNormSq, rating2NormSq: squared norm of each vector

So let’s go back to calculating the correlation between movie and movie2. We could, of course, calculate correlation in the standard way: find the covariance between the movie and movie2 ratings, and divide by their standard deviations.
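
Conveniently, the aggregates above are all we need: using the computational formula for correlation, the covariance and standard deviations fall out of the sums directly. A sketch (function name mine):

```scala
// Pearson correlation from the per-pair aggregates:
// corr = (n * dot - sumX * sumY) /
//        (sqrt(n * normSqX - sumX^2) * sqrt(n * normSqY - sumY^2))
def correlation(size: Double, dotProduct: Double,
                ratingSum: Double, rating2Sum: Double,
                ratingNormSq: Double, rating2NormSq: Double): Double = {
  val numerator = size * dotProduct - ratingSum * rating2Sum
  val denominator =
    math.sqrt(size * ratingNormSq - ratingSum * ratingSum) *
    math.sqrt(size * rating2NormSq - rating2Sum * rating2Sum)
  numerator / denominator
}
```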

More Similarity Measures

Cosine Similarity
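
Cosine similarity is the dot product of the two rating vectors divided by the product of their norms, which again falls straight out of the aggregates. A sketch (function name mine):

```scala
// Cosine similarity from the per-pair aggregates.
def cosineSimilarity(dotProduct: Double,
                     ratingNormSq: Double, rating2NormSq: Double): Double =
  dotProduct / (math.sqrt(ratingNormSq) * math.sqrt(rating2NormSq))
```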

Correlation, Take II

We can also add a regularized correlation, by (say) adding N virtual movie pairs that have zero correlation. This helps avoid noise if some movie pairs have very few raters in common (for example, The Great Gatsby had an unlikely raw correlation of 1 with many other books, due simply to the fact that those book pairs had very few ratings).
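
Mixing in N virtual pairs at the prior correlation is just a weighted average, so the raw correlation gets shrunk toward zero in proportion to how few real raters the pair has. A sketch (function name mine):

```scala
// Shrink the raw correlation toward a prior (zero, say) by pretending we
// also observed virtualCount pairs at the prior correlation.
def regularizedCorrelation(size: Double, rawCorrelation: Double,
                           virtualCount: Double,
                           priorCorrelation: Double): Double = {
  val w = size / (size + virtualCount)
  w * rawCorrelation + (1 - w) * priorCorrelation
}
```

With only 5 real raters and 10 virtual pairs, a raw correlation of 1 shrinks all the way down to 1/3; with 100 real raters it barely moves.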

Jaccard Similarity

Recall that one of the lessons of the Netflix prize was that implicit data can be quite useful – the mere fact that you rate a James Bond movie, even if you rate it quite horribly, suggests that you’d probably be interested in similar action films. So we can also ignore the value itself of each rating and use a set-based similarity measure like Jaccard similarity.
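
Jaccard similarity is the size of the intersection of the two rater sets over the size of their union, and the union size is just the two rater counts minus the overlap. A sketch (function name mine):

```scala
// Jaccard similarity: |A ∩ B| / |A ∪ B|, where A and B are the sets of
// people who rated each movie.
def jaccardSimilarity(usersInCommon: Double,
                      totalUsers1: Double, totalUsers2: Double): Double =
  usersInCommon / (totalUsers1 + totalUsers2 - usersInCommon)
```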

Incorporation

Finally, let’s add all these similarity measures to our output.
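
As a plain-Scala sketch of this last step (the case class and output format are mine), we can compute every measure from one row of aggregates and emit a tab-separated output line:

```scala
// One row of per-pair aggregates, as described above.
case class PairStats(movie: String, movie2: String,
                     numRaters: Int, numRaters2: Int, size: Int,
                     dotProduct: Double, ratingSum: Double, rating2Sum: Double,
                     ratingNormSq: Double, rating2NormSq: Double)

// Emit movie, movie2, and all four similarity measures as one TSV line.
def similarityRow(s: PairStats, virtualCount: Double = 10.0): String = {
  val corr = (s.size * s.dotProduct - s.ratingSum * s.rating2Sum) /
    (math.sqrt(s.size * s.ratingNormSq - s.ratingSum * s.ratingSum) *
     math.sqrt(s.size * s.rating2NormSq - s.rating2Sum * s.rating2Sum))
  val regCorr = (s.size / (s.size + virtualCount)) * corr
  val cosine  = s.dotProduct / (math.sqrt(s.ratingNormSq) * math.sqrt(s.rating2NormSq))
  val jaccard = s.size.toDouble / (s.numRaters + s.numRaters2 - s.size)
  Seq(s.movie, s.movie2, corr, regCorr, cosine, jaccard).mkString("\t")
}

val row = similarityRow(
  PairStats("The Lion King", "Toy Story", 5, 4, 3, 28.0, 6.0, 12.0, 14.0, 56.0))
```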

Book Similarities Revisited

Let’s take another look at the book similarities above, now that we have these new fields.

Here are some of the top Book-Crossing pairs, sorted by their shrunk correlation:

Notice how regularization affects things: the Dark Tower pair has a pretty high raw correlation, but relatively few ratings (reducing our confidence in the raw correlation), so it ends up below the others.

And here are books similar to The Great Gatsby, this time ordered by cosine similarity:

Input Abstraction

So our code right now is tied to our specific ratings.tsv input. But what if we change the way we store our ratings, or what if we want to generate similarities for something entirely different?

Take Foursquare check-ins, for example. Instead of using an explicit rating given to us, we can simply generate a dummy rating of 1 for each check-in. Correlation doesn’t make sense any more, but we can still pay attention to a measure like Jaccard similarity.
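
With every check-in treated as an implicit rating of 1, venue similarity reduces to Jaccard similarity over the sets of users who visited each venue. A plain-Scala sketch (helper and sample data mine):

```scala
// Each check-in is a (user, venue) pair; similarity between two venues is
// the Jaccard similarity of their visitor sets.
def venueJaccard(checkins: Seq[(String, String)],
                 venueA: String, venueB: String): Double = {
  val byVenue = checkins.groupBy(_._2).map { case (v, cs) => v -> cs.map(_._1).toSet }
  val a = byVenue.getOrElse(venueA, Set.empty[String])
  val b = byVenue.getOrElse(venueB, Set.empty[String])
  (a intersect b).size.toDouble / (a union b).size
}

val checkins = Seq(
  ("alice", "Empire State Building"), ("alice", "MoMA"),
  ("bob", "Empire State Building"), ("bob", "MoMA"),
  ("carol", "MoMA")
)
```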

So we simply create a new class that scrapes tweets for Foursquare check-in information…
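
A hypothetical sketch of the scraping step; the tweet format and regex here are assumptions, not the post's actual code. Foursquare check-in tweets historically looked roughly like "I'm at <venue> (<address>) <link>", so a simple regex pulls out the venue name:

```scala
// Hypothetical: extract the venue name from an old-style Foursquare tweet.
val checkinPattern = """I'm at (.+?) \(""".r

def extractVenue(tweet: String): Option[String] =
  checkinPattern.findFirstMatchIn(tweet).map(_.group(1))

val venue = extractVenue("I'm at Bergdorf Goodman (5th Ave, New York) http://4sq.com/abc123")
```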

…and bam! Here are locations similar to the Empire State Building:

Here are places you might want to check out, if you check in at Bergdorf Goodman:
