We will address two important issues faced by recommendation systems. First, how do you solve the cold start problem, i.e., provide recommendations for new users with very limited behavior data available? Second, even if we have a recommendation list for new users, how do we avoid presenting the same recommendation list repeatedly?

I will go over the details of how both of these problems are solved in sifarish, my open source recommendation engine. We calculate certain statistical parameters from historical user engagement signals and compute the popularity of an item by combining those statistical parameters.

Why Dithering

The second issue has to do with what is known as the “above the fold” issue. When presented with a long list, users will typically scan only the top few items. Rarely will they scroll down a long list or navigate to another page. If the same popular item list is presented repeatedly, its effectiveness goes down over time. We will use a technique called dithering to alter the list a little bit every time it’s presented to the user.

It’s been reported that dithering is very significant in getting a higher click through rate on recommended items. Recommendation algorithms, no matter how complex and smart, are not always likely to accurately predict what a user may be interested in, because of our inherent idiosyncrasies. Dithering essentially allows experimentation with the recommendation results.

Popularity Defined

How do you define popularity for an item? Is it the number of users who bought the item, or the number of users who liked it? My answer is all of them, and maybe more. In sifarish we first calculate implicit ratings from the historical user engagement data and then calculate the following quantities from the implicit rating data.

Number of users: reflects the visibility of an item.

Rating mean: reflects the likability of an item.

Rating median: reflects the likability of an item and is outlier proof.

Rating standard deviation: indicates how unanimous or stable the likability is.

Finally, we define popularity as a weighted average of all four quantities. The relative weights are user configurable.
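As a concrete sketch, computing the four statistics and their weighted combination might look like this in Python. The function and field names here are my own illustration, not sifarish's actual code, and in the real pipeline the statistics are normalized to a common scale before the weights are applied.

```python
import statistics

def popularity_stats(ratings):
    """Compute the four per-item statistics from a list of implicit ratings."""
    return {
        "num_users": len(ratings),               # visibility of the item
        "mean": statistics.mean(ratings),        # likability
        "median": statistics.median(ratings),    # likability, outlier proof
        "std_dev": statistics.pstdev(ratings),   # how unanimous the likability is
    }

def popularity_score(stats, weights):
    """Weighted average of the (already normalized) statistics.

    weights is a dict keyed by the same attribute names; the relative
    weights are meant to be user configurable and should sum to 1.
    """
    return sum(weights[name] * stats[name] for name in weights)
```

In sifarish the same computation is done at scale by the MR jobs described next, with the weights read from configuration.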

Popularity Computed with Hadoop

The popularity score is computed by running historical user engagement data through a sequence of Hadoop MapReduce (MR) jobs, described below. The output of one MR job serves as input for the next MR job in the sequence.

More details on the implicit rating generator MR can be found in my earlier post. The Normalizer MR normalizes all attributes to the same scale. There are two choices for normalization: minmax and zscore. Zscore based normalization is outlier proof. Since we are dealing with processed statistical parameters, there are no outliers, and we have used minmax based normalization.
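For reference, here is a minimal sketch of the two normalization choices. The helper names are hypothetical, not the actual Normalizer MR API.

```python
import statistics

def minmax_normalize(values):
    """Scale values linearly to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def zscore_normalize(values):
    """Center on the mean and scale by the standard deviation.

    Less sensitive to outliers than minmax, since a single extreme value
    cannot compress the rest of the range to a narrow band.
    Assumes the values are not all identical.
    """
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]
```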

The WeightedAverage MR is from my open source project chombo. It’s a general purpose MR job that calculates the weighted average of all the attribute values, in this case the four statistical parameters.

Here is some sample output for the popularity score, showing the top of the list. Each line contains an itemID followed by its score.

Popularity Dithered

Even though the popular item scores may be recomputed periodically, e.g. once a week, during the interim period we are forced to present the same list to new users until it’s recomputed. When presented with a long list, users typically don’t scan beyond the first few items. It’s a well known problem in search: a user will rather change the search query than navigate to the next page.

Dithering solves the problem by introducing some random noise into the popularity score so that the list is reordered. It’s done in such a way that the effect decreases as we go towards the top of the list, so the top items are more likely to retain their positions. This process is like making something slightly worse to get better results.

In sifarish, the dithering process computes a new score for an item by taking the log of the existing score and adding Gaussian noise:

newScore = log(score) + gaussian(0, stdDev)

The parameter stdDev is configurable. As its value goes up, the randomness goes up and the item list gets more shuffled. After the new scores are computed for all items, the item list is re-sorted by the new score.
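A minimal Python sketch of this dithering step follows; the names are illustrative, since sifarish's actual implementation is Java based.

```python
import math
import random

def dither(scored_items, std_dev=0.015, rng=None):
    """Re-rank items by log(score) plus Gaussian noise.

    scored_items: list of (item_id, score) pairs with score > 0.
    Because the log compresses score differences, a small fixed-variance
    noise shuffles low-ranked items more readily than the top few.
    """
    rng = rng or random.Random()
    noisy = [(item, math.log(score) + rng.gauss(0.0, std_dev))
             for item, score in scored_items]
    # re-sort by the new, noisy score
    noisy.sort(key=lambda pair: pair[1], reverse=True)
    return noisy
```

With a small stdDev such as 0.015, items whose scores are well separated, typically those at the top, almost always keep their positions, while closely scored items further down get shuffled.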

Dithering in Real Time with Storm

Every time the popular items are to be presented to the user, the list is dithered in real time. In sifarish this is done using Storm. The Storm topology involves two stage processing. A Redis spout layer reads userIDs from a Redis queue.

A bolt layer contains the dithering logic. A bolt reads the original recommendation list for a given user from a Redis cache. Why do we have a user when we are dealing with a global popular item list? The reason is that the same Storm based solution is used for dithering personalized recommendation lists. For personalized recommendations, a recommendation list is generated for each user. For popular items, we use a unique ID for a fictitious user.

Every time an item list, whether popular or personalized, is to be presented to the user, the application reads an already dithered item list from the Redis cache and writes the userID to a Redis queue, so that a new dithered list can be generated by the Storm topology, to be consumed by the next application thread.

The dithering bolt reads the original item list from the Redis cache, computes a new score for each item based on the dithering logic, and then sorts the list by the new score. It finally writes the new list back to the Redis cache with the userID as the cache key.
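The bolt's read, dither, and write-back cycle can be sketched as follows, with plain dictionaries and lists standing in for the Redis cache and queue. All names and keys here are illustrative, not sifarish's actual schema.

```python
import math
import random

# Stand-ins for the Redis cache and queue; "popular.user" plays the role
# of the fictitious userID used for the global popular item list.
item_cache = {"popular.user": [("i1", 0.95), ("i2", 0.50), ("i3", 0.10)]}
request_queue = ["popular.user"]

def dither_bolt(cache, queue, std_dev=0.015, rng=None):
    """One pass of the bolt: take a userID from the queue, dither that
    user's cached item list, and write the re-sorted list back under
    the same key so the next application thread can consume it."""
    rng = rng or random.Random()
    user_id = queue.pop(0)                 # spout delivers the userID
    items = cache[user_id]                 # read the original list
    rescored = sorted(
        ((item, math.log(score) + rng.gauss(0.0, std_dev))
         for item, score in items),
        key=lambda pair: pair[1], reverse=True)
    cache[user_id] = rescored              # write back, userID as cache key
    return user_id
```

In the real topology the queue pop is the Redis spout and the cache reads and writes are Redis operations inside the bolt; the dithering arithmetic is the same.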

Here is some sample output. Comparing it with the original list above, we can see that the top item has retained its place. Dithering has shuffled the items from second place onwards. I am showing only the top 5 items here.

I have used a value of 0.015 for the standard deviation parameter dither.std.dev. You could increase it to inject more randomness into the dithering process.

Personalized Recommendations

So far we have discussed dithering for popular item recommendations. However, as mentioned earlier, a personalized recommendation list can be dithered using the same process.

The personalized recommendation list, as computed by the collaborative filtering algorithm, is used as the source list. Every time personalized recommendations are to be presented to the user, a dithered item list is obtained for the user based on the source list.

Summing Up

We have gone through the details of how to apply dithering to a recommendation list, which has been reported to be very effective in getting higher click through rates on recommended items.

I have used Storm to generate the dithered list in real time. You could also use Hadoop to precompute a set of dithered item lists for each user and use them as needed. The advantage of real time processing with Storm is that the computation is done only as needed.


About Pranab

I am Pranab Ghosh, a software professional in the San Francisco Bay area. I manipulate bits and bytes for the good of living beings and the planet. I have worked with a myriad of technologies and platforms in various business domains, for early stage startups, large corporations and everything in between. I am an active blogger and open source contributor. I am passionate about technology and green and sustainable living.
My technical interest areas are Big Data, Distributed Processing, NOSQL databases, Data Mining and Programming languages. I am fascinated by problems that don't have neat closed form solution.

3 Responses to Popularity Shaken

This blog is a great combination of practical and theoretical machine learning. One quick question: are there any plans to add geolocation bias to sifarish? In terms of using lat/lng or geohash to influence recommendations?

Adam
I am glad you liked the post. I have thought about geolocation influenced recommendations. If you have a use case and if it’s OK, please share it through the github issue tracker or by sending me an email at pkghosh@yahoo.com. Roughly, it would work as follows:

– A MapReduce job to join recommended items with item attribute data, through which it will pick up the geolocation attributes for the items
– In real time, the userID along with the user’s geolocation is submitted to Storm. Storm will alter the recommendation list based on proximity calculations with the geolocation data