Slope One Predictors for really simple collaborative filtering

Daniel Lemire and Anna Maclachlan have a simple method for guessing how a person will rate an item based on collaborative filtering. It’s named Slope One because the predictor is linear with the slope fixed at 1 – f(x) = x + b, so only the constant is learned. Slope One’s advantage is that it is fast and easy to implement – predicting an unknown rating takes only simple arithmetic over precomputed average rating differences.

We propose three related slope one schemes with predictors of the form f(x) = x + b, which precompute the average difference between the ratings of one item and another for users who rated both.

…In a pairwise fashion, we determine how much better one item is liked than another. One way to measure this differential is simply to subtract the average rating of the two items. In turn, this difference can be used to predict another user’s rating of one of those items, given their rating of the other.
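
To make that concrete, here is a minimal Python sketch of the basic idea (my own illustration, not the authors’ code – the function names and the {user: {item: rating}} dictionary layout are just assumptions for the example): precompute the average rating difference for each item pair over the users who rated both, then predict by shifting the user’s known ratings by those differences and averaging.

```python
from collections import defaultdict

def average_differences(ratings):
    """ratings: {user: {item: rating}}.
    Returns ({(j, i): average of (rating_j - rating_i)}, {(j, i): co-rater count})."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for user_ratings in ratings.values():
        for j in user_ratings:
            for i in user_ratings:
                if i != j:
                    sums[(j, i)] += user_ratings[j] - user_ratings[i]
                    counts[(j, i)] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}, counts

def predict(ratings, diffs, user, target):
    """Basic Slope One: average (known rating + average difference) over every
    item the user rated that was co-rated with the target item."""
    estimates = [rating + diffs[(target, i)]
                 for i, rating in ratings[user].items() if (target, i) in diffs]
    return sum(estimates) / len(estimates) if estimates else None
```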

And then you can do a weighted Slope One to try to get even closer. From the Wikipedia article,

If a user rated several items, the predictions are simply combined using a weighted average where a good choice for the weight is the number of users having rated both items.

[Image: slopeone2.png – example ratings used for the weighted Slope One prediction]

Given the shown ratings, we begin by observing that item 1 is rated, on average, 3 points less than item 2 or item 3. Moreover, items 1 and 2 have been corated by a single user, whereas items 1 and 3 have been corated by 2 users. From this information, and the ratings that Lucy provided for items 1 and 2, we can provide a prediction.
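
Here is a sketch of the weighted variant, reusing the average_differences helper from the earlier snippet (again my own illustration; the sample ratings below are made up to mirror this kind of example, not taken from the paper): each per-item estimate is weighted by how many users co-rated that item with the target.

```python
def predict_weighted(ratings, diffs, counts, user, target):
    """Weighted Slope One: weight each estimate by the number of co-raters."""
    numerator = denominator = 0.0
    for i, rating in ratings[user].items():
        if (target, i) in diffs:
            c = counts[(target, i)]
            numerator += c * (rating + diffs[(target, i)])
            denominator += c
    return numerator / denominator if denominator else None

# Made-up ratings on a 1-5 scale; "lucy" has not rated item1 yet.
ratings = {
    "john": {"item1": 5, "item2": 3, "item3": 2},
    "mark": {"item1": 3, "item2": 4},
    "lucy": {"item2": 2, "item3": 5},
}
diffs, counts = average_differences(ratings)
print(predict_weighted(ratings, diffs, counts, "lucy", "item1"))  # ~4.33
```

In this toy data the item pair co-rated by two users counts twice as much toward the prediction as the pair co-rated by one, which is exactly the weighting described in the quote above.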

They experimented with two sets of movie data using All But One Mean Average Error and compared Slope One to four other rating schemes (bias from mean, adjusted cosine item-based, etc.). It looks like the range for ratings was 0-1. The Netflix Prize uses a 1-5 range and aims for predictions with 0.856 RMSE, which works out to roughly 0.21 RMSE over 0-1 (0.856 divided by the 4-point span of the scale). I have no idea how to compare these different reports of error beyond normalizing, though. It can’t be doing better than the Netflix Prize, of course.

I haven’t seen anyone else reproduce this comparison, or compare Slope One to the other methods they mention in the paper – Bayes, Latent Class, clustering…

[Image: slopeone.png]

The paper is available at Lemire’s site or on arXiv.

The Wikipedia article has examples and some demo implementations in different languages.

I saw that someone built a Slope One module for Drupal, in case you were thinking about creating an out-of-your-garage Amazon or Netflix.
