Target

The benefit of using GraphChi is that it requires only a single multicore machine and can scale up to very large models, since at no point is the data fully read into memory. In other words, GraphChi is very useful for machines with limited RAM, since it streams over the dataset. It is also possible to configure how much RAM to use during the run.

Here are some performance numbers, from a run of 6 iterations of SGD (stochastic gradient descent) on the full Netflix data. Netflix has around 100M ratings, so the matrix has 100M non-zeros. The size of the decomposed matrix is about 480K users x 10K movies. I used a single multicore machine with 8 threads, where GraphChi memory consumption was limited to 800Mb, using 8 cores. The factorized matrix has a width of D=20. In total it takes around 80 seconds for 6 iterations, which is around 13 seconds per iteration.

Preprocessing the matrix is done once and takes around 35 seconds.

The input to GraphChi ALS/SGD/bias-SGD is the sparse matrix A in sparse matrix market format. The output is two matrices U and V s.t. A ~= U*V', where both U and V have the lower dimension D.

For tensor-ALS, time-SVD++

For bias-SGD, SVD++, time-SVD++

Three additional files are created: filename_U_bias.mm, filename_V_bias.mm and filename_global_mean.mm. The bias files contain the bias for each user (U) and item (V). The global mean file contains the global mean of the ratings.

For SVD

For each singular vector a file named filename.U.XX is created, where XX is the index of the singular vector. The same holds for filename.V.XX. Additionally, a file with the singular values is also saved.

Algorithms

Here is a table summarizing the properties of the different algorithms in the collaborative filtering library:

ALGORITHM                  | Method type | Comments
ALS                        | ALS         |
ALS_COORD/CCD++            | ALS         | Using parallel coordinate descent
Sparse-ALS                 | ALS         | Sparse feature vectors (useful for classifying users/items together)
SGD                        | SGD         |
bias-SGD                   | SGD         |
bias-SGD2                  | SGD         | Supports logistic loss and MAE
SVD                        | Lanczos     |
One Sided SVD              |             | For skewed matrices (with one dimension larger than the other)
NMF                        |             | For positive matrices
RBM                        | SGD         | MCMC method
SVD++                      | SGD         |
LIBFM                      | SGD         |
PMF                        | ALS         | MCMC method
time-SVD++                 | SGD         | Supports time
BPTF (not implemented yet) |             | MCMC method
BaseLine                   | X           | X
WALS                       | ALS         | Supports weights for each recommendation
TENSOR ALS                 |             | Tensor factorization
GENSGD                     |             | Supports arbitrary string format; can be used for classification
SPARSE_GENSGD              |             | libsvm format
CLiMF                      | SGD         | Minimizes MRR (mean reciprocal rank)

Note: for tensor algorithms, you need to verify that you have both the rating and its time. Typically the exact time is binned into time bins (a few tens up to a few hundreds). Too fine a granularity over the time bins slows down computation and does not improve prediction. Using matrix market format, you need to specify each rating using 4 fields: [user] [item] [time bin] [rating]

Common command line options (for all algorithms)

--training

the training input file

--validation

the validation input file (optional). Validation data has known ratings but is not used for training.

--test

the test input file (optional). Test input file is used for computing predictions to a predefined list of user/item pairs.

--minval

min allowed rating (optional). It is highly recommended to set this value since it improves prediction accuracy.

--maxval

max allowed rating (optional). It is highly recommended to set this value since it improves prediction accuracy.

--max_iter

number of iterations to run

--quiet

run with fewer traces (optional, default = 0).

--halt_on_rmse_increase

(optional, default = 0). Stops execution when validation error goes up. Runs at least the number of iterations specified in the flag. For example --halt_on_rmse_increase=10 will run at least 10 iterations, and then stop if validation RMSE increases.

--load_factors_from_file

(optional, default = 0). This option enables two functionalities. Instead of starting from a random state, you can start the algorithm from any predefined state. It also allows running a few iterations, saving the results to disk for fault tolerance, and later resuming FROM THE SAME EXACT state.

--D

width of the factorized matrix. Default is 20.

--R_output_format

Save output in sparse matrix market format (compatible with R).

Baseline method

The baseline method is a simple and quick way of checking the accuracy of the predictions. It supports three operation modes:

--algorithm=global_mean // assigns all recommendations to be the global rating mean
--algorithm=user_mean // assigns recommendations based on each user's mean value
--algorithm=item_mean // assigns recommendations based on each item's mean value

To summarize, the baseline method assigns one of the three possible means as the recommendation result and computes the prediction error. Any other algorithm should give a better result than the baseline method, so it can be used as a sanity check for the deployed algorithms.

ALS (Alternating least squares)

Pros: simple to use, not many command line arguments.
Cons: intermediate accuracy, higher computational overhead.

ALS is a simple yet powerful algorithm. In this model the prediction is computed as:

r_ui = p_u * q_i

where r_ui is the scalar rating of user u for item i, p_u is the user feature vector of size D, q_i is the item feature vector of size D, and the product is a vector product. The output of ALS is two matrices: filename_U.mm and filename_V.mm. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). In linear algebra notation, the rating matrix R ~ U*V'.

Below are ALS-related command line options:
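To make the prediction rule and the alternating update concrete, here is a minimal numpy sketch (illustrative names only, not the toolkit's actual code): the prediction is the inner product of the two feature vectors, and one ALS half-step solves a regularized least-squares problem for a single user while the item factors are held fixed.

```python
import numpy as np

def predict(p_u, q_i):
    # r_ui = p_u * q_i: inner product of the user and item feature vectors
    return float(p_u @ q_i)

def als_update_user(ratings, Q, rated_items, lam):
    # Regularized least-squares solve for one user, items held fixed:
    # p_u = (Q_r' Q_r + lambda*I)^-1 Q_r' r_u,
    # where Q_r stacks the feature vectors of the items this user rated.
    Q_r = Q[rated_items]
    r_u = ratings[rated_items]
    D = Q.shape[1]
    A = Q_r.T @ Q_r + lam * np.eye(D)
    return np.linalg.solve(A, Q_r.T @ r_u)
```

Alternating such solves between all users and all items is what gives ALS its name.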

Basic configuration

--lambda=XX

Set regularization. Regularization helps to prevent overfitting.

CCD++ (Alternating least squares, parallel coordinate descent)

Pros: simple to use, not many command line arguments, faster than ALS.
Cons: slower convergence relative to ALS.

In CCD++ the prediction is computed as:

r_ui = p_u * q_i

where r_ui is the scalar rating of user u for item i, p_u is the user feature vector of size D, q_i is the item feature vector of size D, and the product is a vector product. The output of CCD++ is two matrices: filename_U.mm and filename_V.mm. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). In linear algebra notation, the rating matrix R ~ U*V'.

Below are CCD++ related command line options:

Basic configuration

--lambda=XX

Set regularization. Regularization helps to prevent overfitting.

Stochastic gradient descent (SGD)

Pros: fast method.
Cons: need to tune the step size; more iterations are needed relative to ALS.

SGD is a simple gradient descent algorithm. Prediction in SGD is done as in ALS:

r_ui = p_u * q_i

where r_ui is the scalar rating of user u for item i, p_u is the user feature vector of size D, q_i is the item feature vector of size D, and the product is a vector product. The output of SGD is two matrices: filename.U and filename.V. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). In linear algebra notation, the rating matrix R ~ U*V'.
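A minimal sketch of one SGD update for this model (hypothetical names, numpy only; not the toolkit's code): each step moves p_u and q_i along the gradient of the squared error, with L2 regularization lam.

```python
import numpy as np

def sgd_step(p_u, q_i, r_ui, lrate, lam):
    # One stochastic gradient step on (r_ui - p_u*q_i)^2 with L2 penalty
    err = r_ui - p_u @ q_i
    p_new = p_u + lrate * (err * q_i - lam * p_u)
    q_new = q_i + lrate * (err * p_u - lam * q_i)
    return p_new, q_new

# Repeatedly applying the step to a single rating drives the error down:
p, q = np.full(2, 0.1), np.full(2, 0.1)
for _ in range(200):
    p, q = sgd_step(p, q, 3.0, 0.05, 0.0)
```

In the real algorithm the step is applied to every non-zero of the rating matrix in each iteration, with the step size decayed over time.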

Bias-SGD

Pros: fast method.
Cons: need to tune the step size.

Bias-SGD is a simple gradient descent algorithm where, besides the feature vectors, we also compute item and user biases (how much their average rating differs from the global average). Prediction in bias-SGD is done as follows:

r_ui = global_mean_rating + b_u + b_i + p_u * q_i

where global_mean_rating is the global mean rating, b_u is the bias of user u, b_i is the bias of item i, and p_u and q_i are feature vectors as in ALS. You can read more about bias-SGD in reference [N]. The output of bias-SGD consists of two matrices: filename.U and filename.V. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). Additionally, the output consists of two vectors: a bias for each user and a bias for each item. Last, the global mean rating is also given as output.

bias-SGD2

As in bias-SGD, global_mean_rating is the global mean rating, b_u is the bias of user u, b_i is the bias of item i, and p_u and q_i are feature vectors as in ALS. You can read more in reference [N]. The output of bias-SGD2 consists of two matrices: filename.U and filename.V. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). Additionally, the output consists of two vectors: a bias for each user and a bias for each item. Last, the global mean rating is also given as output.

Koren’s SVD++

Pros: more accurate than SGD once tuned, relatively fast method.
Cons: a lot of parameters to tune; subject to numerical errors when parameters are out of scope.

Koren's SVD++ is an algorithm slightly fancier than bias-SGD that gives somewhat better prediction results.

Prediction in Koren's SVD++ algorithm is computed as follows:

r_ui = global_mean_rating + b_u + b_i + q_i * ( p_u + w_u )

where r_ui is the scalar rating of user u for item i, global_mean_rating is the global mean rating, b_u is a scalar bias for user u, b_i is a scalar bias for item i, p_u is a feature vector of length D for user u, q_i is a feature vector of length D for item i, and w_u is an additional feature vector of length D (the weight) for user u. The product is a vector product. Koren's SVD++ produces 5 output files:

Global mean rating - contains the scalar global mean rating.
user_bias - contains a vector with the bias for each user.
movie_bias - contains a vector with the bias for each movie.
matrix U - each row contains the feature vector p_u of size D followed by the weight vector w_u of size D (total width 2D).
matrix V - each row contains the item feature vector q_i of width D.
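For concreteness, the prediction rule can be written as a small Python function (a sketch with illustrative names, not the toolkit's API):

```python
import numpy as np

def svdpp_predict(global_mean, b_u, b_i, p_u, q_i, w_u):
    # r_ui = global_mean_rating + b_u + b_i + q_i * (p_u + w_u)
    return float(global_mean + b_u + b_i + q_i @ (p_u + w_u))
```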

Weighted Alternating Least Squares (WALS)

Pros: allows weighting of ratings (the weight can be thought of as confidence in the rating); almost the same computational cost as ALS.
Cons: worse modeling error relative to ALS.

Weighted ALS is a simple extension of ALS where each user/item pair has an additional weight. In this sense, WALS is a tensor algorithm, since besides the rating it also maintains a weight for each rating. The algorithm is described in references [I, J]. Prediction in WALS is computed as follows:

r_ui = w_ui * p_u * q_i

The scalar rating r for user u and item i is computed by multiplying the weight of the rating w_ui by the vector product p_u * q_i. Both p and q are feature vectors of size D.

Note: for weighted-ALS, the input file has 4 columns:

[user] [item] [weight] [rating]. See example file in section 5e).
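To illustrate the 4-column input layout, here is a small Python sketch that writes a toy weighted-ALS input file (the filename and data are made up; the header follows the matrix market coordinate convention used by the other input files):

```python
# Each data row is: [user] [item] [weight] [rating]
rows = [(1, 1, 1.0, 5.0), (1, 2, 0.5, 3.0), (2, 1, 2.0, 4.0)]
num_users, num_items = 2, 2

with open("wals_toy.mtx", "w") as f:
    f.write("%%MatrixMarket matrix coordinate real general\n")
    f.write(f"{num_users} {num_items} {len(rows)}\n")  # size line
    for user, item, weight, rating in rows:
        f.write(f"{user} {item} {weight} {rating}\n")
```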

--lambda - regularization

Alternating least squares with sparse factors

Pros: excellent for spectral clustering.
Cons: less accurate linear model because of the sparsification step.

This algorithm is based on ALS, but an additional sparsifying step is performed on the user feature vectors, the item feature vectors, or both. This algorithm is useful for spectral clustering: first the rating matrix is factorized into a product of one or two sparse matrices, and then clustering can be computed on the feature matrices to detect similar users or items. The underlying algorithm used for sparsifying is CoSaMP. See reference [K1]. Below are sparse-ALS related command line options:

Basic configuration

--user_sparsity=XX

A number between 0.5 and 1 which defines how sparse the resulting user feature factor matrix is

--movie_sparsity=XX

A number between 0.5 and 1 which defines how sparse the resulting movie feature factor matrix is

Example running sparse-ALS:

WARNING: sparse_als.cpp(main:202): GraphChi Collaborative filtering library is written by Danny Bickson (c). Send any comments or bug reports to danny.bickson@gmail.com

[training] => [smallnetflix_mm]

[user_sparsity] => [0.8]

[movie_sparsity] => [0.8]

[algorithm] => [3]

[quiet] => [1]

[max_iter] => [15]

0) Training RMSE: 1.11754 Validation RMSE: 3.82345

1) Training RMSE: 3.75712 Validation RMSE: 3.241

2) Training RMSE: 3.22943 Validation RMSE: 2.03961

3) Training RMSE: 2.10314 Validation RMSE: 2.88369

4) Training RMSE: 2.70826 Validation RMSE: 3.00748

5) Training RMSE: 2.70374 Validation RMSE: 3.16669

6) Training RMSE: 3.03717 Validation RMSE: 3.3131

7) Training RMSE: 3.18988 Validation RMSE: 2.83234

8) Training RMSE: 2.82192 Validation RMSE: 2.68066

9) Training RMSE: 2.29236 Validation RMSE: 1.94994

10) Training RMSE: 1.58655 Validation RMSE: 1.08408

11) Training RMSE: 1.0062 Validation RMSE: 1.22961

12) Training RMSE: 1.05143 Validation RMSE: 1.0448

13) Training RMSE: 0.929382 Validation RMSE: 1.00319

14) Training RMSE: 0.920154 Validation RMSE: 0.996426

tensor-ALS

Note: for tensor-ALS, the input file has 4 columns:

[user] [item] [time] [rating]. See example file in section 5b).

--lambda - regularization

Non-negative matrix factorization (NMF)

Non-negative matrix factorization (NMF) is based on Lee and Seung [reference H]. Prediction is computed as in ALS:

r_ui = p_u * q_i

Namely, the scalar prediction r of user u for item i is the vector product of the user feature vector p_u (of size D) with the item feature vector q_i (of size D). The only difference is that both p_u and q_i have all non-negative values. The output of NMF is two matrices: filename.U and filename.V. The matrix U holds the user feature vectors in its rows (each vector has exactly D columns). The matrix V holds the feature vectors for each item (again with exactly D columns). In linear algebra notation, the rating matrix R ~ U*V', U >= 0, V >= 0.
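A compact numpy sketch of the Lee-Seung multiplicative updates (a dense toy version for illustration; the toolkit itself works on sparse data out of core):

```python
import numpy as np

def nmf(R, D, iters=300, eps=1e-9):
    # Lee-Seung multiplicative updates for R ~ U @ V.T with U, V >= 0.
    # Starting from positive factors, the updates keep them non-negative.
    rng = np.random.default_rng(0)
    m, n = R.shape
    U = rng.random((m, D)) + 0.1
    V = rng.random((n, D)) + 0.1
    for _ in range(iters):
        U *= (R @ V) / (U @ (V.T @ V) + eps)
        V *= (R.T @ U) / (V @ (U.T @ U) + eps)
    return U, V
```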

SVD Command line arguments

--nv

Number of inner steps of each iteration. Typically this number should be greater than the number of singular values you look for.

--nsv

Number of singular values requested. Should be typically less than --nv

--ortho_repeats

Number of repeats of the orthogonalization step. Default is 1 (no repeats). Increase this number for higher accuracy but slower execution. The maximal allowed value is 3.

--max_iter

Number of allowed restarts. The minimum is 2 (= no restart).

--save_vectors=0

Disable saving the factorized matrices U and V to file. By default save_vectors=1.

--tol

Convergence threshold. For large matrices set this number higher (for example 1e-1), while for small matrices you can set it to 1e-16. The smaller the convergence threshold, the slower the execution.

Scalability

Currently the code was tested with up to 3.5 billion non-zeros on a 24 core machine. Each Lanczos iteration takes about 30 seconds.

Difference to Mahout

The Mahout SVD solver is implemented using the same Lanczos algorithm. However, there are several differences:
1) In Mahout there are no restarts, so the quality of the solution deteriorates very rapidly; after 5-10 iterations the solution is no longer accurate. Running without restarts can be done with our solution using the --max_iter=2 flag.
2) In Mahout there is a single orthonormalization step in each iteration, while in our implementation there are two (after the computation of u_i and after v_i).
3) In Mahout there is no error estimation, while we provide the approximated error for each singular value.
4) Our solution is typically x100 faster than Mahout.

1) --tol=XX, this is the tolerance. When not enough singular vectors converge to a desired

tolerance you can increase it, for example from 1e-4 to 1e-2, etc.

2) --nv=XX this number should be larger than nsv. Typically you can try 20% more or even larger.

3) --nsv=XX this is the number of the desired singular vectors

4) --max_iter=XX - this is the number of restarts. When the algorithm does not converge you can increase the number of restarts.

Restricted Boltzmann Machines (RBM)

The RBM algorithm is detailed in [Hinton's paper]. It is an MCMC method that works on binary data; in other words, the ratings have to be binned into a discrete space. For example, for KDD CUP 2011, ratings between 0 and 100 can be binned into 10 bins: 0-10, 10-20, etc. rbm_scaling defines the factor by which the rating is divided for binning (in the example it is 10). rbm_bins defines how many bins there are in total. In this example we have 11 bins: 0, 1, ..., 10.

Basic configuration

--rbm_mult_step_dec=XX

Multiplicative step decrement (should be 0.1 to 1, default is 0.9)

--rbm_alpha=XX

Alpha parameter: gradient descent step size

--rbm_beta=XX

Beta parameter: regularization

--rbm_scaling=XX

Optional. Scale the rating by dividing it with the rbm_scaling constant. For example for KDD cup data rating of 0..100 can be scaled to the bins 0,1,2,3,.. 10 by setting the rbm_scaling=10

--rbm_bins=XX

Total number of binary bins used. For example in Netflix data where we have 1,2,3,4,5 the number of bins is 6
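The binning that rbm_scaling performs can be sketched in a couple of lines (illustrative only):

```python
def bin_rating(rating, rbm_scaling):
    # Divide the raw rating by rbm_scaling and truncate, so e.g. KDD CUP
    # ratings 0..100 with rbm_scaling=10 land in the 11 bins 0, 1, ..., 10.
    return int(rating // rbm_scaling)
```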

Basically, each user node has 3 fields: h, h0 and h1, each of them a vector of size 20. Those vectors are appended to form a single vector of default size 60. The U matrix has M rows (the number of users) and 60 columns.

Each movie node has 3 fields: ni (a double), bi (a vector of size rbm_bins, default 6), and w (a vector of size rbm_bins * D = 120 by default). In the output file, first the bi vector is written (size = 6) and then w, for a total of 126 values per row. The V matrix has N rows (the number of items) and 126 columns.

Note that the prediction involves bi, h, and w, but does not involve h0, h1, or ni.

Koren's time-SVD++

Pros: more accurate than SVD++.
Cons: many parameters to tune, prone to numerical errors.

Koren's time-SVD++ [Koren's paper above] also takes into account the temporal aspect of the rating. Prediction in the time-SVD++ algorithm is computed as follows:

r_uik = global_mean_rating + b_u + b_i + ptemp_u * q_i + x_u * z_k + pu_u * pt_i * q_k

The scalar rating r_uik (the rating at the intersection of user u, item i, and time bin k) equals the above sum. As in Koren's SVD++, the rating includes the global mean rating and biases for the user and item. The remaining terms are feature vectors, all of length D: for the user we have ptemp_u, x_u and pu_u; for the item we have q_i and pt_i; for the time bins we have z_k and q_k.

Basic configuration

--lrate=XX

Learning rate

--beta

Beta parameter (bias regularization)

--gamma

Gamma parameter (feature vector regularization)

--lrate_mult_dec

Multiplicative step decrement (0.1 to 1, default 0.9)

--D=X

Feature vector width. Common values are 20 - 150.

Special note: this is a tensor factorization algorithm. Please don't forget to prepare a 4-column matrix market format file, with [user] [item] [time] [rating] in each row. It is advised to delete intermediate files created by als_tensor, since they have a different format.

Factorization Machines (FM)

GraphChi's libFM algorithm implementation contains a subset of the full libFM functionality, with only three predictors: user, item and time. Users are encouraged to check the original libFM library, http://www.libfm.org/, for an enhanced implementation. The libFM library by Steffen Rendle has a performance track record in the KDD CUP and is a highly recommended collaborative filtering package.

Factorization machines is an SGD-type algorithm. It has two differences relative to bias-SGD:
1) It handles time information by adding feature vectors for each time bin.
2) It adds an additional feature for the last item rated by each user.
These differences are supposed to make it more accurate than bias-SGD. Factorization machines are detailed in reference [P]. There are several variants; here the SGD variant is implemented (and not the ALS one). Prediction in LIBFM is computed as follows:

r_ui = global_mean_rating + b_u + b_i + b_t + b_li + 0.5*sum((p_u + q_i + w_t + s_li).^2 - (p_u.^2 + q_i.^2 + w_t.^2 + s_li.^2))

where global_mean_rating is the global mean rating; b_u is the bias of user u, b_i the bias of item i, b_t the bias of time t, and b_li the bias of the last item li; p_u is the feature vector of user u, q_i the feature vector of item i, w_t the feature vector of time t, and s_li the feature vector of the last item li. All feature vectors have size D as in ALS. .^2 is the element-by-element power operation (as in Matlab).

The output of LIBFM consists of three matrices: filename.Users, filename.Movies and filename.Times. The Users matrix holds the user feature vectors in its rows (each vector has exactly D columns). The Movies matrix holds the feature vectors for each item (again with exactly D columns). The Times matrix holds the feature vectors for each time bin. Additionally, the output consists of four vectors: a bias for each user, each item, each time bin, and each last item. Last, the global mean rating is also given as output.
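The pairwise-interaction term of a factorization machine can be sketched in numpy as follows (a hedged illustration with made-up names; the 0.5*((sum of vectors)^2 - sum of squared vectors) identity equals the sum of pairwise inner products of the active feature vectors):

```python
import numpy as np

def fm_interaction(vecs):
    # FM second-order term over the active feature vectors:
    # 0.5 * [ (sum of vectors)^2 - sum of squared vectors ], summed
    # componentwise over D. Equals the sum of pairwise inner products.
    s = vecs.sum(axis=0)
    return 0.5 * float(s @ s - (vecs * vecs).sum())

def libfm_predict(global_mean, biases, vecs):
    # Global mean + all biases + pairwise interactions of the feature
    # vectors (here p_u, q_i, w_t, s_li would be the rows of vecs).
    return global_mean + sum(biases) + fm_interaction(np.array(vecs))
```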

Basic configuration

--libfm_rate=XX

Gradient descent step size

--libfm_regw=XX

Gradient descent regularization for biases

--libfm_regv=XX

Gradient descent regularization for feature vectors

--libfm_mult_dec=XX

Multiplicative step decrease. Should be between 0.1 and 1. Default is 0.9

--D=X

Feature vector width. Common values are 20 - 150.

PMF

Pros: once tuned, better accuracy than ALS, since it involves an extra sampling step.
Cons: sensitive to numerical errors, needs fine tuning, does not work on every dataset, higher computational cost, higher prediction cost.

PMF and BPTF are two Markov Chain Monte Carlo (MCMC) sampling methods. They are based on ALS, but on each step a sample is drawn from the probability distribution to obtain the next state. Prediction in PMF/BPTF is like in ALS, but instead of computing one vector product of the current feature vectors, the whole chain of products is computed and the average is taken. More formally, the prediction rule of PMF is:

r_ui = [ p_u(1) * q_i(1) + p_u(2) * q_i(2) + ... + p_u(l) * q_i(l) ] / l

where l is the length of the chain. Note: typically in MCMC methods the first XX samples of the chain are thrown away, so p_u and q_i will start from XX and not from 1. The prediction rule of BPTF includes a feature vector for each time bin, denoted w:

r_uik = [ p_u(1) * q_i(1) * w_k(1) + p_u(2) * q_i(2) * w_k(2) + ... + p_u(l) * q_i(l) * w_k(l) ] / l

where the product is a tensor product, namely \sum_j p_uj * q_ij * w_kj
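The chain-averaged prediction can be sketched as follows (illustrative names; p_samples and q_samples stand for the stored chain of factor samples for one user and one item):

```python
def mcmc_predict(p_samples, q_samples, burn_in):
    # Discard the first burn_in samples, then average the vector product
    # p_u(t) * q_i(t) over the remaining samples of the chain.
    kept = list(zip(p_samples, q_samples))[burn_in:]
    preds = [sum(a * b for a, b in zip(p, q)) for p, q in kept]
    return sum(preds) / len(preds)
```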

Basic configuration

--pmf_burn_in=XX

Throw away the first XX samples in the chain

--pmf_additional_output=1

Save all the samples in the chain as output (after the burn-in period). Each sample is composed of two feature vectors, each saved to its own file.

Example running PMF

Here we run 10 iterations of PMF, where the first 5 are discarded (pmf_burn_in) and the rest are used for computing the prediction:

Prediction computation in gensgd:

Where f is an index going over all the factors involved, pvec_f is the feature vector of factor f, bias_f is the bias of factor f, and .^2 is an elementwise square. See equation (5) in the libFM paper. (Note that x_i and x_j are all equal to 1 in our implementation.)

Output of gensgd

The output of gensgd are the following files:
1) a matrix of size f x D, where f is the number of feature vectors used and D is the feature vectors width. Generated filename is training_file_name + "_U.mm".
2) a vector of size f x 1, where f is the number of feature vectors which holds the scalar bias for each feature vector. Generated filename is training_file_name + "_bias_U.mm".
3) the global mean. Generated filename is training_file_name + "_global_mean.mm"
4) Mapping file for each feature. For each feature (each column) there is a map between the
feature string name, and the integer id of this feature, in the arrays (1) and (2) above. The mapping files are generated only when using the --rehash=1 option. Generated file names are training_file_name + ".map." + feature_id

CLiMF is a ranking method which optimizes MRR (mean reciprocal rank) which is an information retrieval measure for top-K recommenders. CLiMF is a variant of latent factor CF which optimises a significantly different objective function to most methods: instead of trying to predict ratings CLiMF aims to maximise MRR of relevant items. The MRR is the reciprocal rank of the first relevant item found when unseen items are sorted by score i.e. the MRR is 1.0 if the item with the highest score is a relevant prediction, 0.5 if the first item is not relevant but the second is, and so on. By optimising MRR rather than RMSE or similar measures CLiMF naturally promotes diversity as well as accuracy in the recommendations generated. CLiMF uses stochastic gradient ascent to maximise a smoothed lower bound for the actual MRR. It assumes binary relevance, as in friendship or follow relationships, but the graphchi implementation lets you specify a relevance threshold for ratings so you can run the algorithm on standard CF datasets and have the ratings automatically interpreted as binary preferences.

CLiMF-related command-line options:

--binary_relevance_thresh=xx
Consider the item liked/relevant if its rating is at least this value [default: 0]

--halt_on_mrr_decrease
Halt if the training set objective (smoothed MRR) decreases [default: false]

--num_ratings
Consider this many top predicted items when computing the actual MRR on the validation set [default: 10000]
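The measure being optimized can be made concrete with a few lines (a sketch, not the toolkit's code):

```python
def reciprocal_rank(ranked_items, relevant):
    # 1/rank of the first relevant item in the score-sorted list, 0 if none.
    for rank, item in enumerate(ranked_items, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(rankings, relevant_sets):
    # MRR over all users: the mean of the per-user reciprocal ranks.
    rrs = [reciprocal_rank(r, s) for r, s in zip(rankings, relevant_sets)]
    return sum(rrs) / len(rrs)
```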

Command line arguments

--knn_sample_percent

(optional) A value in (0,1]. When the dataset is big and there are a lot of user/item pairs, it may not be feasible to compute all possible pairs. knn_sample_percent tells the program how many pairs to sample.

--minval

Truncate allowed ratings in range (optional)

--maxval

Truncate allowed ratings in range (optional)

--quiet

Less verbose (optional)

--algorithm

(Mandatory) The type of algorithm output for which the top K ratings are computed. For the rating application the following algorithms are supported: als, sparse_als, nmf, sgd, wals.
For the rating2 application: svd++, biassgd, rbm. For example --algorithm=als

The rating command does not yet support all algorithms. Contact me if you would like to add additional algorithms.

Implicit rating handles the case where we have only positive examples (for example when a user bought a certain product) but we never have indication when a user DID NOT buy another product. The paper [Pan, Yunhong Zhou, Bin Cao, Nathan N. Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. 2008. One-Class Collaborative Filtering. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining (ICDM '08). IEEE Computer Society, Washington, DC, USA, 502-511.] proposes to add negative examples at random for unobserved user/item pairs. Implicit rating is implemented in the collaborative filtering library and can be used with any of the algorithms explained above.

Basic configuration

--implicitratingtype=1

Adds implicit ratings at random

--implicitratingpercentage

A number between 1e-8 and 0.8 which determines the percentage of negative ratings (edges) to add to the sparse model (0 means none, while 1 means a fully dense model).

Alternatively, --implicitratingnumedges

The number of negative ratings (edges) to add.

--implicitratingvalue

The value of the added rating. By default it is zero, but you can change it.

Computing test predictions

It is possible to compute test predictions: namely, entering a list of user/movie pairs and getting a prediction for each pair in the list. To create such a list, create a sparse matrix market format file with a user/movie pair in each row (and for the unknown prediction put a zero or any other number). Here is an example of generating predictions for the user/movie pair list on Netflix data:

Speeding up execution

0) Verify that your program is compiled with the "-O3" compiler flag (should be enabled by default). This gives a significant speedup (for example, x5). Also verify that your program is compiled with the EIGEN_NDEBUG compiler flag (should be enabled by default).

1) If your system has enough memory, you can preload the problem into memory instead of reading it from disk on each iteration. This is done using the --nshards=1 command.

3) You can tune the number of execution threads using the execthreads command. Depending on your machine, a different number of threads may give better results; the rule of thumb is one thread per physical core. Example for setting the number of threads:

./toolkits/collaborative_filtering/als --training=smallnetflix_mm --validation=smallnetflix_mme --lambda=0.065 --minval=1 --maxval=5 --max_iter=6 --quiet=1 execthreads 4

4) You can disable compression by defining the following macro in your program code:

#define GRAPHCHI_DISABLE_COMPRESSION

and recompiling. This will require more disk space but will speed up execution.

It is possible to apply K-fold cross validation to your dataset. This is done with the following two flags:

--kfold_cross_validation=10 enables k-fold cross validation with K=10, and so on.
--kfold_cross_validation_index=3 defines that we are working on the 4th fold (out of 10; indices start from zero).

Notes:
1) Currently supported algorithms for k-fold cross validation are: als, wals, sparse_als, svdpp, nmf, pmf, sgd, biassgd, biassgd2, rbm, timesvdpp, baseline.
2) Selection is done by rows, so when using K=10, index=3, every 4th row out of 10 rows will be excluded from the training set.
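The row-selection rule in note 2) can be sketched as follows (illustrative):

```python
def kfold_rows(num_rows, K, index):
    # Row i is held out for validation when i % K == index;
    # all other rows stay in the training set.
    train = [i for i in range(num_rows) if i % K != index]
    held_out = [i for i in range(num_rows) if i % K == index]
    return train, held_out
```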

Other cost functions

Most of the algorithms compute RMSE by default. We also support the MAP@K metric; you can enable it using the --calc_ap=XX flag. The --ap_number=XX flag defines K.
Note: the assumption is that the dataset has binary values (0/1).
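For reference, the per-user AP@K (which is averaged over users to get MAP@K) can be sketched as:

```python
def ap_at_k(ranked, relevant, K):
    # Average precision at K: precision at each rank where a relevant item
    # appears, averaged over min(|relevant|, K).
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked[:K], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    denom = min(len(relevant), K)
    return score / denom if denom else 0.0
```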

Common errors and their meaning

File not found error:

bickson@thrust:~/graphchi$ ./bin/example_apps/matrix_factorization/als_vertices_inmem file smallnetflix_mm
INFO: sharder.hpp(start_preprocessing:164): Started preprocessing: smallnetflix_mm --> smallnetflix_mm.4B.bin.tmp
ERROR: als.hpp(convert_matrixmarket_for_ALS:153): Could not open file: smallnetflix_mm, error: No such file or directory

Solution:

The input file was not found. Repeat step 5 and verify the file is in the right folder.

ACM KDD CUP 2010 - in this post I explain how to predict student learning abilities using the ACM KDD CUP 2010 dataset. Million songs dataset - in this post I explain how to obtain the winning solution in the million songs dataset contest, using a computation of item-based similarities and their derived recommendations.

Acknowledgements/ Hall of Fame

Deployment of the GraphChi CF toolkit would not have been possible without the great help of data scientists around the world who contributed their efforts to improving my code! Here is a preliminary list; I hope I did not forget anyone...

Are the results in 7) perhaps old? And how can I start WALS? (BTW it's not essential for me to run WALS, I'm interested mostly in working with SVD - but maybe my notes will help you fix some bugs :) )

Hi Aleksandr, your feedback is highly valuable. In fact, I have just added an acknowledgement for you (send me a link to your website if you have one and I will link to it). Regarding 7) - it explains how to read the output of the multiple methods.

I am not sure what you mean about results of the U_matrix? please elaborate.

I am experiencing the same compilation errors that Taras reported earlier. However, updating has failed to resolve the issue. Are there any additional resources which may be necessary? (I'm using a virtual machine running a new install of Ubuntu 12.10.)

Very strange... I just compiled a few minutes ago and everything is smooth. Are you checking out from mercurial? Please check out using hg pull; hg update and then recompile using make clean; make cf. If you still have an error please send me the output.

It is not possible to compute blending or some other ensemble method to join several solutions together for higher accuracy. This is a feature we may add in the future.

It is possible to load factors from a previous run of the same algorithm from disk and continue from the previous run's results in order to refine them. This is done using the --load_factors_from_file=1 flag. See the example in the section "adding fault tolerance".
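A small sketch of the warm-start idea: only pass the flag when factor files from a previous run actually exist on disk. The `_U.mm`/`_V.mm` naming here is an assumption based on the output-file convention described earlier in the post and may differ between algorithms:

```shell
# Hedged sketch: enable warm-start only if U/V factor files from a
# previous run are present (the naming convention is an assumption).
TRAINING=smallnetflix_mm
FLAGS=""
if [ -f "${TRAINING}_U.mm" ] && [ -f "${TRAINING}_V.mm" ]; then
    FLAGS="--load_factors_from_file=1"
fi
echo "extra flags: $FLAGS"
```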

I am expecting that "filename.ids - includes recommended item ids for each user." will contain k recommendations defined by --num_ratings. For example, when --num_ratings=3, then for each customer recommend 3 items. If this is the case, then I am experiencing 10 recommendations every time. Is that a bug?

1. Note: for weighted-ALS, the input file has 4 columns: [user] [item] [weight] [rating]. See example file in section 5b). > I think it is 5e rather than 5b.
2. For weighted-ALS use the rating4 command > There is no rating4 command in /toolkits/collaborative_filtering/
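For reference, a made-up weighted-ALS input sketch: each data row carries the 4 columns [user] [item] [weight] [rating]. The 4-column data rows are a GraphChi-specific extension of the Matrix Market coordinate format, and all values below are illustrative:

```shell
# Hypothetical weighted-ALS input: 4 columns per data row
# [user] [item] [weight] [rating] (toy values).
cat > wals_demo.mm <<'EOF'
%%MatrixMarket matrix coordinate real general
3 3 4
1 1 2.0 5
1 2 1.0 3
2 3 3.0 4
3 1 1.0 2
EOF
# Every data row should have exactly 4 fields:
awk 'NR>2 {print NF}' wals_demo.mm | sort -u
```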

I am working on installing GraphChi on centos vm 6.3 and encountering following error.

als.cpp:214: instantiated from here
../../src/util/ioutil.hpp:157: error: 'deflateInit' was not declared in this scope
../../src/util/ioutil.hpp:174: error: 'deflate' was not declared in this scope
../../src/util/ioutil.hpp:178: error: 'deflateEnd' was not declared in this scope
../../src/util/ioutil.hpp:188: error: 'deflateEnd' was not declared in this scope
make[1]: *** [als] Error 1
make[1]: Leaving directory `/home/romit/graphchi/toolkits/collaborative_filtering'

1) Please try to check out from mercurial - this error should be fixed now. (I assume you downloaded the tgz source file.)
2) If it does not help, please send me the full compilation command line and the full output so I can look at it. But checking out from mercurial should fix it.

There may be more than one correct way to map your problem onto a matrix factorization problem. The weight may be the number of clicks or the frequency of clicks - you should try both and see which works better.

Regarding zeros - it is not recommended to use a zero rating. I suggest trying 1 for viewed and 2 for purchased.

I'm dealing with a one-class problem too and I'm not sure I clearly understood your answer.

From what I understand of [Pan, Yunhong Zhou, Bin Cao, Nathan N. Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. 2008. One-Class Collaborative Filtering.], they add 0s to the matrix, giving them a weight based on a specific measure. In the case of a product, I would say that the inverse of the click count would be a good weight for these zeros (meaning that the more a person clicks on a product, the less likely the 0 is to be true).

Without weights, I would put 2, 1 and 0 for products bought, seen and nothing.

I don't understand why it's not recommended to use a zero rating in this specific case (I think I saw you use -1/1 in a classification problem in one of your examples). Would you use "WALS + 1 for viewed and 2 for purchased"?

Hi Alex, you are of course right; they use zero in their paper. What confused me is that for the zero case you should specify --minval=0 and --maxval=1, namely allowed predictions should be truncated to [0,1] (and not --minval=1 and --maxval=1 as appeared in the question).

The problem with zeros is that you cannot differentiate between a zero which is a missing value and a known zero rating. That is why in many cases we use -1 as a negative rating and 1 as a positive rating.

"(And not --minval=1 and --maxval=1 as appeared in the question)." => are you talking about Burhan's question or Venkata siva's?

"The problem with zeros is that you cannot differentiate between a zero which is a missing value and a known zero rating." => that part is still not clear; maybe there's something I've missed about the graph representation of the data. If I add implicit ratings with value 0 to my one-class data, I'll get a matrix filled with 1 (positive ratings), 0 (negative ratings) and missing values; in my mind I would not get 0s corresponding to missing values. Do you mean that GraphChi will add edges with value 0 corresponding to missing values, in addition to the implicit ratings?

To be clearer, in a one-class problem, if I use this command:
./toolkits/collaborative_filtering/biassgd2 --training=xxx --implicitratingtype=1 --implicitratingvalue=-1 --implicitratingpercentage=0.00001 --minval=-1 --maxval=1
will I get a different result (I mean the rank of products, not the ratings) than with this one:
./toolkits/collaborative_filtering/biassgd2 --training=xxx --implicitratingtype=1 --implicitratingvalue=0 --implicitratingpercentage=0.00001 --minval=0 --maxval=1

In the GraphLab user group you answered Venkata siva on the same subject: "I will add a sanity check that verifies you do not use implicit rating value of 0." So I supposed there is a real problem with using 0-value implicit ratings?!

Hi Alex, so many questions - I am starting to get confused.. :-) Some algorithms can support zero ratings and some others cannot. For example, when you solve a sparse linear system, there is no distinction between a zero coefficient and no coefficient. In ALS it is possible to have zero ratings, and the algorithm tries to minimize the dot product between the matching factors and the zero rating. The same with SGD - it is possible to have a zero rating. My answer to Venkata about an implicit rating value of zero is wrong - I will fix it - since zero is supported in some of the algos.

I have a few queries; can you please advise me on these?
1) At present I have tuned lambda such that RMSE ~ 2. But how can I verify whether the recommendations are correct or not? Is there any way to check correctness (like a confusion matrix)?
2) Also, my requirement is to generate recommendations based on location. Can you please suggest which algorithm will fit this?

There are many ways to evaluate the quality of recommendations. Take a look here for a detailed list: http://bickson.blogspot.co.il/2012/10/the-10-recommender-system-metrics-you.html If you have the location of the user as one of the features, I suggest trying out gensgd: http://bickson.blogspot.co.il/2012/12/collaborative-filtering-3rd-generation_14.html

I have a question about the --implicitratingtype option. Pan et al.'s paper suggests 3 implicit rating types: uniform random, user-oriented, and item-oriented. Am I right in assuming that only the first is implemented so far?

Also, Pan et al.'s paper uses weighting to indicate how credible the training data is. It seems like this is not implemented in ALS itself, but after reading the papers it seems that WALS (by Hu et al.) is a generalization of this weighting, so can I just use WALS instead?

And one more thing: in the WALS paper, prediction is done by simply computing p_u * q_i (the latent user and item factors). Still, here it is stated that we should use r_ui = w_ui * p_u * q_i. Why is this? Is this a result of this specific implementation?

Hi Danny, can you please provide some instructions on how to build this on Windows? I have tried the Java version but it doesn't include this great package (is there a plan to port it to the Java version?)

I used Eclipse and MinGW to build on Windows but got lots of errors. A brief description of the build process for Windows (environment, libraries etc.) could save a lot of debugging time and make this great package available on Windows. If I want to install Linux on my PC (dual boot), what version do you recommend? Sample errors from my build:

'pread' was not declared in this scope - ioutil.hpp lines 44, 62
'pwrite' was not declared in this scope - ioutil.hpp line 89
'random' was not declared in this scope - stripedio.hpp lines 383, 705; graph_objects.hpp line 292
'S_IROTH' / 'S_IWOTH' was not declared in this scope - binary_adjacency_list.hpp line 194, conversions.hpp line 692, graphchi_engine.hpp line 988, ioutil.hpp line 107, sharder.hpp lines 256, 524, 665, 795, 846
too many arguments to function 'int mkdir(const char*)' - sharder.hpp line 654
make: *** [example_apps/connectedcomponents] Error 1

Hi Manu, I am not sure I got your question. How do you compute the predictions? Is this using the "rating" application, or do you read the matrices U and V and compute the required dot products? If you read U and V from file using your own program, please be careful, since we save the matrices in row order and not in column order.

I am performing the following steps:
1) I created the training file with user id, item id and rating on my dataset.
2) I ran SGD on the training file.
3) I then ran the command for computing predictions:
./toolkits/collaborative_filtering/rating --training=hit_mm --num_ratings=10 --quiet=1

Now, the file "hit_mm".ids that is being created has the 10 item recommendations corresponding to each user.

When I repeat steps 2 and 3 on the same training file (no changes are made to it), the "filename.ids" created this time has different item ids for each customer. Thus, the item recommendations generated for the customers are different each time, even though I have not changed the training file data.

I wanted to get the top 10 recommendations for each user, but each time the recommended items are different, even though the SGD algorithm followed by the recommendation command is run on the same dataset.

This can only be the case if there are several items which get exactly the same score - in that case you can get a different ordering of them each time. Can you check the ratings file to verify whether this is the case? If you suspect a bug, please prepare a small training file where this error happens and send me the file along with the exact command line arguments you are using so I can look at it. And please send it to our user mailing list: graphlab-kdd@groups.google.com
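A quick way to check for tied scores might look like this. This is a hypothetical sketch: the file name and two-column "item score" layout are illustrative, not the toolkit's exact output format:

```shell
# Hypothetical sketch: detect items that received exactly the same
# predicted score, which would explain nondeterministic top-N ordering
# (file name and "item score" layout are illustrative).
cat > ratings_demo.txt <<'EOF'
101 4.5
102 4.5
103 3.9
EOF
# Print any score that appears more than once:
awk '{count[$2]++} END {for (s in count) if (count[s] > 1) print "tied score:", s}' ratings_demo.txt
```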

The recommendations are different even when I run the smallnetflix_mm example as explained above. I run the following steps:
1) ./toolkits/collaborative_filtering/baseline --training=smallnetflix_mm --validation=smallnetflix_mm --minval=1 --maxval=5 --quiet=1 --algorithm=user_mean

Hi Manu, thanks for your note. I now get your question. SGD starts at a random state and computes a gradient descent from that state, so each run starts from a different state. Furthermore, running 5 iterations does not get the algorithm into a local minimum (you can see this from the RMSE, which still goes down and does not converge), so it makes sense that each run results in different recommendations.

I assume that once you run the algorithm for enough iterations, it hopefully converges to some significant local minimum, and in that case you will see some repeating recommended items that are more "dominant".

On the other hand, if you run the rating command twice from the same state, you will get the exact same recommendations. You can try it out.

If you use the --load_factors_from_file=1 flag, you can force SGD to start from a specific initial state, and in this case you will most likely get similar results (up to some randomness induced by the parallelization of the computation).

Thank you for the explanation. I tried it after increasing the iterations for the "smallnetflix" example, and I am getting some repeating items for it. In the case of my company's dataset, I am not getting repeating items even after increasing the iterations. I will try forcing SGD to start from a fixed initial state.

I executed my dataset 4 different times with the ALS algorithm, with 100 iterations each. The returned data now contains some common items in each execution. However, the common item being recommended is not a good recommendation. I think this is because the ratings are only a handful and the matrix is very sparse.

Should I try using WALS in highly sparse matrix cases, with weights based on whether an item was viewed, put in the cart, or purchased?

In that case, I recommend not merging the different features into a single scalar (viewed, put in cart, purchased) but using gensgd: http://bickson.blogspot.co.il/2012/12/collaborative-filtering-3rd-generation_14.html with all the different features, for a more accurate prediction.

The strange thing is that if I run it a second time, it works fine. It seems that you are expecting a file that isn't (yet?) available, but the second time around it does find that file (generated by the previous run?).

This is on a mac, the command used is wals with options --minval=0 --maxval=1 --max_iter=2 --quiet=1 --implicitratingtype=1 --D=20 --lambda=0.02

It seems that the local binary cache files we use for GraphChi got garbled. Please remove all intermediate files using the command "rm -fR filename.*", where filename is your training input file, and do the same for your validation input file.

Hi Danny, I tried to apply SVD++ in GraphChi to my own dataset, but got a training RMSE larger than the baseline user-mean algorithm, and most predicted results are 1. So I tested it with the smallnetflix_mm dataset, and I got this:

I also tried different values of max_iter and D, but the results (validation RMSE) I got from ALS were worse than the baseline. I don't know how this happened - have I used the tool incorrectly?

What's the result of the ALS algorithm on the small netflix dataset? Can you show me?

Hi, first of all, when using the baseline method, it seems you are using the training dataset as the validation dataset, and thus you get the same training and validation RMSE. When I run it on the validation set I am getting (for global mean):
$ ./toolkits/collaborative_filtering/baseline --training=smallnetflix_mm --validation=smallnetflix_mme --quiet=1
... 1.72134) Iteration: 0 Training RMSE: 1.0781 Validation RMSE: 1.09003

Second, you are right that ALS overfits the training set: it has a very good training error of 0.624 while the validation error is even worse than the baseline. I suggest trying SGD instead:

It works much better. I don't know why you commented out "// * vertex.num_edges();". According to the paper "Yunhong Zhou, Dennis Wilkinson, Robert Schreiber and Rong Pan. Large-Scale Parallel Collaborative Filtering for the Netflix Prize.", it is indeed needed.

Hi Weiwen, thanks for your comment. I have added an additional flag called --regnormal; by default (1) it adds regularization as described in the paper by Zhou, Wilkinson and Schreiber, and when set to zero it adds lambda as the regularization. This applies to the algorithms als, sparse_als, als_tensor, pmf and wals.

The first one is about zero ratings. My rating matrix is fully observed, but quite sparse, with a lot of zero ratings. In this case, how can I differentiate the zero ratings from non-rated entries? The zero ratings contribute to the RMSE calculation and the objective function minimization.

The second one is about recommendations for a new user. In your examples, the users in the validation and test files are the same as the ones in the training files. How can I predict the ratings of a new user when I have their probe ratings and the trained U, V matrices?

Most of the algorithms, like SGD, ALS etc., can take into account ratings which are zero; in that case they try to force the product of the matching user and item feature vectors to be close to zero. When a rating is not specified, this user-item pair is not taken into account in the computation. So unknown ratings should simply be ignored and not entered into the input file.

An additional approach is the WALS algorithm, where there is a weight for each rating. In that case you can express a confidence level in the rating's correctness, where ratings with larger weights influence the computation more.

The SGD and ALS algorithms are not incremental, so if a new user with known ratings is added, you will need to run the algorithm again. It is possible, however, to store the results of the previous computation and load them using --load_factors_from_file; that way the computation with the new user will start from the best previously computed position. Note that the new user ID will have to be within the matrix range of the previous computation (so you need to assume a bound on the number of new users).

In case there is no information at all about the new user, not much can be done; in that case you can recommend popular items, for example.

Hi Rico, the training and validation formats are fine. My only concern is that user 3 has rated item 1 in the training set and item 2 in the validation set, so there will be no additional items to rate for her, since there are only 2 items. Besides that, the syntax is fine.

The reason is that I want to get the validation RMSE as my evaluation metric to compare different algorithms. That is, I view the ratings in the validation file as the ground truth, and compare the ground truth with the predictions of the algorithm.

Does the ALS-WR method based on the Y. Koren et al. paper "Collaborative Filtering for Implicit Feedback Datasets" differ from WALS as presented here? I'm asking because GraphChi needs two parameters, rating AND weight, while the Mahout implementation of ALS-WR requires only confidence ratings (r_ij according to the article). Probably I missed something, but I want to compare the results with implicit feedback and no numeric ratings (a user 'watched' a TV show 5 times).

I am trying to run genSGD on some of my data with 43 features. The program gives me the following error: FATAL: gensgd.cpp(main:1136): file_columns exceeds the allowed storage limit - please increase FEATURE_WIDTH and recompile.

My question is, how may I increase FEATURE_WIDTH? I searched this page and could not find anything except the "--D" option. I tried it as follows:
./toolkits/collaborative_filtering/gensgd --training=GroupAGraphChi.csv --val_pos=1 --rehash=1 --max_iter=100 --gensgd_mult_dec=0.999999 --minval=0 --quiet=1 --calc_error=1 --file_columns=44 --D 50
But it still does not work. (Moreover, the latent variable width is definitely a different creature from the feature width.)

Hi Wendy, this is rather simple. You need to change line 43 of gensgd.cpp to your new data width + 1. You can view the code here: https://code.google.com/p/graphchi/source/browse/toolkits/collaborative_filtering/gensgd.cpp After this change you must recompile (using make clean; make cf).

Got it! Many thanks! Also, is there a reliable way to get the numbers for the Matrix Market header? This is what I found from a quick search: """If the format was specified as coordinate, then the size line has the form:

m n nonzeros

where
m is the number of rows in the matrix;
n is the number of columns in the matrix;
nonzeros is the number of nonzero entries in the matrix (for general symmetry)."""

But it seems that your construction of the MM header does not follow this guideline.
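For reference, a toy coordinate Matrix Market file whose size line matches the guideline quoted above, with a sanity check that the declared nonzero count equals the number of data rows (all values are made up):

```shell
# Hedged sketch of a minimal coordinate Matrix Market file; the size
# line "m n nonzeros" must match the data rows that follow (toy values).
cat > toy.mm <<'EOF'
%%MatrixMarket matrix coordinate real general
3 2 4
1 1 5.0
1 2 3.0
2 1 4.0
3 2 1.0
EOF
# Sanity check: declared nonzero count vs. actual number of data rows.
declared=$(awk 'NR==2{print $3}' toy.mm)
actual=$(awk 'NR>2' toy.mm | wc -l | tr -d ' ')
echo "declared=$declared actual=$actual"
```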

I've been trying to get the climf algorithm to run but I'm having issues. I am running it on a matrix market file with 629233 users, 2039744 items and 35950755 observations. Here is the MM file specification:

Hi Danny, I am comparing the accuracy of different algorithms and I need a way to get the total RMSE of an algorithm. Is there a command in GraphChi to compute total RMSE? What about getting accuracy results in MAE? Also, I am getting a segmentation fault on MOVIELENS_MM and MOVIELENS_MME when trying to use SGD. Any suggestions?

Hi Sam, you should prepare an additional validation file and pass it to GraphChi using the command line option --validation=filename. The validation RMSE will be printed in each iteration. Please send us the full error you are getting for SGD; verify you are using the latest version from github, and send us the full command line you used.

Thanks for your quick response. I'm new to this field, so excuse my basic questions; your valuable time is highly appreciated. I got familiar with MF and the GraphChi framework on the advice of Tianqi Chen:
1. In ALS the max iteration count is 15, no matter what you choose for max_iter.
2. Regarding RMSE, can we refer to the validation RMSE as the accuracy result of our experiment?
3. What kinds of visualizations (other than RMSE charts) could we use to show the results of our experiments? What tools - Excel charts?
4. In ALS, by adjusting lambda from 1e-4 to something much bigger like 10 or 20, I get better results. Is that OK? Can we infer we increased accuracy by regularizing the parameter weights into a small range?
5. Is there any way to tune the regularization parameters separately for q, p and their biases in SGD?
6. Is there any reference comparing the time and space complexity of the algorithms implemented in GraphChi/GraphLab?
7. In SGD I get the best results with factor width D=13 on Movielens 100k. Can we infer that at lower widths it is harder to distinguish reliable factors, while at higher dimensions there is noise in the U and V matrices?
8. This is my problem, which also happens with some other algos:
sam@sam:~/graphchi$ ./toolkits/collaborative_filtering/baseline --training=smallnetflix_mm --validation=smallnetflix_mm --minval=1 --maxval=5 --quiet=1 --algorithm=user_mean membudget_mmb 20000
WARNING: common.hpp(print_copyright:183): GraphChi Collaborative filtering library is written by Danny Bickson (c). Send any comments or bug reports to danny.bickson@gmail.com
[training] => [smallnetflix_mm]
[validation] => [smallnetflix_mm]
[minval] => [1]
[maxval] => [5]
[quiet] => [1]
[algorithm] => [user_mean]
[feature_width] => [20]
[users] => [95526]
[movies] => [3561]
[training_ratings] => [3298163]
[number_of_threads] => [4]
[membudget_Mb] => [800]
Segmentation fault (core dumped)

Hi Sam!
1) Are you using the latest version of GraphChi from github? max_iter is a variable and should not always be 15.
2) You can use both training and validation RMSE as measures of the success of your CF method.
3) This is a good question - there is no standard way. I suggest registering for our beta at http://beta.graphlab.com to learn more about visualization and the applicability of GraphLab.
4) It depends on the matrix and the values inside it. Probably you have big values.
5) You need to do it manually. There is no automated method yet.
6) There are many related papers in this domain. See the first part of this blog post.
7) You should try different widths; you will get different results for each dataset.
8) There is still not enough information to debug this. Please also send your OS type. Are you working in a virtual box under Windows? Note that validation and training cannot use the same file name. Verify you have the latest from github. You can compile in debug mode using "make clean; make cfd" and then run:
gdb ./toolkits/collaborative_filtering/baseline
run --training=smallnetflix_mm --validation=smallnetflix_mm --minval=1 --maxval=5 --quiet=1 --algorithm=user_mean membudget_mmb 20000
==> send me the full output of the failure, including the output of the command "where" when it fails.

Please do the following: compile in debug mode using "make clean; make cfd" and then run:
gdb ./toolkits/collaborative_filtering/svdpp
run --training=smallnetflix_mm --validation=smallnetflix_mme --biassgd_lambda=1e-4 --biassgd_gamma=1e-4 --minval=1 --maxval=5 --max_iter=6 --quiet=1
==> send me the full output of the failure, including the output of the command "where" when it fails.

Also, please advise me: I used MOVIELENS 10M from your provided datasets, tried SGD and ALS, and tuned both algorithms' parameters (assuming that lambda is independent of gamma and the matrix width). SGD converges at iteration 96 with a validation RMSE of 0.898646 (lambda 0.1, gamma 0.01, width 13), while ALS reaches a validation RMSE of 0.8589 (lambda 10, width 14) at iteration 14. What is the problem? I thought that SGD should give better results. As I said, I treated the parameters as independent; for example, I kept lambda and gamma constant and checked different widths, and likewise for lambda and gamma - I kept one constant and found the best value of the other.

Hi Sam, SGD is not guaranteed to give better results than ALS - it all depends on the dataset properties, so it is always advised to try both. Regarding the seg fault: I can't reproduce this error - in particular, rmse.hpp:206 does not exist in my code. Are you using the latest version from github?

I have a question that has been on my mind for quite some time: GraphLab and GraphChi take the data and convert it to a graph structure. This is the main idea in graph-based social network analysis, which treats users or items (or both) as vertices and their relations as edges (in the RS Handbook, 2 such approaches are mentioned to overcome neighborhood-based issues). But in matrix factorization approaches we don't need to define any graphs. Why do we need to model the data as a graph for computing MF models?

Hi Sam, viewing a problem as a graph is only a way of thinking about the problem, and it sometimes has benefits. There is a correspondence between a graph and a sparse matrix, so the two representations are interchangeable.

Dear Danny, I am thankful for your wonderful support. Could you please advise whether the SGD implementation in GraphChi includes the confidence Koren mentions in his paper? In the paper you cited for the implementation of SGD, Koren has a section "Inputs with Varying Confidence Levels" and accounts for c_ui as confidence. My guess is that this implementation does not include confidence and I should use W-ALS. Also, I would have to compute the confidence separately; there is no option for GraphChi to compute it based on the number of occurrences, right? Any advice?

Will 'rating --model=ALS' compute the top-N for CLiMF as well? I feel the calculation should be the same as for the other matrix factorisation algorithms, i.e., you're still just trying to find, for each row of U, the columns of V with the largest dot product (since sigmoid(x) is a strictly increasing function of x, the dot products correspond to the sigmoid of the predicted ranking, and we are trying to maximise this quantity).

That's a great suggestion! I have just added CLiMF support to the rating utility. Please pull from git, recompile, and let me know if it works for you. (Note that CLiMF computes an MRR estimate and not a rating between 1-5.)

Yes. You can run once, and then when new data comes you can use the command line arguments --load_factors_from_file=1 which will load the saved model. You will also want to use the --test=filename to point to the new data you want to predict on.

It seems you got into numerical errors. Try to use the --minval=XX and --maxval=XX command line arguments to limit the range of predicted values. If you like to send me a small dataset where this error happens I will be happy to take a look.

Hi Danny, thank you so much for your support. But I have one question: could you please advise me how to make it run faster? You said "for speedup, verify that your program is compiled using the "-O3" or EIGEN_NDEBUG compiler flag." I would like more detailed instructions for running your codes (ALS/WALS/SVD/RBM/PMF). I would be very thankful for a response. Best regards, Yuna.

I suggest verifying that the macro #define GRAPHCHI_DISABLE_COMPRESSION is defined. It should be defined at the top of the .cpp file, before all the includes (for example in als.cpp). Then you need to run make clean and make cf. This will give a speedup of about 2x.
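The ordering matters: the macro only takes effect if it appears before the first #include. A small self-contained check (on a demo file, not the real source) illustrates the idea:

```shell
# Hedged illustration on a demo file (not the real als.cpp): the
# compression-disable macro must precede the first #include.
cat > demo.cpp <<'EOF'
#define GRAPHCHI_DISABLE_COMPRESSION
#include "graphchi_basic_includes.hpp"
EOF
macro_line=$(grep -n 'GRAPHCHI_DISABLE_COMPRESSION' demo.cpp | cut -d: -f1)
include_line=$(grep -n '#include' demo.cpp | head -n1 | cut -d: -f1)
if [ "$macro_line" -lt "$include_line" ]; then
    echo "macro precedes includes - OK, now run: make clean; make cf"
fi
```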

Hi Danny, thank you very much for your support. I have run the prediction of the rbm algorithm, but it turns out that for almost all users the algorithm recommends items 1801 and 2964. I don't think this is a satisfying result.

I am trying to run sgd, svdpp and timesvdpp on the movielens data, but I can't achieve the performance mentioned in "Matrix Factorization Techniques for Recommender Systems". I have tried many initial parameters. For SGD I am running:
./toolkits/collaborative_filtering/sgd --training=userMovielens --kfold_cross_validation=10 --kfold_cross_validation_index=3 --sgd_lambda=0.001 --sgd_gamma=0.03 --minval=1 --maxval=5 --max_iter=600 --quiet=1 --sgd_step_dec=0.9 --D=65
Can you please point out what I am doing wrong?

D is always the latent feature vector width (as in all methods). The multiplicative step decrement is how much you decrease the SGD step size: the default is 0.9, i.e., the step size is multiplied by 0.9 after each iteration.
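A worked illustration of the decay: with an initial step gamma and decrement factor 0.9, the step after k iterations is gamma * 0.9^k (gamma=0.03 is just an example value, matching the --sgd_gamma above):

```shell
# Worked illustration of multiplicative step decay for --sgd_step_dec:
# step(k) = gamma * dec^k (gamma=0.03 and dec=0.9 are example values).
awk 'BEGIN { gamma = 0.03; dec = 0.9;
  for (k = 0; k <= 3; k++) printf "iter %d: step %.5f\n", k, gamma * dec^k }'
```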

About Me

6 years ago, along with my collaborators at Carnegie Mellon University, I started the GraphLab large-scale open source project, a framework for implementing machine learning algorithms in parallel and distributed settings. When the project became popular, we decided to raise money to expand the project and provide an industry-grade solution.
Specifically, I wrote the award-winning collaborative filtering toolkit for GraphLab, which is widely deployed today and helped us win top places at ACM KDD CUP 2011 and ACM KDD CUP 2012, among other competitions.
Check out our website: http://dato.com