Thursday, March 17, 2011

Which of the sqldf, plyr, doBy, and aggregate packages/functions is fastest for applying functions to groups of rows? I wondered about this in an earlier post. According to a post on the manipulatr mailing list, sqldf appears to be the fastest.

Well, here is the ranking from the fastest to the slowest:

(See the post linked above for the details of the comparison.)

The downside of sqldf is that only SQL functions can be applied to groups, whereas with the other options you can apply arbitrary R functions to groups.
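To illustrate the difference (a minimal sketch; the data frame and column names here are made up for illustration, not taken from the benchmark):

```r
library(sqldf)

# Toy data: three groups of four values each
d <- data.frame(g = rep(letters[1:3], each = 4), x = 1:12)

# sqldf: restricted to SQL aggregates such as avg(), sum(), count()
sqldf("select g, avg(x) from d group by g")

# aggregate: any R function can be applied per group,
# e.g. the range width of x within each group
aggregate(x ~ g, data = d, FUN = function(v) diff(range(v)))
```
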

EDIT: I repeated the experiment on my own computer, this time including the data.table package as well. The new experiment shows that sqldf is still the fastest, with data.table now ranking second. However, I have been warned that the mean() function has a large overhead; if I use .Internal(mean(x)) instead of mean(x) for the group operations, data.table is the fastest! Scroll down for another comparison of the grouping functions, this time using the sum() function, which does not seem to have a large overhead.

The code is below. I mostly copy-pasted it from the manipulatr post, except for the data.table part and the plot.
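For readers who cannot follow the link, the benchmark looks roughly like this (a sketch only: the data size, group count, and column names are my assumptions, not the exact values from the manipulatr post):

```r
library(sqldf)
library(plyr)
library(doBy)
library(data.table)

set.seed(1)
n <- 100000
d <- data.frame(grp = sample(1:1000, n, replace = TRUE), x = rnorm(n))

# Note: this conversion is part of data.table's cost and
# arguably should be included in its timing.
DT <- data.table(d, key = "grp")

system.time(sqldf("select grp, avg(x) from d group by grp"))
system.time(ddply(d, "grp", summarise, m = mean(x)))
system.time(summaryBy(x ~ grp, data = d, FUN = mean))
system.time(aggregate(x ~ grp, data = d, FUN = mean))
system.time(DT[, mean(x), by = grp])
# DT[, .Internal(mean(x)), by = grp]  # avoids mean()'s dispatch overhead
```
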

EDIT: A less biased way to measure this is to use the sum() function on groups. When I do that, data.table comes first!
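The sum() variant of the comparison can be sketched as follows (again with assumed data sizes and column names):

```r
library(data.table)

set.seed(1)
n <- 100000
d <- data.frame(grp = sample(1:1000, n, replace = TRUE), x = rnorm(n))
DT <- data.table(d, key = "grp")

# sum() is a cheap primitive, so the timing mostly reflects
# the grouping machinery rather than the per-group function.
system.time(r1 <- aggregate(x ~ grp, data = d, FUN = sum))
system.time(r2 <- DT[, sum(x), by = grp])
```
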

(1) The calculation of DT from d was left out of the timing, so the true data.table time is larger than shown. (2) The claim that using sum() is less biased seems strange: both summation and averaging are common operations that represent real use cases. Trying different variations of a benchmark until you find one whose result you like is data dredging. (3) The fact that minor changes to the benchmark can change the ranking shows that the conclusions of any one benchmark cannot be relied upon. (4) Other criteria that might be considered, but were not mentioned, are ease of specification and whether a method can handle intermediate results larger than memory.