data.table vs dplyr: can one do something well the other can’t or does poorly?


Overview

I’m relatively familiar with data.table, not so much with dplyr. I’ve read through some dplyr vignettes and examples that have popped up on SO, and so far my conclusions are that:

data.table and dplyr are comparable in speed, except when there are many (i.e. >10-100K) groups, and in some other circumstances (see benchmarks below)

dplyr has more accessible syntax

dplyr abstracts (or will) potential DB interactions

There are some minor functionality differences (see “Examples/Usage” below)

In my mind 2. doesn’t bear much weight because I am fairly familiar with data.table, though I understand that for users new to both it will be a big factor. I would like to avoid an argument about which is more intuitive, as that is irrelevant for my specific question asked from the perspective of someone already familiar with data.table. I also would like to avoid a discussion about how “more intuitive” leads to faster analysis (certainly true, but again, not what I’m most interested in here).

Question

What I want to know is:

Are there analytical tasks that are a lot easier to code with one or the other package for people familiar with the packages (i.e. some combination of keystrokes required vs. level of esotericism required, where less of each is a good thing)?

Are there analytical tasks that are performed substantially (i.e. more than 2x) more efficiently in one package vs. the other?

One recent SO question got me thinking about this a bit more, because up until that point I didn’t think dplyr would offer much beyond what I can already do in data.table. The dplyr solution there (data at end of Q; code not reproduced here) was much better than my hack attempt at a data.table solution.

That said, good data.table solutions are also pretty good (thanks Jean-Robert and Arun; note that here I favored a single statement over the strictly most optimal solution).

The syntax for the latter may seem very esoteric, but it is actually pretty straightforward if you’re used to data.table (i.e. it doesn’t use some of the more esoteric tricks).

Ideally what I’d like to see is some good examples where the dplyr or data.table way is substantially more concise or performs substantially better.

Examples

Usage

dplyr does not allow grouped operations that return an arbitrary number of rows (from eddi’s question; note: this looks like it will be implemented in dplyr 0.5; also, @beginneR shows a potential work-around using do in the answer to @eddi’s question).

data.table internally optimises expressions of the form DT[col == value] or DT[col %in% values] for speed through automatic indexing which uses binary search while using the same base R syntax. See here for some more details and a tiny benchmark.
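As a rough sketch (hypothetical data; the exact benchmark from the linked post is not reproduced here), the point is that plain base-R-style subsetting expressions are enough to trigger the optimisation:

```r
library(data.table)

DT <- data.table(id = sample(1e5L, 1e6L, replace = TRUE), val = rnorm(1e6))

# The first subset of this form builds an index on 'id' as a side effect;
# subsequent subsets on 'id' use binary search instead of a full vector scan.
res1 <- DT[id == 42L]
res2 <- DT[id %in% c(1L, 2L, 3L)]
```

No new syntax is needed; the same expressions that work on a data.frame are simply faster on a data.table.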

dplyr offers standard evaluation versions of functions (e.g. regroup, summarize_each_) that can simplify programmatic use of dplyr (note: programmatic use of data.table is definitely possible, it just requires some careful thought, substitution/quoting, etc., at least to my knowledge).
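For instance (toy data; column names are hypothetical), data.table can be parameterised with get() and a character by, though it takes a little more care than dplyr’s standard-evaluation verbs:

```r
library(data.table)

DT <- data.table(g = c("a", "a", "b"), x = 1:3)

col <- "x"   # column to aggregate, known only at run time
grp <- "g"   # grouping column, known only at run time

# get() resolves the column name inside j; 'by' accepts a character vector
res <- DT[, .(total = sum(get(col))), by = grp]
```

More complex cases typically need substitute()/quote() tricks, which is the “careful thought” mentioned above.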

Benchmarks

I ran my own benchmarks and found both packages to be comparable in “split apply combine” style analysis, except when there are very large numbers of groups (>100K) at which point data.table becomes substantially faster.

@Arun ran some benchmarks on joins, showing that data.table scales better than dplyr as the number of groups increases (updated with recent enhancements in both packages and a recent version of R). Also, a benchmark on getting unique values shows data.table ~6x faster.

One (unverified) benchmark has data.table 75% faster on larger versions of a group/apply/sort task, while dplyr was 40% faster on the smaller ones (another SO question from the comments; thanks danas).

Answer

We need to cover at least these aspects to provide a comprehensive answer/comparison (in no particular order of importance): Speed, Memory usage, Syntax and Features.

My intent is to cover each one of these as clearly as possible from data.table perspective.

Note: unless explicitly mentioned otherwise, by referring to dplyr, we refer to dplyr’s data.frame interface whose internals are in C++ using Rcpp.

The data.table syntax is consistent in its form – DT[i, j, by]. Keeping i, j and by together is by design. By keeping related operations together, it allows operations to be easily optimised for speed and, more importantly, memory usage, and also provides some powerful features, all while maintaining the consistency of the syntax.

1. Speed

Quite a few benchmarks (though mostly on grouping operations) have been added to the question already, showing that data.table gets faster than dplyr as the number of groups and/or rows to group increases. These include benchmarks by Matt on grouping from 10 million to 2 billion rows (100GB in RAM) over 100 to 10 million groups with varying grouping columns, which also compare pandas.

On benchmarks, it would be great to cover these remaining aspects as well:

2. Memory usage

One concern here is referential transparency. Updating a data.table object by reference, especially within a function, may not always be desirable. But this is an incredibly useful feature: see this post and this one for interesting cases. And we want to keep it.

Therefore we are working towards exporting a shallow() function in data.table that will provide the user with both possibilities. For example, if it is desirable to not modify the input data.table within a function, one can then do:
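Since shallow() is not exported yet, the following is only a sketch of the intended usage (the function name comes from the text above; the exact API may differ):

```r
foo <- function(DT) {
  DT <- shallow(DT)   # shallow copy: signals "don't modify the caller's object"
  DT[, newcol := 1L]  # the touched column is copied only when necessary;
  DT                  # the original data.table is left untouched
}
```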

By creating a shallow copy using shallow(), we understand that you don’t want to modify the original object. We take care of everything internally to ensure that, while also ensuring that columns you modify are copied only when it is absolutely necessary. When implemented, this should settle the referential transparency issue altogether while providing the user with both possibilities.

Also, once shallow() is exported dplyr’s data.table interface should avoid almost all copies. So those who prefer dplyr’s syntax can use it with data.tables.

But it will still lack many features that data.table provides, including (sub)-assignment by reference.

This adds/updates DT1’s column col with mul from DT2 on those rows where DT2’s key column matches DT1’s. I don’t think there is an exact equivalent of this operation in dplyr without using a *_join operation, which would have to copy the entire DT1 just to add a new column to it, which is unnecessary.
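A minimal sketch (toy tables) of this sub-assign-while-joining form:

```r
library(data.table)

DT1 <- data.table(id = 1:4, x = 11:14, key = "id")
DT2 <- data.table(id = c(2L, 4L), mul = c(10L, 100L), key = "id")

# Join on the key and add/update 'col' by reference: only the matching
# rows of DT1 get a value; DT1 itself is never copied.
DT1[DT2, col := i.mul]
```

After this, DT1 has a new col column with values only on the rows matched by DT2 (NA elsewhere), and no intermediate join result was ever materialised.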

3. Syntax

data.table syntax is compact and dplyr’s quite verbose. Things are more or less equivalent in case (a).

In case (b), we had to use filter() in dplyr while summarising. But while updating, we had to move the logic inside mutate(). In data.table, however, we express both operations with the same logic: operate on rows where x > 2, but in the first case get sum(y), whereas in the second case update those rows of y with its cumulative sum.
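A sketch of case (b) with toy data (the data used in the original examples is not reproduced here):

```r
library(data.table)

DT <- data.table(g = c(1L, 1L, 2L, 2L), x = 1:4, y = 5:8)

# Aggregate: sum(y) over rows where x > 2, by group
agg <- DT[x > 2, .(sy = sum(y)), by = g]

# Update: on those same rows, replace y with its cumulative sum
DT[x > 2, y := cumsum(y), by = g]
```

Both statements share the same i and by; only j changes between aggregating and updating.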

This is what we mean when we say the DT[i, j, by] form is consistent.

Similarly in case (c), when we have an if-else condition, we are able to express the logic “as-is” in both data.table and dplyr. However, if we would like to return just those rows where the if condition is satisfied and skip otherwise, we cannot use summarise() directly (AFAICT). We have to filter() first and then summarise, because summarise() always expects a single value.

While it returns the same result, using filter() here makes the actual operation less obvious.

It might very well be possible to use filter() in the first case as well (does not seem obvious to me), but my point is that we should not have to.

In case (c) though, dplyr would return n() once for each column, instead of just once. In data.table, all we need to do is return a list in j; each element of the list becomes a column in the result. So we can use, once again, the familiar base function c() to concatenate .N to a list, which returns a list.

Note: you can use c(), as.list(), lapply(), list() etc. base functions to accomplish this, without having to learn any new functions.
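A toy illustration (hypothetical data) of returning a list in j:

```r
library(data.table)

DT <- data.table(g = c("a", "a", "b"), x = 1:3, y = 4:6)

# .N is computed once per group; each element of the returned list
# becomes a column in the result
res <- DT[, c(list(N = .N), lapply(.SD, sum)), by = g]
```

The result has one row per group with columns g, N, and the per-group sums of x and y, built entirely from base functions c(), list() and lapply().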

You will need to learn just the special variables – .N and .SD at least. The equivalents in dplyr are n() and . (the placeholder used with do()).

Joins

dplyr provides separate functions for each type of join, whereas data.table allows joins using the same DT[i, j, by] syntax (and with reason). It also provides an equivalent merge.data.table() function as an alternative.

Some might find a separate function for each join type much nicer (left, right, inner, anti, semi, etc.), whereas others might like data.table’s DT[i, j, by], or merge() which is similar to base R.

However dplyr joins do just that. Nothing more. Nothing less.

data.tables can select columns while joining (2); in dplyr you need to select() on both data.frames first before joining, as shown above. Otherwise you would materialise the join with unnecessary columns only to remove them later, and that is inefficient.

data.tables can aggregate while joining (3) and also update while joining (4), using the by = .EACHI feature. Why materialise the entire join result just to add/update a few columns?
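A sketch (toy tables) of aggregating while joining with by = .EACHI:

```r
library(data.table)

sales <- data.table(id = c(1L, 1L, 2L, 2L, 2L), amt = c(10, 20, 5, 5, 5))
ids   <- data.table(id = 1:2)

# For each row of 'ids', aggregate the matching rows of 'sales' directly,
# without materialising the full join result first
res <- sales[ids, .(total = sum(amt)), on = "id", by = .EACHI]
```

The j expression is evaluated per matched group, so the intermediate joined table never exists in memory.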

Once again, the syntax is consistent with DT[i, j, by] with additional arguments allowing for controlling the output further.

do()…

dplyr’s summarise() is designed specifically for functions that return a single value. If your function returns multiple/unequal values, you will have to resort to do(). You have to know beforehand what all your functions will return.

In data.table, you can throw pretty much anything in j – the only thing to remember is for it to return a list so that each element of the list gets converted to a column.
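For example (toy data), a grouped expression that returns two rows per group needs no special machinery:

```r
library(data.table)

DT <- data.table(g = c("a", "a", "b"), x = c(3, 1, 2))

# j returns a list of two length-2 vectors -> two rows per group
res <- DT[, .(stat = c("min", "max"), value = range(x)), by = g]
```

The same j slot accepts single values, vectors, or whole tables per group; data.table just binds whatever list comes back.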

In dplyr, you cannot do that. You have to resort to do(), depending on how sure you are as to whether your function will always return a single value. And it is quite slow.

Once again, data.table’s syntax is consistent with DT[i, j, by]. We can just keep throwing expressions in j without having to worry about these things.

Have a look at this SO question and this one. I wonder if it would be possible to express the answer as straightforward using dplyr’s syntax…

To summarise, I have particularly highlighted several instances where dplyr’s syntax is either inefficient, limited or fails to make operations straightforward. This is particularly because data.table gets quite a bit of backlash about “harder to read/learn” syntax (like the one pasted/linked above). Most posts that cover dplyr talk about the most straightforward operations. And that is great. But it is important to realise its syntax and feature limitations as well, and I have yet to see a post about them.

data.table has its quirks as well (some of which I have pointed out that we are attempting to fix). We are also attempting to improve data.table’s joins as I have highlighted here.

But one should also consider the number of features that dplyr lacks in comparison to data.table.

4. Features

I have pointed out most of the features here and also in this post. In addition:

fread – a fast file reader – has been available for a long time now.

fwrite – NEW in the current devel, v1.9.7, a parallelised fast file writer is now available. See this post for a detailed explanation on the implementation and #1664 for keeping track of further developments.
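A minimal round trip (temporary file; fwrite needs v1.9.7+ as noted above):

```r
library(data.table)

tmp <- tempfile(fileext = ".csv")
fwrite(data.table(a = 1:3, b = c("x", "y", "z")), tmp)  # fast parallel writer
DT <- fread(tmp)                                        # fast reader
unlink(tmp)
```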

data.table provides faster equivalents of set operations from v1.9.7+ (written by Jan Gorecki) – fsetdiff, fintersect, funion and fsetequal with additional all argument (as in SQL).
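A quick sketch of the set operations (toy tables; requires v1.9.7+ as noted above):

```r
library(data.table)

x <- data.table(a = c(1L, 1L, 2L, 3L))
y <- data.table(a = c(2L, 2L, 4L))

u <- funion(x, y)      # distinct rows from either table (SQL UNION)
i <- fintersect(x, y)  # rows present in both
d <- fsetdiff(x, y)    # rows of x not in y
```

With all = TRUE these switch to SQL’s bag semantics (UNION ALL etc.), keeping duplicates.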

data.table loads cleanly with no masking warnings and has a mechanism described here for [.data.frame compatibility when passed to any R package. dplyr changes base functions filter, lag and [ which can cause problems; e.g. here and here.

Finally:

On databases – there is no reason why data.table cannot provide a similar interface, but this is not a priority now. It might get bumped up if users would very much like that feature – not sure.

On parallelism – Everything is difficult, until someone goes ahead and does it. Of course it will take effort (being thread safe).

Progress is being made currently (in v1.9.7 devel) towards parallelising known time consuming parts for incremental performance gains using OpenMP.
