Organize your data manipulation in terms of “grouped ordered apply”

Consider the following common problem: computing per-group ranks for a data set (say the infamous iris example data set). Suppose we want the rank of iris Sepal.Length values on a per-Species basis. Frankly this is an “ugh” problem for many analysts: it involves grouping, ordering, and window functions all at the same time. It is also likely never the analyst’s end goal, but a sub-step needed to transform data on the way to the prediction, modeling, analysis, or presentation they actually wish to get back to.

In our previous article in this series we discussed the general ideas of “row-ID independent data manipulation” and “Split-Apply-Combine”. Here, continuing with our example, we will specialize to a data analysis pattern I call: “Grouped-Ordered-Apply”.

The example

Let’s start (as always) with our data. We are going to look at the iris data set in R. You can view the data by typing the following in your R console:

data(iris)
View(iris)

The package dplyr makes the grouped calculation quite easy. We define our “window function” (function we want applied to sub-groups of data in a given order) and then use dplyr to apply the function to grouped and arranged data:
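One plausible sketch of that pipeline (rank_in_group is the name this article uses for the window function; the cumulative-sum trick is one possible implementation, not the only one):

```r
library(dplyr)

# Window function: assign within-group ranks by cumulative count.
# Assumes the data arrives already grouped and arranged in the desired order.
rank_in_group <- . %>%
  mutate(constcol = 1) %>%
  mutate(rank = cumsum(constcol)) %>%
  select(-constcol)

ranked <- iris %>%
  group_by(Species) %>%
  arrange(desc(Sepal.Length)) %>%
  rank_in_group
```

Because mutate is grouping aware, the cumulative sum restarts for each Species, yielding ranks 1 through 50 within each group.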

Some possible confusion

The above works well, because all the operators we used were “grouping aware.” I think all dplyr operations are “grouping aware”, but some of the “in the street” tactics of “working with or around dplyr” may not be. For example slice is part of dplyr and grouping aware:
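For instance, on our running example slice respects the grouping annotation (a sketch; the top row per group is each Species’ largest Sepal.Length after the global descending sort):

```r
library(dplyr)

# slice() is grouping aware: returns one row per Species,
# the first row of each group in the current (sorted) order.
topPerGroup <- iris %>%
  group_by(Species) %>%
  arrange(desc(Sepal.Length)) %>%
  slice(1)
```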

head, on the other hand, is not grouping aware; it treats the grouped data as ordinary rows. Thus head “is not part of dplyr” even when it is likely a dplyr adapter supplying the actual implementation.
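A sketch of the contrast: on the same grouped, sorted data head returns only the first rows overall, not the first rows of each group.

```r
library(dplyr)

# head() ignores the grouping annotation: it returns the first 6 rows
# of the globally sorted data, not 6 rows per Species.
topRows <- iris %>%
  group_by(Species) %>%
  arrange(desc(Sepal.Length)) %>%
  head()
```

The six largest Sepal.Length values all come from a single Species, so the result does not contain one group per Species at all.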

This can be very confusing to new analysts. We are seeing changes in semantics of downstream operators based on a data annotation (the “Groups:”). To the analyst grouping and ordering probably have equal stature. In dplyr grouping comes first, has a visible annotation, is durable, and changes the semantics of downstream operators. In dplyr ordering has no annotation, is not durable (it is quietly lost by many operators such as dplyr::compute and dplyr::collapse, though this is possibly changing), and can’t be stored (as it isn’t a concept in many back-ends such as relational databases).

It is hard for new analysts to trust that in dplyr the data iris %>% group_by(Species) %>% arrange(desc(Sepal.Length)) is both grouped and ordered. As we see below, order is in the presentation (and not annotated), while grouping is annotated (but not in the presentation):
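A small sketch of how to inspect both properties:

```r
library(dplyr)

d <- iris %>%
  group_by(Species) %>%
  arrange(desc(Sepal.Length))

print(d)   # rows appear in overall Sepal.Length order; the header carries the "Groups:" annotation
groups(d)  # explicitly confirms the grouping
```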

Notice the apparent mixing of the groups in the presentation. That is part of why there is a visible Groups: annotation: the grouping cannot be inferred from the data presentation.

My opinion

Frankly it can be hard to document and verify which dplyr pipelines are maintaining the semantics you intended. We have every reason to believe the following is both grouped and ordered:

iris %>% group_by(Species) %>% arrange(desc(Sepal.Length))

It is ordered as dplyr::arrange is the last step and we can verify the grouping is present with dplyr::groups().

We have less reason to trust the following is also grouped and ordered (especially in remote databases or Spark):

iris %>% arrange(desc(Sepal.Length)) %>% group_by(Species)

The above may be simultaneously grouped and ordered (i.e. have not lost the order), but for reasons of “trust, but verify” it would be nice to have a user-visible annotation certifying that. Remember, explicitly verifying the order requires the use of a window-function (such as lag) so verifying order by hand isn’t always a convenient option.
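A sketch of such a by-hand order check using lag on our running example (one way to do it, not the only one):

```r
library(dplyr)

# Verify Sepal.Length is non-increasing within each Species group,
# using the window function lag() (grouping aware inside mutate).
orderCheck <- iris %>%
  group_by(Species) %>%
  arrange(desc(Sepal.Length)) %>%
  mutate(prev = lag(Sepal.Length)) %>%
  summarize(orderedOK = all(is.na(prev) | prev >= Sepal.Length))
```

This works, but having to write a window-function check for every pipeline is exactly the inconvenience described above.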

We need to put some of the dplyr machinery in a housing to keep our fingers from getting into the gears.

Essentially this is saying: wrap Hadley Wickham’s “The Split-Apply-Combine Strategy for Data Analysis” (link) concept into a single atomic operation with the following semantics: partition the data by the grouping column, order each partition by the ordering column, apply the user-supplied function to each partition, and combine the results back into a single result.

By wrapping the pipeline into a single “Grouped-Ordered-Apply” operation we deliberately make the intermediate results not visible. This is exactly what is needed to stop depending on the details of how partitioning is enforced (be it by a grouping annotation, or by an actual split) and to stop worrying about the order of the internal operations.

A Suggested Solution

Our new package replyr supplies the “Grouped-Ordered-Apply” operation as replyr::gapply (itself built on top of dplyr). It performs the above grouped/ordered calculation as follows:
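A sketch of the call (argument names per the replyr documentation; rank_in_group is the same cumulative-count window function, repeated here so the example is self-contained):

```r
library(dplyr)
library(replyr)

# Window function: assigns ranks 1..n by cumulative count,
# assuming the data it sees is already ordered as requested.
rank_in_group <- . %>%
  mutate(constcol = 1) %>%
  mutate(rank = cumsum(constcol)) %>%
  select(-constcol)

res <- iris %>%
  replyr::gapply(gcolumn = 'Species',
                 f = rank_in_group,
                 ocolumn = 'Sepal.Length',
                 decreasing = TRUE)
```

gapply itself performs the grouping by gcolumn and the ordering by ocolumn, so the user function no longer has to trust upstream pipeline state.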

replyr::gapply can use either a split-based strategy or a dplyr::group_by_-based strategy for the calculation. Notice replyr::gapply’s preference for “parametric treatment of variables”: replyr::gapply anticipates that the analyst may not know the names of columns or variables when writing their code, but may in fact need to take names as values stored in other variables. Essentially we are preferring the dplyr::*_ forms. The rank_in_group function uses dplyr’s preferred non-standard evaluation, which assumes the analyst knows the names of the columns they are manipulating; that is appropriate for transient user code.

Now that we have the rank annotations present we can try to confirm they are in fact correct (i.e. that the implementation maintained grouping and ranking throughout). The calculation is detailed (checking ranks are unique per-group, integers in the range 1 to group-size, and order compatible with the value column Sepal.Length), so we have wrapped the calculation in replyr:
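A sketch of such a checking function (the name is ours; it assumes it sees exactly one group at a time, carrying a rank column and the value column Sepal.Length):

```r
library(dplyr)

# Per-group check: only correct when d contains a single group.
checkRanksInGroup <- function(d) {
  d <- arrange(d, rank)
  data.frame(
    ranksAreOneToN  = all(d$rank == seq_len(nrow(d))),  # unique ranks, 1..group-size
    orderCompatible = all(diff(d$Sepal.Length) <= 0)    # rank order matches decreasing value
  )
}
```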

For simplicity we wrote the primary checking function in terms of operations that happen to be correct only when there is a single group present (i.e. the function needs formal splitting and isolation, not just dplyr::group_by). This isn't a problem, as we can then use replyr::gapply(partitionMethod='split') to correctly apply such code to all groups in turn.

Notice the Split-Apply-Combine steps are all wrapped together and supplied as part of the service; the user only supplies (column1,column2,f()). The transient lifetime and limited visibility of the sub-stages of the wrapped calculation are the appropriate abstractions given the fragility of row-order in modern data stores. The user doesn't care if the data is actually split and ordered, as long as it is presented to their function as if it were so structured. We are using the Split-Apply-Combine pattern, but abstracting out whether it is actually implemented by formal splitting (ameliorating the differences between base::split, tidyr::nest and SQL GROUP BY ... ORDER BY ...). There are benefits in isolating the user-visible semantics from the details of realization.

Much can be written in terms of this pattern including grouped ranking problems, dplyr::summarize, and more. And this is precisely the semantics of gapply (grouped ordered apply) found in replyr.

Additional examples

An advantage of using the general notation as above is that dplyr has implementations that work on large remote data services such as databases and Spark.

For these additional examples we continue to suppose our goal is to compute the rank of Sepal.Length for irises grouped by Species. For example, here is the "rank within group" calculation performed in PostgreSQL (assuming you have such a database up, and using your own user/password).
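A sketch of the database version; the connection details below are placeholders, and src_postgres/copy_to are the dplyr-era remote-table adapters (this requires a running PostgreSQL instance):

```r
library(dplyr)
library(replyr)

# Placeholder connection details; substitute your own host/user/password.
my_db  <- dplyr::src_postgres(host = 'localhost', port = 5432,
                              user = 'postgres', password = 'pg')
irisDB <- dplyr::copy_to(my_db, iris, 'iris')

# Same window function as before; cumsum translates to a SQL window sum.
rank_in_group <- . %>%
  mutate(constcol = 1) %>%
  mutate(rank = cumsum(constcol)) %>%
  select(-constcol)

irisDB %>%
  replyr::gapply(gcolumn = 'Species',
                 f = rank_in_group,
                 ocolumn = 'Sepal.Length',
                 decreasing = TRUE)
```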

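And a sketch of the same calculation on Spark via sparklyr (a local placeholder connection; note the ordering column name changes, as discussed next):

```r
library(dplyr)
library(replyr)
library(sparklyr)

# Placeholder local Spark connection.
sc        <- sparklyr::spark_connect(master = 'local')
irisSpark <- dplyr::copy_to(sc, iris, 'iris')

# Same window function as before.
rank_in_group <- . %>%
  mutate(constcol = 1) %>%
  mutate(rank = cumsum(constcol)) %>%
  select(-constcol)

# sparklyr replaced '.' with '_' in column names, so order by Sepal_Length.
irisSpark %>%
  replyr::gapply(gcolumn = 'Species',
                 f = rank_in_group,
                 ocolumn = 'Sepal_Length',
                 decreasing = TRUE)
```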
Notice the sparklyr adapter changed column names by replacing "." with "_", so we had to change our ordering column specification to ocolumn='Sepal_Length' to match. This is the only accommodation we had to make to switch to a Spark service. Outside of R (and Lisp) dots in identifiers are considered a bad idea and should be avoided. For instance, most SQL databases reserve dots to indicate relations between schemas, tables, and columns (so it is only through sophisticated quoting mechanisms that the PostgreSQL example was able to use dots in column names).