"A Map function extracts a 10-byte sorting key from a text line and
emits the key and the original text line as the intermediate key/value
pair. We used a built-in Identity function as the Reduce operator.
This function passes the intermediate key/value pair unchanged as the
output key/value pair. The final sorted output is written..."

I don't understand how the actual sorting happens. As far as I understand, the Map function extracts a key/value pair, and then the Reduce function somehow spits out the data sorted. What is a "sorting key"?

2 Answers

The sorting is dependent on some implementation details and additional functionality in Google's runtime. See section 4.2:

We guarantee that within a given partition, the intermediate
key/value pairs are processed in increasing key order. This ordering
guarantee makes it easy to generate a sorted output file per
partition, which is useful when the output file format needs to
support efficient random access lookups by key, or users of the output
find it convenient to have the data sorted.

The system also allows an arbitrary partitioning scheme (mentioned in section 4.1). The sorting system uses this scheme:

The partitioning function uses the initial bytes of the key to
segregate it into one of R pieces.

Thus when the Reduce function is run on each partition, the data itself is unchanged, but the output is guaranteed to be in increasing key order. Presumably there's nothing special about how this is implemented; it's just a simple local sort within each partition.

Once you have your sorted partitions, all you have to do is concatenate them in the order of the initial bytes that were used to make the partitions, and you have a fully sorted list.
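To make that concrete, here's a minimal single-process sketch of the scheme described above: partition keys by their initial byte, sort each partition locally, and concatenate. The partition function, the value of `R`, and the assumption of lowercase ASCII input are all illustrative choices of mine, not details from the paper.

```python
R = 4  # number of partitions (reduce tasks); hypothetical value

def partition(key, r=R):
    # Range-partition on the initial byte of the key, so that every key
    # in partition i sorts before every key in partition i+1.
    # Assumes lowercase ASCII input for simplicity.
    return (key[0] - ord('a')) * r // 26

lines = [b"banana", b"apple", b"cherry", b"zebra", b"apricot"]

# Map: emit (sorting key, original line); the sorting key here is just
# a fixed-length prefix of the line, as in the paper's example.
pairs = [(line[:10], line) for line in lines]

# Shuffle: route each pair to its partition.
partitions = [[] for _ in range(R)]
for key, value in pairs:
    partitions[partition(key)].append((key, value))

# Each partition is sorted locally (the "increasing key order" guarantee);
# Reduce is the identity, so values pass through unchanged.
for p in partitions:
    p.sort()

# Concatenating the partitions in order yields a fully sorted list.
result = [value for p in partitions for key, value in p]
assert result == sorted(lines)
```

Because the partitioner respects key order across partitions, no global merge step is needed at the end, only concatenation.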

(I'm taking a "sorting key" to be a summary of the value, designed so that when the keys are sorted, the values are also in the correct sorted order, at least to a reasonable approximation. Simply taking the first few bytes of a string would be enough to make a crude sorting key.)

Have a look at this YouTube video, of a professor explaining the concepts involved in the MapReduce algorithm. Unfortunately, all the code is in Scheme, so readability's not the greatest, but he does a good job of explaining what goes on.

Basically, MapReduce has more than just those two steps. It goes like this:

Map: for each input, produce 0, 1 or more key-value pairs

Sort: Gather the key-value pairs and sort them by key into buckets, one bucket per distinct key

Reduce: Each bucket is reduced in parallel to a single result; the per-bucket results are then combined (reduced) into a final result.
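The three steps above can be sketched in a few lines. This is a single-process illustration using word count as the example; the function names `map_fn` and `reduce_fn` are my own, not anything from the paper or the video.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # Map: for each input, emit 0, 1, or more key-value pairs.
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    # Reduce: collapse one bucket of values into a single result.
    return (key, sum(values))

inputs = ["the quick fox", "the lazy dog", "the fox"]

# Map phase.
pairs = [kv for line in inputs for kv in map_fn(line)]

# Sort phase: sorting by key gathers equal keys into contiguous runs,
# so groupby can split them into buckets that are independent of each other.
pairs.sort(key=itemgetter(0))
buckets = groupby(pairs, key=itemgetter(0))

# Reduce phase: in a real system, each bucket would run in parallel
# on a different machine.
results = [reduce_fn(key, (v for _, v in group)) for key, group in buckets]
# e.g. ('the', 3) appears in results, since "the" occurs three times
```

The sort is what makes the parallelism safe: once equal keys are grouped together, each bucket can be reduced with no knowledge of the others.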

The sorting step is basically there so you can run the Reduce operation in a distributed, parallel fashion. Without it, the entire Reduce task would have to be handled serially by a single system, which kind of defeats the purpose.