I still plan on writing more about scheduling algorithms, but I was at Strange Loop last week and listened to Guy Steele talk about parallel algorithms. He works in a language called Fortress, but the algorithms could certainly be implemented in C# as well.

The problem Steele worked through to demonstrate a parallel algorithm is splitting a string into words. For simplicity, we'll assume that words are separated only by spaces. Here are some sample tests so you can see the pattern.
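The original test cases aren't reproduced here, but the built-in `string.Split` with `StringSplitOptions.RemoveEmptyEntries` already has the behavior we want to reproduce, so a few illustrative cases (hypothetical inputs, chosen to show the pattern) might look like this:

```csharp
using System;
using System.Linq;

// Illustrative cases: runs of spaces produce no empty words,
// and leading/trailing spaces are dropped.
var cases = new[] { "This is a test", "  extra   spaces  ", "" };
foreach (var s in cases)
    Console.WriteLine(string.Join("|", s.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)));
// This|is|a|test
// extra|spaces
// (blank line: no words at all)
```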

In a typical implementation, it's not too complicated to do this sequentially. Start with an empty list. Iterate over each character, building a word along the way, and when you hit a space, add the word to the list and start over. Maybe something like this:

public abstract class Splitter
{
    public abstract IEnumerable<string> Split(string stringToSplit);
}

public class NormalSplitter : Splitter
{
    public override IEnumerable<string> Split(string ss)
    {
        var result = new List<string>();
        string word = "";
        for (int i = 0; i < ss.Length; ++i)
        {
            if (ss[i] == ' ')
            {
                if (!string.IsNullOrEmpty(word))
                {
                    result.Add(word);
                    word = "";
                }
            }
            else
                word += ss[i];
        }
        if (!string.IsNullOrEmpty(word))
            result.Add(word);
        return result;
    }
}

Of course, I'm not big on for loops. LINQ's Aggregate function is more or less the same as the Reduce side of the MapReduce algorithm, or some of the fold functions you see in other languages. How would we do this using Aggregate?

Instead of the for loop, we give Aggregate a seed (the result it should give back for an empty string) and a lambda that tells it how to combine a list of words with one additional character, like this:
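A sketch of the Aggregate version (the exact shape of the call here is an assumption; the seed is an empty list, and the current word lives in a variable the lambda closes over and mutates):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class AggregateSplitter
{
    public static IEnumerable<string> Split(string ss)
    {
        string word = "";  // closed over and mutated by the lambda
        var words = ss.Aggregate(new List<string>(), (result, c) =>
        {
            if (c == ' ')
            {
                if (!string.IsNullOrEmpty(word))
                {
                    result.Add(word);
                    word = "";
                }
            }
            else
                word += c;
            return result;
        });
        if (!string.IsNullOrEmpty(word))
            words.Add(word);  // flush the final word
        return words;
    }
}
```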

The lambda body is more or less the same as the inside of the loop in the normal splitter, above:

if (c == ' ')
{
    if (!string.IsNullOrEmpty(word))
    {
        result.Add(word);
        word = "";
    }
}
else
    word += c;
return result;

How do they compare? They’re pretty close to each other. I handed each one a string consisting of the text of Moby Dick and each one was able to process it in about 300 milliseconds. Sometimes one is faster, sometimes the other.
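The original benchmark code isn't shown, but a rough harness along these lines (names and setup are hypothetical) is enough to get comparable millisecond timings:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class SplitTimer
{
    // Time one splitter over one input. Taking a Func lets us pass
    // either implementation's Split method without caring about its type.
    public static long TimeSplit(Func<string, IEnumerable<string>> split, string text)
    {
        var sw = Stopwatch.StartNew();
        var words = split(text).ToList();  // force full enumeration
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```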

Now, how would you break this problem down into chunks that multiple processors can work on? It's tricky. We're holding on to a list of words and a partially built word, and as long as we depend on those two pieces of state, we're stuck iterating over the characters one at a time so we can update them in the correct order. If we try to process two characters simultaneously, the ordering of the words, or the words themselves, can be jumbled.