More LINQ with System.Interactive – Functional fun and taming side-effects

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great number of resources are available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about EnumerableEx’s facilities to tame side-effects in a functionally inspired manner:

To side effect or not to side effect?

Being rooted in query comprehensions as seen in various functional programming languages (including (the) pure one(s)), one would expect LINQ to have a very functional basis. Indeed it has, but being hosted in functionally impure languages like C# and Visual Basic, the odds of reasoning about side-effects in a meaningful and tractable manner are slim. As we’ve seen before when talking about the Do and Run operators, it’s perfectly possible for a query to exhibit side-effects during iteration. You don’t even have to look that far, since every lambda passed to a query operator is an opportunity to introduce effects. The deferred execution nature of LINQ queries means those effects appear at the point of query execution. So far, nothing new.

So, the philosophical question ought to be whether or not we should embrace side-effects or go for absolute purity. While the latter would be preferable for various reasons, it’s not enforceable through the hosting languages for LINQ, so maybe we should exploit side-effects if we really want to do so. The flip side of this train of thought is that those side-effects could come and get us if we’re not careful, especially when queries get executed multiple times, potentially as part of a bigger query. In such a case, you’d likely not want effects to be duplicated. Below is a sample of such a problematic query expression:
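The original query expression is elided from this copy; below is a minimal sketch along the lines described in the next paragraph, assuming the DevLabs-era EnumerableEx.Generate overload (state, result-sequence selector, iterate) and an arbitrary Random.Next(100) upper bound — later Ix releases use a different Generate signature:

```csharp
using System;
using System.Linq;

// Anamorphic Generate: the state is a Random instance; one lambda produces the
// output sequence for the current state (a single random number), the other
// computes the next state (the same generator, unchanged).
var xrs = EnumerableEx.Generate(
    new Random(),
    rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
    rnd => rnd);

// Both sides of the Zip refer to "the same" sequence, so one would expect
// l == r and hence only even sums. That expectation turns out to be wrong.
var res = xrs.Zip(xrs, (l, r) => l + r);

foreach (var x in res.Take(10))
    Console.WriteLine(x);
```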

Using Generate, we generate a sequence of random numbers. Recall that the first argument is the state of the anamorphic Generate operator, which gets passed to the lambdas following it: once to produce an output sequence (just a single random number in our case) and once to iterate (just keeping the same random number generator here). What’s more important is that we’re relying on the side-effect of reading the random number generator which, as the name implies, provides random answers to the Next inquiry every time it gets called. In essence, the side-effect can(not) be seen by looking at the signature of Random.Next, which merely says it returns an int. In .NET this doesn’t mean the method will return the same int every time it gets called; there are no purity guarantees whatsoever (as there would be in pure functional programming languages).

This side-effect, innocent and intentional as it may seem, comes and gets us if we perform a Zip on the sequence with itself. Since Zip iterates both sides, we’re really triggering separate enumeration (“GetEnumerator”) over the same sequence two times. Though it’s the same sequence object, each of its iterations will produce different results. As a result, the expected invariant of the Zip’s output being only even numbers (based on the assumption l and r would be the same as they’re produced by the same sequence) doesn’t hold:

52
114
112
103
41
135
78
114
59
137

While random number generation is a pretty innocent side-effect, not having it under control properly can lead to unexpected results as shown above. We can visualize this nicely using another side-effect introduced by Do:
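The visualization snippet is elided here; a sketch of the idea, again assuming the DevLabs-era Generate overload (the “! ->” probe prefix is made up):

```csharp
using System;
using System.Linq;

var xrs = EnumerableEx.Generate(
        new Random(),
        rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
        rnd => rnd)
    .Do(x => Console.WriteLine("! -> " + x)); // spy on every element fetched

// Each side of the Zip enumerates xrs separately, so every resulting pair
// triggers two "! ->" messages, each showing a *different* random number.
var res = xrs.Zip(xrs, (l, r) => l + r);

foreach (var x in res.Take(10))
    Console.WriteLine(x);
```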

If we look a bit closer at the original query, we come to the conclusion that we can’t apply any form of equational reasoning anymore: the common subexpression “xrs” is not “equal” (as in exposing the same results) at both use sites. The immediate reason in the case of LINQ is deferred execution, which is a good thing as our Generate call produces an infinite sequence. More broadly, it’s the side-effect that lies at the heart of the problem, as equational reasoning breaks down in such a setting. For that very reason, side-effect-permitting languages have a much harder time carrying out optimizations to code and need to be very strict about specifying the order in which operations are performed (e.g. in C#, arguments to a method call – which is always “call-by-value” – are evaluated in left-to-right order).
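The eager workaround discussed next is elided from this copy; a sketch of what it would look like, with the same assumed Generate overload:

```csharp
using System;
using System.Linq;

// Tame the side-effect by materializing the sequence up front. The Take(10)
// is essential: ToArray on an infinite sequence would never return.
var xrs = EnumerableEx.Generate(
        new Random(),
        rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
        rnd => rnd)
    .Take(10)
    .ToArray(); // eager: the side-effects happen here, exactly once

// Both sides of the Zip now see the very same array, so all sums are even...
// but every enumeration of randomEvens yields the same ten results.
var randomEvens = xrs.Zip(xrs, (l, r) => l + r);
```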

Don’t forget the Take(10) call though, as calling ToArray (or ToList) on an infinite sequence is not quite advisable on today’s machines with finite amounts of memory. It’s clear such hacking is quite brittle, and it breaks the deferred execution nature of the query expression. In other words, you can’t really hand out the resulting expression to a caller for it to evaluate when it needs results (if it ever does). We’re too eager about evaluating (part of) the query, just to be able to tame the side-effect:

// What if the consumer of randomEvens expects different results on each enumeration... Hard cheese!
randomEvens.Run(Console.WriteLine);
randomEvens.Run(Console.WriteLine);

It’s clear that we need some more tools in our toolbox to tame desired side-effects when needed. That’s exactly what this post focuses on.

Option 1: Do nothing with Let

A first way to approach side-effects is to embrace them as-is. We just allow multiple enumerations of the same sequence to yield different results (or more generally, replicate side-effects). However, we can provide a bit more syntactical convenience in writing queries that reuse the same common subexpression in multiple places. In the above, we had to introduce an intermediate variable to store the common expression in, ready for reuse further on:
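The Let-based rewrite is elided here; a sketch of the shape it would take, assuming the DevLabs-era EnumerableEx.Let operator and the same Generate overload as before:

```csharp
using System;
using System.Linq;

// Let scopes the common subexpression without a separate variable; note that
// enumeration of xrs is still duplicated, side-effects and all.
var res = EnumerableEx.Generate(
        new Random(),
        rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
        rnd => rnd)
    .Let(xrs => xrs.Zip(xrs, (l, r) => l + r));

foreach (var x in res.Take(10))
    Console.WriteLine(x);
```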

Because of the call-by-value nature of the languages we’re talking about, the expression used for the source parameter will be fully evaluated (not the same as enumerated!) before Let gets called. We can then feed it (again in a call-by-value manner) to the function, which can refer to it multiple times by means of its lambda expression parameter (“xrs” in the sample above). Let comes from the world of functional languages, where it takes the following form:

let x = y in z

means (in C#-ish syntax)

(x => z)(y)

In other words, there’s a hidden function x => z sitting in a let-expression, and the “value” for x (which is y in the sample) gets passed to it, providing the result for the entire let-expression. In EnumerableEx.Let, the function appears as the second parameter, and the role of “y” is fulfilled by the source parameter. One could create a Let-form for any object as follows (not recommended because of the unrestricted extension method):

public static R Let<T, R>(this T t, Func<T, R> f)
{
    return f(t);
}

With this, you can write things like this:

Console.WriteLine(DateTime.Now.Let(x => x - x).Ticks);

This will print 0 ticks for sure, since the same DateTime.Now is used for x on both sides of the subtraction. If we were to expand this expression by substituting DateTime.Now for x, we’d get something different due to the duplicate evaluation of DateTime.Now, exposing the side-effect of reading from the system clock:

Console.WriteLine((DateTime.Now - DateTime.Now).Ticks);

(Pop quiz: What sign will the above Ticks result have? Is it possible for the above to return 0 sometimes?)

Option 2: Cache on demand a.k.a. MemoizeAll

As we’ve seen before, one way to get rid of the side-effect replication is by forcing eager evaluation of the sequence through operators like ToArray or ToList. However, those are a bit too eager in various ways:

They persist the whole sequence, which won’t work for infinite sequences.

They do so on the spot, i.e. the eagerness can’t be delayed till a later point (“on demand”).

The second problem can be worked around using the Defer operator, but the first one still requires another operator. Both are addressed by MemoizeAll, which essentially persists the sequence bit by bit upon consumption. This is achieved by exposing the enumerable while maintaining only a single enumerator to its source:

This is illustrated in the figure above. Red indicates a fetch operation, where the original source’s iterator makes progress as an element is requested that hasn’t been fetched before. Green indicates persisted (cached, memoized) objects. Gray indicates elements in the source that have been fetched and hence belong to the past from the (single) source iterator’s point of view: MemoizeAll won’t ever request those again. Applying this operator to our running sample using Zip will produce results with the expected invariant:
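The memoized version of the sample is elided here; a sketch assuming the DevLabs-era MemoizeAll and Let operators on top of the same Generate-based source:

```csharp
using System;
using System.Linq;

// MemoizeAll caches every fetched element, so both Zip consumers observe the
// exact same random numbers; the Do probe fires only once per element now.
var res = EnumerableEx.Generate(
        new Random(),
        rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
        rnd => rnd)
    .Do(x => Console.WriteLine("! -> " + x))
    .MemoizeAll()
    .Let(xrs => xrs.Zip(xrs, (l, r) => l + r))
    .Take(10);

foreach (var x in res)
    Console.WriteLine(x); // all sums are even
```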

Now we’ll see the xrs-triggered Do messages being printed only 10 times since the same element will be consumed by the two uses of xrs within Zip. The result looks as follows, showing how the right consumer of Zip never causes a fetch back to the random number generating source due to the internal caching by MemoizeAll:

What about the lifetime of the source’s single enumerator? As soon as one of the consumers reaches the end of the underlying sequence, we have all elements cached and are prepared for any possible inquiry for elements on the output side of the MemoizeAll operator, hence it’s possible to dispose of the original enumerator. It should also be noted that the memoization operators use materialization internally to capture the behavior of the sequence to expose to all consumers. This means exceptions are captured as Notification<T> objects so they’re repeatable for all consumers:
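The faulty-source sample is elided from this copy; a sketch consistent with the result described below, assuming EnumerableEx.Throw and MemoizeAll (the specific values 1 and 42 and the exception message are reconstructed from the surrounding text):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A source that yields 1 and 42 and then blows up. MemoizeAll captures the
// elements *and* the exception as notifications, replaying them identically
// to every consumer.
var xs = new[] { 1, 42 }
    .Concat(EnumerableEx.Throw<int>(new InvalidOperationException("Oops!")))
    .MemoizeAll();

for (int i = 0; i < 2; i++)
{
    try
    {
        foreach (var x in xs)
            Console.WriteLine(x); // prints 1, then 42
    }
    catch (InvalidOperationException ex)
    {
        Console.WriteLine(ex.Message); // the very same memoized exception
    }
}
```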

The above will therefore print 1, 42 twice. In other words, the source blowing up during fetching by MemoizeAll doesn’t terminate other consumers that haven’t reached the faulty state yet (but if they iterate long enough, they’ll eventually see it exactly as the original consumer did).

Finally, what’s the “All” about in MemoizeAll? In short: the cache used by the operator can grow infinitely large. The difference compared to ToArray and ToList has been explained before, but it’s worth repeating: MemoizeAll doesn’t fetch its source’s results on the spot but only makes progress through the source’s enumerator when one of the consumers requests an element that hasn’t been retrieved yet. Call it a piecemeal ToList if you want.

Option 3: Memoize, but less “conservative”

While MemoizeAll does the trick to avoid repetition of side-effects, it’s quite conservative in its caching as it never throws away elements it has retrieved. You never know whether someone – like a slow enumerator or a whole new enumerator over the memoized result – will request the data again, so a general-purpose Memoize can’t throw away a thing. However, if you know the behavior of the consumers of the memoized source, you can be more efficient about it and use Memoize specifying a buffer size. In our running sample of Zip we know that both uses of the source for the left and right inputs to Zip will be enumerated at the same pace, so it suffices to keep the last element in the cache in order for the right enumerator to be able to see the element the left enumerator just saw. Memoize with buffer size 1 does exactly that:
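The bounded-buffer sample is elided here; a sketch assuming the DevLabs-era Memoize(bufferSize) overload described in the paragraph above:

```csharp
using System;
using System.Linq;

// Both Zip consumers run in lock-step, so a one-element buffer suffices for
// the right enumerator to re-read what the left enumerator just fetched.
var res = EnumerableEx.Generate(
        new Random(),
        rnd => EnumerableEx.Return(rnd.Next(100)), // upper bound is an assumption
        rnd => rnd)
    .Memoize(1) // keep only the last element in the cache
    .Let(xrs => xrs.Zip(xrs, (l, r) => l + r))
    .Take(10);

foreach (var x in res)
    Console.WriteLine(x); // all sums are even, with O(1) memory
```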

Another valid buffer size – also the default – is zero. It’s left to the reader, as an exercise, to come up with a plausible theory for what that one’s behavior should be and to depict this case graphically.

(Question: Would it be possible to provide a “smart” memoization operator that knows exactly when it can abandon items in the front of its cache? Why (not)?)

Derived forms

The difference between Let and the Memoize operators is that the former feeds in a view on an IEnumerable<T> source to a function, allowing that one to refer to the source multiple times in the act of producing a source in return. Let is, as we saw, nothing but fancy function application in a “fluent” left-to-right dataflowy way. Derived forms of Memoize exist that have the same form where a function is fed a memoized data source:

Replay is Memoize on steroids

Publish is MemoizeAll on steroids

The following snippets show just what those operators do (modulo parameter checks):
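The original snippets are elided from this copy; a plausible reconstruction of their shape, based on the description above (a function fed a memoized data source) — the actual implementations may differ:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DerivedFormSketches
{
    // Replay = Memoize (bounded buffer) fed to a function, Let-style.
    public static IEnumerable<R> Replay<T, R>(
        this IEnumerable<T> source,
        Func<IEnumerable<T>, IEnumerable<R>> function,
        int bufferSize)
    {
        return function(source.Memoize(bufferSize));
    }

    // Publish = MemoizeAll (unbounded cache) fed to a function, Let-style.
    public static IEnumerable<R> Publish<T, R>(
        this IEnumerable<T> source,
        Func<IEnumerable<T>, IEnumerable<R>> function)
    {
        return function(source.MemoizeAll());
    }
}
```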

Option 4: Fair (?) sharing with Share and Prune

The Share operator shares one IEnumerator<T> among any number of consumers of an IEnumerable<T>, hence avoiding duplication of side-effects. In addition, it also guarantees that no two consumers can see the same element, so in effect the Share operator has the potential of distributing elements across different consumers. Looking at it from another angle, one consumer can steal elements from the source, preventing another consumer from seeing them. Prune is derived from Share as follows:
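The definition is elided here; a plausible sketch, analogous to the other derived forms (the actual implementation may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PruneSketch
{
    // Prune = Share fed to a function, Let-style: every consumer inside the
    // function pulls from one shared enumerator over the source.
    public static IEnumerable<R> Prune<T, R>(
        this IEnumerable<T> source,
        Func<IEnumerable<T>, IEnumerable<R>> function)
    {
        return function(source.Share());
    }
}
```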

The naming for Prune follows from the effect consumers inside the function have on the sequence being shared: each one consuming data effectively prunes elements from the head of the sequence, so that others cannot see those anymore. The example below shows yet another way a Zip could go wrong (practical scenarios for this operator would involve sharing), since the left and right consumers both advance the cursor of the same shared enumerator under the hood:
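The Zip-over-Share sample is elided from this copy; a sketch using a deterministic Range source instead of the random generator, to make the pairing visible (under the sharing semantics described above):

```csharp
using System;
using System.Linq;

// One shared enumerator over 0..9: the left side of the Zip steals 0, the
// right side steals 1, the left steals 2, the right steals 3, and so on.
var xs = Enumerable.Range(0, 10).Share();
var res = xs.Zip(xs, (l, r) => l + r);

foreach (var x in res)
    Console.WriteLine(x); // 1, 5, 9, 13, 17 — odd sums, the invariant is broken
```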

I guess a "smart" Memoize operator could know when it can abandon items if told beforehand how many enumerators are going to enumerate over it. Then it can track each enumerator, and whenever the last enumerator moves to the next item, it can abandon the previous item.

That's right, but not quite as smart as one would expect, since you still need to say how many enumerators are going to consume the resource. However, inferring that is not feasible because various operators have dynamic behavior with regard to the number of iterations that are going to happen (e.g. SelectMany where the source is used in the selector lambda body).

That was exactly my suggestion: telling the Memoize operator how many enumerators are going to iterate over it. It is no worse than telling it how long a buffer it should have, and might be more intuitive in some use cases.

I guess that calls to GetEnumerator beyond the given limit should throw. Fewer calls than the limit is a harder problem (which I have no solution for) - the buffer could grow indefinitely.

Implementing this "smart" Memoize shouldn't be so hard (especially for smart people like you).