Lambda Expressions vs. Anonymous Methods, Part Five

Last time I demonstrated that the compiler could have to do an exponential number of bindings in order to determine whether there was a unique best overload resolution for a function call that takes a lambda. Some of you may have wondered whether we simply were not being clever enough in the compiler. Perhaps there is some way to optimize this problem so that the unique solution is found in less than exponential time.

As it turns out, the question of whether there is a clever polynomial-time way to do this in C# 3.0 is equivalent to the most famous open problem in computer science: whether P equals NP. That is a problem which has stumped generations of the finest minds in academia, and the widely believed answer, that the two are not equal, has resisted every attempt at proof, so I don't feel particularly bad about not solving this one myself.

Consider the following slight revision of the set of overloads I sketched out last time:
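The original listing did not survive in this copy, so here is a sketch of the shape of overload set being described. To be clear, this is my reconstruction, not the original code: every class, delegate, and method name here (T, F, DT, DF, M, Or, And, Not, MustBeT) is illustrative. The idea is that each lambda parameter can be bound as either T ("true") or F ("false"), so choosing among the M overloads amounts to choosing a truth assignment:

```csharp
// Reconstruction sketch only -- not the original listing from the post.
class T { }                 // plays the role of "true"
class F { }                 // plays the role of "false"

delegate void DT(T t);      // binds a lambda parameter as T
delegate void DF(F f);      // binds a lambda parameter as F

static class Program
{
    // Picking an overload of M assigns T or F to the lambda's parameter:
    static void M(DT d) { }
    static void M(DF d) { }

    // Or yields T unless all three arguments are F:
    static T Or(T a1, T a2, T a3) { return null; }
    static T Or(T a1, T a2, F a3) { return null; }
    static T Or(T a1, F a2, T a3) { return null; }
    static T Or(T a1, F a2, F a3) { return null; }
    static T Or(F a1, T a2, T a3) { return null; }
    static T Or(F a1, T a2, F a3) { return null; }
    static T Or(F a1, F a2, T a3) { return null; }
    static F Or(F a1, F a2, F a3) { return null; }

    // And yields T only when both arguments are T:
    static T And(T a1, T a2) { return null; }
    static F And(T a1, F a2) { return null; }
    static F And(F a1, T a2) { return null; }
    static F And(F a1, F a2) { return null; }

    static T Not(F f) { return null; }
    static F Not(T t) { return null; }

    // The whole formula must come out T, or the call does not bind:
    static void MustBeT(T t) { }

    static void Main()
    {
        // Encodes (x1 or x2 or not x3) and (not x1 or x2 or x3).
        // The compiler must search the T/F bindings of x1, x2, x3;
        // whether this compiles, is ambiguous, or fails depends on
        // how many truth assignments satisfy the formula.
        M(x1 => M(x2 => M(x3 =>
            MustBeT(And(Or(x1, x2, Not(x3)),
                        Or(Not(x1), x2, x3))))));
    }
}
```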

This expression makes the compiler solve the Boolean satisfiability problem with three terms per "or", aka "3SAT". If the 3SAT instance has a unique solution then the program compiles, and the chosen bindings encode that unique solution. If it has more than one solution then compilation fails with an ambiguity error. If it has no solution then compilation fails with an error saying that MustBeT cannot take an F argument. But no matter how you slice it, the compiler must solve 3SAT.

Determining whether a given predicate has an assignment of truth values to its variables that makes it true is an NP-complete problem; actually finding such an assignment is an NP-hard problem. Therefore, overload resolution in C# 3.0 is at the very least NP-hard. There is no known polynomial-time algorithm for any NP-complete or NP-hard problem, and it is widely believed that there is none to be found, though that conjecture has yet to be proven. So our dream of fast lambda type analysis in C# 3.0 is almost certainly doomed, at least as long as we have the current rules for overload resolution.
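To make the blow-up concrete, here is a small C# sketch (illustrative only; none of these names come from the post) that does by brute force what overload resolution is forced to do: try every one of the 2^n truth-value bindings and count how many satisfy the formula.

```csharp
// Illustrative brute-force 3SAT check: exponential in the number of
// variables, mirroring the compiler's search over lambda bindings.
using System;

static class SatDemo
{
    // A literal is a 1-based variable index; negative means negated.
    static bool Lit(bool[] v, int i)
    {
        return i > 0 ? v[i - 1] : !v[-i - 1];
    }

    // One 3SAT clause: the "or" of three literals.
    static bool Clause(bool[] v, int a, int b, int c)
    {
        return Lit(v, a) || Lit(v, b) || Lit(v, c);
    }

    static int CountSolutions(int n, Func<bool[], bool> formula)
    {
        int count = 0;
        for (int bits = 0; bits < (1 << n); bits++)   // all 2^n valuations
        {
            var v = new bool[n];
            for (int i = 0; i < n; i++)
                v[i] = (bits & (1 << i)) != 0;
            if (formula(v))
                count++;
        }
        return count;
    }

    static void Main()
    {
        // (x1 or x2 or not x3) and (not x1 or x2 or x3)
        int solutions = CountSolutions(3,
            v => Clause(v, 1, 2, -3) && Clause(v, -1, 2, 3));

        // prints 6: more than one satisfying assignment, which in the
        // overload-resolution encoding would be an ambiguity error
        Console.WriteLine(solutions);
    }
}
```

One satisfying assignment means the encoded program compiles; several mean an ambiguity error; none means a binding error, exactly as described above.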

Does this really matter in practice? As I mentioned last time, hopefully not. Other languages also have this issue. When I ran an early draft of this post past Erik Meijer, he immediately pointed out that the ML type inference system has similar worst cases, and the Haskell type inference system is even worse. Apparently in Haskell you can encode a Turing machine into the type system and make the compiler run it! In practice these sorts of problems do not arise in real-world code in any of these languages, so I am not too worried about it for C# 3.0.

'Apparently in Haskell you can encode a Turing machine into the type system and make the compiler run it!'

The same goes for C++ templates. They are Turing-complete too, which is what makes template metaprogramming in C++ possible. I suppose any language with similar inferencing abilities, whose compiler must decide which type, if any, makes the best candidate, will suffer similar problems.

Partial specialization makes life a living hell for C++ compiler writers. It is one reason the VC++ team never got partial specialization right before VC++ 2003.

John Smith

26 Jul 2007 9:33 PM

Awesome example!

I would just like to note that what you have shown is the 2-SAT problem, which is actually in P. But since it is trivially extended to N-SAT if you have And(x0,x1,...xn-1) and Or(x0,x1,...xn-1) functions, I'll buy it.