On the one hand, it looks rather innocent: a function is given some box, and if it opens the box, it's caught red-handed. Such a function would hang if given an infinite loop. A function that doesn't touch the box, yet finishes, is const (). Such a 'test' is a nice feature to have.

On the other hand, something is wrong: f is more defined than g, yet test f is not at least as defined as test g. This contradicts monotonicity. By passing two exceptions to (+), you can check which argument is evaluated first:

throw A + throw B

That means flip (+) is not the same as (+). Addition is not commutative!
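You can make this observation concrete with a small probe (a sketch; whichFirst is a hypothetical name, using GHC's Control.Exception):

import Control.Exception (ErrorCall (..), evaluate, throw, try)

-- Plant a distinct exception in each argument and report which one
-- escapes first, i.e. which argument the operator forces first.
whichFirst :: (Int -> Int -> Int) -> IO String
whichFirst op = do
  r <- try (evaluate (throw (ErrorCall "A") `op` throw (ErrorCall "B")))
  return $ case r of
    Left (ErrorCall side) -> side   -- the argument evaluated first
    Right n               -> show n -- no exception: a plain result

Even though (+) and flip (+) denote the same function on integers, whichFirst (+) and whichFirst (flip (+)) may answer differently.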

Which of these points is correct?

The representation of a->b

Internally, the (->) type is a list of instructions - force a thunk, do case analysis, perform a primitive instruction like adding integers. In other words, it's an algorithm. You could conceivably write an algebraic datatype that encompasses all those options.
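A toy version of such a datatype might look like this (entirely hypothetical; GHC's real representation is nothing this simple):

{-# LANGUAGE GADTs #-}

-- A miniature "instruction set": identity, sequencing, a primitive
-- arithmetic instruction, and case analysis on an Int.
data Algorithm a b where
  Id      :: Algorithm a a
  Compose :: Algorithm c b -> Algorithm a c -> Algorithm a b
  AddInts :: Algorithm (Int, Int) Int
  IfZero  :: Algorithm Int b -> Algorithm Int b -> Algorithm Int b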

Haskell has an evaluator that takes an algorithm and runs it step by step.

($) :: Algorithm a b -> a -> b
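For the toy Algorithm above, such an evaluator is a sketch like this: it walks the instructions one by one.

-- Interpret each instruction of the toy Algorithm in turn.
runAlg :: Algorithm a b -> a -> b
runAlg Id            x      = x
runAlg (Compose g f) x      = runAlg g (runAlg f x)   -- run f, then g
runAlg AddInts       (m, n) = m + n                   -- primitive add
runAlg (IfZero t e)  n      = if n == 0 then runAlg t n else runAlg e n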

Having access to the internal source code of an algorithm, you can write a "debugger" that stops if the function forces its argument. In a sense, this is what the function test is doing.
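Even without access to the internals, a crude cousin of test can be faked with exceptions, which is why it must deliver its answer in IO (a sketch; isStrict is a hypothetical name):

import Control.Exception (ErrorCall (..), evaluate, throw, try)

-- Hand f a booby-trapped thunk; if f opens the box, we catch it.
isStrict :: (a -> b) -> IO Bool
isStrict f = do
  r <- try (evaluate (f (throw (ErrorCall "opened the box"))))
  return $ case r of
    Left (ErrorCall _) -> True    -- the argument was forced
    Right _            -> False   -- result reached WHNF untouched

Here isStrict id gives True, while isStrict (const ()) gives False.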

Lazy IO uses this trick. When a thunk is forced, evaluation is stopped momentarily and IO is performed.
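unsafeInterleaveIO makes the trick tangible: the action is deferred until the thunk it returned is forced. A minimal sketch:

import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  x <- unsafeInterleaveIO (putStrLn "thunk forced!" >> return (42 :: Int))
  putStrLn "x is still an unevaluated thunk"
  print x   -- forcing x runs the deferred action first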

Still, the denotational semantics argument seems disturbing.

The distinction

The solution to the dilemma is:

There are two different ways of interpreting values of type a -> b.

functions that assign a value of b to each value of a.

algorithms that are recipes for turning an a into a b.

In Haskell, the function view is used, but in this post I'll use both to illustrate the differences. I'll call the algorithm view "operational" and the function view "denotational".

Representing algorithms is possible using an ADT, as you saw above. Functions are represented using algorithms:

data Function a b = Function (Algorithm a b)

You can turn an algorithm into a function:

evaluate :: Algorithm a b -> Function a b
evaluate = Function

but the reverse conversion is ill-defined - one function has many representations. evaluate turns an algorithm into a "black box".

Think about rational numbers. You represent them as pairs of integers, even though a rational number is not a pair. Then you write operations like addition, which don't break the 'internal consistency'. Not every operation on pairs can be lifted to an operation on rationals, since different pairs may represent the same rational. It's the same with functions stored as algorithms.
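In code, the analogy might look like this (hypothetical names):

-- Rationals represented as pairs of integers.
data Rat = Rat Integer Integer   -- numerator, denominator

-- Lifts to rationals: equal numbers in, equal numbers out.
addRat :: Rat -> Rat -> Rat
addRat (Rat a b) (Rat c d) = Rat (a * d + c * b) (b * d)

-- Does NOT lift: it distinguishes Rat 1 2 from Rat 2 4,
-- two representations of the same number.
rawNumerator :: Rat -> Integer
rawNumerator (Rat a _) = a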

Questions

The questions focus on differences between algorithms and functions.

It is correct if you fix an operational semantics and consider the argument to be an algorithm. It falls under the "debugging" category. However, since implementations of the same function may run for different amounts of time, it doesn't make sense in the denotational view.

It makes sense in the algorithm view, since you can run f False and f True in parallel, interleaving steps. But as it turns out, this is a rare case where such a function also makes sense in denotational semantics. See the lub package.
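The lub package does this properly; a crude thread-based sketch of the interleaving idea (por is a hypothetical name) could be:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)

-- Evaluate both arguments in parallel and answer True as soon as
-- either of them does, even if the other diverges.
por :: Bool -> Bool -> IO Bool
por x y = do
  v <- newEmptyMVar
  mapM_ (\b -> forkIO (evaluate b >>= putMVar v)) [x, y]
  a <- takeMVar v
  if a then return True else takeMVar v

With it, por (f False) (f True) interleaves the two evaluations instead of committing to one.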

8. Since a->b in Haskell is seen as a function type, not an algorithm type, anything that depends on the "internals" is ambiguous. How does Haskell deal with it?

For example, catching exceptions is allowed only in IO, since they can be used to inspect evaluation order. Another example is:

Control.Concurrent.mergeIO :: [a] -> [a] -> IO [a]
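A usage sketch (mergeIO comes from Control.Concurrent in older versions of base):

import Control.Concurrent (mergeIO)

main :: IO ()
main = do
  xs <- mergeIO "abc" "XYZ"
  putStrLn xs   -- some interleaving of the two strings; order may vary

The result order depends on scheduling, which is exactly why it must live in IO.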

You can deem lazy I/O safe if you take the 'algorithm' view. In the denotational view, lazy IO doesn't make sense unless you consider IO to be nondeterministic.

In my opinion this is not a good approach, and it might be better if Haskell had separate types for computations and values. (A computation can be thought of as an algorithm returning a value. For example, 2+2 and 4 are two different computations.)
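A toy illustration of that distinction (hypothetical types):

-- A computation is a recipe; a value is its result.
data Expr = Lit Int | Add Expr Expr deriving Show

run :: Expr -> Int
run (Lit n)   = n
run (Add x y) = run x + run y

-- Add (Lit 2) (Lit 2) and Lit 4 are different computations,
-- but run sends both to the same value, 4.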

Summary

In most languages, 'functions' are algorithms. In Haskell, the emphasis is on functions as in mathematics and referential transparency.

Since Haskell runs on a computer, operational semantics (algorithms) describe how things work at the lower level, and you can't do away with them. Things like "debugging", "unsafePerformIO", "measuring time/space complexity of a function", and "order of evaluation" are relevant to operational semantics and break the abstraction. Things like "referential transparency" or "functional reactive programming" are valid for functions.

I think this is what makes Haskell different from imperative languages. FP is about functions, not algorithms. This has advantages, like better support for composability - and disadvantages, like more difficult time/space reasoning.

1 comment:

Actually, your answers raise even more questions. :) What is the difference between "to assign a value" and "to give a recipe"? What is "to show an algorithm/function", "to read an algorithm/function", "a well-defined function"?

"id" is strict while "const ()" is not. We can compare lambda-terms syntactically, with equivalences (beta-, eta-, etc.) or in models so we can obtain a comparison with any precision we want. This comparison may be decidable or not.