If the goal is to be truly "pure", why don't we just replace our code with nothing but a bunch of constants?

We could then go around saying I write constants instead of I write code.

I don't understand the thinking that in order to be truly pure, a function has to return the same value. Why can't it be pure if it returns the data type specified? I.e., if I say I'm going to pass you back a string, then you know for a fact you are going to get a string returned.

Why do we care what the actual value is?

Why would it make sense to have to specify not only the data type returned, but also that it's not a constant value by placing <IO> in front of it?

Also, what's up with the Let? Why can't we just say x = "whatever"

I think it would be better if you show a sample application, where this type of functional programming language shines.

When we don't want our "world" to change, why doesn't it make more sense to use the concept of Interfaces when needed?

I think the problem with the concept of purity and functional programming is that it's trying to force programmers into a box of how mathematicians think. That logic falls apart because the need for a programming language is not simply to solve mathematical equations.

To step back and keep it simple, rather than talking about how it works:

Where I think this would be useful is as a feature in an existing language.

For example, suppose you could create a certain type with a certain state. That way, if you have that special type, you also know it has a certain state, or a certain value. The equivalent (somewhat) of what Erik said in his video about knowing you're going to get a certain type and a certain value, but really I mean state.

This would be beneficial in the case of a string, for example. Let's say you create a new stringempty type, where whenever you get this value back you know it will never be nothing.

Why would this matter?

It would eliminate all of the exceptions for people who don't check if a string is nothing before trying to call the Trim() method on it, for example.

Imagine how many exceptions this would eliminate and how many lines of code it would eliminate.

The bottom line is I don't care if a string is nothing, I just want to get the value, trim it, and see if it's equal or not equal to a certain value.
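The idea above can be sketched in a few lines of Python (hypothetical names; "SafeString" is not a real type in any framework, just an illustration of a string that can never be nothing):

```python
# Hypothetical sketch: a string wrapper that can never hold None, so callers
# may trim and compare without null checks.
class SafeString:
    def __init__(self, value):
        # Normalize None to the empty string once, at the boundary.
        self._value = value if value is not None else ""

    def trim(self):
        return SafeString(self._value.strip())

    def equals(self, other):
        return self._value == other

s = SafeString(None)          # would have been a null string
print(s.trim().equals(""))    # no exception: prints True
```

Every caller downstream can trim and compare without defensive checks, because the type itself carries the guarantee.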

Why functional programming? See Joe Duffy's video on ParallelFx here on C9 and Simon Peyton Jones' papers on Software Transactional Memory on the MS Research site for a couple of approaches to improving the concurrency of applications in this multi-core age we live in.

1) The samples you gave used object oriented languages, not functional programming languages exclusively.

2) I've watched all of the C9 videos on these functional programming languages. I only remember one referring to a real-world scenario, which was in telecommunications. I even watched a video on YouTube from when the interviewee was younger, where the demo showed them crashing the system: only one process or thread would die, and it wouldn't bring down the whole system, and they could make changes to the live system and it would only affect new processes, etc.

However, that really didn't relate back to the concepts of "purity", etc., or to why someone should use functional programming languages.

I'm not approaching this with an argument of why not to use functional programming.

Rather, I'm simply asking someone to explain the reasoning behind FP in a "KISS" way on why it's better than an existing programming language.

Or, maybe it's not meant for real world use in today's world. Maybe it's more of theory and bits and pieces are taken to enhance existing languages.

MetaGunny, it matters because purity is easier to reason about, and it's also easier for the compiler to process (if you want the compiler to help you sort out parallelism and concurrency, it almost always needs your code to be pure -- if the language is pure the compiler can guarantee that this is the case). For example, if you have a function that writes to a global variable when you call it, then you cannot safely perform 100 tasks simultaneously which all use this function - the side effects mean that sequencing matters all of a sudden. And even if sequencing doesn't matter, it's very likely that you'll need to protect the global variable with a lock, which destroys parallelism because of contention.
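A small Python sketch of the contrast (the function names are made up for illustration): the impure version writes to a global, so its result depends on call order; the pure version depends only on its argument, so a thread pool can run the calls in any order and still get a deterministic answer.

```python
from concurrent.futures import ThreadPoolExecutor

# Impure: writes to a global, so 100 simultaneous calls would need a lock,
# and their interleaving changes the results.
total = 0
def impure_add(x):
    global total
    total += x          # hidden side effect
    return total        # result depends on call order

# Pure: the result depends only on the argument, so the calls can run in any
# order, or all at once.
def pure_square(x):
    return x * x

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(pure_square, range(100)))

print(results == [x * x for x in range(100)])  # True, regardless of scheduling
```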

If you instead write your algorithms in a functional style, then they become easy to reason about (e.g. if you ever see that a function returns the wrong results given a set of input parameters, then you already have all the information you need to fix the bug, whereas in C++ you may have hours of "quality time" with the debugger before you're even able to figure out exactly what series of events led to an inconsistent state). It's just more explicit, whereas with side effects everywhere you have all these hidden data flows that can mess up your program but are very difficult to find. I work at a games company, so I do use C++ every day, and I would estimate that probably 80%+ or so of all bugs I encounter simply could not happen in a functional language because you'd just write things differently.

Basically the issue I find is that when you write imperative code where you update state, you need to sort of keep a "mental log" of what your code does as you read it. So at any point in the code you have to keep track of what happened before and go "Okay, so if I end up in this branch then X is not null, and Y is greater than zero", etc. It just fills up your mental resources book-keeping all the state changes that you've done to make sure you're doing the right thing at the right time. In a purely functional
world all that disappears. All you have is input and output. You combine the input into the output somehow, but you don't have to keep track of N previous operations to do it correctly, you only have to worry about inputs and outputs. You can feel how much
more "mental resources" you have to spend on the actual problem, as opposed to book-keeping.

Also, in an imperative programming language the "meaning" in a program consists of both "values" and "statements". The values can be statically typechecked, but statements cannot. In other words, if you swap parameters around in a function call, chances are good that the compiler will tell you about it, but if you swap the order of two statements around, chances are good that it will keep compiling happily (and chances are high that the program will not do the right thing anymore). In a purely functional language the meaning is 100% values, so everything gets statically checked. It may not always be enough to ensure no bugs get through, but I think you'd be surprised at how often Haskell programs "just work" when you compile them, even after large changes (which almost never happens in C++!).

Finally, the merit of being able to split effects up into separate monads is notable, for example, in STM. With STM, the "readTVar" etc. operations are all in the "STM" monad (not "IO"); in that monad you can only read and write transactional variables (no other effects). This means that the compiler can statically guarantee that no irrevocable effects happen inside a transaction (which matters because a transaction can be preempted and have to be re-run).
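The "safe to re-run" property can be sketched in Python (this is a toy illustration, not real STM -- real STM also detects conflicts, which is omitted here): because the transaction body's only effects are buffered TVar writes, the runtime can abort it and run it again without anything irrevocable having happened.

```python
# Toy sketch (not real STM): a transaction only reads and writes
# "transactional variables" through the Tx handle passed to it, so it is
# safe to abort and re-run.
class TVar:
    def __init__(self, value):
        self.value = value

class Tx:
    def __init__(self):
        self.writes = {}                    # buffered until commit
    def read(self, tvar):
        return self.writes.get(tvar, tvar.value)
    def write(self, tvar, value):
        self.writes[tvar] = value

def atomically(transaction, retries=2):
    # Simulate a forced retry: re-running is safe precisely because the
    # transaction's only effects are the buffered writes.
    for attempt in range(retries):
        tx = Tx()
        transaction(tx)
    for tvar, value in tx.writes.items():   # commit the last attempt
        tvar.value = value

balance = TVar(100)
atomically(lambda tx: tx.write(balance, tx.read(balance) - 30))
print(balance.value)   # 70, even though the body ran twice
```

If the body had contained a Console.WriteLine-style irrevocable effect, the retry would have performed it twice -- which is exactly what the STM type restriction rules out.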

Although I'm sure there are other benefits, I'm presuming functional programming is being talked about more and coming back because of multi-core.

Yes, that's probably true. I guess languages always strive to improve productivity and correctness, and I personally think that functional programming provides enough benefits in these areas. I think Ericsson, from that telecom stuff you mentioned, measured something like a 20x productivity boost compared to C++ (they switched to Erlang when their C++ implementation collapsed under its own complexity, IIRC).

But yes, it seems like the "killer app" that may finally convince the last remaining skeptics is the observation that it's immensely more suited to parallel and concurrent programming.

MetaGunny wrote:


So, the question is: is this all theory, or is there a functional programming language I could start using today and write a massively parallel program in easily?

Yes, Haskell for sure. Download GHC at www.haskell.org/ghc. It has parallelism annotations (stick a "par" in the code and it will evaluate it in parallel), STM for the shared-state concurrency, lightweight threads (you can easily kick off thousands of them - they're almost free and get mapped down onto actual OS threads by the runtime), and it's in the process of getting nested data parallelism; there's experimental code already in GHC (essentially you specify the parallelism on the data structures rather than on the control flow - this is pretty much the most fine-grained parallelism you can get, e.g. it's what games use to process vertices and pixels, but Haskell's version will support nested data structures and not just flat arrays, and it will work for arbitrary code, not just shaders -- NDP requires purity, btw, otherwise all the flattening and fusion transformations that the compiler needs to do wouldn't work).

They mention it as "useless", but I think Eric got sidetracked from the original graph and meant to show (like Simon PJ did in the interview they referred to) how Haskell over the last 20 years has moved "up" into the "useful" area by starting with the pure and elegant (but useless) and gradually shaping it into something that's actually useful. So now we have the pure and elegant foundation, but it's a fully general-purpose language that you can use to write real applications (with GUIs, networking, database access, C interop and whatever you can imagine).

As an old-school, impure programmer, I especially liked the way you cleared up what is meant by functional - in my old-school (C++) ways, I kept thinking 'volatile results'...

One could argue, though, that your starting premise is what is wrong with regards to the 'contract' each function call has, at least in terms of adding 'functional' features to an imperative language. And I suspect Charles may have thought that as well, which is why I suspect he asked about whether it was worthwhile to be able to 'flag' a call/routine as being 'functional' - f(x) called with the same 'x' returning the same result. Always. That is, we know our (default) implementation of an (imperative) function (routine) is not pure, but look here, this function (routine) is meant to be pure, so please beef up the (static?) analysis, and tell me if this gets broken.

However, I'm assuming that you reference reading other material because you yourself don't truly understand the why.

If you did, you could have explained it.

No, I referenced them, because they explain some of the benefits of pure functional programming where the minimization of shared mutable state can enable opportunities to simplify the construction of highly concurrent software.

It's interesting, I think, to observe just how much 'push back' / hostility there can be to functional programming from the old-school / career-predicated Object Oriented folks. I guess this is exactly what happened when OO was new and folks thought that C++ would never be a viable alternative to C, or going back further, that HLLs would ever be a better choice than pure assembly. Of course it's ironic that FP has been around since pretty much the dawn of computing.

When trying to convey quickly to folks at my company why I think FP + the .NET Framework is such an incredibly powerful combination, I usually direct them to read Erik's foreword to Don Syme's new 'Expert F#' book. Since I probably shouldn't quote it verbatim here, I can just encourage folks to seek out Erik's foreword - it's beautifully succinct and forms a very compelling argument.

Charles, I really wished you would have brought up Spec# and asked Eric how that may play into the honest/dishonest comparison. If you used Spec# contracts, couldn't the compiler guarantee honesty so that you could have a mixture of honest and dishonest
functions by examining the contracts?

The thing that left me a little confused is that for the last 2 years we have been hearing how Orcas is bringing functional programming concepts to C#. Maybe I misunderstood Eric, but it sounds like that is not true, and it seemed to contradict some of the past C9 videos that were promoting the products.

This was a great interview though and was very stimulating....classic Channel 9.


LINQ is implemented as a monad (a functional construct). Lambda expressions and closures are functional constructs that are first-class citizens in C#, an imperative language. We are not advocating that you throw away tools. Rather, add new ones to your toolbox. C# will continue to evolve, as will VB.NET and C++. F#, the newest member of the .NET family of languages, is a really powerful functional language that offers all of the benefits of functional plus full-force .NET functionality.

The multi-core problem is not going to be solved by requiring developers to become functional programmers. The Parallel Computing Platform, for example, will be equally accessible from any CLI-based language. Parallel Extensions for .NET are implemented as a managed library, and there is a lot more going on in the Developer Division that you haven't heard about yet... Stay tuned to Channel 9 for the details.

Again, functional programming is one way of writing software and it does not have to replace the ways you are already comfortable with as we head into the highly concurrent future (so, C# will not
become a functional language even though it employs mechanisms that are functional in nature...).

From the video, the main point seems to be that there are pure and impure functions. A pure function will give you the same result each time when you pass it the same arguments. An impure function will not.

Now it seems to me that there can be no such thing as a pure function, because every function takes some time to be executed, so every function has as a side effect that time has passed, or ticks increased. So if you use Ticks anywhere in your program, as in the example in the clip, all functions would become instantly impure.

I liked the distinction between honest and dishonest functions. An honest function gives you an indication of whether or not it changes anything other than the parameters passed, while a dishonest one does not.

What I liked most, though, was the explanation of how lambda functions are just a shorthand notation for a kind of class. That made lambda functions a lot easier to understand.

It's a bit strange, though, that instead of getting to see a lot of benefits that you can get from using pure functions, about 40 minutes of the video were actually about the disadvantages of it and why only having pure functions doesn't work.

Now I'm not sure if this is the right place to ask questions, but I wonder why functions still give a result, when you follow the convention that any result given can also just be an out parameter of the function. It's easy to see that y = f(x) is the same as f(x, out y). However, taking C# for example, there are a lot of things I could do in the second form that aren't possible in the first form, like having multiple out parameters f(x, out y, out z), or even overloading the function with a different type for the out parameter: f(int x, out int y) and f(int x, out string y).
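For what it's worth, languages in the functional tradition usually answer the "multiple out parameters" point with tuples rather than overloading: a function simply returns several values at once. A quick Python sketch (the same idea works with tuples in F# or Haskell):

```python
# y = f(x) generalizes naturally to multiple results: return a tuple
# instead of mutating out parameters.
def divide(a, b):
    # one call, two results: quotient and remainder
    return a // b, a % b

q, r = divide(17, 5)
print(q, r)   # 3 2
```

This keeps the function a plain value-in, value-out mapping, which is what makes the "y = f(x)" reading work.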

As a second pondering: switching between functional and object-oriented is just passing the object data as the first parameter to the object's function. So all object-oriented programming is also functional programming; it's just grouped together with somewhat more intuitive syntax.

ps: thank you for providing a smaller download channel 9 crew. With a download cap of 12 GB a month in this backwards country, it is really appreciated to be able to get those vids in smaller sizes.

(this is a response to people who are befuddled by purely functional programming)

I can answer your questions with a thought experiment:

If it's ok for a function to return arbitrary values when you call it with the same parameters, how does this affect the behaviour of other functions? Should 1+1 always = 2? Should Console.WriteLine("Foo") print "bar"? If functions don't behave only according
to the parameters you pass in, how do you specify how the function should behave? In other words, how do you program? Further, how can you ever hope to reason about or debug code that changes behaviour arbitrarily?

This is why current languages provide a mix of purely functional behaviour (like integer addition), and "impure" behaviour which is harder to reason about. The more "pure" the behaviour, the more predictable and deterministic the program is, which means it's
easier to understand and debug, and you can gain a higher confidence in its correctness.

I wrote a post about a number of advanced languages features which I think most people misunderstand or downplay the value of:

What are the types of A.f and B.f? In .NET languages, both of them have type double -> double, but one of them is pure and the other one isn't, so their *types* say nothing about their purity.

In Haskell, B.f couldn't be written that way, and would have to be in a state monad (one way or another), so its type would be double -> StateMonad double or some such, distinct from the type of A.f, which remains double -> double. Thus, the compiler can *help* you compose pure functions like A.f with side-effecting functions like B.f by preventing nonsensical combinations. A programming language that can't tell the difference between a side-effecting function and a non-side-effecting function can't help you avoid certain goofups.

Side effects are available in pure functional programming, it's just that the side-effected stuff must be threaded through function arguments and return values, and Monads provide very solid abstractions for doing that.
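The "threading through arguments and return values" point can be sketched without any monad machinery at all (illustrative Python; the function name is made up): instead of mutating a counter, each call takes the current state and returns the new state alongside its result.

```python
# Hand-rolled state threading: no mutation, same (prefix, counter) in
# always gives the same (name, counter) out -- the function stays pure.
def fresh_name(prefix, counter):
    return f"{prefix}{counter}", counter + 1

name1, c = fresh_name("tmp", 0)
name2, c = fresh_name("tmp", c)
print(name1, name2)   # tmp0 tmp1
```

A state monad is essentially this pattern with the plumbing hidden, so you don't have to pass `c` around by hand.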

It's a real pain, however, to have a beautiful, fully pure functional program, and then you want to plunk a Console.WriteLine down at the lower levels. You end up having to rewrite all the higher layers to be in the IO monad (unless you're an old hand and you foresaw the need). The mixed-mode languages, where the type system "fibs" to you about side-effecting functions having the same types as non-side-effecting functions, are real nice in the plunking department, precisely because they can't TELL that you've just converted your beautiful functional palace into a side-effecting frat house by plunking down a Console.WriteLine.

Can we shuffle statements 1 and 2? Can we do them at the same time? The answer is yes, because result2 does not rely on result1, and the function call has no side effects, so it doesn't matter if result2 is computed before result1.
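A tiny Python sketch of that reordering freedom (hypothetical functions standing in for the statements above): because both computations are side-effect-free and independent, either evaluation order produces identical results.

```python
# Two pure, independent computations: AB = BA.
def a(x):
    return x + 1       # pure

def b(y):
    return y * 2       # pure, independent of a's result

# order 1: A then B
r1 = (a(10), b(3))

# order 2: B then A
s2 = b(3)
s1 = a(10)
r2 = (s1, s2)

print(r1 == r2)   # True: sequencing doesn't matter
```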

Some properties that a pure function (or shall we say a function which is constant with respect to its arguments) has, and that a side-effecting function doesn't have, include:

Let A and B be one or more statements which contain no side effects. Let AB denote the instruction or set of instructions in A followed by the instructions in B.

1. AB has no side effects.
2. If B does not depend on A, then AB = BA.
3. If A is a function that takes arguments c1 ... cn, and c1 ... cn are known at the time of compilation, then the result of A can be fully determined at compile time (i.e., it never needs to be computed in the program, which makes your program faster).
4. If B does not depend on A, then A and B can be executed on different threads at the same time.
5. If A1 ... An are independent, non-side-effecting blocks, then loops such as

but because new ComplexNumber() is constant with respect to its arguments, we know at compile time the value of new ComplexNumber(1,0) and ComplexNumber(4,3).

but Operator+ is constant with respect to its arguments, so we know that its value can be fully determined at compile time, so we evaluate it to get

new ComplexNumber(5, 3).Magnitude.

But .Magnitude is constant with respect to its arguments, and we know ComplexNumber(5,3) at compile time, so we can evaluate it to be Math.Sqrt(34). But Math.Sqrt is constant with respect to its arguments too, so we can even evaluate it at compile time!

entirely at compile time, and thus your program will run faster, and more errors can be caught before you ever ship the program to your customers.

And fewer errors for customers, plus more (provably correct) parallelized code, means that your programs will be more reliable and faster. So to sum up, pure functions and functional programming techniques lead to more money and prestige for you.
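The folding chain described above can be replayed in Python (a sketch mirroring the ComplexNumber example, not the original C# code): every step is constant with respect to its arguments, so the whole expression reduces to a single number a compiler could bake in as a literal.

```python
import math

# Every operation here is constant with respect to its arguments, so
# (ComplexNumber(1,0) + ComplexNumber(4,3)).magnitude folds all the way
# down to Math.Sqrt(34).
class ComplexNumber:
    def __init__(self, re, im):
        self.re, self.im = re, im

    def __add__(self, other):
        return ComplexNumber(self.re + other.re, self.im + other.im)

    @property
    def magnitude(self):
        return math.sqrt(self.re ** 2 + self.im ** 2)

value = (ComplexNumber(1, 0) + ComplexNumber(4, 3)).magnitude
print(value == math.sqrt(34))   # True: a compiler could fold this to a literal
```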

Ooh, ooh, I really like the AB = BA observation ... it reminds me of group theory, where we consider commutators

[AB] =def= AB - BA

which are members of the group that *measure* the extent to which two other members of the group, namely A and B, do not commute. If you've solved Rubik's cube, you'll remember such commutators as one of the only two building blocks you need to solve it (the
other being PQp =def= PQ(P^-1), the conjugator or "change-of-coordinates" gadget).

So two statements are perhaps just mutually non-side-effecting if their commutator is zero. They may side-effect a whole bunch of other stuff, but they don't hurt each other.
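The "commutator is zero" test can be checked mechanically (a Python sketch with made-up operations): apply two state-mutating operations in both orders and compare the final states.

```python
# Two operations "commute" if applying them in either order yields the same
# final state -- their commutator is zero even though each has side effects.
def inc_x(state):
    state["x"] += 1

def inc_y(state):
    state["y"] += 1

def double_x(state):
    state["x"] *= 2

def commute(op1, op2, initial):
    s1 = dict(initial); op1(s1); op2(s1)   # order AB
    s2 = dict(initial); op2(s2); op1(s2)   # order BA
    return s1 == s2

print(commute(inc_x, inc_y,   {"x": 0, "y": 0}))   # True: different cells
print(commute(inc_x, double_x, {"x": 1, "y": 0}))  # False: (1+1)*2 != 1*2+1
```

The first pair side-effect different cells, so they don't hurt each other; the second pair both touch "x", and the order shows.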

In solving the cube, the general strategy is to find some brute-force operation, say Q, that does something you want, but may side-effect the bejeebers out of other cubelets. First, get the cubelets you CAN'T afford to hurt out of the way of the bejeebers with
another brute-force op, P, then do the Q you want, then put the cubelets you saved back with a p = (P^-1), undoing all the deleterious side effects!

If we could UNDO side-effects in programs (Rollback, anyone?), then we could do all the mess we wanted (Q) so long as we got stuff out of the way first (P), then undid the mess (cleaned up the frat house, p=P^-1)

The Time Warp operating system allowed us to undo side-effects by sending antimessages and canceling things that never should have been done. Hmmm, maybe there's a group-theory accounting of Time Warp available.

Absolutely. It's important to showcase various perspectives on interesting topics in programming. I'd imagine Anders et al (including Erik) are busy at work innovating C# V.Next. When we can talk about what they're working on publicly, you can bet we will.
This goes for VB.NET, C++ and F# as well. All of these great languages deserve deep-dive and compelling interviews! Certainly, the people who create them are unusually gifted in the art of language design and implementation and we
want to showcase them on C9.

﻿If we could UNDO side-effects in programs (Rollback, anyone?), then we could do all the mess we wanted (Q) so long as we got stuff out of the way first (P), then undid the mess (cleaned up the frat house, p=P^-1)

Madness! Sheer madness! Sounds fun though. I suppose you could implement it by buffering all of the side effects (store backups of mutated memory in objects, and buffer external requests like writes, etc.), although there are some side effects that you would never be able to undo:

(Assume, for argument's sake that atomic blocks are rolled back when any exception occurs):

atomic {
    int i = readIntFromConsole();
    Console.Write(i / 0);
}

Naturally the (i / 0) would throw an error, but you can't rollback a user-interaction request, without some more complicated memory-wiping device (illegal in the States, shucks). But in principle statements such as

// commit changes:
foreach (FutureFunc f in atomicBlock)
{
    try
    {
        // this also assumes that f doesn't rely on certain things,
        // or mutate structures that we later rely on in non-side-effecting functions
        f.Invoke();
    }
    catch (Exception e)
    {
        // it's more difficult to work out what to do with
        // exceptions if they happen here:
        throw e;
    }
}
}
catch (Exception e)
{
    // rethrow, or do something;
    throw e;
}

It would be great to see a C9 video on Spec# (I couldn't find one with a search; I apologize if one already exists). A compiler that could verify which functions have side effects in your C# code, and what they are, could make C# more "honest." It seems to me that development of this project could solve some of the problems Erik mentioned in terms of unknown side effects.

Every object has a local virtual time (LVT) clock that says what time it is "now," virtually and locally, of course.

Every method call generates an "activation record" in the operating system, which is realized as a "message" M from a Sender object S, to a Receiver object R, from a virtual SendTime (ST), to a virtual ReceiveTime (RT).

Messages also have an algebraic sign, plus or minus. Messages are always created and destroyed in + / - pairs. Everything else in the messages of a matching pair is bit-wise identical (optimize via hashes). More on creation and destruction in a bit.

For each object O, the operating system maintains an InputQueue IQ of incoming messages, ordered by ReceiveTimes RTs; an OutputQueue OQ of sent messages, ordered by SendTimes STs; and a StateQueue SQ of instance-variable blocks, ordered by the time "just before which" the state is needed to process an input message, denoted "t minus epsilon".

The operating system schedules execution by the "lowest-virtual-time-first" discipline. When a method is entered at time T, user code "wakes up" with an input message with RT = T and the state from T minus epsilon. When the method returns, the OS saves a state at time T for later method calls. When O makes a method call to receiver object R, the OS generates a message pair for the call, saves the negative copy in O's OQ for later potential cancellation, and sends the positive copy to the receiver R.

When the positive copy of M arrives at R, there are two cases: either R is scheduled to run at a time later than M.RT (that is, R is sitting in the OS's scheduling Queue at a time later than local virtual "now" for the OS), or R is scheduled to run at some
time earlier than M.RT. In case 1, OS ROLLS BACK object R to time M.RT (that is, reschedules R to run at the earlier time M.RT), and OS cancels (perhaps lazily) ALL the negative messages in R's output queue by sending them to their receivers, where they potentially
annihilate their matching messages and cascade the rollbacks, canceling all deleterious side effects!

The whole thing works like magic and gets REAL speedup (with lazy cancellation) on big networks and huge distributed apps like discrete-event simulations. It's also used inside game physics libraries to do optimistic collision response. I'd love a good excuse to resurrect Time Warp.

Search "Time Warp Operating System" and "Theory of Virtual Time."

There is some subtlety about multiple method calls at the same receive time, but just pick any deterministic sub-ordering and the system will work, I promise.
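The rollback mechanism described above can be caricatured in a few dozen lines of Python. This is a heavily simplified sketch, not the real TWOS protocol (no antimessages, no output queues, one object only): an object processes messages in receive-time order while snapshotting state, and a "straggler" arriving in its virtual past triggers a rollback and reprocessing.

```python
# Toy Time-Warp-style object: rolls back and reprocesses when a message
# arrives with a receive time earlier than local virtual time (LVT).
class TimeWarpObject:
    def __init__(self):
        self.lvt = 0                  # local virtual time
        self.state = 0
        self.processed = []           # (receive_time, state_before, value)
        self._pending = []

    def receive(self, rt, value):
        if rt < self.lvt:
            self._rollback(rt)        # straggler: undo the virtual future
        self._execute(rt, value)

    def _execute(self, rt, value):
        self.processed.append((rt, self.state, value))
        self.state += value
        self.lvt = rt

    def _rollback(self, rt):
        # Restore the state saved just before the straggler's receive time,
        # and queue the cancelled later messages for reprocessing.
        keep = [p for p in self.processed if p[0] < rt]
        redo = [p for p in self.processed if p[0] >= rt]
        if redo:
            self.state = redo[0][1]   # state just before the first undone msg
        self.processed = keep
        self.lvt = rt
        self._pending = [(t, v) for t, _, v in redo]

    def drain(self):
        for t, v in self._pending:
            self._execute(t, v)
        self._pending = []

obj = TimeWarpObject()
obj.receive(10, 5)      # state = 5  at t = 10
obj.receive(20, 7)      # state = 12 at t = 20
obj.receive(15, 100)    # straggler! roll back past t = 20, apply at t = 15
obj.drain()             # then redo the cancelled t = 20 message
print(obj.state)        # 5 + 100 + 7 = 112, as if messages arrived in order
```

The real system does this across many objects by cancelling sent messages with antimessages, cascading the rollbacks through the network.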

It would be great to see a C9 video on Spec# (I couldn't find one with a search; I apologize if one already exists). A compiler that could verify which functions have side effects in your C# code, and what they are, could make C# more "honest." It seems to me that development of this project could solve some of the problems Erik mentioned in terms of unknown side effects.

For some reason I can't click through any of these links, but this is a good list:

Time Warp Operating System - "This paper describes the Time Warp Operating System, under development for three..." Jefferson 85: David R. Jefferson, Virtual Time, ACM Transactions on... portal.acm.org/citation.cfm?id=37508

15.3 Time Warp - "We designed the Time Warp Operating System (TWOS) to address this problem." ...project were David Jefferson, Peter Reiher, Brian Beckman, Frederick Wieland... www.netlib.org/utk/lsi/pcwLSI/text/node364.html

You, sir, are a godsend. On the downside, I will get nothing done over the next three months, as I'll be spending all my time writing code to do this in practice.

I have to admit I've been looking at compiler theory, including optimisations, autoconcurrency and proofs of correctness, over the past couple of weeks, and it's pretty astounding how deep this rabbit hole goes. It also means that when people say "how bad can a goto / unsafe block be?" I nearly die of a heart attack.

One of the crazy notions I had was to label functions within the .NET Framework (such as Console.WriteLine) as side-effecting monads, and to label functions (such as Math.Sin) as constant with respect to their arguments, in order to get some of these benefits out, but I had completely forgotten about you folks down in Cambridge. Seems you beat me to it.

It is interesting to see the "effects" of this video and the interesting discussions it generates. In particular it is fascinating to see people sharpening their understanding of what functional programming entails.

Just as I like to eat as wide a variety of food as possible (for instance, I drink three different kinds of tea at a time), I like to program in and learn about as many different languages as possible. For a while I was literally collecting every language definition I could lay my hands on. So, I am not taking any sides in this context, and neither am I reflecting the official Microsoft position on this topic. I am making some personal observations, pointing out the fundamental assumptions behind each possible interpretation of what functional programming means, thereby hopefully offering Channel 9 viewers the opportunity to learn and increase their understanding of the field of programming languages and type systems.

According to Graham Hutton (http://www.cs.nott.ac.uk/~gmh/book.html) functional programming is defined as:
Broadly speaking, functional programming is a style of programming in which the basic method of computation is the application of functions to arguments. There are at least two ways to interpret this broad definition. First of all you
can consider functional programming as a particular style of programming that values compositionality. The second way is to concentrate on the mathematical definition of
function, which implies purity. In order to bring clarity it is often good to look at both sides of the coin, and just like in coding, to really understand something you need to explore the edges of an idea. This is what Seth Godin (http://sethgodin.typepad.com/) calls looking for the "purple cow", or the "free prize inside". I just love coins, both heads and tails.

The Haskell interpretation of functional programming is programming with pure mathematical functions. The free prize inside Haskell is the observation that you can deal with effects by making all effects explicit in the type system and distinguishing
pure values from side-effecting computations of values via monads. These computations are themselves values that can be combined using pure functions, which allows programmers to define their own "control structures". This is why Simon Peyton Jones calls Haskell
the world's best imperative language (http://research.microsoft.com/~simonpj/papers/marktoberdorf/). Another way of looking at this is that Haskell enables Landin's "the next 700 programming
languages"
http://www.cs.utah.edu/~eeide/compilers/old/papers/p157-landin.pdf where you define a domain specific imperative sublanguage of effects (a monad plus non-proper morphisms) and glue together effectful computations using pure functions.

In Haskell, the type system keeps track of effects. However, as anyone who has programmed in Haskell has experienced, this purity comes at a high price. As soon as one sub-expression has an effect, every expression that depends on it has a potential effect as well, and the monadness percolates all the way up.
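As a toy illustration of that percolation (my own Python sketch, not how GHC actually works; every name here is invented): once a value is wrapped in an IO-like type that offers a bind operation but no public way to unwrap, any function that consumes it can only produce another wrapped value, so the effect annotation climbs all the way up the call chain.

```python
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class IO(Generic[A]):
    """A value tagged as effectful. Deliberately, no method returns a bare A."""
    def __init__(self, action: Callable[[], A]):
        self._action = action  # by convention, only a top-level runner calls this

    def bind(self, f: "Callable[[A], IO[B]]") -> "IO[B]":
        # Chain another effectful step; the result is still wrapped in IO.
        return IO(lambda: f(self._action())._action())

def read_config() -> "IO[str]":
    # Pretend this reads a file; the IO type records that an effect happens.
    return IO(lambda: "debug=true")

def parse(text: str) -> bool:
    # A pure function: same input, same output, no effects.
    return "debug=true" in text

# To apply the pure parser to an effectful input we must stay inside IO,
# so this definition has type IO[bool] -- and so will anything built on it.
debug_flag = read_config().bind(lambda t: IO(lambda: parse(t)))
```

In real Haskell the runtime itself plays the role of the single top-level runner (it executes main); nothing in this sketch enforces that discipline, it only shows its shape.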

The underlying glue language in Haskell is based on three principles: (a) abstraction, (b) parameterization, and (c) correspondence (http://people.cis.ksu.edu/~schmidt/text/info.html), which brings us to the functional programming style. In plain words, adhering to the functional programming style means allowing programmers to create closures of everything.

The purple cow in languages that use the functional programming style without insisting on absolute purity, such as Lisp, Scheme, ML, OCaml, F#, Erlang, Scala, Algol 68, Python, Ruby, Smalltalk, C#, Visual Basic, Java, ..., is that the compositional style allows for very concise programs, even in the presence of effects. As a "terminology purist", I like to refer to languages that embrace the functional style as "closure-oriented" programming languages. Note that it is no coincidence that there is really just one major pure functional language and many more pragmatic closure-oriented languages.

This pragmatism is invaluable in practice. It is much easier to deal with reality if you are pragmatic. For example, if you write a compiler in F#, you can generate fresh identifiers and still use a nice functional style, whereas in Haskell, because generating a fresh name is a side effect, you are forced to switch to a monadic style, which forces the programmer to explicitly order all effects. One of the problems is that Haskell lumps all effects together, so it is not possible to distinguish between "benign" effects that are commutative or idempotent, etc., and real "bad" effects.

While closure-oriented languages make certain effects implicit (such as evaluation, state, exceptions, threads, events), that leaves many other effects that need to be dealt with explicitly, for example non-determinism (the IEnumerable monad). So we increasingly see that languages that use the functional programming style also factor out certain effects explicitly and provide a monadic sublanguage for creating effectful computations. Query comprehensions in C# and VB directly correspond to monad comprehensions in Haskell and are applicable way beyond just queries.
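To make that correspondence concrete, here is a hedged sketch using Python's list comprehensions (my example, not from the interview): a comprehension with several for clauses is a comprehension over the list monad, and it desugars into nested binds in exactly the way C#/VB query comprehensions do.

```python
# A comprehension over the list monad: each "for" clause draws from a
# nondeterministic computation, like "from ... in ..." in a C#/VB query.
pairs = [(x, y) for x in [1, 2] for y in ["a", "b"]]

def bind(xs, f):
    """List-monad bind: apply f to each element and concatenate the results."""
    return [y for x in xs for y in f(x)]

# The same computation desugared into explicit nested binds:
pairs2 = bind([1, 2], lambda x: bind(["a", "b"], lambda y: [(x, y)]))

assert pairs == pairs2  # both are [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```

The only thing that changes between "query", "list comprehension", and "monad comprehension" is surface syntax; the underlying bind is the same.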

It's pretty amazing how much you can prove about a program in a high-level language such as C# or Haskell, given that it ultimately compiles to (or runs as) assembly code, which is in principle difficult to reason about.

I had hoped to hear a bit more about the good reasons why one doesn't write Windows programs in Haskell, though.. that's the one thing that I think is making it hard for me to understand FP. In general it seems that FP is like the philosopher's stone: once you master it you can do anything, and not only that, it will have free scalability and will be error-proof to boot.

Clearly this is not so, and I don't think even the most hardcore FP buffs really think that, but I'm just trying to describe how it feels to me as a simple C#/Java programmer.

So what are the reasons, according to the masters, why, say, Haskell is not used for general-purpose programming? Charles, I know you asked the question, but I think that could be a topic for a whole interview.

Again, I'm not trying to debunk FP or question its usefulness, but I think a more well-rounded discussion where both the pros and cons get equal time would be great. To get discouraged about FP just because one tried to apply it in the wrong area would be a great shame, and I think part of the reason why I'm yet to understand monads fully (besides being slightly daft ;P) is because I have been trying to apply it to the wrong real-world C# problems..

I'd really, really like to see more interviews that point out the cases where we (as in general-purpose programmers) already use monads today without realizing it. From what I understand from blogs/other C9 vids, that happens quite a lot; I've just not connected the dots yet.

Also, I'd like to see more of the monads-are-sort-of-generic-classes type examples (are they? if not, what are the differences/conditions?)
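For what it's worth, here is one way to see the "monads are sort of generic classes" intuition, as a hedged Python sketch (all names are mine, not from the videos): a monad is a generic container type plus two operations, unit (wrap a value) and bind (chain a computation), obeying a few laws. The generic class alone isn't the monad; the operations and their laws are what make it one.

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

@dataclass
class Maybe(Generic[T]):
    """A generic 'value or nothing' container, roughly Haskell's Maybe."""
    value: Optional[T]

    @staticmethod
    def unit(x: T) -> "Maybe[T]":
        # Haskell's 'return': wrap a plain value.
        return Maybe(x)

    def bind(self, f: "Callable[[T], Maybe[U]]") -> "Maybe[U]":
        # Haskell's '>>=': if there is no value, short-circuit; else feed f.
        return Maybe(None) if self.value is None else f(self.value)

def safe_div(n: float, d: float) -> "Maybe[float]":
    return Maybe(None) if d == 0 else Maybe.unit(n / d)

# Chaining: the "nothing" case propagates without an explicit check per step.
ok = Maybe.unit(10).bind(lambda x: safe_div(x, 2))   # Maybe(5.0)
bad = Maybe.unit(10).bind(lambda x: safe_div(x, 0))  # Maybe(None)
```

So the difference from an ordinary generic class is precisely the contract on unit and bind (the monad laws), which is what makes the short-circuiting compose.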

Thanks again, Charles and Erik.

FP remains arcane and unpopular for a bunch of reasons: historical and pedagogical, mostly. The grandfather of FP, LISP, got a bad reputation for performance in the early days, and suffered further guilt-by-association with the failures of AI in the 80's. This
guilt by association was completely undeserved, as the failures of AI had little to do with choice of programming language (I know of some underperforming AI programs written in FORTRAN).

FORTRAN and C early-on got the reputation as being THE choice languages for the discriminating programmer who had REAL problems to solve and needed to squeeze performance out of the machines. Many miracles in science and engineering were enabled because of
number-crunching FORTRAN programs and operating systems written in C (yes, Windows is a miracle of engineering, despite its warts and wrinkles).

Can you name a famous program or system written in LISP, ML (any dialect), Scheme, Haskell, etc.? There are some great programs out there, but not any famous ones, and new programmers are attracted by fame and naturally gravitate to the languages of the famous programs. (BTW, there is one really famous LISP program: the scripting language in AutoCAD.)

Finally, for people who are not fully trained in mathematics or physics, FP feels weird and hard to learn, especially if they have lots of programming experience in imperative languages. To a mathematician or physicist, FP is the most natural thing in the world.
We all get this "Oh, where has THIS been all my life" feeling when we encounter it. But to the business or systems programmer with no exposure even to LISP, it feels "inside-out." And it should, because FP starts with the math and SUBTRACTS abstraction where
necessary to get to the machine. Imperative programming starts with the machine and ADDS abstraction up to the math.

The situation is just like that of colors: there are additive colors (red, green, and blue) and subtractive colors (cyan, magenta, yellow). It's possible to make up any picture by composing each kind within its own system, but it's difficult to mix the two.

The future of FP is very bright. It's more than possible, it's plain EASIER to write real-world, high-performance apps and systems code in FP, because you don't have to keep reinventing over and over and over again all the math you need to make the programs
small, beautiful, and RIGHT. Programmers are becoming more mathematical by training, and they've already seen some of the ideas of FP in school by the time they hit the streets. And, finally, it won't be long before there are some famous programs on the street
and FP will capture some of the glitter and spotlight. There are already some miraculous F# programs (F# is not Haskell, but you get the drift)

Thanks for the reply. To answer your question: no, I can't, off the top of my head, name a popular program written in the languages you mention, but then again, I usually don't know what language a program is written in (I can guess it's C++/C# or Java, but I don't really know).

Again, I'm not trying to imply that FP is bad or something, but surely there must be areas where it's less useful than, say, imperative techniques? I think those areas are as important to highlight as the ones where FP is very useful, such as concurrency. Knowing the pros and the cons will make it easier for noobs such as me to understand FP, because we won't try to apply it in unnecessarily difficult situations.

As a side note, again from a semi-noob in both areas, I find it slightly odd that both dynamic languages and FP languages are having a second coming at the same time. They seem to be at opposing ends of the spectrum somehow.

Dynamic languages are, according to those who know, great because one is not "restricted" by the type system (like the one in Java/C#) and can do anything. But, say, Haskell (as I understand it) has a whole lot stronger type system, so for a dynamic guy, shouldn't that be even worse than, say, C#? :O

And it goes both ways: if you're a Haskell guy and you think C#/Java is bad because of all the potential side effects, wouldn't, say, Ruby be even worse? In Ruby one doesn't even know what type you get back from a function, much less whether it has a side effect.

And yet, both these paradigms are on the rise.. am I missing something? What's a simple programmer to do?

Again, I'm not trying to imply that FP is bad or something, but surely there must be areas where it's less useful than, say, imperative techniques? I think those areas are as important to highlight as the ones where FP is very useful, such as concurrency. Knowing the pros and the cons will make it easier for noobs such as me to understand FP, because we won't try to apply it in unnecessarily difficult situations.

Yes. It's worth pointing out that there are a number of areas in which, for the foreseeable future, functional programming techniques won't be playing a part.

Basically, as Brian said, languages like C and Fortran start with the machine and add from there, whereas languages like Lisp and Haskell start with the math and subtract from there, so in applications that are very close to the machine, languages like C and C++ are going to remain more popular for a reasonably long time. Examples include the low-level operating system, device drivers and the 3D rendering engine inside 3D programs (e.g. games).

This being said, most tasks are problem-oriented rather than machine-oriented, and it is these types of problems that are often best solved in languages like Haskell and Lisp. Where problem-oriented programs turn into rich client applications, languages like F# bridge the final gap between solving the problem and allowing the user to use it.

aL_ wrote:

As a side note, again from a semi-noob in both areas, I find it slightly odd that both dynamic languages and FP languages are having a second coming at the same time. They seem to be at opposing ends of the spectrum somehow.

Dynamic languages are, according to those who know, great because one is not "restricted" by the type system (like the one in Java/C#) and can do anything. But, say, Haskell (as I understand it) has a whole lot stronger type system, so for a dynamic guy, shouldn't that be even worse than, say, C#?

For someone like me who's worked quite a lot with C and C++, I know first hand the problems that can happen when you pass something around which isn't what you thought it was, usually crashing the application at point-of-use. The type system in C#, Haskell
etc counteracts this to a large extent - instead of crashing the application at point of use, it stops the compiler at compile-time, which is nearly always a better idea. If you have 100 errors in your program, wouldn't you prefer to see them before you ship
your application, rather than only catching 99 of them in test?

Languages such as PHP and other essentially type-less languages help you to make errors by their lack of type checking. Consider the code:

echo "Twelve is " + ("1" + "2");
echo "Twelve is " . ("1" . "2");

Note that in PHP the dot '.' is the string-concatenation operator, and + is a mathematical operator. In the first example (which happens remarkably often), I've accidentally used plus instead of dot. PHP doesn't mind, however, since it can convert a string to an integer silently. "Twelve is " doesn't look like an integer, so it becomes the value 0, "1" and "2" become 1 and 2 respectively, and although echo expects a string and (0 + 1 + 2) is an integer, it doesn't matter, because PHP can convert it straight back into the string "3"!

If this little code snippet occurred hidden away somewhere, and I didn't test for it, I might never notice that I've used this incorrectly. Haskell would object to my using + with strings and echo with ints respectively, and would never have run the first time.

Ultimately this means that type-less or mostly type-less languages are great for scripting small programs, but when it comes to proper applications of more than a couple of hundred lines, it becomes incredibly important to have strong type checking, to reduce the strain of testing, and to catch as many of the obvious little errors as possible before you ship your application to users.

aL_ wrote:

And it goes both ways: if you're a Haskell guy and you think C#/Java is bad because of all the potential side effects, wouldn't, say, Ruby be even worse? In Ruby one doesn't even know what type you get back from a function, much less whether it has a side effect.

And yet, both these paradigms are on the rise.. am I missing something? What's a simple programmer to do?

True, although you have to be careful with Ruby. It's actually strongly typed behind-the-scenes, which helps catch a number of errors, even if you don't force the types at compile time - it can usually infer them from context.

Basically, the benefits of FP and strong static typing come through most prominently in large, real-world apps. They catch silly mistakes and abstract things like concurrency so the programmer doesn't have to think about it.

The languages where strong static typing is not commonly used are nearly invariably designed to be used for small, quick scripts, such as Bash, or on webpages, such as PHP and JavaScript. One could argue that these uses are sufficiently small that the benefits of FP and static typing don't come through (although I would disagree).

In all real-world applications there are errors (yes, even in Microsoft programs); however, the number of these can be reduced by extensive testing (expensive) or by automatic detection, which is what static typing is designed to do. By enforcing this through strong static typing, you guarantee that errors are errors of logic or input, rather than silly errors such as forgetting what parameters a function takes, using the wrong operator, etc.

It took me a while to understand what was being expressed in the video.. but still I think I managed to miss the usage concepts..

Ok, so we have functions (methods?) that are more transparent about how they work; they provide an overview of how the return data is acquired. I don't think you said anything about whether exceptions are noted here (like with Java, as was noted)..

So ok, great, we have a system where everything is 'honest' about it's workings, why is that useful?

I'm glad you put the term honest in quotes. I don't think it's the precise way to express what Erik is talking about (then again, I'm not precisely qualified to debate Erik - he's a programming language expert; I'm not). Perhaps a clearer (and less contentious) way to think about this is in terms of surprise-capable (unexpectedly side-effectful) versus surprise-free (a priori side-effectful, or side-effect free (also known as 'useless')) functions. So, if you call a 'pure' function then you can expect, and rely upon up front (prior to execution of said function - a priori...), no invocations of Surprise that could end up breaking your algorithm or befuddling your critical operation, because, well, that's what Surprises are all about - you are not expecting them, and therefore you have no way to handle them, since you didn't write code that expects them (and code that calls your code is in the same boat, so the Surprise bubbles up through the system surprising callers, which is not very composable, right?)

Does this make sense?

C

Charles speaks the truth

I like to focus on the type system: if the type system cannot tell you the difference between a side-effecting function and a non-side-effecting function, then the type system isn't telling you "the whole truth." That is, it's not telling you everything about
your program that it is possible, mathematically, to say.

Dynamic languages are, according to those who know, great because one is not "restricted" by the type system (like the one in Java/C#) and can do anything. But, say, Haskell (as I understand it) has a whole lot stronger type system, so for a dynamic guy, shouldn't that be even worse than, say, C#?

I think there is a lot of misunderstanding "in the air" about "dynamic typing." I think it's often confused with "weak typing." Scheme is a great example of a "dynamically typed" language that is also (mostly) "strongly typed."

"Dynamically typed" just means that types are carried around with data and are checked at run time, not at compile time.

"Strongly typed" just means that there are not a lot of implicit conversions going on. If you call a function that expects a double and you pass it a string, you will get an error. In a statically typed, strongly typed language, you will get the error at compile
time. In a dynamically typed, strongly typed language, you will get the error at run time.

"Weakly typed" just means the compiler and run time will do lots of implicit conversions for you, so, it will convert your string to a double or your double to a string or what not. You may still get an error at run time, but the conversion routine will throw
it rather than the ultimate function you're trying to call:

Math.Sin("32.6") --> Math.Sin(StringToDouble("32.6")), rewritten by compiler if weakly typed, either static or dynamic.
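As an aside (my example, not Brian's): Python is a handy illustration of the dynamically-but-strongly-typed quadrant described above, since it refuses the implicit conversion and raises the error at run time, at the call site.

```python
import math

# Dynamically typed: nothing stops us from writing this call.
# Strongly typed: no implicit string-to-double conversion happens, so it
# fails at run time with a TypeError instead of being silently rewritten.
try:
    math.sin("32.6")
except TypeError as err:
    print("runtime type error:", err)

# The explicit equivalent of the StringToDouble rewrite above:
result = math.sin(float("32.6"))
```

A weakly typed language would perform the string-to-number rewrite for you; a statically typed one would have rejected the first call before the program ever ran.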

So ok, great, we have a system where everything is 'honest' about its workings; why is that useful?

It's useful because it not only allows more robust systems, but also allows various other "optimisations".

This idea of being free of (bad) side effects permeates most of modern-day computing. For example, at the macro level, we have operating systems with protected memory, privilege levels, etc., which then allows robust multi-tasking and so on.

Moving to the micro level, we have instruction-level parallelism, which CPU designers (at the chip, transistor level) needed in order to improve throughput [1]. Things like superscalar execution, pipelining, out-of-order execution, etc., I believe, can all trace their roots back to this idea of operations free of (bad) side effects. Sure, they had to introduce a slew of peripheral hardware to cache or hide the various (good and bad) side effects, but they did it.

Now we're moving into the age where they are trying (again) to do it at the programmer level. At least, that's my take on it...

[1] The root causes for this are many. But from my perspective, the primary cause is the unwillingness of the 'industry' to move to vastly faster memory sub-systems - orders of magnitude faster, in some cases. (And yes, it would be extremely expensive, which is why it's not being done.) So far, they've been able to hide the huge differences between the throughput achievable at the various 'levels' in the memory hierarchy: register to register, register to L1 cache, register to L2 cache, etc.

Thanks Erik. Helped me understand more about FP and what you and BrianB have been talking about in last few videos. I think this was the first time I heard clearly that "state" is needed and not bad. We "state" folks are not all in the weeds rubbing
our gold rings. At the same time I am more confused (which is good):

1) If we had "public void IO GetSum(1,2,3) {...}", does this really help? In a typical Windows app, most functions will now have IO tacked on. But does that make my app better/easier to understand? Moreover, I can still lie. I can say a func Throws Exception1, but still throw Exception2. Or would the compiler prevent such madness from me? I am a baby on FP, but I can find precious few functions that would not need IO in my current WinForms app (btw, using Linq and a local DB for disconnected operation - thanks much!).

2) If I declare all possible exceptions that a function "may" throw, a moderately complex function may have reams of ThrowsException attributes. By calling .NET libs which may call other libs, which may call Win32 APIs, you could end up with all the Exceptions in the world under that call tree. Not to mention some Exceptions we can't know about in advance (i.e. OutOfMemory, ThreadInterrupt, ThreadAbort). So we are kind of forced to lie with the imperfect information we have and external forces. How would this be handled, or did I miss the idea (which is probable)?

3) I heard state was good and FP was good, and started to see some light on "IO". But I did not "glean" the solution path from the video. Do you see a clear path opening here?

4) It seems like the Windows message pump was ahead of its time! Single-threaded with a message queue (rock on, Erlang). Even with that "good" design, we still have challenges in that area (i.e. invokes, deadlocking ourselves, livelocks, etc.). As you say, given just a sliver of daylight, we can/will hang our proverbial selves.

If you have 100 errors in your program, wouldn't you prefer to see them before you ship your application, rather than only catching 99 of them in test?

I would prefer to find errors at compile time; I'm a static-typing kind of guy, and I totally agree that it's better to find errors at compile time and not run time. But a lot of people prefer languages without any static checks at all. Are they "wrong"? Or are we (the static guys) "wrong"? Perhaps there is something in between? I just think that's an interesting discussion.. (and one that can lead to infinite flame wars, I'm sure)

evildictaitor wrote:

True, although you have to be careful with Ruby. It's actually strongly typed behind-the-scenes, which helps catch a number of errors, even if you don't force the types at compile time - it can usually infer them from context.

Alright, I don't know very much about the internals of Ruby, but I meant dynamically (i.e. not statically) typed languages in general.

evildictaitor wrote:

One could argue that these uses are sufficiently small that the benefits of FP and static typing don't come through, (although I would disagree).

I do too. I vastly prefer static typing to dynamic.

But my point was that a lot of people feel that dynamic typing is the way to go, and looking at the JAOO vids, for example, one gets the feeling that more dynamically typed languages are coming back (as well as functional languages). The DLR, IronPython and IronRuby are hot stuff right now for general computing too, it seems, if one listens to some of the guys at JAOO :)

I'm not dissing FP or static typing at all; I'd pick static typing over dynamic any day of the week. I just think it's interesting that both types of programming "are the future" at the same time. It'd be interesting to have an interview with the masters (Brian, Erik and Anders among others) to talk about that.. or perhaps to hear some dynamic-typing buffs' views on, say, Haskell and F#.

But yeah, I share your feeling that dynamically typed languages are best suited for small programs/scripts and statically typed languages are better for larger ones.

Here's the doctor's prescription for you:
In your daily work, use C#, VB, and F#.
Get, read, and work EVERY problem in Structure and Interpretation of Computer Programs, by Abelson and Sussman. (Use PLT Scheme to do the problems.)
When done with SICP, get, read, and work EVERY problem in Expert F# by Don Syme, et al. Use F#, obviously.
When done with that, come back for a Haskell prescription. The stuff above should take you three years.

I think there is a lot of misunderstanding "in the air" about "dynamic typing." I think it's often confused with "weak typing."

True, true. I do know the difference between static/dynamic typing and strong/weak typing (believe it or not), even though I slipped up and used the wrong term in my post. My bad.

What I meant was that for a Haskell programmer, C# seems to have weak typing because it's not visible in the types whether there are any side effects.. but in, like (as you mention), JavaScript, VB (not VB.NET) and possibly C/C++, the types are even weaker; there you don't even know what you get back. True? (This is meant as an observation.)

After some consideration it seems to me that FP and dynamic typing don't really conflict with one another. I would simply get "can't implicitly convert IO<int> to int" errors at run time and not at compile time, right?

But isn't it sort of the point of functional programming to prove the correctness of a program by using static analysis? (I'm sure the answer is "sort of", but I hope you get my point.) Doesn't use of dynamic typing sort of negate that?

That would make a very interesting interview.

Oh, and btw, is it really impossible to be dishonest in Haskell (or F# for that matter)?
Can't you do something like:

(sorry, I don't know Haskell syntax) but perhaps the IO monad doesn't have something like "Value" to get the value inside the monad? Or is there something else stopping you from doing something like this?

but isn't it sort of the point of functional programming to prove the correctness of a program by using static analysis? (i'm sure the answer is "sort of" but i hope you get my point) doesn't use of dynamic typing sort of negate that?

My totally un-diplomatic answer is: Yes, it does, which is why dynamically typed languages will prove to be a dead end for anything but the simplest of shell scripting.
The benefits of dynamically typed languages usually have nothing to do with the lack of static type checking (easy access to data structures, high level expression-based syntax, etc.). And almost all of those benefits can be had in statically typed languages
too.

Programs are getting more complex, not less, and anything that can go wrong, will eventually go wrong (Murphy's law). So the more potential crashes that the compiler can rule out entirely with 100% guarantees, the better. We still need testing, but testing is a poor substitute for proof (especially when those proofs can be had for a price as cheap as just running the type checker).

Furthermore, types aren't a burden, they're a big help. Do people complain about the burden of drawing their OOP classes out in UML before starting to program? Types give an overall shape of a program (especially in FP - UML may be better when everything is based on state).

We need more static checking, not less.

</rant>

aL_ wrote:

oh and btw, is it really impossible to be dishonest in haskell (or f# for that matter)?
can't you do something like:

(sorry i don't know haskell syntax) but perhaps the IO monad doesn't have something like "Value" to get the value inside the monad? or is there something else stopping you from doing something like this?

Exactly, the IO monad simply doesn't have a way to get out of it*. It's "sticky" and infects anything it touches. However, there are other monads that do have functions that can "run" a monadic value. For example, the ST monad can work with mutable variables (only; no networking, disk writes, etc.), and once you've written your algorithm that requires mutable state, you pass it to "runST", which will give you back a pure value. The STM monad can deal with concurrent access to shared mutable state (only), and once you've written an STM action (a "transaction") you pass it to "atomically", which will run it atomically and return... an IO action! So you can get from ST to pure, and from STM to IO, and you can get from pure to any monad via the "return" function.

This gives you a nicely "layered" program, where you have this thin layer of IO at the bottom that does your IO, and then some STM on top of that which handles your concurrency synchronisations, then the bulk of your program is pure, but some of those pure
calls may use the ST monad for local encapsulated mutable state.
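A minimal sketch of the ST pattern just described (the function is my own illustration, not from the thread): a local mutable accumulator whose mutation never escapes, so runST can hand back an ordinary pure value.

```haskell
import Control.Monad.ST
import Data.STRef

-- Sum a list using a local mutable accumulator. The STRef never
-- escapes the ST computation, so runST safely returns a pure Int.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
```

From the caller's point of view sumST is just a pure function; the mutation is an invisible implementation detail.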

* actually there's unsafePerformIO, which lets you escape when you really need it, and when you really know what you're doing. If you do use this, you must absolutely guarantee that the thing you're wrapping actually does have pure semantics (and even then things can go wrong - so you really need to know what you're doing!).
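To make the "sticky" point concrete, here's a small sketch (the names are mine): binding an IO action keeps you inside IO, and unsafePerformIO is the escape hatch described in the footnote above.

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Binding an IO action keeps you in IO: there is no Prelude function
-- of type IO a -> a, so the best you can get here is IO Int, not Int.
lineLength :: IO Int
lineLength = getLine >>= \s -> return (length s)

-- unsafePerformIO asserts that the wrapped action is observably pure.
-- It is safe here only because the action really has no effects.
answer :: Int
answer = unsafePerformIO (return (6 * 7))
```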

A question for the monad experts (I don't know a lot about monads but I want to learn):

How are monads combined, and what would that look like in C#?

Say, for instance, that I have an IO layer over a file system or a database that returns the IO monad, and I want to combine that with the Maybe monad. Do I create a new monad that combines them, or do I wrap the Maybe monad on top of the IO monad?

Say, for instance, that I have an IO layer over a file system or a database that returns the IO monad, and I want to combine that with the Maybe monad. Do I create a new monad that combines them, or do I wrap the Maybe monad on top of the IO monad?

I'm not sure I can convert all of this into pseudo C# so I'll just use Haskell. Shouldn't be too hard to follow.

In Haskell you have something called "Monad transformers". It's basically a monad which takes an "inner monad", and lets you access that inner monad using the function "lift". The convention is that the monad transformer's types end with "T".
Sadly there is no MaybeT, but it can easily be defined ( http://www.haskell.org/haskellwiki/New_monads/MaybeT ). I'll use the ErrorT transformer instead.

So a transformer for "errors" is called "ErrorT". It basically takes an error type, and an "inner monad", and allows you to call "throwError" to signal an error, and if anything signals an error the whole thing will signal an error.

Anyway, here's the Haskell type when the error type is a string (like an error message).

type ErrorIO a = ErrorT String IO a

This just declares a type synonym (for convenience) called ErrorIO with one type parameter, and says that it's equal to the error monad with an error of type String, and an IO as the inner monad. These transformers nest, so you
can wrap another transformer around the ErrorIO monad to stack even more features on.
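A sketch of a "test" function in this combined monad might look like the following (the function body is my own illustration; I'm using ExceptT, the replacement for ErrorT in current mtl, so the sketch actually compiles):

```haskell
import Control.Monad.Except
import Control.Monad.Trans.Class (lift)

-- ExceptT plays the role ErrorT plays in the discussion above.
type ErrorIO a = ExceptT String IO a

-- "lift" reaches the inner IO monad; throwError signals failure
-- in the outer error layer.
test :: Bool -> ErrorIO Int
test shouldFail = do
  lift (putStrLn "Hello!")
  if shouldFail
    then throwError "Error!!!"
    else return 42
```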

Notice the "lift" in there which lets you get at the inner monad (IO in this case, which we use to print "Hello!" to the terminal).
Normally when you do this you'd define shorthands for all the methods you want to use from the various inner monads so you don't have to use "lift" everywhere (which can get messy when you want to access a monad that's several layers deep since you have to
apply "lift" several times). Or if you want easy access to any monadic actions in the inner monads you would define shorthand "lift" functions for each of the inner monads so you at least only have to use a single lift function ("liftIO", "liftState", etc.).

And here's some exploring in GHCi (interactive GHC), which "runs" the monadic value returned by the function test. This simply "unwraps" the transformer and returns the inner monadic value (if there were no errors):

This just tries the "test" function in the two cases: the first one returns "Left", which signals that the ErrorT monad failed, and the error message is "Error!!!". The second one returns "Right" (memory rule: "Right means Correct") and the value that came out of the inner monad.

*Main> :type runErrorT (test False)
runErrorT (test False) :: IO (Either String Int)

This just shows the type inferred for the return value when running the outer error monad. As you can see it unwraps the outer monad, and returns an action of the type of the inner monad, that returns "Either" the error type (String) or the value type (Int).

This uses the "catchError" function to catch an error, which just prints the message "error!" onto the screen, and then returns 0 (without errors), and as you can see we get "Right 0" at the end. The stuff to the right of catchError is a lambda expression which takes the error, and returns a monadic expression that handles it (which may throw a new error).
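A minimal, self-contained sketch of that catchError pattern (again using ExceptT, ErrorT's modern replacement; the names are mine):

```haskell
import Control.Monad.Except
import Control.Monad.IO.Class (liftIO)

type ErrorIO a = ExceptT String IO a

failing :: ErrorIO Int
failing = throwError "Error!!!"

-- Catch the error, print "error!" via the inner IO monad, and
-- recover with 0, so the overall result is Right 0, not Left.
recovered :: IO (Either String Int)
recovered = runExceptT (failing `catchError` (\_err -> do
  liftIO (putStrLn "error!")
  return 0))
```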

My totally un-diplomatic answer is: Yes, it does, which is why dynamically typed languages will prove to be a dead end for anything but the simplest of shell scripting.
...We need more static checking, not less.

That's exactly my feeling as well.. in college we had a course in smalltalk and our professor happily exclaimed that "it's a great language, you can change anything! you can change everything from how strings behave to how the core of the runtime behaves" and i was like.. what? how can you possibly test that? or how could you possibly run anything not written (and remembered) by yourself, if some piece of code somewhere can change ToString to return strings backwards (just an example)? i got the answer that "of course you wouldn't do anything like that!" well.. let's hope so.. perhaps nothing so extreme but.. still.. i do think there is a place for dynamic languages in gp programming and i'm sure there are lessons to be learned there, but as the sole language for large scale programs? i don't know..

perhaps in c# 4.0 we'll have an "impure" (or "pure") keyword that marks the purity of a return value. i really do feel that the optimum is some kind of middle road between totally pure and impure..

sylvan wrote:

Exactly, the IO monad simply doesn't have a way to get out of it*. It's "sticky" and infects anything it touches.

cool, thanks for clearing that up. it seems more and more like the built-in monads are things that make haskell programs practical, just like the .net framework and the java framework make c# and java practical. i wonder if you could port those monads to c#.. they would make a nice addition to the .net framework i think (perhaps they are already in there somewhere?)

Erik, thanks for the great video. However I have a question, you said that we can integrate FP with imperative programming by ignoring the monads (IO()) from the type. I think this is feasible, but you can't do the other way around right?

Erik, thanks for the great video. However I have a question, you said that we can integrate FP with imperative programming by ignoring the monads (IO()) from the type. I think this is feasible, but you can't do the other way around right?

You mean integrate imperative programming into functional programming by inserting a monad into the type signature? That's certainly feasible.

For a good example of a functional programming language living alongside imperative programming, I'd really recommend having a bash at F# - it's all the power of C# with all the conciseness of Haskell.

Although I'll never forgive Microsoft for having more people working on Haskell than Sun has working on Java

not exactly. i mean, if right now i have a solution which i built using C#, and suppose I want to make a 'parallel' library using Haskell/CLR (if there is such a thing), or an FP language. Erik said that it can be done by compiling the code to an assembly, and when I load it up in my C# solution, ignoring the monads (IO(), etc.) will make it usable (or maybe compiling it to CIL will strip the monads altogether). But how about the other way around? Suppose that my FP library needs to use a business object (not necessarily a complex one, maybe a transfer object) from my C# assembly; can it be done?

not exactly. i mean, if right now i have a solution which i built using C#, and suppose I want to make a 'parallel' library using Haskell/CLR (if there is such a thing), or an FP language. Erik said that it can be done by compiling the code to an assembly, and when I load it up in my C# solution, ignoring the monads (IO(), etc.) will make it usable (or maybe compiling it to CIL will strip the monads altogether). But how about the other way around? Suppose that my FP library needs to use a business object (not necessarily a complex one, maybe a transfer object) from my C# assembly; can it be done?

There is such a thing - it's called F#, which is a sort of Haskell/CLR thing.

If you're developing a program in F#, you can also do the reverse, and load a program written in C# or VB.NET - since both are compiled to CIL, F# reflects out the program, and attempts to work out which functions are monoidal and which are computational, and
labels them accordingly.

So in conclusion, yes, an F# application can use C# logic, and a C# solution can use code written in F#.

I'm just joining this discussion. But I can't help being reminded of where I was at in 2002 with C++, claiming that template-based metaprogramming was revolutionary, when in fact it was too complicated for most people to use to solve real world problems, and as a result of that (and other things) a lot of those developers moved to the much more "designed with the needs of the developer" C# / .NET 1.1.

I bought into pared-down generics; it made sense to type collections for many reasons. It solved a very common problem.

Now, I think that some of this stuff won't ever be used as much more than an intellectual curiosity - just like bind2() and whatnot wasn't ever used in C++ on any large scale, stl/boost or whatever. They just didn't meet any real need in particular, other than the need of the developers to feel smart - and they had developers writing code that DID NOTHING (it was generic though, and it WOULD work on anything), but would work only if YOU used it correctly (assuming you could figure out how). Which restates the last bit - lots of work went into writing generic libraries which never were used because they were felt to be WAY out of touch with the needs of the joe blow coder.

A path similar to this killed C++ in my mind. Why do you feel that this won't end in the same result?

Also - why do you even have to bother the developer with this at all? Can you do this using the intermediate representation, just like out-of-order execution of CPU instructions?

The last thing that a developer likes to see is things in his code that he doesn't fully understand - those most likely get rewritten in the real world, since who wants to maintain code they don't understand?

I haven't read all of the posts. But what does IO do besides telling me/compiler/whoever that the function is impure?

For example, if I define a function which returns int, but could also have exceptions, then from a pure functional approach I can also get a superset of the two, IE = {int U Exception}, and let my function return IE. In that case it would be very "honest", wouldn't it? Why not have some suffix? int impure_function(x)....?

I haven't read all of the posts. But what does IO do besides telling me/compiler/whoever that the function is impure?

For example, if I define a function which returns int, but could also have exceptions, then from a pure functional approach I can also get a superset of the two, IE = {int U Exception}, and let my function return IE. In that case it would be very "honest", wouldn't it? Why not have some suffix? int impure_function(x)....?

The point of the type-purity is not to help the programmer (e.g. function suffixing), but to help the compiler to optimise your code.

With regards to exceptions, they are generally a thing of the imperative world, since undefined behaviour is exceptional behaviour inside functional languages by default.
It seems a lot of people are getting hung up on the notion of "honest". The point is not that the programmer is "honest" about his intentions - it's that the compiler can prove things about the program. The contracts in Haskell and in other functional languages which employ such safeguards cannot be trivially broken, so it guarantees honesty - breaking it takes some real effort.
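Incidentally, the "superset" return type proposed in the question above (IE = {int U Exception}) is essentially what Haskell's built-in Either type gives you. A minimal sketch (the function name is my own):

```haskell
-- Either String Int is the honest "int U error" type: a value is
-- always either a Left error or a Right result, and the caller is
-- forced by the type checker to handle both cases.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)
```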
