Reminds me of a talk that John Carmack recently gave about functional programming in video game development. It was interesting to hear that it actually might have some real benefits, and he apparently rewrote the Wolfenstein 3D game engine in a functional language (can't remember which one he used right now) and it ended up being much shorter than the original C code, and to him at least, more clear.

Tynach wrote:Reminds me of a talk that John Carmack recently gave about functional programming in video game development. It was interesting to hear that it actually might have some real benefits, and he apparently rewrote the Wolfenstein 3D game engine in a functional language (can't remember which one he used right now) and it ended up being much shorter than the original C code, and to him at least, more clear.

I don't know much about game design, but at least naively it seems like it would be way harder in functional. WC3, a game I am very familiar with for reasons I won't get into, has an odd combination of a C-family base (C++ iirc) and a functional scripting language (JASS2), which makes some sort of sense (since scripting typically follows an intuitively functional format), but this already causes problems when you would really like to define new objects like triggered projectiles (the hardcoded projectiles are not accessible in JASS) or when you want properties of objects not available in the scripting language (like damage type). But maybe I can just chalk that up to Blizzard making a buggy, incomplete, ten-year-old scripting language.

E: On an unrelated note, is it easier to code parallelism in imperative programming, or does it not really matter? This is becoming a big deal now even in essentially linear games, due to high CPU demands and the widespread use of multicore processors. The only firsthand experience I have with the complications of parallelism was with a poorly-optimized and ill-explained Fortran subroutine in the GROMACS molecular dynamics suite. P.S. Fortran sucks.

E2: Or GPGPU, for that matter.

Last edited by Eebster the Great on Fri Sep 27, 2013 4:22 am UTC, edited 1 time in total.

edenist wrote:I feel like I really should put more effort into learning some more functional programming.

Ever since my seventh-grade QBASIC lesson, I have found functional programming more intuitive for simple tasks. I am very much not a CS major, so I never actually bothered to learn serious languages like C, LISP, and Haskell. But both functional and object-oriented programming make a lot more sense to me than imperative programming, which typically leaves me scratching my head.

Um, tail recursion is easily done in assembly, and with gotos. WTF does it have to do with FP? It's usually not even a language feature but an internal compiler optimization.
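What that compiler optimization amounts to can be sketched in Python (which itself does not eliminate tail calls; the function names here are made up for illustration):

```python
def fact_tail(n, acc=1):
    # Tail-recursive: nothing is left to do after the recursive call,
    # so a tail-call-optimizing compiler can reuse the stack frame.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_goto(n):
    # The transformation such a compiler performs: rebind the
    # "arguments" and jump back to the top -- i.e. a plain loop/goto.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

assert fact_tail(10) == fact_goto(10) == 3628800
```

The two functions are the same computation; the second is what the first becomes after the call is turned into a goto, which is why the feature maps so directly onto assembly.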

It usually isn't Congress or the State that tries to abridge free expression or free speech, [...] actually, in the present situation, the main threat to expression comes from public opinion.~Christopher Hitchens

Tynach wrote:Reminds me of a talk that John Carmack recently gave about functional programming in video game development. It was interesting to hear that it actually might have some real benefits, and he apparently rewrote the Wolfenstein 3D game engine in a functional language (can't remember which one he used right now) and it ended up being much shorter than the original C code, and to him at least, more clear.

That's not terribly surprising, and might not have anything to do with it being a functional language - in the early-mid nineties we figured out that programming languages could be fast without requiring really low-level programming, and C is from way before that revolution. To get a real comparison, you'd probably need to get him (or someone similar) to rewrite it in some non-functional modern languages, and probably also his current preferred C++ style.

The prevalence of imperative programming languages is one of the big things that has kept me from really grokking programming in a thorough way. (I don't consider myself a programmer, even though I have an undeserved "Software Engineer" title on my resume). OOP still seems too much like an imperative programming language to be any more approachable (though it certainly seems more powerful and flexible than non-OO imperative styles).

But just writing a huge equation or logical statement composed of smaller ones nested deeper and deeper, and knowing that when this program is run it will just evaluate that expression according to the proper order of operations and spit out the correct output? That makes perfect sense to me.

This is also why difficulties with threading programs for multiprocessing always struck me as counterintuitive, because in my mind -- not really understanding how exactly imperative languages get translated into the series of boolean logical functions that everything boils down to on the hardware level -- it seems like the computer itself must be inherently functional. So why doesn't it automatically just evaluate all the first tier of sub-expressions in parallel, then evaluate all the next tier of sub-expressions in parallel, and so on? E.g. if the logical expression the program is ultimately equivalent to is of the form:

f(g(h(i, j), k), l)

it seems like it should just naturally evaluate as many of i, j, k and l at once as possible, then h, g, and f in that order.
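That tier-by-tier evaluation can be sketched with an ordinary thread pool; i, j, k and l here are hypothetical pure leaf functions standing in for the innermost sub-expressions (a Python sketch, not how any real runtime schedules work):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical leaf sub-expressions -- pure, so safe to evaluate in any order.
def i(): return 2
def j(): return 3
def k(): return 5
def l(): return 7

def h(x, y): return x + y
def g(x, y): return x * y
def f(x, y): return x - y

with ThreadPoolExecutor() as pool:
    # First tier: all leaves submitted in parallel.
    fi, fj, fk, fl = (pool.submit(fn) for fn in (i, j, k, l))
    # Then h, g, f in dependency order, each waiting only on its inputs.
    result = f(g(h(fi.result(), fj.result()), fk.result()), fl.result())

print(result)  # (2 + 3) * 5 - 7 = 18
```

Only purity makes this safe: because the leaves cannot interfere with each other, the order and overlap of their evaluation cannot change the answer.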

Imperative programming languages seem like an awkward attempt to tell a computer what to do like it was a human following orders, rather than telling it what problem to solve like the sophisticated problem-solving machine it is.

Pfhorrest wrote:This is also why difficulties with threading programs for multiprocessing always struck me as counterintuitive, because in my mind -- not really understanding how exactly imperative languages get translated into the series of boolean logical functions that everything boils down to on the hardware level -- it seems like the computer itself must be inherently functional, so why doesn't it automatically just evaluate all the first tier of sub-expressions in parallel

It does. You do get a lot of low-level parallelism "for free." Even 20 years ago, CPUs were a long way from the abstraction of executing one instruction at a time per CPU core.

- First there's the pipelines: executing an instruction involves a sometimes deep sequence of sub-steps, from decoding it to figure out what kinds of resources will be needed (integer math, RAM/cache fetch, RAM/cache store, I/O busses and locking, register manipulation, etc.), to actually using those CPU sub-units to do what they need to do. There's no need for one instruction to make it all the way through the pipeline before the next one can start. There can be many instructions in a pipeline at once, at different stages.

- Then there's actual parallelism: multiple pipelines and multiple execution units of a given type. Sometimes the CPU can tell right away that 2 instructions won't interfere with each other's results, and thus they can be executed entirely in parallel. E.g., if the CPU has at least 2 integer math units, it can run two pipelines, both doing instructions that need integer math. There's a large element of resource scheduling: deciding which instructions to put into which pipeline, to minimize "pipeline stalls" in which an instruction is "waiting" for a given execution unit type to become available. (There are of course other stalls, like waiting for RAM cache, which also need to be minimized by advanced planning on the part of the CPU.)

Sometimes the first instruction may or may not affect the second and third, in which case the CPU may speculatively execute the second and third instructions but not actually commit to their results until the first instruction finishes and the CPU finds out whether doing the second one was a waste of time. A twist on this is branch prediction, in which the CPU sees a conditional branch instruction ("if x = y then goto z"), guesses which way it's going to go, and speculatively starts executing stuff at location z, again not committing to it until it has finished determining the result of "x = y".

- And sometimes you get parallelism by having your compiler be smart enough to explicitly ask for it. For example, "take each of these 25 thousand bytes that represent an image, and fade each pixel a little step toward black by subtracting a small number from each byte." That's where technology with names like MMX, SSE or Altivec comes in. You say "loop through all the bytes of this memory" and your compiler says "obviously the order doesn't matter, so just set up the MMX unit to do this in large chunks of bytes at once."
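The pixel-fade example can be written out in scalar form; this is the per-byte meaning that a vectorizing compiler (or the MMX/SSE/AltiVec units) applies in wide chunks at once (a toy sketch using clamped subtraction, not any real graphics API):

```python
# "Fade each pixel a step toward black": subtract a small constant from
# every byte, clamping at zero. A vectorizing compiler does exactly this
# operation on many bytes per instruction; here is the scalar meaning.
pixels = bytes([0, 3, 10, 255])
faded = bytes(max(0, p - 8) for p in pixels)

assert list(faded) == [0, 0, 2, 247]
```

Because the loop body touches each byte independently and in no particular order, the compiler is free to chunk it however the hardware likes, which is the point being made above.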

...Though none of this has much to do with functional programming per se.

Pfhorrest wrote:Imperative programming languages seem like an awkward attempt to tell a computer what to do like it was a human following orders, rather than telling it what problem to solve like the sophisticated problem-solving machine it is.

A computer isn't a sophisticated problem-solving machine. It's a bunch of binary storage with varying levels of speed and persistence hooked up to some decision logic that reads orders out of that storage, loads and stores data as requested, and tells some extremely fast arithmetic units what operations to perform on what data. It's a lot easier to take an imperative program and convert it into a series of condition tests and arithmetic operations that the CPU can understand than it is to take something like Haskell, a Lisp dialect, or an ML derivative and turn it from the idealized mathematical representation that the programmer is working with into a series of orders for the CPU to follow. Now, the fact that our idiot-savant-in-a-box has managed to fool you into thinking it's a sophisticated problem solving machine is a testament to about 60 years of work on compilers and programming language design, but I don't see any chance of us moving from a Von Neumann architecture to something that has anything to do with the underlying math of functional programming anytime soon.

zeruslord wrote:That's not terribly surprising, and might not have anything to do with it being a functional language - in the early-mid nineties we figured out that programming languages could be fast without requiring really low-level programming, and C is from way before that revolution. To get a real comparison, you'd probably need to get him (or someone similar) to rewrite it in some non-functional modern languages, and probably also his current preferred C++ style.

Five years ago I wrote a small helper utility in C to play back network traffic captured by tcpdump/Wireshark. Then some guy decided to rewrite it using the local corporate OO style. It ended up five times bigger, much less easy to read and lacking some important features of the original solution that didn't fit well into the object model chosen by him.

What I remember from my own functional programming course is that it's a lot of fun, but also impractical as hell. I can see functional programming working for things like solving Project Euler problems, but for large projects of hundreds of thousands or millions of lines of code? I don't see the benefit. I'd like to be convinced though. If someone can make a good case, I'd be interested in hearing it.

The fact is computers are inherently imperative. They execute instructions in order, one after another. Sure there's hyperthreading and branch prediction, but those don't fundamentally alter that picture. It seems to me that imperative programming is just more in line with how computers actually work. It's also more in line with how humans actually work. Or with how most programs function in the real world.

It's one of those irregular verbs, isn't it? I have an independent mind, you are an eccentric, he is round the twist- Bernard Woolley in Yes, Prime Minister

zeruslord wrote:That's not terribly surprising, and might not have anything to do with it being a functional language - in the early-mid nineties we figured out that programming languages could be fast without requiring really low-level programming, and C is from way before that revolution. To get a real comparison, you'd probably need to get him (or someone similar) to rewrite it in some non-functional modern languages, and probably also his current preferred C++ style.

Five years ago I wrote a small helper utility in C to play back network traffic captured by tcpdump/Wireshark. Then some guy decided to rewrite it using the local corporate OO style. It ended up five times bigger, much less easy to read and lacking some important features of the original solution that didn't fit well into the object model chosen by him.

That's pretty much a classic C problem, and corporate OO styles are usually designed to make really large order processing and payroll systems less bad to maintain. I'm not claiming C is useless, just that solving higher-level problems should be done in higher-level languages.

Diadem wrote:I can see functional programming working for things like solving Project Euler problems, but for large projects of hundreds of thousands or millions of lines of code? I don't see the benefit. I'd like to be convinced though. If someone can make a good case, I'd be interested in hearing it.

The real killer app for the statically typed ones is compiler engineering - they've got a bunch of language features that make dealing with parse trees extremely easy, and there's little difficulty with the immutable data because each step in compilation generates new output code. There aren't a lot of really large-scale projects in functional languages, which is unfortunately a self-perpetuating state of affairs.

Tynach wrote:Reminds me of a talk that John Carmack recently gave about functional programming in video game development. It was interesting to hear that it actually might have some real benefits, and he apparently rewrote the Wolfenstein 3D game engine in a functional language (can't remember which one he used right now) and it ended up being much shorter than the original C code, and to him at least, more clear.

Pfhorrest wrote:it seems like the computer itself must be inherently functional...Imperative programming languages seem like an awkward attempt to tell a computer what to do like it was a human following orders, rather than telling it what problem to solve like the sophisticated problem-solving machine it is.

Your computer isn't clever. A compiler spits out code for the processor to read, and that code reads exactly like "move this value here, add this to that, read that, go to this line if this condition is met" - simple, dumb, explicit instructions for a very dumb person. Your computer is a state machine and imperative languages are all about changing state.

I also think that imperative style languages are more obvious to non-mathematicians. Think about how you'll present or explain an algorithm to someone who isn't terrific at maths; I don't think I'd start by presenting the recursive version of the Euclidean algorithm to someone.
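The Euclidean algorithm is a handy illustration of the two styles; a minimal sketch in Python (the function names are mine):

```python
def gcd_imperative(a, b):
    # "Keep replacing the pair until the remainder is zero" --
    # a procedure you can act out with pencil and paper.
    while b != 0:
        a, b = b, a % b
    return a

def gcd_recursive(a, b):
    # "The gcd of a and b IS the gcd of b and a mod b" --
    # a definition rather than a sequence of steps.
    return a if b == 0 else gcd_recursive(b, a % b)

assert gcd_imperative(48, 18) == gcd_recursive(48, 18) == 6
```

Most people find the first version easier to follow step by step, while the second states the mathematical fact directly, which is roughly the divide being described above.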

Maybe with high-level languages, where we, the programmers, are not concerned about how it gets converted to those imperative low-level instructions, functional programming is more natural, at least until we deal with something that has changing state, which is fairly common. I don't know; I've never used a functional language.

Kit. wrote:Five years ago I wrote a small helper utility in C to play back network traffic captured by tcpdump/Wireshark. Then some guy decided to rewrite it using the local corporate OO style. It ended up five times bigger, much less easy to read and lacking some important features of the original solution that didn't fit well into the object model chosen by him.

I laughed heartily at "intuitive clarity of abstract mathematics". In my CS bachelor's there's a course on FP in Haskell this period. And numerous people face difficulties exactly with the mathematical approach (mostly recursion and passing 'memory' around as parameters), and at least some were baffled when the professor explained how to write a CPU in Haskell.

Apart from having my first GOOMHR moment after spending a couple of hours yesterday reading up on functional programming and discussing it with my colleagues, this is an interesting discussion. It took me a long time to "get" the OO thing, but when I did (thanks to this course) it was quite a revelation. So now I'm wondering whether declarative programming (of which functional programming is a subset, I believe) is just as important a step and I should be asking my company to shell out for another course.

The thing is, though, that as has been pointed out several times, the processor is inherently imperative, and each development in the evolution of programming languages takes the language further away from the machine code that the processor actually executes. This matters, because although it's great to be able to model the problem domain and describe the objects and relationships between them without worrying about how it will be implemented, when you come to debug the program you become concerned with what happens in what order. Why isn't this defined yet? Why isn't this getting called? Similarly, the way OO separates the static class hierarchy from the run-time object hierarchy is extremely powerful, but it makes it hard to see what's actually going to be instantiated and which methods are going to be called by looking at the code.

I imagine the declarative revolution is going to make this even more difficult. Part of the problem is that, although the language looks declarative, actually it's doing imperative things under the hood, and the error messages betray this. I've been doing some Ruby on Rails stuff; you write something like "has_many :accounts" and it looks as though you're just declaring something about the model, when in fact you're calling a base-class method called has_many (i.e. the declaration is really executable code in disguise). Mistype that and you'll get something like "NoMethodError". So in order to fix your problem you end up having to understand what's really going on. That might improve with better compilers, but I'm not holding my breath given that C++ compilers still explode with hundreds of gibberish errors if you miss one semicolon off the end of your class definition (which I do 50% of the time).

I'm sure the upsides outweigh the downsides, and I'm quite excited to be possibly living through another paradigm shift in programming at a stage in my career where it might be worth finding out about.

Eebster the Great wrote:I think I get the joke, but abstract mathematics is actually pretty clear to me and essentially ideal for computer programming. Though OOP kind of fits the bill too.

The first thing I thought of was the following description of C: “a language that combines all the elegance and power of assembly language with all the readability and maintainability of assembly language” (see http://catb.org/jargon/html/C/C.html)

Note: "This message was flagged as spam and has been denied." even when clicking preview so I had to edit it... sigh.

Eebster the Great wrote:E: On an unrelated note, is it easier to code paralellism in imperative programming, or does it not really matter?

Functional languages are much easier to parallelise automatically. This is due to "referential transparency", i.e. the principle that if you evaluate (calculate) the same expression twice, the result is by definition always the same[1]. So if you have some expression with two (or more) sub-expressions in it, the sub-expressions can always be evaluated in parallel.

[1] this seems like a totally obvious thing, but in imperative languages it is not necessarily true. The value of an expression may depend on variables whose values change from time to time, and thus the result may differ. Pure functional languages have no mutable variables.
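A minimal Python illustration of the footnote (names are mine): the "same expression" in an imperative setting can depend on mutable state, which is exactly what referential transparency forbids.

```python
counter = 0

def impure():
    # Depends on mutable state: the same call gives different results.
    global counter
    counter += 1
    return counter

def pure(x):
    # Referentially transparent: the result depends only on the argument,
    # so calls can be reordered, cached, or run in parallel safely.
    return x * x

assert impure() != impure()       # 1, then 2
assert pure(4) == pure(4) == 16   # always the same
```

The second property is what lets a compiler evaluate independent sub-expressions in any order, or simultaneously, without changing the program's meaning.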

This debate on languages is beginning to sound like a Mac/Win/Linux debate of old.

Functional/Procedural/Object/Declarative ? Which is best? Which is fastest?

One comment seemed to imply Java was not as fast as C++? Well, that depends: on the problem statement, the skill of the coder, and the power of the compiler. Java can be "almost" as fast as some C++ implementations. So is C++ better? Worse?

Does this debate even make sense? The best language is either (a) the one you will be paid for writing in; (b) the quickest to code; or (c) the one you have most fun writing. There is NO universal best.

Kit. wrote:If you are developing a product that does alpha-blending in video, you don't want it to crash with a stack overflow after a mere gigabyte of data.

So, any FP language that pretends to be both practical and universal had better have tail recursion as a language feature, like Scheme does.

You can write functional style in a language/compiler that doesn't support tail recursion, and you can write procedural or OO or whatever style in a language which still gives you tail recursion. The comic fails, ... or is it making fun of the functional guy?


My line of thinking tends to be "use the best tool for the job" or, failing that, use the best concepts for the job. Functional programming can be very useful for some concepts such as map-reduce. By framing some problems in this way, you can easily set up a parallelized algorithm to operate over large chunks of memory independently, significantly improving performance over a procedural style. However, creating a system with many complex objects that are modified in several disparate places in a very large codebase isn't as clean in functional languages, at least in my opinion. With an object-oriented, statically typed system, you're able to track the usage of objects and variables, since there is a clean distinction between data and code. I know this doesn't apply to all functional programming languages, but it seems all the popular ones are based on Lisp, which was very much in love with getting rid of this distinction.
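The map-reduce point can be sketched with Python's standard library; a toy sum-of-squares, with a thread pool standing in for real chunked parallelism (not a production pipeline):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def square(x):
    # Pure function: no shared state, so items can be mapped in any order.
    return x * x

data = range(10)

# Sequential map-reduce.
total = reduce(add, map(square, data))

# The same computation parallelises trivially, because square is pure.
with ThreadPoolExecutor() as pool:
    total_parallel = reduce(add, pool.map(square, data))

assert total == total_parallel == 285
```

Nothing about the problem changed between the two versions; only the map step's schedule did, which is why framing work as map-reduce makes parallelism almost free.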

In conclusion, everyone should use Python, since it allows for any programming paradigm you want.

rhialto wrote:This is due to "referential transparency", i.e. the principle that if you evaluate (calculate) the same expression twice, the result is by definition always the same[1]. (...)

[1] this seems like a totally obvious thing, but in imperative languages this is not necessarily true. The value of an expression may depend on variables whose value changes from time to time and thus the result may differ. Functional languages have no variables.

Wait, what?

How is that obvious? In fact, how is that anything but completely counter-intuitive? Computers are state machines. Yet functional programming insists that a function like getStateOfX() cannot exist. This is in fact what makes functional languages so hard and counter-intuitive.

What you're saying is not even true in mathematics. A function f(x) = r x^2 clearly does not always return the same value for the same input of x. Its return value also depends on r.

It's not even always true in functional languages. If it were, functional languages could not use the keyboard, hard drive, or any other tool for external input.


Pfhorrest wrote:it seems like the computer itself must be inherently functional

Actually computers are inherently imperative. A program is a sequence of basic instructions which the (idealized, non-optimized) processor pulls one at a time and executes in sequence, storing results in registers or in memory as the instruction dictates. See Assembly language.

I'm still trying to figure out the punchline. explainxkcd says that "its own reward" is a pun on "its own reword". I don't think so ... that's a really long shot. So obscure as to be of dubious relevance.

Instead, my guess as to what's intended to be funny is: "X being its own reward" is recursive. Moreover, a tail recursive call means that the return value of the callee is also the return value ("reward"?) of the calling function. So, the joke is the humorous connection between recursion in functional programming and recursion in things that are said not to require a utilitarian payoff, because they are "their own reward." The latter recursion is much less precise and rigorous in its meaning than the former, so drawing the connection is humor from absurdity.

Diadem wrote:What you're saying is not even true in mathematics. A function f(x) = r x^2 clearly does not always return the same value for the same input of x. Its return value also depends on r.

Label your function f_r(x). Either your function should be a function of two parameters (i.e. f(x, r) = r*x^2) or it describes a class of functions rather than a specific one. Edit: Or r is fixed and your statement is a lie.

Diadem wrote:It's not even always true in functional languages. If it were, functional languages could not use the keyboard, hard drive, or any other tool for external input.


In pure functional programming, side-effecting commands are values that can be manipulated through a well-defined, extensible API. In that sense, they are like integers, lists, hash tables, SQL queries, and everything else in the imperative world. A good API for commands is the reason that "Haskell is the world’s finest imperative programming language" despite the absence of side effects.
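A rough Python analogue of the idea (not Haskell's actual IO type; put and run are made-up names): commands are plain values, and only a separate interpreter performs the effects.

```python
# Side effects as first-class values: an action is just data describing
# what to do. Building the program performs nothing.
def put(text):
    return ("put", text)

def run(actions, out):
    # Only the interpreter actually performs effects; until it runs,
    # the actions are ordinary values that can be stored and combined.
    for tag, payload in actions:
        if tag == "put":
            out.append(payload)

program = [put("hello"), put("world")]  # a value; nothing has happened yet
log = []
run(program, log)
assert log == ["hello", "world"]
```

Because `program` is just a list, it can be inspected, reordered, or composed with other programs before anything executes, which is the sense in which effectful code becomes manipulable data.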

larsh wrote:I'm still trying to figure out the punchline. explainxkcd says that "its own reward" is a pun on "its own reword". I don't think so ... that's a really long shot. So obscure as to be of dubious relevance.

Instead, my guess as to what's intended to be funny is: "X being its own reward" is recursive. Moreover, a tail recursive call means that the return value of the callee is also the return value ("reward"?) of the calling function. So, the joke is the humorous connection between recursion in functional programming and recursion in things that are said not to require a utilitarian payoff, because they are "their own reward." The latter recursion is much less precise and rigorous in its meaning than the former, so drawing the connection is humor from absurdity.

Am I right? Anybody got a better explanation of the punch line?

I think it's amusing that a programming concept for said concept's sake is rewarding for someone.