I'm about to start a simulation/modelling project. I already know that OOP is widely used for this kind of project. However, studying Haskell has made me consider using the FP paradigm for modelling a system of components. Let me elaborate:

Let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.), and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example). The object's mutable state would be used for non-constant parameters.

If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE.

On the other hand, if I were to use an FP approach, each component would be broken down into its data parts and the functions that would act upon the data in order to obtain the solution to the PDE. Non-constant parameters would be passed as functions of something else (time, for example) or expressed by some kind of mutability (an emulation of mutability, etc.). This approach seems simpler to me, assuming that linear operations on the data would be trivial.
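To make the FP decomposition concrete, here is a minimal C++ sketch of what it might look like. The component name, the heat-conduction setting, and the explicit Euler step are all hypothetical illustrations, not part of the question:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// A component is just data: here a hypothetical 1-D heat-conduction setup.
struct Component {
    double diffusivity;                    // constant material parameter
    std::function<double(double)> source;  // non-constant parameter, as a function of time
};

// A pure function acting on that data: one explicit Euler step for
// du/dt = diffusivity * laplacian + source(t). Nothing is mutated; the
// updated value is returned.
double stepTemperature(const Component& c, double u,
                       double laplacian, double t, double dt) {
    return u + dt * (c.diffusivity * laplacian + c.source(t));
}
```

A component of type B carrying different data would reuse `stepTemperature` unchanged, and a time-varying parameter is simply another `std::function` passed in the data.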

To conclude: would implementing the FP approach actually be simpler and easier to manage (e.g. adding a different type of component, or a new method to solve the PDE) compared to the OOP one?

I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.

2 Answers

Good question; I've been thinking along similar lines. Historically, the OO paradigm arose from the needs of computer simulation - see the history of Simula - and despite early OO languages like Smalltalk being made by people who knew what they were doing (e.g. Alan Kay), OO is now arguably over-used and brings in far too much accidental complexity.

Generally, FP-style programs will be shorter, easier to test, and easier to modify than OO programs. As Rob Harrop put it in his talk, Is the Future Functional?, you can never get simpler than functions and data; the two compose infinitely to build up whatever abstractions are needed. So one way to answer your question (or am I just restating it? :) is to ask: what do the highest-level function and the highest-level input-data --> output-data transformation look like? Then you can start breaking down those "alpha" functions and data types into the next layer of abstractions, and repeat as necessary.
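A tiny C++ sketch of what I mean by the "alpha" function and data types; the names and the placeholder dynamics are invented purely for illustration:

```cpp
#include <cassert>

// The "alpha" data types: what goes in and what comes out (hypothetical names).
struct Problem  { double initial; double tEnd; double dt; };
struct Solution { double finalValue; int steps; };

// The "alpha" function: the whole simulation as one input -> output mapping.
// Placeholder dynamics du/dt = -u stand in for the real solver; everything
// else follows by decomposing this one function into smaller ones.
Solution simulate(const Problem& p) {
    int n = static_cast<int>(p.tEnd / p.dt + 0.5);
    double u = p.initial;
    for (int i = 0; i < n; ++i)
        u += p.dt * (-u);
    return {u, n};
}
```

Once this top-level signature is pinned down, each piece (time stepping, spatial discretisation, output) can be factored out as its own function over its own data type.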

My own opinion at this point is: unless you're modeling a situation where there really are discrete objects that only interact in definite ways (e.g. a model of a computer network) - and thus map directly to the capabilities of a clean, message-passing-paradigm OO language - it's simpler to go FP. Note that even in the games-programming community - where simulations are very prevalent and performance requirements are paramount - experienced developers are moving away from the OO paradigm and/or using more FP, e.g. see this HN discussion or John Carmack's comments on FP.

It's good to know that I'm not the only one having doubts about OOP in simulation - thanks for answering my question! I had read John Carmack's comments on FP, and I thought about implementing some FP aspects in C++ (copying the objects, or gathering the input and passing it to a function), but then again I don't know if I should start my project in C++ instead of an FP language like Haskell, where FP aspects are built in and you express mutability only when it's needed. Did you continue using Clojure or FP in general, considering that you had a similar problem/question?
– heaptobesquare Oct 2 '12 at 17:54

@heaptobesquare - Yes, I've been steadily ramping-up my Clojure-fu with the goal of writing simulations in it. Nothing ready to show yet, but I see no show-stoppers, and Clojure's design is beautifully pragmatic, e.g. you can use transients/mutation if needed, plus its agents are well-suited to asynchronous aspects. At some point (no promises when) I'll write an article on the topic...
– limist Oct 2 '12 at 19:23

I have taken a look at Clojure, but I can't say I'm fond of S-expressions. I know they're practical (Lisp code is data), but are they easy to get used to?
– heaptobesquare Oct 3 '12 at 8:46

@heaptobesquare - s-expressions/Lisp syntax is actually very easy to get used to; first pick a good editor (Emacs or vim, my vote is for Emacs, see dev.clojure.org/display/doc/Getting+Started+with+Emacs) that has a mode for Clojure, get a good book (e.g. Programming Clojure), and start hacking. After a few weeks at most, the syntax will fade into the background, as it should - it's so perfectly consistent you'll think of it exponentially less often, and free up mental cycles for more important things. :)
– limist Oct 3 '12 at 13:37

IMHO, for almost every task of reasonable complexity, the question "is an FP style or an OOP style the better choice" cannot be answered objectively. Typically, in such a situation the question is not "either FP or OOP", but how to combine the best parts of both paradigms to solve your problem.

The problem you have sketched above seems to be a very mathematical one, and my wild guess is that you will need some matrix operations. OOP is very good at modeling abstract data types, and matrix calculus can easily be implemented as "matrix objects" with operations on matrices. Implementing this in a manner where all matrix operations are part of a matrix class helps you keep together the things which belong together, maintaining a good overall structure.
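A minimal sketch of such a matrix abstract data type in C++ (the class design is illustrative; only addition is shown, and a real version would add multiplication, transpose, and so on):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal matrix abstract data type: the data and the operations on it
// live together in one class.
class Matrix {
public:
    Matrix(int rows, int cols, double fill = 0.0)
        : rows_(rows), cols_(cols),
          data_(static_cast<std::size_t>(rows) * cols, fill) {}

    double& at(int r, int c)       { return data_[r * cols_ + c]; }
    double  at(int r, int c) const { return data_[r * cols_ + c]; }
    int rows() const { return rows_; }
    int cols() const { return cols_; }

    // An operation belonging to the type: elementwise addition.
    Matrix operator+(const Matrix& other) const {
        Matrix result(rows_, cols_);
        for (std::size_t i = 0; i < data_.size(); ++i)
            result.data_[i] = data_[i] + other.data_[i];
        return result;
    }

private:
    int rows_, cols_;
    std::vector<double> data_;
};
```

Keeping the storage layout private behind `at()` and the operators is exactly the "things which belong together" encapsulation argument.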

On the other hand, PDEs are equations on functions, and the solution may be a function again. So using a functional approach for that type of "component" may seem natural here. Those functions may have matrix parameters, which shows one way of combining OOP and FP. Another example would be a matrix class implementation that uses functional tools to map a certain operation over every element of your matrix. So here, too, it is not "OOP versus FP" but "OOP combined with FP" that brings you the best results.
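Here is a sketch of that combination in C++ - a class for the data, with a higher-order `map` as the one piece of elementwise machinery, and a free function built on top of it. The names (`map`, `scale`) are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// "OOP combined with FP": a small matrix class whose elementwise machinery
// is a single higher-order function, so every new operation is one lambda.
class Matrix {
public:
    explicit Matrix(std::vector<double> values) : data_(std::move(values)) {}

    // The functional tool: map an arbitrary operation over every element.
    Matrix map(const std::function<double(double)>& f) const {
        std::vector<double> out(data_.size());
        std::transform(data_.begin(), data_.end(), out.begin(), f);
        return Matrix(std::move(out));
    }

    const std::vector<double>& values() const { return data_; }

private:
    std::vector<double> data_;
};

// A free function with a matrix parameter, built from the same tool:
Matrix scale(const Matrix& m, double k) {
    return m.map([k](double x) { return k * x; });
}
```

Adding, say, a clamp or an absolute-value operation then costs one lambda rather than another hand-written loop.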

Thank you for your answer! So, if I were to use C++, would it be efficient to encapsulate only the component's data (that is, the parameters, boundary conditions and the PDE in matrix form) in an object, and to define the functions that operate on that data (even some higher-order ones, in case a parameter is a function of something else) outside the object's scope?
– heaptobesquare Oct 2 '12 at 17:57

@heaptobesquare: honestly, I can't tell you if it will be efficient in your case. Give it a try - think big, start small. Start by programming some "tracer code" (artima.com/intv/tracer.html) to find out what works best and what doesn't. And if you come to a situation where you notice that something does not work properly, refactor.
– Doc Brown Oct 2 '12 at 18:37

Haskell has the Hmatrix library, which provides bindings to the BLAS/LAPACK libraries along with a very nice syntax for them; personally, I would choose it over an OOP approach.
– paul Oct 2 '12 at 20:20

@paul: Thanks, I'll definitely take a look at it! Are Haskell libraries generally consistent and rich in content? The wikis say so, but is that actually the case?
– heaptobesquare Oct 2 '12 at 22:01

@heaptobesquare: The only Haskell library I've used to any extent is Parsec (I used it to write an assembler), but I loved using it. I've only done GHCi exploration of Hmatrix and the Haskell OpenGL bindings, but they seem quite nice. Hmatrix looks to be almost as concise as MATLAB (which I've used quite a bit) -- which was made specifically for that sort of thing. From my limited experience the libraries are consistent -- this is because Haskell is built on a small number of simple building blocks -- and they're also rich, because Haskellers don't like to do mundane things :)
–
paulOct 3 '12 at 1:44