“I am sitting in a garden with a philosopher. Pointing to a nearby tree, he says: ‘I know that THIS is a tree!’ To another man passing by, overhearing this statement, I explain: ‘Oh no, he is not insane, we are just philosophizing.’”
– Wittgenstein. Über Gewißheit (On Certainty). 1951.

In formal terms, a substitution \((b : B) \rightarrow (s : S)\) with \(S <: B\), replacing an instance \(b\) of supertype \(B\) by an instance \(s\) of subtype \(S\), is subject to the constraints:

Preconditions of \(B\) cannot be strengthened in a subtype

Postconditions of \(B\) cannot be weakened in a subtype

Invariants of \(B\) are preserved in \(S\)
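These constraints can be illustrated with a small sketch (hypothetical `Gauge` classes, not from the text): a subtype that strengthens the supertype's precondition breaks clients that were written against the supertype's contract.

```python
class Gauge:
    """Supertype B: accepts any reading in [0, 100]."""
    def record(self, value: float) -> float:
        assert 0 <= value <= 100         # precondition of B
        result = value / 100             # normalized reading
        assert 0.0 <= result <= 1.0      # postcondition of B
        return result

class StrictGauge(Gauge):
    """Subtype S that STRENGTHENS the precondition -- a violation."""
    def record(self, value: float) -> float:
        assert 10 <= value <= 100        # stronger than B: rejects [0, 10)
        return super().record(value)

def client(g: Gauge) -> float:
    # Written against B's contract: any value in [0, 100] is valid.
    return g.record(5)

client(Gauge())          # fine: 5 is within B's precondition
# client(StrictGauge())  # would raise AssertionError: substitution fails
```

The failing call is the whole point: code that is correct for every `Gauge` suddenly crashes when handed a `StrictGauge`, which is exactly what the first constraint forbids.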

However, you will soon be convinced that inheritance matters far less for sound concept definitions than you may have been told (see Composition over Inheritance).

States and Side-effects

The notion of side-effects is common in the functional programming mindset. In the conventional mathematical interpretation of the function concept, a function call is a static mapping between value spaces. As a consequence, in functional programming any function call could be replaced by its result. Functions that satisfy this property, known as referential transparency, are called pure.

Any persistent, observable effect of a function call would violate referential transparency. For example, calls of printf could not be replaced by their returned result (usually 0) as this would eliminate the intended side-effect on the console.
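As a sketch (hypothetical functions, not from the text): a pure function's call can be replaced by its value anywhere, while an impure one leaves an observable trace that such a replacement would erase.

```python
# Pure: referentially transparent -- square(3) may be replaced by 9 anywhere.
def square(x: int) -> int:
    return x * x

# Impure: appends to a shared log; replacing the call by its result (9)
# would silently drop the observable side-effect.
log: list[str] = []

def square_logged(x: int) -> int:
    log.append(f"square({x})")
    return x * x

assert square(3) == square(3)              # substitution is always safe
square_logged(3)
square_logged(3)
assert log == ["square(3)", "square(3)"]   # the calls left a trace
```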

Unfortunately, every operation on a physical computational machine does have side-effects. At the very least, the CPU's program counter changes. Functional programming languages like Haskell therefore classify some side-effects as negligible.

Typically, a functional programming runtime environment aims to hide side-effects from the programmer by means of mechanisms like garbage collection. Allocating data on the heap is definitely a side-effect, but it is considered irrelevant to functional semantics.

Let’s have a look at their definitions on Wikipedia and see if we agree:

Operator (mathematical)

A mapping that acts on elements of a space and produces elements of the same space. (source)

Do you see the implications?

Note how the choice of words in this definition sounds suspiciously uncommon. This is because it carefully tries to avoid assumptions about how an operator might actually work.

Why does it say “produce elements” instead of “map to elements”?

Why “space” instead of “set”?

Why “act on elements” instead of “map elements from”?

Différance! (Jacques Derrida: the meaning of a word lies in its difference from other words, and is perpetually deferred by its context)
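The “same space” clause can be made concrete with a sketch (a hypothetical example, not from the text): a differentiation-like operator acts on functions and produces functions, i.e. elements of the same space.

```python
# An operator in the Wikipedia sense: it acts on elements of a space
# (real-valued functions) and produces elements of the SAME space.
def differentiate(f, h: float = 1e-6):
    # Numerical central difference; the result is again a function.
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

cube = lambda x: x ** 3
d_cube = differentiate(cube)             # still a function on the reals

assert abs(d_cube(2.0) - 12.0) < 1e-3    # d/dx x^3 = 3x^2; at x = 2 -> 12
```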

Operation (mathematical)

A calculation from zero or more input values (operands) to an output value (source)

The choice of words is very specific here: “value”, “calculation”, “input” …

Q: Doesn’t this sound notably familiar? How is this different from a function?

Note that in Mathematics, an Operator is just a symbol that denotes a specific Operation, just as we use the name of a function (its declaration) to refer to its implementation (the function definition). They essentially have the same meaning.
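Programming languages draw the same distinction: the operator symbol is surface syntax for an underlying operation. A small Python sketch:

```python
import operator

# The operator symbol `+` denotes the operation `operator.add`,
# which is resolved via the `__add__` method on the operands.
assert 2 + 3 == operator.add(2, 3) == (2).__add__(3) == 5
```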

Mas sabe el diablo por viejo que por diablo. (“The devil knows more from being old than from being the devil.”)

When learning to play the piano, there is a specific challenge for adult learners. As adults, we are mentally capable of fully understanding concepts and mechanisms long before we achieve the competence to apply them correctly. It is not a challenge to understand the principle of a C major chord progression. But it will take weeks of daily exercise to transfer this knowledge into muscle memory and make the technique second nature.

Likewise, there is mental muscle memory for modeling and programming. Take course assignments seriously even if the related concepts seem painfully evident.