G'day all.
Quoting Ron de Bruijn <rondebruijn at yahoo.com>:
> I have been thinking about the following laws of a
> monad. o is composition.
> # associative law of a monad
> mu o (mu o T) == mu o (T o mu)
> The notation: Tmu, that's the same as T o mu, right?
No.
The "join" operation, mu, is a natural transformation, and the "map"
operation, T, is a functor. Therefore you can't compose them directly.
To understand what Tmu and muT mean, remember what a natural
transformation is:
Given two functors F,G : C -> D, a natural transformation t : F -> G
assigns to each object c of C an arrow t_c : F(c) -> G(c) such that
for all f \in C[c,c']:
G(f) o t_c = t_{c'} o F(f)
This is the important point: A natural transformation maps each object
of C to an arrow of D.
Now suppose that we have another functor H : A -> C. Then you can
construct another natural transformation tH : FH -> GH by:
(tH)_a = t_{H(a)} : FH(a) -> GH(a)
Similarly, if you have a functor K : D -> B, you can construct a
natural transformation Kt : KF -> KG where:
(Kt)_d = K(t_d) : KF(d) -> KG(d)
(Exercise: Prove that tH and Kt are natural. It should be
straightforward from the definitions.)
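In Haskell, a natural transformation t : F -> G is a polymorphic
function t :: forall a. F a -> G a, and the two whiskerings come out
very simply. A sketch (safeHead is my own example, not anything from
the monad laws above):

```haskell
-- A natural transformation from the list functor to Maybe:
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Right whiskering tH : FH -> GH is just t used at the instantiated
-- type; here H = Maybe:
tH :: [Maybe a] -> Maybe (Maybe a)
tH = safeHead

-- Left whiskering Kt : KF -> KG applies K's arrow mapping (fmap) to t;
-- here K = Maybe:
kt :: Maybe [a] -> Maybe (Maybe a)
kt = fmap safeHead
```

Note the asymmetry: tH needs no new code at all, while Kt needs one
application of fmap. That is exactly the asymmetry in the definitions
above.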
> How do I relate this to
> x*(y*z) == (x*y)*z ?
Let (A,mu,eta) be a monoid, where:
mu : A^2 -> A
eta : 1 -> A
Here, A^2 is A x A, the categorical product, and 1 is the terminal
object.
How would you express the associative law for this monoid in a
category-theoretic way?
Let's assume for the moment that A is a set. If a, b, c are elements
of A, then we can express this as:
mu(mu(a,b),c) = mu(a,mu(b,c))
Category theory doesn't let us look inside A. So what we're actually
looking at here, on both the LHS and RHS of the above associative law,
is an arrow f : A^3 -> A, which "multiplies" three elements of A. We
want to express this two ways in terms of mu.
We can do this using the category product of two arrows. For example,
the product mu x id : A^3 -> A^2 looks something like this in Haskell:
leftMu :: (A,A,A) -> (A,A)
leftMu (a,b,c) = (mu a b, c)
If you follow this line of reasoning, then you'll find that the
associative law can be expressed as:
mu o (mu x id) = mu o (id x mu)
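Here's a sketch of that law for one concrete monoid, integers under
addition (all the names are mine, picked for the example):

```haskell
type A = Int

-- The monoid multiplication mu : A^2 -> A:
mu :: (A, A) -> A
mu (x, y) = x + y

-- The two arrows A^3 -> A^2 built from products:
muXid, idXmu :: (A, A, A) -> (A, A)
muXid (a, b, c) = (mu (a, b), c)   -- mu x id
idXmu (a, b, c) = (a, mu (b, c))   -- id x mu

-- Associativity says these two composites A^3 -> A agree:
assocL, assocR :: (A, A, A) -> A
assocL = mu . muXid   -- mu o (mu x id)
assocR = mu . idXmu   -- mu o (id x mu)
```

Both composites send (a,b,c) to a+b+c, just bracketed differently.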
Compare this with the associative law for monads:
mu o (muT) = mu o (Tmu)
Makes sense?
Now, just for jollies, let's take a look at these associative laws
in Haskell code.
Using the monad basis, mu is spelled "join" (or "concat" in the list
monad) and T is spelled "fmap". Hence:
join . join == join . fmap join
(Exercise: Why does muT == join, but Tmu == fmap join?)
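A quick sanity check of that law on one nested list (the list is my
example; join and fmap are the standard ones from base):

```haskell
import Control.Monad (join)

xsss :: [[[Int]]]
xsss = [[[1,2],[3]],[[4]]]

viaJoinJoin, viaFmapJoin :: [Int]
viaJoinJoin = (join . join) xsss        -- flatten the outer layer first
viaFmapJoin = (join . fmap join) xsss   -- flatten the inner layers first
```

Both sides come out as [1,2,3,4].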
The "normal" Haskell basis, however, is more like Kleisli triples.
Using (=<<) = flip (>>=), we have:
join = (=<<) id
Hence the associative rule is:
(=<<) id . (=<<) id == (=<<) id . fmap ((=<<) id)
Flipping the reverse binds:
(xss >>= id) >>= id == fmap (\xs -> xs >>= id) xss >>= id
Converting to do-notation:
do { x <- do { xs <- xss; xs }; x }
== do { xs <- xss; do { x <- xs; x } }
Seen this way, you can think of do { _; _ } as an associative operator.
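To see it concretely, here is the same do-notation law checked on one
nested list (the list is my example):

```haskell
nested :: [[[Int]]]
nested = [[[1],[2,3]],[[4]]]

flattenLeft, flattenRight :: [Int]
flattenLeft  = do { x <- do { xs <- nested; xs }; x }
flattenRight = do { xs <- nested; do { x <- xs; x } }
```

Both flatten to the same list, regardless of which "bracketing" of the
nesting you collapse first.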
> # the identity law of a monad
> mu o (T o eta) =={id}_\mathcal{C}= mu o (eta o T)
That should read:
mu o (Teta) = id = mu o (etaT)
Compare with the identity law for monoids:
mu o (id x eta) = id = mu o (eta x id)
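In the Haskell monad basis, Teta is fmap return and etaT is return at
the outer type, so the identity law can be checked directly in the
list monad (the test list is my example):

```haskell
import Control.Monad (join)

ys :: [Int]
ys = [1,2,3]

viaTeta, viaEtaT :: [Int]
viaTeta = (join . fmap return) ys   -- wrap each element, then flatten
viaEtaT = (join . return) ys        -- wrap the whole list, then flatten
```

Both give back ys unchanged, which is exactly what "== id" asserts.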
Understanding a monad as a generalisation of a monoid really helps
you understand these laws, or so I've found.
Hope this helps you out.
Cheers,
Andrew Bromage