**Free monads** (arising from algebras). Let $\mathcal{C}$ be a category, $F$ be an endofunctor on $\mathcal{C}$, and $F\text{-}\mathbf{Alg}$ be the category of $F$-algebras. We can define a forgetful functor $U : F\text{-}\mathbf{Alg} \to \mathcal{C}$ as $U(A, a) = A$ on objects, and $U(h) = h$ on morphisms. Assume that $U$ has a left adjoint, which we call $\mathit{Free}$. As always in the case of adjunctions, a monad arises, namely $U \circ \mathit{Free}$. We call this monad the *free monad* generated by $F$, and denote it by $F^*$.

**Free monads are initial algebras**. If $\mathcal{C}$ has finite coproducts, we can prove that $F^*A$ is equal to the carrier of the initial algebra (if it exists) of the endofunctor $F(-) + A$. We name the action of this initial algebra $\mathit{in}_A$. So, the situation is:

$\mathit{in}_A : F(F^*A) + A \to F^*A$.

We split $\mathit{in}_A$ into two components:

$c_A = \mathit{in}_A \circ \mathit{inl} : F(F^*A) \to F^*A \qquad e_A = \mathit{in}_A \circ \mathit{inr} : A \to F^*A$

One can prove that $\mathit{in}$ (and so $c$ and $e$) is natural in $A$.

We can provide another natural transformation, which, intuitively, embeds the functor $F$ into its free monad:

$\tau_A = c_A \circ F e_A : FA \to F^*A$

Moreover, $e$ is the unit of the monad $F^*$, and $\mu_A = \mathit{fold}\,[c_A, \mathit{id}_{F^*A}] : F^*F^*A \to F^*A$ is the multiplication of the monad. (Here $\mathit{fold}\,h$ denotes the unique algebra morphism out of the initial algebra into an algebra with action $h$; for $\mu_A$, the fold is over the initial algebra of $F(-) + F^*A$.)
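In Haskell, the whole construction can be sketched as follows (a standard encoding, my own illustration: the constructors `Con` and `Var` play the roles of $c$ and $e$, `tau` is $\tau$, and `>>=` is a fold, from which $\mu = \mathit{fold}\,[c, \mathit{id}]$ is recovered as `(>>= id)`):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Carrier of the initial algebra of  F(-) + A :
-- an F-layer over smaller terms (Con, i.e. c) or a variable (Var, i.e. e).
data Free f a = Con (f (Free f a)) | Var a
  deriving Functor

-- The fold (catamorphism): an algebra for  F(-) + A , split into components.
fold :: Functor f => (f b -> b) -> (a -> b) -> Free f a -> b
fold c e (Con fx) = c (fmap (fold c e) fx)
fold _ e (Var a)  = e a

-- tau embeds F into its free monad:  tau = c . F e
tau :: Functor f => f a -> Free f a
tau = Con . fmap Var

instance Functor f => Applicative (Free f) where
  pure = Var                  -- the unit e
  mf <*> mx = mf >>= \f -> fmap f mx

instance Functor f => Monad (Free f) where
  return = pure
  m >>= k = fold Con k m      -- substitution; join = (>>= id) = fold Con id
```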

**Eilenberg-Moore algebras**. Let $(T, \eta, \mu)$ be a monad on $\mathcal{C}$. We define an (Eilenberg-Moore) $T$-algebra (aka “algebra for $T$ qua monad”) as an algebra $(A, a : TA \to A)$, where

(1) $a \circ Ta = a \circ \mu_A$

(2) $a \circ \eta_A = \mathit{id}_A$

By $\mathcal{C}^T$ we denote the category of Eilenberg-Moore $T$-algebras.
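As a concrete illustration (my example, not from the original text): for the list monad, with $\eta\,x = [x]$ and $\mu = \mathit{concat}$, an Eilenberg-Moore algebra is exactly a monoid structure on the carrier. For the algebra `sum` on `Int`, laws (1) and (2) become checkable equations:

```haskell
-- An Eilenberg-Moore algebra of the list monad with carrier Int:
alg :: [Int] -> Int
alg = sum

-- law (1): alg . fmap alg = alg . mu   (where mu = concat)
law1 :: [[Int]] -> Bool
law1 xss = alg (map alg xss) == alg (concat xss)

-- law (2): alg . eta = id              (where eta x = [x])
law2 :: Int -> Bool
law2 x = alg [x] == x
```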

**The theorem** can now be stated as:

$F\text{-}\mathbf{Alg}$ is isomorphic to $\mathcal{C}^{F^*}$.

Is there any use of such a theorem? It allows us to automatically transfer some properties from the simpler level of $F$-algebras to the world of free monads and initial algebras. This theorem will appear at least once more in this blog, so don’t forget about it too soon.

How to prove it? One way is to use Beck’s monadicity theorem (the “evil” version from Mac Lane’s book). It exactly fits the conditions about existence of adjoints, and the unintuitive condition about creating coequalizers has a lot to do with (1). But we are (or at least I am) interested in something that can be encoded in Haskell more directly. So, let’s build an explicit isomorphism.

We define two functors (we should actually prove that they are really functors, but you know…):

$\Phi : F\text{-}\mathbf{Alg} \to \mathcal{C}^{F^*}$, given by $\Phi(A, a) = (A, \mathit{fold}\,[a, \mathit{id}_A])$ on objects and $\Phi(h) = h$ on morphisms;

$\Psi : \mathcal{C}^{F^*} \to F\text{-}\mathbf{Alg}$, given by $\Psi(A, \alpha) = (A, \alpha \circ \tau_A)$ on objects and $\Psi(h) = h$ on morphisms.
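These two functors translate to Haskell directly (a sketch assuming the standard `Free`/`fold` encoding of $F^*$, repeated here so the snippet is self-contained):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The free monad F* and its fold, as before:
data Free f a = Con (f (Free f a)) | Var a
  deriving Functor

fold :: Functor f => (f b -> b) -> (a -> b) -> Free f a -> b
fold c e (Con fx) = c (fmap (fold c e) fx)
fold _ e (Var a)  = e a

-- Phi: an F-algebra induces an Eilenberg-Moore algebra of F*,
-- namely the fold of [a, id].
phi :: Functor f => (f a -> a) -> (Free f a -> a)
phi a = fold a id

-- Psi: an Eilenberg-Moore algebra of F* restricts to an F-algebra
-- along tau = c . F e, i.e. Con . fmap Var.
psi :: Functor f => (Free f a -> a) -> (f a -> a)
psi alpha = alpha . Con . fmap Var
```

Part **A** below then says that `psi (phi a)` agrees with `a`.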

To prove the theorem, we show that (**A**) $\Psi \circ \Phi = \mathit{Id}$ and (**B**) $\Phi \circ \Psi = \mathit{Id}$. We concentrate on objects; arrows are easy. In the calculations we use the standard laws of folds: the *computation law* $\mathit{fold}\,h \circ \mathit{in}_A = h \circ (F(\mathit{fold}\,h) + \mathit{id}_A)$, the *reflection law* $\mathit{fold}\,\mathit{in}_A = \mathit{id}$, and the *fusion law*: $h \circ \mathit{fold}\,b = \mathit{fold}\,b'$ whenever $h \circ b = b' \circ (Fh + \mathit{id}_A)$.

**A**. We check that $\Psi(\Phi(A, a)) = (A, a)$.

Since the functors do not alter the carrier of the algebra, we focus on the action:

$\Psi(\Phi(A, a))$-action

$=$ (def. of $\Psi$) $\Phi(A, a)\text{-action} \circ \tau_A$

$=$ (def. of $\Phi$) $\mathit{fold}\,[a, \mathit{id}_A] \circ c_A \circ F e_A$

$=$ (computation law) $[a, \mathit{id}_A] \circ (F(\mathit{fold}\,[a, \mathit{id}_A]) + \mathit{id}_A) \circ \mathit{inl} \circ F e_A$

$=$ (sum) $[a, \mathit{id}_A] \circ \mathit{inl} \circ F(\mathit{fold}\,[a, \mathit{id}_A]) \circ F e_A$

$=$ (inl) $a \circ F(\mathit{fold}\,[a, \mathit{id}_A]) \circ F e_A$

$=$ (comp. law) $a \circ F([a, \mathit{id}_A] \circ (F(\mathit{fold}\,[a, \mathit{id}_A]) + \mathit{id}_A) \circ \mathit{inr})$

$=$ (sum + inr) $a \circ F(\mathit{id}_A)$

$=$ (functor) $a$

**B**. We check that $\Phi(\Psi(A, \alpha)) = (A, \alpha)$; again we focus on the action, that is, we show $\mathit{fold}\,[\alpha \circ \tau_A, \mathit{id}_A] = \alpha$.

We first calculate:

$\alpha \circ \tau_A \circ F\alpha$

$=$ (def. of $\tau$) $\alpha \circ c_A \circ F(e_A \circ \alpha)$

$=$ (naturality of $e$) $\alpha \circ c_A \circ F(F^*\alpha \circ e_{F^*A})$

$=$ (naturality of $c$) $\alpha \circ F^*\alpha \circ c_{F^*A} \circ F e_{F^*A}$

$=$ (1) $\alpha \circ \mu_A \circ c_{F^*A} \circ F e_{F^*A}$

$=$ (def. of $\tau$) $\alpha \circ \mu_A \circ \tau_{F^*A}$

$=$ (def. of $\mu$) $\alpha \circ \mathit{fold}\,[c_A, \mathit{id}_{F^*A}] \circ \tau_{F^*A}$

$=$ (similarly to **A**) $\alpha \circ c_A$

We use this result in the following calculation:

$[\alpha \circ \tau_A, \mathit{id}_A] \circ (F\alpha + \mathit{id}_A)$

$=$ (sum) $[\alpha \circ \tau_A \circ F\alpha, \mathit{id}_A]$

$=$ (prev. calculation) $[\alpha \circ c_A, \mathit{id}_A]$

$=$ (2) $[\alpha \circ c_A, \alpha \circ e_A]$

$=$ (sum) $\alpha \circ [c_A, e_A]$

$=$ (def. of $\mathit{in}$) $\alpha \circ \mathit{in}_A$

We use this result as a premise in the fusion law, hence:

$\mathit{fold}\,[\alpha \circ \tau_A, \mathit{id}_A]$

$=$ (fusion) $\alpha \circ \mathit{fold}\,\mathit{in}_A$

$=$ (reflection law) $\alpha$

We conclude: $\Phi(\Psi(A, \alpha)) = (A, \mathit{fold}\,[\alpha \circ \tau_A, \mathit{id}_A]) = (A, \alpha)$, so $\Phi$ and $\Psi$ form the desired isomorphism.

**Idiom morphisms**. There is a notion of **idiom morphism**, which is a function (for idioms `M` and `N`)

```
f :: M a -> N a
```

which respects the following:

```
f . pure = pure
f (mf <*> mx) = f mf <*> f mx
```
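For instance (my example, not from the original text), `maybeToList` is an idiom morphism from `Maybe` to `[]`, and the two laws can be spot-checked:

```haskell
import Data.Maybe (maybeToList)

-- maybeToList sends Just x to [x] and Nothing to [].

-- f . pure = pure
lawPure :: Int -> Bool
lawPure x = maybeToList (pure x) == ([x] :: [Int])

-- f (mf <*> mx) = f mf <*> f mx, specialised to mf = fmap (+) m
lawAp :: Maybe Int -> Maybe Int -> Bool
lawAp m mx =
  maybeToList (fmap (+) m <*> mx)
    == (fmap (+) (maybeToList m) <*> maybeToList mx)
```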

I was wondering: sometimes it is the case that homomorphisms of simpler algebraic structures (for example, monoids) are automatically homomorphisms of more complicated structures (for example, groups).

Indeed, given two groups $G$ and $H$, and a function $f : G \to H$ which is a monoid homomorphism (that is, $f(x \cdot y) = f(x) \cdot f(y)$ and $f(1) = 1$), one can prove that $f$ is also a group homomorphism (that is, it additionally preserves inverses: $f(x^{-1}) = f(x)^{-1}$). Furthermore, monads give rise to applicative functors (via `pure = return` and `(<*>) = ap`). Of course, there is a notion of **monad morphism**, which is a function (for monads `M` and `N`)

```
f :: M a -> N a
```

subject to:

```
f . return = return
f (m >>= k) = f m >>= f . k
```

So maybe being an idiom morphism is sufficient to be a monad morphism?

It didn’t sound very probable, but it was most certainly something worth examining. A positive answer would be of a nontrivial use, since monad morphisms play a central role in the theory of monad transformers. Sadly, no happy ending here. Idiom morphisms **don’t** need to be monad morphisms. A counterexample:

```
phi :: [a] -> Maybe a
phi xs | odd (length xs) = Just (head xs)
       | otherwise       = Nothing
```

This function is an idiom morphism (hint: for lists, the length of `fs <*> xs` is the product of the lengths of its arguments, so `<*>` preserves the parity that `phi` inspects), while it is not a monad morphism:

```
phi ([0,1] >>= \x -> [0..x]) = phi [0,0,1] = Just 0
```

while

```
phi [0,1] >>= \x -> phi [0..x] = Nothing >>= \x -> phi [0..x] = Nothing
```
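The failing law is easy to run (repeating `phi` so the snippet is self-contained):

```haskell
phi :: [a] -> Maybe a
phi xs | odd (length xs) = Just (head xs)
       | otherwise       = Nothing

lhs, rhs :: Maybe Int
lhs = phi ([0, 1] >>= \x -> [0 .. x])    -- phi [0,0,1], an odd-length list
rhs = phi [0, 1] >>= \x -> phi [0 .. x]  -- phi [0,1] = Nothing kills the rest
```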