Thursday, February 14, 2013

Comonads are objects

Haskell programmers popularized the use of monads to structure imperative computations, complete with syntactic sugar in the form of do notation. However, category theorists predict that there is also a completely symmetric concept called a "comonad", which tickles our fancy. After all, if monads are so intriguing, then perhaps comonads are equally intriguing, and who doesn't like twice the intrigue?

However, there's nothing intriguing about comonads. In fact, comonads are so pervasive in modern programming that they need no introduction. I hope to show that you very likely already employ a comonadic programming style, although you probably prefer to call it "object-oriented programming".

In this post I will build the metaphor of comonads as objects and also introduce syntactic sugar for comonads that strongly motivates this object-oriented interpretation.

The Builder Pattern

Let's try to reimplement the object-oriented builder design pattern in Haskell. For example, let's say that we have a Config value we want to build that just stores a list of program Options:
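The code listing for these types seems to have been lost here; below is a minimal sketch consistent with the rest of the post (representing an Option as a plain flag string is an assumption of this sketch):

```haskell
-- A Config just wraps a list of options.
-- Treating Option as a String is an assumption of this sketch.
type Option = String

data Config = MakeConfig [Option] deriving (Show)
```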

We want to encapsulate the Config and prevent the user from disassembling it, so we don't export the MakeConfig constructor to prevent the user from pattern matching on the constructor. Instead, we export a function in place of the constructor so that the user can still create values of type Config:
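The listing is missing here; a sketch of the exported function, along with the two option setters (profile and goFaster) that the next paragraph mentions (the specific flags are illustrative guesses):

```haskell
-- Types repeated from above so this sketch stands alone
type Option = String
data Config = MakeConfig [Option] deriving (Show)

-- Exported in place of the MakeConfig constructor
configBuilder :: [Option] -> Config
configBuilder options = MakeConfig options

-- Option setters: each supplies its options and closes the builder
profile :: ([Option] -> Config) -> Config
profile builder = builder ["-prof", "-auto-all"]

goFaster :: ([Option] -> Config) -> Config
goFaster builder = builder ["-O2"]
```

With these definitions I can use either setter on the builder, e.g. `profile configBuilder` gives `MakeConfig ["-prof","-auto-all"]` ...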

... but I wouldn't be able to use both profile and goFaster on the same builder.

In object-oriented programming you don't have this problem, because you usually structure your builder to append options instead of setting them so that you can feed in additional options later. Then you extract the final configuration when you are done:

However, Haskell has no implicit state, so we cannot mutate the original builder like the object-oriented version does. Even if we could, it's not clear where we would store additional options since our builder is a function, not a record with fields.

Instead, we must cheat and modify our original functions to return a new builder that "precompiles in" the given options:
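The upgraded listing appears to be missing; a sketch of the appender versions, using the same assumed flags as before:

```haskell
-- Types repeated so this sketch stands alone
type Option = String
data Config = MakeConfig [Option] deriving (Show)

configBuilder :: [Option] -> Config
configBuilder = MakeConfig

-- Option appenders: prepend their options and return a still-open builder
profile' :: ([Option] -> Config) -> ([Option] -> Config)
profile' builder = \options -> builder (["-prof", "-auto-all"] ++ options)

goFaster' :: ([Option] -> Config) -> ([Option] -> Config)
goFaster' builder = \options -> builder (["-O2"] ++ options)
```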

We transformed our option setters into option appenders that leave our builder open just like the object-oriented version. Additionally, we need some sort of extract function that completes the process and returns the final configuration. That's easy since we can flush the builder by supplying it with an empty option list:
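The extract listing is missing; the sketch is a one-liner (types repeated so it stands alone):

```haskell
type Option = String
data Config = MakeConfig [Option] deriving (Show)

-- Flush the builder by supplying it with an empty option list
extract :: ([Option] -> Config) -> Config
extract builder = builder []
```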

However, going back and upgrading all possible option setters into option appenders is a painstaking and laborious process, especially if we already defined a lot of them. Haskell is a functional language, though, so could we just write a function to automate this upgrade process?

In other words, we already wrote some option setter that closes the builder when done:

setter :: ([Option] -> Config) -> Config

... but we want to extend that setter to behave like an appender that keeps the builder open:

extend setter :: ([Option] -> Config) -> ([Option] -> Config)

This means that our extend function must have the following overall type:

extend :: (([Option] -> Config) -> Config)
       -> (([Option] -> Config) -> ([Option] -> Config))

You might expect that this isn't possible. After all, option setters seem to irreversibly use up the builder. However, we can actually cheat by intercepting the setter before applying it to the builder to reserve space for future options:
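The implementation seems to have been dropped here; a sketch of the interception trick the paragraph describes, where the upgraded setter hands the original setter a builder that reserves room for options supplied later:

```haskell
-- Repeated so this sketch stands alone
type Option = String
data Config = MakeConfig [Option] deriving (Show)

extract :: ([Option] -> Config) -> Config
extract builder = builder []

-- Upgrade an option setter into an option appender
extend :: (([Option] -> Config) -> Config)
       -> ([Option] -> Config) -> ([Option] -> Config)
extend setter builder =
    \optionsLater -> setter (\optionsNow -> builder (optionsNow ++ optionsLater))
```

Using goFaster from earlier, `extract (extend goFaster configBuilder)` gives the same result as `goFaster configBuilder`, exactly as the first equation below demands.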

However, all we've proven is that extend works for our goFaster function. How could we prove that it always works in every possible case? We might try to write some tests, perhaps by defining some example option setters, upgrading them to appenders, and simulating a few build processes, but these tests would only prove that extend works in those few cases that we tested. Could we instead prove that extend does the right thing in every conceivable scenario? Is that even possible?

Fortunately, Haskell's enforced purity unlocks a more powerful and more general alternative to testing: equational reasoning. We simply define the expected behavior of our extend function in terms of equations and prove that extend satisfies those equations. But what might that expected behavior be?

Well, we expect that if we extend a setter to become an appender, apply it to a builder, and then immediately extract the result, it should behave as if we had directly used the setter. The equation for that would be:

extract (extend setter builder) = setter builder

In other words, there's no point in extending a setter if we plan to extract the result immediately afterwards.

We might also expect that if we extend the empty option setter (i.e. extract), the resulting appender should not change the builder since extract doesn't supply any options:

Exercise: Prove that:

extend extract builder = builder

There's one last property that's really subtle, but important. If we choose not to extend the second of two setters, we can always go back and extend the second setter later as long as we haven't applied them to the builder yet:

extend (\builder' -> setter2 (extend setter1 builder')) builder
= extend setter2 (extend setter1 builder)

Interestingly, these three equations suffice to completely describe what it means to convert an option setter into an appender. We can formulate any other property in terms of those three properties. This means that we no longer need to test our function at all because we've proven it correct in all possible circumstances.

For example, we might suppose that if we choose not to extend the third of three setters, we can always go back and extend the third setter later on as long as we haven't applied them to the builder yet:

extend (\builder' -> setter3 (extend setter2 (extend setter1 builder'))) builder
= extend setter3 (extend setter2 (extend setter1 builder))

In fact, the proof for the two-setter case generalizes to any number of stages. In other words, if we choose not to extend the last setter in an arbitrarily long chain, we can always go back and extend the last setter later if we haven't applied them to the builder yet.

You might be surprised to learn that we've inadvertently defined our first comonad in our pursuit of simulating object oriented programming, but it might not be clear what exactly a comonad is or why those three equations are the only equations we need to prove.

The Iterator Pattern

Now let's imagine that we're writing a new terminal shell, complete with command line history. We could model history as an infinite iterator that stores events in reverse chronological order:

data Iterator a = a :< (Iterator a)
infixr 5 :<

The history would start off empty, which we model as an infinite stream of empty commands:
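Several listings seem to be missing between this paragraph and the next; a sketch of the initial history plus the retrievals (next, next3) and shifts (next', next3') that the following discussion refers to:

```haskell
data Iterator a = a :< Iterator a
infixr 5 :<

-- The empty history: an infinite stream of blank commands
initialHistory :: Iterator String
initialHistory = "" :< initialHistory

-- Retrievals return a value relative to the cursor, without moving it ...
extract :: Iterator a -> a
extract (cmd :< _) = cmd

next :: Iterator a -> a
next (_ :< (cmd :< _)) = cmd

next3 :: Iterator a -> a
next3 (_ :< (_ :< (_ :< (cmd :< _)))) = cmd

-- ... while shifts move the cursor itself
next' :: Iterator a -> Iterator a
next' (_ :< iterator) = iterator

next3' :: Iterator a -> Iterator a
next3' iterator = next' (next' (next' iterator))
```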

Duplication with minor changes is a code smell. I would like these two functions to remain consistent such that next3' always shifts to the value that next3 points to, but when I duplicate the code I must manually enforce that consistency between them. The "Don't Repeat Yourself" principle says you should not duplicate code precisely to avoid this problem.

Moreover, even the initial duplication could introduce mistakes, such as accidentally deleting one step while modifying the code:

next3' :: Iterator a -> Iterator a
next3' iterator = next' (next' iterator)
-- Oops! I deleted the next and my boss
-- distracted me in the middle of the process
-- so I forgot to replace it with next'

We'd prefer to automate the conversion process both to avoid introducing errors and to enforce consistency between the two functions. In other words, I need a function that takes some sort of retrieval:
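The type and implementation seem to be missing here; a sketch of an extend that upgrades a retrieval into the corresponding shift (the data type is repeated so the sketch stands alone):

```haskell
data Iterator a = a :< Iterator a
infixr 5 :<

-- The upgraded retrieval becomes a shift: every position of the new
-- iterator holds what the retrieval would return from that position
extend :: (Iterator a -> a) -> Iterator a -> Iterator a
extend retrieval iterator@(_ :< rest) =
    retrieval iterator :< extend retrieval rest
```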

Boy, that's a lot of effort to prove that extend upgrades one retrieval correctly. I'd prefer to pick my proofs wisely so that I can prove that extend upgrades all retrievals correctly. But what proofs suffice to nail down what it means to convert a retrieval into a shift?

Well, I expect that if I upgrade a retrieval into a shift, apply the shift, then extract the value under the cursor, I should get the same value as if I had just used the retrieval directly:

Exercise: Prove that:

extract (extend retrieval iterator) = retrieval iterator

This proof is simple and requires no fancy tricks.

Since extract points to the current location, I expect the equivalent shift to not move the iterator at all:

extend extract iterator = iterator

Again, there's one last non-trivial property. If I choose not to extend the second of two retrievals, I can still extend the second retrieval later if I haven't applied both of them to an iterator yet:

extend (\iterator' -> retrieval2 (extend retrieval1 iterator')) iterator
= extend retrieval2 (extend retrieval1 iterator)

Notice how we arrived at the exact same three equations, despite solving two completely unrelated problems. Something very deep seems to lie just beneath the surface.

The Command Pattern

Now imagine that we are trying to build a small DSL for manipulating a thermostat. The thermostat keeps an internal floating point temperature in Kelvin, which it presents to the user as a string representation in Celsius. The thermostat comes with a simple API:

Increment the temperature by one degree

Decrement the temperature by one degree

This is the classic textbook example of object-oriented design where we have some object that maintains state and methods on that object. Unfortunately, we're gluttons for punishment and choose to implement this solution in Haskell, one of the most decidedly anti-object-oriented languages in existence.

We choose to model our thermostat as a pair of values:

The internal representation of temperature, in Kelvin

A function that reads out the temperature into various representations
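The listings seem to be missing here; a sketch of the thermostat pair, one preview (up) and the hand-written adjustment (up') that the following discussion compares (the 298.15 K starting temperature is an assumption of this sketch):

```haskell
type Kelvin  = Double
type Celsius = Double

-- The thermostat pairs the raw state with a read-out function
initialThermostat :: (Kelvin, Kelvin -> Celsius)
initialThermostat = (298.15, \t -> t - 273.15)

-- A preview: report the reading one degree warmer, without
-- touching the internal state
up :: (Kelvin, Kelvin -> a) -> a
up (t, f) = f (t + 1)

-- A hand-written adjustment: actually raise the temperature
up' :: (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> a)
up' (t, f) = (t + 1, f)
```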

However, there's a potential source of bugs in this approach. When a user selects a given preview, nothing requires us to apply an adjustment that matches the preview. We can only enforce this by being sufficiently diligent programmers. Wouldn't it be nice, though, if we could eliminate an entire class of bugs by automatically converting the preview into the correct corresponding adjustment?

In other words, we have a preview function:

preview :: (Kelvin, Kelvin -> a) -> b

... and we want to extend it to behave like the corresponding adjustment function:
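The extend under discussion appears to be missing; a sketch that keeps the raw state and re-points the read-out function at the preview:

```haskell
type Kelvin = Double

-- Upgrade a preview into an adjustment
extend :: ((Kelvin, Kelvin -> a) -> b)
       -> (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> b)
extend preview (t, f) = (t, \t' -> preview (t', f))
```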

Which one is the "right" version? Well, we can decide which implementation is the correct one by specifying the desired behavior in the form of equations, and then seeing which implementation satisfies the equations.

We first expect that if you extract the result of an adjustment, it should match the corresponding preview.

Exercise: Prove that:

extract (extend preview thermostat) = preview thermostat

Both extend up and up' satisfy this criterion, so we can't yet say which one is correct.

We also expect that if you extend a preview of the current state (i.e. extract) to become an adjustment, then this adjustment should do nothing.

Exercise: Prove that:

extend extract thermostat = thermostat

This criterion doesn't apply to any previews other than extract, so we can't use it to select between extend up and up'.

This leaves us with our final equation: If you combine an adjustment and preview, you can extend the pair of them into an adjustment, which should be identical to extending the preview alone:

extend (\thermostat' -> preview2 (extend preview1 thermostat')) thermostat
= extend preview2 (extend preview1 thermostat)

This proves to be the acid test that eliminates our original implementation of up'.

Challenge Exercise: Prove that our original implementation of up' does not play nice with extend. In other words, find a counter-example that proves the following equation false:

extend (\t' -> p (up' t')) t = extend p (up' t)

Challenge Exercise: What bug does the above counter-example predict may happen when you mix extend with up'?

Comonads

At this point, we've tackled three separate object-oriented problems, and in each one we defined extract and extend functions that look remarkably similar. We already know that these two functions always seem to obey the same set of equations, despite having very different types in each case.

However, we can squint a little bit and realize that the types are not as different as they initially seem. Define:
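The definition seems to be missing here; a sketch of the unification (the class below is the standard Comonad class; the three type equations spell out the squint):

```haskell
-- Builder:    w a  =  [Option] -> a
-- Iterator:   w a  =  Iterator a
-- Thermostat: w a  =  (Kelvin, Kelvin -> a)

class Functor w => Comonad w where
    extract :: w a -> a
    extend  :: (w a -> b) -> w a -> w b
```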

The combination of w, extract, and extend is a comonad. The equations we've been proving are known as the "comonad laws".

Comonad laws

Why do we always seem to come across the same set of equations each time? Well, we can get a clue if we define a derived operator in terms of extend:

(f =>= g) w = g (extend f w)

This operator has the nice property that extract is both its left and right identity:

extract =>= f = f
f =>= extract = f

Also, this operator is associative:

(f =>= g) =>= h = f =>= (g =>= h)

In other words, comonads form a category, specifically a "CoKleisli" category, where (=>=) is the associative composition operator for the category and extract is the identity of this composition. The comonad laws are nothing more than the category laws in disguise.

If you take the last three equations and substitute in the definition of (=>=), you will derive the same three comonad laws we have repeatedly proven.
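Carrying out that substitution explicitly, using the definition (f =>= g) w = g (extend f w), a sketch of the derivation:

```
Right identity:  f =>= extract = f
  (f =>= extract) w = extract (extend f w)
  so we need:  extract (extend f w) = f w

Left identity:   extract =>= f = f
  (extract =>= f) w = f (extend extract w)
  so we need:  extend extract w = w

Associativity:   (f =>= g) =>= h = f =>= (g =>= h)
  ((f =>= g) =>= h) w = h (extend (\w' -> g (extend f w')) w)
  (f =>= (g =>= h)) w = h (extend g (extend f w))
  so we need:  extend (\w' -> g (extend f w')) w = extend g (extend f w)
```

These are exactly the three equations we proved for the builder, the iterator, and the thermostat.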

Object-oriented programming

Every time we try to implement object-oriented programming in Haskell we gravitate, inexorably, to modeling objects as comonads. This suggests that object-oriented programming and comonadic programming are one and the same. Haskell programmers have struggled with comonads because we so militantly rejected object-oriented programming.

This means we should approach object oriented programming with humility instead of disdain and borrow object-oriented insight to devise a suitable syntactic sugar for programming with comonads.

Syntactic sugar for comonads

The object-oriented intuition for comonads suggests the syntactic sugar for comonads, which I call method notation.

Notice how it looks like we are declaring an anonymous method and immediately invoking it on the defaultConfig object. Also, we only use setters and yet the method does "the right thing" and does not extend the last setter, as if we were returning the last line directly.

Here we are declaring next3 as if it were part of a class definition. method notation implicitly makes it a function of the corresponding object and brings this into scope, which always implicitly refers to the current state of the object.

If we want to refer to previous states, we just bring the intermediate steps into scope using the "prompt" notation:

We don't have to carefully keep track of whether or not we should use them as accessors or "mutators". method notation automatically gets that correct for us.

Notice the duality with do notation and monads. With monads, unwrapped variables are scarce since we can irreversibly convert them to wrapped values, so do notation keeps them in scope as long as possible. With comonads, wrapped variables are scarce since we can irreversibly convert them to unwrapped values, so method notation keeps them in scope for as long as possible.

Our thermostat example looks just like the classic object-oriented example using method notation, except we don't need to say return at the end as method notation always implicitly returns the value of the last line:

Any function we apply to an invoked method can be converted to a post-fix application:

toString (w # method
    this # up
    this # up
    this)
= (w # method
    this # up
    this # up
    this) # toString

Observe how suggestive the notation is. When we use post-fix syntax, we can just remove the parentheses and it is still correct:

(w # method
    this # up
    this # up
    this) # toString
= w # method
    this # up
    this # up
    this # toString

The post-fix application style plays incredibly nicely with method syntax, even though I never designed it for that purpose. This suggests that the object-oriented tendency towards post-fix function application style falls out naturally from the comonadic style and is not just a quirk of object-oriented tradition.

Comonad laws

The acid test of syntactic sugar is that the notation promotes an intuition for the comonad laws. Let's write the comonad laws using method notation:

... and like do notation, we can flatten both forms to a single normal form:

= w # method
    this # f
    this # g
    this # h

The notation suggests quite naturally that if you call a method on this, it's equivalent to inlining the method directly. Conversely, you can arbitrarily factor out any group of steps into their own method.

Comonad transformers

do notation has the nice property that it promotes the correct intuition for the monad transformer laws, where the lift distributes over the do block:

Conclusion

Object-oriented programming is the long-lost comonadic programming that Haskell programmers have been searching for. Comonads are "object-oriented programming done right".

Monads and do notation transformed the face of Haskell by turning it into the finest imperative programming language. I believe that comonads and method notation may similarly transform Haskell into the finest object-oriented language as well.

26 comments:

This is pretty fascinating! I've toyed around with comonads a bit, but it never occurred to me to think of them as objects.

I've also seen objects presented as negative recursive types (cf http://requestforlogic.blogspot.com/2012/03/what-does-focusing-tell-us-about.html); I'm wondering if there's some way of looking at them that unifies both perspectives? I'm guessing they're orthogonal, though, thanks to the many meanings of the word "object"; Cook-objects appear to be primarily concerned with negativity & mutual recursion, whereas comonadic objects (correct me if I'm wrong) appear to primarily concerned with the inner state of an object. It's interesting (though perhaps only superficially) that both approaches are dual to familiar approaches (datatypes and monads, respectively).

Are you familiar with Dominic Orchard's codo-notation (http://www.cl.cam.ac.uk/~dao29/drafts/codo-notation-orchard-ifl12.pdf)? It appears to be very similar to your method notation, but without the implicit "this" argument. There's also od-notation, but a few moments of quick googling didn't reveal a link to it (though they did reveal an eye doctor in New Hampshire! http://www.eyesightnh.com/new-hampshire/susan-haskell-o-d-f-a-a-o..htm).

Wow, that first link is really good. It very closely mirrors some observations and intuitions I built in the process of working with comonads. For example, comonads and `method` notation resist embedding any effects in between steps. I just answered another person on reddit who asked me about embedding these comonadic computations within a Kleisli category:

The summary is that comonads stubbornly resist any useful interleaving of monadic side effects, which exactly mirrors how that link envisioned the use of these negative recursive types as views.

Moreover, the idea of using comonads as a more powerful case analysis is strongly suggested by "THE" comonad: the cofree comonad. If you combine the cofree comonad with method syntax, you get a DSL for case analysis on the base functor of the cofree comonad.

I am familiar with Dominic Orchard's codo notation, and the entire reason I investigated this line of reasoning was that I was unsatisfied with his approach, which didn't behave the way I expected the rightful dual to 'do' notation to behave. For example, he was still binding unwrapped values within the monad, which seemed intuitively incorrect, and the application of maintaining simultaneity only made sense for a couple of comonads and not all of them.

However, the thing that was most unsatisfying about his approach was the fact that the desugaring was so complicated and required additional structure (i.e. ComonadZip) on top of the Comonad class to implement correctly.

Hm, cool. I'll have to think about the Kleisli category stuff more; I'm not as up on category theory as I'd like to be. I'd be interested to see more about case analysis with the cofree comonad; I'm not entirely sure what that means, exactly.

In the Reddit thread, you mention CoIO; I assume you've read Kieburtz's "Codata and Comonads in Haskell"? He describes the OI comonad there, and some possible ways of using it, but apparently it has issues (http://www.haskell.org/pipermail/haskell-cafe/2006-November/019265.html). I think it's possible to use it for effects in a more restricted setting where information is preserved (coincidentally, my research is on information preservation...), but in a more traditional setting, your intuition matches mine: it seems like OI a (or CcOI a) "should" be the type of pure values of type a. If that's the case, it's possible that it would be more productive (ha) to explore it in an eager language (Idris, perhaps). I think I read something to this effect elsewhere, but I can't remember where.

One side note that I think is interesting: correct me if I'm wrong, but I believe that in modal logic, the diamond/"possibility" operator can be thought of as the free monad, and the lax operator can be thought of as the free strong monad. The box/"necessity" operator, however, doesn't act like the free [costrong?] comonad, because you have the intro rule:

\Delta, \empty |- A
------------------- []I
\Delta, \Gamma |- []A

This is going off of the presentation in Pfenning-Davies (http://www.cs.cmu.edu/~fp/papers/mscs00.pdf), in particular.

But anyway, this rule gives you things like T |- []T which seems...wrong, given that there's no corresponding theorem <>_|_ |- _|_ (pardon the ascii). Again, my category theory is rusty, so maybe that's acceptable, but it seems kind of off; my intuition says you shouldn't be able to get into the cofree comonad (or whatever) unless you're already in it!

Anyway, that's probably a bit rambly at the end there, sorry. Maybe it'll be useful? *shrug*

Could you elaborate more on your research about preserving information? This is a topic that has fascinated me as well.

My intuition about CoIO is that if you were to implement it in terms of CoFree, the key function the user would write would have type:

forall r . CoFree f r -> r

In other words, the way you define 'CoIO' behavior is simply by selecting one 'r' value from the CoFree stream. The universal quantification keeps you honest and forces you to select a value from the CoFree comonad.

For example, the 'r' values could just be assembly code, and depending on which 'r' you select decides which program you compile. However, that approach does not work well for various reasons, but I'm just throwing that crazy idea out there.

I've never read the "Codata and Comonads" paper before, but I remember that I rejected a similar line of thinking because any function of type "a -> ()" should be isomorphic to "()", which means that there should only be one such function. I'm not sure that comonads are supposed to be able to encapsulate effects, particularly when monads do so incredibly well. It would seem odd to me for both an abstraction and its dual to overlap in functionality.

I'm going to preface the logic stuff by saying that logic is totally a weak point of mine. I'm not familiar with the concrete rules of most logic systems since almost everything I've learned about computer science is just a side effect of programming in Haskell, but I'll make an effort to try to follow what you are saying by consulting the relevant Wikipedia articles. :)

So if diamond is possibility, and it's a monad, then we'd expect that:

<><> p -> <> p

p -> <> p

So far so good. If it's a free monad, then there must be an adjunction (I'm going to bastardize notation by mixing logic and Haskell):

(f? p -> m p) ~ ((Monad m) => <> p -> m p)

See, here's the part that's a bit confusing. A free monad requires defining some sort of base functor, but in the case of '<>' it's not clear what that base functor is. Or in other words:

Free f p = <> p implies f = ?

A free monad is at heart a list of 0 or more nested applications of the given functor, so I'm wondering what modal logic functor, when applied 0 or more times, is equivalent to the possibility operator.

The other part that's curious is that it seems like box also should be a monad:

p -> [] p

[][] p -> [] p

Are those valid rules in modal logic?

Also, there's nothing about comonads that says you can't build them. A key point to remember is that the definitions of monads and comonads are positive: they specify a uniform interface to what you can do, and don't forbid anything. Specifically, they don't forbid you from being able to unwrap the monad (with the Identity, Reader, and State monads being the canonical counter-examples). However, these unwrapping operations are non-uniform across monads, whereas the monad operations are uniform across them, so it's better to think of the monad class as simply specifying the uniform property that all monads share.

So in the same way that I don't necessarily expect a monad to be unwrappable, I also don't expect a comonad (even the CoIO comonad) to be uncreatable.

I'm doing this with Amr Sabry at Indiana University, specifically on extending stuff done in his student Roshan James' thesis on "information effects". They have a paper from POPL '12 that's pretty rad: http://delivery.acm.org/10.1145/2110000/2103667/p73-james.pdf?ip=129.79.245.33&acc=ACTIVE%20SERVICE&CFID=225224900&CFTOKEN=62904902&__acm__=1360956345_7a3d0778ff13776dc280f100adbe9ad9 , if it's not paywalled for you.

As far as the base functor is concerned, I'm not actually sure---I'm not up on my categorical logic, unfortunately. Some of Steve Awodey's lecture notes talk about <> and [] as a monad and a comonad; if you ctrl+f "modal" in http://www.andrew.cmu.edu/course/80-413-713/notes/chap10.pdf it's the page with the 2nd and 3rd result. I'm not sure if that's helpful or not, though. Maybe that means they're not free after all? I'm not sure. I think I'd heard someone say they were, but maybe I'm misremembering; I do know that the only operations available on <> (and lax) are only return and bind, though.

Box isn't a monad: the intro rule only allows you to use persistent assumptions when trying to prove []A, erasing your regular context. So if you tried to prove A -> []A, you'd stick A in your context, then immediately forget it. It's bad news! But this means you can prove A -> []A for exactly those A which are equivalent to T. There's no corresponding rule for <> or () that allows you to get out of the monad exactly when the thing in the monad is bottom, though.

And yeah, I guess I might be conflating freeness with the idea that only a certain set of operations is allowed, *and no more*. I guess "cofree" doesn't make sense in that interpretation, though, or at least no sense I can make of it. But the idea is that if you can *only* use those operations, then there's no way to get out of the fancy monad, and no way to get into the fancy comonad. This property is definitely desirable for the IO monad, and Kieburtz seems to have assumed it for his OI comonad.

I should also note that there are tons of different modal logics; I believe the one I'm referring to is canonically called S5 (or IS5, I guess, since intuitionistic logic is where it's at). The paper I linked to is a good reference, if you have the time for it; Frank Pfenning also has some awesome lecture notes on his site. I'd probably recommend them if you want to start learning more about logic in general. He's got an undergrad constructive logic course (http://www.cs.cmu.edu/~fp/courses/15317-f09/) that might be helpful for some basics, but they're also covered briefly in the beginning of his modal logic (http://www.cs.cmu.edu/~fp/courses/15816-s10/) and linear logic (http://www.cs.cmu.edu/~fp/courses/15816-s12/) course notes. Also, I dunno if you're an undergrad or a grad student or a postdoc or professor or industry person or hobbyist or what, but have you heard of OPLSS (http://www.cs.uoregon.edu/Activities/summerschool/summer13/)? Frank gave a great series of talks on logic last year, and iirc he does it pretty much every year.

With respect to the Kieburtz paper, why do you bring up a -> () ? Maybe there's an obvious reason to do so, but I'm missing it, if so. Also, you mention that it'd be weird for an abstraction and its dual to have the same use, which is intuition I share, but it can be wrong. In particular, the reader monad is isomorphic to the product comonad (as proved in http://www.cs.ioc.ee/~tarmo/papers/cmcs08.pdf, which is a great read if you haven't seen it already). So there can be some overlap going on. I also do like the idea of passing around a "token" of sorts that gives you permission to access IO stuff, but you'd need at the very least a linear type system to be able to keep track of it, and using it as a modality is kind of trivial, so yeah, maybe not the best approach.

Here the semicolons separate two different contexts. The first context contains assumptions about validity, which hold in all reachable worlds. The second context holds assumptions about truth, which apply only for the current world.

So, we can derive (using . for an empty context):

A valid ; . |- []A true

But not:

. ; A true |- []A true

The []I rule is essentially similar to the "extend" function, but expressing it using two kinds of assumptions leads to the system having particularly nice properties. Dually, there's a second kind of conclusion for possibility, related to the diamond <> modality.

It should be clear that [] corresponds to a comonad, and <> corresponds to a monad. Further the two are related in a very particular way, as required by modal logic. As a result, e.g., <> is strong with respect to the first context but not the second.

My understanding of what you said is that "necessarily" (i.e. the [] operator) reflects a property that must be true of all reachable worlds, not just the current one. So I think your []I rule says something like:

"If we can infer 'A' even from an empty current world, that means that its truth must be independent of the selected world".

Or, in other words, the [] reflects some sort of truth that *must* originate from a world-independent source.

Then that would make sense as a comonad, since it's obviously true that if something is true in a world-independent way, then it's still true in a world-dependent way:

[]A -> A

Similarly, if 'A' is true in a world independent way, then the "fact that it is true in a world-independent way" is itself true in a world-independent way:

[]A -> [][]A

In other words, world-independence is itself a world-independent property.

I just glossed over it. This actually has some fascinating material for additional uses of comonads. I particularly liked the liveness analysis application. I will have to take time to digest it some more.

Hi Gabriel, you often write "objected-oriented" and "objected oriented" which may be a subtle pun underlining your thesis that Haskell frowns upon object-orientation. Is this intentional, or did you intend to use the common term "object-oriented"?

And here I was starting to think that designing programs through the communication of possibly stateful, ideally referentially-transparent entities over well defined interfaces (pure pipes/lazy pure fp/pure TPL Dataflow Block/) was object-orientation done right! That is, if OOD is to be defined by its published benefits.

This post is about comonads more than it is about object-oriented programming. This post introduces comonads as a subset of object-oriented programming, specifically the fluent programming style pioneered by jQuery. However, I am not done with the analogy and I already know of several other striking parallels to OOP that I have not written up yet, including parallels to inheritance and traits.

It does not bother me in the least, though, that comonads do not cover the full spectrum of OOP as OOP had become an umbrella term for a wide variety of unrelated functionality. I'm simply showing that comonads cover a surprising subset of OOP functionality without any built-in language support for OOP.

One thing I find endearing about Haskell is that, unlike other languages it does not compromise its elegance or simplicity in order to be a "multi paradigm language". Rather it takes the principled position that functional programming is sufficient to implement all other forms of programming. This post and future posts build on that idea to show how even OOP can be a subset of functional programming.

As I understand the question, the extended preview is supposed to *actually* change the thermostat. The above version seems to do that better, in that it changes the left part of the tuple (what I think of as the "actual" underlying temperature setting). The conversion function is then just left as-is.

I really liked this article, and learnt a lot going through the exercises. But I do recommend the answer blog post "comonads are neighbourhoods not objects" which has very good criticism of the thesis developed here. At least it convinced me. But I learnt a lot following the argumentation in any case. http://gelisam.blogspot.co.uk/2013/07/comonads-are-neighbourhoods-not-objects.html

You define configBuilder as being equal to MakeConfig. Therefore, those are two different names for the same function. But I can do something with MakeConfig (pattern matching) that I can't do with configBuilder. This appears to contradict the fact that those are different names for the same function.

Arguably, this is a flaw in Haskell. So let me propose an alternative: do not export any means of constructing a Config value from other data types. Instead, export a "blank" Config value which may be used as the starting point. Users of the module may then feed this into a provided function, along with their arguments, to achieve the same effect as building a Config "from scratch."

Besides, the comonad structure does not include a "constructor" function. But the monad structure does, and that is the "unit" function, which, in Haskell, is called "return." Being the dual, comonads have the opposite instead: extract.

At time of writing, the article relies on Haskell's current lack of support for pattern matching on alternative names of the data constructor. What if a future version of Haskell was smart enough to see that configBuilder was just another name for MakeConfig and thereby allows pattern matching on configBuilder? Well, then a user of the module could bypass the interface by pattern matching on configBuilder.

But when does a mathematician ever say that you don't have permission to use a function he/she defined? That's more of an engineering thing than a math thing, unless we're modeling agents interacting with each other.

Instead, I'd stress placing constraints into the definition of the data type itself concerning which values it may contain. You can accomplish this with group theory: you have a generating set of functions that accept a value of the internal type and output a value of that type as well. The group operator is function composition, and the underlying set of the group holds all the functions we can get by composing functions from our generating set.

We also have a "starting value," the one wrapped by the "blank" Config value mentioned earlier. We define our constrained, comonadic data type as a wrapper around each value x of the underlying type for which there exists a function in the group that accepts the starting value and gives x. Intuitively, we choose all values the group can "reach" from the starting value.

Your one blog article has provoked me to so much thought! I'm very glad you wrote it. It's greatly interesting that Category Theory predicts programming paradigms.

I'll end with a wild thought: what if we had a programming system with more categories than our one Category of Data Types, Hask? We could have functors and natural transformations for going between categories of different type systems. For example, we may have a category of discrete data types, like Hask, and a category of OOP-y, nested types, and we'll want ways of going back and forth. Or even a category for "constrained" data types. Just a weird, wild thought.