As functional programmers, we know how useful category theory can be for our work - or perhaps how abstruse and distant it can seem. What is less well known is that applying category theory to the real world is an exciting field of study that has really taken off in just the last few years. It turns out that we share something big with other fields and industries - we want to make big things out of little things without everything going to hell! The key is compositionality, the central idea of category theory.
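To make "big things out of little things" concrete (my own minimal sketch, not taken from the talk): in Haskell, the category-theoretic idea of composing morphisms shows up directly as function composition, where small well-typed pieces snap together into a larger whole.

```haskell
-- Minimal sketch: morphism composition in the category of Haskell
-- functions. Each piece is small and total; the "big thing" is
-- built purely by composing them with (.).
import Data.Char (toUpper)

trim :: String -> String
trim = reverse . dropWhile (== ' ') . reverse . dropWhile (== ' ')

shout :: String -> String
shout = map toUpper

exclaim :: String -> String
exclaim = (++ "!")

-- Composition is associative and has an identity (id),
-- which is exactly what makes functions a category.
announce :: String -> String
announce = exclaim . shout . trim

main :: IO ()
main = putStrLn (announce "  hello world  ")  -- prints "HELLO WORLD!"
```

The point is not that the example is deep; it's that nothing can go wrong at the seams, because the types of the little pieces are the only interface the big piece depends on.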

Alas, so far at least the talk hasn't worked for me. Overall, I couldn't connect the things he was talking about to the basic mathematics of category theory; that aspect of the talk came across to me as (except for some trivial bits) too hand-wavy to see actual categories in it. Admittedly, I don't consider myself to have grasped a mathematical concept until I grasp it intuitively, which is quite a high bar for abstract mathematics; but if I hold myself to that standard when explaining math to others, why shouldn't I ask as much of others when they're explaining things to me?

His first line set my teeth on edge: "Category theory. Some of you might know it as a valuable part of your functional programmers' toolkits, others might know it as scary nonsense that some people just won't shut up about." I've noticed some people condescendingly assume that failure to love category-based pure-functional programming can only be explained by fear of mathematics. Granted, the talk doesn't hinge on that; but it did start things off on the wrong foot, for me.

This is a talk intended as an introduction and an invitation for people who may never have looked at category theory before. As such it qualifies as a success: it has made at least one person who hadn't looked into category theory before want to learn more about it.

I've kept wondering about this, and really wanting to give it a second look, to understand the different ways this talk has evidently struck different listeners. Now I have, and here are some further thoughts.

• Something I've been through a few times, and have observed others go through as well: I'll read a novel, or watch a movie, because someone highly recommended it, and it doesn't work at all for me because it keeps taking me in directions I don't expect and don't want to go. And afterward, thinking about it, I realize a major part of the trouble was that I went into it expecting a different type of story. Mistaken expectations can ruin what ought to be a fine piece of storytelling. I think some of that happened to me with this talk. I went into it knowing some category theory and thinking about it mathematically, and the talk is aimed at folks unfamiliar with the subject and provides an overview with too little time to get into technicalities. So a sketch that could be intriguing for some people was more frustrating for me.

• As an introduction to the subject, the earlier parts, where I'm already familiar with the mathematics, seemed pretty sound. If I knew how the later parts connected to the math, I might think the same of them; but I don't quite know how they connect, and having been led by my prior knowledge to think of the earlier parts in mathematical terms, by the time I reach the later parts I want connections that just aren't there.

• A point I saw go by in his conclusions was "Supports marvelous graphical languages" (a written point only, I think). It resonated with an insight I've had in mind for some years now, about how category theory sits in relation to mathematics generally. Up until about five hundred years ago, mathematics was written out in words. Imagine solving, say, a quartic polynomial in Latin. Then from the 1500s forward mathematicians started to develop arithmetic notations, things like the infix "+" and "−", and "=". And mathematics suddenly accelerated, from nearly a dead stop, into overdrive. The notational revolution reached its height with Leonhard Euler in the eighteenth century, and its consequences played out through the nineteenth century. And then in the twentieth century it went kind of awry, because of abstract mathematics. The new stuff was too abstract for the simple elegant algebraic notation, so it had to be expressed in words. And things have gotten dreadfully messy because of that. But while category theory too suffers greatly from this problem with words, its saving grace is that it affords some lovely opportunities to express elegant mathematics in pictures.

• A rhetorical question (iirc) raised late in the talk: he said smart people tended toward compositionality "even" without category theory, so how much more powerful would it be to apply category theory explicitly. That grated on me so much, I just can't resist registering an objection. Explicit theory does not always improve things; sometimes its effect is more to limit them. I'm not saying category theory doesn't come in handy as a tool; just that I'm leery of his assumption there that formalizing an intuitive approach is necessarily an unmitigated improvement.


A few months ago I tried to understand LabVIEW from NI, because some of my colleagues use it and I wanted to know whether I could perform consistency checks of LabVIEW programs against circuit specifications exported from Comos (e.g. checking the number of pins of a virtual instrument). Trying to read the LabVIEW docs was a strange experience, because it is all pictures with wordy explanations. NI keeps LabVIEW clean of any infection by formal or programming-language notation. The pictorial turn and the superiority of the image must be preserved. In an alternative history, dominated by the priests of LabVIEW, we might never have developed formal notations at all.

I'm pretty sure NI also understands LabVIEW "well". It is just that their kind of understanding flows naturally in the opposite direction. In a way I appreciate their lack of an ecumenical solution and consider it a strength. However, they cannot be completely faithful to the picture either, which is why they write so many words.

But LabVIEW in particular is relatively inconsistent (e.g. a subVI has different semantics than the equivalent subgraph, due to an implicit sequencing frame), overly stateful (e.g. those property nodes), and hides too many properties of programs behind deep menus. It's very difficult to understand or reason about a modular program, and the attack surface is basically all the things.

We could certainly go a lot further with consistent use of pictures; cf. interaction nets.