

The category Δ\Delta and the nerve construction arise canonically
from the free category monad on directed graphs.

By the ‘nerve construction’ I mean the usual functor N:Cat→[Δop,Set]N: \mathbf{Cat} \to
[\Delta^{op}, \mathbf{Set}], from small categories to simplicial
sets.

I’ll start by reviewing the nerve construction. Then I’ll
explain why for a long time I didn’t accept it as something natural
— and why I finally did accept it. I’ll also give some
examples and write a few words about Mark’s new work.

At the heart of the explanation will be a generalized nerve
construction:

Let T\mathbf{T} be a nice monad on a nice category. Then there
are a canonically-defined category ΔT\Delta_\mathbf{T} and ‘nerve’
functor
\mathbf{Alg}(\mathbf{T}) \to [\Delta_\mathbf{T}^{op}, \mathbf{Set}]
with all the excellent properties of the ordinary nerve
functor.

The ordinary nerve construction comes from taking T\mathbf{T} to be
the free category monad on the category of directed graphs.

Since totally ordered sets can be regarded as special categories, and
order-preserving maps are then just functors, you can view Δ\Delta
(the category of finite nonempty totally ordered sets
[0],[1],[2],…[0], [1], [2], \ldots) as a full subcategory of
Cat\mathbf{Cat}. The inclusion Δ→Cat\Delta \to
\mathbf{Cat} induces a functor
\begin{aligned}
N: \mathbf{Cat} &\to [\Delta^{op}, \mathbf{Set}] \\
C &\mapsto Hom(-, C),
\end{aligned}
called the nerve functor.
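To see the definition in action, here is a minimal Python sketch (my own illustration, not from the post; all names are mine) that computes the levels of the nerve of a finite category: a point of NnN_n is a string of nn composable arrows, and the test case is the ordered set [2][2] viewed as a category.

```python
def nerve_level(objects, arrows, src, tgt, n):
    """N_n of the nerve: strings of n composable arrows (N_0 = objects)."""
    if n == 0:
        return [(o,) for o in objects]
    chains = [(a,) for a in arrows]
    for _ in range(n - 1):
        # extend each chain by every arrow whose source matches its end
        chains = [c + (a,) for c in chains for a in arrows
                  if tgt[c[-1]] == src[a]]
    return chains

# The ordered set [2] = {0 <= 1 <= 2} as a category: one arrow (i, j)
# for each pair i <= j; composition just composes inequalities.
objs = [0, 1, 2]
arrs = [(i, j) for i in objs for j in objs if i <= j]
src = {a: a[0] for a in arrs}
tgt = {a: a[1] for a in arrs}
```

This agrees with counting chains in [2][2]: 3 objects, 6 arrows (including identities), 10 composable pairs.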

The first excellent property of the nerve functor is that it is full
and faithful. (In the jargon, Δ\Delta is a
dense subcategory of Cat\mathbf{Cat}.) Because of this,
Cat\mathbf{Cat} is equivalent to a full subcategory of [Δop,Set][\Delta^{op},
\mathbf{Set}], namely, the full subcategory consisting of all
simplicial sets isomorphic to the nerve of some category. This
subcategory is called the essential image of the nerve
functor.

The second excellent property of the nerve functor is that there is an
intrinsic description of its essential image.
In fact, there are many such descriptions, and you probably know a
couple (e.g. via Segal conditions, or unique filling of inner horns).
I’ll give one in detail later. So you could define a category
as a simplicial set conforming to one of these descriptions, and a
functor as a map between such simplicial sets.

What’s not to love?

The nerve construction is simple, clean, and links category theory to
simplicial sets, which are important in topology and other areas. Why
wasn’t it love at first sight?

Well, there’s a difference between useful and natural.
For example, the notion of triangulated category has certainly been
useful, but almost no one who’s thought about it believes that the
definition is ‘right’. Brutally put, it’s a hack: useful, but not
natural. (I’d say the same of model categories.) There’s no doubt
that the nerve construction has been found useful, but I wasn’t
convinced that it was natural.

OK, but what does ‘natural’ mean? It’s a matter of aesthetics, and
obviously there’s no precise answer, but here’s my stab at it. When you
meet a new definition, there are two questions you might ask:

where does it come from?

and

why is the definition exactly that, not something
slightly different?

If there’s a satisfactory answer to both, you can call the definition
‘natural’.

Let’s try this out on the nerve functor.

Where does it come from?
Since the nerve functor is
induced by the inclusion Δ→Cat\Delta \to
\mathbf{Cat}, we’re really asking where Δ\Delta and its inclusion
into Cat\mathbf{Cat} come from.

One answer might be that Δ\Delta is the free monoidal category on a
monoid. But that’s just wrong: Δ\Delta, as defined above and in most of
the literature, does not include the empty set, whereas this free
monoidal category D\mathbf{D} is a skeleton of the category of
finite, possibly empty, totally ordered sets. Had you been shooting for
D\mathbf{D}, you’d have hit it — but presheaves on D\mathbf{D}
are not the same as simplicial sets.

Another answer is geometric: think of triangles, tetrahedra, etc. and
the maps between them. But how do you justify the ordering of the
vertices? That doesn’t seem very geometric. And even if you can
justify it, it would be more satisfactory if you could say where
Δ\Delta came from without appealing to the visual sense — by
thought alone.

Why is the definition exactly that, not something slightly
different? The nerve functor would continue to be full and
faithful if you replaced the category Δ\Delta of finite nonempty
totally ordered sets by the category of finite nonempty partially
ordered sets, or changed ‘finite’ to ‘countable’, or dropped the
nonemptiness, or any combination of these. (I’m sure you could
also find intrinsic descriptions of the essential image in each of
these cases.) You might wonder whether Δ\Delta was somehow minimal, but
no: if you replace Δ\Delta by its full subcategory consisting of just
[0][0], [1][1] and [2][2], the nerve functor is still full and faithful.

Asking these questions is a substitute for innocence. When
you meet an unmotivated definition, you want to ask ‘but
why?’ However, unless you’re completely impatient, you’ll
suspend judgement until you’ve seen a few results using the
definition… and before you know it, you’ve forgotten that innocent
first question.

In my pre-revelation years, I didn’t understand why the nerve
construction was a natural thing. I didn’t actively think that it was
unnatural — I just remained to be convinced.

Generalized nerves

The following theorem is what convinced me. I’ll state it roughly now
and fill in most of the details afterwards.

Theorem
Let T\mathbf{T} be a nice monad on a nice category ℰ\mathcal{E}.
Then there is a canonical small full subcategory ΔT\Delta_\mathbf{T} of
Alg(T)\mathbf{Alg}(\mathbf{T}) such that the induced functor
N_T: \mathbf{Alg}(\mathbf{T}) \to [\Delta_\mathbf{T}^{op}, \mathbf{Set}]
is full and faithful. Its essential image consists of the presheaves
on ΔT\Delta_T preserving certain limits.

Call a presheaf on ΔT\Delta_\mathbf{T} a T\mathbf{T}-simplicial
set. Then a T\mathbf{T}-algebra can be regarded as a
T\mathbf{T}-simplicial set satisfying a limit-preservation
condition. The slogan is this:

T\mathbf{T}-simplicial sets are to T\mathbf{T}-algebras
as
ordinary simplicial sets are to categories.

To help us make precise the vague parts of the theorem, let’s go back
to the ordinary nerve
construction, which corresponds to taking the free category
monad T\mathbf{T} on the category ℰ\mathcal{E} of directed graphs.
First note that ℰ\mathcal{E} is a presheaf category, since a directed
graph is a presheaf on the small category
\mathbf{E} = (0 \stackrel{\to}{\to} 1).
Convention: I’ll write the value of a
presheaf at an object using a subscript; for instance, if XX is a
directed graph (a presheaf on E\mathbf{E}) then X0X_0 and X1X_1 mean
X(0)X(0) and X(1)X(1). Now, the free category functor TT can be
described as follows: for a directed graph XX,
\begin{aligned}
(T(X))_0 &= Hom([0], X), \\
(T(X))_1 &= \sum_{n \in \mathbb{N}} Hom([n], X).
\end{aligned}
Here [n][n] means the directed graph
•→•→⋯→•\bullet \to \bullet \to \cdots \to \bullet with nn arrows; the free
category on it is the ordered set also called [n][n]. The
summation means coproduct.
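As a sanity check on these formulas, here is a small Python sketch (my illustration, assuming an acyclic input graph so that the path search terminates): the arrows of the free category on a directed graph are its finite paths, matching (T(X))1=∑nHom([n],X)(T(X))_1 = \sum_n Hom([n], X), since a path of length nn in XX is exactly a map [n]→X[n] \to X.

```python
def free_category_arrows(vertices, edges):
    """Arrows of the free category on a directed graph: all finite paths,
    including an empty (identity) path at each vertex, stored as
    (source, target, tuple-of-edges). Terminates only for acyclic graphs."""
    paths = [(v, v, ()) for v in vertices]          # identity paths
    frontier = paths
    while frontier:
        # extend each path on the frontier by one composable edge
        frontier = [(s, e[1], es + (e,))
                    for (s, t, es) in frontier
                    for e in edges if e[0] == t]
        paths += frontier
    return paths

# The graph [2]: vertices 0, 1, 2 and arrows 0 -> 1 -> 2.
arrows = free_category_arrows([0, 1, 2], [(0, 1), (1, 2)])
```

The free category on the graph [2][2] has 6 arrows (3 identities, 2 edges, 1 length-two path), agreeing with the 6 arrows of the ordered set [2][2].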

Here are the precise hypotheses. The category ℰ\mathcal{E} is
‘nice’ if it is a presheaf category
[Eop,Set][\mathbf{E}^{op}, \mathbf{Set}]. The monad T=(T,η,μ)\mathbf{T} = (T, \eta,
\mu) is ‘nice’ if it satisfies the following two conditions:

For each e∈Ee \in \mathbf{E}, the functor (T(−))e(T(-))_e is a coproduct of
representables. This means that there are a set I(e)I(e) and a family
(We,i)i∈I(e)(W_{e, i})_{i \in I(e)} of presheaves such that
(T(X))_e = \sum_{i \in I(e)} Hom(W_{e, i}, X)
for all presheaves XX.

An equivalent condition is that TT preserves connected limits.

Another equivalent condition is that the induced functor ℰ→ℰ/T(1)\mathcal{E}
\to \mathcal{E}/T(1) has a left adjoint; one then says that TT is a
‘parametric right adjoint’, as in Mark’s title.

The unit η\eta and multiplication μ\mu are cartesian natural
transformations (that is, the naturality squares are not only
commutative but pullbacks).

These conditions on a monad are very commonly met in
higher-dimensional category theory. (Sometimes TT is only required to
preserve pullbacks, not all connected limits.) I’ll give examples later.

We can now see exactly what ΔT\Delta_\mathbf{T} is. Write F:[Eop,Set]→Alg(T)F:
[\mathbf{E}^{op}, \mathbf{Set}] \to \mathbf{Alg}(\mathbf{T}) for the free
T\mathbf{T}-algebra functor. Then ΔT\Delta_\mathbf{T} is the full
subcategory of Alg(T)\mathbf{Alg}(T) consisting of the algebras F(We,i)F(W_{e,
i}), for e∈Ee \in \mathbf{E} and i∈I(e)i \in I(e).

In the motivating example, the free category monad on directed graphs,
ΔT\Delta_\mathbf{T} is the full subcategory of Cat\mathbf{Cat} whose
objects are F([0])F([0]) (from taking e=0e = 0) and F([0]),F([1]),F([2]),…F([0]), F([1]),
F([2]), \ldots (from taking e=1e = 1). In our notation, the category
F([n])F([n]) is written as [n][n], so ΔT=Δ\Delta_\mathbf{T} = \Delta. It
makes no difference that the category F([0])F([0]) was listed twice. So
we recover the usual nerve construction.

I won’t describe the limit-preservation
condition, since it’s a bit more complicated, but it can be done
explicitly in
terms of the representing family (W••)(W_{\bullet\bullet}). In the motivating
example, it says that a simplicial set XX is the nerve of a category
if and only if for each k,n1,…,nk∈ℕk, n_1, \ldots, n_k \in \mathbb{N}, the colimit
[n_1] +_{[0]} [n_2] +_{[0]} \cdots +_{[0]} [n_k]
=
[n_1 + \cdots + n_k]
in Δ\Delta is turned by XX into a limit in Set\mathbf{Set}. (Here
+[0]+_{[0]} means pushout over [0][0].) In other words, for all kk and
nin_i, the canonical map
X_{n_1 + \cdots + n_k}
\to
X_{n_1} \times_{X_0} X_{n_2} \times_{X_0} \cdots \times_{X_0} X_{n_k}
is an isomorphism. This is one of the well-known Segal-type
characterizations of nerves of categories.
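As a toy verification (my own sketch, not from the post), one can check the lowest case k=2k = 2, n1=n2=1n_1 = n_2 = 1 for the nerve of the ordered set [2][2]. By construction the 2-simplices of a nerve are exactly the composable pairs, so the Segal map is tautologically a bijection; the code just makes the fibre product explicit and shows that deleting even one 2-simplex destroys the condition.

```python
# Segal condition in the lowest nontrivial case, for the nerve of [2].
objs = [0, 1, 2]
X1 = [(i, j) for i in objs for j in objs if i <= j]    # 1-simplices: arrows
X2 = [(f, g) for f in X1 for g in X1 if f[1] == g[0]]  # 2-simplices: composable pairs
# The fibre product X1 x_{X0} X1: pairs of arrows meeting at a common object.
fibre = {(f, g) for f in X1 for g in X1 if f[1] == g[0]}
# The Segal map sends a 2-simplex to its pair of outer faces; for a nerve
# built from composable pairs this is a bijection on the nose.
assert set(X2) == fibre
# Forgetting a 2-simplex destroys surjectivity: the result is not a nerve.
assert set(X2[:-1]) != fibre
```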

That’s how I learned to love the nerve construction. I loved it even
more when I figured out some other examples.

Examples

MM-sets Let’s begin with a simple one. Fix a monoid
MM. The category of sets is certainly ‘nice’, as is the free left
MM-set monad M×−M \times - on Set\mathbf{Set}. If you work it out,
you’ll see that ΔT=Mop\Delta_\mathbf{T} = M^{op}, so that a T\mathbf{T}-simplicial
set (presheaf on ΔT\Delta_\mathbf{T}) is a functor M→SetM \to
\mathbf{Set}. The limit-preservation condition is vacuous,
so we recover the basic observation that the category of left MM-sets
is the functor category [M,Set][M, \mathbf{Set}].
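A quick Python sketch of this observation (the monoid and action below are hypothetical example data, chosen only for illustration): an algebra structure map a:M×X→Xa: M \times X \to X for the free MM-set monad is precisely a left action of MM on XX.

```python
# An algebra for the free M-set monad T(X) = M x X is exactly a left M-set.
# Example monoid: M = (Z/3, +) with unit 0.
M, e = [0, 1, 2], 0
def op(m, n): return (m + n) % 3

X = ['x', 'y', 'z']
def act(m, x): return X[(m + X.index(x)) % 3]   # structure map a: M x X -> X

# The monad-algebra laws for (X, act) are precisely the M-set axioms:
unit_law = all(act(e, x) == x for x in X)
assoc_law = all(act(op(m, n), x) == act(m, act(n, x))
                for m in M for n in M for x in X)
```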

Strict nn-categories Fix n∈ℕ∪{∞}n \in \mathbb{N} \cup
\{\infty\}. Let ℰ\mathcal{E} be the category of nn-dimensional
globular sets (also called nn-graphs), which is ‘nice’ (a presheaf
category). Let T\mathbf{T} be the free strict nn-category monad on
ℰ\mathcal{E}, which is also nice. Then ΔT\Delta_T is the category
whose objects are globular pasting diagrams of dimension at most nn,
viewed as strict nn-categories, and whose maps are strict
nn-functors.

(To view a globular pasting diagram as an nn-category, picture, say,
a 2-dimensional such diagram. The 0-, 1- and 2-cells in the diagram
freely generate a 2-category. This gives rise to an nn-category for
any n≥2n\geq 2 by adding just identities in higher dimensions.)

This category ΔT\Delta_T has been much studied, especially in the case
n=∞n = \infty. André Joyal seems to have been the first to do
so, in an unpublished note ‘Disks, duality and Θ\Theta-categories’.
He wrote Θ\Theta for ΔT\Delta_T, and proposed that a weak
∞\infty-category might be defined as a presheaf on Θ\Theta
satisfying certain conditions. (For details, see Definition J of
this
or
page 269 of this.)
Clemens Berger
showed that the
nerve functor Str∞Cat→[Θop,Set]\mathbf{Str}\infty\mathbf{Cat} \to [\Theta^{op},
\mathbf{Set}] is full and faithful, which is a special case of the
general result above. I have the impression that Michael Makkai and
Marek Zawadowski also proved this independently, but I don’t have a
reference.

Weak nn-categories Several of the proposed definitions
of weak nn-category are of the form ‘a weak nn-category is an
algebra for a certain monad on a certain category’. The monad and the
category are (almost?) always ‘nice’ in our sense. This means that a
weak nn-category can also be regarded as a presheaf-with-properties:
that is, a presheaf on ΔT\Delta_\mathbf{T} (where T\mathbf{T} is the
free weak nn-category monad) preserving certain limits.

People often make the distinction between ‘algebraic’ definitions of
weak nn-category (in which an nn-category is a presheaf with
structure) and ‘non-algebraic’ definitions (in which an
nn-category is a presheaf with properties). This result shows
how algebraic definitions can be regarded as non-algebraic.

Multicategories A multigraph consists of a set
X0X_0 and, for each a1,…,an,a∈X0a_1, \ldots, a_n, a \in X_0, a set Hom(a1,…,an;a)Hom(a_1,
\ldots, a_n; a). Let T\mathbf{T} be the free (non-symmetric)
multicategory monad on the category of multigraphs. Both category and
monad are ‘nice’, so the theorem applies and we have a nerve
construction for multicategories. The category ΔT\Delta_\mathbf{T}
has as its objects all finite planar rooted trees.

Weber’s new work

Here I’ll say a tiny bit about how Mark’s paper goes beyond what I
did — though I’m conscious that I’m missing out large
parts of his work.

First, Mark’s proof of the result above is probably more efficient
than mine. He makes effective use of a factorization system, whereas
I just went at it directly and got a great long proof that I didn’t
much fancy writing up.

Second, Mark seems to have a result that resembles the one above but
works in greater generality. Here’s an important example. Over the
last few years, Ieke Moerdijk and Ittay Weiss have
developed
the theory of dendroidal sets. The slogan is:

dendroidal sets are to symmetric multicategories
as
simplicial sets are to categories.

We almost did this example just now, except that we did
non-symmetric multicategories. To get dendroidal sets, we’d
want to use the free symmetric multicategory monad on multigraphs
— but this monad isn’t ‘nice’ in the sense above.
However, Mark’s theory does manage to capture this example.

It would be great if someone else wrote something about the
parts of Mark’s paper that go beyond what I’ve explained.

Posted at January 6, 2008 9:30 PM UTC



Re: How I Learned to Love the Nerve Construction

OK, but what does ‘natural’ mean? It’s a matter of aesthetics, and obviously there’s no precise answer…

Are you likening this mathematical judgement to ones exercised on works of art?

In 1877 John Ruskin said of Whistler’s Nocturne in Black and Gold: The Falling Rocket

I have seen, and heard, much of Cockney impudence before now; but never expected to hear a coxcomb ask two hundred guineas for flinging a pot of paint in the public’s face.

Should we say that there’s a sense in which there is a greater possible ‘objectivity’ to mathematical aesthetic judgements? If so, or if you see them as equally objective, how would you describe that objectivity?

Re: How I Learned to Love the Nerve Construction

Are you likening this mathematical judgement to ones exercised on works of art?

Not actively, but clearly it’s something one might do.

I don’t know if there’s more agreement about what’s mathematically ‘natural’ than there is, say, about what’s good art. I suspect that if you gather together all the mathematicians in any particular discipline then they’ll have some shared sense of what’s natural.

I also think that category theorists tend to have an especially finely tuned sense of the natural (which is not to say that it’s in any way correct).

Re: How I Learned to Love the Nerve Construction

What we could do with are lots of examples of mathematicians publicly hemming and hawing about whether or not they love various constructions, whether they find things natural or merely useful.

Then we could consider how much variation there is in such judgements, and whether tastes change much over time.

I made a start, as you may recall, with that chapter of mine on groupoids. There’s enough material already out there for young researchers to wade in. But blogs should provide a welcome new resource.

Re: How I Learned to Love the Nerve Construction

What we could do with are lots of examples of mathematicians publicly hemming and hawing about whether or not they love various constructions, whether they find things natural or merely useful.

When I was a lad, I asked Alexander Givental if some construction, which he had just explained, was canonical. His reply: “What means canonical? It means I tell you how to do it.”

In other words, there are no choices involved. I think this is what most mathematicians mean when they say a mathematical object is canonical. I also think they would say ‘natural’ is a synonym for ‘canonical’. (As I learned here,
that should not be confused with functorial. Getting schooled on naturality once every ten years or so isn’t so bad, I guess.) So I think there is probably a lot of agreement on what it means for a construction to be natural.

On the other hand, I think there would be lots of disagreement over which arguments, concepts, and explanations are natural. In that sense, ‘natural’ might be closer to ‘comfortable’. There are probably lots of instances where mathematicians Jack and Jill each have a way of understanding X and where Jack thinks his point of view explains what Jill did and Jill thinks hers explains his.

Re: How I Learned to Love the Nerve Construction

Hi Valeria. No, I’m afraid I don’t know of such a thing. If it does exist, someone quite likely to know about it is Richard Garner at Macquarie, who apart from being generally well-informed did his thesis on polycategories.

By the way, the button for commenting on the post itself is (for some reason) at the very bottom of the page. I don’t know why there isn’t one at the bottom of the post itself. Maybe it’s to encourage people to read the comments before commenting themselves, or maybe it wasn’t by design.

Re: How I Learned to Love the Nerve Construction

Here is how I see some naturality in the construction of Mark Weber.

For this, I need to go a little deeper into the technicalities.

The main question might be: what might we mean by a nice monad?
Mark found a very nice class of such:
if you work on a category P(C)P(C) of presheaves on CC (this is to simplify a little bit), and if you consider
a monad TT on P(C)P(C), then ‘nice’ means that TT is a parametric right adjoint (p.r.a.), i.e. that the functor

P(C) \to P(C)/T(1)

induced by TT has a left adjoint (or, equivalently, preserves limits). Mark gives some very nice and useful necessary and sufficient conditions for a functor to be a p.r.a. As André Joyal pointed out to me, these conditions can be translated into the following form.

If TT is any monad on P(C)P(C), we can form Kl(T)Kl(T), the Kleisli category of TT, and there is a natural candidate for a factorization system on Kl(T)Kl(T). If I remember correctly, a factorization system on a category MM is the data of two subcategories AA and BB of MM such that every identity of MM belongs to AA and BB, and such that any map ff of MM factors as f=baf=ba, where bb is in BB and aa is in AA. This factorization is required to be unique up to unique isomorphism. Note also that such a factorization system is completely determined by either AA or BB.
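For readers meeting factorization systems for the first time, the prototype on Set\mathbf{Set} is (surjections, injections). Here is a minimal Python sketch of that familiar case (my illustration only; it is not the free-map/generic-map factorization on Kl(T)Kl(T) discussed here):

```python
def factor(f, domain):
    """Factor a function f (given as a dict on the domain) as f = m . e:
    e is surjective onto the image, m is the inclusion of the image."""
    image = {f[a] for a in domain}
    e = {a: f[a] for a in domain}   # surjection  domain ->> image
    m = {b: b for b in image}       # injection   image >-> codomain
    return e, m

f = {0: 'a', 1: 'b', 2: 'a'}
e, m = factor(f, [0, 1, 2])
```

The uniqueness-up-to-unique-isomorphism clause corresponds to the fact that any two image factorizations of the same function differ by a unique bijection between the middle sets.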

If we come back to our monad TT, there is a candidate for a factorization system on Kl(T)Kl(T), by setting

A = free maps

And one of Mark’s very nice results is that TT is p.r.a. monad iff free maps define a factorization system on Kl(T)Kl(T).

This leads to the construction of the category ΔT\Delta_T introduced by Tom above. Maybe the most natural candidate would be ΔT=Kl(T)\Delta_T=Kl(T), but the game is to take the minimal construction with respect to CC:
the natural conditions we would like for ΔT\Delta_T are:

1) This should be a full subcategory of Kl(T)Kl(T).

2) The canonical factorization system on Kl(T)Kl(T) should induce a factorization system on ΔT\Delta_T.

3) For any object c of C (seen as a representable presheaf on C), T(c) should be in ΔT\Delta_T.

4) ΔT\Delta_T should be minimal among the full subcategories of Kl(T)Kl(T) satisfying the conditions 1), 2) and 3).

And indeed, this is Mark’s definition (this is where the notion of generic map appears: these are just the maps which belong to BB): ΔT\Delta_T is defined as the closure of T(C)T(C) under factorizations in Kl(T)Kl(T). The density of CC in P(C)P(C)
also provides a canonical notion of Segal condition which describes TT-algebras as certain presheaves on ΔT\Delta_T.

It seems to me that this suggests the naturality of ΔT\Delta_T (with respect to the Yoneda embedding of CC in P(C)P(C)) is reduced to the naturality of Kl(T)Kl(T).

I cannot resist adding another example.
There is a notion of (non-reflexive) symmetric graph: a graph with an involution which exchanges sources and targets. These do indeed form a presheaf category on a small category CC (I leave it to you as an exercise to give an explicit description of CC; hint: CC has only two objects…). There is a monad TT on these symmetric graphs whose algebras are 11-groupoids, and it seems to me that it is a p.r.a. monad. The corresponding category ΔT\Delta_T is the category of nonempty finite sets, and the nerve functor gives back the characterization of groupoids as symmetric simplicial sets satisfying the Segal condition.

I suspect strongly that we can construct strict nn-groupoids this way for n≥2n\geq 2. Has anyone thought about it?

Re: How I Learned to Love the Nerve Construction

Thanks! That’s really helpful.

You wrote:

The main question might be: what might we mean by a nice monad? Mark found a very nice class of such: if you work on a category P(C)P(C) of presheaves on CC (this is to simplify a little bit), and if you consider a monad TT on P(C)P(C), then ‘nice’ means that TT is parametric right adjoint (p.r.a), i.e. that the functor
P(C) \to P(C)/T(1)
induced by TT has a left adjoint.

Now I’m confused. This definition of ‘niceness’ is the same as the one that I gave — see the first bullet point in the middle of the ‘Generalized nerves’ section. You’ve only mentioned the functor part of the monad, but looking at Definition 2.3 of Mark’s paper, a monad is p.r.a if and only if its functor part is p.r.a and its unit η\eta and multiplication μ\mu are cartesian natural transformations. So, a monad on a presheaf category is p.r.a if and only if it is ‘nice’ in the sense of my post.

The reason why this puzzles me is that Mark seems to capture more examples than I did, e.g. dendroidal sets. (See the very end of my post.) I know he allows more general categories than presheaf categories, but in this case that makes no difference — we’re talking about a monad on the category of multigraphs, which is a presheaf category.

So maybe my mistake is in thinking that the free symmetric multicategory monad on (non-symmetric) multigraphs isn’t nice (== p.r.a, == familially representable). Maybe it is nice after all.

If, possibly, it is not a bad idea to approach weak nn-categories in terms of presheaves on what I came to know as ωCat\omega\mathrm{Cat}, which seems to be what you are denoting Str∞Cat\mathbf{Str}\infty\mathbf{Cat}, then shouldn’t it be relevant that ωCat\omega Cat (I’ll keep calling it that way just in case it is in fact not your Str∞Cat\mathbf{Str}\infty\mathbf{Cat}) has the striking property of being biclosed, as Todd taught me, following Sjoerd Crans?

Since this means that the representable presheaves on ωCat\omega Cat can actually be taken to take values not just in SetSet, but actually in ωCat\omega Cat itself, might it not be worthwhile to consider ωCat\omega Cat-valued presheaves on ωCat\omega Cat, i.e. the category
ωCat(ωCatop)
\omega Cat^{(\omega Cat^{op})}
??

Have you thought about that at all?

I am asking this question because ever since Todd made me aware of Sjoerd Crans’s work I felt like, surely, it’s the monoidal structure on ωCat\omega Cat, generalizing the Gray tensor product from n=2n=2, which, as Sjoerd himself very nicely emphasizes, is so inherently “higher dimensional” that it would seem odd to ignore its presence in any discussion of higher categories.

Regarding your questions, I don’t know. There was a
time when I knew what Sjoerd’s tensor product was, but I’m afraid I’ve
forgotten. And maybe for related reasons, I don’t get the point of
why one would consider functors ωCatop→ωCat\omega Cat^{op} \to \omega Cat
rather than ordinary presheaves on ωCat\omega Cat. I guess it’s
something to do with self-enrichment…? This isn’t to say I’m
against the idea — I’m just in the dark.

I have a combative reason for choosing ∞\infty over ω\omega,
although in principle I’m happy to use them interchangeably and have
usually plumped for ω\omega in the past. The reason is that some
people have begun to use ‘∞\infty-category’ to mean what is also
called (∞,1)(\infty, 1)-category, and I want to fight that trend.
‘nn-category’ has been around since the 1960s, I believe,
and it’s an excellent name. It will be disastrous if the water is
muddied by people using the same name to mean something different.
To any readers who do this: please don’t! If you find ‘(∞,1)(\infty,
1)-category’ too cumbersome, invent another name — but not
‘∞\infty-category’. That’s taken.

Re: How I Learned to Love the Nerve Construction

Another reason to prefer ∞\infty over ω\omega is that an nn-category has nn-dimensional cells, while an ∞\infty-category does not (usually) have ω\omega-dimensional cells, but only cells of dimensions less than ω\omega. This would be a bigger problem if there were any known use for categories that did have ω\omega-dimensional cells, but I’m not aware of any. The collection of ∞\infty-categories appears to be just another ∞\infty-category.

One might regard this last as being because 1+ω=ω1+\omega=\omega, since the reason nn-categories form an (n+1)(n+1)-category is that there’s a hom-nn-category between any two nn-categories; thus from an ordinal point of view it might be more accurate to say that nn-categories form a (1+n)(1+n)-category. (-:

Re: How I Learned to Love the Nerve Construction

I tend to say “omega” in private and among nn-categorical friends, and “infinity” in public. If you say “omega-category” to someone not au fait with nn-categories, they’ll think “what on earth is omega?” If you’re talking rather than writing, they won’t even know that it’s meant to be a lower case omega.

It’s interesting, the question of what kind of thing nn must be in order to be able to talk about nn-categories (even strict ones). You can talk about ℤ\mathbb{Z}-categories, for instance; that terminology would suggest “ℕ\mathbb{N}-category” for ω\omega-category. Cubical nn-categories, or nn-fold categories, have something to do with the Boolean algebra 2n2^n (the power-set of a set of cardinality nn). What’s the general setting?

Re: How I Learned to Love the Nerve Construction

I vaguely recall that one of the very first papers on ω\omega-categories, by Street, mentions (ω+1)(\omega + 1)-categories. Maybe his paper on orientals?

Anyway, I’d become much more interested in (ω+1)(\omega + 1)-categories if someone defined an interesting notion of the ‘∞\inftyth homotopy group’ for a topological space.

There’s a perfectly fine ∞\infty-sphere — in fact, a few: the unit sphere in a countable-dimensional Hilbert space has a number of interesting topologies, and there’s also the inductive limit of the SnS^n’s for finite nn, which is a CWCW complex. But, since all these spaces are contractible, I just get the trivial group for

\pi_\infty(X, *) = [S^\infty, X]

Maybe there’s some topology on some sort of ∞\infty-sphere that gives something more interesting. Has anyone tried?

Re: How I Learned to Love the Nerve Construction

Indeed, thanks for a great post. I’m getting from this that “you can think of algebras for a monad as presheaves-with-properties”. I like the We,iW_{e,i}’s… it seems they are the “primeval soup” from which all the more complex categorical life forms (algebras for a given monad) are built.

By the way, am I correct in understanding that the We,iW_{e,i}’s are fixed once-for-all and don’t depend on the XX’s?

To a monad novice like me, it’s kind of weird how that very concrete condition about TT being determined by a fixed family {We,i}\{W_{e,i}\}, and the apparently-more-intrinsic/abstract one about TT preserving connected limits, are equivalent conditions.

Theorem Let TT be a nice monad on a nice category ℰ\mathcal{E}. Then there is a canonical…

It seems everyone is using this “nice category” business here at the n-cafe :-) It bothers me aesthetically that, apparently, “nice” doesn’t refer to an intrinsic property of the category, but rather to sneaky outside information, e.g. “this category is nice because I happen to have been told it is a presheaf category”. (On the other hand, you have explained how, at least for finite categories, one can tell internally if a category is nice or not.)

In other words, it’s perhaps an English qualm I have (or is it a political qualm?): I refuse to be told what’s nice and what’s not… something should be called nice if it is nice. (This kind of issue also occurs at the end of Andrew Stacey’s comment here.)

From Mark Weber’s abstract I get the impression one can substitute “cocomplete” categories for “nice” categories… is that a correct understanding?

This definition of ‘niceness’ is the same as the one that I gave…You’ve only mentioned the functor part of the monad…

Heh, I think this is the second-most confusing language thing in category theory. Of course in some sense both of you are right, because the word “monad” can mean “just the functor” or “functor plus goodies”. This happens with everything…

For me, the most confusing language thing in category theory is the word “adjoint”. I don’t know grammar, but it seems to me that it can be used as a (preposition?), “this functor is left adjoint to that one”, a noun, “this functor is a left adjoint”, and most confusingly, as an adjective, e.g. “TT is parametric right adjoint”. Add to that the whole left-right business and Houston we have a problem.

Re: How I Learned to Love the Nerve Construction

Heh, I think this is the second-most confusing language thing in category theory. Of course in some sense both of you are right, because the word “monad” can mean “just the functor” or “functor plus goodies”. This happens with everything…

But surely this is nothing special about category theory. Referring to a thing with structure and to the underlying thing by the same name is ubiquitous in mathematics. A group is really a set together with a multiplication, etc., but we say “a finite group” not “a group whose underlying set is finite”.

Re: How I Learned to Love the Nerve Construction

To answer some of Bruce’s questions:

am I correct in understanding that the W_{e,i}’s are fixed once and for all and don’t depend on the X’s?

Yes. Sorry, my lack of clarity. Another way to describe this condition on the functor
T: [\mathbf{E}^{op}, \mathbf{Set}] \to [\mathbf{E}^{op}, \mathbf{Set}]
is that for each e∈Ee \in \mathbf{E}, the functor
(T(-))_e: [\mathbf{E}^{op}, \mathbf{Set}] \to \mathbf{Set}
is a coproduct of representables. A functor with codomain Set\mathbf{Set} is sometimes called familially representable if it is a coproduct of representables. By extension, a functor TT whose codomain is a presheaf category is familially representable if it satisfies the condition just described. (The domain is irrelevant.)
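For a standard illustration of this notion (my example, not from the comment above): the free monoid (list) functor on Set is familially representable, since

```latex
% Free monoid functor as a coproduct of representables,
% one summand Set(n, -) for each finite cardinal n:
T X \;=\; \coprod_{n \in \mathbb{N}} X^{n}
    \;\cong\; \coprod_{n \in \mathbb{N}} \mathbf{Set}(n, X).
```

Here the summand indexed by n picks out the n-letter words.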

To a monad novice like me, it’s kind of weird how that very concrete condition about TT being determined by a fixed family (W_{e,i}), and the apparently more intrinsic/abstract one about TT preserving connected limits, are equivalent conditions.

Well, this bit is nothing to do with monads, so you can’t use that excuse :-) This is only about functors.

Here’s something similar but simpler and probably more familiar. Consider the following three conditions on a functor U:ℰ→SetU: \mathcal{E} \to \mathbf{Set}:

UU has a left adjoint

UU is representable

UU preserves limits.

Various implications hold in complete generality, but let’s assume for convenience that ℰ\mathcal{E} satisfies the hypotheses of the Special Adjoint Functor Theorem. (E.g. ℰ\mathcal{E} might be a presheaf category [Eop,Set][\mathbf{E}^{op}, \mathbf{Set}] for some small E\mathbf{E}.) Then the three conditions are equivalent.
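As a concrete instance of the first equivalence (my illustration): for any set A, the representable U = Hom(A, −) preserves limits, and its left adjoint is (−) × A, by the familiar product/hom bijection

```latex
% Hom(A,-) is a right adjoint: the product/hom adjunction.
\mathbf{Set}(X \times A,\; B) \;\cong\; \mathbf{Set}\big(X,\; \mathbf{Set}(A, B)\big).
```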

Under the same hypotheses on ℰ\mathcal{E}, the following are equivalent:

the induced functor ℰ→Set/U(1)\mathcal{E} \to \mathbf{Set}/U(1) has a left adjoint

UU is a coproduct of representables

UU preserves connected limits.

From here it’s a short step to the equivalence you mention — just apply the fact that (co)limits in presheaf categories are computed pointwise.

It bothers me aesthetically that, apparantly, “nice” doesn’t refer to an intrinsic property of the category, but rather to sneaky outside information, eg. “this category is nice because I happen to have been told it is a presheaf category”.

In that case, you should like Mark’s paper! And if I’d been writing a formal account of the theorem, not a blog post, I’d have put it slightly differently: “let E\mathbf{E} be a small category and let T\mathbf{T} be a nice monad on [Eop,Set][\mathbf{E}^{op}, \mathbf{Set}]”, rather than “let T\mathbf{T} be a nice monad on a presheaf category”.

For me, the most confusing language thing in category theory is the word “adjoint”. I don’t know grammar, but it seems to me that it can be used as a (preposition?), “this functor is left adjoint to that one”, a noun “this functor is a left adjoint”

I’m surprised you find that confusing. Mathematics is full of little abuses like that, isn’t it? You say what it means for a representation to be irreducible or a module to be simple, and pretty soon you’re talking about “irreducibles” and “simples”. That probably happens in ordinary language too.

Maybe you should think of marriage. Old-fashioned language allows “Bruce, husband to Brenda”. You could say “Bruce is a husband”, which in societies where same-sex marriage is outlawed is equivalent to “Bruce has a wife”. Some people prefer to say “UU is a right adjoint”; some prefer “UU has a left adjoint”.

and most confusingly, as an adjective, eg. “TT is parametric right adjoint”.

I don’t think that’s good English. “TT is a parametric right adjoint”, yes.

Add to that the whole left-right business and Houston we have a problem.

Re: How I Learned to Love the Nerve Construction

Thanks to Tom and Denis-Charles for describing some of my work in such an appealing way.

First I’d like to clarify some things about the generality of the nerve theorem. Pra monads on presheaf categories are exactly what Tom considered in his unpublished work. The monad on the category of multigraphs whose algebras are symmetric multicategories (which are simply called “operads” in the work of Moerdijk and Weiss) is an example, as Denis-Charles pointed out in this thread.

The general setting for the nerve theorem consists of a cocomplete category K, a monad T on K, and a dense subcategory S of K. The condition to be satisfied is a little stronger than the condition that T preserve the colimits in K that exhibit S as dense (recall that density says that every object of K is canonically a colimit of objects of S).

Even when K is Set you get a lot of non-pra examples. For instance when S is finite sets, the condition on T amounts to finitariness (i.e. preserving filtered colimits), the analogue of “Delta” is the dual of the Lawvere theory corresponding to T, and the nerve theorem says that you can identify the algebras of T with the Set-valued models of its associated Lawvere theory. Of course there are lots of finitary monads on Set which aren’t pra.

Another advantage of the general setting is that for a given K and T, there can be many choices of S, and this is useful even in the pra case. If K is a presheaf category and T is pra, then there is a canonical choice of S, say S(T). However, if for example you have another monad T’ on K and a cartesian monad morphism phi:T’–>T, then as far as T’ is concerned, you have two obvious choices of “S”:

(1) S(T)

(2) note that the pra-ness of T’ follows from that of T and the cartesianness of phi, and so one could take “S” to be S(T’).

These are in general different. For describing the nerves of algebras of Batanin operads, Tom’s work used (2) whereas the work of Clemens Berger uses (1) (… recall that an “omega-operad” in the sense of Batanin can be succinctly defined as a cartesian monad morphism T’–>T where T is the monad on globular sets whose algebras are strict omega-categories, and the algebras of such an operad are the algebras of the monad T’).

Second I’d like to point out that the monad on “symmetric” graphs whose algebras are groupoids, as alluded to by Denis-Charles, is not pra. If it were pra then it would preserve pullbacks. Intuitively this isn’t to be expected, because when it comes to describing explicitly what the free groupoid on a symmetric graph X is, you are going to be looking at some undirected paths, but it will also be necessary to mod out by some equivalence relation: if f is an edge of X, it has a “partner” g going in the opposite direction, from the definition of symmetric graph. In the free groupoid on X, the path “f followed by g” would need to be identified with an empty path, because f and g would have to be inverses in the free groupoid. Experience (from playing with endofunctors of Set) shows that if modding out by some equivalence relation is necessary to describe your endofunctor explicitly, then that endofunctor probably doesn’t preserve pullbacks.

Formally, one can consider the ordinal {0<1<2} as a symmetric graph as follows: take the vertex set to be V={0,1,2}. As for the edges, demand a unique edge from n to m in V iff n and m are consecutive. Write [2] for the symmetric graph so obtained. Write 1 for the terminal symmetric graph. Write T(X) for the underlying symmetric graph of the free groupoid on the symmetric graph X. For instance, T1 is the “free living involution” (containing one object, the identity arrow on it, and another arrow e such that e^2 is the identity). T[2] is the chaotic groupoid with 3 objects (recall that a category is said to be “chaotic” when there is a unique arrow between any two objects, and such categories are automatically groupoids).

With some calculating one can show that T doesn’t preserve the pullback of the unique map [2]–>1 along itself. Doing the pullback first and then applying T gives something equivalent (as a category) to 1 + “the integers” where the integers is regarded as a one object groupoid. On the other hand doing things in reverse order: apply T to the above unique map and then take the pullback of the result along itself, gives something equivalent to 1+1.
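The first half of that calculation can be checked mechanically. Below is a small sketch (my own illustrative code, not from Mark’s paper), using the standard fact that the free groupoid on a connected graph component is equivalent to the free group of rank E − V + 1, where E is the number of undirected edges: the pullback of [2] → 1 along itself is the product graph [2] × [2], which splits into a tree (trivial group) and a 4-cycle (the integers).

```python
# Illustrative check (my sketch): the pullback of [2] -> 1 along itself is
# the product graph [2] x [2] of symmetric graphs.  On each connected
# component, the free groupoid is equivalent to the free group of rank
# E - V + 1, where E is the number of undirected edges in the component.

from itertools import product

V = [0, 1, 2]
adj = {0: [1], 1: [0, 2], 2: [1]}   # undirected edges of [2]: {0,1}, {1,2}

# Product graph: an edge (i,j) -- (i2,j2) whenever i--i2 and j--j2 in [2].
verts = list(product(V, V))
edges = set()
for (i, j) in verts:
    for i2 in adj[i]:
        for j2 in adj[j]:
            edges.add(frozenset({(i, j), (i2, j2)}))

def components(verts, edges):
    """Connected components, computed by depth-first search."""
    nbrs = {v: set() for v in verts}
    for e in edges:
        a, b = tuple(e)
        nbrs[a].add(b)
        nbrs[b].add(a)
    seen, comps = set(), []
    for v in verts:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(nbrs[u])
        seen |= comp
        comps.append(comp)
    return comps

for comp in components(verts, edges):
    E = sum(1 for e in edges if e <= comp)
    print(len(comp), "vertices, cycle rank", E - len(comp) + 1)
# One component is a tree (rank 0: the trivial group); the other is a
# 4-cycle (rank 1: the integers) -- matching "1 + the integers" above.
```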

The characterisation of groupoids via their symmetric simplicial nerves that Denis-Charles described is obtained by looking at the monad on the usual category of graphs whose algebras are groupoids, and taking “S” to be S(T) where T is the category monad on Graph.

Re: How I Learned to Love the Nerve Construction

Excellent, excellent. Thanks, Mark.

The general setting for the nerve theorem consists of a cocomplete category K, a monad T on K, and a dense subcategory S of K.

Is what you call S here what you call Θ_0 in your paper (e.g. in Section 4)? And to recover the case that I wrote about above, would one take K = [E^op, Set] and S = E? If so, what does the condition on the monad say? Is it still more generous than p.r.a.?

Even when K is Set you get a lot of non-pra examples. For instance when S is finite sets, the condition on T amounts to finitariness (ie preserving filtered colimits), the analogue of “Δ\Delta” is the dual of the Lawvere theory corresponding to T, and the nerve theorem says that you can identify the algebras of T with the Set-valued models of its associated Lawvere theory.

This is nice. So, for instance, if you take a monoid MM and consider the theory of left MM-sets, then the nerve theorem applied in this way describes the MM-sets as the Set\mathbf{Set}-valued models of the Lawvere theory of MM-sets, whereas the nerve theorem applied to p.r.a. monads on Set\mathbf{Set} describes them as the functors M→SetM \to \mathbf{Set}.

For describing the nerves of algebras of Batanin operads, Tom’s work used (2) whereas the work of Clemens Berger uses (1)

Right — it’s just that in my post I only mentioned the case where T′T' is the terminal operad, and then (1) and (2) coincide. So in that context the difference didn’t show up.

Re: How I Learned to Love the Nerve Construction

Yes, what I called S in my post is what I denoted as Θ_0 in my paper. Terminology: Θ_0 is said to “endow T with arities” when T and Θ_0 satisfy the hypotheses of the nerve theorem.

In your post, you were looking at T = the category monad on Graph. No, Θ_0 is not just the representables in this case. Write [n] for the ordinal {0<1<…<n}, and regard [n] as a graph with vertex set {0,…,n}, and with a unique edge i→(i+1) for 0≤i<n. Then Θ_0 in this case consists of the graphs [n] for all natural numbers n. The graphs [0] and [1] are the representables.

The condition on the monad in this case is more generous than pra, although I don’t know a natural example to illustrate this. However, an analogous situation is to take T to be the monoid monad on Set. The arities that you get by regarding T as pra are just the finite sets, and then as I said in my previous post, for this situation the condition on the monad amounts to finitariness. In general, the condition on the monad is some sort of colimit-preservation property, whereas pra-ness is a sort of limit-preservation property.

I’d like to correct a mistake in my previous post, where I said that you get the symmetric simplicial nerve of a groupoid from the monad on Graph whose algebras are groupoids. This is wrong … Denis-Charles was right, to get this nerve let T be the monad on symmetric graphs whose algebras are groupoids.

However, as I pointed out, this T isn’t pra. As a matter of fact, neither is the groupoid monad on Graph. So to complete and then validate Denis-Charles’ observation, one must exhibit arities for T directly. Θ_0 in this case is described in a similar way to the case of the category monad on Graph. The difference is that one must regard [n] as a symmetric graph: objects as before, and there’s a unique edge i→j iff i and j are consecutive.

The proof that this Θ_0 endows T with arities is a really nice exercise in the general definitions. To carry out this verification you also need to describe TX explicitly. An edge in TX is a path in X modulo an equivalence relation (that I alluded to in my previous post). In the end the verification comes down to noting that every equivalence class has a canonical representative – its associated reduced path. A path is reduced when no two consecutive arrows are of the form (f, i(f)), where i(f) is obtained from f by the given involution on the set of edges.
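The cancellation step is easy to sketch in code (my illustration only, with hypothetical edge names f and g): a single left-to-right pass with a stack finds the canonical reduced representative, since cancellation of adjacent partner pairs is confluent, just as for words in a free group.

```python
# Sketch (illustrative, hypothetical edge names) of the canonical reduced
# representative of a path in a symmetric graph: repeatedly cancel adjacent
# pairs (f, i(f)), where i is the involution on the set of edges.

def reduce_path(path, inv):
    """Return the reduced form of a path, cancelling (f, inv[f]) pairs."""
    stack = []
    for f in path:
        if stack and stack[-1] == inv[f]:
            stack.pop()          # f follows its partner: the pair cancels
        else:
            stack.append(f)
    return stack

# One edge f with partner g = i(f), as in the free-groupoid discussion above.
inv = {"f": "g", "g": "f"}
print(reduce_path(["f", "g"], inv))        # [] : "f then g" is the empty path
print(reduce_path(["f", "f", "g"], inv))   # ['f']
```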

The cool thing is that while this idea of reduced paths isn’t functorial in the way that one would like if one wanted to prove T pra, it is exactly what you use when showing that Θ_0 endows T with arities.

Re: How I Learned to Love the Nerve Construction

This comment is largely a note to myself, as every time I re-read this post I have to work this out again. However, the fact that nobody else has asked makes me think I must be the only one stupid enough to find it hard to remember what this notation is doing.

Anyway I think that in the case of the free strict omega-category monad on globular sets:

E is the globular category G, so each e is a natural number n.

I(e) is the set of e-dimensional pasting diagrams.

W_{e,i} is the globular set corresponding to the pasting diagram i of I(e).

Re: How I Learned to Love the Nerve Construction

Hi, Eugenia, all you have to do is type the math the usual way with dollar signs around the symbol strings and select the text filter that says “Markdown with itex to MathML”.

This comment is largely a note to myself, as every time I re-read this post I have to work this out again. However, the fact that nobody else has asked makes me think I must be the only one stupid enough to find it hard to remember what this notation is doing.

Anyway I think that in the case of the free strict omega-category monad on globular sets:

EE is the globular category GG, so each ee is a natural number nn.

I(e)I(e) is the set of ee-dimensional pasting diagrams.

We,iW_{e,i} is the globular set corresponding to the pasting diagram ii of I(e)I(e).

Re: How I Learned to Love the Nerve Construction

Their proof of the nerve theorem might be more accessible to some (such as me) than the proof in the original paper. It comes in the first few pages, with a minimum of auxiliary definitions and background to digest before getting there.

They also devote the last section to the free groupoid example which was discussed here.

Re: How I Learned to Love the Nerve Construction

I have edited the entry you created a bit for formatting (as I do to many entries): have added various hyperlinks, made the table of contents appear, added a context-table and formatted the references. This is meant as a convenience, but if you think I broke something in the process that should not have been broken, please let me know or fix it.

I have removed your question “Should this be part of nerve?”. The answer to this is, I think: “It should remain a separate entry, but also parts of it and a pointer to it should certainly be at nerve, too.”

So I have added a brief pointer there. I have also added a brief pointer in the Properties-section at nerve and realization, which is currently maybe the more systematic entry on this topic. Both of these could be further expanded on.

In fact, I think the present state of our entries nerve and nerve and realization is unsatisfactory. The idea was that the former is only about nerves, while the latter is about the adjunction with realization. But somehow both still deserve to be cleaned up and expanded.

Maybe we should copy over some more of the paragraphs from monad with arities to nerve. If you or anyone feels energetic enough to take care of this, it would be much appreciated.

Re: How I Learned to Love the Nerve Construction

I don’t know enough about the Cech nerve or its relation to nerves in (higher) category theory to say; for all I know, the Cech nerve theorem might be visible as a case of later “nerve theorems.” Perhaps something like “realization theorem”???

Re: How I Learned to Love the Nerve Construction

I don’t know enough about the Cech nerve or its relation to nerves in (higher) category theory to say;

I’d say all that is relevant here is that the two theorems are entirely unrelated, except for the fact that the word “nerve” appears in both of them.

Perhaps something like “realization theorem”???

Right, I should also think about how to rename that nerve theorem. But I thought you might have a proposal for how to rename this nerve theorem (the one in your entry).

If not, then never mind. I just thought I’d ask.

Perhaps something like “realization theorem”???

Maybe I should go for that. Though maybe it’s still as unspecific as before.

That “other” nerve theorem – which says that the simplicial set obtained from the Čech nerve of a good cover of a paracompact space by contracting all components to a point, presents the homotopy type of that space – I like to think of as a special case of étale homotopy theory. The étale homotopy type of an object in any connected topos is that of its hypercovers, similarly contracted. The “nerve theorem” says that in the context of paracompact topological spaces an ordinary Čech cover suffices, if only it is “good”, and that the result agrees with the standard homotopy type.

So maybe one should call it the “paracompact étale homotopy theorem”?! In any case, this should make it clear that there is no relation to what you have in the entry on monads with arities.