

We came pretty close to our goal of explaining how processes of computation are 2-morphisms in a 2-category. Namely, we came close to seeing how for any typed λ-calculus there’s a 2-category with

types as objects,

terms as morphisms,

rewrite rules (ways of simplifying terms) as 2-morphisms.
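To make the first two levels of this dictionary concrete, here is a minimal sketch (my own illustration, not from the lecture notes) of λ-terms as a datatype, with one-step β-reduction playing the role of the rewrite 2-morphism:

```python
from dataclasses import dataclass

# λ-terms: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def substitute(term, name, value):
    """Substitution body[name := value]; capture-avoidance is elided,
    so this sketch assumes all bound variables have distinct names."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:          # name is shadowed here
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fun, name, value),
               substitute(term.arg, name, value))

def beta_step(term):
    """One β-reduction step: (λx. body) arg  ⇒  body[x := arg]."""
    if isinstance(term, App) and isinstance(term.fun, Lam):
        return substitute(term.fun.body, term.fun.param, term.arg)
    return term

# (λx. x) y  ⇒  y
identity = Lam("x", Var("x"))
print(beta_step(App(identity, Var("y"))))  # Var(name='y')
```

Each application of `beta_step` is one of the 2-morphisms in question: a way of simplifying a term while leaving its type fixed.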

We saw how to draw these 2-morphisms as surfaces, and we noticed an intriguing relation to René Thom’s ideas on catastrophe theory: it turned out that the process of ‘applying a function to its argument’ gives a surface with the simplest sort of catastrophe — a fold catastrophe:
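For readers who haven't met it, the fold is the simplest of Thom's elementary catastrophes; its standard normal form (textbook singularity theory, not specific to this post) is the one-parameter family

```latex
V_a(x) = x^3 + a x, \qquad V_a'(x) = 3x^2 + a .
```

For $a < 0$ there are two critical points, $x = \pm\sqrt{-a/3}$; as $a$ increases to $0$ they collide and annihilate: exactly the 'fold' seen in the surface diagram.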

We did not reach our goal of proving that the 2-category we got is a ‘cartesian closed 2-category’. In fact, we didn’t even define that concept! Nor did we get around to generalizing all this to the non-cartesian — or roughly speaking, ‘quantum’ — context.

Today’s session was a bit sad, because it’s the last one my student Mike Stay will attend. He’s the one who turned my interest in categories and computation from a hobby to something a bit more than a hobby, by coming to U.C. Riverside with a master’s in computer science and wanting to work on categories and quantum computation. Unfortunately, he couldn’t earn enough money here to support his family, so he took a job at Google, and he’s leaving town this weekend. Luckily, his skills in cryptography will now let him earn more than I do! I’m hoping he becomes a bigshot at Google and sets up a little n-category research group.

Right now we’re writing an expository paper on logic, computation, categories and Feynman diagrams, for a book Bob Coecke is editing. Google has a nice policy of letting their employees spend one day a week on projects of their own choosing. So, we’ll keep working away — but it won’t be the same. I may or may not keep the Classical vs Quantum Computation seminar going in the Spring quarter: my other students are working on other stuff!

Posted at March 15, 2007 9:36 PM UTC

TrackBack URL for this Entry: http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1205

Re: Classical vs Quantum Computation (Week 18)

I know there was a harsh reaction against the inflated rhetoric about catastrophe theory that was so common after René Thom’s books Structural Stability and Morphogenesis and Semiophysics came out.

However, it’s precisely one of Thom’s dangerously broad claims that I wanted to take up here, namely that verbs correspond to catastrophes. This is a relation between syntax and differential topology — and it’s precisely this sort of relation we’re seeing when we use surface diagrams to represent processes of computation in the 2-category coming from a typed λ-calculus!

Of course in n-category theory we say ‘object’ instead of ‘noun’, ‘morphism’ instead of ‘verb’ — and we can go further and talk about 2-morphisms, etc. Thom didn’t think in terms of n-categories, but that needn’t stop us.

By following some systematic rules, we saw in class that the process of evaluating a function, or more precisely the 2-morphism ‘β-reduction’, corresponds to a fold catastrophe! And this is just the tip of the iceberg — further thought along these lines quickly takes us to the swallowtail singularity, and beyond.

So, something big is going on, and the person to blame for these dangerously sexy big ideas is Thom, not Whitney.

Of course, some people got carried away with catastrophe theory in rather silly ways. But it would also be silly not to mention it when some of Thom’s ideas are getting realized in a very precise way.

Re: Classical vs Quantum Computation (Week 18)

Returning to precise disciplines, there remains the small problem of relating this to fundamental n-categories of stratified spaces, the kind of thing we talked about back here. Can one see β-reduction in terms of a stratified space?

Re: Classical vs Quantum Computation (Week 18)

David wrote:

Can one see β-reduction in terms of a stratified space?

That’s a fun question!

I hope stratified spaces give n-categories with duals, and there’s some evidence for this. An example is a symmetric monoidal category with duals, as defined below. These are relevant to quantum logic and the ‘quantum λ-calculus’.

Lately I’ve been catching up on the ‘classical λ-calculus’, and today’s class discussed only this. This is all about cartesian closed categories.

So: if we want to relate β-reduction to stratified spaces, it’ll work more smoothly if we consider β-reduction in the quantum λ-calculus!

So, anyway, the answer to your question is yes: in the ‘quantum’ case, where we have a symmetric monoidal category with duals C, we should be able to build a stratified space whose fundamental category is C. And then, β-reduction in the corresponding quantum λ-calculus should really be related to fold singularities in a very direct way!

This would be fun to explore. It would really bring out the full content of the word topological.

Some reminders may be helpful, at least for people who are insufficiently fanatical about attending this Café:

C is closed if the functor of tensoring with any object x ∈ C:
a ↦ x ⊗ a
has a right adjoint, the internal hom:
b ↦ hom(x, b).
In other words, we have a natural isomorphism
HOM(x ⊗ a, b) ≅ HOM(a, hom(x, b)).
Here HOM denotes the usual set of morphisms from one object to another, while hom is the “internal hom”, which is an object in our category. If the tensor product in C is cartesian, and C is also closed, we call it cartesian closed.
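In the cartesian case this adjunction is ordinary currying. A quick sketch in Python (my illustration: informally, ⊗ is the product, i.e. pairs of arguments, and hom(x, b) is the function type x → b):

```python
# The natural isomorphism HOM(x ⊗ a, b) ≅ HOM(a, hom(x, b)),
# read in the cartesian closed "category" of Python types.

def curry(f):
    """Send f : (x, a) -> b to its transpose a -> (x -> b)."""
    return lambda a: lambda x: f(x, a)

def uncurry(g):
    """Inverse direction: send g : a -> (x -> b) back to (x, a) -> b."""
    return lambda x, a: g(a)(x)

def pair_sum(x, a):
    return x + a

# The two directions are mutually inverse:
assert curry(pair_sum)(3)(2) == pair_sum(2, 3)
assert uncurry(curry(pair_sum))(2, 3) == pair_sum(2, 3)
```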

C has duals for objects if it is closed and for any object x ∈ C there is an object x* ∈ C, the dual of x, such that
hom(x, b) ≅ x* ⊗ b
for all b ∈ C. If C has duals for objects, it’s common for category theorists to say C is compact or compact closed; algebraic geometers say it’s rigid.

C has duals for morphisms if for any morphism f : a → b there is a morphism f† : b → a such that
f†† = f, (fg)† = g†f†, 1† = 1,
and all the structural isomorphisms (the associator, the left and right unit laws, and the braiding and balancing) are unitary, where a morphism f is unitary when f† is the inverse of f.

C has duals if it has duals for objects and morphisms. If C has duals, Abramsky and Coecke call it strongly compact closed, while Selinger calls it dagger compact closed.

The category of finite-dimensional Hilbert spaces is symmetric monoidal, and it has duals. The same is true for nCob, whose morphisms are n-dimensional cobordisms, like this (for n = 2):
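As a toy check of the dagger laws in finite-dimensional Hilb (my own illustration, not from the post: matrices as lists of rows, the dagger as the conjugate transpose):

```python
def dagger(m):
    """Conjugate transpose of a matrix given as a list of rows."""
    rows, cols = len(m), len(m[0])
    return [[m[r][c].conjugate() for r in range(rows)] for c in range(cols)]

def matmul(a, b):
    """Ordinary matrix product, i.e. composition of linear maps."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

f = [[1 + 2j, 0], [3j, 1]]
g = [[0, 1], [1j, 2]]

assert dagger(dagger(f)) == f                                 # f†† = f
assert dagger(matmul(f, g)) == matmul(dagger(g), dagger(f))   # (fg)† = g†f†
```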

Re: Classical vs Quantum Computation (Week 18)

James Tobias of the English department at UCR came to this class, having seen an announcement for it, and being interested in quantum computation. I warned him that it was the last class of two quarters’ worth of fancy mathematics. Amazingly, he seemed to enjoy or at least tolerate it!

The participants of the class had a short discussion of the etymology of the word ‘catastrophe’. Here’s some followup from Prof. Tobias:

By the way, I also thought that “strophe” was an equivalent of the Latin “trope,” but I was confused between the Latin and Greek. So I looked up the terms in the OED to make sure of the meanings. “Strophe” of “catastrophe” is, as you indicated, “a turning,” so that “catastrophe” is an “overturning” or broadly, a sudden conclusion.

The OED on “strophe”: “Originally the word στροφη, ‘turning’, was applied to the movement of the chorus from right to left, and αντιστροφη [antistrophe], ‘counter-turn’, to its returning movement from left to right; hence these terms became the designations of the portions of the choric ode sung during these movements respectively.”

In these terms, since you so elegantly “fell turning” into your conclusions after the back and forth parsing of functions and diagrams, your talk, including the call and response with your chorus of graduate students, was for this outsider, a very successful catastrophe!

Read the post Cohomology and Computation (Week 19)
Weblog: The n-Category Café
Excerpt: The origins of cohomology in the study of ‘syzygies’, or relations between relations.
Tracked: April 20, 2007 8:39 PM

Re: Classical vs Quantum Computation (Week 18)

I went to a neat talk here at Google yesterday. It was about an interesting paper that says, given only the type of a term, you can construct the term of that type that must be there, and also about a Haskell program “Djinn” that “magically” produces the code.

There’s a list of examples and a help file that explains a little bit about how to use it.

It produced code for identity, S and K combinators, first and second projections from a pair, swap, compose, curry, uncurry, bind and return for monads, even call-with-current-continuation, all from just the type information.

I think it also generated the traditional Church-numeral code for zero and plus, but I’m not sure about that.
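For reference, the traditional Church-numeral encodings of zero and plus mentioned here look like this (my own sketch in Python, rather than Djinn’s Haskell output):

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # 5
```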

Re: Classical vs Quantum Computation (Week 18)

Thanks! It’s an interesting paper, but I’ll need to learn about the “polymorphic” lambda-calculus and some other stuff before really appreciating it.

Encapsulation, inheritance and polymorphism are the three things that object-oriented programming brought to imperative languages like C or Fortran.

(Parametric) polymorphism is a second-order concept. It lets you give datatypes as inputs to other datatypes, not just as inputs to function symbols.

For instance, consider lists. A list datatype should have some function symbols that act on it. We’d like to be able to
- append an element to the list
- project onto the nth element of the list
- get the list length
- etc.

Notice that I never said what type an element is! In the first-order lambda calculus, I’d have to specify what the output datatype of the projection function symbol is. Here, it shouldn’t matter: the projections should work the same way no matter what the element type is.

The element datatype becomes a parameter in the definition of a list, and we can use the same function symbols on many differently-shaped objects. Thus “parametric polymorphism”.
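A sketch of that idea in Python’s optional type system (my illustration: `T` is the type parameter, so the same function symbols work for every element type):

```python
from typing import TypeVar, List

T = TypeVar("T")  # the element datatype, left as a parameter

def append(xs: List[T], x: T) -> List[T]:
    """Append an element -- written once, for any T."""
    return xs + [x]

def nth(xs: List[T], n: int) -> T:
    """Projection onto the nth element; the code never mentions what T is."""
    return xs[n]

def length(xs: List[T]) -> int:
    return len(xs)

# The same definitions instantiate at T = int and T = str:
assert nth(append([1, 2], 3), 2) == 3
assert nth(append(["a"], "b"), 1) == "b"
```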

Re: Classical vs Quantum Computation (Week 18)

Silly pedantry: Christopher Strachey, at least, was already thinking about the different kinds of polymorphism that can occur in computer languages, in the context of imperative languages, before the first object-oriented programming languages became popular.

Re: Classical vs Quantum Computation (Week 18)

Someday I would like to understand polymorphism from a nice category-theoretic point of view. I have a book checked out from the library which claims to handle this using fibered categories… I forget the author and title right now, and it’s back home (I’m in my office).


Re: Classical vs Quantum Computation (Week 18)

No, it’s not Categories for Types — it’s something more high-powered, or at least more difficult to understand. It’s got a yellow cover, instead of a yellow and blue cover. I’ll say something more useful when I get home.

Perhaps though it would be good for me to look again at Categories for Types. Previously I only paid attention to the chapters on categories with finite products (‘algebraic theories’) and cartesian closed categories (‘the typed lambda-calculus’). There are further chapters that go into more complicated topics. Does it discuss polymorphism?

(By the way, I’ve always found the title Categories for Types amusing. At first it sounds analogous to Topology and Geometry for Physicists. But what sort of person is a ‘type’? Is it like a ‘character’?)

Re: Classical vs Quantum Computation (Week 18)

Please pardon this interruption, but this brief experimental paper by Dolev and Elitzur, also published in an edited book, shows the non-sequential behavior of the wavefunction in QM. The overall result is Lorentz-invariant, but it still exhibits Bell-like correlations; as the authors conclude, it appears to transcend ordinary concepts of probability theory, causality, space and time:

http://arxiv.org/abs/quant-ph/0102109

I have previously suggested that this non-sequential behavior of the wavefunction might have something to do with the speed-up of quantum computation over classical computation. Do you see any way to relate this experimental finding of Dolev and Elitzur to the categories Hilb or nCob? Thanks.