One of the easiest examples I can think of for Frobenius algebras is a plain ol' matrix algebra with tr : V → k as the counit (or equivalently, tr(a⋅b) as the Frobenius form). This is enough data to generate a comultiplication δ : V → V ⊗ V, which turns out to be μ†, for multiplication μ. Is there any intuition for what this map does (aside from the obvious "do multiplication on the dual space")?
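For concreteness, here is a small numpy sanity check of the setup in the question (my own sketch; the matrix-unit basis E_ij, the formula δ(a) = Σ aE_ij ⊗ E_ji induced by the trace form, and the Hilbert–Schmidt inner product ⟨x,y⟩ = tr(xᵀy) used to make sense of μ† are my choices of convention, not from the thread):

```python
import numpy as np

n = 2
rng = np.random.default_rng(0)

def E(i, j):
    """Matrix unit E_ij: 1 in row i, column j, zeros elsewhere."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def delta(a):
    """Comultiplication induced by the form tr(ab):
    delta(a) = sum_{i,j} (a E_ij) ⊗ E_ji, an element of V ⊗ V ≅ M_{n^2}."""
    return sum(np.kron(a @ E(i, j), E(j, i)) for i in range(n) for j in range(n))

a, b, c = rng.standard_normal((3, n, n))

# Counit axioms: (tr ⊗ id) delta = id = (id ⊗ tr) delta.
T = delta(a).reshape(n, n, n, n)  # indices (i, k, j, l) of A_ij * B_kl
assert np.allclose(np.einsum('ikil->kl', T), a)
assert np.allclose(np.einsum('ikjk->ij', T), a)

# delta = mu† for the Hilbert-Schmidt inner product <x, y> = tr(x^T y):
# <delta(a), b ⊗ c> = <a, bc>.
lhs = np.trace(delta(a).T @ np.kron(b, c))
assert np.isclose(lhs, np.trace(a.T @ (b @ c)))
```

So numerically the "adjoint of multiplication" description checks out, with these conventions.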

I don't quite see why you aren't happy with the intuition that you give. It seems to me that it cleanly describes what the comultiplication is and how it arises.
–
Simon Wadsley, Oct 27 '09 at 12:21

Maybe this is all there really is to say about this comultiplication. I was just wondering if there's something else there, like in this example: define a Frobenius algebra on any finite-dimensional vector space by making the comultiplication "copy" a basis, δ : |i⟩ ↦ |ii⟩, and the counit "delete" it, ε : |i⟩ ↦ 1. Multiplication and unit are just the daggers. For δ_X defined on the eigenvectors of Pauli X (|+⟩, |−⟩), it's a (happily coincidental?) fact that the induced multiplication δ† is actually logical XOR on the Pauli Z basis (|0⟩, |1⟩).
–
Aleks Kissinger, Oct 29 '09 at 12:19
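The XOR fact in the comment above is easy to check numerically (a minimal numpy sketch of my own; the only assumption is the standard identification |±⟩ = (|0⟩ ± |1⟩)/√2, and note the result carries a 1/√2 normalization because this copy algebra is not normalized):

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Copy comultiplication on the X basis: delta|±> = |±±>, a 4x2 matrix.
delta = sum(np.outer(np.kron(v, v), v) for v in (plus, minus))
mu = delta.T  # mu = delta† (plain transpose, since everything is real)

z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # |0>, |1>
for a in (0, 1):
    for b in (0, 1):
        out = mu @ np.kron(z[a], z[b])
        # up to the scalar 1/sqrt(2), mu acts as XOR on the Z basis
        assert np.allclose(out, z[a ^ b] / np.sqrt(2))
```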

That's the point! In fact, this type of Frobenius algebra (called a special Frobenius algebra) uniquely picks out a basis in the underlying object. We often take this as a purely categorical way to define a basis. See e.g. Coecke et al.'s "Bases" paper.
–
Aleks Kissinger, Nov 1 '09 at 10:45

1 Answer

Here's how I like to think about matrices. Penrose (1971) figured out that you can draw linear algebra diagrammatically. A picture in the Penrose notation is a directed labeled graph with external leaves. The edges are labeled by vector spaces (changing the direction on an edge has the same effect as swapping the label X with the dual vector space X*), and the vertices by multilinear maps. In this way, placing two edges next to each other represents the tensor product. The ground field R should be drawn as an invisible edge, so that X ⊗ R = X.

So, pick your favorite finite-dimensional vector space X, and think about the types of diagrams you can draw using just it. Well, the space of matrices (what you call V) is X ⊗ X*, so it looks like two parallel lines pointed in opposite directions. Then you can check that the trace is the directed cap, the identity element (thought of as a map R → V) is the directed cup, and multiplication and comultiplication are both given by trivalent vertices.
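You can see the "wire" reading of V = X ⊗ X* directly in index notation: an element of V has one outgoing and one incoming wire, i.e. two indices. A small numpy sketch of my own (using einsum contractions as stand-ins for joining wires in a Penrose diagram):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B = rng.standard_normal((2, n, n))

# Multiplication: the trivalent vertex joins the inner pair of wires.
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)

# Trace: the directed cap closes a matrix's two wires into a loop.
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# The identity element (the directed cup R -> V) is the Kronecker delta;
# capping A against it recovers plain matrix contraction.
assert np.allclose(np.einsum('ij,jk,ki->', A, np.eye(n), B), np.trace(A @ B))
```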

Not only does the notation "explain" the comultiplication, it "proves" all the associativity and unital properties you might want. Mostly, though, I think it makes it totally clear what the Frobenius pairing (a,b) → Tr(ab) is doing. It's just the map:

->-
/ _ \
pair = / / \ \
| | | |
^ v ^ v
| | | |

This is just the canonical fact that (X ⊗ X*)* ≅ X ⊗ X*. This ability to rotate X ⊗ X* is why δ = μ*.

This is a good way of thinking about these things! It also justifies the existence of what I previously thought was a cute but somewhat pointless construction: turning a compact structure (cap and cup) into a Frobenius algebra. This is exactly the matrix Frobenius algebra, when you think of linear maps as their "names", i.e. express M as [M] := (1 ⊗ M) ∘ cup. The Frobenius multiplication μ([M] ⊗ [N]) reduces by compact-structure "string pulling" to [MN]. Cool! Defining trace as the cap also unifies the "internal" notion of trace of a matrix with the "self-loop" one: Tr(M) = cap[M].
–
Aleks Kissinger, Oct 29 '09 at 12:37
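The two identities in the last comment, μ([M] ⊗ [N]) = [MN] and Tr(M) = cap[M], can be verified concretely once you fix a representation. A numpy sketch of my own, where the name [M] is the row-major vectorization of M and the cap pairs against the vectorized identity (this choice of vec convention is an assumption, and it makes the string-pulling reduce to plain matrix multiplication, which is exactly the point):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M, N = rng.standard_normal((2, n, n))

name = lambda m: m.reshape(-1)   # [M]: bend the input wire around with a cup
cap = np.eye(n).reshape(-1)      # cap: X ⊗ X* -> R, the vectorized identity

def mu(vM, vN):
    """Frobenius multiplication on names: contract the inner pair of wires."""
    A, B = vM.reshape(n, n), vN.reshape(n, n)
    return np.einsum('ij,jk->ik', A, B).reshape(-1)

assert np.allclose(mu(name(M), name(N)), name(M @ N))  # mu([M] ⊗ [N]) = [MN]
assert np.isclose(cap @ name(M), np.trace(M))          # Tr(M) = cap[M]
```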