Network Theory (Part 30)

The network theory series is back! You may have thought it died out, but in fact it’s just getting started. Over the last year my grad students have made huge strides in working out the math of networks. Now it’s time to explain what they’ve done.

In the last three episodes I explained how electrical circuits made of resistors, inductors and capacitors are a great metaphor for many kinds of complex systems made of interacting parts. And it’s not just a metaphor; there are mathematically rigorous analogies—in other words, isomorphisms—between such electrical circuits and various other kinds of ‘passive linear networks’.

I showed you a chart of these analogies last time:

|                        | displacement | flow             | momentum             | effort             |
|------------------------|--------------|------------------|----------------------|--------------------|
| Mechanics: translation | position     | velocity         | momentum             | force              |
| Mechanics: rotation    | angle        | angular velocity | angular momentum     | torque             |
| Electronics            | charge       | current          | flux linkage         | voltage            |
| Hydraulics             | volume       | flow             | pressure momentum    | pressure           |
| Thermal Physics        | entropy      | entropy flow     | temperature momentum | temperature        |
| Chemistry              | moles        | molar flow       | chemical momentum    | chemical potential |

But what do I mean by a ‘passive linear network’? Let me explain this very roughly at first, since we’ll be painfully precise later on.

Right now by ‘network’ I mean a graph with gizmos called ‘components’ on the edges. For example:

In a network there is some kind of ‘flow’ running along each edge, and also some kind of ‘effort’ across that edge. For example, in electronics the flow is electrical current and the effort is voltage. The chart shows the meaning of flow and effort in other examples.

‘Passivity’ means roughly that none of the components put out energy that didn’t earlier come in. For example, resistors lose energy (which goes into heat, which we’re not counting). Capacitors can store energy and later release it. So, resistors and capacitors are passive—and so are inductors. But batteries and current sources actually put out energy, so we won’t allow them in our networks yet. For now, we’re just studying how passive components respond to a source of flow or effort.
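One common way to make this rough idea precise (a sketch only; sign conventions and further subtleties vary) is to demand that the total energy a component has absorbed up to any time $t$ is nonnegative:

$$\int_{-\infty}^{t} V(\tau)\, I(\tau)\, \mathrm{d}\tau \;\ge\; 0 \qquad \text{for all times } t.$$

A resistor makes this integral strictly increase; a capacitor or inductor can let it decrease as stored energy is released, but never below zero.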

For some subtleties that show up when you try to make the concept of passivity precise, try:

Finally, ‘linearity’ means that the flow along each edge of our network is linearly related to the effort across that edge. Here are the key examples:

• For electrical resistors, linearity is captured by Ohm’s law. If an edge in our network is labelled by a resistor of resistance $R$, usually drawn like this:

then Ohm’s law says:

$$V = R I$$

where $V$ is the voltage across that edge and $I$ is the current along that edge.

• If our edge is labelled by an inductor of inductance $L$:

we have

$$V = L \frac{dI}{dt}$$

Here we need to think of the voltage and current as functions of time.

• If our edge is labelled by a capacitor of capacitance $C$:

we write the equation

$$I = C \frac{dV}{dt}$$

where again we think of the voltage and current as functions of time.
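All three component laws become instances of a single generalized Ohm’s law if we pass to the frequency domain; this is a preview of the ‘impedance’ trick mentioned below, sketched here in Laplace-transform notation (the series will make it precise later):

$$\hat{V}(s) = Z(s)\,\hat{I}(s), \qquad
Z(s) = \begin{cases} R & \text{(resistor)} \\ L s & \text{(inductor)} \\ 1/(C s) & \text{(capacitor)} \end{cases}$$

So a resistor has a real impedance, while inductors and capacitors have impedances living in a larger field of rational functions of $s$.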

Both linearity and passivity are simplifying assumptions that we eventually want to drop. If we include batteries or current sources, we’re dropping passivity. And if we include transistors, we’re dropping linearity. Obviously both of these are important!

However, there is a lot to do even with these simplifying assumptions. And now it’s time to get started!

In what follows, I will not use the terms ‘flow’ and ‘effort’, which are chosen to be neutral and subject-independent. Instead, I’ll use the vocabulary from electronics, e.g. ‘current’ and ‘voltage’. The reason is that we’ve all heard of resistors, capacitors, Ohm’s law and Kirchhoff’s laws, and while these have analogues in every row of the chart, it seems pointless to make up weird new ‘neutral’ terms for all these concepts.

But don’t be fooled by the jargon! We’re not merely studying electrical circuits. We’ll be studying passive linear networks in full generality… with the help of category theory.

Linear passive networks as morphisms

To get going, let’s think about circuits made of resistors. We can do this without harm, because we’ll later include capacitors and inductors using a simple effortless trick. Namely, we’ll generalize the ‘resistance’ of a resistor, which is a real number, to something called ‘impedance’, which is an element of some larger field. Everything will be so abstract that replacing resistances with impedances will be as easy as snapping our fingers.

Right now I want to define a category where the morphisms are circuits made of resistors. Any morphism will go from some ‘inputs’ to some ‘outputs’, like this:

So a morphism is roughly a graph with edges labelled by numbers called ‘resistances’, with some special nodes called ‘inputs’ and some special nodes called ‘outputs’.

What can we do with morphisms? Compose them! So, suppose we have a second morphism whose inputs match the outputs of the first:

Then we can compose them, attaching the outputs of the first to the inputs of the second. We get this morphism as the result:

So, composing morphisms is a way to build big electrical circuits—or other ‘linear passive networks’—out of little ones.

This seems pretty simple, but let’s try to formalize it and see why we have a category. In fact it takes a bit of thought. To check that we get a category, we need to check that composition is associative:

and that each object has an identity morphism that does what an identity should:

All these equations seem obviously true in our example… until you try to prove them.

You might think an identity morphism should be a bunch of straight pieces of wire—a bunch of edges each with an input node and an output node—but that doesn’t quite work, since sticking an extra edge onto a graph gives a new graph with an extra edge!

Also, we are composing circuits by ‘sticking them together’. This process is formalized in category theory using a pushout, and pushouts are only defined ‘up to canonical isomorphism’. The very simplest example is the disjoint union of two sets. We all know what it means, but if you examine it carefully, you’ll see it’s only defined up to canonical isomorphism, because it involves a choice of how we make the two sets disjoint, and this choice is somewhat arbitrary.

All this means the category we’re after is a bit subtler than you might at first expect; in fact, it’s most naturally thought of as a bicategory, meaning roughly that all the equations above hold only ‘up to canonical isomorphism’.

So, we proceed like this.

First we define a concept of ‘labelled graph’, where (for now) only the edges are labelled. We do this because we want our circuits to have edges labelled by ‘resistances’, which are real numbers. But we do it in greater generality because later we’ll want the edges to be labelled by ‘impedances’, which are elements of some other field. And since we’re studying electrical circuits just as examples of networks, later still we will probably want graphs whose edges are labelled in still other ways.

So:

Definition. A graph consists of a finite set $E$ of edges, a finite set $N$ of nodes, and two functions

$$s, t : E \to N$$

Thus each edge $e$ will have some node $s(e)$ as its source and some node $t(e)$ as its target:

Definition. Given a set $L$, we define an $L$-labelled graph to be a graph together with a function $r : E \to L$. This assigns to each edge $e$ its label $r(e) \in L$. We call $L$ the label set.

We use the letter $r$ because for circuits of resistors we will take the label set to be

$$L = (0, \infty)$$

the positive real numbers, and $r(e)$ will be the resistance of the edge $e$. For circuits that also contain inductors and capacitors we will take the label set to be the positive elements of some larger field… but more about that later!
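The definition above translates directly into data. Here is a minimal sketch in Python (the class and field names are mine, purely illustrative): a graph is a finite set of edges $E$, a finite set of nodes $N$, and source/target functions $s, t : E \to N$, and an $L$-labelled graph adds a labelling $r : E \to L$.

```python
from dataclasses import dataclass

@dataclass
class LabelledGraph:
    """A finite L-labelled graph, following the definition in the text."""
    nodes: set   # the finite set N
    edges: set   # the finite set E
    s: dict      # source function s : E -> N
    t: dict      # target function t : E -> N
    r: dict      # labelling r : E -> L  (resistances, for circuits)

    def check(self):
        # every edge needs a source and target in N, and a label
        for e in self.edges:
            assert self.s[e] in self.nodes
            assert self.t[e] in self.nodes
            assert e in self.r

# Example: a single 2-ohm resistor between nodes 0 and 1.
g = LabelledGraph(nodes={0, 1}, edges={'e'},
                  s={'e': 0}, t={'e': 1}, r={'e': 2.0})
g.check()
```

Taking `r` to have values in the positive reals gives circuits of resistors; swapping in another label set requires no change to the structure.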

Now we want to give our $L$-labelled graph a set of nodes called ‘inputs’ and a set of nodes called ‘outputs’. You might think the set of inputs should be disjoint from the set of outputs, but that’s a tactical error! It turns out an identity morphism in our category should have the inputs being exactly the same as the outputs… and no edges at all:

To handle this nicely, we need to make a category of $L$-labelled graphs. This works in the obvious way, if you’re used to this stuff. A morphism from one $L$-labelled graph to another sends edges to edges, nodes to nodes, and preserves everything in sight:

Definition. Given $L$-labelled graphs $G$ and $G'$, a morphism of $L$-labelled graphs from $G$ to $G'$ is a pair of functions

$$f_E : E \to E', \qquad f_N : N \to N'$$

such that the following diagrams commute, meaning:

$$s' \circ f_E = f_N \circ s, \qquad t' \circ f_E = f_N \circ t, \qquad r' \circ f_E = r$$

There is a category where the objects are $L$-labelled graphs and the morphisms are as we’ve just defined them.

Warning: the morphisms in this category of $L$-labelled graphs are not the morphisms of the kind we really want, the ones that look like this:

They are just a step along the way. A morphism of the kind we really want is a diagram like this in the category of $L$-labelled graphs:

$$I \stackrel{i}{\longrightarrow} \Gamma \stackrel{o}{\longleftarrow} O$$

where $\Gamma$ is an $L$-labelled graph and $I, O$ are $L$-labelled graphs with no edges!

You see, if $I$ and $O$ have no edges, all they have is nodes. We call the nodes of $I$ the inputs and those of $O$ the outputs. The morphisms $i$ and $o$ say how these nodes are included in $\Gamma$, which is our circuit made of resistors.

In general, any diagram shaped like this is called a cospan:

If we turned the arrows around it would be called a span. Cospans are good whenever you have a thing with an ‘input end’ and an ‘output end’, and you want to describe how the ends are included in the thing. So, they’re precisely what we need for describing a circuit made of resistors, like this:

This makes us want to cook up a category where:

• an object is an $L$-labelled graph with no edges. We can alternatively think of it as a finite set: a set of nodes.

• a morphism from one such object to another is a cospan of $L$-labelled graphs:

We still need to say how to compose these morphisms. We know it will amount to attaching the outputs of one circuit to the inputs of the next—that’s all there is to it! But we need to make this precise and prove we get a category. And as I’ve hinted, we will actually get something bigger and better: a bicategory! This will come as no surprise if you’re familiar with span and cospan bicategories, but it may induce a heart attack otherwise.
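Concretely, this composition glues the two circuits along their shared boundary by a pushout: take the disjoint union of the node sets, then identify each output node of the first circuit with the matching input node of the second. Here is a rough sketch (the representation and function names are mine, not the formalism of the paper), computing the pushout of finite node sets with a union-find structure:

```python
# Sketch: a circuit is (nodes, edges, inputs, outputs), where each edge
# is a (source, target, resistance) triple.  Composition glues the
# outputs of the first circuit to the inputs of the second.

def compose(c1, c2):
    nodes1, edges1, ins1, outs1 = c1
    nodes2, edges2, ins2, outs2 = c2
    assert len(outs1) == len(ins2), "boundaries must match"

    # Disjoint union: tag each node by which circuit it came from.
    parent = {}
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for n in nodes1: parent[(1, n)] = (1, n)
    for n in nodes2: parent[(2, n)] = (2, n)

    # Identify the k-th output of c1 with the k-th input of c2.
    for a, b in zip(outs1, ins2):
        parent[find((2, b))] = find((1, a))

    nodes = {find(x) for x in parent}
    edges = [(find((1, s)), find((1, t)), r) for s, t, r in edges1] \
          + [(find((2, s)), find((2, t)), r) for s, t, r in edges2]
    return nodes, edges, [find((1, i)) for i in ins1], [find((2, o)) for o in outs2]

# Two single-resistor circuits composed in series:
c1 = ({0, 1}, [(0, 1, 1.0)], [0], [1])
c2 = ({0, 1}, [(0, 1, 2.0)], [0], [1])
nodes, edges, ins, outs = compose(c1, c2)
```

The composite here has three nodes and two edges: the shared boundary node has been glued, which is exactly the ‘only up to canonical isomorphism’ choice the pushout makes for us.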

This bicategory can then be ‘watered down’ to give our category of circuits. And when we take

$$L = (0, \infty)$$

we’ll get the category where morphisms are circuits made of resistors! We’ll call this the category of resistor circuits.

Then I’ll explain what we can do with this category! There’s no end of things we could do with it. But the main thing Brendan does is study the ‘black-boxing’ operation, where we take a circuit, forget its inner details, and only keep track of what it does. This turns out to be quite interesting.

References

I thank Brendan Fong for drawing some of the pictures of circuits here. For the details of what I’m starting to explain here, read our paper:


38 Responses to Network Theory (Part 30)

This looks very much like a course I took at Berkeley my last year as an undergraduate entitled “Non-equilibrium Thermodynamics”. It was mostly out of the EE Dept and all of the electronic analogies (and isomorphisms) were utilized. I was very enthralled with the generalization possible for modeling biological systems such as membranes where there were physical boundaries across which voltages and chemical potentials could be modeled. I think you need to look into Onsager relations. (If I remember correctly a Nobel was given to Onsager.)

The “mobility analogy” is the name of the “Firestone analogy” in Wikipedia: http://en.wikipedia.org/wiki/Mobility_analogy
The “impedance analogy” is the “old analogy”: http://en.wikipedia.org/wiki/Impedance_analogy
‘Topology preserving’ means parallel components remain parallel in the analogy, rather than becoming serial. That is only true in the mobility/Firestone analogy.
In (Part 29) you discuss a mechanical “auto-analogy”, which describes the dualizing process between the two possibilities.
I wonder if just the Firestone analogies should be called an “analogy”, while the assignment velocity-current resp. force-voltage should be called a “dualogy”, on the basis of the topology of the system.

I don’t know if this is relevant, but in 1948 a new approach to studying water transport in trees was introduced based on an Ohmic analogy. It involved studying the soil-plant-atmosphere continuum with a phenomenological model using the notions of resistance, capacitance and water potential. This approach dominated the field for decades. A brief account can be found in section 7.2 in the following paper on the rise of sap in trees (a subject that I think came up some time ago on this blog), which also contains in section 3 a brief account of the role of trees in climate change: http://arxiv.org/abs/1404.3066

Since the late sixties and early seventies a new paradigm called Hydraulic Architecture has dominated the literature, one which combines elements of the Ohmic analogy with elements of the original Cohesion-Tension theory of the rise of sap which dates back to the end of the 19th century.

I once had the task of implementing an isomorphism between the 1st & 3rd rows of your table in hardware, for a particular case. As a summer engineering intern at Bell Labs in the pre-transistor era, I assigned myself the job of simulating the mechanical behavior of a telephone switching relay with the resistors, capacitors and inductors of an analogue computer (digital computers were not yet up to this task, but were getting there). I just checked with Google and I am amazed that the model of relay (the “BJ3”) that I worked on is still available; the backstop contact tab may be just a little bit heavier on that relay as a result of my summer work to understand the “chatter” behavior better.

There are numerous alternate citations to this compact summary published in Nature regarding the collaboration of George Oster, Alan Perelson, and Aharon Katchalsky. Just use the search strategy of a concatenation of their names and “Network Thermodynamics”.

Thanks, David! I’d seen this paper a long time ago, but it’s time to refresh my memory and see exactly how far this approach has gone. In Borg-like fashion, we want to assimilate all different approaches to network theory, adding their distinctive qualities to the Collective.

Soon we’ll want to fix the potentials at input and output nodes and see what the potentials at other nodes do. These are determined by a nice rule: they minimize the total power consumed by the resistors! The power consumed by a single resistor is the voltage across that resistor times the current through that resistor. But the current is the voltage divided by the resistance! So, this formalism gets a bit ugly when the resistance can be zero.
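As a concrete check of this minimization principle (my own toy example, not from the paper): fix the potentials at the two ends of two resistors in series, and minimize the dissipated power over the potential $v$ at the middle node. Setting the derivative to zero gives a conductance-weighted average, which agrees with the familiar voltage-divider formula:

```python
# Two resistors R1, R2 in series between fixed potentials V_in and V_out.
# Power as a function of the middle node's potential v:
#   P(v) = (V_in - v)^2 / R1 + (v - V_out)^2 / R2
# Setting dP/dv = 0 gives a conductance-weighted average of the endpoints.

def middle_potential(V_in, V_out, R1, R2):
    g1, g2 = 1.0 / R1, 1.0 / R2           # conductances
    return (g1 * V_in + g2 * V_out) / (g1 + g2)

def power(v, V_in, V_out, R1, R2):
    return (V_in - v) ** 2 / R1 + (v - V_out) ** 2 / R2

v = middle_potential(10.0, 0.0, 1.0, 4.0)  # -> 8.0 volts
# Sanity check: nudging v in either direction can only increase the power.
assert power(v, 10.0, 0.0, 1.0, 4.0) <= power(v + 0.1, 10.0, 0.0, 1.0, 4.0)
assert power(v, 10.0, 0.0, 1.0, 4.0) <= power(v - 0.1, 10.0, 0.0, 1.0, 4.0)
```

Note that the conductances $1/R$ appear, which is exactly why this formalism turns ugly when a resistance is allowed to be zero.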

(Try sticking a perfectly conductive piece of wire into the two holes of an electrical socket, and this ugliness will be made manifest.)

Luckily, if you have an edge of zero resistance, you can achieve the same effect by ‘collapsing’ that edge: deleting it, and identifying the two nodes at its ends. That’s roughly what David Winsemius said.

You’ll note that I said an ‘identity morphism’ in our category looks like this:

If we allowed edges of zero resistance, we might instead try to let an identity morphism be a graph with zero-resistance edges connecting the inputs and outputs. But since zero-resistance edges make life a bit difficult, and since there’s no law against graphs without edges, we might as well put them to use as identity morphisms!

As you can begin to see, there are lots of slightly different ways to do things, which should ultimately be ‘equivalent’ in some sense that needs to be made precise. Jamie Vicary has proposed a project to Brendan, which I’ll remind him of now: give a nice category-theoretic characterization of our category (or bicategory) of circuits. If we do this, we might be in a position to prove that different ways of making the ‘design decisions’ give equivalent categories (or bicategories)… either in the standard sense of equivalence for these structures, or some looser sense.

It is a pity that the linear passive network is not a general directed graph, because of Kirchhoff’s law, so I think that it cannot realize every possible mathematical morphism.
I am thinking that if the graph had a diode in each edge, then the constraint would give an arrow on the edge, but this doesn’t solve the problem because of the conservation laws at the nodes.
It is possible to replace the definition of direction with another one, different from the current direction, using an arbitrary definition.
I am thinking that if there is an alternating current, then the direction of the current in each edge changes each half-period, so that there are two morphisms in a passive network (a flip between pushout and pullback).

I thought that the first part of the post represented a correspondence between categories and electric circuits, and this enlightened me about the overlap.
In my Valkenburg book on networks, there is an arbitrary arrow on each branch, but this direction is not constant between contiguous loops (I think to simplify the calculation); however, it is possible to assign an arbitrary arrow between two nodes to solve the circuit. I think that it is possible, after a solution, to assign the arrow on each branch using the current direction (a natural choice, with an arbitrariness for null currents), so that the function between input node and output node is a morphism, and it is not possible (in the natural choice) to assign arbitrary directions to the arrows; but rereading the post I understand that there may be a problem with the identity morphism…

I thought that the first part of the post represented a correspondence between categories and electric circuits…

I’m not sure what you mean by that. The goal of the post is to construct a specific category in which a morphism is an electrical circuit made of resistors. After some review of why this could be interesting, I start by showing what a morphism in this category looks like:

Then I describe these morphisms in a precise way. The category has the following objects and morphisms:

• an object is an $L$-labelled graph with no edges. We can alternatively think of it as a finite set: a set of nodes.

• a morphism from one object to another is a diagram like this in the category of $L$-labelled graphs:

Then I say that next time I’ll explain how we compose these morphisms.

You seem to be noting that the use of directed graphs in this formalism—graphs with arrows on the edges—is a bit tricky.

I think that it is possible, after a solution, to assign the arrow on each branch using the current direction (a natural choice, with an arbitrariness for null currents),

I can’t do that, first because I can’t allow any arbitrary choices (it destroys mathematics to make arbitrary choices), but second and more importantly because I want my circuit to be the same mathematical entity regardless of what the currents are.

The use of directed graphs is actually unnecessary; it seemed convenient to me but right now I’m doubting it’s a good idea.

rereading the post I understand that there may be a problem with the identity morphism …

I said there’s a problem with identity morphisms unless we allow graphs with no edges. But graphs with no edges are perfectly valid graphs! So, we can take identity morphisms to look like this:

I feel the post communicated very little of what I was intending to say, perhaps because I was trying to say too many different things. But your comments are very helpful because I’m teaching a class on this today, and you’ve reminded me how important it is to say a few things and say them clearly.

For the next few posts, starting with this one, I’ll be summarizing some of the ideas in this paper. If you find mistakes in the paper or things that are unclear, let us know! We will be working to improve it.

Hi John, are you guys planning to put in references to proofs in section 6.3? The earliest I know are probably in Weinstein’s CBMS lectures. One textbook reference in S. Benenti, “Hamiltonian structures and generating families.”

It’s a big paper, and I also want to talk about other papers, so I certainly won’t explain everything in here—just enough to help you get started! If you have questions, don’t be shy about asking them.

In this first seminar, I start with a quick overview of network theory, and then begin building a category where the morphisms are electrical circuits. Here are some lecture notes to go along with the video:

One thing that puzzles me about your (bi)category of labelled graphs is that you don’t require inputs to map to leaves and you don’t require the outputs to map to roots. Is my understanding correct? By a root I mean a vertex with no outgoing edges, and by a leaf a vertex with no incoming edges (since everything is directed, this makes sense).

This suggests to me that your resulting category will be quite different from the various operads of wiring diagrams that David has been studying. He has several.

1) For the circuits we’re talking about now, the directions on the edges are not really important, since resistors, capacitors and inductors don’t have a built-in directionality. For diodes the direction would be important: if you turn around a diode on an edge you get a different circuit. But it would never make sense, for electrical circuits, to require that output terminals have no outgoing edges and input terminals have no incoming edges. For diodes this restriction would actually eliminate some interesting circuits.

2) For the circuits we’re talking about now, the distinction between inputs and outputs is purely conventional; the categories we’ll be talking about are all dagger-categories, meaning we can switch what we call the input and what we call the output. Later in the seminar, when we get to control theory, we may meet some categories where the inputs are truly different from the outputs.

3) It’s tempting to use zero-resistance wires as identity morphisms. However, we’re avoiding zero-resistance wires in our formalism because they behave in a singular way (a ‘short circuit’). We can avoid zero-resistance wires without loss of generality because whenever you have a zero-resistance wire in an electrical circuit, you can collapse that edge in your graph and get rid of it, obtaining a circuit that behaves the same way.

But if you allow zero-resistance wires, you can take any electrical circuit, add extra zero-resistance wires at the input and output terminals, and get an electrical circuit that behaves in the same way but obeys the restriction you mention.

(Later I’ll make the concept of ‘behavior’ precise; for now people can read about it in the paper Brendan and I are writing.)

Looking at the factors of the different classes of units allows seeing more of the consistent analogies between different physical quantities, particularly when you notice that most of the classes of units factor into two parts, {length and (time or frequency)} and one of these five: { unity, mass, charge, mass/charge, mass/charge^2 }.

Using the number of length and time factors in a type of unit to place that unit type horizontally and vertically respectively in a table, and creating a table for each of the five other factors, about fifty classes of units or physical quantities can be arranged in a stack of tables whose geometric relationships define algebraic relations and possible classes of dimensionally valid physical equations.

Thermal units don’t fit neatly into this scheme, but I have noted some of the more important ones in a text box.

The names of the unit classes are from Alan Eliasen’s physically typed language / Swiss-Army-chainsaw desk calculator Frink. Names in parentheses are not defined in Frink, but still work in calculations.
