Network Theory (Part 3)

As we saw last time, a Petri net is a picture that shows different kinds of things and processes that turn bunches of things into other bunches of things, like this:

The kinds of things are called states and the processes are called transitions. We see such transitions in chemistry:

H + OH → H2O

and population biology:

amoeba → amoeba + amoeba

and the study of infectious diseases:

infected + susceptible → infected + infected

and many other situations.

A “stochastic” Petri net says the rate at which each transition occurs. We can think of these transitions as occurring randomly at a certain rate—and then we get a stochastic process described by something called the “master equation”. But for starters, we’ve been thinking about the limit where there are very many things in each state. Then the randomness washes out, and the expected number of things in each state changes deterministically in a manner described by the “rate equation”.

It’s time to explain the general recipe for getting this rate equation! It looks complicated at first glance, so I’ll briefly state it, then illustrate it with tons of examples, and then state it again.

One nice thing about stochastic Petri nets is that they let you dabble in many sciences. Last time we got a tiny taste of how they show up in population biology. This time we’ll look at chemistry and models of infectious diseases. I won’t dig very deep, but take my word for it: you can do a lot with stochastic Petri nets in these subjects! I’ll give some references in case you want to learn more.

Rate equations: the general recipe

Here’s the recipe, really quick:

A stochastic Petri net has a set of states, say $1, \dots, k$, and a set of transitions. Let's concentrate our attention on a particular transition. Then the $i$th state will appear $m_i$ times as the input to that transition, and $n_i$ times as the output. Our transition also has a reaction rate $r$, which is a positive real number.

The rate equation answers this question:

$\displaystyle \frac{d x_i}{d t} = \; ?$

where $x_i(t)$ is the number of things in the $i$th state at time $t$. The answer is a sum of terms, one for each transition. Each term works the same way. For the transition we're looking at, it's

$\displaystyle r \, (n_i - m_i) \, x_1^{m_1} \cdots x_k^{m_k}$

The factor of $(n_i - m_i)$ shows up because our transition destroys $m_i$ things in the $i$th state and creates $n_i$ of them. The big product over all states, $x_1^{m_1} \cdots x_k^{m_k}$, shows up because our transition occurs at a rate proportional to the product of the numbers of things it takes as inputs. The constant of proportionality is the reaction rate $r$.

The formation of water (1)

But let’s do an example. Here’s a naive model for the formation of water from atomic hydrogen and oxygen:

This Petri net has just one transition: two hydrogen atoms and an oxygen atom collide simultaneously and form a molecule of water. That's not really how it goes… but what if it were? Let's use $[H]$ for the number of hydrogen atoms, and so on, and let the reaction rate be $\alpha$. Then we get this rate equation:

$\displaystyle \frac{d[H]}{dt} = -2\alpha [H]^2 [O]$

$\displaystyle \frac{d[O]}{dt} = -\alpha [H]^2 [O]$

$\displaystyle \frac{d[H_2O]}{dt} = \alpha [H]^2 [O]$

See how it works? The reaction occurs at a rate proportional to the product of the numbers of things that appear as inputs: two H's and one O. The constant of proportionality is the rate constant $\alpha$. So, the reaction occurs at a rate equal to $\alpha [H]^2 [O]$. Then:

• Since two hydrogen atoms get used up in this reaction, we get a factor of $-2$ in the first equation.

• Since one oxygen atom gets used up, we get a factor of $-1$ in the second equation.

• Since one water molecule is formed, we get a factor of $+1$ in the third equation.
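If you'd like to see these equations in action, here's a minimal numerical sketch in Python, using forward-Euler integration and made-up values for the rate constant and initial populations. Note that the conserved quantities $[H] + 2[H_2O]$ and $[O] + [H_2O]$ stay fixed, as they should:

```python
# Forward-Euler integration of the naive water-formation rate equation:
#   d[H]/dt   = -2a [H]^2 [O]
#   d[O]/dt   =  -a [H]^2 [O]
#   d[H2O]/dt =   a [H]^2 [O]
# The rate constant 'a' and the initial counts are made-up illustration values.

def water_formation(H, O, H2O, a=0.001, dt=0.01, steps=10000):
    for _ in range(steps):
        flux = a * H * H * O      # rate of the single transition
        H += -2 * flux * dt
        O += -1 * flux * dt
        H2O += flux * dt
    return H, O, H2O

H, O, H2O = water_formation(100.0, 60.0, 0.0)
# Atoms are conserved: H + 2*H2O and O + H2O stay fixed.
print(H + 2 * H2O)   # 100.0 (up to rounding)
print(O + H2O)       # 60.0  (up to rounding)
```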

The formation of water (2)

Let me do another example, just so chemists don’t think I’m an absolute ninny. Chemical reactions rarely proceed by having three things collide simultaneously—it’s too unlikely. So, for the formation of water from atomic hydrogen and oxygen, there will typically be an intermediate step. Maybe something like this:

Here OH is called a 'hydroxyl radical': first hydrogen and oxygen combine to form OH, and then OH combines with another hydrogen to form water. I'm not sure this is the most likely pathway, but never mind: it's a good excuse to work out another rate equation. If the first reaction, H + O → OH, has rate constant $\alpha$ and the second, H + OH → H₂O, has rate constant $\beta$, here's what we get:

$\displaystyle \frac{d[H]}{dt} = -\alpha [H][O] - \beta [H][OH]$

$\displaystyle \frac{d[O]}{dt} = -\alpha [H][O]$

$\displaystyle \frac{d[OH]}{dt} = \alpha [H][O] - \beta [H][OH]$

$\displaystyle \frac{d[H_2O]}{dt} = \beta [H][OH]$

See how it works? Each reaction occurs at a rate proportional to the product of the numbers of things that appear as inputs. We get minus signs when a reaction destroys one thing of a given kind, and plus signs when it creates one. We don’t get factors of 2 as we did last time, because now no reaction creates or destroys two of anything.
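Here's a similar numerical sketch in Python for the two-step pathway (again with made-up rate constants and initial counts). The hydroxyl count rises and then falls as OH gets converted to water, while the atom counts $[H] + [OH] + 2[H_2O]$ and $[O] + [OH] + [H_2O]$ are conserved:

```python
# Forward-Euler integration of the two-step pathway H + O -> OH (rate a),
# H + OH -> H2O (rate b).  The intermediate OH builds up and then is consumed.
# Rate constants and initial counts are made-up illustration values.

def two_step(H, O, OH, H2O, a=0.01, b=0.01, dt=0.001, steps=20000):
    peak_OH = OH
    for _ in range(steps):
        f1 = a * H * O     # H + O -> OH
        f2 = b * H * OH    # H + OH -> H2O
        H += (-f1 - f2) * dt
        O += -f1 * dt
        OH += (f1 - f2) * dt
        H2O += f2 * dt
        peak_OH = max(peak_OH, OH)
    return H, O, OH, H2O, peak_OH

H, O, OH, H2O, peak = two_step(100.0, 50.0, 0.0, 0.0)
# Hydrogen and oxygen atoms are conserved:
print(H + OH + 2 * H2O)  # 100.0 (up to rounding)
print(O + OH + H2O)      # 50.0  (up to rounding)
```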

The dissociation of water (1)

In chemistry every reaction comes with a reverse reaction. So, if hydrogen and oxygen atoms can combine to form water, a water molecule can also ‘dissociate’ into hydrogen and oxygen atoms. The rate constants for the reverse reaction can be different than for the original reaction… and all these rate constants depend on the temperature. At room temperature, the rate constant for hydrogen and oxygen to form water is a lot higher than the rate constant for the reverse reaction. That’s why we see a lot of water, and not many lone hydrogen or oxygen atoms. But at sufficiently high temperatures, the rate constants change, and water molecules become more eager to dissociate.

Calculating these rate constants is a big subject. I’m just starting to read this book, which looked like the easiest one on the library shelf:

• S. R. Logan, Chemical Reaction Kinetics, Longman, Essex, 1996.

But let’s not delve into these mysteries today. Let’s just take our naive Petri net for the formation of water and turn around all the arrows, to get the reverse reaction:

If the reaction rate is $\alpha$, here's the rate equation:

$\displaystyle \frac{d[H]}{dt} = 2\alpha [H_2O]$

$\displaystyle \frac{d[O]}{dt} = \alpha [H_2O]$

$\displaystyle \frac{d[H_2O]}{dt} = -\alpha [H_2O]$

See how it works? The reaction occurs at a rate proportional to $[H_2O]$, since it has just a single water molecule as input. That's where the $\alpha [H_2O]$ comes from. Then:

• Since two hydrogen atoms get formed in this reaction, we get a factor of +2 in the first equation.

• Since one oxygen atom gets formed, we get a factor of +1 in the second equation.

• Since one water molecule gets used up, we get a factor of $-1$ in the third equation.

The dissociation of water (2)

Of course, we can also look at the reverse of the more realistic reaction involving a hydroxyl radical as an intermediate. Again, we just turn around the arrows in the Petri net we had:

Now, if the reaction H₂O → H + OH has rate constant $\beta$ and the reaction OH → H + O has rate constant $\alpha$, the rate equation looks like this:

$\displaystyle \frac{d[H]}{dt} = \alpha [OH] + \beta [H_2O]$

$\displaystyle \frac{d[O]}{dt} = \alpha [OH]$

$\displaystyle \frac{d[OH]}{dt} = -\alpha [OH] + \beta [H_2O]$

$\displaystyle \frac{d[H_2O]}{dt} = -\beta [H_2O]$

Do you see why? Test your understanding of the general recipe.

By the way: if you’re a category theorist, when I said “turn around all the arrows” you probably thought “opposite category”. And you’d be right! A Petri net is just a way of presenting a strict symmetric monoidal category that’s freely generated by some objects (the states) and some morphisms (the transitions). When we turn around all the arrows in our Petri net, we’re getting a presentation of the opposite symmetric monoidal category. For more details, try:

After I explain how stochastic Petri nets are related to quantum field theory, I hope to say more about this category theory business. But if you don’t understand it, don’t worry about it now—let’s do a few more examples.

The SI model

The SI model is an extremely simple model of an infectious disease. We can describe it using this Petri net:

There are two states: susceptible and infected. And there’s a transition called infection, where an infected person meets a susceptible person and infects them.

Suppose $S$ is the number of susceptible people and $I$ the number of infected ones. If the rate constant for infection is $\beta$, the rate equation is

$\displaystyle \frac{dS}{dt} = -\beta S I$

$\displaystyle \frac{dI}{dt} = \beta S I$

Do you see why?

By the way, it's easy to solve these equations exactly. The total number of people doesn't change, so $S + I$ is a conserved quantity. Use this to get rid of one of the variables. You'll get a version of the famous logistic equation, so the fraction of people infected must grow along an S-shaped sigmoid curve: slowly at first, then rapidly, then leveling off.
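Since the SI model reduces to the logistic equation, we can check a numerical integration against the exact solution. Here's a sketch in Python, with made-up values for the infection rate constant and the population size:

```python
import math

# SI model: dS/dt = -b*S*I, dI/dt = b*S*I.  Since S + I = N is conserved,
# I obeys the logistic equation dI/dt = b*I*(N - I), whose exact solution is
#   I(t) = N*I0 / (I0 + (N - I0)*exp(-b*N*t)).
# The rate constant b and the population N are made-up illustration values.

def si_numeric(I0, N, b, t, steps=100000):
    dt = t / steps
    I = I0
    for _ in range(steps):
        I += b * I * (N - I) * dt   # forward Euler
    return I

def si_exact(I0, N, b, t):
    return N * I0 / (I0 + (N - I0) * math.exp(-b * N * t))

N, I0, b, t = 1000.0, 1.0, 0.001, 10.0
print(si_numeric(I0, N, b, t))  # close to the exact value below
print(si_exact(I0, N, b, t))
```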

Puzzle. Is there a stochastic Petri net with just one state whose rate equation is the logistic equation:

$\displaystyle \frac{dP}{dt} = \alpha P - \beta P^2$

with $\alpha, \beta > 0$?

The SIR model

The SI model is just a warmup for the more interesting SIR model, which was invented by Kermack and McKendrick in 1927:

The SIR model has an extra state, called resistant, and an extra transition, called recovery, where an infected person gets better and develops resistance to the disease:

If the rate constant for infection is $\beta$ and the rate constant for recovery is $\gamma$, the rate equation for this stochastic Petri net is:

$\displaystyle \frac{dS}{dt} = -\beta S I$

$\displaystyle \frac{dI}{dt} = \beta S I - \gamma I$

$\displaystyle \frac{dR}{dt} = \gamma I$

See why?

I don’t know a ‘closed-form’ solution to these equations. But Kermack and McKendrick found an approximate solution in their original paper. They used this to model the death rate from bubonic plague during an outbreak in Bombay that started in 1905, and they got pretty good agreement. Nowadays, of course, we can solve these equations numerically on the computer.
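Here's what such a numerical solution looks like in miniature: a forward-Euler sketch of the SIR equations in Python, with made-up rate constants, confirming that the total population is conserved and that the infection rises to a peak and then dies out:

```python
# Forward-Euler integration of the SIR rate equations
#   dS/dt = -b*S*I,  dI/dt = b*S*I - g*I,  dR/dt = g*I.
# The constants b, g and the initial populations are made-up illustration values.

def sir(S, I, R, b=0.002, g=0.5, dt=0.001, steps=50000):
    peak_I = I
    for _ in range(steps):
        infection = b * S * I
        recovery = g * I
        S += -infection * dt
        I += (infection - recovery) * dt
        R += recovery * dt
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

S, I, R, peak = sir(999.0, 1.0, 0.0)
print(S + I + R)   # total population is conserved: 1000.0 (up to rounding)
print(peak > 1.0)  # the infection rises to a peak before dying out
```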

The SIRS model

There’s an even more interesting model of infectious disease called the SIRS model. This has one more transition, called losing resistance, where a resistant person can go back to being susceptible. Here’s the Petri net:

Puzzle. If the rate constants for recovery, infection and loss of resistance are $\gamma$, $\beta$ and $\delta$, write down the rate equations for this stochastic Petri net.

In the SIRS model we see something new: cyclic behavior! Say you start with a few infected people and a lot of susceptible ones. Then lots of people get infected… then lots get resistant… and then, much later, if you set the rate constants right, they lose their resistance and they’re ready to get sick all over again! You can sort of see it from the Petri net, which looks like a cycle.

A ‘compartmental model’ is closely related to a stochastic Petri net, but beware: the pictures in this article are not really Petri nets!

The general recipe revisited

Now let me remind you of the general recipe and polish it up a bit. So, suppose we have a stochastic Petri net with $k$ states. Let $x_i$ be the number of things in the $i$th state. Then the rate equation looks like:

$\displaystyle \frac{d x_i}{d t} = \; ???$

It's really a bunch of equations, one for each $1 \le i \le k$. But what is the right-hand side?

The right-hand side is a sum of terms, one for each transition in our Petri net. So, let's assume our Petri net has just one transition! (If there are more, consider one at a time, and add up the results.)

Suppose the $i$th state appears as input to this transition $m_i$ times, and as output $n_i$ times. Then the rate equation is

$\displaystyle \frac{d x_i}{d t} = r \, (n_i - m_i) \, x_1^{m_1} \cdots x_k^{m_k}$

where $r$ is the rate constant for this transition.

That's really all there is to it! But subscripts make my eyes hurt more and more as I get older (this is the real reason for using index-free notation, despite any sophisticated rationales you may have heard), so let's define a vector

$x = (x_1, \dots, x_k)$

that keeps track of how many things there are in each state. Similarly, let's make up an input vector:

$m = (m_1, \dots, m_k)$

and an output vector:

$n = (n_1, \dots, n_k)$

for our transition. And a bit more unconventionally, let's define

$x^m = x_1^{m_1} \cdots x_k^{m_k}$

Then we can write the rate equation for a single transition as

$\displaystyle \frac{d x}{d t} = r \, (n - m) \, x^m$

This looks a lot nicer!

Indeed, this emboldens me to consider a general stochastic Petri net with lots of transitions, each with their own rate constant. Let's write $T$ for the set of transitions and $r(\tau)$ for the rate constant of the transition $\tau \in T$. Let $m(\tau)$ and $n(\tau)$ be the input and output vectors of the transition $\tau$. Then the rate equation for our stochastic Petri net is

$\displaystyle \frac{d x}{d t} = \sum_{\tau \in T} r(\tau) \, (n(\tau) - m(\tau)) \, x^{m(\tau)}$

That’s the fully general recipe in a nutshell. I’m not sure yet how helpful this notation will be, but it’s here whenever we want it.
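The vector form of the recipe transcribes almost literally into code. Here's a minimal sketch in Python: each transition is a triple (rate constant, input vector, output vector), and the rate equation is a sum over transitions. As a check, the example encodes the naive water-formation net from earlier (the rate constant and populations are made-up illustration values):

```python
# A direct transcription of the general recipe: each transition has a rate
# constant r, an input vector m, and an output vector n, and contributes
#   r * (n_i - m_i) * x_1^{m_1} * ... * x_k^{m_k}
# to dx_i/dt.

def rate_equation(x, transitions):
    """dx/dt for a stochastic Petri net.

    x           -- list of populations, one per state
    transitions -- list of (r, m, n): rate constant, input vector, output vector
    """
    k = len(x)
    dxdt = [0.0] * k
    for r, m, n in transitions:
        flux = r
        for i in range(k):
            flux *= x[i] ** m[i]          # r * x^m
        for i in range(k):
            dxdt[i] += (n[i] - m[i]) * flux
    return dxdt

# States: (H, O, H2O); one transition 2H + O -> H2O with made-up rate 0.001.
water = [(0.001, (2, 1, 0), (0, 0, 1))]
print(rate_equation([100.0, 60.0, 0.0], water))
# approximately [-1200.0, -600.0, 600.0],
# matching -2a[H]^2[O], -a[H]^2[O], +a[H]^2[O]
```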

Next time we'll get to the really interesting part, where ideas from quantum theory enter the game! We'll see how things in different states randomly transform into each other via the transitions in our Petri net. And someday we'll check that the expected number of things in each state evolves according to the rate equation we just wrote down… at least in the limit where there are lots of things in each state.


35 Responses to Network Theory (Part 3)

I have cited compartmental models when I formulated the Oil Shock Model for resource depletion. The extraction of oil follows distinct stages of transfer described as a stochastic data flow from discovery to eventual consumption. I received a comment the other day from someone that said it reminded him of pharmacokinetic models for drug ADME (absorption, distribution, metabolism, elimination). In this case pharmacokinetic drug delivery models are compartmental in their structure.

I think you are on to a very good approach in this series. Start with the very pure mathematics and then as you splinter it off to the applied math fields, you can take it in any direction you want.

Can you help me towards a precise mathematical description of a “compartmental model”? Something like this, I guess:

There is a directed graph whose vertices are called compartments. For each vertex $i$ there is a function of time $x_i(t)$ called the amount of stuff in compartment $i$. These obey a differential equation

$\displaystyle \frac{d x_i}{d t} = \sum_{j \to i} \; ??? \; - \sum_{i \to j} \; ???$

where the first term in the right hand side is a sum over edges pointing into the vertex $i$, and the second term is a sum over edges pointing out of the vertex $i$.

But what are the question-marked items allowed to be functions of? And what kinds of functions are allowed? I’m not sure how much generality is allowed.

The more generality, the more models this formalism includes. But the less generality, the more interesting things we can say about these models. So maximum generality is not necessarily the goal here.

The great thing about stochastic Petri nets is that the rate at which a given transition happens is proportional to the product of the numbers of things in each state that’s an input to this transition. This is a strong limitation which allows us to say a lot about these models.

I get the feeling that “compartmental models” are a lot like the “box models” that Nathan Urban was talking about in “week304”. I don’t know if anyone has attempted a mathematically precise definition of either concept.

I think you are on to a very good approach in this series. Start with the very pure mathematics and then as you splinter it off to the applied math fields, you can take it in any direction you want.

Thanks! That’s sort of my plan. I’m mainly trying to work out pure-mathematical descriptions of a lot of “network languages” in order to find some syntheses. But in the process, we can learn their applications to many different subjects, and get some ideas for tricks that haven’t been tried yet.

John, the most obvious candidates for the question marks are valuations of probabilities. Then what it describes is a probability flow from one state to another. In a linear approximation this would amount to a continuous Markov model. The constraint rule is that the sum of the probabilities of all the states equals one.

This gets used for describing reliability models of systems (which happened to be one of the original applications) whereby the probabilities describe the eventual flow into failure states, i.e. the probability of failure of a system. Many of the popular stochastic Petri Net packages are meant to be used for reliability applications, see the SPNP software from Trivedi et al.

I typically extend the idea of probabilities to concentrations without blinking an eye. Concentrations have to obey the same conservation laws as probabilities so I think the transition is fairly seamless. So “the amount of stuff” is essentially the concentration of material in a compartment.

You can make the connection to a general volume of fluid but then you have to consider the stimulus and work out the solution as a convolution with a time-varying input flow volume. That is what I use for oil production modeling.

I am perhaps being a bit pedantic with my suggestions but that’s what I can offer.

My own interest which I am excited about pursuing is the work coming out of “valuation algebras”, “local computations”, and “generic inferencing”. See the work by Pouly and Kohlas. I think they have some quality ideas on how to work the generality of the approach.

To first order, I often assume that the outflow is proportional to the amount that is left. So a small oil reservoir has proportionally less outflow than a big reservoir. This extraction process is called proportional drawdown, and it works well when you have a mix of reservoir sizes, thus making it applicable in stochastic simulations. So some may be higher and some lower but overall the mean proportional drawdown rate works pretty well.

Or maybe people permit the flow from the ith container to the jth to depend on how much stuff is in other containers—I don’t know.

Depending on our choices, different mathematical technologies will become relevant.

In the case I illustrated, the oil rig operators never put the oil back so you don’t have to worry about 2-way flow. But I definitely see your concerns in the general case, as it is hard to separate the two flow directions.

But the less generality, the more interesting things we can say about these models.

It seems to me that (with the transitions that you have defined) the formalism is already pretty general, because it allows you to place any polynomial (and hence, by Taylor, pretty much any continuous function) on the right side of the ODE.

The only interesting limitations would be that the variables are always positive, and that said function must vanish at 0, so x = 0 is always an equilibrium.

It seems to me that (with the transitions that you have defined) the formalism is already pretty general, because it allows you to place any polynomial […] on the right side of the ODE.

The only interesting limitations would be that the variables are always positive, and that said function must vanish at 0, so x = 0 is always an equilibrium.

That’s true—I think only in the course of our solving that Lotka-Volterra puzzle did I realize how general these rate equations of stochastic Petri nets are!

People working on chemical reaction networks prove lots of interesting theorems about these equations, like the existence of nonzero stable equilibrium solutions, but to do this they make some extra assumptions. A great intro is:

The key insight of CRNT [chemical reaction network theory] is that the apparent nonlinearity of the ODEs arising from mass-action kinetics conceals a great deal of linearity. This arises from the underlying network of chemical reactions, which define a directed graph on the complexes that participate in the chemical reactions. The complexes are the multisets of chemical species which occur on the left and right hand sides of each chemical reaction. The existence of this underlying structure limits the nonlinearity that can appear, which is why the strong results of CRNT are possible. Working at the level of complexes serves to disentangle the interactions between individual chemical species. Fixed points thereby fall into two categories: those that arise at the level of complexes and those that arise from the way in which different complexes contain the same chemical species. The deficiency of a network, which we define in this paper as the dimension of a certain vector space (Definition 3.2), measures the extent of the second possibility. However, one of the most interesting discoveries of CRNT has to do with fixed points of the first kind. Whenever there is at least one such fixed point, all fixed points must be of that kind (Theorem 6.2), and, in this case, the fixed points are extremely rigid, with very nice properties of uniqueness and asymptotic stability.

I hope I get around to explaining this material…

(and hence, by Taylor, pretty much any continuous function)

The mathematician in me will nod and smile, while inwardly gnashing his teeth…

One nice thing about stochastic Petri nets is that they let you dabble in many sciences.

I have a feeling they also allow you to dabble in pseudosciences like economics.

The buzzword here is Leontieff’s Input-Output model which is simpler in that all the equations are linear. Also, economic processes don’t proceed by letting components randomly collide in the economy, so quantitatively the model is not one of chemical kinetics where the concentrations of the reagents are multiplied.

In the case of Leontieff’s model of economics you have processes such as

(product A) + (product B) -> (product C)

where you assume linearity between outputs and inputs. So, to produce a unit of C you need a certain number of units of A and of B.

The analogy is not perfect but you can still have blue squares called “industries” and yellow circles called “products” and process equations linking them which look superficially like chemical equations.

In the case of Leontieff’s model of economics you have processes such as

(product A) + (product B) -> (product C)

where you assume linearity between outputs and inputs.

Hmm, that's different. In a stochastic Petri net you assume multilinearity. So, you'd assume the rate of production of product C is proportional to the amount of product A times the amount of product B:

$\displaystyle \frac{dC}{dt} = \alpha \, A B$

That's why we just need a single rate constant for each transition. But it sounds like Leontieff is assuming the amount of product C equals some constant times the amount of product A plus some constant times the amount of product B:

$C = \alpha A + \beta B$

If so, there's not enough data in a stochastic Petri net to give these two constants.

Another difference could be that Leontieff models only consider multi-input, one-output processes. Is that true? Stochastic Petri nets handle multi-input, multi-output processes.

Linearity is a somewhat odd assumption. Suppose you're making milk ($M$) using cows ($C$) and farmhands to milk them ($F$). If

$\displaystyle \frac{dM}{dt} = k C + k' F$

then you can make as much milk as you want without any cows, simply by hiring enough farmhands. Or if you don't have any farmhands, just get enough cows and they'll milk each other.

I too was going to make a wisecrack about Wassily Leontief winning the Nobel prize… but his model is not quite as silly as I'm making it sound. If we assume no changes in production methods and linearize the problem about a given solution, it should give decent results for small perturbations around the given solution. This is, of course, nothing deep: everything is linear to first order.

Unfortunately, of course, production methods are constantly changing, and the linear approximation is only valid in a limited regime. So, it seems you’d have to be very limited in your ambitions if you wanted to get sensible results from a Leontief input-output model. Presumably Leontief was smart enough to recognize these limitations.

It would be interesting to hear how (un)successful the predictions of this sort of model have been in practice. I can easily imagine the Soviets trying to develop 5-year plans based on enormous systems of linear equations, and then discovering that it never works. Maybe that’s why the KGB detained Leontief several times.

Just an attempt at black humor, there. In fact, Wikipedia says he got his MA in economics in 1924 at the age of 19, but then “sided with campaigners for academic autonomy, freedom of speech and in support of Pitirim Sorokin. As a consequence, he was detained several times by the Cheka. In 1925, he was allowed to leave the USSR, mostly because the Cheka believed that he was mortally ill with a sarcoma, a diagnosis that later proved false.”

Despite your joke about how 5-year plans don't work, you may be interested to find that the US Bureau of Labor Statistics produces inter-industry relationships (input-output) data to this day:

This page contains links to files of input-output data for the U.S. economy for the historical years 1993-2008, and for the projected year 2018. Input-output data show the flow of commodities from production through intermediate use by industries and purchases by final users. This data are developed as a set of matrices or tables for each year. The input-output tables produced by the BLS are derived from input-output data initially developed by the Bureau of Economic Analysis.

Production processes don't change as fast as you make it sound, especially in capital-intensive industries where it is a multi-year project to change the way a plant operates or build a new plant. The computer industry and its 18-month Moore's Law may be a counterexample, but by and large, both in agriculture and in manufacturing, processes change on multi-year time scales. The building of new infrastructure (changing the costs of transportation or energy) is also a multi-year process. Or, if the year is too slow, change to quarterly data.

Looking at the Wikipedia page, it appears one use is to go from measured outputs and a model to figure out what some difficult-to-measure intermediate nodes are. In a pre-computer age the linearity may have been chosen partly because it makes this "just" a big matrix inversion problem.

What you say is that in order to make one unit of milk you need k units of cows and k’ units of milk.

What you generally do with this is to investigate how a given level of demand is going to be met. Given an output vector O, the amount of inputs needed to produce it is I = AO for some matrix A (the k and k' above would be entries of A). If you assume that the inputs come from part of the outputs, you have the surplus O − I available for consumption (C) or export (E) (and if a component of it is negative, you need to import it), resulting in

C + E = O – I = (1-A) O

So if you know the level of net production (Consumption plus net exports) and the production matrix A, you get the gross production O.
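The computation described above is just a small linear solve. Here's a sketch in Python for a hypothetical 2×2 technology matrix A; all numbers are made up for illustration:

```python
# Leontief's input-output relation: net output C + E = (1 - A) O, so gross
# production is O = (1 - A)^{-1} (C + E).  The 2x2 technology matrix A and
# the demand vector below are made-up illustration values.

def leontief_gross_output(A, demand):
    """Solve (1 - A) O = demand for a 2x2 technology matrix A by Cramer's rule."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * demand[0] - b * demand[1]) / det,
            (-c * demand[0] + a * demand[1]) / det]

# Producing a unit of good 1 consumes 0.2 units of good 1 and 0.3 units of
# good 2, and similarly for good 2:
A = [[0.2, 0.1],
     [0.3, 0.4]]
O = leontief_gross_output(A, [100.0, 50.0])
print(O)  # gross outputs exceed the net demand of (100, 50)
```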

If you want to consider more than one way of producing the same product, say you can do it with h and h’ units respectively, you call that a different process by which milk can be produced but then A is not a square matrix…

What you say is that in order to make one unit of milk you need k units of cows and k’ units of milk.

Where k = 0 and k’ = 1.

(Presumably you mean “and k’ units of farmhands”.)

Okay, I see… so Leontieff wasn’t nearly as silly as I thought. That’s a relief. I thought you were saying Leontieff input-output models were like stochastic Petri nets but with the reaction rates being linear in each variable rather than multilinear. But no, it’s something quite different going on here.

In fact, it’s as if he’s dealing with a plain old Petri net, but where the number of inputs of each transition can be any positive real, rather than an integer… and maybe there’s just one output? (That could easily be generalized.)

So while a Petri net can handle transitions like

4 H + O + C → CH3OH

a Leontieff input-output model can also handle transitions like:

1.5 cows + .1 farmhands → 1 gallon of milk per day

(I'm a city slicker: I realize I have no idea how much milk a dairy cow makes per day! It's embarrassing!)

So, maybe I should try to subsume Leontieff input-output models into Petri net theory using this slight generalization of Petri nets.

By the way: when I thought Leontieff said

“the rate of milk production is k times the number of cows plus k’ times the number of farmers”

it seemed silly because it assumed complete substitutability: we could completely eliminate cows if we had enough farmhands, or vice versa.

It’s a relief to hear he’s saying

“to make one unit of milk it takes k cows and k’ farmhands”

but this assumes a complete lack of substitutability. That may be almost okay in this example, but I guess there are lots of situations where we can manufacture something using a bit less X if we use a bit more Y.

But okay, you say this can be gotten around:

If you want to consider more than one way of producing the same product, say you can do it with h and h’ units respectively, you call that a different process by which milk can be produced but then A is not a square matrix…

Good! In general we might have a continuum of such ways: e.g. if we can produce one product using some X and some Y as long as f(X,Y) = 1 for some function f. But I guess we can truncate that continuum down to a finite set, approximately.

I don’t understand why these discussions never seem to get around to the problem that in order to apply an equation to a real world problem, you somehow need to fill in for the assumed but undefined environment in which you are applying it. The problem that we’re always struggling with is that models turn out to be missing critical information that was not, or could not have been, encoded in them.

What if “the problem with models” for operating in open environments, isn’t with the models themselves, and their failure to be capable of the impossible feat of containing their own environment? What if the problem is with the assumption that you should rely on the best model you have as if it included all the information it needs to guide you?

If you don't make that assumption it would seem like you'd need to apply a model with uncertainty, and then be constantly looking for what information the model will turn out to be missing, as that is the usual cause of failure. It seems plausible that one would then seek to devise some practice and procedure, "a method" of some sort, for using the model together with observation as a way of learning from the environment. It's what one would do on the chance you might learn in advance when your model is going to fail, and you'll need to discover new relations, variables or constants to plug in. Isn't that reasonable?

It is probably worth remembering here that modern industrial-strength computing was first used in the Manhattan Project to solve the extended Bateman equations of nuclear reactions: http://en.wikipedia.org/wiki/Decay_chain

Those are excellent examples of stochastic Petri nets, John! Thanks for reminding me of them—I was fascinated by them as a kid. I learned much of my science from my dad’s old 1947 CRC Handbook of Chemistry and Physics. This featured old elements like ‘mesothorium’, and described the decay chains. It also had nifty tables of chemical compounds, integrals, logarithms, trig functions, and much more.

Cursed be the day my mom threw it out!

Here’s the thorium series:

Here’s the neptunium series:

And here’s the uranium series, also known as the radium series:

Note these aren’t drawn as Petri nets: to translate them into Petri nets, each arrow must be replaced by a transition with one input and one output. Sometimes the same isotope is the input or output of more than one transition. But we have no transitions with multiple inputs or outputs, since we’re not looking at fission or fusion, and we’re not considering alpha particles as outputs, even though they’re really helium nuclei!
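Since every transition in such a net has one input and one output, the rate equation is linear, and we can check a numerical integration against the classical closed-form solution. Here's a sketch in Python of a hypothetical two-step chain A → B → C, compared against Bateman's formula for the intermediate species (the decay constants are made-up illustration values):

```python
import math

# A two-step decay chain A -> B -> C is a Petri net where every transition has
# one input and one output, so the rate equation is linear:
#   dA/dt = -l1*A,  dB/dt = l1*A - l2*B,  dC/dt = l2*B.
# Bateman's closed-form solution for B lets us check a numerical integration.
# The decay constants l1, l2 are made-up illustration values.

def decay_numeric(A0, l1, l2, t, steps=200000):
    dt = t / steps
    A, B, C = A0, 0.0, 0.0
    for _ in range(steps):
        dA = -l1 * A
        dB = l1 * A - l2 * B
        dC = l2 * B
        A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
    return A, B, C

def bateman_B(A0, l1, l2, t):
    # Exact amount of the intermediate species at time t.
    return A0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

A, B, C = decay_numeric(1000.0, 0.5, 0.2, 5.0)
print(B)                               # close to the exact value below
print(bateman_B(1000.0, 0.5, 0.2, 5.0))
```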

Puzzle 1: You need two transitions: one with one input (from P), two outputs (to P) and rate $\alpha$ (this is needed to create the first term, $\alpha P$, in the equation), and another transition with two inputs (from P), one output (to P) and rate $\beta$ (needed for the second term, $-\beta P^2$).

By the way, this leads me to think that if you are constantly dealing with stuff that gets created and destroyed, maybe this formalism isn't the "best" one.

Puzzle 2: The equations should be very much like the ones in the SIR model, except that the first one has the term $+\delta R$ at the end, and the last one has the term $-\delta R$.

I agree with WebHubTel that this resembles very much pharmaco-kinetics modeling, it would be nice to understand if it is exactly the same or not though.

Thanks! It looks like you are! You got both those puzzles right—great! Let me write up official solutions, so I can add them to my network theory webpages.

By the way, this leads me to think that if you are constantly dealing with stuff that gets created and destroyed, maybe this formalism isn't the "best" one.

I actually think it’s very good—as you’ll see soon, these Petri net transitions are all nicely described using ‘annihilation and creation operators’ very much like those in quantum field theory! This may be ‘inefficient’ in a certain way, but it reveals some interesting connections between ideas.

Anyway:

Puzzle. Is there a stochastic Petri net with just one state whose rate equation is the logistic equation:

$\displaystyle \frac{dP}{dt} = \alpha P - \beta P^2$

Answer. Yes. Use the Petri net with one state, say amoeba, and two transitions:

• fission, with one amoeba as input and two as output, with rate constant $\alpha$.

• competition, with two amoebas as input and one as output, with rate constant $\beta$.

(The idea of ‘competition’ is that when two amoebas are competing for limited resources, one may die.)
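As a quick sanity check, one can integrate the resulting rate equation dP/dt = αP − βP² numerically. This is only a sketch; the rate constants and step size are chosen arbitrarily:

```python
# The two transitions' contributions to the rate equation:
#   fission:     1 amoeba in, 2 out -> net +1, at rate alpha * P
#   competition: 2 amoebas in, 1 out -> net -1, at rate beta * P**2
def dP_dt(P, alpha=1.0, beta=0.01):
    return (2 - 1) * alpha * P + (1 - 2) * beta * P**2

# Euler integration: P climbs toward the carrying capacity alpha/beta = 100.
P = 1.0
dt = 0.001
for _ in range(20000):
    P += dP_dt(P) * dt
```

The (outputs − inputs) factors written out explicitly show how each transition’s net effect on P enters the rate equation.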

Puzzle. If the rate constants for recovery, infection and loss of resistance are α, β and γ respectively, write down the rate equations for this stochastic Petri net:

[…] Last time I explained the rate equation of a stochastic Petri net. But now let’s get serious: let’s see what is really stochastic—that is, random—about a stochastic Petri net. For this we need to forget the rate equation (temporarily) and learn about the ‘master equation’. This is where ideas from quantum field theory start showing up! […]

I thought that it could be interesting to show how this formalism can be mapped onto information-flow block diagrams (of the kind used for control and signal processing).

This picture:

shows the translation of the simple SI Petri net, with two states and one transition block. The states are just integrators preceded by sums:

while the transitions are just products followed by gains:

What clearly stands out is that conservation of mass has to be specifically enforced by a feedback signal for every transition input (i.e. for every state output). This makes Petri nets much better for problems where stuff is mostly conserved. Anyway, things work fine and give the expected result, as shown here:

Using a physical modeling language and concepts like “through” and “across” variables (I think all the variables here are “through”), it should not be hard to actually implement a simulation environment in which you draw a stochastic Petri net and then click a button to simulate it.

I don’t really understand how you’re reinterpreting a “state” as an integrator preceded by a sum. In Petri net theory a state is usually interpreted as a container with some amount of “stuff” in it (for example, a certain number of rabbits, or hydrogen atoms), with the amount being a function of time. I guess you’re taking the Petri net and giving it a dramatically different interpretation. I wish I really understood…

An integrator is in fact a container with some stuff (the value of the state) inside.

Provided that you have an initial condition, say x(0) at time 0, if you integrate the rate over time (say, up to time t) you end up exactly with the value x(t).

In discrete time the integrator would simply be a sum, or a counter if you like, of the things that have arrived in each ΔT.

But in any case, you can just follow the equations back through the wires.

That is, if I call the output of the integrator I, the input has to be dI/dt, which, by the given equations, has to be equal to the sum of 3 terms: beta*S*I, beta*S*I, and -beta*S*I.

These terms are assembled by the transition block (you can follow back Out1, Out2 and Back2 in the transition to see how they are built, and so on, until you get back to the output of the integrators and you hopefully realize that the whole thing is a self-consistent way to rewrite the ODE visually as a graph).

Please let me know if that helps. I really do need to understand WHY this notation is evidently not obvious to those outside of a few fields.

Perhaps I should also have stated more clearly that the “output” of an integrator is just the value of the state, and the input is the rate of change of that state, that is, its derivative; and you can set an initial condition inside …

The arrows in the diagram must be interpreted as a signal propagating in the arrow direction, or, also, as an assignment of a variable from the departure to the destination, but they do not represent a flow, in which a balance equation mandates that stuff must disappear from one place in order to get somewhere else.

This is the reason why negative feedback loops have to be implemented separately to make stuff disappear (the -1 gains in the transition blocks).
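A minimal discrete-time sketch of this block-diagram reading of the SI model. The parameter values are illustrative, and `flow` is my name for the transition block’s product-and-gain output:

```python
# Each state is an integrator (a running sum of its rate input).
# The transition block emits one +beta*S*I signal per output arrow
# and one -beta*S*I feedback signal per input arrow.
beta, dt = 0.002, 0.01
S, I = 999.0, 1.0                      # integrator initial conditions
for _ in range(50000):
    flow = beta * S * I                # the transition block's product-and-gain
    dI = flow + flow - flow            # two output arrows to I, one input arrow from I
    dS = -flow                         # one input arrow from S
    S += dS * dt                       # integrators accumulate their rate inputs
    I += dI * dt
# Mass is conserved only because the -flow feedback signals are wired in.
```

Dropping either `-flow` feedback would silently create stuff out of nothing, which is the point made above about conservation having to be enforced explicitly.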

I think I finally understand what you were saying. You are rewriting the rate equation of a stochastic Petri net in terms of a signal-flow graph. Between then and now I’ve thought more about those graphs… and in fact I want to talk about them now in my network theory series, though it may take a while for me to get there!

I really do need to understand WHY this notation is evidently not obvious to those outside of a few fields.

It’s not that the notation was inherently hard, it’s that I was busy trying to understand Petri net notation, and you were translating it into some other notation, which looks slightly similar (networks of boxes and edges) but works differently. It was too much for me to process! The fact that you called a ‘state’ an ‘integrator’ was the straw that broke the camel’s back. But it’s just like how we get the charge on a capacitor by integrating the current through it.

[…] this particularly nice special case happens to be the rate equation for a certain stochastic Petri net. So, I’ve succeeded in connecting the “diagram theory” discussion to the “network theory” discussion!

What is the logic behind writing the rate as the product of the powers of the inputs?

Is it that we are counting the number of possible encounters between the interacting species? E.g.: if a wolf eats 2 and only 2 rabbits at once, and there are x rabbits and y wolves respectively, the number of encounters would be about x²y. This is true only if x and y are very large, so that non-allowed encounters are negligible (a wolf cannot eat the same rabbit twice). Also, if the order in which the species interact to complete one encounter is immaterial, the extra factors can be adjusted into the rate constant.

Is this understanding okay?

Also, I remember a bit about Le Chatelier’s principle for equilibrium reactions, which had something to do with the rate being proportional to the product of powers of the inputs divided by the product of powers of the outputs. So, in general, shouldn’t the products of a transition also play a role in determining the rate, as the products might start interfering? Either we could get the products to appear in the denominators, or create a separate transition to allow obstruction (which we are doing here). Why is the second approach better?

Also, we have considered transitions like death which have only inputs. Why not source-like transitions? (E.g. a human planting seeds at a constant rate, irrespective of the network the seed is involved in.) Are we also going to consider time-varying rates later?

What is the logic behind writing the rate as the product of the powers of the inputs?

Is it that we are counting the number of possible encounters between the interacting species? E.g.: if a wolf eats 2 and only 2 rabbits at once, and there are x rabbits and y wolves respectively, the number of encounters would be about x²y.

Right, that’s the logic.

This is true only if x and y are very large, so that non-allowed encounters are negligible (a wolf cannot eat the same rabbit twice).

Right. The rate equation is an approximation that’s only good in a certain limit where there are many things of each kind. In later sections we discuss the master equation, which is more accurate. Here you’ll see that the number x(x−1) replaces the number x². Also, the master equation treats the numbers of things as integers rather than real numbers. Also, it treats the processes using probability theory.
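A small sketch of this counting argument for the “wolf eats two rabbits” transition, comparing the exact count x(x−1)y of encounters with distinct rabbits to the rate-equation approximation x²y (the function name is mine):

```python
from itertools import permutations

# Ordered pairs of *distinct* rabbits, times wolves: x*(x-1)*y encounters.
# The rate equation instead uses x**2 * y, which overcounts the
# forbidden "same rabbit twice" pairs.
def exact_encounters(x, y):
    return len(list(permutations(range(x), 2))) * y   # x*(x-1)*y

x, y = 1000, 10
exact = exact_encounters(x, y)       # 1000 * 999 * 10
approx = x**2 * y                    # 1000 * 1000 * 10
relative_error = (approx - exact) / exact
# The relative error is about 1/x, negligible for large populations.
```

For x = 1000 the two counts differ by about a tenth of a percent, which is the sense in which the rate equation is a large-population approximation.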

Also, if the order in which the species interact to complete one encounter is immaterial, the extra factors can be adjusted into the rate constant.

Right.

So, in general shouldn’t the products of a transition also play a role in determining the rate as the products might start interfering?

Chemists use the rate equation and master equation that I’m discussing here. However, in chemical reactions, for every transition there is always a reverse transition that converts the products back into the inputs. I bet that’s what gives the effect you’re mentioning. Maybe you can look up the Le Chatelier principle you’re remembering, and try to derive it from a suitable master equation.

Also, we have considered transitions like death which have only inputs. Why not source-like transitions?

Those are allowed too: in mathematics, whatever is not explicitly forbidden is allowed. I consider a transition like this in Part 5.
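A tiny sketch of such a source transition: with no inputs, the product over input states is empty, so the transition just contributes a constant term to the rate equation (the values here are illustrative):

```python
# A source transition with no inputs and one output to state P,
# with rate constant c, contributes dP/dt = c: the empty product
# over inputs equals 1, so the rate is just the constant c.
c, dt = 3.0, 0.01
P = 0.0
for _ in range(100):     # integrate up to time 1.0
    P += c * dt          # the source transition's constant contribution
# After time 1.0 we have P = c * 1.0 = 3.0 (up to roundoff).
```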

Are we also going to consider time-varying rates later?

No. We could, but there is too much to do and not enough time to do it all!

Thanks. I read the next part, and everything was already there.
The Le Chatelier principle I was talking about is indeed about reversible reactions, and it is concerned with the situation at equilibrium, i.e. when the forward and backward reaction rates are equal.

[…] John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with ‘reachability:’ what token states are reachable from what other token states. John talked about this idea in part 25. […]
