Idea

Linear logic is sometimes thought of as a logic for reasoning about resource-sensitive issues, but it can also be thought of categorically, or interpreted using game semantics, or as being related to Petri nets, or as a particular form of quantum logic.

Indeed, these more general senses of linear logic still faithfully follow the original motivation for the term “linear” as connoting “resource availability”, explained below (and in Girard 87): the non-cartesianness of the tensor product means the absence of a diagonal map, and hence makes it impossible for a function to depend on more than a single (linear) copy of any of its variables.

Resource availability

Unlike traditional logics, which deal with the truth of propositions, linear logic is often described as dealing with the availability of resources. A proposition, if it is true, remains true no matter how we use that fact in proving other propositions. By contrast, in using a resource A to make available a resource B, A itself may be consumed or otherwise modified. Linear logic deals with this by restricting our ability to duplicate or discard resources freely. For example, with an unrestricted contraction rule we could reason

have cake ⊢ have cake    have cake ⊢ eat cake

have cake, have cake ⊢ have cake ∧ eat cake

have cake ⊢ have cake ∧ eat cake

where the last step contracts the two copies of the hypothesis.

Linear logic disallows the contraction step, treating have cake, have cake ⊢ A as explicitly meaning that two slices of cake yield A. Disallowing contraction then corresponds to the fact that we can’t turn one slice of cake into two (more’s the pity), so you can’t have your cake and eat it too.
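The same restriction appears in programming languages with linear types, where a linear function must consume its argument exactly once. Here is a rough illustration using GHC’s LinearTypes extension (the Cake type and the function names are invented for this example):

    {-# LANGUAGE LinearTypes #-}

    data Cake = Cake

    -- A linear arrow 'Cake %1 -> b' must consume its Cake exactly once.
    eat :: Cake %1 -> ()
    eat Cake = ()

    -- Rejected by the type checker: 'c' is used twice, which is exactly
    -- the contraction step that linear logic disallows.
    -- haveAndEat :: Cake %1 -> (Cake, Cake)
    -- haveAndEat c = (c, c)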

Game semantics

Linear logic can also be interpreted using a semantics of “games” or “interactions”. Under this interpretation, each proposition in a sequent represents a game being played or a transaction protocol being executed. An assertion of, for instance,

P, Q ⊢ R

means roughly that if I am playing three simultaneous games of P, Q, and R, in which I am the left player in P and Q and the right player in R, then I have a strategy which will enable me to win at least one of them. Now the above statements about “resources” translate into saying that I have to play in all the games I am given and can’t invent new ones on the fly.

As a relevant logic

Linear logic is closely related to relevant logic, which has been studied for much longer. The goal of relevant logic is to disallow statements like “if pigs can fly, then grass is green”, which are true under the usual logical interpretation of implication but in which the hypothesis has nothing to do with the conclusion. Clearly there is a relationship with the “resource semantics”: if we want to require that all hypotheses are “used” in a proof, then we need to disallow weakening.

Definition

Linear logic is usually given in terms of sequent calculus. There is a set of propositions (although, as remarked above, these are to be thought of more as resources to be acquired than as statements to be proved), which we construct through recursion. Each pair of lists of propositions is a sequent (written as usual with ‘⊢’ between the lists), some of which are valid; we determine which are valid also through recursion. Technically, the propositional calculus of linear logic also requires a set of propositional variables from which to start; this is usually identified with the set of natural numbers (so the variables are p_0, p_1, etc), although one can also consider the linear logic LL(V) where V is any set of propositional variables to start from.

Here we define the set of propositions:

Every propositional variable is a proposition.

For each proposition A, there is a proposition A^⊥, the negation of A.

For each proposition A and proposition B, there are four additional propositions:

the additive conjunction A & B,

the additive disjunction A ⊕ B,

the multiplicative conjunction A ⊗ B,

the multiplicative disjunction A ⅋ B.

There are also four constant propositions: the additive truth ⊤, the additive falsity 0, the multiplicative truth 1, and the multiplicative falsity ⊥; and for each proposition A there are the exponentials !A (‘of course A’) and ?A (‘why not A’).

The terms “exponential”, “multiplicative”, and “additive” come from the fact that “exponentiation converts addition to multiplication”: we have !(A & B) ≡ !A ⊗ !B and so on (see below).
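Spelled out, the binary and nullary forms of this law are the standard equivalences

!(A & B) ≡ !A ⊗ !B

!⊤ ≡ 1

so that ! converts additive structure into multiplicative structure.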

However, the connectives and constants can also be grouped in different ways. For instance, the multiplicative conjunction ⊗ and additive disjunction ⊕ are both positive types, while the additive conjunction & and multiplicative disjunction ⅋ are negative types. Similarly, the multiplicative truth 1 and the additive falsity 0 are positive, while the additive truth ⊤ and multiplicative falsity ⊥ are negative. This grouping has the advantage that the similarity of symbols matches the adjective used.

In relevant logic, the terms “conjunction” and “disjunction” are often reserved for the additive versions & and ⊕, which are written with the traditional notations ∧ and ∨. In this case, the multiplicative conjunction A ⊗ B is called fusion and denoted A ∘ B, while the multiplicative disjunction A ⅋ B is called fission and denoted A + B (or sometimes, confusingly, A ⊕ B). In relevant logic the symbol ⊥ may also be used for the additive falsity, here denoted 0. Also, sometimes the additive connectives are called extensional and the multiplicatives intensional.

Sometimes one does not define the operation of negation, defining only p^⊥ for a propositional variable p. It is a theorem that every proposition above is equivalent (in the sense defined below) to a proposition in which negation is applied only to propositional variables.
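The equivalences that push negation inwards are the standard de Morgan dualities of linear logic:

(A^⊥)^⊥ ≡ A

(A ⊗ B)^⊥ ≡ A^⊥ ⅋ B^⊥ and (A ⅋ B)^⊥ ≡ A^⊥ ⊗ B^⊥

(A & B)^⊥ ≡ A^⊥ ⊕ B^⊥ and (A ⊕ B)^⊥ ≡ A^⊥ & B^⊥

(!A)^⊥ ≡ ?(A^⊥) and (?A)^⊥ ≡ !(A^⊥)

1^⊥ ≡ ⊥, ⊥^⊥ ≡ 1, ⊤^⊥ ≡ 0, and 0^⊥ ≡ ⊤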

We now define the valid sequents, where we write A, B, C ⊢ D, E, F to state the validity of the sequent consisting of the list (A, B, C) and the list (D, E, F), and use Greek letters for sublists.
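For orientation, the identity, cut, and structural rules take the following standard forms (the remaining rules introduce each connective on the left or the right of ⊢):

Identity: A ⊢ A.

Cut: from Γ ⊢ Δ, A and A, Γ′ ⊢ Δ′, conclude Γ, Γ′ ⊢ Δ, Δ′.

Exchange: from Γ ⊢ Δ, conclude σΓ ⊢ τΔ for any permutations σ and τ.

Weakening: from Γ ⊢ Δ, conclude Γ, !A ⊢ Δ (and dually Γ ⊢ ?A, Δ).

Contraction: from Γ, !A, !A ⊢ Δ, conclude Γ, !A ⊢ Δ (and dually with ?A on the right).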

The main point of linear logic is the restricted use of the weakening and contraction rules; if these were universally valid (applying to any A rather than only to !A or ?A), then the additive and multiplicative operations would be equivalent (in the sense defined below), and similarly !A and ?A would be equivalent to A, which would give us classical logic. On the other hand, one can also remove the exchange rule to get a variety of noncommutative logic; one must then be careful about how to write the other rules (as we have been above).

As usual, there is a theorem of cut elimination, showing that the cut rule and the identity rule follow from all the other rules together with the special cases of the identity rule of the form p ⊢ p for a propositional variable p.

The propositions A and B are (propositionally) equivalent if A ⊢ B and B ⊢ A are both valid, which we express by writing A ≡ B. It is then a theorem that either may be swapped for the other anywhere in a sequent without affecting its validity. Up to equivalence, negation is an involution, and the operations &, ⊕, ⊗, and ⅋ are all associative, with respective identity elements ⊤, 0, 1, and ⊥. These operations are also commutative (although this fails for the multiplicative connectives if we drop the exchange rule). The additive connectives are also idempotent (but the multiplicative ones are not).

Remark

There is a more refined notion of equivalence, where we pay attention to specific derivations π: A ⊢ B of sequents, and deem two derivations π, π′ of A ⊢ B Lambek-equivalent if they map to the same morphism under any categorical semantics S; see below. Given a pair of derivations π: A ⊢ B and ρ: B ⊢ A, it then makes sense to ask whether they are Lambek-inverse to one another (i.e., whether S(ρ) = S(π)⁻¹ under any semantics), so that the derivations exhibit an isomorphism S(A) ≅ S(B) under any semantics S. The resulting equivalence relation A ≡_Lambek B is strictly stronger than propositional equivalence. It should be observed that all equivalences A ≡ B listed below are in fact Lambek equivalences.

We also have distributive laws that explain the adjectives ‘additive’, ‘multiplicative’, and ‘exponential’:

Multiplication distributes over addition if one is a conjunction and one is a disjunction:

A ⊗ (B ⊕ C) ≡ (A ⊗ B) ⊕ (A ⊗ C) and A ⊗ 0 ≡ 0

A ⅋ (B & C) ≡ (A ⅋ B) & (A ⅋ C) and A ⅋ ⊤ ≡ ⊤

We can also restrict attention to sequents with one term on either side, as follows: Γ ⊢ Δ is valid if and only if ⨂Γ ⊢ ⅋Δ is valid, where ⨂(A, B, C) ≔ A ⊗ B ⊗ C, etc, and similarly for ⅋ (using implicitly that these operations are associative, with identity elements to handle the empty sequence).

We can even restrict attention to sequents with no term on the left side and one term on the right: A ⊢ B is valid if and only if ⊢ A ⊸ B is valid, where A ⊸ B ≔ A^⊥ ⅋ B. In this way, it is possible to ignore sequents entirely and speak only of propositions and valid propositions, eliminating half of the logical rules in the process. However, this approach is not as beautifully symmetric as the full sequent calculus.
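For instance, combining these reductions with the de Morgan dualities above, the validity of A, B ⊢ C, D is equivalent in turn to the validity of

A ⊗ B ⊢ C ⅋ D

⊢ (A ⊗ B) ⊸ (C ⅋ D), i.e. ⊢ (A ⊗ B)^⊥ ⅋ C ⅋ D

⊢ A^⊥ ⅋ B^⊥ ⅋ C ⅋ D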

Variants

The logic described above is full classical linear logic. There are many important fragments and variants of linear logic, such as multiplicative linear logic, intuitionistic linear logic (in which ⊸ is a primitive operation), full intuitionistic linear logic (where the multiplicative and additive connectives are all independent of each other), non-commutative linear logics (braided or not), light linear logics, etc.

Categorical semantics

Classical linear logic is modelled categorically by *-autonomous categories. Firstly, there is a monoidal ‘tensor’ connective A ⊗ B. Negation A^⊥ is modelled by the dual object involution (−)^*, while linear implication A ⊸ B corresponds to the internal hom, which can be defined as (A ⊗ B^⊥)^⊥. There is a de Morgan dual of the tensor called ‘par’, with A ⅋ B = (A^⊥ ⊗ B^⊥)^⊥. Tensor and par are the ‘multiplicative’ connectives, which roughly speaking represent the parallel availability of resources. The ‘additive’ connectives & and ⊕ are modelled by the categorical product and coproduct, respectively.

LL recaptures the notion of a resource that can be discarded or copied arbitrarily by the use of the modal operator !, the !-modality: !A denotes an ‘A-factory’, a resource that can produce zero or more A’s on demand. It is modelled by a comonad ! on the underlying *-autonomous category that is (symmetric) monoidal, in the sense that there is an isomorphism !A ⊗ !B ≅ !(A & B). Since every A is canonically a symmetric &-comonoid (remembering that & is the product), !A is then a symmetric ⊗-comonoid.

An LL sequent

A_1, …, A_n ⊢ B_1, …, B_m

is interpreted as a morphism

⊗_i A_i → ⅋_j B_j

The comonoid structure on !A then yields the weakening

Γ ⊗ !A → Γ ⊗ I → Γ

and contraction

Γ ⊗ !A → Γ ⊗ !A ⊗ !A

maps. The corresponding rules are interpreted by precomposing the interpretation of a sequent with one of these maps.
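In a linearly typed language, this comonad-with-comonoid structure would surface as an interface along the following lines (a hypothetical Haskell sketch; the class and method names are invented, not taken from an existing library):

    {-# LANGUAGE LinearTypes #-}

    -- The !-modality as a comonad whose values carry a commutative
    -- comonoid structure; the four methods interpret dereliction,
    -- digging, weakening, and contraction respectively.
    class OfCourse f where
      derelict :: f a %1 -> a           -- counit:           !A ⊸ A
      dig      :: f a %1 -> f (f a)     -- comultiplication: !A ⊸ !!A
      weaken   :: f a %1 -> ()          -- discard:          !A ⊸ 1
      contract :: f a %1 -> (f a, f a)  -- duplicate:        !A ⊸ !A ⊗ !A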

Particular monoidal and *-autonomous posets for modeling linear logic can be obtained by Day convolution from ternary frames. This includes Girard’s phase spaces as a particular example.

Polycategories

A different way to explain linear logic categorically (though equivalent, in the end) is to start with a categorical structure which lacks all of the connectives, but has sufficient structure to enable us to characterize them with universal properties. If we ignore the exponentials for now, such a structure is given by a polycategory. The polymorphisms

(A, B, C) → (D, E, F)

in a polycategory correspond to sequents A, B, C ⊢ D, E, F in linear logic. The multiplicative connectives are then characterized by representability and corepresentability properties:

(A, B) → C
----------------
A ⊗ B → C

and

A → (B, C)
----------------
A → B ⅋ C

(Actually, we should allow arbitrarily many unrelated objects to carry through in both cases.) The additives are similarly characterized as categorical products and coproducts, in a polycategorically suitable sense.
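With the context objects made explicit, the representability property for ⊗, for instance, reads:

(Γ, A, B, Δ) → Θ
----------------
(Γ, A ⊗ B, Δ) → Θ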

Finally, dual objects can be recovered as a sort of “adjoint”:

(A, B) → C
----------------
A → (B^*, C)

If all these representing objects exist, then we recover a *-autonomous category.

One merit of the polycategory approach is that it makes the role of the structural rules clearer, and also helps explain why ⅋ sometimes seems like a disjunction and sometimes like a conjunction. Allowing contraction and weakening on the left corresponds to our polycategory being “left cartesian”; that is, we have “diagonal” and “projection” operations such as Hom(A, A; B) → Hom(A; B) and Hom(; B) → Hom(A; B). In the presence of these operations, a representing object is automatically a cartesian product; thus ⊗ coincides with &. Similarly, allowing contraction and weakening on the right makes the polycategory “right cocartesian”, which causes corepresenting objects to be coproducts and thus ⅋ to coincide with ⊕.

On the other hand, if we allow “multi-composition” in our polycategory, i.e. we can compose a morphism A → (B, C) with one (B, C) → D to obtain a morphism A → D, then our polycategory becomes a PROP, and representing and corepresenting objects must coincide; thus ⊗ and ⅋ become the same. This explains why ⅋ has both a disjunctive and a conjunctive aspect. Of course, if in addition to multi-composition we have the left and right cartesian properties, then all four connectives coincide (including the categorical product and coproduct) and we have an additive category.

Game semantics

We can interpret any proposition in linear logic as a game between two players: we and they. The overall rules are perfectly symmetric between us and them, although no individual game is. At any given moment in a game, exactly one of these four situations obtains: it is our turn, it is their turn, we have won, or they have won; the last two states continue forever afterwards (and the game is over). If it is our turn, then they are winning; if it is their turn, then we are winning. So there are two ways to win: because the game is over (and a winner has been decided), or because it is forever the other player’s turn (either because they have no move, or because every move results in its still being their turn).

This is a little complicated, but it's important in order to be able to distinguish the four constants:

In ⊤, it is their turn, but they have no moves; the game never ends, but we win.

Dually, in 0, it is our turn, but we have no moves; the game never ends, but they win.

In contrast, in 1, the game ends immediately, and we have won.

Dually, in ⊥, the game ends immediately, and they have won.

The binary operators show how to combine two games into a larger game:

In A & B, it is their turn, and they must choose to play either A or B. Once they make their choice, play continues in the chosen game, with ending and winning conditions as in that game.

Dually, in A ⊕ B, it is our turn, and we must choose to play either A or B. Once we make our choice, play continues in the chosen game, with ending and winning conditions as in that game.

In A ⊗ B, play continues with both games in parallel. If it is our turn in either game, then it is our turn overall; if it is their turn in both games, then it is their turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If we have won both games, then we have won overall; if they have won either game, then they have won overall.

Dually, in A ⅋ B, play continues with both games in parallel. If it is their turn in either game, then it is their turn overall; if it is our turn in both games, then it is our turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If they have won both games, then they have won overall; if we have won either game, then we have won overall.

So we can classify things as follows:

In a conjunction, they choose what game to play; in a disjunction, we have control. Whoever has control must win at least one game to win overall.

In an addition, one game must be played; in a multiplication, all games must be played.

To further clarify the difference between ⊤ and 1 (the additive and multiplicative versions of truth, both of which we win), consider A ⅋ ⊤ and A ⅋ 1. In A ⅋ ⊤, it is always their move (since it is their move in ⊤, hence their move in at least one game), so we win just as we win ⊤. (In fact, A ⅋ ⊤ ≡ ⊤.) However, in A ⅋ 1, the game 1 ends immediately, so play continues as in A. We have won 1, so we only have to end the game to win overall, but there is no guarantee that this will happen. Indeed, in 0 ⅋ 1, the game never ends and it is always our turn, so they win. (In ⊥ ⅋ 1, both games end immediately, and we win. In A ⊗ 1, we must win both games to win overall, so this reduces to A; indeed, A ⊗ 1 ≡ A.)

Negation is easy:

To play A^⊥, simply swap roles and play A.
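The finite, sequential part of this description (the four constants, the additive connectives, and negation) can be made concrete in a few lines of code. Here is a minimal sketch in Haskell (the datatype and names are invented for illustration; the multiplicatives are omitted, since parallel play requires tracking interleaved positions):

    -- A game is either over (with a winner) or offers moves to one player.
    data Game = WeWon | TheyWon | OurTurn [Game] | TheirTurn [Game]

    top, zero, one, bot :: Game
    top  = TheirTurn []  -- their turn, no moves: never ends, we win
    zero = OurTurn []    -- our turn, no moves: never ends, they win
    one  = WeWon         -- ends immediately, and we have won
    bot  = TheyWon       -- ends immediately, and they have won

    with, plus :: Game -> Game -> Game
    with a b = TheirTurn [a, b]  -- A & B: they choose which game to play
    plus a b = OurTurn   [a, b]  -- A ⊕ B: we choose which game to play

    neg :: Game -> Game          -- A^⊥: swap the roles of the players
    neg WeWon          = TheyWon
    neg TheyWon        = WeWon
    neg (OurTurn gs)   = TheirTurn (map neg gs)
    neg (TheirTurn gs) = OurTurn (map neg gs)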

There are several ways to think of the exponentials. As before, they have control in a conjunction, while we have control in a disjunction. Whoever has control of !A or ?A chooses how many copies of A to play and must win them all to win overall. There are many variations on whether the player in control can spawn new copies of A or close old copies of A prematurely, and whether the other player can play different moves in different copies (whenever the player in control plays the same moves).

Other than the decisions made by the player in control of a game, all moves are made by transmitting resources. Ultimately, these come down to the propositional variables; in the game p, we must transmit a p to them, while they must transmit a p to us in p^⊥.

A game is valid if we have a strategy to win (whether by putting the game in a state where we have won or by guaranteeing that it is forever their turn). The soundness and completeness of this interpretation is the theorem that A is a valid game if and only if ⊢ A is a valid sequent. (Recall that all questions of validity of sequents can be reduced to the validity of single propositions.)
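For instance, in the game p^⊥ ⅋ p we have a winning ‘copycat’ strategy: each p they transmit to us in p^⊥, we transmit back to them in p. This matches the fact that ⊢ p^⊥ ⅋ p is valid, being the single-proposition form of the identity sequent p ⊢ p.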

Game semantics for linear logic was first proposed by Andreas Blass, I believe in Blass (1992). The semantics here may or may not be the same as proposed by Blass.

Multiple exponential operators

Much as there are many exponential functions (say from ℝ to ℝ), even though there is only one addition operation and one multiplication operation, so there can be many versions of the exponential operators ! and ?. (However, there doesn’t seem to be any analogue of the logarithm to convert between them.)

More precisely, if we add to the language of linear logic two more operators, !′ and ?′, and postulate of them the same rules as for ! and ?, we cannot prove that !A ≡ !′A and ?A ≡ ?′A. In contrast, if we introduce &′, ⊥′, etc, we can prove that the new operators are equivalent to the old ones.

In terms of the categorical interpretation above, there may be many such comonads !; the comonad is not determined by the underlying *-autonomous category. In terms of game/resource semantics, there are several slightly different interpretations of the exponentials.

One sometimes thinks of the exponentials as coming from infinitary applications of the other operations. For example:

!A ≔ 1 & kA & (k²/2)(A ⊗ A) & (k³/6)(A ⊗ A ⊗ A) & ⋯ (which is e^{kA} in an appropriate sense), where nA means an n-fold additive conjunction A & ⋯ & A for n a natural number, and we pretend that k is a positive number such that kⁿ/n! is always a natural number (which of course is impossible).

Constructions like this justify the rules for the exponentials, so again we see that there may be many ways to satisfy these rules.