Thursday, May 02, 2019

In the previous text, I tried to focus on the differences in the treatment of QFT (quantum field theory) that may discourage overly naive students of "mundane QFT" when they try to switch to modern advanced QFT and string theory in particular.

This text is somewhat similar but it focuses on the "later differences" – what string theory actually tells us about the world and the physical laws that we didn't know when we were confined in the mundane QFT paradigm – or that we couldn't even imagine. There's some overlap with texts such as top 12 achievements of string theory – Joe Polchinski had added the last two – but here I am looking at the issue from a different perspective: less marketing, more eureka.

So what do I see differently than when I was in the mundane QFT phase?

Note that all these insights, and some others, show that it's childish when someone wants to "return" physics to a pre-stringy era. We – I really mean the body of the best theoretical physicists – have simply learned some things that cannot be "unlearned".

Theories that seemingly look different may actually be the same.

I chose this as the #1 point not because I am sure it's the deepest one but because it's arguably the most self-evident and well-defined class of insights among the sufficiently important ones.

In physics jargon, dualities exist. Dualities have become omnipresent in string theory – and in QFT. Only once physicists had several examples were they pushed to qualitatively change their perspective. Before the discovery of dualities, physicists thought that if two theories differ in some technical aspects and if there's no "sufficiently obvious" field redefinition or a map that shows their equivalence (usually some kind of drudgery that brings a Lagrangian in one form to another form), the theories have to be physically inequivalent.

Before string theory, people knew various equivalences. Different formulae for a function – e.g. the Riemann zeta function – were equal to each other. Great, but in some sense "straightforward". Schrödinger's formulation of quantum mechanics is equivalent to Heisenberg's or Dirac's, when properly interpreted. That's also great but it boils down to a rather simple unitary transformation.
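To see how "straightforward" this older kind of equivalence is, here is a tiny numerical sketch (standard-library Python, nothing deep): two different-looking series for the zeta function agree with each other and with Euler's \(\pi^2/6\) at \(s=2\).

```python
import math

def zeta_direct(s, terms=200000):
    # Dirichlet series: zeta(s) = sum over n of 1/n^s, converges for s > 1
    return sum(1.0 / n**s for n in range(1, terms + 1))

def zeta_via_eta(s, terms=200000):
    # Alternating (Dirichlet eta) series, related by zeta(s) = eta(s) / (1 - 2^(1-s))
    eta = sum((-1) ** (n + 1) / n**s for n in range(1, terms + 1))
    return eta / (1.0 - 2.0 ** (1 - s))

# Two superficially different formulae give the same function
print(zeta_direct(2), zeta_via_eta(2), math.pi**2 / 6)  # all agree to ~5 decimal places
```

The equality of the two series is provable by elementary rearrangement – exactly the "limited size" kind of equivalence discussed above, as opposed to a duality between two full-fledged theories.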

But physicists used to assume that these equivalences only apply to "objects of limited size". When you describe the laws of physics for a whole Universe and everything in it, you have specified so much information that it just can't be equivalent to a completely different set of physical laws. But the equivalences actually exist everywhere.

So we have the equivalence of the sine-Gordon and massive Thirring models; fermionization and bosonization. S-duality, T-duality, U-duality, string-string duality, the M-theory – type IIA duality, mirror symmetry. And many more that I will discuss separately. Most of these dualities were first found in string theory and/or in QFTs that are close to string theory, or at least in QFTs studied by string theorists, i.e. physicists who have the ability to do research on string theory.

We know that similar dualities, like S-duality of Yang-Mills, exist even in local QFTs. The lesson may be more general. Maybe if you find some other theories different from QFTs or string theory, they will also exhibit dualities. But string theory has been the playground where we actually learned this general lesson. Intelligent enough theories of Nature exhibit dualities.
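T-duality is the easiest of these dualities to make quantitative. A toy Python sketch (with \(\alpha'=1\) and the level-matching constraint ignored for simplicity – so it's a cartoon of the formula, not a full consistency check): the closed-string mass formula on a circle of radius \(R\) gives the same spectrum as on radius \(\alpha'/R\), provided the momentum and winding numbers are swapped.

```python
import itertools

ALPHA_PRIME = 1.0  # string scale, set to 1 for illustration

def mass_squared(n, w, R, N=0, Ntilde=0):
    # Closed-string mass formula on a circle: momentum mode n, winding mode w,
    # oscillator levels N, Ntilde (level matching N - Ntilde = n*w is ignored here)
    return (n / R) ** 2 + (w * R / ALPHA_PRIME) ** 2 + (2 / ALPHA_PRIME) * (N + Ntilde - 2)

R = 3.7
R_dual = ALPHA_PRIME / R

# Swapping momentum <-> winding while inverting the radius leaves the spectrum invariant
modes = list(itertools.product(range(-3, 4), repeat=2))
spectrum = sorted(mass_squared(n, w, R) for n, w in modes)
dual_spectrum = sorted(mass_squared(w, n, R_dual) for n, w in modes)
print(all(abs(a - b) < 1e-9 for a, b in zip(spectrum, dual_spectrum)))  # True
```

The two "worlds" – a string on a huge circle and a string on a tiny one – produce identical lists of masses, which is the simplest quantitative face of the philosophical point below.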

You shouldn't underestimate the philosophical importance of dualities. It really means that an "apparently different collection of basic building blocks and interactions" may be completely equivalent to a "seemingly completely different one". It means that our description of a theory – our way of thinking about the allowed objects and their evolution, their shape in the spacetime, and even the dimension of the spacetime etc. – "overdetermine" the actual physical identity of the theory. Our language – even when it's mathematical language – is too talkative. The physical beef of the Universe is just the language modulo some powerful equivalences and redundancies.

Dualities are clearly important, even at the fundamental philosophical level, and those who keep on assuming that dualities don't exist are just wrong. They assume something that could have been natural to assume centuries ago. But we've learned that it's wrong – just like the Flat Earth is wrong – and the newer picture we have learned is actually more exciting. "The perfectly precise physical equivalence" between "two worlds" whose basic laws look qualitatively different – and aren't related by any obvious enough field redefinition – could have looked "infinitely unlikely" before string theory but we know that such things are omnipresent so they just can't be assumed to be infinitely unlikely!

Some dualities are considered a great playground by pure mathematicians, such as mirror symmetry, but this blog post isn't about "achievements" but about "paradigm shifts in physics", so I won't dedicate special sections to the "ways how string theorists have impressed pure mathematicians" here.

In quantum gravity, the maximum information doesn't scale with the volume.

It scales with the surface. The black hole entropy is proportional to the event horizon's surface. You just can't pack information at a fixed volume density into excessively large regions. The black hole entropy was known to be proportional to the surface in the 1970s, before it was derived from string theory. But string theory has confirmed the formula and has added more explicit pictures, especially the AdS/CFT correspondence, that make it clear that at least in some contexts and descriptions, the information about the whole region in quantum gravity "lives on the surface" and looks rather local on the surface.
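To get a feel for the numbers, here is a back-of-the-envelope Bekenstein-Hawking evaluation in Python (rounded SI constants; an order-of-magnitude sketch): the entropy is proportional to the horizon area, i.e. to \(M^2\), so doubling the mass quadruples the entropy rather than doubling it.

```python
import math

# Rounded SI constants
G = 6.674e-11        # Newton's constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
hbar = 1.055e-34     # reduced Planck constant [J s]
k_B = 1.381e-23      # Boltzmann constant [J/K]
M_sun = 1.989e30     # solar mass [kg]

def bh_entropy(M):
    # Bekenstein-Hawking: S = k_B * A / (4 * l_P^2), with A = 4*pi*r_s^2 and r_s = 2GM/c^2
    r_s = 2 * G * M / c**2
    area = 4 * math.pi * r_s**2
    l_P_sq = hbar * G / c**3     # Planck length squared
    return k_B * area / (4 * l_P_sq)

s1, s2 = bh_entropy(M_sun), bh_entropy(2 * M_sun)
print(f"{s1:.2e} J/K")   # ~1.5e+54 J/K for a solar-mass black hole
print(s2 / s1)           # 4.0 -- entropy scales with area, i.e. with M^2
```

That quadratic scaling in the mass – rather than a cubic scaling in some linear size – is the numerical fingerprint of the area law.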

To make it brief, string theory has been rather essential to realize – and make explicit – all the ideas that we call the holography of quantum gravity.

Black holes look like qualitatively different, large "beasts" that differ from the elementary particles. But string/M-theory has shown us that the black hole microstates – there are many microstates because the black hole entropy is large for a large black hole – are nothing else than the "very massive" counterparts of elementary particle species.

The qualitative difference between an electron and a black hole could have looked – and arguably did look to most people – "obvious" but we already know it's wrong. Even if one found a theory of quantum gravity that is totally different from M-theory or type I/IIA/IIB or heterotic string theories, it would almost certainly be true that elementary particles, including the graviton, are just light siblings of the exponentially many black hole microstates describing heavy black holes.

We have learned a lesson here. We know how to interpolate between particles and black holes in various examples. We will never go back. When we were mundane QFT theorists, we thought that a realistic theory of quantum gravity required us to make two kinds of objects – black holes and elementary particles – peacefully co-exist. We know it was wrong: there is no qualitative difference between the two, and a promising theory produces both kinds of objects simultaneously, from the same underlying material and laws.

All dimensionless parameters are ultimately determined in quantum gravity, unless there are exactly massless scalar fields.

The Standard Model has some 30 parameters, the MSSM has about 105. We got used to the parameters in mundane QFT. It looked like we could pick the spectrum and then we could also adjust the masses, mixing angles, and the renormalizable couplings, among a few others. And we also had the tendency to disfavor effective QFTs with many parameters – an Occam's-razor instinct. But string/M-theory shows that this is wrong. The couplings are ultimately all determined in quantum gravity, and if they're not, there has to be a modulus, an exactly massless field that causes a new long-range "fifth force" and that violates the equivalence principle (so this possibility is heavily disfavored experimentally).

So the "freedom" to adjust the parameters – which looked like the final answer in mundane QFT – is actually an illusion in quantum gravity. Perhaps because quantum gravity, like string/M-theory, must negotiate the peaceful co-existence between the black holes and the light elementary particles, it imposes new constraints and those imply that the allowed values of the dimensionless parameters are discrete.

This change of thinking also means that it is utterly irrational to disfavor effective QFTs with a higher number of couplings. The number of couplings in a low-energy effective QFT is whatever it is predicted to be (and you should always allow all couplings that preserve the symmetry, consistency, and degree of renormalizability of the theory!). 30 or 105 may look like a lot but the count doesn't measure any "sickness" or "contrivedness" of the theory because these collections of parameters are derivable from a more fundamental viewpoint that in principle has no adjustable continuous parameters.

Again, it's a paradigm shift that is almost certainly correct – moving us from a naive, wrong answer to the mature, correct one – and it's a paradigm shift that largely took place thanks to string theory. Even if you imagined that string theory will be superseded by a different theory, it's very likely that the new theory will in principle determine all the parameters, up to a discrete set of choices. Such string vacua or similar theories clearly do imply low-energy predictions, including the number of "seemingly free but not really free" parameters, and that's why all the people who are "repelled" by a higher number of parameters, or by one number or another, just don't understand the non-fundamental character of effective QFTs.

Topology of the spacetime manifold isn't a good observable.

Before some string theory advances, people already knew that the spacetime was able to get curved – like in Einstein's general relativity. They did believe that in quantum gravity, it was right to imagine the geometry as a "quantum foam" where it oscillates and changes the topology, so many notions may be hard to define. But they only talked the talk – they didn't walk the walk.

Whenever they considered how gravity interacts with itself and other fields, they were actually completely ignoring these warnings that "topology in quantum gravity may be hard" etc. In particular, people were assuming that whatever quantum physics you discuss, you first need to determine some background spacetime's topology, and that gave you some subsectors of the Hilbert space. And in each Hilbert space, you could discuss various states that differed by the fields – and continuous deformations – on top of the fixed topological spacetime background.

We know that this can't work. The "total Hilbert space" simply isn't neatly split into these "superselection sectors" separated according to the spacetime topology. In the Calabi-Yau topological transitions, we know that excitations of one topology may be said to be equivalent to other excitations of a "nearby topology". In the ER-EPR correspondence, a wormhole is equivalent to an entangled black hole pair. The two sides have different topologies but they correspond to the same states.

So even the spacetime topology is a part of the description, a "way of thinking about some physical states", but if the two ways of thinking differ from one another, it doesn't mean that they're not the same states! So one can't uniquely associate topologies to a basis of the Hilbert space. One can't say what is the probability that a generic, chosen state of the Hilbert space has the spacetime topology X or Y. There are many possible answers to such a question. There's no general way to "measure" the spacetime topology, at least not for microscopic (Planckian) objects or highly entangled states.

Also, the spacetime topology may "continuously" change by physical processes, in flop transitions and other critical transitions... A whole discussion of the "emergent character of spacetime" could be added here but I don't want to focus on that important point in this blog post. Again, the emergent nature of the spacetime has been a "lore" for some time but people didn't know how to deal with it mathematically. In string/M-theory, we increasingly know how to convert the "emergent character of the space" into equations.

Supersymmetry is a rare fermionic symmetry allowed in physics.

In Russia, supersymmetry was basically discovered by "mathematicians" who studied some advanced "group theory". In the West, almost simultaneously, supersymmetry (the world sheet supersymmetry) was first discovered by Pierre Ramond when he worked to add fermions to the 2D string world sheet. After Ramond, simple 4D SUSY theories were built by Wess and Zumino. Supergravity theories etc. were added in the late 1970s, the MSSM has been studied since 1980 or so.

Supersymmetry is a new kind of symmetry whose generators are Grassmannian. It is a "moral loophole" in the Coleman-Mandula theorem. Almost all string theorists' preferred models of the real Universe require supersymmetry. The alliance between string theory and supersymmetry is obvious, and so is the importance of string theory for the discovery of supersymmetry (at least in the West).

Also, supersymmetry restricts the maximum dimension of the spacetime – basically because the dimension of the spinor-like representations, needed for fermions, grows exponentially with the dimension but it still has to match the degeneracy of the bosons. M-theory's 11 dimensions, and more subtle and debatable "12 dimensions" of F-theory, is about the maximum. I think that S-theory in 13 dimensions etc. are already extremely problematic and you just shouldn't assume that they're as physical and decompactified theories as M-theory. Even F-theory is already problematic but F-theory's usage for the construction of stringy vacua is a hard science that works (but two dimensions out of 12 are simply not quite decompactified in fully physical vacua, they are a torus). S-theory is not, so far.
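The counting behind this dimensional bound can be sketched numerically (a toy illustration, not a proof): the spinor dimension grows exponentially with \(d\) while bosonic polarization counts grow only polynomially, and in \(d=11\) the books still balance – the graviton's 44 states plus the 3-form's 84 match the gravitino's 128.

```python
from math import comb

def dirac_spinor_dim(d):
    # Complex dimension of the Dirac spinor in d spacetime dimensions: 2^floor(d/2)
    return 2 ** (d // 2)

def graviton_states(d):
    # Physical graviton polarizations: traceless symmetric tensor of the little group SO(d-2)
    n = d - 2
    return n * (n + 1) // 2 - 1

# Exponential (spinor) vs polynomial (graviton) growth of the state counts
for d in (4, 10, 11, 12, 13):
    print(d, dirac_spinor_dim(d), graviton_states(d))

# 11D supergravity: bosons (graviton 44 + 3-form 84) must match fermions (gravitino 128)
bosons = graviton_states(11) + comb(11 - 2, 3)   # 3-form states: C(9,3) = 84
gravitino = (11 - 2) * 16 - 16                   # vector-spinor of SO(9); 16 = SO(9) spinor dim, minus the gamma-trace
print(bosons, gravitino)  # 128 128
```

The exact boson-fermion match in eleven dimensions is the kind of "accident" that stops happening once the exponentially growing spinor outruns every polynomial bosonic count.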

Near the Planck scale, the idea of finitely many local fields is not OK.

People sort of knew it from the beginning – if you study quantum gravity, something prevents you from localizing objects and particles with a better precision than one Planck length or so. String theory makes these guesses quantitative in various ways. Particles can't be quite point-like; they are typically objects such as vibrating strings whose size cannot be smaller than the Planck length. Their internal fluctuations make it unavoidable that their limbs fluctuate at least one Planck length away.

Also, if you study all particle species at the Planck length resolution, you will find infinitely many, like infinitely many excited string modes. At a higher coupling, they're not quite independent because of ER-EPR and other things.

QFTs are ultimately not man-made, and they're connected within a natural theory and its "landscape".

As I mentioned, in mundane QFTs, one thought that physics is an inventors' game. You pick your building blocks – particle species or fields – and their interactions. These are like different car models, separated from each other.

String theory makes it clear that physicists are ultimately discovering, not inventing or constructing, effective QFTs. QFTs compatible with quantum gravity form a particular set that looked "unlimited" when people were in the naive mundane QFT stage. But by now, physicists have analyzed a lot of physics and we arguably know "a big chunk of the allowed effective QFTs".

Their particle spectrum and the qualitative characteristics of the interactions aren't something you can really choose freely. For some choices, there may exist no vacuum of quantum gravity that allows the particle spectrum or some forms of the potentials and other interactions. Such QFTs forbidden within quantum gravity are referred to as the swampland.

The allowed theories are actually created by Nature – they are solutions to some fundamental equations. They are long-distance limits of string/M-theoretical vacua. A theory with certain properties at low energies may exist or it may refuse to exist. The answer isn't up to you. There are deeper laws and the "man-made construction" of a QFT is just a "guess" about what a limit could look like.

In the mundane QFT phase, people were always "bottom-up model builders", assuming that they had the freedom to build the theory in any way. They actually knew that the low-energy laws were just derived from more fundamental ones – by taking the limit or by the RG flows. But as in other cases, they talked the talk but didn't walk the walk. Now we know that we have to walk the walk. We are really forbidden from considering some type of effective QFTs in quantum gravity.

And some pairs or sets of QFTs may have looked equivalent at low energies – but they may still be limits of string vacua that differ at higher energies. The similarity or equality of two low-energy QFTs is therefore "an illusion", the underlying string vacua may be "very far from each other" according to some relevant measure at Planckian energies. People always knew that long distances were "derived" but they often thought as if the low-energy QFTs were "fundamental", anyway. We already know it is wrong to do so.

The choices of the QFT spectrum result from a geometric picture that should be looked for; that picture doesn't have to be unique, but its properties may clarify special properties of the QFT.

In mundane QFT, the particle spectrum was an arbitrary man-made input. In string theory, the spectrum is derived from the modes of strings and other building blocks that propagate and co-exist (with branes, fluxes etc.) in some higher-dimensional geometry.

Even if we don't know what's the right stringy geometrization of our favorite QFT, like the Standard Model, we know that such pictures exist and they're almost certainly more fundamental. Also, they explain some special "accidents" in a QFT. For example, the decoupling of two sectors in a QFT may be due to the geometric separation of the excitations in the direction of extra dimensions, e.g. in a braneworld where the sectors arise from non-interacting brane stacks.

You can still imagine that the choice of the fields and interactions is a "man-made process based on the human freedom" but if you do, you're simply not at the cutting edge of theoretical physics. If you keep on making this "man-made" assumption, you're like Kepler who identified the planets with Platonic bodies, assuming that such guesses could be right. But in our new stringy pictures, we really realize that random guesses like that have no a priori reasons to be right. Either you have some experimental evidence or it's just a silly, unlikely guess. The likely guesses are those that may arise from a natural – complete or incomplete – UV-completion of the QFT, from a string compactification.

In particular, the Standard Model starts with a choice of the \(SU(3)\times SU(2)\times U(1)\) gauge group. Like the spacetime topology, it's a "first choice" and everything else has to adapt to it. In the new stringy picture, we know it's not the case. The gauge group is derived from the geometric properties of the compactification as well. And the choice of the gauge group isn't fundamental. After all, the gauge group isn't a real symmetry because physical states have to be invariant singlets.

Heterotic string theory allows one to interpolate between \(E_8\times E_8\times U(1)^2\) and \(SO(32)\times U(1)^2\), to mention a cool example. The groups have the same dimension and the same rank but they are clearly different. Just like we can gradually change the spacetime topology, we can deform the compact spacetime dimensions so that the relevant low-energy gauge group changes from one to another. Also, we know that the Yang-Mills gauge groups may be continuously connected to the diffeomorphism symmetry of GR – like in the Kaluza-Klein construction. But string theory gives us lots of new constructions that show the "sibling status" of Yang-Mills symmetries and general covariance. We actually know that these things, again thought to be rather separate choices in the past, are connected aspects of the same underlying substance.
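The "same dimension and same rank" claim is a trivial but reassuring arithmetic check (a short Python sketch – a necessary condition for the two theories to sit on one connected moduli space, not a proof of the interpolation):

```python
def so_dim(n):
    # dim SO(n) = n*(n-1)/2 antisymmetric generators
    return n * (n - 1) // 2

E8_DIM, E8_RANK = 248, 8   # dimension and rank of the exceptional group E8
SO32_RANK = 32 // 2        # rank of SO(32)

# Both heterotic gauge groups have 496 generators and rank 16
print(2 * E8_DIM, so_dim(32))    # 496 496
print(2 * E8_RANK, SO32_RANK)    # 16 16
```

(The extra \(U(1)^2\) factors on the circle compactification add 2 to both the dimension and the rank on each side, so the match survives.)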

Second quantization of fields isn't the only way to describe multi-particle states.

Relativistic quantum mechanics requires quantum fields and their creation and annihilation operators automatically allow antiparticles and multi-particle states. That was a great insight around 1930. In the mundane QFT stage, people thought it was the only way to get or describe the theories with multi-particle states.

But we know it's not the only one now. In particular, matrix models allow the description of composite systems in terms of "block-diagonal matrices" in some theories whose degrees of freedom are large-size matrices. The BFSS paper is the simplest example of an equivalence between such a non-gravitational matrix model and something that should look like an effective QFT at long distances – namely 11D supergravity. The BFSS matrix model was the first complete definition of M-theory at all energies.

People could have always said that "the fundamental theory of our Universe doesn't have to be given by a strict QFT" but they didn't know how to reproduce all the realism and advantages of a QFT by a non-QFT, or something that explicitly avoids the "man-made construction of multiparticle states" by simply combining creation operators in a chain. Matrix models allow N-particle states to co-exist for many values of N.

So we really know that the QFT apparatus isn't even needed to get multiparticle states in a relativistic theory – we have other descriptions that achieve the same goal without explicit chains of creation operators. The inevitability of QFT as a framework for the multi-particle states has decreased or disappeared.
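A toy sketch of the block-diagonal idea (Python with NumPy; this is a cartoon of the BFSS philosophy, not the actual matrix model): two widely separated "particles" are encoded as two Hermitian blocks of one larger matrix, and the eigenvalues cluster into two groups – a two-particle state without any chain of creation operators acting on a Fock vacuum.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_block(n, center):
    # A toy "particle": an n x n Hermitian block whose eigenvalue cloud sits near `center`
    H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (H + H.conj().T) / 2
    return H + center * np.eye(n)

# Two widely separated particles = one block-diagonal matrix in a U(5+3) toy model
X = np.zeros((8, 8), dtype=complex)
X[:5, :5] = particle_block(5, center=+50.0)
X[5:, 5:] = particle_block(3, center=-50.0)

# The eigenvalues split into two clusters -- the single matrix "contains" a 2-particle state
eigs = np.sort(np.linalg.eigvalsh(X))
print((eigs < 0).sum(), (eigs > 0).sum())  # 3 5
```

The off-diagonal blocks, set to zero here, would describe the (suppressed) interactions between the two clusters; turning them on smoothly connects the different particle numbers within one Hilbert space.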

Summary

This list is surely not complete and some important entries are missing – and I will realize some of them an hour from now. But there simply are entries like these which show that our assumptions about how we should proceed in QFTs, what is natural in QFTs, and whether QFTs are necessary at all were simply wrong – once you try to construct complete theories that incorporate quantum gravity. String theory has shown that lots of things previously considered impossible or extremely unlikely are actually possible if not omnipresent. Some unnatural things became possible. Other, previously possible things are banned.

So we know that to stick to the old picture means to be attached to something analogous to the medieval prejudices about science. There were vague reasons to believe those prejudices in the mundane QFT stage. But the research into string/M-theory has simply falsified them – much like the Flat Earth has been falsified. Our modern stringy proofs that settle these questions are much more reliable than the vague guesses that have led to the old answers – many of which are believed to be incorrect now. So if you're a competent theoretical high-energy physicist as of 2019, you simply need to know the modern answers obtained with some very explicit and indisputable evidence – and you have to abandon the prejudices that used to be justified by sloppy evidence and that have been proven wrong.

The idea that physicists will "return" to an epoch in which string theory and its lessons may be ignored is as childish as the idea of a "return" to the Flat Earth. Science just doesn't work like that. Even in the absence of some "characteristically stringy empirical evidence", string theory has brought us proofs of many important, even philosophically game-changing statements that have falsified some incorrect hypotheses for good.

The falsification of those old expectations – and this falsification had the form of a nearly rigorous mathematical "disproof" – cannot be undone. Falsification can never be undone. And the fact that no new experiments were needed for the advances changes nothing about the irreversibility of the disproofs whatsoever. So unless you undergo a lobotomy or burn all books and web pages that carry the knowledge about string theory, it's just impossible to "unlearn" string theory. Anyone who suggests that top theoretical high-energy physicists of 2025 could work on something that denies the whole history and lessons of string theory is completely detached from any kind of rational thinking about science, and you should never assume that such people are "equivalent" to actual physicists because they are not.