Smolin guesses the DNA of physical law

In a new paper, Smolin sketches a mechanism by which the laws of physics can be selected naturally and thus evolve.

He outlines a universal or "meta" law from which many possible versions of physical law can emerge, depending on the start-up: on what happens as a new region of spacetime and matter is initiated (possibly by budding from a prior region).

The conjectured universal law is general enough that regions with different dimensionality, different spacetime geometry, and differently behaving matter can emerge, depending on initial conditions.

"A simple cubic matrix model is presented, which has truncations that, it is argued, lead at the classical level to a variety of theories of gauge fields and gravity. These include Yang-Mills theories and background independent theories of connections. The latter includes Chern-Simons theory in d=3, and BF theory and general relativity in d=4. General relativity coupled to Yang-Mills theory for any SU(N) may also arise from quantum corrections.
On the basis of these results we conjecture that there are large universality classes of cut-off gauge and gravity theories, connected by transformations that mix up local and spacetime symmetries. If our universe is described by one of these theories then the question of the choice of the laws of physics is to a large extent subsumed in the problem of the choice of initial conditions in cosmology."

"It used to be widely believed that the search for the unification of the known interactions and particles within quantum theory would lead to a unique theory, knowledge of which would lead to explanations for the gauge and symmetry groups, representations and parameters of the standard model and predictions for future experiments. Instead, string theory, the most developed approach to such a unification, appears to lead to a vast landscape of equally consistent theories[1, 2], at least perturbatively, while non-perturbative approaches to quantum gravity also show few constraints on matter coupling[3].

There are roughly speaking two factors that may go into an explanation of why particular laws are selected from a landscape of possible laws: statistical considerations such as the Anthropic principle[4] and dynamical principles such as proposed in cosmological natural selection[5, 1]. There are several arguments, given in detail in [6], that lead to the conclusion that statistical considerations alone cannot yield predictions that are verifiable or falsifiable. The many recent attempts to achieve predictions from some version of statistical or anthropic considerations on the landscape have not contradicted this. This means that any approach to a landscape of theories that leads to verifiable or falsifiable predictions must be based on a dynamical mechanism for selection of the laws that apply to our universe.

Thus, a list of possible theories is not enough, there must be processes that allow the choice of laws to evolve as the universe does. Thus it appears that to do physics on the landscape we require a meta-theory that governs how theories evolve in time."

This sounds fairly sound IMHO, except I don't know what he means by "meta-theory". That sounds a little suspect. But maybe he means that at some point the degree of speculation involved in randomly selecting a meta-theory becomes insignificant, so the arbitrariness is not an issue; in that case I think it sounds plausible.

I like the sound of this, but the way I imagine the evolution of laws, even the meta-laws governing this evolution would evolve. This is IMO the true spirit of "background independence", because it has become clear that there are different degrees of background dependence. Laws fixed as a sort of abstract background (not necessarily a background SPACE) do not comply with my idea of background independence.

The problem seems to be that even if you consider the laws to evolve in a "space of laws", if THOSE rules of evolution are again fixed, we commit the same mistake again. When is this going to end? Here I think that this entire view is put relative to an observer; the constraints of the observer put a natural limit to this. Whatever landscape we come up with, it's constrained by the observer. The landscape exists only in the microstructure of the observer. The point is that at some point the questions we ask aren't distinguishable, and hence that becomes the regularisation.

The models we've seen so far always tend to be iterative, and the iteration itself seems to lack physical interpretation. It seems to be an artifact. Maybe we could transform our modelling so that the model's expansion (= learning) coincides with the physical time evolution? THIS is what I personally think is a good lead. It would also resolve the issue of the arrow of time, and it may explain the relative visibility of the arrow of time in different views.

So, a true unified theory should transcend and encompass all changes of all kinds within the universe (ex: the dynamics of dynamics of ... etc)?

The way you describe it, one would suspect that a theory that encompasses all possible changes of all kinds would be a massive super-theory -> landscape problems. I think that is Smolin's point - it isn't a good idea.

That's also very unphysical since the very theory here must live inside a real observer, or a group of observers. Anything else doesn't describe anything useful IMHO.

Which means that the problem becomes that of optimising the theory and the observer for fitness in a given unknown environment. The complexity of the theory is bounded. Yes, a complex massive theory can potentially be better, but it also requires a more complex observer. This might suggest that the laws of nature get simpler as the observers do, and it also means that all possible questions get bounded.

I think this would be a radical rethinking, and it takes us yet one level away from realism, more so than ordinary QM did. I wouldn't rule out Smolin's thinking. I'm curious to see what he comes up with.

Arivero informs us that Lubos Motl attacked this paper shortly after it appeared.

Has Smolin ever written the same type of personal stuff about Lubos, so that this qualifies as defense?

Otherwise I'd think anyone is doing himself a disservice by these types of writings. Seriously, who is impressed by someone pointing out how stupid someone else is, without showing the slightest sign of doubt in his own position? I thought Smolin was trying to find new ways, and to do so, taking a few steps back seems relevant.

Nevertheless, these blogs are entertaining. Maybe the entertainment industry is the future of physics :)

I found the Smolin article by reading Lubos's blog. What I liked about the article was the suggestion of using quadratic equations. As it turns out, this fits in well with the density matrix stuff I do.

Let (I,T) be the (weak hypercharge, weak isospin) quantum numbers of the elementary fermions. Actually this works for the rest of the elementary particles as well, but my primary interest is in the fermions. This is a collection of points on the plane:

Can these points be described as the solutions of some coupled quadratic equations? That is, can we write quadratic equations in I and T whose solutions are these? (And not a whole lot of other things we don't want.) The answer is yes!

Let J and K be complex numbers that we add to I and T. Then the above set of quantum numbers are the solutions of the following coupled quadratic equations:
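The specific coupled equations and the figure from the original post are not reproduced here. But as an illustration of the underlying point set, here is a sketch of the (weak hypercharge, weak isospin) points for the first-generation fermions, assuming the common convention in which the electric charge is Q = T + I/2 (the Gell-Mann-Nishijima relation with I the hypercharge and T the third component of isospin):

```python
# The (weak hypercharge I, weak isospin T) quantum numbers of the
# first-generation elementary fermions, as points on the plane.
# Convention (an assumption of this sketch): Q = T + I/2.
fermions = {
    "nu_L": (-1.0,  0.5),
    "e_L":  (-1.0, -0.5),
    "e_R":  (-2.0,  0.0),
    "u_L":  ( 1/3,  0.5),
    "d_L":  ( 1/3, -0.5),
    "u_R":  ( 4/3,  0.0),
    "d_R":  (-2/3,  0.0),
}

def electric_charge(I, T):
    """Gell-Mann-Nishijima relation in this hypercharge convention."""
    return T + I / 2

charges = {name: electric_charge(I, T) for name, (I, T) in fermions.items()}
# e.g. charges["e_L"] is -1.0 and charges["u_L"] is 2/3
```

Any quadratic equations in I and T picking out exactly these points would have to be consistent with this charge relation; the concrete equations themselves are left as in the original post.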

And regarding Lubos, I don't care much how insane you are, or how brutal you are to yourself or others. So long as you've found one beautiful fact about physics, I will paper over those differences as best I can. And I think that Lubos has found that beautiful fact, in this paper (see the description of "tripled Pauli statistics"): http://arxiv.org/abs/gr-qc/0212096

"This means that any approach to a landscape of theories that leads to verifiable or falsifiable predictions must be based on a dynamical mechanism for selection of the laws that apply to our universe."

I must be missing something, because this sounds like a reiteration of what physics has stood for since its discovery.

There is something profoundly new here. It is aimed at injecting a DARWINIAN insight into physics.

To understand this you have not only to look at the recent paper but also see the earlier papers of Smolin that he refers to.
===========================================

What Smolin attempts to sketch out in this paper is a GENETIC CODE for physical law. When a baby universe (or region) buds off from a mother region and begins expanding, in what form does it inherit physical laws and its constitution of spacetime and matter? Gradual random mutations occur along the way as well. This is what Smolin means by initial conditions.

If the act of budding corresponds to the formation of a certain type of black hole in the mother region, then those versions of the law will dominate which are optimized for the production of that kind of black hole.

The significance of the paper turns on the key issue of verifiability/falsifiability.

Smolin is saying that if we can come up with a UNIQUE unified theory of space time and matter with no vast landscape of different possibilities then FINE!!!

But if the only theoretical framework we can discover has millions of possibilities, with no apparent explanation of why the laws/constants we see should be those particular ones, then we are back in Darwin's situation:

he saw all these complex elegant lifeforms, species, that looked like they were designed nearly perfectly to fill their niches,

and he came up with a mechanism for generating complex patterns that look like they were designed, but weren't.

Basically the mechanism was natural selection by reproductive success. Smolin is setting things up so he can apply the same idea to explain space time matter and natural law.

If you think this is insane, there is no second issue. But I can imagine that either you like it, or you simply don't see how this can possibly be of any use to physics.

Personally I like it, and I can definitely see the beauty and the potential in this. If Smolin is way off chart with this, then so am I because I like his reasoning and I see similarities with my personal views.

So if you like it, the second issue is the next step - whether Smolin's more specific suggestion is the right realisation of this? Here I am not sure I follow him. But I get the impression that he is "experimenting" with *possible* realisations of his basic idea? In this sense I can't help but like it, even if I don't understand, or disagree on, the second point. I suspect he is searching, or maybe thinking out loud, motivated by a basic instinct about which general direction to go.

How many here sort of like his sentiment, granted that it's open for interpretation - regardless of whether the realisation he attempts is right? I'm curious.

Stephen Baxter has already posited that the universe evolves (in regard to the fine-tuning problem). Throwing in a formula or two is not a lot more rigorous.

Creationists believe that evolution is false. Evolutionists believe otherwise, obviously. Extrapolating this contrast from just carbon-based systems to all systems (including the all-encompassing one known as the universe) is not really a far stretch, intellectually anyway, especially when it's been done before. In fact, it would be somewhat unscientific to deny that the extrapolation beyond just organic chemistry is a necessity.

I'm not aware of Stephen Baxter, and on first reading I don't follow Smolin's logic that led him to come up with the cubic action. But OTOH I'm not sure there is a profound one beyond "pick a simple model and see what it gives", but that may be a way to create yourself some references. That's fine with me.

My personal expectation of a realisation in line with Smolin's supposed sentiment is that the dynamics of the configuration is not separable from the dynamics of the degrees of freedom. I.e., I expect a more elementary origin of "action" itself, one that can be traced to the point where the concept of action becomes trivial. I suspect that the process that has selected the degrees of freedom also carries with it an action measure. In a certain sense I'm not sure the "action formulation" is the cleanest way.

I'm trying to see the common denominators of Rovelli, Penrose and perhaps Smolin. IMO they all at least partially share something, although on the surface it may differ. But I see no reason why these ideas shouldn't be able to converge.

So, I have noticed Smolin proposing this "evolving universes" idea in several of his writings. Every time I have noticed Smolin mentioning this, he does not seem terribly enthusiastic about it-- his tone usually seems to be "this is an idea I keep thinking is interesting, and I don't really take it very seriously, but by the way I wish people would mention I had this idea in 1995 when they talk about the history of anthropic/multiverse reasoning" :)

But, what I am trying to figure out is whether this research is really in the same line as his previous "evolving universes" idea? Your OP makes the new paper here sound a lot like the old "evolving universes" stuff, with the difference being that he gives a specific way in which a universe could be "described" and that this would act like DNA. (I assume this to be the state space of the cubic "meta-theory" he proposes.) However reading the paper, although I don't think I'm qualified to understand all of it, it seems like his paper entirely talks about this cubic matrix model "meta-theory" and doesn't discuss the evolutionary idea at all. May I assume that the reason why you talk about this paper using the evolutionary language in your OP is because there is a connection you see between this "meta-theory" stuff and Smolin's older evolutionary-universe stuff?

Here's what I understand of where Smolin was up until now: Smolin's previous "evolution of the universe" idea involved proposing a theory where universes could spawn other universes, with each spawn creating a universe whose laws and constants were similar to, though somehow slightly different from, the parent universe. Smolin proposed this in part to explain how certain settings of fundamental constants are picked out given a random starting point-- since even if the "universal ancestor" universe had random physical laws, if you consider the totality of the tree of universes it will naturally be dominated by that branch of universes which happens to stumble upon the general family of laws+constants that maximizes the number of child universes per parent. (For example, if the way in which universes spawn other universes is that bubble universes occur within black holes, then we would expect our universe's laws to be somewhere close to a maximum for the number of individual black holes created over the universe's lifetime.) This principle would allow one to identify some set of the phase space for possible universes as preferred, while retaining maybe slightly more predictive power than one would be given by concepts like the "principle of mediocrity". Does that sound about right?
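The selection dynamics described above can be sketched as a toy simulation. This is my own illustration, not anything from Smolin's papers: the "fitness" function, its peak at x = 0.7, and all the parameters below are made up for demonstration purposes.

```python
import math
import random

# Toy cosmological natural selection: each "universe" carries one
# constant x in [0, 1] and reproduces in proportion to a hypothetical
# black-hole production rate peaked at x = 0.7 (an invented fitness).
# Children inherit x with a small random mutation.
random.seed(0)

def black_hole_rate(x, peak=0.7, width=0.1):
    return math.exp(-((x - peak) / width) ** 2)

def evolve(population, generations=30, mutation=0.02):
    for _ in range(generations):
        weights = [black_hole_rate(x) for x in population]
        parents = random.choices(population, weights=weights,
                                 k=len(population))
        population = [min(1.0, max(0.0, x + random.gauss(0, mutation)))
                      for x in parents]
    return population

start = [random.random() for _ in range(200)]   # random "ancestor" laws
final = evolve(start)
mean_final = sum(final) / len(final)
# The population drifts toward the constant that maximises
# black-hole production, even though the starting laws were random.
```

The point of the sketch is only the shape of the argument: no universe "chooses" its constants, yet the ensemble ends up concentrated near the reproduction-maximising value.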

This new paper however seems to be describing a slightly different type of thing. This is the part of Smolin's new paper which seems to describe the idea most succinctly:

"Third, there have been a number of suggestions that physical processes are computations[12]. However, the central result in computer science is the universality of computation, that all computers are equivalent to a universal computer, a Turing machine. Any computer can be simulated on any other computer, by writing an appropriate program. Might it be that there is also a universality class of dynamical theories, any solution of one may be represented by a solution of another by a precise choice of initial conditions?"

...so, since I am a CS person this of course is the part that sticks out to me :) (And I find this a very interesting question to ask, because I've seen a couple of visible attempts [say, Tegmark] to suggest computational models are sufficient to produce the laws of physics, but very few people doing this in a way that seriously asks which computational model is most appropriate-- i.e., are vanilla Turing machines enough?) But this also seems to be asking a different question from his previous evolutionary-universe ideas. In fact he appears to allude to his evolutionary ideas at the top of page 3 (mentioning a multiverse with "a meta-theory that evolves in time") and then suggest this paper is trying to talk about something different.
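The universality analogy can be made concrete with a toy sketch (my own illustration, not from Smolin's paper): one fixed "meta" update rule interprets a transition table, so a particular machine's "laws" are just part of the initial data it is fed-- which is exactly the sense in which a choice of theory becomes a choice of initial conditions.

```python
# A minimal universal stepper: the fixed rule below never changes, while
# the `rules` table (the particular machine's "laws") is supplied as data.
def run(rules, tape, state="A", steps=100):
    """rules maps (state, symbol) -> (symbol_to_write, head_move, next_state)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# One particular "physics": a machine that zeroes 1s rightward until it
# reaches a 0, which it sets to 1 and halts.
flipper = {
    ("A", 1): (0, +1, "A"),
    ("A", 0): (1, 0, "HALT"),
}
result = run(flipper, [1, 1, 1, 0])  # -> [0, 0, 0, 1]
```

Swapping in a different `rules` table changes the dynamics entirely without touching `run`-- the analogue of different theories emerging from one universal law via different initial conditions.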

In this paper, Smolin appears to be suggesting that it is slightly confused to take some theory (say, the laws of our universe) and then ask "what is the meta-theory from which these laws emerged?". His argument instead is that it is reasonable to suggest that any dynamical theory of a certain expressive power qualifies as such a meta-theory, in the sense that properly formed initial conditions can cause that theory to "truncate" (I'm not entirely sure what this means) to the action of one of the other theories in the class. So for example maybe there is some set of initial conditions for string theory which, if you run string theory, produces dynamics that are a perfect simulation of twistor theory, and likewise some set of initial conditions for twistor theory from which emerges the behavior of string theory (I picked these two randomly and I don't know if either would be in Smolin's universality class). Smolin then proposes a specific model, a "cubic matrix" model, to act as the exemplar for theories in this universality class-- like a Turing machine, in the sense that if you show some theory has initial conditions which produce the behavior of the cubic matrix model, you have also shown that your theory can produce the behavior of any other theory in the universality class. Does that sound about accurate?

So if this is Smolin's point, then it seems like when he talks about one theory "evolving" from another within this framework he doesn't mean "evolve" in the Darwinian sense, he means "evolve" in the everyday-language sense of one thing turning into another. Twistor theory could "evolve" from string theory (or whatever) in the sense that twistor theory could emerge from some set of initial conditions in string theory.

Is the connection you are trying to draw in the OP that once we have defined an appropriate "universality class" theory, we can immediately turn around and also use that as a meta-theory for evolution of universes (since it may not be the theory of everything, but it can probably emulate it)? Do you have reason to believe Smolin intends to do this or are you just highlighting this as a possible application?

Or am I missing something altogether...?

Just trying to understand, thanks :)

One more thing, I think Smolin misspelled "Mendel" on page 3. Is there some way to inform him of this? :|