The Landscape and the Emperor’s New Clothes

The String Vacuum Project, described as “a large, multi-institution, interdisciplinary collaboration” established over the last few years, is having its Kick-Off Meeting next month at the University of Arizona. The group has submitted grant proposals to the NSF for funding of such a project in the past, but I don’t know whether it ever managed to get NSF or other funding. They motivate the project by claiming that

Given that relatively large numbers of string vacua exist, it is imperative that string phenomenologists confront this issue head-on…

Bert Schellekens has a web-site devoted to promoting the Anthropic Landscape, where he argues that

The String Theory Landscape is one of the most important and least appreciated discoveries of the last decades.

Besides the web-site, he has slides from two general talks on-line (here and here). In the talks he compares string theorists to the famous Emperor parading in no clothes, except that his target is those string theorists who have been unwilling to acknowledge the existence and importance of the anthropic landscape. He’s critical in particular of

those people claiming that they have always known that String Theory would never predict the standard model uniquely, but that they did not think this point was worth mentioning.

His modernized version of the fable of the Emperor goes as follows:

Many years ago, there lived some physicists who cared much about the uniqueness of their theories. One day they heard from two swindlers that they could make the finest theory which was absolutely unique. This uniqueness, they said, also had the special capability that it was invisible to anyone who was stupid enough to accept anthropic thinking.

Of course, all the townspeople wildly praised the magnificent unique theory, afraid to admit that anthropic thoughts were inevitable, until Lenny Susskind shouted:

“String theory has an anthropic landscape”

It’s not clear who he would identify as the “two swindlers”….

According to Schellekens, the “string vacuum revolution” is on a par with the other string theory revolutions, but most people prefer to overlook it, since it has been a “slow revolution”, taking place from 1986 to 2006. The earliest indications he finds are in Andy Strominger’s 1986 paper “Calabi-Yau manifolds with Torsion”, where Strominger writes:

All predictive power seems to have been lost.

and in one of his own papers from 1986 where the existence of 10^1500 different compactifications is pointed out.

Schellekens claims that “string theory has never looked better”, but he completely ignores the main question here, the one identified by Strominger right at the beginning in 1986. If all predictive power is lost, your theory is worthless and no longer science. What anthropic landscape proponents like him need to do is show that Strominger was wrong: that while string theory seems to have lost all predictive power, this is a mistake, and there really is some way to calculate something that will give a solid, testable prediction of the theory. The String Vacuum Project is an attempt to do this, but there is no evidence beyond wishful thinking that it can lead to a real prediction.

Schellekens has worked on producing lots of vacua and describing them in a “String Vacuum Markup Language”, and in his slides describes one construction that involves 45761187347637742772 possibilities. These possibilities can be analyzed to see if they contain the SM gauge groups and known particle representations, but these are a small number of discrete constraints, and it is not hard to satisfy them. The problem is that one typically gets lots and lots of other stuff, and while one would like to use this to predict beyond-the-SM phenomena, there is no way to do so given the astronomically large number of possibilities.
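To see why a handful of discrete constraints cannot tame numbers of this size, here is a back-of-the-envelope sketch in Python. Only the count of possibilities is taken from Schellekens’ slides; the pass rates assigned to each constraint are invented purely for illustration.

```python
from fractions import Fraction

# Toy arithmetic: even if each of a few discrete checks (right gauge
# group, three generations, correct representations) is passed by only
# a small fraction of constructions, the surviving count stays
# astronomically large.

total = 45761187347637742772  # possibilities quoted in Schellekens' slides

# Hypothetical pass rates for the discrete constraints (invented numbers)
constraints = {
    "SM gauge group factor": Fraction(1, 1000),
    "three generations": Fraction(1, 100),
    "correct representations": Fraction(1, 100),
}

survivors = Fraction(total)
for name, rate in constraints.items():
    survivors *= rate

print(f"{float(survivors):.3e}")  # -> 4.576e+12, still trillions of candidates
```

With these (made-up) pass rates, seven orders of magnitude of filtering still leave trillions of candidate vacua, which is the sense in which satisfying the discrete constraints is “no problem”.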

He lists goals for the future (“Explore unknown regions of the landscape”, “Establish the likelihood of SM features”, “Convince ourselves that the standard model is a plausible vacuum”), but none of these constitutes anything like a conventional scientific prediction that would allow one to test to see if what one is doing has any relation to reality. In the end, he comes up with the only real argument for the String Vacuum Project and other landscape research, that of wishful thinking:

… and maybe we get lucky.

Update: There’s a story about the String Vacuum Project in this week’s Nature by Geoff Brumfiel. It includes skeptical comments from Seiberg and yours truly, as well as Gordon Kane’s claim that:

evidence supporting string theory could emerge “within a few weeks” of the [LHC]’s start-up.

Update: At the blog Evolving Thoughts, there’s a discussion of whether theoretical physicists have now taken up a “stamp-collecting” model of how to do science. I point out that this is stamp-collecting done by people who don’t have any stamps, just some very speculative ideas about what stamps might look like.

70 Responses to The Landscape and the Emperor’s New Clothes

The point has been made many times already, but I guess it has to be made again that even with the vast landscape of vacua in string theory, this does not mean that all predictive power is lost. All one has to do is find the particular vacuum which describes our universe, and it will allow an understanding of all phenomena that we can observe. I might add that this is the end goal of the String Vacuum Project.

Tim,
At least 99% of the 10^500 possible vacua are complete garbage and can be ruled out easily. Thus, the regions of the landscape in which realistic vacua may arise are limited. If string theory is the right theory, then at least one of these vacua is our universe and it can be found with a focused effort.

Strominger’s point back in 1986 was that predictivity is lost because, at the accuracy to which you can calculate anything, finding a vacuum state that agrees with the SM is useless for predictions since there will be lots of them. You can’t use any of them to make real predictions of new phenomena, because if experiment disagrees with your “prediction” of something new, you just pick a different state.

If you think Strominger is wrong, you need to come up with a well-defined proposal about how you are going to make a real, testable, convincing prediction. No one has done that because it can’t be done. The argument being made by the SVP people seems to essentially be: let’s just compute lots of stuff, and maybe a miracle will happen, something will pop out that looks just like new things we see at the LHC. There’s no way to prove this is impossible, but it’s pure wishful thinking, not science.

Peter,
This may have been the situation back in 1986. However, today we have better calculational tools and we have a much better understanding of how to calculate gauge couplings, Yukawa couplings and so on. Central to this issue is moduli stabilization which had not even been addressed in 1986. The requirement that all moduli be stabilized is actually quite difficult to achieve. However, if someone constructed a model with the standard model in the low energy limit with absolutely all moduli stabilized then one would have to take it very seriously. Indeed, if all moduli are stabilized then the values of all couplings are fixed and can be compared with experiment.

The fact of the matter is that you couldn’t calculate Yukawas reliably in a realistic model back in 1986, you still can’t, and there is no realistic prospect of being able to do so in the foreseeable future, or ever for that matter. The calculations the SVP people are doing are of things like the gauge groups and representations of low-lying states. Here’s a recent quote from Mike Douglas:

“none of us believe we will be able to compute these masses from first principles in any model anytime soon”

Peter,
Regarding your statement about the Yukawas, I beg to differ. We most certainly know how to calculate them in Type II compactifications. For example, see hep-th/0302105. Again, the problem is uniqueness which brings us back to the moduli stabilization issue.

The number of string vacua most commonly quoted is 10^500, so Bert’s 10^1500 was wrong by 1000 orders of magnitude. No doubt, it secures him a place in the history of physics.
It is really unbelievable that anybody can take SVML or the idea of “classifying” or “studying” all these vacua seriously. Just wait and see who shows up in Tucson…

I think Mike Douglas actually does know what he is talking about on this issue. Perhaps you can try arguing this with him and let us know the results.

The paper you point to gives nothing like a realistic model in which the low-energy Yukawa terms that determine particle masses can be calculated reliably, to an accuracy that would allow confrontation with experiment. This was the case for the compactifications and attempts to compute Yukawas considered back in 1986, and this hasn’t changed.

Peter,
Douglas’s statement about not being able to calculate masses from first principles is not exactly equivalent to saying one cannot calculate Yukawa couplings. I agree that it’s unlikely that we’ll be able to calculate the exact number for the mass of the electron anytime soon. However, we can calculate the ratios of the masses of the elementary particles to each other.

Peter said that he does not “… know if they [String Vacuum Project] ever managed to get NSF or other funding …”.

The web site for the SVP “… Kick-Off Meeting … at the University of Arizona …” says in part:
“… Some NSF support is available for participants … This meeting is supported in part by the US National Science Foundation …”
so
it seems that the NSF indeed has deemed SVP to be worthy of funding, and has awarded funds.
That should not be surprising, considering that the three “Meeting Organizers” are listed on that site as
“Keith R. Dienes, University of Arizona
Gordy Kane, University of Michigan
Stuart Raby, Ohio State University”
all of whom are very prominent members of the USA theoretical physics establishment.

What’s not clear to me is whether the NSF funding for this conference is new funding specifically for the SVP, and that’s what the “kickoff” is about, or whether this is just being funded out of an older grant of one of the organizers.

My understanding was that it’s slightly misleading to claim that Yukawa couplings can be calculated, at least in traditional heterotic compactifications: although one can evaluate the integrals on particular (very symmetric) Calabi-Yaus, to compare with experiment one needs to do a field redefinition so that the kinetic terms are canonical, which one cannot do without knowing the metric on the Kahler moduli space. And the latter is not protected by supersymmetry. As far as I understand, not much is known about these metrics, except that in certain cases they must be quaternion-Kahler, and that various people are working on ways to calculate the quantum corrections more efficiently (Rocek for example).

So it seems that even in the best cases, where the CY is symmetric enough to calculate some Yukawas (or at least show that many of them vanish), one can’t calculate particle masses or ratios thereof.

So Eric, I am just trying to understand– when Schellekens says on his website that the String Theory landscape “implies a fundamental non-uniqueness of the gauge theory underlying the Standard Model”, do you disagree with this statement?

Another Anon,
I’m not that familiar with Yukawa couplings in heterotic models. However, from hep-th/0601204 it does appear that one may calculate them. In the case of type IIA compactifications, there is certainly no difficulty in calculating them, at least for toroidal orientifolds.

Coin,
At present, the SM gauge group does not appear to be uniquely singled out. It is actually quite possible that the physics of our universe arose by random choice, but at this point nobody really knows. I don’t think string theory in its current form can answer this question.

“I would like this [a unique vacuum] to be true, but scientists are supposed to be immune to believing something just because it makes them happy.”

In other words, your insistence on a unique vacuum, or a theory of everything with a unique solution, is, well, wishful thinking. We don’t need Polchinski to note this, but I find it amusing that he points it out so clearly.

While we are at it, what are your reasons for believing in a unique theory with a possibly unique solution as the right description of our world, other than wishful thinking? Nothing short of knowing the future will make your arguments anything other than wishful thinking.

I have no idea whether an ultimately successful unified theory will have a unique vacuum or not. But I do know that a successful scientific theory will make distinctive testable predictions that will allow us to figure out whether it is right.

My only claim is that the multiverse, the string anthropic landscape, etc. is not science, but merely an excuse for failure. It predicts nothing, and inherently can predict nothing. This is a conventional way in which speculative ideas fail; the only unconventional thing here is that the people involved refuse to admit what has happened.

On the one hand you say that it’s wishful thinking to hope that the string guys will find a vacuum that contains the SM at low energies and on the other hand you say that the whole approach is meaningless because it’s not predictive.

Make up your mind! Is it just difficult to study all string vacua? Or is it not science because it’s not predictive? These two things are not the same, and you keep jumping back and forth between the two as you see fit.

From the end of the Braun, He, Ovrut paper you refer to:
“Of course, to explicitly compute the quark/lepton masses one needs, in addition, the Kahler potential, which determines the correct normalization of the fields.”
Sure – you can compute on tori and orbifolds. I guess maybe you can also calculate normalized Yukawa couplings on families of CY 3-folds where there is a Gepner point, which gives you an exact solution with which to normalize the fields.

First of all, I don’t know at what level to try and discuss this with you. What exactly is your background in this subject?

You keep ascribing arguments to me that I don’t make, and ignoring the ones I do make. Please read what I actually write and make an honest attempt to understand it.

No, I’ve never argued that the string guys will not find a vacuum that contains the SM. What I have repeatedly argued is that attempts to construct vacua that reproduce even the gross features of the SM lead to such complicated constructions that you lose all predictivity.

It is both difficult to study “string vacua”, since they are so complex (by construction: the simple ones disagree with experiment), and studying them is not science, since there is no reasonable hope of ever getting a prediction out of them. They are the end point of a 25-year-old research program which found that simple versions of the idea don’t work, so you have to go to more and more complicated ones, never actually getting any closer to a real scientific prediction. At the end point you have a complicated and useless mess. This is often what happens when you pursue wrong ideas; there is nothing unusual about reaching such an endpoint. All that is unusual is the refusal to admit failure.

As a regular reader of your blog, which I find very useful for the information content, (from your blog I learned about Resonaances, from where I learned about the CERN BSM Institute, which enabled me to visit CERN for a week last summer), I have to say that I understand Trent’s point of view.

I am sure you will agree with me that whatever the true fundamental theory of physics is, it must lead to an extraordinarily rich range of phenomena, from simple and economical foundations. And M-theory has done exactly that, to a degree that is unprecedented and completely unrivalled, and indeed was previously unimagined. Moreover, arguments have been put forward that somewhere among the vast number of effectively isolated vacua in the “landscape”, there will be some that match the physical world we live in.

If, in fact, we do live in one such effectively isolated M-theory vacuum, then our task must be to find out which of these vacua we live in, starting with coarse characterizations, such as, does it have TeV-scale gravity or not? Is it based on Horava-Witten theory or type IIB or type IIA? Is the compact six-manifold Calabi-Yau, or is it some other type of compact spin six-manifold? Answering these questions, by means of input from both theory and experiment, and pinning down further details of the vacuum we live in, might then help us to answer questions of fundamental practical significance. For example, will it ever be possible and practical to build a space vehicle that can accelerate to about ten percent of the speed of light over the course of a year, travel to the nearest star outside the Solar System over the course of about fifty years, and decelerate at the other end over the course of another year, without consuming a significant fraction of the material in the Solar System as fuel in the process? And if so, how large will such a vehicle be, and what will it be like?

One of the organizers of the Tucson meeting is Keith R. Dienes, who in his paper arXiv:hep-th/0602286 dealt a major blow to the expectation that the existence of a very large number of vacua means that some of them will have a very small cosmological constant. Among the 10^5 vacua studied in this paper, Dienes found a large degeneracy among the different models with regard to the value of the cosmological constant, so that only relatively few different values of the cosmological constant were realized. Both positive and negative values of the cosmological constant were found, but none of the models had zero cosmological constant, and the smallest magnitude cosmological constant found was about 0.02.

If the type of cosmological constant degeneracy identified by Dienes occurs among other types of M-theory vacua, then even the existence of 10^500 vacua would not be sufficient to guarantee that some of the vacua have cosmological constant around 10^-120, so it would seem to be important and useful to investigate this question for the various types of M-theory vacua.

Some of the different types of M-theory vacua have implications for the LHC. In particular, those with TeV-scale gravity certainly do, and there are a number of different realizations of TeV-scale gravity in M-theory, not just RS versus ADD, since there are a number of qualitatively very different realizations of ADD in M-theory, with distinct signatures for the LHC. Comparing the predictions of as many different such models as possible with the LHC data should help to point in the direction of the correct vacuum.

Thanks for writing, although I disagree with you strongly on several points.

1. We don’t actually know what “M-theory” is, all we know is that there probably is some interesting structure that plays the role of such a theory. The underlying structure may not be either simple or useful for unifying physics using extra dimensions.

2. It’s fine to make the hypothesis that we live in a string vacuum and investigate its implications. But thousands of people have been doing this for nearly a quarter century and the results are in. Simple constructions of such vacua don’t look at all like the real world. To get something closer and closer to the real world, you have to invoke more and more complicated constructions, at no point ever finding something constrained enough to convincingly explain the origin of any features of the SM we don’t understand. This is just a classic example of what happens when you follow a failed speculative idea. Instead of acknowledging this failure, people are promoting a research program based purely on wishful thinking that something unexpected is going to turn up, even though the whole thing looks hopeless.

3. Even if you could show that known vacua can’t possibly give a small enough CC (and I don’t for a minute believe this is possible), this would not cause people to give up on M-theory, just to say that more vacua must be found and investigated.

4. If you want to argue that we need more new ideas about signatures for the LHC to look for, I’d agree. I just don’t see the kind of constructions the SVP is promoting as giving anything new here. I don’t at all buy your idea that the agenda for particle physics in the next few years will be to compare LHC data to complicated string vacua constructions. The hope that the LHC is going to produce lots of deviations from the SM that correspond to characteristics of string vacua showing up all of a sudden at the TeV scale seems to me pure fantasy and wishful thinking.

could you please point me to one reference that defines what M-theory actually is?

Perturbative string theory is the investigation of the implication of replacing in the perturbative expansion of quantum field theory the Feynman diagrams, which are correlators of a 1-dimensional QFT, by correlators of a 2-dimensional QFT.

It turns out that many 2-dimensional QFTs have a close “holographic” relation to 3-dimensional QFTs (the most famous example being the relation between 2-dimensional WZW theory with 3-dimensional Chern-Simons theory).

Generally speaking, “M-theory” is the name given to a hypothetical theory which people seem to keep finding indications for, and which is supposed to be to String theory roughly like these 3-dimensional QFTs are to the 2-dimensional QFTs.

More strictly speaking, whenever you see people actually doing something in what they address as “M-theory”, they are studying the action functional of 11-dimensional supergravity subject to a couple of constraints which generalize the old Dirac quantization constraint of electric and magnetic charges.

Also, whenever you see somebody talk about “M-theory vacua”, he or she is talking about this concrete issue: 11d sugra + quantum constraints.

Possibly the best text on 11d SUGRA is the old textbook by Castellani, D’Auria and Fre, “Supergravity and Superstrings: A geometric perspective”.

The possibly best discussion of those “quantum constraints” is due to Freed, Hopkins, Singer et al.

There is of course (as always) much more that people are considering. But if you know 11d sugra + Dirac quantization as in the above sources, you have a pretty good idea of what most of the stuff referred to as “M-theory” is about.

Urs, surely by now it has occurred to you that many of the anti-M Theory comments on this blog are rather tongue-in-cheek, and one should probably not assume that the writer, whose comments above suggest at least a little familiarity with the literature, is ignorant of the oft-used meaning of the term M Theory as a theory in 11 classical dimensions, a theory which in itself does not satisfy the criteria of the real M Theory.

Sorry for the delay in responding. On the definition of M-theory, I pointed out in the introduction and subsection 2.3.3 of arXiv:0704.1476 that the simplest and most economical hypothesis, namely that on a smooth uncompactified background, M-theory and d = 11 supergravity are the same thing, is consistent with all results in the literature, and moreover appears to be required by the apparent absence from the supermembrane mass spectrum of any states other than the single particle and multi-particle states of supergravity, (section 12 of de Wit, arXiv:hep-th/9902051).

This working hypothesis states that on a smooth uncompactified background, the predictions of M-theory are to be calculated from the CJS theory of d = 11 supergravity in the framework of effective field theory, (which in this case simply means BPHZ renormalization, since there are no massive states to integrate out), and that this will lead to unambiguous results, with all finite counterterms fixed uniquely by Slavnov-Taylor-Zinn-Justin identities, (and thus no undetermined parameters connected with the short distance completion of the theory). The content of this hypothesis, which is consistent with all results in the literature, is thus that the CJS theory does not admit any non-trivial locally supersymmetric higher derivative local deformations.

Deser and Seminara, in arXiv:hep-th/9812136 and arXiv:hep-th/0002241, and Metsaev, in arXiv:hep-th/0410239, constructed a family of \partial^{2n} R^4 linearized on-shell invariants, which if they could be extended to fully non-linear on-shell invariants, would invalidate the above working hypothesis. In d = 4, it was found that such linearized invariants could always be extended to fully non-linear invariants by the Noether procedure, but the existence of the auxiliary field formalisms in d = 4 implies that this must necessarily be so. In d = 11 there is no analogous auxiliary field formalism, (Rivelles and Taylor, Phys. Lett. B121, 37-42, 1983), and the Noether procedure is known to fail if one tries to use a six-form gauge field instead of a three-form, (Nicolai, Townsend, and van Nieuwenhuizen, Lett. Nuovo Cim. 30, 315, 1981). The Deser-Seminara-Metsaev linearized invariants have not yet been completed to fully non-linear on-shell invariants, although partial completions exist, and the above working hypothesis implies that there will be an obstruction to full completion.

Superspace counterterms have been constructed in standard d = 11 superspace by Duff and Toms (in Ellis and Ferrara, eds., 1983, see SPIRES), and Howe and Tsimpis (in arXiv:hep-th/0305129), and if the gauge completion mapping that matches the geometrical transformations in superspace to the CJS supersymmetry variations could be completed up to the necessary power of \theta, which in the Duff and Toms case is \theta^{32}, for a general solution of the CJS field equations, then these superspace counterterms would give fully non-linear on-shell higher derivative invariants for the CJS theory, and thus invalidate the above working hypothesis. However the gauge completion mapping is only known for a general solution of the CJS field equations at first order in \theta, (Cremmer and Ferrara, Phys. Lett. B91, 61, 1980), and partly at second order in \theta, (de Wit, Peeters, and Plefka, arXiv:hep-th/9803209). I suggested in the introduction to arXiv:0704.1476 that an obstruction might exist that prevents the geometrical transformations in superspace from matching the CJS supersymmetry variations for a general solution of the CJS field equations beyond a certain power of \theta. This would mean that the superspace counterterms do not result in locally supersymmetric deformations of the CJS theory, and would thus be consistent with the above working hypothesis. There is already a partial proof of the existence of such an obstruction to gauge completion, in the shape of a discrepancy between a component framework calculation of Hyakutake and Ogushi, arXiv:hep-th/0601092, and the superspace construction of Howe and Tsimpis. 
Howe and Tsimpis found that there should be an independent on-shell superinvariant for each independent Chern-Simons term, of which there are two at dimension 8, namely C \wedge tr(R \wedge R \wedge R \wedge R), and C \wedge tr(R \wedge R) \wedge tr(R \wedge R), but Hyakutake and Ogushi found that only one linear combination of these two terms might occur in a superinvariant, namely the combination which occurs in the bulk Green-Schwarz term. However, there are gaps in this proof, because Hyakutake and Ogushi omitted some possibly relevant terms from their ansatz, namely the third type of term in their equation (4.3), and Howe and Tsimpis have not published the full details of their argument, and it is not clear how firm their claims are. If these gaps could be closed, the existence of an obstruction to gauge completion would be proved.

The way in which the Green-Schwarz action for the type IIA superstring arises from the solitonic membrane of d = 11 supergravity was described by Horava and Witten in subsection 2 (iii) of arXiv:hep-th/9510209, and I reviewed this argument, with some details added, in subsection 2.3.3 of arXiv:0704.1476. That there is no place for a fundamental supermembrane in d = 11, separate from the solitonic membrane, was already shown by Hull and Townsend in section 7 of arXiv:hep-th/9410167. The supermembrane action instead arises from d = 11 supergravity, as the worldbrane effective action of the solitonic membrane, in the double dimensional reduction from which the Green-Schwarz action for the superstring arises.

The above working hypothesis places the type IIA superstring in a classic strong / weak coupling duality relationship with a known, but very special, quantum field theory in eleven dimensions, namely the CJS theory. On ten extended dimensions times a circle of large radius L, the dimensionless gravitational coupling associated with the circle, namely \kappa^{2/9} / L, is weak, the solitonic membrane is dynamically unimportant, the superstring coupling constant, namely L^{3/2} / \kappa^{1/3}, is large, and the dynamics is most conveniently described by the CJS theory, but when L shrinks to become small compared to \kappa^{2/9}, the situation is reversed.
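As a quick consistency check of the exponents, using the two couplings exactly as quoted above, the string coupling is just an inverse power of the dimensionless gravitational coupling:

```latex
g_s \;=\; \frac{L^{3/2}}{\kappa^{1/3}}
    \;=\; \left(\frac{L}{\kappa^{2/9}}\right)^{3/2}
    \;=\; \left(\frac{\kappa^{2/9}}{L}\right)^{-3/2},
```

since \left(\kappa^{2/9}\right)^{3/2} = \kappa^{1/3}. So the string coupling is large precisely when \kappa^{2/9}/L is weak, and vice versa, which is the strong / weak relationship described.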

For M-theory on a non-smooth background, the most interesting case is Horava-Witten theory, and here we can again adopt a working hypothesis that the predictions of the theory are to be calculated from the Horava-Witten action in the framework of effective field theory, with the resistance to deformation of the CJS action preventing the occurrence of undetermined parameters connected with the short-distance completion of the theory. The \delta(0) terms in the action and transformation rules found by Horava and Witten have been avoided in the improved form of Horava-Witten theory proposed by Moss in arXiv:hep-th/0308159, arXiv:hep-th/0403106, and arXiv:hep-th/0508227. A critical test, directly relevant to the LHC, will be to check that the masses of U(1) gauge bosons resulting from Witten’s Higgs mechanism, (Witten, Phys. Lett. B149, 351-356, 1984), work out to be finite and nonzero, since these masses initially appear to be infinite in Horava-Witten theory, as explained near the beginning of section 5 of arXiv:0704.1476.

Not knowing as much about string theory as some of the other commenters here, I am curious. In your posts, you describe M-Theory as simply the study of 11-D SUGRA. This is an interesting way of looking at things, but there is one point that confuses me. Other descriptions of M-Theory that I have seen, and my observations of M-Theory when I hear about people researching it in practice, indicate that much of the actual study of M-Theory is concerned with n-dimensional vibrating objects, where these objects are called “strings” if n=1 and “branes” if n>1.

I can understand why one would choose to use strings if one’s goal is the study of 11D Sugra– since the graviton can be viewed as a string excitation mode, etc– but, if M-theory is as you say just the study of 11D Sugra, then I am confused where the branes come from. In what way do (n>1)-branes follow from this depiction of M-theory as in some way derived from 11D Sugra?

As far as I can tell, Chris is proposing a series of unconventional ideas about M-theory. This topic doesn’t have anything to do with the topic of the posting, I’m no expert on this, and this is not a general physics discussion board. Please, if you want to discuss Chris’s ideas about M-theory with him, contact him directly.

in response to #2 ” It’s fine to make the hypothesis that we live in a string vacuum and investigate its implications. But thousands of people have been doing this for nearly a quarter century and the results are in. Simple constructions of such vacua don’t look at all like the real world. To get something closer and closer to the real world, you have to invoke more and more complicated constructions, at no point ever finding something constrained enough to convincingly explain the origin of any features of the SM we don’t understand. This is just a classic example of what happens when you follow a failed speculative idea. Instead of acknowledging this failure, people are promoting a research program based purely on wishful thinking that something unexpected is going to turn up, even though the whole thing looks hopeless.”

Couldn’t nature really be like this, though? Perhaps nature itself really is the result of one form of string theory, one choice of string vacuum, one compactification among 10^500, one scale of SUSY breaking, and it is simply not fundamentally possible for physics to do any better; perhaps nature is inherently unpredictable.

Sure, it’s logically possible that nature is like this. It’s also possible things are fundamentally just random choices of some Supreme Being. While these hypotheses are logically possible, they’re not scientific hypotheses, because they are inherently untestable. The problem with the landscape is not that it’s logically impossible, but that it appears to be impossible to ever use the idea to make testable predictions. The people who argue for the Landscape seem to me to just be hoping a miracle will happen and somehow a testable prediction will emerge, despite all the accumulated evidence that there is no sign of such a possibility. If your speculative idea not only makes no predictions, but you can’t even come up with a plausible scenario for how further work on it will lead to predictions, it’s just not science.

Although I agree with what Peter wrote, I think there is an additional way that you can look at anthropic considerations as a selection principle. Take, for example, quantum electrodynamics. In QED it is impossible to predict an actual number for either the electromagnetic coupling strength or the electron mass; these are “inputs” to QED. Their values cannot be predicted from within the theory; they must be fixed by measurement as part of renormalization, the procedure used to absorb the infinities that arise in higher-order calculations. If one took the point of view that QED is a truly fundamental theory, then one would be forced to say that there is no a priori choice of the two parameters, and look for other explanations for why they have the values they do.
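To make the “inputs” point concrete: QED does predict how the effective coupling varies with energy, but not its value at any one scale, which must come from experiment. The standard one-loop running (written here for a single charged lepton species) is

```latex
\alpha(Q^2) \;=\; \frac{\alpha(\mu^2)}{1 - \dfrac{\alpha(\mu^2)}{3\pi}\,\ln\dfrac{Q^2}{\mu^2}}
```

so the measured low-energy value alpha ≈ 1/137 at a reference scale mu is precisely the input that the theory itself cannot supply.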

You could propose that the explanation for the values of the electromagnetic coupling constant and electron mass is anthropic: that only certain values would allow us as observers to exist, ask questions, and develop QED. You could then quantitatively analyze all possible choices of these two parameters, and probably find that only certain ranges of values would be compatible with us existing and developing QED. That might give you confidence that you are on the right track, and you might consider your analysis to be evidence that these parameters really are random variables. Then you could end your search for explanations, and just accept that some critical quantities are inherently unpredictable in the final theory; i.e., that’s the way it is, and you just have to deal with it.

However, you could take an entirely different position. You could adopt the view that QED is not a fundamental theory, and that the appearance of a free coupling constant and electron mass just reflects crucial ingredients that are missing from the theory. Then you would keep searching for a better theory, with the expectation that QED would emerge as a limiting case of it, and that the coupling constant and mass would emerge as predictions of the more fundamental theory.

Comparing these two points of view, there is no way to distinguish which one is correct, given that we don’t currently have a theory that predicts the coupling constant or electron mass. All of the anthropic arguments that might look like successes are nothing more than constraints you put on QED; they reflect observations about the world that cannot be incompatible with the theory, but a more predictive theory would need to abide by those same observational constraints (or else it is wrong). The fact that we haven’t yet found a more predictive theory means nothing more than such a theory is not easy to find from our current vantage point (e.g., we may be unwittingly over-constraining the form that we allow a fundamental theory to take, making it essentially impossible to find that theory while remaining compatible with observation).

In my view at least, anthropic arguments are only worthwhile if they reduce the number of possibilities to a finite and small number of possible solutions, and you do not assume that your theory is fundamental. Then you have an interim theory with only a few viable solutions, and you can use it to make predictions that you can subsequently test; the testability makes it science. However, you can’t use anthropic arguments as evidence that your theory is fundamental, since the alternative explanation that you haven’t yet found the more predictive theory still remains.

re: “…Take, for example, quantum electrodynamics. In QED it is impossible to predict an actual number for either the electromagnetic coupling strength or the electron mass; these are “inputs” to QED….”

If it were possible somehow to experimentally determine the scale, mechanism, and nature of SUSY-breaking and the specific parameters of the moduli of the Calabi-Yau manifold of our universe, could these experimentally derived and observed numbers then be used as input to a string theory, to see whether its “output” (i.e., the MSSM + GR, plus novel predictions or postdictions) would be testable and falsifiable? The landscape may well be “inherently untestable”, but perhaps the only way to get “predictions” out of string theory is to accept that the specific parameters of higher dimensional space can only be experimentally observed.

… perhaps the only way to get “predictions” out of string theory is to accept that the specific parameters of higher dimensional space can only be experimentally observed.

This doesn’t sound much different in principle than the approach that is used for established theories today, e.g., the standard model of particle physics. The standard model has approximately 20 parameters which are measured (the electron mass and electromagnetic coupling are two of them); once measured, the SM is extremely predictive and very much in agreement with a vast number of experiments, as you already know.

However, taking the widely held view that the standard model is an effective theory, not a truly fundamental one, there is no need to explain why its parameters have the values they do; ideally, that would be the job of a more fundamental theory. Depending on your tastes, you could take the view that there is no fundamental theory and the parameters are random variables, or you could assume that a more predictive theory is possible but we haven’t found it yet. Both possibilities are logically possible, and as I mentioned before, failure to find that more predictive theory says nothing about whether it is there to be found.

None of this has anything to do with SUSY or compactification of higher dimensions, which you mentioned:

If it were possible somehow to experimentally determine both the scale, mechanism and nature of SUSY-breaking and the specific parameters of the moduli of the calabi-yau manifold of our universe […]

Certainly, observations by themselves are little more than a collection of literal numbers, pictures, etc., which must be organized and interpreted before they carry meaning. For the kinds of experiments you seem to be thinking of, theories play the key role in providing a framework for organizing and interpreting the data. The more “flexible” the theory (i.e., the less predictive it is), the easier it will be to find a way to fit the data, and the less rigid the conclusions you can draw from the fit; this is true in general, and not at all specific to the string theory landscape idea.

Assume that a supersymmetric string theory is what you will use to organize and interpret your data, and let’s say that it predicts the MSSM plus GR as you said. (In my view, prediction of gravity is not exactly a prediction; the presence of something that looks like gravity was a crucial reason for taking string theory seriously as a “theory of everything” in the first place.) Now you do a number of experiments. If your theory is predictive in the meaningful sense, it should uniquely (i.e., without flexibility) and correctly determine many more experimental values than you used to construct your theory. That is, you want to get more out of the theory than you put into it. Failing that, the theory basically just re-parameterizes what you already know, which is not very impressive.

Nonetheless, even if you have a relatively unique theory that has made robust predictions that were experimentally confirmed (which would be real progress), you still have a lot of unexplained parameters. At this point you are back to the same questions of how to explain them. You still can’t scientifically distinguish between the possibility that the parameters are fundamental random variables (i.e., your predictive string theory is fundamental), versus the possibility that a more fundamental theory can be found that predicts those parameters uniquely (i.e., your predictive string theory is not fundamental), so the arguments in my previous post still apply. You still can’t use anthropic arguments to decide whether your theory is fundamental…

I agree with what you said. While there would be many unexplained parameters, it could explain what is unexplained in the SM and GR.

While such a theory may not be “fundamental”: if we could observe the parameters of the moduli of the 6-dimensional space (or if we live in a large dimensional braneworld) experimentally/observationally and use that data as input and then put them in with the current superstring theories, and we don’t get the MSSM + GR, wouldn’t this “falsify” string theory, thus making it science?

if we could observe the parameters of the moduli of the 6-dimensional space… experimentally/observationally and use that data as input…

If we could do this, I think many or most of the people currently complaining that string theory is not science would fall silent, or at least find something else about String Theory to complain about. In fact as near as I can gather, it is specifically because no one can observe the parameters of string theory (and even constraining those parameters, such as the String Vacuum Project types wish to do, seems to be a great difficulty) that people have started accusing it of being nonscience in the first place.

I’m not sure I have correctly interpreted your question, so what follows may not be very helpful.

You mentioned

While there would be many unexplained parameters, it could explain what is unexplained in the SM and GR.

Sometimes a new theory or perspective can clarify what we know and don’t know, but I don’t think that is the case here. The parameters of the SM and GR are very specific, and given those parameters the range of those theories is well known. If for some ranges of those parameters the SM or GR break down, that would be extremely interesting and exciting and would probably give clues about a more fundamental theory. But you need experiments to tell you where (or if) the SM and GR have limits of applicability. Theories are generalizations, extrapolations, or interpolations of what we can observe, so beyond requirements of mathematical/logical and empirical consistency you can’t use a new theory to tell you that the SM or GR will break down in some regime; you must qualify it with, “If the new theory is true, then…”

I am even less sure I have understood your next paragraph. For example, I don’t think anyone would argue that you can observe the moduli; they parameterize the Calabi-Yau manifold, but couldn’t themselves be observed or measured because you couldn’t step outside our four dimensional spacetime and perform the necessary experiments. Hence you could make only indirect inferences by noting that certain ranges of the parameters are compatible with observations and others aren’t; anthropic reasoning would be included here.

… use that data as input and then put them in with the current superstring theories, and we don’t get the MSSM + GR, wouldn’t this “falsify” string theory, thus making it science?

First, the MSSM is just one possible extension of the SM. There are no experiments yet which imply that the MSSM (or supersymmetry in general) is part of nature, so it is on a very different footing than general relativity. Assuming you know this, I think you are asking whether everything we have observed experimentally is sufficient in principle to constrain the string theory landscape enough to uniquely predict something new that we can experimentally test. That is where the controversy lies. Peter has argued that if you have an unimaginably large number of valid choices (e.g., 10^500), then, unless you have some way to prune that huge number down a priori to a small one, you could have a huge number of theories (i.e., vacua) that each fits all experimentally known (and knowable) values within the limits of experimental precision: your theory ceases to make unique predictions and thus is not testable in the conventional sense. Jacques Distler, on the other hand, has made a fairly specific argument in a spirited discussion on Cosmic Variance that the situation may not be nearly that bad; I think his argument is also reasonable and shouldn’t be dismissed, but you should refer to that discussion for details (the link should be to comment #233, but the complete discussion had more than 500 comments!).
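The counting side of Peter’s argument can be made concrete with a back-of-envelope sketch. All the numbers below are illustrative assumptions (the 10^500 figure is the often-quoted rough estimate of flux vacua; the parameter count and precision are ballpark stand-ins), not real data:

```python
# Toy counting argument: how many vacua could fit every experiment at once?
# All numbers are rough, illustrative assumptions.

N_VACUA = 10**500   # often-quoted rough estimate of the number of string vacua
N_PARAMS = 20       # roughly the number of measured Standard Model parameters
PRECISION = 10**5   # assume each parameter is pinned to 1 part in 10^5

# Experiments can distinguish at most this many "cells" in parameter space:
cells = PRECISION ** N_PARAMS          # = 10^100

# If vacua were spread roughly evenly over those cells, each cell would
# still contain this many experimentally indistinguishable vacua:
vacua_per_cell = N_VACUA // cells      # = 10^400

def exp10(n):
    """Exponent of an exact power of ten (e.g. 10**100 -> 100)."""
    return len(str(n)) - 1

print(f"distinguishable cells: 10^{exp10(cells)}")           # prints 10^100
print(f"vacua per cell:        10^{exp10(vacua_per_cell)}")  # prints 10^400
```

On these (assumed) numbers, pinning down 20 parameters to five digits each still leaves ~10^400 candidate vacua per experimental cell, which is the sense in which fitting the data fails to single out a vacuum. Distler’s counter-argument is, roughly, that the vacua are not spread evenly and that structural constraints cut the count far more sharply than this naive estimate suggests.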

Coin actually states my intention: would string theory be “predictive” and “falsifiable” if it were possible to observe, experimentally, the values of the moduli? I understand that Peter argues that an unimaginably large number of choices for the vacuum configuration are possible in theory, but if nature really is 11D and SUSY, it has chosen one specific set of values that could, if the technology existed, be observed, measured, and recorded. I recognize that it is not currently possible to deduce the parameters of these hidden dimensions; even consistency with the SM, MSSM, and GR in the low-energy LHC and Tevatron range does not constrain the theory enough. I understand that “no one can observe the parameters of string theory”, but is it any less science if the technology does not yet exist to access and measure the parameters of these hidden dimensions?

The problem is that no one is able, for even slightly realistic string theory models, to compute the relation between observable quantities (e.g., masses and mixing angles) and things like the moduli parameters that specify the model. This has been the case since string theory became popular, and there is no reason to believe it is going to change.