Thursday, November 15, 2012

There are no hospitals for theories

BBC's Pallab Ghosh has quoted Christopher Parkes of LHCb, who said: "Supersymmetry may not be dead but these latest results have certainly put it into hospital."

Even if one (or two) gets into a hospital, it doesn't mean he's not a supersymmetric hero (a superhero for short). A shoulder surgery isn't the end of the world.

But nothing like that is possible in science.

Supersymmetry is a symmetry, a principle added to the list of conditions we expect from models. But it is not a particular model. There are many supersymmetric models or supermodels for short.

Some of them, like Petra Němcová, were named envoys for Haiti and they're much more than richly decorated hollow skulls. ;-)

But let's not get distracted too much. What I want to say is that one must carefully distinguish the confirmation or falsification of a particular model; and the confirmation or falsification of the whole principle or framework. These are totally different things. Moreover, in contrast with the opinion of Pallab Ghosh and his sources, the framework doesn't get falsified or "nearly falsified" if you falsify 80% or 90% of the particular models.

Of course, I have discussed the same issue many times, e.g. in Bayes and SUSY (May 2012).

If you falsify 90% of the models (SUSY models) in a framework (SUSY) – and it's very hard to quantify the fraction because we deal with continuous, noncompact parameter spaces equipped with an ill-defined measure – then the belief that the framework is still right requires the believer to think that he had bad luck of a kind that only occurs in 10% of cases.

But it's not such a big deal to believe in this modest amount of "bad luck". It's equivalent to a less-than-2-sigma "deficit" arguing against your general point – in this case, supersymmetry. But less-than-2-sigma excesses and deficits are found almost everywhere. They're just not terribly strong arguments for assertions either way. The other, partly theoretical arguments for and against SUSY are arguably much stronger than that.
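To see how weak the "90% is excluded" argument really is, one may translate the surviving prior fraction into a Gaussian significance. A minimal sketch using Python's standard library (the 10% figure is the rough number from the text, not a measured quantity):

```python
from statistics import NormalDist

# If 90% of a parameter space is excluded and the framework is nevertheless
# right, you "got unlucky" with probability p = 0.10.  Translate that
# p-value into a one-sided Gaussian significance (number of sigmas).
p_unlucky = 0.10
sigma = NormalDist().inv_cdf(1 - p_unlucky)
print(f"p = {p_unlucky}  ->  {sigma:.2f} sigma")  # about 1.28 sigma, below 2 sigma
```

So surviving only 10% of a parameter space is no more damning than a random sub-2-sigma fluctuation, which is the point being made above.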

The idea that theories – in this case SUSY – may be sent to hospitals is based on an application of "collective guilt" to models. One assumes that supersymmetric models represent a nation or a family and they share the pain of each other. So if some of the relatives – models – are killed, the others suffer.

But this ain't the case. Two models may share some properties – for example, they may respect the principles of broken supersymmetry – but their truth values are not tied to one another. In fact, they are negatively correlated because if one model is right, we know that any other, inequivalent model must be wrong! This negative correlation in the truth values is there regardless of any similarities in the assumptions or technical properties of the models.

Despite all the similarities between the models, the death of another model is always good news for a model that hasn't been killed, simply because the competition gets less severe. Moreover, the "clustering" of the models into the "families" that someone may prefer is artificial, and there are many other ways to organize physical models into "families". The experimental signatures that models predict (e.g. lots of events with many leptons) may often be more important than their deepest assumptions (such as SUSY). No reliable conclusion may depend on the arbitrary way in which one arranged the models into "families".
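The "good news for survivors" point can be made quantitative in a toy Bayesian setting. A minimal sketch with made-up model names and priors (nothing here is a real SUSY spectrum):

```python
# Toy Bayesian update over mutually exclusive models.  When one model is
# falsified, its probability is redistributed among the survivors, so the
# "death" of a rival is always good news for the models that remain.
priors = {"model_A": 0.40, "model_B": 0.35, "model_C": 0.25}

def falsify(probs, dead):
    """Condition on `dead` being wrong: drop it and renormalize the rest."""
    survivors = {m: p for m, p in probs.items() if m != dead}
    total = sum(survivors.values())
    return {m: p / total for m, p in survivors.items()}

posterior = falsify(priors, "model_B")
print(posterior)  # model_A rises to 0.40/0.65, model_C to 0.25/0.65
```

Both surviving models end up with a higher probability than they started with, which is exactly the negative correlation of truth values described above.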

Car accident

Let me give you two similar examples showing why the "hospital" idea is logically flawed. You take a bus on a nice trip. In a car accident, 90% of the passengers in your bus get killed. You were a bit lucky and avoided all injuries. Now the question is:

Should you be taken to the hospital?

The answer is obviously No. You shouldn't be treated as an ill person. The past proximity to several people who were killed by a truck going in the opposite direction isn't a disease – let's ignore the psychological shocks you may have experienced (though frankly speaking, I don't really believe that the treatment of people for such shocks is particularly sensible or useful, either).

Your life goes on even though you have belonged to a group of people whose majority is gone. After all, all of us belong to a group – all people who have ever been born – whose majority is already dead. While the current world population is 7 billion people, the total number of people who have ever walked on the globe is significantly higher, over 100 billion. So over 90% of the people who have ever been born are dead by now. Does it mean that your existence and health are a contradiction? I don't think so. Life as a whole is about the circulation of material between dead and alive organisms; it's about the selection.
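The arithmetic behind the "over 90%" figure is easy to check; the 108 billion total is one commonly quoted demographic estimate and is an assumption here:

```python
# Rough arithmetic behind the "most people ever born are dead" observation.
alive = 7e9          # world population around 2012
ever_born = 108e9    # a common demographic estimate of all humans ever born
dead_fraction = 1 - alive / ever_born
print(f"{dead_fraction:.1%} of all people ever born are dead")  # about 93.5%
```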

Science is totally analogous to life. Evidence falsifies some theories – counterparts of life forms, species, and individual organisms – that were not sufficiently viable and it focuses the confidence and probability – a counterpart of the resources on the Earth – to those that have survived. In this way, the theories are getting more accurate, more sophisticated, more viable – much like the species and organisms.

It's completely incorrect to say that the people who live today are not viable just because they belong to the group of 100 billion people most of which are already dead.

Higgs search and elimination of possibilities

My second example is the search for the Higgs boson. Let's look at the situation we were experiencing months before December 2011 when the confidence level for the 126 GeV Higgs boson surpassed 4 sigma and sensible people became pretty much sure it was there.

Before December 2011, experimenters were only able to eliminate intervals of masses that the Standard Model Higgs boson couldn't have (let's assume the Standard Model is right – or at least a relevant approximate step in our improving knowledge).

A priori, the Higgs boson mass could have been anything between 0 GeV and 1,000 GeV. The prior probability that the mass would be above 600 GeV was already small, for various reasons. So let's shrink the window to 0-600 GeV. By 2011, a vast majority of this interval had been eliminated. Around the summer of 2011, only the interval 115-130 GeV remained viable. But no Higgs had yet been discovered via a stronger-than-3-sigma signal.

Now you may think about it and say that it was strange. The Higgs had not been discovered yet and only an interval of width 15 GeV – 1/40 of the overall interval 0-600 GeV – remained as possible. Pallab Ghosh could have said that 39/40=97.5% of the Higgs boson idea had been falsified. The Higgs boson as an idea should have been taken to an intensive-care unit, the BBC should have written.

(If you decide that it's natural for the Higgs mass scale to be any number between 0 and the GUT scale, 97.5% could even be replaced by 99.99999999999999999%. Well, it should be close to thirty digits "9" because it's the squared mass that could be uniformly distributed.)
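Both percentages can be recomputed in a few lines. The GUT scale of \(10^{16}\) GeV and the uniform-in-\(m^2\) prior are assumptions for illustration; with these particular numbers the count of nines lands in the high twenties:

```python
from math import floor, log10

# Fraction of the 0-600 GeV window excluded once only 115-130 GeV remained.
excluded = 1 - (130 - 115) / 600
print(f"{excluded:.1%}")  # 97.5%

# If instead m^2 were taken uniform up to the GUT scale (~1e16 GeV),
# the surviving fraction would be astronomically smaller.
surviving = (130**2 - 115**2) / (1e16) ** 2
nines = floor(-log10(surviving))  # digits of "9" in the excluded fraction
print(f"surviving fraction ~ {surviving:.1e}, i.e. about {nines} nines")
```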

But we know it would be a completely wrong conclusion. The Higgs boson was there, somewhere in the remaining interval. There has never been any good reason to doubt that some Higgs boson had to exist. The gradual shrinking of the "habitat" wasn't a sign of the Higgs boson's deteriorating health. Instead, it was a gradual improvement of our knowledge of Higgs' properties.

Elimination is easier than discovery

If you think about the numbers, you will easily understand why it's pretty ordinary that new particles are usually discovered after a big majority of the parameter space has been eliminated. The reason is simple: it's easier to eliminate a point in the parameter space (assuming that the point is really wrong) than to discover something in it (assuming that it's there). Why?

Well, the reason is simple. Physicists are usually satisfied with a 95% confidence level (2-sigma) exclusion but they demand a 99.9999% confidence level (5-sigma) discovery. Now, 5 sigma is 2.5 times greater than 2 sigma, but the required number of collisions (or other data) scales as the square of the significance, so you need 6.25 times more "data" for a discovery than you need for the exclusion of the same point.

So if you assume that Nature sits at a generic point, you may make the following estimate. Find the moment at which about 50% of the parameter space is excluded. Multiply the amount of data collected by that moment by a factor of 6. And you will get an estimate of the amount of data that's needed for the discovery.
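A minimal sketch of this scaling, using the standard toy significance \(s/\sqrt{b}\) for a counting experiment (the rates are invented; nothing here is specific to an actual LHC analysis):

```python
from math import sqrt, isclose

def significance(s_rate, b_rate, lumi):
    """Toy s/sqrt(b) significance: signal and background counts grow
    linearly with luminosity, so significance grows like sqrt(lumi)."""
    return s_rate * lumi / sqrt(b_rate * lumi)

# Reaching 5 sigma where you previously had 2 sigma therefore takes
# (5/2)^2 = 6.25 times more data.
data_factor = (5 / 2) ** 2
base = significance(1.0, 100.0, 10.0)
boosted = significance(1.0, 100.0, 10.0 * data_factor)
print(data_factor, isclose(boosted, 2.5 * base))  # 6.25 True
```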

Now, this is just an estimate, not a strict rule, of course. The actual amount of data you may need may be 10 times smaller or 10 times greater than this estimate and it's still not shocking. But if you apply those numbers to the Higgs boson or supersymmetry, you will realize that there was no reason to be "worried" about the general Higgs boson idea in the fall of 2011; and there's no reason to be "worried" about the general idea of supersymmetry today.

People should try to think a bit rationally and realize that the "collective guilt" principle can't be applied to physical models because the "clustering of theories into collectives" is completely artificial, man-made, and inconsequential for the validity of individual models. The fates of individual models are independent. And if you need some strong enough negative evidence against a whole framework, you need to eliminate 99.7% (3-sigma equivalent) or 99.9999% (5-sigma equivalent) of the parameter spaces. The elimination of 90% of a parameter space doesn't give us much useful information. It is only as powerful an argument as any other 1.5-sigma bump seen anywhere.

And that's the memo.

Brahe's health

Exactly two years ago, I described Danish research focusing on Tycho Brahe's remains in Prague. He could have been murdered with mercury etc., perhaps even by Johannes Kepler himself. Today, the BBC tells us that the Danish-Czech research is over. There was mercury in the beard but the amount was normal, not deadly. Moreover, Kepler's description of Brahe's declining health "matched a severe bladder infection".

Kepler was great but I, for one, wouldn't take the stories written by a prime murder suspect as uncritically as they did.

Hi, I am not sure whether I understand the question. But if 3, 4, 5, or more theories agree with an observation, then this observation gives us no information about which of these theories is right. We may have other – reliable or (usually) unreliable – ways to decide which of them is preferred ("prior probabilities" etc.) but it's important to realize that none of the discrimination is due to the observation.

In many cases, we just can't say which of them is right. It may be tempting to find a quasi-rational reason why the "observation picked one of these theories" but such reasons just aren't rational.

Take, for example, gravity as described by Newton or by GR. Here we have two totally different kinds of ontology while both theories correspond to the data. Does that mean theories can never describe ontology? Especially if we consider Reg Cahill's "process physics" gravity as a space inflow with no dark matter, no dark energy, no accelerating universe, even no GR – and as such it "describes" yet another kind of ontology.

Newton's theory and GR were experimentally indistinguishable until some moment – 1919 or whenever the turning point was – and then they became distinguishable. The result of the discrimination tests, once they were possible, is that Newton's theory is wrong while GR is right. Before 1919 (...), one had to rely on other arguments and intuition to decide whether to believe Newton's theory or GR.

I don't know what to do with questions like "can theories describe ontology?" I, and science, operate with different questions: is this theory or another theory right or wrong? Up to some moment, the answer often remains unknown. With some advances, it becomes possible to answer such questions.

It is not known to me what a "reg. cahill process" is but I don't believe it is a scientific term. And if it could be classified as a scientific theory meant to explain similar phenomena as GR, then it would be a wrong one because it's implausible that a theory manifestly inequivalent to GR - except for "more sophisticated ones" such as string theory - would be able to reproduce all the successes of GR. The known experimental tests pinpoint the right theory almost uniquely within "a large class of theories that deal with comparably difficult concepts".

But we can never be sure that a particular theory is correct despite its agreeing with the data – remember the "theory" of Ptolemy? Even GR is wrong at the very small or very dense... Let me ask: how can we be sure that space itself is bent when there are many explanations for the bending of light without any curvature of space proper?

My point is: even if we exclude all rival theories, there always remains the possibility that the remaining one merely corresponds to reality without describing it.

Hi, as we said, no theory in science is really "completely correct". We may only show when a theory is wrong – but as long as a theory agrees with all the observations, it's "temporarily correct", though chances are that it will be proved wrong by more detailed, advanced experiments in the future.

Ptolemy's theory was a parameterization of the motion of celestial bodies, a sort of Fourier series. For this purpose, it was fine – and equivalent to other ways to parameterize the observed motion. When it comes to explanations of the reasons behind the motion, Ptolemy's theory had pretty much nothing to say.

When we talk about competitors of GR, they have to explain the causes of the trajectories, the bending of light, etc., as well, in a sufficiently natural way. I assure you that all theories of comparable or smaller complexity than GR are either equivalent to general relativity, or manifestly (almost by construction) small deformations of it, or they disagree with some experiments. If you think that there's a completely different, inequivalent theory that still reproduces all the successes of GR, then you have been had. It's not really possible.

No, not assured but permissive. I mean, if the situation is always temporary – knowledge is a function of time – then science is logically forbidden from taking any ontological stand. You, for example, are sure that M-theory must be correct, but this is dogma, not science; maybe that speculation will remain just that... mere speculation. Science is not allowed to tell humanity any meta-claims concerning reality since this is always beyond scientific investigation in principle.

I respect science. I admire what science shows us of the aspects of design in nature; we look with awe and wonder – a spiritual attribute of humanity – at what no law or force or mechanism can generate. So let science respect our spiritual needs... to know and to stand in awe.

Hi, it's just not true that science can't take "any ontological stand". It may still make billions of reliable statements that certain ontological assertions are wrong, and even when it comes to positive statements, it can assert that certain conceptual frameworks are enough to explain a certain large category of phenomena, etc.

The reasons why I think that M-theory is right – and, independently of that, why it can't be "deformed further", so it should be a final step in the process – are rather indirect and mathematically loaded, but they exist. And there are many other assertions that may be defended by solid evidence.

But it's still true that science isn't about getting guaranteed final answers to all the important questions we can ask – unlike religion. Science just can't do it; it really contradicts its essence, the very fact that science may keep on evolving. And this ability of science to evolve is actually a source of pride – people who really like science *love* it when it evolves. While science and religion may be similar in many respects, this is a respect in which the difference between them couldn't be greater.

Religion may give you "eternally valid perfect truths". The only problem is that all of those that refer to the physical world are demonstrably wrong.

How could science ever prove that the physical world is all that exists? How can science ever prove that the reality of man is merely physical? How could it prove that a physical mechanism can generate consciousness, feelings, or sensations? Can science prove that a physical mechanism generated the laws, constants, and initial conditions of the cosmos? You can speculate about anything pre-cosmic, but I am asking for a mechanism to generate what its own structure depends upon. These are some of the religious aspects that science will never render obsolete; these are creations that science must respect and confess that it cannot penetrate. This is a veiled reality where human reason stands still.

I recommend that all interested persons read Dr. David Abel's book (The First Gene). It is not about genetics; it is about information and "the incompleteness of physicodynamic law", where he gives ample evidence that the formal is beyond and above the physical – that the physical is in principle unable to generate all of what we consider to require information generation, control, pattern formation, complexity... in short: goal-directed action.

I am of course pleased that Matt is finally forced to face some of the harsh reality of the trolls. Obviously, he is the man who is right here.

By the way, I can't resist pointing out the following analogy. The understanding of falsification by Shmoit and other people talking about hospitals etc. is similar to the understanding of the falsification of climate models by their countless advocates, despite the apparently opposite sign.

The cases are analogous because in both cases, they talk about the fate of many different models as being "one collective fate". For the climate models, they take some "average of models" and check whether it's close to reality, plus or minus the error margin given by the typical variance of the models. And the hospital particle physicists want to apply the same recipe to SUSY.

However, the individual models and individual points in the parameter space can't be grouped into a "collective", and the "average predictions" of such a "collective" are physically meaningless. Some special SUSY models may predict significantly different things from what some people have called "garden variety models" – and they may still be natural at the same time. Of course, if some SUSY models are falsified and others remain viable, the composition of the "collective of SUSY models in the literature" will drift, and that's a good thing. It's nothing else than taking the observational data into account.

We don't have the exact single model that predicts everything, so we must still extract some data from the experiments – nothing to be ashamed of; this is how science has worked since the beginning, although we will ultimately be able to do better ;-). Once that's done, we may figure out which SUSY models remain viable and spend more time investigating their logic and consequences.

You are probably right, maybe Matt Strassler still has to learn what a really bad, evil troll is ... ;-) Anyway, it is very nice of you to help him over there. And the exchange about saying sorry made me LOL :-D

I like the analogy with the climate models too, but I'm not sure if there are any good ones among them that could be diluted ...

There are surely some climate models that are much better than others and that could be further improved. What's problematic is that most of the folks in that field are not too concerned with finding the accurate ones. Larger classes of not-necessarily-good models are better for them, especially because these classes also contain models which are perhaps inaccurate but make dramatic predictions.

For 10 years, I couldn't even erase it. It was created as fun for folks who believe we're surrounded by ET aliens. But when I got my 20th excited e-mail from someone who was super happy about the idea that he could meet a real ET alien, my pleasure from the prank was already gone, haha.

You make a good point about not discarding a full theory because bits of its parameter space have been ruled out, I think.

When talking about the Higgs boson, you mention that the likelihood of its mass being above 600 GeV was small. Can a similar bound also be placed on the lightest supersymmetric particle detectable by the LHC?

Dmckee seems to be in a bad mood again; he is randomly closing string theory questions on Physics SE, see

http://physics.stackexchange.com/q/44300/2751

Even if it is not linguistically brilliantly formulated, this is no reason to close it as a duplicate, since it is not ...

Sorry for the off-topic comment, but this annoys me :-( If the issue does not get resolved by tomorrow I will escalate it to meta ... David Zaslavsky will probably resign from being a moderator at Physics SE.

OT, but this reminds me of atheist criticisms of the concept of a "God." Most conceptions are morally absurd, ugly, and, where they make empirical claims (which is rare), untrue; but that doesn't mean they all are, including the Hebraic conception elucidated in the middle parts of Genesis. Google "The Torah and the West Bank" for an explanation.

Thanks, Bastiaan. The claim I am making is that one can't really throw a theory out even if 90% of its parameter space is ruled out.

In SUSY, a Higgs boson was guaranteed to be below 135 GeV. However, all the other new particles may in principle be arbitrarily heavy. On the other hand, again, if SUSY is solving the hierarchy problem, some of them – presumably gauginos, higgsinos, and the stop squark – shouldn't be too far from the electroweak scale.

There's still a difference from the Higgs. A Higgs above 1 TeV in the Standard Model would produce a theory that breaks down at slightly higher energies. There's no such breakdown if the new SUSY particles are much heavier – even if SUSY then doesn't solve the hierarchy problem.

I am not certain who you refer to as a troll. I do know that I appear to be unwelcome at PW's site for pointing out some of the same things and for saying that that blogger has misused Pauli's "not even wrong" to attack Lumo, Kane, and even Strassler. Of course, perhaps my comments were "noise" in response to someone calling Kane a liar. Regardless, I think it is not very difficult to see – even with less than a PhD in math and physics – that one cannot exclude what one has not looked at or looked for. The keys, for me, are usually in the last place the third time I look there, because it is easy to miss things when one is too sure one has looked closely when one has not.

And he has, of course, his army of sub-trolls he sends out to spoil other reasonable physics blogs he links to, if they are not protected against such attacks by an efficient firewall (= comment moderation), as TRF for example happily is.

It is completely pointless to try to comment on troll sites; normal, reasonable people are highly unwelcome there...

I sometimes just try to blow away the worst trolls from other otherwise nice enough places, such as Prof. Strassler's site.