It's not as simple as that for all sciences - once again an article on repeatability seems to have focused on medicinal drug research (it's usually that or psychology), and labelled the entire scientific community as rampant with "statistical, technical, and psychological biases".

How about physics?

The LHC has only been built once - it is the only accelerator we have that has seen the Higgs boson. The confirmation between ATLAS and CMS could be interpreted as merely internal cross-referencing - it is still using the same acceleration source. But everyone believes the results, and believes that they represent the Higgs. The signal isn't observed once in the experiment; it is observed many, many times, and a very large amount of scientists' time is spent imagining, looking for, and measuring any possible effect that could cause a distortion or bias in the data. When it costs billions to construct your experiment, reproducing the exact same thing can be hard.

The same lengths are gone to in order to find alternate explanations or interpretations of the result data. If they don't, they know that some very hard questions are going to be asked - and there will be hard questions asked anyway, especially for extraordinary claims. Look at e.g. DAMA/LIBRA, which for years has observed what looks like indirect evidence for dark matter, but very few people actually believe it - the results remain unexplained whilst other experiments probe the same regions in different ways.

Repetition is good, of course, but isn't a replacement for good science in the first place.

ATLAS, CMS, ALICE, LHCb, etc. are all different experiments that just happen to share the same accelerator.
They use different detectors that are designed, built, and operated by different groups of people, and the data is analyzed by different groups as well.

If you follow your logic - that the different experiments using the same accelerator negates the whole thing - to the extreme, then doing two experiments on the same planet/solar system/universe won't be enough.

And I don't buy your inverse argument that one (good) experiment is good enough either.
It is very difficult to tell from outside the group if the experiment is actually good or not (although you probably can tell if it's bad).
Screw-ups can happen no matter how many people look at the data if there is some flaw in the experimental setup - the only way to make really sure is to use different experiments to measure the same thing.

I think you've got my argument completely backwards - I'm pointing out that if repetition as implied by the article is everything, then results from e.g. the LHC are discounted.

Similarly, I am not saying that you don't need to repeat - just that it isn't the be-all and end-all of what defines 'science'. That's why I mentioned DAMA: nobody is accusing them of not taking care, but nobody really believes the result either.

There is a philosophical assumption underpinning astronomy and cosmology, the so-called "cosmological principle."[0] This is an assumption (albeit not a completely unfalsifiable one) that physics across the universe at large scales is the same for all observers - that physics here on Earth should be similar to physics in Andromeda. Of course, all scientific experiments have been done on or near Earth, and thus our interpretation of astronomy is tempered by experiments we do here; for example, we assume that the speed of light here is the same as the speed of light elsewhere.

This is, in principle, a problem. We unfortunately have no way to measure the speed of light in Andromeda the way we can here on Earth, so we really have no way to tell whether our astronomical models are wrong because the speed of light is not constant in Andromeda or elsewhere.

So, yes, in principle not being able to repeat science experiments everywhere in the universe is a problem. However, thinking a little less broadly: testing Newtonian gravity in, say, Italy and also in China shows that at least across the Earth the phenomenon is similar. Then one can say, "certainly, gravity is the same in Italy and China, and perhaps across the surface of the Earth." That is a stronger statement than "gravity is this way in Italy." Ordering claims by "scientific goodness", we can say that

Gravity is the same *across the universe* > gravity is the same across the Earth
> gravity is this way in Italy

My point here is that even if one can't fulfill the extreme of testing theories everywhere in the universe, one can progressively prove stronger and stronger statements regarding the validity of scientific theories.

The LHC stands somewhere between "the SM is validated across the universe" and "the SM is validated at one detector at the LHC". Yes, it would be "better" if the Higgs were found at other experiments, but the current situation is "better" than if the Higgs had been found at one detector there and not in any other. Repeatability, like everything in science, is not a binary step function but a continuous function over the domain [0,1].

That's only true if all the other physics involved are the same wherever that supernova is; your own link mentions this. We can't prove that's the case since we can't test the individual parts of that theory - it's possible to construct a different set of physics that produces results identical to the physics we're familiar with.

That isn't a particularly fruitful approach since it doesn't permit any real discovery - it's rather brain-in-a-jar - but it is still something you have to assume away.

There are some phenomena which allow good remote assessments of properties, based on various interactions. Cepheid variable brightness, for example, serving to validate the expanding-universe theory. The stellar main sequence gives a good sense of stellar masses. Spectroscopy works pretty well across a wide span of the Universe (several billions of light years). Etc.

Since these operate in concert across the observable universe, and themselves involve various other interactions (elements, strong and weak nuclear forces, gravity, the speed of light, rates of hydrogen fusion, etc.), we can conclude either that there is no appreciable change in any of the underlying fundamental constants, or that change occurs in a compensated fashion such that no net change is detectable.

The second fails Occam's Razor. The conclusion that the laws of physics appear to be similar throughout observable space seems robust.

I read the article carefully, and was unable to find where it says what you say it says.

In particular, you're making super-strong statements like "be all and end all of what defines 'science'" -- that's not what I see in the article. The article is reasonably pointing out that we have a repetition crisis and that more emphasis should be placed on it. That's not the various super-strong things you're claiming to see in it.

In hindsight I do think it's true that you want numerous experiments with possibly different controls, with the exception of whatever is important for the effect. In that sense you want your experiment to "discover the right abstraction" of details that your principle works over.

So I think what has to occur is a gradual "loosening up" of the controls from strict replication to weak replication, with both types of replication giving information about the effect you are testing: its validity and its generality, respectively.

We don't need to 'define repetition', we need to foster a culture that a) accepts repetition and b) does not accept something for a fact just because it's in a journal. Right now we don't even have (a); nobody will publish a replication study. Of course, the LHC is impossible to replicate, but the vast majority of life science studies are.

I should think this is mostly needed in life sciences. Other, more 'exact' sciences seem to not have this problem.

I don't think we do. I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

Paying for the cost of mountains upon mountains of lab techs and materials that it would require to replicate every study published in a major journal just isn't a good use of ever-dwindling science dollars. Replicate where it's not far off the critical path. Replicate where the study is going to have a profound effect on the direction of research in several labs. But don't just replicate because "science!"

In fact, one could argue that the increased strain on funding sources introduced by the huge cost of reproducing a bunch of stuff would increase the cut-throat culture of science and thereby decrease the scientist's natural proclivity toward honesty.

> and b) does not accept something for a fact just because it's in a journal

Again, it's entirely unclear what you mean here.

It's impossible to re-verify every single paper you read (I've read three since breakfast). That would be like re-writing every single line of code of every dependency you pull into a project.

And I'm pretty sure literally no scientist takes a paper's own description of its results at face value without reading through methods and looking at (at least) a summary of the data.

Taking papers at face value is really only a problem in science reporting and at (very) sub-par institutions/venues.

I don't care about the latter, and neither should you.

WRT the former, science reporters often grossly misunderstand the paper anyways. All the good reproducible science in the world is of zero help if science reporters are going to bastardize the results beyond recognition anyways...

> I don't think we do. I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

No one is proposing repetition for its own sake. The point of repetition is to create rigor, and you can't do rigorous science without repetition.

> Paying for the cost of mountains upon mountains of lab techs and materials that it would require to replicate every study published in a major journal just isn't a good use of ever-dwindling science dollars. Replicate where it's not far off the critical path. Replicate where the study is going to have a profound effect on the direction of research in several labs. But don't just replicate because "science!"

I could see a valid argument for only doing science that will be worth replicating, because if you don't bother to replicate you aren't really proving anything.

> I could see a valid argument for only doing science that will be worth replicating, because if you don't bother to replicate you aren't really proving anything.

Exactly. A lot of the science I've done should not be replicated. If someone told me they wanted to replicate it, I would urge them not to. Not because I have something to hide. But because some other lab did something strictly superior that should be replicated instead. Or because the experiment asked the wrong questions. Or because the experiment itself could be pretty easily re-designed to avoid some pretty major threats.

The problem is that hindsight really is 20/20. It's kind of impossible to ONLY do good science. So it's important to have the facility to recognize when science (including your own) isn't good -- or is good but not as good as something else -- and is therefore not worth replicating.

I guess the two key insights are:

1. Not all science is worth replicating (either because it's too expensive or for some other reason).

2. Replication doesn't necessarily reduce doubt (particularly in the case of poorly designed experiments, or when the experiment asks the wrong questions).

>I don't think we do. I think we need to foster a culture of honesty and rigor.

Foster all you want; an honor system doesn't protect you from incompetent and dishonest people publishing junk for funding or self-promotion. If we had a culture of repetition, it would promote cross-checks that make up for the flaws in human nature.

The entire point of science, and this is not hyperbole, is that results are reproducible. If the experiment is not reproducible, one must take the results on faith. There is no such thing as faith-based science.

In order to build a shared body of knowledge based on scientific facts, then, results must be repeated. It is how different people can talk about the same thing without fearing an asymmetry of knowledge and understanding about the axioms on which their discussion of the world rests. Otherwise it is faith or religion or narrative - something other than science.

> The entire point of science, and this is not hyperbole, is that results are reproducible.

No, it's not. The point of science -- its end -- is to understand the natural world. Or to cure diseases. Or, more cynically, to learn how to build big bombs and more manipulative adverts.

Reproducible results are the means, not the end.

I know that seems like hair-splitting, but it's important. Epistemological purity can do just as much harm as good, because even the purest science is usually motivated more by "understanding the natural world" or "improving our understanding of some relevant mathematical abstraction" than by epistemological purity itself.

To be quite honest about it, this sort of epistemological purity that insists on reproducibility as a good in itself feels a lot like some sort of legalistic religion.

> If the experiment is not reproducible one must take the results on faith. There is no such thing as faith based science.

I don't think I (or anyone here) is arguing against this. Or against reproducing important experiments.

I'm wholly supportive of reproducing results when it makes sense. But I'm also wary, in a resource-constrained environment, of preferring reproducing results over producing good science in the first place.

To be concrete about it, I'll always prefer a single (set of) instance(s) of a well-designed and expertly executed experiment over 10 reproductions of a crappy experiment. In the former case I at least know what I don't know. In the latter case, the data from the experiment -- no matter how many times it's reproduced -- might be impossible to interpret in anything approaching a useful way.

Put simply, a lot of science isn't worth the effort of reproducing - either because it's crap science, or because the cost of reproducing is too high and the documentation/oversight of the protocol is sufficiently rigorous.

The point of science isn't to adhere perfectly to the legalistic tradition of a Baconian religion. The point of science is to learn things.

The only way we "understand" the natural world is by making verifiable predictions. If those predictions can't be consistently verified, then we don't understand the relevant phenomenon at all, and we haven't learned anything.

> To be concrete about it, I'll always prefer a single (set of) instance(s) of a well-designed and expertly executed experiment over 10 reproductions of a crappy experiment.

I'd take 2-3 repetitions of a moderately well-designed and moderately well-executed experiment over either. Even the most well-designed and well-executed experimental protocols can produce spurious results, due to the stochastic nature of the universe.

I think the issue here is that scientists want to get funding, have prestige, and perhaps learn about the world in the process. The public wants to be cured of diseases, see new technologies, and learn about the world too.

There is a disconnect between the motivation and capability of scientists in the current funding system and what the public wants. So an easy solution is that if the public wants reproducible science, they need to pay for it. I'm sure some scientists who couldn't make it into Harvard or Caltech (i.e., me) and thus can't do cutting-edge science would be happy to take the dollars, make a living, and just reproduce the work of others. But you can't simply declare to scientists that they should do X while not enabling them to.

Science is a tool. Tools are means, not ends. We use science to gather facts we can agree on. But science isn't the "truth". Science isn't the facts. It's a process - one that produces facts, if and only if they are reproducible. Otherwise it is faith, religion, or narrative.

What's more, the scientific process is used discretely, one fact at a time. Understanding of our world and its meaning is cumulative, built over the entire context of our experience, and it draws on things like feelings, faith, religion, and narrative.

> Taking papers at face value is really only a problem in science reporting and at (very) sub-par institutions/venues.
> WRT the former, science reporters often grossly misunderstand the paper anyways. All the good reproducible science in the world is of zero help if science reporters are going to bastardize the results beyond recognition anyways...

Science is funded by the public, and done for the public. Good science reporting is very important to ensure that science continues to get funded. Too often, scientific papers are written in a way that makes them incomprehensible to anyone outside the field, whether through pressure to use as few words as possible or through technical jargon.

There's also the possibility that papers are written the way they are so that they remain papers and not books. A five-page paper on someone's findings is much easier to read than a 20-30 page paper where field-specific knowledge is redefined and explained instead of referenced.

I know I made the original "dwindling funding" claim, but it's actually a red herring. I should have said something like "increasingly competitive nature of grants". The argument relevant to this article is NOT about whether there's enough funding for science. Rather, the argument is about how difficult it is to get funding. I think it's fairly uncontroversial (and correct) to say that getting good science funded has become considerably harder over the years. I'm not sure anyone has done the work of trying to quantify that; maybe someone else can help find data for that.

3. From an impact-on-culture perspective -- which is the relevant one in my comment -- I think (2) is more interesting than your data, and also more interesting than (1). The question should be "how difficult is it to fund good science", not "how much are we spending in absolute or relative terms". This is, of course, very difficult to quantify. But looking at percentage of GDP is at least better than looking at absolute dollars.

Oh, I was not commenting on what you do. I was commenting that many who hold that view of non-repetition have leanings favored by the soft sciences. They don't feel the "need" for repetition even in hard science. It is a personal bias that influences their need. It can happen when emotion overcomes hard logic and science.

That seems remarkable to me, perhaps I've missed these discussions. Can you provide evidence that this is a pervasive movement in some sciences, rather than the opinion of a few?

"Many" is a trigger weasel word, of course, and needs backing up.

My interpretation -- perhaps incorrect -- is that you feel the softer sciences are wilfully undermining the quality of harder sciences. I very much doubt this is the case. Some philosophers of science and some softer science key influencers may introduce difficult and challenging questions about the appropriateness and usefulness of some research methodologies (as are people in this thread) but I doubt they'd make the blanket assertion you're suggesting.

Project much? Looks like you're the one getting all heated and angry at someone defending the social sciences and are trying to attack them. It's better not to bring emotion into discussions of science.

Agreed - only important studies are worth replicating. The problem is that no major journal will publish a replication; journals in general push for novelty rather than quality here.

(b) can be a problem with meta-analyses and reviews. When gathering data "from the literature", not all the data gathered is of the same quality/certainty, which can have a compounding effect. The same goes for when someone from a mathematical or computational field tries to create a model using data reported in the literature. It is often difficult when working in an interdisciplinary environment to assess the quality of everything you read, especially if you're not familiar with all the experimental methods.

Also, off topic, but I wonder why you chose a throwaway account to weigh in on this. I hope it's not a "science politics" reason.

> I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

That is not sufficient. Honesty and rigor are of course required for good science, but they are not sufficient.

Even with honesty and rigor you WILL still get false positives. Statistics is used to measure how likely this is, but statistical methods do nothing to tell you whether any particular result happens to be a false positive. For many studies a confidence level of 95% is considered good enough to publish, but if you do the math, that means an honest researcher who publishes 20 such studies has probably published one false result! If there are 20 studies published in a journal, statistically one is false. Thus replication is important.
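A quick back-of-the-envelope check of that arithmetic (a purely illustrative sketch - it assumes each study independently tests a true null hypothesis at the 5% level, which real studies don't exactly do):

```python
# Chance that at least one of 20 studies reports a false positive,
# assuming each independently tests a true null hypothesis at p < 0.05.
alpha = 0.05
n_studies = 20

p_any_false = 1 - (1 - alpha) ** n_studies
expected_false = alpha * n_studies

print(f"P(at least one false positive) = {p_any_false:.2f}")   # 0.64
print(f"Expected number of false positives = {expected_false:.1f}")  # 1.0
```

So even for a perfectly honest researcher, the expected number of false results in 20 such publications is exactly one, and the odds of having published at least one are roughly two in three.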

It gets worse, though: the unexpected is published more often - if it is true, it means a major change to our current theories, and that is important to publish. However, our theories have been created and refined over many years, so it is somewhat unlikely that they are wrong (but not impossible). To put it a different way: if I carefully drop big and small rocks off the Leaning Tower of Pisa and measure that the large rock "falls faster", that is more likely to be published than if I found they fell at the same speed. I think most of us would be suspicious of that result, but examining my "methods and data" would not reveal any mistake I made. Most science is in areas where wrong results are not so obvious.

> It's impossible to re-verify every single paper you read

True, but somebody needs to re-verify every paper. It need not be you personally, but someone needs to. Meta-analysis only works if people re-verify every paper. Note that you don't need to do the exact experiment; verifying results with a different experimental design is probably more useful than repeating the exact same experiment, because it might by chance remove a design factor that we don't even know we need to account for yet.

> And I'm pretty sure literally no scientist takes a paper's own description of its results at face value without reading through methods and looking at (at least) a summary of the data.

I hope not, but even if they check out, it doesn't follow that things are correct. Maybe something wasn't calibrated correctly and that wasn't noticed. Maybe there are other random factors nobody knows to account for today.

The above all assumes that good science is possible. In medical fields you may only have a few case studies: several different doctors saw different people, each with some rare disease, tried some treatment, and got some result. There is no control, no blinding, and a sample size of one. But it is a rare disease, so you cannot do better.

The key here is we need a culture of skepticism. That's the fundamental core philosophical trait of the enlightenment: not belief in what anyone says (just 'cause they say it or have authority to say it) but doubt in claims until they are rationally proven, and proven again, and proven again. Skepticism builds on itself because a culture where everyone is skeptical is a culture where the burden of proof lies with the claimant. This applies to science as well: whether an experiment is repeated once or hundreds of times, the real measure is to what degree it convinces the winnowing number of skeptics aligned against the idea.

Does doing 50 studies and only publishing the 3 that produce favorable results count as repetition? Because, BTW, that's exactly what Big Pharma does, and also intrinsically what the bias towards publishing only significant results does.
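That selection effect is easy to demonstrate with a toy simulation - 50 trials of a "drug" with zero true effect, where only the nominally significant results get written up (purely illustrative; the group sizes and the simple z-test are made-up assumptions, not a model of any real trial):

```python
import math
import random

random.seed(1)

def null_study(n=50):
    """One trial where treatment and control are drawn from the SAME
    distribution, so the true effect is exactly zero."""
    treat = [random.gauss(0, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    # z-score of the difference in means (known variance 1 per group).
    z = (sum(treat) - sum(ctrl)) / (n * math.sqrt(2.0 / n))
    # Two-sided p-value under a normal approximation.
    return math.erfc(abs(z) / math.sqrt(2))

p_values = [null_study() for _ in range(50)]
published = [p for p in p_values if p < 0.05]  # the "favorable" results
print(f"published {len(published)} of {len(p_values)} null studies")
```

On average about 2-3 of the 50 pure-noise trials clear p < 0.05, and if only those reach the journal, the literature shows a "repeatedly confirmed" effect that does not exist.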

Pharma has to send information about studies to regulators beforehand, and regulators pay attention to this for new drugs, which counters the effect you're talking about. Further, they are happy to report inconclusive results; the FDA mostly cares about harm to people, not effectiveness.

Now, after drugs are on the market then there is another wave of less reputable research. But, the FDA has already approved the drug so they don't care as much.

I'm not sure of this. By 'exact' disciplines I'm assuming you mean disciplines more dependent on mathematical proofs. A CACM paper a while back discussed this and found that a large number of papers were not repeatable. If I recall, it mostly came down to nobody sharing code and/or the code not building.

The problem is worst in fields that don't have reproducibility 'built in' to the field.

I do genetics and development, and the main sanity check we have is the distribution of mutant lines. If you say that mutant X does Y, other people are likely going to see that (or not) when they get their hands on it and start poking around. This strength of working with mutants is at the core of the success of molecular biology. Even if you don't set out to confirm someone else's results, you're quite likely to come across inconsistencies in the course of investigation.

If a field lacks that sort of mechanism, they need to take special care to address reproducibility.

Exactly. Most 'good science' starts with premises A, B, and C from their respective papers, and goes on to show that X is likely true because of this new data. In general, it would be impossible to propose X if A, B, and C were not themselves true - they are the foundations of the new research. In that way, even though the publication of X may not directly repeat the experiments behind A, B, and C (though often it does precisely that), it still replicates the core concepts.

As noted, this does not get rid of the confounding negative effects, where paper D was also used as a (in retrospect, faulty) premise. Though no one ever actually comes out and says 'D' is bad.

Over time, A, B, and C theories accrue significant weight while D falls off. It's not as explicit as some may like, but in the end the spirit of replication is well and alive.

I suppose the funding agencies could play a role in this. When you submit a proposal that cites prior work as a basis or motivation for your project, you should be required to show that either 1) the cited works have been reproduced, or 2) you're going to reproduce them.

I dimly recall some physics papers from years ago, with titles such as: "New experimental limit on the existence of..." Whenever my project was going particularly badly, we'd lighten things up by describing it as a negative study. One such highlight was: "New experimental limit on the existence of oil in the vacuum pump."

I think the difference is that we are able to test and understand/verify the operation of the LHC at multiple levels. It's not like they just built what was on paper, flicked the ON switch, and assumed all the data that came out was correct. The components were individually tested and collisions at energies accessible by other supercolliders were done so we could compare to other results.

Unfortunately for medical drugs and psychology, researchers are mostly gathering data without an understanding of the underlying mechanisms. There are also virtually never proposals which can be tested for compliance with reality in a quantifiable and isolated way, as we can in physics, chemistry, or parts of biology.

So I feel the replication crisis is not a matter of various fields just not knowing how to do "good science", but that these fields by their nature make clean hypothesis testing vastly more difficult and p-hacking and statistical trickery (intentional or otherwise) harder to sift out.

> but that these fields by their nature make clean hypothesis testing vastly more difficult and p-hacking and statistical trickery (intentional or otherwise) harder to sift out.

How certain are we that it's 'the nature' of these fields rather than political choices that were solidified long ago? There could be much larger samples and six-sigma confidence in the life sciences if grants were allocated differently. I know this is happening, for example, in neuroscience, where the (private) Allen Institute is creating significantly more rigorous (and useful) datasets than the bulk of studies, because they fund their studies differently.

Thinking about physics, the LHC experiment benefits from not being a completely isolated study. Instead, its results are multiple pieces of a larger jigsaw puzzle or web of interconnected results that include both a quantitative predictive theory and other experiments that test other predictions of the same theory. And the Higgs was not the only thing that could be tested at the LHC. For instance, it probably replicated a huge slew of prior studies during construction and testing of the system, from already-known particles all the way down to the calibration of voltmeters.

In physics, some theories have been tested and interconnected to such a degree that if an experiment conflicts with theory, it's reasonable to suspect the experiment rather than discard the theory. That's what happened with the apparent faster-than-light neutrinos measured in Italy - it turned out to be something like a partially unplugged cable, and once corrections were made for the cable, the theory snapped right back into place. Such theories can be trusted as tools in day-to-day physics. For my humble graduate experiment, refutation of any major law would have led me to fix the experiment and try again; I simply assumed things like conservation of charge. Such laws probably include electrodynamics, gravitation, quantum mechanics, thermodynamics, and Darwinian evolution.

And this is a common feature of major studies in physics, chemistry, evolutionary biology, and other sciences as well.

Where we may run into trouble is in branches of sciences that don't have over-arching theories or that web of connected results. At the other end of the scale are areas of sciences where the results are mainly a database of observed statistical correlations, with little or no apparent progress towards a general theory. When someone publishes a surprising result, there is no reason to say that the experiment must have been done wrong. You just add it to the pile. The best that can be hoped for is that some kind of meta-analysis will demonstrate an inconsistency among multiple studies. Those fields don't have the "hard" theories that can be used as tools to test experimental results.

To be fair, there's another situation, where you have not one, but zero, reproducible results. That's the state of affairs in the search to unify gravity and quantum mechanics.

There were several "bogey Higgs" at different energies during earlier Tevatron and LHC runs - maybe 3-5 events that hinted at a signal. Then they disappeared. But the real Higgs was not confirmed until there were 40 events across two different decay paths at two different detectors, i.e. 5-sigma significance. Different runs and decay paths are like multiple replications.
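For reference, the 5-sigma discovery threshold corresponds to a tiny tail probability under a normal distribution (this is the standard sigma-to-p conversion, nothing specific to the Higgs analyses):

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """One-sided tail probability of an n-sigma upward fluctuation
    of a standard normal variable."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for s in (3, 5):
    print(f"{s} sigma -> p = {sigma_to_p(s):.1e}")
# 3 sigma ("evidence")  ~ 1.3e-03
# 5 sigma ("discovery") ~ 2.9e-07
```

A 5-sigma result means the chance of the background alone fluctuating up to the observed signal is about one in 3.5 million, which is why seeing it in two decay channels at two detectors is so convincing.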

The LHC is somewhat of a special case, because repetition is extremely difficult for obvious reasons. That doesn't preclude making repetition an appropriate requirement in 99.9% of other scientific findings. In this particular case, we just have to make sure that the LHC guys conform to extremely stringent burdens of proof and high scientific accountability, which they already impose on themselves but which the 99.9% do not impose on themselves for a variety of reasons (an uncomfortable truth, but due to many things like pressure to publish and generally being less good, a truth nonetheless).

Physics is a special case for several reasons. [1] Most importantly, there is a good theory, with the effect that different experiments measure the same thing. So once you start looking for a specific deviation from the Standard Model, you can not only look at accelerator experiments, but also at g-2 experiments [2], or the Lamb shift, or beta decay in the case of the electroweak interaction. (Or at different places within accelerator experiments, in the sense that data at one point should lead to corrections at another point.)

In other disciplines, the role of repetition is a lot more important. The look-elsewhere effect means that your statistical significance depends on all other research, published or unpublished. [3] If you are doing a repetition study, the look-elsewhere effect just goes away. So good science in the first place is very important, of course, but there is a lot of value in doing repeated experiments for fundamental statistical reasons, especially in fields where there is no good predictive theory (everywhere except physics) and where it is very hard to reach very high significance.
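
The look-elsewhere effect is just arithmetic. A rough sketch (function name is mine; it assumes the searches are independent, which is the simplest case): if noise alone clears your local threshold with probability alpha, and you look in n places, the chance it clears it *somewhere* grows fast.

```python
def prob_false_alarm(alpha: float, n: int) -> float:
    """Probability that at least one of n independent tests on pure
    noise comes out 'significant' at local threshold alpha."""
    return 1 - (1 - alpha) ** n

print(prob_false_alarm(0.05, 1))   # 0.05: one pre-registered test, as advertised
print(prob_false_alarm(0.05, 20))  # ~0.64: search 20 places and noise usually 'wins'
```

This is why a single repetition study is so valuable: it is one pre-specified test, so the quoted significance actually means what it says, regardless of how many other searches were run (or quietly shelved) elsewhere.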

[1] I recently read

John Lewis Gaddis, The Landscape of History, 2004

which is an essay on the epistemology of history, and in a way tries to reject "physics envy," with the argument that physics is everything where there is a good theory, i.e. where we are winning. Consequently, physics envy looks a lot like a cargo cult; other disciplines need to figure out their own epistemology.

[2] Incredibly high-precision experiments that measure the magnetic moment of the electron.

Not that repetition isn't important in physics; it is, but for different reasons. Nobody is coming around selling physicists anything based on a non-repeated experiment. Drug studies have a lot of perverse incentives in play, and the subsequent interpretation of results, for example for off-label use, is also skewed toward one set of interpretations. There could be billions of dollars in sales at stake.

>Repetition is good, of course, but isn't a replacement for good science in the first place.

Repetition is science.

>Define repetition.

Google says: "the recurrence of an action or event."

Science requires (1) a prediction of an observation and (2) repeated observation consistent with the prediction.

If there is no repeated observation of what was predicted, it is not science.

>But everyone believes the results, and believes that they represent the Higgs.

It doesn't matter what is believed. "Everyone" used to believe all kinds of things, that doesn't mean that those things were true, or accurate. Repeated observation of prior predictions is all that matters.

>When it costs billions to construct your experiment, sometimes reproducing the exact same thing can be hard.

That doesn't excuse, or allow in, things that haven't been reproduced.

With all due respect, I do not agree with your example of the LHC, for two reasons primarily:
The first has already been mentioned: the various groups and experiments/detectors.
The second, I think, is more important. In order to claim a discovery at any of those experiments, certain statistical constraints must be met. The 5-sigma rule is by its very nature garnered by repetition, and is necessary because of the nature of quantum mechanics and particle physics. Here is a write-up on CERN and 5 sigma:
http://physicsbuzz.physicscentral.com/2012/07/does-5-sigma-d...

What has been seen is a set of instrument readings; the instruments were built according to some mathematical model, to register conditions that are supposed to happen according to a bunch of theories, and then a lot of calculation follows, which presumably confirms the assumptions given the readings.

A simple analogy is the pictures one gets from fMRI. These pictures are, well, pictures based on approximations, not the mind. Not even close.

There is a paradox: one cannot make an instrument out of atoms to "see" what is going on inside those atoms. What they saw were pictures created out of mathematical models, not reality as it is.

What does "prove mathematically" mean here? He constructed a self-consistent mathematical theory, and validated that it reproduced both the special relativistic and Newtonian limits, but I don't think "proved mathematically" is a good way to state that.

You can't "mathematically prove" a theory agrees with reality except by reconciling it with experiments (possibly indirectly through links to previous theories), so the only difference here is that a non-ad-hoc mathematical theory of depression medication doesn't appear to be feasible, not that physics has some magic alternate method of proof.

You're missing the point of what I said with the Einstein example. He proved it via maths. Whether or not it was empirically validated is irrelevant. It was true mathematically.

Today, this cannot be done in fields like medicine and psychology. No one can create a mathematical proof of a cure for cancer, make a treatment based upon that proof, and have it work on the first try.

I've done enough mathematical proofs that I don't think I need to "google it", but I read the Wikipedia page anyway. It failed to explain how a mathematical proof could have anything to do with "proving" General Relativity in the absence of empirical evidence. He certainly proved (as I said before) that it agreed with existing, experimentally backed theories using mathematics, but if you have a method that allows you to deduce the truth of physical laws from purely mathematical proofs (no links to experiment at all!), you'll revolutionize epistemology. As it stands, most scientists are using the crutch of experimental evidence at some level, either directly or by reasoning within theories that are, at least for now, validated by experiment.

> Whether or not it was empirically validated is irrelevant. It was true mathematically

If you mean it was a self-consistent theory, note that there are infinitely many physically incorrect theories which are "true mathematically". It is true mathematically in the same way number theory is "true mathematically", but you don't see anyone assume that we can describe gravity with prime number theory. If you mean it was mathematically proved consistent with previous physical theories backed by evidence, we're back to the experimental link, and there are also infinitely many physically incorrect theories which would agree with Newtonian mechanics and special relativity. If you mean he proved it satisfied some properties we expect from physical theories because they've been consistently upheld (e.g. energy conservation), there are infinitely many wrong ones there too.

Mathematical convenience has been an excellent guide in physics, particularly fundamental physics (electromagnetic waves, antiparticles, and the Higgs boson all resulted from positing things for mathematical convenience), but it is not a substitute for verifying novel experimental predictions. Nobody taken seriously in the physics (or mathematics, for that matter!) community thinks it is, not even the often-decried string theorists.

Epistemic simplicity is one of the things I love about math, but it's hard to extend it wholesale to physics. GR will almost certainly be proved wrong some day, at least in some regime (Einstein was, ironically, one of the first to feel it needed something more, as an alternative to the inelegant cosmological constant). If GR had been mathematically proven, this would be terrible, as it would prove our mathematics inconsistent!

As for the existence of objective reality, that's another thing that seems hard to prove conclusively. I'd suggest looking at some of the basic epistemology surrounding modern science (e.g. Popper[1]) for some thoughts on this.

As an interesting aside: while it's true that mathematical proofs (idealizing here, and assuming incorrect proofs are never accepted by the mathematical community, because on occasion they are) are absolute statements of truth, they may not be stating precisely the truth you expect. Thanks to Gödel[2], we know that it is not possible for any consistent mathematical theory rich enough to talk about addition and multiplication of natural numbers to prove its own consistency. As a result, we may have a proof that 2+2 != 5, but that doesn't actually exclude the possibility that there is also a proof of 2+2 = 5. In fact, his result shows we will never be able to prove such a thing does not exist (since that would imply the consistency of our mathematical system). So our absolute truths from proofs of X are actually of the form "ZFC implies X", where ZFC is the background theory generally taken to underlie modern mathematical work unless otherwise specified. So things are not so clear cut even here.
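
The point can be stated compactly in standard notation (a sketch; Con(ZFC) here abbreviates the usual arithmetized consistency sentence):

```latex
% Gödel's second incompleteness theorem, applied to ZFC:
% if ZFC is consistent, it cannot prove its own consistency.
\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})

% Hence an "absolute" theorem X is, strictly speaking, the
% relative statement that X follows from the background axioms:
\mathrm{ZFC} \vdash X
```

So even the cleanest proof only certifies derivability from the background theory, not truth in some axiom-free sense.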

You've totally lost sight of the issue at hand. Now you're talking about things utterly unrelated to this discussion.

It seems you're more interested in dicing up and attacking what I said, instead of what I mean.

I'll re-articulate it for you once more: we don't have a way to mathematically prove that a drug treats depression, or to explain psychological phenomena mathematically. The studies in the article are talking about fields which rely on empirical observation.

What? You've been talking about (a) whether GR needs empirical observation, (b) whether things are true whether people admit them or not (i.e. objective reality), and (c) mathematical proofs being evidence of absolute truths. Those were my three paragraphs.

How many problems have you worked through in GR? Have you gone through the proofs that GR recreates Newtonian gravity in the low-energy limit? If not, don't go around talking about how Einstein proved GR was a true description of reality with naught but mathematical proof when he didn't think he did that.

Physics has the same requirements for empirical evidence as the life sciences; it's just that we work within regimes where we can apply fundamental, empirically validated mathematical theories directly. If you don't believe me, go ask another physicist.

Yes, it is - that's the very basis for science. One of the main problems is that many academic disciplines have been wrongly classified as hard science (see "social" sciences).

>The LHC has only been built once.. But everyone believes the results, and believes that they represent the Higgs.

Nobody intelligent "believes" anything. We examine evidence and draw tentative conclusions based on that evidence, always retaining doubt because, unless you are omniscient, there is always new information that can come to light that can cause you to change your conclusion. Science is a process. If you "believe" anything without doubt, you aren't a scientist, you are a priest.