
Tuesday, December 12, 2017

Research perversions are spreading. You will not like the proposed solution.

The ivory tower from The Neverending Story

Science has a problem. The present organization of academia discourages research that has tangible outcomes, and this wastes a lot of money. Of course scientific research is not exclusively pursued in academia, but much of basic research is. And if basic research doesn’t move forward, science by and large risks getting stuck.

At the root of the problem is academia’s flawed reward structure. The essence of the scientific method is to test hypotheses by experiment and then keep, revise, or discard the hypotheses. However, using the scientific method is suboptimal for a scientist’s career if they are rewarded for research papers that are cited by as many of their peers as possible.

To the end of producing popular papers, the best tactic is to work on what already is popular, and to write papers that allow others to quickly produce further papers on the same topic. This means it is much preferable to work on hypotheses that are vague or difficult to falsify, and stick to topics that stay inside academia. The ideal situation is an eternal debate with no outcome other than piles of papers.

You see this problem in many areas of science. It’s the origin of the reproducibility crisis in psychology and the life sciences. It’s the reason why bad scientific practices – like p-value hacking – prevail even though they are known to be bad: because they are the tactics that keep researchers in their jobs.

It’s also why in the foundations of physics so many useless papers are written, thousands of guesses about what goes on in the early universe or at energies we can’t test, pointless speculations about an infinitude of fictional universes. It’s why theories that are mathematically “fruitful,” like string theory, thrive while approaches that dare introduce unfamiliar math starve to death (adding vectors to spinors, anyone?). And it is why physicists love “solving” the black hole information loss problem: because there’s no risk any of these “solutions” will ever get tested.

If you believe this is good scientific practice, you would have to find evidence that the possibility to write many papers about an idea is correlated with this idea’s potential to describe observation. Needless to say, there isn’t any such evidence.

What we witness here is a failure of science to self-correct.

It’s a serious problem.

I know it’s obvious. I am by no means the first to point out that academia is infected with perverse incentives. Books have been written about it. Nature and Times Higher Education seem to publish a comment about this nonsense every other week. Sometimes this makes me hopeful that we’ll eventually be able to fix the problem. Because it’s in everybody’s face. And it’s eroding trust in science.

At this point I can’t even blame the public for mistrusting scientists. Because I mistrust them too.

Since it’s so obvious, you would think that funding bodies take measures to limit the waste of money. Yes, sometimes I hope that capitalism will come and rescue us! But then I go and read that Chinese scientists are paid bonuses for publishing in high-impact journals. Seriously. And what are the consequences? As the MIT Technology Review relays:

“That has begun to have an impact on the behavior of some scientists. Wei and co report that plagiarism, academic dishonesty, ghost-written papers, and fake peer-review scandals are on the increase in China, as is the number of mistakes. “The number of paper corrections authored by Chinese scholars increased from 2 in 1996 to 1,234 in 2016, a historic high,” they say.”

If you think that’s some nonsense the Chinese are up to, look at what goes on in Hungary. They now have exclusive grants for top-cited scientists. According to a recent report in Nature:

“The programme is modelled on European Research Council grants, but with a twist: only those who have published a paper in the past five years that counted among the top 10% most-cited papers in their discipline are eligible to apply.”
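To see how coarse such a cutoff is, here is a toy sketch of a top-10% eligibility rule. The actual selection criterion (citation window, field normalization, tie-breaking) is not specified in the Nature report, so everything below – the function names, the numbers – is an invented illustration, not the real rule:

```python
# Hypothetical sketch of a "top 10% most-cited" eligibility rule.
# The real criterion (citation window, field normalization, ties)
# is not given in the report; this is an assumption for illustration.

def top_decile_threshold(citation_counts):
    """Minimum citation count needed to rank in the top 10% of a field."""
    ranked = sorted(citation_counts, reverse=True)
    cutoff_index = max(0, int(len(ranked) * 0.10) - 1)
    return ranked[cutoff_index]

def is_eligible(paper_citations, discipline_counts):
    """A paper qualifies if it meets or beats the top-decile threshold."""
    return paper_citations >= top_decile_threshold(discipline_counts)

# In a hypothetical field of ten papers, only the single most-cited
# paper makes the cut:
field = [120, 95, 60, 40, 33, 20, 11, 7, 3, 1]
print(is_eligible(120, field))  # True
print(is_eligible(95, field))   # False
```

Note how blunt the filter is: the paper with 95 citations is excluded just as firmly as the one with a single citation, which is exactly the property that rewards working in large, heavily-citing fields.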

What would you do to get such a grant?

To begin with, you would sure as hell not work on any topic that is not already pursued by a large number of your colleagues, because you need a large body of people able to cite your work in the first place.

You would also not bother to criticize anything that happens in your chosen research area, because criticism would only serve to decrease the topic’s popularity, hence work against your own interests.

Instead, you would strive to produce a template for research work that can easily and quickly be reproduced with small modifications by everyone in the field.

What you get with such grants, then, is more of the same. Incremental research, generated with a minimum of effort, with results that meander around the just barely scientifically viable.

Clearly, Hungary and China introduce such measures to excel in national comparisons. They don’t only hope for international recognition, they also want to recruit top researchers hoping that, eventually, industry will follow. Because in the end what matters is the Gross Domestic Product.

Surely in some areas of research – those which are closely tied to technological applications – this works. Doing more of what successful people are doing isn’t generally a bad idea. But it’s not an efficient method to discover useful new knowledge.

That this is not a problem exclusive to basic research became clear to me when I read an article by Daniel Sarewitz in The New Atlantis. Sarewitz tells the story of Fran Visco, lawyer, breast cancer survivor, and founder of the National Breast Cancer Coalition:

“Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer. It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says.”

So, no, lemmings chasing after fruitless topics are not a problem only in basic research. And the above-mentioned overproduction of useless models is by no means specific to high energy physics:

“Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality?

Consider, for example, a 2012 report in Science showing that an Alzheimer’s drug called bexarotene would reduce beta-amyloid plaque in mouse brains. Efforts to reproduce that finding have since failed, as Science reported in February 2016. But in the meantime, the paper has been cited in about 500 other papers, many of which may have been cited multiple times in turn. In this way, poor-quality research metastasizes through the published scientific literature, and distinguishing knowledge that is reliable from knowledge that is unreliable or false or simply meaningless becomes impossible.”
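The “metastasis” mechanism the quote describes can be made concrete with a toy citation graph (the papers and edges below are invented for illustration, not taken from the article): every paper that cites the flawed result, directly or through intermediaries, inherits the doubt.

```python
# Toy model (not from the article): how dependence on one flawed result
# spreads through a citation graph. Each edge points from a citing paper
# to the paper it cites; we collect everything that transitively cites
# the bad paper via a breadth-first search over reversed edges.
from collections import deque

def transitive_citers(cites, bad_paper):
    """All papers citing bad_paper directly or via intermediaries."""
    cited_by = {}
    for citer, cited in cites:
        cited_by.setdefault(cited, []).append(citer)
    seen, queue = set(), deque([bad_paper])
    while queue:
        paper = queue.popleft()
        for citer in cited_by.get(paper, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return seen

# Invented graph: B and C cite the flawed paper A; D cites B; E cites D.
edges = [("B", "A"), ("C", "A"), ("D", "B"), ("E", "D")]
print(sorted(transitive_citers(edges, "A")))  # ['B', 'C', 'D', 'E']
```

In this toy graph only B and C cite the flawed paper A directly, but D and E depend on it at one remove – the set of potentially contaminated papers grows with every citation generation.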

Sarewitz concludes that academic science has become “an onanistic enterprise.” His solution? Don’t let scientists decide for themselves what research is interesting, but force them to solve problems defined by others:

“In the future, the most valuable science institutions […] will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be.”

As one of the academics who believe that understanding how nature works is valuable for its own sake, I think the cure that Sarewitz proposes is worse than the disease. But if Sarewitz makes one thing clear in his article, it’s that if we in academia don’t fix our problems soon, someone else will. And I don’t think we’ll like it.

133 comments:

Yes, all true, all right, all already said, so please stop feeding the system. Senior people in academia should stop advertising grad school or postdoc positions. Stop enticing young students and making them believe that a career in academia is something good to pursue. Because what these young people will do is go into academia and carry on the current system. They'll have no other choice. It's unreasonable to expect that future young researchers will change the system, simply because they can't. If they tried they would commit career suicide, so they won't. Senior people could, but all they do, at best, is complain about the system and talk about how bad it is, only to carry it on as is when they sit on selection committees for tenure-track positions, awarding whoever has more papers, more citations etc. So please, stop throwing young lives into this carnage. This might be a way to fix academic science. Do not feed it.

If you give people perverse incentives, they will act perversely. Peer review, grant funding, the tenure track – the whole academic system – these things can be manipulated and abused. In the end, it comes down to some committee deciding what area of research to fund, and which research groups in particular. Then we can ask how these committees are formed and what the incentives of their members are. The funding could be very arbitrary if you don't include a criterion of high-profile publication.

I have an example of a semi-failed project in biotech: the Florida government came up with the idea of seeding biotech research, to make Florida into a biotech hub. Since governments like big solutions, they decided to fund the Scripps Research Institute and the Max Planck Institute to open branches in the South Florida Palm Beach area.

I am quite familiar with the history of Scripps Florida: the Florida government wrote checks to the tune of half a billion USD – most of that funding went to real estate developers; the institute bought land and built three very expensive but dysfunctional buildings with it, and hired lots of researchers. It started in grand style but it was a mismanaged bureaucratic effort, and when the government funding ran out Scripps found itself unable to finance its operations and came to near bankruptcy. Why did the medicinal chemistry groups there not discover new drugs, why are there not many biotech spin-offs like startup biotech companies coming from Scripps FL? I witnessed that the people who took over medicinal chemistry and "translational research" actually had no experience in the field (they had their own research groups, which were only tangentially related) – they took over by a power grab, to control the funding and people to their own advantage, and predictably they ran it into the ground. And the reason why this happened is that the incentives were perverse and disconnected from any accountability, enabling all the crazy stuff that happens within a large academic institution.

What I think is best is a mixture of approaches. Big successful research groups are worth funding because of the virtuous spiral there (the fame and funding and good working conditions attract the best students and postdocs), but big bureaucratic academic institutes are a serious problem. At least in biotech I really like the idea of research incubators: if you are going to subsidize research and technology development, do it in a way that makes it easier for lots of new small groups to start and survive up to medium size, but do not make their existence too comfortable, so that there is still an incentive for them to eventually become independent and move out.

It seems to me that the problem as you describe it is personal reward (usually ego bullshit) versus collective gain and goals. In the physical sciences, I think Google, IBM and others will lead the game sooner than we think.

Looking at what their AlphaZero AI did in chess in just a few hours, maybe they already lead...

Experimental data -> theory -> testable prediction may be just another complex game.

You are always on point, Dr. H. The rise of the citation frenzy for grant purposes has distorted science, but there was a logic to it initially. Only scientists, through citations, could filter the quacks from real science. I think that was the initial idea, anyway. So if you open up grant awards to ideas that are unpopular in the science community, you will eventually have to deal with that problem...

I think it's actually more fun to work on problems that other people aren't already solving - that's what curiosity is all about, finding something really new. So a rather flat incentive system that lets scientists have fun as long as they don't completely slack off might work better than what we have now.

It's not that simple, because academics allow others to define what counts as "collective goal". Add to this that selection acts not so much on actions as on people, meaning the ones who survive see little wrong with what's going on.

My doctoral dissertation and post-doc were in fundamental strong-interaction physics which has a healthy theory-experiment feedback loop, but absolutely no end-user. Not even other physicists care very much about confinement-region QCD. So I moved on.

After that, I did medical physics research, which has a number of inherent limitations (can't experiment on humans, humans want to keep their medical information private, people like to sue doctors/hospitals/pharma/whoever has money, the human body and its failings are widely varied and super-complicated), as well as significant commercial aspects which really shouldn't be involved. It was really frustrating and political and I really wasn't doing anything but making medicine more expensive, so I moved on.

I have worked in defense since then, doing basically nuclear engineering jobs, and it's not bad, since there is a lot of room for improvement and people actually use the things they pay for and give you feedback. It's engineering, though, and I don't know how to improve academic science without turning it into engineering, but your idea sounds pretty good.

Publications are the currency, and journal plus citations define the value of each publication-coin.

I wonder what impact it would have if publications would have to include data and used software in a reproducible manner?

Today the model scientist is a natural at spotting where the money is, is gifted at capturing a lot of funding, knows how to sell each success however minor, and is able to write many papers fast. Being thorough and precise, and being able to follow up on an idea with persistence over many years, is not a property that is rewarded by the system. There is no place for the potential scientist who is just great at having good ideas and creating valuable data but is maybe not the best salesperson. I think we're – as a society – throwing away a lot of person-power in science.

By the way, I should mention that Google's record of achievement in the life sciences has been really troubled. They started an Alphabet company called Verily, in grand style, and a few years later they were dealing with mass revolt and exodus.

Each field of science has its preferred style of management, and it turns out that people with a background in computing often have a wishful arrogance about engineering living organisms. (The reason why breakthroughs in medicine are hard to achieve through "streamlined", systematic, pre-planned effort is that, unlike software, living things are really poorly designed and their control and signaling mechanisms are byzantine and to a great degree hidden; like reverse-engineering poorly written spaghetti legacy code lacking any documentation, biology resists streamlining, and high computing power will not help much.)

As someone who is hopefully starting graduate school in physics next year, do you have any advice?

On the one hand I would love to just follow my nose and work on whatever fancies me. On the other I want to do this long term, and I see who gets hired. It's whoever has been successfully working on whatever happens to be in vogue at the moment.

I think it's fucked that I have to make this choice, but I don't see an obvious alternative. So, what would you suggest?

I read once that Peter Higgs said his method of working would never have survived the publish-or-perish frenzy. Citations seem to me like a very crude way of estimating the worth of work done – but it's very easy to automate – which is maybe where the problem is. Maybe the system isn't broken; maybe it's entered a chaotic regime, like some self-organised systems seem to fall into once in a while...

I think "a rather flat incentive system that lets scientists have fun as long as they don't completely slack off" is what they used to have in state-run research institutes in the Eastern Bloc. It may work reasonably well in theoretical fields, but it is not ideal for experimental science (where you need to run expensive instruments and materials, and get a lot of support work from other research groups – then it becomes a question of who is getting these resources, especially if there are several competing labs and several project proposals, and maybe there are also political and institutional interests at play, etc.). In fact, the system you propose was not even a success for the IAS, where so many luminaries came to rest in their solipsistic desolation (while the younger people were having fun spreading Feynman diagrams – so it was not a total waste).

You are so right, Sabine. This "publish or die" mentality and "be quoted or die" mantra is destroying science. Peer reviews are a vicious tool meant to preserve the most conservative approaches. The whole situation resembles a medieval scene. If you don't agree with Aristotle, burn at the stake! And there is little hope for an eventual new Galileo. How sad.

A number of years ago, there was an interesting attempt to circumvent those incentive problems by that well-known bastion of creativity and innovation, the U.S. Department of the Army. They would let a small number of researchers study any topic whatsoever for about 15 percent of their time. It was very popular and attracted good researchers. Of course, when budgets got tight, it was one of the first things that was cut. When people spend much of their time figuring out how to get the word "research" out of their program names because budget cutters love to target them, the problems you describe only get worse.

I suspect there is a flip side, namely how many research dollars are being chased. Decreasing the pool of money exacerbates the enumerated perverse incentives every bit as much as increasing the number of researchers, or so it would seem to me.

I think we have to admit that science and society are an evolutionary process that we can't control. It will have many dead ends, but over trillions of processes the evolutionary process of science will happen.

Our research is pretty much all done with an end user in place, and they are key collaborators in the work. The research is designed to fit in with their decision making processes. This is incredibly interesting for the application of theory, because it means it has to be grounded.

The downside is that academic publications suffer, because so much time is spent working in collaboration and writing reports that people are going to use. If we had a proportion of academic funding to match the applied funding – enough to hire and mentor some ECRs – that would be peachy, because we could then write as many papers as reports. It's not going to happen though. All of our non-project time is spent on business development.

I am involved in one piece of work that overturns a 'settled' aspect of natural science and does not have dedicated end users. It's also unfunded. It is proving immensely unpopular with the discipline and useful to everyone else.

I disagree with Sarewitz; you don't have to give the power of selection to other people – you just have to work with them to get mutually satisfying outcomes. There's no added reward for that, so I'm a big fan of open-source publishing, where work that goes into practice is seen and rewarded for it.

Try not to overspecialize too early. Have at least two topics you can go with, otherwise you are too likely to get stuck. Be on the lookout for stipends: If you have your own money you don't have to care all that much what other people want. Aim at positions that are not project-bound (though I know there aren't many) or apply for your own funding (which you can do once you have a PhD). None of this really solves the problem, but it helps to alleviate peer pressure. And learn to say "no". If you are any good at what you do, people will come lay projects at your feet and ask you to do the calculations because papers must be produced. Ask yourself whether the topic is worth it. There's only so much time in your life. Best,

I would put the blame squarely on the shoulders of journal editors. Most will accept papers that promote their school of thought and will suppress papers that are contrary, regardless of whether they point to new knowledge. If the editors are post-empiricists, for example, they will not promote papers that can be empirically verified to the detriment of their school of thought. For example, the WIMP dark matter paradigm continues to be promoted despite overwhelming evidence for the nonexistence of WIMPs.

Nicely put, and scary! The poster child for "we haz bad science" has to be the field of nutrition. The stupid research, broken paradigms and lack of self-correction boggle the mind. (Nina Teicholz exposes some of the nonsense in this now-famous article published in the BMJ: http://www.bmj.com/content/351/bmj.h4962)

Please keep writing on this topic, Sabine! There are far too few (intelligent, non-conspiracy-minded) people making the case that science has a serious problem. Journalist Daniel Engber, writing from Slate, is an exception. He wrote something similar a few months ago: http://www.slate.com/articles/health_and_science/science/2017/08/science_is_not_self_correcting_science_is_broken.html

Science, like the evolution of species, has the capacity to self-correct errors. Bad theories will eventually disappear. There is no need to worry about the progress of science. Bad papers will not be referenced and good papers will have a big impact (whatever the policy). The adequacy of published theories with experiment will play the role of Darwinist evolution. Science purges itself under the pressure of experiments and reality.

CM, Michael, and everyone else who still believes that science "self-corrects".

You have an elementary misunderstanding about self-organizing systems, and that includes evolution and most likely also market economies. For a system to be able to optimize anything (reproduction, pricing, describing nature), it needs to have a suitably configured feedback loop. That's why market economies only work in a suitable environment where contracts are binding, advertisements aren't allowed to be blunt lies, monopolies are broken up, etc. The same is necessary in academia: The system will only work if suitably configured, meaning the feedback loop must be functional. It is not.

I disagree with Sarewitz on the conclusion that he draws, but I think his diagnosis of the problem is spot on: The idea that the "free play of free intellects" is sufficient is and has always been entirely idiotic. It's the same idiocy that is behind the belief that an "invisible hand" guides market economies. In reality, nothing's for free and there are no invisible hands. It's up to us to make sure collaborative enterprises actually work towards the goal we want to reach.

I just mean you can add vectors to spinors no problem, provided you suitably define the space you work in and the addition law on it. But I have found many physicists think there's something wrong with that. I really just mentioned this as an example of discarding something unfamiliar because it would take time and effort to think about it.

@Sabine: Thanks. Without the intention of starting a discussion on the issue, which would indeed be off-topic, suffice it to say that I guess I belong to the physicists who think there is something if not flat-out wrong with adding vectors and spinors, then at least something problematic with the notion. For how is one to construct from such entities Lorentz invariant quantities? But perhaps I am just blindly subscribing to the standard notion, without taking "time and effort to think about it".

As I see it, there are several problems with the current system, but moving back to a more egalitarian funding distribution would solve many of them.

Take for instance European funding for basic research. Programs such as ITNs or ERCs promote huge, often multimillion, grants with an extremely small success rate. Unfortunately, this elitist view of how research should be conducted i) exacerbates problems with the current citations/impact-factor evaluation system, ii) does not promote diversity (of ideas) in basic research, and iii) is probably highly inefficient. See for instance http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0065263. Honestly, betting all your money on a single horse has never been a brilliant strategy.

It is true that EU research funds were originally conceived as a complement to national funding, but the sad story is that many national research funding agencies are also shifting towards this big-grants approach.

A few big grants could stay to tackle projects that really need huge collaborations or a large pool of manpower to be addressed. But the majority of funds should be spread out thin. In particular, theoretical research does not need that much. Give a theoretical PI a single postdoc and some money to travel and you will get a productive scientist.

Also, like in the old days, universities and research institutions should get money to open more temporary (2-3 year) postdoc positions not linked to any particular project. You apply, get selected by the faculty, move in, and then you choose for yourself the people and the problems you wish to work with and on.

Do this and you will alleviate this ridiculous bibliometrics race which is plaguing modern academia.

Ah yes, and of course hiring committees should also start again to really evaluate candidates (by, for instance, reading their papers and talking with them) rather than handing the selection process to the editorial choices of Nature and Science. The scientific tastes of a bunch of former postdocs turned editors currently weigh far too much on academic careers (more a responsibility of the hiring committees, though).

A multimillion Euro grant sounds like a lot but it's not. It pays a PI plus maybe 3 postdocs for a 5 year project, and that is assuming they are theorists and don't need any lab equipment. Yes, I know, it's shocking how much money goes into taxes, benefits, and overhead, but that's academia for you. You can estimate 100k per person per year.

Thank you for your reply, but I know exactly how much overhead and taxes (rightly so, the latter) I am paying on my grants. And paying a PI's salary out of grants has always struck me as a bit odd. Universities already pay faculty salaries, and they should not overload faculty with teaching/admin, so that they can still do research.

Anyhow, my point is exactly that: due to the obvious constraints on total grant money, it would be much better to give 3 PIs the money to pay a postdoc each rather than granting the resources for 3 postdocs to a single PI, thus leaving the others with no manpower for their research.

It is rather obvious that the current system promotes a rich-get-richer effect and strongly encourages academics to overhype their research (these days we all have to be nothing less than exceptional; it seems there is no more interest in good and competent people) and to game the impact-factor/citation game. I am really worried this is undermining basic research freedom.

"Yes, I know, it's shocking how much money goes into taxes, benefits, and overhead, but that's academia for you."

Overhead is a different issue, but taxes and benefits are based on the gross salary and are not different for academia. I don't think that they are shocking; I think that they are good. There are countries which don't have them, where people die because they can't pay their hospital bill. I don't want to live there.

You seem to assume that everyone who applies for an ERC grant has a faculty position. This clearly isn't so. In fact most of the people I know apply for an ERC grant in the hope of getting a faculty position afterwards. The ERC doesn't process applications for postdoc salaries because the national funding agencies have those on their agenda.

Yes, the current system promotes a rich-get-richer effect. But that's because the ERC grants (despite claims to the contrary) are exceedingly risk averse and are only given out for topics that already have a large number of followers or to people who are already established. You only have to look at who gets these grants or for what and that's plainly obvious. (That's leaving aside that very few of them go to basic research to begin with.)

Having said that, I don't think you can fix the problems with basic research on 5-year grants, period. It's not enough time.

The "shocking" remark sums up the reaction I usually get. People tend to be impressed if they hear how much money I get in grants, but if you look at what's left on the paycheck it's not much. To top it off overheads pay for the administration who usually have better contracts than those who get the grants, while I can't find funds to pay my student who'd be more than covered by the overhead. Don't get me started.

@Sabine: Thanks for the link. After posting my former comment, it actually did occur to me that what you might have in mind was Clifford/geometric algebras. On my book shelf I even have the book "Clifford Algebras and Spinors", by Pertti Lounesto. But I have always had some difficulty in really getting the formalism 'under my skin'. And the Dirac equation, say, in that formalism does not strike me as particularly beautiful, unlike the standard column-spinor formalism; but I guess such an argument is not held in high esteem by you in view of your upcoming book 'Lost in Math', with the subtitle 'How Beauty Leads Physics Astray'?

Exactly :) No, I don't find arguments from beauty convincing. But my point here was merely to say that the more researchers are pressed for time and have to produce, the less likely they are to make an effort learning something new. I know, not a very deep insight.

I agree that there are some serious problems in academia. I think Sabine's later comments re: overhead and administration are more the problem in my experience than the perverse incentives, though those are certainly there, maybe more in fields other than physics. Remember: it's the administrators who drove the rush towards metric-driven evaluation. I can tell you that in the hiring and tenure decisions that have recently taken place within the physics department at Columbia, those perverse incentives have played little, if any, role. In fact, the decisions were driven by the uniqueness of the candidates' work, whether it had significant impact outside the narrow scope of the problem being studied, and whether the work was respected by senior members of the community (more on this below). I wonder if a big part of the problem isn't a disconnect between perception and reality, i.e. young scientists receive lots of "information" regarding how to get ahead and it's nearly all bullsh*t. In fact, two young theorists recently tenured at Columbia were successful because they did unique work that was not in the mainstream, at least when they did it. They did not succeed by being lemmings and publishing on popular topics. Maybe Columbia is different, but I don't think so; at least I think there are plenty of other top institutions that evaluate candidates according to what I would consider appropriate guidelines.

I have my own comments re: problems that we face in physics, but they won't fit within the character limit here. Maybe I'll ask Sabine if she would consider posting them.

I reiterate my position that science is self-correcting. Good papers will be recognized and bad papers will be forgotten. An individual may have the feeling that things are going wrong, just like an ammonite before the Cretaceous-Tertiary extinction, but it's a 'problem' of perspective and not a problem of science in general.

It seems to me that p-values are very much about a falsificationist approach to the scientific method, and the persistent problems with p-values are problems with a falsificationist model of scientific method. As for that, the notion that scientific method is hypothesis testing strikes me as backwards. The scientific study of nature generates the hypotheses, and they are evaluated against experience, as arguments to the best explanation. The variety of things that constitute a scientific investigation of nature (which, by the way, does include people) is why it is so hard to define scientific method, and why so many limited definitions mislead. Mostly we can agree that it is systematic, objective, collective (or community-based if you insist), and sees the universe as lawful/regular, complete in itself, and, indeed, a unity in its diversity.

Dearly wish I shared your perspective! But evidence presented by people like Sabine, Daniel Engber, and Teicholz (to name a few) compellingly argues otherwise. Science is not automatically self-correcting in practice. And even the dark idea that "science advances one funeral at a time" seems to be misguidedly optimistic, since wrong paradigms are easily transmitted from one generation to the next.

The Northern Hemisphere is a triangle whose three interior angles sum to 540° versus Euclid’s 180°. Empirically wrong theory happens. I volunteer two rapid, economic, unambiguous experiments sourcing baryogenesis and the Tully-Fisher relation, healing non-classical gravitation and SUSY. Look. They may be wrong, but they are not ridiculous.

Sabine, it seems to me you have in mind the German system, where the only tenured people are typically full professors with relatively large groups and budgets. In other countries, like France or the UK*, relatively junior academic positions (chargé de recherche, maître de conférences, lecturer) are tenured. I suspect — but correct me if I am wrong — that in these countries few ERC grants, even at starter level, go to untenured people.

More generally, I know that these days getting an ERC grant is rightly seen as a preferential path to tenure/promotion/moving to a better place. But this means the system is going in the wrong direction, confusing means with ends. The goal of the game should be producing good, valuable research, not getting grant money.

Having said that, I don't think you can fix the problems with basic research on 5-year grants, period. It's not enough time.

I believe it’s mostly a matter of incentives. Give tenure earlier (~5 years from PhD?), spread grant money more evenly, and rely less on bibliometrics (especially on short time scales): you will remove/mitigate many of the bad ones. For overinflated egos, of course, there is no cure.

*Well, thanks to Thatcher, no one is technically tenured anymore in the UK, but this is another story.

@Brian Cole: very happy to hear that at Columbia people are more considerate. Do you have any bibliometric-based evaluation system in place for ranking and funding universities/departments in the US?

If you want a certain behaviour to be adopted, you have to design an incentive scheme that rewards the adopter, as people react strongly to incentives. Science used to be a meritocracy (may the better explanation win), so the incentive scheme was obvious. If experimental feedback loops are no longer available in certain areas, and if publication counts / impact factors are an unreliable metric, why don't we design an artificial incentive scheme that leads research in the right direction? For example, you could limit the number of words that each scientist has at their disposal, so every publication counts. Or you could create an artificial scoring system that includes testability and that penalizes overly complex models (cf. regularization in machine learning, Occam's razor). Consequently, the question arises how to design a scoring system that cannot be gamed, which neatly leads us to the scientific discipline of market design / mechanism design. Game theory to the rescue!
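A toy sketch of what such a regularized scoring rule might look like. Everything here is invented for illustration — the function, the weights, and the idea of feeding in a judged "testability" score are hypothetical, not taken from any real evaluation scheme:

```python
# Hypothetical scoring rule in the spirit of regularized model selection:
# reward testability, penalize model complexity (cf. Occam's razor).
def research_score(citations_per_year: float,
                   testability: float,  # judged fraction of falsifiable claims, 0..1
                   complexity: int,     # e.g. number of free parameters in the model
                   penalty: float = 0.5) -> float:
    """Higher is better; complexity enters as a regularization term."""
    return citations_per_year * testability - penalty * complexity

# A highly cited but untestable, parameter-heavy model can then score below
# a modestly cited, falsifiable, lean one:
speculative = research_score(citations_per_year=40, testability=0.1, complexity=20)
falsifiable = research_score(citations_per_year=10, testability=0.9, complexity=2)
print(speculative, falsifiable)
```

The hard part, of course, is exactly the mechanism-design question: the moment "testability" is self-reported, the score can be gamed.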

My opinion is that the first step is to stop counting citations with respect to funding, and to stop counting every time an author's name appears on a paper. The reason is that I am aware of some fields where you get a lot of papers with up to fifty authors each, the same authors on many papers, and, worse, they all busily cite each other. It is a sort of cozy club to get more citations, and these scientists could perhaps be congratulated for having worked out how to get the funding, but I am not so sure it is doing science any good, because citations for scientists outside the club tend to get pretty thin.

I am not sure how this would be done, but I would like to see funding awarded on the basis of what the author has contributed that is new, or at least was new when published, in other words, what sort of advance has the author made. I am not holding my breath waiting for this to happen.

@Ginelli The "System" was going in the wrong direction from its beginnings, resulting in this marvellous piece of modern humor: http://physicsworld.com/cws/article/indepth/2017/dec/04/a-little-learning-is-a-dangerous-thing

Then again, the "System"'s beginnings, and its future failures, had already been traced long ago: https://en.wikipedia.org/wiki/Dialectic_of_Enlightenment

I applaud Ian Miller for speaking about the elephant in the room. A paper with 20 authors and 500 citations is going to appear in 20 CVs and in at least 20 funding or job applications, all while the serious work was probably done by one or two people from the list of authors (usually the first and last).

You are optimistic. I know papers where the middle author has done all the work, the first author insisted on writing some part of it and then insisted on becoming first author because of a cozy relationship with the last author, whose contribution was a tired glance at the end result, correcting a typo.

Fundamental theoretical modelling constitutes the spear point of our advancing global civilization. Further back, in the slightly wider section of our metaphorical civilization lance, are the laboratory research facilities that either confirm or refute the theoretical models. Even further back, in a much wider section, are the engineering and manufacturing industries, that in many instances are able to translate laboratory results into practical everyday products that greatly ease, or improve, our lives.

But now the sharp end of our civilization lance has fractured by becoming mired in a labyrinth of esoteric mathematical modelling, disconnecting it from the experimental section, thereby impairing its ability to penetrate the darkness of our ignorance. Those most influential in determining the theoretical programs in Academia should heed the advice of Sabine and others to mend our damaged spear.

OK, super snarky, but don't feel bad, academia: us "engineers" in "private" industry throw good money after bad just as often. It feels better to us because we can always point at the questionable law causing the stupidity and mouth something like "Hey, it's not my fault, the lawyers made me do it." I am hopeful, though: all the sci-related blogs I read are at least calling out the problem, so if we're lucky the days of self-review and citogenesis are numbered, right? RIGHT??? /crosses fingers

Your analysis is spot on. What worries me most is something even most comments here do not seem to understand: that our whole system of discovering new knowledge is in grave danger. The processes and structures you describe are not only prevalent in one field or one region. As you describe, it is something happening everywhere, in every discipline. I really think it is inconceivable to most that our scientific system could ever break.

Yet this is what is happening, at breakneck speed. And there is no silver lining anywhere. Especially since those who profit from the system (the "model-scientists," as another commenter called them so aptly) categorically deny that there might be anything wrong here. Those who could change the system have the greatest incentive that everything stays the way it is.

To everyone trying to stay sane, trying to get a decent job, to work for an outcome that actually changes something for the better in the world: Stay away from academia.

David: if your advice for a better life and world is to stay away from academia, then by extension one should stay away from all of society, because I believe the way it works in academia is actually way better than in most companies.

Looking at the big picture, there are many good reasons for striving to move towards a sharing/resource-based/open-access economy. That would automatically solve the very real problems we encounter in science pointed out by Sabine, and many other very urgent problems regarding equity and sustainability on the planet as well.

Granted the socioeconomic issues that bedevil the modern scientific academy, there is an additional structural problem that contributes significantly to the dysfunction you and others perceive. Put succinctly, quantitative analysis is over-weighted while qualitative analysis is almost nonexistent. The result is a growing sense that modern physics at the macro and micro scales has gone inert.

It would appear that qualitative physics analysis has not been taught formally or at least effectively, in the modern academy for decades, except perhaps, as a survey course for non-technical students. Students of physics are trained to work on, and think in terms of, mathematical models. They are not trained to develop or evaluate the underlying qualitative models that provide the framework for those math models. That's how you wind up with trained physicists touting Many Worlds as if it were somehow a scientific proposition.

To the people who have been trying to post comments to this thread for a few days which I don't approve:

I want to remind you of our comment rules. I will not approve links to your websites. Since I cannot remove parts of comments and only entirely block them, links will result in your comment not appearing at all.

I commented in this earlier post on the obvious problem of counting a paper with N coauthors the same as a paper without coauthors. It's an obvious problem with the obvious consequence that everyone tries to divide their work among as many people as possible. It's another contribution to streamlining.

The zeroth-order approximation one could make is to divide out the number of coauthors. It's as simple as that. Of course no one is going to do it because that would decrease their number of papers.
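The normalization described above is trivial to compute. A minimal sketch (the author names and the function are of course made up for the example):

```python
# Zeroth-order fractional credit: a paper with N coauthors contributes
# 1/N to each coauthor, instead of a full count of 1 to everyone.
from collections import defaultdict

def fractional_counts(papers):
    """papers: list of author lists -> dict of author -> fractional paper count."""
    credit = defaultdict(float)
    for authors in papers:
        for author in authors:
            credit[author] += 1.0 / len(authors)
    return dict(credit)

papers = [["Alice"], ["Alice", "Bob"], ["Alice", "Bob", "Carol", "Dave"]]
print(fractional_counts(papers))
# Alice: 1 + 1/2 + 1/4 = 1.75 "papers"; Bob: 0.75; Carol and Dave: 0.25 each
```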

There was a time I had similar sentiments as you. In hindsight I conclude I was wrong. The field did progress despite all signs to the contrary. You can't cage bright minds; there are enough obscure opportunities around the globe, certainly in the more abstract disciplines. Don't worry too much about money being wasted... that mode of thinking has more dangers in it than some string theorists beating dead horses.

Your zeroth-order solution looks much worse than the original problem, as it would strongly discourage collaborative work and the sharing of unpublished ideas and intuitions. This approach would inevitably force everyone to carefully weigh the potential benefits of sharing ideas and/or involving colleagues (or even a student) in a project against the very precise loss of bibliometric weight for each new collaborator added in. A very sad world indeed. Science is a collaborative enterprise. Our goal should be to solve relevant scientific problems, not compete to the death with our colleagues.

Bibliometric algorithms simply cannot be applied more or less blindly to hiring, funding, and promotions; I really believe people should stop obsessing so much about paper counts, impact factors, and citation numbers.

Of course, a quick look at someone's bibliometrics can quickly give you a first rough impression of his/her seniority, productivity, and status inside the community. But serious, career-impacting decisions cannot reasonably be based on these (unscientific) criteria. (Yes, I know that unfortunately sometimes they are.)

Jeffrey Hall gave an excellent Nobel Lecture. No perversions at all. His distinction between the PI (Principal Investigator) and the AI (Actual Investigator) has raised the ethics level. Jeffrey Hall should be admired for this, and for his great work with the other two prize winners. Here is the lecture.

It would discourage collaboration compared to the present situation, but not discourage collaboration per se. Also, as I said it's a zeroth order approximation and clearly not optimal. Yes, science is collaborative. It is also contemplative. This isn't an either-or question, it's a question of balance. The present situation puts a bonus on teaming up with other people. This is clearly a disadvantage for any topic that an individual wishes to work out by themselves. It plainly punishes people who don't have many others to work with or who don't like working with others for some reason.

Btw, if you look at the data you will find that while the total number of articles per physicist has sharply risen in the last 15 years, if you normalize this for the number of authors, you will find that the productivity per person has remained constant. In other words, if you think that collaboration has a positive feedback, that's not supported by the data. Best,

The problem goes much further than publishing incentives and career ambitions. The system crushes the diversity of thinking that is needed to make progress. I have seen fruitful ideas being ignored for decades, not only by journals, but also in research discussions and even casual chats, as people take on board what they are and are not allowed to discuss. When economic incentives reign, the corruption is very deep.

Sabine raises the interesting issue of whether collaboration is advantageous. I believe it clearly is in experimental work because you can bring in people with different specialties, and therefore get much more difficult work done. The downside, in my opinion, is you usually find, if there are enough authors, the presence of drones who really contribute very little but somehow get to become authors. However, theoretical work raises a different issue. A collaborative team will adopt the same basic premises, otherwise they won't be able to collaborate, but does this inhibit the generation of new ideas? Does the team adopt a "well-trodden path" in preference to finding unexplored territory? The latter is dangerous because usually it is wrong, and not productive for generating papers, yet it is the necessary step for a major breakthrough. I am curious to know what others think of this.

I agree with you that certain evaluation practices do not work, it's just that I believe the answer is less bibliometrics, not a different bibliometric.

Btw, if you look at the data you will find that while the total number of articles per physicist has sharply risen in the last 15 years, if you normalize this for the number of authors, you will find that the productivity per person has remained constant. In other words, if you think that collaboration has a positive feedback, that's not supported by the data.

I may be naive, but I do not think that -- at least above a certain minimum threshold -- the number of papers produced has much to do with scientific productivity.

But even if we may clearly disagree, thank you for promoting this kind of discussion. Best.

Isn't one of the problems that we have Big Science? Even just a hundred years ago, when quantum theory was being theorised, it seemed almost a gentlemanly pursuit. I'd be curious about the actual numbers: for example, how many physics journals and physicists were there in Europe then and now? I think the figures would be illuminating. With larger organisations come real problems in trying to organise...

I think that one of the strengths of academia is the ideal of openness and sharing. It's not often noted that this was one of the original drivers of the internet: it wasn't all just about technology; the way universities organised a peer-to-peer network was fundamental in shaping the ethos and the technology of the internet.

Dear Sabine, I find those data very interesting, on the number of papers per author rising but productivity staying constant over the last 15 years. Could you please provide a link to the source? Thanks not only for this but also for the blog as a whole.

Sorry for the missing reference. It's from a paper that I meant to write about for a while. I am pretty sure it's somewhere here on my desk... I'll write that post when it resurfaces, links all including.

I am not questioning whether collaboration is advantageous per se. I am asking if the level of collaboration we presently have is the optimal one. It's all a matter of balance. The problem is if you have external pressures (eg by funding schemes) then you are skewing the balance one way or the other, leading to a suboptimal configuration.

I left academia a long time ago. In economics, some of the people that I worked with indicated that the weight they gave a paper with multiple authors was obtained by dividing by the square root of the number of authors -- thus my paper in Nature in 1990 was worth about 30% of a paper (although in some parts of the academy the Nature publication counted for more). That seems more reasonable than either dividing by N or not dividing at all. Lead authors might get more credit (unless the listing is alphabetical with numerous authors).
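For concreteness, here is how the square-root rule compares with full counting and with the strict 1/N division discussed earlier in the thread (the function is just an illustration of the rule described, not any field's official formula):

```python
import math

def per_author_credit(n_authors: int, rule: str) -> float:
    """Per-author credit for a single paper under three counting conventions."""
    if rule == "full":
        return 1.0                          # every coauthor gets a whole paper
    if rule == "fractional":
        return 1.0 / n_authors              # strict division by N
    if rule == "sqrt":
        return 1.0 / math.sqrt(n_authors)   # the compromise described above
    raise ValueError(f"unknown rule: {rule}")

# For a paper with 10 authors the three rules give 1.0, 0.1, and ~0.316:
for rule in ("full", "fractional", "sqrt"):
    print(rule, round(per_author_credit(10, rule), 3))
```

With roughly ten or eleven coauthors the square-root rule gives about 30% of a paper, consistent with the figure quoted above.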

James, if careers in your field were decided by these criteria I can see why you left.

But have the people inventing these algorithms out of thin air to decide how much your papers are worth ever considered the unthinkable alternative of i) reading the papers carefully and ii) discussing them in depth with you to assess your understanding/contribution?

When I meet someone I am in awe of scientifically, I tend to think something like "Wow, he/she solved this and that problem, introduced these techniques, and/or started this new interesting subfield." Not "he/she has got an ERC grant and published 8 Nature papers." Silly me.

When I received my Ph.D. in 1985 I chose not to pursue a career in academia. It was the best career decision I could have made. When I had entered Harvard as a National Science Foundation Graduate Fellow in Psychology, my plan had been to stay in academia and work my way up to having my own research lab. My specialty is the mathematical modeling of human cognition and its interactions with physical reality. That included artificial intelligence as about half of my major-general exam in cognitive psychology, along with two minor-general exams, one in psychobiology and the other in word meaning.

As I progressed in my studies, I realized the publish-or-perish paradigm in academia would never provide the resources for my work on an ill-defined modeling problem that did not have small publishable pieces. So I went directly and successfully into full-time consulting on computer-system design for ease of learning and use (now called “usability”).

My decision to leave academia allowed me to continue my research into cognitive models of reality as a self-funded project. The current results are in papers posted on ResearchGate this year, and I am beginning to divide them into posts for a new blog.

There are a number of us in a variety of academic fields who have chosen to become independent. Julian Barbour is a similar example from physics. Our hopes are for collaboration and feedback-loops for considering new ideas. Although parts of what we write might be invalid, other parts could provide a stable foundation on which others in or out of academia can build.

The issues identified in the original post and previous comments will be solved only by finding ways to work together to tackle the big theoretical problems and thereby move science forward.

Mozibur makes a key point that is implicit in the Sarewitz article Sabine linked. Scale matters. The "Republic of Science" model described by Michael Polanyi did a pretty good job of lining up incentives so as to maximize the community's rate of discovery. But it relied on a lot of shared tacit knowledge about other researchers and how they did things--bibliometrics wouldn't even have been a consideration back then, because people were evaluated pretty directly by small communities of peer discovery-searchers. That model had many problems endemic to old-boy networks and to professions that were life-consuming hobbies as well as jobs. But it did not suffer from the problem of using manipulable proxies to measure research performance--this was assessed more or less directly.

Francesco, the problem with the funding panel reading the papers and understanding them is (a) time, and (b) the wide range of different specialties. I sat on one such panel for about 9 years and I know, with the best will in the world, some scientists publish papers I just cannot assess properly. I recall one mathematician made such a funding request and nobody on the panel could understand anything about what he was about. That, of course, was his fault, but even had the application been clear, I doubt I could have followed his papers. I know referees are supposed to help here, but scientists tend to suggest referees, and whoever seeks the referees usually follows the suggestions because the alternative is too time-expensive. But then the friendly referees game the system. It is very difficult to win on applications. Having been on both sides of this system, I know it does not work at some of the fringes (although I argue that about 90% of our decisions were quite justifiable), but what is better? They count citations because you end up with a number, which can be used, or misused. I am reasonably confident Einstein would never have got funded in 1904, and I guess we all have to live with it. I welcome improved procedures, but I can't think of what they would be.

Ian, I know that no system could ever be perfect (actually, I do not think Einstein was funded in 1904), but I frankly cannot understand how anything good can come out of the current one.

The hiring process is more and more dominated by grant money, and funding panelists not only do not find the time to understand the applicants' work (which is not their job), but don't even have the time to select good referees (please do not take this as a personal attack). In turn, I guess referees are short on time too, probably since everyone is busy trying to get their own grants and publish the fabled Nature paper. And it's hard to do differently, since the average university administrator could not care less that we are investing time in doing a good service to the community. Many of them only care about i) the money we bring in and ii) the indicators which will help our university climb some positions in these meaningless world university rankings.

Under this twisted perception that productivity should be measured by number of papers, people also write far too many papers, many of them frankly irrelevant. So good referees are overburdened, editors are no longer able to do their job properly, and too many bad papers get published, polluting the conversation and lowering the signal-to-noise ratio (and I am just considering legit journals; this is becoming the age of predatory publishers too).

And yes, citations could be a decent indicator of your standing in the community (or at least obviously better than paper counts and journal impact factors), but only on — at least — a 5-10 year time scale AND if you only compare people working in the same subfield. To stay in my area, for instance, complex-networks papers will receive on average 20 times more citations than exactly-solvable-models ones.

Incentives are totally wrong, and smart people (since many of our colleagues are really smart) understand them and react accordingly.

Frankly, I can easily think of a better system. We should: stop caring about university rankings (on the small scale they are provably meaningless); spread research funding more evenly; rely less on bibliometrics; and not let Nature editors or ERC panels dictate our hiring decisions. Ah, and specific to the UK: stop funding departments through large grant overheads (I think it is now approaching 40-50%). It would be better to just pay direct research costs and revert the saved money to direct ordinary financing of universities. Take back control of our universities.

Francesco, Einstein worked in a patent office then and carried out his theoretical work essentially as a hobby; my point was, if someone like that applied now they would be denied funding. I suppose the way to look at it is that life is not supposed to be fair. However, I feel academic science has now lost its way and it is falling into the problem of laziness and relying on authority rather than actual analysis. Since Sabine's blog is basically physics, I shall give a physical example.

One of the more important advances, if correct, in the late 20th century related to deviations from Bell's Inequalities with entangled particles. You can derive the inequalities with simple set theory, so if they are violated, either the associative law of sets is wrong, which means all mathematics fails, or something really weird is going on. Nevertheless, the rotating polariser experiment alleges such violations. Now, look at the classic experiment, due to Aspect et al. What we have are two entangled photons, and the reason they are entangled is because angular momentum is conserved. Now, why is angular momentum conserved anyway? According to Noether's theorem, it is because space is rotationally invariant. Now, what does Aspect do? To use Bell's Inequalities, you need pass/fail-type results under three conditions, which means you need six independent measurements (or averaged measurements). So he makes measurements with a polariser pair at 0 and 22.5 degrees with respect to the lab; at 22.5 and 45 degrees; and at 0 and 45 degrees with respect to the lab. Now, the second pair is simply the first pair rotated in space by 22.5 degrees. Assuming the source is rotationally invariant (and the observations support that), how do you get two independent variables by partially rotating the apparatus? That violates Noether's theorem, which you rely on to get the angular momentum conservation. There are more problems, but the essence of my argument is that the experiment is a triumph of experimental physics, and it beautifully proves wave/particle duality for individual photons, but as far as violations of Bell's Inequalities go, it has flawed logic in the analysis of the results. People are finding what they want to find.

So, I wrote up such a paper. Several journals simply rejected it. The usual reason was the editor felt it was not of sufficient importance. The possibility that one of the most important claims from the last half of the 20th century might be wrong is unimportant?? One simply said "This is wrong". Maybe, but if you rely on that, maybe you should point out where? Now, I am not an academic, so paper counts do not matter, and since I am doing this more or less as a hobby, there is a limit to how much trouble I was prepared to go to, and of course, I do not publish where there are page charges. (The reason I got onto a funding panel was that I was unusual in that I owned my private research company and I still published, as a hobby) In the end I archived this in an ebook.

The reason I gave this example is to show if you are not in the mainstream, you have a real problem. Now, you may say I was wrong in the above, and if so, please feel free to show me where, but I feel if there is no room to question established science other than by coming up with some strange experimental observation, science has stopped working properly. The question may have a perfectly good answer, but show it.

Ian somewhat backs up the point about scale. There are too many proposals from too many diverse researchers for the panel members to perform actual evaluation of research and researchers. So bibliometric shortcuts must be employed, looking for the keys under the lamppost because it's brighter there, even though the keys were probably lost across the street.

Ian, You asked what others thought of collaboration on theoretical issues. Whether or not the team members "adopt the same basic premises" depends upon the type of problem being addressed. If the problem is finding underlying invalid assumptions when all existing paradigms are open to questioning, then starting with multiple different premises is an advantage. One team member may recognize an assumption in another's premise that needs to be considered and either validated or rejected. The two key issues are (1) understanding how to look for assumptions within the cognitive models expressed in the mathematics, and (2) having the discussions within a 'safe' environment that requires respectful dialog without any demeaning or degrading of people and/or their ideas. The common goal of the team members is truth as it can best be understood in that time and place.

As an example assumption, Bohr posited the "indivisibility" of the quantum of action as a starting point for quantum theories. In 1931 Dirac found that (with c=1) the quantum of action equals twice the product of unit electric charge and the constant now called "quantum magnetic flux". The original model for this mathematical equation is in Thomson's 1903 Yale lectures, in which he describes the "moment of momentum" as the product of charge and pole strength. Unfortunately, only one subdomain of physics, superconductivity, seems to know and use Dirac's finding. For example, page 312 of Störmer's Nobel lecture (https://www.nobelprize.org/nobel_prizes/physics/laureates/1998/stormer-lecture.pdf) shows the calculation of quantum magnetic flux as Planck constant h/unit charge e. There are two accepted units for magnetic flux in superconductivity, the Dirac unit and the London unit. A comparison of these units is in Ezawa's Quantum Hall Effects, 2nd ed., pp. 87, 125.
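The relation described above is easy to check numerically. A minimal sketch (my illustration, using the exact SI values fixed in the 2019 redefinition): with the London flux quantum Φ_L = h/2e, the quantum of action is h = 2eΦ_L, while the Dirac unit h/e is twice the London unit. As arithmetic this is immediate from the definitions; the comment's point is that only one subfield routinely uses it.

```python
# Exact SI values of the constants (fixed by the 2019 redefinition).
h = 6.62607015e-34     # Planck constant, J*s
e = 1.602176634e-19    # elementary charge, C

phi_london = h / (2 * e)   # superconducting (London) flux quantum, ~2.0678e-15 Wb
phi_dirac = h / e          # Dirac unit of magnetic flux

# "The quantum of action equals twice the product of unit electric
# charge and the quantum magnetic flux" (London unit):
print(2 * e * phi_london)      # recovers h (up to floating-point rounding)
print(phi_dirac / phi_london)  # the two flux units differ by a factor of 2
```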

To your point about publishing in journals, a write-up of the step-by-step mathematically rigorous derivation of Schrödinger's equation was rejected because it was not sufficiently interesting. So I put it on ResearchGate because I think the possibility of a classical basis for quantum mechanics is an idea to be considered unless there is an irrecoverable error in the math. That paper does not even consider the larger electromagnetism issue because I limited it to describing a mathematical derivation.

We are a group of creative and intelligent people---how do we solve this problem of theoretical collaboration and stop being limited to the ruts in the road that prevent exploration of new ideas? I had hoped posting ResearchGate links to the Theoretical Physics group on LinkedIn would generate collaboration, but unfortunately it did not. Any other suggestions?

I think Ian and Martha's examples illustrate another point. If researchers who are not part of the academic establishment question long-held ideas (trying to get out of the rut), they risk being labeled as fringe elements. Of course, when a paradigm shift eventually occurs, some ideas that were previously heretical become widely accepted. What is the difference between the earlier presentations that are dismissed, and the one that will finally stick?

Back to the example of Einstein in 1905, I am very curious about how things worked back then such that his ideas were taken seriously even though he was a patent examiner. Was it something about the way he presented his ideas? Or was it just that the field was smaller back then, and the right people were able to devote the necessary attention? Or something else?

Part of the answer may be that his papers didn't just present new ideas; they presented solutions to widely recognized problems. And those solutions could be easily understood, even though they had far-reaching implications. But can every new paradigm be presented in this way? Or are there times the local optimum is just too deep for such an approach?

And even if such a presentation is possible, it doesn't matter if no one is willing to read it at all.

Both to Martha and Ian: excellent points. You are not alone. There are physicists, recognized by the community, whose views are just enough off the mainstream to be mostly ignored by it. Keep publishing. At some point the significance of the problems you raise might be recognized. Then again, like Noether herself, possibly not until after you have passed from the scene...

Martha, the problem you allude to in your last paragraph is deep, and I do not know the answer. Publishing itself does not necessarily solve the problem. Get it into Nature, and the chances are it will be read, although whether absorbed is another matter. There was a paper by Kocsis et al. in Science that showed that a rather unusual prediction by Bohm about the two-slit experiment was in fact validated, but the number of people who seem to have acknowledged it is quite minimal, and Science is a fairly well-read journal. The problem is, there is just too much being produced all the time, and there is far too little analysis of what has been found or proposed. In my opinion, we need to replace large sets of observations with whatever rules convey set membership. The trouble is, analysis and reviewing occupy a lot of time for a single publication, and hence offer little reward for the academic.

Your second paragraph actually contains an issue that I think bedevils quantum theory. You say Bohr posited the indivisibility of action. As I understood it, he quantised angular momentum. Yes, angular momentum has the dimensions of action, and it is often represented as a quantum of action, but you can also define action as the time integral of the Lagrangian. The angular momentum is a constant of motion whereas the second version is an evolving entity. This one has particularly obsessed me, because I am working on chemical bonding, and here angular momentum is known to give no useful simplification at all, but quantising the second version is very productive. Here we have two ways of looking at something quite fundamental that end up being quite different. We need to have some uniformity in our descriptions, which gets back to my review and analysis issue. I just don't know how we can bring that about, though.

All that you described about incremental (normal) versus original (revolutionary) science is already fully accounted for in Kuhn's 1962 The Structure of Scientific Revolutions, and does not seem to be a problem (is it a problem?) peculiar to 21st-century science; it goes back at least to Galileo.

Also, the pragmatic idea that others (the government, the markets) should define the basic-science agenda instead of the researchers actually involved in the discovery process can be found in an essay by Che Guevara about how the academy should be organized in Cuba after the revolution. It is very ironic that American science policy agrees with Che Guevara...

Fledgling Researcher, some books I read in the 1980s about how to become a successful consultant said to never promise a prospective client more than a 10% improvement in their systems. Promising a larger amount of improvement would insult the prospective client because it meant he [sic] had missed seeing a solution that was obvious to the consultant. Therefore a larger increase would be considered impossible and the consultant would not make the sale.

I think this is true in many types of situations, which is why paradigm shifts are so psychologically difficult to achieve. If the blocks to progress in physics were logical or mathematical, physicists would have solved them many decades ago. Instead, the issues are (a) unidentified cognitive assumptions based on human perception (e.g., Newton with impenetrable particles as fundamental), and/or (b) a need for engineering usefulness (e.g., Heaviside's elimination of Maxwell's fundamental electrokinetic momentum, i.e., the vector potential). In The Maxwellians, Hunt notes that theorists' conceptual models never make it into mainstream science in their original forms.

Recently I found that in 1950 Schrödinger, in a letter to Einstein, (a) suggested the historical-research approach I had been using, and (b) identified precisely the issue I had found that was blocking progress. He suggested physicists go back to Galileo and start over with acceleration (rather than position and velocity) as fundamental. So my work could be considered a student dissertation in physics with Schrödinger as my advisor. That's when I wrote the Part 2 paper (of a three-paper series) reporting how the acceleration model flows smoothly through the sequence of original historical theorists into the unified set of mathematical results I describe.

The analysis starts with Kepler’s use of the harmonic mean and Galileo’s law of freefall acceleration. It ends with a basic-algebra equation that yields the GR results for deflection of a photon and the precessions of the planetary orbits. In between is a return to Einstein’s 1905 SR before Minkowski introduced inherent spacetime curvature to SR in 1908.

Two other models in the Part 2 paper are (a) the GR relativistic-mass equation that has only three stable states and uses the electron mass-energy to calculate and visualize those three states as the muon’s, the nucleon’s, and the tau’s rest mass-energy; and (b) the combination of Galileo’s law of freefall acceleration with Einstein’s 1905 SR (lightlike) intervals to create Janet’s left-step version of the periodic table of chemical elements. This last model maps directly onto the quantum numbers of the electron configurations and allows logical visualizations of the few anomalies that deviate from the ideal filling sequence.

In all of this, the traditional theories are validated as different points of view. That is why I consider myself to be a traditionalist who creates classical quantum-relativistic models. The models all come from a conceptual paradigm shift to the freefall observer of Einstein’s GR as the default frame of reference. The works of the original theorists became mainstream by the introduction of modifications to make them agree with human perception as the frame of reference. This includes Born’s introduction of probabilities into QM as the only way to apply quantum-wave mechanics to particle collisions.

The theoretical physicists from the 1600s until now who introduced major new theories were geniuses! They and we are all attempting to describe the same physical reality. Thus it is logical that a unified view validating the previous theories is the most feasible model. I believe we can resolve the unification issue and define new publishable, fundable problems to solve.

I have recently published a paper about the K index, which is very competitive with the Hirsch h index while being very easy to calculate by inspection in the Web of Science. The interesting thing is that the K index correlates with (and even predicts) qualitative evaluations of a scientific career, for example scientific prizes (in the paper, the Nobel Prize). So I agree with you that crude indexes such as number of papers and citations do not capture scientific performance and are not fair to young researchers. But there is a growing number of other indicators (by now there are almost 120 proposed indexes).

The K index, by the way, does not depend on the number of papers: Ernst Ising (of the famous Ising model) has only two papers, so his Hirsch index is h = 2 and this will not change since he is already dead. But his K index is K = 100 and grows each year.

The K index also detects scientific misconduct such as excessive numbers of papers, citations from friends, self-citations, etc.

Take a look here: https://www.sciencedirect.com/science/article/pii/S0378437117308075 or here: https://pdfs.semanticscholar.org/4ca4/e9a3810de68389d5f957dbc35fd18fd9edd4.pdf
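Both indexes can be seen as the same Hirsch-type construction applied to different lists. A minimal sketch follows; the K-index definition used here is my assumption based on the description above (apply the Hirsch construction to the citation counts of the papers that cite you, so it does not depend on how many papers you wrote) and the linked paper should be consulted for the authoritative definition.

```python
def h_like(counts):
    """Hirsch construction: the largest k such that at least k entries
    in `counts` are >= k."""
    counts = sorted(counts, reverse=True)
    k = 0
    while k < len(counts) and counts[k] >= k + 1:
        k += 1
    return k

# h-index: the construction applied to the citation counts of one's own papers.
own_paper_citations = [10, 8, 5, 4, 3]
print(h_like(own_paper_citations))  # 4

# Assumed K-index: the construction applied to the citation counts of the
# papers that CITE you. A couple of hugely influential papers (like Ising's)
# can then yield a large K even though h is capped at 2 by the paper count.
citing_paper_citations = [500, 300, 120, 90, 40, 10, 3, 2, 1]
print(h_like(citing_paper_citations))  # 6
```

The design point is that the citing list keeps growing after the author stops publishing, which matches the Ising example above.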

My view as to why you don't see many paradigm switches is a new paradigm is very difficult to achieve because it has to incorporate all the successes of the old one, and it is not that easy to come up with a reason for everyone to throw out their previous understanding and change. Einstein had the advantage with relativity that the constancy of light speed made no sense in the old paradigm, but that sort of situation is unusual. That does not mean that everything is right, but it does mean you won't earn a lot of citations, etc, necessary for funding by trying to overturn the cart. I know. I have tried, admittedly on a less important point.

Abstract: The science of science (SOS) is a rapidly developing field which aims to understand, quantify and predict scientific research and the resulting outcomes. The problem is essentially related to almost all scientific disciplines and thus has attracted attention of scholars from different backgrounds. Progress on SOS will lead to better solutions for many challenging issues, ranging from the selection of candidate faculty members by a university to the development of research fields to which a country should give priority. While different measurements have been designed to evaluate the scientific impact of scholars, journals and academic institutions, the multiplex structure, dynamics and evolution mechanisms of the whole system have been much less studied until recently. In this article, we review the recent advances in SOS, aiming to cover the topics from empirical study, network analysis, mechanistic models, ranking, prediction, and many important related issues. The results summarized in this review significantly deepen our understanding of the underlying mechanisms and statistical rules governing the science system. Finally, we review the forefront of SOS research and point out the specific difficulties as they arise from different contexts, so as to stimulate further efforts in this emerging interdisciplinary field.

"All that you described about incremental (normal) versus original (revolutionary) science is already fully accounted for in Kuhn's 1962 The Structure of Scientific Revolutions"

And Kuhn got it wrong, really, really, really wrong. Science rarely if ever progresses through the paradigm changes he touts. He even thinks that the discovery of radioactivity and X-rays were paradigm changes, rather than observation, experiment, and hard work driving science!

The Copernican revolution (how many get the pun?) is a bad example, since this wasn't a scientific debate, but rather the church burning people at the stake who disagreed.

Is Kuhn's own theory scientific? If not, why take it seriously? If so, then it is either true or false. If false (as I believe), why take it seriously? If true, then it will be replaced by a new paradigm. :-)

I'm happy that Carlo Rovelli agrees with me in his judgement of Kuhn's ideas.

I think fundamental science is broken, and cannot recover as long as there are such animals as grants. Grants may have a point if there are some real problems to be solved by applied science. But fundamental science based on grants is nonsense.

Scientists have to be independent. They want to find some fundamental truth, and they have to make a choice what to study. They risk enough with this choice because the wrong choice means they will fail to reach their major aim, the dream of their life. So, they need no incentives. Incentives you need if you work for a bank or so.

What they need is security. Job security. Not big money, true scientists don't care about money, they will work for money not much higher than social security payments, but they need job security, "tenure" starting from day one.

Science with scientists who have to apply for new jobs every two years will be as broken as a legal system with judges who have to apply for new jobs every two years.

Well, Kuhn proposed several ideas: that the evolution of science is intermittent, like punctuated equilibrium in biological evolution (stasis = normal science vs. fast change = revolution), and that there are no clear demarcation rules separating science from non-science. On this last issue, Popperians misunderstood him: he was not saying that science is irrational, but that the rules are learned from examples (like in modern deep-learning neural networks) and not from IF-THEN rules (like old-fashioned AI). Also, the punctuated-equilibrium model is very interesting because it can be related to evolutionary models that show self-organized criticality (SOC) (and remember that, in his last works, Popper also adopted an evolutionary epistemology). Finally, SOC ideas answer a common criticism of Kuhn's revolutions: how big must a change be for us to call it a revolution? SOC ideas say that the events are scale invariant (fractal), like avalanches and earthquakes, where a power-law distribution holds. That is, there is no characteristic scale; revolutions (or evolutions) occur at all scales, but the probability of a big percolating avalanche is considerable. If we want, we could call such percolating avalanches (large chain reactions of scientific advances) scientific revolutions, but this is only a name.

And Phillip, Kuhn's ideas (like those of other philosophers and historians of science) are metascientific (reflections on the proper definition of what science is), so they cannot themselves be scientific... (you must define science first: in a Popperian way, a Kuhnian way, Feyerabend's way, Lakatos's way, or a post-modern way). What is your definition of science? Old-fashioned empiricism (19th century), or pre-Popper neopositivism, perhaps?

Did Kuhn get it wrong, or are we arguing about terms? What exactly is a paradigm? If we argue that related observations are incorporated into a set, then the paradigm becomes the rule that conveys set membership. If so, radioactivity was a paradigm shift inasmuch as no pre-existing rule permitted it. At the time, even the existence of atoms was not exactly agreed upon. I would also disagree in many respects with the comment on Copernicus. First, it was Aristarchus of Samos who devised the heliocentric theory, and it was Claudius Ptolemy who, for the time, put that to bed with some fairly extensive mathematics. If that does not qualify as scientific debate, why not? The fact that everyone woodenly agreed with Ptolemy without thinking too deeply about the nature of the theory may be laughed at now, but how many of us now take theories we see in print and follow them without deep analysis? (Could it be that Ptolemy's mathematics were more "sophisticated"?) As for Kuhn, my opinion is that his work should be considered more an analytical review of what had happened up until he wrote it than an assertion that what he noted would automatically happen in the future. It could be more along the lines of, to quote J. S. Bach, "Wachet auf".

https://metrics.stanford.edu/

There is a video, "What makes science true?": https://www.youtube.com/watch?v=NGFO0kdbZmk

The data from 5:56 to 7:11 in the video is interesting. It is very depressing to see this video reveal the reproducibility crisis in medical research. At 7:11, chemistry leads the pack with 87% of research being not reproducible. They have not said how they arrived at this percentage.

With Kuhn, I was referring to his idea that paradigm shifts take place in the following way: there is a paradigm; evidence builds up against it but is ignored by the older generation, who want to keep the status quo; finally things break down when the evidence is overwhelming or the old generation dies off, which leads to a new paradigm, and the cycle continues. This is wrong for several reasons: (a) while there might be some resistance from the old guard, it is not nearly as large or influential as he claims; (b) science progresses, whereas Kuhn claims that a different paradigm is not necessarily better; (c) he claims that not just the subject matter (e.g. radioactivity) constitutes a paradigm, but that the new paradigm is due at least as much to changes in ways of thinking and so on as to new evidence. He makes the "science is a social construct" mistake.

Science (physics) is broken because the axiomatic method was abandoned (special relativity was, in my view, the last axiomatic theory). Initial assumptions (axioms; postulates) are of vital importance. For instance, the speed of light either depends or does not depend on the speed of the source, and if measurement is difficult, either possibility can become an initial assumption. The next important thing is VALIDITY - it must be guaranteed that the conclusions of the theory do follow from the assumptions.

Clearly defined initial assumptions followed by valid arguments - without this, theories are not even wrong.

Regarding the post by Unknown, I think the argument in the link that cancer research is so unreproducible is misleading. There is no single cancer, and there are a huge number of variations. One obvious source of irreproducibility is that the starting position is not reproduced. Notwithstanding that, there are definitely bad practices out there. In my PhD research, I could not reproduce a synthesis that was published in a letter by someone who later went on to get a Nobel prize. I gave up on that and found another way, but that scientist published a number of further papers until finally, when the topic was thrashed as far as it could go, he let out why I failed - his conditions had to be carried out at minus 80 degrees C. I never picked that because in chemistry, when something does not work you usually heat it (to overcome a higher activation energy).

This project was interesting in another way. I had entered a raging debate over whether chemical strain introduced a specific quantum effect or not. (This was my project; my supervisor had given me hopeless projects.) My results came down firmly that it did not, but the general paradigm had formed that it did, mainly due to output from two variations of quantum chemical programming at the time. My supervisor refused to publish the results that gave the strongest evidence because he did not want to go against the flow, so I independently published my version of what was going on, which was essentially applied Maxwell's electromagnetic theory. However, the quantum computing won out, and the textbooks now have that proposition as "fact".

There are two curious facts about this episode that may give you cause to suspect all is not well. The first is that those very same quantum chemical programs were soon used to prove the exceptional stability of polywater. Oops! (You may or may not know that John Pople, who won a Nobel Prize, also published papers showing the stability of "anomalous water", as it was called at the time.) The second is that somewhat later (and I am not an academic, probably in part because nobody at the time would touch me, since I was on the wrong side of that debate) I did a review in my own time and found that there were up to sixty different types of experiments that falsified that accepted paradigm, but other reviews ignored them all, as well as my work.

The review could not be published in the academic literature. One journal refused it on the grounds that the issue was well established (can't have a review that says that is wrong!), but the usual reason was that they did not publish logic analyses. In other words, the academic gatekeepers are not interested in going back and trawling through past evidence in case the current paradigm might be wrong. That, to me, is not the science I thought I was signing up for so long ago.
However, it is evidence for Philip Helbig that Kuhn was more right than wrong.

"That, to me, is not the science I thought I was signing up for so long ago. However, it is evidence for Philip Helbig that Kuhn was more right than wrong."

No. At best, it is evidence that in this particular case someone screwed up. Does the current cold spell in the USA prove that Trump was right and climate scientists wrong? Nope. Learn to differentiate the general from the particular.

" The review could not be published in the academic literature. One journal refused it on the grounds that the issue was well established (can't have a review that says that is wrong!) but the usual reason was they did not publish logic analyses"

Do you think science lacks critical thinkers? If you compare with the social sciences, you have Foucault (based on history, he wrote and wrote on the machinations of power), Derrida (deconstruction), and Bourdieu (class structure). There are many more critical thinkers in the social sciences who have critiqued the way society performs. G. B. Shaw, in earlier days with his Pygmalion, was also a master critic of class. In science we have very few bold and courageous critics. There are books by Lee Smolin, with "The Trouble with Physics", and a blog, "Not Even Wrong", by Peter Woit. The book and blogs are good, but we require Foucault-, Derrida-, or Bourdieu-like critical thinkers in science to analyse the way science is performed.

About the Stanford website on metrics: they should explain how they arrived at the percentages shown in their YouTube video to make it clear to viewers.

“There are books by Lee Smolin, with "The Trouble with Physics", and a blog, "Not Even Wrong", by Peter Woit. The book and blogs are good, but we require Foucault-, Derrida-, or Bourdieu-like critical thinkers in science to analyse the way science is performed.”

Rather, we need clues as to how science can get into trouble. Let me make two suggestions:

1. If the theory is axiomatic, as is special relativity, then things are more than clear. The theory can only be spoiled by a false initial assumption or an invalid argument (one in which the conclusion does not follow from the premises).

2. If the theory is not axiomatic, that is, if the method is “guessing the equation” (Feynman) and not “deducing the equation from initial assumptions”, then, I’m afraid, the theory is already fatally spoiled by the mere fact that it is not axiomatic.

Phillip, the current cold spell in NY is in accord with what the climate scientists are saying. The extra energy is stirring up the Arctic, generating stronger wind systems that happen to be taking the cold air further. Yes, we have to differentiate the general from the particular, but without the particular you cannot uncover the general, and without an example, claims are simply arm-waving. I named one example because it was easier, but this has happened to me more than once.

Take the rotating polariser experiment to "prove" deviations from Bell's Inequality in the Aspect-type experiment. Entanglement depends on the conservation of angular momentum, which in turn depends on the rotational invariance of space, but the experiment claims that rotating the experiment generates two new variables, which denies the requirement. A paper with more objections was rejected by a number of journals on the grounds it was not of sufficient interest! That the major contribution to physics in the late 20th century might be wrong is not of interest?

Next, I once published in Aust. J. Phys. a paper showing the wave functions of the states of atoms with n>1 do not correspond to the excited states of hydrogen, but rather they are multiple wave combinations. Now, if that is correct (and there were plenty of data to support it) then some reasonably straightforward logic required these wave functions to change on electron pairing. My paper showed the required effects in the covalent bonds of the group 1 elements, even out to Cs2. The excuses for rejection included "these molecules are not very interesting" (They were chosen because with only one valence electron, a whole lot of other complications were missing.) and "our readers would not be interested". Unfortunately, the latter may well be true.

The current theory of planetary formation assumes a distribution of planetesimals, and there is a small industry producing papers involving computer simulations crashing these planetesimals into each other to form planets. Two objections: whenever asteroids collide, they fragment, and nobody has a clue how these planetesimals formed, after 70 years of trying. My paper proposed the initial body forms through chemistry (my bias) and each planetary system in our solar system (except possibly Venus) formed by a different mechanism, each of which being optimised at a particular temperature. Evidence in support includes the spacings and the compositions of the planets/moons are consistent with the required thermal function. This was rejected because I did not include the computer simulations, even though there was nothing to simulate. The referees could not put aside their own biases.

The examples may not be correct, and I may be wrong, but how does science progress if you don't find out whether the other ideas are correct? Do you really believe the first idea out there must be correct? Now, my argument is, if all this can happen to me, it is at least sufficient to suggest that the problem might be general. One final amusing example. I wrote a series of papers on the structures of marine polysaccharides for a botany journal. There was a change of editor, and I was told that I had to stop writing papers with so much mathematics, and in particular to stop presenting matrix manipulations to establish my solutions. That is not evidence to me of science going in the right direction.

Ian, indeed, in my neck of the New England woods, at about 42.75 degrees north latitude (less than a degree north of Rome, Italy), it's currently -15F (-26C). Yesterday morning it was -16F, approaching -27C, colder than what one of the Martian rovers recorded for a daytime high recently. We've been getting these subzero F/C temps for nearly a week, and will endure the same temperature regime for another week, when it will 'only' dip down to between zero F and the low teens F. For a number of days the highs were in the single digits F. According to the weather bureau, it's the result of large-amplitude swings in the jet stream. It makes perfect sense that more atmospheric energy makes such wild swings possible, especially since, on the flip side, areas like Siberia, normally much colder than the eastern US, are getting above-normal temps.

My own idiosyncratic view of science falls somewhere between Drs. Helbig and Miller (both of whose comments I appreciate). For me it is another form of evolution, which I define as consisting of lots of trials, selection criteria to rank the trials on a scale from failure to neutral to success, and memory to pass the results forward through time.

I would like to think, with Dr. Helbig, that eventually failed science trials will be discarded in favor of more successful trials; however this depends on the selection criteria that are used (as has been discussed at this site frequently). It is possible that science, being a fallible human activity, will start to focus on bad selection criteria and get itself in a hole that it never digs out of. Given infinite time and resources, and the basic criterion of predicting nature, we should eventually get out of such holes. But over the years my optimism has faded and I no longer think we always will.

Non-axiomatic physics - one in which "guessing the equation" is naturally followed by "guessing the fudge factor" - consists, by definition, of "invincible models that can forever be amended":

Sabine Hossenfelder (Bee): "The criticism you raise that there are lots of speculative models that have no known relevance for the description of nature has very little to do with string theory but is a general disease of the research area. Lots of theorists produce lots of models that have no chance of ever being tested or ruled out because that's how they earn a living. The smaller the probability of the model being ruled out in their lifetime, the better. It's basic economics. Survival of the 'fittest' resulting in the natural selection of invincible models that can forever be amended." http://www.math.columbia.edu/~woit/wordpress/?p=9375

Philip, you prejudge without even seeing the videos. Sure sign of ignorance and arrogance. Ivar Giaever won the 1973 Nobel Prize in Physics. Each of the scientists in the four videos is more qualified than you. If you can't beat them, smear them.

The Copernican revolution (how many get the pun?) is a bad example, since this wasn't a scientific debate, but rather the church burning people at the stake who disagreed.

No, it really was. The only one burned, as I recollect, was the hermetic mystic, Bruno, who was not a scientist at all. His sentence did not involve his incidental [and uninformed] approval of Copernicus.

The scientific consensus back then had been, since the ancient Greeks, all in favor of a stationary Earth for a variety of reasons, one of which was a form of the Michelson-Morley experiment: there was no discernible eastern headwind versus north-south winds, which they believed should have been the case if the Earth were moving toward the east at high speed. There were all sorts of other physics objections: why is the Moon not left behind? Why is there no observed parallax in the fixed stars? Why is there no Coriolis effect when objects are dropped from towers? And so on. These objections were eventually resolved: sometimes by better measurement (the Coriolis effects and the parallax were very small) and sometimes by better conceptualization (e.g., "inertia"). The big paradigm shift was the shift of astronomy from the mathematics department to the physics department. That is, the term for an astronomer in the Renaissance was "mathematicus," which also served for "astrologer." The primary job of an astronomer was to calculate calendars and to cast horoscopes. The idea that it involved making discoveries about real physical bodies was a genuinely revolutionary notion that depended on the invention of the telescope.

Although the Church had been reading some scriptural passages in the light of the settled consensus science since the time of the ancients, they had no objections to other readings, if it could be shown that the straightforward reading was incorrect. But to do that, as Bellarmine told Galileo, he had to provide empirical proof that accounted for the lack of parallax et al. Until then, he could teach geomobility as a mathematical model ["hypothesis" in the language of the day] but not as a demonstrated theory.

The Copernican Theory contained more epicycles than Peuerbach's then-current version of Ptolemy. Galileo followed Copernicus in insisting on pure Platonic circles on mystic neo-Platonic grounds. (Note that motion around an epicycle around a deferent results in a reasonable approximation to an ellipse.)

Once the phases of Venus were discovered, Ptolemy's model was dropped like a rock and most astronomers settled onto either the Tychonic or the Ursine model. But shortly, Kepler's model, with its actual elliptical orbits and its solution to the pesky orbit of Mars became popular, largely because it was computationally more elegant and mathematically more beautiful; esp. after Newton provided a theory under which it made sense.

Empirical evidence for the Coriolis effect was measured by Jesuit physicists in the 1790s and apparent parallax in alpha-Crucis in 1803, which took care of the last two objections to the dual motions of the earth, after which the Church dropped her objections to teaching the model as an established fact.

Kuhn vastly overstated his case. Paradigm shifts generally consist of looking at the same data from a new perspective and thus seeing things from a different angle. Sometimes, after seeing things in a new light, it's hard to go back and see things the way our ancestors once did, and so we conclude that our ancestors were fools.

"No, it really was. The only one burned, as I recollect, was the hermetic mystic, Bruno, who was not a scientist at all. His sentence did not involve his incidental [and uninformed] approval of Copernicus."

I agree with most of the rest of your post, but even if the Church didn't burn people because they were Copernicans, the fear that they would greatly influenced the debate. Galileo was put under house arrest and shown the instruments of torture. Torture has no place in any sort of debate, and of course this fear held people back.

even if the Church didn't burn people because they were Copernicans, the fear that they would greatly influenced the debate.

There was no such fear, and the debate was not greatly held back by religious reasons. Everyone in the Late Renaissance knew how the game was played, and there were as many churchmen in favor of the new theories as there were opposed. The rules of the Inquisition forbade the use of torture on the elderly and infirm (unlike secular tribunals), and on a courtier enjoying the protection of the Grand Duke of Tuscany it was unthinkable. (However, the Grand Duke was playing footsie with the Austrian Hapsburgs while the Pope was helping the Bourbons to finance the "Protestant" side in the Thirty Years War. Word of this had just leaked out and the Pope was seeing Hapsburg plots everywhere. The timing of the Dialogues and its apparent gratuitous slap in the face could not possibly be a coincidence, etc.! Galileo managed to step into a minefield of interdynastic politics.)

Copernicanism, as such, was held back by the lack of empirical evidence in its favor, its convoluted mathematics (twenty-plus epicycles!), and the fact that it was (dare I say it!) "falsified" by the lack of parallax in the fixed stars. The model was flat-out wrong and its predictions of stellar positions were no better than those of the Ptolemaic model. Better results were obtained from the Tychonic and Ursine models, since these accounted for the phases of Venus noted by Lembo and others. The Renaissance had reintroduced the Platonic notion that Truth was Beauty and Beauty, Truth; and so Kepler's model eventually vanquished both Tycho and Copernicus on the beauty of its math rather than on the empirical evidence.

The whole affair is an object lesson on beauty, mathematization, Popperism, and empiricism.

I don't see the heliocentric theory as vanquishing all because of the "beauty" of its maths. Part of the problem was Aristotle's dynamics. Once you get a feeling for the equivalence principle, and recall the measurements of Aristarchus of Samos, who showed the sun was really a very long way away (I think he underestimated by almost a factor of 5 owing to observational difficulties and errors), then (a) it becomes extremely difficult to accept that such a monster (the sun) moves around such a small object as the Earth, and (b) if you accept that stars are suns, a simple comparison of luminosities shows they must be a very long way away. However, in my opinion, there is also the problem of the tides. I cannot see how it is conceivable to believe the Earth is stationary and get two tides a day. In that sense Galileo was right, except he got it wrong, thanks to the rather weak and bizarre tides of the Mediterranean. Once you accept the Earth moves, there is no option but to accept it moves around the sun. The real intellectual problem is to work out why the Moon moves around it.

"The rules of the Inquisition forbade the use of torture on the elderly and infirm"

How humane! (Takes break from writing book and thinks.) Let's see, am I old or infirm enough to avoid torture? Yes, things are more complicated than the cardboard stories often recounted about Galileo and Giordano, but to suggest that a powerful institution which can and did torture and kill people for no other reason than that said people disagreed with its policy didn't have much effect on scientific debate is absurd.

I don't see the heliocentric theory as vanquishing all because of the "beauty" of its maths. Part of the problem was Aristotle's dynamics.

One of the problems with history, as John Lukacs used to say, is that we must study Salamis as if the Persians might still win. That is, we must look at matters based on what was known at the time. To judge the reception of Copernicanism in the early 17th century, you ought not call upon concepts or data that were not known until later. Or for that matter, without taking account of the external events in the world at the time, such as the Thirty Years War.

The ancients, Arabs, and medievals knew the sun was large and far away, though not as large and far away as we now believe, but what principle requires small objects to go around larger ones? Remember, they barely knew momentum (which they called 'impetus') but had no knowledge of 'inertia' (the Latin word meaning 'laziness.') Furthermore, they did not imagine that these bodies were physical objects spinning independently in a void. They were embedded in nested shells made of dark matter, or 'aether.' Each shell was driven by the one above it, like a gear train. There is no reason why a drive wheel must be larger than the wheels being driven. See your bicycle for details.

The comparison of stellar luminosities and diameters provided the best evidence for geocentrism. Procyon was about the same brightness and apparent diameter as Saturn. Therefore, it could not be too much farther off than Saturn. No more, say, than 100x farther, because simple geometry would dictate that it be larger than the entire "solar system". Indeed, all the stars would dwarf the Sun and form an entirely new class of objects, offending Billy Ockham. The Copernicans answered "Goddidit!" Since God was infinite, who cared how big the stars were?

Now, if the stars were as close and as big as their discs made them out to be, then stellar parallax would be obvious to the eyeball. Since no parallax could be seen, the earth could not be revolving about the sun. Hence, it must be stationary in the center of the World. Tycho's reasoning was tight and based on the very best and most precise observations, measured at Uraniborg.

One problem: those stellar discs turned out to be optical illusions, called Airy disks, after George Airy, caused by atmospheric aberration. But that was not learned until the 1800s, after first Calandrelli and then Bessel observed actual parallax in the fixed stars.

The interesting thing is that the earth's rotation seemed more acceptable to folks than its revolution. Buridan had pointed out, using Witelo's principle of relativity in the 14th century, that all motion is relative to another motion, and that appearances would be the same if the earth rotated and the heaven remained fixed or if the heavens revolved and the earth remained fixed. So the tides could result from the sun revolving around the earth or the earth rotating under the sun.

What it comes down to is that people like Tycho and Scheiner and Marius and the rest were not fools. They had reasons for reaching the conclusions they did.

And yes, to a late Renaissance polymath, the beauty of the math was a major Platonic selling point. The heliocentric model, once it was simplified to elegance by Kepler's ellipses, was sold by 1660 -- before Newton gave it a physical foundation and long before the discovery of Coriolis effects and stellar parallax had rebutted the old Popperian "falsifiers."

Yes, compared to 17th cent. secular courts, they actually were more humane. Torture had been re-introduced with the rediscovery of Imperial Roman Law, which not only allowed torture, but required it under certain circumstances.

A useful account of the workings can be found in Edward Peters, Inquisition, University of California Press, 1989.

However, your suggestion that the tribunals used torture "for no other reason" than that people "disagreed with its policy" is again a "cardboard story." Surely, there were other reasons!

Roman Imperial Law did not allow conviction on a capital crime based on circumstantial evidence. Conviction required either being caught in the act ("red handed"), the agreed testimony of two independent witnesses, or a confession. These were often hard to come by. So the rules allowed torture to elicit the confession provided there was enough circumstantial evidence to convince the prosecutor that the accused probably was guilty. It could only be applied once and any confession obtained under duress must, for the obvious reasons, be affirmed afterward when torture could not again be applied.

Of course, it is well known that a district attorney can get a grand jury to indict a ham sandwich, and there have always been the unscrupulous rule-benders. But we have cases on record of defendants in secular courts deliberately committing blasphemy in order to get their cases transferred to more lenient ecclesiastic courts. Our real objection is to the 17th century.

The reason the proceedings did not have much impact on scientific debate is that the Church inquisitions were concerned with heresy, not with science. There is no indication that scientific progress was held up, even in Italy. The Galileo case is mentioned so frequently because it is the only instance in which there was an apparent intersection. But as historians have come to appreciate, the reasons had as much to do with personalities and international politics as with any actual heresies. Otherwise, why would someone have needed to insert a false and dishonest Summary into the records or three of the ten inquisitors refuse to sign off on the sentence?

I may have been a bit too brief. To justify the heliocentric theory, you had to show Aristotle was wrong regarding falling, and everybody "knew" he was right. The problem was, you can work out that orbital motion must involve falling towards the centre while moving away sideways. I think adequate geometry was available at the time. Now if you accept Aristotle, then if the Earth moved, heavier things would fall faster than the light things, and the Earth would simply fall to bits. It doesn't, therefore it had to be stationary. This is a case where you have to carefully verify your facts, and in fact Aristotle was a strong advocate of observation overruling theory, but unfortunately his technique was inadequate, and while he knew about wind resistance, his leaf and stone dropping did not properly allow for air resistance. From then on, everyone was convinced Aristotle was right, and they did not check. Hopefully, that form of aberrant behaviour has now been put to bed, and we are fairly keen on checking, although whether we interpret what we find properly is another issue.

However, the issue with tides is important. My question is, how can you get two equal tides a day if the Earth is stationary? The Moon can attract, but it cannot attract and repel with the same degree of force. (The two tides are very close to equal amplitude.) With motion, the second tide comes from, dare I say it, the pseudo-force known as centrifugal force, but you can't get that without motion. I would be interested in an alternative explanation because I wrote an SF novel in which I hoped to show how science works, and I used this as an example.
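For what it's worth, the two near-equal bulges also drop straight out of a first-order expansion of the Moon's differential pull, without explicitly invoking centrifugal force: in the freely falling frame of the Earth, only the difference between the Moon's gravity at a point and at the Earth's centre matters. A minimal sketch, with $M$ the Moon's mass, $d$ the Earth-Moon distance, and $r$ the Earth's radius:

```latex
% Tidal (differential) acceleration relative to Earth's centre,
% on the near and far sides of the Earth along the Earth-Moon line:
a_{\text{near}} = \frac{GM}{(d-r)^2} - \frac{GM}{d^2} \approx +\frac{2GMr}{d^3},
\qquad
a_{\text{far}} = \frac{GM}{(d+r)^2} - \frac{GM}{d^2} \approx -\frac{2GMr}{d^3}.
```

To first order in $r/d$ the two terms are equal in magnitude and oppositely directed (toward the Moon on the near side, away from it on the far side), which is why the two daily tides come out so nearly equal. This differential-gravity picture and the centrifugal-force picture are two descriptions of the same physics, so either way the Earth's free-fall motion about the Earth-Moon barycentre is doing the work.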

Yes, compared to 17th cent. secular courts, they actually were more humane. Torture had been re-introduced with the rediscovery of Imperial Roman Law, which not only allowed torture, but required it under certain circumstances.

In case it wasn't clear, my comment that it was humane that torture was not allowed for the old or disabled was meant ironically.

However, your suggestion that the tribunals used torture "for no other reason" than that people "disagreed with its policy" is again a "cardboard story." Surely, there were other reasons!

Perhaps unclear: in many cases, it was used for no other reason than disagreement (and, yes, heresy is merely a disagreement). It was used for other reasons in other cases.

Roman Imperial Law did not allow conviction on a capital crime based on circumstantial evidence. Conviction required either being caught in the act ("red handed"), the agreed testimony of two independent witnesses, or a confession. These were often hard to come by. So the rules allowed torture to elicit the confession provided there was enough circumstantial evidence to convince the prosecutor that the accused probably was guilty. It could only be applied once and any confession obtained under duress must, for the obvious reasons, be affirmed afterward when torture could not again be applied.

This is just stupid. If the circumstantial evidence is good enough, no confession (and hence no torture) is needed. If it is not good enough, there is an appreciable risk of false confessions, as a confession obtained via torture is worthless. Affirmed afterwards? Give me a break. If not affirmed, the defendant would probably be accused of having lied during torture.

Your defense of these practices is a slap in the face of the thousands of innocent victims tortured and killed by the Catholic church, mainly because they saw women as the cause of all evil.

my comment that it was humane that torture was not allowed for the old or disabled was meant ironically.

Compared to secular courts, which did not so disallow its use? Or which permitted torture to be used as punishment?

in many cases, [torture] was used for no other reason than disagreement (and, yes, heresy is merely a disagreement).

The same is true today. You say it is wrong to murder someone. The murderer disagrees. Hence, his punishment is over a matter of mere disagreement. But wait. Maybe there is more to it than that? Perhaps assassinations in the name of the heresy? Arson directed against churches?

It was used for other reasons in other cases.

Excellent. What reasons, in which cases?

This is just stupid. If the circumstantial evidence is good enough, no confession (and hence no torture) is needed.

You would have to blame the Romans for that, not the Church. Allow conviction on circumstantial evidence and you will guarantee innocent people on death row.

an appreciable risk of false confessions, as a confession obtained via torture is worthless. Affirmed afterwards? Give me a break. If not affirmed, the defendant would probably be accused of having lied during torture.

Hey, General Flynn was not even tortured. Instead, his family was threatened with prosecution. We have ways of eliciting confessions even without torture. Besides -- and again, unlike the secular courts of the era, the ecclesial courts noted up front in their manuals that a confession elicited under torture could be unreliable. That is why it was permitted only when there was already good circumstantial evidence in the first place, why it could not be applied twice in the same case, and why it had to be affirmed. And why the boni viri reviewed transcripts in which all the actual names had been replaced by pseudonyms. Of course, as I've already noted, an unscrupulous special prosecutor who already has an animus against the accused can find ways around legal safeguards -- even today.

It is more convincing to tell us what did happen than to speculate on what "probably" might happen.

Your defense of these practices is a slap in the face of the thousands of innocent victims tortured and killed by the Catholic church

Actually, the Innocence Project has found quite a few on death row even today suffering the torture of false conviction. And even in non-capital cases, the Mass. Satanic day-care molestation panic of some years back should be an object lesson as well. During the lifetime of the Spanish Inquisition, itself a political exception, there were three spikes in capital cases (each associated with a political panic regarding subversion). Other than those, capital cases ending in execution by the state amounted to about 1-5% of the total cases considered. Most of the rest ended in sentences like penances, special prayers, pilgrimages, and outright acquittals. In terms of executions per year, they turned over far fewer to the secular authorities than the secular authorities executed on their own behalf -- or than the Rationalists guillotined during the few years of the Terror.

mainly because they saw women as the cause of all evil.

Mary the Mother of God? St. Cecilia, St. Anastasia, St. Agatha, Sts. Perpetua and Felicity, etc.? I don't know where this "mainly because" suddenly came from, since the sidebar regards the tendency of folks to rely on sacred myths rather than on empirical data.

You might want to read “The Voice of the Dolphins” by Leo Szilard, famous for the nuclear chain reaction, written in 1961. One of the stories, “The Mark Gable Foundation” I think, describes how to kill scientific research by handing out lots of high-value prizes. It’s effectively what’s happening now.

"The essence of the scientific method is to test hypotheses by experiment and then keep, revise, or discard the hypotheses. However, using the scientific method is suboptimal for a scientist’s career if they are rewarded for research papers that are cited by as many of their peers as possible."

The real problem is that academia became a bureaucracy, and bureaucracies exist for the benefit of bureaucrats. Professors are now just another flavor of bureaucrat. If you changed the reward structure from the number of cites to the number of tested hypotheses, you'd see a proliferation of meaningless and silly hypotheses. In fact, I'm sure that's why the cite system is now in play. People, I'm sure, gamed the earlier system with meaningless nonsense, so they thought they'd measure true research value by counting cites.

Everything gets gamed. Particularly when your job is dependent on political largesse. Political gaming then becomes the goal, not serving the supposed noble end of increasing humans' knowledge. The last is the pretty myth a lot of scientists tell themselves to rationalize the hefty fine politicians levy on taxpayers to fund scientists' navel-gazing. Tenure and political protection from markets, i.e., preventing scientists from getting the most reliable signal known to man about the most valuable research they can do (free-market pricing), is the real culprit.

Basically, what you've described is a system that has far more "scientists" than necessary to do meaningful, valuable work.

Witches and pagans learned how to scientifically prove their abilities, and everything you ever told us wasn't real or didn't exist actually does, because all science was created with the imagination, which resides in the same area of the brain used to invent new experiments to follow. So what's fake and what's real?
