24 Responses to “Regulating science the way we regulate restaurant kitchens”

I’m not at all sure this is a good idea. The fact is that fraud in the sciences is extremely rare. It is true that when it does occur it is often very damaging, but I’m not at all convinced that the overall damage is enough to make this worth it.

I think the biggest issue is that whenever one of these fraudulent studies gets released, the media is all over it and EVERYONE hears about it. But when it finally is exposed as false, and a follow-up study or retraction gets printed, no one seems to care.

Just like the “50% of marriages end in divorce” study… People still quote that to me and I feel like smacking them upside the head.

The issue is that our media doesn’t really WANT to check its facts, and our public is still under the delusion that it does. It’s not our science community that needs work; it’s our media that needs to get back to its roots, and our public that needs to learn not to believe everything it hears.

Erik, yes, that’s a good point. But I’m not at all sure that the fraudulent studies are doing any more damage of that sort than either the poorly done studies or the studies that are well done but simply got unlucky (a 95% confidence level means that, on average, about one in every 20 true-null results will come out “significant” purely by chance).
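The “one in 20” arithmetic is easy to check with a toy simulation. This is a minimal, dependency-free sketch, not anything from the thread: the `one_study` helper and the `|t| > 2` significance cutoff (roughly p < 0.05 for two groups of 30) are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

def one_study(n=30, effect=0.0):
    """Simulate one two-group study of a nonexistent effect and return
    True if it reaches 'significance' (|t| > ~2, roughly p < 0.05)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    # Welch-style t statistic computed by hand to stay dependency-free.
    t = (statistics.mean(b) - statistics.mean(a)) / (
        (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    )
    return abs(t) > 2.0

false_positives = sum(one_study() for _ in range(2000))
print(f"{false_positives / 2000:.1%} of null studies came out 'significant'")
```

Run enough of these null studies and roughly 5% cross the significance bar with no fraud involved at all, which is the commenter’s point: a steady trickle of honestly wrong results is baked into the method.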

There’s another issue: we’ve already seen politically motivated accusations of fraud against prominent climate researchers. This sort of thing could make it much easier for politicians and the like to level such accusations against science they don’t like.

The article in question does suggest having universities put in place people who are in charge of integrity issues. If it is limited to requiring that sort of position at universities taking government funding, it would probably be OK.

I’m not saying this is a bad idea, but it is somewhat problematic. I can see a lot of “unintended consequences” when the government (which seems to have a very hard time understanding science, as evidenced by the lack of rational analysis in policy decisions) gets too deeply involved.

In other words I think it’s a good idea, but I’m skeptical that it can be pulled off in a “scientific” way.

What a dreadful idea. Science, in the long-term, takes care of itself. And all the important ideas typically go through some very hard-ball battles (continental drift, proof of the asteroid extinction of dinosaurs) that weed this stuff out. I can’t imagine anything that would do more to politicize and undermine science than having some bureau to investigate fraud since it would inevitably be used as a tool to attack rivals to political or scientific orthodoxy.

I don’t see a problem with more oversight, especially by staff at the institutional or department level employed specifically for that purpose. Paul raises a good point about politicizing oversight. Look at the climate research that was ‘discouraged’ in the States. It’s easy to imagine political appointees undermining inconvenient research with accusations of research misconduct.

How do you know it isn’t built to weed out fraud? Because sometimes it fails to weed it out? That’s like saying your seatbelt isn’t built to prevent you from going through the windshield because sometimes people end up through the windshield while using a seatbelt… Time to have the government decide when you will and will not go through the windshield! THERE’S a solution…

Look, an ERP study on the neurological correlates of definiteness in Indo-European could claim whatever it wanted, and – if it fit Boingboing’s particular tastes – would get reported here. But most of you (and especially the government) can’t interpret the data, much less decide whether it’s fraudulent or not. The only people who are going to be able to evaluate the research for fraud are other experts in the field. Peer review is the ONLY hope for weeding out fraud.

Not that this will stop most of the l33t internet Scienc0rz from rabidly commenting, of course. I’m sure Wikipedia has a page up on this very topic. When all else fails, there’s commentary.

Here’s the stupid thing: scientists report fraud, and that’s seen as a weakness. Actually, it’s a strength, because it means someone got caught. Because of this, I don’t think we need to fund a government fraud investigator, and I doubt such an office will have sufficient funding or expertise to do its job.

Worse, the cost of a false fraud accusation is probably greater than the cost of real fraud. After all, if a researcher is right, do you want the government to destroy that person’s position through accusations of fraud? What if that person is a Darwin or Copernicus? I can pretty much guarantee that politically and socially unpopular science will be the primary target of fraud investigations.

Fraud is hard to detect in science: from the little work that’s been done on the subject, most fraud occurs as papers that support existing science, or occurs in marginal fields where there’s little money for replicating studies.

Where there are concerns of fraud, the best thing to do is to fund someone else to re-do the study and attempt to replicate the result.

It would be nice to hold scientists accountable, but the regulations would need to be drawn up so that researchers could only be in trouble for willfully or negligently bad science.

Errors happen. Science changes. That’s the nature of the beast. We wouldn’t want everyone who published something that turned out to be flawed to be in trouble; that hinders innovation and research. And it’s all of our responsibility to look closely at what we hear and see, and judge for ourselves whether it seems true or trustworthy. But there’s a happy medium in there somewhere. You should not be able to willfully publish, or allow to be published, something intentionally fraudulent, or bad science that could have VERY easily been avoided with even a moderate degree of scholarly diligence. But before any consequences follow, someone should have to show that you either meant to spread bad info or didn’t try reasonably hard to make sure it was good info.

What a terrible idea. Oh boy, this is bad. Aren’t there organizations within the academy that can take on training and the like about the meaning of peer review? Self-regulating professions should always work hard to stay self-regulating.

“Anti-fraud” regulation of science is unnecessary; there is already a tried and tested method called replication. Just because one small study finds something does not mean the result is real (it could be fraud or random chance). But if many studies from different scientists find it, then it is likely a real finding. Let the frauds publish, and let other scientists expose them as frauds. The real problem is the media reporting science without proper context.
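The replication argument can also be sketched numerically. Assuming (for illustration only) a simple two-group design with no true effect, a single small study can throw up a sizable spurious “effect,” but pooling many independent replications averages the noise away:

```python
import random
import statistics

random.seed(7)

def study_effect(n=20, true_effect=0.0):
    """One small study: return the estimated effect (difference of means)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(b) - statistics.mean(a)

estimates = [study_effect() for _ in range(50)]

# A single lucky study can report a sizable 'effect'...
print("most extreme single result:", max(estimates, key=abs))

# ...but the pooled estimate across replications hovers near zero.
print("pooled estimate across 50 replications:", statistics.mean(estimates))
```

No regulator is needed for this filter to work; it only requires that someone actually funds and runs the replications, which is the weak link the later comments point at.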

There are realms of science which are already audit-intensive: drug trials, in particular, have to proceed in a highly regimented way in order to satisfy regulatory constraints. It’s very slow and expensive. For the rest of science, I don’t think inserting regulators into the loop will reduce the incidence of fraud, and it will definitely damage science. The thing about fraud is, either it doesn’t matter (the result is so trivial that no one ever depends on it) or it gets caught eventually (when the result doesn’t replicate for someone else). Science by definition is reproducible.

Still, it’s worth shouting from the rooftops that peer review isn’t supposed to catch fraud. There’s no way to audit the raw data as a reviewer, so your job is to judge and improve the quality of the work presuming basic competence and honesty.

In addition to the arguments presented above, explaining why this is not a good idea, I’d like to point out that the line between deliberate fraud and bad science can be a fine one. And it would take a lot of investigation to identify the cases of true fraud – and then we’d have to trust the investigators… and who would regulate these regulators?
This sounds like a way to create an additional complicated loop with the same problems as the current system.

” I’d like to point out that the line between deliberate fraud and bad science can be a fine one.”

This is a very good point. In fact, there is no line between the two. Much poorly conducted science could easily be described as ‘fraud’ by an over-zealous police unit. No control group? Fraud! Had a control group but ignored the fact that the groups differed on key variables? Fraud! Ran too many analyses, fishing for a significant effect? Fraud! And so on.

This is impossible, and whenever governments make new laws to attempt impossible things, it never goes well.

It can’t work for so many reasons I don’t know where to start, but various problems include the international nature of science; the difficulty of discovering fraud; the difficulty of deciding what even constitutes evidence of fraud; the intrusion of lawyers into the scientific method; the list goes on.

The science of keeping food clean, preventing contamination and bacterial growth is well known.

Questions of human honesty and trustworthiness are much more problematic. The “same way it regulates restaurants for public health” sound bite itself illustrates this.

I think that Wakefield’s financial conflicts of interest would best be addressed through financial regulation. And his public actions and speeches that endanger public health could also be addressed directly, independently of any good or bad science he has done.

As for bad science… Peer review and replication reveal bad science. Bad science is best ignored. There will always be cranks. Grant them their freedom of speech, and then ignore them when they rant like the crazies on the street corners.