“Forensic Bioinformatics”

Pielke Jr has sent me the following two links on the longstanding dispute pitting Baggerly and Coombes, two biostatisticians, against a team of cancer researchers at Duke University led by young star Dr Potti. See CBS News here and a Baggerly 2010 lecture here.

Baggerly and Coombes had attempted to replicate a leading paper; their efforts ultimately led to the retraction of the Potti papers. But the decisive step in the retraction did not arise from the proper operation of the peer review system or university investigations; it came about through something entirely fortuitous.

Their experience has many parallels to Climate Audit versus the Mann hockey stick, even to some small details. (This is not to say that all details are parallel). For example, the Potti et al papers used “meta genes”, which Baggerly explained as being nothing but principal components.

Like us, Baggerly and Coombes were frustrated by incomplete and/or misleading documentation and the resulting need to resort to what they described as “forensic bioinformatics” – which is exactly equivalent to what has been described at CA as reverse engineering. Like us, they even encountered a one-row-off error (compare MM03). They described an incident where the Potti authors appear to have reversed the resistant/sensitive labels on a drug – reversed, that is, rather than Mannian upside-down.

They published critical comments in journals. Potti and the original authors used Mannian language to rebuff the critiques, which they described as “deeply flawed”. The Potti coauthors said that the Baggerly and Coombes criticisms didn’t matter, that they were little more than complaints about typographical errors, and that any given criticism was obsolete because subsequent studies had already confirmed the results being criticized.

Baggerly and Coombes had trouble publishing results in cancer journals because their results were “too negative”. Eventually they published in Annals of Applied Statistics, leading to an investigation at Duke University.

But after three months, the Duke investigation completely cleared the Potti authors, stating (in language reminiscent of Muir Russell) that the investigation had “strengthened their confidence”. Baggerly and Coombes’ request for a copy of the investigation report was refused. Eventually they noticed that a copy had been sent to the National Cancer Institute, a federally funded agency, and, in a tactic reminiscent of Climate Audit, they submitted a FOI request for the investigation report, receiving it in a redacted form. Needless to say, the investigation had failed to actually investigate the allegations.

Baggerly and Coombes were incredulous, but had more or less run out of avenues to pursue.

Then completely out of left field came a decisive revelation from “The Cancer Letter”, a weekly newsletter that is sort of like a blog (see here). In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.

Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into, and Potti’s data manipulation began to unravel. Dr Nevins, Potti’s senior and coauthor at Duke, finally withdrew his support. The articles (published in the most eminent journals) were subsequently retracted.

There are many other parallels, but not everything is parallel. However, failing to report adverse results (e.g. a verification r2 of ~0, or hide-the-decline) is a form of data manipulation that should be taken seriously in the climate community, but isn’t.

In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.

The move Mann didn’t make. Rhodes Scholar envy can be a powerful thing. It spurred Mark Shuttleworth to start the company that made him a billionaire by 27, he told me as we shared a taxi through Oxford in 2003. (Mark had been rejected as a South African hopeful early in the 90s. He has since become a generous sponsor of the open source movement, from which he benefited commercially through the Python language, giving rise among other things to the popular Linux distro Ubuntu.)

But what a Potti thing to do in bioinformatics and what remarkable climatic parallels.

In the academic world egos can be big and disputes are rife. People hold incompatible world views (Bayesian vs frequentist, the noble savage vs evidence of frequent violent deaths among prehistoric peoples, incompatible string theories) and quite often the work is so complex that nonspecialists are reluctant to pass judgement. This, plus the ideal of academic freedom (even the freedom to be wrong or a crackpot), makes universities reluctant to adjudicate academic disputes. This becomes even worse when various protected fields can put forth total nonsense without being questioned (I dare not speak the names, but they use the word “studies”). When there is outright fraud, gross incompetence, or data transposed or faked (such as by respondents to a survey), this reluctance to intervene becomes the turning of a blind eye to bad behavior or incoherent results. I’m not offering a solution, just pointing out the difficulty. In my mind, when it is pointed out that statistics were used wrongly, or data is a mess, or a lone tree at Yamal determines the result, everyone should take the possibility of a problem seriously.

Yes, that certainly makes sense as to why such investigations are few and far between. The problem is that when they DO ‘investigate’, they ARE adjudicating the issue. When they instead issue whitewash after whitewash, they don’t have the excuse that you are offering.

The only publicized academic investigation that actually worked, as far as I can remember, is the Bellesiles matter at Emory. Even that was only due to his comparative lack of Mannian obfuscation and stonewalling – his excuse for not being able to produce his (fabricated) data was equivalent to ‘the dog ate it’.

theduke:
I had you pegged for a realist and now you disappoint and pull a Candide. I recommend a neat little book by David and Stephen Clark, Newton’s Tyranny: The Suppressed Scientific Discoveries of Stephen Gray and John Flamsteed as an antidote for your misplaced optimism.

Thanks, Bernie. I guess it depends on the day of the week. Climate Science will do that to you.

I’m not a scientist, mathematician, or statistician. My education was in history, literature and philosophy. I was describing my impression of Steve’s efforts. He’s winning in his own inimitable way, I think we can all agree.

theduke:
I share your opinion as to the likely effectiveness of Steve’s efforts. However, Steve quite appropriately looks at a carefully defined series of topics that play to his significant mathematical skill, and he uses what he finds there to argue for greater transparency and replicability – the heart of the scientific method – in the climate science field in general. If I had my way, Steve McIntyre would be a recipient of a MacArthur genius award. Without question, he deserves a fellowship or position at Queen’s or UoT. I hope he gains the scientific recognition that was so long denied Gray and, to a lesser extent, Flamsteed.

The heart of the dispute between Flamsteed and Newton was Flamsteed’s refusal to release his data (obtained by virtue of his official position as Astronomer Royal) in advance of the publication of his intended magnum opus. Newton’s position was the not unreasonable one that data obtained by a public servant belonged to the public and should be published without delay. There are obvious parallels here with the disputes over release of data in climate science.

Unfortunately the means by which Newton chose to pursue his objective were not so reasonable. Lacking an FOIA, his efforts to get Flamsteed to comply descended to the level of an ugly personal vendetta against Flamsteed and his associates. Gray, who was denied recognition of his work and died a pauper as a result, was the worst casualty.

One of the suggestions offered by Baggerly in his lecture (cited above) is that papers before they are published be sent to a reviewer whose job it is solely to take the data and code and rerun it to see if he gets the same result. He is pressing journal editors to do this. His take is that this would increase “by orders of magnitude” the availability (if only by active links) of data and code. Such a fine and reasonable and decent suggestion. And so the question of the hour: When will this become the norm in climate science? One thing I like about CA is that you always get both data and code right away from Steve M.

If taken up this simple step could transform the peer review system more than one could hope. In fact, why not have this done first, and let other reviewers have access to the replication as a matter of course?

It sounds simple (to take the data and code and rerun it) but this is often a very complex procedure.
Steve: yes and no. I’ve posted many scripts that readers have used. Doing analysis is more than just running code, but functional code dramatically simplifies the analysis procedure.

Alan Kay’s dictum for programming language design seems relevant here: simple things should be simple; complex things should be possible. The amount of time Steve has had to spend because really simple things have been made complex, despite peer review, shows not just spite (which has sometimes been present) but a system that is broken. If someone publishing knew the independent replication step was sine qua non it would change publishing. And this would be a bad thing because?

“The amount of time Steve has had to spend…”: Baggerly states that his team spent about 1500 hours doing all the “forensic bioinformatics” (reverse engineering), digging into this highly flawed paper on cancer drug trials based on genetic mapping. Yup. I wonder how much time SM has put into similar investigations in climate science. And as an aside, sort of: What is horrific is that the drug trials at Duke were never stopped because of the egregious (?) flaws in Potti, et al., as pointed out by Coombes and Baggerly, but were stopped instead because of a revelation of lies in Potti’s CV.

“It sounds simple (to take the data and code and rerun it) but this is often a very complex procedure”

One of the reasons I became skeptical in the first place was because our host backed up his words in the language that I speak. Running code. Occasional glitches sure, always fixed. This is the way Engineers work, we don’t have to trust each other, we check.

Now even if the scientists are running horrifically complex stuff, the current state of the art makes it perfectly possible for them to share their virtual machine image with interested parties. It’s not rocket science now. I can travel all over the world and create a copy of my machine’s memory, executing images, OS, everything; I can save it, restore it, etc. It’s commonplace.

Steve has shown one way to allow replication, but another is simply to create a complete working virtual PC that others can access and try.

While I can see that very, very large simulations may be difficult to replicate due to the sheer compute power required, most of the complaints that have been raised concern tiny data sets and small computations compared to what regularly goes on in the computer world these days.

I can foresee resistance when authors do not want to release too much of their precious raw data to ‘competitors’ who not only might find something wrong with it but, even worse, find something else that the authors hadn’t.

Having the reproducibility test performed ‘in-house’ before submission seems a reasonable solution in that context.

Reproducibility tests ?
What are co-authors good for? Shouldn’t they be the first to do such tests?
Seems that nowadays co-authors are just an adornment and have no good idea of the contents of the papers they sign.

I’m not clear on why simple math checks can’t become a required portion of peer review. There’s got to be a simple way to force reviewers to go through the math used in papers.
Steve: reviewers don’t have the time and can’t be bothered. What they can and should do is ensure that the authors have made a proper archive so that someone sufficiently motivated can subsequently parse the study.

As a reviewer, I regularly work through portions of manuscripts. Sometimes to check a central result, and sometimes to derive a result the author might have missed. I also look up critical citations to be sure they convey what the author relates. What others do, I can’t say. But the duty of critical review can be fulfilled by those who take that duty seriously.

I reviewed one paper in which the main example had a major mistake in it. This mistake was very revealing about the basic shortcomings of the proposed method. I put that in my comments to the authors and I recommended that the paper not be accepted. The other three reviewers recommended acceptance and so I asked them about the mistake. That is, was there really a mistake there or was I just not understanding the paper. They agreed that the mistake was there and with its significance but still recommended acceptance.

In my world reviewing papers is something that we do “out of hide”, which is to say without any kind of support for the effort. That may seem a very greedy thing to say and yes, we are expected to do some things out of service to our academic community, but we already do a lot of the latter and a day really is only so long. So unless something jumps out at me as suspicious or captures my curiosity in an unusual way, I am not likely to replicate the steps used by the authors. Passing peer review means passing a very cursory quality-control check. Papers with incorrect claims can and do get published all the time in the peer-reviewed literature. To me, this is expected, no big deal. The problem is when some try to confer a blessed status on a paper just because it was published in a peer-reviewed journal.

Establishment confirmation-bias seems to be rampant all over the place: climate science, medicine, psychology, Keynesianism, even amongst the spreading cult-of-science, anti-theist movement. I was led to believe the sciences are a bastion of truth. Was I wrong?
Seems collectivists and sociopaths have co-opted all positions of power.
Btw, I’d be interested in knowing how Steve’s opinions have changed over the last few years. I think he’s getting more cynical.

Your confirmation bias is showing: you forgot several “establishment confirmation-bias” organizations in your list – some of which were established thousands of years ago.

And the fact that there are bad apples does not mean that apples cannot be eaten in general (nor does it mean that apples are not tasty).

But go ahead, you already know that science is wrong, and you and your religion(s) are right – so believe whatever you want to believe. After all you don’t need any confirmation from someone methodical like Steve, you just need to believe.

The central promulgator of that corruption accusation is anti-skeptic author Ross Gelbspan, whose 1997 “The Heat is On” book is described as seminal for other works which claim skeptics are corrupt, but which ultimately rely on Gelbspan as their source. The ‘big coal & oil’ corruption accusation is not independently corroborated.

Gelbspan was (and in some cases, still is) described as a Pulitzer winner, with no ambiguity about that, one of the more prominent examples being the cover of his 2004 hardcover “Boiling Point” book ( http://img2.imagesbn.com/images/102700000/102707056.jpg ). His back-pedaling about the matter in the 2005 paperback’s preface is quite unconvincing.

Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into . . .

Either that, or it was something they could no longer sweep under the rug. Neither explanation does much for the reputation of academe: it either can’t distinguish between solid research and garbage, or it is only interested in gossipy scandals. I don’t buy the excuse that the academies (and journals) are reluctant to pass judgement on highly specialised or technical work; if their credibility is important to them, they had damn well better find a way. In this case, inviting a statistician in for review would have been a simple matter.
Steve: Baggerly and Coombes were statisticians, but the journals didn’t want to listen to them.

Credentials are very important in the academic world, since there are no stock options or bonuses. Debasing this coinage is a serious offense. Likewise when people claim to have a degree and don’t. Ironically, of course, journals never ask for credentials and anyone can publish in any journal without a Ph.D. (which I think is wonderful). Most critically, it is the kind of fact that is easily shown to be false and not just a difference of opinion. When it is shown that someone lied about something this important, it is easier to imagine maybe they lied about their research.

What is amazing about Anil Potti, and quite dissimilar to this, is his manipulation of data he collected relating to a drug cocktail used to treat lung cancer patients. I can’t imagine anything more vile than that. This has quite a long account of it.

Unlike Mann, Potti lost his job but, as I understand it, kept his pension by resigning instead of being fired – a common trick used by organizations to minimize damage in cases like this by greasing the way out for the ex-employee. Potti amazingly still practices medicine (oncology, in Chapel Hill, North Carolina). By “catching him” puffing his resume, they avoided having to even deal with his other alleged misconduct.

You have to ask Dr. Nevins where he was during the fraudulent behavior, but he won’t say when asked. Two-word summary (paraphrased, of course): “I dunno.”

Great line in Baggerly’s lecture – which is well worth watching even if you are a statistical and medical pygmy like me. He does it all with pizzazz.

‘Hey, who needs to conduct experiments if you already know the answer?’

I wonder if he has read the Hockey Stick Illusion? His forceful and energetic style of presentation would make an excellent counterpoint to Steve’s more cautious and reticent Canadian delivery. And it seems that despite their apparent external differences they are working in very similar ways.

Perhaps Steve should rename himself as a ‘forensic climatologist’? Sounds a bit better in the media (think Gil Grissom, great brain, clever, figuring out the puzzle) than ‘auditor’ (think Enron, boredom, nit-picking)

This type of problem has been seen before in other areas, and even when it becomes public it can be hard to correct.
But it is still unusual in most fields; what we seem to see in climate science is this issue arising far more often, even as an expected norm.
This could be because of its pseudo-science nature, and it could be because of its political nature, with even ‘the Team’ calling their work by that highly unscientific term ‘the cause’, while it is clear its leaders are as happy to act as advocates as scientists.

Almost worse than this is the failure of the gatekeepers to control such behaviour, and the failure of the scientific establishment to call out such nonsense when it is seen; far too often it is found that the ivory tower is something it simply does not wish to leave in order to address the public’s concerns.

The Baggerly lecture needs a lot of thought before one rushes to comment. There is new information in there, like simple errors tending to be the most common. While this might seem old hat to some, methods to detect and correct simple errors could do with much more attention and sophistication than currently exist. Also, reviewers and colleagues “might” simply brush past the rudimentary error-check stage in order to get to their high-expertise areas. (I knew a famous medico who never learned how to put gas in his car; he always went to the same garage for a fill.)
In recent months in Oz we have found with climate data that conversion from degrees F to C and back using one place after the decimal does not produce a unique result, hence it is impossible to reconstruct how many original deg F were in whole numbers. We have found that some record-setting high temperatures reported in newspapers arose because the Bureau gave Glaisher screen results to newspapers but used Stevenson screen results for official records, and that transcription errors remain despite many rounds of checking. Positional coordinate errors remain. One that we discussed today was the double insertion of a result that displaced the remainder of a table column by a day, playing havoc with correlation coefficients. Simple errors like these can have large unforeseen effects, but they seem to be beneath the dignity of peer reviewers to examine.
For reasons like these, I’m still at the stage of mistrusting all Australian climate data pending further checking and verification together with several colleagues, all acting far below their true skills.
You can’t make a solid building on a weak foundation and for global temperatures at least, the foundations are weak while the edifices are elaborate enough for Nobel Prize talk.
………………
How can scientists carry on like this, knowing that there is a plausible chance that they will one day be personally on the receiving end of manipulated and ineffective medical treatment?
………………
For whom the bell tolls.
………………
Many parallels with Steve, but note – must learn to say ‘doxorubicin’ and ‘topotecan’ in under 10 milliseconds.
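The F↔C rounding point made a few comments above is easy to demonstrate. A minimal sketch (the 90 °F value is a made-up illustration, not one of the Oz records discussed):

```python
def f_to_c_rounded(f):
    """Convert deg F to deg C, keeping one place after the decimal,
    as historical archives often did."""
    return round((f - 32.0) * 5.0 / 9.0, 1)

# A whole-degree reading and a nearby tenth-of-a-degree reading
# collapse to the same archived Celsius value...
assert f_to_c_rounded(90.0) == 32.2
assert f_to_c_rounded(89.9) == 32.2

# ...so converting the archived 32.2 C back to Fahrenheit cannot tell
# you whether the original observation was a whole number of deg F.
back = 32.2 * 9.0 / 5.0 + 32.0   # about 89.96, neither 90.0 nor 89.9
```

Since each 0.1 °C step spans 0.18 °F, the rounded Celsius archive simply does not carry enough resolution to recover the whole-degree Fahrenheit originals.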

Ditto on the less than rigorous approach to the use of statistics in academia and even industry. While my experience in that field was decades ago, just avoiding the pitfalls of designing experiments was a major aid to getting reproducible results that could be presented to management and provide consistent improvements to the bottom line.
In the current case, teasing some kind of “proof” for pre-conceived notions out of a mass of already measured, non-randomly produced data makes the job excruciatingly hard.
All the more reason to have in-house statisticians on hand BEFORE conclusions are reached, let alone published.

“For example, the Potti et al papers used “meta genes”, which Baggerly explained as being nothing but principal components.”

Maybe scientific journals should adopt a policy that any paper using PC methods should include a separate statistics review. PC seems widely misunderstood and the potential for mis-interpretation is high. These shenanigans could be nipped in the bud.

Tom C, many people seem to think PCA was some huge innovation, but really, it’s quite simple. All PCA does is add a layer of obscurity to the process. The problem is journals don’t require that papers even say what they did, much less provide true documentation/code so it can be checked. That, combined with the fact that PCA obscures how results were generated, means reviewers can miss things they ought to catch.

A separate, statistical review isn’t necessary. What’s necessary is a decent first review. And that requires journals hold authors to higher standards than they currently do.
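Baggerly’s point that a “metagene” is nothing but a principal component can be made concrete. The toy sketch below is purely illustrative (a tiny made-up matrix, not data from the Potti papers): it extracts the first principal component of a small “expression matrix” by power iteration, which is all a metagene score amounts to.

```python
import random

random.seed(0)

# Hypothetical toy "expression matrix": 6 samples (rows) x 4 genes (cols).
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(6)]

# Center each gene (column) across samples.
ncols = len(X[0])
means = [sum(row[j] for row in X) / len(X) for j in range(ncols)]
Xc = [[row[j] - means[j] for j in range(ncols)] for row in X]

# Gene-by-gene sample covariance matrix.
n = len(Xc)
cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
        for b in range(ncols)] for a in range(ncols)]

# Power iteration finds the leading eigenvector of the covariance
# matrix -- i.e. the first principal component, a.k.a. the "metagene":
# just a fixed weighted combination of the genes.
v = [1.0] * ncols
for _ in range(200):
    w = [sum(cov[a][b] * v[b] for b in range(ncols)) for a in range(ncols)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Each sample's "metagene score" is its projection onto that vector.
scores = [sum(row[j] * v[j] for j in range(ncols)) for row in Xc]
```

Nothing mysterious happens anywhere in that pipeline, which is exactly the point: dressing it up as a “metagene” adds terminology, not method.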

I think I once inquired of the Geophysical Union whether they could check the Jones nomination and received no response. Perhaps this type of behavior is common or even expected for Geophysical Union fellows?

Without the fake “Rhodes Scholar” issue this affair could have dragged on much longer, while clinical trials were being conducted on real live cancer patients!

Meanwhile, our climatologists want policies of national and world importance to be based upon their own stonewalling and lack of good practices. It is always worth citing episodes such as this disturbing statement in 2005 noting apparent issues with Mann’s work:

[Myles Allen in 2005 on problems in analyzing Mann’s work]: “…I tried and failed to understand Mann’s error analysis using both approaches about 5 years ago, so I don’t think it is worth trying again, particularly given his current level of sensitivity.”

This sort of stuff transcends academia, it is really the politics of power. The Soon and Baliunas rebuttals were also “too negative,” including fabricated claims and “deeply flawed” blather, but got published anyway because Mann et al were riding a power wave and Soon was seen as an outsider. Duke U. apparently perceived nothing more than a couple gnats buzzing around its head, and swatted them away. But when the powerful retracted their support, the outcome was inevitable. The moral of the story is that legitimate skeptics must not quail when confronted with power politics. Activism involves losing a thousand battles on the way to winning the war.

The lesson learned from this incident, one that can well be applied to the current state of climate science, is that, contrary to what we hear from the defenders of accepting more or less out of hand what might appear to be an overwhelming consensus view held by the powers that be in a given science community, it is prudent and necessary to have knowledgeable, skeptical and independent individuals doing analysis and reporting the results by whatever means are available.

On the other hand, it shows that while the peer review system can tend to ignore these skeptical inputs as too negative, or for other rather vague reasons, and other bodies can conduct cursory investigations that appear to counter this skepticism, it should give no great solace to the out-of-hand defenders – as that system and those bodies can be shown to get it all wrong on some occasions.

While the peer review system and the academic enablers in these instances are probably not going to make major changes anytime soon, the healthy skepticism that this incident and ones like it generate, and the informed analyses that skepticism motivates, have to be a good thing for science in general. Obviously the advent of the internet as a source for data and a means of communicating results of analyses has been a very positive and important factor in making these analyses effective counters to consensus thinking and the problems that that type of thinking can generate.

I have a question that came to mind during the discussion on this thread about the potential legal liability of reviewers of a published paper that, as in this case, enables further actions which, because of the flawed yet peer-reviewed publication, cause damages to individuals. Certainly the due diligence of the reviewers could be questioned in cases like this one.

I am not at all sure that litigation of this sort would be proper or practical as a deterrent, but given the litigious times we live in, at least in the US, I am wondering whether such litigation has been initiated, or what shields are in place to exempt reviewers.

I just recently saw a very interesting TED talk about bad science and bias in the reporting and publication of research results. While the presentation focuses more on particular clinical trials and medicine, the presenter talks about how FOIA efforts have been necessary to try to find the truth and missing data. Not only am I appalled at this happening so pervasively in the field of medicine, the parallels to climate science are undeniable. I think it’s worth 10 or 12 minutes of your time to watch.

Some relevant quotes from the video:
“[In science,] we only hear about the flukes and about the freaks.”

“Positive findings are around twice as likely to be published as negative findings.” (Very relevant given the case cited in Steve’s blog post here)

“Real science is all about critically appraising the evidence for somebody else’s position.”

Perhaps statisticians need to become more organized to enhance and enforce higher research and publication standards across all empirical disciplines. When someone with statistical expertise blows a whistle it ought to require (at a minimum) genuine independent and exhaustive review of data, code, methods, etc. That there has so often been no real independent scrutiny is part of the problem. Here is a disturbing case in the courts this year, a large study on Alzheimer’s at Harvard:

I think that all that is really needed is that within “multidisciplinary” sciences, any paper be reviewed by specialists from each contributing discipline. Climatologists would weep real tears at the thought, of course, since you are dealing with physics, geology, statistics, and a raft of other disciplines as well.

The supposedly independent review late in 2009 was not provided with key info from Baggerly and Coombes. What then did the “review” actually “replicate” since they didn’t notice the problems? Sounds like the kind of whitewash inquiry we’ve seen too much of…. Interesting article at Nature website:

“In an NCI report obtained by Nature, Duke’s external reviewers say that they can replicate the results using data provided by Potti, but seem unaware of any doubts about the data. Kornbluth and Cuffe admit that, in consultation with John Harrelson, who was acting as chairman of Duke’s Institutional Review Board, they decided not to forward the latest communication from Baggerly and Coombes to the rest of the board or the external reviewers.”

Thanks Skiphil, the article is interesting and so is the first comment from Steven McKinney. In fact, if we take one of McKinney’s statements and leave out a few modifiers we get:

“The […] modeling machinery has undemonstrated type I and type II error rates, and should be evaluated on simulated data of known structure. My concern is that on simulated random […] data for a few dozen cases with randomly assigned binary categorization, the model will find structure somewhere across the 20,000 […] rows of random data and appear to yield a highly accurate predictor.”

Now just substitute “climate proxy” where you see a “[…]” and we have a decent correlation to the Mannian Hockey Stick Generator.
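McKinney’s warning is easy to reproduce. The sketch below is a hypothetical illustration (sizes taken from the quote: a few dozen cases, 20,000 rows of pure noise, randomly assigned binary labels) of how hunting for structure in noise, with feature selection and evaluation done on the same data, yields an apparently accurate predictor:

```python
import random

random.seed(42)

n_cases, n_features = 40, 20000   # "a few dozen cases", 20,000 rows of noise
X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_cases)]
y = [random.randint(0, 1) for _ in range(n_cases)]  # random binary labels

# The step the critique warns about: select the features most associated
# with the labels using the SAME data the "predictor" is then scored on.
ybar = sum(y) / n_cases
assoc = [abs(sum(X[i][j] * (y[i] - ybar) for i in range(n_cases)))
         for j in range(n_features)]
best = sorted(range(n_features), key=lambda j: assoc[j])[-10:]

# A trivial nearest-centroid classifier on the selected features.
def centroid(label):
    rows = [X[i] for i in range(n_cases) if y[i] == label]
    return [sum(r[j] for r in rows) / len(rows) for j in best]

c0, c1 = centroid(0), centroid(1)

def predict(row):
    d0 = sum((row[j] - c0[k]) ** 2 for k, j in enumerate(best))
    d1 = sum((row[j] - c1[k]) ** 2 for k, j in enumerate(best))
    return 0 if d0 < d1 else 1

accuracy = sum(predict(X[i]) == y[i] for i in range(n_cases)) / n_cases
# On pure noise a predictor should score ~50%; this one scores far
# higher, purely because structure was hunted for across 20,000 rows.
```

Held-out validation, or simulation on data of known (null) structure as McKinney suggests, is exactly what exposes the trick: the same predictor collapses back to coin-flip accuracy on fresh noise.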

I had a paper accepted in March 2012 in an academic social sciences journal which has only just appeared in the online edition (Oct 2012). So I don’t think it’s at all strange that a paper accepted in July has not been published yet.

We were in a meeting with our intellectual property officers last week and I was informed of something quite pertinent to the ‘Potti and Nevins’ case. You can patent a computer program in which you put information in at one end and get information out the other end, as long as no clinical judgement is required. If you develop an algorithm that helps a clinician make a more informed decision, then you cannot patent it. Potti and Nevins wanted to make money from a computerized diagnostics system. They could only have IP protection if they could show that
sequence in ——> drug choice output
was completely independent of human judgement. We are nowhere near this for any aspect of personalized medicine.

This is true for all patents. Patents cannot be about “best practices” or procedures that are done by people. Even if the human plays only a small part in the process, the entire process is unpatentable.

A patent could be granted for a “system or method” that uses a computer and components and that “displays” information from which the human makes a choice – that is, if the determination of the choice involves a novel (new and useful) method or system.

Yes Tom, you can see that their computerized diagnostic system required a definite output.
They would do incremental improvements in their algorithms, but would need a ‘definite’ output.
I am working out how to make a simple output, based on screening, that is not dependent on human judgement but informs a human. It is actually jolly difficult.

The main issues that I think you are describing are those of “obviousness” and perhaps “novelty”. The operative part of a patent is the claims, which are sentences appended to the specification that precisely describe what the claimed invention is. If the claimed invention is something that someone of “ordinary skill in the art” would find to be “obvious”, then the invention is not patentable. However, one must read the claims to determine just what a patent is claiming, and it is difficult to discuss the validity of any patent without a close examination of the claims. The test is whether a claim is “obvious to someone of ordinary skill in the art”, which is usually taken to mean someone with a master’s degree.

ah yes, that would be good in theory. However, that piece cites the example of Andrew Wakefield, and from close investigation I know that he was not the fraudster he is portrayed in Wikipedia as being. The story there makes me even more upset and alarmed than the Climategate whitewashes do. You have to dig thoroughly, like Steve McIntyre does here, to find the hidden evidence.

I suppose that this is not entirely OT: A friend of mine is a professor of biochemistry at an R1 university. He has said to me that, “back in the day” (around 1970), when he graduated with an MD-PhD, there was “no question” that he would land a professorship at a major university, that he would achieve tenure, and that he would get research funding, but that none of those is true anymore for persons with the same degrees. Apparently the supply of professors greatly exceeds the demand. And so, therefore, the pressure to publish and BE RECOGNIZED must be intense. Of course this could tend to lead to all kinds of behaviors that throw true science under the bus in the interest of self-advancement, both as authors and as reviewers. So I think I understand it, but understanding does not equate with a vote of approval.

Interesting.
Perhaps too many people are using easy funding to get degrees that aren’t worth much for income, because they aren’t of top-notch capability in the field of their degree.
Though when I graduated in 1967 I wondered how so many science graduates would find work.

Perhaps something will rub off from this objective ethics program at Duke U: http://www.vem.duke.edu/program.htm
though many academics and university bureaucrats will make the fallacious claim that they are not in “the marketplace”.

We appear to have arrived at the point where the athletics departments of major universities have more integrity than their academic counterparts. This is indeed a bizarre twist for me as I have long viewed the emphasis on athletics (in the US this primarily means football) as a corrupting influence on the proper purpose of a University.

In a case like this it is Duke University which needs to be held accountable for the academic misconduct of its faculty member, because it failed to properly supervise his activities and abetted his misconduct by conducting a sham investigation.

Sanctions should be imposed which impact the University’s access to research funds.

Then completely out of left field came a decisive revelation from “The Cancer Letter”, a weekly newsletter that is sort of like a blog. In his CV, Dr Potti had falsely claimed to be a “Rhodes Scholar (Australia)” – a claim refuted at The Cancer Letter.

Although this misrepresentation did not bear on the dispute itself, it was the sort of thing that the academic community could dig its teeth into, and Potti’s data manipulation began to unravel. Dr Nevins, Potti’s senior and coauthor at Duke, finally withdrew his support. The articles (published in the most eminent journals) were subsequently retracted.

The ‘complaint’ filed October 22, 2012, against CEI & Mark Steyn states: “As a result of this research, Dr Mann and his colleagues were awarded the Nobel Peace Prize”

It appears that no less an authority than Geir Lundestad, Director, Professor, The Norwegian Nobel Institute, has responded:

1) Michael Mann has never been awarded the Nobel Peace Prize.
2) He did not receive any personal certificate. He has taken the diploma awarded in 2007 to the Intergovernmental Panel on Climate Change (and to Al Gore) and made his own text underneath this authentic-looking diploma.
3) The text underneath the diploma is entirely his own. We issued only the diploma to the IPCC as such. No individuals on the IPCC side received anything in 2007.

It would appear that the IPCC/Rajendra Pachauri took their Nobel diploma, added the supplementary text and sent a copy exclusively, to some 2,000 AR4 contributors…

Is this not the sort of thing that the academic community could dig its teeth into? It’s certainly reminiscent of Mann’s proxy practice.

This is from Oct. 12, 2012 but I haven’t seen it discussed. Note that the issue of what they term “resubstitution” in biostatistics may have analogues in climate science papers which are not scrupulous about how they “train” and compare data in different periods of time:
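To see why resubstitution (scoring a model on the very data used to fit it) is so misleading, here is a minimal sketch in Python. The data and labels below are invented for illustration and have nothing to do with the Potti papers: a 1-nearest-neighbour classifier scored by resubstitution is always 100% accurate, because each point is its own nearest neighbour, even when the labels are pure noise. Only held-out data reveals the true performance.

```python
# Toy illustration of "resubstitution" bias: a 1-nearest-neighbour
# classifier scored on its own training data always looks perfect,
# regardless of whether the features carry any real signal.
# (Hypothetical toy data, not from the Potti papers.)

def nn_predict(train, query):
    """Return the label of the training point whose feature is closest to query."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

def accuracy(data, train):
    """Fraction of (feature, label) pairs classified correctly."""
    hits = sum(nn_predict(train, x) == y for x, y in data)
    return hits / len(data)

# Training set: feature values with labels assigned arbitrarily (noise).
train = [(0.1, "sensitive"), (0.4, "resistant"), (0.5, "sensitive"),
         (0.9, "resistant"), (1.3, "sensitive"), (1.7, "resistant")]

# Held-out set the classifier never saw during "training".
held_out = [(0.2, "resistant"), (0.6, "sensitive"), (1.6, "sensitive")]

print("resubstitution accuracy:", accuracy(train, train))     # always 1.0
print("held-out accuracy:     ", accuracy(held_out, train))   # 1/3 here
```

The resubstitution score is perfect by construction, which is exactly the trap: an analysis that “validates” in the same period (or on the same samples) it was calibrated on has demonstrated nothing.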