
Open science to settle stripy controversy?

Julian Stirling

This is a guest post by Julian Stirling, a final year PhD student at the University of Nottingham.

Those who have been following the stripy nanoparticle controversy will know that Raphaël, along with other researchers, has been writing a paper analysing both the archived STM data provided by Stellacci and the work presented in some more recent papers. I am happy to announce that our paper was submitted just before Christmas, and we have uploaded a pre-print to the arXiv. Over the coming weeks, we may write some posts to give more background for some of the arguments presented in the paper (I, for one, plan to write a post putting the Fourier analysis in the paper into context for those who are not familiar with it). In this post, I would instead like to give a bit of personal background on how I got involved with the work, and how we have published it.

I first heard of the stripy work from Phil (Philip Moriarty, who has previously written guest posts at the blog), my supervisor. My reaction was to laugh at what appeared, to me, to be obvious SPM artefacts. It was a good 6 months later (I think) when I heard that Phil had given a new PhD student in the group (Ioannis, a co-author on the paper) the task of reproducing the stripy images with other nanoparticles. I suggested we could also combine these results with images from an SPM simulator which I wrote as an undergrad. I also offered to help read the raw data into MATLAB, as I run an open-source SPM image analysis toolbox for MATLAB. This led to me helping with analysis, and, before I knew it, I had become the main author on the paper.

The more I delved into the stripy papers and the archived data, the more disillusioned I became with the peer review system. This is not an exaggeration. It was not just the stripy artefacts in the original paper; these were bad, but one can imagine that the reviewers were experts in, say, nanoparticle synthesis rather than SPM. The real problem was the later papers using “statistical analysis” to separate real stripes from feedback noise. This “statistical analysis” showed such poor methodology that it reminded me of some of the worst data analysis I have seen when marking undergraduate lab work. One example is averaging ‘eye-balled’ measurements from SPM images, then mindlessly (I do not use this word lightly) applying standard error calculations to generate uncertainties for these widths which correspond to 0.026 pixels in the original image. Without unbelievable CSI-style image enhancement this cannot be reliable!
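To put that sub-pixel figure in context, here is a hypothetical back-of-the-envelope calculation. The reading error and number of measurements below are invented for illustration only (they are not taken from the papers); they merely show how the standard-error arithmetic can mechanically produce an absurdly small uncertainty:

```python
import math

# Hypothetical illustration (these numbers are invented, not from the papers):
# suppose stripe widths are 'eye-balled' from an STM image with a reading
# precision of about +/- 0.5 pixel, and N such readings are averaged.
reading_error_px = 0.5   # assumed precision of a by-eye measurement
n_measurements = 370     # assumed number of pooled measurements

# The standard error of the mean shrinks as 1/sqrt(N)...
sem_px = reading_error_px / math.sqrt(n_measurements)
print(f"standard error of the mean: {sem_px:.3f} px")  # ~0.026 px

# ...which is far below anything the image itself can resolve. The 1/sqrt(N)
# formula is only valid if the individual errors are independent and random;
# for eye-balled measurements of a noisy image they are neither, so a quoted
# sub-pixel uncertainty like this is not a real measure of reliability.
```

The point is that a plausible-looking error bar can be generated from numbers the underlying image simply cannot support.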

The project has slowly consumed more and more of my time (while I was simultaneously trying to write my PhD thesis). A lot of this time was spent doing the analysis which should have been done when the original data was taken. At other times I got stuck trying to understand and reproduce analysis which should never have been done in the first place. Managing to reproduce the published figures from the raw data was not an easy task, often involving extreme offline zooms combined with interpolation and contrast adjustment. Once this became clear, we made the decision to make all of our analysis open. All image analysis was scripted in MATLAB (no graphical programs used), and all data and scripts have been uploaded to figshare. This way anyone who is interested in this controversy can double-check every single step of our analysis if they wish. I feel the only way to settle a controversy is to be as open as possible in terms of both the raw data and its analysis.
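To illustrate what “scripted and repeatable” means here, a minimal sketch of an offline zoom by interpolation followed by a contrast stretch. This is not our actual MATLAB/SPIW code; the function name, zoom factor, and percentile cut-offs below are invented for illustration. The point is that every step is a function call with explicit parameters, so the whole pipeline can be rerun from the raw data:

```python
import numpy as np
from scipy import ndimage

def zoom_and_stretch(image, zoom_factor=8, low_pct=2, high_pct=98):
    """Interpolate a small crop up by `zoom_factor`, then stretch contrast.

    Hypothetical helper: parameter values are illustrative, not those used
    in the paper.
    """
    zoomed = ndimage.zoom(image, zoom_factor, order=3)  # cubic interpolation
    lo, hi = np.percentile(zoomed, [low_pct, high_pct])
    stretched = np.clip((zoomed - lo) / (hi - lo), 0.0, 1.0)
    return stretched

# Toy 'raw' crop: a few pixels of random noise stands in for real STM data.
rng = np.random.default_rng(0)
crop = rng.normal(size=(16, 16))
out = zoom_and_stretch(crop)
print(out.shape)  # (128, 128)
```

Because the zoom factor and contrast limits are recorded in the script rather than chosen interactively in a GUI, anyone can verify exactly how a published crop was produced from the archived data.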

Finally, when we finished the paper, we had two options: to wait for the paper to be published before making this announcement, or to upload a pre-print to the arXiv. Our decision to release the pre-print also came from our desire to be as open as possible. A number of news sites have picked up on this controversy, which leads us to understand it is of some interest to the nanoscience community. It seems a shame to delay our analysis more than is necessary, especially considering there may be PhD students across the world wasting their valuable time trying to generate striped particles for their work.

I hope you will read our paper. And I wish you a merry Christmas and a stripe-free New Year.

So happy to read the paper finally! Fantastic job!
Though I do wish the code would run under Octave… we do have to depend on commercial software at times, sadly.

The tone is sharp (as I think it should be) and I cannot understand how the case could be made more clearly. The images in 2004 look nothing like the more recent ones where it is difficult to detect stripes. The comments on simulations were spot on IMHO.

stripy feels a lot like arsenic life now… and that isn’t a good thing. The arsenic life paper (as Randy Schekman recently noted) is still unretracted! This isn’t inconsequential: want a recently published summary of NASA’s scientific achievements?

I bet the underlying reason that both stripy and arsenic life were accepted and remain unretracted is that the peer reviews came back glowing. What good is science when clear, straightforward truths cannot be effectively disseminated?

I am sure all of you are nearly spent on the topic, but one thing to consider is a FOIA request for the peer reviews. This is what was done for arsenic life, but the details regarding what is eligible for FOIA aren’t clear to me (i.e. does it apply to any research done with federal dollars?).

Thanks nanonymous for those insightful comments. Your point about the peer reviews is extremely well made — I would very much like to see the reviewer comments. Both Raphael and I have requested some of the (anonymous) reviews previously but got fairly short shrift from the editors. The FOIA request is a possible route forward but, as you perceptively pointed out, we are indeed “nearly spent” on this topic!

Hi nanonymous,
Thanks for the kind words. I am glad the first stage is over, now onto peer review!

As for FOIA requests, I have no idea if it is eligible. It would be interesting to see the reviews, to give us an idea of how the peer review went. But this is only one case; I think it is more important to campaign for all peer review to be open. Sloppy peer review can cause lots of problems: both publishing things that should never be published, and suggesting rejections which are clearly based on a skim read and miss the point of the entire paper.

There are for sure similarities. High profile papers with obvious flaws which should have never passed peer review. The difficulty/impossibility of correcting the record.

But, there are also massive differences… Only one paper was published. It took hours for Rosie Redfield and many others to take the paper to pieces, to the extent that the paper’s publication in print was delayed so that it could be accompanied by 8 technical comments highlighting its shortcomings. In the case of the stripy nanoparticles saga, we now have over 30 papers published in the ‘top’ IF journals. Publication has continued in spite of ‘stripy revisited’ and in spite of the obvious problems in the new papers.

It is very unlikely that there will be another ‘arsenate life’ paper in a decent journal in the years to come. I fear however that Julian’s wishes for a ‘stripe-free’ 2014 may be overly optimistic…

I believe the “problem” with the peer review of the arsenate (not arsenic :)) paper in Science was that the reviewers were physicists, astrobiologists etc., but not one card-carrying microbiologist, biochemist or organic chemist. The former, had they done a decent job, would have noted the high identity of the putative arsenate-based genome with phosphate DNA genomes, and the latter two would have picked up on the well-established instability of arsenate esters in water. How a paper that bucks such well-established chemistry still stands reflects the problems of the system. I certainly would advise against holding one’s breath with respect to anyone involved, from journals to PI and institutions, taking any sort of action, though if you listen carefully, you may hear the sound of a carpet being lifted and vigorous sweeping.

Two ideas that have come to mind (apologies in advance if I missed them in the manuscript):

1) I am sure you have good reasons but is it worth being explicit as to why unfunctionalized silver nanoparticles were used instead of gold?

2) Is it also worth explaining why you could not get verified gold nanoparticles with the thiols in the 2004 paper? Also, making a clear statement that if the 2004 images can be replicated under controlled STM conditions, that would be a very simple way (as we’ve already discussed on this blog) to essentially render most of the criticisms in this paper invalid. To date you haven’t received nanoparticle samples, nor have associated groups performed this experiment.

1. We needed nanoparticles with a similar size distribution to those used by Stellacci et al but without any functionalisation. We also needed those particles to be stable (i.e. not move during STM scanning), to have unfunctionalized surfaces, *and* to be readily ‘synthesized’ under UHV conditions using the materials available in our UHV system. This combination of factors led us to choose the Ag on C60/Si(111) system.

Deposition of Au on the C60 monolayer used as a template for nanoparticle synthesis in our study doesn’t work — Au forms a silicide with silicon even at room temperature and so any domain boundaries in the C60 monolayer are rapidly ‘attacked’ by the Au. We looked at this in some detail many moons ago. See P. Moriarty, Surf. Sci. Rep. 65 175 (2010) for more details. Please send me an e-mail (philip.moriarty@nottingham.ac.uk) if you don’t have access to that journal.

In any case, the precise type of nanoparticle we used is entirely irrelevant and has no bearing on the conclusions.

We have, however, also looked at conventional passivated Au nanoparticles and carried out a similar study. This will be submitted for publication in due course (I’m hoping that we’ll complete a draft of the paper before the end of this month).

2. The key issue with attempting to replicate the studies in the Nature Materials paper is that we need to get the samples from Stellacci’s group, rather than synthesize the samples ourselves. If we don’t get the samples from that group, then it would be very easy to dismiss our inability to see stripes as being due to difficulties with the synthesis of the ‘stripy’ particles. I really did not want to waste any more time than necessary on a state-of-the-art instrument carrying out STM (and AFM) measurements of Au nanoparticles if those results were simply going to be dismissed by claims that we’d mucked up the synthesis.

Ref. 51 in our paper is included specifically to address this point: “51. We asked on a number of occasions for samples of mixed-ligand-terminated nanoparticles synthesized by the Stellacci group to be provided. These samples were unfortunately not sent to us.”

Francesco Stellacci promised me a year ago that he would send us samples of striped nanoparticles. He subsequently promised repeatedly over the last year that the samples were being prepared and would be dispatched soon. Those samples never arrived.

In any case we have effectively reproduced the images in the Nature Materials 2004 paper — Fig. 3 of our paper shows that stripes precisely like those seen in the Nature Materials 2004 publication can be reproduced on entirely unfunctionalised particles which have a very similar size and size distribution to those studied by Stellacci et al. This effect is “universal” in the sense that it is completely independent of the type of nanoparticle and any functionalisation. (And, again, we have reproduced it on standard thiol-passivated particles as noted in the answer to Q1 above).

I feel that it’s a shame that you didn’t use open source software to do open science, then even those who cannot or do not want to pay for matlab and its onerous license agreement can repeat and build on the work. Why not use “R” ?

Dear Alan,
As a committed open source nerd (I run Linux, and almost the only non-open-source program installed is MATLAB [definitely the only non-free one]) I understand this perspective. However, there is no equal open equivalent for mathematical programming; Octave is the closest, but it is not up to scratch for the work I generally do, which is why I tend to use MATLAB.

I agree that for this project, where the data was always going to be fully open, it would have been nicer to use an open-source program. However, no open source language has the SPM tools needed for this analysis (Gwyddion is open source, but it is GUI based, so our analysis would not be scripted and repeatable, and it doesn’t have all of the features we used). MATLAB only has these abilities because I spent over a year writing SPM code for MATLAB for my own use (this was later released open source as a toolbox called SPIW). It simply was not feasible to redo years of work to give another language the needed SPM tools.

I would love an open source MATLAB equivalent which does everything I do in MATLAB today, but it simply isn’t there. To take the stance of not using MATLAB on principle would detrimentally affect my work. I agree, however, that it is a pity that my use of MATLAB has made closed software a prerequisite for our open code. But, as most academics have access to MATLAB, I think (or at least hope) that no interested parties are shut out.