My previous posts for Raphael’s blog have focussed on critiquing poor methodology and over-enthusiastic data interpretation when it comes to imaging the surface structure of functionalised nanoparticles. This time round, however, I’m in the much happier position of being able to highlight an example of good practice in resolving (sub-)molecular structure where the authors have carefully and systematically used scanning probe microscopy (SPM), alongside image recognition techniques, to determine the molecular termination of Ag nanoparticles.

For those unfamiliar with SPM, the concept underpinning the operation of the technique is relatively straightforward. (The experimental implementation rather less so…) Unlike a conventional microscope, there are no lenses, no mirrors, indeed, no optics of any sort [1]. Instead, an atomically or molecularly sharp probe is scanned back and forth across a sample surface (which is preferably atomically flat), interacting with the atoms and molecules below. The probe-sample interaction can arise from the formation of a chemical bond between the atom terminating the probe and its counterpart on the sample surface, or an electrostatic or magnetic force, or dispersion (van der Waals) forces, or, as in scanning tunnelling microscopy (STM), the quantum mechanical tunnelling of electrons. Or, as is generally the case, a combination of a variety of those interactions. (And that’s certainly not an exhaustive list.)
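The reason STM works so well comes down to the exponential sensitivity of the tunnel current to the tip-sample gap. Here's a back-of-the-envelope Python sketch of that dependence (the 4.5 eV work function and 1 nA prefactor are purely illustrative numbers of my choosing, not values from any particular experiment):

```python
import math

def tunnel_current(gap_nm, work_function_eV=4.5, prefactor_nA=1.0):
    """Toy estimate of STM tunnel current vs. tip-sample gap.

    I ~ I0 * exp(-2 * kappa * d), with kappa = sqrt(2 m phi) / hbar.
    Illustrative only; prefactor_nA is an arbitrary normalisation.
    """
    m_e = 9.109e-31                      # electron mass, kg
    hbar = 1.055e-34                     # J s
    phi = work_function_eV * 1.602e-19   # barrier height, J
    kappa = math.sqrt(2 * m_e * phi) / hbar   # inverse decay length, 1/m
    d = gap_nm * 1e-9
    return prefactor_nA * math.exp(-2 * kappa * d)

# For a typical metal work function the current falls by roughly an
# order of magnitude for every 0.1 nm (1 angstrom) increase in the gap:
ratio = tunnel_current(0.5) / tunnel_current(0.6)
print(f"current ratio for a 1 angstrom gap change: {ratio:.1f}")
```

That single-ångström sensitivity is what makes atomic resolution possible in the first place (and, equally, what makes the technique so unforgiving of instabilities).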

Here’s an example of an STM in action, filmed in our lab at Nottingham for Brady Haran’s Sixty Symbols channel a few years back…

Scanning probe microscopy is my first love in research. The technique’s ability to image and manipulate matter at the single atom/molecule level (and now with individual chemical bond precision) is why many see its invention in the early eighties as the ‘genesis’ of nanoscience and nanotechnology. But with all of that power to probe the nanoscopic, molecular, and quantum regimes come tremendous pitfalls. It is very easy to acquire artefact-ridden images that look convincing to a scientist with little or no SPM experience but that instead arise from a number of common failings in setting up the instrument, from noise sources, or from a hasty or poorly informed choice of imaging parameters. Worse, even relatively seasoned SPM practitioners (including yours truly) can often be fooled. With SPM, it can look like a duck, waddle like a duck, and quack like a duck. But it can too often be a goose…

That’s why I was delighted when Raphael forwarded me a link to “Real-space imaging with pattern recognition of a ligand-protected Ag374 nanocluster at sub-molecular resolution”, a paper published a few months ago by Qin Zhou and colleagues at Xiamen University (China), the Chinese Academy of Sciences, Dalian (China), the University of Jyväskylä (Finland), and the Southern University of Science and Technology, Guangdong (China). The authors have convincingly imaged the structure of the layer of thiol molecules (specifically, tert-butyl benzene thiol) terminating 5 nm diameter silver nanoparticles.

What distinguishes this work from the stripy nanoparticle oeuvre that has been discussed and dissected at length here at Raphael’s blog (and elsewhere) is the degree of care taken by the authors and, importantly, their focus on image reproducibility. Instead of using offline zooms to “post hoc” select individual particles for analysis (a significant issue with the ‘stripy’ nanoparticle work), Zhou et al. have zoomed in on individual particles in real time and have made certain that the features they see are stable and reproducible from image to image. The images below are taken from the supplementary information for their paper and show the same nanoparticle imaged four times over, with negligible changes in the sub-particle structure from image to image.

This is SPM 101. Actually, it’s Experimental Science 101. If features are not repeatable (or, worse, disappear when a number of consecutive images/spectra are averaged), then we should not make inflated claims (or, indeed, any claims at all) on the basis of a single measurement. Moreover, the data are free of the type of feedback artefacts that plagued the ‘classic’ stripy nanoparticle images, and Zhou et al. have worked hard to ensure that the influence of the tip was kept to a minimum.

Given the complexity of the tip-sample interactions, however, I don’t quite share the authors’ confidence in the Tersoff-Hamann approach they use for STM image simulation [2]. I’m also not entirely convinced by their comparison with images of isolated molecular adsorption on single crystal (i.e. planar) gold surfaces because of exactly the convolution effects they point towards elsewhere in their paper. But these are relatively minor points. The imaging and associated analysis are carried out to a very high standard, and their (sub)molecular resolution images are compelling.

A-C above are STM data, while D-F are constant-height atomic force microscope images [3], of thiol-passivated nanoparticles (synthesised by Nicolas Goubet of Pileni’s group), acquired at 78 K. (Zhou et al. similarly acquired data at 77 K, but they also went down to liquid helium temperatures.) Note that while we could achieve sub-nanoparticle resolution in D-F (a sequence of images in which the tip height is systematically lowered), the images lacked the impressive reproducibility achieved by Zhou et al. In fact, we found that even though we were ostensibly in scanning tunnelling microscopy mode for images such as those shown in A-C (and thus, supposedly, not in direct contact with the nanoparticle), the tip was actually penetrating into the terminating molecular layer, as revealed by force-distance spectroscopy in atomic force microscopy mode.

The other exciting aspect of Zhou et al.’s paper is that they use pattern recognition to ‘cross-correlate’ experimental and simulated data. There is an increasingly fruitful overlap between computer science and scanning probe microscopy in the area of image classification/recognition, and Zhou and co-workers have helped nudge nanoscience a little further in this direction. Here at Nottingham we’re particularly keen on the machine learning/AI-scanning probe interface, as discussed in a recent Computerphile video…

Given the number of posts over the years at Raphael’s blog regarding a lack of rigour in scanning probe work, I am pleased, and very grateful, to have been invited to write this post to redress the balance just a little. SPM, when applied correctly, is an exceptionally powerful technique. It’s a cornerstone of nanoscience, and the only tool we have that allows both real space imaging and controlled modification right down to the single chemical bond limit. But every tool has its limitations. And the tool shouldn’t be held responsible if it’s misapplied…

[1] Unless we’re talking about scanning near field optical microscopy (SNOM). That’s a whole new universe of experimental pain…

[2] This is the “zeroth” order approach to simulating STM images from a calculated density of states. It’s a good starting point (and for complicated systems like a thiol-terminated Ag374 particle probably also the end point due to computational resource limitations) but it is certainly a major approximation.
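For reference, the central Tersoff-Hamann result is that the tunnel current at low bias is proportional to the sample's local density of states (LDOS), evaluated at the position of the tip apex and integrated over the bias window:

I(\mathbf{r}_0, V) \propto \int_0^{eV} \rho_s(\mathbf{r}_0, E_F + \epsilon)\, \mathrm{d}\epsilon

In other words, the tip is treated as a featureless, spherically symmetric (s-wave) probe, which is precisely why it is only a zeroth-order approximation.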

[3] Technically, dynamic force microscopy using a qPlus sensor. See this Sixty Symbols video for more information about this technique.

4. Clinical particulars

4.1 Therapeutic indications

4.2 Posology and method of administration

TWITTIVIR 5% w/w cream is suitable for adults, children of 13 years of age and above, and the elderly. TWITTIVIR 5% w/w cream is for external use only and should not be applied to broken skin, mucous membranes or near the eyes.

4.3 Contraindications

TWITTIVIR 5% w/w cream is contra-indicated in subjects with known hypersensitivity to the product and its components. (group 1)

4.9 Overdose

There are rare cases of overdosage of TWITTIVIR 5% w/w cream, usually in patients from group 3 above. The effects can be serious, leading to grumpiness and even, in extreme cases (in parents), child neglect. In such cases, the treatment should be immediately stopped.

Here is a short statement in response to Ong and Stellacci. Since theirs was a response to Stirling et al., Julian Stirling was invited to referee their submission (report).

We are pleased that Ong and Stellacci have responded to our paper, Critical assessment of the evidence for striped nanoparticles, PLoS ONE 9 e108482 (2014). Each of their rebuttals of our critique has, however, already been addressed quite some time ago: in our original paper, in the extensive PubPeer threads associated with that paper (and its preprint arXiv version), and/or in a variety of blog posts. Indeed, arguably the strongest evidence against the claim that highly ordered stripes form in the ligand shell of suitably-functionalised nanoparticles comes from Stellacci and co-authors’ own recent work, published shortly after we submitted our PLOS ONE critique. This short and simple document compares the images acquired from ostensibly striped nanoparticles with control particles where, for the latter (and as claimed throughout the work of Stellacci et al.), stripes should not be present. We leave it to the reader to draw their own conclusions.

At this point, we believe that little is to be gained from continuing our debate with Stellacci et al. We remain firmly of the opinion that the experimental data to date show no evidence for the formation of the “highly ordered” striped morphology claimed throughout the work of Stellacci and co-workers, and, for the reasons we have detailed at considerable length previously, we do not find the counter-claims in Ong and Stellacci in any way compelling. We have therefore clearly reached an impasse. It is thus now up to the nanoscience community to come to its own judgement regarding the viability of the striped nanoparticle hypothesis. As such, we would very much welcome STM studies from independent groups not associated with any of the research teams involved in the controversy to date. For completeness, we append below the referee reports which JS submitted on Ong and Stellacci’s manuscript.

Lauren K. Wolf has written a nice overview of the stripy nanoparticle controversy for Chemical & Engineering News, the weekly magazine published by the American Chemical Society. It starts like this:

AS TRUTH SEEKERS, scientists often challenge one another’s work and debate over the details. At the first-ever international scientific conference, for instance, leading chemists argued vociferously over how to define a molecule’s formula. A lot of very smart people at the meeting, held in Germany in 1860, insisted that water was OH, while others fought for H2O.

That squabble might seem tame compared with a dispute that’s been raging in the nanoscience community during the past decade. […]

Read it all here… if you have access. If you don’t, email me and I will send you a pdf.

At the time of writing, there are seventy-eight comments on the paper, quite a few of which are rather technical and dig down into the minutiae of the many flaws in the striped nanoparticle ‘oeuvre’ of Francesco Stellacci and co-workers. It is, however, now getting very difficult to follow the thread over at PubPeer, partly because of the myriad comments labelled “Unregistered Submission” – it has been suggested that PubPeer consider modifying their comment labelling system – but mostly because of the rather circular nature of the arguments and the inability to incorporate figures/images directly into a comments thread to facilitate discussion and explanation. The ease of incorporating images, figures, and, indeed, video in a blog post means that a WordPress site such as Raphael’s is a rather more attractive proposition when making particular scientific/technical points about Stellacci et al.’s data acquisition/analysis protocols. That’s why the following discussion is posted here, rather than at PubPeer.

Unwarranted assumptions about unReg?

Julian Stirling, the lead author of the “Critical assessment…” paper, and I have spent a considerable amount of time and effort over the last week addressing the comments of one particular “Unregistered Submission” at PubPeer who, although categorically stating right from the off that (s)he was in no way connected with Stellacci and co-workers, nonetheless has remarkably in-depth knowledge of a number of key papers (and their associated supplementary information) from the Stellacci group.

It is important to note that although our critique of Stellacci et al.’s data has, to the best of our knowledge, attracted the greatest number of comments for any paper at PubPeer to date, this is not indicative of widespread debate about our criticism of the striped nanoparticle papers (which now number close to thirty). Instead, the majority of comments at PubPeer are very supportive of the arguments in our “Critical assessment…” paper. It is only a particular commenter, who does not wish to log into the PubPeer site and is therefore labelled “Unregistered Submission” every time they post (I’ll call them unReg from now on), that is challenging our critique.

We have dealt repeatedly, and forensically, with a series of comments from unReg over at PubPeer. However, although unReg has made a couple of extremely important admissions (which I’ll come to below), they continue to argue, on entirely unphysical grounds, that the stripes observed by Stellacci et al. in many cases are not the result of artefacts and improper data acquisition/analysis protocols.

unReg’s persistence in attempting to explain away artefacts could be due to a couple of things: (i) we are being subjected to a debating approach somewhat akin to the Gish gallop. (My sincere thanks to a colleague – not at Nottingham, nor, indeed, in the UK – who has been following the thread at PubPeer and suggested this to us by e-mail. Julian also recently raised it in a comment elsewhere at Raphael’s blog which is well worth reading); and/or (ii) our assumption throughout that unReg is familiar with the basic ideas and protocols of experimental science, at least at undergraduate level, may be wrong.

Because we have no idea of unReg’s scientific background – despite a couple of commenters at PubPeer explicitly asking unReg to clarify this point – we assumed that they had a reasonable understanding of basic aspects of experimental physics such as noise reduction, treatment of experimental uncertainties, accuracy vs precision etc… But Julian and I realised yesterday afternoon that perhaps the reason we and unReg keep ‘speaking past’ each other is because unReg may well not have a very strong or extensive background in experimental science. Their suggestion at one point in the PubPeer comments thread that “the absence of evidence is not evidence of absence” is a rather remarkable statement for an experimentalist to make. We therefore suspect that the central reason why unReg is not following our arguments is their lack of experience with, and absence of training in, basic experimental science.

As such, I thought it might be a useful exercise – both for unReg and any students who might be following the debate – to adopt a slightly more tutorial approach in the discussion of the issues with the stripy nanoparticle data so as to complement the very technical discussion given in our paper and at PubPeer. Let’s start by looking at a selection of stripy nanoparticle images ‘through the ages’ (well, over the last decade or so).

The Evolution of Stripes: From feedback loop ringing to CSI image analysis protocols

The images labelled 1 – 12 below represent the majority of the types of striped nanoparticle image published to date. (I had hoped to put together a 4 x 4 or 4 x 5 matrix of images but, due to image re-use throughout Stellacci et al.’s work, there aren’t enough separate papers to do that.)

Stripes across the ages

Putting the images side by side like this is very instructive. Note the distinct variation in the ‘visibility’ of the stripes. Stellacci and co-workers will claim that this is because the terminating ligands are not the same on every particle. That’s certainly one interpretation. Note, however, that images 1, 2, 4, and 11 each have the same type of octanethiol-mercaptopropionic acid (2:1) termination and that we have shown, through an analysis of the raw data, that images #1 and #11 result from a scanning tunnelling microscopy artefact known as feedback loop ringing (see p.73 of this scanning probe microscopy manual).

Moreover, the inclusion of Image #5 above is not a mistake on my part – I’ll leave it to the reader to identify just where the stripes are supposed to lie in this image. Images #10 and #12 similarly represent a challenge for the eagle-eyed reader, while Image #4 warrants its own extended discussion below because it forms a cornerstone of unReg’s argument that the stripes are real. Far from supporting the stripes hypothesis, however, Stellacci et al’s own analysis of Image #4 contradicts their previous measurements and arguments (see “Fourier analysis or should we use a ruler instead?” below).

What is exceptionally important to note is that, as we show in considerable detail in “Critical assessment…”, a variety of artefacts and improper data acquisition/analysis protocols – and not just feedback loop ringing – are responsible for the variety of striped images seen above. For those with no experience in scanning probe microscopy, this may seem like a remarkable claim at first glance, particularly given that those striped nanoparticle images have led to over thirty papers in some of the most prestigious journals in nanoscience (and, more broadly, in science in general). However, we justify each of our claims in extensive detail in Stirling et al. The key effects are as follows:

– The “CSI” effect. We know from access to (some of) the raw data that a very common approach to STM imaging in the Stellacci group (up until ~ 2012) was to image very large areas with relatively low pixel densities and then rely on offline zooming into areas no more than a few tens of pixels across to “resolve” stripes. This ‘CSI’ approach to STM is unheard of in the scanning probe community because if we want to get higher resolution images, we simply reduce the scan area. The Stellacci et al. method can be used to generate stripes on entirely unfunctionalised particles, as shown here.
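To see how an offline zoom can conjure apparent structure out of nothing, here's a toy numpy sketch (entirely my own construction, not code from any of the papers in question): start from pure uncorrelated noise at low pixel density, then apply the sort of bilinear interpolation an offline zoom performs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "large area" scan at low pixel density: pure uncorrelated noise,
# standing in for a featureless (unfunctionalised) particle surface.
coarse = rng.normal(size=(16, 16))

def upscale(img, factor):
    """Bilinear offline zoom, as image-processing software might apply it."""
    ny, nx = img.shape
    y = np.linspace(0, ny - 1, ny * factor)
    x = np.linspace(0, nx - 1, nx * factor)
    # Interpolate along the slow axis, then along the fast axis.
    tmp = np.array([np.interp(y, np.arange(ny), img[:, j]) for j in range(nx)]).T
    return np.array([np.interp(x, np.arange(nx), tmp[i]) for i in range(tmp.shape[0])])

zoomed = upscale(coarse, 16)   # now 256 x 256 and smoothly varying

# Interpolation imposes a correlation length of ~1 coarse pixel, so the
# zoomed image shows smooth blobs and ridges that the raw data never contained.
```

The raw 16 x 16 patch has essentially zero correlation between adjacent pixels; the interpolated version is almost perfectly correlated pixel-to-pixel, which is exactly the kind of smooth, suggestive texture the eye then reads as "features".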

– Observer bias. The eye is remarkably adept at picking patterns out of uncorrelated noise. Fig. 9 in Stirling et al. demonstrates this effect for ‘striped’ nanoparticles. I have referred to this post from my erstwhile colleague Peter Coles repeatedly throughout the debate at PubPeer. I recommend that anyone involved in image interpretation read Coles’ post.

In “Critical assessment…” we show, via a Fourier approach, that the measurements of stripe spacing in papers published by Stellacci et al. in the period from 2006 to 2009 – measurements subsequently used to claim that the stripes do not arise from feedback loop ringing – are substantially in error. We are confident in our results here because of a clear peak in our Fourier-space data (see Figures S1 and S2 of the paper).

Fabio Biscarini and co-workers, in collaboration with Stellacci et al., have attempted to use Fourier analysis to calculate the ‘periodicity’ of the nanoparticle stripes. They take the Fourier transform of the raw images, averaged in the slow scan direction. No peak is visible in this Fourier-space data, even when plotted on a logarithmic scale in an attempt to increase contrast/visibility. Instead, the Fourier-space data simply decay, with a couple of plateaus. They claim – erroneously, for reasons we cover below – that the corners where the second plateau meets the continuing decay (called a “shoulder” by Biscarini et al.) indicate the stripe spacing. To locate these shoulders they apply a fitting method.
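For readers who want to play with this themselves, here is a minimal numpy sketch of a slow-scan-averaged power spectrum (my own toy code, not Biscarini et al.'s analysis). A genuinely periodic image produces a sharp, unambiguous peak; smoothed random noise produces only a featureless decay whose roll-off reflects the smoothing, not any stripe spacing:

```python
import numpy as np

def line_averaged_psd(image):
    """PSD of each fast-scan line, averaged along the slow-scan direction."""
    spectra = np.abs(np.fft.rfft(image, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(1)
n = 256
x = np.arange(n)

# Image A: genuine 8-pixel periodicity plus noise -> sharp PSD peak at k = n/8
striped = np.sin(2 * np.pi * x / 8)[None, :] + 0.5 * rng.normal(size=(64, n))

# Image B: smoothed random noise -> monotonic-ish decay with no peak, though
# 'shoulders' appear wherever the smoothing kernel rolls off
speckle = rng.normal(size=(64, n))
kernel = np.ones(5) / 5
speckle = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, speckle)

psd_a = line_averaged_psd(striped)
psd_b = line_averaged_psd(speckle)

peak_a = np.argmax(psd_a[1:]) + 1   # ignore the DC component
print("stripe peak at spatial frequency index:", peak_a)  # expect n // 8 = 32
```

When a periodicity is really there, you do not need a seven-parameter fit to find it: the peak towers over the background.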

We describe in detail in “Critical assessment…” that not only is the fitting strategy used to extract the spatial frequencies highly questionable – a seven free-parameter fit to selectively ‘edited’ data is always going to be somewhat lacking in credibility – but that the error bars on the spatial frequencies extracted are underestimated by a very large amount.

Moreover, Biscarini et al. claim the following in the conclusions of their paper:

“The analysis of STM images has shown that mixed-ligand NPs exhibit a spatially correlated architecture with a periodicity of ∼1 nm that is independent of the imaging conditions and can be reproduced in four different laboratories using three different STM microscopes. This PSD [power spectral density; i.e. the modulus squared of the Fourier transform] analysis also shows…”

Note that the clear, and entirely misleading, implication here is that use of the power spectral density (PSD – a way of representing the Fourier space data) analysis employed by Biscarini et al. can identify “spatially correlated architecture”. Fig. 10 of our “Critical assessment…” paper demonstrates that this is not at all the case: the shoulders can equally well arise from random speckling.

This unconventional approach to Fourier analysis is not even internally consistent with measurements of stripe spacings as identified by Stellacci and co-workers. Anyone can show this using a pen, a ruler, and a print-out of the images of stripes shown in Fig. 3 of Ong et al. It’s essential to note that Ong et al. claim that they measure a spacing of 1.2 nm between the ‘stripes’; this 1.2 nm figure is very important in terms of consistency with the data in earlier papers. Indeed, over at PubPeer, unReg uses it as a central argument of the case for stripes:

“… the extracted characteristic length from the respective fittings results in a characteristic length for the stripes of 1.22 +/- 0.08. This is close to the 1.06 +/-0.13 length for the stripes of the images in 2004 (Figure 3a in Biscarini et al.). Instead, for the homoligand particles, the number is much lower: 0.76 +/- 0.5 [(sic). unReg means ‘+/- 0.05’ here. The unit is nm], as expected. So the characteristic lengths of the high resolution striped nanoparticles of 2013 and the low resolution striped nanoparticles of 2004 match within statistical error, ***which is strong evidence that the stripe features are real.***”

Notwithstanding the issue that the PSD analysis is entirely insensitive to the morphology of the ligands (i.e. it cannot distinguish between stripes and a random morphology), and can be abused to give a wide range of results, there’s a rather simpler and even more damaging inconsistency here.

A number of researchers in the group here at Nottingham have repeated the ‘analysis’ in Ong et al. Take a look at the figure below. (Thanks to Adam Sweetman for putting this figure together.) We have repeated the measurements of the stripe spacing for Fig. 3 of Ong et al. and we consistently find that, instead of a spacing of 1.2 nm, the separation of the ‘stripes’, using the arrows placed on the image by Ong et al. themselves, has a mean value of 1.6 nm (± 0.1 nm). What is also interesting to note is that the placement of the arrows “to guide the eye” does not particularly agree with a placement based on the “centre of mass” of the features identified as stripes. In that case, the separation is far from regular.

We would ask that readers of Raphael’s blog – if you’ve got this far into this incredibly long post! – repeat the measurement to convince yourself that the quoted 1.2 nm value does not stand up to scrutiny.

So, not only does the PSD analysis carried out by Biscarini et al. fail to recover the real space value for the stripe spacing (leaving aside the question of just how those stripes were identified), but there is a significant difference between the stripe spacing claimed in the 2004 Nature Materials paper and that in the 2013 papers. Both of these points severely undermine the case for stripy nanoparticles. Moreover, the inability of Ong et al. to report the correct spacing for the stripes from simple measurements of their own STM images raises significant questions about the reliability of the other data in their paper.

As the title of this post says, whither stripes?

Reducing noise pollution

A very common technique in experimental science for increasing the signal-to-noise ratio (SNR) is signal averaging. I have spent many long hours at synchrotron beamlines while we repeatedly scanned the same energy window, watching as a peak gradually emerged from the noise. But averaging is of course not restricted to synchrotron spectroscopy – practically every area of science, including SPM, can benefit from simply summing a signal over the course of time.
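For the students following along, here is a ten-line Python version of that kind of demo (a sketch with made-up numbers, not real beamline data): a spectral peak whose single-scan SNR is well below one emerges cleanly after averaging 100 scans, with the residual noise falling as sigma/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(42)
n_points, n_scans = 500, 100

x = np.linspace(-5, 5, n_points)
peak = np.exp(-x**2)                 # the 'true' spectral peak, amplitude 1

# Each individual scan buries the peak in noise of sigma = 2 (SNR well below 1)
scans = peak + 2.0 * rng.normal(size=(n_scans, n_points))

average = scans.mean(axis=0)

# Residual noise on the average should fall as sigma / sqrt(N): 2 / sqrt(100) = 0.2
residual = np.std(average - peak)
print(f"single-scan noise: 2.0, residual noise after averaging: {residual:.2f}")
```

The flip side is the point made by Peer 7 below: if a feature does not grow out of the noise as N increases, but instead washes away, the experiment is telling you it was never there.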

A particularly frustrating aspect of the discussion at PubPeer, however, has been unReg’s continued assertion that even though summing of consecutive images of the same area gives rise to completely smooth particles (see Fig. 5(k) of “Critical assessment…”), this does not mean that there is no signal from stripes present in the scans. This claim has puzzled not just Julian and myself, but a number of other commenters at PubPeer, including Peer 7:

“If a feature can not be reproduced in two successive equivalent experiments then the feature does not exist because the experiment is not reproducible. Otherwise how do you chose between two experiments with one showing the feature and the other not showing it? Which is the correct one ? Please explain to me.

Furthermore, if a too high noise is the cause of the lack of reproducibility than the signal to noise ratio is too low and once again the experiment has to be discarded and/or improved to increase this S/N. Repeating experiments is a good way to do this and if the signal does not come out of the noise when the number of experiment increases than it does not exist.

This is Experimental Science 101 and may (should) seem obvious to everyone here…”

I’ve put together a short video of a LabVIEW demo I wrote for my first year undergrad tutees to show how effective signal averaging can be. I thought it might help to clear up any misconceptions…

The Radon test

There is yet another problem, however, with the data from Ong et al. which we analysed in the previous section. This one is equally fundamental. While Ong et al. have drawn arrows to “guide the eye” to features they identify as stripes (and we’ve followed their guidelines when attempting to identify those ‘stripes’ ourselves), those stripes really do not stand up tall and proud like their counterparts ten years ago (compare images #1 and #4, or compare #4 and #11 in that montage above).

Julian and I have stressed to unReg a number of times that it is not enough to “eyeball” images and pull out what you think are patterns. Particularly when the images are as noisy as those in Stellacci et al.’s recent papers, it is essential to adopt a more quantitative, or at least less subjective, approach. In principle, Fourier transforms should be able to help with this, but only if they are applied robustly. If spacings identified in real space (as measured using a pen and ruler on a printout of an image) don’t agree with the spacings measured by Fourier analysis – as for the data of Ong et al. discussed above – then this really should sound warning bells.

One method of improving objectivity in stripe detection is to use a Radon transform (which for reasons I won’t go into here – but Julian may well in a future post! – is closely related to the Fourier transform). Without swamping you in mathematical detail, the Radon transform is the projection of the intensity of an image along a radial line at a particular angular displacement. (It’s important in, for example, computerised tomography). In a nutshell, lines in an image will show up as peaks in the Radon transform.
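To make this concrete, here is a crude, self-contained numpy implementation (a pedagogical sketch of my own, much simpler than the code Julian actually used): rotate the image through a range of angles, project by summing along columns, and look for the angle at which the projection becomes 'spiky'.

```python
import numpy as np

def radon(image, angles_deg):
    """Crude Radon transform: for each angle, rotate the image grid
    (nearest-neighbour sampling) and sum the intensity along columns.
    A straight line in the image becomes a peak in the projection at
    the matching angle."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n].astype(float) - c
    out = np.zeros((len(angles_deg), n))
    for i, a in enumerate(angles_deg):
        t = np.deg2rad(a)
        xr = (np.cos(t) * xx + np.sin(t) * yy + c).round().astype(int)
        yr = (-np.sin(t) * xx + np.cos(t) * yy + c).round().astype(int)
        ok = (xr >= 0) & (xr < n) & (yr >= 0) & (yr < n)
        rot = np.zeros_like(image)
        rot[ok] = image[yr[ok], xr[ok]]
        out[i] = rot.sum(axis=0)    # project along columns
    return out

# Synthetic 'striped' image: vertical stripes every 8 pixels
n = 64
img = np.zeros((n, n))
img[:, ::8] = 1.0

angles = np.arange(0, 180, 1)
sinogram = radon(img, angles)

# The projection is 'spikiest' when we project along the stripes (i.e. at
# 0 degrees for vertical stripes); the variance of each projection picks
# this out objectively, with no eyeballing required.
best = angles[np.argmax(sinogram.var(axis=1))]
print("stripe orientation recovered:", best, "degrees")
```

For real data one would use a properly interpolated Radon transform (e.g. scikit-image's `radon`), but even this crude version recovers the stripe orientation of the synthetic image unambiguously, which is precisely the point: genuine stripes announce themselves.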

So what does it look like in practice, and when applied to stripy nanoparticle images? (All of the analysis and coding associated with the discussion below are courtesy of Julian yet again). Well, let’s start with a simulated stripy nanoparticle image where the stripes are clearly visible – that’s shown on the left below and its Radon transform is on the right.

Note the series of peaks appearing at an angle of ~ 160°. This corresponds to the angular orientation of the stripes. The Radon transform does a good job of detecting the presence of stripes and, moreover, objectively yields the angular orientation of the stripes.

What happens when we feed the purportedly striped image from Ong et al. (i.e. Image #4 in the montage) into the Radon transform? The data are below. Note the absence of any peaks anywhere near the angular orientation which Ong et al. assigned to the stripes (i.e. ~ 60°; see image on lower left below)…

Hyperempiricism

If anyone’s still left reading out there at this point, I’d like to close this exceptionally lengthy post by quoting from Neuroskeptic’s fascinating and extremely important “Science is Interpretation” piece over at the Discover magazine blogging site:

“The idea that new science requires new data might be called hyperempiricism. This is a popular stance among journal editors (perhaps because it makes copyright disputes less likely). Hyperempiricism also appeals to scientists when their work is being critiqued; it allows them to say to critics, “go away until you get some data of your own”, even when the dispute is not about the data, but about how it should be interpreted.”

Meanwhile, back at PubPeer, unReg has suggested that we should “… go back to the lab and do more work”.

This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham

Since the publication of the ACS Nano and Langmuir papers to which Mathias Brust refers in the previous post, I have tried not to get drawn into posting comments on the extent to which the data reported in those papers ‘vindicates’ previous work on nanoparticle stripes by Francesco Stellacci’s group. (I did, however, post some criticism at ChemBar, which I note was subsequently uploaded, along with comments from Julian Stirling, at PubPeer). This is because we are working on a series of experimental measurements and re-analyses of the evidence for stripes to date (including the results published in the ACS Nano and Langmuir papers) and would very much like to submit this work before the end of the year.

Mathias’ post, however, has prompted me to add a few comments in the blogosphere, courtesy of Rapha-z-Lab.

It is quite remarkable that the ACS Nano and Langmuir papers are seen by some to provide a vindication of previous work by the Stellacci group on stripes. I increasingly feel as if we’re participating in some strange new nanoscale ‘reimagining’ of The Emperor’s New Clothes! Mathias clearly and correctly points out that the ACS Nano and Langmuir papers published earlier this year provide no justification for the earlier work on stripes. Let’s compare and contrast an image from the seminal 2004 Nature Materials paper with Fig. S7 from the paper published in ACS Nano earlier this year…

Note that the image on the right above is described in the ACS Nano paper as “reproducing” high resolution imaging of stripes acquired in other labs. What is particularly important about the image on the right is that it was acquired under ultrahigh vacuum (UHV) conditions and at a temperature of 77 K by Christoph Renner’s group in Geneva. UHV and 77 K operation should give rise to extremely good instrumental stability and provide exceptionally clear images of stripes. Moreover, Renner is a talented and highly experienced probe microscopist. And yet, nothing even vaguely resembling the types of stripes seen in the image on the left is observed in the STM data. It’s also worth noting that the image from Renner’s group features in the Supplementary Information and not the main paper.

Equally remarkable is that the control sample discussed in the ACS Nano paper (NP3) shows features which are, if anything, much more like stripes than the so-called stripy particles. But the authors don’t mention this. I’ve included a comparison below of Fig. 5(c) from the ACS Nano paper with a contrast-enhanced version. I’ll leave it to the reader to make up their own mind as to whether or not there is greater evidence for stripe formation in the image shown on the right above, or in the image shown on the right below…

Finally, the authors neglect any consideration at all of convolution between the tip structure and the sample structure. One can’t just assume that the tip structure plays no role in the image formation mechanism – scanning probe microscopy is called scanning probe microscopy for a reason. This is particularly the case when the features being imaged are likely to have a comparable radius of curvature to the tip.
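For anyone who wants a feel for how severe this effect can be, here is a one-dimensional toy model (the dimensions are illustrative numbers of my own, not data from either paper). To zeroth order, the recorded topograph is the grey-scale dilation of the surface by the inverted tip profile, so a narrow feature imaged with a blunt tip appears several times wider than it really is:

```python
import numpy as np

def dilate(surface, tip):
    """1-D grey-scale dilation: the height recorded at each x is the
    highest point at which the (inverted) tip can sit without touching
    the surface. This is the standard geometric model of tip-sample
    convolution in SPM."""
    n, m = len(surface), len(tip)
    half = m // 2
    padded = np.pad(surface, half, constant_values=-np.inf)
    return np.array([np.max(padded[i:i + m] + tip) for i in range(n)])

x = np.linspace(-10, 10, 401)                    # lateral position, nm
surface = np.where(np.abs(x) < 0.5, 1.0, 0.0)    # ~1 nm wide, 1 nm tall feature

# Spherical tip apex of radius R: parabolic profile -(u^2 / 2R) near the apex
R = 5.0                                          # tip radius, nm
tip_u = np.linspace(-5, 5, 201)
tip = -(tip_u ** 2) / (2 * R)

image = dilate(surface, tip)

dx = x[1] - x[0]
true_width = (surface > 0.5).sum() * dx
apparent_width = (image > 0.5).sum() * dx
print(f"true width ~{true_width:.2f} nm, imaged width ~{apparent_width:.2f} nm")
```

With a tip radius comparable to (or larger than) the feature itself, the apparent width is dominated by the tip, not the sample, which is exactly why convolution cannot simply be waved away when imaging ligands on a highly curved nanoparticle.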

I could spend quite a considerable amount of time discussing other deficiencies in the analyses in the Langmuir and ACS Nano papers but we’ll cover this at length in the paper we’re writing.