
The CircFacts Blog Page

This is a blog page with the aim of exchanging ideas and views, so if you have any comments or ideas, please post your message below. Replies will be posted on the site as soon as possible after submission. So please go ahead and let us know what you think!

Not all studies are equal: Comments on the quality of scientific evidence.

By Stephen Moreton PhD. Posted January 2018

The peer-reviewed medical scientific literature, whilst far from perfect, is much better than the intactivists’ favorite sources of information (commonly social media, blogs, agenda-driven websites, and personal testimonies). That is not to say there are no issues with the literature. There are. Researchers are under pressure to produce publications to further their careers. And some journals are more interested in making profits than in academic standards, so their peer-review may be lax or non-existent, accepting any paper no matter how sloppy, so long as the authors pay a fee.

A detailed discussion of these issues is beyond the scope of this website; those interested may find the article “Why most published research findings are false” by Ioannidis (2005) illuminating, if a little technical, although any reading of it should be accompanied by the caution of Ingraham (2010). The title is as provocative as it is misleading, lending itself to being seized upon by quacks and charlatans to undermine scientific opposition to their nonsense. As Ingraham explains:

“What Ioannidis really says is much less ominous: he argues that it should take rather a lot of good quality and convergent scientific evidence before we can be reasonably sure of a “scientific fact,” and he presents good (scientific!) evidence that a lot of so-called conclusions are premature, not as ready for prime time as we would hope. Any supposedly large treatment effect in medical research is probably exaggerated, and “when additional trials are performed, the effect sizes become typically much smaller,” so “well-validated large effects are uncommon.” Extremely uncommon, in fact.

It’s okay for science to work like this. Ioannidis did not mean that science is broken or deeply flawed.”

And Ingraham is right. This is how science advances. Ideas are tested, verified or falsified, and gradually a growing body of evidence begins to point in one direction. Preliminary studies lead to bigger, better ones. If the preliminary findings fail to be confirmed they are abandoned. With replication the findings gain credibility and, eventually, a consensus emerges.

This is well illustrated in the development of the discovery that circumcision protects against female to male HIV transmission. Initial speculative suggestions that circumcision may have a protective effect were followed by ecological studies indicating that this was so, but with anomalies and uncertainties persisting. But, by the early 2000s, the quantity of such studies had grown to the extent that the hypothesis could not be ignored, and calls were being made to consider implementation of circumcision as a preventative measure.

But still there were doubters so, to settle the matter, three randomized controlled trials were conducted. These proved the hypothesis to be correct to the satisfaction of all the professional bodies dealing with the epidemic. In short, after multiple studies, most but not all, pointing in one direction, followed by high quality evidence from three independent studies replicating each other and pointing in the same direction as the majority of previous studies, a consensus emerged: circumcision protects against female to male HIV transmission. This is how science proceeds. It does not jump to conclusions based on single preliminary studies, but on bodies of evidence, multiple studies building on, and replicating, what has gone before, until a consensus is reached, even if a few die-hard individuals remain stuck in denial. A point well-made in a blog post here: https://forthesakeofscience.com/2017/03/10/science-moves-on-bodies-of-evidence/

As alluded to at the start of this post, not all medical journals are equal. There are good ones and bad ones. Again, a detailed coverage of the issues is not the job of this website. Smith (2006) makes some pertinent points, and there has been much discussion of late about “predatory journals” that publish any nonsense, so long as they receive a fee. A rough-and-ready guide to a journal’s status is its impact factor (the higher the better), but it has its limitations, as many sound journals are simply not tracked, and even high-ranked journals can make mistakes.

Setting aside issues of a journal’s reliability, there is, thankfully, an easy-to-understand, if rather rule-of-thumb, guide to a paper’s quality: the hierarchy of evidence. From most to least reliable, it runs roughly: systematic reviews and meta-analyses; randomized controlled trials; cohort studies; case-control studies; cross-sectional and ecological studies; case series and reports; and expert opinion.

What the hierarchy conveniently summarizes is that not all study designs are equal. Some are inherently better quality than others, being less subject to biases and pitfalls that can affect any research endeavor. Generally speaking, studies high up the hierarchy are more reliable than ones lower down, being less subject to such things as selection bias and confounding that may skew results.

It is not infallible. One can get well-designed ecological studies (relatively low down the hierarchy) that do detect genuine effects. After all, it was ecological studies that indicated there was an association between foreskins and HIV, something later confirmed by randomized controlled trials (near the top of the hierarchy) and verified by follow-up systematic reviews and meta-analyses (right at the top). And one can get badly done meta-analyses, whether through incompetence, or by the author abusing their statistical expertise to subtly manipulate the data to support a desired conclusion. An example of a bad meta-analysis is intactivist pediatrician Robert S. Van Howe’s infamous one “proving” that circumcision does not protect against HIV. It later became, literally, a textbook example of how not to do a meta-analysis. The same author has gone on to carry out several more dubious “meta-analyses”, and “meta-regression” analyses, on circumcision-related topics, all of which have been shot down by experts: Moses et al. (1999); O’Farrell & Egger (2000); Castellsagué et al. (2007); Waskett et al. (2009); Morris et al. (2014); Morris et al. (2015); Morris et al. (2017). Beware of single-author meta-analyses by individuals with an agenda.
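For readers unfamiliar with what a meta-analysis actually computes, the core operation is pooling per-study effect estimates, weighting each by its precision. A minimal fixed-effect (inverse-variance) sketch follows; the effect estimates and standard errors in it are purely illustrative numbers, not data from any study discussed on this page:

```python
import math

# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The log-odds-ratio estimates and standard errors below are
# hypothetical illustrations only.

def pool_fixed_effect(estimates, std_errors):
    """Pool per-study effect estimates, weighting each by 1/SE^2."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies, each reporting a log odds ratio and its SE:
log_ors = [-0.9, -0.6, -0.75]
ses = [0.30, 0.25, 0.40]
pooled, pooled_se = pool_fixed_effect(log_ors, ses)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(math.exp(pooled))  # pooled odds ratio (a value below 1 = protective)
```

Because the pooled estimate is entirely determined by the per-study inputs, inappropriate “corrections” to those inputs can steer the conclusion, which is why the critiques listed above matter.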

With these caveats in mind, should many studies on a topic exist, some pointing this way, some pointing that way, and others pointing somewhere in between, one has two choices. Firstly, one can simply look at what the majority of studies say and, secondly, one can look at what those in the top tiers of the hierarchy of quality say. Taking this approach with the effect of circumcision on sexual function and pleasure, for example, one finds that the great majority of studies find no overall effect, but with a few finding a negative effect, and a few finding a positive one (See: https://www.facebook.com/CircumcisionResource/photos/a.735986419837365.1073741827.712201812215826/913150462120959/?type=3 ). Narrowing one’s focus to only those studies in the top tiers (case-control and upwards) then every such study finds either no effect or a positive one. So, given that science moves on bodies of evidence, the conclusion is clear: circumcision has no negative effect on sexual function and pleasure.

Sometimes the only studies available are of low to middle rank. Sometimes there may only be one or two on a topic. In the absence of anything better, that is all one has to go on. But, where possible, throughout this website, preference is given to studies in the upper tiers. We pick studies on the basis of quality. Intactivists pick them on the basis of whether or not they fit their agenda.

Expertise, ideology and Robert S. Van Howe: A skeptical look at the works of an anti-circumcision crusader

By Stephen Moreton, PhD. Posted March 2018.

Summary

Robert S. Van Howe is by far the most widely published academic in the intactivists’ ranks. In a recent exchange with Prof. Brian Morris, the world’s most widely published pro-circumcision academic, Van Howe raised some valid points about the need for specialist expertise when using a sophisticated tool such as meta-regression, and about the need to be alert to critiques based on ideology, rather than expertise. Whilst these points have merit, specialist expertise is not necessary to detect ideologically motivated articles. Academic sins, such as ad hominem attacks, inappropriate data manipulation, ignoring criticisms, citing fringe sources, misrepresentation, and double standards, betray those promoting ideological views, even if they are hidden amongst sophisticated technical arguments. This can be illustrated by using examples drawn from Van Howe’s own published works furthering the “intactivist”, or anti-circumcision, movement.

The pseudosciences are replete with examples of “experts”, sometimes self-appointed, or considered such only by their band of followers, not by the mainstream. Sometimes with genuine credentials in relevant fields, thereby giving them a veneer of credibility, they bamboozle their audiences with “scientific” arguments and jargon that may convince the layperson and even some professionals but which are, nevertheless, utterly bogus.

Examples include creationists, such as Dr Andrew Snelling who has a PhD in geology, but writes copiously about the earth being 6,000 years old. In the medical sciences, they include Peter Duesberg, professor of molecular and cell biology, and darling of the HIV/AIDS deniers, the infamous Dr Andrew Wakefield, and his anti-vaccine fans, and various purveyors of “alternative medicine” and other pseudo-medical quackery.

Followers of the circumcision wars will soon see “experts” on opposite sides of the debate, each contradicting their opponents. Both cannot be right. To an outsider it may not be apparent who to believe, i.e. who is the real expert, and who a fake expert peddling pseudoscience.

One of the most prominent “experts” on the intactivist side is Robert Storms Van Howe, in Michigan. With 87 publications (as of March 2018), most on circumcision-related topics, he is easily the intactivists’ most widely published academic. This earned him the dubious accolade of “Intactivist of the month” for March 2012 (http://www.intactamerica.org/rvh ). He is a regular at intactivist conferences, and his published works are popular amongst the movement’s followers. His study on fine-touch sensitivity (Sorrells et al., 2007) is a staple of intactivist propaganda.

His nemesis is Prof. Emeritus Brian Morris, in the School of Medical Sciences at the University of Sydney. Like Van Howe, Prof. Morris has also clocked up an impressive tally of publications on circumcision (96 as of March 2018), though from an evidence-based pro-circumcision position. In contrast to Van Howe, only about a quarter of Morris’ prodigious output of over 400 publications is on circumcision. Thus his publication record outstrips Van Howe’s.

Van Howe is unusual for intactivists in that he has some relevant credentials. He has an MD and a master’s degree in statistics and public health, and is a professor of pediatrics in the College of Human Medicine at Michigan State University. His opponent, a professor of medical sciences for two decades, holds two doctorates, did his PhD and post-doctoral studies in Australian and US teaching hospitals, and has earned an international reputation for his contributions to the molecular basis of hypertension and aging. In the late 1980s he invented the human papilloma virus (HPV) test that is now replacing pap smears for more accurate cervical screening. His HPV work led to his interest in male circumcision because of circumcision’s ability to decrease the risk of cervical cancer. He is thus an accomplished molecular biologist and medical scientist.

So when these two gentlemen go head-to-head in the literature (as often they do) it is truly a clash of the Titans. Intactivists hate Prof. Morris, and expend great energy in attacking him any way they can. But their innuendo and character assassination are signs of desperation from people unable to address the scientific evidence the good professor deploys.

As intactivists use any scurrilous means they can to attack their chief academic critics, it is right to examine their own chief academic protagonist, only this examination will stick rigidly to his own academic track record. No innuendo, no guilt by association, no ad hominem, just what is published in the literature and therefore in the public domain.

The most recent “clash” of these “Titans” was in the journal Global Public Health. In 2015 Van Howe published a meta-regression (Van Howe 2015) undermining the consensus view that circumcision is highly effective against female to male HIV transmission. This triggered a scientific critique by Morris and associates who pointed to various problems with Van Howe’s paper (Morris et al. 2016). An indignant response from Van Howe followed entitled “Expertise or ideology? A response to Morris et al. …” (Van Howe, 2017) which was in large part a protracted personal attack on Morris.

Van Howe cast doubt on the expertise of his critics, suggesting that they were motivated by ideology, whilst portraying himself as an expert. He also described various kinds of “expert”, the best being those who make novel contributions to a field, i.e., those with contributory expertise. Never mind that one of Morris’ co-authors, Prof. Gia Barboza, is herself an experienced statistician, and amply qualified to take on Robert S. Van Howe. In fact, all Morris’ co-authors, and Prof. Morris himself, have considerable contributory expertise, which was pointed out in their reply (Morris et al. 2017). At the time of writing (March 2018) there was no sign of a response from Van Howe.

Being familiar with Van Howe’s works, I have seen him commit the same academic sins he (falsely) accuses Morris of, thereby laying himself open to the charge of hypocrisy. I will leave it to those better qualified than I to judge the technical aspects of Van Howe’s latest effort and the rebuttals, and to decide if he is correct in appointing himself as an expert, or is just pulling rank. I am no statistician. But his haughty air reminds me of a time I shredded a young-earth creationist’s claims only for him to remind me that he had a geology degree, and I did not.

When a professor of mathematics gets Pythagoras’ theorem wrong, one does not need to know how to solve Fermat’s last theorem to suspect something is amiss. Likewise, when a statistician makes basic blunders, and ignores them when they are pointed out, one does not need to know how to conduct a meta-regression to smell a rat.

A few examples will suffice. In 2013 Van Howe published a meta-analysis on STIs purporting to prove that circumcision is ineffective in their prevention (Van Howe, 2013). I spent ten days solid reading all his references on HPV, and others, and was shocked by what I found. He had devised a correction for “sampling bias” and used this to turn contrary results into ones that suited his “cause”. In support of this “correction” he cited articles of his own, or by his wife Michelle Storms (references 16 & 26-28 in Van Howe, 2013), but ignored the published criticisms of those, plus an earlier one, and the critics were not Morris, but other prominent contributors to the field (Castellsagué et al., 2007; Tobian, et al., 2009; Auvert et al., 2009; Tobian et al., 2010; Castellsagué, et al., 2002), i.e., real, contributing experts even by Van Howe’s definition.

Was there any hint of this in Van Howe’s paper? No! Cheerfully he went about applying his own “correction”, which is rejected by real contributory experts, to two randomized controlled trials (RCTs), thereby “correcting” away results that did not fit his desired conclusion. And this by a man who complains bitterly about Morris self-citing and not paying heed to published criticisms.

It gets worse. In the same paper, he stated the “correction” was for studies that sampled the glans or urethra only. Yet he applied it to a study that sampled the glans and sulcus combined (Tobian et al., 2009).

It gets even worse still. The other study he applied it to (Auvert et al., 2009) had included a nested study to test for sampling bias, and it found none. So, Van Howe was applying a sampling bias “correction” that had already been soundly criticized in the literature five times (criticisms he completely ignored), to a study which had already specifically tested for it, and eliminated it.

It doesn’t end there, however. The fact that this study had tested for, and eliminated, sampling bias had already been pointed out to Van Howe in one of the critiques he ignored (Auvert et al., 2009). To paraphrase Lady Bracknell, in “The Importance of being Earnest”: “To make one mistake may be regarded as a misfortune, to do it again after having it pointed out, looks like carelessness.”

I contacted Prof. Morris and sent him these observations (and others), and he showed them to his colleagues. Evidently they agreed with this humble non-expert and used them in their rebuttal (Morris et al., 2014), and acknowledged me. One does not need to be an expert statistician, just an alert skeptic with time and patience, to spot these problems.

Again, specialist expertise is not needed to think there may be something fishy about Van Howe’s response to Morris’ criticism (Waskett & Morris, 2007) of his paper on penile sensitivity (Sorrells et al., 2007). Van Howe put the initial results into Table 2 of that paper. These were then analysed further using his mixed marginal model, the results of which went into their Table 3. Using a Bonferroni correction, Waskett & Morris found that Van Howe’s results in Table 2 were not statistically significant. In his latest attack on Morris (Van Howe, 2017), Van Howe objects, correctly, that one does not apply Bonferroni to mixed models (in his note 1). But Waskett & Morris did not apply a Bonferroni correction to his mixed model. They applied it to the data in Table 2. They had separate criticisms for the mixed model results in Table 3.

This was clarified to Van Howe recently (Morris & Krieger, 2016), yet Van Howe ignored this and persisted in accusing Morris of applying Bonferroni to his mixed model results. Now it may be that there was some special reason why it was inappropriate to apply Bonferroni to a Table full of multiple comparisons (exactly the situation it was designed for) but, unless Van Howe clearly explains why his Table 2 is an exception, poor non-experts like me are going to sense a straw man here.
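For context, the Bonferroni correction at issue is simple to state: when m comparisons are made at a nominal significance level α, each individual test is held to the stricter threshold α/m. A minimal sketch, using hypothetical p-values rather than the actual figures from the papers discussed, might be:

```python
# Minimal sketch of a Bonferroni correction for multiple comparisons.
# All p-values below are hypothetical illustrations, not values taken
# from Sorrells et al. (2007) or Waskett & Morris (2007).

def bonferroni(p_values, alpha=0.05):
    """Hold each of m tests to the stricter threshold alpha/m."""
    m = len(p_values)
    threshold = alpha / m
    return threshold, [(p, p < threshold) for p in p_values]

# E.g. 19 comparisons at a nominal alpha of 0.05:
p_values = [0.001, 0.003, 0.04] + [0.5] * 16
threshold, results = bonferroni(p_values)
significant = [p for p, sig in results if sig]
print(threshold)    # 0.05 / 19, roughly 0.0026
print(significant)  # only p-values below that threshold survive
```

A table of many pairwise comparisons is exactly the situation this guards against: with 19 tests run at α = 0.05, roughly one spuriously “significant” result is expected by chance alone.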

Van Howe also complains about the tone of Morris’ rebuttal, listing remarks he considers disparaging. But he is not above this himself. In his aforementioned 2013 paper, Van Howe accused respected researchers from Johns Hopkins of either “incompetence or willful academic misconduct” for allegedly withholding data from penile shaft swabs. But this was as false as it was insulting. As the researchers explained: “We collected swabs from the coronal sulcus/glans and the shaft, but only had resources to assay the corona sulcus/glans samples” (Gray et al., 2010). Evidently the resources later became available and the results for the shaft were published in a separate paper (Tobian et al., 2011). Curiously, Van Howe cited both these papers on other matters, but evidently missed the remark quoted above. Does he not read the sources he cites? I did. That is how I spotted this, and I did not need to be a statistician, expert or otherwise, to do so. Van Howe owes the researchers, whose professionalism he impugned, a retraction and an apology.

I also spotted a grotesque misrepresentation in Van Howe’s latest attack on Morris (Van Howe, 2017), and I did not need any specialist expertise for this. Referring to a meta-analysis on urinary tract infections (Singh-Grewal et al., 2005) Van Howe wrote, “it was estimated that the number of infant circumcisions needed to prevent one urinary tract infection was 111, yet in the analysis by Morris and Wiswell (2013) this number dropped to 4, a discrepancy that raises serious questions as to whether the analysis was performed correctly.” It is a trivial matter, requiring little beyond basic literacy, to look up the two studies cited to see that the Singh-Grewal paper’s emphasis was on children, particularly infants, whereas Morris and Wiswell’s estimate was a lifetime one, raising serious questions as to whether Van Howe actually reads the references he cites.

A recurring theme in Van Howe’s criticisms of the African RCTs on circumcision and HIV is that some of the HIV transmission may not have been sexual (Van Howe & Storms, 2011; Darby & Van Howe, 2011; Van Howe, 2013). In support of this assertion, Van Howe keeps citing the claims of David Gisselquist. This is despite Gisselquist’s views about the African HIV epidemic being driven by vaccination programs having been soundly debunked long ago by experts (real ones) from the WHO and elsewhere (Schmid et al., 2004; White et al., 2007). This has been pointed out ad nauseam by Morris and others (Halperin et al., 2008; Morris et al., 2011; Wamai et al., 2012; Morris et al., 2014). But still Van Howe persists, even stating it again in the meta-regression that triggered the latest round of correspondence (Van Howe, 2015), and yet again in his reply to Morris’s critique. And Van Howe has the chutzpah to accuse Morris of ignoring criticisms.

Van Howe just can’t let go of Gisselquist, but every reputable scientific body has. Gisselquist is now firmly out on the fringe, a darling of anti-vaxxers, HIV/AIDS deniers, and Robert S. Van Howe. As an economist and anthropologist with no medical scientific training, Gisselquist would seem ill-equipped to comment on epidemiological matters. Perhaps being an expert is only required of Van Howe’s critics, but anyone will do when on his side. Indeed, looking through the reference list in Van Howe’s meta-regression one finds cited George Hill (a retired airline pilot and sugar farmer), Gregory Boyle (a psychologist) and Ryan McAllister (a physicist). And these are just in connection with the African RCTs, a topic to which their qualifications have no relevance whatsoever (and whose articles on the topic were soundly debunked, which Van Howe ignores, as usual). Van Howe even co-authored alongside a historian (Robert Darby) a paper on HIV in Australia (Darby & Van Howe, 2011), another on the supposed psychological harm of circumcision alongside an industrial designer (Bollinger & Van Howe, 2011), and another on the carcinogenicity of smegma alongside a professional pianist (Van Howe & Hodges, 2006). Well, at least one shared the same first name, if not Van Howe’s statistical expertise that we should all be in awe of. Citing fake experts and fringe sources is a tell-tale sign of pseudoscience.

Van Howe seems to attract critiques like iron filings to a magnet. His first attempt at a meta-analysis, “proving” that circumcision increases a man’s risk of contracting HIV, became, literally, a textbook example of how NOT to do a meta-analysis (Borenstein et al. 2009).

His next meta-analysis attempted to show that circumcision was ineffective against HPV. It was described by experts (real contributory ones in HPV research, from the respected Catalan Institute of Oncology) in the title of their critique as “a biased, inaccurate and misleading meta-analysis”, stating that it was so bad it ought to be retracted from the literature (Castellsagué et al., 2007).

His next meta-analysis was on circumcision and genital ulcerative disease (GUD). But critics pointed out, amongst other things, that some of the data it cited was not present in the sources given (Waskett et al., 2009). Van Howe had to issue a correction.

His fourth meta-analysis tackled a range of STIs other than HIV, again seeking to “prove” that circumcision is ineffective against them. His use of an inappropriate “sampling bias” correction in this paper has already been discussed above. As with his previous meta-analyses, it was soundly debunked. In fact it was so bad the critics recommended it be retracted (Morris et al., 2014).

And then there is his recent meta-regression (Van Howe 2015) which led to the latest round of rebuttal and counter-rebuttal, and which inspired this article. Just looking at his meta-analyses alone, a pattern is emerging. And it continues with his other works, many of which have also attracted harsh criticisms. Everyone can be pardoned occasional errors – to err is human, after all. The mishap with the data in his meta-analysis on GUD, for example, could happen to any author. But his errors are so numerous, often so egregious that even non-experts can spot them, and so systematically biased against circumcision, that something else is surely going on besides mere carelessness. Perhaps it is not Van Howe’s critics who are motivated by ideology, but Van Howe himself.

There are many other problems that I, and others with varying levels of expertise, have observed in Van Howe’s copious output of work opposing circumcision. But the above is enough to prove the point, in contradiction to Van Howe’s implication, that specialist expertise (helpful though it undoubtedly is) is not always necessary to spot errors, if those errors are sufficiently egregious.

Academic sins, such as ad hominem attacks, inappropriate data manipulation, ignoring criticisms, citing fringe sources, misrepresentation, and double standards, betray those promoting ideological views, even if they are hidden amongst sophisticated technical arguments. One does not need to be an expert with specialist skills to spot these, merely an alert skeptic, familiar with the tricks of the pseudoscientific trade, and with the time and patience to do the fact-checking and background reading necessary.

In short, one does not need to be able to conduct a meta-regression, or wield any other of Van Howe’s statistical tools, to spot the problems described above, any more than one needs to be an expert tailor, knowing warp from weft, the correct seam allowance for an armscye, and the right interfacing for a collar, to see that the emperor is naked.


Recent Updates

Updates for March 2019:

1. The section “Function and Sensation” has been updated; see “Circumcision removes half the skin of the penis”, which debunks this frequently repeated claim as bogus. The introduction has also been updated.

2. In the section “Risk and Complications” the claim that circumcision causes psychological damage is addressed and debunked.

3. You can comment on any material on the site by posting a blog entry in the “Blog” section.