
Archive for July, 2008

It was once imagined that Europeans are protected from “HIV” by the CCR5Δ32 gene (CCR5 with deletion 32). However, comparison of the geographic distributions of CCR5Δ32 and of “HIV” disproves that suggestion [Mainstream duffers clutch at Duffy straws: African ancestry and HIV, 26 July 2008]. No sooner had we posted that information than HIV/AIDS “researchers” published a brilliant scheme for mimicking the Δ32 deletion via genetic engineering, as “an attractive approach for the treatment of HIV-1 infection”:

This brings out in force the Luddite that has been growing in me, fertilized by the copious manure that emanates non-stop from the drug industry and its academic henchpeople. (Luddites are named after the machine-destroying protesters in early-19th-century Nottinghamshire and the English Midlands whose jobs were lost to mechanization during the Industrial Revolution. The term has come to be applied to antagonism against things that are widely applauded as scientific or technological “improvements” or “advances”.)

There are at least two immediately obvious things very wrong with this sort of anti-HIV approach, irrespective of whether HIV has anything to do with AIDS. First, gene therapy remains an idea, not a reality; moreover, an idea whose time has passed because its basis has been found to be incorrect. Second, does not the CCR5 gene perform functions apart from its possible connection to “HIV”? What are those functions? What happens to them if CCR5 is disrupted in a manner that natural selection (or the Creator, it makes no difference) never invented or intended?

The idea of gene therapy stemmed from the initial interpretations of DNA as the carrier of hereditary information. It was thought at first that specific sequences of DNA form distinct and separate genes, individual units of hereditary information, each of them responsible only for the production of one particular protein. That has turned out not to be the case. Genomes are dynamic systems and not linear arrays of fixed genes. At various times, different sub-units of what are still called “genes” work together with sub-units of other “genes” to generate the proteins needed at any given time and place. The sophistication of these precisely scheduled interactions is such that genomes can produce many more proteins than they have “genes”: humans have fewer “genes” than corn and only 25% more than flatworms, even though humans are somewhat more complex creatures; see the fairly recent review by Ast, “The alternative genome”, Scientific American, April 2005, 58-65.

Given that current understanding, any notion of replacing a “defective gene” with a non-defective one presupposes that we know everything about which bits of which genes are needed for what, and at which times, in the development and life of the organism. Our knowledge is very far from that.

Official websites describe the problems quite well:

“Although gene therapy is a promising treatment option for a number of diseases (including inherited disorders, some types of cancer, and certain viral infections), the technique remains risky and is still under study to make sure that it will be safe and effective. Gene therapy is currently only being tested for the treatment of diseases that have no other cures” [published 18 July 2008]

“The Food and Drug Administration (FDA) has not yet approved any human gene therapy . . . . Current gene therapy is experimental and has not proven very successful in clinical trials. Little progress has been made since the first gene therapy clinical trial began in 1990. In 1999, gene therapy suffered a major setback with the death of 18-year-old Jesse Gelsinger. . . . [who] died from multiple organ failures 4 days after starting the treatment. . . . Another major blow came in January 2003, when the FDA placed a temporary halt on all gene therapy trials using retroviral vectors in blood stem cells. . . . after . . . a second child treated in a French gene therapy trial had developed a leukemia-like condition. . . . Before gene therapy can become a permanent cure for any condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be long-lived and stable. Problems with integrating therapeutic DNA into the genome and the rapidly dividing nature of many cells prevent gene therapy from achieving any long-term benefits. Patients will have to undergo multiple rounds of gene therapy. . . . Anytime a foreign object is introduced into human tissues, the immune system is designed to attack the invader. The risk of stimulating the immune system in a way that reduces gene therapy effectiveness is always a potential risk. Furthermore, the immune system’s enhanced response to invaders it has seen before makes it difficult for gene therapy to be repeated in patients. . . . Viruses, while the carrier of choice in most gene therapy studies, present a variety of potential problems to the patient—toxicity, immune and inflammatory responses, and gene control and targeting issues. In addition, there is always the fear that the viral vector, once inside the patient, may recover its ability to cause disease. . . . 
Conditions or disorders that arise from mutations in a single gene are the best candidates for gene therapy. Unfortunately, some of the most commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer’s disease, arthritis, and diabetes, are caused by the combined effects of variations in many genes. Multigene or multifactorial disorders such as these would be especially difficult to treat effectively using gene therapy. . . .” [last modified 13 May].

Perez et al. ingeniously avoided some of those problems, but the basic lack of knowledge remains. The disruption of CCR5 also disrupted the genome at other sites, chiefly at the neighboring CCR2: “Loss of CCR2 in CD4+ T cells is predicted to be well tolerated as CCR2-/- mice display phenotypes that are not disabling . . . . Mutant alleles of CCR2 have been correlated with delayed progression to AIDS in HIV-infected individuals, although no influence on the incidence of HIV-1 infection was observed . . . . Thus, parallel mutation of a small proportion of CCR2 in CD4+ T cells ex vivo is unlikely to be deleterious . . . . ”; and there was also disruption at “an intron of ABLIM2 on chromosome 4”. “Thus, except for CCR2 (5.39%), and rare (~1/20,000) events at ABLIM2, all the remaining sites showed no evidence of . . . [disruption], given a threshold level of detection of ~1 in 10,000 sequences”.

In other words, the engineering of the CCR5 gene is not 100% specific: other parts of the genome are affected as well. That no deleterious “side”-effects were observed in the experimental mice is hardly persuasive evidence that the procedure would do in human beings only the one thing we want done and nothing else. Above all, Perez et al. say nothing about why CCR5 exists at all, what functions natural selection intended for it, and whether the CCR5Δ32 that supposedly protects against “HIV”—but doesn’t, according to epidemiological comparisons—is associated with any undesirable conditions.

So, I suggest, Perez et al. are tinkering with things they don’t understand and that are better left alone; hence my reference to Dr. Frankenstein. (Incidentally: if you think “Frankenstein” was written by Mary Wollstonecraft Shelley, you should read John Lauritsen’s “The Man Who Wrote Frankenstein”, an illustration that it’s not only in medical science that the mainstream consensus can be wrong for long periods of time. The dogma that the Clovis people were the first Americans was maintained for more than half a century in the face of contradicting evidence, as well as the implausibility of the idea that the earliest discovered habitation sites would correspond with the earliest actual sites. Back to literary scholarship: it is also not the case that William Shakspere of Stratford-on-Avon wrote Shakespeare’s works; see Diana Price, “Shakespeare’s Unorthodox Biography: New Evidence of an Authorship Problem”.)

“Australia’s HIV rates rising”, by Tamara McLean; July 30, 2008
“AUSTRALIA’S rising HIV rates have come under the spotlight in a UN report that shows the disease is generally stabilising worldwide. . . .The UNAIDS report names Australia alongside Papua New Guinea, Indonesia, China, Britain and most of Africa, as nations in which HIV infections are on the rise. . . . The report, issued ahead of the 17th International AIDS Conference, opening in Mexico City on Sunday, said there were 33 million people living with HIV last year, compared with 32.7 million in 2006. . . . Mortality from AIDS last year was around two million, about 200,000 deaths fewer than in 2005. The authors credited the widening distribution of immune-suppressing drugs for the decline” [emphasis added].

I just can’t wait to see the actual report; it would be too delicious if that’s what it really says. As it is, this does little credit to the folks at The Australian.

Rethinkers have written many books and articles and have expounded and argued on many blogs, e-mail lists, and discussion groups. The great majority of all this has been ANTI the mainstream assertions: pointing to flaws in mainstream claims; arguing that no retrovirus could do what HIV is supposed to do; emphasizing that “HIV” has never been isolated and that “HIV” tests have never been validated; debunking claims as to the purported benefits of antiretroviral drugs; and so on.

It occurs to me that this mode of arguing AGAINST something has disadvantages both substantive and rhetorical.

A major substantive disadvantage is that it typically requires much more effort to debunk a claim than to make the claim in the first place. For instance, articles asserting benefits for antiretroviral treatments incorporate a large number of assumptions, a variety of mathematical techniques and elaborate models, and many inadequacies in the data. Explaining what’s questionable in all of those is an Herculean labor—at the end of which one has only shown that the particular article is not convincing. That doesn’t do much to shake HIV/AIDS dogma.

A major rhetorical disadvantage of arguing AGAINST something is that one adopts an essentially defensive posture. Thereby rethinkers allow the mainstream to choose the battlegrounds, the specific topics, and these also determine to a certain extent the weapons that come to be used. Bystanders and observers surely note (at least subliminally) that rethinkers are REACTING, and will assume, naturally enough, that the mainstream has posited a case that seems able to withstand assaults.

As I continue to look for the potentially most convincing case that HIV/AIDS theory is wrong, I suggest that there are a number of other possible modes of arguing that might be more effective than the debunking of detailed aspects of mainstream claims.

1. Make a positive case about HIV and AIDS.

Rather than argue what HIV is not, and what AIDS is not, expound what AIDS IS, and what “HIV” IS. I don’t mean to say, of course, that rethinkers have not done those things. They have, in many ways and in many places, but typically late in the discussion and in the context of debunking mainstream assertions and presenting alternatives. What I have in mind now is an ab initio story about the several different types of “AIDS” and about what “HIV” is.

2. Enlist the power of laughter.

Emphasize the range of unbelievable things that HIV/AIDS theorists require people to believe: more breast-feeding leads to less transmission of “HIV”; married women are more at risk of incurring a sexually transmitted infection than are single women, even prostitutes; “HIV” is spread by quite different mechanisms in different regions of the world; those who share needles are less frequently HIV-positive than those who do not share needles; and so on.

There will always be room, of course, for criticizing mainstream claims that are not obviously and absurdly unbelievable, but I suggest that here too one can distinguish more than one mode (points 3 and 4 below):

3. Point to essential things that are absent from the mainstream case.

We have no electron micrographs of authentic virions of “HIV” extracted direct from AIDS victims or HIV-positive people.

We don’t know how “HIV” destroys the immune system.

We have been unable to discover what properties an anti-HIV vaccine would need to have.

4. Point out flaws in mainstream claims.

This category includes the mass of material that I described at the outset, and that, it now seems to me, is potentially a less convincing mode of arguing than the first three. One need only read some of the to-and-fro on those blogs where rethinkers and mainstreamers have at one another, or the lengthy exchange in the on-line British Medical Journal, to recognize that arguing over those particular points cannot bring resolution, because the evidence on those particular matters permits of opposing interpretations—not equally plausible ones, admittedly, but we need final and conclusive evidence, not high or low plausibility.

I chose to moderate this blog in hopes of keeping strictly to substantive issues, and the occasional interventions from mainstreamers illustrate the point I’m trying to make. The questions, whether HAART benefits or doesn’t, and if it does benefit then to what extent, and how to factor in toxic “side”-effects, are sufficiently complex that no agreement is going to be possible when people approach the questions with fundamentally opposed preconceptions; see, for example, More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death, 12 July 2008. “Fulano/Mengano de Tal” (a. k. a. John/What’sis-name Doe) and I had some further private exchanges about claimed HAART benefits and the particular article that I had originally criticized, until I suggested that we should rather discuss first the more basic issue: Does/do he/she/they agree that the cumulative record of HIV tests in the United States, surveyed and analyzed in my book, demonstrates that what is being detected is not a contagious or infectious thing? If he/she/they does/do not agree, why not?

I have heard no more.

An ad hominem review of my book by one of the AIDStruth vigilantes was soon withdrawn from amazon.com.

The only mainstream journal to review the book noted that it “can be used as a mirror for some of the major failings of HIV epidemiology during the first quarter century of its existence . . . HIV/ AIDS researchers and health workers . . . should take a hard look at the weak quality of evidence supporting the views of HIV propagation appearing in their pages . . . richly documented . . . asking good questions and . . . detailing how ‘competent and qualified people who questioned the orthodoxy have been largely excluded from the leading journals’ . . . and, consequently, the media . . . Readers should ask the HIV/AIDS establishment, especially the health agencies entrusted with monitoring and intervening in HIV epidemics, why they have settled for evidence from a lesser god when the stakes for getting the picture right are so high. Bauer, Epstein and Chin ought to be thanked for providing us with such a (regretfully unflattering) mirror. Our task ought to be to recognize the serious weaknesses in the available evidence and to insist on rigorous studies that can supply the strong, direct evidence needed for epidemiologic validity”. (The review also had some criticisms, of course, to which I responded in a letter published by the journal).

The mainstream’s strategy evidently is to ignore wherever possible any positive cases made for rethinking. Where they cannot ignore, as with Celia Farber’s article in Harper’s, they respond with character assassination and undocumented counter-arguments. They simply cannot answer such positive cases as those raised by the death of Joyce Ann Hafford or the epidemiology of “HIV” tests in the United States.

*************************

The four points set out above represent scientific or intellectual cases against HIV. There are also cases to be made against HIV on grounds of human costs, both individual and social ones. These human costs might be called “collateral damage” from the mainstream’s paradigmatic war against HIV/AIDS.

5. The individual human case against HIV/AIDS is what an “HIV-positive” diagnosis does to the person concerned—psychologically via the nocebo effect as well as physically if the individual accepts antiretroviral treatment.

6. The social case against HIV/AIDS has two parts:
(a) What the individual’s plight does to others: family, friends, groups.
(b) The enormous amounts of mis-spent money, which dwarf expenditures on much more widespread health-care deficiencies in both the developed and the developing regions of the world—things like cancer or heart disease in the former, malnutrition or malaria or TB (and more) in the latter.

7. The frightful burdens of guilt and remorse that will be the lot of “AIDS activists” and AIDS organizations and HIV/AIDS researchers when finally they have to cope with the realization that they have horribly hurt innumerable people. That the mainstreamers and their groupies have done harm unwittingly, unknowingly, sometimes only indirectly, will be no source of comfort to them.

A just-published article seemed to promise delivery from this dilemma (“Duffy antigen receptor . . . mediates trans-infection of HIV-1 . . . and affects HIV-AIDS susceptibility”, Weijing He et al., Cell Host & Microbe 4 [2008] 52-62). The significance and authoritativeness of this revelation were underscored by the fact that one of the members of the “international team” is “renowned virologist Robin Weiss” [Sabin Russell, San Francisco Chronicle, 16 July 2008]:

“An international team of AIDS scientists has discovered a gene variant common in blacks that protects against certain types of malaria but increases susceptibility to HIV infection by 40 percent. Researchers, keen to find some biological clues to explain why people of African descent are bearing a disproportionate share of the world’s AIDS cases, suspect this subtle genetic trait — found in 60 percent of American blacks and 90 percent of Africans — might partly explain the difference. Ten percent of the world’s population lives in Sub-Saharan Africa, but that region accounts for 70 percent of the men, women and children living with HIV infection today. In the United States, African Americans make up 12 percent of the population, but account for half of newly diagnosed HIV infections. ‘The cause of this imbalance is not necessarily driven by behavior,’ said Phill Wilson, founder of the Black AIDS Institute in Los Angeles. ‘Gay black men do not engage in riskier behavior than gay white men, for example. African people with this gene may have a higher vulnerability’. . . .
The researchers compared 814 African American military personnel who were HIV negative with 470 who were infected with HIV. Out of this comparison popped the surprising number: a 40 percent higher risk of HIV among those whose genes suppressed the Duffy protein.”

At first sight, I was less than overwhelmed. A 40% increased risk doesn’t go far toward explaining why African-American men are about 7 times more often “HIV-positive” than white American men, and African-American women about 21 times more often “HIV-positive” than white American women; nor why certain countries in sub-Saharan Africa report “HIV-positive” rates of between 5% and 35% whereas no other region in the world, with the exception of the Caribbean, reports anything even as high as 1%. Still, HIV/AIDS dogmatists have proven their ability to explain anything, never mind plausibility; one could easily enough conjure some amplification effect.
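For readers who want to check that arithmetic, here is a minimal back-of-the-envelope sketch (my own illustrative calculation, not from the paper, using the 60% carriage figure cited in the Chronicle story and assuming, for simplicity, essentially zero carriage among white Americans):

```python
# Back-of-the-envelope check: how much of the observed racial disparity in
# "HIV-positive" rates could a 1.4x relative risk explain, even on the most
# generous assumptions? All figures are for illustration only.

carriage_black = 0.60   # reported Duffy-null carriage among American blacks
carriage_white = 0.00   # assumed ~0 among whites, for simplicity
rr = 1.4                # the reported 40% increased risk

# Population-average relative risk, weighted by carriage of the variant:
risk_black = carriage_black * rr + (1 - carriage_black) * 1.0
risk_white = carriage_white * rr + (1 - carriage_white) * 1.0
explained_ratio = risk_black / risk_white   # about 1.24

for group, observed in (("men", 7.0), ("women", 21.0)):
    print(f"{group}: observed {observed:.0f}x disparity, "
          f"only {explained_ratio:.2f}x attributable to the variant")
```

Even under these most generous assumptions, a 1.4-fold relative risk confined to 60% of one population yields an expected prevalence ratio of only about 1.24, an order of magnitude short of the reported disparities.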

I was impressed, however—though still not quite overwhelmed—by another aspect of this study:
“The researchers also made another remarkable finding — once a person with the African gene became infected, the same genetic trait appears to prolong survival. One of the Duffy protein’s natural roles appears to be to ramp up the immune system. It attracts a number of chemical signals that promote inflammation — a defensive mechanism that normally protects the body, but lays out a banquet of white blood cells for HIV to infect and destroy. So the same genetic mutation that raises the risk of HIV infection provides some protection to those who become infected. Similarly, those who carry the normal Duffy protein may be somewhat shielded from HIV infection, but once infected may sicken and die sooner without treatment.”

This counter-intuitive claim struck me hard because I had remarked elsewhere on the incongruity that blacks do in fact survive “HIV disease” to greater ages than members of other races, even as they are also “infected” by “HIV” to a far greater extent than are members of other races [HOW TO TEST THEORIES (HIV/AIDS THEORY FLUNKS), 7 January 2008]. I had taken this as further confirmation of my view that testing “HIV-positive” is an entirely non-specific indication of an immune-system response and that racial disparities reflect differences in genetic patterns relevant to immune-system function; now here was a generic explanation that was consistent with the data AND with mainstream HIV/AIDS theory!

Admittedly, this explanation of how the Duffy protein could both increase and decrease susceptibility to “HIV” made my head swim. It ramps up the immune system; that should be good. But those extra cells whose job it is to defend the body are thereby exposed to the predations of “HIV”! That fits with the Ho view of frantic rapid turnover, but Ho’s math was discredited as soon as it was published. The explanation is also reminiscent of attempts to invoke immune (hyper)activation or auto-immune reactions to explain how HIV destroys the immune system—but those, too, had been found wanting. Perhaps it is all just too technically sophisticated for a lay person to understand. Anyway, wouldn’t this mean that anything that ramps up the immune system also makes it easier for HIV to destroy it? Beware vaccination! No wonder all attempts to make an anti-HIV vaccine have failed, indeed have sometimes increased susceptibility!

I was totally confused, so it was some comfort to find that technically sophisticated people could also find this explanation difficult: “The researchers offer an explanation that they concede is far from straightforward. ‘If you found the paper plain sailing, most of my students didn’t,’ Dr. Weiss said.”

Still, that didn’t erase my concern that here was a shred of evidence to support a mainstream explanation. Fortunately, alleviation came from several sources. Nicholas Wade in the New York Times cited certain reservations [17 July 2008; Gene variation may raise risk of H.I.V., study finds]: “David B. Goldstein, a geneticist who studies H.I.V. at Duke University, said that the new result ‘would be pretty exciting if it holds up’” [emphasis added]; and he remarked that the techniques used to avoid effects of chance correlation “might not have been adequate”.

The crucial point seems to be that the Duffy gene in question is characteristic of—closely associated with—African ancestry. The tendency to test “HIV-positive” is also strongly associated with African ancestry. Therefore the Duffy gene and testing “HIV-positive” will inevitably show an association, a correlation, whether or not the gene has any causative role as to “HIV”. It’s the same old correlation-doesn’t-prove-causation fallacy that HIV/AIDS researchers—and innumerable others too, of course—often commit (for example, about “HIV” and excess deaths in Africa, see p.194 in The Origin, Persistence and Failings of HIV/AIDS Theory).

Two blogs concerned with genetics give a full explanation of why the attempts by this “international team of AIDS scientists” to rule out chance correlation were flawed. Some of the gene bits (SNPs) used as “tests” were not independent of the Duffy gene; others were poor discriminators between European and African ancestry, and therefore unsatisfactory for gauging the proportional ancestry of African Americans; and the choice of SNPs raised the suspicion that they had initially been looked at for possible correlations with “HIV” and only later for ancestry-indicating tests; see “DARC and HIV: a false positive due to population structure?” and “Duffy-HIV association: an odd choice of ancestry markers”.

***********************

An HIV-gene claim analogous to the Duffy one was made some years ago about the supposedly protective properties of CCR5 genes with a particular deletion (Δ32), because that is found in European but not in African populations. But it’s present in only a small proportion of Europeans, and moreover its distribution varies enormously from north to south. It’s also present to a negligible extent in North Africa, whereas HIV is as uncommon in North Africa as it is in Europe.

These quite typical episodes illustrate something that science writers and journalists do not usually know but should know—indeed, that it would be good for everyone to know:

REAL SCIENCE ISN’T NEWS
“Real science”, the stuff that is almost universally regarded as reliable, trustworthy, in fact true, is not and cannot be the latest, newest “breakthrough”, because it takes time and the critiquing and testing by other investigators to determine how much validity any new claim has. The real, reliable science is what’s been around long enough to have been thoroughly tested. Science is made trustworthy not by any formulaic “scientific method” but by a knowledge filter of the criticisms and repetitions and modifications and disproofs rendered by other researchers and peer reviewers.

Those who cover science for the media should learn to be as cautiously suspicious of “scientists” and “scientific” institutions as they mostly are of politicians, political institutions, business executives, and corporations. Scientists are no less human than other people, and they are no less capable of being corrupted by career ambitions and pressures, by “the system”, by taking goodies “because it doesn’t hurt anyone”, “because everybody does it”, and “if I didn’t do it, someone else would”.

Modern-day “Big Science” does not fit the traditional view of a quasi-religious vocation attracting disinterested truth-seekers who form an intellectual free market in which an invisible hand safeguards against error; modern-day “Big Science” is an array of bureaucracies that produce knowledge monopolies and research cartels.

According to HIV/AIDS theory, “HIV” destroys the immune system. One might then reasonably expect that malnourished individuals, or drug abusers, or people suffering from any illness or infection, would therefore be more likely to acquire a further infection—like, say, according to the mainstream view, “HIV”; and they would also be likely to have a poorer prognosis for survival than people who were not already ill before acquiring the extra burden of disease.

But even such uncontroversial and reasonable expectations need to be nailed down by specific clinical trials, so there are endless opportunities to qualify for research grants—without having to entertain a single novel thought. Consequently, “HIV/AIDS researchers” have studied whether “HIV-positive” African children infested with intestinal worms have a poorer prognosis than “HIV-positive” African children who are not so infested [ARE INTESTINAL WORMS GOOD FOR US? ARE THEY GOOD FOR AFRICANS? FOR AFRICAN CHILDREN?, 30 December 2007]. Other “HIV/AIDS researchers” have studied whether “HIV-positive” African children who have insufficient food have a poorer prognosis than “HIV-positive” African children who are properly fed [DRUGS OR FOOD?, 25 December 2007; FOOD IS GOOD FOR CHILDREN, 8 January 2008]. Yet other “HIV/AIDS researchers” regaled us with the startling finding that COCAINE AND HEROIN AREN’T GOOD FOR YOU! [a Golden Fleece Award, 13 June 2008] —not good for you insofar as HIV/AIDS is concerned, that is, of course: there’s nothing wrong with those drugs in themselves, in fact one should help users of cocaine and heroin to remain healthy by providing them with fresh, clean needles (ignoring that a range of studies have shown “HIV-positive” status to be more common among those NOT sharing needles).

Now Ruth M. Ruprecht* and nine collaborators from several prominent American medical institutions have confirmed that schistosomiasis (infection by parasitic worms) makes acquisition of “HIV-positive” status more likely: “the hypothesis that parasite infection increases an individual’s susceptibility to HIV-1 has never been prospectively tested in a relevant in vivo model. . . . Our data provide the first direct evidence that acute schistosomiasis significantly increases the risk of de novo AIDS virus acquisition, and the magnitude of the effect suggests that control of helminth infections may be a useful public health intervention to help decrease the spread of HIV-1” (Chenine* et al., Acute Schistosoma mansoni infection increases susceptibility to systemic SHIV Clade C infection in Rhesus Macaques after mucosal virus exposure, PLoS Neglected Tropical Diseases, 2 #7 [2008] e265).

Of course, as Chenine et al. note, there had been earlier suggestions (Bentwich et al., Immunology Today 16 (1995) 187-91; Hotez et al., PLoS Medicine 3 #5 (2006) e102, pp. 576-84) that treating schistosomiasis would have corollary benefit in fighting HIV/AIDS; but here is “the first direct evidence”. (Don’t be put off by the fact that the experiment used a human-made simian-human virus, or that the experiment was carried out in monkeys.)

Lest readers be misled by talk of “HIV” “infection”, above, into assuming that we accept HIV/AIDS theory, I hasten to add that everything in this breakthrough research is entirely compatible with our view that “HIV-positive” status is, in the words of Dr. Christian Fiala, like running a fever. The more severe the health challenge from just about any source, the more likely it is that one will test “HIV-positive”—if one has TB, or malaria, or has been vaccinated recently against flu or tetanus, say, to mention just a few of the many possibilities. So humans with schistosomiasis are more likely than others to become “HIV-positive”; and perhaps that applies in monkeys as well.

—————————-*
Science Studies tidbit:
How could I say “Ruth M. Ruprecht and nine collaborators” and then cite “Chenine et al.”?

There’s a large literature in science studies and the sociology of science about authorship practices in science. There are interesting variations between disciplines, but in general and in the beginning, the name of the Big Cheese (BC) always came first. Some BCs, however, displayed an ostentatious false modesty by always placing their name LAST. But as time went by, the numbers of authors on scientific articles went beyond a couple or three or four, to nowadays perhaps an average of ten in medically related matters and dozens in particle physics, so there came into use a ready device for identifying who the Really Big Cheese (RBC) is: look for the footnote, “To whom correspondence should be addressed:” Uninformed lay people might imagine that this would be a secretary, but it’s not, it identifies the RBC.

Also nowadays, especially in medically related fields, one is likely to find a paragraph or two noting which specific contributions to the reported work were made by which of the co-authors: had the idea; carried out the experiments; did the statistics; etc. Providing this useful information came about for bad rather than good reasons: the increasing prevalence of fraudulent publications led to the idea that all the authors of an article should have read and approved it before publication and could therefore be held responsible for its honesty. On an embarrassing number of occasions, co-authors had disclaimed responsibility for fraud in published work by stating that they were not familiar with various aspects of the study or had not seen the final draft of the manuscript. That’s how it’s been going in late-20th and early 21st-century science.