There is a lot we still need to understand about how our bodies work, how they may be affected by disease, and how we can alleviate these conditions. We use our knowledge to form hypotheses, and then must use repeated trials to test them. One of the difficulties of testing in humans is that we all have vastly different medical backgrounds – we eat differently, we are brought up differently, we get diseases at different ages from different causes. This makes testing in humans very hard. To help, we use models in which we try to keep as many factors as possible constant, in an effort to understand the one we vary. This is often impractical, impossible or unethical to do in humans. Thus we use animal models.

In Aysha Akhtar’s recent post she has continued to misunderstand and misrepresent the role of animal research in the medical discovery process. Having debunked her first attempt, it seems we really ought to do so a second time as well. This time round, much of her argument is premised on the idea that research uses EITHER animals OR humans. In truth there is ongoing clinical (human) and preclinical (animal and non-animal) research for most diseases. It would be impossible to truly understand human diseases without human studies, but animals play a vital role as well.


Akhtar begins with an anecdote about her father, who suffers from diabetes and nerve damage. She suggests that because no treatment works for her father, animal research has done nothing for diabetes. This is wrong. The development of insulin, which has saved millions of lives, relied on research involving dogs. Earlier this year a new diabetes drug, lixisenatide, developed from a compound found in the saliva of the Gila monster (a lizard), was launched in the UK. So animal research is continuing to have an impact on those suffering from diabetes.

Persisting with the diabetes argument, Akhtar notes that one line of diabetes research, involving the study of pancreatic islet cells, was misled for years by differences between rodent and human islets. Such pitfalls do occur in research, and it is important that scientists watch for the impact of species differences. Nonetheless, Akhtar fails to read her own link, in which a researcher notes:

“The results of this study do not decrease the value of basic science and small animal based research,” explained Dr. Camillo Ricordi, scientific director of the Diabetes Research Institute and Stacy Joy Goodman Professor of Surgery. “However, it does underscore the critical importance of translational research, that is, to determine if observations obtained in rodent studies are relevant to patients. Using human tissues and pre-clinical model systems, we can transfer any new pertinent finding toward new treatments for patients in the fastest, most efficient and safest way possible.”

Akhtar then moves on to discussing stroke.

“Strike 1: Artificially inducing stroke in animals does not recreate the complex physiology that causes the natural disease in humans, which may develop over decades.

Strike 2: Animal stroke models don’t usually include the underlying conditions, which contribute to human stroke.

Strike 3: Artificially inducing in animals the underlying conditions that lead to human stroke does not replicate the processes that occur in humans.”

While trying to create the façade of three arguments, Akhtar has essentially written the same argument in three different ways. The problem with Akhtar’s view is that she implicitly assumes that all treatment efforts for stroke must aim to resolve all the underlying conditions. Prevention is also important, but both parts of research are necessary (and indeed both involve the use of animals).

While noting the failure of many new drugs for treating stroke, Akhtar does not mention that the main treatment for stroke, thrombolysis, was initially shown to work through experiments on rabbits. Furthermore, Akhtar provides no suggestion as to why taking animals out of the research equation would suddenly improve our chances of finding a novel treatment (preventative or remedial). Indeed, much of our general understanding of the mechanisms of human stroke comes from research in animals.

So does Akhtar have any evidence to back up her claims? This is where things get interesting. In support of her claims she cites the paper “What can systematic review and meta-analysis tell us about the experimental data supporting stroke drug development?” by Professor Malcolm Macleod of the University of Edinburgh, published in the International Journal of Neuroprotection and NeuroRegeneration in 2005 (1). The obvious first step for anyone wishing to evaluate Akhtar’s claims would be to read that paper, but there is a problem. The International Journal of Neuroprotection and NeuroRegeneration stopped being published in 2008, is not available online, and is no longer indexed in PubMed…making it very difficult to get hold of the paper in question. Very frustrating, and all the more so since Professor Macleod has published many papers on animal models of stroke that can easily be accessed online…one would almost think that Akhtar is trying to hide something.

Macleod, along with his colleague Dr H. Bart van der Worp of the University Medical Centre Utrecht, has earned a deserved reputation as a strong critic of inadequate design and reporting in preclinical animal studies of stroke. They have advocated both for improved experimental design and reporting, and for rigorous systematic reviews of the preclinical data supporting a potential therapy before that therapy is evaluated in human trials. In an open access paper published in PLoS Medicine in 2010 (2), entitled “Can animal models of disease reliably inform human studies?”, they examined weaknesses, inadequacies and biases in the design of preclinical animal studies, and the misinterpretation and misapplication of the results of animal studies when designing clinical trials. They found that in many cases the design of the clinical trial differed so much from the preclinical study, in treatment regime and outcome measures, that it was impossible to tell whether the failure of a treatment in human patients actually contradicted the earlier success in an animal model. Both outcomes were entirely plausible even if you assumed that there was absolutely no fundamental biological difference between the effects of stroke in the animal model and in human patients. For example, the vast majority of neuroprotective drugs evaluated in clinical trials over the past few decades were shown to be effective in animal models of stroke only when administered unrealistically soon after induction of stroke – usually after less than 15 minutes – whereas in the subsequent clinical trials treatment did not usually begin until more than 4 hours after stroke onset, too late to be helpful.

In their conclusion they discussed how preclinical research could be improved.

“Although there is no direct evidence of a causal relationship, it is likely that the recurrent failure of apparently promising interventions to improve outcome in clinical trials has in part been caused by inadequate internal and external validity of preclinical studies and publication bias favouring positive studies. On the basis of ample empirical evidence from clinical trials and some evidence from preclinical studies, we suggest that the testing of treatment strategies in animal models of disease and its reporting should adopt standards similar to those in the clinic to ensure that decision making is based on high-quality and unbiased data. Aspects of study quality that should be reported in any manuscript are listed in Box 3.

Box 3. Aspects of Study Quality to Be Reported in the Manuscript

Sample size calculation: How the sample size was determined, and which assumptions were made.

Eligibility criteria: Inclusion and exclusion criteria for enrolment.

Treatment allocation: The method by which animals were allocated to experimental groups. If this allocation was by randomisation, the method of randomisation.

Allocation concealment: The method to implement the allocation sequence, and if this sequence was concealed until assignment.

Blinding: Whether the investigators and other persons involved were blinded to the treatment allocation, and at which points in time during the study.

Flow of animals: Flow of animals through each stage of the study, with a specific attention to animals excluded from the analyses. Reasons for exclusion from the analyses.

Control of physiological variables: Whether and which physiological parameters were monitored and controlled.

Control of study conduct: Whether, and if so which parts of, the conduct of the study were controlled by a third party.

Statistical methods: Which statistical methods were used for which analysis.

Recommendations based on [13],[17].
Not only should the disease or injury itself reflect the condition in humans as much as possible, but age, sex, and comorbidities should also be modelled where possible. The investigators should justify their selection of the model and outcome measures. In turn, human clinical trials should be designed to replicate, as far as is possible, the circumstances under which efficacy has been observed in animals. For an adequate interpretation of the potential and limitations of a novel treatment strategy, a systematic review and meta-analysis of all available evidence from preclinical studies should be performed before clinical trials are started. Evidence of benefit from a single laboratory or obtained in a single model or species is probably not sufficient.”

While they don’t answer the question in the title of their paper directly, the implication is clear…with improved experimental design, reporting and systematic review prior to clinical trials, animal models can reliably inform human studies. Their own systematic reviews and meta-analyses of hypothermia in animal models of stroke show this approach in action:

“In animal models of focal cerebral ischaemia, hypothermia improves outcome by about one-third under conditions that may be feasible in the clinic, with even modest cooling resulting in a substantial improvement in outcome. Cooling is effective in animals with co-morbidity and with delays to treatment of 3 h. Large randomized clinical trials testing the efficacy of moderate hypothermia in patients with acute ischaemic stroke are warranted”

“…we believe that hypothermia has been studied in sufficient detail and under a sufficiently broad variety of experimental conditions in animal models of ischemic stroke to support the translation of this treatment strategy to clinical trials”

In March 2012, just such a large international clinical trial of hypothermia in ischemic stroke – EuroHYP-1 – was launched.

It is notable that, unlike animal rights campaigners who use deficiencies in some animal studies to call for a ban on animal research altogether, Macleod and van der Worp understand its continuing importance to medical progress, and have worked with animal researchers to improve both the design and reporting of the preclinical animal studies that underpin decisions to initiate clinical trials. Initiatives such as the ARRIVE guidelines are similar in many ways to recent improvements in the design of clinical trials supported by the work of the Cochrane Collaboration, and to the widespread adoption of standards for the reporting of clinical trials (though, as the AllTrials initiative has highlighted, the reporting of clinical trial results is still far from perfect). The work that Macleod, van der Worp and their colleagues have done to improve animal studies is a direct follow-on from similar work undertaken in the field of clinical trials.

This highlights another risk that Akhtar’s claims pose: by claiming that the use of animals is at the root of the failure of potential stroke therapies to translate into clinical benefit, she distracts attention from the real problems – problems that affect many areas of medical research, not just animal studies. (Similarly, animal rights activists often attack mouse xenograft studies in cancer, ignoring the fact that it is the standard laboratory cell lines used to create these models – and used for most in vitro cancer research – that are the problem, something that is now being addressed through the development and increasingly widespread use of GM mouse models of cancer and patient-derived tumor xenografts.)

As she nears the end of her article, Akhtar continues to cherry-pick her quotes, including one from Susan Fitzpatrick, former Associate Executive Director of the Miami Project to Cure Paralysis:

“Even if we know all about the animal model, we don’t necessarily know about the disease… The model becomes what we study, not the human disease.”

What Akhtar fails to mention is that the solutions proposed are not to abandon animal research, but to continue to improve it. So the article Akhtar quotes from continues:

“Careful thought about which animals are used, and when, will be important. Again, Ivinson says that one of his hopes for a centralised resource is a dedicated team to help develop animal models and make them available to all investigators. It may be that new molecular tools will mean that, rather than always turning to rodents, the fruit fly Drosophila or yeast will become the best early ways to study disease mechanisms. Larger animals could then be brought in, in smaller numbers, much later in the process.”

I found it interesting to note that at UW-Madison the same proportion of total research funding (20%) is associated with animal research as with human research (based on the percentage of grants requiring IACUC approval or IRB approval, respectively). A quarter of those grants require both approvals, meaning those projects include both human and animal studies.