Clinical Trials: Viva Bias

Alexander K. is a thoughtful man in his mid-50s, a writer by profession. He earns his money giving German and English lessons, and it is almost always tight. “A few years ago, I was sitting in the subway and saw posters seeking volunteers for a trial of a treatment for shingles,” he says. “I thought it was interesting. I have relatives who are affected by shingles, and I know how painful it can be. And the pay was not bad either.” 50 euros per injection; for three injections, a total of 150 euros.

K. went to a doctor’s office near Berlin’s Kurfuerstendamm, sat, as he says, among about 20 other subjects of all ages, signed the consent form, and got the first injection. “They took blood from me, and I was told that I would not know whether I was getting a placebo or not.” K. learned what side effects could occur: fever, dry mouth, watery eyes, or skin rashes. For emergencies, he received a phone number. “I was checked for symptoms relatively soon,” says K. “I had nothing, everything was normal. I took the money, and that was it.” Not quite. A few days after the end of the study, K. developed a headache, and at night he was sweating. “I called the next day. They told me to wait, that it would pass on its own.”

And so it was. K. never heard anything more from the study or about its results. “It all proceeded correctly,” he says today.

Results: Rarely published, hard to understand

Really? The truth is: we cannot know. Not as laypeople, at any rate. But even for doctors, pharmacists, and scientists, it is almost impossible to look behind the scenes and determine whether the data collected and evaluated in clinical trials are actually right. This is not only because the results are rarely published and difficult to understand, but also because the collected data themselves are not always correct and are difficult to verify. In the long process from the discovery of a molecule to the finished drug, the obscurely intertwined factors are too diverse, and the individual adjustment steps are too opaque. “Dirty data,” as insiders call it, is a major problem, say connoisseurs of the scene.

Take Artem Andrianov, for example. For years, the 34-year-old doctor of engineering developed data quality tools for clinical trials, conceiving and building software for data collection, until he switched sides. Today, Andrianov is CEO of Cyntegrity. The company checks the vast amounts of data produced by clinical trials for anomalies, using a combination of statistical and mathematical methods. Even in small numbers, such anomalies can distort results:

“It may happen that a good drug does not come on the market because of sloppy work by testing centers,” said the Russian-born engineer.

Incorrect data enter trial results either through carelessness or through deliberate manipulation. Where the line between the two runs, no one knows. According to publications on the matter, about two percent of all clinical trials involve active fraud; Andrianov puts the number at a little under ten percent.

“The number of unreported cases is high, and many remain undiscovered,” he says.

A race against time

But why does wrong data arise at all? Clinical trials are part of a complex process involving thousands of people. The developing company is in a race against time: from the moment a discovered molecule is patented, it has 25 years to develop a drug from it and capitalize on it. After that, patent protection expires, generics hit the market, and the profit evaporates. Studies play the crucial role here: the faster, clearer, and more positive a study, the sooner the authorities grant approval. “On average, a study costs around one and a half billion euros from start to finish,” says Andrianov. “If we add hidden costs, it can be up to three billion euros. Additional costs arise, for example, if certain parameters have to be checked again, or if the study is adjusted based on results. Such sums of money invite misconduct.”

There is strong motivation to influence studies, says the software specialist, because clinical trials are complicated on the one hand, yet strictly regulated in their conduct on the other:

“This can be circumvented easily. Patients are involved, as well as institutions and study sponsors, and all gain from it.”

A study can be compared to a manufacturing process: it produces data, which are evaluated and submitted as a whole to the authorities for approval.

Guarantee of quality

Globally, companies, clinics, and centers involved in clinical trials are governed by the rules of “Good Clinical Practice” (GCP). Good Clinical Practice is a standard for the conduct of human studies on both the scientific and the ethical level. It covers the safety of the subjects and their informed consent, as well as rules for planning, implementation, and documentation. The GCP directive in force for Europe is 2005/28/EC, implemented in Germany’s Medicinal Products Act through its 12th amendment. Part of GCP is comprehensive quality management for the continuous improvement of studies.

Most remains in the dark

In fact, cases of data manipulation rarely come into the public spotlight. The big scandals that do come to light are the exception, such as the one surrounding the influenza drug Tamiflu®, whose efficacy is still not established and whose study results are kept under wraps to this day. The pills are nevertheless stockpiled by the millions as protection against an epidemic; Germany alone bought 330 million euros’ worth of Tamiflu®. The drug’s questionable benefit came to light only through years of work by the Cochrane Collaboration, an association of doctors and researchers that assesses medical therapies.

“Although clinical studies are subject to strict regulation by ethics committees, agencies and departments, there are many temptations to circumvent or to intentionally ignore or violate them,” says Gerd Antes, head of Cochrane Germany. “Faking data or tampering with instruments is repeatedly reported in the media; however, this is not the dominant problem, since it occurs in isolation and the consequences are manageable. Much more damaging are the many small instances of sloppiness and offenses which do not cause any excitement in isolation, but cause immense damage when seen in total.”

Biases and statistical fluctuations

Errors arise in two ways, says Antes: “For one, there is systematic error, the so-called bias. These are distortions that cause the statements made by a study to be wrong, even if all procedures were carried out correctly.” There are hundreds of types of bias, says the mathematician and biostatistician. One example is selection bias: “It refers to the unequal composition of the two treatment groups in a comparative study, for example with regard to the subjects’ body weight. If different cure rates are then observed, the effect cannot be isolated, and one cannot know whether it is due to the treatment or to the weight.”

The second source of error is due to statistical fluctuations, says Antes:

“No study, when repeated, delivers exactly the same results as before; that is inevitable. By modern standards, results must therefore not be presented as a single exact number, but with confidence intervals within which the true value likely lies.”
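Antes’ point about confidence intervals can be made concrete with a short sketch. Below is a minimal Python example using the simple Wald (normal) approximation; the trial arm and its numbers are invented for illustration, not taken from any study mentioned here:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a cure rate,
    using the simple normal (Wald) approximation."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical trial arm: 60 of 100 patients improved.
low, high = wald_ci(60, 100)
print(f"cure rate 0.60, 95% CI [{low:.2f}, {high:.2f}]")  # roughly [0.50, 0.70]
```

Repeating the same study would give a slightly different point estimate each time, but about 95 percent of intervals constructed this way would cover the true rate. Reporting only “60 percent” hides that uncertainty.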

Finding these errors in statistics, in data intersections, and in colored charts is Andrianov’s daily business. He is good at it, he says, because after years as a doctor of engineering and a developer of study designs for pharmaceutical companies, he knows the process:

“If you know how the data should be distributed, you see anomalies relatively fast,” he says.

For example, he recently checked a Phase III clinical trial for rheumatoid arthritis and quickly realized that something was wrong: “Firstly, one center had recruited far more patients than the others and therefore had a great influence on the study. In addition, the figures were rounded: final digits of zero and five were conspicuously common. If you round frequently, you lose precision.”
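The rounding pattern Andrianov describes, a surplus of final digits 0 and 5 known as terminal digit preference, is easy to screen for. A minimal sketch in Python; the blood-pressure readings are invented for illustration:

```python
from collections import Counter

def terminal_digit_counts(values):
    """Count the last digit of each recorded measurement.
    Under honest measurement, last digits should be roughly uniform;
    a spike at 0 and 5 suggests rounding (digit preference)."""
    return Counter(int(round(v)) % 10 for v in values)

# Hypothetical blood-pressure readings from one center: heavy rounding.
readings = [120, 125, 130, 135, 140, 120, 125, 130, 128, 135, 140, 145]
counts = terminal_digit_counts(readings)
rounded_share = (counts[0] + counts[5]) / len(readings)
print(f"share of readings ending in 0 or 5: {rounded_share:.0%}")
```

With uniform final digits, only about 20 percent of readings would end in 0 or 5; a share far above that across a whole center is the kind of anomaly such a check flags for follow-up.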

Errors are harder to find now

This case was easy, but often the search for incorrect data is much more difficult. “Errors occur at every step of a study, and different things can lead to dirty data,” says the expert. Often this goes unrecognized, and the results may be incorrect:

“Since data are collected electronically, you can no longer leave fields blank. Lacking the information, some investigators write something instead of nothing just to move on,” says Andrianov. In the past, an inspector could spot blank fields and missing data. Today, the data are always there, but possibly wrong: “It is hard to verify, unless the error is very obvious.”

What matters is not the sheer number of incorrect data points: in clinical trials, a single mistake may have serious consequences, because researchers nowadays are chasing drug improvements of only a few percentage points. Trials are usually conducted with the double-blind method, in which neither the test center nor the patient knows who gets the real drug and who gets the placebo. “Suppose that in a study, the placebo improves a certain parameter by ten percent. This is the well-known placebo effect,” says Andrianov. The group receiving the real drug shows an improvement of twelve percent. Subtracting the placebo’s ten percent from the drug effect leaves an improvement of two percent.

“That alone is already enough to bring a new drug to market,” says the expert. “But as I said at the beginning, we have an average of about ten percent incorrect data. Will that affect this improvement of one to two percent? Of course, and even very severely so.”
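The arithmetic above can be illustrated with a small simulation: a drug whose true effect is two points over placebo, then the same drug with ten percent of its records corrupted. All numbers and the corruption model (fields filled with a default of zero) are invented for illustration:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def arm_mean(true_rate, n, dirty_fraction=0.0):
    """Mean improvement in one trial arm. Each patient's improvement is
    drawn around the true rate; a dirty_fraction of records is replaced
    by sloppy entries (here: a default value of 0 instead of a measurement)."""
    vals = [random.gauss(true_rate, 2.0) for _ in range(n)]
    for i in range(int(n * dirty_fraction)):
        vals[i] = 0.0
    return sum(vals) / n

placebo    = arm_mean(10.0, 2000)        # 10 % placebo effect, clean data
drug_clean = arm_mean(12.0, 2000)        # true effect: 2 points over placebo
drug_dirty = arm_mean(12.0, 2000, 0.10)  # same drug, 10 % corrupted records

print(f"clean effect: {drug_clean - placebo:+.1f} points")
print(f"dirty effect: {drug_dirty - placebo:+.1f} points")
```

In this toy model, the ten percent of zeroed records drag the drug arm’s mean down by roughly 1.2 points, wiping out more than half of the true two-point effect; a real corruption pattern could just as easily inflate it.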

“You can manipulate a lot of things, and a lot is manipulated”

Errors not only sneak in by mistake; data are also deliberately manipulated. Günther Jonitz, President of the Medical Association of Berlin, at least, is certain of this. He believes the reason lies in the varied interests of the participating parties: “The patient hopes the drug will bring a cure that often does not exist,” says the 57-year-old. “A researcher wants to be investigating an active ingredient that delivers good results.” Drug companies and the media likewise gain more by publishing something that shows an effect, and the manufacturer wants a product that works. “In this complex situation, all parties concerned are looking in the same direction,” says Jonitz.

“You can manipulate a lot of things, and a lot is manipulated,” says Andrianov, “especially since adaptive design is becoming fashionable.” In this method, parameters are readjusted during the study in response to the results, which, strictly speaking, is statistically improper: “You play with chance and coincidence. In some patients a drug has helped, in others not. But if I can adjust the parameters, then if an effect happens to show clearly in, say, women between 20 and 30 years of age, I can change the parameters and look only at women between 20 and 30,” says Andrianov. Even though the observed effect was purely accidental, adaptive design lets you pull the entire conclusion in a chosen direction: “You report or achieve only the effects that are positive, and let the others fall away.”
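Andrianov’s point about cherry-picked subgroups can be demonstrated with a simulation. Here the drug truly does nothing: every patient in both arms responds with the same 40 percent probability. Yet scanning enough post-hoc subgroups still turns up an apparently positive effect somewhere. The trial, the subgroups, and all numbers are invented:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical null trial: responses are coin flips (40 % response rate)
# for every patient, in both arms, regardless of treatment.
def simulate_patients(n):
    return [{"age": random.randint(20, 79),
             "sex": random.choice("mf"),
             "responded": random.random() < 0.40} for _ in range(n)]

drug_arm = simulate_patients(500)
placebo_arm = simulate_patients(500)

def response_rate(patients, pred):
    subset = [p for p in patients if pred(p)]
    return sum(p["responded"] for p in subset) / len(subset) if subset else 0.0

# Scan many post-hoc subgroups and keep the one with the biggest "effect".
best = None
for lo in range(20, 80, 10):
    for sex in "mf":
        pred = lambda p, lo=lo, sex=sex: lo <= p["age"] < lo + 10 and p["sex"] == sex
        diff = response_rate(drug_arm, pred) - response_rate(placebo_arm, pred)
        if best is None or diff > best[0]:
            best = (diff, f"{sex}, age {lo}-{lo + 9}")

print(f"best-looking subgroup: {best[1]}, apparent effect {best[0]:+.0%}")
```

With twelve subgroups of roughly forty patients each per arm, chance differences of ten points or more are routine. If the analysis plan can be changed after the data are in, one of them can always be presented as the finding.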

Countless ways to manipulate

According to Jonitz, studies are manipulated in various ways. For example, study participants may be younger than the target group: if a medicine for high blood pressure is tested on 30-to-40-year-olds while the actual patients are, as a rule, older than 60, the trial will show seemingly fewer side effects and seemingly greater benefits. A second possibility is comparison with a competing product administered at too low a dosage. Another problem is the interpretation of raw data:

“The scientists apply for the study, perform it, and hand all the raw data over to the sponsoring company. What is made of the data after that is out of their hands, and they do not know,” says Jonitz. Manipulation is also possible by delaying publication and by willfully ignoring negative studies: “Every paperback that comes on the market has an ISBN number. With studies, it is not so clear.”

It is even more difficult to learn about flawed studies in retrospect, says Andrianov. In this country, there is no register in which wrong or damaging studies would appear: “In Germany, we speak of the so-called golden memory. The German Society for Pharmaceutical Medicine, abbreviated DGPharMed, maintains a folder or two listing centers that have faked studies. But you have to ask directly about a specific center; you cannot get the information in general,” says Andrianov.

EU regulation for more transparency

“Such a list is not known to me. That is a rumor,” says Florinda Mihaescu, board member of DGPharMed. “In Germany and around the world, mistakes in studies must be reported.” The British National Institute for Health and Care Excellence (NICE) collects and analyzes such cases. In America, there is the system of warning letters: “If something serious has happened, it is recorded and published publicly on the website of the US Food and Drug Administration (FDA). There, individual offenses are listed by examiner. We do not have anything like that yet. As part of the EU reform, however, new regulations with more openness are planned,” says the graduate biologist, who herself conducts audits for commissioning companies, which she calls “sponsors.” Mihaescu investigates whether processes, requirements, and guidelines meet the standards set by the Good Clinical Practice (GCP) guidelines. Mihaescu believes in what she does.

“Of course mistakes can happen,” she says. “You always hear of scandals, but errors are becoming easier to discover and manage. As an auditor, I am convinced that we are continuously improving. Quality management is deeply rooted within each sponsor.”

In fact, an EU regulation binding on all Member States mandates that both positive and negative trials of medicines for human use must in future be published in a detailed summary and filed in databases, viewable free of charge from 2017 on. The raw data, which is what really matters, however, remains under wraps.


Professional in the integration of data-driven Risk-based Monitoring (RbM) process in international clinical trials of pharmacology.
Speaker at regional and global conferences such as: DIA, PharmaForum, PharmaDay, DGGF, etc.
10+ years of experience in data quality projects and biostatistics for the pharmaceutical industry.
Life passion: improving clinical research with RbM, driving the RbM research to new frontiers for CROs, pharma and biotech companies.
