Does QS Need An Ethical Review Board?

I am a strong supporter of self-experimentation and citizen science, particularly when it comes to health (full disclosure: I’m co-founder of CureTogether.com). Since our bodies all differ to varying degrees, we need to experiment with foods, lifestyles, and medications to find out what will work best for each of us. And pooling our individual data can guide us in choosing intelligently, rather than randomly, which experiments will have the highest chance of yielding answers that will help us.

It will come as no surprise that I’m a great fan of fellow QS member Seth Roberts – a modern pioneer and champion of self-experimentation. At a recent QS Meetup, Seth drew lots of attention from the crowd when he discussed the results of his Butter-Arithmetic experiment. In fact, there was so much interest that some people decided they wanted to run the same experiment on themselves and pool the data to see if they could replicate his results. This led to Eri Gentry’s Butter-Mind… and Coconut-Mind Study, in which she outlines a good, scientific protocol and writes: “I am currently looking for Butter Mind participants.”

And this is when I became concerned.

In a subtle shift, we went from one person reporting on an experiment he ran on his own body, to a group of people deciding they want to try the same thing, to a public call for participants.

Some would argue that such a call qualifies as an advertisement for a human interventional study, which creates ethical, if not legal, responsibility to establish proper oversight. Specifically, it would require assessment and disclosure of any potential risks of participating and verification that all participants have given voluntary and fully informed consent.

Personally, I think the Butter-Mind experiment is quite safe, and most members of the QS community are likely sufficiently sophisticated to be fully aware of whatever risks it may present. But some might reasonably challenge this, particularly for certain potential participants, and especially when the details of the study are communicated to a wider audience. We already had a cardiologist express concern to Seth about the risks to cardiovascular health of increasing saturated fat consumption.

Most importantly, it is not up to the designers of a study to make the determination of whether research is ethical, whether potential risks and benefits have been properly communicated, and whether informed consent is sufficient. This is the job of an ethical review board.

As the QS movement ventures from simple self-tracking to more sophisticated social experimentation, which offers compelling scientific rewards, there are a couple of options for proceeding.

If there are going to be public calls for participation, then I would strongly urge the QS community to assemble its own ethical review board, according to federal regulations, and to review all studies that in any way seek to actively recruit others to participate.

If we alternatively decide this would pose too great a burden on us self-experimenters, then we need to figure out how to help people with similar interests come together and share data, without anyone “advertising” their study such that it binds us to play by the same rules that were established long ago for pharmaceutical companies.

I am not at all an expert in this, but I think it’s an important distinction that we need to understand and develop rules around.

It would be truly tragic if the nascent QS movement, and its promise for social benefit, became overburdened with regulatory oversight because its pioneers failed to take appropriate safety precautions. The best way to avoid this is to demonstrate that we have considered the ethical issues and can responsibly regulate ourselves.

For those who might think this is excessive, consider what might happen if, in some future experiment, someone who was not fully informed of potential risks ended up seriously harming themselves.

8 Responses to Does QS Need An Ethical Review Board?

I think this is a very valid and clearly expressed issue. To me it’s clear that the Butter-Arithmetic experiment is not a quantified-self experiment, but rather a conventional scientific study albeit conducted by ‘amateurs’ (in the positive sense!).
I am left wondering what the existing regulations are and how they are triggered. Are they attached to federal funding? How do they apply to private corporations?
What is the bright line between, “try my apple pie and tell me what you think”, and an experiment that must be regulated, and is there a simple mechanism e.g. joining a club, that takes such activities out of the public sphere?
I see two potential risks. The risk of harm coming to a participant in a group organized self-experiment, and the risk of legal problems for the organizers of such groups whether or not harm is caused.
I think the first step is for the community to gain an understanding of the existing legal landscape. Are there any lawyers or professors of scientific ethics here who can help us?

Very interesting post. As one of the co-founders of QS, I am obviously one of the most interested in having us proceed ethically here. I have a question for Daniel and others who have raised this issue. (My question is based on respect for Daniel’s good judgement!) How is Eri’s butter experiment different from all the other collaborative activities that go on in the training and fitness world – for instance, in the CrossFit community, where there is active self-experimentation with non-standard diets? (There are many other examples.)
I am NOT saying that we ought to avoid ethical reflection. But I’m not sure the reference to medical experimentation gives us the right frame. The review board mechanism was adopted for medical experimentation for good reason: there is a record of awful scientific experiments on human subjects. For instance, the federal regulations that mandated the establishment of institutional review boards were influenced by a research project that involved withholding penicillin (and concealing diagnosis) from people suffering from syphilis, in order to learn about the progression of the disease.
Is there similar reason to be concerned here, in the case of an experiment that involves eating 4 tablespoons of butter every day for a few weeks? I think that if we focus on common foods and simple, harmless tests, like Seth’s simple math problems, the ethical risks are minimal. But there may be something I am not seeing clearly yet, and I invite others to pipe up.

Eri Gentry designed an experiment.
The editors of this blog judged the experiment as being sufficiently safe and promising to allow Eri to post her call for participants on this blog.
I think that might be developed into a good system.
Post public criteria that have to be fulfilled to allow the person to post their call for participants on this website.

Thanks Daniel for a good and important post. I think the concept of ‘ethical review’ for Citizen Science experiments is one of the most important questions facing this community. I agree that Citizen Science cannot be bound to the traditional ethical review board (IRB) process currently found in medical studies. This process is expensive and time-consuming and would surely slow down the progress of this movement. However, there are real risks to Citizen Science if ethics and safety issues are not dealt with effectively. While I am sure there is no harm in eating excess butter for a few weeks to do a study (now, if participants continue the practice, that is their own risk/choice!), there are real potential risks as the number of citizen experiments grows.
An example I have used previously relates to St. John’s Wort. I am interested in over-the-counter supplements that have been shown to improve patient outcomes (in this case mood and anxiety). I have thought about sponsoring a Citizen Science experiment of St. John’s Wort to see if some of the results in the literature can be replicated in a Citizen Science experiment. However, St. John’s Wort, although available over the counter and common, has some risks. St. John’s Wort is an inducer of the cytochrome P450 enzyme CYP3A4, the most common metabolic pathway for prescription drugs. So taking St. John’s Wort with certain prescription drugs lowers the plasma levels of these drugs, reducing their effectiveness. This could lead to lower plasma levels of drugs known to improve survival of certain patients (e.g., beta blockers) or lower levels of the compounds in birth control pills, leaving a woman vulnerable to pregnancy. While this study seemed like a good idea, I did become worried that, even with sufficient warnings, the study could lead to harm in some participants. Other supplements have the opposite effect and can raise the plasma concentrations of other medications. This could lead to increased side effects or more serious issues such as kidney problems.
The fact is that many drugs and supplements interact, and that drugs or supplements could be studied in Citizen Science experiments outside of their indicated use or at different doses than indicated. As Citizen Science experiments become more common, these risks will become real and need to be managed proactively.
I believe that we, as a community, must find an alternative to the traditional IRB process that is effective and ensures safety of our study participants. I think this process should have ‘layers’ or ‘levels’ founded on community participation. Studies could be posted publicly for comment, and the crowdsourced review could determine the level of scrutiny needed. The community could flag any potential risks or issues that could then be addressed by the sponsor. For studies deemed to be complex, or potentially risky, we could establish a Citizen IRB of volunteers to evaluate studies and offer suggestions. Finally, for very complex or potentially risky studies, the option of a traditional IRB is available. I think we, as a community, should work to develop a risk assessment standard that is applied to all proposed studies, and will ensure the proper level of scrutiny based on the study (from minimal to significant). Like traditional medical studies, our first priority must be to do no harm.

This is and will continue to be one of *the* critical challenges for Citizen Science in the coming years. As datasets become richer, and cheaper and easier to generate, the temptation to move into more complex and hence more risky projects will increase. How will these risks be disclosed and managed?
To the best of my knowledge, there is no clear legal obligation that most Citizen Science projects in this country be reviewed. (Accepting federal funding or conducting research in anticipation of or in support of a federal regulatory approval are the two most common ways to trigger Common Rule compliance requirements.) That does not mean, however, that there is no ethical obligation.
If that ethical obligation is not met, the Citizen Science movement can rest assured that a legal obligation will soon be imposed. Which, as others have suggested, is likely to frustrate the progress of Citizen Science.
How to satisfy that ethical obligation? I think Chris Hogg’s proposal, above, is a reasonable starting point. Self-regulation is always an uncertain business, but that does not mean it is impossible. Just as successful science hinges on the active participation of the community, so too will a successful review process. Perhaps the starting point is a community review board comprised of representatives from the Citizen Science community and its major commercial, non-profit and academic supporters?
There can be no doubt about the stakes. The failure to demonstrate a community-wide commitment to responsible review of projects is likely to send the Citizen Science movement the way of DTC genetic testing: straight into the arms of regulators.

I am very happy you brought this up, Daniel. I learned a lot from your clear description of the problem and the solution starting points that you and the commenters made. This is especially important to me as the creator of a tool for running experiments which has a group experiment feature coming in soon.
A naive question: What are the problems with assuming people will make good decisions when given reasonably-researched information about the risks? I’m thinking of something like CureTogether’s terms: “3. CureTogether Does Not Provide Medical Advice”. In other words, is a legal “cover every possibility under the sun” approach necessary? I’m just wondering what’s rational to assume for intelligent adults who choose to participate…
I’d like to offer to help move this to the next step. I have no experience in this area, but I’m motivated to learn and to help organize. What might the next action be? How about creating a site that explains the problem and then soliciting ideas from the community, starting with the ones outlined here? Seems like the timing is perfect for a kind of self-experimentation manifesto (if that doesn’t sound too grand).
What do you say?
matt (http://www.matthewcornell.org/contact)

Thanks everyone for the positive responses to my post. It sounds like most of us are looking a few steps ahead of the quite safe Butter-Mind experiment to the day when someone proposes something more controversial (Chris Hogg gave the good example of a St. John’s Wort experiment he considered sponsoring). And I’m sure we all prefer to regulate ourselves than have regulation imposed.
I think an efficient way to get this done is for the subset of people who a) care a lot about this issue and b) have some local bandwidth to invest in the necessary research, thinking, and writing to individually propose a minimal set of guidelines and then share them openly with the entire community.
Although I meet the criterion of caring, I unfortunately don’t have the local bandwidth to dive deeply into this. I do strongly suggest keeping it as short and simple as possible.
It sounds like some of the guidelines/principles already bubbling up are:
1. Experiments that actively seek participants should have research done on potential risks of participation and should disclose those clearly to all potential participants. There should be a mechanism for the lone cardiologist to express concern about excess saturated fat consumption and have that be visible to all.
2. Informed consent – have any participants somehow demonstrate they have understood the risks (e.g. take a simple test?)
3. An open review process
4. Involvement of outsiders
5. Do no harm (though I personally prefer “maximize expected benefit”, I think it creates too much risk in this case)
Hopefully someone reading this has the bandwidth to take these bits and develop them into something more complete, which I look forward to reading!
Thanks!

I want to raise another ethical concern. To wit, what are the consequences of successes in the QS movement that become received wisdom among the public at large and are appropriated by the medical/insurance communities? What happens when some form of self-quantification is transformed from a rich man’s (note the gender here, and see my comments on the web site of CALIT(2)) pastime into a national obligation? Will we be arresting people for eating a second helping of peas? Will we be fining the obese or the people who are short of lung capacity? Will diet be no longer a matter of choice but a matter to be regulated? Will food be rationed to individuals on the basis of what is thought to be best for their health? Will the gradual course that bans on smoking have taken be accelerated for bans on this and that? What will happen on the day I get up and don’t particularly feel like running five miles, thank you? What happens to the person who says, “Please leave me alone”? I would hate to see a pseudo-scientific “Strength Through Joy” or “Joy Through Strength” epoch in much the same way I would hate to see the erasure of the separation of church and state. In short, is QS prepared to take on SUCCESS as an ethical question?
