Justin Reich is the executive director of the MIT Teaching Systems Lab, a fellow at Harvard's Berkman Center for Internet & Society, and the co-founder of EdTechTeacher. Beth Holland is an instructor and communications coordinator at EdTechTeacher, which works with teachers, schools, and districts to leverage new technologies to improve student learning.

The Ethics of Educational Experiments

When CopyrightX launched the first version of its course, it notified students enrolling in the course of an important feature: that students would be assigned to one of two curricula being tested. One curriculum focused more on traditional U.S. case law; the other included more global examples and more secondary-source explanation of concepts. Students were randomly assigned to one of the two curricula, and in the end, the course staff decided that the U.S. case law version was superior for a number of reasons.

Educational experiments, like the one in CopyrightX, raise a number of ethical questions. Was it ethical to try out a new curriculum with students? Did it change the ethics of the situation that the course was being taught for free? Typically, experiments happen between iterations of a course (a teacher tries one thing in Fall '13 and another in Fall '14). Was it ethical to experiment on the same cohort of students within a single course?

One feature of the CopyrightX example is that students were well informed of the nature of the experiment in advance. How do the ethics change when students are not informed in advance, especially in circumstances where they cannot be informed in advance? In the past few weeks, an unusual experiment took place in a Coursera course, where a professor taught a MOOC for a week and then deleted all of the content. The reasons for the experiment are not totally clear--it seems to be an effort to provoke students into interaction and course ownership and to question the relationship in MOOCs between faculty, for-profit learning management system providers, and universities (more on this from George Siemens and Jonathan Rees). Suffice it to say, though, that if you tell students in advance that you are going to delete all content after a week, it doesn't have quite the same effect.

In 2012, for one week, Facebook changed an algorithm in its News Feed function so that certain users saw more messages with words associated with positive sentiment and others saw more words associated with negative sentiment. Researchers from Facebook and Cornell then analyzed the results and found that the experiment had a small but statistically significant effect on the emotional valence of the kinds of messages that News Feed readers subsequently went on to write. People who saw more positive messages wrote more positive ones, and people who saw more negative messages wrote more negative ones. The researchers published a study in the Proceedings of the National Academy of Sciences, and they claimed the study provides evidence of the possibility of large-scale emotional contagion.

The debate immediately following the release of the study in the Proceedings of the National Academy of Sciences has been fierce. There has been widespread public outcry that Facebook manipulated people's emotions without following widely accepted research guidelines that require participant consent. Social scientists who have come to the defense of the study note that Facebook conducts experiments on the News Feed algorithm constantly, as do virtually all other online platforms, so users should expect to be subject to these experiments. Regardless of how merit and harm are ultimately determined in the Facebook case, however, the implications of its precedent for learning research are potentially very large.

Many online learning platforms like ASSISTments have been running and reporting on these experiments for years, and many of the recently developed online learning platforms like edX are expanding their capacity to do so. The increased ease of conducting experiments, the increased frequency with which we will run experiments, and the potential for running experiments without student foreknowledge all point toward the need to rethink educational ethics in an age of expanding online learning. Mitchell and I end our op-ed with these thoughts:

[The conversation about research ethics] should happen at all levels of higher education: in institutional review boards, departments and ministries of education, journal editorial boards, and scholarly societies. It should draw upon new research about student privacy and technology emerging from law schools, computer science departments, and many other disciplines. And it should specifically consider the ethical implications of the fact that much online instruction takes the form of joint ventures between nonprofit universities and for-profit businesses. We encourage organizers of meetings and conferences to make consideration of the ethics of educational data use an immediate and ongoing priority. Preservation of public trust in higher education requires a proactive research ethics in the era of big data.

The same is equally true for K-12 education. It's easier than ever to run an educational experiment in an online setting. Students, teachers, parents, schools, and edtech companies all need to be engaged in conversations about how to conduct these experiments in ways that help us understand how to teach better and minimize adverse effects on students.

For regular updates, follow me on Twitter at @bjfr and for my papers, presentations and so forth, visit EdTechResearcher.


The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.