A Praxis of Information Justice in Higher Education

Powerful social critiques of contemporary information technologies have produced a scholarly consensus against a neutral, realist understanding of data of any size: data is a subjective construction, not an objective representation of reality. A growing body of work on data, metrics, and algorithms approaches information from a critical-constructive perspective that rejects technological neutrality and shows that data is inherently constructed.

Cathy O’Neil describes predictive analytics as “opinions embedded in math” and “the new phrenology,” while Maciej Cegłowski calls it “money laundering for bias.” In all of these views, the process by which data comes to exist is driven by social factors and riddled at best with unexamined assumptions and values and at worst with dangerously prejudicial biases that remain hidden behind claims of objectivity and neutrality. This work should put to rest the idea that one can’t argue with the data. Indeed, one must argue with the data in order to use data meaningfully and effectively.

Professional Standards for Justice in Analytics

This does not mean that big data is inherently bad, or even inherently dangerous. Data scientists have begun to recognize the ethical challenges involved in big data and predictive analytics and in response have begun developing codes of professional ethics that go beyond information privacy. Higher education learning analytics especially has been the subject of robust analysis, and several ethical frameworks for the field have been developed to varying extents. Sharon Slade and Paul Prinsloo identify six principles of justice in learning analytics:

1. Learning analytics must be understood as a moral practice.

2. Students must be understood as agents.

3. Identity and performance must be understood as dynamic constructs rather than essential characteristics.

4. Success must be understood as complex and multidimensional.

5. Universities must be transparent about their purposes.

6. Universities must use learning analytics to improve outcomes.

These principles are certainly consistent with the understanding of information justice presented here and, as Slade and Prinsloo operationalize them, valuable for guiding practice in learning analytics.

Considerations such as those Slade and Prinsloo offer are the basis for a growing number of guidelines and frameworks seeking to establish professional ethics or codes of conduct in learning analytics, often built on regulatory regimes in other areas of information practice. Based on the Association for Institutional Research Statement of Aspirational Practice and the findings of a working group of researchers and vendors, Rachel Boon identified seven steps for sharing data with students that promote both transparency and shared understandings of data collection. Jisc, which provides technology services to the UK higher education sector, identified eight principles for post-secondary institutions’ use of learning analytics: responsibility, transparency and consent, privacy, validity, access, enabling positive interventions, minimizing adverse impacts, and stewardship of data. The New America Foundation, which was a major force in developing the Completion Agenda, recently proposed five guiding principles for learning analytics that are more operationally oriented than, for example, Slade and Prinsloo’s. Perhaps the best-known set of standards is that established by The Open University in the UK. Its policy sets out in detail both the business case for using analytics and the context and concerns its use presents; identifies specific data that is and is not expected to be used in learning analytics; incorporates existing university policy and oversight processes; and establishes eight principles for using student data ethically to provide student support. The policy was extensively publicized when The Open University instituted it, and it has been held up as an innovative model for other universities.

The immediate virtue of such standards is that they make the realist view of learning analytics untenable. OU’s insistence that learning analytics support the university’s mission and that the data be free from bias draws attention to the possibility that some applications would not do so. They also nearly universally recognize students as moral agents, which works quite strongly against more manipulative applications of learning analytics. Especially encouraging is the promotion or adoption of such standards by vendors and interest groups. Jisc and New America could very easily have continued promoting a dangerously naïve view of learning analytics; instead, their participation in these discussions legitimizes the issues and compels a more critical viewpoint. And through their connection with the professional and educational organizations that train data scientists and operate data systems, codes of professional ethics institutionalize these principles in ways that can counter institutionalized data systems, creating logics of appropriateness that, for example, make the “Drown the Bunnies” model quite literally unthinkable. Such statements are important steps toward information justice.

But these professional codes do have noteworthy weaknesses. Most pay scant attention to the kinds of structural conditions and power relationships that Slade and Prinsloo and I have emphasized. Boon’s first principle, for example, is to “Determine the full range of data available.” That is wholly inadequate: a key principle of information justice is that injustice often originates in the way data is created in the first place. If we begin from the data we have, we are quite likely to miss the injustices present in that data and then institutionalize those injustices in student support programs.

The emphasis on transparency in all of these models disregards the challenges that open data can create in securing justice; transparency requirements are frequently posited alongside privacy protections as if the two were entirely compatible. Indeed, privacy is often treated as if it were the only ethical concern in learning analytics.

And most such approaches understand ethics as a matter of ensuring good faith. New America, for instance, believes that ethics can be achieved through advice such as “convene key stakeholders to make important decisions” and “design predictive models and algorithms so that they produce desirable outcomes.” As the “Drown the Bunnies” case illustrated, who counts as a key stakeholder and what constitutes a desirable outcome depends significantly on the organizational and power structures of the university. For these statements to fully achieve their potential in a praxis of information justice—and for them to avoid being mere paper declarations that do little to influence actual outcomes—they need to be informed by an overarching concept of information justice.

Information Justice as Praxis

“If we accept that higher education is a ‘moral and political’ practice,” as Paul Prinsloo and Sharon Slade argue, “information justice as praxis can act as a powerful counter-narrative to the current hegemony of ‘techno-solutionism’ and the discourses of ‘techno-romanticism’” (2017, p. 121; citations omitted). Information justice can result when a coherent theory of information politics is both informed by information practices and shapes our choices in the design and use of information and information systems. Ultimately a praxis of information justice must work from four key principles.

Context. The creation and use of data are social and political practices, with associated consequences. Understanding those contexts requires ongoing work with information ethicists and practitioners, going beyond information technologists to include at the least activists, legal and policy specialists, and journalists. One of the key questions here concerns the ways that information functions as a public good.

Critique. The injustices present in existing information practices have both distributive and structural dimensions that must both be understood in order to address them. We live in a well-established information environment, and cannot simply propose a new environment de novo. Critiques of that environment and the structures that create and sustain it are necessary for a theory of information justice that is not merely abstract utopianism.

Charge. Positive principles for justice in information and information systems can be based on ethics of care, capabilities, and restorative approaches to justice. It isn’t enough to critique; negative guidance (i.e., “Don’t do that!” whether in the form of a claim of injustice or an assertion of an inviolable right) only gets a data scientist so far. Those designing new information systems will need guidance in building systems that promote information justice. Justice frameworks that posit positive obligations and not just negative injunctions are most likely to develop principles that charge data scientists with promoting positive action.

Culture. Specific information practices that promote justice must be not only proposed but institutionalized. These practices can be reflected in formal standards such as codes of ethics and public policies, as standard elements of theoretical models of information systems, and in educational practices as model problems and solutions for aspiring data scientists.

Certainly, there is much more work to be done in building a praxis of information justice—and happily, there is a growing, multidisciplinary community of excellent researchers and practitioners working on the problems. I am very excited to see where the praxis of information justice goes from here.

By looking at big data through a justice framework, which emphasizes collective, social, and commons considerations rather than just the good faith of individual actions, we can identify at least some practices that might prove generally useful. I can think of nothing more important to the pursuit of information justice than making information politics explicit. The political background and consequences of data must be consciously considered in the practice of information. Data scientists routinely speak of “data provenance”: the origin, source, and process that account for the data, ideally documented through a series of records. Data provenance needs to be analyzed not just for its technical aspects (e.g., how reliable and valid the data is) but for its social aspects as well (e.g., the justification for coding the data the way that it was). Any claim that data is objective, realist, value-free, or apolitical must be seen as a political claim itself. And normative assumptions must be considered as important as empirical ones in understanding the soundness of information systems.
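As an illustration of what socially aware provenance analysis might look like in practice, the sketch below extends a provenance record beyond technical lineage to the social questions just described. The ProvenanceRecord class, its field names, and the example values are illustrative assumptions of mine, not an established provenance standard.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Provenance metadata covering both technical and social aspects of a dataset."""
    source: str                 # where the data originated
    collected_by: str           # who collected it
    coding_rationale: str       # why variables were coded as they were
    known_exclusions: list = field(default_factory=list)        # groups or cases left out
    normative_assumptions: list = field(default_factory=list)   # value judgments embedded in the data

    def audit_questions(self):
        """Surface the social-provenance questions a reviewer should ask."""
        questions = []
        if not self.coding_rationale:
            questions.append("Why was the data coded this way, and who decided?")
        for group in self.known_exclusions:
            questions.append(f"Who is affected by excluding {group}?")
        for assumption in self.normative_assumptions:
            questions.append(f"Is the assumption '{assumption}' justified?")
        return questions

# A hypothetical record for a learning-analytics dataset
record = ProvenanceRecord(
    source="student information system",
    collected_by="institutional research office",
    coding_rationale="",
    known_exclusions=["part-time students"],
    normative_assumptions=["six-year graduation defines success"],
)
for question in record.audit_questions():
    print(question)
```

The point of the sketch is simply that the normative metadata travels with the data, so that a missing coding rationale or an embedded definition of “success” becomes a visible question rather than a silent default.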

Another central problem of information justice is exclusivity: individuals, their experiences, their values, and their interests are left out of information systems by the data collection process, the dissemination process, or the operation of the system as a whole. It seems likely, then, that a practice of information justice will be built around forms of pluralism. Information pluralism would embrace, rather than problematize, the “messiness” of data. Rather than seeing conflicting data as inherently erroneous, it would encourage information systems to be designed to incorporate and highlight differences in data, identifying them as moments of conflict among assumptions and values to be resolved through social rather than algorithmic means.
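A minimal sketch of that design principle, assuming a simple record format of my own invention: rather than collapsing disagreements between sources with a rule such as majority vote or most-recent-wins, the system preserves each source’s value and flags the conflict for human deliberation.

```python
from collections import defaultdict

def find_conflicts(records):
    """Group records by subject and attribute; flag disagreements instead of resolving them.

    Each record is a (subject_id, attribute, value, source) tuple. Where sources
    disagree, all values are returned, attributed to their sources, so the
    conflict can be resolved socially rather than algorithmically.
    """
    values = defaultdict(set)
    sources = defaultdict(list)
    for subject, attr, value, source in records:
        values[(subject, attr)].add(value)
        sources[(subject, attr)].append((source, value))
    return {
        key: sources[key]
        for key, vals in values.items()
        if len(vals) > 1  # more than one value: a moment of conflict, not an error
    }

# Hypothetical example: two offices disagree about one student's status
records = [
    ("s1", "enrollment_status", "full-time", "registrar"),
    ("s1", "enrollment_status", "part-time", "financial_aid"),
    ("s2", "enrollment_status", "full-time", "registrar"),
]
conflicts = find_conflicts(records)
# ("s1", "enrollment_status") is flagged with both attributed values; "s2" is not
```

The design choice is the interesting part: the function deliberately has no tie-breaking logic, because in this view choosing between the registrar’s and the financial aid office’s codings is a question of assumptions and values, not of data cleaning.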

Information pluralism could take advantage of big data’s increasing ability to process narrative and unstructured data and to seek solutions built on the diversity of individual cases rather than the central tendency of the dataset. And it could incorporate the myriad values that compete for the attention of technologists: openness, efficiency, privacy, security, benefit. This would be joined to a kind of participative pluralism, in which information systems are designed with the participation of all actors who are part of the system, including those who will serve as the data points and as the objects of decisions based on the information. Such a system would reflect concepts of “deliberative development” or “collaborative transparency,” where concerns with transparency are mediated by the countervailing power of public participation.

Especially important to information pluralism is encouraging participation in the development of data: what one might call “foundational open data.” This approach recognizes the virtues of open data, and in particular the need for open data as a condition of examining the politics of an information system or practice. As long as the data is closed and the algorithms black-boxed it is very difficult to examine the processes, assumptions, and biases of the system. But opening data is often a path toward exacerbating the injustices built into the data. A more promising process would be to make the development of the data itself an open process in which the subjects of the data are included in its development. Making data open at its foundation rather than after its development would at the least allow those challenged by information initiatives to expose the politics in the process to examination, and may well provide inputs that lead to more just data systems.

But a theory, even one oriented toward praxis rather than abstraction, is not enough to make change on its own. Eleanor Saitta was, in the tweet that started this project, right to call not for a data justice theory but a data justice movement. Organizations such as Data Justice, Cardiff University’s Data Justice Lab, the Data & Society Research Institute, and Color of Change promote political contestation of information systems and practices, with the result that the principles of information justice can influence outcomes and promote social and political change.

These organizations’ work supports the most promising political strategy for challenging existing information practices, exploiting gaps in information systems. Data politics is inevitable but not deterministic. Gaps in political and information systems are always present, and can be used to promote more virtuous data politics, developing counter-narratives and undermining seemingly hegemonic institutions. They also carry out one of the most vital roles for an information justice movement, building the capabilities needed for participation in information systems, both skills and technologies. Ultimately it is the organizations in civil society, not philosophers, that make it possible for marginalized groups to participate collaboratively or to challenge embedded power structures in information systems.

It remains vital that the praxis of information justice and the social movements contesting information practices be understood as complementary; there is neither a hierarchy nor a division of labor in information justice. An intellectual framework for understanding information justice is, one hopes, indispensable for those who wish to bring it about. It can direct attention to possible causes and solutions and provide paradigmatic cases that serve as starting points for action. The act of developing and maintaining such a theory also offers a critical perspective on the practice of an information justice movement. But, though each in their own way, the scholar is no more privileged than the programmer, the bureaucrat, or the activist. The critical perspective that the philosopher or the social scientist takes on an information system is applicable to academic work as well, and it is as difficult to apply from inside as any other. A close relationship between activists and theorists provides challenges to theory from practice that allow for theoretical growth.

A praxis of information justice is desperately needed today, not just in so-called “information societies” but globally, north and south. We can pursue data in good faith without any kind of ethical malice and, because of the structural injustices in data, still produce unjust outcomes. Exhortations to be more ethical as individuals are welcome but insufficient to make much headway toward a more just information environment. Thorny issues remain hidden in the details, to be sure. But as information becomes a primary, public good, we will have no choice but to understand information justice as an essential element of a just society.