Preparing for the Impact of Artificial Intelligence on Education in Australia

Kalervo Gulson, Sam Sellar, Andrew Murphie and Simon Taylor argue that we can act now to ensure the Australian experience of artificial intelligence in schools is a positive one.

Artificial Intelligence (AI) is rapidly becoming a central part of contemporary life. However, AI is being introduced into education policy areas, specifically K-12 systems and schools, much faster than either research on its effects or regulation on its use.

In this article we will highlight some key areas relevant to education[i] including the connection between skills and AI, and possible ways to respond to, and prepare for, AI not only in schools but in broader society. We conclude with recommendations and links for helping students and citizens to learn more about AI.

AI broadly refers to autonomous computer systems that employ algorithmic networks to learn from patterns in large data sets in order to improve predictive abilities (Russell & Norvig, 2016; Walsh, 2016). The application of AI in combination with ‘big data’ promises new opportunities to solve complex and intractable social and political problems (Elish & boyd, 2017), but along with the opportunities AI brings there is a need for caution.
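The core idea above, a system that learns a rule from examples rather than being given the rule, can be illustrated with a deliberately tiny sketch. This is an illustration only: real AI systems use vastly larger models and data sets, and all numbers here are invented for the example.

```python
# Toy illustration of pattern learning: the program is never told the
# rule y = 2x + 1; it estimates it from example data, then predicts
# unseen cases. Real systems differ only in scale and complexity.

data = [(x, 2 * x + 1) for x in range(10)]  # examples of an unknown rule

w, b = 0.0, 0.0        # model parameters, initially uninformed
lr = 0.01              # learning rate: size of each adjustment
for _ in range(2000):  # repeatedly nudge parameters toward the data
    for x, y in data:
        pred = w * x + b   # current guess
        err = pred - y     # how wrong the guess was
        w -= lr * err * x  # adjust parameters to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # learned parameters, close to 2 and 1
print(round(w * 50 + b))         # prediction for an input never seen in training
```

The point of the sketch is that the "knowledge" ends up encoded in numeric parameters learned from data, which is also why questions of bias in training data, discussed below, matter so much.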

Education, skills and AI

There is consensus that the automation associated with Artificial Intelligence will substitute for some tasks and workers, although the nature and extent of this substitution vary. Furthermore, Hajkowicz et al. (2016) argue that to meet future workforce challenges Australian society will need to provide young people with the right skills for current and future demands, as well as providing workplace and lifelong learning to facilitate re-training.

In one of the first reports on AI in education, Luckin et al. (2016) warn against being seduced by new technology and argue for sustaining a strong focus on pedagogy. When it comes to what have been termed ‘21st century’ skills, Heckman (2011) emphasises the need for focus on ‘attentiveness, perseverance, impulse control, and sociability’ (p. 33).

Many frameworks of skills have been identified in the research literature and in national curricula. However, there appears to be some agreement regarding the broad categories of skills that are important, which include the kinds of cognitive skills that have traditionally been emphasised in formal education along with non-cognitive skills (both inter- and intrapersonal) and skills that enable people to interact effectively with information and communication technologies (ICT).

As Campolo et al. (2017) observe, the ‘[e]thical questions surrounding AI systems are wide-ranging, spanning creation, uses and outcomes’ (p. 30). In what follows we focus on: the ethical development and use of AI; preparing citizens for an AI world, which in this article can include students of all ages; and the application of AI in public policy areas like education.

Ethics and AI

It is important to broaden the types of professionals involved in developing AI (Campolo et al., 2017). Lack of diversity among developers will need to be addressed through strategies for improving the gender imbalance in STEM education (OECD, 2018). Luckin (2017) has also called for educationalists to work with AI developers, writing that ‘everyone needs to be involved in a discussion about what AI should and should not be designed to do’ (p. 121). As Campolo et al. (2017) observe, ‘training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural assumptions and inequalities’ (p. 4).

Education, health and other social policy areas are ‘high stakes’ domains for the implementation of AI and it will be important to take measures to avoid biases in decision-making in relation to determining capacity to learn, risk of disease, medical diagnoses and so on.

Regulation and data privacy

As machine learning and algorithms are increasingly embedded in the mediated infrastructure of everyday life, we will need mechanisms to increase transparency, regulation and algorithmic literacy, as well as ways to monitor what algorithms are doing in practice and to create effective accountability mechanisms (Ananny, 2016). This will include identifying areas of regulation that need to be revised or newly created.

As corporations provide and manage data systems in education (Williamson, 2017), key questions arise: what happens to student, parent and other forms of data when they are used in these systems? Who owns the data, and who has access to it (Zeide, 2017)?

Some suggestions point to the importance of individual ownership of data and opt-in rather than opt-out programs (Tene & Polonetsky, 2012). We might look at the use of Google Mail in schools as one example where opt-in could be trialled.

Use of AI in public agencies like schools

Much of the provision of automated systems occurs under the proprietary knowledge of corporations, and there have been calls for core public agencies, such as those responsible for criminal justice, healthcare, welfare and education (that is, “high stakes” domains), ‘…[to] no longer use “black box” AI and algorithmic systems’ (Campolo et al., 2017, p. 1). ‘Black box’ here means that the workings of these systems are either secret (due to proprietary knowledge) or cannot be known, because of the ways in which calculations are made by some forms of Artificial Intelligence.
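The contrast between an auditable rule and a 'black box' can be made concrete with a small sketch. Everything below is hypothetical: the function names, attributes and numbers are invented for illustration and do not describe any actual system used in schools.

```python
# A hedged sketch of the 'black box' contrast: a transparent rule can
# be read and contested line by line, while a learned model's numeric
# weights reveal little about *why* a particular decision was made.
# All names and numbers here are hypothetical.

def transparent_rule(attendance, grade_avg):
    # Every condition is visible, explainable and contestable.
    return attendance >= 0.9 and grade_avg >= 50

# An opaque model is, in effect, a list of learned numbers; the
# decision emerges from arithmetic that no human wrote directly.
weights = [0.73, -1.42, 0.08]  # produced by training, not by a person

def opaque_model(attendance, grade_avg):
    score = weights[0] * attendance + weights[1] * (grade_avg / 100) + weights[2]
    return score > 0  # why this threshold? the weights do not say

print(transparent_rule(0.95, 60))  # a decision we can explain exactly
print(opaque_model(0.95, 60))      # a verdict with no built-in explanation
```

Even this simple linear model resists explanation once the weights come from training; the deep learning systems referred to above compound the problem with millions of such parameters.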

It is clear that as some decision-making becomes automated, there needs to be an acknowledgement of the narrowness that can emerge from automation if it lacks context. That is, system and school-based administrators ‘will need to rethink how they formulate goals and use data, while acknowledging the limits and risks of automated systems’ (Campolo et al., 2017, p. 13), especially the possibility of missing important contextual details that shape complex social domains like education.

Additionally, educators have expressed concerns about the de-humanising effects of introducing robots into classrooms, as well as potentially encouraging authoritarian or dependent attitudes among children. There will be a need to consider whether new forms of AI-driven pedagogies may work at cross-purposes to curricula focused on human values, including the question of ethical uses of AI itself (Serholt et al., 2017).

Conclusion: What should we learn in an AI society?

Debate about how AI will reshape society in the next few decades focuses upon the question of whether technological change will be different this time, as compared to previous periods of significant disruption.

One of the most important things for those who work, teach and learn in schools is to become aware of how AI works, and what it can and, just as importantly, cannot do. Short of everyone becoming a computer scientist, there are already attempts by national governments to provide avenues for citizens to become informed not only about what AI might mean for society, but also about how AI works.

While anyone could take one of the well-known Coursera courses on AI, the Finnish government has provided a free online course, open to anyone with internet access, with the aim of teaching the basics of AI to 1% of the population, approximately 55,000 people[ii]. In Australia, the NSW Department of Education has begun to commission reports, hold events and provide relevant resources, including a free collection of materials available online.

For educators and regulators alike, it is important to examine issues such as those outlined above whenever new proposals for automation and AI are put forward. It should not be assumed that AI providers or the creators of algorithms will do this work. The teaching profession and education authorities will need to invest resources and time into learning about, understanding and developing these new technologies together, preferably before they become too widespread in our education systems and schools.