Schools are using AI to track what students write on their computers

Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more common, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.

Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.

But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?

Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm. When the Gaggle algorithm surfaces a word or phrase that may be of concern, like a mention of drugs or signs of cyberbullying, the “incident” is sent to human reviewers before being passed on to the school.
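The general shape of such a pipeline can be sketched in a few lines. This is a hypothetical illustration, not Gaggle’s actual system: the watchlist, function names, and review queue here are all invented for the example.

```python
# Hypothetical sketch of a keyword-flagging pipeline like the one described
# above. The term list and logic are illustrative only, not any vendor's
# real product.

CONCERN_TERMS = {"drugs", "hurt myself"}  # invented watchlist for illustration


def flag_incidents(posts):
    """Return the subset of posts containing a watchlist term.

    In a real system, flagged posts would be queued for human reviewers,
    who decide whether the school ever sees them.
    """
    incidents = []
    for post in posts:
        text = post.lower()
        if any(term in text for term in CONCERN_TERMS):
            incidents.append(post)  # held for human review, not sent directly
    return incidents


review_queue = flag_incidents(["Math homework is done", "thinking about drugs"])
```

Note that the human-review step is what separates this from fully automated surveillance: the algorithm only narrows thousands of posts down to a queue that people then judge.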

Safety management platforms (SMPs) like these help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.

In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.