Campbell's Software (Or 'Do Androids Dream Of Electric Cheats')

Education
I look at K-12 policies and practices from the classroom perspective.

You had better have your homework done. Original T-800 Endoskeleton robot used in filming 'Terminator Salvation' at Robots exhibition at Science Museum in London, England on February 7, 2017. (Tolga Akmen/Anadolu Agency/Getty Images)

In the current print issue of Wired, Tom Simonite looks at "When Software Breaks The Rules" with several examples of bots that cheat, what some might call AI bugs. A four-legged virtual robot was supposed to balance a ball on its back; instead, it tucked the ball into a joint. A gripper was accidentally trained to fake gripping by using the camera angle. And a survival simulation yielded an AI species that survived by eating its own children. Simonite was just scratching the surface of a crowdsourced list of AI bugs kept by research scientist Victoria Krakovna (DeepMind). SF writers have long pushed the notion further; what if robots tasked with bringing world peace do so by enslaving human beings?

Call them hacks, cheats, or bugs; these AI behaviors are recognizable as another version of Campbell's Law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." It may seem like a stretch to argue that software involves a social process, but if we're talking about AI that is meant to perform for and interact with humans, I'll argue that a social process is present.

Campbell's Law tells us that if we're not careful about how we define success in a process, we will change how that process operates. One classic (albeit apocryphal) Campbell's Law example is the tale of a Soviet shoe factory. Given a measure of success based strictly on Number of Shoes Produced, the factory produced only children's shoes, which used less material, or, in another version of the story, geared all of its equipment to make only left shoes.

Classroom teachers learn the importance of carefully designing measures for success. Most teachers have that moment in their career when they are grading a test and realize that a student has given an answer that is technically correct, even though it's not at all what the teacher wanted. When you ask a question such as "What are three factors that did not contribute to the start of World War I?" you may mean "In the context of our class discussions and the various possible causes considered by the texts that we read," but as you phrased the question, "Mr. Rogers, the price of tea in China, and my mom's cooking" is a correct answer.

All classroom instruction is subject to Campbell because learning is a social process, and the proof of learning is almost always some artificial external indicator. Software is susceptible because computers must depend on external, measurable, observable data to measure the social process of learning. For instance, once students realize that a teacher bot can only count up aspects of their writing with no regard for sense or accuracy, the student writing will become... well, the technical term is "bad."
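To see how quickly a count-only grader gets gamed, here's a minimal sketch. The grader, its scoring formula, and the "transition word" bonus are all hypothetical, invented purely for illustration; no real product works exactly this way. The point is just that any score built solely from observable surface features, with no regard for sense or accuracy, rewards padding over meaning:

```python
# Hypothetical teacher-bot grader that scores writing purely by counting
# surface features -- word count plus a bonus for "transition words."
# It has no model of meaning, so nonsense can outscore an honest answer.
TRANSITION_WORDS = {"however", "therefore", "moreover", "furthermore"}

def grade(essay: str) -> int:
    """Score = word count + 10 points per transition word (illustrative only)."""
    words = [w.strip(".,;:") for w in essay.lower().split()]
    score = len(words)
    score += 10 * sum(1 for w in words if w in TRANSITION_WORDS)
    return score

honest = "The war began for several interconnected reasons."
gamed = "However therefore moreover furthermore " * 5  # meaningless padding

print(grade(honest))
print(grade(gamed))  # the padded nonsense wins by a wide margin
```

Once students discover the formula, the rational move is to optimize for the indicator rather than the learning, which is Campbell's Law in miniature.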

Part of a teacher's job is to leverage Campbell for a positive effect, to create tasks that can only be completed by completing the desired learning. Context, culture and convenience set rules for completing those tasks, both explicit and understood. Even then, very smart students will look for ways to meet the definition of success by cheating the process while still satisfying the requirements of the indicator. Tell any room full of students, "I want you to climb over that mountain and cross the finish line at the bottom on the other side," and most will sigh and trudge up the marked trail; a small group will always start looking for a passageway through, under, or around the mountain.

Part of doing school is learning the culture, the expectations, the accepted way to do school. If we're going to unleash AI-driven teacherbots in the classroom, we'll have to deal with their tendency to self-hack, their innate susceptibility to Campbell's Law. Isaac Asimov was dealing with this when he created the Three Laws of Robotics, a guarantee that no matter how much the measurement of success pushed robots to distort the process, there were certain lines they wouldn't cross. AI bots in the classroom will need considerably more complex restraints.