News & Events, McCormick School of Engineering, Northwestern University

Approaching AI with Common Sense in Mind

Ken Forbus’s new book and work highlight the importance of practical reasoning in understanding and designing intelligent systems

Mar 12, 2019 //

Alex Gerage

Consider a water pipe on a cold winter day. You understand that if the air temperature outside is low enough, the water in the pipe could freeze, causing pressure in the pipe to build until it bursts open once the water thaws. You hope it doesn’t happen, and it’s possible that it won’t. Still, you don’t need a degree in meteorology or physics to realize that if the water temperature dips below 32 degrees Fahrenheit, your dry basement could be in trouble.

Assessing this situation by breaking up its continuous variables and possibilities into discrete parts is an example of qualitative representation. It’s a key aspect of common-sense reasoning that guides human decision-making every day. And it’s a topic that Northwestern Engineering’s Ken Forbus has studied for more than 30 years.
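The core idea can be sketched in a few lines of code. This is a hypothetical illustration of qualitative representation in general, not code from Forbus's systems; the state names and thresholds are chosen to match the frozen-pipe example above.

```python
# Illustrative sketch: carve a continuous quantity (temperature) into a
# small set of qualitatively meaningful states. The reasoning that follows
# only needs the state, not the exact number.

FREEZING_F = 32.0  # freezing point of water, a "landmark" value

def qualitative_state(temp_f: float) -> str:
    """Map a continuous temperature reading to a discrete qualitative state."""
    if temp_f < FREEZING_F:
        return "below-freezing"   # water in the pipe may freeze and expand
    elif temp_f == FREEZING_F:
        return "at-freezing"      # the landmark itself: phase transition
    else:
        return "above-freezing"   # liquid water, no freeze risk

def pipe_at_risk(temp_f: float) -> bool:
    """Decide risk using only the qualitative state."""
    return qualitative_state(temp_f) == "below-freezing"

print(qualitative_state(20.5), pipe_at_risk(20.5))  # below-freezing True
print(qualitative_state(55.0), pipe_at_risk(55.0))  # above-freezing False
```

Whether the reading is 20.5 or 31 degrees makes no difference to the conclusion; both fall in the same qualitative region, which is exactly the kind of abstraction the article describes.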

“As humans, we have the ability to reason about continuous properties as a way of navigating the world around us,” Forbus said. “People understand and use numbers, but we also have another qualitative layer of representation, which allows us to carve up the world into meaningful units to make sense of things that might happen.”

Intelligent Systems Have Something to Learn

While qualitative representations are natural to humans, AI systems have struggled to demonstrate similar common sense. Forbus has experienced these shortcomings firsthand. His book is a resource for AI scientists to learn how understanding qualitative reasoning in humans can lead to smarter computer systems equipped with practical reasoning skills.

“Google Trips once sent me a flight receipt that included myself, my wife, and three other passengers,” Forbus said. “One passenger was the name of the villa we were staying at in Italy, another was the street the villa was located on, and the third was another hotel in the area.”

Working at the intersection of cognitive science and AI, Forbus and his team in the Qualitative Reasoning Group are bridging the gap between AI and the broader physical world by incorporating qualitative representations within software systems. They are creating intelligent systems featuring spatial reasoning, vision, and natural language understanding — mimicking the same traits that help people reason qualitatively — to provide new tools for education, tutoring, and more.

“If you want to build a system that acts as an instructor, the software has to see things the way a person does well enough to really communicate with them,” Forbus said. “My group has focused on natural language and sketch understanding systems that learn by reading, watching the world, and interacting with people.”

One of Forbus’s most ambitious projects, CogSketch, is an AI platform that solves visual problems through sketch-based understanding in order to give immediate, interactive feedback. The book discusses both CogSketch and an offshoot, called “Sketch Worksheets,” software that offers feedback on student sketches by comparing them to their instructor’s template.

The book also touches on Forbus’s work on the Companion cognitive architecture, the basis for AI “software social organisms” that use natural modalities to work with human counterparts. Visitors to the third floor of Northwestern’s Seeley G. Mudd Building, the new home of the Department of Computer Science, can interact with a Companion system in the form of an intelligent kiosk. Forbus collaborated with Microsoft Research to develop the kiosk, which uses speech, natural language understanding, and vision to assist users by assessing their requests and directing them to faculty offices, classrooms, and collaboration spaces.

“We need AI systems that can live and learn over weeks, months, and years without needing teams of engineers to manage them,” Forbus said. “Imagine having your own personal AI that worked with you and was on your side. That kind of autonomy is crucial for scaling out AI technology. We’re very excited about that.”