If you have been reading the popular technology press, you have probably read about the hottest Artificial Intelligence trend: Deep Learning! It is everywhere in smart computing: self-driving cars, computers that play strategy games, and the list goes on and on. But, as some critics have pointed out, Deep Learning tends to behave like a black-box model. This means that although Deep Learning can be extremely predictive, it provides little insight into the phenomenon being modeled. In other words, the model learns something, but the humans who built the model cannot explain what was learned.

This is where the cool and sophisticated uncle of Deep Learning comes into play: Bayesian Networks. Bayesian Networks have been tested time and again in educational applications. You can design them to be shallow, and they are interpretable and rigorous enough to drive the statistics needed to design a high-stakes application. This is an important difference between Bayesian Networks and Deep Learning.

Consider this: a loved one is taking an assessment that will determine whether they are admitted to college. It is unlikely that measurement scientists are going to rely on Deep Learning. High-stakes instruments should not be based on statistical models that nobody can explain. In other words, educational products need accountability. Bayesian Networks allow us to achieve as much interpretability as needed. However, you can also design deep Bayesian Networks, and then they start to resemble the Deep Learning methods that are so in vogue.
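To make the interpretability point concrete, here is a minimal sketch of the smallest possible Bayesian Network for assessment: one latent "skill" node and one observed item response. Every number below (the prior, the slip rate, the guess rate) is a hypothetical value chosen for illustration, not drawn from any real instrument, but each one has a plain-language meaning a measurement scientist can defend:

```python
# A two-node Bayesian Network: latent Skill -> observed item response.
# All probabilities are hypothetical, chosen only for illustration.

prior_skill = 0.50               # P(mastered) before seeing any response
p_correct_given_skill = 0.90     # P(correct | mastered): a 10% "slip" rate
p_correct_given_no_skill = 0.20  # P(correct | not mastered): a 20% "guess" rate

def posterior_skill(correct: bool) -> float:
    """P(mastered | observed response), computed by Bayes' rule."""
    if correct:
        like_skill = p_correct_given_skill
        like_no_skill = p_correct_given_no_skill
    else:
        like_skill = 1 - p_correct_given_skill
        like_no_skill = 1 - p_correct_given_no_skill
    numerator = like_skill * prior_skill
    evidence = numerator + like_no_skill * (1 - prior_skill)
    return numerator / evidence

print(posterior_skill(True))   # belief in mastery rises after a correct answer
print(posterior_skill(False))  # and falls after an incorrect one
```

The point of the sketch is that every quantity in the model is a named, inspectable probability: if an admissions decision is challenged, you can point to the exact prior and the exact slip and guess rates that produced the score, something a deep network's weights do not offer.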

If you want to learn more about the role of Bayesian Networks in assessment, you may be interested in checking out our chapter in the Handbook of Cognition and Assessment! In the chapter we discuss the conceptual foundations of Bayesian Networks, walk through their basic graphical and formula representations, and discuss their different generalizations as a measurement framework: