Hidden Markov Models

In this tutorial we'll begin by reviewing Markov Models (aka Markov
Chains) and then...we'll hide them! This simulates a very common
phenomenon... there is some underlying dynamic system running along
according to simple and uncertain dynamics, but we can't see it. All
we can see are some noisy signals arising from the underlying system.
From those noisy observations we want to do things like predict the
most likely underlying system state, or the time history of states, or
the likelihood of the next observation. This has applications in fault
diagnosis, robot localization, computational biology, speech
understanding and many other areas. In the tutorial we will describe
how to happily play with the mostly harmless math surrounding HMMs and
how to use a heart-warming, and simple-to-implement, approach called
dynamic programming (DP) to efficiently do most of the HMM computations
you could ever want to do. These operations include state estimation,
estimating the most likely path of underlying states, and, as a grand
(and EM-filled) finale, learning HMMs from data.
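To give a flavor of the dynamic programming involved, here is a minimal sketch of state estimation (the forward algorithm) in Python. The two-state "Rainy/Sunny" model and all of its probabilities below are a made-up illustration, not an example from the slides; the recurrence itself is the standard forward filter, costing O(T·S²) for T observations and S states.

```python
def forward_filter(init, trans, emit, obs_seq):
    """Dynamic-programming state estimation for an HMM.

    init[s]      : P(state_1 = s)
    trans[s][s2] : P(state_{t+1} = s2 | state_t = s)
    emit[s][o]   : P(observation o | state s)
    Returns a list of filtered distributions P(state_t | obs_1..t).
    """
    n = len(init)
    # alpha[s] is proportional to P(state_t = s, obs_1..t)
    alpha = [init[s] * emit[s][obs_seq[0]] for s in range(n)]
    z = sum(alpha)
    alpha = [a / z for a in alpha]          # normalize to avoid underflow
    history = [alpha]
    for obs in obs_seq[1:]:
        # One DP step: sum over predecessor states, then weight by emission.
        alpha = [sum(alpha[s] * trans[s][s2] for s in range(n)) * emit[s2][obs]
                 for s2 in range(n)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]
        history.append(alpha)
    return history

# Hypothetical two-state model: states 0=Rainy, 1=Sunny;
# observations 0=walk, 1=shop, 2=clean.
init = [0.6, 0.4]
trans = [[0.7, 0.3],
         [0.4, 0.6]]
emit = [[0.1, 0.4, 0.5],
        [0.6, 0.3, 0.1]]
beliefs = forward_filter(init, trans, emit, [0, 1, 2])
```

The same DP skeleton, with `sum` replaced by `max` (plus back-pointers), yields the Viterbi most-likely-path computation mentioned above.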

Powerpoint Format: The Powerpoint originals of these slides are freely available to anyone
who wishes to use them for their own work, or who wishes to teach using
them in an academic institution. Please email
Andrew Moore at awm@cs.cmu.edu
if you would like him to send them to you. The only restriction is that
they are not freely available for use as teaching materials in classes
or tutorials outside degree-granting academic institutions.

Advertisement: I have recently joined Google, and am starting up the new Google Pittsburgh office on CMU's campus. We are hiring creative computer scientists who love programming, and Machine Learning is one of the focus areas of the office. If you might be interested, feel welcome to send me email: awm@google.com .