Faculty Teaching Undergraduate Courses

No university in the world has more people teaching robotics than Carnegie Mellon, and none has a larger research center devoted to the field.

Our researchers and professors are among the world’s top authorities on computer vision, perception, motion planning, human-robot interaction and robot locomotion. As a result, faculty from the Robotics Institute are uniquely qualified to provide high-quality, state-of-the-art education that delivers both hands-on experience and a deeper perspective on the highly interdisciplinary field of robotics.

Howie Choset is a Professor of Robotics at Carnegie Mellon University. Motivated by applications in confined spaces, Choset has created a comprehensive program in snake robots, which has led to basic research in mechanism design, path planning, motion planning, and estimation. These research topics are important because once the robot is built (design), it must decide where to go (path planning), determine how to get there (motion planning), and use feedback to close the loop (estimation). By pursuing the fundamentals, this research program has made contributions to coverage tasks, dynamic climbing, and mapping large spaces. Already, Choset has directly applied this body of work to challenging and strategically significant problems in diverse areas such as surgery, manufacturing, infrastructure inspection, and search and rescue.

I am an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where I lead the Human And Robot Partners (HARP) Lab. I'm interested in robotics, human-robot interaction, and assistive technology. My research focuses on developing intelligent, autonomous robots that help humans with complex tasks like eating a meal or learning a new skill.…

I am a Senior Systems Scientist at the Robotics Institute of Carnegie Mellon University and the National Robotics Engineering Center (NREC). My work focuses on using robotics for discovery and exploration, and as a workforce for hazardous duty and other demanding applications. Since 1998, I have led recognized robotic programs and produced robotics systems and…

My life goal is to fulfill the science fiction vision of machines that achieve human levels of competence in perceiving, thinking, and acting. A narrower technical goal is to understand how to get machines to generate and perceive human behavior. I use two complementary approaches, exploring humanoid robotics and human-aware environments. Building humanoid robots tests our understanding of how to generate human-like behavior, and exposes the gaps and failures in current approaches. Building human-aware environments (environments that perceive human activity and estimate human internal state) pushes the development of machine perception of humans. In addition to being socially useful, building human-aware environments helps us develop humanoid robots that are capable of understanding and interacting with humans.
Machine learning underlies much of my work in both humanoid robotics and human-aware environments. I am an experimentalist in the field of robot learning, specializing in the learning of challenging dynamic tasks such as juggling. I combine designing learning algorithms with exploring their behavior in implementations on actual robots and in intelligent environments. My research interests include nonparametric learning, memory-based learning, reinforcement learning, learning from demonstration, and modeling human behavior.

I am interested in computational imaging, computer vision, and computer graphics. I am particularly interested in various aspects of material appearance: how humans perceive real world materials, how we can measure their parameters, and how to reason about them in images of the world.

Currently, I am the Director of The Robotics Institute at Carnegie Mellon University.
My work is in the areas of computer vision and perception for autonomous systems. My interests are in the interpretation of perception data (both 2-D and 3-D), including building models of environments.

The control of dynamical systems becomes increasingly important as the era of robotics research dominated by quasi-static machines rapidly comes to a close. Similarly, the importance of state estimation grows as robotic applications require robots to function in larger, more complex environments. George Kantor’s research addresses both of these issues by focusing on the dual problems of controlling robotic mechanisms with non-trivial dynamics and perceiving the state of the world through indirect measurements. His approach is both analytical and experimental: he uses mathematics to understand the physical behavior of a given system and then uses that understanding to create algorithms for control or estimation. He strives to develop new theoretical concepts and translate them into real-world implementations that solve problems such as balancing an unstable robot or estimating the location of an autonomous vehicle.
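To make the idea of "perceiving the state of the world through indirect measurements" concrete, here is a minimal one-dimensional Kalman-style estimator. This is an illustrative sketch only, not code from any of the research described above; the noise variances `q` and `r` and the sample measurements are assumed values chosen for the example.

```python
# Minimal 1-D Kalman filter: recover a hidden scalar state (e.g. a
# vehicle's position along a line) from a stream of noisy readings.
# Illustrative sketch only; q and r are assumed noise variances.

def kalman_1d(measurements, q=0.01, r=0.1):
    """Estimate a constant hidden state from noisy measurements.

    q: process noise variance (how much the state may drift per step)
    r: measurement noise variance (how noisy each reading is)
    """
    x, p = 0.0, 1.0  # initial estimate and its uncertainty (variance)
    estimates = []
    for z in measurements:
        # Predict: state assumed constant, uncertainty grows by q.
        p += q
        # Update: blend prediction and measurement, weighted by confidence.
        k = p / (p + r)      # Kalman gain: how much to trust the reading
        x += k * (z - x)     # correct the estimate toward the measurement
        p *= (1 - k)         # uncertainty shrinks after incorporating data
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true value of 5.0:
est = kalman_1d([5.2, 4.8, 5.1, 4.9, 5.0, 5.05, 4.95])
```

Each measurement only indirectly reflects the true state, yet the recursive predict/update cycle drives the estimate toward the true value while tracking its own uncertainty, which is the essence of state estimation from indirect measurements.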

I am interested in building robots and related systems that are cost-effective in today’s marketplace. It is clear that sensing and cognition have a long way to go before an autonomous system can match the ability of even a small child. Yet, it is also clear that autonomous systems have a place in our world now if they can compete with humans because they are better, faster, cheaper, safer or even more entertaining.

My general research interests lie in Artificial Intelligence and Robotics. More specifically, they currently cover planning in deterministic and probabilistic domains and machine learning. My research has been mainly motivated by the problem of fast and intelligent decision making by autonomous robotic systems operating in real-world environments. Some of the robotic systems my group does…

My research is motivated by a passion for discovering the "why?" behind the "how?" with respect to core problems in Artificial Intelligence, Computer Vision and Machine Learning. I am currently leading the CI2CV laboratory, newly re-located to the Robotics Institute, Carnegie Mellon University in Pittsburgh, PA, USA, where we are attempting to make theoretical and technological…

For more than ten years I have been exploring human-robot interaction with the aim of creating rich, effective and satisfying interactions between humans and robots. My research has focused specifically on human-robot collaboration, wherein the robotic and human agents in the system share the same unifying goal or utility function. I further sharpen my scope to human-robot collaboration for learning, in which the measurable outcomes are information gain on the part of the humans in the system. In the context of my focus on collaboration for learning, rich means a cognitively sophisticated interaction in which humans and robots communicate as peers; effective means that formal measures of human learning should yield significant outcomes; satisfying means that humans should find the interaction both useful and pleasurable.

Most of my research is in the area of surgical robotics. Specifically, I have an interest in developing robotic systems and interfaces for microsurgery and minimally invasive surgery that enhance the performance of the surgeon while also being "minimally obtrusive": small, inexpensive, and easy to use, with minimal disturbance of the workflow in the operating…

David Wettergreen’s research focuses on robotic exploration: underwater, on the surface, and in air and space. His work spans conceptual and algorithm design through to field experimentation, and results in mobile robots that explore the difficult, dangerous, and usually dirty places on Earth in the service of scientific and technical investigations. He currently leads projects in robotic exploration: investigating the geology and biology of the Atacama Desert in Chile with the rover Zoe, and sampling micro-organisms living in the Antarctic ice sheet with Nomad. He has also contributed to Mars rover technology through research and experimentation in autonomous long-range rover navigation and rough-terrain surface navigation. Other recent research has included sun-synchronous navigation, with field experiments conducted well above the Arctic Circle on Devon Island in Canada.

Dr. William L. “Red” Whittaker is the Fredkin Professor of Robotics, Director of the Field Robotics Center, and founder of the National Robotics Engineering Center (NREC). He is also the Chief Scientist of RedZone Robotics. He has an extensive record of successful development of robots for craft, labor and hazardous duty. Examples include robots in field environments such as mines, work sites and natural terrain. Dr. Whittaker’s portfolio includes the development of computer architectures for controlling mobile robots; modeling and planning for non-repetitive tasks; complex problems of objective sensing in random and dynamic environments; and integration of complete field robot systems.