IntroductionToRobotics-Lecture09


Instructor (Oussama Khatib):Okay. Let's get started. So today it's really a great opportunity for all of us to have a guest lecturer, one of the leaders in robotic vision, Gregory Hager from Johns Hopkins, who will be giving this guest lecture. I also wanted to mention that on Wednesday we have the midterm in class. Tonight and tomorrow we have the review sessions, so I think everyone has signed up for those sessions. And next Wednesday the lecture will be given by a former Ph.D. student from Stanford University, Krasimir Kolarov, who will be giving the lecture on trajectories and inverse kinematics. So welcome back.

Guest Instructor (Gregory Hager):Thank you. So it is a pleasure to be here today, and thank you, Oussama, for inviting me. So Oussama told me he'd like me to spend the lecture talking about vision, and as you might guess that's a little bit of a challenge. At last count, there were a little over 1,000 papers in computer vision in peer-reviewed conferences and journals last year, so summarizing all of those in one lecture is a bit more than I can manage to do. But what I thought I would do is try to focus in specifically on an area that I've been interested in for really quite a long time. Namely, what is the perception and sensing you need to really build a system that has both manipulation and mobility capabilities? And so really this whole lecture has been designed to give you a taste of what I think the main components are and also to give you a sense of what the current state of the art is. Again, with the number of papers produced every year, defining the state of the art is obviously difficult, but I can at least give you a sense of how to evaluate the work that's out there and how you might be able to use it in a robotic environment.

And so really, I want to think of it as answering just a few questions, or looking at how perception could answer a few questions. The simplest question you might imagine trying to answer is: where am I relative to the things around me? You turn a robot on, it has to figure out where it is and, in particular, be able to move without running into things, and be able to perform potentially some useful tasks that involve mobility. The next step up, once you've decided where things are, is that you'd actually like to be able to identify where you are and what the things are in the environment. Clearly, the first step toward being able to do something useful in the environment is understanding the things around you and what you might be able to do with them. The third question is: once I know what the things are, how do I interact with them? There's a big difference between being able to walk around and not bump into things and being able to actually safely reach out and touch something and manipulate it in some interesting way. And then really the last question, which I'm not gonna talk about today, is: how do I actually think about solving new problems that in some sense were unforeseen by the original designer of the system? It's one thing to build a materials-handling robot where you've programmed it to deal with the five objects that you can imagine coming down the conveyor line. It's another thing to put a robot down in the middle of a kitchen and say: here's a table, clear the table, including china, dinnerware, glasses, boxes, things that potentially it's never seen before but needs to be able to manipulate safely.