Topic: Navigation Method (Read 609 times)

First of all, thank you to anyone who takes the time to offer me some advice.

Background:

A friend and I are currently working on an autonomous robot that will have the ability to navigate the university campus. The robot body and movement system are complete, with a Phidgets motor controller operating 4 DC motors as the wheels. A laptop accepts input from a remote computer, and we are able to navigate manually at this point. A Kinect sensor is also installed at the front but does not operate at this time.

Question:

We have come to a standstill on how to give the robot autonomous functionality. He suggests we simply use distance sensors on the robot to create a 2D map that the robot will use, while the Kinect system runs independently, creating a point cloud and then a 3D map.

I, on the other hand, believe we should use the Kinect sensor at all times. I feel that a 2D system will get lost when creating and navigating maps. However, an always-on 3D system would be able to locate itself in space as well as constantly adapt to changes.

Could someone please help push us in one direction or the other? If anyone is aware of a project with detailed information about their process, that would be wonderful. However, all information and advice would be valuable and very much appreciated.

What you want to do is called SLAM -- Simultaneous Localization And Mapping. Additionally, if you have both range sensors and a Kinect, you may want to do sensor fusion. However, the Kinect may not work reliably outdoors in bright sunlight. There are various approaches to SLAM, some of which are implemented as modules in ROS.
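To make the mapping half of SLAM concrete, here is a minimal occupancy-grid sketch: cells a range beam passes through are marked more likely free, and the cell the beam ends in more likely occupied. This assumes things the thread doesn't specify -- a known robot pose, a single forward-facing range sensor, and made-up cell size and log-odds increments you would have to tune.

```python
import math

# Minimal occupancy-grid update from one range reading.
# Assumptions (not from the thread): known robot pose, a single
# forward-facing range sensor, 0.1 m grid cells, log-odds updates.
CELL = 0.1                 # metres per grid cell
L_OCC, L_FREE = 0.9, -0.4  # log-odds increments (tuning guesses)

def update_grid(grid, x, y, theta, rng, max_rng=4.0):
    """Mark cells along the beam as free, the endpoint as occupied."""
    steps = int(min(rng, max_rng) / CELL)
    for i in range(steps):
        cx = int((x + i * CELL * math.cos(theta)) / CELL)
        cy = int((y + i * CELL * math.sin(theta)) / CELL)
        grid[(cx, cy)] = grid.get((cx, cy), 0.0) + L_FREE
    if rng < max_rng:  # the beam actually hit something
        ex = int((x + rng * math.cos(theta)) / CELL)
        ey = int((y + rng * math.sin(theta)) / CELL)
        grid[(ex, ey)] = grid.get((ex, ey), 0.0) + L_OCC
    return grid

grid = {}
update_grid(grid, 0.0, 0.0, 0.0, 1.0)  # 1 m reading straight ahead
```

A real SLAM module (e.g. in ROS) does this for every beam of every scan while simultaneously estimating the pose; this sketch only shows the map-update side with the pose given.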

Additionally, you will need some way for the robot to know how it's moving (odometry). If you don't have sensors on your wheels, you will need to calculate this from the readings you take from the environment, which will be really tricky when you also use those readings to build a map.
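With wheel encoders, odometry is a short calculation. Here is a dead-reckoning sketch for a differential/skid-steer base like yours; the ticks-per-metre and track-width constants are placeholders you would calibrate, not values from the thread.

```python
import math

# Dead-reckoning sketch for a differential/skid-steer base.
# Assumed (not from the thread): the encoder resolution and wheel
# track width below are placeholder values you would calibrate.
TICKS_PER_M = 1000.0  # encoder ticks per metre of wheel travel
TRACK = 0.5           # distance between left and right wheels (m)

def integrate(x, y, theta, left_ticks, right_ticks):
    """Update the pose (x, y, theta) from one encoder interval."""
    dl = left_ticks / TICKS_PER_M    # left wheel distance
    dr = right_ticks / TICKS_PER_M   # right wheel distance
    d = (dl + dr) / 2.0              # forward distance of the centre
    dtheta = (dr - dl) / TRACK       # heading change (radians)
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Drive straight: both wheels report 1000 ticks = 1 m forward.
pose = integrate(0.0, 0.0, 0.0, 1000, 1000)  # → (1.0, 0.0, 0.0)
```

Encoder odometry drifts over time (wheel slip, uneven ground), which is exactly why SLAM corrects it against the map rather than trusting it alone.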

From this, it seems that having sensors built into the wheels that can detect distance traveled, as well as any rotation of the robot, would make this much easier? From my understanding, without such sensors the robot would have to infer from its environment how far it has gone, as you explained in your reply.

Navigation is the problem everyone hits once they get what they asked for: map data. What do you do with it? The DARPA Grand Challenge has a great video by the 2012 winner describing its methods of getting to a location over the ground. Don't try to map everything; just send the robot to set locations and stop it from hitting things so hard that it damages itself. That covers the campus. Voila.
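The map-free approach above can be sketched as a reactive control loop: head for a goal, and stop and turn when a range sensor sees something close. The sensor reading and the speed mixing here are hypothetical stand-ins for whatever the Phidgets/Kinect setup actually exposes, and the safe-distance threshold is a guess.

```python
import math

# Reactive "go to waypoint" sketch: no map, just steer toward the
# goal and spin away when an obstacle is close. All constants and
# the sensor/motor interface are assumptions, not from the thread.
SAFE_DIST = 0.6  # metres; stop-and-turn threshold (a guess)

def step(x, y, theta, goal, front_range):
    """Return (left_speed, right_speed) for one control step."""
    if front_range < SAFE_DIST:
        return -0.3, 0.3  # spin in place away from the obstacle
    bearing = math.atan2(goal[1] - y, goal[0] - x) - theta
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap
    turn = max(-0.5, min(0.5, bearing))  # clamp the steering term
    return 0.5 - turn, 0.5 + turn        # differential-drive mix

# Clear path, goal straight ahead -> both wheels at equal speed.
cmd = step(0.0, 0.0, 0.0, (5.0, 0.0), front_range=2.0)
```

This gets a robot wandering between waypoints without any map; a SLAM-built map only becomes necessary once you want planned routes rather than reactive wandering.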