Ezb5 Coming?

The USB connection would be nice! And maybe the ability to store a little code onboard? Perhaps a small 5 KB storage area where you could put default code for the robot to run when it loses its connection, or when you tell it to use that code.

@Richard, can you share what you're using for SLAM in ROS? I hear SLAM is your main reason for choosing ROS, given the present navigation limitations of EZ-Robot. What sensors are you using? What sensors are popular in the ROS community for SLAM? Thanks!

@DJ First off, the fact that EZ-Builder currently has no auto-navigation ability isn't a limitation; it's just another feature that hasn't been implemented yet... Anyway, about 2 weeks ago I bought the kit version of the Oculus Prime ROV. Just yesterday I got all the software loaded and the robot calibrated. Right now I have only been messing with the web interface to control the robot, and I have only touched on making room maps for the navigation part... So far so good. If you want, when I really get it mapping and navigating I can post my results...

To answer your question... What I wanted to do with SLAM is have my robot not just wander around aimlessly trying not to bump into anything, but be able to send it on patrol routes or to certain areas of the house and have it navigate there by itself. The Oculus ROV does this pretty well. However, unless you are willing to cough up $1600 or so Canadian for the "out of the box" full version, be prepared for a lot of work assembling the kit version and installing/calibrating ROS (as I did)...

Bottom line, it's a lot of work just to get a ROS robot to navigate indoors, and that's with supported ROS robots like the Oculus or TurtleBot... When it comes to designing and building your own robot, unless you're, well, you or d.cochran, it won't be an option for 95% of us... I won't even go into how difficult it is to modify your bot with, say, a ping sensor or any other sensor for that matter. ROS's main focus seems to be sensors like the Asus Xtion, Microsoft Kinect, or a lidar of some type... You are going to need to learn Arduino as well, as that is the preferred micro when working with ROS...

I have mixed feelings about ROS... It does pretty great stuff, but it takes a lot of work and effort to do it. I'll go out on a limb and say it is still probably best left to universities, research, and serious product development... One major advantage EZ-Robot has over ROS (other than the obvious ease of use and versatility) is the ability to add peripherals or add-ons and interface them quickly... as well as the ability to rapidly test and knock out code...

One side benefit is that I am really learning about network stuff too: telnet, VNC, SSH, and Linux... One last thing... I have yet again been invited to the local college here to help with their InMoov build; it seems their robotics program is building one. No doubt they are using Arduino and MyRobotLab... I will be taking my InMoov to demo in a few weeks... They are about to discover that they are doing things the hard way when I demo how the EZ-B v4 and EZ-Builder outperform Arduino and MyRobotLab in every way...

One cool feature Oculus has is "click steer". When you click anywhere on the camera image, the tilt servo attempts to centre the camera where you clicked... If the robot is moving forward, you can actually drive/steer the robot using click steer: wherever you click on the camera image, the robot steers towards it... Maybe a new EZ-Robot feature...?

That wouldn't be too difficult a feature to add (click steer) as a plugin. You would only need the input from the mouse click on the video image, then some simple calculations to work out the pan/tilt servo movements required.

Although I guess it would be even easier for DJ to add that new control to the current camera control.
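For anyone curious what those "simple calculations" might look like, here's a rough Python sketch of mapping a click on the camera image to servo corrections. The resolution, field-of-view, and servo numbers are all made-up assumptions for illustration, not EZ-B or Oculus specifics:

```python
# Hypothetical click-steer sketch: map a mouse click on the camera image
# to pan/tilt corrections that would centre the camera on the clicked spot.

IMG_W, IMG_H = 320, 240    # camera resolution in pixels (assumed)
FOV_H, FOV_V = 60.0, 45.0  # horizontal/vertical field of view in degrees (assumed)

def click_to_servo_delta(click_x, click_y):
    """Return (pan_delta, tilt_delta) in degrees for a click at (click_x, click_y)."""
    # Offset of the click from the image centre, as a fraction of the half-frame
    dx = (click_x - IMG_W / 2) / (IMG_W / 2)
    dy = (click_y - IMG_H / 2) / (IMG_H / 2)
    # Scale by half the field of view to get the angle correction
    pan_delta = dx * (FOV_H / 2)
    tilt_delta = -dy * (FOV_V / 2)  # image y grows downward, tilt angle grows upward
    return pan_delta, tilt_delta
```

A click dead centre returns (0, 0); a click at the right edge returns a pan correction of half the horizontal FOV. Steering the drive motors instead of the camera would use the same `dx` term to bias the wheel speeds.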

The quickest and easiest modules to work with are the Neato XV-11 lidar and an XV Lidar Controller. The controller passes the data from the Neato back over USB, and there are public Python scripts that show how this data is converted into points on a map. For SLAM to work well, you also need odometry on the wheels to measure how far the robot has traveled. A Sabertooth/Kangaroo combo could be used for this, along with wheels that have encoders. The Sabertooth/Kangaroo combo might not be strictly needed, but it sure makes things a lot easier.
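To give a feel for the first conversion step those Python scripts perform, here's a minimal sketch that turns (angle, distance) lidar readings into x/y points, plus an encoder-to-distance helper. The real XV-11 serial protocol (packet framing, checksums, 4 readings per packet) is omitted, and the encoder/wheel numbers are example assumptions:

```python
import math

def scan_to_points(scan, robot_x=0.0, robot_y=0.0, robot_heading=0.0):
    """Convert a lidar scan to map points.

    scan: list of (angle_deg, distance_mm) readings.
    Returns a list of (x, y) points in mm, relative to the map origin.
    """
    points = []
    for angle_deg, dist_mm in scan:
        if dist_mm <= 0:  # invalid or out-of-range reading
            continue
        theta = math.radians(angle_deg) + robot_heading
        points.append((robot_x + dist_mm * math.cos(theta),
                       robot_y + dist_mm * math.sin(theta)))
    return points

def ticks_to_mm(ticks, ticks_per_rev=360, wheel_diameter_mm=70.0):
    """Convert encoder ticks to distance traveled (example wheel geometry)."""
    return ticks / ticks_per_rev * math.pi * wheel_diameter_mm
```

The odometry reading shifts `robot_x`/`robot_y` between scans, which is exactly why the encoders matter: without them every scan is anchored to a guessed position.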

From there, the fun begins. Storing the map as you travel through it, and calculating the probability of your current map position based on what the sensors see, become the next issues to tackle. All of this is stored in X/Y coordinates.
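One simple way to store that map is a grid of X/Y cells with a hit/visit count per cell, giving a crude occupancy probability. This is a sketch of the idea only, not how any particular SLAM package does it; real implementations typically use log-odds updates and simultaneously estimate the robot's pose:

```python
from collections import defaultdict

CELL_MM = 100  # 10 cm grid cells (assumed)

class OccupancyGrid:
    """Toy occupancy grid: fraction of observations in which a cell looked occupied."""

    def __init__(self):
        self.hits = defaultdict(int)    # observations where the cell was occupied
        self.visits = defaultdict(int)  # total observations of the cell

    def _cell(self, x_mm, y_mm):
        # Quantize world coordinates to a grid cell key
        return (int(x_mm // CELL_MM), int(y_mm // CELL_MM))

    def observe(self, x_mm, y_mm, occupied):
        c = self._cell(x_mm, y_mm)
        self.visits[c] += 1
        if occupied:
            self.hits[c] += 1

    def p_occupied(self, x_mm, y_mm):
        c = self._cell(x_mm, y_mm)
        if self.visits[c] == 0:
            return 0.5  # never observed: unknown
        return self.hits[c] / self.visits[c]
```

Each lidar point from the previous step would be fed in via `observe(x, y, True)`, with the free cells along the beam marked `False`.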

From there, path planning becomes the next fun part. You can use something like A* for this if you want; it's a popular path-planning routine used in computer gaming. It calculates the least-cost route from the current point to a target X/Y coordinate.
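A minimal A* over a grid of free (0) and blocked (1) cells might look like the sketch below, using 4-way movement and a Manhattan-distance heuristic. Real planners on an occupancy grid would add diagonal moves and per-cell costs (e.g. penalizing cells near obstacles):

```python
import heapq

def astar(grid, start, goal):
    """grid: list of rows of 0 (free) / 1 (blocked); start/goal: (row, col).
    Returns the path as a list of cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic: admissible for 4-way movement
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), start)]        # (f = g + h, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []                      # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1      # every step costs 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

On a small map with a wall across the middle, the planner routes around the gap; with no gap it returns `None`, which is the signal to re-plan or give up.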

Communicating this information over the Wi-Fi channel used by the EZ-B could be problematic; there is a lot of data for sure. It would probably be preferable to do all of this on the computer running EZ-Builder instead of pushing all of that data through Wi-Fi. It might also be better to build a SLAM board that attaches to the EZ-B, performs all of these functions, and passes back only the information the EZ-B needs, but this is just a guess based on what I have seen so far. SLAM hasn't been my focus up to this point, but it is definitely something I have put some time into learning and experimenting with. It would be an amazing addition to EZ-Builder for sure.