Last week we received a product donation from GoPro: a Hero 3 camera (Black Edition). Some of its notable features are the “ultra” wide angle lens, waterproof case, and attachable lenses. Unfortunately, there doesn’t seem to be much support for streaming pictures over USB, nor much Linux support at all. All we’ve been able to do so far is stream it to GoPro’s iPhone app and take pictures and video using a Micro SD card. Since the camera is set up to stream over HDMI, our current plan is to get an HDMI capture device and plug it into our PCI Express slot. We’re currently contacting companies that sell HDMI capture devices, since we’re pretty much out of money at this point.

We finished some initial work on getting the sonar array publishing into the system: a node has been created to act as a driver for an Arduino Mega board that is connected to the 12 sonars. I’m not sure if I’ve explained our reason for using a sonar array explicitly. We conceived of the idea originally when the Hokuyo started to malfunction, so we already had a plan to execute when it finally died. Inspired by what RAS did for IGVC in 2009, we have connected a bunch of sonar sensors together in a half-ring. They were about $3 each, but are actually supposed to be pretty good. The data isn’t scaled properly at the moment, so we haven’t been able to bag anything to analyze yet. Here’s a picture of the sonar array taken with the GoPro:
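The scaling fix should be straightforward once we check the datasheet. Here’s a minimal sketch of the conversion, assuming a MaxBotix-style analog sonar (output of Vcc/512 volts per inch) read through the Arduino’s 10-bit ADC referenced to the same Vcc, which works out to 2 ADC counts per inch. Those constants are assumptions about whichever sensors we actually bought, not confirmed values:

```python
def sonar_raw_to_meters(raw_adc):
    """Convert a 10-bit Arduino ADC reading of a MaxBotix-style analog
    sonar into meters.

    Assumed scaling: the sensor outputs Vcc/512 volts per inch, and the
    ADC spans Vcc over 1024 counts, so one inch is 2 ADC counts.
    Verify both constants against the actual sensor's datasheet.
    """
    inches = raw_adc / 2.0
    return inches * 0.0254  # inches -> meters
```

For example, a raw reading of 200 counts would correspond to 100 inches, or 2.54 m.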

The VN 200 does not have support for OmniSTAR’s subscription service. The GPS data we’re getting back from it is accurate to within 5 meters when stationary and about 2 meters when moving, but this will probably not be good enough: the competition requires that we reach waypoints within 2 meters. So, we’ve begun contacting companies that produce GPS receivers explicitly compatible with OmniSTAR to see if we can get another product donation.

We’ve also been able to analyze the results of using messages from the VN 200 Inertial Navigation System (INS) in our Extended Kalman Filter (EKF). Unfortunately, from looking at some data we recorded the other day at the intramural fields, our EKF’s orientation estimates are better when ignoring these INS messages and instead working directly with the accelerometer and magnetometer messages (even with proper covariances for each). The INS yaw value appears to drift over time, whereas our own roll, pitch, and yaw calculations (using this as a reference) show no drift. We have not yet done any hard- and soft-iron calibration.
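For anyone curious what those calculations look like, here is a sketch of the standard tilt-compensated approach: roll and pitch from the gravity vector, then yaw from the magnetometer rotated into the horizontal plane. This assumes the robot isn’t accelerating (so the accelerometer reads only gravity) and an x-forward, z-up axis convention; the signs may need flipping for the VN 200’s actual body frame:

```python
import math

def orientation_from_accel_mag(ax, ay, az, mx, my, mz):
    """Estimate (roll, pitch, yaw) in radians from one accelerometer
    sample and one magnetometer sample.

    Assumes the body is static (accelerometer measures gravity only)
    and an x-forward, y-left, z-up frame; adjust signs to match the
    sensor's real axis convention.
    """
    # Roll and pitch from the direction of gravity.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Rotate the magnetometer reading into the horizontal plane
    # (tilt compensation), then take the heading.
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-my2, mx2)
    return roll, pitch, yaw
```

With a level sensor and the magnetometer pointing straight along x, all three angles come out zero, which is a handy sanity check. Note that without hard- and soft-iron calibration the yaw from this method will still be biased by nearby metal, even if it doesn’t drift.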

This weekend we hope to be able to go back to the intramural fields to see if we can autonomously navigate to GPS waypoints using both the sonar array and the camera. This would be a big milestone for us, since all we’ve done so far is navigate to local waypoints around obstacles using only the encoders (feedback from the wheels) as input to the EKF for localization. We have never navigated to actual GPS waypoints before, and we haven’t done anything autonomously while incorporating GPS and IMU sensors. Using the Hokuyo, the robot was able to navigate around obstacles very well; the same code run with the camera has proved decent, but it still scrapes the edges of orange obstacles. Integration of the sonar array scans with the camera image scans remains untested, and some work is still needed before it will function. So, it would be a huge step forward if we’re able to observe robust navigation with all these different components running at once.

Here is a dataflow diagram with the nodes that are currently running in the system:

Discussed:

* V-REP – Installed on granny, needs ROS integration
* Hardware stuff – A few things were finished (see spreadsheet), but still needs a lot of work
* Sonar array – Cruz working on driver for Linux
* Vision – Lucas: still no progress – Orange filters need some work, see the “barrels_cones_4-5-13_*” bags
* Camera – Can take a video with micro SD card and view it on a computer – Can’t stream
* VN 200 – Now integrated with EKF – Initial tests show that GPS is disappointing (see this report for more details) – Emailed VectorNav about OmniSTAR compatibility
* PATW competition on the 27th – Cancel it. Instead, we’ll go to A&M the next weekend to test, as practice for going to Michigan
* Sparkfun competition in Boulder, Colorado – It only costs $30 to enter. Let’s get ’em.

To do:

* Acquire tall orange cones
* Get micro SD card and HDMI recorder
* Test VN 200 in an open field
* Hardware stuff!
* Get a GPS receiver that is compatible with OmniSTAR

2. On the walkway the robot can only make it over the ramp if it goes at around max speed. It might be easier to do this on grass. It also has trouble making it up the hill between RLM and ENS, but that’s due to slipping on the wet grass.

3. The remote kill switch is ON if it doesn’t have power. This means that if the remote kill switch gets unplugged or dies, the robot could go charging forward, and the only way to stop it will be to hit the emergency stop. Speaking of which…

4. The emergency stop is still mounted on the front of the robot. This makes the situation of the robot getting out of control even more fun!

5. The vision scanning now can go at a comfortable 10 Hz. Our robot can navigate autonomously around red/orange things using only the camera. However, since the decision making code is still reactive, it sometimes hits cones when they drop below the view of the camera.

6. We really need to have fences and colored drums to test with.

7. We’re blocked on:

No bagged VN 200 INS data to test EKF integration. Apparently it won’t publish INS data without a GPS lock, and we haven’t been able to get a lock around ENS.

No sonar driver written for Granny yet. The sonar array also needs to be mounted.

No driver yet for the GoPro camera.

8. We really need to mount the monitor. We also need to mount the power converter for it.

9. I added a page to the wiki on GitHub describing the reactive decision making nodes. We should add more pages on that wiki that describe other sets of nodes and parts of the project. It would be good to have a single place for all documentation.

1. The Hokuyo is dead for all intents and purposes. I’ve sent an email to the SICK Group requesting a donation. I will also be emailing the person the RAS IGVC group from 2010 contacted to get their Hokuyo fixed. In the meantime, Frank will be assembling an array of inexpensive sonar sensors in case we can’t get the Hokuyo fixed or replaced. It has been shown in simulation that the robot does not require very many beams to perform well.

2. Cruz sent me a list of five companies to talk to about getting a camera. Of those, GoPro and Allied Vision have responded with a hopeful note. Still talking with them.

3. There are a number of things to do that don’t involve software. I’ve compiled a spreadsheet of them. The three people who are mainly contributing to these are Chris Davis, Cruz, and Josh Bryant. Some other folks have started helping, including Blake and Han. If anyone else would like to help, please check out that list and then ask Chris D, Cruz, Josh B, Frank, or me for details.

4. For the last two weeks Lucas has made no progress on vision, as seen here.

5. I’ve been testing out some vision-only obstacle avoidance using the same reactive agent that was used to process the Hokuyo scan. The gist is to take a binary image that contains obstacles (output from Frank and Lucas’s work in vision), transform it to correct for perspective, transform it again into a log-polar image (which is basically a plot of angles versus ln(distance)), divide that into zones, and determine the distance from the front of the robot to obstacles detected in each zone. The results are published to /scan. Originally we were considering using ray tracing to simulate scans (inspired by these guys), but our method is easy to code and can use optimized OpenCV/CUDA function calls.
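The zone-binning step above can be sketched in plain numpy. This is just the idea, not our actual OpenCV implementation: given a perspective-corrected top-down binary obstacle map with the robot at the bottom-center pixel and forward pointing up, bin obstacle pixels by bearing and take the minimum distance per zone to get a handful of simulated beams. The function name, beam count, and field of view here are illustrative:

```python
import numpy as np

def simulate_scan(obstacle_map, meters_per_pixel, n_beams=10,
                  fov_degrees=90.0, max_range=5.0):
    """Turn a top-down binary obstacle map (1 = obstacle, robot at the
    bottom-center pixel, forward = up) into n_beams simulated range
    readings, like a very coarse Hokuyo scan.

    Plain-numpy sketch of the angle-binning idea; the real pipeline
    uses perspective and log-polar transforms in OpenCV instead.
    """
    h, w = obstacle_map.shape
    origin_row, origin_col = h - 1, (w - 1) / 2.0  # robot position
    ys, xs = np.nonzero(obstacle_map)
    # Forward distance (up the image) and lateral offset, in meters.
    fwd = (origin_row - ys) * meters_per_pixel
    lat = (xs - origin_col) * meters_per_pixel
    angles = np.degrees(np.arctan2(lat, fwd))  # 0 deg = straight ahead
    dists = np.hypot(fwd, lat)
    # Bin obstacle pixels into angular zones across the field of view;
    # zones with no obstacle report max_range.
    edges = np.linspace(-fov_degrees / 2, fov_degrees / 2, n_beams + 1)
    zone = np.digitize(angles, edges) - 1
    ranges = np.full(n_beams, max_range)
    for i in range(n_beams):
        in_zone = (zone == i)
        if in_zone.any():
            ranges[i] = min(dists[in_zone].min(), max_range)
    return ranges
```

A single obstacle pixel half a meter dead ahead then shows up as a short reading in the center beam while every other beam stays at max range, which is exactly the shape of message the reactive agent already expects on /scan.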

The process of taking an image and creating 10 simulated Hokuyo beams is currently very slow, running at 2 Hz. It was written in Python, so hopefully porting it to C++ will improve this. From testing, the reactive agent still performs reasonably well despite the slow image processing, although its overall performance depends heavily on the camera’s mounting and the angle of the lens. The reactive agent has no memory, so the robot may run over an obstacle if it falls out of the camera’s view. The camera needs to be pointed almost directly downward, with the Hokuyo just below the picture.

Hey guys. I want to update everyone before we disperse into the wind for Spring Break.

0. We will not have a meeting this Sunday, but there will be one on the 17th.

1. Looks like we’re not getting any discount on the $450 camera we were looking at. Time to start looking for a decent camera, and fast. We really can’t put this off. It’s probably best to distribute the process. Everyone, do research and find cameras that might work with our application, and then send me links. I don’t want one option. Give me ten, and I will email every one of the companies that sell them.

2. I’ve been experiencing some issues with the Hokuyo. Sometimes it starts up, runs for a few seconds, but then crashes with an error code that isn’t documented. This is very concerning. I will probably begin emailing companies for a replacement over the break.

2b. Turns out that the Hokuyo spins inside if it is being powered at all. From now on, we must keep it powered off unless it’s being used. We really need a switch on the power line to make this more convenient.

3. Tested the robot outside yesterday. Here’s a video:

4. Unfortunately, the VN 200 driver node was not complete yesterday, so we couldn’t test it. We need to get that done soon!

5. For those not in the senior design group, see our Testing & Evaluation Plan report. Feel free to comment if something isn’t clear.