Agenda for 7/26/10

Brown ROS

The position_tracker bug (which was not a bug in our code at all) has not been identified, but I have confirmed that installing the cturtle version of ROS at least makes it go away.

I need to write to the ROS users' list to that effect. Something was wonky in the old transformation code, and it appears to have been fixed in cturtle.

24/7 Lab

All set up and functional, except for two important issues:

ROSjs (or whatever we decide to call it -- maybe we should hammer this out today?) was changing from underneath me while Trevor and Sarah worked like mad to get things going at Bosch, so I was unsuccessful in setting it up. Working with Trevor both to figure everything out and to get everything documented and awesome is on the agenda.

Computer juice. The prototype chargers we put together last spring are nowhere to be found, so I need help running them down -- and probably taking one apart to replace its power cord, and so on.

Once I've got ROSjs going, I'll need to start working on a few other things: an overhead camera to film the interaction, both for data collection and as payment to users, and a UI that will support that interaction.

Agenda for 7/20/10

Brown ROS/AAAI

Last week, obviously, was devoted to the AAAI conference demos.

Position trackers, AR recognition, and other odometry and UI components have gone through several bugfix iterations.

A peculiar bug in ROS itself surfaces from time to time in unpredictable ways when running my (or, I have confirmed, others') transform matrix code. Sarah, Aggeliki, and I have all experienced bizarre crashes. Even a simple rospy.sleep() call triggers it, but only inside position_tracker.py. Very frustrating: it works on some machines and not on others, with no information whatsoever about what is wrong.

HRI

Trying to tie up the loose ends for IRB approval. Current difficulties involve massaging funding proposals into the package. We can engineer the tasks such that said approval is not necessary for a particular experiment if we must, but it certainly will be a good idea to have approval on hand regardless, so we can stop worrying about it. I've been worrying about it for two months now. :-P

With an absence of other good lab space, this week we're going to get the 24-hour world-accessible lab going in my living room, with at least one robot.

Agenda for 6/22/10

Brown ROS/AAAI

The position tracker/AR localizer suite is now functioning correctly, even when the camera is in motion, and using the entire ROS transformation framework.

I'd like to poke at wrapping AR localization into something that emulates a weird laser scanner, so that the whole ROS nav stack can be used. That's a medium-term and not terribly high-priority project, though.

HRI

Aggeliki left me with all of her mesh networking and navigation code, which I now have to go through to start setting up the interactions.

Drafting the HRI paper has begun, as I work on a rough intro and lit review.

Agenda for 6/15/10

Brown ROS

June release updates for ar_recog and ar_localizer are complete and (to a limited extent) tested. ar_recog now reports three-dimensional localization in real space, while ar_localizer works dynamically even when the robot is in motion.

Oblique tag angles still introduce pretty big localization errors, probably unfixable. These could be incorporated into a probabilistic error model, but for the naive version that we're releasing that would miss the point. So we probably want to introduce an angle threshold beyond which we ignore tag data. It will take a bit of research to figure out where that threshold should be.
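A minimal sketch of what that threshold check might look like, assuming we can recover the tag's face normal from the pose estimate. The cutoff value here is a placeholder, not the researched number:

```python
import math

# Placeholder threshold: tags viewed more obliquely than this get ignored.
# The actual value still needs to be determined empirically.
MAX_OBLIQUE_ANGLE = math.radians(60)

def tag_is_usable(tag_normal, camera_axis=(0.0, 0.0, 1.0)):
    """Reject tags whose face is too oblique to the camera.

    tag_normal: unit vector normal to the tag face, in camera coordinates.
    camera_axis: the camera's optical axis (unit vector).
    """
    dot = sum(n * c for n, c in zip(tag_normal, camera_axis))
    # Angle between the optical axis and the tag normal; 0 means head-on.
    angle = math.acos(max(-1.0, min(1.0, abs(dot))))
    return angle <= MAX_OBLIQUE_ANGLE
```

A head-on tag passes; a tag tilted 75 degrees away gets dropped rather than polluting the localization estimate.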

The PS3 camera becomes very important to this application because it runs at a high enough framerate to minimize motion blur. With lesser cameras, localization while in motion still won't work very well.

AAAI Demo

The localization work feeds directly into the demo, as will the HTML teleoperation. This integration effort also contributes directly to the mesh networking and HRI projects.

Agenda for 6/08/10

Brown ROS

Trevor and I developed a fix for ar_recog. It turns out that the ROS image pipeline works very differently across versions, so we could not rely on being able to tell whether we were dealing with rectified imagery: it was reported and acted upon differently depending on the ROS version. We therefore now require the camera_info topic to be explicitly assigned if rectified imagery is to be used.
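Concretely, the explicit assignment can be done with standard ROS topic remapping at launch time. The topic names on the right-hand side here are illustrative, not the actual names in our setup:

```shell
# Remap ar_recog's camera_info input to the rectifying camera's info topic.
# /camera/image_rect and /camera/camera_info are hypothetical names.
rosrun ar_recog ar_recog image:=/camera/image_rect camera_info:=/camera/camera_info
```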

Discovered a bug in the Create odometry. The Create does not directly report the encoders' odometry, but rather provides an angle measurement in an accumulator. However, if a Create is turning slowly while the sensor packet is polled at a fairly high rate, no turning information is reported at all, which of course makes proportional control of heading impossible, since ideally we'd like to sneak up on a desired value. We are currently modifying the driver to replace slow-speed turn information from odometry with a static turn model that doesn't rely on the (nonexistent) reported odometry.
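A sketch of the driver workaround, assuming we know the commanded angular velocity and the polling interval. The names and the threshold value are illustrative, not the actual driver API:

```python
# Below this reported-rotation magnitude we assume the accumulator is
# dropping a slow turn and fall back to the static model. Placeholder value.
MIN_REPORTED_TURN = 1e-3  # radians per poll

def estimated_turn(reported_dtheta, commanded_omega, dt):
    """Return the heading change for one sensor-poll interval.

    reported_dtheta: angle change reported by the Create's accumulator (rad).
    commanded_omega: angular velocity we commanded (rad/s).
    dt: time since the last poll (s).
    """
    if abs(reported_dtheta) < MIN_REPORTED_TURN and commanded_omega != 0.0:
        # The accumulator reported nothing; trust the commanded rate instead.
        return commanded_omega * dt
    return reported_dtheta
```

With this in place, slow commanded turns accumulate heading at the modeled rate instead of silently accumulating nothing.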

Mesh networking localization and sensor fusion

I have implemented a particle filter localizer for the Create. Once we fix the driver's odometry, I can build an error model for odometric navigation and start incorporating AR tag localization likelihoods.
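The core update loop is small; here is a stripped-down sketch of one predict/weight/resample cycle (1-D for brevity, Gaussian tag-range likelihood, multinomial resampling -- all simplifications of the real 2-D implementation, and the sigma values are placeholders):

```python
import math
import random

def pf_update(particles, odom_delta, tag_range, tag_pos,
              odom_sigma=0.05, range_sigma=0.1):
    """One predict/weight/resample cycle for a 1-D particle filter.

    particles: list of position hypotheses.
    odom_delta: distance the odometry says we moved.
    tag_range: measured distance to an AR tag at known position tag_pos.
    """
    # Predict: propagate each particle through a noisy odometry model.
    moved = [p + odom_delta + random.gauss(0.0, odom_sigma) for p in particles]
    # Weight: Gaussian likelihood of the observed tag range.
    weights = [math.exp(-((abs(tag_pos - p) - tag_range) ** 2)
                        / (2 * range_sigma ** 2)) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting from a uniform prior, a single tag-range observation is enough to collapse the cloud around the consistent positions; the odometry error model mentioned above would replace the fixed odom_sigma here.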

Still to come: communicating probabilities from one node to another. Particle filtering is great, but it may be unrealistic to hand huge bags of particles over the mesh network.
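One hedge against shipping full particle sets: summarize the cloud as a Gaussian before transmission. A sketch (1-D; the real thing would need a 2-D mean and covariance):

```python
def summarize_particles(particles):
    """Collapse a particle cloud into (mean, variance) for cheap transmission.

    This loses multi-modality, so it is only a first approximation; a
    mixture of a few Gaussians would preserve more structure at modest cost.
    """
    n = len(particles)
    mean = sum(particles) / n
    var = sum((p - mean) ** 2 for p in particles) / n
    return mean, var
```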

HRI

The experimental design is coalescing: performance-based evaluation of a robot network under varying degrees of teleoperation. How quickly can a human user, operating remotely over a web interface, find and (if we get a good arm working) retrieve a tagged object from an environment? Experiments can be done in the department; hopefully we can get into a store or warehouse setting once or twice for the video.

Agenda for 6/01/10

I was out for part of last week, for Commencement and reunion activities. All of that is over with now, however.

Brown ROS release

Coded and documented position_tracker for release.

Discovered a fairly major bug in ar_recog with respect to its treatment of rectified versus nonrectified imagery. Trevor and I will work out a fix shortly.

Began updating ar_recog to report tag coordinates directly, without relying on ar_localizer. ar_localizer will evolve into a Bayesian sensor-fusion multi-robot coordination system.

Agenda for 5/25/10

Commencement is this week, and I'm entertaining family. I'll be in and out, but I should be there for Tuesday meetings.

Navigation

We made a lot of progress this week, incorporating bump-sensing into the localization picture. Robots can hurtle down corridors at full speed and navigate in the blind.

Next up -- testing, overlaying path planning, and expanding to multi-robot coordination.

SLAM on Creates -- no one does SLAM very well without laser rangefinders; there may be an article or two worth writing on how to do it with cheap hardware, especially in distributed fashion.

Let's get back to using belief networks, too. I finally got a hold of the mesh networking proposal and gave it a read; it seems like we've moved a long way from the original plan. I don't think that's necessarily a positive development -- I think it's worth revisiting mesh networking with MRF/factor graph/DBN concepts.

Agenda for 5/18/10

Hardware

Create arms! Hardware tinkering is fun. Between Chad, Trevor, me, and everyone else we've talked to about arm design, we should put together an actual development plan, so that we're not just flailing.

Navigation

The AR tag localization has undergone some optimizations and should be ready for release. It turns out, however, that Kalman filtering is not the answer, or at least not the whole answer. The kinds of errors our localization system currently suffers are completely nonlinear, since they mostly stem from misidentification of tags rather than Gaussian noise in the distance estimates.

One possibility is to use odometric position estimation to decide whether or not to reject a location report as unrealistic. Too bad the confidence factor reported by AR Toolkit is such a black box; if we knew where that number came from, we could perhaps use it more appropriately.
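The odometric sanity check could be as simple as a distance gate: reject any visual fix that implies the robot teleported farther than the odometry (plus a generous drift allowance) permits. A sketch with made-up parameter values:

```python
def accept_fix(odom_pos, visual_pos, distance_traveled, slack=0.25):
    """Gate an AR-tag location report against dead reckoning.

    odom_pos, visual_pos: (x, y) estimates from odometry and vision.
    distance_traveled: path length since the last accepted visual fix.
    slack: extra allowance for odometry drift, in meters (placeholder).
    """
    dx = visual_pos[0] - odom_pos[0]
    dy = visual_pos[1] - odom_pos[1]
    jump = (dx * dx + dy * dy) ** 0.5
    # A legitimate fix can't disagree with odometry by more than the
    # accumulated drift, bounded crudely here by traveled distance + slack.
    return jump <= distance_traveled + slack
```

A misidentified tag typically implies a wildly implausible jump, which is exactly the nonlinear error mode a Kalman filter handles badly and this gate handles trivially.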

Also, especially for navigating hallways: incorporating bump sensing to help with the localization problem.

Mesh networking

It sure would be nice to get Roombas moving about the department, exploring and interacting with a human controller. I may have to generate some more momentum in this area.

Agenda for 5/11/10

Hardware

We have received, and are still receiving, large amounts of hardware, all of which requires setting up, updating, installing, testing, and so on. This is a big time sink and will remain so this week, though we'll have a lot of new toys to show for it.

Kalman filter for navigation

I am integrating the ROS Kalman filters into our Create driving and localization suite. This is the right thing to do, although of course it seems like it would be easier to implement my own.

Still figuring out the various formats the packages expect for messages and matrices, and quantifying confidence and belief values for various measurements. An inordinate amount of work. However, presumably, once this is done, things should Just Work (tm).

Wider mesh networking project

Better localization (above) will help.

Aggeliki and I have been talking about getting coordinated multi-robot exploration going. I think we're close.

With that goal in mind, I've been thinking about how to structure the HRI paper. Going to do some reading this week to help me frame the problem.

Agenda for 5/04/10

Real-space navigation project

All kinds of cool progress, leading to a web-enabled navigation interface.

It even works well for overhead localization, except for distance problems: AR tag identification is a little dodgy from far away. This can be helped by revamping the AR tags to be more distinctive, which we should do for all kinds of reasons anyhow.

In order to have measurement-free camera localization, however, I'll have to implement a mode that takes into account all three axes of AR tag rotation. At the moment, the cameras can locate tags based on their own location, but they can only locate themselves under the assumption that they're looking at AR tags right-side up. Fixing that is not such a tall order.
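The missing piece is just composing all three rotations before inverting the tag-to-camera transform. A sketch of the idea with plain 3x3 matrices; the axis conventions and rotation order here are assumptions, not AR Toolkit's:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def camera_rotation_from_tag(roll, pitch, yaw):
    """Full 3-axis tag orientation -> camera orientation in the tag frame.

    The camera's rotation relative to the tag is the inverse (for a
    rotation matrix, the transpose) of the tag's rotation relative to
    the camera -- no right-side-up assumption required.
    """
    R_tag = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
    return transpose(R_tag)
```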

I'm now starting to work on location-based multi-robot coordination.

Other efforts

This was a week where I organized the spending of ridiculous moneys. We'll have Apples, netbooks, and robot servos, oh my! ...soon.

Agenda for 4/27/10

Real-space navigation project

Notice the upgrade... not just vision, but navigation.

We now have a system that can fuse odometry and AR tag imagery to maintain location estimation on an arbitrary coordinate plane.

At the moment, it simply uses the video system to provide location estimation when it's available, and maintains odometry for the times when it has no accurate visual cues.

Certainly, it could be worked into a principled Kalman-filter-based all-source fusion georeferencing and location system, and we might want to go that way someday. What we have already, though, gets us a lot.
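The switching logic is essentially this (a sketch only; the real node works over ROS topics and the transform framework):

```python
class FusedPose:
    """Keep a pose estimate: trust vision when available, else dead-reckon."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def on_odometry(self, dx, dy):
        # Always accumulate odometry; it is the fallback between visual fixes.
        self.x += dx
        self.y += dy

    def on_visual_fix(self, x, y):
        # A visual fix overrides the accumulated odometry outright.
        self.x, self.y = x, y
```

A Kalman-filter version would weight the two sources by their covariances instead of letting vision win outright, which is the "principled" upgrade mentioned above.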

_And_ it can be integrated into the CS 148 course materials.

Not _quite_ ready for a demo as I type this, but unless I run into an unforeseen problem, I should have one for you tomorrow.

Other efforts

The reason I had to include the caveat above is that I have been grinding out ICDL reviews, which are now finished.

Once I manage to connect with Dawn, I'll put together a robot package to send to Ikwuagwu.

Agenda for 4/20/10

Real-space vision project

Gscam is up and running and current on experimental. The internal Gscam documentation gives directions on setting up the whole pipeline.

I've implemented the matrix transformations and trigonometry to reconstruct distance based on camera calibration. It works... almost. Two problems:

The camera calibration is sensitive, and my crummy printed chessboard isn't a very good calibrator. Too small, floppy and imperfectly printed. The calibrations are therefore not very consistent. I need to get my paws on a well-constructed calibrator; I suspect that it will work much better.

The AR tag shape files weren't built quite right, so the current version of ar_alpha is misidentifying which corner is which. It'll be easy to fix, though the existing shape captures will have to be redone.

Even if calibration proves trickier than we expect, the failure mode is simply that the calibrator miscalculates focal lengths and principal point coordinates. That is a linear scalar error and will be trivial to work around. Hopefully, for general release usage, we can depend on ROS's calibrations, but if not, we can perfectly well do our own.

Practical upshot: we have a working realspace estimator. It needs more testing and tweaking.
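The underlying reconstruction is the pinhole relation: apparent size shrinks linearly with distance. A sketch, assuming a calibrated focal length in pixels and a tag of known physical size (function and parameter names are illustrative):

```python
def distance_to_tag(focal_length_px, tag_size_m, tag_size_px):
    """Estimate range to a (roughly head-on) AR tag via the pinhole model.

    focal_length_px: focal length from camera calibration, in pixels.
    tag_size_m: physical edge length of the tag, in meters.
    tag_size_px: observed edge length in the image, in pixels.
    """
    # Pinhole camera: size_px / focal_px == size_m / distance_m
    return focal_length_px * tag_size_m / tag_size_px

# Note: a miscalibrated focal length produces a purely linear scale error
# in the result, which is why a single scalar factor can correct it.
```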

Other efforts

I was asked to do an emergency IROS review with a day's turnaround. Check.

Dawn will be back tomorrow; I'll coordinate getting the specced hardware purchase in the works.

ICDL reviews.

Agenda for 4/13/10

Real-space vision project

Most of my programming effort this week focused on rewriting the probe camera driver to reflect the ROS camera API. Now renamed "gscam" (the GStreamer camera), the node maintains hooks for camera calibration. There is still more to write and test, but in the next couple of days the calibration routines should be up and running.

My laptop is incapable of testing the camera pipeline, and Maria is way oversubscribed, so my testing has to wait until I'm home with my desktop computer. Last week I spent a couple mornings at home working, and will likely do the same this week until everything works.

Next up: going from camera calibration to realworld distances. I still need to figure out the best API to get this working with AR Alpha.

CS148 Textbook

I spent an afternoon troubleshooting the setup and installation directions, taking notes, and incorporating them into the text. I ran into network trouble on the Asus machine that I didn't have time to troubleshoot, so I didn't finish, but will soon.