
Status reports

Thursday 29.07.2010

We gave a demo showing autonomous grasping of a Lego piece with the right arm. The grasp was a simple routine using three fingers. Giacomo finished a more complex hand-control module that will be integrated later. We also started calibrating the left arm, but had to stop to let Ugo and Stephane show their demo. The arm-calibration routines were also shown in a separate demo.

Serena showed her 'clapping' behavior and force detection, as well as some routines for exploring positions for inserting the Legos.

We think it is possible to put the Lego pieces together using visual servoing, but force control is a much better solution: it is essentially a 'peg-in-hole' problem.

Wednesday 28.07.2010

We are considering changing our team name to: "The Calibrators!"

Last night, we managed to do autonomous grasping from the table. It required a lot of calibration:

* arm encoders (for our forward kinematics)
* cameras (for ARToolKit)
* hand/eye calibration (of course the previous two did not match)

Timelapse of the night before the demo: this is how the last details were finished, in a hacking night. The tricky part was making a module that learns a mapping between the forward kinematics and the position obtained from the cameras. In the video you can see the iCub following its hand with the head; while it does so, we are learning the map.
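The mapping step can be sketched like this. This is a minimal pure-Python version that only learns a constant offset between the two position estimates; the real module learned a richer mapping, and all names and sample data here are made up:

```python
# Sketch: learn a constant offset between forward-kinematics hand positions
# and the positions seen by the cameras. Illustrative only; the actual
# module learned a fuller map while the head tracked the hand.
def fit_offset(fk_points, cam_points):
    """Average difference between camera and forward-kinematics positions."""
    n = len(fk_points)
    return tuple(
        sum(cam[i] - fk[i] for fk, cam in zip(fk_points, cam_points)) / n
        for i in range(3)
    )

def correct(fk_point, offset):
    """Map a forward-kinematics position into camera coordinates."""
    return tuple(fk_point[i] + offset[i] for i in range(3))

# While the head tracks the hand, pairs of (fk, camera) positions accumulate:
fk_samples  = [(0.0, 0.1, 0.2), (0.1, 0.0, 0.3), (-0.1, 0.2, 0.25)]
cam_samples = [(0.02, 0.09, 0.215), (0.12, -0.01, 0.315), (-0.08, 0.19, 0.265)]
offset = fit_offset(fk_samples, cam_samples)
print(offset)  # roughly (0.02, -0.01, 0.015)
```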

Repository

Get the team repo like this:

git clone gitosis@10.0.0.217:vvv10repo.git

For this to work, give Alexis your public ssh key.

Sunday 25.07.2010

Started closing the loop between marker positions and hand positions. In the first experiments we found some offsets that we hope to solve on Monday. Our code for moving the iCub to watching positions, reaching, and the high-level state machine is still working.

The ARToolKit program reliably detects the positions of our 4 markers at distances of up to 40-50 cm from the eyes.

We need a bit of extra light (thanks to Paul for the desk lamps).

Since we have two markers on each Lego piece, we have a program that receives their positions and reports the position of the piece even if we only see one marker, and an average if we see both. A speed/acceleration filter reduces false positives.
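The fusion and filtering logic can be sketched as follows. The marker offsets and the speed threshold are invented for illustration, not the values we actually use:

```python
# Sketch of the two-marker fusion with a simple speed gate.
# Marker geometry and threshold are made-up illustrative values.
OFFSET = {1: (0.0, 0.02, 0.0), 2: (0.0, -0.02, 0.0)}  # marker -> piece centre, metres
MAX_SPEED = 0.5  # m/s; jumps faster than this are treated as false positives

def piece_position(detections):
    """detections: {marker_id: (x, y, z)}. Returns the estimated piece centre."""
    estimates = [tuple(p[i] + OFFSET[m][i] for i in range(3))
                 for m, p in detections.items() if m in OFFSET]
    if not estimates:
        return None
    # Average when both markers are seen, otherwise use the single estimate.
    return tuple(sum(e[i] for e in estimates) / len(estimates) for i in range(3))

def speed_gate(prev, curr, dt):
    """Reject a new estimate that implies an implausible speed."""
    if prev is None:
        return curr
    dist = sum((c - p) ** 2 for p, c in zip(prev, curr)) ** 0.5
    return prev if dist / dt > MAX_SPEED else curr
```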

Looking at the lego pieces on the table:

We also checked the relative position between the forward kinematics of the arm and the estimated position of a marker. After moving the arm to many positions, we measured an error of ±1.5 cm. Picture of the experiment:

Saturday 24.07.2010

We have tried to extract the best quality images possible from the Dragonfly2 cameras on the iCub.

There are 3 limiting factors:

* Firewire bus bandwidth
* Ethernet bandwidth
* CPU on the PC104

1. Firewire bus bandwidth:

The PC104 has one 1394a (400Mbps) bus. This has to be divided between two cameras.

So, if you are working at 640x480 RGB8 (3 bytes/pixel) @ 30fps, you need 210Mbps per camera. That would be 420Mbps for two cameras, which exceeds the maximum.

Another option is YUV422, which uses 1.5 bytes per pixel; then you need only 105Mbps per camera. We have added a video type to the icubmoddev executable to support it.
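The arithmetic behind these figures is just pixels × bytes × frames; the quoted numbers come out if megabits are counted as 2^20 bits:

```python
# Bandwidth per camera = width * height * bytes/pixel * fps * 8 bits,
# expressed in Mbit/s with 1 Mbit = 2**20 bits (as the figures above imply).
def bandwidth_mbit(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps * 8 / 2**20

rgb8 = bandwidth_mbit(640, 480, 3, 30)      # ~210.9 -> the "210Mbps" figure
yuv422 = bandwidth_mbit(640, 480, 1.5, 30)  # ~105.5 -> the "105Mbps" figure
print(round(rgb8, 1), round(yuv422, 1), round(2 * rgb8, 1))
```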

Even better is to read RAW data from the sensor, and apply the debayer pattern on the PC104.

The Dragonfly2 cameras have a 12-bit ADC, so they support RAW8 and RAW16 modes (8 and 16 bits per pixel, respectively). We have added RAW16 support to icubmoddev, with debayering done on the PC104 computer.

The RAW16 mode should get the highest possible quality out of the cameras, but you need to adjust brightness, contrast, white balance, etc. in your own software. (The normal adjustments in the framegrabber program tell the camera how to apply the debayer pattern before transferring RGB or YUV data.)
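For illustration, here is a minimal de-Bayer step on RAW8 data, assuming an RGGB pattern (the actual Dragonfly2 layout and the icubmoddev implementation may differ). Each 2x2 cell collapses to one RGB pixel, trading resolution for simplicity:

```python
# Minimal de-Bayer sketch for RAW8 data, assuming an RGGB mosaic:
#   R G     Each 2x2 cell yields one RGB pixel; the two greens are
#   G B     averaged. Output is half resolution in each dimension.
def debayer_rggb(raw, width, height):
    """raw: flat list of RAW8 values, row-major. Returns rows of RGB tuples."""
    rgb = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r  = raw[y * width + x]
            g1 = raw[y * width + x + 1]
            g2 = raw[(y + 1) * width + x]
            b  = raw[(y + 1) * width + x + 1]
            row.append((r, (g1 + g2) // 2, b))
        rgb.append(row)
    return rgb

# A single 2x2 RGGB cell becomes one RGB pixel:
print(debayer_rggb([10, 20, 30, 40], 2, 2))  # -> [[(10, 25, 40)]]
```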

2. Ethernet bandwidth:

For two cameras at 640x480 @ 30fps we are sending 421Mbits/sec over the network. Serializing all this data uses a lot of CPU power!

3. CPU consumption

The most efficient camera mode from the CPU point of view is RGB8 since icubmoddev only does a memcpy from the camera buffer to the internal buffer.

The YUV mode does a mode conversion (YUV422->RGB8) before doing the memcpy.

The RAW8 and RAW16 modes apply a debayer pattern before doing the memcpy.

The mode we settled on:

We were looking for a nice balance between all of these limitations, so we set the cameras like this:

* Resolution: 640x480
* Format: Format7 0
* Color coding: RGB8

We leave the framerate setting empty because in Format7 the framerate is automatically selected to use the maximum available bandwidth. The packet data size is set to 44%, so that even with a little overhead, two cameras share almost all the bus bandwidth. The resulting framerate is 19fps.

The CPU usage for the icubmoddev process for each camera is ~ 30-40%.

The Network bandwidth is: 133.6Mbps per camera.

So we are using 267Mbps for both cameras, or 26.7% of the gigabit ethernet bandwidth.
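A quick cross-check of these figures (again counting megabits as 2^20 bits, as the earlier numbers imply):

```python
# At 19 fps RGB8, the per-camera network load and the total for both
# cameras match the quoted 133.6 Mbit/s and ~267 Mbit/s figures.
def net_mbit(width, height, bytes_per_pixel, fps, cameras=1):
    return width * height * bytes_per_pixel * fps * 8 * cameras / 2**20

per_cam = net_mbit(640, 480, 3, 19)           # ~133.6 Mbit/s per camera
total = net_mbit(640, 480, 3, 19, cameras=2)  # ~267 Mbit/s for both
print(round(per_cam, 1), round(total))
```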

Camera calibration parameters

We also adjusted the camera calibration parameters in the XML file used by the 'camcalib' program. We recommend using the ports:

/icub/camcalib/left/out
/icub/camcalib/right/out

Since the camcalib software runs on another computer, we avoid having multiple connections getting camera images from the robot.

Wishlist

What we would like best is for the PC104 computer to receive RAW8 (or RAW16) data and send it as-is over the network, reducing both the network bandwidth and the CPU cost of serializing the images. The clients would then apply the debayer function on their own. (Or this could be done transparently in the FrameGrabber interface, as suggested by Lorenzo.)

Thursday 22.07.2010

Finally managed to calibrate the cameras of the iCub at 640x480 and get precise values from the ARToolKit program. More light is good! A flat CalTab is important!
Vvv10_camera_calibration

Also got the kinematic tree of the iCub loaded, managed to detect the markers at a useful distance
and tested the marker to world coordinate transformations.
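The transform chain can be sketched as composing two homogeneous transforms: the camera pose in the world (from the kinematic tree) with the marker pose in the camera (from ARToolKit). The poses below are made-up illustrative values, not our actual calibration:

```python
# Compose T_world_marker = T_world_cam @ T_cam_marker, then map the marker
# origin into world coordinates. Pure-Python 4x4 matrices, toy values.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(T, p):
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Camera sitting 1 m above the origin, looking along world +x (made-up pose).
T_world_cam = [[0, 0, 1, 0],
               [1, 0, 0, 0],
               [0, 1, 0, 1.0],
               [0, 0, 0, 1]]
# Marker 0.4 m in front of the camera (camera z points forward here).
T_cam_marker = [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0.4],
                [0, 0, 0, 1]]
T_world_marker = mat_mul(T_world_cam, T_cam_marker)
print(transform_point(T_world_marker, (0, 0, 0)))  # -> (0.4, 0.0, 1.0)
```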

The iCub looking for lego pieces:

Here is what it looks like in rviz:

And an example rectified image from the camera. The marker in the hand was detected well.