Tag Archives: motion


On the Goal post flight video, I’d forgotten to include the fusion, so the video is purely of the integrated (acceleration – gravity). Still a good flight, so I’ll leave it in place. However, I’ve now just rerun the flight with the fusion in place and it’s a lot less pretty, so there’s more work to do. Apologies for any excitement -> disappointment transitions caused – I assure you mine is worse!

The paint had hardly dried on the raspivid solution for unbuffered macro-block streaming before a conversation on the Raspberry Pi forums yielded the way to get picamera to stream the video macro-blocks without buffering: explicitly open an unbuffered output file rather than leaving picamera to second-guess what’s needed.
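A minimal sketch of that idea (the helper name is mine, not from the real code): opening the output with `buffering=0` means every macro-block write goes straight to the OS rather than sitting in a Python-level buffer until it fills.

```python
import io

def open_unbuffered(path):
    """Open a binary output with no Python-side buffering, so each
    macro-block write from picamera reaches the OS FIFO immediately."""
    return io.open(path, 'wb', buffering=0)

# On the Pi this would then be handed straight to picamera, e.g.:
#
#   with picamera.PiCamera(resolution=(640, 640), framerate=10) as camera:
#       camera.start_recording('/dev/null', format='h264',
#                              motion_output=open_unbuffered('/tmp/motion_fifo'))
```

picamera accepts any file-like object for `motion_output`, so passing our own unbuffered file replaces whatever buffering picamera would otherwise pick for us.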

Yes, she drifted forward into the football goal over the course of the 20 seconds, but that’s 4 times longer to drift that far, so frankly I’m stunned by how good the flight was! I have ideas as to why, and those are the subject of my next post.

Courtesy of a discussion with 6by9 on the Raspberry Pi forum, the motion tracking for Zoe is now working using raspivid which has an option to not buffer the macro-block output. Here’s a plot of a passive flight where she moves forward and back (X) a couple of times, and then left and right (Y).

Macro-block vs accelerometer

The plot clearly shows the macro-blocks and accelerometer X and Y readings are in sync (if not quite at the same scale), so tomorrow’s the day to set Zoe loose in the garden with the motors powered up – fingers crossed no cartwheels across the lawn this time!

I flew Zoe over the weekend with the camera motion running, and it was pretty exciting watching her cartwheel across the lawn – clearly there’s more to do in the testing before I try again!

So I did a passive flight in the ‘lab’ just now and got these stats; I had hoped to show a comparison of the accelerometer measured distances vs the camera video distances, but that’s not what I got:

Motion stats

The lower 2 graphs are the interesting ones: the left one shows how bad the integration error is with the accelerometer – all the details in the accelerometer are swamped by integrated offset errors. It also shows that we are only getting data from the video every four seconds.

So I did a test with my raw motion code (i.e. no overheads from the other sensors in the quadcopter code), and it showed those 4-second batches contain 39 samples – almost exactly 4 seconds’ worth at the configured 10Hz frame rate – so clearly there’s some buffering of the video output.

So next step is to work out how to identify whether it’s the FIFO or the video that’s doing the buffering, and how to stop it!

These two show the motion tracking results, and the motion processing loop intervals.

Each spike in the dt (time interval) graph happens at about 10Hz (the frame rate for the video) and shows the camera data processing; this is only triggered when there is camera motion data available, not simply based on a 10Hz timer. As you can see this is there from the start. The horizontal units here are the number of motion processing loops i.e. the spikes start right from the word go, not after 4 seconds as suggested by the top graph.

The silence in the top graph’s first 4 seconds thus suggests the camera macro-blocks simply are not detecting motion at this point. The flight plan is a 2 second take-off, 12 second hover and 2 second descent, but I had to abort it after 5 seconds due to instability: Zoe is flapping around both pitch (blue) and roll (orange). The instability grows during the flight, so the 4 second point is likely where the low resolution camera macro-blocks actually start to see the instability.

Currently, the camera motion processing is still not fed into the PIDs, so the flapping is not caused by the camera motion. The next step is to include it in the PIDs and see if it actually works.

Finally, there’s one fly in the ointment: the IMU FIFO overflow triggers every other flight; this is almost certainly not due directly to the camera itself, but to how I’m managing the FIFO and its overflow interrupt. My first couple of attempts to control this have failed, so I’ll have to keep stabbing in the dark.

You can just see the brown video cable looping to the underside of the frame where the camera is attached.

She’s standing on a perspex box as part of some experimenting as to why this happens:

Lazy video feed

It’s taking at least 4.5 seconds before the video feed kicks in (if at all). Here the video data is only logged. What’s plotted is the cumulative distance; what’s shown is accurate in time and space, but I need to investigate further why there’s a delay. It’s definitely not to do with starting up the camera video process – I already have prints showing when it starts and stops, and those happen at the right time; it’s either related to the camera processing itself, or to how the FIFO works. More anon as I test my ideas.

Back from DisneyLand where it was 35°C in the shade. It actually turned out to be fun, even cool at times, and gave me plenty of thinking time, the net result of which is that I’ve changed the main scheduling loop, which now:

polls the IMU FIFO to check how many batches of sensor data are queued up there; the motion processing runs every 10 batches

if the IMU FIFO has less than 10 batches, select.select() is called, listening on the OS FIFO of the camera macro-block collection process; the timeout for the select.select() is based upon the IMU sampling rate, and the number of IMU FIFO batches required to reach 10.

The select.select() wakes either because

there are now >=10 batches of IMU FIFO data present, triggering motion processing

there are macro-block data on the OS FIFO, which updates the lateral PID distance and velocity input.

Even without the camera in use, this improves the scheduling because now the motion processing happens every 10 batches of IMU data, and it doesn’t use time.sleep() whose timing resulted in significant variation in the number of IMU FIFO batches triggering motion processing.
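The steps above can be sketched as a single scheduling decision – the names and the simplified IMU interface here are mine (the real code polls the IMU’s FIFO count over I2C), but the shape is: process motion when 10 batches are ready, otherwise block in select.select() on the camera’s OS FIFO with a timeout sized to the shortfall.

```python
import select

IMU_SAMPLE_RATE = 500.0   # Hz - assumed IMU batch rate for illustration
BATCH_TARGET = 10         # run motion processing every 10 IMU FIFO batches

def schedule_once(imu_batches, camera_fd):
    """One pass of the scheduler: returns 'motion', 'camera' or 'timeout'.

    imu_batches - number of batches currently queued in the IMU FIFO
    camera_fd   - file descriptor of the macro-block OS FIFO
    """
    if imu_batches >= BATCH_TARGET:
        return 'motion'                 # enough IMU data: run motion processing

    # Wait for camera data, but only for as long as it takes the IMU
    # to accumulate the remaining batches.
    shortfall = BATCH_TARGET - imu_batches
    timeout = shortfall / IMU_SAMPLE_RATE
    readable, _, _ = select.select([camera_fd], [], [], timeout)
    if readable:
        return 'camera'                 # macro-blocks ready: update lateral PIDs
    return 'timeout'                    # IMU FIFO should now hold >= 10 batches
```

Unlike a time.sleep() of a fixed duration, the select.select() timeout tracks how many IMU batches are still outstanding, which is what removes the variation in batch counts at each motion-processing pass.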

I’m taking this integration carefully step by step because an error could lead to disastrous, hard to diagnose behaviour. Currently the camera FIFO results are not integrated with the motion processing, but instead are just logged. I hope during the next few days I can get this all integrated.

Note that due to some delivery problems, this is all being carried out on Zoe with her version 2 PiZero.

Update: Initial testing suggests a priority problem: motion processing now takes nearly 10ms, which means the code doesn’t reach the select.select() call, but instead simply loops on motion processing. This means that by the time the OS FIFO of macro-blocks finally gets read, there are possibly several sets backed up and out of date. I’ll change the scheduling to prioritize reading the OS FIFO and allow the IMU FIFO to accumulate more samples.

We’re off on holiday tomorrow, so I’m leaving myself this note to record the state of play: the new A+ (512MB RAM, overclocked to 1GHz) is set up with Chloe’s SD card running March Jessie, renamed Hermione. New PCBs have arrived and one is made up. The new PCB is installed and has passed basic testing of I2C and PWM.

To do:

install LEDDAR and Pi Camera onto the underside

update python picamera to the new version

test motion.py on hermione

merge motion.py with X8.py

work out why udhcpd still doesn’t work on May Jessie-lite (different SD card)

Another walk up the side of the house, but this time walking a square as best I could, finishing where I started, and as you can see, the camera tracked this amazingly well – I’m particularly delighted that the start and end points of the square are so close. The units are pretty accurate too.

I’m now very keen for Hermione’s parts to arrive, as I suspect this is going to work like a dream, both stabilising long term hover, and also allowing accurate traced flight plans with horizontal movement. Very, very excited!

Shame about the trip to DisneyLand Paris next week – I’m not going to get everything done before then, which means Disney is going to be more of a frustrating, annoying waste of my time than usual!

I’ve reworked the code so that the video is collected in a daemonized process and fed into a shared memory FIFO. The main code for processing the camera output is now organised to be easily integrated into the quadcopter code. This’ll happen for Hermione once I’ve built her with a new PCB and a 512MB A+. I suspect the processing overhead of this is very light, given that most of the video processing happens in the GPU, and the motion processing is some very simple averaging.
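For reference, the GPU’s macro-block output has a fixed per-block layout (a signed byte each for the x and y motion vector, plus a 16-bit SAD value), so the “very simple averaging” can be a couple of numpy lines. A sketch assuming that format – the function name is mine, not from the flight code:

```python
import numpy as np

# One record per 16x16 macro-block: signed x/y motion vector + SAD
MB_DTYPE = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

def average_motion(frame_bytes):
    """Average the per-block inter-frame motion vectors for one frame,
    giving a single (dx, dy) estimate of how far the camera (and hence
    the quadcopter) shifted between frames."""
    blocks = np.frombuffer(frame_bytes, dtype=MB_DTYPE)
    return float(blocks['x'].mean()), float(blocks['y'].mean())
```

Reading one frame’s worth of bytes from the shared memory FIFO and passing it through a function like this per 10Hz frame is cheap enough that the Python side adds very little on top of the GPU’s work.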