Tag Archives: motion


I’m doing some very careful testing before I set Hermione loose live to fly in a circle. This morning, I’ve confirmed the video macro-block lateral motion tracking is working well.

For this first unpowered flight, I walked her forwards about 3m and then left by about the same. Note that she always pointed in the same direction; I walked sideways to get the left movement:

Forward – Left

For this second unpowered flight, again, I walked her forwards about 3m, but then rotated her by 90° CCW before walking another 3m. Because of the yaw, from her point of view, she only flew forwards, so the yaw doesn’t show up on the graph. This is exactly how it should be:

Forward – Yaw 90° CCW – Forward

So I’m happy the lateral motion tracking is working perfectly. Next I need to look at the target. I can do that with the same stats.

The only problem I had was that the sun needs to be shining brightly for the video tracking to ‘fly’ above the lawn; clearly it needs the high contrast of sunlit grass.

For each macro-block vector in the list, undo the yaw that happened between this frame and the previous one

Fill up a dictionary indexed by the un-yawed macro-block vectors

Scan the dictionary, identifying clusters of vectors and assigning scores, building a list of the highest-scoring vector clusters

Average the few highest-scoring clusters, redo the yaw that was removed in the first step, and return the resultant vector

Although this is quite a lot more processing, splitting it into five phases compared to yesterday’s code’s two means that between each phase, the IMU FIFO can be checked and processed if it’s filling up, thus avoiding a FIFO overflow.
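The steps above might look something like this in Python — a minimal sketch with assumed vector and bucket representations (the real code also interleaves the IMU FIFO checks between the phases):

```python
import math
from collections import defaultdict

def track_motion(vectors, yaw_increment):
    # Undo the yaw that happened between this frame and the previous
    # one, rotating every vector back by the gyro-measured increment.
    c, s = math.cos(-yaw_increment), math.sin(-yaw_increment)
    unyawed = [(x * c - y * s, x * s + y * c) for x, y in vectors]

    # Fill a dictionary indexed by the un-yawed vectors, rounded to
    # whole pixels so near-identical vectors share a slot.
    buckets = defaultdict(int)
    for x, y in unyawed:
        buckets[(round(x), round(y))] += 1

    # Scan the dictionary, scoring each bucket by its own count plus
    # its immediate neighbours', so clusters of similar vectors win.
    def score(key):
        kx, ky = key
        return sum(buckets.get((kx + dx, ky + dy), 0)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    ranked = sorted(buckets, key=score, reverse=True)

    # Average the highest-scoring clusters, weighted by score,
    # stopping once a cluster scores under half the best.
    best = score(ranked[0])
    ax = ay = total = 0.0
    for key in ranked:
        s_key = score(key)
        if s_key < best / 2:
            break
        ax += key[0] * s_key
        ay += key[1] * s_key
        total += s_key
    ax, ay = ax / total, ay / total

    # Redo the yaw so the result is back in the original frame.
    c, s = math.cos(yaw_increment), math.sin(yaw_increment)
    return (ax * c - ay * s, ax * s + ay * c)
```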

Before moving on to compass and GPS usage, there’s one last step I want to ensure works: lateral movement.

The flight plan is defined thus:

take-off in the center of a square flight plan to about 1m height

move left by 50cm

move forward by 50cm – this places her in the top left corner of the square

move right by 1m

move back by 1m

move left by 1m

move forwards by 50cm

move right by 50cm

land back at the take-off point.
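As a sketch, the flight plan above could be encoded as a list of per-phase earth-frame velocities and durations — the name, axis convention and 0.5m/s lateral speed here are illustrative assumptions, not the actual flight plan code:

```python
# Hypothetical flight plan encoding: (x velocity, y velocity, z velocity,
# duration in seconds), earth frame, x = forward, y = right, z = up.
FLIGHT_PLAN = [
    ( 0.0,  0.0,  0.33, 3.0),  # take off to ~1m height
    ( 0.0, -0.5,  0.0,  1.0),  # left 50cm
    ( 0.5,  0.0,  0.0,  1.0),  # forward 50cm - top left corner
    ( 0.0,  0.5,  0.0,  2.0),  # right 1m
    (-0.5,  0.0,  0.0,  2.0),  # back 1m
    ( 0.0, -0.5,  0.0,  2.0),  # left 1m
    ( 0.5,  0.0,  0.0,  1.0),  # forward 50cm
    ( 0.0,  0.5,  0.0,  1.0),  # right 50cm
    ( 0.0,  0.0, -0.33, 3.0),  # land back at the take-off point
]

def net_displacement(plan):
    # Integrate velocity * time per axis; a closed square plan that
    # lands where it took off should sum to zero on every axis.
    totals = [0.0, 0.0, 0.0]
    for vx, vy, vz, t in plan:
        totals[0] += vx * t
        totals[1] += vy * t
        totals[2] += vz * t
    return totals
```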

The result’s not perfect despite running the ground-facing camera at 640 x 640 pixels; to be honest, with lawn underneath her, I still think she did pretty well. She’s still a little lurchy, but I think some pitch / roll rotation PID tuning over the IKEA mat should resolve this quickly. Once again, you be the judge of whether she achieved this 34-second flight well enough.

A few test runs. In summary, with the LiDAR and camera fused with the IMU, Zoe stays over her play mat at a controlled height for the length of the 30s flight. Without the fusion, she lasted just a few seconds before she drifted off the mat, lost her height, or headed towards me with menace (a kill ensued). I think that’s pretty conclusive proof the fusion code works!

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then include only those vectors whose SAD is lower.
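That simplistic filter amounts to something like this — a sketch assuming each macro-block arrives as a (dx, dy, sad) tuple:

```python
def filter_by_sad(blocks):
    """Keep only the macro-block vectors whose SAD is below the frame
    average. A lower SAD means a better match between the two frames,
    so low-SAD vectors are the more trustworthy ones."""
    average_sad = sum(sad for _, _, sad in blocks) / len(blocks)
    return [(dx, dy) for dx, dy, sad in blocks if sad < average_sad]
```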

I’m quite surprised this works as well as it does, but I’m fairly sure it can be improved. There are four contributors to the content of a frame of macro-blocks.

yaw change: all macro-block vectors will circle around the centre of the frame

height change: all macro-block vectors will point towards or away from the centre of the frame

lateral motion: all macro-block vectors will point in the same direction in the frame

noise: the whole purpose of macro-blocks is simply to find the best-matching blocks between two frames; doing this with a chess board (for example) could well have any block from the first frame matching any of the 50% of like-coloured blocks in the second frame

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and thus be removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or lateral motion. By averaging the remaining vectors, we can pick those that are similar in distance / direction to the average vector. SAD doesn’t come into it.
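That improved selection might look like this — a hedged sketch in which `tolerance` is an assumed pixel threshold, not a tuned value:

```python
import math

def average_with_outlier_rejection(vectors, tolerance=2.0):
    # First pass: plain average. The yaw and height components cancel
    # out around the frame centre, leaving lateral motion plus noise.
    ax = sum(x for x, _ in vectors) / len(vectors)
    ay = sum(y for _, y in vectors) / len(vectors)

    # Second pass: keep only the vectors close to that average in
    # distance / direction, and re-average without the noise.
    kept = [(x, y) for x, y in vectors
            if math.hypot(x - ax, y - ay) < tolerance]
    if not kept:
        return ax, ay
    return (sum(x for x, _ in kept) / len(kept),
            sum(y for _, y in kept) / len(kept))
```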

This won’t be my first step however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

I added some filtering to the macro-block vector output, including only those with individual SAD values less than half the overall SAD average for that frame. I then took the rig for a walk around a 3m square (measured fairly accurately), held approximately 1m above the ground. The graph goes anti-clockwise horizontally from 0,0. The height probably descended a little towards the end, hence the overrun from >1000 to <0 vertically.

3m square walk

This is pretty near perfect and more than good enough to work out the scale from macro-block vectors to meters:
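Working out that scale is straightforward geometry: the patch of ground the camera sees grows linearly with height. A sketch, where the field-of-view value is an assumption to be calibrated against a real walk like the one above, not a measured constant:

```python
import math

def pixels_to_metres(dx_pixels, height_m, frame_width_pixels=640,
                     fov_degrees=53.5):
    """Convert a lateral macro-block shift in pixels to metres.

    The strip of ground across the frame is 2 * height * tan(fov / 2)
    wide, so metres-per-pixel scales with height. fov_degrees is an
    assumed horizontal field of view; calibrate for the actual lens.
    """
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_degrees) / 2.0)
    metres_per_pixel = ground_width_m / frame_width_pixels
    return dx_pixels * metres_per_pixel
```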

This is what I’d already worked out to take into account any tilt of the quadcopter frame and therefore LEDDAR sensor readings.

With this in place, this is the equivalent processing for the video motion tracking:

Horizontal compensation

The results of both are in the earth frame, as are the flight plan targets, so I think I’ll swap to earth-frame processing until the last step.
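Swapping a lateral increment between frames is just a rotation by the yaw angle; a minimal 2D sketch (the full version would use the complete rotation matrix including pitch and roll):

```python
import math

def body_to_earth(dx, dy, yaw):
    """Rotate a quadcopter-frame lateral increment into the earth frame
    using the current yaw angle in radians."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (dx * c - dy * s, dx * s + dy * c)
```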

One problem as I’m now pushing the limit of the code keeping up with the sensors: with diagnostics on and 1kHz IMU sampling, the IMU FIFO overflows as the code can’t keep up. This is with Zoe (1GHz CPU speed) and without LEDDAR.

LEDDAR has already forced me to drop the IMU sample rate to 500Hz on Chloe; I really hope this is enough to also allow LEDDAR, macro-block processing and diagnostics to work without FIFO overflow. I really don’t want to drop to the next level of 333Hz if I can help it.

Imagine a video camera facing the ground over a high colour / contrast surface. If the camera moves forward, the macro-blocks show forward movement. But if the camera stays stationary and pitches backwards, the macro-blocks show a similar forward motion, because the rearward tilt means the camera is now pointing forward of where it was pointing.

The motion processing code sees this rearward camera tilt as forward motion, and so applies more rearward tilt to compensate – I think you can see where this leads!

I’m obviously going to have to remove the perceived forward motion due to rearward pitch by applying some maths using the pitch angle, but I’m not quite sure what it is yet. The need for this is backed up by the fact the PX4FLOW has a gyro built in, so it too must have been using it for that type of compensation. The fact it’s a gyro suggests it may not be rotation matrix maths, but something simpler. Thinking hat on.

Just realized I’d solved exactly this problem for height: knowing the height (the PX4FLOW has a URF) and the incremental angle change in the quadcopter frame (the PX4FLOW has a gyroscope), you can calculate the distance shift due to the tilt, subtract it from the camera values, and thus come up with the real distance.
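That tilt compensation might look like this — a sketch assuming small angle increments and flat ground beneath the camera:

```python
import math

def remove_tilt_shift(camera_dx_m, height_m, pitch_increment_rad):
    """Subtract the apparent ground shift caused by a pitch increment.

    A camera at height h that tilts by dtheta sees the ground shift by
    roughly h * tan(dtheta) even with no lateral motion at all; removing
    that apparent shift leaves the real movement.
    """
    apparent_shift = height_m * math.tan(pitch_increment_rad)
    return camera_dx_m - apparent_shift
```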

Currently the macro-blocks are just averaged to extract lateral distance increment vectors between frames. Thanks to the fixed frame rate, these also give velocity increments. Both can then be integrated to produce absolutes. But I suspect there’s even more information available.

Imagine videoing an image like this multi-coloured circle:

Newton Disc

It’s pretty simple to see that if the camera moved laterally between two frames, it’d be pretty straightforward for the video compression algorithm to break the change down into a set of identical macro-block vectors, each showing the direction and distance the camera had shifted between frames. This is what I’m doing now by simply averaging the macro-blocks.

But imagine the camera doesn’t move laterally, but instead rotates while the distance between camera and circle increases. By rotating each macro-block vector according to its position in the frame relative to the centre point and averaging the results, what emerges is a circumferential component representing yaw change, and an axial component representing height change.

I think the way to approach this is first to get the lateral movement by simply averaging the macro-block vectors as now; the yaw and height components will cancel themselves out.

Then by shifting the contents of the macro-block frame by the averaged lateral movement, the axis is brought to the center of the frame – some macro-blocks will be discarded to ensure the revised macro-block frame is square around the new center point.

Each of the macro-block vectors is then rotated according to its position in the new square frame. The angle of each macro-block in the frame is pretty easy to work out (e.g. a 2×2 square has rotation angles of 45°, 135°, 225° and 315°; a 3×3 square has blocks rotated by 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°), so now averaging the X and Y axes of these rotated macro-block vectors gives a measure of yaw and size (height) change. I’d need to include distance from the centre when averaging out these rotated blocks.

At a push, even pitch and roll could be obtained because they would distort the circle into an oval.
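The circumferential / axial decomposition described above might be sketched like this, assuming the lateral movement has already been subtracted and each vector is keyed by its block’s position relative to the frame centre:

```python
import math

def yaw_and_height_components(vectors_by_position):
    # vectors_by_position maps each block's (x, y) position relative to
    # the frame centre to its motion vector (vx, vy). The tangential
    # component measures yaw change; the radial component measures
    # height change (blocks moving outwards = the ground getting closer).
    tangential = radial = 0.0
    n = 0
    for (x, y), (vx, vy) in vectors_by_position.items():
        r = math.hypot(x, y)
        if r == 0:
            continue  # the centre block carries no rotation / zoom signal
        rx, ry = x / r, y / r          # unit radial direction
        tx, ty = -ry, rx               # unit tangential direction (90° CCW)
        radial += vx * rx + vy * ry
        tangential += vx * tx + vy * ty
        n += 1
    return (tangential / n, radial / n) if n else (0.0, 0.0)
```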

Yes, there’s calibration to do, there’s a dependency on textured multicoloured surfaces, and the accuracy will be very dependent on frame size and rate. Nevertheless, in the perfect world, it should all just work(TM). How cool would it be to have the Raspberry Pi camera providing this level of information! No other sensors would be required except a compass for orientation, GPS for location, and LiDAR for obstacle detection and avoidance.

No, it’s not a secret code word; great flocks of geese are flying daily over my house in V-formation, honking all the way. It’s a magnificent sight if you can stand the noise. Around where I live there are many artificial lakes resulting from surface gravel mining, a condition of which is that, when finished, each mine is rebuilt as a lake to attract the wildlife, including the geese.

Anyway, their migration up north is a true sign of Autumn, and the weather maker has taken the hint, and the wind-speeds are up in the teens again. Which is annoying as I need to get the camera motion working smoothly outside before I can bring it inside for inclement weather testing.

Since the embarrassing flight from the other day, I’ve made 2 changes:

I’ve added complementary filters for the vertical and lateral motion velocity and distance inputs from the camera and LEDDAR – previously I was directly overriding the integrated IMU values, and this is the most likely cause (IMHO) of the poor flight quality.

I’ve ordered a multi-coloured picnic blanket which hopefully will make the job of the macro-block code much easier compared to the high-resolution green-grass it’s previously used.
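For what it’s worth, the core of a complementary filter is only a couple of lines; this sketch uses an illustrative time constant, not the tuning actually flown:

```python
def complementary_filter(imu_value, sensor_value, tau, dt):
    """Blend the fast-but-drifting integrated IMU value with the slower,
    drift-free camera / LEDDAR value. tau is the crossover time constant:
    the IMU dominates over periods shorter than tau, the other sensor
    over longer periods. dt is the update interval in seconds."""
    alpha = tau / (tau + dt)
    return alpha * imu_value + (1.0 - alpha) * sensor_value
```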

I am also considering increasing the video resolution (320 x 320) and frame rate (10Hz) to 480 x 480 or 640 x 640 @ 20Hz. But there’s a balance here to ensure this still allows enough spare time for the processing of the IMU sensor data coming in at 100Hz.