Here we create an octree initialised with its voxel dimension, give it an input cloud to work with, and get back a vector of the centroids of all the occupied voxels. This vector can then be used to construct a new cloud.

Convergence

In both cases the size of the output cloud is unknown beforehand: it depends on the choice of voxel leaf dimension. Only after downsampling can you query the cloud for its size.

An iterative method is needed if you want to end up with a given size: perform one of the downsampling methods and, based on the resulting output size, adjust the leaf dimension up or down and downsample again. Stop iterating when the output cloud size converges to within a tolerance of the required size, or of a percentage of the input.
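The loop can be sketched in a few lines of shell. Here a stub stands in for the real downsampling step (in practice you would run the PCL voxel filter and count the points of the result); the stub pretends the output size scales as input_size / leaf³, and the leaf dimension is bisected over integer millimetres:

```shell
#!/bin/bash
# Stub standing in for the real voxel downsample: given a leaf size in mm,
# return a pretend output cloud size of input_size / leaf^3.
downsample_count() {
    local input_size=1000000
    echo $(( input_size / ($1 * $1 * $1) ))
}

target=2000        # required output cloud size
tolerance=100      # acceptable deviation from the target
lo=1; hi=100       # bracketing leaf sizes: a small leaf gives a big cloud

while (( lo <= hi )); do
    leaf=$(( (lo + hi) / 2 ))
    count=$(downsample_count "$leaf")
    diff=$(( count - target )); (( diff < 0 )) && diff=$(( -diff ))
    if (( diff <= tolerance )); then
        echo "leaf=${leaf}mm gives ${count} points"
        break
    elif (( count > target )); then
        lo=$(( leaf + 1 ))     # cloud too big: try coarser voxels
    else
        hi=$(( leaf - 1 ))     # cloud too small: try finer voxels
    fi
done
```

Because output size decreases monotonically as the leaf grows, a bisection converges in a handful of downsampling passes rather than a linear scan over leaf sizes.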

What stood out to me was that taking on this project was a prime example of the Pareto principle, aka the 80/20 rule:

Only 20% of the time and work went into getting a functional, working program that fully satisfied my curiosity. I could have left it at that, and I normally do.

The remaining 80% of the time went into refining, profiling, benchmarking, finishing touches, preparing an article, and everything else that makes it more than just a hobby program.

This was a very satisfying project for me for various reasons. It interests me; it potentially has uses in my day job; it's a good demo of my C++ and STL programming skills; and it adds to my career portfolio.

I set out to make a time lapse video covering a year of foliage changes in the garden of the apartment block where I live. I used a Raspberry Pi with a camera module.

It took six photos each day, at the same times each day: 12pm, 12.30pm, 1pm, 1.30pm, 2pm and 2.30pm. My thinking was that I'd pick the best photo from each day. In the end I chose the images from one time slot instead. The photos are then merged into a video.

This post details the making of the video, mostly for me to have a record of the commands and settings I used.

Enclosure

It's vitally important that the camera module doesn't move at all over the course of the time lapse. For this, I made an enclosure for the Pi to be installed in the window aperture. The camera module is positioned behind a hole in the side.

Setup and networking

I wanted the Pi connected to my home WiFi network in order to communicate with it, and for it to be internet enabled for progress emails. I used a WiFi dongle for this and mostly just followed the instructions here: Automatically Connect a Raspberry Pi to a Wifi Network

Taking a photo

The built-in raspistill command is used. Support for the camera module is enabled when the Pi is first set up with raspi-config.
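A sketch of the capture step. The -w/-h/-o flags are standard raspistill options (2592×1944 is the V1 camera module's full resolution); the image directory and the timestamped filename scheme are my assumptions, and the command is echoed here as a dry run:

```shell
#!/bin/bash
# Build a timestamped filename, e.g. 2016-06-01_1200.jpg (scheme assumed)
IMAGE_DIR=/home/pi/images
FILENAME="$(date +%Y-%m-%d_%H%M).jpg"

# Full-resolution still to the images directory
CMD="raspistill -w 2592 -h 1944 -o ${IMAGE_DIR}/${FILENAME}"
echo "$CMD"    # dry run; on the Pi, run $CMD itself
```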

Here, if the destination is not already a mount point, the drive is mounted. Then the contents of the images directory get rsync'ed to the mounted drive. The drive stays mounted until the Pi is switched off.
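The backup step described above might look like the following sketch. The device name and paths are assumptions from a typical setup; the mount-point test uses the standard mountpoint utility:

```shell
#!/bin/bash
# Hypothetical mount point and source directory
BACKUP_MNT=/mnt/usbdrive
IMAGE_DIR=/home/pi/images

if [ -d "$BACKUP_MNT" ]; then
    # Mount the drive only if the destination is not already a mount point
    mountpoint -q "$BACKUP_MNT" || mount /dev/sda1 "$BACKUP_MNT"
    # Mirror the images directory onto the drive; no unmount afterwards,
    # so the drive stays mounted until the Pi is switched off
    rsync -a "$IMAGE_DIR"/ "$BACKUP_MNT"/images/
fi
```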

Local storage capacity considerations

With each photo being about 4.5 MB, six photos a day comes to roughly 27 MB a day, or close to 10 GB over a year, enough to run out of space both on the Pi's internal storage and on the USB backup drive. So I wanted to check the current disk usage daily, using the df command.

#!/bin/bash
# Filename: df.sh
/bin/df -h > /home/pi/df.txt

Here, the human-readable output is sent to a local file for a later scheduled email.
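A possible refinement, assuming GNU df: the --output option restricts the report to just the columns worth emailing. The output path here is a stand-in for /home/pi/df.txt:

```shell
# Report only the root filesystem's device, usage percentage and free space
OUT="$HOME/df.txt"
df -h --output=source,pcent,avail / > "$OUT"
```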

Periodically I would then ssh in manually and copy the images to another computer, freeing up space on the Pi and the USB drive.

Daily emails

I wanted the day's thumbnails emailed to me every day as confirmation that it was all working correctly. ssmtp does the emailing, and mpack enables email attachments.

sudo apt-get install ssmtp mpack

The email sending needs a mail server. Strangely, sending to a Gmail address using the Gmail server wasn't working; I'd get permission and authentication errors. However, I did succeed in sending to a Gmail address using a Yahoo account for the server. The ssmtp config looks like this.
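A minimal sketch of an /etc/ssmtp/ssmtp.conf for sending through a Yahoo account (the address and password are placeholders; smtp.mail.yahoo.com on port 587 with STARTTLS is Yahoo's standard submission endpoint):

```
root=myaccount@yahoo.com
mailhub=smtp.mail.yahoo.com:587
AuthUser=myaccount@yahoo.com
AuthPass=mypassword
UseSTARTTLS=YES
```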

Here, a photo is taken on the hour and half-hour between 12pm and 2.30pm. At 3pm a thumbnail is made, and at 3.15pm the backup is made to the USB drive. The current disk usage is saved to a file a minute before being sent as an email, together with the thumbnail as an attachment.
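A crontab implementing that schedule would look something like this. The script names and the 3.30pm email time are my assumptions; only the photo, thumbnail, backup and df times are fixed by the description above:

```
# m    h     dom mon dow  command
0,30   12-14  *   *   *   /home/pi/camera.sh     # six photos, 12pm-2.30pm
0      15     *   *   *   /home/pi/thumbnail.sh  # make the day's thumbnail
15     15     *   *   *   /home/pi/backup.sh     # rsync to the USB drive
29     15     *   *   *   /home/pi/df.sh         # disk usage to file
30     15     *   *   *   /home/pi/email.sh      # email thumbnail + df.txt
```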

Merge into a video

After a year of the above I had all the photos I needed. To assemble the images into a video I used ffmpeg, this time on a Linux VirtualBox instance.

sudo apt-get install ffmpeg

I copied all the images of the same time slot into their own directories and renamed them with a sequential identifier, i.e. I'd have all the 12pm photos in one directory, named 1200_1.jpg, 1200_2.jpg, 1200_3.jpg … 1200_365.jpg, and the same for the 12.30pm, 1pm, 1.30pm, etc. photos.
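The copy-and-renumber step can be scripted. A sketch, assuming filenames that end in the time slot (e.g. 2016-01-01_1200.jpg) so that alphabetical glob order matches date order; the directory names are placeholders:

```shell
#!/bin/bash
# renumber SRCDIR GLOB DSTDIR PREFIX: copy the files in SRCDIR matching
# GLOB, in sorted (i.e. date) order, to DSTDIR/PREFIX_1.jpg, PREFIX_2.jpg, ...
renumber() {
    local src=$1 glob=$2 dst=$3 prefix=$4 n=1 f
    mkdir -p "$dst"
    for f in "$src"/$glob; do
        [ -e "$f" ] || continue      # no matches: skip the literal glob
        cp "$f" "$dst/${prefix}_${n}.jpg"
        n=$(( n + 1 ))
    done
}

# e.g. all the 12pm photos into their own sequentially numbered directory
renumber images '*_1200.jpg' 1200 1200
```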

Here, the images with filenames starting with a given identifier are stitched together at a frame rate of 15, with a 1280×960 frame size and x264 encoding, and written to an output .mp4 file.
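The invocation was along these lines, reconstructed for the 12pm slot; all flags are standard ffmpeg options, and the command is echoed here as a dry run:

```shell
#!/bin/bash
# -framerate 15  : 15 input images per second of video
# -i 1200_%d.jpg : matches 1200_1.jpg, 1200_2.jpg, ... in sequence
# -s 1280x960    : scale the output frames
# -c:v libx264   : H.264 (x264) encoding
CMD="ffmpeg -framerate 15 -i 1200_%d.jpg -s 1280x960 -c:v libx264 1200.mp4"
echo "$CMD"    # dry run; run $CMD in the directory holding the images
```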

Conclusion

I was expecting the changing foliage over the seasons to be the dominant feature. Instead I found the changing length of the shadows over the seasons more impressive.

Disappointing is the flicker that occurs between overcast and sunny days. On overcast days there are no shadows from the trees; when a bright day with tree shadows follows, the shadows disappearing and reappearing is what causes the flicker.

Post-processing won't help; it's not the contrast or brightness at fault, it's the lack of shadows. I thought there'd be sufficient variation in the six photos each day for me to choose the brightest one, but typically if there are no tree shadows in one photo, there won't be any in the others from that day.

Point Cloud Library (PCL) offers pre-built binaries, but these don't include Qt support; for that you need to build PCL from source. Building PCL from source with Qt support is quite an undertaking, as there are many dependencies. This post documents the steps I took to build it.

You need to decide from the outset whether you want an x86 or x64 build of PCL, because every dependency must be of the same type. This influences your choice of download and build of boost, Qt, flann, etc.

Your choice of Qt is important here: the x86 or x64 version of Qt will build .dlls of that same architecture irrespective of which VS compiler you set it to use. Use the same architecture throughout.

Choice of version of Qt and VTK
You need to go with one of the following version combinations.
VS 2013 + Qt 5.5 (or below) + VTK 6.3 (or below)
VS 2015 + Qt 5.6 (and up) + VTK 7.0 (and up)
This is because of a deprecated feature (QtWebKit) in Qt 5.6, which restricts what can be built. Qt 5.6.2 is the first installer version that supports VS 2015.

Boost is installed with a pre-compiled binary, which installs for example like this.

C:\
--boost\
----boost_1_59_0

Qt is also installed with the pre-compiled binary

C:\
--Qt\
----Qt5.6.2

Eigen is installed by simply copying files

C:\
--eigen

CMake
CMake-gui is installed with a self-executing installer, choosing the option to add CMake to the path. If you're unfamiliar with CMake (as I was), it works like this: place the downloaded source in one directory, and make another directory where the build files will be created and where the build will take place. Open CMake and specify these two directories appropriately. Press Configure; you're asked which generator to use. Choose Visual Studio 2015 x64 to create Visual Studio .sln and .vcxproj files configured for an x64 build. Continue refining the option entries until pressing Configure gives no errors, then press Generate. You then have a Visual Studio solution which can be opened and built using VS.
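For reference, the same configure-and-generate flow can be driven from the command line. A sketch with placeholder directory names; "Visual Studio 14 2015 Win64" is CMake's generator string for a VS 2015 x64 build:

```
cd C:\pcl-build
cmake -G "Visual Studio 14 2015 Win64" C:\pcl-source
cmake --build . --config Release
```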

Flann is configured with CMake and built with Visual Studio, using x64 compiler flags and the default CMake options. The INSTALL project in the VS solution is then built to install the libraries to Program Files. I manually moved files to separate the Release and Debug versions of the .dlls and .libs. I think this was necessary for PCL 1.7, but I'm not sure about 1.8.

VTK is configured with CMake; enable the Qt options. Then build using VS, both the Release and Debug configurations, but don't install yet. Building Debug and Release libraries of VTK presents difficulties which need to be handled carefully, and we're still going to make use of the build directory.

PCL
PCL is configured with CMake. For the FLANN library I used flann_cpp_s because of advice here. I got CMake errors where not all boost libraries were found; I had to add an entry for the BOOST_LIBDIR directory. For VTK_DIR, point to the VTK build root directory. Open Visual Studio as Administrator, select all the pcl projects, right click Properties > Debugging > Environment and set it to

PATH=C:\vtk\VTK-7.0.0-build\bin\$(OutDir);%PATH%

Do this for both the Release and Debug configurations. This way, when the compile needs a VTK library, the path points at the appropriate build library for the active configuration.
Build both the Release and Debug configurations.
A build error:

C:\Program Files\flann\include\flann/util/serialization.h(18): error C2228: left of '.serialize' must have class/struct/union;

VTK
Reopen the VTK.sln in Visual Studio as Administrator. Build the INSTALL project to install both the Release and Debug configurations, but manually create Debug and Release subdirectories under bin/ and lib/ and move the files into them to create the following structure.

The reason for doing the install at all is simply convenience. It makes it intuitive to find the .dlls and .libs again, and the build directory can be removed.

Dependency Walker will now show, for any of the PCL .dlls, that their dependent VTK .dlls are not available, but that's OK. The PCL .dlls have _release or _debug in their filenames, so you can point %PATH% at C:\Program Files\PCL\bin. The VTK .dlls unfortunately are not so well named, so they need to be copied side by side with your application's .exe.