Gaétan Laure – http://gaetanlaure.com
Welcome to my personal website.

Flying Over the Mountain Ridge at Sunset
Sat, 20 May 2017

This winter, I took my quadcopter V1 for a hike near Beuil, in the southern French Alps. There was a bit of snow, and the view following the ridge was really enjoyable!

Flight setup:

Frame: Self-designed

Flight Controller: APM 2.5 (RCTimer)

GPS: u-Blox CN06 Plus (RCTimer)

Motors: T-Motor MT2216-12 V2 800KV

ESC: Favourite Littlebee 30A (BLHeli)

Props: Graupner E-Prop 9×5

Battery: Zippy 4S 5000mAh 20C

RC/TX: Graupner MX-16 HoTT / GR-16

Video TX: ImmersionRC 5.8Ghz

Camera: GoPro 3 Black

Gimbal: DYS Smart3

All up weight: 1.8 Kg

Some pictures taken by my friend Olivier Guerin, who created a website dedicated to nature pictures: mynaturepicture.com.
On his website, you can see beautiful Mercantour landscapes and animal photographs through the seasons.

The Quadcopter V1 – a Quadcopter for Aerial Video
Thu, 22 Dec 2016

In this post, I describe how I designed and built the Quadcopter V1. This quadcopter is inspired by the Tricopter V2.5 (which is a remix of the RCExplorer tricopter). Some parts, such as the arms, the motor mounts and the landing gear, are the same. The most important novelty is the use of the APM 2.5 board (running ArduPilot, an open-source software) and a GPS, which together enable semi-automatic and automatic flight modes (such as flying along a straight line, performing automatic panoramas, following a pre-programmed waypoint track, or returning to launch in case of signal loss). These flight modes are really useful for getting better video footage. Video quality is also improved by the use of a 3-axis gimbal.

I want to thank the SoFAB for their help with this project and for enabling me to manufacture the quadcopter parts.

Here is a summary of the drone characteristics and components used in this configuration:

Overview of the electronic components used

Let's first have an overview of the electronic components used to build the drone.

The flight controller is the APM 2.5 (ArduPilot Mega) from RCTimer (APM 2.5 documentation). Its architecture is quite close to the Arduino Mega (same processor), but the board additionally carries all the sensors needed to fly a drone (3-axis accelerometer/gyro/compass and a barometer). It runs the ArduCopter open-source software, which is used to configure the board, fly the drone and perform autonomous flights. The GPS is designed to mount on top of the APM 2.5 (using standoffs). It also embeds an external compass (magnetometer). We are going to use this compass instead of the APM's internal one in order to avoid interference.

The APM 2.5 is powered through this power module (which plugs into the battery). The power module also provides voltage and current measurements to the APM.

The props are the Graupner E-Prop 10x5. They are stiff and come quite well balanced. These props are a bit expensive (7-8€ each), but they help avoid vibration and the Jello effect in the video.

The drone is powered by a Zippy 4S 5000mAh LiPo battery (15.2 V nominal voltage). It weighs 450 grams. I replaced the bullet battery connectors with an XT60 connector.

I still use the Graupner MX-16 HoTT radio controller and a GR-16 receiver to control the drone.

Information about the drone state (location, orientation, battery voltage...) is sent to a ground station (a computer or a smartphone) using these telemetry modules and the MAVLink protocol. One module is plugged into the APM board (using a serial interface); the other is plugged into the ground station (here my smartphone, via an OTG cable) through a USB port.
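As an aside, MAVLink reports some fields in raw integer units: for example, the SYS_STATUS message carries battery voltage in millivolts and current in centiamps (units of 10 mA). A minimal sketch of converting such raw fields to readable units (the sample values below are made up):

```python
def decode_battery(voltage_battery_mv, current_battery_ca):
    """Convert raw MAVLink SYS_STATUS battery fields to SI units.

    voltage_battery is reported in millivolts, current_battery
    in centiamps (units of 10 mA)."""
    volts = voltage_battery_mv / 1000.0
    amps = current_battery_ca / 100.0
    return volts, amps

# Hypothetical raw values as they would arrive over telemetry:
v, a = decode_battery(15200, 1560)
print(f"{v:.1f} V, {a:.1f} A, {v * a:.0f} W")
```

The ground station (Mission Planner, DroidPlanner...) does this decoding for you; the sketch just illustrates what travels over the link.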

I receive the video signal through FatShark Attitude V2 goggles. The goggles are powered by a 3S 2200 mAh LiPo battery. With this setup, the video signal can be received at up to 1 km range (without obstacles).

To stabilize the GoPro, I use the DYS Smart3 3-axis gimbal. It has an IMU (Inertial Measurement Unit) and 3 brushless motors to perform stabilization around the roll, yaw and pitch axes. It also features signal inputs, so the GoPro pitch orientation can be controlled from the ground using the radio controller. This gimbal weighs 240 grams. I use a 12V regulator to adapt the LiPo battery voltage to the gimbal's voltage range.

This is how all the components are wired on the drone. The large green wires symbolize multiple wires. In the building part, I will specify precisely which APM ports are used to connect each component.

The frame design on SolidWorks

In this part, let's see how the quadcopter frame was designed. It was CAD designed using SolidWorks. For this purpose, I created a virtual assembly of all the designed parts. The advantage of this approach is that you can directly see whether the different parts fit together correctly. You can also check that there is enough space to embed all the electronic components. CAD design thus speeds up the prototyping phase, with parts machined only at the end.

This is what the quadcopter design looks like. Some parts were taken from CAD libraries: the GoPro, the FPV transmitter and the flight controller are from grabcad.com. The landing gear (3D printed), the motor mounts and the arms are the same as on the Tricopter V2.5. The parts specifically designed for this quadcopter are in plywood and cast PMMA (a kind of plexiglass). These materials can easily be cut by the Trotec Speedy 100 laser cutting machine I used at SoFAB.

An exploded view to see how all the parts are laid out. The FPV transmitter and the telemetry module are mounted on the back arms using zip-ties. The motors are mounted on the arms using the circular motor mounts.

The two main parts of the frame are laser cut into 5mm plywood (okoume,
3 plies). They are separated by two spacers laser cut into 10mm PMMA cast sheet.

The carbon arms are assembled this way between the two main plates. They come into abutment against four screws.

The arms can be folded up, as on the Tricopter V2.5, for easy transportation. This also absorbs shocks and prevents damage in case of a crash.

The drone folded up.

The APM 2.5 (in blue) is mounted on a vibration damping platform (laser cut from 3mm cast PMMA sheet). The APM accelerometers are thus isolated from vibrations (coming from the motors) and provide less noisy data. The small cyan tubes represent the damping balls.

The vibration damping platform parts are assembled this way. The receiver (with the two antennas) is attached, using double-sided tape, on the front part of the platform.

The battery tray is mounted on the main frame using four standoffs. It carries the battery (in blue) and is designed to fit the DYS Smart3 gimbal. It's laser cut from 5mm plywood, like the two main plates of the frame.

Building the drone

Let's now build the drone.

I first assemble the arms; the method is really close to the one described by RCExplorer. The procedure is the same for each arm. I start by wiring the ESCs and motors.

I unsoldered the original ESC wires to solder longer ones. The power and signal wires have to be 50 cm long (the signal wires I used were only 43 cm long, so I'm going to solder them to the original wires). I soldered 4mm gold bullet connectors on the power wires and cut the motor wires to a length of 8 cm.

I soldered the motor wires directly to the ESC; I don't use any connector, to make the connection safer. Thanks to the BLHeli firmware flashed on the ESC, the motor's direction of rotation can still be reversed without swapping two wires. I also soldered the power and signal wires.

To protect the wiring, I slid 6mm wire mesh over the power and signal cables.

I put on some 14mm heat shrink to protect the ESC.

The motor and the ESC are ready to be mounted on the arm.

The motor is attached using the circular motor mounts and M3 18mm screws. I put threadlocker on the screws to prevent them from loosening due to vibration. I tighten a few turns at a time, in a cross pattern, to end up with equal tension on the four screws.

I'm going to pull the wires through the arms. To make this easier, I take the servo wires out of the servo connector.

Servo wires pulled through the arm.

The power wires were pulled through one by one. It's not so easy with the bullet connectors.

The ESC is finally mounted on the arm using zip-ties.

The four arms are ready to be fitted on the quadcopter frame. Note that the ESCs are mounted in opposite directions, so they can face backwards on the quadcopter and be better protected.

I number the signal wires so I can connect them in the correct order (to the APM) once the arms are installed.

Let's now assemble the vibration damping platform and mount the APM board and the GPS on it.

Wiring the APM and the GPS board. The compass is connected to the APM I2C port. The GPS is connected to the APM GPS port. GPS board LEDs are connected to A6 and A7 APM outputs.

The GPS board is installed above the APM board using M3 30mm standoffs. I use plastic standoffs, instead of aluminium ones, to minimize interference with the external compass. I use small M3 5mm standoffs to mount both boards on the vibration damping platform, and two zip-ties to prevent the boards from coming loose in case a damping rubber breaks.

Let's now mount the arms and the vibration damping platform on the two main plates.

The two main plates and the spacers laser cut. I use twelve M3 30mm screws and lock nuts to mount the arms and the damping platform. I also use flat washers to tighten the screws without damaging the plywood.

The first four screws are installed like this on the top plate.

For the next steps, the top plate is positioned downward on a box.

The arms are installed according to the numbers indicated on the top plate (to match the ArduCopter ESC wiring for the Quad X configuration), motors facing downward. The signal wires are pushed through two holes (the ones closest to the front of the plate).
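For reference, the ArduCopter Quad X convention assigns motor numbers and spin directions roughly as sketched below (this is my reading of the convention; double-check the ArduCopter wiki for your firmware version before wiring):

```python
# ArduCopter Quad X motor order and spin directions (sketch only --
# verify against the ArduCopter wiki before wiring):
QUAD_X = {
    1: ("front right", "CCW"),
    2: ("back left",   "CCW"),
    3: ("front left",  "CW"),
    4: ("back right",  "CW"),
}

for n, (position, spin) in sorted(QUAD_X.items()):
    print(f"motor {n}: {position:11s} {spin}")
```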

The bottom plate is then installed. The power wires are pushed through the corresponding two holes.

The flat washers and lock nuts are then placed.

And finally screwed.

Spacers are inserted between the plates.

The vibration damping platform is then installed over the top plate (using four M3 30mm screws). The ESC signal wires pass under the vibration damping platform.

The ESC signal wires are connected this way, respecting the number order, on the APM board.

The last four screws are installed. The arms come into abutment against these screws.

Under the bottom plate view.

The arms can be folded up like this without any strain on the wires.

Let's now mount the battery tray.

The battery tray, also laser cut from plywood. Four M3 40mm standoffs, M3 30mm screws and M3 18mm screws are needed to mount it on the frame.

Mounting the standoffs. I put threadlocker on the screws to prevent them from loosening due to vibration.

The battery tray, mounted on the frame.

We can now mount the radio receiver and the buzzer.

The radio receiver is attached to the vibration damping platform using double-sided tape. The two radio receiver antennas are fitted into plastic tubes, oriented at 90 degrees to each other (to improve the received signal quality). I use the PPM (Pulse Position Modulation) output of the radio receiver, so all the radio channels are transmitted over a single wire. I connect it to RC input 1 of the APM board. A jumper is connected to the signal pins of RC inputs 2 and 3 to enable PPM mode on RC input 1.
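The PPM sum signal encodes each channel as the time between consecutive pulses, with a long sync gap separating frames. A toy decoder sketch (the 3 ms sync threshold and the sample stream are illustrative assumptions, not measurements from this receiver):

```python
SYNC_GAP_US = 3000  # gaps longer than this mark a new frame (assumed threshold)

def decode_ppm(intervals_us):
    """Split a stream of rising-edge intervals (in µs) into PPM frames.

    Each interval between pulses encodes one channel value
    (typically 1000-2000 µs); a long sync gap separates frames."""
    frames, current = [], []
    for gap in intervals_us:
        if gap > SYNC_GAP_US:
            if current:
                frames.append(current)
            current = []
        else:
            current.append(gap)
    if current:
        frames.append(current)
    return frames

# Two hypothetical 8-channel frames, each followed by a sync gap:
stream = [1500, 1500, 1000, 1500, 1100, 1900, 1500, 1500, 8000,
          1500, 1500, 1000, 1500, 1100, 1900, 1500, 1500, 8000]
frames = decode_ppm(stream)
print(frames)
```

The APM does this decoding in firmware; the sketch only shows why one wire is enough for all channels.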

The buzzer is mounted on the left back arm using zip-ties. It's connected to the APM analog output A5.

Let's now mount the landing gear.

It's 3D printed in ABS to make it strong and impact resistant.

The landing gear is attached on the arms using zip-ties.

We can now install the telemetry module.

It's mounted on the right back arm using zip-ties.

I connect it to the telemetry port on the APM board.

It's time to install the power module and connect the ESC power wires.

The ESC power wires are connected to the distribution harness. I use electrical tape to secure each ESC's connection.

The distribution harness is attached to the battery tray standoffs using zip-ties.

The power module is installed this way. The cable powering the APM board passes through the back holes of both plates.

It's connected to the APM power port.

The battery XT-60 connector located on the back side will facilitate the battery connection.

It's time to mount the video transmitter and corresponding wires.

The ImmersionRC transmitter is going to be mounted using the FPV transmitter post (on the left). The wire connecting the GoPro to the transmitter (a Tarot GoPro Hero3 AV cable adapted to plug into the transmitter) is 38 cm long. The power wires are designed to connect both the FPV transmitter and the gimbal to the distribution harness (there are 18 cm between the LC filter and the JST connector, and 7 cm between the bullet connector and the JST connector). The LC filter (on the bottom left of the image) is used to smooth the battery power supply and avoid interference on the video signal. A 12V regulator (in the upper right corner of the image) is used to step down the battery voltage for the gimbal. I soldered wires with JST connectors onto it.

The transmitter is attached on the transmitter post using double-sided tape. The transmitter post is mounted on the left back arm using zip-ties. The power wires are connected to the distribution harness. All the wires are protected using 6mm wire mesh (12 cm long).

The 12V regulator is attached on the battery tray using double-sided tape. The wires are set this way using zip-ties.

We can now mount the gimbal and the GoPro.

The DYS Smart3 and the GoPro 3. I 3D printed the yellow part, designed by Lasersaber, to mount the GoPro on the gimbal quickly and without any screws. I always use a filter with the GoPro: either a polarizing filter, to reduce light reflections and protect the GoPro lens, or a neutral density filter (ND4), to slow down the GoPro shutter speed and avoid the Jello effect in bright daylight.

The gimbal is mounted using four M3 25mm standoffs and four M3 18mm screws. The GoPro is held firmly by the yellow mount, whose hole has been widened a bit to fit the filter inside. The gimbal power wires are connected to the 12V regulator output (through the JST connector). The video cable is connected to the GoPro mini-USB port.

The video feedback wire passes behind the GoPro; it doesn't prevent the gimbal from moving in any direction.

In order to control the gimbal orientation (around the pitch axis) from the radio, a signal wire (in orange) is connected to the RX-P input of the gimbal.

Pitch orientation will be sent through the APM board. The signal cable is connected to the A10 output.

Almost finished, let's now cover the barometer on the APM board (to avoid disturbed pressure measurement in case of wind).

The barometric pressure sensor.

Covering it with some foam and tape. The tape also secures the receiver and ESC wire connections on the APM board.

The GPS board reinstalled.

The only thing left is to mount the battery and the props.

I use two straps to mount the battery. The LiPo cell checker is installed over the battery using Velcro.

The props are held in place using M5 washers and lock nuts.

The quadcopter is finally fully mounted! Some pictures of the result:

ArduPilot configuration

It's now time to flash and configure the ArduPilot firmware on the APM 2.5. Everything is well detailed on the ArduCopter wiki.

I connect the APM 2.5 to a computer (with a USB cable) and use Mission Planner.

I first flash ArduCopter V3.2.1.

I can now set up the board (the whole configuration procedure is described on the ArduCopter wiki). I first set the frame type to 'X'. After that, I perform the accelerometer calibration (it consists of placing the quadcopter on each edge to set default accelerometer min/max values and offsets on the 3 axes).

I now configure the compass. I use the external one (on the GPS board). The "live calibration" consists of rotating the quadcopter around all axes. After that, I perform the radio calibration, which consists of moving the radio sticks and switches to set their corresponding minimum and maximum values.

I then configure the flight modes used. All the modes are described in depth here. "Stabilize" is the basic flight mode for flying manually, controlling the roll and pitch angles of the copter. When the sticks are released, the vehicle automatically levels itself. "Loiter" is a GPS-assisted mode: the sticks control the copter's relative motion, which makes it possible to fly along a straight line. When the sticks are released, the copter stops and holds its position and heading. "Circle" mode makes the drone describe a circle of a given radius at a given turn rate; a zero radius causes the copter to stay in place and rotate, which is very useful for panoramas. "Auto" mode is used to fly through pre-programmed waypoints. "RTL" ("Return To Launch") makes the drone fly back and land at the takeoff location.

I now set up the radio failsafe, which ensures the drone comes back home automatically in case of signal loss. The receiver has been configured to pull the throttle channel to 900 in case of signal loss. I set "FS PWM" to 990: when the throttle channel is below this value, the failsafe procedure (a "Return To Launch") is engaged.
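The failsafe logic amounts to a simple threshold test on the throttle channel (in ArduPilot the threshold parameter is, to my knowledge, FS_THR_VALUE; a sketch):

```python
FS_THROTTLE_PWM = 990  # failsafe threshold, as configured above

def radio_failsafe_triggered(throttle_pwm):
    """The receiver pulls the throttle channel to ~900 µs on signal
    loss; anything below the threshold engages Return To Launch."""
    return throttle_pwm < FS_THROTTLE_PWM

print(radio_failsafe_triggered(900))    # signal lost -> RTL engaged
print(radio_failsafe_triggered(1100))   # normal flying
```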

The gimbal is configured this way. I can control the tilt angle through the APM. I'm doing it using the RC6 channel input and the RC10 APM output.

Here is how I configured the PID gains. I first used the "AutoTune" mode to let the quadcopter tune the gains by itself. I then adjusted the values to get a better compromise between a smooth and a quick response, for steady video (avoiding oscillations) while correctly handling external disturbances (such as wind).
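For readers unfamiliar with PID control, here is a minimal sketch of the controller structure being tuned (the real ArduCopter rate controllers add filtering, I-term limiting and more; the gains below are illustrative, not the values used on this quadcopter):

```python
class PID:
    """Minimal PID controller sketch."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        # P acts on the current error, I on its accumulation,
        # D on its rate of change.
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains; correct a roll rate of 5 deg/s back toward zero:
pid = PID(kp=0.15, ki=0.1, kd=0.004)
correction = pid.update(setpoint=0.0, measurement=5.0, dt=0.0025)
print(correction)  # negative: push the roll rate back toward zero
```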

Once the configuration is done, we are ready to fly. We can use Mission Planner as a ground station, with the telemetry modules, to get live quadcopter information. We can also use a smartphone with the app DroidPlanner 2 (or Tower) as a ground station.

The quadcopter flying

It's now time to fly!

The GoPro stays horizontal when the quadcopter is tilting, thanks to the gimbal.

Quadcopter V1 design files now available on Thingiverse
Mon, 19 Dec 2016

I just uploaded the Quadcopter V1 design files on Thingiverse. I put up some instructions on how to cut/print the parts. There is also the list of all the other parts and hardware used. I hope it will be helpful for people wanting to build it.

Quadcopter V1 – Fully Autonomous Flight

I did an autonomous mission flight using the ArduPilot “Auto mode” with my Quadcopter V1.

The drone followed a pre-programmed waypoint track while facing an “interest point”. It worked really well. There was a light wind that day. This "Auto mode" is really nice for performing precise cinematic aerial shots automatically.

Flight setup:

Frame: Self-designed

Flight Controller: APM 2.5 (RCTimer)

GPS: u-Blox CN06 Plus (RCTimer)

Motors: T-Motor MT2216-12 V2 800KV

ESC: Favourite Littlebee 30A (BLHeli)

Props: Graupner E-Prop 9x5

Battery: Zippy 4S 5000mAh 20C

RC/TX: Graupner MX-16 HoTT / GR-16

Video TX: ImmersionRC 5.8Ghz

Camera: GoPro 3 Black

Gimbal: DYS Smart3

All up weight: 1.8 Kg

I used the DroidPlanner 2 app on my smartphone, and this 433 MHz telemetry module (plugged in using an OTG USB cable), to plan and send the mission to the drone.

This is the defined mission. The first step is the "takeoff" (1): the drone automatically takes off and climbs to the programmed altitude (15 m). After that, the drone targets an "interest point" (2), which is 2 meters above the ground. During the whole flight, the drone keeps facing this point and orients the gimbal to keep it in the center of the camera frame. The following steps are "spline waypoints" (3-11). The drone flies through these points at an altitude of 20 meters (with respect to the takeoff position), describing a spline (for a smooth trajectory). Finally, the last step is a "return to launch" (12): the drone lands automatically at the takeoff position.
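The mission structure can be sketched as a simple ordered list of commands (an illustration of the sequence above, not the actual MAVLink message format; the field names are made up):

```python
# The 12 mission steps described above, as a simple list of commands:
mission = (
    [{"seq": 1, "cmd": "TAKEOFF", "alt_m": 15}]
    + [{"seq": 2, "cmd": "ROI", "alt_m": 2}]  # interest point, 2 m above ground
    + [{"seq": i, "cmd": "SPLINE_WAYPOINT", "alt_m": 20} for i in range(3, 12)]
    + [{"seq": 12, "cmd": "RETURN_TO_LAUNCH"}]
)

for step in mission[:3]:
    print(step)
print(f"{len(mission)} steps in total")
```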

This plot shows the GPS track log (recorded on the APM during the flight) and the defined waypoints. The curve is smooth (as expected using "spline waypoints") and quite close to the waypoints. The maximum error between the GPS track and the waypoints is about 3 meters.
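Such a maximum error can be computed from the log by taking, for each GPS point, its distance to the nearest planned leg. A sketch with made-up coordinates in a local metric frame (not the actual flight data):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from 2D point p to segment a-b (local metric coords)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints:
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def max_track_error(track, waypoints):
    """Max distance from each logged GPS point to the nearest planned leg."""
    legs = list(zip(waypoints, waypoints[1:]))
    return max(min(point_segment_distance(p, a, b) for a, b in legs)
               for p in track)

# Made-up track and waypoints (metres), just to illustrate the computation:
wps = [(0, 0), (20, 0), (20, 20)]
track = [(0, 1), (10, 3), (19, 1), (21, 10)]
err = max_track_error(track, wps)
print(err)  # -> 3.0
```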

Quadcopter V1 Speed Test (reaching 78 km/h)
Mon, 12 Dec 2016

I did a speed test with my Quadcopter V1 and was able to reach 78 km/h! This gives a good indication of the headwind speed the quadcopter can handle.

There was no wind that day. At maximum speed, we can see that the image is shaky. I think it's caused by the wind force on the GoPro, which makes the gimbal dampers vibrate.

Flight setup:

Frame: Self-designed

Flight Controller: APM 2.5 (RCTimer)

GPS: u-Blox CN06 Plus (RCTimer)

Motors: T-Motor MT2216-12 V2 800KV

ESC: Favourite Littlebee 30A (BLHeli)

Props: Graupner E-Prop 9×5

Battery: Zippy 4S 5000mAh 20C

RC/TX: Graupner MX-16 HoTT / GR-16

Video TX: ImmersionRC 5.8Ghz

Camera: GoPro 3 Black

Gimbal: DYS Smart3

All up weight: 1.8 Kg

These curves show the GPS speed, the battery voltage and the current draw. These data were recorded on the APM board (dataflash logs).

The maximum speed (78 km/h) is reached after about 6 seconds of acceleration. At that point, the current draw (31.3 A) is twice as high as during hover. The power draw reaches 31.3 A × 13.5 V ≈ 423 W. On the voltage curve, we can notice a voltage drop.
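The arithmetic behind these figures, plus the discharge rate relative to the battery's 20C rating:

```python
# Peak values read off the dataflash log curves above:
current_a = 31.3
voltage_v = 13.5
capacity_ah = 5.0   # Zippy 4S 5000 mAh

power_w = current_a * voltage_v       # electrical power at peak speed
c_rate = current_a / capacity_ah      # discharge rate in "C"

print(f"power: {power_w:.0f} W")
print(f"discharge rate: {c_rate:.2f}C (battery rated 20C)")
```

The peak draw is only about 6.3C, well within the battery's 20C rating, which is consistent with the voltage drop being modest.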

The speed never reached a constant value, so it should be possible to go faster with the same propulsion chain (battery/ESCs/motors/props). But the mechanical parts might not handle such a speed.

Hiking and Flying – Quadcopter V1
Tue, 06 Dec 2016

I did my first mountain hike with the Quadcopter V1, in the Maritime Alps (in southeastern France). I flew over lakes and did some panoramas; the landscape was really nice!

The image is not perfectly stable because of the wind. Sometimes a slight Jello effect can be noticed; it could have been avoided using an ND (Neutral Density) filter to slow down the GoPro shutter speed (which was high because of the bright daylight). I was still using the 2-axis gimbal; the image would have been more stable with the 3-axis gimbal.

Flight setup:

Frame: Self-designed, made out of plywood

Arms: Square carbon booms (10x10x325 mm)

Flight Controller: APM 2.5 (RCTimer)

GPS: u-Blox CN06 Plus (RCTimer)

Motors: T-Motor MT2216-12 V2 800KV

ESC: Favourite Littlebee 30A (BLHeli)

Props: Graupner E-Prop 9×5

Battery: Zippy 4S 5000mAh 20C

RC/TX: Graupner MX-16 HoTT / GR-16

Video TX: ImmersionRC 5.8Ghz

FPV Goggles: Fatshark Attitude V2

Camera: GoPro 3 Black

Gimbal: Walkera G-2D

All up weight: 1.7 Kg

Flight time: ~ 15 minutes

The Quadcopter V1 equipped with the 2-axis gimbal. After a rough landing during the hike, the landing gear was fixed with some tape!

3D reconstruction of a house using a drone
Sat, 15 Oct 2016

A few months ago, I did my first 3D reconstruction of a house using the Tricopter V2.5 and my GoPro Hero 3 Black. Thanks to my friend Thibault for letting me experiment on his house!

In this post we will see how the 3D reconstruction is performed from the GoPro pictures. After that, I will present some applications such as:

measuring distances, areas and volumes in the model;

building orthophotos and digital elevation models;

displaying the 3D model in Google Earth.

The 3D reconstruction

The 3D reconstruction was done by photogrammetry using the software Agisoft PhotoScan. There are many other photogrammetry software packages on the market, such as Pix4D, which works similarly to PhotoScan.

The photogrammetry method consists of computing the 3D model of a scene from 2D images captured from different viewpoints. The captured images have to overlap: each part of the scene you want to reconstruct must appear in multiple images. By matching the corresponding pixels in the images, the 3D positions of the corresponding points are obtained by triangulation and global optimization. In the process, the camera locations (from which the images were captured) and the camera intrinsic parameters (such as the distortion profile) are computed so as to minimize the quadratic reprojection error of the 3D points over all the images. After that, a mesh and a texture are computed from the 3D point cloud and the images, providing the final result.
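The triangulation step can be illustrated with the midpoint method: given two camera centres and the viewing rays toward the same feature, the reconstructed point is midway between the closest points of the two rays. A toy sketch with synthetic data (real pipelines triangulate thousands of points and refine everything by bundle adjustment):

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint triangulation: find the point minimizing the distance
    to both rays p1 + t1*d1 and p2 + t2*d2 (3D vectors as tuples)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def mul(a, s): return tuple(x * s for x in a)

    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero if the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, mul(d1, t1))      # closest point on ray 1
    q2 = add(p2, mul(d2, t2))      # closest point on ray 2
    return mul(add(q1, q2), 0.5)   # midpoint of the two

# Two synthetic cameras at (-1,0,0) and (1,0,0), both seeing a feature
# located at (0, 0, 10):
p = closest_point_between_rays((-1, 0, 0), (1, 0, 10),
                               (1, 0, 0), (-1, 0, 10))
print(p)  # -> (0.0, 0.0, 10.0)
```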

Let’s now see how it works in practice.

As I explained, the first step is to capture pictures of the house from various angles. I flew the drone around the house while the GoPro 3 was taking a 12 Megapixel picture every second.

The GoPro isn’t the best camera for this application:

It has a fisheye lens, which introduces high distortion and reduces sharpness in the corners;

It uses a rolling shutter (as with most CMOS sensors, images are captured line by line), so the image can be distorted if the drone moves fast;

The automatic exposure can produce overexposed images which lack texture (and thus lack information to match pixels and build the 3D model);

JPEG compression adds noise (artifacts) around edges. It would be better to use raw images (which is now possible with the recently released GoPro 5).

But the advantage of the GoPro is that it's well suited for use on a drone (small and lightweight) and, as you will see, the 3D reconstruction accuracy is acceptable.

Here is a sample of the 143 pictures taken from the drone:

For this test, I mainly took pictures of one side and the roof of the house, so the 3D reconstruction will not cover all the sides.

As you can see, the light and shadows change between pictures. The clouds were moving fast that day. For a better 3D reconstruction, the light should be constant between the different shots: when shadows move between frames, it's harder for the software to match corresponding pixels.

Now let’s process the pictures using PhotoScan.

In order to minimize the processing time, I use a computer with an i7-6700K (quite powerful 4GHz quad-core processor), 16 GB of RAM and images are stored on an SSD.

We load the pictures in a new project.

We are going to run all the steps specified in the workflow menu:

Align Photos;

Build Dense Cloud;

Build Mesh;

Build Texture.

Let’s run the first step to align the photos. PhotoScan first detects key points (distinctive corner points) on each image. It then compares these local key points between images to match them, producing the tie points. A global optimization over all the images then determines:

the 3D position of the tie points;

the position and orientation of the camera for each image;

the intrinsic parameters of the camera such as the focal length and the distortion model.

In the “Advanced” section, I kept the default parameters: the numbers of key points and tie points are limited to 40,000 and 1,000 per image respectively. It's a good compromise between processing time and finding enough tie points to properly align the photos.

If the camera positions are initialized using GPS data recorded in the EXIF files, this step is faster. If you don't have any GPS information, as in my case, it takes a little longer to process but works fine as well.

It took 53 mins 40 secs to complete. The images are now aligned and we can see their 3D positions and orientations, i.e. where the photos were taken from. I was manually controlling the drone, trying to move around one side of the house and the roof. The camera was oriented forward, tilted at 45 degrees, and tilted straight down, to cover all the necessary orientations.

Zooming in. The sparse point cloud represents the tie points used to align the photos. We can recognize the points on the walls, in white, and the points on the trees, in green.

On each image, we can observe the tie points found. The blue ones correspond to used matches (the sparse point cloud); the white ones are tie points which are not used.

When the walls are overexposed, PhotoScan doesn't find any tie points on them because of the lack of texture; it then uses other parts of the image, such as trees, ground and roof, to find tie points. Fortunately, this isn't the case for most of the photos. The best configuration would have been to take the photos on a cloudy day, to get uniform light.

The panel in the menu “Tools -> camera calibration” also provides the camera intrinsic parameters determined during the optimization process.

Before computing the dense cloud, the bounding box can be adjusted to limit the reconstruction volume to the house and the garden.

It’s time to compute the dense cloud. During this step, PhotoScan computes depth information for each image and combines it into one point cloud. To keep processing time low, I used the default “medium” quality setting; as you will see, it's enough for the reconstruction of a house. In the “Advanced” menu I chose “Aggressive” depth filtering to sort out most of the outliers. This makes sense because there is not much fine detail in the scene.

Here is the result! It took 18 mins 31 secs to obtain this dense point cloud representing the house.

Zooming in.

Let's now run the mesh reconstruction from the Workflow menu. I again selected medium settings; we don't need too many polygons because most of the house consists of walls and planar surfaces.

I finally got the 3D mesh of the house; it took 2 mins 54 secs to process. The result is quite good, although the walls are not perfectly flat. This may be due to the variation in luminosity between images and to overexposed images. Also, as mentioned before, the GoPro 3 isn't optimal for 3D reconstruction, mainly because its fisheye lens induces distortion that reduces precision in the corners of the images.

Zooming in.

Now the last step is to compute the texture. PhotoScan projects the images onto the mesh and merges them to generate the texture. The texture size has a significant effect on the final render; in this case, a 15000×15000 texture is a good compromise (enough detail, yet light enough to be uploaded online).

The final result: the mesh with the texture patched on it. It took 2 mins 47 secs to compute the texture.

Zooming in, most of the scene is quite good. The texture adds details not represented on the mesh.

The texture also masks the mesh defects, and the walls appear flatter.

In some parts where the trees are close to the walls, outlines of the trees have been projected on the house. This can be corrected by editing the texture image.

This is what the texture image looks like (not at its original resolution). It's a 2D image representing a mosaic of all the parts of the house. It can easily be edited, like any ordinary image, to correct defects.

I finally created a local reference coordinate system for this model. For that purpose, I measured two sides of the swimming pool and then defined 3 reference points at the corners of the pool, providing their coordinates in the "Markers" table. "Point 1" defines the origin, "point 2" the X axis and "point 3" the Y axis of the coordinate system. The "Update" button makes PhotoScan perform a short optimization to create the orthogonal coordinate system that fits, with minimal error, the coordinates provided for the three points. The scale (in meters) is then defined as well. After the optimization, the mean error on the three reference points was about 1.5 cm.
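The geometry behind this step can be sketched in a few lines of NumPy: build an orthonormal frame from the three markers (origin, a point on the X axis, a point in the XY plane) using a Gram-Schmidt step, then express any model point in that frame. This mirrors what PhotoScan's "Update" does, minus the least-squares error minimization:

```python
# Build an orthonormal coordinate frame from 3 markers and express points
# in it. Simplified sketch (no least-squares fitting as PhotoScan does).
import numpy as np

def local_frame(p1, p2, p3):
    """Return (origin, R) with the rows of R being the local X, Y, Z axes."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x /= np.linalg.norm(x)
    y = (p3 - p1) - np.dot(p3 - p1, x) * x   # Gram-Schmidt orthogonalization
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return p1, np.vstack([x, y, z])

def to_local(point, origin, R):
    """Express a model point in the local coordinate system."""
    return R @ (np.asarray(point, dtype=float) - origin)
```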

The local coordinate system is now correctly defined. The model is then ready to be exported and uploaded online.

The model hosted on Sketchfab. This website works fine using .obj files for the geometry, .mtl files for the material properties and .jpg files for the texture.

Finally it took about 1 hour and 18 mins to compute the 3D model (to process all the workflow steps: align photos, build dense cloud, build mesh and build texture).

Measuring distances, area and volume in the model

Thanks to the local coordinate system, we are now able to measure distances in the model. I compared 9 distances measured in the model with their real-world values (measured conventionally with a tape measure).

Measuring the real values.

I then created 15 new points to define the 9 distances in the model. They appear in the "Scale bars" panel, but I didn't use them to scale the model (the purpose here is to check the model's accuracy). In the "Distance (m)" column I entered the real measured distances; the "Error (m)" column shows the difference between the real measurements and the model measurements. I sorted the distances by error value. For all the distances, the maximum error is 3.5 cm (the mean error is 1.96 cm). This result is quite good considering that we used a GoPro, which is not the best camera for this application. Keep in mind that the errors can come from the model itself, but also from the accuracy with which I placed the points used to measure the distances in the model.

The same distances shown on the wireframe mesh.

Zoom on the distance 7-8.

Zoom on the distances 13-14, 15-16 and 17-18.

I also measured, with the tape measure, the heights of the 8 steps between the swimming pool and the house. They are not equal, ranging from 13.1 to 17.6 cm, with an average of 14.5 cm. In the model, point 19 is estimated 118 cm higher than point 5, which is within 2 cm of 8 × 14.5 cm = 116 cm, the expected total height of the 8 steps.

We can also add points to estimate measurements that we can't take with the tape measure. I did this to get the altitude of the top of the house (point 20): it should be close to 10.18 meters!

We can also compute areas and volumes directly in PhotoScan. I chose to compute the area of the first-floor terrace: I restricted the dense point cloud to the terrace and recomputed the mesh (to get a more accurate mesh for this part). The area is then given by the tool in "Tools -> Mesh -> Measure Area And Volume…"; the result is about 40 m² here. Computing a volume works the same way, but requires a closed surface (surfaces can be closed using "Tools -> Mesh -> Close Holes"). I don't have a good example in this reconstruction to try that feature.
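Under the hood, a mesh area measurement of this kind reduces to summing the areas of the mesh triangles (half the norm of the cross product of two edges). A minimal sketch:

```python
# Surface area of a triangle mesh: sum over faces of half the cross-product
# norm of two edge vectors.
import numpy as np

def mesh_area(vertices, faces):
    """vertices: (N, 3) coordinates; faces: (M, 3) vertex indices."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
```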

Building orthophoto and digital elevation model (DEM)

Let’s now show how to build the orthophoto and the digital elevation model.

The orthophoto is the projection of the 3D model onto a plane. Here I chose the XY plane with a view from the top (X and Y are the axes of the local coordinate system defined using the pool edges). The resolution can be set to a maximum precision of 4.66 mm per pixel.

Here is the result (downsized). It's essentially a 2D map built from the 3D model. Distances are easy to measure because the image has a defined scale.
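Measuring on the orthophoto amounts to multiplying a pixel distance by the export resolution (4.66 mm per pixel here). A minimal sketch:

```python
# Distance measurement on the orthophoto: pixel distance times resolution.
import math

ORTHO_RESOLUTION_M = 0.00466  # meters per pixel, from the export settings

def ortho_distance(px_a, px_b, resolution=ORTHO_RESOLUTION_M):
    """Distance in meters between two pixel coordinates on the orthophoto."""
    return math.hypot(px_b[0] - px_a[0], px_b[1] - px_a[1]) * resolution
```

For example, two points 500 pixels apart correspond to 500 × 0.00466 ≈ 2.33 m on the ground.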

Now let's compute the DEM (digital elevation model). It provides a 2D image of the elevation of each point relative to a chosen plane. I again chose the XY plane with a view from the top.

The image exported by PhotoScan is a .tif file: a matrix of floats giving the height (in meters) of each point. I used Matlab to apply a colormap to this matrix, display a colorbar and export the result as a .png.
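The same colormap step can be done in Python. A sketch assuming matplotlib is available and the DEM is already loaded as a 2D float array of heights in meters (e.g. via tifffile or rasterio):

```python
# Python equivalent of the Matlab step: colormap the DEM height matrix,
# add a colorbar and export a .png (matplotlib assumed).
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def dem_to_png(dem, out_path, cmap="terrain"):
    """dem: 2D array of heights in meters."""
    fig, ax = plt.subplots()
    image = ax.imshow(dem, cmap=cmap)
    fig.colorbar(image, ax=ax, label="elevation (m)")
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
```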

Here is the DEM merged with the orthophoto. It can be useful to match the elevations with the different parts of the house.

I guess both of these maps can be useful, for example, to quickly check the progress of public works by measuring horizontal distances and elevations. They can also be used to generate contour lines for cartography applications.

Exporting the model in Google Earth

To finish, let's quickly show how to export the 3D model into Google Earth. The first thing to do is to replace the local coordinate system, using GPS locations and absolute altitudes to define our reference markers (at the corners of the pool).

I got the longitude and latitude of the 3 reference corners of the pool using Google Maps (the same points I previously used to set the local coordinate system). This is not the most accurate method, given the resolution of Google Maps in this area; we will see later how to improve the scale calibration of the model.

The absolute altitude of these points is known from the Elevation Finder site. To be used in the WGS 84 system, this altitude has to be corrected by adding 49.246 m (the EGM96 geoid height at this location, provided by GeoidEval). The WGS 84 altitude of our reference points is then 320.078 m.
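The correction itself is simple arithmetic: the ellipsoidal (WGS 84) height is the orthometric height plus the geoid undulation at that location. A sketch; the orthometric value below is deduced from the final figure in the text:

```python
# WGS 84 (ellipsoidal) height = orthometric height + EGM96 geoid undulation.
GEOID_UNDULATION_M = 49.246  # EGM96 geoid height at the pool, from GeoidEval

def wgs84_altitude(orthometric_m, undulation_m=GEOID_UNDULATION_M):
    return orthometric_m + undulation_m

# e.g. an orthometric altitude of 270.832 m gives ~320.078 m ellipsoidal
```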

In the reference settings, I set the coordinate system to WGS 84 (EPSG::4326). The 3 reference markers can now be defined using the GPS coordinates.

The GPS coordinates and altitudes of the reference markers (points 1, 2 and 3) are now set. Since the coordinates taken from Google Maps aren't very accurate, I also added two reference scale bars (1-2 and 1-3) corresponding to the sides of the pool between the reference markers. The optimization is then done using both the GPS references and the scale bars. After the optimization, the maximum error on the reference point locations is 20 cm, while the maximum distance error (over the same 9 distances as before) is still under 3.5 cm (mean error of 1.87 cm). In other words, we keep the same accuracy when measuring distances, areas and volumes in the model (the scale of the model is accurate), but we can't expect the absolute location of the house in Google Earth to be more accurate than 20 cm. We can now save the model as a .kmz file, which can be opened in Google Earth.

Google Earth before loading the house into it.

Displaying the house in Google Earth. It's well aligned and oriented, and the scale seems consistent. It can be useful to see the reconstruction in its original environment.

Zooming in.

Conclusion and perspective

In this post, we saw how to perform a 3D photogrammetry reconstruction using PhotoScan. It took about 1 hour and 18 mins to process the 3D model (align photos, build dense cloud, build mesh and build texture) on an i7-6700K. I used a GoPro 3 to take the pictures, although it's not the best camera for this application because of its high-distortion fisheye lens. The result is nonetheless quite good, achieving a precision better than 3.5 cm in the model once it has been scaled. We also saw how to build orthophotos and digital elevation models, which can be used, for example, in public works, cartography and geology. The model can also be georeferenced and exported into services such as Google Earth to see it in its original environment.

To improve the 3D reconstruction and get more precision in the model, I should use a camera with a more appropriate lens. The best compromise would be to keep the GoPro, which is well suited for use on a drone (small, light, usable with a light gimbal), but replace its lens with a non-fisheye one to increase the accuracy of the reconstruction. Peau Productions sells non-fisheye lenses designed to fit the GoPro 3 and 4, such as an 82° HFOV (horizontal field of view) lens and a 60° HFOV lens. These lenses should really improve the result, also because their narrower HFOV (the original GoPro lens has a 123° HFOV) captures a smaller area at the same resolution, providing more detail and information for the 3D reconstruction.
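The gain from a narrower lens can be quantified roughly as pixels per degree of HFOV (ignoring distortion, which the fisheye makes worse). The 4000-pixel image width below is an assumption (12 MP GoPro 3 Black stills):

```python
# Rough angular resolution comparison between the stock 123° GoPro lens and
# narrower replacement lenses. Assumes a 4000-pixel-wide still image and
# ignores lens distortion.
IMAGE_WIDTH_PX = 4000

def pixels_per_degree(hfov_deg, width_px=IMAGE_WIDTH_PX):
    return width_px / hfov_deg

for hfov in (123, 82, 60):
    print(f"{hfov:3d} deg HFOV: {pixels_per_degree(hfov):.1f} px/deg")
```

Under this simplification, the 60° lens delivers roughly twice the detail per degree of the stock lens.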

Taking a 360° panorama in the air! (Sun, 31 Jul 2016)

I took my first 360° panorama from my quadcopter! We can see Pégomas (a small town), the sunset and the park where I do my flight tests. It's the result of 25 images stitched together with Autopano.

The images were taken at different orientations: the GoPro was shooting every second while the drone performed a slow rotation around the yaw axis (i.e. horizontally), and the GoPro was tilted up and down by controlling the gimbal orientation around the pitch axis.
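The number of shots needed per yaw rotation can be estimated from the camera HFOV and the horizontal overlap you want between neighboring frames. A sketch with illustrative values (123° is the stock GoPro HFOV; the 70% overlap is an assumption):

```python
# Estimate how many shots one 360-degree yaw row needs, given the camera
# HFOV and the desired overlap ratio between neighboring frames.
import math

def shots_per_row(hfov_deg, overlap=0.7):
    """Number of images needed to cover 360 degrees with the given overlap."""
    return math.ceil(360.0 / (hfov_deg * (1.0 - overlap)))

# shots_per_row(123) -> about 10 per row; with two or three tilted rows,
# this is consistent with the 25 images used for this panorama.
```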

The Quadcopter V1 – Demonstration Video (Wed, 13 Jul 2016)

I finally finished the first version of my quadcopter. Here is a short demonstration video:

The frame was CAD designed and laser cut into plywood at SoFAB. Carbon booms come from RCExplorer store. In this video, I use different semi-automatic and automatic Arducopter flight modes:

"Loiter" mode to fly along a straight path (01:27), using the compass and GPS.

"Circle" mode to perform the panorama (00:39) at a specific rotation speed, and also to take footage circling around me (01:43) at a given radius.

The image is stabilized with a Walkera G-2D gimbal (2 axis). The yaw axis isn't stabilized by the gimbal, so I had to post-process some footage with the warp stabilizer in Adobe Premiere Pro. I'm going to replace this gimbal with a 3-axis one soon.

Setup:

Flight Controller: APM 2.5

GPS: u-Blox CN06 Plus (RCTimer)

Motors: T-Motor MT2216-12 V2 800KV

ESC: Favourite Littlebee 30A (BLHeli)

Props: Graupner E-Prop 9x5

Battery: Zippy 4S 5000mAh 20C

RC/TX: Graupner MX-16 HoTT / GR-16

Video TX: ImmersionRC 5.8Ghz

FPV Goggles: Fatshark Attitude V2

Camera: GoPro 3 Black

Gimbal: Walkera G-2D

Total weight: 1.7 kg

Flight time: ~15 minutes

I'm going to explain how I built it in detail in the "Projects" section.