Fancy blend: How to align your model from RealityCapture with images in Meshlab

The aim of this tutorial is to create a 3D model in RealityCapture and export the mesh to Meshlab with cameras and images. The geometry will be aligned exactly with the images, so you will be able to create a fancy blend like this one:

Creating a model in RealityCapture is genuinely simple and can be done by anyone. The process is undemanding, and the results are impressive whether you are a rookie or an experienced 3D modeller.

The first nice surprise is that the background of the software is dark. Just a small detail, but after a few hours of staring at a computer screen, it is quite a comfortable change. The second great advantage of RealityCapture is the interactive tutorial, which is simple and efficient. The Help window will soon become your best photogrammetric friend, and the whole tutorial is accompanied by plenty of useful links. In our particular case, reconstructing a historical tank from World War II, we can see that you do not need a lot of images to create an amazing 3D model (not that more images are a problem for RealityCapture; the more, the better). We used 60 images (4288x2848 px at 300 dpi, taken with a NIKON D90), a resolution you can easily match with a better smartphone today.

Creating a model in RealityCapture

The Quick Start tutorial is really quick and after a while you are able to explore the whole photogrammetric universe. You can add images through WORKFLOW -> Add imagery -> Image

or add a whole image folder WORKFLOW -> Add imagery -> Folder

or simply drag and drop desired images into the program window.

Once the images are loaded, they are ready to be aligned. In this step, the software searches for points (so-called tie points) that the loaded images have in common. The result of the alignment is a 3D point cloud whose points are determined by their Cartesian coordinates and RGB colour values. To align images, click ALIGNMENT -> Process -> Align Images.
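Conceptually, each point in the resulting cloud is just a position plus a colour. A minimal sketch of that record (the field names are our own, not RealityCapture's):

```python
from dataclasses import dataclass

@dataclass
class TiePoint:
    # Cartesian position in the model's coordinate system
    x: float
    y: float
    z: float
    # colour averaged from the images that observe this point
    r: int
    g: int
    b: int

# The aligned point cloud is then simply a list of such records.
cloud = [TiePoint(0.0, 1.2, -0.5, 210, 198, 180)]
```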

Now, with the point cloud built, we can see which images have been aligned and how, their overlaps, mutual tie points, etc., through SCENE -> Alignment Cameras -> Camera Scale

Here you can see the aligned images and their overlap:

With ALIGNMENT -> Analyze -> Inspect we can see how the particular camera positions are related and where tie points (or images) are missing.

In the ideal case, the point cloud is wrapped around the object in a blue sphere-shaped net; that way we would get a perfect model. However, the point of this post is to show that you can create a gorgeous model without a professional camera or terabytes of data. If there are holes and empty spaces in your model after the reconstruction, this is why. It is best to take pictures as evenly as possible from all sides. Quantity does not always imply quality, and increasing the number of images does not raise the quality of the model if they are not taken properly.

When we are satisfied with the number of images and their relative orientation, we can proceed to the model calculation. Now we need to set a reconstruction box to define the object of interest, through RECONSTRUCTION -> Model Alignment -> Set Reconstruction Region.

We can choose between manual and automatic placement, and after the region is placed, we can still modify it by scaling and rotating the box with respect to the three axes.

Once we are done with the reconstruction region, we can proceed to the model calculation through WORKFLOW -> Process -> Calculate model. It is possible to calculate models in Preview, Normal, or High quality. The first two options take less time; the third one offers higher resolution. In our case, we chose the high-quality calculation. There were 65 input images with a total size of 190 MB, and the calculation took 25 min 22 s.

And here we are - the high quality 3D model:

After the reconstruction, it is time to give the model a nice coat. This can be done by texturing and colouring.

Currently, we are dealing with quite a lot of triangles: our model is a mesh consisting of almost 22 million triangles.

Therefore, we had better use the Simplify tool to lower the number of triangles in the mesh, making it simpler and easier to work with in the following steps. We can access the tool via WORKFLOW -> Process -> Simplify

or RECONSTRUCTION -> Tools -> Simplify Tool

and set the desired number of triangles. We have chosen to set the threshold to 1 million.

Now we can proceed to colouring and texturing. You can texture models through WORKFLOW -> Process -> Texture and colour them by WORKFLOW -> Process -> Colorize.

Alternatively, you can texture models through RECONSTRUCTION -> Process -> Texture and colour them through RECONSTRUCTION -> Process -> Colorize.

Here you can see the textured and colorized model:

Exporting the model

Now that the calculation and simplification are done, the only thing left is the export. There are several ways to export a model. For further processing in Meshlab, we need three outputs from RealityCapture:

1. Mesh

The simplified, textured model itself, exported as a polygon file (.ply), which we will later import into Meshlab.

2. Registration (.out)

A file with the extension .out can be exported through ALIGNMENT -> Export -> Registration.

We can select Bundler v0.3 (negative z), which exports the project together with the input images, but undistorted (it is important to switch 'Undistort images' to 'True'). We can also select the format of the output images (in our case .png). Choosing the negative z axis changes the orientation of the z-axis from pointing towards the camera to the opposite side, so that the camera looks down the negative z-axis. If we did not set the z-axis as negative, the output mesh would not match the output images exactly when overlapped.
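For the curious, the camera block of a Bundler v0.3 (.out) file is a simple text format. A minimal reading sketch, assuming a well-formed file (real .out files also contain a point list after the cameras, which this sketch ignores):

```python
def read_bundler_cameras(text: str):
    """Parse the camera block of a Bundler v0.3 file."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    assert lines[0].startswith("# Bundle file v0.3")
    num_cameras, num_points = map(int, lines[1].split())
    cameras, i = [], 2
    for _ in range(num_cameras):
        # One line per camera: focal length and two radial-distortion terms
        f, k1, k2 = map(float, lines[i].split())
        # Three lines for the 3x3 rotation matrix, one for the translation
        rotation = [list(map(float, lines[i + r].split())) for r in (1, 2, 3)]
        translation = list(map(float, lines[i + 4].split()))
        # In the negative-z convention, each camera looks down its local -z axis.
        cameras.append({"f": f, "k1": k1, "k2": k2,
                        "R": rotation, "t": translation})
        i += 5
    return cameras
```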

3. Image list

And the last thing we need for processing in Meshlab is the corresponding image list. You can create it manually or use our custom tool, available on Dropbox.

All you need to do is copy the downloaded Windows batch file into the folder where the undistorted images are (or create a folder with the images you want to process in Meshlab) and run it. The batch will create the text file required for loading the images into Meshlab. Under the link above, you will find two Windows batch files: one generating an image list for JPG images (listJpgImagesToTxtFile.bat) and one for PNG images (listPngImagesToTxtFile.bat).
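If you prefer not to use a batch file, the same idea can be sketched in a few lines of Python. Note that the output filename `imagelist.txt` and the one-filename-per-line layout are assumptions here; check them against the batch files linked above:

```python
from pathlib import Path

def write_image_list(folder: str, extension: str = ".png",
                     out_name: str = "imagelist.txt") -> Path:
    """Write one image filename per line into a text file next to the images."""
    folder_path = Path(folder)
    images = sorted(p.name for p in folder_path.iterdir()
                    if p.suffix.lower() == extension)
    out_file = folder_path / out_name
    out_file.write_text("\n".join(images) + "\n")
    return out_file
```

Run it against the folder of undistorted images exported from RealityCapture, then load the resulting text file in Meshlab.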

WARNING: Before you start processing the exported data in Meshlab, make sure you follow the steps mentioned above.

For advanced users of RealityCapture it may seem simpler not to download our custom tool for creating the image-list .txt file, but to export the Bundler project without undistorted images and export those separately through 'Undistorted Images with Imagelist'. However, this procedure is not recommended, as you would need to set exactly the same values in both exports (in particular, turn on centering to the principal point). If your results in Meshlab look like the following (blurry edges and wrong geometry), this is the likely cause.

Processing in Meshlab

As usual, we will start by opening a project: our previously exported Bundler (.out) file, together with the images loaded through the respective image list, via File -> Open project...

Meshlab will load our Bundler project as a point cloud.

Going through File -> Import Mesh..., we open our polygon file (*.ply), which at first glance is hidden under the point cloud.

We can scale the size of the points by holding Shift + I and scrolling the mouse wheel, or switch the point-cloud view off completely by clicking the green-eye icon on the left side of the toolbar.

To overlay the images exactly, we can match the mesh view with the raster image view by choosing the desired image in the list on the right side of the window and then clicking the Show Current Raster View button in the main toolbar, or through View -> Show Current Raster Mode.

We can change the transparency of the raster by scrolling the mouse wheel. If we look at the model now, it still appears a bit "shaggy". To achieve a smoother surface, we will apply a Laplacian filter through Filters -> Smoothing, Fairing and Deformation -> HC Laplacian Smooth. The model after HC Laplacian Smoothing looks like this:
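The idea behind Laplacian smoothing is simple: each vertex moves toward the average of its neighbours. A toy sketch of one smoothing step (the HC variant used by Meshlab adds a correction term to reduce shrinkage, which this plain version omits):

```python
def laplacian_step(vertices, neighbours, weight=0.5):
    """One plain Laplacian smoothing pass over a list of (x, y, z) vertices.

    neighbours[i] lists the indices of vertices connected to vertex i.
    """
    smoothed = []
    for i, v in enumerate(vertices):
        nbrs = neighbours[i]
        if not nbrs:
            smoothed.append(v)
            continue
        # Average position of the neighbouring vertices
        avg = [sum(vertices[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
        # Move the vertex part of the way toward that average
        smoothed.append(tuple(v[k] + weight * (avg[k] - v[k]) for k in range(3)))
    return smoothed
```

A spike vertex surrounded by a flat ring of neighbours is pulled halfway down in a single pass with `weight=0.5`, which is exactly the "de-shagging" effect seen on the model.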

Now, as the model is just as perfect as it can be, we can finally take the last steps towards our finish line and dress our model in a nice coat.

First, we apply the ambient occlusion filter through Filters -> Color Creation and Processing -> Ambient Occlusion - Per Face, which calculates how exposed each point of the surface is to ambient lighting. We can choose between two ways of calculation, per face or per vertex; here we have chosen per face.
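The underlying idea can be sketched as Monte Carlo sampling: for each face, sample directions in the hemisphere above its normal and count how many reach the "sky". The occlusion test below is a caller-supplied function, since a real implementation (like Meshlab's) would ray-cast against the mesh itself:

```python
import math
import random

def ambient_occlusion(normal, is_occluded, samples=256, seed=0):
    """Fraction of hemisphere directions above `normal` that are unoccluded.

    Returns 1.0 for a fully exposed face, 0.0 for a fully occluded one.
    """
    rng = random.Random(seed)
    visible = 0
    for _ in range(samples):
        # Random unit direction (Gaussian trick), flipped into the
        # hemisphere around `normal`
        d = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / norm for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-c for c in d]
        if not is_occluded(d):
            visible += 1
    return visible / samples
```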

Ambient Occlusion (Per Face) filtered mesh:

The next view we want to show is going to be a view with a normal map shader.

In a normal map, the RGB components correspond to the X, Y, and Z coordinates of the mesh surface's normals, respectively. The resulting view looks quite cool:
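The encoding behind this shader is straightforward: each component of the unit normal, in the range [-1, 1], is mapped linearly to a channel value in [0, 255]. A one-line sketch:

```python
def normal_to_rgb(normal):
    """Map a unit normal (nx, ny, nz) in [-1, 1] to 8-bit RGB channels."""
    return tuple(round((component + 1.0) / 2.0 * 255) for component in normal)
```

This is why normal maps of mostly camera-facing surfaces look predominantly blue: a normal pointing straight at the viewer, (0, 0, 1), encodes to a strong blue channel.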

And, finally, an undistorted original image, also generated as a snapshot from Meshlab. The original input image would not be geometrically identical because of the distortion removal, so we exported it the same way as the previous two images to keep the same dimensions as the ambient occlusion and normal map images.