Monday, 23 December 2013

One of the most interesting aspects of 3D computer graphics is the constant need to test different ways of doing the same work, generally in search of an alternative that is more practical and faster without losing quality compared to traditional techniques.

In the field of forensic facial reconstruction, for example, although a good range of methodologies linked to digital modeling already exists, it never hurts to propose new approaches, especially if they are faster and more accurate with respect to human anatomy.

As mentioned earlier, in order to make forensic facial reconstruction more convenient, intuitive and less tiring, we decided to change somewhat the way the muscles and the skin base are modeled.

Despite being quick to execute, the previous method is not very intuitive, much less elegant. I figured it would be much more practical to sculpt the structure of the major muscles directly on the skull. The big question was: how to do this using the structure of the skull?

1) I only had to wrap the skull with another object, such as a sphere, using Blender's Shrinkwrap modifier. This object needs to be slightly smaller than the skull, revealing which areas should be filled with muscle.

2) The geometry created this way is difficult to edit because of the lack of uniformity in its structure. To resolve this impasse we use the Remesh modifier, which creates a mesh of four-sided faces of roughly uniform size, ideal for sculpting.

3) With the base set, I just sculpt the major muscles, creating new subdivisions where necessary. When using Blender's sculpting tools, the appearance of the muscles is naturally fibrous, taking on a structure more compatible with real tissue. Even if this is not the ideal result, it is at least far superior to that of the previous methodology.

4) Once the muscles are ready, simply create a copy, simplify it with Remesh and sculpt the base of the skin, which will be the benchmark for fitting the face modeled earlier.

5) Using Shrinkwrap, we fit the mesh to match the muscles and the tissue depth markers.
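The snapping at the heart of steps 1 and 5 can be illustrated numerically. This is not Blender's Shrinkwrap modifier itself, just a toy sketch of its "nearest vertex" idea, with made-up coordinates:

```python
import math

def nearest_point_snap(verts, target_points):
    """For each vertex, snap to the nearest point of the target cloud.

    This mimics the core idea of Blender's Shrinkwrap modifier in
    'Nearest Vertex' mode (the real modifier also offers surface and
    projection modes)."""
    snapped = []
    for v in verts:
        best = min(target_points, key=lambda t: math.dist(v, t))
        snapped.append(best)
    return snapped

# A toy "skull" point cloud and one vertex of the wrapping "sphere":
skull = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
sphere_verts = [(0.9, 0.1, 0.0)]
print(nearest_point_snap(sphere_verts, skull))  # snaps to (1.0, 0.0, 0.0)
```

In the actual workflow the modifier does this (and the Remesh rebuild) interactively, but the principle is the same: every vertex of the wrapping object is pulled onto the target geometry.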

The purpose of this methodology, among other things, is to harness the expertise of professionals in classical modeling, making the process simpler and opening up a division of labor: one person sets up the skulls with the tissue depth markers, another models the muscles and the skin base, and finally the artist creates the final rendered face. Last but not least, sculpting in Blender turns a phase that used to be boring into one of the nicest and most fun parts of forensic facial reconstruction, such is its convenience and intuitiveness.

The development of the methodology continues; any news will be posted here.

A big hug!

Acknowledgements

To The Field Museum in Chicago, for sharing the video with the CT scan that was reconstructed in this post.

Saturday, 21 December 2013

Yesterday I had the opportunity to attend a very interesting event: the School of Data held in Trento (to tell the truth, for work reasons, I could only see the last part of the meeting, but it was very instructive).

There, among others, I met Maurizio Napolitano, one of the greatest experts on Open Data here in Italy, and he mentioned an important initiative I did not know about: a petition for Open Data in Italian Cultural Heritage.

This "minipost" is intended to help the petition reach its objective (some legislative amendments to allow and encourage the use of Open Data in our work, which is, of course, also one of the aims of ATOR).

Here is the link to sign the petition. The goal is set for 200 signatures before December 31, but I think we can do something more :).

Wednesday, 18 December 2013

In May 2013 I traveled to Curitiba (Brazil) for two events: Mummy's Happy Day 2, where I gave a lecture, and Faces of Evolution, an exhibition where I presented a series of facial reconstructions of hominids modeled by myself and by the archaeologist Dr. Moacir Elias Santos.

The conference was a success and, at the height of the excitement, we had already begun to draft plans for a new exhibition. As the staff of the Rosicrucian and Egyptian Museum had an Andean mummy (in Portuguese) of a child who was about two years old when she died, we decided that the next exhibition would contemplate the children of the past: what their homes and toys were like, and the historical reality of the time in which they lived.

Like Faces of Evolution, the exhibition Children of the Past (working title) would consist of facial reconstructions of children from various periods.

Until then we had the mummy of St. Louis, a baby from the Roman period of ancient Egypt, in addition to the aforementioned Andean mummy, which had not yet been CT scanned.

After the initial excitement, life returned to normal, but the desire to rebuild faces did not leave us. A few days ago, when I was writing an article about a mummy whose scan I had extracted from a video, I had to go back to the original material posted on Vimeo (sent by The Field Museum), so I could remember which part of the footage corresponded to that mummy's CT slices.

When I reviewed the video, I almost had a heart attack: I do not know why, about a year ago, I had not realized that that footage contained at least two or three scanned mummies ... and one whole body!

When I had some time between one job and another, I extracted the CT scans. It works like this:

1) Convert the video into a sequence of images.

2) Edit the image sequence, isolating the area where the CT appears (as pictured above).
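The post does not name a tool for these two steps, but a command-line converter such as ffmpeg can do both at once (frame extraction plus a crop filter). The sketch below only builds the command; the file names and crop region are hypothetical:

```python
def ffmpeg_frames_cmd(video, out_pattern, crop=None):
    """Build an ffmpeg command that turns a video into an image
    sequence, optionally cropping to the region where the CT appears.

    crop is (width, height, x, y) in pixels, matching ffmpeg's
    crop=w:h:x:y video filter syntax."""
    cmd = ["ffmpeg", "-i", video]
    if crop is not None:
        w, h, x, y = crop
        cmd += ["-vf", f"crop={w}:{h}:{x}:{y}"]
    cmd.append(out_pattern)
    return cmd

# Extract frames, keeping only a hypothetical 512x512 CT region:
print(ffmpeg_frames_cmd("scan_video.mp4", "slice_%04d.png",
                        crop=(512, 512, 100, 80)))
```

The resulting image sequence can then be loaded into InVesalius as a slice stack.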

I was very enthusiastic when I opened the file in InVesalius and came across a complete mummy inside a beautiful coffin. Until that moment, I did not imagine it was a boy.

Speaking of the coffin, it was possible to observe the details of the timber, both internally and externally.

When I filtered the data in InVesalius, hiding the wooden layer, behold, the skeleton inside was revealed in three dimensions. The "impurities" around the coffin seem to be the sealing material between one timber and another.

One of the interesting features of a CT scan is that it normally carries real-world scale. In the case of the boy, though, I had to resort to the references presented in the video to put the model at the correct scale and discover his height: 1.37 m (dehydrated). From this datum the age estimation began.
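Since the slices here came from a video, the scale had to be recovered from a reference of known length visible in the footage. A minimal sketch of that arithmetic, with hypothetical pixel counts (only the final 1.37 m figure comes from the post):

```python
def mm_per_pixel(ref_len_px, ref_len_mm):
    """Millimetres represented by one image pixel, derived from a
    reference of known real-world length visible in the video."""
    return ref_len_mm / ref_len_px

def real_height_m(height_px, scale_mm_per_px):
    """Convert a measurement in pixels to metres."""
    return height_px * scale_mm_per_px / 1000.0

# Hypothetical numbers: a 100 px reference known to be 250 mm long,
# and a body spanning 548 px in the slices.
scale = mm_per_pixel(100, 250)       # 2.5 mm per pixel
print(real_height_m(548, scale))     # 1.37 (metres)
```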

To estimate the age, one of the most accessible ways is to analyze the teeth. The problem is filtering the bone region, which would be endless work, since the sealing material would be exported along with the skull. Fortunately InVesalius has a tool for selecting and deleting areas, which ended up helping a lot in filtering the bones, leaving practically only the skull to export.

Once the skull was imported into Blender, it became easier to see the details and confirm that this was a boy between 12 and 15 years old. How did we know it was a boy? Simple: the tomography covers the full body, so the characteristics that distinguish the sexes were evident in the body morphology.

Then the muscles were fitted to the skull.

The same was done with the skin, starting from a previously modeled face and deforming it until its characteristics matched the mummy of the boy. A video of this technique can be seen here.

To finish, I used the patterned clothing modeled for a previous reconstruction, and the mummy boy who came from a video was finally ready.

Monday, 9 December 2013

In 2009 I came across a technology called SfM, or Structure from Motion, in which, from a series of photographs and a kind of reverse engineering using the camera data, we can reconstruct the photographed objects as a point cloud in three dimensions. After many studies I realized it was not a trivial task to get good results, but I did not give up until I found very interesting material on the ATOR blog.

I already knew the impressive 123D Catch, but my goals with this technology were two: 1) scanning objects in 3D using only free software, and 2) the scans would need to have millimeter accuracy.

On the ATOR site I found excellent material that enabled me to learn how PPT-GUI works and to achieve results that, at least at the time, were amazing to me. It was not so long ago: May 2012.

As usual, I sent a thank-you email to the staff of ATOR, taking the opportunity to congratulate them on the excellent work presented in the posts. I told them I was interested in their awesome field of archaeology and was available if they needed anything related to 3D.

The answer came quickly and was very positive. The staff of Arc-Team, the archaeological research group that maintains the ATOR blog, congratulated me on the work I was developing with free software and invited me to write on the site, which I readily accepted. Moreover, they asked if I was interested in participating in one of their projects: the reconstruction of the castle of Caldonazzo, on whose ruins they were working. This second proposal I also readily accepted. At that moment a partnership was born that has lasted a year and a half and has borne good fruit.

Caldonazzo

Caldonazzo is a tourist village located in Trentino, northern Italy. Famous for its lakes and mountains, it houses the ruins of what was once a great castle, built between the twelfth and thirteenth centuries.

Since 2006 these ruins have been the object of study under the responsibility of the Archaeological Superintendence of Trento, represented by Dr. Nicoletta Pisu.
Arc-Team, the group I had joined, carries out the archaeological survey and the organization of historical documents, and was also tasked with scanning the space in three dimensions and rebuilding it digitally. It is precisely this last part that involves my work.

The Reconstruction

Despite my knowledge of architectural modeling, I had never worked with archaeological buildings. The challenge was to create something from nothing. Above we have the humanized floor plan created from the 3D model, but at the beginning of the modeling we did not have much information, and some features of the work would keep changing as new references were found.

From Italy the staff of Arc-Team sent me via Dropbox all the data that had been collected: scans made from photographs, notes on the works, the basic floor plans, facades, etc.

To facilitate the work I chose to use Inkscape to align the scanned elements with the floor plans and lay the groundwork for the architectural modeling.

With the floor plan and the placement of elements done, it sufficed to raise them in 3D.

I took the contour curves of the terrain and converted them into a mesh, which little by little received the castle, so the whole model gradually adapted to the terrain.

All the while following the basic data of the floor plans and sections I had received from Italy.

Texturing was set up early, together with the vegetation. When we work with architectural modeling, people expect visual results as fast as possible; the coloring and mapping of the scene offer a preview of how it will look, keeping everyone motivated.

As data arrived over the internet and the modeling progressed, the palace and the courtyard greenery received a modest humanization.

Details such as the configuration of the stairs were widely discussed, since they represented a strategic tool of defense. In the case of the castle of Caldonazzo, according to the surveys, access to the tower relied on a retractable staircase. Thus, if the castle were invaded, the occupants would defend themselves by shutting themselves in the tower; storing the ladder hindered the attackers' access to it.

During the modeling of the castle courtyard, we also worked on positioning the cameras to show everything in the most didactic and elegant way possible.

After a few months of work, the outside of the palace was completely modeled, leaving only a few details to be completed. The main cameras were already positioned and the humanization was complete.

The structure was ready, waiting for the footage taken by drones (look at the post about the filming, it is very interesting!), so the castle could be inserted virtually through a technique called camera tracking, where the program captures the movement of the real camera and transfers it to the 3D scene. Then, through image compositing, the real footage and the 3D scene can be fused.
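Camera tracking rests on the pinhole camera model: a 3D point maps to a 2D pixel through the focal length and the image centre, and the tracker solves the inverse problem, recovering the camera's motion from many 2D tracks. A minimal sketch of the forward projection, with made-up camera parameters:

```python
def project(point3d, f_px, cx, cy):
    """Project a 3D point (camera coordinates, z pointing forward)
    onto the image plane with a simple pinhole model.

    f_px is the focal length in pixels; (cx, cy) is the image centre.
    Camera tracking estimates these, plus the camera pose, from the
    2D trajectories of tracked features."""
    x, y, z = point3d
    return (f_px * x / z + cx, f_px * y / z + cy)

# A point straight ahead of a 640x480 camera lands on the image centre:
print(project((0.0, 0.0, 2.0), 800.0, 320.0, 240.0))  # (320.0, 240.0)
# A point offset to the right lands right of centre:
print(project((1.0, 0.0, 2.0), 800.0, 320.0, 240.0))  # (720.0, 240.0)
```

Once the solved virtual camera reproduces these projections frame by frame, the rendered castle can be composited over the drone footage.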

Some tests crossing photo + 3D scene had already been made, like the picture shown above, aiming to match the "color palette" of the real environment.

While the outside of the castle was being modeled, data were collected and the internal divisions of the building were refined. With the floor plan ready, the indoor environments were composed little by little, both in the arrangement of furniture and in the lighting.

Above we have the dining room of the castle. Interestingly, the floor in question is not only of wood: mortar was laid over it, much like the floors we have today, but matte. The walls, in contrast, were covered with wood paneling.

Unlike modern architectural modeling of interiors, such work is done with reference to documentary evidence. The artist does not do what he wants, or even what is most pleasing to the eye; he follows the comments of the archaeologists, who in turn are guided by documents, pictures and the excavation of the site.

The kitchen had fairly simple fittings compared to what we have today.

During the excavations, the staff of Arc-Team found some pieces of the plaster wall belonging to the top floor, where the dormitory would have been. Through graphic reconstruction, I created a pattern to adorn the wall in the 3D model.

The room was then modeled, always according to the observations of the archaeologists.

To make the work easier to understand, a lightweight cutaway perspective was composed, covering the largest possible number of environments. Thus we can get a good idea of the building's structure, scales and the like.

We also developed a blueprint to serve as the basis for presenting the site and the reconstruction, always using Inkscape.

Once the drone footage was taken, the time came to cross the real filmed scene with the virtual scene raised by photogrammetric scanning.

The work is still in its initial phase. Because it is a natural, irregular hill, it was a challenge to match the real scene with the virtual scene.

Fortunately the process was facilitated by the robustness of the tracking and compositing tools natively present in Blender. Above is an image with the 3D viewport on the left and the render on the right.

Now we have two different rendered frames. Note that the 3D scene fits the scene in the background, which is the video.

At the beginning of the post the results of previous tracking studies were shown. There is still much work ahead, but little by little we can get a good idea of what the palace of Caldonazzo was like in its heyday, all done with free and open software.

I hope you enjoyed.

I leave here my thanks to Arc-Team for the opportunity to work with them on this fantastic project. I hope it's the first of many. Grazie tante amici!

Thursday, 5 December 2013

Hi all,
I would like to present the results we obtained in the Caldonazzo castle project. Caldonazzo is a touristic village in Trentino (northern Italy), famous for its lake and its mountains. Few people know about the medieval castle (12th-13th century) whose tower actually appears in the town's coat of arms. Since 2006, the ruins have been subject to a valorization project by the Soprintendenza Archeologica di Trento (dott.ssa Nicoletta Pisu). As Arc-Team we participated in the project with archaeological fieldwork, historical study, digital documentation (SfM/IBM) and 3D modeling.
In this first post I will speak about the 3D documentation, the aerial photography campaign and the data elaboration.

1) The 3D documentation
One of the final aims of the project will be the virtual reconstruction of the castle. To achieve that goal we need (as a starting point) an accurate 3D model of the ruins and a DEM of the hill. The first model was realized in just two days of fieldwork and four days of computer work (most of the time without direct contribution from the human operator). The castle's walls were documented using Computer Vision (Structure from Motion and Image-Based Modeling); we used Python Photogrammetry Toolbox to elaborate 350 pictures (Nikon D5000) divided into 12 groups (external walls, tower inside, tower outside, palace walls, fireplace, ...).

The different point clouds were rectified thanks to some ground control points. Using a Trimble 5700 GPS, the GCPs were connected to the Universal Transverse Mercator coordinate system. The rectification process was led by GRASS GIS using the Ply Importer Add-on.

To avoid some problems encountered when using a universal coordinate system in mesh editing software, we preferred, in this first step, to work with only three digits before the decimal point.
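In practice this means subtracting a constant offset from the UTM eastings and northings before mesh editing (full UTM values are too large for the single-precision floats many mesh editors use) and adding it back afterwards. A sketch with hypothetical offset values:

```python
# Hypothetical offsets for a UTM site; the real values depend on the
# survey area. Subtracting them leaves only three digits before the dot.
OFFSET_E, OFFSET_N = 665000.0, 5100000.0

def utm_to_local(e, n):
    """Shift UTM coordinates into a small local frame for mesh editing."""
    return (e - OFFSET_E, n - OFFSET_N)

def local_to_utm(e, n):
    """Shift local coordinates back to UTM after editing."""
    return (e + OFFSET_E, n + OFFSET_N)

print(utm_to_local(665123.45, 5100987.65))  # approximately (123.45, 987.65)
```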

2) The aerial photography campaign

After the walls documentation we started a new campaign to acquire the data needed for modeling the surface of the hill (DEM) where the ruins lie. The best solution for taking zenithal pictures was to pilot an electric drone equipped with a video platform. Thanks to Walter Gilli, an expert pilot and builder of aerial vehicles, we had the possibility to use two DIY drones (a hexacopter and a xcopter) mounting Naza DJI technology (Naza-M V2 control platform).

Both drones had a video platform. The hexacopter mounted a Sony NEX-7; the xcopter a GoPro HD Hero3. The table below shows the differences between the two cameras.

As you can see, the Sony NEX-7 was the best choice: it has a big sensor, a high image resolution and a perfect focal length (16 mm digital = 24 mm compared to 35 mm film). The only disadvantages are its greater weight and size compared to the GoPro, which is why we mounted the Sony on the hexacopter (more propellers = more lifting capability). The main problem of the GoPro is the ultra-wide-angle lens, which distorts reality at the borders of the pictures.
The flight plan (image below) allowed us to take zenithal pictures of the entire surface of the hill (one day of fieldwork).
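The 24 mm equivalence quoted above can be checked from sensor geometry; the 23.5 mm APS-C sensor width used here is an assumed figure for the NEX-7:

```python
def equiv_35mm(focal_mm, sensor_width_mm):
    """35 mm-equivalent focal length, using the 36 mm width of a
    full-frame sensor as reference."""
    return focal_mm * 36.0 / sensor_width_mm

# Sony NEX-7: APS-C sensor, assumed ~23.5 mm wide
print(round(equiv_35mm(16.0, 23.5), 1))  # ~24.5, roughly the 24 mm quoted
```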

The best 48 images were processed with Python Photogrammetry Toolbox (one day of computer work). The image below shows the camera positions in the upper part and the point cloud, the mesh and the texture in the lower part.

At first the point cloud of the hill was rectified to the same local coordinate system as the walls' point clouds. The gaps in the zenithal view were filled by the point clouds acquired on the ground (image below).

After the data acquisition and data elaboration phases, we sent the final 3D model to Cicero Moraes to start the virtual reconstruction phase.

3) The Orthophoto

The orthophoto was realized using the texture of the SfM 3D model. We exported from MeshLab a high-quality orthogonal image of the top view, which we then rectified using the Georeferencer plugin of QuantumGIS.
As an experiment we also tried to rectify an original picture using the same method and the same GCPs. The image below shows the difference between the two images. As you can see, the orthophoto matches very well with the GPS data (red lines and red crosses), while the original picture has some discrepancies in the left part (the area farthest from the drone position, which was zenithal over the tower's ruin).


4) The DEM

The DEM was realized by importing (and rectifying) the point cloud of the hill into GRASS 7.0svn using the Ply Importer Add-on. The text file containing the transformation info was built using the relative coordinates extracted from CloudCompare (Point list picking tool) and the UTM coordinates of the GPS GCPs.

After importing the data, we used the v.surf.rst command (regularized spline with tension) to transform the point cloud into a surface (DEM). The images below show the final result in 2D and 3D visualization.
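v.surf.rst interpolates the imported points with regularized splines under tension; the sketch below uses a much simpler inverse-distance weighting, only to illustrate the point-cloud-to-surface idea behind a DEM (coordinates and elevations are made up):

```python
def idw_surface(points, grid_xs, grid_ys, power=2.0):
    """Interpolate scattered (x, y, z) points onto a regular grid.

    Note: GRASS's v.surf.rst uses regularized splines with tension,
    a more sophisticated method; this inverse-distance-weighting
    sketch only illustrates turning a point cloud into a surface."""
    rows = []
    for gy in grid_ys:
        row = []
        for gx in grid_xs:
            num = den = 0.0
            exact = None
            for x, y, z in points:
                d2 = (x - gx) ** 2 + (y - gy) ** 2
                if d2 == 0.0:       # grid node coincides with a sample
                    exact = z
                    break
                w = d2 ** (-power / 2.0)
                num += w * z
                den += w
            row.append(exact if exact is not None else num / den)
        rows.append(row)
    return rows

pts = [(0, 0, 100.0), (1, 0, 110.0), (0, 1, 120.0)]
print(idw_surface(pts, [0, 1], [0, 1]))  # 2x2 grid of elevations
```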