Friday, July 26, 2013

Every archaeologist in Scandinavia knows the iconic Bronze Age carvings from the Swedish site of Tanumshede.

There are many interpretations of these images, which, as we experienced ourselves, are truly astonishing: simple graffiti, representations of people, mythology, and so on. Most authors agree that the carvings depict ships or boats. Their peculiar form, with two stems, has posed interpretive problems for researchers. Before the Second World War, at Hjortspring in Denmark, an answer was discovered: a ship, perhaps not from exactly the same period as the Tanumshede carvings, but the closest to it. The reconstructed boat is quite overwhelming, especially when you stand in front of it. It was built in the 1990s and looks quite good, maybe a bit strange, but convincing. The sea trials went worse: the boat turned out to be very stiff, which made it ride waves very harshly, resulting in very uncomfortable sailing and a wet crew. All these problems, together with the idea of two stems on a one-hull boat, which is difficult to justify, made us consider a different reconstruction of the Tanumshede boats. The need for longer-distance trade during the Bronze Age is obvious; the idea of sailing to the British Isles in a Hjortspring-like boat is absurd, and the double stem is motivated neither by practical nor by technological needs. We decided that these boats should instead look like outrigger proas. As we will try to show, the hypothesis is not complete nonsense: the capabilities of two-hull boats have been proven many times over the last few thousand years, as this type of boat was used to populate the islands of the Pacific Ocean across thousands of square kilometres.

During our research we found out that we were not the only ones with this idea. In 1924 Ossian Elgström conducted an experiment: he built a small proa model and asked a group of children to draw it. The results were almost identical to the rock carvings. In 1925 Gustaf Hallström published a similar article presenting the same ideas. Finally, in 1997, Sven Österholm built a small two-hull dugout boat. But, as we found out, all of them assumed that these boats were dugouts.

Our primary aim was to design a boat model in 3DS Max, using the Bronze Age rock carvings from Tanumshede as a reference, and to make it as persuasive as we could with our poor skills. We assumed that these boats were built with techniques similar to those of the Hjortspring and Ferriby boats: from wide, long planks, literally sewn together into the shape of a boat and stiffened by frames, and we tried to show this in the model. The model has real dimensions, which correspond to the model made in DelftShip, and it grew from one to more than 500,000 polygons in total.

The irregular shape forced us to think harder during texturing: we divided the model by assigning an ID to every part with a different normal direction. In the end we had two Multi/Sub-Object materials and six different textures and materials. We are especially proud of the final effect of our struggle with the water, achieved by trial and error, based only on knowledge from the course, without any further tutorials.
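The ID assignment we did by hand in 3DS Max can be sketched in code. This is only an illustration of the logic, not the actual workflow: each face gets a material ID based on the dominant axis of its normal, so faces pointing the same way share a slot in a Multi/Sub-Object material. All names and the six-ID scheme are hypothetical.

```python
# Hypothetical sketch: map a face normal to one of six material IDs,
# one per dominant normal direction (+X, -X, +Y, -Y, +Z, -Z).

def material_id(normal):
    """Return a material ID 1-6 for a face normal (nx, ny, nz)."""
    axis = max(range(3), key=lambda i: abs(normal[i]))  # dominant axis
    positive = normal[axis] >= 0
    return axis * 2 + (1 if positive else 2)  # IDs 1..6

# Example normals: mostly-up, mostly-starboard, straight down.
faces = [(0.0, 0.1, 0.99), (0.9, 0.1, -0.2), (0.0, -1.0, 0.0)]
print([material_id(n) for n in faces])  # → [5, 1, 4]
```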

We added some bipeds to make the scene more convincing and to bring a bit of life to the model. A boat of this type is stable enough for a man to stand in, and the width-to-length ratio allows building boats with a large displacement that are easy to move using only oars. Although paddling seems to be more strongly supported by the rock carvings, we found some clues that let us risk the hypothesis that masts and sails were used. The outrigger design makes it easy and safe to use relatively big sails on such small boats. The two pictures presented below were taken from a Copenhagen Museum catalogue (read the descriptions carefully :)).

DelftShip is a CAD program for designing ships and making simple hydrostatic calculations. It is free, and we decided to use it to check our boat’s properties. Using the Internet, it was possible to learn the basics in just one day, as well as to become a fully-educated specialist in hydrostatics ;-). We thought it would be possible to import a DXF file from 3DS Max, but that turned out to be impossible, so we designed a new model from scratch. Nevertheless, we exported a DXF file and, after cleaning it up in AutoCAD, obtained a good blueprint.

The aka (the link between the hull and the outrigger) is not the same as in the 3DS Max model, but that does not affect the calculations. To keep things simple, we have written the explanations directly on the report (we highly recommend opening the images in a new tab to see them at full size).

The last thing we did was a small analysis, which took us a surprisingly long time. We created a plain white background with the boat carving on it, and aligned it, the model, the camera and the lights to create a little “shadow theatre”. We set up a VRay sun, hence the yellowish colour of the plane – it is a sunrise. But it looks like flickering torchlight, and that looks fine. As it turned out, the shadow is very similar to the original carving – quod erat demonstrandum.

SUMMARY

Our primary goal was to make a model in 3DS Max, and we fulfilled it completely. Creating the model was not so difficult, which cannot be said of the texturing. As can be seen on the stems, we gave up fighting with the unwrapping, and the software created a texture perpendicular to the natural one. The best fun was placing the bipeds (Pompeii team, a shame you did not use them ;).

Our second goal was to create a model in DelftShip and to try out this software, as we had never used it before. The calculations from DelftShip showed that the boat’s properties were quite good, especially the displacement value. The displacement is very interesting: even though this is a small boat, with eight men on board it could still carry trade goods. That only strengthens our hypothesis about multi-hull ships in the Bronze Age.
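Why displacement matters for the trade argument can be shown with a back-of-envelope buoyancy check based on Archimedes’ principle. All numbers below are illustrative assumptions, not the actual DelftShip output.

```python
# Archimedes: a hull can carry (water density * submerged volume) - hull mass.
# The volume and mass figures here are made up for illustration only.

RHO_SEAWATER = 1025.0  # kg/m^3

def max_load(submerged_volume_m3, hull_mass_kg):
    """Payload the hull supports before exceeding the design waterline."""
    displacement_kg = RHO_SEAWATER * submerged_volume_m3
    return displacement_kg - hull_mass_kg

# Assumed: 2.0 m^3 of usable displacement, 500 kg of hull plus outrigger.
payload = max_load(2.0, 500.0)
crew = 8 * 80.0  # eight men at roughly 80 kg each
print(f"payload: {payload:.0f} kg, crew: {crew:.0f} kg, "
      f"cargo margin: {payload - crew:.0f} kg")
```

Under these assumed figures there is still several hundred kilograms of capacity left for cargo after the crew, which is the kind of margin that makes trade plausible.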

The result of our “torch” analysis looks very similar to the carvings. Perhaps we can imagine people waiting for their relatives to come home from distant journeys, and sailors waving to them (as on some carvings)?

Project goal: Create a 3D model of the entire outer surface of Kärnan.

Brief History:

According to dendrochronological analysis, Kärnan was built during the 1310s. The tower is the only surviving part of the once expansive Danish castle, which was in use for three centuries. In stories from the Medieval period, Kärnan is said to have been a seat of power in eastern Denmark, and thus experienced its fair share of battles and sieges.

Kärnan got its present form as the result of a restoration that took place from 1893 to 1894. Having fallen out of use for at least a century, the tower was restored by the town, partly because of its dangerously ruined state. Before the rebuilding, the tower was approximately 30 meters high, compared to its current height of 35 m. Much as in our model, the top part of the tower was completely reconstructed at this time. Our model therefore shows the present appearance of the tower, not a historical interpretation.

Our Process:

During the first week of the assignment, all three of us traveled to Helsingborg to take photos of Kärnan. First, we photographed the bottom specifically, as the view is partially obstructed by a low wall surrounding most of the building. Next, we took pictures that included as much of the building as possible. We then walked to the top of the building. Though we had our doubts about PhotoScan being able to create geometry for this part of Kärnan, we thought it best to at least try, and we took the greatest number of photos in this part of the acquisition.

Side tower being cleaned in MeshLab

The second week was spent mostly trying to get a good result in PhotoScan. Even after attempting several different methods of processing the top, only one face of the side tower was usable. The models of the bottom and the middle of the building were quite successful, and we were able to begin aligning their meshes during this week as well.

Naturally, the third week was spent doing everything else. By Wednesday we had reconstructed the top of Kärnan in 3ds Max and aligned it with our otherwise complete model. We began texturing the model on Thursday, as well as setting up the scene with VRay materials. At this point, we felt we could add elements to make it feel like a more complete scene, such as stairs, the low surrounding wall, glass for all the windows, and grass.

Technical information:

When using PhotoScan, we used all of the photos at first to align the cameras, but when building the geometry we sometimes had to disable half of them, because otherwise the process consumed too much memory.

All aligned meshes, imported to 3DS Max

We aligned and cleaned the meshes mostly in MeshLab. However, there were not enough reference points to connect the bottom tower meshes with the rest of the tower, so we combined those in 3DS Max. Applying the Poisson filter was very problematic, and we ended up not using it at all. Texturing was not possible in MeshLab, so we used only 3DS Max for that part of the project. Before importing into 3DS Max, we had to reduce the number of faces from 900,000 to 400,000.

We used 3DS Max to reconstruct most of the top of Kärnan. Aside from having to scale many of the created components, this went very smoothly. We also reconstructed the stairs and the surroundings in this program. Texturing was a challenge because of all the different objects making up the model: we assigned IDs to each object that needed a specific texture, and applied VRay material bitmaps to the Multi/Sub-Object material we created.

For our project, we worked with the Winged Man of Uppåkra, a figure found at Uppåkra during the summer of 2011. This was an exciting venture for our group, particularly because little has been published on this artefact. Nicoló suggested this project to our group because he was aware that the Historical Museum was preparing a new exhibit to display this artefact and others. After we got in contact with the museum, the curator sounded enthusiastic about our project and we were given a green light to perform a data acquisition campaign on the artefact. Our basic plan was to digitally acquire the artefact, produce a 3D model of it, and perform a camera animation around the model to give the viewer a sense of the geometry. The museum was interested in having the artefact visualised on top of a small box, the currently favoured interpretation of the role of the Winged Man of Uppåkra figure.

We were permitted access to the artefact for three hours on the afternoon of Friday, 31 May 2013, at the storage facility in Gastelyckan. Owing to the material (gilded bronze) and the complexity of shape and size of the artefact [Fig. 1], we decided that the best data acquisition method would be image-based modeling. Our group settled on three cameras: Olof’s Canon EOS 1000D with a 55mm lens, Justin’s Nikon D7000 with a 55mm fixed focal length macro lens, and Jack’s point-and-shoot camera (primarily for video). Our other equipment included a sturdy tripod, two photography lamps with diffuser screens, and a photography table.

Fig. 1 - Photo of the Winged Man of Uppåkra

The data acquisition campaign was accomplished with little difficulty. The first technique we attempted was framing the artefact so that no geometry other than the artefact itself was visible in the camera frame, allowing the camera to be kept stationary while the position of the artefact was manipulated [Fig. 2 & Fig. 3].

Fig. 2 - The stationary-camera setup; the little box was used to frame the shot before the artefact arrived.

Fig. 3 - Justin and Jack preparing the stationary-camera setup

Fig. 4 - Mobile-camera setup

Although a great method in theory, we admittedly forgot to turn off the room lights: an oversight that effectively negated the diffused photography lamp setup. Realising this, we developed a new setup in which the artefact was placed at a central point and the camera traveled around the object [Fig. 4]. Justin used a remote trigger to ensure that his shadow was not cast over the object when he had to stand in front of one of the lamps. In both setups, Jack acquired video of the artefact, using his camera to simulate how we wanted to visualise the camera movement for the final movie.

Modeling and Cleaning

After the acquisition campaign, we set about creating the models in Photoscan. Because of the size of the object, we felt our models would be best rendered using either the “High” or “Very High” settings for the geometry construction phase. Initially, the group was optimistic that the macro photos would yield the most detailed model, and indeed the first model, developed from the stationary-camera macro photos, was quite impressive. However, this initial model was produced from photos that had not used the magnification power of the macro lens: the depth of field was therefore quite deep, producing photos in which the details were extremely crisp. Unfortunately, Photoscan had severe difficulties generating geometry once photos that did use the macro lens’s magnification were added as a chunk [Fig. 5].

Fig. 5 - Photoscan was confused

The result was extremely unfavourable and likely stemmed from two factors: 1) the non-diffused room lights, and 2) the macro lens’s depth of field at high magnification. The first factor was probably negligible, but since we never acquired the model with the room lights off and only the diffused lights operating, we cannot discount it entirely. The second factor, however, undoubtedly contributed to the failure to generate geometry. The issue with macro lenses is that they have an extremely shallow depth of field: there is a very small plane of focus. In other words, the macro lens allows the photographer to magnify the details of an object, but only within a very thin region; anything too far into the foreground or background is out of focus and therefore useless for the development of a 3D model. Although this first set of photos produced a very impressive model, the detail was not captured as we had hoped.
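The depth-of-field problem can be quantified with the standard close-up approximation DOF ≈ 2·N·c·(m + 1)/m², where N is the f-number, c the circle of confusion and m the magnification. The numbers below are illustrative (c = 0.02 mm is a typical value for an APS-C sensor like the D7000’s; the f-number is assumed, not what we actually shot at), but they show why the 1:1 macro shots left almost nothing in focus.

```python
# Close-up depth-of-field approximation: DOF ≈ 2*N*c*(m + 1) / m^2.
# N: f-number, c: circle of confusion (mm), m: magnification.

def dof_mm(f_number, magnification, coc_mm=0.02):
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / (m ** 2)

# A shot of the whole figure (m ≈ 0.1) vs. a 1:1 macro shot (m = 1.0) at f/8:
for m in (0.1, 1.0):
    print(f"m = {m}: DOF ≈ {dof_mm(8, m):.1f} mm")
```

At m = 0.1 the in-focus zone is a few centimetres; at 1:1 it collapses to well under a millimetre, which is exactly the thin-plane problem described above.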

The second model was produced from the photos Olof captured while moving the artefact and keeping the camera stationary. The settings were Very High quality, Sharp, and 250k faces. With these settings and Olof’s photos, Photoscan produced a model with incredibly impressive geometry. Unfortunately, there were some major problems affecting the figure’s face (arguably one of the most important features of this artefact): for some reason there was a severe loss of detail around the cheeks and sides of the face, as well as under the nose and above the mouth. When the texture was generated, Photoscan left the model looking (quite disconcertingly, we must add) eerily similar to one of the world’s most notorious dictators [Fig. 6]. Several attempts at re-processing this model only produced the same issues or worse, while cleaning the “best” model only resulted in a large loss of detail from the face [Fig. 7].

Fig. 6 - Hmm ...

Fig. 7 - Removing the noise also removed much of the detail from the face. We would decide this was unacceptable for our final model

Justin first attempted to fix this loss of detail by reconstructing the face in 3DS Max. Displeased with the results, he eventually went back to his photos and attempted to produce a detailed model from the mobile-camera acquisition set. Although some photos were again plagued by depth-of-field limitations, the non-macro photos produced an impressive model with an excellent display of geometry (albeit less impressive than some of the geometry in the model built from Olof’s photos). Fortunately, the resulting model contained very few areas that needed cleaning, so it moved quickly from Photoscan to Meshlab (for cleaning) to 3DS Max for texturing and scene setup. The favourable qualities of this last model were key to our decision to use it as the display model for the artefact [Fig. 8].

Fig. 8 - Uncleaned final model

Scene-Setup and Animation
All three group members attempted to develop a box for the project. Working from a group-developed theory that the Winged Man of Uppåkra figure may also have been used as a brooch at one time, Jack discovered through research that there was a specific ornate style of women’s jewellery combining a brooch and a box. The combination had decorative, storage, and utilitarian functions (it was used to keep a cloak pinned to the shoulders). Although tempting to reproduce, these boxes were highly ornate and would have been a challenge to achieve in the limited time left. Such a creation would also have been entirely subjective, founded on nothing concrete from the associated excavation. Furthermore, we worried that such an ornate box might mislead the audience and even detract attention from the true purpose of the scene: to highlight the Winged Man of Uppåkra figure and the impressive details visible on its surface. We therefore decided to use a very simple wooden box instead.

Our basic scene comprises one camera and four lights. One light is a spot light, positioned to bring out the geometry of the figure through highlights and shadows. The remaining three are omni lights positioned to soften the shadows around the box and underneath the chin of the figure. For the animation, we had decided early on that it was important to have a camera moving in a circle around the model. This was accomplished using a target camera fixed on a point and bound to a path around the object. The path makes the camera approach the model from the front and move around it anti-clockwise, with the end of the path bringing the camera close alongside the model to highlight the geometry of its back and face [Fig. 9]. From this animation we produced a 10-second movie (300 frames), rendered in .AVI format at a resolution of 1536x1167. However, due to its quality and length, the film was much too large to upload to Blogger. Instead we have included the original sample video (full length but very low quality) and a short 3-second sample of the high-quality video. You will find the links to these movies after the following images.
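The circular path described above amounts to simple parametric keyframing, which can be sketched outside 3DS Max. The radius, height and target point below are illustrative values, not taken from our actual scene: each of the 300 frames (10 s at 30 fps) places the camera on a circle around the model, aimed at a fixed look-at target.

```python
# Sketch of an anti-clockwise circular camera path with a fixed target.
import math

def camera_path(frames=300, radius=5.0, height=2.0, target=(0.0, 0.0, 1.0)):
    keys = []
    for f in range(frames):
        angle = 2 * math.pi * f / frames          # anti-clockwise sweep
        pos = (radius * math.cos(angle), radius * math.sin(angle), height)
        keys.append((f, pos, target))             # frame, camera pos, look-at
    return keys

path = camera_path()
print(len(path), "keyframes; quarter-turn position:", path[75][1])
```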

Fig. 9 - Screenshot showing the path of the camera, the position of the lamps and a preview of the rendering

Fig. 10 - A rendered sample of the box and artefact placed on top

Fig. 11 - Olof hard at work rendering the high quality movie

Future Directions

Although we accomplished our goals for this project, we recognise that there is still much room for improvement. The project truly tested our skills not only in using 3D modeling software, but in the data acquisition process as well. In fact, this is something we would like to revisit in the future, as we are confident we would be much more effective in procuring data and could thereby produce a model of extremely impressive quality. We also recognise that the textures of both our artefact and the box need some work. We had hoped to visualise our models using VRay, but due to time constraints and difficulty in communication (our group was, by the last week of this project, physically separated by an ocean and several time zones) we were unable to reach this level of quality in the remaining time. The pursuit of visualising the scene with VRay is therefore a top priority for our future work on this project. Another issue we want to tackle is making the animation a bit more complex. The level of detail on this artefact is stunning, and static lights do not do it justice. We therefore intend to actively “paint” the model with light, so that its movement across the surface reveals the depth of detail as the highlights and shadows shift.

This project has been an extremely useful test of our abilities and has taught us a great deal about one of the ways virtual reality in archaeology is applied outside the classroom. We would like to sincerely thank Lovisa Dal for her assistance at the Gastelyckan storage facility, Jerry Rosengren for permitting us to work with this artefact, and Nicolo Dell’Unto for his guidance and tutelage throughout this project and the ARKN10 course in general. We look forward with great eagerness to continuing our work in order to produce a 3D model and animation of outstanding quality for the exhibit at the Historical Museum in Lund.

We worked on testing the possibilities of image-based modelling and Photoscan for capturing the geometry and colour of glass pieces. The idea was to create models of shards that could later be used to make 3ds Max reconstructions of the whole vessels.

The material came from the excavations at the Iron Age settlement site at Uppåkra. The nature of glass finds from settlement sites makes them difficult to identify: the shards are heavily fragmented and the material can be dispersed across features, making it hard to connect fragments belonging to one vessel. We were aware of the possible limitations, and the outcome of the first week of our work can hardly be called satisfying, but we managed to make some useful observations and developed ideas about possible solutions to the many problems we encountered. The preliminary results can be considered promising, but more research will be required.

Acquisition:

Glass is probably one of the most demanding materials when it comes to image-based modelling: it is reflective and transparent at the same time. Experiments with modern examples outside the studio environment were a series of not very satisfying events, but fortunately the real artefacts display somewhat different qualities, being more opaque and having more distinct surface features that the camera could capture. We knew this would be a tough experience.

Unfortunately, due to the holiday season and time constraints, we could conduct only one campaign at the museum storehouse. We initially selected three pieces that displayed different qualities:

A transparent claw beaker fragment. The shard was probably part of the rim of a beaker.


A burnt shard, most likely from a Snartemo beaker.

A piece of green glass with polished ovals. This shard is about 0.9 cm thick.

backside of the polished glass bowl

We used two diffused lights on stands to illuminate the white background on which the artefact was placed. The lights were mostly directed at the walls and ceiling of the photo room to provide even more diffuse conditions. We used a Sony Alpha 300 camera with a Sigma 18-200 mm lens and an iPhone 4. We ran some tests using a tripod that was available at the storehouse and tried to process the images on the spot to check the quality of the acquisition. Surprisingly, the best results were achieved with the iPhone used as a ‘free camera’. The Sony camera kept focusing on different spots on the glass instead of the whole piece, which resulted in less dense point clouds. The results improved when a simple grid, made from a lined piece of paper with some symbols drawn on it, was used.

Post-processing in Photoscan and Photoshop:

We uploaded the photos from the different cameras and acquisitions. Initially we processed them straight “off the memory card”, but soon realized that better results could be achieved if the images were first edited in Photoshop. We used it mainly to boost the contrast, reduce the brightness and adjust the colour a bit to improve the initial point detection.
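At pixel level, the Photoshop adjustments we used boil down to a simple per-channel transform: scale values away from mid-grey to boost contrast, then subtract an offset to reduce brightness. A minimal sketch on 8-bit values follows; the contrast and brightness parameters are illustrative, not the settings we actually applied.

```python
# Per-pixel contrast/brightness adjustment on 8-bit values (0-255).
# contrast > 1 stretches values away from mid-grey (128);
# a negative brightness shifts the whole range down.

def adjust(value, contrast=1.3, brightness=-20):
    out = (value - 128) * contrast + 128 + brightness
    return max(0, min(255, round(out)))  # clamp to the 8-bit range

pixels = [30, 128, 200]  # a shadow, mid-grey, a highlight
print([adjust(p) for p in pixels])  # → [0, 108, 202]
```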

The back side of the burned piece after Photoshop.

The front side of the burned piece after Photoshop.

Here the RAW format used for the high-quality images from the Sony Alpha 300 offered many possibilities; unfortunately, time did not allow us to test everything. Altering the colour might also be considered a problem, since colour is an important quality in research on ancient glass. Another problematic issue was the scale of the shards: when confronted with small objects placed on a flat surface, Photoscan creates many artificial bumps that mainly occlude the edges, which is a major setback in the next step of the process.

The Photoscan model without texture, from the iPhone acquisition

The Photoscan model in a wireframe visualization

Photoscan model with texture

The output was quite satisfying. The surface was a bit bumpy, probably due to the reflective glass surface, and the top and bottom halves of the shards had yet to be connected.

Cleaning of the models and alignment in MeshLab

To clean and align the shards we used MeshLab. It was a difficult process, since the only part common to both models was the edges, which are usually only a couple of millimetres thick and are easily distorted by Photoscan. One way to solve that problem would probably be the rough manual glue tool, but it would be a long and difficult process. Another problem we encountered was that we would lose all the textures if we merged parts of the model. The solution was to re-project the colour information from the high-quality textures onto the vertices, and later use sampling to colour a Poisson-filter model made out of both merged meshes. This simple process was nevertheless impossible to complete, because of a problem with mesh flattening: the program crashed each time we ran it, even though most of the cleaning and repairing filters had been applied beforehand. MeshLab does, however, offer excellent filters for smoothing out the mesh, for example Laplacian smooth.
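The Laplacian smooth that MeshLab applies can be shown in miniature: each vertex is pulled toward the average of its neighbours, which flattens the kind of artificial bumps Photoscan produced on our shards. The sketch below runs on a 1-D polyline of height values standing in for a bumpy scanned surface; the weight and iteration count are illustrative.

```python
# Minimal Laplacian smoothing: move each interior vertex part-way toward
# the average of its two neighbours, repeated for a few iterations.

def laplacian_smooth(heights, iterations=1, weight=0.5):
    h = list(heights)
    for _ in range(iterations):
        new = h[:]
        for i in range(1, len(h) - 1):
            avg = (h[i - 1] + h[i + 1]) / 2.0
            new[i] = h[i] + weight * (avg - h[i])  # pull toward neighbour average
        h = new
    return h

bumpy = [0.0, 1.0, 0.0, 1.0, 0.0]  # alternating "bumps"
print(laplacian_smooth(bumpy, iterations=2))  # → [0.0, 0.375, 0.5, 0.375, 0.0]
```

Two iterations already turn the alternating spikes into a gentle hump, which is the same trade-off we saw on the real meshes: noise disappears, but sharp detail softens with it.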


roughly cleaned photoscan model without texture

Photoscan model with texture

smoothed photoscan model without texture

smoothed photoscan model with texture

Preparing the reconstructions and placing the pieces in 3ds max

After creating the models in MeshLab, we needed some reconstructed vessels on which to place them for presentation. These were made in 3ds Max, using drawings and photos as source material. We mainly used the Lathe modifier to create the “body” of a glass vessel, to which additionally modelled parts, such as strings made with the Sweep modifier, or claws, were added. The modelling could be difficult, since it is a challenge to place the strings directly on the curved surface of a beaker.

The hardest part, however, was achieving a good effect with the V-Ray material. It is a powerful tool that offers many possibilities, but ancient glass is very hard to model, since each vessel displays slightly different qualities: many levels of transparency, different colours and different scales of reflection. It would be possible to spend hours working towards a perfect outcome; in our case, time was a limiting factor. We managed to produce some renderings, but to save time only the front parts of the modelled pieces were placed on the reconstructions.

The reconstruction was made with this drawing of a bowl with polished ovals, and the Uppåkra shard, as references.

Drawing from Stjernquist 2004.

Reconstruction, in 3d studio max, of a bowl with polished ovals. The Uppåkra shard could have been a part of a bowl of this type.

The model of the shard attached to the reconstructed bowl.

The reconstruction of the Snartemo beaker was made with this drawing and the Uppåkra shard as references. Drawing from Näsman 1990.

Reconstruction of a beaker of Snartemo type. These beakers can have different shapes and colours so this is one interpretation made with the shard and a drawing as reference.

Conclusion and further perspectives

More research is still needed on the application of image-based modelling to glass reconstruction. Our results are promising, but a more standardised method and approach must be developed. More tests are needed, especially of the acquisition techniques, and a bigger sample of glass pieces needs to be processed.

Using powder on the pieces and scanning them with a laser scanner, with later reprojection of colour from photos, might have been an easier solution. It is nevertheless possible to create a model and a reconstruction starting from image-based modelling. We are still not sure whether it can be applied to every piece, or will only work on selected examples displaying certain qualities, such as low transparency and high opacity. It is a difficult process that probably gets much easier with experience, and more impressive results can be reached in the longer term. Training in light setting and studio photography would probably also improve the results, since the acquisition stage is always the crucial link in image-based modelling.