In June 2018, Otto Lowe and Pedro Miró recorded the famous Jamnitzer Bell – also known as the Cellini Bell – at the British Museum. The small, silver, and highly decorated bell was recorded using photogrammetry, a method that in this case posed a number of difficulties.

Although predominantly matte, the bell contained highly reflective areas, a characteristic that makes recording with any system challenging. The level of detail on the small bell also required many more photographs, taken at higher precision, than would normally be needed for an object of this size.

In order to deal with the issues surrounding the recording, the object was effectively photographed twice. First, the team took long exposures on a tripod using a Sony A7RII with a Zeiss 55mm F1.8 lens and a Sony G 90mm F2.8 macro lens. The second recording employed cross-polarisation with a Canon 5DSR, a Sigma 50mm lens, and a Yongnuo mounted flash. Cross-polarisation is a technique in which both lens and flash are covered with a polarising filter in order to reduce glare on the object. It is, however, an imperfect solution, given that strong reflections can be rendered as totally black, and information is thereby lost.

Using the two methods allowed us to produce two models, each containing a different dataset: one with high, macro detail of the decorations, the other with minimal interference from glare. During the recording, it was also hoped that it would be possible to align the datasets together to produce a single model incorporating all the information.

The issue with aligning the cross-polarised images with ordinary photos is that, as the name suggests, the tonal polarity is reversed. What looked silver with white glare in the normal photos of the bell appeared brown with black shadows in the cross-polarised images. This was a problem because the algorithms behind photogrammetry use colour as a reference to align images: two images showing the same scene in different colours are unlikely to be recognised as the same by the algorithm.
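Why tonal inversion defeats matching can be shown with a toy calculation. The sketch below (a simplification: real photogrammetry pipelines use feature descriptors, not raw patch correlation) compares a synthetic image patch against itself and against its tonally inverted copy using normalised cross-correlation. Identical patches score +1; the inverted patch scores −1, i.e. the "same" feature looks maximally dissimilar.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
patch = rng.random((32, 32))   # stand-in for a patch of the silver surface
inverted = 1.0 - patch         # the same feature with tones flipped

print(round(ncc(patch, patch), 3))     # 1.0  – identical patches correlate perfectly
print(round(ncc(patch, inverted), 3))  # -1.0 – tonal inversion makes the feature anti-correlate
```

A matcher looking for high positive correlation would therefore reject the pairing outright, which is why the colours had to be unified before alignment.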

The first objective in aligning the bell was to unify colour across the two datasets. This involved sifting through each of the 1,400 images in Lightroom, arranging them into groups by tone and exposure, and then ensuring that both variables were corrected consistently across the groups. Initially, each dataset – cross-polarised and normal – aligned perfectly within itself, but not with the other. Control points – a manual way of telling Reality Capture that there is a common feature between two or more images – were used to merge them together: the user locates the same feature by hand in images from each dataset. In this case, only two images from each dataset were used.
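The grouping-and-correction step can be sketched in code. The function below is a deliberately simplified stand-in for the Lightroom work: it applies a flat gain so every image's mean luminance lands on a common target, whereas the real batch correction adjusted tone and exposure per group on the raw files.

```python
import numpy as np

def unify_exposure(images, target_mean=0.5):
    """Scale each image so its mean luminance matches a common target.

    Hypothetical simplification of the Lightroom step: a single gain per
    image, rather than per-group tone and exposure adjustments.
    """
    out = []
    for img in images:
        gain = target_mean / img.mean()
        out.append(np.clip(img * gain, 0.0, 1.0))
    return out

rng = np.random.default_rng(1)
# Two mock "groups": darker cross-polarised frames, brighter ordinary frames.
dark = [rng.uniform(0.05, 0.35, (8, 8)) for _ in range(3)]
bright = [rng.uniform(0.5, 0.9, (8, 8)) for _ in range(3)]
corrected = unify_exposure(dark + bright)
print([round(c.mean(), 2) for c in corrected])  # every mean now sits at 0.5
```

Once the two sets occupy the same tonal range, the matcher has a chance of recognising shared features across them.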

The figure above demonstrates how the control points were set. The left and central images of the bottom row were taken from the cross-polarised set; the two images on the right were taken from the ordinary set. All display the same feature, which appears black in the ordinary images and white in the cross-polarised ones. By manually setting a control point on it, we told the algorithm to recognise this point as a single feature.
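Conceptually, a control point is just a named feature tied to pixel positions in several photographs. The structure below is hypothetical – Reality Capture stores control points internally and the filenames and coordinates are invented – but it illustrates what the manual marking establishes: one feature observed in images from both datasets.

```python
from dataclasses import dataclass, field

@dataclass
class ControlPoint:
    """A named feature marked by hand in several photographs (illustrative only)."""
    name: str
    observations: dict = field(default_factory=dict)  # image filename -> (x, y) in pixels

    def spans(self, *dataset_prefixes):
        """True if the point was marked in at least one image of every dataset."""
        return all(
            any(img.startswith(p) for img in self.observations)
            for p in dataset_prefixes
        )

cp = ControlPoint("engraved_scroll_tip")  # hypothetical feature name
cp.observations["polarised_0412.jpg"] = (1034.2, 771.8)
cp.observations["polarised_0415.jpg"] = (988.5, 760.1)
cp.observations["ordinary_0231.jpg"] = (1420.7, 1012.3)
cp.observations["ordinary_0233.jpg"] = (1399.0, 1005.6)
print(cp.spans("polarised", "ordinary"))  # True – the feature bridges the two datasets
```

Because the point appears in images from both sets, it gives the alignment a bridge that colour-based feature matching could not find on its own.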

Having manually set three such control points, the photogrammetry software Reality Capture was able to align both datasets perfectly.

Image of the entire aligned object, with camera positions displayed, processed with the software Reality Capture

The reconstructed 3D model was made up of 91.5 million polygons – a vast 3D file. In order to view it, the resolution was lowered to 10 million polygons. Below are renders of the reduced 3D model.