SV-POW! … All sauropod vertebrae, except when we're talking about Open Access

Sauroposeidon in perspective–too much perspective

January 29, 2014

Here are two photos of what I infer to be C8 of OMNH 53062, the holotype of Sauroposeidon. The top one was taken by Mike during our visit to the OMNH in 2007. If you’re a regular you may recognize it from several older posts: 1, 2, 3. The bottom one was taken by Mike Callaghan, the former museum photographer at the OMNH, sometime in 1999 or 2000. I used it in Wedel et al. (2000) and Wedel and Cifelli (2005).

You’ll notice that the two photos are far from identical. In both cases, the photographers were up on ladders, as far above the vertebra as they could get, and there are still significant perspective effects. That’s just a fact of life when you’re taking photos of a vertebra that is 1.4 meters long, from anything lower than a helicopter. In Mike Taylor’s shot, the neural spine looms a little too large; in Mike Callaghan’s shot, the prezygapophysis looks a little too small, probably because it was curving off at the edge of the shot. So neither photograph is “right”; both distort the morphology of the specimen in different ways. Here’s how the two images stack up, with the outlines scaled to the same length:

When I ran a draft of this post past Mike, he wrote (with permission to post):

I think the current draft misses an important point: the warning. We really can’t trust photos, however carefully taken, and however beautifully composited into TNFs*. You’re welcome to quote me as having said I’d have assumed the two C8s were different vertebrae. For that matter, I bet I could have worked up several taxonomically significant characters to distinguish them. Yikes.

So the moral is, photos of big specimens almost always involve some distortion. This is clearly not ideal. But I have a plan for fixing it. I am hoping to get back to the OMNH this spring, and the next time I’m there, I’m going to take photos of this vertebra from a zillion angles and make a 3D model through photogrammetry. Happily, Heinrich Mallison has been producing a very helpful series of tutorials on that very topic over at dinosaurpalaeo: 1, 2, 3, 4, with more on the way (I’ll update the links here later). Update: Don’t forget to check out Peter Falkingham’s (2012) paper in PE on making photogrammetric models with free software.

Armed with that model, it should be possible to produce a perspective-free lateral view image of the vertebra, to which all of the previous photos can be compared. I can’t use CT data because this vertebra has never been CTed; it’s too big to fit through a medical CT scanner, and probably too fragile to be packed up and shipped to an industrial CT machine like they used on Sue (not to mention that would require a significant chunk of money, which is probably not worth spending on a problem that can be solved in other ways).

So, photogrammetry to the rescue, or am I just deluding myself? Let me know what you think in the comments.

Finally, I should mention that the idea of superseding photographs with 3D photogrammetric models is not original. I got religion last week while I was having beers with Martin Sander and he was showing me some of the models he’s made. He said that going forward, he was going to forbid his students to illustrate their specimens only with photographs; as far as he was concerned, now that 3D models could be cheaply and easily produced by just about everyone, they should be the new standard. Inspiring stuff–now I must go do likewise.


22 Responses to “Sauroposeidon in perspective–too much perspective”

Illuminating stuff, Matt. The camera may never lie but it certainly can be misleading. Mike T.’s photo looks like it was taken from a position that was more neural spine-wards (and a little to the right) of Mike C.’s.

However, even if both cameras were in the exact same position and orientation, you might still have different amounts of distortion introduced by different lenses (think fish-eye as an extreme example). Even our own eyes distort images – lines that are straight, such as railway tracks receding into the distance, are curved on the imaginary flat plane of our vision.

Very good post! Ahh… photos! I’ve spent four years actively studying how to minimize distortion (eliminating it completely is impossible). My own formula is to use a normal lens (50/55 mm) on a D/SLR camera and use only the inner 60% of the frame (which may mean shooting from a good distance).
I also take “side photos” (moving the camera sideways in the same plane) to check for parallax error afterwards.
Poor lighting, incorrect use of the standard views, and missing scale bars are among the most common and devastating mistakes in technical photography.
As an illustrator, a bad photo is the worst thing to work with… and I’ve seen the “outline paradox” many, many times! Hahaha

Another, and perhaps more important, area where surface models excel is that you can remove coloration on the original specimen that washes out relevant details… I bet this is the case for the example vertebra of Sauroposeidon. How many fossae and foramina just don’t show up well in the photos above? With a good photogrammetric reconstruction, you might end up with something that looks like a really nice shaded monocolor rendering, showing the morphology of this element much more clearly than the photos do. We did this for Dahalokely, and I am now a firm believer in figuring specimens as color-free surfaces when practical. (Note that some specimens, especially those with complex sutures or preserved as “roadkill”, may still be best shown through other methods.)

:-) Here’s something, though: I wonder how many coelurosaur workers get far enough back from their specimens to take photos that aren’t secretly horked by perspective? A short (I promise) detour into astronomy may be informative.

The features one can see on the moon vary between moonrise and moonset, because an Earthbound observer moves thousands of miles east during that span. The Earth’s diameter is just under 8000 miles at the equator, and the moon is roughly 240,000 miles away, so 30 times Earth’s diameter. And there is still noticeable parallax–several degrees of longitude! So if you’re working on a vertebra 2 inches long, and your camera is only 5 feet back from the specimen, you’re still getting Earth-to-moon levels of perspective distortion. For the 1.4-meter Sauroposeidon vertebra, you’d have to get more than 42 meters (138 feet) above the vert to drop the distortion any lower.
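The back-of-the-envelope numbers in that comparison are easy to check. Here is a small sketch of the ratios quoted above, using the units given there; the function names (`distance_ratio`, `subtended_angle_deg`) are invented for illustration:

```python
import math

def distance_ratio(distance, size):
    """Camera-to-subject distance expressed in multiples of the subject's size."""
    return distance / size

def subtended_angle_deg(size, distance):
    """Angle the subject subtends at the camera, in degrees --
    a rough measure of how much perspective convergence to expect."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Earth-to-moon geometry (miles): the moon sits ~30 Earth-diameters away.
moon_ratio = distance_ratio(240_000, 8_000)        # 30.0

# A 2-inch vertebra shot from 5 feet (60 inches) has the same geometry.
small_vert_ratio = distance_ratio(60, 2)           # 30.0

# To match that ratio for the 1.4 m Sauroposeidon vertebra, the camera
# would have to be 30 x 1.4 m away:
required_height_m = 30 * 1.4                       # 42 m, about 138 ft

print(moon_ratio, small_vert_ratio, required_height_m)
print(round(subtended_angle_deg(2, 60), 2))        # ~1.91 degrees
```

So the 2-inch-vertebra-at-5-feet setup really does reproduce the Earth–moon distance ratio exactly, and the subtended angle works out to roughly the couple of degrees of parallax mentioned above.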

Andy–

That’s interesting. After looking at Martin’s models, I figured that one of the advantages of photogrammetry is that you could keep the original color, unlike models generated by laser-scanning or CT. Yes, color sometimes obscures details, but sometimes it reveals them, too. On balance, I think I’d prefer to have a choice about whether to keep the color or not.

Since I’m thinking about astronomy at the moment–I wonder what you might spot by doing a blink comparison of full color and color-free renders of the same model. Sounds like an interesting experiment, anyway.

Keep in mind that I have zero experience here, and as someone wise once said, the person with an experience is not at the mercy of the person with an opinion. I’d be interested to hear your further thoughts.

The key thing we need is publishing the 3D models with and without textures, and then looking at them with a viewer that allows rotating and changing light direction. A 3D COFORM project developed a browser plug-in for that: http://cenobium.isti.cnr.it/
Direct link to the site where you can select files to view in the lightbox: http://cenobium.isti.cnr.it/monreale/capitals

First time commenting here, though I’ve been lurking for a while. I’m not a paleontologist but I do know a thing or two about optics (I’m an astronomer, in fact :-) At NASA we’re using photogrammetry to measure the alignment of optics for the 6.5-meter James Webb Space Telescope to a precision of tens of microns. Photogrammetry can *definitely* solve your problem here. For that level of precision we use several carefully calibrated cameras mounted on mechanized booms, but depending on the level of precision you’re aiming for, a more manual approach should suffice.

It *is* possible to eliminate perspective distortion optically, using a design called a telecentric lens. However this comes at a price: image size becomes fixed and no longer changes with distance to the target. You can’t just back up to take pictures of a larger object… Telecentric lenses are commonly used for machine vision applications, for instance automated parts inspection in factories, since they eliminate perspective distortion. But these are typically looking at small parts only tens of cm from the lens. Unfortunately you’d need a lens larger than the vertebra itself to take such a picture for Sauroposeidon!
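The trade-off described above can be sketched with a simple thin-lens/pinhole approximation. This is only an illustrative toy model, not real optical design; the function names and the 0.1× magnification are invented for the example:

```python
def pinhole_image_size(object_size, object_distance, focal_length):
    """Ordinary (entocentric) lens: image size shrinks as the object
    moves farther away -- this is perspective scaling."""
    return object_size * focal_length / object_distance

def telecentric_image_size(object_size, magnification):
    """Object-space telecentric lens: image size is fixed by the lens's
    magnification, independent of distance -- which is also why the front
    element must be at least as wide as the object itself."""
    return object_size * magnification

# A 100 mm object through a 50 mm ordinary lens (pinhole model):
near = pinhole_image_size(100, 500, 50)    # 10.0 mm image at 0.5 m
far  = pinhole_image_size(100, 1000, 50)   #  5.0 mm image at 1.0 m

# The same object through a 0.1x telecentric lens is ~10 mm at ANY distance:
tele = telecentric_image_size(100, 0.1)
```

In the ordinary-lens case, doubling the distance halves the image size, so parts of a deep specimen at different depths render at different scales; in the telecentric case the scale is constant, which is exactly why such lenses eliminate perspective distortion but cannot simply be "backed up" for a larger subject.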

Fascinating stuff, Marshall. Thanks for chipping in, I hope we’ll be hearing from you again. Interesting how many regular photographs can get you the effect (and more) of a single special one. Seems like it ought to be a specific example of a more general truth.

One caution on all-digital, though: just because it’s a digital image doesn’t mean it is distortion-free. In playing with MeshLab, I noted that it has perspective distortion built into the software; there is a way to get rid of it (shift+mouse wheel until it says “orth” in the lower left corner), but it is something I only recently noticed as problematic. Other visualization packages have similar…conveniences.

And as for color, I agree that it is best to keep it, at least initially; throwing away color is throwing away data. Whether or not color is needed to figure a given specimen is certainly a case-by-case decision.

Photogrammetry is definitely the way to go, but you’ll get even more from it if you bite the bullet and learn to use a 3D programme for lighting, rendering, etc. Blender is open-source, free, and very capable, with a feature set rivalling some of the big names in 3D modelling and animation software. You will only need a fraction of the available features to produce high-quality results.

In the case of lens distortion, some software will correct for it automatically (it can identify the lens used from the EXIF data embedded in the photo). Photoshop can also be used to remove distortion manually at a pinch, but work on copies of your photos.

In the case of figuring specimens, I figure ichnites by rendering a textured version of the mesh; an untextured version with low-angle lighting, a flat grey material with zero specularity, and ambient occlusion to enhance detail; plus a colour elevation model and/or contour model (although these might not apply to a 360° model).

Andy made a good point: you need to set your projection mode to orthographic, not perspective! Rhinoceros does that automatically for all axial views, but the Perspective viewport is (logically) in perspective projection mode unless you change it.

I’m gonna have to back Heinrich up on this, but also propose an alternate — more expensive — but ultimately superior method of photographing large bones.

First, though: one way to do this is to provide explicit range and angle information using markers. You record the positions of the camera and of the object (as you have done above), which gives you perspective and angle. Then you take the stereoscopic method of helping “remove” distortion and simply extend it a little further: not two, not three, but many angles, capturing a large range of positions around the bone in multiple planes.

Or you can use a hand-held CT scanner or an industrial X-ray/CT machine. The digital process removes all distortion, so every shot is as though there were no perspective. Perspective can be added back if needed, but one can create a “perfect” image, so that the relative depth, length, or width at any given section can always be measured directly in the digital specimen.