This past summer I began using my TSE-24mm II. I took this photo of the Mir-i Arab Madrassa and Kalon Minaret in Bukhara, Uzbekistan, by shifting the lens up. I don't recall any tilt - if there was, it was purely by accident. Unfortunately Bukhara is a long way from Minneapolis and it's not easy to go back and try the same scene again with different settings!

Could I have avoided the distortion by a different in-camera technique?

Is the best way to fix it to clone the tower in PS CS6 and do some transformations on that, or is there a technique that will correct everything in the whole image?

BTW, the Guide to Lightroom 4 update was very useful because it made me aware of the new Lightroom/Photoshop HDR processing workflow, which was helpful in this image.

Thanks for the hints. I ended up warping a clone of the tower for a couple of reasons. First, almost the entire tower was distorted, nearly down to the base. Second, maybe I just suck at painting skies, but I found it easier to take a sky from a pre-HDR image, paste it over the top, add a mask, and then clone in the missing pieces. Overall it was still far more time-consuming than I would prefer. Having said that, the minaret is about 46 m (150 ft) tall, and I guess I should consider myself lucky to get a usable image in the end given my lack of experience!

As for a different technique, in situations like that I would personally feel more comfortable with a very high resolution stitch, in preference to a classic lens-shift approach. PTGui and other stitching programs offer a wide range of projections for a variety of oddball cases, with the option to Scheimpflug to one's heart's content right there on the screen. And even lacking the right projection, with an abundance of pixels in the original every manner of ad hoc PS distortion can be applied with minimal IQ cost. Yes, it's more work, but you can get the shot every time.

And I wonder if some appropriate projection could be successfully applied to your existing original. I fiddled with it on the PTGui preview screen for a couple minutes, couldn't really get it, but there is probably somebody here who could!

At any rate, congratulations on a good save! Perfectly good image in every way.

A lens profile will not handle it, as the problem is not one of the lens but of perspective. It is not possible to render such surfaces correctly in a rectilinear projection.

Hi Erik,

You and Bill are correct that the issue has to do with anamorphic perspective distortion: it stems from the projection of a 3-dimensional subject onto a flat plane (our sensor) at an angle. However, given that flat-plane projection, the real issue is the viewing position of that projection. The image will look undistorted when viewed from the correct (proportionally scaled) center of projection(!). It's just like lettering on roads that looks undistorted when approaching it, but very stretched when we're looking from too close or from the side.

Correcting the situation geometrically is a two-fold process, and I've added an overview screen-capture sample from my pano application as an attachment.

First, there is a slight rotation and non-level optical axis shooting issue.

Second the vertical shift needs to be compensated for to restore correct heights.

Then the resulting image (see second attachment) should be viewed from the correct projection viewpoint, which is now at the horizon line near the bottom edge of the image, and from a distance of (and here is the real problem) 3.3 mm! For anyone myopic enough to pull that off with one eye, the perspective will look perfectly normal, just as it did in real life. When the image is viewed from further away, it will look distorted and stretched, because the wrong viewing point is used.

The only geometrically correct solution is to enlarge the image proportionally, enough to allow viewing from a more comfortable distance. So for viewing at a normal reading distance (say approx. 12 inches), it should be magnified to 10x its size on a display, or some 30x when printed at 300 or 360 PPI. The original, without the subsequent pano projection correction for the shift, should be viewed from the upper edge, which is not common because it might require a ladder at decent output sizes on a wall, while the pano-corrected version should be viewed from the level of the natural horizon in the image.

To prevent having to jump through these hoops, one can fudge a bit and apply some warp distortion.

As for the rectilinear projection math involved, a pano stitcher can quite easily be set up to make sure the image is squared correctly, by adding a few vertical-line control points. When shift was used, it should also be entered manually as a vertical offset. A 12mm shift on a sensor that is 24mm high will require a vertical shift of the image center by 12/24 = 50% of the number of vertical pixels. That will put the apparent pano horizon near the top edge of the image, and after the pano stitcher has done its corrections the horizon is at its natural image position again.
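For what it's worth, that shift-to-offset arithmetic can be sketched in a few lines of Python (the function names and the 4000-pixel image height are my own, just for illustration):

```python
# Convert a lens shift in mm into the vertical offset a pano stitcher
# needs (the 'e' parameter), as a fraction of image height and in pixels.

def shift_fraction(shift_mm, sensor_height_mm=24.0):
    """Fraction of the image height that the optical center was shifted."""
    return shift_mm / sensor_height_mm

def shift_pixels(shift_mm, image_height_px, sensor_height_mm=24.0):
    """The same offset expressed in pixels for a given image height."""
    return shift_fraction(shift_mm, sensor_height_mm) * image_height_px

# A full 12 mm shift on a 24 mm-high sensor:
print(shift_fraction(12))        # 0.5, i.e. 50% of the vertical pixels
print(shift_pixels(12, 4000))    # 2000.0 pixels on a 4000 px-tall image
```

The same ratio works for horizontal shift with the 36 mm sensor width substituted for the height.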

Since in this case the original sensor image was 36x24mm, and the image was projected by the lens from a 24mm distance (the focal length at infinity focus), the output viewing distance scales proportionally. To view from 10x 24mm distance, the sensor image needs to be enlarged 10x (= 360x240mm); to view it from, say, 1 metre distance, it should be magnified 1000/24 = 41.7x (1500x1000mm output size). These numbers are approximate; formally one should use the exit pupil distance instead of the focal length, but it's close enough to get an idea of the implications.
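That scaling is simple enough to put in a small Python sketch (the names are mine, and it uses the focal length rather than the exit pupil distance, per the approximation above):

```python
# Magnification needed so the rectilinear perspective looks natural
# from a chosen viewing distance (thin-lens approximation).

def required_magnification(viewing_distance_mm, focal_length_mm=24.0):
    """How much the sensor image must be enlarged for a given viewing distance."""
    return viewing_distance_mm / focal_length_mm

def output_size_mm(sensor_w_mm, sensor_h_mm, viewing_distance_mm,
                   focal_length_mm=24.0):
    """Output display/print size that puts the viewer at the correct distance."""
    m = required_magnification(viewing_distance_mm, focal_length_mm)
    return sensor_w_mm * m, sensor_h_mm * m

# Viewing a 36x24 mm sensor image (24 mm lens) from 1 metre:
m = required_magnification(1000)
w, h = output_size_mm(36, 24, 1000)
print(round(m, 1))           # 41.7
print(round(w), round(h))    # 1500 1000
```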

Any (anticipated) deviation from the 'correct' viewing position will seemingly introduce a distortion which can be compensated for by introducing a warp distortion.

Viewing from the 'wrong' position/distance can also be used creatively, e.g. to 'enhance' a wide-angle effect by viewing the image from 'too far away', or to get a compressed telephoto effect by viewing it from 'too close by'.

Quote

I took this photo of the Mir-i Arab Madrassa and Kalon Minaret in Bukhara, Uzbekistan, by shifting the lens up. I don't recall any tilt - if there was, it was purely by accident. Unfortunately Bukhara is a long way from Minneapolis and it's not easy to go back and try the same scene again with different settings!

Could I have avoided the distortion by a different in-camera technique?

It's not easy, because you were so close and the tower is so high. I don't know if it would have been possible to shoot from a bit further away; that would have helped by not requiring as much shift to get the vertical field of view you needed. Alternatively, and that's what I do almost routinely, one can use the camera in portrait orientation, which reduces the need for extreme shift, and get the horizontal FOV by shooting a horizontal row of 3 images for later stitching in a panorama stitcher. That pano stitcher would also automatically take care of any residual refinements you may need to get the verticals perfectly squared and rotations eliminated. For a lesser lens it would also automatically remove lens distortions, but those are already virtually non-existent with the TS-E 24mm II.

Quote

Is the best way to fix it to clone the tower in PS CS6 and do some transformations on that, or is there a technique that will correct everything in the whole image?

The 'best' way requires a bit of work, as shown in the earlier posts, but a bit of warp in the top half may conceal the most displeasing distortions well enough (as shown in Bill's example) for the intended effect.

Quote

BTW, the Guide to Lightroom 4 update was very useful because it made me aware of the new Lightroom/Photoshop HDR processing workflow, which was helpful in this image.

Yes, these harsh light situations require multiple techniques to get more pleasing results. The floating-point TIFF HDR approach helps to not overcomplicate the workflow, although it's still not as good a solution as a dedicated HDR tonemapping application such as SNS-HDR; it's getting closer, but not quite there yet.

I thought maybe whatever algorithms are used to create the lens profiles could be engineered to include the mathematics behind the Free Transform > Warp feature in Photoshop that bill t. so excellently demonstrated with his posted edits.

In fact he nailed it IMO. Not only did he achieve near perfect vertical correction but also eliminated the elliptical distortion of the tower on the right.

From this I have to say emphatically that Free Transform is probably one of the most important tools Photoshop has to offer, so I have to keep the program around even though I now rarely use it and do all my raw edits in ACR. I just wish Free Transform-like tools were engineered into the lens profiles so all edits could be done in the converter.

Is it possible to provide a reference to some of the math that you went through in your correction? For example, how did you arrive at a 12 mm vertical shift?

Hi Brad,

That 12mm shift is the maximum amount that the TS-E allows (and should position the horizon near the lower edge), and I had to assume that that is what was used (probably a fraction less, but not by much). That assumption is based on the image composition, which pretty much uses all the room at the top and little foreground, and on the size of the structures (compared to the sitting person at the right-hand side). In practice it's best to make a note of the actual setting as indicated on the lens (shift is in millimetres, tilt is in degrees).

When using multiple images in a pano stitcher, the stitcher may be able to figure out the required offset amount automatically. However, since that additional parameter (traditionally called 'e') complicates the automatic determination of the other parameters, I prefer to set it by hand and let the automatic control-point optimization do the rest.

It's pretty easy to verify for an arbitrary shift lens. Just take one image centered and another with a certain amount of shift applied, then measure the number of pixels that the image shifted on the sensor. In my experience with the TS-E, that shift is pretty much the same on the sensor as on the lens indication. The ratio of pixels shifted to the total number of pixels in that dimension is the same as the ratio of the shift in mm to the sensor dimension in mm.
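That verification is the same proportion rearranged; a minimal Python sketch (function name and the example pixel counts are mine):

```python
# Recover the physical lens shift in mm from a measured pixel offset
# between a centered frame and a shifted frame on the same sensor.

def measured_shift_mm(pixel_offset, total_pixels, sensor_dim_mm=24.0):
    """pixel_offset / total_pixels == shift_mm / sensor_dim_mm, solved for shift_mm."""
    return pixel_offset / total_pixels * sensor_dim_mm

# A 2000 px offset on a 4000 px-tall image from a 24 mm-high sensor:
print(measured_shift_mm(2000, 4000))   # 12.0
```

If the result agrees with the millimetre markings on the lens, the marked value can be trusted as the stitcher's offset input.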

The benefit of a rectilinear projection is that everything scales proportionally. Therefore the skewed projection pyramid (with the sensor as its base) of the lens at the image forming side of the optics, can be extended proportionally on the subject side of the optics. When we use the thin lens model of optics, everything stays simple.

All projection proportions scale by the same amount, so focal distance versus object/output distance has the same proportions as the sensor size versus object/output size.

I would like to thank everybody for sharing their experience and insights – it's involved a lot of reading for me, especially because I've not tried panoramas in any kind of disciplined way before. It's been too many years since I last studied math, but nonetheless even the Zeiss document was worthwhile.

A few questions for Bart -

where did you get the two attachments, and what do you use to limit horizontal movement on your nodal slide?

The screen grab was from PTAssembler, a Windows-only application that was developed by a photographer to fulfil his own needs for gigapixel images. Max Lyons has pioneered many cool features, like the first gigapixel-image capability with relatively small computer-memory requirements, stitching plus focus stacking and exposure blending in one operation, and camera-position optimization, and added those to his own stitching engine. Other pano stitchers have copied some of those features, but none of them yet offer so many choices in projections (including a hybrid that combines several projections in one set of images). It's not perfect, and it depends on a few plugins from others which can cause issues, but it has saved my projects where others have failed, so I keep returning to it. It's a lean, mean stitching machine with tremendous capabilities.

A free stitcher like Hugin has also matured greatly over the years, and PTGui shares the same roots as PTAssembler and has become a leading choice. AutoPano Giga has improved over the years too, and the most recent version 3 offers a lot of automation that seems to work reasonably well. These programs are also available for Mac users.

Thank you so much again - I can only say (again) that I have learned so much from you all.

I am glad I took the necessary "source" files in the field, even though at the time I knew I lacked the proper software tools with which to process them. For instance, I had occasionally tried Photomatix, but it almost always gave me results I was unhappy with. But my initial experiments with SNS-HDR just now indicate it's able to give me the kind of look I prefer, such as in this image of the Kalon Mosque courtyard.

Quote

I am glad I took the necessary "source" files in the field even though at the time I knew I lacked the proper software tools with which to process them.

Yes, that's the right approach, and it just takes some additional storage space to create that peace of mind when reshooting is not an option. Even when forced to shoot handheld, just do it, because who knows how far you can still get in postprocessing. Of course, stacking the odds in one's favor by using good technique is better, but we are not always that lucky.

Quote

For instance, I had occasionally tried Photomatix, but it almost always gave me results I was unhappy with. But my initial experiments with SNS-HDR just now indicate it's able to give me the kind of look I prefer, such as in this image of the Kalon Mosque courtyard.

That looks very natural, despite the harsh light conditions. SNS-HDR is also my preferred tool for HDR tonemapping.

Shoot wider with a 17mm TS-E lens and crop down (and lose image quality), or shoot with a 24mm lens on a medium-format technical camera and crop down (and retain more image quality)? Stitching would be best, I think, when practical.