mattdm's statement that "blur is largely reversible if you do the exact same thing in reverse" raised this question in my mind. Rotation is a geometric transformation of the image that consists of a spatial transformation of the coordinates and an intensity interpolation. If the rotation is not a multiple of 90 degrees, the interpolation algorithm plays a crucial role.

In such cases, if we use an averaging interpolation algorithm (e.g., bicubic), the operation is lossy*. But can we use a different approach (e.g., nearest-neighbor interpolation) instead and have our rotated image "un-rotated"?

(*) This is only my feeling (I still cannot support it with a mathematical proof): since we cannot know for sure which pixel contributed which value, we cannot reverse the averaging. But I'm not confident that we cannot use probabilistic methods to accurately estimate the original values.

While I lack the required math skills, I have performed some tests myself (with GIMP), but after anti-rotating, the images differ:

Test 1

Figure 1 - Source image (256x256)

Figure 2 - From left to right: a) image rotated 9.5 degrees clockwise; b) image rotated again 9.5 degrees anti-clockwise; and c) the difference between the images. For these operations I used nearest-neighbor interpolation. The images are downscaled after the operations to better fit this website's layout.

Figure 3 - From left to right: a) image rotated 9.5 degrees clockwise; b) image rotated again 9.5 degrees anti-clockwise; and c) the difference between the images. For these operations I used bicubic interpolation. The images are downscaled after the operations to better fit this website's layout.

Test 2

Following @unapiedra's suggestion, I did a simpler test: rotating a 2x2 matrix. This case is uninteresting because, depending on the angle, either all cells are rotated by the same amount or no cell is rotated. That is, the rotation is always lossless.

In this case, I'm upscaling by a 6x factor. My reasoning for choosing this factor (unfortunately incorrect, as I have seen from a counter-example):

A pixel rotated by 30 degrees has corner coordinates, bottom-left to top-right, of [0,0]-[0.3660, 1.3660]. That is, the shortest projected side is 0.366 pixels long. The sampling theorem requires that we sample at double that rate.
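The corner coordinates quoted above can be checked directly; a quick sketch (the 30-degree angle and the unit pixel corner (1, 1) are taken from the reasoning above):

```python
import math

# Rotate the unit pixel's corner (1, 1) by 30 degrees about the origin;
# this reproduces the [0.3660, 1.3660] corner quoted above.
theta = math.radians(30)
xr = 1.0 * math.cos(theta) - 1.0 * math.sin(theta)
yr = 1.0 * math.sin(theta) + 1.0 * math.cos(theta)
print(f"({xr:.4f}, {yr:.4f})")  # (0.3660, 1.3660)
```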

5 Answers

Image rotation is a lossy operation, but rotating an image once then rotating it back likely loses very little detail, especially compared to typical JPEG compression.

Image rotation works like this mathematically:

A grey-level image consists of luminance values L(x,y) at integer pixel positions x, y. First a real-argument function f(x,y) is constructed that reproduces the values L(x,y) at those same positions, but also gives values at non-integer x, y. Hence it interpolates between integer positions (this is where interpolation methods come in; there are several possible choices for f(x,y)).

Then the rotation is applied in the continuous domain: a rotated function g(x,y) = f(R^(-1)(x,y)) is defined, where R is the rotation. Finally, the g(x,y) values are sampled at integer x, y positions again to create the rotated image.
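A minimal sketch of this pipeline, assuming SciPy is available; `rotate_image` is a hypothetical helper, and bilinear interpolation (`order=1`) stands in for the general choice of f:

```python
import numpy as np
from scipy import ndimage

def rotate_image(L, theta_deg):
    """Rotate a grey-level image about its center by theta_deg."""
    theta = np.radians(theta_deg)
    h, w = L.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.indices((h, w)).astype(float)
    # Spatial transformation: for each output pixel (x, y), find the
    # (generally non-integer) source position it came from.
    xs = np.cos(theta) * (x - cx) + np.sin(theta) * (y - cy) + cx
    ys = -np.sin(theta) * (x - cx) + np.cos(theta) * (y - cy) + cy
    # Intensity interpolation: evaluate f at those positions
    # (order=1 is bilinear), sampled back on the integer grid.
    return ndimage.map_coordinates(L, [ys, xs], order=1)

img = np.arange(64.0).reshape(8, 8)
print(rotate_image(img, 9.5).shape)  # same grid as the input
```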

At this point you can ask: why is this lossy? Can't we just reverse all these calculations to reconstruct the original unrotated L_(x,y), if we know what interpolation method was used to construct f?

Theoretically that is possible, but this is not what happens when you do a rotation of the same angle in the opposite direction. Instead of reversing the original rotation operations precisely, an opposite sign rotation is performed using the same sequence of operations that the initial rotation used. The back-rotated image will not be precisely the same as the original.

Furthermore, if the values of g(x,y) were rounded to a low precision (8-bit, 0..255), the information loss is even greater.

Lots of accumulated rotations will effectively blur the image. Here's an example of rotating a 500 by 500 pixel Lena image 30 times by 12 degrees, amounting to a full 360 degree rotation:
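The accumulated blur is easy to reproduce; a sketch using SciPy, with a synthetic checkerboard standing in for the Lena image (the 30 rotations of 12 degrees are from the experiment above):

```python
import numpy as np
from scipy import ndimage

# A sharp synthetic test image (diagonal stripes), standing in
# for the 500x500 Lena image used in the original experiment.
img = ((np.indices((100, 100)).sum(axis=0) % 10) < 5).astype(float) * 255

rotated = img.copy()
for _ in range(30):  # 30 x 12 degrees = a full 360-degree turn
    rotated = ndimage.rotate(rotated, 12, reshape=False, order=3)

# The net rotation is zero, yet the image no longer matches:
err = np.abs(img - rotated).mean()
print(f"mean absolute difference after 30 rotations: {err:.1f}")
```

Each rotation resamples the previous result, so the interpolation errors accumulate into visible blur.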

There's another reason why blur-type operations are lossy. One might naively think that mathematically reversing a blur should give us back the original unblurred image. This is theoretically true as long as we work with infinite precision. The procedure is called deconvolution, and it is used in practice to sharpen images that are blurry due to motion or optical causes.

But there is a catch: blurring is insensitive to small changes in the source image. If you blur two similar images, you get even more similar results. De-blurring is very sensitive to small changes: if you de-blur two only slightly different images, you get two very different results. We're usually not working at high precision (8-bit is actually quite low), and the roundoff errors get magnified when attempting to reverse a blur.
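This sensitivity can be demonstrated in one dimension; a sketch assuming NumPy and a periodic (FFT-based) blur, so that exact deconvolution is simply division in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(256)

# A 5-tap box blur, zero-padded; blurring = multiplication by its FFT.
kernel = np.zeros(256)
kernel[:5] = 1 / 5
K = np.fft.fft(kernel)

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))

# With full precision, dividing by K undoes the blur almost exactly...
exact = np.real(np.fft.ifft(np.fft.fft(blurred) / K))

# ...but add tiny perturbations (think 8-bit rounding) and the
# near-zero frequencies of K amplify them enormously.
noisy = blurred + rng.normal(scale=1 / 255, size=256)
recovered = np.real(np.fft.ifft(np.fft.fft(noisy) / K))

print("error without noise:", np.abs(signal - exact).max())
print("error with noise:   ", np.abs(signal - recovered).max())
```

The blur kernel nearly cancels some frequencies, so dividing by it multiplies any noise at those frequencies by a huge factor: exactly the amplification of roundoff errors described above.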

This is the reason why blurring is "irreversible" and why it loses detail.

@Michael It's an important point about why these types of operations are not reversible, even though mathematically they look reversible. My point is that it's not just the existence of roundoff errors; it's that these errors often become amplified when trying to reverse the operations.
– Szabolcs, Aug 23 '13 at 21:24

It's also because the image size is finite, so important data gets lost (in the convolution case).
– Michael Nielsen, Aug 24 '13 at 7:46

"Instead of reversing the original rotation operations precisely, an opposite sign rotation is performed using the same sequence of operations that the initial rotation used" - I think this is the answer I was looking for. I'm not accepting your answer yet because I'm playing some maths (trying to find a scaling factor as I have mentioned in some other comments).
– Alberto, Aug 26 '13 at 16:30

I'm giving up with the math reasoning as I don't get anywhere. I'm accepting your answer because it's the most detailed and convincing one.
– Alberto, Aug 28 '13 at 8:46

Rotation isn't lossy in a general context, but in combination with images it is.

The reason is the binning of pixel values. If an image is rotated, most pixels' locations will not align perfectly with the image's grid structure. Interpolation is then needed to decide where to place each pixel's value.

In nearest neighbour interpolation the pixel value is moved to the closest grid cell (pixel location).

In bicubic interpolation the pixel value affects all grid cells in the vicinity; in essence, the value is distributed among the nearby pixels.
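The two behaviours can be compared directly; a sketch assuming SciPy, where `order=0` is nearest-neighbour interpolation and `order=3` is a cubic spline (the 9.5-degree angle is borrowed from the question):

```python
import numpy as np
from scipy import ndimage

img = ((np.indices((64, 64)).sum(axis=0) % 8) < 4).astype(float)

def round_trip(im, order):
    """Rotate forward and back by the same angle with one interpolator."""
    fwd = ndimage.rotate(im, 9.5, reshape=False, order=order)
    return ndimage.rotate(fwd, -9.5, reshape=False, order=order)

nn_err = np.abs(img - round_trip(img, 0)).mean()
cubic_err = np.abs(img - round_trip(img, 3)).mean()
print(f"nearest-neighbour: {nn_err:.4f}, cubic: {cubic_err:.4f}")
```

Neither round trip is exact: nearest neighbour avoids averaging, but it still moves values into the wrong bins, so the back-rotation does not restore them.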

Nice point that rotation isn't lossy in itself, but rotating from one grid orientation of pixels to another is. While most will probably infer that from the question, it is still worth highlighting.
– AJ Henderson♦, Aug 23 '13 at 16:39

Are you implying most will not read the first sentence? "... but in combination with images it is [lossy]".
– Unapiedra, Aug 23 '13 at 19:09

The more detailed answer by Szabolcs will probably be accepted and is a bit more specific.
– Unapiedra, Aug 23 '13 at 19:14


The actual loss of data doesn't occur until you re-rasterize the rotated image. If your image editing software stored intermediate work internally as an image and rotation value, but only rasterized to display it on the screen or when explicitly saved in a bitmap based format, then you could give a second rotation of equal magnitude but opposite in direction and undo the rotation without loss. (Caveat, I don't know how Photoshop/GIMP/etc projects store your data.)
– Dan Is Fiddling By Firelight, Aug 23 '13 at 19:18

@Unapiedra - no, I was giving you a +1 for pointing it out.
– AJ Henderson♦, Aug 23 '13 at 19:36

If you increase the resolution sufficiently, I believe you should be able to make it lossless, but the intermediate image would have to be quite a bit larger. You would effectively need to "fill out" enough pixels that the error is reduced below the rounding error of reversing the operation and returning to the original resolution. At that point, though, you might as well simply store the original image in a hidden layer, since you are adding substantially more data than a copy of the image would take.

So, practically speaking, rotation has to be lossy: it averages values to create new points, and upscaling enough to avoid that would generally require more additional storage than simply keeping another copy of the image.

Your idea is correct but can only work for a finite number of possible rotations. Imagine you set your intermediate resolution to get lossless rotation for 3.22 degrees. Now rotate by 3.23 degrees; your result would show losses. This holds in general because there are infinitely many rotation angles but always a finite number of pixels.
– Unapiedra, Aug 23 '13 at 15:42

@Unapiedra - yeah, honestly, it just occurred to me that it should be possible to preserve information in at least some additional cases by increasing the resolution and then decreasing it after rotating back. Either way, since it is impractical due to the amount of additional information that would need to be stored, I didn't bother with a proof; in practical terms there would be no reason to ever do it rather than storing the unrotated image in addition to the rotated one.
– AJ Henderson♦, Aug 23 '13 at 16:37

"If you increase the resolution sufficiently[...]" - This is exactly my intuition: it should be possible to find a zoom factor for a given angle that allows a lossless "rotation and anti-rotation" operation. But how do we calculate this factor? I fully agree with the observation "there would be no reason...", but I'm now studying this topic and this question is more of a theoretical one (as in, "in medical applications, would it be possible to losslessly rotate an image to further transform it", and so on).
– Alberto, Aug 26 '13 at 15:44


@Alberto I'd suggest you try with a 2x2-pixel image and some odd angle (15.3 degrees) and upscale until you find a solution. You can also solve it theoretically (I think), but I don't have the time.
– Unapiedra, Aug 26 '13 at 19:30

AJ's theory about increasing the resolution is correct. The issue is the size of the cells (pixels) relative to the details we are able to make out: the larger the cells, the more we have to shove results into bins. Here I rotate with bicubic interpolation by +10 and -10 degrees and compare with abs(I1-I2). Then I do a 10x Lanczos resize, rotate with bicubic interpolation by +10 and -10 degrees, resize back to the original size, and compare with abs(I1-I2).
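A sketch of the same experiment, under some substitutions: SciPy instead of the original tools, a cubic spline in place of both bicubic and Lanczos, and a synthetic test image:

```python
import numpy as np
from scipy import ndimage

img = ((np.indices((32, 32)).sum(axis=0) % 8) < 4).astype(float)

def round_trip(im, angle=10, order=3):
    """Rotate by +angle then -angle with spline interpolation."""
    fwd = ndimage.rotate(im, angle, reshape=False, order=order)
    return ndimage.rotate(fwd, -angle, reshape=False, order=order)

# Rotate +10/-10 at the native resolution:
direct_err = np.abs(img - round_trip(img)).mean()

# Resize x10, rotate +10/-10, resize back to the original size:
big = ndimage.zoom(img, 10, order=3)
back = ndimage.zoom(round_trip(big), 1 / 10, order=3)
upscaled_err = np.abs(img - back).mean()

print(f"direct: {direct_err:.4f}, via 10x upscale: {upscaled_err:.4f}")
```

At the higher resolution the interpolation errors are confined to a small fraction of each original pixel, so the round trip through the upscaled image should show a smaller difference.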

So two questions: 1) for a given rotation angle, is it possible to calculate a resizing factor? And 2) isn't interpolation going to introduce errors? That is, wouldn't it be better to resize and rotate with nearest neighbor?
– Alberto, Aug 26 '13 at 16:14

"Note that "lossy" does not always equate to "noticeably lossy"." and we've come back full circle to my original point - rotating an image does lose information, but generally not enough to be noticeable.
– Matt Grum, Aug 23 '13 at 22:32