We all know the score when it comes to surveillance in TV & Movies these days.

Scene: The killer has all but got away with it until some upstart agent spots a smudge in a reflection…
Agent: “Can you guys zoom in on that region”
Tech: “Sure”
Agent: “Now enhance it a little”
Tech: “How’s that?”
Agent: “…a little bit more…bingo! We’ve got him!”

Hardware Configuration
My setup used a laptop screen to display images: the crumpled foil was placed on the keyboard, the screen tilted forwards, and the camera pointed at the foil, capturing the reflections of the images on the screen.

Calibration
The first stage in reconstructing the images is to profile the foil in terms of a reflectance map. To do this I display a black screen with a white rectangle at different known x-y positions. Because I’m in control of the white test square, I can capture a frame of the crumpled foil from the camera with the test square at each grid position on the laptop screen. This gives me, in this case, a 20×20 matrix of matrices. Each sub-matrix is a 1280×960 grayscale image: the frame captured from the camera that’s pointed at the crumpled foil. The 20×20 matrix indexes the position of the white rectangle on the laptop screen. This video shows the output from the camera as the white test square scans across the laptop screen.
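As a rough sketch of that calibration loop (not my original code): `captureFrame` below is a stand-in for grabbing a camera frame with the white square drawn at a given grid cell, and the toy camera dimensions are shrunk from the real 1280×960 so the structure is runnable without hardware. The data structure is just a grid of frames.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the calibration pass. captureFrame() is a stand-in for grabbing a
// grayscale frame from the camera with a white square drawn at grid cell
// (gx, gy); here it returns a blank frame so the structure runs without
// hardware. The real frames were 1280x960.
constexpr int GRID  = 20;   // calibration grid on the laptop screen
constexpr int CAM_W = 64;   // toy camera size for this sketch
constexpr int CAM_H = 48;

using Frame = std::vector<std::uint8_t>;  // CAM_W * CAM_H grayscale pixels

Frame captureFrame(int gx, int gy) {
    // Real version: draw the white test square at (gx, gy), wait for the
    // display to settle, then grab a frame of the foil from the camera.
    (void)gx; (void)gy;
    return Frame(CAM_W * CAM_H, 0);
}

// The 20x20 "matrix of matrices": one camera frame per screen grid position.
std::vector<std::vector<Frame>> calibrate() {
    std::vector<std::vector<Frame>> reflectance(GRID, std::vector<Frame>(GRID));
    for (int gy = 0; gy < GRID; ++gy)
        for (int gx = 0; gx < GRID; ++gx)
            reflectance[gy][gx] = captureFrame(gx, gy);
    return reflectance;
}
```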

After the calibration I now know which parts of the reflectance image from the foil correspond to which parts of the laptop screen. The theory is that if I display an image on the screen and capture the reflected image from the crumpled foil, I should be able to back-project this mess below to partially reconstruct the input image, but only to an accuracy of 20×20 pixels, as that’s the resolution used for calibration. If I use a finer calibration matrix, say 100×100, the white square on the laptop screen shrinks accordingly, so it no longer provides enough illumination to overcome the background light that leaks through the black regions of the screen. The black regions of an LCD are still illuminated from behind; they are not completely dark.
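To make the back-projection idea concrete, here is a toy self-contained simulation, not my actual code: the foil is modelled as a fixed scrambling that sends each camera pixel to exactly one screen cell, calibration records one camera frame per cell, and reconstruction weights a captured frame by each cell’s calibration frame. Real foil mixes light between cells and background light leaks in, so real results are far blurrier than this idealised model suggests.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Toy model (an assumption, not the real optics): each camera pixel reflects
// exactly one screen cell, so the foil is just a fixed scrambling of cells
// across the camera frame.
constexpr int GRID_N = 20;              // calibration resolution
constexpr int CELLS  = GRID_N * GRID_N;
constexpr int PIXELS = 128 * 96;        // toy camera; real frames were 1280x960

// "Foil": which screen cell each camera pixel sees (every cell covered).
std::vector<int> makeFoil(unsigned seed) {
    std::vector<int> foil(PIXELS);
    for (int p = 0; p < PIXELS; ++p) foil[p] = p % CELLS;
    std::shuffle(foil.begin(), foil.end(), std::mt19937(seed));
    return foil;
}

// Camera frame produced when the screen shows image `s` (CELLS values, 0..1).
std::vector<double> capture(const std::vector<int>& foil,
                            const std::vector<double>& s) {
    std::vector<double> frame(PIXELS);
    for (int p = 0; p < PIXELS; ++p) frame[p] = s[foil[p]];
    return frame;
}

// Back-projection: estimate each screen cell by weighting the captured frame
// with that cell's calibration frame and normalising.
std::vector<double> backProject(const std::vector<std::vector<double>>& calib,
                                const std::vector<double>& frame) {
    std::vector<double> out(CELLS, 0.0);
    for (int k = 0; k < CELLS; ++k) {
        double num = 0.0, den = 0.0;
        for (int p = 0; p < PIXELS; ++p) {
            num += frame[p] * calib[k][p];
            den += calib[k][p];
        }
        if (den > 0.0) out[k] = num / den;
    }
    return out;
}
```

Under this idealised model the back-projection recovers the 20×20 image essentially exactly; the real setup only approaches this because each foil facet blends several screen cells.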

Processing

Once calibrated we can process any image like this…

Isn’t it obvious what it is?

This foil-reflected image was created by displaying a grayscale image on the laptop screen. I used grayscale because it meant I was only dealing with single-channel images, which made things simpler while proving the concept. Below is the image I tested the processing with.

Note: This is not my input image, I’m not sure who owns it, I found it for free on a couple of different wallpaper sites so figured it was fair game.
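As an aside on the grayscale step: in practice OpenCV’s `cvtColor` does the single-channel collapse, but a hand-rolled sketch using the standard Rec. 601 luma weights would look like this.

```cpp
#include <cstdint>

// Collapse RGB to a single grayscale channel using Rec. 601 luma weights,
// rounding to the nearest integer. (The real code would use cvtColor.)
std::uint8_t toGray(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return static_cast<std::uint8_t>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
```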

Results

OK, so it’s not going to allow the upstart CSI agent to see the killer… but you can clearly make out the pyramids…just. Processing takes a couple of seconds per image, but there is no parallel or optimised processing going on, so it could easily be made an order of magnitude faster by using the other 7 threads on the CPU plus a bit of optimisation.

It’s actually a bit easier to see the results on smaller images.

No bit of computer vision work is complete without a chessboard.

It’s also important to compare input and output with the input image scaled to the calibration dimensions. As I could only achieve a good light level from a large white calibration pixel, the input image should be scaled down to that same size so we can appreciate the maximum reconstruction resolution that could be achieved. Here is the pyramid image scaled to 20×20, then upscaled again for viewing.
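The scaled-down comparison image is just a box average over blocks of the input; this hypothetical sketch assumes the image dimensions divide evenly by the grid size (the real code presumably used OpenCV’s resize for this step).

```cpp
#include <vector>

// Box-average an image down to the calibration grid. Assumes `w` and `h`
// divide evenly by `grid`; img is row-major, w*h values.
std::vector<double> downscale(const std::vector<double>& img,
                              int w, int h, int grid) {
    const int bw = w / grid, bh = h / grid;  // pixels per output cell
    std::vector<double> out(grid * grid, 0.0);
    for (int gy = 0; gy < grid; ++gy)
        for (int gx = 0; gx < grid; ++gx) {
            double sum = 0.0;
            for (int y = gy * bh; y < (gy + 1) * bh; ++y)
                for (int x = gx * bw; x < (gx + 1) * bw; ++x)
                    sum += img[y * w + x];
            out[gy * grid + gx] = sum / (bw * bh);
        }
    return out;
}
```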

Conclusion

The image can definitely be reconstructed from fractured images captured from reflections off a crumpled foil sheet. After looking at the best possible case for reconstruction, i.e. the input image resized to the calibration matrix size, we can see the results are actually pretty impressive, if I do say so myself!

For an evening’s worth of work, about 150 lines of C++ and OpenCV, the results are promising for reconstructing higher-quality images, and it’s even worth experimenting with colour and real-time processing for video.