
David Prutchi received his Ph.D. in Engineering from Tel-Aviv University in 1994, and then conducted post-doctoral research at Washington University. His area of expertise is the development of active implantable medical devices, and he is currently the Executive Vice President of Engineering at Impulse Dynamics. He is an adept do-it-yourselfer and avid photographer dedicated to bringing cutting-edge experimental physics and technical photography within the grasp of fellow buffs.

I agree with you on the suitability of three optically-independent cameras for imaging far-away objects, or when capture is slow (or off-line) so there is time for image registration. In fact, I have been working on a setup like that using 5 MP cameras, but I am still trying to do optical alignment, because good digital registration is very computationally heavy. In my book on UV photography (https://www.amazon.com/gp/product/1682031241/ref=a...) I discussed the Matlab-based "SIFT Flow" registration method that I use for hyperspectral imaging. It warms up the Xeon processor when running...

Regarding processing to yield usable images, please take a look at the DOLPi whitepaper at: http://www.diyphysics.com/wp-content/uploads/2015/... There I give a very detailed explanation and Python/Matlab code on how to process the three images together to yield useful information. For a quick view of the DOLPi cameras and their capabilities and applications, please watch the 10-minute video that I submitted to the Hackaday Prize in 2015 (this project was one of the HAD Prize winners that year).

Regarding your last question, there are at least two universities that have built DOLPi cameras based on the ones described in my whitepaper and are flying them on drones for archaeological site discovery. At the Hackaday Superconference last year I met both professors who have been doing this, and discussed with them ways of increasing the real-time sensitivity (by stretching the dynamic range in the rendering code) to enable visualization of subtle polarization contrast when flying over vegetation. They were just starting their field experiments, so I haven't heard of results yet.

Cheers,
David
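For readers who don't want to dig into the whitepaper right away: the core of "processing the three images together" is estimating the linear Stokes parameters from intensities measured through polarizers at 0°, 45°, and 90°. The sketch below is my own minimal NumPy illustration of that idea, not the whitepaper's actual code, and the function name is made up for this example:

```python
import numpy as np

def polarimetric_process(i0, i45, i90):
    """Estimate linear Stokes parameters from three intensity images
    captured through polarizers at 0, 45, and 90 degrees."""
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical preference
    s2 = 2.0 * i45 - s0      # +45 vs. -45 degree preference
    # Degree of linear polarization: 0 (unpolarized) .. 1 (fully polarized)
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    # Angle of linear polarization, in radians (defined modulo pi)
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, dolp, aolp
```

For example, unpolarized light of unit intensity gives i0 = i45 = i90 = 0.5 and hence DoLP = 0, while fully horizontally polarized light (i0 = 1, i45 = 0.5, i90 = 0, per Malus's law) gives DoLP = 1 and AoLP = 0.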

Thank you for your kind message! I watched the YouTube video you mentioned. Great hack! Thank you!

Hi,

I did try the same conversion on a JVC 3-CCD camera. However, I wasn't able to take apart the beamsplitter and dichroic filters. These newer cameras use a beamsplitter prism with integrated filters, and no amount of attacking it with dedicated optical-adhesive solvents managed to disassemble it. In addition, the CCD sensors are mounted directly onto the prism, so I doubt that I would have been able to put it back together and achieve good registration.

Regarding your second question, in my original DOLPi project I used one camera and three filter positions in sequence (either mechanically or electro-optically switched), which gives excellent results, but the sequential image capture is slow on the Raspberry Pi because of the way individual frames are captured by the GPU (which has closed firmware, and is thus extremely difficult to bypass).

Placing three modern cameras to take independent pictures indeed speeds up the process, but that requires either a geometric transformation in software (to counteract parallax), or a beamsplitter arrangement that copies a single optical input three times over so that the images can be optically registered. Both are possible, but require more complex hardware and/or optics. Please take a look at the DOLPi paper mentioned in the article for a detailed discussion of these options.

As such, the old, tube-based camera hack is a much easier path for whoever wants to start playing with polarimetric imaging.
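To give a flavor of what "geometric transformation in software" involves in the simplest case: if the three cameras mostly differ by a small translation, phase correlation can estimate the pixel offset between frames cheaply. This is a translation-only toy stand-in for a full registration method like SIFT Flow (the function name and convention below are my own, chosen for this sketch):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer-pixel (dy, dx) by which `moving` is
    displaced relative to `ref`, using FFT-based phase correlation."""
    ref = np.asarray(ref, dtype=float)
    moving = np.asarray(moving, dtype=float)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift
    f = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Indices past the halfway point correspond to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Real multi-camera rigs also have rotation, scale, and perspective differences, which is why proper registration (homographies or dense flow) is so much heavier computationally.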

Thank you very much for your kind comment! That's exactly what I like to do!

Yes. Randomly polarized light renders as grayscale (because it reaches all three tubes equally), while polarized light reaches the tubes unequally, and hence its angle of polarization is represented as a color.
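One common way to express that angle-to-color idea in software is an HSV mapping: hue encodes the polarization angle, saturation the degree of polarization, and value the brightness. This is my own illustrative sketch of such a mapping (not the camera's analog electronics, and the function name is made up):

```python
import colorsys

def polarization_to_rgb(intensity, dolp, aolp_deg):
    """Map one pixel's polarization state to an RGB display color.
    Hue encodes the angle of linear polarization (defined modulo 180
    degrees), saturation the degree of polarization, and value the
    brightness.  Unpolarized light (dolp = 0) stays gray."""
    hue = (aolp_deg % 180.0) / 180.0
    sat = max(0.0, min(dolp, 1.0))
    val = max(0.0, min(intensity, 1.0))
    return colorsys.hsv_to_rgb(hue, sat, val)
```

With this convention, unpolarized light at any angle comes out as a neutral gray of the given brightness, while fully polarized light at 0° maps to saturated red.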

Thanks for your kind comment! I'm not sure what you mean by "depth colorization". However, if it involves image processing to render something different than what the 1980s electronics in the camera can generate, I would recommend building one of my "DoLPi" Raspberry Pi-based cameras. Please see details at: http://www.diyphysics.com/wp-content/uploads/2015/10/DOLPi_Polarimetric_Camera_D_Prutchi_2015_v5.pdf
