In this contribution, we present exploratory experiments using a deep learning framework to classify elastic scattering spectra of biological tissues as normal or cancerous. An analytical assessment highlighting the superiority of deep features extracted by a convolutional neural network (CNN) over classical hand-crafted biomarkers is discussed. The proposed method feeds the elastic scattering spectra of the tissues directly into the CNN, thereby averting the need for domain experts to extract diagnostic feature descriptors. Experimental results are discussed in detail.

Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*

Melanoma is the least common but deadliest skin cancer: it accounts for only about 1% of all cases, yet causes the vast majority of skin cancer deaths. In some parts of the world, especially in Western countries, melanoma is becoming more common every year. Detecting melanoma at an early stage greatly improves the chances of a cure. Unfortunately, long queues and high prices for dermatology services can delay the diagnosis of skin cancer to a later stage, thus increasing the risk of mortality for the patient. It is therefore important to provide primary care physicians with a non-invasive optical device that helps diagnose different skin malformations based on the acquired optical images. Such a device would be able to classify different skin malformations automatically, but the classification results strongly depend on the quality of the acquired images.

This study aims at solving image quality problems in the area of biophotonics. The resulting image quality depends on the hardware capabilities of the object illumination, the image sensor, the optical system and the image post-processing (including the image storage format). Although several quality problems of imaging systems can be prevented in advance, some flaws are not removed as easily. One example is uneven illumination where the skin is not flat (for example, on the nose or ear): a uniform illumination field cannot be created there, so the resulting optical image shows noticeable intensity differences across it. In other cases, it is the skin texture itself that hampers automatic malformation classification and diagnosis. Image quality enhancement can then help to remove these flaws and raise the precision of malformation classification.

In this research, methods for solving different image quality problems in multispectral images of skin malformations are proposed. Multispectral image acquisition and the proposed methods are tested on a prototype of a non-contact skin cancer analysis device; nevertheless, they could be applied within other multispectral image analysis algorithms. Pilot studies of the filtering methods show good results in dealing with uneven lighting in the images. The quality enhancement methods include high-pass filtering, extraction of non-skin fragments (hair, markers, etc.), image stabilization and others. The image quality enhancement techniques were clinically tested on multispectral images of different skin malformations, and the results of the study are presented in this paper.
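As an illustration of the high-pass filtering idea for uneven illumination, a common baseline is to estimate the slowly varying illumination field with a wide low-pass filter and divide it out (flat-field correction). This sketch is a generic example, not the authors' exact method; the `sigma` value is a made-up tuning parameter that must be large compared with the skin structures of interest:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_uneven_illumination(image, sigma=16.0):
    """Divide out a Gaussian estimate of the illumination field.
    `sigma` (pixels) is a hypothetical tuning parameter."""
    img = image.astype(np.float64)
    illumination = gaussian_filter(img, sigma)
    # Avoid division by zero in dark corners.
    illumination = np.maximum(illumination, 1e-6)
    # Rescale so the corrected image keeps the original mean level.
    return img / illumination * illumination.mean()
```

On an image with a smooth brightness gradient, the corrected result is nearly flat while local skin texture (which varies faster than `sigma`) is preserved.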

In this study we propose a new approach, based on artificial neural networks, for monitoring the excretion of luminescent nanocomposites and their components in urine. We solve the complex multiparametric problem of optical imaging in biomaterial of the synthesized nanocomposites: nanometer-scale graphene oxides coated with a poly(ethylene imine)–poly(ethylene glycol) copolymer and with folic acid. The proposed method is applicable to optical imaging of any fluorescent nanoparticles used as imaging nanoagents in biological tissue.

Skin cancer diagnostics is one of the medical areas where early diagnosis enables a high patient survival rate. Typically, skin cancer diagnosis is performed by a dermatologist; since the number of such specialists is limited, the mortality rate is high [1]. By creating a low-cost and easy-to-use diagnostic device, it is possible to bring skin cancer diagnostics to primary care physicians, allowing many more people to be examined and skin cancer to be diagnosed at an early stage. Several existing devices provide skin cancer diagnostics [2]. Most of them process the skin images locally and have limited diagnostic capabilities; some of them send images to dermatologists for manual analysis to achieve higher diagnostic quality. In either case, either diagnostic quality or response time suffers.

To be able to use the latest diagnostic algorithms and still have a fast automated diagnostic system, we propose a distributed cloud-based system. In this system, the diagnostic device is used only for image acquisition under special multispectral illumination (405 nm, 535 nm, 660 nm and 950 nm). The acquired skin images are sent to the cloud system for analysis and visualization of the diagnostic results. With the proposed approach, images can be processed using the same Matlab [3] algorithms [4] that the skin cancer research team uses, which eliminates the need to adapt each algorithm to the specific architecture of the diagnostic device. Moreover, the proposed system keeps the relation between the multiple skin analyses of each patient and can be used to track changes of skin lesions over time. The architecture of the proposed cloud system allows fast scaling according to real-time requirements: a central load-balancing server accepts diagnostic requests and sends each image processing request to the least loaded Matlab processing station, and in case of high load the balancing server can launch an additional processing station. This brings the main advantages of cloud systems, namely efficient resource usage and fast adaptation to current needs by increasing processing power. The cloud system uses the Vagrant virtual machine management tool, which allows the proposed cloud system to be easily recreated as a local private cloud in situations where the diagnostic results require a high level of security.
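The least-loaded dispatch and scale-out behaviour described above can be sketched as a toy model (hypothetical API and class names, not the project's actual server code):

```python
class LoadBalancer:
    """Toy sketch of a central balancing server: each worker reports
    its queued job count; new requests go to the least loaded worker,
    and a new worker is launched when all workers are saturated."""

    def __init__(self, max_queue=4):
        self.workers = {}          # worker id -> queued job count
        self.max_queue = max_queue

    def launch_worker(self):
        wid = f"matlab-{len(self.workers)}"   # hypothetical naming
        self.workers[wid] = 0
        return wid

    def dispatch(self):
        # Scale out if there are no workers or all are at capacity.
        if not self.workers or min(self.workers.values()) >= self.max_queue:
            wid = self.launch_worker()
        else:
            wid = min(self.workers, key=self.workers.get)
        self.workers[wid] += 1
        return wid

    def complete(self, wid):
        self.workers[wid] -= 1
```

With `max_queue=2`, two requests fill the first worker and the third request triggers the launch of a second processing station, mirroring the scale-out rule described in the abstract.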

The system is being tested in an ongoing European project by the biophotonics research team and medical personnel. The results of clinical testing will follow after the first stage of clinical tests is completed.

This work has been supported by European Regional Development Fund project ‘Portable Device for Non-Contact Early Diagnostics of Skin Cancer’ under grant agreement # 1.1.1.1/16/A/197.

Diabetic retinopathy (DR) is one of the leading causes of blindness across the globe. In this manuscript, we probe tissue multifractality in order to identify submicron-level changes in the medium refractive indices as diabetic retinopathy progresses from mild to severe stages. To this end, we quantify multifractal parameters such as the Hurst exponent (a measure of correlation) and the width of the singularity spectrum (a measure of heterogeneity). As we proceed from healthy tissue to the different stages (mild, moderate and severe) of diabetic retinopathy, the Hurst exponent decreases, whereas the width of the singularity spectrum increases. Overall, multifractal analysis of in vivo diabetic retinopathy images may lead to a diagnostic modality based on a potential statistical biomarker.
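The Hurst exponent mentioned above can be estimated in several ways; one common route is detrended fluctuation analysis (DFA). The sketch below is a generic illustration of that estimator applied to a 1D signal, not the specific multifractal pipeline used in the study:

```python
import numpy as np

def hurst_dfa(signal, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by first-order detrended
    fluctuation analysis (a common generic estimator)."""
    x = np.cumsum(signal - np.mean(signal))   # integrated profile
    flucts = []
    for s in scales:
        n = len(x) // s
        rms = []
        for i in range(n):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)    # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t))**2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) vs log s estimates the Hurst exponent.
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h
```

For uncorrelated white noise the estimate is close to 0.5; lower values indicate anti-correlation, in line with the decrease reported for progressing retinopathy.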

A new visual method has been invented to measure the stroke volume of an extracorporeal pneumatic heart assist pump. Heart pumps of this type have a pneumatic chamber and a blood chamber separated by a flaccid membrane. Equipping the heart pump with a miniature camera makes it possible to observe the surface of the membrane from the pneumatic chamber side without obstructing its normal operation. The momentary shape of the flaccid membrane determines the volume of the blood chamber. The essence of the measurement method is to observe the surface of the membrane with a camera and to determine its shape in actual three-dimensional space on the basis of a single image. The method works thanks to markers arranged on the surface of the membrane on the pneumatic chamber side; image processing and analysis techniques are used in the measurement. The difficulty in verifying the accuracy of the shape mapping is that a heart assist pump fitted with a flaccid membrane has only two membrane states with a known mathematical description. Research verifying the method for these extreme states has already been conducted and produced very good results. A newly invented technique for 3D modeling of an arbitrary membrane shape with well-known geometric dimensions allowed the method to be verified for any shape of the membrane: the real membrane was replaced in sequence with four different rigid models of known geometric dimensions. The results obtained in the study are presented.

Biomedical tissue classification is of great interest in many fields, e.g. for finding a clear boundary for cancer resections. At the moment, no tool sufficiently fulfills the needs of intraoperative surgical guidance. For precise tumor removal, the resolution has to be as high as possible and the information must be delivered in real time; otherwise intraoperative guidance cannot be done accurately. Optical coherence tomography (OCT) has already demonstrated its benefits in ophthalmology, dermatology and endoscopy. Providing micrometer resolution at a penetration depth of 1-2 mm and acquisition rates in the MHz regime, OCT is a well-suited tool for contactless investigations during surgery. An additional benefit arises if the obtained images are analyzed and the tissue type is classified immediately. Usually, histopathological analysis of ex vivo samples directly after removal confirms the tissue classification: the samples are embedded in paraffin, stained with, e.g., hematoxylin and eosin, and then classified by an experienced pathologist who analyzes tissue slices of approximately 10 μm thickness. By employing OCT for classification, an entire three-dimensional image can be classified without further preparation of the tissue. Here we present a texture-feature-based approach utilizing local binary patterns, run-length analysis, Haralick's texture features and Laws' texture energy measures. After computing all these texture features, a principal component analysis (PCA) was performed to decrease the dimensionality of the data set; this step was necessary to enhance the performance of the employed support vector machine (SVM) classifier. To find the best parameters for the kernel, a grid search with 10-fold cross-validation was done.
As a first step, the texture-analysis-based post-processing approaches were applied to 13 ex vivo brain tissue samples, diagnosed as meningioma (8), healthy white tissue (3) and healthy gray tissue (2). The samples were imaged with a commercial OCT system (Thorlabs Callisto). In the raw OCT images, some structural differences between healthy tissue and meningioma can already be recognized; however, an automated approach that does not require interpretation of the result would certainly help the surgeon during surgery. In the end, we trained an SVM classifier that was able to differentiate between healthy tissue and meningioma with an accuracy of nearly 98%. As the next logical step, these findings will be validated intraoperatively and further applications of texture-based classification will be investigated.
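The processing chain described above (texture features, then PCA, then an SVM with a kernel grid search and cross-validation) can be sketched with scikit-learn. The feature matrix below is random placeholder data standing in for the concatenated texture descriptors, and the parameter grid is illustrative, not the values used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row of concatenated texture
# descriptors (LBP, run-length, Haralick, Laws) per OCT image.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = rng.integers(0, 2, size=60)   # 0 = healthy, 1 = meningioma

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),    # dimensionality reduction
    ("svm", SVC(kernel="rbf")),
])
# Grid search over RBF kernel parameters with cross-validation
# (5-fold here given the small synthetic sample; the study used 10-fold).
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100],
                           "svm__gamma": ["scale", 0.01]}, cv=5)
grid.fit(X, y)
```

With real texture features in `X`, `grid.best_estimator_` is the tuned classifier that would then be evaluated on held-out samples.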

High-resolution optical imaging modalities such as optical coherence tomography (OCT), confocal and multiphoton microscopy continue to show promise for diagnostic imaging. These imaging modalities commonly employ 2D scanning mechanisms that scan the sample in regular, pre-defined patterns. However, these scanners often have a limited field of view and can be susceptible to artefacts due to patient or clinician motion. We have recently demonstrated a new imaging paradigm called dual-beam manually-actuated distortion-corrected imaging (DMDI) that overcomes these limitations. DMDI exploits the predictable path and spatial separation of two beams to calculate and correct the scanning distortion caused by manual actuation of the probe or the sample. DMDI was first implemented using a dual-beam micromotor catheter (DBMC), which could be useful for in vivo imaging of internal vessels, airways, or tubular organs. Here, we present a new implementation of DMDI using a single-axis galvanometer scanner.
OCT imaging is used to demonstrate this implementation of DMDI. A single 1310 nm swept-source laser is split into two independent OCT interferometers. The two sample arms of the interferometers are aligned at different angles onto a single-axis galvo mirror which is driven synchronously with the swept source. After passing through a scan lens, the scan pattern traced by the two beams is a pair of roughly parallel lines. A one-time calibration procedure is performed by imaging a phantom to precisely determine the beam separation and scanning pattern.
Samples were scanned by manually moving them approximately perpendicular to the scan lines, acquiring two images. Using common unique features in both images, the recorded time difference between the imaging of these features, and the calibrated relationship between the two beams, the image distortion caused by manually actuating the sample can be determined and distortion-corrected images can be produced.
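The distortion-correction step can be illustrated with a simplified one-dimensional model: each feature's arrival-time difference between the two beams yields a local actuation speed, and integrating that speed over the line timestamps recovers the true slow-axis position of every scan line. This is a schematic reading of the method with hypothetical inputs, not the authors' calibration code:

```python
import numpy as np

def actuation_positions(feature_times_1, feature_times_2, beam_sep, line_times):
    """Simplified 1D DMDI sketch: features seen first by beam 1 and
    later by beam 2 (separated by `beam_sep`) give local manual-scan
    speeds; integrating speed over line timestamps gives the true
    slow-axis position of each acquired line."""
    t1 = np.asarray(feature_times_1, float)
    t2 = np.asarray(feature_times_2, float)
    speeds = beam_sep / (t2 - t1)            # local actuation speed
    # Interpolate speed to every acquired line, then integrate.
    v = np.interp(line_times, t1, speeds)
    dt = np.gradient(line_times)
    return np.cumsum(v * dt)                 # slow-axis position per line
```

For a constant actuation speed the recovered positions are evenly spaced, i.e. the en face image would need no remapping; variations in speed show up as unevenly spaced line positions that the correction resamples onto a regular grid.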
To validate the galvanometer implementation of DMDI, we first imaged a phantom with a defined flat pattern. Image restoration was performed on the en face OCT images and showed that distortion correction was feasible both perpendicular and parallel to the scan beam axis over a range of speeds. We also demonstrate correction of en face OCT images of a biological sample.
DMDI is demonstrated to be a versatile imaging paradigm, as it can be adapted to different implementations. Although a bench-top galvanometer scanner setup was used in this study, the implementation could be adapted for imaging body sites such as the oral cavity or skin. Furthermore, OCT was chosen due to its availability in our lab; in principle, however, any point-scanning modality could be used for DMDI.

Based on vector diffraction theory and the inverse Faraday effect, we report on the generation of an optical needle and a magnetization needle with tunable longitudinal depth by focusing a narrow annulus of azimuthally polarized beams using optomagnetic materials and an elliptical mirror. In this paper, we present an approximate expression relating the angular thickness Δθ to the longitudinal depth when the annulus is assumed to be sufficiently narrow (Δθ << π/2). We theoretically demonstrate that the induced magnetization needle has the same longitudinal depth as the optical needle, but with a different distribution. The results are applied to the specific cases of the elliptical mirror and the parabolic mirror, and we further demonstrate theoretically that the longitudinal depth is equally long in the elliptical-mirror and parabolic-mirror focusing systems.

In the field of pathology there is an ongoing transition to the use of Whole Slide Imaging (WSI) systems, which scan tissue slides at intermediate resolution (~0.25 μm) and high throughput (15 mm²/min) into digital image files. Most scanners currently on the market are line-sensor-based push-broom scanners for three-color (RGB) brightfield imaging. Adding fluorescence imaging capability opens up a wide range of possibilities to the field, in particular the use of specific molecular (protein, gene) imaging techniques. We propose an extension to fluorescence imaging for a highly efficient WSI system based on a line-scanning technique using multi-color LED epi-illumination. The use of multi-band dichroics eliminates the need for filter wheels or any other moving parts in the system, and the use of color-sequential LED illumination enables imaging of multiple color channels with a single sensor. Our approach offers a solution for fluorescence WSI systems that is technologically robust and cost-effective. We present design details of a four-color LED-based epi-illumination with a quad-band dichroic filter optimized for LEDs, and provide a thorough analysis of the obtained optical and spectral efficiency. The primary throughput limitation is the minimum signal-to-noise ratio (SNR) given the available optical power in the illumination etendue; our analysis indicates that a throughput on the order of 1000 lines/s can be obtained.

We present a fluid-membrane lens with two piezoelectric actuators that offer versatile, circularly symmetric shaping of the lens surface. A wavefront-measurement-based control system ensures robustness against creep and hysteresis effects of the piezoelectric actuators. We apply the adaptive lens to correct synthetic aberrations induced by a deformable mirror. The results suggest that the lens is able to correct spherical aberrations with standard Zernike coefficients between 0 μm and 1 μm, while operating at refractive powers up to about 4 m⁻¹. We apply the adaptive lens in a custom-built confocal microscope to allow simultaneous axial scanning and spherical aberration tuning. The confocal microscope is extended by an additional phase measurement system to close the control loop. To verify our approach, we use the maximum intensity and the axial FWHM of the overall confocal point spread function as figures of merit. We further discuss the ability of the adaptive lens to correct specimen-induced aberrations in a confocal microscope.

Diatom detection has been a challenging task for computer scientists and biologists over the past years. In this work, new state-of-the-art techniques based on the deep learning framework have been tested in order to check whether they are suitable for this purpose: on the one hand, R-CNNs (Region-based Convolutional Neural Networks), which select candidate regions and apply a convolutional neural network to each, and, on the other hand, YOLO (You Only Look Once), which applies a single neural network over the whole image. The first reaches poor results in our experiments, with an average recall of 0.68 and some tricky aspects, for example the need to apply a bounding-box merging algorithm to obtain stable detections; the second achieves remarkable results, with an average recall of 0.84 in the evaluation carried out, and fewer aspects to take into account after the detection has been performed. Future work on parameter tuning and processing is needed to increase the performance of deep learning on the detection task. For classification, however, it has already been shown to perform successfully.

If a scanning illumination spot is combined with a detector array, we acquire a four-dimensional signal. Unlike confocal microscopy with a small pinhole, we detect all the light from the object, which is particularly important for fluorescence microscopy, where the signal is weak. The image signal is basically a cross-correlation and is highly redundant: it contains more than sufficient information to reconstruct an image with improved resolution. A 2D image can be generated from the measured signal by pixel reassignment; the result is improved resolution and signal strength, and the system is called an image scanning microscope. A variety of signal processing techniques can be used to predict the reassignment and deconvolve the partial images. We use an innovative single-photon avalanche diode (SPAD) array detector of 25 elements (arranged in a 5 × 5 matrix), so we can simultaneously acquire 25 partial images and process them to calculate the final reconstruction online.
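Pixel reassignment itself is conceptually simple: each detector element's partial image is shifted back toward the array centre by a fraction (typically one half) of that element's offset before summation. A minimal integer-pixel sketch of this idea, not the authors' online implementation:

```python
import numpy as np

def pixel_reassignment(partial_images, offsets, alpha=0.5):
    """Sum partial images after shifting each one back by a fraction
    `alpha` of its detector offset (dy, dx) from the array centre.
    Integer-pixel shifts only, for simplicity."""
    out = np.zeros_like(partial_images[0], dtype=float)
    for img, (dy, dx) in zip(partial_images, offsets):
        sy = int(round(-alpha * dy))
        sx = int(round(-alpha * dx))
        out += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return out
```

Because each partial image contributes at its reassigned position, point-like features from different detector elements add up coherently at the same pixel, which is the origin of the resolution and signal gain.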

In this paper, we focus on improving the reconstructed image quality of a mobile three-dimensional display using computer-generated integral imaging. A three-dimensional scanning method is applied in the acquisition step instead of capturing a depth image, so that much more accurate three-dimensional view information (parallax and depth) can be acquired compared with the previous mobile three-dimensional integral imaging display, and the proposed system can reconstruct clearer three-dimensional visualizations of real-world objects. Here, the three-dimensional scanner, operated by the user, acquires the three-dimensional parallax and depth information of the real-world object. The acquired data is then organized, a virtual three-dimensional model is generated from it, and the elemental image array (EIA) is generated from the virtual three-dimensional model. Additionally, in order to enhance the resolution of the elemental image array, an intermediate-view elemental image generation method is applied: intermediate-view elemental images are generated between each group of four neighboring original elemental images according to the pixel information, so that the resolution of the generated elemental image array is enhanced almost fourfold compared with the original. When the three-dimensional visualizations of real objects are reconstructed from the elemental image array with enhanced resolution, the quality is considerably improved compared with the previous mobile three-dimensional imaging system. The proposed method is verified by a real experiment.

High-fidelity, interactive, full-parallax light field displays currently under development have unique and challenging requirements due to the human factors and space constraints imposed on them. A high-fidelity light field display with no vergence-accommodation conflict and a desktop-size footprint implies a display with tens of gigapixels and a pixel pitch in the range of 10 microns or below. Achieving interactive image and video display performance on these types of displays requires a fundamental redesign of the display input interface and image processing pipeline. In this paper, we discuss various ways of addressing these issues with light field compression and display system design innovations.

Digital holography is a growing field that owes its success to the three-dimensional imaging representation it provides. This is achieved by encoding the wave field transmitted or scattered by an object in the form of an interference pattern with a reference beam. While in conventional imaging systems it is usually impossible to recover the correctly focused image from a defocused one, with digital holography the image can be numerically retrieved at any distance from the hologram. Digital holography also allows the reconstruction of multiple objects at different depths.
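Numerical retrieval at an arbitrary distance, as described above, is commonly implemented with the angular-spectrum propagation method (one of several possible kernels; the text does not specify which one is used here). A minimal sketch:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the
    angular-spectrum method. `dx` is the pixel pitch; evanescent
    components are suppressed."""
    n, m = field.shape
    fy = np.fft.fftfreq(n, dx)
    fx = np.fft.fftfreq(m, dx)
    FX, FY = np.meshgrid(fx, fy)
    # Squared longitudinal direction cosine; negative -> evanescent.
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(kz * z) * (arg > 0)        # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating by +z and then by -z returns the original field (up to suppressed evanescent content), which is the numerical analogue of refocusing a defocused hologram.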

The complex object field at the hologram plane can be separated into real and imaginary, or amplitude and phase, components for further compression. It could be expected that more inter-component redundancy exists between the real and imaginary information than between the amplitude and phase information. Several compression schemes have been considered, including lossless compression and lossy compression based on subsampling, quantization, and transformation, mainly using wavelets. Benchmarks of the main available image coding standards, such as JPEG and JPEG 2000, and of the intra coding modes available in the MPEG-2, H.264/AVC and HEVC video codecs have also been reported for digital holographic data compression on the hologram plane.

In the current work, the main available image coding standards (JPEG, JPEG XT, JPEG 2000 and the intra mode of HEVC) are benchmarked for digital holographic data represented on the object plane instead of the hologram plane. The study considers both real-imaginary and amplitude-phase representations. As expected, the real, imaginary and amplitude components present very similar compression performance and are coded very efficiently by the different standards. However, the phase information requires much higher bitrates (3/4 bpp more) to reach similar quality levels. Moreover, the amplitude information results in slightly larger bitrates for the same quality level than the real or imaginary information.

Comparing the different standards, the HEVC intra main coding profile is very efficient and outperforms the other standards, while JPEG 2000 yields very similar compression performance. A comparison with studies where coding was performed on the hologram plane reveals the advantages of coding on the object plane. Hence, it becomes evident that future representation standards should consider representing digital holograms on the object plane instead of the hologram plane.

In this paper we investigate the suitability of Gabor wavelets for adaptive partial reconstruction of holograms based on the viewer position. Matching pursuit is used for a sparse light-ray decomposition of the holographic patterns. At the decoding stage, sub-holograms are generated by selecting the diffracted rays corresponding to a specific viewing area. The use of sub-holograms has been suggested in the literature as an alternative to full compression, degrading a hologram with respect to its directional degrees of freedom. We present our approach as a complete framework for the compression of color digital holograms and explain in detail how it can be exploited efficiently in the context of holographic head-mounted displays. Among other aspects, encoding, adaptive reconstruction and selective degradation are studied.

With the advent of light field acquisition technologies, the captured information of a scene is enriched with both angular and spatial information. This additional information enables new capabilities in the post-processing stage, e.g. refocusing, 3D scene reconstruction and synthetic aperture imaging. Light field capturing devices fall into two categories: in the first, a single plenoptic camera captures a densely sampled light field; in the second, multiple traditional cameras capture a sparsely sampled light field. In both cases, the size of the captured data grows with the additional angular information. The recent call for proposals on the compression of light field data by the Joint Photographic Experts Group (JPEG), known as JPEG Pleno, reflects the need for a new and efficient light field compression solution. In this paper, we propose a compression solution for sparsely sampled light field data. Each view of the multi-camera system is interpreted as a frame of a multi-view sequence, and the pseudo multi-view sequences are compressed using the state-of-the-art multiview extension of High Efficiency Video Coding (MV-HEVC). A subset of four light field images from the Stanford dataset is compressed at four bit-rates covering low to high bit-rate scenarios. The comparison is made with the state-of-the-art reference encoder HEVC and its real-time implementation x265. The rate-distortion analysis shows that the proposed compression scheme outperforms both reference schemes in all tested bit-rate scenarios for all test images, with average BD-PSNR gains of 1.36 dB over HEVC and 2.15 dB over x265.
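The BD-PSNR figures quoted above come from the standard Bjøntegaard-delta metric, which averages the PSNR gap between two rate-distortion curves over their overlapping bitrate range using polynomial fits in log-rate. A minimal sketch of that computation (the rate and PSNR points below are made-up illustrative values, not the paper's measurements):

```python
import numpy as np

def bd_psnr(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjoentegaard-delta PSNR: mean PSNR gain of the test codec over
    the reference, from cubic fits of PSNR vs log10(bitrate)
    integrated over the overlapping rate range (four points each)."""
    lr1 = np.log10(np.asarray(rates_ref, float))
    lr2 = np.log10(np.asarray(rates_test, float))
    p1 = np.polyfit(lr1, psnr_ref, 3)
    p2 = np.polyfit(lr2, psnr_test, 3)
    lo, hi = max(lr1.min(), lr2.min()), min(lr1.max(), lr2.max())
    i1, i2 = np.polyint(p1), np.polyint(p2)
    int1 = np.polyval(i1, hi) - np.polyval(i1, lo)
    int2 = np.polyval(i2, hi) - np.polyval(i2, lo)
    return (int2 - int1) / (hi - lo)       # average PSNR gain in dB
```

A test curve that sits a constant 2 dB above the reference at the same rates yields a BD-PSNR of exactly 2 dB, which is a convenient sanity check for the implementation.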

Multiphoton imaging commonly relies on laser-scanning setups that quickly image horizontal sections (x-y images) by pixelwise scanning a sample region with focused laser pulses. Different horizontal planes are imaged by adjusting the relative distance between the focusing optics and the sample. In many cases, however, a visualization of vertical sections is actually desired, which can otherwise only be obtained indirectly from time-consuming acquisition and processing of complete volume scans. We present a modified multiphoton tomograph for clinical in vivo and ex vivo tissue imaging with direct and fast x-z imaging capability and exemplify different applications, spanning from visualizing anatomic structures to substance penetration studies. The fast x-z imaging is realized by synchronizing the scanning-mirror movement with the tuning of the relative distance between the sample and the focusing optics.

Thermal imaging cameras improve the situational awareness of pilots during aircraft operation. Nowadays thermal sensors are readily available onboard as part of the Enhanced Vision System (EVS). While video synthesized using 3D modeling (Synthetic Vision System, SVS) can easily be displayed on a Head-Up Display (HUD) thanks to the availability of area segmentation data, projecting the EVS video on a HUD usually produces an image with large bright areas that partially obscure the cockpit view for the flight crew. This paper focuses on the development of the ClearHUD algorithm for effective presentation of the EVS video on a HUD using optical flow estimation. The ClearHUD algorithm estimates the optical flow of both the SVS and the EVS video; the difference between the two flows is used to detect obstacles. The areas of the detected obstacles are projected with high intensity, and the remaining regions are filtered using the segmentation from the SVS.

The ClearHUD algorithm was implemented in prototype software for testing using 3D modeling. The optical flow for the SVS is estimated using ray tracing; the optical flow for the EVS is estimated using the FlowNet 2.0 convolutional neural network (CNN). The evaluation of the ClearHUD algorithm has shown that it provides a significant increase in the brightness of obstacles and reduces the intensity of non-informative areas.
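
The flow-difference idea behind the obstacle detection can be sketched in a few lines. The following is a minimal, hypothetical illustration (the names `flow_svs` and `flow_evs`, the threshold and the attenuation gain are our assumptions, not values from the paper): pixels whose observed EVS flow deviates from the synthetic SVS flow are flagged as obstacles and kept at full intensity, while the rest of the frame is attenuated.

```python
import numpy as np

def obstacle_mask(flow_svs, flow_evs, threshold=1.0):
    """Flag pixels where the EVS optical flow deviates from the
    synthetic SVS flow by more than `threshold` pixels per frame.

    flow_svs, flow_evs: arrays of shape (H, W, 2) with (dx, dy) per pixel.
    """
    diff = flow_evs - flow_svs
    magnitude = np.linalg.norm(diff, axis=-1)
    return magnitude > threshold

def compose_hud(evs_frame, mask, background_gain=0.3):
    """Project detected obstacles at full intensity and attenuate
    the remaining (non-informative) regions."""
    out = evs_frame * background_gain
    out[mask] = evs_frame[mask]
    return out
```

In a real system the threshold would have to account for flow-estimation noise and for registration errors between the SVS and EVS views.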

We present a novel speckle reduction scheme for application in laser-based projection systems. The scheme combines the use of a microlens array (MLA) as screen material with the concept of reduced spatial coherence. Incorporating the screen in the speckle reduction process reduces laser projector cost and complexity. On a typical screen, random scattering of coherent light would cause random interference, i.e. speckle. On an MLA screen, however, the interference between the fields emitted by different microlenses is inhibited if the spatial coherence area of the incident light is made smaller than the microlens footprint. We tested both an MLA with randomly arranged lenses of varying size, averaging 120 μm in diameter, and an MLA with regularly spaced lenses with a fixed diameter of 100 μm. We benchmarked the performance of these MLA screens against a regular diffusive screen. Using a small-scale projection setup with a CCD camera as observer, we experimentally quantified the speckle contrast observed on these screens. Objective speckle contrast measurements on the irregular MLA yield results close to the subjective human speckle detection limit. Besides the experimental validation of the proposed speckle reduction scheme, we constructed a quantitative model to describe the speckle characteristics of the different screens. The model corresponds very well with experimental results and allows us to quantify the relative contributions of the different speckle reduction processes at play. Our approach can benefit any laser-based projection system, such as 3D cinema.
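
The speckle contrast quantified in these experiments is conventionally defined as the ratio of the standard deviation to the mean of the detected intensity; C approaches 1 for fully developed speckle and 0 for a uniform field. A minimal sketch of that metric (the function name is ours):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I) of an intensity image.

    C is approximately 1 for fully developed speckle (exponentially
    distributed intensity) and tends to 0 for a uniform field.
    """
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()
```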

Modern DLP projectors often use a time-multiplexing approach to generate color: a rotating color wheel is used to project the red, green and blue components (and possibly more) as separate sub-images which are each displayed for a short period of time. Applications like color calibration require high-quality measurements of the color output of the projector, which may be acquired with digital cameras. When capturing the output of a DLP projector with a color wheel, the timing of the projector in relation to the exposure time must be taken into account to avoid deviations introduced by capturing fractional parts of a color wheel rotation. In this work, we demonstrate the feasibility of software-only semi-synchronization between a DSLR camera and a DLP projector, using only a PC, a camera with a USB interface and a projector connected via HDMI. We found that a reasonable estimate of the end of the actual exposure can be acquired with millisecond precision. By relating that to the previous vertical blanking interval, we are able to reconstruct the position of the color wheel throughout the exposure. Using a number of photos, it is possible to measure the actual color wheel timing of the projector for a specified input color. Furthermore, we show that this data can be used to build a simple model of the image formation process of the projector, which enables the compensation of color deviations introduced by the incomplete rotation. We show that our compensation technique significantly improves the accuracy of the color measurements for reasonable exposure times.
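
The core timing issue, capturing fractional parts of a wheel rotation, can be illustrated with a toy model. The sketch below (the segment layout, function names and the numeric integration step are our assumptions, not the authors' model) integrates how long each color segment of the wheel is active during a given exposure window:

```python
def segment_exposure(exposure_start, exposure_end, period, segments):
    """Integrate how long each color segment of a rotating wheel is
    active during the exposure [exposure_start, exposure_end).

    segments: list of (name, start_fraction, end_fraction) within one
    wheel revolution, e.g. [("R", 0.0, 1/3), ("G", 1/3, 2/3), ("B", 2/3, 1.0)].
    Returns a dict name -> active time (same units as the inputs).
    """
    totals = {name: 0.0 for name, _, _ in segments}
    t = exposure_start
    step = period / 1000.0  # small phase increment for the numeric integral
    while t < exposure_end:
        phase = (t / period) % 1.0
        for name, s, e in segments:
            if s <= phase < e:
                totals[name] += min(step, exposure_end - t)
                break
        t += step
    return totals
```

An exposure that is not an integer multiple of the wheel period gives unequal per-color times, which is exactly the deviation the paper's model compensates.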

Touchless human-computer interaction (HCI) is important in sterile environments, especially in operating rooms (OR), where surgeons need to interact with images from scanners, X-ray machines, ultrasound devices, etc. Contamination problems may arise if surgeons have to touch a keyboard or mouse. To reduce contamination and give the surgeon more autonomy during the operation, different projects have been developed in the Medic@ team since 2011. In order to recognize the hand and its gestures, two main projects, Gesture Tool Box and K2A, based on the Kinect device (with a depth camera), have been prototyped. The detection of the hand gesture was done by segmentation and hand descriptors on RGB images, but always with a dependency on the depth camera (Kinect) for detecting the hand. Additionally, this approach does not allow the system to adapt to a new gesture requested by the end user: if a new gesture is demanded, a new algorithm must be programmed and tested. Thanks to the evolution of NVIDIA cards, which reduced the processing time of CNN algorithms, the latest approach explored was the use of deep learning. The Gesture Tool Box project analyzed hand gesture detection using a CNN (a pre-trained VGG-16) and transfer learning. The results were very promising, showing 85% accuracy for the detection of 10 different gestures from LSF (French Sign Language); it was also possible to create a user interface giving the end user the autonomy to add his own gestures and perform the transfer learning automatically. However, we still had problems with the real-time delay (0.8 s) of recognition and the dependency on the Kinect device. In this article, a new architecture is proposed in which we want to use standard cameras and reduce the real-time delay of hand and gesture detection.
The state of the art shows that YOLOv2 with the Darknet framework is a good option, with faster recognition times compared to other CNNs. We have implemented YOLOv2 for the detection of the hand and signs, with good results in gesture detection and a gesture recognition time of 0.10 seconds in laboratory conditions. Future work will include reducing the errors of our model, recognizing intuitive and standard gestures, and testing in real conditions.

A rainbow hologram provides observation of the reconstructed object with different spectra at different viewing positions. Recently, we have proposed a concept of a digital rainbow holographic display using a diffraction grating and a white LED light source. In this technique, the slit is implemented numerically by reducing the frequency content of the hologram, while the rainbow effect is realized by dispersion of the white light source on the diffraction grating. A phase-only SLM with a 4f imaging system is used for implementation of complex wave fields. For classical rainbow holograms, image blur is known to be a key factor for holographic image quality. In this paper, we analyze image blur and visual perception for a digital rainbow holographic display. The quality of reconstructed rainbow holograms is investigated under varying viewing conditions with respect to visual perception and depth resolution. In experiments, the visual properties of the digital rainbow hologram are analyzed using optical reconstructions of holograms of 3D and 2D objects at different depths.

Dry photopolymer materials are being actively studied for practical applications such as holographic data storage, 3D displays, and wearable displays with diffractive optical elements. Their versatility, ease of use and self-processing ability give them many advantages over more traditional recording materials such as silver halide and dichromated gelatin for holographic uses. The need for development and optimization of such dry photopolymer diffraction elements with higher capability and stability has been recognized, and they have recently received significant attention in the areas of holographic and wearable displays. In this work, we developed nanoparticle composites based on a SiO2 nanoparticle-doped acrylate-thiol-ene photopolymer material in order to fabricate holographic diffraction elements for 3D display uses. Preliminary examination of this material and fabrication techniques shows that a flexible free-standing volume grating with a thickness of 200 μm and a grating period of 1 μm can be fabricated using the proposed nanoparticle composite material at a recording wavelength of 532 nm. The examination of thinner material layers, in the range of 10 μm, with varied nanoparticle content is ongoing. We believe that such holographic diffraction elements offer significant potential in display technology.

We investigate the spatial frequency response of a volume grating recorded in a ZrO2 nanoparticle-dispersed nanocomposite. We experimentally find that there exists an optimum recording intensity that maximizes the saturated refractive index modulation amplitude of a nanocomposite grating recorded at short and long grating spacings. A strong parametric relationship between grating spacing and recording intensity is seen, and an increase in the saturated refractive index modulation amplitude at shorter grating spacings (< 0.5 μm) can be obtained by using higher recording intensities than those at longer grating spacings. Such a trend can be qualitatively explained by a phenomenological model used for holographic polymer-dispersed liquid crystal gratings. We also describe another method for improving the high spatial frequency response: co-doping with a thiol monomer that acts as a chain-transfer agent.

A design and implementation of a full-parallax holographic stereogram printer is presented. The holographic stereogram is synthesized using 2D perspective images of the 3D object rendered from multiple directions. The perspective images of the 3D scene are first captured by a virtual camera and transformed into two-dimensional holographic elements called hogels. The hogels are exposed successively using the perspective images; after all hogels have been exposed, the complete holographic stereogram is obtained. Numerical simulations and optical reconstructions are presented.

In the present paper we consider a quantitative estimation of the tolerance widening in optical systems with curved detectors. The gain in image quality allows loosening the margins for manufacturing and assembly errors. On the other hand, the requirements for the detector shape and positioning become tighter. We demonstrate both effects on the example of two optical designs. The first is a rotationally symmetric lens with a focal length of 25 mm, an f-ratio of 3.5 and a field of view of 72°, working in the visible domain. The second is a three-mirror anastigmat telescope with a focal length of 250 mm, an f-ratio of 2.0 and a field of view of 4°x4°. In both cases the use of curved detectors allows increasing the image quality while substantially relaxing the requirements for manufacturing precision.

In numerous applications, such as surveillance, industrial inspection, medical imaging and security, high resolution is of crucial importance for the performance of computer vision systems. Besides spatial resolution, a high frame rate is also of great importance in these applications. While the resolution of CMOS imaging sensors is following Moore's law, it is becoming increasingly challenging for optics to follow this development.

In order to keep up with the pixel size reduction, lenses have to be manufactured with much more precision, while their physical size increases dramatically. Moreover, the expertise needed to construct a lens of sufficient quality is available at very few locations in the world. The use of lower-quality lenses with high-resolution imaging sensors leads to numerous artifacts.

Due to the different refractive indices for different wavelengths, primary color components do not reach their targeted pixels in the sensor plane, which causes lateral chromatic aberration artifacts. These artifacts manifest as false colors in high-contrast regions around edges. Moreover, due to the wavelength-dependent refractive indices, light rays do not focus on the imaging sensor plane but in front of or behind it, which leads to blur caused by axial chromatic aberration. With increased resolution, the size of each pixel is significantly reduced, which reduces the amount of light it receives; as a consequence, the amount of noise increases dramatically. The noise increases further due to the high frame rate and the correspondingly shorter exposure times. In order to reduce complexity and price, most cameras today are built using one imaging sensor with a spatially multiplexed color filter array (CFA). This way, camera manufacturers avoid using three imaging sensors and beam splitters, which significantly reduces the price of the system. Since not all color components are present at each pixel location, it is necessary to interpolate them, i.e. to perform demosaicking. In the presence of lateral chromatic aberration, this task becomes more complex, since many pixels in the CFA do not receive the proper color, which creates occlusions that in turn create additional artifacts after demosaicking. To prevent this type of artifact, occlusion inpainting has to be performed.
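
As a concrete illustration of the interpolation step, here is a minimal bilinear demosaicking sketch for an RGGB Bayer pattern (a textbook baseline, not the method proposed in the paper): each sparse color channel is filled by a normalized weighted average of its known neighbours.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Same-size 2-D correlation with zero padding (the kernel used
    below is symmetric, so correlation equals convolution)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(cfa):
    """Bilinear demosaicking of an RGGB Bayer mosaic: each sparse
    colour channel is interpolated from its known neighbours."""
    h, w = cfa.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True      # red samples
    masks[0::2, 1::2, 1] = True      # green on red rows
    masks[1::2, 0::2, 1] = True      # green on blue rows
    masks[1::2, 1::2, 2] = True      # blue samples
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    for c in range(3):
        sparse = np.where(masks[..., c], cfa, 0.0)
        weight = masks[..., c].astype(float)
        num = convolve2d_same(sparse, kernel)
        den = convolve2d_same(weight, kernel)
        rgb[..., c] = num / np.maximum(den, 1e-12)
    return rgb
```

This naive interpolation is precisely where lateral chromatic aberration causes trouble: when a CFA pixel has received the wrong color, the averaging propagates the error into its neighbourhood.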

In this paper we propose a new method for the simultaneous correction of all the artifacts mentioned above. We define operators representing spatially variable blur, subsampling and noise applied to the unknown artifact-free image, and perform reconstruction of the artifact-free image. First we perform a lens calibration step in order to acquire the lens point spread function (PSF) at each pixel in the image using a point source. Once the PSFs are obtained, we perform joint deconvolution using the spatially variable kernels obtained in the previous step.
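
As a simplified stand-in for this restoration step, the sketch below performs frequency-domain Wiener deconvolution with a single, spatially invariant PSF; the paper's actual method uses a different kernel at each pixel, which this toy version does not capture.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution (spatially invariant
    simplification of a per-pixel-PSF restoration).

    nsr: assumed noise-to-signal power ratio (regularization term).
    """
    h, w = blurred.shape
    # zero-pad the PSF to image size and centre it at the origin
    psf_pad = np.zeros((h, w))
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

With a noiseless, circularly blurred input and a small `nsr`, this recovers the original almost exactly; real spatially variant restoration would apply such a filter per tile or solve one joint inverse problem with the calibrated per-pixel kernels.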

In this paper, a motion-blur compensation method for microfabricated objects using a galvanometer mirror with back-and-forth rotation is proposed. Motion-blur compensation is expected to extend the exposure time without motion blur, because a longer exposure time allows decreasing the illumination intensity and thereby avoiding thermal expansion of the target object. To meet this demand, a galvanometer mirror is installed between the target and a 2D high-speed camera and controls the optical axis of the camera to follow the moving target. Continuous images are taken during the motion of the stage, and the captured images are finally integrated into one image by patching, for detecting fabrication errors using image processing. An experimental system consisting of a high-speed camera, a galvanometer mirror and a high-precision stage was developed, and a drilled silicon nitride sheet moving at 20 mm/s, with holes of about 40 μm in diameter arranged in a lattice at a pitch of 60 μm, was captured without motion blur using this system. By comparing the captured images with still images in terms of the diameter, roundness and curvature of each hole, the effectiveness of the system is validated.

Many optical designs generate curved focal planes, for which a field flattener must be implemented. This generally implies the use of more optical elements and a consequent loss of throughput and performance. With the recent development of curved sensors this can be avoided. This new technology has been gathering more and more attention from a very broad community, as the potential applications are numerous: from low-cost commercial to high-impact scientific systems, mass-market and onboard cameras, defense and security, and the astronomical community.

We describe here the first concave curved CMOS detector developed within a collaboration between CNRS-LAM and CEA-LETI. This fully functional 20 Mpix detector (CMOSIS CMV20000) has been curved down to a radius of Rc = 150 mm over a size of 24 x 32 mm². We present the methodology adopted for its characterization and describe in detail all the results obtained. We also discuss the main components of noise, such as the readout noise, the fixed pattern noise and the dark current. Finally, we provide a comparison with the flat version of the same sensor in order to establish the impact of the curving process on the main characteristics of the sensor.

This article presents a novel technique to acquire and visualize two-dimensional images of dynamic changes of acoustic pressure in the case of a stationary acoustic wave. The method uses optical feedback interferometry sensing with a near-infrared laser diode. The stationary acoustic wave is generated using two 40 kHz piezoelectric transducers facing each other. Dynamic changes in acoustic pressure are measured in a 100 mm x 100 mm acoustic propagation field whose refractive index varies along the optical path of the laser, from the laser diode to a distant mirror and back. The imaging system records an image of 100 x 100 pixels of the acoustic pressure variation.

Security holograms are widely used for the anti-counterfeiting of banknotes, documents and consumer products. The development of automated devices for rapid authentication and quality inspection of security holograms remains a relevant task. There are several approaches to solving this problem. One of them is based on comparing images of inspected and reference holograms. Methods based on direct and indirect measurements of microrelief parameters are also used for quality inspection of security holograms. In this article we present a complete solution for the automated quality inspection and authentication of security holograms.

A virtual reality camera is a complex entity with more dedicated components, image quality layers and image processing algorithms than a normal still or video camera. Components like fisheye optics, multi-camera synchronization and stitching algorithms create new practical challenges for image quality measurements.

This work gives an overview of the measurement challenges faced daily in an image quality laboratory when virtual reality cameras are validated. Some of the measurement issues are very concrete, like the size of the test charts, or how to measure the uniformity of a camera whose field of view exceeds 180 degrees. On the other hand, new algorithms and the large number of individual imaging sensors in a single camera device require new measurement methods and a powerful test environment to handle the huge number of images generated.

The paper concentrates on measurement practices for three main aspects of virtual reality cameras: firstly, image quality measurement issues of fisheye cameras; secondly, challenges and requirements of a multi-camera system; and thirdly, challenges in measuring stitching algorithms together with 3D rendering. Each of these areas differs from traditional image quality measurements and requires special test charts, measurement processes, equipment, or test environments.

The specification and inspection of surface imperfections on optical elements are standardized processes, defined by ISO 10110-7 and ISO 14997 respectively. According to the latter, manual visual inspection is typically employed to assess surface imperfections on basic optical elements. However, operator-dependent measurement results are not desirable due to their lack of reproducibility and their variation across operators. In this article, we describe and analyze a machine vision setup designed to mimic a human tester's inspection process in an automated and objective way. Our setup consists of multiple cameras and LED light sources, both arranged on the surface of a hemisphere with the optical element to be inspected at its centre. Motion of the sample during the image acquisition phase can be avoided through the use of individually controllable LED sources. The system is capable of acquiring a sparse pseudo-BRDF (Bidirectional Reflectance Distribution Function) representation of imperfections, enabling discrimination of the imperfection classes defined in the ISO standard, as shown by experiments. Image fusion and imperfection classification methodologies are discussed, and the feasibility of discriminating between dust and surface imperfections with a stereo-vision approach is demonstrated. A comparative analysis with results from manual visual inspection of 20 optical elements of the same geometry is given, which indicates good agreement.

Yarn hairiness is one of the essential parameters for assessing yarn quality. Most photoelectric yarn measurement systems are likely to underestimate hairiness, because hairy fibers on a yarn surface are often projected or occluded in these two-dimensional (2D) systems. This paper presents a three-dimensional (3D) test method for hairiness measurement using a multi-perspective imaging system. The system was developed to reconstruct a 3D yarn model for tracing the actual length of hairy fibers on a yarn surface. Five views of a yarn from different perspectives were created by two angled mirrors and simultaneously captured in one panoramic picture by a camera. A 3D model was built by extracting the yarn silhouettes in the five views, then separating and transferring the silhouettes into a common coordinate system. From the 3D model, curved hairy fibers can be traced spatially, so that the projection and occlusion occurring in current systems are avoided. In the experiment, the proposed method was compared with two commercial instruments, the Uster Tester and the Zweigle Tester. It is demonstrated that the length distribution of hairy fibers measured from the 3D model shows an exponential growth when the fiber length is sorted from shortest to longest. The H-value and S3 value measured by the multi-perspective method are larger than those obtained from the Uster Tester and the Zweigle Tester, respectively. The H-values of the proposed method are highly consistent with those of the Uster Tester (r = 0.992). This indicates that the proposed method allows a more accurate and comprehensive hairiness index measurement.

A face recognition method is proposed for cases with an insufficient training set, when the input data consist of only two facial images (full face and profile). A 3D model of the face is created semi-automatically from the input data (the two images) and is then used in the recognition process. The training set for the recognition process consists of these created 3D face models. The basic problem of face recognition is insufficient information about the proportions of the unidentified person's face; images can also contain artefacts, for example eyeglasses, a beard or a moustache, that can decrease the precision of the recognition process and make the image analysis more difficult. Another important aspect is illumination, which can significantly change the results of the classification. The proposed recognition method consists of several steps: alignment of the unknown face image, and estimation of facial reference points using gradient maps with the dlib (http://dlib.net/) and OpenCV (https://opencv.org/) open-source computer vision libraries. After feature extraction it is necessary to perform thresholding on some facial reference points that are most important for the recognition process. For this purpose, several important features are selected and the distances between them are calculated. The training set consists of previously created 3D face models, which can be used to obtain the missing information about the proportions of the person's face. The proposed algorithm is used for classification. Using this method, classification results are approximately 90% correct, compared to using only the insufficient training set containing just the two images.
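
The distance-based feature step can be illustrated as follows. This is a hypothetical sketch (the landmark sets, the normalization and the plain nearest-neighbour rule are our assumptions), not the authors' exact pipeline:

```python
import numpy as np

def distance_features(landmarks):
    """Pairwise Euclidean distances between facial reference points,
    normalized by the largest distance so the feature vector is
    scale-invariant.  landmarks: (N, 2) array of (x, y) points."""
    pts = np.asarray(landmarks, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    feats = d[np.triu_indices(n, k=1)]
    return feats / feats.max()

def classify(unknown, gallery):
    """Nearest-neighbour match of an unknown feature vector against a
    gallery of (label, feature_vector) pairs."""
    best = min(gallery, key=lambda item: np.linalg.norm(item[1] - unknown))
    return best[0]
```

Because the features are normalized distance ratios, a probe face captured at a different scale still matches the correct identity.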

Thanks to objects' sparse features, compressive sensing imaging systems have the unique advantage of breaking the Nyquist sampling limit: the target image can be reconstructed from very few random coded observations. Such a system is characterized by simple coding and complex decoding. It is difficult to meet increasing real-time requirements in applications because of the large time consumption of the iterative optimization algorithms. Therefore, a powerful way to improve efficiency is to bypass the complex reconstruction process and extract the target information directly from the random measurement data. In this paper, using the MNIST handwritten digit database as an example, an object recognition method operating on the random measurements of a compressive sensing camera is explored. First, the training samples in the MNIST database are encoded with a random Bernoulli measurement matrix. A k-nearest-neighbor classifier is then constructed on the standardized samples; measurements of the target sample, taken with the same measurement matrix, are fed to the classifier, which outputs the recognition result. The experimental results show that the average recognition rate is 82.8% at a sampling rate of 0.1, and the total time to process 500 images is 0.063 s. In contrast, an experiment with the traditional strategy of first reconstructing and then recognizing yields an average recognition rate of 84.3% and a total time of 48.2 s for 500 images. The proposed method is close to the traditional strategy in recognition accuracy, but the computational efficiency is greatly improved (by a factor of 765), which is of great practical value.
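
The recognition-without-reconstruction pipeline can be sketched as follows, assuming a ±1 Bernoulli measurement matrix and a plain k-NN vote carried out directly in measurement space (all parameter values below are illustrative, not the paper's):

```python
import numpy as np

def bernoulli_matrix(m, n, seed=0):
    """Random +/-1 Bernoulli measurement matrix (m measurements, n pixels)."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, n))

def knn_predict(train_y, train_labels, test_y, k=5):
    """k-nearest-neighbour majority vote directly in measurement space,
    skipping image reconstruction entirely."""
    preds = []
    for y in test_y:
        d = np.linalg.norm(train_y - y, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

Random projections approximately preserve distances between signals, which is why nearest-neighbour classification in the compressed domain can approach the accuracy of classifying reconstructed images.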

To address the complex route characteristics, irregular shapes and fuzzy features encountered in mobile robot vision navigation in unstructured environments, this paper proposes a method based on fuzzy-rough set theory for unstructured path recognition and visual guidance. First, we established an adaptive charge-coupled device (CCD) image definition automatic control algorithm to capture high-definition images of the navigation area. Based on this, a fuzzy-rough set model (F-R model) for unstructured path recognition is developed. On the one hand, the target, background and uncertainty areas are predefined by means of the rough set method according to the gray-level features of the image itself; on the other hand, the iterative relative fuzzy connectedness (IRFC) image ROI delineation algorithm is fused with the rough set method to reclassify the uncertain region and delineate the boundary between the robot's navigation path and non-navigable regions. By establishing this fused F-R model, seed location and path identification can be realized automatically in an unknown unstructured path region without prior knowledge of the environment. The experimental results show that the proposed method is of practical significance for improving the autonomous exploration ability of mobile robots in unstructured environments. Currently, the algorithm and its running speed need to be further optimized for fast path recognition in robot navigation, which will lay the foundation for vision-based high-speed mobile robot navigation.

This article discusses issues related to the development of a new algorithm for the recognition and measurement of marks in optical-electronic angle-measuring devices. The basis of the algorithm is the Hough transform. The article contains the results of functional testing and of measurements using this algorithm.
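
As background, a minimal Hough transform accumulator for straight lines looks like this (an illustration of the general technique, not the article's mark-measurement algorithm): every foreground pixel votes for all lines passing through it, in the normal form rho = x·cos(theta) + y·sin(theta), and the accumulator peak identifies the dominant line.

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Minimal Hough transform accumulator for straight lines.

    Returns (accumulator, thetas, rhos); the accumulator peak gives the
    dominant line in normal form rho = x*cos(theta) + y*sin(theta).
    """
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # one vote per theta: the rho of the line through (x, y)
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, thetas, rhos
```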

Damped sinusoidal signals occur in many fields of science and in practical problems. One of these is adaptive optics systems, where such signals are undesirable and often diminish the system performance. One solution to reject these signals is a method called AVC, which is based on the estimation of the vibration parameters. In recent years, a universal, fast and accurate estimation method has been presented. It can be used to estimate multifrequency signals and can be useful in many cases where the estimation method plays a crucial role. The main idea of this paper is to use it in the AVC method to increase the system performance. Several measurement parameters affect the accuracy and the speed of the estimation method: CiR (number of signal cycles in the estimation process), N (number of signal samples in a measurement window) and H (time window order). There are also parameters that are especially important in practical situations (damped signals with noise and harmonics): SNR, THD and γ (a damping ratio changing in time). Total estimation errors consist of systematic errors and random errors. This paper focuses on the second component, i.e. the case when a signal with γ ≠ 0 is distorted by noise. The results can be very useful from a practical point of view because they give information about the estimation accuracy as a function of noise power for various damping ratio values. The value of the empirical MSE of the frequency estimator is approximately 10⁻³ Hz for SNR = 30 dB, H = 2 and γ = 0.01%.
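
As a simple point of reference (not the AVC estimation method itself), the frequency of a damped, noisy sinusoid can be coarsely estimated from the windowed FFT magnitude peak, refined with parabolic interpolation around the peak bin:

```python
import numpy as np

def estimate_frequency(signal, fs):
    """Coarse frequency estimate of a (possibly damped, noisy) sinusoid
    from the FFT magnitude peak, refined by parabolic interpolation."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
    # parabolic interpolation around the peak bin for sub-bin accuracy
    if 1 <= k < len(spectrum) - 1:
        a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)
    return k * fs / n
```

Damping broadens the spectral peak symmetrically, so the interpolated peak location remains a usable frequency estimate for moderate damping ratios.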

In aquaculture engineering, estimation of chlorophyll concentration is of utmost importance for water quality monitoring. For a particular area, the concentration is a direct indicator of the region's suitability for fish farming. In the literature, various parametric and non-parametric methods have been studied for chlorophyll concentration prediction. In this paper, we pre-process the remote sensing data with a logarithmic transformation, which enhances the data correlation, and follow it with Gaussian Process Regression (GPR) based forecasting. The proposed methodology is validated on the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and the NASA operational Moderate Resolution Imaging Spectroradiometer onboard Aqua (MODIS-Aqua) data sets. Experimental results show the proposed method's efficacy, with enhanced accuracy when using the projected data.
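A minimal, NumPy-only sketch of the pipeline described above: log-transform the data, fit a Gaussian Process Regression model, and back-transform the prediction. The synthetic band-ratio data and the RBF kernel hyperparameters are assumptions; the paper's actual features come from SeaWiFS and MODIS-Aqua.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reflectance band-ratio feature vs. chlorophyll-a concentration,
# loosely mimicking the log-linear relation used by ocean-colour algorithms
x = rng.uniform(0.2, 5.0, 60)
chl = 0.5 * x ** 1.7 * np.exp(0.05 * rng.standard_normal(60))

# Log transform compresses the dynamic range and linearises the relation
X, y = np.log(x)[:, None], np.log(chl)

def gpr_predict(Xtr, ytr, Xte, length=1.0, noise=1e-2):
    """Minimal Gaussian Process Regression mean prediction with an RBF kernel."""
    def k(a, b):
        d2 = (a[:, None, 0] - b[None, :, 0]) ** 2
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return k(Xte, Xtr) @ alpha

pred = np.exp(gpr_predict(X, y, X))   # back-transform to concentration units
```

In practice a library implementation (e.g. scikit-learn's `GaussianProcessRegressor`) would also fit the hyperparameters by maximising the marginal likelihood.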

In this paper, a block reconstruction method for object images based on compressed sensing (CS) and orthogonal modulation is presented. With this method, the amount of data to be processed can be greatly reduced thanks to CS theory, which is convenient for post-processing. The method is especially useful when only part of a huge image needs to be reconstructed: the orthogonal basis matrix can extract the measurements of the corresponding block, so the needed partial image can be reconstructed directly instead of reconstructing the whole image first. The method therefore avoids redundant computation during reconstruction, and the total amount of calculation is greatly reduced. Its feasibility is verified by the results of an experiment in which a video projector is used to incorporate the random measurement matrix into the system.
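The block-extraction idea can be illustrated as follows: if each block's measurements are multiplexed with one column of an orthogonal matrix, a single inner product recovers that block's measurements, which can then be reconstructed alone. The sketch below uses a random Gaussian measurement matrix and plain ISTA as the sparse solver; the block sizes, sparsity levels, and solver choice are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks, n, m = 4, 64, 32          # blocks, pixels per block, measurements (m < n)

# Per-block sparse signals (8 nonzero "pixels" each)
x = np.zeros((n_blocks, n))
for b in range(n_blocks):
    x[b, rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

phi = rng.standard_normal((m, n)) / np.sqrt(m)                   # measurement matrix
Q, _ = np.linalg.qr(rng.standard_normal((n_blocks, n_blocks)))   # orthogonal basis

# Orthogonally modulated, multiplexed measurements: block b's measurement
# vector is weighted by column b of Q, and all blocks are summed together.
y_mux = sum(Q[:, b][:, None] * (phi @ x[b])[None, :] for b in range(n_blocks))

def ista(y, A, lam=0.02, iters=300):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    xk = np.zeros(A.shape[1])
    for _ in range(iters):
        g = xk - (A.T @ (A @ xk - y)) / L
        xk = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
    return xk

k = 2
y_k = Q[:, k] @ y_mux          # orthogonality isolates block k's measurements
x_hat = ista(y_k, phi)         # reconstruct only the needed block
```

Only the wanted block is ever passed to the sparse solver, which is where the computational saving claimed above comes from.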

The main distinguishing feature of planapochromatic objectives is their extended working spectral range. However, they also differ in achieving increased numerical apertures and an extended linear field. The main challenge is the technical feasibility of obtaining rational optical designs.

Texture is one of the most important elements used by the human visual system (HVS) to distinguish different objects in a scene. Early bio-inspired methods for texture segmentation partition an image into distinct regions by setting a criterion based on their frequency response and local properties in order to perform a subsequent grouping task. Nevertheless, correct texture delimitation remains an important challenge in image segmentation. The aim of this study is to generate a novel approach to discriminating different textures by comparing internal and external image content in a set of evolving curves. We propose a multiphase formulation with an active contour model applied to the highest-energy coefficients generated by the Hermite transform (HT). Local texture features such as scale and orientation are reflected in the HT coefficients, which guide the evolution of each curve. This process leads to the enclosure of similar characteristics in a region associated with a level set function. The efficiency of our proposal is evaluated using a variety of synthetic images and real textured scenes.

This paper examines two models of image representation used for optical flow estimation in Particle Image Velocimetry (PIV). The common approach to flow estimation is based on cross-correlation between PIV images. An alternative solution is based on optical flow, which has the advantage of calculating vector fields with much better spatial resolution. Optical flow-based estimation requires calculating temporal and spatial derivatives of the image intensity, which is usually done with finite differences. Due to rapid intensity changes in PIV images caused by particles with small diameters, estimating spatial derivatives with finite differences may lead to numerical errors that render data interpretation limited or even impractical. The present study aims to solve this problem by introducing two algorithms for PIV image processing, which differ in their digital image representation. Both algorithms rely on a PIV image model in which the particle image follows an Airy disc, well approximated by a Gaussian function. Numerical analysis of sample PIV images (uniform and turbulent fields) shows that both methods, in conjunction with the Lucas-Kanade algorithm, allow high-precision estimates of flow-velocity fields.
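The problem with finite differences on narrow particle images can be seen on a 1-D cut through the Gaussian particle model. The particle radius below is an assumed value; the point is only that central differences on a pixel grid visibly misestimate the slope of a particle roughly one pixel wide, whereas the fitted model's derivative is available in closed form.

```python
import numpy as np

s = 1.2                              # assumed particle-image radius [pixels]
x = np.arange(-8, 9, dtype=float)    # pixel grid
I = np.exp(-x**2 / (2 * s**2))       # Gaussian approximation of the Airy disc

dI_fd = np.gradient(I, x)            # central finite-difference derivative
dI_model = -x / s**2 * I             # closed-form derivative of the Gaussian model

err = np.max(np.abs(dI_fd - dI_model))   # worst-case finite-difference error
```

For this particle size the worst-case finite-difference error is a substantial fraction of the true peak slope, which is exactly the kind of error a model-based image representation avoids.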

The aim of this work is to present a technique for non-intrusive velocity vector measurements of micron-sized tracer particles following a fluid flow. The technique is based on Particle Image Velocimetry (PIV). In contrast to conventional PIV, which analyzes light scattered from incident high-energy laser pulses, the technique uses a light sheet produced by a prototype LED-based illuminator. A sequence of exposures of the flow taken by a high-speed camera is analyzed by means of a multi-scale optical flow-based algorithm developed by the authors. The LED illuminator can deliver high-power light pulses at microsecond durations and high repetition rates. Integrated optics produce a light sheet with adjustable thickness and width, enabling the user to measure velocity components either in a plane or in a volume. Compared with the pulsed lasers used in PIV systems, the illuminator has the advantages of low cost, safe operation, and much simpler construction. For experimental verification, velocity vector measurements in a cross-section of a rotary water flow seeded with micron-sized tracer particles have been performed. The velocity vectors have been computed using a multi-scale estimation algorithm based on optical flow and a four-level pyramidal decomposition of the PIV images. To validate our optical flow-based approach, the experimental results have also been analyzed with commercial PIV software that uses image cross-correlation for velocity field estimation.
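A single-window, single-scale Lucas-Kanade step, the innermost building block of a pyramidal algorithm like the one mentioned above, can be sketched as follows. The synthetic Gaussian particle and the 7-pixel window are assumptions, and the authors' actual multi-scale implementation is not reproduced here.

```python
import numpy as np

def lucas_kanade(I0, I1, win=7):
    """Single-scale Lucas-Kanade: least-squares flow (u, v) estimated
    from image gradients inside one window at the image centre."""
    Iy, Ix = np.gradient(I0.astype(float))        # spatial derivatives
    It = I1.astype(float) - I0.astype(float)      # temporal derivative
    c, h = np.array(I0.shape) // 2, win // 2
    sl = (slice(c[0] - h, c[0] + h + 1), slice(c[1] - h, c[1] + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    return np.linalg.lstsq(A, b, rcond=None)[0]   # solves Ix*u + Iy*v = -It

# Synthetic particle image translated by a known sub-pixel displacement
yy, xx = np.mgrid[0:32, 0:32]
def frame(dx, dy):
    return np.exp(-((xx - 16 - dx) ** 2 + (yy - 16 - dy) ** 2) / (2 * 2.0 ** 2))

u, v = lucas_kanade(frame(0, 0), frame(0.3, 0.2))   # expect roughly (0.3, 0.2)
```

A pyramidal scheme applies this step coarse-to-fine, warping the second frame at each level, so that displacements larger than the linearisation range can also be recovered.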

In the present paper, we compare different approaches for estimating the complexity of freeform and aspherical surfaces. We consider two unobscured all-reflective telescope designs: a narrow-field Korsch-type system with a slow freeform secondary and a wide-field Schwarzschild-type system with an extreme freeform secondary. The performance improvement obtained from the use of freeforms is demonstrated. The Korsch telescope provides diffraction-limited image quality over a small 0.8×0.1° field at F/3. The Schwarzschild design covers a large 20×8° field and allows the aperture to be increased from F/6.7 to F/3. We also analyze the freeform shapes using different techniques. It is shown that the usual measures, such as the root-mean-square deviation of the sag, are ineffective. One recommended way to estimate surface complexity is to compute the residual slope and convert it into fringe frequency. A simpler alternative is to compute the sag deviation integral.
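The complexity measures recommended above can be sketched for a hypothetical sag-departure map; the surface coefficients and the HeNe test wavelength are assumptions, not data from the paper's designs.

```python
import numpy as np

# Hypothetical freeform sag departure z(x, y) from the best-fit sphere [mm]
n = 201
x = np.linspace(-10, 10, n)                            # aperture coordinate [mm]
X, Y = np.meshgrid(x, x)
z = 2e-3 * (X**2 - Y**2) / 100 + 5e-4 * X**3 / 1000    # assumed departure map

rms_sag = np.sqrt(np.mean(z**2))                       # classical RMS sag metric

# Residual slope, converted to interferogram fringe frequency (double pass)
dzdy, dzdx = np.gradient(z, x, x)
slope = np.hypot(dzdx, dzdy)                           # [mm/mm]
wavelength = 632.8e-6                                  # HeNe test wavelength [mm]
fringe_freq = 2 * slope / wavelength                   # fringes per mm

# Sag-deviation integral as the simpler complexity estimate
sag_integral = np.mean(np.abs(z)) * (x[-1] - x[0]) ** 2
```

The slope-based metric directly predicts how dense the fringes become in interferometric testing, which is why it correlates with manufacturability better than the RMS sag alone.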

Multiphoton microscopy (MPM) is a method for characterizing biological samples that is becoming increasingly established in life science labs thanks to its label-free imaging ability. In MPM, a few biological substances have been identified as endogenously fluorescent, such as elastin, myosin, keratin, redox indicators, collagen, and certain amino acids. Tryptophan, an amino acid fundamental to protein synthesis, is known for its endogenous fluorescence. In this article, we present an original solution specifically dedicated to multiphoton microscopy with an ultrawide-band laser system. Its specificity lies in the filtering system, based on a prism line that allows spatial shaping of the spectrum. Our custom-designed multiphoton microscope, coupled with a spectrally filtered picosecond ultrawide-band laser, is suited to characterizing the two- and three-photon absorption ranges of tryptophan and to imaging it. On the one hand, this shows that a spectrally filtered picosecond ultrawide-band laser can reach both the two- and three-photon absorption (TPA and ThPA) ranges of this substance. On the other hand, a quantitative comparison of the resulting images shows large differences in image quality: the three-photon image appears better contrasted and better resolved than the two-photon one. An explanation of this highly interesting phenomenon can be proposed from a study of the probability of the multiphoton processes involved and the ThPA and TPA cross-section values. This initial work, combining an innovative multiphoton setup with interesting results, is an important step toward extending the label-free imaging ability of MPM.

Due to recent developments in satellite meteorology, the number of narrow-band spectral channels used in satellite instruments has increased to hundreds and even thousands. They measure radiation over wide ranges, from the ultraviolet to the far infrared. Comparing various approaches to selecting the most informative channels is therefore of scientific and practical interest. In our work, techniques for the optimal choice of spectral channels with fixed and variable widths are considered. Practically all known methods for solving inverse problems use certain a priori information on the required parameters. The following optimal planning methods were used for remote sensing satellite experiments: DRM (analysis of the data resolution matrix); weighting functions (Jacobian analysis); iterations (channel selection driven by entropy reduction); and a combined channel technique (spectral channels with variable width, based on maximizing the determinant of the Fisher information matrix). The methods of best linear estimation and a variational technique were used to solve the inverse problem. The proposed technique was applied to remote sensing of the atmosphere (temperature and humidity profiles) and of surface parameters (sea surface temperature).
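A greedy sketch of channel selection by maximizing the determinant of the Fisher information matrix, the criterion named above for the combined channel technique. The Jacobian and noise variances below are random placeholders, not real weighting functions from any instrument.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Jacobian (weighting functions): sensitivity of 40 candidate
# spectral channels to 5 retrieved parameters (e.g. temperature-profile nodes)
J = rng.standard_normal((40, 5))
noise_var = 0.1 * np.ones(40)         # assumed per-channel noise variances

def greedy_d_optimal(J, noise_var, k):
    """Greedily pick k channels maximizing log det of the Fisher information
    matrix F = J^T diag(1/noise_var) J (D-optimal design)."""
    chosen = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for c in range(J.shape[0]):
            if c in chosen:
                continue
            idx = chosen + [c]
            F = (J[idx].T / noise_var[idx]) @ J[idx]
            d = np.linalg.slogdet(F + 1e-9 * np.eye(J.shape[1]))[1]
            if d > best_logdet:
                best, best_logdet = c, d
        chosen.append(best)
    return chosen

channels = greedy_d_optimal(J, noise_var, 8)
```

Greedy selection is only a heuristic for the combinatorial D-optimal problem, but it is a common and effective one when evaluating every channel subset is infeasible.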

Leaf maturation from initiation to senescence is a phenological event in plants resulting from the influence of temperature and water availability on physiological activities during the life cycle. Detection of newly grown leaves (NGL) is therefore useful in diagnosing tree growth, tree stress, and even climatic change. Many important applications can naturally be modeled as a low-rank plus sparse decomposition. This paper develops a new algorithm and application to detect NGL. It first uses the sparse component as preprocessing to enhance targets, and then applies deep learning to segment the image. The experimental results show that the proposed method can detect targets effectively and decrease the false alarm rate.
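A minimal sketch of the low-rank plus sparse idea: alternate a fixed-rank SVD projection (the slowly varying background) with soft-thresholding (the sparse targets), in the spirit of GoDec-style decompositions. This is not the paper's algorithm; the scene size, rank, and threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic scene: low-rank background plus a few bright sparse "new leaf" pixels
U, V = rng.standard_normal((30, 2)), rng.standard_normal((2, 30))
background = U @ V
rows, cols = rng.choice(30, 5), rng.choice(30, 5)
targets = np.zeros((30, 30))
targets[rows, cols] = 8.0
M = background + targets

def lowrank_sparse(M, rank=2, lam=1.0, iters=50):
    """Alternate a rank-r SVD projection (background estimate) with
    soft-thresholding of the residual (sparse target estimate)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        u, s, vt = np.linalg.svd(M - S, full_matrices=False)
        L = (u[:, :rank] * s[:rank]) @ vt[:rank]
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)
    return L, S

L, S = lowrank_sparse(M)   # S highlights the planted targets
```

In the paper's pipeline, a map like `S` would serve as the enhanced input handed to the deep segmentation network.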

The automatic analysis of 3D models of hands and feet could be useful in many areas. The main idea of the proposed method is to create cross-section 2D images of a scanned 3D model that can later be analysed using any image processing algorithms. These cross-section images are created by placing any number of parallel planes inside the 3D model and calculating the intersections between the 3D model and each plane. The cross-section images can then be used to create a "bone structure" of the scanned model: existing image processing algorithms find the central point of the model in each cross-section image, and these points are connected between consecutive cross sections. This "bone structure" can then be used to determine the orientation of the scanned model in 3D space, so that more cross-section images can be created at specific locations, and to acquire basic information about the shape of the hand or foot, including which fingers are missing, for use when creating the 3D model of a prosthesis. However, acquiring accurate dimensional information about the scanned model requires additional input from the user to define the scale of the 3D model.
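The slicing-and-centroid step can be sketched with a point cloud and constant-z planes. The synthetic bent cylinder below stands in for a scanned finger, and using point-cloud binning instead of exact mesh-plane intersection is a simplification of the method described above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "finger": surface points of a slightly bent cylinder
z = rng.uniform(0, 10, 2000)
theta = rng.uniform(0, 2 * np.pi, 2000)
bend = 0.2 * z                                 # the axis drifts with height
pts = np.stack([bend + np.cos(theta), np.sin(theta), z], axis=1)

def cross_section_skeleton(pts, n_planes=10):
    """Slice the model with parallel constant-z planes and return the
    centroid of each slice: a simple 'bone structure' polyline."""
    lo, hi = pts[:, 2].min(), pts[:, 2].max()
    edges = np.linspace(lo, hi, n_planes + 1)
    centroids = []
    for a, b in zip(edges[:-1], edges[1:]):
        sl = pts[(pts[:, 2] >= a) & (pts[:, 2] < b)]
        if len(sl):
            centroids.append(sl.mean(axis=0))
    return np.array(centroids)

skel = cross_section_skeleton(pts)   # centroids follow the bent axis
```

Connecting consecutive centroids gives the polyline from which the model's orientation in space can be estimated, as the text describes.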

Both light-field and polarization information contain many clues about a scene and can be used in a wide variety of computer vision tasks. However, existing imaging systems cannot simultaneously capture light-field and polarization information. In this paper, we present a low-cost, high-performance miniaturized polarimetric light-field camera based on an array of six heterogeneous sensors. The main challenge for the proposed strategy is aligning multi-view images with different polarization characteristics, especially in regions with a high degree of polarization, where the intensity correlations are commonly weak. To solve this problem, we propose a Convolutional Neural Network (CNN) based stereo matching method for accurately aligning the heterogeneously polarized images. After stereo matching, both the light field and the Stokes vectors of the scene are estimated, and the standard polarimetric quantities, e.g., the polarization angle, the degree of linear polarization, and the degree of circular polarization, are derived. We implemented a prototype of the multi-sensor polarimetric light-field camera and performed extensive experiments on it. The camera achieves real-time live streaming from all six sensors, with the heterogeneous processor of an NVIDIA Jetson TX2 exploited for image processing. Benefiting from multi-sensor parallel polarization imaging and efficient parallel processing, the proposed system achieves promising performance in temporal resolution and signal-to-noise ratio. In addition, we develop object recognition applications to show the advantages of the proposed system.
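Given aligned intensities behind linear polarizers at four angles, the linear Stokes components and the derived polarimetric quantities follow from standard formulas. The sketch below is textbook polarimetry, not the camera's actual processing chain (which also estimates circular polarization).

```python
import numpy as np

def stokes_linear(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind polarizers at
    0, 45, 90 and 135 degrees, plus degree and angle of linear polarization."""
    s0 = i0 + i90                                     # total intensity
    s1 = i0 - i90                                     # horizontal vs. vertical
    s2 = i45 - i135                                   # +45 vs. -45 degrees
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)   # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                   # angle of linear pol.
    return s0, s1, s2, dolp, aolp

# Fully linearly polarized unit-intensity light at 30 degrees:
# Malus's law gives I(theta_pol) = cos^2(theta_pol - 30 deg)
ang = np.deg2rad(np.array([0, 45, 90, 135]))
I = np.cos(ang - np.deg2rad(30)) ** 2
s0, s1, s2, dolp, aolp = stokes_linear(*I)
```

For this input the degree of linear polarization comes out as 1 and the recovered angle as 30°, confirming the convention.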

Electrochromic devices offer wide potential in micro-optics applications owing to their compact setup, low power consumption, and small control voltage. However, the need to use TiO2 nanoparticle layers (NPL) as an efficient electrode material still hinders the development of microstructured optical filters such as a tunable optical iris. Here we suggest replacing the TiO2 NPL with a TiO2 nanotube electrode obtained by electrochemical anodization of thin microstructured titanium layers. This makes the complete fabrication route of electrochromic filter devices possible with MEMS-compatible processes. Process control is addressed as a critical issue, because extended anodization may cause delamination of the TiO2 nanotube film. For the chemisorption of electrochromic viologen molecules, a tempering process is necessary to convert the nanotube film into the anatase phase. Finally, a fully functional disc-shaped tunable intensity filter is presented that relies on the viologen-functionalized titania nanotube electrode.

At present, security holograms are widely used to protect various documents and identity cards against counterfeiting. To verify the authenticity of security holograms on documents, devices are needed that exclude the influence of the human factor and increase the speed and reliability of identification. The paper presents an automatic optoelectronic scanner for operational verification of the authenticity of security holograms on documents. The data processing algorithm, the operating principle of the scanner, and its design are described. The use of modern scanning systems, high-speed recording devices, and specially designed and manufactured optical components, together with correlation filters in the algorithm for recognizing the information received from the hologram, significantly reduces the time and raises the reliability of the authenticity verification process.

This paper analyses the possibility of obtaining a full-color image using a holographic indicator based on a light guide plate with diffractive optical elements (DOEs). The parameters of the optical collimator system are calculated, and experimental results on the generation of full-color images are presented.

Most optical encryption techniques use not only the light intensity distribution, easily registered with photosensors, but also its phase distribution. This provides better encryption strength, but it requires holographic registration to capture the phase as well as the intensity, and it is accompanied by speckle noise arising from coherent illumination. These factors lead to poor decryption quality.

Optical encryption with spatially incoherent illumination does not have the drawbacks inherent in coherent techniques, but it provides lower security.

State-of-the-art encryption techniques implement asymmetric encryption, which means that no encryption keys are exchanged between the sender and the receiver. If encrypted messages are intercepted, an attacker cannot decrypt them. Several asymmetric optical encryption techniques based on DRPE exist. Typically, the light phase distribution serves as a public key, while the amplitude distribution serves as a secret key. However, no such techniques implement spatially incoherent illumination, because of the limitation of amplitude-only registration. We propose, for the first time, an asymmetric optical encryption technique using spatially incoherent illumination. The procedure is as follows. User 1 optically encrypts information using key 1 and sends it to user 2. User 2 encrypts the received data using key 2 and sends it back to user 1. To verify the identity of user 2, user 1 checks whether the received data correspond to certain parameters that are unique to user 2 and serve as an additional secret key. If the identity check passes, user 1 decrypts the received data using key 1 and sends them back to user 2. Finally, user 2 decrypts the received data using key 2 and obtains the information. Results of computer simulations of asymmetric optical encryption with spatially incoherent illumination are presented.
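The message flow described above matches a three-pass protocol, which requires the two encryption operations to commute. The toy sketch below uses XOR keystreams purely to show that commutative flow; XOR three-pass is known to be insecure and is in no way the optical scheme proposed in the paper.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """XOR keystream 'encryption': commutative, so the order in which the
    two users remove their layers does not matter. Illustration only."""
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"optical payload"
key1 = os.urandom(len(msg))      # user 1's secret key, never transmitted
key2 = os.urandom(len(msg))      # user 2's secret key, never transmitted

pass1 = xor(msg, key1)           # user 1 -> user 2
pass2 = xor(pass1, key2)         # user 2 -> user 1 (doubly encrypted)
pass3 = xor(pass2, key1)         # user 1 removes key 1, sends back
plain = xor(pass3, key2)         # user 2 removes key 2 and reads the message
```

Note that only encrypted data ever crosses the channel, and neither key leaves its owner, which is the property the optical scheme above also relies on.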

Digital holography allows the shape and parameters of 2D objects and 3D scenes to be displayed. This is achieved by recording an interference pattern with an imaging sensor (CCD, CMOS, etc.) and processing it further. As a result, the amplitude and phase of the investigated object can be obtained. Cross-sections of the registered scene, or the full 3D image, can be reconstructed numerically (using a computer) or optically (using spatial light modulators). Several factors restrict the quality of images reconstructed from digital holograms: speckle noise, the twin image and zero order, the camera sensor's temporal noise (especially shot noise), and fixed-pattern noise (especially photo-response non-uniformity). The most popular methods for improving hologram quality are based on digital filtering techniques. Despite the quality increase, several parameters of the reconstructed images (such as resolution or contrast) are usually somewhat, or even significantly, degraded. In this work, the temporal and spatial noises of the sensor are accurately eliminated without degrading the resolution of the digital hologram. The effect of camera noise on the reconstructed images is also investigated, using the characteristics of cameras of different types. Numerical experiments were conducted using standard parameters of holographic recording setups in the visible spectrum (532 nm). The quality of the reconstructed images was estimated by signal-to-noise ratio values. Images reconstructed from digital holograms with various temporal and fixed-pattern noise values are shown, and results of improving digital hologram quality by eliminating camera noise are presented.
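Two of the sensor-noise mechanisms mentioned above admit simple corrections: temporal noise averages down over repeated frames, and a calibrated fixed-pattern map can be subtracted. The sketch below demonstrates this on synthetic data; the noise magnitudes and the assumption of a perfectly known fixed-pattern map are simplifications, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

h, w, frames = 64, 64, 32
signal = rng.uniform(50, 200, (h, w))    # "true" hologram intensity (synthetic)
fpn = rng.normal(0, 5, (h, w))           # fixed-pattern noise (per-pixel offset)

def capture():
    """One raw frame: signal + fixed pattern + temporal (shot-like) noise."""
    return signal + fpn + rng.normal(0, 8, (h, w))

# Temporal noise: average many frames; fixed pattern: subtract a calibrated map
avg = np.mean([capture() for _ in range(frames)], axis=0)
corrected = avg - fpn                    # assumes the FPN map was pre-calibrated

rms_raw = np.sqrt(np.mean((capture() - signal) ** 2))
rms_corr = np.sqrt(np.mean((corrected - signal) ** 2))
```

Averaging N frames reduces the temporal component by a factor of √N, while the fixed pattern, being identical in every frame, survives averaging and must be removed by subtraction, which is why both steps are needed.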

The application area of unmanned aerial vehicles has grown significantly in recent years due to progress in hardware and in algorithms for data acquisition and processing. Object detection and classification (recognition) in imagery acquired by unmanned aerial vehicles are key tasks for many applications, and in practice an operator usually solves them. The growing amount of data of different types and natures makes deep machine learning possible, which nowadays shows state-of-the-art results for object detection and recognition. Two key problems must be solved to apply deep learning to object recognition in multi-spectral imagery: (a) the availability of a representative dataset for neural network training and testing, and (b) an effective way to fuse multi-spectral data during neural network training. The paper proposes approaches to solving these problems. To create a representative dataset, synthetic infrared images are generated using several real infrared images and a 3D model of a given object. A technique for realistic infrared texturing, based on accurate infrared image exterior orientation and 3D model pose estimation, is developed. It allows datasets of the required volume to be produced in an automated mode for deep learning, and ground truth data for neural network training and testing to be generated automatically. Two approaches to multi-spectral data fusion for object recognition are developed and evaluated: data-level fusion and results-level fusion. The results of evaluating both techniques on the generated multi-spectral dataset are presented and discussed.
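The two fusion modes can be stated concretely: data-level fusion stacks the co-registered modalities into one input tensor, while results-level fusion combines per-modality classifier outputs. The toy sketch below only illustrates this data plumbing; the placeholder "classifier" and the equal score weights are assumptions, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-ins for co-registered visible and infrared patches
vis = rng.random((8, 8))
ir = rng.random((8, 8))

# Data-level fusion: stack modalities into one multi-channel input tensor,
# which a single network then consumes
fused_input = np.stack([vis, ir], axis=0)          # shape (2, 8, 8)

def classifier_score(patch):
    """Placeholder per-modality classifier: mean intensity as a 'score'."""
    return float(patch.mean())

# Results-level fusion: run one model per modality, then combine the scores
fused_score = 0.5 * (classifier_score(vis) + classifier_score(ir))
```

The trade-off the paper evaluates is that data-level fusion lets the network learn cross-modality features, while results-level fusion keeps the per-modality models independent and easier to train.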

The article deals with the design features of helmet-mounted displays: the features of individual elements and the requirements imposed on some of them. One of the most important aberrations to take into account when designing such systems, non-centered distortion, is also considered, together with aspects of modeling this distortion and ways of compensating for it.

A new method of pattern recognition is presented, based on obtaining photoanisotropic copies on a dynamic polarization-sensitive material. The amplitude image of the object is illuminated by linearly polarized light at a wavelength actinic for this material. As a result, a photoanisotropic copy of the image is induced in the polarization-sensitive material. In the recognition process, the photoanisotropic copy is illuminated by circularly polarized light of a non-actinic wavelength. A distribution of elliptical polarization arises behind the photoanisotropic copy and reduces to a summary ellipse in the Fraunhofer diffraction region. The parameters of this ellipse are related to the characteristics of the original object and uniquely identify it. The polarization-holographic diffraction element we developed makes it possible to determine the summary ellipse parameters, i.e., to obtain all the Stokes parameters in real time, and to compare the results with the etalon of the recognizable object in a database. The invariance of the method to the position, scale, and rotation of the pattern is investigated, and its resolution and sensitivity are determined. The dynamic polarization-sensitive materials are reversible, with a practically unlimited number of recording-deleting cycles. To obtain a photoanisotropic copy of another object on the same material, the previous copy is deleted with a pulse of circularly polarized actinic light, after which the next copy can be recorded. A laboratory model of the recognition device, appropriate software, and a theoretical model were created. A database was compiled using images of various objects.

This paper proposes applying Bi-dimensional Empirical Mode Decomposition (BEMD) to the dense disparity estimation problem. BEMD is a fully data-driven method that needs no predetermined filter or wavelet functions; it is locally adaptive and has clear advantages in analyzing non-linear and non-stationary signals. First, we decompose the original stereo images with the 2D sifting process of BEMD, obtaining a series of Intrinsic Mode Functions (IMFs) and a residue, where the residue represents the DC component of the signal. Second, the residue is subtracted from the original image; the resulting two-dimensional signals can be considered free of disturbing components such as noise and illumination. Subsequently, to obtain robust local structure information, the Riesz transform is used to construct the corresponding 2D analytic (monogenic) signals of the images. Third, the local phase information of the analytic signals is extracted. The similarity of the local phase of the stereo images, instead of local intensity information, is taken as the basis for calculating the matching cost, which reveals local structure more robustly. Finally, a dense disparity map is estimated with the proposed method, using the winner-takes-all (WTA) strategy to compute the disparity of each pixel separately. A comparative experiment against intensity-based methods shows that rather good results are achieved.
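Why local phase is more robust than intensity can be shown in 1-D with the analytic signal, the 1-D counterpart of the 2-D Riesz/monogenic signal used in the paper: two signals with the same structure but different gain and offset have nearly identical local phase. The DC removal below mirrors subtracting the BEMD residue; the signals themselves are assumptions for illustration.

```python
import numpy as np

def analytic_phase(sig):
    """Local phase via the 1-D analytic signal (FFT-based Hilbert transform);
    a 1-D stand-in for the 2-D Riesz/monogenic phase."""
    n = len(sig)
    spec = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.angle(np.fft.ifft(spec * h))

x = np.arange(256) * (4 * np.pi / 256)   # exactly two periods on the grid
left = np.sin(x)
right = 0.6 * np.sin(x) + 0.2            # same structure, different gain/offset

# Remove the DC component first, as subtracting the BEMD residue does
phase_l = analytic_phase(left - left.mean())
phase_r = analytic_phase(right - right.mean())

cost_intensity = np.mean((left - right) ** 2)            # large: gain/offset differ
cost_phase = np.mean(np.angle(np.exp(1j * (phase_l - phase_r))) ** 2)  # near zero
```

The intensity-based cost is dominated by the photometric mismatch, while the phase-based cost is essentially zero, which is why phase makes a better matching cost between differently exposed stereo views.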
