With his exceptional scientific expertise, Professor John Caulfield was involved with Physical Optics Corporation (POC) from its inception in 1985 to the present. This paper reviews his recent work at POC, which spans both optical and less well-known nonoptical scientific and engineering areas.

We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences into the flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. We then examine how the intensive efforts in mapping the human brain have been influenced by concepts from computer science, control theory, and electronic circuits. We discuss the two neural pathways that use input from the visual sense to determine navigational options and to recognize objects: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?). We describe the reflexive and conscious decision centers in the cerebral cortex involved in visual attention and gaze control; interestingly, these require a return path through the midbrain for ocular muscle control. We note that cognitive psychologists currently study the human brain using low-spatial-resolution fMRI with a temporal response on the order of a second, while in recent years life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and the inferior parietal lobe, which together constitute the fronto-parietal attention network. Finally, we note that ethical and experiential learning shapes our conscious decisions.

Forward and backward propagation can both be modeled in terms of Wigner distributions or, alternatively, in terms of spatial frequency transfer functions. We use both formalisms to show that, for non-evanescent waves, forward propagation in a negative-index material is equivalent to backward propagation in a positive-index medium. We consider the implications of this fact for several specific problems, including imaging 3D objects, imaging through thin phase screens, Fresnel holography, and speckle.
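In the transfer-function formalism, this equivalence can be stated in one line. As a minimal sketch, assuming the standard angular-spectrum transfer function of a homogeneous medium of refractive index $n$, a non-evanescent plane-wave component propagates as

```latex
H_{n}(f_x,f_y;z) = \exp\!\left[\, i\,2\pi z \sqrt{(n/\lambda)^2 - f_x^2 - f_y^2}\,\right],
\qquad (n/\lambda)^2 > f_x^2 + f_y^2 .
```

In a negative-index medium the longitudinal wavenumber reverses sign, so

```latex
H_{-|n|}(f_x,f_y;z) = \exp\!\left[\,-\,i\,2\pi z \sqrt{(|n|/\lambda)^2 - f_x^2 - f_y^2}\,\right]
= H_{+|n|}(f_x,f_y;\,-z),
```

i.e. forward propagation through a distance $z$ in the negative-index material acts on each non-evanescent component exactly like backward propagation through $z$ in the positive-index medium.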

3D measurement using structured light is well known in industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to obtain 3D information. Originally suggested by John Caulfield around 1990, the method has some interesting potential, particularly if combined with alternatives to traditional triangulation such as depth-from-focus methods. A central reference point in the projected pattern may offer capabilities not as easily attained with a linear grating pattern. This paper explores the pros and cons of the method and presents some examples of possible applications.

We propose the use of a pair of phase masks, which have both radial and angular variations, for implementing several varifocal devices. One mask of the proposed pair has a complex amplitude transmittance that is the complex conjugate of that of the other member of the pair. We show that the overall complex amplitude transmittance has only a radial variation after introducing an in-plane rotation, say by an angle β, between the members of the pair. Moreover, the resulting optical power is proportional to the rotation angle β. As examples of the proposed method, we show that the refractive pair is useful for implementing varifocal lenses, tunable axicons, controllable axilenses, and annularly distributed focalizers.
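One well-known construction consistent with this description is a moiré-type lens pair (the specific mask profile below is an assumption for illustration, not taken from the abstract): each mask carries a phase that grows linearly with the azimuthal angle,

```latex
t_1(r,\theta) = e^{\, i a r^2 \theta}, \qquad
t_2(r,\theta) = t_1^{*}(r,\theta) = e^{-i a r^2 \theta}.
```

Rotating the second mask in-plane by β gives the combined transmittance

```latex
t_1(r,\theta)\, t_2(r,\theta-\beta) = e^{\, i a r^2 \beta},
```

a purely radial quadratic phase. Comparing with the thin-lens phase $\exp(-i\pi r^2/\lambda f)$ gives an optical power of magnitude $|1/f| = \lambda a \beta/\pi$, proportional to the rotation angle β (up to the sector discontinuities caused by 2π phase wrapping).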

A novel phase-shifting technique, Doppler phase shifting, is introduced to interferometry and holography. Its principle, features, and advantages are discussed, as is the use of multiple wavelengths in this method. Applications such as shape measurement and color holography are presented.

This paper develops an algorithm for autonomous tracking of a person (target) within a crowded and temporally dynamic scene using a multispectral imaging system. The camera is stationary, the field of view is static, and the sensor pixel footprint is on the order of one inch. The operator designates the target to be tracked by selecting a single target pixel in the first image frame, preferably close to the center of mass of the observable portion of the target in that particular frame. Following the initial designation, the algorithm tracks the target autonomously in real time with minimal latency. The tracking algorithm is based on a novel temporally adaptive spatial-spectral filter bank used to detect target presence, or lack thereof, in the field of regard of the video frame produced by the multispectral camera. The theory of the temporally adaptive spatial-spectral filter is an extension of our earlier work on the enhanced matched filter bank (EMFB). The concept of the EMFB is founded on the theory of the spatial matched filter, which is the optimal correlation filter for detecting a known image corrupted by noise.
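The EMFB itself is not specified in this abstract; as a minimal sketch of the underlying spatial matched filter (the scene, template, and noise level below are illustrative assumptions), an FFT-based correlation can localize a known pattern in a noisy frame:

```python
import numpy as np

def matched_filter_detect(scene, template):
    """Correlate the scene with a zero-mean template via FFT and
    return the (row, col) of the correlation peak."""
    kern = template - template.mean()         # zero-mean matched-filter kernel
    S = np.fft.fft2(scene)
    K = np.fft.fft2(kern, s=scene.shape)      # zero-padded to the scene size
    corr = np.real(np.fft.ifft2(S * np.conj(K)))
    return np.unravel_index(int(np.argmax(corr)), corr.shape)

# Embed a known 8x8 target in a noisy 64x64 frame and recover its location.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, (64, 64))
template = np.ones((8, 8))
template[2:6, 2:6] = 3.0
scene[20:28, 30:38] += template
print(matched_filter_detect(scene, template))  # → (20, 30)
```

Run once per frame, the peak location provides the per-frame detection that a temporally adaptive filter bank could then refine.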

Multiple surface plasmon-polariton (SPP)-wave modes can be guided by the interface of a metal and a chiral sculptured thin film (STF). Theory predicts that the angular locations of SPP-wave modes will be shifted if the void regions of the chiral STF are infiltrated with a liquid. Therefore, chiral STFs of lanthanum fluoride were fabricated and employed as a partnering dielectric material to an aluminum thin film to guide multiple SPP-wave modes. The SPP-wave modes shifted to higher angular locations when the refractive index of the infiltrant was increased, exhibiting sensitivity comparable to state-of-research values. Thereby, surface multiplasmonics was exploited for optical sensing.

In this work we review the use of spatial light modulators (SLMs) for optical processing applications involving colour management. We include pioneering results in collaboration with H. J. Caulfield, where colour information was introduced into an optical correlator by means of gratings with different orientations, frequencies and amplitudes. Nowadays SLMs are used to manage colour in applications that include colour digital holography, multispectral and hyperspectral filtering, polarimetric sensing, and pulse-shaping systems. Here we review techniques for the spectral characterization of liquid crystal SLMs and some of the advances in their use for the above-mentioned applications.

Compressive sensing is a relatively new theory that has introduced a dramatic breakthrough in signal acquisition. In the context of imaging, it asserts that for common types of objects, and with proper system design, it is possible to capture N²-pixel images with far fewer than N² measurements. This implies that it is possible to capture signals with a larger space-bandwidth product than that of the system. Implementation of compressive imaging (CI) systems requires optical design that differs drastically from that of conventional imaging. Fortunately, CI design may benefit from concepts previously developed by Prof. Caulfield and others for optical signal processing.
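The optical architecture is beyond a short example, but the core mathematical claim (far fewer measurements than pixels) can be demonstrated numerically. A minimal sketch, assuming a random Gaussian measurement matrix and greedy orthogonal matching pursuit for recovery (the dimensions and sparsity below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                            # pixels, measurements (m << n), sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                                      # m compressive measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build the sparse support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))  # ≈ 0: the sparse image is recovered exactly
```

Here a 64-sample sparse signal is recovered from 32 measurements; for images the same principle applies with N² pixels and a physically realized measurement matrix.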

The reflection of sunlight from the sea surface is called the glitter pattern. In previous works analyzing the one-dimensional case, the glitter function was mathematically described as a rect function, which has proved to be a very good representation of the glitter pattern. In this paper a Gaussian glitter function is used as a first approximation to the rect function. The statistical relationship between the variance of the image intensities (the glitter pattern) and the variance of the sea-surface slopes is obtained and analyzed. The analytical solutions for this relationship are mathematically different, but their graphs are very similar.
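As a numerical illustration of this comparison (every parameter value below is an assumption for illustration, not taken from the paper), one can sample Gaussian-distributed surface slopes and compare the intensity statistics produced by a rect glitter function against an equal-area Gaussian one:

```python
import numpy as np

rng = np.random.default_rng(1)
slope_sigma = 0.1                         # assumed std of sea-surface slopes
slopes = rng.normal(0.0, slope_sigma, 200_000)

half_width = 0.15                         # assumed half-width of the rect glitter
rect_intensity = (np.abs(slopes) <= half_width).astype(float)

# Gaussian glitter with the same area under the curve:
# sqrt(2*pi) * s = 2 * half_width
s = 2.0 * half_width / np.sqrt(2.0 * np.pi)
gauss_intensity = np.exp(-slopes**2 / (2.0 * s**2))

print(rect_intensity.var(), gauss_intensity.var())
```

The two intensity variances follow analytically different expressions while the glitter functions themselves remain close in shape, in the spirit of the paper's observation.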

Early in our careers, John Caulfield and I simultaneously had similar ideas in the field of holography and optics. I’ll discuss two. The first was that, unknown to each other, he published a paper and I filed a patent on the same topic at about the same time. The other topic was one that he and I worked on together to apply for a patent on a topic that we then knew was of mutual interest.

I came to know Caulfield as a graduate student while developing techniques to quantitatively evaluate the coherence properties of pulsed ruby and YAG laser beams during the first decade of their evolution. We continued our professional acquaintance until 2011 through various yearly conferences. At the 2011 fourth biennial conference "The Nature of Light: What Are Photons?" [1], Caulfield gave a paper on this topic and privately expressed his deep concern that the optical "holographic principle" has been hijacked by cosmologists based upon an insufficient understanding of the physical processes behind the generation and reconstruction of optical holograms. Unlike our material universe, holographic images do not exist as touchable objects. Now, in his absence, I have taken the liberty of presenting his views about the holographic principle and extending them to further challenge the prevailing hypothesis that the cosmological redshift is a purely optical Doppler shift, which has led to the postulate that the current universe is expanding rapidly. Rigorously speaking, the core problem arises when we assign reality to human-interpreted information derived from experimental data, which can never capture the complete behavioral properties of any cosmological object we try to characterize. In holography, an object is a touchable reality. Scattered light from an object brings incomplete, but sufficient, information about the object to construct a decent hologram, which records phase and amplitude information indirectly as intensity fringes. The reconstructed IMAGE, however, does not represent the original touchable reality; moreover, the image is further degraded relative to the already insufficient information recorded on the hologram. Physical theories should be based upon our need to map the physical processes behind the phenomenon under study.
Information is a subjective human interpretation of measurable parameters registered by instruments whose registration fidelities are always less than 100%. We illustrate this point by further criticizing the postulate of the "expanding universe," analyzing the optical Doppler shift as a function of two velocities, those of the source atoms and those of the detector atoms in the coronas of stars in different galaxies with respect to stationary space, instead of just the relative velocities between all possible pairs of galaxies.

We shall discuss the origin, the development, and the future of holography. We shall show that there are basically two types of holography, namely the transmission type of Leith and the reflection type of Denisyuk. Although the original purpose of developing holography was to produce 3D imagery, it now has a much wider domain of application, far beyond that legacy.

Optical systems that can recover both the amplitude and phase of a scattered wave field are important for a range of different practical imaging and metrology applications. In this manuscript we examine two different techniques: (A) Fresnel-based digital holography and (B) Teague's transport-of-intensity phase retrieval technique, using a special analytical function that serves as the scattered wave field we would like to recover. Nowadays both systems use modern CCD or CMOS arrays to make the necessary intensity measurements. In system (A) an ideal plane-wave reference field is required and should overlap, and interfere, with the scattered field at the CCD plane. The resulting intensity distribution recorded by the CCD is a digital hologram. If several captures are recorded, where the phase of the reference has been changed (stepped) between captures, it is possible to recover an approximation to the complex amplitude of the scattered wave field. In system (B) no reference field is needed, which is a significant advantage from a practical implementation point of view. Rather, the intensity of the scattered wave field has to be measured at two axially displaced planes. We expect that the performance of both systems will be fundamentally limited by at least three separate factors: (i) the finite extent of the CCD array, (ii) the finite extent of the CCD pixels, which average the light intensity incident upon them, and (iii) the sampling operation that occurs because the intensity is recorded at a set of uniformly spaced discrete locations. In this manuscript, we examine how factors (i) and (iii) affect the imaging performance of each system by varying the spatial frequency extent of the scattered wave field. We find that system A has superior performance compared to system B.
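For system (A), the phase-stepping recovery described above can be sketched with the standard four-step algorithm (the test field and reference amplitude below are illustrative assumptions; detector effects (i)-(iii) are ignored):

```python
import numpy as np

# A known complex "scattered" field used as ground truth.
x = np.linspace(-1.0, 1.0, 256)
U = 0.8 * np.exp(1j * 2.0 * np.pi * x**2)

r = 2.0                                    # real reference-wave amplitude
steps = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]

# Four digital holograms, with the reference phase stepped between captures.
I = [np.abs(U + r * np.exp(1j * p))**2 for p in steps]

# Four-step recovery: I0 - I2 = 4r*Re(U), I1 - I3 = 4r*Im(U).
U_rec = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / (4.0 * r)

print(np.max(np.abs(U_rec - U)))  # ≈ 0 for this noise-free, continuous model
```

In a real system the finite array, pixel averaging, and sampling listed as factors (i)-(iii) would turn this exact recovery into an approximation.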

Inspired by recent results of artificial color due to Caulfield, we carry out intuitive experimental investigations on color sensing under microwave illumination. Experiments have been carried out using a Gunn diode as the microwave source and a microwave diode as a detector. More precise experimental studies have also been carried out utilizing a vector network analyzer. Preliminary results of the experiments validate the feasibility of sensing and discriminating otherwise visual colors under microwave illumination. Caulfield's presumption possibly paves the way for artificial color perception using microwaves.

We report recent work on robust object detection in high-resolution aerial imagery of urban environments for Intelligence, Surveillance and Reconnaissance (ISR) missions. Our approaches used the simple linear iterative clustering (SLIC) algorithm, which combines regional and edge information to form superpixels. The irregularity in size and shape of the superpixels, measured with the Hausdorff distance, served to determine the salient regions in the very large aerial images. Car detection was then performed with both a component-based approach and feature-based approaches. We merged the superpixels with the statistical region merging (SRM) algorithm. The regions were described by radiometric features, geometrical moments, and shape features, and classified using a support vector machine (SVM). Cast shadows were detected and removed by a radiometry-based tricolor attenuation model (TAM). Detection of object parts is less sensitive to occlusion, rotation, and changes in scale, view angle, and illumination than detection of the object as a whole; the object parts were combined into the object according to their unique spatial relations. In addition, we used scale-invariant feature transform (SIFT) features to describe superpixels and classified them with the SVM as belonging or not to the object. Throughout this recent work we still trace the brilliant early ideas of H. John Caulfield and other pioneers of optical pattern recognition, who improved the discrimination of the matched spatial filter with linear combinations of cross-correlations, ideas that have been inherited, transformed, and reinvented to achieve tremendous progress.

Significant reduction of energy dissipation in computing can be achieved by addressing the theoretical lower limit of energy consumption and replacing arrays of traditional Boolean logic gates with other methods of implementing logic operations. In particular, a slight modification of the concept of computing allows the incorporation of fundamentally lossless optical processes as part of the computing operation. While the new concepts introduced here can be implemented electronically or by other means, using optics also eliminates the energy dissipation involved in the translation of electric charges. A possible realization of the indicated concepts is based on directed logic networks composed of reversible optical logic gate arrays.
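The abstract does not specify the gate set; as an illustrative sketch, the Fredkin (controlled-swap) gate is a standard reversible, universal logic primitive of the kind such gate arrays could implement:

```python
def fredkin(c, a, b):
    """Reversible controlled-swap: if the control c is 1, swap a and b.
    No input information is erased, so no Landauer dissipation is required."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: applying the gate twice restores the inputs.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# Universality example: with b = 0, the third output computes AND(c, a).
print(fredkin(1, 1, 0)[2])  # → 1
```

Because every output pattern maps back to a unique input pattern, a network of such gates can in principle compute without the per-bit erasure cost that irreversible Boolean gates incur.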

Optical systems with feedback are, in general, nonlinear dynamic systems. As such, they exhibit evolutionary behavior. In this paper we present results of an experimental investigation of the evolutionary dynamics of several models of such systems. The models are modifications of the famous mathematical "Game of Life". The modifications are twofold: the "Game of Life" rules are made stochastic, and the mutual influence of cells is made spatially non-uniform. A number of new phenomena in the evolutionary dynamics of the models are revealed:
- "Ordering of chaos": formation, from seed patterns, of stable maze-like patterns with chaotic "dislocations" that resemble patterns frequently found in nature, such as the skin patterns of some animals and fishes, sea shells, fingerprints, and magnetic domain patterns. These patterns and their fragments exhibit a remarkable capability of unlimited growth.
- "Self-controlled growth" of chaotic "live" formations into "communities" bounded, depending on the model, by a square, hexagon, or octagon, until they reach a certain critical size, after which the growth stops.
- "Eternal life in a bounded space" of "communities" after reaching a certain size and shape.
- "Coherent shrinkage" of "mature" "communities" (those that have reached a certain size) into one of several stable or oscillating patterns, preserving the isomorphism of their bounding shapes until the very end.
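A minimal sketch of a stochastic "Game of Life" step of the kind described (the update-probability parameter is an assumption; the paper's spatial non-uniformity could be modeled by making it an array):

```python
import numpy as np

def life_step(grid, p_rule=1.0, rng=None):
    """One update of a stochastic Game of Life on a torus.
    Each cell follows Conway's rules with probability p_rule and
    otherwise keeps its previous state."""
    rng = rng or np.random.default_rng()
    neighbours = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)
                     if (i, j) != (0, 0))
    conway = (neighbours == 3) | ((grid == 1) & (neighbours == 2))
    follow = rng.random(grid.shape) < p_rule
    return np.where(follow, conway, grid == 1).astype(int)

# With p_rule = 1 the model reduces to the deterministic Game of Life:
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1                       # horizontal "blinker"
print(life_step(g, p_rule=1.0))     # the blinker turns vertical
```

Lowering `p_rule` below 1 (or varying it across the grid) introduces the stochastic, spatially non-uniform evolution whose emergent patterns the paper studies.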
