It has been observed that every photon is, in a sense, virtual, being emitted and then sooner or later absorbed. As the motif of a quantum radiation state, the photon shares the characteristics of any virtual state: it is not directly observable, and it can signify only one of a number of indeterminable intermediates between matter states that are directly measurable. Nonetheless, other traits of real and virtual behavior are usually quite clearly differentiable. How 'real', then, is the photon? To address this and related questions it is helpful to look in detail at the quantum description of light emission and absorption. A straightforward analysis of the dynamic electric field, based on quantum electrodynamics, reveals not only the entanglement of energy transfer mechanisms usually regarded as 'radiative' and 'radiationless'; it also gives significant physical insight into several other electromagnetic topics. These include: the propagating and non-propagating character of electromagnetic fields; near-zone and wave-zone effects; transverse and longitudinal character; the effects of retardation; manifestations of quantum uncertainty; and issues of photon spin. As a result it is possible to gain a clearer perspective on when, or whether, the terms 'real' and 'virtual' are helpful descriptors of the photon.

The nature of physical objects cannot be clarified independently of our concepts of space and time. We present arguments to show that neither the classical 3D-space plus 1D-time picture nor the 4D spacetime of special relativity provides a satisfactory theoretical framework to this end once we encounter non-classical objects. General relativity is perhaps able to accomplish this task, but it does so only at the expense of rendering empty physical space neither isotropic nor homogeneous. Waves are not candidates to represent fundamental objects. We use the celebrated example of Compton scattering to argue that the full description of the experiment makes use of both wave-like and particle-like behavior in the early quantum-mechanical formulations. The later quantum field theoretical descriptions of the same phenomenon abandon causality. We present model arguments from modern particle physics experiments that the photon may be a hadron, at least part of the time.

A radio engineer can hardly conceive of a smaller amount of electromagnetic radiation than that given by a single oscillation cycle of a unit charge in a dipole. When solved from Maxwell's equations for a dipole of one wavelength, the energy of the emitted radiation cycle obtains the form Eλ = (2/3)hf, where the Planck constant h can be expressed in terms of the unit charge, e, the vacuum permeability, μ0, the velocity of light, c, and a numerical factor as h = 1.1049·2π³e²μ0c = 6.62607×10⁻³⁴ kg·m²/s. A point emitter like an atom can be regarded as a dipole in the fourth dimension. The length of such a dipole is measured in the direction of the line element c·dt, which in one oscillation cycle means the length of one wavelength. For a dipole in the fourth dimension, the three space directions are in the normal plane, which eliminates the factor 2/3 from the energy expression, thus leading to Planck's equation Eλ = hf for the radiation emitted by a single electron transition in an atom. The expression for the Planck constant obtained from Maxwell's equations leads to a purely numerical expression for the fine structure constant, α = 1/(1.1049·4π³) ≈ 1/137, and shows that the Planck constant is directly proportional to the velocity of light. When applied to Balmer's formula, the linkage of the Planck constant to the velocity of light shows that the frequency of an atomic oscillator is directly proportional to the velocity of light. This implies that the velocity of light is observed as constant in local measurements. Such an interpretation makes it possible to convert relativistic spacetime with variable time coordinates into space with variable clock frequencies in universal time, and thus include relativistic phenomena in the framework of quantum mechanics.
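The numerical claims above are easy to check. A minimal sketch, using CODATA values for e, μ0 and c and taking the factor 1.1049 from the abstract as given:

```python
import math

# CODATA values for the constants entering the claimed expression for h.
e = 1.602176634e-19      # elementary charge [C]
mu0 = 1.25663706212e-6   # vacuum permeability [N/A^2]
c = 2.99792458e8         # speed of light [m/s]

# Claimed closed form: h = 1.1049 * 2 * pi^3 * e^2 * mu0 * c
h_model = 1.1049 * 2 * math.pi**3 * e**2 * mu0 * c

# Claimed inverse fine-structure constant: 1/alpha = 1.1049 * 4 * pi^3
alpha_inv = 1.1049 * 4 * math.pi**3

print(h_model)    # close to 6.62607e-34
print(alpha_inv)  # close to 137.04
```

Both numbers do land near the quoted values, which confirms only the arithmetic of the abstract's formulas, not their physical derivation.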

The intention of this paper is to underscore that, to understand fundamentally new properties of light beams, we must first find the limits of the semiclassical model in explaining optical interference phenomena. We claim that we have not yet reached that limit. Careful analysis of the processes behind detecting fringes indicates that the effect of superposition of multiple optical beams can become manifest only through the mediation of the detecting dipoles. Since the detectors are quantum mechanical, (i) the observed effects are different for different detectors exposed to the same superposed light beams, and (ii) the detectors are only capable of registering a discrete number of "clicks", whose rate varies with the incident intensity. A reduced rate of "clicks" at very low intensity does not prove that light consists of indivisible packets of energy. We have also experimentally demonstrated that neither (i) Fourier synthesis nor (ii) Fourier decomposition actually models the behavior of EM fields under all possible circumstances. Superposed light beams of different frequencies do not synthesize a new average optical frequency. A pure amplitude-modulated pulse does not contain any of the mathematical, Fourier-analyzed frequencies. Since the QED definition of the photon is a Fourier mode of the vacuum, the photon necessarily becomes non-local. Since we have demonstrated that the Fourier theorem has various limitations in classical physics, its indiscriminate use in quantum mechanics should also be critically reviewed.

I assume that everywhere in space there is a real random electromagnetic radiation, or zero-point field (ZPF), which looks similar for all inertial observers, so that the stochastic properties of the field should be Lorentz invariant. This fixes the spectrum except for a single adjustable parameter measuring the scale, which is identified with Planck's constant, thus making the ZPF identical to the quantum electromagnetic vacuum. Photons are just fluctuations of the random field or, equivalently, wavepackets in the form of needles of radiation superimposed on the ZPF. Two photons are "classically correlated" if the correlation involves just the intensity above the average energy of the ZPF, but they are "entangled" if the ZP fields in the neighbourhood of the photons are also correlated. These assumptions may explain all quantum optical phenomena involving radiation and macroscopic bodies, provided the latter may be treated as classical. That is, we have an interpretation of quantization for light but not for matter. Detection of photons involves subtracting the ZPF, which cannot be done without a fundamental uncertainty. This explains why photon counters cannot be manufactured with 100% efficiency and no noise (dark rate), which prevents the violation of a genuine Bell inequality (the so-called detection loophole). The theory thus obtained agrees very closely with standard quantum optics when the latter is formulated in the Wigner representation.

The standard mathematical formalism of cavity QED leads us to consider photons as excitations of a quantum harmonic oscillator. Although it is one of the most familiar problems of quantum mechanics, some aspects of the quantum harmonic oscillator remain difficult to visualize, particularly in the rather abstract context of an electromagnetic field. Recently, modern microfabrication and refrigeration techniques have begun to allow the creation of nanoscale mechanical oscillators which can be cooled close to the quantum regime. Despite the extreme physical differences between an electromagnetic cavity and a nanomechanical resonator, both systems may be approximated by the same quantum harmonic oscillator model. However, the conceptual consequences of quantum behavior, and the challenges to physical intuition, are quite different in the two cases. Taking a mechanical point of view therefore allows fresh insight into the quantum harmonic oscillator problem. To illustrate the connection and how it may aid our understanding of light, the mathematical parallelism between an electromagnetic cavity and a mechanical resonator is demonstrated. Current nanomechanics experiments are discussed, and some possible quantum measurements are introduced. Finally, the discrepancies between the predictions of quantum mechanics and our experience of classical vibrating beams are considered, with an emphasis on how nanomechanics may be able to offer a new perspective on the nature of photons.
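The mathematical parallelism invoked here is the textbook one: once the cavity field quadratures are identified with the mechanical displacement and momentum, both systems are governed by the same Hamiltonian (sketched here for orientation, not as the authors' specific derivation):

```latex
% Mechanical resonator: mass m, frequency \omega, displacement x, momentum p.
% Cavity mode: the field quadratures play the roles of x and p.
H \;=\; \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2
  \;=\; \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right),
\qquad
a \;=\; \sqrt{\frac{m\omega}{2\hbar}}\left(x + \frac{i\,p}{m\omega}\right).
```

In the electromagnetic case the number operator a†a counts photons; in the mechanical case it counts quanta of vibration (phonons), which is what makes the two intuitions transferable.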

The reliable generation of true single-photon wavepackets with well-defined modal structure is a crucial ingredient for the realization of linear optical quantum computing. In this paper we present experimental results indicating the efficient generation of conditionally prepared single photons from parametric downconversion in a KTP nonlinear waveguide. In addition we present, theoretically and experimentally, a novel criterion for the assessment of conditional single photon sources which takes into account the contributions of heralded vacuum due to optical losses, of higher photon numbers and of the binary response of typical single photon detectors. Utilizing this criterion, we show that our KTP nonlinear waveguide represents a high fidelity source of conditionally prepared single photons.

A quantum field theory approach is presented to show that paraxial photons of any frequency in the full electromagnetic spectrum may possess arbitrary values of the orbital angular momentum along their propagation direction. Our framework also enables us to propose multi-dimensional geometric quantum gates exploiting the orbital angular momentum of photons. The action of these gates is shown to be describable in a remarkably compact form by resorting to the Wigner representation.

Quantum computing has been a relatively new research area at the intersection of physics and computer engineering since Feynman proposed an abstract model of a quantum computer in the 1980s. After Shor presented an algorithm showing a practical application of quantum computing in 1994, research on quantum computing grew quickly. However, there is a gap between academic research and application in industry. This survey paper attempts to fill that gap by relating the research to industrial needs, specifically those of the aerospace industry.

A quantum eraser is proposed that operates in a domain without any classical counterpart. This quantum eraser utilizes the complementarity between entanglement and single-partite properties of composite quantum systems. Consequently, in contrast to the duality of visibility and which-path information, which characterizes single quantum systems, here properties of composite quantum systems are considered. In composite quantum systems entanglement may emerge, which is of genuine quantum origin. This nonclassical correlation mutually excludes the single-partite properties of the subsystems of the composite quantum system. The single-partite properties can be described by wave and particle properties, i.e., the standard wave-particle duality. Remarkably, entanglement can be considered as a resource for observables that do not exist in classical physics. In a bipartite photon system, this observable is the two-particle visibility, which describes the phase relations shared between both photons of the composite system. The complementarity between two-particle visibility and single-partite properties of the subsystems prevents the observation of single-partite properties in an entangled biphoton system. The quantum eraser erases the two-particle visibility and retrieves single-partite properties in the form of single-particle visibility for each of the two photons. Thus, both observables contain phase information giving rise to interference effects. Here, complementarity is explicitly enforced by entanglement in a quantitative manner.

We show that, in spite of a rather common opinion, interference of probabilities can easily be obtained in a purely corpuscular statistical model. All distinguishing features of the quantum probabilistic model (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are present in a latent form in the classical Kolmogorov probability model. However, the classical model should then be considered a contextual model, in the sense that all probabilities are determined by contexts, i.e., complexes of physical conditions. One of the main consequences of our contextual probabilistic reconsideration of the foundations of quantum mechanics is that interference can be explained without wave-particle duality.
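As a toy numerical illustration of the quantum-like interference term in a contextual model (the probabilities below are hypothetical, chosen only to show that the normalized deviation from the total-probability formula can behave like a cosine; this is not the author's own calculation):

```python
import math

# Hypothetical context probabilities (illustrative only).
p1, p2 = 0.5, 0.5                  # probability of selecting context 1 or 2
pb_given_1, pb_given_2 = 0.4, 0.2  # P(b) measured under each context alone
pb_both = 0.5                      # P(b) measured with both contexts combined

# Classical formula of total probability...
classical_total = p1 * pb_given_1 + p2 * pb_given_2

# ...and the normalized deviation from it, written as in the quantum-like
# form P = P1 + P2 + 2*cos(theta)*sqrt(P1*P2).
lam = (pb_both - classical_total) / (
    2 * math.sqrt(p1 * pb_given_1 * p2 * pb_given_2))

# When |lam| <= 1, lam can be read as cos(theta): the statistics admit a
# complex-amplitude (Hilbert-space) representation.
print(classical_total, lam)
```

Here the deviation coefficient comes out inside [-1, 1], so these "corpuscular" statistics can be encoded with complex amplitudes, which is the point of the contextual reconstruction.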

A survey of the historically most widely considered 'paradigms' for the electromagnetic interaction is presented along with the conflicts or defects that each exhibited. In particular, problems derived from the concept of the 'photon' and Quantum Electrodynamics are emphasized. It is argued that a form of direct interaction on the light cone may be the optimum paradigm for this interaction.

Abstractly, the photon is examined in Euclidean space geometry, this time strictly under the electrodynamics of the Galilean transformation of velocities, c' = c ± v, where the velocity c refers to the velocity with which the photon is emitted from its moving primary source, which moves with velocity v relative to the laboratory frame. A non-interfering hypothetical observer, not of the real world, would note from the laboratory frame that the interference-free photon moves with velocity c'. Any measurement by a real-world observer, by contrast, involves interference with the window, lens, or mirror of the observer's measuring apparatus. This paper will demonstrate that the problems in modern physics involving both electromagnetism and gravitation have purely classical solutions under the electrodynamics of the Galilean transformation of velocities, while abiding strictly by the rules of Galilean transformations and employing the classical assumptions of the rectilinear behavior of both the photon and the graviton in Euclidean space.

The detection of very slow rates of photo counts in interference and diffraction experiments has given rise to the prevailing interpretation that photons interfere by themselves and are indivisible, albeit non-local. The purpose of this paper is to inspire the development of alternative models of the photon by underscoring that, in reality, light does not interfere with light. The effects of superposition, registered as interference fringes, can become manifest only when a suitable detector can respond simultaneously to all the superposed light beams separately arriving from all the paths (or slits). This must be a strictly causal process. In fact, different detectors with different quantum properties report different results when exposed to the same superposed fields. Interference and diffraction effects are always observed as fringes through processes of re-distribution and/or re-direction of the measured energy of the superposed fields. Accordingly, we present a number of experiments, actual and conceptual, which highlight the contradictions built into the notion of non-locality in interference. A closer examination of these experiments can guide us to develop a conceptually congruent and causal model for both the evolution of photons and the interference (diffraction) effects by adapting classical diffraction theory. This theory has correctly predicted the characteristics of light, whether it is starlight propagating through intergalactic space or nano-tip-generated light propagating through complex nanophotonic waveguides.

When interpreted with the standard theory of cosmology, recent observations of the apparent magnitude vs. redshift of Type Ia supernovae suggest an accelerating expansion of space. The acceleration is justified by assuming the presence of an unknown dark energy working against gravitation at cosmological distances. The assumption of dark energy is equivalent to Einstein's cosmological constant, which he originally proposed to prevent a collapse of the spherically closed space he assumed to be static. If Einstein's spherically closed space, the surface of a 4-sphere, is allowed to expand in a zero-energy balance between the energies of motion and gravitation, no cosmological constant or dark energy is needed. In a thorough analysis of such expansion, the apparent magnitude, m, versus redshift, z, obtains the form m = M0 + 5 log(z) + 2.5 log(z+1), which agrees completely with the Type Ia supernovae observations [1,2]. Due to the assumed spherical geometry and the zero-energy balance, the obtained magnitude prediction is absolute in nature; it has no free parameters like Ωm, ΩΛ, or the Hubble constant H0 that are needed in the corresponding equation derived from the standard cosmology model. In space described as a dynamic 4-sphere, the fourth dimension is geometrical in nature, allowing a universal time coordinate. The velocity of light becomes directly linked to the velocity of space in the direction of the 4-radius, and the rest energy of mass gets the meaning of the energy of motion that mass possesses due to the expansion of space. As further consequences of the zero-energy balance, the buildup of mass centers in space results in local bending of space, allowing solutions for the perihelion advance of planetary orbits, the bending of light, and the Shapiro delay in closed mathematical form. The characteristic absorption and emission frequencies of atomic objects become linked to local motion and gravitation, which means that the concept of proper time is replaced by a direct effect of motion and gravitation on the frequencies of atomic oscillators. In dynamic spherical space the well-known equality between the total gravitational energy and the rest energy of mass reflects the zero-energy balance driving the expansion of spherically closed space.
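The magnitude prediction quoted above is a one-line function of redshift. A minimal sketch of evaluating it, treating the reference magnitude M0 as a hypothetical input and reading "log" as base-10, as is usual for magnitudes:

```python
import math

def apparent_magnitude(z, M0):
    # m = M0 + 5*log10(z) + 2.5*log10(z + 1): the quoted prediction,
    # parameter-free apart from the reference magnitude M0.
    return M0 + 5 * math.log10(z) + 2.5 * math.log10(z + 1)

# Example: the predicted magnitude rises monotonically with redshift.
mags = [apparent_magnitude(z, 0.0) for z in (0.1, 0.5, 1.0, 1.5)]
print(mags)
```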

The photon is modeled as a monochromatic solution of Maxwell's equations confined as a soliton wave by the principle of causality of special relativity. The soliton travels rectilinearly at the speed of light. The solution can represent any of the known polarization (spin) states of the photon. For circularly polarized states the soliton's envelope is a circular ellipsoid whose length is the observed wavelength (λ), and whose diameter is λ/π;
this envelope contains the electromagnetic energy of the wave (hν = hc/λ). The predicted size and shape are confirmed by experimental measurements: of the sub-picosecond time delay of the photo-electric effect, of the attenuation of undiffracted transmission through slits narrower than the soliton's diameter of λ/π, and of the threshold intensity required for the onset of multiphoton absorption in focused laser beams. Inside the envelope the wave's amplitude increases linearly with the radial distance from the axis of propagation, being zero on the axis. Outside the envelope the wave is evanescent, with an amplitude that decreases inversely with the radial distance from the axis. The evanescent wave is responsible for the observed double-slit interference phenomenon.
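The quoted envelope dimensions give concrete numbers. For instance, for a hypothetical example wavelength of 500 nm (green light):

```python
import math

h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]

wavelength = 500e-9  # 500 nm, illustrative choice

# Energy carried inside the envelope: E = h*nu = h*c/lambda
energy_J = h * c / wavelength
energy_eV = energy_J / 1.602176634e-19

# Claimed soliton diameter: lambda / pi
diameter_nm = wavelength / math.pi * 1e9

print(energy_eV)    # about 2.48 eV
print(diameter_nm)  # about 159 nm
```

So on this model a visible-light photon would be a sub-wavelength object roughly 160 nm across, which is what makes the narrow-slit attenuation measurements cited above a meaningful test.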

At its foundations, Maxwell's theory of Electrodynamics, like thermodynamics, is a topological theory independent of the geometric constraints of metric, scale, or gauge symmetries. One of the most interesting features of Electromagnetism is its relationship to the transport of momentum and energy by means of photons. This article utilizes a topological perspective to discuss the classical features and quantum concepts associated with the photon, including Topological Spin, Topological Torsion, Helicity and Chirality.

An analytical representation of the mass-energy threshold of a Photon is derived utilising finite reciprocal harmonics. The derived value is < 5.75 × 10⁻¹⁷ eV and is within 4.3% of the Eidelman et al. value endorsed by the Particle Data Group (PDG), < 6 × 10⁻¹⁷ eV. The PDG value is an adjustment of theoretical predictions to fit physical observation. The derivation presented herein is without adjustment and may represent physical evidence of the existence of Euler's constant in nature at the quantum level.

In these two related parts we present a set of methods, analytical and numerical, which can illuminate the behaviour of quantum systems, especially complex systems, e.g., where the standard "coherent-states" approach cannot be applied. The key points demonstrating the advantages of this approach are: (i) effects of localization of possible quantum states, more appropriate than "gaussian-like states"; (ii) effects of non-perturbative multiscales which cannot be calculated by means of perturbation approaches; (iii) effects of formation of complex quantum patterns from localized modes, or classification and possible control of the full zoo of quantum states, including (meta)stable localized patterns (waveletons). In this first part we consider applications of a numerical-analytical technique based on local nonlinear harmonic analysis to the quantum/quasiclassical description of nonlinear (polynomial/rational) dynamical problems which appear in many areas of physics. We consider calculations of Wigner functions as solutions of the Wigner-Moyal-von Neumann equation(s) corresponding to polynomial Hamiltonians. Modeling demonstrates the appearance of (meta)stable patterns generated by highly localized (coherent) structures or entangled/chaotic behaviour. We can control the type of behaviour on the level of a reduced algebraic variational system. At the end we present a qualitative definition of quantum objects in comparison with their classical counterparts, whose natural domain of definition is the category of multiscale/multiresolution decompositions according to the action of an internal/hidden symmetry of the proper realization of scales of functional spaces (the multiscale decompositions of the scales of Hilbert spaces of states). This gives a rational, natural explanation of such purely quantum effects as "self-interaction" (self-interference) and instantaneous quantum interaction (transmission of information).

In this second part we present a set of methods, analytical and numerical, which can describe behaviour in (non)equilibrium ensembles, both classical and quantum, especially in complex systems where the standard approaches cannot be applied. The key points demonstrating the advantages of this approach are: (i) effects of localization of possible quantum states; (ii) effects of non-perturbative multiscales which cannot be calculated by means of perturbation approaches; (iii) effects of formation of complex/collective quantum patterns from localized modes, and classification and possible control of the full zoo of quantum states, including (meta)stable localized patterns (waveletons). We demonstrate the appearance of nontrivial localized (meta)stable states/patterns in a number of collective models covered by the (quantum)/(master) hierarchy of Wigner-von Neumann-Moyal-Lindblad equations, which are the result of a "wignerization" procedure (Weyl-Wigner-Moyal quantization) of the classical BBGKY kinetic hierarchy, and present explicit constructions for exact analytical/numerical computations. Our fast and efficient approach is based on variational and multiresolution representations in the bases of polynomial tensor algebras of generalized localized states (fast convergent variational-wavelet representation). We construct the representations for the hierarchy/algebra of observables (symbols)/distribution functions via complete multiscale decompositions, which allow us to consider polynomial and rational types of nonlinearities. The solutions are represented via exact decomposition in nonlinear highly localized eigenmodes, which correspond to the full multiresolution expansion in all underlying hidden time/space or phase space scales. In contrast to other approaches, we do not use perturbation techniques or linearization procedures. Numerical modeling shows the creation of various internal structures from localized modes, which are related to localized (meta)stable patterns (waveletons), entangled ensembles (with subsequent decoherence), and/or chaotic-like behaviour.

We report the results of a new realization of the Ghose, Home and Agarwal experiment to test wave-particle duality, in which some limitations of the former experiment, realized by Mizobuchi and Ohtake, are overcome. Our results, in agreement with the predictions of quantum mechanics, indicate that complementarity between wave and particle behavior must be interpreted in a weaker sense, as a gradual disappearance of interference when "which-path" indications are obtained, and not as mutually exclusive aspects as in Bohr's original statement.

We present the experimental generation of a new class of non-classical light states and their complete phase-space characterization by quantum homodyne tomography. These states are the result of the most elementary amplification process of classical light fields by a single quantum of excitation and can be generated by the process of stimulated emission of a single photon in the mode of a coherent state. Being intermediate between a single-photon Fock state and a coherent one, they offer the unique opportunity to closely follow the smooth evolution between the particle-like and the wave-like behavior of the light field and to witness the gradual change from the spontaneous to the stimulated regimes of light emission.
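The state class described here is, in standard notation, the single-photon-added coherent state (a textbook form, given for orientation rather than as the authors' exact construction):

```latex
% Adding one photon to a coherent state |alpha> and normalizing:
|\psi\rangle \;=\; \frac{\hat{a}^\dagger\,|\alpha\rangle}{\sqrt{1+|\alpha|^2}},
\qquad
\langle\alpha|\,\hat{a}\hat{a}^\dagger\,|\alpha\rangle \;=\; 1 + |\alpha|^2 .
% Limits: alpha -> 0 recovers the one-photon Fock state |1>;
% |alpha| >> 1 approaches the coherent (classical-like) state.
```

The two limits of α are what make the "smooth evolution between particle-like and wave-like behavior" mentioned above directly visible in the state itself.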

We review and sharpen the concept of a photon wave function based on the quantum theory of light. We argue that a point-like atom serves as the archetype for both the creation and detection of photons. Spontaneous emission from atoms provides a spatially localized source of photon states that serves as a natural wave packet basis for quantum states of light. Photodetection theory allows us to give operational meaning to the photon wave function which, for single photons, is analogous to the electric field in classical wave optics. Entanglement between photons, and the uniquely quantum phenomena that result from it, are exemplified by two-photon wave functions.

James Clerk Maxwell unknowingly discovered a correct relativistic, quantum theory for the light quantum, forty-three years before Einstein postulated the photon's existence. In this theory, the usual Maxwell field is the quantum wave function for a single photon. When the non-operator Maxwell field of a single photon is second quantized, the standard Dirac theory of quantum optics is obtained. Recently, quantum-state tomography has been applied to experimentally determine photon wave functions.
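One standard way to make "the Maxwell field as a single-photon wave function" explicit is the Riemann-Silberstein construction (a common realization, not necessarily the author's exact formulation):

```latex
% Riemann-Silberstein vector built from the classical fields:
\mathbf{F} \;=\; \sqrt{\tfrac{\varepsilon_0}{2}}\,\bigl(\mathbf{E} + i c\,\mathbf{B}\bigr),
% with which the free Maxwell equations collapse into a Schrodinger-like form:
i\,\partial_t \mathbf{F} \;=\; c\,\nabla \times \mathbf{F},
\qquad
\nabla\cdot\mathbf{F} \;=\; 0 .
```

The first equation is first order in time with an i on the left, formally the structure of a Schrodinger equation, which is the sense in which Maxwell's field equations can be read as a relativistic wave equation for the light quantum.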

For large bulk disordered media, light transport is generally well described by a diffusion process. This picture assumes that any interference is washed out under configuration averaging. However, it is now known that, under certain circumstances, some interference effects survive the disorder average and in turn lead to wave localization effects. In this paper, we investigate the coherence of monochromatic laser light propagating in an optically thick sample of laser-cooled strontium atoms. For this purpose, we use the coherent backscattering effect as an interferometric tool. At low laser probe beam intensities, phase coherence is fully preserved and the interference contrast is maximal. At higher intensities, saturation effects begin to set in and the interference contrast is reduced.

We reconstruct Maxwell's equations, showing that a major part of the information encoded in them is taken from topological properties of spacetime, and that the residual information, divorced from geometry, which represents the physical content of electrodynamics, translates into four assumptions: (i) locality; (ii) linearity; (iii) identity of the charge-source and the charge-coupling; and (iv) the absence of magnetic monopoles. However, a closer inspection of symmetries peculiar to electrodynamics shows that these assumptions may have much to do with geometry. Maxwell's equations tell us that we live in a three-dimensional space with trivial (Euclidean) topology; time is a one-dimensional, unidirectional, noncompact continuum; and spacetime is endowed with a light cone structure readable in the conformal invariance of electrodynamics. Our geometric intuitions relate to the fact that Maxwell's equations are built into our brains; hence our space and time orientation and our visualization and imagination capabilities are ensured by perpetual instinctive processes of solving Maxwell's equations. People usually agree in their observations of angle relations; for example, a right angle is never confused with an angle slightly different from a right angle. By contrast, we may disagree on metric issues; say, a colour-blind person finds light wavelengths quite different from those found by a person with normal vision. This lends support to the view that the conformal invariance of Maxwell's equations is responsible for producing our notion of space. Assuming that our geometric intuition is guided by our innate realization of electrodynamical laws, some abnormal mental phenomena, such as clairvoyance, may have a rational explanation.

The formation of an image, and its correct interpretation by sighted living creatures, is a unique example of specified complexity unlike anything else in nature. While many of the functional aspects of living organisms are extremely complex, only an image requires a unique mapping process by the eye-brain system to be useful to the organism. The transfer of light from an object scene to a visual detection system (eye + brain) conveys an enormous amount of information. But unless that information is correctly organized into a useful image, the exchange of information is degraded and of questionable use. This paper examines the "connections" necessary for images to be interpreted correctly, as well as addressing the additional complexity requirement of dual-image mapping for stereovision capabilities. Statistics are presented for "simple eyes" consisting of a few pixels to illustrate the daunting task that random chance has to produce any form of a functional eye. For example, a 12-pixel eye (or camera) has 12! (479,001,600) possible pixel-to-brain (computer) wiring combinations, which can then be compared to the 126 million rods/cones of the actual human eye. If one tries to "connect the wires" (correctly interpret the information contained) in a 12-pixel image by random processes, by the time 6 pixels become correctly connected, over 99.9% of all the trials are incorrect, producing "noise" rather than a recognizable image. Higher numbers of pixels quickly make the problem astronomically worse for achieving any kind of useful image. This paper concludes that random-chance purposeless undirected processes cannot account for how images are perceived by living organisms.
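The combinatorial figures quoted above can be checked directly. A short sketch that counts, via derangement numbers, how many of the 12! wirings connect at least 6 pixels correctly, matching the "over 99.9% incorrect" claim:

```python
from math import comb, factorial

def derangements(n):
    # Number of permutations of n items with no fixed points:
    # D(n) = sum_k (-1)^k * n! / k!   (exact integer arithmetic)
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

N = 12
total = factorial(N)  # 479,001,600 possible pixel-to-brain wirings

# Wirings with at least 6 pixels correctly connected: choose which k
# pixels land on the right "wire", derange the remaining N - k.
at_least_6 = sum(comb(N, k) * derangements(N - k) for k in range(6, N + 1))

fraction = at_least_6 / total
print(total, at_least_6, fraction)  # fraction well below 0.1%
```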

The human visual system has an amazing sensitivity: even a single photon catch can trigger the release of a signal in a rod photoreceptor cell under certain circumstances. However, behaviorally it requires on average 5-8 photons for a human to "see" a flash of light. This discrepancy is due to the intrinsic "dark noise" in the visual system. Various aspects of human visual sensitivity to single photons are reviewed and discussed.
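The gap between single-photon sensitivity and the 5-8 photon behavioral threshold is commonly modeled with Poisson statistics: a flash is "seen" only if the absorbed photon count reaches some threshold K. A minimal sketch of that standard model (the threshold and mean values below are illustrative, not fitted to any data set):

```python
from math import exp, factorial

def p_see(mean_photons, threshold):
    # Probability that a Poisson-distributed photon count reaches the
    # threshold: P(N >= K) = 1 - sum_{n < K} e^{-mu} * mu^n / n!
    below = sum(exp(-mean_photons) * mean_photons ** n / factorial(n)
                for n in range(threshold))
    return 1.0 - below

# Frequency of "seeing" vs. mean photons absorbed, for a threshold of 6.
for mu in (2, 4, 6, 8, 10):
    print(mu, round(p_see(mu, 6), 3))
```

The resulting sigmoid frequency-of-seeing curve is the shape classically used to infer thresholds of this order from flash-detection experiments.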

Photons are continuously absorbed and emitted by all living cells. One possible means of releasing energy when
an electron changes energy states during a biochemical reaction is biophoton emission. An example of
energy transfer in biological systems is the process of photosynthesis. Biophoton emission has also been
proposed as one possible mechanism for intra- and intercellular communication (information
transfer), as well as for the regulation of biological and biochemical functions within cells and living systems.
Measurements of this emission by other researchers have shown that it has the properties of coherent light and is
measurable from the UV through the near IR. Experimental evidence gathered by various researchers since the
1920s indicates that light plays an important role in certain biological functions and processes. Through a
series of experiments we have observed resonance effects between plant parts, measured using a highly
sensitive, low-noise, cooled CCD in total darkness in a light-tight chamber. Dynamical systems theory offers a
plausible explanation for the resonance effects we have observed. The role of photonic interaction at the systemic
level in biological systems has received relatively little attention; yet a better understanding of these processes
would help us decipher the nature and role of light in biological systems.

The construct herein utilises the Photon mass-energy threshold, as derived by Storti et al., to facilitate the precise derivation of the mass-energies of the Photon and Graviton. Moreover, recognising the wave-particle duality of the Photon, the root-mean-square (RMS) charge radii of a free Photon and Graviton are derived to high computational precision. In addition, the RMS charge diameters of the Photon and Graviton are shown to be in agreement with generalised Quantum Gravity (QG) models, implicitly supporting the limiting definition of the Planck length.

With the recognition of a logical gap between experiments and the equations of quantum mechanics come: (1) a chance to clarify such purely mathematical entities as probabilities, density operators, and partial traces, separated out from the choices and judgments necessary to apply them in describing experiments with devices; and (2) an added freedom to invent equations by which to model devices, stemming from the corresponding freedom in interpreting how these equations connect to experiments. Here I apply a few of these clarifications and freedoms to model the polarization-entangled light pulses called for in quantum key distribution (QKD). Available light pulses are entangled not only in polarization but also in frequency. Although absent from the simplified models that initiated QKD, the degree of frequency entanglement of polarization-entangled light pulses is shown to affect the amount of key that can be distilled from raw light signals, in one case by a factor of 4/3. Open questions remain, because QKD brings concepts of quantum decision theory, such as measures of distinguishability, mostly worked out in the context of finite-dimensional vector spaces, into contact with the infinite-dimensional Hilbert spaces needed to give expression to optical frequency spectra.

Any discussion of the nature of light must include a reminder that whenever we observe light (photons), we observe only particle-like properties. This paper reiterates that we do not need to attribute wave-like properties to scattered photons in order to describe phenomena such as the diffraction or refraction of light. It updates the original ideas of Duane, later revived by Landé, which provided a description of light diffraction without making reference to a wave nature. These are recast in terminology more common to quantum electrodynamics, which describes the interaction of particles in terms of the exchange of virtual photons. Diffraction is described in terms of an ensemble of distinct, probability-weighted paths for the scattered photons. The scattering associated with each path results from the quantized momentum exchange with the scattering lattice, attributed to the exchange or reflection of virtual photons. The probability for virtual particle exchange/reflection depends on the allowed momentum states of the lattice, determined by a Fourier analysis of the lattice geometry. Any scattered photon will exhibit an apparent wavelength inversely proportional to its momentum. Simplified, particle-like descriptions are developed for Young's double-slit diffraction, Fraunhofer diffraction and Fresnel diffraction. This description directly accounts for the quantization of momentum transferred to the scattering lattice and the specific eigenvalues of the lattice, based upon the constraints on virtual photon exchange set by the Uncertainty Principle, Δp_i = h/λ_i.
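Duane's quantization rule, on which the description above builds, can be checked numerically: a lattice of period d can exchange transverse momentum only in units of h/d, so a photon of momentum h/λ deflected by n such units emerges at exactly the wave-picture grating angle. A minimal sketch, with illustrative numbers not taken from the paper:

```python
from math import asin

h = 6.62607015e-34  # Planck constant, J s

def duane_angle(lam, d, n):
    """Deflection angle (rad) for n quanta of transverse momentum h/d
    imparted to a photon of momentum p = h/lam (Duane's rule)."""
    p = h / lam
    return asin(n * (h / d) / p)  # = asin(n * lam / d)

def grating_angle(lam, d, n):
    """Standard wave-picture grating maximum: sin(theta) = n * lam / d."""
    return asin(n * lam / d)
```

For any wavelength and lattice period the two functions agree term by term, which is the point of the particle-only description: the "wavelength" enters only through the photon's momentum.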

There continues to be a common belief that the registration of single photographic grains, or the emission of single photoelectrons, one at a time validates the assertion that interference and diffraction patterns are built through the contributions of individual photons (hν). A careful analysis of the past literature indicates that these experiments were not actually able to ascertain that one photon at a time interacted with the photodetector. This paper reviews a series of experiments carried out during the early eighties which suggest that it is the simultaneous presence of multiple photons (multiple units of hν) that makes possible the registration of a single photographic blackening spot or the emission of a single photoelectron. Congruency with the paradigm of "wave-particle duality" is better maintained by assuming that photons, after they are emitted and propagate from the source, develop a "bunching" property, which we proposed as a "photon clump" in 1985 and explained with a plausible extension of Heisenberg's Uncertainty Principle.

Bohr's principle of complementarity predicts that in a welcher Weg ("which-way") experiment, obtaining a fully visible interference pattern should lead to the destruction of path knowledge. Here I report a failure of this prediction in an optical interferometry experiment. Coherent laser light is passed through a dual pinhole and allowed to go through a converging lens, which forms well-resolved images of the respective pinholes, providing complete path knowledge. A series of thin wires is then placed at previously measured positions corresponding to the dark fringes of the interference pattern upstream of the lens. No reduction in the resolution or total radiant flux of either image is found, in direct disagreement with the predictions of the principle of complementarity. In this paper, a critique of the current measurement theory is offered, and a novel nonperturbative technique for measuring ensemble properties is introduced. Another version of this experiment without an imaging lens is also suggested, and some implications of the violation of complementarity for a further proposed experiment to investigate the nature of the photon and its "empty wave" are briefly discussed.
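The wire placement described above rests only on textbook two-pinhole interference: in the small-angle limit, the dark fringes on a plane at distance L from pinholes separated by d fall at y_m = (m + 1/2)·λL/d, spaced λL/d apart. A small sketch with assumed parameters (not the experiment's actual values):

```python
# Illustrative, assumed parameters -- not the values used in the experiment.
lam = 650e-9   # laser wavelength, m
d = 0.25e-3    # pinhole separation, m
L = 1.0        # pinhole-to-wire-grid distance, m

def dark_fringe(m):
    """Transverse position (m) of the m-th dark fringe, small-angle limit:
    destructive interference when the path difference is (m + 1/2) * lam."""
    return (m + 0.5) * lam * L / d
```

Placing the wires on this uniform grid of minima is what lets them intercept (in the wave picture) essentially zero intensity, which is why any dimming of the pinhole images downstream becomes the experiment's diagnostic.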