Yes. But I don't have time for a complete answer right now. In short, a lens produces the Fourier transform of the field in the entrance pupil. The subtlety here is that the field in the pupil doesn't necessarily look anything like the object you are imaging. In fact it looks a lot like the Fourier transform of the object. Now start thinking about what the Fourier transform of the Fourier transform of the object looks like...
–
Colin K May 21 '12 at 14:36

@ColinK: Classical optics can never produce a Fourier transform, as this requires coherently adding phases. The lens does not do this.
–
Ron Maimon May 23 '12 at 6:15


@RonMaimon: what? I think you're confusing classical optics and geometrical optics. Classical optics, the 'opposite' of quantum optics, is optics that can be explained without using photons. Specifically, to explain Fourier-transforming lenses, you need paraxial wave optics, which is firmly in the classical domain.
–
ptomato May 23 '12 at 8:21

@ptomato: I see--- by "classical" I meant "geometrical" as in the classical trajectory limit for light waves. The wave aspect doesn't matter for any normal vision--- There is no Fourier transform involved in reflecting light off something and focusing it into a retina. The only time a Fourier transform happens is when light diffracts around a small object, and then a properly placed lens will take the diffraction waves into definite spots at different locations on a screen. This is what the Wikipedia article is describing. Your statement that the lens field is a FT of the image is wrong.
–
Ron Maimon May 23 '12 at 8:27

@RonMaimon, I never meant to imply that the lens image is a Fourier transform. It's not my statement and I agree with you that it's wrong. I'll write an answer explaining what I mean.
–
ptomato May 23 '12 at 10:31

6 Answers

As mentioned in the question, a thin lens will produce in its focal plane the Fourier transform of the optical field in its pupil, possibly multiplied by a quadratic phase term. However, to understand how this relates to imaging in the wave optics picture, we need to take a step back, and look at the situation more generally. Under the paraxial approximation, the propagation of an optical field can be modeled with the Fresnel diffraction integral:
\begin{equation}
U^\prime(x,y) = \frac{e^{i k z}}{i \lambda z} \mathrm{exp}\left[ \frac{i \pi (x^2 + y^2)}{\lambda z}\right] \ldots\\
\times \iint_{-\infty}^{\infty} U(\xi, \eta) \mathrm{exp}\left[ \frac{i \pi (\xi^2 + \eta^2)}{\lambda z}\right] \mathrm{exp}\left[ \frac{-i 2 \pi (x \xi + y \eta)}{\lambda z}\right] d\xi d\eta
\end{equation}
where $U(\xi,\eta)$ is an optical field, $U^\prime(x,y)$ is the field after propagation by a distance $z$, and $\lambda$ and $k$ are the wavelength and wave number, respectively.
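As an aside not in the original answer, the Fresnel integral above can be evaluated numerically via its Fourier-domain (transfer-function) form; a minimal 1-D sketch, where the grid size, wavelength, and Gaussian beam parameters are arbitrary assumptions:

```python
import numpy as np

# Transfer-function form of the Fresnel integral above:
# H(f_x) = exp(ikz) * exp(-i*pi*lambda*z*f_x^2)
def fresnel_propagate(U, dx, wavelength, z):
    fx = np.fft.fftfreq(U.size, d=dx)              # spatial frequencies [1/m]
    H = np.exp(2j * np.pi * z / wavelength) \
        * np.exp(-1j * np.pi * wavelength * z * fx**2)
    return np.fft.ifft(np.fft.fft(U) * H)

# Assumed example: a Gaussian beam spreads as it propagates 10 cm.
N, dx, lam = 1024, 1e-5, 633e-9                    # samples, pitch, wavelength
x = (np.arange(N) - N // 2) * dx
U0 = np.exp(-(x / 1e-4) ** 2)
U1 = fresnel_propagate(U0, dx, lam, 0.1)
```

Since $|H| = 1$, this propagation is unitary: the total energy $\sum |U|^2$ is conserved while the beam spreads.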

In the case of a thin lens, a transparency in contact with the lens, and a propagation distance equal to the focal length $f$, we can represent the input field as
$$ U(\xi, \eta) = t_A(\xi, \eta) \mathrm{exp}\left[ \frac{-i \pi (\xi^2 + \eta^2)}{\lambda f}\right] $$
where $t_A$ is the amplitude transmission of the transparency, and the quadratic phase term is the wavefront curvature introduced by a thin lens of focal length $f$. If you plug this into the diffraction integral above, you see that, when $z = f$, the integral reduces to a Fourier transform and we have
\begin{aligned} U^\prime(x,y) &= \frac{e^{i k z}}{i \lambda z} \mathrm{exp}\left[ \frac{i \pi (x^2 + y^2)}{\lambda z}\right] \iint_{-\infty}^{\infty} t_A(\xi, \eta) \mathrm{exp}\left[ \frac{-i 2 \pi (x \xi + y \eta)}{\lambda z}\right] d\xi d\eta \\
{} &= \frac{e^{i k z}}{i \lambda z} \mathrm{exp}\left[ \frac{i \pi (x^2 + y^2)}{\lambda z}\right] \mathcal{F}[t_A](x,y)
\end{aligned}
where $\mathcal{F}$ is the Fourier transform. I'm not explicitly stating it, but you can assume that the Fourier transforms I write are always appropriately scaled. In this case, if the FT is defined to take functions of $(\xi, \eta)$ and return functions of spatial frequency $(\alpha, \beta)$, you should assume the implied scaling $(\alpha, \beta) \rightarrow (\frac{x}{\lambda z},\frac{y}{\lambda z})$.
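This focal-plane relation is easy to check numerically; a minimal sketch (the slit width, wavelength, and focal length are assumed values, not from the answer) that computes $|\mathcal{F}[t_A]|^2$ for a slit transparency and maps spatial frequency to focal-plane position via $x = \lambda f \alpha$:

```python
import numpy as np

# Assumed parameters: a 100-um slit as t_A, He-Ne wavelength, f = 10 cm.
N, dx = 4096, 1e-6                      # samples, grid pitch [m]
lam, f = 633e-9, 0.1                    # wavelength, focal length [m]
xi = (np.arange(N) - N // 2) * dx
t_A = (np.abs(xi) < 50e-6).astype(float)

# FT of the transparency, with spatial frequency alpha mapped to the
# focal-plane coordinate via x = lambda * f * alpha, as in the answer.
alpha = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
x_focal = lam * f * alpha
U = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(t_A)))
I = np.abs(U) ** 2                      # the familiar sinc^2 slit pattern
```

The peak sits at the focal-plane origin (the DC component), with the sinc² sidelobes of single-slit diffraction around it.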

Now, I won't derive it here because the integrals are huge, but if you use the first equation I wrote to propagate some object field by a distance $f$, then apply the wavefront modification by a thin lens of focal length $f$, and propagate another distance $f$, you will see that the quadratic phase terms all cancel each other, and the resulting field is exactly the Fourier transform of the object field, without even the quadratic phase term you get if the object is directly against the lens. If you have trouble with this, keep in mind the Fourier transform identity for the double FT of a function; this makes the derivation simple.
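For what it's worth, the double-FT identity invoked here, $\mathcal{F}[\mathcal{F}[f]](x) \propto f(-x)$, can be verified numerically with NumPy's DFT convention (a sketch, not part of the original derivation):

```python
import numpy as np

# F[F[f]](x) = f(-x) up to scaling; with numpy's DFT, applying fft twice
# returns N times the input evaluated at negated (mod N) indices.
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
double_ft = np.fft.fft(np.fft.fft(a))
expected = a.size * np.roll(a[::-1], 1)   # a[(-n) mod N], scaled by N
```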

More generally, this derivation can be applied to an arbitrary series of optical elements and propagation distances. With sufficient effort, it can be shown that, for a paraxial optical system described by an ABCD matrix, an optical field is propagated through the system by:
\begin{equation}
U^\prime(x,y) = \frac{e^{i k L_0}}{i \lambda B} \mathrm{exp}\left[ \frac{i \pi D(x^2 + y^2)}{\lambda B}\right] \ldots\\
\times \iint_{-\infty}^{\infty} U(\xi, \eta) \mathrm{exp}\left[ \frac{i \pi A (\xi^2 + \eta^2)}{\lambda B}\right] \mathrm{exp}\left[ \frac{-i 2 \pi (x \xi + y \eta)}{\lambda B}\right] d\xi d\eta
\end{equation}
where $L_0$ is the effective optical path length through the optical axis of the system.
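A minimal sketch of the ABCD bookkeeping (the focal length is an assumed value): composing free space $f$, a thin lens of focal length $f$, and free space $f$ gives $A = D = 0$ and $B = f$, which kills both quadratic phase terms in the integral above and leaves a pure Fourier transform, consistent with the 2f result derived earlier:

```python
import numpy as np

# Ray-transfer (ABCD) matrices; the rightmost factor acts first.
def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 0.1                                   # assumed focal length [m]
M = free_space(f) @ thin_lens(f) @ free_space(f)
# M == [[0, f], [-1/f, 0]]: A = D = 0, so both quadratic phases vanish
# and the ABCD integral above reduces to a pure (scaled) Fourier transform.
```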

This, of course, is still valid only for a coherent optical system. One way of thinking about this in the context of an imaging system (like an eye or a camera) is that it only applies to the field due to a single point in the scene being imaged. The ultimate image can be obtained by coherently propagating the field from each object point, taking the magnitude squared of the resulting field to get its intensity, and then adding the intensities from each object point.

Thus, I suppose one could claim that we see a superposition of Fourier transforms from each object point, rather than directly seeing a Fourier transform. Indeed, the image on your retina doesn't look like the picture you get if you take some everyday scene and Fourier transform it on your computer. Nonetheless, lenses do perform Fourier transforms on optical fields. When considering an imaging system, however, you must ask where the field being transformed is, relative to the lens. In general, this field is not the field at the object you are looking at; it is the field some distance in front of your pupil, and in a real-world situation it is not simply one coherent field from one source point, but an incoherent superposition of fields from every point in your field of view.

As a practical matter, this means that incoherent imaging is rarely simulated with the ABCD integral above. This sort of computation is useful for coherent imaging systems (a telescope is a good example, if you're only talking about stars and not extended objects), but in the incoherent case it is much simpler to simulate imaging purely by applying the MTF/OTF as a convolution or linear filter. Even in this case, however, the computation is still based on a Fourier transform.
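To illustrate that last point, here is a minimal sketch of incoherent imaging as a linear filter, assuming a hypothetical Gaussian intensity PSF (a real system's PSF would come from its pupil function):

```python
import numpy as np

# Hypothetical Gaussian intensity PSF (stand-in for a real system's PSF).
N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
psf /= psf.sum()                          # conserve total energy

obj = np.zeros((N, N))
obj[N // 2, N // 2] = 1.0                 # one point source...
obj[N // 2, N // 2 + 40] = 0.5            # ...and a dimmer neighbour

# Incoherent imaging: filter the object *intensity* with the OTF = F[psf].
otf = np.fft.fft2(np.fft.ifftshift(psf))
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))
```

Each point's intensity is blurred by the PSF independently and the intensities add, which is exactly the incoherent superposition described above.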

I may extend the last part and describe incoherent imaging, if there is interest. I'm hesitant to do so because it begins to go beyond the scope of the question.
–
Colin K May 23 '12 at 16:46


My equations do not obfuscate the physics; they in fact are the relevant physics for optical propagation in most cases. Your second objection is more reasonable: in a technical sense, the quadratic phase means that the result is not strictly a FT. However, it is still most naturally computable with an FT, and in the coherent case, a quadratic phase at the image plane doesn't change the observed intensity. The quad. phase does indeed recover geo. optics, but this is simply a statement that wave optics is more central than geo. optics. It doesn't mean that geo. is the only correct way to think
–
Colin K May 25 '12 at 19:33


Well this has gotten a bit circular. The thing I'm trying to explain is that this: "It is a single diffracting object at the focus of the lens, and geometrical optics everywhere else." is the opposite of correct. It is diffraction everywhere and it is geometrical optics everywhere, if you don't want as much accuracy. It is a FT in all but the most pedantic sense, and the observables are FTs, or their superpositions. It's cool if you like geo. optics. I do too. But that changes nothing.
–
Colin K May 26 '12 at 3:53


@ron: all objects are "diffracting objects". The eye has lenses. I know you're smart, but this isn't your field, and it's showing. You do not always know better.
–
Colin K May 26 '12 at 16:43


No, it isn't. Like I said, it's the FT of the field in the pupil. If you know this as well as is possible, then please stop being wrong.
–
Colin K May 26 '12 at 19:36

Check Wikipedia on the subject.
It says the image to be transformed has to be 1 focal length in front of the lens (not at infinity or at least further than a focal length).
It says the image has to be in a transparent film, and be lit from behind by plane waves, as from a point source at a distance.

+1: correct--- this is taking the diffraction from the focal point (which scatters as the Fourier transform) and refocusing it using the lens onto different spots on the focal plate. Outside of this special case, the "quadratic term" makes it so that there is no Fourier transform.
–
Ron Maimon May 25 '12 at 18:47

No, we don't see Fourier transforms--- we see classical (geometrical) optics, which is light propagating along geometric paths in the limit of small wavelengths. This limit makes it so that the light we get from a source is refocused into a point at a location corresponding to the source; there is no Fourier transform involved.

The phenomenon you are talking about is a combination of the diffraction law together with the focusing law. To say that the lens produces the Fourier transform is a misleading way to say it--- all the lens does is focus the diffraction pattern in different directions onto different points on the photographic plate. The diffraction is what is doing the Fourier transform.

If you place a diffracting object at the focal distance on one side of the lens, the lens will project its diffraction pattern onto a screen on the other side in such a way that different outgoing angles are each focused onto a different point.

@ColinK: I am not wrong, and it is dismaying that nobody else understands this. Wave optics is not "Fourier transforms", it is wave optics. The Fourier transform of the source transmission is done through diffraction, and this is exactly what is described by the article (I read it and understood it). There is no Fourier transform done by wave optics in the geometric limit.
–
Ron Maimon May 23 '12 at 17:20

"There is no Fourier transform done by wave optics in the geometric limit." That's correct, but why do we care about the geometric limit? Wave optics is relevant and, more importantly, completely valid in the context of vision (and imaging in general). It is definitely the case that diffraction is the physical phenomenon, and the FT is simply a way of modeling it, but this seems to me to be a semantic distinction.
–
Colin K May 23 '12 at 17:30


I think I see the confusion. You prefer to use geometric optics when it is sufficient. This is a reasonable preference because it is computationally much simpler. However, scalar diffraction (what we are calling wave optics) is the more general theory, and is applicable and correct wherever geometric optics is correct (and more). "Diffraction" is always relevant; it is essentially nothing more than a word for the way light propagates. The word is often used only in reference to situations where wave effects are obvious, like edges or double slits, but it always happens.
–
Colin K May 23 '12 at 17:43


I'd say "Given that a thin lens computes a monochromatic FT, every thin lens, including our eyes, is 'computing' an infinite number of FTs all the time. Just usually not the FT of anything useful, and certainly not on the retina."
–
ptomato May 23 '12 at 23:41

A positive thin lens does have the property that the complex field amplitude at distance $f$ after the lens is the Fourier transform of the complex field amplitude at distance $f$ before the lens, where $f$ is the focal length of the lens. This is called a $2f$ system.

However, it's wrong to say that that's an "image", because those distances don't match the thin-lens condition for image forming, $\frac{1}{a} + \frac{1}{b} = \frac{1}{f}$.

Correct, and this is because this is diffraction, not emission or reflection. The diffracted waves are going out in different directions, as if they were coming from infinity.
–
Ron Maimon May 23 '12 at 17:21

@RonMaimon, not sure what you mean by that last bit.
–
ptomato May 23 '12 at 23:32

If you have a classical source at infinity (parallel light rays), and the light diffracts through a grating, the diffracted spots go in different directions, but they still look like they came from infinity--- they are focused by a lens using the lens law with $a=\infty$ not according to the distance to the grating. The lens then refocuses all these outgoing diffraction spots to different points on the image. This is why the diffraction plus a lens does a Fourier transform, because the diffraction itself is a Fourier transform. This is my answer.
–
Ron Maimon May 25 '12 at 18:36

If you put the monochromatic light source at a distance $a$ from the lens, and diffracted the light from a thin slide at the focus, then it would focus at distance $b$.
–
Ron Maimon May 25 '12 at 19:08

Missing from these answers is that CCDs, like the one in your camera, can't in principle see a Fourier transform. Even if you stick a sensor at the focal plane, you won't see the Fourier transform! Your camera is not an interferometer!

Borrowing the equation from Colin K's answer, we see that the integral can be negative, positive, or even imaginary.

Because light oscillates so rapidly, a conventional camera cannot perform amplitude interferometry; it must average over many periods of light, which throws away the imaginary part of the wave (not to mention its sign), recording only
\begin{equation}
\langle U^* U \rangle_t .
\end{equation}

Interestingly, in the case of a single color with uniform illumination, we can, with difficulty, perform interferometric measurements to extract a "holistic" field
\begin{equation}
\langle U \rangle_t
\end{equation}
that includes the real and imaginary parts of our signal, enabling us to truly propagate the field of light back and forth. Doing this for white light and a distribution of illumination angles is still a current research topic. If I were a worse person, I would link to my research papers.

But remember: your camera doesn't see the Fourier transform either, because the Fourier transform is complex-valued!
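A minimal numeric sketch of this point (the field and phase below are arbitrary assumptions): any unit-modulus phase factor, such as the quadratic phase in front of the lens FT, drops out of the time-averaged intensity a square-law detector records:

```python
import numpy as np

# A detector records the time-averaged intensity <U*U>, so any pure phase
# factor (e.g. the quadratic phase in front of the lens FT) is invisible.
x = np.linspace(-1.0, 1.0, 512)
U = np.sinc(5 * x).astype(complex)            # some assumed complex field
quad_phase = np.exp(1j * np.pi * x**2 / 0.2)  # unit-modulus phase factor
I_plain = np.abs(U) ** 2
I_phased = np.abs(U * quad_phase) ** 2        # identical to I_plain
```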

In the paraxial approximation for a monochromatic light field, the complex light field in the back focal plane of a lens is the Fourier transform of the complex light field in its front focal plane.

The lens in the human eye is a lens that can accommodate its focal length so that the retina lies in the focal plane of the lens (i.e. accommodation at infinity).

So the answer is: yes, the human eye can perform Fourier transforms (excluding aberrations), just like any other lens.

Another way to look at the first statement above is, that from wave optics you get the so-called Fraunhofer approximation for diffraction in the far-field, which is essentially a Fourier transform with appropriate variable substitutions (and with a phase factor which you won't see on the retina or a screen). All a lens now does is to “pull” the far-field (and with it the Fourier transform) into its back focal plane.

You still get the Fourier transform (multiplied by a phase factor you can't see) if your “diffracting” input plane is not in the front focal plane of the lens. However, the further your input plane is from the lens, the smaller the range of angles that makes it through the lens becomes; in other words, the NA gets smaller. This translates into a maximum spatial frequency in your “Fourier transform picture”. Imagine a circular mask on top of your ideal Fourier transform which shrinks as your input plane moves away from the lens. (As a side note: this mask corresponds to an optical low-pass filter, i.e. a blurring, which limits how much detail the eye can physically resolve for a given object distance.)
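A minimal sketch of this circular-mask picture (the object, grid, and cutoff radius are assumptions): truncating the Fourier transform at a maximum spatial frequency removes energy in high frequencies and spreads light outside the original object, i.e. blurs it:

```python
import numpy as np

# Assumed object: a 3-pixel-wide bright bar with sharp edges.
N = 128
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
obj = (np.abs(X) < 2).astype(float)

# The finite NA acts as a circular low-pass mask on the Fourier transform;
# the assumed cutoff radius of 10 samples plays the role of the maximum
# spatial frequency admitted by the lens.
spectrum = np.fft.fftshift(np.fft.fft2(obj))
mask = (X**2 + Y**2 <= 10**2).astype(float)
img = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask))) ** 2
```

Shrinking the cutoff radius shrinks the mask and blurs the image further, mirroring the input plane moving away from the lens.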

In daily life, however, you probably won't see Fourier transforms since most light around us is spatially and temporally incoherent, so the conditions for interference are not fulfilled and hence there are no (distinguishable) diffraction patterns.