Every once in a while (doesn't happen much), I write a more serious note (see sig). This is one such.
I've collected many questions over the course of my studies and research, and have taken the time to present them here.

Without further ado.

Aperture
1. Is depth of field measured in meters, or as a percentage of the zoom? In other words: at aperture A, will the depth of field differ at different subject distances? Put more clearly: with a constant aperture A, does the depth of field change as you move toward or away from the subject (assume continuous lens focus)?

If so, doesn't that render the close-subject aperture effect irrelevant?
If not, does the distance to the subject have no effect on the depth of field at all?

Macro
2. What do the ratios 1:1 / 1:3.5 / 1:12.5 (in general, X:Y) mean? Can X be greater than Y?
3. Why is a dedicated macro lens needed in the first place? The Olympus 570 ultra zoom's lens goes from 26mm to 520mm, and still does macro down to 1cm. Why do DSLR lenses only start focusing at 30cm-40cm? Is this limitation meant to squeeze consumers' wallets?

4. As theory: is it technologically possible to shoot Z shots from X to Y at constant focal gaps, such that at aperture A they collectively cover the whole focal range, and then synthesize one image in which any object in the scene between X and Y is in crisp, crystal-clear sharpness, as if the lens had been focused solely on it?

Theoretically, this could create high-depth pictures, in which every object is perfectly focused at any distance.

If so, why hasn't this been done?
If not, why?

CPL
5. A polarizer filters light waves arriving at the lens from different angles (as I understand it). Imagine a photographer shooting a fish just under the water level of some river. Now consider light waves from both the fish and the water-surface reflections. Both bounce off something in front of the lens and travel in a straight line towards the sensor. To be clear: both travel along the same line towards the lens (same angle to the lens); the only difference between them is the origin distance (one comes from the fish underwater, the other from the water surface).

Now, how does the CPL know to filter out the reflection wave but not the fish wave, if they both arrive at the lens from the same angle?

6. Gordon recommends applying exposure compensation manually to images taken with a CPL, since it blocks a portion of the light. Since the camera measures the amount of light through the lens (TTL) before each shot, why does this have to be done manually? Doesn't the camera realize it's darker now from a simple TTL measurement? Does this manual set/unset really have to be done, or can I forget this recommendation?

7. How does the behavior change with linear polarizers? And if they produce wrong measurements for the camera's exposure system, why do they still exist?

White Balance
8. Why was white balance created? Specifically: a sensor converts an amount of light into a current by an analog mechanism. This analog current is converted to a digital signal through a per-pixel ADC. At this point we have an image to which we can apply basic SHARP / SAT / BRI / CON and save as a JPEG as-is. Why is white balance needed? Isn't the signal coming off the sensor good enough that we shouldn't need to play with its hue?

It seems all this hue-play has brought is trouble, and semi-orange pictures of people because some WB sensor got things wrong and ruined the picture. What am I missing?

General
9. What is astigmatism in DSLR lenses? Are there any samples of element astigmatism? Can it be solved? Is it done on purpose in some cases?
10. What does "X elements in Y groups" mean? What does a "group" do? Can a customer really conclude anything about a lens from this data (except weight)? If each element essentially decreases the amount of light, doesn't that mean "the smaller the X, the better"?
11. What's the difference between a penta-mirror and a penta-prism?
12. Don't anti-shake systems (IS/VR, or sensor-shake) worsen vignetting?
13. Why is there a shutter? In days of old, an unexposed film strip would "burn" when exposed to light, so it had to be kept safely closed in a dark chamber. A shutter was used to make a controlled exposure, so that only the wanted amount of light would "burn" the image onto the film.

CCDs and CMOS digital sensors don't "burn".
Why aren't they constantly exposed, recording data only when told to? Why must we keep counting shutter actuations? Is the shutter a dying dinosaur?
14. Why does the mirror box still exist? Why is an OVF still needed? Is the Micro Four Thirds movement a result of all the digicams? What would this do to the world of lenses?

I'll take an easy one for now. I should make an FAQ for macro, given the amount I post about it...

Macro
2: Ratio of image size at sensor to that of subject. This is an indication of magnification, typically at maximum focal length and minimum focus distance. Yes, X can be bigger than Y.
3: Macro is about image size, not how close you get. So all 1:1 macro lenses will give you the same size maximum image of the subject, although longer focal length ones mean you are further away to get it.
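The ratio arithmetic above can be sketched in a few lines of Python (a toy illustration; the function name and the 24 mm example are mine, not from any camera spec):

```python
def image_size_mm(subject_mm, ratio):
    """Size of the subject's projection on the sensor for a
    reproduction ratio X:Y (X units on sensor per Y units of subject)."""
    x, y = ratio
    return subject_mm * x / y

# A 24 mm wide subject at 1:1 projects to 24 mm on the sensor:
# it fills the short side of a full-frame sensor, and overflows
# a smaller sensor.
full_size = image_size_mm(24, (1, 1))   # 24.0
half_size = image_size_mm(24, (1, 2))   # 12.0
double = image_size_mm(24, (2, 1))      # 48.0 ("larger than life")
```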

A pentamirror is an optical device used in the viewfinder systems of cheaper single-lens reflex cameras instead of the pentaprism. Instead of the solid block of glass of the prism, simple mirrors are used to perform the same task. This is cheaper and lighter, but generally produces a viewfinder image of lower quality and brightness.

1.a Dof is normally measured in distance (cm, m), giving the range in which objects will look sharp.
1.b It depends only upon magnification (given a certain sensor size) and aperture.
1.c So with a fixed focal length, coming closer means larger magnification, which means less dof.
1.d With a zoom, getting closer to a subject while zooming back at the same time to keep the same magnification of said subject will deliver the same dof.
1.e I don't exactly know what you mean by "close-subject aperture effect". But if you think of the effective aperture of a lens being reduced when focussing closer, you're right: the effective aperture is what counts when determining dof.
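To make 1.a-1.d concrete, here's a rough Python sketch of the textbook thin-lens depth-of-field approximation (the 0.03 mm circle of confusion is a common full-frame assumption; treat the numbers as illustrative, not as any camera's actual behaviour):

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of depth of field, all distances in mm.

    Standard thin-lens approximation via the hyperfocal distance
    H = f^2 / (N * c); coc_mm = 0.03 mm is a common full-frame
    circle-of-confusion assumption.
    """
    H = focal_mm ** 2 / (f_number * coc_mm)
    near = H * subject_mm / (H + (subject_mm - focal_mm))
    if subject_mm - focal_mm >= H:
        far = float("inf")   # beyond hyperfocal: sharp to infinity
    else:
        far = H * subject_mm / (H - (subject_mm - focal_mm))
    return near, far

# 50 mm at f/8, subject at 2 m: sharp from roughly 1.68 m to 2.46 m.
near, far = dof_limits(50, 8, 2000)
# Same lens and aperture, subject at 0.5 m: only about 4 cm is sharp,
# which is exactly the "closer means less dof" point above.
near_c, far_c = dof_limits(50, 8, 500)
```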

2.a 1:1 means that an object 24mm wide will be projected, sharply, as a 24mm-wide image on the sensor. Now if the sensor is an FF/FX sensor, the object fills the short side of the sensor. If it's an APS-C sensor, the object is already larger than the wide side of the sensor, and with a point-and-shoot the object would fill the small sensor many times over.
2.b X greater than Y means that the object is projected larger than life on the sensor. This is what Nikon calls "macro"; up to 1:1 only qualifies as "micro" in Nikon's eyes.

3.a It's not the distance from the front lens that determines the magnification, but the effective focal length when focusing closest. So it may well be that a decent 1:2 macro zoom for a DSLR projects a larger image onto its sensor than the super-zoom does.
3.b Super-zoom lenses for larger sensors are harder to design than for smaller sensors. So this is not a rip-off.

4. Was perfectly answered by Cam-I-Am

5. The light reflecting from the water surface is already polarized, so it can be blocked out by an appropriate turn of the PL. The light from the fish is non-polarized, so it cannot be blocked by the PL.
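A way to see why this works: the reflected glare is (to a good approximation) linearly polarized and follows Malus's law, while the diffuse light from the fish is unpolarized and only ever loses half its intensity on average. A toy Python sketch (idealized 100% polarized glare; the function and angles are my own illustration):

```python
import math

def transmitted(intensity, polarized, filter_angle_deg, light_angle_deg=0.0):
    """Intensity passing a (linear) polarizing filter.

    Fully polarized light follows Malus's law, I = I0 * cos^2(theta);
    unpolarized light averages out to I0 / 2 no matter how the
    filter is turned.
    """
    if polarized:
        theta = math.radians(filter_angle_deg - light_angle_deg)
        return intensity * math.cos(theta) ** 2
    return intensity / 2

# Glare polarized at 0 deg: turning the filter to 90 deg blocks it almost
# entirely, while the fish's unpolarized light still passes at half strength.
glare = transmitted(1.0, True, 90)    # ~0.0
fish = transmitted(1.0, False, 90)    # 0.5
```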

6. Weeeeell, I'd never contradict Gordon on his own forum, but: I never compensate for a PL. It might turn out that the reduction of glaring highlights makes the image look dull when applying the PL, so you might want to "turn up the brightness". But at least in my experience you need not worry about the darkening effect of the PL: when I put my PL on the lens, the exposure gets an instant boost of almost +2EV automatically.
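For what it's worth, the compensation itself is just stop arithmetic, whether the camera does it via TTL or you do it by hand: every stop of lost light doubles the required exposure time (a toy sketch, nothing camera-specific):

```python
def compensate_shutter(shutter_s, stops):
    """Exposure time needed to make up for losing `stops` stops of
    light: each stop doubles the time."""
    return shutter_s * 2 ** stops

# A polarizer eating ~2 stops turns 1/250 s into 4/250 s (about 1/60 s).
new_time = compensate_shutter(1 / 250, 2)
```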

7. Linear PLs are only hazardous to DSLRs: their metering and AF sensors sit behind semi-silvered mirrors that are themselves polarization-sensitive, so a linear PL can skew the readings, while a CPL adds a quarter-wave plate behind the polarizing layer to scramble the polarization again. No problem, afaik, with non-mirror-based cameras.

8. WB only exists to accommodate the eyes' huge adaptability to color changes. Otherwise nothing would be wrong, since the sensor reproduces the exact color of the light at the moment of exposure. But when you view that color afterwards in the privacy of your own room, where your eyes have adapted to the light of those surroundings, the displayed image may appear false-colored, because the captured lighting had a different color than the current one.
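What the WB step actually does can be approximated by the classic gray-world trick: scale each channel so the scene averages out to neutral. A minimal numpy sketch (assuming a float RGB image; this is one simple textbook algorithm, not necessarily what any particular camera uses):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so all three
    channel averages come out equal (neutral)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return img * gains

# Demo: a scene lit by warm, orange-ish light (red average high, blue low).
warm = np.ones((4, 4, 3)) * np.array([0.8, 0.5, 0.2])
balanced = gray_world(warm)   # every channel now averages 0.5
```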

9.a Astigmatism makes the grey disk of a perfect Siemens star look shaped like an 8 (either upright or lying flat, like the infinity sign). I could certainly point you to an example in my many test shots with Siemens stars, but that's asking too much at the mo.
9.b Manufacturers try to avoid it through careful design and the use of very good glass, but the lens isn't always expensive enough to eliminate the effect entirely.

10.a A group is a number of lens elements put together with no air between them. This avoids the adverse effect of two extra glass/air surfaces.
10.b In general it is better to solve the design challenge with fewer elements/groups, but that is not always possible. Zooms always have a larger number of elements/groups than fixed focal lengths.
10.c And yes, more glass means less transmission and more reflections. But certain optical errors and aberrations can most effectively be eliminated (or reduced) by adding further elements.

11. See Cam-I-Am's reference

12. Nope. Why should they?

13. Well, I really don't know

14.a Many people prefer OVFs over EVFs. It's as simple as that. So the mirror box will not go away for some time.
14.b Only wide-angle lenses benefit from the lack of a mirror box, because then they can be designed as normal wide-angles, not the inverted tele designs they are nowadays (with the respective increase in elements/groups).

Some speculation:
12: I did a test. I used my widest lens, which has dark corners, put the camera in manual mode and took two shots, with and without SSS. When it was on, I shook the camera during the shot. Viewing the previews, I can't say the vignetting was different in either. Maybe the amount the sensor moves is insignificant relative to the lens's image circle? Likewise for IS/VR systems.
13: I've heard that at the shortest shutter speeds the shutter is actually open longer, and the short exposure is done electronically. So yes, arguably the curtain doesn't need to be there at all, if that's the case. On the plus side, it means the sensor is not directly exposed when changing lenses.

As to #13, I assume that
a) shutter speed really is what they say it is (as opposed to popo's assumption), and
b) that it is purely to avoid excessive heat build-up in the sensor when you have to "discharge" all the collected photons-turned-into-electrons on a continuous basis. Too much current and heat can lead to more noise.

The new D90's video mode might not give an indication of this effect, as all 12MPix are binned down to a measly 1-2MPix for the video image. That means they're on the safe side with noise.
-----
Btw.: I moved this thread out of the "technical and scientific photography" section into "off-topic" because it is better fitting and gets a wider audience there.

Just had the time to thoroughly read and understand ... thanks guys! Just a few comments:

Thomas wrote:

5. The light reflecting from the water-surface is already polarized so can be blocked out by an appropriate turn on the PL. The light from the fish is non-polarized so cannot be blocked by the PL

Why is the water's light polarized, and the fish's isn't?

Thomas wrote:

6. Weeeeell, I'd never contradict Gordon on his own forum

Slap CPL on lens - fire and forget - got it!

Thomas wrote:

7. Linear PLs are only hazardous to DSLRs. No prob afaik with non-mirror-based cameras.

Why? How different are they to CPLs?

Thomas wrote:

8. WB is only to atone to the eyes' huge adaptability to color-changes. Otherwise nothing would be wrong as the sensor reproduces the exact color of the light at the moment of the exposure. But seeing that color afterwards in the privacy of your own rooms where your eyes adapted to the light of that surroundings the displayed images might appear false-colored because the captured lighting had a different color than the current one.

Interesting read. If what you're saying is true, wouldn't it make more sense to save images straight off the sensor and adjust WB at viewing time? (Which means a different WB if the eyes see the picture in a dark basement or a sports stadium; WB would need to be set for every viewing?)

9. Astigmatism = single-axis focus?

Thomas wrote:

12. Nope. Why should they?

Sensor-shake shakes the sensor within the lens's image circle. IS/VR shakes the lens's image circle on the sensor.

Putting a rectangle in a circle and shaking something ... doesn't deepen vignetting?

Thomas wrote:

13. Well, I really don't know

A dead legacy? :ghost:

Thank you for the interesting read, it's much appreciated.
I'll definitely keep a link to this thread for future reference

b. I don't even think there's really a shutter anymore, merely a focusing mirror that allows the viewfinder to be used properly.

c. It keeps dust from reaching the sensor.

d. It prevents sensor burn.

e. Since it's mechanical and not digital, it's quicker to respond to the shutter release. Imagine if you were to do a 24-shot burst: how slow would it start getting when the camera has to "think" about each shot?

#13. ... e. Since it's mechanical and not digital, it's quicker to respond to the shutter release. Imagine if you were to do a 24-shot burst: how slow would it start getting when the camera has to "think" about each shot?

I don't agree here. You don't do slow, slower or slowest with a shutter.
There is also a 14,000 fps version somewhere made of glass and crystals, let alone 24 fps from a D90.

Electronic shutters cannot be compared to physical ones in speed... it's just not fair...
(though I completely agree about the sweet sound )

My 1300th post! Not an elder, so not called upon to answer your questions.
Here are my short answers anyway, to a few of the questions:

4) I recently read an article about focus stacking in a Dutch photography magazine, where it's said to be very useful for macro photography. There's special software available which automatically selects the sharp areas of each image to create an all-sharp photo. The most common mistake is that people take too few images, so 'gaps' can be seen between the sharp areas.
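The core of such software can be sketched in a few lines of numpy: per pixel, keep the frame with the highest local contrast. (A naive illustration of the idea only; real focus-stacking tools also align the frames and blend the seams.)

```python
import numpy as np

def focus_stack(images):
    """Naive focus stacking: for every pixel, keep the value from the
    frame whose local contrast (gradient magnitude) is highest there.
    images: equally sized 2-D grayscale arrays."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])  # (N, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = np.hypot(gx, gy)
    best = sharpness.argmax(axis=0)            # index of sharpest frame per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

# Demo: a frame with detail everywhere vs. a featureless "out of focus" frame.
sharp = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
blurry = np.full((8, 8), 0.5)
merged = focus_stack([sharp, blurry])          # picks the sharp frame everywhere
```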

8) We humans adjust colours with our brains so they look normal to us, but a camera reads them differently. Some light is extremely orange-coloured, other light very blue. Using auto white balance helps in most situations, unless there's a mix of light, or the light is warm and you want to maintain that colour temperature (read more on the Kelvin temperature scale if you wish).

10) A group is a bunch of glass elements positioned against each other, with more space between groups. Elements are (I don't know if it's always the case) glued together with optical glue, and spacers are used between these glued elements, or groups.

11) A pentamirror is, like the name suggests, built from mirrors, while a pentaprism is a solid piece of glass; both take the beam of light coming from the mirror in front of your sensor and bend it in more than one direction on its way to the viewfinder.

13) No clue, good question though. Maybe for extra precision? Just a random guess.

14) A viewfinder is needed, I think, for comfortable shooting: it allows the photographer to see exactly what the lens is letting through.

8: Very good suggestion, Cheeze! I always have this problem with printed images: if they look good in daylight, they just don't look good under incandescent light. So a change in WB fitting the viewing circumstances would be terrific! As an aside: I think nobody has done this, but Philips manufactures TV sets with "Ambilight", casting the dominant "hue" of the image onto the walls beside the set. This makes the images on screen more "impressive", but also adapts your eye better to the given white balance of the image.

9: sort of

12: the movement might not be as big as you expected. So the sensor is not really "leaving" the image-circle. Thus: no increased vignetting.
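Thomas's point can be checked with simple geometry: compare the lens's image-circle radius against the sensor's half-diagonal plus the stabilisation shift. A back-of-the-envelope Python sketch (the 16 mm circle radius and 1 mm shift are made-up illustrative numbers, not any manufacturer's spec):

```python
import math

def corner_margin_mm(circle_radius_mm, sensor_w_mm, sensor_h_mm, shift_mm=0.0):
    """Image-circle radius left beyond the worst-case sensor corner
    after shifting the sensor sideways by shift_mm."""
    half_diag = math.hypot(sensor_w_mm, sensor_h_mm) / 2
    return circle_radius_mm - (half_diag + shift_mm)

# APS-C sensor (23.6 x 15.7 mm) inside a hypothetical 16 mm-radius image
# circle: the half-diagonal is ~14.17 mm, so even a full 1 mm of
# stabilisation shift still leaves the corners inside the circle.
still_covered = corner_margin_mm(16, 23.6, 15.7, shift_mm=1.0) > 0   # True
```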

A quick note about applying compensation with polarisiers - it's purely personal preference, and also depends on how your camera meters the scene. As Thomas says, since the metering is performed through the lens, then the camera will already 'know' the polariser is reducing the light entering, and it will adjust the exposure appropriately.

I just find the final result is sometimes a little bright for my own liking, so often prefer to apply a little negeative compensation.

Thanks for all the replies by the way - some great stuff here, and all good material for a possible future technical glossary.

Thomas wrote:

8: Very good suggestion Cheeze! I always have the problem with printed images: If they look good at daylight, they just don't look good at incandescent light. So a change in WB fitting the viewing circumstances would be terrific!

Cool! Can't wait to see the next Sports Illustrated paper magazine with an auto-WB feature.

Gordon Laing wrote:

Thanks for all the replies by the way - some great stuff here, and all good material for a possible future technical glossary.