When talking about sensor DR I am naturally talking about making the capture in one shot. Multi-capture DR enhancement (as used in "HDR" photography) is an ergonomic and artistic kludge that we only use because of limitations in single-capture DR. If people can see the difference between a multi-exposure synthesized image that has been quantized to 12 vs 16 bits prior to tonemapping, then we can assume that it is possible to see the difference between a "true" 12-bit and 16-bit sensor.

-h

It is only a kludge if it works poorly.

I don't think anyone can tell the difference in print. Eric and other sources have given us side-by-sides of 16-bit MFDBs with 14-bit (DPR, for example) or 12-bit (Sony Alpha) cameras. You can't reliably tell which is which even on screens with much higher DR than the print.

My Sony A55 does a damn good job of 3-shot HDR to an 8-bit JPEG in camera. I have compared the 8-bit JPEG to the 3 raws HDRed to 16 bits in other software. The 8-bit JPEG tone mapping is very good. The main difference is the color style compared to the output after lots of work in other software, or pixel-peeping detail which is bled out in print.

Run a blind test with prints for people at your local photo club. How many can tell the print from the 8-bit JPEG from the print from the 14-bit raw if you match the colors correctly?

Any guy who can imagine a Fat Finger to deal with shutter slap can't be a bad guy, so I'll stop the war.

There's no war. The same words, digital and analog in this case, can refer to somewhat different things in different contexts...as has been noted here by other folks. Jumping from A/D conversion to an inquiry into the ultimate nature of things is just way outside the scope of my initial post!

Fat Fingers come in very handy not only to reduce unwanted resonances but also to help balance slightly body-heavy instruments.

Oh my, there is much here. I can see you'd like to debate this, but there is no need.

First, either a membrane potential exists or it has collapsed. That there are different ranges of sensitivity in different cell populations due to suppressor or excitatory activity is a fact. But the truth is that at some point the potential collapses and conduction occurs, or it does not. Again, about as binary as possible.

Hi,

Sorry to disappoint you. I'm not sure why, but it seems like you are trying to push a pet peeve, or maybe (to put it in a friendly way) you just need to brush up on your knowledge of the subject.

Since you are unlikely to take my word for it, let me point you to a document that explains the somewhat more complex phototransduction process. It was the second Google link I found, there may be better sources, but it seems to offer a reasonable explanation. Here it is: http://arxiv.org/ftp/quant-ph/papers/0208/0208053.pdf (see section 2.1.1 Photon transduction and signal amplification).

In short, for those who are not that interested in reading all of it, the relevant part states:

Quote

If this molecule absorbs a photon, it undergoes photoisomerization, forming a straight-chain version known as all-trans-retinal. All-trans-retinal unleashes a series of conformational changes in the protein opsin fragment, producing metarhodopsin II, which is the activated form of rhodopsin. Most of the conformational changes occur in less than a millisecond, but the last transformation, from metarhodopsin II to metarhodopsin III, requires several minutes to accomplish.

"Requires several minutes to accomplish" doesn't sound like a digital process, does it? In addition, the section concludes with:

Quote

Thus the first essential feature of the retina is that it amplifies the photon signal and converts it into macroscopic electric currents.

Macroscopic electric currents are not exactly digital either, are they?

If you are suggesting that diffraction, due to wavefronts that are disturbed by the edges of the aperture, does not exist, is not a reality, then we won't have to take diffraction seriously anymore. Or do we?

Besides, it might be wiser to use a reference with a bit more credibility than: "James Carter began thinking about and developing alternative theories of physics as a teenager. Around 1968, he developed the principle of Gravitational Expansion that explained gravity to be the opposite of Einstein’s General Relativity."

Sure, Einstein got it all wrong ...

Quote

So one more time, ...

Thanks but no thanks. I've got better things to do, but I'm always open to good quality information.

One photon is not digital. One airplane is not digital, and a thousand airplanes do not all of a sudden become analog either.

The possible signal amplification (a single photon can lead to hydrolysis of approximately 10^5 cGMP molecules/s) of the Rods in particular is impressive, but it is a constant stream of molecules:

Quote

Thus unlike ordinary neurons, which release transmitter from the synaptic button as a discrete event in response to an action potential, in photoreceptors there is a continuous release of neurotransmitter from the synapses, even in the dark.

Let’s take this from the summary: “...Moreover, we explicitly stress on the fact that due to existent amplification of the signal produced by each photon...”

“signal produced by each photon”...does that sound analog?

This: “If this molecule absorbs a photon, it undergoes photoisomerization, forming a straight-chain version known as all-trans-retinal. All-trans-retinal unleashes a series of conformational changes in the protein opsin fragment, producing metarhodopsin II, which is the activated form of rhodopsin.”

Does “if a molecule absorbs a photon... (something happens)” sound analog?

This: “Elaborate experiments have shown that the human is capable of consciously detecting the absorption of a single photon by a rod.”

sound analog?

This: “Therefore when opened the CNG channels tend to depolarize the cell. If the photoreceptor cell is illuminated, cytoplasmic cGMP concentration decreases and disrupts the ionic current through the CNG channels, thereby hyperpolarizing the cell.” (The paper’s footnote adds: “When the CNG channel is open, the resting membrane potential of -40 mV is dragged towards the reversal potential of the CNG channel, which is 0 mV. Thus the photoreceptor is depolarized.”)

sound analog?

This: “Thus the first essential feature of the retina is that it amplifies the photon signal and converts it into macroscopic electric currents.”

Amplification is not detection, no? It’s not even the signal. But even here it says “photon signal”, singular. So an event causes an effect; does that sound analog?

I’m afraid that your reference is talking about something completely different, but it is useful in that it describes the process well, and we are back to a photon exciting a cell. The best way to look at this, I suppose (signal modulation in the retina), is that ISO, shutter speed, aperture, and the photosite array all impact the picture in some way. But the picture itself is the result of photons striking the receptor, not the ISO setting.

Diffraction and all? No, you referred to the “duality” of photons. I am merely pointing out that “duality” is an outmoded concept based on what we now know to be the structure and properties of photons.

And as far as airplanes (now a long reach), you’re talking about something entirely different. Airplanes are not photons hitting photoreceptors. An airplane may land or crash; that’s binary, and it is information, however. You seem not to understand the distinction if that’s your example.

Now about your expressions. You are the one who initially went point by point, hammer and tongs, about what I was saying. And beginning with your last post you’ve even brought in references which you apparently think buttress your case (actually they say exactly what I said: one photon hitting one cell is sufficient information). And now you claim it’s a peeve for me... Please...

You obviously know a great deal more about the nuts and bolts of photography than I do. That’s cool. But either you haven’t really read what I said, or you haven’t read the reference or you don’t understand what is being said.

You’ve taken on the role of a bit of a typical Bulletin Board Bully (BBB) here, as you have before; in my limited experience, every board has one or two. That’s OK. You know a lot, and I typically appreciate your input when you are not jumping to conclusions about people personally, or misguidedly assessing motives. Right now you’re trying for a reversal (“push a pet peeve...” is BS; it doesn’t matter, but I have learned more about you), and to make it appear that it matters a great deal to me that you accept what I say. Yet you’re the one who initially went off point by point. I followed along the first time since it was harmless, but now you’re getting personal, which BBBs often do.

It really doesn’t matter to me whether you accept it or not, nor does it matter if you go on insisting photons are sometimes waves. I’m not here to convince you, especially when your notions are frozen into place. And oh, the further we go, the more we realize that Einstein was right about a great many things, but not everything. That you reject the authors’ overarching theory is fine, but it doesn’t invalidate over a century of getting to know photons within new and more accurate concepts of the Universe.

Going to break off this discussion now since you’ve taken it to the absurd and are being far too personal. But you’re right, one of us needs to better understand the subject.

This started with a casual remark made about analog/digital distinctions as commonly used in digital camera discussions. Right or wrong, they were casual remarks, intending to draw familiar distinctions. In point of fact, the semantics of the terms "analog" and "digital" bring in extensive argumentation in naturalistic semantics. The distinction is notoriously vague, and one wonders whether /either term/ has the kind of clarity needed for science. But for purposes of discussion here, we recognize that the commonplace use of the "analog-to-digital converter" creates a practical distinction between the way signals are handled in two clearly different domains of engineering.

So the thread was originally about how many bits are needed in camera ADCs/raw files? Assuming a linear conversion, would this not be the number of bits necessary to encode the saturating end of the sensel while still having quantization noise that is essentially hidden (dithered) by the photon/read noise?
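That criterion can be sketched numerically. The sensor numbers below (full-well capacity, read noise) are made up for illustration, and the criterion used is simply that the quantization step, expressed in electrons, should fall below the read-noise floor:

```python
def bits_needed(full_well_e, read_noise_e):
    """Smallest ADC bit depth for which the quantization step (in
    electrons) falls below the read-noise floor, so that quantization
    noise is essentially dithered away by the analog noise."""
    bits = 1
    while full_well_e / (2 ** bits) >= read_noise_e:
        bits += 1
    return bits

# Hypothetical sensor: 45,000 e- full-well capacity, 3 e- read noise
print(bits_needed(45_000, 3.0))  # -> 14
```

With these assumed numbers a 14-bit ADC already satisfies the dithering condition; more bits would only encode noise more finely.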

Hi Luke,

The term is actually well defined in signal processing circles: "A digital signal is a discrete-time signal for which not only the time but also the amplitude has been made discrete."

Discrete-time can be substituted for discrete-space or position, e.g. when a sampling frequency in fractional seconds in time is replaced by fractional distance in space, depending on the particular use (e.g. sound signals with amplitude over time such as frequency, versus image signals such as luminance over pixel positions).

Discussions become 'difficult' when people deviate from the accepted definition. For those interested in more background information, I can recommend this website as a starting point.
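A minimal sketch of that definition: a signal becomes "digital" once both the time axis and the amplitude are made discrete (the sample count and bit depth below are arbitrary illustration values):

```python
import math

def digitize(signal, t_end, n_samples, bits):
    """Turn a continuous-time, continuous-amplitude signal into a digital
    one: sample at discrete times, then quantize each sample to 2**bits
    discrete amplitude levels."""
    levels = 2 ** bits
    out = []
    for k in range(n_samples):
        t = k * t_end / n_samples                      # discrete time
        x = signal(t)                                  # continuous amplitude in [0, 1]
        out.append(min(int(x * levels), levels - 1))   # discrete amplitude
    return out

# A 1 Hz sine mapped into [0, 1], sampled 8 times over one second at 3 bits
samples = digitize(lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t), 1.0, 8, 3)
print(samples)  # -> [4, 6, 7, 6, 4, 1, 0, 1]
```

Drop either step (the sampling or the quantization) and the result no longer meets the definition.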

Of course Bart. As I acknowledged, the term gains a clear distinction in engineering domains. If the term is defined by stipulation, it still serves its practical purpose. In naturalistic semantics however, the terms are subjected to more dialectical stress, and there is a deeper scientific distinction to be made (and defended, if possible). I think this pretty much sums up the entire digression.

It is digital in the sense that it is there or not there. If you illuminate a photomultiplicator to very weak light and plot the output on an oscilloscope you will see discrete pulses for each photon. So I don't think individual photons are analogue.

Hi Erik,

And that is an excellent example of where sticking to the definition makes things easier to discuss. The photon is only discrete in number; there are indeed no fractional photons, but it is continuous in time (arrival time is random). Hence it does not comply with the signal-theory definition of a digital signal, and thus it is analog.

See how much easier it becomes to discuss things when we look for both the discrete-quantity/amplitude and the discrete-time dimensions?

We can digitize (quantize) the photons 'being there' or not by defining a discrete-time interval, and that helps in defining the benefits and drawbacks of 16-bit DSLR components, and features like noise.
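That idea can be sketched as photon counting. The inter-arrival times below are drawn from an exponential distribution (a standard model for a photon stream, assumed here purely for illustration): the arrival times are continuous, and only the count within the chosen exposure interval is discrete.

```python
import random

def count_photons(rate_hz, exposure_s, seed=0):
    """Simulate continuous (random) photon arrival times, then quantize
    'being there or not' by counting whole photons inside one discrete
    exposure interval."""
    rng = random.Random(seed)
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate_hz)   # continuous inter-arrival time
        if t > exposure_s:
            break
        count += 1                      # whole photons only: discrete amplitude
    return count

# ~50 photons expected for a 1000 photons/s stream over a 50 ms exposure
print(count_photons(rate_hz=1000.0, exposure_s=0.05))
```

The exposure interval plays the role of the sampling period; without it, the photon stream stays analog in the signal-theory sense.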

I suspect that if I design the test in order to prove a point (and not for photographic value), 100% will be able to distinguish them.

You are of course right that for 99% of the people and 99% of the cases, 8 bits (with gamma) seems to be sufficient.

-h

8 bits with gamma may be sufficient for many, but it is not optimal for high quality output.

The Nikon D800e has one of the better-performing sensors evaluated by DXO, and they report a screen DR of 13.23 stops. This is for an SNR of 0 dB, or 1:1, and an SNR of 1 is not photographically useful. One may derive DRs for other SNRs from the DXO full SNR curve. The saturation at SNR 0 dB (1:1) can be read from the curve: it is 0.01%. Dividing this value into 100% gives a DR of 10,000:1, or 13.29 stops (log2 10,000 = 13.29), very close to the reported value of 13.23.

One can derive the DRs for SNR 18 dB, 12 dB, and 6 dB (SNRs of 8:1, 4:1, and 2:1 respectively) by interpolation. The interpolated saturations are 0.144%, 0.055%, and 0.021%, and the corresponding DRs are 9.4, 10.8, and 12.2 stops respectively. If one considers a noise floor of 8:1 as acceptable for photographic use, the DR of the D800e is 9.4 stops.
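The arithmetic in the last two paragraphs is easy to check; the saturation percentages below are the ones quoted above from the DXO curve:

```python
import math

def dr_stops(saturation_percent):
    """DR in stops, given the saturation level (in %) at which the SNR
    curve crosses the chosen noise-floor criterion."""
    return math.log2(100.0 / saturation_percent)

print(round(dr_stops(0.01), 2))   # SNR 0 dB  -> 13.29 stops
print(round(dr_stops(0.144), 1))  # SNR 18 dB -> 9.4 stops
print(round(dr_stops(0.055), 1))  # SNR 12 dB -> 10.8 stops
```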

How many bits are needed to encode this DR? An 8-bit sRGB file can encode a total DR of 11.7 stops. The maximum encoded value is 255 and the minimum value is 1, but one must convert to linear (scene-referred) values to obtain the scene luminance ratios. The sRGB value of 1 is converted to linear by dividing by 12.92 (see inverse gamma), which yields 0.0774. Thus, the DR is 255/0.0774, or 3295, or 11.7 stops.

However, effective DR can be limited by posterization. Human vision can detect a luminance difference of about 1%, and the difference between luminance levels should be kept below this amount for the highest-quality output. The steps between levels in gamma-encoded data are variable and are largest at the low end. For example, the difference between the sRGB levels of 1 and 2 is 100%.

In his article on HDR encoding, Greg Ward uses a cutoff of a 5% difference between levels in determining the DR of 8-bit sRGB, since this amount of error may not be noticeable in the darkest levels. This 5% cutoff occurs at an sRGB value of approximately 44, which converts to 6.4 linear. The effective DR for high-quality output at this cutoff is therefore 255/6.4, or 5.3 stops. In log10 notation this is 1.6 orders of magnitude, as shown in Table 1 of the article.
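Those sRGB figures can be sketched with the standard inverse sRGB transfer function (the linear output is kept on the 0..255 scale, matching the arithmetic above):

```python
import math

def srgb_to_linear(v):
    """Inverse sRGB transfer: 8-bit code value (0..255) to linear light,
    kept on the same 0..255 scale."""
    c = v / 255.0
    lin = c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return lin * 255.0

total_dr = math.log2(255.0 / srgb_to_linear(1))   # darkest nonzero code
print(round(total_dr, 1))                         # -> 11.7 stops

hq_dr = math.log2(255.0 / srgb_to_linear(44))     # Ward's ~5%-step cutoff
print(round(hq_dr, 1))                            # -> 5.3 stops
```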

The 5.3 stops of high quality output obtained with an sRGB JPEG is not optimal for the D800e.

An 8 bit sRGB file can encode a total DR of 11.7 stops. The maximum encoded value is 255 and the minimum value is 1

Why not a minimum value of 0? An 8-bit number can represent [0...255] (inclusive) or [1...256] or some other range.

Quote

The 5.3 stops of high quality output obtained with an sRGB JPEG is not optimal for the D800e.

Bill

Hence I avoided claiming that 8 bits is optimal for anything, just that it appears to be "acceptable" for a large percentage of people and cases. Poynton claims (I believe) that 8 bits with ideal gamma allows for transparency (no banding) at contrast ratios up to 50:1. In practice it seems that this is a worst-case estimate (or people accept slight banding).

AFAIK 0..255 is universally used for 8-bit output in Photoshop and other applications. However, I doubt that anyone would notice any difference in output if one used 1..256, since the difference between 0 and 1 is not perceptible on screen or in print. However, the use of 0 would give an infinite proportional step between 0 and 1, and the DR would be infinite. Not very useful.
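For what it's worth, the proportional step between adjacent sRGB code values can be computed directly from the standard inverse sRGB transfer function; the step is indeed largest at the bottom of the scale:

```python
def srgb_to_linear(v):
    """Inverse sRGB transfer: code value (0..255) to linear light (0..1)."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def step_percent(v):
    """Relative luminance step (%) between adjacent sRGB codes v and v+1."""
    lo, hi = srgb_to_linear(v), srgb_to_linear(v + 1)
    return 100.0 * (hi - lo) / lo

print(round(step_percent(1)))      # codes 1 -> 2: a 100% jump
print(round(step_percent(44), 1))  # near the ~5% visibility cutoff
print(round(step_percent(200), 1)) # high codes: roughly 1% steps
```

And `step_percent(0)` would divide by zero, which is the infinite proportional step mentioned above.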

50:1 is pretty low contrast, with a DR of 5.64 stops. This is within the high-quality encoding range described in my prior post. A typical print has a contrast ratio of ~250:1, and screen contrast is even higher. Another authority, Norman Koren, does state: "But image quality in an 8/24-bit file will be adequate, though just barely, if the exposure is correct and little editing is required. This is achievable in studio environments, but less often when using 'natural' (i.e., uncontrolled) light," which is supportive of Poynton and your view.

I do use high-quality JPEG for prints at my local Costco (profile sRGB or their custom profile) and have not noted any banding. However, sRGB is insufficient to record all colors captured by the camera, so I use 16-bit ProPhoto RGB when printing with my Epson 3880. Adobe RGB is a bit better, but still insufficient. My camera does not have a wide-gamut output space for JPEG, so I must shoot raw if I want to render into ProPhoto RGB. 8-bit encoding may work with ProPhoto RGB, but most authorities recommend 16 bit.

Yes, but beyond photons, the resting potential of the photoreceptor, the physiologic issue, is likewise digital. Either it is -40 mV and silent (0), or it is ~+5 mV and firing (1). There is no "in-between". Engineering or physiology, discrete values are the essence of "digital". Temporal and spatial issues are what give it the "analog" experience.
