Yes, those graphs are great, and camera makers could introduce them in their cameras instead of the silly JPEG histograms they have now. They were plotted with Histogrammar, but be careful if you decide to use it, since the EV plots only make sense if obtained from images with zero processing (and believe me, mainstream RAW developers don't make it easy to obtain zero-processing outputs).

* The flaw, or rather the subjective point, of this definition is: what is a 'high dynamic range scene'? My choice is to relate the decision to how difficult it is to compress the scene's DR into the output device's DR (paper: ~4 stops, monitor: ~6-7 stops) in a realistic manner that looks pleasant to the observer. Based on my experience, a scene with >8 stops of DR begins to require some processing but can still be tonemapped successfully without too much effort. >10-12 stops definitely needs more skilled processing, and I consider it a good figure to speak about HDR imaging...

Yes, by just comparing scene DR with today's output DR, there are probably (too) many scenes which could be called HDR.

Likewise, I'd suggest that the actual differentiator of an "HDR image" is the application of pixel-selection-based tone mapping techniques, e.g. mask-based exposure blending, as opposed to (or in addition to) a global tone curve, in order to reproduce all relevant details in the shadows and highlights.

Such discontinuous tone mapping, when overdone or poorly implemented, may easily explain the hated "HDR look" as discussed here.

It's also important to note that while HDR images expand the dynamic range of the scene capture, they don't really "expand" the dynamic range of the final image that we see. Most HDR software up-rezes the component images into a 32-bit composite. If my calculations are correct that's 4,294,967,296 colors. This is a true expansion of the original dynamic range. However, since our monitors and printers are generally 24-bit sRGB displays with 16 million colors, they can't display the 32-bit images, so the software DOWN-rezes them to 24-bit, 16-million-color images. This down-rez affects both luminosity and color depth and compresses both. Both the up-rez and down-rez are "tone mapping".

It's not 32 bits per pixel in the ordinary sense, which would be 10 or 10 2/3 bits per channel. It's 8 bits per channel with an 8 bit scaling factor. Thus it is like a floating point number, and has much greater range than 4 billion colours.
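That 8-bits-per-channel-plus-8-bit-scaling-factor layout is essentially the shared-exponent scheme used by the Radiance RGBE (.hdr) format. Here's a minimal Python sketch of the idea; it's an illustration, not any particular library's implementation:

```python
import math

def float_to_rgbe(r, g, b):
    """Pack linear RGB floats into Radiance-style RGBE bytes
    (three 8-bit mantissas sharing one 8-bit exponent)."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)       # v = mantissa * 2**exponent
    scale = mantissa * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_to_float(r, g, b, e):
    """Unpack RGBE bytes back to linear RGB floats."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)         # 2**(e - 136)
    return (r * f, g * f, b * f)
```

Because the exponent is shared, a single 32-bit pixel can cover an enormous luminance range, even though each channel mantissa is only 8 bits. That's why it behaves like a floating point number rather than a plain 4-billion-colour integer.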


Ok (read: the math just went over my head!), but the point remains the same. No device we have today can display the composite image, and it has to be down-rezed for print or display on a computer monitor....

Most HDR software up-rezes the component images into a 32-bit composite. If my calculations are correct that's 4,294,967,296 colors.

There are bits and there are colors, and the two may or may not correlate in any meaningful way. Let's not get color gamut, bit depth/encoding and range all lumped together.

Here's my understanding (and I'm more than open to correction; I'm still getting my head around all this HDR/tone mapping semantics).

The need for 32-bit encoding and floating point math is to have a practically unlimited set of values for describing what can be a huge number of tones. We all know that an 8-bit per channel document in and of itself doesn't have less or more dynamic range than a 16-bit per channel image. The 16-bit per channel document has the POTENTIAL to have more range. That is, if you have more range of tones than you can define with specific values, that's a big problem! So with 0-255, you can have a pixel value of 89 and 90, but you can't have a value of 89.5 any more than you can have a value of 89.7. So my understanding is, higher bit depth along with floating point math provides a practically unlimited set of values to encode what can be a huge number of tones. A 32-bit LDR image is still LDR!
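The 89/90 point is easy to demonstrate in two lines. This is just a toy illustration of why in-between tones get lost in integer math but survive in floating point:

```python
a, b = 89, 90                  # two adjacent 8-bit tonal levels
int_mid = (a + b) // 2         # integer division collapses back to 89
float_mid = (a + b) / 2        # floating point keeps the in-between tone
print(int_mid, float_mid)      # 89 89.5
```

Every editing operation in an integer pipeline has to round back to a whole code like this, which is where posterization comes from.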

Tone mapping: in a generic sense, any time you alter the tones of an image (using curves on a 24-bit image, for example) you are tone mapping. Sound reasonable?

HDR and the range of the scene versus the capture. I agree that bracketing a scene the camera can capture in one shot probably should not be called HDR. But playing devil's advocate, let's say the scene is 12 stops and your camera can capture that range. But you bracket 2-3 shots and load them into your HDR software of choice to tone map the appearance you desire (let's not go into the ugly HDR-effects look that makes myself and others want to vomit). We could alter sliders in our raw converter on one capture, yes, and get the rendering with one image. But is it possible that by bracketing and using a product we prefer, we can alternatively tone map better/faster/easier? Of course that's not a great route to take if something is moving, you don't want to go the tripod route, etc.

I've been playing with this a bit using just one image (tone mapping). In Lightroom I build a virtual copy, apply two different tone mapping moves, then use Enfuse to create one image. First of all, I find Enfuse does a magnificent job of HDR/tone mapping, whatever you want to call it, with a very natural look. It's also easy to use and costs very little (donationware). I've taken bracketed images into HDR Express, Photoshop's HDR and Photomatix and keep preferring the clean and natural look I get from Enfuse. Plus it works in LR, which I love.

While a single capture may have all the tones we want to express, is it unreasonable to use the better HDR/Tone Mapping tools to produce a rendering we wish to express?

RFPhotography

HDR is 32 bits per channel, not 32 bits per pixel (which would be 10 and change bits per channel). And yes, it is uprezzed from the 8-, 12-, 14- or 16-bit original input images.

Andrew is right about why floating point is needed. There are just far too many possible colours to keep them all in an integer space. It's got nothing to do with the dynamic range.

Insofar as bit depth and dynamic range go, I heard an example not that long ago that sums it up beautifully. Think of a building. That building has 10 floors. Between each floor are stairs. The total number of floors is the dynamic range. The number of stairs is the bit depth. Within that 10-floor building you can have 8 stairs per floor or 16 stairs per floor (i.e., 8-bit or 16-bit). But the total dynamic range (the number of floors, or 'stops') doesn't change. To take the analogy further, the HDR building might have more floors when it's an actual HDR image (which we know we can't display directly). And rather than discrete stairs, the HDR building has an escalator that is smoothly variable between any of the floors.

Enfuse is a nice program, particularly with Tim Armes' LR front end. It's not true HDR, however. The images don't enter the 32-bit space but are retained in the native bit depth. Enfuse is an image blending program rather than an HDR program. Because you're not going through the strong local tonemapping routines of an HDR tonemapper, it tends to give more natural results. It's the local contrast operators that really take you into the land of the surreal. Natural results can be obtained with actual HDR programs as well, some more effectively than others; it just takes a little more work and practice.

WRT multi-processing a single file and feeding those into Enfuse: you're not gaining anything, Andrew. You're not gaining additional DRange by multi-processing the single RAW file, just pushing around what already exists. Now, I will say I haven't tried it with Enfuse, so I don't know if the result is different from running a single image through Enfuse (maybe that's not possible), but with an HDR app it makes no difference. While I like the results that can be achieved with Enfuse, my biggest issue with it is speed. I find it brutally slow, so it's not viable for a volume workflow. But it does produce really nice results.

HDR is 32 bits per channel, not 32 bits per pixel (which would be 10 and change bits per channel). And yes, it is uprezzed from the 8-, 12-, 14- or 16-bit original input images.

Andrew is right about why floating point is needed. There are just far too many possible colours to keep them all in an integer space. It's got nothing to do with the dynamic range.

The _number_ of colors that can be created using a 32-bit integer is equal to the number of colors that can be created with a 32-bit floating point number. But the distribution of those values, and the error, is very different: floating point numbers can represent very small and very large numbers.

I believe that an integer representation with gamma (such as JPEG uses) can have very similar properties to floating point numbers if implemented in the right way (regular JPEG is not suited for HDR).

-h


RFPhotography

Well, I'm not a mathematician, but I don't see how an integer-based system can have as many colours as a floating point system. If even going to one decimal place, I can get 9 more levels between each integer in each channel. To me, that's more colours.


What is the _number_ of shades available in each? The answer is the same: 4, i.e. 2^2, for both. What kind of data can be stored within them? Basically any data. But floating point is a lot easier to work with for many kinds of data and operations. The distribution of available numbers and the quantization error make a lot of sense for many tasks.
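The same counting argument holds at 32 bits, and it's easy to check. A quick sketch using only the standard library: both encodings have exactly 2^32 bit patterns, but the spacing between adjacent float values depends on magnitude, while integer spacing is always 1:

```python
import struct

def next_float32(x):
    """Return the next representable 32-bit float above x,
    by reinterpreting the bits as an integer and adding 1."""
    (bits,) = struct.unpack('<I', struct.pack('<f', x))
    (nxt,) = struct.unpack('<f', struct.pack('<I', bits + 1))
    return nxt

# Both encodings offer exactly the same number of bit patterns...
patterns_int32 = 2 ** 32
patterns_float32 = 2 ** 32

# ...but float spacing varies with magnitude: fine steps near 1.0,
# much coarser steps up at a million.
print(next_float32(1.0) - 1.0)    # ~1.19e-07
print(next_float32(1e6) - 1e6)    # 0.0625
```

So "8.1 between 8 and 9" doesn't buy extra values overall; every float you can write down occupies one of the same 2^32 slots, just distributed non-uniformly.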

-h


RFPhotography

As I said, I'm not a mathematician. As simple as your example may be, I still don't see it as being the same. It seems to me that you're using an integer-based equation (2^2) to describe a non-integer-based system.

In an integer system, I can combine R=8, G=57, B=240. That gives me a combined colour; it's sort of a neon blue. But if I can combine R=8.1, G=57, B=240, that's a different combined colour. It's very close to the previous one, but it is different. Forget about negative numbers; I don't think that all HDR image formats can accept negative numbers (but I'm not positive).

Enfuse is a nice program, particularly with Tim Armes' LR front end. It's not true HDR, however. The images don't enter the 32-bit space but are retained in the native bit depth.

So going back to the semantics of all this, HDR isn't solely about taking multiple captures that exceed the range of one and producing a new rendering from the group; it has to also involve 32-bit processing?

Quote

Enfuse is an image blending program rather than an HDR program. Because you're not going through the strong local tonemapping routines of an HDR tonemapper, it tends to give more natural results. It's the local contrast operators that really take you into the land of the surreal. Natural results can be obtained with actual HDR programs as well, some more effectively than others; it just takes a little more work and practice.

So what are the advantages?

Quote

WRT multi-processing a single file and feeding those into Enfuse: you're not gaining anything, Andrew. You're not gaining additional DRange by multi-processing the single RAW file

I agree, and I thought I made that clear. What I am doing is using an alternative tone mapping procedure that may be better, faster, easier.

Quote

While I like the results that can be achieved with Enfuse, my biggest issue with it is speed. I find it brutally slow, so it's not viable for a volume workflow. But it does produce really nice results.

It is slow but the results are such that I get to my goals faster in the long run.

Bob, what hjulenissen is trying to say is that floating point is chosen in HDR formats for simplicity of operations, but it is not strictly necessary for encoding a high dynamic range image, and hence it is not a condition for HDR.

If you expand an integer 16-bit format with a gamma curve, you can encode and process HDR information in it. So HDR is not about floating point formats. I don't agree either that HDR is about bracketing, nor about any particular kind of local contrast algorithm. All of those are tools to achieve the goal: represent an HDR input scene on a limited-DR output device. They are not a necessary condition to talk about HDR.

This is the number of levels devoted to each f-stop with 16-bit integer:

16-bit integer encoding, linear (left) vs 2.2 gamma (right)

Left is linear, as camera sensors capture information. Right is gamma encoded, and as can be seen, the 2.2-gamma 16-bit integer format is able to represent a high dynamic range; in the 20th stop we still have about 45 tonal levels. A specific floating point format would be more evenly distributed (gamma will always devote more levels to the highlights), but using integer formats is still possible.
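The numbers behind that plot can be reproduced with a few lines. This is a sketch that counts how many 16-bit codes land in each stop below clipping, for linear vs 2.2-gamma encoding:

```python
def levels_in_stop(n, bits=16, gamma=1.0):
    """Approximate number of integer codes falling in the n-th
    stop below clipping, for a given encoding gamma."""
    hi = 2.0 ** -(n - 1)      # relative luminance at the top of the stop
    lo = 2.0 ** -n            # relative luminance at the bottom
    codes = 2 ** bits - 1
    encode = lambda L: L ** (1.0 / gamma)
    return round(codes * (encode(hi) - encode(lo)))

# Linear: the brightest stop eats half of all codes; the 20th stop gets none.
print(levels_in_stop(1), levels_in_stop(20))
# Gamma 2.2: codes are spread far more evenly, and the 20th stop
# still retains roughly 45 levels, as the plot shows.
print(levels_in_stop(1, gamma=2.2), levels_in_stop(20, gamma=2.2))
```

This is the whole argument in miniature: the gamma curve redistributes a fixed integer code budget across the stops, which is why an integer format can carry HDR information.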

So if you have a camera that can capture 15 stops of DR in a single shot, create copies of this capture at different exposures, and blend them in some way to obtain an output image that can display the entire DR of the original scene on some monitor or print, then you are doing HDR. Any extra proposed requirement for HDR is inventing a non-existent definition.

if you have a camera that can capture 15 stops of DR in a single shot, create copies of this capture at different exposures, and blend them in some way to obtain an output image that can display the entire DR of the original scene on some monitor or print, then you are doing HDR.

In an integer system, I can combine R=8, G=57, B=240. That gives me a combined colour. It's sort of a neon blue. But if I can combine R=8.1, G=57, B=240, that's a different combined colour. It's very close to the previous one, but it is different.

I have tried to give you examples, but I have obviously been unable to present them in such a way that we can agree. Perhaps Wikipedia is better at explaining than me.

Let us agree then that no camera, display (or probably printer) in the world operates on floating point numbers. For those devices it is all about integers. Anything that we operate on in between originated as integers once, and will be converted to integers before we are able to see an image.

It seems to me that the benefit of using 32-bit floats in HDR representations is being able to allocate bits to the representation of the low tones, up to the practical limits. Using supersampling, which is one of the commonplaces of HDR, it is possible to take several integer samples and produce a normalized average of higher precision. If nothing else, this allows for a greater amount of processing (e.g., relighting, etc.) without losing tonal coherence.

RFPhotography

We've had this discussion before, GL. We don't, and won't likely ever, agree.

When I talk about HDR I'm talking about it in the technical sense, as it relates to photography and as it was derived from the motion picture industry. True HDR formats are 32-bit. In the photographic sense bracketing is required because no capture device can record such a broad brightness scale. HDR has become the default term to describe all methods of extending dynamic range, like Xerox is the default term for photocopying and Ski-Doo is the default term for all recreational snow machines. But it's not correct. I like JP Caponigro's term XDR to describe the broader discussion of extending dynamic range, and XDR includes HDR. When capture and output devices are capable of rendering the wider brightness range of HDR (or XDR) images, it won't be High or Extended any longer; it will be normal. Cameras can already capture far more range than displays and printers can reproduce. If the camera can capture 15 stops of brightness there's no need to do any blending. At that point, it's all about editing (tonemapping) to bring the image back into a range that can be seen on screen and printed. We don't (and won't) agree that that is HDR (or XDR).

H, I don't disagree with the concept that the images started out in an integer space and will end up in an integer space. It's what happens in between that's the issue, and where I think I'm not completely following your line of thinking.

Andrew, what are the advantages? Depends. Maybe there are none. Horses for courses, as they say. And if I misread your comments then I apologise.

It's what happens in between that's the issue and where I think I'm not completely following your line of thinking.

Hmm, so HDR is about what happens in between, no matter where the information came from or goes to? Just look at these two scenarios:

Scenario A: A scene with 8 stops of DR, a camera with 8 stops of DR; you bracket 3 shots and feed them into some 32-bit HDR tone mapping software. Then you render the final output to a print.

Scenario B: A scene with 11 stops of DR, a camera with 11 stops of DR; you make a single shot, create copies at different exposures and blend them in Photoshop layers. Then you render the final output to a print.

According to your technical definition of HDR, the scenario with less scene DR is HDR while the case with more DR is not. Fantastic.

RFPhotography

What do I mean by such a broad scale? Beyond what cameras can capture. The new K5 can do up to, what, about 12 stops? My D700 can do about 11. So I'm thinking of the range beyond that: basically anything beyond what the camera being used can capture. It's not that difficult a concept to grasp.

In both your Scenario A and Scenario B there's no need for either HDR or XDR. You're not gaining anything by using HDR software, automated blending software or manual blending with layers. It's a moot point.