I'm not in total agreement that a Stouffer step wedge is useful for assessing the performance of digital editing tools in a way that's practical for photographers. It's interesting and enlightening, and it does allow an understanding of what is happening so that one can make informed edits.

I used a step wedge in my darkroom days as a prepress technician, and it was very useful for getting consistent results cranking out line conversions of silver halide B&W prints for reproduction on a commercial press under controlled lighting conditions and exposures. The performance characteristics of this wedge under chemical development were well established and quite familiar.

Using this test chart to characterize and become familiar with the behavior of the ACR/LR tools is pretty much useless for photographers who don't shoot under controlled conditions and who add their own variation by viewing and judging the appearance through emotionally driven edits.

IOW, I don't think anything needs to be fixed. From what I gather from your Charles Cramer link and what's been discussed here, Adobe's engineers have made it clear that they've made it much easier to SEE every element of usable detail in a Raw capture and have given us adequate tools to apply our own tone mapping to those details.

You can't edit what you can't see. Adobe has expanded the dynamics of Raw capture with Process 2012 so you can see to edit. They just reorganized the flow and behavior of the tools to make them more logical and intuitive.

Just wish I didn't have to fork over an additional $200 on top of my $133 CS3-to-CS5 upgrade now that functionality is available that wasn't in previous versions.

And BTW, who the heck needs to ETTR a scene shot in broad daylight? ETTR is only useful for exposing in such a way as to raise the signal relative to the noise. The noise reduction improvements and expanded dynamics in CS6 pretty much make ETTR pointless.

If one applies positive exposure in ACR/LR, the darker tones are moved to the right where they appear to be expanded because of the gamma encoding. To evaluate compression and expansion of tones one should use a log histogram...

Bill,

Let’s keep in mind that linear scaling and gamma encoding are commutative operations, meaning that their sequence can be exchanged without changing the result:

If one applies any positive exposure, represented by a straight line with slope > 1, to linear data, it is still a straight line when applied instead to gamma-encoded data (with a different multiplier, though).

Contrast is homogeneously increased, numerically in terms of RGB ratios as well as, more or less, perceptually (the differences with regard to perception would certainly be worth a separate discussion). To a first order, the distribution of gamma-encoded data is more in line with perception than linear data; perceived mid gray, for example, is found somewhere around the middle of a gamma-encoded tonal scale.

Needless to say, gamma as such remains invisible in a color-managed environment (convert-to does not change the image's appearance on screen). It is just the distribution of gamma-encoded data, the numbers as well as the corresponding curves and histograms, that can be useful while allowing a kind of intuitive understanding. Whereas double-log is probably more for my brighter moments /:)
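A small numerical sketch of the commutativity point above: scaling linear data by a factor k and then gamma-encoding gives the same result as gamma-encoding first and then scaling by k**(1/gamma), i.e. the same straight line with a different multiplier. The gamma and exposure values here are chosen only for illustration.

```python
# Sketch: (k * x) ** (1/gamma) == k ** (1/gamma) * x ** (1/gamma),
# so "scale then encode" and "encode then scale" commute (with a
# different multiplier on the gamma-encoded side).
gamma = 2.2
k = 2.0  # +1 EV of positive exposure

for lin in (0.01, 0.1, 0.25, 0.4):
    scale_then_encode = (k * lin) ** (1 / gamma)
    encode_then_scale = k ** (1 / gamma) * lin ** (1 / gamma)
    # identical within floating-point precision
    assert abs(scale_then_encode - encode_then_scale) < 1e-12
```

Note that the multiplier on the gamma-encoded side is 2**(1/2.2), roughly 1.37, rather than 2.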


Kind regards,

Peter

Peter,

I'm not sure that I understand your point. The exposure control in PV2012 is not linear in the highlights, so increasing exposure will result in tonal compression when the highlights are rolled off. Log-log axes are customarily used for characteristic curves because they linearize gamma-encoded images. For example, here are the values of the Stouffer wedge for gamma 1.0 and 2.2 with linear axes:

And with log-log:

The reason for gamma encoding of one's editing space is to make it more perceptually uniform, so that a proportional change at the lower end of the scale has the same effect as one at the upper end. Gamma encoding is not there to accommodate the log-like nature of our visual system, since the inverse gamma function is applied when the image is displayed. It is true that gamma encoding makes no difference in a color-managed system: a gamma 1.0 image does not appear dark.
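The wedge values for the two encodings can be sketched with a few lines of arithmetic. This assumes a Stouffer-style wedge with 0.10 density (~1/3 stop) per step; the step count and 8-bit output levels are chosen only for illustration.

```python
# Sketch: levels of a 0.10-density-per-step wedge expressed in 8-bit
# form, for gamma 1.0 (linear) and gamma 2.2 encodings.
steps = 21
for i in range(steps):
    t = 10 ** (-0.10 * i)                    # linear transmittance of step i
    level_g10 = round(255 * t)               # gamma 1.0 level
    level_g22 = round(255 * t ** (1 / 2.2))  # gamma 2.2 level
    print(i, level_g10, level_g22)
```

On linear axes the gamma 1.0 values crush into the bottom of the plot within a few steps, which is why the log-log presentation is so much more readable in the shadows.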

So that readers can note the differences between linear and log histograms, these are shown for various exposure adjustments.

Please take some time to check out the attached spreadsheet with its two tables and graphs. Only one number per table is entered (black, bold, framed); all the rest calculates from cell to cell.

Best regards, Peter

Peter,

I studied your spreadsheet and prepared a detailed response with some samples of my own, but the session timed out before I could post it, and I don't have the energy to reconstruct it. This problem has been noted by others, and the sysop should take steps to prevent it. I do have my settings set to remain logged on, but they are not honored.

Stouffer wedge with calculated values for gamma 1.0, gamma 2.2, and sRGB, along with an image of the wedge rendered with ACR 7.1 using PV2010 with a linear tone curve. Note the better precision of the log-log plot in the shadows.

Detect or correct? There’s a difference. Exposure might allow you to detect clipping but what Eric and others are saying is it is the wrong tool to alter said clipping. That is the job of the White slider in PV2012.

If you can pony up a mere $25 for George Jardine’s new videos on LR4 and PV2012, video #4 is worth the price of admission alone for seeing how all the new tools interact.

This is not necessarily true. Clipping due to global overexposure is best dealt with using the Exposure control. See my post below. Eric confirmed this in a previous exchange involving PV2010, and the same principle applies to PV2012.

No, it doesn't. Eric has confirmed this. The article you linked previously conflicts with the portion I've bolded above. PV2012 requires a different approach. It's true that the Exposure control is a linear adjustment, but it's more concentrated in the midtones.

Well, look at Tutorial #4, in the section dealing with the Big Sur photograph at about 10:14. That image was globally overexposed, or one could merely say exposed to the right, and George dealt with it using the Exposure control. On the other hand, the image of the man in the walkway (at about 20 minutes into the tutorial) had good exposure for the midtones, but the highlights were burnt; in that case, George used the Highlight control. These adjustments are in agreement with my original contention.

... in general, ACR / LR is the wrong tool to use to understand / analyze the input capture data. That's because all of the feedback mechanisms (visualization and numbers) are based on rendered output, not the input. I understand the idea (and temptation!) of wanting to use ACR/LR to analyze the input data, but it was not designed for that purpose and is indeed rather problematic for doing so.

... in general, ACR / LR is the wrong tool to use to understand / analyze the input capture data. That's because all of the feedback mechanisms (visualization and numbers) are based on rendered output, not the input. I understand the idea (and temptation!) of wanting to use ACR/LR to analyze the input data, but it was not designed for that purpose and is indeed rather problematic for doing so.

Eric,

Now that you have re-entered the thread, did you see my comment on the anomalous behavior of the clipping indicator after white balance was applied? This led to my initial post that the indicator and the Alt+Exposure or Alt+Whites controls were not helpful.

PV2012 is a definite advance in its highlight rendering capabilities, but the new math, image-adaptive highlight rendering, and auto highlight recovery do complicate using ACR to look at input data. In my experience, PV2010 with a linear tone curve does a reasonable job in this area. The ICC recommended a method for obtaining scene-referred data with ACR, and it seems to work with PV2010. Any comments?

... in general, ACR / LR is the wrong tool to use to understand / analyze the input capture data.

Adobe (Photoshop) actually had a good way to deal with complexity: many tools have a "More Options" checkbox, or fewer …

Reducing complexity is certainly a popular marketing approach in general (with many potential pitfalls, though), and furnishing Camera Raw with permanent auto-functions such as highlight recovery (or was it called D-Lighting?) may make it easier for some users. However, it may make it harder for other users, e.g. those practicing ETTR, to know whether the Raw channels clipped. And I thought I had understood that Raw is all about perfect control, to "Rendering the Print".

... in general, ACR / LR is the wrong tool to use to understand / analyze the input capture data. That's because all of the feedback mechanisms (visualization and numbers) are based on rendered output, not the input. I understand the idea (and temptation!) of wanting to use ACR/LR to analyze the input data, but it was not designed for that purpose and is indeed rather problematic for doing so.

Adobe (Photoshop) actually had a good way to deal with complexity: many tools have a "More Options" checkbox, or fewer …

Reducing complexity is certainly a popular marketing approach in general (with many potential pitfalls, though), and furnishing Camera Raw with permanent auto-functions such as highlight recovery (or was it called D-Lighting?) may make it easier for some users. However, it may make it harder for other users, e.g. those practicing ETTR, to know whether the Raw channels clipped. And I thought I had understood that Raw is all about perfect control, to "Rendering the Print".

Coming back to Eric's comment, one can use Rawdigger, a tool designed specifically for the evaluation of raw files, to determine clipping. Here is my Stouffer wedge, which is overexposed. Step 4 is below clipping; there is a good spread between the minimum and maximum, and the selection contains 596 different levels.

Step 3 is not entirely clipped, but there are only 83 levels in the same selection, so 513 levels have been clipped. The selection has too few levels to show a good bell-shaped curve, but it is reasonable to assume that the right portion of the normal curve has been truncated. One would expect the standard deviation to fall with clipping in the ADC, and it would be zero for complete clipping. However, with the D3 the green channels saturate before the ADC clips, and the observed standard deviation is due in part to pixel response non-uniformity. The last fully intact step is step 4. Each step is 1/3 stop.
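The effect on the standard deviation can be sketched with synthetic data. This is not D3 data; the saturation level and patch statistics are invented purely to show that truncating the right side of a roughly normal distribution lowers its spread.

```python
# Sketch with invented values: clipping a roughly normal patch
# distribution at a saturation level truncates its right tail,
# which lowers the measured standard deviation.
import random

random.seed(0)
white = 3800  # assumed saturation level (invented)
patch = [random.gauss(3700, 150) for _ in range(20000)]
clipped = [min(v, white) for v in patch]

def std(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(std(clipped) < std(patch))  # clipping reduces the spread
```

With complete clipping every sample would sit at the white level and the standard deviation would collapse to zero, as described above.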

If we look at the clipping indicator with PV2012 at default settings with the Adobe Standard profile, we do see that step 1 is clipped. The automatic recovery has brought steps 2 and 3 below clipping.

PV2010 with default settings overestimates the degree of clipping. This is due in part to the +0.5 EV baseline offset that Adobe uses for this camera.

However, if we correct for the baseline offset by using -0.5 EV exposure and set the tone curve to linear, we get an accurate indication of clipping.

The take home point is one should use the proper tool to check for clipping with ETTR. Rawdigger is designed for this purpose. However, one can use PV2010 with a linear tone curve and the proper exposure correction to obtain an accurate indication of clipping, at least under these test conditions. One could save these settings as a preset so that they can easily be recalled.

So there is no equivalent with PV2012. You’d make this preset, toggle it on to view a more accurate clipping indication, then move back to PV2012 and adjust to the desired result?

Hi Andrew,

I think it's more about a means to evaluate whether one exposure in a bracketed series is a better starting point than another. In PV2010 that is more straightforward to do, but we're looking for a useful shortcut in the workflow when we want to use PV2012 (which certainly has some strong points).

It would be super nice if there were some sort of Raw clipping indicator that made sense, other than having to make round trips to RawDigger, determine per-camera (per-ISO?) default DNG exposure biases/offsets, and such. The current PV2012 clipping indicator does not show what's needed; in fact, I also have a hard time understanding what triggers it (it ain't Raw clipping, that much is clear).

So there is no equivalent with PV2012. You’d make this preset, toggle it on to view a more accurate clipping indication, then move back to PV2012 and adjust to the desired result?

Yes, that is what I would do. It is really ridiculous that one has to jump through such hoops merely to determine clipping in the raw file. A raw histogram such as the one offered in RawTherapee would be the ideal solution to this problem, and I don't see why it is not offered; I would think that the programming needed to implement such a feature would be minimal. Eric, where are you when we need you?

Please take some time to check out the attached spreadsheet with its two tables and graphs. Only one number per table is entered (black, bold, framed); all the rest calculates from cell to cell.

I encourage interested readers to open Peter's spreadsheet and PDF. Exposure adjustment in PV2010 involves scaling by a factor; a +1 EV adjustment multiplies all values by 2. As Peter's documents show, one can perform the multiplication on the gamma 1.0 raw file or on the rendered gamma 2.2 file.

George Jardine's and Andrew's step wedges are not produced by photographing an actual step wedge and rendering the raw file, but synthetically in Photoshop. The steps are perceptually uniform and are evenly spaced in the gamma 2.2 space.

As Peter explains, gamma 2.2 compresses the highlight tones. Therefore, the steps in the raw file are larger in the shadows than the highlights. This can be demonstrated in Photoshop by converting to a profile with a gamma of 1.0, as shown below.

Alternatively, one can calculate the raw values, which would be proportional to the luminances of a photographed wedge. The calculation involves applying the inverse gamma function; the calculations demonstrated here are for sRGB (approximately gamma 2.2). The steps in the shadows are very small.
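The inverse transfer function used for that calculation can be sketched as follows; this is the standard piecewise sRGB decoding (a linear segment below 0.04045, a 2.4-power curve above), applied to a few evenly spaced input values for illustration.

```python
# Sketch: the inverse sRGB transfer function maps perceptually uniform
# values back to (approximately) linear light, showing how small the
# shadow steps become in the linear/raw domain.
def srgb_to_linear(v):
    """v in 0..1; piecewise inverse of the sRGB encoding."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Evenly spaced sRGB steps compress into tiny linear steps in the shadows:
for v in (0.1, 0.2, 0.9, 1.0):
    print(v, round(srgb_to_linear(v), 5))
```

The step from 0.1 to 0.2 in sRGB spans only about a tenth as much linear light as the step from 0.9 to 1.0, which is exactly why the raw-domain shadow steps look so cramped.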

For the purpose of demonstrating the visual effects of adjusting the exposure in the raw converter, the perceptually uniform wedge that the DigitalDog and George Jardine used in their tutorials is preferable to the Stouffer wedge that I had been using and that Charles Cramer used in his excellent post here on LuLa. As George explains in his tutorial, editing the synthetic step wedge TIFF in ACR/LR is not exactly the same as editing a raw file in these programs.

Adjustment of exposure in PV2012 simply moves the histogram to the left or right without changing the spacing of the steps (this is what I would expect, but I have not actually tested it). However, as clipping is approached, increasing the exposure rolls off the highlights: the steps move closer together in the highlights, and the contrast decreases, as can be seen from the slope of the characteristic curve.
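A toy shoulder function can illustrate what a highlight roll-off of this general kind does to step spacing. To be clear, this is NOT Adobe's actual PV2012 math, which is image-adaptive and unpublished; it is just a generic smooth shoulder, linear below an assumed knee and compressing everything above it toward 1.0.

```python
# Toy highlight shoulder (not Adobe's actual PV2012 curve): identity
# below the knee, smooth exponential roll-off above it. The roll-off
# is C1-continuous at the knee (value and slope both match).
import math

def shoulder(x, knee=0.8):
    if x <= knee:
        return x
    return 1.0 - (1.0 - knee) * math.exp(-(x - knee) / (1.0 - knee))

# Evenly spaced steps move closer together as they climb the shoulder:
print(round(shoulder(0.90) - shoulder(0.85), 4))  # step low on the shoulder
print(round(shoulder(1.10) - shoulder(1.05), 4))  # same-size step, higher up
```

The second printed difference is smaller than the first: equal input steps occupy less and less output range near the top, which is the decreasing slope of the characteristic curve described above.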

George Jardine's and Andrew's step wedges are not produced by photographing an actual step wedge and rendering the raw file, but synthetically in Photoshop.

Absolutely, and the files were more to give an idea of what the newer PV controls are doing than to analyze anything with respect to the actual raw data. George has been working hard on a real-world raw step wedge (see: http://mulita.com/blog/?p=3358).

Absolutely, and the files were more to give an idea of what the newer PV controls are doing than to analyze anything with respect to the actual raw data. George has been working hard on a real-world raw step wedge (see: http://mulita.com/blog/?p=3358).

Thanks for the link. I looked at George's web site; he is doing some good work, and his tutorial on the LR develop module is outstanding.

If you are looking at the histogram spacing of the spikes from the wedge, the Stouffer is not good, since it is not perceptually uniform. However, plotting the characteristic curve from Imatest as I have shown above does give a good handle on the tone curve, though it is difficult to evaluate some of the fancy math from a simple tone curve. Some wonder why the plots are log-log; of course, that is how H&D curves have always been plotted. An interesting characteristic of the log-log plot is that the curve is closely related to the Opto-Electronic Conversion Function (OECF), which is a linear plot of exposure vs. pixel level. Interested readers should refer to the Imatest documentation (see under Stepchart, output, second figure).

To Reduce Complexity is certainly a popular marketing approach in general (many potential pitfalls though), and to furnish Camera Raw with some permanent auto-functions such as Highlight-recovery (or was it called D-lighting ) may make it easier for some users, however, it may make it harder for other users, e.g. those practicing ETTR, to know if Raw-channels clipped. And I thought I had understood that RAW is all about perfect Control, to "Rendering the Print".

I disagree. ACR and LR are about trying to perfect the image rendering process, not the capture process. My view is that optimizing the capture process (e.g., with ETTR) should be done in the camera, and the tools & feedback mechanisms that you need to perform ETTR optimally should be provided by the camera, not the post-capture image processing software.

Now that you have re-entered the thread, did you see my comment on the anomalous behavior of the clipping indicator after white balance was applied? This led to my initial post that the indicator and the Alt+Exposure or Alt+Whites controls were not helpful.

PV2012 is a definite advance in its highlight rendering capabilities, but the new math, image-adaptive highlight rendering, and auto highlight recovery do complicate using ACR to look at input data. In my experience, PV2010 with a linear tone curve does a reasonable job in this area. The ICC recommended a method for obtaining scene-referred data with ACR, and it seems to work with PV2010. Any comments?

Hi Bill, yes, our WB math has changed in PV2012, which is the reason for the difference between the two. Even with PV2010 you won't get fully scene-referred data out in many cases, because color profiles can (and do) apply fairly non-linear color mappings.