Well, I have the results, and they pretty much echo what Bill and John came up with. Here they are:

I've plotted the mean error in CIEL*a*b* Delta-E, the worst-case error, and the mean plus two standard deviations for the 121-point sample target. I could do more work to present scatter plots, DeltaEab, hue-angle shifts, Lch, CMC, and all the rest of it, but the errors are so small that I'm pretty sure it's not worth the effort.
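For anyone who wants to replicate the arithmetic, here's a minimal sketch of those three statistics, in Python rather than my Matlab, just for illustration; the function names and the list-of-triples layout are my own, not from any particular library:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE 1976 Delta-E: Euclidean distance between two L*a*b* triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def delta_e_stats(reference, test):
    """Mean, worst-case, and mean + 2 sigma of Delta-E over paired patches."""
    des = [delta_e_76(r, t) for r, t in zip(reference, test)]
    mean = sum(des) / len(des)
    sigma = math.sqrt(sum((d - mean) ** 2 for d in des) / len(des))
    return mean, max(des), mean + 2 * sigma
```

For 121 patches, `reference` and `test` would each be lists of 121 Lab triples.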

Why are the errors for no push so big? Because I used a different exposure for the baseline image and the test image. That tells me that we're pretty much just looking at noise here.

Thanks to Eric for setting me on the path to working with the real raw files, and for pointing out that TIFF images have a different processing pipeline.

In a way, I'm sorry that I sounded the alarm based on TIFF images, which turned out to be a red herring. On the other hand, I've learned a lot from this exercise. I hope you folks don't think I've wasted your time, and I apologize if you think I have.


Jim,

I'm not clear on what you are using as your "standard" in this case. I assumed, perhaps incorrectly, that your standard was an image with no exposure adjustment, and that the 2-, 3-, and 4-stop pushes were compared back to that. In light of your statement here, I'm a little confused.

***Never mind; I realize now that I'm looking at a group of data for 121 patches as opposed to a single color. I didn't read carefully; my mistake.

ACR/Lightroom assume that integer TIFFs (the vast majority of TIFFs are written as integer TIFFs) are output-referred, and that floating-point TIFFs are scene-referred (e.g., HDR files). The bit depth itself is not critical (ACR/LR will read 16-bit, 24-bit, and 32-bit floating-point formats for TIFF and DNG), but the data type (floating point vs. integer) matters to the default interpretation.


Thanks, Eric, that makes sense to me. I'm still trying to figure out how to produce synthetic 32-bit floating-point TIFF files. I can make 32-bit FITS files, but I can't figure out how to convert them to TIFFs. I may need to buy another program.

On the real-image front, you probably saw the results with all the noise. I am now in the process of figuring out how to remove most of that noise, with pretty good success so far. The best approach seems to be a median filter with an extent the same as the patch size. Because it's a median filter, it won't matter if the alignment isn't perfect and I pick up a few rows or columns of an adjacent patch. The current problem is that the filtering for one set of images takes about 8 hours to run. I am going to have to recode so that I isolate each patch and calculate the median of that sub-image. That will take me a while. I will use the results of the full median-filter runs as an exemplar when debugging.

One tricky thing about using real images is just dawning on me. After the image-alignment step, which I need because I'm using a 180mm lens to keep the angle subtended by the target small, the camera's pixels land in different places in each sample image. That means I have to filter out PRNU and dust more aggressively than I would if the images were perfectly aligned as shot.

Perhaps you could talk about how the output-referred and scene-referred pipelines differ next weekend.

I modified the Matlab program to calculate the medians of each patch individually, rather than passing a median filter over the whole image. That reduced the run time from 8 hours to 5 seconds, and most of that time was file reading; the actual median calculations take a little over a second. The results were the same to 9 decimal places. Looking at the results, I'm pretty sure I have successfully dealt with the random noise, as evidenced by scatter plots that are smooth rather than jittery. However, the graph of the statistics is barely changed from the one I posted above. This means that I have not dealt with capture variation well.
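In case it's useful to anyone, here is roughly what the per-patch median amounts to, sketched in Python rather than Matlab; the function name and the row-of-tuples image layout are just illustrative:

```python
import statistics

def patch_medians(image, grid=(11, 11)):
    """One median RGB per patch, instead of median-filtering every pixel.

    image is a list of rows, each row a list of (R, G, B) tuples. Because
    each patch contributes only a single median, a stray row or column
    from a neighboring patch barely moves the result, so imperfect
    alignment costs little.
    """
    ph = len(image) // grid[0]          # patch height in pixels
    pw = len(image[0]) // grid[1]       # patch width in pixels
    medians = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            pixels = [image[r][c]
                      for r in range(i * ph, (i + 1) * ph)
                      for c in range(j * pw, (j + 1) * pw)]
            medians.append(tuple(statistics.median(p[k] for p in pixels)
                                 for k in range(3)))
    return medians
```

The speedup comes from computing one median per patch (121 of them) instead of one per pixel.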

In the past, I've handled capture variation by averaging multiple exposures. I am loath to do that here, since the image manipulations are already fairly labor-intensive. Instead, I plan to focus on working with synthetic images. Since they are almost noiseless (maybe a little LSB toggling in integer TIFFs), I'll have fewer images to deal with than if I have to do averaging. In addition, I will be able to precisely place the colors in CIELab, which you can't do with a real camera, due to the nature of the filters in the CFA. These errors can't be entirely calibrated out with camera profiles, since the CFA's spectral responses are not a 3x3 matrix multiply away from those of the human cone cells.

In order to present LR with images that it will process as it does raw files, I will have to learn how to create floating-point TIFFs (thank you, Eric, for the pointer here). I have discovered a Tiff object in Matlab that lets you do things that the image-file-writing function, imwrite, won't normally let you do, including, reportedly, writing 32-bit floating-point TIFFs. However, before I do that, I will have to learn a lot about TIFF tags and the LibTIFF library.
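To make the tag requirements concrete, here's a rough Python sketch of a minimal 32-bit floating-point TIFF writer built directly from the TIFF 6.0 baseline tags. It's a stand-in for the Matlab Tiff object, not the code I'll actually use, and the layout choices (little-endian, single strip, pixel data before the IFD) are just one way to do it:

```python
import struct

def write_float_tiff(path, pixels, width, height):
    """Write a minimal uncompressed 32-bit floating-point RGB TIFF.

    pixels: flat list of floats, length width*height*3, values 0.0-1.0.
    SampleFormat=3 (IEEE float) is what marks the samples as floating
    point; BitsPerSample=32 gives their size.
    """
    data = struct.pack('<%df' % len(pixels), *pixels)
    strip_offset = 8                      # pixel data right after the header
    ifd_offset = strip_offset + len(data)
    tags = [
        (256, 3, 1, width),               # ImageWidth
        (257, 3, 1, height),              # ImageLength
        (258, 3, 3, None),                # BitsPerSample -> (32, 32, 32)
        (259, 3, 1, 1),                   # Compression: none
        (262, 3, 1, 2),                   # PhotometricInterpretation: RGB
        (273, 4, 1, strip_offset),        # StripOffsets
        (277, 3, 1, 3),                   # SamplesPerPixel
        (278, 3, 1, height),              # RowsPerStrip
        (279, 4, 1, len(data)),           # StripByteCounts
        (339, 3, 3, None),                # SampleFormat -> (3, 3, 3) = float
    ]
    extra_offset = ifd_offset + 2 + 12 * len(tags) + 4
    ifd = struct.pack('<H', len(tags))
    extra = b''
    for tag, typ, count, value in tags:
        if value is None:                 # three SHORTs don't fit in 4 bytes,
            vals = (32, 32, 32) if tag == 258 else (3, 3, 3)
            ifd += struct.pack('<HHII', tag, typ, count,
                               extra_offset + len(extra))
            extra += struct.pack('<3H', *vals)   # so they go out-of-line
        else:
            ifd += struct.pack('<HHII', tag, typ, count, value)
    ifd += struct.pack('<I', 0)           # no next IFD
    with open(path, 'wb') as f:
        f.write(struct.pack('<2sHI', b'II', 42, ifd_offset) + data + ifd + extra)
```

There's no ICC profile or color-space tag here, so a reader would have to be told (or assume) the primaries and tone curve, which is exactly the interpretation problem under discussion.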

The discovery that there are at least two image-processing pipelines in Lightroom (Eric, are there more than two?) makes me more motivated than ever to devise techniques to discover what LR is doing in various circumstances. Photoshop is not completely open about its image-processing algorithms, but at least you can turn each layer on or off individually and see the effects. In LR, all the image processing takes place inside a black box, and the user can't see inside that box.

There are really just two paths in ACR/Lr: one for scene-referred images, and one for output-referred images.

Examples in the former group (scene-referred) include raw files and HDR images (e.g., floating-point images that are created by Merge to HDR Pro in Ps if you choose the 32-bit option in the dialog, or any third-party software that does exposure merges and lets you save out a floating-point linear image).


Thanks, Eric. I'm getting close. I can now write 32-bit floating-point TIFFs, read them into Photoshop, and assign the ProPhoto RGB profile to them. However, although the RGB values as seen through the eyedropper match the RGB values of the 16-bit integer TIFF I started with, the image doesn't look the same, and the CIELab values as seen through the eyedropper are different.

I've been assuming that, with a floating-point TIFF, the white point is 1.0, 1.0, 1.0, not 255, 255, 255 or 65535, 65535, 65535, and the fact that I'm seeing something close to right seems to confirm that. If I write an image with a gamma of 1 and tell PS it's in PP RGB, it's too dark, as I'd expect; if I write the image with a gamma of 2.2, the RGB values are OK, but it doesn't look right or measure right in Lab.

Probably some TIFF tag I need to figure out. This is what I'm using now:

I'm enjoying reading this interesting thread. I find myself doing most of my corrections in Photoshop because I prefer to avoid the colour/saturation shifts that the ACR/Lightroom adjustments generate, processing separately for colour and for contrast with different curves in Luminosity and Color blending modes. It doesn't take long, but it's a bit clunky having to generate a separate tiff to do so. The one thing that you cannot do so easily in Photoshop is recover highlights and, to a lesser extent perhaps, shadows. Perhaps a future version of ACR will provide for easier separation of colour and contrast enhancement.

I've done more experimentation with 16-bit integer and 32-bit floating-point TIFFs in Photoshop, with some confusing (to me) results.

First, I wrote out a 32-bit TIFF file from Photoshop and looked at a lot of the TIFF tags. Other than some tags that you'd expect to be filled being empty (White Point, Transfer Function), there weren't a lot of surprises. Still, I copied a bunch of tags from the PS file and used them when I created FP TIFFs. No joy, however. The RGB values were the same as the 16-bit files I started out with, but the Lab values were wrong, and the image looked wrong.

I checked the values in the PS-written FP file and found that they did indeed span the range from 0.0 to 1.0.

Then I brought a 16-bit integer PP RGB image into PS and converted it to 32-bit FP. The RGB values changed! They got smaller, except for the 0,0,0 and 255,255,255 patches, which did not change. The Lab values did not change. That leads me to believe that PS uses a different tone curve for 16-bit integer images than for 32-bit FP images.

I'm not entirely sure how Ps itself handles 32-bit images and conversions to/from 16-bit or 8-bit, but I'm pretty sure that 32-bit images in Ps are always considered linear light (regardless of which color space you have assigned in your Color Settings or in the Assign Profile dialog ...) and are displayed as such.

The mystery is solved. PS writes 32-bit floating point RGB files with a linear tone curve. The values range from 0.0 to 1.0. PS writes 16-bit integer RGB files with the tone curve of the RGB color space. The values range from 0 to 65535.

The bug? Pro Photo RGB has a gamma of 1.8, not 2.2. Oops...
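For the record, the mismatch is easy to model if you approximate both tone curves as pure power laws (the real ProPhoto RGB curve also has a short linear segment near black, which I'm ignoring here):

```python
def encode(linear, gamma):
    """Apply a pure power-law tone curve to a linear-light value in 0.0-1.0."""
    return linear ** (1.0 / gamma)

def decode(encoded, gamma):
    """Invert the power-law curve back to linear light."""
    return encoded ** gamma

# A mid-tone written with ProPhoto's gamma of 1.8...
stored = encode(0.2, 1.8)
# ...comes back correctly only if decoded with the same gamma;
# decoding it as gamma 2.2 lands on a darker linear value.
right = decode(stored, 1.8)
wrong = decode(stored, 2.2)
```

That darker "wrong" value is consistent with what I was seeing when I wrote files assuming gamma 2.2.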

There's an interesting anomaly in PS. In the info box, the RGB values in a 32-bit FP image go from 0 to 255. However, if you click on the foreground or background square in the toolbar, you get a color picker that's only for 32-bit use. In it, the RGB values vary from 0.0 to 1.0.


As usual, Eric, you are entirely right. I have now brought 32-bit underexposed synthetic images into both Photoshop and Lightroom. In each case, it takes less of an Exposure slider adjustment than the amount of underexposure to get the RGB values back to about those of the correctly exposed synthetic image. The necessary amount of Exposure slider adjustment is different in the two programs. In Photoshop, it takes +2.49 to correct a 3-stop-under image. In Lightroom, it takes about +1.67. The adjustment is a little trickier in LR because there's no CIELab pixel value readout (although I think that's in LR 5). I think the readout in LR 4.4, which is what I am using, is for the ProPhoto RGB primaries and white point with the sRGB tone curve, or a gamma of 2.2. Thus the RGB readouts in LR and PS for a ProPhoto RGB image are not comparable, since the gammas of the pixel value displays are different.
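As a sanity check on comparing the two readouts, re-encoding a value from one display gamma to the other looks roughly like this (again treating both curves as pure power laws, which is only approximately true of either program's readout encoding):

```python
def reencode(value, gamma_in, gamma_out, scale=255.0):
    """Re-express a gamma-encoded readout under a different display gamma.

    Linearize with gamma_in, then re-encode with gamma_out. Endpoints
    (0 and full scale) are unchanged; everything in between shifts.
    """
    linear = (value / scale) ** gamma_in
    return linear ** (1.0 / gamma_out) * scale
```

So a mid-tone LR readout (gamma ~2.2) re-expressed at ProPhoto's gamma 1.8 comes out lower, which is why the two programs' numbers can't be compared directly.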

By the way, the imported images, which have the PP RGB ICC profile embedded (I copied it from a Photoshop-generated PP RGB image), have the color temperature set to 5000 K, as I would expect with PP RGB, but the tint is set to +10, and I would have expected zero.

Now that I can get synthetic floating point images into Lightroom, I'll rerun the Exposure control tests with them.

I brought a set of synthetic images into Photoshop as 32-bit floating-point TIFFs with ProPhoto RGB primaries and white point, but with a gamma of one (from now on, I'm going to call this type of file "linear ProPhoto RGB" or "linear PP RGB", even though, strictly speaking, there's no such thing). There were no surprises. The Lab values all read within one least-significant digit of the values in Matlab. Same as before: an 11x11 grid, L*=50 for all 121 points, and the a* and b* values running from -50 to +40.
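For reference, generating that target and taking it as far as XYZ needs only the standard CIE inverse-Lab formulas. This Python sketch stops short of the XYZ-to-linear-ProPhoto matrix step, and the D50 white-point values are the usual ICC ones:

```python
def lab_grid():
    """The 121-patch target: L* = 50, a* and b* from -50 to +40 in 11 steps."""
    vals = [-50 + 9 * i for i in range(11)]
    return [(50.0, a, b) for a in vals for b in vals]

def lab_to_xyz(lab, white=(0.9642, 1.0, 0.8249)):
    """Standard CIE inverse-Lab transform, relative to a D50 white point."""
    L, a, b = lab
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def f_inv(t):
        d = 6.0 / 29.0   # below this, the forward transform was linear
        return t ** 3 if t > d else 3.0 * d * d * (t - 4.0 / 29.0)
    return tuple(w * f_inv(f) for w, f in zip(white, (fx, fy, fz)))
```

Applying the XYZ-to-ProPhoto matrix to those XYZ triples, then writing the result with a gamma of one, gives the "linear PP RGB" files described above.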

Then I imported the images into Lightroom. They appeared much brighter and much more chromatic than the same images in Photoshop. I applied one stop of LR Exposure correction for each stop of underexposure. All the images then looked pretty much the same. I exported them from LR as integer PP RGB TIFFs, brought them into Photoshop, and looked at the Lab values. They read in PS about the way they looked in LR: L*s running in the high 60s, and a*s and b*s looking even higher than what you'd expect with that kind of L* bump. I converted them into Lab 16-bit integer TIFFs and read them into Matlab. They looked ugly. In 3D, here's the baseline (properly exposed) image:

And in 2D, looking down from along the L* axis:

Looks like gamut mapping to me, and probably some other stuff.

I went back to LR, created a set of virtual copies, and cranked the Exposure adjustment back one stop on each. When I exported them and read them into Photoshop, the L* values were about right, but they were too chromatic.

In 3D, here's the baseline (properly exposed) image:

And in 2D, looking down from along the L* axis:

It looks like LR is looking at the fact that the files I'm feeding it are floating point and is invoking some default processing that it considers appropriate for HDR images. If that's the case, I need to find out where that processing is, and figure out how to turn it off.

I haven't been able to figure out how to turn off the processing that LR automagically applies to 32-bit floating-point files, but I did go ahead and compute the CIEL*a*b* Delta-E stats of the differences in the Lab values of the 121 patches in each of the "underexposed and compensated stop for stop, less one stop for the Lightroom processing" images. The results are much like what Bill and John have reported from real camera testing. They're also similar to what I have measured in real camera testing, but with far less noise.

The overall stats:

For reference, a 3D look at the difference between the baseline exposure and itself expressed as displacements of the original target values:

And a 3D look at the "4-stop under and corrected in LR" image, processed the same way:

And, finally, a two dimensional look at the immediately preceding data:

I'd show you the one, two, and three stop under plots, but they'd be boring; they're virtually the same as the four stop under graphs above.

I'm not really happy about not understanding the LR processing of the 32-bit floating point files, but, until I find out more about that, I'm going to have to leave it there.

At least, now the synthetic and the in-camera testing are telling us pretty much the same thing.