3. Correction of coma, spherical aberrations, and other similar distortions.

4. Correction of barrel, pincushion, and the most complex forms of mustache distortion.

5. Vignetting correction.

6. Stacking multiple DNG images for focus or exposure blending.
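To give a concrete flavor of the simplest item on the list, vignetting correction, here is a minimal sketch of a radial gain model (similar in spirit to DNG's FixVignetteRadial opcode). The coefficients k1 and k2 are made-up example values, not from any real lens profile; a real correction would derive them from a measured profile.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch: brighten pixels by a gain that grows with normalized radius r,
// gain(r) = 1 + k1*r^2 + k2*r^4, so the center is untouched (gain = 1)
// and the corners get the largest boost. k1, k2 are hypothetical.
std::vector<double> correctVignetting(const std::vector<double>& pixels,
                                      int width, int height,
                                      double k1, double k2)
{
    std::vector<double> out(pixels.size());
    const double cx = 0.5 * (width - 1);
    const double cy = 0.5 * (height - 1);
    const double rMax = std::sqrt(cx * cx + cy * cy); // corner distance -> r = 1
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            double r = std::sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy)) / rMax;
            double gain = 1.0 + k1 * r * r + k2 * r * r * r * r;
            out[y * width + x] = pixels[y * width + x] * gain;
        }
    return out;
}
```

With k1 = 0.4 and k2 = 0.1, a corner pixel is boosted by 1.5x while the center is left exactly alone.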

For best results, the approach I'm working on will require the user to shoot a specially-designed target containing an array of point sources of white light with each combination of camera, lens, aperture, and focal length to create a "blur profile" customized to that particular combination of hardware. As an alternative, my goal is to allow users to create and share blur profiles (e.g. for a Canon 1Ds wearing a 17-40/4L at f/4 and 17mm). As long as a given user's hardware matches that used to generate the blur profile, results will be good. If the user's hardware is different (e.g. the lens is softer in the corners at 17mm) than the profiled hardware, results will be poor, and the user will need to make their own custom profile. This would be similar to the owner of a printer finding that factory "canned" profiles work poorly and improving output by making a custom printer profile.

The biggest hassle I'm encountering right now is figuring out the DNG SDK. I've downloaded it and browsed through it, but there's a huge number of classes, and the object hierarchy and their intended usages aren't clear to me yet. I'm trying to figure out how to accomplish the following tasks:

2. Once the image has been processed, write it to a new linear RGB DNG file that can be opened in any DNG-aware application, with all original tags and metadata correctly preserved.

All of the internal correction operations are done on linear RGB data in the camera's native color space, so that white balance setting, color profile selection, etc. are unaffected by the image correction operation. The user will have exactly the same color processing options, white balance selection, exposure adjustments, and output color space selection as when working with the original RAW(s).

I can't contribute anything of technical substance to your inquiries - that's for the kind of highly expert people Jeff mentions - but I am very interested in what you are trying to do and would like to remain in the loop about how it is progressing. Good luck with it.

The best place to start in terms of the code would be dng_validate.cpp. The dng_validate () routine in there will read in a negative, parse through it, and build the various image stages. This includes decompressing the raw image data, linearizing it if necessary, and rendering to a TIFF file if you want. From a developer's perspective, I'd suggest putting a breakpoint somewhere in that routine, feeding a DNG raw file to it, and then stepping through to see how the code flows.

In terms of terminology, the code comments use the term "stage 1" image to describe the raw image data, "stage 2" to describe the linearized raw image data (in a canonical space [0, 1], or [0, 65535] if you prefer), and "stage 3" to describe the 3-channel or 4-channel data (i.e., after demosaic). If you want to do processing on linearized mosaic data, you want to grab the stage 2 image; see the Stage2Image () routine in the dng_negative class (dng_negative.h). If you want to do processing on RGB demosaiced data in the native camera color space, prior to white balance, you want the stage 3 image; see the Stage3Image () routine in the dng_negative class.

If you want to do processing on RGB demosaiced data in the native camera color space, prior to white balance, you want the stage 3 image; see the Stage3Image () routine in the dng_negative class.

Thank you, that's very helpful. The processing I'm doing is all on linear RGB data scaled 0-1 in floating-point format (this makes a lot of the deconvolution math much more straightforward), so I'll need to take the stage 3 data and normalize it to a maximum value of 1. Three questions:

Is the stage 3 data always scaled to the same output range (say 0-65535) regardless of the number of bits/pixel in the original RAW file, or does the range of image values vary by camera model?

How do I write the modified stage 3 data back to a new DNG file, while copying all metadata from the original DNG?

I've got the dng_validate project downloaded along with XMPCore and expat, so dng_validate will build with no errors or warnings. I'd like to compile dng_sdk (along with whatever it needs from expat and XMPCore) as either a .NET or COM .dll so I can use its functionality from Visual Basic .NET or other .NET-family languages like C#, etc. How can I do this from Visual C++?

Sorry for the stupid questions, my programming background is VB, not C, and I'm having a bit of trouble getting my head wrapped around all of the DNG SDK components and how they relate to each other.

Yes, the stage 3 image data is always normalized to 16 bits [0,65535], independent of camera model. That is, if you have intentionally clipped a pixel value by overexposing, that pixel component will be 65535. Typically the stage 3 image will be very "green" because it is not yet white-balanced.
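Given that the stage 3 data is always 16-bit [0, 65535], the conversion to the floating-point [0, 1] range described earlier (and back again after processing) is straightforward. A minimal sketch, with clipping on the way back to guard against out-of-range values that deconvolution can produce:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Convert 16-bit stage-3 samples ([0, 65535]) to floating point [0, 1]
// for the deconvolution math.
std::vector<double> toUnitRange(const std::vector<uint16_t>& in)
{
    std::vector<double> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] / 65535.0;
    return out;
}

// Convert back, clipping to [0, 1] first: deconvolution can produce
// negative ringing or overshoot, and 1.0 maps to clipped white (65535).
std::vector<uint16_t> toSixteenBit(const std::vector<double>& in)
{
    std::vector<uint16_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
    {
        double v = in[i];
        if (v < 0.0) v = 0.0;
        if (v > 1.0) v = 1.0;
        out[i] = static_cast<uint16_t>(std::lround(v * 65535.0));
    }
    return out;
}
```

The round trip is lossless for untouched pixels, since rounding recovers the original integer value.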

To modify a DNG, you'll want to take a look at lines roughly 250 thru 320 of dng_validate.cpp. That's where you'll find sample code that writes a DNG file out to disk; the metadata should be preserved, so you don't have to worry about that. Also, the SetStage3Image () routine in the dng_negative class will let you store your modified negative data back to the dng_negative object.

Note that you'll have to save as a linear DNG since your modified data is already 3-channel (no longer in mosaic form). I believe you can do this by using

It looks like your software is meant to run before any raw converter; otherwise, why bother outputting linear DNG? So the workflow would be: in-camera raw file -> DNG Converter (only ~5% of cameras out there produce native in-camera DNG) -> raw converter #1 -> your software -> raw converter #2? Too many steps... And if you want to run it after the raw converter, very few programs will produce a non-linear DNG as output, if any (I know several programs that operate on, i.e. modify, the raw data in DNG files without making them linear, but they are hardly widely used, fully functional raw converters)... And if you are running before a raw converter, how do you plan to coexist? Does DNG have a tag saying "hello people, I contain corrected data (beyond just linearizing), please do not bother with certain corrections and optimizations"? Just curious... Or if you plan to act like DxO (raw -> DxO (corrections) -> linear DNG -> ACR/LR to deal with WB and some colors), then you have to be good at demosaicing first of all... so why don't you just write a fully functional raw converter?

Ouch. For example (without reading the PDF; you should be able to tell this in a blink - I mean about tags/opcodes, not that I'm too lazy to look myself): do you have a tag (DNG opcode) saying "please, raw converter, do not do any automatic CA/PF correction, it was already done"?

deja, there is not currently such a tag. This is a more general problem for all images, not just DNG. For example, if you have a JPEG or TIFF file lying around somewhere, it can be difficult to tell whether or not it has had some form of lens corrections applied to it. A standard metadata tag would be useful to indicate whether specific forms of processing had been applied to the image, in this case. My understanding is that such tags are under current consideration for standardization, but that could take a while.

Or if you plan to act like DxO (raw -> DxO (corrections) -> linear DNG -> ACR/LR to deal with WB and some colors), then you have to be good at demosaicing first of all... so why don't you just write a fully functional raw converter?

Short answer: I don't want to spend a bunch of time reinventing the wheel when there are many very good RAW converters already out there. OTOH, DxO doesn't really have a lot of competition for really good lens corrections, it's way overpriced, and it's crippled because you are locked into using "canned" blur profiles for cameras and lenses. If DxO doesn't have profiles for your camera or lens, you're screwed; you can't use it.

If your lens's blur characteristics differ from the DxO-profiled lens, then the "canned" profiles aren't going to work all that well anyway. If a printer manufacturer only allowed the use of their canned print profiles, and didn't allow users to make their own, they'd be laughed out of the market. But that is essentially DxO's business model.

Quote

it looks like your software is running before any raw converter, otherwise why bother to output into linear DNG ?

The corrections I'm doing need to be done after demosaic. The deconvolution algorithm I'm working on takes a linear RGB image of an array of circular light sources (ideally about 5-10 pixels in diameter in the RAW image) and analyzes the blur characteristics found. The blur in the demosaiced image is a combination of lens blur, AA filter blur, and blur introduced by the demosaic algorithm of the RAW converter. By analyzing the total blur caused by all elements of the image capture process (lens, camera, and RAW converter), all of the blur from every blur source can be removed at once.
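The idea of removing a measured blur can be illustrated with a toy 1-D Richardson-Lucy deconvolution. This is only a sketch of the general technique, not the actual algorithm under development; real images need 2-D kernels measured from the point-source target, edge handling, and regularization against noise.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Convolve a signal with a centered odd-length kernel (zero-padded edges).
static std::vector<double> convolve(const std::vector<double>& s,
                                    const std::vector<double>& k)
{
    int n = (int)s.size(), m = (int)k.size(), half = m / 2;
    std::vector<double> out(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
        {
            int idx = i + j - half;
            if (idx >= 0 && idx < n) out[i] += s[idx] * k[j];
        }
    return out;
}

// Classic Richardson-Lucy iteration: re-blur the estimate, compare it to
// the observed data, and multiply by the back-projected ratio. All values
// stay non-negative, which suits linear sensor data.
std::vector<double> richardsonLucy(const std::vector<double>& observed,
                                   const std::vector<double>& psf,
                                   int iterations)
{
    std::vector<double> flipped(psf.rbegin(), psf.rend());
    std::vector<double> est(observed.size(), 0.5); // flat initial estimate
    for (int it = 0; it < iterations; ++it)
    {
        std::vector<double> blurred = convolve(est, psf);
        std::vector<double> ratio(observed.size());
        for (size_t i = 0; i < ratio.size(); ++i)
            ratio[i] = observed[i] / (blurred[i] + 1e-12);
        std::vector<double> corr = convolve(ratio, flipped);
        for (size_t i = 0; i < est.size(); ++i)
            est[i] *= corr[i];
    }
    return est;
}
```

Feeding it a point source blurred by a known PSF, the iterations progressively re-concentrate the energy back into the point, which is exactly what profiling a target of point sources enables.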

The basic workflow is this:

1. Run DNG Converter to convert RAWs to linear-RGB DNG files. Essentially all this does is fill in the missing color channel values from the Bayer matrix (e.g. add R & B values to a G pixel), and then scale the values 0-65535. This can be done on a batch of files.

2. Process the DNGs through my program to correct lens, AA filter, and demosaic blur, remove CA/color fringing, correct fisheye/barrel/pincushion/mustache distortion, eliminate vignetting, etc., and eventually stack multiple focus and/or exposure-bracketed images into a single output DNG. This can be a batch process also.

3. Open the processed DNGs and edit as desired.

The RGB data is still in the camera's native color space, so you still have total flexibility to set white balance, adjust exposure, process colors, etc. the same as with the original RAW, as long as the RAW converter you use can read DNGs. Using my program will require an extra step in the workflow, but my goal is to make the benefits greatly outweigh the slight inconvenience.

deja, there is not currently such a tag. This is a more general problem for all images, not just DNG. For example, if you have a JPEG or TIFF file lying around somewhere, it can be difficult to tell whether or not it has had some form of lens corrections applied to it. A standard metadata tag would be useful to indicate whether specific forms of processing had been applied to the image, in this case. My understanding is that such tags are under current consideration for standardization, but that could take a while.

My solution for now at least is simply to look for lens correction opcode tags in the source DNG file, and omit them when writing the destination DNG file. If a tag is ever defined to indicate that lens corrections have already been applied, then I'll add it to the output file...
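The "omit lens-correction opcodes on output" idea can be sketched as a simple filter. The opcode IDs below are from the DNG 1.3 specification as I understand it (1 = WarpRectilinear, 2 = WarpFisheye, 3 = FixVignetteRadial; GainMap, ID 9, may also qualify depending on how the camera uses it); verify against the spec before relying on them. A real implementation would walk the parsed opcode lists in the SDK rather than a plain vector of IDs.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Drop lens-correction opcodes from a list of DNG opcode IDs so they are
// not copied into the output file. IDs assumed per the DNG 1.3 spec:
// 1 = WarpRectilinear, 2 = WarpFisheye, 3 = FixVignetteRadial.
std::vector<uint32_t> dropLensCorrectionOpcodes(std::vector<uint32_t> opcodes)
{
    const uint32_t lensOps[] = {1, 2, 3};
    opcodes.erase(std::remove_if(opcodes.begin(), opcodes.end(),
                      [&](uint32_t id) {
                          return std::find(std::begin(lensOps),
                                           std::end(lensOps), id)
                                 != std::end(lensOps);
                      }),
                  opcodes.end());
    return opcodes;
}
```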

1. Run DNG Converter to convert RAWs to linear-RGB DNG files. Essentially all this does is fill in the missing color channel values from the Bayer matrix (e.g. add R & B values to a G pixel), and then scale the values 0-65535. This can be done on a batch of files.

2. Process the DNGs through my program to correct lens, AA filter, and demosaic blur, remove CA/color fringing, correct fisheye/barrel/pincushion/mustache distortion, eliminate vignetting, etc., and eventually stack multiple focus and/or exposure-bracketed images into a single output DNG. This can be a batch process also.

3. Open the processed DNGs and edit as desired.

You will end up with subpar demosaicing in the first place that way... that is the problem here: you will be optimizing data that is not the best possible demosaiced data... not a problem in many cases for many people, but aren't you after the ultimate quality?

You will end up with subpar demosaicing in the first place that way... that is the problem here: you will be optimizing data that is not the best possible demosaiced data... not a problem in many cases for many people, but aren't you after the ultimate quality?

Only if you consider ACR a "subpar" RAW converter. And you're not necessarily limited to ACR; any RAW converter that can export to linear-RGB DNG can be used for demosaicing. The majority of the lens corrections have to be done on demosaiced RGB data; for example, CA corrections (which involve shifting the locations of color channels relative to each other) cannot be saved back to RAW because the pixel locations no longer match the Bayer matrix pattern. Correcting barrel/fisheye distortion has the same problem, only worse, because the degree of pixel-shifting is even greater.

It just makes sense to demosaic first, and then apply corrections and adjustments.
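Why CA correction needs demosaiced data can be shown with a toy example: correcting lateral CA amounts to radially resampling one channel (say, red) by a small scale factor so it registers with green. The fractional resampling below uses linear interpolation, which only works when every pixel already has a value in that channel, i.e. after demosaic. This is a 1-D sketch of the concept, not production code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Resample one demosaiced channel radially about the center of a 1-D row
// by a scale factor, using linear interpolation. scale < 1 pulls the
// channel inward (shrinks it toward the center), scale > 1 stretches it.
std::vector<double> rescaleChannel(const std::vector<double>& ch, double scale)
{
    int n = (int)ch.size();
    double c = 0.5 * (n - 1);             // optical center of the row
    std::vector<double> out(n, 0.0);
    for (int i = 0; i < n; ++i)
    {
        double src = c + (i - c) * scale; // source position for this pixel
        int i0 = (int)std::floor(src);
        double f = src - i0;
        double a = (i0 >= 0 && i0 < n) ? ch[i0] : 0.0;
        double b = (i0 + 1 >= 0 && i0 + 1 < n) ? ch[i0 + 1] : 0.0;
        out[i] = (1.0 - f) * a + f * b;
    }
    return out;
}
```

A scale of 1.0 is the identity; any other scale lands between original sample positions, which is exactly the fractional shifting that can't be expressed in a Bayer mosaic.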

Umm, DNG converter has the full ACR demosaicing algorithm embedded in it, rather than a simplified, lower performance version? That would surprise me. But then, I get surprised on a regular basis.....

But the fundamental problem you run into is that in order to do lens correction, you have to interpolate pixels. So if you do the correction as a separate step from the demosaicing, you have two layers of interpolation stacked on top of each other. Of course you can do that, but it's not, IMHO, a good idea. Which is why I believe that DxO will only write a linear DNG: interpolation for demosaicing and for lens correction are a single step for them. But that's only what I believe.

The testing I've done indicates that opening a linear-RGB DNG (demosaiced in DNG converter) and processing the source RAW directly in ACR makes zero difference--if both files are processed with the same settings, the outputs will be an exact pixel-for-pixel match. Where did you get the notion that DNG converter didn't use the same demosaic engine as ACR? If you think otherwise, do your own comparison.

As to the single interpolation vs double objection, that's a load of crap. Demosaicing compares Bayer pixels to their immediate neighbors to guess the values of the missing channels. That requires looking at the pixels before the color channels are shifted around relative to each other (as in CA correction), or the whole concept of "adjacent pixels" gets flushed down the toilet. You have to demosaic first, then you can juggle the location of the color channels. Doing otherwise is a recipe for crappyocrity, not a quality enhancement.

The testing I've done indicates that opening a linear-RGB DNG (demosaiced in DNG converter) and processing the source RAW directly in ACR makes zero difference--if both files are processed with the same settings, the outputs will be an exact pixel-for-pixel match. Where did you get the notion that DNG converter didn't use the same demosaic engine as ACR? If you think otherwise, do your own comparison.

As to the single interpolation vs double objection, that's a load of crap. Demosaicing compares Bayer pixels to their immediate neighbors to guess the values of the missing channels. That requires looking at the pixels before the color channels are shifted around relative to each other (as in CA correction), or the whole concept of "adjacent pixels" gets flushed down the toilet. You have to demosaic first, then you can juggle the location of the color channels. Doing otherwise is a recipe for crappyocrity, not a quality enhancement.

OK, I deduce you have firmly held views on this subject(!)... I guess we await your working code with eager anticipation so we can get to see all that quality shine through.