Am I the only one who is slightly frustrated with this situation? Do you think what I'm writing about is only of philosophical interest and I'm just moaning too much?

Personally, I couldn't care less, provided the images I get at the end of the process are of sufficient "technical quality" for the purpose for which I intend to use them, and that I don't have to jump through too many hoops to get there. By "technical quality" I mean everything after I've pressed the shutter. How the bytes are handled along the way and at what point in the mosaic/de-mosaic process things are done, I couldn't give a **** about. If my image is for printing at 32" x 14" and it's of sufficiently high quality that I'm proud of it and feel comfortable showing it to others (or offering it for sale), then that's all I need. If my image is for sending to someone as a record of something, then a poor quality JPG from a camera phone with no editing is probably good enough.

At the end of the day, all that matters from the computer side of photography is that images are "fit for purpose".

How the bytes are handled along the way and at what point in the mosaic/de-mosaic process things are done, I couldn't give a **** about.

I'm glad you're happy not to care, but some people aren't. I see this discussion like many of the technical discussions that go on in this forum: geeks wanting to know how things work. So far, Jeff's response concerning commercial confidence has been the only valid reason given for why we users aren't entitled to know more about what is going on in our raw converters.

As for the ACR clipping thing, I think Panopeeper and Guillermo and others have pretty well described and documented this behaviour in previous threads over the last few months. Off the top of my head, I think this was why the -S option was added to dcraw: to allow the clipping (saturation) point to be set more accurately.

In Camera Raw, a portion is done in the demosaicing (the baseline as part of the demosaicing process) and a portion is done after demosaicing on the linear luminance or color data.

The details of the internal processing pipeline are of great interest to intellectually curious photographers; indeed, in Jeff's recent Camera Raw tutorial, I found the sidebar clips with Thomas Knoll to be some of the most interesting parts. Curiosity aside, I am quite content to leave the details of the optimal processing to Mr. Knoll. Detailed knowledge of this processing would be of no practical use to most photographers, but would be most useful to software engineers developing competing products.

One can examine the code of dng_validate, but there is no guarantee that the algorithms are the same as used in Camera Raw.

Quote

Which is better? That all depends...as for satisfying your own personal curiosity, why don't you just go ahead and test CR/LR vs. the rest of the noise reduction plug-ins and see what floats your boat. I have...and for at least up to ISO 800 on MY cameras, CR/LR is fine and dandy but for heavy duty noise reduction I use Noiseware...although I do so on the post processed image and apply it locally only to those areas that need it.


The post-processed image is not the optimal stage of the workflow at which to apply NR plug-ins. I have communicated with Jim Christian, the developer of Noise Ninja, and he stated that NN could do a better job if it could operate on the raw data, before adjustments such as exposure have altered the noise characteristics of the image. BibblePro does incorporate Noise Ninja into the raw converter for optimal processing, but that feature alone is unlikely to cause many ACR users to switch over to Bibble.

Applying NR to the post-processed image on a layer, using surface masks and other methods for local control, does have its advantages. However, for those who like parametric image editing, this interrupts the workflow and requires generating, and most likely storing, an intermediate TIFF image.

While I think commercial reasons are valid in general, I would like to stress that we are not asking to know the details of the particular algorithms used.

The request to the converter authors is quite simple: please indicate somehow which functions affect the raw data and which work on demosaiced data. I think this could be done without letting too much proprietary info out, and it would de-confuse the many users who think that if a function is offered built into a converter it must by definition be the better option (quality-wise), since it is done 'in raw'.

While I think commercial reasons are valid in general, I would like to stress that we are not asking to know the details of the particular algorithms used.

The request to the converter authors is quite simple: please indicate somehow which functions affect the raw data and which work on demosaiced data. I think this could be done without letting too much proprietary info out, and it would de-confuse the many users who think that if a function is offered built into a converter it must by definition be the better option (quality-wise), since it is done 'in raw'.

Let us suppose they were to give you this information. Could you please explain what you would do with it, and why you think it would help you improve the quality of your output? The alternative is simply to experiment on your computer and see whether performing certain adjustments in ACR or in PS gives preferable results.

Let us suppose they were to give you this information. Could you please explain what you will do with it and why you think it will help you improve the quality of your output

It may very well improve the quality of someone's output. But that's not the only reason: some of us just like to know how things work.

It may very well improve the quality of someone's output. But that's not the only reason: some of us just like to know how things work.

Well, for better or worse, as you know, we live in a world of intellectual property rights, copyrights and patents, so, much as it would be interesting to know *all*, we'll just have to content ourselves with knowing what's not protected. As for the quality issue, my humble experience suggests that in this particular case the only way to really *know* - even if we had the benefit of knowing the theory - is to try the alternatives and compare them.

Well, for better or worse, as you know, we live in a world of intellectual property rights, copyrights and patents, so, much as it would be interesting to know *all*, we'll just have to content ourselves with knowing what's not protected.

On thinking more about this IP and patent stuff, I wonder if it is really a good excuse after all. In some cases it is possible to work out what's going on under the hood in terms of mosaiced vs. demosaiced processing; it just needs people like Panopeeper and Guillermo and other uber-geeks to do the hard work. Now, if these guys can work it out, then I am betting that the raw developers at competing companies can work it out too.

I guess in the end it is a bit of both: Commercial confidence, and a lack of motivation to fully explain and document a product's features.

Bernie, I guess I find myself agreeing with Bill Janes on this one, quote:

<Detailed knowledge of this processing would be of no practical use to most photographers, but would be most useful to software engineers developing competing products.>

As for fully explaining a product's features: interestingly, when you think about it, most of the majors (e.g. Microsoft and Adobe) don't. And look at the publishing industry, which has exploded to fill this gap - for the better, I think.

One can examine the code of dng_validate, but there is no guarantee that the algorithms are the same as used in Camera Raw.

The CR code augments what's in the DNG SDK, but the fundamentals are the same.

Consider a Canon CR2 image that has been converted to a DNG. When CR processes the DNG image, it clearly uses the processing model described in the DNG spec. When CR processes the corresponding CR2 raw file, you get the same results.

While I think commercial reasons are valid in general, I would like to stress that we are not asking to know the details of the particular algorithms used.

The request to the converter authors is quite simple: please indicate somehow which functions affect the raw data and which work on demosaiced data. I think this could be done without letting too much proprietary info out, and it would de-confuse the many users who think that if a function is offered built into a converter it must by definition be the better option (quality-wise), since it is done 'in raw'.

It depends on what you define as 'raw'. Raw data can undergo many transforms yet still be considered 'raw' in the sense that the image values are scene-referred.

FYI, the DNG processing model performs a linearization of the original raw image values, followed by demosaicing, then white balance. All of the other image stages follow. So to answer your question, all of the image ops except for linearization (which isn't under user control anyway) happen after demosaicing. If you want further details, I encourage you to see the DNG spec.
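As a concrete illustration of that ordering, here is a toy Python sketch (not Adobe's code; the black/white levels are invented and the demosaic is a deliberately crude half-resolution one) showing linearization, then demosaicing, then white balance applied in that sequence:

```python
# Toy sketch of the stage ORDER described above: linearize -> demosaic
# -> white balance. Function names and level values are made up.

def linearize(cfa, black=256, white=4095):
    """Map raw sensor counts to linear values in [0, 1]."""
    return [[(v - black) / (white - black) for v in row] for row in cfa]

def demosaic_2x2(cfa):
    """Crude demosaic: collapse each RGGB 2x2 block into one RGB pixel."""
    out = []
    for y in range(0, len(cfa), 2):
        row = []
        for x in range(0, len(cfa[0]), 2):
            r = cfa[y][x]
            g = (cfa[y][x + 1] + cfa[y + 1][x]) / 2.0
            b = cfa[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

def white_balance(rgb, r_mul, b_mul):
    """Scale R and B relative to G, which is left alone."""
    return [[(r * r_mul, g, b * b_mul) for (r, g, b) in row] for row in rgb]

raw = [[1000, 2000],
       [2000,  500]]   # one RGGB block of made-up sensor counts

img = white_balance(demosaic_2x2(linearize(raw)), r_mul=2.0, b_mul=1.5)
```

Only the sequence of calls reflects the DNG description; a real converter's demosaic and the subsequent stages are far more sophisticated.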

Although to be honest I'm still not sure how this information is going to help you improve your conversion results.

As for others who want to "know how things work" because they're scientifically or technically interested in finding out, that's an understandable sentiment (trust me, I empathize) but not at all a valid or compelling reason for a company to provide the info.

So to answer your question, all of the image ops except for linearization (which isn't under user control anyway) happen after demosaicing. If you want further details, I encourage you to see the DNG spec.

Well, that's interesting regarding DNG... I'm fairly sure this is not the case with other raw converters.

Nikos, could you explain what reasons you have to be "fairly sure this is not the case with other raw converters"?

Reading what Eric is telling us, it makes sense to me, albeit I have no knowledge or expertise about any of the code. But as far as I understand it, the original "raw" data is greyscale until it is demosaiced in order to impart colour information. How can one possibly perform a white balance, for example, unless there is colour to be balanced? And it goes on from there, as colour balance affects luminosity, etc.

Nikos, could you explain what reasons you have to be "fairly sure this is not the case with other raw converters"?

Reading what Eric is telling us, it makes sense to me, albeit I have no knowledge or expertise about any of the code. But as far as I understand it, the original "raw" data is greyscale until it is demosaiced in order to impart colour information. How can one possibly perform a white balance, for example, unless there is colour to be balanced? And it goes on from there, as colour balance affects luminosity, etc.

Please do not confuse WB with colour balancing. This is a good example of the confusion that may be imparted by users not understanding the difference between raw and RGB data.

To WB, you multiply R, G, B pixel values by suitable multipliers (usually leaving G as is and multiplying R and B). Raw data is no more greyscale (regardless of what some people like to say) than de-mosaiced RGB data is. Both contain colour information, albeit represented (coded) differently.
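That description of WB can be written out in a couple of lines of Python; the multiplier values here are invented for illustration:

```python
# Per-channel white balance multipliers, G left at 1.0, as described
# above. Values are invented. Note that the operation touches each
# value independently; no neighbouring pixels are mixed.

MUL = {"R": 2.0, "G": 1.0, "B": 1.5}

def wb_pixel(rgb):
    """Apply the multipliers to one demosaiced (R, G, B) triple."""
    return tuple(v * MUL[c] for v, c in zip(rgb, "RGB"))

balanced = wb_pixel((0.2, 0.4, 0.1))   # scales R and B, leaves G alone
```

Because no neighbouring values are mixed, exactly the same multiplication could be applied to each un-demosaiced Bayer site instead.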

Reading what Eric is telling us, it makes sense to me, albeit I have no knowledge or expertise about any of the code. But as far as I understand it, the original "raw" data is greyscale until it is demosaiced in order to impart colour information. How can one possibly perform a white balance, for example, unless there is colour to be balanced? And it goes on from there, as colour balance affects luminosity, etc.

As has been discussed before on this forum, the raw file is not grayscale. The color information is present in the mosaic pattern of the Bayer array. The basic Bayer pattern is a 2 by 2 block containing two green pixels, one red pixel and one blue pixel. This is shown in Panopeeper's [a href=\"http://www.cryptobola.com/photobola/Image.htm\"]Rawanalyze Manual[/a].

All you would have to do to white balance the original raw data would be to multiply each pixel of the raw file by the multiplier for its color channel. No demosaicing would be necessary.

Bill

PS: This essentially duplicates Nikos's post, which was posted while I was writing my message.
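Bill's point (that the mosaic can be white balanced directly, with no demosaicing) can be sketched like this, with invented multiplier values and the basic RGGB pattern assumed:

```python
# White balancing the raw mosaic itself, before any demosaicing.
# Each site's color channel is given by its position in the basic
# RGGB 2x2 block; the multipliers are invented for illustration.

PATTERN = [["R", "G"],
           ["G", "B"]]
MUL = {"R": 2.0, "G": 1.0, "B": 1.5}

def wb_mosaic(cfa):
    """Multiply every raw value by the multiplier for its CFA color."""
    return [[v * MUL[PATTERN[y % 2][x % 2]] for x, v in enumerate(row)]
            for y, row in enumerate(cfa)]

cfa = [[100, 200],
       [200,  50]]                     # one RGGB block of raw counts
print(wb_mosaic(cfa))                  # -> [[200.0, 200.0], [200.0, 75.0]]
```

Each site keeps its single value; only its scale changes, so the result is still a mosaic, not an RGB image.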

<No matter what the filter arrangement, the raw file simply records the luminance values for each pixel, so the raw file is a grayscale image. It contains color information... so raw converters know whether a given pixel... represents red, green, or blue... but it doesn't contain anything humans can interpret as color.>

<The raw converter interpolates the missing color information for each pixel from its neighbors, a process called demosaicing...>

<No matter what the filter arrangement, the raw file simply records the luminance values for each pixel, so the raw file is a grayscale image. It contains color information... so raw converters know whether a given pixel... represents red, green, or blue... but it doesn't contain anything humans can interpret as color.>

<The raw converter interpolates the missing color information for each pixel from its neighbors, a process called demosaicing...>

Yes, Bruce did, and Schewe still maintains this raw-is-grayscale metaphor, which I find quite confusing...

Yes, Bruce did, and Schewe still maintains this raw-is-grayscale metaphor, which I find quite confusing...

But this subject has been discussed to death in other threads.

Nikos, either it is or it isn't; or it's the perspective from which one looks at it; or it depends on the context. So yes, there can be some lack of clarity, because there may be different, equally valid ways of looking at it.

But whatever - I'm still having trouble seeing specifically what practical difference it would make to my workflow to know which way to interpret this information. Sometimes in these discussions we bear witness to angels dancing on the heads of pins, with zero significance to anything that matters except to code writers.