Does programmatically reducing noise give the same results as the hardware post-processing done in camera, such as on a Nikon D90? What is the best software to do such things (something like Photomatix, but without the HDR, as far as I know)? Thank you for your help.

Well, technically they are both programmatic NR, as the in-camera NR is done within the camera's firmware. So maybe this should be renamed to something like "How does in-camera NR compare to software NR?"
– Digital Lightcraft Sep 17 '12 at 16:48

@DarkcatStudios: I think you're misunderstanding. There IS hardware-level noise reduction that has nothing to do with firmware...it's actually performed by circuitry etched right into the sensor for each pixel, or into the readout circuitry around the sensor's periphery. That is literal HARDWARE, not software by any measure.
– jrista♦ Sep 17 '12 at 21:50


@jrista - Ah, ok - not so much a misunderstanding as a knowledge gap! I was unaware of real hardware NR... your answer is very good!
– Digital Lightcraft Sep 18 '12 at 7:16

1 Answer

The key difference between the two is that one works on a post-digitized image, whereas the other works on the pre-digitized analog signal. Whenever you work with an analog signal, you have the ability to be more precise and accurate, and can eliminate noise before it is "burned into" a digitized image. That's most certainly not to say that applying noise reduction post-digitization is ineffective. On the contrary, with advanced algorithms, such as a well-informed wavelet deconvolution, noise reduction in post can be extremely effective. There are some things, though, that hardware noise reduction can do that software noise reduction can't, or simply can't be as effective at.

Types of Hardware Noise

The key area where hardware noise reduction is most effective is in reducing electronic noise. This is the noise introduced by the electronics themselves and during sensor readout, and is primarily responsible for the unsightly, unnatural patterned forms of noise: read noise, fixed-pattern noise (FPN), dynamic pattern noise (horizontal and vertical banding), and non-uniform response noise (or pixel response non-uniformity, PRNU). These forms of noise are extremely difficult to remove with software unless the software knows the exact characteristics of each specific sensor. Such an algorithm would require a considerable amount of support data to function...for each and every sensor it needed to work with. As such, it is far more effective to remove these types of noise directly in hardware.

Each type of noise introduced by the electronics of the sensor or by the readout process can be compensated for, in some cases entirely, in others mostly. There are a variety of sources of this noise. Dark current, the small current that flows through the sensor's circuits even in the absence of light, puts a limit on low signal levels, as it often "adds" a few electrons of its own to each pixel that weren't generated by photon conversion. As it's only a few electrons' worth, it only affects IQ in very low-signal areas, but that can mean the difference between having or not having one to two full stops of additional dynamic range.

Fixed pattern noise is an intrinsic property of each sensor, and differs from sensor to sensor. Some pixels run "hot", while others run "cool". Hotter pixels again tend to introduce more electrons to the pixel, on top of whatever may be added by dark current. These differences are often the result of microscopic manufacturing nuances, and are difficult to control. FPN often shows up as unchanging horizontal and vertical striations, producing exactly the kind of patterned noise photographers loathe.

Dynamic pattern noise, or banding, also often exhibits in horizontal and vertical bands. There can be a variety of causes for banding, from fluctuations of current running through the sensor to external signal interference in a poorly shielded sensor. Since this type of pattern noise is dynamic, it can be nearly impossible to remove in post as it changes from frame to frame. It tends to be a fairly weak aspect of noise, though, and it tends to occur at a very low signal level, so it is usually only visible in very deep shadows.

Noise due to non-uniform response has to do with very slight differences in quantum efficiency between pixels. The quantum efficiency (Q.E.) of a sensor may fall within a certain percentage range, however there are usually still very slight differences between a given pixel and the pixels that surround it. There can also be slight differences in analog gain for each pixel's amplifier, which can introduce further slight counting error.

Hardware Noise Reduction

Most of the types of noise above can be controlled, and either eliminated or greatly mitigated. The most common form of hardware noise reduction is Correlated Double Sampling (CDS). Canon, Nikon, and Sony sensors all use some form of CDS. The general idea is that a separate part of the circuit for each pixel measures how much noise dark current introduces during readout. Before reading the pixel, dark current charge is read (sampled), then the full pixel well is sampled, and the difference between the two is kept as the actual pixel signal.

Most sensors don't employ much more in the way of hardware-level noise reduction. In the case of Sony's new Exmor sensor brand, a variety of additional hardware noise reduction mechanisms are utilized. Non-uniform response is handled by additional circuitry that compensates for differences in Q.E. between neighboring pixels. This balances out the Q.E. of all pixels in general, producing a much more uniform response in the final image signal. To combat some forms of FPN, Sony also employs Column-Parallel Analog-to-Digital Conversion (CP-ADC).

ADC is generally the culprit for "read noise", where additional signal noise as well as quantization noise is introduced to the analog image signal as it is digitized. Part of this is due to the high speed of ADC units...which are normally off the sensor die and housed in an external image processing chip. Most cameras utilize some form of parallel ADC, however there are considerably fewer ADCs than rows or columns of pixels. They must operate at a fairly high frequency to convert enough pixels to maintain the camera's frame rate, and higher frequency circuits tend to introduce more read noise. CP-ADC moves the ADC onto the sensor die, and adds one ADC unit per pixel column. This allows each ADC to operate at a far lower frequency, greatly mitigating the amount of read noise introduced. It hyperparallelizes the read pipeline, improving overall sensor readout rate (theoretically...we have yet to see a Sony Exmor sensor capable of faster readout than a Canon sensor.) Finally, each ADC can be tuned to mitigate vertical banding in its column, helping mitigate fixed pattern noise that exhibits as banding.
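The frequency argument is simple arithmetic: required conversion rate per ADC is total pixel throughput divided by the number of ADCs. A rough illustration using a hypothetical 24 MP sensor (the resolution, frame rate, and ADC counts here are invented for the example, not taken from any real camera):

```python
def per_adc_rate(width, height, fps, n_adcs):
    # Pixels each ADC must convert per second to sustain the frame rate.
    return width * height * fps / n_adcs

# Hypothetical 6000x4000 (24 MP) sensor read out at 5 fps.
print(f"{per_adc_rate(6000, 4000, 5, 4):,.0f} px/s each with 4 off-die ADCs")
print(f"{per_adc_rate(6000, 4000, 5, 6000):,.0f} px/s each with one ADC per column")
```

With one ADC per column, each converter handles three orders of magnitude fewer conversions per second, which is what lets it run slowly and quietly.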

Benefits of Hardware Noise Reduction

Thanks to their multiple levels of hardware noise reduction, Sony Exmor sensors are far less susceptible to electronic and read noise. They still suffer from a small amount, and microscopic manufacturing defects still cause per-pixel fixed pattern noise resulting in "hot" red, green, and blue pixels. The impact is only a couple of electrons' worth in most cases, as most dark current and vertical and horizontal fixed pattern noise is eliminated by the additional noise reduction circuitry. Read noise is reduced by ADCs that operate at a slower frequency in parallel. This leads to a noise floor well below that of most other manufacturers (including those of medium format sensors), whose sensors can exhibit anywhere from 8 to 35 electrons' worth of read noise at ISO 100.

Most might think this simply leads to less noise. In general, it does not. It only leads to less electronic and read noise, which, even at 30+ electrons worth, is still a very tiny fraction of maximum signal. In APS-C sensors maximum signal ranges from around 15,000 to 30,000 electrons per pixel. In FF sensors maximum signal ranges from 60,000 to around 100,000 electrons per pixel, possibly even more for sensors that have a particularly large pixel pitch. The vast bulk of image noise comes from photon shot noise. This is noise due to the random nature of light, such that photons are not guaranteed to evenly distribute over the sensor and produce smooth gradients and smooth solids. Removal of random photon shot noise is a realm where wavelet deconvolution excels, so the use of software noise removal is still a critical component of noise removal in general.
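The claim that shot noise dominates follows from Poisson statistics: on a mean signal of N photons, the noise is √N, so SNR ≈ √N and brighter regions look cleaner. A quick sketch, using a Gaussian approximation to the Poisson distribution (valid for large photon counts; the patch size is arbitrary):

```python
import math
import random
import statistics

random.seed(0)

def simulate_patch(mean_photons, n_pixels=10000):
    # Gaussian approximation to Poisson shot noise (good for large counts):
    # each pixel's count ~ Normal(mean, sqrt(mean)).
    return [random.gauss(mean_photons, math.sqrt(mean_photons))
            for _ in range(n_pixels)]

# SNR of a uniformly lit patch grows as the square root of the signal.
for mean in (100, 10000):
    patch = simulate_patch(mean)
    snr = statistics.mean(patch) / statistics.stdev(patch)
    print(f"mean signal {mean:>6}: SNR ~ {snr:.1f} (theory {math.sqrt(mean):.1f})")
```

This is also why deep shadows (low N) are where both shot noise and the electronic noise floor become visible first.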

The true benefit of hardware noise reduction is less in the mitigation of noise, and more to the benefit of dynamic range. By reducing electronic and read noise from dozens of electrons to around 2.8 electrons per pixel, dynamic range in low-signal regions of an image can be greatly improved. This is the primary reason that Nikon's D7000, D800, and D3200 are able to achieve over 13 stops of dynamic range while Canon sensors are barely able to achieve 11 stops, and have never achieved more than 12 stops, of dynamic range.
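The dynamic-range arithmetic here is just the ratio of full-well capacity to the read-noise floor, expressed in stops. A sketch using hypothetical full-well and read-noise figures in the ranges quoted above:

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    # Engineering dynamic range: ratio of the brightest recordable signal
    # (full well) to the noise floor, in stops (powers of two).
    return math.log2(full_well_electrons / read_noise_electrons)

# Same hypothetical 75,000 e- full well, two different noise floors.
print(round(dynamic_range_stops(75000, 2.8), 1))   # ~2.8 e- floor: ~14.7 stops
print(round(dynamic_range_stops(75000, 30.0), 1))  # ~30 e- floor:  ~11.3 stops
```

Note the full well is unchanged in both cases; lowering the floor alone buys the extra stops, which is exactly the point made above.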

Benefits of Software Noise Reduction

Finally, software noise reduction has its place, and it's just as important as hardware noise reduction. No matter how good your hardware noise reduction, there will always be some amount of noise in a photograph. At the very least, ADC tends to introduce some degree of quantization error due to non-integer gain. If gain is 1.4, that means every 1.4 electrons result in one level of digital output. Since you can't convert 0.4 electrons, the ADC will oscillate between reading one electron as a single digital level and reading two electrons as a single digital level. This produces a slight amount of noise that is usually termed "read noise". It's largely insignificant, but it can be minimally observed while pixel peeping.
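That oscillation is easy to see in a toy model of the ADC transfer function (a deliberate simplification; a real ADC also adds analog noise on top of the rounding):

```python
def quantize(electrons, gain=1.4):
    # ADC output in digital numbers (DN): every `gain` electrons maps to
    # one output level, so fractional electrons are lost to rounding.
    return round(electrons / gain)

# Inputs that differ by exactly one electron sometimes land on the same
# DN and sometimes on different DNs -- that uneven stepping is the
# quantization component of "read noise".
print([quantize(e) for e in (1, 2, 3, 4, 5)])  # [1, 1, 2, 3, 4]
```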

The higher the signal-to-noise ratio, which can improve with improved Q.E., the less the random physical nature of light will cause visible noise. It will always cause some, though, and it will always be most visible in areas of smoother tone below a medium gray tone (18% gray). Cranking up ISO also increases the effect photon shot noise has on an image. Regardless of the efficiency of the sensor, photography at high ISO will always be noisier than at low ISO. This is due to the fact that sensors are actually fixed-response analog devices that sense light in a linear fashion. Increasing ISO does not actually increase sensitivity. On the contrary, it simply reduces the electron count that is considered "maximum saturation". If your full-frame sensor is capable of registering 100,000 electrons at ISO 100, it will be capable of registering 50,000 electrons at ISO 200, 25,000 at ISO 400, 12,500 at ISO 800, 6,250 at ISO 1600, etc. Beyond ISO 400, you're always working with a signal that, from a linear standpoint, is below medium gray, and simply amplifying it. The farther you push ISO past that point, the more pronounced photon shot noise will become.
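The halving of effective saturation with each ISO doubling can be written directly (using the same 100,000-electron full-frame figure from above):

```python
def effective_saturation(full_well_electrons, iso, base_iso=100):
    # Each ISO doubling above base ISO halves the electron count that
    # is treated as "maximum saturation"; the sensor itself is unchanged.
    return full_well_electrons / (iso / base_iso)

for iso in (100, 200, 400, 800, 1600):
    print(f"ISO {iso:>4}: {int(effective_saturation(100000, iso)):>6} e- saturation")
```

With fewer electrons representing full scale, the √N shot noise is a larger fraction of the signal, which is why high ISO always looks noisier.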

When it comes to dealing with predictably random noise, such as that caused by photon shot noise or introduced by the ADC, software algorithms excel. One of the best forms of noise removal for this type of noise is the family of wavelet deconvolution algorithms, which are fairly good at modeling the random distribution of photons. Most noise reduction algorithms in commercial software seem to work according to less advanced concepts than wavelet deconvolution, so noise removal is currently less effective in general than it probably could be. Hopefully we'll see more advanced noise removal algorithms introduced into commercial software in the future, with the hope that random noise can be mitigated nearly entirely without affecting useful, real detail.
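As a rough illustration of the wavelet idea, here is a single-level Haar transform with soft thresholding on a 1D "flat gray patch". This is far simpler than the wavelet deconvolution discussed above, and all the numbers are made up, but it shows the core trick: noise spreads into small detail coefficients that can be shrunk toward zero, while real structure would survive as large coefficients.

```python
import math
import random

random.seed(1)

def haar_dwt(signal):
    # One level of the Haar wavelet transform: pairwise averages
    # (approximation) and pairwise differences (detail).
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # Exact inverse of haar_dwt.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft_threshold(coeffs, t):
    # Shrink detail coefficients toward zero: small (noise-dominated)
    # coefficients vanish, large (edge-dominated) ones mostly survive.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# Noisy flat patch: true value 100, Gaussian noise of std 5.
noisy = [100 + random.gauss(0, 5) for _ in range(256)]
approx, detail = haar_dwt(noisy)
denoised = haar_idwt(approx, soft_threshold(detail, 5.0))

def rms_error(xs):
    # RMS deviation from the known true value of 100.
    return math.sqrt(sum((x - 100) ** 2 for x in xs) / len(xs))

print(f"RMS noise before: {rms_error(noisy):.2f}, after: {rms_error(denoised):.2f}")
```

Real denoisers use multi-level transforms, smarter wavelets, and noise-model-driven thresholds, but the shrink-the-detail-coefficients principle is the same.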