I've read many posts that say that in a full-frame camera the pixel density is lower compared to a crop-sensor camera, so each pixel captures more light and the camera thus has better ISO performance and greater dynamic range. So if I set my crop-sensor camera to shoot at a lower resolution, will that equate to a better pixel density and mimic the performance of a full-frame (or medium-format) camera, or will it always shoot at maximum resolution and then reduce the size?

--EDIT: 1--
I have a Canon 60D, and I have 3 options for RAW image sizes (RAW, M-RAW and S-RAW). If RAW is just a dump from the camera's sensor, how can there be 3 different sizes? Does the camera also scale down RAW images?

Vivek - read this question: photo.stackexchange.com/q/3419/1024. According to @whuber (and the article he links to), the smaller RAWs are indeed some kind of aggregation of the individual sensels, like what Stan describes in his answer, only it is done in software rather than in hardware.
– ysap Sep 27 '11 at 14:14

(The page ysap links to covers the mraw/sraw part of the question.)
– mattdm Sep 27 '11 at 14:31

I'll give the document from ysap a read and comment on it.
– Vivek Sep 27 '11 at 15:18

3 Answers

Given that you have a Canon, the lower RAW modes, mRAW and sRAW, DO INDEED UTILIZE ALL of the available sensor pixels to produce a richer result without the need for Bayer interpolation. The actual output format, while still contained within a .cr2 Canon RAW image file, is encoded in a Y'CbCr format, similar to many video pulldown formats. It stores luminance information for each FULL pixel (a 2x2 quad of 1 red, 1 blue, and 2 green sensels), and each chrominance channel is derived from half-pixel data (a 1x2 pair of 1 red + 1 green or 1 blue + 1 green).

I am not exactly certain what the specific low-level hardware read and encoding differences between mRAW and sRAW are; generally speaking, however, the smaller the output format, the more sensor-pixel input information you can use for each output pixel. The small amount of interpolation present in m/sRAW is moot, as both formats interpolate far less than native RAW demosaicing does. It should also be noted that neither mRAW nor sRAW is an actual "RAW" format in the normal sense...sensor data IS processed and converted into something else before it is saved to a .cr2 file.
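The idea of deriving one Y'CbCr output pixel from a full 2x2 Bayer quad can be sketched as follows. Canon's actual low-level math is undocumented, so the luma weights below are illustrative assumptions (standard Rec. 601-style coefficients), not Canon's coefficients:

```python
# Sketch of how an sRAW-style encoder might derive one Y'CbCr output
# pixel from a 2x2 Bayer quad. The coefficients are assumptions, not
# Canon's actual encoding math.
import numpy as np

def bayer_quad_to_ycc(quad):
    """quad: 2x2 array laid out as [[R, G1], [G2, B]]."""
    r = quad[0, 0]
    g = (quad[0, 1] + quad[1, 0]) / 2.0  # average the two green sensels
    b = quad[1, 1]
    # Luma draws on every sensel in the quad; chroma is differenced
    # against it, so each chroma channel uses only half the sensels.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = b - y
    cr = r - y
    return y, cb, cr

quad = np.array([[200.0, 120.0],
                 [130.0, 60.0]])
print(bayer_quad_to_ycc(quad))
```

The key point the sketch illustrates: every sensel in the quad contributes to the stored luma, so no neighboring-quad interpolation is needed.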

The sRaw format (for "small RAW") was introduced with the 1D Mark III
in 2007. It is a smaller version of the RAW picture.

For the 1D Mark III, then the 1Ds Mark III and the 40D (all with the
Digic III), the sRaw size is exactly 1/4 (one fourth) of the RAW size.
We can thus suppose that each group of 4 "sensor pixels" is summarized
into 1 "pixel" for the sRaw.

With the 50D and the 5D Mark II (with the Digic IV chip), the 1/4th-size
RAW is still there (sRaw2), and a half-size RAW also appears: sRaw1.
With the 7D, the half-size raw is called mraw (same encoding as sraw1),
and the 1/4th raw is called sraw (like the sraw2).

The JPEG code of Dcraw was first modified (in 8.79) to handle sRaw because
of the h=2 value of the first component (grey background in the table).
Normal RAW always has h=1. Starting with the 50D, we have v=2
instead of v=1 (orange in the table). Dcraw 8.89 is the first version
to handle this and the sraw1 from the 50D and 5D Mark II.

"h" is the horizontal sampling factor and "v" the vertical sampling
factor. It specifies how many horizontal/vertical data units are
encoded in each MCU (minimum coded unit). See T-81, page 36.

3.2.1 sRaw and sRaw2 format

h=2 means that the decompressed data will contain 2 values for the
first component: 1 for column n and 1 for column n+1. Together with the
2 other components, decompressed sraw and sraw2 (which all have h=2 &
v=1) always have 4 elementary values.

Every "pixel" in sRAW and mRAW images contains four components...a split Y' component (y1 and y2), as well as an x (chrominance blue) and z (chrominance red) component. All four components (from a 1/2-size image perspective, sRAW1/mRAW) have a horizontal sampling factor (h) of 2 and a vertical sampling factor (v) of 1. This indicates that the luminance value (Y') is comprised of a FULL 2x2 pixel quad...or two 2x1 pixel columns stored in y1 and y2.
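The four elementary values per MCU (y1, y2, Cb, Cr) can be sketched as decoding into two adjacent output pixels that share one chroma pair, much like classic 4:2:2 subsampling. The inverse-transform coefficients below are illustrative assumptions matching a Rec. 601-style encoding, not Canon's documented math:

```python
# Sketch: decode one sRAW MCU (y1, y2, Cb, Cr with h=2, v=1) into two
# RGB pixels that share a chroma pair. Coefficients are assumptions
# (Cb = B - Y', Cr = R - Y'), not Canon's actual transform.
def mcu_to_rgb(y1, y2, cb, cr):
    """Return two (R, G, B) pixels decoded from one MCU."""
    pixels = []
    for y in (y1, y2):
        r = y + cr                              # invert Cr = R - Y'
        b = y + cb                              # invert Cb = B - Y'
        g = (y - 0.299 * r - 0.114 * b) / 0.587  # solve luma for G
        pixels.append((r, g, b))
    return pixels

print(mcu_to_rgb(140.015, 141.0, -80.015, 59.985))
```

Because both output pixels reuse the same Cb/Cr pair, chroma resolution is halved horizontally while luma resolution is preserved.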

The references below do not seem to state this specifically, so I am speculating a bit here; however, with sRAW2 (1/4-size raw) I believe luminance information would be derived from a 4x4 pixel block where h=4 and v=2. Encoding chrominance would get more complex at a 1/4-size image, as the Bayer color filter array on the sensor is not arranged in neat red and blue columns. I am unsure whether alternating 2x1 columns are processed for each Cr and Cb component, or whether some other form of interpolation is performed. One thing is certain...the interpolation source data is always larger than the output data, and no overlapping (as in normal Bayer interpolation) occurs as far as I can tell.

Finally, sRAW1/mRAW and sRAW/sRAW2 are compressed using a lossless compression algorithm. This is a critical distinction between these formats and JPEG, which also uses a YCC-type encoding. JPEG performs lossy compression, making it impossible to restore pixels to their exact original values. Canon's m/sRAW formats can indeed be restored to the original full-precision 15-bit image data.

In theory, it could if the camera used the right strategy for reducing the image size.

As you noted, with current crop-sensor cameras, the raw image remains the same no matter what JPEG size you have set. The JPEG image is simply scaled. This can somewhat reduce the appearance of noise, but the reduction is due to the image scaling algorithm (you can't fit as many speckly pixels into the smaller picture as you can into the full-sized version). It's more likely, though, that you'd do at least as well, if not better, by performing the noise reduction and scaling yourself after the fact.
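The scaling effect is easy to quantify: averaging each 2x2 block of independent noisy pixels cuts the noise standard deviation roughly in half (1/sqrt(4)), which is all that downscaling buys you. A minimal simulation:

```python
# Sketch: downscaling reduces *apparent* noise. Averaging each 2x2
# block of independent noisy pixels halves the noise standard
# deviation, since the average of 4 samples has sigma/sqrt(4).
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((512, 512), 100.0)
noisy = clean + rng.normal(0.0, 10.0, clean.shape)  # sigma = 10

# Downscale by averaging non-overlapping 2x2 blocks.
small = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(noisy.std())  # close to 10
print(small.std())  # close to 5
```

No information about the scene is gained; the same reduction is available in post-processing, which is why doing it in-camera offers no real advantage.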

There is a strategy that will produce true noise reduction. Some high-resolution medium-format backs (like the Phase One SensorPlus series) use a strategy called pixel binning, where groups of adjacent sensels are treated as one much larger sensel and their cumulative charge is read from the sensor. That's different from reading individual charges and averaging (which is what you're restricted to in post-read processing) -- it occurs at the hardware level, and changes what "raw" means. The read noise has a better chance of cancelling out, and the cumulative charge makes the analog-to-digital conversion less ambiguous (the range of quanta converted is wider with less amplification).
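The difference between hardware binning and post-read averaging can be sketched statistically. In binning, the charges of a 2x2 sensel group are summed before the (noisy) readout, so read noise is incurred once per superpixel; in post-read averaging, each sensel gets its own noisy readout, so read noise is incurred four times. The noise figures below are arbitrary illustrative values:

```python
# Sketch: hardware pixel binning vs post-read averaging.
# Binning: sum charge first, then ONE noisy readout per superpixel.
# Averaging: FOUR noisy readouts, then combine. Same signal, more
# accumulated read noise in the second case.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = rng.poisson(50.0, (n, 4)).astype(float)  # shot noise per sensel
read_sigma = 5.0                                  # assumed read noise

# Hardware binning: charge is summed on-chip, read out once.
binned = signal.sum(axis=1) + rng.normal(0.0, read_sigma, n)

# Post-read averaging: each sensel read out separately (scaled to the
# same summed units for a fair comparison).
averaged = (signal + rng.normal(0.0, read_sigma, (n, 4))).sum(axis=1)

print(binned.std())    # shot noise + one dose of read noise (~15)
print(averaged.std())  # shot noise + four doses of read noise (~17.3)
```

The signal variance is identical in both cases (4 x 50 from shot noise); only the read-noise contribution differs (25 vs 100), which is why binning must happen at the hardware level to pay off.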

In practice, this usually means cutting the resolution by a factor of four (half the width and half the height). With a 60 or 80MP medium-format back, that still leaves you with a 15 or 20MP image; with a 16MP crop-sensor camera, you'd be down to a 4MP raw image. Now you may know and I may know that a clean 4MP image is better than a noisy 16MP image, but not everybody will buy into the idea that it costs extra to produce a smaller image. That means it's unlikely you'll see pixel binning used in anything less than a pro-level camera any time soon. It may appear in full-frame cameras if their resolution keeps climbing, but I wouldn't look for it in a crop sensor. (Well, maybe Pentax might take a stab some day, since they don't do full-frame.)

I'm sorry, I think I should clarify about the RAW image sizes. I have a Canon 60D, and I have 3 options for RAW image sizes (RAW, M-RAW and S-RAW). If RAW is just a dump from the camera's sensor, how can there be 3 different sizes? Does the camera also scale down RAW images?
– Vivek Sep 27 '11 at 12:39

@Stan: Canon already does exactly what you described with their mRAW and sRAW formats. They are not literal RAW formats, they are YUV derivatives (Y'CbCr to be exact), and they do indeed perform a form of pixel binning. See my answer for more details.
– jrista♦ Sep 27 '11 at 16:49

Re the future: The real limitation is sensor area. If the sensor size remains the same and resolution goes up (by shrinking the pixels), there will be no net gain from pixel binning. It's merely a matter of using more sensels to read the same physical area of the sensor. What we can hope for is improved sensitivity of individual sensels, so that more light and less noise are registered within any given small portion of the sensor.
– whuber Sep 28 '11 at 17:48


@jrista: that ain't binning, that's post-read averaging. Binning must result in an integral reduction in linear resolution, and individual photosite data is not available for processing since cumulative reads (not separate, then averaged, reads) are performed over multiple sensels. (In a Bayer-quad system, that means 1/4, 1/16, 1/64, etc., of the full resolution expressed as area or pixels.) Post-read averaging is no different, technically, from scaling; it's just working in a different data space.
– user2719 Sep 29 '11 at 16:51

If high noise is your main problem, one solution is to shoot several frames and have software with good algorithms combine one good image from several worse ones. For example, ALE (the Anti-Lamenessing Engine) does this. For moving subjects this obviously does not work, but you can shoot handheld at, say, ISO 1600 and then combine the shots to get noise levels close to ISO 400 or 800.
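The core statistics behind frame stacking can be sketched simply: averaging N aligned exposures of a static scene reduces random noise by roughly sqrt(N). ALE does far more than this (alignment, robust merging); the sketch below only demonstrates the averaging effect, with arbitrary illustrative noise values:

```python
# Sketch: stacking N aligned frames of a static scene reduces random
# noise by ~sqrt(N). With 4 frames, per-pixel noise is roughly halved.
import numpy as np

rng = np.random.default_rng(2)
scene = np.full((256, 256), 100.0)
sigma = 16.0  # per-frame noise, e.g. a high-ISO handheld exposure

# Four independent noisy exposures of the same (static) scene.
frames = [scene + rng.normal(0.0, sigma, scene.shape) for _ in range(4)]
stacked = np.mean(frames, axis=0)

print(frames[0].std())  # close to 16: one noisy frame
print(stacked.std())    # close to 8: noise halved by averaging 4 frames
```

This is why four ISO 1600 frames can approach the noise character of a much lower ISO: the signal adds coherently while the noise adds in quadrature.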