There is no specific luminance channel coming directly off the sensor. It's RGB photosites, and the native ratio on all the Canon sensors is 4-2-2. So big deal that mRAW is 4-2-2; so is RAW, no?

Each photosite gives us one 14-bit measurement of one of the RGB channels. All this data, with minimal processing, is essentially the raw data file.

mRAW and sRAW just use fewer sites. And just like full RAW, they are demosaiced by DPP or whatever raw software you use.

Correct. There is no luminance channel on the sensor. However, every single pixel on the sensor can be read to determine the luminance level of that pixel, regardless of which color channel it is. Three pixels, a red, a green, and a blue (or even a full 2x2 RGGB quartet), can be read to produce a single luminance value (Y).

That said... is the difference between an encoded format and a raw format unclear?

Raw...UNENCODED. ZERO processing.

m/sRAW...ENCODED. Processed. Interpolated. Demosaiced. Not RAW.

Since the picture doesn't seem to be clear to anyone who still thinks mRAW and sRAW are RAW formats rather than demosaiced (processed) and encoded formats, here is what happens when generating either one:

Sensels are all read.

Luminance channel data (Y) is produced from the raw values read from every sensel, regardless of its color.

Chrominance channel data (Cr, Cb) are produced from raw values of every other sensel read and a corresponding Y component.

The luminance and chrominance channel data is assembled and compressed with lossless compression.

The format and structure of this data is the same as for JPEG images, the primary differences being lossless compression and higher bit depth (14 bits vs. 8 bits). Outside of those two differences, mRAW and sRAW could be considered a better form of JPEG rather than a smaller form of RAW. The very word RAW should be a clue here... RAW... unprocessed, uncooked, unmodified, natural and untainted in every way. As in raw meat, straight off the bone. Raw meat isn't yet a meal; it has to be cooked. To continue the analogy, m/sRAW would be like meat that is cooking: it's been sliced and diced, seasoned, marinated, and just needs some heat to become edible.

Neither mRAW nor sRAW actually contains original sensel data off of the sensor. Every value encoded in the output image, be it luminance data or chrominance data, has been processed. The author of the article I linked, in his attempt to be accurate, prefers not to call the Y channel an actual "luminance" channel, as it's based on linear data rather than gamma-corrected data. So he calls it luma, which has long been a shorthand way of referring to a linear luminance component. Similarly, in an attempt to be accurate, he calls the Cr and Cb channels "chroma" rather than chrominance, for the same reason. (In the world of CIE and standards-based color and transforms, luminance and chrominance must be appropriately weighted components to be real or "true".)

To generate a single Y (luma) value, three sensels, red, green, and blue, must be read and weighted:

y = (0.296 * r) + (0.592 * g) + (0.114 * b)

No sensels are skipped in the production of y values, so we're not losing any information here. Similarly, to produce Cr and Cb (chroma) values, every other (every even) sensel in a row is read and processed against the corresponding luma value:

cr = r - y
cb = b - y

Green pixels are not read into their own distinct color channel, as green is a byproduct of combining y (luma, which is based on red, blue, AND green sensels) with Cr and Cb (which are the differences between a red sensel and its luma value and a blue sensel and its luma value). Each y, cb, cr triple is then stored with 14-bit precision for saving to the actual image file (which is still ultimately a .CR2 container file; it just contains something entirely different from a normal full-RAW .CR2). The final output data in an m/sRAW file has very little to do with a two-dimensional matrix of R/G/B/G sensels on a CMOS die the way a true RAW file does. The final output is a transformation of RAW data into something entirely different, more reminiscent of the pre-digital analog TV signals sent over airwaves and cable into people's homes.
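
The transform described above can be sketched in a few lines. This is a simplified single-triple version using the weights quoted in this thread (0.296/0.592/0.114); a real encoder works across the whole Bayer mosaic and subsamples the chroma channels, which is where the actual information loss happens.

```python
def encode_ycc(r, g, b):
    """Turn linear RGB sensel values into one y/cb/cr triple,
    using the luma weights quoted in the post above."""
    y = 0.296 * r + 0.592 * g + 0.114 * b
    cr = r - y
    cb = b - y
    return y, cb, cr

def decode_ycc(y, cb, cr):
    """Invert the transform. Green was never stored directly;
    it is back-solved from the luma equation."""
    r = y + cr
    b = y + cb
    g = (y - 0.296 * r - 0.114 * b) / 0.592
    return r, g, b

# Round trip: the algebra itself is exactly invertible, so the
# losses in m/sRAW come from chroma subsampling, not this step.
r, g, b = 1000.0, 2000.0, 500.0
r2, g2, b2 = decode_ycc(*encode_ycc(r, g, b))
assert all(abs(a - c) < 1e-6 for a, c in zip((r, g, b), (r2, g2, b2)))
```

Note how green falls out of y, cb, and cr rather than being carried as its own channel, exactly as the post describes.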

It doesn't really matter if you think the processing described above is "minimal" or not... it's still a transformation, and a fairly radical one. You don't have the original untainted source information to regenerate luminance and chrominance with appropriate gamma weighting for the particular device you actually work your images on (which may be 2.2, or possibly 1.8). Luminance information is ALREADY weighted... it's got a 0.296 weight for the red channel, a 0.592 weight for the green channel, and a 0.114 weight for the blue channel. That's baked. It's in, it's done. Burned off the sensor, and now it sits between you and the actual RAW data that would have given you a richer editing experience. The blue and red channels are also already baked, since they are the difference between a red or blue sensel read and the corresponding y value for those same pixels... which was weighted.

The entire point of RAW is to get the data from the camera to the computer BEFORE such processing occurs... before ANY processing of ANY KIND occurs... since it's effectively THAT processing that a RAW editor like ACR, Lightroom, Aperture, or one of the open-source tools does. And, to be blunt, those tools do a far better job, with more advanced algorithms that require more horsepower than a camera's image processor has, and they produce better images. Having shot mRAW for about two weeks solid after I first got my 7D, I was rather dismayed to see a variety of "demosaicing" artifacts (or YCC encoding artifacts, for those who want to be more accurate) baked into my images: funky color fringing that had nothing to do with chromatic aberration, and odd aliasing along round edges that doesn't occur with more advanced demosaicing algorithms like AHD.

This is just a warning. I'm trying to be honest with those who expect 100% RAW capability out of mRAW or sRAW, with the assumption that you can push exposure, white balance, and noise removal around to the same degree as with an actual true RAW. As someone who spent a fair amount of time experimenting with mRAW, I was EXTREMELY dismayed by the limitations and encoding artifacts. Both mRAW and sRAW produce far smaller files that import faster, load faster, and operate faster in LR's Develop module. But there ARE tradeoffs... tradeoffs you should be aware of and take into account when choosing your image mode. Neither mRAW nor sRAW is a true raw format. They are effectively super-JPEG: 14 bits with lossless compression. The deeper bit depth gives you about the same post-process editing latitude you might get from a 16-bit TIFF image.

Thanks for taking the time to write all that. I have played with exposure, white balancing and noise on my mRAW files but have not noticed anything significantly different than RAW processing. Maybe I am not pushing them hard enough? I guess I'll try again: create RAW and mRAW files from a static scene and push both in post-processing. Do you have results from such a test to show the difference?

I did have results. I photograph birds, and when exposing to maximize my use of the DR my 7D offers, I tend to push exposure pretty far to the right. Lots of birds are white or have a lot of white parts, and I often have to do considerable highlight recovery to restore detail to the feathers in such photographs. When using mRAW, I found that the ability to adjust exposure was very limited. I could recover highlights somewhat with mRAW, but it had some pretty severe limitations.

With RAW I can recover 100% of the detail in feather highlights so long as I did not actually blow them (surpass maximum saturation). With mRAW, I could only recover some tonality and a little color (such as with a Snowy Egret, which has yellowish-silvery plume feathers that show up white when you ETTR), but detail was usually unrecoverable. I could push exposure around a lot, I could even drop the exposure slider as far as it would go, making midtone areas of the photos nearly black...but those white highlights would stay bright white.

Same thing usually went for shadows. If I needed to lift shadows (which was more often the case with landscape and macro photography than birds), in RAW...despite the fact that I use a Canon camera...I could usually lift shadows by a couple stops if I needed to. With mRAW, lifting shadows would ultimately result in rather muddy, blotchy patches of grayish tone, without much detail. I could push the exposure slider to its maximum positive setting, and still not extract much detail from the shadows.

I don't think I have any of those files. After messing with mRAW for a couple of weeks, I was so disgusted I marked most (if not all) of them as rejects in LR. I purge my rejects periodically, so it's doubtful they still exist. I could make some more; however, most of the birds I photograph that exhibited the problem so well (such as Snowy Egrets) have all flown south.

These are predicted specs from Japanese magazine CAPA. I tried to verify the credibility of CAPA: in June 2012, they predicted the following specs for the Sony A99 which was only announced recently (Sep 2012): 24 MP, SLT technology, 101 phase detect AF points with wide coverage and ISO up to 51,200. Turns out the A99 has 19 conventional phase detect AF points and 102 on-sensor phase detect AF points, all with limited coverage. Also, actual ISO that 'only' goes up to 25,600.

Most gearheads can only afford one system at a time. They spend tens of thousands of dollars on gear only to have their state-of-the-art cameras sitting on their coffee tables. They don't bother taking photos or learning how to shoot in manual. Then they wait eagerly on the Internet for the next uber-megapixel camera with 50 extra stops of DR to come out from an opposing brand. Then they sell all their gear and start all over again.

The technical talk in this thread is most welcome. I'm a computer guy but didn't know a lot (compared to some of you) about how the camera does its thing, or about s/mRAW. These are always good reads, and I thank the couple of you talking it out.

To the people saying naive things (and there were a couple; that guy probably has a Rebel and a kit lens): I'm sure there are people out there who could out-shoot me with a pinhole camera. Let's remember that equipment is only measured against other equipment, and that photographers are not defined by their gear. Skill, passion, and vision can be had by anyone, not only those who can afford an expensive camera.

The Nikon D600 looks far better on paper than the 6D, which is a disappointment spec-wise and should have been a lot better for the money they are charging. I will not, however, be selling my Canon gear, as some of you in this thread have screamed for, just for saying Nikon has a better product. There can be circumstances you know nothing about that put people in this position. Mine is that I already have a 5D III, which I feel is a nicer camera than either of those. But if I needed a backup body, or when friends ask me which camera to buy, the D600 has the specs I would rather have and recommend (and then I could borrow any good lenses they acquire), and I wish Canon was the name on it. I won't be buying a 6D.

And on the topic at hand: I would be interested in a Canon camera like this for landscape and maybe light studio work. More like a 645D than a D800. On this body I don't need high frame rates, I don't need a bazillion-point AF, I don't need fancy video modes. I would like the following things, which I feel Canon could do:

46.1 MP full frame - amazing IQ and detail

3 fps

ISO 100-6400 (maybe a native 50)

16-bit - 14.5+ stops of DR

Low point-count AF (4, 9, or 11 tops) with good spread and a very accurate low-light center point

Very well weather sealed

Built-in flash controller

$4800-5200

4. People who equate number of bits with quality or DR seem to have been brainwashed by marketing. All of the info I have seen suggests that few if any current cameras are actually limited by the number of bits used in the ADC/raw file format (Sony FF DSLRs being a possible exception). Rather, they seem to be limited by various analog/physical noise phenomena, and sensible engineers choose a number of bits that allows them to capture all of the information (pure noise does not contain information in the sense we are talking about: it could be replaced by a random generator in your raw developer). It is possible that the rumored camera brings amazing advances in signal/noise properties that warrant 16 bits, or Canon might do this for marketing purposes alone (just like medium-format manufacturers).

-h

Not really true. You need to look at how ADCs work. A high-speed 14-bit ADC will rarely give you 14 usable bits at the output. Go look at the datasheets from well-respected semiconductor companies that make discrete ADCs: you give up precision for speed. This is why the Exmor line has the noise floor it does; very little of it is conventional electronic noise. Canon has two choices: a lot of (much) slower ADCs, or more, 'wider' ADCs at almost the same speed (but slowed as much as possible).
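
The datasheet figure behind "rarely 14 usable bits" is ENOB (effective number of bits), derived from the measured signal-to-noise-and-distortion ratio. Here's the standard conversion; the 74 dB figure below is a hypothetical example, not a measured Canon or Sony spec.

```python
def enob(sndr_db):
    """Effective number of bits from SNDR in dB, per the standard
    ADC formula: ENOB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

# An ideal 14-bit converter would measure SNDR = 6.02*14 + 1.76 dB.
ideal = 6.02 * 14 + 1.76
print(round(ideal, 2))       # 86.04 (dB)

# A fast pipeline ADC quoting, say, 74 dB SNDR actually delivers:
print(round(enob(74.0), 1))  # 12.0 effective bits
```

So a "14-bit" converter run at high speed can behave like a 12-bit one, which is the precision-for-speed tradeoff described above.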

Please feel free to test how many images you can take at full size, then do it in mRAW. Whether or not I am right about the extra processing doesn't change the fact that something slows it down, and you can't record as many images before it takes a break. If you own the camera, or the manual PDF, you will see on pg. 121 the effect on how many images can be shot in RAW continuously. Now almost double that file size with the D800, let alone a 46 MP Canon, and imagine how long it takes to write a RAW file, let alone a medium RAW. Facts are facts. I tested the buffer on my 5D Mark III in March when it came out; it was one of the very first things I did.
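
The arithmetic behind the buffer argument is straightforward. All numbers here are illustrative assumptions, not quoted specs: a fixed-size buffer holds fewer frames as files grow, and the per-frame card-write time grows with file size too.

```python
def burst_depth(buffer_mb, file_mb):
    """Whole frames the buffer can hold before the camera slows down."""
    return buffer_mb // file_mb

def drain_time(file_mb, write_speed_mb_s):
    """Seconds to flush one frame to the memory card."""
    return file_mb / write_speed_mb_s

buffer_mb = 1000                     # hypothetical buffer size in MB
print(burst_depth(buffer_mb, 25))    # 40 frames at ~25 MB/frame
print(burst_depth(buffer_mb, 45))    # 22 frames at ~45 MB/frame
print(round(drain_time(45, 90), 2))  # 0.5 s/frame at 90 MB/s
```

Roughly double the file size and the sustainable burst roughly halves, regardless of what extra processing is or isn't happening.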

So finally, a rumor to stop the bleeding of Canon owners selling up for Nikon.

i.e. this rumor was to be expected after the combination of the D800 and D600 showed Canon's current full-frame sensor to be wanting.

Expect the 5D Mark III to have a shorter life than either the 5D or the 5D Mark II.

So, are you one of the people "owning Canon gear selling up for Nikon"?

Whether or not I am is not the point.

This rumor is strategic in nature, as will be the announcement of the camera next month, because it is talking to specific feature/performance areas where Canon is currently vulnerable.

Canon need to do something to keep people from wondering whether or not their R&D has fallen behind and cannot keep up with the pace that Sony have set.

With all of the vulnerability you claim Canon is having, one would think you would be the first to be selling your Canon gear. You say the D800 and D600 have superior sensors, and yet you haven't sold your Canon gear in order to buy Nikon?

Why does buying Nikon gear require selling Canon gear?

I'm only selling the Canon gear that doesn't perform as well as stuff I can replace with a competitor's gear. I keep the Canon stuff that works better than the competition's for whatever particular uses I have for it. But wherever I can drop Canon in favor of better raw IQ from a competitor's product, I do. I push my files in post and need better noise performance than Canon provides. If Canon's noise levels were just as high but at least not PATTERNED, I would not be very concerned about shifting allegiance to the competition. I hate the noise stripes and plaid patterns on the big prints that come from my McCanon cameras.

All I can say is that if Canon can one-up the Exmor sensor, I will have some serious respect for their engineers. The rumor of a new sensor actually sounds plausible to me. Canon CMOS technology in the last few years has shown relatively slow, evolutionary improvement without any radical design changes to improve performance. This points to the fact that they are quite likely concentrating most of their sensor R&D resources on an entirely new sensor tech while existing tech receives only moderate boosts in performance.

This could also be why Canon has been slow to introduce replacements for the 1Ds III and 7D. These are/were the flagship models of the full-frame and APS-C camera segments, respectively. This makes them ideal for introducing and showing off a brand new sensor technology in both sensor formats. Just my 2¢.

If Canon goes to a 32-channel readout at 16 bits with their ADCs, their DR will suddenly shoot up in the test scores (like DXO's). Of course, for 98% of images taken this will mean absolutely nothing, but it will stop people from complaining.
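
A quick sketch of why more ADC bits only help up to a point: recorded DR is bounded by both the ADC bit depth and the analog chain's own range (full-well capacity over read noise). The full-well and read-noise numbers below are hypothetical, chosen only to illustrate the crossover.

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Engineering DR of the analog chain, in stops (base-2 log)."""
    return math.log2(full_well_e / read_noise_e)

def adc_dr_stops(bits):
    """Upper bound that ADC quantization places on recorded DR."""
    return float(bits)

full_well = 60000.0   # hypothetical electrons at saturation
read_noise = 3.0      # hypothetical electrons of read noise
analog = sensor_dr_stops(full_well, read_noise)

print(round(analog, 2))                         # 14.29 stops from the analog chain
print(round(min(analog, adc_dr_stops(14)), 2))  # 14.0  -- a 14-bit ADC clips it
print(round(min(analog, adc_dr_stops(16)), 2))  # 14.29 -- 16 bits no longer the bottleneck
```

With this (assumed) analog chain, going from 14 to 16 bits recovers a fraction of a stop on a chart, which matches the point that most real-world images wouldn't show it.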