I wish they would improve the soft output of the 5DIII to match the GH2. Unlikely, since it seems you have to remove the OLPF to get any sharpness out of the 5DIII video.

I have never seen any evidence that OLPF removal, which will only affect the 22MP image directly, affects the 2MP video in any significant (positive) way. That's a false meme that has even bled into the stills shooters' vernacular, and it should be crushed until and unless reproducible, verifiable evidence (i.e. resolution charts, moiré charts, etc.) is presented.

The damage to the video is done in the digital downscaling, and if ML can hack into that and provide a better method, the 5D3 will be unleashed. If Canon really wanted to protect the 1DC and C300 from the 5D3, they would have made that inaccessible.
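
To be clear about what "a better method" could look like, here's a rough Python sketch (using Pillow, with a hypothetical demosaiced still as input, since current firmware exposes nothing like this for video) of the kind of filtered downscale that uses every photosite instead of line-skipping or crude binning:

```python
# Hypothetical illustration only: a proper filtered downscale of a full
# demosaiced frame, as opposed to the line-skipping/binning done in hardware.
from PIL import Image

frame = Image.open("demosaiced_frame.tif")   # e.g. a 5760x3840 still off the 22MP sensor
w, h = frame.size
crop_h = w * 9 // 16                         # crop the 3:2 still to 16:9 first
top = (h - crop_h) // 2
video = frame.crop((0, top, w, top + crop_h)).resize((1920, 1080), Image.LANCZOS)
video.save("downscaled_1080p.png")
```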

I think that they are one and the same (G:R:B and Y:Cb:Cr) in this application, but I'm not certain.

4:2:2 is a type of chroma sub-sampling.

Since the human eye is more responsive to brightness than to colour, it is possible to save space when storing an image by storing more brightness values than colour values. This is done by converting the RGB colour space to Y:Cb:Cr. The colours (Cb, Cr) are then sampled less often than the brightness (Y) when the image is saved.

Note that the Y:Cb:Cr colour space can be converted back to RGB (it is simply a different three-dimensional representation of the same data). Missing Cb:Cr values are filled in from surrounding values, and then each pixel is converted back. This is done when viewing the movie.
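
For anyone who wants to see the round trip in code, here's a minimal sketch using the full-range BT.601 coefficients (HD video actually uses BT.709 coefficients, but the principle is identical):

```python
# Rough sketch of the RGB <-> Y'CbCr round trip described above,
# using full-range BT.601 coefficients.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

print(ycbcr_to_rgb(*rgb_to_ycbcr(200, 30, 90)))  # ~ (200, 30, 90)
```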

Check out the Wikipedia article on chroma subsampling, which has a few pictures showing how an image stored with the different sub-sampling schemes looks.

The subsampling scheme is commonly expressed as a three-part ratio J:a:b, where:

J: horizontal sampling reference (width of the conceptual region). Usually 4.
a: number of chrominance samples (Cr, Cb) in the first row of J pixels.
b: number of (additional) chrominance samples (Cr, Cb) in the second row of J pixels.

From this you can appreciate that 4:2:0 is worse than 4:2:2, since the former will only change colour every other horizontal line. Both will change colour every other vertical line. The holy grail is 4:4:4, since that does not throw any information away. It also takes up more space.
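
To put numbers on that, here's a quick sketch counting samples in a J-wide, two-row block, which shows how much each scheme stores relative to 4:4:4:

```python
# Back-of-the-envelope data cost of each scheme, counting samples in a
# J-wide, 2-row block: 2*J luma samples plus (a + b) samples each of Cb and Cr.
def relative_size(j, a, b):
    luma   = 2 * j                          # Y is never subsampled
    chroma = 2 * (a + b)                    # (a + b) Cb samples + (a + b) Cr samples
    return (luma + chroma) / (2 * j * 3)    # vs 4:4:4 = 3 samples per pixel

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    print(scheme, relative_size(*scheme))
# (4,4,4) -> 1.0, (4,2,2) -> 0.666..., (4,2,0) -> 0.5
```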

The second factor in quality is how many bits are used to store the data. Canon DSLRs use 8 bits (values from 0-255). The Cinema cameras use 10 bits (0-1023) or 12 bits (0-4095), and so get a lot more gradation of tone. This also takes more space.

I believe the combination of sub-sampling, bit depth, image size (e.g. 1920x1080) and frame rate (e.g. 24 fps) are all combined to create a bit rate for the movie. This is how much data passes through the system per second. It is this final number that states the overall quality of the system and also determines the storage media requirements involved.
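
As a sanity check on those numbers, here's the uncompressed data rate implied by each combination (no codec involved yet; real files are far smaller because of compression, as pointed out further down):

```python
# Uncompressed data rate implied by the numbers above; chroma_fraction is
# the relative size of the subsampling scheme (4:4:4 -> 1.0, 4:2:2 -> 2/3,
# 4:2:0 -> 0.5), so samples per pixel = 3 * chroma_fraction.
def raw_mbps(width, height, fps, bits, chroma_fraction):
    samples_per_pixel = 3 * chroma_fraction
    return width * height * fps * bits * samples_per_pixel / 1e6

print(raw_mbps(1920, 1080, 24, 8, 2 / 3))   # 8-bit 4:2:2 1080p24  ~ 796 Mb/s
print(raw_mbps(1920, 1080, 24, 10, 1.0))    # 10-bit 4:4:4 1080p24 ~ 1493 Mb/s
```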

Note that the Y:Cb:Cr colour space can be converted back to RGB (it is simply a different three-dimensional representation of the same data). Missing Cb:Cr values are filled in from surrounding values, and then each pixel is converted back. This is done when viewing the movie.

This is what I was saying: they are essentially the same in terms of the relevant data they carry, with maybe an affine transformation between them. Which is why I thought that MAYBE the Bayer pattern, which is like a native 4:2:2 sample, might have something to do with preventing a 4:4:4 output without serious processing/interpolation.

Also, 8-bit is nice, but since monitors are only 8-bit (RGB totalling 24 bits), the extra bits are ONLY useful in post. Theoretically the extra bits don't give you a better picture if you have the proper white balance/exposure to begin with.
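
You can demonstrate that with a toy example: quantise a dark gradient at 8 and 10 bits, apply an aggressive grade, and count how many distinct levels survive once everything lands back on an 8-bit display:

```python
import numpy as np

# Illustrating why extra bits matter in post: push a heavy gamma grade on a
# dark gradient quantised at 8 and 10 bits, then count the distinct output
# levels that survive when mapped back to an 8-bit monitor.
grad = np.linspace(0.0, 0.25, 1920)                # dark quarter of the range

for bits in (8, 10):
    levels = 2 ** bits - 1
    quantised = np.round(grad * levels) / levels   # what the camera recorded
    graded = quantised ** 0.4                      # aggressive lift in post
    display = np.round(graded * 255)               # back to an 8-bit monitor
    print(bits, "bit source ->", len(np.unique(display)), "distinct display levels")
```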

If the data is binned before readout by low-level sensor circuits, as some have suggested, I doubt a firmware-induced process change on the post-binned data will help resolution. The reason the D800 beats the 5DmkIII in perceived resolution is that internally it takes a 2240x1260 frame resulting from demosaic (yes, that's 1260p video) and downscales it decently to 1080p. Although the OLPF removal mod on the 5DIII does help things.

I think the most realistic expectation from tweaked firmware is at least a bump in bitrate so that the codecs don't fall apart so easily under motion. That has so far been the weakness of both the D800 and 5DmkIII (although it's not an issue if you use the 4:2:2 HDMI out on the D800 + Ninja 2.0).

Ultimately, I decided to skip the 5DIII, since it was clear Canon is not going to put the hardware in it that would cause it to cannibalize its precious C line.

If the data is binned before readout by low-level sensor circuits, as some have suggested, I doubt a firmware-induced process change on the post-binned data will help resolution.

The resolution isn't really that bad, though, if you look at resolution charts. Something like 800-900 lines, which isn't terrible. It's the apparent softness that's an issue, and that's compounded by the bad sharpening algorithm, which you can't turn up at all without getting halos. It's not an elegant solution, but sharpening in post really does help. But sharpening also brings out compression artifacts. If Canon or ML could improve the camera's sharpening algorithm (and add focus peaking and zebras), I would be thrilled with this camera for video.
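
For what it's worth, a basic unsharp mask is all I mean by "sharpening in post"; something like this (Pillow, with made-up filenames and starting-point settings; push percent too far and the compression artifacts come right up with the detail):

```python
# Quick-and-dirty post sharpen; radius/percent are just starting points,
# and overdoing percent will amplify the codec's compression artifacts.
from PIL import Image, ImageFilter

frame = Image.open("soft_5d3_frame.png")
sharp = frame.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
sharp.save("sharpened_frame.png")
```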

There's little chance of getting more resolution out of the camera, but either less sharpening or a higher bitrate (though 90 Mb/s seems pretty high as-is in terms of the SD card buffer) would make a big difference. Clean HDMI out would be fine, but at that point, just buy an FS100 instead. The point of DSLRs is their size and ease of use; external capture can be a pain.

I believe the combination of sub-sampling, bit depth, image size (e.g. 1920x1080) and frame rate (e.g. 24 fps) are all combined to create a bit rate for the movie. This is how much data passes through the system per second. It is this final number that states the overall quality of the system and also determines the storage media requirements involved.

Not really... The codec (the type of compression used to store the data) is massively important in understanding bitrates. For example, the 5DMIII has a 90 Mb/s intraframe codec and a far-lower-bitrate long-GOP codec, both of which have the same subsampling ratio, image size, bit depth and frame rate.

Describing the bitrate as the 'overall quality of the system' is also quite misleading... I'd take the 1080p 24 Mb/s footage off a D800 over the 25 Mb/s 576i MiniDV footage off a PD150 or XL1 any day.
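
A quick back-of-the-envelope comparison makes the point: work out how hard each codec is squeezing relative to the uncompressed 8-bit 4:2:0 signal it encodes (and that's before accounting for H.264 being a far more efficient codec than DV at any given ratio):

```python
# Why raw bitrate alone is misleading: compression ratio of each codec
# versus the uncompressed 8-bit signal behind it (chroma_fraction as above:
# 4:2:0 and 4:1:1 both work out to 0.5).
def compression_ratio(width, height, fps, chroma_fraction, codec_mbps):
    raw = width * height * fps * 8 * 3 * chroma_fraction / 1e6
    return raw / codec_mbps

print(compression_ratio(1920, 1080, 24, 0.5, 90))   # 5D3 ALL-I:  ~6.6:1
print(compression_ratio(1920, 1080, 24, 0.5, 24))   # D800:       ~25:1
print(compression_ratio(720, 576, 25, 0.5, 25))     # DV 576i:    ~5:1
```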