This is not an official news source for CineForm or GoPro product releases, just some bits and pieces of stuff I happen to be working on. My work and hobbies are pretty much the same thing. -- David Newman

Saturday, November 13, 2010

I had two inquiries today on using native DSLR files in today's modern NLEs vs. CineForm intermediates. Both users knew to use CineForm for finishing, multi-generation work, and effects renders, but wondered whether the NLEs' native DSLR decoding was now the same quality. While CineForm has been known for its performance and workflow advantages for mixed media, users sometimes forget we are a solution for source-file quality as well. Pretty pictures to follow.

The linked image below was from one of the first videos I took with a 7D of my daughter in poor lighting with some ISO gain, and likely that plastic 50mm -- so not the best shooting conditions, and in need of some color correction and maybe some sharpening. The more you have to correct the image, the more important the source image quality is.

Source converted with NeoHD's HDLink to a CineForm AVI, imported into CS5, and output to a PNG (2.04MB).

Viewing the above linked images at 1:1, the CineForm and CS5 outputs look identical. Vegas has a small color shift, as it is likely not compensating for the 601 colorspace used in Canon DSLR cameras, but otherwise maintains the same detail and dynamic range. While both the CS5 and Vegas outputs have undergone less processing, it is the CineForm output that is more color correctable. You may have also noticed the CineForm source PNG is nearly twice as large as either of the NLE native decodes. This is because the CineForm output has more continuous tones and a better 4:2:0 up-conversion, resulting in more "information" for the PNG compressor to store -- information is in quotes, as clearly there can be no more source data than in the original MOV, yet there are image development choices that make that limited data more useful to the filmmaker.

Zooming in on the source you can see some of the source's H.264 compression, but more apparent is the 4:2:0 chroma artifacting, greatly reduced in the CineForm output. The chroma artifacts can be seen as the break-up of the image into 2x2 pixel regions, particularly where the chroma changes rapidly (i.e., at color edges).
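To see where the 2x2 block pattern comes from, here is a toy sketch (my own illustration, not any shipping decoder's code) of what a naive 4:2:0 decode does: each stored chroma sample is simply copied into a 2x2 block of output pixels.

```python
# Toy illustration: naive pixel-replication decode of a 4:2:0 chroma plane.
import numpy as np

# A fake half-resolution chroma plane as stored in 4:2:0 (4x4 samples).
chroma = np.arange(16, dtype=np.float64).reshape(4, 4)

# Nearest-neighbor up-conversion to full resolution (8x8):
# every stored sample is duplicated into a 2x2 block.
nearest = chroma.repeat(2, axis=0).repeat(2, axis=1)

# Each 2x2 block is constant -- this is the break-up visible at color edges.
assert nearest[0, 0] == nearest[1, 1] and nearest[0, 1] == nearest[1, 0]
```

Within flat color areas this is invisible; at a saturated edge the constant 2x2 blocks show up as the stair-stepped chroma boundaries in the zoomed crops.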

The CineForm file has no such 2x2 chroma break-up and produces more natural-looking continuous tones, which are more suitable for extreme post processing via sharpening and color correction. Try it for yourself with the source data above.

All of this is independent of the amount of compression applied to the CineForm file. So while there are some who insist native is the ideal, and that using an intermediate is a compromise, I think these images help demonstrate that with the right up-conversion, it is a compromise not to. :)

-------------------

Update: I realized after posting that a face is actually more forgiving of these chroma artifacts, so I shot a scene with sharper color transitions.

Even without any zoom, the resolution loss at the color-change boundaries is quite apparent. If you are not seeing it, look at the top red edge and the curve in the blue, and pump up the saturation. As saturation increases, the perceived resolution drops, approaching quarter-resolution HD.

While these are considered common 4:2:0 artifacts, the CineForm image is from the same 4:2:0 source. All images benefit from this style of up-conversion filtering -- I just wish my HDTV would do it, as I feel I'm watching 960x540 for the many deeply saturated TV shows.

------

Update 2: Some may perceive the chroma jaggies as sharpness, when in fact all outputs are equally sharp in luma. The jaggies in the chroma make artificial edges that the eye can see as detail, even though it is false detail. This false detail will make any downstream sharpening more difficult. Just as with in-camera setup, you do not want artificial sharpening: you can add sharpening later, but it is much harder to remove.

20 comments:

Anonymous said...
Thanks for the highly interesting text. Would it be a proper interpretation that CF interpolates the incomplete data in a different way to come up with a more pleasant visual perception?

This is a highly hypothetical question, but there's some technical curiosity behind it: what happens if one takes a frame of such native DSLR video, zooms it to, say, 500%, and then shoots this from the display? What will the CF codec do to such an image, which should contain some jaggies? I guess the CF image will still appear closer to what it should be than the native DSLR video.

You may consider it a matter of pre-processing the image for a more pleasant look, but our goal isn't a particular look; I feel the images CineForm produces are more technically correct. The NLEs shown don't seem to do any processing on the chroma, and I think that is the wrong choice. The luma information is 1920x1080 (on most DSLRs) and the chroma only 960x540, so there are choices in how to scale the chroma up to match the luma. Currently the NLEs seem to line-double the chroma, which is the lowest-quality method.

There are many more scaling choices than that, although as the eye is far more sensitive to luma, you don't need much compute overhead to do better than line doubling.
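Even a simple linear interpolation beats line doubling. Here is a one-dimensional sketch of the two approaches on a hard chroma edge (my own toy code, not the CineForm filter; sample positions and edge handling are simplified):

```python
# Sketch: line-doubling vs. simple bilinear up-conversion of a
# half-resolution chroma row (1-D for clarity).
import numpy as np

def line_double(c):
    # Replicate each sample: the cheap method the NLEs appear to use.
    return np.repeat(c, 2)

def bilinear(c):
    # Interpolate midpoints between neighboring chroma samples.
    c = np.asarray(c, dtype=np.float64)
    out = np.empty(c.size * 2)
    out[0::2] = c                       # co-sited samples pass through
    out[1:-1:2] = (c[:-1] + c[1:]) / 2  # midpoints averaged
    out[-1] = c[-1]                     # replicate at the edge
    return out

edge = [16, 16, 240, 240]  # a hard chroma edge (e.g. a red/blue boundary)
line_double(edge)  # 16,16,16,16,240,240,240,240 -- hard staircase
bilinear(edge)     # 16,16,16,128,240,240,240,240 -- edge is ramped
```

The interpolated version trades the abrupt 2-pixel staircase for a smooth transition, which is what reads as continuous tone in the comparison images; a gentle filter like this also avoids the sharpening side effect of cubic kernels discussed below.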

Currently Canon DSLRs are 601 (yes, that surprised us too). The 5D didn't flag this in the bit stream, but the 7D did; both are 601.
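The 601 vs. 709 difference is just a different set of luma coefficients in the YCbCr matrix, so decoding 601-encoded chroma with a 709 matrix produces a small hue/saturation shift like the one seen in Vegas. A minimal sketch (coefficients are the standard Kr/Kb values from the two specs; the sample values are arbitrary, and attributing the Vegas shift to this is my reading of it):

```python
# Decode the same YCbCr sample with BT.601 and BT.709 coefficients.
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    # Full-range conversion for simplicity; cb/cr are centered on 0.
    kg = 1.0 - kr - kb
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

y, cb, cr = 0.5, -0.2, 0.3  # an arbitrary saturated sample

rgb_601 = ycbcr_to_rgb(y, cb, cr, kr=0.299, kb=0.114)    # correct for these Canons
rgb_709 = ycbcr_to_rgb(y, cb, cr, kr=0.2126, kb=0.0722)  # the wrong matrix
# The two results differ in all three channels -- a visible color shift.
```

The error is subtle on desaturated material, which is why it often goes unnoticed until you compare decodes side by side.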

CineForm files support full-swing YUV, although that is not how we chose to handle the full-range signal from Canon. Downstream tools generally don't handle full-range YUV well, so we take the 0-255 input, bump it to 10-bit 0-1023, then range-correct to 64-940 before compressing the result. So we make standard-range YUV without clipping, and without loss of codewords, since we bumped to 10-bit first. This greatly simplifies YUV playback on HDSDI/HDMI devices.
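The arithmetic of that range handling can be sketched as follows (my own restatement of the two steps above, not CineForm source code):

```python
# Full-swing 8-bit -> full-swing 10-bit -> studio-range 10-bit.
# Promoting to 10 bits first means the 0-255 to 64-940 squeeze never
# merges two input codewords, so nothing is clipped or lost.
def full8_to_studio10(v):
    # Step 1: promote 0-255 to 10-bit 0-1023 (full swing).
    v10 = v * 1023 / 255
    # Step 2: compress full swing into studio range 64-940.
    return 64 + v10 * (940 - 64) / 1023

full8_to_studio10(0)    # -> 64.0  (full-range black -> studio black)
full8_to_studio10(255)  # -> 940.0 (full-range white -> studio white)
```

Done at 8 bits, the same squeeze would map roughly 255 input codes onto 877 output codes with rounding collisions; at 10 bits each 8-bit input lands on its own output codeword.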

You can do some neat things with AVISynth, but I rarely have time for the experimentation involved. I can see on your blog that the extra cubic filter doesn't buy you much. As we aren't striving for a particular look, we just need a chroma filter that is most suitable for post processing. Cubic filtering on chroma can sharpen, which can reduce downstream flexibility.

David, thank you for your comments. The HDV source on the blog is 709 (well, pretty sure) :-) but I will test the 550D DSLR again with some newly created 601-based LUTs, with and without interpolation, but certainly without bicubic. :-)

Cheers.

Out of interest, EditShare Lightworks is coming out at the end of the month, open source. Do you think CineForm products may become available for it?

David, without wishing to overstay my welcome, could I ask, with regard to full-range levels: theoretically / technically, when considering grading and post in 8-bit, and setting aside how certain downstream tools handle it, does it make sense to work at full-range levels, unsquashed to 16-235, for as long as possible (obviously at high precision), giving plenty of room for adjustments, and then squash to 16-235 at encoding for delivery, dithering as necessary at that point to reduce any banding that may have crept in during the squashing, for players / home cinema / whatever that will expect and require restricted levels?

CineForm does that for you, by maintaining high precision in the codec and by having a color development engine that lets users enhance the image before it is output (try FirstLight to learn more about this). When the final image is output to an 8-bit display or application, the resulting YUV or RGB is the best it can be.

The F3 outputs 4:2:2 to the nanoFlash, so there is no chroma up-conversion needed, and certainly 220Mb/s is way beyond seeing any artifacts. There are still workflow advantages to using CineForm as your intermediate, primarily if you have to render elements between tools or you wish to use FirstLight for color correction. For uncompressed or effectively uncompressed sources, use Filmscan 1 or 2.

Thanks for the response! So the F3/nano merits using Filmscan1 rather than High?

And I have another question: I did a ton of googling but can't find a concrete answer. I used the CineStyle pic profile on the 7D and I have the latest NeoScene on Mac. I just bought my first Mac and moved my license but haven't done anything yet. I don't know what boxes to check: Limit YUV? 601 source? I'm not sure if the 7D is still 601 when it uses CineStyle.