There is a lot to think about in his video. The comparison of film grain vs digital was quite enlightening. I have often puzzled over the fact that many of the highest-end movies are still shot on film, usually Kodak, even though there is a big cost difference. Obviously it is very tricky to make digital shots reproduce the film look. I am guessing The Last Jedi, where he is DP, was shot on film.
I have been thinking about going to a 4K camera, but it is clearly not so simple, unlike the move from SD to 1080p, which turned out to be a no-brainer.

Thanks. I see a lot of directors and DPs push for 4K "because we can" or in the belief that they are future-proofing their production. While there is some validity to that argument, the video clearly shows that not all 4K (or higher) is equal. Many factors affect how we visually perceive the apparent resolution of the image. As editors we often deal with this 4K-acquired material clogging up our drives and storage networks, so it's good to see a DP take a studied approach to the topic.

Obviously the tests here involve high end cameras, but the overall take-away about more pixels not necessarily being the answer reminded me of the issues that come from trying to throw more pixels at the cameras at the lower end of the scale.

Back when the GH4 came out there was a lot of excitement about how amazing the pictures looked and I found one particular test promoted by nofilmschool that suggested there was a favourable comparison to be drawn between the GH4 and the RED EPIC.

So I grabbed the test footage to see what the GH4 really looked like when you break it down into its component channels.

(I used the original 4K footage at its actual size, and inside BM Fusion I comped it over a 1080 background without scaling, which effectively cropped it to the area I wanted to test. I rendered ProRes 4444 for ease of sharing, but I don't think this is masking or exaggerating anything relevant to the test.)

The point I would make is that it's far too easy to be seduced by how the combined RGB output seems to look and ignore the very serious issues in the individual channels. In this example, the green looks OK, the red is pretty atrocious, and the blue is utterly unacceptable with massive blocky artefacts.

My impression from this (and from other GH4 footage that I have analysed) is that the quest for more pixels comes at the expense of very undesirable compression.

This might not matter if you're not going to be doing much with the image, but as soon as you start colour correcting it, for example, then you are immediately going to be exposing these hidden issues. Worst of all is if you expect to be able to get a passable key from footage like this, as so many users want to do.

This is not a specific criticism of the GH4, which can produce very nice pictures under the right circumstances and when you are fully aware of its limitations. But because the market has told us all that more pixels mean a better image, many users are unaware of the trade-offs they are making, which can quickly become problematic.

One final point to stress - when making comparisons/evaluations of moving images, always make sure to check out the integrity or otherwise of the individual colour channels. It's amazing what nastiness gets hidden away. And once you've spotted the problem in the isolated channels, you will be able to go back to the combined RGB and clearly see the issues that you missed on first inspection. I would have liked to see how the cameras in Steve Yedlin's tests compared from the point of view of the individual channels, but that's a whole other story that was not directly relevant to his point, of course.
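For anyone wanting to try this kind of channel inspection themselves, here's a minimal numpy sketch of the idea (not the Fusion workflow described above; the frame here is synthetic, standing in for a decoded video frame): zero out all but one channel so its artefacts can be viewed in isolation.

```python
import numpy as np

def isolate_channel(rgb, channel):
    """Return a copy of an RGB image with all but one channel zeroed,
    so compression artefacts in that channel can be inspected on their own."""
    out = np.zeros_like(rgb)
    idx = {"r": 0, "g": 1, "b": 2}[channel]
    out[..., idx] = rgb[..., idx]
    return out

# Synthetic 4x4 test frame standing in for real footage.
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

blue_only = isolate_channel(frame, "b")
# Red and green are zeroed; blue is untouched.
assert (blue_only[..., :2] == 0).all()
assert (blue_only[..., 2] == frame[..., 2]).all()
```

Most NLEs and compositors have a dedicated channel-view shortcut that does the same thing interactively; the point is simply to look at each channel on its own before judging the footage.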

A lot of this goes back to the move to using Bayer pattern sensors, where photosites are assigned to RGB information in essentially a 2:1:1 ratio, with twice as many Green-assigned photosites as Red and Blue. Therefore, a 4K Bayer-pattern sensor only has roughly 2K worth of green "pixels" (which are the determining element for actual resolution) if you use pixels as your definition of resolution.
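The photosite arithmetic above can be sketched in a few lines, taking UHD 3840 x 2160 as a stand-in "4K" sensor (the figures are just the RGGB mosaic counts, nothing camera-specific):

```python
# Photosite budget of a UHD (3840 x 2160) Bayer sensor:
# the RGGB mosaic assigns half the sites to green, a quarter each to red and blue.
width, height = 3840, 2160
total = width * height          # 8,294,400 photosites in all
green = total // 2              # 4,147,200 green-assigned sites
red = blue = total // 4         # 2,073,600 each for red and blue

# Red and blue each get exactly the photosite count of a full 1080p frame.
assert red == 1920 * 1080
```

Which is one way of seeing why the red and blue channels of single-sensor cameras start out with so much less real information than the combined RGB image suggests.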

If we go back to the GVG Viper camera (used on such films as "Zodiac"): it used three individual CCDs, one each for R, G, and B. Each CCD was 1920 x 1080, i.e. "2K". But that's for EACH CCD. The results, if you go by the numbers, would be equivalent in today's terms to a 6K 4:4:4 camera.

The beauty of the Bayer pattern design is the ability to go large format for a shallower depth of field and less chromatic aberration, because no prism optical block is required. But to get the most out of the image, the key is in the math that turns monochrome data from RGB-assigned photosites into actual RGB color video.

Add to this light sensitivity and noise. The more photosites you cram into the same physical sensor dimensions, the smaller each photosite becomes. As a "bucket" for light, a smaller photosite means worse light sensitivity. In the case of ARRI, they've opted for fewer but larger photosites, so you end up with a camera that has nominally lower resolution (measured by pixel count) but better low-light capability and dynamic range.

[Simon Ubsdell]"And ARRI do it really beautifully ....
...RED, not so much"

I've worked a fair amount with material shot on REDs and ALEXAs, including grading a few films shot with each. I once had a DP who shot with RED, but used vintage Bausch and Lomb lenses. They looked like crap with a lot of aberration in the corners, but did have a character to them. In his words, they took the digital edge off of the RED image. I'm not sure I completely buy that, but I was happy with the results.

For all the discussion of RED and camera raw, I've always been less than enthused about how much I could really swing the image in grading. Ultimately ARRI with LogC has gotten me farther. DPs not that familiar with RED cameras sometimes think raw means they can seriously underexpose the image. I don't really find the REDs to be all that light-sensitive. Unfortunately, many like to shoot dark when they want dark, which hurts the latitude and resolution. I'm more of the "expose to the right" and then grade it in post mentality.

I found it fascinating how much the addition of film grain enhanced the perception of sharpness. Although, in spite of the questionable value of higher pixel count, my takeaway from the video was that the Alexa65 looked damned good.

[Oliver Peters]"I found it fascinating how much the addition of film grain enhanced the perception of sharpness."

I'd have liked to know more about his approach to adding grain. This is rarely done correctly to emulate what actually happens with film. To do it right you need a separate grain pattern for each channel, a larger grain size for the blue channel, and so on. It's by no means as straightforward as just tweaking the controls on your compositor's Grain filter. There is often a really nasty "stuck-on" look to artificial grain that comes from its not being composited correctly, and popular grain overlay packages are especially guilty of this.
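As a rough illustration of the per-channel approach described above, one could generate an independent grain field for each channel, with a coarser field for blue. To be clear, the sizes and strengths below are made-up illustrative numbers, not measured film characteristics, and real grain emulation also involves response curves and proper compositing, not just addition:

```python
import numpy as np

rng = np.random.default_rng(7)

def grain_field(h, w, size, strength, rng):
    """Independent grain for one channel: noise generated at a coarser
    resolution, then block-upsampled, so a larger `size` gives bigger clumps."""
    coarse = rng.normal(0.0, strength, size=(h // size, w // size))
    return np.repeat(np.repeat(coarse, size, axis=0), size, axis=1)

h, w = 128, 128
image = np.full((h, w, 3), 0.5)  # flat mid-grey stand-in frame

# Separate, uncorrelated grain per channel; blue gets the largest,
# strongest grain, loosely mimicking film's coarser blue-sensitive layer.
for ch, (size, strength) in enumerate([(1, 0.03), (1, 0.04), (2, 0.06)]):
    noise = grain_field(h, w, size, strength, rng)
    image[..., ch] = np.clip(image[..., ch] + noise, 0.0, 1.0)
```

Even this crude version avoids the tell-tale "stuck-on" look of a single monochrome grain layer applied identically to all three channels.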

I suspect the perception of sharpness is partly due to the grain being more noticeable in the midtones, which gives the effect of a slight gamma reduction and makes the image look a bit punchier. But I liked his explanation too.

[Oliver Peters]"my takeaway from the video was that the Alexa65 looked damned good."

Hard to disagree with that. Even though this wasn't meant as a "shoot-out", there did seem to be a clear overall winner.

I had the opportunity to see Dunkirk in 70mm at AMC NorthPark Center in Dallas. It was very much a treat to see a movie shot and finished on film. The fidelity was just amazing. The detail and richness of the colors were just awesome. I stayed for the end credits only to see there was no mention of any NLE the film was edited on. My guess is the movie was cut the old-fashioned way. The entire experience was pleasing to my eyes and looked much better than any 4K image. More pixels do not mean a better image. I can only imagine what strain anything above 4K would put on even a high-end, fully decked-out computer, regardless of what OS is used.

Nice encapsulation of the workflow. I've interviewed a number of feature film editors whose directors considered a complete film post workflow. They generally all concluded that not enough of the pieces exist any longer to make a complete photochemical pipeline. Now there you have a true dying art!

An important piece of the puzzle is which process offers the best workflow in post, such as grading. I take the point that all the cameras and lenses in this demo show it is possible to get a matched look, but which approach offered the greatest flexibility in post to achieve that look?

The point about it not being just down to pixels is very relevant. I recently graded a doco shot in 1080 on an older camera. By adding film grain emulation using the FilmConvert plugin, we made the image seem sharper and more 'cinematic', and it hid some of the typical giveaways of a video image, like highlight clipping. However, the overall process was very time-consuming, and there were shots that just didn't respond to highlight recovery. My take is that lenses, plus getting focus and exposure right, make a big difference. In docos, a camera with wider dynamic range and the ability to recover highlights and hold clean blacks is more important than raw pixel counts.