Panasonic are hardly going to say that there is no advantage and a simple image of a grey scale tells us nothing about how the codec actually performs.

It's not as simple as 10 bit v 8 bit. Bit rate and codec efficiency are just as significant.

Alister's totally correct, and you can get an idea of the theory very easily in Photoshop by creating a simple gradient, black to white across the screen.

To show how bit depth can cause banding, use the "levels" control with "output" set to about 32. You get a nearly all-black scale ranging in value from 0 to 32. Then expand it back with the levels control set to give an output level of 255. Lo and behold, back comes the greyscale from black to white, but with significant banding - the effect of insufficient bit depth in the intermediate step.
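The same levels experiment can be sketched in a few lines of plain Python (a hypothetical stand-in for the Photoshop levels control, using a 256-step ramp):

```python
# Re-create the crush-then-expand experiment: squeeze a 0-255 gradient
# down to the 0-32 range, then stretch it back out to 0-255.
gradient = list(range(256))                        # black-to-white ramp

crushed = [round(v * 32 / 255) for v in gradient]  # "output" set to 32
restored = [round(v * 255 / 32) for v in crushed]  # expanded back to 255

# The restored ramp contains only 33 distinct values instead of 256 --
# the missing in-between values are the visible bands.
print(len(set(gradient)), len(set(restored)))      # 256 33
```

The ramp looks black-to-white again, but most of the intermediate levels are gone for good, which is exactly the banding you see on screen.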

But there's another way to get the same effect. Save the image as a JPEG with maximum compression. Lo and behold - severe vertical banding, just as before. A nice demonstration of how the same problem can be caused by lack of bitdepth or too heavy compression.

It's certainly true that 10 bit will get better results than 8 bit *IF ALL ELSE REMAINS EQUAL*, but if the price of 10 bit is significantly higher compression than an equivalent 8 bit system, you're taking one step forwards, one step backwards.

Given that you can fake shallow DoF in post to some degree, I've yet to be convinced by the DoF adapter route.

Alister, do you really think DoF can be realistically done in post? I haven't done it myself and haven't seen many examples, but the ones I saw looked, well, fake. Do you have any examples, or can you point to any that look realistic?
Thanks.

Just wanted to chime in and say that recording a scene in 8bit won't cause any banding. Banding only occurs when converting between colorspaces (i.e. viewing 10bit on an 8bit display), and only when the software involved can't handle the conversion properly.

8bit is more than sufficient for video capture. 10bit or higher is used more for heavy grading - when you aren't getting the look you want out of the camera, that is. Like another poster said, an increase to 10bit without an increase in bitrate will actually produce a far inferior image. So the equivalent of an 8bit 100mbps would be a 400mbps 10bit.

I've had my eye on the nanoflash for a long time and have seen it demonstrated in person. Quite an impressive little device and I intend to pick one up soon.

I've been using one today to shoot thunderstorms at sunset. You can really see the colorspace difference in highly saturated images such as the ones I shot today. Really impressed by my NanoFlash, worth every penny.

I'd be looking to do day-for-night in post, as well as add or change lighting, and basically be able to paint the picture the way I want it. Would converting 8bit captured footage to 10bit for colour correction help?

Most editing and almost all finishing software will work in a wider colorspace when performing colour correction. For example, Sony Vegas can do all processing in 32bit without having to re-render or convert anything. The original file does not need to be 10bit in order to colour it in 10bit. And unless you have a top of the line camera with amazing glass and a very colourful scene lit to perfection, there probably isn't any more than 8bits of data available in the scene. So what's the point of carrying that 8bits inside a bulkier 10bit or 16bit box?

The long answer is that if you have a vfx pipeline or something to that effect, you want an intermediate codec that has plenty of breathing room. That's when you use a 4:4:4 10bit codec, when you're ILM and the scene is going sequentially through eight artists. If the box is too small, it won't be able to carry all the extra information added by the animation and compositing. This is not a common workflow, though.

The average human eye can distinguish somewhere between 3-10 million colors. 8bit is 16 million, 10bit is over a billion.

Hi Jad. Okay, so you're working in your 10 bit environment; now the crucial bit: you render out to 10 bit. This should have the effect of at least preserving everything you had in 8 bit and giving you more headroom. That's my point, and my question was: would this workflow help?

10 bit has the potential to offer 4 times more colour info per channel, but as I'm now finding out, that doesn't necessarily mean a lot at low bitrates. Alister has said colour correction with 8 bit 100mbps is working well for him.

What I can see is that to get an advantage from 10 bit I'd need 400mbps, and the AJA Ki Pro does 220mbps, so maybe there isn't much difference between the two. I don't know; it's all a grey area to me, because unless you have specialised equipment to test for yourself, different codecs etc. will give different results. I can't and won't know myself unless I become an engineer and buy the equipment; all I can do is ask advice and hope someone without a biased opinion can help. So far, though, this discussion has been educational and interesting. I'm interested now in Alister's results and how he thinks they compare to 10 bit.

"Like another poster said, an increase to 10bit without an increase in bitrate will actually produce a far inferior image. So the equivalent of an 8bit 100mbps would be a 400mbps 10bit."

The change from 8 bit to 10 bit doesn't require a quadrupling of the bitrate, but rather an increase of about 25%. You have to code 8 bits anyway, and 10 bits adds 2 more, hence the 25%. It's the same principle as writing 100 versus 10: you don't need 10x as many digits on the page.
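To put rough numbers on that 25%, here's a back-of-the-envelope sketch for uncompressed 4:2:2 video (the 1920x1080, 25 fps figures are just illustrative assumptions, not anyone's actual camera spec):

```python
# Uncompressed 4:2:2 video: per pixel there is one luma sample plus
# half of each chroma sample, i.e. 2 samples per pixel on average.
width, height, fps = 1920, 1080, 25
samples_per_pixel = 2

def mbps(bit_depth):
    """Raw data rate in megabits per second for a given bit depth."""
    bits_per_second = width * height * samples_per_pixel * bit_depth * fps
    return bits_per_second / 1e6

print(mbps(8))             # 829.44  Mbps at 8 bit
print(mbps(10))            # 1036.8  Mbps at 10 bit
print(mbps(10) / mbps(8))  # 1.25 -- a 25% increase, not 4x
```

The ratio is 10/8 regardless of the resolution or frame rate you plug in, which is the whole point.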

My belief is that whereas with some coding improvements (50p v 25p, higher colour sampling etc.) you don't need as much extra bandwidth as you might first think, in the case of 10 bit you do. The reason is that with the other examples it's possible to take advantage of correlations between the new data and the old; here, the new least significant bits are effectively random compared to the 8 bit data.

"One step forward, one step backwards" probably sums it up better. The change to 10 bit (without additional bitrate) will improve things in one way, make them worse in another. "Far inferior image" may be overstating the case.

Although Vegas is only 8 bit, so I'd have to use After Effects and create a proxy in Vegas?

Vegas now has a 32bit float pipeline. I've only tested this with the 10bit YUV codecs but certainly for them the whole 10bits goes into and out of the pipeline as it should. The downside is not having a 16bit integer mode like Ppro and AE does. All those 32bit float calcs do use up a lot of RAM and CPU.
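As a sketch of why a wide (here float) pipeline matters even with 8 bit sources, assume a simple darken-then-brighten round trip on a 256-step ramp and compare an 8 bit integer intermediate with a float one (this is an illustration of the principle, not Vegas's actual internals):

```python
# Darken an 8-bit ramp by 4x, then brighten it back by 4x, with two
# different intermediates: one quantised to 8-bit integers, one float.
ramp = list(range(256))

# Integer pipeline: the intermediate is forced back to whole numbers,
# so the bottom two bits of every value are thrown away.
int_result = [(v // 4) * 4 for v in ramp]

# Float pipeline: the intermediate keeps its fractional precision,
# so the round trip is lossless.
float_result = [round((v / 4) * 4) for v in ramp]

print(len(set(int_result)))    # 64 distinct levels left -- banding
print(len(set(float_result)))  # 256 -- the full ramp survives
```

Same source, same operations; the only difference is how much precision the pipeline carries between steps.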

It's 8 or 10 bits per channel, and there are three channels: red, green, and blue. 4x the number of values per channel means 64x the number of possible colours overall. But of course it depends on the codec, etc.

The analogy with decimal digits is a bit misleading because we are talking about binary. Adding two more bits takes the number of levels per channel from 2^8 to 2^10, which is a factor of four. But that's four times as many values per channel, not four times as many bits: uncompressed, a 10 bit file is only 25% bigger than an 8 bit one (or up to 2x if each sample gets padded out to a 16 bit word). Bottom line, the codec has far more distinct levels to preserve, even though the raw samples only grow by two bits each.
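To keep the three different multipliers in this thread straight (values per channel, total colours, raw file size), a quick sanity check:

```python
# Three different "how much bigger" numbers for 8 bit vs 10 bit RGB.
values_8 = 2 ** 8          # 256 levels per channel
values_10 = 2 ** 10        # 1024 levels per channel
print(values_10 / values_8)   # 4.0  -- 4x the values PER CHANNEL

colors_8 = values_8 ** 3      # ~16.7 million RGB colours
colors_10 = values_10 ** 3    # ~1.07 billion RGB colours
print(colors_10 / colors_8)   # 64.0 -- 64x the distinct COLOURS

print(10 / 8)                 # 1.25 -- uncompressed FILE SIZE grows 25%
```

All three numbers are correct at once; the arguments in this thread only clash when one of them gets quoted as if it were another.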

Regarding the final delivery, if they accept 10bit then you definitely want to process and deliver in 10bit for the most predictable results. I am operating under the assumption that most delivery formats are 8bit, which may be an incorrect assumption.

Upgrading to 10bit is generally a diminishing return these days, but of course one day in the future it will be completely mainstream.

So if the AJA Ki Pro records 220mbps in 10 bit, for which 4 times the information is supposedly needed, how would that compare to the Nanodrive recording at 100mbps in 8 bit? I assume the quality would be like the Nanodrive at 50mbps, but at 10 bit.

Would I be right in thinking that the Ki Pro will be much better in terms of colour correction, but the Nanodrive slightly better at overall definition?