Here's a real codec busting scene, recorded through the GH2's HDMI port and then processed in Avisynth to remove the pulldown. I also applied GrgurMG's chroma fixing code. I then encoded it at high bitrate h264, taking care to preserve as much detail as possible - it's very close to the original.

http://www.sendspace.com/file/xteivu

If you have a Mac, let me know if this h264 file plays for you. I would like to know for future reference.

And for comparison, here's the in-camera AVCHD version:

http://www.sendspace.com/file/chirs9

This scene was shot at iso 3200, so it's a little noisy, but I think you'll get the idea.
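For readers new to the process, the cadence arithmetic behind pulldown removal can be sketched in a few lines of Python. This is emphatically not the Avisynth logic used above (real 2:3 pulldown interleaves top and bottom fields from adjacent frames, producing "dirty" frames that field matching has to resolve); it only illustrates how 24 frames become 60 fields and back:

```python
def apply_pulldown(frames):
    """Expand 24p frames into a 60i-rate field stream with a 2:3 cadence.

    Simplification: real pulldown interleaves top/bottom fields from
    adjacent frames; here each frame just contributes whole duplicates.
    """
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))  # 2,3,2,3,...
    return fields

def remove_pulldown(fields):
    """Collapse consecutive duplicates to recover the unique 24p frames."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

src = list("ABCD")                     # four 24p source frames
fields = apply_pulldown(src)
assert len(fields) == 10               # 4 frames -> 10 fields (60i rate)
assert remove_pulldown(fields) == src  # removing the cadence restores them
```

Four frames becoming ten fields is exactly the 24-to-60 ratio; the hard part in practice is the field interleaving, which is what the Avisynth script handles.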

ghosttv

02-22-2011, 01:38 PM

Thank you so much, guys. If you need a PayPal donation, I'll give one. Is your system for HDMI capture complete? What still needs fixing? I think you guys must be geniuses.

roei z

02-22-2011, 03:03 PM

How do I open it on Windows? It doesn't seem to recognize it...

Ralph B

02-22-2011, 03:17 PM

How do I open it on Windows? It doesn't seem to recognize it...

Try changing the file extension from h264 to MTS and see if that works.
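From a command line, the same trick is just a rename (the file name here is hypothetical; on a Windows command prompt the command is `ren` rather than `mv`):

```shell
# Simulate the extension trick: a raw .h264 capture renamed to .mts
# so players that key off the extension will open it.
touch clip.h264        # stand-in for the downloaded file
mv clip.h264 clip.mts  # on Windows: ren clip.h264 clip.mts
ls clip.mts
```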

rambooc1

02-22-2011, 03:22 PM

Or MP4; that works for me.

roei z

02-22-2011, 03:38 PM

nice.. thanks for your efforts.
so what's the pixel peepers verdict for now?

Ian-T

02-22-2011, 03:39 PM

FOOTAGE FOR YOUR ENJOYMENT!

Here's a real codec busting scene, recorded through the GH2's HDMI port and then processed in Avisynth to remove the pulldown. I also applied GrgurMG's chroma fixing code. I then encoded it at high bitrate h264, taking care to preserve as much detail as possible - it's very close to the original.

http://www.sendspace.com/file/xteivu

If you have a Mac, let me know if this h264 file plays for you. I would like to know for future reference.

And for comparison, here's the in-camera AVCHD version:

http://www.sendspace.com/file/chirs9

This scene was shot at iso 3200, so it's a little noisy, but I think you'll get the idea.

Wow... terrible banding and noise in there. I know... it's a codec killer. Thanks.

How about some people/good light/movement shots? That would be cool. (if you get a chance that is....if not thanks anyways) :)

Ralph B

02-22-2011, 03:56 PM

Thank you so much, guys. If you need a PayPal donation, I'll give one.

Thanks, but this is all about giving back.

plasmasmp

02-22-2011, 10:01 PM

Fantastic detail in that capture. Thanks for the upload.

GrgurMG

02-23-2011, 12:24 PM

FOOTAGE FOR YOUR ENJOYMENT!

Here's a real codec busting scene, recorded through the GH2's HDMI port and then processed in Avisynth to remove the pulldown. I also applied GrgurMG's chroma fixing code. I then encoded it at high bitrate h264, taking care to preserve as much detail as possible - it's very close to the original.

http://www.sendspace.com/file/xteivu

If you have a Mac, let me know if this h264 file plays for you. I would like to know for future reference.

And for comparison, here's the in-camera AVCHD version:

http://www.sendspace.com/file/chirs9

This scene was shot at iso 3200, so it's a little noisy, but I think you'll get the idea.

Good stuff, Ralph. I had to do some file renaming for compatibility with some things, but the footage is great. I used .mts at first, but when importing into After Effects to compare the two clips, it seemed to prefer that I make it .AVI.

You said the mpeg4 is pretty close to the original, but do you think you could throw us an uncompressed still or two? PNG, TIFF, whatever.

g.l

02-23-2011, 01:07 PM

Thanks Ralph. Without studying stills, it does look a little nicer, more 'present'. The question is, is it worth an expensive capture setup, and the huge increase in storage space?

Of course the answer depends on your priorities, but for me I'll say 'no' for now. Cost is a big issue, as is weight (I need to be portable mostly), power, and especially long-term storage.

If 4:2:2 could be recovered, then I would definitely use it for indoor shooting (especially greenscreen of course).

Danielvilliers

02-24-2011, 03:35 AM

For me, it would be great if you could get us some 160 ISO sunlit (high contrast) frame grabs (and an h.264 version, to see them in motion). The reason is that, for me, what will make it worthwhile to record the HDMI is the detail, dynamic range, file robustness for colour grading, and motion. It is a bit difficult to judge from a 3200 ISO image whether any of the above will be worth it, because the files are already at the limit or falling apart.

Human beings would also be very interesting, because most who go all the way to record HDMI will be doing it for their narratives. I don't know if you can move with your set-up to do so... have you seen or investigated the brite-View or ASUS WiCast wireless HDMI to see if they could be viable solutions?

Ralph B

02-24-2011, 09:40 AM

For me, it would be great if you could get us some 160 ISO sunlit (high contrast) frame grabs (and an h.264 version, to see them in motion). The reason is that, for me, what will make it worthwhile to record the HDMI is the detail, dynamic range, file robustness for colour grading, and motion. It is a bit difficult to judge from a 3200 ISO image whether any of the above will be worth it, because the files are already at the limit or falling apart.

Human beings would also be very interesting, because most who go all the way to record HDMI will be doing it for their narratives. I don't know if you can move with your set-up to do so... have you seen or investigated the brite-View or ASUS WiCast wireless HDMI to see if they could be viable solutions?

The HDMI clips were recompressed at high bitrate h264 to preserve their quality. But, just for safety, I've included uncompressed PNG stills.

Under these ideal conditions, with the camera stationary, the AVCHD codec acquits itself splendidly (at least at 24P). There's very little difference between it and the HDMI capture.

http://www.sendspace.com/file/9dlmd8

ghosttv

02-24-2011, 08:04 PM

So under good conditions it's not really worth it to use HDMI? Mr Ralph, are you sure the effort is worth it? I'm confused. The frame grab from before was so beautiful! I could smoke it! Omg.

Ralph B

02-24-2011, 08:32 PM

It is a bit difficult to judge on a 3200 ISO image if any of the above will be worth because the files are already at the limit or falling apart.

They're not really falling apart. I debated whether I should clean it up or post it raw. I chose raw because we're in a testing phase, and I figured people want to see what the camera is putting out.

But if you apply some Neat Video and color correction, you get this:

http://www.sendspace.com/file/wccs6k

Ralph B

02-24-2011, 08:43 PM

So under good conditions it's not really worth it to use HDMI? Mr Ralph, are you sure the effort is worth it? I'm confused. The frame grab from before was so beautiful! I could smoke it! Omg.

Don't be confused. The AVCHD codec performs differently under different conditions... sometimes good, sometimes bad. However, the HDMI is always good!

GrgurMG

02-24-2011, 08:55 PM

So under good conditions it's not really worth it to use HDMI? Mr Ralph, are you sure the effort is worth it? I'm confused. The frame grab from before was so beautiful! I could smoke it! Omg.

It takes a very close look, but the HDMI is still cleaner (less artifacting, but more/finer noise for you to possibly remove in post), and you can see the difference in detail in the shadows when you bring up their brightness. They're sharper (more detailed) with less blockiness. It is hard to notice without zooming in and amping up the shadows, though... so it depends where your priorities lie. On the dog-walking still you can see a pretty decent difference in artifacting when taking a close look at the dogs.

GrgurMG

02-24-2011, 09:34 PM

I think there are definitely situations where you'll see a much larger difference. I'm thinking mid-level ISO, a lot of fine detail filling the screen, slow (so as not to blur too much) camera motion... maybe lit well enough to see the detail but still a little dim.

Ralph B

02-24-2011, 09:52 PM

I think there are definitely situations where you'll see a much larger difference. I'm thinking mid-level ISO, a lot of fine detail filling the screen, slow (so as not to blur too much) camera motion... maybe lit well enough to see the detail but still a little dim.

Absolutely! The man on the roof I posted earlier is a good example of the codec not being able to handle fine detail in motion. Today's test was a best case scenario.

Danielvilliers

02-25-2011, 02:50 PM

Looking at the frame grabs is like looking at photos!!! With the GH2, video has reached the photo level (though not at the same resolution).

Ralph B

02-26-2011, 01:10 PM

SCRIPT UPDATE

I have updated the script in the first post, to include GrgurMG's chroma fix (with some new pictures to illustrate it). I have also added some minor tweaks to a few parameters. These changes are based on continuing work of using the script on a wide variety of footage. One of the nice things about working with Avisynth is that none of this is set in stone. You're always free to experiment and change anything you want.

evero

02-28-2011, 01:43 PM

Thanks for your efforts on this; I look forward to testing the script once I learn the Avisynth procedure.

One question: it might be difficult for NTSC people to answer, but I wonder if the procedure is the same on PAL GH2 models? Or is the HDMI output set to 50i, which would require different Avisynth parameters to retrieve a progressive capture?

Thanks!

Ralph B

02-28-2011, 02:51 PM

Thanks for your efforts on this; I look forward to testing the script once I learn the Avisynth procedure.

One question: it might be difficult for NTSC people to answer, but I wonder if the procedure is the same on PAL GH2 models? Or is the HDMI output set to 50i, which would require different Avisynth parameters to retrieve a progressive capture?

Thanks!

Very good question, and one I would like to answer definitively. I have only one piece of HDMI PAL footage, and apparently the capture device wasn't working properly, because there are many glitches in it. Nevertheless, I was able to remove the pulldown, and it looks quite good (or at least as good as it can look, given the poor source material). So, yes, I believe the script, with some adjustments, will work for PAL, but I would like to test it on some high-quality footage before I say so for sure.

timetraveller

02-28-2011, 08:28 PM

Very good question, and one I would like to answer definitively. I have only one piece of HDMI PAL footage, and apparently the capture device wasn't working properly, because there are many glitches in it. Nevertheless, I was able to remove the pulldown, and it looks quite good (or at least as good as it can look, given the poor source material). So, yes, I believe the script, with some adjustments, will work for PAL, but I would like to test it on some high-quality footage before I say so for sure.

Hi,

I'm from Spain, and I can provide footage from my GH2 (when I get my Atomos) to Ralph or any user interested in hacking the HDMI 50i signal. Please contact me. :)

Thanks guys.

P.S. If there's a way to record a few seconds of footage from the HDMI output without the Atomos, please tell me how, so I don't have to wait for it to arrive.

roguebot

02-28-2011, 09:45 PM

Hi everyone! New here.

I'm wondering if anyone here has tried this script with footage captured from a ki pro mini?

It's very likely that I'm going to have access and be able to test my GH2 with a ki pro mini in the near future. I guess my two big concerns are setting the capture rate to 30fps and working with Prores 4:2:2 inside avisynth. Otherwise I figure it should be fine, but I'm wondering if there are any reasons it may not, and if anyone else has tried it and has advice to share.

Also I'd be glad to share the results of the test with you fine people.

And thanks Ralph B. This is quite awesome.

Ralph B

03-01-2011, 12:39 AM

It's very likely that I'm going to have access and be able to test my GH2 with a ki pro mini in the near future. I guess my two big concerns are setting the capture rate to 30fps and working with Prores 4:2:2 inside avisynth. Otherwise I figure it should be fine, but I'm wondering if there are any reasons it may not, and if anyone else has tried it and has advice to share.

Opening up a ProRes file in Avisynth is easy. You need to install QuickTime for Windows and the QuickTime ProRes Decoder, and have qtsource.dll in your Avisynth plug-in folder. The first two you get from Apple, and you can get qtsource here:
http://www.tateu.net/software/

The unknown question is whether the Ki Pro will work with the GH2. Looking over its specs, I don't see any mention of 30.0 fps, so you'll just have to plug the camera in and see whether you get a picture. By all means, let us know what you find!
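Putting those pieces together, a minimal loader script might look like the sketch below. QTInput is the function qtsource provides; the file name and plugin path are placeholders, and the color value is my recollection of qtsource's option for requesting YUY2, so check it against the plugin's readme:

```avisynth
# Hypothetical loader for a Ki Pro ProRes capture via qtsource.
# Assumes QuickTime for Windows and the ProRes decoder are installed.
LoadPlugin("C:\Program Files\AviSynth 2.5\plugins\qtsource.dll")
QTInput("capture.mov", color=2)  # color=2 requests YUY2, preserving 4:2:2
AssumeFPS(30)                    # match the Ki Pro's 30fps capture rate
```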

Ralph B

03-01-2011, 12:49 AM

Hi,

I'm from Spain, and I can provide footage from my GH2 (when I get my Atomos) to Ralph or any user interested in hacking the HDMI 50i signal. Please contact me. :)

Thanks guys.

P.S. If there's a way to record a few seconds of footage from the HDMI output without the Atomos, please tell me how, so I don't have to wait for it to arrive.

Yes, I'd love to see PAL footage recorded with the Atomos! Hopefully, the Atomos won't have any problem 'seeing' the PAL GH2's signal.

As far as recording without the Atomos goes, well, you've got to have some sort of HDMI capture board in your computer.

Matthew P

03-01-2011, 01:12 AM

What I don't get is that the pre-production GH2 outputted 24p via HDMI, rather than putting it in an interlaced wrapper, yet the production model not only puts a pulldown on it, but adds another layer of it on top. :( Panasonic mustn't have wanted it to compete with the AF100... that is, until Ralph came along and made an awesome script that dodged around their unfair tactics. :) Great work Ralph!

I wonder how the PAL GH2 does things. Does it output 50i when in 24p mode, instead of 60i? If so, I wonder if that would result in a more consistent wrapper pulldown, "2:2:2:2:2:2:2:2:2:2:2:3". I hope so. :)
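That guessed cadence at least checks out arithmetically; a quick sanity check of the field counting (nothing camera-specific here):

```python
# Eleven frames held for 2 fields and one held for 3, repeating:
# the "2:2:2:2:2:2:2:2:2:2:2:3" pattern guessed above.
cadence = [2] * 11 + [3]

frames_per_cycle = len(cadence)   # 12 source frames per cycle
fields_per_cycle = sum(cadence)   # 25 fields per cycle
cycles_per_second = 24 // frames_per_cycle

assert frames_per_cycle == 12
assert fields_per_cycle == 25
# 24 frames/s comes out to exactly the 50 fields/s a PAL 50i wrapper needs.
assert cycles_per_second * fields_per_cycle == 50
```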

roei z

03-01-2011, 01:18 AM

What I don't get is that the pre-production GH2 outputted 24p via HDMI, rather than putting it in an interlaced wrapper, yet the production model not only puts a pulldown on it, but adds another layer of it on top. :( Panasonic mustn't have wanted it to compete with the AF100... that is, until Ralph came along and made an awesome script that dodged around their unfair tactics. :) Great work Ralph!

I wonder how the PAL GH2 does things. Does it output 50i when in 24p mode, instead of 60i? If so, I wonder if that would result in a more consistent wrapper pulldown, "2:2:2:2:2:2:2:2:2:2:2:3". I hope so. :)

you mean once it's hacked (once upon a time in the future, that is) it would be able to output 24p again ?

Ralph B

03-01-2011, 01:18 AM

What I don't get is that the pre-production GH2 outputted 24p via HDMI, rather than putting it in an interlaced wrapper, yet the production model not only puts a pulldown on it, but adds another layer of it on top. :( Panasonic mustn't have wanted it to compete with the AF100... that is, until Ralph came along and made an awesome script that dodged around their unfair tactics. :) Great work Ralph!

I wonder how the PAL GH2 does things. Does it output 50i when in 24p mode, instead of 60i? If so, I wonder if that would result in a more consistent wrapper pulldown, "2:2:2:2:2:2:2:2:2:2:2:3". I hope so. :)

Did the pre-production GH2 really output a native 24P through the HDMI? I didn't know that. If true, it would give hope that Vitaliy could actually do something to fix it.

Matthew P

03-01-2011, 04:44 AM

Did the pre-production GH2 really output a native 24P through the HDMI? I didn't know that. If true, it would give hope that Vitaliy could actually do something to fix it.

That was when there was text on the screen, I believe. So Panasonic decided to remove the text, but include a pulldown. I guess they figured that people would try cropping the text out.

It's also interesting that he mentions 25p several times (and in later blog entries), but said that it was "the inadequate 17Mbps mode". You don't suppose the pre-production model had the option of 25pF, but it was excluded to protect the AF100?!

roei z

03-01-2011, 05:08 AM

It will be very interesting to unveil the secrets inside the GH2.

timetraveller

03-01-2011, 08:54 AM

One "silly" question: if you're shooting something that's going to end up in 2.35 format, and you record the HDMI output signal while recording instead of in preview, do you get true 24p?

I know you'd have the recording logo, but when you crop to 2.35 format, you won't see it anymore.

So my question is: is that signal (the output while the camera is recording) completely clean or not?

Ralph B

03-01-2011, 10:18 AM

Yeah, it's on Philip Bloom's site.

That was when there was text on the screen, I believe. So Panasonic decided to remove the text, but include a pulldown. I guess they figured that people would try cropping the text out.

Sorry to be a sceptic, but all Philip Bloom was describing was the text and logo that was being displayed on the screen. There was no mention of whether pulldown was being inserted or not. In fact, on that page he says he didn't even shoot with the camera, let alone record through the HDMI.

So, from what I see, the only difference with the HDMI between pre-production and production, is that Panasonic removed the logo and text. Of course, I'll be happy to be proven wrong.

Matthew P

03-01-2011, 11:20 AM

Sorry to be a sceptic, but all Philip Bloom was describing was the text and logo that was being displayed on the screen. There was no mention of whether pulldown was being inserted or not. In fact, on that page he says he didn't even shoot with the camera, let alone record through the HDMI.

So, from what I see, the only difference with the HDMI between pre-production and production, is that Panasonic removed the logo and text. Of course, I'll be happy to be proven wrong.

It's all guessing I suppose, but his monitor DOES say that the input is 24Hz when the camera starts recording (according to what he wrote in the blog post). Apparently, before record is hit, the pre-prod GH2 outputs 50i. I doubt that a monitor would say 24Hz for an interlaced stream, even if it's proper 25pF, let alone 24p in 50i with a wacky pulldown. Just my personal view of it, mind. It's always nice to think that the camera has some to-be-unlocked potential. :)

Caesar

03-05-2011, 03:16 PM

Sorry to be a sceptic, but all Philip Bloom was describing was the text and logo that was being displayed on the screen. There was no mention of whether pulldown was being inserted or not. In fact, on that page he says he didn't even shoot with the camera, let alone record through the HDMI.

So, from what I see, the only difference with the HDMI between pre-production and production, is that Panasonic removed the logo and text. Of course, I'll be happy to be proven wrong.

But now you've solved that, no?

PappasArts

03-05-2011, 04:06 PM

It's all guessing I suppose, but his monitor DOES say that the input is 24Hz when the camera starts recording (according to what he wrote in the blog post). Apparently, before record is hit, the pre-prod GH2 outputs 50i. I doubt that a monitor would say 24Hz for an interlaced stream, even if it's proper 25pF, let alone 24p in 50i with a wacky pulldown. Just my personal view of it, mind. It's always nice to think that the camera has some to-be-unlocked potential. :)

http://philipbloom.net/wp-content/uploads/2010/09/IMG_2723.jpg

Exactly! The text is generated by the Marshall monitor, reading that it has received a 24Hz stream from the GH2. The green dot behind the 1 in the overlay text is from the GH2. We all know or suspect what Panasonic did here, and it's a pretty sh*tty thing to do!

Too bad Voltaic or Cineform couldn't write some code that could untangle the current way it outputs. If we're lucky, a hack will happen!

Too bad Voltaic or Cineform couldn't write some code that could untangle the current way it outputs.

That's exactly what the avisynth script I wrote does!

PappasArts

03-06-2011, 12:52 AM

That's exactly what the avisynth script I wrote does!

Yes! That is truly awesome what you have put together..........

Pappas

Ralph B

03-10-2011, 06:45 PM

A NEW SCRIPT FOR 1080 60i FOOTAGE

It took quite a while, but here it is. I've updated the first post with a new script for 1080 60i.
I know people around here are primarily interested in progressive and tend to brush interlace aside.
But, if you have a GH2 and an HDMI capture device, I would encourage you to shoot some 1080 60i material.
It's really quite beautiful and way superior to the in-camera 720 60P.

GrgurMG

03-11-2011, 02:49 PM

A NEW SCRIPT FOR 1080 60i FOOTAGE

It took quite a while, but here it is. I've updated the first post with a new script for 1080 60i.
I know people around here are primarily interested in progressive and tend to brush interlace aside.
But, if you have a GH2 and an HDMI capture device, I would encourage you to shoot some 1080 60i material.
It's really quite beautiful and way superior to the in-camera 720 60P.

Nice work. Can't wait to see some footage from this. I think there's definitely a use for it.

Jockomo

03-13-2011, 09:48 AM

I just wanted to stop in and thank the folks who worked so hard to solve this.
You guys rock!

Ian-T

03-13-2011, 10:29 AM

+1

henryolonga

03-14-2011, 01:14 PM

+1

Ralph B

03-15-2011, 12:22 AM

BATCH PROCESSING

I have been successfully batch processing HDMI clips using a program PDR pointed out. I thought it would be helpful to have a section in the first post describing it, and giving the link. So, it's there now.

Sage

03-15-2011, 11:19 AM

BATCH PROCESSING

I have been successfully batch processing HDMI clips using a program PDR pointed out. I thought it would be helpful to have a section in the first post describing it, and giving the link. So, it's there now.

Thanks Ralph -
The GH2 is now the no-excuses camera!

Ralph B

03-15-2011, 12:45 PM

The GH2 is now the no-excuses camera!

Well, there's still the issue of getting a good field recorder. The Nanoflash works, but at $3k it's not an option for most people. I hope the Atomos Ninja works out, because it would be the perfect complement to the GH2.

David Shapton

03-15-2011, 03:12 PM

I hope the Atomos Ninja works out, because it would be the perfect complement to the GH2.

Hang in there - we're making solid progress on the remaining few firmware issues. It's taken much longer than we predicted but the great results we're getting are worth waiting for! (NB - haven't tested with the GH-2 in any way yet!).

Dave Shapton
President
Atomos EMEA

Ralph B

03-15-2011, 04:15 PM

Hang in there - we're making solid progress on the remaining few firmware issues. It's taken much longer than we predicted but the great results we're getting are worth waiting for! (NB - haven't tested with the GH-2 in any way yet!).

Dave Shapton
President
Atomos EMEA

Thank you so much for replying. It's good to know we've got your attention!
As one who has developed software (in my little way), I can well imagine the complexity of the problems you're going through.

I know it's a little early to be requesting features on a product that hasn't shipped, but I want to put this out to you for future consideration.
Please implement a peaking function on the Ninja's LCD screen. A problem all DSLRs have is that it's so difficult to focus them while shooting, and peaking solves this. I was thrilled to learn that the Ninja could double as a monitor, and peaking would make it a grand slam home run!

diegocervo

03-16-2011, 12:04 AM

David,
thanks for the update.
I know that your team is working hard on this firmware, but can you please share the release date?
I ordered (and paid....) the Ninja a month ago, and I'm sure there are other people out there as impatient as me to get our hands on this recorder.
Best,
Diego

Matthew P

03-25-2011, 03:33 AM

Has the HDMI output from a PAL GH2 been tested/captured yet? If the PAL GH2 outputs 24p in 50i, it might be a more consistent pulldown (unless Panasonic crippled it in some other way).

Ralph B

03-25-2011, 10:40 AM

Has the HDMI output from a PAL GH2 been tested/captured yet? If the PAL GH2 outputs 24p in 50i, it might be a more consistent pulldown (unless Panasonic crippled it in some other way).

Several months ago I applied a modified script to a piece of 24P/50i PAL footage, and I successfully extracted the picture. However, the source footage had many capture glitches in it (unrelated to the GH2) and it was impossible to fix those. If someone were to post a clean 24P PAL HDMI capture, I would be happy to "put it through the paces".

David Shapton

03-25-2011, 01:03 PM

David,
thanks for the update.
I know that your team is working hard on this firmware, but can you please share the release date?
I ordered (and paid....) the Ninja a month ago, and I'm sure there are other people out there as impatient as me to get our hands on this recorder.
Best,
Diego

Just in case you hadn't heard elsewhere, the Ninja is now shipping. Thanks for your patience!

Dave
Atomos

Chaos123x

03-26-2011, 05:34 AM

I wonder if the Ninja ProRes recorder will get a firmware update with a "GH2 mode" that applies all these processes, so it records the footage clean without having to post-process.

I hope they are at least working on it. I'd much rather have a GH2 and a Ninja than an AF100 and a Ki Pro.

roei z

03-26-2011, 07:02 AM

I wonder if the Ninja ProRes recorder will get a firmware update with a "GH2 mode" that applies all these processes, so it records the footage clean without having to post-process.

I hope they are at least working on it. I'd much rather have a GH2 and a Ninja than an AF100 and a Ki Pro.

nice idea :)

Isaac_Brody

03-26-2011, 04:31 PM

Can someone post footage of HDMI capture? Well shot and under good lighting conditions?

And if you have any greenscreen samples, I'd appreciate being able to test footage that has been run through Avisynth.

J Bellari

03-26-2011, 04:59 PM

Can someone post footage of HDMI capture? Well shot and under good lighting conditions?

And if you have any greenscreen samples, I'd appreciate being able to test footage that has been run through Avisynth.

+1

GrgurMG

03-27-2011, 01:25 PM

I wonder if the Ninja ProRes recorder will get a firmware update with a "GH2 mode" that applies all these processes, so it records the footage clean without having to post-process.

I'm gonna guess unlikely... I somehow don't think it has the hardware/processing capability to do the various processes on the fly. I think it'd need custom hardware for it... but I could be wrong. Granted, doing the process in Avisynth just may not be that efficient, and a more purpose-built implementation of the algorithm might require less processing power. I dunno. All I know is that in Avisynth on a quad-core, it's nowhere near realtime.

martinbeek

04-02-2011, 10:49 AM

Hello all.

My Atomos Ninja arrived today and I hooked up the GH2 right away!
The Ninja showed "i50" when the GH2 was switched on. There was no difference in behavior or display when record was hit on the GH2, or when the display mode was changed.
UPDATE: I expected the Ninja to display "i60"?! If I read the first post in this thread, that's what I'd assume.

I accidentally left the display on, but it doesn't influence the way the Ninja records the image.

The GH2 is a European model from December 2010.

I am not experienced with removing pulldown, reverse telecine, et cetera. I use FCP and Cinema Tools. I'm willing to start using Avisynth, no problem, but if I can extract the 24p from this footage on my Mac, that would be great!

The link below is a file-sharing link, and I haven't found out how many downloads are allowed. Please report any problems and I'll try to store it elsewhere, or use the alternative (slower) link.

Call me stupid, but... if I run the GH2/Ninja footage above through MPEG Streamclip, converting to ProRes 444 and setting "Frame Blending", "Better Downscaling" and "DeInterlace Video", I get an acceptable and usable file to edit in FCP. Or am I missing anything here...?!
If the pulldown-cadence wizards can have a look at this, that'd be great.

Martin.

Ralph B

04-02-2011, 06:10 PM

Hello all.

My Atomos Ninja arrived today and I hooked up the GH2 right away!
The Ninja showed "i50" when the GH2 was switched on. There was no difference in behavior or display when record was hit on the GH2, or when the display mode was changed.

My experience with the GH2 and HDMI capture suggests that you should ABSOLUTELY be recording simultaneously to the camera. When the camera is in standby it sends an altered image out the HDMI with crushed blacks and clipped highlights. This goes away when the camera is recording to an SD card. The proper procedure is to roll camera first, then start the HDMI capture. And stop the HDMI capture before stopping the camera capture.

This will prevent any additional complications to an already non-standard signal.

The Avisynth scripts in the first post were developed using an NTSC GH2. The script for 24P processing will have to be modified for PAL cameras. If you would like to post a raw 24P PAL file captured while the GH2 is in record mode, I will be happy to take a look at it.

It's also important to post the original in-camera MTS file, as well.

martinbeek

04-03-2011, 02:54 AM

Ralph.

With my camera and Ninja, NOTHING changes whatsoever in the HDMI picture or in the GH2 viewfinder when I hit the record button. Nor on any HDMI monitor. So I can't confirm any change in the blacks or the like. If this is a setup issue, please let me know.
However, I will try to shoot some footage with record on.

Martin

martinbeek

04-03-2011, 03:37 AM

Ralph and others.

I have replaced my previous GH2/Ninja file with a new version that was recorded while the GH2 was recording.
Start GH2 Record -> Start Ninja record -> Stop Ninja record -> Stop GH2 record

First of all, I'm not a pro. Anyway, just today I checked the HDMI out of my PAL GH2 for the first time. I captured two files with my Blackmagic Shuttle interface. Don't forget, I captured without pressing the rec button; it's live view.
What's interesting is this:

From 50i you can't get 25p.
From 24P you can get 25p.

If the camera setting is PAL 50i, the captured file is interlaced.
If the camera setting is 24P CINEMA, the captured file is 25p without any interlacing effects and without aliasing etc., and of course the colorspace is 4:2:2.

Both files were captured as 50i.

I'm surprised; can somebody confirm this?

Sorry, no movie at this time... I hope I can upload some little tests.

Cheers

martinbeek

04-03-2011, 07:00 AM

Hi Amadeus.

This can't be right. The camera cannot be switched from 50i to 25p anywhere; there is no such "switch", and there is no such HDMI standard. 25p is neither supported by the camera nor by the HDMI standard. The colorspace is always 4:2:0.
Also, if 25p can be extracted from the 50i signal, that proves that, here too, Panasonic adds fields to cripple the 24p.

I do have a PAL GH2 here, in cinema mode, and it always outputs 50i.

I'm still waiting to hear from the experts on this forum about the specs of the PAL footage I supplied a few posts earlier.

If you have a Mac and FCP and/or Cinema Tools, you can analyze the clip.

Martin

Hi Folks,

First of all, I'm not a pro. Anyway, just today I checked the HDMI out of my PAL GH2 for the first time. I captured two files with my Blackmagic Shuttle interface. Don't forget, I captured without pressing the rec button; it's live view.
What's interesting is this:

From 50i you can't get 25p.
From 24P you can get 25p.

If the camera setting is PAL 50i, the captured file is interlaced.
If the camera setting is 24P CINEMA, the captured file is 25p without any interlacing effects and without aliasing etc., and of course the colorspace is 4:2:2.

Both files were captured as 50i.

I'm surprised; can somebody confirm this?

Sorry, no movie at this time... I hope I can upload some little tests.

Cheers

Amadeus5D2

04-03-2011, 07:37 AM

I loaded the two files in AE.

The two files have the same properties, but when you interpret the footage and disable the upper field first/lower field first settings, you will see two different files on screen! The 24P looks progressive while the 50i looks interlaced. So there must be a difference.

Here is the media info (after capture with BM) of one of the files - btw, they have the same properties.

After two days of recording many clips from the GH2 via HDMI I must say that I'm not impressed by the overall quality. The AVCHD image looks far better in all cases; it's cleaner, sharper and totally different with respect to color reproduction. It seems that a lot of image processing is not forwarded to the HDMI port. Mind you, I'm judging this from de-interlaced 50i footage.

Am I misinterpreting the workflow here? Should I wait for a 24p PAL avisynth script and then judge the image quality, or should I be able to judge the 50i footage as-is?

Now that Atomos Ninja recorders are shipping to end users en masse throughout Europe, many of them GH2 owners, the following question arises: how easy is it to adapt the AviSynth script to work with 50i PAL?

Martin Beek.

Barry_Green

04-03-2011, 07:06 PM

After spending half a day working with and being screwed over by the GH2's HDMI monitor port and recording, I agree with you, Martin. The HDMI output on the GH2 is simply not good. It's fine for focus, it's good for composition, but it's lousy for quality and the color is totally wrong. And it does vary substantially from before recording (i.e., when the camera is in "pause" mode) and then when you press the record button. After losing hours of work to the inaccuracy of the monitoring output, I've now completely given up any level of interest in exploring it any further.

In my opinion, trying to record the HDMI output of a GH2 is a recipe for heartache and giving up so many of the features that make the GH2 such a good little camera in the first place.

First, you can't trust the monitor. It doesn't output an accurate image. When you're in pause mode it's a wildly different image than when you're in record mode, and it's inaccurate. The gamma is substantially stretched, the blacks are stretched down and the whites are stretched up. When you go into "record" mode it's more accurate, but it's still not accurate. The monitor doesn't show you the real colors! It shows a muted/pale image as compared to what's really being recorded in the camera. And, if you go into playback mode, you'll get the same muted/inaccurate picture representation. I recorded externally and internally on the GH2, and brought both files into the NLE, and found that the colors were substantially brighter and more saturated and more accurate to what I was trying to get, from the internal codec -- the external recording was flat and pale, and that's not the recorder's fault, it was recording what the HDMI was feeding it.

Second, you have to give up so many good features! It's pointless. Valuable features like ETC mode and 720p mode just go away when you plug something (whether a monitor or a recorder) into the HDMI port. That's awful, but what's worse is that you also have to turn off the highlight flashing mode because it stays live on the monitor output and if you have something that hits 100 IRE, it'll start flashing on the monitor which means it'll also start flashing on the external recorder! So now you have to give up ETC mode, and 720p mode, and the zebra-replacement highlight flashing... those are three of my favorite features. But they all go away if you try to use an external recorder. Bah.

Third, it screws up the chroma. The monitor output is just wrong. It's not a 4:2:2 signal, it's a screwed-up 4:2:0. The onboard codec looks better than the external, as far as chroma resolution (and chroma quality, for that matter).

Fourth, it has a non-video-standard frame rate (which was of course the point of this whole thread in the first place). At 60.00i, you can't just use the footage, now you've got to leap through hoops in post to try to strip out the two layers of pulldown (the 59.94->60.00, and then the 23.976->59.94).

Fifth, there's no audio, so you would always have to shoot double-system with a clapper. There's not even a reference track! No PluralEyes or any other simple post-sync option, you have to slate every take.

Sixth, now you're trusting your recording to the single worst connector ever designed in the history of video -- the mini-HDMI connector. HDMI is bad enough, mini-HDMI is simply horrible. I can't tell you how annoying and irritating it is to look over at the monitor and find that everything has gone green, for no discernible reason, and you have to unplug and re-plug the camera monitor cable to get at least vaguely accurate color again. You want that happening in the middle of your recording? I sure don't.

I know this will be an unpopular post, and people will think I'm taking away their candy, but -- man, just try it for a while. Recording external on a GH2 is a frustrating, inaccurate, tedious, limiting workflow. If anyone asked me if they "should" do it, my answer would be a succinct "hell no." If someone wants to do it anyway and "prove me wrong", well, go right ahead and more power to you. But after this weekend's experiments, I'm done with it... I'll use the GH2 for what it's good for and love it for that, but shackling it with handcuffs in order to end up with a seriously compromised workflow all in an effort to chase things that just aren't going to come true (it's not going to be 4:2:2, it's not going to have better or even as-good-as color, the only thing it'll do is be less compressed)... no thanks, the tradeoffs are totally not worth it.

PDR

04-03-2011, 09:10 PM

Ralph and others.

I have replaced my previous GH2/Ninja file with a new version that was recorded while the GH2 was recording.
Start GH2 Record -> Start Ninja record -> Stop Ninja record -> Stop GH2 record

It sounds like some of you are giving up due to the other issues, but if anyone still cares about the Pal GH2 HDMI:

This is 24p in a 50i stream, with an inserted duplicate field that occurs roughly every 25 fields, but not at that exact interval (it's slightly off the perfect cadence)

You can use avisynth to adaptively field match and decimate to return proper 23.976, at least with that sample. It's hard to be 100% certain, because there is a point in that clip where the only difference between frames looks like a rolling shutter artifact. Have a look at frames 82-84 in the processed sample; this corresponds to the ~3.4-3.5 sec timeframe, which looks like it matches the native clip's motion OK. I would look at the AVCHD clip as well to double check. A better test sample for examining cadence would be a deliberate slow linear pan.
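For readers who want to try this themselves, here is a minimal sketch of the kind of field-match-and-decimate chain described above. It assumes AviSynth with the QTSource and TIVTC plugins installed; the filename and the TDecimate cycle values are illustrative guesses, not the actual script from the first post:

```avisynth
# Hypothetical sketch - NOT the actual script from the first post
LoadPlugin("qtsource.dll")        # QTSource plugin providing QTInput()
LoadPlugin("TIVTC.dll")           # TFM/TDecimate field-matching tools
QTInput("ninja_50i.mov")          # ProRes 4:2:2 decoded to YUY2 (example filename)
AssumeTFF()                       # field-order assumption - verify against real footage
TFM()                             # adaptive field matching: 50i -> 25p with ~1 dup per 25 frames
TDecimate(cycle=25, cycleR=1)     # drop the duplicate frame, returning ~24p
```

Because the duplicate field doesn't land on a perfectly regular cadence, an adaptive matcher like TFM is needed rather than a fixed pulldown pattern.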

But there appears to be another serious problem, with a left green border and a faint double green/purple line on the right border. Do you see this on the Mac in QT or in FCP? It's not a Windows QT ProRes decoder issue, or an avisynth QTInput issue, because other ProRes422 clips decode fine here without those issues.

I included a sample, encoded with QT h.264, so Mac users can examine it. Apparently avisynth works with Parallels, but I don't know the details of how to get it running there.
http://www.mediafire.com/?izp1b4fhr4056hs

Martin, could you post some side-by-side AVCHD and ProRes clips? With 24p and 25i internal framerates. Some greenery or general street view. Or anything that have a lot of details, predictable motion and colors.

improser

04-03-2011, 11:54 PM

Martin, could you post some side-by-side AVCHD and ProRes clips? With 24p and 25i internal framerates. Some greenery or general street view. Or anything that have a lot of details, predictable motion and colors.

Please add to that petition some large surface with gradient lighting, to explore whether the posterization is affected in the HDMI signal too! See the last posts of the banding-sky thread.

martinbeek

04-04-2011, 01:02 AM

Please add to that petition some large surface with gradient lighting, to explore whether the posterization is affected in the HDMI signal too! See the last posts of the banding-sky thread.

Hi. I'll get back to the requests a.s.a.p., but I already have to disappoint you re. the color banding... It's still there if recorded over HDMI. You can already see that by monitoring the HDMI signal - if that is what you mean by posterization. The posterization you sometimes see in dark areas due to compression artifacts is of course gone.

Panasonic has acknowledged the existence of the color banding, but doesn't say if they are working on it.

That's all for now.

I am seriously considering returning the Ninja recorder while it is still possible (5 days money back return policy by law in the Netherlands). The Ninja is great, no bad marks there, but i am afraid that i'll get stuck with a useless $1000 recorder and (for me) a pretty useless camera.
I'll have to discuss this with the dealer, maybe they want to give me some more time to experiment with the Ninja; it will benefit all parties.
But waiting for weeks/months for a hack, firmware update or PAL avisynth script is probably out of the question.

Again, i have no problem with either device - great tools in their own respect - but i'm not (yet) feeling comfortable with the combo.

Martin.

Ralph B

04-04-2011, 01:56 AM

Ralph and others.

I have replaced my previous GH2/Ninja file with a new version that was recorded while the GH2 was recording.
Start GH2 Record -> Start Ninja record -> Stop Ninja record -> Stop GH2 record

*********************************************************************
To run this, in addition to the plugins referred to in the first post, you need to have Quicktime and Apple's ProRes decoder installed on your PC.
Get them here:

After spending half a day working with and being screwed over by the GH2's HDMI monitor port and recording, I agree with you, Martin. The HDMI output on the GH2 is simply not good. [...] Recording external on a GH2 is a frustrating, inaccurate, tedious, limiting workflow. If anyone asked me if they "should" do it, my answer would be a succinct "hell no."

Barry, such a harsh criticism! Have you looked at the first post in a while? Most of the issues you bring up have been solved. Plus, batch processing is now available. It's really no more difficult to process than it is to remove ordinary 3:2 pulldown.

But, I agree with you, the mini HDMI cable is a pain in the buttinski, and needs to be carefully dressed.

martinbeek

04-04-2011, 02:29 AM

Ralph. The HDMI Avisynth footage looks absolutely stunning! I am as happy as a monkey with the key to the banana plantation... ;-)

I still see a stutter in the movement between the two flowers, but that could be my MacBook at work. Can you please re-check that?

Great stuff man! Thanks for your time and commitment!

Martin Beek

PDR

04-04-2011, 06:35 AM

I processed your clip and it turns out the 24P script in the first post works just fine on PAL footage. Here is the result:

I think there is a mistake with the encoding, as the file has duplicates and is 29.97 (the encoding program used might be inserting the dupes to achieve 29.97; you probably entered the wrong fps - the script is fine). Compare it to the one I posted earlier, and you will see there are too many frames for 23.976.

A better option than DirectShowSource is QTInput, using QTSource. DSS isn't necessarily frame accurate and you can "lose your place" with non-linear seeks, which can cause errors in the field matching. Also, QTInput decodes ProRes422 to YUY2 (4:2:2) by default, while DirectShowSource will usually go through a colorspace converter to RGB (more quality loss incurred by unnecessary colorspace conversions).

Are you guys seeing the green left border mentioned earlier? Is this a PAL GH2 HDMI issue, a Ninja issue, or something else? Other PAL GH2 HDMI samples recorded by an Intensity Pro didn't have this problem.
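To make the suggested source-filter swap concrete, here is a minimal illustration (the filename is hypothetical):

```avisynth
# Instead of the DirectShowSource() first line:
#   DirectShowSource("video.mov")   # seeking may be inaccurate; often converts to RGB
# load the QTSource plugin and use QTInput():
LoadPlugin("qtsource.dll")
QTInput("video.mov")                # decodes ProRes422 straight to YUY2 (4:2:2)
```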

martinbeek

04-04-2011, 07:08 AM

Yo PDR; that's all too much information for the layman ;-) Hoping that Ralph can mash up a new script.

But, indeed, i see the green border too!
Sh*t! Now that all went so well...
You have probably checked the original file? I will record a clip with the Ninja on both the GH2 and the EX1-R and a few other HDMI sources i can find and see where that comes from.

Martin

PDR

04-04-2011, 07:13 AM

No - the script he posted is absolutely fine. I tested it. The only change I would suggest is to use QTInput() instead of DirectShowSource() for the reasons posted earlier.

What happened (I think) was that he made an encoding error (entered the wrong fps during encoding, so the encoder inserted those dupes).

Yes, I checked the original file. It's not a QT Windows issue, because other ProRes422 clips decode normally without the green stripe. And other PAL GH2 HDMI clips posted earlier, recorded by other devices, do not exhibit the issue.

If that was straight from the Ninja, the one abnormal thing about those prores files is the metadata. Mediainfo reports abnormal dimensions (other prores clips don't show this, they show 1920x1080 only)

Another user , Diego, said something about some programs having problems with the native Ninja clips
http://www.dvxuser.com/V6/showthread.php?223069-HDMI-Atomos-Ninja-Hard-Disk-Recorder-795&p=2295168&viewfull=1#post2295168

When you export for quicktime, mind that for some strange reasons After Effects and Premiere are not able to open these files. Actually all files being recorded on the ninja cannot be opened in AE and PP without previously exporting them to ProresHQ from quicktime. I don't know why and any help on this is really welcome.

I suspect something is up with the Ninja prores implementation

David Shapton

04-04-2011, 08:50 AM

I suspect something is up with the Ninja prores implementation

I don't think there's anything wrong with the ProRes implementation: it's been certified by Apple, and it's 100% accurate. I will get the metadata checked.

Dave Shapton
Atomos

PDR

04-04-2011, 09:20 AM

I don't think there's anything wrong with the ProRes implementation: it's been certified by Apple, and it's 100% accurate. I will get the metadata checked.

Dave Shapton
Atomos

OK, thanks; perhaps there is something wrong with Martin's particular model or his particular GH2?

Are there any more samples (recorded from other cameras) floating around to examine?

David Shapton

04-04-2011, 09:30 AM

perhaps there is something wrong with Martin's particular model or his particular GH2?

With the green stripe from the GH2 footage, we think this is likely to be a timing error with the HDMI output from the Panasonic. We are hopeful that we can compensate for this in our HDMI input software.

We haven't seen this stripe from any other camera.

Dave Shapton
Atomos

martinbeek

04-04-2011, 09:55 AM

Dave, all.

Dave, you're right - it's all certified. Can't be the codec. And it isn't.
I have checked the original file and there is no green stripe. Downloaded the file: stripe! So it has something to do with the transfer (ftp or other). Sorry about that. Please forget the green stripe.

READ UPDATE ON LAST LINE

I have now overwritten the file with one that is different and absolutely stripe free. Checked the FTP file and it is in good order.

Recorded with the GH2 in record mode. Untouched, uploaded from the Ninja's harddisk.
This time, a plant in my garden waving in the wind. Lots of detail, moderate motion.

http://mediatube.marvelsfilm.com/009.mov 305 MB

Indeed, metadata is incorrect, saying: 1888x1062 pixels...

Can you powerful wizards please run this one again?

Many thanks.

Martin.

UPDATE: no green stripes, until I open the clip in MPEG Streamclip and there it is again, only much thinner!!! I am going crazy, I think... Does this have to do with the metadata?!
It's invisible in Quicktime, but immediately apparent in FCP - #STRANGE !

PDR

04-04-2011, 10:02 AM

If it helps to debug the issue, "aemonk" posted ProRes422 clips recorded from AJA IoPro from a Pal GH2 HDMI that did not exhibit this issue, but the links are down. I deleted the clips a long time ago, and I can't recall if the metadata was different on those
http://www.dvxuser.com/V6/showthread.php?231315-Any-GH2-news-on-the-HDMI-output-front/page24

I asked this earlier, but can some Mac users have a look at Martin's clip to rule out PC QT ProRes decoder issues?

I have checked the original file and there is no green stripe. Downloaded the file: stripe! So it has something to do with the transfer (ftp or other). Sorry about that. Please forget the green stripe.

Sorry, I didn't see this before posting. Are you saying the file was corrupted somewhere along the line by the file storage service?

UPDATE: no green stripes, until I open the clip in MPEG Streamclip and there it is again, only much thinner!!! I am going crazy, I think... Does this have to do with the metadata?!
It's invisible in FCP and Quicktime!

This would suggest mpegstreamclip is causing the issue

Is the new file you uploaded directly from the Ninja, or processed in some other program first?

martinbeek

04-04-2011, 10:09 AM

PDR.

Sorry, our messages crossed over. I was updating it while you were commenting.

The green stripe is indeed still there, maybe a bit thinner. Not visible in Quicktime (which I used to check), but it IS there in FCP too. BUGGER!

Martin

PDR

04-04-2011, 10:13 AM

Martin - what about the simultaneous AVCHD recording? Is the stripe there?

(there are actually stripes on the right side as well if you look closely)

Not visible in Quicktime (which I used to check), but it IS there in FCP too

If it's NOT visible in QuickTime Player, what player version are you using (QuickTime X?)

If you recall what Diego said:

When you export for quicktime, mind that for some strange reasons After Effects and Premiere are not able to open these files. Actually all files being recorded on the ninja cannot be opened in AE and PP without previously exporting them to ProresHQ from quicktime. I don't know why and any help on this is really welcome.

This suggests that Quicktime can decode it properly and transform it into a "proper" (?) ProRes file. His sample ALSO exhibits the stripe (at least on PC), but I'm not sure what he did. Apparently he deinterlaced it; I'm not sure if he shot in 24p mode. The metadata looks "normal" in his sample:
Width : 1 920 pixels
Height : 1 080 pixels

But Quicktime on PC does exhibit the stripe issue

Stripe issues aside, Ralph's script works fine cadence-wise on that new sample.

EDIT: sorry Diego's sample does have the stripe

martinbeek

04-04-2011, 10:30 AM

It's crazy indeed. The QuickTime version is 10.0 (118). It's absolutely NOT visible in QT, even zoomed in to 200%.
The AVCHD footage does not show the stripes. Now I'd like to see some GH2 PAL Ninja footage from other people!

Martin

PDR

04-04-2011, 10:32 AM

Sorry, I looked again at Diego's GH2 PAL Ninja "sky" sample; even though the metadata looks "normal", it too has the stripe. I'm unclear as to how he processed it - I think he shot 50i and deinterlaced to 25p using JES Deinterlacer. It's definitely not the native clip.

martinbeek

04-04-2011, 10:43 AM

PDR. That is a bit more reassuring, telling me that it's not my particular GH2.
How bizarre! But I'm happy that the Avisynth scripting works. I'll install VirtualBox, Win XP and Avisynth tonight and see for myself!
Great stuff guys!

Dave: let us know if your contact with Panasonic can say more about this issue.

Martin

PDR

04-04-2011, 10:47 AM

If you open it in Quicktime (which seems to decode without the stripe for you) and export (forcing a re-encode) another ProRes file for import into FCP, is the stripe visible in FCP?

martinbeek

04-04-2011, 10:59 AM

Yes sir!
Save-As to "HD 1080 P" in quicktime (resulting in a H.264 deinterlaced file) does indeed make the stripes disappear!

Martin.

PDR

04-04-2011, 11:05 AM

I don't know enough about Macs - but that suggests that the decoding pathway used by that particular QuickTime version for decoding the ProRes Ninja files is different from the one used by FCP, MPEG Streamclip and other programs, including the QuickTime version used on PCs. At least that's a workaround for now for Mac users until the issue gets sorted out.

Amadeus5D2

04-04-2011, 11:10 AM

Sorry, you're making me crazy

martinbeek

04-04-2011, 11:20 AM

PDR.

The weird thing is that Apple claims the QuickTime version is system-wide. All components and applications use the same installed QuickTime version to do their work.
I'll dive into FCP to see if I can find another backdoor. The resulting conversion is not that great with respect to the codec and bitrate.

Martin

PDR

04-04-2011, 11:26 AM

Yes, that was my understanding too, that all Prores gets decoded through the QT Prores decoder API (that's the way it works on windows)

Can't you export Prores from Quicktime?

You can use lossless options as well, like PNG in MOV, or the Animation codec (huge file sizes). These are RGB options, so there will be a Y'CbCr => RGB conversion.

martinbeek

04-04-2011, 11:31 AM

QuickTime 7 is showing the stripes, so QuickTime 10 has become cleverer. All in all, not a real solution.

Martin.

PDR

04-04-2011, 11:33 AM

Yes, it's QuickTime 7.x on PC... There is no QuickTime 10.x option available on PC.

I still think it has to do with the metadata; just a hunch

martinbeek

04-04-2011, 01:18 PM

Wasn't there a tool on the Mac to set which version of QuickTime is used by FCP? I can vaguely remember that such a setting exists. It seems to me that FCP still uses QT 7 instead of 10.
Maybe I have it all wrong here...

PDR

04-04-2011, 02:26 PM

Ralph's script works great. The only thing I would change is the first line, to QTInput("video.mov"). You need qtsource.dll because the input filter doesn't come with the default avisynth install (it's an external plugin).

I've used FCP before, but I'm primarily a PC user, so I don't know how to get FCP to use QT X components

Ralph B

04-04-2011, 02:48 PM

Ralph's script works great. The only thing I would change is the first line, to QTInput("video.mov"). You need qtsource.dll because the input filter doesn't come with the default avisynth install (it's an external plugin).

I used both QTInput() and DirectShowSource(), and I preferred DirectShowSource because of gamma issues. QTInput resulted in crushed blacks, whereas DirectShowSource looked correct, at least on a PC. Don't know how this will translate to a Mac. This issue requires careful testing. We're in new territory with ProRes.

PDR

04-04-2011, 03:10 PM

I used both QTInput() and DirectShowSource(), and I preferred DirectShowSource because of gamma issues. QTInput resulted in crushed blacks, whereas DirectShowSource looked correct, at least on a PC. Don't know how this will translate to a Mac. This issue requires careful testing. We're in new territory with ProRes.

There are gamma issues with h.264 in MOV, but not with ProRes422 using QTInput. It's not new territory; QTInput has been used by many folks in the post-production industry for years when avisynth processing has been required.

Levels are passed untouched through QTInput, so if there are crushed blacks, they were there in the first place. In contrast, RGB conversion using Rec709/601 will clip Y' < 16 and Y' > 235, reducing the data available. Keeping the same colorspace and no additional filters yields the highest quality.

Here are some screenshots: one decoded with QTInput and converted to RGB using Rec709 for the screenshot, and the other from the ProRes reference decoder using QT Player. You can verify this with other footage too, for example from the Arri Alexa. QTInput produces spec-compliant decoding.

http://www.mediafire.com/?rgxnqzjyrrtkrpz

Perhaps you were using QTInput incorrectly?

martinbeek

04-04-2011, 03:12 PM

I have Avisynth up and running under VirtualBox on my Mac!
The first part of the script seems to run OK until I get the error "There is no function ConvertToYV24" - which is strange, because that's an internal function.
Any suggestions?

Martin

PDR

04-04-2011, 03:21 PM

ConvertToYV24() requires Avisynth 2.6.x alpha

It's not in the official 2.5.8 release

martinbeek

04-04-2011, 03:36 PM

Thanks! Working now (so far). Now trying to play MOV in Windows Media Player...

M.

PDR

04-04-2011, 03:37 PM

I would like to add a link to the proper version of AviSynth in the first post. What do you recommend, PDR?

I haven't examined those chroma functions, but it seems they rely on 2.6.x functions

2.6.x is fairly stable - I've been testing it for almost a year now - and I think it's safe to suggest that one (provide the official SourceForge link).

And it's very easy to switch back & forth between 2.5.8 and 2.6.0 by installing one on top of the other, I've done this many times for debugging

daidalos

04-05-2011, 12:33 AM

Just a try:
Left green line correction of 009.mov: see http://www.mediafire.com/file/5a6m4jsviffdmvo/Testqt10.rar (can only upload up to 200MB)


martinbeek

04-05-2011, 10:44 AM

Hey Ralph.

Great! I can't wait to see the results. I do need one last little piece of help.
If I load the AVS file into VirtualDub, what do I do then?!
I've tried Save AVI, but then I end up with a file 5 times bigger and with weird color moire all over.

If you can help me out with that last part of the chain, that'd be terrific!

UPDATE: Is there a way to save to ProRes MOV?

Thanks,

Martin

FIRST POST UPDATE

I've updated the first post with:

* A new section on ProRes with links
* Link to Avisyth 2.60 alpha
* Link to VirtualDub
* General clean-up to make the sections easier to read

Everything is now there to get you going.

PDR

04-05-2011, 11:06 AM

I'm not familiar with VirtualBox. How is VirtualBox set up? Do you have a copy of Windows installed inside it?

Here's the thing: you cannot encode to ProRes on a PC, but you can decode it.

VDub can only export AVI-wrapped videos - which sucks for importing into Mac applications like FCP (unless you use the external encoder feature).

The file size is large because you didn't select compression (Video => Compression).

I would probably ditch VDub and use ffmpeg to encode to DNxHD (MOV-wrapped), which FCP will accept. ffmpeg can accept AVS scripts directly, and you can make a Windows batch file to encode, for example, all the files in a directory.

Ralph B

04-05-2011, 11:40 AM

I would probably ditch VDub and use ffmpeg to encode to DNxHD (MOV-wrapped), which FCP will accept. ffmpeg can accept AVS scripts directly, and you can make a Windows batch file to encode, for example, all the files in a directory.

This sounds good for Mac people. If somebody wants to write step by step instructions, I'll add it into the first post.

PDR

04-05-2011, 11:59 AM

martin will have to help with the testing , but I can provide instructions on a PC

some questions first for martin:

1) did you "fix" the prores file first in QT 10 ? the green border ?

2) what does the preview in vdub look like ? do you get the weird colors before exporting? there might be graphics driver issues running through your virtualization software

1) create your avs scripts - there are some avs batch script creators, I think Ralph made some links to some tools in the 1st post ?

2) download a binary of ffmpeg for windows and place a copy of ffmpeg.exe into same directory as your video files, and .avs files (your 1 folder should have the .mov and .avs files)
http://hawkeye.arrozcru.org/

3) write the following in Notepad, save it, and change the extension from .txt to .bat. Double-click the .bat file to process all the .avs files in the directory

Note that the batch script is only valid for the 1920x1080p23.976 variant of DNxHD and gives the 175Mb/s 8-bit 4:2:2 variety.

There is no audio in the script "-an" . If for some reason you wanted audio change the script to "-acodec copy" instead of "-an" . That will give uncompressed audio in the .mov wrapper, but AFAIK, the HDMI audio is useless

If you wanted the 1920x1080p23.976 115Mb/s variant of DNxHD , substitute "-b 115M" for "-b 175M"
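The batch script itself didn't survive the quoting above, but based on the description (one DNxHD MOV per .avs script, 175Mb/s, no audio), a sketch of such a .bat file might look like this - the exact ffmpeg flags are an assumption based on 2011-era builds:

```bat
@echo off
rem Process every .avs script in this directory with the ffmpeg.exe placed alongside it.
rem -vcodec dnxhd -b 175M gives the 1920x1080p23.976 175Mb/s 8-bit 4:2:2 variant;
rem swap in -b 115M for the 115Mb/s variant, or -acodec copy instead of -an for audio.
for %%f in (*.avs) do (
    ffmpeg.exe -i "%%f" -vcodec dnxhd -b 175M -an "%%~nf.mov"
)
pause
```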

Yes that all works! I added ConvertToYUY2 at the end. Does that degrade my original footage in any way?

No, it doesn't degrade it by adding ConvertToYUY2(), but there are compression losses (DNxHD is a lossy format; the compression is very similar to ProRes422, so it would be like incurring one round of generation loss - similar to re-encoding ProRes422 to ProRes422).

YUY2 is 4:2:2 subsampled Y'CbCr. The chroma fix part of the script works in YV24 (Y'CbCr 4:4:4) just for intermediate processing. You cannot do a "handshake" in software between 4:4:4 => 4:2:2 without properly converting it first, unless the receiving software has provisions to accept it. In this case ffmpeg cannot "see" Y'CbCr 4:4:4 properly, so you convert back to YUY2, which is the same sampling as the original ProRes422.

If you read the second-last line of the script, it says "# Depending on the source file you may need to convert from YV24 back to YUY2/YV12." :)

In Avisynth, anything commented out with "#" is like a REM statement and doesn't affect the actual processing - it's just a comment.
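As a rough sketch of what that means in the script (the source line is hypothetical; the real chroma fix steps are GrgurMG's):

```avisynth
QTInput("00000.mov", color=2)  # hypothetical source line
ConvertToYV24()                # work in 4:4:4 for the chroma fix
# ... GrgurMG's chroma fix steps go here ...
ConvertToYUY2()                # back to 4:2:2, the same sampling as the original ProRes422
```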

If you wanted no compression loss, you would need to use uncompressed (e.g. v210, 10-bit 4:2:2), or a losslessly compressed format like huffyuv. v210 is huge in filesize, many times larger than Prores422.

Can you test whether huffyuv is compatible with FCP? I don't think the ffmpeg variant of huffyuv is compatible with Mac, but at least that would give Mac users a losslessly compressed option. For most people, DNxHD 175 will be "good enough", but if you pixel-peep, you can see differences.

Or, if you wanted, use -vcodec v210, which works for sure, but with huge file sizes.

martinbeek

04-05-2011, 01:39 PM

Everyone...
The workflow GH2 -> Ninja -> Avisynth+FFmpeg+DNxHD -> FCP/Mac works!! Got it all working!!
The resulting footage looks perfectly all right to me! But please have a look at the result here and check for me if there are any dupe frames or other damage: http://mediatube.marvelsfilm.com/GH2_PAL.mov (294 MB)
I didn't worry about the green stripes yet.

Wow!!

Thanks Ralph and PDR for the great help and effort!

Martin

martinbeek

04-05-2011, 02:49 PM

Here is a quick and dirty side-by-side comparison. Left AVCHD from camera, right Ninja->AviSynth.
The Avisynth version has elevated levels, and there seems to be a small, constant vertical shift, but that could be the compression on the left.

BUT... there is a horrible hiccup around the 9 sec. mark. Ouch!
Does anyone know what causes this, and what I have to tweak?
Is there an application I can use to automatically detect these defects, as a kind of quality check?

http://mediatube.marvelsfilm.com/dual_h264.mov (63 MB)

By the way; look at the difference in transitions between colors on the slate !! HDMI rules!

Martin

martinbeek

04-05-2011, 03:09 PM

Ralph.

In the code on page 33, between the lines, it says "trim(1,0)". That line does not appear in the code on page 1. What does it do?

Martin.

PDR

04-05-2011, 03:28 PM

trim(1,0) cuts off the 1st frame

I think the levels shift is from the way QuickTime decodes DNxHD; it's not a problem on a PC unless you use QuickTime (viewing DNxHD through QT player on a PC exhibits the gamma shift as well, but other programs don't). QuickTime is horrible, horrible for gamma (and other) issues, even in Mac-only software. h264 in MOV also has this problem. I'll look into a workaround. v210 shouldn't exhibit a gamma or levels shift on Mac or PC, even through QuickTime.

You can see in this comparison that the problem only occurs when using QT player. The first frame of your second 009.mov clip is uncombed, so it makes a good comparison spot. The shots labelled "qt" are taken through QuickTime, so "original qt.png" is the original ProRes reference picture.
http://www.mediafire.com/?7nisp9hivch4fd6

Not sure what's happening with the vertical shift, or about the frame discrepancy at the end - you have to provide more details on what you did; maybe upload the clips. The script isn't perfect. It can make mistakes because the HDMI output from the GH2 isn't a perfect cadence; the inserted fields don't come at a fixed interval (say, every 25 fields) - they can occur at (almost) random spots. If you provide longer clips along with the simultaneous recording, the script can be tweaked. Ralph asked at the very beginning for simultaneous AVCHD recordings for this reason.

Levels issue aside, the detail retained in the shadow areas of the HDMI "half" is what strikes me. It's reminiscent of Ralph's comparison of the fellow on the roof a dozen pages back.

PDR

04-05-2011, 04:03 PM

I'm pretty sure that the vertical shift is from the chroma fix aspect of that script; if you remove it or comment it out, there is no shift. Maybe Ralph or GrgurMG can have a look at it and tweak it a bit.

I used a different approach to the chroma shift a dozen or so pages back; you can try that if you want. There is no shift with that method.

By the way; look at the difference in transitions between colors on the slate !! HDMI rules!

Martin

Wow.... even the noise looks good. I love how the dark areas are still detailed and not smoothed out (or pasty). When you put them side by side like this, and as good as the AVCHD is... I can see its blockiness and added compression noise loud and clear compared to the organic-looking noise on the HDMI side. Wow... what a difference. The colors look good also.

Thanks for being our guinea pig. This could really push me toward getting the Ninja myself.

EDIT: Also, if you have not already noticed... you are getting a lot more latitude in your image from the HDMI vs AVCHD.

Ian-T

04-05-2011, 05:01 PM

By the way,...Cineform just confirmed with a user in another forum that the GH2 is outputting 4:2:2. FYI

PDR

04-05-2011, 05:06 PM

By the way,...Cineform just confirmed with a user in another forum that the GH2 is outputting 4:2:2. FYI

But it's not usable 4:2:2 , at least according to the tests done on Ralph's samples, and Barry's "wringer" sample. Go back about 10-20 pages, there are a bunch of tests there that are pretty indicative . You have to downsample the chroma in order to make it "usable" . It's full of aliasing. This effectively makes it 4:2:0

Can you post the link to the other forum ?

Ian-T

04-05-2011, 05:39 PM

But it's not usable 4:2:2 , at least according to the tests done on Ralph's samples, and Barry's "wringer" sample. Go back about 10-20 pages, there are a bunch of tests there that are pretty indicative . You have to downsample the chroma in order to make it "usable" . It's full of aliasing. This effectively makes it 4:2:0

Can you post the link to the other forum?

Yes, I remember Barry's statement.

The user is on HV20.com who sent a sample clip in to Cineform. Post #18 http://www.hv20.com/showthread.php?41961-WOOHOO!-Atomos-Ninja-just-arrived

But the issue remains that the HDLink is showing that it is upconverting the file. They are currently investigating.

PDR

04-05-2011, 06:01 PM

Thanks for the link

Well Prores422 is ..... guess what... 422 :)

But that's just the recording format. If you examine the actual content, it's full of aliasing; you have to downsample it (it's effectively 4:2:0). If you take 8-bit footage and encode it to Cineform - sure, it's a 10-bit recording format, but it doesn't suddenly become 10-bit content.

HDLink will show that it's upconverting the file if it decodes ProRes as 4:2:0 - there is likely a downsample somewhere along the processing chain.

Now those (pretty conclusive) tests were recorded from AJA Io Pro, Intensity Pro and Nanoflash, so it's possible that Ninja is doing something different (but very unlikely IMO)

Ian-T

04-05-2011, 06:13 PM

I see. So the fact that it shows as a ProRes 4:2:2 container says nothing about what went into it. We should get a clue by the aliasing and jagged edges that it was originally a lower chroma. It seems like another clue (IMO) would be the Cineform software trying to upconvert the footage to 4:2:2.

I think they would need to have an actual camera in hand and record the output directly to be more definitive.

PDR

04-05-2011, 06:27 PM

The fact that it has jaggies and aliasing doesn't necessarily prove or disprove that the original signal was higher or lower chroma subsampling. In fact, I think it was 4:2:2 originally - I posted screenshots and evidence of this in the original HDMI thread, using AEmonk's samples. If you want to see that discussion, have a look at that old thread. All 4:2:2 means is that for every 4 pixels of luma there are 2 each of Cb and Cr.

BUT - the problem here is that in order to make it "usable" you have to downsample the chroma. Ralph suggested this first (ConvertToYV12) at the beginning of this thread - that made the artifacts go away. Essentially what you are doing is downscaling, then upscaling, the chroma. So if you are using the GH2 HDMI, it will effectively be 4:2:0 (you are downsizing Cb and Cr to 960x540 instead of 960x1080).
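In script terms, that round-trip is just two conversions; a minimal sketch, to be placed after the source filter:

```avisynth
# Ralph's fix: discard the aliased chroma rows by going through 4:2:0 ...
ConvertToYV12()   # Cb/Cr become 960x540
# ... then bring the sampling back up for a 4:2:2 pipeline
ConvertToYUY2()   # Cb/Cr upsampled to 960x1080 again; content is now effectively 4:2:0
```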

Cineform trying to upsample the footage indicates that it "sees" the recording format as 4:2:0 - it's decoding ProRes422 as 4:2:0. There is no way HDLink can determine anything about the actual content; it can't distinguish, for example, 8-bit 4:2:0 content converted to a 10-bit 4:2:2 Cineform intermediate. It will "see" that Cineform sample as 10-bit 4:2:2. Would it try to upsample that? No - even though the original content was 8-bit 4:2:0.

Now, no chroma tests have been posted specifically using the Ninja and the GH2, but I doubt the results will be any different from the Nano, AJA IoPro, or Intensity. And if we examine Martin's test clips, there is quite a bit of aliasing in Cb and Cr, similar to all the GH2 HDMI footage.

huatao

04-05-2011, 08:22 PM

Yes I remember Barry's statement.

The user is on HV20.com who sent a sample clip in to Cineform. Post #18 http://www.hv20.com/showthread.php?41961-WOOHOO!-Atomos-Ninja-just-arrived

But the issue remains that the HDLink is showing that it is upconverting the file. They are currently investigating.

Hi, I'm new here. I just read that thread from HV20.com, but isn't it talking about the canon HV20, not the panny gh2 ?

Or am I missing anything ?

SLoNiCK

04-06-2011, 01:50 AM

Well, I've made a screengrab from Martin's last file (it was a progressive one), dropped it into PS, and converted to Lab. There were 1-pixel vertical gradations but only 2-pixel horizontal gradations in the a-b channels, so it wasn't just additional resolution borrowed from luma during the colorspace conversion. What's that? 4:2:2? Broken chroma? Some field differences? Hate to ask, but could anyone shoot a color resolution chart with GH2+Ninja?
BTW, is it necessary to turn off everything on the built-in LCD screen to get clean HDMI out?

martinbeek

04-06-2011, 05:54 AM

Well, I've made a screengrab from Martin's last file (it was a progressive one), dropped it into PS, and converted to Lab. There were 1-pixel vertical gradations but only 2-pixel horizontal gradations in the a-b channels, so it wasn't just additional resolution borrowed from luma during the colorspace conversion. What's that? 4:2:2? Broken chroma? Some field differences? Hate to ask, but could anyone shoot a color resolution chart with GH2+Ninja?
BTW, is it necessary to turn off everything on the built-in LCD screen to get clean HDMI out?

It is not necessary to manually clear the display information, since it is removed from the HDMI signal the moment you press Record - and you have to be recording to get the correct HDMI signal anyway. The only thing you have to disable is the Highlight warning. I use two Custom settings on the camera's mode dial: C1 with, and C2 without, the Highlight warning.

I only have an old Fuji film color chart that I used during my good old film days (Super 16mm), and I don't have a fancy Macbeth card. It only shows RGB, black, and 50% gray, so I gather that won't help you very much.

For Ralph and PDR, I have put the last unaltered "stutter" footage in a zip file (both AVCHD and Ninja); see the link below. It's awfully dirty footage - poorly exposed at 400 ASA - but nice as a worst-case scenario for this test (as it turned out). I published the script that I used some posts earlier.

I had to remove older test footage from my server because I've gone over my 2GB storage limit, so some older footage in this thread is now offline.

AVCHD Cinema 24L, 1/50, nostalgic style.
ProRes 422 from Ninja, 50i.
The Ninja was started after the GH2 was put into record. I did not touch any controls on either device during the shoot.

The side-by-side example of AVCHD and processed AviSynth is still online: http://mediatube.marvelsfilm.com/dual_h264.mov
(Avisynth on the right; there is a hiccup around 6 secs into the Avisynth part).

Cheers!

Martin.

PS: Good news from Dave at Atomos: their techies are sure they can solve the green stripes problem in one of the upcoming Ninja firmware releases!! It's (another) GH2 HDMI timing issue. Furthermore, they now have several GH2s (also PAL) that they'll test thoroughly in the very near future. Their commitment gives me good hopes that their techies will come up with a total GH2 solution, but that's just my wishful thinking... ;-)

PDR

04-06-2011, 06:35 AM

Good news from Atomos

Martin - I would suggest waiting until the firmware update before doing other tests. It may change other things besides the stripes

BTW, there are free file hosting sites available , e.g. mediafire.com, megaupload.com, sendspace.com, fileserve.com - some of them have size upload limits, but some are 1GB

Ian-T

04-06-2011, 07:17 AM

Hi, I'm new here. I just read that thread from HV20.com, but isn't it talking about the canon HV20, not the panny gh2 ?

Or am I missing anything?

You're absolutely correct. My mistake. There are a number of folks there using the GH2 and I didn't catch that. But now this makes me even more curious about why HDLink is upconverting the ProRes file from that camera. PDR's explanation makes sense if it had in fact been the GH2. I'm now starting to wonder if any of these consumer cameras actually output a true 4:2:2 signal over HDMI.

Amadeus5D2

04-06-2011, 09:26 AM

Hi, as I said, I have no problem getting a clean 25p from the 24p (50i HDMI out). Can you please tell me after how many seconds I should see asynchronous frames? The video I made today is about 50 sec. and I can't see any frame errors...

martinbeek

04-06-2011, 09:40 AM

Hi, as I said, I have no problem getting a clean 25p from the 24p (50i HDMI out). Can you please tell me after how many seconds I should see asynchronous frames? The video I made today is about 50 sec. and I can't see any frame errors...

Hello Amadeus.

Can you explain your workflow to me in a step-by-step fashion?
Can you supply a processed file?

Thanks,
Martin

Ralph B

04-06-2011, 10:09 AM

(Avisynth on the right; there is a hick-up around 6 secs in the Avisynth part).

The "stall" happens just before you went out of record on the camera. That's why I recommend stopping the HDMI capture before stopping the camera. It prevents things like this from occurring.

Amadeus5D2

04-06-2011, 10:26 AM

the file is uploading now and around 500MB.....done:

http://www.megaupload.com/?d=NC4PLESB

I uploaded the file as-is so you can switch from 50i to 25p. You'll probably need the BM codec; I captured with the lowest BM MJPG setting.
The codecs come with the drivers: http://www.blackmagic-design.com/support/software/ - download e.g. the latest Intensity driver (Mac/Win) and install it to get the codecs.

First I turned on the GH2, then I attached the HDMI cable, which is connected to my Intensity Shuttle. Don't press the camera's record button - it's live-view capture; record the live view to disk with BM Media Express.

The "stall" happens just before you went out of record on the camera. That's why I recommend stopping the HDMI capture before stopping the camera. It prevents things like this from occuring.

Mmmm.... are you sure? There are at least four seconds before the scene ends in the HDMI shot.
But I'm certainly not going to question your expertise! ;-)

The Avisynth script mentions tweaking a certain parameter; is that still necessary, or is the default setting sufficient?

I'll wait with further experiments until the next Ninja firmware arrives.

I have a meeting tomorrow with the service manager of Panasonic Belgium/Netherlands. I want to compare my old (December 2010) GH2 with an April 2011 model. I'll also certainly discuss the whole HDMI saga. Not that it'll leave much of an impression, I'm afraid... but the more people complain... who knows.

Thanks,

Martin Beek

Ralph B

04-06-2011, 11:52 AM

Mmmm.... are you sure? There are at least four seconds before scene end on the hdmi shot.

Maybe we're not talking about the same thing. I downloaded the raw clip of the girl closing the slate and ran it through Avisynth. It looks fine - no glitch around 6 seconds, only a stall just before the camera goes out of record.

Amadeus5D2

04-06-2011, 12:50 PM

I have exciting new findings.

I have the same issue as you and all the others when I attach Panasonic's own 14-40mm lens: I can't get a proper 25p from 50i!!!! Strange - there must be some electronic problem between camera and lens!

With the Kipon adapter and my Canon 24-70mm attached, I don't have these issues.

Can somebody test that?

Martin, here is the test video captured via HDMI (BM Intensity Shuttle) with the KIPON adapter and Canon 24-70mm, GH2 set to 24p (50i out). The content is not interesting, but look at the red color around 30 sec - it looks very clean to me - and at the length of the video (50 sec.) without any frame sync problems.

http://www.megaupload.com/?d=NC4PLESB

cheers

martinbeek

04-06-2011, 02:01 PM

Maybe we're not talking about the same thing. I downloaded the raw clip of the girl closing the slate, and ran it through avisynth. It looks fine. No glitch around 6 seconds. Only a stall just before the camera goes out of record.

But you did see the glitch in the side-by-side movie? I can try running it through Avisynth again.
Mmmmm... I assume you and I use the same script. Maybe a critical setting is on the edge of falling over?
We're almost there, but this worries me a bit.

It's 11 pm here and i'm going to sleep on it ;-)

Martin.

PDR

04-06-2011, 03:05 PM

the file is uploading now and around 500MB.....done:

http://www.megaupload.com/?d=NC4PLESB

I uploaded the file as-is so you can switch from 50i to 25p. You'll probably need the BM codec; I captured with the lowest BM MJPG setting.
The codecs come with the drivers: http://www.blackmagic-design.com/support/software/ - download e.g. the latest Intensity driver (Mac/Win) and install it to get the codecs.

First I turned on the GH2, then I attached the HDMI cable, which is connected to my Intensity Shuttle. Don't press the camera's record button - it's live-view capture; record the live view to disk with BM Media Express.

Blackmagic capture settings:

<snip>

AE settings:

<snip>

thanks for verifying

Hi there - the cadence is fine; it is 25p content without inserted or deleted fields (unlike the other samples). But I'm guessing the "liveview" drops the resolution.

What you are seeing is essentially incorrect upscaling, probably from a low-resolution source. It's probably taken the 50i signal, dropped half the fields, and used a point resize on the remaining fields, so you get aliasing (jaggy edges).

This is a 1:1 crop from a static portion of the video (frame 800)

http://i54.tinypic.com/nz56qe.png

Was the sample processed in AE? or straight from BM ?

You can use avisynth filters for antialiasing, or deinterlace it, but resolution is low (it wasn't scanned progressive)

http://i52.tinypic.com/9qe992.png

Amadeus5D2

04-06-2011, 03:08 PM

Hi PDR,

No, this is the interlaced, unprocessed sample. Load it in AE, disable the upper/lower field first footage setting, and you will have a perfect, alias-free sample. And don't use smoothing in the timeline, by the way - the sample does not change resolution when you switch to smooth!

But when captured with Panasonic lenses there is a difference depending on whether smooth is selected or deselected... deselected has lower resolution, and selected seems upscaled to me!!! But not with these settings (GH2 and KIPON/Canon lens). In the next days I can upload a short comparison. Strange but true.

I uploaded the file as-is so you can switch from 50i to 25p. You'll probably need the BM codec; I captured with the lowest BM MJPG setting.
The codecs come with the drivers: http://www.blackmagic-design.com/support/software/ - download e.g. the latest Intensity driver (Mac/Win) and install it to get the codecs.

First I turned on the GH2, then I attached the HDMI cable, which is connected to my Intensity Shuttle. Don't press the camera's record button - it's live-view capture; record the live view to disk with BM Media Express.

The only way I think you can do that (cleanly) is to do a PAL slowdown (slow everything down, including audio, to 24000/1001, keeping the same frame count).

50i => 25p is very easy , you just bob-deinterlace it (50i => 50p) and discard 1/2 the frames.
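A minimal Avisynth sketch of that 50i => 25p route (assuming top-field-first material, as the scripts earlier in the thread do):

```avisynth
AssumeTFF()    # scripts in this thread treat the GH2 signal as top field first
Bob()          # bob-deinterlace: 50i -> 50p, each field becomes a frame
SelectEven()   # discard half the frames: 50p -> 25p
```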

But this sample is interlaced scan , with progressive content (every consecutive field is duplicated, not unique) .

You might as well record true 50i instead (this way you get 2x the temporal information). I believe 50i and 60i are clean signals (truly interlaced) from GH2 HDMI

I don't see any benefit to recording "liveview" ?

EDIT: sorry, 50i and 60i are not clean signals ... so there may be a benefit to recording "liveview" then....

What you're saying about the different lenses is interesting. I can't see how they could affect the cadence or scaling (very strange), but please demonstrate the comparison.

Amadeus5D2

04-06-2011, 03:43 PM

I get these results only with the GH2 set to 24p (50i out); it doesn't work with 1080/50i! I never tried to capture while the GH2 is recording. Yes, I'll make a comparison... I agree it's very strange. Cheers.

PDR

04-06-2011, 03:57 PM

It's a conspiracy theory with Panasonic's lens :)

I wonder if "clean" 24p/59.94i can be captured with the NTSC model and other lenses. I don't recall whether Ralph and the others were using other lenses.

Amadeus5D2

04-06-2011, 04:07 PM

I can send you a private video comparison: it's a short screen recording of AE with the two lenses, but not very good because they have different subjects, so I will make one later with the same subject. But take a look at what happens when I toggle smooth on/off on the video made with the Panasonic lens - you won't believe the difference.

Amadeus5D2

04-07-2011, 01:38 PM

Hi PDR,

Strange, concerning the lenses: today I tried to reproduce the issue with the lens, but so far I don't get the same result - the good news is that Panasonic's 14-40 now looks OK.
Strange, because yesterday I did the same test twice with the same result; sorry. Who knows - maybe I can find the cause. Anyway, I'll keep an eye on it.

PDR

04-07-2011, 02:48 PM

For Ralph and PDR, i have put the last un-altered "stutter" footage in a zip file (both AVCHD and Ninja), see link below. It's awfully dirty footage; poorly exposed at 400ASA, but nice as a worse case scenario for this test (as it turned out). I published the script that i used some posts earlier.

AVCHD Cinema 24L, 1/50, nostalgic style.
ProRes 422 from Ninja, 50i.
The Ninja was started after the GH2 was put into record. I did not touch any controls on either device during the shoot.

The side-by-side example of AVCHD and processed AviSynth is still online: http://mediatube.marvelsfilm.com/dual_h264.mov
(Avisynth on the right; there is a hick-up around 6 secs in the Avisynth part).

Are you sure it's at ~6 seconds? I'm not seeing the problem in either the side-by-side encoding or in the native Ninja recording with script processing.

There are some problems at the end, after ~9.5 seconds - maybe it was a typo and you meant 9 seconds?

By "hiccup" I assume you mean a discrepancy between the AVCHD and the Avisynth-processed HDMI footage?

There is a way to get it perfectly matched up (even after 9.5 seconds, all the way to the end of the AVCHD clip) - but not automatically. It's a long-winded explanation. The short version is that you have a missing field at ~9.5 seconds, and that screws everything up; it's replaced by a field from the wrong frame. Getting rid of inserted duplicate fields is usually no problem, but fixing something where the information isn't there in the first place is more difficult. This probably isn't something that an Atomos update will fix - the Ninja can only record the signal it is given; the fix would have to come from Panny. If this pattern occurs only at the beginning and end of recordings (before you hit stop), it shouldn't matter, but if it occurs in the middle, then it's a bigger problem.

RE: levels/gamma issues

Did you realize there is a levels issue before encoding to DNxHD ?

There is a discrepancy between your HDMI recorded and AVCHD footage with respect to saturation and levels. But this has been documented before. The GH2 HDMI output is different to begin with. This is in addition to the quicktime/DNxHD gamma issues mentioned earlier. I've included screenshots with a waveform
http://www.mediafire.com/?1mf63qxlx6ghn9y#1

But you have converted the AVCHD to ProRes - this complicates things and introduces other variables. In the future I would suggest uploading the native AVCHD files - not only because they are smaller, but also because it's easier to debug when/how problems are introduced. For example, the black level is elevated in the ProRes-converted AVCHD clip, and it has lower contrast. The Ninja clip has Y=16, a proper black level, but the AVCHD ProRes clip is higher. Did that occur because of the ProRes conversion, or something else? How did you convert to ProRes - Log & Transfer? Or is this the "normal" screwed-up levels of the GH2 HDMI signal?

martinbeek

04-08-2011, 01:56 AM

Are you sure it's at ~6 seconds ? I'm not seeing the problem in either the side by side encoding, or when looking at the native Ninja recording with script processing?

There are some problems at the end , after ~9.5 seconds maybe it was a typo and you meant 9 seconds ?

by "hickup" i assume y

SORRY SORRY SORRY - despicable me! You are right, it's a typo! That was indeed meant to read 9 seconds!

I understand your explanation and will start the Ninja after the camera, and stop it again before the camera is stopped.

BTW, the AVCHD footage was fed into 5DtoRGB - it does a nice job converting AVCHD to prores444 if you don't need sound; but that was just a quick-and-dirty solution to get it into FCP smoothly.

Martin.

martinbeek

04-08-2011, 03:20 AM

Ralph.

I have noticed another difference between the page 1 and page 33 code.
After "AssumeTFF()" it says on page 33: converttoyuy2()
Is that meant to stay there for ProRes PAL footage ?

Martin

martinbeek

04-08-2011, 04:36 AM

PDR

You mentioned a slightly different approach for preventing color shifting that didn't impose a shift on the whole image. You said "a dozen pages earlier", but can you point me to that post? I can't find it myself.

When I take screengrabs from the Ninja footage and the corresponding processed Avisynth footage and zoom in 200% in Photoshop, I see a blurring of fine detail that I'd like to keep - such as facial hair and patterns on small leaves. I gather that's part of the shifting/restoring?

Sorry for all the questions, guys, but I want to get this perfect and I'm 99% there!

Mmmm... is there another codec that comes to mind that is cross-platform, that FCP can read, that is as good as DNxHD, but remains Y'CbCr all the way?

Martin

Ralph B

04-08-2011, 09:40 AM

I have noticed another difference between the page 1 and page 33 code.
After "AssumeTFF()" it says on page 33: converttoyuy2()
Is that meant to stay there for ProRes PAL footage ?

No. ConvertToYUY2() is necessary if you use DirectShowSource() for the input. If you use QTInput, ConvertToYUY2() is not needed.
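So the two input variants differ only in the source line (filenames hypothetical; the color=2 argument to QTInput is how the QTSource plugin is typically asked for YUY2, but check your plugin's docs):

```avisynth
# Page 33 variant - DirectShowSource doesn't guarantee YUY2, so convert explicitly:
DirectShowSource("capture.mov")
ConvertToYUY2()

# Page 1 variant - QTInput can deliver YUY2 directly, so no conversion is needed:
# QTInput("capture.mov", color=2)
```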

PDR

04-08-2011, 09:54 AM

PDR

You mentioned a slightly different approach for preventing color shifting that didn't impose a shift of the whole image. You said "a dozen pages earlier", but can you point me to that post, can't find it myself.

When i take screengrabs from the Ninja footage and the corresponding processed AviSynth footage, and zoom in 200% in photoshop, i see a blurring of fine detail that i'd like to keep - such as facial hair, patterns on small leaves, ... I gather that that's part of the shifting/restoring?

Sorry for all the questions guys, but i want to get this perfect and i'm for 99% there!

Thanks,
Martin

The blurring might be from compression (DNxHD is not lossless), or you might be perceiving chroma artifacts and aliasing as "detail" - it is not true detail.

But the shifting (the whole image shifted vertically up) is likely from that version of the chroma fix. Note that your samples also show the interlaced chroma issues as well (a different problem from the shift). I don't know if you can look at Cb/Cr in FCP, but you can convert to RGB and look at, for example, the blue channel - you will see notching in the leaf sample (actually everywhere). If you don't know how to do this, I can post some examples.

BUT I would hold off on any chroma-related testing - it's possible the green border stripes and metadata are partially responsible for the shifting and chroma issues (but not the notching), so the end result might change. If the chroma planes aren't aligned properly, you can get something like this. The only reason I looked at your last uploaded samples was that there seemed to be a discrepancy between what you and Ralph were "seeing" - but we know that was a typo now.

If you want to look at the earlier chroma testing, the links are below. But I would hold off until a firmware update, and until you (or someone with a Ninja) shoot a better chroma test sample. You need objects with sharp color edges, preferably red on green or red on blue; color charts are fine. The script using Avisynth 2.6 still needs a shift (0.5 pixel vertically up) to look "perfect" IMO, but I would wait for the firmware fix and better chroma tests with the Ninja.

Mmmm... Is there another codec that comes to mind that is cross-platform and FCP can read and is as good as DNxHD, but remains Y'CbCr all the way?

Martin

Sorry for long post:

What do you mean by "as good"? As good editing performance? As good quality? As good compression? Avoids gamma issues (or retains the screwed-up gamma)?

QuickTime messes up the gamma on almost everything. I think QuickTime X has some other gamma options, and maybe the new version of FCP X does too. These gamma issues have plagued FCP/QuickTime for years. See if you can play with the settings or interpretation settings.

BUT what do you want it to look like? The AVCHD or the Ninja footage? Which one is "correct"? I would say the AVCHD footage should be the goal, but your starting footage (before processing or DNxHD) is already mismatched. Do you see the problem here? How did 5DtoRGB do the conversion? Does it go through an RGB intermediate? Also, how did you control exposure? The waveform monitor shows the black level too high in the AVCHD sample - was that because of the conversion, or was the original AVCHD like that? Note the waveform tracings I posted earlier measure Y' in Y'CbCr - this looks at the actual Y' data, not the RGB converted representation.

*I should clarify - you might be seeing something different than me because QT embeds a color profile in PNG's - depending on what program you are using and how you have it set up - it may or may not ignore the "flag". Some of the earlier examples might have been confusing - I should have explained this earlier:

To confuse you some more, what you are "seeing" in screenshots or in a media player is the RGB converted representation of Y'CbCr . You're NOT viewing the original Y'CbCr data. How that conversion is done (how the screenshot is taken), or what color profile is embedded affects how you see the final RGB image.

You have to be aware of HOW your application handles Y'CbCr => RGB conversions and what color model you are working in. QT uses ~1.8 gamma, but everything else in the industry uses 2.2 gamma. All hardware, displays, and broadcast monitors use 2.2 gamma. That's why everything is screwed up in quicktime. The industry standard for HD material is ITU Rec.709 for Y'CbCr <=> RGB conversions. This is the official spec, and covers everything from Blu-ray to the broadcast industry, in both NTSC and PAL regions. Some programs have color management - you can use and assign color profiles or working spaces, and control how those Y'CbCr <=> RGB conversions are handled. I think QT X and maybe the new FCP give you some more control.

Keeping everything in Y'CbCr allows you the most options because that is the original recording format - both AVCHD and ProRes are Y'CbCr (provided your downstream program can access it as Y'CbCr and has color management options). You can access the full luminance range, even "illegal" ranges Y' <16, Y' >235. BUT if it has been converted to RGB using ITU Rec.709, that dynamic range is lost (you will clip shadow and highlight details). By the strictest definition, you can access the highest dynamic range possible of the recorded format.
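
To make the range arithmetic above concrete, here's a minimal sketch (grayscale only, chroma ignored; the function names are my own, not from any program mentioned in this thread) of the two Y' mappings being discussed:

```python
def rec709_limited(y):
    """Strict limited-range conversion: Y' 16-235 maps to RGB 0-255.
    Anything outside that band ("illegal" overshoot/undershoot) is clipped."""
    return max(0, min(255, round((y - 16) * 255.0 / 219.0)))

def full_range(y):
    """Full-range ("PC") conversion: Y' 0-255 maps straight to RGB 0-255,
    so super-blacks and super-whites survive into RGB."""
    return y

# A super-white highlight recorded at Y' 245:
print(rec709_limited(245))  # 255 -- detail clipped away
print(full_range(245))      # 245 -- detail preserved for grading
```

The same recorded Y' value either survives or is destroyed purely depending on which matrix performs the conversion - which is the whole argument for staying in Y'CbCr as long as possible.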

To pass Y'CbCr in QT (at least on Windows), you need uncompressed fourcc "IYUV" AVI as input. The plane arrangement is important. QT will NOT treat other forms of uncompressed AVI (e.g. YV12, which has the Cb/Cr channel order reversed) as uncompressed Y'CbCr; it will convert to RGB using the screwed up gamma. This is the entire basis for the QT gamma issues (RGB conversion).
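
The plane-order point can be sketched in a few lines (toy 4x2 frame, made-up values - just the byte layout, not a real decoder):

```python
W, H = 4, 2                                # tiny 4x2 "frame"
y  = bytes([100] * (W * H))                # luma plane
cb = bytes([90]  * ((W // 2) * (H // 2)))  # Cb plane, 4:2:0 subsampled
cr = bytes([140] * ((W // 2) * (H // 2)))  # Cr plane, 4:2:0 subsampled

iyuv = y + cb + cr   # IYUV (I420): Y plane, then Cb, then Cr
yv12 = y + cr + cb   # YV12:        Y plane, then Cr, then Cb

# Identical planes, opposite chroma order: hand YV12 bytes to a reader
# expecting IYUV and Cb/Cr are swapped (red and blue trade places).
print(iyuv == yv12)  # False
```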

Uncompressed IYUV (huge filesizes) will pass through Y'CbCr and you should get the same gamma (the preview, which has been converted to RGB for display, might look different, but the end result should be the same). This is the same gamma shift workaround that works when encoding h264 through quicktime - there is no h264/mov gamma shift, because Y'CbCr is preserved (at least on Windows). So please test the IYUV sample in FCP to see if it's compatible. I also included a cut version of the original 2nd sample (009.mov, the Ninja ProRes with the leaves).

http://www.mediafire.com/?kmaszm70so074jb

In this example, both tiff screenshots have been processed in QT (so both have the same gamma, and should look the same in every application regardless of color profile settings). If you use a difference mask, you will see there are horizontal notching differences - even on this frame, which has no apparent combing at quick glance. This is because they have been fixed in the IYUV sample with the chroma fix. If you look at the original 009.mov sample or 009.tif image and look at the blue channel in Photoshop or AE, you will see this interlaced notching and aliasing as well - these are the HDMI chroma issues outlined earlier (it's easier to see in earlier examples, like Barry's wringer test). If you can look at Cb and Cr in the 009.mov sample (not sure if you can do that on a Mac), you will see how these artifacts transfer over to RGB (they already exist in the Y'CbCr original - and that's the basis for using the chroma fix in the first place).

Ralph B

04-08-2011, 10:32 AM

Updates to the first post:

1) Note that the script works with both NTSC and PAL cameras.

2) Downgraded the 1080 60i script to experimental status.

Ralph B

04-08-2011, 11:04 PM

Keeping everything in Y'CbCr allows you the most options because that is the original recording format - both AVCHD and ProRes are Y'CbCr (provided your downstream program can access it as Y'CbCr and has color management options). You can access the full luminance range, even "illegal" ranges Y' <16, Y' >235. BUT if it has been converted to RGB using ITU Rec.709, that dynamic range is lost (you will clip shadow and highlight details). By the strictest definition, you can access the highest dynamic range possible of the recorded format.

Question, PDR. Even though the digital video spec is 16-235, if there is picture information present from 0-255, will a television display it or will it be clipped? I've often wondered about this.

PDR

04-09-2011, 06:53 AM

Question, PDR. Even though the digital video spec is 16-235, if there is picture information present from 0-255, will a television display it or will it be clipped? I've often wondered about this.

If you mean Y' 0-255 (i.e. full range video), and assuming you have usable data in that region (many cameras record usable data Y' >235, but rarely Y' <16), it's not clipped - it will be displayed, but will barely be visible depending on the TV's calibration. The ITU specs purposely specify the range Y' 16-235 to allow for broadcast overshoot and undershoot. If you open up any professional BD or DVD (e.g. from Hollywood) and look at it in a waveform monitor, you will often see overshoots (but rarely undershoots). In my experience, your average TV out of the box will barely show anything beyond that range - as expected (more on the white side than the black); but some models are calibrated incorrectly out of the box.

Ralph B

04-09-2011, 09:30 AM

If you mean Y' 0-255 (i.e. full range video), and assuming you have usable data in that region (many cameras record usable data Y' >235, but rarely Y' <16), it's not clipped - it will be displayed, but will barely be visible depending on the TV's calibration. The ITU specs purposely specify the range Y' 16-235 to allow for broadcast overshoot and undershoot. If you open up any professional BD or DVD (e.g. from Hollywood) and look at it in a waveform monitor, you will often see overshoots (but rarely undershoots). In my experience, your average TV out of the box will barely show anything beyond that range - as expected (more on the white side than the black); but some models are calibrated incorrectly out of the box.

So you're saying that 16-235 is something of an advisory range and not a hard limit? If that's the case, then it's safe to do all your post work full range, 0-255, and know that it's going to be carried through to the end user. Now, whether the end user's TV is setup correctly, well, that's their problem. Yes?

PDR

04-09-2011, 09:40 AM

So you're saying that 16-235 is something of an advisory range and not a hard limit? If that's the case, then it's safe to do all your post work full range, 0-255, and know that it's going to be carried through to the end user. Now, whether the end user's TV is setup correctly, well, that's their problem. Yes?

Yes, it's not a hard limit.

BUT if you have full range Y' 0-255 data, you have to adjust it to Y' 16-235 on final export for delivery to make it legal, or it won't look right on 99.9% of setups. Full range Y' data displayed with a standard RGB conversion looks wrong. Full range Y' data displayed with a full range RGB conversion looks correct.

Note there are displays that support full range, but these are quite rare. Full range done correctly gives many benefits, including less banding because there are more gradations. In fact, Flash supports a full range flag, if you host your own Flash player (not something like YouTube or Vimeo, which re-encode). But full range isn't a conventional way of doing things, and not recommended. If you want, I can post some examples.

Be careful when you use the term "0-255"; you have to specify whether you mean Y' 0-255 or RGB 0,0,0-255,255,255. Most programs (when using filters) work in RGB, not Y'CbCr.

Many programs convert to RGB using a hard limit (e.g. if you take a screenshot, Y'CbCr is converted to RGB like this: Y' [16,235] => RGB [0,255], so Y' <16 and Y' >235 is clipped).

Some programs and hardware that do ITU Rec.709/601 conversions don't clip the data, they clamp the data. The distinction is important. In the latter, the data is still there, just "squished" (all the values Y' >235, Y' <16 are squished within the range Y' 16-235, instead of completely eliminated). This is why some programs don't strictly follow these guidelines. For example, Sony Vegas will use "studio RGB" with Rec.709 coefficients - this means Y' 0-255 gets "mapped" to RGB 0-255. A strict ITU Rec.709/601 conversion would "map" Y' 16-235 to RGB 0-255.

But some programs allow you to access Y' values before that RGB conversion - in that case that Y' data can be preserved. The order is important.

The other way to access all the data is to do a full range RGB conversion (i.e. Y' [0,255] => RGB [0,255]).

We've only been talking about Y', but this affects CbCr too - Rec.601/709 conversions specify CbCr in the 16-240 range, not the 16-235 range. So chroma is also clipped unless your program can access CbCr before the RGB conversion, or you do a full range RGB conversion.
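
As an illustration of how both luma and chroma excursions die at a strict conversion, here's a sketch of the standard 8-bit Rec.709 Y'CbCr => RGB formula (coefficients rounded to three decimals, hard clipping at the RGB stage; my own toy code, not any particular program's implementation):

```python
def rec709_to_rgb(y, cb, cr):
    """Strict limited-range Rec.709: Y' 16-235 / CbCr 16-240 -> RGB 0-255.
    The result is hard-clipped, so out-of-range source data is lost."""
    r = 1.164 * (y - 16) + 1.793 * (cr - 128)
    g = 1.164 * (y - 16) - 0.213 * (cb - 128) - 0.533 * (cr - 128)
    b = 1.164 * (y - 16) + 2.112 * (cb - 128)
    clip = lambda v: max(0, min(255, round(v)))
    return clip(r), clip(g), clip(b)

# Legal mid-grey passes through intact:
print(rec709_to_rgb(126, 128, 128))   # (128, 128, 128)

# A hot Cr excursion (250, above the legal 240) slams the red channel
# into the clip, so that chroma detail can't be recovered afterwards:
print(rec709_to_rgb(180, 128, 250))   # (255, 126, 191)
```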

Ralph B

04-09-2011, 10:08 AM

Yes, it's not a hard limit.

BUT if you have full range Y' 0-255 data, you have to adjust it to Y' 16-235 to make it legal or it won't look right on 99.9% of setups.
What setups are you referring to?

Full range Y' data displayed with a standard RGB conversion looks wrong.
What do you mean by "standard RGB conversion"?

If you want I can post some examples

Please do.

Many programs convert to RGB using a hard limit (e.g. if you take a screenshot, Y'CbCr is converted to RGB like this: Y' [16,235] => RGB [0,255], so Y' <16 and Y' >235 is clipped).

Some programs and hardware that do ITU Rec.709/601 conversions don't clip the data, they clamp the data. The distinction is important. In the latter, the data is still there, just "squished" (all the values Y' >235, Y' <16 are squished within the range Y' 16-235, instead of completely eliminated). This is why some programs don't strictly follow these guidelines. For example, Sony Vegas will use "studio RGB" with Rec.709 coefficients - this means Y' 0-255 gets "mapped" to RGB 0-255. A strict ITU Rec.709/601 conversion would "map" Y' 16-235 to RGB 0-255.

Understood.

But some programs allow you to access Y' values before that RGB conversion - in that case that Y' data can be preserved. The order is important.

What programs would that be?

The other way to access all the data is to do a full range RGB conversion (i.e. Y' [0,255] => RGB [0,255]).

Where would you do that, avisynth?

We've only been talking about Y', but this affects CbCr too - Rec.601/709 conversions specify CbCr in the 16-240 range, not the 16-235 range.

What are the practical applications where this comes into play?

martinbeek

04-09-2011, 02:39 PM

Just for illustration of the detail blurring, I've made two screendumps of a part of a frame of Ninja footage and the same frame after AviSynth. 200% enlarged in Photoshop, gamma adjusted.

All displays work in RGB. You're not viewing Y'CbCr directly - that data has to be upsampled and converted to RGB for display. 99.9% will use ITU Rec.709 for HD content; some will mistakenly use Rec.601, but the black and white points will be the same (the colors will be shifted slightly).

Standard range "legal" video decoded by a standard matrix means Y' 16-235 => RGB 0,0,0 - 255,255,255. Thus "black" is displayed at RGB 0,0,0, white at RGB 255,255,255. The black level in the video was Y' 16 and white Y' 235. Virtually all equipment is calibrated to display Y' 16 as black and Y' 235 as white. If your video has set Y' 0 as black and Y' 255 as white, it won't decode properly because the wrong RGB conversion is performed (most equipment can't handle or display full range video correctly). So your blacks will be crushed, and whites will be blown. In order to display full range video correctly, you need to decode with a full range matrix, i.e. Y' [0,255] => RGB [0,255]. Very little equipment or software supports full range, and of those that do, very few acknowledge the flags in a video stream (the switch has to be done manually). Flash is the only one I'm aware of that accepts a full range flag in the video stream.

Please do.

I can't figure out how to get blip.tv embedded, so you'll have to click on the links. You might not be able to see the "banding" depending on the quality or bit depth of your display. These are contrived examples, but they illustrate the point that full range is possible.

The first is your "regular" example: normal range video, decoded with a normal range RGB conversion. The 2nd is full range video decoded with full range RGB (a flag in the video stream tells the flash player to decode to RGB using full range). Both are regular 8-bit h.264, and non-dithered. Because black to white has 256 steps or gradations (0-255), there is less visible banding and posterization than a black to white of 220 steps (16-235). It's the same idea as (but worse than) 10-bit video, where there are 1024 "steps" (0-1023) in full range 10-bit.

http://www.blip.tv/file/4409086
http://i51.tinypic.com/2qwk6qa.png

http://www.blip.tv/file/4409087
http://i51.tinypic.com/xenccj.png
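
The gradation arithmetic behind that banding claim is simple enough to sanity-check:

```python
# Number of distinct black-to-white levels in each encoding scheme
studio_8bit = 235 - 16 + 1     # 220 steps
full_8bit   = 255 - 0  + 1     # 256 steps
full_10bit  = 1023 - 0 + 1     # 1024 steps

print(studio_8bit, full_8bit, full_10bit)  # 220 256 1024
# More steps across the same black-to-white ramp = smaller jumps between
# neighbouring levels, hence less visible banding in smooth gradients.
```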

What programs would that be?

Examples: I think FCP can do this, if it "sees" the data as Y'CbCr (it looks like DNxHD doesn't work for it). PP CS5 can with the "YUV" labelled filters. And of course avisynth.

Where would you do that, avisynth?

In avisynth you can specify the matrix and the range
e.g.

ConvertToRGB(matrix="PC.709")

"PC" range matrices are full range , "Rec" matrices are standard range. There are other options you can use - other coefficients you can use (other than BT709/601) if you use the colormatrix() filter, and other types of sampling for the RGB conversion (in avisynth 2.6, e.g. if you recall Grgur chose spline36 in one of the chroma fix scripts, you might choose bilinear for softer, or lanczos for sharper etc...) . The sampling algorithm affects how the chroma is upscaled to RGB. Sharper algorithms have more aliasing, softer algorithms have less aliasing but more blurring . Have a look at the documenation for the full list

What are the practical applications where this comes into play?

This comes into play all the time in practical applications. Just search for "chroma clipping" in the AF100 subforum - there were a few lively discussions there. Chroma clips before luminance in many cameras. The Canon DSLRs do this too with the red channel. So if you're monitoring exposure with zebras or a waveform monitor, you might think you're safe, but in fact one of the channels is clipped. The end result is not only less color information available, but the relative imbalance causes ugly, unnatural looking "color hues" in highlight areas.

(I'm talking about RGB here because the camera sensor and everything before recording actually works in RGB, but you can see this in CbCr in the recorded media as well). So by either using a full range RGB conversion, or if your application can access CbCr before the RGB conversion, you can preserve some of that data CbCr <16 and >240, similar to how you can preserve the luminance data Y' <16, Y' >235.
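
A quick made-up example of the "chroma clips before luma" trap, using the Rec.709 luma weights (the pixel values are invented to illustrate the point, not measured from any camera):

```python
# Rec.709 luma weights
KR, KG, KB = 0.2126, 0.7152, 0.0722

def luma(r, g, b):
    """Relative luma of an RGB pixel per Rec.709."""
    return KR * r + KG * g + KB * b

# A saturated red: the R channel is one step from clipping...
r, g, b = 254, 120, 60
print(round(luma(r, g, b)))   # 144
# ...yet luma reads ~144, nowhere near white, so zebras and a luma-only
# waveform stay silent while red highlight detail is about to be lost.
```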

Cosimo Bullo

04-09-2011, 04:51 PM

It seems the weird histogram crunching Barry was talking about goes away in jpeg mode. Can anyone else confirm?

PDR

04-09-2011, 06:08 PM

Just for illustration of the detail blurring, I've made two screendumps of a part of a frame of Ninja footage and the same frame after AviSynth. 200% enlarged in Photoshop, gamma adjusted.
<SNIP>

Ps. Ninja = ProResHQ, AviSynth = DNxHD

Not sure what's going on there. The blurring of the hair of the leaves is probably partly due to that version of the chroma fix shift, but the grain pattern in the background of those screenshots doesn't even look like it's the same frame.

Some of it might be Quicktime or FCP issues and RGB conversion.

Some of it might be DNxHD compression, but I'm getting different results.

I didn't use GrgurMG's chromafix, the one I used was

FixBrokenChromaUpsampling()
ConvertToYV12()

Here are screenshots you can examine in Photoshop or whatever program. You will notice that even the grain pattern is the same, but slightly less in some areas (those are compression losses). The biggest difference is the lack of chroma aliasing. It's very prevalent in the blue channel in the ProRes shot. At a quick glance of the RGB composite you might miss the aliasing, especially in motion.
http://www.mediafire.com/?2rhr9ik5yo5nqce

Now my question is, which one is real? Which one is displaying what's actually on the raw file?

Ralph B

04-09-2011, 06:29 PM

It seems the weird histogram crunching Barry was talking about goes away in jpeg mode. Can anyone else confirm?

Yes, I confirm. Not sure what it means, or if there's any practical value to it, but the histogram does not change when you're recording in MJPEG mode.

PDR

04-09-2011, 06:30 PM

Now my question is, which one is real? Which one is displaying what's actually on the raw file?

Neither.

You're converting to RGB, and possibly incorrectly if the signal is interlaced (there is an interlaced=true and interlaced=false switch in ConvertToRGB()), and if you do the conversion incorrectly, you will get CUE (chroma upsampling error). But the HDMI signal is screwed up to begin with, so.....

So the HDMI signal is Y'CbCr 4:2:2; if you convert it to RGB, it isn't exactly what's on the "raw" file. It's an RGB representation of what's on the "raw" file.

A full range matrix means you see all of the data - if you had data in Y' <16 and Y' >235, those would not be clipped if you converted to RGB (i.e. you can salvage super brights and super darks).

Ralph B

04-09-2011, 06:35 PM

Neither.

You're converting to RGB, and possibly incorrectly if the signal is interlaced (there is an interlaced=true and interlaced=false switch in ConvertToRGB()), and if you do the conversion incorrectly, you will get CUE (chroma upsampling error). But the HDMI signal is screwed up to begin with, so.....

So the HDMI signal is Y'CbCr 4:2:2; if you convert it to RGB, it isn't exactly what's on the "raw" file. It's an RGB representation of what's on the "raw" file.

A full range matrix means you see all of the data - if you had data in Y' <16 and Y' >235, those would not be clipped if you converted to RGB (i.e. you can salvage super brights and super darks).

Chroma issues aside, which histogram is accurately representing what the luminance is on the raw file?

PDR

04-09-2011, 06:41 PM

Chroma issues aside, which histogram is accurately representing what the luminance is on the raw file?

Neither, the way you have it set up.

That is a histogram looking at RGB. Strictly speaking, luminance Y' refers to Y'CbCr, and the "raw" file is Y'CbCr. So the luminance values reported by that filter will change depending on how you did the RGB conversion - you can see that this is true in your 2 screenshots.

So the "True" value is the one read from the original file, not the RGB representation of it.

Background info: usually any filter used in vdub causes an RGB conversion. That conversion uses Rec.601 if you let vdub do the RGB conversion. A Rec.601 conversion will shift the colors slightly (incorrect for HD). When you use video=>full processing mode in vdub, this also does an RGB conversion with the default settings. You bypass color space conversions in vdub by using video=>fast recompress. This way you keep the same "color space" or "color model". Now, some newer beta versions of vdub handle things slightly differently - you have different options for input and output color depths/spaces - but basically it still works the same. So in your first example, the file directly input is actually converted to RGB by vdub, not kept in the native Y'CbCr, because you are using "full processing mode" when using a filter. This is one of the reasons why many users prefer to use avisynth for everything including filtering (you have more control). And if you need to use vdub filters, do the RGB conversion in avisynth - you have more control. For example, vdub doesn't do interlaced YV12 chroma sampling correctly. Anyway, there are a few more bugs/quirks in vdub, I won't get into them all here - the point is you're not accurately accessing the luminance.

If you've recorded a YUY2 4:2:2 format (e.g. ut video codec 422), then you can use avisynth's Histogram() or VideoScope(). Histogram() will be on the side, so you can use TurnRight()/TurnLeft(). The "brown" areas are "illegal", i.e. Y' <16, Y' >235. It will show up as yellow if you have "illegal" excursions. This "classic" version of the histogram is actually a waveform monitor (the x-axis isn't 0-255). If you want the histogram version with the x-axis showing 0-255, use Histogram("levels") - this will also show the CbCr histogram.

e.g.

AVISource("YUY2.avi")
TurnRight()
Histogram()
TurnLeft()

or

AVISource("YUY2.avi")
Videoscope("bottom")

Videoscope only accepts YUY2 input (so not YV12), but the waveform tracing is probably more like what you're used to.

If you haven't already, AvsPmod is a great tool to view and edit scripts. Push F5 to preview. It has programmable sliders and macros; you can compare script versions in different tabs (hit number keys to swap).

But if you're looking for realtime monitoring of the live HDMI feed, I don't think any of these will give you what you want.

improser

04-10-2011, 03:19 PM

Are any of these investigations about video ranges helpful with the banding issue of the GH2?

PDR

04-10-2011, 04:48 PM

Are any of these investigations about video ranges helpful with the banding issue of the GH2?

not the version of banding you're talking about

there are 3 different types of banding that people are mixing up

1) from poor compression (e.g. vimeo or youtube), lack of pre-processing & dithering - increased range, better pre-processing, and using a better encoder will help with this kind

2) the lowlight fixed pattern noise, like vertical or horizontal streaks - increased range won't help with this

3) bad sensor issue - I don't know what this is, but it's not from 8-bit compression. An example is the chem trails video - not much will help with this type either. The amount of grain you need to "cover it up" effectively is too enormous to be practical. To be effective for distribution, you would require much higher bitrates as well

improser

04-10-2011, 05:40 PM

not the version of banding you're talking about

there are 3 different types of banding that people are mixing up

1) from poor compression (e.g. vimeo or youtube), lack of pre-processing & dithering - increased range, better pre-processing, and using a better encoder will help with this kind

2) the lowlight fixed pattern noise, like vertical or horizontal streaks - increased range won't help with this

3) bad sensor issue - I don't know what this is, but it's not from 8-bit compression. An example is the chem trails video - not much will help with this type either. The amount of grain you need to "cover it up" effectively is too enormous to be practical. To be effective for distribution, you would require much higher bitrates as well

You're right, I'm referring to the third type, correctly named posterization.

I'm very sad about that. Maybe an open letter to Panasonic asking them to fix it? Just like the Canon 5D 24fps request.

PDR

04-10-2011, 06:15 PM

You're right, I'm referring to the third type, correctly named posterization.

I'm very sad about that. Maybe an open letter to Panasonic asking them to fix it? Just like the Canon 5D 24fps request.

You can correctly apply the term "posterization" to #1 as well. Macroblocks assigned high quantizers by the encoder cause the same effect. The difference with #3 is that HDMI recording at high bitrates doesn't eliminate or improve the issue, so this indicates some problem upstream (maybe the sensor, maybe in-camera processing?).

Someone in one of the threads said Panasonic is aware of the issue (#3)

I don't think Panasonic will do anything about the 5D 24fps :)

martinbeek

04-11-2011, 01:32 AM

Hi PDR.

There is no FCP involved, just pre/post Avisynth footage. Same frame.
I'll have a look at your upload. I've used different versions of the script as it appears in this group, all with the same result.

Martin.

Not sure what's going on there. The blurring of the hair of the leaves is probably partly due to that version of the chroma fix shift, but the grain pattern in the background of those screenshots doesn't even look like it's the same frame.

Some of it might be Quicktime or FCP issues and RGB conversion.

Some of it might be DNxHD compression, but I'm getting different results.

I didn't use GrgurMG's chromafix, the one I used was

FixBrokenChromaUpsampling()
ConvertToYV12()

Here are screenshots you can examine in Photoshop or whatever program. You will notice that even the grain pattern is the same, but slightly less in some areas (those are compression losses). The biggest difference is the lack of chroma aliasing. It's very prevalent in the blue channel in the ProRes shot. At a quick glance of the RGB composite you might miss the aliasing, especially in motion.
http://www.mediafire.com/?2rhr9ik5yo5nqce

martinbeek

04-11-2011, 01:42 AM

Here are screenshots you can examine in Photoshop or whatever program. You will notice that even the grain pattern is the same, but slightly less in some areas (those are compression losses). The biggest difference is the lack of chroma aliasing. It's very prevalent in the blue channel in the ProRes shot. At a quick glance of the RGB composite you might miss the aliasing, especially in motion.
http://www.mediafire.com/?2rhr9ik5yo5nqce

PDR, I had a look at those images. I assume you didn't do any postprocessing on them? I can't believe my eyes. I see a totally different result; in my case the footage is worlds apart. Colors all different, gamma issues, blurry. Even without the chroma fix, I'm looking at something totally different here.
As far as I know I use the same script, same batch script and same codec as you. What program did you load the footage into to make the framegrabs?

Martin.

PDR

04-11-2011, 06:42 AM

PDR, I had a look at those images. I assume you didn't do any postprocessing on them? I can't believe my eyes. I see a totally different result; in my case the footage is worlds apart. Colors all different, gamma issues, blurry. Even without the chroma fix, I'm looking at something totally different here.
As far as I know I use the same script, same batch script and same codec as you. What program did you load the footage into to make the framegrabs?

No other processing, other than choosing the matrix. I took the screenshots of frame #0 from avspmod, using the Rec.709 matrix for the Y'CbCr=>RGB conversion. Everything else was the same, except for the chroma fix part of the script. If you use the one I used, there is no vertical shift (at least not a large one), so that eliminates shifting as the cause for what you're seeing as "blurring".

Even if I take the screenshots through QT player (i.e. open up the DNxHD footage in QT and export a screenshot), the pattern of the grain is the same.
http://www.mediafire.com/?ybe6qqolnytly2q#2

You may or may not see a gamma difference depending on whether your application reads the embedded color profile tag, but the small details are the same. Even if you ignore gamma issues, your screenshots look different: the background grain looks like it was taken from a different frame #.

The only other thing I can think of is you might be using the proxy bitrate for DNxHD (not the 175Mb/s version). Make sure the ffmpeg command line was entered correctly

Danielvilliers

04-11-2011, 12:12 PM

Very, very affordable recorder from Blackmagic: http://www.blackmagic-design.com/products/hyperdeckshuttle/ With it and the Atomos, the GH2's HDMI output becomes a very viable and cost effective solution. Who would have thought that we would be able to get that level of quality for less than $1500. Thanks to everyone who has made it possible on this thread :-)

Ralph B

04-11-2011, 03:12 PM

Very, very affordable recorder from Blackmagic: http://www.blackmagic-design.com/products/hyperdeckshuttle/ With it and the Atomos, the GH2's HDMI output becomes a very viable and cost effective solution. Who would have thought that we would be able to get that level of quality for less than $1500. Thanks to everyone who has made it possible on this thread :-)

Amazing price, but probably not the best choice for the GH2 because it doesn't have analog audio inputs. Remember, if the HDMI capture device has audio inputs, you can record audio along with the HDMI picture, and then correct the time delay when you process the files in Avisynth. The beauty of that, is your clips are ready for editing - no manually syncing up audio.

GrgurMG

04-12-2011, 12:40 PM

Just getting caught up with this thread, been busy. Glad we're still going strong.

Upon reading, I've seen the color (saturation/etc) differences between the AVCHD and HDMI brought up - once by Barry's anti-GH2 HDMI manifesto and then here...

BUT what do you want it to look like? The AVCHD or the Ninja footage? Which one is "correct"? I would say the AVCHD footage should be the goal, but your starting footage (before processing or DNxHD) is already mismatched.

If I recall correctly, it's been brought up before that the AVCHD footage can be slightly oversaturated in certain situations, which can sometimes slightly obscure some finer detail. In which case, having the slightly muted HDMI footage is actually a slightly better starting point before post-production if you're trying to extract/optimize every last bit of color/luma detail.

Obviously the difference can be a pain when monitoring, as Barry was saying, but it really depends on your workflow and needs. If you're using a monitor exclusively with the GH2, I imagine you can just adjust the monitor (+saturation/etc) accordingly to compensate. For some, all these "adjustments" may not be worth it, but that really depends on which side of the thought-process, time-constraint, and/or budget line you're on.

Some would much rather pay a premium for things that are as quirk-free and versatile as possible... that's one reason professional equipment is so expensive... at times overly expensive. On the other side of the coin are people who are on a budget or otherwise just prefer the "makeshift" way of life. Those, like myself, who don't find "jimmy rig" to be a dirty word. For these people, all these little adjustments, or having to do some extra post-processing on something, are completely acceptable and worth it. There are many types of equipment, from rigging to lighting, that you can put together much, much cheaper yourself; it just takes time and some extra thought to accomplish. I find projects like this (GH2 HDMI output) to fall under that same category. Obviously, again, aside from budget, time constraints can also be a factor, but for those with a lax turnaround on whatever they're doing, letting your computer render while you catch some Zs isn't a problem.

Even if you're not on that much of a budget, GH2 HDMI can still prove useful for things such as long continuous recording (assuming you have the HDD space or capture-device card swappability to surpass the record time of your SD card). If for whatever reason you need to record in very low light, without extra lighting, and post production is still required even with a fast lens, the GH2 HDMI footage can certainly come in handy in preserving those dark details. Also, if you're doing noise reduction in post, having the cleaner (finer noise) footage is a plus as well.

:costumed-smiley-047

GrgurMG

04-12-2011, 12:46 PM

Also... Quicktime... ick. :zombie_smiley:

mambomobil

04-17-2011, 10:05 AM

We have also had a GH2, the European PAL model (DMC-GH2K), for a few days. There seems to be a problem with the HDMI out. We tried to connect it to some devices - including an AJA IO Express - and the camera signal will not come in. We tried it on a Panasonic beamer too and it works - so it seems not to be a hardware problem. I was already wondering if there could be an HDCP implementation (which would be really mean...)? Does anybody here have an idea?
thanks in advance -

tobias

Ralph B

04-17-2011, 10:57 AM

We have also had a GH2, the European PAL model (DMC-GH2K), for a few days. There seems to be a problem with the HDMI out. We tried to connect it to some devices - including an AJA IO Express - and the camera signal will not come in. We tried it on a Panasonic beamer too and it works - so it seems not to be a hardware problem. I was already wondering if there could be an HDCP implementation (which would be really mean...)? Does anybody here have an idea?
thanks in advance -

tobias

It sounds like something's going on with the AJA device. People have successfully captured HDMI output from PAL GH2's with Blackmagic Intensity, Atomos Ninja, and Convergent Nanoflash.

Double check to make sure your input setting is 50i.

You mention "some devices". What else besides the AJA have you tried?

mambomobil

04-17-2011, 12:24 PM

hi Ralph,

Thank you for your fast answer. We also tried the GH2 on a Samsung SyncMaster which has a "native" HDMI in (not converted from DVI or something...). This monitor has never had a problem showing 50i (we've tried that many times before with different cameras). Tomorrow I will do another test with the AJA. The strange thing is: our Panasonic beamer (PT-AE1000) shows the picture...
Thanks again - will come back tomorrow.

Tobias

Ian-T

04-17-2011, 03:00 PM

Even if you're not on that much of a budget, GH2 HDMI can still prove useful for things such as long continuous recording (assuming you have the HDD space or capture-device card swappability to surpass the record time of your SD card). If for whatever reason you need to record in very low light, without extra lighting, and post-production is still required even with a fast lens, the GH2 HDMI footage can certainly come in handy in preserving those dark details. Also, if you're doing noise reduction in post, having the cleaner (finer-noise) footage is a plus as well.

:costumed-smiley-047

Amen. This is what I've been saying all along.

Ralph B

04-19-2011, 07:44 PM

I have an advanced 24P script in the works that fixes the residual problems of the original script. Stay tuned.

martinbeek

04-20-2011, 03:08 AM

I have an advanced 24P script in the works that fixes the residual problems of the original script. Stay tuned.

Wow! I'm looking forward to it!!

Martin Beek.

martinbeek

04-20-2011, 03:14 AM

I have tried to wrap things up for the readers of our blog at http://marvelsfilm.wordpress.com
Jorgen Escher at http://colorbyjorg.wordpress.com has made things even more digestible by making a zipped folder with all you need to get up and running with AviSynth. It's targeted towards Ninja (ProRes HQ) and Mac users, but apart from some minor script and batch changes, the bulk remains the same. Jorgen's zip file contains everything you need to install, including a predefined directory structure and the scripts that feature in this thread.
Neither Jorgen nor I are experts on the subject, but we hope it helps people get started.

Martin Beek.

Cosimo Bullo

04-20-2011, 10:14 AM

Martin, that's great. But please make sure Ralph B is being given his credit for this work.

PDR

04-20-2011, 10:30 AM

Ralph - are you referring to missing fields, dupe detection + decimation, and interpolating a new frame using mvtools? (Are you "rec" on D9?)

If so, the dupe detection isn't the problem - the problem is that the interpolation often isn't very "clean", often worse than a dupe.

In the examples so far, the dropped field has another field pair you can use to reconstruct the frame, which would give a much cleaner frame. The problem is there's no way of knowing whether it's the top field or the bottom field that has been dropped when you are field matching. E.g., if you use AssumeTFF() you might miss the "good" field, but if you used AssumeBFF(), you might miss the other good field in other field drops. You would have to somehow adaptively switch field order.

I can explain in more detail if this is what you're talking about

But if you're thinking about something else, then disregard this

Ralph B

04-20-2011, 12:49 PM

I have tried to wrap things up for the readers of our blog at http://marvelsfilm.wordpress.com
Jorgen Escher at http://colorbyjorg.wordpress.com has made things even more digestible by making a zipped folder with all you need to get up and running with AviSynth. It's targeted towards Ninja (ProRes HQ) and Mac users, but apart from some minor script and batch changes, the bulk remains the same. Jorgen's zip file contains everything you need to install, including a predefined directory structure and the scripts that feature in this thread.
Neither Jorgen nor I are experts on the subject, but we hope it helps people get started.

Martin Beek.

Hey Martin, that's great! I'm all for making things easy for people to use, and I've endeavored to do that in the first post. But, this takes it to another level. Good job!

martinbeek

04-21-2011, 02:21 PM

Martin, that's great. But please make sure Ralph B is being given his credit for this work.

Absolutely! Done!
Will be raising a statue of him in my front garden as soon as i get his picture...

Martin

martinbeek

04-21-2011, 02:23 PM

Ralph.

I've read this here: http://blendervse.wordpress.com/2010/09/10/ycbcr-to-rgb-by-3d-lut-via-avisynth/
A bit above my station, but some interesting sounding stuff! Could that be a better solution for the last line of the Avisynth script?

Martin

Ralph B

04-22-2011, 11:35 PM

ADVANCED SCRIPT IS HERE!

I have updated the first post with an advanced script for processing HDMI 24P footage.
This new script solves the known issues of the original script, plus it runs faster!

The original script worked well most of the time, but occasionally a duplicate frame would slip through. On closer investigation, I discovered the problem wasn't the script, but that the GH2 never sent the frame across the HDMI in the first place! At this point, one might throw up one's hands and declare, "you can't fix what isn't there!" Or can you? ...

Most of you have experienced motion vector technology, even if you don't know it by name. It's the process used in televisions to create new frames so you can watch a 24 fps movie at 120 fps (whether that's good or bad is of no concern here). Avisynth has motion vector capability in the form of mvtools, and I have used it to recreate the missing frames.

Here is an example. This is a scene processed with the original script. As the car approaches the stop sign, you will see it stutter. These are duplicate frames that fill in for missing frames that were never sent across the HDMI.
http://www.sendspace.com/file/89g6ja

And here's the same scene processed with the advanced script.
http://www.sendspace.com/file/6k4g78

There's more good news. Another issue with the original script was a few duplicate frames that would appear in the first second of each clip. This seemed to be a problem with fdecimate but, no matter, motion vector technology fixes them too!

I reprocessed every HDMI clip I have with the advanced script, and I had 100% success - clean, smooth motion - not one stutter or duplicate frame.

In order to run the advanced script, you will have to download mvtools2.dll and place it in your Avisynth plugins folder. Get it here:
http://avisynth.org.ru/mvtools/mvtools2.html
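
For anyone curious what such a dupe-filling step can look like, here is a hedged sketch of the general MVTools2 technique - an illustration only, not necessarily the exact code in the advanced script (the detection threshold and analysis settings are my assumptions):

```avisynth
# Sketch: replace frames that are near-duplicates of the previous frame
# with a motion-interpolated frame built from their neighbours (MVTools2).
function FillDrops(clip c) {
    super = MSuper(c, pel=2)
    vf = MAnalyse(super, isb=false, delta=1, truemotion=true)
    vb = MAnalyse(super, isb=true, delta=1, truemotion=true)
    filled = MFlowInter(c, super, vb, vf, time=50)
    # If a frame barely differs from its predecessor, treat it as a dupe
    # and substitute the interpolated frame (0.1 is an assumed threshold).
    return ConditionalFilter(c, filled, c, "YDifferenceFromPrevious()", "lessthan", "0.1")
}
```

You would call a function like this after the pulldown removal, on a YV12 clip - which is one reason the ConvertToYV12 step matters for the filldrops stage.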

The other area of improvement concerns the Chroma Fix. Taking PDR's suggestion, I carefully examined his "short" Chroma Fix and compared it to GrgurMG's "long" Chroma Fix. There was an infinitesimal difference, but I literally had to put a magnifying glass up to the screen to see it. At any normal viewing distance, they were identical. But the "short" Chroma Fix renders twice as fast! I think most people will prefer that, so that's what I put in the advanced script. However, you're always free to use the "long" one if you want, and I've left it in the first post.

Sage

04-23-2011, 07:00 AM

Way to go Ralph!
Can the 60i script be perfected likewise?

Ralph B

04-23-2011, 09:42 AM

Way to go Ralph!
Can the 60i script be perfected likewise?

I'm working on it, but don't hold your breath. The 60i stuff is really gnarly.

mambomobil

04-23-2011, 10:06 AM

Finally I found the problem with the AJA IO Express - with the PAL GH2 it does not work because there is no 50i input... - but I made a test with the Ninja and it works perfectly!
Since the camera outputs 50i at the HDMI, I would love to convert it to 25p instead of 24p but keep the advantages of these great scripts. I was trying around with the "Assume(24000,1001)" line, changing it to "Assume(25,1)" - without result. So I guess I will have to get rid of the pulldown?
Thanks for your help -

Tobias

PDR

04-24-2011, 10:49 AM

Finally I found the problem with the AJA IO Express - with the PAL GH2 it does not work because there is no 50i input... - but I made a test with the Ninja and it works perfectly!
Since the camera outputs 50i at the HDMI, I would love to convert it to 25p instead of 24p but keep the advantages of these great scripts. I was trying around with the "Assume(24000,1001)" line, changing it to "Assume(25,1)" - without result. So I guess I will have to get rid of the pulldown?
Thanks for your help -

Did you record at 24p?

If you've applied Ralph's script and changed the line to AssumeFPS(25), this will do a simple PAL speedup (i.e. keep the same number of frames, just speed it up from 23.976).

If you are recording analog audio, add "true" to speed up the audio as well:
AssumeFPS(25, true)

If you were feeding that .avs script into ffmpeg to encode to DNxHD, you have to modify those parameters also (the parameters in the earlier post are only valid for 1080p23.976).

Note it's "AssumeFPS", not "Assume", and check for the latest script - he's added some recent improvements.
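
Putting that together, the tail of a hypothetical PAL-speedup variant would simply be:

```avisynth
# ... pulldown removal and chroma fix from Ralph's script above ...
AssumeFPS(25, true)  # PAL speedup: retime 23.976 -> 25 fps; "true" retimes the audio too
```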

Ralph B

04-24-2011, 12:51 PM

IMPORTANT ALERT

If you downloaded the Advanced script in the last two days, be aware there is a formatting error in it.

DO NOT USE IT!

The corrected script is now available in the first post.

Boon

04-25-2011, 04:26 AM

does AviSynth run on the Mac?

Marcelo Lessa

04-25-2011, 07:59 AM

This may be an obvious question, but I'm new here and really curious about getting a Ninja or some other external HDMI recorder for my GH2. Is the only workaround for the pulldown issue achievable through Windows? Is there no OS X solution?
Also, is the image quality produced through this process that noticeably superior to its AVCHD counterpart--especially given the intense workflow?
Thank you for your time.
Best,
Marcelo

Ian-T

04-25-2011, 08:32 AM

Marcelo, though the answer to your question is subjective, I think the positives far outweigh the negatives. The image to me is sharper (though the GH2 doesn't seem to need a lot of help in that area) and you get what looks to be more detail in the dark areas (which means no more smearing). And... absolutely no more compression artifacts (including the blocking). There will be certain things that remain due to its 8-bit nature... but..

mambomobil

04-25-2011, 10:21 AM

Did you record at 24p?

If you've applied Ralph's script and changed the line to AssumeFPS(25), this will do a simple PAL speedup (i.e. keep the same number of frames, just speed it up from 23.976).

If you are recording analog audio, add "true" to speed up the audio as well:
AssumeFPS(25, true)

If you were feeding that .avs script into ffmpeg to encode to DNxHD, you have to modify those parameters also (the parameters in the earlier post are only valid for 1080p23.976).

Note it's "AssumeFPS", not "Assume", and check for the latest script - he's added some recent improvements.

Thanks PDR! As far as I can see, that should work. But I have another question concerning the readout of the chip: do you know whether the PAL model also reads out at 23.976 fps? Panasonic says in a flyer the chip "produces" 50p, which is interlaced in the camera. In my opinion that would change something in the "remove pulldown" section(?). The output at the HDMI is 50i either way, whether the camera is running or not.
The Ninja I was capturing with sees only this 50i signal - and also records 50i.
Another thing: after running the Avisynth script with VirtualDub, I imported the AVI into Premiere - and I have some doubled frames again (which seemed solved in VirtualDub...).

Sorry for bothering you people so much with maybe stupid questions - but I've got the feeling we are almost there...

thanks again -

tobias

Ralph B

04-25-2011, 10:58 AM

After running the Avisynth script with VirtualDub, I imported the AVI into Premiere - and I have some doubled frames again (which seemed solved in VirtualDub...).

When working with Premiere (or any video editing program), be sure the frame rate for your footage matches the frame rate of your project. If not, the program will conform the footage to match the project's frame rate, and may produce either duplicate or blended frames.

In this case, the frame rate of the Premiere project should be 23.976 because that's the frame rate of the clips after they emerge from Avisynth.

mambomobil

04-25-2011, 11:50 AM

Thanks Ralph, that was it! Any suggestions on this "readout" issue? Is there a difference in the HDMI output when I change from 24p to 50i in the camera setup (concerning readout)?

PDR

04-25-2011, 11:56 AM

does AviSynth run on the Mac?

Not directly. Some Mac users were using VirtualBox, I think; scroll back a few pages.

I think Martin set up a blog page for Mac users with clearer instructions.

but I have another question concerning the readout of the chip: do you know whether the PAL model also reads out at 23.976 fps? Panasonic says in a flyer the chip "produces" 50p, which is interlaced in the camera. In my opinion that would change something in the "remove pulldown" section(?). The output at the HDMI is 50i either way, whether the camera is running or not.
The Ninja I was capturing with sees only this 50i signal - and also records 50i.

This works when you record using 24p in camera, not the live view, which is 50i, low resolution, and lower DR.

If you go back a few pages, the setup was used for testing on PAL GH2 examples with the Ninja (Martin); there might be some differences with your setup with the IO Express.

I don't know if it works for any other configurations. You would have to test it or provide samples

Is there a difference in the HDMI output when I change from 24p to 50i in the camera setup (concerning readout)?

If it's like the NTSC model, there are extra fields inserted (instead of 60i + extra fields, you probably have 50i + extra fields). I think Ralph put up a 60i script.

Another thing: after running the Avisynth script with VirtualDub, I imported the AVI into Premiere - and I have some doubled frames again (which seemed solved in VirtualDub...).

If you were using the modified script for PAL speedup - AssumeFPS(25) - then you have to interpret the footage in Premiere as 25 fps (if it isn't already identified as 25p), and use 25 fps sequence settings.

Otherwise it's as Ralph said above (23.976)

martinbeek

04-28-2011, 09:10 AM

Is there any gain from using ConvertToYUY2 instead of YV12?
Much slower, that's for sure...

Martin

Ralph B

04-28-2011, 10:24 AM

Is there any gain from using ConvertToYUY2 instead of YV12?

Martin

No. ConvertToYV12 is part of the Chroma Fix and should be used for optimum performance. It's also essential for the filldrops function to work in the Advanced Script.

martinbeek

04-28-2011, 03:11 PM

Hello All.

An update for MAC/ProRes users who render to DNxHD MOV files.
I've found that the following addition to the script (placed before ConvertToYV12) results in a perfect match between the original HDMI footage and the converted DNxHD footage.

-I'm about to start a documentary film in which we're going to use two GH2s (PAL version) with several lenses.

The thing is that I'm trying to decide on a good workflow for the editing process, as there's going to be a lot of material and the shooting is going to take several months.

-Most of the documentary involves interviews; that's why we decided on the GH2 (no time limit, no mirror, no heat, etc.). Also, quality is much better (to my eye). With the right lenses you can get a depth of field as shallow as with a 5D, 7D, etc.

-We want to use two Atomos.

-Boom microphone is going to be wired just to the "A" camera.

-Editing is going to be made in Media Composer in Mac Machines (both desktop and laptop)

-So, can you guys help me with the workflow?

Here's a first approximation:

1. Record the HDMI out to the Atomos (ProRes at max resolution. I think the out signal is 50i, right?)

2. Convert to 24 fps DNxHD 185 with Ralph B's script. BUT with which software on Mac OS X??? ffmpegX?? This is the point where I'm COMPLETELY LOST.

I've considered recording just AVCHD to the cards, but I think the HDMI workflow delivers much more room for color correction (no blocks in shadows) and won't take much more time, as the AVCHD workflow also involves transcoding from the original files. Editing native files is a pain in the ass for all machines.

What do you think, guys?

Thanks for your help.

Ralph B

05-01-2011, 10:32 PM

Here's a first approximation:

1. Record the HDMI out to the Atomos (ProRes at max resolution. I think the out signal is 50i, right?)

We need to stop right here. Are you shooting 24P or 50i into the Ninja? If you're shooting 50i, then you need no further processing and can go straight to editing (if my understanding of the PAL GH2 is correct).

If you're shooting 24P, then you need to go the Avisynth route.

I have neither a PAL GH2 nor a Mac, so I can't help you much beyond this.

Cosimo Bullo

05-01-2011, 10:42 PM

Could you all post some more examples of your footage with this fix? Thanks!

Ralph B

05-01-2011, 11:13 PM

Could you all post some more examples of your footage with this fix? Thanks!

Rather than me uploading processed footage, why don't you install Avisynth and the related software, and I'll upload a raw clip or two for you to process. Deal?

Ian-T

05-02-2011, 05:29 PM

I'm down for some raw footage. Bring it on. :)

Ralph B

05-02-2011, 09:10 PM

Raw Footage

Here are two raw GH2 HDMI clips, one captured with UT 422 lossless codec and the other captured with Blackmagic MJPEG codec.
I did this because they're both excellent codecs, they're free, and anybody doing HDMI capture should seriously check them out. And not just for capture... they're both viable options to use for your renders.

UT is available here:
http://www.free-codecs.com/download/Ut_Video_Codec_Suite.htm

On the Blackmagic support page, click on the Intensity download for your OS. On the next page, don't enter any information - just click Download Now. Save the file. Run it. This will install the drivers for the Intensity, even if you don't have an Intensity card present on your machine. But along with the drivers come all the Blackmagic codecs. You're free to use these codecs in all your video applications, no restrictions.

A side note: The first time I used the Blackmagic MJPEG codec, I thought it was defective because the colors were significantly altered. It turned out, the codec was fine, but my Nvidia graphics card needed to have Dynamic Range set to Full (0-255).

Keep in mind that Man on Roof was shot through a dirty window. Dog Walking was shot through a clean window.

timetraveller

05-02-2011, 10:02 PM

We need to stop right here. Are you shooting 24P or 50i into the Ninja? If you're shooting 50i, then you need no further processing and can go straight to editing (if my understanding of the PAL GH2 is correct).

If you're shooting 24P, then you need to go the Avisynth route.

I have neither a PAL GH2 nor a Mac, so I can't help you much beyond this.

Thanks, Ralph: I'm shooting 24p, so, given that the HDMI out is 50i when recording 24p, I have to go the Avisynth route...

martinbeek

05-05-2011, 07:39 AM

As a Mac user I was not entirely satisfied with the results I got from the DNxHD codec. There were some gamma issues, with different curves in different color channels, and the chroma was a bit too high - quite a difference from the original HDMI-recorded signal.

I've found a perfect solution for getting 100% identical footage (pre vs. post Avisynth processing) using the XDCamHD codec. A delight for pixel peepers! The bitrate is 92 Mb/s (higher rates result in buffer overflow errors, but 92 is already very high for XDCam). It gives much smaller files than DNxHD too.

You need to have the XDCam codec installed on the Mac only; AviSynth/ffmpeg does not need it. The XDCamHD codecs are automatically installed with the following "XDCam Transfer for Mac" application from Sony (direct link): http://www.sonybiz.net/res/attachment/file/12/1166605189212.ZIP

Remark: on the Mac, the codec is reported and labelled XDCamHD/50Mb, but the bitrate is really around 92 Mb/s.

Cheers!

Martin Beek.

PDR

05-05-2011, 11:08 AM

@martin - you probably don't want to include the ildct and ilme flags for ffmpeg, as they are meant for interlaced footage - interlaced DCT scan and interlaced motion estimation

martinbeek

05-06-2011, 01:04 AM

@martin - you probably don't want to include the ildct and ilme flags for ffmpeg, as they are meant for interlaced footage - interlaced DCT scan and interlaced motion estimation

But.., a great observation as ever; if you'd be of the female persuasion (nothing reveals the opposite btw), i'd give you a big sloppy kiss.

As a reward i'll disclose my source. That means that i'm admitting that i've just googled it up...
The original commandline from the post below didn't work (first because of a pair of quotes in the wrong place, then because of some weird error).

http://www.itbroadcastanddigitalcinema.com/ffmpeg_howto.html

Maybe there is more that can be tweaked.

Salute,

Martin

PDR

05-06-2011, 08:54 AM

In my opinion, MPEG2 isn't very good quality-wise, even at 100 Mb/s long-GOP. You need 250+ Mb/s I-frame-only for typical content. In addition, ffmpeg's MPEG2 encoder isn't very good; even the highest-quality unconstrained settings (-qscale 1 -intra 1 -qmin 1) won't reach that level.

Even without pixel peeping, looking at 1:1, you will see differences in quality. Fine details will be missing. I can show comparisons if you want. I thought the whole point of using the Ninja (or any external recorder) was to retain those details.

Unfortunately the mac software handles many Y'CbCr formats inappropriately with incorrect gamma, and you cannot encode to prores on a pc :(

I would check again about your gamma settings, because many articles say it's fixed in newest releases of snow leopard, QT X. Apparently you can choose 2.2 gamma so everything looks the same on PC and Mac now (and the rest of the world)

Another possibility is to explore rendering the Avisynth output to the various QuickTime codecs that are available on the PC. For example, "Animation" is lossless at 100, and the quality is still quite good as you drop to lower numbers. Photo-JPEG at 100 is also worth checking out.

And here's a long shot - instead of using QTinput, use directshowsource, and then go through the DNxHD procedure. Just try it.

PDR

05-06-2011, 01:31 PM

Animation is lossless at 100%, but only as RGB

A word of caution on using RGB intermediates: if you don't convert to RGB using full range in Avisynth (e.g. if you let ffmpeg or QuickTime do the conversion), you will clip your superbrights and superdarks. This is what Mac software is doing when it "sees" Y'CbCr formats like DNxHD as RGB (this is in addition to the gamma shift). Your software needs to be able to "see" it as a Y'CbCr format, or you need to map full range if you want to preserve all the data.
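
If you do want an RGB intermediate, a full-range conversion done in Avisynth itself (rather than by ffmpeg or QuickTime) would look something like this sketch - "PC.709" is Avisynth's full-range Rec.709 matrix:

```avisynth
# Map the full 0-255 code range into RGB so superbrights/superdarks survive,
# instead of the studio-range (16-235) mapping that clips them.
ConvertToRGB(matrix="PC.709")
```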

I've mentioned numerous times the problems with DirectShow; I'll try not to belabor the point. (It's even more important now not to use DirectShow, because mvtools is being used - the forward and backward vectors can get messed up.) If you decide to use DirectShow, then make sure the output pin is YUY2, not RGB, or you will clip the data. Use Info() to check. You can use GraphStudio to map your DirectShow filters.
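
If you do go the DirectShowSource route anyway, a quick sanity check along these lines (the file name is a placeholder) shows what the pin is delivering:

```avisynth
DirectShowSource("clip.avi")
Info()  # overlays the clip properties; the colorspace should read YUY2, not RGB32
```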

LeavingTheCandy

05-08-2011, 12:25 AM

Wow! What some Voodoo. I was thinking about trying my GH2 to my KiPro, but I think I'll have a beer instead.

Ralph B

05-08-2011, 08:44 AM

Wow! What some Voodoo. I was thinking about trying my GH2 to my KiPro, but I think I'll have a beer instead.

I hope you're not being scared off by all this tech talk. The basic process works extremely well, and is described in detail in the first post of this thread.
While there is certainly a learning curve, once you get over that hump, the process is completely automatic.

If you already have a KiPro, it would be a shame not to use it with the GH2.

martinbeek

05-08-2011, 03:02 PM

Hey guys.

That's pretty disappointing (re. the XDCamHD). I've been accused of pixel peeping many times, and this time I see no difference between the ProRes HQ and the MPEG, but - OK - I trust your judgement and knowledge on the matter.

I use the latest setup and Snow Leopard, et cetera. Still, the DNxHD footage doesn't look even close to the ProRes HQ Ninja-recorded footage.

Two questions.

1)
Can you give us Mac users some final, definitive advice - apart from going the Cineform route - that would provide the best picture quality and replication of the original Ninja footage with Final Cut Pro?

2) WHICH version of Cineform - if I want to follow that route - should I obtain, and do I need to pay for both a PC license AND a Mac license?

Feeling the workflow is close to perfection, but the results still look sh*t on the Mac...

Thanks, Martin.

Ralph B

05-08-2011, 05:08 PM

(re. the XDCamHD). I've been accused of pixel peeping many times, and this time I see no difference between the ProRes HQ and the MPEG

Are you saying that rendering to XDCam and then viewing that file on the Mac, looks identical to the original Prores on the PC? Then use it! You've solved the problem!

martinbeek

05-09-2011, 10:31 AM

Are you saying that rendering to XDCam and then viewing that file on the Mac, looks identical to the original Prores on the PC? Then use it! You've solved the problem!
Well... what I'm saying is that I cannot see the difference, but then again, I have no calibrated monitor or special measurement equipment to judge the results. I am just confused, just as LeavingTheCandy is, by all the tech details regarding RGB.
If you say that this is anything but lossless, I'll take your word for it - no mistake - and I'll dump XDCam right away.

I will now - as a test - first follow the route mentioned in this article:
It suggests converting the DNxHD footage to ProRes using Compressor; it seems to do a great job of restoring levels and gamma.
I'll let you know what I find. If you have suggestions, please let me know.

Martin.

PDR

05-09-2011, 11:11 AM

Well... what i'm saying is, that I can not see the difference, but then again, i have no calibrated monitor or special measurement equipment to judge the results.

You don't need any special equipment. Just take a screenshot of the same frame using the same method, and compare in Photoshop or a browser.

All that matters is whether YOU can see the difference. If you think it's fine, then go ahead and use it... But I'm telling you - it's not as good as DNxHD, not at the bitrates you're using. It might be "good enough" for whatever you're using it for...

If you use long-GOP MPEG2, you will have fluctuations in frame quality. B-frames will be a lot lower quality. I-frame-only will provide more consistent interframe quality, but you need about 250+ Mb/s for results similar to ProRes 422 or DNxHD 175.

This was taken from your examples:
http://www.mediafire.com/?135ece6dz1esbdg

Even without zooming you can see the differences. DNxHD does a fairly good job, but the fine detail and grain is gone in the MPEG2 encode, and dark shadow areas are negatively affected. The MPEG2 encode used even higher settings than the settings you were using.

Remember this was a fairly static scene. If you had more motion or complex content, it would have fallen apart even more. The compression simply isn't good enough at those bitrates. It will certainly be better than the onboard AVCHD @ 24Mb/s, but it's clearly worse than DNxHD in terms of compression

I am just confused, just as LeavingTheCandy is, by all the tech details regarding RGB.
If you say that this is anything but lossless, I'll take your word for it - no mistake - and I'll dump XDCam right away.

The take-home message about RGB is this: you need a format that Mac software will recognize as Y'CbCr, not RGB; otherwise it will clip data and screw up the gamma.

I suspect DNxHD is probably being clipped by your Mac software as well, because it's treated as RGB - I would do some tests with full-range (slightly overexposed) subjects.

MPEG2 will work, but you need higher bitrates; 100 Mb/s isn't enough. I can't find any free encoders that will do this for you (I tried HCEnc, but it's not working properly with 4:2:2, or it's crashing at very high bitrates).

PDR

05-09-2011, 11:43 AM

The one that works here for QuickTime on the PC is uncompressed 2VUY. (When you re-open the exported file in QT, it looks the same as the ProRes.) Theoretically it should work when transferred to a Mac as well. Other forms of uncompressed don't seem to get treated as uncompressed Y'CbCr, and other compression formats are mishandled even when encoded through QuickTime itself.

Filesize will be about 4-5x larger than the ninja prores, but at least you won't be clipping data, or incurring compression losses. You can convert that to prores on the mac if you want to.

Add this to the end of the script, and then play the avs in vdub or a media player like MPC to write the file

ConvertToYUY2()
QTOutput("out.mov", format="2Vuy", raw="uyvy")

Ralph B

05-09-2011, 07:12 PM

STOP PRESS!!!

http://www.matrox.com/video/en/support/downloads/

I just installed the Matrox VFW Software Codecs, and their MPEG2 I-Frame Codec is fabulous at high data rates.
And they have a version of it that runs on the Mac. We may just have found the ideal cross platform codec.

I ran it through five generations and it held up so well that I recommend it without any hesitation.

The only question I have is, since it renders to an AVI file on the PC, will that be directly readable on a Mac?

PDR

05-09-2011, 07:32 PM

Good idea Ralph!

You should be able to re-wrap into mov with ffmpeg

ffmpeg -i input.avi -vcodec copy output.mov

You can batch re-wrap with a .bat file as well
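
A hypothetical .bat along those lines (assuming ffmpeg is on the PATH and the AVIs are in the current folder):

```bat
REM Re-wrap every AVI in the folder into MOV without re-encoding
for %%f in (*.avi) do ffmpeg -i "%%f" -vcodec copy "%%~nf.mov"
```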

Whether or not FCP "likes" it is another matter... IIRC, some Mac nanoFlash users had some problems with high-bitrate I-frame MPEG2 files.

Or does the Mac version allow you to parse AVI-wrapped MPEG2?

EDIT: Hmm, maybe not so good. It forces an RGB conversion, and the decoder output is RGB if you are using the Matrox decoder... If you use another decoder you can decode in Y'CbCr, but the levels are clipped. I can't find a way to make it pass through Y'CbCr levels untouched, preserving all the data.

That setting should work for full-range RGB to prevent clipping, but its output pin is still RGB if you use the Matrox decoder.

Ideally you want Y'CbCr all the way through, to avoid Y'CbCr<=>RGB conversions (a slight, preventable quality loss).

Full-range RGB won't look like the ProRes capture; it will look flatter, with less contrast.

Ralph B

05-10-2011, 09:26 AM

Before we draw any conclusions, we need somebody with a Mac to test it and see how the files come across.

In any event, this is a helluva good codec, and I'm glad I found it. I'm going to use it in my own work.

martinbeek

05-10-2011, 01:47 PM

Great news! I've solved the "DNxHD Mac conversion problem"!
I ran the DNxHD file through the freeware utility MPEG Streamclip, saving it as ProRes444, and the whole signal range is correctly translated to ProRes444 Y'CbCr, definitely not to RGB.
The signal on the scopes is now identical to the Ninja HDMI recording.
Wow!

And indeed, DNxHD looks so much better than XDCAM HD.

A great relief...

Martin

ChemiX

05-19-2011, 04:16 PM

Is this necessary also for PAL footage or only applies to NTSC? Can it be done on a Mac?

Thanks

Ralph B

05-20-2011, 05:06 PM

Is this necessary also for PAL footage or only applies to NTSC? Can it be done on a Mac?

Thanks

If you're shooting 24P through the HDMI with a PAL GH2, then yes, you will need to fix the footage. Avisynth is the best (and perhaps only) way to do this correctly. As you probably know, Avisynth is a Windows program and does not run on a Mac. However, there are several ways to run Windows on a Mac. MartinBeek has successfully implemented an Avisynth workflow on a Mac. I suggest you study his posts in this thread.

martinbeek

05-24-2011, 01:53 AM

Hello all, and PDR and RalphB in particular.

Since you have proven to have much better eyes (or monitoring) than I, can you please have a look at the resulting footage when doing the conversion "back to Mac" using MPEG Streamclip to get all levels and gamma right for use in FCP.

The original AviSynth DNxHD clip and the resulting MPEG StreamClip ProRes444 clip can be downloaded here: http://www.megaupload.com/?d=DIF7KJN8 (501 MB)

To my eye, levels look OK now in FCP and conversion is very fast.
My question to you is whether it is a lossless process, and whether you can spot any anomalies in the levels, sharpness, or color.

Thanks a bunch!

Martin Beek

Ralph B

05-24-2011, 11:11 AM

Hello all, and PDR and RalphB in particular.

Since you have proven to have much better eyes (or monitoring) than I, can you please have a look at the resulting footage when doing the conversion "back to Mac" using MPEG Streamclip to get all levels and gamma right for use in FCP.

The original AviSynth DNxHD clip and the resulting MPEG StreamClip ProRes444 clip can be downloaded here: http://www.megaupload.com/?d=DIF7KJN8 (501 MB)

To my eye, levels look OK now in FCP and conversion is very fast.
My question to you is whether it is a lossless process, and whether you can spot any anomalies in the levels, sharpness, or color.

Thanks a bunch!

Martin Beek

Martin,

Are these two clips supposed to be identical? Because they're not. Here's what I did, and you can do the same: I rendered a single frame of each clip as a still image and brought them into Photoshop. Copy one and paste it over the other, then use Ctrl/Cmd+Z to toggle between them. There is clearly a gamma shift between them. The good news is there is no difference in fine detail - everything is preserved.
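The Photoshop toggle can also be checked numerically: a global gamma or levels shift changes pixel values everywhere, but it doesn't move edges, so fine detail survives. A toy illustration on a synthetic scanline (not the actual frames):

```python
def gamma_shift(pixels, g=1.2):
    # Re-map 8-bit values through a gamma curve (a global tonal shift).
    return [round(255 * (v / 255) ** g) for v in pixels]

def edge_positions(pixels):
    # Indices where neighboring pixels differ: a crude "fine detail" map.
    return [i for i in range(len(pixels) - 1) if pixels[i] != pixels[i + 1]]

# A toy scanline: a flat area, one hard edge, another flat area.
line = [40] * 8 + [200] * 8
shifted = gamma_shift(line)

# The values change everywhere (the gamma shift)...
values_moved = shifted != line
# ...but the edge stays exactly where it was (detail preserved).
same_edges = edge_positions(shifted) == edge_positions(line)
```

A pure gamma shift moves values but leaves every edge in place; if the edge map had changed too, that would indicate lost detail rather than a levels problem.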

Have you tried rendering the Avisynth output to the Matrox codec I mentioned on the previous page? I think it would be a worthwhile experiment. It may save you having to do a conversion when you bring the files into the Mac. Of course, you will have to install the Matrox codec on both the PC and Mac.

martinbeek

05-24-2011, 12:10 PM

Martin,

Are these two clips supposed to be identical? Because they're not. Here's what I did, and you can do the same: I rendered a single frame of each clip as a still image and brought them into Photoshop. Copy one and paste it over the other, then use Ctrl/Cmd+Z to toggle between them. There is clearly a gamma shift between them. The good news is there is no difference in fine detail - everything is preserved.

Have you tried rendering the Avisynth output to the Matrox codec I mentioned on the previous page? I think it would be a worthwhile experiment. It may save you having to do a conversion when you bring the files into the Mac. Of course, you will have to install the Matrox codec on both the PC and Mac.

I'm trying that right now. What would the ffmpeg command line (-vcodec) become for this codec? Can't find any info on that.

Thanks,

Martin

PDR

05-24-2011, 06:58 PM

You won't be able to use ffmpeg for Matrox; it's a proprietary MPEG-2 format (although free). ffmpeg's MPEG-2 encoder isn't that great, and it can't reach high enough bitrates.

I mentioned this earlier, but if quality is the biggest concern, use uncompressed 10-bit 4:2:2, aka v210. The bitrate is ~1000 Mb/s, so >5x the size of DNxHD 175, but it's lossless.
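The ~1000 Mb/s figure checks out from first principles: 10-bit 4:2:2 averages 20 bits per pixel (10 for luma, plus 10 shared across each Cb/Cr pair). A quick sanity check (1080p at 23.976 fps is assumed here):

```python
def uncompressed_bitrate(width=1920, height=1080, fps=24000 / 1001,
                         bits_per_pixel=20):
    # 10-bit 4:2:2 averages 20 bits/pixel. (Real v210 packing pads
    # 6 pixels into 16 bytes, ~21.3 bits/pixel, so actual files run
    # slightly higher than this.)
    return width * height * bits_per_pixel * fps  # bits per second

rate = uncompressed_bitrate()        # just under 1 Gb/s
ratio_vs_dnxhd175 = rate / 175e6     # a bit over 5x DNxHD 175
```

So "~1000 Mb/s" and ">5x the size of DNxHD 175" are both consistent with the raw pixel math.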

On Windows, QT treats it as Y'CbCr, so it should be treated the same on a Mac. The colors are the same as the Ninja ProRes in QT.

-vcodec v210

The beta version of FFmbc claims that DNxHD color is fixed for QuickTime, but I compiled that rc6 version and did some tests, and the colors still differ from the ProRes in QuickTime, at least on Windows... maybe it hasn't been implemented yet in that version, or there is a special switch I don't know about.
http://code.google.com/p/ffmbc/

Changelog:
Fix DNxHD colors in Quicktime.

Ralph B

05-25-2011, 09:50 AM

I'm trying that right now. What would the ffmpeg command line (-vcodec) become for this codec? Can't find any info on that.

Thanks,

Martin

Don't render using ffmpeg. Do it this way:

Install the Matrox Codec on your PC.
If you haven't already, install VirtualDub on your PC.
Drag and drop the Avisynth script onto VirtualDub.exe. VirtualDub will open with the Avisynth processed movie.
Press Ctrl P to select a codec.
Scroll down the list and highlight Matrox MPEG-2 I-Frame HD.
Press Configure. Set Frame Rate to 23.98 and Data Rate to 151 Mb/sec. (You can experiment with higher data rates, but this should be sufficient)
Press OK, then OK.
Press F7 which brings up the Save As Dialog. Render.

This will produce an AVI file. Now here's the part I'm not sure about. I don't know whether you'll be able to open this AVI file directly on the Mac. (Of course, you will have installed the Matrox codec on the Mac). You'll just have to try it.

If it doesn't open, then you can use ffmpeg to rewrap the AVI to MOV, as PDR suggested on Page 49.

If this turns out to work, then you can use the Batch Processing program I describe in the first post, so you don't have to drag and drop scripts onto VirtualDub.

martinbeek

05-26-2011, 01:34 AM

Great! This worked like a charm, and my Mac has no problem importing the clip into FCP!
It probably doesn't care about the file extension, only the codec.
Indeed, it looks perfectly 1:1.

Thanks!

Martin.

Don't render using ffmpeg. Do it this way:

Install the Matrox Codec on your PC.
If you haven't already, install VirtualDub on your PC.
Drag and drop the Avisynth script onto VirtualDub.exe. VirtualDub will open with the Avisynth processed movie.
Press Ctrl P to select a codec.
Scroll down the list and highlight Matrox MPEG-2 I-Frame HD.
Press Configure. Set Frame Rate to 23.98 and Data Rate to 151 Mb/sec. (You can experiment with higher data rates, but this should be sufficient)
Press OK, then OK.
Press F7 which brings up the Save As Dialog. Render.

This will produce an AVI file. Now here's the part I'm not sure about. I don't know whether you'll be able to open this AVI file directly on the Mac. (Of course, you will have installed the Matrox codec on the Mac). You'll just have to try it.

If it doesn't open, then you can use ffmpeg to rewrap the AVI to MOV, as PDR suggested on Page 49.

If this turns out to work, then you can use the Batch Processing program I describe in the first post, so you don't have to drag and drop scripts onto VirtualDub.