I know this sounds like a useless idea, but I would like to see the ability to upscale lower-resolution videos to 720p. The reason is that YouTube now allows 60fps video to be uploaded, but only at 720p or 1080p. My 60fps SD video gets converted to 30fps on YouTube. It would be nice to have HandBrake make a YouTube-ready 60fps video that can be discarded once uploaded.

There is a method to emulate 60p from 29.97i. It's called "Bob," and it's a deinterlace option.

But upscaling to 720p in the HandBrake GUI is unlikely to happen. It is quite possible in the CLI, but rarely recommended. You would do well to search the forum before suggesting a feature; it's covered in "README before suggesting a feature" at the top of this page.

I was disappointed myself when I saw YouTube's new 60fps support is HD-only; there's a lot of interlaced SD material out there that could benefit from being bobbed to 60p instead of destructively deinterlaced to 30p.

musicvid wrote:What is being called 60i today is actually the HD-era term for 29.97i, or NTSC.
Likewise for 50i which is 25i PAL.
Why would Youtube support a playback standard that isn't used?

I guess I misspoke using the term SD. I'm talking about 480p 60fps EDTV, like footage from an original Xbox or Wii in progressive mode. The only way to get YouTube to play it back at 60fps is to upload a 720p or higher video.

musicvid wrote:SD was never shot or delivered for 60fps consumer playback, save for the brief appearance of EDTV (HDV-SD) about a decade ago.
Until then, progressive frames did not exist above 320x240.

NTSC video has temporal information at 60 Hz: that's 60 fields per second, with two fields stored in each frame at 30fps. Deinterlacing full-frame NTSC 480i video to 480p 30fps destroys half of the temporal information present in the original (assuming it's true video and not actually just telecined film transmitted or stored as video, but that's a whole other can of worms). Bobbing to 60fps is the only way to preserve all of that original 60 Hz motion when converting from interlaced to progressive frames.
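To make the fields-vs-frames point concrete, here's a toy Python sketch (function names are made up for illustration, and plain lists stand in for scanlines; this is not real video code) of how an interlaced frame weaves two 1/60-second time-slices together, and how they can be separated out again losslessly:

```python
def weave(field_a, field_b):
    """Interleave two half-height fields into one full-height interlaced frame.

    field_a supplies the even scanlines (captured at time t),
    field_b the odd scanlines (captured 1/60 s later).
    """
    frame = []
    for even_line, odd_line in zip(field_a, field_b):
        frame.append(even_line)  # even scanline, time t
        frame.append(odd_line)   # odd scanline, time t + 1/60 s
    return frame

def separate(frame):
    """Split a woven frame back into its two time-distinct fields."""
    return frame[0::2], frame[1::2]

# Four scanlines per field -> one 8-line "frame" carrying two time-slices
field_t0 = ["A0", "A1", "A2", "A3"]   # captured at t
field_t1 = ["B0", "B1", "B2", "B3"]   # captured at t + 1/60 s
frame = weave(field_t0, field_t1)
assert separate(frame) == (field_t0, field_t1)
```

The round-trip shows why the argument matters: both time-slices are recoverable from the interlaced frame, but any 30p output has only one frame slot to represent them.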

Last edited by JackNF on Sun Nov 02, 2014 4:52 pm, edited 1 time in total.

musicvid wrote:SD was never shot or delivered for 60fps consumer playback, save for the brief appearance of EDTV (HDV-SD) about a decade ago.
Until then, progressive frames did not exist above 320x240.

NTSC video has temporal information at 60hz, that's 60 fields per second with 2 fields stored in each frame. Deinterlacing full-frame NTSC 480i video to 480p 30fps destroys half of the temporal information present in the original (assuming it's true video and not actually just telecined film). Bobbing to 60fps is the only way to preserve all of that original 60hz motion when converting from interlaced to progressive frames.

This too. It's what your HDTV does with SD interlaced video, and 480i/1080i do contain 60fps worth of information.

No, deinterlacing 60i to 30p does not throw away half the temporal information. Sophisticated algorithms (Yadif, EEDI2, McDeint) use all the information from both fields to draw one progressive frame. You are describing simple interpolation, which is obsolete. Best to know this before posting.

60i contains two fields per frame, each field containing HALF the information of one 30p frame. The deinterlace and decomb user guides on this very site are wonderfully presented, the problem being nobody reads them before posting speculation. https://trac.handbrake.fr/wiki/Decomb#options

Want upscaling? Use the Handbrake CLI or another application.

This has turned into a case of wishful thinking. So, my last post here.

musicvid wrote:No, deinterlacing 60i to 30p does not throw away half the temporal information. Sophisticated algorithms (Yadif, EEDI2, McDeint) use all the information from both fields to draw one progressive frame. You are describing primitive interpolation, which is obsolete. Best to know this before posting.

Yes, all those sophisticated algorithms do use some of that data, but motion-compensated algorithms cannot perform miracles. Each field is a distinct image representing a 1/60th-of-a-second timeslice of the subject video (albeit with only half the vertical information of a full frame). When you have 60 slices per second but only allow 30 such slices per second in your final output, it doesn't matter how sophisticated the algorithm used to reduce those 60 slices down to 30: the 30fps video will not play back as smoothly as a video that had instead maintained the full 60 timeslices.

Those 'sophisticated algorithms' you mentioned all take one field, then tease out as much vertical information as they can from the adjacent field before discarding that adjacent field, and the motion detail represented therein. The motion compensation has to do with how they handle deinterlacing the edges where combing is detected: they use full motion data from the source video in those calculations, but certainly do nothing to actually keep that full motion data in the output.

It's a way of taking a 240-line field and adding in as much detail from the other 240-line field as possible, in order to get a more detailed, fully progressive 480-line frame than you'd get by simply upscaling 240 to 480, or by applying a simple vertical blend to an interlaced 480-line frame composed of the two fields. There are advanced bobbers out there that use these same sorts of tricks while maintaining the full motion data: the best of both worlds, and the closest you can get to what interlaced video actually looked like on a CRT display. But that, of course, outputs 60fps in order to keep everything, not 30.

You wouldn't say that a filter that resized a still image to half its original size using 'sophisticated algorithms' to avoid distortion and aliasing wasn't still losing information, permanently throwing away detail that was present in the original. The only difference here is that it takes just one glance to see what's happened to the still image, whereas for video motion the lost detail is in the smoothness of its playback, so stills are useless for identifying that loss. You actually have to sit there and watch before-and-after or side-by-side for a while in order to see what's happened, what's been lost. It's much more subtle, but it is there.

I didn't mean for this to turn into a war about deinterlacing methods, as I clearly stated 480p in the title. But to clear up the interlaced 30 vs 60fps issue: back when TV was first developed and its creators were trying to find a way to offer sharp pictures and smooth motion over the small amount of bandwidth they were given to broadcast over the air, a compromise had to be made. Interlaced video takes a half frame of approximately 240 lines every 1/60th of a second and combines two fields into one "frame." Stuff in motion would be half vertical resolution, while stuff that was still would have the illusion of the full 480 lines of resolution. It was a form of analog video compression, broadcasting smooth 60fps motion in the bandwidth of 30. Of course, old and new video editors only see 30 frames, because that's what's actually there.

Anyway, back to my original request. It serves only one purpose: to prepare 480p 60fps video for use on YouTube. If the developers decide to allow upscaling in the GUI, great. If not, there are other programs to do what I want. I just find HandBrake the most efficient encoder out there.
Thank you, developers, for the wonderful program.

Each field is a distinct image representing a 1/60th of a second timeslice of the subject video

Jack,
I already told you that is incorrect.

And I'm trying to tell you that no, I'm the one who's correct on that point. What exactly do you think the difference IS between the two fields in a single frame? What causes that combing? It's the 1/60th of a second difference between when the even scanlines and the odd scanlines were captured by the video camera. If you can't accept this fundamental fact of how video cameras have worked for the past ~70 years, then we'll just have to agree to disagree, each content in knowing the other guy's got it wrong.

It takes two fields to make anything resembling a "distinct image."
Deinterlacing does not "destroy half the temporal information."
Adjacent fields are not "discarded."
That's just the beginning.
Repurposing the English language to disguise factual errors just isn't working, Jack.

"`Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe."
-- Lewis Carroll

Last edited by musicvid on Mon Nov 03, 2014 9:27 pm, edited 1 time in total.

musicvid wrote:
It takes two fields to make anything resembling a "distinct image."
Deinterlacing does not "destroy half the temporal information."
Adjacent fields are not "discarded."
Reinventing the English language just isn't

I have to agree with Jack here. Each field can contain one image by itself. It takes two fields to make one frame, but one interlaced video frame can contain two distinct images. I've deinterlaced old VHS home movies using a bob filter that comes out 60fps: 60 full independent frames, each one different from the last. If deinterlacing to 30p does not get rid of temporal information, where did the extra frames come from? www.100fps.com

Well, I was just going to leave this be, but that example image you dug up is seriously flawed. It is clearly a photograph that was chopped up into fake "fields" for a very rudimentary demonstration, and it fails to show the combing artifacts you'd see in an actual frame of interlaced video in motion. The "scanlines" in that example are each several pixels tall; real interlaced video weaves the fields even/odd, so each scanline is only one pixel tall.

Here's part of an interlaced frame in motion where you can clearly see combing. It's a double image: the fields show the car before and after moving that little bit along the road. If each field in that image were separated out on its own, yes, you would have two separate images (each half the height it should be for the proper aspect ratio, but most definitely a distinct image). Each can be viewed perfectly well on its own, clearly showing the back of that car in different positions because of the different times at which each field was captured.

If you just deinterlaced that frame instead of separating the two fields then you lose one of those two positions, half the motion detail. You cannot represent that car in both of those two positions at once within a single frame without either leaving it interlaced or opting for a blended deinterlace which would actually just result in some really ugly ghosting (don't do it). Any other type of deinterlacer has no other option but to choose one position or the other, one field or the other, and thus discard the motion detail from the unchosen field.

The simplest of bobbing filters aligns and re-sizes each field into the height of a full frame to make it a full frame, rendering full 60fps video albeit with only half the true vertical detail of the source (it's keeping full motion video but has to upscale the vertical resolution of each field/frame by 2x). Meanwhile the simplest of deinterlacers simply discards a field and resizes up from half-height losing both temporal AND vertical detail. As previously discussed, both smart bobbers and smart deinterlacers go a long way in getting that vertical detail back by borrowing from neighboring fields wherever possible.
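As a rough sketch of the difference just described, here is a toy Python comparison (helper names are invented for illustration, and lists stand in for scanlines; real filters are far smarter) of the simplest bob against the simplest discard-a-field deinterlace:

```python
def line_double(field):
    """Upscale a half-height field to full height by repeating each scanline."""
    doubled = []
    for line in field:
        doubled += [line, line]
    return doubled

def naive_bob(frames):
    """Emit one full-height frame per field: N interlaced frames -> 2N progressive."""
    out = []
    for frame in frames:
        top, bottom = frame[0::2], frame[1::2]   # the two 1/60 s time-slices
        out.append(line_double(top))             # time t
        out.append(line_double(bottom))          # time t + 1/60 s
    return out

def naive_deinterlace(frames):
    """Keep one field per frame and discard the other: N interlaced -> N progressive."""
    return [line_double(frame[0::2]) for frame in frames]

# One 4-line interlaced "frame": even rows from field A, odd rows from field B
frames = [["A0", "B0", "A1", "B1"]]
assert len(naive_bob(frames)) == 2 * len(naive_deinterlace(frames))
```

Note that `naive_deinterlace` never looks at field B at all; smart filters recover B's vertical detail into the output frame, but they still emit only N frames where the bob emits 2N.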

Yeah, I know I do tend to overwork writing stuff like this. I inevitably cringe at something I wrote and spend far too much time rewriting the most trivial of forum posts if no one else has chimed in after me yet. An old habit from my university days... essay writing they really drove home revision revision revision!

And for my one last post before calling it quits: I think the example you've linked to proves one of my points quite succinctly.

Three interlaced frames, each showing a capital 'A' in two distinct positions. Deinterlacing as you've demonstrated reduces that to just one position per frame when cleaning up the combing.

Overall, the six positions present in those three source frames get reduced to just three positions when deinterlacing. Bobbing instead creates two frames per source frame, showing all six original positions. Now multiply everything by ten: compare 30 such positions shown one after another in one second with 60 positions shown one after another in that same second, and tell me again which plays back more smoothly, which better preserves the motion data in that source video, and that there isn't 60fps motion detail in NTSC video, half of which gets lost when deinterlacing.
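That frame-count arithmetic can be written out directly (a trivial Python sketch; the position labels are made up):

```python
# Three interlaced frames, each weaving two positions of the 'A' (one per field)
frames = [("pos1", "pos2"), ("pos3", "pos4"), ("pos5", "pos6")]

# Bobbing emits one output frame per field, so every position survives
bobbed = [pos for frame in frames for pos in frame]

# Deinterlacing emits one output frame per source frame, keeping one field's position
deinterlaced = [frame[0] for frame in frames]

assert len(bobbed) == 6        # all six positions survive
assert len(deinterlaced) == 3  # half the positions are gone
```

Scaled up by ten, that's the 60-positions-per-second versus 30-positions-per-second difference in playback smoothness.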

And as for my other big sticking point, ultimately it's your dogged insistence that one field on its own is not a distinct image, that it would just be useless alone instead of being the distorted but otherwise crisp and comb-free image that it is. I've tried to prove my point by explaining how different bobbing and deinterlacing methods rely on that fact to do what they do, but you're having none of it. Load an interlaced clip into AviSynth and use the "SeparateFields" command sometime; from how you've been talking, I think you might be surprised at what you find.

My NLE suites (Vegas Pro, Premiere Pro) have more than enough capability to separate the fields in the preview in real time. They look the same now as they did a decade ago.

[EDIT] Now I've edited my post to bring about a peaceful conclusion. It's a ridiculous argument, and we each have a POV. Until I've run some in-depth tests on Handbrake's bobber (I may be surprised), I'm going to defer speculation.

OK, so I'm going to bring this topic back to the original request of letting HandBrake upscale for YouTube 60p use.
So here are some tests. The source is a VHS home movie of me riding a bike in 1986. The video was bobbed to 60p, and then I used MPEG Streamclip to upscale to 720p: 480i converted to 480p, then upscaled to 720p. The suggestion would avoid this double conversion, since MPEG Streamclip can't bob and HandBrake can't upscale. It could also be used on 480p sources.
The first example is the upconvert. Make sure to use Chrome (PC or Mac) or Safari in Yosemite (Mac) to view the 60p. Also, you may have to manually select 720p: http://youtu.be/koh7VU9bryc
This example is just HandBrake's slow deinterlacer to 480p 30fps, used for reference: http://youtu.be/_EiUHffgh0I