Not possible. Encoding a frame with fields will end up with HORRIBLE compression artifacts, as the encoder will use all of its bitrate on the jagged edges. I've done some encoding tests and I seriously doubt that answer.

I do not understand your answer; I can only assume that you are unaware of the advanced encoders used in HDV camera systems. Regardless, Steve is correct. HDV uses FRAME encoding, whether JVC, Canon or Sony.


I'll have to plead ignorance on the precise details.
However, if both Canon and Sony are encoding the exact same thing, I still have to ask why Canon's 25F tape will not play on anyone else's HDV kit?

No, I'm not talking about image quality, simply about how it's encoded.

If you want to educate yourselves in V1 compression follow these steps and all will become obvious.

Shoot two static scenes, one interlaced and the other progressive.
Capture both clips.
In an NLE or compositing package, change the colour space to YUV.
Compare the YUV channels of the interlaced and progressive footage.

The Y channel gives the game away!
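For what it's worth, the comparison in those steps can be sketched with numpy. The BT.709 luma weights and the synthetic "scene" below are my assumptions, not anything from a real capture; the point is only that the Y channel's line-to-line statistics expose vertical filtering:

```python
import numpy as np

def luma_bt709(rgb):
    """Y (luma) channel of an RGB frame using BT.709 weights."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def vertical_detail(y):
    """Mean absolute line-to-line difference: a crude measure of
    vertical detail (and of potential line twitter) in the Y channel."""
    return float(np.abs(np.diff(y, axis=0)).mean())

# Two synthetic "captures" of the same static scene: one at full vertical
# detail, one vertically averaged (as a softened/filtered P mode might look).
rng = np.random.default_rng(0)
scene = rng.random((1080, 1920, 3))
soft = (scene + np.roll(scene, 1, axis=0)) / 2

d_full = vertical_detail(luma_bt709(scene))
d_soft = vertical_detail(luma_bt709(soft))
```

If the two modes came off the same encoder path, the Y-channel detail figures should be close; a large drop in the progressive clip is the giveaway being described.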

If FRAME encoding is used for both I and P in the V1, as Steve says, then ironically it would appear not ideally suited to the task, particularly for progressive footage. Which is a hoot.

If it is the same encoder encoding the same information (because the camera is locked off), then why does V1 progressive footage look like it was encoded by another (bad) camera altogether?

You can flicker filter all you like, but that is not going to repair the damage already done to the image. If I had a Sony V1 and wanted to shoot 25P, I'd forget it and shoot 50i and deinterlace in post. If you were to do the above comparison test, you might just kick the 25P mode into touch.

Piotr,
try using the Median Filter in Vegas; it seems to give finer control than GB.
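The gentle vertical blur being traded back and forth in this thread (whether GB in Premiere or a median in Vegas) amounts to a small low-pass along the vertical axis. A minimal numpy sketch, with an assumed 3-tap kernel rather than either application's actual filter:

```python
import numpy as np

def vertical_blur(y, kernel=(0.25, 0.5, 0.25)):
    """Light 3-tap vertical low-pass: enough to tame one-pixel line
    twitter while leaving horizontal resolution untouched."""
    k0, k1, k2 = kernel
    return k0 * np.roll(y, 1, axis=0) + k1 * y + k2 * np.roll(y, -1, axis=0)

# Worst case input: alternating 0/255 lines (maximum possible line twitter).
y = np.zeros((8, 4), dtype=float)
y[0::2, :] = 255.0
flat = vertical_blur(y)
# Each output line is now a weighted average of itself and its neighbours,
# collapsing the alternating pattern toward a uniform mid-grey.
```

A median filter behaves differently on edges (it preserves them rather than softening), which is presumably why it gives "finer control" on twitter without smearing everything.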

For all who might be interested:
I'm including a 1920 x 1080 image of alternate black and white lines. This is full 1080-line res. A 1080 display device should be able to display this with no flicker; certainly an LCD on a PC can. If you see anything weird going on, there's something wrong with your display device, provided it's being told to display progressive. It will encode to HDV with no apparent loss.

I tried to upload this file as a .png, but unfortunately .png uploads are limited to 1000x1000; the jpeg compression seems to have done no harm, I hope.
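A pattern like the one described is trivial to generate; a sketch with numpy (writing it out as PNG or JPEG is left to whatever imaging library you have):

```python
import numpy as np

# 1920x1080 alternating one-pixel black and white lines: the highest
# vertical frequency a 1080-line system can represent.
pattern = np.zeros((1080, 1920), dtype=np.uint8)
pattern[0::2, :] = 255  # even lines white, odd lines black

# Viewed progressively, every line is stable; viewed as interlace, one
# field is all white and the other all black, so the frame flickers.
top_field = pattern[0::2, :]
bottom_field = pattern[1::2, :]
```

That field split is exactly why this image doubles as a test of whether a display chain is really treating the signal as progressive.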

There are differences between the two, and a discussion with either/both Canon and Sony engineers makes that very clear. While the techniques might be somewhat similar, they are not at all the same.
This thread is going nowhere fast, gang.

So, coming back to the topic. I have blurred, with Premiere's Gaussian filter (V = 2.5%), all the clips shot at the default sharpness (7) on the V1E, and compared them to the clips I captured earlier with the XH-A1 (same scenery, lighting, 25F mode). Bottom line:

The post-processed picture from the V1E - while freed from virtually any line twitter - is still sharper and (visually, as I have no means to measure that) higher resolution than the raw video from the Canon.

Which doesn't change the fact that the A1 is a wonderful machine, and probably can be tweaked to produce sharper images than those I shot during my short testing period.

The rest is just a matter of personal preferences. Even more off-topic is the question of the price/value ratio for the two cameras; it is arguably better for the Canon, and this has always been the most important reason for me to still consider it.

Canon is taking 24Hz 540-line fields and "field doubling" (BBC term so I'm going to use it) them to 1080-line frames and then encoding frames using FRAME encoding.

Sony is taking 24Hz 1080-line frames, adding 2-3 pulldown and placing every 2 fields into one 1080-line frame and then encoding using FRAME encoding.
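The 2-3 pulldown step described for the Sony path can be sketched as follows; the frame labels and list layout are illustrative only, not any MPEG-2 data structure:

```python
def pulldown_2_3(frames):
    """Expand 24p frames into a 60i field sequence: frames alternately
    contribute 2 and 3 fields (hence 4 film frames -> 10 fields)."""
    fields = []
    for i, frame in enumerate(frames):
        fields += [frame] * (2 if i % 2 == 0 else 3)
    return fields

fields = pulldown_2_3(["A", "B", "C", "D"])
# Consecutive field pairs then get packed two-per-coded-frame, so a coded
# frame can end up carrying fields from two different film frames (B + C).
```

Four frames per cadence cycle yielding ten fields is what scales 24 frames/s up to 60 fields/s.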

PsF and pulldown only exist in the video world -- not the MPEG-2 world. In the MPEG-2 world there are only frames and fields. In fact, in the MPEG-2 world a frame can carry 2 fields that have no relation to each other.

Interlace video and judder frames will indeed stress the MPEG-2 encoder more than progressive. And, indeed 720p HDV has always been artifact free, while 1080i HDV has always had a negative reputation.

So in summary:

* no ICP to worry about. Cue is still possible if the decoder doesn't check the Picture_Structure flag and assumes the video was encoded as fields not frames.

* another week and no postings of V1E "oil paint" or MPEG-2 "blocking"

Flags in an MPEG-2 file are not the same as progressive encoding. Yes, to us it may look the same, but they are two totally different beasts. Flags mean that MPEG-2 interlaced encoding is used, but the flags tell the decoder to keep certain fields together so they come out looking progressive.

Progressive encoding has no flags, and the video is encoded and decoded as progressive, so there are no funky field pairing-up conversions going on.

Take 24p for example. There is no way 24p sitting inside of 60i could be encoded as frames. A 24p MPEG-2 file encoded as 24p is different than a 24p file encoded for DVD with the flags set to make it 60i. So 24p on the V1 has no option at all other than to encode as fields. Now maybe 30p and 25p are encoded as true frames, but I doubt it.

It was always my understanding that Sony wanted to make sure P footage from the V1 would work 100% with every piece of HDV equipment and all NLEs out there. This means the video must be encoded as fields, because this is the normal structure of HDV 1080i. This is why there are no progressive specs in HDV 1080i: frame encoding is not supported.

Canon of course gets away with using frame encoding, which is why the tapes will not play in Sony equipment. The normal HDV decoders do not know what to do with frame encoding. The Canon decoders, on the other hand, were made to decode both field and frame encoding. Canon put in a few extra parts of MPEG-2 that are not in the HDV spec.

It would have been nice for Sony to use frame encoding, since most major NLEs now support the Canon F modes, but then older Sony decks wouldn't be able to play the tapes, because their decoders just cannot decode frame encoding. I think this would have upset some people, so Sony decided to stick with the HDV specs and keep everything field encoded.

Flags, on the other hand, are easier for normal decoders to read. That is why a DVD player can play 24p flagged inside a 60i file, but it cannot play an MPEG-2 file encoded as true 24p. The decoders are still putting out interlaced video with the flags, even though it looks progressive. Progressive-scan DVD players know to take the flags and, instead of putting out 60i, pump out the 24p.
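The DVD-style "24p flagged as 60i" works through the repeat_first_field (RFF) and top_field_first (TFF) flags: each coded frame is stored once, and the flags tell the decoder how many fields to emit and in what order. A minimal sketch of that expansion (the flag names are from MPEG-2; the tuple layout and frame labels are mine):

```python
def emit_fields(coded_frames):
    """Each entry is (frame_id, top_field_first, repeat_first_field).
    A decoder emits 2 fields per frame, plus one extra when RFF is set,
    reconstructing a 2-3 cadence from frames that were stored once."""
    out = []
    for frame, tff, rff in coded_frames:
        order = ["T", "B"] if tff else ["B", "T"]
        if rff:
            order.append(order[0])  # repeat the first field
        out.extend((frame, parity) for parity in order)
    return out

# 24p stored at film rate, flagged for 60i playback (alternating RFF):
stream = [("A", True, True), ("B", False, False),
          ("C", False, True), ("D", True, False)]
fields = emit_fields(stream)  # 3 + 2 + 3 + 2 = 10 fields from 4 frames
```

Note how the field parity alternates cleanly across frame boundaries; a player that honours the flags outputs a legal interlaced field sequence, which is why ordinary decoders cope.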

So if the P footage from the V1 says it is P, it is only because the flags tell the decoder to decode as P. That does not mean the encoding was progressive or frame encoded. Vegas clearly has a good decoder; since it is going to deal with V1 footage, it knows to treat the flagged video as progressive once it gets in the system.

So 24p on the V1 has no option at all other than to encode as fields. Now maybe 30p and 25p are encoded as true frames but I doubt it.

This is why there are no progressive specs in HDV 1080i because frame encoding is not supported.

Well, you'd better tell Sony they don't use FRAMES.

The MPEG-2 spec not only allows both FRAMES and FIELDS -- the encoder can switch between them as it desires. All it needs to do is flag fields as TOP or BOTTOM and a frame as FRAME. A decoder reads the flags and does the decode and the correct colour up-sample.

The nature of the fields/frames coming into the encoder is irrelevant. In HD1 the encoder gets frames, while in HD2 it gets fields. It has no idea if these are interlace fields, interlace fields with pulldown, or PsF.

According to Sony, an HD2 encoder does only one thing -- combine the fields and encode.

An HD2 decoder does the decode to a frame and then outputs fields. The decoder has no idea what is in the frames. It spits out 2 fields that AFTER the decoder are turned into interlace video. From this point onward, video equipment has no idea if these are interlace fields, interlace fields with pulldown, or PsF.

Thus Sony progressive rides exactly the same path as Sony interlace. HD2 has always been ready for carrying progressive. Had Sony not used FRAME from Day 1, it would have had to add FRAME to encoders and decoders.

Bottom line -- with Sony there is never anything but interlace video except in the EIP. Even the display treats the video as interlace and uses the cadence to decide whether it is "film" or "video." If video, then it does to 25PsF/30PsF exactly what it does with 50i/60i. At no point does any part of the system know what it is carrying -- except in the deinterlacer, if it senses a 2-3 cadence.

I believe the MPEG-2 progressive flag is set so that NLEs can know the kind of video in a FILE.


*No postings of oil paint effect? I take it you missed Massimiliano's post.
*No image problems, other than continuing extreme macroblocking and detail removal? See Piotr's recent clips.
*After the fix, severe mosquito noise is seen around contrasty lines. Source s_5.m2t, to name one of many of Piotr's clips.
*The V1E's Prog mode may well have a slight increase in res over its Int mode, but at what cost? Very inefficient use of bandwidth destroys the fidelity of the image, so it is of questionable use. Why would anyone use 25P when it looks like 3rd or 4th generation HDV?
*Aliasing removed in post also drops resolution back down to Int-mode levels.
*25P continues to be supported by BD just as before. There is no move under way to 24P there; there is a move to produce feature films in 24P across ALL regions, so PAL land now gets 24P films rather than 25P. All other R50 HDTV-generated progressive and interlaced footage is to be delivered in 50i, aka 1080i25. FACT. 25P is delivered as 1080i25 to ensure titles and credit rolls are smooth.

I will post images captured from the V1E in progressive and interlaced, compared to the Canon, which PROVE categorically that the Canon, even in 25F, has no less V resolution than the V1E in interlaced mode. The Canon has far more H res than the V1E as well, easily demonstrated by way of a posted image.

To keep reciting the "Canon field doubles" mantra does NOT make it correct. It isn't correct. Tom Roper has measured the resolution drop of 24F, and it only confirms what owners of the XH-A1 see with their own eyes.

Piotr

In the real world of professional-level production, sharpness in an image is to be avoided. It is amateur hour.

*No postings of oil paint effect? I take it you missed Massimiliano's post.

Tony, I suppose Massimiliano's V1E has got the same "level" of fix as your own; i.e. not the final one (both were fixed early in January).

Quote:

Originally Posted by Tony Tremble

*After fix severe mosquito noise is seen around contrasty lines. Source s_5.m2t to name one of many of Piotr's clips

The clip you mention is a raw file. The proposed method of vertical blurring removes most of it, along with the line twitter.

Quote:

Originally Posted by Tony Tremble

Piotr

In the real world of professional-level production, sharpness in an image is to be avoided. It is amateur hour.

Agreed. I am an amateur, and I like it sharp. I'm very happy that the more professional a look I adopt in my video, the fewer problems I'll have to deal with in post. With sharpness set to 3 or 4, there is no excessive sharpness, and no line twitter or dancing pixels even without any post-processing.

I'm looking forward to your promised images. I confess that during my A1 testing I didn't even think of reverting to the V1 (the broken handle flap made me return it to my dealer and get a fixed V1 for another try), and therefore I didn't try to re-create the sharp and natural picture of the Sony. Should this prove possible without introducing similar mosquito noise and aliasing - who knows? See my post about the price/value ratio above :)

Tony, have a very nice weekend, too.

UPDATE: I shot a nice, artificially lit scene tonight with the most "filmic" settings - 25p, sharpness at 3, Gamma 2, Cinecolour, colour gain at 3, etc. Beautiful - I never achieved such a look with the Canon (much flatter and washed-out, or artificial colours when boosted; much more noise). But maybe I just didn't try hard enough... Sorry for the OT. My point, though, is that this proved sharpness at 3 is *not* enough to eliminate line twitter completely (there is still some on fine, very bright lines like white paper-sheet edges). Some very slight vertical blurring is still needed in post... Wow, this camera has so much resolution! Even on a full 1080 display, the 25PsF video behaves as if it were squeezed down (you can see this effect when watching high-res video in a small window on your computer screen).

The BBC reports that Sony downconverts HD to get SD in the A1 and Z1. The relation between the two is fixed at 2.25 -- which matches 1080 divided by 480. So this factor was designed into the V1U and V1E.

There is a fundamental difference between PAL and NTSC cameras. PAL units must have 1.2X greater V rez because of the difference in vertical resolution: 576 vs 480 lines. This is why the PAL chips always have more rows.

In order to get 576 lines, given the 2.25 factor, the V1E's HD signal needs to carry 1296 lines of rez. Of course a compromise could be chosen; for example, 540 and 1215.

To get 1215 lines, the only option is to raise the low-pass filter frequency to enable the capture of a signal with more vertical resolution. But R50 HD cameras don't have extra CCD rows; the rows are fixed at 1080. When you capture a wider-bandwidth signal with the same number of rows, the result is inherently far more aliasing and twitter on horizontal edges/lines.
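The arithmetic in this argument is easy to check:

```python
# HD-to-SD downconversion factor reportedly designed into the cameras:
factor = 1080 / 480
assert factor == 2.25

# PAL needs 1.2x the vertical resolution of NTSC:
assert 576 / 480 == 1.2

# To yield 576 SD lines through a fixed 2.25 downconversion, the HD
# signal must carry 576 * 2.25 = 1296 lines of vertical resolution;
# the compromise figures quoted are 540 * 2.25 = 1215.
assert 576 * 2.25 == 1296
assert 540 * 2.25 == 1215
```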

Although this sounds more like speculation (or an educated guess; Steve, please correct me if I am wrong) than factual knowledge, there certainly is something to it. The more I shoot in 25PsF, the more I'm certain the V-rez of the V1E is very high; too high, I'd say. When uncompressed (through HDMI or Component), it almost seems to exceed the HDV specs, looking on a 1080 display as if it had more than 1080 lines (don't take me literally; I'm speculating based on purely subjective impressions). The result is similar to a video displayed in a window whose size in pixels is less than the video resolution. But feed it compressed through FireWire, and voila! - the picture (before or after tape) displays perfectly, without any aliasing or line twitter (for instance, in VLC through Capture Device). You can re-introduce the twitter only by turning bobbing on, thus unnecessarily deinterlacing progressive video, or by... yes, you guessed it, by decreasing the VLC window size so that it's less than 1920x1080!

Piotr,
the V res is not too high. With 25PsF or 25p 1080 you really can have 1080 lines of V res. Take the sample test image I posted yesterday: that will display perfectly if displayed correctly.
It'd be pretty silly to have a system with 1080 vertical pixels if they couldn't be used after all.

The problem that you and I and others are seeing comes about when we display half those lines at one point in time and the other half at a different point in time. You very easily run into the same problem with still images from a DSC, or with graphics or text, when displaying them on an interlaced display, or one that's attempting to emulate an interlaced display, or one that's wrongly trying to de-interlace progressive.
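That display-time failure can be sketched directly: take a progressive frame with one-pixel vertical detail, then show its two fields at different moments. A naive bob deinterlace (my simplification of what a misbehaving display does) makes the mechanism obvious:

```python
import numpy as np

# Progressive frame using the full 1080 lines: alternating black/white.
frame = np.zeros((1080, 1920), dtype=np.uint8)
frame[0::2, :] = 255

def bob(field):
    """Naive bob deinterlace: line-double one field back to full height
    (the half-line spatial offset between fields is ignored here)."""
    return np.repeat(field, 2, axis=0)

shown_first = bob(frame[0::2, :])  # instant 1: built from the top field
shown_next = bob(frame[1::2, :])   # instant 2: built from the bottom field
# The two full-height images are solid white and solid black: displayed
# in sequence, the finest vertical detail has become full-frame flicker.
```

A display that presents both fields at the same instant (true progressive, or a proper weave) shows the original frame unchanged, which is exactly the "displayed correctly" case above.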

For what it's worth, though, we've taken an SD feed from the HDMI output while having the HDMI chip do the scaling, and the results look pretty damn good. The odd thing (only a very subjective evaluation) is that the res drops when switching the camera between I and P. When we get a chance we're going to try recording to DigiBeta; if the results on our broadcast monitor are anything to go by, this is going to be one very useful camera. We'll have a package that can record 16:9 4:2:2 SD to DB that's both portable and relatively cheap.