For example, our last shoot was at an air display: a combination of World War II aircraft, helicopters and fast jets. We filmed them taking off at speeds of about 100 mph and then flying past at up to 400 mph. Takeoffs should be particularly tricky for compression, because when shooting from a tower there is a lot of ground reference, including grass in the foreground. But even when panning at speed we found NO artifacts at all. Certainly there was a very slight softening of the image, but this occurs with HDCAM in a fast pan too.

We don't pay any attention to the theoretical limitations of HDV when we shoot, and so far it hasn't caused any problems.

I have heard this argument before. Does this mean that if you are delivering on
DVD you want to acquire footage at 6.4 Mbps? Not me, brother.

IMO, the HD I've seen via cable looks fine until someone turns his/her
head or smiles, and then you see HORRIBLE macroblocks. Talk about
jaggies: the Olympics looked hideous as soon as the action started.
I want to see beautiful people's perfect teeth and eyes, not a mosaic.

I plan to acquire my footage at the highest quality I can afford and THEN
deliver/compress it in whatever low-resolution format is required. I don't
want to start with the lowest bandwidth at the beginning of the production process.

HDV doesn't have that kind of problem. I am no big HDV fan; I dislike the format, in fact. But let's give credit where credit is due.

You see the kind of artifacting you are talking about even with material acquired on 35mm. It isn't even an issue with the encoding to HD at 12 Mbps.

It's a failure of digital transmission.

Analog transmission has its own problems, but when it failed it did so much more gracefully.

The best channels (image-wise) on my local cable are the popular ones, which are still being sent down the wire as analog.

HDV's real failure is not as a format for acquiring moving pictures for display, but rather when those pictures must be manipulated. Try a difference key with a "clean plate." I don't know whether to laugh or cry when I think of that.

I agree that if you stuff a 5 mm focal length and f/5 into the Rayleigh limit formula you get a resolution of .004 mm for red (700 nm) light, which means that a 1/3 inch chip could resolve about 1100 points along its long dimension. The XL2 would be right at the diffraction limit by that reckoning, and thus if this were the correct model the new 20X couldn't be any better.

This model assumes a simple 5 mm focal length lens with a 1 mm diameter stop in front of it. That is not what we have with these zoom lenses. A modern zoom lens has a section where the light is nearly collimated, and this is where the stop is usually put. This stop does not have to close down to 1 mm to give an entrance pupil of 1 mm.

To get a better feel for what I am describing, take a modern zoom lens, put it in manual mode (off the camera) and look in the back end. Stop the iris down to the point where you can see the blades. Now rotate the zoom ring. The diameter of the iris will not appear to change (or if it does, only slightly, and that will be due to the lenses - you won't see the blades move). This is done to convince you that the iris isn't coupled to the zoom ring. Now look into the front end. You will see an image of the blades. Rotate the zoom ring. The image of the iris (the entrance pupil) will change dramatically in diameter with the zoom setting.

Although the mechanical stop isn't changing, the entrance pupil is. This lets the effective aperture of the lens remain constant as you change its effective focal length. It also lets you have a physical stop large enough that the Rayleigh limit is not reached.
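The arithmetic in that estimate is easy to check. A minimal sketch, assuming the usual Rayleigh criterion d = 1.22 λN and taking ~4.8 mm as the long dimension of a 1/3-inch chip (my own illustrative figure, not from the post):

```python
# Back-of-envelope check of the Rayleigh diffraction estimate above.
wavelength_mm = 700e-6   # 700 nm red light, in millimetres
f_number = 5.0           # f/5, as discussed above

# Rayleigh criterion: minimum resolvable spot diameter on the sensor.
spot_mm = 1.22 * wavelength_mm * f_number
print(f"spot size: {spot_mm:.4f} mm")        # ~0.0043 mm, matching the .004 mm figure

sensor_long_mm = 4.8                          # assumed long dimension of a 1/3" chip
points = sensor_long_mm / spot_mm
print(f"resolvable points: {points:.0f}")    # ~1100, as stated above
```

So the "about 1100 points" figure does follow from the simple-stop model; the rest of the post explains why that model doesn't apply to a real zoom lens.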

The point is that though the effective focal length of the lens may be set to 5 mm and the entrance pupil may be 1 mm, the physical stop is never that small. Light rays never get stuffed through an aperture small enough that they interfere with each other (in reality they do, of course, but in a good design this will reduce sharpness less than the myriad other ills which plague the lens designer).

As I said before, I doubt the new lens is diffraction limited - at least not at an aperture as large as f/5.

Ouch! Another stop and a half gone. Even if I don't agree with these gents' exact numbers, I am beginning to see their point! Guess we'll have to get used to the idea of controlling exposure more with ND filters and less with the iris.

I guess I would consider the image softening on a pan (or otherwise)
to be a clever way to mask HDV bandwidth choking. Given the choice,
I prefer to be the guy adding blur to a clip as opposed to having
it occur whether I want it or not.

I guess what bothers me is that the promise of HD was *more resolution.*
This whole 'jumbo' compression thing is very much like "lawyer-ese". The
real truth doesn't matter, it's what you can make people believe.
Ultimately, what you can get away with is all that really matters now.

Yes, much of the HD we see is sent at even lower bandwidth than 20 Mbps,
but WHY oh WHY am I now starting to pine for a good old-fashioned,
interlaced, composite NTSC signal?

The digital HD broadcast revolution has been hijacked and I am bummed.

Quote:

Originally Posted by Thomas Smet

HDV is a lot different than HD MPEG-2 broadcast. A lot of the time the bitrates used for broadcast are much lower.

Take 720p broadcast, for example. Here we have an MPEG-2 signal at a lower bitrate than HDV, but on top of that it is compressing 60p instead of 30p. That means that even if you had a broadcast at 19.7 Mbit/s, the video quality gets cut in half because of the double frame rate.

I am not sure these apparent relationships between 30p and 60p, and between 720p and 1080i, hold linearly true. In fact I am certain they do not. MPEG-2 makes good use of the fact that as you increase resolution there is MORE similar data, both in time and space, so the compression becomes more efficient. For this reason HD needs ONLY about twice the data rate for about 4 times more picture content.
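That sub-linear scaling is easy to see with rough numbers. The specific rates below are my own illustrative choices (a typical high-quality SD MPEG-2 rate versus the HDV 1080i video payload), not figures from the post:

```python
# Rough illustration: HD carries several times the pixels of SD
# at well under that multiple of the MPEG-2 bitrate.
sd_pixels = 720 * 480     # SD frame
hd_pixels = 1440 * 1080   # HDV 1080i frame (1440 luma samples per line)

sd_mbps = 8.0             # assumed high-quality SD MPEG-2 rate
hd_mbps = 19.7            # HDV 1080i video payload rate

pixel_ratio = hd_pixels / sd_pixels    # ~4.5x the picture content
bitrate_ratio = hd_mbps / sd_mbps      # ~2.5x the data rate
print(f"pixels: {pixel_ratio:.1f}x, bitrate: {bitrate_ratio:.1f}x")
```

Whatever exact SD rate you assume, the pixel multiple comes out well above the bitrate multiple, which is the point being made.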

So, for example, increasing the frame rate from 30 to 60 will not have a very big effect on the data required for the SAME picture quality, because the frames change LESS between images. You may see only a 2 to 5% reduction in picture quality for this major change in picture rate, at the SAME final data rate.

MPEG-2 is a remarkably good codec at reasonable data rates, and most broadcast artifacting appears because statistical multiplexing limits a given channel's bandwidth... this causes horrible artifacts because MPEG-2 then becomes inefficient. On a tape system the problem is different: the data rate is CONSTANT, so if the codec and input filtering are optimal, HDV will give good results... and does. Most of the time they will be better than DV, because the stream really is retaining more overall information... the temporal reduction becomes very beneficial in these situations.

Even on a CUT, such coding can be very good at spreading the data changes over lots of frames. The 19 Mbps of HDV is more than sufficient for most real-world situations.

Some real-world relationships that seem initially obvious to us simply don't apply to coding algorithms. This is why CODING continues to improve; decoding was always what was specified first.

Quote:

I guess I would consider the image softening on a pan (or otherwise)
to be a clever way to mask HDV bandwidth choking. Given the choice,
I prefer to be the guy adding blur to a clip as opposed to having
it occur whether I want it or not.

Well, there are a lot of factors that go into whether or not HDV or any other codec creates blur.

First off, you may be seeing exaggerated motion blur that has nothing to do with the codec. I see a LOT of that in DV- and HDV-produced material. I am ashamed to say I see it in my own work occasionally. It is due to bad technique.

Secondly, and often in combination with the above, you may be seeing vectorized quantization. That is definitely a feature, not a bug. What happens is the codec determines the direction a pixel is moving and applies motion vectors to it. You may be familiar with this from some high-end MPEG-2 DVD compressors. The net effect is that the codec can replace lots of data with a bit of nifty math. (And it's one reason you get so much better performance from AltiVec- rather than MMX/SSE-equipped processors on MPEG-2 video. Yes, SSE3 made great strides.)

DVD didn't use vectorized quantization anywhere near as much as HDV. That's why DVD so famously breaks up into blocks on motion, and why we end up having to increase the bitrates.

HDV on the other hand depends on cheap fast processors to use vectorization instead.

So, you have your choice of evils: Big blocky stuff on motion or some directionally correct blur. I'll take the latter.
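The motion-vector idea behind that tradeoff can be sketched in a few lines. This is a toy block-matching search, an illustration of the general MPEG-2 technique only, not HDV's actual encoder (real encoders search larger ranges with much faster algorithms):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, y, x, size):
    """Extract a size-by-size block whose top-left corner is (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(prev, curr, y, x, size=4, search=2):
    """Find the offset (dy, dx) in the previous frame that best predicts the
    current block; an encoder then stores the vector plus a small residual
    instead of the raw pixels."""
    target = block(curr, y, x, size)
    best, best_cost = (0, 0), sad(block(prev, y, x, size), target)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + size > len(prev) or px + size > len(prev[0]):
                continue
            cost = sad(block(prev, py, px, size), target)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic frames whose content shifts one pixel to the right between frames:
prev = [[x + 10 * y for x in range(8)] for y in range(8)]
curr = [[(x - 1) + 10 * y for x in range(8)] for y in range(8)]
print(best_motion_vector(prev, curr, 2, 2))  # → (0, -1): best match is one pixel left
```

When the match is imperfect, the residual that gets coded is what shows up visually as the directionally correct blur described above, rather than as block breakup.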

Now, I would much rather have intraframe compression at higher bitrates, but considering what this buys I'll take the tradeoff.

Quote:

Originally Posted by Jacques Mersereau

I guess what bothers me is that the promise of HD was *more resolution.*
This whole 'jumbo' compression thing is very much like "lawyer-ese". The
real truth doesn't matter, it's what you can make people believe.
Ultimately, what you can get away with is all that really matters now.

Well, that depends a bit on what flavor of HDV you are using.

1080i and 1080F/p flavors really show higher resolution.

720p flavors have it, but it can sometimes be hard to see. Of course, having to use less bandwidth for the image data means less aggressive application of the HDV codec, so with the same data rate you should get fewer compression artifacts.

IIRC there was a proposed 480p version of HDV, which would have amounted to an interframe DV. Prettier to look at, but with all the editing disadvantages of HDV. I think that was discarded early on in the codec standards process. It would have made an interesting comparison, though, between DV25, DV50 and HDV 480p.

Quote:

Originally Posted by Jacques Mersereau

Yes, much of the HD we see is sent at even lower bandwidth than 20 Mbps,
but WHY oh WHY am I now starting to pine for a good old-fashioned,
interlaced, composite NTSC signal?

The digital HD broadcast revolution has been hijacked and I am bummed.

Well, HDV is what it is. I think of it as more closely related to DVD than to what we are used to in an edit format. It is a great software engineering feat. We could do a lot worse than adopting it as the standard for the next generation of HD media to replace DVD.

Of course, MPEG-4 has great advantages over MPEG-2, and newer codecs like H.264 deliver great results at lower data rates, but at the cost of more processing than you can expect to get into cameras and decks for a while. I am not sure any PC solution can encode MPEG-4 H.264 in real time at HD resolutions, and I am likewise unsure how long it will be before we can.

Has the digital revolution been hijacked? I don't think so. We are just very early on the road. Manufacturers didn't need to give us anything as capable as HDV, especially when it is so clearly an interim solution with a short shelf life.

I expect we will get a DV100 codec, like DVCPRO HD, into these systems. DTE systems, like hard drives, will make this possible, but expensive. We will need VERY high-capacity media to archive footage. A 50 GB BD-ROM (Blu-ray) will hold about 68 minutes of a 100 Mbps signal.
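That archive figure checks out, assuming decimal gigabytes (the convention for quoted disc capacities):

```python
# Minutes of a 100 Mbps stream that fit on a 50 GB disc.
disc_bits = 50e9 * 8       # 50 GB in bits (decimal gigabytes)
rate_bps = 100e6           # 100 Mbps
seconds = disc_bits / rate_bps
print(f"{seconds / 60:.0f} minutes")  # → 67 minutes, in line with the ~68 quoted
```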

Anyway, I think we agree on the symptoms of HDV and current HD production, but not about the overall market direction.