What I was saying was that videophiles would give a crap about losing stuff most people won't notice, like the so-called audiophiles who hate MP3s. The average person will have an HD signal they think is better than the old 480.

Probably only because they paid for something called "HD", not because it has any better quality. Better than 1080i or 720p, anyway.

What I wonder is whether H.264, which usually gives 50:1 compression, will shrink the uncompressed 10.2Gbps HDMI stream to 204Mbps. That would sail across Gb-e wires, and even fit in a couple-few WiFi 802.11g channels. If so, cheap multichannel 802.11g transceivers could be right around the corner, like at the 2009 CES.
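A quick back-of-the-envelope check of those numbers (the 50:1 ratio is the assumption above; the link rates are nominal, and real 802.11g throughput is well under the 54Mbps channel rate):

```python
import math

# Figures from the post; nominal link rates, not real-world throughput.
HDMI_RAW_GBPS = 10.2      # uncompressed HDMI 1.3 link rate
RATIO = 50                # assumed H.264 compression ratio

compressed_mbps = HDMI_RAW_GBPS * 1000 / RATIO
print(compressed_mbps)    # 204.0 Mbps

GBE_MBPS = 1000           # nominal gigabit Ethernet
G_CHANNEL_MBPS = 54       # nominal 802.11g channel rate

print(compressed_mbps < GBE_MBPS)                    # True: fits in Gb-e
print(math.ceil(compressed_mbps / G_CHANNEL_MBPS))   # 4 nominal 802.11g channels
```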

Matthew - realise that the comment was tongue-in-cheek... but same principle: I'm sure H.264 can get as high as 50:1, but certainly not in realtime without some real kick-arse hardware

(btw, it often only achieves this because it can hold a dynamic buffer of up to 16 prior frames to compute the motion deltas from, whereas most of the previous MPEGs only held 1 or 2 prior frames. Having to wait an indeterminate amount of time with frames in the buffer would also cause more "realtime" issues in a live stream; lip-syncing is bad enough now!! I'm guessing that the algorithm would have to disable such a deep frame buffer for realtime streams, and that would dramatically impact the ratio, too)
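The reference-depth point can be shown with a toy sketch - this is not real H.264: "frames" here are just lists of numbers and the "motion delta" is a plain sum of absolute differences. A deep buffer can still match content a shallow one has already evicted, at the cost of holding more frames:

```python
from collections import deque

def block_cost(a, b):
    """Sum of absolute differences between two equal-length 'frames'."""
    return sum(abs(x - y) for x, y in zip(a, b))

def encode_stream(frames, max_refs):
    """Total residual when each frame is coded against its best reference."""
    refs = deque(maxlen=max_refs)   # H.264 allows up to 16; older MPEGs held 1-2
    total_cost = 0
    for f in frames:
        if refs:
            total_cost += min(block_cost(f, r) for r in refs)
        refs.append(f)
    return total_cost

# A stream that repeats an earlier frame: the deep buffer still holds it,
# the shallow buffer has already thrown it away.
frames = [[0, 0], [9, 9], [8, 8], [0, 0]]
print(encode_stream(frames, max_refs=1))   # 36: shallow buffer, bigger residual
print(encode_stream(frames, max_refs=3))   # 20: deep buffer finds the old match
```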

Well, I was "kidding on the square". Whether or not it actually works, I won't be surprised to see announcements of "working" products at the 2009 CES, especially if "wireless HDMI" continues to get the same uncritical coverage it already has this year.

But most video isn't really "realtime" in latency. Here in NYC, I routinely talk on the phone with people while watching the same TV channel on the same cableco, and can often hear a difference of up to 10s, which is by no means evidence of the maximum. And who cares? Even if the lag is over a minute, the only real effect is that realtime clocks on TV (like on the half-hour TV news shows) will be out of sync by a digit, which seems inconsequential in a media environment like LMCE which can display truly realtime NTP. And even the basic FF/RV "cable movie on demand" controls I have now are "soft" by at least a few seconds.

For LAN media playback, the same MD buffers that improve compression ratios can also be used for local FF/RV while the latent network recovers more key frames outside the buffer (dropping the intermediary frames during FF/RV should improve the "compression" even more drastically), minimizing control/response feedback latency.
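The keyframe-only FF/RV idea might look something like this sketch; the frame records and GOP layout are invented stand-ins for whatever the MD actually buffers:

```python
# During fast-forward, keep only the key frames in the buffered GOP and
# drop the delta frames between them; `speed` skips over key frames too.

def fast_forward(buffered_frames, speed):
    """Yield every `speed`-th key frame; delta frames are dropped."""
    keys = [f for f in buffered_frames if f["key"]]
    return keys[::speed]

# Invented 16-frame GOP with a key frame every 4th frame.
gop = [{"t": t, "key": (t % 4 == 0)} for t in range(16)]
print([f["t"] for f in fast_forward(gop, speed=2)])   # [0, 8]
```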

So while I don't know about H.264 delivering 50:1, especially on 1080p, even 11:1 or better (which is what MP3 gets for audio) would fit in Gb-e. And though H.264 takes about 10-100x the processing of MPEG-2, MPEG-2 decoders are really cheap, and there's a 16-stream H.264 DVR for $1200 which also compresses, so I don't think <$100 H.264-out PCI-e cards are too far away.

Understood - but let me clarify what I meant by realtime. I don't mean "live", I agree that's not relevant.

Introducing a delay of even up to 1 min as you suggest doesn't compensate and allow less powerful/cheaper hardware to be used. By realtime, I mean the hardware needs to be able to process the video stream at the same rate it is coming in - in other words, not take an existing file, convert it, then play it. Because TV is "realtime" in this sense, if the hardware is not powerful enough, even a delay (buffer, really) of 1 min will eventually run out, because the hardware will progressively get further and further behind... then black screen! So it is the nature of TV not having a set "end point" like a file does that prevents you from using the lower-powered hardware.

If you used that lower-powered hardware against a file, that would be fine; you would just have to wait longer before you could begin watching the content. But that doesn't logically have an analogue in a realtime stream: no matter how long you waited before beginning the content, eventually the content would catch up with the hardware and the stream would stop. So the hardware needs to be able to encode at no less than 1.5Gb/s.

Moreover - even if introducing a delay (buffer period) of 1 min did fix this issue of lower-powered hardware, it would happen every time you changed channel - not many people would be prepared to wait 1 min per "flip" while channel surfing!

I don't disagree that there are encoders (decoders are always cheap because they have far less to do) that can handle realtime streams, possibly cost-effectively. I guess this goes back to my original point - these algorithms are necessarily adaptive. The higher the quality you want out, the more time they take to produce it (at the same price point, roughly!) - once you configure a level of compression quality that drops below 25/30/50/60 frames per second, it can no longer be used for this job, effectively setting an upper limit on the quality of the compression for a given piece of hardware (and therefore price).
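In other words, configuration reduces to picking the best preset that still hits the stream's frame rate. A sketch of that selection, where the preset names and encode-fps figures are invented for illustration:

```python
def best_realtime_preset(presets, stream_fps):
    """presets: (name, measured encode fps) ordered lowest to highest quality.
    Returns the highest-quality preset that still keeps up, else None."""
    usable = [p for p in presets if p[1] >= stream_fps]
    return usable[-1][0] if usable else None

# Invented benchmark figures for one hypothetical encoder.
presets = [("low", 120), ("medium", 60), ("high", 31), ("ultra", 14)]
print(best_realtime_preset(presets, 30))   # 'high': best that sustains 30fps
print(best_realtime_preset(presets, 60))   # 'medium': 60fps rules out 'high'
```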

Remember the first DVD recorders? Their video quality was appalling, even though some of them used MPEG-2 - those encoders for CE equipment were VERY expensive initially, and so this set an effective upper limit on quality versus what consumers were prepared to pay. As volumes and technologies progressed, they were able to achieve much higher encoded bitrates at lower cost, so quality went up. Same principle - if there is demand for this then I agree that 2009 will be interesting, but I think that demand will only come from HD DVD and Blu-ray recorders, not from wireless HDMI cable replacements... much higher volume...