Author
Topic: Help determine my min requirements (Read 954 times)

OK, my last post was a bit too hopeful. Perhaps someone could help me identify some of the requirements I should be looking for?

Both of my TVs support 1080i. My cable provider gives me up to 1080i, depending on the channel. Output from my cable boxes can be component or HDMI, but not mixed (i.e., if I use HDMI for video, I have to use it for sound too).

I would like to be able to record TV in 1080i, as well as play it back in 1080i. I would be fine if playback was 720, as long as it doesn't look all goofed up. I believe I will need to pick an nVidia 7xxx or 8xxx card to get VDPAU support. (I know LinuxMCE does not support this yet, but I'll give it a shot; I'm not counting on it.) Until VDPAU is supported, I'm lost as to what CPU I should be looking for. Looking around, I haven't seen a CPU that I can use for 1080i. I hope I'm misunderstanding something.

I am also having a lot of trouble finding a mini-ITX system that has what I need with an optical S/PDIF. Does anyone know if digital coax is becoming the standard over optical? If that were the case, I wouldn't mind getting a system that does not have optical.

To try and summarise in a different way: practically any chipset will generate a 1080i/p screen resolution, and no CPU 'power' is required for that. The higher UIs (UI2 and UI2 Alpha Blended) will require more and more power the higher the resolution you choose, but it is not the actual screen resolution, it is the animation that requires the grunt. For that you will need decent (and compatible) hardware acceleration in your GPU, so this has no impact on your CPU requirements either. Thus very low power CPUs like the Intel Atom can easily handle UI2 on high-res screens, as all the work is done by the GPU. BUT - none of this has anything whatsoever to do with VDPAU.

VDPAU is a new API that allows high-end nVidia GPUs to do hardware-accelerated decoding of video streams. It has nothing to do with the screen resolution or the 3D animation of UI2. It will be needed for decoding high bit rate compressed video files/disks and/or the more advanced video codecs, like H264 (often used for HD video files and BD/HDDVD). Currently, we rely on software decoding in the CPU until this is integrated. So if you want to play BD or high resolution/high bit rate video files, you will need a high-end CPU. However, nothing in your post indicates that you need to decode such sources.

You talk about HDMI/component from your cable provider, but not how you intend to supply this to the TV. HDMI pretty much has to be connected directly to your TV (HDMI capture devices are few and far between, expensive, and of questionable quality or compatibility with LinuxMCE), so LinuxMCE's CPU/GPU is not involved in any way, except to control your cable box and TV... select inputs, volume, etc.

If you capture your component output, then again VDPAU has no part to play; you are capturing the video uncompressed, so there is nothing to decompress. Capturing analogue video (component), you will always lose quality, particularly on an HD signal, but it is doable, and at least you can then introduce the stream directly into LinuxMCE and so record it or redirect it to other MDs. You should look for commentary on the quality of the various capture boards, and also whether they have hardware compression built in to reduce the load on your CPU when storing the file during a recording session.
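To put rough numbers on why onboard hardware compression matters, here is a back-of-the-envelope sketch. The figures are illustrative assumptions (8-bit 4:2:2 sampling, ~30 full frames per second, a 12 Mbit/s MPEG2 encode), not measurements from any particular capture card:

```python
# Rough data-rate estimate for uncompressed 1080i capture
# (assumed: 8-bit 4:2:2 chroma sampling, ~30 frames/s delivered)
width, height = 1920, 1080
bytes_per_pixel = 2          # 4:2:2 averages 2 bytes per pixel
fps = 30

raw_rate = width * height * bytes_per_pixel * fps    # bytes per second
print(f"Uncompressed: {raw_rate / 1e6:.0f} MB/s, "
      f"{raw_rate * 3600 / 1e9:.0f} GB per hour")

# Compare with an assumed hardware MPEG2 encode at ~12 Mbit/s
mpeg2_rate = 12e6 / 8                                # bytes per second
print(f"MPEG2 @ 12 Mbit/s: {mpeg2_rate / 1e6:.1f} MB/s, "
      f"{mpeg2_rate * 3600 / 1e9:.1f} GB per hour")
```

In other words, uncompressed 1080i is on the order of 120 MB/s (hundreds of GB per hour), so without compression happening somewhere (on the card, or in the CPU) you cannot realistically store it.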

Thank you, that does clear up some of my confusion. I will clarify the rest of what is confusing me.

Quote

It's not 100% clear what you want to do here....

My most basic requirement would be setting up LinuxMCE to use my cable box (or two of them) as inputs, and be able to output to my TV without any loss (or noticeable loss).

Quote

To try and summarise in a different way: practically any chipset will generate a 1080i/p screen resolution, and no CPU 'power' is required for that. The higher UIs (UI2 and UI2 Alpha Blended) will require more and more power the higher the resolution you choose, but it is not the actual screen resolution, it is the animation that requires the grunt. For that you will need decent (and compatible) hardware acceleration in your GPU, so this has no impact on your CPU requirements either. Thus very low power CPUs like the Intel Atom can easily handle UI2 on high-res screens, as all the work is done by the GPU. BUT - none of this has anything whatsoever to do with VDPAU.

That clears up much of my confusion right there. I would like to be able to use UI2 with alpha blending at 1080 resolution, but I'm fine with just the overlay and no transparency if it's going to make a major difference in price (which it sounds like it will).

Quote

VDPAU is a new API that allows high-end nVidia GPUs to do hardware-accelerated decoding of video streams. It has nothing to do with the screen resolution or the 3D animation of UI2. It will be needed for decoding high bit rate compressed video files/disks and/or the more advanced video codecs, like H264 (often used for HD video files and BD/HDDVD). Currently, we rely on software decoding in the CPU until this is integrated. So if you want to play BD or high resolution/high bit rate video files, you will need a high-end CPU. However, nothing in your post indicates that you need to decode such sources.

A true newbie question here. My understanding is that when I'm watching live TV, LinuxMCE will be recording it to give me pause/rewind functionality. If I am watching/recording an HD TV show, I assume that it would be recorded with some sort of compression (which would surely eat up some CPU). Regardless, wouldn't playing this video back require decoding?

Quote

You talk about HDMI/component from your cable provider, but not how you intend to supply this to the TV. HDMI pretty much has to be connected directly to your TV (HDMI capture devices are few and far between, expensive, and of questionable quality or compatibility with LinuxMCE), so LinuxMCE's CPU/GPU is not involved in any way, except to control your cable box and TV... select inputs, volume, etc.

I should mention that I am thinking of setting up a hybrid box for my initial test run. For the connection from my hybrid to my TV, I'm fine with either HDMI or component; my TV can't do 1080p anyway. For the connection from my cable box to my hybrid, I wanted to use HDMI, but it sounds like that may not be a good idea (based on the next part).

Quote

If you capture your component output, then again VDPAU has no part to play; you are capturing the video uncompressed, so there is nothing to decompress. Capturing analogue video (component), you will always lose quality, particularly on an HD signal, but it is doable, and at least you can then introduce the stream directly into LinuxMCE and so record it or redirect it to other MDs. You should look for commentary on the quality of the various capture boards, and also whether they have hardware compression built in to reduce the load on your CPU when storing the file during a recording session.

How much quality loss are we talking about when using component? On one of my TVs I have tried both component and HDMI from my cable box to the TV, and honestly do not see a difference. Is this unnoticeable loss what you are talking about, or will it get worse once it passes through the hybrid box?

There is probably no real difference in cost between UI2 and UI2 alpha, but you will notice the ongoing issue with video tearing in alpha mode. It's annoying, so most people use overlay only. This isn't a performance thing, so don't buy a kick-arse card thinking you can overcome it; 95% of people experience it no matter which GPU (and the other 5%, I believe, just don't notice it, to be perfectly honest!)

Live TV is exactly that, live, but both of the TV subsystems (Myth and VDR) provide a "time shifting" capability similar to what you describe, allowing you to pause, rewind, etc. If you are in the US then your only option is Myth, and I don't know enough about that to comment, as I use VDR only.

As VDR is DVB only, it is actually capturing the source digital signal directly inside the PC in its compressed MPEG2 form, so no compression is performed; it is already compressed over the air, so VDR just streams it directly to disk. The recording is thus identical to the source material at the binary level... no loss. In the US you can do this as well with DVB's equivalent for North America, ATSC (ATSC = N.A. plus a few other countries, DVB = rest of the world), and Myth. However, this works only for en clair/unencrypted channels.

You are (almost certainly) talking about encrypted channels, so you unfortunately need to capture the decoded analogue output of your cable box, which then needs to be re-digitised and, yes, compressed. As this is only a Myth thing, I cannot really say much, except that encoding MPEG2 in real time is not a massive load, but a reasonable CPU is advised. Decoding MPEG2 is even less of a task.
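As a toy illustration of the time-shifting idea (a conceptual sketch only, not how Myth or VDR are actually implemented): the already-compressed stream is continually appended to a file on disk, while playback reads from its own, possibly earlier, offset in that same file. Pausing just means the read offset stops advancing while writes continue.

```python
import os
import tempfile

# Toy sketch of time shifting (NOT the real Myth/VDR implementation):
# the incoming, already-compressed stream is appended to a buffer file
# while playback reads from an independent offset behind the write head.
class TimeShiftBuffer:
    def __init__(self, path):
        self.writer = open(path, "wb")
        self.reader = open(path, "rb")

    def ingest(self, chunk: bytes):
        # The live stream keeps landing on disk even while "paused"
        self.writer.write(chunk)
        self.writer.flush()

    def seek_back(self, n: int):
        # Rewind playback n bytes behind the current read position
        self.reader.seek(max(0, self.reader.tell() - n))

    def play(self, n: int) -> bytes:
        return self.reader.read(n)

path = os.path.join(tempfile.mkdtemp(), "timeshift.ts")
buf = TimeShiftBuffer(path)
buf.ingest(b"ABCDEF")     # live stream arriving
live = buf.play(3)        # watching just behind the write head -> b"ABC"
buf.ingest(b"GH")         # recording continues regardless
buf.seek_back(2)          # viewer hits rewind
replay = buf.play(2)      # -> b"BC"
```

The point of the sketch is only that pause/rewind costs disk space, not decoding effort; whether the stream hitting the disk arrived pre-compressed (DVB/ATSC) or had to be encoded first (analogue capture) is where the CPU question comes in.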

In terms of loss, that is a subjective thing - it can be very significant, or almost unnoticeable. However, if capturing 1080i/p signals, then a decent quality card is indicated and onboard hardware compression is almost mandatory. Others, hopefully, will chime in with their experience. But FYI, large posts like these often discourage people from jumping in. If no one does, then split that subject out and start a new thread on it - a single paragraph, no more than a few sentences, direct, to the point, with a clear and sensible question.