
PowerColor SCS3 Radeon HD 4650 512MB

03-26-2009, 10:20 AM

Phoronix: PowerColor SCS3 Radeon HD 4650 512MB

Back in December we looked at the Sapphire Radeon HD 4650 512MB OC graphics card. This mid-range ATI graphics card performed well under Linux, and what separated it from the other Radeon HD 4650 cards on the market was its factory overclock of 650/900MHz. The PowerColor SCS3 Radeon HD 4650 512MB is not overclocked beyond the RV730 PRO specifications, but it instead offers passive cooling. Is this an ideal candidate for a Linux-based HTPC? In this article we are looking at the PowerColor SCS3 Radeon HD 4650 512MB.

Comment

Well, bitstream decoding is one of the most awaited features for me ;-) On Windows it works well; it's actually kind of funny to watch a 1080p movie with 3 or 4 shaders combined at 0% CPU / 0-2% GPU usage on an HD 4850, when the CPU alone needs 30-40% usage to do the same ;-)

Comment

Bridgman, correct me if I'm wrong, but you get better image quality using GPU acceleration instead of CPU decoding. My understanding is that the more you use the GPU, the better the picture you get (getting rid of MPEG artifacts, for example, better scaling, etc.).

That's because the GPU can perform far more of these specific calculations than a CPU ever could.

Finally, the question is not so much about acceleration (a quad-core can handle even 1080i, I guess) but rather about quality. Isn't it?

I'll also be very happy when it becomes possible to encode videos using the GPU, as fglrx already allows on Windows. I don't have Windows, but even though my C2Q can handle converting my m2ts files, I'd be more than happy to use my 4870 to convert my videos...

Comment

Bridgman, correct me if I'm wrong, but you get better image quality using GPU acceleration instead of CPU decoding. My understanding is that the more you use the GPU, the better the picture you get (getting rid of MPEG artifacts, for example, better scaling, etc.).

That's because the GPU can perform far more of these specific calculations than a CPU ever could.

Most of the opportunities to improve image quality (filtering, fancy de-interlacing, post-filtering, etc.) are in the render part of the pipe, which is already accelerated.
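To make the render-side filters concrete, here is a minimal sketch of one of them: "bob" de-interlacing, which rebuilds a full frame from a single field by interpolating the missing lines. The function name and layout are illustrative, not from any real driver; a GPU shader would evaluate the same per-pixel logic in parallel.

```c
#include <stdint.h>

/* Fill a full-height frame from the even (top) field of an interlaced
 * source. src and dst are 8-bit luma planes, w x h pixels.
 * Even lines are copied; odd lines are the rounded average of the
 * field lines above and below them. */
void bob_deinterlace_top(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        if (y % 2 == 0) {
            /* Even lines come straight from the top field. */
            for (int x = 0; x < w; x++)
                dst[y * w + x] = src[y * w + x];
        } else {
            /* Odd lines are interpolated from the neighbouring
             * even lines (clamped at the bottom edge). */
            int above = y - 1;
            int below = (y + 1 < h) ? y + 1 : y - 1;
            for (int x = 0; x < w; x++)
                dst[y * w + x] = (uint8_t)
                    ((src[above * w + x] + src[below * w + x] + 1) / 2);
        }
    }
}
```

Real de-interlacers (motion-adaptive, vector-adaptive) are far smarter than this, but they share the same per-output-pixel structure, which is why they map so naturally onto shaders.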

For the decode part of the pipe (the first part) you basically "follow the instructions" with very few opportunities to improve quality.

Most of the image enhancement stuff is shader-based, working on the fully decoded image, so it pretty much has to be in the "render" part of the pipe. You can think of the Xv API as the dividing line between "decode" and "render".

The order of the last few steps can change, and steps can be combined in a single shader.
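As a hedged illustration of that step combining (the function names here are made up, not from any driver): instead of running a scaling pass and then a brightness pass over a whole frame, a single "shader" can do both per output pixel, saving a full read and write of the intermediate image.

```c
#include <stdint.h>

static uint8_t clamp8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* 2x nearest-neighbour upscale fused with a brightness offset in one
 * pass, evaluated per output pixel the way a GPU shader would.
 * src is sw x sh; dst must hold (2*sw) x (2*sh) pixels. */
void scale2x_brighten(const uint8_t *src, int sw, int sh,
                      uint8_t *dst, int bright)
{
    int dw = sw * 2, dh = sh * 2;
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] =
                clamp8(src[(y / 2) * sw + (x / 2)] + bright);
}
```

Running the two steps separately would produce the same pixels; fusing them just halves the memory traffic, which is the point of combining steps in one shader.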

Finally, the question is not so much about acceleration (a quad-core can handle even 1080i, I guess) but rather about quality. Isn't it?

Decode acceleration is mostly about reducing CPU utilization and power consumption. Render acceleration does as much or more to reduce CPU utilization, but there are also opportunities to improve image quality. Scaling and motion compensation (MC) usually put the biggest load on the CPU.

All of the steps from MC down are great candidates for processing with shaders. Most of the earlier steps (from inverse quantization (IQ) on down) can also be done on shaders, but small, dedicated hardware can usually do the same work with less power consumption.
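To show why MC maps so well to shaders, here is a hypothetical, heavily simplified motion-compensation kernel (whole-pixel vectors only, no sub-pel interpolation; all names are illustrative): a block is predicted by copying from the reference frame at a motion-vector offset, then the decoded residual is added per pixel.

```c
#include <stdint.h>

static uint8_t clamp8u(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Predict a bsize x bsize block at (bx, by) from the reference frame
 * shifted by the motion vector (mvx, mvy), then add the residual.
 * ref is an 8-bit plane with the given row stride; the caller must
 * ensure the shifted block stays inside the reference frame. */
void mc_block(const uint8_t *ref, int stride,
              int bx, int by, int mvx, int mvy,
              const int8_t *residual, uint8_t *out, int bsize)
{
    for (int y = 0; y < bsize; y++)
        for (int x = 0; x < bsize; x++) {
            int pred = ref[(by + y + mvy) * stride + (bx + x + mvx)];
            out[y * bsize + x] =
                clamp8u(pred + residual[y * bsize + x]);
        }
}
```

Every output pixel is an independent fetch-and-add, which is exactly the data-parallel pattern shaders are built for; the earlier bitstream-parsing steps, by contrast, are serial and branchy, which is where the dedicated decode hardware earns its keep.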