Hi all,
I have an amd64 box with a Radeon HD3200 in it, using the open-source radeon driver.

In the past I was using an Nvidia GF 9600 with VDPAU support, which offloads decoding to the GPU and reduces CPU usage, but now with this Radeon I don't know what I can do.
Digging around the web, I found nothing useful, so: could some good soul explain to me exactly what I should activate for video decoding?
Or at least, how to find out what my GPU does and doesn't support.
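In case it helps, one quick way to check is to run the standard query tools. A minimal sketch, under the assumption that the corresponding packages (xorg's xvinfo, plus vdpauinfo and vainfo) may or may not be installed on your box:

```shell
# Probe which video-acceleration query tools are present on this system.
# Each one, when run on a working setup, lists what the driver supports
# (Xv adaptors, VDPAU decoder profiles, VA-API profiles respectively).
report=""
for tool in xvinfo vdpauinfo vainfo; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="installed, run it to list supported ports/profiles"
  else
    status="not installed"
  fi
  report="$report$tool: $status
"
  echo "$tool: $status"
done
```

On the open-source radeon driver you'd expect xvinfo to show Xv adaptors at minimum; the other two only report something useful once the corresponding acceleration layer exists.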

xv and xvmc at the moment. I've been reading about the VDPAU Gallium3D state tracker, but it isn't done yet and no driver supports it. When that gets done, though, all Gallium3D drivers should be able to support it. It's just a matter of time now. I think they are in desperate need of coders who know how the graphics stack functions.
_________________
MB: Biostar TForce 6100 AM2 @ 250x10
CPU: AMD Athlon 64 3800+ X2 @ 2500mhz
MEM: G. Skill DDR2-800 2GB @ DDR2-1000
GPU: nVidia GeForce 7600 GT
OS: Gentoo Linux 2006.1
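For the moment, plain Xv is the practical baseline: it offloads scaling and colourspace conversion to the GPU, though the CPU still does the actual decode. A minimal sketch (the file name is hypothetical; the script just builds and prints the command line rather than launching mplayer):

```shell
# Plain Xv output: the GPU handles scaling/colour conversion,
# the CPU still does all the video decoding.
file="movie.avi"              # hypothetical file name
cmd="mplayer -vo xv $file"
echo "$cmd"
```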

@duby2291: The gallium vdpau tracker is very much finished. The problem is somewhere else - GPUs suck at decoding (so you won't see a shader based h264 decoder), and accessing the dedicated decoders would require either reverse-engineering or AMD releasing docs. And that's not a simple thing: https://forums.gentoo.org/viewtopic-t-946818.html

My understanding is that AMD is not going to release documents. It won't happen. It has something to do with DRM, as in restrictions management. As far as shaders doing the work, that is kinda the whole point of Gallium3D. If they didn't intend shaders to do the work, then it would not have been implemented on Gallium3D. That's what Gallium3D does. It's just a matter of the driver being able to compile TGSI into native hardware instructions. My understanding of that situation is that they are currently working on a new LLVM-based compiler that should resolve a lot of the problems that the old shader compiler had. Both radeon and nouveau are working towards this objective now... It's exactly the same situation with OpenCL. As soon as an adequate compiler infrastructure is implemented, then it should work.

Like I said, it is just a matter of time... They are in desperate need of coders who know how the graphics stack works and how compiler infrastructures work.

EDIT: I think this whole situation should improve tremendously once GCN-based architectures get good support. That architecture was designed more for compute than for throughput, whereas the old VLIW architectures were designed more for throughput. It should be a whole lot easier to code an adequate compiler infrastructure for the newer architecture.

My understanding is that AMD is not going to release documents. It won't happen. It has something to do with DRM, as in restrictions management.

They're trying. I wouldn't get my hopes up, but who knows, it might happen.

duby2291 wrote:

As far as shaders doing the work, that is kinda the whole point of Gallium3D. If they didn't intend shaders to do the work, then it would not have been implemented on Gallium3D. That's what Gallium3D does. It's just a matter of the driver being able to compile TGSI into native hardware instructions.

My understanding of that situation is that they are currently working on a new LLVM-based compiler that should resolve a lot of the problems that the old shader compiler had. Both radeon and nouveau are working towards this objective now... It's exactly the same situation with OpenCL. As soon as an adequate compiler infrastructure is implemented, then it should work.

No, it will not work. GPUs excel at massive parallelism, but decoding is pretty much a serial process. No amount of compiler optimization will change that. GPUs simply aren't built for the kind of task that is video decoding. That's why graphics cards have dedicated decoder units. Intel/Nvidia/AMD wouldn't bother with those if the GPU could do it. But they did bother with them, because the GPU cannot do it.

@duby2291: thank you, but activating XvMC in mplayer and using it with -vo xvmc results in:

Code:

The output video driver you chose is incompatible with this codec.
Try adding the scale filter to your filter sequence, for example: -vf spp,scale

*It's translated from Italian, so I hope it's understandable.
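That error usually means the decoder and the output don't match: -vo xvmc only accepts mplayer's XvMC-capable MPEG-1/2 decoder, which has to be forced with -vc. A sketch assuming an MPEG-2 file (the file name is hypothetical; ffmpeg12mc is the XvMC decoder entry in mplayer's codecs.conf; the script only builds and prints the command):

```shell
# XvMC output needs the matching XvMC decoder forced via -vc; with the
# default software decoder, mplayer prints the incompatibility error above.
file="movie.mpg"              # hypothetical MPEG-2 file
cmd="mplayer -vo xvmc -vc ffmpeg12mc $file"
echo "$cmd"
```

Note that this only ever works for MPEG-1/2 content; XvMC has no decoder for x264 or other modern codecs.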

@Gusar:
Yes, probably my google-fu is very weak, or probably I was using the wrong keywords when searching; anyway, like you, I'm amazed to find out that two recent threads asked similar questions and I hadn't seen them.
The main problem for me is that there are a lot of terms (xv, xvmc, xvba, vaapi, vdpau, mesa, gallium, etc...) and I don't know exactly what they do or how they cooperate. Several times someone refers to "vdpau" as the API and someone else as the hardware support, which ends in confusion.
So any attempt to search for anything ends up with either too many generic results or too few, overly advanced (for me) discussions.
Yes, I'm a noob; no one is born an expert.
Anyway, thank you, now I know a little more about AMD/ATI and their (un)support.

My understanding is that AMD is not going to release documents. It won't happen. It has something to do with DRM, as in restrictions management.

They're trying. I wouldn't get my hopes up, but who knows, it might happen.

duby2291 wrote:

As far as shaders doing the work, that is kinda the whole point of Gallium3D. If they didn't intend shaders to do the work, then it would not have been implemented on Gallium3D. That's what Gallium3D does. It's just a matter of the driver being able to compile TGSI into native hardware instructions.

My understanding of that situation is that they are currently working on a new LLVM-based compiler that should resolve a lot of the problems that the old shader compiler had. Both radeon and nouveau are working towards this objective now... It's exactly the same situation with OpenCL. As soon as an adequate compiler infrastructure is implemented, then it should work.

No, it will not work. GPUs excel at massive parallelism, but decoding is pretty much a serial process. No amount of compiler optimization will change that. GPUs simply aren't built for the kind of task that is video decoding. That's why graphics cards have dedicated decoder units. Intel/Nvidia/AMD wouldn't bother with those if the GPU could do it. But they did bother with them, because the GPU cannot do it.

It doesn't matter. AMD is not going to release UVD docs. It must be done with shaders or it isn't going to happen at all. I believe it will be difficult to do, but I don't believe it "cannot" be done. It is being worked on as we speak. It's going to happen. There is no other route to take. The whole reason for spending all this time developing this new LLVM compiler infrastructure is to make a new platform that is more suitable for compute tasks. It must be done in order to facilitate things like video decode and OpenCL and other things.

EDIT: I agree with you that current GPU architectures aren't suitable for compute jobs. But they are capable of computing. This new round of architectures is even more capable of computing than the last. It only gets better from here. In the future (only god knows how far) the GPU will be capable of doing most co-processing. When that day comes, the compiler infrastructure that is being built today is going to come in real handy. That's why I say that even things that aren't easy now are in fact very worthwhile pursuits. Even though it's hard, it still needs to be done.

And you know that how? Chances are slim, but flat out saying "they're not going to" without anything to back it up is silly.

duby2291 wrote:

It must be done with shaders or it isn't going to happen at all.

Then it isn't going to happen.

duby2291 wrote:

I believe it will be difficult to do, but I don't believe it "cannot" be done. It is being worked on as we speak.

No it isn't. There was a person at AMD working on it, but he, like everyone else who tried before, discovered it can't be done. So he instead switched to writing UVD code, but there's no guarantee he'll be able to release that code.

duby2291 wrote:

It's going to happen. There is no other route to take.

Everyone who tried to do something with video on the GPU (be it decoding or encoding) gave up. Every single one. If you think you're smarter than all of them, go ahead, try yourself. Or you can trust them when they say GPUs aren't suitable for this kind of task.

duby2291 wrote:

The whole reason for spending all this time developing this new LLVM compiler infrastructure is to make a new platform that is more suitable for compute tasks.

EDIT: I agree with you that current GPU architectures aren't suitable for compute jobs. But they are capable of computing. This new round of architectures is even more capable of computing than the last. It only gets better from here. In the future (only god knows how far) the GPU will be capable of doing most co-processing. When that day comes, the compiler infrastructure that is being built today is going to come in real handy. That's why I say that even things that aren't easy now are in fact very worthwhile pursuits. Even though it's hard, it still needs to be done.

Wow, are you stubborn. "Compute" isn't some sort of magic. You can't say "It's compute! It can do anything!"
I can only repeat: GPUs excel at massive parallelism. Keyword "massive". Video decoding is a serial process. The magic of "compute" can't change this.

Did you just skip the whole thread?
_________________
“And even in authoritarian countries, information networks are helping people discover new facts and making governments more accountable.” – Hillary Clinton, Jan. 21, 2010

http://www.phoronix.com/scan.php?page=news_item&px=OTQ1MQ <- Again, have you noticed the date of the article? Christian König is the "AMD person" I mentioned above. He did what that article says, he wrote the vdpau state tracker and the mpeg2 decoder. But he is *not* working on an h264 decoder, he's writing UVD code.

duby2291 wrote:

It IS being done. It WILL happen... It's just a matter of time.

None of the projects you linked to above are active anymore. Corresponding precisely to what I said - everyone who tried, failed.

OpenVideo supports the GPU fixed-function hardware Unified Video Decoder (UVD), which allows interoperability with OpenCL through a common API (OpenDecode API). OpenVideo provides the way for all OpenCL-based video applications to access the fixed-function hardware in GPUs.

So it's about accessing UVD and using the video decoded by UVD in an OpenCL context, *not* for decoding the video itself in OpenCL. Nvidia has had something like this for a long time in CUDA, it's called nvcuvid.

Clearly no amount of evidence is going to change your mind... When it happens I'll be sure to post here, though. If all else fails, an OpenCL decoder will be written, but first OpenCL needs to be well supported. Hell, if MS can do it with HLSL, then it CAN obviously be done. It is being done right now. It will happen. The foundation work is simply hard and there isn't a large enough pool of skilled programmers who know how to do it. Once it's done, though, it will work. It may not be as efficient as a special-purpose hardware implementation, but it will get the job done and pull that load off the CPU.

Because OpenCL is magic? There was an attempt at an OpenCL VP8 decoder. You can search the Phoronix forums for how that went. Hint: It didn't go well.

duby2291 wrote:

Hell if MS can do it with HLSL

Wait, what? Where does MS have a HLSL decoder? They have a crappy software one and they have hooks into the dedicated hardware decoding units.

duby2291 wrote:

It is being done right now.

The power of denial. It's amazing.

Clearly you didn't read anything I posted... If you didn't read anything, how could you debunk anything? You didn't post any evidence... All you did was post a link to a forum thread where -you- stated that it couldn't be done... That isn't evidence...

VDPAU is working right now. It's just that the only thing it supports is XvMC at the moment. Say you use mplayer as your media player backend, select the vdpau video output, and then try to play an x264 video: it won't work. But it will work on MPEG-1 and MPEG-2 video, things like VCD and DVD.
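For reference, the MPEG-2-over-VDPAU path in mplayer looks like this. A sketch assuming an MPEG-2 file (the file name is hypothetical; ffmpeg12vdpau is mplayer's VDPAU MPEG-1/2 decoder name; the script only builds and prints the command):

```shell
# VDPAU output plus the VDPAU MPEG-1/2 decoder. h264 would need
# ffh264vdpau, which the shader-based Gallium decoder can't back.
file="dvd_rip.mpg"            # hypothetical MPEG-2 file
cmd="mplayer -vo vdpau -vc ffmpeg12vdpau $file"
echo "$cmd"
```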

VDPAU supporting XvMC makes no sense. XvMC is a framework for hardware video decoding, just like VDPAU. You either use one or the other.

@cord: It means you have VDPAU, plain and simple. There's only a shader-based MPEG-2 decoder, so its practical use is limited. Flash benefits from it, though: it'll be able to use VDPAU for hardware presentation (the "P" in VDPAU) instead of doing everything in software.