
I understand there are benefits to GPU computing when there are many parallelized calculations to be performed; however, this is not always the case, or there would have been no reason for AMD to include the UVD block in their GPUs and APUs in the first place.

What I grasp from the HSA initiative is more like:

CPU: very good for serial, general calculations
GPU: very good for parallel calculations, still getting better at being general

There are, however, other workloads, such as video transcoding, that are not especially suitable for either; hence Intel, AMD, NVIDIA, and several ARM SoC vendors include a video transcoding block in their chips that is several times more efficient than the CPU, the GPU, or both at this specific workload.

What I would like to know is whether the HSA Foundation has plans to do for these blocks what it is doing for the GPU: helping programmers use the specialized logic transparently.

I don't know which hardware was used for the Wayland presentation, but keep in mind that the emphasis was on low-power operation. Considering that the CPU part of my E-350 reminds me of the performance level of my 2003 32-bit K8 (Sonora), saying that current mobile chips are at least 3x more powerful should be a safe assumption, and their CPU usage should scale accordingly.

Could you estimate how much time one would need to invest to get GPU-based (i.e. shader-based) acceleration of x264 at 720p and 1080p,

for a guy who is able to program in C but has no experience in kernel, X, or driver development?

I would maybe try it if it had any chance of completion without working on it for six months full-time. And Bridgman did say it's all there to get it running easily.

He never explicitly said "easy", but if that task would cost $100,000 in manpower, it would be a pointless statement, because then it would be clear to anybody that it will never happen if AMD doesn't do it themselves.


Christian König already wrote code for shader-based H.264. I am not sure whether he released it, or where.

Which video, which player, which distro? With my X120e, playing the 'Skyfall' preview, CPU usage averaged 35-55% for 720p and 70-90% for 1080p. I tried raising the resolution up to 1600x1200 and CPU usage did not rise much, a few percent at most. I am running openSUSE 12.1 with the latest updates and Catalyst 12.8, under Xfce. Desktop effects are disabled, and I think anti-aliasing is too. For the test, I downloaded the previews and played them in VLC.

I hadn't watched that video yet, but the proposals I've seen about the OpenGL implementation seem promising. If only more specifications were available to them, along with a properly working BIOS and eventually full control of power management, the game would not be the same.

For the Intel demo, it uses GStreamer with libva acceleration, rendering the decoded video directly into a GPU texture that Wayland then uses to render onto the screen.

According to the devs, that was the best their hardware could do (Sandy Bridge, I think, or maybe Ivy Bridge), but future hardware would let them display the video with a hardware overlay instead, which would save a lot of power compared to going through a GPU texture. The problem with current hardware was that the VA-decoded format was incompatible with the current hardware overlays.
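That decode-to-texture path can be sketched with gst-launch; this is a guess at the wiring rather than the demo's actual command, and it assumes the gstreamer-vaapi plugins are installed (element names and the clip.mp4 filename are mine, not from the talk):

```shell
# Hypothetical GStreamer pipeline in the spirit of the demo: libva does
# the H.264 decode on the GPU and the VA-API sink hands the decoded
# buffer straight to the compositor, so frames never round-trip through
# system memory.
gst-launch-1.0 filesrc location=clip.mp4 ! qtdemux ! h264parse \
    ! vaapidecode ! vaapisink
```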

Fixed that for you... "HSA is marketing speak for: shader-based computation without the usual drawbacks and overheads. Now you can imagine how well this will work."

The UVD unit also "uses the shaders without the usual drawbacks and overheads"; now you can imagine how well this works with the open-source drivers on an HD 2900 with a broken UVD unit, or on an HD 3870 whose UVD unit works only with the closed-source driver, and so on.
Your marketing speak is a ridiculous attempt to sell the new TrustZone DRM system; then we get (old UVD unit + TrustZone) = HSA!
And Intel already has "Intel Trusted Execution Technology"; because of this, they can deliver video acceleration right now with open-source drivers.

But hey, the fools will learn the truth when the HSA anti-consumer hardware is ready to sell; then we will all know how well TrustZone works against the customers...


If you are interested, better ask on the mailing list.

Thanks, I had even forgotten about that; the last thing we could read here was nearly a year ago. lol. I just tried to start it with mplayer and such, but it did not work; I thought it was a hidden secret and nobody talks about the support.

It seems the file libvdpau_r600.so is missing... or whatever; funny Xorg messages:

But of course no vainfo, no vdpauinfo, no XvMC stuff works, so maybe we will be talking about why it is not working in a year or so... It's funny, because if you read some of the posts here on Phoronix, you could think it was already running, if a bit buggy, back in January... ^^
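For what it's worth, the file name in that error comes from how the VDPAU wrapper library picks its backend: it builds the name libvdpau_&lt;driver&gt;.so (from the VDPAU_DRIVER environment variable if set, otherwise from the name the X driver reports) and dlopens it from its module directory. A minimal sketch of that naming rule, assuming "r600" as the driver name and a typical 64-bit module path:

```shell
# Sketch of the VDPAU wrapper's backend lookup (paths are illustrative).
# The driver name comes from $VDPAU_DRIVER when set; "r600" here is an
# assumed fallback for this card, not something the wrapper hardcodes.
driver="${VDPAU_DRIVER:-r600}"
backend="libvdpau_${driver}.so"
echo "$backend"                     # prints libvdpau_r600.so when unset
# Check whether the backend was actually built and installed, e.g.:
# ls /usr/lib64/vdpau/"$backend"
```

So if libvdpau_r600.so is nowhere under the VDPAU module directory, the shader-based decoder simply was never built or installed, and no amount of player flags will make it load.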