OSNews: http://www.osnews.com/story/24754/VDPAU_on_Radeon_Starts_Working
OSNews.com (http://www.osnews.com) - Exploring the Future of Computing. Copyright 2001-2016, David Adams.
Nice work.. (http://www.osnews.com/thread?473566)

but I feel that developers waste their efforts. What we need is standardized hardware. This effort is good only for Linux, and it hides the fact that there are only 2 players in the GPU industry. In my understanding we need a hardware interface like USB/USB mass storage, for example, in order to make use of accelerator cards and leave the hardware dark ages once and for all. Even in cloud computing people are trying to avoid vendor lock-in. The above work is Linux/ATI-Nvidia lock-in, and though I love Linux this is not something I am very happy about. We need standardized accelerator cards that could work out of the box with every OS that chooses to support them. Kudos to the developers, but it is not how things are supposed to work. Unfortunately I see the same attitude in ZiiLabs and VIA. Where are the standards bodies?
-- fithisux, Wed, 18 May 2011 15:13:00 GMT

RE: Nice work.. (http://www.osnews.com/thread?473592)
As if that is ever going to happen.
-- No it isnt, Wed, 18 May 2011 18:09:00 GMT

RE: Nice work.. (http://www.osnews.com/thread?473602)
The standards bodies have tried to do something...

In my understanding we need a hardware interface like USB/USB mass storage, for example, in order to make use of accelerator cards and leave the hardware dark ages once and for all.

USB hardware is ridiculously simple and has an incredibly limited scope compared to what a GPU does. It was also built on the back of the standards that came before it (UART serial links) and has had decades to mature. The main thing that makes USB appropriate for hardware standardization is that it has a very simple and narrow external interface to software - it does nothing more than move bytes back and forth.

Pointing at USB as an example is silly - a GPU is at least a few orders of magnitude more complex on the outside (let alone internally).

We had standardized graphics hardware once - remember VESA BIOS Extensions? There are only about 20 or so actual functions available in VBE 3.0 - it's useful to a point, but it implements only a very tiny subset of what a video card can actually do, even in the basic world of 2D. Regardless, the vast majority of real-world video drivers did not use the VESA BIOS: it was OK as a fallback when nothing else was available, but it was much slower than using drivers written against native hardware functionality. And it never even thought about dealing with 3D...

Even fixed-function 3D is insanely complex. 3D with unified shaders is even more complex, but at least the external-facing stuff is gradually converging on like-minded interfaces. Regardless, we haven't even reached a point where the basic rendering method used by 3D hardware has converged - you have some cards that use tile-based rendering, some that use z-occlusion, and some have even proposed doing ray-tracing. Some development is better done using immediate-mode APIs, other development with deferred rendering. I have no idea how you could possibly standardize all this in hardware when there are so many non-trivial differences that have to be exposed to make writing a functional API layer workable.
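To make the immediate vs. tile-based distinction concrete, here is a toy Python sketch (invented for illustration, nothing like real hardware): the same scene drawn primitive-by-primitive and tile-by-tile produces identical output, even though the memory-access pattern - exactly the kind of difference a hardware standard would have to pin down - is completely different.

```python
# Toy sketch (invented for illustration, nothing like real hardware):
# the same scene rendered "immediate mode" style and tile-based,
# giving identical output from completely different memory-access patterns.

W, H, TILE = 8, 8, 4

def render_immediate(prims):
    fb = [[0] * W for _ in range(H)]
    for (x0, y0, x1, y1, color) in prims:       # draw each primitive as submitted
        for y in range(y0, y1):
            for x in range(x0, x1):
                fb[y][x] = color
    return fb

def render_tiled(prims):
    fb = [[0] * W for _ in range(H)]
    for ty in range(0, H, TILE):                # walk the screen tile by tile
        for tx in range(0, W, TILE):
            for (x0, y0, x1, y1, color) in prims:
                # touch only the pixels of this primitive inside this tile
                for y in range(max(y0, ty), min(y1, ty + TILE)):
                    for x in range(max(x0, tx), min(x1, tx + TILE)):
                        fb[y][x] = color
    return fb

scene = [(0, 0, 6, 6, 1), (3, 3, 8, 8, 2)]      # two overlapping rectangles
assert render_immediate(scene) == render_tiled(scene)
```

Both renderers must agree on the final framebuffer, but nothing about how they get there can be shared; any standard API layer has to live above that divide.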

The above work is Linux/ATI-Nvidia lock-in and though I love Linux this is not something I am very happy about.

VDPAU is a vendor lock-in? How? Sure, Nvidia originally wrote it, but it is wide open. S3 supported it in their Chrome GPUs (for what that is worth), so other third parties could have done it. It is no longer an Nvidia API - it is a Linux API. All Intel, ATI, etc. need to do to support it is write support for it in their drivers.
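That loose coupling comes from VDPAU's dispatch design: the application asks the driver for entry points by function ID (VdpGetProcAddress in the real C API), so any vendor can plug its own implementations in behind the same API. A rough Python model of the idea - all names here are illustrative, not the real API:

```python
# Rough Python model of VDPAU's dispatch design (illustrative names only,
# not the real C API): the app looks up entry points by function ID, so
# any vendor backend can supply its own implementations behind one API.

FUNC_DECODER_CREATE = 1
FUNC_DECODER_RENDER = 2

class VendorBackend:
    """A hypothetical driver: registers one callable per function ID."""
    def __init__(self, name):
        self._table = {
            FUNC_DECODER_CREATE: lambda codec: f"{name}:decoder[{codec}]",
            FUNC_DECODER_RENDER: lambda dec, bits: f"{dec} decoded {len(bits)}B",
        }

    def get_proc_address(self, func_id):        # analogous to VdpGetProcAddress
        return self._table[func_id]

def play(backend, bitstream):
    create = backend.get_proc_address(FUNC_DECODER_CREATE)
    render = backend.get_proc_address(FUNC_DECODER_RENDER)
    return render(create("h264"), bitstream)

print(play(VendorBackend("nouveau"), b"\x00" * 4096))
# -> nouveau:decoder[h264] decoded 4096B
```

The application never links against vendor symbols directly, which is why supporting the API is purely a driver-side job.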

We need standardized accelerator cards that could work out of the box with every OS that chooses to support them.

We have that now with OpenGL, and it doesn't require standardized hardware. If you specifically mean video decode acceleration, remember we are talking about GPUs - there is no standard for that because the video card for the most part doesn't actually implement anything that looks like video decode acceleration... VDPAU (ideally) uses general-purpose shader features on the GPU to do its work - there is no "special" hardware and therefore no "special" hardware interfaces.

If you want fixed-function video decode accelerators there are many available, and standardizing those might even make sense - but that has nothing to do with VDPAU or GPUs in general.
-- galvanash, Wed, 18 May 2011 19:40:00 GMT (edited 2011-05-18 19:42 UTC)

RE[2]: Nice work.. (http://www.osnews.com/thread?473609)

It was a good idea, and back in '96 it was quite advanced too, but it simply wasn't flexible enough and thus didn't gain enough traction. I actually had a card with a subset of VBE implemented and was toying around with it for a while, just out of curiosity.

Sadly, hardware vendors have absolutely no interest in implementing a modern version of such; user lock-in is so much more profitable.
-- WereCatf, Wed, 18 May 2011 20:12:00 GMT

RE: Nice work.. (http://www.osnews.com/thread?473614)
Only two players for GPUs? I wonder which those would be...
I guess you meant just AMD and Nvidia, but you are totally wrong on that. There are a lot more players, some of them major, like Intel.
And yeah, the "crappy" Intel GPUs are good enough to accelerate videos.

but I feel that developers waste their efforts. What we need is standardized hardware. This effort is good only for Linux, and it hides the fact that there are only 2 players in the GPU industry. In my understanding we need a hardware interface like USB/USB mass storage, for example, in order to make use of accelerator cards and leave the hardware dark ages once and for all. Even in cloud computing people are trying to avoid vendor lock-in. The above work is Linux/ATI-Nvidia lock-in, and though I love Linux this is not something I am very happy about. We need standardized accelerator cards that could work out of the box with every OS that chooses to support them. Kudos to the developers, but it is not how things are supposed to work. Unfortunately I see the same attitude in ZiiLabs and VIA. Where are the standards bodies?

The easiest thing would be to make part of the drivers reside in firmware and provide a standard interface to any OS - like an EFI or "BIOS" for graphics cards.
-- twitterfire, Wed, 18 May 2011 23:54:00 GMT

RE[2]: Nice work.. (http://www.osnews.com/thread?473637)

Only two players for GPUs? I wonder which those would be...
I guess you meant just AMD and Nvidia, but you are totally wrong on that. There are a lot more players, some of them major, like Intel.
And yeah, the "crappy" Intel GPUs are good enough to accelerate videos.

Btw. Gallium3D is cross-platform, so not just Linux.

What he meant is that there are only two manufacturers who matter.
-- twitterfire, Wed, 18 May 2011 23:56:00 GMT

RE: Nice work.. (http://www.osnews.com/thread?473663)

This effort is good only for Linux and it hides the fact that there are only 2 players in the GPU industry.

The above work is Linux/ATI-Nvidia lock-in and though I love Linux this is not something I am very happy about.

This should work for any hardware that has a Gallium driver, and that framework was carefully designed to be portable across hardware and operating systems. Although at the moment, Linux + ATI/Nvidia is pretty accurate.
-- smitty, Thu, 19 May 2011 01:46:00 GMT

RE: Nice work.. (http://www.osnews.com/thread?473675)
In my understanding we need a hardware interface like USB/USB mass storage, for example, in order to make use of accelerator cards and leave the hardware dark ages once and for all.

USB Mass Storage is, AFAIK, in simple terms a proxy between USB and the drive: it merely tunnels an already-standardized command protocol (SCSI command blocks) over USB. Do you install a different driver for each different hard drive? I don't think so.
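For the curious, this is literally how it works on the wire: the Mass Storage Bulk-Only Transport wraps each standardized SCSI command in a fixed 31-byte Command Block Wrapper, which is why one generic driver can talk to any compliant disk. A sketch of packing a READ(10) command in Python:

```python
import struct

# USB Mass Storage Bulk-Only Transport: every operation is a standardized
# SCSI command wrapped in a fixed 31-byte Command Block Wrapper (CBW).
# One generic driver can therefore talk to any compliant drive.

def build_cbw(tag, data_len, direction_in, lun, cdb):
    flags = 0x80 if direction_in else 0x00      # bmCBWFlags bit 7: device-to-host
    return struct.pack("<IIIBBB16s",
                       0x43425355,              # dCBWSignature, 'USBC'
                       tag,                     # dCBWTag, echoed back in the status
                       data_len,                # dCBWDataTransferLength
                       flags, lun, len(cdb),
                       cdb.ljust(16, b"\x00"))  # the SCSI CDB, zero-padded

# SCSI READ(10): opcode 0x28, read 8 blocks starting at LBA 0
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
cbw = build_cbw(tag=1, data_len=8 * 512, direction_in=True, lun=0, cdb=read10)
assert len(cbw) == 31 and cbw[:4] == b"USBC"
```

Nothing in the wrapper is drive-specific; the standardization happened one layer up, in SCSI, decades before USB existed.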

Now look at the rest of the USB device world: unless a device class has some standard protocol, each device needs its own driver to work properly.

Therefore, as someone else pointed out, you have come up with an inappropriate example.

Actually, OpenGL should be the de facto standard - and except for Microsoft devices (Windows machines and the Xbox) and some (few) devices with their own proprietary APIs, OpenGL actually is the standard for doing graphics on most computing devices.

Unfortunately, this post is about video decoding, not graphics drivers, so... may I vote you down as off-topic? :p
-- t3RRa, Thu, 19 May 2011 03:39:00 GMT

RE[3]: Nice work.. (http://www.osnews.com/thread?473695)
Intel matters a lot. I really wonder how people could think otherwise.

And that is just on the desktop. On embedded - which is also targeted by Linux - it is a totally different story.
-- mat69, Thu, 19 May 2011 08:33:00 GMT

RE[4]: Nice work.. (http://www.osnews.com/thread?473697)
Sorry for not mentioning Intel or Imagination, but my comments still apply. Gallium is not a hardware interface anyway. I believe in simple standardized 2D-accelerated framebuffers and standardized accelerator cards, so I can buy them like USB/IEEE1394/PATA/PCI/PCI Express cards and add them to whatever system I have. The lock-in kills small businesses, kills small OSes like Syllable/Haiku, kills research (like Genode/Fiasco), and kills the fun of computing. One standard driver for all framebuffers, one driver for OpenCL accelerator cards, one driver for all sound cards, and so on - each encapsulating a standard. This is what I require as a consumer.
-- fithisux, Thu, 19 May 2011 08:41:00 GMT

Intel G45 VA-API Support Is Available (http://www.osnews.com/thread?473701)

Intel matters a lot. I really wonder how people could think otherwise.

And that is just on the desktop. On embedded - which is also targeted by Linux - it is a totally different story.

As far as I know, there is also code available to provide a translation layer between the VA-API and VDPAU APIs.

Like the Radeon drivers for AMD/ATI, Intel graphics drivers for Linux are also open source. The only difference is that Intel's drivers are written by Intel, whereas the Radeon drivers for AMD/ATI chipsets are written by the open source community from programming specification documents provided by AMD.

Keen observers will note that these programming documents do not cover the UVD video acceleration hardware features of AMD/ATI chips. I believe this is due to the fact that video DRM functionality is inextricably embedded in the UVD hardware, and AMD have agreed to not disclose this functionality to open source drivers. It is for this reason that open source video decode acceleration for AMD/ATI chips, which this thread topic is about, must be done via GPU shaders.

On this page: http://wiki.x.org/wiki/RadeonFeature
video decode (XvMC/VDPAU/VA-API) using the 3D engine is a work in progress for the Gallium3D drivers, whereas video decode using UVD is marked "not available" for older cards (which do not have the requisite hardware) and "TODO" (not started, through lack of information) for the newer cards which do have UVD hardware.
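As a taste of what "decode on the 3D engine" means: a stage like the 8x8 inverse DCT, central to MPEG-2-era decoding (the XvMC use case above), is a natural fit for shaders because every output pixel is an independent weighted sum. A naive pure-Python reference of the math being offloaded - illustrative only, nothing like production form:

```python
import math

# Naive pure-Python reference for the 8x8 DCT/IDCT pair, the kind of
# stage MPEG-2-era decode offloads to shaders: every output value is an
# independent weighted sum, so all 64 can be computed in parallel.

N = 8

def _c(k):                                      # DCT normalization factors
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(f):                                    # forward 2D DCT-II, O(N^4)
    return [[_c(u) * _c(v) * sum(
                f[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for u in range(N)] for v in range(N)]

def idct2(F):                                   # inverse: one independent sum per pixel
    return [[sum(
                _c(u) * _c(v) * F[v][u]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for u in range(N) for v in range(N))
             for x in range(N)] for y in range(N)]
```

A fragment shader doing this runs the inner sum once per output pixel; no UVD-style fixed-function block, and no DRM-encumbered documentation, is needed.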

We wouldn't have the kick-ass GPUs we have today if a hardware compatibility requirement had been enforced.

Yes, it would make the life of hobbyist OS developers easier, but it would be at the expense of the majority, who'd rather have *fast* hardware acceleration than simply have access to basic functionality across a wide range of devices.

I'd rather have two major vendors with decent products than a whole bunch of vendors with semi-sucky products, like we had back in the dark old DOS days.
-- f0dder, Thu, 19 May 2011 13:23:00 GMT

RE[2]: Nice work.. (http://www.osnews.com/thread?473723)
You'd still have to standardize on an interface to this, though - which would either kill innovation, or require updates frequently enough that you end up not having much of a standard anyway.
-- f0dder, Thu, 19 May 2011 13:28:00 GMT

RE[2]: Nice work.. (http://www.osnews.com/thread?473731)
On the OS-software interface side, OpenGL and DirectX are standardized, with relatively infrequent changes.

Why couldn't this happen on the hardware-OS interface side?

It's not as if GPU vendors innovate so much that they need new standards all the time anyway. From time to time we see a new feature like unified shaders or tessellation, but most of the time it's really just stacking more and more shaders on a single chip and making them run faster.
-- Neolander, Thu, 19 May 2011 13:50:00 GMT (edited 2011-05-19 13:50 UTC)

RE[3]: Nice work.. (http://www.osnews.com/thread?473757)
Remember how slow the OpenGL consortium used to be?

That was for agreeing on a software standard. Now care to wager how long it would take to get people to agree on hardware standards? We'd end up with too little or too much, after taking way too long.

If you want to be able to support new tech, then you can't have a set-in-stone standard - and what good is it, then?

A minimalist interface - say, modesetting and basic 2D acceleration plus compositing - would go a long way, and could probably be done in UEFI. I don't believe in anything more than that.
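UEFI in fact already gets partway there with the Graphics Output Protocol (mode enumeration plus framebuffer blits). A hypothetical minimal firmware graphics interface in that spirit, modeled in Python - the names and shape are invented for illustration, not a real firmware API:

```python
# Hypothetical minimal "firmware graphics protocol" in the spirit of
# UEFI's Graphics Output Protocol: enumerate modes, set one, blit pixels.
# The names and shape here are invented for illustration, not a real API.

class FirmwareGfx:
    MODES = [(640, 480), (1024, 768), (1920, 1080)]

    def __init__(self):
        self.mode = None
        self.fb = None

    def query_modes(self):
        return list(self.MODES)

    def set_mode(self, index):
        w, h = self.MODES[index]
        self.mode = (w, h)
        self.fb = bytearray(w * h * 4)          # 32bpp linear framebuffer

    def blt(self, x, y, w, h, pixel):           # fill a rectangle, GOP-style
        fw, _ = self.mode
        for row in range(y, y + h):
            for col in range(x, x + w):
                off = 4 * (row * fw + col)
                self.fb[off:off + 4] = pixel

gfx = FirmwareGfx()
gfx.set_mode(0)
gfx.blt(10, 10, 4, 4, b"\xff\x00\x00\xff")
```

An interface this narrow is exactly what a hobbyist OS could target without vendor drivers; anything faster still needs the native driver.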

I do wish hardware vendors would be more open and release specifications, but OTOH there's the whole point about R&D costs and patented tech that might even have been licensed from other companies.
-- f0dder, Thu, 19 May 2011 16:33:00 GMT

RE[4]: Nice work.. (http://www.osnews.com/thread?473789)
They were "slow" just as much because of old-school fixed-function GPUs as because every company had to reinvent the wheel by coming up with its own proprietary method of doing the same thing, which did nothing but cause massive compatibility problems with accelerated apps and games back in the bad old days.

These days, with unified shaders, they can implement a lot more features a lot faster - hence the *.1 releases backporting new features to older GPUs that can still handle the new extensions.
-- Kivada, Thu, 19 May 2011 20:14:00 GMT