
RV600, OpenCL/Gallium, ffmpeg and blender

Hi!

My head is now close to bursting with API acronym overload, so I was hoping I could just describe to you guys what I want to see my Linux box do, and hear just how far off we are from seeing it happen.

My laptop has a Mobility Radeon HD 2400 so of course I was very excited to see the recent open X driver code drop as well as the announcement of OpenCL as I'd really like to see ffmpeg and/or mencoder being able to harness my GPU to greatly accelerate video encoding and rendering.

I understand the drivers are dev only at the moment and we'll need to wait until at least the next big xorg release before mortals can get open source RV600 accel without rolling their own and hoping for the best. But how does OpenCL fit into this? I presume that OpenCL can work independently of X, seeing as it isn't just for accelerating graphics, so am I right in thinking that first someone needs to write an OpenCL driver for the RV600? Is this already being worked on?

Then of course someone needs to update ffmpeg so it can take advantage of OpenCL. Has this work already begun, or are there no OpenCL drivers finished yet?

Then what about Blender? Is Blender just going to be running straight on top of Gallium, or Gallium and OpenCL, or...? I know Gallium isn't finished yet, so I would imagine no work has been done yet to get Blender playing nicely with it, right?

OpenCL can't be completely independent of graphics unless you plan to run with a separate GPU dedicated to compute work. This might actually be an easier way to get OpenCL going quickly but you lose some things in the process (see below).

If OpenCL is running on the same GPU as graphics, then you need a way to ensure that the two drivers (compute and graphics) don't both access the chip at the same time and lock it up or mess up what the other driver is doing. That means the OpenCL driver will need to use the drm (kernel module) at minimum.

The next point is that one of the cool things about OpenCL from a programmer perspective is that it can interoperate with OpenGL, ie you can share data between compute and graphics to allow efficient visualization of the compute results. This is great for the app developers but a big pain for the driver developers, who actually have to *implement* OpenCL in such a way that it can share with OpenGL. For the open source world, this means tying into Mesa, I expect.

So... at first glance OpenCL can be independent from graphics, but the closer you get the less independence there seems to be.

Bottom line is that, practically speaking, an OpenCL implementation is likely to be either an extension to an existing OpenGL implementation or (more likely) a separate code base sharing some code and structures with the OpenGL tree. I know we have been working on adding OpenCL to the Catalyst driver stack for quite a while (since that gives us a solution for a wide range of OSes), but I am not aware of anyone working on an open source OpenCL implementation yet.

For things like video processing (ffmpeg) I think coding over Gallium3D would probably make more sense than coding over OpenCL for the long term; it'll be interesting to see which shows up first: proprietary OpenCL drivers or Gallium3D getting merged into the mainstream open source driver stack. I have been meaning to check whether the current Gallium3D API includes support for resolving conflicts between multiple clients trying to draw through it at the same time (eg. video and gl at the same time); maybe I'll remember today.

Blender, on the other hand, may make more sense to run over OpenCL than Gallium, but I don't know enough about the internals to really say. Having the code be open source is nice, but it still doesn't help with finding time to actually *look* at it all.

I didn't know gallium was also going to be capable of being used for accelerating video encoding with ffmpeg etc.

So if/when Gallium support makes it into xorg, compiz(++) and ffmpeg, will I be able, for example, to run a composited desktop (xorg/compiz accelerated by Gallium) and also encode some video using ffmpeg/Gallium on the same RV600 GPU?

What is the state of Gallium now, especially with respect to running on RV6/700? Do you think it would be unreasonable to say we might have a Gallium stack in time for Ubuntu 9.10 (ie a finished Gallium and a working xorg with Gallium support, though not necessarily ffmpeg/Gallium too)?

I think we all see 2009 as the year Gallium3D becomes part of the mainstream driver stack. Nobody is quite sure how smoothly the transition will go, although the folks working on Gallium3D seem to be more confident than the rest of us, which is a good sign.

The state of Gallium3D right now is (roughly) :

- framework is integrated into Mesa in a branch, not yet merged to Mesa master (IT EXISTS)

- most testing seems to have been done with the softpipe (CPU) driver; not sure about the state of the Intel 915 implementation. There is a Cell implementation which apparently works pretty well, but I don't think that one is in a public repository

- the Nouveau developers are working on the low-level code for older NVidia chips and have some working code; glisse has started writing some support for ATI R300, and MostAwesomeDude is working on LLVM integration

- we aren't doing anything in house with Gallium3D yet but will probably switch over to it once we have basic 3D support going on 6xx/7xx using a copy of the "classic Mesa" HW driver code for 3xx-5xx. My guess is that Gallium3D will come up first on 3xx-5xx followed a month or two later by 6xx/7xx.

Will it all be running this year? My guess is yes. I'm not so sure it will make it into a major distro this year, although I think that's what everyone would like to see.

The main challenges are:

1. A lot of other things, particularly memory management and kernel modesetting, are also in the pipe for this year. Memory management is, practically speaking, a pre-requisite for Gallium3D so a lot of things have to come together quickly.

2. I don't know if there is a plan worked out for merging Gallium3D into Mesa master. Mesa-over-Gallium3D may end up as a separate project alongside mesa/mesa, or Gallium3D may end up as another driver sub-tree alongside all of the other driver options in the Mesa tree today.

The right solution may be a third option, where Gallium3D lives separately from Mesa and Mesa is changed to call the separate Gallium3D code; it all depends on how the devs see multiple instances of Gallium3D being used at the same time, ie is it a library linked into multiple clients, or a standalone thing handling multiple clients?

multitasking?

Hi bridgman!

Thanks for your great replies, you've really cleared up a lot of questions I had about gallium and opencl, 'cept one.

multitasking!

If compiz, quake3 and ffmpeg all get ported to Gallium, will I be able to run all three at once, all taking advantage of my RV600? Obviously ffmpeg is going to take a significant performance hit if I fire up a game (or two?) such as q3, but is this type of multitasking a planned feature?

For things like video processing (ffmpeg) I think coding over Gallium3D would probably make more sense than coding over OpenCL for the long term; it'll be interesting to see which shows up first: proprietary OpenCL drivers or Gallium3D getting merged into the mainstream open source driver stack. I have been meaning to check whether the current Gallium3D API includes support for resolving conflicts between multiple clients trying to draw through it at the same time (eg. video and gl at the same time); maybe I'll remember today.

This doesn't make a lot of sense to me. I can see playback going straight to Gallium, but bypassing OpenCL for encoding purposes (filtering, effect transitions, pre-processing etc) doesn't seem to make a whole lot of sense when OpenCL is not limited to GPUs but can also be used with DSPs and CPUs.

It doesn't make sense to me to use G3D instead of OCL, because I would like to have my programs run on Windows/OSX/Linux from the same source. Let's not create another ALSA here.

That is another big reason why it would not make sense. You would essentially be limiting GPU processing in ffmpeg etc to Linux, which is a backwards solution and exactly the type of painted-in corner projects are trying to avoid nowadays.

I was actually thinking about playback (ie decoding) rather than encoding when I wrote that (ie "coding" the ffmpeg support). For rendering Gallium3D is attractive because the decode processing can easily be integrated with the render processing (done by Xv today) to draw directly to the screen.

For encoding or transcoding (where display is not an integral part of the task) I agree that OpenCL would probably be the way to go.


Even then, utilizing OpenCL in decoding would allow for useful features such as cleaning up the source through filtering etc before firing it off for accelerated playback, something Gallium could not handle alone. Gallium in conjunction with OpenCL would allow for a far more flexible solution. This would allow processing such as what MotionDSP is trying to do with Carmel.