I haven't tried any OpenCL application besides my unit tests, as there are currently nearly no built-in functions implemented. I will take a look at GEGL, which uses OpenCL in a way that may be compatible with Clover.

And yes, I'm interested in a better logo (the site branding is Doxygen's; I cannot do much about it, but it isn't so bad for a generated site).

Comment

Clover implements the complete OpenCL API and can execute OpenCL programs. But at the moment, all of it runs on the CPU, and getting it to actually run on GPUs will be a hitload of work and nobody really knows how it should be done.

And this is due to a few factors:

- the situation regarding IRs in Gallium is muddy: Mesa IR seems to be on the way out, with TGSI being generated directly, but there are attempts to use LLVM too, and GLSL IR is also in the mix
- TGSI is good for graphics (shaders), but is not good for computing, like OpenCL, which is why people are proposing LLVM
- LLVM sucks at graphics

So we're likely looking at (at least) two IRs coexisting: one for shaders, one for OpenCL stuff?

Yes, all of that looks right.

Comment

I've been wondering about this for a while now, so I just have to ask: why are we using either LLVM or TGSI instead of GLSL IR to begin with? Were the first two already in existence, or an adaptation of existing code, or is there some benefit to using them over using GLSL IR natively? Or am I misunderstanding something entirely?

Comment

Originally every driver in Mesa used the Mesa IR.

When the devs decided to create a new driver framework (named Gallium), they decided that Mesa IR had to go, and built Gallium around a new IR, TGSI, which I believe they based at least somewhat on what Direct3D does internally, or at least on some of the Windows drivers. For backwards compatibility, and to avoid rewriting all of Mesa, most of Mesa kept using the old Mesa IR, which was simply converted into TGSI when passed into Gallium.

Later, the Intel devs (who were not using Gallium) created a new GLSL compiler which was a lot better than the old one. Since they were creating it from scratch, they decided to simply use its internal GLSL IR directly within their drivers rather than converting it into Mesa IR first. They also didn't like TGSI, because they felt the unstructured nature of the bytecode would keep them from doing some of the optimizations that were possible in their structured GLSL IR. To keep backwards compatibility they wrote a translation from GLSL IR to the old Mesa IR, and from there all the drivers worked exactly the same as before. Since then, they've updated the i965 drivers to use GLSL IR directly, while the old pre-i965 drivers are still using Mesa IR. And plombo wrote a GLSL IR -> TGSI translator to avoid the old Mesa IR in the middle, mostly because the old Mesa IR lacked features that both GLSL IR and TGSI have (for GL 3.0), and because it simplifies the whole chain.

Originally, TGSI was designed to be a modernized version of Mesa IR that would be compatible across hardware and OSes. However, while perfectly adequate, it's never really been considered that great by the developers. So after Intel came out with GLSL IR and basically said it was superior in every way (not sure if that's really true or not), there was a lot of talk about dropping TGSI for it, since that way the Gallium drivers could directly use what the Intel devs are most focused on rather than translating to another language in the middle. There was also talk about just using LLVM IR, since that would open up a lot of optimizations through the LLVM framework and possibly make more complicated stuff more manageable. I think that is a particular concern with compute, where longer and more complicated programs become more and more common.

The problem is that the Gallium drivers are built around TGSI, so any change is going to require a fair amount of work, and up to this point no one has really seemed that interested in doing it. Instead, the focus has been on adding new features to drivers, and just getting the hardware to work.

Comment

Does "gallium mesa state tracker" belong there? Smells like a copy/paste glitch...

But yeah, GLSL IR is very recent compared to the others, enough so that I expect most people are still trying to find enough time to look at it and form an opinion about its suitability for graphics and compute. It often seems easier to stick with the devil you know.

... in the same way that the proverbial hog looks at the digital watch. The code seems to be nicely commented, and if it were anything but a compiler I would probably find it easy to understand. Unfortunately compilers and I do not get along.

Comment

I still find the idea of two or more IRs puzzling, as an interested outsider.

Is the AMD IR found in Catalyst drivers used for both compute and shader tasks? If so, shouldn't we also strive towards a "universal" IR that you would only need to target once? Why doesn't TGSI work for computing anyway? It's not like OpenCL was unknown when it was designed.

Comment

I still find the idea of two or more IRs puzzling, as an interested outsider.

Is the AMD IR found in Catalyst drivers used for both compute and shader tasks? If so, shouldn't we also strive towards a "universal" IR that you would only need to target once? Why doesn't TGSI work for computing anyway? It's not like OpenCL was unknown when it was designed.

Yes, I'm aware that it's anything but easy, I'm just interested.

TGSI doesn't work for computing because it was designed only for graphics, and more specifically because it was based on Direct3D assembly. It thus has no concept of things like memory and pointers. And although that could be added to TGSI, since every driver has its own IR already, TGSI just gets in the way.
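To make the pointer point concrete, here's a rough sketch (a hypothetical illustration, not Clover code; the OpenCL C qualifiers are stubbed out with macros so it builds as plain C): even the most trivial compute kernel loads and stores through raw pointers into global memory, which TGSI's register-file model simply cannot express.

```c
#include <stddef.h>

/* Stub out the OpenCL C qualifiers so this kernel compiles as plain C;
   in a real OpenCL implementation the compiler provides them. */
#define __kernel
#define __global
static size_t gid;                                 /* stand-in work-item id */
static size_t get_global_id(int dim) { (void)dim; return gid; }

/* A trivial compute kernel: it dereferences raw pointers into global
   memory -- exactly the concept that TGSI has no opcodes for. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] + b[i];
}
```

A shader-style IR would instead see inputs and outputs as a fixed set of typed registers, with no way to form or chase an address.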

Also, generating code from TGSI is suboptimal for some hardware, especially SoA targets like nv50/nvc0. It's the same reason we're trying to propagate GLSL IR through the Gallium interface.
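On the SoA point, the layout difference can be sketched in plain C (a hypothetical illustration; real drivers deal with vec4 registers vs. per-component instruction streams, not structs): TGSI writes opcodes that operate on whole vec4s at once (AoS), while an SoA target like nv50/nvc0 wants each component swept across many elements separately.

```c
#include <stddef.h>

/* AoS: the vec4 view TGSI assumes -- x, y, z, w packed per element. */
struct vec4 { float x, y, z, w; };

static void add_aos(struct vec4 *r, const struct vec4 *a,
                    const struct vec4 *b, size_t n)
{
    for (size_t i = 0; i < n; ++i) {   /* one whole vec4 per step */
        r[i].x = a[i].x + b[i].x;
        r[i].y = a[i].y + b[i].y;
        r[i].z = a[i].z + b[i].z;
        r[i].w = a[i].w + b[i].w;
    }
}

/* SoA: each component lives in its own array; SoA hardware prefers
   sweeping one component across all elements at a time. */
struct vec4_soa { float *x, *y, *z, *w; };

static void add_soa(struct vec4_soa *r, const struct vec4_soa *a,
                    const struct vec4_soa *b, size_t n)
{
    for (size_t i = 0; i < n; ++i) r->x[i] = a->x[i] + b->x[i];
    for (size_t i = 0; i < n; ++i) r->y[i] = a->y[i] + b->y[i];
    for (size_t i = 0; i < n; ++i) r->z[i] = a->z[i] + b->z[i];
    for (size_t i = 0; i < n; ++i) r->w[i] = a->w[i] + b->w[i];
}
```

Translating the AoS form into the SoA form (or vice versa) is mechanical but lossy for optimization, which is roughly why an IR written with one layout in mind generates mediocre code for the other.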

AMD can get away with a "universal" IR (AMD IL) because all the hardware they target closely matches that IR. We don't have that advantage for TGSI, because we have to target a wider range of hardware.