Hi,
I have just pushed a branch containing an LLVM shader backend for r600g to my
personal git repo:
http://cgit.freedesktop.org/~tstellar/mesa/ (branch: r600g-llvm-shader)
There are three main components to this branch:
1. A TGSI->LLVM converter (commit ec9bb644cf7dde055c6c3ee5b8890a2d367337e5)
The goal of this converter is to give all gallium drivers a convenient
way to translate TGSI into LLVM IR. The interface is still evolving,
and I'm interested in getting some feedback on it.
2. Changes to gallivm so that code can be shared between it and
the TGSI->LLVM converter. These changes are attached; please review.
3. An LLVM backend for r600g.
This backend is mostly based on AMD's AMDIL LLVM backend for OpenCL with a
few changes added for emitting machine code. Currently, it passes about
99% of the piglit tests that pass with the current r600g shader backend.
Most of the failures are due to some unimplemented texture instructions.
Indirect addressing is also missing from the LLVM backend; for now,
shaders that need it fall back to the current r600g shader code.
Strictly speaking, the LLVM backend does not emit actual machine code,
but rather a byte stream that is converted into struct r600_bytecode.
The final transformations are done by r600_asm.c, just like in the
current shader backend. The LLVM backend is not optimized for VLIW and
emits only one instruction per group. The optimizations in r600_asm.c
are able to do some instruction packing, but the resulting code is not
yet as good as the current backend's.
The main motivation for this LLVM backend is to help bring compute/OpenCL
support to r600g by making it easier to support different compiler
frontends. I don't have a concrete plan for integrating this into
mainline Mesa yet, but I don't expect it to be in the next release.
I would really like to make it compatible with LLVM 3.0 before it gets
merged (it only works with LLVM 2.9 now), but if compute support evolves
quickly, I might be tempted to push the 2.9 version into the master
branch.
If you are interested, test it out and let me know what you think.
Thanks,
Tom Stellard