
Intel isn't the driver behind this project

There are already roughly five different OpenMP proposals under discussion, and Intel isn't magically making any of them progress faster. Nor is Intel contributing significantly to LLVM/Clang: the contributor list is long, and they aren't high on it.

Lattner, Finkel, and probably a dozen other Apple, Cray, and Google engineers are working on making sure OpenMP meets the architectural design requirements of LLVM/Clang.

There is also plenty of ongoing work improving the existing LLVM/Clang stack outside of OpenMP, which will most likely allow the sanctioned OpenMP implementation to advance more rapidly.

Go back to the October LLVM and Clang dev mailing lists [not the commit lists] to see the many differing discussions and concerns. It will happen.

Now, if projects like Inkscape and others that hard-code checks for GCC would change that behavior, then I could start compiling against LLVM/Clang, straight from their trunk or the latest tarballs.

Then again, they can't manage to update their Poppler support to anything remotely recent [and they are aware of it via filed bugs], so besides people waiting on OpenMP, I'm still waiting for existing GCC/OpenMP-aware software to update its own dependencies.

Can someone briefly explain, to someone only vaguely familiar with LLVM/Clang, what this means? As I understand, OpenMP is an API with libraries supporting various languages (C, FORTRAN, etc). When one says that LLVM/Clang doesn't currently support OpenMP, is it a compiler issue? That is, is it because OpenMP's C implementation utilizes some aspect of the language that Clang doesn't implement and thus can't compile? Or is it more of an LLVM optimizer issue in that the optimizer doesn't understand OpenMP and thus cannot perform optimizations that convert code into OpenMP calls? (Does LLVM even work that way?)

I guess I'm struggling to understand how all the LLVM pieces fit together. I understand that LLVM operates on an intermediate bytecode-type blob that it translates on-the-fly (presumably?) to executable instructions and I understand that it has optimizers that can in some sense optimize code paths at run-time (possibly after gathering run-time statistics about how the code behaves so that it can make progressively better optimizations?).

But what I'm struggling with are all the LLVM back-ends. For example, a few days ago there was a thread about LLVM 3.2 supporting nVidia's CUDA PTX back-end. What does this mean? Does this mean that LLVM's optimizers will identify pieces of code suitable for execution on a GPU and will attempt to generate CUDA code for those pieces? Or does it simply mean that LLVM's optimizers are now aware of the CUDA API and can now analyze and optimize CUDA code?

Anyone care to offer a 10,000 foot view of how all this fits together? :-)

The issue is that OpenMP is not just an API with libraries; it also extends the programming languages via compiler hints ("#pragma omp <xxx>" for C/C++). Those hints explicitly identify parallel sections, which get spread across multiple threads at runtime. Support for those hints needs to be added to each compiler; without them the code will fall back to running on a single CPU core rather than taking advantage of multiple cores.

The back ends are basically code generators, i.e. they add the ability to convert LLVM IR to a specific instruction set, which could be assembler-level (PTX, AMDIL) or hardware-level (x86, r600). AFAIK they normally rely on the front end (Clang in this case) to pass down information about parallel execution, since parallel sections are identified in the source code.