LLVM 2.6 Released, Clang Is Now Production Ready

Phoronix: LLVM 2.6 Released, Clang Is Now Production Ready

Version 2.6 of LLVM, the Low-Level Virtual Machine, has been released. This modular compiler infrastructure can replace many parts of the GNU Compiler Collection and goes far beyond the conventional role of a code compiler: it is used within Apple's Mac OS X OpenGL implementation to provide optimizations and is similarly going to be used within Gallium3D. The 2.6 release is a major leap forward, with better x86_64 code generation, new code generators for multiple architectures, support for SSE 4.2, and improved optimizations. Perhaps most notably, it is the first release to include Clang, which is now at "production quality" status for C and Objective-C on x86...

My compilers class is making a Lua compiler frontend for LLVM :-). It's definitely good to start over once in a while, as GCC is old and crusty. I think LLVM is a couple of years away from being a viable replacement for GCC within Linux, but it already enables so much more: there's the JIT, the optimization passes. Apple actually switched to it as the compiler for the iPhone.

FreeBSD is trying to switch to it too, and some Gentoo folks as well. It can compile the FreeBSD kernel.

To sum it up, LLVM is a Low-Level Virtual Machine, not GCC take 2. The fact that it can compile to native code is necessary for its mission and nice to have, but it is much more than GCC. And supposedly it provides a very clean and modern C++ API for devs to mess with.
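
Just to give a flavour of that API, here's a rough sketch of my own (not from the article) of what a tiny frontend does: it builds an i32 add(i32, i32) function in memory with IRBuilder and dumps the textual IR. Header paths and type constness moved around between the 2.6-era tree and later releases, so treat it as illustrative only.

// build_add.cpp -- minimal LLVM IR emission sketch (2.6-era header paths assumed)
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Function.h"
#include "llvm/Support/IRBuilder.h"
#include <vector>

int main() {
    llvm::LLVMContext ctx;
    llvm::Module *mod = new llvm::Module("demo", ctx);
    llvm::IRBuilder<> builder(ctx);

    // declare: i32 add(i32 %a, i32 %b)
    std::vector<const llvm::Type*> params(2, llvm::Type::getInt32Ty(ctx));
    llvm::FunctionType *ft =
        llvm::FunctionType::get(llvm::Type::getInt32Ty(ctx), params, false);
    llvm::Function *f =
        llvm::Function::Create(ft, llvm::Function::ExternalLinkage, "add", mod);

    // single basic block: %sum = add i32 %a, %b ; ret i32 %sum
    llvm::BasicBlock *entry = llvm::BasicBlock::Create(ctx, "entry", f);
    builder.SetInsertPoint(entry);
    llvm::Function::arg_iterator it = f->arg_begin();
    llvm::Value *a = &*it; ++it;
    llvm::Value *b = &*it;
    builder.CreateRet(builder.CreateAdd(a, b, "sum"));

    mod->dump();   // print the generated IR
    delete mod;
    return 0;
}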

Last time I checked, the higher optimization levels were yet to be implemented (i.e. no profile-guided optimizations), but maybe this has changed?

GCC is a behemoth, not easily changed. GCC can create very good code, but sometimes that requires knowing the magic set of flags. In simple benchmarks I've seen GCC fail miserably, even though everything the code was supposed to be doing was dead simple.

I like the other features (beyond faster compile times and hopefully faster programs) even more, but I'm not sure which languages can take full advantage of them (things like cross-platform LLVM bitcode with run-time optimization, yummy!). It sure won't work with C/C++, thanks to #define (and probably other reasons too).

One thing GCC can't do at all that modern compilers do, according to my compilers prof, is profiling and optimizations that make use of it. And GCC is huge and a pain to work with; it's been around for a really long time. LLVM brings a whole new clean infrastructure for compiler development. C and Objective-C support is pretty much done, and C++ is a work in progress.

Why would #defines break it? Do you mean like #define WIN32 or LINUX? I think the value would be, say, for the Flash plugin to run on multiple architectures, or a USB thumb drive that can boot a Linux OS on your ARM smartbook and netbook or whatever. Apple's using it right now for OpenCL and graphics shaders: if your video card can't support the shaders, the JIT compiles them for the CPU.

Almost any platform- or architecture-related macro will ruin it. Since #ifdef WIN32 stuff will probably alter a large part of the code, it's a given that it won't ever work (if it did, we wouldn't need the ifdef to begin with), but architecture-related defines do the same: from low-level things like endianness or the limits of basic types, to more exotic changes that can alter any part of the code depending on the available libraries or architecture.
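
To make that concrete, here's a trivial example of my own (not from the thread): the preprocessor resolves the #ifdef before LLVM ever sees the code, so the bitcode you'd ship is already specialized for one platform, and things like sizeof(long) get baked in the same way.

// portability.cpp -- the #ifdef is gone long before bitcode is produced
#include <cstdio>

#ifdef _WIN32
#  define PATH_SEP '\\'
#else
#  define PATH_SEP '/'
#endif

int main() {
    // Only one branch of the #ifdef survives into the IR; the other platform's
    // code simply doesn't exist in the bitcode, so it can't be retargeted later.
    std::printf("separator: %c, sizeof(long): %u\n",
                PATH_SEP, (unsigned)sizeof(long));
    return 0;
}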

Ex-Cyber, I thought he meant more like "actually put it to good use"? Although I don't know how efficient the profile-guided optimizations are in GCC.
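
For reference, this is roughly what the profile-guided workflow looks like with GCC (my own sketch, using the standard -fprofile-generate / -fprofile-use pair); how much it actually buys you on a given codebase is another question.

// pgo_demo.cpp -- build instrumented, run on training input, rebuild with the profile:
//   g++ -O2 -fprofile-generate pgo_demo.cpp -o pgo_demo   (instrumented build)
//   ./pgo_demo                                            (writes .gcda profile data)
//   g++ -O2 -fprofile-use pgo_demo.cpp -o pgo_demo        (rebuild using the profile)
#include <cstdio>

int hot_loop(int n) {
    int acc = 0;
    for (int i = 0; i < n; ++i)
        acc += (i % 7 == 0) ? i : -i;   // branch weights come from the collected profile
    return acc;
}

int main() {
    std::printf("%d\n", hot_loop(1000000));
    return 0;
}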