Planet Clang

March 02, 2015

Welcome to the sixty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

On the mailing lists

Ashutosh Nema is proposing a new loop versioning optimisation. This is where multiple versions of a loop are generated and the version to execute is chosen based on runtime memory aliasing tests. It was suggested that some recent work on Loop Access Analysis provides some of this functionality.

Ahmed Bougacha started a discussion on disabling GlobalMerge, which is currently enabled for ARM and AArch64. Much of the ensuing discussion centres on understanding why there seems to be a performance degradation when using GlobalMerge with LTO.

Katya Romanova moved a discussion on a jump threading optimisation bug to llvm-dev. The issue arises because an unreachable block is generated containing an ill-formed instruction, and there is a lot of follow-on discussion about whether passes should generate unreachable blocks at all.

February 27, 2015

This release contains the work of the LLVM community over the past six months: many many bug fixes, optimization improvements, support for more proposed C++1z features in Clang, better native Windows compatibility, embedding LLVM IR in native object files, Go bindings, and more. For details, see the release notes [LLVM, Clang].

Many thanks to everyone who helped with testing, fixing, and getting the release into a good state!

Special thanks to the volunteer release builders and testers, without whom this release would not be possible: Dimitry Andric, Sebastian Dreßler, Renato Golin, Sylvestre Ledru, Ben Pope, Daniel Sanders, and Nikola Smiljanić!

If you have any questions or comments about this release, please contact the community on the mailing lists. Onwards to 3.7!

February 23, 2015

Welcome to the sixtieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

On the mailing lists

Lefteris Ioannidis has introduced himself on the mailing list. He is working on propagating parallelism at the IR level, with a hope to ultimately upstream his work. He's interested in chatting to anyone working in this area.

A new pass for constructing gc.statepoint sequences with explicit relocations has been added; it will be further developed and bug-fixed in-tree. r229945.

The old x86 vector shuffle lowering code has been removed (the new shuffle lowering code has been the default for ages and known regressions have been fixed). r229964.

A new bitset metadata format and lowering pass has been added. In the future, this will be used to allow a C++ program to efficiently verify that a vtable pointer is in the set of valid vtable pointers for the class or its derived classes. r230054.

February 16, 2015

Welcome to the fifty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

February 09, 2015

Welcome to the fifty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

News and articles from around the web

The Red Hat developer blog has a post about the plan to change the G++ ABI along with GCC 5. This is required for full C++11 compatibility. Unlike the last ABI change, where the libstdc++ soname was changed, this time the soname will stay the same and different mangled names will be used for symbols instead.

February 02, 2015

Welcome to the fifty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

I've been at FOSDEM this weekend in Brussels (which is why this week's issue is perhaps a little shorter than usual!). Most talks were recorded and I'll be linking to the videos from the LLVM devroom once they're up. For those interested, you can see the slides from my lowRISC talk here. If you want to chat about the project, you may want to join #lowRISC on irc.oftc.net.

(If you’re not certain what I’m talking about, notice that f() is declared as a const member function, and yet it mutates the class data member ref.)
A const member function is a design contract guaranteeing the caller that this function will not mutate the object’s internal state in any observable way. In the example above, ref can be observed because of the get_ref() call, which may lead you to believe this is some sort of bug with the compiler or with C++ itself, or perhaps is undefined behavior. However, this code is well-formed, and is a conscious design of the language that makes sense when you think more deeply about what a reference is.

References are not like pointers, or other objects, in that they only exist to refer to another object. In the case of a reference data member of a class, that means the reference must be initialized by the class constructor, and so its referent exists outside of the class itself (in a sane universe). When you mutate a reference member of a class, you are not mutating the internal state of the class — you are mutating something external to the class. For this reason, reference data members of a class may be mutated within a const context. This holds true even when the actual object is const, not just the member function. For instance:

January 26, 2015

Welcome to the fifty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

A post on the official LLVM Blog announces that LLDB is coming to Windows, telling a wider audience that it is now possible to debug simple programs with LLDB on Windows and giving a rationale for investing effort in porting LLDB to Windows and adding support for the MS debug format. The post also features a todo list indicating what's next for Windows support.

A draft version 0.1 of the IA-32 psABI (processor specific application binary interface) is available. This aims to supplement the existing System V ABI with conventions relevant to newer features such as SSE1-4 and AVX. Comments are welcome.

Ahmed Bougacha has been having problems with the cost model calculations for saturation instructions. The cost is over-estimated because a number of the individual IR instructions fold-away later in lowering. He suggests adding a new method to TargetTransformInfo for multi-instruction cost computation. There hasn't been much feedback thus far.

Chandler Carruth has been looking through the LLD libraries, trying to work out the current layering and what a potential future layering might be. He proposes a core library providing basic functionality and a second library offering a higher-level interface for actually performing linking.

LLVM commits

A backend targeting the extended BPF (Berkeley Packet Filter) interpreter/JIT in the Linux kernel has been added. See this LWN article for more background. r227008.

There's been a flurry of work on the new pass manager this week. One commit worth picking out is the port of InstCombine to the new pass manager, which seems like a milestone of sorts. r226987.

LLVM learnt how to use the GHC calling convention on AArch64. r226473.

InstCombine will now canonicalize loads which are only ever stored to always use a legal integer type if one is available. r226781.

January 20, 2015

We've spoken in the past about teaching Clang to fully support Windows and be compatible with MSVC. Until now, a big missing piece in this story has been debugging the clang-generated executables. Over the past 6 months, we've started working on making LLDB work well on Windows and support debugging both regular Windows programs and those produced by Clang.

Why not use an existing debugger such as GDB, Visual Studio's, or WinDBG? There are a lot of factors in making this kind of decision. For example, while GDB understands the DWARF debug information produced by Clang on Windows, it doesn't understand the Microsoft C++ ABI or debug information format. On the other hand, neither Visual Studio nor WinDBG understand the DWARF debug information produced by Clang. With LLDB, we can teach it to support both of these formats, making it usable with a wider range of programs. There are also other reasons why we're really excited to work on LLDB for Windows, such as the tight integration with Clang which lets it support all of the same C++ features in its expression parser that Clang supports in your source code. We're also looking to continue adding new functionality to the debugging experience going forward, and having an open source debugger that is part of the larger LLVM project makes this really easy.

The past few months have been spent porting LLDB's core codebase to Windows. We've been fixing POSIX assumptions, enhancing the OS abstraction layer, and removing platform specific system calls from generic code. Sometimes we have needed to take on significant refactorings to build abstractions where they are necessary to support platform specific differences. We have also worked to port the test infrastructure to Windows and set up build bots to ensure things stay green.

This preliminary bootstrapping work is mostly complete, and you can use LLDB to debug simple executables generated with Clang on Windows today. Note the use of the word "simple". At last check, approximately 50% of LLDB's tests fail on Windows. Our baseline, however, which is a single 32-bit executable (i.e. no shared libraries), single-threaded application built and linked with Clang and LLD using DWARF debug information, works today. We've tested all of the fundamental functionality such as:

Process inspection while stopped, such as stack unwinding, frame setting, memory examination, local variables, expression evaluation, stepping, etc. (one notable exception is that step-over doesn't yet work well in the presence of limited symbol information).

Of course, there is still more to be done. Here are some of the areas we're planning to work on next:

Fixing low-hanging fruit to improve the pass rate of the test suite.

Better support for debugging multi-threaded applications.

Support for debugging crash dumps.

Support for debugging x64 binaries.

Enabling stepping through shared libraries.

Understanding PDB (for debugging system libraries, and executables generated with MSVC). Although the exact format of PDB is undocumented, Microsoft still provides a rich API for querying PDB in the form of the DIA SDK.

If you're using Clang on Windows, we would encourage you to build LLDB (it should be in the Windows LLVM installer soon) and let us know your thoughts by posting them to lldb-dev. Make sure you file bugs against LLDB if you notice anything wrong, and we would love for you to dive into the code and help out. If you see something wrong, dig in and try to fix it, and post your patch to lldb-commits.

January 19, 2015

Welcome to the fifty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

It seems to have been a very busy week in the world of LLVM, particularly with regards to discussion on the mailing lists. Due to travel and the volume of traffic, I'm afraid I haven't been able to do much summarisation of mailing list discussions.

News and articles from around the web

Registration for EuroLLVM 2015, to be held at Goldsmiths College in London, UK on April 13th-14th, is now open.

All slides and videos from the last LLVM Developers' meeting are now live, including those from Apple employees.

On the mailing lists

Ahmed Bougacha has posted an RFC on adding integer saturation intrinsics to LLVM. There are various questions in the ensuing thread about whether adding an intrinsic is necessary and the best way to go about it, i.e. whether it is possible to just pattern-match later on in the compilation flow.

Philip Reames is seeking wider feedback on two implementation issues for GCStrategy. The two key questions are whether GC-specific properties should be checked in the IR verifier and what the access model for GCStrategy should be. No responses yet, so now is the time to dive in.

Lang Hames has posted a proposed new JIT API with a catchy name (ORC: On Request Compilation). The aim is to cleanly support a wider range of JIT use cases; to be clear, this higher-level API would not replace the existing MCJIT.

LLVM commits

A new code diversity feature is now available. The NoopInsertion pass will add random no-ops to x86 binaries to try to make ROP attacks more difficult by increasing diversity. r225908. I highly recommend reading up on the blind ROP attack published last year. It would also be interesting to see an implementation of G-Free for producing binaries without simple gadgets. The commit was later reverted for some reason.

A nice summary of recent MIPS and PowerPC target developments, as well as the OCaml bindings, is now available in the form of the 3.6 release notes. r225607, r225695, r225779.

LLVM learned the llvm.frameallocate and llvm.framerecover intrinsics, which allow multiple functions to share a single stack allocation from one function's call frame. r225746, r225752.

January 13, 2015

Welcome to the fifty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

As you receive this week's issue, I should be on my way to California, where I'll be presenting lowRISC at the RISC-V workshop in Monterey and having a few other meetings. I'm in SF Fri-Sun and somewhat free on the Saturday if anyone wants to meet and chat about LLVM or lowRISC/RISC-V.

Google now uses Clang for production Chrome builds on Linux. They were previously using GCC 4.6. Compared to that baseline, performance stayed roughly the same while binary size decreased by 8%. It would certainly have been interesting to compare to a more recent GCC baseline. The blog post indicates they hope to use Clang for building Chrome for Windows in the future.

Philip Reames did an interesting back-of-the-envelope calculation about the cost of maintaining LLVM. He picked out commits which seemed like they could be trivially automated and guesstimated a cost based on developer time. The figure he arrives at is $1,400 per month.

Arch Robison kicked off a discussion about floating point range checks in LLVM. This isn't currently supported, though there's agreement it could be useful, as well as a fair amount of discussion on some of the expected subtleties.

If you're wondering about alias instructions, Bruce Hoult has a good explanation.

LLVM commits

An option, hoist-cheap-insts, has been added to the machine loop invariant code motion pass to enable hoisting even cheap instructions (as long as register pressure is low). This is disabled by default. r225470.

The calculation of the unrolled loop size has been fixed. Targets may want to re-tune their default threshold. r225565, r225566.

DIE.h (data structures for DWARF info entries) is now a public CodeGen header rather than being private to the AsmPrinter implementation. dsymutil will make use of it. r225208.

The new pass manager now has a handy utility for generating a no-op pass that forces a usually lazy analysis to be run. r225236.

January 11, 2015

In early October of 2014, I started collecting changes that I saw fly by on llvm-commits that I thought would be straightforward to automate. I was trying to be pretty conservative, so these tend to be pretty basic things: fixing deceptive whitespace around an if clause, removing the name of a method from its doxygen comment, removing a couple of syntactically redundant semicolons, and things of similar complexity. These weren't chosen because they were interesting, but precisely because they were not.

In the 66 days since I started collecting, I’ve saved 105 unique commits. That’s a bit less than 2 per day, and only about 1.6% of the 6,500 commits made to LLVM in that time.

Let’s assume that each of those changes took an average of 15 minutes on the part of their author. That’s not too much more than a single build and test cycle, so it seems like a reasonable estimate. At roughly $2 per developer minute, we can guesstimate that each of these changes cost about $30. Taken together, these 105 changes consumed about 26 hours of developer time at a cost of a bit over $3,150.

This gives us a value for straightforward code maintenance activities of roughly $1,400 per month (or roughly $50 per day).

If anything, this is an extremely low estimate. I know several of these changes required review, and at least a couple of them broke the build and had to be reverted. We could probably add in several more hours of developer time just for that alone.

Now this is only a small fraction of the roughly $88,000 in development time going to the project as a whole each month*, but it’s still pretty material.

* Using the same logic as above: 15 minutes per change, $2 per developer minute, 6500 changes, llvm repository only. It goes without saying that this is a massive understatement of the actual value of the contributed work.

January 06, 2015

Chrome 38 was released early October 2014. It is the first release where the Linux binaries shipped to users are built by clang. Previously, this was done by gcc 4.6. As you can read in the announcement email, the switch happened without many issues. Performance stayed roughly the same, binary size decreased by about 8%. In this post I'd like to discuss the motivation for this switch.

Motivation

There are two reasons for the switch.

1. Many Chromium developers already used clang on Linux. We've supported opting in to clang since before clang supported C++ – because of this, we have a process in place for shipping new clang binaries to all developers and bots every few weeks. Because of clang's good diagnostics (some of which we added due to bugs in Chromium we thought the compiler should catch), speed, and because of our Chromium-specific clang plugin, many Chromium developers switched to clang over the years. Making clang the default compiler removes a stumbling block for people new to the project.

2. We want to use modern C++ features in Chromium. This requires a recent toolchain – we figured we needed at least gcc 4.8. For Chrome for Android and Chrome for Chrome OS, we updated our gcc compilers to 4.8 (and then 4.9) – easy since these ports use a non-system gcc already. Chrome for Mac has been using Chromium's clang since Chrome 15 and was already in a good state. Chrome for iOS uses Xcode 5's clang, which is also new enough. Chrome for Windows uses Visual Studio 2013 Update 4. On Linux, switching to clang was the easiest way forward.

Keeping up with C++'s evolution in a large, multi-platform project

C++ had been static for many years. C++11 is the first real update to the C++ language since the original C++ standard (approved on July 27 1998). C++98 predated the founding of Google, YouTube, Facebook, Twitter, the releases of Mac OS X and Windows XP, and x86 SSE instructions. The time between the two standards saw the rise and fall of the iPod, several waves of social networks, and the smartphone explosion.

The time between C++11 and C++14 was three years, and the next major iteration of the language is speculated to be finished in 2017, three years from C++14. This is a dramatic change, and it has repercussions on how to build and ship C++ programs. It took us 3+ years to get to a state where we can use C++11 in Chromium; C++14 will hopefully take us less long. (If you're targeting fewer platforms, you'll have an easier time.)

There are two parts to C++11: new language features, and new library features. The language features just require a modern compiler at build time on build machines; the library features need a new standard library at runtime on the user's machine.

Deploying a new compiler is conceptually relatively simple. If your developers are on Ubuntu LTS releases and you make them use the newest LTS release, they get new compilers every two years – so just using the default system compiler means you're up to two years behind. There needs to be some process to relatively transparently deploy new toolchains to your developers – an "evergreen compiler". We now have this in place for Chromium – on Linux, by using clang. (We still accept patches to keep Chromium buildable with gccs >= 4.8 for people who prefer compiling locally over using precompiled binaries, and we still use gcc as the target compiler for Chrome for Android and Chrome OS.)

The library situation is slightly more tricky: on Linux and Mac OS X, programs are usually linked against the system C++ library. Chrome wants to support Mac OS X 10.6 a bit longer (our users seem to love this OS X release), and the only C++ library it ships with is libstdc++ 4.2 – which doesn't have any C++11 bits. Similarly, Ubuntu Precise only has libstdc++ 4.6. It seems that with C++ updating more often, products will have to either stop supporting older OS versions (even if they still have many users on these old versions), adopt new C++ features very slowly, or ship with a bundled C++ standard library. The latter implies that system libraries shouldn't have a C++ interface for ABI reasons – luckily, this is mostly already the case.

To make things slightly more complicated, gcc and libstdc++ expect to be updated at the same time. gcc 4.8 links to libstdc++ 4.8, so upgrading to gcc 4.8 while still linking to Precise's libstdc++ 4.6 isn't easy. clang explicitly supports building with older libstdc++ versions.

For Chromium, we opted to enable C++11 language features now, and then allow C++11 library features later once we have figured out the story there. This allows us to incrementally adopt C++11 features in Chromium, but it's not without risks: vector<int> v0{42} for example means something different with an old C++ library and a new C++ library that has a vector constructor taking an initializer_list. We disallow using uniform initialization for now because of this.

Since bundling a C++ library seems to become more common with this new C++ update cadence, it would be nice if compiler drivers helped with this. Just statically linking libstdc++ / libc++ isn't enough if you're shipping a product consisting of several executables or shared libraries – they need to dynamically link to a shared C++ library with the right rpaths, the C++ library probably needs mangled symbol names that don't conflict with the system C++ library which might be loaded into the same process due to other system libraries using it internally (for example, maybe using an inline namespace with an application-specific name), etc.

Future directions

As mentioned above, we're trying to figure out the C++ library situation. The tricky cases are Chrome for Android (which currently uses STLport) and Chrome for Mac. We're hoping to switch Chrome for Android to libc++ (while still using gcc as compiler). On Mac, we'll likely bundle libc++ with Chrome too.

We're working on making clang usable for compiling Chrome for Windows. The main motivations for this are using AddressSanitizer, providing a compiler with great diagnostics for developers, and getting our tooling infrastructure working on Windows (used, for example, for automated large-scale cross-OS refactoring and for building our code search index – try clicking a few class names; at the moment only code built on Linux is hyperlinked). We won't use clang as a production compiler on Windows unless it produces a chrome binary that's competitive with Visual Studio's on both binary size and performance. (From an open-source perspective, it is nice being able to use an open-source compiler to compile an open-source program.)

January 05, 2015

Welcome to the fifty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

I'm going to be in California next week for the RISC-V workshop. I'm arriving at SFO on Monday 12th and leaving on Sunday the 18th. Do let me know if you want to meet and talk lowRISC/RISC-V or LLVM, and we'll see what we can do.

The C++ Filesystem Technical Specification, based on the Boost.Filesystem library has been approved.

On the mailing lists

Virgile Bello has some questions on how to control the calling convention in LLVM. In this case, he has a CLR frontend and is trying to pass an object on the CLR stack to a native Win32 function. Reid Kleckner suggests the best way may be to just link with Clang and use its implementation. In another followup, he links to the talk on this topic from the last LLVM dev meeting.

The release of LLVM/Clang 3.5.1 may be slightly delayed due to the addition of new patches late in the process. Chandler Carruth points out that there are some unpleasant bugs in InstCombine in the current 3.5.1 release candidate. If there is a release candidate 3, the patch in question will definitely make it in.

LLVM commits

Instruction selection for bit-permuting operations on PowerPC has been improved. r225056.

The scalar replacement of aggregates (SROA) pass has started to learn how to more intelligently handle split loads and stores. As explained in detail in the commit message, the old approach led to complex IR that can be difficult for the optimizer to work with. SROA is now also more aggressive in its splitting of loads. r225061, r225074.

December 29, 2014

Welcome to the fifty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

This issue marks the end of one full year of LLVM Weekly. It's a little shorter than usual as the frenetic pace of LLVM/Clang development has slowed over the holiday period. Surprising even to me is that we managed to make it the full 52 weeks with an issue every Monday as promised. This requires a non-trivial amount of time each week (2-3+ hours), but I am intending to keep it going into 2015. I'd like to give a big thank you to everyone who's said hi at a conference, sent in corrections or tips on content, or just sent a random thank you. It's been very helpful for motivation. I don't currently intend to change anything about the structure or content of each issue for next year, but if you have any ideas then please let me know.

I can't make it to 31C3 due to the awkward timing of the event, but do let me know if there are any LLVM/Clang-related talks worth sharing. There was a talk about Code Pointer Integrity, which has previously been covered in LLVM Weekly and is working towards upstreaming. The video is here. If you're interested in lowRISC and at 31C3, Bunnie is leading a discussion about it at 2pm on Monday (today).

News and articles from around the web

There doesn't seem to have been any LLVM or Clang related news over the past week. Everyone seems to be busy with non-LLVM related activities over the Christmas break. If you're looking for a job though, Codeplay tell me they have two vacancies: one for a debugger engineer and another for a compiler engineer.

On the mailing lists

David Li has shared some early info on Google's plans for LTO. He describes the concept of 'peak optimization performance' and some of the objectives of the new design. This includes the ability to handle programs 10x or 100x the size of Firefox. We can expect more information in 2015, maybe as early as January.

The discussion on possible approaches to reducing the size of libLLVM has continued. Chris Bieneman has shared some more size stats. These gains come from removing unused intrinsics. Chandler Carruth has followed up with a pleasingly thought-provoking argument on a different approach: target-specific intrinsics shouldn't exist in the LLVM front or middle-end. He describes the obvious issues with this, with the most fiddly probably being instruction selection converting appropriate IR to the right target-specific functionality.

LLVM commits

The SROA (scalar replacement of aggregates) pass has seen some refactoring to, in the future, allow for more intelligent rewriting. r224742, r224798.

Welcome to the fiftieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

I'll be at MICRO-47 this week. If you're there do say hi, especially if you want to chat about LLVM or lowRISC/RISC-V.

Chris Bieneman started a discussion about supporting stripping out unused intrinsics with the aim of reducing the size of libLLVM. The proposed patches reduce binary size by ~500k, which he later points out is more significant in the context of their already size-reduced build.

Welcome to the fifty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects.

December 15, 2014

Welcome to the forty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

Support for statepoints landed in LLVM this week, and Philip Reames has a blog post detailing some notes and caveats. See also the mailing list discussion linked to below about future plans for GC in LLVM.

Philip Reames has a post detailing his future plans for GC in LLVM. Comments are invited. The aim is to eventually delete the existing gcroot lowering code. If you are actively using this, please do speak up.

Rafael Espíndola has been working on type merging during LTO and ultimately proposes moving to a single pointer type in LLVM IR. There seems to be positive feedback on the idea, given that pointer types don't convey useful information to the optimizer and don't really provide safety.

LLVM commits

The statepoint infrastructure for garbage collection has landed. See the final patch in the series for documentation. r223078, r223085, r223137, r223143.

The PowerPC backend gained support for readcyclecounter on PPC32. r223161.

Support for 'prologue' metadata on functions has been added. This can be used for inserting arbitrary code at a function entrypoint. This was previously known as prefix data, and that term has been recycled to be used for inserting data just before the function entrypoint. r223189.

Other project commits

An effort has started in lld to reduce abstraction around InputGraph, which has been found to get in the way of new features due to excessive information hiding. r223330. The commit has been temporarily reverted due to breakage on Darwin and ELF.

A large chunk of necessary code for Clang module support has been added to LLDB. r223433.

December 01, 2014

Welcome to the forty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

John Regehr has posted an update on the Souper superoptimizer which he and his collaborators have been working on. They have implemented a reducer that tries to cut each Souper optimization down to something more minimal. Their current results give ~4000 distinct optimisations, of which ~1500 LLVM doesn't know how to do. Of course, many of these may in fact be covered by a single rule or pass. One of the next steps is to extend Souper to support the synthesis of instruction sequences. See also the discussion on the llvm mailing list.

The LLVM Blog features a summary of recent advances in loop vectorization for LLVM. This includes diagnostics remarks to get feedback on why loops which aren't vectorized are skipped, the loop pragma directive in Clang, and performance warnings when the directive can't be followed.

The LLVM Haskell Compiler (LHC) has been newly reborn along with its blog. The next steps in development are to provide better support for Haskell2010, give reusable libraries for name resolution and type checking, and to produce human-readable compiler output.

On the mailing lists

Hal Finkel has posted an RFC suggesting the removal of the BBVectorize pass. It hasn't progressed to production quality, has various bugs and code FIXMEs, and the SLP vectorizer has existed and been enabled by default for some time. If you feel differently, now is the time to speak up.

LLVM commits

Support for -debug-ir (emitting the LLVM IR in debug data) was removed. There's no real justification or explanation in the commit message, but it's likely it was unfinished/unused/non-functional. r222945.

InstCombine will now canonicalize toward the value type being stored rather than the pointer type. The rationale (explained in more detail in the commit message) is that memory does not have a type, but operations and the values they produce do. r222748.

The documentation for !invariant.load metadata has been clarified. r222700.

November 25, 2014

Loop vectorization was first introduced in LLVM 3.2 and turned on by default in LLVM 3.3. It has been discussed previously on this blog in 2012 and 2013, as well as at FOSDEM 2014, and at Apple's WWDC 2013. The LLVM loop vectorizer combines multiple iterations of a loop to improve performance. Modern processors can exploit the independence of the interleaved instructions using advanced hardware features, such as multiple execution units and out-of-order execution, to improve performance.

Unfortunately, when loop vectorization is not possible or profitable the loop is silently skipped. This is a problem for many applications that rely on the performance vectorization provides. Recent updates to LLVM provide command line arguments to help diagnose vectorization issues and a new pragma syntax for tuning loop vectorization, interleaving, and unrolling.

New Feature: Diagnostics Remarks

Diagnostic remarks provide the user with insight into the behavior of LLVM’s optimization passes, including unrolling, interleaving, and vectorization. They are enabled using the -Rpass family of command line arguments. Interleaving and vectorization diagnostic remarks are produced by specifying the ‘loop-vectorize’ pass. For example, specifying ‘-Rpass=loop-vectorize’ tells us that the following loop was vectorized by 4 and interleaved by 2.
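The loop from the original post is not reproduced here; as a hedged illustration, a simple loop with independent iterations such as the following is typically reported as vectorized when compiled with ‘-Rpass=loop-vectorize’ (the exact width and interleave count depend on the target):

```c
#include <assert.h>

/* Compile with: clang -O3 -Rpass=loop-vectorize -c add.c
   A remark such as "vectorized loop (vectorization width: 4,
   interleaved count: 2)" is then printed for the loop below. */
void add_arrays(float *restrict a, const float *restrict b, int n) {
  for (int i = 0; i < n; i++)
    a[i] += b[i]; /* iterations are independent: safe to vectorize */
}
```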

Many loops cannot be vectorized including loops with complicated control flow, unvectorizable types, and unvectorizable calls. For example, to prove it is safe to vectorize the following loop we must prove that array ‘A’ is not an alias of array ‘B’. However, the bounds of array ‘A’ cannot be identified.

void test2(int *A, int *B, int Length) {
  for (int i = 0; i < Length; i++)
    A[B[i]]++;
}

clang -O3 -Rpass-analysis=loop-vectorize -S test2.c -o /dev/null
test2.c:3:5: remark: loop not vectorized: cannot identify array bounds
  for (int i = 0; i < Length; i++)
  ^

Control flow and other unvectorizable statements are reported by the '-Rpass-analysis' command line argument. For example, many uses of ‘break’ and ‘switch’ are not vectorizable.

for (int i = 0; i < Length; i++) {
  if (A[i] > 10.0)
    break;
  A[i] = 0;
}

control_flow.cpp:5:9: remark: loop not vectorized: loop control flow is not understood by vectorizer

New Feature: Loop Pragma Directive

Explicit control over the behavior of vectorization, interleaving, and unrolling is necessary to fine-tune performance. For example, when compiling for size (-Os) it may still be a good idea to vectorize the hot loops of the application to improve performance. Vectorization, interleaving, and unrolling can be explicitly specified using the #pragma clang loop directive prior to any for, while, do-while, or C++11 range-based for loop. For example, the vectorization width and interleave count can be explicitly specified for a loop using the loop pragma directive.
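The original post's example is not reproduced here; the sketch below shows the documented pragma syntax requesting a vectorization width of 4 and an interleave count of 2 (the loop body itself is invented for illustration):

```c
#include <assert.h>

void scale(float *a, int n, float k) {
/* Request vectorization by 4 and interleaving by 2 for the next loop. */
#pragma clang loop vectorize_width(4) interleave_count(2)
  for (int i = 0; i < n; i++)
    a[i] *= k; /* each iteration is independent */
}
```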

Performance Warnings

Sometimes a loop transformation is not safe to perform; for example, vectorization may fail due to complex control flow. If vectorization is explicitly specified, a warning message is produced to alert the programmer that the directive cannot be followed. For example, the following function, which returns the index of the last positive value in the loop, cannot be vectorized because the ‘last_positive_index’ variable is used outside the loop.

int test5(int *List, int Length) {
  int last_positive_index = 0;
#pragma clang loop vectorize(enable)
  for (int i = 1; i < Length; i++) {
    if (List[i] > 0) {
      last_positive_index = i;
      continue;
    }
    List[i] = 0;
  }
  return last_positive_index;
}

clang -O3 -g -S test5.c -o /dev/null
test5.c:5:9: warning: loop not vectorized: failed explicitly specified loop vectorization
  for (int i = 1; i < Length; i++) {
  ^

The debug option ‘-g’ allows the source line to be provided with the warning.

Conclusion

Diagnostic remarks and the loop pragma directive are two new features that are useful for feedback-directed performance tuning. Special thanks to all of the people who contributed to the development of these features. Future work includes adding diagnostic remarks to the SLP vectorizer and an additional option for the loop pragma directive to declare the memory operations as safe to vectorize. Additional ideas for improvements are welcome.

November 24, 2014

Welcome to the forty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Version 3.0 of the Capstone disassembly framework has been released. Python bindings have been updated to support Python 3, and this release also adds support for Sparc, SystemZ and XCore. It also has performance improvements.

On the mailing lists

If you're wondering how the process of adding OpenMP support to Clang is going, the answer is that it's still ongoing and there's hope it will be done by the 3.6 release, depending on the speed of code reviews.

Peter Collingbourne has proposed adding the llgo frontend to the LLVM project. Chris Lattner is in favour of this, but would like to see the GPLv3+runtime exception dependencies rewritten before being checked in. Some people in the thread expressed concern that the existing base of LLVM/Clang reviewers know C++ and may not be able to review patches in Go, though it looks like a non-zero number of existing LLVM reviewers are appropriately multilingual.

November 18, 2014

Welcome to the forty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Chrome on Linux now uses Clang for production builds. Clang has of course been used on OS X Chrome for quite some time. The switch saw a reduction in binary size of ~8%, though this was versus GCC 4.6 rather than something more up-to-date.

The LLVM in HPC workshop at SC14 is taking place on Monday and the full agenda with abstracts is available online.

November 10, 2014

Welcome to the forty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

James Molloy has been experimenting with the scheduling model on the Cortex-A57 and found some oddities. He noted the MicroOpBufferSize is currently set to 128, and reducing it right down to 2 seems to have no effect. Andrew Trick responded with some suggestions on implementing a custom scheduling strategy.

LLVM commits

The PBQP register allocator has had its spill costs and coalescing benefits tweaked. This apparently results in a few percent improvement on benchmarks such as EEMBC and SPEC. r221292, r221293.

The new SymbolRewriter pass is an IR to IR transformation allowing adjustment of symbols during compilation. It is intended to be used for symbol interpositioning in sanitizers and performance analysis tools. r221548.

November 03, 2014

Welcome to the forty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

The 2014 LLVM Dev meeting was held last week. I couldn't make it, but it seems like there was a great selection of talks. Sadly the keynote about Swift's high-level IR was cancelled. No word yet on when we can expect slides and videos online. However, slides by Philip Reames and Sanjoy Das from their talk on implementing fully relocating garbage collection in LLVM are online.

Peter Zotov has been doing lots of work on the LLVM OCaml bindings recently, and is looking for additional help. Recently, he's closed almost all open bugs for the bindings, migrated them to ocamlfind, fixed Llvm_executionengine, and ensured pretty much the whole LLVM-C API is exposed. Tasks on the todo list include writing tests in OUnit2 format, migrating the Kaleidoscope tutorial off camlp4, and splitting up and adding OCaml bindings to this patch. More ambitiously, it would be interesting to write LLVM passes in OCaml and to represent LLVM IR as a pure AST. If any of this interests you, do get in touch with Peter. He's able to review any patches, but could do with help on working through this list of new features.

Tom Stellard suggests deprecating the autoconf build system. Right now there is both an autotools-based system and a CMake system, though CMake seems most used by developers, for LLVM at least. Bob Wilson points out that the effort required to keep the existing makefiles working is much less than what might be needed to update the CMake build to support all use cases. Other replies, though, suggest that the CMake build already supports pretty much all the configurations people use. If there are people who actually enjoy fiddling with build systems (far-fetched, I know), it seems like a little effort could go a long way and allow the makefile system to be jettisoned.

Chris Matthews announces that a new Jenkins-based OSX build cluster is up and running. This includes multiple build profiles and an O3 LTO performance tracker. The Jenkins config should be committed to zorg soon.

LLVM commits

Support for writing sampling profiles has been committed. In the future, support to read (and maybe write) profiles in GCC's gcov format will be added, and llvm-profdata will get support to manipulate sampling profiles. r220915.

A comment has been added to X86AsmInstrumentation to describe how asm instrumentation works. r220670.

The Microsoft vectorcall calling convention has been implemented for x86 and x86-64. r220745.

The C (and OCaml) APIs gained functions to query and modify branches, and to obtain the values for floating point constants. There have been a whole bunch of additional commits related to the OCaml bindings, too many to pick out anything representative. r220814, r220815, r220817, r220818.

The loop and SLP (superword level parallelism) vectorizers are now enabled in the Gold plugin. r220886, r220887.

Clang commits

A refactoring of libTooling to reduce required dependencies means that clang-format's binary is now roughly half the size. r220867.

October 27, 2014

Welcome to the forty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

This week it's the LLVM Developers' Meeting in San Jose. Check out the schedule. Unfortunately I won't be there, so I'm looking forward to the slides and videos going online.

On the mailing lists

Renato Golin proposes moving libunwind into compiler-rt. One of the subtleties is that libunwind isn't fully compatible with GCC's unwind implementation (due to different data structure layouts), which means they can't be mixed.

October 21, 2014

In a recent discussion on llvm-commits regarding the statepoint changes which are up for review, I managed to get myself confused and made a couple of inaccurate statements regarding the existing capabilities of gcroots vs the newly proposed statepoints. This post is a (hopefully correct) summary of the similarities and differences.

For the purposes of this post, I am only talking about the semantics of the collector at a source language level call site. The issues highlighted with gc root and safepoint poll sites in my previous post still stand, but I didn't do a very good job (in retrospect) of distinguishing between safepoints at call sites and the additional checks plus runtime calls inserted to ensure that running code checks for a safepoint request at some interval. The points in that post apply to the latter; this one talks about the former.

From a functional correctness standpoint, gc.root and statepoint are equivalent. They can both support relocating collectors, including those which relocate roots. To prevent future confusion, let me review how each works.

gc.root uses explicit spill slots in the IR in the form of allocas. Each alloca escapes (through the gcroot call itself); as a result, the compiler must assume that any readwrite call can both consume and update the values in question. Additionally, the fact that all calls are readwrite prevents reordering of unrelated loads past the call. gcroot relies on the fact that no SSA value relocated at a call site is used at a site reachable from the call. Instead, a new SSA value (whose relation to the original is unknown by the compiler) is introduced by loading from the (potentially clobbered) alloca. gcroot creates a single stack map table for the entire function. It is the compiled code’s responsibility to ensure that all values in the allocas are either valid live pointers or null.
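As a loose C analogy of the paragraph above (not the actual IR, and the collector hook is invented for illustration): gc.root behaves as if each GC pointer is spilled to a local whose address has escaped, so after any call that might collect, the pointer must be reloaded from that slot as a fresh value:

```c
#include <assert.h>

static int heap_a = 1, heap_b = 2;

/* Hypothetical collector hook: it may "relocate" the object by
   rewriting the root slot it was handed. */
static void maybe_collect(int **root) {
  if (*root == &heap_a)
    *root = &heap_b; /* simulate the object moving */
}

int use_root(void) {
  int *p = &heap_a;     /* the SSA value before the call */
  int *slot = p;        /* "spilled" to an escaping slot (the alloca) */
  maybe_collect(&slot); /* call site: collector may update the slot */
  p = slot;             /* reload: a new value whose relation to the
                           old one the compiler cannot assume */
  return *p;
}
```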

Statepoints use most of the same techniques. We rely on not having an SSA value used on both sides of a call, but we manage the relocation via explicit IR relocation operations, not loads and stores. We require the call to be read/write to prevent reordering of unrelated loads. Since the spill slots are not visible in the IR, we do not need the reasoning about escapes that gc.root does.

To explicitly state this again since I screwed this up once before, both statepoints and gc.roots can correctly represent relocation semantics in the IR. In fact, the underlying reasoning about their correctness are rather similar.

They do differ fairly substantially in the details though. Let’s consider a few examples.

Consider a simple optimization for null pointer relocation. If the optimizer manages to establish that one of the values being relocated is null, propagating this across a statepoint is straightforward. (For each gc.relocate, if the source is null, replaceAllUsesWith null.) Implementing this same optimization for gc.root is harder since the store and load may have been reordered from immediately around the call. This isn't an unsolvable problem by any means, but it would be a GVN change, not an InstCombine one. In practice, we believe InstCombine style optimizations to be advantageous since they're simpler to write and debug. Arguably, they're also more powerful given the current pipeline since they have multiple opportunities to trigger.

Derived Pointers – gcroot can represent derived pointers, but only via convention. There is no convention specified, so it's up to the frontend to create its own. Statepoints define a convention (explicitly in the relocation operation) which makes describing optimizations straightforward.

One thing we plan to do with the statepoint representation is to implement an "easily derived pointer" optimization (to run near CodeGenPrep). On X86, it's far cheaper to recreate a "GEP base + 5" derived pointer than to relocate it. Recognizing this case is quite straightforward given the statepoint representation.

A frontend could implement a similar optimization for gcroot at IR generation time. You could also implement such an optimization over the load/call/store representation, but the implementation would be much more complex (analogous to the null optimization above).

To be fair, gc.root may need such an optimization less. Since call-safepoints are inserted early, CSE has not yet run. As a result, there may be fewer “easily derived pointers” live across a call.

Format – Statepoints use a standard format. gc.root supports custom formats. Either could be extended to support the other without much difficulty.

The more material difference between the two is that gc.root generates a single stack map for the entire function while statepoints generate a unique stack map per call site. Having a single stack map imposes a slight penalty on code compiled with gc.root since dead values must explicitly be removed from the alloca (by a write of null). In the wrong situation (say a tight loop with two calls), this could be material.
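In a C-like sketch (all names invented for illustration), the cost described above shows up as explicit null writes: with one stack map covering the whole function, a root that dies between two calls must be cleared so a later safepoint never traces a stale pointer:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical single shared root slot, covered by one per-function
   stack map, and a call that acts as a safepoint. */
void *gc_root_slot;
void collector_safepoint(void) { /* a collector would scan gc_root_slot */ }

void two_calls(void *obj) {
  gc_root_slot = obj;  /* obj is live across the first call */
  collector_safepoint();
  gc_root_slot = NULL; /* obj dies here: the slot must be nulled
                          explicitly so the second safepoint does not
                          trace a stale pointer (an extra store per
                          dead root, painful in a tight loop) */
  collector_safepoint();
}
```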

Lowering - Currently, both gc.root and statepoint lower to stack slots. gc.root does this at the IR level, statepoints does so in SelectionDAG.

The design of statepoints is intended to allow pushing the explicit relocations back through the backend. The reason this is desirable is that pointers can be left in callee saved registers over call sites. Without substantial re-engineering, such a thing is not possible for gc.root. The importance of this from a performance perspective is debatable. It is my belief that the key benefit would be in a) reducing frame sizes (by not requiring spill slots), and b) avoiding spills around calls.

An advantage of gc.root is that the backend can remain largely ignorant of the gc.root mechanism. By the time the backend encounters them, a gc.root is just another alloca. One potential problem with the current implementation is that the escape is lost when lowering; the gcroot call is lowered to an entry into a side table and the alloca no longer escapes. This is a source of possible bugs, but is also a straightforward fix.

As to the lowering currently implemented, it's debatable which is better. Statepoints optimize constants and unify based on SDValue. As a result, two IR level values of different types (with the same bit pattern) can end up sharing the same stack slot. However, it suffers when trying to assign stack slots. We currently use heuristics, but you can end up with ugly shuffling of values around on the stack across basic blocks. (There's a number of ways to improve that, but it's not yet implemented.) gc.root doesn't suffer from this problem since stack slots are assigned by the frontend.

Since the stack spills and reloads are visible at the IR layer, gcroot gets the full ability of the optimizer to remove redundant reloads. Statepoints only get to leverage the pieces in the backend. In theory, this could result in materially worse spill/reload code for statepoints. In practice, this appears not to matter much provided the same value is assigned to the same slot across both calls, but I don’t actually have much data here to say anything conclusively yet.

I haven’t tried to measure frame size for gc.root vs statepoints. I suspect that statepoints may come out slightly ahead, but I doubt this is material. There are also cases (see “easily derived pointers” above), where gc.root may come out ahead.

IR Level Optimization – Both gc.root and statepoints cripple optimization (by design!). gcroot works better with inlining today, but statepoints could be easily enhanced to handle this case. (The same work would benefit symbolic patchpoints.)

It is my belief that statepoints are easier to optimize (e.g. to teach LICM about), but this is purely my guess with no real evidence. Both suffer from the fact that calls must be marked readwrite. Not having to reason about memory seems easier, but I'm open to other arguments here.

Community Support & Compatibility
From a practical perspective, statepoints have active users behind them. We are interested in continuing to enhance and optimize them in the public tree. The same support does not seem to exist for gcroot.

The implementation of statepoints is largely aligned with that of patchpoints. The implementation of gcroot is completely separate and poorly understood by the majority of the community.

It wouldn’t be hard to write a translation pass from gcroot to statepoints or from statepoints to gcroot. If folks are concerned about compatibility, this would be a reasonable option. The largest challenge to transparently replacing one with the other is in generating the right output format.Summary
To summarize, gcroot and statepoints are functionally equivalent (modulo possible bugs.) In their current form, the two are largely comparable with each having some benefits. Long term, we believe a statepoint representation will allow better code generation and IR level optimization of code with safepoints inserted. We believe statepoints to be easier to optimize both at the IR level and backend.

Again, the late safepoint proposal is independent and could be done with either representation. It’s currently implemented on statepoints, but it could be extended to gcroot without too much work.

October 20, 2014

In case you missed it, you may like to know that there will be a talk on "OpenMP* Support in Clang/LLVM: Status Update and Future Directions" at the LLVM developers' meeting http://www.llvm.org/devmtg/2014-10/ in a couple of weeks' time.

Welcome to the forty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Eli Bendersky's repository of examples for using LLVM and Clang as libraries and for building new passes isn't new, but it is incredibly useful for newcomers to LLVM/Clang and I haven't featured it before. If you want to build something using LLVM or Clang, the llvm-clang-samples repo is one of the best places to start.

Other project commits

Welcome to the forty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

I've been in Munich for ORCONF this weekend. Slides from my talk about lowRISC are available here.

Saleem Abdulrasool points out that lld doesn't conform to the LLVM/Clang coding style. As you can imagine, few topics attract more feedback from developers than whitespace and variable naming conventions, so the thread is rather long. There's general agreement that it would be better if lld used the LLVM style, though there is unease about moving over in a single large patch on the basis that this would dirty commit history and make git/svn blame less useful. A patch was submitted to git some years ago to implement the ability to ignore certain SHAs in git blame, but it seems the feature was never added.

LLVM commits

Switches with only two cases and a default are now optimised to a couple of selects. r219223.
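A hedged source-level analogy of that transformation (the commit operates on LLVM IR, not C): a switch with two cases and a default can be rewritten as a pair of selects, i.e. conditional moves rather than branches:

```c
#include <assert.h>

/* Branchy form: two cases plus a default. */
int classify_switch(int x) {
  switch (x) {
  case 0:  return 10;
  case 7:  return 20;
  default: return 30;
  }
}

/* Equivalent select (ternary) form the optimiser can produce. */
int classify_selects(int x) {
  int t = (x == 7) ? 20 : 30; /* first select */
  return (x == 0) ? 10 : t;   /* second select */
}
```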

llvm-symbolizer will now be used to symbolize LLVM/Clang crash dumps. r219534.

The calculation of loop trip counts for loops with multiple exits has been de-pessimized. r219517.

October 14, 2014

Last week, the first set of patches for our work on garbage collection support in LLVM hit the mailing list. The review process will probably take a few weeks, but hopefully these should have landed by the 2014 LLVM Developers Meeting at the end of this month. At that conference, my co-worker Sanjoy and I are going to be giving a talk about our progress on statepoints, and late safepoint placement.

Here’s the full text of the review request, along with a couple of updates:

Title: [Patch] Statepoint infrastructure for garbage collection

The attached patch implements an approach to supporting garbage collection in LLVM that has been mentioned on the mailing list a number of times by now. There’s a couple of issues that need to be addressed before submission, but I wanted to get this up to give maximal time for review.

The statepoint intrinsics are intended to enable precise root tracking through the compiler so as to support garbage collectors of all types. Our testing to date has focused on fully relocating collectors (where pointers can change at any safepoint poll, or call site), but the infrastructure should support collectors of other styles. The addition of the statepoint intrinsics to LLVM should have no impact on the compilation of any program which does not contain them. There are no side tables created, no extra metadata, and no inhibited optimizations.

A statepoint works by transforming a call site (or safepoint poll site) into an explicit relocation operation. It is the frontend’s responsibility (or eventually the safepoint insertion pass we’ve developed, but that’s not part of this patch) to ensure that any live pointer to a GC object is correctly added to the statepoint and explicitly relocated. The relocated value is just a normal SSA value (as seen by the optimizer), so merges of relocated and unrelocated values are just normal phis. The explicit relocation operation, the fact the statepoint is assumed to clobber all memory, and the optimizer’s standard semantics ensure that the relocations flow through IR optimizations correctly.

During the lowering process, we currently spill aggressively to stack. This is not entirely ideal (and we have plans to do better), but it’s functional, relatively straightforward, and matches closely the implementations of the patchpoint intrinsics. We leverage the existing StackMap section format, which is already used by the patchpoint intrinsics, to report where pointer values live. Unlike a patchpoint, these locations are known (by the backend) to be writeable during the call. This enables the garbage collector to transparently read and update pointer values if required. We do optimize lowering in certain well known cases (constant pointers, a.k.a. null, being the key one.)

There are a few areas of this patch which could use improvement:

The patch needs to be rebased against ToT. It’s currently based against a roughly 3 week old snapshot. (FIXED)

The intrinsics should probably be renamed to include an “experimental” prefix.

The usage of Direct and Indirect location types is currently inverted compared to the definition used by patchpoint. This is a simple fix. (FIXED)

The test coverage could be improved. Most of the tests we’ve actually been using are built on top of the safepoint insertion mechanism (not included here) and our runtime. We need to improve the IR level tests for optimizer semantics (i.e. not doing illegal transforms), and lowering. There are some minimal tests in place for the lowering of simple statepoints.

The documentation is “in progress” (to put it kindly.) (MUCH IMPROVED, MORE TODO)

Many functions are missing doxygen comments.

There’s a hack in to force the use of RSP+Offset addressing vs RBP-Offset addressing for references in the StackMap section. This works, shouldn’t break anyone else, but should definitely be cleaned up. The choice of addressing preference should be up to the runtime.

When reviewing, I would greatly appreciate feedback on which issues need to be fixed before submission and which can be addressed afterwards. It is my plan to actively maintain and enhance this infrastructure over the next few months (and years). It’s already been developed out of tree entirely too long (our fault!), and I’d like to move to incremental work in tree as quickly as feasible.

Planned enhancements after submission:

The ordering of arguments in statepoints is essentially historical cruft at this point. I’m open to suggestions on how to make this more approachable. Reordering arguments would (preferably) be a post-commit action.

Support for relocatable pointers in callee-saved registers over call sites. This will require the notion of an explicit relocation pseudo-op and support for it throughout the backend (particularly the register allocator).

Optimizations for non-relocating collectors. For example, the clobber semantics of the spill slots aren’t needed if the collector isn’t relocating roots.

Further optimizations to reduce the cost of spilling around each statepoint (when required at all).

Support for invokable statepoints.

Once this has baked in tree for a while, I plan to delete the existing gc_root code. It is unsound, and essentially unused.

In addition to the enhancements to the infrastructure in the currently proposed patch, we’re also working on a number of follow up changes:

Verification passes to confirm that safepoints were inserted in a semantically valid way (i.e. no memory access of a value after it has been inserted)

A transformation pass to convert naive IR to include both safepoint polling sites, and statepoints on every non-leaf call. This transformation pass can be used at initial IR creation time to simplify the frontend authors’ work, but is also designed to run on *fully optimized* IR, provided the initial IR meets certain (fairly loose) restrictions.

A transformation pass to convert normal loads and stores into user provided load and store barriers.

Further optimizations to reduce the number of safepoints required, and improve the infrastructure as a whole.

We’ve been working on these topics for a while, but the follow on patches aren’t quite as mature as what’s being proposed now. Once these pieces stabilize a bit, we plan to upstream them as well. For those who are curious, our work on those topics is available here: https://github.com/AzulSystems/llvm-late-safepoint-placement

October 06, 2014

Welcome to the fortieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

I'll be in Munich next weekend for the OpenRISC conference where I'll be presenting on the lowRISC project to produce an open-source SoC. I'll be giving a similar talk in London at the Open Source Hardware User Group on 23rd October.

News and articles from around the web

Capstone 3.0 RC1 has been released. Capstone is an open source disassembly engine, based initially on code from LLVM. This release features support for Sparc, SystemZ, and XCore as well as the previously supported architectures. Among other changes, the Python bindings are now compatible with Python 3.

An interesting paper from last year came up on the mailing list. From EPFL, it proposes adding -OVERIFY to optimise programs for fast verification. The performance of symbolic execution tools is improved by reducing the number of paths to explore and the complexity of branch conditions. They managed a maximum 95x reduction in total compilation and analysis time.

Richard Pennington, who maintains the Clang/LLVM ELLCC cross-development toolchain, is considering dropping support for MicroBlaze. The MicroBlaze backend was dropped from LLVM last year, but Richard has been maintaining it out of tree. However, there seems to be very little actual interest. If somebody wants to pick it up, now is the time to jump in.

LLVM commits

The expansion of atomic loads/stores for PowerPC has been improved. r218922. The documentation on atomics has also been updated. r218937.

For the past few weeks, Chandler Carruth has been working on a new vector shuffle lowering implementation. There have been too many commits to summarise, but the time has come and the new codepath is now enabled by default. It claims 5-40% improvements in the right conditions (when the loop vectorizer fires in the hot path for SSE2/SSE3). r219046.

SimplifyCFG now has a configurable threshold for folding branches with common destination. Changing this threshold can be worthwhile for GPU programs where branches are expensive. r218711.

Basic support for the newly-announced Cortex-M7 has been added. r218747.

As discussed on the mailing list last week, the sqrt intrinsic will now return undef when given a negative input. r218803.

llvm-readobj learnt -coff-imports which will print out the COFF import table. r218891, r218915.

Clang commits

Support for the align_value attribute has been added, matching the behaviour of the attribute in the Intel compiler. The commit message explains why this attribute is useful in addition to aligned. r218910.

A rather useful diagnostic has been added. -Winconsistent-missing-override will warn if override is missing on an overridden method if that class has at least one override specified on its methods. r218925.

Support for MS ABI continues. thread_local is now supported for global variables. r219074.

Matcher and DynTypedMatcher saw some nice performance tweaking, resulting in a 14% improvement on a clang-tidy benchmark and compilation of Dynamic/Registry.cpp sped up by 17%. r218616.

lifetime.start and lifetime.end markers are now emitted for unnamed temporary objects. r218865.

The __sync_fetch_and_nand intrinsic was re-added. See the commit message for a history of its removal. r218905.

Clang gained its own implementation of C11 stdatomic.h. The system header will be used in preference if present. r218957.

Clang now understands -mthread-model to specify the thread model to use, e.g. posix, single (for bare-metal and single-threaded targets). r219027.

Other project commits

lldb gained initial support for scripting stepping. This is the ability to add new stepping modes implemented by python classes. The example in the follow-on commit has a large comment at the head of the file to explain its operation. r218642, r218650.

September 30, 2014

Welcome to the thirty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

A backend for the educational 'y86' instruction set architecture has been started. The source is on Github.

A new binary snapshot of the ELLCC cross compilation toolchain is now available. Pre-compiled binaries are available for ARM, MIPS, PPC, and x86. All tarballs contain header files and runtime libraries for all targets to allow you to build for any supported target.

Wondering how to use noalias and alias.scope metadata notations? Hal Finkel has the answer for you.

Should the LLVM project standardise on a commit message policy? Renato Golin suggests trying to keep the first line short followed by some number of 80 character paragraphs. It seems there's massive agreement on this sort of guidance.

September 22, 2014

Welcome to the thirty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

I've been at PyConUK this past weekend so I'm afraid it's another slightly shorter than normal issue. I've been talking about Pyland, a programming game that aims to teach children programming in Python (and of course, runs on Raspberry Pi).

News and articles from around the web

A paper has recently been published about Harmony. In the words of the authors "Harmony is an open source tool (built as an LLVM pass) that creates a new kind of application profile called Parallel Block Vectors, or PBVs. PBVs track dynamic program parallelism at basic block granularity to expose opportunities for improving hardware design and software performance." Their most recent paper on ParaShares describes how they find the most 'important' basic blocks in multithreaded programs.

If you're wondering about the current status of compiling glibc with Clang/LLVM, Kostya Serebryany has the answer. There are about ten instances of nested functions and four of VLAIS, with some patches waiting to be reviewed by upstream.


September 15, 2014

Welcome to the thirty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

This week's issue comes to you from sunny Tenerife. Yes, my dedication to weekly LLVM updates is so great that I'm writing it on holiday. Enjoy! I'll also note that I'm at PyCon UK next week where I'll be presenting on the results of a project we had some interns working on over the summer creating a programming game for the Raspberry Pi.

News and articles from around the web

Not only does Pyston have a shiny new blog, they've also released version 0.2. Pyston is an implementation of Python using LLVM, led by Dropbox. This release supports a range of language features that weren't supported in 0.1, including support for the native C API. The plan is to focus on performance during the development cycle for 0.3.

Sylvestre Ledru has posted a report of progress in building Debian with Clang following the completion of this year's Google Summer of Code projects. Now, with Clang 3.5.0, 1261 packages fail to build with Clang. Sylvestre describes how they're attacking the problem from both sides, by submitting patches to upstream projects as well as to Clang where appropriate (e.g. to ignore some unsupported optimisation flags rather than erroring out).

On the mailing lists

Philip Reames has started a discussion on adding optimisation hints for 'constant' loads. A common case is where a field is initialised exactly once and then is never modified. If this invariant could be expressed, it could improve alias analysis as the AA pass would never consider that field to MayAlias with something else (Philip reports that the obvious approach of using type-based alias analysis isn't quite enough).

Hal Finkel has posted an RFC on attaching attributes to values. Currently, attributes such as noalias and nonnull can be attached to function parameters, but in cases such as C++11 lambdas these can be packed up into a structure and the attributes are lost. Some followup discussion focused on whether these could be represented as metadata. The problem there of course is that metadata is intended to be droppable (i.e. is semantically unimportant). I very much like the suggestion from Philip Reames that the test suite should run with a pass that forcibly drops metadata to verify it truly is safe to drop.

In total, we reported 295 bugs with patches. 85 of them have been fixed (meaning that the Debian maintainer uploaded a new version with the fix).

In parallel, I think that the switch by FreeBSD and Mac OS X to Clang also helped to fix various issues by upstreams.

Hacking in clang

As a parallel approach, we started to implement a suggestion from Linus Torvalds and a few others. Instead of trying to fix every upstream project, where we can, we updated clang to improve gcc compatibility.

gcc has many flags to disable or enable optimizations. Some of them are legacy, others make no sense in clang, etc. Instead of failing with an error in clang, we created a new category of warnings (showing "optimization flag '%0' is not supported") and moved all the relevant flags into it. Some examples: r212805, r213365, r214906 or r214907.

We also updated clang to silently accept some harmless arguments like -finput-charset=UTF-8 (r212110), clang already being UTF-8 compliant.

Finally, we worked on the forwarding of linker flags. Clang and gcc behave very differently here: when gcc does not recognise an argument, it forwards it to the linker, whereas clang rejects the argument and fails with an error. In clang, we have to explicitly declare which arguments are to be forwarded to the linker. Of course, the correct way to pass arguments to the linker is to use -Xlinker or -Wl, but the Debian rebuild proved that these shortcuts are widely used. Two of these arguments are now forwarded:

-u: force the symbol to be entered in the output file as an undefined symbol (r211756). This one fixed most of the Haskell build failures and was the most common issue that we had (701 occurrences, though this does not mean that all these packages build fine now; some Haskell-based packages fail later in the process).

New errors

Just like in other releases, new warnings have been added in clang. Combined with (bad) usage of -Werror by upstream software, this causes new build failures.

Next steps

With the Debile project close to ready thanks to Clément Schreiner's GSoC, we will now have an automatic and transparent way to rebuild packages using clang.

Conclusion

As stated, we can see a huge drop in the number of failures over time.

Hopefully, with Clang getting better and better, and more and more projects adopting it as the default compiler or as a base for plugin/extension development, this percentage will continue to decrease.
Having some kind of release goal with clang for Jessie+1 can now be considered potentially reachable.

September 08, 2014

We are excited to announce the next release of the Intel® OpenMP* Runtime Library at openmprtl.org. This release aligns with Intel® Composer XE 2013 SP1 Update 4, scheduled for release in summer of 2014.

Welcome to the thirty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

On the mailing lists

Hal Finkel is calling for testers of the new context-free language pointer aliasing analysis algorithm. As well as some speedups, there are some benchmark slowdowns which sound worthy of further investigation.

Clang commits

VariantMatcher::MatcherOps was modified to reduce the amount of generated code. This reduces object size and compilation time. r217152.

Support for the 'w' and 'h' length modifiers in MS format strings was added. r217195, r217196.

A new warning is born. -Wunused-local-typedef will warn about unused local typedefs. r217298.

Other project commits

LLDB has gained initial support for 'type validators'. To quote the commit message, "Type Validators have the purpose of looking at a ValueObject, and making sure that there is nothing semantically wrong about the object's contents. For instance, if you have a class that represents a speed, the validator might trigger if the speed value is greater than the speed of light". r217277.

It is now possible to build libc++ on systems without POSIX threads. r217271.

A target.process.memory-cache-line-size option has been added to LLDB which changes the size of lldb's internal memory cache chunks read from the remote system. r217083.

September 01, 2014

Welcome to the thirty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

As I mentioned in a previous issue, I am involved in the lowRISC project to produce a fully open-source SoC. Just a quick reminder that we are hiring, and you have just over a week to get your application in.

News and articles from around the web

LLVM/Clang 3.5 is inching ever closer to release. The fourth and hopefully final release candidate is available for testing.

Quarks Lab have published a preview of SCAF, a Source Code Analysis Framework built on Clang. It promises a release soon.

The VMKit project website has this week been updated to mark the project as retired. VMKit was a project to implement virtual machines such as a JVM on top of LLVM. People interested in restarting the project are encouraged to get in touch with Gaël Thomas.

On the mailing lists

Manuel Klimek has provided a quick run-down of the state of his work on Clang C++ refactoring tools. He reports there are a number of standalone, single-use refactoring tools but more work needs to be done on generalising and integrating them. The plan is to push more of these tools to tools-extra (where clang-rename lives), make them integratable as a library, integrate them into libclang and then integrate them into projects like ycmd.

A discussion about improving llvm-objdump, kicked off by Steve King, makes an interesting read. I'm looking forward to a future with a more featureful llvm-objdump that prints symbols of branch targets by default.

I linked last week to the mailing list thread on removing static initializers for command line options and regrettably was unable to summarise the extensive discussion. The bad news is discussion has continued at a rapid pace, but thankfully Chandler Carruth has rather helpfully summarised the main outcomes of the discussion. It's also worth reading this thread for an idea of what the new infrastructure might look like.

LLVM commits

The AArch64 backend learned about v4f16 and v8f16 operations, r216555.

The LLVM CMake build system now includes support for building with UndefinedBehaviourSanitizer. r216701.

Clang commits

The -fdevirtualize and -fdevirtualize-speculatively flags are now recognised (and ignored) for compatibility with GCC. r216477.

Some Google Summer of Code work has started to land. In particular, the Clang static analyzer gained initial infrastructure to support for synthesizing function implementations from external model files. See the commit message for full details on the intent of this feature. r216550.

Support was added for capturing variable length arrays in C++11 lambda expressions. r216649.

August 25, 2014

Welcome to the thirty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

The third release candidate for LLVM/Clang 3.5 is now available. As ever, test it on your codebases and report any regressions.

Adrian Sampson has written a blog post about Quala, a tool for implementing pluggable type systems for C/C++ using Clang. The example type systems are a system allowing nullable and non-nullable pointers as well as an information flow tracking system. In the future, Adrian wants to connect type annotations to LLVM IR.

On the mailing lists

There is a proposal to move the minimum supported Visual Studio version for compiling LLVM/Clang up to 2013 from 2012. LLVM/Clang 3.6 would be the first stable release with this requirement assuming there are no objections. With the introduction of C++11 features into the LLVM/Clang codebases, MSVC2012 support is troublesome due to a number of unsupported constructs. If this change would affect you negatively, now is the time to pipe up.

Richard Carback reports that two of his interns at Draper Laboratories have been working on resurrecting the LLVM C backend, with source on Github. If this is to make it back into the mainstream repository, somebody will have to volunteer to maintain it, which Richard has kindly offered to do.

August 18, 2014

Welcome to the thirty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

There has been some discussion about an extended Clang API. The initial discussion frames this as an 'ABI support library'. An extended Clang API could be used for automatically generating bindings to C or even C++ code (which right now Julia is using private interfaces to do).

Renato Golin has started a discussion about a target specific parsing API. The bug report describes the problem more fully, which is duplication of code which performs the same parsing task (e.g. -mfpu on command line and the .fpu assembly directive).

LLVM commits

FastISel for AArch64 will now make use of the zero register when possible and supports more addressing modes. r215591, r215597.

August 11, 2014

Welcome to the thirtieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Nuno Lopes, David Menendez, Santosh Nagarakatte, and John Regehr have written about ALIVe. This is a very promising tool that aims to aid the specification and proof of peephole optimisations (such as those currently found in LLVM's InstCombine). It uses an SMT solver in order to prove optimisations correct (and if incorrect, provides a counter-example).

Source and binaries for the first LLVM/Clang 3.5 Release Candidate are now available. If you like your LLVM releases to be on-time and regression-free, do your part and test them on your codebases.

Thomas Ströder and colleagues have recently published a paper "Proving Termination and Memory Safety for Programs with Pointer Arithmetic" which creates symbolic execution graphs from LLVM IR in order to perform its analysis. The preprint is available here.

On the mailing lists

Amin Shali from Google has posted an RFC on adding a rename refactoring tool to Clang. The proposed feature addition would consist of a command-line tool to semantically rename a symbol and an API that could be used by IDEs/editors to do the same.

LLVM commits

Support for scoped noalias metadata has been added. The motivation for this is to preserve noalias function attribute information when inlining and to model block-scope C99 restrict pointers. r213864, r213948, r213949.

The llvm-vtabledump tool is born. This will dump vtables inside object files. Right now it only supports MS ABI, but will in the future support Itanium ABI vtables as well. r213903.

The llvm.assume intrinsic has been added. This can be used to provide the optimizer with a condition it may assume to be true. r213973.

The loop vectorizer has been extended to make use of the alias analysis infrastructure. r213486.

Various additions have been made to support the PowerPC ELFv2 ABI. r213489, r213490, and more.

The R600 backend gained an instruction shrinking pass, which will convert 64-bit instructions to 32-bit when possible. r213561.

The llvm.loop.vectorize.unroll metadata has been renamed to llvm.loop.interleave.count. r213588.

LLVM 3.5 release notes for MIPS have been committed, if you're interested in seeing a summary of work in the last development cycle. r213749.

Welcome to the thirty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

On the mailing lists

Johannes Kapfhammer, a Google Summer of Code student has posted an RFC on adding a fuzzy parser for highlighting C++. The Clang parser can't be used for this as it may be highlighting incomplete snippets where not all types or functions are included. It was pointed out in responses that this is similar to clang-format's parser, but apparently this parser is not easily reusable and very tied to the clang-format implementation.

In response to a question about documentation for adding builders to the LLVM buildbot service, Dan Liew has posted a summary of how he has done it. He's looking for feedback on whether this is the best way to do things.

Tom Stellard proposes renaming the R600 target to AMDGPU. The motivation is that the backend has the name since the R600 was the first AMD GPU targeted, but it has added support for all AMD GPUs since then. There seems to be agreement this would be a sensible renaming.

LLVM commits

FastISel for AArch64 saw a number of improvements, including support for shift-immediate and the arithmetic-with-overflow intrinsics. r214345, r214348, and more.

The SLPVectorizer has seen a largeish commit that implements an "improved scheduling algorithm". Sadly the commit message offers no further details. r214494.

TargetInstrInfo gained isAsCheapAsMove which takes a MachineInstruction and returns true if that instruction is as cheap as a move instruction. r214158.

LLVM libraries can now be exported as importable CMake targets, making it easier for those building LLVM-based projects. This is now documented. r214077.

Release notes for PowerPC changes during 3.5 development have been committed. r214403.

Welcome to the thirty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

Some readers may be interested to know that lowRISC, a project to produce a fully open-source SoC started by a number of us at the University of Cambridge Computer Lab has been announced. We are hiring.

News and articles from around the web

Codeplay contributed the LLDB MI (Machine Interface) frontend a while ago, and have now committed some additional features. To coincide with that, they've published a series of blog posts covering the MI driver's implementation, how to set it up from within Eclipse, and how to add support for new MI commands.

McSema, a framework for transforming x86 programs to LLVM bitcode has now been open-sourced. The talk about McSema from the ReCON conference is also now online.

Eric Christopher has written to the mailing list to warn us of incoming API changes. These changes include modifying getSubtarget/getSubtargetImpl to take a Function/MachineFunction, so sub-targets could be used based on attributes on the function.

LLVM commits

Initial work on the MachineCombiner pass landed. This estimates critical path length of the original instruction sequence vs a transformed (combined) instruction sequence and chooses the faster code. An example given in the commit message is choosing between add+mul vs madd on AArch64, and a followup commit implements MachineCombiner for this target. r214666, r214669.

A few useful helper functions were added to the LLVM C API: LLVM{IsConstantString, GetAsString, GetElementAsConstant}. r214976.

A flag has been added to experiment with running the loop vectorizer before the SLP vectorizer. According to the commit message, eventually this should be the default. r214963.

The old JIT is almost dead: it has been removed (for those not paying close attention, 3.5 has already been branched so it still contains the old JIT). However, the patch was then reverted, so it's in zombie status. r215111.

AArch64 gained a load balancing pass for the Cortex-A57, which tries to make maximum use of available resources by balancing use of even and odd FP registers. r215199.

Clang commits

Thread safety analysis gained support for negative requirements to be specified. r214725.

Coverage mapping generation has been committed. The -fcoverage-mapping command line option can be used to generate coverage mapping information, which can then be combined with execution counts from instrumentation-based profiling to perform code coverage analysis. r214752.

A command line option has been added to limit the alignment that the compiler can assume for an arbitrary pointer. r214911.

July 21, 2014

Welcome to the twenty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

This is a special extended issue which I'm choosing to subtitle "LLVM Weekly visits the GNU Tools Cauldron". The event took place over the weekend and had a wide range of interesting talks. You can find my notes at the end of this newsletter. Talks were recorded and the videos should be made available in the next month or two.

On the mailing lists

Hal Finkel has posted an RFC on adding an IR-level intrinsic to LLVM to represent invariants. These are conditions that the optimizer is allowed to assume will be valid during the execution of the program. The post also comes with a complete set of patches implementing the concept. An issue raised by Philip Reames is that by representing the invariant as an IR instruction, this may affect profitability heuristics in optimisation passes. Chandler Carruth followed up with a clear description of the trade-offs which it seems people mostly agree with.

Yuri Gribov has re-opened a previously discussed issue, that LLVM and GCC set the frame pointer to point to different stack slots on ARM. Renato Golin responded with an explanation of how GCC ended up doing things differently. The AddressSanitizer people would prefer things to be unified so they can use knowledge of the layout to perform unwinding without using tables (a speed optimisation). It doesn't seem likely that either LLVM or GCC will be changing though.

Andrew Kaylor has proposed that Lang Hames take over from him as code owner for MCJIT, RuntimeDyld, and JIT event listener. This was agreed and CODE_OWNERS.txt has been updated appropriately.

LLVM commits

A dereferenceable attribute was added. This indicates that the parameter or return pointer is dereferenceable (i.e. can be loaded from speculatively without a risk of trapping). This is subtly different to the nonnull attribute which doesn't necessarily imply dereferenceability (you might for instance have a pointer to one element past the end of an array). r213385.

A new subtarget hook was added to allow targets to opt-out of register coalescing. r213078, r213188.

RegionInfo has been templatified so that it works on MachineBasicBlocks. r213456.

A monster patch from Nvidia adds a whole bunch of surface/texture intrinsics to the NVPTX backend. r213256.

Support was added for emitting warnings if vectorization is forced and fails. r213110.

Improvements to FastISel continue with the implementation of the FastLowerCall hook for X86. This actually reproduces what was already being done in X86, but is refactored against the target independent call lowering. r213049.

The ARM dmb, dsb and isb intrinsics have been implemented for AArch64. r213247.

Clang commits

Clang's rewrite engine is now a core feature (i.e. it cannot be disabled at configure time). r213171.

Error recovery when the programmer mistypes :: as : was improved. r213120.

The AArch64 Clang CLI interface proposal for -march has been implemented. See the commit message for details. r213353.

OpenMP work continues with the addition of initial parsing and semantic analysis for the final, untied and other clauses, and the master directive. r213232, r213257, r213237, and more.

Other project commits

The 'Kalimba' platform is now supported by lldb (presumably this refers to the CSR processor). r213158.

LLVM Weekly at the GNU Tools Cauldron

For full details on the conference and details on the speakers for the talks I've summarised below, see the GNU Tools Cauldron 2014 web page. Apologies for any inaccuracies; please do get in touch if you spot anything I may have noted incorrectly. LLVM followers may be particularly interested in Renato Golin's talk on collaboration between the GCC and LLVM communities.

Glibc BoF

2.20 is in "slushy" freeze mode. What else is left? fmemopen, fd locking, some -Wundef work

Anyone planning to check in something big for 2.21?

Mentor Graphics are planning to check in a Nios II port. It won't be accepted until the Linux kernel patches are in a kernel release.

A desire for AArch64 ILP32 ABI to get in. Kernel patches currently in review, compiler work is ready.

OpenRISC

NaCl (nptl)

Benchmarking glibc: does anyone have a good approach? There is a preload-library approach (see notes from Ondrej's talk).

Glibc has been built with AddressSanitizer, help needed to get it integrated into the build system. There was a comment this would be nice to get in to distributions.

Red Hat are working on supporting alternate libm implementations, including a low-precision and high-precision implementation. Intel are looking to add math functions that work on small vectors.

Abigail: toward ABI taming

Library authors want to determine whether changes to their shared library break apps for users, and users want to know whether an updated library remains compatible with their code. The abidiff tool reports the ABI differences between two object files given as its input.

libabigail consists of modules such as a DWARF reader and the comparison engine. Tools such as abidiff are built on this API.

What's next for libabigail?

abicompat will help application authors determine whether their application A is still compatible with an updated version of a given library L, by examining the undefined symbols of A that are resolved by L.

More amenable to automation (such as integration into build systems)

Support for un-instantiated templates. This would require declarations of uninstantiated templates to be represented in DWARF.

Writing VMs in Java and debugging them with GDB

Oracle Labs have been working on various dynamic language implementations in Java (e.g. Ruby, Python, R, JS, ...).

FastR is a reimplementation of R in Java featuring an interpreter (Truffle) and dynamic compiler (Graal).

Truffle and Graal start with an AST interpreter. The first time a node is evaluated it is specialised to the type seen at runtime. Later the tree is compiled using partial evaluation.

It may be deployed on standard HotSpot (no compilation), GraalVM, or the SubstrateVM (SVM) which uses Graal to ahead-of-time compile the language implementation. Debugging the SVM is difficult as Java debugging tools are not available. The solution is to generate DWARF information in the SVM's output.

Truffle and Graal are open source, the SubstrateVM is not (yet?).

GCC and LLVM collaboration

Good news: license issues, personal grudges and performance are off-topic.

Users should be protected from whatever disagreements take place. In the future we should have more pro-active discussions on various issues as opposed to reactive discussions regarding e.g. compiler flags that have been noticed to be arbitrarily different after the fact.

Renato lists common projects that we may collaborate on: binutils, glibc, sanitizers. Sanitizers are a collaboration success story.

Can we agree on a (new?) common user interface?

There's a surprising amount of confusion about -march, -mtune, and -mcpu considering we're in a room of compiler developers. It sounds like there's not much support for re-engineering the set of compiler flags as the potential gain is not seen as being great enough.

Can we agree to standardise on attributes, C/C++ extensions, builtins, ASM, the linker API?

GCC docs have just been rewritten, so some criticisms about how difficult it is to dig in are no longer valid.

Machine Guided Energy Efficient Compilation

Initial investigations in 2012 found that compiler flags can have a meaningful effect on energy consumption. This raises the question of how to determine which flags to use.

MAGEEC will target both GCC and LLVM initially. It is implemented as a compiler plugin which performs feature extraction and allows the output of the machine learning algorithm to change the sequence of passes which are run. Fractional Factorial Design is used to reduce the optimisation space to explore.

Turning passes on/off arbitrarily can often result in internal compiler errors. Should the machine learning algorithm learn this, or should GCC better document pass requirements?

It would be useful to MAGEEC if the (currently internal) plugin API could be stabilized. They also currently have to use a hacked up Clang as it doesn't provide plugin hooks.

The project has produced a low cost energy measurement board as well as its own benchmark suite (Bristol/Embecosm Embedded Benchmark Suite, or BEEBS). BEEBS 2.0 is scheduled for release by 31st August 2014 with a much wider range of benchmarks (currently 93). Jeremy showed a rather pleasing live demo where you can run a benchmark on a microcontroller development board and immediately see the energy in mJ consumed running it.

The project does not yet achieve better results than GCC -O2, but this is expected to change over the coming months.

Just-in-time compilation using GCC

libgccjit.so is an experimental branch of GCC which allows you to build GCC as a shared library and embed it in other programs in order to allow in-process code generation at runtime.

A dedicated API for JIT will allow better stability guarantees. It provides a high-level API designed for ease of use.

It has a C++ API, which includes some cunning operator overloading to massively reduce verbosity, and a Python API.

David Malcolm has written Coconut, a JIT compiler for Python using libgccjit.so. It is incomplete and experimental.

Drawback: it currently has to write out a .s file and invoke gcc on it. Some might make a cheeky comment about the benefits of architecting a compiler so it can be used as a library, but I of course wouldn't dare. The good news is the speaker is actively looking at what would be needed to use GAS and GNU ld as a library.

Introduction to new Intel SIMD ISA and its impact on GCC

AVX-512 offers 64 single precision or 32 double precision floating-point operations per cycle. It also has eight 64-bit mask registers.

Rounding modes can be set on a per-instruction basis.

Basic support is available from GCC 4.9.x.

News from Sanitizers

MemorySanitizer detects use of uninitialized memory. It increases CPU usage by about 2.5x and RAM usage by 2x. It was released in LLVM in 2013 and is currently Linux/x86-64 only.

History growth is bounded by limiting the history depth and the number of new history nodes per stack trace.

MSan has found hundreds of bugs across Google internal code, Chromium, LLVM, etc. It was more challenging for Chromium due to the number of system libs that had to be rebuilt.

AddressSanitizer annotations allow you to detect accesses to regions of e.g. a std::vector<> which have been allocated as part of its capacity but not yet used (i.e. will start being used on the next push_back). Next is to do the same for std::string and std::deque.

Glibc uses GNU C extensions rather than plain ANSI C, which currently prevents compilation with Clang (nested functions in particular are problematic). It can, however, be built with ASan using GCC.

Evgeniy comments that the lack of standardisation between Clang and GCC for things like __has_feature(address_sanitizer) vs __SANITIZE_ADDRESS__ is irritating. This is just the sort of thing Renato was talking about yesterday of course.

glibc performance tuning

Use memset as an example. Look at 3 variants.

Writing a useful benchmark is more difficult than you might think. Simply running memset many times in a loop over the same memory locations is not a good benchmark, because the processor's load-store forwarding makes it unrealistically fast. Even after fixing this, the branch predictor may perform much better than it would when memset is used in a real-world scenario, leading to unrepresentative results.

To move beyond microbenchmarks, Ondrej has been using LD_PRELOAD to link against instrumented versions of the functions which record details about the time taken.

strcmp was the most frequently called glibc function in Ondrej's testing (when running Firefox).

Devirtualization in GCC

This is a special case of indirect call removal, and although the talk is given in the context of C++ the techniques apply to other languages too. Some basic cases are handled in the front-end and even specified by the language standard.

It is a special case of constant propagation across aggregates, which is already done by Global Value Numbering and Interprocedural Constant Propagation. But these passes only catch a tiny number of possible cases.

Loss of information between the frontend and middle end can make some cases almost impossible. The intermediate language can be extended with explicit representations of base types, locations of virtual table pointers, and vtables. Also annotate polymorphic calls specifying instance and polymorphic call type and flags to denote constructors/destructors.

I'm not able to summarise details on the GCC devirt implementation better than the slides do. Hopefully they'll be made available online.

A particular challenge is to match types between different compilation units. The C++ One Definition Rule is used.

It can be used to strengthen unreachable function removal.

Feedback-directed devirtualization was extended in GCC 4.9 to work inter-module with LTO.

July 17, 2014

Over the past year, the WebKit project made tremendous progress on the ability to optimize JavaScript applications. A major part of that effort was the introduction of the Fourth Tier LLVM (FTL) JIT. The Fourth Tier JIT targets long-running JavaScript content and performs a level of optimization beyond WebKit's interpreter, baseline JIT, and high-level optimizing JIT. See the FTL Optimization Strategy section below for more on WebKit's tiered optimizations. The engineering advancements within WebKit that made the FTL possible were described by Filip Pizlo in the Surfin' Safari Blog post, Introducing the WebKit FTL JIT. On April 29, 2014, the WebKit team enabled FTL by default on trunk: r167958.

This achievement also represents a significant milestone for the LLVM community. FTL makes it clear that LLVM can be used to accelerate dynamically type checked languages in a competitive production environment. This in itself is a tremendous success story and shows the advantage of the highly modular and flexible design of LLVM. It is the first time that the LLVM infrastructure has supported self-modifying code, and the first time profile-guided information has been used inside the LLVM JIT. Even though this project pioneered new territory for LLVM, it was in no way an academic exercise. To be successful, FTL must perform at least as well as non-FTL JavaScript engines in use today across a range of workloads without compromising reliability. This post describes the technical aspects of that accomplishment that relate to LLVM and future opportunities for LLVM to improve JIT compilation and the LLVM infrastructure overall.

JavaScript pages are ubiquitous and users expect fast load times, which WebKit's architecture is well suited for. However, some JavaScript applications require nontrivial computation and may run for periods longer than one hundred milliseconds. These applications demand aggressive compiler optimization and code generation tuned for the target CPU. FTL brings the full gamut of compiler technology to bear on the problem.

As with any high level language, high level optimizations must come first. Grafting an optimizing compiler backend onto an immature frontend would be futile. The marriage of WebKit's JIT with LLVM's optimizer and code generation works for two key reasons:

Before translating to LLVM IR, WebKit's optimizing JIT operates on an IR that clearly expresses JavaScript semantics. Through type inference and profile-driven speculation, WebKit removes as much of the JavaScript abstraction penalty as possible.

LLVM IR has now adopted features for supporting speculative, profile-driven optimization and avoiding the performance penalty associated with abstractions when they cannot be removed.

As a result, WebKit can engage the FTL on any long-running JavaScript method. In areas of the code dominated by abstraction overhead, FTL-compiled code is at least competitive with that of a custom JIT designed specifically for JavaScript. In areas of the code where WebKit can remove the abstraction penalty, FTL can achieve fantastic speedups.

Asm.js is a subset of JavaScript that avoids abstraction penalties, allowing JITs to directly benefit from low-level performance optimization. Consequently, the performance advantage of FTL is likely to be quite apparent on asm.js benchmarks. But although FTL performs well on asm.js, it is in no way customized to the standard. In fact, with FTL, regular JavaScript code written in a style similar to asm.js will derive the same benefits. Furthermore, as WebKit's high-level optimizations become even more advanced, the benefits of FTL will expand to a broader set of idiomatic JavaScript code.

A convenient way to measure the impact of LLVM optimizations on JavaScript code is by running C/C++ benchmarks that have been compiled to asm.js code via emscripten. This allows us to compare native C/C++ performance with WebKit's third tier (DFG) compiler and with WebKit FTL.

Figure 1: Time to run benchmarks from LLVM test-suite.

Figure 1 shows the time taken to run a handful of benchmarks from LLVM's own test-suite. The benchmark workloads have been adjusted to run for approximately one second. In every case, FTL achieves significant improvement over WebKit's non-LLVM JIT (DFG). In some cases, the emscripten compiled JavaScript code is already approaching native C performance, but in other cases FTL code still takes about twice as long as clang compiled C code[1]. One reason for the discrepancy between clang and FTL is the call overhead required for maintaining the JavaScript runtime's additional frame information. Another reason is that LLVM loop optimizations are not yet sophisticated enough to remove bounds and overflow checks and thus have not been enabled. These benchmarks are very tight loops, so a minor inefficiency, such as an extra compare or store in the loop, can result in a significant slowdown.

[1] gcc-loops is currently an outlier because clang performance recently sped up dramatically from auto-vectorization that has not been enabled yet in FTL.

WebKit's tiered architecture provides flexibility in balancing responsiveness, profile collection, and compiler optimization. The first tier is the low-level interpreter (LLInt). The second is the baseline JIT--a straightforward translation from JavaScript to machine code. WebKit's third tier is known as the Data Flow Graph (DFG) JIT. The DFG has its own high-level IR allowing it to perform aggressive JavaScript-specific optimization based on the profile data collected in earlier tiers. When running as a third tier, the DFG quickly emits code with additional profiling hooks. It may be invoked again as a fourth tier, but this time it produces LLVM IR for traditional compiler optimization.

We reuse most of the DFG phases. The new FTL pipeline is a drop-in replacement for the third-tier DFG backend. It involves additional JavaScript-aware optimizations over DFG SSA form, followed by a phase that lowers DFG IR to LLVM IR. We then invoke LLVM's optimization pipeline and LLVM's MCJIT backend to generate machine code.

The DFG JIT front end generates LLVM IR in a form that is amenable to the same optimizations traditionally performed with C code. The most notable differences are summarized in FTL-Style LLVM IR.

Figure 3. The FTL optimization pipeline after lowering to LLVM IR.

After lowering to LLVM IR, FTL applies a subset of mid-level optimizations that are currently the most important in JavaScript code. It then invokes the LLVM backend for the host architecture with full optimization. This optimizes the code for the target CPU using aggressive instruction selection, register allocation, and machine-specific optimization.

Patch points are the key LLVM feature that allows dynamic type checking, inline caching, and runtime safety checks without penalizing performance. In October, 2013, we submitted a proposal to amend LLVM IR with patch points to the LLVM developer list. Since then, we've successfully implemented patch points for multiple architectures and their performance impact has been validated for various use cases, including branch-to-fail safety checks, inline caches, and code invalidation points. The details of the current design are explained in the LLVM specification of stack map and patch point intrinsics.

Patch points are actually two features in one intrinsic. The first feature is the ability to identify the location of specific values at the intrinsic's final instruction address. During code emission, LLVM records that information as meta-data alongside the object code in what we call a "stack map". A stack map communicates to the runtime the location of important values. This is a slight misnomer given that locations may refer to register names. Typically, the runtime will read values out of stack map locations when it needs to reconstruct a stack frame. This commonly occurs during "deoptimization"--the process of replacing an FTL stack frame with a lower-tier frame.

The second feature of patch points is the ability of the runtime to patch the compiled code at a specific instruction address. To allow this, the intrinsic reserves a fixed amount of instruction encoding space and records the instruction address of that space along with the stack map. Because the runtime needs to know the location of values precisely at the point it patches code, the two features must be combined into one intrinsic.

Patch points are viewed by LLVM passes much like unknown call sites. An important aspect of their design is the ability to specify the effective calling convention. For example, code invalidation points are almost never taken and the call site should not clobber any registers, otherwise the register allocator could be severely restricted by frequent runtime checks. An optional feature of stack maps is the ability to record the registers that are actually live in compiled code at each call site. This way the JIT can declare a call as preserving all registers to maximize compiler freedom, but at the same time the runtime can avoid unnecessary save and restore operations when the "cold" call is actually taken.

To better support inline cache optimizations, LLVM now has a special "anyregcc" calling convention. This convention allows any number of arguments to be forced into registers without pinning down the name of the register. Consequently, the compiler does not have to place arguments in particular registers or stack locations, or emit extra copies and spills around call sites, and the runtime can emit efficient patched code sequences that operate directly on registers.

The current patch point design is labeled experimental so that it may continue to evolve without preserving bitcode compatibility. LLVM should soon be ready to adopt the patch point intrinsic in its final form. However, the current design should first be extended to capture the semantics of high level language runtime checks. See Extending Patchpoints.

FTL attempts to generate LLVM IR that closely resembles what the optimizer expects to see from other typical compiler frontends. Nonetheless, lowering JavaScript semantics into LLVM operations tends to result in IR with different characteristics from statically compiled C code. This section summarizes those differences. More details and examples will be provided in a subsequent blog post.

The prevalence of patch points in the IR means that values tend to have many more uses and can be live into a large number of patch point call sites. FTL emits patch points for a few distinct situations. First, when the FTL front end (DFG) fails to eliminate type checks or bounds checks, it emits explicit compare and branch operations in the IR. The branch target lands at a patch point intrinsic followed by unreachable. This can result in much more branchy code than LLVM typically handles with C benchmarks. Fortunately, LLVM's awareness of branch probability means that the branch-to-fail idiom does not excessively hinder optimization and code generation. Heap access and polymorphic calls also use patch points, but these are emitted directly inline with the hot path. This allows the runtime to implement inline caches with specific instruction sequences that can be patched as program behavior evolves. Finally, runtime calls may act as code invalidation points. A runtime event, such as a potential change in object layout, may invalidate speculatively optimized code. In this case WebKit emits nop patch points that can be overwritten with a separate runtime call at an invalidation event. This effectively invalidates all code that follows the original runtime call.

Some type checks result in multiple fast paths. For example, WebKit may check a numeric value for either a floating-point or fixed point representation and emit LLVM IR for both paths. This may result in a sequence of redundant checks interleaved with control flow merges.

To support integer overflow checks, when they cannot be removed through optimization, FTL emits llvm.sadd.with.overflow intrinsics in place of normal add instructions. These intrinsics ensure that the code generator produces an optimal code sequence for the overflow checks. They are also used by other active LLVM projects and are gradually gaining support within LLVM optimization passes.

LLVM heuristics are often sufficient to guess branch probability. However FTL makes the job easier by directly emitting LLVM branch weight meta-data based on profiling. This is particularly important when partially compiling a method starting at the inner loop. Such compilations can squash nested loops so that LLVM's heuristics can no longer infer the loop depth from the CFG structure.

FTL builds an internal model of the JavaScript program's type system determined by profiling. It conveys this information to LLVM via type-based-alias-analysis (tbaa) meta-data. In FTL tbaa, each object field has a unique tag. This is a very effective approach to memory disambiguation, and much simpler than the access-path scheme that clang now uses.

Another way that FTL deviates from the norm is in its use of inttoptr instructions. These are used to materialize addresses of runtime objects, including all data and code from outside the current compilation unit (currently a single method at a time). inttoptr is also used to convert an untyped JS value to a pointer. Occasionally, pointer arithmetic is performed on non-pointer types rather than using getelementptr instructions. This is primarily a convenience and has not proven to hinder optimization. FTL's use of tbaa is effective enough to obviate the need to analyze getelementptr when the base address is already an unknown object.

An important pattern in FTL's LLVM IR is repeated use of the same large constants, either masks used to disambiguate tagged values or global addresses that tend to lie at small offsets from each other. LLVM's one-basic-block-at-a-time code generation approach rematerialized the same large constant redundantly in each basic block, and the large number of basic blocks FTL creates exacerbated the problem further. The LLVM code generator has been enhanced to avoid this expensive repeated rematerialization of such constant values.

The FTL JIT successfully leverages LLVM's existing MCJIT framework for runtime compilation. MCJIT was designed as a low-level toolkit that allows runtime compilers to be built by reusing as much of the static compiler's machinery as possible. This approach improves maintainability on the LLVM side. It integrates with the existing compiler toolchain and allows developers to test features of the runtime compiler without understanding a particular JIT client. The current API, however, does not provide a simple out-of-the-box abstraction for portable JITs. Overcoming the impedance mismatch between WebKit goals and the low-level MCJIT API required close collaboration between WebKit and LLVM engineers. As LLVM becomes more important as a JIT platform, it should provide a more complete C API to improve interoperability with JIT clients and decrease the fragility and maintenance burden within the client code base.

Bridging the gap between LLVM internals and portable JITs can be accomplished by providing more convenience wrappers around the existing MCJIT framework and adding richer C APIs for object code parsing and introspection. Ideally, a cross-platform JIT client like WebKit should not need to embed target-specific details about LLVM code generation on the client side. The JIT should be able to request LLVM to emit code for the current host process without understanding LLVM's language of target triples and CPU features. LLVM could generally provide a more obvious C API for lazily invoking runtime compilation. Along these lines, a JIT should be able to reuse the MCJIT execution engine for multiple modules without the overhead of reinitializing pass manager instances each time. An API also needs to be added for configuring the code generation pass manager. Most of the coordination between the JIT and LLVM now occurs directly through a memory manager API, which can be awkward for the JIT client. For example, WebKit looks for platform-specific section names when allocating section memory in order to locate frame meta-data and debug information. A better interface for WebKit would be a portable API that communicates object code meta-data, including frame information and stack maps. In general, the JIT codebase should not need to provide its own support for platform-specific object file formats. LLVM already has this support, it only needs to be exposed through the C API. Similarly, a JIT should be able to lookup line numbers without implementing its own DWARF parser. An additional layer of functionality for general purpose debug info parsing and object code introspection would not be specific to JIT compilation and could benefit a variety of LLVM clients.

FTL illustrates an important use case for LLVM: embedding LLVM optimization and codegen libraries cleanly within a larger application running in the same process. The ideal solution is to build a set of LLVM components as a shared library that exports only a limited C API. Several problems have made this a challenging endeavor:

The dynamic link time initialization overhead of the static initializers that LLVM defines is unacceptable at program launch time - especially if only parts of the library or nothing at all are used.

As with static initializers, weak vtables introduce an unnecessary and unacceptable dynamic link time overhead.

In general only a limited set of methods - the LLVM API - should be exported from the shared library.

LLVM usurps process-level API calls like assert, raise, and abort.

The resulting size of the LLVM shared library naively built from static libraries is larger than it needs to be. Build logic and conditional compilation should be added to ensure that only the passes and platform support required by the JIT client are ultimately linked into the shared library.

The issues listed above have required clever engineering tricks to circumvent. These are the sort of tricks that hinder adoption of LLVM. Therefore it would be in the best interest of the LLVM community to cooperate on improving the infrastructure for embedding LLVM.

The LLVM optimizer and code generator are composed of generic, retargetable components designed to generate optimal code across an extremely diverse range of platforms. The compile time cost of this infrastructure is substantial and may be an order of magnitude greater than that of a custom-built JIT. Fortunately, WebKit's architecture for concurrent, tiered compilation largely sidesteps this penalty. Nonetheless, there is considerable opportunity to reengineer LLVM for use as a JIT, which will decrease FTL's CPU consumption and increase the breadth of JavaScript applications that benefit from FTL.

When running in a JIT environment, an opportunity exists for LLVM to strike a better balance between compile time and optimization strength. To this end, an alternate "compile-fast" optimization pass pipeline should be standardized so that the LLVM community can work together to maintain an ideal sequence of lighter-weight passes. Long running, iterative IR optimization passes, such as GVN, should be adapted to optionally run in fewer iterations. Hodge-podge passes like InstCombine that run many times should be optionally broken up so that some subset of functionality can run at different times: for example, canonicalize first and optimize later.

There are also considerable opportunities for improving code generation efficiency which will benefit JITs and static compilers alike. LLVM machine IR should be generated directly from LLVM IR without generating a Selection DAG, as proposed by Jakob Olesen in his Proposal for a global instruction selector. The benefit of this improvement would be considerable and widespread. More specific to high level languages, codegen passes should be tuned to handle branchy code more efficiently. For example, the register allocator can be taught to skip expensive analysis at points in the code where branches are not expected to be executed.

One overhead that will remain with the above improvements is simply the cost of bridging WebKit's DFG IR into LLVM IR. This involves lowering to SSA form and constructing LLVM instructions, which currently takes a significant amount of time relative to DFG's non-LLVM codegen path. With some scrutiny, this could likely be made more efficient.

Without incurring a significant compile-time increase, LLVM optimizations can be further improved to handle prevalent idioms in JavaScript programs. One straightforward LLVM IR enhancement would be to associate type-based alias information with call sites. This would improve redundant instruction elimination across runtime calls and patch points. Another area of improvement would be better handling of branch-and-merge idioms. These are quite common in FTL-produced IR and can be improved through CFG simplification, jump threading, or tail duplication. With careful pass pipeline management, loop optimizations can be enabled, such as auto-vectorization. Once LLVM is analyzing loops, bounds and overflow check elimination optimization can also be implemented. To do this well, patch points will need to be extended with new semantics.

In settings like JavaScript and other high level languages, patch points will be used to transfer control to the runtime when speculative optimization fails in the sense that the program behaves differently than predicted. It is always safe to assume a misprediction and give control back to the runtime because the runtime always knows how to recover. Consequently, patch points could optionally be associated with a check condition and given the following semantics: the patch point code sequence must be executed whenever the condition holds, but may safely be executed at its current location under any superset of the condition. When combined with LLVM loop optimization, the conditional patch point semantics would allow powerful optimization of runtime checks. In particular, bounds and overflow checks could be safely hoisted outside loops. For example, the following simplified IR:

  %c = cmp <TrapConditionC> // where C implies both A and B
  @patchpoint(1, %c, <state-before-loop>)
  Loop:
    do something...

Note that the first patch point operand is an identifier that tells the runtime the program location of the intrinsic, allowing it to find the correct stack map record for the program state at that location. After the above optimization, not only does LLVM avoid performing repeated checks within the loop, but it also avoids maintaining additional runtime state throughout the loop body.

Generally, high level optimization requiring knowledge of language-specific semantics is best performed on a higher level IR. But in this case, extending LLVM with one aspect of high level semantics allows LLVM's loop and expression analysis to be directly leveraged and naturally extended into a new class of optimization.

WebKit's FTL JIT already shows considerable value in improving JavaScript performance, demonstrating LLVM's remarkable success as a backend for a JavaScript JIT compiler. The FTL project highlights the value of further improving LLVM's JIT infrastructure and reveals several exciting opportunities: improved efficiency of optimization passes and codegen, optimizations targeted toward common idioms present in high level languages, enabling more aggressive standard optimizations like vectorization, and extending and formalizing patch point intrinsics. Realizing these goals will require the continued support of the LLVM community and will advance and improve the LLVM project as a whole.

July 14, 2014

Welcome to the twenty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

I'll be at the GNU Tools Cauldron 2014 next weekend, being held at the University of Cambridge Computer Laboratory (which handily is also where I work). If you're there, do say hi.

LLVM commits

FastISel gained some infrastructure to support a target-independent call lowering hook as well as target-independent lowering for the patchpoint intrinsic. r212848, r212849.

DominanceFrontier has been templatified, so in theory it can now be used for MachineBasicBlocks (where previously it was only usable with BasicBlocks). r212885.

The quality of results for CallSite vs CallSite BasicAA queries has been improved by making use of knowledge about certain intrinsics such as memcpy and memset. r212572.

Work on overhauling x86 vector lowering continues. Chandler now reports that with the new codepath enabled, LLVM is now at performance parity with GCC for the core C loops of the x264 code when compiling for SSE2/SSE3. r212610.

ASM instrumentation for AddressSanitizer is now generated entirely in MachineCode, without relying on runtime helper functions. r212455.

Generation of the new mips.abiflags section was added to the MIPS backend. r212519.

isDereferenceablePointer will now look through some bitcasts. r212686.

Clang commits

A new checker was added to flag code that tests a variable for 0 after using it as a denominator (implying a potential division by zero). r212731.

July 09, 2014

Welcome to the twenty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Facebook have released a number of clang plugins they have been using internally. This includes plugins to the clang analyzer, primarily for iOS development, as well as a plugin to export the Clang AST to JSON. The code is available on GitHub and they have started a discussion on the mailing list about moving some of this code upstream.

This week saw the release of LLVM and Clang 3.4.2. This is a bug-fix release which maintains ABI and API compatibility with 3.4.1.

On the mailing lists

Rafael Espíndola has started a thread to discuss clarification on the backward compatibility promises of LLVM. He summarises what seems to be the current policy (old .bc is upgraded upon read, there is no strong guarantee on .ll compatibility). Much of the subsequent discussion is about issues such as compatibility with metadata format changes.

Duncan P.N. Exon Smith has posted a review of the new pass manager in its current form. He starts with a high-level overview of what Chandler Carruth's new PassManager infrastructure offers and has a list of queries and concerns. There are no responses yet, but it's worth keeping your eyes on this thread if you're interested in LLVM internals development.

LLVM commits

The LLVM global lock is dead, and the LLVM Programmer's Manual has been updated to reflect this. llvm_start_multithreaded and llvm_stop_multithreaded have been removed. r211277, r211287.

The patchset to improve MergeFunctions performance from O(N×N) to O(N log N) has finally been completely merged. r211437, r211445 and more.

Range metadata can now be attached to call and invoke (previously it could only be attached to load). r211281.

ConvertUTF in the Support library was modified to find the maximal subpart of an ill-formed UTF-8 sequence. r211015.

LoopUnrollPass will now respect loop unrolling hints in metadata. r211076.

The R600 backend has been updated to make use of LDS (Local Data Share) and vectors for private memory. r211110.

X86FastISel continues to improve with optimisation for predicates, cmp folding, and support for 64-bit absolute relocations. r211126, r211130.

The SLPVectorizer (superword-level parallelism) will now recognize and vectorize non-SIMD instruction patterns like sequences of fadd/fsub or add/sub. These will be vectorized as vector shuffles if they are profitable. r211339.

July 08, 2014

One of the lesser-known features of C++11 is the fact that you can overload your non-static member functions based on whether the implicit this object parameter is an lvalue reference or an rvalue reference by specifying a function's ref-qualifier. This feature works similarly to the way cv-qualifiers work when specifying that a method must be called on a const or volatile object, and can in fact be combined with cv-qualifiers.
To specify a ref-qualifier for a member function, you can either qualify the function with & or &&. (The ref-qualifier must come after any cv-qualifiers.) For instance, if you wanted to declare a function to be called on an rvalue reference object only, you would write:

When executed, this code will output: Copy Move Move Move. The Copy is because b is an lvalue, not an rvalue, and so operator()() & will be called. However, the results of that function are an rvalue, and so the subsequent subexpressions will result in calling operator()() &&. Due to this, resources can be stolen from one invocation to the next on the last three subexpressions, reducing the performance penalties of a copy operation.

In case you are wondering why std::move(*this) is used when constructing a Builder object: the unary expression *this always results in an lvalue, which would end up calling the copy constructor instead of the move constructor. So the std::move call is required to convert the lvalue into an rvalue.

Ref-qualifiers are not something you will likely use often. However, it is never a bad thing to understand the tools the programming language has to offer. Note: ref-qualifiers are currently supported by clang (tested with 3.4), gcc (tested with 4.9) but not MSVC 2013.

It’s time for an update on Clang’s support for building native Windows programs, compatible with Visual C++! We’ve been working hard over the last few months and have improved the toolchain in a variety of ways. All C++ features aside from debug info and exceptions should work well. This link provides more specific details. In February we reached an exciting milestone: we can self-host Clang and LLVM using clang-cl (without fallback), and both projects pass all of their tests! Additionally, both Chrome and Firefox now compile successfully with fallback! Here are some of the highlights of recent improvements:

Microsoft-compatible record layout is done! It’s been thoroughly fuzz tested and supports all Microsoft-specific components such as virtual base table pointers, vtordisps, __declspec(align) and #pragma pack. This turned out to be a major effort due to subtle interactions between various features. For example, __declspec(align) and #pragma pack each behave analogously to their gcc variants, but interact with each other differently. Each version of Visual Studio changes the ABI slightly. As of today clang-cl is layout compatible with VS2013.

Clang now supports all of the calling conventions used up to VS2012. VS2013 added some new ones that we haven’t implemented yet. One of the other major compatibility challenges we overcame was passing C++ objects by value on 32-bit x86. Prior to this effort, LLVM modeled all outgoing arguments as SSA values, making it impossible to take the address of an argument to a call. It turns out that on Windows C++ objects passed by value are constructed directly into the argument memory used for the function call. Achieving 100% compatibility in this area required making fundamental changes to LLVM IR to allow us to compute this address.

Most recently support for run time type information (RTTI) was completed. With RTTI support, a larger set of programs and libraries (for example ICU) compile without fallback and dynamic_cast and typeid both work. RTTI support also brings along support for std::function. We also recently added support for lambdas so you can enjoy all of the C++11 functional goodness!

July 07, 2014

Welcome to the twenty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

An LLVM code generator has been merged into the MLton whole-program optimizing compiler for Standard ML. This was written by Brian Leibig as part of his Master's thesis, which contains more information on its performance and design.

Clang's Microsoft Visual C++ compatibility page has been updated to reflect the status of the current SVN trunk. As can be seen from the relevant diff, record layout has been marked complete along with RTTI. Lambdas are now marked mostly complete.

Pavel Chupin has written to the list on behalf of Intel to get feedback on upstreaming support for the x32 ABI. As you might expect, people are in favour of the idea. The NativeClient team are also interested, particularly as NaCl's x86-64 ABI is fairly similar to x32.

Sunil Srivastava has shared a proposal for an ABI test suite for Clang. There is wide support for Sony submitting the implementation for code review. A later response clarifies that of the 400 test files, about 20% are hand-written and the rest come from the test case generator.

LLVM commits

The X86 backend now expands atomics in IR instead of as MachineInstrs. Doing the expansions at the IR level results in shorter code and potentially there may be benefit from other IR passes being able to run on the expanded atomics. r212119.

July 01, 2014

We have created a fun infographic on the history of the OpenMP standard which has been published in the Intel Parallel Universe (pdf). The folks over at OpenMP.org liked it so much it’s currently their headline news. We now understand why “a picture is worth a thousand words”, since this took as much effort as writing 5,000!

June 30, 2014

Welcome to the twenty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

Trail of Bits have posted a preview of McSema, a framework for translating x86 binaries to LLVM bitcode. The accompanying talk took place on the 28th June, so hopefully we'll hear more about this soon. The blog post tells us that McSema will be open source and made available soon.

LLVM commits

A significant overhaul of how vector lowering is done in the x86 backend has been started. While it's under development it's off by default, though it's hoped that in time there will be measurable performance improvements on benchmarks conducive to vectorization. r211888 and more.

X86 FastISel will use EFLAGS directly when lowering select instructions if the condition comes from a compare. It also now supports floating-point selects among other improvements. r211543, r211544, and more.

ScaledNumber has been split out from BlockFrequencyInfo into the Support library. r211562.

The loop vectorizer now features -Rpass-missed and -Rpass-analysis reports. r211721.

The developer documentation has been updated to clarify that although you can use Phabricator to submit code for review, you should also ensure the relevant -commits mailing list is added as a subscriber on the review and be prepared to respond to comments there. r211731.

June 23, 2014

In C++, there are two forms of binary operator overloading you can use when designing an API. The first form is to overload the operator as a member function of the class, and the second form is to overload the operator as a friend function of the class. I want to explore why you would use one form of overloading instead of the other, using a Fraction class as an example.
For the purposes of this discussion, this is part of the interface for our expository class.

One of the ways we can implement our binary operator overloads is as member functions of the Fraction class. I’m going to pick on the equality operator, but any of the overloaded binary operators would suffice.

Since there are two different ways to implement this, it’s reasonable to ask which way is “correct?” The answer to that question depends on your intentions as a class designer. Consider the following use case:

Some coding conventions suggest that equality comparisons against a constant value put the constant on the left-hand side of the comparison (so that an accidental assignment operation by typing = instead of == would trigger a compile error), so this example is not particularly far-fetched.

If you use a member function for the operator overload, this code would not compile because there’s no way for the implicit converting constructor from double to Fraction to be called. However, by using a friend function for the operator overload, the compiler can call the converting constructor to create a Fraction object which would make the comparison viable. Because of this, I would claim that declaring the operators to be friends is the correct approach for the class design.

This exemplifies a reasonable way to decide how to implement the overloaded binary operators. If you want to allow implicit conversions for items on the left-hand side of the operator, then using friend function overloads is required. If implicit conversions are not desirable for some reason, or not possible (due to having no implicit converting constructors), then using a member function is acceptable. If you’re looking for a general rule of thumb, I would recommend always using the friend function form — it’s more likely to behave how the user would expect in all cases, instead of having curious edge cases where their usage fails. Imagine how confusing it would be for the user of a Fraction class that SomeFraction * 1 succeeds, but 1 * SomeFraction fails to compile! That being said, it ultimately boils down to a design choice that you must make as a class designer.

I would like to thank Jens Maurer for the design discussion which spawned this blog posting.

Welcome to the twenty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

LLVM commits

A weak variant of cmpxchg has been added to the LLVM IR, as has been argued for on the mailing list. Weak cmpxchg allows failure and the operation returns {iN, i1} (in fact, for uniformity all cmpxchg instructions do this now). According to the commit message, this change will mean legacy assembly IR files will be invalid but legacy bitcode files will be upgraded during read. r210903.

X86 FastISel gained support for handling a bunch more intrinsics. r210709, r210720 and more. FastISel also saw some target-independent improvements. r210742.

This week there were many updates to the MIPS backend for mips32r6/mips64r6. e.g. r210899, r210784 and many more.

NoSignedWrap, NoUnsignedWrap and Exact flags are now exposed to the SelectionDAG. r210467.

Support has been added for variable length arrays on the Windows on ARM Itanium ABI. r201489.

Some simple reordering of fields in Value and User saves 8 bytes of padding on 64-bit. r210501.

FastISel will now collect statistics on when it fails with intrinsics. r210556.

The MIPS backend gained support for jr.hb and jalr.hb (jump register with hazard barrier, jump and link register with hazard barrier). r210654.

June 12, 2014

The late safepoint placement pass we released recently has a couple of restrictions on the IR it can handle. I’ve described those restrictions a couple of different times now, so I figured it was time to put them up somewhere I could reference and that Google might find. A shorter version of this post will also appear in the source code shortly.

The SafepointPlacementPass will insert safepoint polls for method entry and loop backedges. It will also transform calls to non-leaf functions to statepoints. The former are how the application (mutator) code interacts with the garbage collector and may actually trigger object relocation. The latter are necessary so that polls in called functions can inspect and modify frames further up the stack.

The current SafepointPlacementPass works for nearly arbitrary IR. Fundamentally, we require that:

Pointer values may not be cast to integers and back.

Pointers to garbage collected objects must be tagged with address space #1.

In addition to these fundamental limitations, we currently do not support:

safepoints at invokes (as opposed to calls)

use of indirectbr

aggregate types which contain pointers to GC objects

pointers to GC objects stored in global variables, allocas, or at constant addresses

constant pointers to garbage collected objects (other than null)

garbage collected pointers which are undefined (“undef”)

use of gc_root

Patches welcome for the latter class of items. I don’t know of any fundamental reasons they couldn’t be supported.

Fundamentally, a precise garbage collector must be able to accurately identify which values are pointers to garbage collected objects. We choose to use the distinction between pointer types and non-pointer types in the IR to establish that a particular value is a pointer and use the address space mechanism to distinguish between pointers to garbage collected and non-garbage collected objects. We don’t require that the types of pointers be precise – in LLVM this would not be a safe assumption! – but we do require that the pointer be a pointer.

We disallow inttoptr and addrspacecast instructions in an effort to ensure this distinction is upheld. Otherwise, you could have code like the following:

Note that while the SafepointPlacementPass will try to check for some violations of this assumption, it will not catch all cases. At the end of the day, it is the responsibility of the frontend author to get this right.

Now on to the various implementation restrictions.

We plan to support safepoints on InvokeInsts. In fact, the released code already has partial support for this. This is not a high priority for us at the moment, but should be fairly straightforward to complete if anyone is interested.

IndirectBr creates problems for the LoopSimplify pass which we use as a helper for identifying backedges in loops. Our source language doesn’t have any need for indirect branches, but if anyone can identify a better way to detect backedges which doesn’t involve this restriction, we’d gladly take the patch.

Currently, we do not support finding pointers to garbage collected objects contained in first class aggregate types in the IR. The extensions required to support this are fairly straightforward, but we have no need for this functionality. Well structured patches are welcome, but since this will be a fairly invasive change, please coordinate the merge early and closely. (Alternatively, wait until this has been merged into upstream LLVM and use the standard incremental review and commit process.)

Note that we have no plans to support untagged unions containing pointers. We could support tagged pointers, but this would require either extensions to the IR, or language specific hooks exposed in the SafepointPlacementPass. If you’re interested in this topic, please contact me directly.

The support for pointers to GC objects in global variables, allocas, or arbitrary constant memory locations is weak at best. There’s some code intended to support these cases, but tests are lacking and the code is likely to be buggy. Patches are welcome.

We do not support constant pointers to garbage collected objects other than null. For a relocating garbage collector, such constant pointers wouldn’t make sense. If you’re interested in supporting non-relocating collectors or relocating collectors with pinned objects, some extensions may be necessary.

We have not integrated the late safepoint placement approach with the existing gcroot mechanism. Given this mechanism is simply broken, we do not plan to do so. Instead, we plan to simply remove that support once late safepoint placement lands. If you’re interested in migrating from one approach to the other, please contact me directly. I’ve got some ideas on how to make this easy using custom transform passes, but don’t plan on investing any time in this unless requested by interesting parties.

June 09, 2014

Welcome to the twenty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

Philip Reames has announced that code for late safepoint placement is now available. This is a set of patches to LLVM from Azul Systems that aim to support precise relocating garbage collection in LLVM. Philip has a long list of questions where he is seeking feedback from the community on how to move forwards with these patches. There's not been much response so far; hopefully it will come soon, as I know there are many communities who are very interested in seeing better GC support in LLVM (e.g. Rust, OCaml).

LunarG have announced the Glassy Mesa project. This project, funded by Valve, will explore increasing game performance in Mesa through improvements in the shader compiler. The current parser and optimisation layer are replaced with glslang and the LLVM-based LunarGlass. More technical details are available in the slide deck.

Sébastien Métrot has released xspray, a frontend for lldb on OS X. One of its interesting features is the inbuilt support for plotting your data.

On the mailing lists

Zachary Turner has started a discussion on multi-threading and mutexes in LLVM, following on from his patches (currently in review) that try to replace LLVM's own mutex implementation with std::mutex and std::recursive_mutex. The key questions are whether multi-threading should be a compile-time or runtime parameter, what should happen if you attempt to acquire a mutex in an app with threading disabled, and whether debugging code for finding deadlocks should be included.

LLVM commits

The jumptable attribute has been introduced. If you mark a function with this attribute, references to it can be rewritten with a reference to the appropriate jump-instruction-table function pointer. r210280.

GlobalAlias can now point to an arbitrary ConstantExpression. See the commit message for a discussion of the consequences of this. r210062.

The superword-level parallelism (SLP) vectorizer has been extended to support vectorization of getelementptr expressions. r210342.

The LLVM programmer's manual has been improved with an example of using IRBuilder. r210354.

Clang commits

Semantic analysis to make sure a loop is in OpenMP canonical form has been committed. r210095.

__builtin_operator_new and __builtin_operator_delete have been added. Some optimisations are allowed on these which would not be on ::operator new and are intended for the implementation of things like std::allocator. r210137.

New pragmas have been introduced to give optimisation hints for vectorization and interleaving. You can now use #pragma clang loop vectorize(enable) as well as vectorize(disable), vectorize_width(n), interleave(enable/disable), and interleave_count(n). r210330.

Support for the MSVC++ ABI continues with the addition of dynamic_cast for MS. r210377.

Support for global named registers has been expanded slightly to allow pointer types to be held in these variables. r210274.

Welcome to the twenty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

News and articles from around the web

David Given has shared his partially complete backend for the VideoCore IV VPU as used in the BCM2835 in the Raspberry Pi. It would also be interesting to see a QPU LLVM backend now it has been publicly documented.

Tartan is a Clang analysis plugin for GLib and GNOME. To quote its homepage "The plugin works by loading gobject-introspection metadata for all functions it encounters (both functions exported by the code being compiled, if it is a library; and functions called by it). This metadata is then used to add compiler attributes to the code, such as non-NULL attributes, which the compiler can then use for static analysis and emitting extra compiler warnings."

LLVM commits

A LoadCombine pass was added, though it is disabled by default for now. r209791.

AAPCS-VFP has been taught to deal with Cortex-M4 (which only has single precision floating point). r209650.

InstructionCombining gained support for combining GEPs across PHI nodes. r209843.

Vectorization of intrinsics such as powi, cttz and ctlz is now allowed. r209873.

MIPS64 long branch has been optimised to be 3 instructions smaller. r209678.

Clang commits

OpenMP implementation continues. Parsing and Sema have been implemented for OMPAlignedClause. r209816.

The -Rpass-missed and -Rpass-analysis flags have been added. pass-missed is used by optimizers to inform the user when they tried to apply an optimisation but couldn't, while pass-analysis is used to report analysis results back to the user. A followup commit documents the family of flags. r209839, r209841.