Introduction

The end-game of the Compile Project is to deprecate (some of) GDB's built-in language parsers, permitting developers to type (relatively) arbitrary code into GDB, have GCC compile the code snippet, and then run the compiled code in the inferior debug process.

As envisioned, the project was split into two distinct sub-projects: one covering the “smaller” C-language compiler plug-in and corresponding debugger support, and one covering C++.

The initial C-language endeavor would also establish the basic infrastructure for future development efforts.

For questions about the project, please email gdb@sourceware.org with the subject prefix, “[Compile]”, e.g., “[Compile] When is C++ going to be ready?”

Bugs may be filed in sourceware Bugzilla under the product “gdb” and the “compile” component.

Status

The C-language compile sub-project was officially released with GDB 7.9 and GCC 5.1.0. There has been minimal development effort expended on C support since the first public release. If you notice any problems, please either contact the GDB mailing list or file a bug in GDB's Bugzilla database under the “compile” component.

The C++-language sub-project is currently under development, but working branches for both GCC and GDB are publicly available in the gcc.gnu.org and sourceware.org Git repositories.

These branches are typically kept in good order, but obviously no guarantees can be made about stability or suitability for any particular purpose. This is an extremely experimental project and is likely to change drastically and/or frequently. [Note, though, that at least one developer does use both branches for all day-to-day development activities.]

GCC repository: git://gcc.gnu.org/gcc.git
branch: origin/aoliva/libcp1

GDB repository: git://sourceware.org/binutils-gdb.git
branch: origin/users/pmuldoon/c++compile

History and Genesis

The compile and execute project originated from the desire to reuse parts of the GCC language parser in GDB. Language parsing is a complex and involved task. In C++ it is a fiendishly difficult problem, especially when parsing templates. Over the last few years many C++ parsing bugs have been found in GDB, most of which are not easily solved. This accumulation of bugs, many of which persist to this day, has made for a less than ideal experience for GDB users.

Several previous projects have tried to solve this by reusing the functionality of GCC's language parsers; the idea is not new. The difficulty has always been accessibility to those parsers. Though these projects achieved varying degrees of success and failure, the idea remains that this is the right way forward. GCC has, after all, presumably already parsed and compiled the source at some point in the past. If we take GCC as an example of a “best in class” parser, the conclusion remains that reusing GCC's parser functionality in as many places as possible is the right and correct thing to do. It frees GDB developers from maintaining an identical (and arguably inferior) set of parsers in their own code-base, and allows the right people to focus on the right problem in GCC.

Given the statements above, what are the issues that stymied the other projects? Broadly speaking, all of them ran into two fundamental issues:

They needed access to the program's source code.

Recompiling that source took too long.

Most other projects relied on recompiling an expression within the original source code to produce contextually relevant debug info for the expression. GDB could then examine this new debug information and derive the expression's meaning from it. The problem this approach inevitably faces is that source code is not always available, and compiling and linking the whole program takes too long. Even with a modestly sized project, the compile-and-link delay makes for a poor user experience. These two obstacles hampered previous efforts, and, to this day, GDB still relies on its own internal language parsers to examine an expression.

That being said, GDB's internal parsers get a lot of things right. Alas, they also get a lot of things wrong. GDB has to parse the expression “by hand”, figure out what is a symbol, what the types are, what is an inferior function call, and so on. It also has to deal with various versions of languages. If GDB were to work around a C++ bug exposed in GCC, what then of newer GCC versions that fix that bug? In short, it can't work around these bugs, because GCC changes (hopefully for the better!). There is an argument that constant investment in maintaining the parsers would keep them in equilibrium with GCC. This author's counter is that there are just not enough people in GDB with the expert knowledge to maintain all of the parsers (and there are many). And the people who do, brave souls and explorers that they are, inevitably never have enough time to do so.

So the argument has been set out for this project's approach. In summary, having engineers reverse engineer a complex and evolving set of languages to build a “consumer” parser is always going to fall behind the compiler's curve of development. New features, new bugs, and new debug-info standards all mean that GDB will always, in the absolute best case, be one release behind GCC's parser. The only tool available to GDB's parser is the somewhat murky blob of information known as debug info. It is an imprecise source of information to work with, subject to producer bugs and regressions. GCC generates its own trees, and its internal meta-data is far richer than debug info.

Lest one think this is all just rhetoric, examine this bug encountered the other day:

(gdb) p it
$1 = 1
(gdb) p it+1
Attempt to take address of value not located in memory.

While there may be many internal technical reasons why that would not work in GDB, to the user, who just wants the iterator incremented, it is a GDB failure. This would not have happened if the expression had been parsed by GCC. GCC knows what it is, and would know the correct resolution of it+1. It compiles code like this every day.

Project Approach

Enough about project justification and ideology. Let's look at our chosen methodology and how we set out to achieve the project's goals.

Above there was a brief discussion of previous projects in this area. The idea was sound, and this project approaches the problem in a similar manner to those previous projects. We aim to use GCC to construct the meaning or value of a given expression by asking GCC to compile something, so that GCC does the heavy lifting and GDB interprets the result. All GDB has to do is execute the produced object code, cache the return value and (in some cases) print it. Sounds simple? But the project still has to overcome the issues encountered before: no access to inferior source, and compile/link time.

The approach we used to address these issues was to write a set of functions called “Oracles”. Oracles are just data-brokers between a running GCC and a running GDB; they enable two-way information exchange between the two. Put simply, an Oracle answers GCC's queries with information provided by GDB, and vice versa. In return for these answers, GCC can compile the expression and produce object code without referring to the original source code. If GCC needs to know what a symbol means in an expression contained in the inferior, it can just ask GDB for the data. One way of looking at this project is as a collection of Oracles and Marshallers that coordinate the flow of information between GDB and GCC. In this sense, GDB provides information about the inferior it knows about, and GCC provides information relating to errors or the resultant object code. So GCC does not need the inferior source (GDB provides the answers), nor does it need to recompile the inferior source – just the expression.

But before we delve into the technical innards of this project, let's look at it from a functionality point of view.

A First Example

Here is a simple piece of example code. It has been compiled, loaded into GDB and stopped at a breakpoint:

I want to create a new variable. An integer. This has not been previously defined either in GCC or GDB.

I want to compute some value and assign the value to the location of this new variable.

I want to assign the value of this new variable to an existing variable in the program above. While the program is running, without recompiling the code or stopping execution in any way.

This raises some interesting questions. GCC does not know anything at this point; GDB has started it up and it is sitting idle. GDB knows the type and location of all the inferior's variables (in this case, i, c, and f). At this point GDB is also sitting idle, waiting on user input. It does not know that I want to create a new inferior variable; I've not typed anything into GDB yet. Neither GCC nor GDB knows that this new variable's value will be computed somehow, and that the value will be assigned to an existing variable.

So how do we do this? We write some brand new C code to tell GDB and GCC what to do. Currently the project provides a command called “compile code”. So let's see what happens in GDB.

(gdb) compile code int z = 5; c = z;

This example creates z, a new variable, and just assigns a value to it. It then assigns the value of z to c.

The problems highlighted above are:

GDB knows about c, but GCC has no idea what c is. When we pass this code snippet to it, c will be undefined, and without the use of the Oracles GCC would emit a syntax error.

Remember, we don't have access to the source code! So no cheating that way.

Neither GDB nor GCC knows about z. At the moment it is just a bunch of text on the screen.

Before we investigate how it happens, let's look at the result of the command. When you type the above into a GDB with this patch-set, what happens? Well, nothing! That is, nothing is printed, but there is plenty going on behind the scenes – the code is compiled and executed. So if you were to type:

(gdb) print c
$1 = 5

The variable c is now equal to z, which is 5. (Previously, c was 12.)

Internals of the example

We have the result. It works (phew!). How did that happen?

The first thing GDB does is to annotate the code so it will be in a format that GCC can recognize.

The process of annotation falls into several discrete steps. They are enumerated below. (Note, this is for the C language. Other languages may have different steps or operations.)

Enumerate and include macros that are in scope

The code the user writes and wishes to have compiled and injected might refer to a macro. GDB will enumerate all of the macros it knows about and select the ones that are in the current scope. The scope referred to here is the place where the inferior is stopped by GDB – that's where the compiled code will be injected. GDB (with this project) makes no differentiation regarding whether a macro is actually used or not. If the macro is in the current scope, it is included unconditionally in the annotated output.
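For illustration, the macro preamble GDB emits might contain entries like the following. This is a hypothetical excerpt; the actual list is taken from the debug info of the program and its system headers, and varies with the stop location:

```c
/* Hypothetical excerpt of GDB's enumerated macro preamble.  */
#define SEEK_SET 0
#define SEEK_CUR 1
#define SEEK_END 2
#define BUFSIZ 8192
#define EOF (-1)
```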

And so on. In most programs there are dozens and dozens of system-defined macros, so this section can be quite lengthy.

Generate callable scope

Currently the project does not “patch in” the newly compiled bytes at the program counter. That might be a viable solution in the future, perhaps as we explore other avenues for this project (like “fast” breakpoints). Instead, the project utilizes GDB's ability to make inferior function calls (basically, call a function out of sequence without affecting the current execution context of the inferior). For that we need to generate a unique callable scope; GDB can then “know” the function name to call when it prepares to execute the snippet. We call this inserted scope a “code header”. Currently there is only one code header, for C. Other languages will need their own code headers, and perhaps other code headers will be needed for other types of functionality. The C code header looks like this:
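The original listing appears to have been lost from this copy. Based on what GDB's C support generates (see compile-c-support.c in the GDB sources), the header is essentially the following; the exact text may differ between versions:

```c
void
_gdb_expr (struct __gdb_regs *__regs)
{
```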

The function is called “_gdb_expr”. This is what GDB will call when it comes time to execute the code snippet. It takes one parameter, an auto-generated C struct that contains register name and value pairings. The use for this structure, and why we need these registers, will be explored in the next section. There are disadvantages to this approach, however. It relies upon the host language's ability to access such low-level details. Java, for example, might have some difficulty with this if someone were to write an extension to this project for GCJ-compiled Java. In all cases so far, though, we have managed to find a workaround (for Java one could use CNI/JNI).

Generate locals location

If you read the above section about a callable scope, you would know that GDB actually wraps the code snippet in its own callable scope. The reasons why are explained above. But that creates a problem GDB now has to solve. We want the code to act as if it is running in the current scope of the inferior where it has been stopped by GDB. In our example above we are stopped in the main function. It has three local variables: i, c, and f. But as the snippet is being executed in its own auto-generated scope, accessing those local variables becomes problematic. The snippet's callable scope will have its own stack frame, and those variables will not be found within it. We experimented with copying the stack and other approaches, but they all fell short of our project guidelines: permanence and non-involvement. Permanence means that if a user assigns to a variable in the local inferior frame's stack (say, in this example, c), that local should keep that value even after the snippet has stopped executing. The other factor, non-involvement, means we like to tread lightly in the inferior, and wholesale copying of stacks or register manipulation in the inferior is not, as it were, treading lightly.

The solution we came up with and implemented was to calculate the location of each of the inferior's locals in the current stack, and map “shadow” locals to those. “Shadow” is quoted because it is an often-overloaded term in computer science – don't attribute any of those other meanings here: a shadowed local in this context is a local in our snippet that “points to” another local. Writing to a shadowed local alters the value of the local variable it points to.

To do this we had to write a compile-loc2{language}.{ext} file. Other languages will need their own implementations. This part of the project enumerates the locals and annotates them onto the code header detailed above. Here is an example for one of the variables (for brevity, just one – the rest look similar). Note this code is auto-generated, may change, and, in order not to interfere with the user's snippet, has to use obscured variable names.
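The generated listing is missing from this copy, so here is a hypothetical, heavily simplified model of the shadow-local setup it describes. GDB's real output is derived from DWARF location data and uses obscured names; here the “frame base” is faked (it points straight at a stand-in variable) so the sketch can actually run:

```c
#include <stdint.h>

struct __gdb_regs { uint64_t rbp; };   /* register name/value pairing */

static int simulated_i = 7;            /* stands in for the inferior's 'i' */

void
_gdb_expr (struct __gdb_regs *__regs)
{
  void *i_ptr;                         /* the shadow local for 'i' */
  {
    uint64_t __frame_base = __regs->rbp;              /* frame base from RBP */
    i_ptr = (void *) (uintptr_t) (__frame_base + 0);  /* + offset of 'i' */
  }
  /* A user snippet writing "i = 5;" becomes a write through i_ptr.  */
  *(int *) i_ptr = 5;
}
```

In the real generated code the offset added to the frame base comes from the DWARF location expression for the local; writing through the shadow pointer updates the inferior's variable in place, which is what gives the snippet its “permanence”.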

In this example, GDB is calculating the location of the i local variable. In the callable-scope section, we saw that the function is passed a register struct of name and value pairings. In the auto-generated code above, you can see why GDB needs those registers. In this case, GDB has identified that it needs the RBP register value to calculate the stack offset in the current context of the inferior. Other registers may be needed too, and GDB will add whatever it needs to the struct as it enumerates the local variables.

For this local variable, i, GDB calculates the base of the frame, and the location of the stack in that frame. GDB then adds the offset of the local in the stack, and assigns that location to the automatically generated shadow local (in this case, i_ptr). Beyond the location, GDB assigns no type information at this point. Type information will come later, when the code is compiled by GCC. Here GDB uses the utility “void” type, which was a matter of convenience for the implementation of the C language. If you are writing your own language adaptation you may have to deal with types more explicitly.

So for each local in scope, GDB generates a snippet largely similar to the one above. These variables are annotated onto the “_gdb_expr” function that was previously generated.

Generate #pragma and insert snippet

Once all of the local shadowing is complete, we finally get to insert the user's actual code snippet. But first we need to mark in the generated output the delimiter between GDB-generated code and what the user actually wrote. For that we use a #pragma directive. This is a hint to the GCC plug-in. With this pragma, GCC knows which variables to process and when to begin asking GDB about them via the Oracles. Remember, GCC has no access to the inferior source; the inferior source might not even be available. The only code GCC will ever see is what GDB generates and what the user writes in their snippet. So any reference to an inferior variable, function, or global has to be resolved by GDB telling GCC about it.
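The sample output is missing from this copy. Putting the pieces together, the annotated program GDB hands to GCC for our "int z = 5; c = z;" example looks roughly like the sketch below. The pragma spelling matches what GDB's C support emits, but the rest is an illustrative reconstruction (and deliberately does not compile on its own – c is undefined until the Oracle resolves it):

```c
void
_gdb_expr (struct __gdb_regs *__regs)
{
  /* ...auto-generated shadow locals (i_ptr, c_ptr, f_ptr)... */
#pragma GCC user_expression
  {
    int z = 5; c = z;   /* the user's snippet, verbatim */
  }
}
```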

You can see in the middle of that output the user's snippet, which has been wrapped in a local scope under the pragma directive. The scope ends where the user code ends.

Generate code footer

The final operation before GDB passes the code to GCC is to generate the code footer. Similar to the code header described above, this is language specific. In the case of C, it is just the closing of the function body, so all it contains is a single } on its own line.

And now we are ready to explore the GCC side, and the input of the Oracles in the project.

The GCC interface

*to-do* explain the solib, the single exported function, the vtable of function calls, the concept of oracles and how things get resolved in GCC.

Loading the object file

Relocating the object file

The object file produced by GCC has to be loaded into the inferior. Since GDB loads the .o file directly, it needs to relocate it itself.

One could propose loading a position-independent .so file instead, but then GDB could no longer use the BFD library for relocation, as .so files can be relocated only by ld.so (that is, by dlopen()). Using dlopen() would be too intrusive for the inferior, as such a complicated inferior function call might affect the inferior data being debugged.

The .o file is also built with -fPIC (position-independent code). Otherwise GDB would need to reserve inferior memory large enough for any object file in advance, so that the final link address was known, and additionally the default system linker script would need to be modified so that the absolute target address could be specified.

GDB also builds the object file with -mcmodel=large (for 64-bit targets only). That way no GOT (Global Offset Table) needs to be created in inferior memory by GDB (normally it is set up by ld.so).

Relocations to inferior data variables are already set as absolute addresses in the object file, thanks to the Oracle. Relocations to inferior functions need to be set when loading the object file.

Mapping the object file

In general, there is a goal of minimal inferior modification during debugging. Using a .o file and loading it manually from GDB has therefore been preferred over the simpler approach of linking an .so file and loading it via an inferior function call of dlopen().

GDB does, however, use an inferior function call of mmap() to reserve inferior memory for the object file. GDB also uses the same mmap() call to get memory for the struct __gdb_regs mentioned above.

Discarding the object file

The object file and its generated source file need to be kept around while its code executes, as GDB may stop the inferior even inside the code entered by the user, just as during normal inferior function calls.

After the inferior function call frame disappears (usually due to a return from the object file), GDB can delete the files on disk. An inferior function call frame is indicated by the frame <function called from gdb> in a backtrace.

Memory allocated in the inferior by the mmap() call is never deallocated, though. There are many ways the inferior program may keep references to that memory; one such case is:

(gdb) compile code inferior_charptr_variable = "hello";

Missing Features

Abstraction of compile/

The initial implementation of C-language compile support had a lot of new ground to cover and did not really pay attention to a "clean" abstraction of interfaces to facilitate additional languages. The design we have today is still largely a mass copy/paste of the functions used by the C compile code, modified to work with C++.

The compile feature does not support the use of GDB-specific parser grammar extensions such as convenience variables and the artificial array operator. The former is especially important because GDB represents registers (and pseudo-registers) as convenience variables, e.g., $pc, $sp, etc.

Offline debugging [RTL/gimple interpreter?]

The compile command currently requires a running inferior, so the internal parsers must still be used when debugging offline, e.g., examining core files.

Testing. Real-world, non-trivial testing.

Even for the completed C Compile sub-project, the code base remains largely untested with real-world, production code. Rerouting the “print” command to bypass the internal parser/evaluator for C would help mitigate this for both C and C++. C++ can often be the “Wild West” of development; we've all seen developers do crazy things in C++, and GDB needs to be prepared to deal with this craziness.

C Sub-Project

The C sub-project is complete and has been available for GDB users since GDB 7.9. Questions and patches may be directed to the standard GDB mailing lists. Bugs may be filed under the "compile" category in GDB's Bugzilla.

Limitations

Nothing known beyond the project limitations listed above.

C++ Sub-Project

The C++ sub-project is currently under development, and check-ins are pushed to development GCC and GDB branches (listed above). Questions and patches may be directed to the standard GDB mailing lists. Bugs may be filed under the “compile” category in GDB's Bugzilla.

Template Support

Template support is currently underway. Support for function templates using type and value parameters is almost ready to push to the development branch. That will be followed by template-template parameters and class templates. Expressions in parameters will require additional development time.

Scoping

Unlike the C compile sub-project, the C++ compiler plug-in (and likely those for other languages) needs to know the current scope of compilation. Consider the following example:
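The example has gone missing from this copy. From the discussion below (a struct A whose definition must be in scope inside f::B::mf), it was presumably along these lines; this is a hypothetical reconstruction, with a helper global added so the sketch is self-checking:

```cpp
struct A { int x; };

int touched = 0;    // helper so the sketch is self-checking (not in original)

void f ()
{
  struct B          // local class: its member function is f::B::mf
  {
    void mf ()
    {
      A a;          // suppose the inferior is stopped here; a snippet
      a.x = 1;      // mentioning 'A' or 'this' needs this exact scope
      touched = a.x;
    }
  };
  B b;
  b.mf ();
}
```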

Because the emitted code must be callable, we “lose” the concept of the current compilation scope. The compiler does not know that the definition of struct A is in scope inside f::B::mf.

Likewise, references to "this", and unqualified class members and method calls while in class scope, will currently elicit errors.

TLS

There are likely several issues at play here. At a simplistic level, Compile currently treats TLS variables just like any other variable. The GCC plug-in sends an oracle request for the variable name, and GDB internally recognizes this as a TLS variable and passes the address back to the plug-in.

This would leave us with feature parity with GDB's internal parser and evaluator. In other words, we're not going to worry too much about it right now.

However, if the user attempts to define a new TLS variable in his code snippet, well, let's just say, “Now accepting patches!”

Exception handling

What happens if some function or method call in the code snippet throws an uncaught exception? Right now, if there is no exception handler installed in the current frame, the exception will propagate up until it finds a handler or std::terminate is called.

Symbol lookups

In these early development stages, symbol lookups have been a massive problem. Type conversion currently needs to do four separate symbol lookups in order to pass all current “simple” tests. One of these is a regular expression search over the entire symbol table. These are undeniably too expensive for non-trivial applications.

The current state of these lookups is described in gdb/compile/compile-cplus-symbols.c:gcc_cplus_convert_symbol.

Limitations

Template support

Default parameter values: if the default value is never actually used, the compiler does not emit DW_AT_default_value, and we never learn about them.

Lacks real-world testing. We have “simple” tests, i.e., gdb/testsuite/gdb.compile/cp-simple-*.exp, which test very basic functionality of the entire (C++) compile module. We really need much more realistic/complex tests.

Future "Neat" Applications of Compile

Once basic compile is “complete,” there are some very useful features that we can then offer users.

Fix and Continue

Fix and continue is a feature that many modern debuggers offer. It allows users to “replace” (re-implement) entire functions while stopped in the inferior. All normal debugging operations (stack and variable inspection, breakpoints, etc) are supported transparently.

The Compile Project is a fundamental first step to supplying this feature to users.

"Fast" Breakpoint Conditions

A frequently used feature of GDB is conditional breakpoints, i.e., break at a certain location only if some expression evaluated at the stop context is true. Currently a conditional breakpoint requires GDB to fetch memory and re-evaluate the conditional expression every time the breakpoint is hit. In many situations, this can greatly slow down (and frustrate) the debugging experience. Fast breakpoint conditions mitigate this slowdown by having the running application evaluate the conditional expression, breaking only when it evaluates to true.

Like Fix and Continue, this feature relies first on the ability to compile and execute (nearly) arbitrary code. In fact, aside from UI presentations, the two features share nearly identical requirements, such as the ability to track "new" blocks and to jump to this new, “named” block.

How to extend the gdb/gcc interface

As far as possible we want to be tolerant of GCC versions. We want a new GDB to work with an older GCC, and a new GCC to work with an older GDB, as GDB and GCC releases may not coincide for many months.

Also, considering an --enable-targets=all build, GDB should reasonably be able to cope with different versions of GCC available on the same system. E.g., one might have the most recent x86 GCC around, but not the most recent ARM one, etc.

We should avoid bumping the plugin's so version. The so version is supposed to indicate binary compatibility, and the ABI is maintained as long as the ABI of the single entry point -- e.g., gcc_c_fe_context for the C plugin -- is maintained.

It's the code that does the initialization/handshake on both ends (gdb/gcc) that should decide which versions each end supports. It does not have to be a single version that is supported on both ends.

The preferred solution is to add new methods at the end of the vtable of functions presented to GDB by the GCC plugin. The plugins' entry points were designed with this idea in mind. For example, the C plugin's entry point is:

/* The type of the initialization function.  The caller passes in the
   desired base version and desired C-specific version.  If the
   request can be satisfied, a compatible gcc_context object will be
   returned.  Otherwise, the function returns NULL.  */

typedef struct gcc_c_context *gcc_c_fe_context_function
    (enum gcc_base_api_version,
     enum gcc_c_api_version);

Note the "compatible". So to add new functionality, bump GCC_C_FE_VERSION_x to GCC_C_FE_VERSION_x+1, and add new functions at the end of the struct gcc_c_context vtable, keeping it compatible with GCC_C_FE_VERSION_x.

With that, an older GDB that requests, e.g., GCC_FE_VERSION_0 still works with a plugin that implements GCC_FE_VERSION_1. A newer GDB that understands both v1 and v0 can request GCC_FE_VERSION_0, indicating its minimum supported version. Because the versions are backward compatible, the plugin may answer with a higher version that it also supports, in which case GDB can make use of the newer v1 methods; otherwise, GDB must fall back to using the older v0 methods.
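A minimal sketch of that negotiation from GDB's side follows. The enum names mirror the versioning scheme described above (as declared in include/gcc-c-interface.h), but the helper function and its logic are illustrative, not GDB's actual code:

```c
/* Illustrative version-negotiation sketch; not GDB's actual code.  */
enum gcc_base_api_version
{
  GCC_FE_VERSION_0 = 0,
  GCC_FE_VERSION_1 = 1
};

/* GDB requests the minimum version it supports; the plugin answers with
   the highest backward-compatible version it implements.  Returns 1 if
   the newer v1 vtable entries may be used, 0 if we must stay with v0.  */
static int
can_use_v1 (enum gcc_base_api_version plugin_answer)
{
  return plugin_answer >= GCC_FE_VERSION_1;
}
```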