Valgrind 3.7.0 now includes an embedded gdbserver, which is wired to the valgrind innards in the most useful way possible. What this means is that you can now run valgrind in a special mode (simply pass --vgdb-error=0), then attach to it from gdb, just as if you were attaching to a remote target. Valgrind will helpfully tell you exactly how to do this. Then you can debug as usual, and also query valgrind’s internal state as you do so. Valgrind will also cause the program to stop if it hits some valgrind event, like a use of an uninitialized value.

A few improvements are possible; e.g., right now it is not possible to start a new program under valgrind from inside gdb. This would be a nice addition (I think something like “target valgrind”, but other maintainers have other ideas).

I think this is a major step forward for debugging. Thanks to Philippe Waroquiers and Julian Seward for making it happen.

This is awesome! The gcc-python-plugin is by far the simplest way to write a GCC plugin. The primary reason is that its author, the amazing David Malcolm, has put a lot of effort into the polish: this plugin is the simplest one to build (“make” works for me, with the Fedora 15 system GCC) and also the one with the best documentation.

Why would you want to write a plugin? Pretty much every program — and especially every C program, as C has such bad metaprogramming support — has rules which cannot be expressed directly in the language. One usually resorts to various tricks, and in extremis to patch review by all-knowing maintainers, to enforce these rules. A plugin offers another way out: write a Python script to automate the checking.

I’ve already written a couple of custom checkers for use on GDB (which is a very idiosyncratic C program, basically written in an oddball dialect of C++), which have found real bugs. These checkers cover things that no generic static analysis tool would ever correctly check, e.g., for the proper use of GDB’s exception handling system. The exception checker, which we use to verify that we’re correctly bridging between Python’s exception system and GDB’s, took less than a day to write.

Phil Muldoon added support for breakpoints to the Python API in gdb this past year. While work here is ongoing, you can already use it to do neat things which can’t be done from the gdb CLI.

The interface to breakpoints is straightforward. There is a new Breakpoint class which you can instantiate. Objects of this type have various attributes and methods, corresponding roughly to what is available from the CLI — with one nice exception.

The new bit is that you can subclass Breakpoint and provide a stop method. This method is called when the breakpoint is hit and gets to determine whether the breakpoint should cause the inferior to stop. This lets you implement special breakpoints that collect data, but that don’t interfere with other gdb operations.
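To make the control flow concrete without running gdb, here is a toy model of the stop-method protocol. The `Breakpoint` stand-in class below is hypothetical (outside gdb there is no `gdb` module); in a real session you would subclass `gdb.Breakpoint` and define `stop` the same way:

```python
# Toy model of gdb's Breakpoint.stop() protocol. The Breakpoint class
# here is a stand-in for gdb.Breakpoint, so this sketch runs outside gdb.

class Breakpoint:
    """Stand-in for gdb.Breakpoint: stop() decides whether to halt."""
    def stop(self):
        return True  # default: always stop, like a normal breakpoint

class CountingBreakpoint(Breakpoint):
    """Collects data on each hit but never interrupts the inferior."""
    def __init__(self):
        self.hit_count = 0
    def stop(self):
        self.hit_count += 1
        return False  # tell gdb not to stop here

def simulate_hit(bp):
    """Simulate what gdb does when the breakpoint's location is reached."""
    return bp.stop()

bp = CountingBreakpoint()
for _ in range(3):
    simulate_hit(bp)
print(bp.hit_count)  # 3 hits recorded; the inferior was never stopped
```

Because `stop` returns False, a “next” or “continue” that crosses this location proceeds undisturbed, which is exactly what the commands-based approach below fails to do.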

If you are a regular gdb user, you might think this is possible with something like:

break file.c:73
commands
silent
python collect_some_data()
cont
end

Unfortunately, however, this won’t work — if you try to “next” over this breakpoint, your “next” will be interrupted, and the “cont” will cause your inferior to start running free again, instead of stopping at the next line as you asked it to. Whoops!

Here’s some example code that adds a new “lprintf” command. This is a “logging printf” — you give it a location and (gdb-style) printf arguments, and it arranges to invoke the printf at that location, without ever interrupting other debugging.

This code is a little funny in that the new breakpoint will still show up in “info break”. Eventually (this is part of the ongoing changes) you’ll be able to make new breakpoints show up there however you like; but meanwhile, it is handy not to mark these as internal breakpoints, so that you can easily delete or disable them (or even make them conditional) using the normal commands.

There have been many new Python scripting features added to gdb since my last post on the topic. The one I want to focus on today is event generation.

I wrote a little about events in gdb-python post #9 — but a lot has changed since then. A Google SoC student, Oguz Kayral, wrote better support for events in 2009. Then, Sami Wagiaalla substantially rewrote it and put it into gdb.

In the new approach, gdb provides a number of event registries. An event registry is just an object with connect and disconnect methods. Your code can use connect to register a callback with a registry; the callback is just any callable object. The event is passed to the callable as an argument.

Each registry emits specific events — “emitting” an event just means calling all the callables that were connected to the registry. For example, the gdb.events.stop registry emits events when an inferior or thread has stopped for some reason. The event describes the reason for the stop — e.g., a breakpoint was hit, or a signal was delivered.
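The registry interface is small enough to model in a few lines. This is a hypothetical sketch, runnable outside gdb, of the connect/disconnect/emit shape described above; in gdb itself you would call, e.g., `gdb.events.stop.connect(handler)` rather than build the registry yourself:

```python
# Minimal model of a gdb-style event registry: connect/disconnect plus
# an emit step that calls every registered callable with the event.
# (Illustrative sketch; gdb provides its registries as gdb.events.*.)

class EventRegistry:
    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        """Register any callable; it will receive the event as an argument."""
        self._callbacks.append(callback)

    def disconnect(self, callback):
        self._callbacks.remove(callback)

    def emit(self, event):
        """Call every connected callable with this event."""
        for cb in list(self._callbacks):
            cb(event)

stop_events = EventRegistry()
seen = []
stop_events.connect(seen.append)
stop_events.emit("breakpoint-hit")
print(seen)  # ['breakpoint-hit']
```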

Here’s a script showing this feature in action. It arranges for a notification to pop up if your program stops unexpectedly — if your program exits normally, nothing is done. Something like this could be handy for automating testing under gdb; you could augment it by having gdb automatically exit if gdb.events.exited fires. You could also augment it by setting a conditional breakpoint to catch a rarely-seen condition; then just wait for the notification to appear.

To try this out, just “source” it into gdb. Then, run your program in various ways.

First, why rebase? Performance is the biggest reason. Emacs Lisp is a very basic lisp implementation. It has a primitive garbage collector and basic execution model, and due to how it is written, it is quite hard to improve this in place.

Second, why Common Lisp? Two reasons. First, Emacs Lisp resembles Common Lisp in many ways; elisp is like CL’s baby brother. Second, all of the hard problems in Lisp execution have already been solved excellently by existing, free-software CL implementations. In particular, the good CL implementations have much better garbage collectors, native compilation, threads, and FFI; we could expose the latter two to elisp in a straightforward way.

By “rebase” I mean something quite ambitious — rewrite the C source of Emacs into Common Lisp. I think this can largely be automated via a GCC plugin (e.g., written using David Malcolm’s Python plugin). Full automation would let the CL port be just another way to build Emacs, following the upstream development directly until all the Emacs maintainers can be convinced to drop C entirely (cough, cough).

Part of the rewrite would be dropping code that can be shared with CL. For example, we don’t need to translate the Emacs implementation of “cons”, we can just use the CL one.

Some CL glue would be needed to make this all work properly. These days it can’t be quite as small as elisp.lisp, but it still would not need to be very big. The trickiest problem is dealing with buffer-local variables; but I think that can be done by judicious use of define-symbol-macro in the elisp reader.

Emacs might be the only program in the world that would see a performance improvement from rewriting in CL :-). The reason for this is simple: Emacs’ performance is largely related to how well it executes lisp code, and how well the GC works.

Here’s a homework problem for you: design a static probe point API that:

Consists of a single header file,

Works for C, C++, and assembly,

Allows probes to have arguments,

Does not require any overhead for computing the arguments if they are already live,

Does not require debuginfo for debug tools to extract argument values,

Has overhead no greater than a single nop when no debugger is attached, and

Needs no dynamic relocations.

I wouldn’t have accepted this task, but Roland McGrath, in a virtuoso display of ELF and GCC asm wizardry, wrote <sys/sdt.h> for SystemTap. Version 3 has all the properties listed above. I’m pretty amazed by it.

This past year, Sergio Durigan Junior and I added support for this to gdb. It is already in Fedora, of course, and it will be showing up in upstream gdb soon.

The way I think about these probes is that they let you name a place in your code in a way that is relatively independent of source changes. gdb can already address functions nicely ("break function") or lines ("break file.c:73") — but sometimes I’d like a stable breakpoint location that is not on a function boundary; but using line numbers in a .gdbinit or other script is hard, because line numbers change when I edit.

We’ve also added probes to a few libraries in the distro, for gdb to use internally. For example, we added probes to the unwind functions in libgcc, so that gdb can properly “next” over code that throws exceptions. And, we did something similar for longjmp in glibc. You can dump the probes from a library with readelf -n, or with “info probes” in gdb.

The probes were designed to be source-compatible with DTrace static probes. So, if you are already using those, you can just install the appropriate header from SystemTap. Otherwise, adding the probes is quite easy… see the instructions, but be warned that they focus a bit too much on DTrace compatibility; you probably don’t want the .d file and the semaphore, which just slow things down. Instead, I recommend just including the header and using the macros directly.

I’ve been running an Emacs built from bzr for a while now. I did this so I could try a newer version of Semantic; the one in Emacs 23 is just too broken to use.

Semantic, in case you haven’t heard of it, is an ambitious project to turn Emacs into an IDE. Really it is quite a crazy project in some ways — it includes its own implementation of CLOS (which opens a strange Emacs maintenance debate: how can CLOS be ok but the CL package not be?) and a port of Bison to elisp (but again, strange: the port is really a pure port, it does not use sexps as the input — bizarre).

Semantic is now usable in Emacs — I’ve found a few buglets, but nothing serious. In fact, now I find it indispensable.

I have it configured in the most Emacsy way of all: I didn’t make any changes, I just enabled it with M-x semantic-mode. Then I visited a bunch of gdb source files. Semantic started indexing them in the background.

Now when I want to jump to a declaration or definition of a function, I use C-c , J. The key binding is nuts, of course, but I’ve been too lazy to rebind it yet. Anyway, this acts basically like M-., except you don’t have to ever run etags. Wonderful.

Semantic has some other nice features, too, but I haven’t really used them yet. If you’re using it I’d love to hear what you do with it.

I wrote a while ago about my blog-reading woes. Those woes are now over!

Lars Magne Ingebrigtsen, of Gnus and gmane fame, has now brought us gwene — an RSS-to-NNTP gateway. You enter the feeds you want to read, and soon they show up as newsgroups in gmane. Thanks should also go to Ted Zlatanov, for bringing this up on the gmane discussion list, and thus getting it all rolling.

I haven’t quite retired my rss2email cron job, but that is mostly out of laziness. Any day now.

Normally I am unhappy about the whole SaaS trend, but gmane gets a pass. I am not sure if it is because Lars seems trustworthy, or because NNTP is so obviously a fringe interest, or because gmane is at least theoretically replaceable in the event of the worst.

I read Declare and The Jennifer Morgue a few months ago, due to responses to my post about The Atrocity Archives.

First, I was expecting Declare to inhabit the same intersection of genres as the others — hackers plus Cthulhu plus spies. I was mistaken. It is a supernatural spy novel, but there are no hackers and it is not set in an obviously Lovecraftian world.

One funny thing is that some characters appear in both books. Since I’m largely ignorant of British spy history, I wasn’t aware that these were real figures. That’s too bad, I think that sort of knowledge makes the books a bit richer.

The Jennifer Morgue has a great name and is a typically fast-paced Stross book. However, I generally dislike books that go meta, and the whole geas thing was too cutesy for me. I finished this book mostly out of stubbornness. Also, I find that the frenetic style of Stross or MacLeod wears on me after a while.

Declare was a much different book — slower paced, more detailed, with more history and character development. It was a bit repetitive at times and perhaps a bit long; but I thought the ending was rather clever and I enjoyed it overall. For some reason the title Declare has stuck with me and I think of it often.

There’s a fun source rewriting trick that I’ve wanted to try out for a long time — and I finally got a chance to do it while working on the multi-threading patch for Emacs.

The Problem

In the multi-threaded Emacs, a let binding must be thread-local, because this is really the only way to manage dynamic binding in the presence of threads. Emacs also has a notion of a buffer-local variable, and furthermore some buffer-local variables are stored directly in the internal struct buffer — that is, assignments to the variable in lisp are transformed by the lisp implementation into a field assignment in C. These fields are freely used elsewhere in the C code.

Our implementation of thread-locals, though, is an alist mapping a thread object to the variable’s value. So, to keep the C code working properly, we need to rewrite every field access to use a function that finds the proper per-thread value.
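The alist-based representation is easy to picture with a small model. This is a hypothetical Python sketch of the lookup described above (the real implementation is C inside Emacs; all names here are invented for illustration):

```python
import threading

# Model of the alist-based thread-local: each variable keeps a default
# value plus a list of (thread, value) pairs; lookup scans for the
# current thread and falls back to the default. The rewritten C field
# accesses would go through a lookup function shaped like find_value().

class ThreadLocalVar:
    def __init__(self, default):
        self.default = default
        self.alist = []  # list of (thread, value) pairs

    def find_value(self):
        me = threading.current_thread()
        for thread, value in self.alist:
            if thread is me:
                return value
        return self.default

    def set_value(self, value):
        me = threading.current_thread()
        for i, (thread, _) in enumerate(self.alist):
            if thread is me:
                self.alist[i] = (me, value)
                return
        self.alist.append((me, value))

v = ThreadLocalVar(0)
v.set_value(42)
print(v.find_value())  # 42 in this thread; other threads still see 0
```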

The Idea

The idea, of course, is automated rewriting. However, like many other GNU programs, Emacs is heavily macroized, and furthermore may be the last program in the whole distro that uses K&R-style function definitions. For these reasons I assumed that existing refactoring tools would not work well.

Luckily, though, this problem doesn’t require a very sophisticated refactoring tool. Really all we need to do is find the location of each field reference, and then find the start of the left-hand-side, and then rewrite that into the new form.

The Hack

All we really need is to find a series of locations — the rest we can handle with some straightforward elisp scripting. And what simpler way is there to get locations than to get the compiler to give them to us?

I wrote a batch script in elisp to automate the whole procedure. Why elisp? Not only is it a natural, perhaps even required, fit when hacking on Emacs, it also has some nice “sexp” functions which allow skipping over properly-parenthesized expressions. This means I could do without a whole parser. And why automate the whole process? I expected it wouldn’t work properly the first time; having a single script let me git reset after each test run and simply re-run from scratch.

This elisp script first edits struct buffer to rename each field. Then it runs make to rebuild Emacs. This causes the compiler to emit an error message for each bad field access.

A critical point here is that I used GCC svn trunk. Only recent versions of GCC emit correct column numbers in error messages. GCC 4.4 might have worked, I am not sure — and in the end I needed a small libcpp patch to deal with a certain macro case.

The elisp script reads the output of make and pulls out the error messages. For each error on a given line, it works in reverse order (so that multiple fixes on one line will work properly without the bother of inserting markers), rewriting the field accesses. I wrote a bit of ad hoc code to back up to the start of the left-hand-side of the field access; doing this well is a bit funny, like writing a parser that works backwards, but in my case I knew I could get away with something relatively simple (I think this little sub-hack caused the script to miss less than 10 rewrites, i.e., tolerable).
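The reverse-order trick is worth spelling out: applying column-indexed edits from rightmost to leftmost means an earlier edit never shifts the columns of edits still to be applied. Here is an illustrative standalone sketch (in Python, not the actual elisp; the `BVAR`-style replacement text is just an example name):

```python
# Sketch of the reverse-order rewriting trick: apply column-indexed
# edits within a line from rightmost to leftmost, so applying one edit
# never invalidates the column numbers of the remaining edits.
# (Illustrative Python; the real script was elisp driven by GCC errors.)

def apply_edits(line, edits):
    """edits: list of (column, old_text, new_text), columns 0-based."""
    for col, old, new in sorted(edits, reverse=True):
        # Sanity-check that the compiler's column really points at the
        # text we expect before splicing in the replacement.
        assert line[col:col + len(old)] == old, "stale column info"
        line = line[:col] + new + line[col + len(old):]
    return line

src = "b->size = b->size + 1;"
edits = [(0, "b->size", "BVAR (b, size)"),
         (10, "b->size", "BVAR (b, size)")]
print(apply_edits(src, edits))
# -> "BVAR (b, size) = BVAR (b, size) + 1;"
```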

I would guess that this script got 90% of the field accesses. I had to fix up a few by hand, mostly in macro definitions in header files. And, I had to revert a few changes as well, mostly in the garbage collector (which wants to see the real underlying alist, not the per-thread value). Still, diffstat says: 49 files changed, 1305 insertions(+), 1021 deletions(-) — in other words, not something you’d want to do by hand.

So, ok, this is horrible. But fun! I think I will end up doing it again, for frame- and keyboard-local variables. Maybe someday I’ll finish my patch to make libcpp properly track locations through macros, and then the script can even fix up macro definitions for me.

I’m not extremely interested in Eclipse-style refactoring — where the tool provides a couple dozen refactorings for you. Instead, I think I want my refactoring tool to answer queries for me, so I can feed that information to a customized rewriting script.

Another way I could have done this was writing a GCC plugin with treehydra or MELT, but unfortunately my free time is so limited that I haven’t managed to even build either one yet. Once plugins are in the Fedora GCC, I think it would be very worthwhile to package up treehydra…