After getting frustrated with Ubuntu’s Unity interface to the point of wanting to bludgeon myself to death with a bag of squirrels, I tried out Cinnamon by manually installing the latest version. It’s actually pretty nice, except for one major annoyance: every time I get a Skype message, it un-minimises the relevant chat window and brings it up over the top of my current foreground window. This is really annoying. There is a bug entry in Cinnamon’s bug database, but it was closed with this frankly disturbing comment by someone called Clement Lefebvre (clefebvre):

I don’t like the idea of this being configurable.. mainly because this isn’t a question of preference but a question of fixing a bug. The focus should always be taken when the user launches a new app and never be taken on his behalf when he doesn’t trigger the creation of new content.

What the … it actually checks for Skype and handles it specially. That’s got to be wrong. Clement then goes on to say:

For other apps which face a similar issue, I’d like people to insert the following code at line 27 in /usr/share/cinnamon/js/ui/windowAttentionHandler.js:

global.log_error("Focus stolen by : " + window.get_wm_class());

Then restart cinnamon, and when the focus is stolen from you, click on the ^ applet, troubleshoot, looking glass, click on the error tab, and you should see the wm class name of the app which stole your focus. Give me that wm class name, and we’ll add it to Cinnamon.

Oh for fuck’s sake… this is not how you do these things. Fix the bug in Cinnamon, make it respond to notifications properly for all apps; don’t just implement a workaround on an app-by-app basis. Am I missing something here? Surely other window managers aren’t implementing this workaround (because they don’t need to, because they handle the notification correctly in the first place)? This doesn’t exactly inspire confidence in the quality of Cinnamon.

I’ve long despised most dynamically typed languages, for reasons which I’ve yet to properly enunciate, but I’ve begun to loathe one above all the others: yes, I’m referring to the abomination that is PHP. I’ve never got around to properly writing about why I abhor it so much but I was recently given a link to an article on another blog, which contains (amongst a much larger and very well argued discussion) this little gem (reproduced with permission):

I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

I’ve never really had to write much PHP, but I’ve sat looking over the shoulder of someone who was in the process of coding some small PHP scripts, and I was horrified. The problems with the ‘==’ operator, and the overloading of function return types, were particularly unsettling. I’ve written previously about my difficulty in just compiling PHP so that it worked as I wanted – which just adds to the feeling that PHP was simply slapped together from a load of unrelated pieces, and that it was only by chance that a Turing-complete language emerged. One of these days I’ll get around to explaining just why dynamic languages are bad, but in the meantime get some entertainment (and perhaps education) by reading Eevee’s post.

It’s not so bad, as web sites go, in terms of appearance. Nice width, good text/background contrast. They’re probably using WordPress in the backend. That transition effect between the screenshots (if that’s what they are) is really nice; I like it.

Notice they had a release recently – up to version 1.4 now, it seems, and 1.3.1 was … well, I don’t know how long ago, because the news items aren’t dated, which does seem like a serious omission, but at least I can click the links to get the full story with date: 1.4 was released 14/3/2012 and 1.3.1 was 20/2/2012, less than a month earlier. Ok, I can live with that. Hmm, fast release schedule! Usually a sign of a project in its infancy: enthusiastic developers, and plenty of low-hanging fruit in terms of bugs to fix and features to add. Which makes the next observation even more puzzling.

No – mention – anywhere – of – what – the – heck – it – is.

How exactly do you manage to put together a website with flashy screenshots and monthly news items and still neglect this most basic, this most essential of details? I can only assume that the webmaster is some form of great ape. Possibly an orangutan. Don’t get me wrong, I like orangutans; they have an odd, graceful sobriety about them, and you can train them to wash clothes and (apparently) bang out a quick website, but they just don’t have the intelligence of a real human.

So please, Cinnamon, ditch the ape, get a human, and put an “about” page on your crappy website.

(For anyone who’s just itching to know, LWN has an entry here which explains what Cinnamon is).

With several new compiler releases out, it’s time to update my compiler benchmark results (last round of results here, description of the benchmark programs here). Note that the tests were run on a different machine this time and the number of iterations was tweaked, so the numerical results aren’t comparable with the previous set.

Without further ado:

So, what’s interesting in this set of results? Generally, note that LLVM and GCC are now serious rivals for overall dominance. Notably, GCC canes LLVM in bm4, where GCC appears to generate very good “memmove” code, whereas LLVM generates much more concise but apparently also much slower code; likewise in bm8 (trivial loop removal – though neither compiler actually performs this optimization, GCC apparently generates faster code). On the other hand, LLVM beats GCC quite handily in bm6 (a common subexpression elimination problem).

GCC 4.6.2 improves quite a bit over 4.5.2 in bm5 (essentially a common subexpression refactoring test). However, it’s slightly worse in bm3 and for some reason there is a huge drop in performance for the bm7 test (stack placement of returned structure).
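For anyone unfamiliar with the term, here’s a minimal illustration (hypothetical code in Java, not the actual benchmark source) of the kind of opportunity a common subexpression elimination test exercises – the expression a * b appears twice, and a good optimizer computes it only once:

```java
// Hypothetical illustration of common subexpression elimination (CSE).
// A compiler with good CSE turns "unoptimized" into the equivalent of
// "optimized" automatically.
public class Cse {
    static int unoptimized(int a, int b, int c) {
        return (a * b + c) * (a * b - c); // a*b evaluated twice
    }

    static int optimized(int a, int b, int c) {
        int t = a * b; // common subexpression hoisted into a temporary
        return (t + c) * (t - c);
    }

    public static void main(String[] args) {
        // Both forms always compute the same value.
        System.out.println(unoptimized(3, 4, 5) == optimized(3, 4, 5)); // prints "true"
    }
}
```

Whether a compiler spots this reliably, especially once the repeated expression is buried inside loops or behind pointer accesses, is exactly what separates the bm5/bm6 results above.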

I have no idea what this means, so in the interim I try upgrading ‘glib’. This mostly goes without hiccups (except that newer versions of glib apparently have a dependency on libffi, for reasons which are tentatively explained in the glib README); however, I get the same error message when I try to start Evince. After some Googling, I discover the necessity of ‘compiling the schemas’:

# glib-compile-schemas /usr/share/glib-2.0/schemas

No mention at all of this in Evince build documentation. It looks like the schema might be compiled automatically if you “make install” Evince without setting DESTDIR, but frankly the necessity of compiling the schema should be documented.

A common method of protecting data from race conditions in an object-oriented system is to use a simple mutex. The pattern goes something like this:

1. Call some access or modification method on an object

2. On entry to the method, the mutex (associated with the object) is obtained

3. … the method does its stuff

4. The mutex is released

5. The method returns

Java, for instance, provides built-in support for this particular sequence with the “synchronized” keyword. A method tagged as “synchronized” automatically obtains the mutex on entry and releases it on return, which removes a lot of the burden from the programmer who otherwise has to make sure that the mutex is properly released on every code path (including exceptions). This works fine in a lot of cases. Java’s mutexes are also re-entrant, meaning that one synchronized method can call another on the same object without causing instant deadlock, even though they share the same mutex (the mutex is associated with the object, not the method). However, “synchronized” is not suitable as a general-purpose mutex, and in fact its re-entrancy can lead to bad design.
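A minimal sketch (with made-up class and method names) of what this looks like – both methods synchronize on the same object’s monitor, so one can call the other without deadlock precisely because the monitor is re-entrant:

```java
// Minimal sketch of Java's built-in monitor. Both methods lock the same
// object, and the monitor is re-entrant, so increment() may call logValue()
// without deadlocking. Class and method names are hypothetical.
public class Counter {
    private int value;

    public synchronized void increment() {
        value++;
        logValue(); // re-entrant: we already hold this object's monitor
    }

    public synchronized void logValue() {
        System.out.println("value = " + value);
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment(); // prints "value = 1" – no deadlock
    }
}
```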

Consider a design which uses the listener pattern. That is, some class has a modifier method which modifies data and notifies a number of listeners of the change (via some listener interface). How could such a design be made thread-safe? It might be tempting to mark the modifier method as synchronized, and in many cases this would work fine. If a listener needs to access or modify data, it can do so, and the re-entrant nature of “synchronized” means no deadlock will occur. However, this approach is a bad design, for subtle reasons which basically boil down to this: a mutex should not necessarily prevent data access or modification by threads which do not hold the mutex.

Huh, you may be thinking, what’s this guy smoking? That’s precisely what a mutex is for. But it’s not, exactly: a mutex is a mechanism that allows threads to co-ordinate access to data (or some other resource) in order to avoid race conditions, but it is not specifically meant to limit access to the single thread holding the mutex. In particular, if the thread holding the mutex is blocked waiting for some action by a second thread, and the second thread knows that this is the case, then it is fine for the second thread to access the resource protected by the mutex – clearly there can be no race condition, since the only other thread that otherwise has any right to access the resource (i.e. the thread that holds the mutex) is blocked.

If you think this argument is absurd, consider two cases in Java where this situation can come about:

An arbitrary thread needs to modify the data (the “model”) of a Swing component, synchronously, from within a listener callback that is invoked while a mutex protecting an object “O” is held. To modify the model safely, the listener must use EventQueue.invokeAndWait() or equivalent. The code invoked on the event dispatch thread also attempts to access the object “O”. Bang – deadlock.

Again, a listener callback is called with a mutex held. This time the listener calls a method on a remote object (in a second Java VM) via RMI. The remote invocation calls back into the first VM and attempts to obtain the (supposedly re-entrant) mutex; however, due to the RMI implementation, the callback is now running on a different thread in the first VM, so re-entrancy does not apply. Bang – deadlock.
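The essence of both cases can be demonstrated without Swing or RMI: a re-entrant lock is only re-entrant for the same thread. The sketch below (hypothetical, using java.util.concurrent rather than “synchronized”, and with tryLock plus a timeout so the demo fails fast instead of actually hanging) shows a holder thread blocked waiting on a “callback” thread which can never get in:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the cross-thread re-entrancy problem. The main thread holds the
// mutex (as the listener caller would) and blocks waiting for a "callback"
// arriving on a different thread. Re-entrancy is per-thread, so the callback
// cannot enter; with a real blocking lock() this would be a deadlock.
public class CrossThreadReentrancy {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the mutex

        Thread callback = new Thread(() -> {
            try {
                // The "RMI dispatch" / "event dispatch" thread tries to enter.
                // tryLock with a timeout stands in for lock() so we don't hang.
                if (lock.tryLock(200, TimeUnit.MILLISECONDS)) {
                    lock.unlock();
                    System.out.println("entered");
                } else {
                    System.out.println("could not enter: would deadlock");
                }
            } catch (InterruptedException ignored) { }
        });
        callback.start();
        callback.join(); // blocking here while holding the lock = the deadlock
        lock.unlock();
    }
}
```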

The obvious workaround – invoking the listener callback(s) outside of the synchronized block, i.e. without the mutex held – does work, but it leaves the possibility that the data is modified between the modification event and the listener being notified, which can be undesirable. The best solution from a design point of view is to move the acquisition and release of the mutex outside of the responsibility of the object being protected by it. Thus the sequence at the start of this post would be changed to:

1. Acquire the mutex

2. Invoke the method

3. … the method does its thing

4. The method returns

5. Release the mutex

Also, it should be assumed (even enforced) that the mutex is not re-entrant, to allow for the case where the executing thread is not the one which actually acquired the mutex. This is more complicated, and places more burden on the programmer – which is not desirable, especially when dealing with concurrency – but it is a better overall design. First, it allows a solution to the two problems outlined above; second, it separates concerns – why should an object worry about synchronising if it may or may not be used in a multi-threaded system?
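One way to sketch this in Java (hypothetical class names; the “Model” stands in for any protected object) is to use a Semaphore with a single permit as the mutex: unlike “synchronized” or ReentrantLock, a semaphore is not re-entrant and is not owned by any particular thread, so release need not happen on the thread that acquired it – exactly the property argued for above:

```java
import java.util.concurrent.Semaphore;

// Sketch of externally-managed locking. The protected object contains no
// synchronization of its own; the caller brackets each use with
// acquire/release of a one-permit Semaphore, which behaves as a
// non-re-entrant mutex not tied to any thread.
public class ExternalLocking {
    static class Model {
        private int value; // no "synchronized" anywhere in here
        void set(int v) { value = v; }
        int get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        Semaphore mutex = new Semaphore(1);
        Model model = new Model();

        mutex.acquire();     // 1. acquire the mutex
        try {
            model.set(42);   // 2-4. invoke the method; it does its thing
        } finally {
            mutex.release(); // 5. release the mutex, on every code path
        }
        System.out.println(model.get()); // prints "42"
    }
}
```

Note the try/finally: once lock management is the caller’s job, the caller also inherits the burden of releasing on every code path, which is precisely the trade-off described above.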

Ideally there’d be some language support for this sort of design too. It would be nice if there was a way to tag that a specific object should only be touched when an associated mutex was held, and have static analysis to determine when such a rule was being broken. Unfortunately such things are not yet readily available/usable, at least not as far as I’m aware.

In conclusion: be careful when using “synchronized” or re-entrant mutexes; consider using non-re-entrant mutexes instead.

… However, I don’t understand why it’s unable to create a file in /tmp. I’ve verified the file doesn’t already exist before xkbcomp is run, and that all users can create files in /tmp (the “t” bit is set).

Once again, the error message is bad: please tell me why you can’t open the file. (Hint: it’s called perror.)

Update (solved!):

Turns out the permissions were the problem. They were rwxr-xrwt, i.e. the group was denied write permission. I didn’t think this was a problem, seeing as “other” users are allowed to write files, but that’s not how Linux permission checking works: the kernel picks exactly one permission class – owner if the uid matches, otherwise group if the gid matches, otherwise other – and checks only that class’s bits, never falling through to a more permissive class. My user was in the file’s group, so only the group bits applied, and write was denied.
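The class-selection rule can be sketched as follows (a simplified illustration in Java with hypothetical names; it ignores root, supplementary groups, ACLs and the sticky bit, and checks only the primary gid):

```java
// Simplified sketch of POSIX file permission checking: pick exactly one
// class (owner if uid matches, else group if gid matches, else other) and
// test only that class's bits. There is no fall-through to a more
// permissive class.
public class PermCheck {
    static boolean mayWrite(int mode, int fileUid, int fileGid, int uid, int gid) {
        int shift;
        if (uid == fileUid)      shift = 6; // owner bits
        else if (gid == fileGid) shift = 3; // group bits
        else                     shift = 0; // other bits
        return ((mode >> shift) & 0b010) != 0; // test the write bit
    }

    public static void main(String[] args) {
        int mode = 0757; // rwxr-xrwx: group lacks write, "other" has it
        // A user in the file's group is checked against the GROUP bits only:
        System.out.println(mayWrite(mode, 0, 100, 1000, 100)); // prints "false"
        // A user in neither the owner nor group class gets the OTHER bits:
        System.out.println(mayWrite(mode, 0, 100, 1000, 200)); // prints "true"
    }
}
```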