Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

Let me present an informal and unofficial state of Chromium Open Source packages as I see it. Note a possible bias: I'm a Chromium developer (and this post represents my views, not the project's), and a Gentoo Linux developer (and Chromium package maintenance lead - this is a team effort, and the entire team deserves credit, especially for keeping stable and beta ebuilds up to date).

Arch Linux - ships the stable channel and promptly reacts to security updates. NaCl is enabled, following Gentoo closely - I consider that good, and I'm glad people find that code useful. :) Five use_system_... gyp switches are enabled. A notable thing is that the PKGBUILD is one of the shortest and simplest among Chromium packages - this seems to follow from The Arch Way. There is also chromium-dev on AUR - it is more heavily based on the Gentoo package, tracks the upstream dev channel, and uses 19 use_system_... gyp switches.

Debian - ancient 6.x version in Squeeze, 22.x in sid at the time of this writing. This is two major milestones behind, and is missing security updates. Not recommended at this moment. :( If you are on Debian, my advice is to use Google Chrome, since the official debs should work, and to monitor the state of the open source Chromium package. You can always return to it when it gets updated.

Fedora - not in official repositories, but Tom "spot" Callaway has an unofficial repo. Note: currently the version in that repo is 23.x, one major version behind on stable. Tom wrote an article in 2009 called Chromium: Why it isn't in Fedora yet as a proper package, so there is definitely an interest to get it packaged for Fedora, which I appreciate. Many of the issues he wrote about are now fixed, and I hope to work on getting the remaining ones fixed. Please stay tuned!

This is not intended to be an exhaustive list. I'm aware of openSUSE packages, there seems to be something happening for Ubuntu, and I've heard of Slackware, Pardus, PCLinuxOS and CentOS packaging. I do not follow these closely enough though to provide a meaningful "review".

Some conclusions: different distros package Chromium differently. Pay attention to the packaging lag: with an upstream release cycle of about six weeks and each major update being a security one, this matters. Support for Native Client is another point. There are extensions and Web Store apps that use it, and as more and more sites start to use it, this will become increasingly important. It is also interesting to ask why on some distros some bundled libraries are used even though upstream provides an option to use a system library that is known to work on other distros.

Finally, I like how different maintainers look at each other's packages, and how patches and bugs are frequently being sent upstream.

Just a simple announcement for now. It's a bit messy, but should work :D

I have packaged OpenStack for Gentoo and it is now in tree; the most complete packaging is probably for OpenStack Swift. Nova and some of the others are missing init scripts (being worked on). If you have problems or bugs, report them as normal.

Last night I ended up in Bizarro World, hacking at Jürgen’s gmaillabelpurge (which he actually wrote at my request, thanks once more Jürgen!). Why? Well, the first reason was that I found out that it hadn’t been running for the past two and a half months because, for whatever reason, the default Python interpreter on the system where it was running was changed from 2.7 to 3.2.

So I tried first to get it to work with Python 3 while keeping it working with Python 2 at the same time; some of the syntax changed ever so slightly and was easy to fix, but the 2to3 script that it comes with is completely bogus. Among other things, it adds parentheses to all the print calls… which would be correct if it checked that said parentheses weren’t there already. In a script like the aforementioned one, the noise on the output is so high that there is really no signal worth reading.
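For reference, the usual way to sidestep this whole class of print rewrites is the `__future__` import, which makes the parenthesized syntax valid on both interpreter lines; this is a hedged sketch with a hypothetical helper, not code from the script itself:

```python
# With this import, the same print() calls run unchanged under both
# Python 2.6+ and Python 3, so no mechanical 2to3 rewriting of print
# statements is needed at all. format_report is an invented example.
from __future__ import print_function

def format_report(deleted, kept):
    # build the message separately so it is easy to test and reuse
    return "deleted: %d, kept: %d" % (deleted, kept)

print(format_report(3, 7))
```

The same trick works for the other common 2to3 pain points (`division`, `unicode_literals`), keeping one codebase valid on both interpreters instead of maintaining a converted copy.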

You might be asking how come I didn’t notice this before. The answer is “because I’m an idiot”: I found out only yesterday that my firewall configuration was such that postfix was not reachable from the containers within Excelsior, which meant I never got the fcron notifications that the job was failing.

While I wasn’t able to fix the Python 3 compatibility, I was able to at least understand the code a little by reading it, and after remembering something about the IMAP4 specs I read a long time ago, I was able to optimize its execution quite a bit, more than halving the runtime on big folders, like most of the ones I have here, by using batch operations and peeking at, instead of “seeing”, the headers. In the end, I spent some three hours on the script, give or take.
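The batching and peeking ideas can be sketched in a few lines; this is a hypothetical helper, not the actual script (`BODY.PEEK` is standard IMAP4 syntax for fetching without side effects):

```python
# Invented helper illustrating the two optimizations: a single FETCH for
# a whole message set (one round-trip instead of one per message), using
# BODY.PEEK[HEADER] so the server does not set the \Seen flag the way a
# plain BODY[HEADER] fetch ("seeing" the headers) would.
def batch_peek_args(uids):
    message_set = ",".join(str(u) for u in uids)
    return message_set, "(BODY.PEEK[HEADER])"

# With a real imaplib connection this would be used roughly as:
#   typ, data = conn.uid("FETCH", *batch_peek_args([101, 102, 103]))
print(batch_peek_args([101, 102, 103]))
```

The win comes almost entirely from collapsing N round-trips into one, which is exactly where the runtime of a per-message loop goes on large folders.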

But at the same time, I ended up having to work around limitations in Python’s imaplib (which is still nice to have by default), such as reporting fetched data as an array, where each odd entry is a pair of strings (tag and unparsed headers) and each even entry is a string with a closing parenthesis (coming from the tag). Since I wasn’t able to sleep, at 3.30am I started re-writing the script in Perl (which at this point I know much better than I’ll ever know Python, even if I’m a newbie in it); by 5am I had all the features of the original one, and I was supporting non-English locales for GMail — remember my old complaint about natural language interfaces? Well, it turns out that the solution is to use the Special-Use Extension for IMAP folders; I don’t remember this explanation page existing when we first worked on that script.
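Flattening that odd reply shape takes only a short loop; a sketch with invented sample data (real replies carry full header blocks, but the alternation of tuples and bare closing strings is as described above):

```python
# Walk imaplib's FETCH reply: tuple entries carry (tag, unparsed
# headers), while the interleaved plain strings are just the closing
# parenthesis of each FETCH item and can be skipped.
def parse_fetch(data):
    messages = []
    for entry in data:
        if isinstance(entry, tuple):   # (tag, raw header block)
            messages.append(entry)
        # bare entries like b")" are framing noise; ignore them
    return messages

# invented sample data mimicking the shape imaplib returns
sample = [(b"1 (BODY.PEEK[HEADER] {16}", b"Subject: first\r\n"), b")",
          (b"2 (BODY.PEEK[HEADER] {17}", b"Subject: second\r\n"), b")"]
print(len(parse_fetch(sample)))
```

This only papers over the interface, of course; the tag strings still have to be parsed by hand to recover UIDs, which is the part imaplib gives no help with.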

But this entry is about Python and not the script per se (you can find the Perl version on my fork if you want). I have said before that I dislike Python, and my feeling is still unchanged at this point. It is true that the script in Python required no extra dependency, as the standard library already covered all the bases … but at the same time, that’s about it: the basics are all it has; for anything more complex you still need new modules. Perl modules are generally easier to find, easier to install, and less error-prone — don’t try to argue this; I’ve got a tinderbox that reports Python test errors more often than even Ruby’s (which are lots), and most of the time for the same reasons, such as the damn unicode errors “because LC_ALL=C is not supported”.

I also still hate the fact that Python forces me to indent code to have blocks. Yes, I agree that indented code is much better than non-indented code, but why on earth should the indentation mandate the blocks rather than the other way around? What I usually do in Emacs when I’m moving stuff in and out of loops (which is what I had to do a lot on the script, as I was replacing per-message operations with bulk operations) is basically add the curly brackets in a different place, select the region, and C-M-\ it — which means that it’s re-indented following my brackets’ placement. If I see an indent I don’t expect, it means I made a mistake with the blocks, and I’m quick to fix it.

With Python, I end up having to manage the whitespace to have it behave as I want, and it’s quite a bit more bothersome, even with the C-c < and C-c > shortcuts in Emacs. I find the whole thing obnoxious. The other problem is that, while Python does provide basic access to a lot more functionality than Perl, its documentation is … spotty at best. In the case of imaplib, for instance, the only real way to know what it’s going to give you is to print the returned value and check it against the RFC — and it does not seem to have a half-decent way to return the UIDs without having to parse them. This is simply… wrong.

The obvious question for people who know me would be “why did you not write it in Ruby?” — well… recently I’ve started second-guessing my choice of Ruby, at least for simple one-off scripts. For instance, the deptree2dot tool that I wrote for OpenRC – available here – was originally written as a Ruby script … then I converted it to a Perl script half the size and twice the speed. Part of it, I’m sure, is just a matter of age (Perl has been optimized over a long time, much more than Ruby); part of it is due to their being different tools for different targets: Ruby is nowadays mostly a language for long-running software (due to webapps and so on), and it’s much more object oriented, while Perl is streamlined, top-down execution style…

I do expect to find the time to convert even my scan2pdf script to Perl (funnily enough, gscan2pdf, which inspired it, is written in Perl), although I have no idea yet when… in the meantime, though, I doubt I’ll write many more Ruby scripts for this kind of processing.

When you talk to a friend, she or he knows you are the person in question. But when you do this with a friend far away through computers, you can not be sure.
That's why computers have ways to let you know if the person you are talking to is really the right person.

The ways we use today have one problem: We are not sure that they work. It may be that a bad person knows a way to be able to tell you that he is in fact your friend. We do not think that there are such ways for bad persons, but we are not completely sure.

This is why some people try to find ways that are better. Where we can be sure that no bad person is able to tell you that he is your friend. With the known ways today this is not completely possible. But it is possible in parts.

I have looked at those better ways. And I have worked on bringing these better ways to your computer.

While I’m sure that it’s not of interest to the vast majority of readers coming from Gentoo Universe, I’m sure that some of you won’t mind some updates on my personal situation, at least to help you understand my current availability and what you can ask me to do for you, realistically.

First of all, I’m not currently in the USA — since I didn’t have a work visa, my stay was always supposed to be limited to three months at a time. The three months expired in early December, so before the expiration I traveled back to Europe — in particular to the bureaucracy of Italy and to the swamp of Venice; you can guess I don’t really like my motherland.

I’m not planning at this point to go back to the US anytime soon. Among other reasons, during 2012 I spent over six months there, and they have been very clear the last time I entered: I’m not welcome back right away — a few months would be enough, but that also means that the line of work I started back in February last year couldn’t proceed properly. While the original plan was for me to get an office in, or nearby, London, I haven’t seen any progress for said plan, which meant I went back to my old freelancing. I suppose this currently puts me in a consulting capacity more than anything.

Unfortunately, as you can guess, after a hiatus of a full year, most of my customers have already found someone else to take care of them, and I’m currently only following one last customer on their project — for something they already paid for, which means that there isn’t any money to be made there. I am already trying to get a new position, this time as a full-time employee, as the life of a freelancer (in Italy!) really made me long for more stability. For the moment I have no certain news of my future employment, but you can probably guess that, in case I do accept a full-time position, the time I have to spend on Gentoo is likely going to be reduced, unless said position requires me to use Gentoo — and I wouldn’t bet on that if I were you.

Furthermore, I do expect that, whatever position I accept next, I’m going to move out of Italy — the political scene in Italy has never been good, but it reached my limit with the current populist promises from both sides of the aisle, and from the small challengers alike; and my freelancing experience makes me wonder how on earth it’s possible that only one (small) party is actually trying to fight the crisis and increase productivity … but this is all for a different time. Anyway, wherever I am going to end up (I’m aiming for one of the few English-speaking countries in Europe), it’s going to take a while for me to settle down (find a place to live, get it so that it’s half-decently convenient for me, etc.), which is going to eat away some of the time I spend on Gentoo.

Time is being eaten away already, to be honest. Among other things, here at home I’ve got a bunch of paperwork to take care of: not only the general taxes that need to be paid and accounted for, but the bank took some of my time just to make sure I have money to cover the expenses (during the year I accrued some debts here in Italy, as I was living off the American account), and so on. I’m also trying to reduce the expenses as much as is possible for me. Most of the hardware I had before has been dropped already anyway, back in June when the original plan was for me to get an H1B visa and jump out of here, so it’s less bothersome than it can seem at first.

The one thing that really bothers me the most is that since last year I’ve been feeling like wherever I am, I’m “borrowing” my space — it’s not something I like. While some people, such as Luca, feel comfortable with just carrying their things in a suitcase, and as long as they have a place to sleep and wash their clothes they are happy, I’ve always been quite the sedentary guy: I like having my space, personalized for my needs and so on. Even now back at home I don’t feel entirely stable because I do not know how long I’m going to stay here.

I’m afraid I have overindulged during the months in the US, relying too much on the promises made then. Hopefully, I’ll come out of the recent mess on my feet, and possibly with a less foul mood than I have been having recently.

openSUSE 12.3 is getting closer and closer and probably one of the last changes I pushed for MySQL was switching the default MySQL implementation. So in openSUSE 12.3 we will have MariaDB as a default.

If you are following what is going on in openSUSE in regards to MySQL, you probably already know that we started shipping MariaDB together with openSUSE starting with version 11.3 back in 2010. It is now almost three years since we started providing it. There were some little issues on the way to resolve all conflicts and to make everything work nicely together. But I believe we polished everything and smoothed all the rough edges. And now everything is working nice and fine, so it’s time to change something, isn’t it? So let’s take a look at the change I made…

MariaDB as default, what does it mean?

First of all, for those who don’t know, MariaDB is a MySQL fork – a drop-in replacement for MySQL. Still the same API, still the same protocol, even the same utilities. And mostly the same datafiles. So unless you have some deep optimizations depending on your current version, you should see no difference. And what will the switch mean?

Actually, switching the default doesn’t mean much in openSUSE. Do you remember the time when we set KDE as the default? We still provide a great Gnome experience with Gnome Shell. In openSUSE we believe in freedom of choice, so even now you can install either MySQL or MariaDB quite simply. And if you are interested, you can try testing beta versions of both – we have MySQL 5.6 and MariaDB 10.0 in the server:database repo. So where is the change of default?

Actually, the only thing that changed is that everything now links against MariaDB and uses MariaDB libraries – no change from the user’s point of view. And if you try to update from a system that used to have just one package called ‘mysql’, you’ll end up with MariaDB. And it will be the default in the LAMP pattern. But generally, you can still easily replace MariaDB with MySQL, if you like Oracle. Yes, it is hard to make a splash with a default change if you are supporting both sides well…

What happens to MySQL?

Oracle’s MySQL will not go away! I’ll keep packaging their version and it will be available in openSUSE. It’s just not going to be the default, but nothing prevents you from installing it. And if you had it in the past and you are going to do just a plain upgrade, you’ll stick with it – we are not going to tell you what to use if you know what you want.

Why?

As mentioned before, being the default doesn’t have many consequences. So why the switch? Wouldn’t it break stuff? Is MariaDB safe enough? Well, I’ve personally been using MariaDB since 2010, with a few switches to MySQL and back, so it is better tested from my point of view. I originally switched for the kicks of living on the edge, but in the end I found MariaDB boringly stable (even though I run their latest alpha). I never had any serious issue with it. It also has some interesting goodies that it can offer its users over MySQL. Even Wikipedia decided to switch. And our friends at Fedora are considering it too, but AFAIK they don’t have MariaDB in their distribution yet…

Don’t take this as a complaint about the MySQL guys and girls at Oracle; I know that they are doing a great job, one that even MariaDB is based on, as they do periodic merges to pull in the newest MySQL and “just” add some more tweaks, engines and stuff.

So, as I like MariaDB and I think it’s time to move, I, as a maintainer of both, proposed to change the default. There were no strong objections, so we are doing it!

Overview

So overall, yes, we are changing the default MySQL provider, but you probably wouldn’t even notice.

In January of last year I was invited to attend the Semantic Physical Science Workshop in Cambridge, England. That was a great meeting where I met like-minded scientists and developers working on adding semantic structure to data in the physical sciences. Peter managed to bring together a varied group with many backgrounds, and so the discussions were especially useful. I was there to think about how our work with Avogadro, and the wider Open Chemistry project might benefit from and contribute to this area.

I made a guest blog post talking about open access and the Avogadro paper, which was later republished for a different audience. I would like to thank Geoffrey Hutchison, Donald Curtis, David Lonie, Tim Vandermeersch and Eva Zurek for the work they put into the article, along with our contributors, collaborators and the users of Avogadro. If you use Avogadro in your work please cite our paper, and get in touch to let us know what you are doing with it. As we develop the next generation of Avogadro we would appreciate your input, feedback and suggestions on how we can make it more useful to the wider community.

We are currently working on integrating carbon nanotube nanomechanical systems into superconducting radio-frequency electronics. The overall objective is the detection and control of nanomechanical motion towards its quantum limit. In this project, a PhD position with the project working title "Gigahertz nanomechanics with carbon nanotubes" is available immediately.

You will design and fabricate superconducting on-chip structures suitable as both carbon nanotube contact electrodes and gigahertz circuit elements. In addition, you will build up and use - together with your colleagues - two ultra-low temperature measurement setups to conduct cutting-edge measurements.

Good knowledge of electrodynamics, and ideally of superconductivity, is required. Certainly helpful are low temperature physics, some sort of programming experience, and basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

The combination of localized states within carbon nanotubes and superconducting contact materials leads to a manifold of fascinating physical phenomena and is a very active area of current research. An additional bonus is that the carbon nanotube can be suspended, i.e. the quantum dot between the contacts forms a nanomechanical system. In this research field a PhD position is immediately available; the working title of the project is "A carbon nanotube as a moving weak link".

You will develop and fabricate chip structures combining various superconductor contact materials with ultra-clean, as-grown carbon nanotubes. Together with your colleagues, you will optimize material, chip geometry, nanotube growth process, and measurement electronics. Measurements will take place in one of our ultra-low temperature setups.

Good knowledge of superconductivity is required. Certainly helpful is knowledge of semiconductor nanostructures and low temperature physics, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Two days ago, Luca asked me to help him figure out what’s going on with a patch for libav which he knew to be the right thing, but was acting up in a fashion he didn’t understand: on his computer, it increased the size of the final shared object by 80KiB — while this number is certainly not outlandish for a library such as libavcodec, it does seem odd at a first glance that a patch removing source code is increasing the final size of the executable code.

My first wild guess which (spoiler alert) turned out to be right, was that removing branches out of the functions let GCC optimize the function further and decide to inline it. But how to actually be sure? It’s time to get the right tools for the job: dev-ruby/ruby-elf, dev-util/dwarves and sys-devel/binutils enter the battlefield.

We’ve built libav with and without the patch on my server, and then rbelf-size told us more or less the same story:

If you wonder why it’s adding more code than we expected, it’s because there are other places where functions have been deleted by the patch, causing some reductions there. Now we know that the three functions that the patch deleted did remove some code, but five other functions added 4KiB each. It’s time to find out why.

A common way to do this is to generate the assembly file (which GCC usually does not represent explicitly) to compare the two — due to the size of the dsputil translation unit, this turned out to be completely pointless — just the changes in the jump labels cause the whole file to be rewritten. So we rely instead on objdump, which allows us to get a full disassembly of the executable section of the object file:

As you can see, trying a diff between these two files is going to be pointless, first because of the size of the disassembled files, and second because each instruction has its address offset prefixed, which means that every single line will be different. So what to do? Well, first of all it’s useful to isolate just one of the functions, so that we reduce the scope of the changes to check — I found out that there is a nice way to do so, and it involves relying on the way the function is declared in the file:

Okay that’s much better — but it’s still a lot of code to sift through, can’t we reduce it further? Well, actually… yes. My original guess was that some function was inlined; so let’s check for that. If a function is not inlined, it has to be called, the instruction for which, in this context, is callq. So let’s check if there are changes in the calls that happen:

The size is the same and it’s 274 bytes. The increase is 4330 bytes, which is around 15 times more than the size of the single function — what does that mean then? Well, a quick look around shows this piece of code:

This is just a fragment but you can see that there are two calls to the function, followed by a pair of xor and jne instructions — which is the basic harness of a loop. Which means the function gets called multiple times. Knowing that this function is involved in 10-bit processing, it becomes likely that the function gets called twice per bit, or something along those lines — remove the call overhead (as the function is inlined) and you can see how twenty copies of that small function per caller account for the 4KiB.

So my guess was right, but incomplete: GCC not only inlined the function, but it also unrolled the loop, probably doing constant propagation in the process.
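The callq comparison described above can be approximated with a small filter over objdump’s output; the function name and disassembly lines below are invented for illustration, not the real dsputil listings:

```python
import re

# Count call instructions in a disassembly listing; fewer callq lines
# after a patch is a strong hint that GCC inlined the callee.
def count_calls(disassembly):
    return len(re.findall(r"\bcallq\b", disassembly))

# invented objdump -d style fragments for a hypothetical <helper>
before = """\
  4005d0:  e8 ab 00 00 00   callq  400680 <helper>
  4005d8:  e8 a3 00 00 00   callq  400680 <helper>
"""
after = """\
  4005d0:  31 c0            xor    %eax,%eax
  4005d2:  75 fc            jne    4005d0
"""
print(count_calls(before), count_calls(after))
```

In practice this would be fed with `objdump -d file.o` output restricted to one function; when the count drops to zero while the function body grows, inlining (possibly with unrolling, as here) is the likely culprit.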

Is this it? Almost — the next step was to get some benchmark data when using the code, which was mostly Luca’s work (and I have next to no info on how he did that, to be entirely honest); the results on my server have been inconclusive, as the 2% loss that he originally registered was gone in further testing and would, anyway, be vastly within the margin of error of a non-dedicated system — no, we weren’t using full-blown profiling tools for that.

While we don’t have any sound numbers about it, what we’re worried about is for cache-starved architectures, such as Intel Atom, where the unrolling and inlining can easily cause performance loss, rather than gain — which is why all us developers facepalm in front of people using -funroll-all-loops and similar. I guess we’ll have to find an Atom system to do this kind of runs on…

Well, all of MythTV 0.26 is now in portage, masked for testing for a few days.

If anyone is interested now is a good time to give it a try and report any issues you find. If all is quiet the masks will come off and we’ll be up-to-date (including all patches up to a few days ago).

Thanks to all who have contributed to the 0.26 bug. I can also happily report that I’m running Gentoo on my mythtv front-end, which should help me with maintaining things. MiniMyth is a great distro, but it has made it difficult to keep the front- and back-ends in sync.

You probably got used to reading about me updating Typo at this point — the last update I wrote about was almost a year ago, when I updated to Typo 6, using Rails 3 instead of 2. Then you probably remember my rant about what I would like of my blog …

Well, yesterday I was finally able to get rid of the last Rails 2.3 application that was running on my server, as a nuisance of a customer’s contract finally expired, and so I was finally able to update Typo without having to worry about the Ruby 1.8 compatibility that was dropped upstream. Indeed, since the other two Ruby applications running on this server are Harvester for Planet Multimedia and a custom application I wrote for a customer, the first not using Rails at all and the second written to work on both 1.8 and 1.9 alike, I was able to move from having three separate Rails slots installed (2.3, 3.0 and 3.1) to having only the latest 3.2, which means that security issues are no longer a problem for the short term either.

The new Typo version solves some of the smaller issues I’ve got with it before — starting from the way it uses Rails (now no longer requiring a single micro-version, but accepting any version after 3.2.11), and the correct dependency on the new addressable. At the same time it does not solve some of the most long-standing issues, as it insists on using the obsolete coderay 0.9 instead of the new 1.0 series.

So let’s go in order: the new version of Typo brings in another bunch of gems — which means I have to package a few more. One of them is fog, which includes a long list of dependencies, most of which are from the same author, and reminds me of how bad the dependency issue is with Ruby packages. Luckily for me, even though the dependency is declared mandatory, a quick bit of hacking around got rid of it just fine — okay, hacking might be too much; it really is just a matter of removing it from the Gemfile and then removing the require statement for it, done.

For the moment I used the gem command to install the required packages — some of them are actually available in Hans’s overlay, and I’ll be reviewing them soon (I was supposed to do that tonight, but my job got in the way) to add them to the main tree. A few more require me to write them from scratch, so I’ll spend a few days on that soon. I have other things in my TODO pipeline, but I’ll try to cover as many bases as I can.

While I’m not sure if this update finally solves the issue of posts being randomly marked as password-protected, at least this version fixes the header in the content admin view, which means that I can finally see what drafts I have pending — and the default view also changed to show me the available drafts to finish, which is great for my workflow. I haven’t looked yet at whether the planning of future-published posts works, but I’ll wait for that.

My idea of forking Typo is still on, even though it might be more like a set of changes on top of it instead of a full-on fork… we’ll see.

It has been a long time since I wrote anything on here, I am still alive and kicking! 2012 was another roller coaster of a year, with many good and bad things happening. Louise and I got our green cards early on in the year (massive thanks to my employer), which was great after having lived in the US for over five years now. We started house hunting a few months after that, which was an adventure and a half.

As we were in the process of looking for a house I was promoted to technical leader at Kitware, and I continue to work on our Open Chemistry project. We ended up falling in love with the first house we found, and found a great realtor who took us back there for a second look. We then learned how different buying a house in the US is versus England, but after several rounds of negotiations we came to an agreement. We had a very long wait for completion, but that all proceeded well in the end.

As we moved out of the place we had been renting for the last three years, we found out just how bad some landlords can be about returning security deposits… that is still ongoing and has not been a fun process. We never rented in England, but many friends have assured us that this isn't that unusual. Our move actually went very smoothly though, and we have some great friends who helped us with some of the heavy lifting. We have been learning what it is like to own a home in the country, with a well, a septic system, a large garden, etc. The learning curve has been a little steep at times! We attended two weddings (I was a groomsman in one) with two amazing groups of friends - it was a pleasure to be part of the day for two great friends.

I made a few guest blog posts, which I will try to talk more about in another post, and attended some great conferences including the ACS, Semantic Physical Science and Supercomputing. Our Avogadro paper was published, and recently appeared in final form (I will write more about this too). I finally cancelled my dedicated server (an old Gentoo box), which I originally took on when I was consulting in England; this was very disruptive in the end, and I didn't have a complete backup of all data when it was taken offline. This caused lots of disruption to email (sorry if I never got back to you). I moved to a cloud server with Rackspace in the end, after playing with a few alternatives. I was retired as a Gentoo developer too (totally missed those emails); it was a great experience being a developer, and I still value many of the friendships formed during that time. My passion for packaging has waned in recent years, and I tend to use Arch Linux more now (although I still love lots of things about Gentoo).

Just before Xmas our ten year old German Shepherd developed a sudden paralysis in his back legs and had to be put down. It was pretty devastating, after having him from when he was 12 weeks old. He joined our little family just after we got our own place in England, he had five great years in England and another five in the US. He was with me for so much of my life (a degree, loss of my brother, marriage, loss of my sister, moving to another country, birth of our first child, getting a "real" job). We had family over for the holidays as we call them over here (Xmas and New Year back home), which was great but we may not have been the best of company after having just lost our dog.

I think I skipped lots of stuff too, but it was quite a year! Hoping for more of a steady ride this year to say the least.

During the recent Gentoo mudslinging about libav and FFmpeg, one of the contention points was the fact that FFmpeg boasts more “security fixes” than libav over time. Any security-conscious developer knows that assessing the general reliability of a piece of software requires much more than just counting CVEs — they only get assigned when bugs are reported as security issues by somebody.

I ended up learning this first-hand. In August 2005 I was just fixing a few warnings in xine-ui, with nothing in mind but cleaning up the build log — that patch ended up in Gentoo, but no new release was made of xine-ui itself. Come April 2006, a security researcher flagged those same problems as a security issue — we were already covered, for the most part, but other distros weren’t. The bugs had been fixed upstream, but not released, simply because nobody had considered them security issues up to that point. My lesson was that issues that might lead to security problems are always better looked at by a security expert — that’s why I originally started working with oCERT to verify issues within xine.

So which kinds of issues are considered security issues? In this case the problem was a format string — an obvious one, as it can theoretically allow an attacker, under given conditions, to write to arbitrary memory. The same is true for buffer overflows, obviously. But what about unbound reads, which in my experience form the vast majority of crashes out there? I would say that there are two widely different problems with them that can be categorized as security issues: information disclosure (if the attacker can decide where to read, and can get useful information out of said read — such as the current base address of the executable or of the process’s libraries, which can be used later), and good old crashes — which for security purposes are called DoS: Denial of Service.
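The classic case is C’s printf family, but the same class of bug exists in higher-level languages too. As an illustration, here is a hypothetical Python sketch (the SECRET_KEY value and the User class are made up for the example) of an attacker-controlled format string turning into information disclosure:

```python
# Hypothetical example: SECRET_KEY and User are invented for illustration.
SECRET_KEY = "hunter2"

class User:
    def __init__(self, name):
        self.name = name

def render(template, user):
    # Vulnerable: the *format string itself* comes from the attacker.
    return template.format(user=user)

# Legitimate use:
print(render("Hello {user.name}!", User("alice")))

# Attack: walk attribute references to reach the module globals and
# read data the template was never meant to expose.
print(render("{user.__init__.__globals__[SECRET_KEY]}", User("x")))
```

The fix is the same as in C: never let untrusted input become the format, only ever an argument to it.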

Not all DoS are crashes, and not all crashes are DoS, though! In particular, you can DoS an app without crashing it, by deadlocking it or otherwise exhausting some scarce resource — this is the preferred method for DoS on servers; indeed, this is how the Slowloris attack on Apache worked: it used up all the connection handlers and caused the server to stop answering legitimate clients; a crash would be much easier to identify and recover from, which is why DoS on servers are rarely full-blown crashes. Conversely, crashes cannot realistically be called DoS when they are user-initiated without a third party intervening. It might sound silly, and remind you of an old joke – “Doctor, doctor, if I do this it hurts!” “Stop doing that, then!” – but it’s the case: if going into the app’s preferences and clicking something causes the app to crash, then there’s a bug which is a crash but is not a DoS.

This brings us to one of the biggest problems with calling something a DoS: it might be a DoS in one use case and not in another — let’s use libav as an example. It’s easy to argue that any crash in the libraries while decoding a stream is a DoS, as it’s common to download a file and try to play it; said file is the element in the equation that comes from a possible attacker, and anything that can happen due to its decoding is a security risk. Is it possible to argue that a crash in an encoding path is a DoS? Well, from a client’s perspective, it’s not common — it’s still very possible that an attacker can trick you into downloading a file and re-encoding it, but it’s a less common situation, and in my experience most of the encoding-related crashes are triggered only with a given subset of parameters, which makes them more difficult for an attacker to exploit than a decoder-side DoS. Also, if the crash only happens when using avconv, it’s hard to declare it a DoS, taking into consideration that at most it should crash the encoding process, and that’s about it.

Let’s now turn the tables: instead of being the average user downloading movies from The Pirate Bay, we’re a video streaming service, such as YouTube, Vimeo or the like — but without the right expertise, which means that a DoS on our application is actually a big deal. In this situation, assuming your users control the streams that get encoded, you’re dealing with an input source that is untrusted, which means that you’re vulnerable to crashes both in the decoder and in the encoder as real-world DoS attacks. As you can see, what earlier required explicit user interaction, and was hard to consider a full-blown DoS, now becomes much more important.

This kind of issue is why languages like Ada were created, and why many people out there insist that higher-level languages like Java, Python and Ruby are more secure than C, thanks to the existence of exceptions for error handling, which makes it easier to have fail-safe conditions that should solve the problem of DoS — the fact that there are just as many security issues in software written in high-level languages as in low-level ones shows how false that concept is nowadays. While it does save you from some kinds of crashes, it also creates issues by increasing the sheer area of exposure: the more layers, the more code is involved in execution, and that can actually increase the chance of somebody finding an issue in one of them.

The area of exposure is important for software like libav too: if you enable every possible format under the sun for input and output, you’re enabling a whole lot of code, and you can suffer from a whole lot of vulnerabilities — if you’re targeting a much more reduced audience, for instance if you’re using it on a device that has to output only H.264 video and Speex audio, you can easily turn everything else off, and reduce your exposure many times over. You can probably see now why, even when using libav or ffmpeg as a backend, Chrome does not support all the input files that they support; it would just be too difficult to validate all the possible code out there, while it’s feasible to validate a subset of it.
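As a sketch of what that reduction can look like, the configure script of FFmpeg and libav lets you turn everything off and whitelist individual components. The component names below are illustrative; check ./configure --help and the --list-decoders style options for the real names in your release:

```sh
# Build only what the device needs, nothing else.
# (Component names are illustrative; verify against ./configure --help.)
./configure --disable-everything \
    --enable-decoder=h264 --enable-decoder=speex \
    --enable-demuxer=mov --enable-parser=h264
```

Everything not whitelisted is simply never compiled in, so its bugs cannot be reached at all.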

This should have established the terms of what to consider a DoS, and when — so how do you handle it? Well, the first problem is to identify the crashes; you can either wait for an attack to happen and react to it, or proactively try to identify crash situations — and obviously the latter is what you should do most of the time. Unfortunately, this requires the use of many different techniques, none of which yields a 100% positive result; even the combined results are rarely sufficient to argue that a piece of software is 100% safe from crashes and other possible security issues.

One obvious thing is that you just have to make sure the code does not allow things that should not happen, like incredibly high values or negative ones. This requires manual work and analysis of the code, which is usually handled through code reviews – on this topic there is a nice article by Mozilla’s David Humphrey – at least for what concerns libav. But this by itself is not enough, as many times it’s values that are allowed by the specs, but not handled properly, that cause the crashes. How to deal with those? One suggestion is fuzzing, a technique in which a program is executed receiving, as input, a file that has been corrupted starting from a valid one. A few years ago, a round of FFmpeg/VLC bugs were filed after Sam Hocevar released, and started using, his zzuf tool (which should be in Portage, if you want to look at it).

Unfortunately, fuzzing, just like using particular exemplars of attacks found in the wild, has one big drawback – one that we could call “zenish” – you can easily forget that you’re looking at a piece of code that is crashing on invalid input, and just go and resolve that one small issue. Do you remember the calibre security shenanigan? It’s the same thing: if you only fix the one bit that is crashing on you without looking at the whole situation, an attacker, or a security researcher, can just look around and spot the next piece that is going to break on you. This is the one issue that Luca, the others in the libav project and I get vocal about when we’re told that we don’t pay attention to security, only because it takes us a little longer to come up with a (proper) fix — well, this, and the fact that for most of the CVEs marked as resolved by FFmpeg we have had no way to verify the fixes for ourselves, because we weren’t given access to the samples for reproducing the crashes; this changed after the last VDD, at least for those coming from Google. If I’m not mistaken, at least one of them ended up with a different, complete fix rather than the partial bandaid put in by our peers at FFmpeg.

Testsuites of valid configurations and valid files are not useful to identify these problems, as valid files should not cause a DoS anyway. On the other hand, a completely shot-in-the-dark fuzzing technique like zzuf may or may not help, depending on how much time you can pour into looking at the failures. Some years ago I read an interesting book, Fuzzing: Brute Force Vulnerability Discovery by Sutton, Greene and Amini. It was a very interesting read, although last I checked, the software it pointed to was mostly dead in the water. I should probably get back to it and see if there are new forks of that software that we could use to help get there.

It’s also important to note that it’s not just a matter of causing a crash: you need to save the sample that caused the issue, and you need to make sure that it’s actually crashing. Even an “all okay” result might not actually be a pass, as in some cases a corrupted file could cause a buffer overflow that, in a standard setup, would let the software keep running — Hardened toolchains and other tools make it easier to deal with that kind of issue, at least…
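To make the “save the crashing sample” point concrete, here is a small hypothetical harness in Python. It reimplements zzuf-style byte flipping rather than calling zzuf itself, and the command you pass (e.g. an avconv invocation) is a placeholder; the key point is that every fuzzed file whose target dies on a signal is kept, seed and all, so the crash stays reproducible:

```python
import random
import subprocess

def fuzz_bytes(data, ratio, seed):
    """Flip a fraction of the input bytes, reproducibly (zzuf-style)."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for i in range(len(buf)):
        if rng.random() < ratio:
            buf[i] ^= rng.randrange(1, 256)  # XOR with a nonzero value
    return bytes(buf)

def hunt_crashes(command, sample_path, runs=100, ratio=0.004):
    """Run `command` on fuzzed copies of a valid sample; keep crashers."""
    data = open(sample_path, "rb").read()
    crashers = []
    for seed in range(runs):
        path = "fuzzed-%04d.bin" % seed
        with open(path, "wb") as f:
            f.write(fuzz_bytes(data, ratio, seed))
        result = subprocess.run(command + [path], capture_output=True)
        if result.returncode < 0:
            # Negative return code: the process died on a signal
            # (e.g. SIGSEGV) -- this is the sample worth saving.
            crashers.append((path, -result.returncode))
    return crashers
```

Because the corruption is derived from a seed, a crasher can always be regenerated later for a proper analysis instead of a one-off fix.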

the task was to combine do and re by nils frahm into a new work. i chopped “re” into loops, and rearranged sections by sight and sound for a deliberately loose feel. the resulting piece is entirely unquantized, with percussion generated from the piano/pedal action sounds of “do” set under the “re” arrangement. the perc was performed with an mpd18 midi controller in real time, and then arranged by dragging individual hits with a mouse. since the original piano recordings were improvised, tempo fluctuates at around 70bpm, and i didn’t want to lock myself into anything tighter when creating the downtempo beats.

normally i’d program everything to a strict grid with renoise, but for this project, i used ardour3 (available in my overlay) almost exclusively, except for a bit of sample preparation in renoise and audacity. the faint background pads/strings were created with paulstretch. my ardour3 session was filled with hundreds of samples, each one placed by hand and nudged around to keep the jazzy feel, as seen in this screenshot:

this is a very rough rework — no FX, detailed mixing/mastering, or complicated tricks. i ran outta time to do all the subtle things i usually do. instead, i spent all my time & effort on the arrangement and vibe. the minimal treatment worked better than everything i’d planned.

Many moons ago, we acquired an old RolandDG DXY-800A plotter. This is an early A3 plotter which took 8 pens, driven via either the parallel port or the serial port.

It came with software to use with an old DOS-version of AutoCAD. I also remember using it with QBasic. We had the handbook, still do, somewhere, if only I could lay my hands on it. Either that, or on the QBasic code I used to use with the thing, as that QBasic code did exercise most of the functionality.

Today I dusted it off, wondering if I could get it working again. I had a look around. The thing was not difficult to drive from what I recalled, and indeed, I found the first pointer in a small configuration file for Eagle PCB.

The magic commands:

H: Go home.
Jn: Select pen n (1..8).
Mxxxx,yyyy: Move (with pen up) to position xxx.x, yyy.y mm from the lower left corner.
Dxxxx,yyyy: Draw (with pen down) a line to position xxx.x, yyy.y mm.

Okay, this misses the text features, circle drawing and hatching, but it’s a good start. Everything else can be emulated with the above anyway, and that’s something I’d have to do regardless, since there was only one font and, as I seem to recall, no ability to draw ellipses.

Inkscape has the ability to export HPGL, so I had a look at what the format looks like. Turns out, the two are really easy to convert, and Inkscape HPGL is entirely line drawing commands.

hpgl2roland.pl is a quick and nasty script which takes Inkscape-generated HPGL, and outputs RolandDG plotter language. It’s crude, only understands a small subset of HPGL, but it’s a start.
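The actual script is Perl, but the core of the conversion is small enough to sketch. Here is the same idea in Python, assuming (from the descriptions above) that Inkscape’s HPGL uses 1/40 mm units while the DXY’s M/D commands take 1/10 mm, hence the divide-by-four:

```python
import re

def hpgl_to_roland(hpgl):
    """Translate a small subset of HPGL (SPn, PU, PD) into RolandDG DXY
    commands (J, M, D), dividing coordinates by 4 to go from HPGL's
    1/40 mm units to the DXY's 1/10 mm units (assumed scaling)."""
    out = ["H"]                        # start by sending the pen home
    for cmd in hpgl.replace("\n", ";").split(";"):
        cmd = cmd.strip()
        m = re.match(r"(PU|PD|SP)\s*([\d\s,.-]*)$", cmd)
        if not m:
            continue                   # ignore the rest of HPGL (IN, etc.)
        op, args = m.group(1), m.group(2).strip()
        if op == "SP" and args:
            out.append("J%d" % int(args))          # pen select
        elif op in ("PU", "PD") and args:
            nums = [int(round(float(n) / 4))
                    for n in re.split(r"[\s,]+", args)]
            letter = "M" if op == "PU" else "D"    # pen up: move, down: draw
            for x, y in zip(nums[::2], nums[1::2]):
                out.append("%s%d,%d" % (letter, x, y))
    return "\n".join(out)
```

For example, hpgl_to_roland("IN;SP1;PU0,0;PD400,400;") yields H, J1, M0,0 and D100,100, one command per line.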

It’s not that hard to build a GHC for some exotic target if you have a gcc for it.

mkGmpDerivedConstants needs to be more cross-compiler friendly. This should be really simple to implement, as it only queries data sizes/offsets; I think autotools is already able to do that.

GHC should be able to run llvm with the correct -mtriple in the cross-compiler case. That way we would get a registerised cross-compiler.

Some TODOs:

In order to coexist with the native compiler, GHC should stop mangling the --target=ia64-unknown-linux-gnu option passed by the user, and name the resulting compiler ia64-unknown-linux-gnu-ghc, not ia64-unknown-linux-ghc.

That way I could have many flavours of the compiler for one target. For example, I would like to have x86_64-pc-linux-gnu-ghc as a registerised compiler and x86_64-unknown-linux-gnu-ghc as an unregisterised one.

In yesterday’s post I noted some things about init scripts: small niceties that init scripts in Gentoo should implement to work properly, and how to solve the issue of migrating pidfiles to /run. Today I’d like to add a few more notes on what I wish all daemons out there implemented, at the very least.

First of all, while some people prefer the daemon not to fork and background by itself, I honestly prefer that it does — it makes so many things so much easier. But if you fork, wait until the forked process has completed initialization before exiting! The reason I say this is that, unfortunately, it’s common for a daemon to start up, fork, then load its configuration file and find out there’s a mistake… leading to an init script that thinks the daemon started properly, while no process is left running. In init scripts, --wait lets you tell the start-stop-daemon tool to wait a moment to see whether the daemon could start at all, but it’s not so nice, because you have to find the correct wait time empirically, and in almost every case you’re going to wait longer than needed.
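One common way to get the “exit only after initialization succeeded” behaviour is a pipe between the foreground and the background process, so the exit status the init system sees reflects the real startup result. A hedged Python sketch (the helper name is mine, not any daemon library’s API):

```python
import os
import sys

def daemonize_after_init(initialize):
    """Fork into the background, but make the foreground process exit
    with 0/1 only after the child has reported whether `initialize`
    (config parsing, socket binding, ...) actually succeeded."""
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid > 0:                         # foreground: wait for the report
        os.close(write_end)
        report = os.read(read_end, 1)   # blocks until the child decides
        sys.exit(0 if report == b"+" else 1)
    os.close(read_end)                  # child: do the risky setup first
    try:
        initialize()
        os.write(write_end, b"+")       # tell the parent we are good
    except Exception:
        os.write(write_end, b"-")       # startup failed: parent exits 1
        raise
    finally:
        os.close(write_end)
```

With something like this, start-stop-daemon gets a meaningful exit status immediately, with no empirical --wait timeout needed.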

If you do background by yourself, please make sure that you create a pidfile telling the init system which PID to signal to stop — and if you have such a pidfile, please do not make it configurable in the configuration file; instead, set a compiled-in default and optionally allow an override at runtime. The runtime override is especially welcome if your software is supposed to have multiple instances configured on the same box — a single pidfile would then conflict. Not having it set in a configuration file means that you no longer need to hack up a parser for that file just to know what the user wants; you can rely on either the default or your override.

Also, if you do intend to support multiple instances of the same daemon, make sure that you allow multiple configuration files to be passed on the command line. This simplifies the whole handling of multiple instances a lot, and should be mandatory in that situation. Make sure you don’t re-use paths in that case either.
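Taken together, the last two requests amount to very little code. A hypothetical Python daemon’s option handling (all names made up) with a compiled-in pidfile default, a runtime-only override and a repeatable --config might look like:

```python
import argparse

DEFAULT_PIDFILE = "/run/mydaemon.pid"   # compiled-in default

def parse_args(argv):
    parser = argparse.ArgumentParser(prog="mydaemon")
    # Overridable only at runtime, never via the configuration file, so
    # the init script can predict the path without parsing any config.
    parser.add_argument("--pidfile", default=DEFAULT_PIDFILE)
    # Repeatable, so one binary can serve several instances, each with
    # its own configuration file (and its own --pidfile).
    parser.add_argument("--config", action="append", default=[])
    return parser.parse_args(argv)
```

Each instance then gets its own `--config foo.conf --pidfile /run/foo.pid` pair from its init script, and nothing ever collides.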

If you emit messages, you should also make sure that they can be sent to syslog — please do not force, or even default, everything to log files! We have tons of syslog implementations, and with syslog at least the user does not have to guess which one of many files is going to receive the messages from your last service start — at this point you have probably guessed that there are a few things I hope to rectify in Munin 2.1.

I’m pretty sure that there are other concerns that could come up, but for now I guess this would be enough for me to have a much simpler life as an init script maintainer.

I tried not to get into this discussion, mostly because it will degenerate into a mudslinging contest.

Alexis did not take well the fact that Tomáš changed the default provider for libavcodec and related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš: he is just a downstream by transitivity (LibreOffice uses GStreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what has been done in Libav, and has a more lax approach to what goes into the tree and how.

- Libav has FATE running on most architectures, many of them running Gentoo, usually on real hardware.

- FFmpeg has an older FATE with fewer architectures, many of them qemu emulations.

- Libav defines the API.

- FFmpeg follows, adding bits here and there to “diversify”.

- Libav has a major release per season, and minor releases when needed.

- FFmpeg releases a lot, touting a lot of *Security*Fixes* (usually old code from ancient times eventually fixed).

- Libav does care about crashes and fixes them, but does not claim every crash is a security issue.

- FFmpeg goes to great lengths to add MORE features, no matter what (including picking WIP branches from my personal GitHub and merging them before they are ready…).

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something that works everywhere, you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.

Exactly two years ago, a group consisting of the majority of the FFmpeg developers took over its maintainership. While I didn’t like the methods, I’m not an insider, so my opinion stops here, especially since, if you pay attention to who was involved, Luca was part of it. Luca has been a Gentoo developer since before most of us even used Gentoo, and I must admit I’ve never seen him heat up any discussion, rather the contrary: it has always been a pleasure to work with him. What happened next, after a lot of turmoil, is that the developers split into two groups: libav, formed by the “secessionists”, and FFmpeg.

Good, so what do we choose now? One of the first things done on the libav side was to “clean up” the API with the 0.7 release, meaning we had to fix almost all of its consumers: a bad idea if you want wide adoption of a library that has a history of frequently changing its API and breaking all its consumers. Meanwhile, FFmpeg maintained two branches: the 0.7 branch, compatible with the old API, and the 0.8 one with the new API. The two branches were supposed to be identical except for the API change. On my side the choice was easy: thanks but no thanks, sir, I’ll stay with FFmpeg. FFmpeg, while having its own development and improvements, has been doing daily merges of all libav changes, often with an extra pass of review and checks, so I can even benefit from all the improvements from libav while using FFmpeg.

So why should we use libav? I don’t know. Some projects use libav within their release process, so they are likely to be much better tested with libav than with FFmpeg. However, until I see real bugs, I consider this pure supposition, and I have yet to see real facts. On the other hand, I can see lots of false claims, usually without any kind of reference. Tomáš claims that there is no failure that is libav-specific; well, some bugs tend to say the contrary, and have been open for some time (I’ll get back to XBMC later). Another false claim is that FFmpeg-1.1 will have the same failures as libav-9: since Diego made a tinderbox run for libav-9, I made the tests for FFmpeg 1.1 and made the failures block our old FFmpeg 0.11 tracker. If you click the links, you will see that the number of blockers is much smaller (something like 2/3) for the FFmpeg tracker. Another false claim I have seen is that there will be libav-only packages: I have yet to see one; the example I got as an answer is gst-plugins-libav, which unfortunately is in the same shape for both implementations.

In theory FFmpeg-1.1 and libav-9 should be on par, but in practice, after almost two years of disjoint development, small differences have started to accumulate. One of them is the audio resampling library: while libswresample has been in FFmpeg since the 0.9 series, the libav developers did not want it and wrote another one, with a very similar API, called libavresample, which appeared in libav-9. This smells badly of NIH syndrome but, to be fair, it’s not the first time such things happen: libav and FFmpeg developers tend to write their own codecs instead of wrapping external libraries, and usually achieve better results. The audio resampling library is why XBMC being broken with libav is, at least partly, my fault: while cleaning up its usage of the FFmpeg/libav API, I made it use the public API for audio resampling, initially with libswresample, but made sure it worked with libavresample from libav. At that time, this would have meant requiring libav git master, since libav-9 was not even close to being released, so there was no point in trying to make it compatible with such a moving target. libswresample from FFmpeg had been present since the 0.9 series, released more than a year ago. Meanwhile, XBMC-12 has entered its release process, meaning it will probably not work with libav easily. Too late, too bad.

Another important issue I’ve raised is security holes: nowadays, we are much more exposed to them. Instead of having to send a specially crafted video to my victim and make him open it with the right program, I only have to embed it in an HTML5 webpage and wait. This is why I am a bit concerned that security issues fixed 7 months ago in FFmpeg have only been fixed with the recently released libav-0.8.5. I’ve been told that these issues are just crashes and have been fixed in a better way in libav: this is probably true, but I still consider the delay huge for such an important component of modern systems, and, thanks to FFmpeg merging from libav, the better fix will also land in FFmpeg. I have also been told that this will improve on the libav side, but again, I want to see facts rather than claims.

As a conclusion: why was the default implementation changed? Some people seem to like the other one better and use false claims to force their preference. Is it a good idea for our users? Today, I don’t think so (remember: FFmpeg merges from libav and adds its own improvements); maybe later, when we have some clear evidence that libav is better (the improvements might be buggy, or the merges might lead to subtle bugs). Will I fight to get the default back to FFmpeg? No. I use it, will continue to use and maintain it, and will support people who want the default back to FFmpeg, but that’s about it.

One of the biggest problems with maintaining software in Gentoo where a daemon is involved is dealing with init scripts. And it’s not really a problem with just Gentoo, as almost every distribution or operating system has its own way of handling init scripts. I guess this is one of the nice ideas behind systemd: having a single standard for daemons to start, stop and reload is definitely a positive goal.

Even if I’m not sure myself whether I want the whole init system to be collapsed into a single one for every single operating system out there, there is at least a chance that upstream developers will provide a standard command line for daemons, so that init scripts no longer have to write a hundred lines of pre-start setup commands. Unfortunately, I don’t have much faith that this is going to change any time soon.

Anyway, let’s leave the daemons themselves alone, as that’s a topic for a post of its own and I don’t care to write it now. What remains is the init script itself. Now, while it seems quite a few people didn’t know about this before, OpenRC has supported, almost since forever, a more declarative approach to init scripts: you set just a few variables, such as command, pidfile and similar, and the script works, as long as the daemon follows the most generic approach. Full documentation for this kind of script is in the runscript man page, and I won’t bore you with the details here.
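For reference, the declarative style really does boil down to a handful of variables; a sketch of such a script (the daemon name and paths are made up, not a real package — see man runscript for the full list of recognised variables):

```sh
#!/sbin/runscript
# Hypothetical declarative OpenRC init script.
command="/usr/sbin/mydaemon"
command_args="--config /etc/mydaemon.conf"
pidfile="/run/mydaemon.pid"

depend() {
    need net
    use logger
}
```

No start() or stop() functions are needed at all; OpenRC derives them from the variables.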

Besides the declaration of what to start, there are a few more issues that are now handled to different degrees depending on the init script, rather than in a comprehensive and seamless fashion. Unfortunately, I’m afraid this is likely going to stay that way for a long time, as I’m sure some of my fellow developers won’t care to implement the trickiest parts that can be implemented, but at least I can try to give a few ideas of what I found out while spending time on said init scripts.

The number one issue is of course the need to create, beforehand, the directories the daemon will use, if they are to be stored on temporary filesystems. One of the first changes that came with the whole systemd movement was to create /run and use it to store pidfiles, locks and other volatile runtime files, mounting it as tmpfs at boot. This was something I was very interested in to begin with, because I was doing something similar before, on the router with a CF card (through an EIDE adapter) as the harddisk, to avoid writing to it at runtime. Unfortunately, more than a year later, we still have lots of ebuilds out there that expect /var/run paths to be maintained from the merge to the start of the daemon. At least now there’s enough consensus about it that I can easily open bugs for them instead of just ignoring them.

For daemons that need /var/run it’s relatively easy to deal with the missing path; while a few scripts do use mkdir, chown and chmod to handle the creation of the missing directories, there is a really neat helper to take care of it, checkpath — which is also documented in the aforementioned man page for runscript. But there are many other places where the two directories are used which are not initiated by an init script at all. One of these happens to be my dear Munin’s cron script, used by the master — what to do then?
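In an init script, checkpath typically lives in start_pre(); a minimal sketch for the Munin situation described above (mode and ownership here are illustrative, not Munin’s actual settings):

```sh
start_pre() {
    # Recreate the runtime directory on the tmpfs before starting.
    checkpath -d -m 0755 -o munin:munin /var/run/munin
}
```

This runs on every start, so the directory reappears even after a reboot wiped the tmpfs.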

This has actually been among the biggest issues in the transition. It was the original reason why screen was changed to save its sockets in the users’ home directories instead of the previous /var/run/screen path — with relatively bad results all over, including me deciding to just move to tmux. In Munin, I decided to solve the issue by installing a script in /etc/local.d, so that on start the /var/run/munin directory would be created… but this is far from a decent, standard way to handle things. Luckily, there actually is a way to solve this that has been standardised, to some extent — it’s called tmpfiles.d and was also introduced by systemd. While OpenRC implements the same basics, because of the differences between the two init systems not all of the features are implemented, in particular the automatic cleanup of the files on a running system — on the other hand, that feature is not fundamental for the needs of either Munin or screen.
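For illustration, a tmpfiles.d entry for the Munin case is a one-liner (the mode and ownership here are my guesses, not Munin’s actual settings); the columns are type, path, mode, user, group and age:

```
# type path       mode user  group age
d      /run/munin 0755 munin munin -
```

Dropped into a tmpfiles.d directory, this tells both systemd and OpenRC’s implementation to create the directory at boot.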

There is an issue with the way these files should be installed, though. For most packages, the correct path to install to would be /usr/lib/tmpfiles.d, but the problem with this is that on a multilib system you could easily end up with both /usr/lib and /usr/lib64 as directories, causing Portage’s symlink protection to kick in. I’d like to have a good solution to this, but honestly, right now I don’t.

So we have the tools at our disposal; what remains to be done, then? Well, there’s still one issue: which path should we use? Should we keep /var/run to be compatible, or should we just decide that /run is a good idea and run with it? My gut says the latter at this point, but it means that we have to migrate quite a few things over time. I have actually started porting my packages to use /run directly, starting from pcsc-lite (since I had to bump it to 1.8.8 yesterday anyway) — Munin will come with support for tmpfiles.d in 2.0.11 (unfortunately, it’s unlikely I’ll be able to add support for it upstream in that release, but in Gentoo it’ll be there). Some more of my daemons will be updated as I bump them, as I have already spent quite a lot of time on those init scripts to hone down some more issues that I’ll delineate in a moment.

For some (but not all!) of the daemons it’s actually possible to decide the pidfile location on the command line — for those, the solution for handling the move to the new path is dead easy: you just make sure to pass something equivalent to -p ${pidfile} in the script, then change the pidfile variable, and you’re done. Unfortunately that’s not always an option, as the pidfile path can be either hardcoded into the compiled program, or read from a configuration file (the latter is the case for Munin). In the first case, no big deal: you change the configuration of the package, or worst case you patch the software, make it use the new path, update the init script and you’re done… in the latter case, though, we have trouble at hand.

If the location of the pidfile is to be found in a configuration file, even if you change the configuration file that gets installed, you can’t count on the user actually updating it, which means your init script can easily get out of sync with the configuration file. Of course there’s a way to work around this, and that is to actually get the pidfile path from the configuration file itself, which is what I do in the munin-node script. To do so, you need to look at the syntax of the configuration file. In the case of Munin, the file is just a set of key/value pairs separated by whitespace, which means a simple awk call can give you the data you need. In some other cases, the configuration file syntax is so messed up that getting the data out of it is impossible without writing a full-blown parser (which is not worth it). In that case you have to rely on the user to actually tell you where the pidfile is stored, and that’s quite unreliable, but okay.
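For a whitespace-separated key/value format like Munin’s (munin-node.conf stores the path under the pid_file key), the extraction really is a one-line awk call; wrapped in a helper:

```shell
# Print the pid_file value from a Munin-style key/value config file.
get_pidfile() {
    awk '$1 == "pid_file" { print $2 }' "$1"
}
```

The init script can then set pidfile=$(get_pidfile /etc/munin/munin-node.conf) and stay in sync with whatever the user configured.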

There is of course one thing that needs to be said: what happens when the pidfile changes in the configuration between the start and the stop? If you’re reading the pidfile path out of a configuration file, it is possible that the user, or the ebuild, changed it in between, causing quite big headaches when trying to restart the service. Unfortunately, my users experienced this when I changed Munin’s default from /var/run/munin/munin-node.pid to /var/run/munin-node.pid — the change was possible because the node itself runs as root, and then drops privileges when running the plugins, so there is no reason to wait for the subdirectory; and since most nodes will not have the master running, /var/run/munin wouldn’t be useful there at all. As I said, though, it would cause the started node to use one pidfile path, and the init script another, failing to stop the service before starting it anew.

Luckily, William corrected this, although the fix is still not out — the next OpenRC release will save some of the variables used at start time, allowing this kind of problem to be nipped in the bud without having to add tons of workarounds to the init scripts. It will require some changes in the functions for graceful reloading, but that’s, in retrospect, a minor detail.

There are a few more niceties that you could add to init scripts in Gentoo to make them more foolproof and more reliable, but I suppose this covers the main points that we’re hitting nowadays. I suppose for me it’s just going to be time to list and review all the init scripts I maintain, which are quite a few.

Right at the start, the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into the Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all these experiments. Some time ago we picked Pd0.3Ni0.7 as contact material, since the palladium generates only a low resistance between the nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties. Three methods are used to obtain data: SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
Journal of Applied Physics 113, 034303 (2013); arXiv:1208.2163 (PDF[*])

[*] Copyright American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.

UPDATE: Added some basic migration instructions to the bottom.
UPDATE2: Removed mplayer incompatibility mention. Mplayer-1.1 works with system libav.

As the summary says the default media codec provider for new installs will be libav instead of ffmpeg.

This change is being done for various reasons, like matching the default with Fedora and Debian, and the fact that some high-profile projects (i.e. ones a sh*tload of people use) will probably be libav-only. One example is gst-libav, which in turn is required by libreoffice-4, due for release in about a month. To cause the least pain for users, we decided to move from ffmpeg to libav as the default library.

This change won't affect your current installs at all, but we would like to ask you to try migrating to libav, test it, and report any issues. That way, if things change in the future and we are forced to make libav the only implementation for everyone, you won't be left in the dark screaming about your suddenly missing features.

What to do when some package does not build with libav but ffmpeg is fine

As far as I can tell, there are no such packages left around (might be my blindness, so do not take my word for it).

So if you encounter any package not building with libav, just open a bug report on Bugzilla, assign it to the media-video team, and add lu_zero[at]gentoo.org to CC to be sure he really takes a sneaky look to fix it. If you want to fix the issue yourself, it gets even better: you write the patch, open the bug in our bugzie, and someone will include it. The patch should also be sent upstream for inclusion, so we don't have to keep patches in the tree for a long time.

What should I do when I have issues with libav and need features that are in ffmpeg but not in libav

It's easier than fixing bugs about failing packages. Just nag lu_zero (mail hidden somewhere in this post ;-)) and read this.

So when is this stuff going to ruin my day?

The switch in the tree and a news item informing all users of media-video/ffmpeg will be created at the end of January or in early February, unless something really bad happens while you guys test it now.

I feel lucky and I want to switch right away so I can ruin your day by reporting bugs

Great, I am really happy you want to contribute. The libav switch is pretty easy, as there are only two things to keep in mind.

You have to sync your USE flags between virtual/ffmpeg and the newly-to-be-switched media-video/libav. The easiest way is to edit your package.use and replace the media-video/ffmpeg line with a media-video/libav one.

Then one would go straight for emerge libav, but there is one more caveat: libav has split out the libpostproc library, while ffmpeg still uses the internal one. Code-wise they are most probably equal, but you have to account for it, so just call emerge with both libraries:
emerge -1v libav libpostproc

If this succeeds, you have to revdep-rebuild the packages you have, or use @preserved-rebuild from portage-2.2 to rebuild all the reverse dependencies of libav.
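The steps above can be sketched as a short shell session; the sed rename is demonstrated on a scratch copy first, and the USE flags shown are purely illustrative (adjust to whatever you have set):

```shell
# Dry-run the package.use rename on a scratch copy (illustrative flags):
mkdir -p /tmp/portage-demo
printf 'media-video/ffmpeg threads encode x264\n' > /tmp/portage-demo/package.use
sed -i 's|^media-video/ffmpeg|media-video/libav|' /tmp/portage-demo/package.use
cat /tmp/portage-demo/package.use

# Once the rename looks right, do it for real:
#   sed -i 's|^media-video/ffmpeg|media-video/libav|' /etc/portage/package.use
#   emerge -1v libav libpostproc   # libpostproc is split out of libav
#   revdep-rebuild                 # or: emerge @preserved-rebuild (portage-2.2)
```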

Many times, when I had to set up make.conf on systems with particular architectures, I was in doubt about which --jobs value is best.
The handbook suggests ${core} + 1, but since I'm curious, I wanted to test it myself to be sure this is right.

To make a good test we need a package with a respectable build system that respects make parallelization and takes at least a few minutes to compile; with packages that compile in a few seconds, we are unable to track the effective difference. kde-base/kdelibs is, in my opinion, perfect.

If you are on an architecture where kde-base/kdelibs is unavailable, just switch to another cmake-based package.

Now, download best_makeopts from my overlay. Below is an explanation of what the script does, along with various suggestions.

You need to compile the package on a tmpfs filesystem; I'm assuming you have /tmp mounted as a tmpfs too.

You need to have the tarball of the package on a tmpfs, because if you have a slow disk, it may take more time.

You need to switch your governor to performance.

You need to be sure you don’t have strange EMERGE_DEFAULT_OPTS.

You need to add ‘-B’ because we don’t want to include the time of the installation.

You need to drop the existing caches before compiling.

As you can see, the loop will emerge the same package with MAKEOPTS from 1 to 10. If you have, for example, a single-core machine, running the loop from 1 to 4 is enough.

Please don't use the CPU for other purposes during the test, and if you can, stop all services and run the test from a tty; you will see the time for every merge.
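For readers who don't want to fetch the script, here is a minimal sketch of the loop it runs; the emerge invocation (commented) is what actually gets timed, with a no-op standing in for it so the sketch itself can run anywhere:

```shell
# Time the same build once per MAKEOPTS value; on real hardware you would
# also drop caches between runs:  sync; echo 3 > /proc/sys/vm/drop_caches
runs=0
for j in 1 2 3 4; do
    start=$(date +%s%N)
    MAKEOPTS="-j${j}" sh -c ':'   # stand-in for: emerge -1vB kde-base/kdelibs
    end=$(date +%s%N)
    echo "MAKEOPTS=-j${j} took $(( (end - start) / 1000000 )) ms"
    runs=$((runs + 1))
done
```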

I tested this script on ~20 different machines and, in the majority of cases, the best optimization was ${core} or, more exactly, ${threads} of your CPU.

Conclusion:
From the handbook:

A good choice is the number of CPUs (or CPU cores) in your system plus one, but this guideline isn’t always perfect.

I don't know who, years ago, suggested ${core} + 1 in the handbook, and I don't want to trigger a flame. I'm just saying that ${core} + 1 is not the best optimization for me, and the test confirms the part: "but this guideline isn't always perfect".

In all cases ${threads} + ${X} was slower than just ${threads}, so don't use -j20 if you have a dual-core CPU.

Also, I'm not saying to use ${threads}; I'm just saying feel free to run your own tests to see what the best optimization is.

If you have suggestions to improve the functionality of the script or you think that this script is wrong, feel free to comment or leave an email.

This is a tricky review to write because I'm having a very bad time finishing this book. While it did start well, and I was actually interested in the idea behind it, it quickly turned nasty, in my mind. But let's start from the top, and let me try to write a review of a book I'm not sure I'll be able to finish without feeling ill.

I found the book, Amusing Ourselves to Death, through a blog post in one of the Planets I follow, and I found the premise extremely interesting: has the coming of the show business era meant that people are so submerged in entertainment that they lose sight of the significance of news? Unfortunately, as I said, the book itself, to me, does not make the point properly, as it exaggerates to the point of no return. While the book was written in 1985 – which means it has no way to account for the way the Web changed media once again – the introduction, written by the author's son, proposes that it is still relevant today. I find that proposition unrealistic. It goes as far as stating that most of the students who were told to read the book agreed with it — I would venture a guess that most of them didn't want to disagree with their teacher.

First of all, the author is a typography snob, and that can easily be seen when he spends pages and pages extolling the printed word while taking digs at the previous "medium" of the spoken word. And while I do agree with one of the big points in the book (the fact that different forms make discourse change — after all, my blog posts have a different tone from Autotools Mythbuster, and from my LWN articles), I do not think that a different tone makes the discourse any more or less valid. Indeed, this is why I find it extremely absurd that, for Wikipedia, I'm unreliable when writing on this blog, but perfectly reliable the moment I write Autotools Mythbuster.

Now, if you were to take the first half of the book and title it something like "History of the printed word in early American history", it would be a very good and enlightening read. It helps a lot to frame the history of America in context, especially compared to Europe — I'm not much of an expert in history, but it's interesting to note how in America the religious organisations themselves sponsored literacy, while in Europe, Catholicism tried its best to keep people within the confines of illiteracy.

Unfortunately, he then starts telling how evil the telegraph was for bringing in news from remote places that people, in the author's opinion, have no interest in and should have no right to know… and the same kind of evilness is pointed out in photography (including the idea that photography has no context because there is no way to take a photograph out of context — which is utterly false, as many of us have seen during the reporting of recent wars; okay, it's all become much easier thanks to Photoshop, but it was by no means impossible in the '80s).

Honestly, while I can understand having a foregone conclusion in mind, after explaining how people changed the way they speak with the advent of TV, no longer caring about syntax frills and the like, trying to say that on TV the messages are drowned in a bunch of irrelevant frills is … a bit senseless. Just as it is senseless to me to say that typography is "pure message" without even acknowledging that presentation is an issue for typography as much as for TV; after all, we wouldn't have font designers otherwise.

While some things are definitely interesting to read – like the note about the use of pamphlets in early American history, which compare easily to blogs today – the book itself is a bust, because there is no premise of objectivity; it's just a long text finding reasons to reach the conclusion the author already had in mind… and that's not what I like to read.

Through both LWN and netzpolitik.org I just heard that Aaron Swartz has committed suicide. While watching his speech "How we stopped SOPA", his name rang a bell with me; I looked into my inbox and found that he and I once had a brief chat about html2text, a piece of free software of his that I was in touch with in the context of Gentoo Linux. So there is this software, his website, these past mails, this amazing talk, his political work that I didn't know about… and he's dead. It only takes a few minutes of watching the talk to get the feeling that this is a great loss to society.

When you work with Django, and especially with static files or other template tags, you realize that you have to include {% load staticfiles %} in all of your template files. This violates the DRY principle, because we have to repeat the {% load staticfiles %} template tag in each template file.

Let's give an example.

We have a base.html file which links some Javascript and CSS files from our static folder.

As you can see, I load staticfiles again in index.html. If I remove it, I will get this error: "TemplateSyntaxError at /, Invalid block tag 'static'". Unfortunately, even if we extend base.html, it will not inherit the load template tag from that file; staticfiles will not be loaded in index.html, which means our extra JavaScript file will not be loaded either.
The truth is that there is a hack-y way to do it. After a little research, I finally found a way to follow the DRY principle and avoid repeating the {% load staticfiles %} template tag in every template file.

Open one of the files that loads automatically at startup (settings.py, urls.py or models.py). I will use settings.py.
So we add the following to settings.py:
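The snippet itself appears to have been lost here; the usual trick in Django of that era was add_to_builtins (removed in Django 1.9, where the 'builtins' option of the TEMPLATES setting replaces it). A sketch of what the settings.py addition would look like, assuming a pre-1.9 Django:

```python
# settings.py -- register the staticfiles tag library as a builtin, so
# every template gets {% static %} without an explicit {% load staticfiles %}.
# Note: add_to_builtins was removed in Django 1.9; on newer versions use the
# 'builtins' key of OPTIONS in the TEMPLATES setting instead.
from django.template.base import add_to_builtins

add_to_builtins('django.contrib.staticfiles.templatetags.staticfiles')
```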

Passwords. No one likes them, but everybody needs them. If you are concerned about your online safety, you probably have unique passwords for your critical accounts and some common pattern for all the almost-useless accounts you create when browsing the web.

At first I used to save my passwords in a gpg encrypted file. Over time, however, I began using Firefox's and Chrome's password managers, mostly because of their awesome syncing capabilities and form auto-filling.

Unfortunately, convenience comes at a price. I ended up relying on the password managers a bit too much, using my password pattern all over the place.

Then it hit me: I had strayed too far. Although my main accounts were relatively safe (strong passwords, two-factor authentication), I had way too many weak passwords, synced on way too many devices, over syncing protocols of questionable security.

Looking for a better solution, I stumbled upon LastPass. Although LastPass uses an interesting security model, with passwords encrypted locally and a password generator that helps you maintain strong passwords for all your accounts, I didn't like depending on an external service for something so critical. Its UI also left something to be desired.

Then I found pass: a Unix command line tool that takes advantage of commonly used tools like gnupg and git to provide safe storage for your passwords and other critical information.

Pass' concept is simple: it creates one file for each of your passwords, which it then encrypts using gpg and your key. You can provide your own passwords or ask it to generate strong ones for you automatically.

When you need a password you can ask pass to print it on screen or copy it to the clipboard, ready for you to paste in the desired password field.

Pass can optionally use git, allowing you to track the history of your passwords and sync them easily among your systems. I have a Linode server, so I use that + gitolite to keep things synced.

Installation and usage of the tool is straightforward, with clean instructions and bash completion support that makes it even easier to use.

All this does come at a cost, since you lose the ability to auto-save passwords and fill out forms. But this is a small price to pay compared to the security benefits gained. I also love the fact that you can access your passwords with standard Unix tools in case of emergencies. The system is also useful for securely storing other critical information, like credit cards.

Pass is not for everyone, and most people would be fine using something like LastPass or KeePass, but if you're a Unix guy looking for a solid password management solution, pass may be what you're looking for.

Pass was written by zx2c4 (thanks!) and is available in Gentoo's Portage tree.

I spent 12 days in Greece. The Greek hospitality is superb; I could not ask for better friends in Greece. I first arrived in Thessaloniki and stayed there for a few nights. Then I went to Larissa and stayed with my friend and his family. There was a small communication barrier with his parents in this smaller town; they don't get too many tourists. However, I had a very nice Christmas there and it was nice to be with such great people over the holidays. I went to a namesday celebration. Even though I couldn't understand most of the conversations, they still welcomed me, gave me food and wine, and exchanged cultural information. Then I went to Athens, stayed in a hostel, and spent New Year's watching the fireworks over the Acropolis and the Parthenon. Cool experience! It was so great to be walking around the birthplace of "western ideals" – not the oldest civilization, but close. Some takeaway thoughts: 1) Greek hospitality is unlike anything I've experienced, really. I made sure I told everyone that they have an open door with me whenever we meet in "my new home" (meaning, I don't know when or where), 2) you cannot go hungry in Greece, especially when they are cooking for you! 3) the cafe culture is great, 4) I want to go back during the summer.

Of course, you will always find the not so nice parts. I got fooled by the old man scam, as seen here. Luckily, they only got 30€ from me, compared to some of the stories I’ve heard. Looking back on it, I just laugh at myself. Maybe I’ll be jaded towards a genuine experience in the future but, lesson learned. I don’t judge Athens by this one mishap, however.

I only have pictures of Athens since I had to buy a new camera… Pics here

the assignment was to record the sound of ice in a glass, and make something of it.

the track picture shows my lo-fi setup for the field recording segment. i balanced a logitech USB microphone (which came with the Rock Band game) on a box of herbal tea (to keep it off the increasingly wet kitchen table), and started dropping ice cubes into a glass tumbler. audible is the initial crack and flex of the tray, scrabbling for cubes, tossing them into the cup. i made a point of recording the different tone of cubes dropped into a glass of hot water. i also filled the cup with ice, then recorded the sound of water running into it from the kitchen tap. i liked this sound enough to begin the song with it.

i decided that my first song of 2013 should incorporate the piano, so with the ice cubes recorded, i sat down to improvise an appropriately wintry melody. the result is a simple two-minute minor motif. i turned to the ardour3beta to integrate the field recordings and the piano improvisation.

it’s been a while since i last used my strymon bluesky reverb pedal, so i figured i should use it for this project. i set up a feedback-free hardware effects loop using my NI Komplete Audio6 interface with the help of the #ardour IRC channel, and listened to the piano recording as it ran through fairly spacious settings on the BSR. (normal mode, room type, decay @ 3:00, predelay @ 11:00, low damp @ 4:00, high damp @ 8:00). with just a bit of “send” to the reverb unit, the piano really came to life.

i added a few more tracks in ardour for the ice cube snippets, with even more subtle audio sends to the BSR, and laid out the field recordings. i pulled them apart in several places, copying and pasting segments throughout the song; minimal treatment was needed to get a good balance of piano and ice.

Those who have met me, might notice I have a somewhat unusual taste in clothing. One thing I despise is having clothes that are heavily branded, especially when the local shops then charge top dollar for them.

Where hats are concerned, I'm fussy. I don't like the boring old varieties that abound in $2 shops everywhere. I prefer something unique.

The mugshot of me with my Vietnamese coolie hat is probably the one most people on the web know me by. I was all set to try and make one, and I had an idea how I might achieve it, bought some materials I thought might work, but then I happened to be walking down Brunswick Street in Brisbane’s Fortitude Valley and saw a shop selling them for $5 each.

I bought one and have been wearing it on and off ever since. Or rather, I bought one, it wore out, I was given one as a present, wore that out, got given two more. The one I have today is #4.

I find them quite comfortable, lightweight, and most importantly, they're cool and keep the sun off well. They are also one of the few full-brim designs that can accommodate wearing a pair of headphones or a headset underneath. Being cheap is a bonus. The downside? One is I find they're very divisive: people either love them or hate them — that said, I get more compliments than complaints. The other is that they try to take off with the slightest bit of wind, and they're quite bulky and somewhat fragile to stow.

I ride a bicycle to and from work, and so it's just not practical to transport. Hanging around my neck, I can guarantee it'll try to break free the moment I exceed 20km/h… and if I try to sit it on top of the helmet, it'll slide around and generally make a nuisance of itself.

Caps stow much more easily. Not as good sun protection, but they can still look good. I've got a few baseball caps, but they're boring and a tad uncomfortable. I particularly like the old vintage gatsby caps — often worn by the 1930s working class. A few years back, on my way to uni, I happened to stop by a St. Vinnies shop near Brisbane Arcade (sadly, they have closed and moved on) and saw a gatsby-style denim cap going for about $10. I bought it, and people commented that the style suited me. This one was a little big on me, but I was able to tweak it a bit to make it fit.

Fast forward to today: it is worn out — the stitching is good, but there are significant tears on the panelling and the embedded plastic in the peak is broken in several places. I looked around for a replacement, but alas, they're as rare as hen's teeth here in Brisbane, and no, I don't care for ordering overseas.

Down the road from where I live, I saw the local sports/fitness shop were selling those flat neoprene sun visors for about $10 each. That gave me an idea — could I buy one of these and use it as the basis of a new cap?

These things basically consist of a peak and headband, attached to a dome consisting of 8 panels. I took apart the old faithful and traced out the shape of one of the panels.

Now I already had the headband and peak sorted out from the sun visor I bought, these aren’t hard to manufacture from scratch either. I just needed to cut out some panels from suitable material and stitch them together to make the dome.

There are a couple of parameters one can experiment with that change the visual properties of the cap. Gatsby caps could be viewed as an early precursor to the modern baseball cap; the prime difference is the shape of the panels.

The above graphic is also available as a PDF or SVG image. The key measurements to note are A, which sets the head circumference, C which tweaks the amount of overhang, and D which sets the height of the dome.

The head circumference is calculated as ${panels}×${A}, so in the above case, 8 panels with a measurement of 80mm means a head circumference of 640mm. Hence why it never quite fitted me (58cm is about my size). I figured a measurement of about 75mm would do the trick.
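It's easy to slip a decimal in the sizing, so the arithmetic can be sanity-checked in a couple of lines (all figures in millimetres, taken from the text above):

```python
# Head circumference is simply panels * A (in mm).
panels = 8
print(panels * 80)  # original visor template: 640 mm, a size too big
print(panels * 75)  # adjusted A: 600 mm, much closer to a 58 cm head
```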

B and C are actually two of the three parameters that separate a gatsby from the more modern baseball cap. The other parameter is the length of the peak. A baseball cap sets these to make the overall shape much more triangular, increasing B to about half of D, and tweaking C to make the shape more spherical.

As for the overhang, I decided I’d increase this a bit, increasing C to about 105mm. I left measurements B and D alone, making a fairly flattish dome.

For each of these measurements, once you come up with values that you’re happy with, add about 10mm to A, C and D for the actual template measurements to give yourself a fabric margin with which to sew the panels together.

As for material, I didn’t have any denim around, but on my travels I saw an old towel that someone had left by the side of the road — likely an escapee. These caps back in the day would have been made with whatever material the maker had to hand. Brushed cotton, denim, suede leather, wool all are common materials. I figured this would be a cheap way to try the pattern out, and if it worked out, I’d then see about procuring some better material.

Below are the results, click on the images to enlarge. I found due to the fact that this was my first attempt, and I just roughly cut the panels from a hand-drawn template, the panels didn’t quite meet in the middle. This is hidden by making a small circular patch where the panels normally meet. Traditionally a button is sewn here. I sewed the patch from the underside so as to hide the edges of it.

Not bad for a first try, I note I didn’t quite get the panels aligned at dead centre, the seam between the front two is just slightly off centre by about 15mm. The design looks alright to my eye, so I might look around for some suede leather and see if I can make a dressier one for more formal occasions.

Async-signal-safe functions

A signal handler function must be very careful, since processing elsewhere may be interrupted at some arbitrary point in the execution of the program. POSIX has the concept of a "safe function". If a signal interrupts the execution of an unsafe function, and the handler calls an unsafe function, then the behavior of the program is undefined.

After that a list of safe functions follows, and one notable thing is that malloc and free are async-signal-unsafe!

I hit this issue while enabling tcmalloc's debugallocation for Chromium Debug builds. We have a StackDumpSignalHandler for tests, which prints a stack trace on various crashing signals for easier debugging. It's very useful, and worked fine for a pretty long while (which means that "but it works!" is not a valid argument for doing unsafe things).

Now when I enabled debugallocation, I noticed hangs triggered by the stack trace display. In one example, this stack trace:

generates SIGSEGV (tcmalloc::Abort). This is just debugallocation having stricter checks about usage of dynamically allocated memory. Now the StackDumpSignalHandler kicks in, and internally calls malloc. But we're already inside malloc code as you can see on the above stack trace (see frame @7, bold font), and re-entering it tries to take locks that are already held, resulting in a hang.

no dynamic memory, and that includes std::string and std::vector, which use it internally

no buffered stdio or iostreams, they are not async-signal-safe (that includes fflush)

custom code for number-to-string conversion that doesn't need dynamically allocated memory (snprintf is not on the list of safe functions as of POSIX.1-2008; it seems to work on a glibc-2.15-based system, but as said before this is not a good assumption to make); in this code I've named it itoa_r, and it supports both base-10 and base-16 conversions, and also negative numbers for base-10

warming up backtrace(3): now this is really tricky, and backtrace(3) itself is not whitelisted for being safe; in fact, on the very first call it does some memory allocations; for now I've just added a call to backtrace() from a context that is safe and happens before the signal handler may be executed; implementing backtrace(3) in a known-safe way would be another fun thing to do
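To illustrate the kind of conversion helper described above, here is a minimal sketch of an async-signal-safe itoa_r: no heap, no stdio, only a caller-supplied buffer. The name follows the post, but the signature and internals are my own guess, not Chromium's actual code.

```c
/* Async-signal-safe number-to-string conversion: uses only stack storage
 * and writes into a caller-provided buffer, so it is safe to call from a
 * signal handler (unlike snprintf, which POSIX does not whitelist). */
#include <stddef.h>

char *itoa_r(long value, char *buf, size_t buf_len, int base) {
    const char digits[] = "0123456789abcdef";
    char tmp[sizeof(long) * 8 + 1];  /* room for base-10 digits plus sign */
    size_t len = 0;
    int negative = 0;
    unsigned long v;

    if (buf_len == 0 || (base != 10 && base != 16))
        return NULL;

    if (base == 10 && value < 0) {   /* negatives only handled in base 10 */
        negative = 1;
        v = (unsigned long)(-(value + 1)) + 1;  /* avoids overflow on LONG_MIN */
    } else {
        v = (unsigned long)value;    /* base 16 shows two's complement */
    }

    do {                             /* emit digits least-significant first */
        tmp[len++] = digits[v % (unsigned long)base];
        v /= (unsigned long)base;
    } while (v != 0);

    if (negative)
        tmp[len++] = '-';

    if (len + 1 > buf_len)           /* buffer too small: fail, don't truncate */
        return NULL;

    for (size_t i = 0; i < len; ++i) /* reverse into the output buffer */
        buf[i] = tmp[len - 1 - i];
    buf[len] = '\0';
    return buf;
}
```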

Note that for the above, I've also added a unit test that triggers the deadlock scenario. This will hopefully catch cases where calling backtrace(3) leads to trouble.

Okay, here comes another post about Munin for those of you using this awesome monitoring solution (okay, I think I've been involved in upstream development more than I expected when Jeremy pointed me at it). While the main topic of this post is IPv6 support, I'd first like to spend a few words on the context of what's going on.

Munin in Gentoo has been slightly patched in the 2.0 series — most of the patches were sent upstream the moment they were introduced, and most of them have been merged for the following release. Some of them, though, were not merged into the 2.0 branch upstream: among them the one bringing my FreeIPMI plugin to replace the OpenIPMI plugins (or at least its first version), and those dealing with changes that wouldn't have been kosher for other distributions (namely, Debian) at this point.

But now Steve opened a new branch for 2.0, which means that the development branch (Munin does not use the master branch, for the simple logistic reason of having a master/ directory in Git, I suppose) is directed toward the 2.1 series instead. This meant not only that I could finally push some of my recent plugin rewrites, but also that I could make deeper changes, including rewriting the seven asterisk plugins into a single one, and working hard on the HTTP-based plugins (for web servers and web services) so that they use a shared backend, like SNMP. This actually completely solved an issue that, in Gentoo, we solved only partially before: my ModSecurity ruleset blacklists the default libwww-perl user agent, so with the partial fix, Munin advertises itself in the request; with the new code it also includes the name of the plugin currently making the request, so it's possible to know which requests belong to what.

Speaking of Asterisk, by the way, I have to thank Sysadminman for lending me a test server for working on said plugins — this not only got us the current new Asterisk plugin (7-in-1!) but also let me modify just a tad said seven plugins, so that instead of using Net::Telnet, I could just use IO::Socket::INET. This has been merged for 2.0, which in turn means that the next ebuild will have one less dependency, and one less USE flag — the asterisk flag for said ebuild only added the Net::Telnet dependency.

To the main topic: how did I get to IPv6 in Munin? Well, I was looking at which other plugins need to be converted to "modernity" – which to me means re-using as much code as possible, collapsing multiple plugins into one through multigraph, and supporting virtual nodes – and I found the squid plugins. This was interesting to me because I actually have one squid instance running, on the tinderbox host, to avoid direct connection to the network from the tinderboxes themselves. These plugins do not use libwww-perl like the other HTTP plugins, I suppose (but I can't be sure, for reasons I'll explain in a moment) because the cache://objects request that has to be made might or might not work with the noted library. Since, as I said, I have a squid instance, and these (multiple) plugins look exactly like the kind of rewrite target I was looking for, I started looking into them.

But once I started, I had a nasty surprise: my Squid instance only replies over IPv6, and that's intended (the tinderboxes are only assigned IPv6 addresses, which makes it easier for me to access them, and have no NAT to the outside, as I want to make sure that all network access is filtered through said proxy). Unfortunately, by default, libwww-perl does not support IPv6. And indeed, neither do most of the other plugins, including the Asterisk one I just rewrote, since they use IO::Socket::INET (instead of IO::Socket::INET6). A quick search around turned up this article — and then also this, which relates to IPv6 support in Perl core itself.

Unfortunately, even with the core itself supporting IPv6, libwww-perl seems to have different ideas, and that is a showstopper for me, I'm afraid. At least, I need to find a way to get libwww-perl to play nicely if I want to use it over IPv6 (yes, I'm going to work around this for the moment and just write the new squid plugins against IPv4). On the other hand, using IO::Socket::IP would probably solve the issue for the remaining parts of the node, and that would for sure at least give us better support. Even better, it might be possible to abstract this into a Munin::Plugin::Socket that falls back to whatever we need. As it is, right now it's a big question mark what we can do there.

So what can be said about the current status of IPv6 support in Munin? Well, the node uses Net::Server, which in turn does not use IO::Socket::IP, but rather IO::Socket::INET, or INET6 if installed — which basically means that the node itself will support IPv6 as long as INET6 is installed, and would call for using INET6 as well, instead of IO::Socket::IP — but the latter is the future and, for most people, will be part of the system anyway… The async support, in 2.0, will always use IPv4 to connect to the local node. This is not much of a problem, as Steve is working on merging the node and the async daemon into a single entity, which makes the most sense. Basically it means that in 2.1, all nodes will be spooled, instead of what we have right now.

The master, of course, also supports IPv6 – via IO::Socket::INET6; yet another nail in the coffin of IO::Socket::IP? Maybe. – This covers all the communication between the two main components of Munin, and could be enough to declare it fully IPv6 compatible, which is what 2.0 is saying. But alas, this is not the case yet. On an interesting note, the fact that Munin currently supports arbitrary commands as transports, as long as they provide an I/O interface to the socket, makes its IPv6 support quite moot: not only do you just need an IPv6-capable SSH to handle it, you could probably even use SCTP instead of TCP simply by using a hacked-up netcat! I'm not sure if monitoring would gain anything from SCTP, although I guess it might overcome some of the overhead related to establishing the connection, but… well, it's a different story.

Of course, Munin’s own framework is only half of what has to support IPv6 for it to be properly supported; the heart of Munin is the plugins, which means that if they don’t support IPv6, we’re dead in the water. Perl plugins, as noted above, have quite a few issues with finding the right combination of modules for supporting IPv6. Bash plugins, and indeed plugins in any other language, support IPv6 as well as the underlying tools do — indeed, even though libwww-perl does not work with IPv6, plugins written with wget work out of the box on an IPv6-capable wget… but of course, the gains we get from using Perl are major enough that you don’t want to go that route.
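To make the comparison concrete, here is a minimal sketch of what such a wget-based plugin could look like; the URL, field name and graph title are invented for illustration and are not taken from any real Munin plugin.

```shell
#!/bin/sh
# Hypothetical sketch of a wget-based Munin plugin. Since wget itself
# speaks IPv6, the plugin is IPv6-capable with no extra work.
URL="${URL:-http://localhost/}"

munin_config() {
    echo "graph_title HTTP fetch time"
    echo "time.label seconds"
}

munin_fetch() {
    start=$(date +%s)
    # -T 2 -t 1: short timeout, single try, so the plugin never hangs
    if wget -q -T 2 -t 1 -O /dev/null "$URL" 2>/dev/null; then
        echo "time.value $(( $(date +%s) - start ))"
    else
        echo "time.value U"   # Munin's convention for an unknown value
    fi
}

case "${1:-fetch}" in
    config) munin_config ;;
    *)      munin_fetch ;;
esac
```

Munin calls a plugin with the `config` argument to learn the graph layout, and with no argument to fetch the values.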

All in all, I think what’s going to happen is that as soon as I’m done with the weekend’s work (which is quite a bit, since Friday was filled with a couple of server failures, and with me finding out that one of my backups was not working as intended) I’ll prepare a branch and see how much of IO::Socket::IP we can leverage, and whether wrapping around it would help us with the new plugins. So we’ll see where this leads; maybe 2.1 will really be 100% IPv6 compatible…

Nowadays I see lots of new blog posts about how to contribute to open source projects, so I decided to write one about how to contribute to Gentoo Linux and become a vital part of the project.

Whenever we talk about Gentoo, my colleagues at university tell me that they cannot install it because it is too difficult for them, or that they are not ready to install and configure it because they lack the experience, so they eventually give up. Other colleagues tell me they want to contribute to Gentoo but don’t know where to start. That’s why I wrote this blog post: to give some guidelines to those who want to contribute.

To help and contribute to Gentoo, you don’t have to know how to code or be a super-duper Linux guru. Of course, code is the core of any open source project, but there are ways to contribute without programming. The requirements are just two things: a Gentoo installation and the will to help.

Community

Gentoo, like all other FOSS projects, is built on volunteer effort. The pillar of every FOSS project is its community; without it, Gentoo wouldn’t exist. Even someone who doesn’t know how to code can contribute to and learn from the project’s community.

Forums: Join our forums and help other users with their problems. It is also a good opportunity to learn more about Gentoo.

Mailing Lists: Subscribe to our mailing lists and follow the latest community and development news of the project. You can also help users on the relevant lists, or discuss with Gentoo developers.

IRC: Join our IRC channels. Help new users with their issues, discuss with users and developers, and express your opinion about new features and technical issues of the project. Make sure you read our Code of Conduct first.

Planets: Follow our planet and read blog posts from developers about Gentoo. Interesting conversations between users and developers often continue in the comments.

Promote: After you get some experience with the project, promote your favourite distro (Gentoo, of course) by writing blog posts and articles in forums and on sites related to open source. You can also spread the word in your local Linux users group and at your university.

Participate in Events: Most of the Gentoo project teams hold monthly meetings, which take place in #gentoo-meetings. There is an ‘open floor’ at the end of each meeting where users can express their opinion.

Documentation

Gentoo has always been known for the wide variety and quality of its documentation. It covers many aspects of Linux: desktop, software, security, and more, and much of it is not strictly Gentoo-specific. That is why Gentoo documentation is so successful, and why users of other Linux distributions use it too. You can be part of this effort and improve the documentation.

Wiki: The wiki is our newest project, and there are lots of ways to help here. Add new articles about topics you would like to see covered (and have knowledge of, of course) and want to share with other Gentoo users. Improving and expanding existing articles is another good way to help (avoid copy-pasting from other sources on the net). All users are encouraged to help; the wiki is open to everyone. Use it responsibly, because your posts will affect the Gentoo users who try to follow your guides.

Translations: If English is not your native language, translating the wiki and documentation is a very good way to help users who don’t know English and want to join the community. Translating is a good way to contribute and to expand the Gentoo community.

(bonus) Write articles on your blog: If you find a configuration, a tool, or a new solution to a problem that saved your life in the Gentoo world, don’t be afraid to share it with other users.

Development ( Code )

As I said, code is the core of any software project. So if you have some knowledge of shell scripting and programming, you are welcome to join the team. With small steps you can gain more experience with the project and contribute your own features and patches.

Bugs: Every FOSS project has its own bug tracking system, and Gentoo has its Bugzilla. That is where we report our issues: build and runtime failures, kernel problems, Gentoo tools issues, stabilization requests. You can start contributing by confirming and reproducing bugs, then trying to offer solutions and fixes (patches are welcome). So feel free to report new bugs to our Bugzilla. In addition, there are requests to add or update (version bump*) ebuilds. Instead of requesting new ebuilds and version bumps, you can also write and submit your own ebuilds to our Bugzilla so that a Gentoo developer can add them to the Portage tree. Try picking up a bug from the maintainer-wanted alias. If you need a review of your ebuild, #gentoo-dev-help is the right place for it.

* Please avoid 0day bump requests.
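If you have never seen one, a minimal ebuild skeleton looks roughly like the sketch below; every name, URL and value here is a placeholder for illustration, not a real package — the Gentoo Development Guide is the authoritative reference for the format.

```shell
# Hypothetical skeleton: foo-1.0.ebuild — all values are placeholders.
EAPI=5

DESCRIPTION="One-line description of the package"
HOMEPAGE="http://example.org/foo"
SRC_URI="http://example.org/foo/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

DEPEND=""
RDEPEND="${DEPEND}"
```

For a simple autotools-based package this can be enough: the default phase functions handle unpacking, configuring, compiling and installing.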

Arch Tester: An Arch Tester (a.k.a. AT) is a trustworthy user capable of testing an application to determine its stability. Arch Testers should have a good understanding of how ebuilds work and of bash scripting, and should test lots of packages on their arch. You can become an AT for the x86 and amd64 arches. The only requirement is a stable Gentoo box. Your goal will be to install and test packages from the testing branch (~arch) and see whether they work on the stable arch. Then you can open a stabilization request on Bugzilla.

Sunrise Project: Sunrise is a starting point for Gentoo users to contribute. The Sunrise team encourages users to write ebuilds and makes sure that they follow Gentoo QA standards. Sunrise’s goal is to allow non-developers to maintain their own ebuilds. For questions, ask in #gentoo-sunrise on Freenode.

Proxy-maintaining: The goal of this team is to maintain abandoned (orphaned) packages in order to prevent the treecleaners from removing them. Pick some packages from the maintainer-needed list and begin to maintain them. For questions you can join #gentoo-dev-help.

Bugday: Bugday is an event that takes place in #gentoo-bugs on Freenode on the first weekend of every month. You can join, pick a bug and fix it. But keep in mind that every day is a bugday; you don’t have to wait for one to add your ebuilds and fix bugs.

Become a developer: After you have contributed a good amount and you think you can be an active and vital member of the project, you can start the process of becoming a developer. Ask a Gentoo developer to mentor you and help you fill in the ebuild and staff quizzes; the recruitment process is then completed with a live interview with a recruiter.

There are lots of Gentoo project teams that need new members and help. Everyone can contribute to Gentoo, whether they know how to code or not. Every piece of help is useful to the project.

I think I have covered the biggest part of Gentoo and how to contribute to it. I’ll wait for your comments; if you think I missed something, let me know. Fixes are always welcome.

I want to take a few moments from my well-deserved Christmas break to say thanks to all the donors who contributed to our last fundraiser. After 1.5 years, we’ve been able to hit our €5000 goal. This is a big, I mean really big, achievement for a small (I am not so sure now) but awesome distro like ours.

We’ve always wanted to bring Gentoo to everyone, make this awesome distro available on laptops, servers and of course, desktops without the need to compile, without the need of a compiler! It turns out that we’re getting there.

So, the biggest part of the “getting there” strategy was to implement a proper binary package manager and to start automating the distro’s development, maintenance and release processes.
Even though Entropy is in continuous development, we’ve got to the point where it’s reliable enough. Now we must push Sabayon even farther.

Let me keep the development ideas I had for a separate blog post and tell you here what’s been done, what we’re going to do and what we still need in 2013.

First things first: last year we bought a new and shiny build server, kindly hosted by the University of Trento, Italy. It is a 2U rack machine with dual octa-core Opteron 6128 CPUs, 48GB of RAM and, since earlier last year, 2x240GB Samsung 830 SSDs. In order to save (a lot of) money, I built the server myself and spent something like €2500 (including the SSDs). Take into consideration that hardware prices in the EU are much higher than in the US.

Now we’re left with something like 3000€ or more and we’re planning to do another round of infra upgrades, save some money for hardware replacement in case of failures, buy t-shirts and DVDs to give out at local events, etc.

So far, the whole Sabayon infrastructure is spread across three Italian universities and TOP-IX (see the bottom of http://www.sabayon.org for more details) and consists of four 1U rack servers and one 2U.
Whenever there’s a problem, I jump in a car and fix things myself (PSU, RAM, HDD/SSD failures) or kindly delegate the task to friends living closer by.

As you can imagine, it’s easy to burn €200-300 whenever there’s a problem, and while we have failover plans (to EC2), these come at a cost as well.
As you may have already realized, free software does not really come for free, especially for those who actually maintain it. Automation and scaling out across multiple people (the individuals involved in developing this distro) are the keys, the former in particular, because it reduces the impact of human error on the whole workflow.

As I mentioned above, I will prepare a separate blog post about what I mean by “automation”. For now, enjoy your Christmas holidays, the NYE celebrations and, why not, some gaming with Steam on Sabayon.

During the last weeks, I spent several nights playing with UEFI and its extension, UEFI SecureBoot. I must admit that I have mixed feelings about UEFI in general: on one hand, you have a nice and modern “BIOS replacement” that can boot .efi files with no need for a bootloader like GRUB; on the other hand, some hardware, and not even the most exotic, is not yet glitch-free. But that’s what happens with new stuff in general. I cannot go into much detail without drifting away from the main topic, but surely enough, a simple Google search about UEFI and Linux will point you to the problems I just mentioned.

But hey, what does it all mean for our beloved Gentoo-based distro named Sabayon? Since the DAILY ISO images dated 20121224, Sabayon can boot on UEFI systems, from DVD and USB (thanks to isohybrid --uefi) and, surprise surprise, with SecureBoot turned on! I am almost sure that we’re the first Linux distro supporting SecureBoot out of the box (update: using shim!) and I am very proud of it. This is of course thanks to Matthew Garrett’s shim UEFI loader, which chainloads our signed UEFI GRUB2 image.

The process is simple and works like this: you boot a UEFI-compatible Sabayon ISO image from DVD or USB; if SecureBoot is turned on, shim launches MokManager, which you can use to enroll our distro key, called sabayon.der and available in our image under the “SecureBoot” directory. Once you have enrolled the key, on some systems you’re forced to reboot (I had to on my shiny new Asus Zenbook UX32VD), but then the magic happens.

There is a tricky part however. Due to the way GRUB2 .efi images are generated (at install time, with settings depending on your partition layout and platform details), I have been forced to implement a nasty way to ensure that SecureBoot can still accept such platform-dependent images: our installer, Anaconda, now generates a hardware-specific SecureBoot keypair (private and public key), then our modified grub2-install version, automatically signs every .efi image it generates with that key, which is placed into the EFI Boot Partition under EFI/boot/sabayon ready to be enrolled by shim at the next boot.
This is sub-optimal, but after several days of messing around, it turned out that it’s the most reliable, cleanest and easiest way to support SecureBoot after install without disclosing our private key we use to sign our install media. Another advantage is that our distro keypair, once enrolled, will allow any Sabayon image to boot, while we still allow full control over the installed system to our users (by generating a platform-specific private key at install time).
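For illustration, the flow described above can be sketched roughly like this; the file names, paths and the sbsign invocation are my assumptions for the sake of the example, not Sabayon’s actual installer code.

```shell
#!/bin/sh
# Hedged sketch: generate a machine-local keypair, then sign a GRUB2 EFI
# image with it. All names/paths below are illustrative placeholders.
set -e
workdir=$(mktemp -d)

# 1. Self-signed keypair; MokManager expects the certificate in DER form.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Local SecureBoot key/" \
    -keyout "$workdir/local.key" -out "$workdir/local.pem" 2>/dev/null
openssl x509 -in "$workdir/local.pem" -outform DER -out "$workdir/local.der"

# 2. Sign a GRUB image (needs sbsigntool); guarded so the sketch is safe
#    to run on machines without sbsign or without a GRUB image at hand.
if command -v sbsign >/dev/null 2>&1 && [ -f grubx64.efi ]; then
    sbsign --key "$workdir/local.key" --cert "$workdir/local.pem" \
           --output grubx64.efi.signed grubx64.efi
fi
```

The DER certificate is the artifact you would place on the EFI partition for shim/MokManager to enroll at the next boot.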

SecureBoot is not that evil after all: my laptop came with Windows 8 (which I immediately wiped) and with SecureBoot disabled by default, and it lets anyone sign their own .efi binaries from the “BIOS”. I don’t see how my freedom could be affected by this, though.

And we start the new year with more Autotools Mythbusting — although in this case not with the help of upstream, who actually seemed to make it more difficult. What’s going on? Well, there have already been two releases, 1.13 and 1.13.1, and the changes are quite “interesting” — or, to use a different word, worrisome.

First of all, there are two releases because the first one (1.13) removed two macros (AM_CONFIG_HEADER and AM_PROG_CC_STDC) that had not been deprecated in the previous release. After a complaint from Paolo Bonzini, related to a patch to sed to get rid of the old macros, Stefano decided to re-introduce the macros as deprecated in 1.13.1. What does this tell me? Two things, mainly: first, that this release was rushed out without enough testing (its beta was released on December 19th!); second, that there is still no proper process for deprecating features, with clear deadlines for when they are to disappear.

This impression is further strengthened by some of the deprecations that appear in this new release, and by some of the removals that did not happen at all.

This release was supposed to be the first not to support the old-style name configure.in for the autoconf input script — if you have any project still using that name, you should update now. For some reason – none of which was discussed on the automake mailing list, unsurprisingly – it was decided to postpone this to the next release. It is still a perfectly good idea to rename the files now, but you can easily get pissed off if you felt pressured into getting ready for the new release, only for the requirement to be dropped without further notice.

Another removal that was supposed to happen in this release was the three-parameter AM_INIT_AUTOMAKE call, which substitutes for the parameters of AC_INIT instead of providing the automake options. The use of this form is, though, still common for packages that calculate their version number dynamically, for instance from the git repository itself, as it’s not possible to pass a variable version to AC_INIT. Now, instead of just marking the feature as deprecated but keeping it around, the situation is that the syntax is no longer documented but still usable. Which means I have to document it myself, as I find it extremely stupid to have a feature that is not documented anywhere but is found in the wild. It’s exactly for bad decisions like this that I started Autotools Mythbuster.

This is not much different from what happened with the AM_PROG_MKDIR_P macro, which was supposed to be deprecated/removed in 1.12, with the variables kept around a little longer. First it ended up completely messed up in 1.12, to the point that the first two releases of that series dropped the variables that were supposed to stay around; now the removal of the macro (but not of the variables) is scheduled for 1.14 because, among others, GNU gettext is still using it. The issue has been reported, and I also think it has been fixed in git already, but there is no new release, nor a date for when it will be fixed in one.

Then there are things that changed, or were introduced, in this release. First of all, silent rules are no longer optional — this basically means that the silent-rules option to the automake initialization is now a no-op, and the generated makefiles all have the silent-rules harness included (though not enabled by default, as usual). For me this meant a rewrite of the related section, as there is now one more variant of automake to support. Then there is finally support in aclocal for getting the macro directory from configure.ac — unfortunately, this meant I had to rewrite another section of my guide to account for it, and now both the old and the new method are documented there.

There are more notes in the NEWS file, and more things scheduled to appear in the next release, and I’ll try to cover them in Autotools Mythbuster over the next week or so — I expect that this time I’ll need to get into the details of Makefile.am, which I have tried to avoid until now. It’s quite a bit of work, but it might be what makes the difference for the many autotools users out there, so I really can’t avoid the task at this point. In the meantime, I welcome all support, be it through patches, suggestions, Flattr, Amazon or whatever else — the easiest way is to show the guide around: not only will it reduce the headaches for me and the other distribution packagers by having people actually know how to work with autotools, but the more people know about it, the more contributions are likely to come in. Writing Autotools Mythbuster is far from easy, and sometimes it’s not enjoyable at all, but I guess it’s for the best.

Finally, a word about the status of automake in Gentoo — I’m leaving it to Mike to bump the package in the tree; once he’s done that, I’ll prepare to run a tinderbox with it. Hopefully just rebuilding the reverse dependencies of automake will be enough, thanks to autotools.eclass. By the time the tinderbox is running, I hope to have all the possible failures covered in the guide, as that will make the job of my Gentoo peers much easier.

Just wanted to take a quick moment and wish everyone a Happy New Year! It’s that day when we can all start anew and make resolutions to do this or that (or to not do this or that). My resolution is to get back to updating my blog on a regular basis. I don’t know that it will be nearly every day like it was before I moved, but I’m going to try to post often (the backlog of topics is getting quite large).

Last Saturday evening, I sent an e-mail to a low-volume mailing list about IMA problems I’m facing. I wasn’t expecting a fast answer, of course, given the holidays, the weekend and the low volume of the list. But hey, it is the free software world, so I should cut it some slack, right?

Well, not really. I got a reply on Sunday — and not just an acknowledgement, but a to-the-point answer. It was immediately correct, explained why, and helped me figure things out further. And this is not a unique case in the free software world: because you are dealing with the developers and users who wrote the code you are running and testing, you get a bunch of very motivated souls, all looking at your request when they can, and giving input when they can.

Compare that to commercial support from bigger vendors: in those cases, your request probably gets read by a single person whose state of mind is difficult to know (though from the communication you often get the impression that they either couldn’t care less or are so swamped with requests that they cannot devote enough time to yours). In most cases, they check that the request contains the right amount of information in the right format in the right fields, or even ignore that you did all that right and just ask you for (the same) information again. And who knows how many times I have had to “state your business impact”.

Now, I know that commercial support from bigger vendors carries the burden of a huge overload of requests, but is that truly so different in the free software world? Mailing lists such as the Linux kernel mailing list (for kernel development) get hundreds (thousands?) of mails a day, and those with requests for feedback or with questions get a reply quite swiftly. Mailing lists for distribution users get a lot of traffic as well, and each and every request is handled with due care and responded to within a very good timeframe (24 hours or less most of the time, sometimes a few days if the user is running a strange or exotic environment that not everyone knows how to handle).

I think one of the biggest advantages of the free software world is that requests are public. This teaches the many users on those mailing lists and fora how to handle problems they haven’t seen before, and allows users to search for a problem before reporting it. Everybody wins. And because it is public, many users happily answer more and more questions, because they get the visibility (and acknowledgement) they deserve: they gain a position in that particular area that others respect, because everyone can see how much effort (and how many good results) they contributed earlier on.

So kudos to the free software world, a happy new year – and keep going forward.

Unfortunately, every time we have a big list to keyword or stabilize, repoman complains about missing packages. So, in this post I will give you a way to avoid this problem.

First, please download the batch-pretend script from my overlay.
I’m not a Python programmer, but I was able to edit the script made by Paweł Hajdan: I just deleted the Bugzilla commit part and made the script print the repoman full output if the list is not complete.
This script works only with =www-client/pybugz-0.9.3.

Now, to check whether repoman will complain about your list, run:

./batch-pretend.py --arch amd64 --repo /home/ago/gentoo-x86 -i /tmp/yourlist

where:

batch-pretend.py is the script (obviously);

amd64 is the arch that you want to check. You will use ~amd64 for the keywordreq;

/home/ago/gentoo-x86 is the local copy of the CVS;

/tmp/yourlist is the list which contains the packages;

A few useful notes:

If you want to check on several arches, you can use a simple for loop:

for i in amd64 x86 sparc ppc ; do
    ./batch-pretend.py --arch "${i}" --repo /home/ago/gentoo-x86 -i /tmp/yourlist
done

The script runs ekeyword, so it will touch your local CVS copy of gentoo-x86. If that is not your intention, make another copy and work there, or don’t forget to run cvs up -C afterwards.

Before doing this work, you need to run cvs up in the root of your local gentoo-x86 CVS checkout.

The list must be structured like this:

# bug #445900
=app-portage/eix-0.27.4
=www-client/pybugz-0.9.3
=dev-vcs/cvs-1.12.12-r6
#and so on..

I have been playing with Linux IMA/EVM on a Gentoo Hardened (with SELinux) system for a while and have been documenting what I think is interesting/necessary for Gentoo Linux users who want to use IMA/EVM as well. Note that the documentation of the Linux IMA/EVM project itself is very decent. It’s all on a single wiki page, but I learned a lot from it.

That being said, I have the impression that the method they suggest for generating IMA hashes for the entire system does not always work properly. It might be because of SELinux on my system, but for now I’m looking for another method that does seem to work well (I’m currently trying my luck with a find … -exec evmctl based command). But once the hashes are registered, it works pretty well (well, there’s probably a small SELinux problem where loading a new policy or updating the existing policies seems to generate stale rules and I have to reboot my system, but I’ll find the culprit of that soon ;-)
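For the curious, such a find-based pass could be sketched like this; the evmctl invocation and the target path are assumptions on my part (based on ima-evm-utils usage), not the exact command I’m testing.

```shell
#!/bin/sh
# Hedged sketch of a find-based IMA hashing pass; TARGET is a placeholder.
TARGET="${TARGET:-/usr/bin}"

# Hash every regular, root-owned file on a single filesystem; -xdev stops
# find from descending into /proc, /sys and other mounts.
hash_tree() {
    find "$1" -xdev -type f -uid 0 -exec evmctl ima_hash '{}' \; 2>/dev/null
}

# Only run for real when evmctl is actually installed and we are root
# (writing the security.ima extended attribute requires privileges).
if command -v evmctl >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    hash_tree "$TARGET"
fi
```

The point of driving evmctl from find rather than relying on a recursive mode is that the file selection (-type, -uid, -xdev) stays fully under your control.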

The IMA Guide has been updated to reflect recent findings, including how to load a custom policy, and I have also started on the EVM Guide. I think it’ll take me a day or three to finish off the rough edges, and then I’ll start creating a new SELinux node (KVM) image that users can use with the various Gentoo Hardened-supported technologies enabled (PaX, grSecurity, SELinux, IMA and EVM).

So if you’re curious about IMA/EVM and willing to try it out on Gentoo Linux, please have a look at those documents and see if they assist you (or confuse you even more).

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again. It was a combination of the portage metadata location moving and a small piece of sloppy code in part of the import script that made me roll my eyes. It’s fixed now, but the site still isn’t importing everything correctly.

I’ve been putting off working on it for so long, just because it’s a hard project to get to. Since I started working full-time as a sysadmin about two years ago, it killed off my hobby of tinkering with computers. My attitude shifted from “this is fun” to “I want this to work and not have me worry about it.” Comes with the territory, I guess. Not to say I don’t have fun — I do a lot of research at work, either related to existing projects or new stuff. There’s always something cool to look into. But then I come home and I’d rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn’t really have anything to hack on. Znurt went down, but I didn’t really have a good development environment anymore. On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason. Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff. Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix. Derp. That’s what I get for putting stuff off.

One thing I’ve found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me. At work, I only write shell scripts now (bash), and we use MySQL across the board. Postgres is an amazing database, and it’s remarkable how, even after not using it regularly in a while, it all comes back to me. I love that database. Everything about it is intuitive.

Anyway, I was looking through the import code, and doing some testing. I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts. Looking into it, I found that the MDB2 PEAR package has a memory leak in it, which kills the scripts because it just runs so many queries. So, I’m in the process of moving it to use PDO instead. I’ve wanted to look into using it for a while, and so far I like it, for the most part. Their fetch helper functions are pretty lame, and could use some obvious features like fetching one value and returning result sets in associative arrays, but it’s good. I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all. It’ll be faster, and importing the data from portage will be more accurate. I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest. Well, maybe I will, I dunno.

Either way, it’s kinda cool to get into the code again, and see what’s going on. I know I say this a lot with my projects, but it always amazes me when I go back and I realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database. I thought it’d be a simple case of reading metadata and throwing it in there, but there’s all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string. Fortunately, there’s easier ways to query that stuff now, so the goal is to get it more up to date.

It’s kinda cool working on a big code project again. I’d forgotten what it was like.

Adventurous users, contributors and developers can enable the Integrity Measurement Architecture subsystem in the Linux kernel with appraisal (since Linux kernel 3.7). In an attempt to support IMA (and EVM and other technologies) properly, the System Integrity subproject within Gentoo Hardened was launched a few months ago. And now that Linux kernel 3.7 is out (and stable) you can start enjoying this additional security feature.

With IMA (and IMA appraisal), you are able to protect your system from offline tampering: modifications made to your files while the system is offline will be detected, as their hash values will not match those stored in extended attributes (the extended attributes themselves being protected through digitally signed values using the EVM technology).

I’m working on integrating IMA (and later EVM) properly, which of course includes the necessary documentation: concepts and an IMA guide for starters, with more to follow. Be aware, though, that the integration is still in its infancy, but any questions and feedback are greatly appreciated, and bug reports (like bug 448872) are definitely welcome.

One year after my last blog post on this topic I encountered some minor difficulties with combining KDEPIM-4.4 (i.e. kmail1) and the KDE 4.10 betas. These difficulties are fixed now, and the combination seems to work fine again. Anyway, I became curious about the level of stability of Akonadi-based kmail2 once more. After all, I've been running it continuously over the year on my office desktop with a constant-on fast internet connection, and that works quite well. So, I gave it a fresh try on my laptop too. I deleted my Akonadi configuration and cache, switched to Akonadi mysql backend, updated kmail and the rest of KDEPIM without migrating to 4.9.4, and re-added my IMAP account from scratch (with "Enable offline mode"). The overall use case description is "laptop with large amount of cached files from IMAP account, fluctuating internet connectivity". Now, here are my impressions...

Reaction time is occasionally sluggish, but overall OK.

The progress indicator behaves a bit oddly: it checks the mail folders in seemingly random order and only knows 0% and 100% completion.

Random warning messages. It seems that kmail2 uses some features that "my" IMAP server does not understand. So, I'm getting frequent warning notifications that don't tell me anything and that I cannot do anything about. SET ANNOTATION, UID, ... Please either handle the errors, inform the user what exactly goes wrong, or ignore them in case they are irrelevant. Filed as a wish, bug 311265.

Network activity sometimes stops working. This sounds worse than it actually is, since in 99% of all cases Akonadi now detects fine that the connection to the server is broken (e.g., after suspend/resume, after switching to a different WLAN, or after enabling a VPN tunnel) and reconnects immediately. In the few remaining cases, re-starting the Akonadi server does the trick. You just have to know what to kick.

More problematic: while you're in online mode, any connectivity problem will make kmail "hang". Clicking on a message leads to an attempt to retrieve it, which requires some response from the network. As far as I can tell, all such requests are queued up for Akonadi to handle, and if one does not get a reply, the pending requests are stuck in the queue... OK, you might say that this is a typical use case for offline mode, but then I would have to be able to predict when exactly my train enters the tunnel... Compare this to kmail1's disconnected IMAP accounts, where regular syncing would be delayed, but local work remained unaffected.

Offline mode is a nice concept, and half a solution for the last problem, but unfortunately it does not work as expected. For mysterious reasons, a considerable part of the messages are not cached locally. I switch my account to offline mode, click on a message, and get an error message "Cannot fetch this in offline mode". Well, bummer. Bug 285935.

This may just be my personal taste, but once something goes wrong (e.g., a non-KDE-related crash, empty battery, ...) and the cache becomes corrupted somehow, I'd like to be able to do something from kmail2 without having to fiddle with akonadiconsole. A nice addition would be "Invalidate cache" in the context menu of a mail folder, or some sort of maintenance menu with semi-safe options.

Finally... something is definitely going wrong with PGP signatures; the signatures do not always verify on other mail clients. Tracking this down, it seems that CRLF is not preserved in messages, see bug 306005.

On the whole, for the laptop use case the "new" KDEPIM is now (4.9.4) more mature than the last time I tried. I'll keep it now and not downgrade again, but there are still some significant rough edges. The good thing is, the KDEPIM developers are aware of the above issues and debugging is going on, as you can see for example from this blog post by Alex Fiestas (whose use case pretty much mirrors my own).

We had a marathon with Alexandre (tetromino) over the last 2 weeks to get the Gnome 3.6 ebuilds using the python-r1 eclass variants, EAPI=5 and gstreamer-1. And now it is finally in gentoo-x86, unmasked.

You have probably read or heard about EAPI=5 and the new Python eclasses before, but in short, here is what they will give you:

the package manager will finally know for real which Python version is used by which package and be able to act on it accordingly (no more python-updater once all ebuilds are migrated)

EAPI=5 subslots will hopefully put an end to revdep-rebuild usage. I already saw them in action while bumping some of the telepathy packages: empathy was automatically rebuilt with no action needed beyond emerge -1 telepathy-logger.

No doubt lots of people are going to love this.
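For those who haven't seen the new syntax yet, here is a rough sketch of how the two pieces fit together in an ebuild. The package names and subslot values below are illustrative, not taken from the actual gentoo-x86 ebuilds:

```
# Hypothetical consumer ebuild fragment (illustrative values).
EAPI=5

# python-r1: declare which Python implementations the package supports,
# so the package manager knows exactly what it was built for.
PYTHON_COMPAT=( python2_7 )
inherit python-r1

# The := slot operator records the subslot of telepathy-logger that this
# package was built against; when telepathy-logger bumps its subslot
# (e.g. on an ABI change, SLOT="0/1" -> SLOT="0/2"), the package manager
# rebuilds this package automatically.
RDEPEND="net-im/telepathy-logger:=
	${PYTHON_DEPS}"
REQUIRED_USE="${PYTHON_REQUIRED_USE}"
```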

Gnome 3.6 probably still has a few rough edges, so please check bugzilla before filing new reports.