Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

In yesterday’s post I noted some things about init scripts: small niceties that init scripts should implement in Gentoo for them to work properly, and to solve the issue of migrating pid files to /run. Today I’d like to add a few more notes on what I wish all daemons out there implemented, at the very least.

First of all, while some people prefer the daemon not to fork and background by itself, I honestly prefer it to — it makes so many things so much easier. But if you fork, wait until the forked process has completed initialization before exiting! The reason I’m saying this is that, unfortunately, it’s common for a daemon to start up, fork, then load its configuration file and find out there’s a mistake … leading to a script that thinks the daemon started properly, while no process is left running. In init scripts, --wait allows you to tell the start-stop-daemon tool to wait for a moment and see whether the daemon could start at all, but it’s not so nice, because you have to find the correct wait time empirically, and in almost every case you’re going to wait longer than needed.
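A minimal sketch of the idea in C, assuming a hypothetical init_daemon() that loads the configuration and binds sockets: the parent only exits once the child reports the outcome of its initialization over a pipe, so the init system gets a meaningful exit status.

    /* Hedged sketch: "fork, but only exit when the child is ready".
     * init_daemon() is a placeholder for config loading, socket setup, etc. */
    #include <unistd.h>

    static int init_daemon(void) { /* load config, bind sockets, ... */ return 0; }

    int main(void) {
        int fds[2];
        char status = 1;

        if (pipe(fds) < 0)
            return 1;

        pid_t pid = fork();
        if (pid < 0)
            return 1;

        if (pid > 0) {                /* parent: wait for the child's verdict */
            close(fds[1]);
            (void)read(fds[0], &status, 1);
            return status;            /* 0 only if initialization succeeded */
        }

        close(fds[0]);                /* child: initialize *before* reporting */
        status = init_daemon() == 0 ? 0 : 1;
        (void)write(fds[1], &status, 1);
        close(fds[1]);
        if (status != 0)
            return status;

        setsid();                     /* detach and enter the main loop */
        /* ... daemon main loop ... */
        return 0;
    }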

If you do background by yourself, please make sure that you create a pidfile to tell the init system which ID to signal to stop — and if you do have such a pidfile, please do not make it configurable in the configuration file, but set a compiled-in default and optionally allow an override at runtime. The runtime override is especially welcome if your software is supposed to have multiple instances configured on the same box — as a single pidfile would then conflict. Not having it configured in a file means that you no longer need to hack up a parser for the configuration file to know what the user wants; you can rely on either the default or your override.
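A sketch of that pidfile handling in C; the option letter, default path and daemon name are assumptions for illustration:

    /* Compiled-in pidfile default, overridable at runtime with -p
     * (hypothetical daemon; error handling kept minimal). */
    #include <stdio.h>
    #include <unistd.h>

    #ifndef DEFAULT_PIDFILE
    #define DEFAULT_PIDFILE "/run/mydaemon.pid"
    #endif

    int main(int argc, char *argv[]) {
        const char *pidfile = DEFAULT_PIDFILE;
        int opt;

        while ((opt = getopt(argc, argv, "p:")) != -1)
            if (opt == 'p')
                pidfile = optarg;     /* per-instance override */

        FILE *fp = fopen(pidfile, "w");
        if (fp == NULL)
            return 1;
        fprintf(fp, "%ld\n", (long)getpid());
        fclose(fp);
        /* ... fork, initialize, run ... */
        return 0;
    }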

Also, if you do intend to support multiple instances of the same daemon, make sure that you allow multiple configuration files to be passed in on the command line. This greatly simplifies the whole handling of multiple instances, and should be mandatory in that situation. Make sure you don’t re-use paths in that case either.

If you have messages, you should also make sure that they are sent to syslog — please do not force, or even default, everything to log files! We have tons of syslog implementations, and at least the user does not have to guess which one of many files is going to hold the messages from your last service start — at this point you probably guessed that there are a few things I hope to rectify in Munin 2.1.
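For completeness, a minimal sketch of what “just use syslog” looks like from C; the ident string and facility are, of course, assumptions:

    #include <syslog.h>

    int main(void) {
        openlog("mydaemon", LOG_PID, LOG_DAEMON);  /* messages go to syslog */
        syslog(LOG_INFO, "service starting");
        syslog(LOG_ERR, "cannot parse configuration: %s", "example");
        closelog();
        return 0;
    }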

I’m pretty sure that there are other concerns that could come up, but for now I guess this would be enough for me to have a much simpler life as an init script maintainer.

I tried not to get into this discussion, mostly because it would degenerate into a mud-slinging contest.

Alexis did not take kindly to the fact that Tomáš changed the default provider for libavcodec and related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš: he is just a downstream by transitivity (libreoffice uses gstreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what has been done in Libav, and has a more lax approach to what goes in the tree and how.

- Libav has FATE running on most architectures; many of those are running Gentoo, usually on real hardware.

- FFmpeg has an older FATE with fewer architectures, many of them QEMU emulations.

- Libav defines the API

- FFmpeg follows, adding bits here and there to “diversify”

- Libav has a major release per season, minor releases when needed

- FFmpeg releases a lot, touting a lot of *Security*Fixes* (usually old code from ancient times eventually fixed)

- Libav does care about crashes and fixes them, but does not claim every crash is a security issue.

- FFmpeg goes by leaps to add MORE features, no matter what (including picking wip branches from my personal github and merging them before they are ready…)

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something working everywhere you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.

Exactly two years ago, a group consisting of the majority of FFmpeg developers took over its maintainership. While I didn’t like the methods, I’m not an insider, so my opinion stops here, especially if you pay attention to who was involved: Luca was part of it. Luca has been a Gentoo developer since before most of us even used Gentoo, and I must admit I’ve never seen him heating up any discussion — rather the contrary; it’s always been a pleasure to work with him. What happened next, after a lot of turmoil, is that the developers split into two groups: libav, formed by the “secessionists”, and FFmpeg.

Good, so what do we choose now? One of the first things that was done on the libav side was to “clean up” the API with the 0.7 release, meaning we had to fix almost all its consumers: a bad idea if you want wide adoption of a library that has a history of frequently changing its API and breaking all its consumers. Meanwhile, FFmpeg maintained two branches: the 0.7 branch compatible with the old API and the 0.8 one with the new API. The two branches were supposed to be identical except for the API change. On my side the choice was easy: thanks but no thanks, sir, I’ll stay with FFmpeg.
FFmpeg, while having its own development and improvements, has been doing daily merges of all libav changes, often with an extra pass of review and checks, so I can even benefit from all the improvements from libav while using FFmpeg.

So why should we use libav? I don’t know. Some projects use libav within their release process, so they are likely to be much more tested with libav than with FFmpeg. However, until I see real bugs, I consider this pure supposition: I have yet to see real facts. On the other hand, I can see lots of false claims, usually without any kind of reference: Tomáš claims that there’s no failure that is libav-specific; well, some bugs tend to say the contrary and have been open for some time (I’ll get back to XBMC later). Another false claim is that FFmpeg-1.1 will have the same failures as libav-9: since Diego made a tinderbox run for libav-9, I made the tests for FFmpeg 1.1 and made the failures block our old FFmpeg 0.11 tracker. If you click the links, you will see that the number of blockers is much smaller (something like 2/3) for the FFmpeg tracker. Another false claim I have seen is that there will be libav-only packages: I have yet to see one; the example I was given as an answer is gst-plugins-libav, which unfortunately is in the same shape for both implementations.

In theory FFmpeg-1.1 and libav-9 should be on par, but in practice, after almost two years of disjoint development, small differences have started to accumulate. One of them is the audio resampling library: while libswresample has been in FFmpeg since the 0.9 series, libav developers did not want it and made another one, with a very similar API, called libavresample, which appeared in libav-9. This smells badly of NIH syndrome, but to be fair, it’s not the first time such things happen: libav and FFmpeg developers tend to write their own codecs instead of wrapping external libraries, and usually achieve better results. The audio resampling library is why XBMC being broken with libav is, at least partly, my fault: while cleaning up its usage of the FFmpeg/libav API, I made it use the public API for audio resampling, initially with libswresample, but made sure it worked with libavresample from libav. At that time, this meant requiring libav git master, since libav-9 was not even close to being released, so there was no point in trying to stay compatible with such a moving target. libswresample from FFmpeg had been present since the 0.9 series, released more than one year ago. Meanwhile, XBMC-12 has entered its release process, meaning it will probably not work with libav easily. Too late, too bad.

Another important issue I’ve raised is security holes: nowadays, we are much more exposed to them. Instead of having to send a specially crafted video to my victim and make him open it with the right program, I only have to embed it in an HTML5 webpage and wait. This is why I am a bit concerned that security issues fixed 7 months ago in FFmpeg have only been fixed with the recently released libav-0.8.5. I’ve been told that these issues are just crashes and have been fixed in a better way in libav: this is probably true, but I still consider the delay huge for such an important component of modern systems, and, thanks to FFmpeg merging from libav, the better fix will also land in FFmpeg. I have also been told that this will improve on the libav side, but again, I want to see facts rather than claims.

As a conclusion: why was the default implementation changed? Some people seem to like the other one better and use false claims to force their preference. Is it a good idea for our users? Today, I don’t think so (remember: FFmpeg merges from libav and adds its own improvements); maybe later, when we have some clear evidence that libav is better (the improvements might be buggy or the merges might lead to subtle bugs). Will I fight to get the default back to FFmpeg? No. I use it, will continue to use and maintain it, and will support people who want the default back to FFmpeg, but that’s about it.

Probably one of the biggest problems with maintaining software in Gentoo where a daemon is involved is dealing with init scripts. And it’s not really a problem with just Gentoo, as almost every distribution or operating system has its own way to handle init scripts. I guess this is one of the nice ideas behind systemd: having a single standard for daemons to start, stop and reload is definitely a positive goal.

Even if I’m not sure myself whether I want the whole init system to be collapsed into a single one for every single operating system out there, there is at least a chance that upstream developers will provide a standard command line for daemons, so that init scripts no longer have to carry a hundred lines of pre-start setup commands. Unfortunately I don’t have much faith that this is going to change any time soon.

Anyway, leaving the daemons themselves alone, as that’s a topic for a post of its own and I don’t care about writing it now, what remains is the init script itself. Now, while it seems quite a few people didn’t know about this before, OpenRC has supported, almost since forever, a more declarative approach to init scripts: by setting just a few variables, such as command, pidfile and similar, the script works, as long as the daemon follows the most generic approach. Full documentation for this kind of script is present in the runscript man page and I won’t bore you with the details here.
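To give an idea without duplicating the man page, a minimal sketch of such a declarative script for a hypothetical daemon might look like this:

    #!/sbin/runscript
    # With command= and pidfile= set, runscript/start-stop-daemon
    # handle start and stop without any custom functions.

    pidfile="/run/mydaemon.pid"
    command="/usr/sbin/mydaemon"
    command_args="--pidfile ${pidfile}"

    depend() {
        need net
    }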

Besides the declaration of what to start, there are a few more issues that are now mostly handled to different degrees depending on the init script, rather than in a more comprehensive and seamless fashion. Unfortunately, I’m afraid this is likely going to stay this way for a long time, as I’m sure that some of my fellow developers won’t care to implement the trickiest parts that can be implemented, but at least I can try to give a few ideas of what I found out while spending time on said init scripts.

So the number one issue is of course the need to create, beforehand, the directories the daemon will use, if they are to be stored on temporary filesystems. What happened is that one of the first changes that came with the whole systemd movement was to create /run and use it to store pidfiles, locks and other runtime state files, mounting it as tmpfs at runtime. This was something I was very interested in to begin with, because I was doing something similar before, on the router with a CF card (through an EIDE adapter) as hard disk, to avoid writing to it at runtime. Unfortunately, more than a year later, we still have lots of ebuilds out there that expect /var/run paths to be maintained from the merge to the start of the daemon. At least now there’s enough consensus about it that I can easily open bugs for them instead of just ignoring them.

For daemons that need /var/run it’s relatively easy to deal with the missing path; while a few scripts do use mkdir, chown and chmod to handle the creation of the missing directories, there is a really neat helper to take care of it, checkpath — which is also documented in the aforementioned man page for runscript. But there have been many other places where the two directories are used, which are not initiated by an init script at all. One of these happens to be my dear Munin’s cron script used by the master — what to do then?
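For reference, a sketch of what this looks like in a start_pre() function; the path, owner and mode here are assumptions for a hypothetical setup:

    start_pre() {
        # create the runtime directory on the tmpfs before starting
        checkpath --directory --owner munin:munin --mode 0755 /run/munin
    }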

This has actually been among the biggest issues regarding the transition. It was the original reason why screen was changed to save its sockets in the users’ home instead of the previous /var/run/screen path — with relatively bad results all over, including me deciding to just move to tmux. In Munin, I decided to solve the issue by installing a script in /etc/local.d, so that on start the /var/run/munin directory would be created … but this is far from a decent, standard way to handle things. Luckily, there actually is a way to solve this that has been standardised, to some extent — it’s called tmpfiles.d and was also introduced by systemd. While OpenRC implements the same basics, because of the differences between the two init systems not all of the features are implemented, in particular the automatic cleanup of the files on a running system - on the other hand, that feature is not fundamental for the needs of either Munin or screen.
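A tmpfiles.d entry is a one-line declaration; a sketch for the Munin case (the exact mode and ownership are assumptions) would be something like:

    # /usr/lib/tmpfiles.d/munin.conf
    # type path        mode  user   group  age
    d      /run/munin  0755  munin  munin  -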

There is an issue with the way these files should be installed, though. For most packages, the correct path to install to would be /usr/lib/tmpfiles.d, but the problem with this is that on a multilib system you could easily end up with both /usr/lib and /usr/lib64 as real directories, causing Portage’s symlink protection to kick in. I’d like to have a good solution to this, but honestly, right now I don’t.

So we have the tools at our disposal; what remains to be done, then? Well, there’s still one issue: which path should we use? Should we keep /var/run to be compatible, or should we just decide that /run is a good idea and run with it? My gut says the latter at this point, but it means that we have to migrate quite a few things over time. I actually started now on porting my packages to use /run directly, starting from pcsc-lite (since I had to bump it to 1.8.8 yesterday anyway) — Munin will come with support for tmpfiles.d in 2.0.11 (unfortunately, it’s unlikely I’ll be able to add support for it upstream in that release, but in Gentoo it’ll be there). Some more of my daemons will be updated as I bump them, as I already spent quite a lot of time on those init scripts to hone them down on some more issues that I’ll delineate in a moment.

For some, but not all!, of the daemons it’s actually possible to decide the pidfile location on the command line — for those, the solution to handle the move to the new path is dead easy: you just make sure to pass something equivalent to -p ${pidfile} in the script, then change the pidfile variable, and you’re done. Unfortunately that’s not always an option, as the pidfile path can be either hardcoded into the compiled program, or read from a configuration file (the latter is the case for Munin). In the first case, no big deal: you change the configuration of the package, or worst case you patch the software, make it use the new path, update the init script and you’re done… in the latter case, though, we have trouble at hand.

If the location of the pidfile is to be found in a configuration file, even if you change the configuration file that gets installed, you can’t count on the user actually updating it, which means your init script can easily get out of sync with the configuration file. Of course there’s a way to work around this, and that is to actually get the pidfile path from the configuration file itself, which is what I do in the munin-node script. To do so, you need to look at the syntax of the configuration file. In the case of Munin, the file is just a set of key-value pairs separated by whitespace, which means a simple awk call can give you the data you need. In some other cases, the configuration file syntax is so messed up that getting the data out of it is impossible without writing a full-blown parser (which is not worth it). In that case you have to rely on the user to actually tell you where the pidfile is stored, which is quite unreliable, but okay.
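As a sketch, the kind of awk call meant here, against a key-value file in munin-node.conf style (the key name and fallback path are assumptions):

    # read the pidfile path out of the configuration, fall back to the default
    get_pidfile() {
        awk '$1 == "pid_file" { print $2 }' /etc/munin/munin-node.conf
    }

    pidfile="$(get_pidfile)"
    : "${pidfile:=/run/munin/munin-node.pid}"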

There is of course one thing that needs to be said now: what happens when the pidfile changes in the configuration between one start and the following stop? If you’re reading the pidfile path out of a configuration file, it is possible that the user, or the ebuild, changed it in between, causing quite big headaches when trying to restart the service. Unfortunately my users experienced this when I changed Munin’s default from /var/run/munin/munin-node.pid to /var/run/munin-node.pid — the change was possible because the node itself runs as root, and only drops privileges when running the plugins, so there is no reason to wait for the subdirectory; and since most nodes will not have the master running, /var/run/munin wouldn’t be useful there at all. As I said, though, it would cause the started node to use one pidfile path and the init script another, failing to stop the service before starting it anew.

Luckily, William corrected it, although the fix is not out yet — the next OpenRC release will save some of the variables used at start time, allowing this kind of problem to be nipped in the bud without having to add tons of workarounds to the init scripts. It will require some changes in the functions for graceful reloading, but that’s, in retrospect, a minor detail.

There are a few more niceties that you could add to init scripts in Gentoo to make them more foolproof and more reliable, but I suppose this covers the main points we’re hitting nowadays. I suppose for me it’s just going to be time to list and review all the init scripts I maintain, which are quite a few.

Right at the start, the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into the Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all the experiments. Some time ago we picked Pd0.3Ni0.7 as contact material, since the palladium generates only a low resistance between the nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties. Three methods are used to obtain data: SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. StrunkJournal of Applied Physics 113, 034303 (2013); arXiv:1208.2163 (PDF[*])[*] Copyright American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.

UPDATE: Added some basic migration instructions to the bottom.
UPDATE2: Removed mplayer incompatibility mention. Mplayer-1.1 works with system libav.

As the summary says, the default media codec provider for new installs will be libav instead of ffmpeg.

This change is being done for various reasons, like matching the default of Fedora and Debian, or the fact that some high-profile projects (eg a sh*tload of people use them) will probably be libav-only. One example is gst-libav, which in turn is required by libreoffice-4, due for release in about a month. To go for the least pain for the user, we decided to move from ffmpeg to libav as the default library.

This change won’t affect your current installs at all, but we would like to ask you to try to migrate to libav, test it, and report any issues. That way, if stuff happens in the future and we are forced to make libav the only implementation for everyone, you are not left in the dark screaming for your suddenly missing features.

What to do when some package does not build with libav but ffmpeg is fine

There are no such packages left around, if I am searching correctly (it might be my blindness, so do not take my word for it).

So if you encounter any package not building with libav, just open a bug report on Bugzilla, assign it to the media-video team, and add lu_zero[at]gentoo.org to CC to be sure he really takes a sneaky look to fix it. If you want to fix the issue yourself, it gets even better: you write the patch, open the bug in our bugzie, and someone will include it. The patch should also be sent upstream for inclusion, so we don’t have to keep patches in the tree for a long time.

What should I do when I have some issues with libav and I require more features that are in ffmpeg but not in libav

It’s easier than fixing bugs about failing packages. Just nag lu_zero (mail hidden somewhere in this post ;-)) and read this.

So when is this stuff going to ruin my day?

The switch in the tree, and a news item informing all users of media-video/ffmpeg, will happen at the end of January or early February, unless something really bad happens while you guys test it now.

I feel lucky and I want to switch right away so I can ruin your day by reporting bugs

Great, I am really happy you want to contribute. The libav switch is pretty easy to do, as there are only 2 things to keep in mind.

You have to sync your useflags between virtual/ffmpeg and the newly-to-be-switched media-video/libav. The best way is most probably to just edit your package.use and replace the media-video/ffmpeg line with a media-video/libav one.
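As a sketch (the USE flags here are just an example; keep whatever you had on the ffmpeg line):

    # /etc/portage/package.use
    #media-video/ffmpeg  X encode mp3 theora truetype vorbis x264
    media-video/libav    X encode mp3 theora truetype vorbis x264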

Then one would go straight for emerge libav, but there is one more caveat. Libav has split out the libpostproc library, while ffmpeg is still using the internal one. Code-wise they are most probably equal, but you have to account for it, so just call emerge with both libraries:
emerge -1v libav libpostproc

If this succeeds, you have to revdep-rebuild the packages you have, or use @preserved-rebuild from portage-2.2, to rebuild all the reverse dependencies of libav.
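Either of the two approaches sketched below should do it (adjust the soname to the one actually on your system):

    # with gentoolkit's revdep-rebuild:
    revdep-rebuild --library libavcodec.so.54
    # or, with portage-2.2:
    emerge -av @preserved-rebuild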

Many times, when I had to set up make.conf on systems with particular architectures, I was in doubt about which --jobs value is best.
The handbook suggests ${core} + 1, but since I’m curious I wanted to test it myself, to be sure this is right.

To make a good test we need a package with a respectable build system that respects make parallelization and takes at least a few minutes to compile; with packages that compile in a few seconds we are unable to track the effective difference. kde-base/kdelibs is, in my opinion, perfect.

If you are on an architecture where kde-base/kdelibs is unavailable, just switch to another cmake-based package.

Now, download best_makeopts from my overlay. Below is an explanation of what the script does, and various suggestions.

You need to compile the package on a tmpfs filesystem; I’m assuming you have /tmp mounted as tmpfs too.

You need to have the tarball of the package on tmpfs as well, because a slow disk may add time to the measurement.

You need to switch your governor to performance.

You need to be sure you don’t have strange EMERGE_DEFAULT_OPTS.

You need to add ‘-B’ because we don’t want to include the time of the installation.

You need to drop the existing caches before compiling.

As you can see, the for loop will emerge the same package with makeopts from 1 to 10. If you have, for example, a single-core machine, running the loop from 1 to 4 is enough.
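A sketch of the core of such a loop, in case you want to reproduce it by hand (it assumes root, the tarball already fetched, and bash’s time keyword):

    for j in 1 2 3 4 5 6 7 8 9 10; do
        sync; echo 3 > /proc/sys/vm/drop_caches   # drop the existing caches
        time MAKEOPTS="-j${j}" emerge -vB kde-base/kdelibs
    done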

Please, during the test, don’t use the CPU for other purposes, and if you can, stop all services and run the test from a tty; you will see the time for every merge.

I tested this script on ~20 different machines, and in the majority of cases the best value was ${core}, or more exactly the number of ${threads} of your CPU.

Conclusion:
From the handbook:

A good choice is the number of CPUs (or CPU cores) in your system plus one, but this guideline isn’t always perfect.

I don’t know who, years ago, suggested ${core} + 1 in the handbook, and I don’t want to trigger a flame war. I’m just saying that ${core} + 1 is not the best value for me, and the test confirms the part: “but this guideline isn’t always perfect”.

In all cases ${threads} + ${X} is slower than ${threads} alone, so don’t use -j20 if you have a dual-core CPU.

Also, I’m not saying you must use ${threads}; I’m just saying feel free to run your own tests to see what the best value is.

If you have suggestions to improve the functionality of the script or you think that this script is wrong, feel free to comment or leave an email.

This is a tricky review to write because I’m having a very bad time finishing this book. Indeed, while it did start well, and I was actually interested in the idea behind it, it easily got nasty, in my mind. But let’s start from the top, and let me try to write a review of a book I’m not sure I’ll be able to finish without feeling ill.

I found the book, Amusing Ourselves to Death, through a blog post in one of the Planets I follow, and I found the premise extremely interesting: has the coming of the show-business era meant that people are so submersed in entertainment that they lose sight of the significance of news? Unfortunately, as I said, the book itself, to me, does not make the point properly, as it exaggerates to the point of no return. While the book was written in 1985 – which means it has no way to know how the Web changed media once again – it is proposed as still relevant today in the introduction, written by the author’s son. I find that proposition unrealistic. It goes as far as stating that most of the students who were told to read the book agreed with it — I would venture a guess that most of them didn’t want to disagree with their teacher.

First of all, the author is a typography snob, and that can easily be seen when he spends pages and pages telling all the nice things about the printed word — at the same time taking swipes at the previous “medium” of the spoken word. But while I do agree with one of the big points in the book (the fact that different forms make discourse “change” — after all, my blog posts have a different tone from Autotools Mythbuster, and from my LWN articles), I do not think that a different tone makes it more or less “valid”. Indeed, this is why I find it extremely absurd that, for Wikipedia, I’m unreliable when writing on this blog, but I’m perfectly reliable the moment I write Autotools Mythbuster.

Now, if you were to take the first half of the book and title it something like “History of the printed word in early American history”, it would be a very good and enlightening read. It helps a lot to put the history of America into context, especially compared to Europe — I’m not much of an expert in history, but it’s interesting to note how in America the religious organisations themselves sponsored literacy, while in Europe Catholicism tried its best to keep people within the confines of illiteracy.

Unfortunately, he then starts telling how evil the telegraph was for bringing in news from remote places that people, in the author’s opinion, have no interest in, and should have no right to know… and the same kind of evilness is pointed out in photography (including the idea that photography has no context because there is no way to take a photograph out of context… which is utterly false, as many of us have seen during the reporting of recent wars; okay, it’s all gotten much easier thanks to Photoshop, but in no way was it impossible in the ’80s).

Honestly, while I can understand having a foregone conclusion in mind, after explaining how people changed the way they speak with the advent of TV, no longer caring about syntax frills and similar, trying to say that on TV the messages are drowned in a bunch of irrelevant frills is … a bit senseless. In the same way, it is senseless to me to say that typography is “pure message” without even acknowledging that presentation is an issue for typography as much as for TV; we wouldn’t have font designers otherwise.

While some things are definitely interesting to read – like the note about the use of pamphlets in early American history, which compare easily to blogs today – the book itself is a bust, because there is no premise of objectivity; it’s just a long text searching for reasons to reach the conclusion the author already had in mind… and that’s not what I like to read.

Through both LWN and netzpolitik.org I just heard that Aaron Swartz has committed suicide. While watching his speech “How we stopped SOPA”, his name rang a bell with me; I looked into my inbox and found that he and I once had a brief chat on html2text, a piece of free software of his that I was in touch with in the context of Gentoo Linux. So there is this software, his website, these past mails, this amazing talk, his political work that I didn’t know about… and he’s dead. It only takes a few minutes of watching the talk to get the feeling that this is a great loss to society.

When you work with Django, and especially with static files or other template tags, you realize that you have to include {% load staticfiles %} in all your template files. This violates the DRY principle, because we have to repeat the {% load staticfiles %} template tag in each template file.

Let’s give an example.

We have a base.html file which links some Javascript and CSS files from our static folder.
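A minimal sketch of what the two templates might look like (file names and static paths are assumptions):

    {# base.html #}
    {% load staticfiles %}
    <html>
    <head>
      <link rel="stylesheet" href="{% static 'css/style.css' %}">
      <script src="{% static 'js/base.js' %}"></script>
    </head>
    <body>{% block content %}{% endblock %}</body>
    </html>

    {# index.html: note the repeated load tag #}
    {% extends "base.html" %}
    {% load staticfiles %}
    {% block content %}
      <script src="{% static 'js/extra.js' %}"></script>
    {% endblock %}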

As you can see, I load staticfiles again in index.html. If I remove it, I get this error: “TemplateSyntaxError at /, Invalid block tag ‘static’”. Unfortunately, even if we extend base.html, it will not inherit the load template tag from that file and it will not load staticfiles into index.html, which means it will not load our extra Javascript file.
The truth is that there is a hack-y way to do this. After a little research I finally found a way to follow the DRY principle and avoid repeating the {% load staticfiles %} template tag in every template file.

Open one of the files that are loaded automatically from the beginning (settings.py, urls.py or models.py). I will use settings.py.
So we add the following to settings.py:
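(A sketch of the trick; in the Django versions of this era add_to_builtins lives in django.template.loader, and it was removed in Django 1.9, where builtins are configured through the TEMPLATES setting instead.)

    # settings.py
    from django.template.loader import add_to_builtins

    # make the staticfiles template tags built-in, so templates
    # no longer need {% load staticfiles %}
    add_to_builtins('django.contrib.staticfiles.templatetags.staticfiles')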

Passwords. No one likes them, but everybody needs them. If you are concerned about your online safety, you probably have unique passwords for your critical accounts and some common pattern for all the almost-useless accounts you create when browsing the web.

At first I used to save my passwords in a gpg-encrypted file. Over time, however, I began using Firefox’s and Chrome’s password managers, mostly because of their awesome syncing capabilities and form auto-filling.

Unfortunately, convenience comes at a price. I ended up relying on the password managers a bit too much, using my password pattern all over the place.

Then it hit me: I had strayed too much. Although my main accounts were relatively safe (strong passwords, two factor authentication), I had way too many weak passwords, synced on way too many devices, over syncing protocols of questionable security.

Looking for a better solution, I stumbled upon LastPass. Although LastPass uses an interesting security model, with passwords encrypted locally and a password generator that helps you maintain strong passwords for all your accounts, I didn’t like depending on an external service for something so critical. Its UI also left something to be desired.

Then I found pass: a Unix command line tool that takes advantage of commonly used tools like gnupg and git to provide safe storage for your passwords and other critical information.

Pass’ concept is simple. It creates one file for each of your passwords, which it then encrypts using gpg and your key. You can provide your own passwords or ask it to generate strong ones for you automatically.

When you need a password you can ask pass to print it on screen or copy it to the clipboard, ready for you to paste in the desired password field.

Pass can optionally use git, allowing you to track the history of your passwords and sync them easily among your systems. I have a Linode server, so I use that + gitolite to keep things synced.
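A few representative commands, to give the flavour (the store entries and the git remote are of course made up):

    pass init "My GPG Key ID"           # create the password store
    pass generate web/example.com 16    # generate a 16-character password
    pass -c web/example.com             # copy the password to the clipboard
    pass git init                       # start tracking history with git
    pass git remote add origin git@myserver:passwords.git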

Installation and usage of the tool is straightforward, with clean instructions and bash completion support that makes it even easier to use.

All this does come at a cost, since you lose the ability to auto-save passwords and fill out forms. But this is a small price to pay compared to the security benefits gained. I also love the fact that you can access your passwords with standard Unix tools in case of emergency. The system is also useful for securely storing other critical information, like credit cards.

Pass is not for everyone, and most people would be fine using something like LastPass or KeePass, but if you’re a Unix guy looking for a solid password management solution, pass may be what you’re looking for.

Pass was written by zx2c4 (thanks!) and is available in Gentoo’s portage.

I spent 12 days in Greece. The Greek hospitality is superb; I cannot ask for better friends in Greece. I first arrived in Thessaloniki and stayed there for a few nights. Then I went to Larissa, and stayed with my friend and his family. There was a small communication barrier with his parents in this smaller town; they don’t get too many tourists. However, I had a very nice Christmas there and it was nice to be with such great people over the holidays. I went to a namesday celebration. Even though I couldn’t understand most of the conversations, they still welcomed me, gave me food and wine, and exchanged cultural information. Then I went to Athens, stayed in a hostel, and spent New Year’s watching the fireworks over the Acropolis and the Parthenon. Cool experience! It was so great to be walking around the birthplace of “western ideals” – not the oldest civilization, but close. Some takeaway thoughts: 1) Greek hospitality is unlike anything I’ve experienced, really. I made sure that I told everyone that they have an open door with me whenever we meet in “my new home” (meaning, I don’t know when or where), 2) you cannot go hungry in Greece, especially when they are cooking for you! 3) the cafe culture is great, 4) I want to go back during the summer

Of course, you will always find the not so nice parts. I got fooled by the old man scam, as seen here. Luckily, they only got 30€ from me, compared to some of the stories I’ve heard. Looking back on it, I just laugh at myself. Maybe I’ll be jaded towards a genuine experience in the future but, lesson learned. I don’t judge Athens by this one mishap, however.

I only have pictures of Athens, since I had to buy a new camera… Pics here

the assignment was to record the sound of ice in a glass, and make something of it.

the track picture shows my lo-fi setup for the field recording segment. i balanced a logitech USB microphone (which came with the Rock Band game) on a box of herbal tea (to keep it off the increasingly wet kitchen table), and started dropping ice cubes into a glass tumbler. audible is the initial crack and flex of the tray, scrabbling for cubes, tossing them into the cup. i made a point of recording the different tone of cubes dropped into a glass of hot water. i also filled the cup with ice, then recorded the sound of water running into it from the kitchen tap. i liked this sound enough to begin the song with it.

i decided that my first song of 2013 should incorporate the piano, so with the ice cubes recorded, i sat down to improvise an appropriately wintry melody. the result is a simple two-minute minor motif. i turned to the ardour3 beta to integrate the field recordings and the piano improvisation.

it’s been a while since i last used my strymon bluesky reverb pedal, so i figured i should use it for this project. i set up a feedback-free hardware effects loop using my NI Komplete Audio6 interface with the help of the #ardour IRC channel, and listened to the piano recording as it ran through fairly spacious settings on the BSR (normal mode, room type, decay @ 3:00, predelay @ 11:00, low damp @ 4:00, high damp @ 8:00). with just a bit of “send” to the reverb unit, the piano really came to life.

i added a few more tracks in ardour for the ice cube snippets, with even more subtle audio sends to the BSR, and laid out the field recordings. i pulled them apart in several places, copying and pasting segments throughout the song; minimal treatment was needed to get a good balance of piano and ice.

Those who have met me, might notice I have a somewhat unusual taste in clothing. One thing I despise is having clothes that are heavily branded, especially when the local shops then charge top dollar for them.

Where hats are concerned, I’m fussy. I don’t like the boring old varieties that abound in $2 shops everywhere. I prefer something unique.

The mugshot of me with my Vietnamese coolie hat is probably the one most people on the web know me by. I was all set to try and make one myself, and I had an idea how I might achieve it, and had bought some materials I thought might work, but then I happened to be walking down Brunswick Street in Brisbane’s Fortitude Valley and saw a shop selling them for $5 each.

I bought one and have been wearing it on and off ever since. Or rather, I bought one, it wore out, I was given one as a present, wore that out, got given two more. The one I have today is #4.

I find them quite comfortable, lightweight, and most importantly, they’re cool and keep the sun off well. They are also one of the few full-brim designs that can accommodate wearing a pair of headphones or headset underneath. Being cheap is a bonus. The downside? One is I find they’re very divisive, people either love them or hate them — that said I get more compliments than complaints. The other, is they try to take off with the slightest bit of wind, and are quite bulky and somewhat fragile to stow.

I ride a bicycle to and from work, and so it’s just not practical to transport. Hanging around my neck, I can guarantee it’ll try to break free the moment I exceed 20km/hr… and if I try to sit it on top of the helmet, it’ll slide around and generally make a nuisance of itself.

Caps stow much easier. Not as good sun protection, but they can still look good. I’ve got a few baseball caps, but they’re boring and a tad uncomfortable. I particularly like the old vintage gatsby caps — often worn by the 1930s working class. A few years back, on my way to uni, I happened to stop by a St. Vinnies shop near Brisbane Arcade (sadly, they have since closed and moved on) and saw a gatsby-style denim cap going for about $10. I bought it, and people commented that the style suited me. This one was a little big on me, but I was able to tweak it a bit to make it fit.

Fast forward to today: it is worn out — the stitching is good, but there are significant tears in the panelling and the embedded plastic in the peak is broken in several places. I looked around for a replacement, but alas, they’re as rare as hens’ teeth here in Brisbane, and no, I don’t care for ordering overseas.

Down the road from where I live, I saw the local sports/fitness shop were selling those flat neoprene sun visors for about $10 each. That gave me an idea — could I buy one of these and use it as the basis of a new cap?

These things basically consist of a peak and headband, attached to a dome consisting of 8 panels. I took apart the old faithful and traced out the shape of one of the panels.

Now, I already had the headband and peak sorted out from the sun visor I bought, and these aren’t hard to manufacture from scratch either. I just needed to cut out some panels from suitable material and stitch them together to make the dome.

There are a couple of parameters one can experiment with that change the visual properties of the cap. Gatsby caps could be viewed as an early precursor of the modern baseball cap. The prime difference is the shape of the panels.

The above graphic is also available as a PDF or SVG image. The key measurements to note are A, which sets the head circumference, C which tweaks the amount of overhang, and D which sets the height of the dome.

The head circumference is calculated as ${panels}×${A}, so in the above case, 8 panels with a measurement of 80mm means a head circumference of 640mm. Hence why it never quite fitted me (58cm is about my size). I figured a measurement of about 75mm would do the trick.

B and C are actually two of the three parameters that separate a gatsby from the more modern baseball cap. The other parameter is the length of the peak. A baseball cap sets these to make the overall shape much more triangular, increasing B to about half of D, and tweaking C to make the shape more spherical.

As for the overhang, I decided I’d increase this a bit, increasing C to about 105mm. I left measurements B and D alone, making a fairly flattish dome.

For each of these measurements, once you come up with values that you’re happy with, add about 10mm to A, C and D for the actual template measurements to give yourself a fabric margin with which to sew the panels together.

As for material, I didn’t have any denim around, but on my travels I saw an old towel that someone had left by the side of the road — likely an escapee. These caps back in the day would have been made with whatever material the maker had to hand. Brushed cotton, denim, suede leather, wool all are common materials. I figured this would be a cheap way to try the pattern out, and if it worked out, I’d then see about procuring some better material.

Below are the results, click on the images to enlarge. I found due to the fact that this was my first attempt, and I just roughly cut the panels from a hand-drawn template, the panels didn’t quite meet in the middle. This is hidden by making a small circular patch where the panels normally meet. Traditionally a button is sewn here. I sewed the patch from the underside so as to hide the edges of it.

Not bad for a first try, I note I didn’t quite get the panels aligned at dead centre, the seam between the front two is just slightly off centre by about 15mm. The design looks alright to my eye, so I might look around for some suede leather and see if I can make a dressier one for more formal occasions.

Async-signal-safe functions

A signal handler function must be very careful, since processing elsewhere may be interrupted at some arbitrary point in the execution of the program. POSIX has the concept of a "safe function". If a signal interrupts the execution of an unsafe function, and the handler calls an unsafe function, then the behavior of the program is undefined.

After that, a list of safe functions follows, and one notable thing is that malloc and free are async-signal-unsafe!

I hit this issue while enabling tcmalloc's debugallocation for Chromium Debug builds. We have a StackDumpSignalHandler for tests, which prints a stack trace on various crashing signals for easier debugging. It's very useful, and worked fine for a pretty long while (which means that "but it works!" is not a valid argument for doing unsafe things).

Now when I enabled debugallocation, I noticed hangs triggered by the stack trace display. In one example, this stack trace:

generates a SIGSEGV (tcmalloc::Abort). This is just debugallocation having stricter checks on the usage of dynamically allocated memory. Now the StackDumpSignalHandler kicks in, and internally calls malloc. But we're already inside malloc code, as you can see in the above stack trace (frame @7), and re-entering it tries to take locks that are already held, resulting in a hang.

The fix was to rewrite the stack dumping code under a few constraints:

no dynamic memory, and that includes std::string and std::vector, which use it internally

no buffered stdio or iostreams, they are not async-signal-safe (that includes fflush)

custom code for number-to-string conversion that doesn't need dynamically allocated memory (snprintf is not on the list of safe functions as of POSIX.1-2008; it seems to work on a glibc-2.15-based system, but as said before that is not a good assumption to make); in this code I've named it itoa_r, and it supports both base-10 and base-16 conversions, as well as negative numbers for base-10 (see the sketch after this list)

warming up backtrace(3): now this is really tricky, and backtrace(3) itself is not whitelisted as safe; in fact, on the very first call it does some memory allocations; for now I've just added a call to backtrace() from a context that is safe and happens before the signal handler may be executed; implementing backtrace(3) in a known-safe way would be another fun thing to do
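To make the number-to-string point concrete, here is a hedged sketch of an itoa_r along those lines (hypothetical signature; the actual Chromium helper differs in its details): no allocation, no stdio, just digits written into a caller-provided buffer.

    #include <stddef.h>
    #include <stdint.h>

    char *itoa_r(intptr_t value, char *buf, size_t size, int base) {
        if (size == 0 || (base != 10 && base != 16))
            return NULL;

        char *start = buf;
        uintptr_t magnitude = (uintptr_t)value;

        if (base == 10 && value < 0) {       /* negative numbers: base 10 only */
            if (size < 2)
                return NULL;
            *start++ = '-';
            size--;
            /* avoids signed overflow on the most negative value */
            magnitude = (uintptr_t)(-(value + 1)) + 1;
        }

        char *p = start;
        do {                                  /* emit digits in reverse order */
            if ((size_t)(p - start) + 2 > size)
                return NULL;                  /* buffer too small */
            *p++ = "0123456789abcdef"[magnitude % (unsigned)base];
            magnitude /= (unsigned)base;
        } while (magnitude != 0);
        *p = '\0';

        while (start < --p) {                 /* digits came out backwards */
            char tmp = *start;
            *start++ = *p;
            *p = tmp;
        }
        return buf;
    }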

Note that for the above, I've also added a unit test that triggers the deadlock scenario. This will hopefully catch cases where calling backtrace(3) leads to trouble.

Okay, here comes another post about Munin, for those who are using this awesome monitoring solution (okay, I think I’ve been involved in upstream development more than I expected when Jeremy pointed me at it). While the main topic of this post is going to be IPv6 support, I’d like to first spend a few words on the context of what’s going on.

Munin in Gentoo has been slightly patched in the 2.0 series — most of the patches were sent upstream the moment they were introduced, and most of them have been merged for the following release. Some of them, though, were not merged into upstream’s 2.0 branch: among them the one bringing in my FreeIPMI plugin (or at least its first version) to replace the OpenIPMI plugins, and those dealing with changes that wouldn’t have been kosher for other distributions (namely, Debian) at this point.

But now Steve opened a new branch for 2.0, which means that the development branch (Munin does not use the master branch, for the simple logistic reason of having a master/ directory in Git, I suppose) is directed toward the 2.1 series instead. This meant not only that I could finally push some of my recent plugin rewrites, but also that I could make some deeper changes, including rewriting the seven Asterisk plugins into a single one, and working hard on the HTTP-based plugins (for web servers and web services) so that they use a shared backend, like SNMP. This actually completely solved an issue that, in Gentoo, we only solved partially before: my ModSecurity ruleset blacklists the default libwww-perl user agent, so with the partial fix, Munin advertises itself in the request; with the new code it also includes the name of the plugin that is currently making the request, so that it’s possible to know which requests belong to what.

Speaking of Asterisk, by the way, I have to thank Sysadminman for lending me a test server for working on said plugins — this not only got us the current new Asterisk plugin (7-in-1!) but also let me modify the old seven plugins just a tad, so that instead of using Net::Telnet, they just use IO::Socket::INET. This has been merged for 2.0, which in turn means that the next ebuild will have one less dependency, and one less USE flag — the asterisk flag for said ebuild only added the Net::Telnet dependency.

To the main topic — how did I get to IPv6 in Munin? Well, I was looking at which other plugins need to be converted to “modernity” – which to me means re-using as much code as possible, collapsing multiple plugins into one through multigraph, and supporting virtual nodes – and I found the squid plugins. This was interesting to me because I actually have one squid instance running, on the tinderbox host, to avoid direct connection to the network from the tinderboxes themselves. These plugins do not use libwww-perl like the other HTTP plugins, I suppose (but I can’t be sure, for reasons I’m going to explain in a moment), because the cache://objects request that has to be made might or might not work with said library. Since, as I said, I have a squid instance, and these (multiple) plugins look exactly like the kind of target I was looking to rewrite, I started looking into them.

But once I started, I had a nasty surprise: my Squid instance only replies over IPv6, and that’s intended (the tinderboxes are only assigned IPv6 addresses, which makes it easier for me to access them, and have no NAT to the outside, as I want to make sure that all network access is filtered through said proxy). Unfortunately, by default, libwww-perl does not support accessing IPv6. And indeed, neither do most of the other plugins, including the Asterisk one I just rewrote, since they use IO::Socket::INET (instead of IO::Socket::INET6). A quick search around, and this article turned up — although then this also turned up, which relates to IPv6 support in the Perl core itself.

Unfortunately, even with the core itself supporting IPv6, libwww-perl seems to have different ideas, and that is a showstopper for me, I’m afraid. At the very least, I need to find a way to get libwww-perl to play nicely if I want to use it over IPv6 (yes, I’m going to work around this for the moment and just write the new squid plugins against IPv4). On the other hand, using IO::Socket::IP would probably solve the issue for the remaining parts of the node, and that will for sure give us at least somewhat better support. Even better, it might be possible to abstract this into a Munin::Plugin::Socket that falls back to whatever we need. As it is, right now it’s a big question mark what we can do there.
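A sketch of the kind of fallback a hypothetical Munin::Plugin::Socket could provide: prefer the protocol-agnostic IO::Socket::IP when available, fall back to the IPv4-only IO::Socket::INET otherwise.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my ($host, $port) = @ARGV;

    # prefer IO::Socket::IP (IPv4 + IPv6), fall back to IPv4-only INET
    my $class;
    if (eval { require IO::Socket::IP; 1 }) {
        $class = 'IO::Socket::IP';
    } else {
        require IO::Socket::INET;
        $class = 'IO::Socket::INET';
    }

    my $sock = $class->new(
        PeerHost => $host,
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "cannot connect to $host:$port: $@\n";

    print {$sock} "HEAD / HTTP/1.0\r\n\r\n";
    print while <$sock>;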

So what can be said about the current status of IPv6 support in Munin? Well, the node uses Net::Server, and that in turn is not using IO::Socket::IP, but rather IO::Socket::INET, or INET6 if installed — which basically means that the node itself will support IPv6 as long as INET6 is installed, and would call for using that as well, instead of IO::Socket::IP — but the latter is the future and, for most people, will be part of the system anyway… The async support, in 2.0, will always use IPv4 to connect to the local node. This is not much of a problem, as Steve is working on merging the node and the async daemon into a single entity, which makes the most sense. Basically, it means that in 2.1 all nodes will be spooled, instead of what we have right now.

The master, of course, also supports IPv6 — via IO::Socket::INET6 (yet another nail in the coffin of IO::Socket::IP? maybe) — and this covers all the communication between the two main components of Munin, which could be enough to declare it fully IPv6 compatible — and that’s what 2.0 is saying. But alas, this is not the case yet. On an interesting note, the fact that right now Munin supports arbitrary commands as transports, as long as they provide an I/O interface to the socket, makes its IPv6 support quite moot: not only do you just need an IPv6-capable SSH to handle it, but you could probably use SCTP instead of TCP simply by using a hacked-up netcat! I’m not sure if monitoring would get any improvement out of SCTP, although I guess it might overcome some of the overhead related to establishing the connection, but… well, it’s a different story.

Of course, Munin’s own framework is only half of what has to support IPv6 for it to be properly supported; the heart of Munin is the plugins, which means that if they don’t support IPv6, we’re dead in the water. Perl plugins, as noted above, have quite a few issues with finding the right combination of modules for supporting IPv6. Bash plugins, and indeed plugins in any other language, support IPv6 only as well as the underlying tools do — indeed, even though libwww-perl does not work with IPv6, plugins written with wget would work out of the box on an IPv6-capable wget… but of course, the gains we get by using Perl are major enough that you don’t want to go that route.

All in all, I think what’s going to happen is that as soon as I’m done with the weekend’s work (which is quite a bit, since Friday was filled with a couple of server failures, and with me finding out that one of my backups was not working as intended) I’ll prepare a branch and see how much of IO::Socket::IP we can leverage, and whether wrapping around it would help us with the new plugins. So we’ll see where this leads us; maybe 2.1 will really be 100% IPv6 compatible…

Nowadays I see lots of new blog posts about how to contribute to open source projects, so I decided to write one about how to contribute to Gentoo Linux and become a vital part of the project.

Every time we talk about Gentoo, my colleagues at university tell me that they cannot install Gentoo because it is too difficult for them, or that they are not ready to install and configure it because they don’t have the experience, so they finally give up. Some other colleagues tell me that they want to contribute to Gentoo but don’t know how to start. That’s why I wrote this blog post: to give some guidelines to those who want to contribute.

In order to help and contribute to Gentoo, you don’t have to know how to code or be a super-duper Linux guru. Of course code/programming is the core of open source projects, but there are ways to contribute without knowing how to code. The requirements are two things: a Gentoo installation and the will to help.

Community

Gentoo, like the rest of the FOSS projects, is based on volunteer effort. The pillars of every FOSS project are its community; without it, Gentoo wouldn’t exist. Even someone who doesn’t know how to code can contribute to, and learn from, the project’s community.

Forums: Join our forums and help other users with their problems. It is also a good opportunity to learn more things about Gentoo.

Mailing Lists: Subscribe to our mailing lists and learn about the latest community and development news of the project. Everyone can also help users on the related mailing lists, or discuss with Gentoo developers.

IRC: Join our IRC channels. Help new users with their issues. Discuss with users and developers and express your opinion about the new features and the technical issues of the project. Make sure you read our Code of Conduct first.

Planets: Follow our planet and read Gentoo-related blog posts from developers. There are interesting conversations (via comments) between users and developers after the blog posts.

Promote: After you get some experience with the project, promote your favourite distro (Gentoo, of course) by writing blog posts and articles in forums and sites related to open source. You can also spread the word in your local Linux users group and at your university.

Participate in Events: Every month there are meetings of most of the Gentoo project teams. The meetings take place in #gentoo-meetings. There is an ‘open floor’ at the end of each meeting where users can express their opinion.

Documentation

Gentoo has always been known for the wide variety and quality of its documentation. It covers lots of aspects of Linux: topics about desktop, software, security, and most of them are not totally Gentoo-specific. That’s the reason Gentoo documentation is successful, and that’s why users from other Linux distributions use it. So you can be a part of this effort and improve the documentation.

Wiki: The wiki is our fresh project. There are lots of ways to help here. Add new articles about the topics you would like to see (and have knowledge of, of course) and want to share with the other Gentoo users. Improving and expanding wiki articles is a good way to help the project (avoid copy-pasting from other sources on the net). All users are encouraged to help; the wiki is open to everyone. Use it responsibly, because your posts will affect the Gentoo users who try to follow your guide.

Translations: If English is not your native language, translating the wiki and documentation is a very good way to help users who don’t know English and want to join the community, and it expands the Gentoo community at the same time.

(bonus) Write articles on your blog: If you find a configuration, a tool, or a new solution to a problem that saved your life in the Gentoo world, don’t be afraid to share it with other users.

Development (Code)

As I said, code is the core of any software project. So if you have some knowledge of shell scripting and programming, you are welcome to join the team. With small steps you can gain more experience with the project and contribute your own features and patches.

Bugs: Every FOSS project has its own bug tracking system, and Gentoo is no exception: we have our own Bugzilla. That is where we report our issues: build and runtime failures, kernel problems, Gentoo tool issues, stabilization requests. You can start contributing by confirming and reproducing bugs, and then try to offer solutions and fix them (patches are welcome). So feel free to report new bugs to our Bugzilla. In addition, there are requests to add or update (version bump*) ebuilds. Instead of requesting new ebuilds and version bumps, you can also write and submit your own ebuilds to our Bugzilla so that a Gentoo developer can add them to the Portage tree. Try picking up a bug from the maintainer-wanted alias. If you need a review for your ebuild, #gentoo-dev-help is the right place to ask.

* Please avoid 0day bump requests.

Arch Tester: An Arch Tester (a.k.a. AT) is a trustworthy user capable of testing an application to determine its stability. Arch Testers should have a good understanding of how ebuilds work and of bash scripting, and should test lots of packages on their arch. You can become an AT for the x86 and amd64 arches. The requirement is a stable Gentoo box: your goal will be to install and test packages from the testing branch (~arch) and see whether they are ready for the stable branch. Then you can open a stabilization request on Bugzilla.

Sunrise Project: Sunrise is a starting point for Gentoo users who want to contribute. The Sunrise team encourages users to write ebuilds and makes sure that they follow Gentoo QA standards. Sunrise’s goal is to allow non-developers to maintain their own ebuilds. For questions you can ask in #gentoo-sunrise on Freenode.

Proxy-maintaining: The goal of this team is to maintain abandoned (orphaned) packages in order to prevent the treecleaners from removing them. Pick up some packages from the maintainer-needed list and begin to maintain them. For questions you can join #gentoo-dev-help.

Bugday: Bugday is an event which takes place in #gentoo-bugs on Freenode on the first weekend of every month. You can join, pick a bug and fix it. But keep in mind that every day is a bugday: it doesn’t have to be an official bugday for you to add your ebuild and fix bugs.

Become a developer: Once you have contributed a good amount and you think you can be an active and vital member of the project, you can start the process of becoming a developer. Ask a Gentoo developer to mentor you and help you fill in the ebuild and staff quizzes; the recruitment process is then completed with a live interview with a recruiter.

There are lots of Gentoo project teams that need new members and help. Everyone can contribute to Gentoo, whether they know how to code or not. Every piece of help is useful to the project.

I think I covered the biggest part of Gentoo and how to contribute to it. I’ll wait for your comments; if you think I missed something, let me know. Fixes are always welcome.

I want to take a few moments from my deserved Christmas break to say thanks to all the donors who have contributed to our last fundraiser. After 1.5 years, we’ve been able to hit our €5000 goal. This is a big, I mean really big, achievement for such a small (I am not sure now) but awesome distro like ours.

We’ve always wanted to bring Gentoo to everyone, make this awesome distro available on laptops, servers and of course, desktops without the need to compile, without the need of a compiler! It turns out that we’re getting there.

So, the biggest part of the “getting there” strategy was to implement a proper binary package manager and to start automating the distro development, maintenance and release processes.
Even though Entropy is in continuous development mode, we’ve got to the point that it’s reliable enough. Now, we must push Sabayon even farther.

Let me keep the development ideas I had for a separate blog post and tell you here what’s been done, what we’re going to do and what we still need in 2013.

First things first: last year we bought a new and shiny build server, which is kindly hosted by the University of Trento, Italy. It features a Rack 2U chassis, dual octa-core Opteron 6128 CPUs, 48GB of RAM and, since earlier last year, 2x240GB Samsung 830 SSDs. In order to save (a lot of) money, I built the server myself, and I spent something like 2500€ (including the SSDs). Take into consideration that prices for hardware in the EU are much higher than in the US.

Now we’re left with something like 3000€ or more and we’re planning to do another round of infra upgrades, save some money for hardware replacement in case of failures, buy t-shirts and DVDs to give out at local events, etc.

So far, the whole Sabayon infrastructure is spread across 3 Italian universities and TOP-IX (see at the bottom of http://www.sabayon.org for more details) and consists of four Rack 1U servers and one Rack 2U.
Whenever there’s a problem, I jump in a car and fix issues myself (like PSU, RAM, HDD/SSD failures) or kindly delegate the task to friends who live closer.

As you can imagine, it’s easy to burn through 200-300€ whenever there’s a problem, and while we have failover plans (to EC2), these come with a cost as well.
As you may have already realized, free software does not really come for free, especially for those who are actually maintaining it. Automation and scaling out across multiple people (individuals involved in the development of this distro) are the key, and in particular the former, because it reduces the impact of “human error” on the whole workflow.

As I mentioned above, I will prepare a separate blog post about what I mean with “automation”. For now, enjoy your Christmas holidays, the NYE celebrations and why not, some gaming with Steam on Sabayon.

During the last weeks, I spent several nights playing with UEFI and its extension called UEFI SecureBoot. I must admit that I have mixed feelings about UEFI in general: on one hand, you have a nice and modern “BIOS replacement” that can boot .efi files with no need for a bootloader like GRUB; on the other hand, some hardware, and not even the most exotic, is not yet glitch-free. But that’s what happens with new stuff in general. I cannot go into much detail without drifting away from the main topic, but surely enough, a simple Google search about UEFI and Linux will point you to the problems I just mentioned.

But hey, what does it all mean for our beloved Gentoo-based distro named Sabayon? Since the DAILY ISO images dated 20121224, Sabayon can boot on UEFI systems, from DVD and USB (thanks to isohybrid --uefi) and, surprise surprise, with SecureBoot turned on! I am almost sure that we’re the first Linux distro supporting SecureBoot out of the box (update: using shim!) and I am very proud of it. This is of course thanks to Matthew Garrett’s shim UEFI loader, which chainloads our signed UEFI GRUB2 image.

The process is simple and works like this: you boot a UEFI-compatible Sabayon ISO image from DVD or USB; if SecureBoot is turned on, shim will launch MokManager, which you can use to enroll our distro key, called sabayon.der and available on our image under the ”SecureBoot” directory. Once you have enrolled the key, on some systems you’re forced to reboot (I had to on my shiny new Asus Zenbook UX32VD), but then the magic happens.

There is a tricky part however. Due to the way GRUB2 .efi images are generated (at install time, with settings depending on your partition layout and platform details), I have been forced to implement a nasty way to ensure that SecureBoot can still accept such platform-dependent images: our installer, Anaconda, now generates a hardware-specific SecureBoot keypair (private and public key), then our modified grub2-install version, automatically signs every .efi image it generates with that key, which is placed into the EFI Boot Partition under EFI/boot/sabayon ready to be enrolled by shim at the next boot.
This is sub-optimal, but after several days of messing around, it turned out that it’s the most reliable, cleanest and easiest way to support SecureBoot after install without disclosing our private key we use to sign our install media. Another advantage is that our distro keypair, once enrolled, will allow any Sabayon image to boot, while we still allow full control over the installed system to our users (by generating a platform-specific private key at install time).
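For the curious, the flow described above boils down to something like the following minimal sketch (file names are hypothetical; I’m assuming the stock openssl tool plus sbsign from sbsigntools):

# generate the platform-specific keypair at install time
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=Sabayon SecureBoot/" -keyout sb.key -out sb.crt
# convert the public half to DER, the format MokManager enrolls
openssl x509 -in sb.crt -outform DER -out sabayon.der
# sign the freshly generated GRUB2 image with the private key
sbsign --key sb.key --cert sb.crt --output grubx64.efi.signed grubx64.efi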

SecureBoot is not that evil after all: my laptop came with Windows 8 (which I just wiped off completely) and with SecureBoot disabled by default, and it lets anyone sign their own .efi binaries from the ”BIOS”. I don’t see how my freedom could be affected by this, though.

And we start the new year with more Autotools Mythbusting — although in this case it’s not with the help of upstream, who actually seems to have made it more difficult. What’s going on? Well, there have been two releases already, 1.13 and 1.13.1, and the changes are quite “interesting” — or, to use a different word, worrisome.

First of all, there are two releases because the first one (1.13) removed two macros (AM_CONFIG_HEADER and AM_PROG_CC_STDC) that were not deprecated in the previous release. After a complaint from Paolo Bonzini, related to a patch to sed to get rid of the old macros, Stefano decided to re-introduce the macros as deprecated in 1.13.1. What does this tell me? Well, two things mainly: the first is that this release was rushed out without enough testing (the beta for it was released on December 19th!); the second is that there is still no proper process for deprecating features, with clear deadlines for when they are to disappear.

This impression is further strengthened by some of the deprecations that appear in this new release, and by some of the removals that did not happen at all.

This release was supposed to be the first one not supporting the old-style name configure.in for the autoconf input script — if you have any project still using that name, you should update now. For some reason – none of which has been discussed on the automake mailing list, unsurprisingly – it was decided to postpone this to the next release. It is still a perfectly good idea to rename the files now, but you can understandably get pissed if you felt pressured into getting ready for the new release, only for the requirement to be dropped without further notice.

Another removal that was supposed to happen with this release was the three-parameter AM_INIT_AUTOMAKE call, in which the parameters substitute for those of AC_INIT instead of providing the automake options. This macro form is still common in packages that calculate their version number dynamically, such as from the git repository itself, as it’s not possible to pass a variable version to AC_INIT. Now, instead of just marking the feature as deprecated but keeping it around, the situation is that the syntax is no longer documented but is still usable. Which means I have to document it myself, as I find it extremely stupid to have a feature that is not documented anywhere but is found in the wild. It’s exactly for bad decisions like this that I started Autotools Mythbuster.
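To illustrate why projects keep relying on it, here is a minimal configure.ac sketch (hypothetical package name, not from any real project) of the dynamic versioning that only the old three-parameter form allows, since its arguments, unlike those of AC_INIT, can be shell variables:

dnl old-style initialization; the version is computed when configure runs
AC_INIT([src/main.c])
VERSION=`git describe --always 2>/dev/null || echo 0.unknown`
AM_INIT_AUTOMAKE([mypackage], [$VERSION])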

This is not much different from what happened with the AM_PROG_MKDIR_P macro, which was supposed to be deprecated/removed in 1.12, with the variables kept around for a little longer — except it first ended up being completely messed up in 1.12, to the point that the first two releases of that series dropped the variables which were supposed to stay around, and the removal of the macro (but not of the variables) is now scheduled for 1.14 because, among others, GNU gettext is still using it — the issue has been reported, and I also think it has been fixed in git already, but there is no new release, nor a date for when it will be fixed in a release.

Then there are things that changed, or were introduced, in this release. First of all, silent rules are no longer optional — this basically means that the silent-rules option to the automake initialization is now a no-op, and the generated Makefiles all have the silent-rules harness included (though not enabled by default, as usual). For me this meant a rewrite of the related section, as now you have one more variant of automake to support. Then there finally is support in aclocal for picking up the macro directory selected in configure.ac — unfortunately this meant I had to rewrite another section of my guide to account for it, and now both the old and the new method are documented in there.
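Speaking of silent rules, in practice the harness is driven as before; a quick sketch for those who have not used it yet:

./configure --enable-silent-rules   # make terse output the default
make                                # terse "  CC  foo.o" style lines
make V=1                            # verbose: print the full command lines
make V=0                            # terse, regardless of the configure default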

There are more notes in the NEWS file, and more things that are scheduled to appear in the next release, and I’ll try to cover them in Autotools Mythbuster over the next week or so — I expect that this time I’ll need to get into the details of Makefile.am, which I have tried to avoid up to now. It’s quite a bit of work, but it might be what makes the difference for so many autotools users out there, so I really can’t avoid the task at this point. In the meantime, I welcome all support, be it through patches, suggestions, Flattr, Amazon or whatever else — the easiest way is to show the guide around: not only will it reduce the headaches for me and the other distribution packagers if people actually know how to work with autotools, but the more people know about the guide, the more contributions are likely to come in. Writing Autotools Mythbuster is far from easy, and sometimes it’s not enjoyable at all, but I guess it’s for the best.

Finally, a word about the status of automake in Gentoo — I’m leaving it to Mike to bump the package in the tree; once he’s done that, I’ll prepare to run a tinderbox with it — hopefully just building the reverse dependencies of automake will be enough, thanks to autotools.eclass. By the time the tinderbox is running, I hope to have all the possible failures covered in the guide, as that will make the job of my Gentoo peers much easier.

Just wanted to take a quick moment and wish everyone a Happy New Year! It’s that day where we can all start anew, and make resolutions to do this or that (or to not do this or that). My resolution is to get back to updating my blog on a regular basis. I don’t know that it will be nearly every day like it was before I moved, but I’m going to try to post often (the backlog of topics is getting quite large).

Last Saturday evening, I sent an e-mail to a low-volume mailinglist regarding IMA problems that I’m facing. I wasn’t expecting a fast answer, of course: holidays, a weekend, and a low-volume mailinglist. But hey – it is the free software world, so I should expect some slack on this, right?

Well, not really. I got a reply on Sunday – and not just an acknowledgement e-mail, but a to-the-point answer. It was immediately correct, described why, and helped me figure things out further. And this is not a unique case in the free software world: because you are dealing with the developers and users who have written the code that you are running and testing, you get a bunch of very motivated souls, all looking at your request when they can, and giving input when they can.

Compare that to commercial support from bigger vendors: in those cases, your request probably gets read by a single person whose state of mind is difficult to know (though from the communication you often get the impression that they either couldn’t care less or are swamped with requests, so they cannot devote enough time to yours). In most cases, they check the request for containing the right amount of information in the right format in the right fields, or even ignore that you did all that right and just ask you for (the same) information again. And who knows how many times I’ve had to “state your business impact”.

Now, I know that commercial support from bigger vendors carries the burden of a huge overload of requests, but is that truly so different in the free software world? Mailinglists such as the Linux kernel mailinglist (for kernel development) get hundreds (thousands?) of mails a day, and those with requests for feedback or with questions get a reply quite swiftly. Mailinglists for distribution users get a lot of traffic as well, and each and every request is handled with due care and responded to within a very good timeframe (24 hours or less most of the time, sometimes a few days if the user is using a strange or exotic environment that not everyone knows how to handle).

I think one of the biggest advantages of the free software world is that the requests are public. That both teaches the many users on those mailinglists and fora how to handle problems they haven’t seen before, and allows users to first search for a problem before reporting it. Everybody wins. And because it is public, many users happily answer more and more questions, because they get the visibility (and acknowledgements) they deserve: they gain a position of respect in that particular area, because everyone can see how much effort (and how many good results) they contributed earlier on.

So kudos to the free software world, a happy new year – and keep going forward.

I have written a lot about the hardware IDs, but I haven’t said much about submitting new entries to the upstream databases. Indeed, the package just mirrors the data collected by the USB and PCI ID databases, which are managed by Stephen, Martin and Michal.

As an example, I’ll show you how I’ve been submitting the so-called Subsystem IDs for PCI devices from computers I either own, or fix up for customers and friends.

First off, you have to find a system or device whose subsystem IDs have not been submitted yet. Unfortunately I don’t have any computer at hand that I haven’t already submitted to the database. But fear not — it so happens I had an interesting opening: I recently rented a server from OVH, as I’ve had some trouble with one of my production hosts lately, and I’m entertaining the idea of moving everything to a new server and service altogether. But that whole thing is a topic for a completely different time. In any case, let’s see what we can do about these IDs now that I have an interesting system at hand.

First of all, while I don’t have the server at hand to know what’s in it, OVH does tell me what hardware it has — in particular they tell me it’s an Intel D425KT board (yes, I got a Kimsufi Atom; I got a three-month lease for now and I’ll see if it can perform decently enough), so that’s a start. Alternatively, I could have asked dmidecode — but I just don’t have it installed on that server right now.

This is of course only the first entry in the list, but it’s still something. You can see on the second line that it says “Subsystem: Intel Corporation Device 544b” — that means that it knows the subsystem vendor (ID 8086, I can tell you by heart — they have been funny at that), but it doesn’t know the subsystem device. So it’s what we’re looking for: an unknown system! Time to compare with the output of lspci -vn — that variant does not resolve the IDs, and we’ll need the raw IDs to submit to the PCI database. If you’re not registered there already, do register, so that they can be submitted to begin with.
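For reference, these are the two views being compared, plus a shortcut that, if your pciutils is recent enough, shows both at once:

lspci -v     # resolved names, including the "Subsystem:" lines
lspci -vn    # raw numeric vendor/device IDs, as the database wants them
lspci -vnn   # both names and numeric IDs side by side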

Okay, so now we know that our first device is Intel’s (VID 8086) and has a000 as its device ID — which brings us to https://pci-ids.ucw.cz/read/PC/8086/a000 — easy, isn’t it? At the end of the page there’s a list of the known subsystem IDs; pending submissions do not show their name, but they do show up in the table, with a darker gray background. All PCI ID entries are moderated by hand by the database’s maintainers. By the time you read this, the entry for my board will already be in, but right now it isn’t — if it wasn’t obvious, I’m looking for an entry that reads 8086 544b (which is under “Subsystem” above).

Now the form requires just a few words: the ID itself – which is “8086 544b”, with a space, not a colon – and a name. The Note field is for something that needs to be written in pci.ids itself, so in most cases it should be empty. Discussion is for when you want to comment on the certainty of your submission; for my laptop, for instance, we had some trouble with “Intel Corporation Device 0153” — which is now officially “3rd Gen Core Processor Thermal Subsystem”.

The name I’m going to submit is “Desktop Board D425KT” as that’s what the other entry in the database for that device uses as a format — okay it actually uses DeskTop but I’d rather not capitalize another T and see a kitten cry.

Now it’s time to go through all the other entries in the system — yes there are many of them, and most of the time the IDs are not set in the order of the PCI connections, so be careful. More interestingly, not all the subsystems are going to be listed in the same line. Indeed, the third entry that I have is this:

The subsystem ID is listed under “Capabilities” instead — but it’s always the same ID. This is actually critical: if the subsystem does not match, it means that it’s coming from a different component — for instance, if you’re building your own computer, the subsystem of the internal CPU devices and that of the motherboard will not match, as they come from different vendors. And the same goes for add-on cards (PCI, PCI-E, AGP, …).

Sometimes, a different subsystem also shows up on internal components that get different names from the motherboard itself — in this case, the Realtek network card on this motherboard reports a completely different ID, and I really don’t know how to submit it:

If for whatever reason you make a mistake, you can click on the “Discuss” link on the submitted content and edit the name that you want to submit. I did make such a mistake while submitting the IDs for this board.

Unfortunately, every time we have a big list to keyword or stabilize, repoman complains about missing packages. So, in this post I will give you a solution to avoid this problem.

First, please download the batch-pretend script from my overlay.
I’m not a Python programmer, but I was able to edit the script made by Paweł Hajdan: I just deleted the Bugzilla commit part and made the script print the repoman full output if the list is not complete.
This script works only with =www-client/pybugz-0.9.3.

Now, to check whether repoman will complain about your list, you need to run:

./batch-pretend.py --arch amd64 --repo /home/ago/gentoo-x86 -i /tmp/yourlist

where:

Batch-pretend.py is the script (obviously);

amd64 is the arch that you want to check. You will use ~amd64 for the keywordreq;

/home/ago/gentoo-x86 is the local copy of the CVS;

/tmp/yourlist is the list which contains the packages;

Few useful notes:

If you want to check several arches, you can use a simple for loop:

for i in amd64 x86 sparc ppc ; do
./batch-pretend.py --arch "${i}" --repo /home/ago/gentoo-x86 -i /tmp/yourlist
done

The script will run ekeyword, so it will touch your local CVS copy of gentoo-x86. If this is not your intention, please make another copy and work there, or don’t forget to run cvs up -C afterwards.

Before doing this work, you need to run cvs up in the root of your gentoo-x86 local CVS.

The list must be structured in this format:

# bug #445900
=app-portage/eix-0.27.4
=www-client/pybugz-0.9.3
=dev-vcs/cvs-1.12.12-r6
#and so on..

Those of you who follow me on Google Plus (or Facebook) already know this, but the other day I was wondering whether I should have flashed my Kindle Fire (first generation) with CyanogenMod instead of keeping it on the original Amazon operating system. This is the tale of what I did, which includes a big screwup on my part.

But first, a small introduction. I’m the first person to complain about people “jailbreaking” iPhones and the like, as I think that if you buy something that you have to modify to make useful, then you shouldn’t have bought it in the first place. Especially if you use the name “jailbreak” to justify an act that most of the public uses to pirate software — I firmly maintain that if we want Free Software licenses to be respected, we have to consider EULAs just as worthy of respect; that is, you can argue that they are evil, but you can’t call for disrespecting them.

But I have made exceptions before, and that mostly happens when the original manufacturer “forgets” to provide updates, or fails to follow through with promised features. An example of this for me was when I bought an AppleTV, hoping that Apple would keep their promise of entering the European market for TV series and movies, so that it would become useful. While they do have something now, they don’t offer the ability to buy them to watch in the original English (which makes it useless to me), and even that came only after I had decided to just drop the device because it wasn’t keeping up with the rest of the world. At the time, to avoid having to throw the device away, I ended up using the hacking procedure to turn it into an XBMC device.

So in this case the problem was that after coming back home from Los Angeles, I barely touched the Kindle Fire at all. Why? Well, even though I did buy season passes for some TV series (Castle, Bones, NCIS), which would allow me to stream them on Linux (unlike Apple’s store, which only works on their devices or with their software, and unlike Netflix, which does not work on Linux) and download them to the Kindle Fire, neither option works outside of the United States — so to actually download the content I paid for, I have to use a VPN.

While it’s not straightforward, it’s possible to set up such a VPN connection on the iPad and have it connect to Amazon through said VPN; there is no way to do so on the Kindle Fire (there’s no VPN support at all). So I ended up leaving it untouched, and after a month I was concerned about my purchase. So I started considering what the compelling features of the Kindle Fire were, compared to any other Android-based tablet. It mostly came down to the integration with Amazon: the books, the music and the videos (TV series and movies).

As far as the books are concerned, the Kindle app for Android is just as good as the native one — the only thing that is missing is the “Kindle Owners’ Lending Library”, but since I rarely read books on the Fire, that’s not a big deal (I have a Kindle Keyboard that I read books on). As for the music, while I did use the Fire a few times to listen to it, it’s not a required feature, as I have an iPod Touch for that, which also comes with an Amazon MP3 application.

There is also the integration with the Amazon App Store, but that’s something that tries to make up for the lack of Google Play support — and in general there isn’t that much content in there. Lots of applications, even when available, are compatible with my HTC Desire HD but not with the Kindle Fire, so what’s the point? Audiobooks are not native — they are handled through the Audible application, which is available on Google Play, and also on my iPod Touch, so that’s no selling point either.

So, about the videos — that’s actually the sole reason why I ordered it. While it is possible to watch the streamed videos on Linux, Flash would take over my monitor and not let me work while watching something, so I wanted a device I could stream the videos to and watch on… A couple of months after I bought the Fire, though, Amazon released an Instant Video application for the iPad, making it quite moot. Especially since the iPad has the VPN access I noted before, and I can connect the HDMI adapter to it and watch the streams on my 32" TV.

All this considered, the videos were the only thing that would really be lost if I stopped using the Amazon firmware. So I looked it up and found three guides – 1, 2, 3 – that would get me set up with an Android 4.1, CyanogenMod 10 based ROM. Since the device is very simple (no Bluetooth, no GPS, no baseband, no NFC), supporting it should be relatively easy; the only problem, as usual, is making sure you can root and flash it.

Unfortunately, when I went to flash it, I made a fatal mistake: instead of flashing the bootloader’s image (a modified u-boot), I flashed its zip file. And the device wouldn’t boot up anymore. Thankfully, there are people like Christopher and Vladimir who pointed out to me that the CPU in that tablet (TI OMAP) has a USB boot option — but it requires shorting one very tiny, nigh-microscopic pad on the main board to ground so that it tries to boot from there. Lo and behold, thanks to a friend of mine with less shaky hands who happened to be around, I was able to follow the guide to unbrick the device, and got the CM10 ROM onto it.

Now that I finally have an Android 4 device (the HTC is still running the latest available CM7 — if somebody has a suggestion for a CM10 ROM that does not add tons of customization, and that doesn’t breach the Google license by bundling the Google Apps, I’d be happy to update), I’ve been able to test Chrome for Android, and VLC as well — and I have to say that it’s improving tons. Of course there are still quite a few things that are not really clean (for example, there is no Flickr application that can run there!), but it’s improving.

If I were to buy a new tablet tomorrow, though, I would probably be buying a Samsung Galaxy Note 10 — why? Well, because I finally got hold of a test version of it at the local Mediamarkt Mediaworld, and the pen accessory is very nice to use, especially if you’re used to Wacom tablets; that would give sense to a 10" tablet for me. I’m a bit upset with my iPad’s inability to do precise drawing, to be honest. And since it’s not very commonly known: the Galaxy Notes don’t use capacitive pens, but magnetic ones, just like the above-noted Wacoms — that’s why they are so precise.

I have been playing with Linux IMA/EVM on a Gentoo Hardened (with SELinux) system for a while, and have been documenting what I think is interesting or necessary for Gentoo Linux users who want to use IMA/EVM as well. Note that the documentation of the Linux IMA/EVM project itself is quite decent. It’s all on a single wiki page, but it’s well written and I learned a lot from it.

That being said, I do have the impression that the method they suggest for generating IMA hashes for the entire system is not always working properly. It might be because of SELinux on my system, but for now I’m searching for another method that does seem to work well (I’m currently trying my luck with a find … -exec evmctl based command). But once the hashes are registered, it works pretty well (well, there’s probably a small SELinux problem where loading a new policy or updating the existing policies seems to generate stale rules, and I have to reboot my system, but I’ll find the culprit of that soon ;-)
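For the record, the kind of command I’m experimenting with looks roughly like this (a sketch only: the exact predicates are still in flux, and I’m assuming the evmctl ima_hash syntax from ima-evm-utils):

# hash all regular, root-owned files on the ext4 root file system
find / -fstype ext4 -type f -uid 0 -exec evmctl ima_hash '{}' \;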

The IMA Guide has been updated to reflect recent findings – including how to load a custom policy – and I have also started on the EVM Guide. I think it’ll take me a day or three to finish off the rough edges, and then I’ll start creating a new SELinux node (KVM) image that users can use, with the various Gentoo Hardened-supported technologies enabled (PaX, grSecurity, SELinux, IMA and EVM).

So if you’re curious about IMA/EVM and willing to try it out on Gentoo Linux, please have a look at those documents and see if they assist you (or confuse you even more).

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again. It was a combination of the portage metadata location moving and a small bit of sloppy code in part of the import script that made me roll my eyes. It’s fixed now, but the site still isn’t importing everything correctly.

I’ve been putting off working on it for so long, just because it’s a hard project to get to. Since I started working full-time as a sysadmin about two years ago, it killed off my hobby of tinkering with computers. My attitude shifted from “this is fun” to “I want this to work and not have me worry about it.” Comes with the territory, I guess. Not to say I don’t have fun — I do a lot of research at work, either related to existing projects or new stuff. There’s always something cool to look into. But then I come home and I’d rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn’t really have anything to hack on. Znurt went down, but I didn’t really have a good development environment anymore. On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason. Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff. Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix. Derp. That’s what I get for putting stuff off.

One thing I’ve found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me. At work, I only write shell scripts now (bash) and we use MySQL across the board. Postgres is an amazing database, and it’s amazing how, even after not using it regularly in a while, it all comes back to me. I love that database. Everything about it is intuitive.

Anyway, I was looking through the import code and doing some testing. I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts. Looking into it, I found that the MDB2 PEAR package has a memory leak, which kills the scripts because they run so many queries. So, I’m in the process of moving the code to PDO instead. I’ve wanted to look into it for a while, and so far I like it, for the most part. Its fetch helper functions are pretty lame and could use some obvious features, like fetching a single value or returning result sets as associative arrays, but it’s good. I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all. It’ll be faster, and importing the data from portage will be more accurate. I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest. Well, maybe I will, I dunno.

Either way, it’s kinda cool to get into the code again, and see what’s going on. I know I say this a lot with my projects, but it always amazes me when I go back and I realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database. I thought it’d be a simple case of reading metadata and throwing it in there, but there’s all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string. Fortunately, there’s easier ways to query that stuff now, so the goal is to get it more up to date.

It’s kinda cool working on a big code project again. I’d forgotten what it was like.

Adventurous users, contributors and developers can enable the Integrity Measurement Architecture subsystem in the Linux kernel with appraisal (since Linux kernel 3.7). In an attempt to support IMA (and EVM and other technologies) properly, the System Integrity subproject within Gentoo Hardened was launched a few months ago. And now that Linux kernel 3.7 is out (and stable) you can start enjoying this additional security feature.

With IMA (and IMA appraisal), you are able to protect your system from offline tampering: modifications made to your files while the system is offline will be detected, as their hash values will no longer match the hash values stored in their extended attributes (the extended attributes themselves are in turn protected through digitally signed values, using the EVM technology).
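If you want to experiment, the kernel side boils down to a handful of options (a sketch based on the standard option names; double-check against the guides mentioned below):

CONFIG_INTEGRITY=y
CONFIG_IMA=y
CONFIG_IMA_APPRAISE=y    # the appraisal support introduced in Linux 3.7
CONFIG_EVM=y             # protects the extended attributes themselves

If I understand the workflow correctly, booting once with the ima_appraise=fix kernel parameter is what allows the initial hashes to be generated before you switch to enforcing mode.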

I’m working on integrating IMA (and later EVM) properly, which of course includes the necessary documentation: concepts and an IMA guide for starters, with more to follow. Be aware, though, that the integration is still in its infancy, but any questions and feedback are greatly appreciated, and bug reports (like bug 448872) are definitely welcome.

So after my post about glibc 2.17, we got the ebuild in the tree, and I’m now re-calibrating the ~amd64 tinderbox to use it. This sounds like an easy task, but it really isn’t. The main problem is that with the new C library you want to make sure to start afresh: no pre-compiled dependencies should remain, or problems won’t be found — you want the highest coverage possible, and that takes some work.

So how do you re-calibrate the tinderbox? First off you stop the build, and then you have to clean it up. The cleanup is sometimes as easy as emerge --depclean — but in some cases, like this time, the Ruby packages’ dependencies caused a bit of a stir, so I had to remove them altogether with qlist -I dev-ruby virtual/ruby dev-lang/ruby | xargs emerge -C, after which the depclean command actually started working.

Of course it’s not a two-minute command like on any other system, especially when going through the “Checking for lib consumers” step — the tinderbox has 181G of data in its partition (a good deal of which is old logs that I should actually delete at this point — and no, that won’t delete the logs in the reported bugs, as those are stored on S3!), not counting the distfiles (which are shared with its host).

In this situation, if there were automagic dependencies on system/world packages, it would actually bail out and I’d have to go manually clean them up. Luckily for me, there’s no problem today, but I have had this kind of problem before. This is actually one of the reasons why I want to keep the world set in the tinderbox as small as possible — right now it consists basically of: portage-utils, gentoolkit (for revdep-rebuild), java-dep-check, Python 2.7 (it’s an old thing, it might be droppable now, not sure), and netcat6 for sending the logs back to the analysis script. I would have liked to remove netcat6 from the list but last time the busybox nc implementation didn’t work as expected with IPv6.

The unmerge step should be straightforward, but unfortunately it seems to cause more grief than expected, in many cases. What happens is that Portage has special handling for symlinked directories — and after we migrated to /run instead of /var/run, all the packages that have not been migrated away from using keepdir on it, ebuild-side, will spend much more time at the unmerge stage making sure nothing gets broken. This is why we have a tracker bug and I’ve been reporting ebuilds creating the directory, rather than just packages that do not re-create it in the init script. Also, this is when I’m thankful I decided to get rid of XFS, as file deletion there was just way too slow.

Even though Portage takes care of verifying the link-time dependencies, I’ve noticed that sometimes things are broken nonetheless, so depending on what one’s target is, it might be a good idea to just run revdep-rebuild to make sure that the system is consistent. In this case I’m not going to waste the time, as I’ll be rebuilding the whole system in the next step, after glibc gets updated. This way we’re sure that we’re running with a stable base. If packages are broken at this level, we’re in quite the pinch, but it’s not a huge deal.

Even though I’m keeping my world file to the minimum, the world and system set is quite huge when you add up all the dependencies. The main reason is that the tinderbox enables lots and lots of flags – as I want to test most code – so things like GTK+ get brought in (by GCC, no less), and the cascade effect can be quite nasty. The system rebuild can easily take a day or two. Thankfully, the design of the tinderbox scripts means that the logs are sent through the bashrc file, and not through the tinderbox harness itself, so even if I get failures at this stage, I’ll get a log for them in the usual place.
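Condensed into commands, the re-calibration described above looks more or less like this (a rough sketch, not the actual tinderbox scripts):

emerge --depclean            # drop everything not required by system/world
emerge -1 sys-libs/glibc     # bring in the new C library
emerge -e @world             # rebuild system and world from scratch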

After this is completed, it’s finally possible to resume the tinderbox building, and hopefully some things will then work more as intended — for instance, I might be able to get PHP to work again… and I’ll probably change the tinderbox harness to try building things without USE=doc if they fail, as too many packages right now fail with it enabled or, as Michael Mol pointed out, because of circular dependencies.

So expect me to be working on the tinderbox for the next couple of days, and then to start reporting bugs against glibc-2.17 — I’ve already opened the tracker for it, even though it’s empty at the time of writing.

One year after my last blog post on this topic, I encountered some minor difficulties combining KDEPIM-4.4 (i.e. kmail1) with the KDE 4.10 betas. These difficulties are fixed now, and the combination seems to work fine again. Anyway, I became curious about the level of stability of the Akonadi-based kmail2 once more. After all, I've been running it continuously over the past year on my office desktop, with a constant-on fast internet connection, and there it works quite well. So, I gave it a fresh try on my laptop too. I deleted my Akonadi configuration and cache, switched to the Akonadi MySQL backend, updated kmail and the rest of KDEPIM to 4.9.4 without migrating, and re-added my IMAP account from scratch (with "Enable offline mode"). The overall use case description is "laptop with a large amount of cached files from an IMAP account, and fluctuating internet connectivity". Now, here are my impressions...

Reaction time is occasionally sluggish, but overall OK.

The progress indicator behaves a bit oddly: it checks the mail folders in seemingly random order and only knows 0% and 100% completion.

Random warning messages. It seems that kmail2 uses some features that "my" IMAP server does not understand, so I'm getting frequent warning notifications that don't tell me anything and that I cannot do anything about. SET ANNOTATION, UID, ... Please either handle the errors, inform the user what exactly goes wrong, or ignore them in case they are irrelevant. Filed as a wish, bug 311265.

Network activity sometimes stops working. This sounds worse than it actually is, since in 99% of all cases Akonadi now detects just fine that the connection to the server is broken (e.g., after suspend/resume, after switching to a different WLAN, or after enabling a VPN tunnel) and reconnects immediately. In the few remaining cases, re-starting the Akonadi server does the trick. You just have to know what to kick.

More problematic: while you're in online mode, any connectivity problem will make kmail "hang". Clicking on a message leads to an attempt to retrieve it, which requires a response from the network. As far as I can tell, all such requests are queued up for Akonadi to handle, and if it does not get a reply, pending requests are stuck in the queue... OK, you might say that this is a typical use case for offline mode, but then I would have to be able to predict when exactly my train enters a tunnel... Compare this to kmail1 disconnected IMAP accounts, where regular syncing would be delayed, but local work remained unaffected.

Offline mode is a nice concept, and half a solution for the last problem, but unfortunately it does not work as expected. For mysterious reasons, a considerable part of the messages is not cached locally. I switch my account to offline mode, click on a message, and obtain an error message "Cannot fetch this in offline mode". Well, bummer. Bug 285935.

This may just be my personal taste, but once something goes wrong (e.g., non-kde related crash, battery empty, ...) and the cache becomes corrupted somehow, I'd like to be able to do something from kmail2 without having to fiddle with akonadiconsole. A nice addition would be "Invalidate cache" in the context menu of a mail folder, or some sort of maintenance menu with semi-safe options.

Finally... something is definitely going wrong with PGP signatures; the signatures do not always verify in other mail clients. Tracking this down, it seems that CRLF is not preserved in messages; see bug 306005.

On the whole, for the laptop use case the "new" KDEPIM is now (4.9.4) more mature than the last time I tried. I'll keep it now and not downgrade again, but there are still some significant rough edges. The good thing is, the KDEPIM developers are aware of the above issues and debugging is going on, as you can see for example from this blog post by Alex Fiestas (whose use case pretty much mirrors my own).

We had a marathon with Alexandre (tetromino) over the last two weeks to get the Gnome 3.6 ebuilds using the python-r1 eclass variants, EAPI=5, and gstreamer-1. And now it is finally in gentoo-x86, unmasked.

You have probably read, heard or seen stuff about EAPI=5 and the new Python eclasses before but, in short, here is what they will give you:

the package manager will finally know for real which Python version is used by each package, and will be able to act on it accordingly (no more python-updater once all ebuilds are migrated)

EAPI=5 subslots will hopefully put an end to revdep-rebuild usage. I already saw them in action while bumping some of the telepathy packages, discovering that empathy was now automatically rebuilt with no further action than emerge -1 telepathy-logger (see the sketch below).
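To give an idea of the mechanism (with a hypothetical package name, not an actual ebuild from the tree): the := slot operator records the provider's subslot at build time, so the package manager knows to rebuild the consumer whenever that subslot changes.

# hypothetical consumer ebuild fragment
EAPI=5
RDEPEND="media-libs/libfoo:="   # rebuilt automatically when libfoo changes subslot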

No doubt lots of people are going to love this.

Gnome 3.6 probably still has a few rough edges, so please check Bugzilla before filing new reports.

So LWN reports just today on the release of GLIBC 2.17, which solves a security issue and looks like it was released mostly to support the new AArch64 architecture – i.e. arm64 – but the last entry in the reported news is possibly going to be a major headache, and I’d better post about it already so that we have a reference for it.

I’m referring to this:

The `clock_*' suite of functions (declared in <time.h>) is now available directly in the main C library. Previously it was necessary to link with -lrt to use these functions. This change has the effect that a single-threaded program that uses a function such as `clock_gettime' (and is not linked with -lrt) will no longer implicitly load the pthreads library at runtime and so will not suffer the overheads associated with multi-thread support in other code such as the C++ runtime library.

This is in my opinion the most important change, not only because, as it’s pointed out, C++ software would get quite an improvement from not linking to the pthreads library, but also because it’s the only change listed there that I can already foresee trouble with. And why is that? Well, that’s easy. Most of the software out there will do something along these lines to see what library to link to when using clock_gettime (the -lrt option was not always a good idea because it doesn’t exist on most other operating systems out there, including FreeBSD and Mac OS X):

AC_SEARCH_LIBS([clock_gettime], [rt])

This is good, because it’ll try either librt or no library at all (“none required”), which means it’ll work on old GLIBC systems, new GLIBC systems, FreeBSD, and OS X — there is something else on Solaris if I’m not mistaken, which can be added up there, but I honestly forgot its name. Unfortunately, this can easily end up in more trouble when software is underlinked.

With the old GLIBC, it was possible to link software with just librt and have it use the threading functions. Once librt is dropped automatically by the configure script, threading libraries will no longer be brought in by it, and that might break quite a few packages. Of course, most of these would already have been failing with gold but, as you might remember, I wasn’t able to get through the whole tree with it, and I haven’t set up a tinderbox for it again yet (I should, but it’s trouble enough with two!).

What about --as-needed in this picture? A fully strict implementation would fail on the underlinking, where pthreads should have been linked explicitly, but it would also make sure not to link librt when it’s not needed, which would make it possible to improve the performance of the code (by skipping over pthreads) even when the configure scripts are not written properly (for instance, if they use AC_CHECK_LIB instead of AC_SEARCH_LIBS). But since it’s not the linking of librt that causes the performance issue, but rather that of pthreads, it actually works out quite well, even if some packages might keep an extra, unused link to librt.

There is a final note that I need to write about, and it honestly worries me quite a bit more than all those above. The librt library has not been dropped — only the clock functions have been moved over to the main C library; librt keeps the asynchronous and list-based I/O operation interfaces (AIO and LIO), the POSIX message queue interfaces, the shared memory interfaces, and the timer interfaces. This means that if you’re relying on a clock_gettime test to bring in librt for those, you’ll end up with a failing package. Luckily for me, I’ve avoided that situation already in feng (which uses the message queue interface), but as I said, I foresee trouble for at least some packages.

Well, I guess I’ll just have to wait for the ebuild for 2.17 to be in the tree, and run a new tinderbox from scratch… we’ll see what gets us there!

I have posted a note about the way the FSF (America) started acting like a dictator with the GNU project and the software maintained under its umbrella, which led to the splitting off of GnuTLS — something Nikos is not currently commenting on, simply because he’s now negotiating what’s going to happen with it.

Well, the next step has been Paolo stepping down as a GNU maintainer, after releasing a new version of sed. This actually made me think a bit more. What’s going on with sed, grep and the like? Well, most likely they’ll get a new maintainer and keep going that way. But should we see this as an opportunity? You probably remember that some time ago I suggested we could be less GNU — or at least, less reliant on GNU.

So while I’m definitely not going to fork sed myself – I have enough trouble with unpaper, especially considering that while in America I didn’t have a scanner, which is a necessity to develop it – there definitely is room for improvement. First of all, it would be a good choice to start by getting rid of the damn gnulib, eventually implementing what is really an extension of glibc itself as an external library (something like libgsupc). Even if this worked on nothing but FreeBSD and Linux, it would still be an improvement, and I’m pretty sure it would be feasible without that hairy mess of code which, in the source tree of sed, takes five times as much space as the sed sources themselves — 200KiB for the program’s sources, 1.1MiB for the gnulib copy.

Having a new, much less political project to oversee the development of core system utilities would also most likely consolidate some projects that are currently developed outside of GNU altogether, or that simply don’t fit its scope because they are Linux-specific, which would probably make for a better end-user experience. Plus, things like keeping man pages actually up to date, instead of relying on the info manuals, would almost certainly help!

So, can any of you think of other ways to improve the GNU utilities by breaking out of GNU’s boundaries (which is what Nikos and Paolo seem to be striving for)? Maybe it is possible to get something that is better for everybody, and Free at the same time. Myself, I know I need to spend some time to fix the dependency upon readline that is present in GnuTLS just for the utilities…

Gentoo Linux is proud to announce the availability of a new LiveDVD to
celebrate the continued collaboration between Gentoo users and developers,
ready to rock the end of the world (or at least mid-winter/Southern Solstice)!
The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo
Infrastructure Team. Their hard work behind the scenes provides the
resources, services and technology necessary to support the Gentoo Linux
project.

If you want to see if your package is included we have generated both the
x86 package
list, and amd64 package
list.
There is no new FAQ or artwork
for the 20121221 release, but you can still get the
12.0 artwork plus
DVD cases and covers for the 12.0 release; and view the 12.1 FAQ (persistence mode
is not available in 20121221).

Special Features:

ZFSOnLinux

Writable file systems using AUFS so you can emerge new packages!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and
an x86_64 multilib version. The livedvd-x86-amd64-32ul-20121221 version
will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then
boot with the default gentoo kernel. If your arch is amd64, boot with the
gentoo64 kernel. This means you can boot a 64-bit kernel and install a
customized 64-bit userland while using the provided 32-bit userland. The
livedvd-amd64-multilib-20121221 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest
x86
image or amd64
image file.

If you need support or have any questions, please visit the discussion thread
on our forum.

Warning: This post relies on unreleased blohg features. You will need
to install blohg from the
Mercurial repository or use the
live ebuild (=www-apps/blohg-9999) if you are a Gentoo user. Please ignore
this warning after the blohg-1.0 release.

Tumblelogs are old stuff, but
services like Tumblr have popularized them a lot recently.
Tumblelogs are a quick and simple way to share random content with readers.
They can be used to share a link, a photo, a video, a quote, a chat log, etc.

blohg is a good blogging engine, we know, but what about tumblelogs?!

You can already share videos from Youtube and Vimeo, and you can share most
other stuff manually, but that is boring, and it diverges from the main
objective of tumblelogs: simplicity.

To solve this issue, I developed a blohg extension
(Yeah, blohg-1.0 supports extensions! \o/ ) that adds some cool
reStructuredText directives:

link

This directive is used to share links. It will embed the content of the link
in the post automatically if the provided link is from a service that supports
the oEmbed protocol. If it isn't, and the link points to an HTML page, it will
include the link with the title of the page in the post. Otherwise it will
just include the raw link in the post.

Usage example:

.. link:: http://www.youtube.com/watch?v=gp30v6XMxBg

quote

This directive is used to share quotes. It will create a blockquote element
with the quote, and add a signature with the author's name, if provided.

Usage example:

.. quote::
   :author: Myself

   This is a random quote!

chat

This directive is used to share chat logs. It will add a div with the chat log,
highlighted with Pygments.

I was in Budapest for 11 days. I couchsurfed there, and that is longer than I normally stay at someone’s house, by far. So, thanks Paul! Budapest was nice; it reminded me a lot of Prague. While I was there, I visited a Turkish bath, which was a very interesting experience: imagine a social, public “hot tub & sauna” with naturally hot water. I found a newly minted Crossfit gym, RC Duna, that opened up its doors for a traveller, so gracious. Even though I didn’t get to see the opera in Vienna, I went to the opera house in Budapest; it was my first time seeing a ballet, The Nutcracker. There were Christmas markets in Budapest too, and I actually liked them more than the Viennese ones. I also helped to organize the first (known) Hungarian Gentoo Linux Beer Meeting.

Then I took a train to Belgrade, Serbia. The train ride was 8+ hours. I couchsurfed again, for 3 nights, and had some wonderful chats with my host, Ljubica. She learned about US things, I learned about Serbian things: just what you could hope for, a cultural exchange via couchsurfing. I was her first US guest. Later on, an Argentinian fellow stayed there too, and we had conversations about worldly topics, like “why are borders so important, and do we need them?” and speculating about why Belgium’s lack of government even worked. Then, perhaps the best part, I got to try authentic mate. In my opinion there wasn’t much to actually see in Belgrade during the winter; I did walk around and went to the fortress. Otherwise, I nursed the head cold which I had caught on the train.

I took the bus to Skopje, FYROM. I stayed in Skopje for 3 nights at a nice independent hostel, Shanti Hostel (recommended). I walked around the center (not much to see), walked through the old bazaar, and ate some good food. The dishes in Central Europe include lots of meat. I embarked on a mission to find the semi-finalist entry for the next 7 wonders of the world, Vrelo Cave, but I got lost and instead took a 10 km hike along the river. It was spectacular! And peaceful. Perfect, really. I wanted to see what was at the end of the trail, but eventually turned around because it didn’t end. On the way back, I slipped and came within feet of going in the drink. As my legs straddled a tree and my feet went through branches that were clearly not meant to handle any weight, I used that split second to be thankful. I used the next second to watch something black go bounce, …, bounce, SPLASH. It is funny how quickly you can go from being thankful to cursing about your camera in the river. I got up, looked around, and thought about how I had gotten off the path. Dang. Being the frugal man I am, I continued off the path and went searching for my camera. Well, that was a bad idea, because I slipped again. Sliding on my ass and grabbing at branches, I eventually stopped. At that point I knew my camera was gone, since I could see the battery had popped out and was in the water. Le sigh. C’est la vie.

So, no pictures, friends. I had a few hundred pictures that I hadn’t uploaded, and they are gone. I might buy a camera again, but for now you will just have to take my word for it. My Mom says she will send me a disposable camera, ha.

When you are running Gentoo with SELinux enabled, you will be running with a particular policy type, which you can derive from either /etc/selinux/config or the output of the sestatus command. As a user on our IRC channel had some issues converting his strict-policy system to mcs, I thought about testing it out myself. Below are the steps I took and the reasoning behind them (and I will update the docs to reflect this accordingly).

Let’s first see if the type I am running at this moment is indeed strict, and that the mcs type is defined in the POLICY_TYPES variable. This is necessary because the sec-policy/selinux-* packages will then build the policy modules for the other types referenced in this variable as well.
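A quick check along these lines confirms it (POLICY_TYPES lives in /etc/portage/make.conf; the output shown is illustrative):

# show the currently loaded policy type
sestatus | grep "Policy from config file"
  Policy from config file:        strict
# verify that mcs is listed next to strict
grep POLICY_TYPES /etc/portage/make.conf
  POLICY_TYPES="strict mcs"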

Great, we’re now going to switch to permissive mode and edit the SELinux configuration file to reflect that we will (later) boot into the mcs policy. Only change the type; I will not boot in permissive mode, so SELINUX=enforcing can stay.
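In commands, that boils down to (use any editor you like):

# switch to permissive mode for the duration of the migration
setenforce 0
# in /etc/selinux/config, set SELINUXTYPE=mcs and keep SELINUX=enforcing
vi /etc/selinux/config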

Next we are going to relabel all files on the file system, because the mcs policy adds another component to the context (a sensitivity label, always set to s0 for mcs). We will also redo the setfiles steps done initially while setting up SELinux on our system. This is because we need to relabel files that are “hidden” from the current file system because other file systems are mounted on top of them.
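A sketch of the relabel; the bind-mount path is an example, and the setfiles step should be repeated for every location hidden by a mounted file system:

# relabel all files belonging to installed packages for the new policy
rlpkg -a -r
# relabel files hidden underneath other mounts, e.g. /dev
mount -o bind / /mnt/gentoo
setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/dev
umount /mnt/gentoo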

Finally, edit /etc/fstab and change all rootcontext= parameters to include a trailing :s0, otherwise the root contexts of these file systems will be illegal (in the mcs sense) as they do not contain the sensitivity level information.
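For example, a tmpfs line would change like this (the mount point and context are illustrative):

# before
tmpfs   /tmp   tmpfs   defaults,rootcontext=system_u:object_r:tmp_t      0 0
# after
tmpfs   /tmp   tmpfs   defaults,rootcontext=system_u:object_r:tmp_t:s0   0 0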

the assignment was to encode a word or phrase with the Morse method, and then translate that sequence into the song’s underlying rhythm.

i chose the meaning of my name, “the Lord is salvation.” i looked at the resulting dashes and dots and treated them as sheet music, improvising a minor-key motif for piano, using just my right hand.

with the basic sketch recorded, i duplicated an excerpt and ran it through a vintage tape delay effect, putting it in the background almost like a loop. i set to work adding a few notes here and there, some of them reversed, running into more tape delays, contrasting their sonic character with the main melody. the loop excerpt repeats a few times, occasionally transformed by offset placement with the main theme, or reinforced by single note chord changes.

from a very few audio fragments, a mournful story emerged. echoing piano lines and uncovered memories. i did my best to vary the structure while keeping the mood and emotions, but this is still pretty hasty work; i only had a few minutes to arrange this piece before the deadline, due to software issues with ardour 3 beta. ardour crashes every time i attempt to process an audio clip, such as reversing or stretching it. i had to separately render those segments with renoise, then import them to ardour.

"Using/installing Ubuntu is like buying a car. It may have a few features you'll never need or use, and might need to have a couple features added as aftermarket parts.

Using/installing Gentoo is like buying a pile of sheet metal, a few rubber trees, a small pile of copper, a pile of sand, and an oil well. Then you have to cut and fabricate the car's body from the sheet metal, extract the rubber from the trees, and use that to make the tires and all the seals on the car. Use the pile of copper to make all the wires, and use the leftover rubber (you did save the scraps, didn't you?) to make the insulation. Melt down the pile of sand to make the windshield, the side and back windows, and the headlight lenses. Then you need to extract the crude oil from the well to refine your own engine oil and gas. In the end, you have a car created to your exact specifications (if you know what the hell you're doing) that may or may not be any better than just buying a car off the lot."

Of course I should additionally mention that Gentoo provides awesome documentation for all the steps and most of the actual assembly work is done single-handedly by portage!

A topic that has been fairly quiet for years has roared into life on a few separate occasions in the last month within the Gentoo community: copyright assignments. The goal of this post is to talk a little about the issues around these as I see them. I’ll state upfront that I’m not married to any particular approach.

But first, I think it is helpful to consider why this topic is flaring up. The two situations I’m aware of where this has come up in the last month or so both concern contributions (willing or not) from outside of Gentoo. One concerns a desire to be able to borrow eclass code from downstream distros like Exherbo, and the other is the eudev fork. In both cases the issue is with the general Gentoo policy that all Gentoo code have a statement at the top to the effect of “Copyright 2012 Gentoo Foundation.”

Now, Diego has already blogged about some of the issues created by this policy, and I want to set that aside for the moment. Regardless of whether the Foundation can lay claim to ownership of copyright on past contributions, the question remains: should Gentoo aim to have copyright ownership (or something similar) of all Gentoo work rest with the Foundation?

Right now I’m reaching out to other free software organizations to understand their own policies in this area. Regardless of whether we want to have Gentoo own our copyrights or not, there are still legal questions around what to put on that copyright line, especially when a file is an amalgamation of code originated both inside and outside of Gentoo, perhaps even by parties who are hostile to the effort. I can’t speak for the Trustees as a whole, but I suspect that after gathering info we’ll try to have some open discussion on the lists, and perhaps even have a community-wide vote before making new policy. I don’t want to promise that; in fact I’d recommend that any community-wide vote be advisory only unless a requirement for a supermajority were set, as I don’t want half the community up in arms because a 50.1% majority passed some highly unpopular policy.

So, what are some of the directions in which Gentoo might go? Why might we choose to go in these directions? Below I outline some of the options I’m aware of:

Maintain the status quo
We could just leave the issue of copyright assignment somewhat ambiguous, as has been done so far. If Gentoo were forced to litigate over copyright ownership right now, an argument could be made that because contributors willingly allowed us to stick that copyright notice on our files, and made their contributions with knowledge of our policies, they have given implicit consent to our doing so.

I’m not a big fan of this approach: it has the virtue of requiring less work, but really has no benefits one way or the other (and as you’ll read below, there are benefits from declaring a position one way or the other).

This approach still requires us to come up with a policy around what goes on the copyright notice line. I suspect that there won’t be much controversy for Gentoo-originated work like most ebuilds, as there isn’t much controversy over them now. However, for stuff like eudev or code borrowed from other projects this could get quite messy: with no single organization owning much of the code in any given file, the copyright line could grow unwieldy.

Do not require copyright assignment
We could just make it a policy that Gentoo would aim to own the name Gentoo, but not the actual code we distribute. This would mean that we could freely accept any code we wished (assuming it was GPL or CC BY-SA compatible per our social contract). This would also mean that Gentoo as an organization would find it difficult to pursue license violations, and future relicensing would be rather difficult.

From the standpoint of being able to merge outside code, this is clearly the preferred solution. This approach still carries all the difficulties of managing the copyright notice, since again no one organization is likely to hold the majority of copyright ownership of our files. Also, if we were to go this route we should strongly consider requiring that all contributions be licensed under GPL v2+, and not just GPL v2: since Gentoo would not own the copyright, we would not have the option to move to a newer GPL version later unless this were done.

Gentoo would still own the name Gentoo, so from a branding/community standpoint we’d have a clear identity. If somebody else copied our code wholesale the Foundation couldn’t do much to prevent this unless we retroactively asked a bunch of devs to sign agreements allowing us to do so, but we could keep an outside group from using the name Gentoo, or any of our other trademarks.

Require copyright assignment
We could make it a policy that all contributions to Gentoo be made in conjunction with some form of copyright assignment, or contributor licensing agreement. I’ll set aside for now the question of how exactly this would be implemented.

In this model Gentoo would have full legal standing to pursue license violations, and to re-license our code. In practice I’m not sure how likely we’d actually be to do either. The copyright notice line would be easy to manage, even if we made the occasional exception to the policy, since those cases could of course be tracked as exceptions as well. Most likely the majority of the code in any file would be owned by a few entities at most.

The downside to this approach is that it basically requires turning away code, or making exceptions. Want to fork udev? Good luck getting them to assign copyright to Gentoo.

There could probably be blanket exceptions for small contributions which aren’t likely to create questions of copyright ownership. And we could of course have a transition policy where we accept outside code but all modifications must be Gentoo-owned. Again, I don’t see that as a good fit for something like eudev if the goal is to keep it aligned with upstream.

I think the end result of this would be that work that is outside of Gentoo would tend to stay outside of Gentoo. The eudev project could do its thing, but not as a Gentoo project. This isn’t necessarily a horrible thing – OpenRC wasn’t really a “Gentoo project” for much of its life (I’m not quite sure where it stands at the moment).

Alternatives
There are in-between options as well, such as encouraging the voluntary assignment/licensing of copyright (which is what KDE does), or dividing Gentoo up into projects we aim to own or not. So, we might aim to own our ebuilds and the essential eclasses and portage, but maybe there is the odd eclass or side project like eudev that we don’t care about owning. Maybe we aim to own new contributions (either all or most).

There are good things to be said for a KDE-like approach. It gives us some of the benefits of attribution, and all of the benefits of not requiring attribution. We could probably pursue license violations vigorously, as we’d likely hold control of copyright over the majority of our work (aside from things like eudev, which obviously aren’t our work to begin with). Relicensing would be a bit of a pain: for anything we have control over we could of course relicense it, but for anything else we’d have to at least make some kind of effort to get approval. Legally that all becomes a murky area. If we were to go this route, again I’d probably suggest that we require all code to be licensed GPL v2+ or similar, just to give us a little bit of automatic flexibility.

I’m certainly interested in feedback from the Gentoo community around these options, things I hadn’t thought of, etc. Feel free to comment here or on gentoo-nfp.

One may notice the increased number of maintainer-needed@ packages, but this is because we “retired” a lot of inactive developers in the last 2 months. I expect this number not to increase further in the near future.

I would like to thank all of you who are actively participating in this team. Keep up the good work!

I just finished my Fall 2012 semester today at UVU. This was, by far, the hardest semester I’ve ever had since I’ve been in school. It was brutal. I had three classes that carried more work than I was expecting, and I spent a lot of time in the past four months doing nothing but homework. I was talking to my cousin about it tonight (while we were doing some late-night winter skateboarding; it’s actually really nice out here right now), and I mentioned that the stress was a huge burden on me. Stress is normal, but I’ve learned that if something heavy is really going on, I notice I stop being cheery. I don’t really get somber; it’s more like I’m just focused and serious all the time. Which can be a real bummer.

But the semester is finished, and that’s freed up a lot of time and taken that huge burden off of me. I got good grades, and between that and some great friends who really stepped up at the last minute to help me out, I’m left humbled and grateful to God and everyone who stood by me. I’m really glad this semester is done.

One thing I learned from this last jaunt is that I’m never taking online classes again. I had two this semester, and one on campus. Looking back, I’ve always had a range of issues with online courses. Either I don’t understand the material very well because I can’t chat with the professor one on one, or I slack the whole time (I did 50% of the coursework in one day. I’m not kidding). The worst part, though, is that I never really feel like I “get” the material. I jump through hoops, get a grade, and move on, but it doesn’t seem like I learned anything.

So, I’m sticking to just two classes from here on out, and doing them all on-campus. That’ll be manageable.

For now I’m really looking forward to not so much having more time, but having less stress. I’ve been wanting to work on some cool side projects, and I also have been itching to go skating … a lot. So tonight I went on a two-hour run with my cousin down Main Street in Bountiful, and it was really cool. We call it a “mort run” since we start at the top of a hill and go all the way down to the mortuary. It’s smooth all the way down and you can just push around and then either skate back up hill or walk. It’s a good workout.

The best part tonight though was debating whether or not we should go to the drive-through at Del Taco, knock on the window and ask for something. We didn’t, but we circled the place like eight times and probably freaked out the employees while we debated it. Eventually, we realized he didn’t have enough cash to buy something on the dollar menu (he was a penny short), so we spent half an hour wandering around downtown looking for lost change. It was pretty fun.

Soooooooooooo ….. projects. One thing I have time to look into now is znurt.org. It’s broken. I’ve known it’s been broken. It would take me probably less than an hour to fix it. I haven’t made the time, for a lot of reasons. It’s actually been on my calendar reminding me over and over that I need to get it done. I’m debating what to do about the site. I could just fix the one error and move on, but it’s still kind of living in a state of neglect. Ideally, I should hand the project over to someone else and let them maintain it. I dunno yet. Part of me doesn’t wanna let it go, but I guess a bigger part doesn’t care enough to actually fix it so … yah. Gotta make a decision there.

Other than that, not much going on. I moved to a new apartment, back into a complex. I like it here. I have a dishwasher now, which I’m really grateful for (I haven’t had one in the last three apartments). The funny thing about that is I seriously have so few dishes that filling the entire thing with all of mine leaves it half full.

Anyhoo, I am really looking forward to moving on. My big thing is I wanna get some serious skating time in while I’ve got the time. That and enjoy the holidays with friends and family. I’m looking forward to next semester too. I’ve got a class on meteorology and another on U.S. history. I’m almost done with generals. The crazy part about all of this? Since I went back to school two years ago, I’ve put in 30 credit hours. Insane, for someone working full time. I tell you what.

GCC 4.8 is still in its stage 3 development phase, so Zorry will send the patches out to the GCC development community when this phase is done. For Gentoo Hardened itself, we now support all architectures except for IA64 (which never had SSP).

Full uclibc support is now in place for amd64, i686, and mips32r2: not only is the technical support in order, but stages are now also built automatically, supporting installations through the regular installation instructions. The next target to get automatically built stages is armv7a.

Kernel and grsecurity/PaX

Stabilization of 3.6.x is still showing some difficulties; until those are resolved, we stay stable on 3.5.4. We still hit a couple of panics in some odd cases, and these will need to be fixed before we can stabilize further.

glibc-2.16 will also drop the declarations for PT_PAX (in elf.h), and binutils will no longer cover the PT_PAX program header either. So we will standardize fully on xattr-based PaX flags. This will get proper focus in the next period to ensure it is done correctly. Most of the work on this support focuses on communication towards users and on pax-utils eclass support.
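For illustration, an xattr-based marking stores the PaX flags in the user.pax.flags extended attribute instead of the PT_PAX program header; managing it looks roughly like this (the path and flag string are just an example):

# disable MPROTECT for a single binary via its extended attribute
setfattr -n user.pax.flags -v "m" /usr/bin/example
# verify the marking
getfattr -n user.pax.flags /usr/bin/example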

There was some confusion about whether the tmpfs-xattr patch would properly restrict access, but it looks like the PaX patch on mm/shmem.c was based upon the Gentoo patch and enhanced with the needed restrictions, so we can just keep the PaX code.

Regarding USE=”pax_kernel”, which should enable some updates to userland utilities when applications are run under a PaX-enabled kernel, prometheanfire tried to get this accepted as a global USE flag (as many applications might eventually want to trigger on it). However, due to some confusion about the meaning of the USE flag, and the potential need to depend on additional tools, we’re going to stick with a local flag for now.

SELinux

schmitt953 will help in the testing and possible development of SELinux policies for Samba 4.

Furthermore, the userspace utilities have been stabilized (except for setools-3.3.7-r5 and later, due to some SWIG problems; those have been worked around in setools-3.3.7-r6). Also, the rev8 policies are in the tree and no big problems were reported with them. They are currently still ~arch, but will be stabilized in the next few days. A new rev9 release will be pushed to the hardened-dev overlay soon as well.

Profiles

nvidia is unmasked for the hardened profiles, but still has the X and tools USE flags masked, and is only supported on kernels 3.0.x and higher.

Also, the hardened/linux/uclibc/arm/armv7a profile is now available as a development profile. Profiles will be updated as more ARM architectures become supported, so expect more in the next month.

System Integrity

We were waiting for kernel 3.7, which just got released, so we can now start integrating this further. Expect more updates by next meeting.

Docs

For SELinux, some information on USE=”unconfined” has been added to the SELinux handbook. Blueness will also start documenting the xattr-based PaX support.

While Gentoo users usually compile all their packages on their own computers, LibreOffice tends to be too big a bite for that. This is why, for amd64 and x86, we provide app-office/libreoffice-bin and app-office/libreoffice-bin-debug, two packages with a precompiled binary installation and its debug information. In the beginning we just used the binaries from the official LibreOffice distribution. It turns out, however, that these binaries bundle a large number of libraries that we have in Gentoo anyway (bug 361695), and for a lot of reasons bundled libraries are bad. So we decided to roll our own binaries for stable Gentoo installations. Let me describe a bit how it is done.

On the machine doing the build, two chroots are dedicated to the package build process: one a plain amd64 chroot, the other an x86 chroot entered via linux32. Neither has any ~arch packages installed; only stable keywords are accepted. Both have a very minimal world file, listing only a few packages useful for a maintainer, e.g. gentoolkit or eix. The procedure is identical for both. In addition, in both chroots the compiler flags are chosen for as wide compatibility as possible. This means
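something like the following in make.conf (illustrative values, not the exact flags used; the x86 chroot would use a comparably conservative baseline such as -march=i586):

# generic, widely compatible flags for the amd64 chroot (sketch)
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"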

and obviously the same for CXXFLAGS. Both chroots also use the portage features splitdebug and compressdebug to make debug information available in a separate directory tree. Prior to the build, the existing packages are updated, unnecessary packages are cleaned out, and dynamic linking is checked:

emerge --sync
emerge -uDNav world
emerge --depclean --ask
revdep-rebuild

In case any problems occur, these are checked, solved, and the procedure is repeated until all the operations become a no-op. The next step is adapting the (rather simplistic) build script to the new libreoffice version. This mainly means checking for new or discarded useflags and deciding which value these should have in the binary build. Since LibreOffice-3.6 we now also have to decide which bundled extensions to build. The choice of useflags is influenced by several factors. For example, pdfimport is disabled because the resulting dependency on poppler might lead to broken binaries rather too often.

Then, well, then it's running the build. Generating all 12 flavours (base, kde, and gnome, each with and without java, for both amd64 and x86) takes roughly a weekend. Time to go out to the Christmas market and sip a Glühwein. In the meantime, we can also adapt the libreoffice-bin ebuilds for the new version. The defined phase functions are mostly boring, since they only have to copy files into the system; normally, they can be taken over from the previous version. The dependency declarations, however, have to be copied anew each time from the corresponding app-office/libreoffice ebuild, taking into account the chosen use-flag values. DEPEND is set empty, since we're not actually building anything during installation. Finally, COMMON_DEPEND is extended by an additional block named BIN_COMMON_DEPEND, specific to the binary package. Here we specify any dependencies that need to be stricter now: cases where a library upgrade would require a revdep-rebuild for a normal package, which is not possible for a binary package. Typical candidates where we have to fix the minimum or exact library version are glibc, icu, or libcmis.

Once the build has finished, 8.8G of files have to be uploaded to the Gentoo server, added to the mirror system, and then given some time to propagate. Then we can commit the new ebuild and open a stabilization request bug. Finished! (Oh, and in case you're wondering, new packages are coming tomorrow. :)
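P.S. For the curious, the BIN_COMMON_DEPEND pattern described above looks roughly like the following sketch; the atoms and versions are illustrative placeholders, not copied from the real ebuild:

# sketch of the libreoffice-bin dependency layout (example atoms only)
BIN_COMMON_DEPEND="
    >=sys-libs/glibc-2.15
    >=dev-libs/icu-49
"
# the remainder of COMMON_DEPEND is copied from app-office/libreoffice
COMMON_DEPEND="${BIN_COMMON_DEPEND}"
RDEPEND="${COMMON_DEPEND}"
DEPEND=""    # nothing is compiled at installation time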