Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

I’ve promised some insight into how much running the tinderbox actually cost me. And since today marks two months since Google AdSense’s crazy blacklisting of my website, I guess it’s as good a time as any.

So let’s start with the obvious first expense: the hardware itself. My original Tinderbox was running on the box I called Yamato, which cost me some €1700 and change, without the harddrives, back in 2008 — and about half the cost was paid with donations from users. Over time, Yamato had to have its disks replaced a couple of times (and sometimes the cost came out of donations). That computer has been used for other purposes, including as my primary desktop for a long time, so I can’t really complain about the parts that I had to pay for myself. Other devices, and connectivity, and all those things, ended up being shared between my tinderbox efforts and my freelancing job, so I don’t complain about those in the least either.

The new Tinderbox host is Excelsior, which was bought with the Pledgie campaign that left me paying only some $1200 out of my own pocket, the rest coming in from the contributors. The space, power and bandwidth have been offered by my employer, which solved quite a few problems. Since I now don’t have to pay for the power, and last time I went back to Italy (in June) I turned off, and got rid of, most of my hardware (the router was already having some trouble; Yamato’s motherboard was having trouble anyway, so I saved the harddrives to decide what to do with them later, and sold the NAS to a friend of mine), I can now assess how much I was spending on the power bill for it.

My usual power bill was somewhere around €270 — which obviously includes all the usual house power consumption as well as my hardware and, due to the way power is billed in Italy, an advance on the next bill. The bill for the months between July and September, the first one where I was fully out of my house, was for -€67 — and no, it’s not a typo, it was a negative bill! Calculator at hand, the actual difference between the previous bills and the new one is around €50 a month — assuming that only a third of that was for the tinderbox hardware, that makes it around €17 per month spent on the power bill. It’s not much but it adds up. Connectivity — that’s hard to assess, so I’d rather not even go there.

With the current setup, there is of course one expense that wasn’t there before: AWS. The logs that the tinderbox generates are stored on S3, since they need to be accessible, and there are lots of them. And one of the reasons why Mike is behaving like a child about me just linking the build logs instead of attaching them is that he expects me to delete them because they are too expensive to keep indefinitely. So, how much does the S3 storage cost me? Right now, it costs me a whopping $0.90 a month. Yes, you got it right, it costs me less than one dollar a month for all the storage. I guess the reason is that they are not stored for high reliability or high speed access, and they are highly compressible (even though they are not compressed by default).

You can probably guess at this point that I’m not going to clear out the logs from AWS for a very long time. Although I would like for some logs not to be so big for nothing — like the sdlmame one, which used to pass the -v switch to GCC, which makes every call print a long bunch of internal data that is rarely useful in a default log output.

Luckily for me (and for the users relying on the tinderbox output!) those expenses are well covered by the Flattr revenue from my blog’s posts — and thanks to Socialvest I no longer have to have doubts about whether I should keep the money or use it to flattr others — I currently have over €100 ready for the next six/seven months worth of flattrs! Before this, between my freelance jobs, Flattr, and the ads on the blog, I would also be able to cover at least the cost of the server (and barely the cost of the domains — but that’s partly my fault for having… a number of them).

Unfortunately, as I said at the top of the post, there are no longer ads served by Google on my blog. Why? Well, a month and a half ago I received a complaint from Google, saying that one post of mine, in which I namechecked a famous adult website in the context of a (at the time) recent perceived security issue, was adult material, and that it goes against the AdSense policies to have ads served on a website with adult content. I would still argue that just namechecking a website shouldn’t be considered adult content, but while I did submit an appeal to Google, a month and a half later I have no response at hand. They didn’t blacklist the whole domain though, only my blog, so the ads are still shown on Autotools Mythbuster (on which I plan to resume working almost full time pretty soon), but the result is bleak: I went down from €12-€16 a month to a low €2 a month, which is no longer able to cover the server expense by itself.

This does not mean that anything will change in the future, immediate or not. This blog for me has more value than the money that I can get back from it, as it’s a way for me to showcase my ability and, to a point, get employment — but you can understand that it still upsets me a liiiittle bit the way they handled that particular issue.

During the following week there will be a hard feature freeze on LibreOffice and the 4.0 branch will be created. This means that we can finally start to do some sensible stuff, like testing it like hell in Gentoo.

This release is packed with new features, so let me list at least some that are relevant to our Gentoo stuff:

repaired nsplugin interface (who the hell uses it :P) that was fixed by Stephan Bergmann, for which you ALL should send him some cookies :-)

liblangtag direct po/mo usage, which makes using translations easier because they are no longer converted into the internal sdf format

liborcus library debut, which pulls some features out of calc into a nice small lib so anyone can reuse them, plus it is easier to maintain; cookies to Kohei Yoshida

bluetooth remote control that allows you to just mess with your presentations over bluetooth; there is also an Android remote app that does the same over the network ;-)

telepathy collaboration framework inclusion that allows you to mess with multiple other people on one document in a semi-realtime manner (it is mostly a tech preview and you don’t see what the other guy is doing, it just appears in the doc)

binfilter is gone! Which is awesome, as it was a huge load of code that was really stinky

For more changes you can just read the wiki article, just keep in mind that this wiki page will be updated until the release, so it does not contain all the stuff.

Build related stuff

We are going to require a new library that allows us to parse the mspub format. Fridrich Strba was obviously bored, so he wrote yet another format parser :-)

Pdfimport is no longer a pseudo-extension; it is now built in directly behind a normal USE flag, which saves quite a lot of copy&paste code, and it looks like it operates faster now.
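If you want the built-in PDF import once 4.0 hits the tree, enabling it should be the usual Gentoo routine — a minimal sketch, assuming the flag keeps the pdfimport name:

echo "app-office/libreoffice pdfimport" >> /etc/portage/package.use
emerge --ask app-office/libreoffice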

The openldap schema provider is now hard-required so you can use address books (the Mork driver handles that). I bet some of you lads won’t like this much, but ldap itself does not have too many deps and it is useful for quite a few business cases.

There are also some nice removals: glib and librsvg are goners from the default requirements (no surprise for gnomers that they will still need them). Among other things, it no longer needs sys-libs/db, which I finally removed from my system.

The GCC requirement was raised to 4.6, because otherwise boost acts like *censored* and I have better stuff to do than fix it all the time.

Saxon bundling has been dealt with and removed completely.

Parallel building is sorted out, so it will use the correct number of CPUs and will fork gcc only the required number of times, not n^n times.
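As far as I can tell the job count comes from the usual Portage settings, so the only thing worth checking is your MAKEOPTS value in make.conf — the -j number below is of course just an example for an 8-core box:

MAKEOPTS="-j8"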

And last, but most probably worst, the plugin foundation that was in Java is slowly migrating to Python, and it needs python:3.3 or later. This did not make even me happy :-)

Other fancy libreoffice stuff

Michael Meeks is running merges against Apache OpenOffice, so we try hard to get even fixes that are not in our codebase (thankfully the license allows this direction). So with lots of effort we review all their code changes and try to merge them over into our implementation. This will grow more and more complex over time, because in libo we actually try to use the new stuff like the new C++ std/Boost/… so there are more and more collisions. Let’s see how long it will be worth it (of course oneliners are easy to pick up :P).

What is going into stable?

We at last got libreoffice-3.6 and its binary counterpart stable. After this, an SVG bug with librsvg was found (see above, it’s gone from 4.0), so the binaries will be rebuilt and the next version bump will lose the svg USE flag. This was caused by how I wrote the detection of the new switches and an oversight on my side: I simply tried to launch libreoffice with -svg and didn’t dig further. Other than that, the whole package is production ready and there should not be many new regressions.

So after having to get one laptop to boot on UEFI, I got to make the Latitude work with that as well, if anything, as a point of pride. Last time I was able to get Windows booting as UEFI, but I ended up using GRUB 2 with legacy BIOS booting instead of UEFI for Linux, and promised myself I would find a way to set this up.

Well, after the comments on my previous post I made sure to update my copy of SysRescueCD, as I only had a version 2.x on my USB key the other day, and that does not support EFI booting — but the new version (3.0) actually supports it out of the box, which makes things much easier, as I no longer need the EFI shell to boot an EFI stub kernel. To be precise, there is also no need to use the EFI stub at all, if not to help in recovery situations.

So, after booting into SysRescueCD, I zeroed out the Master Boot Record (to remove the old-style GRUB setup), re-typed the first partition to EF00 (it was set to EF02, which is what GRUB2 uses to install its modules on non-EFI systems), and formatted it as vfat. Then I chrooted into the second partition (which is my Gentoo Linux root partition), rebuilt GRUB2 to support efi-64, and just used grub2-install. Done!
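The rough sequence, reconstructed from memory rather than copied from my shell history — the device names, the make.conf path and the GRUB_PLATFORMS bit are assumptions, and the usual /proc, /sys and /dev bind mounts for the chroot are omitted:

dd if=/dev/zero of=/dev/sda bs=446 count=1     # wipe the old GRUB boot code, keep the partition table
gdisk /dev/sda                                 # change partition 1's type code from EF02 to EF00
mkfs.vfat -F32 /dev/sda1                       # re-create the EFI System Partition
mount /dev/sda2 /mnt/gentoo
mount /dev/sda1 /mnt/gentoo/boot/efi
chroot /mnt/gentoo
echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge --oneshot sys-boot/grub:2
grub2-install --efi-directory=/boot/efi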

Yes, the new SysRescueCD makes it an absolute piece of cake. And now I have actually disabled non-UEFI booting on that laptop and — not sure if it’s just my impression — it feels like it’s actually a second or two faster.

Still on the UEFI topic, it turns out that Fabio ordered the same laptop I got (and am writing from right now), which means that soon Sabayon will have to support UEFI booting. On the other hand, I got Gentoo working fine on this laptop and the battery life is great, so I’m not complaining about it too much. I’ll actually write something about the laptop and how it feels soon, but tonight I’m just too tired for it.

Now that the new ultrabook is working almost fine (I’m still tweaking the kernel to make sure it works as intended, especially for what concerns the brightness, and tweaking the power usage, for once, to give me some more battery life), I think it’s time for me to get used to the new keyboard, which is definitely different from the one I’m used to on the Dell, and more similar to the one I’ve used on the Mac instead. To do so, I think I’ll resume writing a lot on different things that are not strictly related to what I’m doing at the moment, but rather talk about random topics I know about. Here’s the first of this series.

So, you probably all know me as a Gentoo Linux developer; some of you will know me as a multimedia-related developer thanks to my work on xine, libav, and a few other projects; a few would know me as a Ruby and Rails developer, but that’s not something I’m usually boasting or happy about. In general, I think I do my best not to work on Rails if I can avoid it. One of the things I’ve done over the years has been working on a few embedded projects, a couple of which have been Linux related. Despite what some people thought of me before, I’m not an electrical engineer, so I’m afraid I haven’t had the hands-on experience that many other people I know have had, which, I have to say, sometimes bothers me a bit.

Anyway, the projects I worked on over time have never been huge, very well funded, or well managed, which meant that, despite Greg’s hope as expressed on the gentoo-dev mailing list, I never had a company train me in the legal landscape of Free Software licenses. Actually, in most of the cases I ended up being the one who had to explain to my manager how these licenses interact. And I take licenses very seriously, and I do my best to make sure that they are respected, even when the task falls on the shoulders of someone else, and that someone might not actually do everything by the book.

So, while I’m not a lawyer and I really don’t want to go near that option, I always take the chance to understand licenses correctly and, when my ass is on the line, I make sure to verify the licenses of what I’m using for my job. One such example is the project I’ve been working on since last March, for which I’ve prepared the firmware, based on Gentoo Linux. To make sure that all the licenses were respected properly, I had to come up with the list of packages that the firmware is composed of, and then verify the license of each. Doing this I ended up finding a problem with PulseAudio due to it linking in the (GPL-only) GDBM library.

What happens is that if the company you’re working for has no experience with licenses, and/or does not want to pay to involve lawyers to review what is being done, it’s extremely easy for mistakes to happen unless you are very careful. And in many cases, if you, as a developer, pay more attention to the licenses than your manager does, it’s also seen as a negative point, as that’s not how they’d like you to spend your time. Of course you can say that you shouldn’t be working for that company then, but sometimes it’s not like you have tons of options.

But this is by far not the only problem. Sometimes, what happens is a classic 801 — that is, instead of writing custom code for the embedded platform you’re developing for, the company wants you to re-use previous code, which has a high likelihood of being written in a language that is completely unsuitable for the embedded world: C++, Java, COBOL, PHP…

Speaking of which, here’s an anecdote from my previous experiences in the field: at one point I was working on the firmware for an ARM7 device, that had to run an application written in Java. Said application was originally written to use PostgreSQL and Tomcat, with a full-fledged browser, but had to run on a tiny resistive display with SQLite. But since at the time IcedTea was nowhere to be found, and the device wouldn’t have had enough memory for it anyway, the original implementation used a slightly patched GCJ to build the application to ELF, and used JNI hooks to link to SQLite. The time (and money, when my time was involved) spent making sure the system wouldn’t run out of memory would probably have sufficed to rewrite the whole thing in C. And before you ask, the code bases between the Tomcat and the GCJ versions started drifting almost immediately, so code sharing was not a good option anyway.

Now, to finish this mostly pointless, anecdotal post of mine, I’d like to write a few words of commentary about embedded systems, systemd, and openrc. Whenever I hear one or the other side saying that embedded people love the other, I think they don’t know how different embedded development can be from one company to the next; even between good companies there is so much variation that it makes them stand lightyears apart from bad companies like some of the ones I described above. Both sides have good points for the embedded world; what you choose depends vastly on what you’re doing.

If memory and timing are your highest constraints, then it’s very likely that you’re looking into systemd. If you don’t have those kinds of constraints, but you’re building a re-usable or highly customizable platform, it’s likely you’re not going to choose it. The reason? While cross-compilation shouldn’t be a problem if you’re half-decent in your development, the reality is that in many places it is. What happens then is that you want to be able to make last-minute changes, especially in the boot process, for debugging purposes; using shell scripts makes that vastly easier, and for some people, doing it more easily is the target, rather than doing it right (and this is far from saying that I find the whole set of systemd ideas and designs “right”).

But this is a discussion that is more of a flamebait than anything else, and if I want to go into details I should probably spend more time on it than I’m comfortable doing now. In general, the only thing I’m afraid of is that too many people make assumptions about how people do things, or take for granted that companies, big and small, care about doing things “right” — in my experience, that’s not really the case, that often.

“Nothing gets people’s interest peaked like colorful graphics. Therefore, graphing the web of trust in your local area as you build it can help motivate people to participate as well as giving everyone a clear sense of what’s being accomplished as things progress.”

Last Friday (Black Friday, since I was in the US this year), I ended up buying myself an early birthday present; I finally got the ZenBook UX31A that I had been looking at since September, after seeing the older model being used by J-B of VLC fame. Today it arrived, and I decided to go the easy route: I had already prepared a DVD with Sabayon, and after updating the “BIOS” from Windows (since you never know), I wiped it out and installed the new OS on it. Which couldn’t be booted.

Now, before you run around screaming “conspiracy”, I ask you to watch Jo’s video (Jo, did you really not have anything with a better capture? My old Nikon P50 had a better iris!) and notice that Secure Boot over there works just fine. Other than that, this “ultrabook” is not using Secure Boot because it’s not certified for Windows 8 anyway.

The problem is not that it requires Secure Boot or anything like that; much more simply, it has no legacy boot. Which is what I’m using on the other laptop (the Latitude E6510), since my first attempt at using EFI for booting failed badly. Anyway, this simply meant that I had to figure out how to get this to boot.

What I knew from the previous attempt is this:

grub2 supports UEFI in both 32- and 64-bit modes, which is good — both my systems run 64-bit EFI anyway;

grub2 requires efibootmgr to set up the boot environment;

efibootmgr requires access to the EFI variables, so it requires a kernel with support for EFI variables;

but there is no way to access those variables when using legacy boot.

This chicken-and-egg problem is what blew it for me last time — I did try before the kernel added EFI stub support anyway. So what did I do this time? Well, since Sabayon did not work out of the box, I decided to scratch it and went with good old fashioned Gentoo. And as usual, to install it, I started from SysRescueCD — which to this day, as far as I can tell, still does not support booting as EFI either. It’s a good thing then that Asus actually supports legacy boot… from USB drives as well as CDs.

So I boot from SysRescueCD and partition the SSD in three parts: a 200MB vfat EFI partition; a root-and-everything partition; and a /home partition. Note that I don’t split out either /boot or /usr, so I’m usually quite easy to please in the boot process. The EFI partition I mount as /mnt/gentoo/boot/efi, and inside it I create an EFI directory (it’s actually case-insensitive but I prefer keeping it uppercase anyway).
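Spelled out as commands, that partitioning step looks more or less like this — a sketch, with /dev/sda and ext4 for the non-EFI partitions as assumptions:

gdisk /dev/sda        # 1: 200MB, type EF00; 2: root, type 8300; 3: the rest for /home
mkfs.vfat -F32 /dev/sda1
mkfs.ext4 /dev/sda2
mkfs.ext4 /dev/sda3
mount /dev/sda2 /mnt/gentoo
mkdir -p /mnt/gentoo/boot/efi
mount /dev/sda1 /mnt/gentoo/boot/efi
mkdir /mnt/gentoo/boot/efi/EFI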

Now it’s time to configure and build the kernel — make sure to enable the EFI stub support. Pre-configure the boot parameters in the kernel, and make sure not to use modules for anything you need during boot. This way you don’t have to care about an initrd at all. Build and install the kernel. Then copy /boot/vmlinuz-* as /boot/efi/EFI/kernel.efi — make sure to give it a .efi suffix, otherwise it won’t work — the name you use now is not really important as you’ll only be using it once.
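The kernel side of that, roughly — a sketch, with the version string as a placeholder and the built-in command line as the way to avoid an initrd:

# kernel config: CONFIG_EFI=y, CONFIG_EFI_STUB=y,
# plus CONFIG_CMDLINE_BOOL=y and CONFIG_CMDLINE="root=/dev/sda2 rootfstype=ext4"
make && make modules_install install
cp /boot/vmlinuz-3.6.7-gentoo /boot/efi/EFI/kernel.efi   # the .efi suffix is mandatory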

Now you need an EFI shell. The Zenbook requires the shell to be available somewhere, but I know that at least the device we use at work has some basic support for an internal shell in its firmware. The other Gentoo wiki has a link on where to download the file; just put it in the root of the SysRescueCD USB stick. Then you can select to boot it from the “BIOS” configuration screen, which is what you’ll get after a reboot.

At this point, you just need to execute the kernel stub: FS1:\EFI\kernel.efi will be enough for it to start. Once the boot completes, you’re in an EFI-capable kernel, booted in EFI mode. And the only thing that you’re left to do is grub-install --efi-directory=/boot/efi. And… you’re done!

When you reboot, grub2 will start in EFI mode, boot your kernel, and be done with it. Pretty painless, isn’t it?

There are a lot of packages assigned to maintainer-needed. These packages lack an active maintainer, and their bugs are usually solved by the people on the maintainer-needed alias (like pinkbyte, hasufell, kensington and me). Even if we are still able to keep the bug list "short" (when excluding "enhancement" and "qa" tagged bugs), any help with this task is really appreciated, so:

1. If you are already a Gentoo dev and would like to help us, simply join the team by adding yourself to the mail alias. There is no need to go to the bug list and fix any specified amount of bugs by obligation. For example, I simply try to fix maintainer-needed bugs when I have a bit of time after taking care of other things.

2. If you are a user, you can:
- Step up as maintainer using the proxy-maintainers project: http://www.gentoo.org/proj/en/qa/proxy-maintainers/index.xml
- Go to the bugs: http://tinyurl.com/cssc95v and provide fixes, patches... for them ;)

Today it bit me. I rebooted my workstation, and all hell broke loose. Well, actually, it froze. Literally, if you consider my root file system. When the system tried to remount the root file system read-write, it gave me this:

mount: / not mounted or bad option

So I did the first thing that always helps me, and that is to disable the initramfs booting and boot straight from the kernel. Now, for those wondering why I boot with an initramfs while it still works directly with a kernel: it’s a safety measure. Ever since the talks, rumours, fear, uncertainty and doubt about supporting a separate /usr file system started, I began supporting an initramfs on my system in case an update really breaks the regular boot cycle. Likewise because I use LVM on most file systems, and software RAID on all of them. If I didn’t have an initramfs lying around, I would be screwed the moment userspace decides not to support this straight from a kernel boot. Luckily, this isn’t the case (yet), so I could continue working without an initramfs. But I digress. Back to the situation.

Booting without initramfs worked without errors of any kind. Next thing is to investigate why it fails. I reboot back with the initramfs, get my read-only root file system and start looking around. In my dmesg output, I notice the following:

EXT4-fs (md3): Cannot change data mode on remount

So that’s weird, no? What is this data mode? Well, the data mode tells the file system (ext4 for me) how to handle writing data to disk. As you are all aware, ext4 is a journaled file system, meaning it writes changes into a journal before applying them, allowing changes to be replayed when the system suddenly crashes. By default, ext4 uses the ordered mode, writing the metadata (information about files and such, like inode information, timestamps, block maps, extended attributes, … but not the data itself) to the journal right after writing data to the disk, after which the metadata is then written to disk as well.

On my system though, I use data=journal so data too is written to the journal first. This gives a higher degree of protection in case of a system crash (or immediate powerdown – my laptop doesn’t recognize batteries anymore and with a daughter playing around, I’ve had my share of sudden powerdowns). I do boot with the rootflags=data=journal and I have data=journal in my fstab.

But the above error tells me otherwise. It tells me that the mode is not what I want it to be. So after fiddling a bit with the options and (of course) using Google to find more information, I found out that my initramfs doesn’t check the rootflags parameter, so it mounts the root file system with the standard (ordered) mode. Trying to remount it later then fails, as my fstab contains the data=journal flag, and running mount -o remount,rw,data=ordered for fun doesn’t give many smiles either.

The man page for genkernel however showed me that it uses real_rootflags. So I reboot with that parameter set to real_rootflags=data=journal and all is okay again.
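In practice, the working combination looks roughly like this — a sketch, with the genkernel-style file names and /dev/md3 taken from my setup and the kernel version being a placeholder:

# grub.conf:
kernel /kernel-genkernel-x86_64-3.6.7-gentoo real_root=/dev/md3 real_rootflags=data=journal
initrd /initramfs-genkernel-x86_64-3.6.7-gentoo
# /etc/fstab:
/dev/md3   /   ext4   data=journal   0 1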

Edit: I wrote that even changing the default mount options in the file system itself (using tune2fs /dev/md3 -o journal_data) didn’t help. However, that seems to be an error on my part, I didn’t reboot after toggling this, which is apparently required. Thanks to Xake for pointing that out.

Today I have unmasked ghc-7.6.1 in gentoo‘s haskell overlay. Quite a few things are broken (like the not-yet-bumped gtk2hs), but major things (like darcs) seem to work fine. Feel free to drop a line on #gentoo-haskell to get things fixed.

Some notes and events in the overlay:

ghc-7.6.1 is available for all major arches we try to support

a few ebuilds in the overlay were converted to EAPI=5 to use subslot depends (see below)

Since I’ve been to the US, I’ve been thinking of replacing my cellphone, which right now is still my HTC Desire HD (which I was not supposed to pay for, as I got it with an operator contract, but which I ended up paying dearly for, to avoid having to keep a contract that wouldn’t do me any good outside of Italy). The reasons were many, including the fact that it doesn’t get to HSDPA speed here in the US, but the most worrisome was definitely the fact that I had to charge it at least twice a day, and it was completely unreasonable for me to expect it to work for a full day out of the office.

After Google’s failure to provide a half-decent experience with the Nexus 4 orders (I did try to get one, the price was just too sweet, but for me it went straight from “Coming Soon” to “Out of stock”), I was considering going for a Sony (Xperia S), or even (if it wasn’t for the pricetag), a Galaxy Note II with a bluetooth headset. Neither option was a favourite of mine, but beggars can’t be choosers, can they?

The other day, as most of my Twitter/Facebook/Google+ followers would have noticed, my phone also decided to give up: it crashed completely while at lunch, and after removing the battery it lost all settings, due to a corruption of the ext4 filesystem on the SD card (the phone’s memory is just too limited for installing a decent amount of apps). After a complete re-set and reinstall, during which I also updated from the latest CyanogenMod version that would work on it to the latest nightly (still CM7, no CM10 for me yet, although the same chipset is present on modern, ICS-era phones from HTC), I had a very nice surprise. The battery has been now running for 29 hours, I spoke for two and something hours on the phone, used it for email, Facebook messages, and Foursquare check-ins, and it’s still running (although it is telling me to connect my charger).

So what could have triggered this wide difference in battery life? Well there are a number of things that changed, and a number that were kept the same:

I did reset the battery statistics, but unlike what most of the guides suggest, I did so when the phone was 100% charged instead of completely discharged — simply because I had it connected to the computer and charging while I was in Clockwork Recovery, so I just took a chance on it.

I didn’t install many of the apps I had before, including a few that are basically TSRs – and if you’re old enough you know what I mean! – such as Advanced Call Manager (no more customers, no more calls to filter!) and, the most likely culprit, an auto-login app for Starbucks wifi.

While I kept Volume Ace installed, as it’s extremely handy with its scheduler (think of it like a “quiet hours” on steroids, as it can be programmed with many different profiles, depending on the day of the week as well), I decided to disable the “lock volume” feature (as it says it can be a battery drain) and replaced it with simply disabling the volume buttons when the screen is locked (which is why I enabled the lock volume feature to begin with).

I also replaced Zeam Launcher, although I doubt that might be the issue, with the new ADW Launcher (the free version — which unfortunately is not replacing the one in CyanogenMod as far as I can tell) — on the other hand I have to say that the new version is very nice, it has a configurable application drawer which is exactly what I wanted, and it’s quite faster than anything else I tried in a long time.

Since I recently ended up replacing my iPod Classic with an iPod Touch (the harddrive in the former clicked and neither Windows nor Linux could access it), I didn’t need to re-install DoggCatcher either, and that one might have been among the power drains, since it also schedules operations in the background and, again as far as I can tell, it does not use the “sync” options that Android provides.

In all of this, I fell pretty much in love again with my phone. Having put in a 16GB microSD card a few months ago means I have quite a bit of space for all kind of stuff (applications as well as data), and thanks to the new battery life I can’t really complain about it that much. Okay so the lack of 3G while in the US is a bit of a pain, but I’m moving to London soon anyway so that won’t be a problem (I know it works in HSDPA there just fine). And I certainly can’t lament myself about the physical strength of the device… the chassis is made of metal, I’d venture to say it’s aluminum, but I wouldn’t be sure, which makes it strong enough to resist falling into a stone pavement (twice) and on concrete (only once) — yes I mistreat my gadgets, either they cope with me or they can get the heck out of my life.

After one month I think it is time to write my review of the Gentoo Miniconf.

On 20 and 21 October I attended the Gentoo Miniconf, which was part of the bootstrapping-awesome project: 4 conferences (openSUSE Conference/Gentoo Miniconf/LinuxDays/SUSE Labs) that took place at the Czech Technical University in Prague.

Photo by Martin Stehno

Day 0: After our flight arrived at Prague’s airport, we went straight to the pre-conference welcome party in a cafe near the university where the conference took place. There we met the other Greeks who had arrived in the previous days, and I also had the chance to meet a lot of Gentoo developers and talk with them.

Day 1: The first day started early in the morning. Dimitris and I went to the venue before the conference started in order to prepare the room for the miniconf. The day started with Theo as host welcoming us. There were plenty of interesting presentations that covered a lot of aspects of Gentoo: the Trustees/Council, Public Relations, the Gentoo KDE team, Gentoo Prefix, Security, Catalyst and Benchmarking. The highlight of the day was when Robin Johnson introduced the Infrastructure team and started a very interesting BoF about the state of the Infra team, the currently running web apps and the burning issue of the git migration. The first day ended with lots of beers at the big party of the conference in the center of Prague, next to the famous Charles Bridge.

Gentoo Developers group photo. Photo by Jorge Manuel B. S. Vicetto

Day 2: The second day was more relaxed. There were presentations about Gentoo @ IsoHunt, 3D and Linux graphics, and Open/GnuPG. After the lunch break an Open/GnuPG key signing party began outside of the miniconf’s room. After the key signing party we continued with a workshop on Puppet, a presentation about how to use testing on Gentoo to improve QA, and finally the last presentation, with Markos and Tomáš talking about how to get involved in the development of Gentoo. In the end Theo and Michal closed the session of the miniconf.

I really liked Prague especially the beers and the Czech cuisine.

Gentoo Miniconf was a great experience for me. I could write lots of pages about the miniconf because I was in the room both days and I saw all the presentations.

I had also the opportunity to get in touch and talk with lots of Gentoo developers and contributors from other FOSS projects. Thanks to Theo and Michal for organizing this awesome event.

More about the presentations and the videos of the miniconf can be found here. Looking forward to the next Gentoo miniconf (why not a conference?).

Sometimes this blog has something like “columns” for long-term topics that keep re-emerging (no pun intended) from time to time. Since I came back to the US last July you can see that one of the big issues I fight with daily is HP servers.

Why is the company I’m working for using HP servers? Mostly because they didn’t have a resident system administrator before I came on board, and just recently they hired an external consultant to set up new servers … the one who set up my nightmare: Apple OS X Server. So I’m not sure which of the two options I prefer.

Anyway, as you probably know if you follow my blog, I’ve been busy setting up Munin and Icinga to monitor the status of services and servers — and that helped quite a bit over time. Unfortunately, monitoring HP servers is not easy. You probably remember I wrote a plugin so I could monitor them through IPMI — it worked nicely until I actually got Albert to expose the thresholds in the ipmi-sensors output, then it broke because HP’s default thresholds are totally messed up and unusable, and it’s not possible to commit new thresholds.

After spending quite some time playing with this, I ended up with write access to Munin’s repositories (thanks, Steve!) and I can now gloat about (or rather, worry about) having authored quite a few new Munin plugins (the second generation FreeIPMI multigraph plugin is an example, but I also have a sysfs-based hwmon plugin that can get all the sensors in your system in one sweep, a new multigraph-capable Apache plugin, and a couple of SNMP plugins to add to the list). These actually make my work much easier, as they send me warnings when a problem happens without me having to worry about it too much, but of course they are not enough.

After finally being able to replace the RHEL5 (without a current subscription) with CentOS 5, I’ve started looking into what tools HP makes available to us — and found out that there are mainly two that I care about: one is hpacucli, which is also available in Gentoo’s tree, and the other is called hp-health and is basically a custom interface to the IPMI features of the server. The latter actually has a working, albeit not really polished, plugin in the Munin contrib repository – which I guess I’ll soon look to transform into a multigraph-capable one; I really like multigraph – and that’s how I ended up finding it.

At any rate, at that point I realized that I had not added one of the most important checks: the SMART status of the harddrives — originally because I couldn’t get smartctl installed. So I went and checked for it — the older servers are almost all running as IDE (because that’s the firmware’s default… don’t ask), so those are a different story altogether; the newer servers running CentOS are using an HP controller with SAS drives, using the CCISS (block-layer) driver from the kernel, while one is running Gentoo Linux and uses the newer, SCSI-layer driver. None of them can use smartctl directly; they have to use a special command: smartctl -d cciss,0 — and then point it either to /dev/cciss/c0d0 or /dev/sda depending on which of the two kernel drivers you’re using. They don’t provide all the data that is provided for SATA drives, but they provide enough for Munin’s hddtemp_smartctl and they do provide a health status…
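In other words, querying the first drive behind the controller looks more or less like this — a sketch, with drive index 0 as the obvious first guess:

smartctl -d cciss,0 -a /dev/cciss/c0d0   # CCISS block-layer driver (the CentOS boxes)
smartctl -d cciss,0 -a /dev/sda          # newer SCSI-layer driver (the Gentoo box)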

For what concerns Munin, your configuration would then be something like this in /etc/munin/plugin-conf.d/hddtemp_smartctl:
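A minimal sketch of such a configuration — the drive labels, device paths and cciss indexes are assumptions for a box with two drives behind a single controller:

[hddtemp_smartctl]
user root
env.drives hd1 hd2
env.dev_hd1 cciss/c0d0
env.args_hd1 -d cciss,0
env.dev_hd2 cciss/c0d0
env.args_hd2 -d cciss,1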

Depending on how many drives you have and which driver you’re using you will have to edit it of course.

But when I tried to use the default check_smart.pl script from the nagios-plugins package I had two bad surprises: the first is that it tries to validate the parameter passed to the plugin to identify the device type for smartctl, refusing to work for a cciss type, and the other is that it doesn’t consider the status message that is printed by this particular driver. I was so pissed that, instead of trying to fix that plugin – which still comes with special handling for IDE-based harddrives! – I decided to write my own, using the Nagios::Plugin Perl module, and releasing it under the MIT license.

You can find my new plugin in my github repository where I think you’ll soon find more plugins — as I’ve had a few things to keep under control anyway. The next step is probably using the hp-health status to get a good/bad report, hopefully for something that I don’t get already through standard IPMI.

The funny thing about HP’s utilities is that they for the most part just have to present data that is already available from the IPMI interface, but there are a few differences. For instance, the fan speed reported by IPMI is exposed in RPMs — which is the usual way to expose the speed of fans. But on the HP utility, fan speed is actually exposed as a percentage of the maximum fan speed. And that’s how their thresholds are exposed as well (as I said, the thresholds for fan speed are completely messed up on my HP servers).

Oh well, with everything else that has been happening lately, this will be enough for now.

Creating a homepage and documentation for a project is a boring task. I
have a few projects that have not been released yet due to lack of time and
motivation to create a simple webpage and write down some Sphinx-based
documentation.

To fix this issue I did a quick hack based on my favorite pieces of
software: Flask, docutils and Mercurial. It is a single file web
application that creates homepages automatically for my projects, using
data gathered from my Mercurial repositories. It uses the tags, the
README file, and a few variables declared on the repository's .hgrc
file to build an interesting homepage for each project. I just need to
improve my READMEs! :)

It works similarly to the PyPI Package Index, but accepts any project
hosted on a Mercurial repository, including my non-Python and Gentoo-only
projects.

This past Saturday (17 November 2012), I participated in the St. Jude Children’s Hospital Give Thanks Walk. This year was a bit different than the previous ones, as it also had a competitive 5k run (which was actually a 6k). I woke up Saturday morning, hoping that the weather report was incorrect, and that it would be much warmer than they had anticipated. However, it was not. When I arrived at the race site (which was the beautiful Creve Coeur Lake Park [one of my absolute favourites in the area]), it was a bit nippy at 6°C. However, the sun came out and warmed up everything a bit. Come race time, it wasn’t actually all that bad, and at least it wasn’t raining or snowing.

When I started the race, I was still a bit cold even with my stocking cap. However, by about halfway through the 6k, I had to roll up my sleeves because I was sweating pretty badly. It was an awesome run, and I felt great at the end of it. I think that the best part was being outside with a bunch of people that were also there to support an outstanding cause like Saint Jude Children’s Hospital. There were some heartfelt stories from families of patients, and nice conversations with fellow runners.

I actually finished the race in 24’22″, which wasn’t all that bad of a time:


In fact, it put me in first place, with 2’33″ between me and the runner-up! Though coming in first place wasn’t a goal of mine, I was in competition with myself. I had set a personal goal of completing the 6k in 26’30″ and actually came in under it! My placement earned me both a medal and a great certificate:


After the announcements of the winners and thanks to all of the sponsors, the female first-place runner (Lisa Schmitz) and I had our photo taken together in front of the finish line:


Thank you to everyone that sponsored and supported me for this run! The children and families of Saint Jude received tens of thousands of dollars just from the Saint Louis race alone!

Last Thursday we had a GPG Key & CAcert Signing party at the SUSE office, inviting anybody who wanted to get their key signed. I would say that it went quite well: we had about 20 people showing up, we had some fun, and we now trust each other some more!

GPG Key Signing

We started with GPG key signing. You know, the usual stuff. Two rows moving against each other, people exchanging paper slips…

Signing keys

For actually signing keys at home, we recommended people use the signing-party package, and caff in particular. It’s an easy to use tool as long as you can send mails from the command line (there are some options to set it up against SMTP directly, but I ran into some issues). All you need to do is to call

caff HASH

and it will download the key, show you the identities and fingerprint, sign it for you and send each signed identity to the owner by itself via e-mail. And all that with a nice wizard. It can’t be simpler than that.

Importing signatures

When my signed keys started coming back, I was wondering how to process them. It was simply too many emails. I searched a little bit, but I got lazy quite soon, so as I have all my mails stored locally in a Maildir by offlineimap, I just wrote the following one-liner to import them all.
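The idea is just to walk the Maildir and feed each caff mail through gpg — a rough sketch of the kind of one-liner I mean, with the Maildir path as an assumption and a decryption pass first, since caff encrypts the signed key to its owner:

for m in ~/Mail/INBOX/{cur,new}/*; do gpg --decrypt "$m" 2>/dev/null | gpg --import; done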

Maybe somebody will find it useful as well, or maybe somebody more experienced will tell me in the comments how to do it correctly.

CAcert

One friend of mine – Theo – really wanted to be able to issue CAcert certificates, so we added CAcert assurance to the program. For those who don’t know, CAcert is a nonprofit certification authority based on a web of trust. You get verified by volunteers, and when enough of them trust you enough, you are trusted by the authority itself. When people verify you, they give you some points based on how much they are trusted and how much they trust you. Once you get 50 points, you are trusted enough to get your certificate signed, and once you have 100, you are trusted enough to start verifying other people (after a little quiz to make sure you know what you are doing).

I knew that my colleague Michal Čihař was able and willing to issue some points, but as he was starting out issuing 10 and I 15, I also asked a few nearby-living assurers from the CAcert website. Unfortunately I got no reply, but we were organizing everything quite quickly. We did have another colleague – Martin Vidner – showing up and being able to issue some points. I assured another 11 people at the party and now I can give out 25 points, as can Michal, and I guess Martin is now somewhere around 20 as well. So it means that if you need to be able to issue CAcert certificates, visiting just the SUSE office in Prague is enough! But still, contact us beforehand, sometimes we do take a vacation.

If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Would you expect everything to go well with it? Hah!

Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed, from using the path to be used as root, to the default root path used by LXC, and every time that happened, no error message was issued that you were trying to mount directories outside of the tree the containers are running in.

Last time, the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use the full realpath as the base path, which in the case of LVM volumes is quite hard to know by heart. Yes, you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts it to (/usr/lib/lxc/rootfs). With the new release, this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again, it’s a change in a micro bump (rc2 to final) which is not documented… sigh. This would be enough to irk me, but there is more.
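To make the change concrete, a bind-mount entry in the container configuration now has to be written against the new path — a sketch, with the container name and source directory as assumptions:

lxc.mount.entry = /srv/shared /var/lib/lxc/mycontainer/rootfs/srv/shared none bind 0 0
# previously the target would have been written against /usr/lib/lxc/rootfs/srv/shared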

The new version also seems to have a bad interaction with the kernel on stopping a container — the virtual ethernet device (veth pair) is not cleaned up properly, and it causes the process to stall, with something insisting on calling into the kernel and failing. The result is a not happy Diego.

Without even adding the fact that the interactions between LXC and systemd are not clear yet – with the maintainers of the two projects trying to sort out the differences between them, at least I don’t have to care about it anytime soon – this should be enough to make it explicit that LXC is not ready for prime time, so please don’t ask.

On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to have untrusted users with root on your containers.
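Dropping that capability is a single line in the container configuration — keeping in mind the side effects on networking setup mentioned above:

lxc.cap.drop = net_admin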

Oh well, time to wait for the next release and see if they can fix a few more issues.

One of the issues that came through with the recent drama about the n-th udev fork is the matter of assigning copyright to the Gentoo Foundation. This topic is not often explored, mostly because it really is a minefield, and – be ready to be surprised – I think the last person who actually said something sane on the topic has been Ciaran.

Let’s see a moment what’s going on: all ebuilds and eclasses in the main tree, and in most of the overlays, report “Gentoo Foundation” as the holder of copyright. This is so much a requirement that we’re not committing to the tree anything that reports anyone else’s copyright, and we refuse the contribution in that case for the most part. While it’s cargo-culted at this point, it is also an extremely irresponsible thing to do.

First of all, nobody ever signed a copyright assignment form for the Gentoo Foundation, as far as I can tell. I certainly didn’t. And this matters especially as we go along with getting more and more proxied maintainers, as they almost always are not Gentoo Foundation members (Foundation membership comes after a year as a developer, if I’m not mistaken — or something along those lines; I honestly forgot because, honestly, I’m not following the Foundation’s doings at all).

Edit: Robin made me notice that a number of people did sign copyright assignments, first to Gentoo Technologies, which were then re-assigned to the Foundation. I didn’t know that — I would be surprised if a majority of the currently active developers knew about that either. As far as I can tell, copyright assignment was no longer part of the standard recruitment procedure when I joined, as, as I said, I didn’t sign one. Even assuming I was the first guy who didn’t sign it, 44% of the total active developers wouldn’t have signed it, and that’s 78% of the currently active developers (give or take). Make up your mind on these numbers.

But even if we had all signed said copyright assignment, it would be for a vast part invalid. The problem with copyright assignments is that they are just that, copyright assignments… which means they only work where the law regime concerning authors’ work is that of copyright. For most (all?) of Europe, the regime is actually that of authors’ rights, and as VideoLAN shows it’s a bit more complex, as the authors have no real way to “assign” those rights.

Edit²: Robin also pointed at the fact that FSFe, Google (and I’d add Sun, at the very least) have a legal document, usually called a Contributor License Agreement (when it’s basically replacing a full blown assignment) or Fiduciary Licence Agreement (the more “free software friendly” version). This solves only half the problem, as the Foundation would still not own the copyright, which means that you still have to come up with a different way to identify the contributors, as they still retain their rights even though they leave any decision regarding their contributions to the entity they sign the CLA/FLA with.

So the whole thing stinks of half-understood problem.

This has actually gotten more complex recently, because the sci team borrowed an eclass (or the logic for an eclass) from Exherbo — which actually handles the individuals’ copyright. This is actually a much more sensible approach on the legal side, although I find the idea of having to list, let’s say, 20 contributors at the top of every 15-line ebuild a bit of an overkill.

My proposal would then be to have a COPYRIGHTS.gentoo file in every package directory, where we list the contributors to the ebuild. This way even proxied maintainers, and one-time contributors, get their credit. The ebuild can then refer to “see the file” for the actual authors. A similar problem also applies to files that are added to the package, including, but not limited to, the init scripts, and making the file formatted, instead of freeform, would probably allow crediting those as well.
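Purely as an illustration of what I mean by “formatted” — not a worked-out proposal, and the names below are made up — such a file could look something like:

# app-misc/example/COPYRIGHTS.gentoo
ebuild: Jane Doe <jane@example.org> (2010-2012), John Smith <john@example.org> (2012)
files/example.initd: Jane Doe <jane@example.org> (2011)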

Now, this is just a sketch of an idea — unlike Fabio, whose design methodology I do understand and respect, I prefer posting as soon as I have something in mind, to see if somebody can easily shoot it down or if it has wings to fly, and also in the vain hope that if I don’t have the time, somebody else would pick up my plan — but if you have comments on it, I’d be happy to hear them. Maybe after a round of comments, and another round of thinking about it, I’ll propose it as a real GLEP.

During the last years I started several open source projects. Some turned out to be useful, maybe successful, many were just rubbish. Nothing new until here.

Every time I start a new project, I usually don’t really know where I am headed and what my long-term goals are. My excitement and motivation typically come from solving simple everyday and personal problems or just addressing {short,mid}-term goals. This is actually enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I usually never share my idea with other people, ever. I kind of keep my project in a locked pod, away from hostile eyes. Should I share my idea at this time, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, but not the thought process itself nor the detailed plans behind it, because I just don’t have them! While this might be considered against both basic Software Engineering rules and some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until that time, I don’t want others to see my work and judge its usefulness based on incomplete or just inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because I have “no clue” myself as mentioned earlier.

This is why revision control systems, and the implicit development model they force on individuals, are so important, especially for me.
Giving you the ability to work on your stuff, changes, improvements, without caring about the external world until you are really really done with it, is what I ended up needing so much.
Every time I forgot to follow this “secrecy” strategy, I had to spend more time discussing my (still confused?) idea — the {why,what,how} of what I am doing — than coding itself. Round trips are always expensive, no matter what you’re talking about!

Many of the internal tools we at Sabayon successfully use have gone through this development process. Other staffers sometimes say things like “he’s been quiet in the last few days, he must be working on some new features”, and it turns out that most of the time this is true.

This is what I wanted to share with you today though. Don’t wait for your idea to become clearer in your mind, it won’t happen by itself. Just take a piece of paper (or your text editor), start writing your own secret goals (don’t make the mistake of calling them “functional requirements” like I did sometimes), divide them into modest/expected and optimistic/crazy and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked and go back coding again. Iterate until you’re satisfied of the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely that it will just be destructive.

I spent half my Saturday afternoon working on Blender, to get the new version (2.64a) in Portage. This is never an easy task but in this case it was a bit more tedious because thanks to the new release of libav (version 9) I had to make a few more modifications … making sure it would still work with the old libav (0.8).

FFmpeg support is not guaranteed — If you care you can submit a patch. I’m one of the libav developers thus that’s what I work, and test, with.

Funnily enough, while I was doing that work, a new bug for blender was reported in Gentoo, so I looked into it and found out that it was actually caused by one of the bundled dependencies — luckily, one that was already available as its own ebuild, so I just decided to get rid of it. The interesting part was that it wasn’t listed in the “still bundled libraries” list that the ebuild’s own diagnostic prints… since it was actually a bundled library of the bundled libmv!

So you reach the point where you get one package (Blender) bundling a library (libmv) bundling a bunch of libraries, multi-level.

Looking into it I found out that not only the dependency that was causing the bug was bundled (ldl) but there were at least two more that, I knew for sure, were available in Gentoo (glog and gflags). Which meant I could shave some more code out of the package, by adding a few more dependencies… which is always a good thing in my book (and I know that my book is not the same as many others’).

While looking for other libraries to unbundle, I found another one, mostly because its name (eltopo) was funny — it has a website, and from there you can find the sources — neither is linked in the Blender package. When I looked at the sources, I was dismayed to see that there was no real build system, just a half-broken Makefile building two completely different PIC-enabled static archives, for debug and release. Not really something that distributions could take much interest in packaging.

So I set out to build my usual autotools-based build system for it (which, no matter what people say, is extremely fast if you know how to do it), fix the package to build with gcc 4.7 correctly (how did it work for Blender? I assume they patched it somehow, but they don’t write down what they do!), and… uh, where’s the license file?

Turns out that while the homepage says that the library is “public domain”, there is no license statement anywhere in the source code, making it for all intents and purposes the exact opposite: proprietary software. I’ve opened an issue for it and hopefully upstream will fix that up so I can send him my fixes and package it in Gentoo.

Interestingly enough, the libmv software that Blender bundles is much better in its way of bundling libraries. While they don’t seem to give you an easy way to disable the bundled copies (which might or might not be the fault of Blender’s build system), they make it clear where each library comes from, and they have scripts to “re-bundle” said libraries. When they make changes, they also keep a log of them, so that you can identify what changed and either ignore it, patch it, or send it upstream. If all projects bundling stuff did it that way, it would be a much easier job to unbundle…

In the mean time, if you have some free time and feel like doing something to improve the bundled libraries situation in Gentoo Linux, or you care about Blender and you’d like to have a better Gentoo experience with it, we could use some ebuilds for ceres-solver, SSBA and fast-C (this last one has no build system at all, unfortunately), all used by libmv, or maybe carve, libredcode (for which I don’t even have a URL at hand) and recastnavigation (which has no releases), which are instead used directly by Blender.

P.S.: don’t expect to see me around this Sunday, I’m actually going to see the Shuttle, and so I won’t be back till late, most likely, or at least I hope so. You’ll probably see a photo set on Monday on my Flickr page if you want to have a treat.

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing the things that have been done in the last few weeks and the ideas that are being worked out for the next. As I did with the last few meetings, allow me to summarize it for all interested parties…

Toolchain

The upstream GCC development on the 4.8 version progressed into the 3rd stage of its development cycle. Sadly, many of our hardened patches didn’t make the release. Zorry will continue working on these things, hopefully still being able to merge a few – and otherwise it’ll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened ones, not the ones without hardening ;-) so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing more clean stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little endian MIPS, are also on the roadmap.

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated nor tested, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and only allow the user.pax attribute, as we are otherwise allowing unprivileged applications to “grow data” in the kernel memory space (the tmpfs).

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized on revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The newer userspace utilities for SELinux, released in September, are also going to be stabilized in the next few days (at the time of writing this post, they already are ;-). These also support epatch_user so that users and developers can easily add in patches to try out stuff without having to repackage the application themselves.

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to be matured by then. This has a few consequences on available packages (which will need a bump and fix) such as elfix, but also on the pax-utils.eclass file (interested parties are kindly requested to test out the new eclass before it reaches “production”). Of course, it will also mean that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux’ restorecond daemon, this daemon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set), the paxctld daemon will rectify the situation. For Gentoo, this is less of a concern as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which were already deprecated for a while, have been removed from the portage tree. That means that all SELinux-using profiles use the features/selinux inclusion rather than a fully built (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

So again a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened what it is today. Zorry will send the meeting log to the mailing list later, so you can look at the more gory details of the meeting if you want.

[1] GCC profiles are a set of parameters passed on to GCC as a “default” setting. Gentoo Hardened uses GCC profiles to support using non-hardening features if the user wants to (through the gcc-config application).
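As a quick illustration of what that looks like in practice (the profile names below are only examples; the exact ones depend on the toolchain installed on your system):

# list the available GCC profiles on this box
gcc-config -l
# switch to one of them, either by name or by the number shown in the listing
gcc-config x86_64-pc-linux-gnu-4.7.2-vanilla
# reload the environment so the new selection is picked up
env-update && source /etc/profile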

[2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior or enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.
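To give an idea of what that means in practice, the marking can be inspected (and, if you insist, edited) with the regular extended attribute tools; the file path and flag value below are made-up examples, and normally the elfix/pax-utils tooling manages this for you:

# show the PaX marking stored in the user.pax attribute of a binary
getfattr -n user.pax /usr/bin/some-binary
# set it by hand (example value only; prefer the dedicated tools for real work)
setfattr -n user.pax -v "m" /usr/bin/some-binary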

A few days ago I finished fiddling with the Open Build Service (OBS) packages in our main tree. Now when anyone wants to mess with OBS, they just have to emerge dev-util/osc and have fun with it.

What the hell is obs?

OBS is a pretty cool service that allows you to specify how to build your package and its dependencies in one .spec file, and to deliver the results to multiple arches/distros (Debian, SUSE, Fedora, CentOS, Arch Linux) without caring about how it happens.

The primary instance is run for SUSE and is free for anyone to use (e.g. you don’t have to build SUSE packages there if you don’t want to :P). There are two ways to interact with the whole tool: one is the web application, which is a real PITA, and the other is the osc command line tool I finished fiddling with.
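If you want to give it a quick spin, the basic workflow looks more or less like this (the project, package and repository names are just examples):

# get the client
emerge dev-util/osc
# check out a package from the openSUSE instance, build it locally, push changes back
osc checkout home:yourname yourpackage
cd home:yourname/yourpackage
osc build openSUSE_12.2 x86_64
osc commit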

Okay so why did you do it?

Well, I work at SUSE and we are free to use whatever distro we want while still being able to complete our tasks. I like to improve stuff: I want to be able to fix bugs in SLE/openSUSE without having any chroot/virtual machine with the named system installed, and for such a task this works pretty well :-)

I’d like to write some sort of public status report or brain dump of what’s going on. I’ve been on the road for one month of the planned 12 months and just “Living the Dream” as many of the fellow travelers would say. I’ve met so many people so far, some really inspiring, some not. I’m embracing the idea of slow travel and/or home-base travel. I really don’t care how you travel, but the Eurorail, every-capital-city-for-two-days approach is not what I want to do. I’ve learned that already from talking to people and from my own preconceived values. So far, I’m on track, having only visited two countries: the Netherlands and the Czech Republic. I’m really diving into the Czech Republic – mind you, I didn’t really plan on that but it somehow happened and I’m very ok with that. However, the bad side of that is that I’m staying still while people are moving on every 2-5 days. Since the hostel gives a free beer token to every guest, I see new people every day for just long enough to make small talk – I haven’t been in that position before so it’s new for this computer guy who’s been in Minnesota his whole life… (Self-reflection, yay) Annnyway, I’m having fun, I’m enjoying myself, I don’t like to “not-work”, I am forcing myself to take the unbeaten path, I’m getting more comfortable with myself and my environment, I’m relaxed, I can go with the flow, I know “it” will work out, I drink tea daily, I started to enjoy coffee, I’m living life, I am balanced. Go me.

As of this writing, I was in Netherlands for 7 days and spent $55usd per day and Czech for 28 days and spent $28usd per day. With my pre-trip expenses, etc, I’ve spent $65usd per day.

I’m doing fine, read my posts about where I’ve been, look at my pictures on Flickr, interact with me on Twitter for what I am doing, and check back often for what I’ve been doing. Ciao.

(After thought: considering that I’ve been at (or lived at) a dropzone for nearly every weekend this past summer (and the past 6 years), I’m really missing skydiving. Not going to lie, I can’t wait to jump out of a plane. Most places around me are closing for the winter and I’m not properly prepared to jump in the cold even if they were open; poor planning on my part. I didn’t think it would be so bad, taking a hiatus, but that sport is such a part of my life. I miss it.)

Many people have written about Apple’s screwups in the past — recently, the iPad Mini seems to be the major focus for about anybody, and I can see why. While I don’t want to argue I know their worst secret, I’m definitely going to show you one contender that could actually be fit to be called Apple’s Biggest Screwup: OS X Server.

Okay, those who read my blog daily already know that because I blogged this last night:

UNIX friendliness. This can be argued both positively and negatively — we all know that UNIX in general is not very friendly (I’m using the trademarked name because OS X is actually using FreeBSD, which is UNIX), but it’s friendlier to have a UNIX server than a Windows one. So if you want to argue it negatively, you don’t have all the Windows-style point-and-click tools for every possible service. If you want to argue it positively, you’re still running solid (to a point) software such as Apache, BIND, and so on.

Windows’s remote management capabilities. This is an extremely interesting point. While, as I just said, OS X Server provides you with BIND as DNS server, you’re supposed not to edit the files by hand but leave it to Apple’s answer to the Microsoft Management Console — ServerAdmin. Unfortunately, doing so remotely is hard.

Yes, because even though it’s supposed to be usable from a remote host, it requires that you’re using the same version on both sides, and that is impractical if your server is running 10.6 and your only client at hand is updated to 10.8. So this option has to be dropped entirely in most cases — you don’t want to keep updating your server to the latest OS, but you do so for your client, especially if you’re doing development on said client. Whoops.

So can you launch it through an SSH session? Of course not. Despite all the people complaining about X11, the X protocol and SSH X11 forwarding are a godsend for remote management: if you have things like very old versions of libvirt and friends, or some other tool that can only be executed in a graphical environment, you only need another X server with an SSH client and you’re done.
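On a Linux box that is literally a one-liner; the host and the tool below are made-up examples:

# run the remote GUI tool and have it display on the local X server
ssh -X admin@server.example.com virt-manager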

Okay so what can you do? Well, the first option would be to do it locally on the box, but that’s not possible, so the second best would be to use one of the many remote desktop techniques — OS X Server comes with Apple’s Remote Desktop server by default. While this is using the VNC standard 5900 port… it seems like it does not work with a standard VNC client such as KRDC. You really need Apple’s Remote Desktop Client, which is a paid-for proprietary app. Of course you can set up one of many third party apps to connect to it, but if you didn’t think about that when installing the server, you’re basically stuck.

And I’m pretty sure that this does not limit itself to the DNS server, but Apache, and other servers, will probably have the same issues.

Solaris’s hardware support. This should be easy, if you ever tried to run Solaris on real hardware, rather than just virtualized – and even then … – you know that it’s extremely picky. Last time I tried it, it wouldn’t run on a system with SATA drives, to give you an idea.

What hardware can OS X Server run on? Obviously, only Apple hardware. If you’re talking about a server, you have to remove from the equation all their laptops, obviously. If it’s a local server you could use an iMac, but the problem I’ve got is that it’s used not locally but at a co-location. The XServe, which was the original host for OS X Server, is now gone forever, and that leaves us with only two choices: Mac Pro and Mac Mini. Which are the only ones that are sold with that version of OS X anyway.

The former hasn’t been updated in quite a long time. It’s quite bulky to put at a co-location, even though I’ve seen enough messy racks to know that somebody could actually think about bringing it there. The latter actually just recently got an update that makes it sort of interesting, by giving you a two-HDDs option…

But you still get a system that has 2.5", 5400 RPM disks at most, with no RAID, and that tells you to use external storage if you need anything different. And since this is a server edition, it comes with no mouse or keyboard; just adding those means adding another $120. Tell me again why anybody in their sane mind would use one of those for a server? And no, don’t remind me that I could have an answer on the tip of my tongue.

For those who might object that you can fit two Mac Minis on 1U – you really can’t, you need a tray and you end up using 2U most of the time anyway – you can easily use something like SuperMicro’s Twins that fits two completely independent nodes on a single 1U chassis. And the price is not really different.

The model I linked is quoted, googling around, at about eighteen hundred dollars ($1800); add $400 for four 1TB hard disks (WD Caviar Black, that’s their going price; I’ve ordered eight of them since last April already — four for Excelsior, four for work), and you get to $2200. Two Apple Mac Minis? $2234, with the mouse and keyboard that you need (the Twin system has IPMI support and remote KVM, so you don’t need them).

AIX’s software availability. So yes, you can have MacPorts, or Gentoo Prefix, or Fink, or probably a number of other similar projects. The same is probably true for AIX. How much software is actually tested on OS X Server? Probably not much. While Gentoo Prefix and MacPorts cover most of the basic utilities you’d use on your UNIX workstation, I doubt that you’ll find the complete software coverage that you currently find for Linux, and that’s often enough a dealbreaker.

For example, I happen to have these two Apple servers (don’t ask!). How do I monitor them? Neither Munin nor NRPE are easy to set up on OS X so they are yet unmonitored, and I’m not sure if I’ll ever actually monitor them. I’d honestly replace them just for the sake of not having to deal with OS X Server anymore, but it’s not my call.

I think Apple did quite a feat, to make me think that our crappy HP servers are not the worst out there…

I took a weekend trip to Kutná Hora and Olomouc. Kutná Hora was on the way via train so I got off there (with a small connection train) and visited the Bone Church, a common gravesite of over 40,000 people. I feel like it is one of those things that will just disappear someday – bones won’t last forever in the open air like that.

Otherwise, Kutná Hora was just a small town and I didn’t do much else there besides get on the train again for the city of Olomouc (a-la-moats). I probably missed something in Kutná Hora, but it wasn’t obvious to me and I had just heard about the church. Olomouc is the 6th largest city in the Czech Republic, and largely a university town. I stayed in a lovely small hostel, the Poet’s Corner (highly recommended), for a few nights. Most students go home on the weekends, which I think is odd, but I did get to talk to some students (from a different city, who were home for the weekend) and went out to enjoy the student bars. Good times; I recommend seeing Olomouc if you have a few days open in your itinerary and are not doing the crazy whirlwind capital-city Europe tour. There are some nice things to see; I just had to see the country’s ‘other’ astronomical clock. Also, a few microbreweries, which were delicious, and I even did a beer spa for fun (why not?).

Last week I posted a survey about openSUSE Connect. Although some answers are still coming in and you are still welcome to provide more feedback, let’s take a look at some results. Some numbers first. openSUSE Connect is not a really busy website; it gets about 80 different visitors per day. Not much, but not a total wasteland either. Related to this number is another one: more than half of the people responding to the survey have never ever heard about openSUSE Connect. So it sounds like we should speak about it more…

Now something regarding the feedback. Most people think that it is a good idea and that it either is already useful or can become quite useful. But even though the feedback was positive, a lot of people made various suggestions on how to improve it. So what can be done to make it better? Most of the feedback was centred around the following two topics.

Social aspects

One frequently mentioned topic was the social aspect of Connect. It is a social network where you can’t post status messages and where it is not easy to follow what people are up to. So it’s kind of an antisocial social network. There were people asking for the ability to share what they are doing – status messages, chat and the stuff they know from Facebook or Google+. On the other hand, there were people who complained that they don’t want to have another social network to maintain. And the third opinion, which I think is somewhere in between, was to provide some easier integration with already existing social networks like Facebook, Twitter or Google+. That, I would say, sounds like the most reasonable solution.

More polishing

This was mentioned for most of the site’s aspects. openSUSE Connect is a good thing, it contains many great ideas, but somehow they are not polished enough, much like Connect itself. People complained that the UI could be nicer and more user-friendly, and that widgets miss some finishing touches. So what is needed here? Probably some designers to step in and fix the UI. But apart from that, some widgets could use some coding touches as well. So if you don’t like how something is done, feel free to submit a patch.

Conclusion?

People didn’t know about openSUSE Connect and there are things to be polished. We had some good ideas and we implemented them when we started with Connect. But there is still quite some work left before Connect will be perfect. Work that can be picked up by anybody, as openSUSE Connect is open source, written in PHP, and we even have documentation mentioning, among other things, how to work on it. We can of course just let it live as it is and use it for membership and elections, for which it works well. But it looks like my survey got people at least a little bit interested, and for example victorhck submitted a logo proposal for openSUSE Connect! So maybe we will get some other contributors as well. And let’s see how I will spend my next Hackweek.

The recruiters team announced a few months ago that they decided not to use the recruiting webapp any more, and move back to the txt quizzes instead. Additionally, the webapp started showing random ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There have been people that were still working on it though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiters’ decision to move back to the quizzes. First of all, I used this webapp as a mentor a lot from the very first moment it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizzes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back etc. It was hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find the most recent one (or the previous one, to compare answers), and the mentee had to dig through IRC logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting his answers in a central place, and I could easily leave comments there. But it had a bunch of issues, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much, I just wanted to press “Approve”! When I decided to start filing bugs, surprisingly I found out that all my UI complaints had already been reported; clearly I was not alone in this world.
In short, cool idea but annoying UI. That was not the problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This will allow me to write my comments below their answers in a different font/color, so I can have async communication with them. I was present during the recruitment interview session of my last mentee Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank Joachim Bartosik a lot for his work on the webapp and the interesting ideas he put into it (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty, who mentored Joachim in creating the recruiting webapp as a GSoC project and helped in deploying it to the infra servers. I am kinda sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There has been some discussion regarding that webapp during the Gentoo Miniconf; I hope it doesn’t sink.

So we use roughly 1/3rd the memory to get the same things done (fileserver),
and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since php is singlethreaded the performance is pretty much CPU-bound;
and as we have a small enough dataset it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: For the same workload on the same hardware we're seeing substantial improvements
between a few percent and roughly three times the throughput, for IO-bound as well as for CPU-bound tasks.
The memory use goes down for most workloads while still getting the exact same results, only a lot faster.

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.

Apparently it’s been a while since my last blog post. This however does mean that I’ve been too busy on the coding side, which is what you may prefer I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object oriented, bash-completion friendly, or man-pages friendly, and most importantly, it did not use any standard argument parsing library (because there was no argparse module and optparse was about to be deprecated).

Modularity

Equo subcommands are just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands are now sporting new arguments (have a look at “equo match” for example).

Man pages

All the equo subcommands are provided with a man page which is available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself and automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method called “list bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea to place tests directly in the subcommand module code.
Testing! Please install entropy 149 and play with it, try to break it and report bugs!

Over the past few weeks, I’ve been designing a basic site (in WordPress) for a new client. This client needs some embedded FLVs on the site, and doesn’t want them (for good reason) to be directly linked to YouTube. As such, and seeing as I didn’t want to make the client write the HTML for embedding a flash video, I installed a very simple FLV plugin called WP OS FLV.

The plugin worked exactly as I had hoped it would, by cleanly showing the FLV with just a few basic options. However, I noticed that the pages with FLVs embedded in them using the plugin were significantly slower to load than were pages without FLVs. Doing some fun experimentation with cURL, I found that those pages had some external calls on them. Hmmmmmm, now what would the plugin need from an external site? Doing a little more digging, I found the following line hardcoded twice in the plugin’s wposflv.php file:

That line means that if the site flv-player.net is down or slow, the page with the FLV plugin on your blog will also be slow. In order to fix this problem, you simply need to download the player_flv_maxi.swf file from that site, upload it somewhere on your server, and edit the line to call the location on your server instead. For instance, if your site is my-site.com, and you put the SWF file in a directory called static, you would change the absolute URL to:
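With those example names, the edited URL would simply point at your own copy:

http://my-site.com/static/player_flv_maxi.swf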

I'm sitting on the first day of the Qt Developer Days in Berlin and am pretty
impressed by the event so far -- the organizers have done an excellent job
and everything feels very, very smooth here. Congratulations for that; I have a
first-hand experience with organizing a workshop and can imagine the huge pile of
work which these people have invested into making it rock. Well done I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP
e-mail client. A lot of work has found its way in since the last release; Trojitá now supports
almost all of the useful IMAP extensions including QRESYNC and
CONDSTORE for blazingly fast mailbox synchronization or the
CONTEXT=SEARCH for live-updated search results, to name just a few.
There've also been roughly 666 tons of bugfixes, optimizations, new features and
tweaks. Trojitá is finally showing signs of getting ready for use as
a regular e-mail client, and it's exciting to see that process after 6+ years of
working on that in my spare time. People are taking part in the development
process; there has been a series of commits from Thomas Lübking of the kwin fame
dealing with tricky QWidget issues, for example -- and it's great to see many
usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree
(the thesis was about
Trojitá, of course), I started a new job (this time using Qt) and
implemented quite some interesting stuff with Qt -- if you have always wondered
how to integrate Ragel, a parser generator, with qmake, stay tuned for future
posts.

Anyway, in case you are interested in using an extremely fast e-mail client
implemented in pure Qt, give Trojitá a try. If you'd like to chat about it,
feel free to drop me a mail or just stop me
anywhere. We're always looking for contributors, so if you hit some annoying
behavior, please do chime in and start hacking.

I’ve written a small script that I call selocal which manages locally needed SELinux rules. It allows me to add or remove SELinux rules from the command line and have them loaded up without needing to edit a .te file and building the .pp file manually. If you are interested, you can download it from my github location.

Its usage is as follows:

You can add a rule to the policy with selocal -a “rule”

You can list the current rules with selocal -l

You can remove entries by referring to their number (in the listing output), like selocal -d 19.

You can ask it to build (-b) and load (-L) the policy when you think it is appropriate

It even supports multiple modules in case you don’t want to have all local rules in a single module set.
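Put together, a session could look something like this (the rule is only an illustration; adjust the domain and the policy interface to whatever you actually need):

# queue a rule for the local policy module
selocal -a "corenet_tcp_connect_all_unreserved_ports(mozilla_t)"
# check what is in the local module now
selocal -l
# rebuild and load the module
selocal -b -L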

So when I wanted to give a presentation on Tor, I had to allow the torbrowser to connect to an unreserved port. The torbrowser runs in the mozilla domain, so all I did was:

If you read my blog on a regular basis, you will know that I traveled through Russia, Mongolia and China last year. If there's one big thing I learned on this trip, it's this: English language is - on a worldwide scale - much less prevalent than I thought. Call me a fool, but I just wasn't aware of that. I thought, okay, maybe many people won't understand English, but at least I'll always be able to find someone nearby who's able to translate. That just wasn't the case. I spent days in cities where I met nobody that shared any language knowledge with me.

I'm pretty sure that translation technologies will become really important in the not-so-distant future. For many people, they already are. I've learned about the opinions of Swedish initiatives without any knowledge of Swedish just by using Google translate. Google Chrome and the free variant Chromium directly show the option to send something through Google translate if they detect that it's not in your language (although that wasn't working with Mongolian when I was there last year). I was in hotels where the staff pointed me to their PC with an instance of Yandex translate or Baidu translate where I should type in my questions in English (Yandex is something like the Russian Google, Baidu is something like the Chinese Google). Despite all the shortcomings of today's translation services, people use them to circumvent language barriers.

Young people in those countries are often learning English today, but it's a matter of fact that this will only very slowly translate into real change. Lots of barriers exist. Many countries have their own language and another language that's used as the "international communication language" that's not English. For example, you'll probably get along pretty well in most post-Soviet countries with Russian, no matter if the countries have their own native language or not. This also happens in single countries with more than one language. People have their native language and learn the country's language as their first foreign language.
Some people think their language is especially important and this stops the adoption of English (France is especially known for that). Some people have the strange idea that supporting English language knowledge is equivalent to supporting US politics and therefore oppose it.

Yes, one can try to learn more languages (I'm trying it with Mandarin myself and if I ever feel I can try a fourth language it'll probably be Russian), but if you look at the world scale, it's a losing battle. To get along worldwide, you'd probably have to learn at least five languages. If you are fluent in English, Mandarin, Russian, Arabic and Spanish, you're probably quite good, but I doubt there are many people on this planet able to do that. If you're one of them, you have my deepest respect (please leave a comment if you are).

If you'd pick two completely random people of the world population, it's quite likely that they don't share a common language.

I see no reason in principle why technology can't solve that. We're probably far away from a Star Trek-like universal translator and sadly evolution hasn't brought us the Babelfish yet, but I'm pretty confident that we will see rapid improvements in this area and that will change a lot. This may sound somewhat pathetic, but I think this could be a crucial issue in fixing some of the big problems of our world - hate, racism, war. It's just plain simple: If you have friends in China, you're less likely to think that "the Chinese people are bad" (I'm using this example because I feel this thought is especially prevalent amongst the left-alternative people who would never admit any racist thoughts - but that's probably a topic for a blog entry on its own). If you have friends in Iran, you're less likely to support your country fighting a war against Iran. But having friends requires being able to communicate with them. Being able to have friends without the necessity of a common language is a fascinating thought to me.

So, I started reading [The Definitive Guide to the Xen Hypervisor] (again), and I thought it would be fun to start with the example guest kernel provided by the author and extend it a bit (yeah, there’s mini-os already in extras/, but I wanted to struggle with all the peculiarities of extended inline asm, x86_64 asm, linker scripts, C macros etc. myself).

After doing some reading about x86_64 asm, I ‘ported’ the example kernel to 64bit, and gave it a try. And of course, it crashed. While I was responsible for the first couple of crashes (for which, btw, I could write at least 2-3 additional blog posts), I got stuck with this error:

when trying to boot the example kernel as a domU (under xen-unstable).

0x2000 is the address where XEN maps the hypercall page inside the domU’s address space. The guest crashed when trying to issue any hypercall (HYPERCALL_console_io in this case). At first, I thought I had screwed up with the x86_64 extended inline asm, used to perform the hypercall, so I checked how the hypercall macros were implemented both in the Linux kernel (wow btw, it’s pretty scary), and in the mini-os kernel. But, I got the same crash with both of them.

After some more debugging, I made it work. In my Makefile, I used gcc to link all of the object files into the guest kernel. When I switched to ld, it worked. Apparently, when using gcc to link object files, it calls the linker with a lot of options you might not want. Invoking gcc using the -v option will reveal that gcc calls collect2 (a wrapper around the linker), which then calls ld with various options (certainly not only the ones I was passing to my ‘linker’). One of them was --build-id, which generates a “.note.gnu.build-id” ELF note section in the output file, which contains some hash to identify the linked file.
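A quick way to see the difference from a shell (the file names here are made up for the example):

# gcc -v shows collect2/ld being invoked with extra defaults such as --build-id
gcc -v -nostdlib -T kernel.lds -o kernel bootstrap.o kernel.o
# linking with ld directly, or telling gcc to drop the note, keeps the expected layout
ld -T kernel.lds -o kernel bootstrap.o kernel.o
gcc -nostdlib -Wl,--build-id=none -T kernel.lds -o kernel bootstrap.o kernel.o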

Apparently, this note changes the layout of the resulting ELF file, and ‘shifts’ the .text section to 0x30 from 0x0, and hypercall_page ends up at 0x2030 instead of 0x2000. Thus, when I ‘called’ into the hypercall page, I ended up at some arbitrary location instead of the start of the specific hypercall handler I was going for. But it took me quite some time of debugging before I did an objdump -dS [kernel] (and objdump -x [kernel]), and found out what was going on.
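For reference, the checks boil down to something like this ("kernel" is just a placeholder for the output file):

# is there an unexpected .note.gnu.build-id section in there?
objdump -x kernel | grep build-id
# where did hypercall_page actually end up? expected 0x2000, got 0x2030
objdump -x kernel | grep hypercall_page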

The code from bootstrap.x86_64.S looks like this (notice the .org 0x2000 before the hypercall_page global symbol):

One solution, mentioned earlier, is to switch to ld (which probably makes more sense), instead of using gcc. The other solution is to tweak the ELF file layout through the linker script (actually this is pretty much what the Linux kernel does to work around this):

For those of you who missed my previous updates, we recently organised a PulseAudio miniconference in Copenhagen, Denmark last week. The organisation of all this was spearheaded by ALSA and PulseAudio hacker, David Henningsson. The good folks organising the Ubuntu Developer Summit / Linaro Connect were kind enough to allow us to colocate this event. A big thanks to both of them for making this possible!

The room where the first PulseAudio conference took place

The conference was attended by the four current active PulseAudio developers: Colin Guthrie, Tanu Kaskinen, David Henningsson, and myself. We were joined by long-time contributors Janos Kovacs and Jaska Uimonen from Intel, Luke Yelavich, Conor Curran and Michał Sawicz.

We started the conference at around 9:30 am on November 2nd, and actually managed to keep to the final schedule(!), so I’m going to break this report down into sub-topics for each item which will hopefully make for easier reading than an essay. I’ve also put up some photos from the conference on the Google+ event.

Mission and Vision

We started off with a broad topic — what each of our personal visions/goals for the project are. Interestingly, two main themes emerged: having the most seamless desktop user experience possible, and making sure we are well-suited to the embedded world.

Most of us expressed interest in making sure that users of various desktops had a smooth, hassle-free audio experience. In the ideal case, they would never need to find out what PulseAudio is!

Orthogonally, a number of us are also very interested in making PulseAudio a strong contender in the embedded space (mobile phones, tablets, set top boxes, cars, and so forth). While we already find PulseAudio being used in some of these, there are areas where we can do better (more in later topics).

There was some reservation expressed about other, less-used features such as network playback being ignored because of this focus. The conclusion after some discussion was that this would not be the case, as a number of embedded use-cases do make use of these and other “fringe” features.

Increasing patch bandwidth

Contributors to PulseAudio will be aware that our patch queue has been growing for the last few months due to lack of developer time. We discussed several ways to deal with this problem, the most promising of which was a periodic triage meeting.

We will be setting up a rotating schedule where each of us will organise a meeting every 2 weeks (the period might change as we implement things) where we can go over outstanding patches and hopefully clear backlog. Colin has agreed to set up the first of these.

Routing infrastructure

Next on the agenda was a presentation by Janos Kovacs about the work they’ve been doing at Intel with enhancing the PulseAudio’s routing infrastructure. These are being built from the perspective of IVI systems (i.e., cars) which typically have fairly complex use cases involving multiple concurrent devices and users. The slides for the talk will be put up here shortly (edit: slides are now available).

The talk was mingled with a Q&A type discussion with Janos and Jaska. The first item of discussion was consolidating Colin’s priority-based routing ideas into the proposed infrastructure. The general thinking was that the ideas were broadly compatible and should be implementable in the new model.

There was also some discussion on merging the module-combine-sink functionality into PulseAudio’s core, in order to make 1:N routing easier. Some alternatives using the module-filter-* modules were proposed. Further discussion will likely be required before this is resolved.
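For context, this is the kind of thing that today has to be done by loading the module by hand (the sink names below are placeholders):

# combine two sinks into one virtual sink so a single stream can play on both
pactl load-module module-combine-sink sink_name=both slaves=alsa_output.a,alsa_output.b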

The next steps for this work are for Jaska and Janos to break up the code into smaller logical bits so that we can start to review the concepts and code in detail and work towards eventually merging as much as makes sense upstream.

Low latency

This session was taken up against the background of improving latency for games on the desktop (although it does have other applications). The indicated required latency for games was given as 16 ms (corresponding to a frame rate of 60 fps). A number of ideas to deal with the problem were brought up.

Firstly, it was suggested that the maxlength buffer attribute when setting up streams could be used to signal a hard limit on stream latency — the client signals that it will prefer an underrun, over a latency above maxlength.

Another long-standing item was to investigate the cause of underruns as we lower latency on the stream — David has already begun taking this up on the LKML.

Finally, another long-standing issue is the buffer attribute adjustment done during stream setup. This is not very well-suited to low-latency applications. David and I will be looking at this in coming days.

Merging per-user and system modes

Tanu led the topic of finding a way to deal with use-cases such as mpd or multi-user systems, where access to the PulseAudio daemon of the active user by another user might be desired. Multiple suggestions were put forward, though a definite conclusion was not reached, as further thought is required.

Tanu’s suggestion was a split between a per-user daemon to manage tasks such as per-user configuration, and a system-wide daemon to manage the actual audio resources. The rationale being that the hardware itself is a common resource and could be handled by a non-user-specific daemon instance. This approach has the advantage of having a single entity in charge of the hardware, which keeps a part of the implementation simpler. The disadvantage is that we will either sacrifice security (arbitrary users can “eavesdrop” using the machine’s mic), or security infrastructure will need to be added to decide what users are allowed what access.

I suggested that since these are broadly fringe use-cases, we should document how users can configure the system by hand for these purposes, the crux of the argument being that our architecture should be dictated by the main use-cases, and not the ancillary ones. The disadvantage of this approach is, of course, that configuration is harder for the minority that wishes multi-user access to the hardware.

Colin suggested a mechanism for users to be able to request access from an “active” PulseAudio daemon, which could trigger approval by the corresponding “active” user. The communication mechanism could be the D-Bus system bus between user daemons, and Ștefan Săftescu’s Google Summer of Code work to allow desktop notifications to be triggered from PulseAudio could be used to request authorisation.

David suggested that we could use the per-user/system-wide split, modified somewhat to introduce the concept of a “system-wide” card. This would be a device that is configured as being available to the whole system, and thus explicitly marked as not having any privacy guarantees.

In both the above cases, discussion continued about deciding how the access control would be handled, and this remains open.

We will be continuing to look at this problem until consensus emerges.

Improving (laptop) surround sound

The next topic dealt with laptops that have a built-in 2.1 channel setup. The background of this is that there are a number of laptops with stereo speakers and a subwoofer. These are usually used as stereo devices with the subwoofer implicitly being fed data by the audio controller in some hardware-dependent way.

The possibility of exposing this hardware more accurately was discussed. Some investigation is required to see how things are currently exposed for various hardware (my MacBook Pro exposes the subwoofer as a surround control, for example). We need to deal with correctly exposing the hardware at the ALSA layer, and then using that correctly in PulseAudio profiles.

This led to a discussion of how we could handle profiles for these. Ideally, we would have a stereo profile with the hardware dealing with upmixing, and a 2.1 profile that would be automatically triggered when a stream with an LFE channel was presented. This is a general problem while dealing with surround output on HDMI as well, and needs further thought as it complicates routing.

Testing

I gave a rousing speech about writing more tests using some of the new improvements to our testing framework. Much cheering and acknowledgement ensued.

Ed.: some literary liberties might have been taken in this section

Unified cross-distribution ALSA configuration

I missed a large part of this unfortunately, but the crux of the discussion was around unifying cross-distribution sound configuration for those who wish to disable PulseAudio.

Base volumes

The next topic we took up was base volumes, and whether they are useful to most end users. For those unfamiliar with the concept, we sometimes see sinks/sources which support volume controls going to > 0 dB (which is the no-attenuation point). We provide the maximum allowed gain in ALSA as the maximum volume, and suggest that UIs show a marker for the base volume.

It was felt that this concept was irrelevant, and probably confusing to most end users, and that we suggest that UIs do not show this information any more.

Relatedly, it was decided that having a per-port maximum volume configuration would be useful, so as to allow users to deal with hardware where the output might get too loud.

Devices with dynamic capabilities (HDMI)

Our next topic of discussion was finding a way to deal with devices such as those HDMI ports where the capabilities of the device could change at run time (for example, when you plug out a monitor and plug in a home theater receiver).

A few ideas to deal with this were discussed, and the best one seemed to be David’s proposal to always have a separate card for each HDMI device. The addition of dynamic profiles could then be exploited to only make profiles available when an actual device is plugged in (and conversely removed when the device is plugged out).

Splitting of configuration

It was suggested that we could split our current configuration files into three categories: core, policy and hardware adaptation. This was met with approval all-around, and the pre-existing ability to read configuration from subdirectories could be reused.

Another feature that was desired was the ability to ship multiple configurations for different hardware adaptations with a single package and have the correct one selected based on the hardware being run on. We did not know of a standard, architecture-independent way to determine the hardware adaptation, so it was felt that the first step toward solving this problem would be to find or create such a mechanism. This could then either be used to set up configuration correctly in early boot, or by PulseAudio to do runtime configuration selection.

Relatedly, moving all distributed configuration to /usr/share/..., with overrides in /etc/pulse/... and $HOME, was suggested.

Better drain/underrun reporting

David volunteered to implement a per-sink-input timer for accurately determining when drain was completed, rather than waiting for the period of the entire buffer as we currently do. Unsurprisingly, no objections were raised to this solution to the long-standing issue.

In a similar vein, redefining the underflow event to mean a real device underflow (rather than the client-side buffer running empty) was suggested. After some discussion, we agreed that a separate event for device underruns would likely be better.

Beer

We called it a day at this point and dispersed beer-wards.

Our valiant attendees after a day of plotting the future of PulseAudio

User experience

David very kindly invited us to spend a day after the conference hacking at his house in Lund, Sweden, just a short hop away from Copenhagen. We spent a short while in the morning talking about one last item on the agenda — helping to build a more seamless user experience. The idea was to figure out some tools to help users with problems quickly converge on what problem they might be facing (or help developers do the same). We looked at the Ubuntu apport audio debugging tool that David has written, and will try to adopt it for more general use across distributions.

Hacking

The rest of the day was spent in more discussions on topics from the previous day, poring over code for some specific problems, and rolling out the first release candidate for the upcoming 3.0 release.

And cut!

I am very happy that this conference happened, and am looking forward to being able to do it again next year. As you can see from the length of this post, there are a lot of things happening in this part of the stack, and lots more yet to come. It was excellent meeting all the fellow PulseAudio hackers, and my thanks to all of them for making it.

Finally, I wouldn’t be sitting here writing this report without support from Collabora, who sponsored my travel to the conference, so it’s fitting that I end this with a shout-out to them. :)

You might remember that in our team (openSUSE Boosters), we created openSUSE Connect some time ago. It was meant as a replacement for users.opensuse.org, which nobody knew about and nobody used. We hoped that it would attract more users and that it would be a more user-friendly way to manage personal data. Apart from that, we wanted to include more interesting widgets so it could become your landing page for all your efforts in the openSUSE project. With that in mind we created a bugzilla widget, a fate widget, a build status widget and some more. We hoped that it would make a difference, help people, and that they would enjoy using the new site. During this summer my GSoC student created an amazing Karma widget as well to make it more fun. And as Connect has been in function for some time already, it’s now time to collect some feedback. Did it work? Do you like it? Or did it become just a wasteland? Do you think such a site makes sense?

I’m not promising anything right now, but it would be nice to know what our users think about it, whether it could make sense to put some effort into it, and how much and where to concentrate it. So please, fill in this little survey and let me know your opinion. I’ll publish the results later.

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i set up some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
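for the record, that corresponds to a jackd invocation along these lines (hw:KA6 is a placeholder for whatever name ALSA gives the device on your system):

# 64 frames/period * 3 periods / 48000 Hz = 4 ms
jackd -R -d alsa -d hw:KA6 -r 48000 -p 64 -n 3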

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected: with firewire, the CPU doesn’t have to do so much processing of the audio streams; the work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, so that takes more valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

(I’d like to first give a global shout out to my first Crossfit home, The Athlete Lab)

Since I’m in Prague for a month, I became a member of Crossfit Praha instead of just being a drop-in client. The gym is quite small, but centrally located in Prague. The lifting days are separate from the normal days (probably unless you are a trusted regular). The premise is, you show up during a block of time, warm up on your own, proceed with the WOD, then cool down on your own, which is pretty standard across gyms from what I can tell, the exception being that everyone starts the WOD at their own time (not at structured times). Now I’ve put my money where my mouth is and have to keep a good diet, not drink so much beer, etc. to be able to function the next day(s) after a WOD. “Tomorrow will not be any easier”

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (version 1.1): the background colour now turns teal-like when you start typing your password to disable slock and get back to using your system. The change comes from a dual-colour patch that was added in version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back to the way that it was pre-1.1. There are a couple of ways to accomplish this goal. The first is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly.
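
The part of config.mk that matters here is the CPPFLAGS line, where the two colours are defined; in slock 1.1 it looks something like this (an excerpt, so the surrounding flags may differ slightly on your system):

# flags (excerpt from config.mk)
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

To get the pre-1.1 behaviour back, set COLOR2 to \"black\" as well.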

Note that you do not need the extra escaping backslash (the one in front of the #) when you are using the colour name instead of the hex representation.

If you use Gentoo, though, and you’re already building each package from source, how can you make this change yet still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock results in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers) was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. In order to take advantage of this great USE flag, you first need to have Portage build slock with the flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then you are free to edit the saved config.mk, which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After making the modifications of your choice and recompiling with the ‘savedconfig’ USE flag enabled, slock should exhibit the behaviour that you anticipated.
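
Put together, the whole cycle looks roughly like this (a sketch; substitute your editor of choice, and note that the savedconfig file only appears after the first build with the flag enabled):

emerge --oneshot x11-misc/slock                    # first rebuild with USE=savedconfig so Portage saves config.mk
nano /etc/portage/savedconfig/x11-misc/slock-1.1   # change COLOR2 back to "black", for example
emerge --oneshot x11-misc/slock                    # rebuild; Portage now uses the edited config.mk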

The kind folks over at Element 14 emailed me last week asking if I’d like to review the new Raspberry Pi 512MB edition and the Adafruit Budget Pack. Whilst I already have a rather large collection of Pi, I thought it’d be fun to write a review since it’s not something I’ve really done before.

So, yesterday the kit arrived, and today I got a chance to unpack it and have a play around. The kit doesn’t come with a Raspberry Pi; you have to buy that separately. Here’s a breakdown of what the kit includes:

Pi box (a clear acrylic case for the Pi)

Cobbler and GPIO ribbon cable (breakout board to split the GPIO cable out onto a breadboard)

Half-size breadboard with a bundle of breadboarding wires

4GB microSD card with SD adaptor

5V/1A USB power supply and cable

Firstly, the Pi box. The clear plastic looks pretty awesome once it’s assembled, and the laser-engraved labels are an excellent touch. However, I tend to swap my Pis in and out of cases a lot, and assembling the case is kinda fiddly, so I think I’ll be keeping whichever Pi goes in this case in there.

The USB power supply, cable and SD card: there isn’t really a whole lot to say about these; you need them to use your Pi. The power supply is supposedly specced to the hilt and overrated at 5.25V to account for the voltage drop caused by the cable. However, given that it’s got a US two-pin plug and I live in the UK (and don’t have the appropriate adaptor handy), I’ve not been able to test this out. That said, if Adafruit have said it’s the case, I’m totally inclined to believe it’s the bee’s knees like they say it is. The SD card is a class 4 Dane-Elec, which will work just fine, but probably isn’t the fastest (note: I haven’t benchmarked this, I’m going off my general experience using various cards in the Pi). That said, this is the budget pack, so if you want a fast, expensive card, you’re best buying that separately.

My favourite part of this whole kit is the Cobbler and the GPIO ribbon cable. Very often when I’m developing with the Pi I need to use a serial console for debugging, and plugging in the rather tiny cables that come with my USB serial adaptor into a Pi each time is somewhat of a pain. I must’ve done it a few hundred times now and I still don’t remember which cable goes to which pin. With the Cobbler I can just leave the serial adaptor connected to the breadboard and use the ribbon cable to connect the Pi of my choice: very nice!
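
For what it’s worth, once the USB serial adaptor is parked on the breadboard, attaching to the Pi’s console is a one-liner (assuming the adaptor shows up as /dev/ttyUSB0 on your machine):

screen /dev/ttyUSB0 115200   # the Pi's serial console runs at 115200 baud by default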

Lastly, the 512MB Raspberry Pi itself. Personally, I think this is huge. 512MB of RAM on an ARM board with a fairly bitchin’ GPU for $35? Never before has “shut up and take my money” been so appropriate. As the foundation have said, hardware-accelerated X is being worked on, which, combined with a 512MB Pi, should make for an impressively capable machine for the money, in my opinion.

The hardware alone is useless without cool software, though, and that’s the most amazing part. In the past twelve months the Raspberry Pi has rocketed into the mainstream and amassed a huge community of fans, many of whom are developing and showing off new and cool things for the Pi. If you’ve made something cool, I’d love to see it; tweet me a link and if I think it’s awesome I’ll retweet it and share it on.

Want to find more cool projects? Check out the Raspberry Pi and Element 14 forums, which are both very active and have much of this stuff being shared about.

email me "proof" you are running the latest stable -rc kernel at the moment.

send a link to some kernel patches you have done that were accepted into Linus's tree.

send a link to any Linux distro kernel tree where they keep their patches.

say why you want to do this type of thing, and what amount of time you can spend on it per week.

I'll close the application process in a week, on November 7, 2012. After that, I'll contact everyone who applied and follow up with some questions through email. I'll also post something here to say what the response was like.

This problem seems to bite some of our hardened users a couple of times a year, so I thought I’d blog about it. If you are using grsec and PulseAudio, you must not enable CONFIG_GRKERNSEC_SYSFS_RESTRICT in your kernel, or autodetection of your cards will fail.

PulseAudio’s module-udev-detect needs to access /sys to discover what cards are available on the system, and that kernel option disallows this for anyone but root.
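
If you’re not sure whether the option is enabled, a quick check against your kernel config is enough (the paths below are the usual suspects and may differ on your system):

zgrep GRKERNSEC_SYSFS_RESTRICT /proc/config.gz        # if your kernel exposes its config
grep GRKERNSEC_SYSFS_RESTRICT /usr/src/linux/.config  # or check the sources you built from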

Just wanted to wish you a very happy 15th birthday, Noah! I hope that you have an awesome day, filled with fun and excitement, and surrounded by your friends, family, and loved ones. Those are the best elements of a special day, but maybe, just maybe, you’ll get some cool stuff too! I also can’t believe that it’s just one more year until you’ll have your license; bet you can’t wait!

Anyway, thinking about you, and hope that everything in your life is going superbly well.

David has now published a tentative schedule for the PulseAudio Mini-conference (I’m just going to call it PulseConf — so much easier on the tongue).

For the lazy, these are some of the topics we’ll be covering:

Vision and mission — where we are and where we want to be

Improving our patch review process

Routing infrastructure

Improving low latency behaviour

Revisiting system- and user-modes

Devices with dynamic capabilities

Improving surround sound behaviour

Separating configuration for hardware adaptation

Better drain/underrun reporting behaviour

Phew — and there are more topics that we probably will not have time to deal with!

For those of you who cannot attend, the Linaro Connect folks (who are graciously hosting us) are planning on running Google+ Hangouts for their sessions. Hopefully we should be able to do the same for our proceedings. Watch this space for details!

p.s.: A big thank you to my employer Collabora for sponsoring my travel to the conference.