Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

I took a weekend trip to Kutná Hora and Olomouc. Kutná Hora was on the way via train so I got off there (with a small connection train) and visited the Bone Church, a common gravesite of over 40,000 people. I feel like it is one of those things that will just disappear someday – bones won’t last forever in the open air like that.

Otherwise, Kutná Hora was just a small town and I didn’t do much else there besides get on the train again for the city of Olomouc (a-la-moats). I probably missed something in Kutná Hora, but it wasn’t obvious to me and I had only heard about the church. Olomouc is the 6th largest city in the Czech Republic, and largely a university town. I stayed in a lovely small hostel, the Poet’s Corner (highly recommended), for a few nights. Most students go home on the weekends, which I think is odd, but I did get to talk to some students (from a different city, home for the weekend) and went out to enjoy the student bars. Good times; I recommend seeing Olomouc if you have a few days open in your itinerary and are not doing the crazy whirlwind capital-city Europe tour. There are some nice things to see, and I made sure to see the country’s ‘other’ astronomical clock. Also, a few microbreweries, which were delicious, and I even did a beer spa for fun (why not?).

Last week I posted a survey about openSUSE Connect. Although some answers are still coming in and you are still welcome to provide more feedback, let’s take a look at some results. Some numbers first. openSUSE Connect is not a really busy website; it gets about 80 different visitors per day. Not much, but not a total wasteland. Related to this number is another one: more than half of the people responding to the survey have never ever heard about openSUSE Connect. So it sounds like we should speak about it more…

Now something regarding the feedback. Most people think that it is a good idea and that it either is already useful or can become quite useful. But even though the feedback was positive, a lot of people made various suggestions on how to improve it. So what can be done to make it better? Most of the feedback was centred around the following two topics.

Social aspects

One frequently mentioned topic was the social aspect of Connect. It is a social network where you can’t post status messages and where it is not easy to follow what people are up to. So it’s kind of an antisocial social network. There were people asking for the ability to share what they are doing – status messages, chat and the other stuff they know from Facebook or Google+. On the other hand, there were people who complained that they don’t want another social network to maintain. And a third opinion, which I think sits somewhere in between, was to provide easier integration with already existing social networks like Facebook, Twitter or Google+. That, I would say, sounds like the most reasonable solution.

More polishing

This was mentioned about most of the site’s aspects. openSUSE Connect is a good thing and it contains many great ideas, but somehow they are not polished enough. Neither is Connect itself. People complained that the UI could be nicer and more user-friendly, and that widgets miss some finishing touches. So what is needed here? Probably some designers to step in and fix the UI. But apart from that, some widgets could use some coding touches as well. So if you don’t like how something is done, feel free to submit a patch.

Conclusion?

People didn’t know about openSUSE Connect and there are things to be polished. We had some good ideas and we implemented them when we started with Connect, but there is still quite some work left before Connect will be perfect. Work that can be picked up by anybody, as openSUSE Connect is open source, written in PHP, and we even have documentation mentioning, among other things, how to work on it. We can of course just let it live as it is and use it for membership and elections, for which it works well. But it looks like my survey got people at least a little bit interested – for example, victorhck submitted a logo proposal for openSUSE Connect! So maybe we will get some other contributors as well. And let’s see how I will spend my next Hackweek.

The recruiters team announced a few months ago that they decided not to use the recruiting webapp any more, and to move back to the txt quizzes instead. Additionally, the webapp started showing random Ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There were people still working on it though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiters’ decision to move back to the quizzes. First of all, I used this webapp as a mentor a lot from the very first moment it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizzes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back, etc. It was hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find the most recent one (or the previous one to compare answers), and the mentee had to dig through IRC logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting his answers in a central place, and I could easily leave comments there. But it had a bunch of issues, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or to see the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much – I just wanted to press “Approve”! When I decided to start filing bugs, I was surprised to find that all my UI complaints had already been reported; clearly I was not alone in this world.
In short, cool idea but annoying UI. That was not the problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This will allow me to write my comments below their answers in a different font/color, so I can have async communication with them. I was present during the recruitment interview session of my last mentee Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank Joachim Bartosik a lot for his work on the webapp and the interesting ideas he put into it (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty, who mentored Joachim in creating the recruiting webapp as a GSoC project and helped deploy it to the infra servers. I am kind of sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There was some discussion regarding the webapp during the Gentoo Miniconf; I hope it doesn’t sink.

So we use roughly 1/3rd the memory to get the same things done (fileserver),
and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since php is singlethreaded the performance is pretty much CPU-bound;
and as we have a small enough dataset it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: For the same workload on the same hardware we're seeing substantial improvements,
between a few percent and roughly three times the throughput, for IO-bound as well as for CPU-bound tasks.
The memory use goes down for most workloads while still getting the exact same results, only a lot faster.

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.

Apparently it’s been a while since my last blog post. That, however, means I’ve been busy on the coding side instead, which is probably what you prefer, I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object-oriented, bash-completion friendly or man-page friendly, and most importantly, it did not use any standard argument parsing library (because there was no argparse module back then and optparse was about to be deprecated).

Modularity

Equo subcommands are just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands now sport new arguments (have a look at “equo match” for example).

Man pages

All the equo subcommands are provided with a man page, which is available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself and automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method called “list bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea to place tests directly in the subcommand module code.
Testing! Please install entropy 149 and play with it, try to break it and report bugs!

Over the past few weeks, I’ve been designing a basic site (in WordPress) for a new client. This client needs some embedded FLVs on the site, and doesn’t want them (for good reason) to be directly linked to YouTube. As such, and seeing as I didn’t want to make the client write the HTML for embedding a flash video, I installed a very simple FLV plugin called WP OS FLV.

The plugin worked exactly as I had hoped it would, by cleanly showing the FLV with just a few basic options. However, I noticed that the pages with FLVs embedded in them using the plugin were significantly slower to load than were pages without FLVs. Doing some fun experimentation with cURL, I found that those pages had some external calls on them. Hmmmmmm, now what would the plugin need from an external site? Doing a little more digging, I found the following line hardcoded twice in the plugin’s wposflv.php file:
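It boils down to an absolute reference to the player SWF hosted on flv-player.net, something like the following (the exact path and surrounding markup may differ between plugin versions):

http://flv-player.net/medias/player_flv_maxi.swf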

That line means that if the site flv-player.net is down or slow, the page with the FLV plugin on your blog will also be slow. In order to fix this problem, you simply need to download the player_flv_maxi.swf file from that site, upload it somewhere on your server, and edit the line to call the location on your server instead. For instance, if your site is my-site.com, and you put the SWF file in a directory called static, you would change the absolute URL to:
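Using the example names, that would be:

http://my-site.com/static/player_flv_maxi.swf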

I'm sitting here on the first day of the Qt Developer Days in Berlin and am pretty
impressed by the event so far -- the organizers have done an excellent job
and everything feels very, very smooth here. Congratulations for that; I have
first-hand experience with organizing a workshop and can imagine the huge pile of
work which these people have invested into making it rock. Well done, I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP
e-mail client. A lot of work has found its way in since the last release; Trojitá now supports
almost all of the useful IMAP extensions, including QRESYNC and
CONDSTORE for blazingly fast mailbox synchronization and
CONTEXT=SEARCH for live-updated search results, to name just a few.
There've also been roughly 666 tons of bugfixes, optimizations, new features and
tweaks. Trojitá is finally showing evidence of getting ready for being usable as
a regular e-mail client, and it's exciting to see that process after 6+ years of
working on that in my spare time. People are taking part in the development
process; there has been a series of commits from Thomas Lübking of the kwin fame
dealing with tricky QWidget issues, for example -- and it's great to see many
usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree
(the thesis was about
Trojitá, of course), I started a new job (this time using Qt) and
implemented quite some interesting stuff with Qt -- if you have always wondered
how to integrate Ragel, a parser generator, with qmake, stay tuned for future
posts.

Anyway, in case you are interested in using an extremely fast e-mail client
implemented in pure Qt, give Trojitá a try. If you'd like to chat about it,
feel free to drop me a mail or just stop me
anywhere. We're always looking for contributors, so if you hit some annoying
behavior, please do chime in and start hacking.

I’ve written a small script that I call selocal which manages locally needed SELinux rules. It allows me to add or remove SELinux rules from the command line and have them loaded up without needing to edit a .te file and building the .pp file manually. If you are interested, you can download it from my github location.

Its usage is as follows:

You can add a rule to the policy with selocal -a “rule”

You can list the current rules with selocal -l

You can remove entries by referring to their number (in the listing output), like selocal -d 19.

You can ask it to build (-b) and load (-L) the policy when you think it is appropriate

It even supports multiple modules in case you don’t want to have all local rules in a single module set.

So when I wanted to give a presentation on Tor, I had to allow the torbrowser to connect to an unreserved port. The torbrowser runs in the mozilla domain, so all I did was:
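Something along these lines did the trick (the exact interface name may differ depending on the policy version):

selocal -a "corenet_tcp_connect_all_unreserved_ports(mozilla_t)" -b -L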

If you read my blog on a regular basis, you will know that I traveled through Russia, Mongolia and China last year. If there's one big thing I learned on this trip, it's this: the English language is - on a worldwide scale - much less prevalent than I thought. Call me a fool, but I just wasn't aware of that. I thought, okay, maybe many people won't understand English, but at least I'll always be able to find someone nearby who's able to translate. That just wasn't the case. I spent days in cities where I met nobody who shared any language knowledge with me.

I'm pretty sure that translation technologies will become really important in the not-so-distant future. For many people, they already are. I've learned about the opinions of Swedish initiatives without any knowledge of Swedish just by using Google translate. Google Chrome and the free variant Chromium directly show the option to send something through Google translate if they detect that it's not in your language (although that wasn't working with Mongolian when I was there last year). I was in hotels where the staff pointed me to their PC with an instance of Yandex translate or Baidu translate where I should type in my questions in English (Yandex is something like the Russian Google, Baidu is something like the Chinese Google). Despite all the shortcomings of today's translation services, people use them to circumvent language barriers.

Young people in those countries are often learning English today, but it's a matter of fact that this will only very slowly translate into a real change. Lots of barriers exist. Many countries have their own language and another language that's used as the "international communication language" that's not English. For example, you'll probably get along pretty well in most post-Soviet countries with Russian, no matter whether the countries have their own native language or not. This also happens in single countries with more than one language. People have their native language and learn the country's language as their first foreign language.
Some people think their language is especially important and this stops the adoption of English (France is especially known for that). Some people have the strange idea that supporting English language knowledge is equivalent to supporting US politics and therefore oppose it.

Yes, one can try to learn more languages (I'm trying it with Mandarin myself and if I ever feel I can try a fourth language it'll probably be Russian), but if you look at the world scale, it's a losing battle. To get along worldwide, you'd probably have to learn at least five languages. If you are fluent in English, Mandarin, Russian, Arabic and Spanish, you're probably quite good, but I doubt there are many people on this planet able to do that. If you're one of them, you have my deepest respect (please leave a comment if you are).

If you'd pick two completely random people of the world population, it's quite likely that they don't share a common language.

I see no reason in principle why technology can't solve that. We're probably far away from a Star Trek-like universal translator and sadly evolution hasn't brought us the Babelfish yet, but I'm pretty confident that we will see rapid improvements in this area and that will change a lot. This may sound somewhat pathetic, but I think this could be a crucial issue in fixing some of the big problems of our world - hate, racism, war. It's just plain simple: If you have friends in China, you're less likely to think that "the Chinese people are bad" (I'm using this example because I feel this thought is especially prevalent amongst the left-alternative people who would never admit any racist thoughts - but that's probably a topic for a blog entry on its own). If you have friends in Iran, you're less likely to support your country fighting a war against Iran. But having friends requires being able to communicate with them. Being able to have friends without the necessity of a common language is a fascinating thought to me.

I’m not sure if you’re following the development of this particular package in Gentoo, but with some discussion, quite a few developers reached a consensus last week that the slotted dev-libs/boost that we’ve had for the past couple of years had to go, replaced with a single-slot package like we have for most other libraries.

The main reason for this is that the previous slotting was not really doing what the implementers expected it to do — the idea for many is that you can always depend on whatever highest version of Boost you support, and if you don’t support the latest, no problem, you’ll get an older one. Unfortunately, this clashes with the fact that only the newest version of Boost is supported by upstream with modern configurations, so it happens that a new C library, or a new compiler, can (and do) make older versions non-buildable.

That's what happened with the new GLIBC 2.16, which is partially described in the previous post of the same series, and lately summarized: there’s no way to rebuild boost-1.49 with the new glibc (the “patch” that could be used would change the API, making it similar to boost-1.50 which ..), but since I did report build failures with 1.50, people “fixed” them by depending on an older version… which is now not installable. D’oh!

So what did I do to sort this out? We dropped the slot altogether. Now all Boost versions install as slot zero and each replaces the others. This makes it much easier for both developers and users, as you know that the one version you have installed is the one you’re building against, instead of “whatever has been eselected”, “whatever was installed last” or “whatever is the first one that the upstream build system finds”, which was the case before — usually a mix of all three.

But this wasn’t enough, because unfortunately libraries, headers and tools were all slotted, so they all had different names based on the version. This was handled in the new 1.52 release, which I unmasked today, by going back to the default install layout that Boost uses for Unix: the system layout. This is designed to allow one and only one version of each Boost library on the system, and provides neither a version nor a variant suffix. This meant we needed another change.

Before going back to the system layout, each Boost version installed two sets of libraries: one that was multithread-safe and one that wasn’t. Software using threads would have to link to the mt variant, while software not using threads could link to the (theoretically lower-overhead) single-thread variant, which happened to be the default. Unfortunately, this also meant that a ton of software out there, even when using threads, simply linked to the Boost library it wanted without caring about the variant. Oopsie.

Even worse, it was very well possible, and indeed was the case for Blender, that both variants were brought in, in the process’s address space, possibly causing extremely hard to debug issues due to symbol collisions (which I know, unfortunately, very well).

An easy way to see (using older versions of the Boost ebuilds) whether your program is linking to the wrong variant is to check whether it links to libboost_threads-mt and, at the same time, to some other library such as libboost_system (the non-mt variant). Since our very pleasant former maintainer decided to link the mt variant of libboost_threads to the non-mt one, quite a few ways to check for multithreaded Boost simply … failed.

Now the decision on whether to build threadsafe or not is done through a USE flag, like most other ebuilds do, and since only one variant is installed, everybody gets, by default and in most cases, the multithread-safe version, and all is good. Packages requiring threads might already want to start using dev-libs/boost[threads(+)] to make sure that they are not installed with a non-threadsafe version of Boost, but there are symlinks in place right now so that even if they are looking for the mt variant they get the one installed version of Boost anyway (only with USE=threads of course).

One question that was raised was “how broken will people’s systems be, after upgrading from one Boost to another?” and the answer is “quite” … unless you’re using a modern enough Portage (the last few versions of the 2.1 series are okay, and most of the 2.2), which can use preserve-libs. In that case, it’ll just require you to run a single emerge command to get back on the new version, and if not, you’ll have to wait for revdep-rebuild to finish.
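Roughly, with a preserve-libs-capable Portage, the upgrade boils down to something like this (a sketch; the exact invocation is up to you):

emerge --oneshot dev-libs/boost
emerge --ask @preserved-rebuild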

And to make things sweeter, with this change, the time it takes for Boost to build is halved (4 minutes vs 8 on my laptop), while the final package is 30MB less (here at least), since only one set of libraries is installed instead of two — without counting the time and space you’d waste by having to install multiple boost versions together.

And for developers, this also means that you can forget about the ruddy boost-utils.eclass, since now everything is supposed to work without any trickery. A win-win situation, for once.

So, I started reading [The Definitive Guide to the Xen Hypervisor] (again), and I thought it would be fun to start with the example guest kernel provided by the author and extend it a bit (yes, there’s mini-os already in extras/, but I wanted to struggle with all the peculiarities of extended inline asm, x86_64 asm, linker scripts, C macros etc. myself).

After doing some reading about x86_64 asm, I ‘ported’ the example kernel to 64bit, and gave it a try. And of course, it crashed. While I was responsible for the first couple of crashes (for which btw, I can write at least 2-3 additional blog posts ), I got stuck with this error:

when trying to boot the example kernel as a domU (under xen-unstable).

0x2000 is the address where Xen maps the hypercall page inside the domU’s address space. The guest crashed when trying to issue any hypercall (HYPERCALL_console_io in this case). At first, I thought I had screwed up with the x86_64 extended inline asm used to perform the hypercall, so I checked how the hypercall macros were implemented both in the Linux kernel (wow btw, it’s pretty scary) and in the mini-os kernel. But I got the same crash with both of them.

After some more debugging, I made it work. In my Makefile, I used gcc to link all of the object files into the guest kernel. When I switched to ld, it worked. Apparently, when using gcc to link object files, it calls the linker with a lot of options you might not want. Invoking gcc with the -v option will reveal that gcc calls collect2 (a wrapper around the linker), which then calls ld with various options (certainly not only the ones I was passing to my ‘linker’). One of them was --build-id, which generates a .note.gnu.build-id ELF note section in the output file, containing a hash to identify the linked file.

Apparently, this note changes the layout of the resulting ELF file, and ‘shifts’ the .text section to 0x30 from 0x0, and hypercall_page ends up at 0x2030 instead of 0x2000. Thus, when I ‘called’ into the hypercall page, I ended up at some arbitrary location instead of the start of the specific hypercall handler I was going for. But it took me quite some time of debugging before I did an objdump -dS [kernel] (and objdump -x [kernel]), and found out what was going on.

The code from bootstrap.x86_64.S looks like this (notice the .org 0x2000 before the hypercall_page global symbol):
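Roughly, the relevant part of it is laid out like this (a sketch; the exact directives and surrounding code differ in the real file):

        .text
        /* ... boot and setup code ... */

        .org 0x2000
        .globl hypercall_page
hypercall_page:
        .org 0x3000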

One solution, mentioned earlier, is to switch to ld (which probably makes more sense) instead of using gcc. The other solution is to tweak the ELF file layout through the linker script (actually, this is pretty much what the Linux kernel does to work around this):
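A sketch of what such a linker script tweak can look like (assuming GNU ld syntax; like the kernel, it collects the note sections into their own output section so they cannot shift .text):

SECTIONS
{
        . = 0x0;
        .text : {
                *(.text)
        }
        .notes : {
                *(.note.*)
        }
        /* ... data, bss and the rest ... */
}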

You might remember that many years ago (actually, it’s just shy of four years ago) I wrote a post about a disconcerting label I found on the box of a pair of Shure earphones I got, to try to sleep better during the night when noise was coming from the outside. This was a Californian notice about the danger of carcinogenic chemicals, most likely related to the PVC in the earphones’ cord — which didn’t even last six full months! I had to trash the extremely expensive pair of earphones because the cables ruptured behind my ears; the stupid plastic was just too rigid, I’m afraid.

Well, now that I’ve been in California for a while, I was expecting to see many more similar notices, but at least here in Hermosa Beach where I’m based, I haven’t seen one … until Starbucks was forced to put one up. I actually did find out something more about those notices before, as Amazon has a page which is linked from your order when you’re shipping something to California that should have the label attached.

Now the title of this post is obviously inflammatory, I know that and it’s half-intended, but my problem with all of this is that when I wrote about that stupid label, I didn’t really know much about the whole thing — I was told right away in those comments that the labels are extremely common in California; a few months ago I finally found out that it was a popular ballot measure that actually put the law into place… and now I feel like something’s extremely wrong with this place.

Really, I feel this is one of the stupidest warnings people can put on things, and somehow, for once, it makes me feel better thinking that in Italy referendums are only used to vote laws off, not in…

For those of you who missed my previous updates, we organised a PulseAudio miniconference in Copenhagen, Denmark last week. The organisation of all this was spearheaded by ALSA and PulseAudio hacker David Henningsson. The good folks organising the Ubuntu Developer Summit / Linaro Connect were kind enough to allow us to colocate this event. A big thanks to both of them for making this possible!

The room where the first PulseAudio conference took place

The conference was attended by the four current active PulseAudio developers: Colin Guthrie, Tanu Kaskinen, David Henningsson, and myself. We were joined by long-time contributors Janos Kovacs and Jaska Uimonen from Intel, Luke Yelavich, Conor Curran and Michał Sawicz.

We started the conference at around 9:30 am on November 2nd, and actually managed to keep to the final schedule(!), so I’m going to break this report down into sub-topics for each item which will hopefully make for easier reading than an essay. I’ve also put up some photos from the conference on the Google+ event.

Mission and Vision

We started off with a broad topic — what each of our personal visions/goals for the project are. Interestingly, two main themes emerged: having the most seamless desktop user experience possible, and making sure we are well-suited to the embedded world.

Most of us expressed interest in making sure that users of various desktops had a smooth, hassle-free audio experience. In the ideal case, they would never need to find out what PulseAudio is!

Orthogonally, a number of us are also very interested in making PulseAudio a strong contender in the embedded space (mobile phones, tablets, set top boxes, cars, and so forth). While we already find PulseAudio being used in some of these, there are areas where we can do better (more in later topics).

There was some reservation expressed about other, less-used features such as network playback being ignored because of this focus. The conclusion after some discussion was that this would not be the case, as a number of embedded use-cases do make use of these and other “fringe” features.

Increasing patch bandwidth

Contributors to PulseAudio will be aware that our patch queue has been growing for the last few months due to lack of developer time. We discussed several ways to deal with this problem, the most promising of which was a periodic triage meeting.

We will be setting up a rotating schedule where each of us will organise a meeting every 2 weeks (the period might change as we implement things) where we can go over outstanding patches and hopefully clear backlog. Colin has agreed to set up the first of these.

Routing infrastructure

Next on the agenda was a presentation by Janos Kovacs about the work they’ve been doing at Intel on enhancing PulseAudio’s routing infrastructure. These enhancements are being built from the perspective of IVI systems (i.e., cars), which typically have fairly complex use cases involving multiple concurrent devices and users. The slides for the talk will be put up here shortly (edit: slides are now available).

The talk was mingled with a Q&A-style discussion with Janos and Jaska. The first item of discussion was consolidating Colin’s priority-based routing ideas into the proposed infrastructure. The general thinking was that the ideas were broadly compatible and should be implementable in the new model.

There was also some discussion on merging the module-combine-sink functionality into PulseAudio’s core, in order to make 1:N routing easier. Some alternatives using the module-filter-* modules were proposed. Further discussion will likely be required before this is resolved.

The next steps for this work are for Jaska and Janos to break up the code into smaller logical bits so that we can start to review the concepts and code in detail and work towards eventually merging as much as makes sense upstream.

Low latency

This session was taken up against the background of improving latency for games on the desktop (although it does have other applications). The indicated required latency for games was given as 16 ms (corresponding to a frame rate of 60 fps). A number of ideas to deal with the problem were brought up.

Firstly, it was suggested that the maxlength buffer attribute when setting up streams could be used to signal a hard limit on stream latency — the client signals that it will prefer an underrun, over a latency above maxlength.

Another long-standing item was to investigate the cause of underruns as we lower latency on the stream — David has already begun taking this up on the LKML.

Finally, another long-standing issue is the buffer attribute adjustment done during stream setup. This is not very well-suited to low-latency applications. David and I will be looking at this in coming days.

Merging per-user and system modes

Tanu led the topic of finding a way to deal with use-cases such as mpd or multi-user systems, where access to the PulseAudio daemon of the active user by another user might be desired. Multiple suggestions were put forward, though a definite conclusion was not reached, as further thought is required.

Tanu’s suggestion was a split between a per-user daemon to manage tasks such as per-user configuration, and a system-wide daemon to manage the actual audio resources. The rationale being that the hardware itself is a common resource and could be handled by a non-user-specific daemon instance. This approach has the advantage of having a single entity in charge of the hardware, which keeps a part of the implementation simpler. The disadvantage is that we will either sacrifice security (arbitrary users can “eavesdrop” using the machine’s mic), or security infrastructure will need to be added to decide what users are allowed what access.

I suggested that since these are broadly fringe use-cases, we should document how users can configure the system by hand for these purposes, the crux of the argument being that our architecture should be dictated by the main use-cases, and not the ancillary ones. The disadvantage of this approach is, of course, that configuration is harder for the minority that wishes multi-user access to the hardware.

Colin suggested a mechanism for users to be able to request access from an “active” PulseAudio daemon, which could trigger approval by the corresponding “active” user. The communication mechanism could be the D-Bus system bus between user daemons, and Ștefan Săftescu’s Google Summer of Code work to allow desktop notifications to be triggered from PulseAudio could be used to request authorisation.

David suggested that we could use the per-user/system-wide split, modified somewhat to introduce the concept of a “system-wide” card. This would be a device that is configured as being available to the whole system, and thus explicitly marked as not having any privacy guarantees.

In both the above cases, discussion continued about deciding how the access control would be handled, and this remains open.

We will be continuing to look at this problem until consensus emerges.

Improving (laptop) surround sound

The next topic was dealing with laptops that have a built-in 2.1 channel set-up. The background of this is that there are a number of laptops with stereo speakers and a subwoofer. These are usually used as stereo devices, with the subwoofer implicitly being fed data by the audio controller in some hardware-dependent way.

The possibility of exposing this hardware more accurately was discussed. Some investigation is required to see how things are currently exposed for various hardware (my MacBook Pro exposes the subwoofer as a surround control, for example). We need to deal with correctly exposing the hardware at the ALSA layer, and then using that correctly in PulseAudio profiles.

This led to a discussion of how we could handle profiles for these. Ideally, we would have a stereo profile with the hardware dealing with upmixing, and a 2.1 profile that would be automatically triggered when a stream with an LFE channel was presented. This is a general problem while dealing with surround output on HDMI as well, and needs further thought as it complicates routing.

Testing

I gave a rousing speech about writing more tests using some of the new improvements to our testing framework. Much cheering and acknowledgement ensued.

Ed.: some literary liberties might have been taken in this section

Unified cross-distribution ALSA configuration

I missed a large part of this unfortunately, but the crux of the discussion was around unifying cross-distribution sound configuration for those who wish to disable PulseAudio.

Base volumes

The next topic we took up was base volumes, and whether they are useful to most end users. For those unfamiliar with the concept, we sometimes see sinks/sources which support volume controls going to > 0 dB (0 dB being the no-attenuation point). We provide the maximum allowed gain in ALSA as the maximum volume, and suggest that UIs show a marker for the base volume.

It was felt that this concept was irrelevant, and probably confusing to most end users, and that we suggest that UIs do not show this information any more.

Relatedly, it was decided that having a per-port maximum volume configuration would be useful, so as to allow users to deal with hardware where the output might get too loud.

Devices with dynamic capabilities (HDMI)

Our next topic of discussion was finding a way to deal with devices such as those HDMI ports where the capabilities of the device could change at run time (for example, when you plug out a monitor and plug in a home theater receiver).

A few ideas to deal with this were discussed, and the best one seemed to be David’s proposal to always have a separate card for each HDMI device. The addition of dynamic profiles could then be exploited to only make profiles available when an actual device is plugged in (and conversely removed when the device is plugged out).

Splitting of configuration

It was suggested that we could split our current configuration files into three categories: core, policy and hardware adaptation. This was met with approval all-around, and the pre-existing ability to read configuration from subdirectories could be reused.

Another feature that was desired was the ability to ship multiple configurations for different hardware adaptations with a single package and have the correct one selected based on the hardware being run on. We did not know of a standard, architecture-independent way to determine the hardware adaptation, so it was felt that the first step toward solving this problem would be to find or create such a mechanism. This could then be used either to set up the configuration correctly in early boot, or by PulseAudio to do runtime configuration selection.

Relatedly, moving all distributed configuration to /usr/share/..., with overrides in /etc/pulse/... and $HOME, was suggested.

Better drain/underrun reporting

David volunteered to implement a per-sink-input timer for accurately determining when drain was completed, rather than waiting for the period of the entire buffer as we currently do. Unsurprisingly, no objections were raised to this solution to the long-standing issue.

In a similar vein, redefining the underflow event to mean a real device underflow (rather than the client-side buffer running empty) was suggested. After some discussion, we agreed that a separate event for device underruns would likely be better.

Beer

We called it a day at this point and dispersed beer-wards.

Our valiant attendees after a day of plotting the future of PulseAudio

User experience

David very kindly invited us to spend a day after the conference hacking at his house in Lund, Sweden, just a short hop away from Copenhagen. We spent a short while in the morning talking about one last item on the agenda — helping to build a more seamless user experience. The idea was to figure out some tools to help users with problems quickly converge on what problem they might be facing (or help developers do the same). We looked at the Ubuntu apport audio debugging tool that David has written, and will try to adopt it for more general use across distributions.

Hacking

The rest of the day was spent in more discussions on topics from the previous day, poring over code for some specific problems, and rolling out the first release candidate for the upcoming 3.0 release.

And cut!

I am very happy that this conference happened, and am looking forward to being able to do it again next year. As you can see from the length of this post, there are a lot of things happening in this part of the stack, and lots more yet to come. It was excellent meeting all the fellow PulseAudio hackers, and my thanks to all of them for making it.

Finally, I wouldn’t be sitting here writing this report without support from Collabora, who sponsored my travel to the conference, so it’s fitting that I end this with a shout-out to them. :)

You might remember that in our team (openSUSE Boosters), we created openSUSE Connect some time ago. It was meant as a replacement for users.opensuse.org, which nobody knew about and nobody used. We hoped that it would attract more users and be a more user-friendly way to manage personal data. Apart from that, we wanted to include more interesting widgets so it could become your landing page for all your efforts in the openSUSE project. To that end we created a Bugzilla widget, a FATE widget, a build status widget and some more. We hoped that it would make a difference and help people, and that they would enjoy using the new site. During this summer my GSoC student created an amazing Karma widget as well to make it more fun. And as Connect has now been running for some time, it’s time to collect some feedback. Did it work? Do you like it? Or did it become just a wasteland? Do you think such a site makes sense?

I’m not promising anything right now, but it would be nice to know what our users think about it, whether it makes sense to put some effort into it, and how much and where to concentrate that effort. So please fill in this little survey and let me know your opinion. I’ll publish the results later.

So after my descriptive post you might be wondering what’s so complex or time-consuming about running a tinderbox. That’s because I haven’t spoken about the actual manual labor that goes into handling the tinderbox.

The major work is of course scouring the logs to make sure that I file only valid bugs (and often enough that’s not enough, as things hide behind the surface), but there are quite a number of tasks that are not related to the bug filing, at least not directly.

First of all, there is the matter of making sure that the packages are available for installation. This used to be more complex, but luckily thanks to REQUIRED_USE and USE deps, this task is slightly easier than before. The tinderbox.py script (that generates the list of visible packages that need to be tested) also generates a list of use conflicts, dependencies etc. This list I have to look at manually, and then update the package.use file so that they are satisfied. If their dependencies or REQUIRED_USE are not satisfied, the package is not visible, which means it won’t be tested.

This sounds extremely easy, but there are quite a few situations, which I discussed previously, where there is no real way to satisfy the requirements for all the packages in the tree. In particular, there are situations where you can’t enable the same USE flag all over the tree — for instance, if you do enable icu for libxml2, you can’t enable it for qt-webkit (well, you can, but you have to disable gstreamer then, which is required by other packages). Handling all the conflicting requirements takes a bit of trial and error.
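To give an idea, the package.use entries for that particular example end up looking something like this (a hypothetical sketch; the exact atoms and flags depend on the tree at any given time):

dev-libs/libxml2 icu
# icu on qt-webkit would force gstreamer off, which other packages need
x11-libs/qt-webkit -icu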

Then there is a much worse problem and that is with tests that can get stuck, so that things like this happen:

And I’ve got to keep growing the list of packages whose tests are unreliable — I wonder if the maintainers ever try running their tests, sometimes.

This task used to be easier because the tinderbox supports sending out tweets or dents through bti so that it would tell me what it was doing — unfortunately identi.ca kept marking the tinderbox’s account as spam, and while they did unlock it three times, it meant I had to ask support to do so every other week. I grew tired of that and stopped caring about it. Unfortunately, that means I have to connect to the instance(s) from time to time to make sure they are still crunching.

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i set up some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
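for reference, a jackd invocation for those settings looks roughly like this (the device name is a guess; use whatever name ALSA gives the KA6 on your system):

jackd -d alsa -d hw:K6 -r 48000 -p 64 -n 3

that works out to 64 frames × 3 periods at 48000 Hz, i.e. the 4ms of buffering mentioned above.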

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected: with firewire, the CPU doesn’t have to do so much processing of the audio streams, since that work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, and that eats into valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

Hello there everybody, today’s episode is dedicated to setting up a tinderbox instance like mine, which builds and installs every visible package in the tree, runs its tests, and so on.

So the first step is to have a system on which to run the tinderbox. A virtual system is much preferred, since the tinderbox can easily install very insecure code, although nothing prevents you from running it straight on the metal. My choice for this, after Tiziano pointed me in that direction, was to get LXC to handle it, as a chroot on steroids (the original implementation used chroot and was much less reliable).

Now there are a number of degrees you could be running the tinderbox at; most of the basics are designed to work with almost every package in the system broken — there are only a few packages that are needed for this system to work, here’s my world file on the two tinderboxes:

But let’s do stuff in order. What do I do when I run the tinderbox? I connect over SSH via IPv6 – the tinderbox has very limited Internet connectivity, as everything is proxied by a Squid instance, like I described in this two-year-old post – directly as root unfortunately (but only with key auth). Then I either start or reconnect to a screen instance, which is where the tinderbox is running (or will be running).

The tinderbox’s scripts are on git and are written partially by me and partially by Zac (following my harassment for the most part, and he’s done a terrific job). The key script is tinderbox-continuous.sh, which simply keeps running the tinderbox on 200 packages at a time, either ad infinitum or going through a file given as a parameter (this way there is an emerge --sync from time to time so that the tree doesn’t get stale). There is also a fetch-reverse-deps.sh which is used to, as the name says, fetch the reverse dependencies of a given package, and which pairs with the continuous script above when I do a targeted run.

On the configuration side, /etc/portage/make.conf has to refer to /root/flameeyes-tinderbox/tinderbox.make.conf, which comes from the repository and sets up features, verbosity levels, and the fetch/resume commands to use curl. These are also set up so that if there is a TINDERBOX_PROXY environment variable set, then they’ll go through it. Setting of TINDERBOX_PROXY and a couple more variables is done in /etc/portage/make.tinderbox.private.conf; you can use it for setting GENTOO_MIRRORS to something that is easily and quickly reachable, as there’s a lot to download!
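One way to wire this up is through Portage’s source directive in make.conf, roughly like this (a sketch, using the paths mentioned above):

source /root/flameeyes-tinderbox/tinderbox.make.conf
source /etc/portage/make.tinderbox.private.conf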

But what does this get us? Just a bunch of files in /var/log/portage/build. How do I analyze them? Originally I did this by using grep within Emacs and looked at them file by file. Since I was opening the bugs with Firefox running on the same system, I could very easily attach the logs. This is no longer possible, so that’s why I wrote a log collector, which is also available, and which is designed in two components: a script that receives (over IPv6 only, and within the virtual network of the host) the log being sent with netcat and tar, removes colour escape sequences, and writes it down as an HTML file (in a way that Chrome does not explode on) on Amazon’s S3, also counting how many of the known warnings are found, and whether the build, or tests, failed — this data is saved in SimpleDB.

Then there is a simple sinatra-based interface that can be run on any computer – I run it locally on my laptop – which fetches the data from SimpleDB and displays it in a table with links to the build logs. This also has a link to a pre-filled bug template (it uses a local file where emerge --info is saved as comment #0).

Okay, so this is the general gist of it; if I have some more time this weekend I’ll draw some cute diagram for it, and you can all tell me that it’s overcomplicated and that if I did it in $whatever it would have been much easier, but at the same time you won’t be providing any replacement, or if you do start working on it, you’ll spend months designing the schema of the database, with a target of next year, which will not be met. I’ve been there.

(I’d like to first give a global shout out to my first Crossfit home, The Athlete Lab)

Since I’m in Prague for a month, I became a member of Crossfit Praha instead of just being a drop-in client. The gym is quite small, but centrally located in Prague. The lifting days are separate from the normal days (probably unless you are a trusted regular). The premise is: you show up during a block of time, warm up on your own, proceed with the WOD, then cool down on your own, which is pretty standard across gyms from what I can tell, the exception being that everyone starts the WOD at their own time (not at structured times). Now I’ve put my money where my mouth is and have to keep a good diet, not drink so much beer, etc. to be able to function the next day(s) after a WOD. “Tomorrow will not be any easier”

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (which is version 1.1). That change is that the background colour is now teal-like when you start typing your password in order to disable slock, and get back to using your system. This change came from a dual-colour patch that was added to version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back to the way that it was pre-1.1. There are a couple ways to accomplish this goal. The first way is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly. The config.mk file looks like this:

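(The following is reconstructed from the slock 1.1 sources rather than copied verbatim, so double-check it against your copy; the relevant part is the CPPFLAGS line with the two colour defines, where #005577 is the teal colour.)

# flags
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

To get the old all-black behaviour, change COLOR2 to \"black\" as well,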
but note that you do not need the extra set of escaping backslashes when you are using the colour name instead of hex representation.

If you use Gentoo, though, and you’re already building each package from source, how can you make this change yet still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock will result in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers), was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. In order to take advantage of this great USE flag, you firstly need to have Portage build slock with the USE flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then you are free to edit the saved config.mk, which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After recompiling with the ‘savedconfig’ USE flag and the modifications of your choice, slock should exhibit the behaviour that you expect.
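Put together, the whole round trip is just two steps (use whatever editor you like; the saved file appears after the first build with the flag enabled):

${EDITOR:-nano} /etc/portage/savedconfig/x11-misc/slock-1.1
emerge --oneshot x11-misc/slock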

I guess it’s time for a new post on what’s the status with Gentoo Linux right now. First of all, the tinderbox is munching as I write. Things are going mostly smooth but there are still hiccups due to some developers not accepting its bug reports because of the way logs are linked (as in, not attached).

Like last time that I wrote about it, four months ago, this is targeting GCC 4.7, GLIBC 2.16 (which is coming out of masking next week!) and GnuTLS 3. Unfortunately, there are a few (biggish) problems with this situation, mostly related to the Boost problem I noted back in July.

What happens is this:

you can’t use any version of boost older than 1.48 with GCC 4.7 or later;

you can’t use any version of boost older than 1.50 with GLIBC 2.16;

many packages don’t build properly with boost 1.50 and later;

a handful of packages require boost 1.46;

boost 1.50-r2 and later (in Gentoo) no longer support eselect boost, making most of the packages that use Boost not build at all.

This kind of screwup is a major setback, especially since Mike (understandably) won’t wait any more to unmask GLIBC 2.16 (he waited a month; the Boost maintainers had all the time to fix their act, which they didn’t — it’s now time somebody with common sense takes over). So the plan right now is for me and Tomáš to pick up the can of worms and un-slot Boost, quite soon. This is going to solve enough problems that we’ll all be very happy about it, as most of the automated checks for Boost will then work out of the box. It’s also going to reduce the disk space used by your install, although it might require you to rebuild some C++ packages; I’m sorry about that.

For what concerns GnuTLS, version 3.1.3 is going to hit unstable users at the same time as glibc-2.16, and hopefully the same will be true for stable when that happens. Unfortunately there are still a number of packages not fixed to work with gnutls, so if you see a package you use (with GnuTLS) in the tracker it’s time to jump on fixing it!

Speaking of GnuTLS, we’ve also had a smallish screwup this morning when libtasn1 version 3 also hit the tree unmasked — it wasn’t supposed to happen, and it’s now masked, as only GnuTLS 3 builds fine with it. Since upstream really doesn’t care about GnuTLS 2 at this point, I’m not interested in trying to get that to work nicely, and since I don’t see any urgency in pushing libtasn1 v3 as is, I’ll keep it masked until GNOME 3.6 (as gnome-keyring also does not build with that version, yet).

Markos has correctly noted that the QA team – i.e., me – is not maintaining the DevManual anymore. It is now a separate project, under QA (but I’d rather say it’s shared between QA and Recruiters), and the Git repository is now writable by any developer. Of course, if you play around with it on master without knowing what you’re doing, you’ll be terminated.

There’s also the need to convert the DevManual to something that makes sense. Right now it’s a bunch of files all called text.xml, which makes editing a nightmare. I did start working on that two years ago, but it’s tedious work and I don’t want to do it in my free time; I’d rather not have to do it while being paid for it either, really. If somebody feels like they can handle the conversion, I’d actually consider paying somebody to do that job. How much? I’d say around $50. The desirable format is something that doesn’t make a person feel like taking their eyes out when trying to edit it with Emacs (and vim, if you feel generous): my branch used DocBook 5, which I rather fancy, as I’ve used it for Autotools Mythbuster, but RST or Sphinx would probably be okay as well, as long as no formatting is lost along the way. Update: Ben points out he already volunteered to convert it to RST; I’ll wait for that before saying anything more.

Also, we’re looking for a new maintainer for ICU (and I’m pressing Davide to take the spot), as things like the bump to 50 should have been handled more carefully. Especially now that it appears to be breaking a quarter of its dependencies when using GCC 4.7 — both the API and ABI of the library change entirely depending on whether you’re using GCC 4.6 or 4.7, as it leverages C++11 support in the latter. I’m afraid this is just going to be the first of a series of libraries making this kind of change, and we’re all going to suffer through it.

The kind folks over at Element 14 emailed me last week asking if I’d like to review the new Raspberry Pi 512MB edition and the Adafruit Budget Pack. Whilst I already have a rather large collection of Pi, I thought it’d be fun to write a review since it’s not something I’ve really done before.

So, yesterday the kit arrived and I got a chance today to unpack it and have a play around. The kit doesn’t come with a Raspberry Pi; you have to buy that separately. Here’s a breakdown of what the kit includes:

Pi box (a clear acrylic case for the Pi)

Cobbler and GPIO ribbon cable (breakout board to split the GPIO cable out onto a breadboard)

Half-size breadboard with a bundle of breadboarding wires

4GB microSD card with SD adaptor

5V/1A USB power supply and cable

Firstly, the Pi box. The clear plastic looks pretty awesome once it’s assembled, and the laser-engraved labels are an excellent touch. However, I tend to swap my Pis in and out of cases a lot, and assembling the case is kinda fiddly, so I think whichever Pi goes into this case will be staying there.

The USB power supply, cable and SD card: there isn’t really a whole lot to say about these, you need them to use your Pi. The power supply is supposedly specced to the hilt and rated a little high, at 5.25V, to account for the voltage drop caused by the cable. However, given that it’s got a US two-pin plug and I live in the UK (and don’t have the appropriate adaptor handy), I’ve not been able to test this out. That said, if Adafruit say it’s the case, I’m totally inclined to believe that it’s the bee’s knees like they say it is. The SD card is a class 4 Dane-Elec, which will work just fine, but probably isn’t the fastest (note: I haven’t benchmarked this, I’m going off my general experience using various cards in the Pi). That said, this is the budget pack, so if you want a fast, expensive card, you’re best buying that separately.

My favourite part of this whole kit is the Cobbler and the GPIO ribbon cable. Very often when I’m developing with the Pi I need to use a serial console for debugging, and plugging in the rather tiny cables that come with my USB serial adaptor into a Pi each time is somewhat of a pain. I must’ve done it a few hundred times now and I still don’t remember which cable goes to which pin. With the Cobbler I can just leave the serial adaptor connected to the breadboard and use the ribbon cable to connect the Pi of my choice: very nice!

Lastly, the 512MB Raspberry Pi itself. Personally, I think this is huge. 512MB of RAM on an ARM board with a fairly bitchin’ GPU for $35? Never before has “shut up and take my money” been so appropriate. As the foundation have said, hardware accelerated X is being worked on, which combined with a 512MB Pi should make for an impressively capable machine for the money in my opinion.

The hardware alone is useless without cool software though, and that’s the most amazing part. In the past twelve months the Raspberry Pi has rocketed into the mainstream and amassed a huge community of fans, many of whom are developing and showing off new and cool things for the Pi. If you’ve made something cool, I’d love to see it; tweet me a link and if I think it’s awesome I’ll retweet it and share it on.

Want to find more cool projects? Check out the Raspberry Pi and Element 14 forums, which are both very active and have much of this stuff being shared about.

email me "proof" you are running the latest stable -rc kernel at the moment.

send a link to some kernel patches you have done that were accepted into Linus's tree.

send a link to any Linux distro kernel tree where they keep their patches.

say why you want to do this type of thing, and what amount of time you can spend on it per week.

I'll close the application process in a week, on November 7, 2012. After that I'll contact everyone who applied and follow up with some questions through email. I'll also post something here to say what the response was like.

In my previous post about Munin I said that I was still working on making sure that the async support would reach Gentoo in a way that actually worked. Now, with version 2.0.7-r5, this is finally possible, and it’s documented on the Wiki for you all to use.

Unfortunately, while testing it, I found out that one of the boxes I’m monitoring, the office’s firewall, was going crazy if I used the async spooled node, reporting fan speeds way too low (87 RPM) or way too high (300K), with similar effects on the temperatures as well. This also seems to have caused the fans to go out of control and run constantly at 4K RPM instead of their usual 2K RPM. The kernel log showed that something was going wrong with the i2c access, which is what the sensors program uses.

I started looking into the sensors_ plugin that comes with Munin, which I already knew a bit as I had fixed it to match some of my systems before… and the problem is that for each box I was monitoring, it would have to execute sensors six times: twice for each graph (fan speed, temperature, voltages), once for config and once for fetching the data. And since there is no way to tell it to fetch just some of the data instead of all of it, it meant many transactions had to go over the i2c bus, all at the same time (when using munin async, the plugins are fetched in parallel). Understanding that the situation is next to unsolvable with the original code, and having one day “half off” at work, I decided to write a new plugin.

This time, instead of using the sensors program, I decided to just access /sys directly. This is quite a bit faster and lets you pinpoint exactly which data you need to fetch. In particular, during the config step there is no reason to fetch the actual value, which saves many i2c transactions just there. While at it, I also made it a multigraph plugin, instead of the old wildcard one, so that you only need to call it once, and it’ll prepare, serially, all the available graphs: in addition to those that were supported before, which included power – as it’s exposed by the CPUs on Excelsior – I added a few that I haven’t been able to try but are documented by the hwmon sysfs interface, namely current and humidity.

The new plugin is available on the contrib repository – which I haven’t found a decent way to package yet – as sensors/hwmon and is still written in Perl. It’s definitely faster, has fewer dependencies, and it’s definitely more reliable, at least on my firewall. Unfortunately, there is one feature that is missing: sensors would sometimes report an explicit label for temperature data… but that’s entirely handled in userland. Since we’re reading the data straight from the kernel, most of those labels are lost. For drivers that do expose those labels, such as coretemp, they are used, though.

Also, we lose the ability to ignore the values from the get-go, like I described before, but you can’t always win. You’ll have to ignore the graph data from the master instead, or you might want to find a way to tell the kernel not to report that data. The same is probably true for the names, although unfortunately…

[temp*_label] Should only be created if the driver has hints about what this temperature channel is being used for, and user-space doesn’t. In all other cases, the label is provided by user-space.

But I wouldn’t be surprised if it was possible to change that a tiny bit. Also, while it does forfeit some of the labeling that the sensors program does, I was able to make it nicer when anonymous data is present — it wasn’t so rare to have more than one temp1 value, as it was the first temperature channel for each of the (multiple) controllers, such as the Super I/O, ACPI Thermal Zone, and video card. My plugin outputs the controller and the channel name, instead of just the channel name.

After I completed and tested my hwmon plugin I moved on to re-rewrite the IPMI plugin. If you remember the saga, I first rewrote the original ipmi_ wildcard plugin as freeipmi_, including support for the same wildcards as ipmisensor_, so that instead of using OpenIPMI (and gawk), it would use FreeIPMI (and awk). The reason was that FreeIPMI can cache SDR information automatically, whereas OpenIPMI does have support for it, but you have to handle it manually. The new plugin was also designed to work for virtual nodes, akin to the various SNMP plugins, so that I could monitor some of the servers we have in production, where I can’t install Munin, or I can’t install FreeIPMI. I have replaced the original IPMI plugin, which I was never able to get working on any of my servers, with my version on Gentoo for Munin 2.0. I expect Munin 2.1 to come with the FreeIPMI-based plugin by default.

Unfortunately, like the sensors_ plugin, my plugin was calling the command six times per host — although this allows you to filter for the type of sensors you want to receive data for. And that became even worse when you have to monitor foreign virtual nodes. How did I solve that? I decided to rewrite it to be multigraph as well… but shell script was difficult to handle for that, which means that it’s now also written in Perl. The new freeipmi, non-wildcard, virtual-node-capable plugin is available in the same repository and directory as hwmon. My network switch thanks me for that.

Of course, unfortunately, the async node still does not support multiple hosts; that’s something for later on. In the meantime, though, it does spare me lots of grief, and I’m happy I took the time to work on these two plugins.

This problem seems to bite some of our hardened users a couple of times a year, so I thought I’d blog about it. If you are using grsec and PulseAudio, you must not enable CONFIG_GRKERNSEC_SYSFS_RESTRICT in your kernel, else autodetection of your cards will fail.

PulseAudio’s module-udev-detect needs to access /sys to discover what cards are available on the system, and that kernel option disallows this for anyone but root.
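If you are not sure whether your kernel has it enabled and you have /proc/config.gz available, a quick check looks like this:

zgrep GRKERNSEC_SYSFS_RESTRICT /proc/config.gz
# you want to see: # CONFIG_GRKERNSEC_SYSFS_RESTRICT is not set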

Just wanted to wish you a very happy 15th birthday, Noah! I hope that you have an awesome day, filled with fun and excitement, and surrounded by your friends, family, and loved ones. Those are the best elements of a special day, but maybe, just maybe, you’ll get some cool stuff too! I also can’t believe that it’s just one more year until you’ll have your license; bet you can’t wait!

Anyway, thinking about you, and hope that everything in your life is going superbly well.

David has now published a tentative schedule for the PulseAudio Mini-conference (I’m just going to call it PulseConf — so much easier on the tongue).

For the lazy, these are some of the topics we’ll be covering:

Vision and mission — where we are and where we want to be

Improving our patch review process

Routing infrastructure

Improving low latency behaviour

Revisiting system- and user-modes

Devices with dynamic capabilities

Improving surround sound behaviour

Separating configuration for hardware adaptation

Better drain/underrun reporting behaviour

Phew — and there are more topics that we probably will not have time to deal with!

For those of you who cannot attend, the Linaro Connect folks (who are graciously hosting us) are planning on running Google+ Hangouts for their sessions. Hopefully we should be able to do the same for our proceedings. Watch this space for details!

p.s.: A big thank you to my employer Collabora for sponsoring my travel to the conference.

I’m staying in Prague for another month. I’m working at a hostel as a bartender and getting my own private room and one or two meals per day. I have two consecutive days off per week and I plan on going on overnight trips to other cities in the Czech Republic. I’ve basically invalidated the rest of my planning for the next month or two, but I’ll figure that out later.

It’s been too long since I’ve cracked out the Jolt and spent the wee hours hacking away on something. So tonight, I picked up a device from my collection and did the inevitable:

More details soon to a tech blog near you. Image release date? Whenever I get around to neatening this up for widespread consumption. Mad props to the Queen for that extra hour tonight, really handy as I’m sure you’ll all agree.

I’ve been in Prague since Oct 17, 10 days now. I really like the city and hope to explore more of the country soon besides the capital. The city’s architecture is nice because it was virtually untouched during WW2. The culture is somewhat interesting because the country was communist until 1989; now the city is preserving what was left to decay during that era.

The food is good, the beer is good, and the city is cheap to live in. The weather is marginal, as you’d expect from a continental country, but that just reminds me of home anyway.

Coming from Italy to the US for the first time, it’s important to note a few very different customs. One of these is the already noted bigger portions, which can cause you to overeat if you don’t remember to ask for a box when you’re stuffed. Another big one is tipping. While it’s not unheard of in Italy as well, tipping is not as regular, or regulated, as it is here. From what I know, tips (mance) are not declared at all, even though they are supposed to be, since they are only possible on cash transactions, as there are no lines on the receipts where you can add a tip. Even though Wikipedia says that this requires a citation (maybe I should just take a picture of my next receipt when I go back to Italy).

The reason for this is that the service, i.e., the wage for the waiting staff, is usually included on the bill (usually, explicitly — some rare times it’s included in the price of the food itself, but that’s been rare until a few hours ago). The same is true, as far as I know, in England for the most part, while in France it seems like they are happy to get some.

Anyway, I have to say that up to now, my experience with tipping staff has actually been quite positive. It’s not like it changes much of how I go around — even in Italy I tend to always go to the same place, but I guess it helps that I tip well enough that the waitresses remember me, and they almost never bring me the menu nowadays, unless I ask for it (they already know what I’m getting).

A quick check of my past receipts shows that my average tip is around 22%, the exception being the breakfasts I get in the morning, where it is well over that (but simply because the bill would be less than eight dollars), at around 50%. This actually paid off, since I didn’t even have to know about the local diner’s “Breakfast Club” — the waiter brought me the card, already stamped twice, after seeing me one morning after the other; and the one time I forgot my card at the office, he stamped it twice on the next visit. Also, once I actually used the fidelity card, which got me free pancakes, they threw in the coffee with it (which is not supposed to be included).

I guess that for most of the waiting staff, having to survive on tips is far from easy. On the other hand, it feels like the waiting staff here cares more about the single customer’s experience (since their living depends on it) rather than the frenetic “serve as many customers as possible in the shortest time possible” that most Italian restaurants (as in, in Italy) focus on. Even places I like, where I’ve known the owner forever, don’t have the same friendly service.

Googling around, it seems like there is a lot of angst and grief around the concept of tipping – I was looking up how much to tip a cab driver, since today I went to Santa Monica to see The Oatmeal – and from one point of view I can understand why; on the other hand, it’s also easy to use tips as a way to make sure that you’re offered decent service. Like the cab driver who brought me back, who insisted that I get cash from an ATM, which meant I had to walk three blocks over and pay another $3 in fees; he got less than a 10% tip (if he had accepted the credit card, he would have gotten 20% — yes, that means waiting and paying the extra fee, but it’s still more than he got).

I guess one of the reasons why I’m not having much of a problem, as a customer, with tipping, is that Free Software works the same way. We’re for the most part not paid, or paid (as far as open source goes) a minimum wage, and what we do is compensated for the most part in tips… which are actually rarely enough to cover our side of the expenses — I could actually write quite a bit on the subject, as I recently found out how much it cost me, in power alone, to run Yamato and the tinderbox at my house.

So in all of this, I can actually say that it’s one of the things that I have really no problem whatsoever with, during my stay here.

A few days ago the box that was hosting our low-risk webapps died (barbet.gentoo.org). The services that were affected are get.gentoo.org, planet.gentoo.org, packages.gentoo.org, devmanual.gentoo.org, infra-status.gentoo.org and bouncer.gentoo.org. We quickly migrated the services to another box (brambling.gentoo.org). Brambling had issues with its RAM in the past, but we replaced it with new modules a couple of months ago; additionally, this machine was used for testing only. Unfortunately the machine started to malfunction as soon as those services were transferred there, which means that it has more hardware issues than just the RAM. The resulting error messages stopped when we disabled packages.gentoo.org temporarily.

The truth is that the packages webapp is old, unmaintained, uses deprecated interfaces and is a real pain to debug. In this year’s GSoC we had a really nice replacement by Slava Bacherikov written in Django. Additionally, we were recently given a Ganeti cluster hosted at OSUOSL. Thus we decided not to bring the old packages.gentoo.org instance back up, and instead create 4 virtual machines in our Ganeti cluster and migrate the above webapps there, along with the new and shiny packages.gentoo.org website. Furthermore, we will also deploy another GSoC webapp, gentoostats, and start providing our developers with virtual machines. We will not give public IPv4 addresses to the dev VMs though, but will probably use IPv6 only, so that developers can access them through woodpecker (the box where the developers have their shell accounts); this is still under discussion. We have already started working on the above, and we expect to be fully finished next week, with the new webapps live and rocking. Special thanks to Christian and Alec who took care of the migrations before and during the Gentoo Miniconf.

A couple of days ago, Tomas and I gave a presentation at the Gentoo Miniconf. The subject of the presentation was an overview of the current recruitment process, how we are performing compared to previous years, and what other ways there are for users to help us improve our beloved distribution. In this blog post I am gonna get into some details regarding our recruitment process that I did not have time to address during the presentation.

Recruitment Statistics from 2008 to 2012

Looking at the graph above, two things are obvious. First of all, the number of people who want to become developers has decreased every year. Second, we have a significant number of people who did not manage to become developers. Let me express my personal thoughts on these two things.

For the first one, my opinion is that these numbers are directly related to Gentoo’s reputation and its “infiltration” among power users. It is not a secret that Gentoo is not as popular as it used to be. Some people think this is because of the quality of our packages, or because of how frequently we cause headaches for our users. Other people think that the “I want to compile every bit of my Linux box” trend belongs to the past, and that people nowadays want to spend less time maintaining and updating their boxes and more time doing actual work. Either way, for the past few years we have been losing people, or to state it better, we are not “hiring” as many as we used to. Ignoring those who did not manage to become developers, we must admit that the absolute numbers are not in our favor. One may say that 16 developers for 2011-2012 is not bad at all, but we aim for the best, right? What bothers me the most is not the number of people we recruit, but that this number has been falling constantly for the last 5 years…

As for the second observation, we see that every year around 4-5 people give up and decide not to become developers after all. Why is that? The answer is obvious: our long, painful, exhausting recruitment process drives people away. From my experience, it takes about 2 months from the time your mentor opens your bug until a recruiter picks you up. This obviously kills someone’s motivation; they lose interest, get busy with other stuff, and eventually disappear. We tried to improve this process by creating a webapp two years ago, but it did not work out well, so we are now back to square one. We really can’t afford to lose developers because of our recruitment process. It is embarrassing, to say the least.

Again, is there anything that can be done? Definitely yes. I’d say we need an improved or a brand new web application that will focus on two things:

1) make the review process between mentor <-> recruit easier

2) make the final review process between recruit <-> recruiter an enjoyable learning process

Ideas are always welcome. Volunteers and practical solutions even more ;) In the meantime, I am considering using Google+ hangouts for the face-to-face interview sessions with the upcoming recruits. This should bring some fresh air to this process ;)

When I moved back to Saint Louis with my current job, and started working from home, it became readily apparent that I would need a decent office chair (sitting on one of my chairs from the less-than-great dining room table would certainly not be ideal). After looking at a bunch of different options, and realising that I’m not going to spend $1000+ USD on a Herman Miller Aeron, I found some great choices on Amazon.

For the price, the chair is actually incredibly well-built. Is it an Aeron? No, of course not, but it also doesn’t carry nearly the same price tag. That being said, it also doesn’t feel like a cheaply-made knock-off. The only part of the build quality that is somewhat questionable is the armrest construction. They have plastic shields and are rubber-stamped on the top, but they do serve their purpose nicely. I would like a little more adjustability in them, but they are what they are. The only other qualm that I have is that the chair makes a bit of noise when moving around or leaning back. I believe that these sounds are related to the two adjustable nuts near the chair’s base, but I haven’t thoroughly tested that idea.

Assembly of the chair was incredibly easy and straightforward. I did find it a lot easier to do with the help of one other person (for holding the back of the chair in place whilst attaching it to the base, et cetera). If you don’t have help, though, it would be easy enough to do by yourself. There was one piece of plastic that served no functional purpose, only an aesthetic one. I chose not to screw that piece into the backing of the chair (maybe that’s the engineer in me).

More important than the build quality and the ease of assembly, the seat is very comfortable, even for the 8-10 hours per day that I am in it. I don’t find that I struggle to stay comfortable during that time. Also, the lumbar support and backing are both stronger than on other chairs that I have used in the past. Given that I have had trouble with my middle back before, I’m pleasantly surprised that I don’t experience any discomfort in that area throughout the day.

So, if you are in the market for a good office chair, but don’t want to spend a huge amount of money, I recommend that you at least look into the Lorell 86200. It is nicely built, easy to assemble, and I find it to be one of the most comfortable chairs in the price range.

Several weeks ago, a good friend and I went to Addie’s Thai House in Saint Louis, MO. Though it is a bit far from where we live, and when travelling that distance, we would usually head north to Thai Kitchen, we decided to try a new place (and they had a special at the time). Upon entering the restaurant, I immediately noticed that it was a little more posh than most of the Thai restaurants in the area. The décor and seating arrangements both lent themselves to a higher-scale dining experience.

We started off with an appetiser, and seeing as we wanted to try one that was unique to their menu, we opted for the sweet potatoes. They were cut in a thick string style, deep-fried, and came out with coconut flakes and a sweet and sour dipping sauce. To me, the coconut taste was so subtle that one really had to try to notice it. I found that to be disappointing, because otherwise, they ended up just tasting a lot like regular sweet potato chips.

For dinner, I had the green curry with fresh tofu. It was pleasant, but lacked a lot of the heat that I’m used to with green curry. Also, I found that there were not many vegetables (or much tofu, for that matter) in the pot, but rather that it was primarily sauce. That being said, one of my favourite things to do with curry is to soak some rice in the remainder of the sauce. As such, I did enjoy that aspect of the dish.

She had Praram Long Song, which is a common Siamese dish that generally comes with carrots, spinach, and your choice of protein with a peanut sauce atop it. The peanut sauce wasn’t all that great (especially compared to Thai Kitchen, which has some of the best I’ve ever eaten), and overall, the dish was rather bland.

Though Addie’s Thai House appeared to be a more upscale restaurant in terms of atmosphere, the quality of the food was fairly disappointing. Given that, I would much rather go to one of the restaurants in the area that focuses more on the preparation of the food, especially seeing as Addie’s was a bit more expensive as well. For those reasons, I can’t recommend Addie’s over other nearby Thai places.

The Gentoo Miniconf is over now, and it was a great success. There were 30+ developers in attendance, and I met quite a few users too. Thanks to Theo (tampakrap) and Michal (miska), and the others who helped, for organizing the event; thanks to openSUSE for sponsoring it and letting the Gentoo Linux guys hang out there. Thanks to the other sponsors too: Google, Aeroaccess, et al.

I went to Dordrecht, a very small town, for just a short time. We made a mistake on the waterbus that left us walking around the town for a few hours before we could get to our intended goal, Kinderdijk. Kinderdijk is home to the famous windmills that Holland is known for. The windmills are preserved and still working, though no longer used since the invention of the electric pump. We had to go see the windmills and get the picture…

Then I went to Delft for one night, just relaxed at the hostel and bummed around inside while it was raining. Delft is home of the famous hand-painted blue and white china – “delftware”. I did manage to stroll around the town briefly (not much to see on foot, though). Delft has all the canals and architecture that Amsterdam has, but it is much smaller and has a different culture.

Earlier this month, I reviewed the self-titled first album by Ronald Jenkees. Now that I’ve listened to his second full-length studio album, Disorganized Fun, several times, I can share my thoughts on it.

1. Disorganized Fun – 9 / 10
Coming in full-force with his mix of disjointed synth elements and smooth beats, this first track lives up nicely to its title. Jenkees played around a lot with pitch bending, and it worked really well with his choices of sounds. In the middle of the track, there’s a great bridge followed by a keyboard solo. Not only does the style live up to the title of the track, but it serves as a great start to his second full-length album.

2. Fifteen Fifty – 8 / 10
Unlike the previous song, this one is a bit more fluid. As such, however, it doesn’t have as much of a stylistic edge, and I found it to drag a bit in spots. There is a neat bass line that comes in around 1’15″ or so, but unfortunately, it doesn’t carry through the rest of the tune. Whilst not a bad song at all, it just doesn’t have the energy of its predecessor (even with the wild solo at the very end).

3. Guitar Sound – 10 / 10
It’s really impressive to me that Jenkees is able to emulate an 80s-style guitar sound as well as he does. The opening portion of this track sounds a lot like some of Eric Johnson’s work, especially in the vein of Cliffs of Dover. There are some great hard-hitting riffs in there that, when coupled with the up-tempo beats and breakdown/variety of the bridge, make for a fantastic track all around! Even at just over 7 minutes, the song doesn’t drag at all.

4. Synth One – 6 / 10
This song has a little stronger emphasis on the drums and beats than the previous tracks, and as such, they stand out more prominently than do some of the synth parts. There are a lot of sound effects in this track that have an old NES feel to them, which is a bit nostalgic. However, I don’t really find this to be one of the stronger songs on the album.

5. Throwing Fire – 8 / 10
I stand corrected about the throwback to old Nintendo games, as this song starts out in a way that almost makes me feel like I just put in the cartridge and fired up Blaster Master. Unlike the former track, however, Throwing Fire has a really upbeat and lively feel to it. There are a couple parts around the 2-minute mark, though, where it seems like Jenkees stumbles a bit on the notes, but they add a nice human element.

6. Minimal MC – 8 / 10
On this track, Jenkees plays a lot with throwing sounds back and forth between the left and right stereo channels, which makes for a very cool effect whilst listening on headphones. Significantly more subdued, and containing a lot fewer effects than some of the previous tracks, Minimal MC adheres to its name. After the halfway mark, there are some great dramatic elements and a little bit of an Asian influence.

7. Stay Crunchy – 10 / 10
Stay Crunchy was actually the song that prompted me to buy both of his albums after I originally heard it on Pandora. I think that it is an incredible mix of funky beats and rhythm, great synth work, and some techno/club elements. This is my clear favourite on the album (though that could be related to the Serial Position Preference Effect)!

8. Inverted Mean – 8 / 10
With the intro of this track, I expected someone like Jay-Z to come in with some dramatic near-spoken-word lyrics; it just presents a very theatrical sound right from the start. This song also has a stronger hip-hop feel than many of the others, but it is a nice way to increase the dynamic nature of the album. My favourite part of the piece comes in around the 3’15″ mark with a great piano solo that fades out nicely.

9. Outer Space – 8 / 10
With a much stronger emphasis on synth sounds and chaotic melody than the previous track, Outer Space combines techno and dance beats with sci-fi effects. Again, tracks like these really highlight the versatility of his musical vision. Though it isn’t the most appealing track to my ears, it showcases technical aptitude within the genre.

10. Let’s Ride (rap) – 6 / 10
As with the raps on his previous album, this one is fairly entertaining, regardless of whether or not the technical expertise is as high as his non-rap tracks. The reference to passing the DQ is fairly funny as well.

11. It’s Gettin Rowdy (rap) – 6 / 10
For some reason, this rap makes me think of Regulate by Warren G, but with a little bit of a silly element to it. Ahhh, the delusions of grandeur…

That makes for a total of 87 / 110 or ~79%. That comes out to a very strong 8 stars:

For about a year now, I’ve been working at GRNET on its (OpenStack API compliant) open source IaaS cloud platform Synnefo, which powers the ~okeanos service.

Since ~okeanos is mainly aimed at the Greek academic community (and thus has restrictions on who can use the service), we set up a ‘playground’ ‘bleeding-edge’ installation (okeanos.io) of Synnefo, where anyone can get a free trial account, experiment with the Web UI, and have fun scripting with the kamaki API client. So, you get to try the latest features of Synnefo, while we get valuable feedback. Sounds like a fair deal.

Unfortunately, since I’m the only one on our team who actually uses Gentoo Linux, up until recently Gentoo VMs were not available. So, a couple of days ago I decided it was about time to get a serious distro running on ~okeanos (the load on our servers had been ridiculously low, after all). For future reference, and in case anyone wants to upload their own image to okeanos.io or ~okeanos, I’ll briefly describe the steps I followed.

4) Chroot and install Gentoo in /mnt/gentoo. Just follow the handbook. At a minimum you’ll need to extract the base system and portage, and set up some basic configs, like networking. It’s up to you how much you want to customize the image. For the Linux kernel, I just copied the Debian VM’s /boot/[vmlinuz|initrd|System.map] and /lib/modules/ directly (and it worked!).
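As a rough sketch of the bare minimum, assuming the image is already mounted at /mnt/gentoo (the tarball names below are just examples, grab current ones from a mirror):

cd /mnt/gentoo
tar xjpf /root/stage3-amd64-20121101.tar.bz2
tar xjf /root/portage-latest.tar.bz2 -C usr/
cp -L /etc/resolv.conf etc/
mount -t proc proc proc/ && mount --rbind /dev dev/
chroot . /bin/bash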

and make sure you have a sane grub.cfg (I’d suggest replacing all references to UUIDs in grub.cfg and /etc/fstab with /dev/vda[1]).
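For that replacement, something along these lines inside the chroot should do (the UUID is a placeholder for whatever blkid reports on your image):

sed -i 's|UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|/dev/vda1|g' /etc/fstab /boot/grub/grub.cfg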
Now, outside the chroot, run:
grub-install --root-directory=/mnt --grub-mkdevicemap=/mnt/boot/grub/device.map /dev/loop0

snf-image-creator takes a diskdump as input, launches a helper VM, cleans up the diskdump / image (cleanup of sensitive data etc), and optionally uploads and registers our image with ~okeanos.

For more information on how snf-image-creator and Synnefo image registry works, visit the relevant docs [1][2][3].

0) Since snf-image-creator will use qemu/kvm to spawn a helper VM, and we’re inside a VM, let’s make sure that nested virtualization (OSDI ’10 Best Paper award, btw) works.

First, we need to make sure that kvm_[amd|intel] is modprobe’d on the host machine / hypervisor with the nested = 1 parameter, and that the vcpu that qemu/kvm creates thinks it has ‘virtual’ virtualization extensions (that’s actually our responsibility, and it’s enabled on the okeanos.io servers).
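As a quick sketch, this is what checking and enabling it boils down to, shown for Intel (use kvm_amd and the svm flag on AMD):

# on the hypervisor: reload KVM with nesting enabled
modprobe -r kvm_intel && modprobe kvm_intel nested=1
cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1)
# inside the guest: the vcpu should advertise the virtualization extension
grep -E 'vmx|svm' /proc/cpuinfo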

If everything goes as planned, after snf-image-creator terminates, you should be able to see your newly uploaded image in https://pithos.okeanos.io, inside the Images container. You should also be able to choose your image to create a new VM (either via the Web UI, or using the kamaki client).

That’s all for now. Hopefully, I’ll return soon with another, more detailed post on scripting with kamaki (vkoukis has a nice script using the kamaki Python lib to create a small MPI cluster on ~okeanos from scratch).

Though their food menu isn’t very extensive – consisting primarily of some appetisers, flatbreads, salads, and a couple of larger plates – the food was fairly tasty for the price. We started with the House Chips (which were actually crisps, not chips), and they were quite nice. They were cut from Russet potatoes, and were lightly coated in truffle oil and Parmigiano-Reggiano. As I’m highly allergic to cheese, I had to be careful, but it wasn’t all that big of a deal to avoid the cheese. For dinner, I had grilled chicken and vegetable linguine, which was nice. The sauce was a bit thick for my liking, but it was easy enough to simply use less of it. She had the fancied-up grilled cheese, which was apparently quite good (for obvious reasons, I couldn’t try it). For our wine offering, we went with a 2010 Pinot Grigio from Lagaria. Though overpriced for the vintage, it nicely complemented our entrées.

The best part, in my opinion, was neither the food nor the wine, though. Instead, the atmosphere is what made the evening fantastic. It was a lightly cool night, and we were sitting out on the back patio near the fireplace. The heat from the fire was just enough to take the chill out of the air, but not so hot as to be uncomfortable. The service was a bit slow, but that was to be expected on a Friday evening, and sitting out enjoying the light breeze made time pass quickly.

Overall, Ernesto’s is a nice change of pace from the typical dinner, but the cost seems to be out of alignment with the quality of the food and drink. That being said, it isn’t so outrageously off-balanced as to be off-putting. I would like to go back another time to try some of the flatbreads and another bottle (but this time, of a rustic red).

After spending a few days in Amsterdam, it was very refreshing to go to Rotterdam. Rotterdam, a 1h20m train ride from Amsterdam, was interesting to me because it is essentially a new town by European standards. There are many, many new buildings in Rotterdam, since it was bombed and essentially destroyed during WW2; however, being the largest port in Europe (formerly the largest in the world), it was rebuilt pretty fast. I stayed in Rotterdam for 4 days and 3 nights, and I could have stayed longer and still been entertained. It was still an expensive city, but marginally less expensive than Amsterdam. There were many English speakers there too, though somewhat fewer than in Amsterdam.

If you’re running ~arch, you probably noticed by now that the latest OpenRC release no longer allows services to “need net” in their init scripts. This change has caused quite a bit of grief because some services, including Apache, no longer start after a reboot or a restart. Edit: this only happens if you have corner-case configurations such as an LXC guest. As William points out, the real change is simply that net.lo no longer provides the net virtual, but the other network interfaces do.

While it’s impossible to say that this is not annoying as hell, it could be much worse. Among other reasons, because it’s really trivial to work around until the init scripts themselves are properly fixed. How? You just need to append to /etc/conf.d/$SERVICENAME the line rc_need="!net" — if the configuration file does not exist, simply create it.
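For Apache, for example, that boils down to:

echo 'rc_need="!net"' >> /etc/conf.d/apache2
/etc/init.d/apache2 restart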

Interestingly enough, knowing this workaround also allows you to do something even more useful, that is making sure that services requiring a given interface being up depend on that interface. Okay it’s a bit complex, let me backtrack a little.

Most of the server daemons out there don’t really care how many interfaces you have, which ones, or what they are named. They either listen on the “catch-all” address (0.0.0.0 or :: depending on the version of the IP protocol — the latter can also be used to catch both IPv4 and IPv6, but that’s a different story altogether), or on a particular IP address, or they can bind to a particular interface, but that’s quite rare and usually only has to do with the actual physical address, such as with RADVD or DHCP.

Now, to bind to a particular IP address, you really need to have the address assigned to the local computer or the binding will fail. So in these cases you have to stagger the service start until the network interface with that address is up. Unfortunately, it’s extremely hard to do so automatically: you’d have to parse the configuration file of the service (which is sometimes easy and most of the time not), and then you’d have to figure out which interface will come up with that address… which is not really possible for networks that get their addresses automatically.

So how do you solve this conundrum? There are two ways and both involve manual configuration, but so do defined-address listening sockets for daemons.

The first option is to keep the daemon listening on the catch-all addresses, then use iptables to set up filtering per-interface or per-address. This is quite easy to deal with, and quite safe as well. It also has the nice side effect that you only have one place to handle all the IP address specifications. If you ever had to restructure a network because the sysadmin before you used the wrong subnet mask, you know how big a difference that makes. I’ve found before that some people think that iptables also needs the interfaces to be up to work. This is not the case; fortunately, it’ll accept any interface name as long as it could possibly be valid, and will then only match it when the interface actually comes up (that’s why it’s usually a better idea to whitelist rather than blacklist there).
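Just to sketch the whitelist approach, with a made-up internal interface name and port:

# accept the daemon's port only on the internal interface, drop it everywhere else
iptables -A INPUT -i lan0 -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP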

The other option requires changing the configuration on the OpenRC side. As I showed above, you can easily manipulate the dependencies of the init scripts without having to change those scripts at all. So if you’re running a DHCP server on the LAN served by the interface named lan0 (named this way because a certain udev no longer allows you to swap the interface names with the permanent rules that were first introduced by it), and you want to make sure that network interface is up before dhcp starts, you can simply add rc_need="net.lan0" to your /etc/conf.d/dhcpd. This way you can actually make sure that the services’ dependencies match what you expect — I use this to make sure that if I restart things like mysql, php-fpm is also restarted.
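Spelled out, the two examples above look like this (assuming the conf.d file names match the init scripts):

echo 'rc_need="net.lan0"' >> /etc/conf.d/dhcpd
echo 'rc_need="mysql"' >> /etc/conf.d/php-fpm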

So, I’ve given you two ways to work around the current not-really-working-well status; but why did I not complain about the situation itself? Well, the reason why so many init scripts have that “need net” line is simply cargo-culting. And the big problem is that there is no real good definition of what “net” is supposed to be. I’ve seen it used (and used it myself!) for at least the following notions:

there are enough modules loaded that you can open sockets; this is not really a situation that I’d like to find myself having to work around; while it’s possible to build both ipv4 and ipv6 as modules, I doubt that most things would work at all that way;

there is at least one network interface present on the system; this usually is better achieved by making sure that net.lo is started instead; especially since in most cases for situations like this what you’re looking for is really whether 127.0.0.1 is usable;

there is an external interface connected; okay sure, so what are you doing with that interface? because I can assure you that you’ll find eth0 up … but no cable is connected, what about it now?

there is Internet connectivity available; this would make sense if it wasn’t for the not-insignificant detail that you can’t really know that from the init system; this would be like having a “need userpresence” that makes sure that the init script is started only after the webcam is turned on and the user’s face is identified.

While some of these particular notions have use cases, the fact that there is no clear identification of what that “need net” is supposed to be makes it extremely unreliable, and at this point, especially considering all the various options (oldnet, newnet, NetworkManager, connman, flimflam, LXC, vserver, …) it’s definitely a better idea to get rid of it and not consider it anymore. Unfortunately, this is leading us into a relative world of pain, but sometimes you have to get through it.

If you’re a Munin user in Gentoo and you look at ChangeLogs, you probably noticed that yesterday I committed quite a few changes to the latest ~arch ebuild of it. The main topic for these changes was async support, which unfortunately I think is still not ready yet, but let’s take a step back. Munin 2.0 brought one feature that was clamored for, and one that was simply extremely interesting: the former is the native SSH transport, the latter is what is called “Asynchronous Nodes”.

On a classic node, whenever you’re running the update, you actually have to connect to each monitored node (real or virtual), get the list of plugins, get the config of each plugin (which is not cached by the node), and then get the data for said plugin. For things that are easy to get, because they only require you to read data out of a file, this is okay, but when you have to actually contact services that take time to respond, it’s a huge pain in the neck. This gets even worse when SNMP is involved, because then you have to make multiple requests (for multiple values) both to get the configuration and to get the values.

To the mix you have to add that the default timeout on the node, for various reasons, is 10 seconds, which, as I wrote before, makes it impossible to use the original IPMI plugin for most of the servers available out there (my plugin instead seems to work just fine, thanks to FreeIPMI). You can increase the timeout, even though this is not really documented to begin with (unfortunately, like most things about Munin), but that does not help in many cases.
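For the record, the knob is a single line in /etc/munin/munin-node.conf; the value is in seconds and the one below is just an example:

timeout 60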

So here’s how the Asynchronous node should solve this issue: on a standard node, the requests to the single node are serialized, so you’re actually waiting for each to complete before the next one is fetched, as I said; all in all this can make the connection to the node take a few minutes, and if the connection is severed in the meantime, you lose your data. The Asynchronous node, instead, has a different service polling the actual node on the same host and saving the data in its spool file. The master in this case connects via SSH (it could theoretically work using xinetd, but neither Steve nor I care about that), launches the asynchronous client, and then requests all the data that was fetched since the last request.

This has two side effects: the first is that your foreign network connection is much faster (there is no waiting for the plugins to config and fetch the data), which in turn means that the overall munin-update transaction is faster; but also, if for whatever reason the connection fails at one point (a VPN connection crashes, a network cable is unplugged, …), the spooled data will cover the time that the network was unreachable as well, removing the “holes” in the monitoring that I’ve been seeing way too often lately. The second side effect is that you can actually spool data every five minutes, but only request it every, let’s say, 15, for hosts which do not require constant monitoring, even though you want to keep the granularity.

Unfortunately, the async support is not as tested as it should be, and there are quite a few things that are not ironed out yet, which is why the support for it in the ebuild has been this much in flux up to this point. Some things have been changed upstream as well: before, you had only one user, and that was used both for the SSH connections and for the plugins to fetch data — unfortunately, one of the side effects of this is that you might have given your munin user more access (usually read-only, but oftentimes there’s no way to ensure that’s the case!) to devices, configurations or things like that… and you definitely don’t want to allow direct access to said user. Now we have two users, munin and munin-async, and the latter needs to have an actual shell.

I toyed with the idea of using the munin-async client as a shell, but the problem is that there is no way to pass options to it that way, so you can’t use --spoolfetch, which makes it mostly useless. On the other hand, I was able to make the SSH support a bit more reliable without having to handle configuration files on the Gentoo side (so that it works for other distributions as well; I need that because I have a few CentOS servers at this point), including the ability to use it without requiring netcat on the other side of the SSH connection (using an old OpenSSH trick). But this is not ready yet; it’ll have to wait a little longer.

Anyway, as usual, you can expect updates to the Munin page on the Gentoo Wiki when the new code is fully deployed. The big problem I’m having right now is making sure I don’t screw up the monitors at work while I’m playing with improving and fixing Munin itself.

The XO-1.75 is based on the Marvell Armada 610 SoC (armv7l, non-NEON), which promises countless hours of fun enumerating and obtaining the obscure pieces of software needed to make the laptop work.

One of these is the xf86-video-dove DDX for the Vivante(?) GPU: the most recent version, 0.3.5, seems to be available only as an SRPM in the OLPC rpmdropbox. Extracting it reveals a "source" tarball containing this:

I was in Amsterdam for 3 days and 2 nights. My first impressions were quite interesting. The culture in Amsterdam is quite liberal and relaxed, but it is also regulated. This was my first stop on my RTW trip, and it was a great place to be dropped into Europe. It allowed me to get into the Euro mindset and figure out what the heck I was doing. Now I know that when I get into a city I need to do the following: 1) go to the tourist info building to get a city map, 2) physically find my sleeping accommodations, 3) set my bag down and go explore the city with my map.

Amsterdam is a very old city with a lot of history and architecture. You can look at my pics on flickr; I have left comments on most of them. I liked all the canals that the city is built around, quite unique. Overall, I’m glad I went there, but it’s probably not the city for me. I did not fall in love with it.

Crimson Wing traces the fascinating story of the life cycle of the flamingo. In particular, the documentary follows the migration surrounding Lake Natron in Tanzania, Africa. It details the courtship of adult flamingos, the birth of their offspring, and many of the struggles the birds must endure to sustain life in a rather hostile environment.

Unlike African Cats, this film didn’t have a stunning colour palette that really came to life on Blu-Ray. Instead, the most prominent colour spread was composed of whites, greys, and some blues (not as much crimson as I would have thought). I don’t believe this was the fault of a bad transfer to Blu-Ray, but rather the somewhat washed-out look of the environment in which the film was shot. On top of the slightly disappointing visuals, the narrator had very little vocal and tonal fluctuation, which made the presentation a little dull and monotonous. Also, the balance between information delivery and entertainment was skewed toward the former. Not that facts are bad in a documentary, but it seemed to lack a lot of the charisma of other DisneyNature films. To make matters worse, I didn’t come away from this one knowing much more about flamingos than I did before I started watching.

Overall, though it wasn’t awful, it was certainly not my favourite of the DisneyNature series. However, it is still worth a watch, especially if you are a nature lover.