Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

mongodb-2.4.4

Just bumped it in portage and fixed an open bug along the way. This is yet another bugfix release, which backports the switch to the Cyrus SASL2 library for SASL authentication (Kerberos). Dependencies were adjusted so you no longer need libgsasl on your systems (remember to depclean).
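If it is still installed, a quick cleanup run takes care of it:

$ emerge --ask --depclean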

highlights

config upgrade fails if collection missing “key” field

migrate to Cyrus SASL2 library for sasl authentication

rollback files missing after rollback

pymongo-2.5.2

This one is important to note and I strongly encourage you to upgrade asap, as it fixes an important security bug (CVE-2013-2132). I've almost dropped all other versions from the tree anyway…

highlights 2.5.x

support GSSAPI (kerberos) authentication

support for SSL certificate validation with hostname matching

support for delegated and role based authentication

mongodb-2.5.x dev

What's cooking for the next 2.6 releases? Let's take a quick look as of today.

background indexing on secondaries (hell yes!)

new implementation of external sort

add support for building from source with particular C++11 compilers (will fix a Gentoo bug reported quite a long time ago)

In the past, when I had to manage my images (pictures), I used GQview (starting back in 2008). But the application doesn't get many updates, and if an application does not get many updates, it either means it is no longer maintained or that it does its job perfectly. Sadly, for GQview, it is the former (even though the application seems to work pretty well for most tasks). Enter Geeqie, a fork of GQview created to keep the application's evolution up to speed.

The Geeqie image viewer is a simple viewer that allows you to easily manipulate images (rotating them, for instance). I launch it the moment I insert my camera's SD card into my laptop for image processing. It quickly shows the thumbnails of all images, and I start processing them to see which ones are eligible for manipulation later on (or are just perfect – not that that occurs frequently) and which can be deleted immediately. You can also quickly set EXIF information (to annotate the image further) and view some basic aspects of the picture (such as histogram information).

Two features however are what is keeping me with this image viewer: finding duplicates, and side-by-side comparison.

With the duplicate feature, Geeqie can compare images by name, size, date, dimensions, checksum, path and – most interestingly – similarity. If you start working on images, you often create intermediate snapshots or tryouts. Or, when you start taking pictures, you take several in a short time-frame. With the "find duplicates" feature, you can search through the images to find all images that had the same base (or were taken quickly after each other) and see them all simultaneously. That allows you to remove those you don't need anymore and keep the good ones. I also use this feature often when people come to me with their external hard drive filled with images – none of them having any EXIF information anymore and not structured in any way – and ask whether there are any duplicates on it. A simple checksum might reveal the obvious ones, but the similarity search of Geeqie goes much, much further.

The side-by-side comparison creates a split view of the application, in which each pane holds a different image. I use this feature when I have two pictures that were taken closely after one another (so very, very similar in nature) and I need to see which one is better. With the side-by-side comparison, I can look at artifacts in the image or the consequences of the different aperture, ISO and shutter speed settings.

And the moment I start working on images, Gimp and Darktable are just a single click away.

Tiny and Big: Grandpa’s Leftovers

Oil Rush

Intrusion 2

English Country Tune

And we’re back … PulseAudio 4.0 is out! There’s both a short and super-detailed changelog in the release notes. For the lazy, this release brings a bunch of Bluetooth stability updates, better low latency handling, performance improvements, and a whole lot more. :)

One interesting thing is that for this release, we kept a parallel next branch open while master was frozen for stabilising and releasing. As a result, we're already well on our way to 5.0, with 52 commits merged into master since 4.0.

And finally, I'm excited to announce that PulseAudio is going to be carrying out two great projects this summer, as part of the Google Summer of Code! We are going to have Alexander Couzens (lynxis) working on a rewrite of module-tunnel using libpulse, mentored by Tanu Kaskinen. In addition to this, Damir Jelić (poljar) will be working on improvements to resampling, mentored by Peter Meerwald.

That’s just some of the things to look forward to in coming months. I’ve got a few more things I’d like to write about, but I’ll save that for another post.

Anyone who is even remotely involved in innovation will know what mindmaps are. They are a means to visualize information, ideas or tasks in whatever structure you like. Thanks to graphical annotations, the information stays easy to look through, even when the mindmap becomes very large. In the commercial world, mindmapping software such as XMind and MindManager is often used. But the companies using those should really start looking into Freemind.

Freemind is Java-based mindmapping software, running perfectly on Windows, Linux and other platforms. Installation is a breeze (and if you are not allowed to install software at work, you can just launch it from a USB drive, so no installation hassles whatsoever) and its interface is very intuitive. For all the bells and whistles that the commercial ones provide, I just want to create my mindmaps and export them into a format that others can easily use and view.

At my day job, we (have to) use XMind. If someone shares a mindmap ("their mind" map as I often see it – I seem to have a different mind than most others in how I structure things, except for one colleague who imo does not structure things at all), they just share the XMind file and hope that the recipients can read it. Although XMind can export mindmaps just fine, I do like the Freemind approach, where a simple Java applet can show the entire mindmap as interactively as you would navigate through the application itself. This makes it perfect for discussing ideas, because you can close and open branches easily.

The export/import capabilities of Freemind are also interesting. Before being forced to use XMind, we were using MindManager, and I could easily import those mindmaps into Freemind. The file format that Freemind uses is XML-based, so translating it into other formats is not that difficult if you know some XSLT.

I personally use Freemind when I embark on a new project: to structure the approach, centralize all information, keep track of problems (and their solutions), etc. The only thing I am missing is a nice interface for mobile devices though.

The next few weeks (months even) will be challenging my free time, as I'm working on (too many) projects simultaneously (sadly, only a few of those are free software related; most are house renovations). But that shouldn't stop me from starting a new set of posts: my application base. In this series, I'll cover a few applications (or websites) that I either use often or that I should use more. In either case, the application does its job very well, so why not give some input on it?

With Draw.io, you get a browser-based drawing application for diagrams, flowcharts, UML, BPMN etc. I came across this application while looking for an alternative to Dia, which itself was supposed to be an alternative to Microsoft Visio (err, no). Don't get me wrong, Dia is nice, but it lacks evolution and just doesn't feel easy. Draw.io on the other hand is evolving constantly, and it is also active on Google Plus, where you can follow up on all recent developments and thoughts (I hope I got the G+ link right; it's not that I don't like numbers, just not in URLs).

I started using Draw.io while documenting free software IT architectures (such as implementations of BIND, PostgreSQL, etc.) for which I needed some diagrams. Although Draw.io is an online application (and its underlying engine is not completely free software), you can easily work with it from different locations. It integrates with Google Drive to store the diagrams if you want – and if you don't, you can always save the diagrams in their native XML format on your system and open them again later.

The interface is very easy to use, and I recently found out that it now also supports mobile devices, which is perfect for tablets (the mobile device support is recent afaik and still undergoing updates). The site also works well in various browsers (I tried IExplorer 10 at work, Firefox and Google Chrome, and they all seem to work nicely) – eat that, stupid commercial vendors who force me into using IExplorer 8 or Firefox 10 – you know who you are!

A site/service to keep a close eye on. The service itself is free (and doesn't seem too limited because of it), but Draw.io also offers commercial support through Google Apps and Confluence integration if you want it. I don't have much experience with those yet, but that might change in the near future (projects, projects).

One of the things I have been meaning to implement on my system is a way to properly "remove" old files from the system. Currently, I do this by frequently listing all files, going through them and deleting those I feel I no longer need (in any case, I can retrieve them from the backup within 60 days). But this isn't always easy, since it requires me to reopen the files and consider what I want to do with them… again.

Most of the time, when files are created, you generally know how long they are needed on the system. For instance, an attachment you download from an e-mail to view usually has a very short lifespan (you can always re-retrieve it from the e-mail as long as the e-mail itself isn't removed). Same with output you captured from a shell command, a strace logfile, etc. So I'm wondering if I can't create a simple method for keeping track of expiration dates on files, similar to the expiration dates supported for z/OS data sets. And to implement this, I am considering using extended attributes.

The idea is simple: when working with a file, I want to be able to immediately set an expiration date to it:

$ strace -o strace.log ...
$ expdate +7d strace.log

This would set an extended attribute named user.expiration with, as value, the number of seconds since the epoch (which you can obtain through date +%s if you want) at which the file can be expired (and thus deleted from the system). A system cronjob can then regularly scan the system for files with the extended attribute set and, if the expiration date has passed, remove the file from the system (perhaps first moving it into a specific area where it lingers for an additional while, just in case).
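The expdate tool doesn't exist (yet), but a minimal sketch of both sides of the idea, using setfattr/getfattr from sys-apps/attr, could look like this:

# what "expdate +7d strace.log" would do: store the expiration moment
setfattr -n user.expiration -v "$(date -d '+7 days' +%s)" strace.log

# cronjob sketch: remove files whose user.expiration lies in the past
now=$(date +%s)
find /home -xdev -type f | while read -r f; do
    exp=$(getfattr --only-values --absolute-names -n user.expiration "$f" 2>/dev/null) || continue
    [ "$exp" -le "$now" ] && rm -- "$f"
done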

It is just an example of course. The idea is that the extended attributes keep information about the file close to the file itself. I'm probably going to have an additional layer on top of it, checking SELinux contexts and automatically assigning expiration dates based on the files' last modification time, since setting the expiration dates manually after creating the files is prone to be forgotten after a while. And perhaps I'll introduce the flexibility of setting a user.expire_after attribute as well, telling that the file can be removed if it hasn't been touched (modification time) in at least XX days.

I found myself in a weird situation: a long, long time ago I wrote a Java application that I didn't touch or run for a few years. Today I found it on a backup and wanted to run it again (it's a graphical application for generating HTML pages). However, it failed in a particular feature. Not with an exception or stack trace, just functionally. Now, I have the source code at hand, so I looked into the code and found the logical error. The snippet boiled down to something like this (exact names aside):
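// the bug: maxRange is added to i here already, and again later on
int i = startValue + maxRange;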

It doesn't matter what the code is supposed to do, but from what I can remember, I shouldn't be adding maxRange to the i variable (yet – I do that later in the code). But instead of setting up the Java development environment, emerging the IDE etc., I decided to just edit the class file directly using dhex (a wonderful utility I recently discovered), because doing things the hard way is sometimes fun as well. So I ran javap -c MyClass to get some Java bytecode information from the method, which gives me (abridged to the lines that matter):
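11: iload_2
12: iload_3
13: iadd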

I know lines 11 and 12 are about pushing the 2nd and 3rd arguments of the method (which are startValue and maxRange) onto the stack to add them (line 13). To remove the third argument, I can change this opcode from 1d (iload_3) to 03 (iconst_0). This way, zero is added instead, and the code just continues as intended. And for some reason, that seems to be the only mistake I made back then, because the application now works flawlessly.

Quick summary

Humble Indie Bundle #8 (still available, get yours quickly!) includes seven games, most of them with their (FLAC and MP3) soundtracks as dedicated downloads.
With Hotline Miami, one of the bundle's most exciting games, no soundtrack is included. Well, maybe it is! My friend Jonathan and I wrote a command line tool today to extract the original OGG Vorbis music files from the game's .wad file. It's free software licensed under GPL v3 or later and hosted on GitHub.

Superficial file format analysis

The game consists of only a few files; the biggest one is HotlineMiami_GL.wad. Using a hex viewer like od(1), you already see filenames on the first page. However, the .wad seemed to be in a proprietary format and we could not figure it out quickly enough. (If you find a way to extract all files from the archive, please comment below!)

Using the strings(1) command, a list of Music/*.ogg files can be found.
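Something along these lines will show them:

$ strings HotlineMiami_GL.wad | grep '^Music/'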

So we knew we were looking for OGG Vorbis content. Jonathan had the idea to just scan for any OGG Vorbis content in the file (i.e. guessing the offsets), rather than trying to understand where those Music/*.ogg file offsets were located. The OGG file format is well suited for that: basically, we just had to search for the byte sequence "OggS", extract a few bytes from the header starting at that location, do some simple math, and write a block of contiguous bytes to a dedicated file.
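Our tool does the header math described above; as a crude illustration of the guessing approach, a plain shell version could simply cut the blob at every page that carries the beginning-of-stream flag (this assumes the streams are stored back to back, which seems to hold for this .wad):

wad=HotlineMiami_GL.wad
total=$(wc -c < "$wad")
# offsets of pages whose header says version 0, header-type 0x02
# (the first page of a logical stream)
bos=$(grep -abo 'OggS' "$wad" | cut -d: -f1 | while read -r off; do
    hdr=$(echo $(dd if="$wad" bs=1 skip=$((off + 4)) count=2 2>/dev/null | od -An -tu1))
    [ "$hdr" = "0 2" ] && echo "$off"
done)
# carve the blob at those offsets (bs=1 is slow, but this is just a sketch)
n=0
set -- $bos "$total"
while [ $# -ge 2 ]; do
    n=$((n + 1))
    dd if="$wad" of="track-$n.ogg" bs=1 skip="$1" count=$(($2 - $1)) 2>/dev/null
    shift
done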

While not all files seem to contain proper tags, all of them seem perfectly playable. The bitrate seems to be a constant 224 kbit/s for all of them; could be worse. At least to our ears, these files sound like higher quality than the "Hotline Miami Soundtrack (Full)" video on YouTube. But you don't need that anymore now anyway, right?

Load balancing traffic between servers can sometimes lead to headaches, depending on your topology and budget. Here I'll discuss how to create a self-load-balanced cluster of web servers distributing HTTP requests between themselves and serving them at the same time. Yes, this means that you don't need dedicated load balancers!

I will not go into the details of how to configure your kernel for ipvsadm etc., since that is already covered well enough on the web; instead I'll focus on the challenges and subtleties of achieving load balancing based only on the realservers themselves. I expect you, the reader, to have minimal knowledge of the terms and usage of ipvsadm and keepalived.

The setup

Let’s start with a scheme and some principles explaining our topology.

3 web servers / realservers (you can do the same using 2)

Local subnet: 192.168.0.0/24

LVS forwarding method: DR (direct routing)

LVS scheduler: WRR (you can choose your own)

VIP: 192.168.0.254

Main interface for VIP: bond0

Let's take a look at what happens, as this will explain a lot about why we should configure the servers in a quite special way.

black arrow / serving

the master server (the one holding the VIP) receives an HTTP port connection request

the load balancing scheduler decides it's the one that will serve this request

the local web server handles the request and replies to the client

blue arrow / direct routing / serving

the master server receives an HTTP port connection request

the load balancing scheduler decides the blue server should handle this request

the HTTP packet is handed to the blue server as-is (no modification is made to the packet)

the blue server receives a packet whose destination IP is the VIP, but it doesn't hold the VIP (tricky part)

the blue server’s web server handles the request and replies to the client

IP configuration

Almost all of the tricky part lies in what needs to be done to solve point #4 of the blue server example. Since we're using direct routing, we need to configure all our servers so they accept packets directed to the VIP even if they don't have it configured on their receiving interface.

The solution is to have the VIP configured on the loopback interface (lo) with a host scope on the keepalived BACKUP servers, while it is configured on the main interface (bond0) on the keepalived MASTER server. This is what is usually done when you use pacemaker and ldirectord with IPaddr2, but keepalived does not handle this kind of configuration natively.

We'll use the notify_master and notify_backup directives of keepalived.conf to handle this:
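Something along these lines (the script paths are examples; adapt them to your setup):

vrrp_instance VI_1 {
    interface bond0
    virtual_router_id 42
    priority 150
    virtual_ipaddress {
        192.168.0.254 dev bond0
    }
    # on state change, move the VIP between bond0 (master) and lo (backup)
    notify_master "/etc/keepalived/notify_master.sh"
    notify_backup "/etc/keepalived/notify_backup.sh"
}

# notify_backup.sh: hold the VIP on lo with host scope, so we still
# accept DR packets addressed to it while not being the master
ip addr add 192.168.0.254/32 dev lo scope host

# notify_master.sh: keepalived brings the VIP up on bond0, so drop the
# loopback copy
ip addr del 192.168.0.254/32 dev lo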

The ARP problem

Now some of you wise readers will wonder about the ARP cache corruption which will happen when multiple hosts claim to own the same IP address on the same subnet. Let's fix this problem now then, as the kernel does have a way of handling this properly. Basically, we'll ask the kernel not to advertise the server's MAC address for the VIP under certain conditions, using the arp_ignore and arp_announce sysctls.

Add those lines to the sysctl.conf of your servers:

net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2

Read up on those parameters for a detailed explanation of these values.

The IPVS synchronization problem

This is another problem arising from the fact that the load balancers are also acting as realservers. When keepalived starts, it spawns a synchronization process on the master and backup nodes, so your load balancers' IPVS tables stay in sync. This is needed for a fully transparent failover, as it keeps track of the sessions' persistence so the clients don't get rebalanced when the master goes down. Well, this is the limitation of our setup: clients' HTTP sessions served by the master node will fail if it goes down. But note that the same will happen to the other nodes, because we have to get rid of this synchronization to get our setup working. The reason is simple: IPVS table synchronization conflicts with the actual acceptance of the packet by our loopback-configured VIP. Both mechanisms can't coexist, so you'd better use this setup for stateless (API?) HTTP servers, or if you're okay with this eventuality. So, on top of moving the VIP around, our notify scripts also have to:

keep a copy of the IPVS configuration; if we get to be master, we'll need it back

drop the local IPVS configuration so it doesn't conflict with our own web serving
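In the notify scripts, that boils down to something like this (sketch):

# when becoming BACKUP: save the IPVS table, then flush it so the kernel
# doesn't try to balance the packets arriving on our lo-held VIP again
ipvsadm -S -n > /etc/keepalived/ipvs.saved
ipvsadm -C

# when becoming MASTER: restore the saved table
ipvsadm -R < /etc/keepalived/ipvs.saved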

Conclusion

Even if it offers some serious benefits, remember the main limitation of this setup: if the master fails, all sessions served by your web servers will be lost. So use it mostly for stateless stuff, or if you're okay with this. My setup and explanations may have some glitches, so feel free to correct me if I'm wrong somewhere.

After 9 posts, it's time to wrap things up. You can review the final results online (incron.te, incron.if and incron.fc) and adapt them to your own needs if you want. But we should also review what we have accomplished so far…

We built the start of an entire policy for a daemon (the inotify cron daemon) for two main types: the daemon itself, and its management application incrontab. We defined new types and contexts, we used attributes, declared a boolean and worked with interfaces. That’s a lot to digest, and yet it is only a part of the various capabilities that SELinux offers.

The policy isn’t complete though. We defined a type called incron_initrc_exec_t but don’t really use it further. In practice, we would need to define an additional interface (probably named incron_admin) that allows users and roles to manage incron without needing to grant this user/role sysadm_r privileges. I leave that up to you as an exercise for now, but I’ll post more about admin interfaces and how to work with them on a system in the near future.

We also made a few assumptions and decisions while building the policy that might not be how you yourself would want to build it. SELinux is a MAC system, but the policy language is very flexible: you can take an entirely different approach in your policies if you want. For instance, incron supports launching incrond as a command-line, foreground process. This could help users run incrond under their own privileges for their own files – we did not consider this case in our design. Although most policies try to capture all use cases of an application, there will be cases where a policy developer either did not consider a use case or found that it conflicted with his own principles on policy development (and allowed activities on a system).

In Gentoo Hardened, I try to write down the principles and policies that we follow in a Gentoo Hardened SELinux Development Policy document. As decisions need to be taken, such a document might help find common consensus on how to approach SELinux policy development further, and I seriously recommend that you consider writing up a similar document yourself, especially if you are going to develop policies for a larger organization.

One of the deficiencies of the current policy is that it works with the unmodified incron version. If we patched incron so that it could change context upon executing the incrontab files of a user, then we could start making use of the default context approach (and perhaps even enhance it with PAM services). In that case, user incrontabs could be launched entirely from the user's context (like user_u:user_r:user_t) instead of the system_u:system_r:incrond_t or transitioned system_u:system_r:whatever_t contexts. Having user-provided commands executed in the system context is a security risk, so in our policy we would not grant incron_role to untrusted users – probably only to sysadm_t, and even then one would probably be better off using /etc/incron.d anyway.

The downside of patching code, however, is that this is only viable if upstream wants to support it – otherwise we would need to maintain the patches ourselves for a long time, creating delays in releases (upstream releases a new version and we still need to reapply and refactor our patches) and taking precious (human) resources away from other, Gentoo Hardened/SELinux specific tasks (like bugfixing and documentation writing ;-)

Still, the exercise gave a fairly good view of how policies can be developed. And as I said, there are still other things that weren't discussed, such as:

Build-time decisions, which can change policies based on build options of the policy. In the reference policy, this is most often used for distribution-specific choices: if Gentoo would use one approach and Redhat another, then the differences would be separated through ifdef(`distro_gentoo',`...') and ifdef(`distro_redhat',`...') calls.

Some calls might only be needed if another policy is loaded. I think all calls currently made are part of base modules, so they can be expected to be available at all times. But if we needed something like icecast_signal(incrond_t), then we would have to put that call inside an optional_policy(`...') statement. Otherwise, our policy would fail to load whenever the icecast SELinux policy isn't loaded.
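For that example, it would look like this:

optional_policy(`
	icecast_signal(incrond_t)
')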

We could even introduce specific statements like dontaudit or neverallow to fine-tune the policy. Note though that neverallow is a compile-time statement: it is not a way to negate allow rules. If there is one allow rule that would violate the neverallow, then the module simply refuses to build.

Furthermore, if you want to create policies to be pushed upstream to the reference policy project, you will need to look into the StyleGuide and InterfaceNaming documents, as those define the order in which rules should be placed and the naming syntax for interfaces. I have been contributing a lot to the reference policy and I still miss a few of these, so for me they are not that obvious. But using a common style is important, as it allows for simple patching and code comparison, and even lets us easily read through complex policies.

If you don't want to contribute it, but still want to use it on your Gentoo system, you can use a simple ebuild to install the files. Create an ebuild (for instance selinux-incron), put the three files in the files/ subdirectory, and use the following ebuild code:
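From memory, with the selinux-policy-2 eclass that the sec-policy/* packages use, it would be something like this (double-check against an existing sec-policy ebuild):

# selinux-incron-1.0.ebuild (sketch)
EAPI="5"

IUSE=""
MODS="incron"
POLICY_FILES="incron.te incron.if incron.fc"

inherit selinux-policy-2

DESCRIPTION="SELinux policy module for incron"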

Hotline Miami

Proteus

Little Inferno

Awesomenauts

Capsized

Thomas Was Alone

Dear Esther

After using a default set of directories to watch, and allowing admins to mark other types as watchable as well, let's consider another approach for making the policy more flexible: booleans. The idea now is that a boolean called incron_notify_non_security_files enables incrond to be notified of changes on all possible non-security related files (the latter is merely one approach; you can define other sets as well if you want, including all possible files).

Booleans in SELinux policy can be generated in the incron.te file as follows:
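A sketch of what that could look like (the interface name is from memory, so verify it against your reference policy version):

## <desc>
## <p>
##	Allow incrond to watch all non-security file types
## </p>
## </desc>
gen_tunable(incron_notify_non_security_files, false)

tunable_policy(`incron_notify_non_security_files',`
	files_read_non_security_files(incrond_t)
')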

Reloading the incrontab tables now works, and the notifications work as well.

As you can see, once a policy is somewhat working, policy developers are considering the various “use cases” of an application, trying to write down policies that can be used by the majority of users, without granting too many rights automatically.

So far we've provided the "semantic-desktop" useflag, which in particular controls the nepomuk functionality. Some components of KDE require this functionality unconditionally, and if you try to build them without it, bugs and build failures may occur. In addition, by now it is easily and reliably possible to disable e.g. the file indexer at runtime. So, we've decided that starting with KDE 4.11 we will remove the useflag and hard-enable the functionality and the required dependencies in the ebuilds. The changes are already being made in the KDE overlay in the live ebuilds (which build upstream git master and form the templates for the upcoming 4.11 releases).

After recent experiences, the plan to drop kdepim-4.4 is off the table again. We will keep it in the portage tree as an alternative version and try to support it until it finally breaks.

In the meantime we (well, mainly Chris Reffett) have started packaging Plasma Active, the tablet Plasma workspace environment, in the KDE overlay. Since Gentoo ARM support is already excellent, this may become a highly valuable addition. Unfortunately, it's not really ready yet for the main tree and general use, but packaging work will continue in the overlay – what we need most is testing and bug reporting!

Independent of the meeting, a stabilization request has already been filed for KDE 4.10.3; thanks to the work of the KDE stable testers, we can keep everyone up to date. And as a final note, my laptop is back to kmail1... Cheers!

... that is the result of an afternoon of hacking on Aboriginal Linux to include mksh support.
Why? Eh... why not. And for such a crude hack it works surprisingly well – only two of the ARM cross-compile targets failed.

In the previous post we made incrond able to watch public_content_t and public_content_rw_t types. However, this is not scalable, so we might want to be able to update the policy more dynamically with additional types. To accomplish this, we will make types eligible for watching through an attribute.

So how does this work? First, we create an attribute called incron_notify_type (we can choose the name we want of course) and grant incrond_t the proper rights on all types that have been assigned the incron_notify_type attribute. Then, we create an interface that other modules (or admins) can use to mark specific types eligible for watching, called incron_notify_file. This interface will assign the incron_notify_type attribute to the provided type.
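A sketch of what this looks like in policy code – the attribute and rules in incron.te, the interface in incron.if:

# incron.te
attribute incron_notify_type;

allow incrond_t incron_notify_type:dir list_dir_perms;
read_files_pattern(incrond_t, incron_notify_type, incron_notify_type)

# incron.if
interface(`incron_notify_file',`
	gen_require(`
		attribute incron_notify_type;
	')

	typeattribute $1 incron_notify_type;
')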

The interface we are looking for here is userdom_list_user_home_content, but that only covers the case where we want to watch a user home directory. What if we want to watch a server upload directory? Or a cache directory? We might need to grant incrond the proper access to all directories. But then again, all does sound a bit… much, doesn't it? So let's split it up into three parts:

The incrond_t domain will support a minimal set of types that it can watch, based on common approaches

I will introduce an interface that allows other modules to mark specific types as being “watch-worthy”

A boolean will be set to allow incrond_t to watch a very large set of types (just in case the admin trusts it sufficiently)

Let's first consider a decent minimal set. Within most SELinux policies, two types are often used for public access (or for uploading of data). These types are public_content_t and public_content_rw_t, and they are used, for instance, in FTP definitions (upload folders), HTTP servers and such. So we introduce the proper rights to watch that data. There is an interface available called miscfiles_read_public_files, but let's first see whether that interface isn't too broad (after all, watching might not be the same as reading).

# This is only to temporarily check if the rights of the interface are too broad or not
# You can set this using "selocal" or in a module (in which case you'll need to 'require'
# the two types)
allow incrond_t public_content_t:dir { read getattr };

After editing the incrontab to watch a directory labeled with public_content_t, we now get a handful of new AVC denials.

As the incrontab is a user incrontab, we can expect incrond_t to require the setuid and setgid privileges. Also, the fifo_file access happens after forking (the denials show differing PID values) and is most likely used to communicate with the master process. So let's allow those:
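In rule form, that is roughly:

allow incrond_t self:capability { setuid setgid };
allow incrond_t self:fifo_file rw_fifo_file_perms;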

The ngroups_max pseudo-file (in /proc/sys/kernel) returns the maximum number of supplementary group IDs per process, and is consulted through the initgroups() method provided by the system library, so it might make sense to allow it. For now though, I will not enable it (as reading sysctl_kernel_t exposes a lot of other system information), but I might be forced to do so later if things don't work out well. The search privilege on bin_t is needed to find the script that I have prepared (/usr/local/bin/test) to be executed, so I add in a corecmd_search_bin call and retry.
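Concretely, that is this single line:

corecmd_search_bin(incrond_t)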

Still not there yet, apparently. The forked incrond process wants to execute the script, but to do so it has to follow a symbolic link labeled bin_t. This is because the script starts with #!/bin/sh, and /bin/sh is a symlink to the system shell. We need to allow following this link before the execution can occur; only after the execution will the transition from incrond_t to system_cronjob_t take place.

corecmd_read_bin_symlinks(incrond_t)

With that set in the policy, the watch works: incrond properly launches the command, and the command properly transitions into system_cronjob_t as we defined earlier (I checked this by echo'ing the output of id -Z into a temporary file).

So we are left with the (temporary) rights we granted on public_content_t. Consider the rules we had versus the rules applied with miscfiles_read_public_files:
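From memory, the comparison looks roughly like this (check the real interface definition in your reference policy):

# what we granted temporarily:
allow incrond_t public_content_t:dir { read getattr };

# approximately what miscfiles_read_public_files(incrond_t) expands to:
allow incrond_t { public_content_t public_content_rw_t }:dir list_dir_perms;
read_files_pattern(incrond_t, { public_content_t public_content_rw_t }, { public_content_t public_content_rw_t })
read_lnk_files_pattern(incrond_t, { public_content_t public_content_rw_t }, { public_content_t public_content_rw_t })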

The rights here seem to be more than what we need. However, playing around a bit with the directories reveals that incrond actually requires a bit more: for instance, when you create additional directories (subdirectories) and want to match multiple ones, incrond also has to list and read those subdirectories – exactly what the interface grants.

So it looks like miscfiles_read_public_files isn’t that bad after all.

All we are left with is the access to ngroups_max. We can ignore the calls and make sure they don't show up in standard auditing using kernel_dontaudit_read_kernel_sysctls, or we can allow them with kernel_read_kernel_sysctls. I'm going to take the former approach on my system, but your own preference might differ.
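In my case, that is:

kernel_dontaudit_read_kernel_sysctls(incrond_t)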

I tested all this with user incrontabs (as those are the "most advanced"), but one can easily test with system incrontabs as well (placing one in /etc/incron.d). Just be aware that incrond will take the first match and will not seek other matches. So if a system incrontab watches /var/www and another line (or user incrontab) watches /var/www/localhost/upload, it is very well possible that only the /var/www watch is triggered.

While I don't want to say that all privacy advocates are the bad kind of crybabies that I described in my previous post, there are certainly a lot I would call hypocrites when it comes to things like the loyalty schemes I already wrote about.

So as I said in that post, the main complaint about loyalty schemes involves possible involvement of a bad government (in which case we have a completely different problem), and basically has to do with hypothetical scenarios of a dystopian future. So what they are afraid of is not the proper use of the tool that loyalty schemes are, but their abuse.

On the other hand, the same kind of people advocate tools like Tor, Bitcoin, Liberty Reserve or FreedomBox. These tools are supposed to help people fight repressive governments, among others, but there are obvious drawbacks. Pirates use the same technologies. And so do cybercriminals (and other kinds of criminals too).

Where I see a difference is that while even the Irish Times struggled to find evidence of the privacy invasion, or governmental abuse, of loyalty schemes (as you probably noticed, they had to resort to complaining about a pregnant teenager who was found out through targeted advertising), it's extremely easy to find evidence of cyber organized crime relying on tools like Liberty Reserve. Using the trump card of paedophiles would probably be a bad idea, but I'd bet my life on many of them doing so.

Yes, of course there are plenty of honest possible uses for these technologies. But I'd also think that if you start with the assumption that your government is not completely corrupt or abusive (which, I know, could be considered a very fantastic assumption), and that you don't just want to ignore anti-piracy laws because you don't like them (while I still agree that many of those laws are completely idiotic, I have explained my standing already), then the remaining positive uses are marginal compared to the criminal activities that they enable.

Am I arguing against Tor and FreedomBox? Not really. But I am arguing against things like MegaUpload, Liberty Reserve and Bitcoin — and I would say that most people who are defending Kim Dotcom and the likes of him are not my peers. I would push them together with the religious people I’m acquainted with, which is to say, I keep them at arm’s length.

The sub-slots feature of EAPI 5 was announced as if it were the ultimate solution to the problem of SONAME changes on library upgrades. However, the longer I look at it, the more I believe that it is not really a good solution, and that it misses the actual issue, targeting somewhere nearby instead.

The issue is likely well known to most Gentoo users. Every time a library changes its ABI, it changes its SONAME (the filename programs link to) to avoid breaking existing programs. When the package is upgraded, the new version is installed under the new name, and the old one is removed. As a direct result, all applications linking to the old version become broken and need to be rebuilt.

The classic way of handling this is to run the revdep-rebuild tool. It takes a while to scan the system with it, but it supposedly finds all broken executables and initiates a rebuild of them. Of course, the system is in a broken state until all relevant packages are rebuilt, and sometimes they just fail to build…

As you can guess, this is far from being perfect. That’s why people tried to find a better solution, and a few solutions were actually implemented. I’d like to describe them in a quasi-chronological order.

Using slots with slot-operator deps

A perfect solution that has long been advocated by Exherbo developers. I'm not aware, though, whether they ever used it themselves. I didn't see an exact explanation of how they expect it to be done; therefore I am mostly explaining here how I think it could be done.

The idea is that every SONAME-version of the library uses a different slot. That is, every time the SONAME changes, you change the slot as well. Using a different slot for each SONAME means that the incompatible versions of the library can be installed in parallel until all applications are rebuilt. This has a few requirements, though.

First of all, only the newest slot may install development files such as headers. This requires that every version bump is accompanied by a revision bump of the older version, dropping the development files. On each upgrade, the user builds not only the new version but also rebuilds the older version.

To handle the upgrades without a small moment of breakage (and risk of longer breakage if a build fails), the package manager would need to build both packages before starting the merge process. I doubt that enforcing this is really possible right now.

Secondly, the ebuilds installing development files would need to block the older versions (in other slots) that do the same, while keeping the versions lacking development files unblocked.

To explain this better, let's assume that we have foo-1, foo-1-r1, foo-2, foo-2-r1, foo-3, … The -r0 versions have development files and the -r1 versions don't (they are just the upgrade compatibility ebuilds). Now, the blocker in foo-3 would need to block all the older -r0 versions and not the -r1 ones.

In a real-life situation, there will likely be differing revision numbers as well. And I don’t know any way of handling this other than explicitly listing all blocked versions, one by one.

And in the end, reverse dependencies need to use a special slot-dependency operator which binds the dependency to the slot that was used during the package build. But that's the least of the problems, I believe.
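In EAPI 5 syntax that is the := operator; a dependency like the following (hypothetical package) gets locked, at build time, to the slot it was built against:

RDEPEND="media-libs/foo:="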

The solution of preserved-libs

Another attempt at solving the issue was developed in portage-2.2. Although it is available in mainstream portage nowadays, it is still disabled by default due to a few bugs and the fact that some people believe it's a hack.

The idea of preserved-libs is for the package manager to actually trace library linkage within installed programs and automatically preserve old versions of libraries as long as the relevant programs are not rebuilt to use the newer versions. As complex and as simple as that.

Preserving libraries this way doesn't require any specific action from the package maintainer. Portage itself detects that a library with a new SONAME has been installed during an upgrade and preserves the old one. It also keeps track of all the consumers that link against the old version and removes it after the last one is rebuilt.

Of course it is not perfect. It can’t handle all kinds of incompatibilities, it won’t work outside the traditional executable-library linkage and the SONAME tracking is not perfect. But I believe this is the best solution we can have.

The nothing-new in sub-slots

Lately, a few developers who believed that preserved-libs is not supposed to go mainstream decided to implement a different solution. After some discussion, the feature was quickly put into EAPI 5 and then started to be tested in the tree.

The problem is that it's somehow a solution to the wrong problem. As far as I am concerned, the major issue with SONAMEs changing is that the system is broken between package rebuilds. Sub-slots, tangentially to this, mostly address having to call tools like revdep-rebuild – which is not a solution to that problem.

Basically, all sub-slots do is force a rebuild of a given set of reverse dependencies when the sub-slot of a package changes. The rebuilds are pulled into the same dependency graph as the upgrade, to be forced immediately after it.
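For illustration (hypothetical package): the library ebuild encodes its ABI in the sub-slot, and a consumer binds to it with the := operator, getting rebuilt whenever that sub-slot changes:

# in the library ebuild, e.g. for libfoo.so.2:
SLOT="0/2"

# in a consumer's ebuild:
RDEPEND="media-libs/foo:="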

I can agree that sub-slots have their uses. For example, xorg-server modules definitely benefit from them, and so may other cases which weren’t handled by preserved-libs already. For other cases the sub-slots are either not good enough (virtuals), redundant (regular libraries) or even broken (packages installing multiple libraries).

Aside from the xorg module benefit, I don’t see much use of sub-slots. On systems not having preserved-libs enabled, they may eventually remove the need for revdep-rebuild. On systems having preserved-libs, it can only result in needless or needlessly hurried rebuilds.

A short summary

So, we have two live solutions right now: one in preserved-libs, the other in sub-slots. The former addresses the issue of the system being broken mid-upgrade; the latter removes (partially?) the need for calling an external tool. The former allows you to rebuild the affected packages at any convenient time; the latter forces you to do it right away.

What really worries me is that people are so opposed to preserved-libs, and at the same time accept a partial, mis-designed work called sub-slots that easily. They then advertise it without thoroughly explaining how and when to use it, and what the problems with it are. And unnecessarily rebuilding something like webkit-gtk on a regular basis, for example, is a serious issue.

A particular result of that became visible when sub-slot support was introduced into app-text/poppler. That package installs a core library with a quite unstable ABI and a set of interface libraries with stable ABIs. External packages usually link with the latter.

When sub-slot support was enabled in poppler, all reverse dependencies were expected to use sub-slot matching. As a result, every poppler upgrade required needlessly rebuilding half of the system. The rev-deps were reverted, but this only made people try to extend the sub-slots into a more complex and even less maintainable idea.

Is this really what we all want? Does it benefit us? And why the heck did people reinvent library preservation in eclasses?!

With incrontab_t (hopefully) complete, let's look at the incrond_t domain. As this domain will also be used to execute the user (and system) commands provided through the incrontabs, we need to consider how we are going to deal with the wide range of permissions that it might need. One approach would be to make incrond_t quite powerful, extending its privileges as we go. But in my opinion, that's not a good way to deal with it.

Another would be to support a small set of permissions, and introduce an interface that other modules can use to create a transition when incrond_t executes a script properly labeled for a transition. For instance, a domain foo_t might have an executable type foo_exec_t. Most modules support an interface similar to foo_domtrans (and foo_role if roles are applicable as well), but that assumes the incron policy is modified every time a new target module is made available (since we would then need to add the proper *_domtrans rules to the incron policy). Instead, we might want to make this something that the foo SELinux module can decide.

It is that approach that we are going to take here. To do so, we will create a new interface called incron_entry, borrowing a bit from the cron_system_entry interface already in place for the regular cron domain (the following goes into incron.if):
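In essence (a sketch; the real interface should also carry the XML documentation block):

interface(`incron_entry',`
	gen_require(`
		type incrond_t;
	')

	domtrans_pattern(incrond_t, $2, $1)
')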

With this in place, the foo SELinux module can call incron_entry(foo_t, foo_exec_t) so that, the moment incrond_t executes a file with the label foo_exec_t, the resulting process will run in foo_t. I am going to test this (and I stress that it is only for testing) by calling incron_entry(system_cronjob_t, shell_exec_t), making every shell script that is called run in the system_cronjob_t domain (for instance in the localuser.te file that already assigned incron_role to the user_t domain).

So although incrond_t has search rights on the incron_spool_t directories (through read_files_pattern), we need to grant it list_dir_perms as well (which contains the read permission). As list_dir_perms contains search anyhow, we can just update the line to:
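allow incrond_t incron_spool_t:dir list_dir_perms;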

Those unix_dgram_sockets are here again. But seeing that cron.log is empty, and logging_send_syslog_msg is one of the interfaces that would enable them, we might want to do just that, so that we get more information about why incrond doesn't start properly. Also, it tries to write into var_run_t labeled directories, probably for its PID file, so we add in a proper file transition as well as manage rights:
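Roughly like this, assuming we declared a type named incrond_var_run_t for the PID file:

logging_send_syslog_msg(incrond_t)

manage_files_pattern(incrond_t, incrond_var_run_t, incrond_var_run_t)
files_pid_filetrans(incrond_t, incrond_var_run_t, file)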

What happens is that incrond reads the (user) incrontab, finds that it has to "watch" /home/user/test2, but fails because SELinux doesn't allow it to do so. We could just allow that, but we can do a bit better by looking into what we want it to do in a flexible manner… next time ;-)

GnuPG is an excellent tool for encryption and signing. However, while breaking encryption or forging signatures of large key size is likely somewhere between painful and impossible even for agencies on a significant budget, all this is always only as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here, but someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work against all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here, as a kind of guide. Also, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts, since that shop is referred to in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit; see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found is the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the contributors to the smartcard code in GnuPG. In general there are two types of readers: those with a stand-alone pinpad and those without. The extra pinpad ensures that for normal operations like signing and encryption, the PIN for unlocking the keys never enters the computer itself – so without tampering with the reader hardware it is pretty hard to sniff it. I bought an SCM SPR532 reader, one of the first devices ever supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now you'll want to activate the "smartcard" and maybe the "pkcs11" USE flag, and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new emerge.

Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, short CCID; the driver for that one is built directly into GnuPG, and the SCM SPR532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; that will be used by GnuPG if the built-in support fails, but it requires a daemon to be running (pcscd; just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.

These drivers do not need much (or any) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command

gpg --card-status

If it works, you should see a message about (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check then (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can now see your card, you know you have permission trouble.

Fiddling with the device file permissions was a serious pain, since all online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore, the other does not work alone and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account: I created the following file as /etc/udev/rules.d/gnupg-ccid.rules:
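The rule boils down to something like this (from memory; the USB IDs should be those of the SPR532 – check yours with lsusb – and OWNER is of course your own user):

SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="04e6", ATTR{idProduct}=="e003", OWNER="yourname"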

With similar settings it should in principle be possible to solve all the permission problems. (You may want to change the USB IDs and the OWNER for your needs.) Then, a quick

udevadm control --reload-rules

followed by unplugging and re-plugging the reader, and you should be able to check the contents of your card.

If you still have problems, check the following: for accessing the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after a card is removed. Just kill it (you need SIGKILL):

killall -9 scdaemon

and try accessing the card again afterwards; the daemon is re-started by GnuPG. A lot of improvements in smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.

Here's what a successful card-status command looks like on a blank card:
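Roughly like this, from memory (serial numbers masked, and the exact fields vary with the GnuPG version):

Application ID ...: D276000124010200xxxxxxxxxxxx0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: xxxxxxxx
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]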

This is part 2 of a tutorial on OpenPGP smartcard use with Gentoo. Part 1 can be found in an earlier blog post. This time we assume that you already have a smart card and a functioning reader, and we continue setting up the card. Then we'll make everything ready for use with GnuPG by setting up a key pair. As already stated, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better". All information here is provided as a best effort; however, I urge you to read up on your own. Even if you follow this guide to the last letter – if things break, it is your own responsibility.

Setting the AdminPIN and the PIN

OK, let's start. We insert a blank card into the card reader. The card should come with some paper documentation stating the initial values of the PIN and the AdminPIN – we will need these in a moment. Now we want to edit the card properties, which we can do with the command "gpg --card-edit".

gpg/card> help
quit       quit this menu
admin      show admin commands
help       show this help
list       list all available data
fetch      fetch the key specified in the card URL
passwd     menu to change or unblock the PIN
verify     verify the PIN and list all data
unblock    unblock the PIN using a Reset Code

This menu is not really that helpful yet. However, a lot more commands are hidden behind the "admin" keyword:

gpg/card> admin
Admin commands are allowed

gpg/card> help
quit       quit this menu
admin      show admin commands
help       show this help
list       list all available data
name       change card holder's name
url        change URL to retrieve key
fetch      fetch the key specified in the card URL
login      change the login name
lang       change the language preferences
sex        change card holder's sex
cafpr      change a CA fingerprint
forcesig   toggle the signature force PIN flag
generate   generate new keys
passwd     menu to change or unblock the PIN
verify     verify the PIN and list all data
unblock    unblock the PIN using a Reset Code

First of all we change the AdminPIN and the PIN from the manufacturer defaults to some nice random-looking values that only we know.
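The "passwd" command brings up a small menu for this; from memory, it looks about like this:

gpg/card> passwd
gpg: OpenPGP card no. D276000124010200xxxxxxxxxxxx0000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection?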

At this point a window from gpg-agent pops up (the same one as when it asks for a passphrase) and requests the old AdminPIN and, twice, the new AdminPIN. Make sure you remember the new AdminPIN or write it down somewhere safe. The AdminPIN allows you to change the card parameters (from the name of the cardholder to the stored keys and the PIN) and can be used to reset the PIN if you have forgotten it or mistyped it three times. However, if you mistype the AdminPIN three times, your card locks up completely and is basically trash. Note that changing the PINs cannot be done via a reader keypad yet.

"forcesig" toggles a flag inside the card that has been introduced because of German legislative requirements for some smartcard applications. Normally, once you have inserted the card into the reader, you enter the PIN once for unlocking e.g. the encryption or the signature key, and then the key remains open for the moment. If the signature PIN is "forced", you will have to reenter the PIN again each time you want to make a signature.

"generate" generates a RSA key pair directly on the card. This is the "high security option"; the generated private key will and can never leave the card, which enhances its security but also makes backups of the key impossible.

Which leaves the "reset code" to be explained. Imagine you are issued a card by e.g. your employer. The card will be preset with your name, login, and keys, and you should not be able to change these; so you will not know the AdminPIN. If you enter your user PIN wrong three times in a row, it is invalidated. Now the reset code can be used instead of the AdminPIN to reset the PIN. Basically this is the same functionality as the PUK for mobile phone SIM cards. The definitive source on all this functionality is the OpenPGP Card 2.0 specification.

Generating GnuPG keypairs

As mentioned in the beginning, there are many different ways to proceed. A keypair can be generated on the card or on the computer. Different types of keys, or parts of keys, can be uploaded to the card. I'm now presenting the following use case:

We generate the GnuPG keys not on the card but on the trusted computer, and then copy them to the card. This makes backups of the keys possible, and you can also upload them later to a second card should the first one accidentally drop into the document shredder.

We upload the whole key, not just the subkeys as described in some howtos. This makes it possible to access the entire GnuPG functionality from the card – decrypting, signing, and especially also certifying (i.e. signing keys). Of course this means that your primary key is on the card, too.

In general, before you generate a GnuPG keyset you may want to read up on GnuPG best practices; see e.g. this mailing list post by our Gentoo Infra team lead robbat2 for information and further pointers.

Enough talk. We use GnuPG to generate a 4096bit RSA primary key for signing and certifying, with a 4096bit RSA encryption subkey. Note that for all the following steps you need at least app-crypt/gnupg-2.0.19-r2 in Gentoo; I strongly recommend app-crypt/gnupg-2.0.20, since its smartcard handling has improved a lot.
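The dialog goes roughly like this (abridged, from memory):

$ gpg --gen-key
Please select what kind of key you want:
   (1) RSA and RSA (default)
   [...]
Your selection? 1
What keysize do you want? (2048) 4096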

Got it. Now we do something unusual – in addition to the sign/certify (SC) main key and the encryption (E) subkey, we add a second subkey, an authentication (A) key (for later on). We edit the just-generated key with the --expert option:
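Again roughly, from memory (the key ID is a placeholder): in expert mode, addkey offers an RSA key with self-chosen capabilities, where we toggle everything off except "Authenticate":

$ gpg --expert --edit-key 0xDEADBEEF
gpg> addkey
   [...]
   (8) RSA (set your own capabilities)
Your selection? 8
   [... toggle to: Current allowed actions: Authenticate ...]
gpg> save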

So it looks like the paranoid came to my last post about loyalty cards, complaining about the invasion of privacy that these cards come with. Maybe they expected the myth of the Free Software developer who's against all big corporations, who wants to be off the grid, and all that kind of stuff that comes to mind when you think of Stallman. Well, too bad, as I'm not like that. I still consider myself a left-winger, but a realist one, who cannot see how you can make workers happy by strangling the companies (the alternative to which is not, contrary to what most people seem to think, just accepting whatever the heck they want).

But first an important disclaimer. What I'm writing here is my personal opinion and in no way that of my employer. Even if my current employer could be considered involved in what I'm going to write, this is an opinion I have maintained for years — lu_zero can confirm it.

So, we've been told about the evil big brother of loyalty cards since I can remember, when I was still a little boy. They can track what you buy, they can profile you, and thus they will do bad things to you. But honestly, I don't see how that has happened at all. Yes, they can track what you buy; they might even profile you; but as for the evil things they do to you, I still have not heard of anything — and before you start with the Government (capital and evil G): if you don't trust your government, a loyalty card programme is the last thing you should be worried about.

Let's first have a look at the situation presented by the Irish Times article which I referred to in my first post on the topic. At least they have been close enough to reality: instead of going for Big Brother paranoia, they simply noted that marketeers will know about your life, although they do portray it as only negative.

Before long, he had come up with a list of 25 products which, if bought in certain amounts and in a certain sequence, allowed him to tell if a shopper was pregnant and when her due date was.

In his book, Duhigg tells the story of a man who goes into a branch of Target near Minneapolis. He is not happy as he wants to know why the retailer has suddenly started to send his high school-going daughter coupons for baby clothes and cribs. He asks the manager if the shop is trying to encourage very young girls, such as his daughter, to get pregnant.

The manager is bemused but promises to look into it, which he does. He finds that this girl had indeed been targeted with all manner of promos for baby products so he calls the father several days later to convey his apologies and his confusion.

That’s when the man tells him that when he raised the issue with his daughter, she told him she was pregnant. The retailer took a lot of flak when the details of its data mining emerged but the controversy blew over.

So first I would say I find it utterly ludicrous that sending coupons for "baby clothes and cribs" would "encourage very young girls […] to get pregnant". I would also suggest that if the girl is so young that it's scandalous that she could get pregnant, then it might indeed be too soon for her to have a loyalty card. In Italy, for instance, you have to be 18 before you can get a loyalty card for any program — why? Because you expect that a minor does not yet have an absolutely clear idea of how his or her choices are going to mold their future.

Then let's see what the privacy problem is here… if the coupons are sent by mail, one would expect that they are seen only by the addressee — if you have no expectation of privacy in personal mail, it's hard to blame that strongly on the loyalty programmes. In this case, if you count the profiling as a violation of the girl's privacy, then you would expect her father looking at the coupons to be a bigger invasion still. That would be like reading a diary. If you argue that the father has a right to know since she's a minor, I would answer that then she shouldn't have the card to begin with.

Then there is the (anonymous, it goes without saying) comment on my post, where they try to paint loyalty schemes in an even grimmer light, first by stating that data is sold to third-party companies at every turn… well, it turns out that's illegal in most of Europe if you don't provide a way for the customer to opt out of having their data sold. And it turns out that's one of the few things I do take care of, simply because I don't want junk mail from a bunch of companies I don't really care about. So using the "they'll sell your details" scare, to me, sounds like the usual bull.

Then it goes on to say that "Regularly purchasing alcohol and buying in the wrong neighbourhoods will certainly decrease your score to get loans." — well, so what? The scores are statistical estimates of the chance of recovering or defaulting on a loan, and I don't blame banks for trying to make them more accurate. And maybe it's because I don't drink, but I don't see a problem with profiling as an alcoholic a person who buys four kegs of beer a day — either that or they have a bar.

Another point brought up? A scare about data mining. Okay, the term sounds bad, but data mining in the end is just a way for businesses to get better at what they do. If you want to blame them for doing so, it's your call, but I think you're out of your mind. There are obvious bad cases of data mining, but that is not the default case. As Jo pointed out on Twitter, we "sell" our shopping habits to the store chains, and what we get back are discounts, coupons and the like. It's a tit-for-tat scenario, which to me is perfectly fine, and it applies to more than just loyalty card schemes.

Among others, this is why I have been blocking a number of web robots in my ModSecurity Ruleset — those that try to get data without giving anything back are, to me, just bad companies. If you want to get something, give something back.

And finally, the comment twice uses the phrase, taken straight from the conspiracy theorists' rulebook, "This is only the beginning". Sorry guys, you've been saying that this is the beginning for the past thirty years. I'm starting to think you're not smarter than me, just much more paranoid.

To sum it up, I'm honestly of the opinion that all the people in countries that are in effect free and democratic who complain about "invasion of privacy" are only complaining because they want to keep hiding their bad sides, be it bad habits, false statements, or previous errors. Myself, as you can see from this blog, I tend to be fairly open. There is very little I would be embarrassed by, probably only the fact that I do have a profile on a dating site, but even in that, well, I've been as honest as a person can be. Did I do something stupid in my past? Quite a few things, I think. On the other hand, I don't really care.

So, there you go, this is my personal opinion about all the paranoids who think that they have to live off the grid to be free. Unless you're in a country that is far from democratic, I'd just say you're a bunch of crybabies. As I said, places where your Government can't be trusted have much bigger problems than loyalty schemes or profiling.

My original post about loyalty cards missed the supermarkets that I actually use nowadays, because they are conveniently located just behind my building (one of them) and right on the way back home from my office (the other). Both of them are part of the EuroSpar chain, and they have the added convenience of being open 24/7 and 7-22 respectively.

So, when I originally asked the store whether they had a loyalty card, I was told they didn't. I checked the website anyway and found the name of their loyalty program, "SuperEasy", and the next time I asked about it explicitly; they gave me the card and a form to fill in. After filling in almost all of it, I found that I could also do it online, so I trashed the paper form; they can't get my name right anywhere here when I spell it anyway.

On the website, strangely enough, they even accepted my surname as it should be. Wow, that's a miracle, I thought… until I used the card at the shop and got back the bill that you see on the left. Yes, that's UTF-8 converted to some other 8-bit codepage which is not Latin-1; indeed it reminds me of CP850 from the MS-DOS days. Okay, I give up. But the funniest part was getting the bill tonight, the one on the right.

But besides them mangling my name in many different ways, is there anything that makes EuroSpar special enough for me to write a follow-up post on a topic that I don't really care about or, honestly, have experience in? Yes, of course. Compared with the various rewards I talked about last time, this seems to be mostly the same: one point per euro spent, and one cent per point redeemed.

The big difference here is that the points are accrued to the cent, rather than to the lower euro threshold! Not too shabby, considering that, unlike Dunnes, they do not round their prices to full euros most of the time. The other difference is that even though they have a single loyalty scheme for all the stores… the cards are per-store, or so they proclaim. The two here are probably owned by the same person, as they are actually linked and work in both.

Another interesting point is that while both EuroSpars host an Insomnia café, neither accepts Insomnia's own loyalty card (ZapaTag) — instead they offer something similar, in that you get the 10th drink free. A similar offer is present at the regular Insomnia shops, but there, while you can combine the 10th-drink offer with the ZapaTag points, you cannot combine it with other offers such as my usual coffee and brownie for €3,75 (the coffee alone is €3,25 while the brownie is €2,25)… at EuroSpar instead this is actually combinable, though of course if I use the free coffee while getting a brownie, I still have to pay almost as much as the coffee… but sometimes I can skip the pastry.

So yes, I think it was worth noting the differences with EuroSpar. And as a final note I'll just say that even the pharmacy on the way to work has a loyalty card… and it's the usual discount one, or as they call it a "PayBack Card". I have yet to see what Tesco does, but they somehow blacklisted my apartment from their delivery service.

I was very sceptical for a long time. Then, I slowly started to trust the kmail2/akonadi combination. I've been using it on my office desktop for a long time, and it works well and is very stable and fast there. (Might be related to the fact that the IMAP server is just across the lawn.) Some time ago, when I deemed things solid enough, I even upgraded my laptop again, despite earlier problems. In Gentoo, we've been keeping kdepim-4.4 around all the time, and as you may have read, internal discussions indeed led to the decision to finally drop it some time ago.

What happened in the meantime?

1) One of the more annoying bugs mentioned in my last blog post was fixed with some help from Kevin Kofler. Seems like Debian stumbled into the same issue long ago.

2) I was on vacation. Which was fun, but mostly unrelated to the issue at hand. None of my Gentoo colleagues went ahead with the removal in the meantime. A lot of e-mails accumulated in my account.

3) Coming back, I was on the train with my laptop, sorting the mail. The train was full, the onboard WLAN slightly overstressed, the 4G network just about more reliable. Network comes and goes, sometimes with a tunnel; no problem. Or so I thought.

4) Half an hour before arriving back home I realized that silently a large part of the e-mails that I had (I thought) moved (using kmail2-4.10.3 / akonadi-1.9.2) from one folder to another over ~3 hours had disappeared on one side, and not re-appeared on the other. Restarting kmail2 and akonadi did not help. A quick check of the webmail interface of my provider confirmed that the mails were gone from both folders on the IMAP server as well. &%(/&%(&/$/&%$§&/

I wasn't happy. Luckily there were daily server backup snapshots, and after a few days' delay I had all the documents back. Nevertheless... now I am considering what to do next. (Needless to say, in my opinion we should forget about dropping kmail1 in Gentoo for now.) Options...

a) migrate the laptop back to kmail1, which is way more resistant to dropped connections and flaky internet - doable, but takes a bit of time

b) install OfflineIMAP and Dovecot on the laptop, and let kmail2/akonadi access the localhost Dovecot server - probably the most elegant solution, but for the fact that OfflineIMAP seems to have trouble mirroring our Novell Groupwise IMAP server

c) another e-mail client? I've heard good things about trojita...

Summarizing... no idea yet how to go ahead, no good solution available. And I actually like the kdepim integration idea, so I'll never be the first one to completely migrate away from it! I am sincerely sorry for the sure fact that this post is disheartening to all the people who put a lot of effort into improving kmail2 and akonadi. It has become a huge lot better. However, I am just getting more and more convinced that the complexity of this combined system is too much to handle and that kmail should never have gone the akonadi way.

We've had CUPS 1.6 in the Gentoo portage tree for a while now. It has even been keyworded by most of the arches (hooray!), and from the bug reports quite some people use it. Sometime in the intermediate future we'll stabilize it; until then, however, quite some bugs still have to be resolved.

CUPS 1.6 brings changes. The move to Apple has messed up the project priorities, and backward compatibility was kicked out of the window with a bang. As I've already detailed in a short previous blog post, per se, CUPS 1.6 does not "talk" the printer browsing protocol of previous versions anymore but solely relies on zeroconf (which is implemented in Gentoo by net-dns/avahi). Some other features were dropped as well...

Luckily, CUPS was and is open source, and the fact that the people at Apple removed the code from the main CUPS distribution did not mean that it was actually gone. In the end, all these features just made their way from the main CUPS package to a new package net-print/cups-filters maintained at The Linux Foundation. There, the code is evolving fast, bugs are fixed and features are introduced. Even network browsing with the CUPS-1.5 protocol has been restored by now; cups-filters includes a daemon called cups-browsed which can generate print queues on the fly and accepts configuration directives similar to CUPS-1.5 (see the sketch at the end of this post). As far as we in Gentoo (and any other Linux distribution) are concerned, we can get along without zeroconf just fine.

The main thing hindering CUPS-1.6 stabilization at the moment is that the CUPS website is down, kind of. Their server had a hardware failure, and for nearly a month (!!!) only minimal, static pages have been up. In particular, what's missing is the CUPS bugtracker (no, I won't sign up for an Apple ID to submit CUPS bugs) and access to the Subversion repository of the source. (Remind me to git-svn clone the code history as soon as it's back and push it to gitorious.)

So... feel free to try out CUPS-1.6; testing and submitting bugs for sure helps. However, it may take some time to get these fixed...
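For those who want the old-style browsing back, the cups-browsed side boils down to a couple of directives in cups-browsed.conf; a minimal sketch (directive names as documented by cups-filters, the print server host is a placeholder):

# accept legacy CUPS broadcasts instead of (or besides) zeroconf
BrowseRemoteProtocols cups

# and/or actively poll a CUPS server for its queues
BrowsePoll printserver.example.com:631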

It is a common request with squid to have it block downloads of certain files based on their extension in the URL path. A quick look at Google's results on the subject apparently gives us an easy way to get this done in squid.

The common solution is to create an ACL file listing regular expressions of the extensions you want to block and then apply this to your http_access rules.

blockExtensions.acl

\.exe$

squid.conf
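What goes into squid.conf is then just an ACL referencing that file plus a deny rule; a minimal sketch (the acl name and the file location are assumptions):

acl blockExtensions urlpath_regex -i "/etc/squid/blockExtensions.acl"
http_access deny blockExtensions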

Unfortunately this is not enough to prevent users from downloading .exe files. The mistake here is assuming that the URL will strictly end with the extension we want to block; consider the two examples below:

http://download.com/badass.exe // will be DENIED as expected
http://download.com/badass.exe? // WON'T be denied as it does not match the regex !

Squid uses the extended regex processor, which is the same as egrep. So we need to change our blockExtensions.acl file to handle the possible ?whatever string which may be trailing our url_path. Here's the solution to handle all the cases:

blockExtensions.acl

\.exe(\?.*)?$
\.msi(\?.*)?$
\.msu(\?.*)?$
\.torrent(\?.*)?$

You will still be hated for limiting people's need to download and install shit on their Windows boxes, but you implemented it the right way, and no script kiddie can brag about bypassing you.

Okay, so now it's been over a month that I've been staying in Dublin, and actually over a month at my new job, and it is shaping up as a very good new experience for me. But even more than the job, the new experiences come from having an apartment. Last year I was living within the office where I was working, and before that I had been living with my mother, so finally having a place of my own is a new world entirely. Well, I'll admit it: only partially.

Even though I've been living with my mother, like the stereotype of Italian guys suggests, it's not like I've been a parasite. Indeed, I've been paying all the bills for the past four years, and I'm still paying them from here. I've also been doing my share of grocery shopping, cleaning and maintenance tasks, though I did avoid the washing machine most of the time. So yeah, it wasn't a complete revolution in my life, but it was a partial one. Right now I do feel slightly worse for wear, especially because I had a very bad experience with the kitchen, which was not cleaned before I moved in.

Thankfully, Ikea exists everywhere. And their plastic mats for drawers and cabinets are a lifesaver. Too bad I already finished the roll and I haven't completed half the kitchen yet. I think I'll go back to Ikea in two weeks (not next week, because my sister's visiting). With this I have now bought the same identical lamp three times: originally in Italy, then again in Los Angeles, and now in Dublin — the only difference is that the American version has a loop to orient it, probably because health and safety does not trust people to have enough common sense not to touch the hot cone…

The bottom line is that I'm very happy about having moved to Dublin. I love the place, and I love the people. My new job is also quite interesting, even if not as open-source focused as my previous ones (which does not mean it is completely out of the way of open source anyway), and the colleagues are terrific… hey, some even read my blog before; thanks guys!

While settling down took most of my time and left me none for real Gentoo contributions or blogging (luckily Sven seems to have taken my place on Planet Gentoo), things are getting much better (among other things I finally have a desk in the apartment, and tomorrow I'm going to get a TV as well, which I know will boost my ability to keep the house clean — because it won't require me to stick to the monitor to watch something). So expect more presence from me soon enough!

A few years ago, I gave a history of the 2.6.32 stable kernel, and
mentioned the previous stable kernels as well. I'd like to apologize for not
acknowledging the work of Adrian Bunk in maintaining the 2.6.16 stable kernel
for 2 years after I gave up on it, allowing it to be used by many people for a
very long time.

I've updated the previous post with this information in it at the bottom, for
the archives. Again, many apologies, I never meant to ignore the work of this
developer.

It's been so long since I switched to film-only photography that I decided a few months ago to sell all my digital equipment. I already own a Nikon FM2 camera, which I love, but I have to admit that I was and still am totally amazed by the pictures taken by my girlfriend's Rolleiflex 3.5F. Medium format gives the kind of rendering I was craving, and I knew that sooner or later I'd step into the medium format world. Well, I didn't have to wait long: when we were in Tokyo to celebrate new year 2013, I fell in love with what was the perfect match between my love for wide angles and medium format film photography: the Fujifilm GF670W!

For my soon-to-come birthday, I got myself my new toy in advance so I could use it on my upcoming roadtrip around France (I'll talk about it soon, it was awesome). Oddly, the only places in the world where you can get this camera are the UK and Japan, so I bought it from the very nice guys at Dale photographic. Here is the beast (literally):

Yes, this is a big camera, and it comes with a very nice leather case and a lens hood. It is a rangefinder camera with a comfortable viewfinder; it accepts 120 and 220 films and is capable of shooting in standard 6×6 and 6×7!

In the medium format world, the 55mm lens is actually a wide angle, as it is comparable to a 28mm in the usual 24×36 world. Its specs are not crazy on paper, with an f/4.5 maximum aperture and a shutter speed going from 4s to 1/500s (as fast as a 1956 Rolleiflex), but the quality is just stunning: it's sharp and shows almost nonexistent chromatic aberration.

Want proof? These are some shots from my first roll, uploaded at full resolution:

This Monday I was, for the first time, a guest and speaker at the (contrary to its name) local Czech conference Europen. It was an interesting experience, and I would like to share a bit of what I experienced. What made it different from the conferences I usually speak at was the audience: not many Linux guys and quite some Windows guys. I was told that this conference is for various IT professionals and people from academia interested in Open Source.

I was asked to speak there about something techy, low-level, generic, and not SUSE-only. I offered an OBS and Studio introduction, as these are the crown jewels of the openSUSE environment, but I was told that they would prefer something more generic and a little bit more hardcore. So in the end I decided to speak about packaging, as that is something I have been doing for a long time. And to make it neither a workshop nor a SUSE-specific talk, I put in two more packaging systems that I have worked with apart from rpm: Portage (from Gentoo) and BitBake (from OpenEmbedded).

Whenever I visit an open source event in the Czech Republic, I always already know quite some people there. I know the most prominent people from Linux magazines, other distributions and some other big open source enthusiasts. At this conference, I knew something like six attendees in total (and all of them were there to give a talk and not sure what to expect from the audience). Almost everybody was running MS Windows, with a few MacOS exceptions. Really quite a different world.

As I said, in the end I spoke about why we package software in Linux and how we do it. I spoke about rpm and spec files, and about Portage and BitBake, showing how nice it is to have inheritance. And at the end I put in a part about how great OBS is anyway.

From the almost full day I was at the conference, the LibUCW library got the most questions and feedback; Martin Mareš gave an amazing presentation and he had a really interesting topic. LibUCW is cool. If I find some free time, I'll write something about it separately. Otherwise the audience was quite calm and quiet. For my presentation, I got a question about cross-compilation of rpms, so after the talk I could recommend OBS once more.

It was definitely an interesting experience, as these people were mostly out of our usual scope. If you are interested in browsing the slides, you can; the sources are on my github, but they contain quite some pages of example recipes that I was commenting on the spot.

With the filters for X-Spam-Status and X-Spam-Level you will avoid the majority of the incoming spam.
Some mails that do not have any spam flag contain subjects like viagra, cialis (which I absolutely don't need), rolex and scount.
Yes, I could use the (c|C)ase syntax, but I had problems with it, so I prefer to write the rules twice instead of having any sort of trouble.
Note: with this email address I'm not subscribed to any newsletter or any sort of offers/catalogs, so I filtered scount, markerting, money.
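For example, the doubled rules for one keyword look something like this (a sketch; matching on the Subject header and discarding to /dev/null are assumptions about the original setup):

:0
* ^Subject:.*viagra
/dev/null

:0
* ^Subject:.*Viagra
/dev/null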

Sometimes I receive mails from people that are not spammers but still carry the X-Spam-Level flag with one star, so I decided to move these emails into a folder; they will be double-checked with the naked eye:

:0:
* ^X-Spam-Level: \*
/home/ago/.maildir/.INBOX.pspam/

To avoid confusion I always prefer to use a complete path here.

After a stabilization you will always see the annoying mails from bugzilla whose subject contains ${arch} stable, so if you want to drop them:
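Something along these lines should do (a sketch; the exact subject pattern is an assumption):

:0
* ^Subject:.*(amd64|x86) stable
/dev/null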

And so on….
These hints obviously are valid on all postfix-based mailservers; if you are using e.g. qmail, you may need to move the .procmailrc, but the rest is still valid.
I hope this helps.

EDIT:
If you need a particular set of rules, you can write it by taking a look at the source/headers of the message. If, for example, I don't want to see the mails from bugzilla for the bugs that I reported:
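A sketch of such a rule, keyed on the X-Bugzilla-Reason header that bugzilla adds to its outgoing mail (the target folder is a placeholder):

:0:
* ^X-Bugzilla-Reason: Reporter
/home/ago/.maildir/.INBOX.reported/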

Lab::Measurement 3.11 has been uploaded to CPAN. This is a minor maintenance release, with small bug fixes in the voltage source handling (gate protect and sweep functionality) and the Yokogawa drivers (output voltage range settings).

The bug with svn2git 1.0.8 was a regression that broke support for (non-ASCII) UTF-8 author names in identity maps. That’s fixed in dev-vcs/svn2git-1.0.8-r1 in Gentoo. I sent the patch upstream and to the Debian package maintainer, too.

For svneverever, a directory that re-appeared after deletion was reported to only live once, e.g. the output was

Jos wrote a blog
post yesterday commenting on the complexity of the PIM problem. He raises an
interesting concern about whether we would all be better off if there were no
Trojitá and I had just improved KMail instead. As usual, the matter is more
complicated than it might seem at first sight.

Executive Summary: I tried working with KDEPIM. The KDEPIM IMAP stack
required a total rewrite in order to be useful. At the time I started, Akonadi
did not exist. The rewrite has been done, and Trojitá is the result. It is up
to the Akonadi developers to use Trojitá's IMAP implementation if they are
interested; it is modular enough.

People might wonder why Trojitá exists at all. I started working on it
because I wasn't happy with how the mail clients performed back in 2006. The
supported features were severely limited, the speed was horrible. After
studying the IMAP protocol, it became obvious that the reason for this slowness
is the rather stupid way in which the contemporary clients treated the remote
mail store. Yes, it's really a very dumb idea to load tens of thousands
of messages when opening a mailbox for the first time. Nope, it does not make
sense to block the GUI until you fetch that 15MB mail over a slow and capped
cell phone connection. Yes, you can do better with IMAP, and the possibility
has been there for years. The problem is that the clients were not
using the IMAP protocol in an efficient manner.

It is not easy to retrofit a decent IMAP support into an existing client.
There could be numerous code paths which just assume that everything happens
synchronously and block the GUI when the data are stuck on the wire for some
reason. Doing this properly, fetching just the required data and doing all
that in an asynchronous manner is not easy -- but it's doable nonetheless. It
requires huge changes to the overall architecture of the legacy applications,
however.

Give Trojitá a try now
and see how fast it is. I'm serious here -- Trojitá opens a mailbox with tens
of thousands of messages in a fraction of a second. Try to open a big e-mail with
vacation pictures from your relatives over a slow link -- you will see the
important textual part pop up immediately with the images being loaded in the
background, not disturbing your work. Now try to do the same in your favorite
e-mail client -- if it's as fast as Trojitá, congratulations. If not, perhaps
you should switch.

Right now, the IMAP support in Trojitá is way more advanced than what is
shipped in Geary or KDE PIM -- and it is this solid foundation which leads to
Trojitá's performance. What needs work now is polishing the GUI and making it
play well with the rest of a user's system. I don't care whether this
polishing means improving Trojitá's GUI iteratively or whether its IMAP
support gets used as a library in, say, KMail -- both would be very successful
outcomes. It would be terrific to somehow combine the nice, polished UI of
the more established e-mail clients with the IMAP engine from Trojitá. There
is a GSoC proposal for integrating Trojitá into KDE's Kontact -- but for it to
succeed, people from other projects must get involved as well. I have put
seven years of my time into making the IMAP support rock; I would not be able
to achieve the same if I was improving KMail instead. I don't need a
fast KMail, I need a great e-mail client. Trojitá works well enough
for me.

Oh, and there's also a currently running fundraiser
for better address book integration in Trojitá. We are not asking for
$ 100k, we are asking for $ 199. Let's see how many people are willing
to put their money where their mouth is and actually do something to help
PIM on a free desktop. Patches and donations are both equally welcome.
Actually, not really -- great patches are much more appreciated. Because Jos
is right -- it takes a lot of work to produce great software, and things get
better when there are more people working towards their common goal
together.

Update: it looks like my choice of crowdfunding platform was rather
poor; catincan apparently doesn't accept PayPal :(. There's the possibility of
direct donations over
SourceForge/PayPal -- please keep in mind that these will be charged even
if fewer donors pledge to the idea.

For a long time I have found it a pain that every time I keyword a package, repoman reports failures (mostly dependency.bad) that do not concern the arch I'm changing.
So, checking the repoman manual, I realized that --ignore-arches looks wrong for my case, and I decided to request a new feature: --include-arches.
This feature, as explained in the bug, checks only the arches that you pass as an argument, and should be used only when you are keywording/stabilizing.

Some examples/usage:

First, it saves time. The following example runs repoman full in the kdelibs directory:

$ time repoman full > /dev/null 2>&1
real 0m12.434s
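Once the feature is in place, restricting the check to the arches being touched would look something like this (a hypothetical invocation following the syntax requested in the bug):

$ time repoman full --include-arches="amd64 x86" > /dev/null 2>&1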

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a type installation at Amar Jyoti School, an institute for the visually impaired in Delhi. You can read more about that in her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find were expensive proprietary software or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

If you’re a university student, time is running out! You could get paid to hack on Gentoo or other open-source software this summer, but you’ve gotta act now. The deadline to apply for the Google Summer of Code is this Friday.

If this sounds like your dream come true, you can find some Gentoo project ideas here and Gentoo’s GSoC homepage here. For non-Gentoo projects, you can scan through the GSoC website to find the details.

The USB port doesn't have working hotplug detection. That means that if you plug a USB device into the USB port, it will only be detected once; if you remove the USB device, the USB port will stop working. I've been told that they are working on it. I haven't been able to find a workaround for it.

The BeagleBone Black doesn't detect a microSD card that is plugged in after it has booted from the eMMC. If you want to use a microSD card for additional storage, it must be inserted before the board boots.

I’d like to thank the people at Beagleboard.org for providing me a Beaglebone Black to document this.

mongoDB 2.4.3

Yet another bugfix release; this new stable branch is surely one of the most quickly iterated I've ever seen. I guess we'll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

Compared to most people around me now, and probably most of the people who read my blog, my life is not that extraordinary in terms of travel and moving around. I was, after all, scared of planes for years, and it wasn't until last year that I got out of the continent — in a year, though, I more than doubled the number of flights I've been on, with 18 last year, and more than doubled the number of countries I've been to, counting Luxembourg even though I only landed there and got on a bus back to Brussels after Alitalia screwed up.

On the other hand, compared to most of the people I know in Italy, I've been going around quite a bit, as I spent a considerable amount of time last year in Los Angeles, and I've now moved to Dublin, Ireland. And there are quite a few differences between these places and Italy. I've already written a bit about the differences I found during my time in the USA, but this time I want to focus on something which is quite trivial, yet still a remarkable difference between the three countries I have got to know so far. As the title suggests, I'm referring to stores' loyalty cards.

Interestingly enough, there was just this week an article in the Irish Times about the "privacy invasion" of loyalty cards… I honestly don't see it as big a deal as many others do. Yes, they do profile your shopping habits. Yes, if you do not keep private the kind of offers they send you, they might tell others something about you as well — the newspaper actually brought up the example of a father who discovered his daughter's pregnancy because of the kind of coupons the supermarket was sending, based on her change of spending habits; I'm sorry but I cannot really feel bad about it. After all, absolute privacy and relevant offers are at opposite ends of a range… and I'm usually happy enough when companies are relevant to me.

So of course stores want to know the habits of a single person, or of a single household, and for that they give you loyalty cards… but for you to use them, they have to give you something in return, don’t they? This is where the big difference on this topic appears clearly, if you look at the three countries:

in both Italy and Ireland, you get “points” with your shopping; in the USA, instead, the card gives you immediate discounts; I’m pretty sure that this gives not-really-regular-shoppers a good reason to get the card as well: you can easily save a few dollars on a single grocery run by getting the loyalty card at the till;

in Italy you redeem the points to get prizes – this works not so differently than with airlines after all – sometimes by adding a contribution, sometimes for free; in my experience the contribution is never worth it, so either you get something for free or just forget about it;

in Ireland I still haven't seen a single prize system; instead they work with coupons: you get a certain number of points for each euro you spend (usually one point per euro), and when you reach a certain number of points they acquire a value (usually one cent per point), and a coupon redeemable for that value is sent to you.

Of course, the "European" method (so called only by contrast with the American one, since I don't know what other countries do) is a real loyalty scheme: you need a critical mass of points for them to be useful, which means that you'll try to shop at the same store as much as you can. This is true for airlines as well, after all. On the other hand, people who shop occasionally are less likely to request the card at all, so even if there is some kind of data to be found in their shopping trends, they will be completely ignored by this kind of scheme.

I'm honestly not sure which method I prefer. At this point I still have one or two loyalty cards from my time in Los Angeles, and I'm now collecting a number of loyalty cards here in Dublin. Some are definitely a good choice for me, like the Insomnia card (I love getting coffee at a decent place where I can spend time reading, on weekends); others, like Dunnes, make me wonder… the distance from the supermarket to where I'm going to live most likely offsets the usefulness of their coupons compared to the (otherwise quite more expensive) Spar at the corner.

At any rate, I just want to write my take on the topic, which is definitely not of interest to most of you…

Recently, I have been toying around with GateOne, a web-based SSH
client/terminal emulator. However, installing it on my server proved to be a
bit challenging: it requires Tornado as a webserver and uses websockets, while
I already have an Apache 2.2 instance running with a few sites on it (and my
authentication system configured to my tastes).

So, I looked at how to configure a reverse proxy for GateOne, but websockets
were not officially supported by Apache... until recently! Jim Jagielski added
the proxy_wstunnel module in trunk a few weeks ago. From what I have seen on
the mailing list, backporting to 2.4 is easy (and was suggested as an official
backport), but 2.2 required a few additional changes to the original patch
(and current upstream trunk).

Basically, the new submodule adds the 'ws' and 'wss' schemes to the allowed
protocols between the client and the backend, so you tell Apache that it will
be talking 'ws' with the backend (the same way ajp://whatever says that httpd
will be talking ajp to the backend).

After having had a quite traumatic experience with a customer's service running on one of my virtual servers last November, I made sure to have a very thorough backup for all my systems. Unfortunately, it turned out to be a bit too thorough, so let me explore with you what was going on.

First of all, the software I use to run the backups is tarsnap — you might have heard of it or not, but it's basically a very smart service that uses an open-source client based upon libarchive, and a server system that stores the content (de-duplicated, compressed and encrypted with a very flexible key system). The author is a FreeBSD developer, and he charges an insanely small amount of money.

But the most important thing to know when you use tarsnap is that you just always create a new archive: it doesn't really matter what you changed, just get everything together, and it will automatically de-duplicate the content that didn't change, so why bother? My first dumb method of backups, which is still running as of this time, is simply, every two hours, to dump a copy of the databases (one server runs PostgreSQL, the other MySQL — I no longer run MongoDB, though I start to wonder about it, honestly), and then use tarsnap to generate an archive of the whole /etc, /var and a few more places where important stuff lives. The archive is named after the date and time of the snapshot. And I haven't deleted any snapshot since I started, for most servers.
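The two-hourly job boils down to something like this (a sketch under my assumptions about paths and the database; the naming scheme is illustrative):

#!/bin/sh
# dump the database so it is included in the snapshot
pg_dumpall -U postgres > /var/backups/pgsql.sql

# create a new tarsnap archive named after date and time;
# unchanged content is de-duplicated server-side anyway
tarsnap -c -f "$(hostname)-$(date +%Y%m%d-%H%M)" /etc /var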

It was a mistake.

The moment I went to recover the data out of earhart (the host that still hosts this blog, a customer's app, and a couple more sites, like the assets for the blog and even Autotools Mythbuster — though all the static content, as it's managed by git, is now also mirrored and served active-active from another server called pasteur), the time it took to extract the backup was unsustainable. The reason was obvious when I thought about it: since it had been de-duplicating for almost a year, it had to scan hundreds if not thousands of archives to get all the small bits and pieces.

I still haven't replaced this backup system, which is very bad of me, especially since it takes a long time to delete the older archives even after extracting them. On the other hand it's also very much a matter of tradeoffs in expenses, as going through all the older archives to remove the old crap drained my tarsnap credits quickly. Since the data is de-duplicated and encrypted, the archives' data needs to be downloaded to be decrypted before it can be deleted.

My next plan is to set it up so that the script is executed in different modes: 24 archives over 48 hours (every two hours), 14 over 14 days (daily), and 8 over two months (weekly). The problem is actually doing the rotation properly with a script, but I'll probably publish a Puppet module to take care of that, since that's the easiest way for me to make sure it executes as intended.
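A rotation along those lines could be scripted roughly as follows (a sketch, not the Puppet module mentioned above; it assumes archives are created with a tier prefix like daily-YYYYMMDD-HHMM, the retention counts are illustrative, and head -n -N needs GNU coreutils):

#!/bin/sh
# keep only the newest $2 archives whose names start with "$1-";
# timestamped names sort chronologically, so the tail is the newest
prune() {
    tarsnap --list-archives | grep "^$1-" | sort | head -n -"$2" |
    while read -r archive; do
        tarsnap -d -f "$archive"
    done
}

prune two-hourly 24   # 48 hours' worth
prune daily 14        # two weeks' worth
prune weekly 8        # two months' worth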

The essence of this post is basically to warn you all that, no matter how cheap it is to keep around the whole set of backups since the beginning of time, it's still a good idea to rotate them… especially for content that does not change that often! Think about it whenever you set up any kind of backup strategy…

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or I have an interest in to highlight their key features or bug fixes.