You don’t know who Karen Sandler is? Typical GNOME character. That’s just someone who never achieved anything related to computing but has been selected to be some sort of speaker nonetheless. I’m not saying that only people who produced something that actually serves or served a purpose are entitled to speak. But to put someone in a position of “director” or whatever, at some point there should be some knowledge, abilities, even just ideas, that make the person stand out enough to be entitled to represent or lead the others.

So what could she speak of? About bad management?

More like, on GNOME.org: “Announcing her departure, Karen said: “Working as the GNOME Foundation Executive Director has been one of the highlights of my career.” She also spoke of the achievements during her time as Executive Director: “I’ve helped to recruit two new advisory board members… and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time.””

Typical GNOME bullshit? Indeed: pompous titles, bragging, claiming. “Successful funding campaigns”? Seriously? “Amazing work”. “Mind-blowing”. It’s sad for the few GNOME developers that are worth it, because the main thing is a fucking joke. It’s just empty words, with no damn facts that matter that are even slightly true.

Here is her full statement: “I think I have made some important contributions to the project while I have been Executive Director. I’ve helped to recruit two new advisory board members, and we recently received a one time donation of considerable size (the donor did not want to be identified). Financially the Foundation is in good shape, and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy and accessibility. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time. I hope that I have helped us to get in touch with our values during my time as ED, and I think that GNOME is more aware of its guiding mission than ever before.”

Yes, you can skip the fact that she considers recruiting advisory board members an achievement (!!!). It seems that she thinks a Foundation should focus on itself and not on the project it derives from; she does not even for a second mention anything that the software project GNOME would directly benefit from.

GNOME.org quoted her putting three dots where “Financially the Foundation is in good shape” used to be - and this just one week before we’re told they are definitely not.

She’s right on one thing though: now GNOME is definitely “more aware of its guiding mission than ever before”, since they are forced to cut all unnecessary expenses like the ones she promoted.

So I finally got an Android-based phone. I thought of waiting for the Ubuntu/Firefox stuff to be released, but my current one (Bada-based: never ever) died.

First, I learned that you actually need to lock your phone to a Google account for life. It just confirmed that the sane and proper first step with this thing is to remove anything linked to Google.

The first place to go is F-Droid. From there, instead of getting tons of shitty freeware from Google Play/Apps/whatever, you get Free Software - free as in freedom, even though I like free beer too.

Using ownCloud? From F-Droid, get DavDroid. Yes, it works perfectly and is easy to set up, unlike the DAV-related crap on Google Apps. The only thing you have to take care of, if your SSL certificate (a trendy topic these days) is self-signed, is to make the certificate the specific way Android accepts them. For now, they recommend a specific recipe.
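Not their exact recipe, but a sketch of the idea with openssl (the host name is a placeholder, and the CA:true extension is what Android’s certificate import reportedly expects from self-signed certificates):

# craft a self-signed certificate Android can import as a user credential
# (placeholder names; -addext requires a reasonably recent OpenSSL)
openssl req -x509 -newkey rsa:2048 -sha256 -days 730 -nodes \
  -keyout cloud.key -out cloud.crt \
  -subj "/CN=cloud.example.org" \
  -addext "basicConstraints=critical,CA:true" \
  -addext "subjectAltName=DNS:cloud.example.org"
# then copy cloud.crt to the phone and import it from
# Settings > Security > Install from storage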

dual IMAPs servers:

Having your own server handling your mail is enabling - you can implement anti-spam policies harsh enough to be incredibly effective, set up temporary catch-all addresses, etc. It does not even require much maintenance these days; it just takes a little time to set up.

One drawback, though, is that if your host is down, or simply its link, then you are virtually unreachable. So you want a backup server. The straightforward solution is a backup that simply forwards everything to the main server as soon as possible. But having a backup server that is a replica of the main server allows you to use one or the other indifferently, and to always have one up at hand.

In my case, I run exim along with dovecot. So once the exim setup is replicated, it’s only a matter of making sure to have a proper dovecot setup (in my case mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX and mail_privileged_group = mail set in /etc/dovecot/conf.d/10-mail.conf, along with ssl = required in /etc/dovecot/conf.d/10-ssl.conf - you obviously need to create a certificate for IMAPS, named as described in said 10-ssl.conf, but that’s not the topic here; you can use plain IMAP if you wish).
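Spelled out, that dovecot setup amounts to:

# /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX
mail_privileged_group = mail

# /etc/dovecot/conf.d/10-ssl.conf
ssl = required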

Then, for each user account (assuming we’re talking about a low number of them), it’s as simple as making sure passphrase-less SSH access works from one of the hosts to the other and adding a cronjob.
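For instance, a sketch using a recent dovecot’s doveadm sync (the user name and backup.example.org are placeholders; run it as root, or drop the -u options to sync your own account):

# mirror the maildir to the other host every 15 minutes
*/15 * * * * doveadm sync -u user ssh backup.example.org doveadm dsync-server -u user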

Once you’ve checked that you can properly log in on both IMAPS servers, it’s just a matter of configuring your mail clients.

and many mail clients:

I use the roundcube webmail whenever I have no access to a decent system with a proper mail client (kmail, gnus, etc.) configured. With two IMAPS servers, there’s no reason not to have the same webmail setup on both.

The only annoying thing is not having a common address book. It’s possible to replicate the roundcube database, but it’s even better to have a cloud to share the address book with any client, instead of doing some roundcube-specific crap. So I went for the option of installing ownCloud on one of the hosts (so far I haven’t decided whether there is a point in replicating the cloud too; it seems a bit overkill to replicate data that is already some sort of backup or replica), which was pretty straightforward since I already have nginx and php-fcgi running. And then it was just a matter of plugging roundcube into ownCloud through CardDAV.
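For reference, a sketch of that last step, assuming the RCMCardDAV roundcube plugin and the ownCloud CardDAV endpoint of that era (the URL and field values are to be adjusted to your install):

# plugins/carddav/config.inc.php - %u/%p expand to the roundcube login
# and password, so each user gets his own ownCloud address book
$prefs['ownCloud'] = array(
    'name'     => 'ownCloud',
    'username' => '%u',
    'password' => '%p',
    'url'      => 'https://cloud.example.org/remote.php/carddav/addressbooks/%u/contacts',
    'active'   => true,
);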

The only thing missing so far is the replication of your own identities - I haven’t found anything clear about that, but I haven’t looked into it seriously. I guess it’s possible to put ~/.kde/share/config/emailidentities on the cloud, or to use it to extract the identities as vcards, but I’m not sure a dirty hack is worth it. It’s a pity that identities are not part of the addressbook.

(The alternative I was contemplating before was to use kolab; I needed ownCloud for other matters so I went for this option but I keep kolab in mind nonetheless)

Hi there! I’ve just released SeeYouLater 1.2 (it fetches a list of IPs of known spammers and bans them by putting them in /etc/hosts.deny). It now includes seeyoulater-httpsharer, which makes it possible to share the ban list over HTTP instead of authenticated MySQL. That’s useful for distant hosts with an unreliable link to each other, or to avoid having MySQL listening on public ports.

Caching debian/etc (apt) repositories on your local server with nginx and dsniff

It’s quite easy to set up a debian mirror. But having a full mirror on a local server is rather overkill in a scenario where you simply have, say, 3 boxes running Debian testing amd64, 1 box running the same on i686 and 2 other boxes on Ubuntu. It’s more caching than mirroring that you’ll want, as transparently (with no client-side setup) as possible.
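The trick is the same as for the Steam cache described later on this page: spoof the mirror hostname with dnsspoof so the clients transparently hit the local server, and let nginx store whatever it fetches. A sketch, with placeholder IPs and ftp.debian.org as the mirror the clients are configured to use:

upstream debian-upstream {
    # hypothetical: the real mirror IP, resolved from a non-spoofed DNS
    server 203.0.113.20;
}

server {
    listen 10.0.0.1:80;          # the local server intranet IP
    server_name ftp.debian.org;  # the mirror hostname spoofed to us
    root /srv/www/apt;

    location / {
        # serve from the local cache if present, otherwise fetch and store
        try_files $uri @fetch;
    }

    location @fetch {
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
        proxy_set_header Host $host;
        proxy_pass http://debian-upstream;
    }
}

One caveat: the dists/ metadata (Release, Packages) changes over time and would go stale in such a cache, so it needs to be expired separately, for instance with a daily cron removing dists/ from /srv/www/apt.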

The install was made over the network. There’s nothing overly complicated, but to avoid wasting time, it’s always good to properly RTFM. For instance, I learned too late that kFreeBSD does not handle a / partition set on a logical one. I did not understand exactly why, but I had to put my / partition on ufs (ext2 for /home was ok, though). I did not even get into ZFS, as it looks like it’s not recommended with a simple i686 CPU. It took me a while, and I found no way, to get my NFS4 partitions mounted as usual from /etc/fstab, or even with mount; I had to add a dirty call to /sbin/mount_nfs -o nfsv4 gate:/all /path in /etc/rc.local. And when it came to Xorg, I found the mouse to be sometimes working, sometimes not, with plenty of overly complicated and confusing info on the web, to finally come up with a working /etc/X11/xorg.conf containing only Section “ServerFlags” Option “AutoAddDevices” “False” EndSection (on three lines).
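That is, the whole xorg.conf spelled out:

Section "ServerFlags"
	Option "AutoAddDevices" "False"
EndSection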

These are little inconveniences that you would not expect with a recent GNU/Linux system install, and that the debian-installer does not in any way prevent you from hitting or creating. I’m not even sure that I found the best fixes for them. It feels a bit like installing RedHat 5.2, which is more than what I actually expected.

Years ago, I was using gnus to read my mail: among other things, I liked the fact that it was, by default, as expected from a newsreader, only showing unread messages and properly expiring old messages after some time. Then, using KDE, at some point I switched to Kmail because of its nice integration within the desktop environment. Obviously I had to configure it to remove old mails (expire them) in a similar fashion.

Then Kmail2 arrived. I’m not able to use this thing. It either does not start at all, or starts overly slowly and uses up 100% of the CPU time for minutes, whatever computer I’m using, whether it’s an old regular P4 or an Athlon II X4, whether I have 1GB of RAM or 8. I gather it’s related to akonadi/nepomuk/whatever, stuff supposed to improve your user experience with fast search and so on. Fact is, it’s unusable on any of my computers. So these days I end up using the Roundcube webmail, which is not that bad, but it makes me wonder whether it’s worth waiting for Kmail2 to be fixed and, worse, it leaves me with IMAPS folders full of thousands of expired messages that should be removed.

So this led me to consider doing the expires on the server side instead of the client side, with my user crontab on the server. Logged on the server, I just ran crontab -e and added a few expunge lines.
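As a sketch, using doveadm expunge (the user, folder names and delays are placeholders):

# keep flagged mails, expunge the rest after a while
@daily doveadm expunge -u user mailbox Junk savedbefore 7d
@daily doveadm expunge -u user mailbox Trash savedbefore 30d
@daily doveadm expunge -u user mailbox INBOX not flagged savedbefore 60d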

(Obviously you want to replace user by your local user account and Trash/Junk by your relevant junk IMAP folders.) This setup could probably be enhanced by using flags like DRAFT and such - however, on my local server, no actual draft got properly flagged as such, so it’s better to rely on the basic mark FLAGGED.

While I usually don’t advertise non-libre software, for obvious reasons (it’s a stupid way to think about computing), I admit that the Steam platform goes toward what I’ve wanted to see for many years. A proprietary software platform indeed - but the business is not made out of selling overly expensive DVD-ROMs once in a while, but cheap (downloadable) copies of games, (often) maintained over years. They also seem about to base a future gaming console on some sort of GNU/Linux flavor; that’s not philanthropy, that’s just the only clever way to do a cool gaming-based business without getting totally dependent on another software supplier that also brands its own gaming console. The latest South Park was about the fight between the latest Xbox and PlayStation. This issue only exists because consoles are made incompatible with usual workstations, a shortcut with so many shortcomings. Making a GNU/Linux-based console, because it is good business, is obviously going in the right direction.

So I’ll allow myself a little reminder here on how not to waste your bandwidth on a local network where several computers have copies of the same Steam game. It’s merely a simplified version of the well-thought-out Caching Steam Downloads @ LAN’s article. Obviously, to do this, you need to have your own home server. For instance, it should work out of the box with a setup like this (referred to as “the setup mentioned before” from now on in this article).

A) HTTP setup

We first create a directory to store the Steam depot. It will be served over http, so you need to create something like this (working with the setup mentioned before):

mkdir /srv/www/depot
chown www-data:www-data /srv/www/depot

Next, you want to set up nginx to be able to serve as a Steam content provider. Everything is based on http - no proprietary non-standard crap - so it can only go smoothly.

If you have the setup mentioned before, then /etc/nginx/sites-available/default contains a server { } statement for the general intranet. Add a new file called /etc/nginx/sites-available/steam (watch out for the listen and allow statements, change them depending on your server intranet IP!).
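A sketch of what that file can contain, following the proxy_store approach of the article mentioned above (all IPs are placeholders, consistent with the dnsspoof configuration below):

upstream steam-upstream {
    # hypothetical: real Steam content server IPs, resolved from a
    # non-spoofed DNS (since we are about to spoof those names locally)
    server 203.0.113.10;
    server 203.0.113.11;
}

server {
    listen 10.0.0.1:80;    # the server intranet IP
    server_name *.cs.steampowered.com ~^content.*\.steampowered\.com$;
    root /srv/www/depot;

    allow 10.0.0.0/24;     # intranet only
    deny all;

    location / {
        # serve from the local depot if present, otherwise fetch and store
        try_files $uri @fetch;
    }

    location @fetch {
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
        proxy_set_header Host $host;
        proxy_pass http://steam-upstream;
    }
}

Then enable it with a symlink in /etc/nginx/sites-enabled/ and reload nginx.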

B) DNS setup

Now, you need your server to actually handle requests to the Steam content servers, spoofing these servers’ IPs. It could be done by messing with the DNS cache server already up in the setup mentioned before, but I actually find it much more convenient to use dnsspoof from the dsniff package, with a two-line configuration, than to waste time creating, say, unnecessarily complex bind9 db files.

So we first install dnsspoof:

apt-get install dsniff

Here comes the two-line configuration, set in /etc/dnsspoof.conf. Obviously, here too, you have to set the IP to be your server’s intranet one.

10.0.0.1 *.cs.steampowered.com
10.0.0.1 content*.steampowered.com

Then you want an init.d script. You can create an ugly /etc/init.d/dnsspoof (obviously, you want your ethernet ethX device to be properly set!).
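A minimal sketch of such a script, assuming eth0 as the LAN-facing device:

#!/bin/sh
# /etc/init.d/dnsspoof - crude start/stop wrapper around dnsspoof
# (eth0 and the file paths are assumptions, adjust to your setup)

DAEMON=/usr/sbin/dnsspoof
PIDFILE=/var/run/dnsspoof.pid

case "$1" in
  start)
    start-stop-daemon --start --background --make-pidfile \
      --pidfile $PIDFILE --exec $DAEMON -- -i eth0 -f /etc/dnsspoof.conf
    ;;
  stop)
    start-stop-daemon --stop --pidfile $PIDFILE
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

exit 0

Make it executable and register it with update-rc.d dnsspoof defaults so it starts at boot.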