After discussing rdiff-backup as a possible backup solution in the last part of this series, I now want to show you the second tool: rsnapshot.

As I already pointed out, I'm not using rdiff-backup anymore, mainly because it is simply too slow. I'm using a Raspberry Pi as my NAS, and it is absolutely not capable of handling larger backups with rdiff-backup. It works for smaller backup sizes, but not for my entire home directory. Even when I pushed the initial full backup directly to the backup disk (bypassing my Raspberry), all future incremental backups were still unbearably slow. Even when no files changed at all, it took hours on end simply to compare all the files in my home directory to those on the NAS, whereas a full comparison using rsnapshot is done within five to ten minutes.

Now keep this in mind and consider the fact that incomplete backups made with rdiff-backup can't be resumed. You can imagine that in the end you wouldn't have any backup at all: basically, all rdiff-backup would do is compare and push your files over the course of the day and abort in the evening when you shut down your workstation. The next day it would then spend all its time reverting the incomplete backup and running another one, which might not finish either.

So this is the main reason I stopped my experiments with rdiff-backup. It was a nice time, but I have finally moved on. Say hello to our new precious star: rsnapshot!
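To give you an idea of where we're headed before diving into the details, here is a minimal rsnapshot.conf sketch. The paths and retention counts below are just illustrative assumptions; note that rsnapshot insists on tabs, not spaces, between fields:

```
# /etc/rsnapshot.conf (excerpt) -- fields MUST be separated by tabs

# Where the snapshots are stored (e.g. the disk attached to the NAS)
snapshot_root	/mnt/backup/snapshots/

# How many snapshots to keep per level (names are arbitrary labels)
retain	daily	7
retain	weekly	4

# What to back up: source directory, then destination subdirectory
backup	/home/	localhost/
```

Each `rsnapshot daily` run then creates a new snapshot directory, with unchanged files hard-linked to the previous snapshot, which is exactly why comparisons are so much faster than with rdiff-backup.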

Backups are a vital part of every computer system, be it a corporate PC network or simply your local workstation. Unfortunately, they are often neglected, although everyone knows how important they are. The “I haven't had any bad incidents yet, but I know I really should… guess what… I'll do it next week” attitude is only too well known to everybody, including myself.

Performing backups is a tedious process if done wrong. Backups therefore need to be done automatically in the background without any user intervention. As soon as someone has to take action to get their stuff backed up, they will ultimately end up with no backup at all (and probably a bad conscience that fades all too quickly).
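Such hands-off backups are typically driven by cron. A hypothetical crontab sketch (the script path and times are made up for illustration):

```
# Run the backup script nightly at 02:30, when the machine is
# likely idle (fields: minute hour day-of-month month day-of-week command)
30 2 * * *	/usr/local/bin/backup.sh
```

Once an entry like this is in place, the backup happens whether you think of it or not, which is the whole point.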

In this little two-part article series I will present two tools I've been playing around with a lot and I'll show you how you can use them to set up your own personal NAS with a spare piece of hardware such as a Raspberry Pi. No need for any expensive special storage system.

With KDE 4.10 the file indexer has undergone some major changes which made it pretty usable so I decided to switch it on again. It turned out that the first stage indexing works exceptionally well. It indexed about 60,000 files in my home directory in the blink of an eye.

Unfortunately, I had to realize that the second-level indexing does not work so well. I remember Virtuoso often eating up all my CPU in the past. Now Virtuoso keeps quiet, but nepomukindexer makes my workstation take off. It only starts indexing when my PC is idle, but for bigger files it keeps the CPU busy at 100%, which is a pretty bad thing. There is already a bug report about nepomukindexer consuming too much CPU time on larger files, but I didn't want to wait for a fix.

Long story short: I thought of ways to automatically limit the CPU usage of certain processes (not necessarily only Nepomuk).
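One tool for this job is cpulimit, which throttles a process by alternately sending it SIGSTOP and SIGCONT so that it only runs for a fraction of each time slice. A minimal sketch of that pause/resume mechanism (the busy loop here is just a stand-in for any runaway process):

```shell
# Start a CPU-hungry dummy process in the background
sh -c 'while :; do :; done' &
pid=$!

# Pause it: a stopped process uses no CPU at all
kill -STOP "$pid"
sleep 1
ps -o stat= -p "$pid"   # on Linux this shows 'T' (stopped)

# Resume it again, then clean up
kill -CONT "$pid"
kill "$pid"
```

cpulimit automates exactly this duty cycle, e.g. `cpulimit -l 50 -p "$pid"` keeps the given process at roughly 50% CPU.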

Those of you who subscribe to the RSS feed don't have to do anything, but if you subscribe to the Atom feed (as most people do), you should check your subscription. Please make sure the feed URL saved in your feed reader is http://www.refining-linux.org/feeds/atom10.xml/ (you may also use HTTPS). Until now, that URL was redirecting to http://feeds.feedburner.com/RefiningLinux, which has now been deprecated.

The old Feedburner URL will redirect to my Atom feed for the next 30 days and then return a 404 error, so please make sure you update your subscription within that time.

Linux is everywhere, not just on desktops. It's on phones, ebook readers, public terminals, routers, electricity meters and many more devices. The key to Linux's success is its versatility: it is possible to run Linux on nearly every technical device that has a CPU. Many of these are closed systems, so you often don't even notice that Linux is running on a particular device, but there is always a way to gain access to its internals and modify it the way you want. The problem, however, is that heavy modifications might void your warranty or make updates to a more recent firmware version impossible. In this article I want to show you a simple but powerful way to modify such systems non-destructively.

This blog has now been up for a good year and a half, and not much has changed since it started. Now it's time for a redesign (if you ask me, this was long overdue). While the overall appearance stays the same, the details have changed significantly. Let me walk you through the new goodies.

Tomorrow this blog will be blacked out for 12 hours starting at 1400 CET (1300 UTC or 8 AM EST).

With this initiative Refining Linux is following the protests against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) proposed by US legislators and the media industry. Many huge Internet companies and organizations such as Reddit and Wikipedia participate in these protests. Also companies such as Google, Amazon, Facebook and of course non-profit organizations such as Mozilla and many smaller groups support the protests against SOPA and PIPA.

This article is part of the 2011 Advent calendar series “24 Outstanding ZSH Gems”. Each day between December 1st and December 24th an article will be published as part of this series showing one awesome feature of the Z Shell. Some of the features can of course also be found in other shells such as Bash, but the ZSH implementation is often superior.

I have shown you many things about ZSH throughout this series, but there is much more you can do with it than I could cover here. And of course there is also much more to configure, many more options I couldn't tell you about, many more tips and tricks, tweaks and optimizations.

Generally, it's a long road before you have your shell set up just the way you like it. ZSH in particular needs a lot of configuration before it becomes really user-friendly. You can do all this configuration by hand, or you can use a framework. Yes, there are frameworks for ZSH (and for Bash as well, in case you didn't know), and to round off this Advent series I'll show you two of them.

There are two ZSH modules which allow you to easily work with POSIX extended regular expressions (POSIX ERE) or with Perl compatible regular expressions (PCRE) which are even more advanced than POSIX ERE. These two modules are zsh/regex and zsh/pcre. You can use either one of them or both at the same time. That's entirely up to you. I'll show you both.
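Here is a quick sketch of both modules in action before we go into detail. The IP address and patterns are just illustrative, and the zsh/pcre part only works if your zsh was built with PCRE support:

```shell
# zsh/regex: POSIX ERE matching via the =~ operator
zmodload zsh/regex

ip="192.168.0.42"
if [[ $ip =~ '^([0-9]+)\.([0-9]+)' ]]; then
  print "match:  $MATCH"      # whole match:         192.168
  print "group1: $match[1]"   # first capture group: 192
fi

# zsh/pcre: Perl-compatible patterns (needs PCRE-enabled zsh)
if zmodload zsh/pcre 2>/dev/null; then
  setopt RE_MATCH_PCRE        # make =~ use PCRE instead of ERE
  [[ $ip =~ '\d+(?=\.)' ]] && print "pcre:   $MATCH"   # lookahead: 192
fi
```

On a successful match, zsh fills `$MATCH` with the matched portion and the `$match` array with the capture groups.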

Working on the shell often means working with files, and sometimes you need to read or edit their contents. Normally you'd do that with the command-line editor of your choice (e.g. nano, vi, vim or emacs), but sometimes you need to write the output of a command or a pipe to a file, or feed programs with contents from the hard disk. That's usually done with the input and output redirection operators, but ZSH gives you one more tool that can sometimes make things easier: the mapfile module.
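As a quick taste of what's to come, here is a minimal sketch. The module exposes files as the special associative array `$mapfile`, keyed by filename (the temp file below is just for illustration):

```shell
zmodload zsh/mapfile

tmp=$(mktemp)
print "hello world" > $tmp

# Reading: the file's entire contents (including the trailing newline)
content=$mapfile[$tmp]
print -rn -- $content            # prints: hello world

# Writing: assigning to a key replaces the file's contents
mapfile[$tmp]="new contents"
print -rn -- $mapfile[$tmp]      # prints: new contents

rm -f $tmp
```

No redirection, no `cat`, no temporary subshells: the file simply behaves like a variable.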