Note to self

2014-08-18

Every once in a while, the card reader I use gets into trouble. Nicely enough, the vendor still seems to maintain the driver over here, although the refusal to simply increase the version number in a sane fashion starts getting ridiculous. Anyhow, SP05 seems to be from May 7, 2014, which makes it almost two years newer than the binaries I've dragged over from the last system where I compiled the stuff.
However, compiling the beast proves not so easy, because there seems to be no script provided to get autoconf/automake and the whole charade going, and the README is empty, bummer. So first off we mustn't forget to make sure that libtool is installed as well. Second, the secret recipe is hidden inside Makefile.cvs; the bootstrap works like

make -f Makefile.cvs all

Now all we have to do is good ol'

./configure && make

I'm not a big fan of having some make install wreak havoc on the system, so first do a

make -i install

as the unprivileged user (-i tells make to ignore errors, so the permission failures don't abort the run) and scan the output for what's happening.
Turns out it wants to modify these files/directories:

2013-11-24

Finally I'm going to upgrade my vserver to CentOS 6, so let's do a full backup to a local disk first.
Strato does a "time machine"-style online backup, which is nice, but I want a fresh start and integrate my old configs piece-by-piece, so while it is nice to work with a safety net, it's not really what I'm looking for.
Apparently you can download system backups via FTP from a dedicated server, but there seems to be no security whatsoever, neither for the account's master password, nor for the actual data. Not on my watch.
So, there is this directory /private-backup on the server with a README.txt in it (the dir is also mentioned in the management web interface somewhere).
Apparently this dir is left alone on system re-installs, so

Move any backup files/directories that may lie littered around the filesystem into /private-backup

Clean up log files (there was one I had never heard of, which was >2GB)

yum clean all

Clear temporary directories

Become root, chdir into /private-backup, start a screen session (long-running command in a remote session...) and do this
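The archiving command itself didn't make it into the note; presumably it was a tar of the root filesystem into /private-backup. A sketch of what it may have looked like, with a scratch directory standing in for / so it can be run safely (all paths and excludes here are my guesses, not from the note):

```shell
# On the real server this would be run as root from /private-backup,
# with / as the source and excludes for at least /private-backup,
# /proc, /sys and /dev. A scratch tree stands in here.
mkdir -p /tmp/demo-root/etc
echo 'demo' > /tmp/demo-root/etc/motd
tar -czpf /tmp/backup.tar.gz \
    --exclude='./proc' --exclude='./sys' \
    -C /tmp/demo-root .
tar -tzf /tmp/backup.tar.gz
```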

Next the tarball can be replicated into safety by rsync over ssh (single colon between hostname and path). No need to wait until tar is done, provided you pass -c to rsync which allows continuing in case the transfer exceeds the end of the growing file on the server. Finish up by comparing the md5sum of the replica.
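For the record, the transfer might look roughly like this; the hostname and filename are made up, and the checksum comparison is demonstrated on a local stand-in:

```shell
# Hypothetical remote form (single colon after the hostname = ssh transport):
#   rsync -c -P root@vserver.example.com:/private-backup/backup.tar.gz .
# Afterwards, verify the copy; shown here with local stand-in files:
echo 'payload' > /tmp/orig.tar.gz
cp /tmp/orig.tar.gz /tmp/replica.tar.gz
md5sum /tmp/orig.tar.gz /tmp/replica.tar.gz
```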

2013-11-01

Since upgrading from Fedora 18 to Fedora 19, my laptop has been haunted by this manifestation whenever I tried to log into a regular gnome 3 session. Fortunately, I could always log in in fallback mode, and sometimes even a regular session or what seemed like a blend of regular gnome and fallback appeared, so there was not too much pressure to sort this out.

Several times I tried googling the error, but to no avail. There is about a gazillion possible things that could cause this screen. There are some log files to look at and debug switches to throw, but nothing led to a conclusion.

So on the night of halloween I decided to upgrade the machine to Fedora 20 (almost beta at the time) using Fedup, reckoning that this might either cure it or bog things up so badly I'd be forced to reinstall.

Anyhow, after the update the exact same problem prevails, ugh.

So I get the idea that it might just be something to do with user settings; after all, most of the system files have just been replaced and nothing changed.

For starters, I create a dummy user on the machine to see if clean settings would cure the issue.

They do!

So where does gnome keep its settings these days...in ~/.config, alright.
Next I narrow it down to the dconf subdirectory by trial-and-error (moving suspicious files/dirs out at a time).
There is just a single file in ~/.config/dconf by the name of user, full of binary junk; probably that's the database of all the dconf stuff.

The first thing I figure out is that I can view it in text format using

dconf dump /

There are some apparent suspects, like gnome shell extensions and stuff I tweaked over the years to make the GUI usable. I reset those back to default and try each change out: log into a fallback session, make the change, log out, attempt to log into a regular session, crash, log into fallback, lather, rinse, repeat.

By this method I narrow it down to the /org/gnome/desktop hierarchy, because

dconf reset -f /org/gnome/desktop/

allowed me to log into a regular session with no other changes.
Note that this works only if the trailing slash is included.

But that is no small hierarchy. At this point dconf-editor proves useful as it highlights settings that differ from the default, further narrowing down the suspects.

After some more failed attempts, I finally discover the magic spell:

dconf reset -f /org/gnome/desktop/session/

After finding that needle in the haystack, I'm curious if someone else had the same issue by googling a final time, this time adding session and dconf to the query, and promptly I find this.
Apparently the session-name key in this dconf path pointed to gnome-fallback, which seems to have been a file present in older gnome versions, but no more.
Unless the key is reset to gnome, gnome tries to locate a now non-existent file and bails.
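In hindsight, resetting the whole session subtree was a bit of a sledgehammer; writing the offending key back directly should also have done it. An untested guess, with the key name as mentioned above:

```shell
# Untested guess: set the single offending key back to 'gnome' instead
# of resetting the whole subtree (dconf needs a running D-Bus session):
dconf write /org/gnome/desktop/session/session-name "'gnome'"
```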
So that halloween story ended on a cheerful note, how unfitting!

2012-09-30

Newly bought USB keys or memory cards usually come with patent-encumbered FAT or exFAT filesystems on them. While exFAT does away with size limitations, it is still not a first-class citizen under Linux. Only a fuse-based solution is available for mounting so far, which is not even in repositories such as rpmfusion due to the licensing issues.

If the volumes aren't to be used in devices such as digital cameras or PVRs, and use in Windows-PCs is also not an issue, one can of course opt for a native Linux filesystem.
However, standard mkfs.extX will use a lot of space for some things that are well spent on a large system harddrive, but much less so on a relatively small flash device.

Choosing mkfs.ext2 will result in maximum usable capacity. However it will not create a journal like ext3 / ext4 would. A journal does away with lengthy filesystem checks in case the medium was not properly unmounted. Given that today's USB keys provide multi-gigabytes of storage and that Linux tends not to mount such devices in synchronous mode by default, you may still choose to keep the journal for removable media. Extents are actually a nice thing that can save space for large files, but it is a relatively new feature that should be avoided for portable media at the moment so they can also be used with older Linux versions, YMMV.

The -T largefile option gives a better ratio of space reserved for management structures vs. actual data to be stored. Assuming you will usually not store thousands of files with only a dozen bytes in them, this makes sense, but don't try this with an unpacked Gentoo portage tree!

The large_file flag is normally set automatically and allows storing files >2GiB on the filesystem.

The -m 0 flag makes sure there is no space set aside for the superuser, which would make no sense on a data-only volume.

The -c 0 -i 0 flags (these belong to tune2fs, which can apply them after the filesystem is created) prevent the filesystem from asking to be checked from time to time. This normally just puts warnings in your logs; no system I'm aware of runs fsck on hotplugged storage devices, and hardly anyone checks their USB keys manually as far as I can tell.
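Putting the pieces together, the full incantation might look like this. A file-backed image stands in for a real device here so nothing gets destroyed; on an actual USB key the target would be something like /dev/sdX1 (double-check with lsblk first, mkfs is unforgiving):

```shell
# File-backed stand-in so nothing real gets formatted (needs e2fsprogs).
dd if=/dev/zero of=/tmp/flash.img bs=1M count=16 status=none
mkfs.ext2 -F -q -T largefile -m 0 /tmp/flash.img
tune2fs -c 0 -i 0 /tmp/flash.img
tune2fs -l /tmp/flash.img | grep -i 'mount count'
```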

When saving a client certificate from a Mozilla application to disk – like getting it out initially after it's been installed from the cacert website – the certificate is inside a PKCS12 container.

To uniquely identify a certificate, it is useful to know its serial number, especially if you have several certificates that are similar and have gotten renewed several times, so there is more than one version of "the same" certificate around.
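The commands for both steps aren't in the note; with openssl they go roughly like this. A throwaway self-signed certificate stands in for the real one exported from the browser, and all filenames are examples:

```shell
# Create a throwaway key + self-signed cert and pack them into a
# PKCS12 container, standing in for the browser's certificate export:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo' \
    -keyout /tmp/demo.key -out /tmp/demo.crt
openssl pkcs12 -export -passout pass: -inkey /tmp/demo.key \
    -in /tmp/demo.crt -out /tmp/demo.p12
# The interesting part: pull the certificate back out of the container
# and read off its serial number:
openssl pkcs12 -in /tmp/demo.p12 -passin pass: -nokeys -out /tmp/demo.pem
openssl x509 -in /tmp/demo.pem -noout -serial
```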

2012-08-16

As it so happens, I own a DVR device that will record DVB-S and DVB-S2 TV into segments of a MPEG transport stream.

For DVB-S2 HD programs, the stream already contains h.264 compressed video. However, since we don't live in the '80s anymore, where there used to be a thing called VPS that would make sure your VCR started and stopped when the program did – even if that did not match the time printed in the program guide – the recording will be enclosed by parts of the surrounding broadcast.

Getting rid of that "garbage" is a bit tedious, but here's how I managed.

The receiver writes files 00001.ts, 00002.ts...00012.ts of one gigabyte each to the attached USB drive. So first turn that into one nice big file using cat (the target had better be ext4 or something else that can handle 2 GB+ files).

cat *.ts > <name>.ts

Now the problem is we can't just use a media player and seek to the cut points because the data in this transport stream is not properly timestamped.

But first we need to figure out which streams (tracks) are present in the raw material. Often broadcasters transmit a bunch of redundant audio streams. Just in a single language of course, apparently to make sure viewers are not getting too much value for their license fee.

In this case, I'm only going to keep streams 0:0 (the h.264 video) and 0:4 (the AC3 audio). You can use mplayer with the -aid/-vid options to check out the various streams. To play the file with audio stream 0:3 for example, run as
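The dangling invocation above presumably looked something like this; a guess, and note that for MPEG-TS files mplayer's -aid may expect the PID rather than a 0-based stream index, so check its startup output for the actual ids:

```shell
# Guess at the elided command -- play with a specific audio stream
# (the id may need adjusting, see mplayer's startup output):
mplayer -aid 3 recording.ts
```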

Then open the file e.g. in gnome-mplayer, seek to the start and the end of the actual content and make note of the timestamps.
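The step that actually produces the <intermediate>.mkv used below isn't recorded in the note; with ffmpeg, copying just the two wanted streams without re-encoding might look like this (filenames are examples):

```shell
# Guesswork: keep only video stream 0:0 and AC3 audio 0:4, no re-encode.
ffmpeg -i recording.ts -map 0:0 -map 0:4 -c copy intermediate.mkv
```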

Finally, fire up mmg (mkvmerge GUI), and add the freshly created <intermediate>.mkv file on the Input tab.
Then switch to the Global tab and enter the title of the recording into the File/segment title field.
Now check the Enable splitting... checkbox and select ...by parts.
Enter the begin/end timestamps noted down earlier into that field, separated by dash like 00:06:45-2:19:31.
Now verify that the Output filename is to your liking and hit the Start muxing button.
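For keyboard people, mkvmerge can do the same splitting from the command line; a sketch with example filenames (--split parts: needs a reasonably recent mkvtoolnix, 5.6 or later):

```shell
# CLI equivalent of the mmg clicking above:
mkvmerge -o final.mkv --title 'My Recording' \
    --split 'parts:00:06:45-02:19:31' intermediate.mkv
```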