Bernhard R. Linkhttp://blog.brlink.eu/index.html
Who made this description field required?some little shell scripthttp://blog.brlink.eu/index.html#i70Mon, 08 Dec 2014 20:35:03 +0100http://blog.brlink.eu/index.html#i70
The Colon in the Shell.<p>I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell:</p>
<p>To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:</p>
<pre>: [arguments]
No effect; the command does nothing beyond expanding arguments and performing any specified redirections. A zero exit code is returned.
</pre>
<p>If you wonder what the difference from <b>true</b> is: I don't know of any difference (except that there is no /bin/:).</p>
<p>So what is the colon useful for? You can use it if you need a command that does nothing, but still is a command.</p>
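<p>A minimal demonstration (just a sketch): the colon succeeds while ignoring its arguments, exactly like true:</p>

```shell
# The colon does nothing, ignores its arguments and returns exit status 0,
# just like true.
: these arguments are ignored
echo $?     # prints 0
true
echo $?     # prints 0
```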
<ul><li>
For example, if you want to avoid using a negation (for fear of history expansion still being on by default in an interactive bash, or wanting to support ancient shells), you cannot simply write
<pre>
if condition ; then
# this will be an error
else
echo condition is false
fi
</pre>
but need some command there, for which the colon can be used:
<pre>
if condition ; then
: # nothing to do in this case
else
echo condition is false
fi
</pre>
To confuse your reader, you can use the fact that the colon ignores its arguments, provided they are only normal words:
<pre>
if condition ; then
: nothing to do in this case # <- this works but is not good style
else
echo condition is false
fi
</pre>
though I strongly recommend against it (exercise: why did I use a # there for my remark?).
</li><li>
This of course also works in other cases:
<pre>
while processnext ; do
:
done
</pre>
</li><li>
The ability to ignore the actual arguments (while still processing them, as with every command that ignores its arguments) can also be used, as in:
<pre>
: ${VARNAME:=default}
</pre>
which sets VARNAME to a default if it is unset or empty. (One could also do that at the place where the variable is first used, or write ${VARNAME:-default} everywhere, but this can be more readable.)
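<p>A small illustration of the default-assignment idiom (VARNAME is just an example name):</p>

```shell
# If VARNAME is unset or empty, the expansion assigns the default;
# the colon then ignores the resulting word.
unset VARNAME
: ${VARNAME:=default}
echo "$VARNAME"    # prints: default

# If VARNAME already has a non-empty value, it is left untouched.
VARNAME=other
: ${VARNAME:=default}
echo "$VARNAME"    # prints: other
```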
</li><li>
In other cases you do not strictly need a command, but using the colon can clear things up, like creating or truncating a file using a redirection:
<pre>
: > /path/to/file
</pre>
</li>
</ul>
<p>Then there are more things you can do with the colon, most of which I'd file under &quot;abuse&quot;:</p>
<ul><li>misusing it for comments:
<pre>
: ====== here
</pre>
While it has the advantage of also showing up in <b>-x</b> output, the confusion it causes readers and the danger of any shell-active character in the arguments make this generally a bad idea.
</li><li>
As it is practically the same as <b>true</b>, it can be used as a shorter form of true. Given that true is more readable, that is a bad idea. (At least it isn't as evil as using the empty string to denote true.)
<pre>
# bad style!
if condition ; then doit= ; doit2=: ; else doit=false ; doit2=false ; fi
if $doit ; then echo condition true ; fi
if $doit2 && true ; then echo condition true ; fi
</pre>
</li><li>Another way to scare people:
<pre>
ignoreornot=
$ignoreornot echo This you can see.
ignoreornot=:
$ignoreornot echo This you cannot see.
</pre>
While it works, I recommend against it: it is easily confusing, and any &gt; or $(...) in there will likely wreak havoc.
</li><li>
Last and least, one can shadow the built-in colon with a different one. Only useful for obfuscation, and thus likely always evil. <tt>:(){:&:};:</tt> anyone?
</li></ul>
<p>This is of course not a complete list. But unless I missed something else, those are the most common cases I run into.</p>
<p>[1] &lt;rant&gt;If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, hiding all information within other information in a quite absurd order. Unless you are looking for documentation on how to write a shell script parser; then the bash manpage is really what you want to read.&lt;/rant&gt;</p>
babblehttp://blog.brlink.eu/index.html#i69Sun, 16 Nov 2014 16:51:38 +0100http://blog.brlink.eu/index.html#i69
Enabling Change<p>Big changes are always complicated to get done, and can be the harder the bigger or more diverse the organization they take place in.</p>
<p><b>Transparency</b></p>
<p>
Ideally every change is well communicated early and openly.
Leaving people in the dark about what will change and when gives them much less time to get comfortable with it or come to terms with it.
Especially bad is extending the change later or shortening transition periods.
Letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them.
Making a new way optional is a great way to create security (see below), but making it obligatory before the change has even arrived as optional with them will not make them very willing to embrace change.
</p>
<p><b>Take responsibility</b></p>
<p>
Every transformation means costs.
Even if some change only improved things and made nothing worse once implemented (the ideal change you will never meet in reality), deploying the change still costs:
processes have adapted to the old way, people have to relearn how to do things, how to detect if something goes wrong, how to fix it, documentation has to be adapted, and so on.
Even if the change causes more good than costs in the whole organization (let's hope it does; I hope you wouldn't try to do something whose total benefit is negative), the benefits, and thus the benefit-to-cost ratio, will differ between the parts of your organization and the people within it.
It's hardly avoidable that for some people there will not be much benefit, much less perceived benefit, compared to the costs they have to bear for it.
Those are the people whose good will you want to fight for, not the people you want to fight against.
</p><p>
They have to pay with their labor/resources, and thus their good will, for the overall benefit.
</p><p>
This is much easier if you acknowledge that fact.
If you blame them for bearing the costs, claim their situation does not even exist, or even ridicule them for not embracing change, you only set yourself up for frustration.
You might be able to persuade yourself that everyone that is not willing to invest in the change is just acting out of malevolent self-interest.
But you will hardly be able to persuade people that it is evil to not help your cause if you treat them as enemies.
</p><p>
And once you have ignored or played down costs that later actually materialize, your credibility in being able to see the big picture will simply cease to exist for the next change.
</p>
<p><b>Allow different metrics</b></p>
<p>
People have different opinions about priorities, about what is important, about how much something costs and even about what is a problem.
If you want to persuade them, try to take that into account.
If you do not understand why something is given as a reason, it might be because the point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are and what the problems are.
If you want to persuade people, it is worth trying to understand those.
</p><p>
If all you want to do is persuade some leader or some majority, then ridiculing their concerns might get you somewhere.
But how do you want to win people over if you do not even appear to understand their problems?
Why should people trust you that their costs will be worth the overall benefits if you tell them the costs they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?
</p>
<p><b>Don't get trolled and don't troll</b></p>
<p>
There will always be people who are unreasonable or even try to provoke you. Don't allow yourself to be provoked. Remember that for successful changes you need to win broad support. Feeling personally attacked, or feeling presented with a large amount of pointless arguments, easily results in not giving proper responses or not actually looking at arguments.
If someone is only trolling and purely malevolent, they will tease you best by bringing actual concerns of people in a way that makes you likely to degrade yourself and your point in answering.
Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage.
</p><p>
When not being able to persuade people, it is also far too easy to assume they act in bad faith and/or to attack them personally. This can only escalate things even more. Worst case, you frustrate someone acting in good faith. In most cases you poison the discussion so much that people actually in good faith will no longer contribute to the discussion. It might be rewarding short term, because after some escalation only obviously unreasonable people will speak against you, but it makes it much harder to find solutions together that could benefit everyone, and almost impossible to persuade those who simply left the discussion.
</p>
<p><b>Give security</b></p>
<p>
Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but they go a long way to avoid risks.
Thus an important part of enabling change is to reduce risks, real and perceived, and to give people a feeling of security.
</p><p>
In the end, almost every measure boils down to that:
You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them early into the process, by making sure their concerns are understood and part of the global profit/cost calculation and making sure their experiences with the change are part of the evaluation).
You give people security by allowing them to predict and control things (by transparency about plans, about how far the change will go, and about guaranteed transition periods, and by giving them enough time so they can actually plan and do the transition).
You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios,...).
You give people security by not leaving them alone (by having good documentation, availability of training, ...).
</p><p>
Especially side-by-side availability of old and new is an extremely powerful tool, as it fits all of the above:
It allows people to actually test it (and not some little prototype mostly, but not quite totally, unrelated to reality) so their feedback can be heard. It makes things more predictable, as all the new ways can be tried before the old ones stop working. It is the ultimate rollback scenario (just switch off the new). And it allows for learning the new before losing the old.
</p><p>
Of course giving people a feeling of security needs resources. But it is a very powerful way to get people to embrace the change.
</p><p>
Also, in my experience, people fearing only for themselves will usually stay passive, not pushing forward and trying to avoid or escape the changes.
(After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.)
Most people actively pushing back do it because they fear for something larger than just themselves. And any measure that makes them fear less that you will ruin the overall organization not only avoids unnecessary hurdles in rolling out the change, but also has some small chance of actually avoiding running into disaster with closed eyes.
</p>
philosophicalhttp://blog.brlink.eu/index.html#i68Thu, 28 Aug 2014 20:57:39 +0200http://blog.brlink.eu/index.html#i68
Where key expiry dates are useful and where they are not.<p>Some recent blog posts (<a href="http://blog.josefsson.org/2014/08/26/the-case-for-short-openpgp-key-validity-periods/">here</a> and <a href="http://gwolf.org/node/3950">here</a>) suggest short key expiry times.</p>
<p>They also highlight something many people forget: the expiry time of a key can be changed at any time with just a new self-signature. In particular, this can be done retroactively (you cannot avoid that if you allow changing it at all: nothing would stop an attacker from just changing the clock of one of his computers).</p>
<p>(By the way: did you know you can also reduce the validity time of a key? If you look at the resulting packets in your key, this is simply a revocation packet of the previous self-signature followed by a new self-signature with a shorter expiration date.)</p>
<p>In my eyes that fact has a very simple consequence: An expiry date on your gpg main key is almost totally worthless.</p>
<p>If you for example lose your private key and have no revocation certificate for it, then an expiry time will not help at all: once someone else gets the private key (for example by brute-forcing it, as computers get faster over the years, or because they could brute-force the pass-phrase of a backup they got somehow), they can just extend the expiry date and make it look like the key is still valid. (And if they do not have the private key, there is nothing they can do anyway.)</p>
<p>There is one place where expiration dates make much more sense, though: subkeys.</p>
<p>As the expiration date of a subkey is part of the signature of that subkey with the main key, someone having access to only the subkey cannot change the date.</p>
<p>This also makes it feasible to use new subkeys over the time, as you can let the previous subkey expire and use a new one. And only someone having the private main key (hopefully you), can extend its validity (or sign a new one).</p>
<p>(I generally suggest always having a signing subkey and never using the main key except off-line, to sign subkeys or other keys. The fact that it can sign other keys makes the main key too precious to operate on-line (even if it is on some smartcard, the reader cannot show you what you just signed).)</p>
rantshttp://blog.brlink.eu/index.html#i67Mon, 02 Jun 2014 19:37:23 +0200http://blog.brlink.eu/index.html#i67
beware of changed python Popen defaults<p>
From <a href="https://docs.python.org/3.4/library/subprocess.html">the python subprocess documentation</a>:
<blockquote>
<p>
Changed in version 3.3.1: bufsize now defaults to -1 to enable buffering by default to match the behavior that most code
expects. In versions prior to Python 3.2.4 and 3.3.1 it incorrectly defaulted to 0 which was unbuffered and allowed short
reads. This was unintentional and did not match the behavior of Python 2 as most code expected.
</p>
</blockquote>
<p>
So it was "unintentional", it seems, that the <a href="https://docs.python.org/3.1/library/subprocess.html">previous documentation</a> clearly documented the default to be 0, with the implementation matching the documentation.
And it was "unintentional" that this was the only sane value for any non-trivial handling of pipes (without running into deadlocks).
</p><p>
Yay for breaking programs that follow the documentation! Yay for changing such an important setting between 3.2.3 and 3.2.4 and introducing deadlocks into programs.
</p>
rantshttp://blog.brlink.eu/index.html#i66Sun, 16 Feb 2014 15:43:31 +0100http://blog.brlink.eu/index.html#i66
unstable busybox and builtins<p>In case you are using busybox-static like me to create custom initramfses, here a little warning:</p>
<p>The current busybox-static in unstable lost its ability to find builtins with no /proc/self/exe,
so if you use it make sure you either have all builtins you need up until mount /proc (including mount)
and after you umount all file systems as explicit symlinks or simply create a /proc/self/exe -> /bin/busybox
symlink...</p>
warninghttp://blog.brlink.eu/index.html#i65Thu, 15 August 2013 13:00:43 +0200http://blog.brlink.eu/index.html#i65
slides for git-dpm talk at debconf13<p>
Since at my <a href="http://penta.debconf.org/dc13_schedule/events/1035.en.html">git-dpm talk</a> at <a href="http://debconf13.debconf.org/">debconf13</a> I got the speed a bit wrong, and as the slides I uploaded to penta do not seem to work from the html export, I've also uploaded the slides to <a href="http://git-dpm.alioth.debian.org/git-dpm-debconf13.pdf">http://git-dpm.alioth.debian.org/git-dpm-debconf13.pdf</a>.
</p>
talkshttp://blog.brlink.eu/index.html#i64Wed, 12 June 2013 13:00:00 +0200http://blog.brlink.eu/index.html#i64
listing your git repositories on git.debian.org<p>
With the new gitweb version available on alioth after the upgrade to wheezy (thanks to the alioth admins for their work), there is a new feature available that I want to advertise a bit here: listing only a subtree of all repositories.
Until now one could only either look at a specific repository or get the list of all repositories,
and the list of all repositories is quite large and slow.
</p><p>
With the new feature you can link to all the repositories in your alioth project. For example in reprepro's case that is <a href="http://anonscm.debian.org/gitweb/?pf=mirrorer">http://anonscm.debian.org/gitweb/?pf=mirrorer</a>.
What I missed even more is what is now possible with the link <a href="http://anonscm.debian.org/gitweb/?pf=debian-science">http://anonscm.debian.org/gitweb/?pf=debian-science</a>: getting a list of all debian-science repositories (still slow enough, but much better than the full list).
</p>
advertisinghttp://blog.brlink.eu/index.html#i63Thu, 09 May 2013 13:14:43 +0200http://blog.brlink.eu/index.html#i63
gnutls and valgrind<p>Memo to myself (as I tend to forget it): If you develop gnutls using applications, recompile gnutls with <tt>--disable-hardware-acceleration</tt> to be able to test them without getting flooded with false-positives.</p>
mumblinghttp://blog.brlink.eu/index.html#i62Thu, 04 Apr 2013 20:39:37 +0200http://blog.brlink.eu/index.html#i62
Git package workflows<p>Given the recent discussions on <a href="http://planet.debian.org/">planet.debian.org</a>, I use the opportunity to describe how you can handle upstream history in a <a href="http://git-dpm.alioth.debian.org/">git-dpm</a> workflow.</p>
<p>One of the primary points of git-dpm is that you should be able to just check out the Debian branch, get the <tt>.orig.tar</tt> file(s) (for example using pristine-tar, by <tt>git-dpm prepare</tt> or by just downloading them) and then calling <tt>dpkg-buildpackage</tt>.</p>
<p>Thus the contents of the Debian branch need to be clean from <tt>dpkg-source</tt>'s point of view, that is, they must not contain any files that the <tt>.orig.tar</tt> file(s) do not contain, nor any modified files.
</p>
<p><b>The easy way</b></p>
<p>The easiest way to get there is by importing the upstream tarball(s) as a git commit, which one will usually do with <tt>git-dpm import-new-upstream</tt> as that also does some of the bookkeeping.</p>
<p>This new git commit will have (by default) the previous upstream commit, plus any parent you give with <tt>-p</tt>, as parents (i.e. with <tt>-p</tt> it will be a merge commit), and its content will be the contents of the tarball (with multiple <tt>orig</tt> files, it gets more complicated).</p>
<p>The idea is of course that you give the upstream tag/commit belonging to this release tarball with <tt>-p</tt> so that it becomes part of your history and so git blame can find those commits.</p>
<p>Thus you get a commit with the exact orig contents (so pristine-tar can more easily create small deltas) and the history combined.</p>
<p>Sometimes there are files in the upstream tarball that you do not want in your Debian branch (as you remove them in <tt>debian/rules clean</tt>); when using this method you will then have those files in the upstream branch but delete them in the Debian branch. (This is why <tt>git-dpm merge-patched</tt> (the operation to merge a new branch with upstream + patches with your previous <tt>debian/</tt> directory) will by default look which files were deleted relative to the previous upstream branch and delete them in the newly merged branch as well.)</p>
<p><b>The complicated way</b></p>
<p>There is also a way that avoids importing the <tt>.orig.tar</tt> file(s), though it is a bit more complicated: the idea is that if your upstream's git repository contains all the files needed to build your Debian package (for example if you call <tt>autoreconf</tt> in your Debian package and clean all the generated files in the clean target, or if upstream has a less sophisticated release process and their .tar contains only stuff from the git repository), you can just use the upstream git commit as the base for your Debian branch.</p>
<p>Thus you can make upstream's commit/tag your upstream branch, by recording it with <tt>git-dpm new-upstream</tt> together with the <tt>.orig.tar</tt> it belongs to. (Be careful: git-dpm does not check whether that branch contains any files differing from your <tt>.orig.tar</tt>, and could not decide whether it misses any files you need to build even if it tried to.)</p>
<p>Once that is merged with the <tt>debian/</tt> directory to create the Debian branch, you run <tt>dpkg-buildpackage</tt>, which calls <tt>dpkg-source</tt>, which compares your working directory with the contents of the <tt>.orig.tar</tt> with the patches applied. As it will only see files that are missing, but no files modified or added (if everything was done correctly), one can work directly in the git checkout without needing to import the <tt>.orig.tar</tt> files at all (although the pristine-tar deltas might get a bit bigger).</p>
advertisementshttp://blog.brlink.eu/index.html#i61Sun, 10 Feb 2013 15:07:26 +0100http://blog.brlink.eu/index.html#i61
Debian version strings<p>As I did not find a nice explanation of Debian version numbers to point people at,
here is a random collection of information about them:</p>
<p>All our packages have a version. For the package managers to know
which package to replace with which, those versions need an ordering.
As version orderings are like opinions (everyone has their own), no
single ordering chosen for our tools to implement would match all of them.
So maintainers of Debian packages sometimes have to translate upstream
versions into something the Debian tools understand.</p>
<p>But first let's start with some basics:</p>
<p>A Debian version string is of the form: [<i>Epoch</i><tt>:</tt>]<i>Upstream-Version</i>[<tt>-</tt><i>Debian-Revision</i>]</p>
<p>To make this form unique, the Upstream-Version may not contain a colon if there is no Epoch, and may not contain a minus sign if there is no Debian-Revision. The Epoch must be an integer (so no colons allowed). And the Debian-Revision may not contain a minus sign (so the Debian-Revision is everything right of the right-most minus sign, or empty if there is no such sign).</p>
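<p>The split at the right-most minus sign can be sketched in shell (a simplified illustration that ignores the epoch and the no-revision case; the version string is made up):</p>

```shell
v=1.0-2+b1
# Debian revision: everything after the last minus sign.
echo "${v##*-}"    # prints: 2+b1
# Upstream version: everything before the last minus sign.
echo "${v%-*}"     # prints: 1.0
```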
<p>Two versions are compared by comparing all three parts. If the epochs differ, the biggest epoch wins. With same epochs, the biggest upstream version wins. With same epochs and same upstream versions, the biggest revision wins.</p>
<p>Comparing first the upstream version and then the revision is the only sensible thing to do, but it can have counter-intuitive effects if you try to compare versions containing minus signs as Debian versions:</p>
<pre>
$ dpkg --compare-versions '1-2' '<<' '1-1-1' && echo true || echo false
true
$ dpkg --compare-versions '1-2-1' '<<' '1-1-1-1' && echo true || echo false
false
</pre>
<p>
To compare two version parts (Upstream-Version or Debian-Revision), the string is
split into alternating runs of digits and non-digits. Consecutive digits are treated as a number
and compared numerically. Non-digit parts are compared just like ASCII strings,
with the exception that letters sort before non-letters and the tilde is
treated specially (see below).
</p>
<p>
So <tt>3pl12</tt> and <tt>3pl3s</tt> are split into (3, 'pl', 12, '') and (3, 'pl', 3, 's'),
and the first is the larger version.
</p>
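<p>GNU sort's <tt>-V</tt> option implements a similar (though not identical) splitting into digit and non-digit runs, so it can be used to get a feeling for this ordering:</p>

```shell
# sort -V compares digit runs numerically, so 3pl3s sorts before 3pl12
# (3 is numerically smaller than 12). The ordering differs from dpkg's
# in some corner cases, but agrees here.
printf '%s\n' 3pl12 3pl3s | sort -V
```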
<p>Comparing digits as characters makes no sense, at least at the beginning of the string
(otherwise version 10.1 would be smaller than 9.3). For digits later in the string there
are two different version schemes competing: there is GNU style, where 0.9.0 is followed by 0.10.0,
and decimal fractions, where 0.11 &lt; 0.9.
A version comparison algorithm has to choose one, and the one chosen by dpkg is both
the one supporting the GNU numbering and the one that more easily supports the other scheme:
</p><p>
Imagine one software versioned 0.8, 0.9, 0.10 and another versioned 1.1, 1.15, 1.2.
With our versioning scheme the first just works, while the second has to be translated
into 1.1, 1.15, 1.20 to remain monotonic.
The other way around, one would have to translate the first form to 0.08, 0.09, 0.10, or
better 0.008, 0.009, 0.010, as one does not know how big those numbers will get; i.e. one would
have to know beforehand where the numbers will end up, while adding zeros as needed for
our scheme can be done knowing only the previous numbers.
</p>
<p>Another decision to be taken is how to treat non-numbers. The way dpkg did this
was assuming that appending stuff to the end increases versions. This has the advantage of not
needing to special-case dots, as e.g. 0.9.6.9 will naturally be bigger than 0.9.6.
I think back then this decision was also easier, as usually anything attached was making
the version bigger, and one often saw versions like 3.3.bl.3 to denote some patches
done atop of 3.3 in the 3rd revision.
</p>
<p>
But this scheme has the disadvantage that version schemes like 1.0rc1, 1.0rc2, 1.0
do not map naturally. The classic way to work around this is to translate that
into 1.0rc1, 1.0rc2, 1.0.0, which works because the dot is a non-letter (it also works
with 1.0-rc1 and 1.0+rc1, as the dot has a bigger ASCII value than minus or plus).
</p>
<p>The new way is the specially treated tilde character. This character was added
some years ago to sort before anything else, including an empty string.
This means that 1.0~rc1 is less than 1.0:</p>
<pre>
$ dpkg --compare-versions '1.0~rc1-1' '<<' '1.0-1' && echo true || echo false
true
</pre>
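<p>GNU sort's <tt>-V</tt> ordering also treats the tilde this way, which gives a quick way to check such versions without dpkg (the two orderings are not guaranteed identical in all corner cases):</p>

```shell
# The tilde sorts before anything else, even before the end of the
# string, so 1.0~rc1 comes before 1.0.
printf '%s\n' 1.0 1.0~rc1 | sort -V
```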
<p>This scheme is especially useful if you want to create a package sorting before
a package already there, as you for example want with backports (a user
having a backport installed should, when upgrading to the next distribution, get the backport
replaced by the actual package). That's why backports usually have versions
like 1.0-2~bpo60+1. Here 1.0-2 is the version of the un-backported package; bpo60
is a note that this is backported to Debian 6 (AKA squeeze); and the +1 is the number
of the backport, in case multiple tries are necessary. (Note the use of the
plus sign, as the minus sign is not allowed in revisions and would make everything before
it part of the upstream version.)
</p>
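<p>The same quick check works for the backport example (again using sort -V as an approximation of dpkg's ordering):</p>

```shell
# 1.0-2~bpo60+1 sorts before 1.0-2, so upgrading to the next
# distribution replaces the backport with the real package.
printf '%s\n' 1.0-2 1.0-2~bpo60+1 | sort -V
```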
<p>
Now, when to use which technique?</p>
<ul><li>Usually try to stick to the upstream version if it
is sane. (With the exception of date-based versions: as experience shows, upstreams
later change their minds, so a version of 20130210 is best translated to 0.20130210
in order to have a future 1.0 sort higher.)</li>
<li>If there are some suffixes in the version that denote a "before", prefix them
with a tilde.</li>
<li>Compare a version to your last version and append zeros or .0 at the end (or at the
ends of parts) where necessary to increase them.</li>
</ul>
<p>Some common examples:</p>
<ul><li>You want to package 2.0 but you need to repack the .orig.tar because it contains a file with a non-free license (for example no license at all). Then you can call it 2.0~dfsg-1.
If the file is later relicensed, you can do a 2.0-2 with the original file, which sorts higher.</li>
<li>You need to repack 2.0 but it was already in the archive. Then you can do
2.0+dfsg-1, as this is higher. (Using the plus has the advantage that you can later use the 2.0.0 trick if the file is relicensed, while the older 2.0.dfsg-1 style made this impossible.)
(One can also use -dfsg-1 here, but + might be easier for humans to parse.)</li>
<li>You package a git snapshot of a version 2.0 not yet released; then you can use 2.0~gitXYZ-1.</li>
<li>You package a git snapshot of upstream doing some work on top of 1.9; then you can use 1.9+gitXYZ-1.</li>
</ul>
babblehttp://blog.brlink.eu/index.html#i60Sun, 13 Jan 2013 15:23:52 +0100http://blog.brlink.eu/index.html#i60
some signature basics<p>While almost everyone has already worked with cryptographic signatures,
they are usually only used as black boxes, without taking a closer look.
This article intends to shed some light on what happens behind the scenes.
</p>
<p>
Let's take a look at a signature. In ascii-armoured form, or behind
a clearsigned message, one often sees only something like this:
</p>
<pre>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAABAgAGBQJQ8qxQAAoJEH8RcgMj+wLc1QwP+gLQFEvNSwVonSwSCq/Dn2Zy
fHofviINC1z2d/voYea3YFENNqFE+Vw/KMEBw+l4kIdJ7rii1DqRegsWQ2ftpno4
BFhXo74vzkFkTVjo1s05Hmj+kGy+v9aofnX7CA9D/x4RRImzkYzqWKQPLrAEUxpa
xWIije/XlD/INuhmx71xdj954MHjDSCI+9yqfl64xK00+8NFUqEh5oYmOC24NjO1
qqyMXvUO1Thkt6pLKYUtDrnA2GurttK2maodWpNBUHfx9MIMGwOa66U7CbMHReY8
nkLa/1SMp0fHCjpzjvOs95LJv2nlS3xhgw+40LtxJBW6xI3JvMbrNYlVrMhC/p6U
AL+ZcJprcUlVi/LCVWuSYLvUdNQOhv/Z+ZYLDGNROmuciKnvqHb7n/Jai9D89HM7
NUXu4CLdpEEwpzclMG1qwHuywLpDLAgfAGp6+0OJS5hUYCAZiE0Gst0sEvg2OyL5
dq/ggUS6GDxI0qUJisBpR2Wct64r7fyvEoT2Asb8zQ+0gQvOvikBxPej2WhwWxqC
FBYLuz+ToVxdVBgCvIfMi/2JEE3x8MaGzqnBicxNPycTZqIXjiPAGkODkiQ6lMbK
bXnR+mPGInAAbelQKmfsNQQN5DZ5fLu+kQRd1HJ7zNyUmzutpjqJ7nynHr7OAeqa
ybdIb5QeGDP+CTyNbsPa
=kHtn
-----END PGP SIGNATURE-----
</pre>
<p>This is actually just a base64-encoded byte stream.
It can be translated to the actual byte stream using gpg's
--enarmor and --dearmor commands. (This can be quite useful if
some tool expects only one BEGIN SIGNATURE/END SIGNATURE block
but you want to include multiple signatures and cannot generate
them with a single gpg invocation because the keys are stored
too securely in different places.)</p>
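<p>You can verify the base64 nature directly: decoding the first four characters of the armoured body above yields the first three bytes of the binary packet stream, matching the packet header bytes shown in the dump below (od is used here just to print hex):</p>

```shell
# 'iQIc' are the first four base64 characters of the armoured signature;
# they decode to the three bytes 89 02 1c, the start of the signature packet.
printf 'iQIc' | base64 -d | od -An -tx1
```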
<p>Reading byte streams manually is not much fun, so I wrote
<a href="http://gpg2txt.alioth.debian.org/">gpg2txt</a> some years ago, which can give
you some more information. The above signature looks like the following:</p>
<pre>
89 02 1C -- packet type 2 (signature) length 540
04 00 -- version 4 sigclass 0
01 -- pubkey 1 (RSA)
02 -- digest 2 (SHA1)
00 06 -- hashed data of 6 bytes
05 02 -- subpacket type 2 (signature creation time) length 4
50 F2 AC 50 -- created 1358081104 (2013-01-13 12:45:04)
00 0A -- unhashed data of 10 bytes
09 10 -- subpacket type 16 (issuer key ID) length 8
7F 11 72 03 23 FB 02 DC -- issuer 7F11720323FB02DC
D5 0C -- digeststart 213,12
0F FA -- integer with 4090 bits
02 D0 [....]
</pre>
<p>Now, what does this mean? First, all gpg data (signatures, keyrings, ...)
is stored as a series of blocks (which makes it trivial to concatenate public
keys, keyrings or signatures). Each block has a type and a length.
A single signature is a single block. If you create multiple signatures at
once (by giving multiple -u options to gpg), there are simply multiple blocks one after
the other.
</p><p>Then there is a version and a signature class.
Version 4 is the current format; some really old stuff (or things wanting to be compatible with very old stuff) sometimes still uses version 3.
The signature class denotes what kind of signature it is.
There are roughly two signature classes: a verbatim signature (like this one), or a signature over a clearsigned message. With a clearsigned signature, it is not the file itself that is hashed, but a normalized form that is supposed to be invariant under the usual modifications by mailers. (This is done so people can still read the text of a mail and the recipient can still verify it even if there were some slight distortions on the way.)
</p>
<p>Then come the type of the key used and the digest algorithm used for creating this signature.
</p>
<p>The digest algorithm (together with the signature class, see above)
describes which hashing algorithm is used.
(You never sign a message, you only sign a hash sum.
Otherwise your signature would be as big as your message,
and it would take ages to create, as asymmetric cryptography is necessarily very slow.)</p>
<p>This example uses SHA1, which is no longer recommended:
as SHA1 has shown some weaknesses, it may get broken in the not too distant future, and then it might be possible to take this signature and claim it is the signature of something else.
(If your signatures are still using SHA1, you might want to edit your key preferences and/or set a digest algorithm to use in your ~/.gnupg/gpg.conf.)</p>
<p>Then there is some more information about this signature: the time it was generated and the key it was generated with.</p>
<p>Then, after the first 2 bytes of the message digest (I suppose they were added in cleartext to allow checking whether the message is OK before starting the expensive cryptographic stuff, but they might not be checked anywhere at all), there is the actual signature.</p>
<p>Format-wise the signature itself is the most boring stuff. It's simply one big number for RSA or two smaller numbers for DSA.</p>
<p>One little detail is still missing: what is this "hashed data" and "unhashed data" about?
If the signed digest were only a digest of the message text, then having a timestamp in the signature would not make much sense, as anyone could edit it without making the signature invalid. That is why the digest covers not only the signed message, but also parts of the information about the signature (the hashed parts), though not all of it (the unhashed parts).
</p>
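<p>
If you want to poke at the bytes yourself, the packet-header decoding at the
start of the dump above can be reproduced with plain shell arithmetic
(a small sketch; the values are the ones from the example, an old-format
packet header per the OpenPGP format):
</p><pre>
# First byte 0x89 = 1000 1001: old-format packet header.
# Bits 5..2 are the packet type, bits 1..0 the length type.
tag=0x89
echo $(( (tag >> 2) & 0x0f ))   # packet type: 2 (signature)
# Length type 1 means a two-byte length follows: 0x02 0x1C
echo $(( 0x02 * 256 + 0x1C ))   # length: 540
</pre>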
blablablahttp://blog.brlink.eu/index.html#i59Thu, 29 Nov 2012 23:05:14 +0100http://blog.brlink.eu/index.html#i59
Gulliver's Travels<p>After seeing some book descriptions recently on planet debian,
let me add some short recommendation, too.</p>
<p>Almost everyone has heard about Gulliver's Travels already,
though usually only very cursorily. For example: did you know the book
describes four journeys, and not only the travel to Lilliput?</p>
<p>Given how influential the book has been, that is even more surprising.
Words like &quot;endian&quot; or &quot;yahoo&quot; originate from it.</p>
<p>My favorite is the third travel, though, especially the academy of Lagado,
from which I want to share two gems:</p>
<p>&quot;
His lordship added, 'That he would not, by any further particulars, prevent the pleasure I
should certainly take in viewing the grand academy, whither he was resolved I should go.' He
only desired me to observe a ruined building, upon the side of a mountain about three miles
distant, of which he gave me this account: 'That he had a very convenient mill within half a
mile of his house, turned by a current from a large river, and sufficient for his own family, as
well as a great number of his tenants; that about seven years ago, a club of those projectors
came to him with proposals to destroy this mill, and build another on the side of that mountain,
on the long ridge whereof a long canal must be cut, for a repository of water, to be conveyed up
by pipes and engines to supply the mill, because the wind and air upon a height agitated the
water, and thereby made it fitter for motion, and because the water, descending down a
declivity, would turn the mill with half the current of a river whose course is more upon a
level.' He said, 'that being then not very well with the court, and pressed by many of his
friends, he complied with the proposal; and after employing a hundred men for two years, the
work miscarried, the projectors went off, laying the blame entirely upon him, railing at him
ever since, and putting others upon the same experiment, with equal assurance of success, as
well as equal disappointment.'
&quot;</p>
<p>&quot;I went into another room, where the walls and ceiling were all hung
round with cobwebs, except a narrow passage for the artist to go in and out.
At my entrance, he called aloud to me, 'not to disturb his webs.' He
lamented 'the fatal mistake the world had been so long in, of using
silkworms, while we had such plenty of domestic insects who infinitely excelled
the former, because they understood how to weave, as well as spin.'
And he proposed further, 'that by employing spiders, the charge of dyeing
silks should be wholly saved;' whereof I was fully convinced, when he
showed me a vast number of flies most beautifully coloured, wherewith he fed
his spiders, assuring us 'that the webs would take a tincture from them; and as
he had them of all hues, he hoped to fit everybody’s fancy, as soon as he
could find proper food for the flies, of certain gums, oils, and other
glutinous matter, to give a strength and consistence to the threads.'&quot;
</p>
bookshttp://blog.brlink.eu/index.html#i58Sat, 20 Oct 2012 12:00:53 +0200http://blog.brlink.eu/index.html#i58
Fun with physics: Quantum Leaps<p>A quantum leap is a leap between two states with no state in between. That makes it usually quite small, but also quite sudden (think of lasers).</p>
<p>So a quantum leap is a jump not allowing any intermediate states, i.e.
an &quot;abrupt change, sudden increase&quot;, as Merriam-Webster defines it.
This then got a &quot;dramatic advance&quot;, and suddenly the meaning shifted
from something so small it could not be divided to something quite big.
</p><p>
But before you complain that people use the new common meaning instead of the classic
physics meaning, ask yourself: would you prefer it if people kept talking
about &quot;disruptive&quot; changes to announce they did something big?
</p><p>
Update: I'm using quantum jump in the sense used, for example, in
<a href="http://en.wikipedia.org/wiki/Atomic_electron_transition">http://en.wikipedia.org/wiki/Atomic_electron_transition</a>. If quantum jump means something different to you, my post might not make much sense.
</p>
rantshttp://blog.brlink.eu/index.html#i57Fri, 19 Oct 2012 23:59:59 +0200http://blog.brlink.eu/index.html#i57
Time flies like an arrow<p>
It's now 10 years that I have been a Debian Developer. In retrospect it feels like a very short time,
I guess because not so much in Debian's big picture has changed.
Except that I sometimes have the feeling that fewer people care about users, and more people
instead prefer solutions that incapacitate users.
</p><p>
But perhaps I'm only getting old and grumpy, and striving for systems that enable users
to do what they want was only a stop-gap until there were also open source solutions
for second-guessing what the user should have wanted.
</p><p>
Anyway, thanks to all of you in and around Debian who made the last ten years such
a nice and rewarding experience, and I'm looking forward to the next ten years.
</p>
anniversaryhttp://blog.brlink.eu/index.html#i56Sat, 30 Jun 2012 12:04:39 +0200http://blog.brlink.eu/index.html#i56
ACPI power button for the rest of us<p>The acpi-support maintainer unfortunately decided on 2012-06-21 that
having a script installed by a package to cleanly shut down the computer
should not be possible without having consolekit and thus dbus installed.
</p><p>
So (assuming this package migrates to wheezy, which it most likely will tomorrow)
with wheezy you will either have to write your own event script or install
consolekit and dbus everywhere.
</p><p>
You need two files. You need one in <tt>/etc/acpi/events/</tt>, for example
a <tt>/etc/acpi/events/powerbtn</tt>:
</p><pre>
event=button[ /]power
action=/etc/acpi/powerbtn.sh
</pre><p>This causes a power-button event to call the script <tt>/etc/acpi/powerbtn.sh</tt>,
which you of course also need:
</p><pre>
#!/bin/sh
/sbin/shutdown -h -P now "Power button pressed"
</pre><p>
You can also name it differently, but <tt>/etc/acpi/powerbtn.sh</tt> has the
advantage that the script from acpi-support-base (in case that package was only removed and
not purged) will not call shutdown itself if this file is present.
</p><p>
(And do not forget to restart acpid, otherwise it does not know about your new event script yet.)
</p><p>
For those too lazy, I've also prepared a package <tt>acpi-support-minimal</tt>,
which only contains those scripts (and a postinst restarting acpid to bring
them into effect on installation). It can be fetched via apt-get using
<pre>
deb http://people.debian.org/~brlink/acpi-minimal wheezy-acpi-minimal main
deb-src http://people.debian.org/~brlink/acpi-minimal wheezy-acpi-minimal main
</pre>
<p>or directly from <a href="http://people.debian.org/~brlink/acpi-minimal/pool/main/a/acpi-support-minimal/">http://people.debian.org/~brlink/acpi-minimal/pool/main/a/acpi-support-minimal/</a>.</p>
<p>Sadly the acpi-support maintainer sees no issue at all,
and ftp-master doesn't like such tiny packages (which is understandable, but means
the solution is more than an apt-get away).
</p>
rantshttp://blog.brlink.eu/index.html#i55Wed, 04 Apr 2012 11:07:55 +0200http://blog.brlink.eu/index.html#i55
The wonders of debian/rules build-arch<p>It has taken a decade to get there,
but finally the buildds are able to call <tt>debian/rules build-arch</tt>.</p>
<p>Compare the unfinished old build</p>
<pre>
Finished at 20120228-0753
Build needed 22:25:00, 35528k disc space
</pre>
<p>with the new one on the same architecture finally only building what is needed</p>
<pre>
Finished at 20120404-0615
Build needed 00:11:28, 27604k disc space
</pre>
happinesshttp://blog.brlink.eu/index.html#i54Wed, 28 Dec 2011 11:16:02 +0100http://blog.brlink.eu/index.html#i54
symbol files: With great power comes great responsibility<p>Symbol files are a nice little feature to reduce dependencies of packages.</p>
<p>
Before there were symbols files, libraries in Debian just had shlibs files
(both to be found in <tt>/var/lib/dpkg/info/</tt>).
A shlibs file says for each library which packages to depend on when using that
library. When a package is created, the build scripts will usually call
dpkg-shlibdeps, which looks at which libraries the programs in the package use
and then calculates the needed dependencies.
This means the maintainers of most packages do not have to care what libraries
to depend on, as it is automatically calculated.
And as compiling and linking against a newer version of a library can cause
the program to no longer work with an older library, it also means those dependencies
are correct regardless of which version of a library is compiled against.
</p>
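<p>
For illustration, a shlibs entry is a single line per soname mapping it to a
dependency; a hypothetical <tt>libfoo1</tt> package (names and version made up)
might ship a shlibs file containing:
</p><pre>
libfoo 1 libfoo1 (>= 1.2.3)
</pre>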
<p>
As shlibs files have only one dependency entry per soname, they are also
quite strict: if there is any possible program that would not work with an older
version of a library, then the shlibs file must pull in a dependency on the newer version,
so everything needing that library ends up depending on the newer version.
</p>
<p>As most libraries add new stuff most of the time, most library packages
(except some notably API-stable packages, like for example some X libs)
just chose to automatically put the latest package version in the shlibs file.
</p>
<p>This of course made dependencies quite strict: almost every package
depended on the latest version of all libraries, including libc, so practically no
package from unstable or testing could be used on stable.</p>
<p>To fix these problems, symbols files were introduced.
A symbols file is a file (also installed in <tt>/var/lib/dpkg/info/</tt> alongside
the shlibs file) that gives a minimum version for each symbol found in the library.
</p>
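<p>
For illustration, a symbols file gives one version per symbol; a hypothetical
<tt>libfoo1.symbols</tt> (all names and versions made up) might contain:
</p><pre>
libfoo.so.1 libfoo1 #MINVER#
 do_foo_action@Base 1.0
 some_new_function@Base 1.2
</pre>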
<p>
The idea is that different programs use different parts of a library.
Thus if new functionality is introduced, it would be nice to differentiate
which functionality is used and give dependencies depending on that.
As the only thing programmatically extractable from a binary file is the list of dynamic
symbols used, this is the information used for that.
</p>
<p>
But this also means the maintainer of the library package now has not only one
question to answer (&quot;What is the minimal version of this library a program compiled
against the current version will need?&quot;), but many questions:
&quot;What is the minimal version of this library a program compiled
against the current version and referencing this symbol name will need?&quot;.
</p>
<p>Given the symbols file of the last version of a library package and the libraries in the
new version of the package, there is one way to catch obvious mistakes: if a symbol was
not in the old list but is in the current library, one needs at least the current version
of the library.</p>
<p>So if dpkg-gensymbols finds a symbol not yet listed, it will add it with the current version.</p>
<p>While this will never create dependencies that are too strict, it can sadly have the opposite
effect of producing dependencies that are not strict enough:</p>
<p>Consider for example some library exporting the following header file:</p>
<pre>
enum foo_action { foo_START, foo_STOP};
void do_foo_action(enum foo_action);
</pre>
<p>Which in the next version looks like this:</p>
<pre>
enum foo_action { foo_START, foo_STOP, foo_RESTART};
void do_foo_action(enum foo_action);
</pre>
<p>
As the new enum value was added at the end, the numbers of the old constants did not change,
so the API and ABI did not change incompatibly, and a program compiled against the old version still
works with the new one (that means: upstream did their job properly).
</p>
<p>
But the maintainer of the Debian package faces a challenge: there was no new symbol added,
so dpkg-gensymbols will not see that anything changed (the symbols are the same).
So if the maintainer forgets to manually increase the version required by the <tt>do_foo_action</tt>
symbol, it will still be recorded in the symbols file as needing the old version.
</p>
<p>Thus dpkg will not complain if one tries to install the package containing the program
together with the old version of the library. But if that program is called and calls
<tt>do_foo_action</tt> with argument <tt>2</tt> (<tt>foo_RESTART</tt>), it will not behave
properly.
</p>
<p>To recap:</p>
<ul>
<li>Symbols files are a way to minimize dependencies by looking at which symbols of a library a program uses.</li>
<li>A symbols file lists for each symbol the minimal version of the library a program referencing that symbol needs.</li>
<li>dpkg-gensymbols can detect added functionality if it adds new symbol names</li>
<li>it's the duty of the maintainer to ensure added functionality not visible by symbol references is properly recorded in the symbols file</li>
</ul>
warningshttp://blog.brlink.eu/index.html#i53Fri, 25 Nov 2011 22:19:56 +0100http://blog.brlink.eu/index.html#i53
checking buildd logs for common issues<p>Being tired of feeling embarrassed when noticing some warning in a buildd log
only after having uploaded the package and looked at the buildd logs of the other architectures,
I've decided to write some little infrastructure to scan buildd logs for
common issues.</p>
<p>The result can be visited at
<a href="https://buildd.debian.org/~brlink/">https://buildd.debian.org/~brlink/</a>.
</p>
<p>Note that it currently has only one real check (looking for
<tt>E: Package builds NAME_all.deb when binary-indep target is not called.</tt>)
and additionally two little warnings (<tt>dh_clean -k</tt> and
<tt>dh_installmanpages</tt> deprecation output) which lintian could catch
just as well.</p>
<p>
The large size of the many logs to scan is a surprisingly small problem.
(As some tests indicated it would only take a couple of minutes for a full
scan, I couldn't help running one, only to learn afterwards that the
wb-team was doing the import of the new architectures at that time. Oops!)
</p>
<p>
More surprisingly, using small files to keep track of logs already scanned
does not scale at all with the large number of source packages: the file-system
overhead is gigantic and makes the whole process needlessly IO-bound.
That problem was easily solved by using sqlite to track what is done,
though as buildd.debian.org doesn't have that installed yet, there are no automatic
updates yet. [Update: already installed; there will be some semi-automatic days first, though.]
</p>
<p>
The next thing to do is writing more checks, where I hope for some help
from you: What kind of diagnostics do you know from buildd logs that you
would like to be more prominently visible (hopefully soon on <tt>packages.qa.debian.org</tt>, <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=650039">wishlist item</a> already filed)?
</p>
<p>
A trivial target is everything that can be identified by a regular expression
applied to every line of the buildd log. For such cases the most complicated
part is writing a short description of what the message means.
(So if you send me suggestions, I'd be very happy to also get a short
text suitable for that, together with the message to look for and ideally
some example package having that message in its buildd log.)
</p>
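<p>
Such a line-based check is essentially a grep; a hypothetical sketch (the
function name and patterns here are made up for illustration, not the real
checks):
</p><pre>
#!/bin/sh
# Count lines in a buildd log matching known diagnostics.
check_log() {
	grep -c -E 'dh_clean -k is deprecated|dh_installmanpages is a deprecated' "$1"
}
</pre>
<p>
e.g. <tt>check_log buildd.log</tt> prints how many suspicious lines were found.
</p>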
<p>
I'm also considering some more complicated tests. I'd really like to have
something to detect packages being built multiple times due to Makefile
errors and the like.
</p>
announcementhttp://blog.brlink.eu/index.html#i52Tue, 1 Nov 2011 16:12:33 +0100http://blog.brlink.eu/index.html#i52
File ownership and permissions in Debian packages<p>
As you will know, every file on a unixoid system has some meta-data like owner,
group and permission bits.
This is of course also true for files part of some Debian package.
And it is not very surprising that different files should have different
permissions. Or even different owners or groups.
</p>
<p>
Which file has which settings is of course for the package maintainer to
decide, and Debian would not be Debian if there were no ways for the
user to state their own preferences and have them preserved.
This post is thus about how those settings end up in the package and
what to observe when doing so.
</p>
<p>
As you will also have heard, a <tt>.deb</tt> file is simply a tar
archive stored as part of an <tt>ar</tt> archive, as you can verify by
unpacking a package manually:
</p>
<pre>
$ ar t reprepro_4.8.1-1_amd64.deb
debian-binary
control.tar.gz
data.tar.gz
$ ar p reprepro_4.8.1-1_amd64.deb data.tar.gz | gunzip | tar -tvvf -
drwxr-xr-x root/root 0 2011-10-10 12:05 ./
drwxr-xr-x root/root 0 2011-10-10 12:05 ./etc/
drwxr-xr-x root/root 0 2011-10-10 12:05 ./etc/bash_completion.d/
-rw-r--r-- root/root 19823 2011-10-10 12:05 ./etc/bash_completion.d/reprepro
drwxr-xr-x root/root 0 2011-10-10 12:05 ./usr/
drwxr-xr-x root/root 0 2011-10-10 12:05 ./usr/share/
--More--
</pre>
<p>
(For unpacking stuff from scripts, you should of course use
<tt>dpkg-deb --fsys-tarfile</tt> instead of <tt>ar | gunzip</tt>.
The above example is about the format, not a recipe for unpacking files.)
</p>
<p>
This already explains how the information is usually encoded in the package:
a tar file contains that information for each contained file, and dpkg
simply uses that information.
</p>
<p>(As tar stores numeric owner and group information, that limits group
and owner information to users and groups with fixed numbers, i.e. 0-99.
Other cases will be covered later.)
</p>
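<p>
You can see that the stored ownership is independent of who runs tar; a small
sketch with GNU tar (the file names here are just examples):
</p><pre>
cd "$(mktemp -d)"
touch example
# Force root ownership in the archive without being root:
tar --owner=root --group=root -cf example.tar example
tar -tvf example.tar    # listing shows root/root as owner/group
</pre>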
<p>
The question for the maintainer is now: where does the information about which file
has which owner/group/permissions in the <tt>.tar</tt> inside the <tt>.deb</tt> come from?
The answer is simple:
it is taken from the files that are put into the <tt>.deb</tt>.
</p>
<p>This means that package tools could initially be implemented by simply
calling tar, and there is no imminent need to write your own tar generator.
It also means that the maintainer has full control and
does not have to learn new descriptive languages or tools to change permissions,
but can simply put the usual shell commands into <tt>debian/rules</tt>.
</p>
<p>There are some disadvantages, though: a normal user cannot change ownership
of files, and one has to make sure all files have proper permissions and owners.</p>
<p>This means that <tt>dpkg-deb -b</tt> (or the usually used wrapper <tt>dh_builddeb</tt>)
must be run in some context where you can change file ownership to root first.
So you either need to be root, or at least fake being root using fakeroot.
(While this could be considered an ugly workaround, it also means upstream's
<tt>make install</tt> is run believing it is root, which also avoids some --
for a packager -- quite annoying automatisms in upstream build scripts that assume
a package is not being installed system-wide if it is not installed as root.)
</p>
<p>Another problem is random build-host characteristics changing how files
are created in the directory later given to <tt>dpkg-deb -b</tt>,
for example an umask which might make all files non-world-readable by default.
</p>
<p>The usual workaround is to first fix up all those permissions.
Most packages use <tt>dh_fixperms</tt> for this, which also sets executable
bits according to some simple rules and has some more special cases, so that
the overall majority of packages do not need to look at permissions at all.
</p>
<p>So with a <tt>debhelper</tt> setup, all special permissions and
all owner/group information for owners and groups with fixed numbers only need
to be set using the normal command-line tools between <tt>dh_fixperms</tt>
and <tt>dh_builddeb</tt>.
Everything else happens automatically.
Note that <tt>games</tt> is a group with a fixed gid, so it is not necessary
(and usually a bug) to change group-ownership of files within the package
to group games in maintainer scripts (<tt>postinst</tt>, ...).
</p>
<p>
If a user wants to change permissions or ownership of a file,
dpkg allows this using the <tt>dpkg-statoverride</tt> command.
This command essentially manages a list of files to get special
treatment and ownership and permission information they should get.
</p>
<p>This way a user can specify that files should have different permissions,
and this setting is applied whenever a new version of the file is installed by
dpkg.</p>
<p>Being a user setting especially means that packages (that is, their
maintainer scripts) should not usually use dpkg-statoverride.</p>
<p>There are two exceptions, though: Different permissions based on
interaction with the user (e.g. asking question with debconf) and
dynamically allocated users/groups with dynamic id.</p>
<p>In both cases one should note that settings in dpkg-statoverride
are settings of the user, so the same care should be given as to files
in <tt>/etc</tt>; especially, one should never override something the user
has set in there. (I can think of no example where calling
<tt>dpkg-statoverride --add</tt> without <tt>dpkg-statoverride --list</tt>
in some maintainer script is not a serious bug:
either you override user settings or you are using debconf as a registry.)
</p>
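<p>
The safe pattern for the exceptional cases thus first checks for an existing
setting; a hypothetical maintainer-script fragment (user, group, mode and path
are made-up examples):
</p><pre>
#!/bin/sh
# Only add an override if the admin has not already set one.
add_override_if_unset() {
	# $1=user $2=group $3=mode $4=path
	if ! dpkg-statoverride --list "$4" >/dev/null 2>&1
	then
		dpkg-statoverride --add "$1" "$2" "$3" "$4"
	fi
}
# e.g.: add_override_if_unset root games 2755 /usr/lib/foo/foo-helper
</pre>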
<p>
Moral
</p>
<p>
To recap, your package is doing something wrong if:
</p>
<ul>
<li>
Your maintainer scripts unconditionally change permissions of files
within the package (those should either be set within the package, or
you are missing user configuration in dpkg-statoverride).
</li>
<li>
Your maintainer scripts call <tt>dpkg-statoverride --add</tt>
without having checked first whether the user has already done so
with different values.
</li>
</ul>
explanationshttp://blog.brlink.eu/index.html#i51Thu, 01 Sep 2011 11:42:10 +0200http://blog.brlink.eu/index.html#i51
New website, multilingual blog<p>
My new homepage is slowly taking shape.
The look of the html version of this blog has therefore been
modernized a bit. (And unfortunately all the perma-links change.)
</p>
<p>
The rss generation has also become a bit more complicated: there are
now multiple rss feeds, so that I can also write something, or write
in a language, that is not well suited for Planet Debian.
</p>
metahttp://blog.brlink.eu/index.html#i50Mon, 01 Aug 2011 12:12:44 +0200http://blog.brlink.eu/index.html#i50
About feature branches for patch handling and reverting to old states.Now that the debconf videos are available (big <b>THANKS</b> to the video team),
I was able to watch <a href="http://penta.debconf.org/dc11_schedule/events/728.en.html">the talk about packages in git at Debconf11</a>
and wanted to share some insights:
<ul>
<li>Feature branches<br>
I've looked a long time into topgit, and like many others gave up.
I think there is one big problem: it is not compatible with git.
<br>
Git tracks the history of commits. Everything that is in the history
is assumed to be already applied. So if you merge something where
a change was applied and then reverted with something where the change is applied,
the logical conclusion for git is to not apply it (or rather, to also
apply the revert).<br>
This means that if you have multiple feature branches on
top of each other, you cannot easily remove a branch from under the others.
You can also not easily switch the order of patches. There are many ideas
floating around on how to work around this, but I personally would not
wait until someone manages to finish topgit or some replacement of it with
that functionality.
</li>
<li>Reverting to an older state or branching off<br>
The other problem with many feature branches is that while, due to
aggressive merging, you can fast-forward to a newer state, reverting
to an older state of all the branches is relatively complicated.
<br>
With the &quot;rebased patch branch merged into your debian branch history&quot;
approach of <a href="http://git-dpm.alioth.debian.org/">git-dpm</a> on the
other hand, branching off a state is extremely simple:
<br>
<tt>git checkout -b squeeze reprepro-debian-4.2.0-2</tt>
<br>
for example is the only thing that was needed to branch off a branch for
stable updates. Reverting to a previous state with <tt>git reset --hard</tt>
is not much more complicated (at most you need to remove the upstream and patched
branches; if they are needed they will be recreated anyway).
</li>
</ul>
answershttp://blog.brlink.eu/index.html#i49Tue, 05 Apr 2011 08:40:53 +0200http://blog.brlink.eu/index.html#i49
Paternalism and Freedom<p>
As seen in some mailinglist discussion:
</p>
<blockquote>
<p>
&quot;It seems to be a common belief between some developers that users should
have to read dozens of pages of documentation before attempting to do
anything.
</p>
<p>
&quot;I’m happy that not all of us share this elitist view of software. I
thought we were building the Universal Operating System, not the
Operating System for bearded gurus.&quot;
</p>
</blockquote>
<p>
I think this is an interesting quote, as it shows an important misunderstanding of
what Debian is for many people.
</p>
<p>
Debian (and Linux in general) was in its beginnings quite complicated and
often not very easy to use. People still felt deeply relieved to have it, and
a strong love for it. Why?
</p>
<p>
Because it's not so much about how you can use it, but how you can fix it.
</p>
<p>
A system that only has a nice surface and hides everything below it, that in
a majority of cases does just what you most likely want, is nice.
But if the only options are &quot;On&quot;, &quot;Off&quot; and perhaps some
&quot;something is not working as it should, try to fix it&quot;
(aka &quot;Repair&quot;), it is essentially a form of paternalism:
there is your superior that decided what is good for you;
you would not understand it anyway, just swallow what you get.
</p>
<p>
Not very surprisingly, many people do not like to be in the position of the inferior of a computer
(the more so the more stupid the computer is, and even modern computers are still
stupid enough for most people).
</p>
<p>
So what those people want is not necessarily a system that can only be used
after reading a dozen pages of documentation, but a system they know they can
force to do what they want even if that might then mean reading some pages of
documentation.
</p>
<p>
So good software in that sense might have a nice interface and defaults
working most of the time. But more importantly it has good documentation,
internals simple enough that one can grasp them, and it is transparent enough that one
can understand why it is currently not working and what to do about it,
and then it allows enough user interference to fix it.
</p>
<p>
If all I get offered is some &quot;interface for users too stupid to understand it anyway&quot;, and all the options to fix it are checking and unchecking all the boxes and restarting a lot, or perhaps some gracious offer of &quot;there is the source code, just read that spaghetti code, you can see there everything it does, though you might need to build a debug version just to see why it does not work&quot;, then I would not call any strong feelings against this situation &quot;elitist&quot;.
</p>
rantshttp://blog.brlink.eu/index.html#i48Fri, 17 Dec 2010 16:33:11 +0100http://blog.brlink.eu/index.html#i48
C Code to avoid<p>
One of the bad aspects of the C programming language is that it silently
accepts many bad C programs. Together with the widespread use of an architecture
that is very bad at catching errors (i386), this sometimes leads to common
idioms that only work accidentally. This is bad, as they often break
on other architectures and can break with every new optimization or feature
the compiler adds.
</p>
<p>Take for example a look at the code <a href="http://www.outflux.net/blog/archives/2010/12/16/gcc-4-5-and-d_fortify_source2-with-header-structures/">there</a> (I tried to leave a comment there but did not succeed):
</p>
<p>
If you see something like this:
</p>
<pre>
char buffer[1000];
struct thingy *header;
header = (struct thingy *)buffer;
</pre>
<p>
then it is time to run. I hope you do not depend on this software,
because it is pure accident if this is doing anything at all.
</p>
<p>
While you can cast a <tt>char *</tt> to a struct pointer, that is only allowed
if that memory actually was this struct (or one compatible with it, like a struct
with the same initial part when you are only accessing that part).
</p>
<p>
In this case it obviously was not (it's just an array of <tt>char</tt>),
so you might see bus errors or random values if the compiler does no
optimizations and you are on an architecture where alignment matters.
Or the compiler might optimize it to whatever it wants to, because the
C compiler is allowed to do anything with code like that.
</p>
<p>
The next problem is the one that post was about: You are not allowed
to access an array after its end. Something like
</p>
<pre>
struct thingy {
int magic;
char data[4];
};
</pre>
<p>
means you may only access the first 4 bytes of data.
If you access more than that it may work now on your machine,
but it can stop tomorrow with the next revision of the compiler
or on another machine.
</p>
<p>
If you have a struct with a variable-length data block, then you
can use the C99 flexible array member <tt>char data[]</tt> or the old
gcc extension <tt>char data[0]</tt>. Or you can use unions.
(Or in some cases rely on the defined behavior of structs with the same initial
parts.)
</p>
<p>
If you use C code with undefined semantics, then every new compiler
might break it with some optimization.
There is often the tempting option of just using slightly different
code that currently works. But in the not too distant future the compiler
(or even some processor) might again get some new optimizations and the
code will break again. Fixing it properly might be harder, but it is less
likely to break again, and it also reduces the chances that
it will not fail to compile but simply do something you did not expect.
</p>
rantshttp://blog.brlink.eu/index.html#i47Wed, 06 Oct 2010 17:39:05 +0200http://blog.brlink.eu/index.html#i47
git-dpm 0.3.0<p>
I've just uploaded <a href="http://git-dpm.alioth.debian.org/">git-dpm</a>
0.3.0-1 packages to experimental.
</p>
<p>
Apart from many bugfixes (I will also take a look at whether I can make an
0.2.1 version targeting squeeze, though the freeze requirements tend to
get tighter and tighter, so I may already be too late), the biggest improvement
is the newly added <tt>git-dpm dch</tt> command, which spawns <tt>dch</tt> and then
extracts something to be used for the git commit message (I prefer to
have more control over debian/changelog, so I prefer this direction over the
other one).
</p>
announcementhttp://blog.brlink.eu/index.html#i46Sun, 15 Aug 2010 11:28:14 +0200http://blog.brlink.eu/index.html#i46
common inefficient shell code<p>
There is hardly any use in:
</p>
<pre>
cat filename | while ...
do
...
done
</pre>
<p>
Just do:
</p>
<pre>
while ...
do
...
done < filename
</pre>
<p>
If you want the while loop to run in a subshell, use some parentheses,
but you do not need the <tt>cat</tt> at all.
</p>
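<p>
A small sketch of the subshell variant (the input here is a here-document
just to keep the example self-contained; note that variables set inside the
parentheses do not leak out):
</p>

```shell
# The loop runs in a subshell: count is incremented only in there.
count=0
( while read -r line ; do
    count=$((count+1))
  done
  echo "inside: $count" ) <<EOF
first line
second line
EOF
# back in the parent shell, count is still 0
echo "outside: $count"
```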
<p>
Another unnecessary inefficient idiom often seen is
</p>
<pre>
foo="$(echo "$bar" | sed -e 's/|.*//')"
</pre>
<p>
which can be replaced with the less forky
</p>
<pre>
foo="${bar%%|*}"
</pre>
<p>
Similarly there is
</p>
<pre>
foo="${bar%|*}"
</pre>
<p>
as a short and fast variant of
</p>
<pre>
foo="$(echo "$bar" | sed -e 's/|[^|]*$//')"
</pre>
<p>
and the same with <tt>#</tt> instead of <tt>%</tt>
for removing stuff from the beginning.
(Note that both are POSIX; only the <tt>${name/re/new}</tt> form
not discussed here is bash-specific.)
</p>
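<p>
A little example showing all four forms side by side (the variable content
is made up for illustration):
</p>

```shell
bar='alpha|beta|gamma'
echo "${bar%%|*}"   # longest suffix matching |* removed:  alpha
echo "${bar%|*}"    # shortest suffix removed:             alpha|beta
echo "${bar#*|}"    # shortest prefix matching *| removed: beta|gamma
echo "${bar##*|}"   # longest prefix removed:              gamma
```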
rantshttp://blog.brlink.eu/index.html#i45Mon, 02 Aug 2010 17:31:39 +0200http://blog.brlink.eu/index.html#i45
git-dpm 0.2.0 with import-dsc<p>I've uploaded git-dpm version 0.2.0.</p>
<p>Most notable change and the one which could need some testing is
the new <tt>git-dpm import-dsc</tt>, which will import a <tt>.dsc</tt>
file and try to import the patches found into git.</p>
callforhelphttp://blog.brlink.eu/index.html#i44Sat, 29 May 2010 20:05:57 +0200http://blog.brlink.eu/index.html#i44
Ghostscript brain-dead...<p>
Some little warning to everyone using ghostscript:
</p>
<p>
Ghostscript always looks for files in the current directory first,
including a file that is always executed first (before any safe mode
is activated).
</p>
<p>
So by running ghostscript in a directory you do not control, you might
execute arbitrary stuff.
</p>
<p>
Two things make this worse:
</p>
<ul><li>When viewing a pdf file with gv,
gv always changes into the directory where the file is,
so don't use it to view files in such directories.
</li>
<li>The option to disable this behaviour (<tt>-P-</tt>) does not
seem to work at all.</li>
</ul>
<p>
You have been warned.
</p>
warninghttp://blog.brlink.eu/index.html#i43Fri, 19 Feb 2010 13:43:47 +0100http://blog.brlink.eu/index.html#i43
reprepro 4.1.0 and new Packages.diff generation<p>
I've just released reprepro 4.1.0 and uploaded it to unstable.
</p>
<p>
The most noteworthy change and the one where I need your help
is that the included <tt>rredtool</tt> program can now generate
<tt>Packages.diff/Index</tt> files when used as export-hook by reprepro.
(Until now you had to use the included <tt>tiffany.py</tt> script,
which is a fork of the script the official Debian archives use.
That script is still included in case you prefer the old method).
</p>
<p>
So instead of
</p>
<pre>
DscIndices: Sources Release . .gz tiffany.py
DebIndices: Packages Release . .gz tiffany.py
</pre>
<p>
you can now use
</p>
<pre>
DscIndices: Sources Release . .gz /usr/bin/rredtool
DebIndices: Packages Release . .gz /usr/bin/rredtool
</pre>
<p>
to get the new diff generator.
</p>
<p>
The new diff generator has an important difference to the old one:
it merges patches, so every client should only need to download and
apply a single patch instead of multiple ones after each other, thus reducing
the disadvantages of Packages.diff files a bit (and sometimes even reducing
the amount of data to download considerably).
</p>
<p>
While reprepro and apt-get seem to work (thanks to carefully working around
bugs/shortcomings of older versions of apt), I don't know if there are other
consumers of those files that could be surprised by this. If you know any, I'd
be glad if you could test them or tell me about them.
</p>
callfortestershttp://blog.brlink.eu/index.html#i42Sat, 09 Jan 2010 16:13:56 +0100http://blog.brlink.eu/index.html#i42
git-dpm now as alioth project<p><a href="http://git-dpm.alioth.debian.org/git-dpm.html">Git-dpm</a> can now be found at <a href="http://git-dpm.alioth.debian.org/">http://git-dpm.alioth.debian.org/</a>
and the source at <tt>git://git.debian.org/git/git-dpm/git-dpm.git</tt></p>
<p>Functionality should now be mostly complete, so testers are really needed now.</p>
announcementhttp://blog.brlink.eu/index.html#i41Sun, 03 Jan 2010 11:00:00 +0100http://blog.brlink.eu/index.html#i41
Alpha testers wanted<p>
If you ever tried to determine what patches other distributions apply to
some package you are interested in,
you might have come to the same conclusion as I did:
it is quite an impudence how those are presented.
</p>
<p>
If you don't give up, you end up with programs or scripts to extract many
proprietary source package formats, and with more <a href="http://en.wikipedia.org/wiki/Revision_control">VCS</a> systems installed than you thought should exist.
</p>
<p>
That's when you start to love the concept that every Debian mirror has, next
to each binary package, the source in a format from which you can extract the
changes easily with only the tools you find on every unixoid system.
And that's why I love the new (though in my eyes quite misnamed) &quot;3.0 (quilt)&quot;
format, because it makes this even clearer and easier.
</p>
<p>
Sadly one problem remained: How to generate and store those patches?
</p>
<p>
While you can just use patches manually or use quilt to handle those
patches and store the result in a VCS of your choice, the newfangled
VCSes (especially git) have become quite good at managing,
moving and merging changes around,
so it seems quite a waste not to be able to use this to handle
those patches easily, too.
</p>
<p>
One can either use git to handle a patch set, by storing it
as a chain of commits and using the interactive rebase, or use git
to store the history of your package; but doing both at the same time
is tricky and not reasonably doable with the porcelain git provides.
</p>
<p>
Thus I wrote my own tool to facilitate git for both tasks at the same
time. The idea is to have three branches: a branch storing the history
of your package, a branch storing your patches in a way suitable
to submit them upstream or to create a <tt>debian/patches/</tt> directory
from, and a branch with the upstream contents.
</p>
<p>
I've an implementation which seems to already work, though I am sure
there is still much to improve and many errors and pitfalls still to find.
</p>
<p>
Thus if you would also like to experiment with handling patches of a Debian
package in git, take a look at the <a href="http://alioth.debian.org/~brlink/git-dpm/git-dpm.html">manpage</a> or the program at
<tt>git://git.debian.org/~brlink/git-dpm.git</tt><br>
(WARNING: as stated above: alpha quality; also places are temporary and are likely to change in the future).
</p>
callforhelphttp://blog.brlink.eu/index.html#i40Fri, 23 Oct 2009 20:43:02 +0200http://blog.brlink.eu/index.html#i40
I'll never understand why some people consider it acceptable to depend on udev<p>
This is just a reminder for all of you that have packages that depend on the udev package: I hate you.
</p>
<p>
A Debian package depending on the udev package (with very few exceptions like
for example the initramfs-tools package that actually uses udev) is
so wrong.
</p>
<ul>
<li>The majority of those packages will work just fine without it.
All that is lost is some autoconfiguration of devices, which can
properly be done by hand. So even if you feel like it would not
be the same without, that is at most a reason for a Recommends.
</li>
<li>Depending on a package whose sole purpose is to heavily modify
the behavior of the system and especially what modules are loaded
at boot time and how device nodes are named is a no-go in my eyes.
<br>
Installing udev on a system not having udev installed has a realistic
chance of breaking something or even making the system totally
unbootable. (That is no bug in udev; its main purpose is to load
all the stuff and do much guessing. Unless you are able to write
omniscient code, there is no way for such a system to know what has
to be changed to keep the system bootable with udev suddenly
(re)installed, in <b>all cases</b>.)
</li>
<li>Before anyone wants to start to argue with some hypothetical
uninformed user that could be confused by some magic not working:
please switch on your brain and reconsider.<br>
Udev has been installed in new installs for ages.
Lenny shipped without yaird, so it is all but easy to
have a system without udev running (if you want to have a kernel installed).
So if there is any user without udev installed who ends up with
your package installed, that system was either modified in time-consuming
ways to not have udev, or never had it and was upgraded multiple
times (thinking of a squeeze package), with each upgrade involving additional
complex steps.
<br>
What do you think are the chances on such a system for a dependency
on udev to do any good? And what are the chances for it to cause
a catastrophic failure?
</li>
</ul>
rantshttp://blog.brlink.eu/index.html#i39Sat, 03 Oct 2009 15:19:02 +0200http://blog.brlink.eu/index.html#i39
An argument for symbol versioning<p>
A little example of why it is nice to have symbol versioning in libraries.
Save the following as test.sh. Call without arguments: segfault; call with
argument &quot;half&quot;: segfault; call with argument &quot;both&quot;: works.
</p>
<pre>
#!/bin/sh
cat &gt;s1.h &lt;&lt;EOF
extern void test(int *);
#define DO(x) test(x)
EOF
cat &gt;libs1.c &lt;&lt;EOF
#include &lt;stdio.h&gt;
#include "s1.h"
void test(int *a) {
printf("%d\n", *a);
}
EOF
cat &gt;libs1.map &lt;&lt;EOF
S_1 {
global:
test;
};
EOF
cat &gt;s2.h &lt;&lt;EOF
extern void test(int);
#define DO(x) test(*x)
EOF
cat &gt;libs2.c &lt;&lt;EOF
#include &lt;stdio.h&gt;
#include "s2.h"
void test(int a) {
printf("%d\n", a);
}
EOF
cat &gt;libs2.map &lt;&lt;EOF
S_2 {
global:
test;
};
EOF
cat &gt;a.h &lt;&lt;EOF
void a(void);
EOF
cat &gt;liba.c &lt;&lt;EOF
#include "s.h"
#include "a.h"
void a(void) {
int b = 4;
DO(&amp;b);
}
EOF
cat &gt; test.c &lt;&lt;EOF
#include "a.h"
#include "s.h"
int main() {
int b = 3;
DO(&amp;b);
a();
return 0;
}
EOF
rm -f liba.so libs.so* test s.h
if test $# -le 0 || test "x$1" != "xboth" ; then
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 libs1.c
else
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 -Wl,-version-script libs1.map libs1.c
fi
if test $# -le 0 ; then
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 libs2.c
else
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 -Wl,-version-script libs2.map libs2.c
fi
ln -s libs.so.1 libs.so
ln -s s1.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
gcc -Wall -O2 test.c -L. -ls -la -o test
rm libs.so s.h
ln -s libs.so.2 libs.so
ln -s s2.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
LD_LIBRARY_PATH=. ./test
</pre>
stuff-to-rememberhttp://blog.brlink.eu/index.html#i38Tue, 28 Jul 2009 17:43:20 +0200http://blog.brlink.eu/index.html#i38
Call for xtrace testersI've just released xtrace pre-release 1.0.0~alpha1, to be found at
<a href="http://alioth.debian.org/frs/?group_id=30990">http://alioth.debian.org/frs/?group_id=30990</a> and soon in experimental.<br>
The biggest change is no longer having protocol specifications compiled in but
read at run-time.<br>
So it would be nice if you could test the new version if you have used one of
the old ones. (Or if you have not used them but are interested in what some X11
program sends over the socket.)
callforhelphttp://blog.brlink.eu/index.html#i37Fri, 16 Jan 2009 11:04:19 +0100http://blog.brlink.eu/index.html#i37
Multiple filesystems for the paranoid<p>
Given the current discussion on <a href="http://planet.debian.org/">planet.debian.org</a>
about having only one or multiple file-systems,
I just wanted to add a plea for having multiple filesystems.
</p>
<p>
In my (perhaps a bit overly paranoid) eyes, having multiple filesystems is mainly
a security measure. I prefer having enough partitions so that the following properties
hold:
</p>
<ul>
<li>No unprivileged user has write access to a directory on the same
filesystem as system programs are.
Otherwise someone could just hardlink all suid programs and wait.
There is a high chance that some of them will have a security problem some day.
Security and normal updates will install newer versions, fixing some bugs
before enough people know them well enough to exploit them, or even before
anyone at all knows they are there; but the old, vulnerable binary survives
as the user's hardlink. To protect against this, one could do scans for
suid-root files in user-writable directories, or one just removes the
possibility of this attack vector.
</li>
<li>No unprivileged user has write access anywhere that is not mounted
nodev and nosuid. In theory that should make no difference, because
the user cannot create anything evil anyway. But in practice there were
enough exploits with core files and other stuff leading to a suid-root
executable suddenly being around. (And do not forget <tt>/proc</tt>, <tt>/sys</tt>, <tt>/lib/init/rw</tt>,
<tt>/proc/bus/usb</tt>, ...)
</li>
<li>If it is a server that interacts with untrusted users, there should
be no place those daemons can write to that is not mounted noexec.
While noexec is only a weak defense (scripts could still be executed by
calling the interpreter with the script as argument), it gives a little
bit more protection and has no real downsides. The only noticeable one is
that apt's preconfiguration of debconf-using packages will not work; they
will have to ask their questions when being configured.
Again, it does not help much when not applied consistently: do not forget
<tt>/var/tmp</tt> and all the little tmpfs filesystems the system adds somewhere.
</li>
</ul>
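<p>
For illustration, a hypothetical <tt>/etc/fstab</tt> excerpt implementing the
properties above (device names and filesystem types are made up):
</p>

```
# no user-writable directory shares a filesystem with system programs,
# and everything user- or daemon-writable is nodev,nosuid
# (plus noexec where untrusted daemons write):
/dev/sda5  /home     ext3  defaults,nosuid,nodev         0  2
/dev/sda6  /tmp      ext3  defaults,nosuid,nodev,noexec  0  2
/dev/sda7  /var/tmp  ext3  defaults,nosuid,nodev,noexec  0  2
proc       /proc     proc  defaults,nosuid,noexec        0  0
```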
<p>
Admittedly, those arguments may not be as convincing for a laptop as for a server.
But I personally like to have paranoia enacted everywhere. Uniformity makes
life much easier sometimes.
</p>
<p>
Update: If having paranoid in the title was not enough to hint that I do not
claim a system loses a significant amount of security by skipping this in
favor of more important measures, let it be told to you now. It's all about
thinking about even the little things and taking measures where they do not
otherwise harm. To get the warm fuzzy feeling I got when e.g. CVE-2006-3626
was found and my computers already had nosuid set for <tt>/proc</tt>. ;-&gt;
</p>
paranoiahttp://blog.brlink.eu/index.html#i36Thu, 18 Dec 2008 21:26:32 +0100http://blog.brlink.eu/index.html#i36
If it is chaotic and late, we all are at fault<p>
I think we all in Debian agree that the current discussion and the votes
are a cruel mess.
But if anyone wants to blame anyone else for this, please consider some facts:
</p>
<p>
The outcome of the vote depends on what is to be voted on. If the vote
is &quot;Eat shit or die&quot; a majority of people might choose the shit.
That's why our constitution allows everyone to amend the vote, to offer more
options, so that people can vote for what they actually want.
This might get messy, especially if the process is chaotic.
It can only work if people take the time and consideration to discuss the
suggestions long enough to get them into sane shape.
But if you rush it, it will get messy.
</p>
<p>
That there is so much haste currently is the fault of us all.
Of course if more people had worked on the firmware issues, we would not
have this problem. But this is not the fault of one side. The other side
could have worked on that too.
</p>
<p>
Some &quot;But I have nothing against firmware in the kernel.&quot; is as little
an excuse for not working on it as &quot;I do not need kernels for hardware
without free firmwares&quot;.
</p>
<p>
Because the outcomes of the last votes for sarge and etch made clear that
just having the stuff in the kernel is no solution, everyone who did not
propose a GR to allow non-free firmware more than half a year ago and did
not work on easing things for users needing non-free firmware either has
to admit that it is his fault by omission, as much as it is the fault of
those not wanting the firmware in there who did not do more to get rid of it.
Or he has to admit he willfully did nothing, in order to now take
the release hostage for his goals.
</p>
<p>
That said, I also want to speak against the &quot;lenny without firmware will be
totally unusable&quot; claim. I didn't look at the details. But when I, in the last half
year, had some servers that needed some firmware that was not even in the kernel
or on the installation media, I was extremely surprised how easy it was to put it there
and how everything went correctly without much thinking; the installer just copied
the needed files directly onto the installed system. The initrd generator must have
included them somehow (for it was firmware for the sata card, and it actually boots).
And I think it might not even be needed on the installation media, but might
also be inserted by some other means. (But putting it in the initrd of the netboot
installer was just so easy that I tried nothing else.)
</p>
<p>
Some post-scriptum: I personally would have deemed it cleaner had
Peter Palfrader's proposal not been made an amendment to the other vote.
But if it had been handled otherwise, I definitely would have suggested an amendment
to it (and perhaps some others, too). So do not think it would have made things
much faster or easier to grasp.
</p>
<p>
Another post-scriptum: It's the job of our secretary to protect and interpret the
constitution. The only thing I wonder, looking at the current discussions, is why
political partisans in some western countries have not yet got the idea of recalling
judges whose job it is to protect the constitution. Perhaps because in that setting
it would sound just too absurd...
plea-for-sanitityhttp://blog.brlink.eu/index.html#i35Sat, 13 Dec 2008 17:46:54 +0100http://blog.brlink.eu/index.html#i35
Ever wondered about java windows staying empty in some WMs?<p>
It's a longstanding bug that java programs show empty gray windows
when being used in many window managers.
</p>
<p>
As there is OpenJDK now, I thought: It's free software now, so look
at it and perhaps there is a way to fix it. As always, looking at
java related stuff is a big mistake, but the code in question speaks
volumes. The window configure code has:
</p>
<pre>
if (!isReparented() && isVisible() && runningWM != XWM.NO_WM
&& !XWM.isNonReparentingWM()
&& getDecorations() != winAttr.AWT_DECOR_NONE) {
insLog.fine("- visible but not reparented, skipping");
return;
}
</pre>
<p>
and if you wonder how it detects if there is a non-reparenting window manager,
it does it by:
</p>
<pre>
static boolean isNonReparentingWM() {
return (XWM.getWMID() == XWM.COMPIZ_WM || XWM.getWMID() == XWM.LG3D_WM);
}
</pre>
<p>
Yes, it really has a big list of 12 window managers built in for which it tests.
And this is not the only place where it has special cases for some of them;
it does so all the time in different places.
</p>
<p>
But what Sun did not think about: there are more than 12 window managers out there.
And with this buggy code it would need a list of every single one not doing
reparenting (like ratpoison and, if I read the bug reports correctly, also
awesome, wmii and a whole list of quite popular ones, too).
</p>
<p>
Or it means that you are not supposed to run graphical java applications unless you
use openlook, mwm (motif), dtwm (cde), enlightenment,
kwm (kde), sawfish, icewm, metacity, compiz or lookinglass or no window manager
at all.
</p>
<p>
As I had not realized that the old workaround of AWT_TOOLKIT=MToolkit
no longer works in lenny until reading some debian-release mail,
which means I haven't used any graphical java program for a long time,
it seems I have decided for the latter.
</p>
<p>
P.S.: I've sent a patch so that one can at least manually tell java that one would
like to see windows' contents, as <a href="http://bugs.debian.org/508650">b.d.o/508650</a>
</p>
rantshttp://blog.brlink.eu/index.html#i34Sat, 11 Oct 2008 18:49:32 +0200http://blog.brlink.eu/index.html#i34
Iceweasel 3<p>
Trying to get prepared for lenny, the new iceweasel annoys me more and more.
</p>
<ul>
<li>
Problem 1: How do I globally set a program to use for <tt>mailto:</tt> links?
<br>
The old solution no longer works, and the only workaround I found is
placing a <tt>mimeTypes.rdf</tt> with the right contents in
<tt>/etc/iceweasel/profile</tt>, but that of course only works for new accounts,
not when upgrading accounts...
</li>
<li>
Problem 2: Is there a way to get the Xprint interface of iceweasel back?
<br>
It still suggests xprint, but I see no way not to get those gtk printing
dialog boxes. Those of course only show &quot;Print to file&quot; because I
am using lprng and someone decided to disable the lpr backend in gtk by default.
(Those printing backend modules in gtk seem like a nice thing at first.
I could easily write one to show the printers in <tt>/etc/printcap</tt> and
have controls for all the options of the printer's PPD file in there,
but that only helps to make the dialog better.)
But the postscript it creates is just broken beyond description. No need to
speak about doing color all the time (looks like there is no way to tell cairo
to make a surface grayscale), or embedding all the fonts, but it tries to
do the spacing of characters on its own and fails miserably at it.
The fonts it embeds also have some strangenesses, or the pswrite method
of the new ghostscript has problems, too.
<br>
Another problem with the gtk print dialog in iceweasel is the print
setup dialog box. It always defaults to letter, and I found no way to
globally change this. Here most likely the fault is somewhere in the glue
code, given that a first glance really looks like someone only knowing
letter and executive wrote it...
</li>
<li>I'm sure there were others, but those I forgot over the more urgent ones...
</li>
</ul>
call for helphttp://blog.brlink.eu/index.html#i33Tue, 23 Sep 2008 19:58:24 +0200http://blog.brlink.eu/index.html#i33
Phony stamp files in debian/rules<p>
As I wrote in <a href="http://pcpool00.mathematik.uni-freiburg.de/~brl/blog/index.html#29">blog item 29</a>
there are many ways to break your <tt>debian/rules</tt> file.
As I grew tired of seeing those and many more, I decided to write a
lintian test for this.
</p><p>
Getting that finished will still need several days, as the general
Makefile syntax is quite interesting in its details, and lintian is written
in perl, thus so have to be the tests. It's quite interesting that the different
cases of when variables are resolved and when not seem to quite firmly
force a specific way to parse it. (And relearning perl, when I so successfully
unlearned all parts of that language in the past, does not make it any faster.)
</p><p>
Anyway, the reason I'm blogging is to give you already the results of one
particular test a preliminary version gave when running over the lintian lab:
It checks for targets with <tt>-stamp</tt> in their name that are phony,
as that makes no sense.
It will either cause configure or make to run multiple times via build, wasting
buildd cycles or even making the build more unstable, or it just indicates
needlessly complex Makefiles (having an <tt>install</tt> target that
invokes an <tt>install-stamp</tt> target that does not actually produce a
stamp file just makes the Makefile longer without doing anything at all
but confusing readers).
</p><p>
You can find the preliminary results for that test at
<a href="http://people.debian.org/~brlink/debian-rules-phony-stamp-file.log">http://people.debian.org/~brlink/debian-rules-phony-stamp-file.log</a>.
I looked at some randomly chosen results and did not find a false positive.
As that list was produced by the last runnable version which did not yet
look at variables, I guess the list will only increase.
</p>
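<p>
To see why a phony stamp target defeats its purpose, here is a little sketch
(file and target names are made up; printf is used to get the tab characters
a make recipe requires):
</p>

```shell
# A build-stamp that is NOT phony: the second make run is a no-op,
# because the stamp file exists and is up to date.
printf 'build: build-stamp\nbuild-stamp:\n\t@echo configuring and building\n\t@touch build-stamp\n' > Makefile
make -s build   # runs the recipe, prints "configuring and building"
make -s build   # stamp file is up to date: the recipe is skipped
# Declaring build-stamp .PHONY instead would re-run the (expensive)
# configure/build step on every single invocation.
```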
bugshttp://blog.brlink.eu/index.html#i32Wed, 21 May 2008 17:13:01 +0200http://blog.brlink.eu/index.html#i32
Some thoughts about recording differences<p>
When recording changes in some software there are basically three
approaches, with their different advantages and disadvantages.
</p>
<ul>
<li>One approach is to just record modifications chronologically.
Each modification is just a little diff (or some glorified variant
of diff, or, if you insist, something that is no diff but just has
some similarity to one).
Advantages are:
<ul><li>It keeps itself out of the way.
You just do modifications and store them somehow one after the other.
</li><li>You can reconstruct all past points, so you can regain missing documentation
by looking at the evolution in time.
</li></ul>
So it is no surprise that essentially every tool (i.e. all the VCSes)
to handle non-packaging work on software uses this approach.
If you have a canonical other source for the software (like an upstream
for a package), it has quite severe drawbacks:
<ul><li>Reviewability does not scale. While every single diff can easily
be reviewed, the mass cannot. Most changes are relative to past
versions of the software. If something introduces a problem, or an upstream
change introduces a problem with it, the fix might only show up years
later, separated by tons of other changes in between. (The effect, especially
with a changing upstream, always depends a bit on the later merging, but the worst
case is still the same.)
</li><li>
For the same reason changes cannot be easily extracted. Single commits
usually are fine, but which of those past changes are needed to make some
specific change in the functionality is easily lost after some time and some
improvements and updates to new upstream versions.
The whole approach degenerates to having a single large <tt>.diff.gz</tt>
file of the package. (Except that one has a bit of history recorded, so when
sorting out the stuff in that diff one can better guess what effect a single
unattributable line has.)
</li></ul>
</li><li>
The next possibility is to store modifications sorted by topic and against
upstream (either as <tt>.diff</tt> files relative to the upstream source,
as topic branches, and so on). Advantages:
<ul><li>As you have to send the patch upstream anyway, you have to create
such a patch initially anyway. Also, each patch is already in the correct
format in case upstream wants to apply a single one.
</li><li>Branches relative to a (relatively) fixed point are easy to
store in VCSes.
</li><li>When other people want to pick a subset of those patches, they
only have to do the merging, and need not disentangle them from the missing ones first.
</li></ul>
On the other hand, there are some disadvantages with this:
<ul><li>As the differences are relative to the upstream code, multiple
patches are in general not applicable without additional merging.
</li><li>Because of this, those patches cannot be used to generate
the source to build the Debian package.
</li><li>Thus those patches are never directly used by themselves; unless
the maintainer regularly tests them, they are essentially untested and
will see bitrot.
</li><li>As those are not the actual changes to the package (because of the
missing merge), they are nice for a cursory review, but if you want to review
the actual changes, you need to review the merged variants.
</li></ul>
</li><li>
Last but not least, there are stacked patches, i.e. patches that are supposed
to be applied in a specific sequence. Advantages:
<ul><li>The patches themselves are actually used. Thus they get wide testing
and bitrot cannot happen that easily.
</li><li>
Equally, reviewing the actual changes can be separated into the different
patches, as connected changes are next to each other and the files
describe the changes without pitfalls in merges.
</li><li>
If someone else wants to use multiple patches from the start of the sequence,
no merging is necessary. They can directly be applied.
</li></ul>
Given these advantages, it is no big surprise that all Debian specific
patch management tools (like dbs) and other programs to deal with multiple
patches against other people's code (like quilt) use this approach.
Though there are also some disadvantages:
<ul><li>Ordering patches is now important. Only unrelated patches can
be skipped while still having later patches apply cleanly, so the ordering
needs to predict which patches are more likely to be used by other people.
</li><li>Testing only happens for the whole stack. If some patch modifies
parts that a later patch removes, bitrot can still happen.
</li><li>Extracting those patches from the currently available VCSes is
not trivial.
</li></ul>
</li>
</ul>
<p>So the format most suitable for Debian packages (stacked patches) is
the total opposite of the format most suitable when you are upstream
yourself, and nothing is suitable for everything. There are many different
thinkable ways to combine the different things to get more of the advantages,
though many are a bit lacking (like storing quilt series in a VCS as text files),
not yet possible or non-trivial with the current tools.
Hopefully the future will improve that.
</p>
philosophyhttp://blog.brlink.eu/index.html#i31Sun, 18 May 2008 15:40:26 +0200http://blog.brlink.eu/index.html#i31
patches<p>
Looking at the current discussions, I'm wishing some people would
calm down a bit. It's always impressive how some things switch sides
like pendulums.
</p>
<p>
First of all, Debian already is centered on packaging software, not
developing it. We already have the rules and policies and methods
in place. Our policy states:
</p>
<blockquote cite="http://www.debian.org/doc/debian-policy/ch-source.html#s4.3">
<p>
If changes to the source code are made that are not specific to the needs of the Debian
system, they should be sent to the upstream authors in whatever form they prefer so as
to be included in the upstream version of the package.
</p>
</blockquote>
<p>
And our source format shows how important marking the differences is to us:
we have explicit <tt>.diff.gz</tt> files to contain them. The differences
are not hidden in some Version Control System (like BSD) or in proprietary
formats (ever tried to unpack a <tt>.srpm</tt> without rpm or without downloading
some magic perl script?), but in a simple, universally understandable format.
</p>
<p>
That said, please remember we are a distribution. Our priorities are our users
and not the whims of software authors.
We have to find the middle ground between harmful and necessary changes.
Patching software to abide by the FHS, to allow the user to choose their editor
or browser in a common way, or any other thing to form a coherent set of
packages is no bug in Debian; it is a bug in upstream to not allow this
at least via some configure option.
We have neither the manpower nor the job to rewrite and fork stuff to a usable
state, though. Thus we have to keep to upstream and hope they will include
our modifications or forward-port them to every new release.
</p>
<p>
Thus, we need both: We need to patch (and in general, the worse the software
from a distribution's point of view or from a general point of view, the more
of this we need. This does not necessarily hold in the other direction, though.).
And we need and want to show what we change. So our users can find out how
exactly the software we ship is different. And so other people dealing with
the same software can profit from our changes. (After all, free software
is about &quot;giving back&quot; a lot.). And of course maintainers change
over time and we want the new one to understand what the previous one did.
</p>
<p>
That said, there are of course things that can be improved. But as with
all improvements, there are things that actually improve matters, and there
are cases where nothing is worse than good intentions.
<p>
Adding more things to keep in sync is almost always a bad thing in the long run.
The easiest way to keep things accessible is to use stuff actually used.
I think any additional place to track patches will be futile.
A good interface to view the <tt>.diff.gz</tt> files in our archive in some
browser, on the other hand, could hardly fail to be useful.
</p>
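<p>
In the meantime, looking inside a <tt>.diff.gz</tt> needs nothing beyond
zcat and grep. A small sketch (the package name and the diff content are
fabricated for the demonstration, rather than fetched from a real mirror):
</p>

```shell
# Build a tiny stand-in for a downloaded .diff.gz (names are made up):
printf -- '--- foo-1.0.orig/debian/rules\n+++ foo-1.0/debian/rules\n@@ -0,0 +1 @@\n+#!/usr/bin/make -f\n' \
    | gzip > foo_1.0-1.diff.gz
# List the files the Debian revision touches:
zcat foo_1.0-1.diff.gz | grep '^+++'
```

With a real source package one would fetch the <tt>.diff.gz</tt> named in
its <tt>.dsc</tt> file and pipe it through the same zcat and grep.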
<p>
The format of a single <tt>.diff.gz</tt> is of course also improvable.
Things like a quilt-like standardized patch series look like a very good
idea to me.
But the format can of course also be made worse. Storing VCS-formatted
information, for example, tempting as it is, spoils two very important points:
</p>
<ul>
<li>universal readability is lost. Everyone coping with software can
work with patches or patch series, no matter what tools they use.</li>
<li>visibility of changes is sacrificed for history. Every single
VCS I ever saw primarily copes with history. Even patch-based ones
represent changes over time and not changes by topic.</li>
</ul>
<p>
And of course, a lot can be improved by just applying some rules more
strictly. Not using a pristine tarball, or a proper subset of one where the
first is not possible, should not be some merely ignorable non-conformist
behavior, but should be seen as the serious problem it is.
</p>
rantshttp://blog.brlink.eu/index.html#i30Wed, 30 Apr 2008 09:06:29 +0200http://blog.brlink.eu/index.html#i30
gpg2txt<p>
Do you want to know what is really stored in your gpg keyring?
Or do you want to store your keyring in a VCS? Or do you
want to be able to delete signatures or other data from a keyring
without having to use gpg's absurd interface?
</p>
<p>
If you do, then you might want to look into a little program
I started for this purpose.
It's still quite alpha, but it should already work for some uses.
And if you test it now and give some feedback, it might develop more
in a direction you need. To give it a test:
</p>
<pre>
cvs -d :pserver:anonymous@cvs.alioth.debian.org:/cvsroot/gpg2txt checkout gpg2txt
cd gpg2txt
./autogen.sh --configure
make
less README
man -l gpg2txt.1
./gpg2txt -o test ~/.gnupg/pubring.gpg
less test
./gpg2txt --recreate -o pubring.gpg test
</pre>
announcementhttp://blog.brlink.eu/index.html#i29Fri, 25 Apr 2008 21:34:05 +0200http://blog.brlink.eu/index.html#i29
Some basics about make<p> Till today I thought make was a very simple concept, but looking at other
people's debian/rules files I am starting to lose that faith. So let's begin
with some basics (as I guess many reading this are maintainers of Debian
packages, and you might need some of this knowledge):
</p><p>
As you will already know, the most important part of a Makefile is a
rule. Each rule is there to produce something and has prerequisites, i.e.
things that have to be done before it. So far, so simple.
</p><p>
When you think about it this way, the first pitfall is already no pitfall
anymore:
</p>
<pre>
build: patch build-indep
build-indep: build-indep-stamp
build-indep-stamp:
	$(MAKE) doc
	touch build-indep-stamp
</pre>
<p>
There are two mistakes in this. First of all, <tt>patch</tt> is only
called on <tt>./debian/rules build</tt>,
but not on <tt>./debian/rules build-indep</tt>.
And then <tt>patch</tt> is only listed as a prerequisite of the build
target. But what you really want is that the source is patched before you
do anything, so it is something to be done before <tt>build-indep-stamp</tt>.
The pitfall with this error is that you will not see it most of the time.
As make usually processes targets in the order it finds them, it usually
runs patch first. Except when you have multiple processors and tell make
to make use of them (and even then there is a chance it might work, as
the command to run first is fast enough), or if someone trusting what
he learned about make calls <tt>./debian/rules build-indep-stamp
build</tt>.
</p><p>
I guess a reason one sees this so often is the next pitfall. Most likely
the following was tried previously:
</p>
<pre>
build-indep: build-indep-stamp
build-indep-stamp: patch
	$(MAKE) doc
	touch build-indep-stamp
patch: patch-stamp
patch-stamp:
	whatever
	touch patch-stamp
</pre>
<p>
The pitfall with this one is a new concept. The problem is that the patch
rule is phony. A rule is called phony when it does not produce the target
it claims to produce. The classical example is a <tt>clean</tt> target. You
do not want a <tt>clean</tt> target to create a file <tt>clean</tt> so that
the next time it is called it says &quot;already everything done&quot;.
</p><p>
Targets become phony by telling make they are (via <tt>.PHONY:</tt>), by
just not producing the file they claim to produce, or (surprising to many,
it seems) by having a phony target as a prerequisite.
</p><p>
In the above example the <tt>patch</tt> target becomes implicitly phony, as
it does not produce a file called <tt>patch</tt>. Thus, after having
built the source and called <tt>binary</tt> to create the packages,
<tt>binary</tt> will most likely in some way depend on
<tt>build-indep-stamp</tt>. But when make then looks at
<tt>build-indep-stamp</tt> to decide whether it is up to date, even though
it sees the file produced there by the touch command, it cannot decide
that: the file depends on <tt>patch</tt>, and there is no file called that,
so make must assume it is not up to date. Thus <tt>build-indep-stamp</tt>
has effectively become phony as well, in the sense of having to be remade
every time something depends on it. (In case you have not noticed, the fix
would have been to make <tt>build-indep-stamp</tt> depend on
<tt>patch-stamp</tt> instead, or to go without <tt>patch-stamp</tt> and
make <tt>patch</tt> the file to be generated.)
</p><p>
Thus, you can put as many -stamp files as you want in a row; as long as
there is a single phony prerequisite among them, it is all void.</p>
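<p>
To see the difference the fix makes, the fixed chain from the last paragraph
can be replayed in a throwaway directory (a sketch: the doc-building and
patching commands are stand-ins, only the stamp-file structure matters):
</p>

```shell
mkdir -p make-phony-demo
# build-indep-stamp depends on patch-stamp, a real file, so nothing in
# the chain is phony except build-indep itself:
printf 'build-indep: build-indep-stamp\n' >  make-phony-demo/Makefile
printf '.PHONY: build-indep\n'            >> make-phony-demo/Makefile
printf 'build-indep-stamp: patch-stamp\n\t@echo building docs\n\t@touch build-indep-stamp\n' >> make-phony-demo/Makefile
printf 'patch-stamp:\n\t@echo applying patches\n\t@touch patch-stamp\n' >> make-phony-demo/Makefile
cd make-phony-demo
make build-indep    # first run: patches, then builds
make build-indep    # second run: everything is up to date
cd ..
```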
trickshttp://blog.brlink.eu/index.html#i28Sun, 09 Mar 2008 13:09:21 +0100http://blog.brlink.eu/index.html#i28
Why I am putting dpkg on hold on my unstable boxes right now...<p> A hijack of an important package always makes me uneasy. But reading
the next two messages to the debian-dpkg list, which I can only read as
&quot;oh, I broke make dist. But why should I follow sane practises, all hail
my workflow&quot; and &quot;oh, I break Changelog, that's on purpose because I
consider it a bad idea, if you want it write some scripts to generate
it&quot;, does nothing to make me believe such a person can be a good
maintainer.</p>horrorhttp://blog.brlink.eu/index.html#i27Wed, 13 Feb 2008 11:57:42 +0100http://blog.brlink.eu/index.html#i27
IPv6 strikes again<p> If you ever wondered why exim4 takes so long to start when you
have no net access, even though you were sure that, configured as
satellite for a smarthost, it should have nothing to look up as the
smarthost is in /etc/hosts, you might just have forgotten to
put a
</p>
<pre>
disable_ipv6 = true
</pre>
<p>
in your exim4.conf. (I'm not sure, but that might also help
to actually deliver mail to hosts which also have IPv6 addresses,
from servers with outgoing SMTP where you forgot to blacklist the
ipv6 module.)</p>
trickshttp://blog.brlink.eu/index.html#i26Thu, 17 Jan 2008 13:42:41 +0100http://blog.brlink.eu/index.html#i26
censorship and related things<p> I don't know why some people always shout censorship when it's about
what behavior is acceptable and what is not, and about what things people
want to be associated with and what not. (Must be some relative
of Godwin's law (the original, not the &quot;you lost&quot; Usenet variant)).
</p><p>
I personally care only little about what happens in irc-channels
I'm not in. I don't know what happens in the special channel
that started the discussion, and I don't believe any anecdotal examples
can make a big difference. (Humans err, sometimes in tone, and even
some hundred examples of wrong tone in some backyard alone are nothing
I care much about, as long as it is nothing that would be a criminal
offense if done in more public places).
</p><p>
What I absolutely dislike is any form of communication forum - especially
one that could be associated with me - being declared a place
where foul behavior (to which I count sexism) is acceptable or, even
worse, accepted as the norm. (Why didn't anyone shout &quot;censorship&quot;
at the &quot;Love it or get the fuck out of here&quot;? Perhaps there is some
correlation between views of the world I don't understand).
</p><p>
By the way, there are places where I feel personally offended as a victim
of sexism by a statement like &quot;men are pigs&quot;. For example in a discussion about
sexism. (I know, I know, it might not be sexism according to some
definitions, but I see no reason not to use the word, or not to dislike
the thing, just because the forcing into gender roles is done by members
of the same sex as opposed to members of the opposite sex.)</p>
blogwarshttp://blog.brlink.eu/index.html#i25Tue, 15 Jan 2008 20:37:39 +0100http://blog.brlink.eu/index.html#i25
unofficial vs misusing the name<p> I hope I am not alone, but a community stating &quot;Of course we are sexists&quot;
(if <a href="http://www.damog.net/20080115/debian-offtopic/">this</a> is an expression of more than an individual)
is in my eyes nothing that should be allowed to have debian in its name,
even if it is marked as unofficial
(and especially if it says &quot;Love it or get the fuck out of there.&quot;).
</p><p>
Can't people find a way that is neither the pseudo-moral pressure of some
&quot;political correctness&quot; nor a childish boosting of self-esteem by
showing everyone how &quot;politically incorrect&quot; you dare to be?
</p>
wtfhttp://blog.brlink.eu/index.html#i24Tue, 04 Dec 2007 17:55:16 +0100http://blog.brlink.eu/index.html#i24
Pretty-print library hierarchies<p> Playing around with awk and graphviz can lead to nice but usually
totally useless graphs:
</p>
<pre>
#!/bin/sh
if test $# != 1 ; then echo &quot;Missing argument!&quot; &gt;&amp;2 ; exit 1 ; fi
FILENAME=&quot;$(tempfile -s &quot;.ps&quot;)&quot;
ldd &quot;$1&quot; | mawk 'BEGIN{print &quot;graph deps {&quot;}END{print &quot;}&quot;} function dump(name,binary) { system(&quot;objdump -x &quot; binary &quot; | grep NEEDED | sed -e \&quot;s#.&#42; # \\\&quot;&quot; name &quot;\\\&quot; -- \\\&quot;#\&quot; -e\&quot;s/$/\\\&quot;/\&quot;&quot;)} BEGIN{dump(&quot;'&quot;$1&quot;'&quot;,&quot;'&quot;$1&quot;'&quot;)} /=&gt; \// { dump($1,$3)}' | dot -Tps -o &quot;$FILENAME&quot;
gv &quot;$FILENAME&quot;
rm &quot;$FILENAME&quot;
</pre>
toyshttp://blog.brlink.eu/index.html#i23Fri, 07 Sep 2007 11:31:12 +0200http://blog.brlink.eu/index.html#i23
why is your apt pubring not a file or apt as user updated<p> I had written a little script to create a local config, so one can
run everything (short of actually installing packages) as user.
(Which is quite useful to download all packages needed to update an
offline system or to install something on it. Of course one needs
that system's status file for that).
</p><p>
When updating that script for the apt that now checks signatures, I had to
realize that the file with the keys to look for in Release.gpg files
seems to be no file. At least its location is not stored in apt's
Dir section, where it would nicely adapt to changes of the other
directories, but is stored as a simple value elsewhere, so it needs an
additional overwriting.&lt;/rant&gt;
</p><p>
Anyway. The updated script can be downloaded <a href="http://www.brlink.eu/misc/#createlocalapt">here</a>, just in case it
might be of interest to anyone else.
</p>
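<p>
For the curious: the core of such a script is nothing more than a private
directory tree plus an apt configuration file redirecting the Dir paths.
A minimal hypothetical sketch (not the actual script; all paths are
illustrative), including the separate override for the keyring location
ranted about above:
</p>

```shell
base="${TMPDIR:-/tmp}/local-apt-demo"
mkdir -p "$base/etc/apt" "$base/var/lib/apt/lists/partial" \
         "$base/var/cache/apt/archives/partial" "$base/var/lib/dpkg"
# the status file of the (possibly offline) target system goes here:
touch "$base/var/lib/dpkg/status"
cat > "$base/apt.conf" <<EOF
Dir "$base";
Dir::State::status "$base/var/lib/dpkg/status";
// not relative to Dir, so it needs its own override:
Dir::Etc::trusted "/etc/apt/trusted.gpg";
EOF
# usage would then be something like:
#   APT_CONFIG="$base/apt.conf" apt-get update
#   APT_CONFIG="$base/apt.conf" apt-get --download-only install foo
```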
rant and trickshttp://blog.brlink.eu/index.html#i22Sun, 19 Aug 2007 11:05:59 +0200http://blog.brlink.eu/index.html#i22
Using slapd as thunderbird/icedove addressbook<p> It's been some time since I got this working, but I decided to
blog about it here now, as I was just asked about it.
</p><p>
The main magic to get thunderbird/icedove to use your ldap server as
addressbook is to include the proper schema. Search the web for
mozillaAbPersonObsolete and you should find it. You do not have to
use any of its new fields; not even the object class in it is
needed. Your slapd only has to know about the field names, then
thunderbird will be able to show the normal inetorgperson's mail
attribute.
</p><p>
Some caveats, though:
</p><p>
You might think you could test your settings in thunderbird by using
that button to download everything and store it locally.
In my experience that never works but strangely asks for a password, while
the addressbook is already working nicely and needs no password at all.
</p><p>
Also, don't be confused when no records are shown in the new addressbook.
I guess that is some measure against always loading a possibly large
remote addressbook. To test, just enter anything in the search field,
and the matching records should show up nicely. (I'm not sure if all
versions allow searching for substrings. If they do, try searching for
the at sign to get a full list.)
</p><p>
The shown fields also seem a bit strange and differ between the various
mozilla messenger/thunderbird/icedove versions. In some versions the
field the primary name is extracted from can be changed, but the directive
to set that seems to change even more often.
</p><p>
Finally, some snippet for your /etc/icedove/global-config.js, which causes
all newly created users to have an addressbook as default. I forgot whether
all of these are needed or why I added them, but those that are unnecessary
at least do not seem to harm. (Last tested version is the one in etch,
though. Newer versions might again have changed something).
</p>
<pre>
/&#42; ldap-Server for FOOBAR &#42;/
pref(&quot;ldap_2.autoComplete.useDirectory&quot;, true);
pref(&quot;ldap_2.prefs_migrated&quot;, true);
pref(&quot;ldap_2.servers.mathematik.attrmap.DisplayName&quot;, &quot;displayName&quot;);
pref(&quot;ldap_2.servers.default.attrmap.DisplayName&quot;, &quot;displayName&quot;);
pref(&quot;ldap_2.servers.mathematik.auth.savePassword&quot;, true);
pref(&quot;ldap_2.servers.mathematik.description&quot;, &quot;FOOBAR&quot;);
pref(&quot;ldap_2.servers.mathematik.filename&quot;, &quot;foobar.mab&quot;);
pref(&quot;ldap_2.servers.mathematik.maxHits&quot;, 500);
pref(&quot;ldap_2.servers.mathematik.uri&quot;, &quot;ldap://HOSTNAME:389/ou=People,dc=FOOBAR,dc=TLD??sub?(mail=&#42;)&quot;);
</pre>
trickshttp://blog.brlink.eu/index.html#i21Sun, 12 Aug 2007 14:42:56 +0200http://blog.brlink.eu/index.html#i21
Using Xephyr<p> When debugging window managers or testing your X applications in
other window managers, running them in a dedicated fake X server can be
quite nice. While every reasonably complete window manager (even the old twm
and vtwm can, and of course all of fvwm, qvwm, wmaker, ratpoison, ...)
can replace itself with any other, running a window manager in a window of
its own makes many things easier: single-stepping a window manager within
a debugger is no fun when that debugger runs in an xterm on the same server.
And if some testing needs a more complicated setup, switching may
destroy that. And it is just more comfortable to have the editor handled
by your favorite WM, while you need another WM to test some aspects of
a program. (It's hard to see if initial sizes and layouts work well when
your WM does not allow windows to choose their size. And if your WM does
not have a bug another has, it's easier to test a workaround in the other
than trying to port the bug ;-&gt; )
</p><p>
So, here is some example invocation I use:
</p><pre>
#!/bin/sh
Xephyr :2 -reset -terminate -screen 580x724 -nolisten tcp -xkbmap ../../../../../home/brl/.mystuff/dekeymap -auth ~/.Xauthority &amp;
export DISPLAY=:2
icewm
</pre>
<p>
Which options are useful depends on what you use it for:
</p><p>
-reset -terminate means to terminate when the last client exits.
This is useful if you want it to go away fast. Not useful if you
want to switch window managers without other clients running.
</p><p>
-screen 580x724 tells how big the window should be. This is just the
size of one of my working frames, so it integrates well into my workspace.
(It would be nice if Xephyr could change its resolution upon resize
of the window, though I fear programs would either be confused when the
size of their X server changes unannounced, or by too many
notifications of it changing).
</p><p>
-nolisten tcp, as there is no need to let the world speak to your X server.
</p><p>
-xkbmap ../../../../../home/brl/.mystuff/dekeymap
I gave up figuring out how to select a German keyboard, so it just gets
8 lines of fake description only specifying a German keyboard.
</p><p>
-auth ~/.Xauthority tells Xephyr to require authentication. Without this,
everyone is allowed to control your sub X server and all programs within
it. Don't forget to create a token with xauth add first, though.</p>
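<p>
That last step can be sketched as follows; here the command is printed
instead of executed, with a freshly generated 128-bit cookie (display :2
matches the invocation above):
</p>

```shell
# 16 random bytes, rendered as the 32 hex digits xauth expects:
cookie="$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')"
# the command to run before starting Xephyr:
echo "xauth add :2 MIT-MAGIC-COOKIE-1 $cookie"
```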
trickshttp://blog.brlink.eu/index.html#i20Mon, 30 Jul 2007 10:44:19 +0200http://blog.brlink.eu/index.html#i20
about the suggested "Debian maintainers"<p> As one of the most frequently made arguments for the current Debian
General Resolution about the introduction of DMs seems to be that
&quot;finding a sponsor is hard&quot;, I want to shift the discussion a bit in
the other direction: How about more review instead of less?
</p><p>
Currently only sponsored people have the privilege of a human
looking at their packages before upload. We normal DDs only have some
automated tools other people wrote for us (lintian, linda, piuparts)
and some self-written ones (checking diffs, comparing to previous
revisions and so on), and have to hope we spot all problems not yet
detectable by machines ourselves. What do teams and people with
comaintainers do? Is there any chance one of the others can look over the
package you generated? Is there any chance to get something like
that for the rest of us? (Ideally without drawing too much manpower
from the sponsorees, though in my experience slowing that down might
also help; there was more than one package I could not sponsor because
someone else uploaded it before I was even able to write down half the
list of obvious problems.)
Perhaps some ring to review each other's packages (of course best with
some classification; there is often not much sense in having
someone who does not like cdbs look into cdbs packages, or vice versa).</p>
voteshttp://blog.brlink.eu/index.html#i19Sat, 20 Jan 2007 13:10:44 +0100http://blog.brlink.eu/index.html#i19
please add mime-types<p> Enrico <a href="http://www.enricozini.org/2007/debtags/tagminer.html">blogged</a> about translating the mime-types of files
into debtags, stating &quot;I'm not sure it's a good idea to encode mime types
in debtags&quot;.
</p><p>
I just want to throw my two cents in here and state that I only once
looked into debtags and gave up because it does not list mime-types
but some obscure other specification.
</p><p>
Getting suggested programs that support formats which have the same
type of content as the one I want to show, convert or create
does not help me. Most of the time I do not want to edit &quot;a video&quot;
or &quot;a spreadsheet&quot;; I first of all have a very specific file
I want to do something with, or a specific set of formats I want
to create something in. If I have an AbiWord file, OpenOffice.org
will not help me, and with video or audio formats it is even worse.
</p><p>
So after turning away disappointedly from debtags, I had to do
a full mirror scan for /usr/lib/mime/packages/ files and their
contents. Having that data cached in debtags would be something
that really made debtags useful in my eyes. (And the more
directly and verbatim the mime-type is encoded, the more useful
it would be for me).
</p>
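<p>
The scan itself is simple, as the files below /usr/lib/mime/packages/ are
plain mailcap fragments, one per package. A sketch, run against a fabricated
entry instead of a real mirror (the package and mime type are just examples):
</p>

```shell
mkdir -p mime-demo/packages
# each package ships lines of the form "type/subtype; command %s; ...":
printf 'application/x-abiword; abiword %%s; description="AbiWord document"\n' \
    > mime-demo/packages/abiword
# which package claims to handle this mime type?
grep -l '^application/x-abiword' mime-demo/packages/*
```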
debtagshttp://blog.brlink.eu/index.html#i18Sat, 06 Jan 2007 12:31:59 +0100http://blog.brlink.eu/index.html#i18
clean vs. crowded bug pages<p> Marc Brockschmidt <a href="http://gonzo.dicp.de/~he/blog/archives/27-Debians-BTS-sucks.html">wrote</a> the BTS is too crowded and Joey
Hess <a href="http://kitenet.net/~joey/blog/entry/a_clean_BTS_is_a_sign_of_a_sick_mind.html">objected</a> that a too clean BTS can also be a bad sign.
</p><p>
I think both are true, or to say it better: neither of the two makes
sense without the other:
</p><p>
Bug reports are in my eyes one of the most valuable resources
we have. No one can test everything, even in almost trivial
packages. To achieve quality we need the users' input, and a badly
worded bug report is still better than no bug report at all.
Our BTS is a very successful tool in that it lowers the barrier
to report issues. No hassle to create (and wait for completion of)
an account, no regrets caused by funny unparseable mails about
some developer changing their e-mail addresses (did I already
say I hate bugzilla?).
</p><p>
As those reports are valuable information, one should keep them
as long as they can be useful. Looking at the description
of the <a href="http://www.debian.org/Bugs/Developer#tags">wontfix</a> tag shows that even a request that cannot
or should not be fixed in the current context is considered
valuable. Most programs and designs change, and having a place
to document workarounds and to keep in memory what open problems
exist is worth a lot.
</p><p>
On the other hand a crowded bug list is like a fridge you only
put food into. Over time it will start to degrade into the
most displeasing form of a compost heap. The same holds for bug
reports:
</p><p>
Most bugs are easier when they are young: you most probably still have
the same version as the submitter somewhere, you know what changed
recently, and when you can reproduce the bug you get some hints on
what is happening and can address it. If you cannot reproduce it,
the submitter might still be reachable for more information.
</p><p>
When the report is old, things get harder. Is the bug still
present? Was it fixed in between by some upstream release?
Is the submitter still reachable and does still remember what
happened?
</p><p>
When I care enough about a problem to write a bug report and
try to supply a patch for it, I always try to take a look
at the bug list and look for some other low-hanging fruit
to pick and submit another patch, too. (After all, most
of the time is spent trying to understand the package and
the strange build system upstream chose instead of plain
old autotools, and not on fixing the problem). But when it
is hard to see the trees because of all the dead wood
around them, when there is nothing to be found with some way to
reproduce it, and when one knows far too well that the most
efficient step would be a tedious search through old versions
to see if the bug was solved upstream many years ago,
good intentions tend to melt like ice thrown into lava.
</p><p>
So, when I wrote both are true, I meant that keeping real-world
issues documented and visible is a good thing. But having bugs
rot (and often they do) will pervert all the advantages.
In the worst case, people will even stop submitting new reports,
as it takes too long to look through all the old ones for a
duplicate.
</p>
btshttp://blog.brlink.eu/index.html#i17Fri, 03 Nov 2006 10:09:08 +0100http://blog.brlink.eu/index.html#i17
again compiler arguments<p> I know I repeat myself, but given the current discussion, I
simply feel the need to do so:
</p><p>
Please do not hide the arguments given to the compiler from me.
</p><p>
I cannot fix what I do not know is wrong. Maybe you can.
</p><p>
Keep the argument list tidy.
</p><p>
Many argument lists are longer than necessary. If there is some
-I/&lt;whatever&gt; in the argument list on a Debian system, there is something
fishy. (A Debian system is not the universal collection of different stuff all
going wherever it wants, after all). Common cases are:
</p>
<ul>
<li>buggy scripts that add -I/usr/include</li>
<li>packages working around upstream breaking compatibility</li>
<li>plainly broken upstreams</li>
<li>oversight</li>
</ul>
<p>
In short: if the line is too long, that is normally a bug causing more
pain than just a long line. Do something about those bugs, please.
There is no need at all for a properly made library to require -I for
installed stuff. It is installed in the system, so the default search path
covering /usr/include should suffice. It often does not, but that is simply
bad design of that library's interface. Do something about that, please!
Also, for stuff not installed: why do you need more than one -I?
Are you embedding other libraries into your code? Why are they libraries
if no one else uses them? If someone else uses them, why are they not
made into proper library packages? If it is all internal stuff, why does
it need so many include paths instead of just a single include/ dir?
And if it only needs one include dir, why is it added a dozen times?
And what do you need -D for, for anything but paths? Ever heard of
AM_CONFIG_HEADER?
</p><p>
And yes, I know many modern libraries are written by people who never
looked at anything but Windows when designing their headers.
(Some even seem to have never looked at unixoid systems after
using them for decades).
That is a problem, not something to be worked around with even more
kludges. Kludges working around kludges are there to stay. So do
not add them.</p>
rantshttp://blog.brlink.eu/index.html#i16Thu, 26 Oct 2006 10:54:12 +0200http://blog.brlink.eu/index.html#i16
&quot;a speech for policy&quot;<p> If you have to name a single thing that singles out Debian among
all the other distributions in practical quality, then you cannot
come up with anything but Debian having a policy that packages have
to follow.
</p><p>
The little things make something feel raw or polished. Things
that each look too unimportant by themselves
have real importance in their multitude.
</p><p>
As with all rules, rulesets can become too large and turn into an
obstacle. This can be avoided by being conservative and minimal
with those rules, which Debian has always practised to
the extreme.
</p><p>
Limiting this further down to things people deem &quot;important&quot;
will only further reduce the overall quality. Instead of removing
the few things that are in the policy, we should rather extend it
so that everything in current policy that is not met is a bug
(which can still be tagged wontfix or help), instead of reducing the
rules found in policy or making more things non-binding.</p>
rantshttp://blog.brlink.eu/index.html#i15Mon, 02 Oct 2006 20:35:49 +0200http://blog.brlink.eu/index.html#i15
&quot;current GR and release of etch&quot;<p> I doubt the current <a href="http://www.debian.org/vote/2006/vote_004">vote</a> can delay etch if accepted.
There are many different GR suggestions out there to get
additional exceptions for etch. And there is no doubt at
all that such exceptions will be accepted with a gigantic majority.
</p><p>
I see more danger of delaying etch if the GR is not accepted
but voted down. Then people will have much less ground for
which exceptions to ask, and far less common ground on which
to base all the GRs that are to come. And given the large
number of proposals on debian-vote, having many more GRs
will not help to get etch out.
</p><p>
Also note that if the GR is not accepted, there are many
people who believe that the current rules still apply, and these
rules are: source is needed for all bits in the Debian distribution,
and much more has to be ripped out than those mysterious
6 months if no additional exceptions are voted on.
</p>
rantshttp://blog.brlink.eu/index.html#i14Wed, 20 Sep 2006 13:35:39 +0200http://blog.brlink.eu/index.html#i14
Trademarks<p> If you ever thought that accepting bogus obligations just to
be allowed to call something by its name was harmless, take a look
at Eric Dorland's blog or directly into the <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=354622">new problems</a>.
</p><p>
My vote for this: call it firesomething or mffbrowser or some other free name
once and for all. With some luck somebody will then also write a nice patch
to have a common Debian ca-certificate handling. (I'm sick of having
to do anything twice, especially if it includes writing mozilla extensions
that add a ca-certificate every time a user loads their config, as I'm
too ignorant in all this stuff to know any better way). Having things as
similar as possible in different environments is a nice goal, but having
working solutions and the right to implement working solutions is much
more important...
</p>
rantshttp://blog.brlink.eu/index.html#i13Thu, 10 Aug 2006 13:47:43 +0200http://blog.brlink.eu/index.html#i13
Graphic Libraries<p> <a href="http://www.grep.be/blog">Wouter Verhelst</a> asked why simple games are so slow nowadays.
</p><p>
I think the problems are in the libs. All this modern stuff tries
to become more and more modern, and to get more and more out
of all those new render extensions, direct graphics and hardware
accelerations. There simply is no way to decide which way is
faster, so libraries have to guess, and it is no surprise things
go wrong. And the places they go wrong are of course not the fast
computers, but the older machines that have neither those nifty
accelerations nor a fast CPU to cancel it out.
</p><p>
Another disadvantage is all those &quot;portable&quot; libraries.
SDL for example needs three connections to the X server before
it does anything. Three times establishing a connection, checking
security cookies, and so on. Its API looks like living with
Windows, or like it was never intended to be used for anything other
than full-screen mode. (You want to find out how large the window is?
Why should you be able to, when you said the window cannot be resized?)
</p><p>
QT likes to use extensions, too. I don't know if it is its fault
or that of newer X servers, but the newer your installation gets, the
slower 2D games using QT can become. (Note the &quot;can&quot;: if you have
the right graphics card, lucky you; if you have the wrong one, bad luck).
To be fair, QT is not supposed to be or designed for 2D games. On the other
hand I don't know what it is supposed to do other than being a C++
compiler benchmark measured in hours.
</p><p>
GTK was such a promising design. Object orientated (widget classes
are one of those very few things where object orientation can be used
with more advantages than disadvantages) but still plain C, small,
and looking like it was designed for X. To be fair, I do not even know
how well it performs, as the ever-increasing library cancer drives
me away. From the &quot;users should not be able to change their homedir,
that would be far too much the Unix way&quot; glib, over all this myriad of
different little libraries, all moving all the time, spewing their
headers into so many different directories that a compiler invocation
folds three times around your terminal.
</p><p>
Well, enough ranted. My next graphical program will use Athena
Widgets. I only have to hope all this reanimated X development
of late will not pull xlib away from under our feet
in the future...
</p>
rantshttp://blog.brlink.eu/index.html#i12Wed, 17 May 2006 22:01:47 +0200http://blog.brlink.eu/index.html#i12
When things suddenly go very fast<p> or in other words:
</p>
<pre>
grep -q 'dn\.regexp' /etc/ldap/slapd.conf &amp;&amp; cat &lt;&lt;EOF
Ha ha, sucker! Ever asked yourself why your ldap database is so fsck'ing
slow despite all the caches and indices you added?
EOF
</pre>
supriseshttp://blog.brlink.eu/index.html#i11Tue, 11 Apr 2006 18:32:58 +0200http://blog.brlink.eu/index.html#i11
only DDs should be allowed to upload packages<p> Anthony Towns <a href="http://azure.humbug.org.au/~aj/blog/2006/04/12#2006-04-11-maintainers">writes</a>:
</p><p>
&quot;Interestingly, the obvious way to solve the second and third problems is
also to do away with sponsorship, but in a different sense - namely by
letting the packager upload directly. Of course, that's unacceptable per
se, since we rely on our uploaders to ensure the quality of their
packages, so we need some way of differentiating people we can trust to
know what they're doing, from people we can't trust or who don't yet know
what they're doing.&quot;
</p><p>
I think the whole point of NM is to make sure we can trust people.
This would be extremely different from sponsorship, as I hope no sponsor
takes a package and just uploads it, but makes sure it is as correct
as any of his packages, using all his/her experience.
</p><p>
Even some little game or package for special use can cause severe
headaches, as the maintainer scripts can delete stuff outside that
package or open security holes. Things having that much power
should only be in the hands of people we actually know and trust.
Thus some DD should be responsible. And I doubt that there are
enough DDs wanting to be responsible for something another person
does if they gave a blank upload privilege for some package without
any chance to look at what gets uploaded.
</p><p>
That said, I like the idea of making sure the Maintainer in the .changes
file and the owner of the key that signed it are the same.
(It's nicer to change it so you get the mails yourself and bounce them to
the person you are sponsoring, but I sometimes forget.)
Does the field have any meaning yet, other than who gets the mails from the
queue daemons and dak?
</p>
rantshttp://blog.brlink.eu/index.html#i10Sun, 26 Mar 2006 15:42:22 +0200http://blog.brlink.eu/index.html#i10
compiler arguments<p> Please do not hide the arguments given to the compiler from me.
</p><p>
It's hard to realize something is going wrong if you do not see
what is happening. If the argument list is too long, do something
about that instead of hiding it.
</p><p>
Make sure you follow policy when packaging software
</p><p>
Debian packages should be compiled with -Wall -g, but more and
more are not. Please check that yours are, but check in the correct
place: do not look at the debian/rules file, but at the build log.
If the Makefile sets a default with a single equals sign (&quot;=&quot;),
running 'CFLAGS=&quot;-Wall -g -O2&quot; make' will not suffice. Try
'make CFLAGS=&quot;-Wall -g -O2&quot;' instead. (Actually, there is no
good reason to put them before the command. Always try to put
things as arguments first, both with make and with ./configure.)
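The difference can be demonstrated with a throwaway Makefile (everything below is illustrative, written into a scratch directory so nothing real is touched):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo

# Minimal Makefile whose CFLAGS default is set with a plain "=":
printf 'CFLAGS = -O2\nshow:\n\t@echo $(CFLAGS)\n' > Makefile

# The environment variable loses against the Makefile's own assignment:
CFLAGS="-Wall -g -O2" make show     # prints -O2

# A command-line argument overrides the Makefile:
make show CFLAGS="-Wall -g -O2"     # prints -Wall -g -O2
```

(With make -e the environment would win, but that is exactly what you cannot rely on in other people's Makefiles, hence checking the build log.)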
</p><p>
It really makes everyone's life easier if those options are set.
</p><p>
Keep the argument list tidy.
</p><p>
Many argument lists are longer than necessary. If there is some
-I/&lt;whatever&gt; in the argument list on a Debian system, there is something
fishy. (It's not the universal collection of different stuff all
going wherever it wants, after all). Common cases are:
</p><p>
- buggy scripts to add -I/usr/include
</p><p>
Better fix those scripts. Also make sure they do not cause other
problems, like linking your program against libraries it
does not use directly. (Possibly causing funny segfaults when
those libs are linked against other versions of those libraries.)
</p><p>
- -I/usr/X11R6/include
</p><p>
For upstream packages this might be useful to support
older operating systems and people unable to add it to CFLAGS
themselves. But on FHS systems it is not needed at all, as
the FHS mandates the handy /usr/include/X11 -> /usr/X11R6/include/X11
symlink. And newer X puts the headers directly in the correct place.
</p><p>
- packages working around upstreams breaking compatibility
</p><p>
Life would be too easy if upstreams did not break APIs.
But if they make a new incompatible version, and even
change the library name for it, would it have been that
difficult to also change what programs written/ported for
that new incompatible API have to place in their #include
line?
</p><p>
- plainly broken upstreams
</p><p>
Putting stuff in ${PREFIX}/include/subdir/ and #include'ing
other files from that subdirectory without the subdir deserves
application of some large LART.
</p><p>
- oversight
</p><p>
Often it is just not necessary, and everything gets much more
readable and easier if left out.
</p><p>
Other things making argument lists unreadable are the large amounts
of -Ds generated by ./configure. AM_CONFIG_HEADER can help
a lot here with non-path stuff. Stuff containing paths is
surprisingly often not used at all.</p>
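A crude way to scan a build log for the suspicious include flags discussed above (the log file and its contents are made up for illustration; real logs come from an actual build run):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo

# Fake build log standing in for a real one:
printf 'gcc -I/usr/include -c foo.c\ngcc -O2 -c bar.c\n' > build.log

# Flag compiler invocations carrying -I/usr/include or -I/usr/X11R6/include;
# on a Debian system neither should be necessary:
grep -nE '[-]I */usr(/X11R6)?/include([ /]|$)' build.log
# prints: 1:gcc -I/usr/include -c foo.c
```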
rantshttp://blog.brlink.eu/index.html#i9Mon, 27 Feb 2006 15:13:34 +0100http://blog.brlink.eu/index.html#i9
Gnu FDL<p> My suggestion for the GFDL vote is 1342
</p><p>
( 1 ) Choice 1: &quot;GFDL-licensed works are unsuitable for main in all cases&quot;
</p><p>
Of course that only concerns documents available only under the FDL, or
only under the FDL plus other non-free licenses. Documents
also available under BSD, GPL or whatever are still free. That &quot;in all
cases&quot; means without regard to the loudness of the proponents of
some document.
</p><p>
( 3 ) Choice 2: &quot;GFDL-licensed works without unmodifiable sections are free&quot;
</p><p>
This does not mean &quot;without unmodifiable sections&quot;, it means &quot;without
additional unmodifiable sections&quot;. The FDL always requires including the
license within the work. (I still do not know how to include the license
within a binary easily. But as the FDL is GPL-incompatible anyway, it
perhaps makes such work-flows impossible anyway.)
</p><p>
( 4 ) Choice 3: &quot;GFDL-licensed works are compatible with the DFSG [needs 3:1]&quot;
</p><p>
That's even worse. We have non-free for non-free stuff some of our users
might not live without. (Or think so.) Foisting non-free stuff on them will
severely hurt them in the long run.
</p><p>
( 2 ) Choice 4: Further discussion
</p><p>
Don't forget this option. If you do not like choice 2 (perhaps because,
like me, you think it is almost choice 3), rank 4 above it.
Otherwise, with equally many [3214] and [1234] votes, choice 2 would most
likely win.
</p><p>
So rank 2 above 4 only if you want to see 2 in action. Otherwise vote
4 over 2. (Same with 3 and 4, but 3 does not look as innocent as 2.)</p>
voteshttp://blog.brlink.eu/index.html#i8Mon, 16 Jan 2006 10:03:52 +0100http://blog.brlink.eu/index.html#i8
Silver Plate<p> I just feel like quoting some passage from the Debian Developer's Reference:
</p><p>
A big part of your job as Debian maintainer will be to stay in contact with the
upstream developers. Debian users will sometimes report bugs that are not specific to
Debian to our bug tracking system. You have to forward these bug reports to the
upstream developers so that they can be fixed in a future upstream release.
</p><p>
While it's not your job to fix non-Debian specific bugs, you may freely do so if you're
able. When you make such fixes, be sure to pass them on to the upstream maintainers as
well. Debian users and developers will sometimes submit patches to fix upstream bugs --
you should evaluate and forward these patches upstream.
</p><p>
(that's from 3.5 in case anyone wants to look it up there)</p>
cooperatinghttp://blog.brlink.eu/index.html#i7Fri, 16 Dec 2005 19:08:44 +0100http://blog.brlink.eu/index.html#i7
Why not CVS?<p> To Wouter: No, I never used anything other than CVS for
anything serious. Whenever I tried one of the others for
something (mostly because someone else used it for something
I wanted to work on), they simply broke. I don't want to debug
my tools or use funny workarounds, but to get some work done on
what I use the tools for.
Using anything not in a Debian stable release is hardly
acceptable for me (remember, these are tools), but when
even the testing or unstable versions are not enough for simple
tasks, it's just too bleeding edge for me.
</p><p>
&quot;only suggests you haven't seen many large projects
in the heat of code change&quot;
</p><p>
That's simply a matter of style. If a checkin means a
full compile, manually reading the diff and a minimal
check for correctness, writing ChangeLog entries and
possibly adapting the documentation, there is simply no
need to handle checkins with sub-minute resolution.
</p><p>
&quot;Far too often have I seen people afraid to reorganize their
code because that would lose history on the files.&quot;
</p><p>
That's a major problem, but the problem is the fear. No
rcs will ever be able to track history for even the most
common possible reorganizations of code. Limiting yourself
to what your rcs can cope with is the main problem; the
abilities of your rcs are a minor one.
</p><p>
&quot;How about the fact that upstream CVS development is rather extremely
dead, [...]&quot;
</p><p>
I prefer tools that can do what I need over tools that may
eventually be adapted to my needs. Active development means that when
I encounter a bug I either have to wait a year until it no
longer bothers me, or wait a week and update the software on every
computer I want to use it on, possibly locally in my user account if
I do not administrate the computer or if the behaviour changes so much
that other usages break. Leading to problems living within my disk
quota and so on.
</p><p>
Don't get me wrong, I'm not against SVN. I guess now
(several years after everyone was already told not to use
that old-fashioned CVS, but also not SVN version N but version N+1,
because N was too broken; for several values of N)
it is quite usable. And things like atomic commits might even
make it preferable to CVS for larger projects. But not
every project can be in the top-ten list of size; coding and
commit styles differ.
But I believe for many people, the ratio of advantages to disadvantages
still points in the other direction.</p>
rcshttp://blog.brlink.eu/index.html#i6Tue, 13 Dec 2005 16:21:44 +0100http://blog.brlink.eu/index.html#i6
Why not CVS?<p> With this rcs debate currently on planet.debian.org I felt the
need to add some thoughts.
</p><p>
My point is mainly: Why not simply stick to plain old CVS?
</p><p>
The pros are easily collected: it's installed everywhere,
almost everyone knows at least the basic commands, and it is
rock-solid technology without all those little nasty bugs
the newer ones have all the time.
</p><p>
Most of the contra arguments are not applicable to me, so
how can they to anybody else? ;-)
</p><p>
Like changelog messages: I write a ChangeLog after a patch,
because I look at the patch while doing so. After all, that
is what the ChangeLog is supposed to document, not what
I thought I did. (And looking at the patch is always a good
way to catch some obvious mistakes.)
</p><p>
Making multiple patches off other people's projects:
Two versions of the directories you are working on are
all that is needed. Change one, make diffs compared
to the other. Revert the diff (patch -R, or just answer
often enough), change the diff to what it should be,
reapply it to make sure it still works, test it, revert
it again. To make another patch against the same original software,
continue from the beginning; otherwise apply the patch to
both copies. Just works. Easier than any darcs or co, even
if those would not core dump, go into endless loops or
play dead dog.
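The two-directory workflow above, sketched in shell (all directory and file names are made up for illustration):

```shell
cd "$(mktemp -d)"                        # scratch area for the demo
mkdir project.orig
echo 'original line' > project.orig/file.c
cp -R project.orig project.work          # the copy you actually edit

echo 'changed line' > project.work/file.c

# diff exits 1 when the trees differ, hence the || true:
diff -ru project.orig project.work > my.patch || true

patch -s -d project.work -R -p1 < my.patch   # revert: work tree matches orig again
patch -s -d project.work -p1 < my.patch      # reapply to confirm it still works
```

To produce a second, independent patch against the same original, start again from a fresh copy; to stack patches, apply my.patch to both trees first.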
</p><p>
Even the non-exotic new systems still have plenty of
features I never needed:
</p><p>
Something has to be really big before moving files around is needed
at all. And if it is needed, just delete the file here and add it
there. That loses a bit of history, but that is still found
in the old place's history. Moving whole files is then
only a special case of moving routines between files during
refactoring; one sometimes just has to look somewhere else.
</p><p>
Even for svn's global revision numbers I have not yet
found a use. Being used to cvsish tagging removes the need
given some thinking ahead, and between commits there is normally
at least a quarter of an hour, so date-based indexing always
works.
</p><p>
So, what are we talking about?</p>
blogWarshttp://blog.brlink.eu/index.html#i5Fri, 11 Nov 2005 11:29:32 +0100http://blog.brlink.eu/index.html#i5
Would you have seen the bug<p> ... if not told it is in there:
</p>
<pre>
ssize_t wasread = read(fs,buffer,toread);
if( read > 0 ) {
</pre>
funnyThingshttp://blog.brlink.eu/index.html#i4Thu, 15 Sep 2005 17:24:29 +0200http://blog.brlink.eu/index.html#i4
fontconfig considered harmful<p> I'm sometimes a bit behind on the
&quot;Make Linux as Unusable as Windows&quot; front. So I only
learned today about this 'fontconfig' thing which
is a major victory in that respect.
</p><p>
The .fonts-cache1 files alone are very effective in that:
</p><p>
633k in /home for every single user on a quite normal
sarge install, thus half a gig for all users.
</p><p>
Font data in /home? Yes, really. I did not believe it
when I first saw it, either. Guess sharing your home-dir
over an inhomogeneous network is nothing Windows can do,
so it should no longer be supported....
</p><p>
Running fc-cache as root on any computer will make
it stop doing so, but it is disturbing to see again
some of the unixoid strengths thrown in the wastebasket.</p>
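The damage is easy to measure; something like the following sums it up (it assumes home directories sit directly under /home, and the .fonts-cache1 name is the sarge-era one):

```shell
# Total size of all per-user fontconfig caches under /home;
# errors for users without a cache file are discarded:
du -shc /home/*/.fonts-cache1 2>/dev/null | tail -n 1
```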
rantshttp://blog.brlink.eu/index.html#i3Mon, 05 Sep 2005 11:57:39 +0200http://blog.brlink.eu/index.html#i3
When will people learn?<p> ... that the OS exception of the <a href="http://www.gnu.org/licenses/gpl.txt">GPL</a> does not help if you
want things included in an operating system?
(<a href="http://curl.haxx.se/legal/distro-dilemma.html">here</a> is the latest example of people still not getting it.)
</p><p>
... that library functions should not terminate the program
when they run out of memory but return some sensible error?
</p><p>
... that the home directory of the current user is in getenv(&quot;HOME&quot;)
and not (and never has been and never will be) in
getpwuid(getuid())->pw_dir ? Usage of the latter is a bug almost
everywhere.
(For example, do not use g_get_home_dir from libglib, as what it
returns is the home directory only in some (though very common)
cases.)
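The difference is easy to demonstrate from the shell: $HOME is just an environment variable and follows whatever the session set, while the passwd entry does not (the /tmp/elsewhere value below is arbitrary):

```shell
# A child shell started with a different HOME sees that value,
# no matter what the passwd database says for the user:
HOME=/tmp/elsewhere sh -c 'echo "$HOME"'     # prints /tmp/elsewhere

# The passwd database answer, for comparison (one way to query it):
getent passwd "$(id -u)" | cut -d: -f6
```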
</p><p>
... that there are ways to design libraries and especially their
headers so that one can compile applications without all
those include paths and library paths.
</p>
rantshttp://blog.brlink.eu/index.html#i2Wed, 24 Aug 2005 13:37:10 +0200http://blog.brlink.eu/index.html#i2
Downloading a package and all dependencies<p> To download a package and all packages it depends on (though only
one possible combination, not necessarily the one installed on your
system) use:
</p>
<pre>
mkdir partial
apt-get -o&quot;Dir::Cache::archives=`pwd`&quot; \
    -o&quot;Debug::NoLocking=true&quot; \
    -o&quot;Dir::State::status=/dev/null&quot; \
    -d install packagename
</pre>
trickshttp://blog.brlink.eu/index.html#i1Mon, 22 Aug 2005 16:27:50 +0200http://blog.brlink.eu/index.html#i1
New Blog<p> After these new changelog-to-blog <a href="http://orebokech.com/debian/#1-1">scripts</a> were so heavily
advertised, I thought that would be a good point to start
a blog, too.
</p><p>
Though I felt like patching it a bit, so that the links
in the generated html are a bit more readable and no
eval or unquoted filenames are used in the script.
</p><p>
And while I am at it: a link to the rss file, making the
xhtml checker happy by using absolute values, and hiding the
e-mail address (dch adds some random address anyway, and the line
is getting too long otherwise)
</p>
meta