For the good of all of us - Softwarehttps://www.skytale.net/blog/
enSerendipity 2.1.4 - http://www.s9y.org/Sun, 15 Jul 2012 14:53:21 GMThttps://www.skytale.net/blog/templates/default/img/s9y_banner_small.pngRSS: For the good of all of us - Software - https://www.skytale.net/blog/
10021Creating a throwaway browserhttps://www.skytale.net/blog/archives/39-Creating-a-throwaway-browser.html
ComputerLinuxSoftwarehttps://www.skytale.net/blog/archives/39-Creating-a-throwaway-browser.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=390https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=39nospam@example.com (Ralf Ertzinger)
<p>Once every other while it&#8217;s useful to have a browser that&#8217;s not connected to the normal browsing profile (I know. Don&#8217;t say it. I&#8217;ve been here a while).</p>
<p>These are two shell scripts that create a new profile for Firefox and Chrome in a temporary directory, start the browser using that profile and remove the directory afterwards.</p>
<h2>Chrome</h2>
<pre>
#!/bin/bash
trap cleanup EXIT
die() {
    echo &quot;$@&quot;
    exit 1
}
cleanup() {
    [ -d &quot;${CHROMETMP}&quot; ] &amp;&amp; rm -rf &quot;${CHROMETMP}&quot;
}
CHROMETMP=$(mktemp -d)
[ -d &quot;${CHROMETMP}&quot; ] || die &quot;Could not create temp dir&quot;
chromium-browser --user-data-dir=&quot;${CHROMETMP}&quot;
</pre>
<h2>Firefox</h2>
<pre>
#!/bin/bash
trap cleanup EXIT
die() {
    echo &quot;$@&quot;
    exit 1
}
cleanup() {
    [ -d &quot;${FIREFOXTMP}&quot; ] &amp;&amp; rm -rf &quot;${FIREFOXTMP}&quot;
}
FIREFOXTMP=$(mktemp -d)
[ -d &quot;${FIREFOXTMP}&quot; ] || die &quot;Could not create temp dir&quot;
export HOME=&quot;${FIREFOXTMP}&quot;
firefox -no-remote -CreateProfile &#39;throwaway&#39; || die &quot;Could not create profile&quot;
firefox -no-remote -P &#39;throwaway&#39;
</pre>
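<p>The pattern both scripts share &#8212; create a temporary directory, run a command against it, remove the directory on exit &#8212; generalizes into a small wrapper. This is a sketch; <code>with_tmpdir</code> and the <code>THROWAWAY</code> variable are illustrative names of my choosing, not part of the scripts above.</p>

```shell
#!/bin/bash
# Sketch of the shared pattern: make a temp dir, hand it to a command via
# an environment variable, and remove it when the command exits.
# "with_tmpdir" and "THROWAWAY" are illustrative names.

with_tmpdir() {
    local dir
    dir=$(mktemp -d) || { echo "Could not create temp dir" >&2; return 1; }
    (
        # Subshell, so the EXIT trap fires as soon as the wrapped command is done.
        trap 'rm -rf "${dir}"' EXIT
        THROWAWAY="${dir}" "$@"
    )
}

# The Chrome case from above would become:
#   with_tmpdir sh -c 'chromium-browser --user-data-dir="${THROWAWAY}"'

# Demonstration without a browser: capture the directory name, then show
# that it is gone once the wrapper returns.
dir_used=$(with_tmpdir sh -c 'echo "${THROWAWAY}"')
test -d "${dir_used}" || echo "cleaned up: ${dir_used}"
```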
Sun, 15 Jul 2012 14:47:01 +0000https://www.skytale.net/blog/archives/39-guid.htmlInstalling CentOS 6 from a rescue systemhttps://www.skytale.net/blog/archives/38-Installing-CentOS-6-from-a-rescue-system.html
ComputerLinuxSoftwarehttps://www.skytale.net/blog/archives/38-Installing-CentOS-6-from-a-rescue-system.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=380https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=38nospam@example.com (Ralf Ertzinger)
<p>I recently acquired a root server hosted by <a href="http://www.manitu.de">Manitu</a>. It was planned to run CentOS 6 on that server, but unfortunately that is not in the list of operating systems that can be preinstalled (or reinstalled) via the management web page.</p>
<p>There is, however, a rescue system, an option to remotely boot the server into a bare bones Linux system via the network. Since an OS is basically a bunch of bits on a hard disk it should be possible to get said bits onto the hard disk from the rescue system.</p>
<p><a href="http://www.topgear.com/uk/">How hard can it be?</a></p>
<p>Now, there are some things to consider. First of all, this is not for someone new to Linux or CentOS. There are lots of ways for things to go wrong or not end up as expected. The usual failure mode for this is the remote system not booting, which is kind of hard to debug without any further information. Also, this will erase all data on the hard disks in the system.</p>
<p>The partitioning scheme is what I needed for my system. Adjust as needed. If you don&#8217;t know how, that&#8217;s the first clue that you should not be doing this in the first place.</p>
<h2>Preconditions</h2>
<p>Some of the things needed for this to work:</p>
<ul>
<li>The rescue system has to support the hardware you want to install on (mainly network and hard disk controllers). Since the rescue system is provided by the hoster this is usually the case, but it does not hurt to check.</li>
<li>The rescue system has to support software <span class="caps">RAID</span> (if you want it), <span class="caps">LVM</span> (if you want it) and all file systems you want to use (or at least the ones needed to boot the system)</li>
<li>The rescue system kernel has to be reasonably similar to the CentOS installer kernel. Being newer is usually not a problem, being much older can be.</li>
<li>wget (or some other way to download files via <span class="caps">HTTP</span>)</li>
<li>File system support in the kernel for ISO9660, squashfs and ext3</li>
<li>At least 2GB of <span class="caps">RAM</span></li>
<li>Hardware supported by CentOS 6. If CentOS 6 will not install or boot on the system with a normal (CD/DVD based or network) install, then chances are that installing it this way will not work either.</li>
</ul>
<p>It is further assumed that a tmpfs file system is mounted at <code>/dev/shm</code>.</p>
<h2>Install plan</h2>
<p>The hardware in question here has two <span class="caps">SATA</span> <span class="caps">HDD</span>s on <span class="caps">AHCI</span> compatible controllers without any hardware <span class="caps">RAID</span>. The final hard disk layout will be as follows:</p>
<ul>
<li><code>/dev/md0</code>, 1GB, consisting of <code>/dev/sda1</code> and <code>/dev/sdb1</code>, ext3, mounted at <code>/boot</code></li>
<li><code>/dev/md1</code>, 64GB, consisting of <code>/dev/sda2</code> and <code>/dev/sdb2</code>, <span class="caps">LVM</span> PV</li>
</ul>
<p>The PV is part of a volume group which contains an LV holding the root file system.</p>
<h2>Prepare the Live CD</h2>
<p>The rescue system usually has no CentOS specific tools (like <span class="caps">RPM</span>) which are needed to install the OS packages. In order to get a usable install environment we&#8217;ll commandeer the CentOS 6 live CD. It contains a complete root file system image.</p>
<ul>
<li>Download the current live CD <span class="caps">ISO</span> image from <a href="http://mirror.centos.org">http://mirror.centos.org</a>, putting the file into <code>/dev/shm</code>. Be sure to grab the CD for the right architecture (the right architecture is x86_64, by the way).</li>
<li>Create a bunch of directories<br />
<pre>
mkdir /loop1 /loop2 /sysroot
</pre></li>
<li>Mount the root image (this is a bit recursive)<br />
<pre>
mount -o ro /dev/shm/CentOS-6.2-x86_64-LiveCD.iso /loop1
mount -o ro /loop1/LiveOS/squashfs.img /loop2
mount -o ro /loop2/LiveOS/ext3fs.img /sysroot
</pre></li>
<li>Bind mount needed virtual file systems<br />
<pre>
mount -o bind /dev /sysroot/dev
mount -o bind /proc /sysroot/proc
mount -o bind /sys /sysroot/sys
</pre></li>
</ul>
<p>Now chroot into <code>/sysroot</code> and try running some commands (<span class="caps">RPM</span>, yum). If this does not work then the rescue system kernel is probably too far from the live CD kernel, and this adventure ends here.</p>
<p>The image mounted at <code>/sysroot</code> is read only. This will not do, since we need to change <span class="caps">DNS</span> entries, so create a writable <code>/etc</code>:</p>
<pre>
mount -t tmpfs tmpfs /tmp
rsync -a /etc /tmp
mount -o bind /tmp/etc /etc
</pre>
<p>Ignore the <code>mtab</code> warning the first command produces. After this, fill in <code>/etc/resolv.conf</code> with some name servers and try pinging something by name.</p>
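<p>A minimal <code>/etc/resolv.conf</code> is enough. The addresses below are illustrative; use your hoster&#8217;s resolvers or a public one:</p>

```
nameserver 192.0.2.53
nameserver 8.8.8.8
```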
<h2>Prepare for installation</h2>
<p>Leave the chroot again (working with <span class="caps">LVM</span> from within it does not work).</p>
<p>Remove and stop any existing volume groups and software raids. Then partition the hard disks according to the new scheme and create the new software <span class="caps">RAID</span>s and <span class="caps">LVM</span>s.</p>
<pre>
fdisk -u -c /dev/sda
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
mdadm --create /dev/md0 -n 2 --level 1 --metadata=0.90 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 -n 2 --level 1 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate -s 64M vg_tara_root /dev/md1
lvcreate -n lv_swap -L 8G vg_tara_root
lvcreate -n lv_c6_root -L 16G vg_tara_root
mkfs.ext3 /dev/md0
mkfs.ext4 /dev/mapper/vg_tara_root-lv_c6_root
tune2fs -c0 -i0 -r32000 -L boot /dev/md0
tune2fs -c0 -i0 -r32000 -L lv_c6_root /dev/mapper/vg_tara_root-lv_c6_root
</pre>
<p>It&#8217;s probably a good idea to wait for the <span class="caps">RAID</span> resync; that should not take long. The 0.90 metadata version on <code>/dev/md0</code> is needed to help the boot loader: this format keeps the metadata at the end of the device, so the boot loader sees a plain file system.</p>
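<p>Resync progress can be watched in <code>/proc/mdstat</code>. While the resync is running it will look roughly like this (block counts, percentages and speeds are illustrative):</p>

```
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
      67108864 blocks [2/2] [UU]
      [===>.................]  resync = 18.4% (12345678/67108864) finish=10.2min speed=89012K/sec
md0 : active raid1 sdb1[1] sda1[0]
      1048576 blocks [2/2] [UU]
```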
<ul>
<li>chroot into <code>/sysroot</code> again.</li>
<li>Prepare a directory tree to hold the system to be installed (it&#8217;s also called sysroot. Yes, this is intentionally confusing):</li>
</ul>
<pre>
mkdir /tmp/sysroot
mount /dev/mapper/vg_tara_root-lv_c6_root /tmp/sysroot
mkdir -p /tmp/sysroot/{boot,dev,proc,sys}
mount /dev/md0 /tmp/sysroot/boot
mount -o bind /dev /tmp/sysroot/dev
mount -o bind /proc /tmp/sysroot/proc
mount -o bind /sys /tmp/sysroot/sys
</pre>
<h2>Installation</h2>
<ul>
<li>Prepare the <span class="caps">RPM</span> database in the new system:</li>
</ul>
<pre>
rpm --root /tmp/sysroot --rebuilddb
</pre>
<ul>
<li>Download the centos-release <span class="caps">RPM</span> from the current CentOS 6 build (again: architecture) into <code>/tmp</code> and install it:</li>
</ul>
<pre>
rpm --root /tmp/sysroot -ihv /tmp/centos-release-6-2.el6.centos.7.x86_64.rpm
</pre>
<ul>
<li>Now install the CentOS base system. This ought to install 300 to 350 packages, depending on the release. Confirm the <span class="caps">GPG</span> key import question, and ignore the <span class="caps">DBUS</span> error at the end. Depending on network connection and hard disk speed this may take a while.</li>
</ul>
<pre>
yum --installroot=/tmp/sysroot groupinstall base
</pre>
<ul>
<li>Install the bootloader and <span class="caps">SSH</span> server package.</li>
</ul>
<pre>
yum --installroot=/tmp/sysroot install grub openssh-server
</pre>
<ul>
<li>Unmount the newly installed system and clean up the chroot</li>
</ul>
<pre>
umount /tmp/sysroot/{boot,dev,proc,sys,} /etc /tmp
</pre>
<ul>
<li>Leave the chroot</li>
<li>Remove the live CD image</li>
</ul>
<pre>
umount /sysroot/{dev,proc,sys,}
umount /loop2
umount /loop1
</pre>
<ul>
<li>Mount the new system under <code>/sysroot</code></li>
</ul>
<pre>
mount /dev/mapper/vg_tara_root-lv_c6_root /sysroot
mount /dev/md0 /sysroot/boot
mount -o bind /dev /sysroot/dev
mount -o bind /proc /sysroot/proc
mount -o bind /sys /sysroot/sys
</pre>
<ul>
<li>chroot into <code>/sysroot</code></li>
<li>Set the password for root</li>
<li>Create a symlink for <code>/etc/mtab</code></li>
</ul>
<pre>
ln -s /proc/mounts /etc/mtab
</pre>
<ul>
<li>Install the boot loader<br />
<pre>
grub-install /dev/md0
</pre></li>
<li>Remove <code>/etc/mtab</code></li>
<li>Populate <code>/etc/fstab</code>. Do not use raw device names here, especially not for software <span class="caps">RAID</span>s. Use labels or <span class="caps">UUID</span>s.</li>
</ul>
<pre>
LABEL=&quot;lv_c6_root&quot; / ext4 defaults 1 1
LABEL=&quot;boot&quot; /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
</pre>
<ul>
<li>Populate <code>/etc/resolv.conf</code></li>
<li>Populate <code>/etc/sysconfig/network</code></li>
</ul>
<pre>
NETWORKING=yes
NOZEROCONF=yes
HOSTNAME=tara.example.org
</pre>
<ul>
<li>Populate <code>/etc/sysconfig/network-scripts/ifcfg-eth0</code> (use the correct values for IPs, prefixes and gateways)</li>
</ul>
<pre>
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.100
GATEWAY=192.168.0.1
PREFIX=24
ONBOOT=yes
</pre>
<ul>
<li>Populate <code>/boot/grub/grub.conf</code> (mind the line break)<br />
<pre>
default=0
timeout=5
title CentOS (2.6.32-220.23.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-220.23.1.el6.x86_64 ro root=LABEL=lv_c6_root LANG=en_US.UTF-8 \
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=de-latin1-nodeadkeys
initrd /initramfs-2.6.32-220.23.1.el6.x86_64.img
</pre></li>
</ul>
<ul>
<li>populate <code>/root</code></li>
</ul>
<pre>
cp /etc/skel/.* /root
</pre>
<ul>
<li>Create <code>grub.conf</code> symlinks</li>
</ul>
<pre>
ln -s grub.conf /boot/grub/menu.lst
ln -s /boot/grub/grub.conf /etc/grub.conf
</pre>
<ul>
<li>Leave the chroot</li>
<li>Unmount the new system</li>
</ul>
<pre>
umount /sysroot/{boot,sys,dev,proc,}
</pre>
<p>Now, usually this ought to result in a bootable system which can be accessed via <span class="caps">SSH</span>. Only one way to find out.</p>
Sun, 01 Jul 2012 09:15:44 +0000https://www.skytale.net/blog/archives/38-guid.htmlInstalling RedHat 1.1 (Mother's Day + 0.1)https://www.skytale.net/blog/archives/36-Installing-RedHat-1.1-Mothers-Day-+-0.1.html
ComputerLinuxSoftwarehttps://www.skytale.net/blog/archives/36-Installing-RedHat-1.1-Mothers-Day-+-0.1.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=360https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=36nospam@example.com (Ralf Ertzinger)
<p>Just to see what life was like in the dark ages of Linux distributions I ventured to install the earliest RedHat release I could get my hands on in a <span class="caps">QEMU</span> virtual machine.</p>
<p>It turns out that this is easier said than done. RedHat does have an archive of old versions (available at <a href="http://archive.download.redhat.com">http://archive.download.redhat.com</a>), but this is quite incomplete for the earliest version.</p>
<p>Fortunately there&#8217;s an installable version of Mother&#8217;s Day 1.1 on <a href="http://www.ibiblio.org/pub/historic-linux/distributions/redhat/">ibiblio</a> (the 1.0 release is incomplete as well), which I used.</p>
<p>To make an installable version out of this it&#8217;s recommended to make a local copy of the complete tree, which is easily done with <code>rsync</code>:</p>
<pre>
$ rsync -rv --progress www.ibiblio.org::pub/historic-linux/distributions/redhat/mothers-day-1.1 .
</pre>
<p>This will create a local directory called <code>mothers-day-1.1</code> containing all needed files, taking up about 360MB.</p>
<p>The installer will need to access the files via a <span class="caps">CDROM</span> or an <span class="caps">NFS</span> share. I opted for the CD method, so let&#8217;s create a CD image:</p>
<pre>
$ chmod +x mothers-day-1.1/bin/*
$ mkisofs -J -R -o mothers-day-1.1.iso mothers-day-1.1
</pre>
<p>This makes all the files in <code>mothers-day-1.1/bin</code> executable (this is important because the installer will mount the CD and expects to be able to execute these files for the installation) and creates an <span class="caps">ISO</span> image called <code>mothers-day-1.1.iso</code> containing all files from the <code>mothers-day-1.1</code> directory.</p>
<p>The installer will boot from a floppy disk. The release contains a whole bunch of these, for different hardware configurations (a kernel containing all supported configs would not have fit on one floppy, so one has to choose the right one). For <span class="caps">QEMU</span> we&#8217;ll need standard <span class="caps">IDE</span> support (easy) and <span class="caps">AMD</span> PCnet support for networking (also easy). The boot image supporting these is located in <code>mothers-day-1.1/images/1211/boot0066.img</code>. These images were meant to be copied to a 1.44MB floppy disk, but they are only 800k in size. If the images are passed to <span class="caps">QEMU</span> as they are, <span class="caps">QEMU</span> will misinterpret the floppy size, causing the boot loader (<span class="caps">LILO</span>) to fail. So <span class="caps">QEMU</span> needs a little hint.</p>
<pre>
$ cp mothers-day-1.1/images/1211/boot0066.img boot.img
$ qemu-img resize boot.img 1440k
Image resized
$ cp mothers-day-1.1/images/rootdisk.img .
</pre>
<p>This copies the correct boot image to <code>boot.img</code> and resizes it to the correct size for a 1.44MB floppy. For convenience I copied the root disk image as well; it already has the correct size.</p>
<p>All that&#8217;s missing now is a hard disk image to install to. This should not be too large, as the <span class="caps">IDE</span> driver in the kernel has some problems handling large disks. Fortunately this is the deep past, so 768MB will be plenty.</p>
<pre>
$ qemu-img create -f qcow2 disk1.img 768M
</pre>
<p>Deep past or not, the installer needs memory, and an amazing (for the time) amount of it. 4MB will not be enough, 8MB will do fine. So, let&#8217;s go.</p>
<pre>
$ qemu -M pc -m 8 -fda boot.img -drive file=disk1.img,if=ide,media=disk,cache=writeback \
-cdrom mothers-day-1.1.iso -net nic,model=pcnet -net user -boot a
</pre>
<p>(This adds the hard disk image in writeback cache mode. This is not recommended from a data security standpoint, as data written by the virtual machine is not immediately committed to host storage, but since this is just a for-fun exercise and EXT2 formatting takes ages with the default cache strategy I&#8217;ll pass on data security here.)</p>
<p>At the <span class="caps">LILO</span> prompt, just press Enter to boot with default options. When prompted, change the floppy to the root disk (<code>change floppy0 rootdisk.img</code> in the <span class="caps">QEMU</span> monitor mode) and press Enter to continue. The installer will come up (which is quite nice), prompting to change the floppy back to the boot floppy.</p>
<p>Select an Express install, say &#8220;No&#8221; to the default package list question, and select CD as the install media. The installer ought to find the CD image on <code>/dev/hdc</code>, which is correct.</p>
<p>There will be no OS/2 on this install, so skip the reboot at the next question.</p>
<p>The hard disk will need to be partitioned. The installer should find a hard disk at <code>/dev/hda</code> (if the installer just presents a list of partitioning programs without a disk device your hard disk image is too large). Partition the disk into one data partition (taking most of the space) and a small swap partition (16MB or so). The installer will ask to reboot if partitions were changed, this is not needed as there were no partitions on the disk to start with.</p>
<p>Confirm <code>/dev/hda2</code> as a swap partition, and select <code>/dev/hda1</code> for formatting.</p>
<p>On the package selection screen select whatever is needed (or just everything, it does not really matter :) I&#8217;d recommend at least the Net Utils, everything X and Utils+. And there&#8217;s Doom (but more on that later).</p>
<p>When asked for the type of video card select <span class="caps">SVGA</span>, and enter a hostname for the machine.</p>
<p>The installer will then format swap and file system, which might take a few seconds. Or even minutes. If you did not change the default caching strategy in the <span class="caps">QEMU</span> call above it will definitely take minutes. Or hours.</p>
<p>After the formatting the package installation phase begins. This will also take a few minutes, but at least it has a progress bar. The installer may complain about XF86_SVGA being already installed at the end; this can be ignored.</p>
<p>Then the boot kernel is copied from the boot floppy.</p>
<p>For the mouse, select <code>microsoft-serial</code>, connected to <code>/dev/ttyS0</code>.</p>
<p>The X configuration is a bit wonky (and this would not really change for the next decade or more). Decline autoprobe, select <code>clgd5434</code> as the chipset (this isn&#8217;t correct, but close enough). Enter 4096k of video memory, 10-100 for the clocks, and select the <code>Generic Multisync</code> monitor. The configurator will tell you that it failed after that, but never mind.</p>
<p>Configure networking, entering a host name, domain name and fully qualified host name. Select <code>10.0.2.100</code> as the IP, <code>10.0.2.0</code> as the network, <code>255.255.255.0</code> as the netmask, <code>10.0.2.255</code> as the broadcast, <code>10.0.2.2</code> as the gateway and <code>10.0.2.3</code> as the <span class="caps">DNS</span> server (<span class="caps">QEMU</span> user mode networking is funny).</p>
<p>Select no modem, your keymap, local time and your time zone (the list is sorted upside down, for whatever reason).</p>
<p>Select to install <span class="caps">LILO</span> in <code>/dev/hda</code> without specific parameters and without other operating systems.</p>
<p>Create a user account (if you want) and select a root password.</p>
<p>After that, the installation is finished. Select reboot.</p>
<p>The system will be unable to actually reboot, so stop <span class="caps">QEMU</span> after the installer has terminated and start it again:</p>
<pre>
$ qemu -M pc -m 8 -drive file=disk1.img,if=ide,media=disk -net nic,model=pcnet -net user -serial msmouse
</pre>
<p>This invocation is missing the floppy and CD images (they are not needed anymore) and adds a serial mouse.</p>
<p>At the boot prompt press Enter, and wait until the system has booted to the login prompt (which will take all of a few seconds). Look around. If you&#8217;re used to RedHat based systems (or Fedora) most things should look familiar.</p>
<p>Next up: getting X to actually work.</p>
Sun, 21 Aug 2011 19:03:51 +0000https://www.skytale.net/blog/archives/36-guid.htmlGIT pushing to a new bare remote repohttps://www.skytale.net/blog/archives/35-GIT-pushing-to-a-new-bare-remote-repo.html
ComputerSoftwarehttps://www.skytale.net/blog/archives/35-GIT-pushing-to-a-new-bare-remote-repo.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=350https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=35nospam@example.com (Ralf Ertzinger)
<p>Just a note to myself, as I do not do this often enough to remember.</p>
<p>If you have a local <span class="caps">GIT</span> repository (which has no remote yet, as it was only used for local development so far) and want to push it out to a remote repository, and make that repository the default for push and pull operations, here is how it&#8217;s done.</p>
<p>This requires <span class="caps">GIT</span> 1.7, and assumes the following:</p>
<ul>
<li>The local branch to be pushed is <code>master</code></li>
<li>The remote repo is accessible via <code>ssh://user@example.com/GIT/project.git</code> and already contains a freshly created, bare repo</li>
</ul>
<p>First, add a remote to the local repository.</p>
<pre>
$ git remote add origin ssh://user@example.com/GIT/project.git
</pre>
<p>This, by itself, does little except add a remote repository to your local repo config. The remote repo is called <code>origin</code>, which is the default name git chooses if you <code>git clone</code> from a remote repo. The remote repo is not associated with any local branches yet.</p>
<p>Second, push the accumulated local commits to the remote repo, designating the remote as the default for future push/pull operations.</p>
<pre>
$ git push --set-upstream origin master
</pre>
<p>This will push the local master branch to the remote origin, creating a master branch there as well, and ties the local master branch to origin as the default for push and pull. Future <code>git pull</code> and <code>git push</code> invocations will work without specifying local or remote branches.</p>
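<p>The whole flow can be exercised end to end with a scratch repository. Everything here is illustrative: the paths are throwaway, the identity settings exist only so the commit succeeds, and since a modern git may default to a branch name other than <code>master</code>, the branch name is detected rather than hard-coded.</p>

```shell
#!/bin/bash
# Sketch: a local repo plus a freshly created bare "remote", wired together
# with --set-upstream. All paths are throwaway.
set -e
base=$(mktemp -d)
git init -q --bare "${base}/project.git"
git init -q "${base}/work"
cd "${base}/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'
branch=$(git symbolic-ref --short HEAD)   # "master" on older git
git remote add origin "${base}/project.git"
git push -q --set-upstream origin "${branch}"
# The local branch is now associated with the remote:
git config "branch.${branch}.remote"      # the remote name, i.e. origin
git config "branch.${branch}.merge"       # the remote ref to merge from
```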
Tue, 19 Jul 2011 16:27:26 +0000https://www.skytale.net/blog/archives/35-guid.htmlBuilding a multi OS USB boot stick, Part 1 (Windows)https://www.skytale.net/blog/archives/33-Building-a-multi-OS-USB-boot-stick,-Part-1-Windows.html
ComputerLinuxSoftwareSolarisWindowshttps://www.skytale.net/blog/archives/33-Building-a-multi-OS-USB-boot-stick,-Part-1-Windows.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=331https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=33nospam@example.com (Ralf Ertzinger)
<p>Among the things I carry around is always a collection of <span class="caps">USB</span> sticks, for various purposes. One of those is usually dedicated to a Linux rescue system, in order to get somehow broken systems back on their feet.</p>
<p>While it is possible these days to access non-Linux systems from a booted Linux system, any repair work beyond simple text file editing and file copying usually requires OS specific tools to get the job done. Thus it would be nice not only to have a Linux rescue system at hand, but a Windows one as well. And Solaris, while we&#8217;re at it. And possibly some more.</p>
<p><span class="caps">USB</span> sticks are cheap, at least in this part of the world. 10EUR will get you 4GB off the shelf in almost any electronics store, a little more money will get you 8GB ordered online. So space is not really an issue.</p>
<p>Actually installing an operating system in a way that allows it to boot off a removable medium requires some specific preparations and tools in each case, and a running instance of that specific OS is needed to prepare the installation: to get Windows to boot off a <span class="caps">USB</span> stick a running Windows installation is needed. The same goes for Solaris and Linux.</p>
<h3>Preparations</h3>
<p>The <span class="caps">USB</span> stick used for this exercise is a 4G Sandisk. This procedure will <strong>delete all data</strong> currently on the stick, so either make sure there is nothing of any interest on it, or just get a new one.</p>
<p>The initial plan is to have Windows, Linux and Solaris boot off the stick. Each OS will get its own partition, to keep possible clashes between the files of each system to a minimum (and because Solaris wants a <span class="caps">UFS</span> partition, but more on that later).</p>
<h3>Installing Windows on <span class="caps">USB</span></h3>
<p>The standard Windows installer does not allow for installation on <span class="caps">USB</span> devices. The standard tool for those tasks is <a href="http://www.nu2.nu/pebuilder/">BartPE</a>, a free tool to create so-called Preinstalled Environments. Those are actually a Microsoft-supported way to preinstall an operating system on a PC, which is used by system builders to deliver machines with the OS already installed but not registered. The Microsoft tools to create these environments are not easily available, though, and this is where BartPE came in a few years ago. Its original purpose was to create Live CDs of Windows, but booting from <span class="caps">USB</span> was added (experimentally) later.</p>
<p>While BartPE is a very valuable tool there is an even better one for this special purpose: <a href="http://www.ubcd4win.com/">The Ultimate Boot CD for Windows</a>, which is basically a BartPE with a lot of useful tools already tacked to the side, and a completely reworked <span class="caps">USB</span> installer.</p>
<p>To use <span class="caps">UBCD</span> the following is needed:</p>
<ul>
<li>The <span class="caps">UBCD</span> installer, which weighs in at 255MB and is available from the project&#8217;s site</li>
<li>A Windows XP install CD (32 bit)</li>
<li>Service Pack 3 for XP, if the Windows CD does not already include it</li>
<li>A license for the Windows version (this is more a legal than a technical problem, but the Windows install on the <span class="caps">USB</span> stick needs a separate license to be legal)</li>
<li>Drivers</li>
</ul>
<p>The last point is especially interesting. <span class="caps">UBCD</span> will take all drivers which are contained in the Windows XP install CD, which, as everyone who has tried to install XP on a reasonably recent machine knows, is not exactly much. While the <span class="caps">USB</span> install will boot (hopefully), access to hard disk drives on the machine or access to network interfaces may be severely limited due to missing drivers.</p>
<p><span class="caps">UBCD</span> already comes with a largeish selection of updated drivers for mass storage, <span class="caps">LAN</span> and <span class="caps">WLAN</span>, so simply building an image with the default settings has a good chance of working on a large number of modern machines (although the <span class="caps">WLAN</span> drivers are disabled by default).</p>
<h4>Install procedure</h4>
<ul>
<li>Make a copy of the Windows XP CD (that is, just copy all the files on it into a folder on the hard disk drive)</li>
<li>If the CD did not already contain a Windows copy patched to SP3 download the SP3 install package from Microsoft, and <a href="http://www.howtohaven.com/system/slipstream-xp-service-pack-3.shtml">slipstream the Service Pack into the copied files</a></li>
<li>Install <span class="caps">UBCD</span></li>
<li>Start <span class="caps">UBCD</span> and enter the path to the copied Windows CD in the first field</li>
<li>Set Media Output to None</li>
<li>Click &#8220;Build&#8221;</li>
</ul>
<div class="serendipity_imageComment_center" style="width: 509px"><div class="serendipity_imageComment_img"><!-- s9ymdb:31 --><img class="serendipity_image_center" width="509" height="421" src="https://www.skytale.net/blog/uploads/ubcd1.png" alt="" /></div><div class="serendipity_imageComment_txt">The <span class="caps">UBCD</span> main screen</div></div>
<p>This will start a build process with the default settings, which are reasonable for a first build. <span class="caps">UBCD</span> is very customizable, most of the options are available by clicking the &#8220;Plugins&#8221; button on the main screen. Describing the various things that can be done here is beyond this text, but the <span class="caps">UBCD</span> home page has details on this.</p>
<p>After the build has finished plug in the <span class="caps">USB</span> stick and start <code>ubusb.exe</code> from the <span class="caps">UBCD</span> install folder. To make things easier make sure no other <span class="caps">USB</span> mass storage devices are connected. Set the options to match those in the screenshot below. Specifically:</p>
<ul>
<li>Make sure the right <span class="caps">USB</span> device is selected</li>
<li>Set the partition size to 2048MB (or 2GB)</li>
<li>Set the file system to FAT32-<span class="caps">LBA</span></li>
<li>Set the Boot Loader to grub4dos</li>
<li>Select the right BartPE folder (although it should pick up the correct one automatically)</li>
<li>Don&#8217;t create a CD image</li>
</ul>
<div class="serendipity_imageComment_center" style="width: 583px"><div class="serendipity_imageComment_img"><!-- s9ymdb:32 --><img class="serendipity_image_center" width="583" height="554" src="https://www.skytale.net/blog/uploads/ubcd2.png" alt="" /></div><div class="serendipity_imageComment_txt"><span class="caps">UBUSB</span> main screen</div></div>
<p>Clicking &#8220;Go&#8221; will start the process of repartitioning, formatting and copying of data to the <span class="caps">USB</span> stick. This may take a while.</p>
<p>After the process has finished (hopefully successfully) the resulting <span class="caps">USB</span> stick can immediately be tested, because <span class="caps">UBCD</span> comes with a copy of <a href="http://www.qemu.org">qemu</a>, which can emulate a PC. Just click the &#8220;Test <span class="caps">USB</span>&#8221; button, and a virtual PC will try to boot off the <span class="caps">USB</span> stick just created.</p>
<div class="serendipity_imageComment_center" style="width: 740px"><div class="serendipity_imageComment_img"><!-- s9ymdb:33 --><img class="serendipity_image_center" width="740" height="438" src="https://www.skytale.net/blog/uploads/ubcd3.png" alt="" /></div><div class="serendipity_imageComment_txt"><span class="caps">USB</span> boot menu</div></div>
<div class="serendipity_imageComment_center" style="width: 821px"><div class="serendipity_imageComment_img"><!-- s9ymdb:34 --><img class="serendipity_image_center" width="821" height="638" src="https://www.skytale.net/blog/uploads/ubcd4.png" alt="" /></div><div class="serendipity_imageComment_txt">Windows booted off the <span class="caps">USB</span> stick in qemu</div></div>
<p>One down, two to go.</p>
Sun, 21 Mar 2010 17:23:49 +0000https://www.skytale.net/blog/archives/33-guid.htmlOutgoing TLS verification in eximhttps://www.skytale.net/blog/archives/32-Outgoing-TLS-verification-in-exim.html
ComputerSoftwarehttps://www.skytale.net/blog/archives/32-Outgoing-TLS-verification-in-exim.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=320https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=32nospam@example.com (Ralf Ertzinger)
<p><a href="http://www.exim.org">Exim</a> is a mail server which supports <span class="caps">TLS</span> for encrypted connections. This is supported for incoming as well as outgoing connections.</p>
<p>The support for outgoing connections is a bit useless in its default setting, though:</p>
<ul>
<li>If the remote server offers <span class="caps">TLS</span> exim will negotiate an encrypted connection, but will not verify the certificate, rendering the encryption somewhat useless</li>
<li>If the remote side does not offer <span class="caps">TLS</span> mail will be sent in plain text.</li>
</ul>
<p>All in all this offers very little from a security point of view. Making exim do the right thing requires some additions to the <span class="caps">SMTP</span> transport (the following is sufficient for the default exim configuration on <a href="http://www.centos.org">CentOS</a> systems):</p>
<pre>
remote_smtp:
driver = smtp
hosts_require_tls = *
tls_tempfail_tryclear = false
tls_verify_certificates = /etc/pki/tls/certs
</pre>
<p>This forces exim to use <span class="caps">TLS</span> for every outgoing connection (<code>hosts_require_tls = *</code>), forbids fallback to clear text if <span class="caps">TLS</span> does not work (<code>tls_tempfail_tryclear = false</code>) and points to a directory containing trusted certificates (<code>tls_verify_certificates = /etc/pki/tls/certs</code>).</p>
<p>The last parameter is the main reason for this article, as it does not exactly do what it says on the tin. The exim in CentOS is built against OpenSSL, and the OpenSSL libraries are built with <code>/etc/pki/tls/certs</code> as the default search path for certificates. The documentation for the parameter says:</p>
<p><cite>The value of this option must be the absolute path to a file containing permitted server certificates, for use when setting up an encrypted connection. Alternatively, if you are using OpenSSL, you can set tls_verify_certificates to the name of a directory containing certificate files. This does not work with GnuTLS; the option must be set to the name of a single file if you are using GnuTLS. The values of $host and $host_address are set to the name and address of the server during the expansion of this option. See chapter 39 for details of <span class="caps">TLS</span>.</cite></p>
<p>The part missing from this is that the path set with <code>tls_verify_certificates</code> is searched <strong>in addition</strong> to the default certificate search path configured for OpenSSL. So if the OpenSSL default search path already contains all the certificates required, <code>tls_verify_certificates</code> must be set to force exim to verify the certificates, but the value it is set to does not matter. For security reasons it ought to be set to the default OpenSSL search path, though, to prevent someone from maliciously adding more trusted certificates.</p>
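<p>One detail worth knowing when pointing <code>tls_verify_certificates</code> at a directory: OpenSSL only finds a certificate in a directory if a file (or symlink) named after the certificate&#8217;s subject hash exists, which is what <code>c_rehash</code> maintains in <code>/etc/pki/tls/certs</code>. A small sketch, using a throwaway self-signed certificate and a temporary directory in place of a real trust store:</p>

```shell
#!/bin/bash
# Sketch only: a throwaway CA certificate and a temporary directory stand
# in for a real trust store. OpenSSL's directory lookup only finds
# certificates named (or symlinked) after their subject hash.
set -e
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example-ca" \
    -keyout "${CERTDIR}/ca.key" -out "${CERTDIR}/ca.crt" 2>/dev/null
# Link the certificate under its subject hash, as c_rehash would do
HASH=$(openssl x509 -noout -hash -in "${CERTDIR}/ca.crt")
ln -s "${CERTDIR}/ca.crt" "${CERTDIR}/${HASH}.0"
# Verification against the directory now succeeds
RESULT=$(openssl verify -CApath "${CERTDIR}" "${CERTDIR}/ca.crt")
echo "${RESULT}"
```

<p>Without the hash symlink the <code>openssl verify</code> call fails, even though the certificate file sits right there in the directory. The same lookup rules apply to whatever directory exim is told to verify against.</p>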
<p>PS:<br />
Doing this for a general purpose mail server is probably not a good idea, as many mail servers do not offer <span class="caps">TLS</span>, and even if they do, their certificate may not be signed by a trusted (by the client) certificate authority. The mail server in question here will only send mail to a single host.</p>
Sun, 21 Mar 2010 13:37:01 +0000https://www.skytale.net/blog/archives/32-guid.htmlCisco VPN debugging by crystal ballhttps://www.skytale.net/blog/archives/31-Cisco-VPN-debugging-by-crystal-ball.html
CiscoComputerSoftwarehttps://www.skytale.net/blog/archives/31-Cisco-VPN-debugging-by-crystal-ball.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=310https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=31nospam@example.com (Ralf Ertzinger)
<p>In the hope that google picks this up:</p>
<p>The problem space is a Cisco <span class="caps">PIX</span> terminating an <span class="caps">IPS</span>ec <span class="caps">VPN</span> tunnel with a Checkpoint firewall on the other end. The tunnel does not work (the phase 2 setup fails). The Cisco logs the following debug messages:<br />
<pre>
ISAKMP (0): processing SA payload. message ID = 1911693629
ISAKMP : Checking IPSec proposal 1
ISAKMP: transform 1, ESP_3DES
ISAKMP: attributes in transform:
ISAKMP: SA life type in seconds
ISAKMP: SA life duration (VPI) of 0x0 0x0 0xe 0x10
ISAKMP: authenticator is HMAC-SHA
ISAKMP: encaps is 1
ISAKMP (0): atts are acceptable.
ISAKMP : Checking IPSec proposal 1
ISAKMP (0): atts not acceptable. Next payload is 0
ISAKMP (0): SA not acceptable!
ISAKMP (0): sending NOTIFY message 14 protocol 0
return status is IKMP_ERR_NO_RETRANS
</pre></p>
<p>The log message above was created by an incoming proposal (the remote end proposed a connection to the Cisco <span class="caps">PIX</span>). This is useless and confusing at the same time. An <span class="caps">IPS</span>ec proposal contains a list of parameters, sent by one end of the connection, specifying the parameters it is willing to use to establish a secure connection. This proposal specifies 3DES as the encryption algorithm, <span class="caps">SHA</span> as a hash function, and a lifetime for the connection of 3600 seconds (after which the connection has to be renegotiated).</p>
<p>As can be seen, the <span class="caps">PIX</span> accepts this proposal (as it should), since these parameters match those configured on the <span class="caps">PIX</span> for this connection. It then goes on to check the same proposal again, just to reject it this time.</p>
<p>The completely non-obvious solution to this is to disable compression (which the <span class="caps">PIX</span> does not support) on the Checkpoint. Why the <span class="caps">PIX</span> is unable to even give me a hexdump of the offending parameter in the proposal I&#8217;ll probably never know.</p>
Mon, 18 Jan 2010 09:37:14 +0000https://www.skytale.net/blog/archives/31-guid.htmlAdding new dynamic library dependencies to an existing objecthttps://www.skytale.net/blog/archives/28-Adding-new-dynamic-library-dependencies-to-an-existing-object.html
ComputerSoftwareSolarishttps://www.skytale.net/blog/archives/28-Adding-new-dynamic-library-dependencies-to-an-existing-object.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=280https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=28nospam@example.com (Ralf Ertzinger)
<p>Due to some developing I needed a <a href="http://www.lighttpd.net">lighttpd</a> with mod_magnet enabled. mod_magnet is a module which allows inserting of lua code into the request processing stream. This is a cool feature, and I was pleased to see that</p>
<ul>
<li>lighttpd is on the standard Solaris install <span class="caps">DVD</span></li>
<li>mod_magnet is provided</li>
</ul>
<p>Of course there is this small problem:</p>
<pre>
2009-12-29 23:29:31: (plugin.c.165) dlopen() failed for: /usr/lighttpd/1.4/lib/mod_magnet.soi
ld.so.1: lighttpd: fatal: relocation error: file /usr/lighttpd/1.4/lib/mod_magnet.so:
symbol luaL_checklstring: referenced symbol not found
</pre>
<p>What this means is that there are unresolved symbols remaining in the code after the dynamic loader has done its work, which should not happen. Let&#8217;s look at the dynamic dependencies of the module.</p>
<pre>
$ ldd /usr/lighttpd/1.4/lib/mod_magnet.so
libsendfile.so.1 =&gt; /lib/libsendfile.so.1
libm.so.2 =&gt; /lib/libm.so.2
libresolv.so.2 =&gt; /lib/libresolv.so.2
libnsl.so.1 =&gt; /lib/libnsl.so.1
libsocket.so.1 =&gt; /lib/libsocket.so.1
libc.so.1 =&gt; /lib/libc.so.1
libmd.so.1 =&gt; /lib/libmd.so.1
libmp.so.2 =&gt; /lib/libmp.so.2
libscf.so.1 =&gt; /lib/libscf.so.1
libuutil.so.1 =&gt; /lib/libuutil.so.1
libgen.so.1 =&gt; /lib/libgen.so.1
libsmbios.so.1 =&gt; /usr/lib/libsmbios.so.1
</pre>
<p>Judging from the name of the missing symbol <code>luaL_checklstring</code> it ought to come from some kind of lua library. But the listing above does not show any missing libraries, least of all a lua one.</p>
<p>So what happened?</p>
<p>Somehow (and I have no idea how) Sun managed to build mod_magnet without linking it against the lua libraries. Simply speaking, this is broken.</p>
<p>Fortunately there is a way to fix this. Sun provides a utility called <code>elfedit(1)</code> which allows editing the headers of <span class="caps">ELF</span> files (such as shared libraries). The lua library which provides the missing symbols is called <code>liblua.so</code> (no version information). The type of record in an <span class="caps">ELF</span> header which denotes the dynamic libraries needed is called DT_NEEDED. <code>elfedit(1)</code> takes two parameters: the file to edit, and the file into which to write the modified version.</p>
<p>First show the existing DT_NEEDED records.</p>
<pre>
$ elfedit mod_magnet.so mod_magnet2.so
&gt; dyn:value DT_NEEDED
index tag value
[0] NEEDED 0x5f9 libsendfile.so.1
[1] NEEDED 0x60a libm.so.2
[2] NEEDED 0x614 libresolv.so.2
[3] NEEDED 0x623 libnsl.so.1
[4] NEEDED 0x62f libsocket.so.1
[5] NEEDED 0x5d3 libc.so.1
</pre>
<p>This is basically the same list as above, with liblua.so notably lacking. Now add a new entry:</p>
<pre>
&gt; dyn:value -add -s DT_NEEDED liblua.so
index tag value
[34] NEEDED 0x63e liblua.so
</pre>
<p>Now look at the new table, and save it.</p>
<pre>
&gt; dyn:value DT_NEEDED
index tag value
[0] NEEDED 0x5f9 libsendfile.so.1
[1] NEEDED 0x60a libm.so.2
[2] NEEDED 0x614 libresolv.so.2
[3] NEEDED 0x623 libnsl.so.1
[4] NEEDED 0x62f libsocket.so.1
[5] NEEDED 0x5d3 libc.so.1
[34] NEEDED 0x63e liblua.so
&gt; :write
&gt; :quit
</pre>
<p>Looking at the <code>ldd(1)</code> output, just to be sure.</p>
<pre>
$ ldd ./mod_magnet2.so
libsendfile.so.1 =&gt; /lib/libsendfile.so.1
libm.so.2 =&gt; /lib/libm.so.2
libresolv.so.2 =&gt; /lib/libresolv.so.2
libnsl.so.1 =&gt; /lib/libnsl.so.1
libsocket.so.1 =&gt; /lib/libsocket.so.1
libc.so.1 =&gt; /lib/libc.so.1
liblua.so =&gt; /usr/lib/liblua.so
libmd.so.1 =&gt; /lib/libmd.so.1
libmp.so.2 =&gt; /lib/libmp.so.2
libscf.so.1 =&gt; /lib/libscf.so.1
libdl.so.1 =&gt; /lib/libdl.so.1
libuutil.so.1 =&gt; /lib/libuutil.so.1
libgen.so.1 =&gt; /lib/libgen.so.1
libsmbios.so.1 =&gt; /usr/lib/libsmbios.so.1
</pre>
<p>Now the linker picks up the lua libraries. If the modified mod_magnet.so is now put back into <code>/usr/lighttpd/1.4/lib</code>, lighttpd will start and mod_magnet will work.</p>
<p>Now, this wasn&#8217;t so hard, was it?</p>
Wed, 30 Dec 2009 14:34:59 +0000https://www.skytale.net/blog/archives/28-guid.htmlChanging the rpool disk in Solarishttps://www.skytale.net/blog/archives/27-Changing-the-rpool-disk-in-Solaris.html
ComputerSoftwareSolarishttps://www.skytale.net/blog/archives/27-Changing-the-rpool-disk-in-Solaris.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=270https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=27nospam@example.com (Ralf Ertzinger)
<p>Ever since my storage system was built there was one thing that annoyed me. The 2.5&#8221; hard disk drive that houses the operating system itself was lifted from an old notebook and had the annoying property of parking its heads after five seconds of inactivity. Since <span class="caps">ZFS</span> writes to the disk quite often and regularly this led to a constant cycle of parking and unparking. This was certainly not helping the disk&#8217;s life span, it made an annoying noise and it caused small system hangs whenever the disk had to unpark its heads to read some data.</p>
<p>Under Linux one could use <code>hdparm</code> to instruct the disk not to park its heads, but unfortunately a program mimicking this functionality seems to be absent under Solaris. Thus the plan to replace the disk with a different one which had a more sensible approach to head parking.</p>
<p>This turned out to be an interesting endeavour.</p>
<p>The general problem of replacing the disk holding the rpool is common enough that the excellent <a href="http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk"><span class="caps">ZFS</span> troubleshooting guide</a> has a section on doing this. The general plan of action is as follows:</p>
<ul>
<li>Insert the replacement disk into an available slot</li>
<li>Create a partition spanning the whole disk</li>
<li>Create boot and data slices</li>
<li>Attach the new disk as a mirror to the rpool</li>
<li>Wait for the resilver to finish</li>
<li>Install grub on the new disk</li>
<li>Try to boot from the new disk</li>
<li>Detach the old disk from the rpool</li>
<li>Remove the old disk</li>
</ul>
<p>This is all very sensible, and it all works as advertised. In my case there is, however, a last step not on the list above:</p>
<ul>
<li>Put the new disk on the controller the old disk was attached to</li>
</ul>
<p>The reason for that is that the case I used only has one internal 2.5&#8221; hard disk drive slot. The new disk was prepared using an external <span class="caps">USB</span>-<span class="caps">IDE</span> converter module. This worked just fine, the <span class="caps">BIOS</span> is even able to boot from the <span class="caps">USB</span> disk. As long as the new disk remained attached to the <span class="caps">USB</span> converter everything was fine, even after the old (internal) disk was removed from the rpool. But putting the new disk into the case caused Solaris to roll over and die early in the boot process due to not finding its rpool disk. The error message indicated that it was trying to read the pool from the external <span class="caps">USB</span> device (which no longer existed at this point).</p>
<p>Investigation (and much swearing) turned up that this information was passed by <span class="caps">GRUB</span> to the Solaris kernel.</p>
<p>Solaris uses a patched <span class="caps">GRUB</span> version which understands <span class="caps">ZFS</span> and has some string replacement magic built in. Every (non failsafe) boot entry contains a line similar to this:</p>
<pre>
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
</pre>
<p><code>$ZFS-BOOTFS</code> is replaced by <span class="caps">GRUB</span> with the following information:</p>
<ul>
<li>The name of the root pool (usually rpool) and the number of the dataset that contains the root file system (there may be several BEs)</li>
<li>The device path of the disk this <span class="caps">GRUB</span> instance was read from</li>
</ul>
<p>The actual command line that is executed by <span class="caps">GRUB</span> thus looks something like this:</p>
<pre>
kernel /platform/i86pc/kernel/$ISADIR/unix -B zfs-bootfs=rpool/328 \
bootpath=&quot;/pci@0,0/pci8086,2942@1c,1/pci-ide@0/ide@0/cmdk@0,0:a&quot;
</pre>
<p>The interesting part here is the <code>bootpath</code> parameter. This is the device that Solaris will try to mount the rpool from. Even if the rpool consists of several mirror devices, only one is used in the initial boot process. Where does <span class="caps">GRUB</span> get the device path from? It&#8217;s read from the rpool header, from the disk <span class="caps">GRUB</span> was loaded from. Every <span class="caps">ZFS</span> pool disk contains the device path it was last found under. This usually does not matter much, a <span class="caps">RAIDZ</span> will still mount if you swap the disks around when the machine is off, but the boot process relies on the rpool disks not wandering around. My new disk still had the <span class="caps">USB</span> device path embedded, which <span class="caps">GRUB</span> read and passed to the kernel, which then failed to find the disk.</p>
<p>Fixing this turns out to be easy: boot into failsafe mode with the new disk on its final connector. This will search for rpools and BEs on the system and offer to mount one of them. Pick the right one, reboot. This is enough to get the current (and correct) device path embedded into the rpool. The next (non failsafe) boot will thus pick up the correct device path and allow the boot to continue.</p>
<p>The moral of an afternoon spent in the innards of the Solaris boot process is thus: do not swap your rpool disk around.</p>
Fri, 25 Dec 2009 15:01:53 +0000https://www.skytale.net/blog/archives/27-guid.htmlCreating a write only directory with SAMBA and ZFShttps://www.skytale.net/blog/archives/26-Creating-a-write-only-directory-with-SAMBA-and-ZFS.html
ComputerSoftwareSolarishttps://www.skytale.net/blog/archives/26-Creating-a-write-only-directory-with-SAMBA-and-ZFS.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=260https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=26nospam@example.com (Ralf Ertzinger)
<p>One of the intended uses of my OpenSolaris storage server was to serve as a <a href="http://www.samba.org"><span class="caps">SAMBA</span></a> accessible data store. Part of that role was the wish to have an <code>incoming</code> directory modeled after similar directories found on many <span class="caps">FTP</span> servers. In detail this meant a share with the following properties:</p>
<ul>
<li>Readable for everyone (including unauthenticated users, i.e. guests)</li>
<li>Everyone can create new files and directories on the share</li>
<li>Only certain users can delete files and directories from the share</li>
</ul>
<p>So everyone can add files to the share, but removing them requires special privileges.</p>
<p>It turns out that this is impossible to do with normal <span class="caps">UNIX</span> file system permissions, as for <span class="caps">UNIX</span> creating a file (which is a write operation on a directory) is much the same as deleting one (which is also a write operation on a directory).</p>
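<p>This limitation is easy to demonstrate with a toy directory (the paths are throwaway examples): whatever the directory&#8217;s write bit allows or denies applies to creating and deleting alike, so the two verdicts can never differ.</p>

```shell
#!/bin/bash
# Toy demonstration: with plain UNIX permissions one directory write bit
# governs both creating and deleting files, so both operations always
# get the same answer.
DIR=$(mktemp -d)
touch "${DIR}/victim"
chmod 0555 "${DIR}"   # remove the write bit from the directory
touch "${DIR}/newfile" 2>/dev/null && CREATE=allowed || CREATE=denied
rm -f "${DIR}/victim" 2>/dev/null && DELETE=allowed || DELETE=denied
chmod 0755 "${DIR}"
echo "create: ${CREATE}, delete: ${DELETE}"
```

<p>Run as an unprivileged user both operations are denied; with the write bit restored both are allowed. Separating the two is exactly what the richer permission language is needed for.</p>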
<p>Fortunately OpenSolaris supports a much more powerful file operation permission language in the form of NFSv4 permissions.</p>
<p>It has been said that the NFSv4 permission system has been modeled after a smudged copy of the Windows <span class="caps">NTFS</span> permission system, and there is certainly merit to that claim. That is not a bad thing, though: the <span class="caps">NTFS</span> permission system is much more expressive than the standard <span class="caps">UNIX</span> system, as it has more actions (besides writing, reading and executing it also knows about deleting, for example), can support a large number of principals with different permissions and can actively deny an action (which is different from &#8220;not allowing&#8221;).</p>
<h3>NFSv4 permissions</h3>
<p>The NFSv4 system knows about the following actions:</p>
<table>
<tr>
<th>Action </th>
<th>Description for files </th>
<th>Description for directories </th>
</tr>
<tr>
<td> read data </td>
<td> Read file contents </td>
<td> List directory contents </td>
</tr>
<tr>
<td> write data </td>
<td> Write file contents (anywhere in the file) </td>
<td> Create new files </td>
</tr>
<tr>
<td> execute </td>
<td> Execute file </td>
<td> Change into directory </td>
</tr>
<tr>
<td> append </td>
<td> Append data to file </td>
<td> Create new directories </td>
</tr>
<tr>
<td> delete </td>
<td> Delete the file </td>
<td> &#8211; </td>
</tr>
<tr>
<td> delete child </td>
<td> &#8211; </td>
<td> Delete a file in the directory </td>
</tr>
<tr>
<td> read/write attributes </td>
<td> Read/write basic attributes </td>
<td> (same as file) </td>
</tr>
<tr>
<td> read/write xattrs </td>
<td> Read/write extended attributes </td>
<td> (same as file) </td>
</tr>
<tr>
<td> read/write <span class="caps">ACL</span> </td>
<td> Read/write <span class="caps">ACL</span>s </td>
<td> (same as file) </td>
</tr>
<tr>
<td> change owner </td>
<td> Change the owner </td>
<td> (same as file) </td>
</tr>
<tr>
<td> sync </td>
<td> Use synchronous file access </td>
<td> &#8211; </td>
</tr>
</table>
<p>NFSv4 also contains a mechanism to specify actions that apply to a file or directory, and actions that are inherited to child objects of a directory (i.e. files or subdirectories). This allows very fine grained control of file system access.</p>
<p>Of special interest here are the bits about writing, appending and deleting files and folders.</p>
<p>The <span class="caps">ACL</span>s are maintained in a list of entries, each entry mapping a username/action pair to a verdict (allow/deny). Each access is matched against each entry in turn, and the verdict is taken from the first entry to match. So the order of entries is important.</p>
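<p>This first-match rule can be sketched with a toy evaluator. The entry list below mirrors the spirit of an <span class="caps">ACL</span> like the one on the <code>incoming</code> share (a user with delete rights listed before a general deny), but it is purely illustrative, not real NFSv4 syntax:</p>

```shell
#!/bin/bash
# Toy first-match evaluation: walk the entry list top to bottom and take
# the verdict of the first entry whose principal and action both match.
# Entries and the trailing default deny are illustrative only.
check() {
    local user=$1 action=$2 entry
    for entry in "sun:delete:allow" "everyone:delete:deny" "everyone:append:allow"; do
        local e_user=${entry%%:*} rest=${entry#*:}
        local e_action=${rest%%:*} e_verdict=${rest#*:}
        if [ "${e_user}" = "${user}" ] || [ "${e_user}" = "everyone" ]; then
            if [ "${e_action}" = "${action}" ]; then
                echo "${e_verdict}"
                return
            fi
        fi
    done
    echo "deny"   # no entry matched
}
check sun delete     # the user entry matches before the everyone deny
check guest delete
check guest append
```

<p>Swapping the first two entries would deny <code>sun</code> the delete as well, which is why the allow entries for the privileged user have to come first in the real list, too.</p>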
<p>Solaris&#8217; <code>ls</code> has two extensions to list those <span class="caps">ACL</span>s: <code>-v</code> for a verbose listing and <code>-V</code> for a concise listing. The format used by <code>-V</code> can be passed to <code>chmod</code> to change <span class="caps">ACL</span>s.</p>
<p>The permissions corresponding to the list of requirements stated above are as follows (<code>/tank/share/incoming</code> is the directory associated with the <code>incoming</code> share in <code>smb.conf</code>):</p>
<pre>
# ls -lVd /tank/share/incoming
drwxrwxrwx+ 5 root root 6 Dec 12 16:49 /tank/share/incoming
user:sun:-w--dD--------:fdi----:allow
user:sun:-w--dD--------:-------:allow
everyone@:-w--dD--------:f-i----:deny
everyone@:----dD--------:-di----:deny
everyone@:----dD--------:-------:deny
everyone@:rwxp--a-R-c--s:-di----:allow
everyone@:r-xp--a-R-c--s:f-i----:allow
everyone@:rwxp--a-R-c--s:-------:allow
#
</pre>
<p>There are two kinds of entries in this list. Those with an <code>i</code> in the second part of the action list and those without. The entries with an <code>i</code> are so called &#8220;inherit only&#8221; entries. They do not apply to the file or directory they are associated with, but are only inherited to new child entries. The other entries apply to the file/directory they are associated with. </p>
<p>This list can be read in three blocks:</p>
<p>The first block consists of the first two lines. The first line specifies that the right to delete files (<code>d</code>), delete child entries (<code>D</code>) and create new files/write file content (<code>w</code>) for the user named <code>sun</code> is inherited to new files and directories (<code>fdi</code>). This makes sure that this user can always remove files and directories, and overwrite existing file content in newly created files. The second line applies the same rights to the <code>incoming</code> directory itself.</p>
<p>The second block consists of lines 3 to 5 and contains only deny statements. They apply to <code>everyone@</code>, which means exactly what it says on the box. Lines 3 and 4 again deal with rights that are to be inherited to child objects, but the rights inherited to files and directories are different this time. Files inherit a deny to write anywhere in the file (<code>w</code>) and file deletion (<code>dD</code>). Directories just inherit the deletion part, otherwise new files could not be created in subdirectories (which needs the <code>w</code> right). The <code>incoming</code> directory itself gets the &#8220;no deletion&#8221; treatment as well.</p>
<p>The third block consists of the last three lines and restores some rights to non privileged users. Directories inherit the right to be read (<code>r</code>) and changed into (<code>x</code>), new files and subdirectories can be created in them (<code>wp</code>), and attributes of all sorts can be read (<code>aRc</code>). We also allow synchronous file access (<code>s</code>). Files are much the same, except that the write anywhere right is missing. Not that it would matter much if that were allowed here, since it has been explicitly denied earlier. Note that the right to append to a file (<code>p</code>) is explicitly allowed. The rights for the <code>incoming</code> directory itself (last line) again match those inherited to subdirectories.</p>
<p>Let&#8217;s see if that works out.</p>
<pre>
$ id
uid=60003(smbnobody) gid=60003(smbnobody)
$ touch /tank/share/incoming/foo
$ ls -V /tank/share/incoming/foo
-r-xr-xr-x+ 1 smbnobody smbnobody 0 Dec 12 18:33 /tank/share/incoming/foo
user:sun:-w--dD--------:------I:allow
everyone@:-w--dD--------:------I:deny
everyone@:r-xp--a-R-c--s:------I:allow
</pre>
<p>The unprivileged user <code>smbnobody</code> (<span class="caps">SMB</span> guest access is mapped to this uid) can create a new file in the incoming directory, and the file inherits the rights mentioned above (<code>I</code> signifies an inherited right).</p>
<pre>
$ cat /etc/passwd &gt; /tank/share/incoming/foo
bash: /tank/share/incoming/foo: Permission denied
$ cat /etc/passwd &gt;&gt; /tank/share/incoming/foo
$
</pre>
<p>The user cannot overwrite the file (even though it is empty), but he can append to it.</p>
<pre>
$ rm /tank/share/incoming/foo
rm: /tank/share/incoming/foo: override protection 555 (yes/no)? y
rm: /tank/share/incoming/foo not removed: Permission denied
$
</pre>
<p>Deletion is also denied. Good.</p>
<pre>
$ id
uid=500(sun) gid=100(users)
$ cat /etc/passwd &gt; /tank/share/incoming/foo
$ rm /tank/share/incoming/foo
$
</pre>
<p>However, the privileged user <code>sun</code> can overwrite and delete the file.</p>
<h3>Samba configuration</h3>
<p>Samba also needs configuration to recognize and use the extended permission system. The following is an excerpt from <code>smb.conf</code>, describing the <code>incoming</code> share:</p>
<pre>
[incoming]
path = /tank/share/incoming
writable = yes
guest ok = yes
browseable = yes
public = yes
acl check permissions = False
ea support = yes
store dos attributes = no
map readonly = no
map archive = no
map system = no
map hidden = no
vfs objects = zfsacl
nfs4: mode = simple
nfs4: acedup = dontcare
</pre>
<p>This configures Samba to use extended <span class="caps">ACL</span>s using the <span class="caps">ZFS</span> (NFSv4) permission system.</p>
Sat, 12 Dec 2009 17:57:28 +0000https://www.skytale.net/blog/archives/26-guid.htmlAccess problems with Apache server-statushttps://www.skytale.net/blog/archives/25-Access-problems-with-Apache-server-status.html
ComputerSoftwarehttps://www.skytale.net/blog/archives/25-Access-problems-with-Apache-server-status.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=250https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=25nospam@example.com (Ralf Ertzinger)
<p>Since I have twice now spent considerable time on debugging this:</p>
<p>If you have configured an Apache <code>server-status</code> handler, but retrieving the <span class="caps">URL</span> bound to this handler results in access denied even though there are no access restrictions configured on the container (a bad idea, by the way), or the connecting IP is allowed access, make sure that the webserver can access its document root.</p>
<p>This may seem obvious, but if the Apache is configured as a reverse proxy there may not be any files in the document root, because all content is created by the backend servers (or virtual handlers, like <code>server-status</code>). Nonetheless the Apache server must be able to change into the document root, or the virtual handlers will fail (reverse proxy access will work, however).</p>
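<p>A minimal sketch of such a setup (Apache 2.2 syntax; the network and the path are examples): the handler itself may be reachable, but the <code>DocumentRoot</code> must still exist and be traversable by the Apache user.</p>

```apache
# server-status as a virtual handler, limited to an example network
&lt;Location /server-status&gt;
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 192.0.2.
&lt;/Location&gt;

# On a reverse proxy the DocumentRoot may contain no files at all,
# but the Apache user still needs the x bit on the directory:
DocumentRoot /var/www/empty
```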
Wed, 02 Dec 2009 13:17:15 +0000https://www.skytale.net/blog/archives/25-guid.htmlManual IMAPhttps://www.skytale.net/blog/archives/23-Manual-IMAP.html
ComputerSoftwarehttps://www.skytale.net/blog/archives/23-Manual-IMAP.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=230https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=23nospam@example.com (Ralf Ertzinger)
<p>From time to time I am in the unfortunate situation of having to manually communicate with an <span class="caps">IMAP</span> server (in other words: reading mail via telnet).</p>
<p>Due to the nature of <span class="caps">IMAP</span> this is not remotely as simple as reading mail via telnet using the POP3 protocol, however, as <span class="caps">IMAP</span> is a very rich and powerful protocol with a quirky syntax.</p>
<p>As I tend to forget the commands for the most important tasks it might be a good idea to write them down.</p>
<p>Some definitions:</p>
<p><span class="caps">IMAP</span> handles <strong>messages</strong>. Messages live in <strong>folders</strong>, which can have <strong>subfolders</strong>. Folders are separated by <strong>separators</strong>. Multiple groups of folders can exist, those groups are called <strong>namespaces</strong>. At least one namespace always exists. Within every folder each message has two <strong>identifiers</strong> (both are positive integers). The first (the <strong>sequence number</strong>) is valid only as long as the current folder is <strong>selected</strong> (or open, in other words), and ranges from 1 to N, N being the number of messages in the folder. The second (the <strong><span class="caps">UID</span></strong>) does not change from one selection to the next, and usually not between connects. Ideally, the <span class="caps">UID</span> for a message never changes once it has been assigned. The <span class="caps">IMAP</span> server is free to assign a new <span class="caps">UID</span> to a message, but it must tell the client if it does so.</p>
<p>Each <strong>request</strong> from a client starts with a <strong>tag</strong>, which is a group of characters consisting of letters, numbers and the dot (&#8221;.&#8221;). The server <strong>reply</strong> consists of at least one line, but may consist of several. In the latter case, each line starts with an asterisk (*), except for the last, which starts with the tag chosen by the client. This signals the completion of the command. If the server reply is a single line, only the line starting with the client tag is sent. The client may reuse tags if it wishes. The protocol is not synchronous: the client can send several requests without waiting for the server to complete the preceding command.</p>
<p>Unless the client or the server indicate otherwise the default character set for <span class="caps">IMAP</span> is UTF7 (which, as long as you keep to the first 128 characters of the <span class="caps">ASCII</span> character set, is exactly the same as <span class="caps">ASCII</span> or UTF8).</p>
<p>Requests and replies consist of a space separated list of keywords and <strong>strings</strong>. Strings can be written in two forms, <strong>quoted</strong> and <strong>literal</strong>. Quoted strings can consist of any 7-bit-characters, except <code>CR</code> and <code>LF</code>, enclosed by <code>&quot;</code>. If the quoted string contains the character <code>&quot;</code> itself it must be quoted as <code>\&quot;</code>.</p>
<p>Literal strings start with the number of characters in the string, enclosed by curly braces, and a <code>CRLF</code>. The string characters then follow.</p>
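<p>A quick sketch of producing both string forms from a shell (the folder name is an arbitrary example):</p>

```shell
#!/bin/bash
# Sketch: render an example folder name as an IMAP quoted string and as
# the prefix of a literal string. The name itself is arbitrary.
FOLDER='A "quoted" folder'
# Quoted form: backslash-escape embedded double quotes
ESCAPED=${FOLDER//\"/\\\"}
QUOTED="\"${ESCAPED}\""
# Literal form: character count in curly braces, then CRLF, then the
# raw bytes (for pure ASCII the character count equals the byte count)
LITERAL="{${#FOLDER}}"
echo "quoted:  ${QUOTED}"
echo "literal: ${LITERAL}"
```

<p>With a plain literal the client sends the <code>{17}</code> prefix, waits for the server&#8217;s <code>+</code> continuation prompt, and only then sends the 17 bytes; the <code>LITERAL+</code> extension advertised in the greeting below removes that extra round trip.</p>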
<p>That ought to be enough to make sense of the following:</p>
<h3>Login</h3>
<p>Assuming the server supports plain text logins (indicated by <code>AUTH=LOGIN</code> in the server greeting):</p>
<pre>
$ telnet mailserver 143
[...]
* OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID AUTH=LOGIN AUTH=PLAIN AUTH=CRAM-MD5 SASL-IR]
mailserver Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_4.3 server ready
foo login user password
foo OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID LOGINDISABLED ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS
NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ
THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT
LIST-SUBSCRIBED X-NETSCAPE URLAUTH] User logged in
</pre>
<p>In this example the login user name was <code>user</code> and the password was <code>password</code>. The tag chosen by the client (i.e. the person using telnet) was <code>foo</code>, which was echoed by the server in the login response. From now on the tag used will be the dot (&#8221;.&#8221;), unless specified otherwise.</p>
<h3>Namespaces</h3>
<p>Several groups of folders can exist; these groups are called namespaces. One use is the implementation of shared folders such that the private folders of a user live in one namespace, and the shared folders in another. To list the available namespaces:</p>
<pre>
. NAMESPACE
* NAMESPACE ((&quot;INBOX.&quot; &quot;.&quot;)) ((&quot;user.&quot; &quot;.&quot;)) ((&quot;&quot; &quot;.&quot;))
. OK Completed
</pre>
<p>This user has access to three namespaces: <code>INBOX</code>, <code>user</code> and a namespace without a name. The latter is the default namespace. The dot (&#8221;.&#8221;) after the name is the separator used in this namespace.</p>
<h3>Listing folders</h3>
<p>Listing folders within a namespace requires the namespace name and a pattern describing the wanted folder names. The pattern supports wildcards, most notably &#8220;*&#8221; (list subfolders recursively) and &#8220;%&#8221; (list subfolders, but not recursively).</p>
<pre>
. LIST &quot;&quot; &quot;INBOX.%&quot;
* LIST (\HasNoChildren) &quot;.&quot; &quot;INBOX.Folder1&quot;
* LIST (\HasNoChildren) &quot;.&quot; &quot;INBOX.Folder2&quot;
* LIST (\HasChildren) &quot;.&quot; &quot;INBOX.Folder3&quot;
. OK Completed
</pre>
<p>This <code>INBOX</code> folder has three subfolders: <code>Folder1</code> and <code>Folder2</code>, both of which have no subfolders, as indicated by the <code>\HasNoChildren</code> flag, and one (<code>Folder3</code>) which has. Because of the &#8220;%&#8221; wildcard the subfolders of <code>Folder3</code> are not shown in this listing.</p>
<p>In general, it is usually not a good idea to list folders using &#8220;*&#8221;. This may return a list containing potentially thousands of folders (think of systems redistributing Usenet news via <span class="caps">IMAP</span>). Instead, use &#8220;%&#8221; to descend into the folders considered interesting.</p>
<h3>Selecting folders</h3>
<p>In order to read messages the folder containing them must be selected first. This requires the full folder name as returned by <code>LIST</code>.</p>
<pre>
. SELECT &quot;INBOX&quot;
* FLAGS (\Answered \Flagged \Draft \Deleted \Seen NonJunk Junk $NotJunk $Junk $Forwarded)
* OK [PERMANENTFLAGS (\Answered \Flagged \Draft \Deleted \Seen NonJunk Junk $NotJunk $Junk $Forwarded \*)]
* 5966 EXISTS
* 0 RECENT
* OK [UIDVALIDITY 1136990532]
* OK [UIDNEXT 12498]
* OK [NOMODSEQ] Sorry, modsequences have not been enabled on this mailbox
. OK [READ-WRITE] Completed
</pre>
<p>This folder contains 5966 messages (<code>5966 EXISTS</code>), none of which arrived since the folder was last checked (<code>0 RECENT</code>). The <code>UIDVALIDITY</code> parameter is an integer describing the validity of the <span class="caps">UID</span> numbers assigned to the messages. As long as this number does not change, the mapping from message to <span class="caps">UID</span> has not changed.</p>
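<p>The untagged replies are regular enough to pick apart with a few patterns. A Python sketch (again, purely illustrative, not part of any IMAP library) that extracts the interesting numbers from the <code>SELECT</code> response above:</p>

```python
import re

def parse_select(lines):
    # Pull message counts and UID bookkeeping out of untagged SELECT replies.
    info = {}
    for line in lines:
        m = re.match(r"\* (\d+) (EXISTS|RECENT)$", line)
        if m:
            info[m.group(2)] = int(m.group(1))
        m = re.match(r"\* OK \[(UIDVALIDITY|UIDNEXT) (\d+)\]", line)
        if m:
            info[m.group(1)] = int(m.group(2))
    return info

reply = [
    "* 5966 EXISTS",
    "* 0 RECENT",
    "* OK [UIDVALIDITY 1136990532]",
    "* OK [UIDNEXT 12498]",
]
print(parse_select(reply))
# -> {'EXISTS': 5966, 'RECENT': 0, 'UIDVALIDITY': 1136990532, 'UIDNEXT': 12498}
```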
<h3>Finding messages</h3>
<p>Unlike POP3 servers, <span class="caps">IMAP</span> servers actually try to parse the messages stored in the folders in order to extract some information from the headers, such as sender address, recipient address, message ID and general message structure (such as attachments). The upshot of this is that the server can search for messages having certain properties (for example, all messages by a certain sender) without the client having to download all messages and do the search itself. There are two search commands (<code>SEARCH</code> and <code>UID SEARCH</code>) which differ in the results they return. The first command returns sequence numbers, the second returns message <span class="caps">UID</span>s.</p>
<p>Multiple search conditions can be used in one search request; these are <span class="caps">AND</span>ed (i.e., all have to be satisfied).</p>
<p>A small table of possible search conditions:</p>
<table>
<tr>
<th>Query </th>
<th>Looking for </th>
<th>Example </th>
</tr>
<tr>
<td> <code>FROM &quot;&lt;mailaddress&gt;&quot;</code> </td>
<td> Mail from that sender </td>
<td> <code>FROM &quot;user@example.org&quot;</code> </td>
</tr>
<tr>
<td> <code>TO &quot;&lt;mailaddress&gt;&quot;</code> </td>
<td> Mail to that recipient </td>
<td> <code>TO &quot;user@example.org&quot;</code> </td>
</tr>
<tr>
<td> <code>SINCE &lt;date&gt;</code> </td>
<td> Mail received after this date </td>
<td> <code>SINCE 1-Nov-2009</code> </td>
</tr>
<tr>
<td> <code>BEFORE &lt;date&gt;</code> </td>
<td> Mail received before this date </td>
<td> <code>BEFORE 1-Nov-2009</code> </td>
</tr>
<tr>
<td> <code>DELETED</code> </td>
<td> Mails marked as deleted </td>
<td> <code>DELETED</code> </td>
</tr>
<tr>
<td> <code>SUBJECT &lt;string&gt;</code> </td>
<td> Mails containing string in the subject </td>
<td> <code>SUBJECT &quot;Proposal&quot;</code> </td>
</tr>
<tr>
<td> <code>BODY &lt;string&gt;</code> </td>
<td> Mails containing string in the body </td>
<td> <code>BODY &quot;Hello Greg&quot;</code> </td>
</tr>
<tr>
<td> <code>NOT &lt;key&gt;</code> </td>
<td> Mails which do not match the key </td>
<td> <code>NOT FROM &quot;user@example.org&quot;</code> </td>
</tr>
<tr>
<td> <code>OR &lt;key1&gt; &lt;key2&gt;</code> </td>
<td> Mails which match either of key1 or key2 </td>
<td> <code>OR FROM &quot;user@example.org&quot; FROM &quot;user2@example.org&quot;</code> </td>
</tr>
</table>
<p>There are quite a few more of these; <a href="http://www.faqs.org/rfcs/rfc2060.html">RfC 2060</a> lists all possible options. But the ones above are probably the most commonly used.</p>
<p>Please be aware that the full text searches (<code>TEXT</code> and <code>BODY</code>) can be prohibitively expensive if the server does not keep a full text search database of the messages. Getting an answer to such a query may take a very long time.</p>
<pre>
. SEARCH FROM &quot;user@example.org&quot; BEFORE 1-Nov-2009
* SEARCH 5 10 456
. OK Completed
</pre>
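<p>Since conditions combine by simple juxtaposition (<span class="caps">AND</span>) and the <code>OR</code> / <code>NOT</code> prefix operators, queries can be composed as plain strings. A small Python sketch (illustrative only; the helper names are made up):</p>

```python
def all_of(*keys):
    # Juxtaposed keys are ANDed by the server.
    return " ".join(keys)

def any_of(key1, key2):
    # OR is a prefix operator taking exactly two keys.
    return "OR %s %s" % (key1, key2)

def negate(key):
    return "NOT %s" % key

query = all_of('FROM "user@example.org"', "BEFORE 1-Nov-2009")
print(query)  # FROM "user@example.org" BEFORE 1-Nov-2009
```

<p>The resulting string is exactly what follows <code>SEARCH</code> in the transcript above.</p>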
<h3>Fetching messages</h3>
<p>Now that <code>SEARCH</code> has turned up some messages it might be a good idea to take a look at the contents. The <code>FETCH</code> command takes a list of sequence numbers or <span class="caps">UID</span>s (as with <code>SEARCH</code> there are two variants, <code>FETCH</code> and <code>UID FETCH</code>) and a list of the information we are interested in. The most commonly used parts are:</p>
<table>
<tr>
<th>Part name </th>
<th>Part description </th>
</tr>
<tr>
<td> <code>BODY[TEXT]</code> </td>
<td> Just the mail body, without the headers </td>
</tr>
<tr>
<td> <code>BODY[HEADER]</code> </td>
<td> The mail headers </td>
</tr>
<tr>
<td> <code>BODY[HEADER.FIELDS (&lt;list&gt;)]</code> </td>
<td> Just the header fields indicated in list </td>
</tr>
<tr>
<td> <code>BODY[]</code> </td>
<td> The entire mail text, header and body </td>
</tr>
<tr>
<td> <code>BODY.PEEK</code> </td>
<td> Works as <code>BODY</code> does, but does not mark the mail as seen </td>
</tr>
<tr>
<td> <code>FLAGS</code> </td>
<td> Flags set for the message </td>
</tr>
<tr>
<td> <code>UID</code> </td>
<td> The <span class="caps">UID</span> of the message </td>
</tr>
</table>
<p>As above, RfC 2060 has all the gory details.</p>
<pre>
. FETCH 5 (FLAGS BODY[HEADER.FIELDS (To)])
* 5 FETCH (FLAGS (\Seen) BODY[HEADER.FIELDS (To)] {24}
To: user@example.com
)
. OK Completed
</pre>
<h3>Deleting messages</h3>
<p>Deleting messages in <span class="caps">IMAP</span> is a bit tricky, as there is no explicit delete command. Instead, a flag is set on the message marking it as deleted. This, by itself, does nothing to get the message removed. Only when a special command is called are all messages in the current folder marked as deleted actually removed<sup id="fnrev1479862195c46678e782c5" class="footnote"><a href="#fn1479862195c46678e782c5">1</a></sup>.</p>
<pre>
. UID SEARCH ALL
* 1 EXISTS
* 1 RECENT
* SEARCH 1814
. OK Completed
. UID STORE 1814 +FLAGS (\Deleted)
* 1 FETCH (FLAGS (\Recent \Deleted \Seen) UID 1814)
. OK Completed
. EXPUNGE
* 1 EXPUNGE
* 0 EXISTS
* 0 RECENT
. OK Completed
. UID SEARCH ALL
* SEARCH
. OK Completed
</pre>
<p>The above is executed in a folder containing just a single message (see the result of the <code>UID SEARCH ALL</code>). The flag <code>\Deleted</code> is then added to the flag list of the message (<code>UID STORE 1814 +FLAGS (\Deleted)</code>). The <code>STORE</code> command returns the new flag list. The <code>EXPUNGE</code> command then removes the message.</p>
<h3>Leaving <span class="caps">IMAP</span></h3>
<p>When finished with the session the last thing to do is to leave:</p>
<pre>
. logout
Connection closed by foreign host
$
</pre>
<p id="fn1479862195c46678e782c5" class="footnote"><sup>1</sup> The manual page for the rather excellent perl module <code>Mail::IMAPClient</code> had the following to say about this:</p>
<blockquote>
<p> In case you’re curious, expunging a folder deletes the messages that you thought were already deleted via &#8220;delete_message&#8221; but really weren&#8217;t, which means you have to use a method that doesn&#8217;t exist to delete messages that you thought didn&#8217;t exist. (Seriously, I&#8217;m not making any of this stuff up.)</p>
</blockquote>
<p>Unfortunately this gem has disappeared from newer versions of the manual page.</p>
Thu, 05 Nov 2009 19:45:25 +0000https://www.skytale.net/blog/archives/23-guid.htmlSSL cipher settingshttps://www.skytale.net/blog/archives/22-SSL-cipher-settings.html
ComputerSoftwarehttps://www.skytale.net/blog/archives/22-SSL-cipher-settings.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=228https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=22nospam@example.com (Ralf Ertzinger)
<h3>The Problem</h3>
<p>Securing network services with <span class="caps">SSL</span> is, in general, a good idea, if you can spare the <span class="caps">CPU</span> cycles. Personal data in particular should always be protected while in transit over the network. But it may not be enough to simply enable <span class="caps">SSL</span> in the service (be it Apache, Lighttpd, Cyrus <span class="caps">IMAPD</span> or something else) to get a reasonably secure connection.</p>
<p><span class="caps">SSL</span> is an umbrella term for a wide collection of protocols and crypto algorithms. There are at least three protocol suites in use (SSLv2, SSLv3 and TLSv1), which between them support dozens of different crypto algorithms of different strengths. Not all of those are still suitable for serious use today.</p>
<p>A list of the ciphers supported by the popular <a href="http://openssl.org">OpenSSL library</a>, which is used by many projects to handle <span class="caps">SSL</span>, can be obtained with the following command:</p>
<pre>
$ openssl ciphers -v &#39;ALL:COMPLEMENTOFALL&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
$
</pre>
<p>On my notebook (running Fedora 11) this produces a list of 62 ciphers. The number of ciphers supported changes with the version of OpenSSL, so other systems may display a different list.</p>
<p>During an <span class="caps">SSL</span> handshake between a client and a server the cipher to use is negotiated between the two machines. In practical terms this means that the client sends a list of the ciphers it is able and willing to use to the server; the server compares this list with its own list of supported ciphers and, if a cipher supported by both sides is found, returns its choice to the client.</p>
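<p>The selection step can be modelled in a few lines. A simplified Python sketch (real implementations also take protocol versions into account, and may honour client rather than server preference order):</p>

```python
def negotiate(server_ciphers, client_ciphers):
    # Return the first cipher from the server's preference list that the
    # client also offered, or None if there is no overlap (handshake failure).
    client_set = set(client_ciphers)
    for cipher in server_ciphers:
        if cipher in client_set:
            return cipher
    return None

server = ["DHE-RSA-AES256-SHA", "AES256-SHA", "RC4-SHA"]
client = ["RC4-SHA", "AES256-SHA"]
print(negotiate(server, client))  # AES256-SHA
```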
<h3>Defaults</h3>
<p>Unless something else is configured, a server using OpenSSL uses the &#8220;<span class="caps">DEFAULT</span>&#8221; group of ciphers. The content of this group can also change between versions of OpenSSL. The value for the installed version can be queried:</p>
<pre>
$ openssl ciphers -v &#39;DEFAULT&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
$
</pre>
<p>This list is shorter than the list of all ciphers above, containing 44 ciphers on my notebook. This list is not entirely nonsensical. It does not contain ciphers without encryption (yes, that is a valid mode of operation for <span class="caps">SSL</span>), it does not contain ciphers without authentication (which would allow for Man-in-the-middle attacks). It does, however, contain ciphers whose strength in this day and age must be questioned. These include the so called &#8220;export&#8221; ciphers.</p>
<p>These ciphers stem from a time when it was illegal to export software from the United States which supported strong encryption. So software supporting encryption (for example web browsers, like the venerable Netscape Navigator) destined for export only supported watered down versions of the strong encryption variants, mostly by supporting shorter keys. Fortunately it is no longer illegal to export strong crypto from the United States, and hasn&#8217;t been for years, but for compatibility reasons OpenSSL is still willing to negotiate these weak ciphers with a client.</p>
<p>Another weak candidate is the <a href="http://en.wikipedia.org/wiki/Data_Encryption_Standard"><span class="caps">DES</span> algorithm</a>. It was made a standard in 1976 (which is an eternity ago in IT terms). Although it was never cryptographically broken, its key length of 56 bits made it increasingly vulnerable to brute force attacks as faster <span class="caps">CPU</span>s became available. Since the <a href="http://www.eff.org">Electronic Frontier Foundation</a> demonstrated a custom-built <span class="caps">DES</span> cracker in 1998, built for $250,000 and able to brute-force a <span class="caps">DES</span> key in under two days, <span class="caps">DES</span> has been effectively dead. But, for compatibility reasons, OpenSSL is, by default, willing to negotiate <span class="caps">DES</span> as a cipher.</p>
<p>OpenSSL can be told which ciphers to offer in an <span class="caps">SSL</span> negotiation, and thankfully most programs using OpenSSL offer configuration statements so the admin can change the default settings.</p>
<h3>Selections</h3>
<p>Which ciphers should be used then? Let&#8217;s start with all the ciphers supported by the SSLv3/TLSv1 cipher suite (which every program offering <span class="caps">SSL</span> should support; the use of SSLv2 is strongly discouraged due to vulnerabilities). And we only want ciphers which offer high security (which in OpenSSL terms means more than 128 bits key length, plus some ciphers with 128 bit keys). To be on the safe side we also explicitly disable SSLv2 ciphers, so they cannot be reintroduced later:</p>
<pre>
$ openssl ciphers -v &#39;TLSv1+HIGH:!SSLv2&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
$
</pre>
<p>25 ciphers match this list, but it also contains ciphers without authentication. These have to go, along with all ciphers without encryption (there should not be any, but better safe than sorry):</p>
<pre>
$ openssl ciphers -v &#39;TLSv1+HIGH:!SSLv2:!aNULL:!eNULL&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
$
</pre>
<p>20 remain. It&#8217;s my personal preference to disable ciphers based on triple-<span class="caps">DES</span> (3DES), so these are removed, too. There is no technical reason for this; 3DES is still considered secure.</p>
<p>Finally, the remaining ciphers are sorted by strength, the most secure first, which will make OpenSSL prefer those.</p>
<pre>
$ openssl ciphers -v &#39;TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
</pre>
<p>On my notebook 14 ciphers remain. For comparison, on my web server (running CentOS 5) this selection only produces 6 ciphers, due to an older version of OpenSSL.</p>
<p>There are, however, two problems with this list. First, it no longer contains the export or single <span class="caps">DES</span> ciphers (which was kind of the point). This means that <span class="caps">SSL</span> services secured with this selection are no longer available to <span class="caps">SSL</span> clients which only support export grade ciphers. This is a good thing, as these clients are insecure and need to be replaced with something more recent. Depending on the details of the service this option may not be available, though. Please check whether these old ciphers must be supported before turning them off.</p>
<p>The second problem is Windows. Specifically, Windows versions up to and including Windows XP. The crypto libraries shipped with these versions do not support newer crypto algorithms (like <span class="caps">AES</span>), so there is no overlap between the set of algorithms supported by the server and those supported by the client. These crypto libraries are primarily used by Internet Explorer, Outlook and Outlook Express, so these programs on Windows XP and earlier will not be able to negotiate an <span class="caps">SSL</span> connection to a web or mail server. Other web browsers and mail clients (like Firefox and Thunderbird) usually ship with their own crypto libraries which do support modern algorithms, and are not affected. The system crypto libraries in Windows Vista and Windows 7 are also not affected.</p>
<p>If support for older Windows versions cannot be dropped (likely), the cipher list needs to be extended by some RC4 ciphers (which Windows does support):</p>
<pre>
$ openssl ciphers -v &#39;TLSv1+HIGH:!SSLv2:RC4+MEDIUM:!aNULL:!eNULL:!3DES:@STRENGTH&#39;
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
...
$
</pre>
<p>This brings the number of ciphers up to 19, the new RC4 ciphers are added at the end of the sorted list.</p>
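<p>The same cipher-string syntax is understood by most software that links against OpenSSL. As a quick way to experiment without setting up a server, Python&#8217;s <code>ssl</code> module (an aside, not something this article otherwise uses) accepts such strings; the exact ciphers returned depend on the local OpenSSL version, and newer versions reject obsolete keywords like <code>SSLv2</code>, so a trimmed-down string is used here:</p>

```python
import ssl

# Build a client context and restrict it with an OpenSSL cipher string.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!3DES:@STRENGTH")

names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), "ciphers enabled")
```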
<h3>Configuration</h3>
<p>Now that the cipher list is complete the various services that use <span class="caps">SSL</span> need to be configured to use it. Instructions how to do this can be found in the documentation, examples for some services are below.</p>
<h4>Exim</h4>
<p>Add the following line to the global (first) configuration section and restart Exim:</p>
<pre>
tls_require_ciphers = TLSv1+HIGH : !SSLv2 : RC4+MEDIUM : !aNULL : !eNULL : !3DES : @STRENGTH
</pre>
<h4>Lighttpd</h4>
<p>Add the following line to the configuration section containing <code>ssl.engine = &quot;enable&quot;</code> and restart Lighttpd:</p>
<pre>
ssl.cipher-list = &quot;TLSv1+HIGH !SSLv2 RC4+MEDIUM !aNULL !eNULL !3DES @STRENGTH&quot;
</pre>
<h4>Cyrus <span class="caps">IMAPD</span></h4>
<p>Add the following line in <code>imapd.conf</code> and restart Cyrus:</p>
<pre>
tls_cipher_list: TLSv1+HIGH:!SSLv2:RC4+MEDIUM:!aNULL:!eNULL:!3DES:@STRENGTH
</pre>
<h3>Testing</h3>
<p>In order to test the new settings, a connection attempt using an excluded cipher can be made (which should fail, of course):</p>
<pre>
$ openssl s_client -host www.skytale.net -port 443 -cipher 3DES
CONNECTED(00000003)
140209911707464:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:672:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 58 bytes
---
New, (NONE), Cipher is (NONE)
Compression: NONE
Expansion: NONE
---
</pre>
<p>A successful attempt (letting openssl select the best cipher) negotiates <span class="caps">AES</span> with a 256 bit key:</p>
<pre>
$ openssl s_client -host www.skytale.net -port 443
CONNECTED(00000003)
...
---
SSL handshake has read 1281 bytes and written 309 bytes
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
Server public key is 1024 bit
Compression: zlib compression
Expansion: zlib compression
SSL-Session:
Protocol : TLSv1
Cipher : AES256-SHA
Session-ID: --removed--
Session-ID-ctx:
Master-Key: --removed--
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Compression: 1 (zlib compression)
Start Time: 1252852959
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
</pre>
Sun, 13 Sep 2009 14:45:35 +0000https://www.skytale.net/blog/archives/22-guid.htmlRunning RivaTuner without Administrator rightshttps://www.skytale.net/blog/archives/21-Running-RivaTuner-without-Administrator-rights.html
ComputerSoftwareWindowshttps://www.skytale.net/blog/archives/21-Running-RivaTuner-without-Administrator-rights.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=210https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=21nospam@example.com (Ralf Ertzinger)
<p><a href="http://www.guru3d.com/rivatuner">RivaTuner</a> is a tweaking program for Windows used to change some of the more obscure parameters of modern <span class="caps">GPU</span>s. Its main uses are overclocking and monitoring, but its feature list is truly impressive. I mainly use it to change the fan speed settings on the <span class="caps">GTX</span> 260 in my gaming rig (the default profile is not aggressive enough for my taste, letting the <span class="caps">GPU</span> temperature run up to 85 degrees before the fan starts to kick in in earnest).</p>
<p>One problem I always had with RivaTuner is that it requires Administrator privileges to run. It needs those to load a device driver that is then used to communicate with (and manipulate) the <span class="caps">GPU</span> driver and some parts of the graphics card. Since my normal user account does not have administrative privileges I had to use the &#8220;Run As&#8221; feature to start RivaTuner to allow it to set my fan parameters.</p>
<p>It turns out this is not really necessary, and that there is a way to run the RivaTuner frontend as a normal user. Here&#8217;s how.</p>
<h3><span class="caps">WARNING</span></h3>
<p>The following instructions involve editing sensitive parts of the Windows registry. Getting this wrong may render your Windows installation unbootable or harm your system in other ways. If you are not comfortable with the registry editor do not attempt to do this.</p>
<h3>Instructions</h3>
<ul>
<li>Install RivaTuner (well, duh).</li>
<li>Start RivaTuner at least once (as an Administrator)</li>
<li>Log in as a user with administrative rights</li>
<li>Start the registry editor</li>
<li>Navigate to the following key:<br />
<pre>
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RivaTuner32
</pre></li>
<li>Change the value <code>Start</code> to <code>1</code></li>
<li>Reboot</li>
</ul>
<p>What this does is instruct Windows to load the RivaTuner device driver during system startup, so it is already loaded when a user logs in. Seeing this, RivaTuner will not attempt to load the driver again, but will connect to the already loaded driver as a normal user (which works).</p>
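<p>The same change can be captured in a <code>.reg</code> file for import via the registry editor (a sketch based on the key shown above; double-check that the service name matches your installation before importing it). A <code>Start</code> value of <code>1</code> means the driver is loaded early during system startup:</p>

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RivaTuner32]
"Start"=dword:00000001
```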
<h3>Advantages</h3>
<p>With this change RivaTuner can be run as a normal user.</p>
<h3>Disadvantages</h3>
<p>The RivaTuner device driver will always be loaded, even when RivaTuner will not be used. This may lead to problems with other drivers, and disabling the device driver again requires another go at the registry.</p>
Thu, 03 Sep 2009 19:59:36 +0000https://www.skytale.net/blog/archives/21-guid.htmlTracing errors through the codehttps://www.skytale.net/blog/archives/20-Tracing-errors-through-the-code.html
ComputerSoftwareSolarishttps://www.skytale.net/blog/archives/20-Tracing-errors-through-the-code.html#commentshttps://www.skytale.net/blog/wfwcomment.php?cid=200https://www.skytale.net/blog/rss.php?version=2.0&type=comments&cid=20nospam@example.com (Ralf Ertzinger)
<p>Open source is a great thing. This becomes especially obvious if one is confronted with a program that refuses to work, and furthermore refuses to yield any kind of helpful error message. Reading the source may be the only way to determine what is actually going on.</p>
<p>Sadly I&#8217;ve been doing rather a lot of that lately. This post shall serve as an example of how to navigate the Open Solaris source code in search of an answer.</p>
<h3>The problem</h3>
<p>This specific problem arose during my experiments to create a small Solaris installation for use in an embedded system (small in this context means around 60MB used disk space). More details on this later.</p>
<p>The system has a <code>cfgadm(1M)</code> binary, but it does not work:</p>
<pre>
# cfgadm
cfgadm: Library error: Device library initialize failed: Facility is not active
</pre>
<p>As error messages go, this is only marginally better than &#8220;Failed&#8221;. Telling the user which exact facility is not active would have been helpful.</p>
<p>But at least there are some search friendly strings in there that may help to determine the source code responsible for this message.</p>
<h3>The source</h3>
<p>One thing the classical <span class="caps">UNIX</span> approach of &#8220;all the source in one tree&#8221; has going for it is that it makes searching the source relatively easy. The Open Solaris web site has built a search engine on top of the source tree which automatically cross-references symbols in the code and has some other nice features. <a href="http://src.opensolaris.org/source">The entry page to the search engine is here.</a></p>
<p>Searching for &#8220;Facility is not active&#8221; (note the quotes) yields just a handful of hits. One of those (in <code>/onnv/onnv-gate/usr/src/uts/common/sys/errno.h</code>) hints that there is a system error (and corresponding symbol) called <code>ENOTACTIVE</code> which belongs to this error message.</p>
<p>Running <code>cfgadm</code> under <code>truss(1)</code> confirms this:</p>
<pre>
# truss cfgadm
execve(&quot;/usr/sbin/cfgadm&quot;, 0x08047E24, 0x08047E2C) argc = 1
[...]
sysconfig(_CONFIG_PAGESIZE) = 4096
open(&quot;/devices/pseudo/devinfo@0:devinfo&quot;, O_RDONLY) = 3
ioctl(3, DINFOIDENT, 0x00000000) = 57311
ioctl(3, 0x10DF00, 0x08047460) Err#73 ENOTACTIVE
close(3) = 0
[...]
</pre>
<p>Things go kind of downhill from there. So some code opens the devinfo device, runs two <span class="caps">IOCTL</span>s on it, and the second one fails. Furthermore, <code>truss</code> only knows the first <span class="caps">IOCTL</span> by name, not the actually failing one.</p>
<p>Searching for the first name turns up <code>/onnv/onnv-gate/usr/src/uts/common/sys/devinfo_impl.h</code>:</p>
<pre>
#define DINFOIDENT (DIIOC | 0x82) /* identify the driver */
</pre>
<p>Looking around in this file some more yields two other definitions:</p>
<pre>
#define DIIOC (0xdf&lt;&lt;8)
[...]
#define DINFOCACHE (DIIOC | 0x100000) /* use cached data */
</pre>
<p>So the second <span class="caps">IOCTL</span> is actually called <code>DINFOCACHE</code>. Tracing <span class="caps">IOCTL</span>s through the code is, unfortunately, a bit tricky, because the routine that handles the <span class="caps">IOCTL</span> depends on the passed file descriptor (the first parameter to the <span class="caps">IOCTL</span> call). The file descriptor in this case belongs to the file <code>/devices/pseudo/devinfo@0:devinfo</code> (see the <code>open</code> call directly above the two <span class="caps">IOCTL</span>s).</p>
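<p>The identification is easy to double-check: combining the two defines reproduces the raw request number from the <code>truss</code> output. In Python (used here only to redo the arithmetic from the C headers):</p>

```python
DIIOC = 0xdf << 8              # 0xDF00, from devinfo_impl.h
DINFOCACHE = DIIOC | 0x100000  # the "use cached data" ioctl

# This is exactly the unnamed request number truss printed.
print(hex(DINFOCACHE))  # 0x10df00
```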
<p>But since the <span class="caps">IOCTL</span> handling code most likely contains the symbol <code>DINFOCACHE</code> as well (that&#8217;s what constants are for, after all) searching for the name will turn up the correct file, possibly buried among others.</p>
<p>Armed with this knowledge the search results for <code>DINFOCACHE</code> can be narrowed down to one likely candidate: <code>/onnv/onnv-gate/usr/src/uts/common/io/devinfo.c</code>. This file belongs to the kernel code (it lives in <code>usr/src/uts</code>), and the name fits the name of the device opened above.</p>
<p><code>DINFOCACHE</code> appears twice in a function called <code>di_ioctl</code>, which sounds good. Following the code flow through this function (<code>DINFOCACHE</code> is passed in the <code>cmd</code> parameter), the first relevant code part reads as follows:</p>
<pre>
if ((st-&gt;command &amp; DINFOCACHE) &amp;&amp; !cache_args_valid(st, &amp;error)) {
di_freemem(st);
(void) di_setstate(st, IOC_IDLE);
return (error);
}
</pre>
<p>(By the time execution reaches this code the <code>cmd</code> variable has been copied to <code>st-&gt;command</code>, more or less). <code>cache_args_valid</code>, among other things, does the following:</p>
<pre>
if (!modrootloaded || !i_ddi_io_initialized()) {
CACHE_DEBUG((DI_ERR,
&quot;cache lookup failure: I/O subsystem not inited&quot;));
*error = ENOTACTIVE;
return (0);
}
</pre>
<p>That looks pretty promising, as it sets the right error code if the condition holds. <code>modrootloaded</code> is a kernel symbol, so <code>mdb(1)</code> can be used to inspect this value in a running kernel.</p>
<pre>
# mdb -k
Loading modules: [ unix genunix specfs mac cpu.generic uppc pcplusmp scsi_vhci
ufs sockfs ip hook neti sctp arp usba uhci sd lofs logindmux ptm random crypto
zfs ipc ]
&gt; modrootloaded/X
modrootloaded:
modrootloaded: 1
</pre>
<p>That&#8217;s not the culprit. <code>i_ddi_io_initialized()</code> basically returns the value of <code>sysevent_daemon_init</code>, so what about that?</p>
<pre>
# mdb -k
Loading modules: [ unix genunix specfs mac cpu.generic uppc pcplusmp scsi_vhci
ufs sockfs ip hook neti sctp arp usba uhci sd lofs logindmux ptm random crypto
zfs ipc ]
&gt; modrootloaded/X
modrootloaded:
modrootloaded: 1
&gt; sysevent_daemon_init/X
sysevent_daemon_init:
sysevent_daemon_init: 0
</pre>
<p>Bingo. From the name of the variable the probable name of the not running facility (remember the original error message?) can be deduced: <code>svc:/system/sysevent:default</code>, which, indeed, is not running on the minimal system. Starting it makes <code>cfgadm</code> work.</p>
<p>That wasn&#8217;t so hard, now was it?</p>
Fri, 29 May 2009 14:50:12 +0000https://www.skytale.net/blog/archives/20-guid.html