support for (secure) downloads, ideally via a browser (no special software required)

support for (secure) uploads, ideally via sftp (most of our customers are familiar with ftp)

Our target was RHEL/CentOS 7, but this should transfer to other linuxes pretty
readily.

Here's the schema we ended up settling on, which seems to give us a good mix of
security and flexibility.

use apache with HTTPS and PAM with local accounts, one per customer, and nologin
shell accounts

users have their own groups (group=$USER), and also belong to the sftp group

we use the users group for internal company accounts, but NOT for customers

customer data directories live in /data

we use a 3-layer hierarchy for security: /data/chroot_$USER/$USER

the /data/chroot_$USER directory must be owned by root:$USER, with
permissions 750, and is used for an sftp chroot directory (not writeable
by the user)

the next-level /data/chroot_$USER/$USER directory should be owned by $USER:users,
with permissions 2770 (where users is our internal company user group, so both
the customer and our internal users can write here)

we also add an ACL to /data/chroot_$USER to allow the company-internal users
group read/search access (but not write)

We just use openssh internal-sftp to provide sftp access, with the following config:

So we chroot sftp connections to /data/chroot_$USER and then (via the ForceCommand)
chdir to /data/chroot_$USER/$USER, so they start off in the writeable part of their
tree. (If they bother to pwd, they see that they're in /$USER, and they can chdir
up a level, but there's nothing else there except their $USER directory, and they
can't write to the chroot.)
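The sshd_config fragment for this looks something like the following sketch (the -d option to internal-sftp needs a reasonably recent OpenSSH; %u expands to the connecting username):

```sh
# /etc/ssh/sshd_config (sketch - group name and paths per the scheme above)
Subsystem sftp internal-sftp
Match Group sftp
    ChrootDirectory /data/chroot_%u
    ForceCommand internal-sftp -d /%u
    AllowTcpForwarding no
    X11Forwarding no
```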

Those files capture the ACPI events and handle them via a custom script in
/etc/acpi/actions/volume.sh, which uses amixer from alsa-utils. Volume
control worked just fine, but muting was a real pain to get working correctly
due to what seems like a bug in amixer - amixer -c1 sset Master playback toggle
doesn't toggle correctly - it mutes fine, but then doesn't unmute all
the channels it mutes!

I worked around it by figuring out the specific channels that sset Master
was muting, and then handling them individually, but it's definitely not as clean:
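The mute handling in the script ends up looking something like this sketch - the channel names (Headphone, Speaker) are the ones on my hardware; check amixer -c1 scontrols for yours:

```sh
# Drive the individual channels explicitly instead of relying on
# 'sset Master toggle', which doesn't unmute everything it mutes
if amixer -c1 get Master | grep -q '\[off\]'; then
    for ctl in Master Headphone Speaker; do amixer -q -c1 sset "$ctl" unmute; done
else
    for ctl in Master Headphone Speaker; do amixer -q -c1 sset "$ctl" mute; done
fi
```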

So in short, really pleased with the X250 so far - the screen is lovely, battery
life seems great, I'm enjoying the keyboard, and most things have Just
Worked or have been pretty easily configurable with CentOS. Happy camper!

Just picked up a shiny new Fujitsu ScanSnap 1300i ADF scanner to get
more serious about less paper.

I chose the 1300i on the basis of its nice small form factor, and because
SANE reports it as having 'good' support with current SANE backends. I'd
also been able to find success stories of other linux users getting the
similar S1300 working okay:

I plugged the S1300i in (via the dual USB cables instead of the power
supply - nice!), turned it on (by opening the top cover) and then ran
sudo sane-find-scanner. All good:

found USB scanner (vendor=0x04c5 [FUJITSU], product=0x128d [ScanSnap S1300i]) at libusb:001:013
# Your USB scanner was (probably) detected. It may or may not be supported by
# SANE. Try scanimage -L and read the backend's manpage.

Ran sudo scanimage -L - no scanner found.

I downloaded the S1300 firmware Luuk had provided in his post and
installed it into /usr/share/sane/epjitsu, and then updated
/etc/sane.d/epjitsu.conf to reference it:
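For reference, the epjitsu.conf entries look something like this (the firmware filename here is illustrative - use whatever firmware file you actually installed):

```
# /etc/sane.d/epjitsu.conf (excerpt)
firmware /usr/share/sane/epjitsu/1300i_0D12.nal
usb 0x04c5 0x128d
```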

And so far gscan2pdf 1.2.5 seems to work pretty nicely. It handles both
simplex and duplex scans, and both the cleanup phase (using unpaper)
and the OCR phase (with either gocr or tesseract) work without
problems. tesseract seems to perform markedly better than gocr so
far, as seems pretty typical.

So thus far I'm a pretty happy purchaser. On to a paperless
searchable future!

Ok, this has bitten me enough times now that I'm going to blog it so I
don't forget it again.

Symptom: you're doing a yum update on a centos5 or rhel5 box, using rpms
from a repository on a centos6 or rhel6 server (or anywhere else with
a more modern createrepo available), and you get errors like this:

What this really means is that yum is too stupid to calculate the sha256
checksum correctly (and also too stupid to give you a sensible error
message like "Sorry, primary.sqlite.bz2 is using a sha256 checksum,
but I don't know how to calculate that").

The fix is simple:

yum install python-hashlib

from either rpmforge or epel, which makes the necessary libraries
available for yum to calculate the new checksums correctly. Sorted.

I'm a big fan of Coraid and their relatively
low-cost storage units.
I've been using them for 5+ years now, and they've always been pretty
well engineered, reliable, and performant.

They talk ATA-over-Ethernet (AoE),
which is a very simple non-routable protocol for transmitting ATA
commands directly via Ethernet frames, without the overhead of higher
level layers like IP and TCP. So it's a lighter protocol than
something like iSCSI, and theoretically offers higher performance.

One issue with them on linux is that the in-kernel 'aoe' driver is
typically pretty old. Coraid's
latest aoe driver is version
78, for instance, while the RHEL6 kernel (2.6.32) comes with aoe v47,
and the RHEL5 kernel (2.6.18) comes with aoe v22. So updating to the
latest version is highly recommended, but also a bit of a pain, because
if you do it manually it has to be recompiled for each new kernel
update.

The modern way to handle this is to use a
kernel-ABI tracking kmod, which gives you
a driver that will work across multiple kernel updates for a given EL
generation, without having to recompile each time.

So I've created a kmod-aoe package that seems to work nicely here. It's
downloadable below, or you can install it from my
yum repository.
The kmod depends on the 'aoetools' package, which supplies the command
line utilities for managing your AoE devices.
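Once the kmod and aoetools are installed, bringing the devices up is straightforward - something like:

```sh
# Load the driver and discover AoE targets on the local segment
modprobe aoe
aoe-discover
# List what was found; devices appear as e.g. /dev/etherd/e0.0
aoe-stat
```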

Be aware that there are multiple ldap configuration files involved now.
All of the following end up with ldap config entries in them and need to
be checked:

/etc/openldap/ldap.conf

/etc/pam_ldap.conf

/etc/nslcd.conf

/etc/sssd/sssd.conf

Note too that /etc/openldap/ldap.conf uses uppercased directives (e.g. URI)
that get lowercased in the other files (URI -> uri). Additionally, some
directives are confusingly renamed as well - e.g. TLS_CACERT in
/etc/openldap/ldap.conf becomes tls_cacertfile in most of the others.
:-(

If you want to do SSL or TLS, you should know that the default behaviour
is for ldap clients to verify certificates, and give misleading bind errors
if they can't validate them. This means:

if you're using CA-signed certificates, and want to verify them, add
your CA PEM certificate to a directory of your choice (e.g.
/etc/openldap/certs, or /etc/pki/tls/certs, for instance), and point
to it using TLS_CACERT in /etc/openldap/ldap.conf, and
tls_cacertfile in /etc/ldap.conf.
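So a minimal pair of entries looks something like this (the certificate path is just an example):

```
# /etc/openldap/ldap.conf
TLS_CACERT /etc/openldap/certs/ca.pem

# /etc/pam_ldap.conf, /etc/nslcd.conf, etc.
tls_cacertfile /etc/openldap/certs/ca.pem
```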

RHEL6 uses a new-fangled /etc/openldap/slapd.d directory for the old
/etc/openldap/slapd.conf config data, and the
RHEL6 Migration Guide
tells you to how to convert from one to the other. But if you simply
rename the default slapd.d directory, slapd will use the old-style
slapd.conf file quite happily, which is much easier to read/modify/debug,
at least while you're getting things working.
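When you do want to convert, the conversion itself is just a slaptest invocation - something like:

```sh
# Convert an old-style slapd.conf into the new slapd.d format
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
# slapd needs to be able to read the result
chown -R ldap:ldap /etc/openldap/slapd.d
```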

If you run into problems on the server, there are lots of helpful utilities
included with the openldap-servers package. Check out the manpages for
slaptest(8), slapcat(8), slapacl(8), slapadd(8), etc.

rpm-find-changes is a little script I wrote a while ago for rpm-based
systems (RedHat, CentOS, Mandriva, etc.). It finds files in a filesystem
tree that are not owned by any rpm package (orphans), or are modified
from the version distributed with their rpm. In other words, any file
that has been introduced or changed from its distributed version.

It's intended to help identify candidates for backup, or just for
tracking interesting changes. I run it nightly on /etc on most of my
machines, producing a list of files that I copy off the machine (using
another tool, which I'll blog about later) and store in a git
repository.

I've also used it for tracking changes to critical configuration trees
across multiple machines, to make sure everything is kept in sync, and
to be able to track changes over time.

Been playing with Riak recently, which is
one of the modern dynamo-derived nosql databases (the other main ones being
Cassandra and Voldemort). We're evaluating it for use as a really large
brackup datastore, the primary attraction
being the near linear scalability available by adding (relatively cheap) new
nodes to the cluster, and decent availability options in the face of node
failures.

I've built riak packages for RHEL/CentOS 5, available at my
repository,
and added support for a riak 'target' to the
latest version (1.10) of brackup
(packages also available at my repo).

The first thing to figure out is the maximum number of nodes you expect
your riak cluster to get to. This you use to size the ring_creation_size
setting, which is the number of partitions the hash space is divided into.
It must be a power of 2 (64, 128, 256, etc.), and the reason it's important
is that it cannot be easily changed after the cluster has been created.
The rule of thumb is that for performance you want at least 10 partitions
per node/machine, so the default ring_creation_size of 64 is really only
useful up to about 6 nodes. 128 scales to 10-12, 256 to 20-25, etc. For more
info see the Riak Wiki.

Here's the script I use for configuring a new node on CentOS. The main
things to tweak here are the ring_creation_size you want (here I'm using
512, for a biggish cluster), and the interface to use to get the default ip
address (here eth0, or you could just hardcode 0.0.0.0 instead of $ip).

Save this to a file called e.g. riak_configure, and then to configure a couple
of nodes you do the following (note that NODE is any old internal hostname you use
to ssh to the host in question, but FIRST_NODE needs to use the actual -name
parameter defined in /etc/riak/vm.args on your first node):
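In other words, something like this sketch for each node (riak-admin join syntax as of the riak versions current at the time):

```sh
# Configure and start riak on the new node, then join it to the cluster
scp riak_configure $NODE:
ssh $NODE "sh riak_configure && service riak start"
ssh $NODE "riak-admin join $FIRST_NODE"
```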

Problem: you've got a remote server that's significantly hosed, either
through a screwup somewhere or a power outage that did nasty things to
your root filesystem or something. You have no available remote hands,
and/or no boot media anyway.

Preconditions: You have another server you can access on the same
network segment, and remote access to the broken server, either through
a DRAC or iLO type card, or through some kind of serial console server
(like a Cyclades/Avocent box).

Solution: in extremis, you can do a remote rebuild. Here's the simplest
recipe I've come up with. I'm rebuilding using centos5-x86_64 version
5.5; adjust as necessary.

Note: dnsmasq, mrepo and syslinux are not core CentOS packages,
so you need to enable the rpmforge
repository to follow this recipe. This just involves:

1. On your working box (which you're now going to press into service as a
build server), install and configure dnsmasq
to provide dhcp and tftp services:

# Install dnsmasq
yum install dnsmasq
# Add the following lines to the bottom of your /etc/dnsmasq.conf file
# Note that we don't use the following ip address, but the directive
# itself is required for dnsmasq to turn dhcp functionality on
dhcp-range=ignore,192.168.1.99,192.168.1.99
# Here use the broken server's mac addr, hostname, and ip address
dhcp-host=00:d0:68:09:19:80,broken.example.com,192.168.1.5,net:centos5x
# Point the centos5x tag at the tftpboot environment you're going to setup
dhcp-boot=net:centos5x,/centos5x-x86_64/pxelinux.0
# And enable tftp
enable-tftp
tftp-root=/tftpboot
#log-dhcp
# Then start up dnsmasq
service dnsmasq start

3. Finally, finish setting up your tftp environment. mrepo should have copied
appropriate pxelinux.0, initrd.img, and vmlinuz files into your
/tftpboot/centos5-x86_64 directory, so all you need to supply is an
appropriate pxelinux boot config:
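A minimal pxelinux config is something like this sketch (the kickstart url and device name are illustrative):

```
# /tftpboot/centos5-x86_64/pxelinux.cfg/default (sketch)
default centos5
prompt 0
label centos5
  kernel vmlinuz
  append initrd=initrd.img ks=http://192.168.1.1/ks.cfg ksdevice=eth0
```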

Following on from my IPMI explorations, here's the next
chapter in my getting-down-and-dirty-with-dell-hardware-on-linux adventures.
This time I'm setting up Dell's
OpenManage Server Administrator
software, primarily in order to explore being able to configure bios settings
from within the OS. As before, I'm running CentOS 5, but OMSA supports any of
RHEL4, RHEL5, SLES9, and SLES10, and various versions of Fedora Core and
OpenSUSE.

Here's what I did to get up and running:

# Configure the Dell OMSA repository
wget -O bootstrap.sh http://linux.dell.com/repo/hardware/latest/bootstrap.cgi
# Review the script to make sure you trust it, and then run it
sh bootstrap.sh
# OR, for CentOS5/RHEL5 x86_64 you can just install the following:
rpm -Uvh http://linux.dell.com/repo/hardware/latest/platform_independent/rh50_64/prereq/\
dell-omsa-repository-2-5.noarch.rpm
# Install base version of OMSA, without gui (install srvadmin-all for more)
yum install srvadmin-base
# One of the daemons requires /usr/bin/lockfile, so make sure you've got procmail installed
yum install procmail
# If you're running an x86_64 OS, there are a couple of additional 32-bit
# libraries you need that aren't dependencies in the RPMs
yum install compat-libstdc++-33-3.2.3-61.i386 pam.i386
# Start OMSA daemons
for i in instsvcdrv dataeng dsm_om_shrsvc; do service $i start; done
# Finally, you can update your path by doing logout/login, or just run:
. /etc/profile.d/srvadmin-path.sh

Now to check whether you're actually functional you can try a few of the
following (as root):

omconfig about
omreport about
omreport system -?
omreport chassis -?

omreport is the OMSA CLI reporting/query tool, and omconfig is the
equivalent update tool. The main documentation for the current version of
OMSA is here.
I found the CLI User's Guide
the most useful.

omconfig allows setting object attributes using a key=value syntax, which
can get reasonably complex. See the CLI User's Guide above for details, but
here are some examples of messing with various bios settings:
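For instance (the attribute names here are illustrative and vary by model - use omconfig chassis biossetup -? to see what your hardware supports):

```sh
# Report current bios settings
omreport chassis biossetup
# Examples of changing settings
omconfig chassis biossetup attribute=numlock setting=on
omconfig chassis biossetup attribute=extserial setting=com1
```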

Spent a few days deep in the bowels of a couple of datacentres last week,
and realised I didn't know enough about Dell's DRAC base management
controllers to use them properly. In particular, I didn't know how to
mess with the drac settings from within the OS. So spent some of today
researching that.

Turns out there are a couple of routes to do this. You can use the Dell
native tools (e.g. racadm) included in Dell's
OMSA product, or you can use
vendor-neutral IPMI,
which is well-supported by Dell DRACs. I went with the latter as it's
more cross-platform, and the tools come native with CentOS, instead of
having to setup Dell's OMSA repositories. The Dell-native tools may give
you more functionality, but for what I wanted to do IPMI seems to work
just fine.
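Getting started with IPMI on CentOS is just (using the stock packages):

```sh
# Install the IPMI drivers and tools
yum install OpenIPMI ipmitool
service ipmi start
# Then query the local BMC/DRAC, e.g.
ipmitool lan print 1
ipmitool sensor list
```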

Mock is a Fedora project that allows
you to build RPM packages within a chroot environment, allowing you to build
packages for other systems than the one you're running on (e.g. building CentOS 4
32-bit RPMs on a CentOS 5 64-bit host), and ensuring that all the required build
dependencies are specified correctly in the RPM spec file.

It's also pretty under-documented, so these are my notes on things I've figured out
over the last week setting up a decent mock environment on CentOS 5.

First, I'm using mock 1.0.2 from the EPEL repository, rather than the older
0.6.13 available from CentOS Extras. There are apparently backward-compatibility
problems with versions of mock > 0.6, but as I'm mostly building C5 packages I
decided to go with the newer version. So installation is just:

# Install mock and python-ctypes packages (the latter for better setarch support)
$ sudo yum --enablerepo=epel install mock python-ctypes
# Add yourself to the 'mock' group that will have now been created
$ sudo usermod -G mock gavin

The mock package creates an /etc/mock directory with configs for various OS
versions (mostly Fedoras). The first thing you want to tweak there is the
site-defaults.cfg file which sets up various defaults for all your builds. Mine now
looks like this:

You can use the epel-5-{i386,x86_64}.cfg configs as-is if you like; I copied them
to centos-5-{i386,x86_64}.cfg versions and removed the epel 'extras', 'testing',
and 'local' repositories from the yum.conf section, since I typically want to build
using only 'core' and 'update' packages.
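With a config in place, initialising the chroot and building from a source rpm looks like:

```sh
# Initialise (or update) the chroot cache for a config
$ mock -r centos-5-x86_64 --init
# Build a package from a source rpm; results and logs (build.log,
# root.log, state.log) land in /var/lib/mock/centos-5-x86_64/result/
$ mock -r centos-5-x86_64 --rebuild /path/to/mypackage.src.rpm
```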

If it fails, you can check mock output, the *.log files above for more info, and/or
rerun mock with the -v flag for more verbose messaging.

A couple of final notes:

the chroot environments are cached, but rebuilding them and checking for updates
can be pretty network intensive, so you might want to consider setting up a local
repository to pull from. mrepo (available
from rpmforge) is pretty good for that.

there don't seem to be any hooks in mock to allow you to sign packages you've
built, so if you do want signed packages you need to sign them afterwards via an
rpm --resign $RPMS.

The new skype 2.1 beta
(woohoo - Linux users are now only 2.0 versions behind Windows, way to go Skype!)
doesn't come with a CentOS rpm, unlike earlier versions. And the Fedora packages
that are available are for FC9 and FC10, which are too recent to work on a stock
RHEL/CentOS 5 system.

So here's how I got skype working nicely on CentOS 5.3, using the static binary
tarball.

Note that while it appears skype has finally been ported to 64-bit architectures, the
only current 64-bit builds are for Ubuntu 8.10+, so installing on a 64-bit CentOS
box requires 32-bit libraries to be installed (sigh). Otherwise you get the error:
skype: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory.
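The install itself is then just unpacking the tarball and adding a wrapper - a sketch, with the version number here being illustrative:

```sh
# Unpack the static tarball under /opt
tar xjf skype_static-2.1.0.47.tar.bz2 -C /opt
# Skype wants to run from its own directory, so use a small wrapper
cat > /usr/local/bin/skype <<'EOF'
#!/bin/sh
cd /opt/skype_static-2.1.0.47 && exec ./skype "$@"
EOF
chmod +x /usr/local/bin/skype
```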

Tangentially, if you have any video problems with your webcam, you might want to check out
the updated video drivers available in the
kmod-video4linux package from the shiny new
ELRepo.org. I'm using their updated uvcvideo module with a Logitech
QuickCam Pro 9000 and Genius Slim 1322AF, and both are working well.

Over the last few years I've built up quite a collection of packages
for CentOS, and distribute them via a yum repository. They're typically
packages that aren't included in
DAG/RPMForge when I need them, so I just
build them myself. In case they're useful to other people, this post
documents the repository locations, and how you can get setup to make
use of it yourself.

Obligatory Warning: this is a personal repository, so it's
primarily for packages I want to use myself on a particular platform
i.e. coverage is uneven, and packages won't be as well tested as
a large repository like RPMForge. Also, I routinely build packages
that replace core packages, so you'll want the repo disabled by
default if that concerns you. Use at your own risk, packages may nuke
your system and cause global warming, etc. etc.
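Repo setup is the usual kind of thing - something like this sketch (the repo name and baseurl are placeholders; use the locations documented in this post), with enabled=0 so the repo is only consulted when you explicitly ask for it:

```
# /etc/yum.repos.d/myrepo.repo (sketch)
[myrepo]
name=My CentOS Packages
baseurl=http://repo.example.com/centos/5/$basearch/
enabled=0
gpgcheck=1
```

Then pull packages from it explicitly with yum --enablerepo=myrepo install <package>.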

I've been using kvm for my virtualisation needs lately, instead of
xen, and finding it great. Disadvantages are that it requires hardware
virtualisation support, and so only works on newer Intel/AMD CPUs.
Advantages are that it's baked into recent linux kernels, and so more
or less Just Works out of the box, no magic kernels required.

There are some pretty useful resources covering this stuff out on the
web - the following sites are particularly useful:

That should be sufficient to get you up and running with basic outgoing
networking (for instance as a test desktop instance). In qemu terms this
is using 'user mode' networking which is easy but slow, so if you want
better performance, or if you want to allow incoming connections (e.g. as
a server) you need some extra magic, which I'll cover in a subsequent post.

Following on from my post yesterday on "Basic KVM on CentOS 5", here's
how to setup simple bridging to allow incoming network connections to
your VM (and to get other standard network functionality like pings
working). This is a simplified/tweaked version of
Hadyn Solomon's bridging instructions.

Done. This should give you VMs that are full network members, able to be
pinged and accessed just like a regular host. Bear in mind that this means
you'll want to setup firewalls etc. if you're not in a controlled
environment.

Notes:

If you want to run more than one VM on your LAN, you need to set the
guest MAC address explicitly, since otherwise qemu uses a static default
that will conflict with any other similar VM on the LAN. e.g. do something
like:
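A sketch of what that looks like with qemu-kvm (the MAC value is illustrative - anything goes as long as it's unique on your LAN):

```sh
# Set an explicit, unique MAC for this guest
qemu-kvm -m 512 -hda vm1.img -net nic,macaddr=52:54:00:12:34:57 -net tap
```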

Had to setup some simple policy-based routing on CentOS again recently, and had
forgotten the exact steps. So here's the simplest recipe for CentOS that seems
to work. This assumes you have two upstream gateways (gw1 and gw2), and that
your default route is gw1, so all you're trying to do is have packets that come
in on gw2 go back out gw2.
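The recipe boils down to a second routing table for gw2, plus a rule selecting that table for traffic sourced from your gw2-side address - a sketch with illustrative addresses and interface names:

```sh
# Create a second routing table for gw2
echo "200 gw2" >> /etc/iproute2/rt_tables
# Default route for that table goes out gw2
ip route add default via 192.168.2.1 dev eth1 table gw2
# Packets sourced from our gw2-side address use the gw2 table
ip rule add from 192.168.2.10/32 table gw2
ip route flush cache
```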