Alas, socklog is unable to listen on more than one Unix domain socket.

Luckily, socat can be used to relay syslog messages from the vservers to the master socklog on the host. (syslog-ng would also have done the job nicely.)

It's also possible to bind mount the /dev/log socket in every guest; however, I'm afraid this breaks if you restart socklog after a guest is up. Workaround: make /dev/log a symlink to, say, /var/run/syslog/socket everywhere and bind mount the host's /var/run/syslog directory in the guests; that way the socket will still be available even if socklog unlinks and recreates it.
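Concretely, the workaround might look something like this (the guest name squid, the guest root under /vservers, and the socket path are all illustrative; I'm assuming socklog is told to listen on the shared socket):

```shell
# On the host: have socklog create its socket under /var/run/syslog
# and make /dev/log a symlink to it.
mkdir -p /var/run/syslog
ln -sf /var/run/syslog/socket /dev/log
# The socklog run script would then contain something like:
#   exec socklog unix /var/run/syslog/socket

# In each guest: the same symlink inside the guest's root...
ln -sf /var/run/syslog/socket /vservers/squid/dev/log
# ...plus a bind mount entry in /etc/vservers/squid/fstab:
#   /var/run/syslog  /var/run/syslog  none  bind  0 0
```

Since the guests see the host's /var/run/syslog directory rather than the socket inode itself, the symlink keeps working even after socklog unlinks and recreates the socket.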

It should be straightforward, and next to transparent, to manage the services running in vservers from the host.

With runit, it's easy to delegate management rights of a service to users (chown and chmod some files in the pertinent supervise directory). This should continue to work.

Prerequisites

A fairly recent vserver kernel with support for persistent contexts (I used 2.6.19.2-vs2.2.0-rc8.7).

A recent version of util-vserver that supports persistent contexts correctly (I used 0.30.213-rc1).

For service supervision to work, we must be able to send signals to our services. Specifically, runsv must be able to send signals to its children.
Alas, it's not prepared to send signals across context boundaries, which is where signal-relay comes in.

signal-relay

signal-relay is a small program not unlike runit's chpst that does the following:

it forks a child;

inside the child, it execs the program specified on its command line;

in the parent, it sets up signal handlers for every signal that relay the signal to the child, even if the child is running in a different context;

if the child exits, signal-relay exits.

(It can also put the child into its own process group; use the -P switch. Sending signals to process groups in a different context doesn't work yet, though.)
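To make the mechanics concrete, here is a rough shell approximation of the fork-and-relay structure. It is only an illustration: plain kill cannot signal a process in a different vserver context, and crossing that boundary is exactly what the real (C) signal-relay adds.

```shell
#!/bin/sh
# relay_signals CMD [ARGS...]: sketch of signal-relay's structure.
relay_signals() {
    "$@" &                       # fork a child and exec the command in it
    child=$!
    for sig in HUP INT QUIT TERM USR1 USR2; do
        # on each signal received, pass the same signal on to the child
        trap "kill -$sig $child 2>/dev/null" "$sig"
    done
    wait "$child"                # when the child exits, we return too
}
```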

Setting up the vservers

When we start a service for runit, we want the command that starts the service to stay in the foreground until the moment the service dies. vserver exec looks just right, but there is a catch: it only works for vservers that have been "started". vserver start, however, doesn't fit very well into the runit way of doing things. You can set it up as a service (this is discussed in util-vserver:InitStyles), but it seems superfluous to leave some processes around just to keep a vserver "started" so that we can run services in it.

What we need is basically a way of saying "start vserver <guest> if it's not already started, then exec program <service> inside it". Normally, a context with no processes running inside it is destroyed by the kernel; thus, just setting /etc/vservers/guest/apps/init/cmd.start to /bin/true isn't going to work. We need something that stays around for a while, like a script that calls sleep 1m &. That would make vserver start happy, so in our service run script we could do something like this:
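A sketch of such a run script (the guest name, the use of vserver ... status as the "is it running?" test, and a cron that supports foreground mode via -f are all assumptions):

```shell
#!/bin/sh
exec 2>&1
# start the guest if it is not running yet; cmd.start (the sleep 1m &
# script above) keeps the context alive just long enough
vserver squid status >/dev/null 2>&1 || vserver squid start
# stay in the foreground and let runsv's signals be relayed into the
# guest's context
exec signal-relay vserver squid exec cron -f
```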

Now, if you place this run script in a service directory called cron-squid, it will run a cron daemon in the squid guest. This takes care of rotating the squid logs under /var/log/squid, for example, though that could also be done on the host with a few kludges.

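The syslog relay can be a run script along these lines (again a sketch; the guest root under /vservers and the socket paths are assumptions):

```shell
#!/bin/sh
exec 2>&1
# receive datagrams on the guest's /dev/log and forward each one,
# unidirectionally, to the host's log socket
exec socat -u \
    UNIX-RECV:/vservers/squid/dev/log,mode=0666,unlink-early \
    UNIX-SENDTO:/dev/log
```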
This will pass syslog messages from a vserver to the syslog of the host by acting as a syslog server for the /dev/log socket of the guest. Thus, you don't need to run a syslogd inside the vserver and you don't need to migrate to syslog-ng from socklog on the host.

Symlink this run script into a service directory called vserver-logrelay-squid.

Disadvantages

The most significant problem with this approach (as opposed to using initstyle plain and a separate runit instance in each vserver) is that it's no longer straightforward to manage the services running in vservers from inside the vservers; package management scripts can't stop them for upgrades, for example (unless you modify the initscripts in quite horrible ways).

Initial setup is also slightly more complicated.

These have to be weighed against the advantages of:

having fewer superfluous processes (like a separate runit and runsvdir in each vserver);

being able to manage the services easily from the host;

avoiding the need to run sshd inside each guest in order to be able to easily delegate service management privileges to others.

Nowadays I tend to think the initstyle plain method is better after all.