Our
assigned Red Hat engineer was on-site today and pointed out the
blindingly obvious solution. Can't believe I didn't think of it: Run
NFS as a clustered service and have the VMs mount that. That way ANY
system - even outside of the cluster - can also access the data.
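For what it's worth, under the Red Hat Cluster Suite of that era the clustered NFS service would typically be managed through rgmanager. A minimal sketch of checking and relocating such a service, assuming a service name of "nfs-svc" and node names "node1"/"node2" (all hypothetical):

```shell
# Show cluster and service status (rgmanager must be running).
clustat

# Enable the assumed "nfs-svc" service on the cluster.
clusvcadm -e nfs-svc

# Relocate it to a specific member node, e.g. during maintenance.
clusvcadm -r nfs-svc -m node2
```

The point of running NFS as a managed service is that the floating service IP follows the service on failover, so clients (VMs or otherwise) keep mounting the same address.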

This is what we are doing, and it works great. We considered presenting the
raw devices from our SAN (FC connectivity instead of iSCSI) to the
VMs, but opted against it due to the changing number of VMs,
GFS's journal requirements (one journal per node), and multicast issues
(each dom0 uses a different routed network for its VMs). Each VM mounts
NFS from its host.
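The per-VM mount described above could look roughly like this; the export path "/export/shared" and the host alias "dom0-nfs" are assumptions, not taken from the thread:

```shell
# One-off mount from inside a VM (assumed hostname and export path):
mount -t nfs dom0-nfs:/export/shared /mnt/shared

# Or the equivalent persistent /etc/fstab entry:
# dom0-nfs:/export/shared  /mnt/shared  nfs  rw,hard,intr  0 0
```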

What kind of security do you apply, both to the NFS cluster and to the
data that gets accessed on it?

Heya Rudi, never realised you were on this list too ;)

The exports are controlled by source IP address in /etc/exports. The data on there is not sensitive at all in our environment, and the GFS filesystem is all server environment, with no user access... but I just tested using ACLs and they work 100% (added the acl option to the gfs mount and configured them using setfacl). We are using LDAP network authentication, so it works nicely with group permissions ;)
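The setup above could be sketched as follows; the export path, client subnet, group name, and mount point are assumptions for illustration:

```shell
# /etc/exports line restricting the export to a source subnet
# (assumed path and network):
# /export/shared  192.168.10.0/24(rw,sync,no_subtree_check)

# Mount the GFS filesystem with ACL support enabled:
mount -t gfs -o acl /dev/mapper/vg-gfslv /gfs

# Grant a group (resolved via LDAP) rwx on a directory via POSIX ACLs:
setfacl -m g:webadmins:rwx /gfs/shared
getfacl /gfs/shared   # verify the ACL entries
```

With LDAP providing the group memberships, the ACLs behave the same on every node that mounts the filesystem.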

(Although we do have one LUKS volume image on the GFS filesystem that is mounted by one of the physical machines using a keyfile stored locally.)
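Mounting a LUKS image file like that generally means looping it and opening it with the keyfile; the image path, keyfile path, and mapper name below are all hypothetical:

```shell
# Attach the image file (assumed path) to a loop device.
losetup /dev/loop0 /gfs/secure.img

# Open the LUKS container using the locally stored keyfile (assumed path).
cryptsetup luksOpen --key-file /etc/keys/secure.key /dev/loop0 secure

# Mount the decrypted mapping.
mount /dev/mapper/secure /mnt/secure
```

Keeping the keyfile only on that one physical machine means the image is opaque to every other node that can see the GFS filesystem.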