OpenMediaVault installation in LXC with attached HW block device

Target.
1. Install the OpenMediaVault NAS into a Debian 8 LXC container on a ProxmoxVE server with a hardware RAID controller card.
2. Provide the LSI MegaRAID RAID1 array (/dev/sda1) to OpenMediaVault in the LXC container as storage.
3. Continue to use /dev/sda1 as backup storage for the many backup scripts (mysqldump, for example) run from ProxmoxVE itself.

Definitions.

ProxmoxVE — latest official 4.3 release from http://www.proxmox.com/en/downloads with testing updates;
OpenMediaVault — latest 3.beta (3.0.47), codename Erasmus, from http://www.openmediavault.org;
OpenMediaVault plugins — plugins for Erasmus from http://omv-extras.org/joomla/;
Debian 8 (codename Jessie) LXC template — obtained from the ProxmoxVE built-in LXC template repository: debian-8.0-standard_8.4-1_amd64.tar.gz
*** Daily LXC template images for Debian from https://jenkins.linuxcontainers.org/view/LXC/view/LXC Templates/ would also be suitable, but as of 2016-10-21 they had a poor status.
LXC container config — the file /etc/pve/lxc/XXX.conf, where XXX is the container number in ProxmoxVE;
LXC container hook script — a bash shell script located at /var/lib/lxc/XXX/<script-name>.sh, where XXX is the container number in ProxmoxVE.

Decision.

Step 1 — installation of OpenMediaVault

1. Update the available container list in the ProxmoxVE shell: pveam update
2. Download the latest available Debian 8 template into the container template storage (via the web GUI or from the shell). I found debian-8.0-standard_8.4-1_amd64.tar.gz.
3. Create a Debian 8 LXC container with at least 1 GB RAM (2 GB will be much more comfortable; but if you are planning to use ZFS, you need much more RAM and should consult the ZFS system requirements) and a 2 GB RootFS. At least one network device with an Internet connection is also required.
*** Notes for point 3:
You should add a few lines to the LXC container config before its first start:
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw

In a few words: these lines are required for the Debian network scripts to start (the eth0 network card did not come up until I added the unconfined profile and the proc and sys mounts for the container); the cgroup mount and the unconfined profile are required for OpenMediaVault services such as nfs.
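The config additions above can also be scripted from the ProxmoxVE shell; a minimal sketch (the helper name `append_overrides` is mine, XXX stands for your container number as in the definitions):

```shell
# append_overrides: add the LXC overrides from the notes above to a
# container config file. Run before the container's first start.
append_overrides() {
  cat >> "$1" <<'EOF'
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw
EOF
}
# Example: append_overrides /etc/pve/lxc/XXX.conf
```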

5.5 Start the post-installation command: omv-initsystem
Our installation stopped with a first error: run-parts: /usr/share/openmediavault/initsystem/20hostname exited with return code 1
The container does not have its own hostname, and in the chroot we see the hypervisor's hostname instead.
You can simply change the hostname later from the OpenMediaVault web management interface, so just move this file aside: mv /usr/share/openmediavault/initsystem/20hostname /root/
Continue the first-time system initialization: omv-initsystem
It stopped with a second error: run-parts: /usr/share/openmediavault/initsystem/60rootfs exited with return code 2
An LXC container has no fstab-mounted rootfs, and this step is only a rootfs check in OpenMediaVault, so we can likewise skip it by moving the file: mv /usr/share/openmediavault/initsystem/60rootfs /root/
Continue the first-time system initialization: omv-initsystem
After a few Perl warnings about locales, the installation procedure finishes successfully.

5.6 Exit from the container with Ctrl+D

5.7 start this container from ProxmoxVE

5.8 Log in to the OpenMediaVault web GUI.
Everything seems OK, but there is an error when applying changes: an Avahi-daemon error.
After some time googling I found a solution for Avahi at https://loune.net/2011/02/avahi-setrlimit-nproc-and-lxc/.
We should patch the file /usr/share/openmediavault/mkconf/avahi-daemon in the container.
Go back to the ProxmoxVE shell and edit the file in the pre-mounted rootfs: nano /mnt/usr/share/openmediavault/mkconf/avahi-daemon
At the end of it, replace the last line rlimit-nproc=3
with #rlimit-nproc=3
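The same one-line fix can be done with sed instead of nano; a sketch (the function name `disable_rlimit` is mine, the file path is the one from the step above):

```shell
# disable_rlimit: comment out the rlimit-nproc line that makes
# avahi-daemon fail inside an LXC container.
disable_rlimit() {
  sed -i 's/^rlimit-nproc=/#rlimit-nproc=/' "$1"
}
# On the ProxmoxVE host, with the container rootfs pre-mounted at /mnt:
# disable_rlimit /mnt/usr/share/openmediavault/mkconf/avahi-daemon
```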

1.2 Edit the LXC container config (with number XXX) to add a few new lines: nano /etc/pve/lxc/XXX.conf
lxc.cgroup.devices.allow: b 8:0 rwm
lxc.cgroup.devices.allow: b 8:1 rwm
lxc.autodev: 1
***
These lines allow /dev/sda and /dev/sda1 to be used (with read-write-mknod access) inside the container itself.
But our LXC container has no block devices in /dev.
One way is to create the devices via mknod, but after a container reboot we would lose them, so we should create a hook.
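A minimal hook sketch, assuming it is saved as /var/lib/lxc/XXX/mount-hook.sh (the file name is mine) and wired into the container config with a line like lxc.hook.autodev: /var/lib/lxc/XXX/mount-hook.sh, since lxc.autodev: 1 is already set above. The major:minor numbers 8:0 and 8:1 match the cgroup allow lines from the config:

```shell
#!/bin/bash
# Hypothetical pre-start hook: recreate the block device nodes inside
# the container rootfs on every start, since nodes made with a one-off
# mknod vanish after a reboot. LXC exports the mounted rootfs path in
# $LXC_ROOTFS_MOUNT when it runs autodev hooks.
make_devices() {
  local rootfs="$1"
  mknod -m 660 "${rootfs}/dev/sda"  b 8 0   # whole disk, major:minor 8:0
  mknod -m 660 "${rootfs}/dev/sda1" b 8 1   # first partition, 8:1
}

if [ -n "$LXC_ROOTFS_MOUNT" ]; then
  make_devices "$LXC_ROOTFS_MOUNT"
fi
```

Remember to make the script executable (chmod +x).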

As for me, I added this line to my LXC container config:
lxc.mount.entry: /raid media/7078dfe1-70c5-46eb-97ec-cca6d2fcff37 none bind,create=dir,optional 0 0

1.5. Stop and start the LXC container to apply the changes.
After that you can simply mount /dev/sda1 from the OpenMediaVault web GUI and begin normal operation of any OpenMediaVault (codename Erasmus) services in the LXC container as a NAS.

I hope this short manual will help a few people save costs and build their own SOHO NAS based on truly free software. This manual is not suitable for production and/or commercial use as a NAS solution.

Hello, I am running ZFS here and am not sure if this is ready for ZFS yet. I noted a couple of changes, along with moving the 40network file out of the way, which I did have to do.

during Step 1

"3. You should add a few lines into LXC container config before it first start:lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw"

I am not sure if all of these are needed; I have added them all for now for a better chance of getting things working. I may modify these later and report back here, as I know that the profile line is not needed anymore to get networking working in my version of Proxmox and the Debian 8 container. I'm not exactly sure about the other lines yet.

"5.8 login into web-gui of OpenMediaVault
It seems to be all ok. But there is an error with applying changes — Avahi-daemon error …
After a few time googling I have found a solution for Avahi in https://loune.net/2011/02/avahi-setrlimit-nproc-and-lxc/.
We should patch the file - /usr/share/openmediavault/mkconf/avahi-daemon in container.
Go back to ProxmoxVE shell and edit file in pre-mounted rootfs:nano /mnt/usr/share/openmediavault/mkconf/avahi-daemon
At the end of it we should to remove the last line:rlimit-nproc=3
by#rlimit-nproc=3"

This may not be needed anymore, as I did not have to modify it; maybe they finally fixed it upstream at OMV.

During Step 3

I am not sure of the correct syntax to add the below to the container config, so I just followed the original post, but it might be easier to do something similar to this, and maybe avoid the hook file too:

lxc config device add disk unix-block path=/dev/sda

Finally, I see a block device, and it is one disk of a ZFS mirror pool. I'm not sure if that is safe, and I do not seem to be able to see or add any filesystems in OMV, and it does not seem to have ZFS as a filesystem type.


Hi.
I have no experience with ZFS.

Yesterday I tested adding a mount point (an LVM thin volume with ext4) to the OMV LXC container. I need to use it as ftpfs.
I mounted it into the container as /dev/vda1.
Everything was OK:
OMV recognized it, successfully mounted it, and used it, until an LXC reboot.
OMV uses fstab to mount storage, and I've found no way to run the container's fstab automatically on boot.
So I created a stupid systemd service file with only one command, "mount -a".
After that I successfully rebooted the LXC with OMV, with the storage mounted automatically.
You can use any virtual or physical block device with OMV as its storage inside LXC.
A simple description of providing an LVM volume as a storage volume to OMV.
As for me:
- I created a 2 GB mount point, mounted at /media/mp0 (any path you wish)
- Proxmox indicates that it is local-lvm:vm-105-disk-2
To find the actual /dev/dm-XX number you need to run: ls -la /dev/pve/vm-105-disk-*
lrwxrwxrwx 1 root root 8 Mar 11 13:28 /dev/pve/vm-105-disk-1 -> ../dm-36
lrwxrwxrwx 1 root root 8 Mar 11 13:28 /dev/pve/vm-105-disk-2 -> ../dm-38
After that I added 2 lines to mount-hook.sh:
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/vda b 251 38
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/vda1 b 251 38
After OMV starts you can see /dev/vda1 with ext4 in the filesystems tab of the OMV web UI.
You can mount it and start operating.
The stupid service to mount all devices listed in the OMV fstab file is:
-----------
[Unit]
Description=Stupid Storage Mount for OMV in lxc

[Service]
Type=oneshot
ExecStart=/bin/mount -a

[Install]
WantedBy=multi-user.target
---------------
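Assuming the unit above is saved as, say, storage-mount.service (the name is my assumption), the usual wiring inside the container would look roughly like this:

```shell
# install_unit: copy a unit file into systemd's directory and enable it,
# so the "mount -a" oneshot runs on every container boot.
install_unit() {
  install -D -m 644 "$1" "/etc/systemd/system/$(basename "$1")"
  systemctl daemon-reload
  systemctl enable "$(basename "$1")"
}
# Example: install_unit ./storage-mount.service
```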

It doesn't matter whether you use a physical or a virtual drive as storage for OMV, as long as you correctly create the block device in the LXC's /dev directory.
It should have /dev/sdx and /dev/sdx1. As you can see above, it is possible to use the same block device for both nodes.
To use ZFS in OMV, you need to provide the correct partition as storage to OMV and install the ZFS plugin from omv-extras.org.
I hope this will be enough.
To correctly install and update plugins, read this: http://forum.openmediavault.org/index.php/Thread/14931-No-key-found-for-using-apt-get-upgrades/

So, in Proxmox LXCs you can simply grow the main (root) volume of the container, or add another volume (additional virtual storage) as a "mount point". For example, I use /var/lib/mysql as a 4 GB mount point for MySQL databases. But the mount-point volume mounts only with root:root permissions; you should keep this in mind.

In the current release of OMV (including Arrakis), the path for mounting storage devices changed from /media/mp0 to /srv/dev-xxxx (where xxxx is, for example, sda1).

So if you are planning to use another virtual volume in your container as storage for OMV, you should use this path for the mount.
Example (I assume that OMV's container has ID 110 in the Proxmox GUI).
1. Create a mount point for container number 110. Select "Resources" and push the "Add" button; there will be only one choice, "Mount Point".
2. Select the desired size and target storage, select "Backup" (if you want to include it in backup operations), and choose "/mnt/dev-sda1" as the path.
Don't use /srv/dev-sda1 as the path, because OMV itself uses that path for mounting /dev/sda1.
3. Go to the Proxmox shell and investigate the needed system IDs of the virtual disks for the container with number 110:
ls -la /dev/pve/*110*
lrwxrwxrwx 1 root root 8 Sep 3 08:11 /dev/pve/sas-vm--110--disk--1 -> ../dm-83
lrwxrwxrwx 1 root root 8 Sep 3 08:21 /dev/pve/sas-vm--110--disk--2 -> ../dm-84
After that you need to run ls -la /dev/dm-84 to investigate the "block device ID"; here this number is 253.
brw-rw---- 1 root disk 253, 84 Sep 3 08:22 /dev/dm-84
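As a sketch, stat can print a device node's major and minor numbers directly instead of reading them from the ls output; note that %t and %T print them in hexadecimal:

```shell
# %t = major, %T = minor (both hexadecimal) for a device node.
stat -c 'major=0x%t minor=0x%T' /dev/null
# For the disk above: stat -c 'major=0x%t minor=0x%T' /dev/dm-84
# would show major 0xfd (253 decimal) and minor 0x54 (84 decimal).
```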

This indicates that the first (root) virtual partition is /dev/dm-83 and your second virtual partition (mounted as /mnt/dev-sda1 in your rootfs partition) is /dev/dm-84.
4. There are no block devices (hard drives, as far as you are concerned) in LXC containers; you should create them first. This can be done by creating a hook.
Your custom hook file for the container with number 110 should be placed in the /var/lib/lxc/110/ folder.

After these manipulations you can see /dev/sda, /dev/sda1 and /dev/fuse inside the container with number 110.

After that you can see /dev/sda1 in the OMV web GUI and you need to mount it. But after a reboot you lose the mounted partition, because LXC doesn't execute /etc/fstab. For this case I created the "stupid" mounting service file mentioned above. To be correct, there should also be a section for automatic unmounting, but I'm not close enough to systemd to add that option to the service file. Because this section is missing, the container takes about one minute to reboot; as far as I understood, during this time the LXC daemon waits for the abnormal termination of the "inside container mounting" procedure.

I have hit the same issue as in the first post, but before I proceed with making it work, I was wondering if anyone has considered just running it as a virtual machine instead of a container? The purist in me would like to use a container, but the hacks required seem to be a little high-maintenance, perhaps.

My requirements are simple, as I have an existing lvm/mdadm set: either I pass block devices through for software RAID in OMV, or I create the RAID in Proxmox and mount folders into the OMV container, whichever is easiest.

I did everything according to your tutorial, except I used OMV 4.0 in a Debian 9.3 LXC container.
Everything works great, except I am stuck at the last point: making a hook to mount the data drives (sdb2 and sdc2).
I have got this:

But I'm unable to get the temperature in the web GUI's Storage SMART devices display. It is shown as "n/a" for the hard disk temperature.

It seems that /dev/sda has some permission restriction in the container that smartctl needs to read the hard disk info:
Read Device Identity failed: Operation not permitted
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options
