Quote from Ninja_Slayer: “OMV on my odroid c2 ” With an older image there was a problem when the router / DHCP server was IPv6 capable. I thought we'd fixed this in the latest image, but at least this would be worth a look and a try. In case your router also takes care of IPv6, try to temporarily disable this, burn the image again, boot and see what happens. OMV will reboot itself once automatically, and this IPv6 bug led to no network after the reboot due to a screwed-up /etc/network/interfaces file.
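For reference, a non-broken /etc/network/interfaces on such a board usually contains nothing more than a DHCP stanza like this (a sketch; the interface name eth0 is an assumption):

```
# /etc/network/interfaces -- minimal DHCP setup (sketch)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```

If the file on your card looks substantially different after the automatic reboot, you hit the bug.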

Quote from elvenbone: “I used dd on the img file inside the archive. Did not try etcher yet ” That's a great recipe for running into trouble, as you did. If you used dd again I would start from scratch. It's really a pretty bad idea to just TRY to write data to an SD card WITHOUT VERIFYING, since data pretty often gets corrupted while being written to a card (card reader problems in addition to the usual SD card problems). Only use Etcher. Always.
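If you insist on dd anyway, at least verify what actually landed on the card afterwards (Etcher does this for you). A minimal sketch; the image name and device path in the usage line are assumptions:

```shell
# verify an image write: hash the image file, then re-read the same number
# of bytes back from the target device and compare the hashes
verify_write() {
    img="$1"; dev="$2"
    sha_img=$(sha256sum "$img" | awk '{print $1}')
    sha_dev=$(head -c "$(stat -c%s "$img")" "$dev" | sha256sum | awk '{print $1}')
    if [ "$sha_img" = "$sha_dev" ]; then echo "verified"; else echo "mismatch"; fi
}
# usage: verify_write omv.img /dev/sdX
```

A mismatch means the card, the reader or the write itself is bad and the board will misbehave in confusing ways later.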

Quote from ryecoaaron: “No idea what cron job that is ” I think this is related to github.com/ThomasKaiser/OMV_fo…2fba956b263e062ea2bf66b9a (enforced root user password change; IIRC some users also reported problems when installing certain plug-ins, and the only way to solve the problem is to log in as root at least once and assign a new password)

What about accessing the odroidxu4 from a client machine? This should read as Source Code (1 line) and assumes that the XU4 is connected to the network, gets a DHCP lease and the router implements dynamic DNS updates. The 'no network interface(s)' output at the XU4's local console is irrelevant (timing problem).

Quote from tmihai20: “I just found out there is a 4.14 kernel from HardKernel. Will the new version come in an update or a fresh install will be needed? ” Via an update, of course. But currently, IMO, there's a lot wrong with Armbian and reliability. The last attempt to switch to 4.14 led to bricked boards due to 'processes' that make no sense at all (rolling out updates without communicating and without sufficient testing): - forum.openmediavault.org/index.php/Thread/21575 - forum.armbian.com/topic/629…

Quote from jerryfudd: “If it was anything more than a home setup with all due respect (and I really mean that) I wouldn't be using OMV. ” Interesting. Is that because you don't trust Linux and Debian to be reliable, or why?

Unfortunately he used an old and crappy SSD from a decade ago (which he calls 'nice and fast' -- LOL!), which results in the poor NAS performance. If he had used a fast SSD, NAS performance would've remained above 100 MB/s all the time, and even any 2.5" HDD would've done a better job.

Quote from jerryfudd: “Maybe I'm missing something, but which surprise will the Rock64 bring? ” Whether you can create an ext4 filesystem larger than 16 TiB or not. I already mentioned it: serverfault.com/questions/4620…18tb-raid-6/536758#536758 Have you ever had to replace a drive in a RAID just to realize that the huge amount of stress during the rebuild process not only slows performance down too much, but another disk also gets killed in the process? Ever run an fsck on a damaged filesystem that did not …
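Where the 16 TiB number comes from: without the 64bit feature, ext4 addresses blocks with 32-bit numbers, and the default block size is 4 KiB:

```shell
# 2^32 block addresses x 4 KiB blocks = the classic ext4 size ceiling
blocks=$(( 1 << 32 ))
blocksize=4096
tib=$(( blocks * blocksize / 1024 / 1024 / 1024 / 1024 ))
echo "${tib} TiB"   # → 16 TiB
```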

Quote from jerryfudd: “I'm guessing was down to breaking the 16TB mark? ” Exactly. See the same issue here (2nd post with some explanations): forum.openmediavault.org/index.php/Thread/21748 The RPi OMV image boots by default with a 32-bit kernel and ships with an armhf (32-bit) userland. We provide a 64-bit ARMv8 kernel too, but you're on your own getting the VC4 bootloader to load this one instead of the default (which receives kernel upgrades, whereas the 64-bit kernel is an outdated 4.11). Once …

Quote from henfri: “Why do you want to create a raid? ” Maybe because it's a concept that was invented to withstand the failure of a single physical disk? A RAID5 (while being a really horribly bad idea in 2018 with drives of this size) retains all data when one drive fails. Nothing more, nothing less. If you're after data protection then, as you correctly mentioned, a backup strategy and a technical implementation are needed.
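To make the trade-off explicit: RAID5 spends one disk's worth of capacity on parity and survives exactly one failed drive. The disk count and size below are illustrative:

```shell
# RAID5 usable capacity: (n - 1) disks worth of space, one disk may fail
n=4          # number of disks (illustrative)
size_tb=4    # capacity per disk in TB (illustrative)
usable=$(( (n - 1) * size_tb ))
echo "${usable} TB usable, survives 1 failed disk"   # → 12 TB usable, survives 1 failed disk
```

Lose a second disk during the rebuild and everything is gone, which is exactly why a backup is still needed on top.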

Quote from subzero79: “Also did you install the flash memory plugin? ” If the guy providing this minibian something really followed this guide as he claims, then no. Good luck with a lot of unnecessary support issues next time.

Quote from maliszep: “So I have downloaded image of OMV, which should be ready to attach to berryboot, from this page: berryboot.alexgoldcheidt.com/images/ There is an image of OMV 3.0.96 - date 28.01.2018, so pretty fresh. ” And 100% unsupported. The official OMV image on the RPi is supposed to do lots of stuff at first boot and contains some important optimizations. No idea what the guy providing this 'minibian based OMV berryboot' thing did, but obviously something went wrong (IMO the whole pr…

Quote from mipi: “A backup in the way mentioned here? ” A backup that is known to work. Sorry, I don't own any of those Marvell based NAS boxes and can't say anything about them beyond what we both can find on the net. Maybe this will change in 2018 and I'll get my hands on an Asustor AS4000, for the sole reason that this device is based on the new and pretty capable Armada 7K ARM64 SoC for which Bootlin (formerly Free Electrons) is contracted to do the mainline kernel work. If kernel support is su…

Quote from mipi: “wht if i use omv-release-upgrade ” Then you will be upgraded to OMV 4 and Debian 9, and maybe your installation will be bricked afterwards (no idea; I think I already mentioned that Debian 9 needs a minimum kernel version?). I would not do this without an OS backup.
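Before running omv-release-upgrade, one can at least sanity-check the running kernel against Debian 9's minimum. The stretch release notes put this at 3.2 (glibc 2.24 requires it), but treat the exact number as an assumption and back up the OS first:

```shell
# compare the running kernel against a required minimum using sort -V
required="3.2"
current="$(uname -r | cut -d- -f1)"
oldest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
    echo "kernel ${current} OK for Debian 9"
else
    echo "kernel ${current} too old for Debian 9"
fi
```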

Ok, close to zero 'data integrity' and 'data protection' requirements, just 'one continuous large storage pool of 7 TB size with some redundancy'? I would still either use 'zmirrors as vdevs' or btrfs' own RAID 1 mode to achieve this goal, but have to admit that, dealing with this stuff daily, I fail to understand the complexity aspect from the perspective of someone new to OMV. So I'm stepping back now in the hope that others make suggestions that are more deeply integrated into OMV.

I personally consider RAID 1 (mdraid) always the worst choice possible. I would either create two RAID10 mdraids out of those disks and then put a multi-device btrfs on top (with compress=lzo) to end up with 7 TB of disk space. Or create 2 zmirrors out of each pair of disks and combine them into one pool (should be as fast as the above or even faster). Or simply use btrfs' own RAID1 functionality with all 4 disks to get the same amount of space (btrfs uses a different approach than mdraid so you can combine dis…
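All three mirror-based layouts keep two copies of every block, so usable space is half the raw total. A sketch; device names, mount point and disk sizes are assumptions:

```shell
# On real hardware the btrfs RAID1 variant would be something like:
#   mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
#   mount -o compress=lzo /dev/sdb /srv/pool
# Capacity math: two copies of everything means usable = raw / 2
disks=4
size_gb=3500            # four ~3.5 TB disks, matching the 7 TB goal
usable_gb=$(( disks * size_gb / 2 ))
echo "${usable_gb} GB usable"   # → 7000 GB usable
```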