OMV for ROCK64 (and other RK3328 devices soon)

Edit: Please scroll down until you find the first version marked as 'release'. All versions tagged as 'pre-release' are for testers and developers only; sometimes they might not even boot. At the time of this writing the latest stable release is 0.5.15 (with OMV 3).

I would prefer the armhf variant (32-bit userland) since it has a lower memory footprint and it's easier to run Plex Media Server with it. Background info: ayufan and I worked closely together to integrate all the improvements for ARM boards, and the images are fully supported (kernel updates ship as .deb packages, so all you need to do is 'apt upgrade' from time to time to keep all components of the image up to date).

The fix is simple. You need to edit the respective file in /etc/apt/sources.list.d and replace 'deb http' with 'deb [arch=armhf] http', as explained here for example. As for the postfix error, no idea; if you don't need email notifications I would just ignore it...
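A minimal sketch of the change on a scratch file. The real file lives in /etc/apt/sources.list.d/ and its name varies from system to system; the repo URL below is only a placeholder:

```shell
# Demonstrate the edit on a throwaway file (the URL is a placeholder,
# not necessarily the entry on your system):
echo 'deb https://downloads.plex.tv/repo/deb public main' > demo.list
# Prefix the architecture restriction so apt on an armhf userland stops
# complaining about the foreign architecture:
sed -i 's|^deb |deb [arch=armhf] |' demo.list
cat demo.list   # -> deb [arch=armhf] https://downloads.plex.tv/repo/deb public main
```

After editing the real file the same way, run 'apt update' and check that the warning is gone.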

I have transferred data from my main Plex server to OMV/ROCK64: sustained write speed of 30 MB/s.

This is a horribly low transfer speed, since at least 70 MB/s should be possible (when copying with Windows Explorer on Windows 7 or later it should even be 100 MB/s or above). If you're not already sure that something else is the bottleneck (eg. your Plex server), then I would immediately use Helios LanTest and check whether there's something wrong with your installation.
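To rule out the disk itself, a quick sequential-write test run directly on the board (bypassing network and client entirely) helps. A minimal sketch; run it from a directory on the data drive, not the SD card:

```shell
# Write 64 MiB with a final fsync (conv=fdatasync) so the reported speed
# reflects the disk, not the RAM cache. dd prints throughput on stderr.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync 2>dd.log
cat dd.log   # dd reports bytes copied, elapsed time and MB/s here
```

If this reports well above 30 MB/s locally, the disk is fine and the bottleneck is on the network or client side. Remove ddtest.bin afterwards.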

I think it is the hard drive in my Mac or my 8-year-old Apple AirPort that is the limiting factor. The AirPort is dying, but I thought the network/LAN would be OK. I'll do a test from my SSD (boot drive on the Mac) to the ROCK64 and report back.

Yep, need a new router/LAN/switch. Copied a 2 GB file from the Mac HD to the SSD at 120 MB/s (no network involved). SSD to the ROCK64 still maxes out at 30 MB/s. So the next tech purchase will be a new router/switch.

The RK3328 is attached to the bottom enclosure with a thermal pad, and the devices are stackable. If there's demand for permanent full load, large fans can be mounted on one or both sides of a clustered stack:
[IMG:http://kaiser-edv.de/tmp/QETbEc/ZvBcMOC.jpg]

Transformer will be available soon with either 2GB or 4GB DRAM and also accepts the ROCK64 eMMC modules. If no eMMC is used, there's access to an SD card slot on the enclosure side. Just like the ROCK64, the PCB also features bootable SPI NOR flash, so once software support for this is ready the device can boot from onboard SPI flash with OMV on a small partition on the attached disk (no SD card or eMMC needed any more!)

No idea about pricing, but I would assume it will be similar to the ROCK64 and depend on the amount of DRAM and on accessories like eMMC ordered at the same time.

Waiting to see this kind of box for a 3.5" drive, and why not with 2x SATA ports to build a RAID?

If the 2.5" Transformer is a success, then they'll think about releasing a 3.5" version in a few months. In fact the PCB is already prepared for this: there's a 12V 2-pin header on the board and 12V traces are routed to the SATA power connector. You can already use this PCB with a 12V PSU and a 3.5" disk.

Why would you want to build a RAID out of two disks? It's a bad idea, and with devices that feature only one high-speed data interface (USB3 in this case) there are always problems involved: either a USB hub, which is something you REALLY DO NOT WANT between you and your RAID disks, or a SATA port multiplier, as is the case on Hardkernel's CloudShell 2 with a lot of problems involved -- see my signature.

Nice news!
Yeah, a box you can screw a 3.5" drive into would be nice to use.
The GB/$ ratio is better for customers.

About btrfs, I remember reading about problems with the usual tools reporting the 'real' available free/used space on the HDD... I haven't investigated further...
So RAID1 can be a nice feature for high availability... and why not RAID1 with btrfs filesystems?
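On the free-space point: the numbers generic tools like df report on multi-device btrfs can indeed mislead, because the raid1 profile stores every extent twice ('btrfs filesystem usage' on the actual filesystem gives the honest picture). Toy arithmetic with made-up disk sizes:

```shell
# Two 4 TB disks in the btrfs raid1 profile: every extent exists twice,
# so usable capacity is roughly half the raw total that naive tools show.
DISK_TB=4
NUM_DISKS=2
RAW_TB=$((DISK_TB * NUM_DISKS))
USABLE_TB=$((RAW_TB / 2))   # raid1: two copies of everything
echo "raw=${RAW_TB}TB usable=${USABLE_TB}TB"   # -> raw=8TB usable=4TB
```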

Are QNAP/Synology NAS boxes (for example) toy hardware too? I usually see this kind of 'professional' NAS (as many vendors/techies call this kind of hardware) on small LANs at workplaces...

No, why should they be? Because the CPU cores are from the same company (ARM)?

Those NAS boxes all use NAS SoCs (mostly from Marvell and Annapurna Labs, then a few other ARM SoC vendors, and the high-end models also AMD and Intel). The small thingies are all ARM based (some older ones also MIPS and PowerPC based), but these SoCs are made for the job: they feature either many native SATA ports or PCIe (and then it's simply a matter of adding a SATA controller and you have a reliable way to attach a few disks). These SoCs are also made for high network and IO bandwidth.

USB always adds an unnecessary layer between host and disks that causes performance degradation and even reliability concerns. Adding USB hubs or USB attached SATA port multipliers on top of this is already dangerous, especially if users want to play RAID (since they obviously do not understand the concept behind RAID at all -- you can't add a bunch of unreliable crap to get increased reliability).

If you want ARM boards that could be used for RAID (though it's still stupid in most of the cases where OMV is used for this), then look today at Marvell Armada: it starts with old Marvell Kirkwood based NAS boxes that can be re-used with OMV, then we have EspressoBin, Helios4, Clearfog Base/Pro or Turris Omnia, and at the upper end even stuff like the MacchiatoBin, which should easily outperform 99.9% of Intel based OMV installations out there.

The situation will change next year with more and more consumer grade ARM SoCs getting PCIe support (eg. Allwinner's H6 or Rockchip's RK3399), and then it's always possible to combine these chips with a 2, 4 or even 8 port SATA controller to build a RAID box. That doesn't change the fact that if you don't pay attention to every single point of failure, building a RAID is outright stupid, since the whole idea of increased availability will not work at all (eg. using an insufficient PSU is a great recipe to trash your whole array the first time some real load is generated, such as a RAID rebuild after you had to replace a failed drive). This forum is full of such stories, most of them shared by people who confused availability with data protection, made no backups and lost all their data for the sole reason they thought using RAID would allow them to skip backups WHICH IS NEVER THE CASE!

Does it make sense to install Plex Media Server? The RK3328 is not powerful enough for transcoding, and transcoding cannot be disabled.

I have just received my ROCK64 4GB with 16GB eMMC.

The jessie image jessie-openmediavault-rock64-0.5.15-136-armhf.img.xz boots successfully from either the SD card or the eMMC, and all seems fine.

However, when I try booting the stretch image stretch-openmediavault-rock64-0.6.26-195-armhf.img.xz from either the SD card or the eMMC, I get what I think is a kernel panic.

I realise that OMV4 is still in beta, but it works so smoothly on x86 and I was hoping to keep my systems on the same version.

OMV is just an add-on installed on top of Debian, and on those ARM boards kernel+bootloader are not provided by the Debian project but by others (for the ROCK64 by a small 'Pine64/Rock64 community', and for the other ARM boards by the Armbian community).

The kernel panic you ran into with the 0.6.26 image is well known, which is the reason why 0.6.27 was released immediately afterwards, and in the meantime also 0.6.28: github.com/ayufan-rock64/linux-build/releases/ (check the changelog there)

I tested with 0.6.27 / OMV 4 yesterday. No problems except reboot being broken. But you should keep in mind that the 0.6 branch will receive a few more 'unstable' updates (bootloader/kernel, not so much OMV!), so if you really want a stable OMV experience you need to go with 0.5.15 / OMV 3 now and then do an 'omv-release-upgrade' in a few weeks.
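To check which kernel build an image is currently running before deciding whether to hold off on updates, something like this works (the linux-image package naming is an assumption and may differ between builds):

```shell
# Show the running kernel release string:
uname -r
# List installed kernel packages, if any match the usual Debian naming
# (this pattern is an assumption, hence the fallback so the line never fails):
dpkg -l | grep -i 'linux-image' || true
```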

Or you use 0.6.28 / OMV 4 and don't do any updates until you see a 0.6 release version appearing at the above URL.