Longtime user of Synology but new to XPenology. I currently have a DS412+ that has been running without any issues for about 7 years. Since it is reaching EOL with no updates after DSM 7, I am planning to build out XPenology as a Hyper Backup destination until I buy another Synology unit.

I am planning to install XPenology on ESXi with RDM to 4 x 8TB WD Red drives, in an SHR configuration.

My question is: how easy is it to move the drives from ESXi to a Synology at a later date?

Is it similar to a NAS-to-NAS migration (remove and insert the drives), or would I have to live-copy the data?


Yes, a migration upgrade works as long as there is no ESXi on the data disks themselves. It's the same platform (DSM).

It might be simpler just to pass through the SATA controller and ensure your drives are 100% seen by DSM. If you must RDM, make sure it's physical RDM so that there is no encapsulation of the partition at all. The only reason I ever found to do this was to support NVMe drives in volumes. See my sig for details on that if you aren't familiar.


I am new to ESXi too and am learning. The first VM I will have will be the XPenology one. I am planning to build a homelab server (more of a workstation) with parts I already have: an MSI Z170 SLI Plus motherboard with 6 SATA ports, a Core i7-7700K, and 32GB of memory. In researching how to run XPenology, many articles suggested I use RDM, hence my question.

When you say I need to pass through the SATA controller, can I use the onboard one, or will I have to add another PCIe-based controller? Is there any guide or documentation I can refer to for this?


ESXi needs its own storage. It can boot off of a USB key, but it will also need a place for your VM definitions to live, and any virtual disks. This is called "scratch" storage.

XPenology's boot loader under ESXi is a vdisk hosted on scratch. The disks that DSM manages should usually not be - one exception is a test XPenology VM. In any case, if you use scratch to provide virtual disks for DSM to manage, the result won't be portable to a baremetal XPenology or Synology installation.

As you have researched, one alternative is to create RDM definitions (essentially, virtual pointers) for physical disks attached to ESXi. RDM disks can then be dedicated to the XPenology VM and won't be accessible to other VMs. The reasons to do this are 1) to provide an emulated interface to a disk type not normally addressable by DSM, such as NVMe, or 2) to allow certain drives to be dedicated to DSM (and therefore portable) while others serve scratch for shared VM access, all on the same controller.
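For reference, a physical-mode RDM pointer is created from the ESXi host shell with vmkfstools. This is only a sketch: the device name and datastore path below are made-up examples, so substitute your own.

```shell
# List raw disks attached to the host to find the target device name
ls /vmfs/devices/disks/

# Create a physical-mode RDM mapping file (-z) on scratch storage.
# Physical mode means no encapsulation: DSM sees the disk's real
# partition table, so the drive stays portable to bare metal.
# (Device name and datastore path below are hypothetical examples.)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD80EFAX_EXAMPLE \
  /vmfs/volumes/datastore1/xpenology/wd-8tb-rdm.vmdk
```

The resulting .vmdk pointer is then attached to the XPenology VM as an existing hard disk. Note that -r creates a virtual-mode RDM instead, which does encapsulate the disk and is not what you want here.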

If you have access to other storage for scratch (for example, an M.2 NVMe SSD), you can pass through your SATA controller, i.e. dedicate it and all of its attached drives to the XPenology VM. The controller and drives will then be seen natively by the VM (not virtualized at all) and will be portable. An alternative to the M.2 drive is another PCIe SATA controller, as you suggest.
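As a rough sketch of the passthrough route, you first identify the chipset SATA controller on the ESXi host, then toggle passthrough for it. The PCI address you see will be specific to your board; nothing below is exact for your hardware.

```shell
# On the ESXi host shell, find the chipset SATA controller and note
# its PCI address (output format varies by ESXi version).
lspci | grep -i sata

# Passthrough is then enabled per device in the ESXi Host Client:
#   Host > Manage > Hardware > PCI Devices > select the SATA
#   controller > Toggle passthrough
# A host reboot is usually required before the controller can be
# added to the XPenology VM as a PCI device.
```

Once the controller is passed through, every drive on it belongs to the XPenology VM, so ESXi itself needs its scratch storage elsewhere (hence the M.2 NVMe suggestion above).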

On my own "main" XPenology system, I do all of the above: a USB boot drive for ESXi, an NVMe M.2 drive for scratch, two U.2-connected NVMe drives translated to SCSI via RDM for the XPenology VM, and the chipset SATA controller passed through with 8 drives attached. Other VMs run alongside XPenology, using virtual disks hosted on scratch.