It's because the enumeration of your disks isn't contiguous.
Notice that /dev/sde and /dev/sdf are missing from your list.
When you set internalportcfg to 0xfff, you're assigning /dev/sda - /dev/sdl as disks 1-12.
Because of the gap in your enumeration (missing /dev/sde and /dev/sdf), the remaining 8 disks get pushed out to /dev/sdg - /dev/sdn.
Since /dev/sdm and /dev/sdn fall outside of /dev/sda - /dev/sdl, those two disks won't show up.
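To see why, decode the mask bit by bit (each bit = one drive slot, lowest bit first; this is just a quick sketch of how DSM reads it):
internalportcfg=0xfff -> binary 1111 1111 1111 -> bits 0-11 set -> 12 slots (/dev/sda - /dev/sdl)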
The easiest way to solve your problem is to simply increase the number of enumerated slots (these settings live in /etc.defaults/synoinfo.conf):
maxdisks=16
internalportcfg=0xffff
usbportcfg=0x1f0000
esataportcfg=0x0
With this config, you're allocating disks 1-16 (/dev/sda - /dev/sdp) for internal use.
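Decoding the new masks the same way (again just a sketch; the USB range assumes slots continue right after the internal ones):
internalportcfg=0xffff -> bits 0-15 set -> 16 internal slots (/dev/sda - /dev/sdp)
usbportcfg=0x1f0000 -> bits 16-20 set -> 5 USB slots (/dev/sdq - /dev/sdu)
esataportcfg=0x0 -> no eSATA slots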

The proper way to move synoboot to a much higher enumeration is to change DiskIdxMap in grub.cfg.
On Jun's 1.03b, the default is DiskIdxMap=0C
0C hex = 12, so synoboot becomes disk 13.
Change it to something much higher:
DiskIdxMap=1F
1F hex = 31, so synoboot becomes disk 32.
For more info: https://github.com/evolver56k/xpenology/blob/master/synoconfigs/Kconfig.devices#L245
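For reference, in my copy of the 1.03b grub.cfg the line looks roughly like this (if your line carries other sata_args values such as SataPortMap, leave those untouched):
set sata_args='DiskIdxMap=0C'
(default: synoboot lands at disk 13)
change it to:
set sata_args='DiskIdxMap=1F'
(synoboot pushed out to disk 32)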

Totally agree! It all depends on the SATA controller chips used. There are a few Marvell chips that don't play well with the latest XPE builds. Very curious which chips these come with.
Official Product page
http://www.hcipctech.com/Home/ProductCo ... &english=2
I believe the 19NVR3 uses the quad-core J1900; the OP linked to the 18NVR3, which is the dual-core J1800 model.
At over $150 + shipping for the J1900 model, I'm not sure this really is an ideal solution. Neither the J1800 nor the J1900 is a particularly powerful processor, and I think you can get more for your money elsewhere. By the time you max out the 13 SATA ports on your server, you'll probably be wanting a faster processor, ECC memory, and possibly 10Gb networking.
However, if all you want is the maximum number of SATA ports with low power consumption, this does seem like an option. Although I would say XPE may not be the ideal OS if you're going after low power. Something like UnRAID would typically use less power, since it doesn't spin up all the drives.

neuro1,
There are two ways you can expose an Intel X520-DA2 to your XPE VM in ESXi:
1) IOMMU (VT-d). This is basically PCIe passthrough, if your CPU and motherboard support it. It lets you assign the X520 for exclusive use by the XPE VM. XPE has built-in drivers for the X520, so it will see it as a physical card. The downside to this approach is that only your XPE VM gets 10Gb access outside the host.
2) Assign your X520 to a vSwitch. Then set one of your XPE VM's virtual network adapters to the VMXNET3 adapter type and attach it to that 10Gb vSwitch. This is the method I employ, so multiple VMs can all talk out over the 10Gb adapter.
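For option 2, the end result in the VM's .vmx file looks something like this (the port group name "10Gb-Net" is just a placeholder for whatever you named yours, and ethernet1 could be any unused adapter slot):
ethernet1.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "10Gb-Net"
You'd normally set this through the vSphere UI (Edit Settings -> Network Adapter -> Adapter Type) rather than hand-editing the .vmx.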

Basically, if you want to load-balance across multiple clients, multiple 1GbE links in a link-aggregation/bond is a perfectly acceptable solution.
However, if your goal is to achieve higher than 1Gb to a single client, I'm afraid the only practical and software-agnostic solution is a faster interface such as 10GbE. Link aggregation hashes each flow onto a single physical link, so one client's transfer still tops out at 1Gb.
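For the multi-client case, here's a minimal sketch of a Linux 802.3ad (LACP) bond using iproute2, assuming eth0 and eth1 are your two 1GbE ports (DSM builds its own bond from the GUI, so this is just to illustrate the mechanics):
# create the LACP bond and enslave both NICs (links must be down first)
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
# the switch ports must be configured as a matching LACP group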

I have the same card. Apparently XPEnoboot broke support for some Marvell cards; it used to work with Nanoboot 5.1. So basically, I can't use those cards either until it's fixed in a future version of the bootloader.

If you're most concerned with how your board looks, I'm not sure any argument for ECC or other server-board features will sway you. I personally feel uptime and data integrity are more important than anything else in a file server. File servers should be functional and reliable; looks should be a secondary concern in my book.
People who don't recommend ECC are the ones who just haven't lost data YET. Once you've lost data or had data corrupted by bad RAM, you'll swear by ECC no matter the cost.
If you don't have or don't want to spend the money on a dedicated setup, then by all means use whatever you have. However, if you're building from scratch, definitely get a server board + ECC. Things like IPMI, ECC memory, IOMMU, and Intel NICs are features you'll only appreciate once you've had them.
Bottom line: if your board and CPU support ECC, get it. If they don't, make sure your next full setup does.