About vasiliy_gr

Yes...
I spent a lot of hours yesterday trying to compile alx.ko from backports by myself. The best result was: the module insmodded and then immediately crashed. It may be some incompatibility with the real kernel used or its config (I compiled the module against the latest official syno kernel with its official config). As for the latest gnoboot - alx.ko is present there and works fine.
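For the record, my build attempt looked approximately like this (just a sketch - the backports tree version and the kernel source path /usr/src/linux-syno are assumptions, adjust them to your setup):

cd backports-3.14
# point backports at the unpacked syno kernel source and enable only the alx driver
make KLIB=/usr/src/linux-syno KLIB_BUILD=/usr/src/linux-syno menuconfig
make KLIB=/usr/src/linux-syno KLIB_BUILD=/usr/src/linux-syno
# then copy alx.ko (plus the compat module it depends on) to the box and insmod it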
So we have to wait for the next nanoboot releases. By the way, does anyone know if this thread is the place for feedback and driver requests for nanoboot's author?
If so, I'd like to ask the author for an alx.ko backport in nanoboot. As for me, I need this driver for the second onboard NIC on the Gigabyte GA-H87N and GA-H87N-WIFI. And it is the only problem preventing me from migrating to nanoboot on those two configurations (the other two configurations do not have any problems - many thanks for the LSI 9211 support).

I think you should eliminate the overlap in your *portcfg values. Try setting both esataportcfg and usbportcfg to zero. Also expand your internalportcfg to an obviously high value (0xfffff, for example). If you then find all your HDDs - OK. If not - try a higher maxdisks setting. After finding all HDDs, reduce internalportcfg to the actual bitmap and raise the other *portcfg values to their actual bitmaps.
As for your current settings - they are incorrect, because they have 'one' bits set simultaneously in internalportcfg and in the two other *portcfg values. The three bitmaps must not overlap.
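As an illustration (these exact values are only an assumption for a box with eight internal ports, two eSATA ports and two USB ports - your bit layout will differ), a non-overlapping /etc/synoinfo.conf would look like:

maxdisks="12"
internalportcfg="0x00ff"    # bits 0-7: eight internal SATA ports
esataportcfg="0x0300"       # bits 8-9: two eSATA ports
usbportcfg="0x0c00"         # bits 10-11: two USB ports

Every bit is set in exactly one of the three bitmaps, so no disk is claimed twice.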

I have both a PERC H310 and an N54L baremetal under gnoboot 10.5. But they are two different baremetal XPEnologies.
Seriously speaking, I do use an H310 reflashed to 9211-8i/IT. The reflash procedure was rather complex. I had to flash it consecutively to the latest Dell HBA firmware, then to 9211/IR, and only third to 9211/IT. I used three different tool-chains (Dell's, the official LSI one, and lsiutil, correspondingly, for those three stages). Also I used two different mobos (one with a DOS environment and the other with an EFI shell). And I also had to cover two pins on the PCI-e connector.
So I do not know if you can do it inside the N54L. But you can try. As for me - I decided not to buy H310 adapters any more, but only original LSI 9211 (or 9240), as I did previously for two other hardware setups. The small price difference does not excuse the complexity of making it functional.

I migrated my third XPEnology from 4.3-3810 (trantor's build r1.0) to 5.0-4458 just an hour ago. All my data and settings stayed unharmed - except for the remote permissions on all the shares. So I had to restore permissions manually (I mean manually, in the DSM GUI) for both NFS and SMB access. Maybe it is also your case.

I received my H310 yesterday and tried to reflash it into a 9211-8i/IT. It was a little bit complex... I had to cover its PCI-e pins B5/B6 with tape to make it work on a non-UEFI mobo. Then I flashed it to the official Dell HBA firmware with Dell's tools (having previously killed its own firmware with megarec). Then I flashed it to the 9211/IR firmware with LSI's tools. And at last I took it to a UEFI mobo and flashed it to 9211/IT with the EFI version of lsiutil.
As a result I have a 9211/IT made from an H310. No performance or compatibility problems. But I still need PCI-e pins B5/B6 to be covered if I want it to work with old non-UEFI mobos.
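If somebody wants to repeat it, the command sequence from the common H310 crossflash guides is approximately this (a sketch, not a recipe - the firmware file names come from the Dell and LSI packages, controller index 0 and the placeholder SAS address are assumptions; I personally did the last stage with lsiutil instead of sas2flash):

megarec -writesbr 0 sbrempty.bin    # DOS: write a blank SBR
megarec -cleanflash 0               # DOS: wipe the PERC firmware, then reboot
sas2flsh -o -f 6GBPSAS.FW           # DOS: Dell 6Gbps HBA firmware
sas2flsh -o -f 2118ir.bin           # DOS: LSI 9211-8i IR firmware
sas2flash.efi -o -f 2118it.bin      # EFI shell: final 9211-8i IT firmware
sas2flash.efi -o -sasadd 500605bxxxxxxxxx   # restore the SAS address from the card's sticker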
Sorry for offtopic...

It might be related to the cpu_idle driver that I backported from kernel.org. Try it and let me know if it helps.
During the previous test session I got that irq16 error in dmesg twice in a row, on two reboots, within less than 10 minutes of DSM start. Today I tried to reproduce the situation (no changes in hardware or software) - with no luck during 2 hours (with all the activities I did previously). A lot of segfaults with dsmnotify.cgi, but no crashes on the IRQ... So as I can't reproduce the bug, I also have no way to sensibly test the kernel option you mentioned. Sorry...

I do not use iSCSI on those two configurations. Anyway, the delay problem seems to be not a real delay but some notification problem between DSM and its web GUI.
As for the HBA in the second configuration - no problems at all. None of the HDDs is missing, all are numbered correctly from 1 to 8, and the system array is undamaged after reboot.
A little problem occurred: I lost my NFS access configuration on the exported folders, so I had to repair it manually. But I believe that is a problem of the upgrade from DSM 4.3 to 5.0.
No, I haven't. I do not even know this option. I'll try it later, when I learn what this option means. Anyway, I did not have such a problem on 4.3 with trantor's HBA build.
Sorry, I did not understand your idea about kexec-ing.
I am using Chrome. And in the situation described I saw the progress indicator moving around on the DSM window for 4.5 minutes. But if I pressed the reload button about 1 minute after the start of the reboot sequence - I instantly got into the running DSM GUI. Really, it sounds strange, but it was an already running GUI (without any need to log in). Maybe it is some trick from Google in their Chrome...

Very, very interesting... I have only one question after reading, but before experiments on real hardware. You wrote:
I separated my home environment into two zones: one with the NASes, media players and so on, and the second one with workstations, internet routers and WiFi. There are 10 meters between them. Reconfiguration is very complex and even not possible at all (for example - I have an optic line from one of the ISPs used, and its router is absolutely immovable).
So now the zones are connected via a pair of Netgear GS108T (managed switches). The switches are interconnected with two cables of those 10 meters length. Both NASes (two of the three) and one workstation have 2-3 NICs onboard. All of them are now configured for LACP, both on the switches and on the Linux side. I want to try your method to increase network throughput before trying 10G (and really, it is a home environment - so I do not need fault tolerance, but I do need a doubling/tripling of the NFS transfer rate to speed up the FF/rewind functions of mplayer).
So... Did I understand your idea correctly - I need two/three/four/etc. pairs of switches between my two zones/rooms to obtain a doubling/etc. of throughput (with the corresponding number of cables between each pair of switches)? And if I am right - I believe that in this case I do not need managed switches, but any cheap 1G switches will do, won't they? Any other issues to think about (for example - is there any opportunity to use the more advanced features of the GS108T instead of extensively doubling the switches)?
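For reference, the Linux side of such a setup would be a round-robin bond instead of my current LACP one - a sketch with iproute2 commands (eth0/eth1, the address and balance-rr itself are assumptions here; balance-rr sprays packets over all slaves, which is what lets a single NFS stream exceed 1G when each slave runs through its own switch pair):

modprobe bonding                                  # load the bonding driver
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth0 down && ip link set eth0 master bond0   # slaves must be down to enslave
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0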

I had a similar problem expanding a volume of seven 4TB HDDs in SHR with the 8th one. After much googling I decided that it is a problem of the current DSM version (as it was also reported on the original Synology forums). Specifically, of the version of the e2fsprogs package: its resize2fs does not support FS resizing above 16TB. DSM can create a volume above 16TB, but resize2fs cannot grow one past this value. Here is its version info:
XPEnology> resize2fs
resize2fs 1.42.6 (21-Sep-2012)
Usage: resize2fs [-d debug_flags] [-f] [-F] [-M] [-P] [-p] device [new_size]
This is for DSM 4.3, latest releases. I did not check on DSM 5.0 (as all this happened 3 months ago).
So... What shall be done?.. I nearly cracked my head thinking how to install a newer version of e2fsprogs on XPEnology (as I am a Linux user, but not a system-level Linux programmer). Also I was not absolutely sure that the problem was exactly this one. But at last I found a rather simple solution.
I looked on my desktop Linux (SUSE 13.1):
vasil@vgserver:~> resize2fs
resize2fs 1.42.9 (28-Dec-2013)
Usage: resize2fs [-d debug_flags] [-f] [-F] [-M] [-P] [-p] device [new_size]
This version should have no problems with the 16TB boundary. (3 months ago the real version was a little lower, but I give the one I have right now, with updates.)
So... I connected an unused SSD to my XPEnology box and installed a minimal SUSE system from DVD. Then from the SUSE CLI I found my SHR volume to be sure that everything was OK (I also looked at it through YaST). And the last action looked something like this:
resize2fs -fpF /dev/vg1000/lv
I do not remember now if I changed something in this line (it was 3 months ago), but in my offline notes for this issue I have exactly this string.
It worked for several minutes. After it completed I checked the new volume size - it was really expanded. So I disconnected the SUSE SSD, reverted the boot order to the USB flash, restarted DSM and - mission accomplished. It took less than an hour for everything - connection, installation, operation, disconnection, celebration...
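For completeness, the whole sequence on the rescue system was approximately this (a sketch reconstructed from my notes; /dev/vg1000/lv is the usual name DSM gives the SHR volume, and the e2fsck step is there because resize2fs may demand a clean check first):

mdadm --assemble --scan          # assemble DSM's md RAID arrays
vgchange -ay                     # activate the LVM volume group (vg1000)
e2fsck -f /dev/vg1000/lv         # forced check before an offline resize
resize2fs -fpF /dev/vg1000/lv    # grow ext4 to fill the logical volume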
Certainly you may use not SUSE but any Linux distro with a suitable version of e2fsprogs. Or you may try to compile it yourself (if you know how to do that). Also you should realize that my solution is for the case when your SHR is already expanded but the ext4 on top of it is not. If you have a different problem - this solution may not be suitable!!!