I ran into the same situation. The problem was that, between HA trying to upgrade the vCenter agent, configure HA, and update the vSAN cluster, all the resource utilization was killing the hostd process. When hostd can't keep the ESXi OS/management services running because it's hung, you'll see that very same behavior. The fix was exactly what you said: moving the ESXi hosts out of the vSAN cluster and restarting them.
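For anyone hitting the same hung-hostd symptom, here's a rough sketch of that recovery from the ESXi Shell/SSH. These are standard esxcli and init-script commands, but verify them against your ESXi build (and evacuate or accept the loss of the host's vSAN components) before running anything:

```shell
# Check whether the host still thinks it's a vSAN cluster member
esxcli vsan cluster get

# Pull the host out of the vSAN cluster
esxcli vsan cluster leave

# If hostd itself is hung, try restarting the management agents first
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# Then reboot the host once it's back under control
reboot
```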

Anyone running this now with their onboard controller and Update 2? I've got multiple Shuttle SH87R6 boxes, each with one Kingston SSDNow V300 240GB SSD and one 1.5TB 7200RPM HDD. VSAN finally configures properly, but the performance is painful: under even light load (<100 IOPS), latency jumps to over 1.4 seconds in VSAN Observer. Running PernixData FVP on the same setup, accelerating my Synology DS412+ iSCSI datastore, I was getting ~5000 IOPS per host without saturating the controller's queue, so the "well, your onboard controller has a very short queue depth" explanation seems like it shouldn't be the culprit.
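If you want to confirm or rule out the queue-depth theory yourself, you can read the actual depths from the ESXi Shell. A quick sketch (standard esxcli/esxtop commands, but double-check the field names on your build):

```shell
# List storage adapters and which driver is bound to each
esxcli storage core adapter list

# Per-device details; look for the "Device Max Queue Depth" field
esxcli storage core device list

# Live view: run esxtop, press 'd' for the disk-adapter screen;
# the AQLEN column shows the adapter queue depth, and watching it
# under load tells you whether the queue is actually saturating
esxtop
```

If AQLEN never fills up while latency spikes, the bottleneck is somewhere other than the controller queue.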