I'm looking to build a Hyper-V failover cluster but have found myself needing a storage solution. One option is a dedicated SAN with each node cabled to it, but the option I'd prefer is to leverage the storage I already have in each node: four 2.5" drives apiece (excluding boot drives).

I've been doing a lot of research on available solutions (Storage Spaces Direct, StarWind Virtual SAN, etc.), but I'm having a hard time wrapping my head around the capabilities as well as justifying the price.

Is there a fairly inexpensive way to accomplish what I'm trying to do?

Right now I have 2 Supermicro blade chassis with 8 nodes each and 6 bays per node. Currently all bays are populated with solid-state drives. They're not all the same size, but that can easily be rectified where necessary. I don't think I need a 16-way mirror of my data, and I'm not too fond of the StarWind round-robin approach when using their vSAN with more than 2 nodes.

Hyper-V is free, StarWind vSAN is free... I'm not sure what next level of "free" you're looking for :)

With StarWind you either build a grid (I can't say much, but some other vendors either implement this approach already or will do so very soon), or you keep storage on a few nodes (all nodes run VMs and only a few provide storage to the cluster, a non-symmetric model). An upcoming version of StarWind will incorporate Ceph as a back-end erasure-coded pool, so you'll get much better resiliency and redundancy compared to the diagonal double parity you have in Storage Spaces (Direct).
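To see why an erasure-coded pool can beat double parity on resiliency, here's a rough sketch of the standard k/m math. The k and m values below are illustrative only, not what StarWind's Ceph integration will actually ship with:

```python
# Rough fault-tolerance math for an erasure-coded pool.
# A stripe is split into k data chunks plus m coding chunks;
# the pool survives the loss of any m chunks (nodes or drives).

def ec_efficiency(k: int, m: int) -> float:
    """Fraction of raw capacity available for user data."""
    return k / (k + m)

def ec_failures_tolerated(k: int, m: int) -> int:
    """Simultaneous chunk losses the stripe can survive."""
    return m

# Dual parity (RAID-6 style) across 8 nodes behaves like k=6, m=2:
print(ec_efficiency(6, 2), ec_failures_tolerated(6, 2))   # 0.75, survives 2
# A wider, hypothetical Ceph-style profile, e.g. k=8, m=3:
print(ec_efficiency(8, 3), ec_failures_tolerated(8, 3))   # ~0.727, survives 3
```

The point: with erasure coding you can dial m up for more simultaneous failures tolerated while keeping capacity efficiency far above what mirroring gives you.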

What Kooler said is accurate. We have a fully functional free version, and the commercial license doesn't carry a huge price tag either. I'd be happy to set up a demo for you and discuss your implementation (when you say you can't justify the price, does that mean you talked to someone from our team and didn't like the price?).

In the case of your software (StarWind Virtual SAN), price is definitely not the issue. My main point of concern is only being able to make 1:1 mirrors of data, versus some form of striping, such as what is offered by S2D. Of course, I understand I'm comparing the feature set of a piece of software to a feature of an operating system and those each come with different associated costs.
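To put numbers on the mirror-versus-striping concern: using the node and drive counts from this thread (16 nodes, 4 data drives each) and a placeholder 1 TB drive size, since the actual drives are mixed sizes, the usable capacity works out roughly like this:

```python
# Back-of-the-envelope usable capacity: 1:1 mirroring vs. parity/striping.
# Node/drive counts come from the thread; the 1 TB drive size is a
# placeholder. The parity figure uses simple RAID-6-style (n-2)/n math;
# real-world efficiency (e.g. in S2D) depends on node count and layout.

NODES, DRIVES_PER_NODE, DRIVE_TB = 16, 4, 1.0
raw_tb = NODES * DRIVES_PER_NODE * DRIVE_TB         # 64.0 TB raw

two_way_mirror_tb = raw_tb / 2                      # every block stored twice
three_way_mirror_tb = raw_tb / 3                    # every block stored thrice
dual_parity_tb = raw_tb * (NODES - 2) / NODES       # idealized dual parity

print(two_way_mirror_tb, three_way_mirror_tb, dual_parity_tb)
# 32.0 TB vs ~21.3 TB vs 56.0 TB usable
```

So the gap being weighed here is real: a 1:1 mirror tops out at half the raw capacity, while a parity/striped layout across many nodes keeps most of it.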

Fundamentally, my problem is most likely a disparity between my actual needs for such a system and what I want it to be based on the equipment available.
I would like to create a cluster of a relatively large number of nodes (8+), but I probably don't really need that sort of configuration for my environment.

I'll probably start out by making a 2-node cluster with Virtual SAN, moving some more critical services there, and seeing how much I can pack onto it before looking at expanding out to more nodes.

Or maybe by the time that rolls around, the Ceph integration will be ready and I can just upgrade to that, since it sounds right on point with what I want.

You can easily do that. You'll re-use the existing StarWind cluster as gateway nodes (physical or virtual, it doesn't matter).