No one can make a blanket statement about whether 3x HBAs with 8 drives each would outperform 1x HBA with 24 drives. Any comparison would have to be made between specific products.

That said, there is a potential that multiple HBAs might yield you better performance by:

avoiding saturation of the card itself, since each card hosts fewer drives

spreading the load across multiple PCIe lanes

Spreading your drives across too many controllers can hurt performance as well, due to controller timing: in parallel operations, the slowest controller to respond holds up all the others. Controller response time under parallel disk requests is one of those useful metrics that reviews unfortunately don't publish.

That is exactly what I needed: to know that it is possible to saturate the card with more drives, that the load can be spread across multiple PCIe lanes, but that controller response time might be the limiting factor.

Is there a way to measure response time on the cards, so we can test them, post results here, and build up a knowledge base for the cards? Would we need to test in a multi-card environment?
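One way to get comparable per-controller latency numbers would be an fio job run against one drive behind each controller, with queue depth 1 so completion latency (the "clat" percentiles fio reports) reflects controller plus drive response time rather than queuing. A sketch of such a job file; the device paths /dev/sdX and /dev/sdY are placeholders you would replace with a drive on each HBA:

```ini
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=1
runtime=60
time_based
group_reporting

; one drive behind the first controller
[hba1-drive]
filename=/dev/sdX

; one drive behind the second controller
[hba2-drive]
filename=/dev/sdY
```

Running both jobs simultaneously would also show whether one controller's latency degrades when the other is under load, which gets at the multi-card question.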

My SAS2008-based card (plus onboard ports) gets about 800-900MB/sec in my current setup, which used to get about 900-1000MB/sec with a Highpoint 2340 instead of the SAS2008. I suspect, however, that the SAS2008 will scale better when more drives are added.

I'm getting right around 1GB/sec calculating parity in tRAID (IBM M1015 SAS2008 flashed to IT mode) and about 450MB/sec during Verify Sync, but poor performance doing everything else, including writing to the array locally in performance mode (around 30MB/sec) and copying/writing across the network (around 20MB/sec).

I'm going to add a 2nd SFF-8087 cable between each controller and its connected LSI SAS2X36 expander backplane (each has 24 drives attached to it). Keeping fingers crossed that 8x 6Gb/s channels for each array of 24 disks will perform better than 4x 6Gb/s does.