Right now we have 10 Gbit switches with 40 Gbit uplinks, but new network equipment will be bought if necessary. InfiniBand (IB) is already available.

About capacity: we have a 350 TB Isilon for archiving, but storage demand is growing every day. I think we will start with 6 or 8 nodes with a lot of SSDs and maybe HDDs, but that hasn't been fully discussed yet.

The thing is that we currently use NetApp with NFSv3 and pNFS, which works OK, but NetApp is very expensive once support is included.
iSCSI would also be OK for VMs, but we also need a lot of space for shared storage between the VMs, so NFS is necessary in my eyes; I don't have much experience with clustered filesystems on iSCSI for shared storage.
For example: some NFS volumes are mounted on 50 or more VMs for reading data, and some VMs write to them as well, but to different directories!
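For reference, a shared-read/split-write setup like that is usually just an `/etc/exports` entry on the server plus the same mount on every VM; the server name, subnet and paths below are placeholders, not our actual values:

```shell
# /etc/exports on the NFS server (path and subnet are placeholders):
#   /export/shared  10.0.0.0/16(rw,sync,no_subtree_check)
#
# On each VM, mount the common export (NFSv3, as in our current setup):
mount -t nfs -o vers=3,rw,hard nfs-server:/export/shared /mnt/shared

# Writers avoid stepping on each other by staying in their own subdirectory:
mkdir -p /mnt/shared/$(hostname)
```

NFS gives no cross-client locking beyond what the applications do themselves, so keeping writers in separate directories, as described above, is what makes this safe.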

What we are looking for is storage that can scale in performance and capacity. That's how I found Compuverde, but I could not find much about this storage software.

Yes, as I already wrote, we will have a look at Ceph, BUT we also need real shared storage like NFS, so that several VMs can access the same files.
I don't know if Ceph can do this, maybe with CephFS, but we need a supported product like Red Hat Ceph Storage, and I don't know if they already support CephFS for production use.

That's why I asked for experience with Compuverde, but I also found Hedvig. It has the same problem as Compuverde, though: you don't find many user experiences or real performance graphs with SSDs or NVMe.
Both would support NFS, iSCSI and object storage.

We use Red Hat Virtualization, and for Ceph we would also need OpenStack Cinder.
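For context, wiring Cinder to a Ceph cluster comes down to an RBD backend section in `cinder.conf`. The option names below are the standard Cinder RBD driver options; the pool, user and secret values are placeholders, not a recommendation:

```ini
# cinder.conf sketch -- real Cinder/RBD option names, placeholder values.
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt-secret-uuid>
```

The `rbd_user` keyring and the libvirt secret have to be distributed to every hypervisor so the compute nodes can attach the RBD volumes.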

Ceph provides a POSIX-compatible file system - CephFS - that can be shared among several hosts just like NFS. It is not as mature as NFS, and setting it up is an adventure, but it does appear to satisfy the requirements you've listed.
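For what it's worth, a minimal CephFS setup on an existing cluster looks roughly like this. Pool names, PG counts and the monitor address are placeholders; this is a sketch, not a tuned configuration:

```shell
# On an admin node: create data and metadata pools, then the file system.
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data

# On each client VM: mount via the kernel client (admin keyring assumed).
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```

An MDS daemon must be running before the mount will succeed, and that MDS is the piece whose production-readiness and support status you'd want to confirm with Red Hat.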

I'm from Compuverde. I saw your requirements further up, and it looks like Compuverde will fit your needs well. When it comes to performance, 200k IOPS in 6-8 nodes with NVMe cache and 40 GbE shouldn't be a problem at all, but we would need to know more about your hardware and your workload to give a better estimate.
I could also help you get in touch with other Compuverde users. I also suggest that you try out our free plan if you have hardware available.

Great. For company storage we definitely need a supported software environment: which Ceph storage OS would you recommend?
I have heard of Red Hat Ceph Storage, but Canonical should also have a supported Ceph cluster, right? Which would be better?

Does anyone have experience with Hedvig?
I don't understand where you install the storage proxy. Do I need to set it up as a KVM VM, or should it run on the hosts?


Hi Kattlampa,
thank you for your response!
We don't have the hardware yet; we would build the storage servers as we need them. So we are open to recommendations!
About the 200k IOPS: do you mean we can achieve this with a hybrid setup and don't need all-flash?
Would it be possible to add more nodes with only flash storage later, if more performance is required?

Does Compuverde also support InfiniBand for the storage backend (replication)? I ask because we already have a lot of IB switches and cards with 56 Gbit.

Regarding your YouTube videos: I have already watched most of them. My problem is that I can't find any reviews of your product.

Regarding the hardware, we are very flexible and hardware-agnostic.
Sure, I'm happy to help you with some hardware suggestions, but it would make more sense if you described your workload/use case a bit more.
200k IOPS in a hybrid setup of 6-8 nodes is not a problem at all. I know we have customers with all-flash vNAS that get over 200k IOPS per node.

Yes, we have a lot of customers running on InfiniBand.
We just became a Technical Alliance Partner with Mellanox and have had their hardware tested and validated.
Worth mentioning is that we do not support RDMA yet, but it's on the roadmap.

Regarding customer reviews, I'd like to mention that we went GA at the beginning of last year, and we will publish case studies and similar material shortly. All the quotes that you find on our start page, and some more under Use Cases, are real end-user quotes.
