We are using a Dell R720 with an H710 RAID controller.
A weird problem with disk I/O was discovered during tests.
We found that the disk throughput on the VM server (OVS repo) and on the guest VMs is only about 1/3 of the physical server, around 50 MB/s.

We are trying to compare disk write I/O with a simple dd command on the following environments, all on the same hardware.
The test cases are:
(H710, 8 x 2 TB 7200 rpm disks configured as RAID 10, presented as two virtual disks: the first for OVM, the second for the OVS repo)
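To give an idea, a simple sequential write test of the kind meant here would be something like the following (block size, count and target path are just example values; conv=fdatasync makes dd wait for the data to reach the disk instead of only the page cache):

dd if=/dev/zero of=/mnt/test/ddtest.img bs=1M count=4096 conv=fdatasync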

We can see that on pure CentOS 6 we get reasonable disk I/O; on the OVM sda the score is lower but still acceptable.
But on the OVM repo disk and in the guest VMs we saw a big drop.

An interesting thing: I have 3 guest VMs on this VM server. When I run the same dd command simultaneously on all 3 guests, each one still gets the same score, so adding them together we get 120+ MB/s of total throughput.
My guess is that OVS puts an I/O threshold on both the OVS repo and each VM on it, so that no single VM can drain all the disk I/O, in order to preserve some for the other VMs?

I plan to run MongoDB on a guest VM if the I/O is close to the physical machine, so could anyone please help me improve the disk I/O?

As soon as it comes to database usage, raw throughput will not be your problem, but latency will. If you want to benchmark your storage in that regard, I'd suggest fio.
You will already trade away some latency due to the OCFS2 filesystem your pools are running on, so you might have to pony up for some more capable drives than your 2 TB SATA disks.
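A minimal fio sketch for such a latency-oriented test could look like this (file name, size and runtime are just placeholders; look at the completion latency percentiles in the output rather than the bandwidth figure):

fio --name=randwrite-lat --filename=/mnt/test/fio.dat --size=4G --rw=randwrite --bs=4k --direct=1 --numjobs=1 --runtime=60 --time_based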

Of course, it also depends on the load you are expecting from your applications.

So a throughput drop to 1/3 on the OVS repo and the VM server is common?
I tried it on the VM with MongoDB installed and performed an insert test.
The performance also dropped to around 1/3, so is it not OK to run disk-I/O-intensive apps on a VM server?
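A rough sketch of the kind of insert test meant here, run with the mongo shell from inside the guest (the collection name and document size are arbitrary):

mongo --eval 'var t0 = new Date(); for (var i = 0; i < 100000; i++) { db.iotest.insert({n: i, pad: new Array(101).join("x")}); } print("elapsed ms: " + (new Date() - t0));'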

If the virtual disk that is hosted on the storage repo is too slow, you still have the option to use something like iSCSI for that. The performance penalty you get when using the OCFS2 storage repos really is significant.
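If you go down the iSCSI road, one way, e.g. mapping a LUN directly inside the guest with the standard open-iscsi tools, would roughly be (portal IP and target IQN below are placeholders):

iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node -T iqn.2013-01.com.example:storage.lun1 -p 192.168.1.100 --login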

I have achieved a throughput of 90 MB/s on my SR inside my guests, but I have dedicated storage for that (FC and iSCSI), so I can't state anything about the H710's performance. Note also that I have benchmarked my OCFS2 SRs with fio, which you really should do first, since you won't get good performance if you've got high-latency storage.

Can you redo these tests with --ioengine=libaio set? That should provide significantly better results. I just checked on my "self-made" hybrid storage, which is connected via 1 GbE: IOPS between 5k and 7k, depending on the type of operation.
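As a sketch, what I mean is an fio run along these lines (file name, size, queue depth and runtime are just example values):

fio --name=randwrite-iops --filename=/mnt/test/fio.dat --size=4G --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based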

If I do some raw testing using dd with oflag=direct, I am getting approx. 75 MB/s, once the hybrid storage has shuffled the new hot blocks around a bit.
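That raw test is basically along these lines (target path and sizes are placeholders; oflag=direct bypasses the page cache so you measure the storage itself):

dd if=/dev/zero of=/mnt/test/ddtest.img bs=1M count=4096 oflag=direct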

budachst wrote:
If the virtual disk that is hosted on the storage repo is too slow, you still have the option to use something like iSCSI for that. The performance penalty you get when using the OCFS2 storage repos really is significant.

I have achieved a throughput of 90 MB/s on my SR inside my guests, but I have dedicated storage for that (FC and iSCSI), so I can't state anything about the H710's performance. Note also that I have benchmarked my OCFS2 SRs with fio, which you really should do first, since you won't get good performance if you've got high-latency storage.

Our environment is Fibre Channel, and currently all of our repos are OCFS2.