We have a computer running Windows Storage Server 2008 R2 with six SSDs in RAID 0.

This storage computer has one PCI-E NIC with four Ethernet ports, and we connected it through a gigabit switch to other computers via iSCSI.

The problem is that we are not able to get high read/write speeds.

Using HD Tune directly on the storage computer we get around 500 MB/s, but over the iSCSI link (from another computer) we only get close to 200 MB/s.

We set up MPIO multipath, enabled jumbo frames, and disabled IPv4 checksum offload.
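One way to sanity-check the numbers: a single gigabit link tops out at roughly 125 MB/s, so 200 MB/s over iSCSI looks like only about two of the four paths are actually carrying traffic. A minimal back-of-envelope sketch (the function name and decimal MB/s convention are my own, not from any tool):

```python
import math

# Raw line rate of one gigabit link, in decimal MB/s (1 Gbit/s = 125 MB/s).
GBE_LINE_RATE_MB_S = 1000 / 8

def paths_needed(target_mb_s: float, per_path_mb_s: float = GBE_LINE_RATE_MB_S) -> int:
    """Minimum number of saturated GbE paths required to reach a target throughput."""
    return math.ceil(target_mb_s / per_path_mb_s)

observed_iscsi = 200  # MB/s measured over iSCSI from the client
local_hdtune = 500    # MB/s measured locally with HD Tune

print(paths_needed(observed_iscsi))  # 2 -> consistent with only ~2 active paths
print(paths_needed(local_hdtune))    # 4 -> all four links must run near line rate
```

If the MPIO load-balancing policy or session count is effectively limiting traffic to two paths, that alone would explain the ~200 MB/s plateau.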

EDIT

I don't care about data loss. I just need speed because this is a cache computer.

EDIT

Both server and client have four gigabit NICs (1 Gbit/s per adapter), and multipath/MPIO is correctly configured AFAIK.

EDIT

One thing I can't understand: we have a Dell EqualLogic storage array and it gets close to 200 MB/s using the same switch/configuration. How is that possible? The EqualLogic was supposed to be a lot slower than a six-SSD RAID 0 array.

Also, I have read that a lot of storage arrays out there use four 1 Gbit NICs and can easily get close to 500 MB/s, including one from Dell which has only SSDs, as you can see here
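That ~500 MB/s figure is roughly the wire ceiling for 4 × 1 GbE with jumbo frames. A rough sketch of the arithmetic (the per-frame overhead figures are standard Ethernet/IPv4/TCP assumptions, not taken from any vendor spec):

```python
# Back-of-envelope throughput ceiling for 4 x 1 GbE with 9000-byte jumbo frames.
# All figures decimal (1 Gbit/s = 1e9 bit/s, 1 MB/s = 1e6 bytes/s).

MTU = 9000
IP_TCP_HEADERS = 40           # IPv4 (20 B) + TCP (20 B), no options
WIRE_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap

payload_per_frame = MTU - IP_TCP_HEADERS    # 8960 bytes of payload per frame
wire_bytes_per_frame = MTU + WIRE_OVERHEAD  # 9038 bytes on the wire per frame
efficiency = payload_per_frame / wire_bytes_per_frame

links = 4
raw_mb_s = links * 1000 / 8                 # 500 MB/s raw for 4 Gbit/s aggregate
ceiling_mb_s = raw_mb_s * efficiency

print(round(ceiling_mb_s, 1))  # ~495.7 MB/s, before iSCSI PDU overhead
```

So arrays quoting ~500 MB/s on four gigabit ports are running all four links at near line rate; they aren't doing anything a properly balanced MPIO setup couldn't, which suggests the bottleneck here is path utilization rather than the disks.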

EDIT

Also, I am thinking about dropping Windows Storage Server and giving OpenFiler a try. Should I consider this?

I don't care about data loss. The server is just a cache computer. I want speed and that's it.
–
Rafael Colucci, Jul 19 '11 at 14:50

Fair enough. How long can you go without this system, though? It's going to be down while you deal with the failed drive and rebuild the array.
–
EEAA, Jul 19 '11 at 14:51

I can go without this system forever. I have a cluster (there are other machines ready to go in case this system fails). I am sorry, but that is not the point. Thanks for trying to help, though.
–
Rafael Colucci, Jul 19 '11 at 14:52