Hey all, I've got a 20TB replicated Gluster cluster set up, and I think I'm having some read/write performance issues over NFS. If anyone has a few minutes to spare, I've put all my hardware, node, and volume options in a few pastebins so I don't have to spell it all out here.

post-factum: Thanks! I'm noticing slow read/write speeds over NFS to a two-node replicated volume. Read/write performance directly to the bricks is good, but mounting over NFS, even as a loopback to a node's own mount point, cuts read/write throughput by a factor of at least two, sometimes more.

Hi all, I will try one more time, although even JoeJulian didn't have any ideas. I have a single-brick, single-node GlusterFS system (other nodes exist and are ready to join, but I am testing on this one first). The brick's backing store is a RAID60 capable of 1-2 GB/sec. However, reading from the Gluster FUSE mount with dd maxes out at 200-300 MB/sec. Has anyone seen this, or have any ideas? This will make GlusterFS a non-starter for us.
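For reference, the sequential-read test described above looks roughly like this. The mount point is a placeholder (it defaults to a temp directory here so the commands run anywhere); on the real FUSE mount you'd use a file much larger than RAM, and add oflag=direct / iflag=direct so the page cache doesn't inflate the numbers:

```shell
# MNT is a placeholder -- point it at the Gluster FUSE mount
# (e.g. /mnt/glustervol). Defaults to a temp dir for illustration.
MNT=${MNT:-$(mktemp -d)}

# Write a test file (on the real system: much larger than RAM,
# and with oflag=direct to bypass the page cache).
dd if=/dev/zero of="$MNT/ddtest" bs=1M count=64 2>&1 | tail -n1

# Sequential read back -- dd's final status line reports throughput
# (add iflag=direct on the real mount for an uncached number).
dd if="$MNT/ddtest" of=/dev/null bs=1M 2>&1 | tail -n1

rm -f "$MNT/ddtest"
```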

Hi all, I wrote this morning about performance problems (the Gluster FUSE mount can only do sequential reads at 200-300 MB/sec from a brick capable of 1-2 GB/sec), and scobanx pooh-poohed this and suggested I use fio instead of dd. However, our use case IS sequential access of very large files. Should we be using two RAID6 bricks per server instead of a single RAID60 brick?
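For what it's worth, an fio job approximating our workload (sequential reads of very large files through the FUSE mount) would look something like this. The mount path and sizes are illustrative placeholders, not our actual settings:

```ini
; Illustrative fio job file -- sequential large-file reads.
; directory is a placeholder for the Gluster FUSE mount point.
[global]
directory=/mnt/glustervol
direct=1          ; bypass the page cache
ioengine=libaio
bs=1M

[seqread]
rw=read
size=10G
numjobs=1
iodepth=8
```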