was also reading through BZ regarding the direct-io-mode of fuse and am still in the dark as to its current status when mounting using fuse. in older versions you could control this setting, but not any more, it would seem.

Slydder: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.

and`: you should not mix RHS and community glusterfs on the same machine. When you're using community gluster I recommend adding an exclude statement to /etc/yum.repos.d/redhat.repo to avoid pulling in the RHS client-side bits.
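As a sketch, the exclude statement might look like this inside the RHS repo stanza (the section name here is a placeholder; use whatever stanza your redhat.repo actually contains):

```ini
# hypothetical stanza in /etc/yum.repos.d/redhat.repo
[rhel-x86_64-server-rhsclient]
# ...existing name/baseurl lines as shipped...
# skip the RHS client-side gluster packages so community ones are used
exclude=glusterfs*
```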

mojibake: "Regarding the issue I had yesterday that you chimed in on, where Apache triggered an OOM and it ended up killing the gluster client." (I don't answer offline in most instances.) The kernel is what kills processes when the kernel runs out of memory, not apache. It just happened to pick gluster to kill, probably because it was using the most ram.

JoeJulian: After raising the instance size and continuing testing, it looks like a t2.small web server is just about the limit for what some light load testing will handle. With the larger instance, memory hovers around 1GB for the gluster client and httpd processes.

The workaround should be simple enough: install the yum-plugin-priorities package and then add a priority to the glusterfs.repo file in /etc/yum.repos.d. That will give the community packages a higher priority (with that plugin, a lower number wins).
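Assuming a typical glusterfs.repo stanza, the change is a single line; the section name shown here is a placeholder for whatever the repo file actually uses:

```ini
# hypothetical stanza in /etc/yum.repos.d/glusterfs.repo
[glusterfs-community]
# ...existing name/baseurl lines as shipped...
# yum-plugin-priorities: lower value wins; the default is 99,
# so 50 makes these packages beat the RHS/stock ones
priority=50
```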

I saw it way back when I upgraded to CentOS 6.6. Since that's all yum, though, like cfeller said, I just used package priorities (always have, actually), so I didn't actually notice until someone mentioned it here.

JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
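As a sketch, opening those ports on an EL-style server with iptables might look like the rules below. The brick-port range is an assumption (widen it to cover however many bricks the server runs); none of these exact rules are from the original message:

```shell
# glusterd management (24008 only needed for rdma)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick ports: 49152 and up since 3.4.0, one per brick (range here is an example)
iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
# gluster NFS server (38465-38467) plus NLM (38468, since 3.3.0)
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
# rpcbind/portmap and NFS (since 3.4)
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
```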

am currently building ganesha (which has gfs support). I installed gluster using the debian packages and am in need of glusterfs/api/glfs-handles.h, which does not seem to be in any of the packages, and there doesn't seem to be a -dev package. any ideas?

To mount via nfs, most distros require the options tcp,vers=3 -- Also, an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
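A minimal mount invocation matching those options; the server name, volume name, and mount point are placeholders, not from the original message:

```shell
# mount a gluster volume over NFSv3/TCP ("server1" and "myvol" are examples)
mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol
```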

Gorian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
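The probe sequence described above, sketched for a hypothetical three-node pool (hostnames are placeholders):

```shell
# from server1: probe the other peers by hostname
gluster peer probe server2.example.com
gluster peer probe server3.example.com
# from server2 (any one of the others): probe server1 by hostname,
# so server1's own entry is recorded by name rather than IP
gluster peer probe server1.example.com
```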

semiosis: just checked for pNFS support for gluster and it's not in yet. ceph, gpfs, and vfs all have pNFS support. will see what can be done, though. at least I have assurance that the deadlock will not happen due to ganesha.