Re: [Linux-cluster] A few GFS newbie questions: journals, etc

You can add journals, but I don't remember off the top of my head how.
One of the other GFS guys will have to speak up ;)
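For reference, on the GFS1-era tools journals are added with gfs_jadd,
which runs against a *mounted* file system. A minimal sketch (device and
mountpoint names are hypothetical; gfs_jadd needs free space beyond the
existing file system, so grow the logical volume first):

```
# Grow the underlying volume first; new journals consume space
# past the current end of the file system (size is only an example).
lvextend -L +256M /dev/vg0/gfslv
# Add two journals to the mounted GFS file system.
gfs_jadd -j 2 /mnt/gfs
```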
On Wed, 2005-06-08 at 16:02 -0500, Nate Carlson wrote:
> I think this sounds like a reasonable way to go - make the physical
> servers (or a couple of them) the lock servers, and set up the GFS clients
> as client-only. For this case, will I need to do lock_gulm? If so, I'll
> have to do some research on how to set that up.
Yes, for that setup you'll want lock_gulm.
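To give you an idea, here's a minimal sketch of the gulm section of
/etc/cluster/cluster.conf on the RHEL4-era cluster suite (cluster and
node names are hypothetical; the lockserver entries name your dedicated
lock servers, and the client-only nodes use the same config):

```
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <gulm>
    <lockserver name="lock1.example.com"/>
    <lockserver name="lock2.example.com"/>
    <lockserver name="lock3.example.com"/>
  </gulm>
  <clusternodes>
    <!-- one clusternode entry per member, each with its fencing -->
  </clusternodes>
</cluster>
```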
Though, I kind of wonder why having one node online (plus the three lock
servers) and able to access the file system is such a strict
requirement. In a small cluster of 5-20 nodes with 3 lock servers, the
overhead of adding the lock servers is quite high:
- 5 nodes: 60%(!) more nodes (totaling 8 machines) once lock servers are
added
- 15 nodes: 20% more nodes (totaling 18 machines) once lock servers are
added
- 20 nodes: 15% ...
It's a lot of overhead from a hardware perspective, especially given
that both the lock servers and the clients need fencing. Gulm is
typically used on much larger clusters. You could avoid the extra
machines by mounting the GFS volumes on the lock servers themselves, but
that breaks your "1 node online" requirement, so the overhead can't be
avoided. Furthermore, if you do intend to go this route, you'll find
that your availability is more predictable with DLM.
Here are some examples of the fault tolerance of different
configurations using different lock managers, with 15 machines total and
correctly configured I/O fencing assumed:
3 lock servers, 12 client-only nodes mounting the GFS volume:
- 1 of the lock servers can fail before the cluster halts
- 12 of the client-only nodes can fail (all of them)
(*The chance of a lock server failing in this configuration is assumed
to be less than the chance of a GFS client node failing.)
15 node cluster, with 3 lock servers also mounting the GFS volume:
- 1 of the lock servers can fail before the cluster halts
- 12 of the client-only nodes can fail (all of them)
15 node cluster, with 5 lock servers also mounting the GFS volume:
- 2 of the lock servers can fail before the cluster halts
- 10 of the client-only nodes can fail (all of them)
15 node CMAN/DLM cluster, all nodes having one vote:
- any 7 nodes can fail before the cluster halts
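All of the numbers above fall out of simple majority quorum: more than
half of the voting members must stay up, i.e. floor(n/2) + 1 must
survive and the rest may fail. A quick sketch of the arithmetic:

```shell
# Majority quorum: floor(n/2) + 1 members must stay up.
quorum() { echo $(( $1 / 2 + 1 )); }

# 3 and 5 gulm lock servers, and a 15-vote CMAN/DLM cluster:
for n in 3 5 15; do
  q=$(quorum "$n")
  echo "$n voters: quorum is $q, so $(( n - q )) may fail"
done
```

That gives 1 tolerable failure for 3 lock servers, 2 for 5, and 7 for a
15-vote DLM cluster, matching the figures above.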
In case there was any confusion, "mounting the file system" is not the
same thing as "joining the cluster". Nodes can join the cluster and
_not_ mount any file systems - or do anything else cluster-related, for
that matter. Any quorate member of the cluster may mount one of the
file systems on the cluster.
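To make that concrete, on a CMAN/DLM cluster joining and mounting are
completely separate steps (device and mountpoint names are
hypothetical):

```
# Join the cluster -- no file system involved yet:
cman_tool join
cman_tool nodes          # verify membership
# Mounting is a separate, optional step on any quorate member:
mount -t gfs /dev/vg0/gfslv /mnt/gfs
```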
-- Lon