This blog is dedicated to Linux users, system admins, open source enthusiasts, and anyone looking for solutions, tricks, and concepts. Readers apply concepts or execute commands at their own risk. The owner of these articles is not responsible for any impact, damage, or errors.

Friday, June 17, 2011

Basic Idea on GFS file system

Global File System (GFS) is a shared disk file system for Linux computer clusters. It can maximize the benefits of clustering and minimize the costs.

It provides the following benefits:

* Greatly simplify your data infrastructure
  * Install and patch applications once, for the entire cluster
  * Reduce the need for redundant copies of data
  * Simplify back-up and disaster recovery tasks
* Maximize use of storage resources and minimize your storage costs
  * Manage your storage capacity as a whole vs. by partition
  * Decrease your overall storage needs by reducing data duplication
* Scale clusters seamlessly, adding storage or servers on the fly
  * No more partitioning storage with complicated techniques
  * Add servers simply by mounting them to a common file system
* Achieve maximum application uptime

While a GFS file system may be used outside of LVM, Red Hat supports only GFS file systems created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon, clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share them.
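As a rough sketch, preparing a clustered logical volume for GFS might look like the following (the device name, volume group name, and size are all hypothetical; run this on a node where clvmd is already active):

```shell
# Initialize the shared SAN disk for LVM (hypothetical device name)
pvcreate /dev/sdb1

# Create a clustered volume group; -c y marks it cluster-aware for CLVM
vgcreate -c y vg_gfs /dev/sdb1

# Carve out a logical volume to hold the GFS file system
lvcreate -L 100G -n lv_gfs vg_gfs
```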

GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.
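The two parts are, roughly: unmount the file system on every node, then change the lock protocol recorded in the file system superblock. One way the second part can be done is with gfs_tool (the device and mount-point names below are hypothetical; verify the exact procedure against your release's documentation before running it):

```shell
# The file system must be unmounted on every node first
umount /mnt/gfs

# Show the current superblock settings, including the lock protocol
gfs_tool sb /dev/vg_gfs/lv_gfs proto

# Switch the lock protocol to lock_dlm
gfs_tool sb /dev/vg_gfs/lv_gfs proto lock_dlm
```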

“GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

GFS Software Components :

gfs.ko : Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.

lock_dlm.ko : A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko, and communicates with the DLM lock manager in Red Hat Cluster Suite.

lock_nolock.ko : A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko, and provides local locking.
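On a running GFS node you can quickly check which of these modules are loaded (output varies by node configuration):

```shell
# List loaded kernel modules related to GFS and its lock modules
lsmod | grep -E 'gfs|lock_'
```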

The system clocks in GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating. Unnecessary inode time-stamp updating severely impacts cluster performance. Use ntpd to keep each node's clock synchronized with a time server.
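A minimal sketch of enabling time synchronization on each RHEL 5-era node (the server hostname is a placeholder; use your site's NTP server):

```shell
# Point ntpd at a time server (placeholder hostname)
echo "server 0.pool.ntp.org" >> /etc/ntp.conf

# Start ntpd now and enable it at boot
service ntpd start
chkconfig ntpd on
```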

At each node, mount the GFS file systems.

Command usage:

$ mount BlockDevice MountPoint
$ mount -o acl BlockDevice MountPoint

The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).
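For example, on a file system mounted with -o acl (the path and user name below are hypothetical):

```shell
# Grant user "alice" read/write access via an ACL entry
setfacl -m u:alice:rw /mnt/gfs/shared/report.txt

# Display the ACLs on the file
getfacl /mnt/gfs/shared/report.txt
```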

NOTE :

Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName :

Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm. The lock protocol when GFS is acting as a local file system (one node only) is lock_nolock.

LockTableName :

This parameter is specified for a GFS file system in a cluster configuration. It has two parts separated by a colon (no spaces), as follows: ClusterName:FSName

* ClusterName, the name of the Red Hat cluster for which the GFS file system is being created.
* FSName, the file system name, can be 1 to 16 characters long, and the name must be unique among all file systems in the cluster.

NumberJournals:

Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)
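Putting these parameters together, a hedged example of creating a GFS file system for a three-node cluster (the cluster name, file system name, and device path are all assumptions):

```shell
# -p: lock protocol, -t: ClusterName:FSName, -j: number of journals
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/vg_gfs/lv_gfs
```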

Before you can mount a GFS file system, the file system must exist, the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS file system as you would any Linux file system.
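As a sketch, the full sequence on a RHEL 5-era cluster node might look like this (the service names match that era's init scripts; the volume and mount-point names are assumptions):

```shell
# Start the cluster infrastructure and the clustered LVM daemon
service cman start
service clvmd start

# Activate the logical volume that holds the GFS file system
lvchange -ay vg_gfs/lv_gfs

# Mount the GFS file system
mount -t gfs /dev/vg_gfs/lv_gfs /mnt/gfs
```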
