The NS-G8 Gateway Server system is a consolidation of file servers running NAS (NFS, CIFS) applications, configured as a 2+1 high-availability cluster. The servers deliver network services over high-speed Gigabit Ethernet. The cluster tested here consists of two active datamovers, each providing 4 jumbo-frame-capable Gigabit Ethernet interfaces, and one standby datamover that provides high availability. The NS-G8 acts as a gateway to a shared Symmetrix V-Max storage array.
The Symmetrix V-Max is a new storage architecture using 4 V-Max engines, each connected to the fabric via a set of multiple active 4 Gbit/s FC HBAs.

Configuration Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Enclosure | EMC | NSG8-DME0 | Celerra NS-G8 empty datamover add-on enclosure
2 | 1 | Enclosure | EMC | NSG8-DME1 | Celerra NS-G8 empty datamover add-on enclosure
3 | 3 | Datamover | EMC | NSG8-DM-8A | Celerra NS-G8 datamover, 4 GbE ports, 4 FC ports
4 | 1 | Control Station | EMC | NSG8-CSB | Celerra NS-G8 control station (administration only)
5 | 3 | Software | EMC | NSG8-UNIX-L | Celerra NS-G8 UNIX license
6 | 1 | Intelligent Storage Array Engine | EMC | SB-64-BASE | Symmetrix V-Max base engine, 64 GB cache
7 | 3 | Intelligent Storage Array Engine | EMC | SB-ADD64NDE | Symmetrix V-Max add engine, 64 GB cache
8 | 16 | FE IO Module | EMC | SB-FE80000 | Symmetrix V-Max front-end IO module with multimode SFPs
9 | 16 | Drive Enclosure | EMC | SB-DE15-DIR | V-Max direct-connect storage bay, drive enclosure
10 | 96 | Flash Disk | STEC | NF4F14001B | V-Max Enterprise Flash Drive (EFD), 400 GB, 4 Gbit/s FC, optical
11 | 4 | FC Disk | Seagate | NS4154501B | V-Max Cheetah 450 GB 15K.6, 4 Gbit/s FC disks
12 | 4 | Standby Power Supply | EMC | SB-DB-SPS | V-Max standby power supply
13 | 1 | FC Switch | EMC | DS-300B | 24-port Fibre Channel switch

Server Software

OS Name and Version: DART 5.6.46.4
Other Software: EMC Celerra Control Station Linux 2.6.9-67.0.4.5611
Filesystem Software: Celerra UxFS File System

Server Tuning

Name | Value | Description
ufs syncInterval | 22500 | Timeout between UxFS log flushes
file asyncThresholdPercentage | 30 | Total cached dirty blocks for NFSv3 async writes
ufs cgHighWaterMark | 131071 | Defines the system's CG cache size
ufs inoBlkHashSize | 170669 | Inode block hash size
ufs updateAccTime | 0 | Disable access-time updates
ufs nFlushDir | 80 | Number of UxFS directory and indirect-block flush threads
file prefetch | 0 | Disable DART read prefetch
ufs inoHighWaterMark | 65536 | Number of dirty inode buffers per filesystem
nfs thrdToStream | 7 | Number of NFS flush threads per stream
ufs inoHashTableSize | 2005027 | Inode hash table size
mkfsArgs dirType | DIR_COMPAT | Compatibility-mode directory style
kernel maxStrToBeProc | 24 | Number of network streams to process at once
ufs nFlushIno | 128 | Number of UxFS inode-block flush threads
kernel outerLoop | 16 | Number of consecutive iterations of network-packet processing
ufs nFlushCyl | 40 | Number of UxFS cylinder-group-block flush threads
nfs withoutCollector | 1 | Enable NFS-to-CPU thread affinity
kernel buffersWatermarkPercentage | 5 | Buffer-cache flushing threshold
file initialize nodes | 1000000 | Number of inodes
file initialize dnlc | 3676000 | Number of dynamic name-lookup cache (DNLC) entries
nfs start openfiles | 1200000 | Number of open files
nfs start nfsd | 4 | Number of NFS daemons

Server Tuning Notes

Disks and Filesystems

Description | Number of Disks | Usable Size
This set of 96 EFD disks is divided into 48 2-disk RAID1 pairs, each with 4 LUs bound on it, exported as 192 logical volumes. All data file systems reside on these disks. | 96 | 18.8 TB
This set of FC disks consists of 2 2-disk RAID1 pairs, each with 2 LUs per drive, exported as 8 logical volumes. These disks are reserved for Celerra system use. | 4 | —

The stripe size for all RAID1 logical volumes was 32 KB, and each logical volume was 100 GB. The filesystem fs1 was built on a Celerra meta volume striped across 24 logical volumes on the first V-Max engine; fs2 was built on a meta volume striped across a further 24 logical volumes on the same engine. fs3 and fs4 were configured similarly on the second V-Max engine, fs5 and fs6 on the third, and fs7 and fs8 on the fourth.
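The volume layout described above can be checked with simple arithmetic. This sketch only recomputes the counts stated in the text (48 RAID1 pairs, 192 logical volumes, 24 volumes striped per filesystem, 2 filesystems per engine); nothing here is Celerra configuration syntax.

```python
# Recompute the EFD volume layout from the numbers in the text.
EFD_DISKS = 96
DISKS_PER_RAID1_PAIR = 2
LUS_PER_PAIR = 4          # 4 LUs bound on each RAID1 pair
ENGINES = 4
FILESYSTEMS = 8

pairs = EFD_DISKS // DISKS_PER_RAID1_PAIR       # 48 RAID1 pairs
logical_volumes = pairs * LUS_PER_PAIR          # 192 logical volumes
lvs_per_fs = logical_volumes // FILESYSTEMS     # 24 LVs striped per filesystem
fs_per_engine = FILESYSTEMS // ENGINES          # 2 filesystems per engine

print(pairs, logical_volumes, lvs_per_fs, fs_per_engine)  # 48 192 24 2
```

Note that 192 logical volumes at 100 GB each gives 19.2 TB of mirrored capacity; the reported 18.8 TB usable reflects the remainder after formatting overhead.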

Network Configuration

Item No | Network Type | Number of Ports Used | Notes
1 | Jumbo Gigabit Ethernet | 8 | The Gigabit network interfaces used by both datamovers

Network Configuration Notes

All Gigabit network interfaces were connected to a Cisco 6509 switch.

Benchmark Network

An MTU of 9000 bytes was set for all connections to the switch. Each datamover was connected to the network via 4 ports, and each LG1-class workload machine was connected via one port.
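The benefit of the 9000-byte MTU can be sketched as payload-per-frame arithmetic. The header sizes below assume IPv4 without options and UDP transport; they are illustrative, not a statement about the benchmark's transport settings.

```python
# Maximum UDP payload per unfragmented IP packet at a given MTU.
IP_HEADER = 20   # IPv4 header without options
UDP_HEADER = 8

def udp_payload(mtu: int) -> int:
    """Bytes of UDP payload that fit in one IP packet at this MTU."""
    return mtu - IP_HEADER - UDP_HEADER

standard = udp_payload(1500)   # 1472 bytes
jumbo = udp_payload(9000)      # 8972 bytes
print(standard, jumbo)         # 1472 8972
```

A jumbo frame therefore carries roughly six times the payload of a standard frame, reducing per-packet processing for large NFS transfers.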

Memory Notes

The Symmetrix V-Max was configured with a total of 256 GB of memory. The memory is backed by sufficient battery power to safely destage all cached data to disk in the event of a power failure.

Stable Storage

8 NFS file systems were used. Each RAID1 pair had 4 LUs bound on it, and each file system was striped over a quarter of the logical volumes. The storage array had 8 Fibre Channel connections, 4 per datamover. In this configuration, NFS stable write and commit operations are not acknowledged until the storage array has acknowledged that the related data resides in stable storage (i.e., NVRAM or disk).
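The stable-write guarantee above can be illustrated with a minimal sketch (illustrative Python, not DART code): an UNSTABLE write may sit in volatile cache, while FILE_SYNC writes and COMMIT operations are acknowledged only once the data has reached stable storage.

```python
# Toy model of NFSv3 write stability semantics (not server code).
class StableStorageServer:
    def __init__(self):
        self.volatile = []   # data cached in RAM only; lost on power failure
        self.stable = []     # data in NVRAM or on disk; survives power failure

    def write(self, data: bytes, stable_how: str = "UNSTABLE") -> None:
        if stable_how == "FILE_SYNC":
            self.stable.append(data)      # ack only after stable storage
        else:
            self.volatile.append(data)    # ack allowed before stability

    def commit(self) -> None:
        # COMMIT must not be acknowledged until volatile data is stable.
        self.stable.extend(self.volatile)
        self.volatile.clear()

srv = StableStorageServer()
srv.write(b"a")               # UNSTABLE: buffered
srv.write(b"b", "FILE_SYNC")  # stable before the ack
srv.commit()                  # now b"a" is durable too
print(len(srv.volatile), len(srv.stable))  # 0 2
```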

System Under Test Configuration Notes

The system under test consisted of 2 NS-G8 Gateway datamovers attached to a Symmetrix V-Max storage array via 4 FC links per datamover. The datamovers ran DART 5.6.46.4. 4 Gigabit Ethernet ports per datamover were connected to the network.

Other System Notes

Failover is supported by an additional Celerra datamover that operates in standby mode. In the event of a datamover failure, this unit takes over the function of the failed unit. The standby datamover does not contribute to the performance of the system, and it is not included in the components listed above.
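The 2+1 scheme can be sketched as follows (hypothetical Python, not Celerra's failover logic; the datamover names dm2, dm3, and dm4 are invented for illustration). The standby assumes the failed unit's filesystems and adds no capacity while both active units are healthy.

```python
# Toy model of 2+1 active/standby failover.
class Cluster:
    def __init__(self, active: dict, standby: str):
        self.serving = dict(active)   # datamover -> filesystems it serves
        self.standby = standby        # idle unit, serves nothing

    def fail(self, datamover: str) -> None:
        if self.standby is None:
            raise RuntimeError("no standby available")
        # Standby takes over the failed unit's filesystems.
        self.serving[self.standby] = self.serving.pop(datamover)
        self.standby = None

cluster = Cluster({"dm2": ["/fs1", "/fs2"], "dm3": ["/fs3", "/fs4"]},
                  standby="dm4")
cluster.fail("dm2")
print(sorted(cluster.serving))   # ['dm3', 'dm4']
print(cluster.serving["dm4"])    # ['/fs1', '/fs2']
```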

Test Environment Bill of Materials

Item No | Qty | Vendor | Model/Name | Description
1 | 24 | Dell | PowerEdge 1850 | Dell server with 1 GB RAM, running Linux 2.6.9-42.ELsmp

Load Generators

LG Type Name: LG1
BOM Item #: 1
Processor Name: Intel(R) Xeon(TM) CPU 3.60GHz
Processor Speed: 3.6 GHz
Number of Processors (chips): 2
Number of Cores/Chip: 2
Memory Size: 1 GB
Operating System: Linux 2.6.9-42.ELsmp
Network Type: 1 x Broadcom BCM5704 NetXtreme Gigabit Ethernet

Load Generator (LG) Configuration

Benchmark Parameters

Network Attached Storage Type: NFS V3
Number of Load Generators: 24
Number of Processes per LG: 32
Biod Max Read Setting: 5
Biod Max Write Setting: 5
Block Size: AUTO
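The aggregate load implied by these parameters can be sketched with simple arithmetic. Treating the biod read and write limits as fully outstanding per process is an upper-bound assumption, not a measured figure.

```python
# Aggregate load-generation figures from the benchmark parameters.
LOAD_GENERATORS = 24
PROCESSES_PER_LG = 32
BIOD_MAX_READ = 5
BIOD_MAX_WRITE = 5

total_processes = LOAD_GENERATORS * PROCESSES_PER_LG   # 768 processes total
# Upper bound on concurrently outstanding requests per load generator,
# if every process keeps its full read and write quota in flight.
outstanding_per_lg = PROCESSES_PER_LG * (BIOD_MAX_READ + BIOD_MAX_WRITE)

print(total_processes, outstanding_per_lg)  # 768 320
```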

Testbed Configuration

LG No | LG Type | Network | Target Filesystems | Notes
1..24 | LG1 | 1 | /fs1,/fs2,/fs3.../fs7,/fs8 | N/A

Load Generator Configuration Notes

All filesystems were mounted on all clients, which were connected to the same physical and logical network.

Uniform Access Rule Compliance

Each client has the same file systems mounted from each of the two active datamovers.

Other Notes

Failover is supported by an additional Celerra datamover that operates in standby mode. In the event of a datamover failure, this unit takes over the function of the failed unit. The standby datamover does not contribute to the performance of the system, and it is not included in the components listed above.

The Symmetrix V-Max was configured with 256 GB of memory, 64 GB per V-Max engine. The memory is backed by sufficient battery power to safely destage all cached data to disk in the event of a power failure.