An On-line Backup Function for a Clustered NAS System (X-NAS)

Transcription

1 _ An On-line Backup Function for a Clustered NAS System (X-NAS) Yoshiko Yasuda, Shinichi Kawamoto, Atsushi Ebata, Jun Okitsu, and Tatsuo Higuchi Hitachi, Ltd., Central Research Laboratory 1-28 Higashi-koigakubo, Kokubunji-shi, Tokyo , Japan Tel: , Fax: {yoshikoy, skawamo, ebata, j-okitsu, Abstract An on-line backup function for X-NAS, a clustered NAS system designed for entry-level NAS, has been developed. The on-line backup function can replicate file objects on X- NAS to a remote NAS in real-. It makes use of the virtualized global file system of X-NAS, and sends NFS write operations to both X-NAS and the remote backup NAS at the same. The performance of the on-line backup function was evaluated and the evaluation results show that the on-line backup function of X-NAS improves the system reliability while maintaining 8% of the throughput of the X-NAS without this function. 1. Introduction An entry-level NAS system is convenient in terms of the cost and the ease of management for offices with no IT experts. However, it is not scalable. To solve this problem, X-NAS, which is a simple, scalable clustered NAS architecture designed for entry-level NAS, has been proposed [6]. Like conventional NAS systems, it can be used for various clients, such as those using UNIX and Windows 1. X-NAS aims at the following four goals. Cost reduction by using entry-level NAS as an element Ease of use by providing a single-file-system view for various kinds of clients Ease of management by providing a centralized management function Ease of scaling-up by providing several system-reconfiguration functions To achieve these goals, X-NAS virtualizes multiple entry-level NAS systems as a unified system without changing clients' environments. In addition, X-NAS maintains the manageability and the performance of the entry-level NAS. It also can easily be reconfigured without stopping file services or changing setting information. However, when one of the X-NAS elements suffers a fault, file objects on the faulty NAS system may be lost if there are no backups. To improve the X-NAS reliability, a file-replication function must therefore be developed. The goal of the present work is to introduce an on-line backup function of X-NAS that replicates original file objects on X-NAS to a remote NAS for each file access request in real- without changing the clients' environments. The performance of the on-line backup function was evaluated and the evaluation results indicate that X-NAS with the on-line backup function improves the system reliability while maintaining 8% of the throughput of standard X-NAS. 1 Windows and DFS are trademarks of Microsoft Corporation. Double Take is a trademark of Network Specialists, Inc. All other products are trademarks of their respective corporations. 11

2 2. On-line backup function for X-NAS To improve the reliability of X-NAS, an on-line backup function for X-NAS has been developed. (Since the details of the X-NAS structure are discussed in another paper [6], they are not described here.) The on-line backup function consists of many sub-functions. Among these sub-functions, we focus on on-line replication, the heart of the on-line backup function, in this paper. The on-line replication replicates files of X-NAS to a remote NAS, which is called a backup NAS, in real- for each file access request Requirements The on-line backup function of X-NAS must meet the following requirements: Generate replicas of file objects in real- in order to eliminate the lag between the original data and the replicas. Use a standard file-access protocol such as NFS to communicate between X-NAS and the backup NAS in order to apply as many kinds of NAS as clients need. Do not change clients' environments in order to curb their management cost On-line replication There are several methods for replicating file objects to remote systems via an IP network. One method is to use a block I/O [5]. Since using a block I/O is a fine-grain process, all file objects are completely consistent with copied objects. However, the system structure is limited because the logical disk blocks of the objects must be allocated to the same address between the original data and its replica. Another method is to change the client's system. DFS [1] is a simple method for replicating file objects to many NASs. It replicates file objects in constant intervals but not in real-. X and the management partition in X-NAS enable the centralized management of many NAS elements and provide a unified file system view for clients (Fig. 1). X is a wrapper daemon and receives an NFS operation in place of the NFS server and sends the operation to others. On-line replication of X-NAS makes use of X in order to copy file objects to the backup NAS. By extending this function, X sends the NFS operation not only to the NFS servers on the X-NAS but also to the NFS servers on the backup NAS. All file objects can thus be replicated in real- for each NFS operation Operations NFS operations handled in X-NAS can be divided into four categories. Category 1 is reading files; category 2 is writing files; category 3 is reading directories; and category 4 is writing directories. X sends NFS operations belonging to categories 2 and 4 to both X-NAS and the backup NAS at the same. On the other hand, NFS operations belonging to categories 1 and 3 are not sent to the backup NAS. When a UNIX client sends a WRITE operation for file f to X-NAS, X on P-NAS (parent NAS) receives the operation in place of the NFS daemon. Figure 1 shows the flow of this operation, and Figure 2 shows the timing chart with or without the on-line backup function. Firstly, X specifies a that stores the file entity by using the inode number of the dummy file f on the management partition. Secondly, X invokes a sub thread and then sends the WRITE operation to the backup NAS by 12

3 using the thread. Thirdly, X sends the WRITE operation to the NFS daemon on the specified C-NAS (child NAS), and then the C-NAS processes the operation. Finally, X waits for the s of the operations from the NFS server on the C- NAS and from the backup NAS, and then it makes one from all the s and sends it back to the client. We call this procedure a synchronized backup. LAN UNIX (NFS) client WRITE file f Windows (CIFS) client Internal LAN #3 #2 main sub #1 X Samba #4 filec file f filea filea filef filec dummy files filea filef filec management partition C-NAS C-NAS P-NAS X-NAS Backup NAS Figure 1: Flow of WRITE operation with online backup function. client X elapsed client X-main X-sub send to backup NAS client Xmain X-sub send to backup NAS access to management partition invoke sub-thread access to wait sub-thread completion (a) X-NAS without on-line backup (b) X-NAS (c) X-NAS with partial asynchronized backup Figure 2. Timing charts of WRITE operation with or without on-line backup function Key features An on-line backup function must guarantee the consistency of data between X-NAS and the backup NAS. To achieve this, X waits for all s from both one of the NFS servers on the X-NAS and the backup NAS for each NFS operation. However, waiting for the s degrades total performance. To solve this problem, the performance of the on-line replication function must be improved through three key features as follows. (1) Multi-threaded wrapper daemon X waits for all s from both NAS systems. This incurs an overhead because of frequent accesses to the network and the disk drives. To reduce this cost, the main 13

4 thread of X invokes a sub thread to send the file I/Os to the backup NAS. This feature enables X-NAS to process the disk accesses of both X-NAS and the backup NAS in parallel. (2) File-handle cache The cost of specifying the full path name and the file handle on the backup NAS is high because of frequent accesses to the network and the disk drives. To reduce this cost, X- NAS makes use of a file-handle cache, which records the correspondence between the file handle of the dummy file, i.e., the global file handle, and the file handle of the backup NAS. (3) Partial asynchronized backup Although the synchronized backup is a simple method, the execution cost is high because this method waits for all the s from the NFS servers on the X-NAS and the backup NAS. A method that does not wait for the from the backup NAS achieves the same performance as X-NAS without the on-line backup function. However, when X-NAS or the backup NAS becomes faulty, it is difficult to guarantee the consistency of data between X-NAS and the backup NAS. Using a log is one solution to guarantee the consistency. However, since the log size is limited, it is not a perfect solution for entry-level NAS, which usually has a small-sized memory. Furthermore, according to the X-NAS concept, the architecture must be simplified as much as possible. X thus supports a partial asynchronized backup method in addition to the synchronized backup. Figure 2(c) shows the timing chart of the WRITE operation with partial asynchronized backup. In the method, after processing disk accesses to the data partition on the X-NAS element, X sends back a to a client without waiting for the from the backup NAS. As a result, the client can send the next operation. The main thread of X can perform the disk accesses to the management partition for the next operation during the waiting for the from the backup NAS. 3. Performance evaluation To evaluate the on-line backup function of X-NAS, an X-NAS prototype based on the NFSv3 implementation was developed. We ran NetBench [3] and SPECsfs97 [4] on the X-NAS prototype with or without on-line backup function. In this evaluation, by taking account of permissible range for the entry-level NAS's users, we set the performance objective for X-NAS with the on-line backup function at 8% of the performance of X- NAS without the function. Throughput and average are used as the performance metrics. In this evaluation, we implemented the partial asynchronized backup function in the WRITE operation. This is because the ratio of the WRITE operations to all operations is higher than other operations in the workload mix of the benchmarks. Furthermore, since the file sizes used by the benchmark programs are from 1 to 3 KB, many WRITE operations are issued continuously and then each process in a WRITE operation could be overlapped Experimental environment In the experimental environment, the maximum number of X-NAS elements is fixed to four. Each X-NAS element and the backup NAS configured with one 1-GHz Pentium III 14

5 processor, 1 GB of RAM and a 35-GB Ultra 16 SCSI disk drive running Red Hat Linux 7.2. For the NetBench test, one to eight clients running Windows 2 Professional were used. The clients, P-NAS, C-NASs, and the backup NAS were connected by 1-Megabit Ethernet because most offices still use this type of LAN Results Figures 3 and 4 show the results of our performance evaluation in terms of throughput and average. The throughputs of X-NAS with the synchronized backup function are about 8% of those without the function. Under an experimental environment with NetBench, the average for X-NAS with the function is about 1.2 s higher than that for X-NAS without the function. On the other hand, under an experimental environment with SPECsfs, the average for X-NAS with the function is about 1.4 s higher than the for X-NAS without it. Although the partial asynchronized backup can improve both throughput and average by several percentage, the performance objective for the in the case of SPECsfs cannot be achieved yet. Throughput (Mbit/sec) without on-line backup with partial-asynchronized backup Number of clients (a) Throughput Response (msec) 4 without on-line backup with partial-asynchronized backup Number of clients (b) Average Figure 3. Throughput and average of X-NAS with or without on-line backup function in the case of NetBench. 2 1 Delivered load (NFSOPS) 8 3 Response (msec) Response (msec) Offered load (NFSOPS) without on-line backup Offered load (NFSOPS) with partial-asynchronized backup Offered load (NFSOPS) (a) Total throughput (b) Total average (c) Average s for WRITE operations Figure 4. Throughput and average of X-NAS with or without on-line backup function in the case of SPECsfs Discussion To specify the reason for the longer in the case of SPECsfs, the average 15

6 of each NFS operation in the case of the synchronized backup was analyzed. The average s for some write requests such as WRITE, SETATTR and CREATE are longer than those for X-NAS without that function. In particular, the average for WRITE operations is 2.5 s higher than that for the other operations. Profiling results of the WRITE operations shows that waiting for the sub-thread completion is about 24% of the total processing and access to the via an IP network is about 48% of that. By applying the partial asynchronized backup to X-NAS, this waiting can be reduced to almost zero. Figure 4(c) shows the effects of the partial asynchronized backup in the case of WRITE operations. The average for WRITE operations with the partial asynchronized backup can be reduced to from 2.5 s to 1.8 s the for X-NAS without the function. As a result, the total average for SPECsfs with the function can be reduced to 1.3 s that without it. However since the ratio of data transmission to the total processing is still higher in the case of the 1- Megabit Ethernet, using a Gigabit network is effective because it can reduce the data transmission for 1-Megabit Ethernet to at least one-fifth. Furthermore, by optimizing other operations such as CREATE and COMMIT, the performance objective of 1.2 s can be achieved. 4. Related work There are several methods for replicating file objects between several NAS systems via the network. DFS [1] is a simple and easy file-replication function on Windows systems. DRBD [5] is a kernel module for building a two-node HA cluster under Linux. Double Take [2] is a third-vendor software to replicate file objects on the master NAS to the slave NAS. 5. Conclusions An on-line backup function for X-NAS, a clustered NAS system, has been developed. On-line replication, the core of the on-line backup function, replicates file objects on X- NAS to a remote backup NAS in real- for each NFS operation. A multi-threaded wrapper daemon with a low overhead, the developed file-handle cache and the partial asynchronized backup method can reduce the overhead for accessing the backup NAS. An X-NAS prototype with the on-line backup function, based on NFSv3 running the NetBench and SPECsfs97 programs attains 8% of the performance of X-NAS without the function. This function improves the dependability of entry-level NAS while maintaining its manageability. References [1] Deploying Windows Powered NAS Using Dfs with or without Active Directory [2] Double-Take Theory of Operations [3] NetBench [4] SFS3. Documentation Version [5] P. Reisner. DRBD. In Proceedings of the 7 th International Linux Kongress, 2. [6] Y. Yasuda et al. Concept and Evaluation of X-NAS: a highly scalable NAS system. In Proceedings of the 2 th IEEE/11 th NASA MSST23. 16

Shared Parallel File System Fangbin Liu fliu@science.uva.nl System and Network Engineering University of Amsterdam Shared Parallel File System Introduction of the project The PVFS2 parallel file system

Application Brief: Using Titan for MS Abstract Businesses rely heavily on databases for day-today transactions and for business decision systems. In today s information age, databases form the critical

Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............

Network Attached Storage Jinfeng Yang Oct/19/2015 Outline Part A 1. What is the Network Attached Storage (NAS)? 2. What are the applications of NAS? 3. The benefits of NAS. 4. NAS s performance (Reliability

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next

New!! - Higher performance for Windows and UNIX environments The IBM TotalStorage Network Attached Storage Gateway 300 (NAS Gateway 300) is designed to act as a gateway between a storage area network (SAN)

There are many things to consider when preparing for a TRAVERSE v11 installation. The number of users, application modules and transactional volume are only a few. Reliable performance of the system is

Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)

Symantec Backup Exec 10d System Sizing Best Practices For Optimizing Performance of the Continuous Protection Server Table of Contents Table of Contents...2 Executive Summary...3 System Sizing and Performance

Red Hat Enterprise Linux as a file server You re familiar with Red Hat products that provide general-purpose environments for server-based software applications or desktop/workstation users. But did you

Technical Note Configuration s VMware Infrastructure 3 When you are selecting and configuring your virtual and physical equipment, you must stay at or below the maximums supported by VMware Infrastructure

C H A P T E R 4 Planning Domain Controller Capacity Planning domain controller capacity helps you determine the appropriate number of domain controllers to place in each domain that is represented in a

1-888-674-9495 www.doubletake.com Real-time Protection for Hyper-V Real-Time Protection for Hyper-V Computer virtualization has come a long way in a very short time, triggered primarily by the rapid rate

Remote Copy Technology of ETERNUS6000 and ETERNUS3000 Disk Arrays V Tsutomu Akasaka (Manuscript received July 5, 2005) This paper gives an overview of a storage-system remote copy function and the implementation

1 VMWARE WHITE PAPER Introduction This paper outlines the considerations that affect network throughput. The paper examines the applications deployed on top of a virtual infrastructure and discusses the

Planning and Administering Windows Server 2008 Servers MOC6430 About this Course Elements of this syllabus are subject to change. This five-day instructor-led course provides students with the knowledge

Red Hat Enterprise linux 5 Continuous Availability Businesses continuity needs to be at the heart of any enterprise IT deployment. Even a modest disruption in service is costly in terms of lost revenue

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200

Protect SQL Server 2012 AlwaysOn Availability Group with Hitachi Application Protector Tech Note Nathan Tran The purpose of this tech note is to show how organizations can use Hitachi Applications Protector

Database Configuration: SAN or NAS Discussion of Fibre Channel SAN and NAS Version:.0 Greg Schulz DBSANNAS.DOC Preface This publication is for general guidance and informational purposes only. The information