
Abstract:

For the purpose of optimizing the performance separation according to the
usage status of the protocol and the storage system performance, a storage
system 1 includes multiple storage devices 2400, each of which includes a
storage controlling unit 2410 performing data writes to and data reads from
a storage drive 2200 according to data input/output requests from an
external device 1000, and a protocol processing unit 2514 that returns the
processing results for the input/output requests to the external device
1000 and is capable of responding to data input/output requests
transmitted from the external device 1000 following at least two or more
protocols. Each of the storage devices 2400 further includes a cluster
processing unit 2516 configuring clusters 2811 with the other storage
devices 2400 for the external device 1000, and the cluster processing unit
2516 is set to configure cluster groups 2812 for each protocol.

Claims:

1. A storage system coupled to a client device, comprising a plurality of
storage devices and being configured to: partition the plurality of
storage devices into a plurality of logical partitions; activate the
plurality of logical partitions so that the logical partitions are
available for specific protocols, respectively; receive a data
input/output request from the client device; and transfer the data
input/output request to the logical partition available for the protocol
which corresponds to the data input/output request.

2. The storage system according to claim 1, wherein, when a new storage
device is added to the storage system, the storage system relocates the
data of the plurality of storage devices in the storage system including
the new storage device.

3. The storage system according to claim 2, wherein the storage system
partitions the plurality of storage devices including the new storage
device into the plurality of logical partitions before relocating the
data and activates the plurality of logical partitions after relocating
the data.

4. The storage system according to claim 3, wherein the relocation of the
data is judged on the basis of load information.

5. The storage system according to claim 4, wherein the load information
is related to a load status of a logical unit which is provided to the
client device.

6. The storage system according to claim 5, wherein the load status
includes a usage status of the logical unit and performance information of
a port connected with the logical unit.

7. The storage system according to claim 1, wherein, when a storage
device is removed from the storage system, the storage system relocates
the data of the plurality of storage devices to the storage system not
including the removed storage device.

8. The storage system according to claim 7, wherein the data of the
removed storage device is relocated to the logical partition
corresponding to the protocol of the removed storage device.

9. The storage system according to claim 1, wherein, when the storage
system re-partitions the plurality of storage devices, the storage system
relocates the data of the plurality of storage devices within the storage
system.

10. The storage system according to claim 9, wherein the re-partitioning
is performed in such a manner that a storage device of one logical
partition moves to another logical partition.

11. The storage system according to claim 10, wherein the relocation of
the data is judged on the basis of load information.

12. The storage system according to claim 11, wherein the load
information is related to a load status of a logical unit which is
provided to the client device.

13. The storage system according to claim 12, wherein the load status
includes a usage status of the logical unit and performance information of
a port connected with the logical unit.

14. The storage system according to claim 1, further being configured to
inactivate the plurality of logical partitions so that the logical
partitions are not available for any protocols other than their respective
available protocols.

15. The storage system according to claim 1, wherein the transferring of
the data input/output request is performed in a manner of forwarding or
redirecting.

16. A method for controlling a unified storage system, which is coupled
to a client device and comprises a plurality of storage devices, the
method comprising the steps of: partitioning the plurality of storage
devices into a plurality of logical partitions; activating the plurality
of logical partitions so that the logical partitions are available for
specific protocols, respectively; receiving a data input/output request
from the client device; and transferring the data input/output request to
the logical partition available for the protocol which corresponds to the
data input/output request.

Description:

CLAIM OF PRIORITY

[0001] This is a continuation application of U.S. application Ser. No.
12/526,664, filed Aug. 11, 2009, which is a 371 of International
Application No. PCT/JP2009/002947, filed on Jun. 26, 2009, the contents
of which are hereby incorporated by reference into this application.

TECHNICAL FIELD

[0002] This invention relates to a storage system and control methods for
the same, specifically to the technology of optimizing the performance
separation according to the usage status of the protocol and the storage
system performance.

BACKGROUND ART

[0003] For achieving the efficient operation of the storage system, there
is a technology of configuring clusters by partitioning the storage
system into multiple Logical Partitions (LPARs) and providing the
clusters as multiple separate storage systems from the user.

[0004] For example, PTL (Patent Literature) 1 discloses that, in a
cluster-type storage system in which multiple, relatively small-scale
storage systems (clusters) are connected by an interconnection network and
used as one system, the resources of the same cluster are allocated to
each Logical Partition, for the purpose of solving the problem that
logical partitioning within the limited bandwidth of the interconnection
network cannot ensure performance commensurate with the resources
allocated to the Logical Partitions.

[0005] PTL 2 discloses that, for the purpose of efficiently utilizing the
unused resources in the storage system, the control processor logically
partitions the resources including the host I/F, the drive I/F, the disk
drive, the data transfer engine, the cache memory and the control
processor as the target of partitioning and configures multiple Logical
Partitions, and dynamically changes the partitioning ratio for each
Logical Partition depending on the number of accesses from the host
computer.

[0006] PTL 3 discloses that, when managing Logical Partitions of a
subsystem for which storage consolidation has been performed, the function
of changing the RAID configuration is allowed to be released only within a
specified range, in order to prevent operational errors by the
administrator or others when changing the RAID configuration.

[0010] Demand is expected to increase for COS (Cloud Optimized Storage)
for cloud services which, to achieve a low TCO (Total Cost of Ownership),
can be started on a small scale with a small initial investment, can catch
up with rapid growth (by Scale-Out, i.e., installing a new storage system
while keeping the configuration of the existing storage system), and can
flexibly change its configuration.

[0012] It is expected that, in the operation of a storage system for cloud
services using Unified Storage in a Scale-Out configuration, the demand
for efficiently utilizing (partitioning) the resources included in the
Unified Storage (such as the CPU and the cache memory) to optimize
performance will become evident.

[0013] However, as the Unified Storage processes multiple protocols with a
common CPU and the cache memory is also shared by the multiple protocols,
it is difficult to optimize the performance separation according to the
usage status of each protocol and the storage system performance.

[0014] This invention is intended in view of the above-mentioned
background for the purpose of providing the storage system and the
controlling methods for the same which can optimize the performance
separation according to the usage status of the protocol and the storage
system performance.

Solution to Problem

[0015] An aspect of this invention for solving the above-mentioned and
other problems is a storage system comprising multiple storage devices,
the storage device including:

[0016] a storage controlling unit writing data to a storage and reading
data from the storage according to data input/output requests transmitted
from an external device; and

[0017] a protocol processing unit returning the processing results for the
input/output requests to the external device and being capable of
responding to data input/output requests following at least two or more
protocols transmitted from the external device; wherein

[0018] each of the storage devices includes a cluster processing unit
configuring a cluster along with the other storage devices for the
external device, and

[0019] the cluster processing unit can configure cluster groups for each
of the protocols.

[0020] Thus, by configuring cluster groups for each protocol by using the
storage system using the storage device (Unified Storage) including the
protocol processing unit capable of responding to the data input/output
requests following at least two or more protocols transmitted from the
external device, it becomes possible to optimize the performance
separation according to the usage status of the protocol as well as the
entire storage system performance.

[0021] Another aspect of this invention is the above-mentioned storage
system, wherein

[0022] the storage device stores a cluster management table in which the
protocols handled by the cluster groups to which the storage devices
belong are managed, and

[0023] the cluster processing unit configures a cluster group for each of
the protocols by forwarding or redirecting the data input/output requests
transmitted from the external device to the other storage devices
according to the cluster management table.

[0024] This invention enables easy configuration of cluster groups for
each of the protocols by means of the cluster management table, in which
the protocols handled by the cluster group to which each storage device
belongs are managed, and the forwarding or redirecting function included
in the cluster processing unit.

[0025] Another aspect of this invention is the above-mentioned storage
system, wherein

[0026] if cluster groups for each of the protocols are configured, the
cluster processing unit inactivates resources for achieving the protocols
other than the protocol handled by the cluster group to which the storage
device belongs among the resources for processing the protocols in the
storage device.

[0027] As it is not necessary to activate protocols other than the
protocol handled by the cluster group to which the storage device
belongs, by inactivating such resources, the processing load on the
storage device is reduced and the storage device can be efficiently
operated.

[0028] Another aspect of this invention is the above-mentioned storage
system, wherein

[0029] the storage device stores the cluster management table in which the
protocols handled by the cluster group to which each of the storage
devices belongs are managed,

[0030] the storage device provides the storage area of the storage in
units of logical volumes which are the logically set storage area to the
external device, and

[0031] if the contents of the cluster management table are changed when
the cluster groups for each of the protocols have already been configured,
and the protocol to which a logical volume corresponds does not match the
protocol which the storage device to which the relevant logical volume
belongs is supposed to handle after the change, the cluster processing
unit migrates the data of the logical volume to a logical volume of the
other storage device which is supposed to handle, after the change, the
protocol to which the relevant logical volume corresponds.

[0032] Thus, if the contents of the cluster management table are changed,
the cluster processing unit automatically migrates the data to the
logical volume of the other storage device matching the protocol to which
the logical volume corresponds. Therefore, for example, by changing the
cluster management table from a management device, the configuration of
the cluster groups can be changed easily and flexibly.

[0033] Another aspect of this invention is the above-mentioned storage
system, wherein

[0034] the storage device stores the cluster management table in which the
protocols handled by the cluster group to which each of the storage
devices belongs are managed,

[0035] the storage device provides the storage area of the storage in
units of logical volumes which are the logically set storage area to the
external device, and

[0036] if the contents of the cluster management table are changed to
newly add a storage device to an already configured cluster group when
the cluster groups for each of the protocols have already been
configured, the cluster processing unit determines whether it is
necessary to relocate the data or not according to load information of
the logical volume, and if it is determined to be necessary, migrates the
data between the logical volumes.

[0037] As mentioned above, if the contents of the cluster management table
are changed to newly add the storage device to the already configured
cluster group when the cluster groups for each of the protocols have
already been configured, the cluster processing unit determines whether
it is necessary to relocate the data or not according to the load
information of the logical volume, and if it is determined to be
necessary, migrates the data between the logical volumes. Thus, this
invention enables easy addition of a new storage device to the already
configured cluster group by changing the contents of the cluster
management table. It is also possible to optimize the performance of the
storage system by attempting load distribution at the time of relocation.

[0038] Another aspect of this invention is the above-mentioned storage
system, wherein

[0039] the storage device stores the cluster management table in which the
protocols handled by the cluster group to which each of the storage
devices belongs are managed,

[0040] the storage device provides the storage area of the storage in
units of logical volumes which are the logically set storage area to the
external device, and

[0041] if the contents of the cluster management table are changed to
delete a storage device from an already configured cluster group when the
cluster groups for each of the protocols have already been configured,
the cluster processing unit migrates the data of the logical volume of
the storage device to be deleted to the logical volume of another storage
device handling the protocol which the relevant storage device used to
handle.

[0042] As mentioned above, if the contents of the cluster management table
are changed to delete the storage device from the already configured
cluster group when the cluster groups for each of the protocols have
already been configured, the cluster processing unit migrates the data to
the logical volume of the other storage device handling the protocol
which the relevant storage device used to handle. Thus, this invention
enables easy deletion of a storage device from the already configured
cluster group by changing the contents of the cluster management table.

[0043] Another aspect of this invention is the above-mentioned storage
system, wherein

[0044] the storage device stores the cluster management table in which the
protocols handled by the cluster group to which each of the storage
devices belongs are managed,

[0045] the storage device provides the storage area of the storage in
units of logical volumes which are the logically set storage area to the
external device, and

[0046] if the contents of the cluster management
table are changed to migrate a storage device configuring a certain
cluster group to another cluster group when the cluster groups for each
of the protocols have already been configured, the cluster processing
unit determines whether it is necessary to relocate the data or not
according to the load information of the logical volume, and if it is
determined to be necessary, migrates the data between the logical
volumes.

[0047] As mentioned above, if the contents of the cluster management table
are changed to migrate the storage device configuring a certain cluster
group to the other cluster group when the cluster groups for each of the
protocols have already been configured, the cluster processing unit
determines whether it is necessary to relocate the data or not according
to the load information of the logical volume, and if it is determined to
be necessary, migrates the data between the logical volumes. Thus, this
invention enables easy migration of a storage device configuring a
certain cluster group to the other cluster group by changing the contents
of the cluster management table. It is also possible to optimize the
performance of the storage system by attempting load distribution at the
time of relocation.

[0048] Note that the protocol is at least one of iSCSI (Internet Small
Computer System Interface), NFS/CIFS (NFS: Network File System, CIFS:
Common Internet File System), and FC (Fibre Channel).

[0049] The other problems and the means for solving for the same disclosed
by this application are described in the embodiments and the attached
figures.

Advantageous Effects of Invention

[0050] By this invention, the performance separation according to the
usage status of the protocol and the storage system performance can be
optimized.

BRIEF DESCRIPTION OF DRAWINGS

[0051] FIG. 1 is a schematic diagram of the configuration of the storage
system 1.

[0052] FIG. 2 is a diagram showing the functions of the client device
1000, the management device 1100, and the storage devices 2400.

[0053] FIG. 3 is a diagram showing the operations of the storage devices
2400.

[0054] FIG. 4 is a diagram showing the method of accessing the LUs 2420
when newly installing a Unified Storage.

[0055] FIG. 5 is a diagram showing the case of achieving a cluster by
forwarding.

[0056] FIG. 6 is a diagram showing an example of the cluster management
table 2517.

[0057] FIG. 7 is a diagram showing the case of achieving a cluster by
redirection.

[0058] FIG. 8 is a diagram showing the case of setting cluster groups 2812
in units of protocols.

[0059] FIG. 9 is an example of the cluster management table 2517.

[0060] FIG. 10 is a flowchart showing the cluster group setting processing
S1000.

[0061] FIG. 11 is a diagram showing the case of changing the settings of
the cluster groups 2812.

[0071] The embodiments are described below. FIG. 1 shows the schematic
view of the configuration of the storage system 1 described as an
embodiment. As shown in FIG. 1, this storage system 1 includes multiple
storage devices 2400, one or more client devices 1000 (external device),
and a management device 1100 installed in the first site 6000 such as a
data center, a system operation center or others.

[0072] The client device 1000 and the storage devices 2400 are connected
as communicable via a data network 5000. The data network 5000 is, for
example, LAN (Local Area Network), WAN (Wide Area Network), or SAN
(Storage Area Network). The storage devices 2400 are connected as
communicable with the management device 1100 via a management network
5001. The management network 5001 is, for example, LAN or WAN. The
storage devices 2400 are connected as communicable with other storage
systems installed in the other sites (the second site 7000 and the third
site 8000) via an internal network 5002 and an external network 5003. The
internal network 5002 is, for example, LAN, WAN, or SAN. The external
network 5003 includes LAN, WAN, SAN, the Internet, a public
telecommunication network, and dedicated lines. Note that it is also
possible to achieve the data network 5000, the management network 5001,
the internal network 5002, and the external network 5003 by sharing the
same physical communication media by a method of logically partitioning
the communication bandwidth such as VLAN (Virtual LAN).

[0074] The client device 1000 is an information processing device
(computer) such as a personal computer, an office computer, mainframe or
others. The client device 1000 includes at least a processor 1001 (e.g.,
a CPU (Central Processing Unit) or an MPU (Micro Processing Unit)), a
memory 1002 (e.g., a volatile or non-volatile RAM (Random Access Memory)
or a ROM (Read Only Memory)), and a communication interface 1003 (e.g.,
NIC (Network Interface Card) or an HBA (Host Bus Adaptor)). The client
device 1000 may also include a storage (e.g., hard disk or a
semiconductor storage device (SSD (Solid State Drive))), inputting
devices such as a keyboard, a mouse and others, and outputting devices
such as liquid crystal monitors and printers.

[0076] The management device 1100 is an information processing device
(computer) such as a personal computer, an office computer, or others.
The management device 1100 includes at least a processor 1101 (e.g., a
CPU or an MPU), a memory 1102 (e.g., a volatile or non-volatile RAM or a
ROM), and a communication interface 1103 (e.g., an NIC or an HBA).

[0077] The management device 1100 may also include a storage (e.g., a hard
disk or a semiconductor storage device (SSD (Solid State Drive))),
inputting devices such as a keyboard, a mouse and others, and outputting
devices such as liquid crystal monitors and printers. The management
device 1100 is connected as communicable with the storage devices 2400
via the management network 5001. The management device 1100 includes a
user interface of GUI (Graphical User Interface), CLI (Command Line
Interface), or others, and provides the functions for the user or the
operator to control or monitor the storage devices 2400. It is also
possible for the management device 1100 to be part of the storage devices
2400.

[0078] Each storage device 2400 includes a header device 2500, a drive
controlling device 2100, and a storage drive 2200 (storage). Each of the
header device 2500, the drive controlling device 2100, and the storage
drive 2200 may be packed in a separate chassis, or at least some of these
may be packed in the same chassis. The functions of the header device
2500 and the drive controlling device 2100 may also be achieved by common
hardware.

[0081] The header device 2500 includes functions for performing the
control related to the protocols (e.g., NAS (Network Attached Storage),
iSCSI, NFS/CIFS, and FC) required for the communication with the client
device 1000, functions related to file control (file system), and a
function of caching data and data input/output requests exchanged with
the client device 1000. The details of the functions included in the
header device 2500 are described later.

[0082] The drive controlling device 2100 includes external communication
interfaces 2104 and 2105 (e.g., NIC, HBA, PCI, and PCI-Express) for the
connection with the header device 2500 and the management network 5001, a
processor 2101 (e.g., a CPU, an MPU, a DMA or a custom LSI), a memory
2102 (volatile or non-volatile RAM or ROM), a cache memory 2103 (volatile
or non-volatile RAM), and a drive communication interface 2106 (e.g., SAS
(Serial Attached SCSI), SATA (Serial ATA), PATA (Parallel ATA), FC, or
SCSI) for the communication with the storage drive.

[0083] The drive controlling device 2100 reads or writes data from or to
the storage drive 2200 according to the data input/output requests
(hereinafter referred to as drive access requests) transmitted from the
header device 2500. The drive controlling device 2100 includes various
functions for safely or efficiently utilizing the storage drive 2200 such
as the function of controlling the storage drive 2200 by RAID (Redundant
Arrays of Inexpensive (or Independent) Disks), the function of providing
logical storage devices (LDEVs (Logical DEVices)) to the header device
2500 and the client device, the function of verifying the soundness of
the data, the function of obtaining snapshots, and others. The details of
the functions included in the drive controlling device 2100 are described
later.

[0085] FIG. 2 shows the functions of the client device 1000, the
management device 1100, and the storage devices 2400. In the client
device 1000, application software (hereinafter referred to as an
application 1011) and a communication client 1012 are executed. The
application 1011 is the software for providing, for example, file
sharing, Email, database or others.

[0086] The communication client 1012 (protocol client) communicates with
the storage device 2400 (e.g., transmitting data input/output requests
and receiving the responses for the same). For the above-mentioned
communication, the communication client 1012 performs the processing
(e.g., format conversion and communication control) related to the
protocols (e.g., iSCSI, NFS/CIFS, and FC). Note that these functions are
achieved by the processor 1001 of the client device 1000 reading the
programs stored in the memory 1002.

[0087] The management device 1100 includes a management unit 1111. The
management unit 1111 sets, controls, and monitors the operations of the
storage devices 2400. Note that the management unit 1111 is achieved by
the processor 1101 of the management device 1100 through reading and
executing the programs stored in the memory 1102. The functions of the
management unit 1111 may be achieved either by a different device from
the storage devices 2400 or by the storage devices 2400.

[0088] The header device 2500 of the storage devices 2400 includes an
operating system 2511 (including driver software), a volume management
unit 2512, a file system 2513, one or more communication servers 2514
(protocol servers) (protocol processing units), and a cluster processing
unit 2516. Note that these functions are achieved by the hardware of the
header device 2500 or by the processor 2501 reading the programs stored
in the memory 2502.

[0089] The volume management unit 2512 provides a virtual storage area
(hereinafter referred to as a virtual volume) based on a logical storage
area provided by the storage drive 2200 to the client device 1000.

[0090] The file system 2513 accepts data input/output requests from the
client device 1000 by a file specification method. The header device 2500
operates as the NAS (Network Attached Storage) server in the data network
5000. Note that, though the description of this embodiment assumes the
header device 2500 to include the file system 2513, the header device
2500 does not necessarily include the file system 2513. The header device
2500, for example, may accept data input/output requests by an LBA
(Logical Block Address) specification method.

[0091] The communication server 2514 communicates with the communication
client 1012 of the client device 1000 and the management unit 1111 of the
management device 1100. For the above-mentioned communication, the
communication server 2514 performs the processing related to the
protocols (e.g., iSCSI, NFS/CIFS, and FC) (e.g., format conversion and
communication control).

[0092] The cluster processing unit 2516 processes data input/output
requests following the previously set cluster definition. For example, if
a data input/output request received from the client device 1000 is
addressed to the storage device itself, the cluster processing unit 2516
processes the request by itself. If the received data input/output request
is addressed to another storage device 2400, it transmits the relevant
request to the other storage device 2400 that handles its processing. The
cluster processing unit 2516 manages the above-mentioned cluster
definitions in a cluster management table 2517.
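
This dispatch behavior can be sketched in a few lines of code. The
following Python fragment is a minimal illustration only; SELF_ID,
CLUSTER_TABLE, Request, and the stub functions are all hypothetical names,
and the patent does not prescribe any particular implementation.

    # Minimal sketch of the dispatch performed by the cluster processing
    # unit 2516. All names are hypothetical illustrations.
    from dataclasses import dataclass

    SELF_ID = "10.1.1.1"                   # device ID of this storage device 2400
    CLUSTER_TABLE = {"lun-0": "10.1.1.1",  # which device handles which LU
                     "lun-1": "10.1.1.6"}

    @dataclass
    class Request:
        lun: str
        payload: bytes

    def process_locally(req: Request) -> str:
        return f"processed {req.lun} on {SELF_ID}"

    def forward(target: str, req: Request) -> str:
        return f"forwarded {req.lun} to {target}"  # stub for inter-device transfer

    def dispatch(req: Request) -> str:
        target = CLUSTER_TABLE[req.lun]    # consult the cluster definition
        if target == SELF_ID:
            return process_locally(req)    # the request addresses this device
        return forward(target, req)        # another device handles this LU

For example, dispatch(Request("lun-1", b"")) would be forwarded to the
device at "10.1.1.6".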

[0093] The drive device 2000 of the storage devices 2400 includes a
storage controlling unit 2410. The storage controlling unit 2410 provides
the logical volume (hereinafter referred to as an LU 2420 (LU: Logical
Unit)) as a logical storage area based on the storage drive 2200 to the
header device 2500. The header device 2500 can specify an LU 2420 by
specifying an identifier of the LU 2420 (LUN (Logical Unit Number)). The
function of the storage controlling unit 2410 is achieved, for example,
by an LVM (Logical Volume Manager).

Unified Storage

[0094] FIG. 3 is a diagram showing the operations of the storage devices
2400. As mentioned above, the storage devices 2400 are Unified Storages
supporting multiple protocols (iSCSI, NFS/CIFS, FC), i.e., capable of
responding to data input/output requests following multiple protocols. The
storage device 2400 shown in the figure supports both the NFS and iSCSI
protocols and includes a communication server 2514 operating as an NFS
server and a communication server 2514 operating as an iSCSI target. The
former accepts data input/output requests transmitted from the
communication client 1012 operating as an NFS client of a client device
1000, while the latter accepts data input/output requests transmitted from
the communication client 1012 operating as an iSCSI initiator of a client
device 1000.

Resource Allocation

[0095] FIG. 4 is a diagram showing the method of accessing the LUs 2420
when the second storage device 2400 is newly installed as a Unified
Storage in the storage system 1 configured by using the first storage
device 2400 as a Unified Storage, for the purpose of enhancing resources
(performance, capacity, and others). The user or the operator
adds the second storage device 2400 by performing the setting of the LUs
2420 to mount on each client device 1000, the setting of the access path
from the client device 1000 to the LUs 2420, and other settings, by using
the user interface provided by the management unit 1111 of the management
device 1100. Note that, by this method, the storage devices 2400 (the
first storage device 2400 and the second storage device 2400) are
recognized as two independent devices by the client device 1000.

Cluster

[0096] FIG. 5, similarly to FIG. 4, shows the case of newly installing the
second storage device 2400. It differs from FIG. 4 in that a cluster 2811
is configured by using the two storage devices 2400, the first storage
device 2400 and the second storage device 2400. If the cluster 2811 is
configured by this method, the first storage device 2400 and the second
storage device 2400 are recognized as one virtual storage system by a
client device 1000. Such a cluster 2811 can be achieved, for example, by
forwarding or redirecting, in the storage devices 2400, the data
input/output requests transmitted from the client device 1000.

[0097] FIG. 5 is a diagram showing the case of achieving a cluster 2811 by
forwarding, and FIG. 7 is a diagram showing the case of achieving a
cluster 2811 by redirection.

[0098] As shown in FIG. 5, in the case of forwarding, the cluster
processing unit 2516 firstly identifies the storage device 2400 which
should process the data input/output request received from the client
device 1000. Then, the cluster processing unit 2516 processes the request
if the request is the data input/output request which should be processed
by itself, and if the request is the data input/output request which
should be processed by the other storage device 2400, transfers the
received data input/output request to the other storage device 2400.

[0099] FIG. 6 shows the information (hereinafter referred to as the
cluster management table 2517) referred to by the cluster processing unit
2516 when identifying the storage device 2400 which should process the
data input/output request. As shown in the figure, the cluster management
table 2517 includes one or more records consisting of a cluster name 611
and a node 612. In the cluster name 611, an identifier given to each of
the configured cluster groups (hereinafter referred to as a cluster ID) is
set. In the node 612, an identifier of the storage device 2400 as a
component of each cluster (hereinafter referred to as a device ID) is set.
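
As a concrete illustration, the two-column table of FIG. 6 could be held
as records like the following (the device IDs are hypothetical; only the
column structure, a cluster name 611 paired with a node 612, is taken from
the figure):

    # Hypothetical encoding of the cluster management table 2517 of FIG. 6:
    # each record pairs a cluster name 611 (cluster ID) with a node 612
    # (device ID of a storage device 2400 composing that cluster).
    cluster_management_table = [
        {"cluster_name": "Cluster 1", "node": "10.1.1.1"},
        {"cluster_name": "Cluster 1", "node": "10.1.1.2"},
        {"cluster_name": "Cluster 2", "node": "10.1.1.3"},
    ]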

[0100] As shown in FIG. 7, in the case of redirection, the cluster
processing unit 2516 firstly identifies the storage device 2400 which
should process the data input/output request received from the client
device 1000 by referring to the cluster management table 2517. Then, the
cluster processing unit 2516 processes the request if the request is the
data input/output request which should be processed by itself and, if the
request is the data input/output request which should be processed by the
other storage device 2400, notifies the redirecting address (the other
storage device 2400 identified (e.g., network address)) to the client
device 1000 which transmitted the received data input/output request. The
client device 1000 which has received the notification retransmits the
data input/output request to the notified redirecting address.
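
The difference between redirection and forwarding can likewise be sketched
as follows. This is a hedged illustration with hypothetical names, not the
actual protocol exchange, but it shows the extra round trip that
redirection imposes on the client device 1000.

    # Sketch of redirection (FIG. 7): instead of transferring the request
    # itself, the storage device notifies the client of the redirecting
    # address, and the client retransmits the request there.
    SELF_ID = "10.1.1.1"
    CLUSTER_TABLE = {"lun-0": "10.1.1.1", "lun-1": "10.1.1.6"}

    def handle(lun: str) -> str:
        target = CLUSTER_TABLE[lun]
        if target == SELF_ID:
            return f"processed {lun} locally"
        return f"REDIRECT {target}"        # notify the redirecting address

    def client_send(lun: str) -> str:
        reply = handle(lun)
        if reply.startswith("REDIRECT"):   # client retransmits the request
            return f"retransmitted {lun} to {reply.split()[1]}"
        return reply

    print(client_send("lun-1"))            # -> retransmitted lun-1 to 10.1.1.6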

Cluster Group in Units of Protocols

[0101] Though the methods of achieving clusters 2811 without depending on
the protocols have been described above, a group of clusters (hereinafter
referred to as a cluster group 2812) can be set in units of protocols by
using the characteristic of the Unified Storages (the capability of
supporting multiple protocols).

[0102] FIG. 8 is a diagram showing the case of setting cluster groups 2812
in units of protocols (each with the NFS server and the iSCSI target
shown in the figure). Cluster groups 2812 can also be achieved by
preparing the above-mentioned cluster management table 2517 and
performing forwarding or redirection.

[0103] FIG. 9 is an example of the cluster management table 2517 used for
managing the cluster group 2812. As shown in the figure, the cluster
management table 2517 includes one or more records consisting of a
cluster name 911, a node 912, a role 913, and a node 914. In the cluster
name 911, an identifier given to each of the configured clusters
(hereinafter referred to as a cluster ID) is set. In the node 912, an
identifier of the storage device 2400 as a component of each cluster
(hereinafter referred to as a device ID) is set. In the role 913, an
identifier for identifying the protocol supported by each cluster group
2812 (hereinafter referred to as a protocol ID) is set. In the node 914,
an identifier of the storage device 2400 allocated to each role is set.

[0104] In the cluster management table 2517 shown in FIG. 9, the group of
storage devices 2400 with the cluster name 911 of "Cluster 1" and the
node 914 of "10.1.1.1-10.1.1.5" configures the cluster group 2812 with the
NFS server role 913 of "NAS-1". The group of storage devices 2400 with the
cluster name 911 of "Cluster 1" and the node 914 of "10.1.1.6-10.1.1.10"
configures the cluster group 2812 with the iSCSI server role 913 of
"iSCSI-1". The group of storage devices 2400 with the cluster name 911 of
"Cluster 2" and the node 914 of "10.1.1.11-10.1.1.15" configures the
cluster group 2812 with the FC (Fibre Channel) server role 913 of "FC-1".
The group of storage devices 2400 with the cluster name 911 of "Cluster 2"
and the node 914 of "10.1.1.16-10.1.1.20" configures the cluster group
2812 with the NFS server role 913 of "NAS-2".
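
The four roles above could be encoded as records of the cluster management
table 2517 like this (a sketch that simply restates FIG. 9; the field
names are hypothetical):

    # Hypothetical encoding of the cluster management table 2517 of FIG. 9.
    # The role 913 (protocol ID) identifies the protocol supported by each
    # cluster group 2812; the node 914 lists the devices allocated to it.
    cluster_management_table = [
        {"cluster_name": "Cluster 1", "role": "NAS-1",   "nodes": "10.1.1.1-10.1.1.5"},
        {"cluster_name": "Cluster 1", "role": "iSCSI-1", "nodes": "10.1.1.6-10.1.1.10"},
        {"cluster_name": "Cluster 2", "role": "FC-1",    "nodes": "10.1.1.11-10.1.1.15"},
        {"cluster_name": "Cluster 2", "role": "NAS-2",   "nodes": "10.1.1.16-10.1.1.20"},
    ]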

[0105] Note that, for setting a cluster group 2812 in units of protocols,
it is conceivable to inactivate the resources (hardware resources and
software resources) for achieving the protocols unused by each storage
device 2400. FIG. 8 shows the active resources in solid lines and the
inactive resources in dashed lines. As shown in the figure, by
inactivating the resources of the unused protocols, the processing load on
the storage device 2400 can be reduced, and the storage device 2400 can be
efficiently operated.

Initial Setting

[0106] FIG. 10 is a flowchart showing the processing performed when the
user or the operator performs the initial setting of a cluster 2811 and a
cluster group 2812 by operating the management device 1100 (hereinafter
referred to as the cluster group setting processing S1000). Note that the
letter "S" prefixed to the numerals in the description below indicates a
step.

[0107] Firstly, the management unit 1111 of the management device 1100
accepts setting information of the cluster 2811 and setting information
of the cluster group 2812 from the user or the operator (S1011 to S1012).
At this time, the management unit 1111 accepts the setting information,
for example, by displaying the setting screen of the cluster management
table 2517 shown in FIG. 9.

[0109] Upon reception of the registration completion report (S1031), the
management unit 1111 transmits a resource setting command (a command for
setting activation or inactivation) (S1032) to the cluster processing
unit 2516 of each storage device 2400. Upon reception of the resource
setting command (S1041), the cluster processing unit 2516 of each storage
device 2400 activates the resources related to the protocols which it
handles, and inactivates the resources related to the protocols which it
does not handle (S1042). This series of processing (S1031, S1032, S1041,
and S1042) is not mandatory and may be treated as optional. The same
applies to the following descriptions.

[0110] Note that the activation of software resources means starting the
process of the software, while the inactivation of software resources
means terminating or deleting the process of the software.

[0111] The cluster processing unit 2516, according to the interface
protocol allocated to the node, performs the operations of starting,
terminating, and deleting the interface processing process (e.g., nfsd or
iSCSI target software).
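
Since activation means starting the protocol's service process and
inactivation means terminating it, the resource setting of S1042 reduces,
in effect, to the following sketch. The process names and the start/stop
actions are hypothetical stand-ins; real nfsd or iSCSI target software
would be managed through the operating system 2511.

    # Sketch of the resource setting (activation or inactivation) of S1042:
    # start the process of the protocol handled by this node's cluster
    # group 2812, and terminate the processes of the other protocols.
    PROTOCOL_PROCESS = {"NFS": "nfsd", "iSCSI": "iscsi-target", "FC": "fc-target"}

    def apply_resource_setting(handled_protocol: str) -> None:
        for protocol, process in PROTOCOL_PROCESS.items():
            if protocol == handled_protocol:
                print(f"start {process}")  # activate the handled protocol
            else:
                print(f"stop {process}")   # inactivate the unused protocols

    apply_resource_setting("NFS")          # node assigned to a NAS cluster group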

The Case of Changing Cluster Group Setting

[0112] Next, the case of changing the setting of cluster groups 2812 when
clusters 2811 and cluster groups 2812 are already set in the storage
system 1, as shown in FIG. 11, is described below. FIG. 12 is a flowchart
showing the processing of changing the setting of cluster groups 2812
when clusters 2811 and cluster groups 2812 are already set in the storage
system 1 (hereinafter referred to as the setting changing processing
S1200).

[0113] Firstly, the management unit 1111 of the management device 1100
accepts setting information of the cluster groups 2812 from the user or
the operator (S1211). The management unit 1111 accepts the setting
information, for example, by displaying the setting screen of the cluster
management table 2517 shown in FIG. 9.

[0114] Next, the management unit 1111 transmits the accepted setting
information to each of the storage devices 2400 via the management
network 5001 (S1212). The cluster processing unit 2516 of each storage
device 2400 receives the setting information transmitted from the
management device 1100 (S1221), and reflects the received setting
information in the cluster management table 2517 (S1222). After
reflecting the setting information in the cluster management table 2517,
the cluster processing unit 2516 of each storage device 2400 transmits a
registration completion report to the management device 1100 (S1223).

[0115] Upon reception of the registration completion report (S1231), the
management unit 1111 of the management device 1100 identifies any LU 2420
which does not match the newly set configuration of the cluster groups
2812 (i.e., an LU not matching the usage of its cluster group 2812 after
the change of the setting) (S1232). At this step, such an LU 2420 is
identified by referring to the information related to the LUs 2420 managed
by each of the storage devices 2400. FIG. 13 shows an example of such
information (hereinafter referred to as the volume management table 1300).
In the volume management table 1300 shown in the figure, the current usage
1312 (protocol) of each LU 2420 (LUN 1311) is managed.
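
A record of the volume management table 1300 therefore only needs to pair
each LUN 1311 with its current usage 1312, for example (the values below
are hypothetical):

    # Hypothetical encoding of the volume management table 1300 of FIG. 13:
    # the current usage 1312 (protocol) is recorded per LU 2420 (LUN 1311).
    volume_management_table = [
        {"lun": 0, "usage": "NFS"},
        {"lun": 1, "usage": "iSCSI"},
        {"lun": 2, "usage": "FC"},
    ]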

[0116] Next, the management unit 1111 determines the storage device 2400
as the migration destination of the LU 2420 identified at S1232
(hereinafter referred to as the migration destination storage device
2400) (S1233). This determination by the management unit 1111 is made,
after the above-mentioned change of the setting, by selecting one of the
storage devices 2400 supporting the protocols matching the usage of the
LU 2420 identified at S1232 from the cluster management table 2517. Note
that, if multiple migration destination storage devices 2400 can be
selected at this step, the management unit 1111 selects the most
appropriate one in view of, for example, the round-robin method, load
distribution, the remaining capacity of the storage area and other
factors.
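
One way to realize the selection of S1233 is sketched below; the
remaining-capacity criterion is just one of the factors the paragraph
lists (round-robin and load distribution are equally possible), and the
candidate fields are hypothetical.

    # Sketch of the migration destination selection of S1233: among the
    # storage devices 2400 whose cluster group 2812 supports the protocol
    # matching the LU's usage, pick the one with the most free capacity.
    def choose_destination(lu_usage: str, candidates: list) -> dict:
        matching = [c for c in candidates if c["protocol"] == lu_usage]
        return max(matching, key=lambda c: c["free_capacity"])

    dest = choose_destination("iSCSI", [
        {"node": "10.1.1.6", "protocol": "iSCSI", "free_capacity": 800},
        {"node": "10.1.1.7", "protocol": "iSCSI", "free_capacity": 500},
        {"node": "10.1.1.1", "protocol": "NFS",   "free_capacity": 900},
    ])                                     # -> the device at 10.1.1.6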

[0117] Next, the management unit 1111 starts the processing of migrating
the data of the identified LU 2420 to the determined destination
(hereinafter referred to as the migration processing S1234) (S1234). FIG.
14 is a flowchart showing the details of the migration processing S1234.

[0118] Firstly, the management unit 1111 of the management device 1100
selects one of the identified LUs 2420 (hereinafter referred to as the
migration source LU 2420) (S1411). Next, the management unit 1111
transmits a command for obtaining a snapshot of the migration source LU
2420 to the storage device 2400 in which the selected migration source LU
2420 exists (hereinafter referred to as a migration source storage device
2400) (S1412). Upon reception of the command for obtaining the snapshot
(S1413), the migration source storage device 2400 obtains the snapshot of
the migration source LU 2420 (S1414).

[0120] Next, the management unit 1111 transmits a command for replicating
the data stored in the migration source LU 2420 to the migration
destination LU 2420 to the migration source storage device 2400 and the
migration destination storage device 2400 (S1431). Upon reception of the
above-mentioned replication command (S1432 and S1433), the migration
source storage device 2400 and the migration destination storage device
2400 transfer the snapshot obtained at S1414 from the migration source
LU 2420 to the migration destination LU 2420 (S1434 and S1435). After the
replication is completed, a snapshot replication completion report is
transmitted from the migration destination storage device 2400 (or,
alternatively, the migration source storage device 2400) to the
management device 1100 (S1436). The management device 1100 receives the
transmitted replication completion report (S1437). After receiving the
replication completion report, the management device 1100 issues a
command for terminating the migration source LU (S1441-S1443). Further,
the management device 1100 issues a command for starting the migration
destination LU (S1451-S1453).
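
Condensed to its essential order of operations, the migration processing
S1234 proceeds as in the sketch below; each print is a hypothetical
stand-in for the commands and completion reports of S1412 through S1453,
and error handling is omitted.

    # Sketch of the migration processing S1234 (FIG. 14): snapshot the
    # migration source LU 2420, replicate it to the migration destination
    # LU 2420, then terminate the source LU and start the destination LU.
    def migrate_lu(source_lu: str, dest_lu: str) -> None:
        snapshot = f"snapshot-of-{source_lu}"        # S1412-S1414
        print(f"replicate {snapshot} -> {dest_lu}")  # S1431-S1437
        print(f"terminate {source_lu}")              # S1441-S1443
        print(f"start {dest_lu}")                    # S1451-S1453

    migrate_lu("LU-src", "LU-dst")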

[0121] Note that, if the migration source storage device 2400 implements a
function of performing the above-mentioned replication without suspending
the services for the client device 1000 (hereinafter referred to as the
fault-tolerant migration function), differential data generated while the
replication is in progress is managed in the cache memory 2503 (or,
alternatively, the cache memory 2103) and the storage drive 2200, and
after the replication is completed, the managed differential data is
reflected in the migration destination LU 2420.

[0122] Next, the management unit 1111 performs the processing for
transitioning the port connected with the migration source LU 2420 to the
port connected with the migration destination LU 2420 (S1461). This
processing is performed by, for example, setting the IP address of the
iSCSI port connected with the migration source LU 2420 as the IP address
of the iSCSI port connected with the migration destination LU 2420. Note
that it is preferable to perform this transition without changing the
settings of the client device 1000, for example, by changing the IP
address through DNS (Domain Name Server (System)) rather than changing the
service name or the like before and after the migration.
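
The effect of S1461 is an IP address takeover: the destination port
inherits the address of the source port so that the client device 1000
keeps using the same address. A minimal sketch, with hypothetical port
names:

    # Sketch of the port transition of S1461: the iSCSI port connected
    # with the migration destination LU 2420 takes over the IP address of
    # the port connected with the migration source LU 2420.
    ports = {"src-iscsi-port": "10.1.1.100", "dst-iscsi-port": None}

    def transition_port(ports: dict, src: str, dst: str) -> None:
        ports[dst] = ports[src]            # destination inherits the address
        ports[src] = None                  # source port releases it

    transition_port(ports, "src-iscsi-port", "dst-iscsi-port")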

[0123] At S1462, the management unit 1111 determines whether there are any
LUs 2420 not selected at S1411. If there are any (S1462: YES), the
processing returns to S1411. If not (S1462: NO), the migration processing
S1234 is completed and the processing returns to S1235 in FIG. 12.

[0124] Note that the above-mentioned description of the migration
processing S1234 assumes the use of the data replication function among
the storage devices 2400 implemented in the storage devices 2400, but it
is also possible to perform the data replication from the migration
source LU 2420 to the migration destination LU 2420 by using the
replication function implemented in the network switch configuring the
data network 5000, the replication function installed in the client
device 1000 or others.

[0125] The description returns to FIG. 12 again. At S1235, the management
unit 1111 transmits a resource setting command to the cluster processing
unit 2516 of each storage device 2400 (S1235). Upon reception of the
transmitted resource setting command (S1241), the cluster processing unit
2516 of each storage device 2400 performs the resource setting
(activation or inactivation) according to the setting of its cluster
group 2812 after the change (S1242).

[0126] As mentioned above, if the contents of the cluster management table
2517 are changed, the cluster processing unit 2516 automatically migrates
the data to the LU 2420 of the other storage device 2400 matching the
protocol corresponding to the LU 2420. Therefore, the configuration of
the cluster groups 2812 can be changed easily and flexibly by, for
example, changing the cluster management table 2517 from the management
device 1100.

The Case of Adding New Node to Existing Cluster Group

[0127] Next, as shown in FIG. 15, the case of adding a new node (storage
device 2400) to an existing cluster group 2812 is described below. FIG.
16 is a flowchart showing the processing performed in this case (node
addition processing S1600). The node addition processing S1600 is
described below by referring to the above-mentioned figure.

[0128] Firstly, the management unit 1111 of the management device 1100
accepts setting information of adding a new node (the fifth storage
device 2400 in the above-mentioned figure) to a cluster group 2812 from
the user or the operator (S1611). At this time, the management unit 1111
accepts the setting information, for example, by displaying the setting
screen of the cluster management table 2517 shown in FIG. 9.

[0130] Upon reception of the registration completion report (S1631), the
management unit 1111 determines whether it is necessary to migrate
(relocate) the data due to the addition of the node (S1632). This
determination is made with reference to the load information of the LUs
2420, such as the usage status of each LU 2420 in each storage device 2400
or the performance information of the ports connected with each LU 2420.
If data migration is determined to be necessary (S1632: YES), the
management unit 1111 starts the data migration processing (S1633). This
migration processing S1633 is the same as the above-mentioned migration
processing S1234 shown in FIG. 14.
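
The decision of S1632 can be reduced to a threshold test over the load
information. The metric and the threshold below are hypothetical
illustrations of what the "load information of the LUs 2420" might
contain.

    # Sketch of the relocation decision of S1632: relocate when the load
    # of any LU 2420 (e.g., usage rate or port throughput) exceeds a
    # threshold.
    LOAD_THRESHOLD = 0.8

    def relocation_needed(lu_loads: list) -> bool:
        return any(load > LOAD_THRESHOLD for load in lu_loads)

    if relocation_needed([0.45, 0.92, 0.30]):
        print("start the data migration processing (S1633)")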

[0131] Next, the management unit 1111 transmits a resource setting command
to the cluster processing unit 2516 of each storage device 2400 (S1635).
Upon reception of the transmitted resource setting command (S1641), the
cluster processing unit 2516 of each storage device 2400 performs the
resource setting (activation or inactivation) according to the setting of
its cluster group 2812 after the change (S1642).

[0132] As mentioned above, if the contents of the cluster management table
2517 are changed to add a new storage device 2400 to an existing cluster
group 2812, the cluster processing unit 2516 determines whether it is
necessary to relocate the data or not according to the load information
of the LUs 2420, and if it is determined to be necessary, automatically
migrates the data between the LUs 2420. As mentioned above, this method
enables easy addition of a new storage device 2400 to the already
configured cluster group 2812. It is also possible to optimize the
performance of the storage system 1 by attempting load distribution at
the time of relocation.

The Case of Deleting Node from Cluster Group

[0133] Next, as shown in FIG. 17, the case of deleting a node from an
existing cluster group 2812 is described below. FIG. 18 is a flowchart
showing the processing performed in this case (node deletion processing
S1800). The node deletion processing S1800 is described below by
referring to FIG. 18.

[0134] Firstly, the management unit 1111 of the management device 1100
accepts setting information of deleting a node (the first storage device
2400 in FIG. 17) from a cluster group 2812 from the user or the operator
(S1811). At this time, the management unit 1111 accepts the setting
information, for example, by displaying the setting screen of the cluster
management table 2517 shown in FIG. 9.

[0136] Upon reception of the registration completion report (S1831), the
management unit 1111 determines the storage device 2400 as the migration
destination (hereinafter referred to as the migration destination storage
device 2400) of the data stored in the LU 2420 (LU 2420 to be deleted) of
the storage device 2400 to be deleted (S1832). This determination by the
management unit 1111 is made, after the above-mentioned change of the
setting, by selecting one of the storage devices 2400 supporting the
protocols matching the usage of the LU 2420 to be deleted from the
cluster management table 2517. Note that, if multiple migration
destination storage devices 2400 can be selected at this step, the
management unit 1111 selects the most appropriate one in view of, for
example, the round-robin method, load distribution, the remaining
capacity of the storage area and other factors.

[0137] Next, the management unit 1111 starts to migrate the data stored in
the LU 2420 of the first storage device 2400 to be deleted to the LU 2420
of another storage device 2400 determined at S1832 (S1833). This
migration processing is the same as the above-mentioned migration
processing S1234 shown in FIG. 14.

[0138] As mentioned above, if the contents of the cluster management table
2517 are changed to delete a storage device 2400 from an already
configured cluster group 2812 when the cluster groups 2812 for each of
the protocols have already been configured, the cluster processing unit
2516 migrates the data to the LU 2420 of another storage device 2400
handling the protocol which used to be handled by the relevant storage
device 2400. This enables easy deletion of the storage device 2400 from
the already configured cluster group 2812.

The Case of Transferring Node Between Cluster Groups

[0139] Next, as shown in FIG. 19, the case of transferring a node (storage
device 2400) belonging to an existing cluster group 2812 to another
existing cluster group 2812 is described below. FIG. 20 is a flowchart
showing the processing performed in this case (node transfer processing
S2000). The node transfer processing S2000 is described below by
referring to FIG. 20.

[0140] Firstly, the management unit 1111 of the management device 1100
accepts setting information of transferring a node (FIG. 19 shows the
case of transferring the second storage device 2400 belonging to the
cluster group 2812 of the NAS clusters to the cluster group 2812 (iSCSI
clusters)) from the user or the operator (S2011). At this time, the
management unit 1111 accepts the setting information, for example, by
displaying the setting screen of the cluster management table 2517 shown
in FIG. 9.

[0142] Upon reception of the registration completion report (S2031), the
management unit 1111 determines whether it is necessary to migrate
(relocate) the data or not (S2032). This determination is made with
reference to the load information of the LUs 2420, such as the usage
status of each LU 2420 in each storage device 2400 or the performance
information of the ports connected with each LU 2420. If data migration is
determined to be necessary (S2032: YES), the management unit 1111 starts
a data migration processing (S2033). This processing is the same as the
migration processing S1234 shown in FIG. 14.

[0143] Next, the management unit 1111 transmits a resource setting command
to the cluster processing unit 2516 of each storage device 2400 (S2035).
Upon reception of the resource setting command (S2041), the cluster
processing unit 2516 of each storage device 2400 performs the resource
setting (activation or inactivation) according to the setting of its
cluster group 2812 after the change (S2042).

[0144] As mentioned above, if the contents of the cluster management table
2517 are changed to transfer the storage device 2400 configuring a
certain cluster group 2812 to another cluster group 2812 when the cluster
groups 2812 for each of the protocols have already been configured, the
cluster processing unit 2516 determines whether it is necessary to
relocate the data or not according to the load information of the LUs
2420, and if it is determined to be necessary, automatically migrates the
data between the LUs 2420. Thus, this invention enables easy migration of
a storage device 2400 configuring a certain cluster group 2812 to another
cluster group by changing the contents of the cluster management table
2517. It is also possible to optimize the performance of the storage
system by attempting load distribution at the time of relocation.

[0145] As described so far, the storage system 1 in this embodiment
enables the easy configuration of cluster groups 2812 for each protocol
by using a storage device 2400 (Unified Storage) including the
communication server 2514 (protocol processing unit) capable of
responding to data input/output requests following at least two or more
protocols transmitted from the client device 1000. By this method, it is
possible to optimize the performance separation according to the usage
status of the protocol and the storage system performance.

[0146] It is to be understood that the above-described embodiments are
intended for ease of understanding this invention, and that this invention
is by no means limited to the particular constructions herein but
comprises any changes, modifications, or equivalents within the spirit and
scope hereof.