Writing to you once again, since you always clear up my doubts. I don't know if you have worked with IBM SAN storage, specifically the Storwize V3700. My Zimbra server has little space left, so I bought a fibre module and three hard drives to build a RAID 5 with 2 TB of usable storage; I manage HSM through ZeXtras. But just when I went to configure the array over the fibre module on my CentOS server, I found that the array shows up repeated. Here are a few lines of what it shows me:

With the command fdisk -l, the following is shown; the storage appears duplicated.

I do not know if something is missing in the configuration. Honestly, this is the first time I have done this kind of setup, so I don't know if you can help me, please. It is a Zimbra server on CentOS 6.5.

I did think it was not normal; there should be only a single array, not several. Talking to a friend, he told me I had to install the multipath driver for my version, and he pointed me to the following links:

First, you really should update to CentOS 6.9; 6.5 got its last update 3.5 years ago.
I don't have much experience with multipath, but I think each volume should show up as /dev/mapper/mpathX and two corresponding devices (/dev/sdY+Z).
What's the output of: multipath -ll
What do /etc/multipath.conf and /etc/fstab look like?

tunk wrote:First, you really should update to CentOS 6.9; 6.5 got its last update 3.5 years ago.
I don't have much experience with multipath, but I think each volume should show up as /dev/mapper/mpathX and two corresponding devices (/dev/sdY+Z).
What's the output of: multipath -ll
What do /etc/multipath.conf and /etc/fstab look like?

Attached as requested. Note that I did not run multipath -ll, because I have stopped the multipathd service so as not to affect the services I have running (in this case, Zimbra).

cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
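For what it's worth, the file above is only the stock comment header, with no actual settings. A minimal sketch of what a working /etc/multipath.conf might look like on CentOS 6 follows; the options shown are common defaults and the WWIDs are placeholders, not V3700-specific tuning:

```
# Minimal /etc/multipath.conf sketch -- an assumption, not a tested V3700 config
defaults {
    # name devices /dev/mapper/mpathN instead of by raw WWID
    user_friendly_names yes
}

blacklist {
    # keep local (non-SAN) disks out of multipath;
    # replace the placeholders below with your real local-disk WWIDs
    wwid "placeholder-local-disk-wwid-1"
    wwid "placeholder-local-disk-wwid-2"
}
```

After editing, the header's own reminder applies: run service multipathd reload for the changes to take effect.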

You have eight 2.2TB /dev/sdX disks, and I would start looking at that; maybe there's something in the setup on the V3700.
You also have /dev/mapper/mpatha 897GB and /dev/mapper/mpathc 499GB, each with only one /dev/sdX. Is this something you've set up on the V3700?

tunk wrote:You have eight 2.2TB /dev/sdX disks, and I would start looking at that; maybe there's something in the setup on the V3700.
You also have /dev/mapper/mpatha 897GB and /dev/mapper/mpathc 499GB, each with only one /dev/sdX. Is this something you've set up on the V3700?

Hello tunk,

I think either you do not understand me or I do not understand you.

First of all, I do not have eight 2.2 TB disks; I only have one 2.2 TB disk, but in CentOS the same storage shows up repeated eight times. My understanding is that configuring the multipath driver turns those eight disks into a single one that I can configure and mount.

Second, regarding your question "You also have /dev/mapper/mpatha 897GB and /dev/mapper/mpathc 499GB, each with only one /dev/sdX. Is this something you've set up on the V3700?": those disks are local. When I installed the multipath driver I did not know that they had to be blacklisted; reading the CentOS documentation, it says that if I do not blacklist a /dev/sdX disk, it temporarily changes to /dev/mapper/mpathXX, and that is what happened, because those are local disks.
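To blacklist those two local disks, the usual approach is to look up each disk's WWID and list it in the blacklist section. This is only a sketch; the WWIDs are placeholders, and the scsi_id path is the one I believe CentOS 6 uses:

```
# Find the WWID of a local disk (CentOS 6 location of scsi_id):
#   /lib/udev/scsi_id --whitelisted --device=/dev/sda
#
# Then blacklist it in /etc/multipath.conf:
blacklist {
    wwid "WWID-of-the-897GB-local-disk"   # placeholder for mpatha
    wwid "WWID-of-the-499GB-local-disk"   # placeholder for mpathc
}
#
# Reload so the local disks go back to plain /dev/sdX names:
#   service multipathd reload
```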

My understanding is that if I configure the multipath driver, that set of eight repeated disks becomes just one.

Here is the part of the documentation that says so:

2.1. Multipath Device Identifiers
Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally unique and unchanging. By default, the name of a multipath device is set to its WWID. Alternately, you can set the user_friendly_names option in the multipath configuration file, which sets the alias to a node-unique name of the form mpathn.

For example, a node with two HBAs attached to a storage controller with two ports via a single unzoned FC switch sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM-Multipath creates a single device with a unique WWID that reroutes I/O to those four devices depending on the multipath configuration. When the user_friendly_names configuration option is set to yes, the name of the multipath device is set to /dev/mpath/mpathn.

For information on the multipath configuration defaults, including the user_friendly_names configuration option, see section 4.3, "Configuration File Defaults".

You can also set the name of a multipath device to a name of your choosing by using the alias option in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes".

No, the partition /dev/mapper/mpathXX was /dev/sdX before, but multipath renamed it because I had not blacklisted it; those are local logical disks. The only disk over fibre is the 2.2 TB one. In the first part are the before-and-after images, if you follow me.
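Once multipath is configured and the eight paths collapse into a single device, the mount should reference the /dev/mapper name, never an individual /dev/sdX path. A hypothetical /etc/fstab entry for the SAN LUN; the device name, filesystem, and mount point here are assumptions, not taken from this setup:

```
# /etc/fstab -- hypothetical entry for the 2.2 TB SAN LUN
/dev/mapper/mpathb   /opt/zimbra/hsm   ext4   defaults   0 0
```

Mounting by the multipath name means the mount keeps working even if one of the underlying /dev/sdX paths disappears.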