Subsequent chapters in this manual describe further procedures used to complete the installation and configuration of the arrays. The flexible architecture of Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays makes many configurations possible.

5.1 Summary of Array Configuration

Sun StorEdge 3510 FC arrays and Sun StorEdge 3511 SATA arrays are preconfigured with a single RAID 0 logical drive mapped to LUN 0, and no spare drives. This is not a usable configuration, but it enables in-band connections with management software. You must delete this logical drive and create new logical drives.

All configuration procedures can be performed by using the COM port. You can also perform all procedures except the assignment of an IP address through an Ethernet port connection to a management console.

The following steps describe the typical sequence for completing a first-time configuration of the array.

The IDs assigned to controllers take effect only after the controller is reset.

9. Delete default logical drives and create new logical drives.

Note - While the ability to create and manage logical volumes remains a feature of arrays for legacy reasons, the size and performance of physical and logical drives have made the use of logical volumes obsolete. Logical volumes are unsuited to some modern configurations, such as Sun Cluster environments, and do not work in those configurations. Avoid using logical volumes and use logical drives instead. For more information about logical drives, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.

10. (Optional) In dual-controller configurations only, assign logical drives to the secondary controller to load-balance the two controllers.

Caution - In single-controller configurations, do not disable the Redundant Controller setting and do not set the controller as a secondary controller. The primary controller controls all firmware operations, so the single controller must remain assigned as the primary controller. If you disable the Redundant Controller setting and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and must be replaced.

11. (Optional) Partition the logical drives.

12. Map each logical drive partition to an ID on a host channel, or apply a host LUN filter to the logical drives.

Note - Each operating system has a method for recognizing storage devices and LUNs and might require the use of specific commands or the modification of specific files. Be sure to check the information for your operating system to ensure that you have performed the necessary procedures.

For information about different operating system procedures, see:

Appendix E to configure a Sun server running the Solaris operating system

Appendix H to configure an IBM server running the AIX operating system

Appendix I to configure an HP server running the HP-UX operating system

13. Reset the controller.

Configuration is complete.

Note - Resetting the controller can result in occasional host-side error messages such as parity error and synchronous error messages. No action is required and the condition corrects itself as soon as reinitialization of the controller is complete.

14. Save the configuration to a disk.

15. Make sure that the cabling from the RAID array to the hosts is complete.

Note - You can reset the controller after each step or at the end of the configuration process.

Caution - Avoid using in-band and out-of-band connections at the same time to manage the array. Otherwise, conflicts between multiple operations can cause unexpected results.

5.1.1 Point-to-Point Configuration Guidelines

Remember the following guidelines when implementing point-to-point configurations in your array and connecting to fabric switches:

The default mode is "Loop only." You must change the Fibre Channel Connection mode to "Point-to-point only" with the firmware application. Refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for more information.

Caution - If you keep the default loop mode and connect to a fabric switch, the array automatically shifts to public loop mode. As a result, communication between the array and the switched fabric runs in half duplex (send or receive) instead of providing the full duplex (send and receive) performance of point-to-point mode.

Check the host IDs on all the channels to ensure that there is only one port ID per channel (on the primary controller or on the secondary controller) for point-to-point mode. When viewing the host IDs, there should be one primary controller ID (PID) or one secondary controller ID (SID); the alternate port ID should display N/A. Proper point-to-point mode allows only one ID per channel.
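The one-ID-per-channel check can be sketched as a short script. The channel numbers and ID values below are hypothetical examples for illustration, not values read from a real array:

```python
# Hypothetical snapshot of host channel IDs, as viewed in the firmware
# application. None stands for the N/A display (no ID assigned).
host_ids = {
    0: {"PID": 40, "SID": None},   # valid: one ID, on the primary controller
    1: {"PID": None, "SID": 41},   # valid: one ID, on the secondary controller
    4: {"PID": 42, "SID": None},
    5: {"PID": None, "SID": 43},
}

def valid_point_to_point(ids):
    """Each channel must carry exactly one ID: a PID or a SID, never both."""
    return all(
        (ch["PID"] is None) != (ch["SID"] is None)  # exactly one is set
        for ch in ids.values()
    )

print(valid_point_to_point(host_ids))  # True for the table above
```

A channel with both a PID and a SID, or with neither, fails the check, matching the rule that proper point-to-point mode allows only one ID per channel.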

On the Sun StorEdge 3511 SATA array, if one of the dual ports of channel 0 is connected to a switch (port FC 0), the other FC 0 port on that controller and the two FC 0 ports on a redundant controller cannot be used. Similarly, if one of the channel 1 ports is connected to a switch (port FC 1), the other FC 1 port on that controller and the two FC 1 ports on a redundant controller cannot be used.

If you change the mode to "Point-to-point only" and attempt to add a second port ID, the controller does not allow you to add an ID to the same controller and channel. For example, if the PID for CH 0 is 40 and the SID for CH 0 is N/A, the controller does not allow you to add another PID to CH 0.

The controller displays a warning if you are in point-to-point mode and try to add an ID to the same channel but on the other controller. The warning is displayed because you can disable the internal connection between the channels on the primary and secondary controllers by using the Sun StorEdge CLI set inter-controller link command; if you do so, having one ID on the primary controller and another ID on the secondary controller is a legal operation.

However, if you ignore this warning and add an ID to the other controller, the RAID controller does not allow a login as a Fabric Loop (FL) port because this would be illegal in a point-to-point configuration.

The firmware application allows you to add up to eight port IDs per channel (four port IDs on each controller), which forces the fabric switch port type to become Fabric Loop. To ensure F-port behavior (full fabric/full duplex) when attaching to a switch, only one port ID must be present on each channel and the array port must be set to point-to-point mode.

Do not connect more than one port per channel on an array to a fabric switch.

Caution - In point-to-point mode or in public loop mode, only one switch port is allowed per channel. Connecting more than one port per channel to a switch can violate the point-to-point topology of the channel, force two switch ports to "fight" over an AL_PA (arbitrated loop physical address) value of 0 (which is reserved for loop-to-fabric attachment), or both.

With four host channels and four host IDs, you should load-balance the host ID setup so that half the IDs are on the primary controller and half the IDs are on the secondary controller. When setting up LUNs, map each LUN to either two PIDs or two SIDs. The hosts are in turn dual-pathed to the same two switched fabrics. When attaching the cables for a LUN-mapped channel pair, make sure that the first channel is connected to the upper port and the second channel is connected to the lower port.

For example, to provide redundancy, map half of the LUNs across Channel 0 (PID 40) and Channel 4 (PID 42), and then map the other half of your LUNs across Channel 1 (SID 41) and Channel 5 (SID 43).
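The redundant mapping in this example can be sketched as follows. The 64-LUN count and the channel/ID pairs come from the surrounding text; the mapping helper itself is illustrative:

```python
# Channel/ID pairs from the example: half the LUNs are mapped across the
# primary controller's channels, half across the secondary controller's.
primary_pair   = [(0, 40), (4, 42)]  # Channel 0 (PID 40), Channel 4 (PID 42)
secondary_pair = [(1, 41), (5, 43)]  # Channel 1 (SID 41), Channel 5 (SID 43)

lun_map = {}
for lun in range(64):
    # First half on the primary controller pair, second half on the secondary.
    pair = primary_pair if lun < 32 else secondary_pair
    lun_map[lun] = pair  # each LUN is reachable through two channels

print(len(lun_map))  # 64 distinct LUNs, each dual-mapped
```

Because every LUN is reachable through two channels on two separate switches, no single switch, cable, or I/O controller module failure removes access to a LUN.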

Point-to-point mode allows a maximum of 128 LUNs per array. In a redundant configuration, 32 LUNs are dual-mapped across two channels on the primary controller, and another 32 LUNs are dual-mapped across two channels on the secondary controller, for a total of 64 distinct LUNs.

To use more than 64 LUNs, you must change to "Loop only" mode, add host IDs to one or more channels, and add 32 LUNs for each additional host ID.

Note - When in loop mode and connected to a fabric switch, each host ID is displayed as a loop device on the switch so that, if all 16 IDs are active on a given channel, the array looks like a loop with 16 nodes attached to a single switch FL port. In public loop mode, the array can have a maximum of 1024 LUNs, where 512 LUNs are dual-mapped across two channels, primary and secondary controller respectively.
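The LUN limits quoted above all follow from the rule of 32 LUNs per host ID; a minimal sketch of the arithmetic:

```python
LUNS_PER_ID = 32  # firmware limit: 32 LUNs per host ID

def max_luns(ids_per_channel, channels):
    # Total addressable LUNs for a given number of host IDs per channel.
    return LUNS_PER_ID * ids_per_channel * channels

# Point-to-point: one ID per channel, four host channels.
print(max_luns(1, 4))   # 128 LUNs (64 distinct when dual-mapped)

# Public loop: up to 16 IDs per channel; two channels reach the maximum.
print(max_luns(16, 2))  # 1024 LUNs (512 distinct when dual-mapped)
```

The same function reproduces the intermediate cases: each host ID added to a channel in loop mode contributes another 32 LUNs.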

5.1.2 A Sample SAN Point-to-Point Configuration

A point-to-point configuration has the following characteristics:

In SAN configurations, the switches communicate with the Sun StorEdge Fibre Channel array host ports using a fabric point-to-point (F_port) mode.

When you use fabric point-to-point (F_port) connections between a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array and fabric switches, the maximum number of LUNs is limited to 128 LUNs for a nonredundant configuration and 64 LUNs for a redundant configuration.

Fibre Channel standards allow only one ID per port when operating point-to-point protocols, resulting in a maximum of four IDs with a maximum of 32 LUNs for each ID, and a combined maximum of 128 LUNs.

The working maximum number of LUNs is actually 64 LUNs in a configuration where you configure each LUN on two different channels for redundancy and to avoid a single point of failure.

In a dual-controller array, one controller automatically takes over all operations of the other controller if it fails. However, when an I/O controller module needs to be replaced and a cable to an I/O port is removed, the I/O path is broken unless multipathing software has established a separate path from the host to the operational controller. Supporting hot-swap servicing of a failed controller requires the use of multipathing software, such as Sun StorEdge Traffic Manager software, on the connected servers.

A single logical drive can be mapped to only one controller, either the primary controller or the secondary controller.

In a point-to-point configuration, only one host ID per channel is allowed. The host ID can be assigned to the primary controller and be a PID, or it can be assigned to the secondary controller and be a SID.

If you have two switches and set up multipathing (to keep all logical drive connections operational for any switch failure or the removal of any I/O controller module), ensure that each logical drive is mapped to two ports, one on each I/O controller module, and on two channels. The cables from the two ports mapped to each logical drive must be cabled to two separate switches. See FIGURE 5-1 and FIGURE 5-2 for examples of this configuration.

FIGURE 5-1 and FIGURE 5-2 show the channel numbers (0, 1, 4, and 5) of each host port and the host ID for each channel. N/A means that the port does not have a second ID assignment. The primary controller is the top I/O controller module, and the secondary controller is the bottom I/O controller module.

The dashed lines between two ports indicate a port bypass circuit that functions as a mini-hub. The port bypass circuit on each channel connects the upper and lower ports on the same channel and provides access to both controllers at the same time. If there are two host connections to the upper and lower ports on Channel 0, and one host connection is removed, the other host connection remains operational. Therefore, if you have a redundant multipathing configuration in which you have two host connections to each logical drive and one connection fails, the remaining path maintains a connection to the logical drive.

In FIGURE 5-1 and FIGURE 5-2, with multipathing software to reroute the data paths, each logical drive remains fully operational when the following conditions occur:

One switch fails or is disconnected, and the logical drive is routed to the second switch. For example, if switch 0 fails, switch 1 automatically accesses logical drive 0 through the cabling to the lower port on PID 41.

One I/O controller module fails, and all the host IDs for that controller are reassigned (moved) to the second I/O controller module. For example, if the upper I/O controller module is removed, host IDs 40 and 41 are automatically moved to the lower module and are managed by the second controller.

An I/O controller module fails or one cable is removed from an I/O controller module, and all I/O traffic to the disconnected channel is rerouted through the second port/host LUN assigned to the logical drive. For example, if you remove the cable to channel 4, the data path for logical drive 1 switches to the port on channel 5.

FIGURE 5-1 A Point-to-Point Configuration with a Dual-Controller Sun StorEdge 3510 FC Array and Two Switches

FIGURE 5-2 A Point-to-Point Configuration With a Dual-Controller Sun StorEdge 3511 SATA Array and Two Switches

Note - These illustrations show the default controller locations; however, the primary controller and secondary controller locations can occur in either slot and depend on controller resets and controller replacement operations.

5.1.3 A Sample DAS Loop Configuration

FL_port connections between a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array and multiple servers allow up to 1024 LUNs to be presented to servers. For guidelines on how to create 1024 LUNs, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.

Perform the following steps to set up a DAS loop configuration as shown in FIGURE 5-3 and FIGURE 5-4.

1. Check the location of installed SFPs. Move them as necessary to support the connections needed.

You must add SFP connectors to support more than four connections between servers and a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array. For example, add two SFP connectors to support six connections and add four SFP connectors to support eight connections.
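The SFP count rule above amounts to one connector per connection beyond those already installed; a small sketch, assuming the four preinstalled connections implied by the example:

```python
def extra_sfps_needed(connections, preinstalled=4):
    # SFP connectors to add beyond those already installed (assumed: 4).
    return max(0, connections - preinstalled)

print(extra_sfps_needed(6))  # 2, matching the six-connection example
print(extra_sfps_needed(8))  # 4, matching the eight-connection example
```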

2. Connect expansion units, if needed.

3. Create at least one logical drive per server, and configure spare drives as needed.

4. Create one or more logical drive partitions for each server.

5. Confirm that the Fibre Connection Option is set to "Loop only."

Caution - Do not use the "Loop preferred, otherwise point to point" command. This command is reserved for special use and should be used only if directed by technical support.

11. Connect the first server to port FC 0 of the upper controller and port FC 5 of the lower controller.

12. Connect the second server to port FC 4 of the upper controller and port FC 1 of the lower controller.

13. Connect the third server to port FC 5 of the upper controller and port FC 0 of the lower controller.

14. Connect the fourth server to port FC 1 of the upper controller and port FC 4 of the lower controller.

15. Install and enable multipathing software on each connected server.

5.1.4 Connecting Two Hosts to One Host Channel (SATA Only)

Except in some clustering configurations, if you connect more than one host to channel 0 or channel 1 in a DAS loop configuration, you must use host filtering when you want to control host access to storage. Refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for information about host filters. Refer to the user documentation for your clustering software to determine whether the clustering software can manage host access in this configuration.

5.2 Larger Configurations

Up to eight expansion units are supported when connected to a Sun StorEdge 3510 FC array.

Up to five expansion units are supported when connected to a Sun StorEdge 3511 SATA array.

Up to five Sun StorEdge 3510 FC expansion units and Sun StorEdge 3511 SATA expansion units can be combined when connected to a Sun StorEdge 3510 FC array. This enables you to use FC drives for primary online applications and SATA drives for secondary or near-line applications within the same RAID array.

Certain limitations and considerations apply to these mixed configurations:

Make sure that at least one additional logical drive is available before adding a Sun StorEdge 3511 SATA expansion unit. Preferably, make at least one logical drive available for each Sun StorEdge 3511 SATA expansion unit that you add.

For more detailed information, and for suggestions about the most appropriate configurations for your applications and environment, refer to the Sun StorEdge 3000 Family Best Practices Manual for your array.