Part II Administering Interface Groups

This part discusses administration of other types of configurations
such as virtual local area networks (VLANs), link aggregations, and IP multipathing
(IPMP) groups.

Chapter 5 Administering VLANs

This chapter describes procedures to configure and maintain virtual
local area networks (VLANs). The procedures include steps that take advantage
of features such as support for flexible link names.

Administering Virtual Local Area Networks

A virtual local area network (VLAN) is a subdivision
of a local area network at the data link layer of the TCP/IP protocol stack.
You can create VLANs for local area networks that use switch technology. By
assigning groups of users to VLANs, you can improve network administration
and security for the entire local network. You can also assign interfaces
on the same system to different VLANs.

Consider dividing your local network into VLANs if you need to do the
following:

Create a logical division of workgroups.

For example,
suppose all hosts on a floor of a building are connected on one switched-based
local network. You could create a separate VLAN for each workgroup on the
floor.

Enforce differing security policies for the workgroups.

For example, the security needs of a Finance department and an Information
Technologies department are quite different. If systems for both departments
share the same local network, you could create a separate VLAN for each department.
Then, you could enforce the appropriate security policy on a per-VLAN basis.

Split workgroups into manageable broadcast domains.

The
use of VLANs reduces the size of broadcast domains and improves network efficiency.

Overview of VLAN Topology

Switched LAN technology enables you to organize the systems on a local
network into VLANs. Before you can divide a local network into VLANs, you
must obtain switches that support VLAN technology. You can configure all ports
on a switch to serve a single VLAN or multiple VLANs, depending on the VLAN
topology design. Each switch manufacturer has different procedures for configuring
the ports of a switch.

The following figure shows a local area network that has the subnet
address 192.168.84.0. This LAN is subdivided into three
VLANs, Red, Yellow, and Blue.

Figure 5–1 Local Area Network With Three VLANs

Connectivity on LAN 192.168.84.0 is handled by Switches
1 and 2. The Red VLAN contains systems in the Accounting workgroup. The Human
Resources workgroup's systems are on the Yellow VLAN. Systems of the Information
Technologies workgroup are assigned to the Blue VLAN.

VLAN Tags and Physical Points of Attachment

Each VLAN in a local area network is identified by a VLAN tag, or VLAN ID (VID). The VID is assigned during VLAN configuration. The
VID is a 12-bit identifier between 1 and 4094 that provides a unique identity
for each VLAN. In Figure 5–1,
the Red VLAN has the VID 789, the Yellow VLAN has the VID 456, and the Blue
VLAN has the VID 123.

When you configure switches to support VLANs, you need to assign a VID
to each port. The VID on the port must be the same as the VID assigned to
the interface that connects to the port, as shown in the following figure.

Figure 5–2 Switch Configuration for a Network with VLANs

Figure 5–2 shows
multiple hosts that are connected to different VLANs. Two hosts belong to
the same VLAN. In this figure, the primary network interfaces
of the three hosts connect to Switch 1. Host A is a member of the Blue VLAN.
Therefore, Host A's interface is configured with the VID 123. This interface
connects to Port 1 on Switch 1, which is then configured with the VID 123.
Host B is a member of the Yellow VLAN with the VID 456. Host B's interface
connects to Port 5 on Switch 1, which is configured with the VID 456. Finally,
Host C is also a member of the Blue VLAN. Its interface connects to Port 9
on Switch 1, so both the interface and the port are configured with the VID 123.

The figure also
shows that a single host can also belong to more than one VLAN. For example,
Host A has two interfaces. The second interface is configured with the VID
456 and is connected to Port 3 which is also configured with the VID 456.
Thus, Host A is a member of both the Blue VLAN and the Yellow VLAN.

Meaningful Names for VLANs

In this Solaris release, you can assign meaningful names to VLAN interfaces.
VLAN names consist of a link name and the VLAN ID number (VID), such as sales0.
You should assign customized names when you create VLANs. For more information
about customized names, see Assigning Names to Data Links. For more information
about valid customized names, see Rules for Valid Link Names.

Planning for VLANs on a Network

Use the following procedure to plan for VLANs on your network.

How to Plan a VLAN Configuration

1. Examine the local network topology and determine where subdivision
into VLANs is appropriate.

2. Create a numbering scheme for the VIDs, and assign a VID to each
VLAN.

Note –

A VLAN numbering scheme might already exist on the network. If
so, you must create VIDs within the existing VLAN numbering scheme.

3. On each system, determine which interfaces will be members of
a particular VLAN.

4. Determine which interfaces are configured on a system.

# dladm show-link

5. Identify which VID will be associated with each data link on the
system.

6. Create the VLAN by using the dladm create-vlan command.

7. Check the connections of the interfaces to the network's switches.

Note the VID of each interface and the switch port where each interface
is connected.

8. Configure each port of the switch with the same VID as the interface
to which it is connected.

Refer to the switch manufacturer's documentation
for configuration instructions.

Configuring VLANs

The following procedure shows how
to create and configure a VLAN. In this Solaris release, all Ethernet devices can support
VLANs. However, some restrictions exist with certain devices. For these exceptions,
refer to VLANs on Legacy Devices.
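For example, to create a VLAN with the VID 123 over a data link, you might
use a command of the following form. The link name net0 and the VLAN name sales0
are hypothetical; substitute the names on your system as reported by dladm
show-link.

# dladm create-vlan -l net0 -v 123 sales0

The new VLAN link can then be plumbed and assigned an IP address like any
other data link.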

VLANs on Legacy Devices

Certain legacy devices handle only packets whose maximum frame size
is 1514 bytes. Packets whose frame sizes exceed the maximum limit are dropped.
For such cases, follow the same procedure listed in How to Configure a VLAN. However, when creating the VLAN, use the -f option
to force the creation of the VLAN.

The general steps to perform are as follows:

Create the VLAN with the -f option.

# dladm create-vlan -f -l link -v VID [vlan-link]

Set a lower size for the maximum transmission unit (MTU),
such as 1496 bytes.

# dladm set-linkprop -p default_mtu=1496 vlan-link

The lower MTU value allows space for the link layer
to insert the VLAN header prior to transmission.

Repeat this step to set the same lower MTU value on each node in the VLAN.

Chapter 6 Administering Link Aggregations

This chapter describes procedures to configure and maintain link aggregations.
The procedures include steps that take advantage of new features such as support for
flexible link names.

Overview of Link
Aggregations

The Solaris OS supports the organization of network interfaces
into link aggregations. A link aggregation consists of
several interfaces on a system that are configured together as a single, logical
unit. Link aggregation, also referred to as trunking,
is defined in the IEEE 802.3ad Link Aggregation Standard.

The IEEE 802.3ad Link Aggregation Standard provides a method to combine
the capacity of multiple full-duplex Ethernet links into a single logical
link. This link aggregation group is then treated as though it were, in fact,
a single link.

The following are features of link aggregations:

Increased bandwidth –
The capacity of multiple links is combined into one logical link.

Automatic failover/failback –
Traffic from a failed link is failed over to working links in the aggregation.

Load balancing –
Both inbound and outbound traffic is distributed according to user-selected
load-balancing policies, such as source and destination MAC or IP addresses.

Support for redundancy –
Two systems can be configured with parallel aggregations.

Improved administration –
All interfaces are administered as a single unit.

Less drain on the network address
pool – The entire aggregation can be assigned one IP address.

Link Aggregation Basics

The basic link aggregation topology involves a single aggregation that
contains a set of physical interfaces. You might use the basic link aggregation
in the following situations:

For systems that run an application with distributed heavy
traffic, you can dedicate an aggregation to that application's traffic.

For sites with limited IP address space that nevertheless
require large amounts of bandwidth, you need only one IP address for a large
aggregation of interfaces.

For sites that need to hide the existence of internal interfaces,
the IP address of the aggregation hides its interfaces from external applications.

Figure 6–1 shows an aggregation
for a server that hosts a popular web site. The site requires increased bandwidth
for query traffic between Internet customers and the site's database server.
For security purposes, the existence of the individual interfaces on the server
must be hidden from external applications. The solution is the aggregation aggr1 with the IP address 192.168.50.32. This
aggregation consists of three interfaces, bge0 through bge2. These interfaces are dedicated to sending out traffic in response
to customer queries. The outgoing address on packet traffic from all the interfaces
is the IP address of aggr1, 192.168.50.32.
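A configuration such as the one in this figure could be sketched with the
following commands. This is an illustrative sketch only; the aggregation name
and addresses are taken from the figure, and the exact arguments depend on
your release.

# dladm create-aggr -l bge0 -l bge1 -l bge2 aggr1
# ifconfig aggr1 plumb 192.168.50.32 up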

Figure 6–1 Basic Link Aggregation Topology

Figure 6–2 depicts
a local network with two systems, and each system has an aggregation configured.
The two systems are connected by a switch. If you need to run an aggregation
through a switch, that switch must support aggregation technology. This type
of configuration is particularly useful for high availability and redundant
systems.

In the figure, System A has an aggregation that consists of two interfaces, bge0 and bge1. These interfaces are connected
to the switch through aggregated ports. System B has an aggregation of four
interfaces, e1000g0 through e1000g3.
These interfaces are also connected to aggregated ports on the switch.

Figure 6–2 Link Aggregation Topology With a Switch

Back-to-Back Link Aggregations

The back-to-back link aggregation topology involves two separate systems
that are cabled directly to each other, as shown in the following figure.
The systems run parallel aggregations.

Figure 6–3 Basic Back-to-Back Aggregation Topology

In this figure, device bge0 on System A is directly
linked to bge0 on System B, and so on. In this way, Systems
A and B can support redundancy and high availability, as well as high-speed
communications between both systems. Each system also has interface ce0 configured
for traffic flow within the local network.

The most common application for back-to-back link aggregations is mirrored
database servers. Both servers need to be updated together and therefore require
significant bandwidth, high-speed traffic flow, and reliability. Back-to-back
link aggregations are used most often in data centers.

Policies and Load Balancing

If you plan to use a link aggregation, consider defining a policy for
outgoing traffic. This policy can specify how you want packets to be distributed
across the available links of an aggregation, thus establishing load balancing.
The following are the possible layer specifiers and their significance for
the aggregation policy:

L2 – Determines the
outgoing link by hashing the MAC (L2) header of each packet

L3 – Determines the
outgoing link by hashing the IP (L3) header of each packet

L4 – Determines the
outgoing link by hashing the TCP, UDP, or other ULP (L4) header of each packet

Any combination of these policies is also valid. The default policy
is L4. For more information, refer to the dladm(1M) man
page.
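For example, to change the policy of an existing aggregation so that the
outgoing link is determined by hashing both the IP and transport headers,
you might use a command of the following form. The aggregation name aggr0
is hypothetical.

# dladm modify-aggr -P L3,L4 aggr0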

Aggregation Mode and Switches

If your aggregation topology involves connection through a switch, you
must note whether the switch supports the Link Aggregation Control
Protocol (LACP). If the switch supports LACP, you must configure
LACP for both the switch and the aggregation. You can define one of the
following modes in which LACP is to operate:

Off mode – The default
mode for aggregations. LACP packets, which are called LACPDUs, are
not generated.

Active mode – The
system generates LACPDUs at regular intervals, which you can specify.

Passive mode – The
system generates an LACPDU only when it receives an LACPDU from the switch.
When both the aggregation and the switch are configured in passive mode, they
cannot exchange LACPDUs.

See the dladm(1M) man page and the switch manufacturer's
documentation for syntax information.
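For example, to set an existing aggregation to active mode, you might use
a command of the following form. The aggregation name aggr0 is hypothetical,
and the -T option sets the interval at which LACPDUs are generated.

# dladm modify-aggr -L active -T short aggr0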

Requirements for Link Aggregations

Your link aggregation configuration is bound by the following requirements:

You must use the dladm command to configure
aggregations.

An interface that has been plumbed cannot become a member
of an aggregation.

All interfaces in the aggregation must run at the same speed
and in full-duplex mode.

Certain devices do not fulfill the requirement
of the IEEE 802.3ad Link Aggregation Standard to support link state notification.
This support must exist in order for a port to attach to an aggregation or
to detach from an aggregation. Devices that do not support link state notification
can be aggregated only by using the -f option of the dladm
create-aggr command. For such devices, the link state is always
reported as UP. For information about the use of the -f option, see How to Create a Link Aggregation.

Flexible Names for Link Aggregations

You can assign any meaningful, flexible name to a link aggregation. For more
information about flexible or customized names, see Assigning Names to Data Links. Previous Solaris releases identify a link aggregation by the
value of a key that you assign to the aggregation. For
an explanation of this method, see Overview of Link Aggregations. Although that method continues to be
valid, you should preferably use customized names to identify link aggregations.

Similar to all other data-link configurations, link aggregations are
administered with the dladm command.

How to Create a Link Aggregation

Before You Begin

Note –

Link aggregation only works on full-duplex, point-to-point links
that operate at identical speeds. Make sure that the interfaces in your aggregation
conform to this requirement.

If you are using a switch in your aggregation topology, make sure that
you have done the following on the switch:

Configured the ports to be used as an aggregation

If the switch supports LACP, configured LACP in either active
mode or passive mode

Ensure that the link you want to add has no IP interface that
is plumbed over the link.

# ifconfig interface unplumb

Add the link to the aggregation.

# dladm add-aggr -l link [-l link] [...] aggr

where link represents a data link that you
are adding to the aggregation.

Perform other tasks to modify the entire link aggregation configuration
after more data links are added.

For example, in the case of a
configuration that is illustrated in Figure 6–3, you might need to add or modify cable connections and reconfigure
switches to accommodate the additional data links. Refer to the switch documentation
to perform any reconfiguration tasks on the switch.
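As a complete sketch of creating an aggregation from scratch with a customized
name (the link names net0 and net1 and the aggregation name aggr0 are
hypothetical):

# ifconfig net0 unplumb
# ifconfig net1 unplumb
# dladm create-aggr -l net0 -l net1 aggr0

The interfaces must be unplumbed before they can become members of the
aggregation, as noted at the beginning of this procedure.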

Example 6–5 Deleting
an Aggregation
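An aggregation that is no longer needed can be deleted with the delete-aggr
subcommand. The aggregation name aggr0 in this sketch is hypothetical; any
IP interface that is plumbed over the aggregation must be unplumbed first.

# ifconfig aggr0 unplumb
# dladm delete-aggr aggr0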

How to Configure VLANs Over
a Link Aggregation

In the same manner as configuring VLANs over an interface, you can also
create VLANs on a link aggregation. VLANs are described in Chapter 5, Administering VLANs. This section combines configuring
VLANs and link aggregations.
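For example, to create a VLAN with the VID 193 over an existing link
aggregation, you might use a command of the following form. The aggregation
name aggr0 and the VLAN name salesregion1 are hypothetical.

# dladm create-vlan -l aggr0 -v 193 salesregion1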

Combining
Network Configuration Tasks While Using Customized Names

This section provides an example that combines all the procedures in
the previous chapters about configuring links, VLANs, and link aggregations
while using customized names. For a description of other networking scenarios
that use customized names, see the article in http://www.sun.com/bigadmin/sundocs/articles/vnamingsol.jsp.

Example 6–7 Configuring Links, Aggregations, and VLANs

In this example, a system with four NICs needs to be configured
to be a router for eight separate subnets. To attain this objective, eight links will
be configured, one for each subnet. First, a link aggregation is created on
all four NICs. This untagged link becomes the default untagged subnet for the
network to which the default route points.

Then VLAN interfaces are configured over the link aggregation for the
other subnets. The subnets are named according to a color-coded scheme, and
the VLANs are likewise named to correspond to their respective subnets.
The final configuration consists of eight links for the eight subnets: one untagged
link, and seven tagged VLAN links.
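The command sequence might be sketched as follows. All link names (net0
through net3, defaultaggr0, and the color-coded VLAN names) and the VIDs are
hypothetical.

# dladm create-aggr -l net0 -l net1 -l net2 -l net3 defaultaggr0
# dladm create-vlan -l defaultaggr0 -v 2 orange0
# dladm create-vlan -l defaultaggr0 -v 3 green0

Similar create-vlan commands would be repeated for the remaining five
color-coded subnets, each with its own VID.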

To make the configurations persist across reboots, the same procedures
apply as in previous Solaris releases. For example, IP addresses need to be
added to configuration files like /etc/inet/ndpd.conf or /etc/hostname.interface. Or, filter
rules for the interfaces need to be included in a rules file. These final
steps are not included in the example. For these steps, refer to the appropriate
chapters in System Administration Guide: IP Services, particularly TCP/IP Administration and DHCP.

Chapter 7 Introducing IPMP

Throughout the description of IPMP in this chapter and in Chapter 8, Administering IPMP, all references
to the term interface specifically mean IP
interface. Unless a qualification explicitly indicates a different
use of the term, such as a network interface card (NIC), the term always refers
to the interface that is configured on the IP layer.

What's New With IPMP

The following features differentiate the current IPMP implementation
from the previous implementation:

An IPMP group is represented as an IPMP IP interface. This
interface is treated just like any other interface on the IP layer of the
networking stack. All IP administrative tasks, routing tables, Address Resolution
Protocol (ARP) tables, firewall rules, and other IP-related procedures work
with an IPMP group by referring to the IPMP interface.

The system becomes responsible for the distribution of data addresses
among underlying active interfaces. In the previous IPMP implementation, the
administrator initially determines the binding of data addresses to corresponding
interfaces when the IPMP group is created. In the current implementation,
when the IPMP group is created, data addresses belong to the IPMP interface
as an address pool. The kernel then automatically and randomly binds the data
addresses to the underlying active interfaces of the group.

The ipmpstat tool is introduced as the
principal tool to obtain information about IPMP groups. This command provides
information about all aspects of the IPMP configuration, such as the underlying
IP interfaces of the group, test and data addresses, types of failure detection
being used, and which interfaces have failed. The ipmpstat functions,
the options you can use, and the output each option generates are all described
in Monitoring IPMP Information.

The IPMP interface can be assigned a customized name to identify
the IPMP group more easily within your network setup. For the procedures to
configure IPMP groups with customized names, see any procedure that describes
the creation of an IPMP group in Configuring IPMP Groups.

Deploying IPMP

This section describes various topics about the use of IPMP groups.

Why You Should Use IPMP

Different factors can cause an interface to become unusable. Commonly,
an IP interface can fail. Or, an interface might be switched offline for hardware
maintenance. In such cases, without an IPMP group, the system can no longer
be contacted by using any of the IP addresses that are associated with that
unusable interface. Additionally, existing connections that use those IP addresses
are disrupted.

With IPMP, one or more IP interfaces can be configured into an IPMP
group. The group functions like an IP interface with data addresses
to send or receive network traffic. If an underlying interface in the group
fails, the data addresses are redistributed among the remaining underlying
active interfaces in the group. Thus, the group maintains network connectivity
despite an interface failure. With IPMP, network connectivity is always available,
provided that a minimum of one interface is usable for the group.
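As a minimal sketch of creating such a group with ifconfig, the configuration
tool for IPMP in this release (the names itops0 and subitops0 follow the
examples later in this chapter, and the ipmp keyword and prefix notation are
assumed to be supported by this release's ifconfig):

# ifconfig itops0 ipmp
# ifconfig subitops0 group itops0
# ifconfig itops0 192.168.10.10/24 up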

Additionally, IPMP improves overall network performance by automatically
spreading out outbound network traffic across the set of interfaces in the
IPMP group. This process is called outbound load spreading.
The system also indirectly controls inbound load spreading by performing source
address selection for packets whose IP source address was not specified by
the application. However, if an application has explicitly chosen an IP source
address, then the system does not vary that source address.

When You Must Use IPMP

The configuration of an IPMP group is determined by your system configurations.
Observe the following rules:

Multiple IP interfaces on the same local area network (LAN)
must be configured into an IPMP group. LAN broadly refers to a variety of
local network configurations, including VLANs and both wired and wireless local
networks whose nodes belong to the same link-layer broadcast domain.

Underlying IP interfaces of an IPMP group must not span different
LANs.

For example, suppose that a system with three interfaces is connected
to two separate LANs. Two IP interfaces link to one LAN while a single IP
interface connects to the other. In this case, the two IP interfaces connecting
to the first LAN must be configured as an IPMP group, as required by the first
rule. In compliance with the second rule, the single IP interface that connects
to the second LAN cannot become a member of that IPMP group. No IPMP configuration
is required of the single IP interface. However, you can configure the single
interface into an IPMP group to monitor the interface's availability. The
single-interface IPMP configuration is discussed further in Types of IPMP Interface Configurations.

Consider another case where the link to the first LAN consists of three
IP interfaces while the other link consists of two interfaces. This setup
requires the configuration of two IPMP groups: a three-interface group that
links to the first LAN, and a two-interface group to connect to the second.

Comparing IPMP and Link Aggregation

IPMP and link aggregation are different technologies to achieve improved
network performance as well as maintain network availability. In general,
you deploy link aggregation to obtain better network performance, while you
use IPMP to ensure high availability.

The following table presents a general comparison between link aggregation
and IPMP.

Network technology type
IPMP: Layer 3 (IP layer)
Link Aggregation: Layer 2 (link layer)

Configuration tool
IPMP: ifconfig
Link Aggregation: dladm

Link-based failure detection
IPMP: Supported.
Link Aggregation: Supported.

Probe-based failure detection
IPMP: ICMP-based, targeting any defined system in the same IP subnet as test
addresses, across multiple levels of intervening layer-2 switches.
Link Aggregation: Based on the Link Aggregation Control Protocol (LACP), targeting the immediate
peer host or switch.

Load spreading
IPMP: Present. The kernel spreads outbound traffic across the group's active
interfaces and indirectly controls inbound load spreading through source
address selection.
Link Aggregation: Finer-grain control of the administrator over load spreading of outbound
traffic by using the dladm command. Inbound load spreading
supported.

In link aggregations, incoming traffic is spread over the multiple links
that comprise the aggregation. Thus, networking performance is enhanced as
more NICs are installed to add links to the aggregation. IPMP's traffic uses
the IPMP interface's data addresses as they are bound to the available active
interfaces. Thus, for example, if all the data traffic is flowing between
only two IP addresses but not necessarily over the same connection, then adding
more NICs will not improve performance with IPMP because only two IP addresses
remain usable.

The two technologies complement each other and can be deployed together
to provide the combined benefits of network performance and availability.
For example, except where proprietary solutions are provided by certain vendors,
link aggregations currently cannot span multiple switches. Thus, a switch
becomes a single point of failure for a link aggregation between the switch
and a host. If the switch fails, the link aggregation is likewise lost, and
network performance declines. IPMP groups do not face this switch limitation.
Thus, in the scenario of a LAN using multiple switches, link aggregations
that connect to their respective switches can be combined into an IPMP group
on the host. With this configuration, both enhanced network performance as
well as high availability are obtained. If a switch fails, the data addresses
of the link aggregation to that failed switch are redistributed among the
remaining link aggregations in the group.

Using Flexible Link Names on IPMP Configuration

With support for customized link names, link configuration is no longer
bound to the physical NIC to which the link is associated. Using customized
link names allows you to have greater flexibility in administering IP interfaces.
This flexibility extends to IPMP administration as well. In certain cases
of failure of an underlying interface of an IPMP group, the resolution would
require the replacement of the physical hardware or NIC. The replacement NIC,
provided it is the same type as the failed NIC, can be renamed to inherit
the configuration of the failed NIC. You do not have to create new configurations
for the new NIC before you can add it to the IPMP group. After you rename
the new NIC's link with the link name of the replaced NIC, the new NIC automatically
becomes a member of the IPMP group when you bring that NIC online. The multipathing
daemon then deploys the interface according to the IPMP configuration of active
and standby interfaces.

Therefore, to optimize your networking configuration and facilitate
IPMP administration, you must employ flexible link names for your interfaces
by assigning them generic names. In the following section How IPMP Works, all the examples use flexible link
names for the IPMP group and its underlying interfaces. For details about
the processes behind NIC replacements in a networking environment that uses
customized link names, refer to IPMP and Dynamic Reconfiguration. For an overview of the networking stack and the use
of customized link names, refer to Overview of the Networking Stack.
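For example, if a failed NIC with the link name subitops0 is replaced by a
new NIC whose device instance appears as, say, e1000g4, the replacement can
be renamed as follows. The device name e1000g4 is hypothetical.

# dladm rename-link e1000g4 subitops0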

How IPMP Works

IPMP maintains network availability by attempting to preserve the number
of active and standby interfaces that was originally configured when the group was created.

IPMP failure detection can be link-based, probe-based, or both, and determines
the availability of a specific underlying IP interface in the group. If IPMP
determines that an underlying interface has failed, then that interface is
flagged as failed and is no longer usable. The data IP address that was associated
with the failed interface is then redistributed to another functioning interface
in the group. If available, a standby interface is also deployed to maintain
the original number of active interfaces.

Consider a
three-interface IPMP group itops0 with an active-standby
configuration, as illustrated in Figure 7–1.

Figure 7–1 IPMP Active–Standby Configuration

The group itops0 is configured as follows:

Two data addresses are assigned to the group: 192.168.10.10 and 192.168.10.15.

Two underlying interfaces are configured as active interfaces
and are assigned flexible link names: subitops0 and subitops1.

The group has one standby interface, also with a flexible
link name: subitops2.

Probe–based failure detection is used, and thus the
active and standby interfaces are configured with test addresses, as follows:

subitops0: 192.168.10.30

subitops1: 192.168.10.32

subitops2: 192.168.10.34

Note –

The Active, Offline, Reserve, and Failed areas in the figures indicate only
the status of underlying interfaces, and not physical locations. No physical
movement of interfaces or addresses nor transfer of IP interfaces occur within
this IPMP implementation. The areas only serve to show how an underlying interface
changes status as a result of either failure or repair.

You can use the ipmpstat command with different options
to display specific types of information about existing IPMP groups. For additional
examples, see Monitoring IPMP Information.

The IPMP configuration in Figure 7–1 can
be displayed by using the following ipmpstat command:
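For example, the -g option summarizes the group and its data addresses,
and the -i option shows the state of each underlying interface; the output
itself is not reproduced here.

# ipmpstat -g
# ipmpstat -i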

IPMP maintains network availability by managing the underlying interfaces
to preserve the original number of active interfaces. Thus, if subitops0 fails, then subitops2 is deployed to ensure
that the group continues to have two active interfaces. The activation of
subitops2 is shown in Figure 7–2.

Figure 7–2 Interface Failure in IPMP

Note –

The one–to–one mapping of data addresses to active
interfaces in Figure 7–2 serves
only to simplify the illustration. The IP kernel module can assign data addresses
randomly without necessarily adhering to a one–to–one relationship
between data addresses and interfaces.

The ipmpstat utility displays the information in Figure 7–2 as follows:

After subitops0 is repaired, then it reverts to its
status as an active interface. In turn, subitops2 is returned
to its original standby status.

A different failure scenario is shown in Figure 7–3, where the standby interface subitops2 fails (1),
and later, one active interface, subitops1, is switched
offline by the administrator (2). The result is that the IPMP group is left
with a single functioning interface, subitops0.

Figure 7–3 Standby Interface Failure in IPMP

The ipmpstat utility would display the information
illustrated by Figure 7–3 as follows:

For this particular failure, the recovery after an interface is repaired
behaves differently. The restoration depends on the IPMP group's original
number of active interfaces compared with the configuration after the repair.
The recovery process is represented graphically in Figure 7–4.

Figure 7–4 IPMP Recovery Process

In Figure 7–4, when subitops2 is repaired, it would normally revert to its original status as
a standby interface (1). However, the IPMP group still would not reflect the
original number of two active interfaces, because subitops1 continues
to remain offline (2). Thus, IPMP deploys subitops2 as
an active interface instead (3).

The ipmpstat utility would display the post-repair
IPMP scenario as follows:

A similar restore sequence occurs if the failure involves an active
interface that is also configured in FAILBACK=no mode,
where a failed active interface does not automatically revert to active status
upon repair. Suppose subitops0 in Figure 7–2 is configured in FAILBACK=no mode. In that mode,
a repaired subitops0 is switched to a reserve status as
a standby interface, even though it was originally an active interface. The
interface subitops2 would remain active to maintain the
IPMP group's original number of two active interfaces. The ipmpstat utility
would display the recovery information as follows:

Solaris IPMP Components

Solaris IPMP involves the following software:

The multipathing daemon, in.mpathd, detects interface failures
and repairs. The daemon performs both link-based failure detection and probe-based
failure detection if test addresses are configured for the underlying interfaces.
Depending on the type of failure detection method that is employed, the daemon
sets or clears the appropriate flags on the interface to indicate whether
the interface failed or has been repaired. As an option, the daemon can also
be configured to monitor the availability of all interfaces, including those
that are not configured to belong to an IPMP group. For a description of failure
detection, see Failure and Repair Detection in IPMP.

The in.mpathd daemon also controls the designation
of active interfaces in the IPMP group. The daemon attempts to maintain the
same number of active interfaces that was originally configured when the IPMP
group was created. Thus, in.mpathd activates or deactivates
underlying interfaces as needed to remain consistent with the administrator's
configured policy. For more information about the manner in which the in.mpathd daemon manages activation of underlying interfaces, refer to How IPMP Works. For more information about the daemon,
refer to the in.mpathd(1M) man
page.

The IP kernel module manages outbound load spreading
by distributing the set of available IP data addresses in the group across
the group's set of available underlying IP interfaces. The module also
performs source address selection to manage inbound load spreading. Both roles
of the IP module improve network traffic performance.

The IPMP configuration
file /etc/default/mpathd is used to configure
the daemon's behavior. For example, you can control how the daemon performs
probe-based failure detection by setting the failure detection time or by
specifying which interfaces to probe. You can also specify what
the status of a failed interface should be after that interface is repaired.
In this file you can likewise specify whether the daemon should
monitor all IP interfaces in the system, not only those that are configured
to belong to IPMP groups. For procedures to modify the configuration file,
refer to How to Configure the Behavior of the IPMP Daemon.

The ipmpstat utility provides different types
of information about the status of IPMP as a whole. The tool also displays
other specific information about the underlying IP interfaces for each group,
as well as data and test addresses that have been configured for the group.
For more information about the use of this command, see Monitoring IPMP Information and the ipmpstat(1M) man page.
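The tunable parameters of the multipathing daemon mentioned above live in /etc/default/mpathd. The following fragment sketches the stock defaults (a sketch based on the standard file; values shown are the shipped defaults):

```shell
# /etc/default/mpathd (fragment, default values)
# Time, in milliseconds, taken to detect an interface failure
FAILURE_DETECTION_TIME=10000
# Whether a repaired interface automatically returns to active use
FAILBACK=yes
# Whether to monitor only interfaces that are configured in IPMP groups
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
```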

Types of IPMP Interface Configurations

An IPMP configuration typically consists of
two or more physical interfaces on the same system that are attached to the
same LAN. These interfaces can belong to an IPMP group in either of the following
configurations:

active-active configuration – an IPMP group
in which all underlying interfaces are active. An active interface is
an IP interface that is currently available for use by the IPMP group. By
default, an underlying interface becomes active when you configure the interface
to become part of an IPMP group. For additional information about active interfaces
and other IPMP terms, see also IPMP Terminology and Concepts.

active-standby configuration – an IPMP group
in which at least one interface is administratively configured as a reserve.
The reserve interface is called the standby interface.
Although idle, the standby IP interface is monitored by the multipathing daemon
to track the interface's availability, depending on how the interface is configured.
If link-failure notification is supported by the interface, link-based failure
detection is used. If the interface is configured with a test address, probe-based
failure detection is also used. If an active interface fails, the standby
interface is automatically deployed as needed. You can configure as many standby
interfaces as you want for an IPMP group.
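An active-standby configuration as described above can be sketched with ifconfig commands (a sketch only; ipmp0, net0, net1, and the address are illustrative placeholders):

```shell
# Create the IPMP interface and assign a data address to it
ifconfig ipmp0 ipmp
ifconfig ipmp0 192.168.85.30 netmask 255.255.255.0 up
# Place two underlying interfaces in the group
ifconfig net0 group ipmp0
ifconfig net1 group ipmp0
# Administratively configure one interface as the reserve
ifconfig net1 standby
```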

A single interface can also be configured in its own IPMP group. The
single-interface IPMP group has the same behavior as an IPMP group with multiple
interfaces. However, this IPMP configuration does not provide high availability
for network traffic. If the underlying interface fails, then the system loses
all capability to send or receive traffic. The purpose of configuring a single-interface
IPMP group is to monitor the availability of the interface by using failure
detection. By configuring a test address on the interface, you can set the
daemon to track the interface by using probe-based failure detection. Typically,
a single-interface IPMP group configuration is used in conjunction with other
technologies that have broader failover capabilities, such as Sun Cluster
software. The system can continue to monitor the status of the underlying
interface, but the Sun Cluster software provides the functionality to ensure
availability of the network when failure occurs. For more information about
the Sun Cluster software, see Sun
Cluster Overview for Solaris OS.

An IPMP group without underlying interfaces can also exist, such as
a group whose underlying interfaces have been removed. The IPMP group is not
destroyed, but the group cannot be used to send and receive traffic. As underlying
IP interfaces are brought online for the group, the data addresses of
the IPMP interface are allocated to these interfaces and the system resumes
hosting network traffic.

IPMP Addressing

You can configure IPMP failure detection on both IPv4 networks and dual-stack,
IPv4 and IPv6 networks. Interfaces that are configured with IPMP support two
types of addresses:

Data Addresses are the conventional
IPv4 and IPv6 addresses that are assigned to an IP interface dynamically at
boot time by the DHCP server, or manually by using the ifconfig command.
Data addresses are assigned to the IPMP interface. The standard IPv4 packet
traffic and, if applicable, IPv6 packet traffic are considered data
traffic. Data traffic uses the data addresses that are hosted
on the IPMP interface and flows through the active interfaces of that group.

Test Addresses are IPMP-specific addresses
that are used by the in.mpathd daemon to perform probe-based
failure and repair detection. Test addresses can also be assigned dynamically
by the DHCP server, or manually by using the ifconfig command.
These addresses are configured with the NOFAILOVER flag
that identifies them as test addresses. While data addresses are assigned
to the IPMP interface, only test addresses are assigned to the underlying
interfaces of the group. For an underlying interface on a dual-stack network,
you can configure an IPv4 test address, an IPv6 test address, or both. When
an underlying interface fails, the interface's test address continues to be used
by the in.mpathd daemon for probe-based failure detection
to check for the interface's subsequent repair.
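Assigning a test address to an underlying interface can be sketched as follows (a sketch; net0 and the address are placeholders, and the -failover option is what sets the NOFAILOVER flag on the address):

```shell
# Configure an IPv4 test address on the underlying interface;
# -failover marks the address NOFAILOVER, identifying it as a test address
ifconfig net0 192.168.85.21 netmask + broadcast + -failover up
```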

Note –

You need to configure test addresses only if you specifically
want to use probe-based failure detection. For more information about probe-based
failure detection and the use of test addresses, refer to Probe-Based Failure Detection.

In previous IPMP implementations, test addresses needed to be marked
as DEPRECATED to avoid being used by applications, especially
during interface failures. In the current implementation, test addresses reside
in the underlying interfaces. Thus, these addresses can no longer be accidentally
used by applications that are unaware of IPMP. Consequently, marking test
addresses as DEPRECATED is no longer required.

IPv4 Test Addresses

In general, you
can use any IPv4 address on your subnet as a test address. IPv4 test addresses
do not need to be routeable. Because IPv4 addresses are a limited resource
for many sites, you might want to use non-routeable RFC 1918 private addresses
as test addresses. Note that the in.mpathd daemon exchanges
ICMP probes only with other hosts on the same subnet as the test address.
If you do use RFC 1918-style test addresses, be sure to configure other systems,
preferably routers, on the network with addresses on the appropriate RFC 1918
subnet. The in.mpathd daemon can then successfully exchange
probes with target systems. For more information about RFC 1918 private addresses,
refer to RFC 1918, Address Allocation for Private Internets.

IPv6 Test Addresses

The only valid IPv6 test address is the link-local address of a physical
interface. You do not need a separate IPv6 address to serve as an IPMP test
address. The IPv6 link-local address is based on the Media Access Control
(MAC) address of the interface. Link-local addresses are automatically configured
when the interface becomes IPv6-enabled at boot time or when the interface
is manually configured through ifconfig. Just like IPv4
test addresses, IPv6 test addresses must be configured with the NOFAILOVER flag.

When an IPMP group has both IPv4 and IPv6 plumbed on all the group's
interfaces, you do not need to configure separate IPv4 test addresses. The in.mpathd daemon can use the IPv6 link-local addresses with the NOFAILOVER flag as test addresses.

Failure and Repair Detection in IPMP

To ensure continuous availability of the network to send or receive
traffic, IPMP performs failure detection on the IPMP group's underlying IP
interfaces. Failed interfaces remain unusable until they are repaired. Remaining
active interfaces continue to function while any existing standby interfaces
are deployed as needed.

A group failure occurs when all interfaces
in an IPMP group appear to fail at the same time. In this case, no underlying
interface is usable. Also, when all the target systems fail at the same time
and probe-based failure detection is enabled, the in.mpathd daemon
flushes all of its current target systems and probes for new target systems.

Types of Failure Detection in IPMP

The in.mpathd daemon handles the following types of failure detection:

Link-based failure detection, if supported by the NIC driver

Probe-based failure detection, when test addresses are configured

Detection of interfaces that were missing at boot time

Link-Based Failure Detection

Link-based failure detection is always enabled, provided that the interface
supports this type of failure detection.

To determine whether a third-party interface supports link-based failure
detection, use the ipmpstat -i command.
If the output for a given interface includes an unknown status
for its LINK column, then that interface does not support
link-based failure detection. Refer to the manufacturer's documentation for
more specific information about the device.

Network drivers that support link-based failure detection monitor
the interface's link state and notify the networking subsystem when that link
state changes. When notified of a change, the networking subsystem either
sets or clears the RUNNING flag for that interface, as
appropriate. If the in.mpathd daemon detects that the
interface's RUNNING flag has been cleared, the daemon immediately
fails the interface.

Probe-Based Failure Detection

The multipathing daemon performs probe-based failure detection on each
interface in the IPMP group that has a test address. Probe-based failure detection
involves sending and receiving ICMP probe messages that use test addresses.
These messages, also called probe traffic or test traffic,
go out over the interface to one or more target systems on the same local
network. The daemon probes all the targets separately through all the interfaces
that have been configured for probe-based failure detection. If no replies
are made in response to five consecutive probes on a given interface, in.mpathd considers the interface to have failed. The probing rate depends
on the failure detection time (FDT). The default value
for failure detection time is 10 seconds. However, you can tune the failure
detection time in the IPMP configuration file. For instructions, go to How to Configure the Behavior of the IPMP Daemon.
To optimize probe-based failure detection, you must configure multiple target systems
to receive the probes from the multipathing daemon. By having multiple target
systems, you can better determine the nature of a reported failure. For example,
the absence of a response from the only defined target system can indicate
a failure either in the target system or in one of the IPMP group's interfaces.
By contrast, if only one system among several target systems does not respond
to a probe, then the failure is likely in the target system rather than in
the IPMP group itself.

Repair detection time is twice the failure detection
time. The default time for failure detection is 10 seconds. Accordingly, the
default time for repair detection is 20 seconds. After a failed interface
has been repaired and the interface's RUNNING flag is once
more detected, in.mpathd clears the interface's FAILED flag. The repaired interface is redeployed depending on the number
of active interfaces that the administrator originally configured.
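The timing relationship described above can be checked with simple shell arithmetic, assuming the default FAILURE_DETECTION_TIME of 10000 milliseconds from /etc/default/mpathd:

```shell
# Default failure detection time in milliseconds, from /etc/default/mpathd
FDT=10000
# Repair detection time is fixed at twice the failure detection time
RDT=$((FDT * 2))
echo "failure detection: ${FDT} ms, repair detection: ${RDT} ms"
```

This prints "failure detection: 10000 ms, repair detection: 20000 ms", matching the 10-second and 20-second defaults stated above.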

The in.mpathd daemon determines
which target systems to probe dynamically. First the daemon searches the routing
table for target systems that are on the same subnet as the test addresses
that are associated with the IPMP group's interfaces. If such targets are
found, then the daemon uses them as targets for probing. If no target systems
are found on the same subnet, then in.mpathd sends multicast
packets to probe neighbor hosts on the link. The multicast packet is sent
to the all hosts multicast address, 224.0.0.1 in IPv4 and ff02::1 in IPv6, to determine which hosts to use as target systems.
The first five hosts that respond to the echo packets are chosen as targets
for probing. If in.mpathd cannot find routers or hosts
that respond to the ICMP echo packets, then in.mpathd cannot detect probe-based failures. In this case, the ipmpstat
-i utility will report the probe state as unknown.
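When automatic selection finds no suitable targets, probe targets can be specified manually with static host routes (a sketch; 192.168.85.1 is a placeholder address of a router on the same subnet as the test addresses):

```shell
# Add a static host route so that in.mpathd selects this system
# as a probe target for probe-based failure detection
route add -host 192.168.85.1 192.168.85.1 -static
```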

NICs That Are Missing at Boot

NICs that are not present at system boot represent a special instance
of failure detection. At boot time, the startup scripts track any interfaces
with /etc/hostname.interface files.
Any data addresses in such an interface's /etc/hostname.interface file are automatically configured on the corresponding
IPMP interface for the group. However, if the interfaces themselves cannot
be plumbed because they are missing, then error messages similar to the following
are displayed:

In this instance of failure detection, only data addresses that
are explicitly specified in the missing interface's /etc/hostname.interface file are moved to the IPMP interface.

If an interface with the same name as another interface that was missing
at system boot is reattached using DR, the Reconfiguration Coordination Manager
(RCM) automatically plumbs the interface. Then, RCM configures the interface
according to the contents of the interface's /etc/hostname.interface file. However, data addresses, which are addresses
without the NOFAILOVER flag, that are in the /etc/hostname.interface file are ignored. This mechanism adheres
to the rule that data addresses should be in the /etc/hostname.ipmp-interface file, and only test addresses should
be in the underlying interface's /etc/hostname.interface file.
Issuing the ifconfig group command causes that interface
to again become part of the group. Thus, the final network configuration is
identical to the configuration that would have been made if the system had
been booted with the interface present.

Failure Detection and the Anonymous Group Feature

IPMP supports failure detection in an anonymous group. By default, IPMP
monitors the status only of interfaces that belong to IPMP groups. However,
the IPMP daemon can be configured to also track the status of interfaces that
do not belong to any IPMP group. Thus, these interfaces are considered to
be part of an “anonymous group.” When you issue the command ipmpstat -g, the anonymous group will be displayed as double-dashes
(--). In anonymous groups, the interfaces' data
addresses also function as test addresses. Because these interfaces do
not belong to a named IPMP group, these addresses are visible to applications.
To enable tracking of interfaces that are not part of an IPMP group, see How to Configure the Behavior of the IPMP Daemon.
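Anonymous group support is enabled through the daemon's configuration file. The fragment below sketches the relevant setting (a sketch; the parameter is in /etc/default/mpathd and defaults to yes):

```shell
# /etc/default/mpathd (fragment)
# Set to no so that in.mpathd also tracks interfaces that do not
# belong to any IPMP group (the "anonymous group")
TRACK_INTERFACES_ONLY_WITH_GROUPS=no
```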

Detecting Physical Interface Repairs

When an underlying interface fails and probe-based
failure detection is used, the in.mpathd daemon continues
to use the interface's test address to probe target systems. During
an interface repair, the restoration proceeds depending on the original configuration
of the failed interface:

Failed interface was originally an active interface –
the repaired interface reverts to its original active status. The standby
interface that functioned as a replacement during the failure is switched
back to standby status if enough interfaces are active for the group as defined
by the system administrator.

Note –

An exception to this step occurs when the repaired active interface
is also configured in the FAILBACK=no mode. For more
information, see The FAILBACK=no Mode.

Failed interface was originally a standby interface –
the repaired interface reverts to its original standby status, provided that
the IPMP group reflects the original number of active interfaces. Otherwise,
the standby interface is switched to become an active interface.

To see a graphical presentation of how IPMP behaves during interface
failure and repair, see How IPMP Works.

The FAILBACK=no Mode

By default, active interfaces that have failed and then been repaired
automatically return to active use in the group. This behavior
is controlled by the setting of the FAILBACK parameter
in the daemon's configuration file. However, even the brief disruption
that occurs as data addresses are remapped to repaired interfaces might not
be acceptable to some administrators. Such administrators might prefer to allow
an activated standby interface to continue as an active interface. IPMP allows
administrators to override the default behavior to prevent an interface from
automatically becoming active upon repair. These interfaces must be configured
in the FAILBACK=no mode. For related procedures, see How to Configure the Behavior of the IPMP Daemon.
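The setting itself is a one-line change to the daemon's configuration file (a fragment; whether in.mpathd picks up the change automatically or needs to be signaled is version-dependent, so consult the in.mpathd(1M) man page):

```shell
# /etc/default/mpathd (fragment)
# Do not automatically return repaired interfaces to active use
FAILBACK=no
```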

When an active interface in FAILBACK=no mode fails
and is subsequently repaired, the IPMP daemon restores the IPMP configuration
as follows:

The daemon sets the repaired interface to INACTIVE status,
provided that the IPMP group reflects the original configuration of active
interfaces.

If the IPMP configuration at the moment of repair does not
reflect the group's original configuration of active interfaces, then the
repaired interface is redeployed as an active interface, notwithstanding the FAILBACK=no status.

Note –

The FAILBACK=no mode is set for the whole IPMP
group. It is not a per-interface tunable parameter.

IPMP and Dynamic Reconfiguration

The dynamic reconfiguration (DR) feature allows you to reconfigure system
hardware, such as interfaces, while the system is running. DR can be used
only on systems that support this feature.

You typically use the cfgadm command to perform DR
operations. However, some platforms provide other methods. Make sure to consult
your platform's documentation for details about performing DR. For systems that
use the Solaris OS, you can find specific documentation about DR in the resources
that are listed in Table 7–1.
Current information about DR is also available at http://docs.sun.com and can be obtained by searching
for the topic “dynamic reconfiguration.”

On a system that supports DR of NICs, IPMP can be used to preserve connectivity
and prevent disruption of existing connections. IPMP is integrated into the
Reconfiguration Coordination Manager (RCM) framework. Thus, you can safely
attach, detach, or reattach NICs while RCM manages the dynamic reconfiguration
of system components.

Attaching New NICs

With DR support, you can attach, plumb, and then add new interfaces
to existing IPMP groups. Or, if appropriate, you can configure the newly added
interfaces into their own IPMP group. For procedures to configure IPMP groups,
refer to Configuring IPMP Groups. After these
interfaces have been configured, they are immediately available for use by
IPMP. However, to benefit from the advantages of using customized link names,
you must assign generic link names to replace the interface's hardware-based
link names. Then you create corresponding configuration files by using the
generic name that you just assigned. For procedures to configure a single interface
by using customized link names, refer to How to Configure an IP Interface After System Installation. After you assign a generic
link name to an interface, make sure that you always refer to the generic name
when you perform any additional configuration on the interface, such as using
the interface for IPMP.
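Assigning a generic link name to a newly attached interface can be sketched with dladm (a sketch; bge0 and net0 are illustrative hardware-based and generic names):

```shell
# Replace the hardware-based link name with a generic link name
dladm rename-link bge0 net0
# Subsequent configuration then always refers to the generic name
ifconfig net0 plumb
ifconfig net0 group ipmp0
```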

Detaching NICs

All requests to detach system components that contain NICs are first
checked to ensure that connectivity can be preserved. For instance, by default
you cannot detach a NIC that is not in an IPMP group. You also cannot detach
a NIC that contains the only functioning interfaces in an IPMP group. However,
if you must remove the system component, you can override this behavior by
using the -f option of cfgadm, as explained
in the cfgadm(1M) man
page.

If the checks are successful, the daemon sets the OFFLINE flag
for the interface. All test addresses on the interfaces are unconfigured.
Then, the NIC is unplumbed from the system. If any of these steps fail, or
if the DR of other hardware on the same system component fails, then the previous
configuration is restored to its original state. A status message about this
event will be displayed. Otherwise, the detach request completes successfully.
You can remove the component from the system. No existing connections are
disrupted.
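An underlying interface can also be taken offline manually before a detach operation by using the if_mpadm command (a sketch; net0 is a placeholder for an interface in an IPMP group):

```shell
# Offline the interface; its addresses move to the remaining interfaces
# in the IPMP group and the OFFLINE flag is set
if_mpadm -d net0
# After the component is reattached, undo the offline operation
if_mpadm -r net0
```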

Replacing NICs

When an underlying interface of an IPMP group fails, a typical solution
would be to replace the failed interface by attaching a new NIC. RCM records
the configuration information associated with any NIC that is detached from
a running system. If you replace a failed NIC with an identical NIC,
then RCM automatically configures the interface according to the contents
of the existing /etc/hostname.interface file.

For example, suppose you replace a failed bge0 interface
with another bge0 interface. The failed bge0 already
has a corresponding /etc/hostname.bge0 file. After you
attach the replacement bge NIC, RCM plumbs and then configures
the bge0 interface by using the information in the /etc/hostname.bge0 file. Thus the interface is properly configured with the test
address and is added to the IPMP group according to the contents of the configuration
file.

You can replace a failed NIC with a different NIC, provided that both
are the same type, such as Ethernet. In this case, RCM plumbs the new interface
after it is attached. If you did not use customized link names when you first
configured your interfaces, and no corresponding configuration file for the
new interface exists, then you will have to perform additional configuration
steps. You will need to create a new corresponding configuration file for
the new NIC. Additionally, you will need to add correct information to the
file before you can add the interface to the IPMP group.

However, if you used customized link names, the additional configuration
steps are unnecessary. If you reassign the failed interface's link name to
the new interface, the new interface acquires the configuration that is specified
in the removed interface's configuration file. RCM then configures the interface
by using the information in that file. For procedures to recover your IPMP
configuration by using DR when an interface fails, refer to Recovering an IPMP Configuration With Dynamic Reconfiguration.

IPMP Terminology and Concepts

This section introduces terms and concepts that are used throughout
the IPMP chapters in this book.

active interface

Refers to an underlying interface that can be used by the
system to send or receive data traffic. An interface is active if the following
conditions are met:

At least one IP address is UP in the interface.
See UP address.

The FAILED, INACTIVE,
or OFFLINE flag is not set on the interface.

The interface has not been flagged as having a duplicate hardware
address.

Compare to unusable interface, INACTIVE interface.

data address

Refers to an IP address that can be used as the source or destination
address for data. Data addresses are part of an IPMP group and can be used
to send and receive traffic on any interface in the group. Moreover, the set
of data addresses in an IPMP group can be used continuously, provided that
one interface in the group is functioning. In previous IPMP implementations,
data addresses were hosted on the underlying interfaces of an IPMP group.
In the current implementation, data addresses are hosted on the IPMP interface.

DEPRECATED address

Refers to an IP address that cannot be used as the source
address for data. Typically, IPMP test addresses are DEPRECATED.
However, any address can be marked DEPRECATED to prevent
the address from being used as a source address.

dynamic reconfiguration

Refers to a feature that allows you to reconfigure a system
while the system is running, with little or no impact on ongoing operations.
Not all Sun platforms support DR. Some Sun platforms might only support DR
of certain types of hardware. On platforms that support DR of NICs, IPMP can
be used for uninterrupted network access to the system during DR.

explicit IPMP interface creation

Applies only to the current IPMP implementation. The term
refers to the method of creating an IPMP interface by using the ifconfig
ipmp command. Explicit IPMP interface creation is the preferred
method for creating IPMP groups. This method allows the IPMP interface name
and IPMP group name to be set by the administrator.

Compare to implicit IPMP interface creation.

FAILBACK=no mode

Refers to a setting of an underlying interface that minimizes
rebinding of incoming addresses to interfaces by avoiding redistribution during
interface repair. Specifically, when an interface repair is detected, the
interface's FAILED flag is cleared. However, if the mode
of the repaired interface is FAILBACK=no, then the INACTIVE flag is also set to prevent use of the interface, provided that
a second functioning interface also exists. If the second interface in the
IPMP group fails, then the INACTIVE interface is eligible
to take over. While the concept of failback no longer applies in the current
IPMP implementation, the name of this mode is preserved for administrative
compatibility.

FAILED interface

Indicates an interface that the in.mpathd daemon
has determined to be malfunctioning. The determination is achieved by either
link-based or probe-based failure detection. The FAILED flag
is set on any failed interface.

failure detection

Refers to the process of detecting when a physical interface
or the path from an interface to an Internet layer device no longer works.
Two forms of failure detection are implemented: link-based failure detection,
and probe-based failure detection.

implicit IPMP interface creation

Refers to the method of creating an IPMP interface by using
the ifconfig command to place an underlying interface in
a nonexistent IPMP group. Implicit IPMP interface creation is supported for
backward compatibility with the previous IPMP implementation. However, this method
does not provide the ability to set the IPMP interface name or IPMP group
name.

Compare to explicit IPMP interface creation.

INACTIVE interface

Refers to an interface that is functioning but is not being
used according to administrative policy. The INACTIVE flag
is set on any INACTIVE interface.

Compare to active interface, unusable interface.

IPMP anonymous group support

Indicates
an IPMP feature in which the IPMP daemon tracks the status of all network
interfaces in the system, regardless of whether they belong to an IPMP group.
However, if the interfaces are not actually in an IPMP group, then the addresses
on these interfaces are not available in case of interface failure.

IPMP
group

Refers to a set of network interfaces that are treated as
interchangeable by the system in order to improve network availability and
utilization. Each IPMP group has a set of data addresses that the system can
associate with any set of active interfaces in the group. Use of this set
of data addresses maintains network availability and improves network utilization.
The administrator can select which interfaces to place into an IPMP group.
However, all interfaces in the same group must share a common set of properties,
such as being attached to the same link and configured with the same set of
protocols (for example, IPv4 and IPv6).

IPMP group interface

See IPMP interface.

IPMP group name

Refers to the name of an IPMP group, which can be assigned
with the ifconfig group subcommand. All underlying interfaces
with the same IPMP group name are defined as part of the same IPMP group.
In the current implementation, IPMP group names are de-emphasized in favor
of IPMP interface names. Administrators are encouraged to use the same name
for both the IPMP interface and the group.

IPMP interface

Applies
only to the current IPMP implementation. The term refers to the IP interface
that represents a given IPMP group, any or all of the interface's underlying
interfaces, and all of the data addresses. In the current IPMP implementation,
the IPMP interface is the core component for administering an IPMP group,
and is used in routing tables, ARP tables, firewall rules, and so forth.

IPMP interface name

Indicates the name of an IPMP interface. This document uses
the naming convention of ipmpN.
The system also uses the same naming convention in implicit IPMP interface
creation. However, the administrator can choose any name by using explicit
IPMP interface creation.

IPMP singleton

Refers to an IPMP configuration that is used by Sun Cluster
software that allows a data address to also act as a test address. This configuration
applies, for instance, when only one interface belongs to an IPMP group.

link-based failure detection

Specifies a passive form of failure detection, in which the
link status of the network card is monitored to determine an interface's status.
Link-based failure detection only tests whether the link is up. This type
of failure detection is not supported by all network card drivers. Link-based
failure detection requires no explicit configuration and provides instantaneous
detection of link failures.

Compare to probe-based failure detection.

load spreading

Refers to the process of distributing inbound or outbound
traffic over a set of interfaces. Unlike load balancing, load spreading does
not guarantee that the load is evenly distributed. With load spreading, higher
throughput is achieved. Load spreading occurs only when the network traffic
is flowing to multiple destinations that use multiple connections.

Inbound load spreading indicates the process of distributing inbound
traffic across the set of interfaces in an IPMP group. Inbound load spreading
cannot be controlled directly with IPMP. The process is indirectly manipulated
by the source address selection algorithm.

Outbound load spreading refers to the process of distributing outbound
traffic across the set of interfaces in an IPMP group. Outbound load spreading
is performed on a per-destination basis by the IP module, and is adjusted
as necessary depending on the status and members of the interfaces in the
IPMP group.

NOFAILOVER address

Applies only to the previous IPMP implementation. Refers to
an address that is associated with an underlying interface and thus remains
unavailable if the underlying interface fails. All NOFAILOVER addresses
have the NOFAILOVER flag set. IPMP test addresses must
be designated as NOFAILOVER, while IPMP data addresses
must never be designated as NOFAILOVER. The concept of
failover does not exist in the current IPMP implementation. However, the term NOFAILOVER remains for administrative compatibility.

OFFLINE interface

Indicates an interface that has been administratively disabled
from system use, usually in preparation for being removed from the system.
Such interfaces have the OFFLINE flag set. The if_mpadm command can be used to switch an interface to an offline status.

physical interface

See underlying interface.

probe

Refers to an ICMP packet, similar to the packets that are
used by the ping command. This probe is used to test the
send and receive paths of a given interface. Probe packets are sent by the in.mpathd daemon, if probe-based failure detection is enabled.
A probe packet uses an IPMP test address as its source address.

probe-based failure detection

Indicates an active form of failure detection, in which probes
are exchanged with probe targets to determine an interface's status. When
enabled, probe-based failure detection tests the entire send and receive path
of each interface. However, this type of detection requires the administrator
to explicitly configure each interface with a test address.

Compare to link-based failure detection.

probe target

Refers to a system on the same link as an interface in an
IPMP group. The target is selected by the in.mpathd daemon
to help check the status of a given interface by using probe-based failure
detection. The probe target can be any host on the link that is capable of
sending and receiving ICMP probes. Probe targets are usually routers. Several
probe targets are usually used to insulate the failure detection logic from
failures of the probe targets themselves.

source address selection

Refers to the process of selecting a data address in the IPMP
group as the source address for a particular packet. Source address selection
is performed by the system whenever an application has not specifically selected
a source address to use. Because each data address is associated with only
one hardware address, source address selection indirectly controls inbound
load spreading.

STANDBY interface

Indicates an interface that has been administratively configured
to be used only when another interface in the group has failed. All STANDBY interfaces will have the STANDBY flag set.

test address

Refers to an IP address that must be used as the source or
destination address for probes, and must not be used as a source or destination
address for data traffic. Test addresses are associated with an underlying
interface. These addresses are designated as NOFAILOVER so
that they remain on the underlying interface even if the interface fails,
which facilitates repair detection. Because test addresses are not used for
data traffic, all test addresses must also be designated as DEPRECATED to
keep the system from using them as source addresses for data packets.

underlying interface

Specifies an IP interface that is part of an IPMP group and
is directly associated with an actual network device. For example, if ce0 and ce1 are placed into IPMP group ipmp0,
then ce0 and ce1 comprise the underlying
interfaces of ipmp0. In the previous implementation, IPMP
groups consist solely of underlying interfaces. However, in the current implementation,
these interfaces underlie the IPMP interface (for example, ipmp0)
that represents the group, hence the name.

undo-offline operation

Refers to the act of administratively enabling a previously
offlined interface to be used by the system. The if_mpadm command
can be used to perform an undo-offline operation.

unusable interface

Refers to an underlying interface that cannot be used to send
or receive data traffic at all in its current configuration. An unusable interface
differs from an INACTIVE interface, which is not currently
being used but can be used if an active interface in the group becomes unusable.
An interface is unusable if one of the following conditions exists:

The interface has no UP address.

The FAILED or OFFLINE flag
has been set for the interface.

The interface has been flagged as having the same hardware
address as another interface in the group.

target systems

See probe target.

UP address

Refers to an address that has been made administratively available
to the system by setting the UP flag. An address that is
not UP is treated as not belonging to the system, and thus
is never considered during source address selection.

Chapter 8 Administering IPMP

This chapter provides tasks for administering interface groups with
IP network multipathing (IPMP). The following major topics are discussed:

IPMP Administration Task Maps

In this Solaris release, the ipmpstat command
is the preferred tool for obtaining information about IPMP groups.
In this chapter, the ipmpstat command replaces certain
functions of the ifconfig command that were used in previous
Solaris releases to provide IPMP information.

Configuring IPMP Groups

This section provides procedures that are used to plan and configure
IPMP groups.

How to Plan an IPMP Group

The following procedure includes the required planning tasks and information
to be gathered prior to configuring an IPMP group. The tasks do not have to
be performed in sequence.

Determine the general IPMP configuration that would suit your
needs.

Your IPMP configuration depends on the network requirements
for the type of traffic that your system hosts. IPMP spreads
outbound network packets across the IPMP group's interfaces, and thus improves
network throughput. However, for a given TCP connection, inbound traffic normally
follows only one physical path to minimize the risk of processing out-of-order
packets.

Thus, if your network handles a large volume of outbound traffic, configuring
multiple interfaces into an IPMP group can improve network performance. If,
instead, the system hosts heavy inbound traffic, then adding interfaces to
the group does not necessarily improve performance through load spreading.
However, having multiple interfaces helps to guarantee network availability
during an interface failure.

For SPARC based systems, verify that each interface in the group
has a unique MAC address.

Ensure that the same set of STREAMS modules is pushed and configured
on all interfaces in the IPMP group.

All interfaces in the same
group must have the same STREAMS modules configured in the same order.

Check the order of STREAMS modules on all interfaces in the prospective
IPMP group.

You can print a list of STREAMS modules by using
the ifconfig interface modlist command.
For example, here is the ifconfig output for an hme0 interface:

# ifconfig hme0 modlist
0 arp
1 ip
2 hme

Interfaces normally exist as network drivers directly below the IP module,
as shown in the output from ifconfig hme0 modlist. They
should not require additional configuration.

However, certain technologies insert themselves as a STREAMS module
between the IP module and the network driver. If a STREAMS module is stateful,
then unexpected behavior can occur on failover, even if you push the same
module onto all of the interfaces in a group. However, you can use stateless
STREAMS modules, provided that you push them in the same order on all interfaces
in the IPMP group.

Push the modules of an interface in the standard order for the
IPMP group.

ifconfig interface modinsert module-name@position

ifconfig hme0 modinsert vpnmod@3

Use the same IP addressing format on all interfaces of the IPMP
group.

If one interface is configured for IPv4, then all interfaces
of the group must be configured for IPv4. For example, if you add IPv6 addressing
to one interface, then all interfaces in the IPMP group must be configured
for IPv6 support.

Determine the type of failure detection that you want to implement.

For example, if you want to implement probe-based failure detection,
then you must configure test addresses on the underlying interfaces. For related
information, see Types of Failure Detection in IPMP.

Ensure that all interfaces in the IPMP group are connected to
the same local network.

For example, you can configure interfaces
on the same IP subnet into an IPMP group. You can configure any
number of interfaces into an IPMP group.

Ensure that the IPMP group does not contain interfaces with different
network media types.

The interfaces that are grouped together
should be of the same interface type, as defined in /usr/include/net/if_types.h. For example, you cannot combine Ethernet and Token ring interfaces
in an IPMP group. As another example, you cannot combine a Token bus interface
with asynchronous transfer mode (ATM) interfaces in the same IPMP group.

For IPMP with ATM interfaces, configure the ATM interfaces in
LAN emulation mode.

IPMP is not supported for interfaces using
Classical IP over ATM.

How to Configure an IPMP Group by Using DHCP

In the current IPMP implementation, IPMP groups can be configured with
Dynamic Host Configuration Protocol (DHCP) support.

A multiple-interfaced IPMP group can be configured with active-active
interfaces or active-standby interfaces. For related information, see Types of IPMP Interface Configurations. The following
procedure describes steps to configure an active-standby IPMP group by using
DHCP.

Finally, if you are using DHCP, make sure that the underlying interfaces
have infinite leases. Otherwise, in case of a group failure, the test addresses
expire, and the IPMP daemon reverts to link-based failure detection.
In these circumstances, the group's failure detection does not behave correctly
during interface recovery. For more information about configuring
DHCP, refer to Chapter 12, Planning for DHCP Service (Tasks), in System Administration Guide: IP Services.

On the system on which you want to configure the IPMP group, assume
the Primary Administrator role, or become superuser.

To configure IPv6 IPMP interfaces, use the same command syntax
but specify inet6 in
the ifconfig command, for example:

# ifconfig ipmp-interface inet6 ipmp [group group-name]

This note applies to all configuration procedures that involve IPv6
IPMP interfaces.

ipmp-interface

Specifies the name of the IPMP interface. You can assign any
meaningful name to the IPMP interface. As with any IP interface, the name
consists of a string and a number, such as ipmp0.

group-name

Specifies the name of the IPMP group. The name can be any
name of your choice. Assigning a group name is optional. By default, the name
of the IPMP interface also becomes the name of the IPMP group. Preferably,
retain this default setting by not using the group-name option.

Note –

The syntax in this step uses the preferred explicit method of
creating an IPMP group by creating the IPMP interface.

An alternative
method to create an IPMP group is implicit creation, in which you use the
syntax ifconfig interface group group-name. In this case, the system creates the lowest
available ipmpN to become the
group's IPMP interface. For example, if ipmp0 already exists
for group acctg, then the syntax ifconfig ce0
group fieldops causes the system to create ipmp1 for
group fieldops. All UP data addresses
of ce0 are then assigned to ipmp1.

However, implicit creation of IPMP groups is not encouraged. Support
for implicit creation is provided only for compatibility with
previous Solaris releases. Explicit creation provides optimal control over
the configuration of IPMP interfaces.

Add underlying IP interfaces that will contain test addresses
to the IPMP group, including the standby interface.

# ifconfig interface group group-name -failover [standby] up

Have DHCP configure and manage the data addresses on the IPMP
interface.

You must plumb as many logical interfaces on the IPMP interface as
there are data addresses, and then have DHCP configure and manage the addresses
on these interfaces as well.
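As a sketch only, the combined steps might look as follows. The interface names (ce0, ce1, ipmp0) are hypothetical, and the sketch assumes that this Solaris release allows the ifconfig dhcp subcommand to manage addresses on logical interfaces:

```shell
# Create the IPMP interface (names are hypothetical).
ifconfig ipmp0 ipmp

# Add underlying interfaces that will carry test addresses,
# including one standby interface.
ifconfig ce0 group ipmp0 -failover up
ifconfig ce1 group ipmp0 -failover standby up

# Have DHCP configure and manage the first data address on the IPMP interface.
ifconfig ipmp0 dhcp start

# Plumb an additional logical interface for a second data address,
# then have DHCP manage that address as well.
ifconfig ipmp0 addif 0.0.0.0 up
ifconfig ipmp0:1 dhcp start
```

One logical interface is plumbed per additional data address, matching the rule stated above.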

Create an IPMP interface.

# ifconfig ipmp-interface ipmp

ipmp-interface

Specifies the name of the IPMP interface. You can assign any
meaningful name to the IPMP interface. As with any IP interface, the name
consists of a string and a number, such as ipmp0.

group-name

Specifies the name of the IPMP group. The name can be any
name of your choice. Any non-null name is valid, provided that the name does
not exceed 31 characters. Assigning a group name is optional. By default,
the name of the IPMP interface also becomes the name of the IPMP group. Preferably,
retain this default setting by not using the group-name option.

Note –

The syntax in this step uses the preferred explicit method of
creating an IPMP group by creating the IPMP interface.

An alternative
method to create an IPMP group is implicit creation, in which you use the
syntax ifconfig interface group group-name. In this case, the system creates the lowest
available ipmpN to become the
group's IPMP interface. For example, if ipmp0 already exists
for group acctg, then the syntax ifconfig ce0
group fieldops causes the system to create ipmp1 for
group fieldops. All UP data addresses
of ce0 are then assigned to ipmp1.

However, implicit creation of IPMP groups is not encouraged. Support
for implicit creation is provided only for compatibility with
previous Solaris releases. Explicit creation provides optimal control over
the configuration of IPMP interfaces.

Add underlying IP
interfaces to the group.

# ifconfig ip-interface group group-name

Note –

In a dual-stack environment, placing the IPv4 instance of an interface
under a particular group automatically places the IPv6 instance under the
same group as well.

For additional options that you can use with the ifconfig command
while adding addresses, refer to the ifconfig(1M) man page.

Configure test addresses on the underlying
interfaces.

# ifconfig interface -failover ip-address up

Note –

You need to configure a test address only if you want to use probe-based
failure detection on a particular interface.

All test IP addresses
in an IPMP group must use the same network prefix. The test IP addresses must
belong to a single IP subnet.

(Optional) Preserve the IPMP group configuration
across reboots.

To configure an IPMP group that persists across
system reboots, you would edit the hostname configuration
file of the IPMP interface to add data addresses. Then, if you want to use
test addresses, you would edit the hostname configuration
file of each of the group's underlying IP interfaces. Note that data and test
addresses can be either IPv4 or IPv6 addresses. Perform the following steps:

Edit the /etc/hostname.ipmp-interface file
by adding the following lines:

ipmp group group-name data-address up
addif data-address
...

You can add more data addresses on separate addif lines
in this file.

Edit the /etc/hostname.interface file
of the underlying IP interfaces that contain the test address by adding the
following line:

group group-name -failover test-address up

Follow this same step to add test addresses to other underlying interfaces
of the IPMP group.

Caution –

When adding test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise,
the test IP addresses are treated as data addresses, which causes problems
for system administration. Preferably, set the -failover option
before specifying the IP address.
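As an illustration only, the resulting configuration files might contain entries such as the following. The group name, interface names, and addresses are hypothetical:

```shell
# Hypothetical /etc/hostname.ipmp0 -- data addresses on the IPMP interface:
#
#   ipmp group itops0 192.168.10.10 up
#   addif 192.168.10.11 up

# Hypothetical /etc/hostname.ce0 -- test address on an underlying interface,
# with -failover placed before the up keyword, as the caution requires:
#
#   group itops0 -failover 192.168.10.30 up
```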

How to Manually Configure an Active-Standby IPMP Group

For more information about standby interfaces, see Types of IPMP Interface Configurations. The following procedure configures
an IPMP group where one interface is kept as a reserve. This interface is
deployed only when an active interface in the group fails.

On the system on which you want to configure the IPMP group, assume
the Primary Administrator role, or become superuser.

Create an IPMP interface.

# ifconfig ipmp-interface ipmp

ipmp-interface

Specifies the name of the IPMP interface. You can assign any
meaningful name to the IPMP interface. As with any IP interface, the name
consists of a string and a number, such as ipmp0.

group-name

Specifies the name of the IPMP group. The name can be any
name of your choice. Any non-null name is valid, provided that the name does
not exceed 31 characters. Assigning a group name is optional. By default,
the name of the IPMP interface also becomes the name of the IPMP group. Preferably,
retain this default setting by not using the group-name option.

Note –

The syntax in this step uses the preferred explicit method of
creating an IPMP group by creating the IPMP interface.

An alternative
method to create an IPMP group is implicit creation, in which you use the
syntax ifconfig interface group group-name. In this case, the system creates the lowest
available ipmpN to become the
group's IPMP interface. For example, if ipmp0 already exists
for group acctg, then the syntax ifconfig ce0
group fieldops causes the system to create ipmp1 for
group fieldops. All UP data addresses
of ce0 are then assigned to ipmp1.

However, implicit creation of IPMP groups is not encouraged. Support
for implicit creation is provided only for compatibility with
previous Solaris releases. Explicit creation provides optimal control over
the configuration of IPMP interfaces.

Add underlying IP
interfaces to the group.

# ifconfig ip-interface group group-name

Note –

In a dual-stack environment, placing the IPv4 instance of an interface
under a particular group automatically places the IPv6 instance under the
same group as well.

For additional options that you can use with the ifconfig command
while adding addresses, refer to the ifconfig(1M) man page.

Configure test addresses on the underlying interfaces.

To configure a test address on an active interface, use the
following command:

# ifconfig interface -failover ip-address up

To configure a test address on a designated standby interface,
use the following command:

# ifconfig interface -failover ip-address standby up

Note –

You need to configure a test address only if you want to use probe-based
failure detection on a particular interface.

All test IP addresses
in an IPMP group must use the same network prefix. The test IP addresses must
belong to a single IP subnet.

(Optional) Preserve the IPMP group configuration across
reboots.

To configure an IPMP group that persists across system
reboots, you would edit the hostname configuration file
of the IPMP interface to add data addresses. Then, if you want to use test
addresses, you would edit the hostname configuration
file of each of the group's underlying IP interfaces. Note that data and test
addresses can be either IPv4 or IPv6 addresses. Perform the following steps:

Edit the /etc/hostname.ipmp-interface file
by adding the following lines:

ipmp group group-name data-address up
addif data-address
...

You can add more data addresses on separate addif lines
in this file.

Edit the /etc/hostname.interface file
of the underlying IP interfaces that contain the test address by adding the
following line:

group group-name -failover test-address up

Follow this same step to add test addresses to other underlying interfaces
of the IPMP group. For a designated standby interface, the line must be as
follows:

group group-name -failover test-address standby up

Caution –

When adding test address information to the /etc/hostname.interface file, make sure to specify the -failover option before the up keyword. Otherwise,
the test IP addresses are treated as data addresses, which causes problems
for system administration. Preferably, set the -failover option
before specifying the IP address.

Example 8–2 Configuring an Active-Standby IPMP Group

This example shows how to manually create the same persistent active-standby
IPMP configuration that is provided in Example 8–1.
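Because Example 8–1 itself is not reproduced here, the following is only a sketch of the kind of command sequence involved; the interface names and addresses are invented for illustration:

```shell
# Create the IPMP interface with two data addresses
# (names and addresses are hypothetical).
ifconfig ipmp0 ipmp 192.168.85.30 up
ifconfig ipmp0 addif 192.168.85.32 up

# Add one active interface and one standby interface to the group,
# each with its own test address.
ifconfig ce0 group ipmp0 -failover 192.168.85.10 up
ifconfig ce1 group ipmp0 -failover 192.168.85.20 standby up
```

To make the configuration persistent, the equivalent entries would go into the /etc/hostname files as described in the preceding steps.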

Maintaining IPMP Groups

This section contains tasks for maintaining existing IPMP groups and
the interfaces within those groups. The tasks presume that you have already
configured an IPMP group, as explained in Configuring IPMP Groups.

How to Add an Interface to an IPMP Group

Before You Begin

Make sure that the interface that you add to the group meets all of the
requirements for group membership. For a list of the requirements of an IPMP
group, see How to Plan an IPMP Group.

On
the system with the IPMP group configuration, assume the Primary Administrator
role or become superuser.

How to Add or Remove IP Addresses

You use the ifconfig addif syntax to add addresses
to interfaces or the ifconfig removeif syntax to remove addresses from
interfaces. In the current IPMP implementation, test addresses are hosted
on the underlying IP interface, while data addresses are assigned to the IPMP
interface. The following procedures describe how to add or remove IP addresses
that are either test addresses or data addresses.

How to Move an Interface From One IPMP Group to Another
Group

You can place an interface in a new IPMP group when the interface belongs
to an existing IPMP group. You do not need to remove the interface from the
current IPMP group. When you place the interface in a new group, the interface
is automatically removed from any existing IPMP group.

On the system with the IPMP group configuration, assume the Primary
Administrator role or become superuser.

Placing the interface in a new group automatically removes the interface
from any existing group.

Example 8–6 Moving an Interface to a Different IPMP Group

This example assumes
that the underlying interfaces of your group are subitops0, subitops1, subitops2, and hme0.
To change the IPMP group of interface hme0 to the group cs-link1, you would type the following:

# ifconfig hme0 group cs-link1

This command removes the hme0 interface from IPMP
group itops0 and then puts the interface in the group cs-link1.

You would then edit the /etc/hostname.hme0 file to update the “group”
entry so that the new group assignment persists across reboots.

Configuring for Probe-Based Failure Detection

Probe-based failure detection involves the use of target systems, as
explained in Probe-Based Failure Detection.
In identifying targets for probe-based failure detection, the in.mpathd daemon operates in two modes: router target mode or multicast
target mode. In the router target mode, the multipathing daemon probes targets
that are defined in the routing table. If no targets are defined, then the
daemon operates in multicast target mode, where multicast packets are sent
out to probe neighbor hosts on the LAN.

Preferably, you should set up host targets for the in.mpathd daemon
to probe. For some IPMP groups, the default router is sufficient as a target.
However, for some IPMP groups, you might want to configure specific targets
for probe-based failure detection. To specify the targets, set up host routes
in the routing table as probe targets. Any host routes that are configured
in the routing table are listed before the default router. IPMP uses the explicitly
defined host routes for target selection. Thus, you should set up host routes
to configure specific probe targets rather than use the default router.

To set up host routes in the routing table, you use the route command.
You can use the -p option with this command to add persistent
routes. For example, route -p add adds a route that remains
in the routing table even after you reboot the system. The -p option
thus allows you to add persistent routes without needing special scripts
to re-create the routes at every system startup. To optimally use probe-based
failure detection, make sure that you set up multiple targets to receive probes.

The sample procedure that follows shows the exact syntax to add persistent
routes to targets for probe-based failure detection. For more information
about the options for the route command, refer to the route(1M) man page.

Consider the following criteria when evaluating which hosts on your
network might make good targets.

Make sure that the prospective targets are available and running.
Make a list of their IP addresses.

Ensure that the target interfaces are on the same network
as the IPMP group that you are configuring.

The netmask and broadcast address of the target systems must
be the same as the addresses in the IPMP group.

The target host must be able to answer ICMP requests from
the interface that is using probe-based failure detection.

How to Manually Specify Target Systems for Probe-Based
Failure Detection

Log in with your user account to the system where you are configuring
probe-based failure detection.

Add a route to a particular host to be used as a target in probe-based
failure detection.

$ route -p add -host destination-IP gateway-IP -static

where destination-IP and gateway-IP are IPv4 addresses of the host to be used as a target. For
example, you would type the following to specify the target system 192.168.10.137, which is on the same subnet as the interfaces in IPMP group itops0:

$ route -p add -host 192.168.10.137 192.168.10.137 -static

This new route will be automatically configured every time the system
is restarted. If you want to define only a temporary route to a target system
for probe-based failure detection, then do not use the -p option.

Add routes to additional hosts on the network to be used as target
systems.
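This step can be repeated once per target, or, as a small sketch, performed in a loop; the target addresses below are invented for illustration and each address serves as both destination and gateway, as in the example above:

```shell
# Persistently add three hypothetical probe targets on the local subnet.
for target in 192.168.10.137 192.168.10.138 192.168.10.139; do
  route -p add -host $target $target -static
done
```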

How to Configure the Behavior of the IPMP Daemon

Use the IPMP configuration
file /etc/default/mpathd to configure the following system-wide
parameters for IPMP groups.

FAILURE_DETECTION_TIME

TRACK_INTERFACES_ONLY_WITH_GROUPS

FAILBACK

On the system with the IPMP group configuration, assume the Primary
Administrator role or become superuser.

Type the new value for the FAILURE_DETECTION_TIME parameter.

FAILURE_DETECTION_TIME=n

where n is the amount of time in seconds
for ICMP probes to detect whether an interface failure has occurred. The default
is 10 seconds.

Type the new value for the FAILBACK parameter.

FAILBACK=[yes | no]

yes – The yes value
is the default for the failback behavior of IPMP. When the repair of a failed
interface is detected, network access fails back to the repaired interface,
as described in Detecting Physical Interface Repairs.

no – The no value
indicates that data traffic does not move back to a repaired interface. When
a failed interface is detected as repaired, the INACTIVE flag
is set for that interface. This flag indicates that the interface is currently
not to be used for data traffic. The interface can still be used for probe
traffic.

For example, the IPMP group ipmp0 consists
of two interfaces, ce0 and ce1. In the /etc/default/mpathd file, the FAILBACK=no parameter
is set. If ce0 fails, then it is flagged as FAILED and
becomes unusable. After repair, the interface is flagged as INACTIVE and
remains unusable because of the FAILBACK=no setting.

If ce1 fails and only ce0 is in
the INACTIVE state, then ce0's INACTIVE flag is cleared and the interface becomes usable. If the IPMP group
has other interfaces that are also in the INACTIVE state,
then any one of these INACTIVE interfaces, and not necessarily ce0, can be cleared and become usable when ce1 fails.

Type the new value for the TRACK_INTERFACES_ONLY_WITH_GROUPS parameter.

yes – The yes value
is the default for the behavior of IPMP. This parameter causes IPMP to ignore
network interfaces that are not configured into an IPMP group.

no – The no value
sets failure and repair detection for all network interfaces,
regardless of whether they are configured into an IPMP group. However, when
a failure or repair is detected on an interface that is not configured into
an IPMP group, no action is triggered in IPMP to maintain the networking functions
of that interface. Therefore, the no value is only useful
for reporting failures and does not directly improve network availability.

Restart the in.mpathd daemon.

# pkill -HUP in.mpathd
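Taken together, an /etc/default/mpathd file with all three parameters explicitly set might look like the following; the values shown are only an example, not recommendations:

```shell
# Hypothetical /etc/default/mpathd contents:
#
#   FAILURE_DETECTION_TIME=10
#   TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
#   FAILBACK=no
```

After editing the file, restart the in.mpathd daemon as shown above for the new values to take effect.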

Recovering an IPMP Configuration With Dynamic Reconfiguration

This section contains procedures that relate to administering systems
that support dynamic reconfiguration (DR).

How to Replace a Physical Card That Has Failed

This procedure explains how to replace a physical card on a system that
supports DR. The procedure assumes the following conditions:

You assigned administratively chosen names to the data links
over which you configured the IP interfaces. These interfaces are subitops0 and subitops1.

Both interfaces belong to the IPMP group, itops0.

The interface subitops0 contains a test
address.

The interface subitops0 has failed, and
you need to remove subitops0's underlying card, ce.

You are replacing the ce card with a bge card.

The configuration files correspond to the interfaces and use
the interfaces' customized link names, thus /etc/hostname.subitops0 and /etc/hostname.subitops1.

Before You Begin

The procedures for performing DR vary with the type of system. Therefore,
make sure that you complete the following:

Ensure that your system supports DR.

Consult the appropriate manual that describes DR procedures
on your system. For Sun hardware, all systems that support DR are servers.
To locate current DR documentation on Sun systems, search for “dynamic
reconfiguration” on http://docs.sun.com.

Note –

The steps in the following procedure refer only to aspects of
DR that are specifically related to IPMP and the use of link names. The procedure
does not contain the complete steps to perform DR. For example, some layers
beyond the IP layer require manual configuration steps, such as for ATM and
other services, if the configuration is not automated. Follow the appropriate
DR documentation for your system.

On the system with the IPMP group configuration, assume the Primary
Administrator role or become superuser.

Perform the appropriate DR steps to remove the failed NIC from
the system.

If you are removing the card without intending to insert a
replacement, then skip the rest of the steps after you remove the card.

If you are replacing a card, then proceed to the subsequent
steps.

Make sure that the replacement NIC is not being referenced by
other configurations in the system.

For example, the replacement
NIC you install is bge0. If a /etc/hostname.bge0 file
exists on the system, remove that file.

# rm /etc/hostname.bge0

Replace the default link name of the replacement NIC with the
link name of the failed card.

By default, the link name of the bge card that replaces the failed ce card is bgen, where n is
the instance number, such as bge0.

# dladm rename-link bge0 subitops0

This step transfers the network configuration of subitops0 to bge0.

Attach the replacement NIC to the system.

Complete the DR process by enabling the new NIC's resources to
become available for use.

For example, you use the cfgadm command
to perform this step. For more information, see the cfgadm(1M) man page.

After this step, the new interface is configured with the test address,
added as an underlying interface of the IPMP group, and deployed either as
an active or a standby interface, all depending on the configurations that
are specified in /etc/hostname.subitops0. The kernel
can then allocate data addresses to this new interface according to the contents
of the /etc/hostname.ipmp-interface configuration
file.

About Missing Interfaces at System Boot

Certain systems might have the following configurations:

An IPMP group is configured with underlying IP interfaces

A /etc/hostname.interface file
exists for one underlying IP interface.

The physical hardware that is associated with the /etc/hostname file is missing.

With the new IPMP implementation where data addresses belong to the
IPMP interface, recovering the missing interface becomes automatic. During
system boot, the boot script constructs a list of failed interfaces, including
interfaces that are missing. Based on the /etc/hostname file
of the IPMP interface as well as the hostname files of
the underlying IP interfaces, the boot script can determine to which IPMP
group an interface belongs. When the missing interface is subsequently dynamically
reconfigured on the system, the script then automatically adds that interface
to the appropriate IPMP group and the interface becomes immediately available
for use.

Monitoring IPMP Information

The following procedures use the ipmpstat command,
enabling you to monitor different aspects of IPMP groups on the system. You
can observe the status of the IPMP group as a whole or its underlying IP interfaces.
You can also verify the configuration of data and test addresses for the group.
Information about failure detection is also obtained by using the ipmpstat command. For more details about the ipmpstat command
and its options, see the ipmpstat(1M) man page.

By default, host names are displayed on the output instead of the numeric
IP addresses, provided that the host names exist. To list the numeric IP addresses
in the output, use the -n option together with other options
to display specific IPMP group information.

Note –

In the following procedures, use of the ipmpstat command
does not require system administrator privileges, unless stated otherwise.

How to Obtain IPMP Group Information

Use this procedure to list the status of the various IPMP groups on
the system, including the status of their underlying interfaces. If probe-based
failure detection is enabled for a specific group, the command also includes
the failure detection time for that group.
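As a sketch of this step, the group listing is obtained with the -g option. The privilege-free guard and fallback message below are illustrative additions so the example also runs on systems without IPMP; they are not part of the documented procedure:

```shell
#!/bin/sh
# Display the status of all IPMP groups on the system.
# Output columns are GROUP, GROUPNAME, STATE, FDT, and INTERFACES,
# as described in the field list that follows.
if command -v ipmpstat >/dev/null 2>&1; then
    ipmpstat -g
else
    # Fallback so this sketch also runs on systems without IPMP.
    echo "ipmpstat: command not available on this system"
fi
```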

GROUP

Specifies the IPMP interface name. In the case of an anonymous
group, this field will be empty. For more information about anonymous groups,
see the in.mpathd(1M) man
page.

GROUPNAME

Specifies the name of the IPMP group. In the case of an anonymous
group, this field will be empty.

STATE

Indicates a group's current status, which can be one of the
following:

ok indicates that all underlying interfaces
of the IPMP group are usable.

degraded indicates that some of the underlying
interfaces in the group are unusable.

failed indicates that all of the group's
interfaces are unusable.

FDT

Specifies the failure detection time, if failure detection
is enabled. If failure detection is disabled, this field will be empty.

INTERFACES

Specifies the underlying interfaces that belong to the group.
In this field, active interfaces are listed first, then inactive interfaces,
and finally unusable interfaces. The status of the interface is indicated by
the manner in which it is listed:

interface (without parentheses
or brackets) indicates an active interface. Active interfaces are those interfaces
that are being used by the system to send or receive data traffic.

(interface) (with parentheses)
indicates a functioning but inactive interface. The interface is not in use
as defined by administrative policy.

[interface] (with brackets) indicates
that the interface is unusable because it has either failed or been taken
offline.

How to Obtain IPMP Data Address Information

Use this procedure to display data addresses and the group to which
each address belongs. The displayed information also includes which address
is available for use, depending on whether the address has been toggled up
or down by using the ifconfig command. You can also determine
on which inbound or outbound interface an address can be used.
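As a sketch of this step, the address information is displayed with the -a option, here combined with -n for numeric addresses. The guard and fallback message are illustrative additions, not part of the documented procedure:

```shell
#!/bin/sh
# Display IPMP data address information; -n lists numeric IP addresses
# instead of host names. Output columns include ADDRESS, STATE, GROUP,
# INBOUND, and OUTBOUND, as described in the field list that follows.
if command -v ipmpstat >/dev/null 2>&1; then
    ipmpstat -an
else
    # Fallback so this sketch also runs on systems without IPMP.
    echo "ipmpstat: command not available on this system"
fi
```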

ADDRESS

Specifies the hostname or, if the -n option
is used in conjunction with the -a option, the data address.

STATE

Indicates whether the address on the IPMP interface is up, and therefore usable, or down, and therefore
unusable.

GROUP

Specifies the IPMP IP interface that hosts a specific data
address.

INBOUND

Identifies the interface that receives packets for a given
address. The field information might change depending on external events.
For example, if a data address is down, or if no active IP interfaces remain
in the IPMP group, this field will be empty. The empty field indicates that
the system is not accepting IP packets that are destined for the given address.

OUTBOUND

Identifies the interface that sends packets that are using
a given address as a source address. As with the INBOUND field,
the OUTBOUND field information might also change depending
on external events. An empty field indicates that the system is not sending
out packets with the given source address. The field might be empty either
because the address is down, or because no active IP interfaces remain in
the group.

How to Obtain Information About Underlying IP Interfaces
of a Group

Use this procedure to display information about an IPMP group's underlying
IP interfaces. For a description of the corresponding relationship between
the NIC, data link, and IP interface, see Overview of the Networking Stack.

routes indicates that the system routing
table is used to find probe targets.

mcast indicates that multicast ICMP probes
are used to find targets.

disabled indicates that probe-based failure
detection has been disabled for the interface.

TESTADDR

Specifies the hostname or, if the -n option
is used in conjunction with the -t option, the IP address
that is assigned to the interface to send and receive probes. This field will
be empty if a test address has not been configured.

Note –

If an IP interface is configured with both IPv4 and IPv6 test
addresses, the probe target information is displayed separately for each test
address.

TARGETS

Lists the current probe targets in a space-separated list.
The probe targets are displayed as hostnames or, if the -n option is used in conjunction with the -t option, as IP addresses.
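As a sketch, the probe target information is displayed with the -t option, here combined with -n for numeric addresses. The guard and fallback message are illustrative additions, not part of the documented procedure:

```shell
#!/bin/sh
# Display probe target information for the underlying IP interfaces;
# -n shows targets as numeric IP addresses rather than host names.
# Output columns include MODE, TESTADDR, and TARGETS, as described above.
if command -v ipmpstat >/dev/null 2>&1; then
    ipmpstat -nt
else
    # Fallback so this sketch also runs on systems without IPMP.
    echo "ipmpstat: command not available on this system"
fi
```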

How to Observe IPMP Probes

Use this procedure to observe ongoing probes. When you issue the command
to observe probes, information about probe activity on the system is continuously
displayed until you terminate the command with Ctrl-C.
You must have Primary Administrator privileges to run this command.
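As a sketch of this step, probe mode is selected with the -p option, here combined with -n for numeric target addresses. Because the command runs until interrupted, the sketch caps the output with head; the guard, the fallback message, and the head cap are illustrative additions, not part of the documented procedure:

```shell
#!/bin/sh
# Observe ongoing IPMP probes (normally requires administrator privileges).
# -p selects probe mode; -n shows numeric target addresses. The command
# runs until terminated with Ctrl-C, so the output is capped here.
if command -v ipmpstat >/dev/null 2>&1; then
    ipmpstat -pn | head -20
else
    # Fallback so this sketch also runs on systems without IPMP.
    echo "ipmpstat: command not available on this system"
fi
```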

TIME

Specifies the time a probe was sent relative to when the ipmpstat command was issued. If a probe was initiated prior to ipmpstat being started, then the time is displayed as a negative
value, relative to when the command was issued.

PROBE

Specifies the identifier that represents the probe.

INTERFACE

Specifies the interface on which the probe is sent.

TARGET

Specifies the hostname or, if the -n option
is used in conjunction with -p, the target address to which
the probe is sent.

NETRTT

Specifies the total network round-trip time of the probe and
is measured in milliseconds. NETRTT covers the time between
the moment when the IP module sends the probe and the moment the IP module
receives the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will
be empty.

RTT

Specifies the total round-trip time for the probe and is measured
in milliseconds. RTT covers the time between the moment
the daemon executes the code to send the probe and the moment the daemon completes
processing the ack packets from the target. If the in.mpathd daemon has determined that the probe is lost, then the field will
be empty. Spikes that occur in the RTT which are not present
in the NETRTT might indicate that the local system is overloaded.

RTTAVG

Specifies the probe's average round-trip time over the interface
between local system and target. The average round-trip time helps identify
slow targets. If data is insufficient to calculate the average, this field
will be empty.

RTTDEV

Specifies the standard deviation for the round-trip time to
the target over the interface. The standard deviation helps identify jittery
targets whose ack packets are being sent erratically. For
jittery targets, the in.mpathd daemon is forced to increase
the failure detection time. Consequently, the daemon takes longer
to detect such a target's outage. If data is insufficient to calculate
the standard deviation, this field will be empty.

How to Customize the Output of the ipmpstat Command
in a Script

When you use the ipmpstat command, by default, the most meaningful
fields that fit in 80 columns are displayed. In the output, all the fields
that are specific to the option that you use with the ipmpstat command
are displayed, except in the case of the ipmpstat -p syntax.
If you want to specify the fields to be displayed, use the -o option
in conjunction with other options that determine the output mode of the command.
This option is particularly useful when you issue the command from a script
or by using a command alias.

To customize the output, issue one of the following commands:

To display selected fields of the ipmpstat command,
use the -o option in combination with the specific output
option. For example, to display only the GROUPNAME and
the STATE fields of the group output mode, you would type
the following:

# ipmpstat -g -o groupname,state

To display all the fields of a given ipmpstat command,
use the following syntax:

# ipmpstat -o all

How to Generate Machine Parseable Output of the ipmpstat Command

You can generate machine-parseable information by using the ipmpstat -P syntax. The -P option is intended
particularly for use in scripts. Machine-parseable output differs from the
normal output in the following ways:

Headers are omitted.

Fields are separated by colons (:).

Fields with empty values are empty rather than being filled
with the double dash (--).

When multiple fields are requested, if a field
contains a literal colon (:) or backslash (\),
that character is escaped by prefixing it with a backslash (\).
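The colon-separated format can be consumed with standard shell tools. In the sketch below, the sample line is hypothetical rather than captured output (it mimics one line of ipmpstat -P -o groupname,fdt,interfaces with group name ipmp0 and interfaces net0 and net1), and it contains no escaped colons:

```shell
#!/bin/sh
# Hypothetical line in the format produced by:
#   ipmpstat -P -o groupname,fdt,interfaces
# Fields are separated by colons; this sample contains no escaped colons.
line='ipmp0:10.00s:net0 net1'

# Recover the three requested fields by splitting on the colons.
groupname=$(printf '%s\n' "$line" | cut -d: -f1)
fdt=$(printf '%s\n' "$line" | cut -d: -f2)
interfaces=$(printf '%s\n' "$line" | cut -d: -f3)

echo "group=$groupname fdt=$fdt interfaces=$interfaces"
# prints: group=ipmp0 fdt=10.00s interfaces=net0 net1
```

A parser along these lines must be extended if a requested field can contain an escaped colon or backslash.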

To correctly use the ipmpstat -P syntax,
observe the following rules:

Use the -o option with specific field names together
with the -P option.

Never use -o all with the -P option.

Ignoring either one of these rules will cause ipmpstat -P to
fail.

To display, in machine-parseable format, the group name, the failure
detection time, and the underlying interfaces, you would type the following:

# ipmpstat -P -o groupname,fdt,interfaces