Chapter 6 Administering Link Aggregations

This chapter describes procedures to configure and maintain link aggregations.
The procedures include steps that take advantage of new features, such as support
for flexible link names.

Overview of Link Aggregations

The Solaris OS supports the organization of network interfaces
into link aggregations. A link aggregation consists of
several interfaces on a system that are configured together as a single, logical
unit. Link aggregation, also referred to as trunking,
is defined in the IEEE 802.3ad Link Aggregation Standard.

The IEEE 802.3ad Link Aggregation Standard provides a method to combine
the capacity of multiple full-duplex Ethernet links into a single logical
link. This link aggregation group is then treated as though it were, in fact,
a single link.

The following are features of link aggregations:

Increased bandwidth –
The capacity of multiple links is combined into one logical link.

Automatic failover/failback –
Traffic from a failed link is failed over to working links in the aggregation.

Load balancing –
Both inbound and outbound traffic is distributed according to user-selected
load-balancing policies, such as source and destination MAC or IP addresses.

Support for redundancy –
Two systems can be configured with parallel aggregations.

Improved administration –
All interfaces are administered as a single unit.

Less drain on the network address pool –
The entire aggregation can be assigned one IP address.

Link Aggregation Basics

The basic link aggregation topology involves a single aggregation that
contains a set of physical interfaces. You might use the basic link aggregation
in the following situations:

For systems that run an application with distributed heavy
traffic, you can dedicate an aggregation to that application's traffic.

For sites with limited IP address space that nevertheless
require large amounts of bandwidth, you need only one IP address for a large
aggregation of interfaces.

For sites that need to hide the existence of internal interfaces,
the IP address of the aggregation hides its interfaces from external applications.

Figure 6–1 shows an aggregation
for a server that hosts a popular web site. The site requires increased bandwidth
for query traffic between Internet customers and the site's database server.
For security purposes, the existence of the individual interfaces on the server
must be hidden from external applications. The solution is the aggregation aggr1, with the IP address 192.168.50.32. This
aggregation consists of three interfaces, bge0 through bge2. These interfaces are dedicated to sending out traffic in response
to customer queries. The outgoing address on packet traffic from all the interfaces
is the IP address of aggr1, 192.168.50.32.

Figure 6–1 Basic Link Aggregation Topology
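For reference, an aggregation like the one shown in Figure 6–1 could be created
with commands similar to the following. The link names and the IP address are
taken from the figure; adapt them to your own configuration.

# dladm create-aggr -l bge0 -l bge1 -l bge2 aggr1
# ifconfig aggr1 plumb 192.168.50.32 up

See How to Create a Link Aggregation for the complete procedure.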

Figure 6–2 depicts
a local network with two systems, and each system has an aggregation configured.
The two systems are connected by a switch. If you need to run an aggregation
through a switch, that switch must support aggregation technology. This type
of configuration is particularly useful for high availability and redundant
systems.

In the figure, System A has an aggregation that consists of two interfaces, bge0 and bge1. These interfaces are connected
to the switch through aggregated ports. System B has an aggregation of four
interfaces, e1000g0 through e1000g3.
These interfaces are also connected to aggregated ports on the switch.

Figure 6–2 Link Aggregation Topology With a Switch

Back-to-Back Link Aggregations

The back-to-back link aggregation topology involves two separate systems
that are cabled directly to each other, as shown in the following figure.
The systems run parallel aggregations.

Figure 6–3 Basic Back-to-Back Aggregation Topology

In this figure, device bge0 on System A is directly
linked to bge0 on System B, and so on. In this way, Systems
A and B can support redundancy and high availability, as well as high-speed
communications between both systems. Each system also has interface ce0 configured
for traffic flow within the local network.

The most common use of back-to-back link aggregations is in data centers, for
mirrored database servers. Both servers need to be updated together and therefore
require significant bandwidth, high-speed traffic flow, and reliability.

Policies and Load Balancing

If you plan to use a link aggregation, consider defining a policy for
outgoing traffic. This policy can specify how you want packets to be distributed
across the available links of an aggregation, thus establishing load balancing.
The following are the possible layer specifiers and their significance for
the aggregation policy:

L2 – Determines the
outgoing link by hashing the MAC (L2) header of each packet

L3 – Determines the
outgoing link by hashing the IP (L3) header of each packet

L4 – Determines the
outgoing link by hashing the TCP, UDP, or other ULP (L4) header of each packet

Any combination of these policies is also valid. The default policy
is L4. For more information, refer to the dladm(1M) man
page.
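For example, to create an aggregation whose outgoing traffic is distributed by
hashing both the MAC and IP headers, you might type a command similar to the
following. The link and aggregation names are placeholders.

# dladm create-aggr -P L2,L3 -l bge0 -l bge1 aggr0

You can also change the policy of an existing aggregation, for example:

# dladm modify-aggr -P L3 aggr0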

Aggregation Mode and Switches

If your aggregation topology involves connection through a switch, you
must note whether the switch supports the link aggregation control
protocol (LACP). If the switch supports LACP, you must configure
LACP for the switch and the aggregation. However, you can define one of the
following modes in which LACP is to operate:

Off mode – The default
mode for aggregations. LACP packets, which are called LACPDUs, are
not generated.

Active mode – The
system generates LACPDUs at regular intervals, which you can specify.

Passive mode – The
system generates an LACPDU only when it receives an LACPDU from the switch.
When both the aggregation and the switch are configured in passive mode, they
cannot exchange LACPDUs.

See the dladm(1M) man page and the switch manufacturer's
documentation for syntax information.
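For example, to create an aggregation that runs LACP in active mode, you might
type a command similar to the following, where -L specifies the LACP mode and -T
specifies the interval at which LACPDUs are generated. The link and aggregation
names are placeholders.

# dladm create-aggr -L active -T short -l e1000g0 -l e1000g1 aggr0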

Requirements for Link Aggregations

Your link aggregation configuration is bound by the following requirements:

You must use the dladm command to configure
aggregations.

An interface that has been plumbed cannot become a member
of an aggregation.

All interfaces in the aggregation must run at the same speed
and in full-duplex mode.

Certain devices do not fulfill the requirement
of the IEEE 802.3ad Link Aggregation Standard to support link state notification.
This support must exist in order for a port to attach to an aggregation or
to detach from an aggregation. Devices that do not support link state notification
can be aggregated only by using the -f option of the dladm
create-aggr command. For such devices, the link state is always
reported as UP. For information about the use of the -f option, see How to Create a Link Aggregation.
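For example, to force the creation of an aggregation over devices that do not
support link state notification, you might type a command similar to the
following. The link and aggregation names are placeholders.

# dladm create-aggr -f -l ce0 -l ce1 aggr0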

Flexible Names for Link Aggregations

You can assign any meaningful name to a link aggregation. For more information
about flexible or customized names, see Assigning Names to Data Links. Previous
Solaris releases identified a link aggregation by the value of a key that you
assigned to the aggregation. For an explanation of this method, see Overview
of Link Aggregations. Although that method continues to be valid, you should
preferably use customized names to identify link aggregations.

Like all other data-link configurations, link aggregations are
administered with the dladm command.
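For example, assuming that the data links net0 and net1 exist on the system, you
could create an aggregation with the customized name webaggr0 and then display
its status as follows:

# dladm create-aggr -l net0 -l net1 webaggr0
# dladm show-aggr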

How to Create a Link Aggregation

Before You Begin

Note –

Link aggregation only works on full-duplex, point-to-point links
that operate at identical speeds. Make sure that the interfaces in your aggregation
conform to this requirement.

If you are using a switch in your aggregation topology, make sure that
you have done the following on the switch:

Configured the ports to be used as an aggregation

If the switch supports LACP, configured LACP in either active
mode or passive mode

Ensure that the link you want to add has no IP interface that
is plumbed over the link.

# ifconfig interface unplumb

Add the link to the aggregation.

# dladm add-aggr -l link [-l link] [...] aggr

where link represents a data link that you
are adding to the aggregation.

Perform any other tasks that are needed to modify the link aggregation
configuration after the data links are added.
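For example, assuming an existing aggregation named aggr0, you might change its
load-balancing policy and LACP mode after adding links, with commands similar to
the following:

# dladm modify-aggr -P L2,L3 aggr0
# dladm modify-aggr -L active aggr0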

For example, in the case of a
configuration that is illustrated in Figure 6–3, you might need to add or modify cable connections and reconfigure
switches to accommodate the additional data links. Refer to the switch documentation
to perform any reconfiguration tasks on the switch.

Example 6–5 Deleting an Aggregation

How to Configure VLANs Over
a Link Aggregation

In the same manner as you configure VLANs over an interface, you can also
create VLANs on a link aggregation. VLANs are described in Chapter 5, Administering VLANs. This section explains how to combine the
configuration of VLANs and link aggregations.

Combining
Network Configuration Tasks While Using Customized Names

This section provides an example that combines all the procedures in
the previous chapters about configuring links, VLANs, and link aggregations
while using customized names. For a description of other networking scenarios
that use customized names, see the article in http://www.sun.com/bigadmin/sundocs/articles/vnamingsol.jsp.

Example 6–7 Configuring Links, Aggregations, and VLANs

In this example, a system that consists of four NICs needs to be configured
as a router for eight separate subnets. To attain this objective, eight links
will be configured, one for each subnet. First, a link aggregation is created
on all four NICs. This untagged link serves the default untagged subnet for
the network to which the default route points.

Then VLAN interfaces are configured over the link aggregation for the
other subnets. The subnets are named according to a color-coded scheme, and
the VLANs are likewise named to correspond to their respective subnets.
The final configuration consists of eight links for the eight subnets: one
untagged link and seven tagged VLAN links.
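Assuming hypothetical device names, VLAN IDs, and color-coded link names, the
commands for such a configuration might resemble the following. The aggregation
default0 carries the untagged subnet, and each dladm create-vlan command creates
one tagged VLAN link over it; the remaining subnets follow the same pattern.

# dladm create-aggr -l nge0 -l nge1 -l nge2 -l nge3 default0
# dladm create-vlan -l default0 -v 2 orange0
# dladm create-vlan -l default0 -v 3 green0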

To make the configurations persist across reboots, the same procedures
apply as in previous Solaris releases. For example, IP addresses need to be
added to configuration files like /etc/inet/ndpd.conf or /etc/hostname.interface. Or, filter
rules for the interfaces need to be included in a rules file. These final
steps are not included in the example. For these steps, refer to the appropriate
chapters in System Administration Guide: IP Services, particularly TCP/IP Administration and DHCP.
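For example, to make the IP configuration of a hypothetical untagged link named
default0 persist across reboots, you might create a hostname file for that link:

# echo 192.168.10.1/24 > /etc/hostname.default0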