SLBC Active-Passive with two FortiController-5103Bs and two chassis (Expert)

This example describes how to set up an active-passive session-aware load balancing cluster (SLBC) consisting of two FortiGate-5000 chassis, two FortiController-5103Bs, and six FortiGate-5001Bs acting as workers, three in each chassis. This SLBC configuration can have up to seven redundant 10Gbit network connections.

The FortiControllers operate in active-passive HA mode for redundancy. The FortiController in chassis 1 slot 1 is configured to be the primary unit, actively processing sessions. The FortiController in chassis 2 slot 1 becomes the subordinate unit. If the primary unit fails, the subordinate unit resumes all active sessions.

All networks in this example have redundant connections to both FortiControllers. Redundant heartbeat, base control, and base management links are created between the FortiControllers using their front panel B1 and B2 interfaces.

This example also includes a FortiController session sync connection between the FortiControllers using the FortiController F4 front panel interface (resulting in the SLBC having a total of seven redundant 10Gbit network connections). (You can use any fabric front panel interface.)

Heartbeat and base control and management traffic uses VLANs and specific subnets, so the switches and network components used must be configured to allow traffic on these VLANs. You should also be aware of the subnets used in case they conflict with any connected networks.

This example sets the device priority of the FortiController in chassis 1 higher than the device priority of the FortiController in chassis 2 to make sure that the FortiController in chassis 1 becomes the primary FortiController for the cluster.

1. Hardware setup

Install two FortiGate-5000 series chassis and connect them to power. Ideally each chassis should be connected to a separate power circuit. Install a FortiController in slot 1 of each chassis. Install the workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis. Power on both chassis.

Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally (for normal-operation LED status, see the FortiGate-5000 series documentation).

Create duplicate connections from both FortiController front panel interfaces to the Internet and to the internal network.

Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat link by connecting the FortiController B2 interfaces together. You can directly connect the interfaces with a patch cable or connect them together through a switch. If you use a switch, it must allow traffic on the heartbeat VLAN (default 999) and the base control and management VLANs (301 and 101). These connections establish heartbeat, base control, and base management communication between the FortiControllers. Only one heartbeat connection is required but redundant connections are recommended.

Create a FortiController session sync connection between the chassis by connecting the FortiController F4 interfaces. If you use a switch it must allow traffic on the FortiController session sync VLAN (2000). You can use any of the F1 to F8 interfaces. We chose F4 in this example to make the diagram easier to understand.

Connect the mgmt interfaces of both FortiControllers to the internal network or to any network from which you want to manage the cluster.

Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA product.

2. Configuring the FortiController in Chassis 1

Log into the GUI of the FortiController in chassis 1. Set Mode to Active-Passive, set the Device Priority to 250, change the Group ID, select Enable Override, enable Chassis Redundancy, set the Chassis ID to 1, move the b1 and b2 interfaces to the Selected column, and select OK.

Enter this command to use the FortiController front panel F4 interface for FortiController session sync communication between FortiControllers.

config system ha
  set session-sync-port f4
end

You can also enter the complete HA configuration with this command.

config system ha
  set mode active-passive
  set groupid 5
  set priority 250
  set override enable
  set chassis-redundancy enable
  set chassis-id 1
  set hbdev b1 b2
  set session-sync-port f4
end

If you have more than one cluster on the same network, each cluster should have a different Group ID. Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice and normally should be changed.

Enable Override is selected to make sure the FortiController in chassis 1 always becomes the primary unit. Enabling override can cause the cluster to renegotiate more often, so once the cluster is up and running you can disable this setting.

You can also adjust other HA settings. For example, you could change the VLAN used for HA heartbeat traffic if it conflicts with a VLAN on your network, or adjust the Heartbeat Interval and Number of Heartbeats Lost to control how quickly the cluster determines that one of the FortiControllers has failed.

3. Configuring the FortiController in Chassis 2

Log into the FortiController in chassis 2.

Enter these commands to set the host name to ch2-slot1 and to duplicate the HA configuration of the FortiController in chassis 1, with three exceptions: do not enable override, set the Device Priority to a lower value (for example, 10), and set the Chassis ID to 2.

All other configuration settings are synchronized from the primary FortiController when the cluster forms.

config system global
  set hostname ch2-slot1
end

config system ha
  set mode active-passive
  set groupid 5
  set priority 10
  set chassis-redundancy enable
  set chassis-id 2
  set hbdev b1 b2
  set session-sync-port f4
end

4. Configuring the cluster

After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. Both FortiControllers must have the same HA configuration, and at least one heartbeat link (the B1 and B2 interfaces) must be connected. If the FortiControllers are unable to form a cluster, verify that they both have the same HA configuration and that the heartbeat interfaces are connected.

With the configuration described in the previous steps, the FortiController in chassis 1 should become the primary unit and you can log into the cluster using the management IP address that you assigned to the FortiController in chassis 1.

The FortiController in chassis 2 becomes the backup FortiController. You cannot log into or manage the backup FortiController until you configure the cluster External Management IP and add workers to the cluster. Once you do this you can use the External Management IP address and a special port number to manage the backup FortiController. This is described below. (You can also connect to the backup FortiController CLI using the console port.)

You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show both FortiControllers in the cluster.

You can also go to Load Balance > Status to see the status of the primary FortiController (slot icon colored green).

Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list.

The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured their status will be Down.

Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.

You can also enter this command to add slots 3, 4, and 5 to the cluster.
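The following is a sketch of that command, assuming the FortiController load-balance slot syntax; verify it against the CLI reference for your firmware version.

config load-balance setting
  config slots
    edit 3
    next
    edit 4
    next
    edit 5
    end
end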

5. Adding the workers to the cluster

Reset each worker to factory default settings.

execute factoryreset

If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead; this also resets the worker to factory default settings.

Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.

config system interface
  edit mgmt1
    set ip 172.20.120.120/24
  end

Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to identify each worker.

config system global
  set hostname worker-chassis-1-slot-3
end

Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all of the workers.

Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.

config system elbc
  set mode forticontroller
end

6. Managing the cluster

After the workers have been added to the cluster you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. SNMP traps and log messages are sent from the primary worker with the External Management IP as their source address, and connections to FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.

You can use the external management IP followed by a special port number to manage individual devices in the cluster. The special port number identifies the protocol and the chassis and slot number of the device you want to connect to. In fact, this is the only way to manage the backup FortiController. The special port number begins with the standard port number for the protocol you are using, followed by two digits that identify the chassis number and slot number. The port number is determined using the following formula:

service_port x 100 + (chassis_id - 1) x 20 + slot_id

where service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for SSH, 23 for Telnet, 161 for SNMP), chassis_id is the Chassis ID part of the FortiController HA configuration (1 or 2), and slot_id is the number of the chassis slot.

To use Telnet to connect to the CLI of the worker in chassis 2 slot 4 (23 x 100 + (2 - 1) x 20 + 4 = 2324): telnet 172.20.120.100 2324

To use SSH to connect to the CLI of the worker in chassis 1 slot 5 (22 x 100 + (1 - 1) x 20 + 5 = 2205): ssh admin@172.20.120.100 -p2205

You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you first configured the primary FortiController. You can also manage the workers by connecting directly to their mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup FortiController is by using its special port number.

To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your SNMP manager. You can get this MIB file from the Fortinet Support site, in the same location as the current FortiController firmware (select the FortiSwitch-ATCA product).

On the primary FortiController GUI go to Load Balance > Status. As the workers in chassis 1 restart they should appear in their appropriate slots.

The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController status display includes a Config Master link that you can use to connect to the primary worker.

Log into the backup FortiController GUI (for example by browsing to https://172.20.120.100:44321) and go to Load Balance > Status. As the workers in chassis 2 restart they should appear in their appropriate slots.

The backup FortiController Status page shows the status of the workers in chassis 2 and does not include the Config Master link.

7. Results – Configuring the workers

Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root VDOM or create additional VDOMs and move interfaces into them.

For example, if you connect the Internet to FortiController front panel 2 interfaces (fctrl/f2 on the worker GUI and CLI) and the internal network to FortiController front panel 6 interfaces (fctrl/f6) you would access the root VDOM and add this policy to allow users on the Internal network to access the Internet.
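A minimal sketch of such a policy follows, assuming the internal network arrives on fctrl/f6 and the Internet is reached through fctrl/f2; the address objects, service, and NAT settings shown are placeholders to adapt to your deployment.

config firewall policy
  edit 1
    set srcintf fctrl/f6
    set dstintf fctrl/f2
    set srcaddr all
    set dstaddr all
    set action accept
    set schedule always
    set service ALL
    set nat enable
  end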

8. Results – Checking the cluster status

You can use the following get and diagnose commands to show the status of the cluster and all of the devices in it.

Log into the primary FortiController CLI and enter this command to view the system status of the primary FortiController.
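The standard FortiOS status command should apply here (shown as an assumption; the output fields vary by firmware version):

get system status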

For example, you can use SSH to log into the primary FortiController CLI using the external management IP:

ssh admin@172.20.120.100 -p2201

Enter this command from the primary FortiController to show the HA status of the primary and backup FortiControllers. The command output shows a lot of information about the cluster, including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case, 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.
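The HA status command is likely the following (an assumption based on the standard FortiController diagnose CLI; verify against your firmware's CLI reference):

diagnose system ha status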

Enter the same command from the backup FortiController to show the HA status of the backup and primary FortiControllers. Notice that the backup FortiController is shown first; otherwise the output shows the same information about the cluster, including that the B1 heartbeat interfaces are connected (status=alive) and the B2 interfaces are not (status=dead).

After completing a science degree at the University of Waterloo, Bill began his professional life teaching college chemistry in Corner Brook, Newfoundland and fell into technical writing after moving to Ottawa in the mid '80s. Tech writing stints at all sorts of companies finally led to joining Fortinet to write the first FortiGate-300 Administration Guide.
