New FG1000D HA Deployment Using Nexus VPC Troubleshooting

Hi all, I'm deploying a new pair of FortiGate 1000Ds in an active/active configuration. The outside interfaces connect to a single ISP and seem to be working fine. The issue I'm having is communication between our core Nexus 9Ks and the 1000Ds. This is a multi-tenant environment, so we are leveraging VDOMs on the FortiGates and VRFs on the 9Ks.

I'm using individual /29 networks between the FGs and the 9Ks for routing. We are also using LACP on the FGs and vPC on the pair of 9Ks. The 9Ks are not new and have been in production for quite some time; they are configured according to Cisco best practices.

The issue we ran into was that we were unable to communicate properly between the core switches and the firewalls on one of the /29 networks. After working with Fortinet Support for hours, I was told that best practice is to physically install an L2 switch between the firewalls and our core switches, apparently due to some requirement of the HA configuration. Even though this wasn't clear to me, we did try this method, and it didn't resolve the issue.

The physical ports on the FG are configured like this: the 802.3ad aggregate interface is in the root VDOM with no L3 configuration, and the underlying VLAN interfaces are in a unique VDOM. These VLAN interfaces are what use the /29 networks to route to the Nexus 9Ks. Is there something about this design that doesn't seem right? Any ideas what might be causing this problem? Running FortiOS 5.6.4.
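For reference, the layout I described looks roughly like this in the FortiOS CLI. This is only a sketch: the port members, VDOM name, VLAN ID, and addresses below are made up for illustration.

```
config global
    config system interface
        edit "core-lag"
            set vdom "root"                       # aggregate lives in root, no L3 config
            set type aggregate
            set member "port25" "port26"          # hypothetical member ports
            set lacp-mode active
        next
        edit "tenantA-v100"                       # hypothetical tenant VLAN interface
            set vdom "tenantA"                    # unique per-tenant VDOM
            set interface "core-lag"
            set vlanid 100
            set ip 10.0.100.2 255.255.255.248     # one /29 per tenant toward the 9Ks
            set allowaccess ping
        next
    end
end
```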

The only reason we're using a /29 rather than a /30 is that we need additional IP addresses to accommodate the HSRP VIP on the Nexus 9Ks. The 9Ks are our core switches, so they are doing layer 3. However, we are not running any dynamic routing protocols, so the caveats about dynamic routing peering over vPC do not apply in this situation.
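To spell out the math: each transit network needs the FG's VLAN interface address, one physical address per 9K, and the HSRP VIP, which is four hosts, more than a /30's two usable addresses. The 9K side would look something like this sketch (VRF name, VLAN number, group number, and addresses are all hypothetical):

```
feature hsrp

interface Vlan100
  vrf member tenantA
  ip address 10.0.100.3/29        ! physical address; .4/29 on the vPC peer
  hsrp version 2
  hsrp 100
    ip 10.0.100.1                 ! shared VIP the FortiGate uses as next hop
  no shutdown
```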

Cisco Nexus switches internally use non-standard Ethertypes, the same ones FortiOS uses on its HA links. This is documented in the Fortinet KB and in the HA chapter of the FortiOS Handbook. To avoid trouble you can change the Ethertypes in the HA setup ('config system ha'); three different types are used.
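For anyone searching later, the three knobs are below. If I remember the defaults right they are 8890, 8891, and 8893; the replacement values here are just arbitrary unused Ethertypes I picked, not required values:

```
config system ha
    set ha-eth-type 8895        # heartbeat in NAT/route mode (default 8890)
    set hc-eth-type 8896        # heartbeat in transparent mode (default 8891)
    set l2ep-eth-type 8897      # HA telnet/session sync traffic (default 8893)
end
```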

That is interesting. I hadn't heard that before. I'll look into it, but I can tell you that our heartbeat ports are directly connected to each other; the two FGs don't use any switching between them for HA. Do you think this could still apply?

I'd really like to hear from those out there who are running an HA cluster. Do you have to have an L2 device between the FGs and your core switching/routing? If so, can anyone explain why that is a requirement?

Dear friend, we are working with the same scenario you describe, except that we are not using FortiGate HA, and we were able to achieve what you want. Please contact me so we can share the information. We have dual Nexus 9Ks and FortiGate 1500Ds, using VDOMs, layer 3, and LACP to communicate with the Nexus core.

FWIW, if your vPC and peer-link are up, then LACP should work. You could disable LACP altogether and see what happens, but I would look at the LACP statistics from the NX-OS side of things and go forward from that point. If LACP packets are not being sent/received by the FGT or NX-OS devices, then troubleshoot why.
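A few NX-OS commands for that (the port-channel number is hypothetical, substitute your own):

```
show port-channel summary                                   ! P flag = member up in port-channel
show lacp counters interface port-channel 10                ! LACPDUs sent/received per member
show lacp neighbor interface port-channel 10                ! partner system-ID and port state
show vpc consistency-parameters interface port-channel 10   ! per-vPC consistency check
```

If the counters show LACPDUs going out but nothing coming back from the FGT, the problem is on the FortiGate side of the bundle; if both sides count but the bundle stays down, compare the vPC consistency parameters between the two 9Ks.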