Assume that both route reflectors use cluster ID 1.1.1.1, which is R1’s router ID.
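As a rough sketch of that setup (Cisco IOS syntax assumed; AS number 100 is a placeholder, not from the original), the same cluster ID would be configured on both reflectors:

```
! On both R1 and R2 (the route reflectors) -- AS 100 is a placeholder
router bgp 100
 bgp cluster-id 1.1.1.1
```

If `bgp cluster-id` is not set explicitly, each RR defaults to its own router ID, which would give the two RRs different cluster IDs.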

R1 and R2 receive routes from R4.

R1 and R2 receive routes from R3.

As route reflectors, both R1 and R2 append 1.1.1.1 to the CLUSTER_LIST attribute of the routes they reflect to each other. However, since they use the same cluster ID, each one discards the routes reflected by the other.

That’s why, if the RRs use the same cluster ID, every RR client has to peer with both RRs.
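A minimal sketch of that dual peering (IOS syntax assumed; the AS number and loopback addresses are placeholders I chose for illustration):

```
! On R4 (RR client) -- sessions to both reflectors
router bgp 100
 neighbor 1.1.1.1 remote-as 100   ! IBGP session to R1
 neighbor 2.2.2.2 remote-as 100   ! IBGP session to R2

! On R1 and R2 -- mark R4 as a client so its routes get reflected
router bgp 100
 neighbor 4.4.4.4 route-reflector-client
```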

In this topology, R1 learns the routes behind R4 only from its direct R1-R4 IBGP session (R1 rejects the copies reflected by R2). Of course, the IGP path goes through R1-R2-R4, since there is no physical link between R1 and R4.

If the physical link between R2 and R4 goes down, both IBGP sessions, R1-R4 and R2-R4, go down as well. Thus, the networks behind R4 cannot be learned.

Since the routes cannot be learned from R2 (same cluster ID), if the physical link stays up but the IBGP session between R1 and R4 goes down, the networks behind R4 will not be reachable either. However, if the BGP neighborships are built between loopbacks and the physical topology is redundant, an IBGP session is very unlikely to go down.
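A sketch of loopback-based IBGP peering (IOS syntax; the addresses and AS number are placeholders), which lets the session survive any single link failure as long as the IGP can still reach the remote loopback over another path:

```
! On R1 -- peer with R4's loopback and source the session from Loopback0
router bgp 100
 neighbor 4.4.4.4 remote-as 100
 neighbor 4.4.4.4 update-source Loopback0
```

The mirror-image configuration would go on R4, pointing at R1’s loopback.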

Note: Having redundant physical links in a network design is a common best practice. That’s why the topology below is a more realistic one.

What if we add a physical link between R1-R4 and R2-R3 ?

Figure-2: Route reflectors use the same cluster ID; physical cross-connections are added between the RRs and the RR clients

In Figure-2, physical cross-connections are added between R1-R4 and R2-R3.

Still, we are using the same BGP cluster ID on the route reflectors.

Thus, when R2 reflects R4’s routes to R1, R1 will discard those routes. Instead, R1 will learn R4’s routes through its direct IBGP peering with R4. In this case, the IGP path will change to the direct R1-R4 link rather than R1-R2-R4.

If the R1-R4 physical link fails, the IBGP session will not go down as long as the IGP converges to the R1-R2-R4 path faster than the BGP hold timer expires (with default timers, it does).
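For context on those default timers (IOS values; the AS number is a placeholder): BGP uses a 60-second keepalive and a 180-second hold time, while a reasonably tuned IGP typically reconverges in well under a second, so the session easily survives the reroute.

```
router bgp 100
 timers bgp 60 180   ! keepalive 60 s, hold time 180 s (the IOS defaults)
```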

Thus, having the same cluster ID on the RRs saves a lot of memory and CPU on the route reflectors, while link failures still do not cause IBGP session drops as long as there is enough redundancy in the network.

If we used different BGP cluster IDs on R1 and R2, R1 would accept the reflected routes from R2 in addition to the routes from its direct peering with R4.
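A hypothetical sketch of that alternative (IOS syntax; AS number and IDs are placeholders), where each RR keeps a distinct cluster ID:

```
! On R1
router bgp 100
 bgp cluster-id 1.1.1.1

! On R2 -- a different cluster ID, so R1 accepts R2's reflected routes
router bgp 100
 bgp cluster-id 2.2.2.2
```

The trade-off described above applies: more redundancy in the BGP control plane, at the cost of each RR holding the extra reflected copies.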

— 24 Comments —

We have a MAN consisting of eight routers running iBGP and using two route reflectors within the same cluster. This MAN is connected to two other autonomous systems. Although there are many improvements possible in the rather bad design (I did not design it), I have not yet looked into the decision of whether to use a single cluster with two route reflectors or two clusters with one RR each. There is physical redundancy in place, however, so maybe there is no real need for two clusters.

@Haroon Using the same cluster ID is not a bad design; on the contrary, it is the better design, since the RRs do not need extra memory and CPU to handle the prefixes that would otherwise come from the other RR.
On the other hand, if it is an IP-only network, the IBGP topology should follow the physical topology of the network. Otherwise, if you are lucky, you only get suboptimal routing; in the worst case, it creates a persistent routing loop.
In our topology it is not a problem. Do you see why?

@Orhan, why did you say “IP-only network” in regard to loops and suboptimal routing? Wouldn’t that be the case for any destination-based forwarding network?
BTW, it’d be interesting to see a post highlighting bad RR designs and the specific errors they may lead to.
Cheers

Nice post!! You cannot have a persistent loop because the IBGP topology follows the physical topology.
With an MPLS RR design you don’t have this problem: packets toward the BGP next hops always carry an LDP-generated label for the BGP next hop.

“If routes are received with the same cluster ID, the RR discards them” — so why do we need an IBGP session between R1 and R2 if they each discard the routes learned from the other, in this specific topology with only 4 routers?

You need an IBGP session between R1 and R2 so that they can exchange their clients’ BGP routes. You can use either the same or different cluster IDs; my suggestion is to use the same cluster ID for route reflectors in the same tier.

@Niko, thanks for the comment. If the physical connections between an RR and its RR clients fail, how would you send traffic between the RR clients? That’s why you need it — unless you redistribute BGP into the IGP, which you don’t do except for specific applications.

Thanks for your response. I hope you don’t mind if I discuss this a bit more. In Figure-2, enable the IBGP session between RR1 and RR2. If R1 loses its connections with R3 and R4, R1 will not accept any BGP routes from R2, since they have the same cluster ID.

Having a redundant RR is good, but here it is not giving us failover. As you mentioned, if the link between R2-R4 is up but the IBGP session between R1-R4 is not forming, then R1 will not learn the routes from R2 because of loop prevention.

Nice post. But isn’t the point of having multiple RRs to have redundancy? What good does it do me if the iBGP session between R1-R4 fails and the networks behind R4 are still unreachable because R1 will discard the routes from R2? I might as well just have one RR.

Orhan,
How would you overcome a double failure between the RRs and the RR clients if you use the same cluster ID? That is, if R4 loses the IBGP session to R1 and R3 loses the IBGP session to R2, how will R3 get the routes that R4 advertises?