Can we create only 4094 networks using OVS?

When a network is created, each tap interface on br-int is associated with a VLAN in OVS. This suggests there is a one-to-one mapping between VLAN IDs and GRE/VXLAN tunnel IDs.

In detail, when I create a network, a tap interface is created in br-int with a VLAN associated with it on the neutron node. So if I create 100 networks, then 100 tap interfaces will be created, each associated with its own VLAN ID. If this is how it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs would be exhausted?

Please explain what happens specifically on the neutron node, because this won't be an issue on a compute node.

Comments

If tenant_network_types = vxlan (or gre), then the tap interface created on br-int doesn't get a VLAN ID that uniquely identifies your private subnet. There is no one-to-one mapping between VLAN IDs and GRE/VXLAN tunnel IDs.

The VLAN tag is not unique across hosts. Since VLANs are limited to 4096 IDs, you can have only that many networks on each node, and you will hit other limitations before reaching 4096 networks. Also, a VLAN tag only needs to be unique per host; it is not sent through the tunnels. I have a detailed answer, but it is awaiting approval.

1. The 4094-VLAN limit on tenant isolation is addressed by the VXLAN 24-bit VNI construct, which enables roughly 16 million isolated tenant networks. VMs must be on the same VNI to communicate with each other, which delivers the isolation demanded by a multi-tenant architecture. The 24-bit segment ID in each VXLAN frame differentiates individual logical networks, so millions of isolated Layer 2 VXLAN networks can co-exist on a common Layer 3 infrastructure.
2. A multi-tenant cloud infrastructure can now deliver "elastic" capacity by rapidly provisioning additional application VMs in a different L3 network, where they can communicate as if they were on a common L2 subnet.
3. Overlay networking overcomes the limits of STP and creates very large network domains where VMs can be moved anywhere. This also lets IT teams reduce over-provisioning to a much lower percentage, which can save a lot of money. For example, by deploying one extra server per 50 servers, over-provisioning is reduced to two percent (from an estimated ten percent before). As a result, data centers can save as much as eight percent of their entire IT infrastructure budget with VXLAN overlay networking.
4. Overlay networking can make hybrid cloud deployments simpler to deploy because it leverages the ubiquity of IP for data flows over the WAN.
5. VMs are uniquely identified by the combination of their MAC address and VNI. It is therefore acceptable for VMs to have duplicate MAC addresses, as long as they are in different tenant networks. This simplifies administration of multi-tenant customer networks for the cloud service provider.
6. Finally, VXLAN is an evolutionary solution, already supported by switches and driven by software changes rather than "forklift" hardware upgrades, which eases and hastens adoption of the technology.
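Point 5 above can be illustrated in a few lines: if forwarding entries are keyed on the (VNI, MAC) pair, the same MAC address can appear in different tenant networks without conflict. This is a minimal sketch, not any real VXLAN implementation; the VTEP addresses and MAC values are invented:

```python
# Toy forwarding table keyed on (VNI, MAC): duplicate MAC addresses
# are fine as long as they live in different VNIs (tenant networks).
fdb = {}
fdb[(10001, "fa:16:3e:00:00:01")] = "vtep-10.0.0.1"
fdb[(20002, "fa:16:3e:00:00:01")] = "vtep-10.0.0.2"  # same MAC, other tenant

print(len(fdb))  # 2 distinct entries despite identical MAC addresses
```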

Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for preconfigured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.

In the case of GRE (or VXLAN) tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and in the output of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving the physical interface; they are only used by Open vSwitch to separate traffic locally.

Comments

I am not sure if my question is answered. Let me make the question quite simple.

When I create a network, a tap interface is created in br-int with a VLAN associated with it on the neutron node. So if I create 100 networks, then 100 tap interfaces will be created, each associated with its own VLAN ID. If this is how it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs would be exhausted?

Priya, I think I got your point.
It would imply that on ONE compute host, the VMs running there can't be connected to more than ~4k networks. But in total (across all compute hosts together) you can have more: the br-int tags are "local" to each compute host (though it would be better if a more knowledgeable person confirms this).

T u l, thanks for your answer. I found it too specific to the compute node, but I need an answer specific to the neutron node. With a single controller (single database), multiple neutron nodes, and multiple compute nodes, the associated VLANs could easily be exhausted, since on each network creation the tap interfaces attached to br-int get associated with a VLAN. Hence we could have only ~4095 networks in a particular region. If there are 5000 tenants in this region, the networks will be exhausted and ~905 tenants won't have networks created for them.

"Across compute nodes we use the GRE tunnel ID. As discussed previously, each tenant network is provisioned both a GRE tunnel ID and a locally significant VLAN tag. That means that incoming traffic with a GRE tunnel ID is converted to the correct local VLAN tag as can be seen in table 2. The message is then forwarded to br-int already VLAN tagged and the appropriate check can be made."
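The translation described in that quote can be modelled as a simple lookup: incoming traffic is matched on its tunnel ID and rewritten to the locally allocated VLAN tag before being forwarded to br-int. Below is a hypothetical Python sketch; the dict stands in for br-tun's table 2 flow entries, and all names and numbers are invented for illustration:

```python
# Hypothetical per-host mapping, standing in for br-tun table 2:
# tunnel ID (globally unique) -> local VLAN tag (host-local).
tunnel_to_local_vlan = {10001: 1, 10002: 2}

def ingress(tun_id, frame):
    """Model of the br-tun ingress rewrite: tag the incoming frame with
    the local VLAN this host allocated for the given tunnel ID."""
    vlan = tunnel_to_local_vlan[tun_id]
    return {"vlan": vlan, "payload": frame}

tagged = ingress(10001, b"ethernet-frame")
print(tagged["vlan"])  # 1
```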

I think you are aware that there is a neutron node in an OpenStack setup. If one is present in your setup, I would like you to execute the command "ovs-ofctl dump-flows br-tun". You will see flows with "mod_vlan_vid:<value>". What does this mean, and what is the maximum value of mod_vlan_vid?

Comments

I believe the original question is not how a VNI or GRE ID greater than 4094 is supported [explained above by the tags being locally scoped], but how one node can support more than 4094 networks when each network maps to a VLAN ID. "Local VLAN" means a VNI of 10001 can map to VID 1 on one node and VID 101 on another node.

I have not worked with two or more network (neutron) nodes. Based on OpenStack's design, a tunnel is created on each network creation. If I have 2 neutron nodes and create a network, won't a tunnel be created from both network nodes to the associated compute nodes with the same tun_id?

You cannot have more than 4096 VLANs on a single node, as explained by @vthapar.

Neutron uses OVS VLANs and tunnels to provide tenant isolation.

The VLAN ID is a Layer 2 construct, 12 bits wide, hence 4096 values, and it is not unique across hosts!

The tunnel ID, on the other hand, is Layer 3 (GRE, VXLAN) and 24 bits wide, hence 2^24 values (a large number), and it is unique across hosts, so you can have that many tenant networks. OVS then sends L2 frames over L3.
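The arithmetic behind those two field widths is worth spelling out, since it is where both the 4094 figure in the title and the "large number" above come from. A minimal Python sketch using only the standard 802.1Q and VXLAN field sizes (no OpenStack involved):

```python
# 802.1Q VLAN ID field is 12 bits; IDs 0x000 and 0xFFF are reserved,
# leaving 4094 usable tags on any single bridge/host.
usable_vlans = 2 ** 12 - 2

# VXLAN VNI field is 24 bits, giving ~16 million distinct segment IDs.
usable_vnis = 2 ** 24

print(usable_vlans)  # 4094
print(usable_vnis)   # 16777216
```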

Each tenant network on a compute node will have a unique VLAN tag; this will be different from what you see on the network node for the same tenant. So if you create multiple network nodes, the VLAN tag for a given tenant on one network node might differ from the tag on another network node, or it might even be the same, since the tag is significant only to that node.
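That per-node independence can be illustrated with a toy allocator. This is entirely hypothetical (Neutron's OVS agent does the real allocation internally, not through any public API like this); it just shows that each host hands out local tags on its own, so the same VNI can land on different tags on different nodes:

```python
class HostLocalVlanAllocator:
    """Toy model: each host hands out local VLAN tags 1..4094
    on a first-come basis, independently of every other host."""
    def __init__(self):
        self.next_tag = 1
        self.vni_to_tag = {}

    def tag_for(self, vni):
        # Allocate a fresh local tag the first time this host sees the VNI.
        if vni not in self.vni_to_tag:
            if self.next_tag > 4094:
                raise RuntimeError("local VLAN tags exhausted on this host")
            self.vni_to_tag[vni] = self.next_tag
            self.next_tag += 1
        return self.vni_to_tag[vni]

node_a, node_b = HostLocalVlanAllocator(), HostLocalVlanAllocator()
node_b.tag_for(99999)          # node B happened to see another network first
print(node_a.tag_for(10001))   # 1
print(node_b.tag_for(10001))   # 2 -- same VNI, different local tag
```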

From what I can see, you are trying to understand why internal tags are set in br-int for a particular VLAN network?

If I am right, then the answer is this: whenever a VLAN network is created, say with segmentation ID 1002, an internal tag, say 2, is allocated for it. The design is such that traffic arriving from the data network carries the segmentation ID as its tag; for example, traffic destined for a VM on that network will arrive tagged 1002.

When it reaches br-int, flow rules remove the VLAN tag 1002 and send the untagged traffic to the port that is internally tagged as 2.
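That retagging step can be modelled as a simple rewrite: strip the provider segmentation ID and deliver the frame under the internal tag. A sketch only; the numbers 1002 and 2 come from the text above, while the mapping dict and function are invented for illustration:

```python
# Host-local mapping from provider segmentation ID to br-int internal tag.
seg_to_internal = {1002: 2}

def br_int_ingress(frame):
    """Model of the br-int flow rule: strip the provider VLAN tag
    and deliver the frame to the port carrying the internal tag."""
    internal = seg_to_internal[frame.pop("vlan")]  # strip tag 1002
    frame["internal_tag"] = internal               # deliver on internal tag 2
    return frame

out = br_int_ingress({"vlan": 1002, "payload": b"data"})
print(out["internal_tag"])  # 2
```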