This tutorial demonstrates how OVN works in an OpenStack “DevStack”
environment. It was tested with the “master” branches of DevStack and
Open vSwitch near the beginning of May 2017. Anyone using an earlier
version is likely to encounter some differences. In particular, we
noticed some shortcomings in OVN utilities while writing the tutorial
and pushed out some improvements, so it’s best to use recent Open
vSwitch at least from that point of view.

The goal of this tutorial is to demonstrate OVN in an end-to-end way,
that is, to show how it works from the cloud management system at the
top (in this case, OpenStack and specifically its Neutron networking
subsystem), through the OVN northbound and southbound databases, to
the bottom at the OVN local controller and Open vSwitch data plane.
We hope that this demonstration makes it easier for users and
potential users to understand how OVN works and how to debug and
troubleshoot it.

In addition to new material, this tutorial incorporates content from
testing.rst in OpenStack networking-ovn, by Russell Bryant and
others. Without that example, this tutorial could not have been
written.

We provide enough details in the tutorial that you should be able to
fully follow along, by creating a DevStack VM and cloning DevStack and
so on. If you want to do this, start out from Setting Up DevStack
below.

This section explains how to install DevStack, a kind of OpenStack
packaging for developers, in a way that allows you to follow along
with the tutorial in full.

Unless you have a spare computer lying around, it’s easiest to install
DevStack in a virtual machine. This tutorial was built using a VM
implemented by KVM and managed by virt-manager. I recommend
configuring the VM for the x86-64 architecture, 4 GB RAM, 2 VCPUs, and
a 20 GB virtual disk.

Note

If you happen to run your Linux-based host with 32-bit userspace,
then you will have some special issues, even if you use a 64-bit
kernel:

You may find that you can get 32-bit DevStack VMs to work to some
extent, but I personally got tired of finding workarounds. I
recommend running your VMs in 64-bit mode. To get this to work,
I had to go to the CPUs tab for the VM configuration in
virt-manager and change the CPU model from the one originally
listed to “Hypervisor Default” (it is curious that this is not
the default!).

On a host with 32-bit userspace, KVM supports VMs with at most
2047 MB RAM. This is adequate, barely, to start DevStack, but it
is not enough to run multiple (nested) VMs. To prevent
out-of-memory failures, set up extra swap space in the guest.
For example, to add 2 GB swap:
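
One common way to do that (a sketch; adjust the size and file name to
taste) is:

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile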

and then add a line like this to /etc/fstab to add the new
swap automatically upon reboot:

/swapfile swap swap defaults 0 0

Here are step-by-step instructions to get started:

Install a VM.

I tested these instructions with CentOS 7.3. I downloaded the “minimal
install” ISO and booted it. The install is straightforward. Be
sure to enable networking, and set a host name, such as
“ovn-devstack-1”. Add a regular (non-root) user, and check the box
“Make this user administrator”. Also, set your time zone.

You can SSH into the DevStack VM, instead of running from a
console. I recommend it because it’s easier to cut and paste
commands into a terminal than a VM console. You might also
consider using a very wide terminal, perhaps 160 columns, to keep
tables from wrapping.

To improve convenience further, you can make it easier to log in
with the following steps, which are optional:
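
On your host, add an entry like the following to ~/.ssh/config (a
sketch; the Host alias just needs to match whatever name you want to
type):

Host ovn-devstack-1
    Hostname VMIP
    User VMUSER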

where VMIP is the VM’s IP address and VMUSER is your username
inside the VM. (You can omit the User line if your
username is the same in the host and the VM.) After you do
this, you can SSH to the VM by name, e.g. ssh ovn-devstack-1, and
if command-line completion is set up in your host shell, you can
shorten that to something like ssh ovn followed by hitting the Tab
key.

If you have SSH public key authentication set up, with an SSH
agent, run on your host:

$ ssh-copy-id ovn-devstack-1

and type your password once. Afterward, you can log in without
typing your password again.

(If you don’t already use SSH public key authentication and an
agent, consider looking into it; it will save you time in the
long run.)

Optionally, inside the VM, append the following to your
~/.bash_profile:

. $HOME/devstack/openrc admin

It will save you running it by hand each time you log in. But
it also prints garbage to the console, which can screw up
services like ssh-copy-id, so be careful.

Boot into the installed system and log in as the regular user, then
install Git:

$ sudo yum install git

Note

If you installed a 32-bit i386 guest (against the advice above),
install a non-PAE kernel and reboot into it at this point:

$ sudo yum install kernel-core kernel-devel
$ sudo reboot

Be sure to select the non-PAE kernel from the list at boot.
Without this step, DevStack will fail to install properly later.
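
With the prerequisites in place, clone DevStack and the networking-ovn
Neutron driver, set up a local.conf that enables the OVN driver, and
run DevStack. The repository URLs and the sample local.conf path below
are assumptions; check the networking-ovn documentation for the exact
procedure for your release:

$ git clone https://opendev.org/openstack/devstack.git
$ git clone https://opendev.org/openstack/networking-ovn.git
$ cd devstack
$ cp ../networking-ovn/devstack/local.conf.sample local.conf
$ ./stack.sh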

If there’s some kind of failure, you can restart by running
./stack.sh again. It won’t restart exactly where it left off,
but steps up to the one where it failed will skip the download
steps. (Sometimes blindly restarting after a failure will allow it
to succeed.) If you reboot your VM, you need to rerun this
command. (If you run into trouble with stack.sh after
rebooting your VM, try running ./unstack.sh.)

At this point you can navigate a web browser on your host to the
Horizon dashboard URL. Many OpenStack operations can be initiated
from this UI. Feel free to explore, but this tutorial focuses on
the alternative command-line interfaces because they are easier to
explain and to cut and paste.

As of this writing, you need to run the following to fix a problem
with using VM consoles from the OpenStack web instance:

The firewall in the VM by default allows SSH access but not HTTP.
You will probably want HTTP access to use the OpenStack web
interface. The following command enables that. (It also enables
every other kind of network access, so if you’re concerned about
security then you might want to find a more targeted approach.)

$ sudo iptables -F

(You need to re-run this if you reboot the VM.)

To use OpenStack command line utilities in the tutorial, run:

$ . ~/devstack/openrc admin

This needs to be re-run each time you log in (but see the following
section).

Before we really jump in, let’s set up a couple of things in DevStack.
This is the first real test that DevStack is working, so if you get
errors from any of these commands, it’s a sign that stack.sh
didn’t finish properly, or perhaps that you didn’t run the openrc admin command at the end of the previous instructions.

If you stop and restart DevStack via unstack.sh followed by
stack.sh, you have to rerun these steps.

For SSH access to the VMs we’re going to create, we’ll need a SSH
keypair. Later on, we’ll get OpenStack to install this keypair
into VMs. Create one with:
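
For example (the keypair name demo and the file name are arbitrary
choices that the rest of this tutorial assumes):

$ openstack keypair create demo > ~/id_rsa_demo
$ chmod 600 ~/id_rsa_demo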

By default, DevStack security groups drop incoming traffic, but to
test networking in a reasonable way we need to enable it. You only
need to actually edit one particular security group, but DevStack
creates multiple and it’s somewhat difficult to figure out which
one is important because all of them are named “default”. So, the
following adds rules to allow SSH and ICMP traffic into every
security group:
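
A loop like the following (a sketch) does the job:

$ for group in $(openstack security group list -f value -c ID); do \
    openstack security group rule create --ingress --protocol tcp --dst-port 22 $group; \
    openstack security group rule create --ingress --protocol icmp $group; \
  done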

Later on, we’re going to create some VMs and we’ll need an
operating system image to install. DevStack comes with a very
simple image built-in, called “cirros”, which works fine. We need
to get the UUID for this image. Our later commands assume shell
variable IMAGE_ID holds this UUID. You can set this by hand,
e.g.:
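
One way (a sketch) is to pull it out of the image list:

$ IMAGE_ID=$(openstack image list -f value -c ID -c Name | awk '/cirros/ {print $1}')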

OpenStack, OVN, and Open vSwitch all really like UUIDs. These are
great for uniqueness, but 36-character strings are terrible for
readability. Statistically, just the first few characters are enough
for uniqueness in small environments, so let’s define a helper to make
things more readable:
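
Something like the following works (a sketch; any filter that shortens
UUIDs to a unique prefix will do):

$ abbrev() { sed -E 's/([0-9a-f]{6})[0-9a-f]{2}(-[0-9a-f]{4}){3}-[0-9a-f]{12}/\1/g'; }
$ alias openstack='openstack -f yaml'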

The openstack alias above also switches the output to YAML format
(-f yaml), because abbreviating UUIDs screws up the default
table-based formatting and because YAML output doesn’t wrap columns
across lines and is therefore easier to cut and paste.

At each step, we will take a look at how the features in question work
from OpenStack’s Neutron networking layer at the top to the data plane
layer at the bottom. From the highest to lowest level, these layers
and the software components that connect them are:

OpenStack Neutron, which as the top level in the system is the
authoritative source of the virtual network configuration.

We will use OpenStack’s openstack utility to observe and modify
Neutron and other OpenStack configuration.

networking-ovn, the Neutron driver that interfaces with OVN and
translates the internal Neutron representation of the virtual
network into OVN’s representation and pushes that representation
down to the OVN northbound database.

In this tutorial it’s rarely worth distinguishing Neutron from
networking-ovn, so we usually don’t break out this layer separately.

The OVN Northbound database, aka NB DB. This is an instance of
OVSDB, a simple general-purpose database that is used for multiple
purposes in Open vSwitch and OVN. The NB DB’s schema is in terms of
networking concepts such as switches and routers. The NB DB serves
the purpose that in other systems might be filled by some kind of
API; for example, in place of calling an API to create or delete a
logical switch, networking-ovn performs these operations by
inserting or deleting a row in the NB DB’s Logical_Switch table.

We will use OVN’s ovn-nbctl utility to observe the NB DB. (We
won’t directly modify data at this layer or below. Because
configuration trickles down from Neutron through the stack, the
right way to make changes is to use the openstack utility or
another OpenStack interface and then wait for them to percolate
through to lower layers.)

The ovn-northd daemon, a program that runs centrally and translates
the NB DB’s network representation into the lower-level
representation used by the OVN Southbound database in the next
layer. The details of this daemon are usually not of interest,
although without it OVN will not work, so this tutorial does not
often mention it.

The OVN Southbound database, aka SB DB, which is also an OVSDB
database. Its schema is very different from the NB DB. Instead of
familiar networking concepts, the SB DB defines the network in terms
of collections of match-action rules called “logical flows”, which
while similar in concept to OpenFlow flows use logical concepts, such
as virtual machine instances, in place of physical concepts like
physical Ethernet ports.

We will use OVN’s ovn-sbctl utility to observe the SB DB.

The ovn-controller daemon. A copy of ovn-controller runs on each
hypervisor. It reads logical flows from the SB DB, translates them
into OpenFlow flows, and sends them to Open vSwitch’s ovs-vswitchd
daemon. Like ovn-northd, the details of what this daemon does are
usually not of interest, even though it’s important to the operation
of the system.

ovs-vswitchd. This program runs on each hypervisor. It is the core
of Open vSwitch, which processes packets according to the OpenFlow
flows set up by ovn-controller.

Open vSwitch datapath. This is essentially a cache designed to
accelerate packet processing. Open vSwitch includes a few different
datapaths but OVN installations typically use one based on the Open
vSwitch Linux kernel module.

Switching is the basis of networking in the real world and in virtual
networking as well. OpenStack calls its concept of a virtual switch a
“network”, and OVN calls its corresponding concept a “logical switch”.

In this step, we’ll create an OpenStack network n1, then create
VMs a and b and attach them to n1.

OpenStack needs to know the subnets that a network serves. We inform
it by creating subnet objects. To keep it simple, let’s give our
network a single subnet for the 10.1.1.0/24 network. We have to give
it a name, in this case n1subnet:
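
The commands look something like this (a sketch), ending with a look at
what is now in the OVN northbound database:

$ openstack network create n1
$ openstack subnet create --subnet-range 10.1.1.0/24 --network n1 n1subnet
$ ovn-nbctl show | abbrev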

This output shows that OVN has three logical switches, each of which
corresponds to a Neutron network, and a logical router that
corresponds to the Neutron router that DevStack creates by default.
The logical switch that corresponds to our new network n1 has no
ports yet, because we haven’t added any. The public and
private networks that DevStack creates by default have router
ports that connect to the logical router.

Using ovn-northd, OVN translates the NB DB’s high-level switch and
router concepts into lower-level concepts of “logical datapaths” and
logical flows. There’s one logical datapath for each logical switch
or router:
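
You can list them with something like:

$ ovn-sbctl list datapath_binding | abbrev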

This output lists the NB DB UUIDs in external_ids:logical-switch and
Neutron UUIDs in external_ids:uuid. We can dive in deeper by viewing
the OVN logical flows that implement a logical switch. Our new
logical switch is a simple and almost pathological example given that
it doesn’t yet have any ports attached to it. We’ll look at the
details a bit later:
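
For example:

$ ovn-sbctl lflow-list n1 | abbrev | less -S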

We have one hypervisor (aka “compute node”, in OpenStack parlance),
which is the one where we’re running all these commands. On this
hypervisor, ovn-controller is translating OVN logical flows into
OpenFlow flows (“physical flows”). It makes sense to go deeper, to
see the OpenFlow flows that get generated from this datapath. By
adding --ovs to the ovn-sbctl command, we can see OpenFlow
flows listed just below their logical flows. We also need to use
sudo because connecting to Open vSwitch is privileged. Go ahead
and try it:
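
The command is the same as before, with sudo and --ovs added:

$ sudo ovn-sbctl --ovs lflow-list n1 | abbrev | less -S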

You were probably disappointed: the output didn’t change, and no
OpenFlow flows were printed. That’s because no OpenFlow flows are
installed for this logical datapath, which in turn is because there
are no VIFs for this logical datapath on the local hypervisor. For a
better example, you can try ovn-sbctl --ovs on one of the other
logical datapaths.

A switch without any ports is not very interesting. Let’s create a
couple of VMs and attach them to the switch. Run the following
commands, which create VMs named a and b and attach them to
our network n1 with IP addresses 10.1.1.5 and 10.1.1.6,
respectively. It is not actually necessary to assign IP addresses
manually, since OpenStack is perfectly happy to assign them
itself from the subnet’s IP address range, but predictable addresses
are useful for our discussion:
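
The commands look something like this (assuming the demo keypair from
earlier and DevStack’s tiny m1.nano flavor; adjust to your setup):

$ openstack server create --nic net-id=n1,v4-fixed-ip=10.1.1.5 \
    --flavor m1.nano --image $IMAGE_ID --key-name demo a
$ openstack server create --nic net-id=n1,v4-fixed-ip=10.1.1.6 \
    --flavor m1.nano --image $IMAGE_ID --key-name demo b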

These commands return before the VMs are really finished being built.
You can run openstack server list a few times until each of them
is shown in the state ACTIVE, which means that they’re not just built
but already running on the local hypervisor.

These operations had the side effect of creating separate “port”
objects, but without giving those ports any easy-to-read names. It’ll
be easier to deal with them later if we can refer to them by name, so
let’s name a’s port ap and b’s port bp:
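
One way to do that (a sketch):

$ openstack port set --name ap $(openstack port list --server a -f value -c ID)
$ openstack port set --name bp $(openstack port list --server b -f value -c ID)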

We can get some more details on each of these by looking at their NB
DB records in the Logical_Switch_Port table. Each port has addressing
information, port security enabled, and a pointer to DHCP
configuration (which we’ll look at much later in DHCP):
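
For example:

$ ovn-nbctl list logical_switch_port ap bp | abbrev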

Now that the logical switch is less pathological, it’s worth taking
another look at the SB DB logical flow table. Try a command like
this:

$ ovn-sbctl lflow-list n1 | abbrev | less -S

and then glance through the flows. Packets that egress a VM into the
logical switch travel through the flow table’s ingress pipeline
starting from table 0. At each table, the switch finds the
highest-priority logical flow that matches and executes its actions,
or if there’s no matching flow then the packet is dropped. The
ovn-sb(5) manpage gives all the details, but with a little
thought it’s possible to guess a lot without reading the manpage. For
example, consider the flows in ingress pipeline table 0, which are the
first flows encountered by a packet traversing the switch:

The first two flows, with priority 100, immediately drop two kinds of
invalid packets: those with a multicast or broadcast Ethernet source
address (since multicast is only for packet destinations) and those
with a VLAN tag (because OVN doesn’t yet support VLAN tags inside
logical networks). The next two flows implement L2 port security:
they advance to the next table for packets with the correct Ethernet
source addresses for their ingress ports. A packet that does not
match any flow is implicitly dropped, so there’s no need for flows to
deal with mismatches.

The logical flow table includes many other flows, some of which we
will look at later. For now, it’s most worth looking at ingress table
13:

The first flow in table 13 checks whether the packet is an Ethernet
multicast or broadcast and, if so, outputs it to a special port that
egresses to every logical port (other than the ingress port).
Otherwise the packet is output to the port corresponding to its
Ethernet destination address. Packets addressed to any other Ethernet
destination are implicitly dropped.

(It’s common for an OVN logical switch to know all the MAC addresses
supported by its logical ports, like this one. That’s why there’s no
logic here for MAC learning or flooding packets to unknown MAC
addresses. OVN does support unknown MAC handling but that’s not in
play in our example.)

Note

If you’re interested in the details for the multicast group, you can
run a command like the following and then look at the row for the
correct datapath:

$ ovn-sbctl find multicast_group name=_MC_flood | abbrev

Now if you want to look at the OpenFlow flows, you can actually see
them. For example, here’s the beginning of the output that lists the
first four logical flows, which we already looked at above, and their
corresponding OpenFlow flows. If you want to know more about the
syntax, the ovs-fields(7) manpage explains OpenFlow matches and
ovs-ofctl(8) explains OpenFlow actions:

Let’s go a level deeper. So far, everything we’ve done has been
fairly general. We can also look at something more specific: the path
that a particular packet would take through OVN, logically, and Open
vSwitch, physically.

Let’s use OVN’s ovn-trace utility to see what happens to packets from
a logical point of view. The ovn-trace(8) manpage has a lot of
detail on how to do that, but let’s just start by building up from a
simple example. You can start with a command that just specifies the
logical datapath, an input port, and nothing else; unspecified fields
default to all-zeros. This doesn’t do much:
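
For example:

$ ovn-trace n1 'inport == "ap"'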

We see that the packet was dropped in logical table 0,
“ls_in_port_sec_l2”, the L2 port security stage (as we discussed
earlier). That’s because we didn’t use the right Ethernet source
address for a. Let’s see what happens if we do:

Now the packet passes through L2 port security and skips through
several other tables until it gets dropped in the L2 lookup stage
(because the destination is unknown). Let’s add the Ethernet
destination for b:
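
Using fa:16:3e:xx:xx:xx as a stand-in for a’s real MAC address and
fa:16:3e:yy:yy:yy for b’s (you can read the real ones out of the
ovn-nbctl output above), the command ends up looking something like:

$ ovn-trace n1 'inport == "ap" && eth.src == fa:16:3e:xx:xx:xx && eth.dst == fa:16:3e:yy:yy:yy'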

ovn-trace showed us how a hypothetical packet would travel through the
system in a logical fashion, that is, without regard to how VMs are
distributed across the physical network. This is a convenient
representation for understanding how OVN is supposed to work
abstractly, but sometimes we might want to know more about how it
actually works in the real systems where it is running. For this, we
can use the tracing tool that Open vSwitch provides, which traces
a hypothetical packet through the OpenFlow tables.

We can actually get two levels of detail. Let’s start with the
version that’s easier to interpret, by physically tracing a packet
that looks like the one we logically traced before. One obstacle is
that we need to know the OpenFlow port number of the input port. One
way to do that is to look for a port whose “attached-mac” is the one
we expect and print its ofport number:
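
Again using fa:16:3e:xx:xx:xx as a stand-in for a’s MAC address, the
lookup looks something like:

$ sudo ovs-vsctl --columns=ofport find interface external-ids:attached-mac='"fa:16:3e:xx:xx:xx"'

Supposing that this prints, say, ofport: 3, the OpenFlow-level trace of
the same hypothetical packet is then:

$ sudo ovs-appctl ofproto/trace br-int in_port=3,dl_src=fa:16:3e:xx:xx:xx,dl_dst=fa:16:3e:yy:yy:yy | less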

In the previous sections we traced a hypothetical L2 packet, one
that’s honestly not very realistic: we didn’t even supply an Ethernet
type, so it defaulted to zero, which isn’t anything one would see on a
real network. We could refine our packet so that it becomes a more
realistic TCP or UDP or ICMP, etc. packet, but let’s try a different
approach: working from a real packet.

Pull up a console for VM a and start ping 10.1.1.6, then leave
it running for the rest of our experiment.

Now go back to your DevStack session and run:

$ sudo watch ovs-dpctl dump-flows

We’re working with a new program here: ovs-dpctl is an interface to Open
vSwitch datapaths, in this case to the Linux kernel datapath. Its
dump-flows command displays the contents of the in-kernel flow
cache, and by running it under the watch program we see a new
snapshot of the flow table every 2 seconds.

Look through the output for a flow that begins with recirc_id(0)
and matches the Ethernet source address for a. There is one flow
per line, but the lines are very long, so it’s easier to read if you
make the window very wide. This flow’s packet counter should be
increasing at a rate of 1 packet per second. It looks something like
this:

Be careful cutting and pasting ovs-dpctl dump-flows output into
ofproto/trace because the latter has terrible error reporting.
If you add an extra line break, etc., it will likely give you a
useless error message.

There’s no output action in this flow, but there are ct and
recirc actions (which you can see in the Datapath actions at
the end). The ct action tells the kernel to pass the packet
through the kernel connection tracking for firewalling purposes and
the recirc says to go back to the flow cache for another pass
based on the firewall results. The 0xb value inside the
recirc gives us a hint to look at the kernel flows for a cached
flow with recirc_id(0xb). Indeed, there is one:

In other words, the flow passes through the connection tracker a
second time. The first time was for a’s outgoing firewall; this
second time is for b’s incoming firewall. Again, we continue
tracing with recirc_id(0xc):

It took multiple hops, but we finally came to the end of the line
where the packet was output to b after passing through both
firewalls. The port number here is a datapath port number, which is
usually different from an OpenFlow port number. To check that it is
b’s port, we first list the datapath ports to get the name
corresponding to the port number:
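
For example:

$ sudo ovs-dpctl show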

Previously we set up a pair of VMs a and b on a network n1
and demonstrated how packets make their way between them. In this
step, we’ll set up a second network n2 with a new VM c,
connect a router r to both networks, and demonstrate how routing
works in OVN.

There’s nothing really new for the network and the VM so let’s just go
ahead and create them:
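
Here is a sketch, including the router setup (the 10.1.2.0/24 subnet
and c’s fixed address 10.1.2.7 are example choices):

$ openstack network create n2
$ openstack subnet create --subnet-range 10.1.2.0/24 --network n2 n2subnet
$ openstack server create --nic net-id=n2,v4-fixed-ip=10.1.2.7 \
    --flavor m1.nano --image $IMAGE_ID --key-name demo c
$ openstack router create r
$ openstack router add subnet r n1subnet
$ openstack router add subnet r n2subnet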

Now a, b, and c should all be able to reach each other. You
can get some verification that routing is taking place by running
ping between c and one of the other VMs: the reported TTL
should be one less than between a and b (63 instead of 64).

Observe via ovn-nbctl the new OVN logical switch and router and
the ports that connect them together:
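
That is:

$ ovn-nbctl show | abbrev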

Let’s see what happens at the logical flow level for an ICMP packet
from a to c. This generates a long trace but an interesting
one, so we’ll look at it bit by bit. The first three stanzas in the
output show the packet’s ingress into n1 and processing through
the firewall on that side (via the “ct_next” connection-tracking
action), and then the selection of the port that leads to router r
as the output port:

The next two stanzas represent processing through logical router
r. The processing in table 5 is the core of the routing
implementation: it recognizes that the packet is destined for an
attached subnet, decrements the TTL and updates the Ethernet source
address. Table 6 then selects the Ethernet destination address based
on the IP destination. The packet then passes to switch n2 via an
OVN “logical patch port”:

It’s possible to use ofproto/trace, just as before, to trace a
packet through OpenFlow tables, either for a hypothetical packet or
one that you get from a real test case using ovs-dpctl. The
process is just the same as before and the output is almost the same,
too. Using a router doesn’t actually introduce any interesting new
wrinkles, so we’ll skip over this for this case and for the remainder
of the tutorial, but you can follow the steps on your own if you like.

The VMs that we’ve created can access each other but they are isolated
from the physical world. In OpenStack, the dominant way to connect a
VM to external networks is by creating what is called a “floating IP
address”, which uses network address translation to connect an
external address to an internal one.

DevStack created a pair of networks named “private” and “public”. To
use a floating IP address from a VM, we first add a port to the VM
with an IP address from the “private” network, then we create a
floating IP address on the “public” network, then we associate the
port with the floating IP address.
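
A sketch of those steps (the VM name d and port name dp are just the
names this section uses; the addresses you get will very likely
differ):

$ openstack server create --nic net-id=private --flavor m1.nano \
    --image $IMAGE_ID --key-name demo d
$ openstack port set --name dp $(openstack port list --server d -f value -c ID)
$ openstack floating ip create --port dp public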

VM d is on the “private” switch under its private IP address
10.0.0.8. The “private” switch is connected to “router1” via two
router ports (one for IPv4, one for IPv6).

The “public” switch is connected to “router1” and to the physical
network via a “localnet” port.

“router1” is in the middle between “private” and “public”. In
addition to the router ports that connect to these switches, it has
“nat” entries that direct network address translation. The
translation between floating IP address 172.24.4.8 and private
address 10.0.0.8 makes perfect sense.

When the NB DB gets translated into logical flows at the southbound
layer, the “nat” entries get translated into IP matches that then
invoke “ct_snat” and “ct_dnat” actions. The details are intricate,
but you can get some of the idea by just looking for relevant flows:
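
For example:

$ ovn-sbctl lflow-list | abbrev | grep 172.24.4.8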

In “router1”, first the ct_snat action without an argument
attempts to “un-SNAT” the packet. ovn-trace treats this as a no-op,
because it doesn’t have any state for tracking connections. As an
alternative, it invokes ct_dnat(10.0.0.8) to NAT the destination
IP:

The first step is to add an IPv6 subnet to networks n1 and n2,
then attach those subnets to our router r. As usual, though
OpenStack can assign addresses itself, we use fixed ones to make the
discussion easier:
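
A sketch of the commands (the fc11::/64 and fc22::/64 prefixes match
the addresses discussed below; the specific host addresses and subnet
names are example choices):

$ openstack subnet create --ip-version 6 --subnet-range fc11::/64 --network n1 n1subnet6
$ openstack subnet create --ip-version 6 --subnet-range fc22::/64 --network n2 n2subnet6
$ openstack port set ap --fixed-ip subnet=n1subnet6,ip-address=fc11::5
$ openstack port set bp --fixed-ip subnet=n1subnet6,ip-address=fc11::6
$ openstack router add subnet r n1subnet6
$ openstack router add subnet r n2subnet6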

Now you should have working IPv6 routing through router r. The
relevant parts of the NB DB look like the following. The interesting
parts are the new fc11:: and fc22:: addresses on the ports in
n1 and n2 and the new IPv6 router ports in r:

Let’s explore how ACLs work in OpenStack and OVN. In OpenStack, ACL
rules are part of “security groups”, which are “default deny”, that
is, packets are not allowed by default and the rules added to security
groups serve to allow different classes of packets. The default group
(named “default”) that is assigned to each of our VMs so far allows
all traffic from our other VMs, which isn’t very interesting for
testing. So, let’s create a new security group, which we’ll name
“custom”, add rules to it that allow incoming SSH and ICMP traffic,
and apply this security group to VM c:
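
For example (a sketch):

$ openstack security group create custom
$ openstack security group rule create --ingress --protocol tcp --dst-port 22 custom
$ openstack security group rule create --ingress --protocol icmp custom
$ openstack server remove security group c default
$ openstack server add security group c custom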

Now we can do some experiments to test security groups. From the
console on a or b, it should now be possible to “ping” c
or to SSH to it, but attempts to initiate connections on other ports
should be blocked. (You can try to connect on another port with
ssh -p PORT IP or nc IP PORT.) Connection attempts should
time out rather than receive the “connection refused” or “connection
reset” error that you would see between a and b.

It’s also possible to test ACLs via ovn-trace, with one new wrinkle.
ovn-trace can’t simulate connection tracking state in the network, so
by default it assumes that every packet represents an established
connection. That’s good enough for what we’ve been doing so far, but
for checking properties of security groups we want to look in more
detail.

If you look back at the VM-to-VM traces we’ve done until now, you can
see that they execute two ct_next actions:

The first of these is for the packet passing outward through the
source VM’s firewall. We can tell ovn-trace to treat the packet as
starting a new connection or adding to an established connection by
adding a --ct option: --ctnew or --ctest,
respectively. The latter is the default and therefore what we’ve
been using so far. We can also use --ctest,rpl, which in
addition to --ctest means that the connection was initiated by
the destination VM rather than by the VM sending this packet.

The second is for the packet passing inward through the destination
VM’s firewall. For this one, it makes sense to tell ovn-trace that
the packet is starting a new connection, with --ct new, or that
it is a packet sent in reply to a connection established by the
destination VM, with --ct est,rpl.

ovn-trace uses the --ct options in order, so if we want to
override the second ct_next behavior we have to specify two
options.

Another useful ovn-trace option for this testing is --minimal,
which reduces the amount of output. In this case we’re really just
interested in finding out whether the packet reaches the destination
VM, that is, whether there’s an eventual output action to c,
so --minimal works fine and the output is easier to read.

As a final demonstration of the OVN architecture, let’s examine the
DHCP implementation. Like switching, routing, and NAT, the OVN
implementation of DHCP involves configuration in the NB DB and logical
flows in the SB DB.

Let’s look at the DHCP support for a’s port ap. The port’s
Logical_Switch_Port record shows that ap has DHCPv4 options:
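
For example:

$ ovn-nbctl list logical_switch_port ap | abbrev

and the DHCP_Options record that its dhcpv4_options column points to
can be displayed with:

$ ovn-nbctl list dhcp_options | abbrev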

These options show the basic DHCP configuration for the subnet. They
do not include the IP address itself, which comes from the
Logical_Switch_Port record. This allows a whole Neutron subnet to
share a single DHCP_Options record. You can see this sharing in
action, if you like, by listing the record for port bp, which is
on the same subnet as ap, and seeing that it points to the same record as before:
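
For example:

$ ovn-nbctl --columns=dhcpv4_options list logical_switch_port ap bp | abbrev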

You can take another look at the southbound flow table if you like,
but the best demonstration is to trace a DHCP packet. The following
is a trace of a DHCP request inbound from ap. The first part is
just the usual travel through the firewall:
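
The trace command looks something like this (fa:16:3e:xx:xx:xx again
stands in for ap’s real MAC address):

$ ovn-trace n1 'inport == "ap" && eth.src == fa:16:3e:xx:xx:xx &&
    eth.dst == ff:ff:ff:ff:ff:ff && ip4.src == 0.0.0.0 &&
    ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 67'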

The next part is the new part. First, an ACL in table 6 allows a DHCP
request to pass through. In table 11, the special put_dhcp_opts
action replaces a DHCPDISCOVER or DHCPREQUEST packet by a
reply. Table 12 flips the packet’s source and destination and sends
it back the way it came in:

We’ve looked at a fair bit of how OVN works and how it interacts with
OpenStack. If you still have some interest, then you might want to
explore some of these directions:

Adding more than one hypervisor (“compute node”, in OpenStack
parlance). OVN connects compute nodes by tunneling packets with the
STT or Geneve protocols. OVN scales to 1000 compute nodes or more,
but two compute nodes demonstrate the principle. All of the tools
and techniques we demonstrated also work with multiple compute
nodes.

Container support. OVN supports seamlessly connecting VMs to
containers, whether the containers are hosted on “bare metal” or
nested inside VMs. OpenStack support for containers, however, is
still evolving, and too difficult to incorporate into the tutorial
at this point.

Other kinds of gateways. In addition to floating IPs with NAT, OVN
supports directly attaching VMs to a physical network and connecting
logical switches to VTEP hardware.