In this scenario Shaker launches pairs of instances in the same tenant network.
Every instance is hosted on a separate compute node, and all available compute
nodes are utilized. The master and slave instances are in different
availability zones. The scenario is used to test throughput between the nova
and vcenter availability zones. The traffic stays within the tenant network
(L2 domain).

In this scenario Shaker launches pairs of instances, each instance on its own
compute node. All available compute nodes are utilized. Instances are connected
to one of two tenant networks, both plugged into a single router. The traffic
goes from one network to the other (L3 east-west). The master and slave
instances are in different availability zones. The scenario is used to test
throughput between the nova and vcenter availability zones.

In this scenario Shaker launches pairs of instances on different compute nodes.
All available compute nodes are utilized. Instances are in different networks
connected to different routers, and the master accesses the slave via a
floating IP. The traffic goes from one network to the other via the external
network. The master and slave instances are in different availability zones.
The scenario is used to test throughput between the nova and vcenter
availability zones.

In this scenario Shaker launches one pair of instances in the same tenant
network. Each instance is hosted on a separate compute node. The master and
slave instances are in different availability zones. The scenario is used to
test throughput between the nova and vcenter availability zones.

In this scenario Shaker launches one pair of instances, each instance on its
own compute node. Instances are connected to one of two tenant networks, both
plugged into a single router. The traffic goes from one network to the other
(L3 east-west). The master and slave instances are in different availability
zones. The scenario is used to test throughput between the nova and vcenter
availability zones.

In this scenario Shaker launches one pair of instances on different compute
nodes. Instances are in different networks connected to different routers, and
the master accesses the slave via a floating IP. The traffic goes from one
network to the other via the external network. The master and slave instances
are in different availability zones. The scenario is used to test throughput
between the nova and vcenter availability zones.

In this scenario Shaker launches pairs of instances in the same tenant network.
Every instance is hosted on a separate compute node. The load is generated by
UDP traffic. The master and slave instances are in different availability
zones. The scenario is used to test throughput between the nova and vcenter
availability zones.

In this scenario Shaker launches pairs of instances in the same tenant network.
Every instance is hosted on a separate compute node. The load is generated by
UDP traffic and jumbo packets. The master and slave instances are in different
availability zones. The scenario is used to test throughput between the nova
and vcenter availability zones.

In this scenario Shaker launches pairs of instances, each instance on its own
compute node. Instances are connected to one of two tenant networks, both
plugged into a single router. The traffic goes from one network to the other
(L3 east-west). The load is generated by UDP traffic. The master and slave
instances are in different availability zones. The scenario is used to test
throughput between the nova and vcenter availability zones.

In this scenario Shaker launches pairs of instances on the same compute node.
Instances are connected to different tenant networks attached to one router.
The traffic goes from one network to the other (L3 east-west).

In this scenario Shaker launches pairs of instances on the same compute node.
Instances are connected to different tenant networks, each with its own router.
Instances in one of the networks have floating IPs. The traffic goes from one
network to the other via the external network.

In this scenario Shaker launches instances on one compute node in a tenant
network connected to the external network. The traffic is sent to and from an
external host. The host name needs to be provided as a command-line parameter,
e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches instances on one compute node in a tenant
network connected to the external network. All instances have floating IPs. The
traffic is sent to and from an external host. The host name needs to be
provided as a command-line parameter, e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches instances in a tenant network connected to the
external network. Every instance is hosted on a dedicated compute node. All
available compute nodes are utilized. The traffic is sent to and from an
external host (L3 north-south). The host name needs to be provided as a
command-line parameter, e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches instances in a tenant network connected to the
external network. Every instance is hosted on a dedicated compute node. All
available compute nodes are utilized. All instances have floating IPs. The
traffic is sent to and from an external host (L3 north-south). The host name
needs to be provided as a command-line parameter, e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches an instance in a tenant network connected to
the external network. The traffic is sent to and from an external host. By
default one of the public iperf3 servers is used; to override this, the target
host can be provided as a command-line parameter, e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches an instance in a tenant network connected to
the external network. The instance has a floating IP. The traffic is sent to
and from an external host. By default one of the public iperf3 servers is used;
to override this, the target host can be provided as a command-line parameter,
e.g. --matrix "{host: 172.10.1.2}".

In this scenario Shaker launches pairs of instances in the same tenant network.
Every instance is hosted on a separate compute node, and all available compute
nodes are utilized. The traffic stays within the tenant network (L2 domain).

In this scenario Shaker launches pairs of instances, each instance on its own
compute node. All available compute nodes are utilized. Instances are connected
to one of two tenant networks, both plugged into a single router. The traffic
goes from one network to the other (L3 east-west).

In this scenario Shaker launches pairs of instances on different compute nodes.
All available compute nodes are utilized. Instances are in different networks
connected to different routers, and the master accesses the slave via a
floating IP. The traffic goes from one network to the other via the external
network.

In this scenario Shaker launches one pair of instances, each instance on its
own compute node. Instances are connected to one of two tenant networks, both
plugged into a single router. The traffic goes from one network to the other
(L3 east-west).

In this scenario Shaker launches one pair of instances on different compute
nodes. Instances are in different networks connected to different routers, and
the master accesses the slave via a floating IP. The traffic goes from one
network to the other via the external network.

In this scenario Shaker launches one pair of instances in the same tenant
network. Each instance is hosted on a separate compute node. The traffic stays
within the tenant network (L2 domain). The Neutron QoS feature is used to limit
traffic throughput to 10 Mbit/s.
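As a rough illustration of the QoS setup (an assumption about how such a limit
could be provisioned, not this scenario's own code), a 10 Mbit/s
bandwidth-limit policy can be created with the openstack CLI; the policy name
bw-limit-10m is a placeholder:

    import subprocess

    # Create a QoS policy and attach a 10 Mbit/s bandwidth-limit rule to it.
    # "bw-limit-10m" is a hypothetical policy name.
    subprocess.run(
        ["openstack", "network", "qos", "policy", "create", "bw-limit-10m"],
        check=True,
    )
    subprocess.run(
        [
            "openstack", "network", "qos", "rule", "create",
            "--type", "bandwidth-limit",
            "--max-kbps", "10000",  # 10 Mbit/s
            "bw-limit-10m",
        ],
        check=True,
    )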

In this scenario Shaker launches pairs of instances in the same tenant network.
Every instance is hosted on a separate compute node. The traffic stays within
the tenant network (L2 domain). The load is generated by UDP traffic.

In this scenario Shaker launches pairs of instances, each instance on its own
compute node. Instances are connected to one of two tenant networks, both
plugged into a single router. The traffic goes from one network to the other
(L3 east-west). The load is generated by UDP traffic.

In this scenario Shaker launches pairs of instances on different compute nodes.
Instances are in different networks connected to different routers, and the
master accesses the slave via a floating IP. The traffic goes from one network
to the other via the external network. The load is generated by UDP traffic.

This scenario uses ping to measure the latency between the local host and a
remote one. The remote host defaults to 8.8.8.8 and can be overridden by a
command-line parameter, e.g. --matrix "{host: 172.10.1.2}". The scenario
verifies the SLA and expects the latency to be at most 30 ms.

This scenario uses iperf3 to measure TCP throughput between the local host and
ping.online.net (or hosts provided via the CLI). The SLA check expects the
speed to be at least 90 Mbit/s and at most 20 retransmits. The destination host
can be overridden by a command-line parameter, e.g. --matrix "{host: 172.10.1.2}".
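The SLA check can be pictured with the following Python sketch, which runs
iperf3 in JSON mode and applies the same thresholds. It assumes an iperf3
server is already listening on the target; this is an illustration, not
Shaker's implementation:

    import json
    import subprocess

    host = "ping.online.net"  # default; override via --matrix "{host: ...}"

    result = json.loads(
        subprocess.run(
            ["iperf3", "-c", host, "-J", "-t", "10"],
            capture_output=True, text=True, check=True,
        ).stdout
    )

    sent = result["end"]["sum_sent"]
    mbits = sent["bits_per_second"] / 1e6
    assert mbits >= 90, f"SLA violated: {mbits:.1f} Mbit/s < 90 Mbit/s"
    assert sent["retransmits"] <= 20, f"too many retransmits: {sent['retransmits']}"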

This scenario uses iperf3 to measure UDP throughput between the local host and
ping.online.net (or hosts provided via the CLI). The SLA check requires at
least 10 000 packets per second. The destination host can be overridden by a
command-line parameter, e.g. --matrix "{host: 172.10.1.2}".
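Similarly, a sketch of the UDP packet-rate check, again assuming a reachable
iperf3 server and only Python's standard library:

    import json
    import subprocess

    host = "ping.online.net"  # default; override via --matrix "{host: ...}"

    result = json.loads(
        subprocess.run(
            ["iperf3", "-c", host, "-u", "-b", "0", "-J", "-t", "10"],
            capture_output=True, text=True, check=True,
        ).stdout
    )

    summary = result["end"]["sum"]
    pps = summary["packets"] / summary["seconds"]
    assert pps >= 10_000, f"SLA violated: {pps:.0f} packets/s < 10 000"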

This Heat template creates a new Neutron network plus a router (north_router)
to the external network. The template also assigns floating IP addresses to
each instance so they are routable from the external network.
