Benchmarking Methodology for Network Security Device Performance

   bm.balarajah@gmail.com
   EANTC AG, Salzufer 14, 10587 Berlin, Germany (cross@eantc.de)
   NetSecOPEN (bmonkman@netsecopen.org)

   Benchmarking Methodology Working Group

This document provides benchmarking terminology and methodology for
next-generation network security devices including next-generation
firewalls (NGFW), intrusion detection and prevention solutions (IDS/IPS)
and unified threat management (UTM) implementations. This document aims
to substantially improve the applicability, reproducibility, and
transparency of benchmarks and to align the test methodology with
today's increasingly complex layer 7 application use cases. The main
areas covered in this document are test terminology, traffic profiles,
and benchmarking methodology, initially for NGFWs.

Fifteen years have passed since the IETF initially recommended test
methodology and terminology for firewalls. The requirements for network
security element
performance and effectiveness have increased tremendously since then.
Security function implementations have evolved to more advanced areas
and have diversified into intrusion detection and prevention, threat
management, analysis of encrypted traffic, etc. In an industry of
growing importance, well-defined and reproducible key performance
indicators (KPIs) are increasingly needed: They enable fair and
reasonable comparison of network security functions. All these reasons
have led to the creation of a new next-generation firewall benchmarking
document.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
BCP 14 when, and only when, they appear in all capitals, as shown here.

This document provides testing terminology and testing methodology
for next-generation firewalls and related security functions. It covers
two main areas: Performance benchmarks and security effectiveness
testing. This document focuses on advanced, realistic, and reproducible
testing methods. Additionally, it describes test bed environments, test
tool requirements, and test result formats.

The test setup defined in this document is applicable to all
benchmarking test scenarios described in this document.

The testbed configuration MUST ensure that any performance implications
discovered during benchmark testing are not due to inherent physical
network limitations, such as the number of physical links and the
forwarding performance capabilities (throughput and latency) of the
network devices in the testbed. For this reason, this document
recommends avoiding external devices such as switches and routers in
the testbed wherever possible.

However, in a typical deployment, the security devices (DUT/SUT)
are connected to routers and switches, which reduce the number of
entries in the MAC or ARP tables of the DUT/SUT. If the MAC or ARP
tables have many entries, this may impact the actual DUT/SUT
performance due to MAC and ARP/ND table lookup processes. Therefore,
it is RECOMMENDED to connect aggregation switches or routers between
the test equipment and the DUT/SUT as shown in the following figure.
The aggregation switches or routers can also be used to aggregate the
test equipment or DUT/SUT ports if the number of ports used differs
between the test equipment and the DUT/SUT.

If the test equipment is capable of emulating layer 3 routing
functionality and there is no need for test equipment port
aggregation, it is RECOMMENDED to configure the test setup as shown in
the second figure below.

+-------------------+      +-----------+      +-------------------+
|Aggregation Switch/|      |           |      |Aggregation Switch/|
|      Router       +------+  DUT/SUT  +------+      Router       |
|                   |      |           |      |                   |
+---------+---------+      +-----------+      +---------+---------+
          |                                             |
          |                                             |
+---------+-------------+                 +-------------+---------+
|                       |                 |                       |
| +-------------------+ |                 | +-------------------+ |
| | Emulated Router(s)| |                 | | Emulated Router(s)| |
| |     (Optional)    | |                 | |     (Optional)    | |
| +-------------------+ |                 | +-------------------+ |
| +-------------------+ |                 | +-------------------+ |
| |      Clients      | |                 | |      Servers      | |
| +-------------------+ |                 | +-------------------+ |
|                       |                 |                       |
|    Test Equipment     |                 |    Test Equipment     |
+-----------------------+                 +-----------------------+

+-----------------------+                   +-----------------------+
| +-------------------+ |   +-----------+   | +-------------------+ |
| | Emulated Router(s)| |   |           |   | | Emulated Router(s)| |
| |     (Optional)    | +---+  DUT/SUT  +---+ |     (Optional)    | |
| +-------------------+ |   |           |   | +-------------------+ |
| +-------------------+ |   +-----------+   | +-------------------+ |
| |      Clients      | |                   | |      Servers      | |
| +-------------------+ |                   | +-------------------+ |
|                       |                   |                       |
|    Test Equipment     |                   |    Test Equipment     |
+-----------------------+                   +-----------------------+

A unique DUT/SUT configuration MUST be used for all benchmarking
tests described in this document. Since each DUT/SUT will have its own
unique configuration, users SHOULD configure their device with the
same parameters that would be used in the actual deployment of the
device, or in a typical deployment. Users MUST enable security
features on the DUT/SUT to achieve maximum security coverage for a
specific deployment scenario.

This document attempts to define the recommended security features
which SHOULD be consistently enabled for all the benchmarking tests
described in this document. Table 1 below describes the RECOMMENDED
set of features which SHOULD be configured on the DUT/SUT.

Based on the customer use case, users MAY enable or disable the SSL
inspection feature for the "Throughput Performance with NetSecOPEN
Traffic Mix" test scenario.

To improve repeatability, a summary of the DUT configuration,
including a description of all enabled DUT/SUT features, MUST be
published with the benchmarking results.

+----------------+-----------+----------+
|                 NGFW                  |
+----------------+-----------+----------+
| DUT Features   | Mandatory | Optional |
+----------------+-----------+----------+
| SSL Inspection |     x     |          |
+----------------+-----------+----------+
| IDS/IPS        |     x     |          |
+----------------+-----------+----------+
| Web Filtering  |           |    x     |
+----------------+-----------+----------+
| Antivirus      |     x     |          |
+----------------+-----------+----------+
| Anti Spyware   |     x     |          |
+----------------+-----------+----------+
| Anti Botnet    |     x     |          |
+----------------+-----------+----------+
| DLP            |           |    x     |
+----------------+-----------+----------+
| DDoS           |           |    x     |
+----------------+-----------+----------+
| Certificate    |           |    x     |
| Validation     |           |          |
+----------------+-----------+----------+
| Logging and    |     x     |          |
| Reporting      |           |          |
+----------------+-----------+----------+
| Application    |     x     |          |
| Identification |           |          |
+----------------+-----------+----------+
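The mandatory/optional split in Table 1 lends itself to a simple
automated pre-test check. The sketch below uses the feature names from
Table 1; the set-based configuration format is a hypothetical harness
detail, not something this document defines:

```python
# Feature sets from Table 1 (names as listed in the table).
MANDATORY = {
    "SSL Inspection", "IDS/IPS", "Antivirus", "Anti Spyware",
    "Anti Botnet", "Logging and Reporting", "Application Identification",
}
OPTIONAL = {"Web Filtering", "DLP", "DDoS", "Certificate Validation"}

def missing_mandatory(enabled):
    """Return the mandatory Table 1 features absent from the enabled set."""
    return sorted(MANDATORY - set(enabled))

# Example: a configuration without IDS/IPS fails the check.
print(missing_mandatory(MANDATORY | OPTIONAL))  # []
```

A harness could refuse to start a benchmark run while this list is
non-empty, which helps make published results comparable.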
In summary, the DUT/SUT SHOULD be configured as follows:

- All security inspection enabled
- Disposition of all traffic is logged; logging to an external device
  is permissible
- Detection of CVEs matching the following characteristics when
  searching the National Vulnerability Database (NVD):
  - CVSS Version: 2
  - CVSS V2 Metrics: AV:N/Au:N/I:C/A:C (AV = Attack Vector,
    Au = Authentication, I = Integrity, and A = Availability)
  - CVSS V2 Severity: High (7-10)
  - If doing a group test, the published start date and published end
    date SHOULD be the same
- Geographical location filtering and Application Identification and
  Control configured to be triggered based on a site or application
  from the defined traffic mix

In addition, it is also RECOMMENDED to configure a realistic number
of access policy rules on the DUT/SUT. This document determines the
number of access policy rules for four different classes of DUT/SUT.
The classification of the DUT/SUT MAY be based on its maximum
supported firewall throughput performance number defined in the vendor
data sheet. This document classifies the DUT/SUT into four categories,
namely extra small, small, medium, and large.

The RECOMMENDED throughput values for these classes are:

- Extra Small (XS) - supported throughput less than 1 Gbit/s
- Small (S) - supported throughput less than 5 Gbit/s
- Medium (M) - supported throughput greater than 5 Gbit/s and less
  than 10 Gbit/s
- Large (L) - supported throughput greater than 10 Gbit/s

The Access Control Rules (ACL) defined in Table 2 SHOULD be
configured from top to bottom in the correct order as shown in the
table. (Note: There will be differences between how security vendors
implement ACL decision making.) The configured ACL MUST NOT block the
test traffic used for the benchmarking test scenarios.

+-----------+-----------+------------------+--------+---------------+
|           |           |                  |        |    DUT/SUT    |
|           |           |                  |        |Classification |
|           |           |                  |        |    #rules     |
+-----------+-----------+------------------+--------+---+---+---+---+
|           | Match     |                  |        |   |   |   |   |
| Rules Type| Criteria  | Description      | Action | XS| S | M | L |
+-----------+-----------+------------------+--------+---+---+---+---+
|Application|Application| Any application  | block  | 5 | 10| 20| 50|
|layer      |           | traffic NOT      |        |   |   |   |   |
|           |           | included in the  |        |   |   |   |   |
|           |           | test traffic     |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+
|Transport  |Src IP and | Any src IP subnet| block  | 25| 50|100|250|
|layer      |TCP/UDP    | used in the test |        |   |   |   |   |
|           |Dst ports  | AND any dst ports|        |   |   |   |   |
|           |           | NOT used in the  |        |   |   |   |   |
|           |           | test traffic     |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+
|IP layer   |Src/Dst IP | Any src/dst IP   | block  | 25| 50|100|250|
|           |           | subnet NOT used  |        |   |   |   |   |
|           |           | in the test      |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+
|Application|Application| Applications     | allow  | 10| 10| 10| 10|
|layer      |           | included in the  |        |   |   |   |   |
|           |           | test traffic     |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+
|Transport  |Src IP and | Half of the src  | allow  |  1|  1|  1|  1|
|layer      |TCP/UDP    | IP used in the   |        |   |   |   |   |
|           |Dst ports  | test AND any dst |        |   |   |   |   |
|           |           | ports used in the|        |   |   |   |   |
|           |           | test traffic. One|        |   |   |   |   |
|           |           | rule per subnet  |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+
|IP layer   |Src IP     | The rest of the  | allow  |  1|  1|  1|  1|
|           |           | src IP subnet    |        |   |   |   |   |
|           |           | range used in the|        |   |   |   |   |
|           |           | test. One rule   |        |   |   |   |   |
|           |           | per subnet       |        |   |   |   |   |
+-----------+-----------+------------------+--------+---+---+---+---+

In general, test equipment allows configuring parameters in
different protocol layers. These parameters thereby influence the
traffic flows which will be offered and impact performance
measurements.

This document specifies common test equipment configuration parameters
applicable to all test scenarios defined in this document. Any test
scenario specific parameters are described under the test setup
section of each test scenario individually.

This section specifies which parameters SHOULD be considered while
configuring clients using test equipment. It also specifies the
recommended values for certain parameters.

The TCP stack SHOULD use a TCP Reno
variant, which includes congestion avoidance, back-off and windowing,
fast retransmission, and fast recovery, on every TCP connection
between client and server endpoints. The default IPv4 and IPv6 MSS
segment sizes MUST be set to 1460 bytes and 1440 bytes respectively,
and TX and RX receive windows of 65536 bytes MUST be used. The client
initial congestion window MUST NOT exceed 10 times the MSS.
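For orientation only, the MSS and window values above map roughly onto
standard socket options as in the sketch below. This is not a
test-equipment implementation; the congestion control variant and
initial congestion window are typically host-level settings (e.g.
Linux sysctls and route options) rather than per-socket ones:

```python
import socket

def make_client_socket() -> socket.socket:
    """Create an IPv4 client socket approximating the stack parameters
    above: a 1460-byte MSS and 65536-byte TX/RX windows."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Advertise an MSS of 1460 bytes (IPv4; use 1440 for IPv6).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)
    # Request 65536-byte send and receive windows.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
    return s
```

Real test equipment configures these values in its own TCP stack; the
sketch only illustrates which knobs the parameters correspond to.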
Delayed ACKs are permitted, and the maximum client delayed ACK MUST
NOT exceed 10 times the MSS before a forced ACK. Up to 3 retries
SHOULD be allowed before a timeout event is declared. All traffic MUST
set the TCP PSH flag to high. The source port range SHOULD be
1024 - 65535. Internal timeouts SHOULD be dynamically scalable per
RFC 793. Clients SHOULD initiate and close TCP connections. TCP
connections MUST be closed via FIN.

The sum of the client IP space SHOULD contain the following
attributes. The traffic blocks SHOULD consist of multiple unique,
discontinuous static address blocks. A default gateway is permitted.
The IPv4 ToS byte or IPv6 traffic class SHOULD be set to '00' or
'000000' respectively.

The following equation can be used to determine the required
total number of client IP addresses:

   Desired total number of client IPs =
       Target throughput [Mbit/s] / Throughput per IP address [Mbit/s]

Based on the deployment and use case scenario, the value for
"Throughput per IP address" can be varied:

- Enterprise customer use case: 6-7 Mbit/s per IP (e.g. 1,400-1,700
  IPs per 10 Gbit/s throughput)
- Mobile ISP use case: 0.1-0.2 Mbit/s per IP (e.g. 50,000-100,000 IPs
  per 10 Gbit/s throughput)

Based on the deployment and use case scenario, client IP addresses
SHOULD be distributed between IPv4 and IPv6 types. The following
options can be considered for the selection of a traffic mix ratio:

- 100% IPv4, no IPv6
- 80% IPv4, 20% IPv6
- 50% IPv4, 50% IPv6
- 20% IPv4, 80% IPv6
- no IPv4, 100% IPv6

The emulated web browser contains attributes that will
materially affect how traffic is loaded. The objective is to emulate
modern, typical browser attributes to improve the realism of the
result set.

For HTTP traffic emulation, the emulated browser MUST negotiate
HTTP 1.1. HTTP persistence MAY be enabled depending on the test
scenario. The browser MAY open multiple TCP connections per server
endpoint IP at any time, depending on how many sequential transactions
need to be processed. Within a TCP connection, multiple transactions
MAY be processed if the emulated browser has available connections.
The browser SHOULD advertise a User-Agent header. Headers MUST be sent
uncompressed. The browser SHOULD enforce content length validation.

For encrypted traffic, the following attributes SHALL define
the negotiated encryption parameters. The test clients MUST use
TLSv1.2 or higher. The TLS record size MAY be optimized for the HTTPS
response object size, up to a record size of 16 KByte. The client
endpoint MUST send the TLS Server Name Indication (SNI) extension when
opening a security tunnel. Each client connection MUST perform a full
handshake with the server certificate and MUST NOT use session reuse
or resumption. Cipher suite and key size SHOULD be defined in the
parameter section of each test scenario.

This section specifies which parameters SHOULD be considered
while configuring emulated backend servers using test equipment.

The TCP stack on the server side SHOULD be configured similarly to the
client side configuration described above. In addition, the server
initial congestion window MUST NOT exceed 10 times the MSS. Delayed
ACKs are permitted, and the maximum server delayed ACK MUST NOT exceed
10 times the MSS before a forced ACK.

The server IP blocks SHOULD consist of unique, discontinuous
static address blocks, with one IP per server Fully Qualified Domain
Name (FQDN) endpoint per test port. The IPv4 ToS byte and IPv6 traffic
class byte SHOULD be set to '00' and '000000' respectively.

The server pool for HTTP SHOULD listen on TCP port 80 and emulate HTTP
version 1.1 with persistence. The server MUST advertise the server
type in the Server response header. For HTTPS, TLS 1.2 or higher MUST
be used with a maximum record size of 16 KBytes, and ticket resumption
or Session ID reuse MUST NOT be used. The server MUST listen on TCP
port 443. The server SHALL serve a certificate to the client. It is
REQUIRED that the HTTPS server also check the Host SNI information
against the FQDN. Cipher suite and key size SHOULD be defined in the
parameter section of each test scenario.

This section describes the traffic pattern between client and
server endpoints. At the beginning of the test, the server endpoints
initialize and will be ready to accept connection states, including
initialization of the TCP stack as well as bound HTTP and HTTPS
servers. When a client endpoint is needed, it initializes and is given
attributes such as a MAC and IP address. The behavior of the client is
to sweep through the given server IP space, sequentially generating a
service recognizable by the DUT. Thus, a balanced mesh between client
endpoints and server endpoints is generated in a client port - server
port combination. Each client endpoint performs the same actions as
other endpoints, with the difference being the source IP of the client
endpoint and the target server IP pool. The client SHALL use Fully
Qualified Domain Names (FQDN) in Host headers and for TLS Server Name
Indication (SNI).

Client endpoints are independent of other clients that are
concurrently executing. This section describes how a client endpoint
steps through different services when it initiates traffic. Once the
test is initialized, the client endpoints SHOULD randomly hold
(perform no operation) for a few milliseconds to allow for better
randomization of the start of client traffic. Each client will either
open a new TCP connection or connect to a TCP persistence stack still
open to that specific server. At any point that the service profile
requires encryption, a TLS encryption tunnel will form, presenting the
URL request to the server. The server will then perform an SNI name
check, comparing the proposed FQDN to the domain embedded in the
certificate. Only when correct will the server process the HTTPS
response object. The initial response object MUST NOT have a fixed
size; its size is based on the benchmarking tests described in this
document. Multiple additional sub-URLs (response objects on the
service page) MAY be requested simultaneously. This MAY be to the same
server IP as the initial URL. Each sub-object will also use a
canonical FQDN and URL path, as observed in the traffic mix used.

The loading of traffic is described in this section. The loading
of a traffic load profile has five distinct phases: Init, ramp up,
sustain, ramp down, and collection.

During the Init phase, test bed devices including the client and
server endpoints should negotiate layer 2-3 connectivity such as MAC
learning and ARP. Only after successful MAC learning or ARP/ND
resolution SHALL the test iteration move to the next phase. No
measurements are made in this phase. The minimum RECOMMENDED time for
the Init phase is 5 seconds. During this phase, the emulated clients
SHOULD NOT initiate any sessions with the DUT/SUT; in contrast, the
emulated servers should be ready to accept requests from the DUT/SUT
or from the emulated clients.

In the ramp up phase, the test equipment SHOULD start to generate the
test traffic. It SHOULD use a set approximate number of unique client
IP addresses actively to generate traffic. The traffic SHOULD ramp
from zero to the desired target objective. The target objective will
be defined for each benchmarking test. The duration of the ramp up
phase MUST be configured long enough that the test equipment does not
overwhelm the DUT/SUT's supported performance metrics, namely
connections per second, concurrent TCP connections, and application
transactions per second. The RECOMMENDED duration for the ramp up
phase is 180-300 seconds. No measurements are made in this phase.

In the sustain phase, the test equipment SHOULD continue generating
traffic at a constant target value for a constant number of active
client IPs. The RECOMMENDED duration for the sustain phase is 600
seconds. This is the phase where measurements occur.

In the ramp down/close phase, no new connections are established, and
no measurements are made. The durations of the ramp up and ramp down
phases SHOULD be the same. The RECOMMENDED duration of this phase is
between 180 and 300 seconds.

The last phase is administrative and occurs when the test equipment
merges and collates the report data.

This section recommends steps to control the test environment and
test equipment, specifically focusing on virtualized environments and
virtualized test equipment.

- Ensure that any ancillary switching or routing functions between the
  system under test and the test equipment do not limit the
  performance of the traffic generator. This is specifically important
  for virtualized components (vSwitches, vRouters).
- Verify that the performance of the test equipment matches and
  reasonably exceeds the expected maximum performance of the system
  under test.
- Assert that the test bed characteristics are stable during the
  entire test session. Several factors might influence stability,
  specifically for virtualized test beds, for example additional
  workloads in a virtualized system, load balancing and movement of
  virtual machines during the test, or simple issues such as
  additional heat created by high workloads leading to an emergency
  CPU performance reduction.

Test bed reference pre-tests help to ensure that the desired traffic
generator aspects, such as maximum throughput, and the network
performance metrics, such as maximum latency and maximum packet loss,
are met.

Once the desired maximum performance goals for the system under test
have been identified, a safety margin of 10% SHOULD be added for
throughput and subtracted for maximum latency and maximum packet loss.

Test bed preparation may be performed either by configuring the DUT in
the most trivial setup (fast forwarding) or without the presence of
the DUT.

This section describes how the final report should be formatted and
presented. The final test report MAY have two major sections: an
introduction section and a results section. The following attributes
SHOULD be present in the introduction section of the test report:

- The name of the NetSecOPEN traffic mix (see Appendix A) MUST be
  prominent.
- The time and date of the execution of the test MUST be prominent.
- Summary of testbed software and hardware details
  - DUT hardware/virtual configuration
    - This section SHOULD clearly identify the make and model of the
      DUT.
    - The port interfaces, including speed and link information, MUST
      be documented.
    - If the DUT is a virtual VNF, interface acceleration such as DPDK
      and SR-IOV MUST be documented, as well as cores used, RAM used,
      and the pinning / resource sharing configuration. The hypervisor
      and version MUST be documented.
    - Any additional hardware relevant to the DUT, such as
      controllers, MUST be documented.
  - DUT software
    - The operating system name MUST be documented.
    - The version MUST be documented.
    - The specific configuration MUST be documented.
  - DUT enabled features
    - Specific features, such as logging, NGFW, and DPI, MUST be
      documented.
    - Attributes of those features MUST be documented.
    - Any additional relevant information about features MUST be
      documented.
  - Test equipment hardware and software
    - Test equipment vendor name
    - Hardware details, including model number and interface type
    - Test equipment firmware and test application software version
- Results Summary / Executive Summary
  - Results SHOULD resemble a pyramid in how they are reported, with
    the introduction section documenting the summary of results in a
prominent, easy to read block.

In the result section of the test report, the following attributes
SHOULD be present for each test scenario:

- KPIs MUST be documented separately for each test scenario. The
  format of the KPI metrics SHOULD be presented as described in the
  KPI section.
- The next level of detail SHOULD be graphs showing each of these
  metrics over the duration (sustain phase) of the test. This allows
  the user to see the measured performance stability changes over
  time.

This section lists KPIs for all benchmarking test scenarios.
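Since most of the KPIs that follow are averages over sustain-phase
measurement samples, their aggregation is straightforward. The sketch
below illustrates it; the per-sample dict format is hypothetical,
chosen only for the example:

```python
def average_kpis(samples):
    """Average each KPI over a list of sustain-phase samples,
    e.g. [{"throughput_gbps": 9.81, "cps": 120000.0}, ...]."""
    totals = {}
    for sample in samples:
        for kpi, value in sample.items():
            totals[kpi] = totals.get(kpi, 0.0) + value
    # One averaged value per KPI name seen in the samples.
    return {kpi: total / len(samples) for kpi, total in totals.items()}
```

In a real harness the samples would come from the test equipment's
result output at the measurement frequency required by each procedure.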
All KPIs MUST be measured during the sustain phase of the traffic load
profile described above. All KPIs MUST be measured from the result
output of the test equipment.

Concurrent TCP Connections
   This key performance indicator measures the average concurrent open
   TCP connections in the sustaining period.

TCP Connections Per Second
   This key performance indicator measures the average established TCP
   connections per second in the sustaining period. For the
   "TCP/HTTP(S) Connection Per Second" benchmarking test scenario, the
   KPI measures the average established and terminated TCP connections
   per second simultaneously.

Application Transactions Per Second
   This key performance indicator measures the average successfully
   completed application transactions per second in the sustaining
   period.

TLS Handshake Rate
   This key performance indicator measures the average TLS 1.2 or
   higher session formation rate within the sustaining period.

Throughput
   This key performance indicator measures the average layer 2
   throughput within the sustaining period, as well as the average
   packets per second within the same period. The value of throughput
   SHOULD be presented in Gbit/s rounded to two places of precision,
   with a more specific kbit/s in parenthesis. Optionally, goodput MAY
   also be logged as an average goodput rate measured over the same
   period. Goodput results SHALL also be presented in the same format
   as throughput.

URL Response Time / Time to Last Byte (TTLB)
   This key performance indicator measures the minimum, average, and
   maximum per-URL response time in the sustaining period. The latency
   is measured at the client and, in this case, is the time duration
   between sending a GET request from the client and receiving the
   complete response from the server.

Application Transaction Latency
   This key performance indicator measures the minimum, average, and
   maximum amount of time to receive all objects from the server. The
   value of application transaction latency SHOULD be presented in
   milliseconds rounded to zero decimal places.

Time to First Byte (TTFB)
   This key performance indicator measures the minimum, average, and
   maximum time to first byte. TTFB is the elapsed time between
   sending the SYN packet from the client and receiving the first byte
   of application data from the DUT/SUT. TTFB SHOULD be expressed in
   milliseconds.

Using the NetSecOPEN traffic mix, determine the maximum sustainable
throughput performance supported by the DUT/SUT (see Appendix A for
details about the traffic mix).

This test scenario is RECOMMENDED to be performed twice: once with the
SSL inspection feature enabled and once with the SSL inspection
feature disabled on the DUT/SUT.

The test bed setup MUST be configured as defined above. Any test
scenario specific test bed configuration changes MUST be documented.

In this section, test scenario specific parameters SHOULD be defined.

DUT/SUT parameters MUST conform to the requirements defined above. Any
configuration changes for this specific test scenario MUST be
documented.

Test equipment configuration parameters MUST conform to the
requirements defined above. The following parameters MUST be noted for
this test scenario:

- Client IP address range, as defined above
- Server IP address range, as defined above
- Traffic distribution ratio between IPv4 and IPv6, as defined above
- Target throughput: can be defined based on requirements; otherwise
  it represents the aggregated line rate of the interface(s) used in
  the DUT/SUT
- Initial throughput: 10% of the "Target throughput"
- One of the following ciphers and keys is RECOMMENDED to
use for this test scenario:

  - ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
    Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)
  - ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
    Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)
  - ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
    Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1)
  - ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
    Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

- Traffic profile: the test scenario MUST be run with a single
  application traffic mix profile (see Appendix A for details about
  the traffic mix). The name of the NetSecOPEN traffic mix MUST be
  documented.

The following test criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions MUST be less than
   0.001% (1 out of 100,000 transactions) of the total attempted
   transactions.

b. The number of terminated TCP connections due to unexpected TCP RST
   sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
   connections) of the total initiated TCP connections.

c. The maximum deviation (max. dev) of the application transaction
   time or TTLB (Time To Last Byte) MUST be less than X. (The value
   for "X" will be finalized and updated after completion of the PoC
   test.) The following equation MUST be used to calculate the
   deviation of the application transaction latency or TTLB:

      max. dev = max((avg_latency - min_latency),
                     (max_latency - avg_latency)) / (initial latency)

   where the initial latency is calculated using the following
   equation. For this calculation, the latency values (min', avg' and
   max') MUST be measured during test procedure step 1 as defined
   below. The variable latency represents the application transaction
   latency or TTLB.

      initial latency := min((avg' latency - min' latency),
                             (max' latency - avg' latency))

d. The maximum value of Time to First Byte (TTFB) MUST be less than X.

The following KPI metrics MUST be reported for this test
scenario:

- Mandatory KPIs: average throughput, average concurrent TCP
  connections, TTLB/application transaction latency (minimum, average,
  and maximum), and average application transactions per second
- Optional KPIs: average TCP connections per second, average TLS
  handshake rate, and TTFB

The test procedures are designed to measure the throughput performance
of the DUT/SUT at the sustaining period of the traffic load profile.
The test procedure consists of three major steps.

Step 1: Verify the link status of all connected physical interfaces.
All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to generate
test traffic at the "Initial throughput" rate as described in the
parameters section. The test equipment SHOULD follow the traffic load
profile definition as described above. The DUT/SUT SHOULD reach the
"Initial throughput" during the sustain phase. Measure all KPIs as
defined above. The measured KPIs during the sustain phase MUST meet
acceptance criteria "a" and "b" defined above.

If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to step 2.

Step 2: Configure the test equipment to generate traffic at the
"Target throughput" rate defined in the parameter table. The test
equipment SHOULD follow the traffic load profile definition as
described above. The test equipment SHOULD start to measure and record
all specified KPIs. The frequency of KPI metric measurements MUST be
less than 5 seconds. Continue the test until all traffic profile
phases are completed.

The DUT/SUT is expected to reach the desired target throughput during
the sustain phase. In addition, the measured KPIs MUST meet all
acceptance criteria. Follow step 3 if the KPI metrics do not meet the
acceptance criteria.

Step 3: Determine the maximum and average achievable throughput within
the acceptance criteria. The final test iteration MUST be performed
for the test duration defined above.

Using HTTP traffic, determine the maximum sustainable TCP connection
establishment rate supported by the DUT/SUT under different throughput
load conditions.

To measure connections per second, test iterations MUST use different
fixed HTTP response object sizes, as defined below.

The test bed setup SHOULD be configured as defined above. Any specific
test bed configuration changes, such as the number of interfaces and
interface type, MUST be documented.

In this section, test scenario specific parameters SHOULD be defined.

DUT/SUT parameters MUST conform to the requirements defined above. Any
configuration changes for this specific test scenario MUST be
documented.

Test equipment configuration parameters MUST conform to the
requirements defined above. The following parameters MUST be
documented for this test scenario:

- Client IP address range, as defined above
- Server IP address range, as defined above
- Traffic distribution ratio between IPv4 and IPv6, as defined above
- Target connections per second: initial value from the product data
  sheet (if known)
- Initial connections per second: 10% of "Target connections per
  second"

The client SHOULD negotiate HTTP 1.1 and close the connection
with FIN immediately after completion of one transaction. In each
test iteration, the client MUST send a GET command requesting a fixed
HTTP response object size. The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64
KByte.

The following criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile.

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of total attempted
transactions.

The number of terminated TCP connections due to unexpected TCP
RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
connections) of total initiated TCP connections.

During the sustain phase, traffic SHOULD be forwarded at a
constant rate.

Concurrent TCP connections SHOULD be constant during steady
state. Any deviation of concurrent TCP connections MUST be
less than 10%. This confirms that the DUT opens and closes TCP
connections at almost the same rate.

The following KPI metrics MUST be reported for each test
iteration.

Mandatory KPIs: average TCP connections per second, average
throughput, and average Time to First Byte (TTFB).

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT at the sustaining period of the traffic
load profile. The test procedure consists of three major steps. This
test procedure MAY be repeated multiple times with different IP
types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
distribution.

Verify the link status of all connected physical interfaces.
All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "initial connections per second" as defined in the
parameters . The
traffic load profile SHOULD be defined as described in .

The DUT/SUT SHOULD reach the "Initial connections per second"
before the sustain phase. The measured KPIs during the sustain
phase MUST meet acceptance criteria a, b, c, and d defined in
.

If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".

Configure the test equipment to establish "Target connections per
second" defined in the parameters table. The test equipment SHOULD
follow the traffic load profile definition as described in .

During the ramp up and sustain phase of each test iteration,
other KPIs such as throughput, concurrent TCP connections, and
application transactions per second MUST NOT reach the maximum
value the DUT/SUT can support. The test results for a specific test
iteration SHOULD NOT be reported if any of the above mentioned KPIs
(especially throughput) reaches the maximum value. (Example: If
the test iteration with a 64 KByte HTTP response object size
reached the maximum throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported.)

The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed.

The DUT/SUT is expected to reach the desired target connections
per second rate at the sustain phase. In addition, the measured
KPIs MUST meet all acceptance criteria.Follow step 3, if the KPI metrics do not meet the acceptance
criteria.Determine the maximum and average achievable connections per
second within the acceptance criteria.Determine the throughput for HTTP transactions varying the HTTP
response object size.

Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes,
such as the number of interfaces and interface types, MUST be
documented.

In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters MUST
be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

Target Throughput: Initial value from product data sheet (if
known)

Initial Throughput: 10% of "Target Throughput"

Number of HTTP response object requests (transactions) per
connection: 10

RECOMMENDED HTTP response object size: 1KB, 16KB, 64KB, 256KB
and mixed objects defined in the table below.

+---------------------+---------------------+
| Object size (KByte) | Number of requests/ |
| | Weight |
+---------------------+---------------------+
| 0.2 | 1 |
+---------------------+---------------------+
| 6 | 1 |
+---------------------+---------------------+
| 8 | 1 |
+---------------------+---------------------+
| 9 | 1 |
+---------------------+---------------------+
| 10 | 1 |
+---------------------+---------------------+
| 25 | 1 |
+---------------------+---------------------+
| 26 | 1 |
+---------------------+---------------------+
| 35 | 1 |
+---------------------+---------------------+
| 59 | 1 |
+---------------------+---------------------+
| 347 | 1 |
+---------------------+---------------------+

The following criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile.

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of attempted
transactions.

Traffic SHOULD be forwarded constantly.

Concurrent connections MUST be constant. The deviation of
concurrent TCP connections MUST NOT increase by more than 10%.

The following KPI metrics MUST be reported for this test scenario:

Average throughput, average HTTP transactions per second,
concurrent connections, and average TCP connections per
second.

The test procedure is designed to measure the HTTP throughput of the
DUT/SUT. The test procedure consists of three major steps. This
test procedure MAY be repeated multiple times with different IPv4
and IPv6 traffic distributions and HTTP response object sizes.

Verify the link status of all connected physical
interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial Throughput" as defined in the parameters .The traffic load profile SHOULD be defined as described in
. The DUT/SUT SHOULD reach
the "Initial Throughput" during the sustain phase. Measure all KPIs
as defined in .

The measured KPIs during the sustain phase MUST meet the
acceptance criteria "a" defined in .If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed.The DUT/SUT is expected to reach the desired "Target
Throughput" at the sustain phase. In addition, the measured KPIs
MUST meet all acceptance criteria.

Perform the test separately for each HTTP response object
size.Follow step 3, if the KPI metrics do not meet the acceptance
criteria.Determine the maximum and average achievable throughput within
the acceptance criteria. Final test iteration MUST be performed
for the test duration defined in .Using HTTP traffic, determine the average HTTP transaction
latency when the DUT is running with sustainable HTTP transactions per
second supported by the DUT/SUT under different HTTP response object
sizes.Test iterations MUST be performed with different HTTP response
object sizes in two different scenarios: one with a single
transaction and the other with multiple transactions within a single
TCP connection. For consistency, both the single and multiple
transaction tests MUST be configured with HTTP 1.1.

Scenario 1: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of a single
transaction (GET and RESPONSE).

Scenario 2: The client MUST negotiate HTTP 1.1 and close the
connection with FIN immediately after completion of 10 transactions (GET
and RESPONSE) within a single TCP connection.Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes
such as the number of interfaces and interface types, MUST be
documented.In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters
MUST be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

Target objective for scenario 1: 50% of the maximum connections
per second measured in test scenario TCP/HTTP Connections Per Second

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTP Throughput

Initial objective for scenario 1: 10% of “Target objective for
scenario 1”

Initial objective for scenario 2: 10% of “Target objective for
scenario 2”

HTTP transactions per TCP connection: test scenario 1 with
single transaction and the second scenario with 10
transactions

HTTP 1.1 with GET command requesting a single object. The
RECOMMENDED object sizes are 1, 16, or 64 KByte. For each test
iteration, the client MUST request a single HTTP response object
size.

The following criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile. The ramp up and ramp down phases SHOULD NOT be considered.

Generic criteria:

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of attempted
transactions.

The number of terminated TCP connections due to unexpected TCP
RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
connections) of total initiated TCP connections.

During the sustain phase, traffic SHOULD be forwarded at a
constant rate.

Concurrent TCP connections SHOULD be constant during steady
state. This confirms that the DUT opens and closes TCP connections
at the same rate.

After ramp up, the DUT MUST achieve the "Target objective"
defined in the parameter
and remain in that state for the entire test duration (sustain
phase).Following KPI metrics MUST be reported for each test scenario
and HTTP response object sizes separately:

average TCP connections per second and average application
transaction latency

All KPIs are measured once the target throughput reaches
steady state.

The test procedure is designed to measure the average application
transaction latencies or TTLB when the DUT is operating close to 50%
of its maximum achievable throughput or connections per second. This
test procedure MAY be repeated multiple times with different IP
types (IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic
distribution), HTTP response object sizes and single and multiple
transactions per connection scenarios.

Verify the link status of all connected physical
interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial objective" as defined in the parameters .
The traffic load profile can be defined as described in .The DUT/SUT SHOULD reach the "Initial objective" before the
sustain phase. The measured KPIs during the sustain phase MUST
meet the acceptance criteria a, b, c, d, e and f defined in .If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".Configure test equipment to establish "Target objective"
defined in the parameters table. The test equipment SHOULD follow
the traffic load profile definition as described in .

During the ramp up and sustain phase, other KPIs such as
throughput, concurrent TCP connections, and application
transactions per second MUST NOT reach the maximum value that
the DUT/SUT can support. The test results for a specific test
iteration SHOULD NOT be reported if any of the above mentioned KPIs
(especially throughput) reaches the maximum value. (Example: If
the test iteration with a 64 KByte HTTP response object size
reached the maximum throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported.)

The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed. The DUT/SUT is expected to reach the desired "Target
objective" at the sustain phase. In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

Determine the maximum achievable connections per
second within the acceptance criteria and measure the latency values.

Determine the maximum number of concurrent TCP connections that
the DUT/SUT sustains when using HTTP traffic.

Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes,
such as the number of interfaces and interface types, MUST be
documented.

In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters MUST
be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

Target concurrent connections: Initial value from product
data sheet (if known)

Initial concurrent connections: 10% of “Target concurrent
connections”

Maximum connections per second during ramp up phase: 50% of
maximum connections per second measured in test scenario TCP/HTTP Connections per second

Ramp up time (in traffic load profile for "Target
concurrent connections"): "Target concurrent connections" /
"Maximum connections per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial
concurrent connections"): "Initial concurrent connections" /
"Maximum connections per second during ramp up phase"

The client MUST negotiate HTTP 1.1 with persistence, and each
client MAY open multiple concurrent TCP connections per server
endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTP
response object in the same TCP connection (10 transactions/TCP
connection), and the delay (think time) between the transactions
MUST be X seconds, where:

X = ("Ramp up time" + "Steady state time") / 10

The established connections SHOULD remain open until the ramp
down phase of the test. During the ramp down phase, all
connections SHOULD be successfully closed with FIN.The following test Criteria is defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile.Number of failed Application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of total attempted
transactions.

The number of terminated TCP connections due to unexpected TCP
RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
connections) of total initiated TCP connections.

During the sustain phase, traffic SHOULD be forwarded
constantly.

During the sustain phase, the maximum deviation (max. dev)
of application transaction latency or TTLB (Time To Last Byte)
MUST be less than 10%.

The following KPI metrics MUST be reported for this test
scenario:

average throughput, concurrent TCP connections (minimum,
average and maximum), TTLB/ application transaction latency
(minimum, average and maximum) and average application
transactions per second.The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT at the sustaining period of
traffic load profile. The test procedure consists of three major
steps. This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.

Verify the link status of all connected physical
interfaces. All interfaces are expected to be in "UP" status.

Configure the test equipment to establish “Initial concurrent TCP
connections" defined in . Except
ramp up time, the traffic load profile SHOULD be defined as
described in .During the sustain phase, the DUT/SUT SHOULD reach the “Initial
concurrent TCP connections”. The measured KPIs during the sustain
phase MUST meet the acceptance criteria “a” and “b” defined in
.If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to “Step 2”.Configure test equipment to establish “Target concurrent TCP
connections”. The test equipment SHOULD follow the traffic load
profile definition (except ramp up time) as described in .During the ramp up and sustain phase, the other KPIs such as
throughput, TCP connections per second and application
transactions per second MUST NOT reach the maximum value that
the DUT/SUT can support.The test equipment SHOULD start to measure and record KPIs
defined in . The measurement
interval MUST be less than 5 seconds. Continue the test until
all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target concurrent
connections at the sustain phase. In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.

Determine the maximum and average achievable concurrent TCP
connection capacity within the acceptance criteria.

Using HTTPS traffic, determine the maximum sustainable SSL/TLS
session establishment rate supported by the DUT/SUT under different
throughput load conditions.Test iterations MUST include common cipher suites and key
strengths as well as forward looking stronger keys. Specific test
iterations MUST include ciphers and keys defined in .

For each cipher suite and key strength, test iterations MUST use
a single HTTPS response object size defined in the test equipment
configuration parameters to
measure connections per second performance under a variety of DUT
Security inspection load conditions.Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes
such as the number of interfaces and interface types, MUST be
documented.In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters MUST
be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

Target connections per second: Initial value from product data
sheet (if known)

Initial connections per second: 10% of “Target connections per
second”

RECOMMENDED ciphers and keys:

ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature
Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group:
secp256r1)

ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
Algorithm: ecdsa_secp384r1_sha384 and Supported group:
secp521r1)

ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

The client MUST negotiate HTTPS 1.1 and close the connection
with FIN immediately after completion of one transaction. In each
test iteration, the client MUST send a GET command requesting a fixed
HTTPS response object size. The RECOMMENDED object sizes are 1, 2,
4, 16, 64 KByte.

The following criteria are defined as test results
acceptance criteria:

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of attempted
transactions.

The number of terminated TCP connections due to unexpected TCP
RST sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
connections) of total initiated TCP connections.

During the sustain phase, traffic SHOULD be forwarded at a
constant rate.

Concurrent TCP connections SHOULD be constant during steady
state. This confirms that the DUT opens and closes TCP
connections at the same rate.

The following KPI metrics MUST be reported for this test
scenario:

average TCP connections per second, average throughput, and
average Time to TCP First Byte.

The test procedure is designed to measure the TCP connections per
second rate of the DUT/SUT at the sustaining period of traffic load
profile. The test procedure consists of three major steps. This test
procedure MAY be repeated multiple times with different IPv4 and
IPv6 traffic distributions.

Verify the link status of all connected physical interfaces.
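The RECOMMENDED cipher suites listed in the test equipment configuration parameters above can be pinned on the emulated clients; a minimal sketch using Python's `ssl` module (one context per test iteration; certificate checking is disabled only because emulated test servers use test certificates, and all names beyond the listed suite are illustrative):

```python
import ssl

def make_client_context(cipher="ECDHE-RSA-AES128-GCM-SHA256"):
    """Build a client TLS context restricted to one RECOMMENDED suite.

    The listed suite names are TLS 1.2 names, so the negotiated
    version is capped at TLS 1.2 for these iterations.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False           # emulated servers, test certificates
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers(cipher)              # restrict negotiation to this suite
    return ctx
```

Each test iteration would wrap its client sockets with one such context so that every SSL/TLS session is established with the cipher suite and key strength under test.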
All interfaces are expected to be in "UP" status.Configure traffic load profile of the test equipment to
establish "Initial connections per second" as defined in . The
traffic load profile SHOULD be defined as described in .

The DUT/SUT SHOULD reach the "Initial connections per second"
before the sustain phase. The measured KPIs during the sustain
phase MUST meet the acceptance criteria a, b, c, and d defined in
.If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".Configure test equipment to establish "Target connections per
second" defined in the parameters table. The test equipment SHOULD
follow the traffic load profile definition as described in .During the ramp up and sustain phase, other KPIs such as
throughput, concurrent TCP connections and application
transactions per second MUST NOT reach the maximum value that the
DUT/SUT can support. The test results for specific test iteration
SHOULD NOT be reported, if the above mentioned KPI (especially
throughput) reaches the maximum value. (Example: If the test
iteration with 64Kbyte of HTTPS response object size reached the
maximum throughput limitation of the DUT, the test iteration MAY
be interrupted and the result for 64 KByte SHOULD NOT be
reported).The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed.The DUT/SUT is expected to reach the desired target connections
per second rate at the sustain phase. In addition, the measured
KPIs MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the
acceptance criteria.

Determine the maximum and average achievable connections per
second within the acceptance criteria.

Determine the throughput for HTTPS transactions varying the HTTPS
response object size.

Test iterations MUST include common cipher suites and key
strengths as well as forward looking stronger keys. Specific test
iterations MUST include the ciphers and keys defined in the
parameter .Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes
such as the number of interfaces and interface types, MUST be
documented.In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters MUST
be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

Target Throughput: Initial value from product data sheet (if
known)

Initial Throughput: 10% of "Target Throughput"

Number of HTTPS response object requests (transactions) per
connection: 10

RECOMMENDED ciphers and keys:

ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature
Hash Algorithm: ecdsa_secp256r1_sha256 and Supported group:
secp256r1)

ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
Algorithm: rsa_pkcs1_sha256 and Supported group: secp256)

ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 (Signature Hash
Algorithm: ecdsa_secp384r1_sha384 and Supported group:
secp521r1)

ECDHE-RSA-AES256-GCM-SHA384 with RSA 4096 (Signature Hash
Algorithm: rsa_pkcs1_sha384 and Supported group: secp256)

RECOMMENDED HTTPS response object size: 1KB, 2KB, 4KB, 16KB,
64KB, 256KB, and mixed objects defined in the table below.

+---------------------+---------------------+
| Object size (KByte) | Number of requests/ |
| | Weight |
+---------------------+---------------------+
| 0.2 | 1 |
+---------------------+---------------------+
| 6 | 1 |
+---------------------+---------------------+
| 8 | 1 |
+---------------------+---------------------+
| 9 | 1 |
+---------------------+---------------------+
| 10 | 1 |
+---------------------+---------------------+
| 25 | 1 |
+---------------------+---------------------+
| 26 | 1 |
+---------------------+---------------------+
| 35 | 1 |
+---------------------+---------------------+
| 59 | 1 |
+---------------------+---------------------+
| 347 | 1 |
+---------------------+---------------------+

The following criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile.

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of attempted
transactions.

Traffic SHOULD be forwarded constantly.

The deviation of concurrent TCP connections MUST be less
than 10%.

The following KPI metrics MUST be reported for this test scenario:

Average throughput, average transactions per second, concurrent
connections, and average TCP connections per second.

The test procedure consists of three major steps. This test
procedure MAY be repeated multiple times with different IPv4 and
IPv6 traffic distributions and HTTPS response object sizes.

Verify the link status of all connected physical
interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial Throughput" as defined in the parameters .

The traffic load profile SHOULD be defined as described in
. The DUT/SUT SHOULD reach
the "Initial Throughput" during the sustain phase. Measure all KPIs
as defined in .

The measured KPIs during the sustain phase MUST meet the
acceptance criteria "a" defined in .If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed.The DUT/SUT is expected to reach the desired "Target
Throughput" at the sustain phase. In addition, the measured KPIs
MUST meet all acceptance criteria.Perform the test separately for each HTTPS response object
size.Follow step 3, if the KPI metrics do not meet the acceptance
criteria.Determine the maximum and average achievable throughput within
the acceptance criteria. Final test iteration MUST be performed
for the test duration defined in .Using HTTPS traffic, determine the average HTTPS transaction
latency when the DUT is running with sustainable HTTPS transactions per
second supported by the DUT/SUT under different HTTPS response
object sizes.

Scenario 1: The client MUST negotiate HTTPS and close the
connection with FIN immediately after completion of a single
transaction (GET and RESPONSE).Scenario 2: The client MUST negotiate HTTPS and close the
connection with FIN immediately after completion of 10 transactions
(GET and RESPONSE) within a single TCP connection.Test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes
such as the number of interfaces and interface types, MUST be
documented.In this section, test scenario specific parameters SHOULD be
defined.DUT/SUT parameters MUST conform to the requirements defined in
. Any configuration changes
for this specific test scenario MUST be documented.Test equipment configuration parameters MUST conform to the
requirements defined in . The following parameters MUST
be documented for this test scenario:

Client IP address range defined in
Server IP address range defined in
Traffic distribution ratio between IPv4 and IPv6 defined in

RECOMMENDED cipher suite and key size:
ECDHE-ECDSA-AES256-GCM-SHA384 with Secp521 key size
(Signature Hash Algorithm: ecdsa_secp384r1_sha384 and Supported
group: secp521r1)

Target objective for scenario 1: 50% of the maximum connections
per second measured in test scenario TCP/HTTPS Connections per second

Target objective for scenario 2: 50% of the maximum throughput
measured in test scenario HTTPS Throughput

Initial objective for scenario 1: 10% of “Target objective for
scenario 1”

Initial objective for scenario 2: 10% of “Target objective for
scenario 2”

HTTPS transactions per TCP connection: test scenario 1 with
single transaction and the second scenario with 10
transactions

HTTPS 1.1 with GET command requesting a single 1, 16, or 64
KByte object. For each test iteration, the client MUST request a
single HTTPS response object size.

The following criteria are defined as test results
acceptance criteria. Test results acceptance criteria MUST be
monitored during the whole sustain phase of the traffic load
profile. The ramp up and ramp down phases SHOULD NOT be considered.

Generic criteria:

The number of failed application transactions MUST be less than
0.001% (1 out of 100,000 transactions) of attempted
transactions.

The number of terminated TCP connections due to unexpected TCP
RST sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
connections) of total initiated TCP connections.

During the sustain phase, traffic SHOULD be forwarded at a
constant rate.

Concurrent TCP connections SHOULD be constant during steady
state. This confirms that the DUT opens and closes TCP connections
at the same rate.

After ramp up, the DUT MUST achieve the "Target objective"
defined in the parameter
and remain in that state for the entire test duration (sustain
phase).Following KPI metrics MUST be reported for each test scenario
and HTTPS response object sizes separately:

average TCP connections per second and average application
transaction latency or TTLB

All KPIs are measured once the target connections per second
rate reaches steady state.

The test procedure is designed to measure the average application
transaction latency or TTLB when the DUT is operating close to 50%
of its maximum achievable connections per second. This test
procedure MAY be repeated multiple times with different IP types
(IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic distribution),
HTTPS response object sizes and single and multiple transactions per
connection scenarios.

Verify the link status of all connected physical
interfaces. All interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to
establish "Initial objective" as defined in the parameters .
The traffic load profile can be defined as described in .The DUT/SUT SHOULD reach the "Initial objective" before the
sustain phase. The measured KPIs during the sustain phase MUST
meet the acceptance criteria a, b, c, d, e and f defined in .If the KPI metrics do not meet the acceptance criteria, the
test procedure MUST NOT be continued to "Step 2".Configure test equipment to establish "Target objective"
defined in the parameters table. The test equipment SHOULD follow
the traffic load profile definition as described in .

During the ramp up and sustain phase, other KPIs such as
throughput, concurrent TCP connections, and application
transactions per second MUST NOT reach the maximum value that
the DUT/SUT can support. The test results for a specific test
iteration SHOULD NOT be reported if any of the above mentioned KPIs
(especially throughput) reaches the maximum value. (Example: If
the test iteration with a 64 KByte HTTPS response object size
reached the maximum throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported.)

The test equipment SHOULD start to measure and record all
specified KPIs. The measurement interval MUST be less than 5
seconds. Continue the test until all traffic profile phases are
completed. The DUT/SUT is expected to reach the desired "Target
objective" at the sustain phase. In addition, the measured KPIs
MUST meet all acceptance criteria.

Follow step 3, if the KPI metrics do not meet the acceptance
criteria.Determine the maximum achievable connections per second within
the acceptance criteria and measure the latency values.Determine the maximum number of concurrent TCP connections that
the DUT/SUT sustains when using HTTPS traffic.

The test bed setup SHOULD be configured as defined in . Any specific test bed configuration changes, such as the number of interfaces and the interface type, MUST be documented.

In this section, test scenario specific parameters SHOULD be defined.

DUT/SUT parameters MUST conform to the requirements defined in . Any configuration changes for this specific test scenario MUST be documented.

Test equipment configuration parameters MUST conform to the requirements defined in . The following parameters MUST be documented for this test scenario:

- Client IP address range defined in 

- Server IP address range defined in 

- Traffic distribution ratio between IPv4 and IPv6 defined in 
- RECOMMENDED cipher suite and key size: ECDHE-ECDSA-AES256-GCM-SHA384 with a Secp521 key size (Signature Hash Algorithm: ecdsa_secp384r1_sha384 and Supported group: secp521r1)

- Target concurrent connections: initial value from the product data sheet (if known)

- Initial concurrent connections: 10% of "Target concurrent connections"

- Connections per second during ramp up phase: 50% of the maximum connections per second measured in the test scenario TCP/HTTPS Connections per Second

- Ramp up time (in the traffic load profile for "Target concurrent connections"): "Target concurrent connections" / "Maximum connections per second during ramp up phase"

- Ramp up time (in the traffic load profile for "Initial concurrent connections"): "Initial concurrent connections" / "Maximum connections per second during ramp up phase"

The client MUST perform HTTPS transactions with persistence, and
each client can open multiple concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting 1 Kbyte HTTPS response objects in the same TCP connection (10 transactions per TCP connection), and the delay (think time) between each transaction MUST be X seconds, where:

X = ("Ramp up time" + "Steady state time") / 10

The established connections SHOULD remain open until the ramp down phase of the test. During the ramp down phase, all connections SHOULD be successfully closed with FIN.

The following criteria are defined as the test results
acceptance criteria. The test results acceptance criteria MUST be monitored during the whole sustain phase of the traffic load profile.

a. The number of failed application transactions MUST be less than 0.001% (1 out of 100,000 transactions) of total attempted transactions.

b. The number of terminated TCP connections due to unexpected TCP RSTs sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000 connections) of total initiated TCP connections.

c. During the sustain phase, traffic SHOULD be forwarded constantly.

d. During the sustain phase, the maximum deviation (max. dev) of application transaction latency or TTLB (Time To Last Byte) MUST be less than 10%.

The following KPI metrics MUST be reported for this test scenario: average throughput; maximum, minimum, and average concurrent TCP connections; TTLB/application transaction latency; and average application transactions per second.

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT during the sustain phase of the traffic load profile. The test procedure consists of three major steps. This test procedure MAY be repeated multiple times with different IPv4 and IPv6 traffic distributions.

Step 1: Verify the link status of all connected physical interfaces. All interfaces are expected to be in "UP" status.

Configure the test equipment to establish the "Initial concurrent TCP connections" defined in .
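The derived parameters defined earlier for this scenario can be computed mechanically. The sketch below applies the formulas from the parameter list; the data-sheet value, measured CPS, and sustain duration are purely illustrative examples, not values from this document.

```python
# Illustrative sketch of the derived test parameters.
# target_cc and ramp_cps would come from the product data sheet and from
# the TCP/HTTPS Connections per Second test; these numbers are examples.

target_cc = 1_000_000          # "Target concurrent connections"
ramp_cps = 25_000              # 50% of the max CPS measured in the CPS test

initial_cc = 0.10 * target_cc  # "Initial concurrent connections": 10% of target

# Ramp up time = concurrent connection objective / CPS during ramp up
ramp_up_target = target_cc / ramp_cps
ramp_up_initial = initial_cc / ramp_cps

# Think time between the 10 transactions of each TCP connection:
# X = ("Ramp up time" + "Steady state time") / 10
steady_state = 600             # example sustain phase duration, in seconds
think_time = (ramp_up_target + steady_state) / 10
```

With these example inputs, the think time works out to 64 seconds, which keeps each connection open across most of the sustain phase, as the procedure intends.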
Except for the ramp up time, the traffic load profile SHOULD be defined as described in .

During the sustain phase, the DUT/SUT SHOULD reach the "Initial concurrent TCP connections". The measured KPIs during the sustain phase MUST meet the acceptance criteria "a" and "b" defined in .

If the KPI metrics do not meet the acceptance criteria, the test procedure MUST NOT be continued to "Step 2".

Step 2: Configure the test equipment to establish the "Target concurrent TCP connections". The test equipment SHOULD follow the traffic load profile definition (except the ramp up time) as described in .

During the ramp up and sustain phases, the other KPIs such as
throughput, TCP connections per second, and application transactions per second MUST NOT reach the maximum value that the DUT/SUT can support.

The test equipment SHOULD start to measure and record the KPIs defined in . The measurement interval MUST be less than 5 seconds. Continue the test until all traffic profile phases are completed.

The DUT/SUT is expected to reach the desired target concurrent connections during the sustain phase. In addition, the measured KPIs MUST meet all acceptance criteria.

Follow step 3 if the KPI metrics do not meet the acceptance criteria.

Step 3: Determine the maximum and average achievable concurrent TCP connections within the acceptance criteria.

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

Acknowledgements will be added in a future release.

The authors would like to thank the many people that contributed
their time and knowledge to this effort.

Specifically, thanks to the co-chairs of the NetSecOPEN Test Methodology working group and the NetSecOPEN Security Effectiveness working group: Alex Samonte, Aria Eslambolchizadeh, Carsten Rossenhoevel, and David DeSanto.

Additionally, the following people provided input and comments and spent time reviewing the myriad of drafts. If we have missed anyone, the fault is entirely our own. Thanks to Amritam Putatunda, Chao Guo, Chris Chapman, Chris Pearson, Chuck McAuley, David White, Jurrie Van Den Breekel, Michelle Rhines, Rob Andrews, Samaresh Nair, and Tim Winters.

A traffic mix for testing the performance of next-generation firewalls
MUST scale to stress the DUT based on real-world conditions. In order to achieve this, the following MUST be included:

- Clients connecting to multiple different server FQDNs per application

- Clients loading apps and pages with connections and objects in specific orders

- Multiple unique certificates for HTTPS/TLS

- A wide variety of different object sizes

- Different URL paths

- A mix of HTTP and HTTPS

A traffic mix for testing the performance of next-generation firewalls MUST also facilitate application identification using different detection methods, with and without decryption of the traffic, such as:

- HTTP HOST based application detection

- HTTPS/TLS Server Name Indication (SNI)

- Certificate Subject Common Name (CN)

The mix MUST be of sufficient complexity and volume to render differences between individual apps statistically insignificant. For example, like-to-like apps behave similarly: one type of video service vs. another both consist of larger objects, whereas one news site vs. another both typically have more connections than other apps because of trackers and embedded advertising content. To achieve sufficient complexity, a mix MUST have:

- Thousands of URLs each client walks through

- Hundreds of FQDNs each client connects to

- Hundreds of unique certificates for HTTPS/TLS

- Thousands of different object sizes per client, in orders matching applications

The following is a description of what a popular application in an enterprise traffic mix contains.

Table 5 lists the FQDNs, number of transactions, and bytes transferred
as an example client interacts with Office 365 Outlook, Word, Excel, PowerPoint, SharePoint, and Skype.

 +---------------------------------+------------+-------------+
| Office365 FQDN | Bytes | Transaction |
+============================================================+
| r1.res.office365.com | 14,056,960 | 192 |
+---------------------------------+------------+-------------+
| s1-word-edit-15.cdn.office.net | 6,731,019 | 22 |
+---------------------------------+------------+-------------+
| company1-my.sharepoint.com | 6,269,492 | 42 |
+---------------------------------+------------+-------------+
| swx.cdn.skype.com | 6,100,027 | 12 |
+---------------------------------+------------+-------------+
| static.sharepointonline.com | 6,036,947 | 41 |
+---------------------------------+------------+-------------+
| spoprod-a.akamaihd.net | 3,904,250 | 25 |
+---------------------------------+------------+-------------+
| s1-excel-15.cdn.office.net | 2,767,941 | 16 |
+---------------------------------+------------+-------------+
| outlook.office365.com | 2,047,301 | 86 |
+---------------------------------+------------+-------------+
| shellprod.msocdn.com | 1,008,370 | 11 |
+---------------------------------+------------+-------------+
| word-edit.officeapps.live.com | 932,080 | 25 |
+---------------------------------+------------+-------------+
| res.delve.office.com | 760,146 | 2 |
+---------------------------------+------------+-------------+
| s1-powerpoint-15.cdn.office.net | 557,604 | 3 |
+---------------------------------+------------+-------------+
| appsforoffice.microsoft.com | 511,171 | 5 |
+---------------------------------+------------+-------------+
| powerpoint.officeapps.live.com | 471,625 | 14 |
+---------------------------------+------------+-------------+
| excel.officeapps.live.com | 342,040 | 14 |
+---------------------------------+------------+-------------+
| s1-officeapps-15.cdn.office.net | 331,343 | 5 |
+---------------------------------+------------+-------------+
| webdir0a.online.lync.com | 66,930 | 15 |
+---------------------------------+------------+-------------+
| portal.office.com | 13,956 | 1 |
+---------------------------------+------------+-------------+
| config.edge.skype.com | 6,911 | 2 |
+---------------------------------+------------+-------------+
| clientlog.portal.office.com | 6,608 | 8 |
+---------------------------------+------------+-------------+
| webdir.online.lync.com | 4,343 | 5 |
+---------------------------------+------------+-------------+
| graph.microsoft.com | 2,289 | 2 |
+---------------------------------+------------+-------------+
| nam.loki.delve.office.com | 1,812 | 5 |
+---------------------------------+------------+-------------+
| login.microsoftonline.com | 464 | 2 |
+---------------------------------+------------+-------------+
| login.windows.net | 232 | 1 |
+---------------------------------+------------+-------------+

Clients MUST connect to multiple server FQDNs in the same order as real applications. Connections MUST be made when the client is interacting with the application; the client MUST NOT first set up all connections. Connections SHOULD stay open per client for subsequent transactions to the same FQDN, similar to how a web browser behaves.
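As a rough illustration of this browser-like behavior, the sketch below models lazy, per-FQDN connection reuse. It is a hypothetical model only: no real sockets are opened, and the class and field names are invented for the example; the replayed transactions use rows from Table 6.

```python
# Hypothetical sketch of browser-like connection reuse: a connection is
# opened lazily when the client first interacts with an FQDN and is kept
# open for subsequent transactions to that FQDN (no real networking here).

class ClientConnectionPool:
    def __init__(self):
        self.open_connections = {}  # FQDN -> number of open connections
        self.transactions = []      # (FQDN, url_path, object_size) log

    def request(self, fqdn, url_path, object_size):
        # Open a connection only when the client needs the FQDN;
        # never pre-establish all connections up front.
        if fqdn not in self.open_connections:
            self.open_connections[fqdn] = 1
        self.transactions.append((fqdn, url_path, object_size))

client = ClientConnectionPool()
# Replay transactions in the order observed in the real application.
client.request("company1-my.sharepoint.com", "/personal...", 23132)
client.request("word-edit.officeapps.live.com", "/we/WsaUpload.ashx", 2)
client.request("company1-my.sharepoint.com", "/ScriptResource...", 102774)
```

After the replay, two connections are open and the second request to company1-my.sharepoint.com reuses the existing one, mirroring the SHOULD above.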
Clients MUST use different URL paths and object sizes in the orders observed in real applications. Clients MAY also set up multiple connections per FQDN to process multiple transactions in a sequence at the same time. Table 6 has a partial example sequence of the Office 365 Word application transactions.

+---------------------------------+----------------------+----------+
| FQDN | URL Path | Object |
| | | size |
+===================================================================+
| company1-my.sharepoint.com | /personal... | 23,132 |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com | /we/WsaUpload.ashx | 2 |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com | /bld/.../blank.js | 454 |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com | /bld/.../ | 23,254 |
| | initstrings.js | |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com | /bld/.../init.js | 292,740 |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com | /ScriptResource... | 102,774 |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com | /ScriptResource... | 40,329 |
+---------------------------------+----------------------+----------+
| company1-my.sharepoint.com | /WebResource... | 23,063 |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com | /we/wordeditorframe. | 60,657 |
| | aspx... | |
+---------------------------------+----------------------+----------+
| static.sharepointonline.com | /bld/_layouts/.../ | 454 |
| | blank.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 19,201 |
| | EditSurface.css | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 221,397 |
| | WordEditor.css | |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../ | 107,571 |
| | Microsoft | |
| | Ajax.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 39,981 |
| | wacbootwe.js | |
+---------------------------------+----------------------+----------+
| s1-officeapps-15.cdn.office.net | /we/s/.../ | 51,749 |
| | CommonIntl.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 6,050 |
| | Compat.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 54,158 |
| | Box4Intl.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 24,946 |
| | WoncaIntl.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 53,515 |
| | WordEditorIntl.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../ | 1,978,712|
| | WordEditorExp.js | |
+---------------------------------+----------------------+----------+
| s1-word-edit-15.cdn.office.net | /we/s/.../jSanity.js | 10,912 |
+---------------------------------+----------------------+----------+
| word-edit.officeapps.live.com | /we/OneNote.ashx | 145,708 |
+---------------------------------+----------------------+----------+

For application identification, the HTTPS/TLS traffic MUST include realistic Certificate Subject Common Name (CN) data as well as Server Name Indications (SNI). For example, a DUT MAY detect Facebook Chat traffic by inspecting the certificate, detecting *.facebook.com in the certificate subject CN, subsequently detecting the word "chat" in the FQDN 5-edge-chat.facebook.com, and identifying the traffic on the connection as Facebook Chat.

Table 7 includes further examples of SNI and CN pairs for several FQDNs of Office 365.

 +------------------------------+----------------------------------+
|Server Name Indication (SNI) | Certificate Subject |
| | Common Name (CN) |
+=================================================================+
| r1.res.office365.com | *.res.outlook.com |
+------------------------------+----------------------------------+
| login.windows.net | graph.windows.net |
+------------------------------+----------------------------------+
| webdir0a.online.lync.com | *.online.lync.com |
+------------------------------+----------------------------------+
| login.microsoftonline.com | stamp2.login.microsoftonline.com |
+------------------------------+----------------------------------+
| webdir.online.lync.com | *.online.lync.com |
+------------------------------+----------------------------------+
| graph.microsoft.com | graph.microsoft.com |
+------------------------------+----------------------------------+
| outlook.office365.com | outlook.com |
+------------------------------+----------------------------------+
| appsforoffice.microsoft.com | appsforoffice.microsoft.com |
+------------------------------+----------------------------------+

NetSecOPEN has provided a reference enterprise perimeter traffic mix with dozens of applications, hundreds of connections, and thousands of transactions.

The enterprise perimeter traffic mix consists of 70% HTTPS and 30% HTTP by bytes, and 58% HTTPS and 42% HTTP by transactions. By connections, with a single connection per FQDN, the mix consists of 43% HTTPS and 57% HTTP. With multiple connections per FQDN, the HTTPS percentage is higher.
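These shares can be recomputed for any candidate mix. The sketch below shows the arithmetic; the two entries are illustrative placeholders, not the full NetSecOPEN mix.

```python
# Minimal sketch: recompute the HTTPS share of a traffic mix by bytes and
# by transactions. The two entries are placeholders for the full mix.

mix = [
    # (scheme, bytes, transactions)
    ("https", 7_000_000, 58),
    ("http",  3_000_000, 42),
]

def https_share(mix, field):
    """Return the HTTPS percentage of the mix for the given field."""
    idx = {"bytes": 1, "transactions": 2}[field]
    total = sum(entry[idx] for entry in mix)
    https = sum(entry[idx] for entry in mix if entry[0] == "https")
    return 100.0 * https / total

by_bytes = https_share(mix, "bytes")                # 70.0 for this example
by_transactions = https_share(mix, "transactions")  # 58.0 for this example
```

Counting by connections instead requires a per-FQDN connection model, which is why the single- and multiple-connection figures above differ.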
Table 8 is a summary of the NetSecOPEN enterprise perimeter traffic mix, sorted by bytes, with unique FQDNs and transactions per application.
+------------------+-------+--------------+-------------+
| Application | FQDNs | Transactions | Bytes |
+=======================================================+
| Office365 | 26 | 558 | 52,931,947 |
+------------------+-------+--------------+-------------+
| Box | 4 | 90 | 23,276,089 |
+------------------+-------+--------------+-------------+
| Salesforce | 6 | 365 | 23,137,548 |
+------------------+-------+--------------+-------------+
| Gmail | 13 | 139 | 16,399,289 |
+------------------+-------+--------------+-------------+
| Linkedin | 10 | 206 | 15,040,918 |
+------------------+-------+--------------+-------------+
| DailyMotion | 8 | 77 | 14,751,514 |
+------------------+-------+--------------+-------------+
| GoogleDocs | 2 | 71 | 14,205,476 |
+------------------+-------+--------------+-------------+
| Wikia | 15 | 159 | 13,909,777 |
+------------------+-------+--------------+-------------+
| Foxnews | 82 | 499 | 13,758,899 |
+------------------+-------+--------------+-------------+
| Yahoo Finance | 33 | 254 | 13,134,011 |
+------------------+-------+--------------+-------------+
| Youtube | 8 | 97 | 13,056,216 |
+------------------+-------+--------------+-------------+
| Facebook | 4 | 207 | 12,726,231 |
+------------------+-------+--------------+-------------+
| CNBC | 77 | 275 | 11,939,566 |
+------------------+-------+--------------+-------------+
| Lightreading | 27 | 304 | 11,200,864 |
+------------------+-------+--------------+-------------+
| BusinessInsider | 16 | 142 | 11,001,575 |
+------------------+-------+--------------+-------------+
| Alexa | 5 | 153 | 10,475,151 |
+------------------+-------+--------------+-------------+
| CNN | 41 | 206 | 10,423,740 |
+------------------+-------+--------------+-------------+
| Twitter Video | 2 | 72 | 10,112,820 |
+------------------+-------+--------------+-------------+
| Cisco Webex | 1 | 213 | 9,988,417 |
+------------------+-------+--------------+-------------+
| Slack | 3 | 40 | 9,938,686 |
+------------------+-------+--------------+-------------+
| Google Maps | 5 | 191 | 8,771,873 |
+------------------+-------+--------------+-------------+
| SpectrumIEEE | 7 | 145 | 8,682,629 |
+------------------+-------+--------------+-------------+
| Yelp | 9 | 146 | 8,607,645 |
+------------------+-------+--------------+-------------+
| Vimeo | 12 | 74 | 8,555,960 |
+------------------+-------+--------------+-------------+
| Wikihow | 11 | 140 | 8,042,314 |
+------------------+-------+--------------+-------------+
| Netflix | 3 | 31 | 7,839,256 |
+------------------+-------+--------------+-------------+
| Instagram | 3 | 114 | 7,230,883 |
+------------------+-------+--------------+-------------+
| Morningstar | 30 | 150 | 7,220,121 |
+------------------+-------+--------------+-------------+
| Docusign | 5 | 68 | 6,972,738 |
+------------------+-------+--------------+-------------+
| Twitter | 1 | 100 | 6,939,150 |
+------------------+-------+--------------+-------------+
| Tumblr | 11 | 70 | 6,877,200 |
+------------------+-------+--------------+-------------+
| Whatsapp | 3 | 46 | 6,829,848 |
+------------------+-------+--------------+-------------+
| Imdb | 16 | 251 | 6,505,227 |
+------------------+-------+--------------+-------------+
| NOAAgov | 1 | 44 | 6,316,283 |
+------------------+-------+--------------+-------------+
| IndustryWeek | 23 | 192 | 6,242,403 |
+------------------+-------+--------------+-------------+
| Spotify | 18 | 119 | 6,231,013 |
+------------------+-------+--------------+-------------+
| AutoNews | 16 | 165 | 6,115,354 |
+------------------+-------+--------------+-------------+
| Evernote | 3 | 47 | 6,063,168 |
+------------------+-------+--------------+-------------+
| NatGeo | 34 | 104 | 6,026,344 |
+------------------+-------+--------------+-------------+
| BBC News | 18 | 156 | 5,898,572 |
+------------------+-------+--------------+-------------+
| Investopedia | 38 | 241 | 5,792,038 |
+------------------+-------+--------------+-------------+
| Pinterest | 8 | 102 | 5,658,994 |
+------------------+-------+--------------+-------------+
| Succesfactors | 2 | 112 | 5,049,001 |
+------------------+-------+--------------+-------------+
| AbaJournal | 6 | 93 | 4,985,626 |
+------------------+-------+--------------+-------------+
| Pbworks | 4 | 78 | 4,670,980 |
+------------------+-------+--------------+-------------+
| NetworkWorld | 42 | 153 | 4,651,354 |
+------------------+-------+--------------+-------------+
| WebMD | 24 | 280 | 4,416,736 |
+------------------+-------+--------------+-------------+
| OilGasJournal | 14 | 105 | 4,095,255 |
+------------------+-------+--------------+-------------+
| Trello | 5 | 39 | 4,080,182 |
+------------------+-------+--------------+-------------+
| BusinessWire | 5 | 109 | 4,055,331 |
+------------------+-------+--------------+-------------+
| Dropbox | 5 | 17 | 4,023,469 |
+------------------+-------+--------------+-------------+
| Nejm | 20 | 190 | 4,003,657 |
+------------------+-------+--------------+-------------+
| OilGasDaily | 7 | 199 | 3,970,498 |
+------------------+-------+--------------+-------------+
| Chase | 6 | 52 | 3,719,232 |
+------------------+-------+--------------+-------------+
| MedicalNews | 6 | 117 | 3,634,187 |
+------------------+-------+--------------+-------------+
| Marketwatch | 25 | 142 | 3,291,226 |
+------------------+-------+--------------+-------------+
| Imgur | 5 | 48 | 3,189,919 |
+------------------+-------+--------------+-------------+
| NPR | 9 | 83 | 3,184,303 |
+------------------+-------+--------------+-------------+
| Onelogin | 2 | 31 | 3,132,707 |
+------------------+-------+--------------+-------------+
| Concur | 2 | 50 | 3,066,326 |
+------------------+-------+--------------+-------------+
| Service-now | 1 | 37 | 2,985,329 |
+------------------+-------+--------------+-------------+
| Apple itunes | 14 | 80 | 2,843,744 |
+------------------+-------+--------------+-------------+
| BerkeleyEdu | 3 | 69 | 2,622,009 |
+------------------+-------+--------------+-------------+
| MSN | 39 | 203 | 2,532,972 |
+------------------+-------+--------------+-------------+
| Indeed | 3 | 47 | 2,325,197 |
+------------------+-------+--------------+-------------+
| MayoClinic | 6 | 56 | 2,269,085 |
+------------------+-------+--------------+-------------+
| Ebay | 9 | 164 | 2,219,223 |
+------------------+-------+--------------+-------------+
| UCLAedu | 3 | 42 | 1,991,311 |
+------------------+-------+--------------+-------------+
| ConstructionDive | 5 | 125 | 1,828,428 |
+------------------+-------+--------------+-------------+
| EducationNews | 4 | 78 | 1,605,427 |
+------------------+-------+--------------+-------------+
| BofA | 12 | 68 | 1,584,851 |
+------------------+-------+--------------+-------------+
| ScienceDirect | 7 | 26 | 1,463,951 |
+------------------+-------+--------------+-------------+
| Reddit | 8 | 55 | 1,441,909 |
+------------------+-------+--------------+-------------+
| FoodBusinessNews | 5 | 49 | 1,378,298 |
+------------------+-------+--------------+-------------+
| Amex | 8 | 42 | 1,270,696 |
+------------------+-------+--------------+-------------+
| Weather | 4 | 50 | 1,243,826 |
+------------------+-------+--------------+-------------+
| Wikipedia | 3 | 27 | 958,935 |
+------------------+-------+--------------+-------------+
| Bing | 1 | 52 | 697,514 |
+------------------+-------+--------------+-------------+
| ADP | 1 | 30 | 508,654 |
+------------------+-------+--------------+-------------+
| Grand Total | 983 | 10021 | 569,819,095 |
+------------------+-------+--------------+-------------+
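When generating traffic from a mix like Table 8, per-application weights can be derived from the byte counts. The sketch below is an illustrative convention, not a method defined by this document; only three of the table's rows are shown.

```python
# Illustrative sketch: derive per-application traffic weights (by bytes)
# from a mix table like Table 8. Only three rows are shown as examples.

mix = {
    "Office365": 52_931_947,
    "Box": 23_276_089,
    "Salesforce": 23_137_548,
}

total_bytes = sum(mix.values())

# Each application's share of generated traffic is proportional to its
# byte count in the mix; the weights sum to 1.0.
weights = {app: b / total_bytes for app, b in mix.items()}
```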