TT Gateways — General

Ensure TTClean is enabled in TTChron on all TT Gateways so that
log files are deleted on a regular basis.

Ensure the Contracts-Per-Message parameter in Aconfig
is set to 50 (instead of 4) on all TT Gateways and MPF TT Gateways.

Please note: Trading Technologies will provide
"best effort" support to virtualized deployments of TT applications.
Under best effort support, TT’s Customer Support Center (CSC) will
make their best effort to troubleshoot cases in standard fashion
unless the issue is deemed a virtualization technology-specific
issue, at which point customers must contact the virtualization
vendor directly for assistance.

In general, customers may
experience some performance degradation when running an application
on virtual systems. You (the customer) must determine how virtualization affects
performance in your particular deployment and make any necessary
adjustments to the hardware and configuration. At a minimum, customers
should allocate (reserve) virtual machine resources in line with
the recommended hardware requirements for TT applications as specified
in Server-Class Machine Requirements.

TT Gateways — Market Data Traffic

During trading
sessions with high volumes, the price feed from several TT Gateways
can cause substantial network load and noticeably impact performance
on X_TRADER® workstations. There are two TT-specific options to
reduce the amount of network traffic sent: you may enable
price coalescing on TT Gateways or set up an MPF2 (Market Price
Feed) trading environment.

Price Coalescing

When
a TT Gateway receives a price update for a specific product, it
stores that price until a specified time interval has passed. If
the TT Gateway receives another price update for the same product
in the meantime, the TT Gateway will overwrite the stored price
with that new price. The TT Gateway will send out the latest price
it has stored when the time specified has passed.

Price Coalescing is enabled in the Aconfig Utility under Core\Server\Exchange-Specific\Exchange-Flavor\Market-Depth.
The Interval-mSecs setting determines the amount of time
between these price feed broadcasts.

Based on the trading environment, a decision should be made
with your local TAM as to what the Interval-mSecs setting
should be set to (e.g., 50 ms). The higher the setting, the more
coalescing.

Note: When using price coalescing,
not all price updates are sent out; only the latest price update for
each product is sent.
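The coalescing behavior described above can be sketched in Python. This is a simplified illustration under names of our own choosing, not TT's implementation:

```python
import time

class PriceCoalescer:
    """Stores only the latest price per product and broadcasts on an interval.

    Illustrative sketch of the coalescing behavior described above;
    the class and method names are invented for this example.
    """

    def __init__(self, interval_msecs=50, send=print):
        self.interval = interval_msecs / 1000.0  # Interval-mSecs, in seconds
        self.send = send                         # callback that broadcasts a price
        self.latest = {}                         # product -> most recent price
        self.last_flush = time.monotonic()

    def on_price_update(self, product, price):
        # A newer update for the same product overwrites the stored price.
        self.latest[product] = price
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        # Broadcast only the latest stored price for each product.
        for product, price in self.latest.items():
            self.send(product, price)
        self.latest.clear()
        self.last_flush = time.monotonic()
```

A higher interval means more updates are overwritten before each broadcast, i.e. more coalescing, which is why the setting is chosen per trading environment.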

Note: Do not coalesce
TT Eurex or Xetra Gateways (as market data is already coalesced
by the exchange); however, you may coalesce TT EurexPF Gateways.

MPF Environment

You can configure your TT trading environment
to decouple (i.e., off load) much of the price functionality from
several TT Gateways of a particular market onto a separate TT server.

Using MPF2, you can configure one TT Gateway to provide prices
to client applications and then configure additional TT Gateways
to provide clients with order and fill connectivity. TT Gateways
that supply only order and fill data must host a server component called
the Price Proxy.


Price Proxy servers should also have the Contracts-Per-Message
parameter set to 50. This is found in the priceproxy.ini file.

Note:
For more information on setting up an MPF2 environment, contact
your local TAM.

Globex (CME, CBOT)

Adhere to the following best
practices when installing Globex Gateways:

TT Globex
Gateways should have two or more NICs with one pointing to the internal
network and one or more pointing to the CME price distribution feed.

Point MDP to a limited set of channels (e.g., the Equity Futures channel).

TT Gateways allow a maximum of 10 channels per TT Gateway.

It is recommended to have separate TT Gateways for specific
channel groups. For example, a customer could install a TT CME Gateway
dedicated to just NYMEX products (channels) or one dedicated to
just EOS products.

Monitor the number of iLink connections per TT CME Gateway for
degradation of performance. Consult with your TAM to determine an
optimal number of iLink sessions per TT Gateway.

ICE

Adhere to the following best practices when
installing ICE Gateways:

Use a direct line to connect
to ICE due to security and performance issues with the Internet.

If a customer chooses to use an Internet connection, it is recommended
that SSL encryption is used. When using the Internet, install and
configure an SSL wrapper such as stunnel.

Note: When using
direct lines, TT does not recommend using SSL encryption as this slightly
decreases performance.

FMDS

FMDS servers require a 300 GB or larger hard
drive.

FIX Adapter

FIX Adapter servers require 60 GB or
more of available space, per FIX Adapter instance.

Note: TcpNoDelay must
be set to True if Accumulation is to be enabled. If TcpNoDelay is
set to False, Nagling is used and Accumulation is disabled,
even if the accumulation parameters are configured.

TT WAN Router Registry

Add the following DWORDs
(and values) to the Registry in order to allow for faster resource
recovery:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

SynAttackProtect = 00000001

TcpTimedWaitDelay = 1e

Note: The value 1e represents
30 seconds in hexadecimal.
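As a quick sanity check on the hexadecimal conversion (plain Python arithmetic, nothing TT-specific):

```python
# TcpTimedWaitDelay is entered in hexadecimal: 0x1e is 30 decimal (seconds).
hex_value = "1e"
seconds = int(hex_value, 16)
print(seconds)  # 30
```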

Delete the TCPWindowSize setting from the Registry (if
it exists), located at: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Note: TCPWindowSize should
be deleted from the Registry on TT WAN Routers only. The TTMRD.cfg file
manages this via the TCP_Window_Size parameter entry,
which lets TT manage the connection at the neighbor level. If
you leave the setting in the Registry, it will override the TTMRD.cfg file.

Within the TTMRD.cfg file, add the TCP_Window_Size parameter
for each neighbor if round-trip times are delayed
on small circuits or if a circuit runs overseas.
To determine the best setting for TCP_Window_Size, follow
the steps below to compute the needed value on every TT WAN Router:

1. Determine the busiest time of day, when the most market data
will be subscribed to over the data line between WAN Routers.

2. Run a continuous ping (using the "-t" switch) from one of the TT WAN Routers to its
neighbor on the other side of the link during
this busy time, for at least 30 minutes.

3. Press "Ctrl-C" to stop the continuous ping. Note the average
response time.

4. Plug the average response time in seconds into the following
equation: Bandwidth (in bits per second) * Average
latency (in seconds) / 8 (to convert bits to bytes) = TCPWindowSize
to be used (in bytes).

Example: If the average response
time is 90 ms (0.090 seconds) and the data line is a T1 (1,544,000 bits/sec),
the calculation would be as follows: 0.090 * 1,544,000 / 8 = 17,370
bytes ≈ 17 KB.

If the calculation is < 64K, TT recommends
leaving the TT default setting of 64K.

If the calculation is > 64K, TT recommends rounding to the nearest
standard power-of-two increment (e.g., 64, 128, 256, 512 KB,
etc.). Also create the additional REQUIRED registry setting at: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

Create a new DWORD value "TCP1323Opts" (no quotes) and set the
value to "1" (no quotes). This is necessary to enable TCP window scaling
so that windows above 64K can be advertised. Be aware that the maximum setting
for TCPWindowSize is 1 GB.

Note: Both the TCPWindowSize and TCP1323Opts registry
entries are located in the same hive in the Registry: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

X_TRADER Remote Hosts

Keep the following in mind
when setting up X_TRADER® Remote:

TT does not recommend
setting up X_TRADER® Remote client machines to connect directly
to TT WAN Routers or TT Gateways.

Do not use Satellite Broadband due to high latency.

Do not connect a remote workstation to a proxy server. Use a
NAT device instead or connect directly to the Remote Host machine
through the Internet or a direct line.

Do not install Remote Hosts on VMware images.

X_TRADER Remote Host Config File

Compression:
Configure X_TRADER® Remote Hosts to use compression by adding the
following change to the <General> section of the ttmd.cfg file, as shown below.

<General>

# Logging type: StdErr, File, both, none
LoggingType = File

# Tracing level: normal, trace1, trace2, trace3, trace4
TracingLevel = normal

# Number of days log files will be kept
LogFileHistory = 10

# Request port
RequestPort = 10200

# If true, only local communications (on the same box) are allowed
local = false

# Nagling on
TcpNoDelay = false

# Compression (this line must be manually added to the ttmd.cfg file on all Remote Hosts)
compression_level = 3

<MulticastGroups>
> = 239.255.7.9
</MulticastGroups>

</General>

Note: At the X_TRADER® client level, make sure
the user selects the check box next to Enable Compression within
the Daemon Setup window. After making the change, restart Guardian
and TTM.

Accumulation and Nagling: Both Accumulation
and Nagling are used to conserve bandwidth and are disabled by default.
Accumulation and Nagling cannot be run simultaneously, and neither is
required.

Nagling: Enable Nagling by setting TcpNoDelay
= false within the <General> section of the ttmd.cfg file.

Accumulation: Alternatively, you could enable Accumulation
(instead of nagling) which is supported with TTM version 2.1.1 and
above.

Set accumulator_timeout = [value between 0
and 5000000 microseconds] per service within the <LocalServices>
section of the ttmd.cfg file.

Set accumulator_mtu = [value between 0 and 64000 bytes]
per service within the <LocalServices> section of the ttmd.cfg file.

Set TcpNoDelay = true within the <General> section
of the ttmd.cfg file.
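For context, the TcpNoDelay flag corresponds to the standard TCP_NODELAY socket option, which controls Nagle's algorithm at the operating-system level. A minimal sketch (the helper name is ours, not part of any TT product):

```python
import socket

def set_tcp_no_delay(sock, enabled):
    """TcpNoDelay = true disables Nagling (TCP_NODELAY on);
    TcpNoDelay = false leaves Nagling enabled (TCP_NODELAY off)."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1 if enabled else 0)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
set_tcp_no_delay(sock, True)   # equivalent of TcpNoDelay = true
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
sock.close()
```

With Nagling on, small writes are buffered until an ACK arrives, conserving bandwidth at the cost of latency; that trade-off is why Accumulation (which requires TcpNoDelay = true) is offered as the alternative.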
