OpenAdmin Tool (OAT) version 3.11 includes new Heat Map graphs to aid
in the performance analysis of your Informix database server. View
extent and buffer pool usage in a heat map, also known as a treemap graph.

Go to the new Performance Analysis > System Reports > Graphs page
to view graphs of the number of extents in databases or the percentage
of cached pages in the buffer pools. Use the drop-down controls at the
top of the page to choose the type of data to graph (extents or buffer
pool) and to filter by a particular database.

Extents Heat Map

Use the Extents graph to monitor the number of extents across your
databases. Monitoring extents is important because a large number of
extents can impact performance by requiring the database server to spend
extra time finding the data.

Use the size and color cues for each object to interpret the extents
graph. Each rectangle within a database represents an object, for
example a table or an index. The size of the rectangle indicates the
relative size of the object. The color indicates the number of extents
as indicated by the legend for the graph. Hover over any object in the
graph to see the number of extents and the number of total pages used.

If any table or index has a large number of extents, consider increasing the size of the extents to improve performance.
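As a sketch of that rule of thumb, the snippet below flags objects whose extent count is high. The object names, extent counts, and the threshold are all hypothetical; in practice these figures come from the database server itself (OAT gathers them for you).

```python
# Hypothetical (object name, extent count) pairs; OAT reads the real
# figures from the server. The threshold is an illustrative cutoff,
# not an Informix default.
EXTENT_THRESHOLD = 100

objects = [
    ("customer", 12),
    ("orders", 240),
    ("ix_orders_date", 135),
]

# Objects above the threshold are candidates for a larger extent size.
flagged = [name for name, extents in objects if extents > EXTENT_THRESHOLD]
print(flagged)  # ['orders', 'ix_orders_date']
```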

Screenshots 1 and 2 show examples of the Extents Heat Map in OAT.

Screenshot 1: Extents Heat Map for the sysadmin database

Screenshot 2: Extents Heat Map for all databases

Buffer Pool Heat Map

Use the Buffer Pool graph to monitor the cached pages in your buffer
pool. Both the size of the buffer pool and which table and index
pages are cached in it can have a significant impact on the
performance of queries against the database server.

Use the size and color cues for each object to interpret the buffer
pool graph. Each rectangle within a database represents an object, for
example a table or an index. The size of the rectangle indicates the
relative size of the object. The color indicates the percentage of
pages that are cached as indicated by the legend for the graph. Red
indicates a “hot” table or index that has 100% of its pages cached in
the buffer pool, whereas blue indicates a “cold” table or index that has
0% of its pages cached. Hover over any object in the graph to see the
total rows, total pages, buffered pages, and percentage of pages that
are cached for that object.

A higher percentage of cached pages can improve performance. Queries
that are run on tables with a high cache percentage are faster than
queries that are run on tables with a low cache percentage. If a table
has a low percentage of cached pages and queries are slow, consider
increasing the size of the buffer pool.
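The hover statistics map directly onto the color scale: the cached percentage is simply buffered pages divided by total pages. A small sketch, using hypothetical page counts:

```python
def cache_pct(buffered_pages, total_pages):
    """Percentage of an object's pages currently in the buffer pool."""
    return 100.0 * buffered_pages / total_pages if total_pages else 0.0

# Hypothetical objects: (buffered pages, total pages).
stats = {"customer": (500, 500), "orders": (120, 4000), "audit_log": (0, 900)}

for name, (buffered, total) in stats.items():
    pct = cache_pct(buffered, total)
    # Mirror the heat map legend: red = 100% cached, blue = 0% cached.
    label = "hot" if pct == 100.0 else "cold" if pct == 0.0 else "partly cached"
    print(f"{name}: {pct:.1f}% cached ({label})")
```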

Screenshots 3 and 4 show examples of the Buffer Pool Heat Map in OAT.

Screenshot 3: Buffer Pool Heat Map for the sysadmin database

Screenshot 4: Buffer Pool Heat Map for all databases

For more information about the OpenAdmin Tool and the new features in version 3.11, go to the OpenAdmin Tool webpage.

The fetch buffer size (FET_BUF_SIZE, FBS) setting applies to Informix
client-server communications that use the SQLI protocol. While it
can be set for connection types other than TCP (shared memory or
pipe), as we will see, it is only really useful for TCP connections.

When the server responds to a client
request for data, it arranges the data in a buffer of the size
requested by the client. The data is arranged by tuples, or rows.
When the buffer is full, or the server has come to the end of the
rows being fetched, the buffer is copied over the communication
channel to a like buffer within the client application process
memory. The client API (ODBC, JDBC, ESQL...) reads from the buffer,
and copies or transforms the data into application variables as the
application requests rows. When the client comes to the end of the
buffer, and there is still more data in the result set, the client
requests the server to send it another buffer of data.
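The request/refill cycle described above can be sketched as a small simulation. This models only the shape of the protocol, not any actual driver code; the buffer is counted in rows here rather than bytes for simplicity.

```python
def fetch_rows(result_set, fet_buf_rows):
    """Simulate SQLI buffered fetching: the 'server' fills one buffer
    of up to fet_buf_rows rows, 'sends' it, and the client drains it
    before requesting the next one."""
    round_trips = 0
    i = 0
    while i < len(result_set):
        buffer = result_set[i:i + fet_buf_rows]  # server fills a buffer
        round_trips += 1                         # one client/server exchange
        i += len(buffer)
        for row in buffer:                       # client drains the buffer
            yield row
    fetch_rows.last_round_trips = round_trips    # expose the cost

rows = list(fetch_rows(range(10), fet_buf_rows=4))
print(fetch_rows.last_round_trips)  # 3 buffers for 10 rows, 4 per buffer
```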

The fetch buffer size setting is intended to let the application
reduce the number of times the client API has to request another
buffer. This setting is a classic speed-versus-size tradeoff. Since
at least one buffer has to be sent, there is no advantage to setting
the buffer size larger than the typical, expected result set. Also,
each buffer is associated with one executing statement, so if many
statements are being processed concurrently, the buffer size can have
a significant impact on the application's memory footprint.

A simple way to think about how the fetch buffer size affects
application performance is in terms of the number of buffers per
result set multiplied by the inherent delay between the client and
server. If the result set is small, or the communication channel has
little to no delay (which is always the case with shared memory or
pipe connections), there is no significant benefit to larger buffer
sizes. What follows are the results of a series of tests in which the
result set size, fetch buffer size, and network delay are varied.
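That mental model can be written down as a rough cost formula: one round-trip delay per buffer, plus the raw wire time. The function and the bandwidth/delay figures below are illustrative assumptions, not measurements from the tests that follow.

```python
import math

def transfer_time(result_bytes, fet_buf_size, delay_s, bandwidth_bps):
    """Rough cost model: one round-trip delay per buffer requested,
    plus the time to push the bytes over the wire."""
    buffers = max(1, math.ceil(result_bytes / fet_buf_size))
    return buffers * delay_s + result_bytes * 8 / bandwidth_bps

# A 4 MB result over a 100 ms link: the large buffer needs 8 round
# trips instead of 800, so almost all of the delay cost disappears.
small = transfer_time(4_000_000, 5_000, 0.100, 100e6)    # 800 buffers
large = transfer_time(4_000_000, 500_000, 0.100, 100e6)  # 8 buffers
print(f"{small:.2f}s vs {large:.2f}s")
```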

These tests were written using the ODBC
interface, but the pattern of results should be the same across all
Informix APIs when using the SQLI protocol. The client and server
are on the same physical machine, with the client running under
Windows and the Informix server on a Linux virtual machine. The
network delay is controlled using Linux's network emulation (netem)
functionality. The delay is in milliseconds; it is roughly equivalent
to what the 'ping' command reports as the delay between the host and
the client.

The data do not actually exist in the
database; they are manufactured at the time of query execution
through the use of views created to multiply the base result set.
This configuration is not representative of any real-world scenario.
The actual performance of this set of queries is not important; what
matters is the performance of one condition relative to another. So,
the data have been converted to measurements of relative efficiency
by dividing each data point within a series by the maximum value
within that series. The most efficient setting tested has a relative
efficiency of 1.0, and the rest fall between 0.0 and 1.0.
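The normalization step is straightforward. As a sketch, with made-up raw transfer rates for one network-delay column:

```python
def relative_efficiency(series):
    """Divide each data point by the series maximum, so the most
    efficient setting scores 1.0 and the rest fall between 0 and 1."""
    best = max(series)
    return [rate / best for rate in series]

# Hypothetical raw transfer rates (e.g. KB/s) for three buffer sizes.
print(relative_efficiency([12.0, 48.0, 96.0]))  # [0.125, 0.5, 1.0]
```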

Empirical Results

The empirical results match predictions. As the size of the result
set increases, a larger fetch buffer does more to offset slow
communication between the client and the server. When sending large
results to the client, it is possible to compensate for a slow
network with a large fetch buffer. When the result set is small, or
the network has little delay between the sending and receiving of
packets, the fetch buffer size setting has little effect.

Example 1: Small result set size – 1K

Increasing the fetch buffer size beyond the size of the result set
has no meaningful effect, across all network conditions.

Table 1. Result set 1K data
Relative efficiency = cell data transfer rate / fastest transfer rate

Fetch Buffer Size | Delay 0 ms | Delay 5 ms | Delay 100 ms
5000              |    0.98    |    0.12    |     0.01
50000             |    0.99    |    0.12    |     0.01
500000            |    1.00    |    0.13    |     0.01

Example 2: Moderate result set size – 256K

When there is a negligible network delay, the size of the fetch
buffer has little effect. As the network delay increases, the effect
of increasing the fetch buffer size becomes more significant. When
the network delay is 100 ms, increasing the fetch buffer can improve
performance by more than a factor of six.

Table 2. Result set 256K data
Relative efficiency = cell data transfer rate / fastest transfer rate

Fetch Buffer Size | Delay 0 ms | Delay 5 ms | Delay 100 ms
5000              |    0.98    |    0.61    |     0.06
50000             |    1.00    |    0.89    |     0.26
500000            |    0.99    |    0.94    |     0.45

Example 3: Large result set size – 4MB

When working with large result sets over a slow network, setting a
large fetch buffer size can be crucial to performance. Notice that
the data transfer rate of the largest buffer over the slowest network
is only about 15% slower than the fastest condition, and 12 times
faster than the same transfer using a small buffer.

Table 3. Result set 4MB data
Relative efficiency = cell data transfer rate / fastest transfer rate

Fetch Buffer Size | Delay 0 ms | Delay 5 ms | Delay 100 ms
5000              |    0.98    |    0.63    |     0.07
50000             |    0.99    |    0.94    |     0.40
500000            |    1.00    |    0.98    |     0.85

The results above should provide a useful guide to an appropriate
fetch buffer size setting based on the actual queries being performed
and the network infrastructure in use. The main drawback to a high
fetch buffer size setting arises when the memory footprint of the
application starts to impinge on the memory requirements of other
processes.
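One way to fold those guidelines into a sizing rule is sketched below: never exceed the expected result size, and keep the total across concurrent statements under a memory budget. The helper and all its numbers are hypothetical illustrations, not a documented Informix formula.

```python
def suggest_fet_buf_size(expected_result_bytes, concurrent_stmts,
                         memory_budget_bytes):
    """Heuristic sketch: a buffer larger than the result buys nothing,
    and each concurrent statement holds its own buffer, so also cap
    the per-statement size by an overall memory budget."""
    per_stmt_cap = memory_budget_bytes // max(1, concurrent_stmts)
    return min(expected_result_bytes, per_stmt_cap)

# 4 MB results, 8 concurrent statements, 16 MB budget -> 2 MB buffers.
print(suggest_fet_buf_size(4_000_000, 8, 16_000_000))  # 2000000
```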