Column label selection: Filter by result label. A regular expression can be used, for example: .*Transaction.*
Before displaying the graph, click on the Apply filter button to refresh the internal data.

Yes

Title

Defines the graph's title, shown at the top of the chart. When empty, the default value "Aggregate Graph" is used. The Synchronize with name button sets the title to the label of the listener. Font settings for the graph title can also be defined here.

No

Graph size

Computes the graph size from the width and height of the current JMeter window. Use the Width and Height fields to define a custom size. The unit is pixels.

The test element supports the ThreadListener and TestListener methods. These should be defined in the initialisation file. See the file BeanShellListeners.bshrc for example definitions.
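For illustration, an initialisation file could define these methods along the following lines. This is a minimal sketch only: the method names follow the ThreadListener and TestListener interfaces, and print() is the standard BeanShell output command; refer to the BeanShellListeners.bshrc file shipped with JMeter for the actual example definitions.

    // Minimal sketch of ThreadListener methods (illustrative, not the shipped file)
    threadStarted() {
        print("Thread started");
    }
    threadFinished() {
        print("Thread finished");
    }

    // Minimal sketch of TestListener methods; the String variants are used
    // for remote (distributed) test runs
    testStarted() {
        print("Test started");
    }
    testStarted(String host) {
        print("Test started on " + host);
    }
    testEnded() {
        print("Test ended");
    }
    testEnded(String host) {
        print("Test ended on " + host);
    }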

Control Panel

Parameters

Attribute

Description

Required

Name

Descriptive name for this element that is shown in the tree. The name is stored in the script variable Label

No

Reset bsh.Interpreter before each call

If this option is selected, then the interpreter will be recreated for each sample. This may be necessary for some long-running scripts. For further information, see Best Practices - BeanShell scripting.

Yes

Parameters

Parameters to pass to the BeanShell script. The parameters are stored in the following variables:

Parameters - string containing the parameters as a single variable

bsh.args - String array containing parameters, split on white-space

No

Script file

A file containing the BeanShell script to run. The file name is stored in the script variable FileName

No

Script

The BeanShell script to run. The return value is ignored.

Yes (unless script file is provided)

Before invoking the script, some variables are set up in the BeanShell interpreter, including the Label, FileName, Parameters and bsh.args variables described above.
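As a minimal illustration, a listener script can make use of the Parameters and bsh.args variables documented above (log is assumed here to be one of the variables provided; treat anything beyond the names documented in this table as an assumption):

    // Assuming the Parameters field contains: alpha beta
    log.info("Parameter string: " + Parameters);   // prints "alpha beta"
    log.info("First argument: " + bsh.args[0]);    // prints "alpha"
    log.info("Second argument: " + bsh.args[1]);   // prints "beta"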

Distribution Graph MUST NOT BE USED during a load test as it consumes a lot of resources (memory and CPU). Use it only for functional testing or during Test Plan debugging and validation.

The distribution graph will display a bar for every unique response time. Since the granularity of System.currentTimeMillis() is 10 milliseconds, the 90% threshold should be within the width of the graph. The graph draws two threshold lines: 50% and 90%. This means that 50% of the response times finished between 0 and the 50% line, and likewise for the 90% line. Several tests with Tomcat were performed using 30 threads for 600K requests. The graph was able to display the distribution without any problems, and both the 50% and 90% lines were within the width of the graph. A performant application will generally produce results that clump together. A poorly written application that has memory leaks may result in wild fluctuations. In those situations, the threshold lines may be beyond the width of the graph. The recommended solution to this specific problem is to fix the webapp so it performs well. If your test plan produces distribution graphs with no apparent clumping or pattern, it may indicate a memory leak. The only way to know for sure is to use a profiling tool.

The Response Time Graph draws a line chart showing the evolution of response time during the test, for each labelled request. If many samples exist for the same timestamp, the mean value is displayed.
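The grouping behaviour can be pictured with the following illustrative Java sketch (this is not JMeter's implementation; the Sample type and the interval value are stand-ins): sample timestamps are bucketed by the configured interval and the mean elapsed time of each bucket is what gets plotted.

    import java.util.*;

    // Illustrative sketch: group elapsed times by interval and average each group.
    public class ResponseTimeBuckets {
        record Sample(long timestampMs, long elapsedMs) {} // stand-in for a sample result

        public static void main(String[] args) {
            long intervalMs = 10_000; // corresponds to the "Interval (ms)" setting
            List<Sample> samples = List.of(
                    new Sample(1_000, 120), new Sample(4_000, 180), new Sample(12_000, 90));

            Map<Long, List<Long>> buckets = new TreeMap<>();
            for (Sample s : samples) {
                long bucketStart = (s.timestampMs() / intervalMs) * intervalMs;
                buckets.computeIfAbsent(bucketStart, k -> new ArrayList<>()).add(s.elapsedMs());
            }
            buckets.forEach((start, times) -> {
                double mean = times.stream().mapToLong(Long::longValue).average().orElse(0);
                System.out.println("interval " + start + " ms -> mean " + mean + " ms");
            });
        }
    }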

Control Panel

The figure below shows an example of settings to draw this graph.

Response time graph settings

Please note: these parameters are not saved in the JMeter JMX file.

Parameters

Attribute

Description

Required

Interval (ms)

The X-axis interval in milliseconds. Samples are grouped according to this value. Before displaying the graph, click on the Apply interval button to refresh the internal data.

Yes

Sampler label selection

Filter by result label. A regular expression can be used, e.g. .*Transaction.*. Before displaying the graph, click on the Apply filter button to refresh the internal data.

No

Title

Defines the graph's title, shown at the top of the chart. When empty, the default value "Response Time Graph" is used. The Synchronize with name button sets the title to the label of the listener. Font settings for the graph title can also be defined here.

No

Line settings

Defines the width of the line and the type of each value point. Choose none for a line without marks.

Yes

Graph size

Computes the graph size from the width and height of the current JMeter window. Use the Width and Height fields to define a custom size. The unit is pixels.

This listener can record results to a file but not to the UI. It is meant to provide an efficient means of recording data by eliminating GUI overhead. When running in non-GUI mode, the -l flag can be used to create a data file. The fields to save are defined
by JMeter properties. See the jmeter.properties file for details.
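For example, a non-GUI run that writes results to a data file can be started like this (the file names are placeholders):

    jmeter -n -t testplan.jmx -l results.jtl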

Spline Visualizer MUST NOT BE USED during a load test as it consumes a lot of resources (memory and CPU). Use it only for functional testing or during Test Plan debugging and validation.

The Spline Visualizer provides a view of all sample times from the start of the test till the end, regardless of how many samples have been taken. The spline has 10 points, each representing 10% of the samples, and connected using spline logic to show a
single continuous line.

The graph is automatically scaled to fit within the window. This needs to be borne in mind when comparing graphs.

The summary report creates a table row for each differently named request in your test. This is similar to the Aggregate Report, except that it uses less memory.

The throughput is calculated from the point of view of the sampler target (e.g. the remote server in the case of HTTP samples). JMeter takes into account the total time over which the requests have been generated. If other samplers and timers are in the same thread, these will increase the total time and therefore reduce the throughput value. So two identical samplers with different names will have half the throughput of two samplers with the same name. It is important to choose the sampler labels correctly to get the best results from the Report.
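As an illustration of that calculation (a rough sketch, not JMeter's code), the throughput is the number of samples divided by the time span from the start of the first sample to the end of the last one; the figures below reuse the "806 in 91.6s" example shown later for the summariser.

    // Illustrative sketch: throughput = samples / total time over which they were generated.
    public class ThroughputSketch {
        public static void main(String[] args) {
            long firstSampleStartMs = 0;       // start of the earliest sample
            long lastSampleEndMs = 91_600;     // end of the latest sample
            long sampleCount = 806;

            double seconds = (lastSampleEndMs - firstSampleStartMs) / 1000.0;
            System.out.printf("%.1f requests/second%n", sampleCount / seconds); // ~8.8/s
        }
    }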

Label - The label of the sample. If "Include group name in label?" is selected, then the name of the thread group is added as a prefix. This allows identical labels from different thread groups to be collated separately if required.

Throughput - the throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.

Kb/sec - The throughput measured in Kilobytes per second

Avg. Bytes - average size of the sample response in bytes. (in JMeter 2.2 it wrongly showed the value in kB)

Times are in milliseconds.

Control Panel

The figure below shows an example of selecting the "Include group name" checkbox.

This test element can be placed anywhere in the test plan. For each sample in its scope, it will create a file of the response data. The primary use for this is in creating functional tests, but it can also be useful where the response is too large to be displayed in the View Results Tree Listener. The file name is created from the specified prefix, plus a number (unless this is disabled, see below). The file extension is created from the document type, if known. If not known, the file extension is set to 'unknown'. If numbering is disabled and adding a suffix is disabled, then the file prefix is taken as the entire file name. This allows a fixed file name to be generated if required. The generated file name is stored in the sample response, and can be saved in the test log output file if required.

The current sample is saved first, followed by any sub-samples (child samples). If a variable name is provided, then the names of the files are saved in the order that the sub-samples appear. See below.
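The way the file name is put together can be sketched roughly as follows (illustrative Java only; buildName and its arguments are hypothetical stand-ins for the element's internal logic described above):

    // Illustrative sketch of prefix + optional number + optional ".extension".
    public class SaveResponseNameSketch {
        static String buildName(String prefix, Integer number, String extension, boolean addSuffix) {
            StringBuilder name = new StringBuilder(prefix);
            if (number != null) {
                name.append(number);            // skipped when "Don't add number to prefix" is set
            }
            if (addSuffix) {
                // extension comes from the document type; 'unknown' if it cannot be determined
                name.append('.').append(extension != null ? extension : "unknown");
            }
            return name.toString();
        }

        public static void main(String[] args) {
            System.out.println(buildName("response", 1, "html", true));  // response1.html
            System.out.println(buildName("report", null, null, false));  // report (fixed name)
        }
    }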

Control Panel

Parameters

Attribute

Description

Required

Name

Descriptive name for this element that is shown in the tree.

No

Filename Prefix

Prefix for the generated file names; this can include a directory name. Relative paths are resolved relative to the current working directory (which defaults to the bin/ directory). Versions of JMeter after 2.4 also support paths relative to the directory
containing the current test plan (JMX file). If the path name begins with "~/" (or whatever is in the jmeter.save.saveservice.base_prefix JMeter property), then the path is assumed to be relative to the JMX file location.

Yes

Variable Name

Name of a variable in which to save the generated file name (so it can be used later in the test plan). If there are sub-samples then a numeric suffix is added to the variable name. E.g. if the variable name is FILENAME, then the parent sample file name
is saved in the variable FILENAME, and the filenames for the child samplers are saved in FILENAME1, FILENAME2 etc.

No

Save Failed Responses only

If selected, then only failed responses are saved

Yes

Save Successful Responses only

If selected, then only successful responses are saved

Yes

Don't add number to prefix

If selected, then no number is added to the prefix. If you select this option, make sure that the prefix is unique or the file may be overwritten.

Yes

Don't add suffix

If selected, then no suffix is added. If you select this option, make sure that the prefix is unique or the file may be overwritten.

Graph Results MUST NOT BE USED during a load test as it consumes a lot of resources (memory and CPU). Use it only for functional testing or during Test Plan debugging and validation.

The Graph Results listener generates a simple graph that plots all sample times. Along the bottom of the graph, the current sample (black), the current average of all samples (blue), the current standard deviation (red), and the current throughput rate (green) are displayed in milliseconds.

The throughput number represents the actual number of requests/minute the server handled. This calculation includes any delays you added to your test and JMeter's own internal processing time. The advantage of doing the calculation like this is that this
number represents something real - your server in fact handled that many requests per minute, and you can increase the number of threads and/or decrease the delays to discover your server's maximum throughput. Whereas if you made calculations that factored
out delays and JMeter's processing, it would be unclear what you could conclude from that number.

Control Panel

The following table briefly describes the items on the graph. Further details on the precise meaning of the statistical terms can be found on the web - e.g. Wikipedia - or by consulting a book on statistics.

This test element can be placed anywhere in the test plan. It generates a summary of the test run so far to the log file and/or standard output. Both running and differential totals are shown. Output is generated every n seconds (3 minutes by default) on the appropriate time boundary, so that multiple test runs started at the same time will be synchronised. See the jmeter.properties file for the summariser configuration items:
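The summariser entries in jmeter.properties typically look like the following (the property names are the stock ones; the values shown here are illustrative and defaults may differ between JMeter versions):

    # Prefix matched against the name of Generate Summary Results elements
    summariser.name=summary
    # Reporting interval in seconds (3 minutes here)
    summariser.interval=180
    # Write summaries to the jmeter log file
    summariser.log=true
    # Write summaries to standard output
    summariser.out=true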

The "label" is the the name of the element. The "+" means that the line is a delta line, i.e. shows the changes since the last output. The "=" means that the line is a totals line, i.e. it shows the running total. Entries in the jmeter log file also include
time-stamps. The example "806 in 91.6s = 8.8/s" means that there were 806 samples recorded in 91.6 seconds, and that works out at 8.8 samples per second. The Avg (Average), Min(imum) and Max(imum) times are in milliseconds. "Err" means number of errors (also
shown as percentage). The last two lines will appear at the end of a test. They will not be synchronised to the appropriate time boundary. Note that the initial and final deltas may be for less than the interval (in the example above this is 30 seconds). The
first delta will generally be lower, as JMeter synchronises to the interval boundary. The last delta will be lower, as the test will generally not finish on an exact interval boundary.

The label is used to group sample results together. So if you have multiple Thread Groups and want to summarize across them all, then use the same label - or add the summariser to the Test Plan (so all thread groups are in scope). Different summary groupings
can be implemented by using suitable labels and adding the summarisers to appropriate parts of the test plan.

Control Panel

Parameters

Attribute

Description

Required

Name

Descriptive name for this element that is shown in the tree. It appears as the "label" in the output. Details for all elements with the same label will be added together.

This visualizer creates a row for every sample result. Like the View Results Tree, this visualizer uses a lot of memory.

By default, it only displays the main (parent) samples; it does not display the sub-samples (child samples). Versions of JMeter after 2.5.1 have a "Child Samples?" check-box. If this is selected, then the sub-samples are displayed instead of the main samples.

Monitor Results is a new Visualizer for displaying server status. It is designed for Tomcat 5, but any servlet container can port the status servlet and use this monitor. There are two primary tabs for the monitor. The first is the "Health" tab, which shows the status of one or more servers. The second tab, labelled "Performance", shows the performance of one server for the last 1000 samples. The equations used for the load calculation are included in the Visualizer.

Currently, the primary limitation of the monitor is system memory. A quick benchmark of memory usage indicates that a buffer of 1000 data points for 100 servers would take roughly 10 MB of RAM. On a 1.4 GHz Centrino laptop with 1 GB of RAM, the monitor should be able to handle several hundred servers.

As a general rule, when monitoring production systems take care to set an appropriate interval. Intervals shorter than 5 seconds are too aggressive and have the potential to impact the server. With a buffer of 1000 data points at 5-second intervals, the monitor would check the server status 12 times a minute, or 720 times an hour. This means the buffer shows the performance history of each machine for the last hour.

The monitor requires Tomcat 5 or above. Use a browser to check that you can access the Tomcat status servlet OK.

The aggregate report creates a table row for each differently named request in your test. For each request, it totals the response information and provides the request count, min, max, average, error rate, approximate throughput (requests/second) and Kilobytes per second throughput. Once the test is done, the throughput is the actual throughput for the duration of the entire test.

The throughput is calculated from the point of view of the sampler target (e.g. the remote server in the case of HTTP samples). JMeter takes into account the total time over which the requests have been generated. If other samplers and timers are in the same thread, these will increase the total time and therefore reduce the throughput value. So two identical samplers with different names will have half the throughput of two samplers with the same name. It is important to choose the sampler names correctly to get the best results from the Aggregate Report.

Calculation of the Median and 90% Line (90th percentile) values requires additional memory. For JMeter 2.3.4 and earlier, details of each sample were saved separately, which meant a lot of memory was needed. JMeter now combines samples with the same elapsed time, so far less memory is used. However, for samples that take more than a few seconds, the probability is that fewer samples will have identical times, in which case more memory will be needed. See the Summary Report for a similar Listener that does not store individual samples and so needs constant memory.
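The memory optimisation described above can be illustrated with the following Java sketch (not JMeter's implementation): instead of storing every sample, a count is kept per distinct elapsed time, and the median or 90% Line is found by walking the counts in order.

    import java.util.TreeMap;

    // Illustrative sketch: percentiles from a frequency map of elapsed times.
    public class PercentileSketch {
        public static void main(String[] args) {
            long[] elapsedTimes = {120, 120, 130, 130, 130, 250, 400, 400, 900, 1500};
            TreeMap<Long, Long> countPerElapsed = new TreeMap<>(); // elapsed ms -> occurrences
            for (long t : elapsedTimes) {
                countPerElapsed.merge(t, 1L, Long::sum);
            }
            System.out.println("Median   = " + percentile(countPerElapsed, elapsedTimes.length, 50));
            System.out.println("90% Line = " + percentile(countPerElapsed, elapsedTimes.length, 90));
        }

        // Walk the sorted counts until the requested fraction of samples is covered.
        static long percentile(TreeMap<Long, Long> counts, long total, int pct) {
            long needed = (long) Math.ceil(total * pct / 100.0);
            long seen = 0;
            for (var e : counts.entrySet()) {
                seen += e.getValue();
                if (seen >= needed) {
                    return e.getKey();
                }
            }
            throw new IllegalStateException("no samples");
        }
    }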

Label - The label of the sample. If "Include group name in label?" is selected, then the name of the thread group is added as a prefix. This allows identical labels from different thread groups to be collated separately if required.

# Samples - The number of samples with the same label

Average - The average time of a set of results

Median - The median is the time in the middle of a set of results. 50% of the samples took no more than this time; the remainder took at least as long.

90% Line - 90% of the samples took no more than this time. The remaining samples took at least as long as this. (90th percentile)

Min - The shortest time for the samples with the same label

Max - The longest time for the samples with the same label

Error % - Percent of requests with errors

Throughput - the throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.

Kb/sec - The throughput measured in Kilobytes per second

Times are in milliseconds.

Control Panel

The figure below shows an example of selecting the "Include group name" checkbox.

The mailer visualizer can be set up to send email if a test run receives too many failed responses from the server.

Control Panel

Parameters

Attribute

Description

Required

Name

Descriptive name for this element that is shown in the tree.

No

From

Email address to send messages from.

Yes

Addressee(s)

Email address to send messages to, comma-separated.

Yes

Success Subject

Email subject line for success messages.

No

Success Limit

Once this number of successful responses is exceeded after previously reaching the failure limit, a success email is sent. The mailer will thus only send out messages in a sequence of failed-succeeded-failed-succeeded, etc. (see the sketch after this table).

Yes

Failure Subject

Email subject line for fail messages.

No

Failure Limit

Once this number of failed responses is exceeded, a failure email is sent - i.e. set the count to 0 to send an e-mail on the first failure.

Yes

Host

IP address or host name of the SMTP server (email redirector).

No

Port

Port of SMTP server (defaults to 25).

No

Login

Login used to authenticate.

No

Password

Password used to authenticate.

No

Connection security

Type of encryption for SMTP authentication (SSL, TLS or none).

No

Test Mail

Press this button to send a test mail

No

Failures

A field that keeps a running total of the number of failures received so far.
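The alternation between failure and success mails described for the Failure Limit and Success Limit fields can be sketched as follows (illustrative Java only, not the mailer's actual code; the class and field names here are hypothetical):

    // Illustrative sketch of the failed/succeeded alternation described above.
    public class MailerThresholdSketch {
        private final int failureLimit;
        private final int successLimit;
        private int failures = 0;
        private int successes = 0;
        private boolean failureMailSent = false; // a success mail is only sent after a failure mail

        MailerThresholdSketch(int failureLimit, int successLimit) {
            this.failureLimit = failureLimit;
            this.successLimit = successLimit;
        }

        void record(boolean success) {
            if (success) {
                successes++;
                if (failureMailSent && successes > successLimit) {
                    System.out.println("send success mail");
                    failureMailSent = false;
                    failures = 0;
                    successes = 0;
                }
            } else {
                failures++;
                if (!failureMailSent && failures > failureLimit) {
                    System.out.println("send failure mail");
                    failureMailSent = true;
                    failures = 0;
                    successes = 0;
                }
            }
        }

        public static void main(String[] args) {
            MailerThresholdSketch m = new MailerThresholdSketch(0, 1); // limit 0 => mail on first failure
            m.record(false); // first failure exceeds the failure limit -> failure mail
            m.record(true);  // one success: not yet above the success limit
            m.record(true);  // second success exceeds the success limit -> success mail
        }
    }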