This chapter describes advanced nGrinder controller configuration. You may not need to read this guide if you are not a system admin. However, if you want to run nGrinder as a cloud-based service or tune its behavior, you should read this chapter.

${NGRINDER_HOME} - nGrinder Controller Home

If you start Tomcat with "catalina.sh start" or "startup.sh", nGrinder creates the ${user.home}/.ngrinder directory in the user's home directory after it starts up successfully. This directory contains the configuration files and data. The following are the default .ngrinder locations.

Windows 7 : C:\Users\<username>\.ngrinder

Unix/Linux : ${user.home}/.ngrinder

However, if you want to assign a different directory for this, set the environment variable NGRINDER_HOME before running Tomcat.
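For example, on Unix/Linux you could export the variable before starting Tomcat (the path below is just an example; pick any writable directory):

```shell
# Use a custom nGrinder home directory (example path, not a requirement)
export NGRINDER_HOME=/tmp/my_ngrinder_home
mkdir -p "$NGRINDER_HOME"
# Then start Tomcat as usual so the controller picks the variable up, e.g.:
#   $CATALINA_HOME/bin/startup.sh
echo "nGrinder home: $NGRINDER_HOME"
```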

Advanced Configuration

${NGRINDER_HOME} contains the following files and directories.

database.conf – This contains the database configuration. You can modify this file when you need to use another DB. The default DB nGrinder uses is H2.

database=H2
database_username=admin
database_password=admin

NOTE: If you want to use CUBRID as the database, you need to add some configuration in addition.

NOTE: Currently, only two types of database are supported: H2 and CUBRID. If you want to use another DB such as MySQL, please modify the org.ngrinder.infra.config.Database class to add a new one, or make an official request to us.
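For illustration only, a database.conf pointing at CUBRID could look like the following. The database_url property name and its host:port:db_name format are assumptions based on typical CUBRID JDBC settings, so check the documentation for your nGrinder version:

```properties
database=cubrid
# host:port:db_name of the CUBRID broker (illustrative values)
database_url=db.example.com:33000:ngrinder
database_username=admin
database_password=admin
```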

system.conf – This file contains the general policies and configuration settings for the nGrinder controller. You can modify these settings to tune the nGrinder controller's behavior.

Each option below is listed with its name, the version it has been available since, its default value, and a description.

verbose (since 3.0, default: false) – Set true to print more detailed logs.

usage.report (since 3.2, default: true) – Set false if you don't want to report nGrinder usage to Google Analytics.

security (since 3.0.1, default: false) – Specifies whether the nGrinder controller uses security mode. When security mode is activated, each test script can access only a limited range of targets. Please refer to Script Security.

agent.max.size (since 3.0, default: 10) – The maximum number of agents attached to one test. This is useful when you want nGrinder to be shared by more users. With this option, each test can use only a limited number of agents. For example, if your nGrinder instance has 15 agents in total and you set this field to 5, you can guarantee that 3 users can run performance tests concurrently.

agent.max.vuser (since 3.0, default: 1000) – The maximum number of vusers which can be created per agent. If your agent machine's spec is good enough, you can increase this.

agent.max.runcount (since 3.0, default: 10000) – The maximum run count of a test per agent.

agent.max.hour (since 3.0, default: 8) – The maximum running hours of each test.

ngrinder.console.portbase (since 3.0, default: 12000) – The starting port number of the consoles which will be mapped to each test. You need to restart nGrinder to apply this configuration.

ngrinder.validation.timeout (since 3.2.3, default: 100) – The script validation timeout, in seconds.

ngrinder.agent.control.port (since 3.2.3, default: 16001) – The port to which each agent connects.

ngrinder.max.waitingmilliseconds (since 3.0, default: 5000) – How many milliseconds the console will wait until all agents are connected.

By default, nGrinder uses SHA-1 to encode passwords. If you need SHA-256, set this option to true. You need to reinstall nGrinder to apply this configuration.

ngrinder.dist.logback (since 3.1.1, default: true) – To be compatible with old agents (before 3.1.1), set this to true. If you use the latest version of the agents, set it to false.

ngrinder.dist.safe (since 3.1.1, default: false) – From 3.1.1, nGrinder doesn't check the file distribution result, to speed up test execution. If your agents are located far away or you distribute big files every day, you'd better change this to true.

ngrinder.dist.safe.region (since 3.1.1, default: false) – If some region has a slow connection speed, set this to true.

ngrinder.dist.safe.threashhold (since 3.1.1, default: 1000000) – The safe-distribution threshold, which enables safe distribution by force for a given transfer size.

The following settings are related to clustering and should be set very carefully.
You can refer to http://www.cubrid.org/wiki_ngrinder/entry/controller-clustering-guide

ngrinder.cluster.mode (since 3.1, default: false) – Set true if you want to enable nGrinder controller clustering mode. You need to restart nGrinder to apply this configuration.

ngrinder.cluster.uris (since 3.1, default: none) – All controller IPs which belong to the cluster. You need to restart nGrinder to apply this configuration.

ngrinder.cluster.listener.port (since 3.1, default: 40003) – Communication port for cache synchronization in the cluster. You need to restart nGrinder to apply this configuration.
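Putting several of the options above together, a system.conf fragment could look like this (the values are illustrative, not recommendations):

```properties
# system.conf – illustrative values only
verbose=false
usage.report=true
agent.max.size=10
agent.max.vuser=2000
agent.max.hour=8
ngrinder.console.portbase=12000
```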

process_and_thread_policy.js – This file defines the logic which determines the number of processes and threads for a given vuser count. It provides the flexibility to configure the default process and thread allocation scheme. Users usually don't know which process and thread combination provides the best performance, so nGrinder lets a user just input the expected vusers per agent and calculates the process and thread counts automatically. The default content is like the following.
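The script shipped with your version may differ; as a rough sketch of what such a policy can look like (the function names getProcessCount/getThreadCount follow the policy script convention, but the thresholds below are assumptions, not the verified default):

```javascript
// Sketch of a process/thread policy: split the requested vuser count
// per agent into a process count and a per-process thread count.
function getProcessCount(total) {
    // one process is enough for very small tests
    if (total < 2) {
        return 1;
    }
    // otherwise scale the process count with the load
    return total < 80 ? 2 : Math.floor(total / 40) + 1;
}

function getThreadCount(total) {
    // divide the vusers across the processes, rounding up so that
    // processes * threads always covers the requested total
    var processes = getProcessCount(total);
    return Math.ceil(total / processes);
}
```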

grinder.properties – This file defines the default behavior of the underlying The Grinder. You can configure The Grinder's behavior with this file. Some settings are overridden by the nGrinder runtime, but some are not. In most cases, the admin doesn't need to change this file.
Please refer to http://grinder.sourceforge.net/g3/properties.html for details.

plugins folder – In this folder, you can place nGrinder plugins. Just drop a plugin into this folder. If you want to check the available plugins, refer to nGrinder Plugins.

Data Structure

In ${NGRINDER_HOME}, there are several folders which store data used by nGrinder. The following describes them.

logs – This folder stores the nGrinder logs. nGrinder intercepts the Tomcat log and saves it in the ngrinder.log file. This log contains only controller-related logs. You can also monitor the content of this file through the admin menu.