Oracle OpenScript (or Oracle Functional Testing) is one of the components of OATS (Oracle Application Testing Suite), and is integrated with Oracle Load Testing and Oracle Test Manager. It is also a load-testing script generator, built on Eclipse to support script development and debugging. In the current offering, it runs only on Windows.

Correlation

Correlation of dynamic session values is a major task in load-test scripting[2]. When a server in the AUT (application under test) exchanges dynamic session values with the browser, OpenScript can auto-correlate them, for example session IDs.

What's a Session ID?

Session ID is used in session tracking. Session tracking enables you to track a user's progress over multiple servlets or HTML pages, which, by nature, are stateless. A session is defined as a series of related browser requests that come from the same client during a certain time period. Session tracking ties together a series of browser requests—think of these requests as pages—that may have some meaning as a whole, such as a shopping cart application.

A session ID is a piece of data that is exchanged between the application's web server and the user agent (or browser). It is typically used to identify a specific user logged on to the application for the duration of his or her visit (or session).

A session ID is issued per session and is typically destroyed when the user logs off from the application. The next time you visit the same site, you will receive a different session ID. The correlation task is to identify these dynamic values and substitute variables for them in the load-testing scripts.
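As a sketch of that substitution step, the snippet below replaces a recorded session ID with a variable placeholder; the {{sessionId}} placeholder syntax and the URL are illustrative, not OpenScript's actual notation:

```python
import re

# A recorded request line with a hard-coded (made-up) session ID:
recorded = "GET /app/page?jsessionid=54321abcd HTTP/1.1"

# Correlation replaces the dynamic value with a variable placeholder so the
# script can substitute the live value at playback time.
correlated = re.sub(r"jsessionid=\w+", "jsessionid={{sessionId}}", recorded)
print(correlated)
```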

As you know, Oracle Fusion Applications maintain a rich set of dynamic session values. Correlation done manually requires in-depth knowledge of the application itself and can also be error prone. Fortunately, most correlations needed for successful playbacks can be done automatically by Oracle Open Script. For example, it auto-correlates Session IDs.

Different Ways of Storing Session IDs

There are multiple ways for a web page to pass session ID to a web server. Session ID can be stored in:

Cookie[6]

URL

HTML page

Storing Session ID in Cookies

A cookie is text information that the application places on the client's hard disk. The browser sends the cookie back to the application to maintain state. On WebLogic Server, use of session cookies is enabled by default and is recommended, but you can disable them by setting the cookies-enabled property[3] to false.

If cookies are enabled in the browser, you will often find an entry like the following in the HTTP headers:

Cookie: JSESSIONID=<session value>

Note that JSESSIONID is the default session-tracking cookie name used by WLS. You can configure WebLogic Server session tracking by defining properties in the WebLogic-specific deployment descriptor, weblogic.xml. For a complete list of session attributes, see session-descriptor[3].
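As a sketch of what auto-correlation must do with such a header, here is how a JSESSIONID value can be pulled out of a Cookie header using the standard library (the cookie value is made up):

```python
from http.cookies import SimpleCookie

# A made-up Cookie header value; not a real WebLogic session ID.
raw = "JSESSIONID=abc123sessionvalue; Path=/; HttpOnly"

cookie = SimpleCookie()
cookie.load(raw)

# The correlation step must extract this dynamic value and replace it
# with a script variable before playback.
session_id = cookie["JSESSIONID"].value
print(session_id)
```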

Storing Session ID in URL

Session ID can be sent back to the server as a string appended to the URL following a question mark (i.e., "?").

On WLS, you can enable URL rewriting by setting the url-rewriting-enabled property, which encodes the session ID into the URL and provides session tracking when cookies are disabled in the browser. However, storing the session ID in URLs is less secure than storing it in cookies[5].
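Either form can be extracted with standard URL parsing. The sketch below handles both the query-parameter form and the ;jsessionid path-parameter form that servlet containers commonly use for URL rewriting; URLs and the session value are made up:

```python
from urllib.parse import urlparse, parse_qs

# Session ID carried as a query parameter after "?":
url = "http://example.com/app/page?jsessionid=54321abcd"
query = parse_qs(urlparse(url).query)
sid = query["jsessionid"][0]

# Servlet containers typically rewrite it as a path parameter after ";":
url2 = "http://example.com/app/page;jsessionid=54321abcd"
# urlparse exposes the last path segment's parameters via .params
sid2 = urlparse(url2).params.split("=", 1)[1]
print(sid, sid2)
```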

Storing Session ID in HTML Page

Finally, the session ID can also be stored in a hidden field of an HTML page and submitted via an HTTP POST:

<input type="hidden" name="sessionID" value="54321abcd">

Most user agents (or browsers) allow you to store information in a HiddenField control, which renders as a standard HTML hidden field. A hidden field does not render visibly in the browser, but you can set its properties just as you can with a standard control. When a page is submitted to the server, the content of a hidden field is sent in the HTTP form collection along with the values of other controls. A hidden field acts as a repository for any page-specific information (including Session ID) that you want to store directly in the page.
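A sketch of pulling a session ID out of such a hidden field, reusing the example markup above:

```python
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    """Collects name/value pairs from <input type="hidden"> tags."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if a.get("type") == "hidden":
                self.fields[a.get("name")] = a.get("value")

finder = HiddenFieldFinder()
finder.feed('<input type="hidden" name="sessionID" value="54321abcd">')
print(finder.fields["sessionID"])
```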

These are the netstat options we use (from the man page):

-a, --all

Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not marked.

-p, --program

Show the PID and name of the program to which each socket belongs.

The results are shown below:

$ netstat -ap | grep 9004
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 myserver.us.ora:interserver myserver.oracle.com:9004 ESTABLISHED 12550/oidldapd
tcp 0 0 myserver.us.oracle.com:9004 myserver.ora:interserver ESTABLISHED 22328/java

From the output, we know a Java application (process 22328) is using port 9004. Once the first socket is bound to that port, no other socket can bind to port 9004 as long as the first socket remains open. To find out which application it is, we examine that process' command line:

$ vi /proc/22328/cmdline

On the command line, we found the following information:

-Dweblogic.Name=AdminServer

BIDomain was also mentioned there, so that process is the AdminServer of BIDomain.
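A note on reading /proc/<pid>/cmdline: the argv strings are separated by NUL bytes, which is why an editor shows them run together. A small Linux-only sketch of splitting them apart (the -Dweblogic.Name scan illustrates what we looked for above):

```python
import os

def cmdline(pid):
    # /proc/<pid>/cmdline holds the argv strings separated by NUL bytes
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        return [arg.decode() for arg in f.read().split(b"\0") if arg]

# Demonstrate on our own process; for a WebLogic JVM you would scan for
# flags such as -Dweblogic.Name=AdminServer
args = cmdline(os.getpid())
weblogic_name = next(
    (a.split("=", 1)[1] for a in args if a.startswith("-Dweblogic.Name=")),
    None,
)
print(args, weblogic_name)
```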

Port 7020

Similarly, we saw port 7020 being used in another server's log file:

<BEA-002606> <Unable to create a server socket for listening on channel "Default". The address 10.241.88.31 might be incorrect or another process is using port 7020: java.net.BindException: Address already in use.>

When we tried:

# netstat -ap | grep 7020

no entries were returned. However, using:

# netstat -an | grep 7020

we found one entry:

tcp 0 0 ::ffff:10.241.88.31:7020 :::* LISTEN
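The ::ffff: prefix in that entry is an IPv4-mapped IPv6 address, meaning the socket is an IPv6 socket bound to the IPv4 address 10.241.88.31; a quick check with the standard library:

```python
import ipaddress

# "::ffff:a.b.c.d" embeds an IPv4 address in IPv6 notation
addr = ipaddress.ip_address("::ffff:10.241.88.31")
print(addr.ipv4_mapped)
```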

In this case, we need to use the following command line:

# netstat -ap --numeric-ports |grep 7020

tcp 0 0 slcag044.us.oracle.com:7020 *:* LISTEN 21696/java

So, we know process 21696 is using port 7020. To investigate further, we typed:

# netstat -ap |grep 21696

tcp 0 0 slcag044.us.oracle.:dpserve *:* LISTEN 21696/java

It shows dpserve in place of 7020 because netstat, by default, translates port numbers into the service names listed in /etc/services, and port 7020 is registered there as the service dpserve[2,3]. That is why our first search returned no entries.
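The same /etc/services lookup that netstat performs is available programmatically. In the sketch below, the port-80 lookup should succeed on typical systems, while the dpserve lookup depends on whether the local /etc/services lists it:

```python
import socket

# netstat without --numeric-ports maps port numbers to names via
# /etc/services, the same database these calls consult.
print(socket.getservbyport(80, "tcp"))  # "http" on typical systems

# The lookup that turned 7020 into "dpserve", if listed locally:
try:
    port = socket.getservbyname("dpserve", "tcp")
except OSError:
    port = None  # not every /etc/services defines dpserve
```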

Our Solution

In our case, we needed to re-order our start-up steps (see [4] for another approach): instead of starting BIDomain first, we start it last.

Saturday, August 10, 2013

Heap stress is characterized as OutOfMemory conditions or frequent Full GCs accounting for a certain percentage of CPU time[6]. To diagnose heap stress, either heap dumps or heap histograms can help.

In this article, we will discuss the following topics:

Heap histogram vs. heap dump[1]

How to generate heap histogram or heap dump in HotSpot

Heap Histogram vs. Heap Dump

Without much ado, read this companion article for the comparison. For heap analysis, you can use either jmap or jcmd to do the job[5]. Here we focus only on using jmap.

$ jdk-hs/bin/jmap -help
Usage:
    jmap [option] <pid>
        (to connect to running process)
    jmap [option] <executable> <core>
        (to connect to a core file)
    jmap [option] [server_id@]<remote server IP or hostname>
        (to connect to remote debug server)

where <option> is one of:
    <none>               to print same info as Solaris pmap
    -heap                to print java heap summary
    -histo[:live]        to print histogram of java object heap; if the "live"
                         suboption is specified, only count live objects
    -permstat            to print permanent generation statistics
    -finalizerinfo       to print information on objects awaiting finalization
    -dump:<dump-options> to dump java heap in hprof binary format
                         dump-options:
                           live         dump only live objects; if not specified,
                                        all objects in the heap are dumped.
                           format=b     binary format
                           file=<file>  dump heap to <file>
                         Example: jmap -dump:live,format=b,file=heap.bin <pid>
    -F                   force. Use with -dump:<dump-options> <pid> or -histo
                         to force a heap dump or histogram when <pid> does not
                         respond. The "live" suboption is not supported
                         in this mode.
    -h | -help           to print this help message
    -J<flag>             to pass <flag> directly to the runtime system

Generating Heap Histogram

Heap histograms can be obtained using jmap (note that you need to use the jmap from the same JDK installation that runs your application):

$ ~/JVMs/jdk-hs/bin/jmap -histo:live 7891 > hs_jmap_7891.txt

In the output, it shows the total size and instance count for each class type in the heap. For example, there are 2099805 instances of character arrays (i.e., [C), with a total size of 195645632 bytes. Because the live suboption was specified, only live objects were counted (i.e., a full GC was forced before the histogram was collected).
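Histogram lines like the one described above are easy to post-process. A minimal sketch of parsing one data row, assuming jmap's four-column layout (rank, instance count, byte count, class name):

```python
def parse_histo_line(line):
    # a jmap -histo data line: "   1:       2099805     195645632  [C"
    rank, instances, nbytes, name = line.split()
    return name, int(instances), int(nbytes)

# The [C (char array) figures quoted above:
name, count, size = parse_histo_line("1: 2099805 195645632 [C")
print(name, count, size)
```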

Generating Heap Dump

A heap dump is a file containing all the memory contents of a Java application. It can be generated via:

$ ~/JVMs/jdk-hs/bin/jmap -dump:live,file=/tmp/hs_jmap_dump.hprof 7891
Dumping heap to /tmp/hs_jmap_dump.hprof ...
Heap dump file created

Including the live option in jmap forces a full GC before the heap is dumped so that it contains only live objects. We recommend taking multiple heap dumps, for example at 30 minutes and 1 hour into the run, and then using Eclipse MAT[2,3] to examine them.

Heap Histogram vs. Heap Dump

A heap dump is a snapshot of all the objects in the Java Virtual Machine (JVM) heap at a certain point in time. The JVM software allocates memory for objects from the heap for all class instances and arrays. The garbage collector reclaims the heap memory when an object is no longer needed and there are no references to the object. By examining the heap you can locate where objects are created and find the references to those objects in the source. However, dumping the Java heap is time-consuming, and the resulting file can be large.

On the other hand, a heap histogram gives a good summary of the heap objects used in the application without a full heap dump, and can help you quickly narrow down a memory leak. This information can be obtained in several ways:

Attach to a running process using the command jrcmd.

Generate it from a core file or heap dump.

Note that we use the terms heap histogram, heap summary, and heap diagnostics interchangeably in this article.

Generating Heap Histogram

A heap histogram can be obtained from a running process using the command:

$ jrcmd <pid> heap_diagnostics

In the output, there is a "Detailed Heap Statistics" section, which shows the total size and instance count for each class type in the heap:

The first column is the class object type's contribution to the Java heap footprint in %.

The second column is the class object type's memory footprint in KB.

The third column is the number of instances of a particular class type.

The fourth column is the delta (-/+) in memory footprint of a particular type.

As you can see from the above snapshot, the biggest data types are [C (i.e., character array) and java.lang.String. To see which data types are leaking, you will probably need to generate several snapshots, from which you may observe a trend that can lead to further analysis.
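Comparing per-class byte counts across snapshots is a simple way to spot that trend. A sketch with made-up numbers:

```python
def growth(before, after):
    # sort classes by how many bytes they gained between two snapshots
    return sorted(((after[k] - before.get(k, 0), k) for k in after),
                  reverse=True)

# Made-up per-class byte counts from two histogram snapshots:
before = {"[C": 100_000, "java.lang.String": 50_000}
after = {"[C": 400_000, "java.lang.String": 60_000, "byte[]": 10_000}

top = growth(before, after)
print(top[0])  # the class type that grew the most
```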

Generating Heap Dump

A heap dump is a file containing all the memory contents of a Java application. It can be generated via:

$ jrcmd 20488 hprofdump

Wrote dump to /.../appmgr/APPTOP/instance/debug/jrockit_20488.hprof

Then you can use various tools to load that file and look at various things in the heap: how much memory each kind of object is using, what is holding onto the most memory, and so on. The size of the heap dump file is proportional to the size of the Java heap and can be large.
Three of the most common tools are:

jhat

This is the original heap analyzer tool, which reads the heap dump and runs a small HTTP server that lets you look at the dump through a series of web page links.

VisualVM [3]

MAT [4,5]

Heap-Related JVM Options

When your JVM runs into OutOfMemoryError, you can set:

-XX:+HeapDumpOnOutOfMemoryError

to get a heap dump just before the JVM dies, when the heap is big and bloated. You can also provide the following flags:

-XX:HeapDumpPath=<path to the destination>

-XX:+ExitOnOutOfMemoryError

Similarly, you can get a heap histogram instead of a full heap dump[7].


User Level File Descriptor Limits

To view current open file limit for the current Linux user, run command:

$ ulimit -n

8192

To set it to a new value for the current session, which takes effect immediately, run the command:

$ ulimit -n 16384

Alternatively, if you want the changes to survive reboot, do the following:

Exit all shell sessions for the user you want to change limits on.

As root, edit the file /etc/security/limits.conf and add these two lines toward the end:

user1 soft nofile 16384
user1 hard nofile 16384

The two lines above change the maximum number of file handles (nofile) to the new settings.

Save the file.

Log in as user1 again. The new changes will be in effect.
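From inside a process, the same limits are visible via getrlimit; a small Unix-only sketch:

```python
import resource

# The soft limit is what "ulimit -n" prints; the hard limit is the ceiling
# an unprivileged process may raise its soft limit up to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# A process may adjust its own soft limit, up to the hard limit;
# re-setting the current values is a harmless no-op.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```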

System-wide File Descriptors Limits

On Linux, there is also a system-wide configuration parameter named:

fs.file-max

Use the following command to display maximum number of open file descriptors allowed on the system:

$ cat /proc/sys/fs/file-max
100000

Many applications, such as Oracle Database or WebLogic Server, need this setting to be much higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):

# sysctl -w fs.file-max=262144

The above command forces the limit to 262144 files. For the setting to survive a reboot, edit the /etc/sysctl.conf file:

# vi /etc/sysctl.conf

Append a configuration directive as follows:

fs.file-max = 262144

Save and close the file. Users need to log out and log back in for the changes to take effect, or just type the following command:

# sysctl -p

Verify your settings with command:

# cat /proc/sys/fs/file-max

or:

# sysctl fs.file-max
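The same value can be read programmatically on Linux, which is handy in monitoring scripts:

```python
# Equivalent to: cat /proc/sys/fs/file-max
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().split()[0])
print(file_max)
```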

Final Words

Note that commands used in this article are good for the following Linux release:

Disclaimer

The statements and opinions expressed here are my own and do not necessarily represent those of Oracle.
