Introduction

This report compares the performance of Rational DOORS Next Generation (RDNG) version 5.0 to the previous Rational Requirements Composer (RRC) version 4.0.6 release. The test objective is achieved in three steps:

Run version 4.0.6 with the standard 1.5-hour test using 400 concurrent users.

Run version 5.0 with the standard 1.5-hour test using 400 concurrent users.

Compare the runs with each other.

The test is run three times for each version, and the resulting six tests are compared with each other. Three tests per version are used to get a more accurate picture, since variation is expected between runs.

Disclaimer

The information in this document is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

This testing was done as a way to compare and characterize the differences in performance between different versions of the product. The results shown here should thus be looked at as a comparison of the contrasting performance between different versions, and not as an absolute benchmark of performance.

What our tests measure

We predominantly use automated tooling, such as Rational Performance Tester (RPT), to simulate a workload normally generated by client software such as the Eclipse client or web browsers. All response times listed are those measured by our automated tooling and not by an actual client.

The diagram below describes, at a very high level, which aspects of the entire end-to-end experience (human end user to server and back again) our performance tests simulate. The tests described in this article simulate a large part of the end-to-end transaction, as indicated. The performance tests include some simulation of browser rendering and of network latency between the simulated browser client and the application server stack.

Findings

Performance goals

Verify that there are no performance regressions between the current release and the prior release with 400 concurrent users, using the workload described below.

When comparing the three 5.0 runs with the three 4.0.6 runs, the average of the three runs for each version is used. Differences of up to 10% are generally accepted as normal run-to-run variation.
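
To make the comparison rule concrete, the following sketch (Python, with made-up timings) averages the three runs of each version and flags a user action whose averages differ by more than the 10% threshold.

    from statistics import mean

    THRESHOLD = 0.10  # accepted run-to-run variation

    def compare(v406_runs, v50_runs):
        """Average each version's runs; a positive delta means 5.0 is slower."""
        base, new = mean(v406_runs), mean(v50_runs)
        delta = (new - base) / base
        return delta, delta > THRESHOLD

    # Hypothetical response times (ms) for one user action, three runs per version.
    delta, regressed = compare([812, 798, 820], [905, 880, 910])
    print(f"delta={delta:+.1%} regression={regressed}")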


Performance

Several of the user actions, such as opening a project, have improved performance.

An open performance regression defect negatively impacts the time it takes to create reports.

Impact of separating the RDNG repository from the JTS repository

This change has not negatively impacted performance.

As expected, this change has increased the workload for the RDNG application while lessening the workload for JTS.

The impact can be seen in higher CPU and memory utilization by the RDNG server and correspondingly lower utilization by JTS.

Note that it is now possible to have more than one RDNG server using the same JTS. This allows scaling flexibility, since more RDNG servers can be added as needed.

Other observations

RDNG now maintains its own indices. Ensure that there is enough disk space to accommodate the indices on the RDNG server.

In our test environment, the RDNG indices created for a 240,000-artifact repository required 17.6 GB of disk space. Note that the new RDNG indices require more disk space than JTS 4.0.6 previously used for the same data: in our test environment, RDNG required 50% more disk space for indices.

Putting the indices on a faster drive will improve performance.
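
To check how much disk space the indices actually consume on a given server, a small script such as the following can help. This is a sketch only; the index location shown is a placeholder, and the actual location depends on how the RDNG server is configured.

    import os

    # Placeholder path; substitute the configured RDNG index location.
    INDEX_DIR = "/opt/IBM/JazzTeamServer/server/conf/rm/indices"

    def dir_size_bytes(root):
        """Sum the sizes of every file under root."""
        total = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                total += os.path.getsize(os.path.join(dirpath, name))
        return total

    print(f"RDNG indices use {dir_size_bytes(INDEX_DIR) / 1024**3:.1f} GB")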

When upgrading from a previous version, the JTS database is cloned. This requires additional resources such as disk space and memory.

Over time, migrated data that is no longer required will be removed, but initially the new RDNG database will double the required disk space.

Additional memory requirements for the database will depend on the database vendor and configuration. It is strongly recommended that you test thoroughly in a test environment before upgrading.
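
Because the upgrade clones the JTS database, it is worth confirming disk headroom before starting. The sketch below uses hypothetical values and paths; the doubling factor follows the observation above.

    import shutil

    # Hypothetical values; substitute the measured size of your JTS database
    # and the volume that will hold the cloned RDNG database.
    current_db_gb = 120
    free_gb = shutil.disk_usage("/var/lib/db2").free / 1024**3

    # The clone initially doubles the footprint, so require at least that much free.
    if free_gb < current_db_gb:
        print(f"WARNING: only {free_gb:.0f} GB free; the clone needs ~{current_db_gb} GB")
    else:
        print(f"OK: {free_gb:.0f} GB free for a ~{current_db_gb} GB clone")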

Methodology

Rational Performance Tester was used to simulate the workload created using the web client. Each user completed a random use case from a set of available use cases. A Rational Performance Tester script is created for each use case. The scripts are organized by pages, and each page represents a user action.

Based on real customer use, the test scenario provides a ratio of 70% reads and 30% writes. The users completed use cases at a rate of 30 pages per hour per user. Each performance test runs for 90 minutes after all of the users are activated in the system.
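
For context, the sketch below works out the aggregate load that these parameters imply; the numbers come directly from the scenario described above.

    USERS = 400          # concurrent simulated users
    PAGES_PER_HOUR = 30  # pages per hour per user
    READ_RATIO = 0.70    # 70% reads / 30% writes

    total_per_hour = USERS * PAGES_PER_HOUR    # 12,000 pages per hour
    per_second = total_per_hour / 3600         # ~3.3 pages per second
    reads = total_per_hour * READ_RATIO        # ~8,400 read pages per hour
    writes = total_per_hour - reads            # ~3,600 write pages per hour
    print(f"{total_per_hour} pages/hour ({per_second:.1f}/s): "
          f"{reads:.0f} reads, {writes:.0f} writes per hour")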

Response time comparison

The median response time provided more consistent results than the average response time. Because some tasks occasionally take much longer to run, such as when the server is under heavy load, the high variance between tests makes the average response time less predictive. Both the median and average values are included in the following tables and charts for comparison.
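
The effect is easy to reproduce. In the sketch below (illustrative values only), a single slow outlier of the kind seen under heavy load shifts the average noticeably while leaving the median almost unchanged.

    from statistics import mean, median

    quiet_run = [410, 425, 430, 440, 455]    # ms, illustrative values
    loaded_run = [410, 425, 430, 440, 4500]  # one task hit heavy server load

    for label, run in (("quiet", quiet_run), ("loaded", loaded_run)):
        print(f"{label}: mean={mean(run):.0f} ms, median={median(run):.0f} ms")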

In the repository that contained 240,000 artifacts with 400 concurrent users, no obvious regression was shown when comparing response times between runs.

The numbers in the following charts include all of the pages for all of the scripts that ran.

Results

Observation

In 5.0, RDNG uses its own repository instead of sharing one with JTS. Both memory and CPU utilization reflect the changed work distribution in 5.0, where RDNG manages its own repository.

Note that additional memory is used by the DB2 server due to the additional database required by RDNG 5.0.

Garbage collection

Verbose garbage collection is enabled to create the GC logs. The GC logs show very little variation between runs. The differences between versions 4.0.6 and 5.0 displayed by the GC logs reflect the additional work performed by RDNG 5.0 that was previously done by JTS. Below is an example of the output from the GC log for JTS and RDNG, including both versions 4.0.6 and 5.0 for each application server.
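
As a rough way to compare pause activity between runs, a log can be summarized with a script like the one below. It assumes the XML-style verbose GC output of the IBM J9 JVM, where pause times appear in durationms attributes; the log file name and the attribute name are assumptions and should be adjusted for your JVM and version.

    import re

    LOG = "jts-gc.log"  # hypothetical verbose GC log file name

    # Assumes J9-style XML output where pause times appear as durationms="..."
    # attributes; adjust the pattern for your JVM and version.
    pauses = [float(v) for v in re.findall(r'durationms="([\d.]+)"', open(LOG).read())]
    if pauses:
        print(f"{len(pauses)} pauses, total {sum(pauses):.0f} ms, max {max(pauses):.1f} ms")
    else:
        print("no pause records found")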

RM

Observation: The graphs reflect the change in workload now that the RDNG application performs all RDNG-related tasks. In 4.0.6, queries and other database-related interactions were done by JTS as the owner of the repository.

RM 4.0.6

RM 5.0

JTS

Observation: The graphs reflect the change in workload now that the JTS application no longer performs some of the RM-related repository tasks. In RM 4.0.6, database-related queries were done by JTS.

JTS 4.0.6

JTS 5.0

Create a collection

Observation: All gestures except opening the project execute 10% to 15% faster.

Create and edit a storyboard

Display the hover information for a collection

Query by string

Create a PDF report

Observation: The last step of creating a report is slower in 5.0. The root cause of this performance regression is still under investigation, and a fix will be delivered in a future release (see work item (xxxxxx)).

Create a Microsoft Word report

Observation: The last step of creating a report is slower in 5.0. The root cause of this performance regression is still under investigation, and a fix will be delivered in a future release (see work item (xxxxxx)).

Open a collection and display the first page

Create an artifact in a large module

Display module history

Create a traceability report using 50 artifacts

Observation: The last step of creating a report is slower in 5.0. The root cause of this performance regression is still under investigation, and a fix will be delivered in a future release (see work item (xxxxxx)).