
How to Do Analysis in HP LoadRunner


The aim of the analysis session is to find the failures in your system's performance and then pinpoint their source.

Were the test expectations met? What was the transaction response time on the user's end under load? Did the system meet its SLA goals, or deviate from them? What was the average response time of each transaction?
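The SLA check described above can be sketched as a simple computation: average each transaction's response times and compare against a goal. This is only an illustration with invented data and invented thresholds, not LoadRunner's implementation.

```python
# Sketch of an SLA check: average response time per transaction versus a goal.
# The sample values and SLA thresholds below are hypothetical.
from statistics import mean

# Hypothetical per-transaction response times (seconds) collected under load.
response_times = {
    "check_itinerary": [2.1, 3.4, 8.7, 75.067, 12.3],
    "logon": [0.8, 0.9, 1.1, 1.0, 0.9],
}
sla_goals = {"check_itinerary": 10.0, "logon": 3.0}  # assumed SLA thresholds

for txn, times in response_times.items():
    avg = mean(times)
    status = "met" if avg <= sla_goals[txn] else "deviated"
    print(f"{txn}: average {avg:.3f}s, SLA {status}")
```

With this data, check_itinerary's average (over 20 seconds) deviates from its 10-second goal, while logon stays comfortably within its goal.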

What parts of the system could have contributed to the decline in performance? What was the response time of the network and servers?

Can you find a possible cause by correlating the transaction times with the backend monitor metrics?

Session Explorer pane: In the upper left pane, Analysis shows the reports and graphs that are open for viewing. From here you can display new reports or graphs that do not appear when Analysis opens, or delete ones that you no longer want to view.

Properties window pane: In the lower left pane, the Properties window displays the details of the graph or report you selected in the Session Explorer. Fields that appear in black are editable.

Graph Viewing Area: In the upper right pane, Analysis displays the graphs. By default, the Summary Report is displayed in this area when you open a session.

Graph Legend: In the lower right pane, you can view data from the selected graph.

Note how the average response time of the check_itinerary transaction fluctuates greatly, reaching a peak of 75.067 seconds at 2:56 minutes into the scenario run.

On a well-performing server, the transactions would follow a relatively stable average response time. At the bottom of the graph, note how the logon, logoff, book_flight, and search_flight transactions follow a more or less stable average response time.
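One rough way to quantify "fluctuates greatly" versus "more or less stable" is the coefficient of variation (standard deviation divided by mean) of each transaction's response times. This is my own sketch with invented sample data, not a built-in Analysis measurement.

```python
# Coefficient of variation as a rough stability measure: higher means the
# response times fluctuate more relative to their average. Data is invented.
from statistics import mean, pstdev

samples = {
    "check_itinerary": [2.0, 5.0, 20.0, 75.0, 10.0],  # fluctuates greatly
    "logon": [0.9, 1.0, 1.1, 1.0, 0.9],               # stable
}

for txn, times in samples.items():
    cv = pstdev(times) / mean(times)
    print(f"{txn}: coefficient of variation {cv:.2f}")
```

A fluctuating transaction such as check_itinerary produces a much larger coefficient of variation than a stable one such as logon.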

You can see that there was a gradual start of running Vusers at the beginning of the scenario run. Then, for a period of 3 minutes, 70 Vusers ran simultaneously, after which the Vusers gradually stopped running.

In this graph you can see that as the number of Vusers increases, the average response time of the check_itinerary transaction increases very gradually; in other words, response time climbs steadily with load.

At 64 Vusers, there is a sudden, sharp increase in the average response time.

We say that the test broke the server. The response time clearly began to degrade when there were more than 64 Vusers running simultaneously.
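The "breaking point" described above can be sketched as a scan over (Vuser count, average response time) pairs, looking for the first sudden jump. The data, the function name, and the degradation factor are all invented for illustration.

```python
# Hypothetical sketch: locate the load level at which response time "breaks"
# by finding the first sample whose response time jumps past a degradation
# factor relative to the previous sample.
def find_breaking_point(samples, factor=2.0):
    """Return the Vuser count at the first sample whose response time is more
    than `factor` times the previous sample's, or None if no jump is found."""
    for (v_prev, t_prev), (v_cur, t_cur) in zip(samples, samples[1:]):
        if t_cur > factor * t_prev:
            return v_cur
    return None

# Gradual increase up to 64 Vusers, then a sudden sharp jump afterwards.
load_profile = [(10, 1.2), (30, 1.8), (50, 2.5), (64, 3.1), (70, 24.0)]
print(find_breaking_point(load_profile))  # prints 70
```

Here the scan reports the first sample past 64 Vusers, matching the observation that response time degrades once more than 64 Vusers run simultaneously.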

From the graph tree, select the Average Transaction Response Time graph. Look at the check_itinerary transaction, particularly at the slice of elapsed time between 1 and 4 minutes. The average response time started to increase almost immediately, until it peaked near the 3-minute mark.

In the Auto Correlate dialog box, make sure that the measurement to correlate is check_itinerary, and set the time range from 1:20 to 3:40 minutes - either by entering the times in the boxes, or by dragging the green and red poles into place along the Elapsed Scenario Time axis.

In the Measurement column you can see that the Private Bytes and Pool Nonpaged Bytes, both of which are memory-related measurements, have a Correlation Match of over 70% with the check_itinerary transaction.
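Conceptually, a correlation match like this measures how closely a monitor metric tracks the transaction's response-time curve; Pearson correlation is one way to compute such a score. The sketch below uses invented series and a 70% threshold purely for illustration; it is not Auto Correlate's actual algorithm.

```python
# Sketch: Pearson correlation between a transaction's response-time series and
# each backend monitor measurement, keeping matches above 70%. All data and
# measurement values are invented.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

check_itinerary = [1.0, 2.0, 8.0, 40.0, 75.0]      # response times (seconds)
monitors = {
    "Private Bytes": [100, 180, 400, 1500, 2800],   # memory, tracks closely
    "Pool Nonpaged Bytes": [50, 90, 210, 700, 1400],
    "Disk Queue Length": [3, 1, 2, 1, 3],           # unrelated noise
}

matches = {name: pearson(check_itinerary, series) for name, series in monitors.items()}
strong = [name for name, r in matches.items() if abs(r) > 0.70]
print(strong)  # the memory-related measurements correlate strongly
```

With this data, the two memory-related measurements score well above the 70% threshold while the unrelated metric falls below it, mirroring the pattern described in the text.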

Up to now you have filtered a graph and correlated two graphs. The next time you analyze a scenario, you might want to view the same graphs, with the same filter and merge conditions applied. You can save your merge and filter settings into a template, and apply them in another analysis session.