Performance test report


Disclaimer

This report was prepared by Netventic Technologies staff with the intention of providing customers with information on what performance they can expect from Netventic Learnis LMS. We put maximum effort into setting up the environment and application so that the tests are as unbiased as possible. Nevertheless, the performance of the application depends on many circumstances, such as:
- computer hardware
- network configuration
- client configuration
- operating system and software configuration & condition
- LMS content
- number of items in the database
As this is not a complete list, other conditions may also interfere with the tests.

Warning: Netventic does not provide any guarantee that the same values will be achieved with different configurations and/or conditions, and Netventic is in no case liable for any loss resulting from the use of this report.

Methodology

All tests performed and presented here are synthetic by nature. However, we try to simulate real user behavior as faithfully as possible. All tests were repeated multiple times (at least 3 times). Data presented here are from an average test run (the best and worst results were excluded). If more than one result remained, the "most average" one was hand picked. Apache and Postgres were restarted before each test run.

Test case scenario

The following operations were performed in one loop:
1) Access the Login page URL and sign the user in
2) Visit the URL of the Courses list page
3) Visit the URL of the Course 1 detail page
4) Visit the URL of the Course 1 player page
5) Visit the URL of the Test 1 info page, start Test 1 and generate a test attempt (14 questions, random questions, random answers, unlimited attempts; otherwise default settings were kept)
6) Visit the URL of the Test player, loading the generated test attempt, forwarded to the first question
7) Visit the URL of the Course 2 detail page
Ramp-up period 1): always 1 s
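The seven-step loop above can be sketched as a simple driver. This is only a minimal illustration, not the actual JMeter test plan; the relative URLs and the pluggable `fetch` callable are assumptions introduced for the sketch.

```python
import time

# Hypothetical page URLs mirroring the scenario steps (assumptions).
SCENARIO = [
    "/login",            # 1) sign the user in
    "/courses",          # 2) courses list page
    "/course/1",         # 3) course 1 detail page
    "/course/1/player",  # 4) course 1 player page
    "/test/1/start",     # 5) test 1 info page, generate test attempt
    "/test/1/player",    # 6) test player, first question
    "/course/2",         # 7) course 2 detail page
]

def run_scenario(fetch, loops=1):
    """Run the 7-step test case `loops` times and return per-step timings.

    `fetch` is any callable taking a URL (e.g. an HTTP client wrapper),
    so the driver can also be exercised without a live server.
    Returns a list of (url, elapsed_ms) tuples.
    """
    timings = []
    for _ in range(loops):
        for url in SCENARIO:
            start = time.perf_counter()
            fetch(url)
            timings.append((url, (time.perf_counter() - start) * 1000.0))
    return timings
```

In the real test each concurrent JMeter thread runs this loop independently; the driver above shows only the per-thread sequence.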

Conclusion

The conclusion depends on the use case you expect to experience.

High concurrency, unpredictable spikes

Use case: Public-facing multi-user (or even multi-tenant) environments.

Setup variants: To handle many concurrent users at reasonable response times, we need (based on the number of concurrent users) at least the following hardware configurations:
A) up to ~64 req/s: a single server with 4 cores (8 threads) and 16-24 GB of RAM, or
B) up to ~128 req/s: a single server with 2 processors, each with 4 cores (16 threads), and 32-48 GB of RAM, or
C) up to ~256 req/s: a cluster of 2 double-processor servers, one as a web server and one as a database server, dividing the processing of Apache and Postgres across 2 different machines, or
D) even higher or unpredictable req/s: a dedicated server farm with capacity overdimensioned to handle the spikes, or
E) even higher or unpredictable req/s: a cloud environment with auto-scaling support, possibly off-loading the serving of static content to a CDN.

Bottlenecks: The main bottleneck for high concurrency, if the server does not swap to disk (has enough RAM), is usually the number of CPU threads available for parallel processing. In extreme cases, I/O performance can become an issue.

Costs: This use case is the most expensive, as multi-processor machines are very expensive and an appropriate cloud environment is not easy to configure; the cloud can nevertheless be the most economical option (you do not pay for extra capacity, kept only to cover occasional spikes, when it is not used).

Recommended solution: A cloud environment with the ability to auto-scale, allowing you to scale your capacity up or down automatically according to conditions you define.

High speed response, limited concurrency

Use case: A controlled environment (e.g. an intranet) with a predictable and relatively low number of simultaneously working users and a demand for low latency.
Bottleneck: The response bottleneck for low latency, if the server does not swap to disk (has enough RAM), is usually the latency to the remote server and a slow CPU (low processor frequency).

Costs: For cost optimization it is essential to determine the amount of RAM based on the number of concurrent users we must handle.

Recommended solution: Without high-concurrency demands, the best option is a single-processor, multi-core (quad-core) server with the highest GHz you can get, placed on the LAN.

How to understand the reports?

We are using the Aggregate Report, which provides the decisive data.

For visualization of the data we use the Graph Results listener.

Aggregate Report

The Aggregate Report creates a table row for each differently named request in your test. For each request, it totals the response information and provides the request count, min, max, average, error rate, approximate throughput (requests/second) and throughput in kilobytes per second. Once the test is done, the throughput shown is the actual throughput for the duration of the entire test. The throughput is calculated from the point of view of the sampler target (e.g. the remote server in the case of HTTP samples). JMeter takes into account the total time over which the requests have been generated. If other samplers and timers are in the same thread, these will increase the total time and therefore reduce the throughput value. So two identical samplers with different names will have half the throughput of two samplers with the same name. It is important to choose the sampler names correctly to get the best results from the Aggregate Report.

Label - the label of the sample. If "Include group name in label?" is selected, the name of the thread group is added as a prefix. This allows identical labels from different thread groups to be collated separately if required.
# Samples - the number of samples with the same label
Average - the average time of a set of results
Median - the time in the middle of a set of results; 50% of the samples took no more than this time, the remainder took at least as long
90% Line - 90% of the samples took no more than this time; the remaining samples took at least as long (90th percentile)
Min - the shortest time for the samples with the same label
Max - the longest time for the samples with the same label
Error % - the percentage of requests with errors
Throughput - measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0.
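The Aggregate Report columns above can be reproduced from raw sample data. The sketch below is a simplified illustration, not JMeter's implementation: the 90% line uses a basic percentile rule, and throughput is approximated from the sum of sample times rather than JMeter's wall-clock window.

```python
import math

def aggregate_report(samples):
    """Compute Aggregate Report style statistics for one label.

    `samples` is a list of (elapsed_ms, success) tuples.
    """
    times = sorted(ms for ms, _ in samples)
    n = len(times)
    errors = sum(1 for _, ok in samples if not ok)
    # 90% line: the smallest time that at least 90% of samples do not exceed.
    line90 = times[max(0, math.ceil(0.9 * n) - 1)]
    total_s = sum(times) / 1000.0  # simplification of JMeter's time window
    return {
        "samples": n,
        "average": sum(times) / n,
        "median": times[n // 2],
        "90pct_line": line90,
        "min": times[0],
        "max": times[-1],
        "error_pct": 100.0 * errors / n,
        "throughput_rps": n / total_s if total_s else float("inf"),
    }
```

For example, nine 100 ms successes and one 1000 ms failure give an average of 190 ms, a median and 90% line of 100 ms, and a 10% error rate.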
When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30 requests/minute is saved as 0.5.
KB/sec - the throughput measured in kilobytes per second
Times are in milliseconds.

Graph Results

Warning: Graph Results MUST NOT be used during a load test, as it consumes a lot of resources (memory and CPU). This means that results from the graph differ from (are usually worse than) those from the Aggregate Report. Graph results were also taken in a different test run than the Aggregate Report!

The Graph Results listener generates a simple graph that plots all sample times. Along the bottom of the graph, the current sample (black), the current average of all samples (blue), the current standard deviation (red), and the current throughput rate (green) are displayed in milliseconds. The throughput number represents the actual number of requests/minute the server handled. This calculation includes any delays you added to your test and JMeter's own internal processing time. The advantage of doing the calculation this way is that the number represents something real - your server in fact handled that many requests per minute, and you can increase the number of threads and/or decrease the delays to discover your server's maximum throughput. Whereas if you made calculations that factored out delays and JMeter's processing, it would be unclear what you could conclude from that number.

The following table briefly describes the items on the graph. Further details on the precise meaning of the statistical terms can be found on the web - e.g. Wikipedia - or by consulting a book on statistics.

Data - plot the actual data values
Average - plot the average
Median - plot the median (midway value)
Deviation - plot the standard deviation (a measure of variation)
Throughput - plot the number of samples per unit of time

The individual figures at the bottom of the display are the current values. "Latest Sample" is the current elapsed sample time, shown on the graph as "Data".

Legend to the Summary table

1) Number of concurrent processes (threads) per second - simulating the users
2) Number of times to perform the test case
3) Average response of the application (in milliseconds) including latency to the server
4) Requests per second
5) Total run time of the whole test (in seconds)
* Max throughput is not reached, as there are not enough requests per second.
Red rows = application response times are considered too slow to be "interactive".
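The graph's throughput definition, with all delays included, is simply the total request count divided by the elapsed wall time. A minimal sketch of that calculation; the numbers plugged in are illustrative assumptions, not measured values from this report:

```python
def throughput_per_minute(num_requests, wall_seconds):
    """Throughput as the Graph Results listener reports it: the real number
    of requests the server handled per minute, with all think-time delays
    and client overhead included in the elapsed wall time."""
    return num_requests * 60.0 / wall_seconds

# Illustrative assumption: 30 threads each running the 7-request scenario
# 100 times, finishing in 5 minutes of wall time.
rpm = throughput_per_minute(30 * 100 * 7, 5 * 60)  # 21000 requests / 300 s
```

Note that this number goes down if you add think-time delays, even though the server itself is no slower; that is exactly why the report warns that Graph Results and Aggregate Report figures are not directly comparable.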
