
I have very heterogeneous performance test use cases, ranging from simple performance regression tests executed from a Jenkins node to occasional large-ish stress tests that run at over 100K requests per second with more than 100 load generators. With higher loads, many problems arise: feeding data to load generators, retrieving results, getting a real-time view, analyzing huge data sets, and so on.

JMeter is a great tool, but it has its own limitations. In order to scale, I had to work around a few of its limitations and created a test framework to help me execute tests at scale on Amazon's EC2.

Having a central data feeder was a problem. Using JMeter's master node is impossible at this scale, and a single shared data source might become a bottleneck, so having a way of distributing the data was important. I thought about using a feeder model similar to Twitter's Iago, or a clustered, load-balanced resource, but settled for something simpler. Since most tests only use a limited data set and loop around it, I decided to bzip2 the files and upload them to each load generator before the test starts. This way I avoided making an extra request to fetch data during execution, and re-requesting the same data on every loop iteration. One problem with this approach is that I don't have centralized control over the data set, since each load generator uses the same input. I mitigate that by varying the data locally on each load generator, with a hash function or by introducing random values. I also considered distributing different files to different load generators based on a hash function, but so far there has been no need.
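To make the workflow concrete, here is a minimal sketch of that distribution step, assuming passwordless scp/ssh access to the load generators; the hostnames and paths are hypothetical:

import java.io.IOException;
import java.util.List;

public class DataDistributor {

    // Hypothetical list of load generator hosts provisioned on EC2.
    private static final List<String> HOSTS =
            List.of("lg-1.example.com", "lg-2.example.com");

    public static void main(String[] args) throws IOException, InterruptedException {
        String dataFile = "testdata.csv";

        // Compress the data set once, locally (bzip2 keeps the upload small).
        run("bzip2", "-kf", dataFile);

        // Upload the same compressed file to every load generator before the
        // test starts, so no data requests are needed during execution.
        for (String host : HOSTS) {
            run("scp", dataFile + ".bz2", host + ":/tmp/");
            run("ssh", host, "bunzip2 -f /tmp/" + dataFile + ".bz2");
        }
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", cmd));
        }
    }
}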

Retrieving results was tricky too. Again, using JMeter's master node was impossible because of the amount of traffic. I tried having a poller fetch raw results (only timestamp, label, success and response time) in real time, but that affected the results. Downloading all results at the end of the test worked, by checking the status of the test (running or not) every minute and downloading after completion, but I settled on a custom sampler in a tearDown thread group that compresses the results and uploads them to Amazon's S3. This could definitely be a plugin too. It works reasonably well, but I lose the real-time view and have to manually add a file writer and the sampler to tests.
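As an illustration, a tearDown sampler along these lines might look like the following minimal sketch, assuming the AWS SDK for Java is on JMeter's classpath; the parameter names, bucket and key are hypothetical, and credentials come from the SDK's default provider chain:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.GZIPOutputStream;

import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ResultsUploader extends AbstractJavaSamplerClient {

    @Override
    public SampleResult runTest(JavaSamplerContext ctx) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            File raw = new File(ctx.getParameter("resultsFile"));
            File gz = new File(raw.getPath() + ".gz");

            // Compress the results file before uploading.
            try (FileInputStream in = new FileInputStream(raw);
                 GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(gz))) {
                in.transferTo(out);
            }

            // Upload to S3; runs once per load generator at test teardown.
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            s3.putObject(ctx.getParameter("bucket"), ctx.getParameter("key"), gz);

            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}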

For the real-time view, I started with the same approach as jmeter-ec2, polling aggregated data (average response time, rps, etc.) from each load generator and printing it, but that proved useless with a large number of load generators. For now, in Java samplers, I'm using Netflix's Servo to publish metrics in real time (averaged over a minute) to our monitoring system. I'm considering writing a listener plugin that could use the same approach to publish data from any sampler. From the monitoring system I can then analyze and plot real-time data with minor delays. Another option I'm considering is the same approach, but using StatsD and Graphite.
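The StatsD option is appealing partly because the protocol is trivial: each metric is a single UDP datagram of the form name:value|type. Here is a minimal publisher sketch that a listener plugin could build on (the host, port and metric name are made up):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsDPublisher {

    private final DatagramSocket socket;
    private final InetAddress host;
    private final int port;

    public StatsDPublisher(String host, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.host = InetAddress.getByName(host);
        this.port = port;
    }

    // Publish a response time as a StatsD timing metric: "name:123|ms".
    // UDP is fire-and-forget, so publishing barely affects the test itself.
    public void timing(String metric, long millis) throws Exception {
        byte[] payload = (metric + ":" + millis + "|ms")
                .getBytes(StandardCharsets.UTF_8);
        socket.send(new DatagramPacket(payload, payload.length, host, port));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical monitoring host; Graphite would aggregate per interval.
        StatsDPublisher statsd = new StatsDPublisher("metrics.example.com", 8125);
        statsd.timing("jmeter.sampler.login.response_time", 142);
    }
}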

Analyzing huge result sets was the biggest challenge, I believe. For that, I've developed a web-based analysis tool. It doesn't store raw results, but mostly time-based, aggregated statistical data from both JMeter and the monitoring systems, allowing some data manipulation for analysis and automatic comparison of result sets. Aggregating and analyzing tests with over 1B samples is a problem, even after constant tuning. Loading all data points into memory to calculate percentiles and sort is practically impossible, simply because the amount of memory needed is impractical, even with small objects. For now, on large tests, I settled on aggregating data while loading results (into per-second/per-minute data points) and accepting the statistical problems that come with it, like taking averages of averages. Another option would be to analyze results from each load generator independently and aggregate at the end. In the future, I'm considering keeping results on a Hadoop cluster and using Map/Reduce to get the aggregated statistical data back.
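To illustrate, here is a minimal sketch of that streaming aggregation, assuming a headerless CSV results file whose first two columns are the epoch timestamp and the elapsed time (the column layout is an assumption). Memory use is bounded by test duration rather than sample count, which is what makes billion-sample runs tractable:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.TreeMap;

public class StreamingAggregator {

    // Per-minute aggregate: count, sum and max are enough for avg/max;
    // exact percentiles would still require the raw samples.
    static final class Bucket {
        long count, sum, max;
        void add(long elapsed) {
            count++;
            sum += elapsed;
            max = Math.max(max, elapsed);
        }
    }

    public static void main(String[] args) throws Exception {
        // Bucketed by minute, so memory is O(test duration), not O(samples).
        TreeMap<Long, Bucket> buckets = new TreeMap<>();

        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                long timestamp = Long.parseLong(fields[0]);  // epoch millis
                long elapsed = Long.parseLong(fields[1]);    // response time ms
                long minute = timestamp / 60_000;
                buckets.computeIfAbsent(minute, m -> new Bucket()).add(elapsed);
            }
        }

        buckets.forEach((minute, b) -> System.out.printf(
                "minute=%d samples=%d avg=%.1f max=%d%n",
                minute, b.count, (double) b.sum / b.count, b.max));
    }
}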

The framework also helps me automate most of the test process: creating a new load generator cluster on EC2, copying test artifacts to the load generators, executing and monitoring the test while it's running, collecting results and logs, triggering analysis, tearing down the cluster and cleaning up after the test completes.
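As a sketch of just the provisioning step, assuming the AWS SDK for Java with credentials from the default provider chain; the AMI ID, instance type and instance count are placeholders:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class ClusterProvisioner {

    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Launch a cluster of identical load generator instances from a
        // pre-baked AMI that already has JMeter and the test dependencies.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder AMI
                .withInstanceType("m1.large")  // placeholder instance type
                .withMinCount(10)
                .withMaxCount(10);

        RunInstancesResult result = ec2.runInstances(request);
        for (Instance instance : result.getReservation().getInstances()) {
            System.out.println("Launched load generator: " + instance.getInstanceId());
        }
    }
}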

Most of this was written in Java or Groovy, and I hope to open-source the analysis tool in the future.

In December of 2009, MySpace launched a new wave of streaming music video offerings in New Zealand, building on the previous success of MySpace Music. These new features included the ability to watch music videos, search for artists' videos, create lists of favorites, and more. The anticipated load increase from a feature like this on a popular site like MySpace is huge, and they wanted to test these features before making them live.

If you manage the infrastructure that sits behind a high-traffic application, you don't want any surprises. You want to understand your breaking points, define your capacity thresholds, and know how to react when those thresholds are exceeded. Testing the production infrastructure with actual anticipated load levels is the only way to understand how things will behave when peak traffic arrives.

For MySpace, the goal was to test an additional 1 million concurrent users on their live site stressing the new video features. The key word here is 'concurrent'. Not over the course of an hour or day... 1 million users concurrently active on the site. It should be noted that 1 million virtual users are only a portion of what MySpace typically has on the site during its peaks. They wanted to supplement the live traffic with test traffic to get an idea of the overall performance impact of the new launch on the entire infrastructure. This requires a massive amount of load generation capability, which is where cloud computing comes into play. To do this testing, MySpace worked with SOASTA to use the cloud as a load generation platform.

Here are the details of the load that was generated during testing. All numbers relate to the test traffic from virtual users and do not include the metrics for live users:

Test Environment Architecture

SOASTA CloudTest™ manages calling out to cloud providers, in this case Amazon, and provisioning the servers for testing. The process for grabbing 800 EC2 instances took less than 20 minutes. Calls were made to the Amazon EC2 API, requesting servers in chunks of 25. In this case, the team was requesting EC2 Large instances with the following specs to act as load generators and results collectors:

In addition, there were 2 EC2 Extra-Large instances to act as the test controller and the results database, with the following specs:

15 GB memory

8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)

1,690 GB instance storage (4×420 GB plus 10 GB root partition)

64-bit platform

Fedora Core 8

PostgreSQL Database

Once CloudTest has all of the servers that it needs for testing, it begins running health checks on them to ensure that they are responding and stable. As it finds dead servers, it discards them and requests additional servers to fill in the gaps. Provisioning the infrastructure was relatively easy.

The diagram (figure 1) below shows how the test cloud on EC2 was set up to push massive amounts of load into MySpace's datacenters. While the test is running, batches of load generators report their performance test metrics back to a single analytics service. Each of the analytics services connects to the PostgreSQL database to store the performance data in an aggregated repository. This is part of the way that tests of this magnitude can scale to generate and store so much data: by limiting access to the database to only the metrics aggregators and scaling out horizontally.

Challenges

Because scale tends to break everything, there were a number of challenges encountered throughout the testing exercise.

The test was limited to using 800 EC2 instances. SOASTA is one of the largest consumers of cloud computing resources, routinely using hundreds of servers at a time across multiple cloud providers to conduct these massive load tests. At the time of testing, the team was requesting the maximum number of EC2 instances that it could provision. The limitation in available hardware meant that each server needed to simulate a relatively large number of users: each load generator was simulating between 1,300 and 1,500 users. This level of load was about 3x what a typical CloudTest™ load generator would drive, and it put new levels of stress on the product that took some creative work by the engineering teams to solve. Some of the tactics used to alleviate the strain on the load generators included:

Staggering every virtual user's requests so that the hits per load generator were not all firing at once (see the sketch after this list)

Paring down the data being collected to only include what was necessary for performance analysis
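To illustrate the staggering tactic, here is a small sketch: instead of every virtual user firing on the same tick, each gets a random initial delay within its request interval, spreading the hits across the interval (the user count and rate are made-up numbers, and the request itself is a placeholder):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class StaggeredLoad {

    public static void main(String[] args) {
        int virtualUsers = 1500;          // users simulated by one generator
        long intervalMillis = 1000;       // each user sends one request/second

        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(16);

        for (int user = 0; user < virtualUsers; user++) {
            // Random offset within the interval, so the 1,500 requests are
            // spread across the whole second instead of firing at once.
            long jitter = ThreadLocalRandom.current().nextLong(intervalMillis);
            scheduler.scheduleAtFixedRate(
                    StaggeredLoad::sendRequest, jitter, intervalMillis,
                    TimeUnit.MILLISECONDS);
        }
    }

    private static void sendRequest() {
        // Placeholder for the actual HTTP request a virtual user would make.
    }
}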

A large portion of MySpace assets are served from Akamai, and the testing repeatedly maxed out the service capability of parts of the Akamai infrastructure

CDNs typically serve content to site visitors based on their geographic location, from a point of presence closest to them. If you generate all of the test traffic from, say, Amazon's East coast availability zone, then you are likely going to be hitting only one Akamai point of presence. Under load, the test was generating a significant amount of data transfer and connection traffic towards a handful of Akamai datacenters. This equated to more load on those datacenters than would probably be generated during typical peaks, but that was not necessarily unrealistic given that this feature launch was happening for New Zealand traffic only. This stress resulted in new connections being broken or refused by Akamai at certain load levels, generating lots of errors in the test.

This is a common hurdle that needs to be overcome when generating load against production sites. Large-scale production tests need to be designed to take this into account and accurately stress entire production ecosystems. This means generating load from multiple geographic locations so that the traffic is spread out over multiple datacenters. Ultimately, understanding the capacity of geographic POPs was a valuable takeaway from the test.

Because of the impact of the additional load, MySpace had to reposition some of their servers on the fly to support the features being tested

During testing, the additional virtual user traffic was stressing some of the MySpace infrastructure pretty heavily. MySpace's operations team was able to grab underutilized servers from other functional clusters and use them to add capacity to the video site cluster in a matter of minutes. Probably the most amazing thing about this is that MySpace was able to actually do it. They were able to monitor capacity in real time across the whole infrastructure and elastically shrink and expand where needed. People talk about elastic scalability all of the time, and it's a beautiful thing to see in practice.

Lessons Learned

For high traffic websites, testing in production is the only way to get an accurate picture of capacity and performance. For large application infrastructures there are far too many ‘invisible walls’ that can show up if you only test in a lab and then try to extrapolate.

Elastic scalability is becoming an increasingly important part of application architectures. Applications should be built so that critical business processes can be independently monitored and scaled. Being able to add capacity relatively quickly is going to be a key architecture theme in the coming year, and the big players have known this for a long time. Facebook, eBay, Intuit, and many other big web names have evangelized this design principle. Keeping things loosely coupled has a whole slew of benefits that have been advertised before, but capacity and performance are quickly moving to the front of that list.

Real-time monitoring is critical. In order to react to capacity or performance problems, you need real-time monitoring in place. This monitoring should tie in to your key business processes and functional areas, and needs to be as close to real time as possible.