Tag Archives | load testing

Our friends at BlazeMeter hosted a live session last week giving testers and developers everything they need to run performance testing with JMeter, the popular open source load testing tool. And we’re happy to share the session here on the uTest Blog.

The hour-long session starts with an overview of performance testing, then moves on to how to run performance testing with JMeter, why it’s worth using BlazeMeter with JMeter, and concludes with a Q&A hosted by BlazeMeter’s Ophir Prusak.

BlazeMeter is a proud uTest partner providing next-generation, cloud-based performance testing solutions that are 100% Apache JMeter™ compatible. The company was founded with the goal of making it easy for everyone to run sophisticated, large-scale performance and load tests quickly and affordably.

JMeter is the leading open source load testing tool, and cloud-based performance testing provider BlazeMeter will be hosting a live webinar next week giving testers and developers everything they need to run performance tests with the popular tool.

In this webinar on Wednesday, November 19, at 1pm Eastern Time, BlazeMeter’s Ophir Prusak will cover the vast capabilities and lesser-known limitations of the popular open source load tool. The session will consist of three parts:

An overview of performance testing.

How to run performance testing with JMeter. Learn best practices, tips, and what you can and can’t do with JMeter.

Why it’s worth using BlazeMeter with JMeter. Learn the benefits and additional features you can get by running performance tests through BlazeMeter.

This piece was originally published by our good friends at BlazeMeter – the Load Testing Cloud. Don’t forget to also check out all of the load testing tool options out there — and other testing tools — along with user-submitted reviews at our Tool Reviews section of the site.

If you often run load tests, you probably have a mental checklist of questions that run through your mind, including:

How many threads per JMeter engine should I use?

Can my engine handle 10% more threads?

Should I start with the maximum number of threads, or add as I go?

All of these questions are important and should be carefully considered – but what about the load test itself? Have you considered how the actual procedure will be managed?
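To build intuition for the thread-count questions above, a toy experiment can help before you commit to a full JMeter plan. The sketch below is a minimal illustration, not a substitute for a real load testing tool: it spins up a trivial local HTTP server with the Python standard library, then ramps worker threads against it (the thread counts and requests-per-thread values are arbitrary placeholders).

```python
# Toy load sketch: ramp worker threads against a local server and
# report average response time and error count. Illustrates the
# "how many threads" question only; use JMeter for real tests.
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; run the server in the background.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def worker(samples, requests_per_thread=5):
    for _ in range(requests_per_thread):
        start = time.time()
        try:
            urlopen(url, timeout=3).read()
            samples.append(time.time() - start)  # list.append is thread-safe in CPython
        except OSError:
            samples.append(None)  # record the failure

def run(num_threads):
    samples = []
    threads = [threading.Thread(target=worker, args=(samples,))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ok = [s for s in samples if s is not None]
    errors = len(samples) - len(ok)
    return sum(ok) / len(ok), errors

# Ramp: 10 threads, then 10% more, mirroring the checklist above.
for n in (10, 11):
    avg, errors = run(n)
    print(f"{n} threads: avg {avg * 1000:.1f} ms, {errors} errors")
```

Comparing the averages and error counts as you ramp is the same reasoning you apply at scale: add load in steps and watch for the point where latency or errors start climbing.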

Is your application, server or service fast enough? How do you know? Can you be 100% sure that your latest feature hasn’t triggered a performance degradation or memory leak?

The only way to be sure is to regularly check the performance of your website or app. But which tool should you use?

In this article, I’m going to review the pros and cons of the most popular open source solutions for load and performance testing.

Chances are that most of you have already seen this page. It’s a great list of 53 of the most commonly used open source performance testing tools. However, some of these tools support only the HTTP protocol, some haven’t been updated in years, and most aren’t flexible enough to provide parametrization, correlation, assertions or distributed testing capabilities.

We’ve all seen the disastrous results of inadequate load testing: sites unable to shoulder the traffic they receive. The healthcare.gov crash in the United States is one example where people’s livelihoods were actually put at risk (i.e. this wasn’t someone being inconvenienced while pre-ordering the iPhone 6).

So you’d think that more organizations would be taking load testing seriously as part of the software development process, given the bottom-line risks to the business. However, according to a Software Testing Magazine report citing a survey from the Methods & Tools software development magazine, only 24% of organizations load test all of their projects, and as many as 34% don’t perform any load or performance testing at all.

I’d be interested to dig deeper into this report, because it isn’t clear whether this is a widespread issue in software development or one confined to certain sectors. Organizations in this survey’s respondent pool may want to rethink their load testing strategies if they’re in industries with a low tolerance for crashes or slow site performance, such as retail. Nonetheless, this is still a surprising number.

Is load testing just an optional step for software development organizations? Or have they still not learned from the number of high-profile site crashes as of late? We’d be interested to hear from you in the comments below.

CustomerCentrix, creator of the cloud testing tool LoadStorm, today announced that it has released a LITE version of the tool.

This version is designed to be a cost-effective, easy-to-use complement to its enterprise-level tool, LoadStorm PRO. According to the company, LoadStorm allows users to set up tests in the web application and run them from the cloud, with no hardware to purchase and no software to install. Users can try LITE for free from the company’s site.

Don’t forget to leave a review of LoadStorm if you’ve used the tool in the past, and be sure to browse the complete library of testing Tool Reviews to compare similar load testing tools and see which is best for your testing team’s needs.

Two of the most exciting things in the world – load testing and the US census – recently came together to provide us with an interesting case study in launch preparation. I’m talking of course about the new (and free) website on the 1940 census that crumbled under huge traffic last week.

The story is interesting on a number of fronts (particularly, how it was paid for), but for the sake of this blog, I want to stay focused on the load requirements put in place prior to the site’s launch. It’s something we see here at uTest quite frequently: companies simulate what they consider to be an absurd amount of traffic, only to have that figure exceeded after launching. Not the worst problem to have, but a problem nevertheless. This can be caused by a huge media pickup or, as was the case with Inflection (the company behind the census site), by becoming a trending topic on Twitter.

So how well prepared were the operators of the site? Here were the contractual requirements in terms of load testing, according to msn.com:

“When browsing from one image to another, each image should be presented to the user in 3 seconds or less.”

“When moving from the standard rendered image to each zoom level (e.g. zoom 1x, 2x, 3x), the reformatted image should be rendered in 2 seconds or less.”

“Support up to 10 million hits per day while providing response times of less than three seconds for keyword searches of the descriptive metadata.”

“Support up to 25,000 concurrent users.”
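Those contractual figures invite some back-of-envelope math. The sketch below uses only the numbers quoted above (10 million hits/day, 25,000 concurrent users, 3-second responses); the 5x peak-to-average multiplier is an illustrative assumption, since real traffic is bursty rather than uniform:

```python
# Back-of-envelope math on the quoted contractual requirements.
# Traffic figures come from the contract quoted above; the peak
# multiplier is an assumed value for illustration only.
hits_per_day = 10_000_000
avg_hits_per_sec = hits_per_day / (24 * 60 * 60)
print(f"average load: {avg_hits_per_sec:.0f} hits/sec")  # ~116 hits/sec

peak_multiplier = 5  # assumption: traffic is bursty, not uniform
print(f"assumed peak: {avg_hits_per_sec * peak_multiplier:.0f} hits/sec")

# Little's Law: concurrency ~= arrival rate x time in system,
# so the arrival rate implied by the concurrency requirement is:
concurrent_users = 25_000
response_time = 3  # seconds, per the requirement
implied_rate = concurrent_users / response_time
print(f"25,000 concurrent users at 3 s each implies ~{implied_rate:.0f} req/sec")
```

The gap between the ~116 hits/sec daily average and the thousands of requests per second implied by the concurrency requirement is exactly why "hits per day" alone makes a poor load testing target.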

And how far off were they? Inflection’s general manager was quoted as saying, “We were expecting a flood, but we got a tsunami.” Here were the hard numbers:

That’s right: if your mobile site doesn’t load within THREE seconds, 60% of visitors will abandon it. And, seeing as we’re not a very patient society anymore, 57% of mobile users will only give your site an extra two seconds (bringing the grand total to five) to load before aborting the mission, according to data pulled from a Compuware survey. And it doesn’t matter if your site is slow just once – the damage has already been done. Need more convincing? Check out this infographic to find out why load testing is so important.

And remember, your mobile site doesn’t just need to load well in the lab. It has to work out in the wild … because apparently 67% of people use their smartphones on a date (and they’re probably in the middle of a crowded restaurant). From Tatango:

It used to be that people visiting your website were sitting in front of a computer – traffic to your site may have peaked on specific days, but visitors still needed to be at a computer. Load testing for that scenario isn’t good enough any more. Now your site must be prepared for even more traffic, because in today’s mobile world people can access the web anywhere, anytime. Want to snag that Black Friday sale while still in a turkey coma? No problem – grab your smartphone and buy that Blu-ray player from the comfort of your bed. Stuck in a board meeting and realize you forgot to send your sweetheart flowers for Valentine’s Day? No problem – tap tap tap and it’s done. That is, if the site you’re ordering from was prepared for the holiday traffic.

Time and time again companies fail to prepare their websites for high traffic days. I’m not talking “it’s payday and I’m going to buy a new pair of shoes,” I’m talking about Black Friday, major holidays, extensively advertised product launches, and now, Valentine’s Day.

Last November several e-tail sites crashed under the weight of the Black Friday shopping rush. Target underestimated the popularity of its new, highly advertised Missoni for Target line and left customers frustrated after its site crashed repeatedly. How many Coca-Cola polar bear commercials did you see during the Super Bowl? Enough to think that Coke would be prepared for heavy web traffic. They weren’t (and neither was Acura or the site for the film Act of Valor).

After all these high-profile site crashes, you’d think companies would begin to do extra load testing in the lead-up to especially high-traffic events. And you’d think sites focused on jewelry, chocolate and flowers would take extra care to prepare for Valentine’s Day. Alas, they still haven’t gotten the message, and it’s hurting their sales. From Pingdom:

Valentine’s Day is a great day for any vendor selling flowers. Over the years, a large number of websites selling flowers have sprung up, and as you might expect, many of these websites are flooded by eager shoppers on February 14 wanting to buy flowers and gifts for their loved ones.

This is big business. Americans are expected to spend $18.6 billion on Valentine’s Day gifts this year.

Now here is the catch. Every year, some of these websites won’t be prepared to handle the increase in visitor traffic and as a result they slow down significantly, or even crash under the pressure. …

For our customers, this means they can find the testing expertise they need, no matter where they are in their SDLC. And for testers, it means more earning opportunities for those with expertise in areas like security testing, performance engineering, or localization validation. Like I said, nothing major. </sarcasm>

In all seriousness, these are exciting times around the halls of uTest. We’ve spent the past 12 months trialing new types of testing services with select beta customers. And now, we’re ready to offer them to any and all companies, on demand. A quick introduction to uTest’s new suite of testing services:

Security testing services to help you avoid launching products with common security- and privacy-related vulnerabilities. Our services include tools-based static and dynamic security testing, as well as manual penetration testing from trusted white-hat security testers.

Load testing services to make sure your app is ready for peak traffic, and that performance won’t degrade under heavy load. Our services include live load, simulated load and a hybrid load offering that combines cloud-based load testing with live testers.

Localization testing to validate that your app is saying what you think it’s saying. Services include translation validation from native speakers who live in-market, as well as full L10N testing that covers content translations, currency, taxes, shipping options and more.

Usability testing to help you launch products that are intuitive, clean and achieve high conversions. Services include survey-based testing with targeted focus groups (by age, gender, education, hobbies, location, etc.) or usability audits from one of our UX experts.

Special thanks to our friends at Stein + Partners for all their help with our rebranding, as well as an epic month of late nights from the amazing uTest crew. And finally, a word of thanks to our testers for their help in this launch, and the dozens of customers who helped us learn so much about each of these new types of testing. If you’d like more info about any of these new services, drop us a note.

We’ve got more on the way in the coming months. We’re not going to rest until we’ve completely reinvented the way testing services are provided in this ever-evolving apps universe.

Have a comment? Want to tell us you hate/love the new look? Drop us a comment and let us have it!

Update: Mike Butcher over at TechCrunch just took this news prime time. Seems we’re not the only ones who recognize the need for better app security testing.

One thing I’ve learned in life is that it’s important to test everything to prepare for worst-case scenarios — whether it’s software, emergency response, or simply your new snow tires in a desolate parking lot.

A friend of mine had recently bought new snow tires, and the next time I saw him I asked if he had “tested” them yet. He gave me a quizzical look, and I knew I’d have to take him to an empty parking lot and pretend we were teenagers doing doughnuts all over again. My point was that he’d never know the tires’ capabilities and limitations unless he tested them in a (relatively) controlled environment.

The same goes for load testing.

Yesterday, we were hit with an unusually messy, two-part storm that involved precipitation in every liquid state. Morning commuters knew this would impact the train schedule of Boston’s public transportation system, the MBTA. In an effort to avoid standing outside in rain/snow/sleet, everyone logged onto the MBTA’s website, which couldn’t handle the demand and was brought to its knees.

At the very moment people needed updates the most, the site was unable to help. This is a prime example of why it’s important in the software testing world to test in controlled environments. Had the MBTA done more load testing, it could have flushed out potential errors in advance and served its customers better.

(It’s worth noting that the Mass DOT & MBTA are far from strangers to technology and do a wonderful job utilizing Twitter. You can follow them @MassDOT & @mbtaGM.)

In part II of our interview with SOASTA’s Dan Bartow, we get his thoughts on why load testing often gets neglected; advice for test tool selection; the challenge of load testing for mobile apps; SOASTA’s plans for 2011; fights between flying sharks vs. flying crocodiles and more. If you missed our previous segment, you can read Part I here. Enjoy!

uTest: True or false: Load and performance testing is often one of the most neglected phases of software quality. Please explain why this is (or is not) the case.

DB: True! The time allotted for QA in the software development lifecycle has always been the first thing to get squeezed when a project falls behind. Traditional software methodologies such as Waterfall essentially go from requirements to development and ultimately QA at the tail end. When development activities fall behind, the only thing left to cut while still delivering the product on time is the QA cycle. Performance still isn’t a part of many project plans today (that’s almost a separate topic in itself), but when it is in the plan, it usually gets a slice of QA time that is already too short in most cases.

Now we live in an agile development world, and while agile functional QA is catching up, we still don’t have agile performance testing as an industry standard. The reason is that the dominant product in lab performance testing, HP LoadRunner, requires you to write code for your performance tests that is more complex than the actual web application code you’re testing. If you have to write your test cases in C, and it takes two weeks to script an end-to-end scenario on a finished web app, then you have dead weight in your dev lifecycle. As a result of these weaknesses, companies have lost confidence in the value of performance testing their apps. The way to reinstate this confidence is with a modern testing tool and a modern approach to testing.

uTest: How important is tool selection when it comes to load and performance testing? Are testing failures a result of this or something else, like personnel?

DB: Tool selection is very important for overall success, although testing failures can stem from people, processes and/or technology. You need the right tool for the job, and you need the right people using it in a process that’s set up for success. Just like QA isn’t a one-size-fits-all shoe, neither is performance testing.

Personally, though, I think most testing failures are a leadership and execution problem, not a result of the tools or processes being used. Quality comes from the top down. The companies delivering the highest-quality offerings are the ones that build quality in from the CEO all the way through the company. Probably every tester reading this knows what it’s like to be a QA Engineer at a company that doesn’t seem to actually care about quality. How ironic! I said tool selection was very important, but I really don’t even want to focus on that here, because tools are just tools. Before you worry about whether you have the right tools, time should be spent making sure you have the right attitudes and the right players on your team. Once you have a good team pushing the capabilities of your toolset, you’ve got a foundation for success and you can start driving higher.

uTest: How does the expansion of mobile apps and devices impact load testing? Is this a game-changer? Or something current load testing is well suited for?