I'm a new QA tester at a company in an Agile environment. During sprint planning I was told that one of my tasks would be testing user stories. How exactly do you test something like that? If anyone can give me any insight into this it would be greatly appreciated. Thanks.

5 Answers

As ByteBuster indicated, user stories are very high-level descriptions of a goal an actor or customer wants to achieve with the product, but they don't detail exactly how that goal is going to be achieved.

Developers often break user stories down into discrete development focused tasks that are necessary to achieve that goal. Developers should also be writing unit tests against those tasks.

When testing user stories, your goal as a tester is to put yourself in the shoes of the actor or persona and think of the different ways (tests) that persona might achieve the objective of the user story. Also think of things that might go wrong along the way, or other actions or flows that might block the objective of the user story from being achieved.

Remember, developers often look at software in task or functional chunks. While testers also frequently test discrete functional attributes of software, one of the primary roles of the tester is to look at software more holistically and approach testing from the eyes of the customer.

Thanks. I admit I'm fairly new to the world of software testing and agile testing, so does this just mean that I will look at the story that has been written, determine if that story is testable, and brainstorm potential scenarios that can stem from it?
–
sam2013 Jul 21 '13 at 7:09

As the other answers have said, you will probably not test the user stories directly. The method I've used in the past works like this:

Each user story will have one or more acceptance tests. These tests typically cover a high-level test scenario (such as "Given that I am logged in as a customer, clicking the link 'My Orders' takes me to a page showing all orders for that customer" - the exact phrasing may vary).

Each acceptance test can usually be broken down into one or more test cases. The example above could be considered to have three steel thread test cases (assuming that the customer login is pre-existing functionality):

After logging in, the link 'My Orders' exists.

Clicking the link 'My Orders' goes to the specified order list page.

The orders on the order list page match exactly the customer's orders in the data source.
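To make those three test cases concrete, here is a minimal sketch of them as plain Python test functions. The `CustomerPortal` class and every method on it are invented stand-ins for illustration only; a real suite would drive the actual UI (e.g. through a browser driver) or API instead.

```python
class CustomerPortal:
    """Fake portal with just enough behaviour to express the test cases."""

    def __init__(self, orders_by_customer):
        self.orders_by_customer = orders_by_customer  # the "data source"
        self.current_user = None

    def log_in(self, customer_id):
        self.current_user = customer_id

    def visible_links(self):
        # Only logged-in customers see the 'My Orders' link.
        return ["My Orders"] if self.current_user else []

    def click(self, link_text):
        if link_text == "My Orders":
            return {"page": "order-list",
                    "orders": list(self.orders_by_customer.get(self.current_user, []))}
        raise ValueError(f"no such link: {link_text}")


DATA_SOURCE = {"cust-42": ["order-1", "order-2"]}


def test_my_orders_link_exists_after_login():
    portal = CustomerPortal(DATA_SOURCE)
    portal.log_in("cust-42")
    assert "My Orders" in portal.visible_links()


def test_my_orders_link_goes_to_order_list():
    portal = CustomerPortal(DATA_SOURCE)
    portal.log_in("cust-42")
    assert portal.click("My Orders")["page"] == "order-list"


def test_order_list_matches_data_source():
    portal = CustomerPortal(DATA_SOURCE)
    portal.log_in("cust-42")
    assert portal.click("My Orders")["orders"] == DATA_SOURCE["cust-42"]
```

Note how each test case maps one-to-one onto a test function, and how the third test compares the page against the data source rather than against a hard-coded expectation.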

Acceptance tests should probably come from the product owner, but they may be left for you to define (in which case, it helps to work with the product owner if you can).

When testing a user story, I personally prefer to leave the acceptance tests until after I've performed all the tests associated with the tasks created from the user story. The tasks tend to be more granular and lower-level than the acceptance tests - for example (continuing the scenario above): "create a stored procedure to retrieve customer orders from a customer ID".

Acceptance tests are by default part of the steel thread (also known as the "happy path" - the absolute core functionality without which the story can't be complete). Tests that verify the correct function of features required by the acceptance test are also steel thread. Tests for handling error conditions may not be steel thread, and tests for edge conditions probably aren't. As a general rule, save the tests that aren't steel thread until all the steel thread tests pass.
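The ordering rule in that last sentence can be sketched as a tiny framework-free runner (the `run_suite` helper below is hypothetical; real suites would use their framework's tags or markers to the same effect): run the steel thread tests first, and defer everything else until they all pass.

```python
def run_suite(steel_thread_tests, other_tests):
    """Run steel-thread tests first; defer the rest until the core path passes."""
    results = {}
    for test in steel_thread_tests:
        try:
            test()
            results[test.__name__] = "pass"
        except AssertionError:
            results[test.__name__] = "fail"

    if all(outcome == "pass" for outcome in results.values()):
        # Core path is solid - now spend time on error and edge cases.
        for test in other_tests:
            try:
                test()
                results[test.__name__] = "pass"
            except AssertionError:
                results[test.__name__] = "fail"
    else:
        # No point polishing edges while the happy path is broken.
        for test in other_tests:
            results[test.__name__] = "deferred"
    return results
```

The design point is simply prioritisation: a failing edge-case test tells you little while the story's core flow doesn't work yet.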

My preference is that the user story is not tested until all tests have passed or the functionality behind non-passing tests has been put on the backlog for another sprint or deemed out of scope - that is, every test has been accounted for and every task has been tested.

Of course, circumstances can make a mess of this, but it's a decent starting point. Good luck.

User Stories are the highest-level requirement artifacts in the software development lifecycle.
Here are some examples of User Stories:

As an application user attempting to save a document, I want to see a warning if the desired document name already exists; the warning should allow me to choose whether to overwrite the existing one;

As a Web site administrator opening the Web site, I want to see a dashboard containing the top outstanding administrative tasks;

As a user submitting a document to a remote system, I want to be informed of network errors and allowed to save the document locally;

As you can see, User Stories are pretty broad by nature. Often, highest level means the least detail and minimal structure, and therefore they may be interpreted differently by different people. This is why

Usually, QA do not test the User Stories directly.

Instead, most issue tracking tools let you map User Stories to individual development tasks.
Check this slideshow for a deeper theoretical insight. This StackOverflow question and its answers have some good points on the topic, as well as on ways one can use JIRA for such mapping.

As tasks are implemented, they are tested by QA through the normal process, which you may already be quite familiar with.
As soon as all tasks bound to a certain User Story are completed, QA may mark the User Story completed as well.
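That completion rule is simple enough to sketch directly. The task dictionaries below are an invented representation; real issue trackers hold this state for you.

```python
def story_complete(tasks):
    """A User Story is complete only when every bound task is implemented and tested."""
    return bool(tasks) and all(t["implemented"] and t["tested"] for t in tasks)
```

A story with no tasks yet, or with any task still untested, stays open.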

I should make an important note: I've seen some teams that do test User Stories directly. They assign specific success criteria to each individual User Story, just as in @Pangea's answer. The choice is yours; consider the size of the project and the dev team, and perhaps other criteria, to decide whether you really need direct testing of User Stories.

I'd recommend watching the video "James Bach on testing in an agile software development team" - he talks about testing during sprints, covers user stories, and discusses things testers should be looking for and asking questions about. He has a lot of ideas, and since you are new to the testing game it might be worth watching. Plus the video is only about 30 minutes long. Enjoy!