Tag Info

Although unit tests are normally good enough, it's nice to see a developer ensure that they've run the process from start to finish, not just the unit tests. Because the types of tests that should be run vary greatly with the type of application, I would discuss this with a testing point of contact for the product.
With developers who are new to me, I usually ask them ...

What has worked well for me are the following steps.
The developer makes their best effort at writing good code.
The developer gets a code-peer review from another developer
The developer runs their unit tests, and checks that they all pass.
The developer and the tester sit down for a review:
The developer walks through with the tester to let them know ...

Ideally, the DoD for each user story should mean all tests for that user story are passing, and all automation is completed, running as part of the overall automation (as opposed to on one person's machine), and running with no errors.
The real world usually means compromises, since there's rarely enough time or resources to cover all the potential implications ...

One of the most common problems that I've faced is the role of the tester. Oftentimes, teams and companies start to believe that an agile approach using TDD eliminates the need for testing. My first experience with this was when one of my former teams was told that they were going to become an agile team. Agile Testing by Lisa Crispin and Janet Gregory helped me ...

I would have a staging environment that's an exact clone of live. Then, when you have to roll back a bug, you can also track it down and fix it on the staging environment.
Once you have a comprehensive list of bugs and their causes, you can work on finding trends and stopping them from happening again.

Errors occur when some user enters data that is different from the test data.
Fuzz testing in addition to boundary testing could help. Really any kind of randomly generated or randomly mutated data (by your business rules) can help find these kinds of errors. Dedicated testers who try to break the system and who do not just test that it is working ...
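A minimal sketch of that mutation idea in Python. The `mutate` operations and the `int`-parsing example are invented for illustration; a real fuzzer would mutate according to your business rules:

```python
import random

def mutate(data: str, rng: random.Random) -> str:
    """Apply one random character-level mutation to otherwise-valid input."""
    if not data:
        return data
    i = rng.randrange(len(data))
    op = rng.choice(["flip", "drop", "dup"])
    if op == "flip":
        # replace one character with a random printable character
        return data[:i] + chr(rng.randrange(32, 127)) + data[i + 1:]
    if op == "drop":
        return data[:i] + data[i + 1:]
    return data[:i] + data[i] + data[i:]  # duplicate one character

def fuzz(parse, seed_inputs, iterations=1000):
    """Feed mutated variants of known-good inputs to `parse`, collecting
    any input that raises an exception other than a clean rejection."""
    rng = random.Random(0)  # fixed seed so any failure is reproducible
    failures = []
    for _ in range(iterations):
        candidate = mutate(rng.choice(seed_inputs), rng)
        try:
            parse(candidate)
        except ValueError:
            pass  # rejecting bad input cleanly is the desired behaviour
        except Exception:
            failures.append(candidate)
    return failures
```

The fixed seed matters: a fuzz run that finds a crash is only useful if a developer can replay the exact same inputs.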

I know this problem way too well. There's no "right" answer, unfortunately, but there are some things you can do to help with this problem.
Dependency map - do you have a list of application features that have heavy dependencies and tend to break when changes occur in other areas? If you know changes in feature X tend to break feature Y, you know you ...
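One lightweight way to keep such a map is a plain adjacency list, walked transitively to get a regression-target list when something changes. The feature names below are hypothetical:

```python
# Hypothetical feature names; a real map would come from your own application.
# "X": [...] means "changes to X tend to break these features".
DEPENDENCY_MAP = {
    "billing": ["invoicing", "reporting"],
    "login": ["profile", "billing"],
    "search": ["reporting"],
}

def regression_targets(changed, dep_map):
    """Return every feature that can break, directly or transitively,
    when the features in `changed` are modified."""
    targets, stack = set(), list(changed)
    while stack:
        feature = stack.pop()
        for dependent in dep_map.get(feature, []):
            if dependent not in targets:
                targets.add(dependent)
                stack.append(dependent)
    return targets
```

Even a map this crude gives you a defensible answer to "what do we retest?" instead of a gut feeling.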

If you can, try to promote the idea of test-first development (aka TDD, BDD, ATDD, Specification by Example) with Continuous Integration (frequent commits to a pipeline such as Hudson or Go from ThoughtWorks, which continuously runs the automated checks to see if any of them have broken after a recent commit).
Before Developers write the code, they write ...
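In test-first terms, the check exists before the production code does. A toy example using Python's built-in unittest; `apply_discount` and its rules are invented for illustration:

```python
import unittest

# Step 1: the test is written first. At this point it fails, because
# apply_discount does not exist yet.
class ApplyDiscountTest(unittest.TestCase):
    def test_known_code_takes_ten_percent_off(self):
        self.assertEqual(apply_discount(50.0, "SAVE10"), 45.0)

    def test_unknown_code_changes_nothing(self):
        self.assertEqual(apply_discount(50.0, "BOGUS"), 50.0)

# Step 2: the minimal implementation that makes the test pass.
def apply_discount(total, code):
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total

if __name__ == "__main__":
    unittest.main()  # a CI pipeline would run this on every commit
```

The point of the ordering is that the test is an executable statement of the requirement, and the CI pipeline re-runs it on every commit thereafter.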

The short version: regardless of the development methodology, your role is to provide information about the overall quality of the application. You do that via testing anything that isn't included in the developer-maintained automation, and reviewing the developer-maintained automation.
The long version: this question and its answers are a good starting ...

There is no hard and fast rule for what kind of testing a developer should do before handing off to QA. It depends on the developer, the QA team, the organization, and the product. Testing is a means to an end, not an end in itself.
The important thing is to pay attention to the quality of what QA receives and the quality of the released product. If the ...

If you want to go fast, you need to assume that once something is tested and working in a cycle, it will continue to work in that cycle. If you cannot make that assumption, you either need to spend more time testing (by yourself or with the help of others) or your developers need to deliver higher-quality code. No one but you and your developers can decide ...

The elephant in the room: maintenance.
How do we maintain this new feature:
what things need to be configurable?
can we easily roll this out (what's the rollout process)?
what support tools do we need to resolve issues with this in production quickly?
I worry that one day all of humanity will be maintaining the code of our forefathers...
think of the children!

It is hard to say what your workflow will be. I've worked a few SDET/QA jobs, and it really seems to vary quite a bit. I'll try to cover some of my experiences, though, and hopefully that will be helpful.
Everyday Things
I usually like to start my day with a little bit of blackbox, just to get my brain into motion. I'll spend a little while every morning ...

There are several useful heuristics for stopping testing. A few I can think of:
'No more time' - i.e. we stop due to a business imposed deadline.
Have you come to understand what you set out to understand about the feature? (You do set specific goals for test sessions, right?)
Mission critical bug unearthed at which point documenting and fixing the bug ...

I'm also a firm believer in the idea of developer peer review. Since adopting this methodology at my place of work, the quality of work has significantly increased. The new set of eyes provided by peer review frequently finds more efficient ways to write code or picks up errors that would otherwise have been missed.
This said, there is always the ...

This really depends on the complexity of the project, the size of the team and the difficulty of integration.
Unit Tests. Everybody knows this, but it doesn't hurt to repeat it: run your unit tests to make sure your code still does what it should. This is particularly important for developers working with code that doesn't need compilation (I'm looking at you, ...

I'd recommend seriously looking at building a framework that has the absolute minimum of repeated script code - this has the advantage of minimising update work. Similarly, I'd consider data-driven scripts with an object-oriented framework where you're building your transaction objects to harness the application's features.
That way, as the application ...
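The transaction-object idea might look like this sketch. The `LoginTransaction` workflow and the stub driver are hypothetical; in a real framework the driver would be something like a Selenium WebDriver:

```python
class StubDriver:
    """Stand-in for a real UI driver (e.g. Selenium), just for this sketch."""
    def __init__(self):
        self.page = "home"
    def fill(self, field, value):
        pass  # a real driver would type into the named field
    def click(self, button):
        self.page = "dashboard"  # pretend every submit logs us in
    def current_page(self):
        return self.page

class LoginTransaction:
    """One application workflow. Scripts say *what* to do with which data;
    this object owns *how*, so a UI change is fixed once, here, not in
    every script."""
    def __init__(self, driver):
        self.driver = driver
    def execute(self, row):
        self.driver.fill("username", row["username"])
        self.driver.fill("password", row["password"])
        self.driver.click("submit")
        return self.driver.current_page() == row["expected_page"]

def run_data_driven(transaction_cls, driver, rows):
    """Data-driven harness: run the same transaction once per data row."""
    return [transaction_cls(driver).execute(row) for row in rows]
```

Adding a new test case is then just adding a data row, with no new script code to maintain.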

"several in house people that we can have spend hours on QA"
Are these several people professional QAers/testers? Or just folks with a few hours of free time?
How long did it take to design and develop the updates to your major functionality? Perhaps instead of "hours on QA", you need "days" or "weeks"?
Have you performed a root cause analysis for any of ...

This is the exact problem that bedevils the environment where I work, and I have yet to find a strategy that works well and consistently.
Some of the strategies and techniques that help are:
The testing specialist works with the coding specialists on the unit tests. Even if the testing specialist isn't a coder, knowing the coverage of the unit tests and ...

In my opinion, some of these terms are being used incorrectly:
User requirements in plain language should be called user stories or similar.
Requirements for the hardware are often called system requirements.
Functional requirements cover what your application needs to do and in which way (i.e. its functions).
P.S. You can also read about requirements on the wiki: ...

System requirements are the translations of user requirements in a much more technical language. They are basically the things that a software must perform.
Not exactly. The system usually consists of hardware and software. In some situations, it could even include humans performing well-defined processes (for instance, changing deployed batteries).
...

Firstly, I've been exactly where you are and I know the pain you're going through. It's also very 'cool' at the moment for everyone to talk about 'Automation' and how it's a 'golden ticket' to Continuous Integration. Everybody wants their stuff out fast.
There are ways to tackle this, but personally, I think a lot of it comes with having an understanding of ...

My definition of done is generally "we don't have to worry about (e.g. touch, test, do ANYTHING with) that feature again in order to release the code."
One of the better discussions I've seen on this is a presentation last year by Ken Schwaber at a Canadian company called Pyxis; it's up on YouTube in three parts, and the first part is here

David,
Great general question and great job backing it up with specific details.
I have been focused on answering the question "how can software testers find the most defects using the smallest possible amount of time / fewest number of tests?" for the last five years. Whenever testers are trying to address the problem of "too many possible things to ...

In my organization, we try to have a developer and a tester sit down before writing any code to have exactly that discussion - because the answer will probably be different for each feature you implement.
Unit testing is a given in my team - but more often than not, the tester and developer will automate GUI tests up front too. That way the core ...

I have worked on teams as a tester and worked on teams as a tool developer. Stuart talked about the transition so I won't talk more about that, but I think I can add some helpful information on what to expect and what you can focus on in your new role.
You are going to be a development team producing products with real customers. The sooner you wrap your ...

Anecdote: I worked on one project for a national company. Testing was being done and overseen by one of the Big Name consultancies. Their testers were told to take screenshots for every test step. They abandoned this practice because (1) it was taking too much time and test progress was too slow, and (2) I was finding 90% of the defects on the project by doing ...

In my opinion, you should not go for a ready-made checklist. A more appropriate way is to develop your own checklist.
Understand your requirements, analyse them, and then sort them into the form of a checklist. Then you can get feedback or opinions from others and see whether it is adequate or needs any improvement...