
Carnegie Mellon University
MSE SCS Project
Quality Assurance Plan
Pangea
Version 1.1
May 7, 2008
Lyle Holsigner
Snehal Fulzele
Sahawut Wesaratchakit
Ken Tamagawa
Smita Ramteke
Document Revision History
Version  Date        Author  Reviewer  Updates
0.1      04/03/2008  Team    Team      First draft
0.2      04/11/2008  Team    Team      Second draft
1.0      04/27/2008  Team    Team      Final draft
1.1      05/07/2008  Team    Team      Fixes after Bosch team's review
Changes in version 1.1
This version contains updates to the QA plan based on the Bosch team's review. Changes were made to certain sections of the plan; however, not every issue on their review list was addressed, as some were minor and not considered genuine issues.
The major issues addressed were:
1. There is no relation between the strategy and the process lifecycle
This issue was addressed by describing the relation between Agile Unified Process and our quality
assurance process.
2. The goal for testing is not measurable
The team reflected on measurable goals for testing and then described them in the plan.
3. No mention of how client satisfaction is measured for acceptance testing.
This issue was addressed under Acceptance testing goals where we clarified that we would ask the
client if he was satisfied.
4. The time allocated for integration testing is not reasonable
A reasonable time allocation for integration testing was discussed by the team and the plan was
revised accordingly.
5. The refactoring of code will break test cases
The team will be aware of the risk of breaking test cases while refactoring and will try to keep the impact of refactoring to a minimum.
6. How will you make sure of the traceability between your UML models and the code?
We will check in design models to version management to keep code and design models in sync. Beyond that, we did not address this issue because we use GMF for design model and code generation. For the parts that we modify without the GMF based model driven approach, we will use design reviews to check traceability.
7. Major issues related to test driven development
The team reflected on the use of test driven development. As mentioned by our reviewers, we had similar concerns. Although the Agile Unified Process mentions test driven development, we agree that it cannot be used properly for UI related work. Thus we will not use a test driven development approach in our development.
8. Responsibility of Support Manager is not mentioned
We described the responsibility of the support manager, which involves setting up the tools.
9. There is no description of risk management
We did not address this issue as we do not think risk management is a process that is part of quality
assurance.
10. Tool for coverage not mentioned
We mentioned the tool we will use for test coverage: EclEmma, a free Eclipse coverage tool.
11. Time for architecture review in Spring is counted in summer
We fixed this error in our plan by not counting them towards summer semester.
12. The exit criteria for static analysis are not present
We addressed this issue by establishing exit criteria for static analysis.
13. The time spent for acceptance test is not enough
We addressed this issue in this document. We decided to spend more hours on acceptance testing.
14. The plan/releases schedule is not mentioned in the documentation.
The team did not address this issue in the document as we did not find it relevant here.
15. 4 person-hour to analyze the defects per week is not reasonable.
We realized that we need to spend more time here. Thus this issue was addressed.
16. Mismatch in the CCB preparation: page 17 mentioned that one member is assigned to analyze all defects before the meeting; page 15 mentioned 4 members x 1 hour
This issue was addressed by the team, and the mismatched content was made consistent.
17. There is no process improvement for the quality assurance plan
We clarified that the process improvement is for the quality assurance process.
18. To assign the QA manager to be responsible for the entire testing job is not reasonable
This issue was addressed, as we realized the QA manager could not take responsibility for the entire testing job.
19. The responsibility of regression test is not assigned
This issue was addressed in the plan. We have the QA manager responsible for regression testing.
Changes in version 1.0
This version contains updates to the QA plan based on team review. References and appendix section were
updated and the document structure was changed.
Changes in version 0.2
This version contains updates to the QA plan based on the team review of the first draft. Checklists in the
Appendix sections were added and references were updated. Some sections of the document were omitted
and grammatical errors were corrected.
Changes in version 0.1
This version contains the first draft of the initial QA plan. All sections are added with the appropriate content
in each section as per team discussion.
Table of Contents
Acronyms
1. Introduction
2. Project Context
   Project goals
   Context diagram
   High priority quality attributes
3. Quality Goals
   Documents
   Code
   Testing
4. Quality Assurance Strategies
   High Level Summary of Strategies
   Traceability Matrix
   Static Analysis
   Testing
      Unit Testing
      Integration Testing
      Regression Testing
      Acceptance Testing
   Review
      Requirement Documents Review
      Architecture Review
      Design Model Review
      Informal Code Review
      Formal Code Inspection
   Refactoring
   Defect Tracking and Version Control
5. Quality Assurance Process Organization
   Time and Human Resources
   Team Organization
   Artifacts
   Review Process
   Change Control Board Process
   Quality Measures and Metrics
   Quality Process Improvement
6. Appendix
   Architecture Review Checklist
   Design Model Review Checklist
   Code Review Checklist (tailored)
Reference
Acronyms
 ABLE – Architecture Based Language and Environments
 ACDM – Architecture Centric Development Method
 AE – Architecture Evolution
 AE tool – Architecture Evolution Tool
 KLOC – thousand lines of code
 CCB – Change control board
 QA – Quality Assurance
 SCS – School of Computer Science
 SVN – Subversion
 CnC – Component and Connector
 AUP – Agile Unified Process
 RUP – Rational Unified Process
1. Introduction
This document outlines the quality assurance plan to be followed by team Pangaea for the execution of the
MSE SCS studio project (2008). We briefly introduce our project context, followed by quality goals and
strategies to achieve those goals. We then describe the quality assurance process organization and
improvement.
2. Project Context
The primary client (David Garlan) is an active researcher in the area of software architecture. He is leading a
research project called ABLE (Architecture Based Language and Environments), which conducts many
research projects leading to an engineering basis for software architecture. One of the research projects is
AcmeStudio, which is a customizable editing environment and visualization tool for developing software
architectural designs. Currently, Garlan and Bradley Schmerl (Technical advisor) are conducting research in
the area of software architecture evolution. They want to build a tool that allows architects to plan
architectural evolution. They expect that this tool will facilitate research in the area of architecture evolution.
Project goals
Following are the project goals defined by the client:
 Provide a step toward the overall vision of architectural evolution research by extending AcmeStudio
to display and compare multiple architecture paths.
 Provide a platform for future development in this area to the client.
 Demonstrate the usage of the architecture evolution tool.
Context diagram
This view defines the context of the system during normal operation.
Figure 1: Context diagram
The software architect uses our tool to draw the architecture evolution path, compare architecture instances, and analyze evolutions (evolution paths). However, the architect uses AcmeStudio to draw the architecture instances within the evolution. Our system and AcmeStudio interact so that actions required by the architect, such as creating architecture instances in diagrams, drawing evolution paths, and comparing instances, can be accomplished. Both our system and AcmeStudio are based on Eclipse.
The following are the key requirements of our tool described in the context diagram:
 Diagram Architecture Evolution
The diagram editor should provide basic editing functionality such as creating instances and transitions, copying and pasting elements, etc. In addition, the user should be able to link an instance element to an existing or new Acme System.
 Compare Instances
The compare feature will allow the architect to see the differences between two instances visually on the overview diagram. The architect selects any two instances from the overview diagram and runs the compare feature; the tool then shows the differences between the instances.
 Quality trade-off analysis
The software architect can perform quality analyses on evolution paths to select the best-fit path using the AE tool. The architect can compare paths and make trade-offs.
Other functional requirements
The following are other functional requirements that apply across the whole system.
 Logging
 Important system interactions, such as acquiring resources, are logged. Response measure: logging is performed as soon as an important system interaction completes, or upon a system crash. This measure is unrelated to the stimulus.
 Error handling
 Checked exceptions (in Java) are caught or thrown appropriately. Response measure: the system will not terminate for any invalid user input.
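This error-handling goal can be sketched in Java; the class and method names below are hypothetical illustrations, not taken from the plan. A checked exception raised by invalid user input is caught and logged rather than allowed to terminate the tool:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class InputHandling {
    private static final Logger LOG = Logger.getLogger(InputHandling.class.getName());

    /** Checked exception signalling malformed user input. */
    static class PathFormatException extends Exception {
        PathFormatException(String msg) { super(msg); }
    }

    /** Parses a step count; throws a checked exception on bad input. */
    static int parseStepCount(String input) throws PathFormatException {
        try {
            int n = Integer.parseInt(input.trim());
            if (n < 0) throw new PathFormatException("negative step count: " + n);
            return n;
        } catch (NumberFormatException e) {
            throw new PathFormatException("not a number: " + input);
        }
    }

    /** Caller catches the checked exception; the tool logs it and continues. */
    public static int safeParse(String input) {
        try {
            return parseStepCount(input);
        } catch (PathFormatException e) {
            LOG.log(Level.WARNING, "Invalid input, using default", e);
            return 0; // a default keeps the system running instead of crashing
        }
    }
}
```

The catch block is where the response measure is realized: invalid input produces a logged warning and a safe default rather than termination.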
High priority quality attributes
 Extensibility
 The AE tool allows third party plug-in developers to add their own quality trade-off analysis plug-ins to the system and run their analyses. The integration should take no more than 10 hours.
 A third party plug-in developer can change data into the required format and structure so that it is reflected in the overview in less than a day.
 Usability
 The AE tool should be consistent with Eclipse graphical editors. The user should not have to refer to the user manual for common operations such as drawing a diagram.
 The system should respond to the user's interaction while developing a diagram and be ready for the next interaction within ten seconds; if an interaction takes longer than ten seconds, the tool should indicate progress to the user.
Detailed information on the functional requirements can be found in our use case specification documents [10], [11], [12]. Other project quality attribute scenarios can be found in the Quality Attribute Scenarios document [9].
Thus, given the project goals, it is imperative that we meet the functional and quality attribute requirements of the system. The Agile Unified Process (AUP), along with the Architecture Centric Development Method (ACDM) for design, will be followed to achieve a high level of customer satisfaction.
3. Quality Goals
This section will describe the quality goals that we are trying to achieve. We have categorized our quality
goals under the sections: Documents, Code and Testing.
Documents
Documents cover artifacts such as:
 Requirement specification, such as use cases, paper prototypes (user references), and supplemental
specification
 Architecture specification
 Design documents
The quality goal is:
 The reviewed documentation should have fewer than 2 catastrophic defects per document, 1 major defect per page, and 10 minor defects per document.
The aforementioned levels are defined as follows:
 Catastrophic: such a defect demonstrates a different understanding of the discussed issue among team members (the reviewer and the owner of the document). For example, 'the architecture specification is missing key quality attribute scenarios' is a catastrophic defect.
 Major: such a defect demonstrates significant incorrectness in the documentation but does not demonstrate a different understanding of the discussed issue. For example, 'a component is missing in the CnC view of the architecture specification' is a major defect.
 Minor: such a defect involves insignificant incorrectness in the documentation. For example, spelling mistakes and grammatical errors are minor defects.
Code
Quality goals for generated as well as hand-written code are:
 The number of defects found during the code review is expected to be more than the number of
defects found in the post code-review testing (unit, integration and acceptance testing). In other words
we expect more defects to be caught earlier in the implementation phase.
 The defect rate should be no more than 5 defects per KLOC (delivered defect rate)
 The code should adhere to Sun’s [1] Java coding standards.
 The implementation should adhere to the documented architecture specification (architecture
conformance review).
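As a worked check of the delivered defect rate goal, the computation is straightforward; the 7000 LOC figure used here is an assumption borrowed from the code review estimate later in this plan:

```java
public class DefectDensity {
    /** Delivered defects per thousand lines of code (KLOC). */
    static double defectsPerKloc(int defects, int linesOfCode) {
        return defects / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // With roughly 7000 LOC, the 5 defects/KLOC goal allows
        // at most 35 delivered defects in total.
        System.out.println(defectsPerKloc(35, 7000)); // 5.0
    }
}
```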
Testing
This section covers quality goals under the categories of requirements (functional as well as quality attributes), unit, integration, and acceptance testing.
 Unit testing:
 Each code module should have an associated test suite.
 The defect rate should be no more than 5 defects per KLOC.
 Integration testing:
 At least 95% of the bugs should be caught before going into the acceptance testing.
 Integration testing should take less than 72 person hours (1.5 days work for the team).
 Acceptance testing:
 Client expresses satisfaction over all 'must have' features without major re-work (work greater than 16 person hours is considered major re-work). We will ask the client to express feedback on each 'must have' requirement to judge their satisfaction.
 Requirements: testing requirements is the same as validating the system.
 Functional: quality goals for testing functional requirements are:
 All 'must have' requirements defined in the traceability matrix should be validated to confirm that they work as per customer expectations. Each 'must have' requirement should have at most one defect.
 Every requirement defined in the traceability matrix should have at least one test case. All test cases should pass before any release to the customer.
 Quality attributes: the quality goal for testing quality attributes is:
 The high priority quality attribute scenarios' response measures are met.
4. Quality Assurance Strategies
Each testing strategy has its advantages, but they come at the expense of time and effort. For example, using a profiling tool will find bottlenecks in the code, but involves spending time and effort learning and running the tool. Since performance is not a high priority quality attribute for this project, we can choose not to work on performance bugs; thus, we accept the risk associated with not finding performance bottlenecks in the code. (On the other hand, intra-procedural static analysis using FindBugs will uncover the bugs we are more concerned about.)
High Level Summary of Strategies
Pangea will maintain a traceability matrix to map requirements to code artifacts and to manage changes to the
requirements effectively. As a measure to find low level coding defects, static analysis will be applied
individually by developers and on the mainstream repository. On an ongoing basis, unit tests will be
developed and applied prior to code check-in. This will minimize defects prior to integration of units.
Integration testing will be applied to interfacing code units after check-in. Unit tests and integration tests will
also serve as the foundation for regression testing. Extensive reviews of varying levels of formality will be conducted on all software artifacts. Reviews will help to eliminate defects and to promote our essential quality attributes. Code issues identified through QA strategies shall be addressed through refactoring at the end of each iteration. Defects will be tracked using the Mantis defect tracking system, and Subversion will be used as the configuration management tool. Additionally, metrics shall be examined at the end of each iteration and shall serve as the basis for improving our quality process: at the end of each iteration we will also reflect on the quality process to make it more efficient based on the needs at that stage of the project. We elaborate on our QA strategies in the following sections.
We have been using a customized AUP as our software development process. AUP is an iterative process, and we shall execute quality assurance activities such as reviews and testing in every iteration. We shall deliver a working product incrementally and get client feedback as early as possible.
Traceability Matrix
A traceability matrix is maintained for the requirements. It includes detailed functional requirements, and each of these requirements is traceable to a high level use case. All the requirements in the matrix are marked as 'must have' or 'nice to have' by the customer. Since the matrix includes all functional requirements, it can act as a basis for our testing. One of our quality goals is to have at least one test case for each of the requirements mentioned in the matrix; achieving this goal will ensure that we do not miss requirements. In addition, the matrix will ensure that all code modules are traceable to the requirements.
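The goal of at least one test case per requirement can be checked mechanically against the matrix. A minimal sketch, with hypothetical requirement and test case IDs (the real matrix is a document, not code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TraceabilityCheck {
    /**
     * Given a map from requirement ID to the test case IDs covering it,
     * returns the requirement IDs that have no associated test case.
     */
    static List<String> uncovered(Map<String, List<String>> matrix) {
        List<String> missing = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : matrix.entrySet()) {
            if (e.getValue().isEmpty()) {
                missing.add(e.getKey()); // goal violated for this requirement
            }
        }
        return missing;
    }
}
```

Running such a check before each release would flag requirements that would otherwise slip through untested.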
Static Analysis
We will use FindBugs to perform static analysis over the code and find potential problems. We think the use of FindBugs will be a good complement to the code reviews. The reasons are:
 FindBugs is relatively fast (compared to tools like Coverity) to execute on a substantial piece of code, so running the tool will not take much time.
 Defects such as null dereferences can easily be missed by manual code review but are easier to find using FindBugs.
 FindBugs has a good filtering and prioritization mechanism that allows developers to concentrate on the few kinds of bugs relevant to a given code module.
 Use of FindBugs will help focus the code reviews on potential problems rather than on discussions about low level coding practices.
We decided not to use other types of static analysis techniques such as dataflow analysis or annotation
(ESC/Java) because it is time consuming and we believe that problems that can be detected with dataflow
analysis (such as dereferenced null pointers) can be caught by FindBugs and code reviews. We are also not
using any profiling tool because high performance is not a high priority quality attribute requirement.
Static analysis will be considered complete when:
 the high priority defects reported by FindBugs have been addressed, or
 there are no further resources (time) to allocate.
FindBugs produces many false positives, so the developer needs to judge which findings are true positives and address them.
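As an illustration of the kind of mechanical defect FindBugs flags, the class below is a hypothetical example; the buggy pattern corresponds to FindBugs' possible-null-dereference warnings:

```java
import java.util.HashMap;
import java.util.Map;

public class NullCheckExample {
    private final Map<String, String> labels = new HashMap<>();

    /**
     * Buggy version: Map.get() may return null, which FindBugs would
     * report as a possible null dereference on some execution path.
     */
    int labelLengthBuggy(String key) {
        String label = labels.get(key);
        return label.length(); // NullPointerException when key is absent
    }

    /** Fixed version: the null path is handled explicitly. */
    int labelLength(String key) {
        String label = labels.get(key);
        return (label == null) ? 0 : label.length();
    }

    void put(String key, String value) { labels.put(key, value); }
}
```

A manual reviewer can easily skim past the buggy version; the tool finds it mechanically, leaving review time for design-level concerns.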
Testing
This section details the testing methods that will be applied throughout the development process and how
they will be applied.
 Unit Testing
White-box unit testing will be applied throughout the development process as a measure to enforce specifications and to ensure quality. Given a unit test written early in the development process, programmers will be tasked with ensuring that the assertions defined in the test are met.
For completeness, each class will have at least one unit test that must be passed in order for a
compilation to be considered satisfactory. This approach ensures that potentially costly defects will be
identified and corrected early. Procedurally, this method fits well with the process as a whole
because, by rule, class-level defects must be corrected prior to code check-in. Additionally this
promotes efficiency by eliminating local defects that might be difficult to identify and correct after
integration.
To allow for parallel development, scaffolding will be put into place whenever necessary to resolve
dependencies on class interfaces during individual development. This approach also promotes
testability by allowing for unit tests that are truly independent of external units. Drivers and stubs will
be created as high level classes that can easily be extended and overwritten to implement true
functionality at the point of integration.
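The scaffolding approach can be sketched as follows. The interface and class names are hypothetical, and the assertions are written as plain Java for self-containment, though the team would express them as JUnit test methods:

```java
/** Interface that an integration partner will eventually implement. */
interface InstanceComparator {
    /** Returns the number of differing elements between two instances. */
    int diffCount(String instanceA, String instanceB);
}

/**
 * Stub used as scaffolding so the unit under test can be exercised
 * before the real comparator exists; it is extended/overwritten with
 * true functionality at the point of integration.
 */
class StubComparator implements InstanceComparator {
    public int diffCount(String a, String b) {
        return a.equals(b) ? 0 : 1; // trivially canned behavior
    }
}

/** Unit under test: depends only on the interface, not the real class. */
class PathAnalyzer {
    private final InstanceComparator comparator;

    PathAnalyzer(InstanceComparator c) { comparator = c; }

    /** True when two adjacent instances on a path are identical. */
    boolean isRedundantStep(String a, String b) {
        return comparator.diffCount(a, b) == 0;
    }
}
```

Because PathAnalyzer sees only the interface, its unit tests remain truly independent of external units, as the paragraph above requires.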
JUnit will be used for the development and application of unit tests. It is a widely used and well accepted testing tool that is decidedly stable and effective, and it is seamlessly incorporated into the Eclipse development environment. Furthermore, Pangaea team members are sufficiently familiar with JUnit, so the learning curve for effective application during coding is minimal. This is an important factor in tool selection because of limited time and resources.
Philosophically, defects discovered during unit testing will not be considered true, trackable defects. Because our process requires local defects identified during unit testing to be corrected prior to check-in, fixing these defects is viewed as part of an individual's coding process rather than part of the development of the larger system. Additionally, because unit testing is considered an ongoing practice closely paired with coding, metrics are not used as the basis for completion of unit testing; rather, unit testing continues through coding and ultimately gets rolled into regression tests. As an additional exit criterion, we will require at least 80% statement coverage by unit tests. EclEmma, a free Java code coverage tool for Eclipse, will be used to measure statement coverage.
Because the project generates code, we must ensure, through unit testing, that the generated code is correct with respect to the architecture. We will create architectures specifically to test generated code, and simple unit tests will be applied to the generated code. Because generated code is simple in nature (class structures, not algorithms), unit testing is a feasible approach, and basic tests (e.g., constructor tests) will be sufficient to ensure adequate quality. Additionally, FindBugs will be run on generated code to help identify flaws.
 Integration Testing
Integration testing will be performed after integration of code units to ensure correct interfacing.
Integration tests will be defined during design efforts and prior to coding. The use of scaffolding
makes this approach possible by allowing independent development of classes. Upon integration,
testing of true interfaces is possible.
All defects identified during integration testing will be added to and tracked in the defect tracking database. This database has been supplied by the client and is suitable for the size and scope of our project. The database will be used to ensure that defects are fixed, to produce metrics, and to assign responsibility for fixing defects.
Integration testing will be considered complete when tests have been run on all interfacing code units, all specifications have been met through satisfaction of test criteria, and the rate of defect discovery has dropped significantly (to < 2 defects per day). The defect rate shall be calculated on an ongoing basis using the defect tracking database.
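The defect discovery rate exit criterion (< 2 defects per day) can be computed from discovery dates pulled from the tracking database. A minimal sketch, assuming each defect is tagged with a day index counted from the start of integration testing (Mantis itself would be queried through its own reports):

```java
import java.util.List;

public class DefectRate {
    /**
     * Defects per day over a trailing window, given the day index
     * on which each defect was discovered.
     */
    static double ratePerDay(List<Integer> discoveryDays, int windowDays) {
        int latest = discoveryDays.stream().max(Integer::compare).orElse(0);
        long recent = discoveryDays.stream()
                .filter(d -> d > latest - windowDays) // only the trailing window
                .count();
        return (double) recent / windowDays;
    }
}
```

For example, defects found on days 1, 1, 2, 3, and 8 yield a trailing five-day rate of 0.2 defects per day, which satisfies the criterion.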
 Regression Testing
As a rule, prior to check-in all classes must pass a regression test suite consisting of the set of unit tests for the classes checked out by the developer. This will ensure that previously qualified code does not break as the result of modifications. It is accepted that unmodified code units will pass their respective unit tests, because code must pass its tests prior to check-in.
Nightly builds will be scheduled to ensure proper integration of code units. Regression testing will be applied to the mainstream repository in the form of the set of integration tests. As with integration testing, defects discovered during regression testing will be tracked using the defect tracking database, because they are likely to be the result of conflicting code units. Local defects discovered during individual development will have been resolved prior to the build and need not be recorded in the defect tracking database.
Regression testing will be applied on an ongoing basis. As such, testing can be considered complete
using the same criteria as integration testing. That is, when regression tests have been applied to the
final build, when all specifications have been met through satisfaction of test criteria, and when the
defect discovery rate has dropped significantly (to < 2 defects per day), regression testing will be
considered complete.
 Acceptance Testing
It is well known that a customer's needs are not always understood entirely until some functional
program is made available for evaluation. Therefore, on an ongoing basis, demos and prototyping
shall be used, whenever possible, to address changing specifications early in the process. To ensure
that the final product meets the needs of the client, a period of acceptance testing is planned upon
completion (black box testing).
Acceptance testing is considered complete at the point when no additional specifications remain
unaddressed. To prohibit continuous modification, specifications will be negotiated and agreed upon
so that deliverables are well-defined, so that the project remains within the defined scope, and so that
a solid project plan can be maintained.
Review
 Requirement Documents Review
The requirement artifacts of the studio project are reviewed. The artifacts include use case
specifications, paper prototypes (user references), and supplemental specification. In the process of
requirement management, for functional requirements, use case specification is created first, and then
user reference (paper prototype) is designed according to the use case. For quality attributes,
supplemental specification is created. All requirements are elicited and confirmed with clients
interactively in the client meeting.
Generally, before the client meeting, the author refines the requirement artifacts, and an informal review is held by at least one team member. Two weeks before releasing, an informal inspection is held by the entire team.
 Architecture Review
The architecture artifacts of the studio project are reviewed. During the spring semester, we followed the ACDM based architectural team review. During the summer semester, we will periodically check the conformance of the code against the architecture.
 ACDM based architectural review
Since we are using ACDM to improve our design, we have been doing architectural review as
described in stage 4 of the ACDM [2]. The author (architect or other team member) explains the
architectural elements in the diagram and the relationship between them. The author goes through
the quality attribute scenarios and explains how the elements in the architecture help to fulfill the
goals of the scenario. Team members ask questions about tradeoffs between the scenarios, mapping to other diagrams, or clarification of functional requirements, referring to the Architecture Review Checklist in the Appendix. Problems are noted down as issues and later fixed by
the author. For bigger problems or insufficient knowledge about an area the team decides on
performing architectural experiments (as suggested by ACDM).
Architecture reviews give the team more confidence in its architecture and promote a common understanding of it. We plan to have weekly design review meetings of 0.5 hour each during the spring semester. This means that the total amount of time spent on design reviews will be 6 hours for the team.
 Architectural conformance checking
After the spring semester, we proceed to implementation (in ACDM terminology, we made the GO decision and proceed to stage 6). We will no longer develop the architecture; instead we will develop the detailed design and construct the implementation. At the beginning of each iteration, we will check the conformance of the code against the architecture in informal review sessions. We might need to reconsider our architecture according to changes in requirements and implementation; this conformance checking will help us verify that our quality attributes are still met by the current architecture. If they are not, the team refines the architecture and updates the related artifacts. This approach helps the team communicate the architecture and reduces problems caused by misunderstanding it.
We plan to have three design review meetings for 1 hour each. This means that the total amount
of time spent for detailed designed reviews will be 12 hours for the team.
 Design Model Review
We use the Design Model, which is defined in AUP and RUP as the detailed design. The goal is for the design model to have all key classes defined, in addition to the interfaces between the different components in the architecture. After the author has checked in his/her own design model to the individual branch, and before the design model is merged to the team branch, the design model is reviewed in informal review sessions with the entire team, using the design model review checklist. The author of the detailed design will go through the use cases and detailed requirements for the component he/she designed and explain how the use case steps are reflected in messages and method calls between the objects.
The team members will ask clarifying questions. If points are found that need further investigation,
these points will be noted down by the author of the design and reviewed with the architect after they
are corrected. The intention of the design review is to decrease the defects as early as possible.
We plan to have three design review meetings at 2 hours each. This means that the total amount of
time spent for detailed designed reviews will be 24 hours.
 Informal Code Review
After the author has checked-in his/her own code to the individual branch and before the code is
merged to the team branch, code needs to be reviewed according to a checklist [3] by at least one
team member. Every code module needs to be peer reviewed by at least one other team member.
This approach will decrease potential errors by introducing other perspectives and prevent buggy code from reaching the team branch.
 Formal Code Inspection
We will also perform formal group inspections as needed, especially for complex algorithms.
The author will tell the support manager when he/she has finished coding the complex module for
which they were responsible. The support manager then calls a review meeting. We expect our review
rate to be around 200 lines of code an hour (industry standard). The findings of the reviews need to be
reported in the Google Docs. By applying the formal inspection only to complex algorithms, we save
a significant amount of time while ensuring that sufficient time is invested to inspect the most
technically difficult modules.
To use the manual review time as effectively as possible, the author of a module will check the code
for common problems with FindBugs tool. This will help focus the code reviews on potential
problems rather than on trivial items such as correct placement of curly brackets.
Refactoring
During the beginning of each iteration, team members will check whether parts of their code need to be
refactored. If so, time will be allocated and this will be recorded as a task during re-estimation (which
happens at the beginning of each iteration). We will also look for opportunities to refactor during our
informal reviews and formal inspections, in addition to finding defects. After each review or formal
inspection, the team lead will try to allocate time for team members to refactor the code. If there is
insufficient time within the current iteration, we will schedule this in the next iteration. The team needs to be
aware that the test code must be updated first, before the production code is refactored, so that refactoring
does not silently break the test cases.
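The rule above, that test code is adjusted before the production code is refactored, can be sketched as follows. This is a hypothetical illustration, not project code; it uses plain Java checks in place of our JUnit tests so that it is self-contained, and the method names are invented:

```java
// Illustrative sketch (hypothetical names): when a refactoring changes an
// interface, the test is rewritten against the new interface first, so a
// failing test signals unfinished refactoring rather than a silent break.
public class RefactorDemo {

    // After refactoring: an old int status code was replaced by a boolean,
    // and the test below was updated to this signature before the
    // production change was merged to the team branch.
    static boolean isValidName(String name) {
        return name != null && !name.trim().isEmpty();
    }

    // Test updated first to match the refactored contract.
    static void testIsValidName() {
        if (!isValidName("Pangea")) throw new AssertionError("valid name rejected");
        if (isValidName("   ")) throw new AssertionError("blank name accepted");
        if (isValidName(null)) throw new AssertionError("null name accepted");
    }

    public static void main(String[] args) {
        testIsValidName();
        System.out.println("tests pass");
    }
}
```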
Defect Tracking and Version Control
To track defects during the entire test phase, we will use the Mantis bug tracking system offered by our
client. All artifacts produced in the quality activities will be managed by Subversion (SVN) according to
the Configuration Management process for our studio project [5]. Subclipse, the Eclipse plug-in for
Subversion, will be used for configuration management.
5. Quality Assurance Process Organization
Time and Human Resources
The total amount of resources available in the summer semester is 2304 hours (48 hours/week * 4 people *
12 weeks). We intend to devote 20% of that time to quality assurance. The total expected time for the
team's QA tasks is estimated as follows.
Detailed Design Review
 Exit criteria: design issues identified; team agreement upon the design.
 Estimated time for the team: 4 people x 3 sessions x 2 hours/session = 24 hours

Unit Testing
 Exit criteria: completion of coding; satisfaction of all specifications; 80% code coverage by unit tests.
 Estimated time for the team: 15% x 900 hours (planned coding effort) = 135 hours

Code Review
 Exit criteria: for less complex code segments, review by at least one peer; for complex code, a formal review including preparation, review session, and follow-up with the author.
 Estimated time for the team: 7000 LOC / 200 LOC/hour (1 person-hour per 200 LOC) = 35 hours
Static Analysis
 Exit criteria: FindBugs analysis applied to all code; resolution and documentation of potential problems.
 Estimated time for the team: 5% x 900 hours (planned coding effort) = 45 hours

CCB Preparation
 Exit criteria: every defect to be reviewed in the meeting is analyzed and ready to discuss.
 Estimated time for the team: 4 people x 6 sessions x 2 hours/session = 48 hours

CCB Meetings
 Exit criteria: every defect decided in the meeting is assigned for rectification or re-analysis.
 Estimated time for the team: 4 people x 6 sessions x 1 hour/session = 24 hours

Integration Test
 Exit criteria: tests run on all interfacing code units; all specifications met through satisfaction of test criteria; significant drop in the rate of defect discovery.
 Estimated time for the team: 5% x 900 hours (planned coding effort) = 45 hours
Regression Testing
 Exit criteria: all regression tests have been applied to the final build; all specifications met through satisfaction of test criteria; significant drop in the rate of defect discovery.
 Estimated time for the team: 35 hours

Acceptance Test
 Exit criteria: specification fully implemented in the form of tests; no additional specifications remain unaddressed.
 Estimated time for the team: 45 hours

Summary: 448 hours
Budget (2304 hours x 20%): 461 hours
Team Organization
At the beginning of the summer semester, the role of quality assurance manager will be assigned to a team
member on a part-time basis (50%). The quality assurance manager is in charge of testing before software
is released and of ensuring that all relevant documents are reviewed. However, all developers share
responsibility for quality assurance.
All quality assurance tasks will be performed by team members, but the client may be involved in the
acceptance test.
QA Manager
 Code & Design Review: facilitate review meetings; ensure meeting roles are assigned.
 Static Analysis: improve the settings of the static analysis tool.
 Test: perform regression tests via the nightly build; register found defects.
 CCB: facilitate CCB meetings; assign responsibilities before each CCB meeting.

Author
 Static Analysis: run FindBugs on his/her own code; fix the code according to the results.
 Test: develop unit test code using JUnit; fix the code according to the results.
 Code & Design Review: reflect the review results in the designs and code in order to remove defects.

Reviewer
 Code & Design Review: review the artifact; follow up on the rectification; register defects found in other artifacts.

Support Manager
 Static Analysis: set up the environment for FindBugs.
 Test: set up the environments for JUnit and EclEmma.

Test Manager
 Test: perform the integration test; run FindBugs on the integrated code; prepare the acceptance test; register found defects.
Artifacts
The following artifacts related to quality assurance will be managed:
 Test code
Subversion via Subclipse will be used for version control of the test code for unit and integration
tests. These will be automated tests. (Repository URL: svn://waterfall.able.cs.cmu.edu/arch-evol)
 Test documentation will be maintained in Google Docs. There are two kinds of documents:
 A test specification, including test scripts and expected results, created to conduct the system
test.
 A test report, recorded together with the version of the code and the test specification.
 Review results
The results of code review will be documented and maintained in Google Docs.
 Defect database
All defects found after review, for both code and documents, will be registered in the Mantis bug
tracking system of AcmeStudio (http://acme.able.cs.cmu.edu/mantis/my_view_page.php) under the
Pangea project. All negative test results will also be registered in the database.
Review Process
Reviews can be classified into two types: peer review and formal review. Almost every artifact (except for
the most trivial ones) will be informally reviewed by a peer. Formal review is done selectively based on the
importance and complexity of the artifact. Results from both kinds of review will be documented. One of the
reviewers will be assigned to follow up on the rectification of the reviewed artifact. All defects will be rectified by
the author and checked by the reviewer. If any defects in other artifacts are identified during the review
process, those defects will be registered in the defect tracking database.
Change Control Board Process
All registered defects in the database will be reviewed in CCB meetings every iteration to prioritize them,
identify affected artifacts, and assign a responsible person to address each artifact. To keep the meetings
productive, the QA manager will assign defects to individual members to analyze and propose a solution
before the CCB meeting.
Quality Measures and Metrics
Measurements of the QA processes provide evaluation criteria that indicate how effective the processes are
at increasing the quality of the system, and they suggest areas in which the processes can be improved.
Evaluation looks at the processes in their current state and identifies areas that may be extended, excluded,
or modified so that the productivity of QA activities increases.
The quality assurance processes will be evaluated based on:
Static Analysis:
 Number of defects found
 Types of errors identified
 Number of occurrences of each error type
Reviews:
 Number of defects found
 Length of time to find defects
 Whether each defect was found using the corresponding checklist item
 Types of errors identified
 Number of occurrences of each error type
 Defects/LOC: efficiency of the code review
 LOC/hour: rate of the code review
Testing:
 Number of defects found
 Length of time to complete testing
 Types of errors identified
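As a minimal sketch (with invented figures, not measured project data), the two code-review metrics above can be derived directly from a review session's log:

```java
// Minimal sketch with invented figures: deriving the two code-review
// metrics (Defects/LOC and LOC/hour) from a review session's raw data.
public class ReviewMetricsDemo {

    // Defects/LOC: how many defects the review surfaced per line inspected.
    static double defectsPerLoc(int defectsFound, int locReviewed) {
        return (double) defectsFound / locReviewed;
    }

    // LOC/hour: how fast the review proceeded (our plan targets ~200).
    static double locPerHour(int locReviewed, double hoursSpent) {
        return locReviewed / hoursSpent;
    }

    public static void main(String[] args) {
        // Example session: 400 LOC reviewed in 2 hours, 6 defects found.
        System.out.println(defectsPerLoc(6, 400)); // 0.015
        System.out.println(locPerHour(400, 2.0));  // 200.0
    }
}
```

Tracking these per session makes it easy to spot reviews that run far above the planned 200 LOC/hour rate, which usually means the review was too shallow.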
Quality Process Improvement
The team used AUP postmortems to reflect on the processes so far, and we will continue to use AUP
postmortem sessions for this purpose. At the end of each iteration, the team will review our quality
assurance process and analyze the data we collect, such as defect data and effort data.
To improve our quality assurance plan, we can reflect upon the following as data is gathered:
 We will classify the metrics (e.g., types of defects). A pattern of defect types will indicate where the
team needs to improve while coding.
 As we improve our coding, we can also improve the standards and checklists for design and coding to
prevent the most frequently found errors.
 We will compare the effort spent against the estimated effort, and analyze the results to make the
quality assurance process more effective.
 Finally, as part of the AUP postmortem, we will reflect subjectively as a team on the quality assurance
processes, such as code review, in terms of ease of use and what we have learned from them, to help
us determine whether they suit the team.
6. Appendix
Architecture Review Checklist
1. Are the following attributes well-defined for each design entity?
1. Identification (unique name)
2. Type (describing what kind of design entity it is)
3. Purpose (describing why it was introduced, in terms of the requirements)
4. Function (summarizing what the component does)
5. Dependencies (possibly `none'; describing the requires or uses relationship)
6. Interface (provided by the design entity)
7. Processing (including autonomous activities)
8. Data (information `hidden' inside)
2. Is the relationship to the requirements clearly motivated? Is it clear why the proposed architecture
realizes the requirements?
3. Is the software architecture as simple as possible (but no simpler)?
 No more than 7 loosely-coupled coherent high-level components.
 Lower-level components possibly clustered into high-level components (hierarchy).
 Using standard(ized) components.
 Is deviation from intuitively obvious solution motivated?
4. Is the architecture complete?
 Are all requirements covered?
 Trace some critical requirements through the architecture (e.g. via use cases).
5. Are the component descriptions sufficiently precise?
 Do they allow independent construction?
 Are interfaces and external functionality of the high-level components described in
sufficient detail?
 Interface details:
 Routine kind, name, parameters and their types, return type, pre- and postcondition,
usage protocol w.r.t. other routines.
 File name, format, permissions.
 Socket number and protocol.
 Shared variables, synchronization primitives (locks).
 Have features of the target programming language been used where appropriate?
 Have implementation details been avoided? (No details of internal classes.)
6. Are the relationships between the components explicitly documented?
 Preferably use a diagram
7. Is the proposed solution realizable?
 Can the components be implemented or bought, and then integrated together.
 Possibly introduce a second layer of decomposition to get a better grip on realizability.
8. Are all relevant architectural views documented?
 Logical (Structural) view (class diagram per component expresses functionality).
 Process view (how control threads are set up, interact, evolve, and die).
 Physical view (deployment diagram relates components to equipment).
 Development view (how code is organized in files).
9. Are cross-cutting issues clearly and generally resolved?
 Exception handling.
 Initialization and reset.
 Memory management.
 Security.
 Internationalization.
 Built-in help.
 Built-in test facilities.
10. Is all formalized material and diagrammatic material accompanied by sufficient explanatory text
in natural language?
11. Are design decisions documented explicitly and motivated?
 Restrictions on developer freedom w.r.t. the requirements.
12. Has an evaluation of the software architecture been documented?
 Have alternative architectures been considered?
 Have non-functional requirements also been considered?
 Negative indicators:
 High complexity: a component has a complex interface or functionality.
 Low cohesion: a component contains unrelated functionality.
 High coupling: two or more components have many (mutual) connections.
 High fan-in: a component is needed by many other components.
 High fan-out: a component depends on many other components.
13. Is the flexibility of the architecture demonstrated?
 How can it cope with likely changes in the requirements?
 Have the most relevant change scenarios been documented?
Design Model Review Checklist
1. General
1. The objectives of the model are clearly stated and visible.
2. The model is at an appropriate level of detail given the model objectives.
3. The model's use of modeling constructs is appropriate to the problem at hand.
4. The model is as simple as possible while still achieving the goals of the model.
5. The model appears to be able to accommodate reasonably expected future change.
6. The design is appropriate to the task at hand (neither too complex nor too advanced)
7. The design appears to be understandable and maintainable
8. The design appears to be implementable
2. Diagrams
1. The purpose of the diagram is clearly stated and easily understood.
2. The graphical layout is clean and clearly conveys the intended information.
3. The diagram conveys just enough to accomplish its objective, but no more.
4. Encapsulation is effectively used to hide detail and improve clarity.
5. Abstraction is effectively used to hide detail and improve clarity.
6. Placement of model elements effectively conveys relationships; similar or closely coupled
elements are grouped together.
7. Relationships among model elements are easy to understand.
8. Labeling of model elements contributes to understanding.
3. Classes
1. Package partitioning and layering is logically consistent.
2. The key entity classes and their relationships have been identified.
3. Relationships between key entity classes have been defined.
4. The name and description of each class clearly reflects the role it plays.
5. The description of each class accurately captures the responsibilities of the class.
6. The entity classes have been mapped to analysis mechanisms where appropriate.
7. The role names of aggregations and associations accurately describe the relationship between
the related classes.
8. The multiplicities of the relationships are correct.
9. The key entity classes and their relationships are consistent with the business model (if it
exists), domain model (if it exists), requirements, and glossary entries.
4. Error Recovery
1. For each error or exception, a policy defines how the system is restored to a "normal" state.
2. For each possible type of input error from the user or wrong data from external systems, a
policy defines how the system is restored to a "normal" state.
3. There is a consistently applied policy for handling exceptional situations.
4. There is a consistently applied policy for handling data corruption in the database.
5. There is a consistently applied policy for handling database unavailability, including whether
data can still be entered into the system and stored later.
6. If data is exchanged between systems, there is a policy for how systems synchronize their
views of the data.
7. If the system utilizes redundant processors or nodes to provide fault tolerance or high
availability, there is a strategy for ensuring that no two processors or nodes can 'think' that
they are primary, or that no processor or node is primary.
8. The failure modes for a distributed system have been identified and strategies defined for
handling the failures.
5. Transition and Installation
1. The process for upgrading an existing system without loss of data or operational capability is
defined and has been tested.
2. The process for converting data used by previous releases is defined and has been tested.
3. The amount of time and resources required to upgrade or install the product is well-understood
and documented.
4. The functionality of the system can be activated one use case at a time.
Note: We referred to Rational Unified Process for this checklist [6].
Code Review Checklist (tailored)
1. Variable, Attribute, and Constant Declaration Defects (VC)
 Are descriptive variable and constant names used in accord with naming conventions?
 Are there variables or attributes with confusingly similar names?
 Could any non-local variables be made local?
 Are all for-loop control variables declared in the loop header?
 Are there literal constants that should be named constants?
 Are there variables or attributes that should be constants?
 Are there attributes that should be local variables?
 Do all attributes have appropriate access modifiers (private, protected, public)?
 Are there static attributes that should be non-static or vice-versa?
2. Method Definition Defects (FD)
 Are descriptive method names used in accord with naming conventions?
 Do all methods have appropriate access modifiers (private, protected, public)?
 Are there static methods that should be non-static or vice-versa?
3. Class Definition Defects (CD)
 Does each class have appropriate constructors and destructors?
 Can the class inheritance hierarchy be simplified?
4. Computation/Numeric Defects (CN)
 Are there any computations with mixed data types?
 Are parentheses used to avoid ambiguity?
5. Comparison/Relational Defects (CR)
 Is each boolean expression correct?
 Has an "&" inadvertently been interchanged with "&&", or a "|" with "||"?
6. Control Flow Defects (CF)
 Will all loops terminate?
 When there are multiple exits from a loop, is each exit necessary and handled properly?
 Does each switch statement have a default case?
 Are all exceptions handled appropriately?
 Does every method terminate?
7. Comment Defects (CM)
 Do the comments and code agree?
 Do the comments help in understanding the code?
 Are there enough comments in the code?
8. Layout and Packaging Defects (LP)
 Is a standard indentation and layout format used consistently?
 For each method: Is it no more than about 60 lines long?
 For each compile module: Is it no more than about 200 lines long?
9. Storage Usage Defects (SU)
 Are arrays large enough?
 Are object and array references set to null once the object or array is no longer needed?
10. Performance Defects (PE)
 Can better data structures or more efficient algorithms be used?
 Can the cost of recomputing a value be reduced by computing it once and storing the results?
 Can a computation be moved outside a loop?
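To make checklist item 5 (CR) concrete, the hypothetical snippet below (not project code) shows why interchanging "&" and "&&" matters: "&" evaluates both operands, so a null guard written with "&" does not protect the call that follows it.

```java
// Hypothetical illustration of checklist item 5 (CR): '&' vs '&&'.
public class ShortCircuitDemo {

    // Buggy form: '&' does not short-circuit, so s.length() is evaluated
    // even when s is null, throwing NullPointerException.
    static boolean buggyNonEmpty(String s) {
        return s != null & s.length() > 0;
    }

    // Fixed form: '&&' short-circuits, so the null guard is effective.
    static boolean fixedNonEmpty(String s) {
        return s != null && s.length() > 0;
    }

    public static void main(String[] args) {
        System.out.println(fixedNonEmpty(null)); // false, guard works
        try {
            buggyNonEmpty(null);
        } catch (NullPointerException e) {
            System.out.println("NPE from the '&' version"); // guard bypassed
        }
    }
}
```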
References
[1] Java coding standard - http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html
[2] ACDM - http://reports-archive.adm.cs.cmu.edu/anon/isri2005/CMU-ISRI-05-103.pdf
[3] Code Review Checklist - http://www.processimpact.com/reviews_book/java_checklist.doc
[4] Design Review Checklist - http://www-static.cc.gatech.edu/classes/AY2007/cs4911_fall/Design%20Review%20Checklist.doc
[5] Pangea Version control guidelines - http://docs.google.com/Doc?docid=ddvgdvwk_34wnsvw6dk&hl=en
[6] Rational Unified Process, IBM Rational - http://www-306.ibm.com/software/support/rss/rational/948.xml?rss=s948&ca=rssrational
[7] Implementation proposal - http://docs.google.com/Doc?docid=ddvgdvwk_41d54gj3d3&hl=en
[8] Design proposal - http://docs.google.com/Doc?id=dgn85qxj_156hp9vxvhd
[9] Quality Attribute Scenarios - http://docs.google.com/Doc?docid=dgn85qxj_159gf22x5gh&hl=en
[10] Use case specification – Diagram Architecture Evolution - http://docs.google.com/Doc?docid=dgn85qxj_116d4tjxf&hl=en
[11] Use case specification – Comparison Feature - http://docs.google.com/Doc?docid=df736g5s_389fsw6hvgz&hl=en
[12] Use case specification – Tradeoff Analysis - http://docs.google.com/Doc?docid=df736g5s_163g953pkdp&hl=en
[13] Requirements Traceability Matrix - http://spreadsheets.google.com/ccc?key=phUeZspT7bgI5enbCeF87Sw&hl=en