1.1-What is Software Testing (2)
Definition:
Testing is the demonstration that errors are NOT present in the program?
Testing shows that the program performs its intended functions correctly?
Testing is the process of demonstrating that a program does what it is supposed to do?
Testing is the process of executing a program with the intent of finding errors. (Glenford J. Myers)
Testing includes all activities whose goal is to measure and control the quality of the software. (ISEB)

1.1-What is Software Testing (3)
From testing user requirements to monitoring the system in operation
From testing the functionality to checking all other aspects of software:
– Documents (specifications)
– Design (model)
– Code
– Code + platform
– Production, acceptance
– Usage, business process
Verification answers the question: "Have we built the system correctly?"
Validation answers the question: "Have we built the correct system?"

1.1-What is Software Testing (4)
Realities in Software Testing:
Testing can show the presence of errors, but cannot show their absence (Dijkstra)
Not all defects can be found
Testing itself does not create quality software or remove defects
Building without faults means – among other things – testing very early
A perfect development process is impossible, except in theory
Perfect requirements are a cognitive impossibility

1.3-Why Testing is Necessary
Depreciation of Software Testing:
Due to software errors, U.S. business losses are ~$60 billion; about 1/3 of software errors could be avoided by a better testing process (National Institute of Standards and Technology, 2002)
The testing process in most software companies is at the lowest levels of the CMMI model (usually 1 or 2) (Software Engineering Institute, Carnegie Mellon University, Pittsburgh)
All current software development models include software testing as an essential part

1.3-Why Testing is Necessary (5)
Complexity:
Software – and its environment – is too complex to test its behaviour exhaustively
Software can be embedded
Software has human users
Software is part of the organization's workflow

1.3-Why Testing is Necessary (7)
Exhaustive Testing:
Exhaustive testing is impossible
Even in theory, exhaustive testing is wasteful because it does not prioritize tests
Contractual requirements on testing
Non-negligent practice is important from a legal point of view

1.3-Why Testing is Necessary (8)
Risk-Based Testing:
Testing finds faults which – once removed – decrease the risk of failure in operation
Error: the "mistake" (human, process or machine) that introduces a fault into the software
Fault: a "bug" or "defect" – a faulty piece of code or hardware
Failure: when faulty code is executed, it may lead to incorrect results (i.e. to failure)
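The error/fault/failure chain can be seen in a few lines of Python. The buggy `mean` function below is a made-up example, chosen only to show that a fault stays latent until triggering input executes it:

```python
# A fault (bug) sits in the code; a failure only appears when the faulty
# code is executed with input that triggers it.
def mean(values):
    return sum(values) / len(values)  # fault: no guard for an empty list

print(mean([2, 4]))        # fault present, but no failure observed
try:
    mean([])               # triggering input executes the fault ...
except ZeroDivisionError:
    print("failure")       # ... and the fault manifests as a failure
```

The distinction matters for risk-based testing: tests are chosen to maximize the chance that hidden faults turn into observable failures.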

1.5-Re-Testing and Regression Testing
Definitions:
Re-testing: re-running test cases that caused failures during previous executions, after the (supposed) cause of failure (i.e. the fault) has been fixed, to ensure that it really has been removed
Regression testing: re-running test cases that did NOT cause failures during previous execution(s), to ensure that they still do not fail in a new system version or configuration
Debugging: the process of identifying the cause of a failure – finding and removing the fault that caused it

1.5-Re-Testing and Regression Testing (2)
Re-testing:
Re-running the test case that previously caused a failure
Triggered by delivery and incident report status
Running a new test case if the fault was previously exposed by chance
Testing for similar or related faults

1.5-Re-Testing and Regression Testing (3)
Regression Testing Areas:
Regression due to fixing a fault (side effects)
Regression due to added new functionality
Regression due to a new platform
Regression due to a new configuration or after customization
Regression and delivery planning

1.6-Expected Results (3)
Sources of Outcomes:
Finding out or calculating correct expected outcomes/results is often more difficult than expected; it is a major task in preparing test cases.
Sources:
– Requirements
– Oracle
– Specifications
– Existing systems
– Other similar systems
– Standards
– NOT the code

1.7-Prioritization of Tests
Why Prioritize Test Cases?
To decide importance and order (in time)
"There is never enough time"
Testing comes last and suffers from all other delays
Prioritizing is hard to do right (multiple criteria with different weights)

2.1-Models for Testing
Verification, Validation and Testing:
Verification: the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase – building the system right
Validation: the determination of the correctness of the products of software development with respect to user needs and requirements – building the right system
Testing: the process of exercising software to verify that it satisfies specified requirements and to detect errors

2.4-Component Testing
Component Testing:
The first possibility to execute anything
Known under different names (Unit, Module, Basic, … Testing)
Usually done by the programmer
Should have the highest level of detail of all testing activities

2.5-Component Integration Testing (3)
Stubs and Drivers
[Diagram: a component tree with A on top, B and C below it, and D and E under B; top-down testing of A and B uses a "D-E stub" in place of the not-yet-integrated D and E, while bottom-up testing of B, D and E uses a driver in place of A]

2.5-Component Integration Testing (4)
Big-Bang Integration
+ No need for stubs or drivers
- Hard to locate faults; re-testing after fixes is more extensive
Top-Down Integration
+ Adds to a tested baseline (easier fault location), early display of a "working" GUI
- Need for stubs; may look more finished than it is; crucial details are often left until last
Bottom-Up Integration
+ Good when integrating on HW and network, visibility of details
- No working system until the last part is added; need for drivers and stubs

2.5-Component Integration Testing (6)
Integration Guidelines:
The integration plan determines the build order
Minimize support software
Integrate only a small number of components at a time
Integrate each component only once

2.6-System Testing (functional)
Functional System Testing:
Test of the functional requirements
The first opportunity to test the system as a whole
Test of end-to-end (E2E) functionality
Two different perspectives:
– Requirements based testing
– Business process based testing

2.6-System Testing (functional) (2)
Requirements Based Testing:
Test cases are derived from specifications
Most requirements require more than one test case
Business Process Based Testing:
Test cases are based on expected user profiles
Business scenarios
Use cases

2.7-System Testing (non-functional) (4)
Performance/Scalability Testing
"Testing conducted to evaluate the compliance of a system or component with specified performance requirements" – BS 7925-1
The purpose of the test, i.e. can the system:
– handle the required throughput without long delays?
Performance testing requires tools
Performance testing is very much about teamwork with database, system, network and test administrators
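At their core, performance tools measure response times of repeated operations under load. A minimal sketch of such a probe (the `measure` helper is illustrative, not part of any named tool):

```python
import statistics
import time

def measure(operation, runs=50):
    """Time repeated calls and summarize median and worst-case latency."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), max(samples)

# Probe a toy operation; real tools drive whole transactions this way.
median, worst = measure(lambda: sum(range(1000)))
print(f"median {median * 1e6:.1f} us, worst {worst * 1e6:.1f} us")
```

Reporting the median alongside the worst case matters: throughput requirements are usually about typical latency, while "no long delays" is about the tail.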

2.7-System Testing (non-functional) (6)
Stress/Robustness Testing
"Testing conducted to evaluate the compliance of a system or component at or beyond the limits of its specified requirements"
The purpose of the test:
– Can the system handle the expected maximum (or higher) workload?
– If our expectations are wrong, can we be sure that the system survives?
Should be tested as early as possible
Stress testing requires tools
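As a toy illustration of pushing beyond the expected workload, one can hammer an operation from many threads and check that it keeps answering. The `stress` helper below is a sketch, not a substitute for real load-generation tools:

```python
from concurrent.futures import ThreadPoolExecutor

def stress(operation, workers=16, calls=200):
    """Fire many concurrent calls and count how many survive vs. fail."""
    def attempt(_):
        try:
            operation()
            return True
        except Exception:
            return False
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(attempt, range(calls)))
    return results.count(True), results.count(False)

ok, failed = stress(lambda: sum(range(100)))
print(f"{ok} calls survived, {failed} failed")
```

The interesting observation in a stress test is not the pass count itself but how the system degrades: does it slow down gracefully, or does it start failing outright?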

2.7-System Testing (non-functional) (8)
Performance Process Planning:
Analysis and design of performance tests (co-operation of several expert teams)
Implementation of performance tests (scripts, scenarios)
Running the performance tests while monitoring different parts of the information system (LAN, servers, clients, databases, etc.)
Analyzing the results and feeding suggestions back to development
Tuning the system until the required performance is achieved

2.7-System Testing (non-functional) (9)
Security Testing
"Testing whether the system meets its specified security objectives" – BS 7925-1
The purpose of security tests is to obtain enough confidence that the system is secure
Security tests can be very expensive
It is important to think about security in early project phases
Passwords, encryption, firewalls, deleted files/information, levels of access, etc.

2.7-System Testing (non-functional) (10)
Usability Testing
"Testing the ease with which users can learn and use a product" – BS 7925-1
How to specify usability? Is it possible to test usability?
Preferably done by end users
Probably their first contact with the system – the first impression must be good!

2.7-System Testing (non-functional) (12)
Installability Testing
"Testing concerned with the installation procedures for the system" – BS 7925-1
Is it possible to install the system according to the installation procedure?
Are the procedures reasonable, clear and easy to follow?
Is it possible to upgrade the system?
Is it possible to uninstall the system?

2.7-System Testing (non-functional) (13)
Platform (Portability) Testing
"Tests designed to determine if the software works properly on all supported platforms/operating systems"
Are Web applications checked with all known and used browsers (Internet Explorer, Netscape, Mozilla, …)?
Are applications checked on all supported versions of operating systems (MS Windows 9x, NT, 2000, XP, Vista)?
Are systems checked on all supported hardware platforms (classes)?

2.7-System Testing (non-functional) (14)
Recovery Testing
"Testing aimed at verifying the system's ability to recover from varying degrees of failure" – BS 7925-1
Can the system recover from a software/hardware failure or faulty data?
How can we inject realistic faults into the system?
Back-up functionality must be tested!

2.8-System Integration Testing (2)
Strategy:
One thing at a time
– Integrate with one other system at a time
– Integrate interfaces one at a time
Integration sequence is important
– Do the most crucial systems first
– But mind external dependencies

3.3-Static Analysis (3)
Data Flow Analysis
Considers the use of data (works better on sequential code)
Examples of anomalies:
– Definitions with no intervening use
– Attempted use of a variable after it is killed
– Attempted use of a variable before it is defined
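A crude taste of the third anomaly – use before definition – can be had with Python's `ast` module. This sketch only handles straight-line module-level code and ignores control flow, scoping and builtins; real data flow analyzers track definitions and uses along every path:

```python
import ast

def used_before_defined(source):
    """Report names read before any assignment (straight-line code only)."""
    defined, problems = set(), []
    for stmt in ast.parse(source).body:
        # Reads are checked first: the right-hand side runs before the binding.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                if node.id not in defined:
                    problems.append(node.id)
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                defined.add(node.id)
    return problems

print(used_before_defined("x = 1\ny = x + z\n"))  # ['z']
```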

4.1-Black- and White-box Testing
Strategy
– What is the purpose of testing?
– What is the goal of testing?
– How to reach the goal?
Test Case Selection Methods
– Which test cases are to be executed?
– Are they good representatives of all possible test cases?
Coverage Criteria
– How much of the code (requirements, functionality) is covered?

4.1-Black- and White-box Testing (4)
Measuring Code Coverage
How much of the code has been executed?
Code Coverage Metrics:
– Segment coverage
– Call-pair coverage
Code Coverage = (executed code segments/call-pairs) / (all code segments/call-pairs)
Tool Support:
– Often a good help
– For white-box tests almost a requirement
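Segment coverage can be approximated at line granularity with Python's `sys.settrace`. The probe below is a toy, assuming a single-threaded pure function; dividing the recorded set by the set of all executable lines gives the coverage ratio above:

```python
import sys

def traced_lines(func, *args):
    """Record which line numbers execute inside func (toy line coverage)."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

def absolute(x):
    if x < 0:
        x = -x
    return x

# The negative input also executes the branch line, so it covers more:
print(len(traced_lines(absolute, 5)), len(traced_lines(absolute, -5)))  # 2 3
```

Real coverage tools work the same way in principle, but instrument segments or branches rather than raw lines.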

4.2-Black-box Test Techniques
Requirements Based Testing
How much of the product's features is covered by test cases?
Requirement Coverage Metric:
Requirement Coverage = (tested requirements) / (total number of requirements)
What is the test progress?
Test Coverage Metric:
Test Coverage = (executed test cases) / (total number of test cases)
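Both metrics are simple ratios over sets of identifiers. A sketch with made-up requirement and test-case IDs:

```python
def coverage(done, total):
    """Fraction of the items in 'total' that also appear in 'done'."""
    if not total:
        raise ValueError("nothing to cover")
    return len(set(done) & set(total)) / len(set(total))

requirements = ["R1", "R2", "R3", "R4"]
tested = ["R1", "R3"]
print(f"requirement coverage: {coverage(tested, requirements):.0%}")  # 50%

planned_tests = ["TC1", "TC2", "TC3", "TC4", "TC5"]
executed = ["TC1", "TC2", "TC3", "TC4"]
print(f"test coverage: {coverage(executed, planned_tests):.0%}")      # 80%
```

Note that the two numbers answer different questions: requirement coverage measures how much of the product the tests touch, test coverage only measures progress through the planned suite.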

4.2-Black-box Test Techniques (2)
Creating Models
Making models in general:
– Used to organize information
– Often reveals problems while the model is being made
Model based testing:
– Test cases are extracted from the model
– Examples: syntax testing, state transition testing, use case based testing
– Coverage is based on the model used

4.2.1-Equivalence Partitioning
Equivalence Partitioning
Identify sets of inputs under the assumption that all values in a set are treated exactly the same by the system under test
Make one test case for each identified set (equivalence class)
The most fundamental test case technique
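A sketch on a hypothetical ticket-pricing rule (the rule and its classes are invented for illustration): four equivalence classes – invalid negative ages, children, adults, seniors – each probed with a single representative value:

```python
def ticket_price(age):
    """Hypothetical rule under test: child <= 12, adult <= 64, senior 65+."""
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return 5
    if age <= 64:
        return 10
    return 7

# One representative per equivalence class is assumed to speak for the class:
assert ticket_price(8) == 5      # class [0, 12]   - child
assert ticket_price(30) == 10    # class [13, 64]  - adult
assert ticket_price(80) == 7     # class [65, ...) - senior
try:
    ticket_price(-3)             # class of invalid (negative) inputs
except ValueError:
    print("invalid class rejected as expected")
```

Four test cases thus stand in for an infinite input space – but only as long as the "treated exactly the same" assumption actually holds in the implementation.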

4.2.2-Boundary Value Analysis
Boundary Value Analysis
For each identified boundary in input and output, create two test cases: one on each side of the boundary, both as close as possible to the actual boundary line.
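Continuing the same hypothetical ticket-pricing rule (child up to 12, adult up to 64, senior from 65 – invented for illustration), each boundary yields a pair of test cases hugging both of its sides:

```python
def ticket_price(age):
    """Hypothetical rule under test: child <= 12, adult <= 64, senior 65+."""
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return 5
    if age <= 64:
        return 10
    return 7

# Two test cases per boundary, one on each side, as close as possible:
boundary_cases = [
    (-1, ValueError), (0, 5),   # validity boundary at 0
    (12, 5), (13, 10),          # child/adult boundary at 12/13
    (64, 10), (65, 7),          # adult/senior boundary at 64/65
]
for age, expected in boundary_cases:
    if expected is ValueError:
        try:
            ticket_price(age)
            print(f"age {age}: missed the boundary check!")
        except ValueError:
            print(f"age {age}: rejected, as expected")
    else:
        assert ticket_price(age) == expected
        print(f"age {age}: price {expected}, as expected")
```

Boundary values are where off-by-one faults live (`<` instead of `<=`), which is why the technique complements equivalence partitioning rather than replacing it.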

4.5-Error Guessing
Error Guessing
Not a structured testing technique
Idea:
– Poke around in the system using your gut feeling and previous experience, trying to find as many faults as possible
Tips and Tricks:
– Zero and its representations
– "White space"
– Matching delimiters
– Talk to people
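The "zero and its representations" and "white space" tips translate directly into a grab-bag of suspicious inputs. The `parse_quantity` function is a made-up victim:

```python
def parse_quantity(text):
    """Made-up input handler under test."""
    if text is None or not text.strip():
        raise ValueError("empty input")
    return int(text)

# Inputs an experienced tester reaches for first:
suspects = ["0", "-0", "007", "", "   ", "\t", None]
for s in suspects:
    try:
        print(f"{s!r} -> {parse_quantity(s)}")
    except (ValueError, TypeError) as reason:
        print(f"{s!r} -> rejected ({reason})")
```

The value of the technique is in the checklist itself: over time, every fault found "by gut feeling" should feed back into the list of things to try first next time.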

5.1-Organization Structures for Testing
Developers test their own code
Development team responsibility (buddy testing)
Tester(s) in the development team
A dedicated team of testers
Internal test consultants providing advice to projects
A separate test organization

5.1-Organization Structures for Testing (2)
Independence
Who does the testing?
– Component testing – developers
– System testing – independent test team
– Acceptance testing – users
Independence is more important during test design than during test execution
The use of structured test techniques increases independence
A good test strategy should mix the use of developers and independent testers

5.2-Configuration Management (2)
Configuration Identification
Identification of configuration items (CIs)
Labelling of CIs:
– Labels must be unique
– A label usually consists of two parts: a name (including title and number) and a version
Naming and versioning conventions
Identification of baselines
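The two-part label rule (name plus version) is easy to enforce mechanically. The concrete naming convention below is invented for illustration; a real project would encode its own:

```python
import re

# Hypothetical convention: <title>-<number>-v<major>.<minor>[.<patch>]
LABEL = re.compile(r"^[A-Za-z][A-Za-z0-9_]*-\d+-v\d+\.\d+(\.\d+)?$")

def is_valid_label(label):
    """Check a CI label against the (invented) naming convention."""
    return bool(LABEL.match(label))

print(is_valid_label("TestPlan-042-v1.3"))  # True
print(is_valid_label("TestPlan v1.3"))      # False: parts not separated as required
```

Automating such checks at check-in time is one way to keep labels unique and consistent across all configuration items.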

5.2-Configuration Management (6)
CM and Testing
What should be configuration managed?
– All test documentation and testware
– Documents that the test documentation is based on
– The test environment
– The product to be tested
Why? Traceability

5.3-Test Estimation, Monitoring and Control
Test Estimation
Why does this happen time after time?
Are we really that bad at estimating test effort?
What can we do to avoid situations like this?

5.3-Test Estimation, Monitoring and Control (2)
Rules of Thumb:
Lines of source code
Windows (Web pages) of the application
Degree of modification

5.3-Test Estimation, Monitoring and Control (3)
Test Estimation Activities
Identify test activities
Estimate the time for each activity
Identify resources and skills needed
In what order should the activities be performed?
Identify for each activity:
– Start and stop date
– The resource to do the job

5.3-Test Estimation, Monitoring and Control (4)
What Makes Estimation Difficult?
Testing is not independent:
– Quality of the software delivered to test?
– Quality of the requirements to test against?
Faults will be found!
– How many?
– Severity?
– Time to fix?
Test environment
How many iterations?

5.3-Test Estimation, Monitoring and Control (7)
Monitoring Test Execution Progress
Changes made to improve the progress
[Chart: number of test cases over time – curves for planned, run and passed test cases plotted against the old and new delivery dates, with the point where action was taken marked]

5.3-Test Estimation, Monitoring and Control (8)
Test Control
What to do when things happen that affect the test plan:
Re-allocation of resources
Changes to the test schedule
Changes to the test environment
Changes to the entry/exit criteria
Changes to the number of test iterations
Changes to the test suite
Changes to the release date

5.4-Incident Management
What is an Incident?
Any significant, unplanned event that occurs during testing, or any other event that requires subsequent investigation or correction.
– Differences between actual and expected test results
– Possible causes: a software fault, incorrect expected results, or a test that was not performed correctly
– Can be raised against code and/or documents

5.5-Standards for Testing
Types of Standards for Testing
QA standards
– State that testing should be performed
Industry-specific standards
– Specify the level of testing
Testing standards
– Specify how to perform testing

6.1-Types of CAST Tools (10)
Debugging Tools
"Used mainly by programmers to reproduce bugs and investigate the state of programs"
Used to control program execution
Mainly for debugging, not testing
Debugger command scripts can support automated testing
A debugger can serve as an execution simulator
Different types of debugging tools exist

6.2-Tool Selection and Implementation (5)
Tool Implementation
This is development – use your standard development process!
Plan resources and management support
Support and mentoring
Training
Pilot project
Early evaluation
Publicity of early successes
Test process adjustments