DESIGN-TIME SOFTWARE QUALITY MODELING AND ANALYSIS OF
DISTRIBUTED SOFTWARE-INTENSIVE SYSTEMS
by
Leslie Chi-Keung Cheung
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
May 2011
Copyright 2011 Leslie Chi-Keung Cheung

As our reliance on software systems grows, it is becoming increasingly important to understand a system's quality, because systems that provide poor quality of service have costly consequences. It has been shown that addressing quality problems late, e.g., after implementation, is prohibitively expensive, because doing so may involve redesigning and reimplementing the software system. It is therefore important to analyze software quality early, such as during system design. Early quality analysis must consider not only components developed from scratch but also existing components being integrated into the system, since software designers reuse such components to reduce development cost.

We focus on two aspects of early software quality analysis: the cost of analysis and parameter estimation. First, we address the high cost of existing design-level quality analysis techniques. In modeling complex systems, existing design-level approaches may generate models that are computationally too expensive to solve. This problem is exacerbated in concurrent systems, where existing design-level approaches suffer from the state explosion problem. To address this challenge, we propose SHARP, a design-level reliability prediction framework for analyzing complex specifications of concurrent systems. SHARP analyzes a hierarchical, scenario-based specification of system behavior and achieves scalability by exploiting the scenario relations embodied in this hierarchy: it first constructs and solves models of the basic scenarios, then combines the obtained results according to the specified scenario dependencies, iterating up the scenario hierarchy until it obtains the system reliability.
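To make the bottom-up combination step concrete, the following is a hypothetical sketch (not SHARP's actual algorithm; the node structure, operators, and probabilities are all assumptions) of how leaf-scenario reliabilities might be combined up a scenario hierarchy:

```python
# Hypothetical sketch of bottom-up reliability combination over a scenario
# hierarchy (illustrative only; the operators and data layout are assumed,
# not SHARP's actual model).

def combine(node):
    """Return the reliability of a scenario node.

    Leaf nodes carry an already-solved reliability; interior nodes combine
    their children according to a dependency operator:
      - "seq": children execute in sequence -> product of reliabilities
      - "alt": one child is chosen with a given probability -> weighted sum
    """
    if "reliability" in node:                     # basic (leaf) scenario
        return node["reliability"]
    child_rels = [combine(c) for c in node["children"]]
    if node["op"] == "seq":
        r = 1.0
        for cr in child_rels:
            r *= cr
        return r
    if node["op"] == "alt":
        return sum(p * cr for p, cr in zip(node["probs"], child_rels))
    raise ValueError("unknown operator: %s" % node["op"])

# Example hierarchy: a login scenario, followed by either a browse
# scenario (prob. 0.8) or a checkout scenario (prob. 0.2).
system = {
    "op": "seq",
    "children": [
        {"reliability": 0.999},                   # login
        {
            "op": "alt",
            "probs": [0.8, 0.2],
            "children": [
                {"reliability": 0.995},           # browse
                {"reliability": 0.990},           # checkout
            ],
        },
    ],
}
print(round(combine(system), 6))                  # -> 0.993006
```

Because each scenario is solved once and only its reliability (a single number) propagates upward, the cost of the combination step grows with the size of the hierarchy rather than with the product of the scenarios' state spaces, which is the intuition behind the scalability claim.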
Our evaluations indicate that (a) SHARP is almost as accurate as a traditional non-hierarchical method, and (b) SHARP is more scalable than existing techniques.

Second, we address the high cost of testing-based approaches, which are typically used to analyze the quality of existing software components. Because testing-based approaches require sending a large number of requests to the component under test, they are expensive, particularly at high workloads (i.e., where performance degradation is likely to occur); such tests may render the component unusable for their duration, which is also a particularly bad time for a system to be unavailable. Avoiding high-workload testing by extrapolating from data collected at low workloads, e.g., through regression analysis, sacrifices accuracy. To address this challenge, we propose a framework that uses queueing models to guide the extrapolation process while maintaining accuracy. Our extensive experiments show that our approach yields accurate results compared to standard techniques (i.e., regression analysis alone).

Finally, we address the problem of parameter estimation in existing design-level approaches. An important step in software quality analysis is estimating model parameters, which describe, for example, how the system and its components are used (known as their operational profile). Existing design-level approaches assume this information is available, but it is unclear how they obtain it. We identify sources of information available during design, and describe how information from these sources can be translated for use in component reliability estimation.
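As a hypothetical illustration of queueing-guided extrapolation (the dissertation's framework is more general; the M/M/1 model and all numbers below are assumptions), one can fit a queueing response-time curve to measurements taken only at low arrival rates and then use the fitted curve, rather than a straight line, to predict response times near saturation:

```python
# Hypothetical sketch: fit an M/M/1 response-time curve R(lam) = S / (1 - lam*S)
# to LOW-workload measurements, then extrapolate to HIGH workloads without
# actually testing there (stdlib only; a grid search stands in for a proper
# least-squares solver).

def mm1_response(lam, S):
    """Mean response time of an M/M/1 queue with mean service time S."""
    assert lam * S < 1.0, "queue is unstable at this load"
    return S / (1.0 - lam * S)

def fit_service_time(rates, times):
    """Grid-search least-squares fit of the service time S."""
    best_S, best_err = None, float("inf")
    for i in range(1, 5000):                       # candidate S in (0, 0.5) s
        S = i / 10000.0
        if max(rates) * S >= 1.0:
            continue                               # unstable at measured load
        err = sum((mm1_response(l, S) - t) ** 2 for l, t in zip(rates, times))
        if err < best_err:
            best_S, best_err = S, err
    return best_S

# Low-workload measurements, synthesized here from S = 0.05 s for illustration.
low_rates = [1.0, 2.0, 4.0, 6.0]
low_times = [mm1_response(l, 0.05) for l in low_rates]

S_hat = fit_service_time(low_rates, low_times)
# Predict the response time at 18 req/s (90% utilization) without testing there.
print(S_hat, mm1_response(18.0, S_hat))
```

The key point the sketch illustrates is the shape constraint: linear regression on the nearly flat low-load points would badly underestimate response times near saturation, whereas the queueing model's hyperbolic form captures the steep growth as utilization approaches one.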
Our evaluation and validation experiments, which use implementations as ground truth, indicate that determining operational profiles with our approach results in accurate reliability estimates.
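As a minimal, hypothetical illustration of how an operational profile enters a reliability estimate (this is a standard usage-weighted formulation, not the dissertation's actual model; the operation names and numbers are invented), the profile can weight per-operation reliability estimates:

```python
# Hypothetical sketch: an operational profile gives the probability with
# which each operation of a component is invoked; a simple usage-weighted
# component reliability is the dot product of the profile with per-operation
# reliability estimates (illustrative only).

def component_reliability(profile, op_reliability):
    """Usage-weighted reliability: sum over operations of p_i * R_i."""
    assert abs(sum(profile.values()) - 1.0) < 1e-9, "profile must sum to 1"
    return sum(p * op_reliability[op] for op, p in profile.items())

# Profile derived (hypothetically) from design-time information such as
# scenario frequencies in a use-case model.
profile = {"read": 0.70, "write": 0.25, "admin": 0.05}
op_rel  = {"read": 0.999, "write": 0.995, "admin": 0.980}
print(round(component_reliability(profile, op_rel), 6))   # -> 0.99705
```

The sketch makes the dependence on the profile explicit: the same per-operation reliabilities yield a different component reliability under a different usage mix, which is why estimating the operational profile from design-time sources matters.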