RE 2012 – Proceedings

Welcome to the 20th IEEE International Conference on Requirements Engineering (RE’12). RE'12 will be held at the
Gleacher Center in downtown Chicago - the "Windy City" - just steps from the Navy Pier and the Magnificent Mile. Situated
on beautiful Lake Michigan, Chicago has one of the world’s most distinctive skylines. Chicago is also the city of blues, the
starting point of Route 66, and home of the deep-dish pizza. It has a colorful history featuring many larger-than-life characters
from Frank Lloyd Wright to Jake and Elwood Blues. It is a major hub for industry, telecommunications and infrastructure, and
the location of some of the world's best-known businesses, including Boeing and Wrigley. Chicago is also home to several
world-class universities, including the University of Chicago, Northwestern University, and DePaul University. Work from
DePaul in particular has been very well represented at previous RE conferences, so it is fitting for RE to be in Chicago in 2012.

Handling Uncertainty

Models are good at expressing information that is known, but they typically offer no support for representing what a modeler does not yet know at a particular phase of the software development process. Partial models address this by being able to precisely represent uncertainty about model content. In previous work, we developed a general approach for defining partial models and applied it to capturing uncertainty, including reasoning over design models containing uncertainty. In this paper, we show how to apply our approach to managing requirements uncertainty. In particular, we address the problems of specifying uncertainty within a requirements model, refining a model as uncertainty is reduced, and reasoning with traceability relations between models containing uncertainty. We illustrate our approach using the meeting scheduler example.

Stakeholders frequently use speculative language when they need to convey their requirements with some degree of uncertainty. Due to the intrinsic vagueness of speculative language, speculative requirements risk being misunderstood and their associated uncertainty overlooked; they therefore benefit from careful treatment in the requirements engineering process. In this paper, we present a linguistically oriented approach to automatic detection of uncertainty in natural language (NL) requirements. Our approach comprises two stages. First, we identify speculative sentences by applying a machine learning technique, Conditional Random Fields (CRFs), to identify uncertainty cues; the model exploits a rich set of lexical and syntactic features extracted from requirements sentences. Second, we determine the scope of the uncertainty using a rule-based approach that draws on a set of hand-crafted linguistic heuristics together with the dependency structures present in the sentence parse tree. We report on a series of experiments we conducted to evaluate the performance and usefulness of our system.
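To make the cue-detection stage concrete, the following is a deliberately simplified, lexicon-based stand-in: the paper learns cues with a trained CRF over lexical and syntactic features, whereas this sketch merely matches a small hypothetical list of speculative words. The cue list and example sentences are illustrative only.

```python
import re

# Hypothetical cue lexicon; the actual approach learns cues with a CRF
# rather than matching a fixed list -- this is only a simplified stand-in.
SPECULATIVE_CUES = {"may", "might", "could", "possibly", "perhaps", "should"}

def find_uncertainty_cues(sentence):
    """Return the speculative cue words found in a requirements sentence."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [t for t in tokens if t in SPECULATIVE_CUES]

def is_speculative(sentence):
    """Flag a sentence as speculative if it contains any uncertainty cue."""
    return bool(find_uncertainty_cues(sentence))

reqs = [
    "The system shall log every transaction.",
    "The scheduler might reorder low-priority meetings.",
]
flags = [is_speculative(r) for r in reqs]
print(flags)  # expected: [False, True]
```

A real detector would additionally resolve the scope of each cue, which is where the paper's dependency-based heuristics come in.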

The modern automobile is a complex electronic system with a number of features providing functionalities for driver and passenger convenience, control of the vehicle, and safety of the occupants. As new features are developed and introduced into the automobile, they interact with already existing features, sometimes resulting in undesirable behaviours. These undesirable interactions are often detected very late in the development cycle, or sometimes even in the field. This introduces uncertainty in the system development process, as changes to address these interactions often result in a cascading series of changes whose scope is difficult to predict. This paper presents a method and algorithms for identifying and resolving feature interactions early in the development life-cycle by addressing the problem at the level of requirements specifications. We have applied this method successfully in the automotive domain and present a case study of detecting and resolving feature interactions.

Requirements Processes

Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower-quality final trace matrices. As the human must vet the results of trace tools for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study to understand when and why humans make correct and incorrect decisions during tracing tasks through logs of analyst actions. In addition to the traditional measures of recall and precision to describe the accuracy of the results, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress towards the final trace matrix, identifying factors that may influence their performance and determining how actual tracing strategies, derived from analyst logs, affect results.

Dealing with non-functional requirements (NFRs) has posed a challenge to software engineers for many years. Over the years, many methods and techniques have been proposed to improve their elicitation, documentation, and validation. Knowing more about the state of the practice on these topics may benefit both practitioners' and researchers' daily work. A few empirical studies have been conducted in the past, but none from the perspective of software architects, in spite of the great influence that NFRs have on architects' daily practices. This paper presents some of the findings of an empirical study based on 13 interviews with software architects. It addresses questions such as: who decides the NFRs, what types of NFRs matter to architects, how NFRs are documented, and how NFRs are validated. The results are contextualized with existing previous work.

Product managers play a pivotal role in maximizing value for software companies. To assist product managers in their activities, the Software Product Management (SPM) Maturity Matrix was created, which enables product managers to benchmark their organization, assess individual processes, and apply best practices to create an effective SPM environment. Although a number of case studies and expert evaluations have been performed, a large-scale quantitative analysis has not yet been conducted to evaluate this instrument. This research evaluates and improves the SPM Maturity Matrix based on 62 case studies. The cases were analyzed to uncover anomalies: blocking questions, blocking levels, and undifferentiating questions. The anomalies were then discussed in a workgroup with experts, which resulted in suggested improvements to address them. The suggestions of the workgroup will be used to improve the SPM Maturity Matrix. As an additional result, the case studies also provide valuable insight into the maturity of software companies in industry.

Requirements Management and Tracing 1

Keeping requirements specifications up-to-date when systems evolve is a manual and expensive task. Software engineers have to go through the whole requirements document and look for the requirements that are affected by a change. Consequently, engineers usually apply changes to the implementation directly and leave requirements unchanged.

In this paper, we propose an approach for automatically detecting outdated requirements based on changes in the code. Our approach first identifies the changes in the code that are likely to affect requirements. Then it extracts a set of keywords describing the changes. These keywords are traced to the requirements specification, using an existing automated traceability tool, to identify affected requirements.

Automatically identifying outdated requirements reduces the effort and time needed for the maintenance of requirements specifications significantly and thus helps preserve the knowledge contained in them.

We evaluated our approach in a case study where we analyzed two consecutive source code versions and were able to detect 12 requirements-related changes out of 14 with a precision of 79%. Then we traced a set of keywords we extracted from these changes to the requirements specification. In comparison to simply tracing changed classes to requirements, we got better results in most cases.

Traceability underlies many important software and systems engineering activities, such as change impact analysis and regression testing. Despite important research advances, as in the automated creation and maintenance of trace links, traceability implementation and use is still not pervasive in industry. A community of traceability researchers and practitioners has been collaborating to understand the hurdles to making traceability ubiquitous. Over a series of years, workshops have been held to elicit and refine research challenges and related tasks to address these shortcomings. The community's continuing discussion has resulted in the research roadmap of this paper. We present a brief view of the state of the art in traceability, the grand challenge for traceability, and future directions for the field.

Modern requirements tracing tools employ information retrieval methods to automatically generate candidate links. Due to the inherent trade-off between recall and precision, such methods cannot achieve a high coverage without also retrieving a great number of false positives, causing a significant drop in result accuracy. In this paper, we propose an approach to improving the quality of candidate link generation for the requirements tracing process. We base our research on the cluster hypothesis which suggests that correct and incorrect links can be grouped in high-quality and low-quality clusters respectively. Result accuracy can thus be enhanced by identifying and filtering out low-quality clusters. We describe our approach by investigating three open-source datasets, and further evaluate our work through an industrial study. The results show that our approach outperforms a baseline pruning strategy and that improvements are still possible.

Legal and Regulatory Requirements

Companies that own, license, or maintain personal information face a daunting number of privacy and security regulations. Companies are subject to new regulations from one or more governing bodies, when companies introduce new or existing products into a jurisdiction, when regulations change, or when data is transferred across political borders. To address this problem, we developed a framework called "requirements water marking" that business analysts can use to align and reconcile requirements from multiple jurisdictions (municipalities, provinces, nations) to produce a single high or low standard of care. We evaluate the framework in an empirical case study conducted over a subset of U.S. data breach notification laws that require companies to secure their data and notify consumers in the event of data loss or theft. In this study, applying our framework reduced the number of requirements a company must comply with by 76% across 8 jurisdictions. We show how the framework surfaces critical requirements trade-offs and potential regulatory conflicts that companies must address during the reconciliation process. We summarize our results, including surveys of information technology law experts to contextualize our empirical results in legal practice.

Over time, laws change to meet evolving social needs. Requirements engineers who develop software for regulated domains, such as healthcare or finance, must adapt their software as laws change to maintain legal compliance. In the United States, regulatory agencies will almost always release a proposed regulation, or rule, and accept comments from the public. The agency then considers these comments when drafting a final rule that will be binding on the regulated domain. Herein, we examine how these proposed rules evolve into final rules, and propose an Adaptability Framework. This framework can aid software engineers in predicting which areas of a proposed rule are most likely to evolve, allowing engineers to begin building towards the more stable sections of the rule. We develop the framework through a formative study using the Health Insurance Portability and Accountability Act (HIPAA) Security Rule and apply it in a summative study on the Health Information Technology: Initial Set of Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology.

Security is primarily concerned with protecting assets from harm. Identifying and evaluating assets are therefore key activities in any security engineering process -- from modeling threats and attacks and discovering existing vulnerabilities to selecting appropriate countermeasures. However, despite their crucial role, assets are often neglected during the development of secure software systems. Indeed, many systems are designed with fixed security boundaries and assumptions, without the possibility to adapt when assets change unexpectedly, new threats arise, or undiscovered vulnerabilities are revealed. To handle such changes, systems must be capable of dynamically enabling different security countermeasures. This paper promotes assets as first-class entities in engineering secure software systems. An asset model is related to requirements, expressed through a goal model, and to the objectives of an attacker, expressed through a threat model. These models are then used as input to build a causal network to analyze system security in different situations, and to enable, when necessary, a set of countermeasures to mitigate security threats. The causal network is conceived as a runtime entity that tracks relevant changes that may arise at runtime, and enables a new set of countermeasures. We illustrate and evaluate our proposed approach by applying it to a substantive example concerned with the security of mobile phones.

Socio-technical systems consist of human, hardware and software components that work in tandem to fulfill stakeholder requirements. By their very nature, such systems operate under uncertainty as components fail, humans act in unpredictable ways, and the environment of the system changes. Self-repair refers to the ability of such systems to restore fulfillment of their requirements by monitoring, reasoning about, and diagnosing the current state of individual requirements. Self-repair is complicated by the multi-agent nature of socio-technical systems, which demands that requirements monitoring and self-repair be done in a decentralized fashion. In this paper, we propose a stateful requirements monitoring approach that maintains an instance of a state machine for each requirement, represented as a goal, with runtime monitoring and compensation capabilities. By managing the interactions between the state machines, our approach supports hierarchical goal reasoning in both upward and downward directions. We have implemented a customizable Java framework that supports experimentation by simulating a socio-technical system. Results from our experiments suggest effective and precise support for a wide range of self-repairing decisions in a socio-technical setting.

Privacy requirements for mobile applications offer a distinct set of challenges for requirements engineering. First, they are highly dynamic, changing over time and locations, and across the different roles of agents involved and the kinds of information that may be disclosed. Second, although some general privacy requirements can be elicited a priori, users often refine them at runtime as they interact with the system and its environment. Selectively disclosing information to appropriate agents is therefore a key privacy management challenge, requiring carefully formulated privacy requirements amenable to systematic reasoning. In this paper, we introduce privacy arguments as a means of analysing privacy requirements in general and selective disclosure requirements (that are both content- and context-sensitive) in particular. Privacy arguments allow individual users to express personal preferences, which are then used to reason about privacy for each user under different contexts. At runtime, these arguments provide a way to reason about requirements satisfaction and diagnosis. Our proposed approach is demonstrated and evaluated using the privacy requirements of BuddyTracker, a mobile application we developed as part of our overall research programme.

Feature Models

Feature models provide an effective way to organize and reuse requirements in a specific domain. A feature model consists of a feature tree and cross-tree constraints. Identifying features and then building a feature tree takes a lot of effort, and many semi-automated approaches have been proposed to help with this task. However, finding cross-tree constraints is often more challenging and still lacks automated support. In this paper, we propose an approach to mining cross-tree binary constraints in the construction of feature models. Binary constraints are the most basic kind of cross-tree constraints: they involve exactly two features and can be further classified into two sub-types, i.e., requires and excludes. Given these two sub-types, any pair of features in a feature model falls into one of the following classes: no constraint between them, a requires constraint between them, or an excludes constraint between them. Therefore, we perform a 3-class classification on feature pairs to mine binary constraints from features. We incorporate a support vector machine as the classifier and utilize a genetic algorithm to optimize it. We conduct a series of experiments on two feature models constructed by third parties, to evaluate the effectiveness of our approach under different conditions that might occur in practical use. Results show that we can mine binary constraints at a high recall (near 100% in most cases), which is important because finding a missing constraint is very costly in real, often large, feature models.
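The 3-class decision on feature pairs can be illustrated with a toy stand-in. The paper uses an SVM tuned by a genetic algorithm over richer features; the sketch below instead uses a naive co-occurrence heuristic over a handful of hypothetical product configurations, purely to show the shape of the requires/excludes/none classification.

```python
# Toy product configurations (sets of selected features). Both the feature
# names and the classification rule are illustrative assumptions.
configs = [
    {"GPS", "Screen"},
    {"GPS", "Screen", "Camera"},
    {"Camera", "Screen"},
    {"Radio"},
]

def classify_pair(a, b, configs):
    """Classify the ordered feature pair (a, b) as 'requires' (every config
    with a also has b), 'excludes' (a and b never co-occur), or 'none'."""
    with_a = [c for c in configs if a in c]
    both = [c for c in with_a if b in c]
    if with_a and not both and any(b in c for c in configs):
        return "excludes"
    if with_a and len(both) == len(with_a):
        return "requires"
    return "none"

print(classify_pair("GPS", "Screen", configs))  # expected: requires
print(classify_pair("Radio", "GPS", configs))   # expected: excludes
print(classify_pair("Camera", "GPS", configs))  # expected: none
```

A learned classifier replaces this brittle rule with a decision boundary trained on labeled feature pairs, which is what makes near-100% recall plausible on realistic models.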

In this paper, we present a feature-oriented requirements modelling language (FORML) for modelling the behavioural requirements of a software product line. FORML aims to support feature modularity and precise requirements modelling, and to ease the task of adding new features to a set of existing requirements. In particular, FORML decomposes a product line’s requirements into feature modules, and provides language support for specifying tightly-coupled features as model fragments that extend and override existing feature modules. We discuss how decisions in the design of FORML affect the evolvability of requirements models, and explicate the specification of intended interactions among related features. We applied FORML to the specification of two feature sets, automotive and telephony, and we discuss how well the case studies exercised the language and how the requirements models evolved over the course of the case studies.

Modern technical systems typically consist of multiple components and must provide many functions that are realized by the complex interaction of these components. Moreover, very often not only a single product, but a whole product line with different compositions of components and functions must be developed. To cope with this complexity, it is important that engineers have intuitive, but precise means for specifying the requirements for these systems and have tools for automatically finding inconsistencies within the requirements, because these could lead to costly iterations in the later development. We propose a technique for the scenario-based specification of component interactions based on Modal Sequence Diagrams. Moreover, we developed an efficient technique for automatically finding inconsistencies in the scenario-based specification of many variants at once by exploiting recent advances in the model-checking of product lines. Our evaluation shows benefits of this technique over performing individual consistency checking of each variant specification.

Requirements Communication

Software requirements specifications play a crucial role in software development projects. Especially in large projects, these specifications serve as a source of communication and information for a variety of roles involved in downstream activities like architecture, design, and testing. This vision paper argues that in order to create high-quality requirements specifications that fit the specific demands of successive document stakeholders, our research community needs to better understand the particular information needs of downstream development roles. In this paper, the authors introduce the idea of view-based requirements specifications. Two scenarios illustrate (1) current problems and challenges related to the research underlying the envisioned idea and (2) how these problems could be solved in the future. Based on these scenarios, challenges and research questions are outlined and supplemented with current results of exemplary user studies. Furthermore, potential future research is suggested, which the community should perform to answer the research questions as part of a research agenda.

It is believed that the effectiveness of requirements engineering activities depends at least partially on the individuals involved. One of the factors that seems to influence an individual's effectiveness in requirements engineering activities is knowledge of the problem being solved, i.e., domain knowledge. While in-depth domain knowledge helps a requirements engineer understand the problem more easily, he or she may fall prey to tacit assumptions of the domain and overlook issues that are obvious to domain experts. This paper describes a controlled experiment to test the hypothesis that adding requirements analysts who are ignorant of the domain to a requirements elicitation team for a computer-based system in that domain improves the effectiveness of the team. The results, although not conclusive, show some support for accepting the hypothesis. The results were also analyzed to determine the effect of creativity, industrial experience, and requirements engineering experience. The results suggest other hypotheses to be studied in the future.

This paper presents a novel approach for pragmatic ambiguity detection in natural language (NL) requirements specifications defined for a specific application domain. Starting from a requirements specification, we use a Web search engine to retrieve a set of documents focused on the same domain as the specification. From these domain-related documents, we extract different knowledge graphs, which are employed to analyse each requirement sentence looking for potential ambiguities. To this end, we developed an algorithm that takes the concepts expressed in the sentence and searches for corresponding "concept paths" within each graph. The paths resulting from the traversal of each graph are compared and, if their overall similarity score is lower than a given threshold, the requirements specification sentence is considered ambiguous from the pragmatic point of view. A proof of concept is given throughout the paper to illustrate the soundness of the proposed strategy.
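The threshold test at the heart of this approach can be sketched in a few lines. The concept paths, the Jaccard similarity measure, and the threshold value below are all illustrative assumptions; the paper's path extraction and overall scoring differ.

```python
def jaccard(path_a, path_b):
    """Overlap between two concept paths, treated as sets of concepts."""
    a, b = set(path_a), set(path_b)
    return len(a & b) / len(a | b)

# Hypothetical concept paths extracted for the same requirement sentence
# from two different domain documents.
path_doc1 = ["user", "account", "password", "reset"]
path_doc2 = ["user", "account", "token", "email"]

THRESHOLD = 0.6
score = jaccard(path_doc1, path_doc2)
ambiguous = score < THRESHOLD
print(round(score, 2), ambiguous)  # expected: 0.33 True
```

The intuition: if independent domain documents interpret the same sentence along divergent concept paths, readers in that domain are likely to interpret it divergently too.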

Goal Modeling

Requirements completeness is among the most critical and difficult software engineering challenges. Missing requirements often result from poor risk analysis at requirements engineering time. Obstacle analysis is a goal-oriented form of risk analysis aimed at anticipating exceptional conditions in which the software should behave adequately. In the identify-assess-control cycles of such analysis, the assessment step is not well supported by current techniques. This step is concerned with evaluating how likely the obstacles to goals are and how likely and severe their consequences are. Those key factors drive the selection of most appropriate countermeasures to be integrated in the system goal model for increased completeness. Moreover, obstacles to probabilistic goals are currently not supported; such goals prescribe that some corresponding target property should be satisfied in at least X% of the cases.
The paper presents a probabilistic framework for goal specification and obstacle assessment. The specification language for goals and obstacles is extended with a probabilistic layer where probabilities have a precise semantics grounded on system-specific phenomena. The probability of a root obstacle to a goal is thereby computed by up-propagation of probabilities of finer-grained obstacles through the obstacle refinement tree. The probability and severity of obstacle consequences is in turn computed by up-propagation from the obstructed leaf goals through the goal refinement graph. The paper shows how the computed information can be used to prioritize obstacles for countermeasure selection towards a more complete and robust goal model. The framework is evaluated on a non-trivial carpooling support system.
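The up-propagation step can be sketched for a small obstacle refinement tree. The node encoding, leaf probabilities, and the independence assumption behind the combination rules are illustrative; the framework's semantics is grounded in system-specific phenomena rather than this toy model.

```python
import math

# Obstacle refinement tree: leaves carry estimated probabilities; an AND
# node requires all sub-obstacles to occur (probabilities multiplied,
# assuming independence), an OR node requires at least one (complement rule).
tree = ("OR",
        ("AND", ("leaf", 0.1), ("leaf", 0.5)),
        ("leaf", 0.2))

def propagate(node):
    """Compute the probability of the root obstacle by recursive
    up-propagation through the refinement tree."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    probs = [propagate(child) for child in node[1:]]
    if kind == "AND":
        return math.prod(probs)
    # OR: probability that at least one sub-obstacle occurs
    return 1 - math.prod(1 - p for p in probs)

print(round(propagate(tree), 3))  # expected: 0.24
```

The same recursive scheme, run over the goal refinement graph instead of the obstacle tree, yields the likelihood and severity of consequences on higher-level goals.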

DNA nanotechnology uses the information processing capabilities of nucleic acids to design self-assembling, programmable structures and devices at the nanoscale. Devices developed to date have been programmed to implement logic circuits and neural networks, capture or release specific molecules, and traverse molecular tracks and mazes.
Here we investigate the use of requirements engineering methods to make DNA nanotechnology more productive, predictable, and safe. We use goal-oriented requirements modeling to identify, specify, and analyze a product family of DNA nanodevices, and we use PRISM model checking to verify both common properties across the family and properties that are specific to individual products. Challenges to doing requirements engineering in this domain include the error-prone nature of nanodevices carrying out their tasks in the probabilistic world of chemical kinetics, the fact that roughly a nanomole (a 1 followed by 14 0s) of devices are typically deployed at once, and the difficulty of specifying and achieving modularity in a realm where devices have many opportunities to interfere with each other. Nevertheless, our results show that requirements engineering is useful in DNA nanotechnology and that leveraging the similarities among nanodevices in the product family improves the modeling and analysis by supporting reuse.

Goal models have been found to be useful for supporting the decision making process in the early requirements phase. Through measuring contribution degrees of low-level decisions to the fulfilment of high-level quality goals and combining them with priority statements, it is possible to compare alternative solutions of the requirements problem against each other. But where do contribution measures come from and what is the right way to combine them in order to do such analysis? In this paper we describe how full application of the Analytic Hierarchy Process (AHP) can be used to quantitatively assess contribution relationships in goal models based on stakeholder input and how we can reason about the result in order to make informed decisions. An exploratory experiment shows that the proposed procedure is feasible and offers evidence that the resulting goal model is useful for guiding a decision. It also shows that situation-specific characteristics of the requirements problem at hand may influence stakeholder input in a variety of ways, a phenomenon that may need to be studied further in the context of eliciting such models.
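A common way to turn AHP pairwise comparisons into contribution weights is the geometric-mean approximation of the principal eigenvector, sketched below. The comparison matrix values are illustrative, and the abstract does not specify which AHP prioritization variant the authors use.

```python
import math

# Pairwise comparison matrix over three alternatives on the AHP 1-9 scale:
# entry [i][j] says how strongly alternative i is preferred over j.
# Values are illustrative stakeholder judgments.
M = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]

def ahp_weights(matrix):
    """Derive priority weights via the geometric mean of each row,
    normalized to sum to 1 (an approximation of the eigenvector method)."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

w = ahp_weights(M)
print([round(x, 2) for x in w])  # expected: [0.65, 0.23, 0.12]
```

In a goal model these weights would quantify how strongly each alternative contributes to the parent quality goal, which is what makes alternatives comparable.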

In many software intensive systems traceability is used to support a variety of software engineering activities such as impact analysis, compliance verification, and requirements validation. However, in practice, traceability links are often created towards the end of the project specifically for approval or certification purposes. This practice can result in inaccurate and incomplete traces, and also means that traceability links are not available to support early development efforts. We address these problems by presenting a trace recommender system which pushes recommendations to project stakeholders as they create or modify traceable artifacts. We also introduce the novel concept of a trace obligation, which is used to track satisfaction relations between a target artifact and a set of source artifacts. We model traceability events and subsequent actions, including user recommendations, using the Business Process Modeling Notation (BPMN). We demonstrate and evaluate the efficacy of our approach through an illustrative example and a simulation conducted using the software engineering artifacts of a robotic system for supporting arm rehabilitation. Our results show that tracking trace obligations and generating trace recommendations throughout the active phases of a project can lead to early construction of traceability knowledge.

This paper reports on a large-scale empirical multiple-case study that aimed to characterize the requirements space in the domain of web-based Enterprise Systems (ES). Results from this study showed, among other things, that on average about 85% of all the software functionalities in the studied domain are specified using a small core set of five requirements classes, even though the results of the study hint at a larger set of nine requirements classes that should be covered. The study also uncovered a law describing the growth pattern of the emerging requirements classes in software domains. According to this law, the emergence of the classes in a requirements taxonomic scheme for a particular domain, independent of the order in which specifications of requirements in that domain are analyzed, includes a rapid initial growth phase, where the majority of the requirements classes are identified, followed by a rapid slow-down phase with periods of no growth (i.e., the stabilization phase).

In current project environments, requirements often evolve throughout the project and are worked on by stakeholders in large and distributed teams. Such teams often use online tools such as mailing lists, bug tracking systems or online discussion forums to communicate, clarify or coordinate work on requirements. In this kind of environment, the expected evolution from initial idea, through clarification, to a stable requirement, often stagnates. When project managers are not aware of underlying problems, development may proceed before requirements are fully understood and stabilized, leading to numerous implementation issues and often resulting in the need for early redesign and modification.
In this paper, we present an approach to analyzing online requirements communication and a method for the detection and classification of clarification events in requirement discussions. We used our approach to analyze online requirements communication in the IBM Rational Team Concert (RTC) project and identified a set of six clarification patterns. Since a predominance of clarifications throughout the lifetime of a requirement often indicates a problematic requirement, our approach lends support to project managers to assess, in real time, the state of discussions around a requirement and promptly react to requirements problems.

Product Management Concerns

This industrial experience paper presents the results of a survey with an open-ended question designed to clarify how product management practitioners understand the term product management. The survey was conducted through a public LinkedIn group for a period of nine months. During this timeframe it received 201 responses. The responses were analyzed qualitatively to identify the essential components and properties of product management from the practitioners’ viewpoint. In comparison with the existing product management frameworks and definitions, the responses showed a tendency to mix product management and product marketing. Although the respondents had difficulties in naming all product management activities, we identified six that represent the core activities of product managers in the industry. The findings have implications for the evolution of product management frameworks to address the interests of a wider range of product managers and the development of common understanding on the necessary skill sets for the education and recruitment of product managers.

Transport Canada is reviewing its Aviation Security regulations in a multi-year modernization process. As part of this review, consideration is given to transitioning regulations where appropriate from a prescriptive style to an outcome-based style. This raises new technical and cultural challenges related to how to measure compliance. This paper reports on a novel approach used to model regulations with the Goal-oriented Requirement Language, augmented with qualitative indicators. These models are used to guide the generation of questions for inspection activities, enable a flexible conversion of real-world data into goal satisfaction levels, and facilitate compliance analysis. A new propagation mechanism enables the evaluation of the compliance level of an organization. This outcome-based approach is expected to yield a more precise understanding of who complies with what, while highlighting opportunities for improving existing regulatory elements.
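To give a feel for how satisfaction levels might be propagated up a goal graph, the following is a minimal sketch. The weighted-average formula, the 0–100 scale, and the example weights are illustrative assumptions, not the propagation mechanism defined in the paper:

```python
# Illustrative sketch: propagate child satisfaction levels (0-100) up to a
# parent goal as a weighted average. The weights, scale, and formula are
# assumptions for illustration, not the paper's GRL propagation mechanism.

def propagate(weights, satisfactions):
    """Compliance level of a parent goal from weighted child levels."""
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, satisfactions)) / total

# Hypothetical example: three sub-goals scored from inspection data.
print(propagate([2, 1, 1], [100, 50, 0]))  # 62.5
```

The appeal of such a scheme is that inspection answers only need to be mapped onto the leaf satisfaction levels; the compliance level of the organization then follows mechanically from the goal structure.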

Aspect Oriented RE

Aspect-oriented requirements engineering (AORE) introduced an artifact called Requirements Composition Table (RCT). RCT presents a holistic view of an application’s functionality structured by core features and crosscutting concerns. This artifact can effectively support various project tasks and serve as a common frame of reference for all parties on a project team. As AORE remains little-known to most practitioners in the software development field, the purpose of this paper is to explain the RCT concept to practitioners and discuss its benefits.
The RCT technique has been implemented for a number of Wall Street applications at various investment banks. The RCT can support important project tasks and has proven to be one of the most valuable artifacts of a software project. This paper discusses the steps to develop an RCT, provides an example of how to use it to perform change impact analysis for releases, describes experiences using RCTs in practice, and discusses lessons learned on projects implementing the RCT technique.

Categorizing requirements based upon their aspects and stakeholder intent can help requirements engineers and other developers to group and retrieve the requirements for analyzing the aspects of concern. The analysis is essential for project planning, system verification and validation, and integration coordination. For software requirements, researchers and practitioners have identified a set of categories, such as functional, performance, and safety, to categorize a requirement. In a large systems engineering and integration project comprised of not only software but also hardware and activities in other disciplines (e.g., electrical, civil engineering), we encountered many additional, different aspects of the system that need to be analyzed, and thus the requirements need to be categorized for those aspects. This experience report describes the lessons learnt in categorizing requirements in this project. The report provides insights into the practical issues of the categorization and informs our research on how the categorization could be done more effectively.

Natural Language vs. Formalized Specification

During model-driven requirements elicitation sessions for several commercial products, weaknesses were identified with available modeling languages such as UML and SysML. Continued frustration when attempting to use the UML for requirements capture eventually resulted in collaboration between Siemens and Technische Universität München (TUM) to define a new visual requirements language called the Unified Requirements Modeling Language (URML). This paper describes some of the rationale behind the development of the URML, highlights some of the more unusual features of the language, and, finally, describes its use on a commercial project.

Natural language (NL) requirement specifications are widely used in industry, but ensuring high quality in these specifications is not easy. This work investigates, in an empirical study, the typical defect type distributions in current NL requirement specifications. For this study, we categorized more than 5,800 review-protocol entries at Mercedes-Benz, originating from reviews of real automotive specifications conducted according to a quality model. As a result, we obtained (a) a typical defect type distribution in NL specifications in the automotive domain, (b) correlations of quality criteria to defect severity, (c) indicators on the ease of handling quality criteria in the review process, and (d) information on the time needed for defect correction with respect to quality criteria. To validate the findings from the data analysis, we additionally conducted 15 interviews with quality managers. The results confirm quantitatively that the most critical and important quality criteria in the investigated NL requirement specifications are consistency, completeness, and correctness.

Prioritization

Requirements engineering activities are a critical part of a project’s lifecycle. Success of subsequent project phases is highly dependent on good requirements definition. However, eliciting requirements and achieving consensus on priority among all stakeholders is a complex task. In the software development of large-scale global applications, the challenges increase with the need to manage discussions between groups of stakeholders with different roles and backgrounds. This paper presents a practical approach to requirements elicitation and prioritization based on observation of realistic user behavior. It uses basic statistical analysis and application usage information to automatically identify the most relevant requirements for the majority of stakeholders. An industry case illustrates the feasibility and efficiency of our approach.
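The core idea of ranking requirements by observed usage can be sketched in a few lines. The event names, the feature-to-requirement mapping, and the frequency-count ranking below are hypothetical illustrations, not the statistical analysis actually used in the paper:

```python
from collections import Counter

# Illustrative sketch of usage-based prioritization (names and mapping are
# hypothetical): rank candidate requirements by how often the application
# features they affect appear in a usage log.

def prioritize(usage_log, feature_to_requirement):
    """Return requirements ordered by the usage frequency of their features."""
    counts = Counter(feature_to_requirement[f] for f in usage_log
                     if f in feature_to_requirement)
    return [req for req, _ in counts.most_common()]

log = ["search", "search", "export", "search", "login"]
mapping = {"search": "R1: improve search", "export": "R2: batch export"}
print(prioritize(log, mapping))  # ['R1: improve search', 'R2: batch export']
```

The design choice here is that priority is inferred from what the majority of users actually do, rather than from stakeholder debate alone, which is the behavior-observation idea the abstract describes.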

There are usually more requirements than are feasible to implement in a given schedule. Thus, it is imperative to be able to choose the most valuable ones for implementation to ensure the delivery of a high-value software system. There are myriad requirements prioritization frameworks, and selecting the most appropriate one is a decision problem in its own right. In this paper we present our approach to selecting the most appropriate value-based requirements prioritization framework as per the requirements of our stakeholders. Based on our analysis, a single framework was selected, validated by requirements engineers and project managers, and deployed for company-wide use by a major IT player in India.

Requirements engineering is an essential activity in creating embedded real-time systems. Companies that produce a number of partially similar products can reduce development time and cost, improve quality and simplify software maintenance by applying reuse practices. Requirements reuse is an essential enabler for effective software reuse. This study describes two different approaches for requirements reuse at Danfoss. The first approach reuses those requirements that are envisioned to be common between two consecutive projects and allows changing and parameterization of parts of the requirements. The second approach organizes all requirements into a common model and explicitly manages variability and different requirement variants in this common model. The results show that both approaches can yield significant savings in effort by reusing common requirements. The first approach was found to be effective when domain maturity is low and a significant set of requirements changed from project to project. The second approach offers high reuse potential and significant savings for stable domains, where most requirements tend to be small additions or minor changes to existing requirements.

Researchers in requirements engineering and software architecture have emphasized the importance of non-functional requirements and their influence on the architectural design process. To improve this process, we have designed a tool, ArchiTech, which aims to support architects during design by suggesting alternative architectural decisions that can improve the satisfaction of certain types of non-functional requirements in a particular project, and by facilitating the reuse of architectural knowledge shared between projects of the same architectural domain (e.g., web-based applications).

Feature models provide an effective way to capture commonality and variability in a specific domain. Constructing a feature model requires a systematic review of existing software artifacts in a domain and is always a collaboration-intensive activity. However, existing feature modeling methods and tools lack explicit support for such collaboration. In this paper, we present an environment for feature modeling that promotes the collaboration between stakeholders as the basis of creating and evolving a feature model. We present concepts, methods, and a tool to show the feasibility of constructing feature models collaboratively, as well as how to integrate this environment with traditional feature modeling methods.

The User Requirements Notation (URN) enables the graphical modeling of requirements with goals and scenarios, and jUCMNav is a free, Eclipse-based tool that supports modeling and analysis with URN. Concern-Driven Development (CDD) enables requirements engineers to encapsulate and reason about concerns, whether they are crosscutting (i.e., aspects) or not. However, to truly capitalize on the benefits promised by CDD, concerns need to be encapsulated across software development phases, i.e., across different types of models at different levels of abstraction. Recently, URN was extended to support aspect-oriented concepts. This demonstration focuses on the new concern-driven modeling features of jUCMNav, together with its capabilities to compose aspects and to transform aspectual scenario models into design models in the Reusable Aspect Models notation. jUCMNav is hence one of the few tools that enable CDD from requirements to design.

This paper presents a tool suite that automates transition from precise use case and domain models to code. The suite is built around the Requirements Specification Language (RSL) that is based on a precise constrained language grammar. RSL specifications can be used to generate complete MVC/MVP code structure together with method bodies of the Controller layer.

Early stage requirements models are often documented using paper and pencil-based approaches. In our current research, we are exploring lightweight modeling tools and approaches that could provide a beneficial alternative. We have developed the FlexiSketch tool prototype which combines support for free-form sketching with lightweight metamodeling capabilities. This creates the possibility for an automatic transcription of the documented information in later modeling stages. The tool is designed to be used on tablet devices.

Feature-oriented analysis and modeling, which is widely accepted in software reuse, consists of two major phases that should be taken seriously. The first is to construct a feature model, and the second is to configure products based on the feature model obtained in the first. This paper presents a matrix-based approach to constructing and configuring feature models, whose main advantage is its scalability compared to traditional graphical feature models; a supporting tool is presented to demonstrate its feasibility.
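To illustrate the matrix idea, a feature-by-product boolean matrix can be queried for commonality and variability. The encoding below is an assumption made for illustration; the paper's actual matrix representation and tool may differ:

```python
# Illustrative sketch (an assumed encoding, not necessarily the paper's):
# a feature-by-product boolean matrix from which common (present in every
# product) and variable (present in only some products) features follow.

MATRIX = {            # feature -> presence in each of three products
    "core":       [1, 1, 1],
    "logging":    [1, 1, 0],
    "encryption": [0, 1, 0],
}

def common_features(matrix):
    """Features present in every product (domain commonality)."""
    return sorted(f for f, row in matrix.items() if all(row))

def variable_features(matrix):
    """Features absent from at least one product (domain variability)."""
    return sorted(f for f, row in matrix.items() if not all(row))

print(common_features(MATRIX))    # ['core']
print(variable_features(MATRIX))  # ['encryption', 'logging']
```

A tabular encoding like this scales by adding rows and columns rather than redrawing a diagram, which is the scalability advantage the abstract claims over graphical feature models.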

Clustering is of great practical value in discovering natural groupings of large numbers of requirements artifacts. Clustering-based visualization has shown promise in supporting requirements tracing. In this paper, we extend this success to a wider range of clustering-based visual exploration tasks in requirements engineering. We describe ReCVisu, a requirements exploration tool based on quantitative visualizations. We discuss the key features of ReCVisu and its potential improvements over previous work.

A self-adaptive system adjusts its configuration to tolerate changes in its operating environment. To date, requirements modeling methodologies for self-adaptive systems have necessitated analysis of all potential system configurations, and the circumstances under which each is to be adopted. We argue that, by explicitly capturing and modelling uncertainty in the operating environment, and by verifying and analysing this model at runtime, it is possible for a system to adapt to tolerate some conditions that were not fully considered at design time. We showcase in this paper our tools and research results.

Security Requirements Engineering (SRE) deals with the elicitation and analysis of security needs to specify security requirements for the system-to-be. In previous work, we have presented STS-ml, a security requirements modelling language for Socio-Technical Systems (STSs) that elicits security needs, using a goal-oriented approach, and derives the security requirements specification based on these needs. Particularly, STS-ml relates security to the interaction among actors in the STS. In this paper, we present STS-Tool, the modelling and analysis support tool for STS-ml. STS-Tool allows designers to model a STS at a high-level of abstraction, while expressing security needs over the interactions between the actors in the STS, and derive security requirements in terms of social commitments—promises with contractual validity—once the modelling is done.

Requirements engineers need to understand and model different aspects of organizations and systems under construction, and may need to use different modeling notations. However, most modeling tools support only one (or at most a few) notations, hindering requirements engineers from using the most appropriate notations for the particular modeling task. The RE-Tools is an open-source toolkit implemented as a UML Profile for StarUML, an open-source UML modeling tool. The toolkit supports many leading requirements modeling notations, including the NFR Framework, the i* Framework, KAOS, Problem Frames, and UML. Each of these notations may be used independently to model corresponding diagrams, or together with non-functional requirements (NFRs). The toolkit also supports the original qualitative reasoning of the NFR Framework and augments it with a quantitative one.

Context-aware systems often use rule-based reasoning engines for decision making without involving explicit interaction with the user. While rule-based systems excel in filtering out unsuitable solutions based on clear criteria, it is difficult to rank suitable solutions based on vague, qualitative criteria with a rule-based approach. Moreover, the description of such systems is typically ad-hoc without well-defined modeling tasks. CARGO (Context-Aware Reasoning using Goal-Orientation) aims to address these problems by combining rule-based and goal-based reasoning as well as scenario-based modeling to provide a more comprehensive way to define context-aware systems and to process contextual information. This demo presents CARGO, a modeling, simulation, and execution environment for context-aware systems built on existing tool support for the User Requirements Notation.

e-Government innovation is commonly hindered by legacy systems in public sector agencies. Legacy systems not only present technical constraints, but they also embed outdated and inefficient business processes. A critical manifestation of this issue occurs during the requirements phase of e-government projects, where requirements tend to mimic the features of old systems that must be modernized or replaced. The problem is not easy to identify and qualify, unless a critical stance to requirements analysis is taken, and unless requirements are examined in the context of processes that are creative, exploratory and collaborative. We propose a solution to legacy reproduction: a game-based tool that, through competition, enables the analysis and transformation of business requirements by applying risk and opportunity criteria.

Requirements analysis for socio-technical systems faces the challenge of multidisciplinary requirements that need to be collected from, understood by, and agreed upon by various stakeholders. In some cases, requirements analysts or involved stakeholders do not fully understand the requirements that other disciplines impose, and thus fail to deliver a requirements specification that can be used in interdisciplinary development teams. In requirements engineering, requirement patterns are used to recognize important and recurring issues, thus reducing the effort of compiling a list of software requirements. The objective of the proposed dissertation project is to develop a similar pattern-based approach that helps in analysing requirements from different disciplines and making them comprehensible for all stakeholders who need to agree to a requirements specification as the deliverable of requirements engineering. Due to the importance of legal aspects for, and users’ trust in, socio-technical systems, requirement patterns will be developed for these two aspects. For the legal requirement patterns, legal requirements that are stable with respect to change, due to their origin in fundamental, higher-ranked laws, will be collected, and requirement patterns will be derived from them. For the requirement patterns for trust support, antecedents that build trust will be collected, and requirement patterns that demand functionality to support these antecedents will be developed. The obtained patterns are then used to compile a requirement list that serves as input for requirements negotiation with the various stakeholders.

Today’s systems face the need for constant evolution to remain competitive, especially when looking at IT Ecosystems and their growing number of subsystems. As a prerequisite for staying competitive, system providers need a clear understanding of their stakeholders’ needs. As systems tend to be increasingly complex nowadays, support an increasing number of stakeholders, have shorter release cycles, and need to adapt to the environment and the users, some of the standard requirements elicitation techniques are no longer suitable. Especially when adaptivity is necessary, system providers need to understand the context in which the systems are used, as well as the context of users, to drive adaptation. In this paper I concentrate on the largest stakeholder group, namely end-users, for requirements elicitation. Evaluation criteria include i) support of context, ii) scalability to large numbers of end-users, and iii) scalability to large numbers of end-users’ needs and problems that lead to new requirements. My literature review suggests that this important field is currently underrepresented in requirements engineering research. This research proposes to develop a framework that explains the different context types and their role in requirements elicitation. The framework is then used to investigate existing requirements elicitation techniques and their potential for considering context. It is also used to show how emerging techniques can further support requirements elicitation with context.

Requirements engineers in business-process-driven software development are faced with the challenge of letting stakeholders determine which requirements are actually relevant for early business success and should be considered first, or even at all, during the elicitation and analysis activities. In the area of requirements engineering (RE) and release planning, prioritization is an established strategy for achieving this goal. Available prioritization approaches, however, do not consider all idiosyncrasies of business-process-driven software development. This lack of appropriate prioritization leads to effort often being spent on RE activities of minor importance. To support the requirements engineer in overcoming this problem, the idea of applying different models during prioritization is introduced, which is expected to put prioritization on a more reliable basis and to reduce unnecessary RE activities by focusing on the most important requirements.

With the recent emergence of cloud computing, the number of cloud service providers is constantly increasing and consumers’ needs are becoming more sophisticated. This situation leads to an evident need for methods that enable providers to correctly elicit requirements coming from very heterogeneous consumers. Moreover, consumers demand ways to find the cloud services that best meet their needs. We propose to address the identified issues by creating the StakeCloud community platform, capable of working as a cloud resources marketplace. It will allow users to input their resource needs and provide them with matching cloud services. Additionally, if the available services do not meet these needs, the unmet needs can be communicated as new requirements to cloud providers. Such a contribution will improve requirements communication and resource identification in cloud systems, bridging the gap between consumers and providers.