Presents an initial set of findings from an empirical study of social processes, technical system configurations, organisational contexts and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requir...

One approach to learning information systems design is case-based reasoning (CBR). However, learning from design cases can be superficial, if cases are not accessible at the level of meaningful design components and are not contextualised in the situation in which they have been conceived. StoryNet is a learning environment inspired by CBR and by a methodology for information systems analysis and ...

Software problems (problems whose solution is software-intensive) come in many forms. Given that software and computers are deeply embedded in society, one general characteristic of software problems is that their early requirements are expressed 'deep into the world', that is, in terms that end-users and other stakeholders would recognise and understand. The developer is left with the difficul...

The goals of the study presented were to compare different data analysis methods and to demonstrate the viability of simulation as a mechanism to allow such comparisons. Simulation was used to create data sets with a known underlying model and with non-Normal characteristics that are frequently found in software data sets: skewness, unstable variance, and outliers and combinations of these charact...
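As a hedged illustration of the simulation idea (the specific distributions and constants below are my own choices, not the study's), one can generate effort-like data with a known underlying model plus skewness, size-dependent variance and occasional outliers:

```python
import random

def simulate_effort_data(n=100, outlier_rate=0.05, seed=1):
    """Generate (size, effort) pairs from a known underlying model,
    with skewed noise, unstable variance and occasional outliers."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        size = rng.uniform(1, 50)             # known predictor
        effort = 2.0 * size ** 1.1            # known underlying model
        effort *= rng.lognormvariate(0, 0.4)  # skewed, multiplicative noise:
        if rng.random() < outlier_rate:       # variance grows with the mean
            effort *= rng.uniform(3, 10)      # a few gross outliers
        data.append((size, effort))
    return data

sample = simulate_effort_data()
print(len(sample))  # 100 (size, effort) pairs
```

Because the generating model is known, the fitted models produced by competing analysis methods can be compared against the truth.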

The paper discusses the application of state diagrams in UML to class testing. A set of coverage criteria is proposed based on control and data flow in UML state diagrams and it is shown how to generate test cases satisfying these criteria from UML state diagrams. First, control flow is identified by transforming UML state diagrams into extended finite state machines (EFSMs). The hierarchical and ...
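The transformation step can be pictured with a toy EFSM. In the sketch below, the state and event names are invented, and real EFSMs also carry guards and variable updates that are omitted here; the traversal derives one event sequence per transition, a simple form of all-transitions coverage:

```python
from collections import deque

def transition_coverage_paths(efsm, start):
    """Breadth-first traversal of an EFSM's transition graph, recording
    one event sequence from the start state for each transition.
    efsm maps state -> list of (event, next_state) pairs."""
    covered = {}
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for event, nxt in efsm.get(state, []):
            edge = (state, event, nxt)
            if edge not in covered:
                covered[edge] = path + [event]   # test sequence for this edge
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return covered

# hypothetical coin-box machine
efsm = {"idle": [("coin", "ready")],
        "ready": [("press", "dispensing"), ("refund", "idle")],
        "dispensing": [("done", "idle")]}
for edge, events in transition_coverage_paths(efsm, "idle").items():
    print(edge, events)
```

Each recorded event sequence is a candidate test case; the criteria in the paper additionally take data flow into account, which this sketch does not.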

Software practitioners recognise the importance of realistic estimates of effort for the successful management of software projects, the Web being no exception. Estimates are necessary throughout the whole development life cycle. They are fundamental when bidding for a contract or when determining a project's feasibility in terms of cost-benefit analysis. In addition, they allow project managers a...

Metaheuristic techniques such as genetic algorithms, simulated annealing and tabu search have found wide application in most areas of engineering. These techniques have also been applied in business, financial and economic modelling. Metaheuristics have been applied to three areas of software engineering: test data generation, module clustering and cost/effort prediction, yet there remain many sof...

Provides the software estimation research community with a better understanding of the meaning of, and relationship between, two statistics that are often used to assess the accuracy of predictive models: the mean magnitude of relative error (MMRE) and the number of predictions within 25% of the actual value, pred(25). It is demonstrated that MMRE and pred(25) are, respectively, measures of the spread and ...
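As a minimal sketch of the two statistics (function and variable names are mine): for actual values a_i and predictions p_i, MMRE is the mean of |a_i - p_i| / a_i, and pred(25) is the fraction of cases where that relative error is at most 0.25:

```python
def mmre(actuals, predictions):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

def pred(actuals, predictions, level=0.25):
    """Fraction of predictions within `level` (default 25%) of the actual."""
    hits = sum(1 for a, p in zip(actuals, predictions)
               if abs(a - p) / a <= level)
    return hits / len(actuals)

# toy data: relative errors are 0.1, 0.3 and 0.025
actuals = [100.0, 200.0, 400.0]
predictions = [110.0, 140.0, 410.0]
print(round(mmre(actuals, predictions), 3))  # 0.142
print(round(pred(actuals, predictions), 3))  # 0.667
```

The toy data shows why the pair is informative: one large relative error inflates MMRE (spread) while pred(25) still reports that most predictions were close.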

Test-first programming is one of the central techniques of extreme programming. Programming test-first means (i) write down a test-case before coding and (ii) make all the tests executable for regression testing. Thus far, knowledge about test-first programming is limited to experience reports. Nothing is known about the benefits of test-first compared to traditional programming (design, implement...

Contributes to the identification and testing of factors important for the success of open source software (OSS) projects. We present an analysis of OSS communities as virtual organisations, and apply Katzy and Crowston's (2000) competency rallying (CR) theory to the case of OSS development projects. CR theory suggests that project participants must develop necessary competencies, identify and und...

Projects with geographically distributed developers and high personnel turnover are usually considered hard to manage. Any organisation that successfully handles such projects merits closer analysis so that lessons can be learned and good practice disseminated. Open source software projects represent such a case. One important factor is good configuration management practices. The ...

Checkpointing is a well-known mechanism for achieving fault tolerance. In distributed applications where processes can checkpoint independently of each other, a local checkpoint is useful for fault tolerance purposes only if it belongs to at least one consistent global checkpoint. In this case, execution can be restarted from it without needing to roll back the execution into the past. The paper ex...

A study of the problems experienced by twelve software companies in their requirements process is discussed. The aim of the work is to develop a more holistic understanding of the requirements process, so that companies can more effectively organise and manage requirements. The findings suggest that most requirements problems are organisational rather than technical, and that there is a relationsh...

The authors consider Java's claim to be a safe and reliable language. First, Java and the aims of the language are introduced and its relationship with C++ is briefly considered. The results of analysing a selection of the software bugs, limitations, weaknesses and flaws that have been found in Java (generically described as Java related defects) are then presented. This analysis is based on repor...

The high availability of handheld devices with Bluetooth technology allows users to be located with sufficient precision for offering context-sensitive services. The authors present the design and implementation of a scalable architecture for indoor positioning based on Bluetooth sensors. Distance from sensors is estimated using inquiry reports issued at different cyclic power levels, while a cent...

In a commercial system, performing large, invasive changes, such as changing an entire infrastructure layer, is often impossible because development on a system cannot be halted for any significant amount of time. Performing such changes requires tools that can perform mass transformations on the source code. One approach is to use tools that can do a global search and replace on the entire program s...
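The blunt search-and-replace approach mentioned above can be sketched as a purely textual tool (the file suffix, names and example regex are illustrative; more robust mass transformations would work on the syntax tree rather than raw text):

```python
import re
from pathlib import Path

def mass_rename(root, pattern, replacement, suffix=".java"):
    """Apply a global regex search-and-replace across every source file
    under root, returning the number of files that were changed."""
    regex = re.compile(pattern)
    changed = 0
    for path in Path(root).rglob("*" + suffix):
        text = path.read_text()
        new_text = regex.sub(replacement, text)
        if new_text != text:
            path.write_text(new_text)  # rewrite only files that matched
            changed += 1
    return changed
```

For example, `mass_rename("src", r"old\.Logger", "new.Logger")` would migrate a logging API across a tree of sources, which also exposes the approach's weakness: a regex cannot distinguish code from comments or string literals.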

The emergence of theoretical frameworks that detail how systems developers should evaluate the methodologies they use has served to highlight the ever-present gap between what academics say and what practitioners do. The paper draws on the experience and comments of practising information systems developers, exposes the gap, and then proposes how it may be bridged.

In order to have a software architecture design method that achieves quality attribute requirements, several aspects of the method must be in place. First, there must be some way to specify quality attribute requirements so that it can be determined whether the designed architecture can achieve them. Second, there must be some way of modularising the knowledge associated with quality attributes s...

The difficulties that engineers have in understanding and applying quantitative methods in an abstract requirements phase are major obstructions to using formal methods for hybrid real-time safety systems. While formal methods technology in safety-critical systems can help increase confidence in software, the difficulty and complexity of using them can create another hazard. The authors have pr...

Legacy systems pose major problems for industry: existing software seems so difficult and expensive to change quickly to keep up with the needs of business. The authors first summarise the general problems of modifying existing software, termed software maintenance, and then address the problems of legacy systems. They show that one of the major difficulties is trying to decide rationally among v...

Software plays an ever increasing role in the critical infrastructures that run our cities, manage our economies, and defend our nations. In 1999, the President's Information Technology Advisory Committee (PITAC) reported to the United States President on the need for software components that are reliable, tested, modelled and secure, supporting the development of predictably reliable and secure system...

Many tools have been constructed using different formal methods to process various parts of a language specification (e.g. scanner generators, parser generators and compiler generators). The automatic generation of a complete compiler was the primary goal of such systems, but researchers recognised the possibility that many other language-based tools could be generated from formal language specifi...

A new and general stochastic process algebra with block structure, called PEPA_BS, is introduced; it is a generalisation of PEPA_ph^∞. In PEPA_BS, activity durations are allowed to be generally distributed, and the corresponding transition probability matrix has a block-partitioned structure. Specifically, PEPA_BS is suitable for de...

A graphical modelling language for specifying concurrency in software designs is presented. The language notations are derived from the communicating sequential process (CSP) language and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects of concurrent software architectures. These designs can automatically be described by CSP algebraic expressio...

The author looks at trends in software and systems, and the current and likely implications of these trends on the discipline of performance engineering. In particular, he examines software complexity growth and its consequences for performance engineering for enhanced understanding, more efficient analysis and effective performance improvement. The pressures for adaptive and autonomous systems in...

Software systems are expected to change over their lifetime in order to remain useful. Understanding a software system that has undergone changes is often difficult owing to the unavailability of up-to-date documentation. Under these circumstances, source code is the only reliable means of information regarding the system. In the paper, association rule mining is applied to the problem of software...
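As an illustrative sketch of association rule mining in a software setting (the framing of a transaction as the set of files modified together in one commit is my assumption about a typical application, not necessarily this paper's setup):

```python
from itertools import combinations
from collections import Counter

def mine_pairs(transactions, min_support=0.5, min_confidence=0.7):
    """Mine pairwise association rules X -> Y from transactions
    (e.g. sets of files changed together), filtered by the usual
    support and confidence thresholds."""
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        for item in t:
            item_count[item] += 1
        for a, b in combinations(sorted(t), 2):
            pair_count[(a, b)] += 1
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:          # pair too rare overall
            continue
        for x, y in ((a, b), (b, a)):
            conf = c / item_count[x]     # P(y in t | x in t)
            if conf >= min_confidence:
                rules.append((x, y, c / n, conf))
    return rules

commits = [{"parser.c", "lexer.c"}, {"parser.c", "lexer.c", "main.c"},
           {"parser.c", "lexer.c"}, {"main.c"}]
for rule in mine_pairs(commits):
    print(rule)
```

A rule such as parser.c -> lexer.c with high confidence suggests the two files evolve together, which is the kind of structural hint such mining can surface without up-to-date documentation.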

With the rapid development of the Internet, the control of traffic congestion has become one of the most critical issues that must be confronted by the users. It is also a major challenge to researchers in the field of performance modelling. The paper presents a discrete-time stochastic queueing model for the performance evaluation of the active queue management (AQM) based congestion control mech...

The Distributed Control Lab (DCL) provides an open infrastructure for conducting robotics and control experiments over the Internet. It is based on Web services technologies and offers a wide range of frontend applications. Within the DCL environment work is focused on safety strategies and mechanisms in order to prevent malicious code from damaging experimental equipment. These include source cod...

The performance of Open Shortest Path First (OSPF) routing protocol software is presented, including measuring its performance, analysing the results, proposing solutions for improvement and evaluating their effect. First, a reusable framework for evaluating the performance of routing software is proposed, which allows reproducible experiments to be performed in a controlled environment with d...

Problem frames provide a characterisation and classification of software development problems. Fitting a problem to an appropriate problem frame should not only help to understand it, but also to solve the problem (the idea being that, once the adequate problem frame is identified, then the associated development method should be available). The authors propose software architectural patterns corr...

All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The number of calculations in these algorithms is generally proportional to population size. Classical genetic algorithms (GAs), for instance, require the calculation of the fitness values of every individual in the population. A new evolutionary algorithm that ...
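The per-generation cost the abstract refers to can be seen in a minimal classical GA on the one-max problem (all parameters and operator choices below are illustrative):

```python
import random

def evolve_onemax(pop_size=20, length=16, generations=30, seed=0):
    """Minimal classical GA maximising the number of 1-bits in a string.
    Note that every generation evaluates the fitness of every individual,
    so the calculation count is proportional to pop_size."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=sum, reverse=True)  # pop_size fitness evaluations
        parents = scored[: pop_size // 2]            # truncation selection
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(length)] ^= 1    # bit-flip mutation
            pop.append(child)
    return max(sum(ind) for ind in pop)

print(evolve_onemax())  # best fitness found (at most 16)
```

With pop_size fitness evaluations per generation, reducing that count is the natural target for the kind of new algorithm the abstract introduces.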

There has been considerable work in industry on the development of component-interoperability models, such as COM, CORBA and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, intercomponent communication, and bindings to a run...

In a world where a person's number of choices can be overwhelming, recommender systems help users find and evaluate items of interest. They do so by connecting users with information regarding the content of recommended items or the opinions of other individuals. Such systems have become powerful tools in domains such as electronic commerce, digital libraries and knowledge management. The authors ...

The question of whether inspection meetings justify their cost has been discussed in several studies. However, it is still open as to how modern defect detection techniques and team size influence meeting performance, particularly with respect to different classes of defect severity. The influence of software inspection process parameters (defect detection technique, team size, meeting effort) on ...

Abstract syntax trees are a very common data structure in language related tools. For example, compilers, interpreters, documentation generators and syntax-directed editors use them extensively to extract, transform, store and produce information that is key to their functionality. The authors present a Java back-end for ApiGen, a tool that generates implementations of abstract syntax trees. The g...

The architecture and performance of a Java implementation of a structured distributed shared memory system, PastSet, is described. The PastSet abstraction allows programmers to write applications that run efficiently on different architectures, from clusters to widely distributed systems. PastSet is a tuple-based three-dimensional structured distributed shared memory system, which provides the pro...

Rapid application development (RAD) is a contingent information systems development method that has arisen in response to business and development uncertainty in the commercial information systems engineering domain. Participatory design (PD) is a style of software development that has achieved some significance within a number of academic research areas and can be seen as a response to dissatisfa...

Distributed algorithms and the heuristics used by program derivation methods represent a large repository of fundamental knowledge that has been acquired over the years by the distributed computing community. Attempts to make this body of knowledge available to the broader community have been frustrating to say the least. The main thesis of the paper is that plastic transformations (a specialisati...

It is argued that computer science and software engineering should be regarded as separate disciplines, and that the long term success of academic computer science departments will only be secured by adopting a stronger engineering stance in research, enhanced by closer links with industry. A number of ways are suggested to revitalise the software engineering community and forge stronger links wit...

Most object-oriented coupling measures proposed in the literature deal with coupling at the static class level. Measuring dynamic object coupling however gives potential for greater insight into system structure and comparison of the architectural aspects of different systems. In previous work, a dynamic coupling metric (DCM) was introduced and validated theoretically. The authors investigate an e...

Telecommunications networks have migrated from circuit-based telephony services to packet-based broadband network services. Merging with computer networks, they now integrate non-real-time data services, as on the classical Internet, with multimedia services, including real-time voice and video, on the new-generation Internet. Thus, the concepts and requirements of quality of ser...

A primary objective of the DATUM (Dependability Assessment of safety critical systems Through the Unification of Measurable evidence) project was to improve the way dependability of software intensive safety-critical systems was assessed. The authors' hypothesis was that improvements were possible if multiple types of evidence could be incorporated. To achieve the objective, the authors had to inv...

The use of performance analysis and prediction techniques by software designers and software engineers is at best inconsistent and at worst simply does not happen. This is principally because these techniques are seen as separate and difficult to apply. Work on software performance engineering, initiated by C.U. Smith (1990), has sought to bridge the gap but has had limited success. With the emerg...