Large data centers host several application environments (AEs) that are subject to workloads whose intensity varies widely and unpredictably. Therefore, the servers of the data center may need to be dynamically redeployed among the various AEs in order to optimize some global utility function. Previous approaches to solving this problem suffer from…
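
The redeployment problem described here can be illustrated with a small sketch. Assuming each AE's utility is a concave (diminishing-returns) function of its server count, a simple greedy assignment of identical servers maximizes the global utility; the function names and utility shapes below are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: greedily assign identical servers to application
# environments (AEs) to maximize a global (sum-of-utilities) objective.
# With concave per-AE utilities, this greedy choice is optimal.

def allocate_servers(num_servers, utilities):
    """utilities: list of functions u_i(n) -> utility of AE i with n servers."""
    alloc = [0] * len(utilities)
    for _ in range(num_servers):
        # Give the next server to the AE with the largest marginal gain.
        gains = [u(alloc[i] + 1) - u(alloc[i]) for i, u in enumerate(utilities)]
        best = max(range(len(utilities)), key=lambda i: gains[i])
        alloc[best] += 1
    return alloc
```

For example, with two AEs whose utilities are `sqrt(n)` and `2*sqrt(n)`, most servers flow to the second, higher-value AE.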

Reinforcement Learning (RL) provides a promising new approach to systems performance management that differs radically from standard queuing-theoretic approaches making use of explicit system performance models. In principle, RL can automatically learn high-quality management policies without an explicit performance model or traffic model, and with little…
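
To make the RL idea concrete, here is a toy tabular Q-learning sketch. The states (workload levels), actions (server counts), reward shape, and transition model are all invented for this illustration and are not the paper's actual formulation.

```python
import random

# Illustrative sketch only: tabular Q-learning learning an allocation policy
# directly from observed rewards, with no explicit performance model.

def q_learning(reward, states, actions, episodes=2000, alpha=0.1,
               gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    s = rng.choice(states)
    for _ in range(episodes):
        # epsilon-greedy behavior: explore with probability epsilon
        if rng.random() < epsilon:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda a: q[(s, a)])
        r = reward(s, a)
        s_next = rng.choice(states)  # toy model: workload changes randomly
        best_next = max(q[(s_next, a2)] for a2 in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next
    # derived greedy policy: state -> best learned action
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
```

With a reward that penalizes mismatch between demand and allocated servers, the learned policy converges to matching allocation to demand.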

Virtualization was invented more than thirty years ago to allow large expensive mainframes to be easily shared among different application environments. As hardware prices went down, the need for virtualization faded away. More recently, virtualization at all levels (system, storage, and network) became important again as a way to improve system security,…

Computer systems are becoming extremely complex. Complexity stems from the large number and heterogeneity of a system's hardware and software components, from the multi-layered architecture used in the system's design, and from the unpredictable nature of the workloads, especially in Web-based systems. For these reasons, performance management of…

Current computing environments are becoming increasingly complex in nature and exhibit unpredictable workloads. These environments create challenges to the design of systems that can adapt to changes in the workload while maintaining desired QoS levels. This paper focuses on the use of online analytic performance models in the design of self-managing and…
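
An online analytic model can drive self-management decisions as in the minimal sketch below, which assumes each server behaves as an independent M/M/1 queue with response time R = 1/(mu - lam) for lam < mu. This queueing assumption and the function name are simplifications for the example, not necessarily the paper's model.

```python
# Minimal sketch: a controller uses an analytic queueing model to pick the
# smallest number of servers that meets a response-time SLA.
# M/M/1 response time: R = 1 / (mu - lam), valid only when lam < mu.

def servers_needed(lam, mu, sla, max_servers=64):
    """Smallest n whose predicted response time 1/(mu - lam/n) meets sla."""
    for n in range(1, max_servers + 1):
        per_server = lam / n  # load balanced evenly across n servers
        if per_server < mu and 1.0 / (mu - per_server) <= sla:
            return n
    return None  # SLA unreachable within max_servers
```

For instance, at arrival rate 90 req/s and service rate 100 req/s per server, a 0.05 s target requires two servers, while a 0.5 s target is met by one.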

Computer systems are becoming extremely complex due to the large number and heterogeneity of their hardware and software components, the multi-layered architecture used in their design, and the unpredictable nature of their workloads. Thus, performance management becomes difficult and expensive when carried out by human beings. A new approach, called…

Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world's oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day.…

Modern computer systems are based on a wide variety of software servers, such as web servers, application servers, database servers, and mail servers. The typical software architecture of such servers includes a set of processes or threads that serve requests submitted to the server. Requests that arrive at the server and find all threads busy are placed…
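
The architecture described above — a fixed pool of worker threads serving requests, with arrivals queued when all threads are busy — can be sketched as follows. Class and method names are invented for the example.

```python
import queue
import threading

# Illustrative sketch: a server with a fixed thread pool and a bounded
# request queue. Requests arriving when all workers are busy wait in the
# queue (and submit() blocks once the queue itself is full).

class ThreadPoolServer:
    def __init__(self, num_threads, queue_size):
        self.requests = queue.Queue(maxsize=queue_size)
        self.results = []
        self.lock = threading.Lock()
        self.workers = [threading.Thread(target=self._serve, daemon=True)
                        for _ in range(num_threads)]
        for w in self.workers:
            w.start()

    def submit(self, request):
        """Enqueue a request; blocks while the bounded queue is full."""
        self.requests.put(request)

    def _serve(self):
        while True:
            request = self.requests.get()
            with self.lock:
                self.results.append(request * 2)  # stand-in for real work
            self.requests.task_done()
```

A real server would reject or time out requests when the queue is full rather than block the caller indefinitely; blocking keeps the sketch short.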

In this paper, we suggest a requirements engineering process for real-time systems that yields a formal specification of the system using timed Petri nets. Scenarios are acquired in the form of sequence diagrams as defined by the Unified Modeling Language (UML) and are enriched with time-constraint information. These diagrams are transformed into partial timed…
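
As a hypothetical illustration of the target formalism, the following encodes a timed Petri net in a few lines: places hold token counts, and a transition consumes input tokens, advances a clock by its delay, and produces output tokens. The single-delay firing rule and all names are simplifications for the example, not the paper's construction.

```python
# Minimal sketch of a timed Petri net: a marking (place -> token count),
# a global clock, and transitions fired explicitly with a time delay.

class TimedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.clock = 0.0

    def enabled(self, inputs):
        """A transition is enabled when every input place has enough tokens."""
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, inputs, outputs, delay):
        """Consume input tokens, advance time by delay, produce output tokens."""
        if not self.enabled(inputs):
            raise ValueError("transition not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        self.clock += delay
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
```

A message in a time-annotated sequence diagram would map to one such transition: a token moves from a "message sent" place to a "message received" place after the annotated delay.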