
Adaptive Solutions to Resource Provisioning and Task Allocation Problems for Cloud Computing

by

Ronald J. Desmarais
B.S.Eng., University of Victoria, 2006

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

in the Department of Computer Science

© Ronald J. Desmarais, 2013
University of Victoria

All rights reserved. This dissertation may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.

ABSTRACT

With the emergence of the cloud computing paradigm, we can now provide dynamic resource provisioning at unprecedented levels of scalability. This flexibility constitutes a rich environment for researchers to experiment with new designs. Such experimental designs can take advantage of adaptation, controllability, self-configuration, and scheduling techniques to provide improved resource utilization while achieving service level agreements. This dissertation uses control and scheduling theories to develop new designs that improve resource utilization and service level agreement satisfaction. We optimize resource provisioning using the cutting stock problem formulation and control theory within feedback frameworks. We introduce a model-based method of control that manipulates the scheduling problem's formulation model to achieve desired results. We also present a control-based method using Kalman filters for admission control. Finally, we present two case studies: the Yakkit social media application and the Rigi Cloud testbed for deploying virtual machine experiments. The results of our investigations demonstrate that our approaches and techniques can optimize resource utilization, decrease service level agreement violations, and provide scheduling guarantees.

ACKNOWLEDGEMENTS

I would like to thank:

Hausi I wish to thank my advisor for mentoring, support, encouragement, and patience, but most of all for his vision and friendship. Well done Hausi!

Randy I wish to thank my first boss, who opened the world of grid and cloud computing to me as both a co-op student and an employee. Thank you Randy!

NSERC and IBM CAS My first real experience with industry was through IBM's Center for Advanced Studies. I would like to thank Marin Litoiu and Kelly Lyons for their advice and mentorship. Finally, I wish to thank the Canadian NSERC program for providing financial support.

Rigi Group I wish to thank the members of the Rigi Group: Przemek Lach for our endless arguments, Andreas Bergen and Pratik Jain for the memories, Nina Taherimaksousi for our discussions on what is a novel contribution, Lorena Castaneda for her practicality, Norah Villegas for raising the standard, Sowmya Balasubramanian for proving that you can be a mother, wife, and grad student all at the same time, Marcus Csaky for being cool and working at IBM, Ishita Jain for her questions, which I felt honored to answer, Priyanka Agrawal for being smart like Ishita, Atousa for her dedication, Dylan for teaching the kids it's OK to eat dirt, Scott Brousseau for asking the advice of a Killick, Quan Yang for her great vision like Hausi, Sweta for the good old days, Piotr for teaching and being the best compute dude I know, Alexey for Spasiba, Jochen for being Jochen and that really cool 3-D modeling simulation thing, and lastly Qin Zhu for always liking the stuff I was building and giving me a hand for testing it out!

HEP NET Group I would also like to mention the members of Dr. Randall Sobie's grid research group, specifically Ian Gable, the network manager, Patrick Armstrong, Frank Berghaus, and Dan Vanderster for accelerating my interest in using simulation.
CSC Staff I would like to mention the hard work of our graduate secretary Wendy, my techno buddy Tomas Bednar, and Brian Douglas.

"do or do not there is no try"
Master Jedi Yoda to Luke Skywalker, Star Wars: The Empire Strikes Back

DEDICATION

I wish to dedicate this dissertation to Valerie MacNeil, Bhreagh MacNeil-Desmarais, Linda Desmarais, and Rene Desmarais for their support during my time as a graduate student! As well, I would like to thank Bixby C. At and his two sisters Bijou and Paully....

Chapter 1

Introduction

1.1 Motivation

With the emergence of the cloud computing paradigm, we can now provide dynamic resource provisioning at unprecedented levels of scalability. This paradigm provides a rich environment for researchers to experiment with new designs. These experimental designs can take advantage of adaptation, controllability, self-configuration, and scheduling techniques to provide improved resource utilization for cloud providers, while achieving service level agreements (SLAs) for cloud users. This dissertation uses control and scheduling theories within a feedback framework to develop new designs for improving resource utilization and SLA satisfaction while taking advantage of the cloud's dynamic nature. For example, we use a scheduling formulation within a feedback framework in a simulation environment to evaluate several provisioning scenarios. The results indicate that these techniques can improve resource utilization and decrease service level agreement violations. The cloud provides users with access to large amounts of computational resources, bandwidth, and storage [RCL09]. It is replacing older computing paradigms such as grid and cluster computing. Its success is due to the bridging of virtualization software with dynamic on-demand provisioning systems (e.g., Amazon's EC2, Microsoft's Azure, Google's App Engine, IBM's Smart Cloud, or Apple's iCloud), which take advantage of virtual infrastructure to deploy services and applications. The term "cloud" in this dissertation most often refers to infrastructure as a service

(IaaS) and occasionally to software as a service (SaaS). These two layers of the cloud paradigm are interesting from a research perspective because they require solutions for runtime adaptation. Cloud services manage operating system environments and application platforms, and install and negotiate access to hosted applications. These pay-for-use services consume resources that are purchased like any other utility [BYV+09]. This provides opportunities to explore new designs that take advantage of a pay-as-you-go utility. These designs use dynamic theories, including control theory and feedback models combined with scheduling theories, to improve resource utilization, maintain service level agreements, and minimize costs [BDG+13] [AAB+07] [HDPT04] [AAB+10] [HDPT05] [ADG+06] [AAC+10] [DDKM08] [DM07] [FAA+10] [LNM+12] [PSS+12] [DMK+13] [KAB+12] [GB12] [MK12] [NCS12] [TMMVL12] [GSLI12] [WKGB12] [Mur03]. The foundations of cloud computing emanate from distributed computing, virtualization, and economics. Distributed computing system topologies can be categorized as hierarchical (Grid Computing), centralized, or decentralized (Peer to Peer) [DNB05]. The objective of distributed systems is to provide aggregation of distributed resources for access by users; however, they differ in how they achieve accessibility. Virtualization, in this dissertation, refers to the management of hardware access by multiple competing operating systems on the same machine. These operating environments run in a sandbox isolated from other operating environments on the same machine (e.g., VMware or Xen). Lastly, economic considerations form the basis for computing as a utility. Buyya et al. refer to cloud computing as the fifth utility next to water, electricity, gas, and telephony [BYV+09].

1.2 Problem Statement

The combination of distributed computing, virtualization, and economics has dramatically changed the computing landscape.
Cloud computing makes it possible for software systems to change themselves dynamically to match user load while maintaining service level objectives and improving resource availability in a cost-effective manner. These improvements relate to increased system utilization and user satisfaction, which are two of the primary benefits of cloud computing. The key research questions addressed in this dissertation are (1) how do we take advantage of distributed computing and virtualization technologies, and (2) how do we design them together to achieve desired user and system objectives? There are many different computing architectures in which distributed computing and virtualization may be combined, and many different scheduling, control, and management options available to control how users access their applications hosted on the cloud. This is where the wealth of research opportunities in cloud computing exists. There are many interesting cloud computing challenges, as enumerated below. These challenges drive the research in this dissertation.

1. What are good characterizations of the cloud computing paradigm?

2. Where does scheduling apply within a cloud computing paradigm? How do scheduling techniques at different levels of cloud computing affect system performance, service requirements, and user experience?

3. What are the relative merits of employing control system theory in cloud computing? How does control theory apply?

4. How can we support different types of users and applications in a fair manner? How do high performance computing applications and users differ from social web applications and users?

5. How can autonomic and feedback models be useful in cloud computing? How is this different from control theory when applied on the cloud?

6. Can we provide a taxonomy and characterization of what, when, where, and how to use scheduling with control and autonomic theory to manage cloud systems?

In this dissertation, we focus on characterizing cloud scheduling applications based on control and autonomic theories.
We investigate the use of the cutting stock problem as applied to resource provisioning and task allocation, investigate model-driven control to affect scheduling results, and investigate how control and autonomic

theories can be combined within a feedback framework. This is difficult since cloud architectures are layered: in practice, each layer does not allow other layers access to lower-layer states. This presents significant challenges for developing accurate layered models to be used by schedulers and controllers.

1.3 The Approach

In this dissertation we propose several hybrid designs that combine control and scheduling theories within a feedback framework. Concepts from control theory (i.e., proportional-integral-derivative (PID) controllers and Kalman filters) are integrated with approaches from scheduling theory (i.e., scheduling models such as cutting stock). The feedback framework allows schedulers and controllers to work together. For these designs to work well, the systems need to be configured at runtime. This necessitates research into adaptive and self-adaptive control models. Adaptive controls have the ability to adapt to changes in the computing environment (i.e., virtual machine migration, changes in provisioning, or changes in user workload). Self-adaptive controls extend adaptive control solutions by changing the solutions that adaptive controllers use. Several adaptive and self-adaptive models are explored, including Model Identification Adaptive Controllers (MIAC) and Model Reference Adaptive Controllers (MRAC) [Rav00]. The approach employs a scheduling system designed to segment available resources to match resource requests. This technique can be used to provision virtual machines with physical resources (i.e., processor, memory, and bandwidth). This approach uses the cloud's ability to reconfigure dynamically as user loads change. If a cloud provider is in danger of violating its SLAs by over-provisioning, the cloud can offload selected user requests to a third party (overflow) at additional cost. The objective is to minimize third-party usage in order to maximize profit.
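The provisioning idea described above, segmenting local resources to satisfy virtual machine requests and overflowing the remainder to a third party, can be illustrated with a simplified one-dimensional packing heuristic. The dissertation's actual formulation is the cutting stock problem; the first-fit-decreasing sketch below is only a hedged stand-in for intuition, and all names, capacities, and request sizes are hypothetical.

```python
# Simplified stand-in for the cutting-stock provisioning idea:
# pack VM CPU-core requests onto fixed-capacity local hosts
# (first-fit decreasing); requests that do not fit are "overflowed"
# to a third-party provider at extra cost. All values are hypothetical.

def provision(requests, host_capacity, num_hosts):
    hosts = [host_capacity] * num_hosts   # remaining cores per local host
    placement, overflow = {}, []
    for vm, cores in sorted(requests.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if cores <= free:
                hosts[i] -= cores
                placement[vm] = i          # host locally
                break
        else:
            overflow.append(vm)            # send to third party
    return placement, overflow

requests = {"vm1": 8, "vm2": 4, "vm3": 4, "vm4": 2, "vm5": 6}
placement, overflow = provision(requests, host_capacity=8, num_hosts=2)
# → placement {"vm1": 0, "vm5": 1, "vm4": 1}, overflow ["vm2", "vm3"]
```

Minimizing the cost of the overflow list is the profit objective; the cutting stock formulation solves this packing far more carefully than the greedy heuristic shown here.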
The cutting stock formulation models this problem, and simulation results show improved profits when it is employed. Another scheduling-based approach is model-driven scheduling. With this approach, the scheduling formulation model is modified so that the scheduling algorithms can provide minimum guarantees for scheduling solutions. This approach uses proven mathematical theories to ensure scheduling results, but it requires the system to have the ability to modify the system workload and system resources. This

is necessary to ensure that the problem formulation has a specific structure. For example, it may be necessary for the formulation to be structured as a matroid to ensure scheduling results are at least half of optimal, where optimal is the absolute best achievable schedule. For instance, if an optimal schedule for a set of jobs executes in one hour, our method guarantees a solution that will not exceed two hours. Another approach uses a control system design based on Kalman filters. The Kalman filter has a predictive property which can be useful when making load balancing decisions. In this approach, wherein we devise and implement a scenario using the OMNET network simulation environment, the filter is used to predict application performance. If the filter predicts that the application will be overloaded, user requests are offloaded to a third party at additional cost. The objective is to minimize the frequency of application service violations while minimizing costs. This approach proves useful when the application service is approaching violation (i.e., the user request rate is equal to the application service rate). Another approach involves two case study designs. The first case study is a social tool called Yakkit, designed to take advantage of a distributed set of clouds using our own application overlay called icon. The application data structure is distributed over a cloud or set of clouds, and the overlay facilitates distributed searches and message passing between users [DLM11]. The second case study is the design and implementation of a cloud testing framework called Rigi Cloud. The design employs approaches described in this dissertation to manage cloud provisioning using a batch scheduling system called Torque. The approaches discussed are dependent on the cloud architecture used to host user applications. In this dissertation, we use an economics-based cloud architectural model to deploy and evaluate our designs.
These designs can be used at any layer of the cloud model (e.g., Infrastructure, Platform, and Software). They can be used within the cloud, or used to aggregate several cloud providers as a global pool of resources. However, cloud providers (e.g., Amazon's EC2) do not allow access to their state. This presents significant challenges and necessitates models at runtime.
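The Kalman-filter admission controller described above can be sketched with a scalar filter that smooths a noisy measured request rate and offloads when the estimate exceeds the service rate. This is a hedged illustration only: the dissertation's OMNET-based design is more elaborate, and the random-walk state model, noise values, and threshold below are assumptions made for the sketch.

```python
# Minimal scalar Kalman filter tracking an application's request rate.
# If the filtered estimate exceeds the service rate, new requests would
# be offloaded to a third party. q (process noise) and r (measurement
# noise) are hypothetical tuning values.

class ScalarKalman:
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # predict (random-walk model inflates covariance only),
        # then correct with measurement z
        p_pred = self.p + self.q
        k = p_pred / (p_pred + self.r)      # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1 - k) * p_pred
        return self.x

def admission_decisions(measured_rates, service_rate):
    kf = ScalarKalman(x0=measured_rates[0])
    return ["offload" if kf.update(z) > service_rate else "admit"
            for z in measured_rates]

decisions = admission_decisions([10, 10, 11, 30, 32, 31], service_rate=20)
# → ["admit", "admit", "admit", "admit", "offload", "offload"]
```

Because the gain shrinks as the covariance settles, a single spike does not immediately trigger offloading; the estimate must persistently approach the service rate, matching the intent of acting when the request rate nears capacity.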

1.4 Dissertation Overview

Chapter 2 describes current research on cloud computing and associated technologies, including computing architectures [FK03], a federated cloud compute paradigm [BYV+09] [AFG+10], grid scheduling and economics [Buy02] [Ran07] [Van08], control systems [Oga87] [HDPT04] [Xu07], autonomic models [Mül06] [IBM06] [SILI10], and peer-to-peer computing. Research is summarized along several dimensions using a taxonomic approach.

Chapter 3 maps the dimensions of the cloud research domain and classifies relevant literature using several of the taxonomies developed in Chapter 2. We use the cloud paradigm developed by Buyya et al. [BYV+09] and Fox et al. [AFG+10] within a feedback framework developed by IBM [IBM06], along with a taxonomy on cloud computing, to categorize the literature. Finally, this chapter details important aspects of our selected research areas.

Chapter 4 introduces the cutting stock problem and maps cloud resource provisioning problems to the cutting stock problem. Results are presented along with a taxonomy on scheduling within the cloud. The model has been adapted and published as a case study for scheduling to a distributed set of clouds [BDM+11].

Chapter 5 uses a model-driven approach to provide scheduling guarantees. The problem formulation is manipulated so it has specific structural properties. Problem formulations with specific structural properties provide guarantees with respect to solution quality. In this case, we use the greedy algorithm to solve the scheduling problems [BDM+14].

Chapter 6 presents experiments and work on feedback and utility as applied to computing systems. A taxonomy is presented which categorizes feedback with utility, along with a description of the PID controller, autonomic controllers, and Kalman filters. This work includes the simulation of a Kalman filter for service admission control.
Service accessibility is a valid control mechanism which is useful in cloud computing and therefore ought to be studied. This work further describes how feedback and utility can be applied with scheduling and where it is applicable in the cloud paradigm.

Chapter 7 presents Yakkit, a case study that investigates the use of an inter-cloud overlay network (icon). The overlay provides support for a distributed data structure and an interface for searching and message passing algorithms. This work investigates how an overlay could potentially make inter-cloud communication ubiquitous. This work was published in the 2011 CASCON Proceedings [DLM11].

Chapter 8 presents Rigi Cloud, a testbed for cloud experiments and for exploring scheduling/control scenarios. A scheduling and testing framework is presented to boot and execute virtual infrastructure for experiments. This work extended previous work on providing an autonomic grid management system, which was published in Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2007) [DM07]. We demonstrated this work at CASCON (2010) as a cloud cluster queue in which we match virtual worker-node resources dynamically to the user workload queued on the head-node.

Chapter 9 summarizes the dissertation and our contributions. We also outline avenues for future research in this domain.

Chapter 2

Related Work

2.1 Introduction

This chapter characterizes and reviews related research using several cloud taxonomies based on closely related computing paradigms, as described by Buyya [BYV+09] and Fox [AFG+10], and IBM's Autonomic Computing Reference Architecture (ACRA) [IBM06]. This chapter describes our view of the cloud paradigm. We explore it using several proposed taxonomies to aid in the characterization of related research.

2.2 A View of the Cloud Computing Paradigm

The cloud computing paradigm, described by Buyya [BYV+09] and Fox [AFG+10], provides a description of cloud computing artifacts and architectural perspectives. This section presents our view of these cloud models within a control and feedback framework. Our view of the cloud computing paradigm is derived from Buyya's and Fox's rendition as depicted in Figure 2.1 [CRB+11]. Buyya's cloud provider paradigm is based on a brokerage system in which a brokerage is used to buy and sell cloud services for users. Users of the system specify to their brokers which services they want and how much they are willing to pay. It is consistent with the view of cloud computing described by Fox [AFG+10]. Here, the cloud architecture is partitioned into three layers: the cloud provider layer; the software as a service (SaaS) layer;

Figure 2.1: An economics-based federated cloud compute model comprising Buyya's [CRB+11] and Fox's [AFG+10] models.

and the application layer. These three layers provide an architectural perspective to Buyya's model. In addition to the three layers, Buyya describes the concept of a cloud exchange, which is used by cloud providers and SaaS Providers to publish and bid on resources. This concept is supported by Fox [AFG+10] in his description of cloud computing economics. The cloud exchange facilitates the federation of many cloud providers, allowing SaaS Providers to select cloud provider resources based on cost. This dissertation utilizes the following list of cloud computing artifacts as used in Buyya's economics-based cloud model.

Cloud Provider The categories of cloud provider described herein are compute and storage clouds. The following is a list of the cloud provider artifacts:

Cloud Coordinator The cloud coordinator executes on each cloud provider. The coordinator is responsible for the following:

Export (Publish) The cloud publishes the services it provides to the cloud exchange. These services include infrastructure services (to boot virtual environments and network infrastructure) and platform services (e.g., to start an Apache web server).

Monitor Resources The coordinator needs to monitor the physical load being executed to ensure that service level objectives (SLOs) are being achieved (e.g., ensuring service utilization does not exceed 75%, or that virtual machines are load balanced). In a cloud provisioning system, the required monitoring information may not be available. Having the ability to specify monitoring requirements in a distributed cloud provisioning system is an area where research could be beneficial.

Monitor Applications The coordinator needs to ensure applications are properly load balanced and are achieving their users' Quality of Service (QoS) requirements as specified in the SLAs. Scheduling for load balancing is difficult in a cloud provisioning system, due to a lack of both control of localized scheduling and good monitoring. Scheduling for schedulers, or meta-scheduling, is an interesting research area.

Cloud Resource There are two types of cloud resources: compute and storage. Compute resources refer to CPU utilization and how many Millions of Instructions Per Second (MIPS) the user is allocated for their tasks. Storage resources refer to disk, tape, or memory with regard to storage of accessible content (e.g., video, audio, text, application objects).

SaaS Provider The objective of the SaaS Provider is to ensure its users have access to cloud-hosted applications in accordance with the users' Quality of Service requirements (which are agreed upon in the form of Service Level Agreements).
The SaaS Providers are responsible for ensuring that their clients are getting good value (i.e., cost and QoS) when using their cloud-hosted applications. They are also responsible for bidding on resources published by cloud providers. If their bid is successful, the applications can be dynamically booted and hosted on the cloud provider. Once the applications are started, the brokers can negotiate their users' workloads to the best cloud sites, optimizing cost while still achieving their users' QoS requirements.

SaaS User (Application User) The SaaS user utilizes their SaaS Provider to negotiate access to a desired application with a set of desired QoS requirements. For example, the user may desire a minimum response time for their requests to a hosted application, or desire access to several processor cores. The user's QoS requirements can be categorized into two types: intrinsic and external. Intrinsic QoS requirements deal with the workload the user desires to be executed. For example, the user may desire a task or job to be executed by a virtual machine with two cores and two gigabytes of RAM. External QoS requirements are those that the user or others can apply to the workload to improve their own objectives. For example, a user objective may be that their tasks be prioritized according to how many cores each task requires. The user can therefore apply a cost value metric to multi-core tasks that is larger than the value given to single-core tasks. This value is an example of an external QoS metric. Application-layer scheduling on the cloud is an interesting research area since application-layer schedulers would likely conflict with the resource schedulers.

Cloud Exchange The Cloud Exchange manages supply and demand and negotiates deals between SaaS Providers and cloud providers, acting on behalf of and in good faith toward their users. The economics of buying and selling services is an interesting research area for cloud computing. This area could benefit from game theory or advanced scheduling algorithms.

Figure 2.1 is an economics-based cloud model which, Buyya says, can provide computing power as the fifth utility. In this dissertation, we use Buyya's model within a control and feedback framework, as depicted in Figure 2.2. The control and feedback framework is provided by IBM's architectural blueprint for autonomic computing [IBM06].
The blueprint is based on the Autonomic Manager's (AM) Monitor Analyze Plan Execute (MAPE) control flow model and the Autonomic Computing Reference Architecture's (ACRA) management model, as depicted in Figure 2.2. Merging Buyya's economics-based cloud utility with a control and feedback framework provides flexibility to monitor and adapt the artifacts (i.e., the SaaS Provider, cloud provider, or application). The ACRA model defines five layers of control and management, defined from top to bottom as follows:

Figure 2.2: Economics-Based Cloud Paradigm within a Control and Feedback Framework.

Manual Manager the management interface; specifies which policies should be implemented by the system.

Orchestration AM monitors and manages groups of AMs to ensure they work together in an efficient manner (e.g., manage an application's deployment to find cheaper resources).

Touch Point AM monitors and manages a resource (e.g., monitor a SaaS Provider to ensure it is not violating its service level agreements).

Touch Point the control interface to a resource.

Managed Resource a controlled resource (e.g., a user's application or workload).

Autonomic Managers (AMs) reside at the Orchestration AM and Touch Point AM layers of the ACRA model. Autonomic Managers implement the MAPE control flow loop. The MAPE elements are as follows:

Monitor a collection of sensor inputs used by the AM's analyzer.

Analyzer analyzes sensor inputs with knowledge information.

Planner uses the analysis results to create an execution plan.

Executer actuates the plan created by the Planner.

Knowledge used by the Analyzer and Planner to keep state.

Buyya's and Fox's models (cf. Figure 2.1) present several areas where research opportunities exist. For example, the cloud exchange would benefit from research in scheduling to maximize its revenue, and the cloud provider would benefit from admission control theory to ensure its Service Level Agreements (SLAs) are not being violated. In addition, managing Buyya's model within IBM's ACRA model presents other research opportunities. For example, scheduling at the orchestration layer could be used to manage an application deployment, or to manage a virtual machine deployment for several applications to execute on. The preceding description of Buyya's and Fox's cloud architectures and artifacts within IBM's ACRA enables an assessment of related work and its impact on the cloud computing paradigm as depicted in Figure 2.2. For example, grid computing embraces the concept of meta-scheduling. Meta-scheduling techniques in the grid paradigm may be useful in the cloud paradigm's orchestration layer to schedule applications to cloud providers via Buyya's cloud exchange. The difficulty is where and how easily the grid solutions can be applied and adapted for the cloud paradigm. Section 2.3 presents several taxonomies to classify current literature into our cloud model (cf. Figure 2.2).
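The MAPE elements listed above can be sketched as a minimal autonomic-manager loop over a managed resource. This is only an illustrative reading of the blueprint: the sensor values, the 75% utilization threshold (echoing the SLO example given earlier), and the scale actions are hypothetical, and a real Touch Point AM would actuate through actual monitoring and touch-point interfaces.

```python
# Minimal sketch of an Autonomic Manager running a MAPE loop with
# shared Knowledge. The threshold and actions are hypothetical.

class AutonomicManager:
    def __init__(self, threshold=0.75):
        self.knowledge = {"threshold": threshold, "history": []}

    def monitor(self, sensor_reading):
        self.knowledge["history"].append(sensor_reading)  # collect sensor input
        return sensor_reading

    def analyze(self, utilization):
        return utilization > self.knowledge["threshold"]  # SLO at risk?

    def plan(self, slo_at_risk):
        return "scale_out" if slo_at_risk else "no_op"    # build a plan

    def execute(self, action):
        return action  # a real AM would actuate via the touch point

    def step(self, sensor_reading):
        u = self.monitor(sensor_reading)
        return self.execute(self.plan(self.analyze(u)))

am = AutonomicManager()
actions = [am.step(u) for u in (0.40, 0.72, 0.81, 0.93)]
# → ["no_op", "no_op", "scale_out", "scale_out"]
```

An Orchestration AM would sit one layer above, treating several such managers as its own managed resources and coordinating their plans.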

2.3 Taxonomies of Cloud Computing

This section creates several taxonomies for our model as depicted in Figure 2.2. The objective is to use the taxonomies to classify current literature for our model. Seven aspects (concepts and components) depicted in Figure 2.2 are of interest in creating the taxonomies. We classify cloud computing research into these seven aspects.

The Cloud Aspect the three-layered cloud itself could benefit from a taxonomy of taxonomies that categorizes different perspectives.

Scheduling Aspect scheduling refers to resource provisioning and task allocation. Resource provisioning specifies the adding or removing of resources that an entity (e.g., a service, application, or virtual machine) consumes. For example, an application may be provisioned to a virtual machine to execute on, or a virtual machine may be provisioned more memory so it can host more applications. Task allocation specifies how user-generated workload is distributed to application instances. Scheduling in a federation of clouds is difficult due to the heterogeneity of cloud providers. For example, an issue may be which scheduling topology is required to integrate the cloud providers. Do the schedulers work independently at different levels or communicate to better optimize altruistic objectives? To achieve this, several scheduling levels need to work together. At the lowest layer, the cloud providers provision their virtual machines with physical resources (i.e., CPU, memory, storage, and bandwidth) and specify which virtual machines will host user applications. The next layer determines which cloud providers are used to host user applications. Finally, the last layer allocates user workload to cloud-hosted applications. Referring to Figure 2.2, there are three areas where scheduling can be applied, as follows:

SaaS Provider creates SLA contracts between users and cloud providers.
The SaaS Provider provisions user applications using cloud resources (possibly from several cloud providers), which are negotiated using the cloud exchange. The SaaS Provider may continuously look for better deals for hosting its users' applications.

Cloud Provider ensures the virtual machines (VMs) have access to physical resources to do actual work.

ACRA Orchestration Layer ensures the SLAs made through the SaaS Providers are being adhered to, and that user task requests are handled efficiently to optimize the number of entities required to execute the user workload. In the case of peak workloads, it optimizes access to more expensive resources to handle the load.

Cloud Coordination Aspect facilitates communication between cloud entities. Questions include: how do schedulers at different levels in the cloud paradigm interact with each other? Can they affect and monitor each other?

Discovery Aspect ensures resources from cloud providers are continuously updated. Chapter 3 of Ranjan's dissertation describes the resource discovery system in detail [Ran07]. This is not the focus of this dissertation but is of interest to researchers working on discovery mechanisms.

Autonomic Management and Control Aspect the application of autonomic theory to cloud computing is new and evolves from biological systems that autonomously adapt to environmental changes. For example, if your body has a virus, it may increase its temperature to destroy the virus. Control theory is also an important aspect of the cloud computing paradigm. The dynamic nature of cloud computing makes the cloud an attractive platform for software providers, but requires control mechanisms. To provide control, feedback from the system is required to determine system performance. Research in this area has already been done with control system theory [Oga87] [HDPT04]. The difficulty is in how to combine control theory and autonomic management theory with task allocation and resource provisioning.

Application and Workload Aspect applications and their workloads affect cloud system performance; for example, they can be processor intensive and consume large amounts of bandwidth and memory.

Security Aspect cloud and grid providers provide a security mechanism to access and use system resources.
For example, the CERN grid uses X.509 certificates in a single sign-on framework. Security is not addressed in this dissertation. From the seven research aspects derived from our model (cf. Figure 2.2), we choose three on which to focus our research. The cloud aspect is selected

because it represents ACRA's framework and is necessary to manage Buyya's economics-based cloud artifacts. The Scheduling Aspect is selected because it provides models, strategies, and solutions to manage cloud systems. The Application and Workload Aspect is selected because it is affected by cloud organization and scheduling; therefore, it is interesting for us to investigate the effects different workloads have on applications.

Each of the three aspects described has classifications (taxonomies) which pertain to our model. We propose eleven taxonomies grouped into the three aspects. The taxonomies are used to identify where current literature (i.e., identified techniques from grid scheduling or control theory) may be useful in our model. The taxonomy groups are as follows.

The Cloud Aspect (cf. Section 2.3.1)
1. Cloud Taxonomy (a taxonomy of taxonomies)
2. Organizational Taxonomy
3. Resource Taxonomy
4. Co-ordination and Discovery Taxonomy
5. Management Taxonomy

The Scheduling Aspect (cf. Section 2.3.2)
1. Management Taxonomy (a taxonomy of taxonomies)
2. Management Organization Taxonomy
3. Cloud Management-Control Taxonomy
4. Cloud Management-Scheduling Taxonomy

The Application and Workload Aspect
1. Application Taxonomy
2. Workload Taxonomy

The following sections describe the cloud aspect's taxonomies, the scheduling aspect's management taxonomies, and the application and workload aspect's taxonomies, respectively.

The Cloud Aspect's Taxonomies

Our cloud taxonomy is a taxonomy of taxonomies consisting of four sub-taxonomies, as depicted in Figure 2.3.

1. Cloud Organization Taxonomy (cf. Figure 2.4): this taxonomy classifies clouds according to structure. For example, is the cloud a grid of clouds or a single cloud, or does it support federation over several data centers?

2. Cloud Resource Taxonomy (cf. Figure 2.5): this taxonomy consists of several taxonomies. It classifies resources according to organization and resource attributes. For example, the resource may be a physical static resource (like a computer core) which is organized as a federation of similar resources (where the entire system is perceived as one collection of cores and accessed uniformly).

3. Cloud Co-ordination and Discovery Taxonomy (cf. Figure 2.6): this taxonomy classifies resource discovery entities. How resources are found and used are useful classifications for characterizing a tool or technique.

4. Cloud Management Taxonomy (cf. Figure 2.7): classifies management tools, techniques, models, and algorithms in terms of their organization, scheduling, and control capabilities. Each of these has its own sub-taxonomies.

The cloud organization taxonomy is depicted in Figure 2.4 as two main classes, the cloud federation class and the data center federation class. The cloud federation classification is concerned with multi-cloud systems. The term grid of clouds can be used to describe this scenario, in which different clouds compete for user applications and workloads negotiated through SaaS providers. The data center federation class is not addressed in this dissertation, but there is an increasing need to support Big Data applications.

The cloud resource taxonomy consists of two sub-taxonomies that classify resources according to organization and attributes. The cloud resource attribute taxonomy classifies resource attributes using the virtualizable and dynamic classes.
The virtualizable classification separates attributes that are virtualized from those that are not (i.e., a physical processor is not a virtualized resource). The dynamic class classifies attributes according to their dynamic nature: for example, an operating system installed on a machine is not dynamic, whereas the amount of RAM allocated to a virtual machine is.

[Figure 2.3: A Cloud Taxonomy]

The cloud co-ordination taxonomy classifies how cloud entities find and use each other. For example, cloud entities may register to a central server that has a lookup service; they may be part of a distributed hash ring in which all participating nodes have a lookup service that works in a decentralized way; or the entities may register locally and be part of a hierarchy of registered entities. The cloud management taxonomy is one of the foci of this dissertation. Hence we present the last sub-taxonomy of the cloud (a cloud management taxonomy) in its own sub-section (cf. Section 2.3.2).

[Figure 2.4: A Cloud Organization Taxonomy]

A Cloud Management Taxonomy

The cloud management taxonomy, as depicted in Figure 2.7, consists of three sub-taxonomies: the cloud management-organization taxonomy, the cloud management-scheduling taxonomy, and the cloud management-control taxonomy, as described below.

The cloud management-organization taxonomy, as depicted in Figure 2.8, classifies management entities according to their organization: centralized, decentralized, or hierarchical. For example, the scheduling entity may be organized in a hierarchical fashion in which local resources have a local scheduler and the federation of many local schedulers has a meta-scheduler. Another possibility is a single centralized scheduling system which manages scheduling at all levels within the cloud system. A decentralized scheduling approach is a third possibility, in which a unified scheduling algorithm schedules independently of other schedulers, yet the schedulers implicitly work together.

[Figure 2.5: A Cloud Resource Taxonomy]

The cloud management-control taxonomy, as depicted in Figure 2.9, classifies cloud entities according to their control capabilities and controllability. There are three primary classes: control type, stimulus, and model type. The control type is classified as autonomic, control, or mixed methods. The autonomic classification refers to controls and/or management tools which follow the Monitor, Analyze, Plan, and Execute (MAPE-K) model. The control classification categorizes according to traditional control methods such as PID controllers. Mixed methods classify entities which combine traditional and autonomic control methods.

Autonomic controls are classified by stimuli, information analysis, and actions. Stimuli describe how the controller receives information (i.e., from the surrounding environment or internal events). Information analysis may be sensor dependent, or

internal logic, or both; finding a good balance is difficult. Lastly, the action classification captures how the autonomic system affects itself and its environment: entities are classified as actively changing the environment versus changing themselves.

[Figure 2.6: A Cloud Coordination Taxonomy]

Traditional controls can be classified by whether they use first-principle models or more complex models. Another classification is how many inputs and outputs the controller has. More complex models involve adaptive control and can be classified as following a MIAC (Model Identification Adaptive Control) or MRAC (Model Reference Adaptive Control) scheme, which identifies how dynamic the control system is. MIAC controllers update the controller's model of the system, and as the model changes, the controllers are tuned to adapt. MRAC models have a predefined model of the system under control and try to fine-tune the control model to adapt to minor disturbances to the system [ÅW11].

The mixed methods class is a catch-all classification for control entities that do not fit nicely into the autonomic or control classifications. Generally, combination models

that use both control and autonomic mechanisms are classified here.

[Figure 2.7: A Cloud Management Taxonomy]

The cloud management-scheduling taxonomy, as depicted in Figure 2.10, classifies entities by their organization and objectives. The objective may be classified as a SaaS provider objective (i.e., find the best QoS for the lowest cost), or as a cloud provider objective, which covers managing user workload, provisioning applications, achieving all SLAs, and satisfying internal SLOs. Cloud management organization is a sub-taxonomy that classifies management entities according to their interoperability. For example, is there a central management system, or are there several that all work together in some way? SaaS provider scheduling can be classified as on-line or off-line scheduling, along with classifications for systems that employ feedback and those that allow themselves to be configurable.

[Figure 2.8: A Cloud Management Organization Taxonomy]

The cloud provider has a sub-taxonomy for organization which classifies according to how entities interact (i.e., coordinated or uncoordinated interaction). The more interesting classifications arise when considering how the cloud provider provisions resources and allocates workload. These two classes can be further divided into on-line or off-line scheduling. On-line scheduling can be further classified according to coordination and use of feedback.

A Cloud Application and Workload Taxonomy

The application taxonomy, as depicted in Figure 2.11, classifies applications as being able to run in parallel or in sequence. This depends on the dependencies of the workload and/or the application. For example, the application may have a single database that limits the amount of parallel access to the data. This is further classified into

single or multi-tiered applications. The application is then classified in terms of resource dependencies: does it require significant bandwidth, or is it processor intensive (i.e., a ray-tracer is processor intensive, whereas a video-streaming application is bandwidth intensive)?

[Figure 2.9: A Cloud Management-Control Taxonomy]

The cloud workload taxonomy, as depicted in Figure 2.12, classifies the workload based on task dependencies of the user's workload (i.e., whether task A has to wait for task B to finish). This is further classified into traffic patterns (i.e., burst versus non-burst traffic). Burst traffic is non-uniform traffic in which high volumes of tasks and jobs are submitted followed by idle periods. Non-burst traffic refers to uniform task flow (i.e., user workload arrives at a steady rate). Workload dependencies have a sub-classification of being centralized or decentralized. This means dependent tasks may have to execute on the same machine due to shared-memory requirements for processing data. This is the case for OpenMP (multi-platform shared-memory parallel programming) parallel tasks, as opposed to MPI (message passing interface), which

[Figure 2.11: A Cloud Application Taxonomy]

the literature review and provides insights on how to proceed within a research area by determining how current literature is applicable. Grozev and Buyya [GB12] provide an architectural taxonomy of the current state of the art. They classify academic and industry projects with respect to inter-cloud research to determine the directions in which current research is progressing. One of the areas identified for further research is service level agreement techniques, which this dissertation addresses.

Chapter 3

An Adaptive Perspective for Classifying the Literature

This chapter applies two of the taxonomies proposed in Chapter 2 to classify related literature (i.e., cluster computing, grid computing, control system theory, and computational clouds) given our view of the cloud computing paradigm (cf. Figure 2.2).

3.1 A Literature Classification

To discuss the related literature, it is prudent to categorize related works according to the taxonomies developed in Chapter 2. In particular, we concentrate on the literature related to the cloud research paradigm as depicted in Figure 2.2. We categorize and analyze the literature within this paradigm. Several research categories within the cloud paradigm can take advantage of techniques used in other domains (e.g., the grid domain). Two of these categories, specifically the application of scheduling and control within a feedback framework, become the focus of this dissertation.

We reviewed many papers, journals, books, and theses for this dissertation, and a selected subset is discussed in this chapter. To simplify the discussion and categorization (i.e., using the taxonomies from Chapter 2), the literature is grouped into five compute paradigms: grid and cloud systems, peer-to-peer systems, autonomic, adaptive, and self-adaptive systems, control and feedback systems, and scheduling

systems. This section classifies literature aspects from each of the five compute paradigms using selected taxonomies. This provides a mapping of aspects from related literature into our proposed cloud paradigm as depicted in Figure 2.2. The focus of this dissertation is applying scheduling and control strategies in the cloud paradigm; therefore, the literature is classified using two relevant taxonomies: the cloud management-scheduling taxonomy and the cloud management-control taxonomy. The literature aspects are recorded in Tables 3.1 and 3.2, and their classifications are depicted in Figures 3.1 and 3.2.

For example, Proportional-Integral (PI) admission control scheduling is an aspect discussed in Yang [YTX04] (cf. Table 3.1). Using the cloud management taxonomy (cf. Figure 3.1), Yang's aspect is relevant to the cloud broker and can be used for dynamic allocation and provisioning (on-line) when resources are statically defined (non-configurable) in a feedback framework (i.e., PI admission control can take advantage of feedback). For some cloud applications this is reasonable, and the PI controller could be used for scheduling management.

3.2 The Cloud Classification

Scheduling and control techniques are among the most pervasive technologies in computing. Since the focus of this dissertation is on scheduling strategies for clouds, it is important to review the broader computing field for techniques with potential application in our cloud model. Paradigms examined include grid, peer-to-peer, cluster, and autonomic computing. From the literature it is clear that some scheduling-oriented aspects are common, such as:

Admission Control Scheduling (PI or Kalman)
Scheduling Algorithms (EDF or FIFO)
Scheduling Models (Cutting Stock)
Economics of Scheduling

These generalized aspects are classified into our model using the cloud management-scheduling taxonomy and the cloud management-control taxonomy.

Admission control and scheduling appear in the literature to be a popular way to avoid over- or under-utilization of a system. The control policy can be as simple as measuring a server's utilization and assigning tasks to the server if it is below a specified threshold. Buyya [YB06] and Yang [YTX04] are two examples of using admission control, for cluster and grid computing respectively. Buyya focuses on using admission control as a way to manage inaccurate deadline estimates for deadline-critical jobs in clusters. They accomplish this by weighting the effect of allowing a job to be scheduled by the cluster, using a risk metric, a deadline delay metric, and an enhanced decision-making process. Yang uses resource-based admission control for grid computing; in this case the resource-based metric is server utilization. The admission controller is based on a PI controller. The system is modeled (the model's characteristic equation is developed) as a PI system, and different gain values for K_p and K_I are experimented with to determine their effects under certain load conditions. In Yang's paper [YTX04], rather than attempting to determine the system model parameters (via traditional PI methods, i.e., a step response), they are assigned using data from previous studies. Through manual testing, the controller parameters K_p and K_I and the sample rate are set.

Figures 3.1 and 3.2 classify these works according to the two taxonomies. The scheduling classification in Figure 3.1 supports Yang's [YTX04] use of control theory as a potential solution in the cloud computing paradigm to support the cloud broker as an on-line, non-configurable feedback controller. Buyya's [YB06] work has the potential to support the cloud provider for on-line allocation and provisioning of workloads and resources.
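The structure of such a PI admission controller can be sketched as a simple feedback loop. The sketch below is an illustrative reconstruction, not the controller from [YTX04]; the gains K_p and K_I, the utilization target, and the mapping of the control signal onto an admission fraction are all assumed values.

```python
# Illustrative PI admission controller in the style described above:
# admit a fraction of arriving tasks while driving measured server
# utilization toward a target. Gains and target are assumed values.

class PIAdmissionController:
    def __init__(self, target_util=0.7, kp=0.5, ki=0.1):
        self.target = target_util   # desired server utilization
        self.kp = kp                # proportional gain K_p
        self.ki = ki                # integral gain K_I
        self.integral = 0.0         # accumulated error

    def admit_fraction(self, measured_util):
        """Return the fraction of arriving tasks to admit (0..1)."""
        error = self.target - measured_util
        self.integral += error
        u = self.kp * error + self.ki * self.integral
        # Map the control signal onto a valid admission fraction,
        # centered on admitting half the arrivals at zero error.
        return min(1.0, max(0.0, 0.5 + u))

# Overloaded server (utilization above target): admit fewer tasks.
high = PIAdmissionController().admit_fraction(0.95)
# Underloaded server (utilization below target): admit more tasks.
low = PIAdmissionController().admit_fraction(0.40)
assert high < low
```

The integral term accumulates persistent error, so sustained overload keeps pushing the admission fraction down even after the proportional term alone has saturated.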
The control classification in Figure 3.2 indicates that Yang's [YTX04] technique is classified as a first-principles-model PID control system with a single input and single output, but with potential support for multiple inputs and outputs, whereas Buyya's [YB06] technique is surprisingly categorized as autonomic. This is because the controller performs estimation and prediction, which are very difficult to model as a PI control system.

Scheduling algorithms have been in the literature for many decades, and there are many models and solutions. Generalized problems are posed and algorithms are developed to solve them. Concrete problems which can be formed from generalized problems can use prescribed solutions; the difficult part is mapping a real-world problem to a generalized problem form. For example, the cutting stock problem is a standard problem which tries to cut finite resources into many different predefined

dimensions. This problem was originally posed for cutting rolls of paper, but it has been mapped to solutions for cutting two-dimensional sheet metal. In this dissertation, it is mapped to a resource provisioning problem. Other popular scheduling algorithms include Dominant Sequence Clustering (DSC), which is used to minimize make-spans of Directed Acyclic Graphs (DAGs). Various researchers have embraced these types of techniques [HB05] [ZC05] [CJ01] [ZZ00] [YG94] [PDC08] [LP98]. Tables 3.1 and 3.2 identify interesting aspects of these papers with potential impact on our model; Figures 3.1 and 3.2 classify the aspects into our model (cf. Figure 2.2). Figure 3.1 indicates that most of these models and algorithms are off-line scheduling techniques. The scheduling system performs as a state machine with no adaptation, fault tolerance, or assessment capability. These techniques generally have a list of all the tasks to be scheduled, a static model of the resources, and the current state of the resources. This is generally of no use in a dynamic system in which the tasks (workload) are unknown. However, if the scheduling system uses scheduling cycles (i.e., creates a new schedule periodically), then tasks can be queued and an off-line approach can be used to determine a schedule. Figure 3.2 indicates that no DSC algorithms could be classified as control management. This should further motivate research into how clustering methods could be used as controllers; alternatively, there may simply be no good classification of clustering methods as controllers.

In Moschakis's paper [MK12], Gang scheduling is used to manage virtual machines. They employ an adaptive first-come-first-fit algorithm and a largest-job-first algorithm to schedule workload onto the virtual infrastructure. Wu [WKGB12] provides a comprehensive simulation for admission control based on service level agreements (SLAs).
They acknowledge the benefit of research into knowledge-based admission control, which is an area this dissertation addresses. Tordsson [TMMVL12] provides a cloud broker design in which users provide their own VM templates that specify a specific image and target cloud. Nathani [NCS12] uses policy-based scheduling to manage virtual machines, employing a lease-based scheduling approach along with other scheduling techniques such as backfilling to fill in scheduling gaps. Litoiu [GSLI12] uses a feedback model to update an application's performance model. The scheduler uses the performance model to manage the infrastructure and application deployment. The key contribution of the work is in tracking changes in the models so the scheduler can cause adaptation.

[Figure 3.2: Cloud Management-Control Taxonomy Literature Classification]

Another method of scheduling is to use optimization models (e.g., Knapsack or Cutting Stock), which can model scheduling problems using an objective function and a set of resource constraints. These types of models often use heuristics to solve the problem; a solution is a set of tasks assigned to a set of resources. An example of

this is presented in Vanderster's dissertation [Van08]. In this work the problem is presented as a 0-1 multi-choice Knapsack problem (0-1 MMKP) and solved using a non-optimal algorithm that employs heuristics. Interesting aspects from these works are in Tables 3.1 and 3.2, in this case the optimization model aspect and the optimization solutions aspect. Figures 3.1 and 3.2 classify these aspects into our model (cf. Figure 2.2). This type of scheduling is applicable to both SaaS brokers and cloud providers according to its classification. Figure 3.2 indicates there is potential to apply this theory as MRAC (Model Reference Adaptive Control): task schedules sent to the cluster have an expected result due to knowledge of the compute performance model (i.e., how many MIPS the cores are rated at), and differences between actual and expected execution times can be used to update the reference model.

Table 3.1: Cloud Management-Scheduling Literature Aspects

  Literature's Aspect                       Literature Reference
  PI Admission Control                      [YTX04]
  Risk Admission Control                    [YB06]
  DSC                                       [HB05]
  Optimization Models and Solutions         [Van08] [PSS+12] [DMK+13] [KAB+12] [GB12] [MK12] [NCS12]
  Predictive Modeling using Kalman Filter   [SILI10]
  External Adaptivity                       [SER+10] [SSWS07] [GSLI12]
  Application Based Provisioning            [WLW+07] [GSLI12]
  Autonomic Feedback Control                [HDPT05] [Xu07] [GSLI12] [WKGB12]

Scheduling management mechanisms can be divided into two categories: autonomic control approaches and traditional control approaches. Traditional control refers to control theory (i.e., engineering control design), and autonomic refers to adaptive and self-adaptive approaches using feedback loops. Several approaches implement adaptation through predictive modeling, such as Kalman filters, as presented by Litoiu [SILI10]; others focus on contextual models and adaptation rules based on predicate mathematics, as in Sama [SER+10].
Several adaptive schemes have been presented by Yingsong [SSWS07] which provision virtual machines with

physical resources in an adaptive way such that the resources are not over-provisioned.

A more difficult but powerful approach to using adaptive mechanisms is the self-adaptive mechanism. This type of mechanism needs the ability to change the actual adaptation process. Some examples of these systems can be found in [AdAM09] [GL09] [MPS08]. Müller [MPS08] focuses on the visibility aspects of adaptive systems, which is the first step in creating self-adaptive systems.

Another approach to management is control theory. An excellent master's thesis from UC Berkeley by Xu [Xu07] nicely explains the use of traditional control theory to parse system logs to predict and ultimately work through system failures. Moreover, a book by Hellerstein [HDPT04] provides a review of building and constructing controllers for computing systems. Hellerstein is one of the leaders in this area who employs traditional control theory for self-management to satisfy service level agreement requirements [Hel04] [HDPT05] [DHP+05] [PGH+02].

Aspects of these works, such as the Kalman filter, are listed in Tables 3.1 and 3.2, and classified in Figures 3.1 and 3.2. Litoiu's [SILI10] work is classified as predictive control. From a scheduling perspective, it can be classified under any class of feedback; from a control perspective, it can be classified as autonomic. The Kalman filter is a predictive controller: it uses gathered knowledge to make decisions and external stimuli to gain knowledge of the system under control. Therefore, depending on the Kalman filter designed, it can be classified as more internal or more external in terms of its sensitivity (i.e., whether the filter is sensitive or insensitive to quick changes in the environment). Yingsong [SSWS07] is classified as scheduling by supporting SaaS or cloud provider software, and as a controller it can be classified as autonomic, which means it would make the perfect autonomic manager for cloud computing.
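The predictive behavior attributed to the Kalman filter above can be made concrete with a minimal scalar filter tracking a noisy utilization measurement. This is a generic textbook filter under a random-walk state model, not the filter from [SILI10]; the noise variances q and r are assumed values.

```python
# Minimal scalar Kalman filter tracking a noisy signal (e.g., CPU
# utilization). Generic textbook form with a random-walk state model;
# the process/measurement noise variances (q, r) are assumed values.

class ScalarKalman:
    def __init__(self, x0=0.5, p0=1.0, q=1e-4, r=1e-2):
        self.x = x0  # state estimate (e.g., CPU utilization)
        self.p = p0  # estimate variance (uncertainty)
        self.q = q   # process noise variance
        self.r = r   # measurement noise variance

    def update(self, z):
        # Predict: random-walk model, so the estimate carries over
        # and only the uncertainty grows.
        self.p += self.q
        # Correct: blend prediction and measurement via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
for z in [0.62, 0.60, 0.61, 0.59, 0.60]:  # noisy utilization samples
    est = kf.update(z)
assert 0.55 < est < 0.65  # estimate settles near the measurements
```

The ratio q/r sets the filter's sensitivity: a larger process noise q makes the estimate chase recent measurements (more "external"), while a smaller q makes it trust its internal model and smooth out quick environmental changes.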
Autonomic management can be characterized as systems that aim for a balance within their operating environment, as, for example, our autonomic nervous system does. In reality, many systems follow this model whether or not that was their designers' intention. Systems designed with the autonomic approach can be classified as autonomic. The scheduling characteristics of an autonomic system are not entirely clear and depend on the designer. However, there are several works related

Table 3.2: Cloud Management-Control Literature Aspects

  Literature's Aspect                       Literature Reference
  PI Admission Control                      [YTX04]
  Risk Admission Control                    [YB06] [Xu07] [HDPT04] [PGH+02] [Hel04] [HDPT05] [DHP+05] [Oga87] [YTX04]
  Optimization Models and Solutions         [Van08] [PSS+12] [DMK+13] [KAB+12]
  Predictive Modeling using Kalman Filter   [SILI10]
  External Adaptivity                       [SER+10] [SSWS07] [GSLI12]
  Autonomic Provisioning                    [WLW+07]
  Autonomic Feedback Control                [HDPT05] [GSLI12]

to this that describe and use autonomic systems. Filino [FS07] describes a tool as autonomic that constructs grid applications according to the current grid environment. Quiroz [QKP+09] uses decentralized on-line clustering to classify grid and cloud workflows; using this, provisioning based on the classification of workload is provided. Wang [WLW+07] proposes autonomic provisioning to support outsourcing the hosting of virtual machines based on an appliance-based provisioning framework. These three works are good examples of using autonomic systems to describe their solutions, since they adapt to environmental changes. IBM provides a blueprint for autonomic computing in the form of white papers [IBM06] describing models (such as the MAPE-K loop), architectures (such as ACRA), frameworks, and components (such as the autonomic manager) to construct autonomic systems. To motivate why autonomic systems are important, Müller [Mül06] makes a case for their necessity in software evolution and software-intensive systems due to their ultra-large-scale design and implementation.

Aspects of these works, such as Autonomic Control and Risk Admission Control, are listed in Tables 3.1 and 3.2, and classified in Figures 3.1 and 3.2. Wang's [WLW+07] work is classified using the cloud scheduling and cloud control taxonomies. It is a good example of mixing complex control systems with autonomic systems.
They use autonomic systems to self-adapt the control mechanisms, which are modeled using queuing theory; there is also an optimization problem to solve to reduce costs. Wang's paper [WLW+07] is classified in many categories of Figures 3.1 and 3.2; therefore, his techniques are potentially useful in many areas of cloud computing.

Economics is essential to scheduling and is thus a rich research field. Economics applied to grid computing has great potential in clouds due to the distributed nature of both paradigms. Moreover, the idea of providing computing as a utility is one of the driving forces in this field. Buyya [Buy02] and Vanderster [Van08] both use economic models in their scheduling designs as applied to grids. Buyya [BAGS02] describes the grid economic marketplace in detail using several economic models. Aspects of these works, such as external adaptivity, are listed in Tables 3.1 and 3.2, and classified in Figures 3.1 and 3.2.

Hellerstein [HDPT05] provides comprehensive approaches to integrating autonomic and control theory, and has by far the most classifications relevant to our cloud computing domain. Pawluk [PSS+12] and Keahey [DMK+13] [KAB+12] both offer solutions to multi-cloud federation using a cloud broker. These solutions use application models and cloud costs to determine the deployment of virtual machines and applications. For example, the STRATOS [PSS+12] broker can load-balance an application across several clouds with varying costs. The problem is formulated as a multi-criteria optimization problem and solved using several optimization techniques.

3.3 Summary

Literature aspects listed in Tables 3.1 and 3.2 are classified using two taxonomies as depicted in Figures 3.1 and 3.2. Using this classification, we get a sense of where current related literature aspects can be applied to our cloud computing domain as depicted in Figure 2.2. Most literature aspects can be applied to the cloud brokerage and cloud provider. One technique that stands out from Table 3.1 is the off-line clustering DSC algorithm, which has potential for the cloud broker. It could be useful in profiling user traffic flow to group tasks, which could simplify scheduling from the broker's perspective.
However, it is not very dynamic, and large changes in task workload characteristics could be damaging. A solution could be an autonomic control system (cf. Table 3.2) used in addition to the off-line scheduling method to help adapt the scheduling classification when the classification model differs from the actual system model by a significant amount. For example, a combination of aspects from Hakem [HB05] and Wang [WLW+07], or from Hakem [HB05] and Hellerstein [HDPT05], could be used together to implement a solution for a SaaS

Broker. Many other combinations from Tables 3.1 and 3.2 can be used to implement the artifacts of the cloud paradigm (cf. Figure 2.2). The remainder of this dissertation investigates and evaluates some of these solutions and their combinations as applied to cloud computing.

Chapter 4

Scheduling Strategies

A key research question derived from the questions in Chapter 1 is "Where does scheduling apply from the perspective of the schedule's cost?" A key area in which to apply scheduling strategies in terms of cost is the cloud provider domain. Buyya [BYV+09] states that current cloud technologies need to be extended into the cloud provider's infrastructure in order to avoid SLA violations and manage risks. Two years later, Buyya [BRC10] re-stated the need for modeling behavior and performance to provide scheduling reasoning for service provisioners. These are some of the motivating factors for research in this domain.

This chapter explores scheduling models for virtual machine placement within a cloud provider. This work provides controllability in virtual machine placement and identifies provisioning metrics. To find a good approach to virtual machine placement, we refer to the classification work in Chapters 2 and 3. From Tables 3.1 and 3.2, Hernandez and Vanderster's work is a good start. Vanderster and others use a 0-1 Multi-Choice Knapsack Problem to model grid resource task allocation [PHVD04] [VDPHS06] [VDS07] [Van08], with a utility model to provide quality-of-service influence. However, not all literature supports the use of optimization models for virtual machine allocation. Florin [HMGW07] states that the difficulty is due to the unknown state of system resources. In terms of cloud virtual machine provisioning, the difficulty is that the state continuously changes. Vanderster shows that this is true and used a backfilling strategy to compensate for the scheduling cycle allocation [Van08]. Vanderster also observed a slight improvement in scheduling performance compared to a pure backfill strategy.
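To make the underlying optimization concrete, the 0-1 knapsack subproblem at the core of such formulations can be solved exactly with a standard dynamic program. The task values and weights below are made-up numbers, and this textbook DP stands in for, rather than reproduces, the heuristic multi-choice solver used in [Van08].

```python
# Standard 0-1 knapsack dynamic program: choose tasks (value = utility,
# weight = resource demand) that maximize utility within one server's
# capacity. Task data are made-up numbers for illustration.

def knapsack(values, weights, capacity):
    """Return the best total value and the chosen item indices."""
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]          # skip item i-1
            if weights[i - 1] <= c:              # or take it, if it fits
                take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                best[i][c] = max(best[i][c], take)
    # Trace back which items were taken.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][capacity], sorted(chosen)

# Four candidate VMs competing for 8 GB of memory on one server.
value, chosen = knapsack(values=[10, 7, 12, 4], weights=[5, 3, 6, 2],
                         capacity=8)
assert value == 17 and chosen == [0, 1]
```

The exact DP runs in O(n * capacity) time, which is why the multi-choice, multi-dimensional variants in the literature fall back on heuristics: the table grows with every added resource dimension.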

Figure 4.1: Scheduling Gaps

Rather than using a backfill strategy, this chapter proposes a different model to manage the scheduling cycle gaps depicted in Figure 4.1. The idea is to treat the resource being provisioned to the virtual machine (e.g., physical memory) as a cut of stock, and the amount of time that the virtual machine keeps the cut of stock as the amount of stock to order. This problem can be modeled as a Cutting Stock Problem (CS), in which the stock replenishes every scheduling cycle. For example, Figure 4.2 depicts a partitioning of two servers. For each scheduling cycle, the servers partition their memory among virtual machine instances. The process of partitioning can be modeled as a CS problem.

The cutting stock problem is closely related to the knapsack problem: the cutting stock problem uses the knapsack problem to solve its subproblems. Specifically, the knapsack problem determines how many possible configurations there are for a given set of tasks and finite resources. This process is repeated for all sets of finite resources from each cloud. The final solution selects a configuration for each cloud such that all tasks are scheduled in one or more scheduling cycles [GG61].

The remainder of this chapter describes the Cutting Stock Problem in detail and explains how it maps virtual machine placements within a cloud. Utility models are

discussed for achieving QoS requirements. We also discuss a scenario using the cutting stock formulation and present experimental results. The chapter ends with some discussion and thoughts on using this type of modeling and its potential for being used as a controller.

Figure 4.2: Scheduling Gaps

4.1 The Cutting Stock Problem

The cutting stock problem can be solved using an integer linear programming formulation. It models the problem with an objective function and a list of constraints. Such problems suffer from a scalability issue in which the solution space becomes too large to solve by brute force. Therefore, much research goes into identifying heuristics (i.e., resulting in potentially non-optimal solutions) that can solve the problem in a reasonable amount of time. The generalized form is presented in Equation 4.1.

\begin{aligned}
\text{minimize} \quad & \sum_{i=1}^{n} c_i x_i \\
\text{subject to} \quad & \sum_{i=1}^{n} a_{ij} x_i \ge q_j, \quad j = 1, \dots, m, \\
& x_i \ge 0
\end{aligned} \tag{4.1}

To illustrate the cutting stock formulation, we devise a simple scenario. A cheese vendor has several packaged blocks of cheese of 50 cm in length, and three customers requiring different quantities of cheese as itemized in Table 4.1. The vendor wants to
