
Characterizing Task Usage Shapes in Google's Compute Clusters

Qi Zhang (University of Waterloo), Joseph L. Hellerstein (Google Inc.), Raouf Boutaba (University of Waterloo)

ABSTRACT

The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory, and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task wait times. This seemingly surprising result can be justified by the fact that CPU, memory, and disk usage are relatively stable over time for the majority of tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters.

1. INTRODUCTION

Cloud computing promises to deliver highly scalable, reliable, and cost-efficient platforms for hosting enterprise applications and services. However, the rapid increase in scale, diversity, and sophistication of cloud-based applications and infrastructures in recent years has also brought considerable management complexity. Google's cloud backend consists of hundreds of compute clusters, each of which contains thousands of machines that host hundreds of thousands of tasks, delivering a multitude of services including web search, web hosting, and video streaming, as well as data-intensive applications such as web crawling and data mining. Supporting such a large-scale and diverse workload is a challenging goal, as it requires a careful understanding of application performance requirements and resource consumption characteristics.

[Figure 1: A compute cluster benchmark.]

Traditionally, Google relies on performance benchmarks of compute clusters to quantify the effect of system changes, such as the introduction of new task scheduling algorithms, capacity upgrades, and changes in application source code. As shown in Figure 1, a performance benchmark consists of one or more workload generators that generate synthetic tasks scheduled on serving machines. In all of the aforementioned scenarios, using historical workload traces can accurately determine the impact of changes and minimize the risk of performance regressions.
However, this approach does not allow answering what-if questions about scaling the workload or other scenarios that have not been observed previously. To address this limitation, it is necessary to develop workload characterization models. We use the term task usage shape to refer to a statistical model that describes run-time task resource consumption (CPU, memory, disk, etc.). Our goal is to develop a characterization of task usage shapes that is sufficiently accurate for producing synthetic workload benchmarks. The key performance metrics we are interested in are the average task wait time and the machine resource utilization for CPU, memory, and disk in each cluster. Task wait time is important because it is a common concern of cloud users. As the workload typically contains many long-running batch tasks that may alternate between a waiting state (which also includes rescheduling due to preemption or machine failure) and a running state, the total wait time experienced by each task is a main objective to be minimized. Similarly, machine resource utilization is important because maintaining high resource utilization is a common objective of cloud operators. In this paper, we present a characterization of task usage shapes that accurately reproduces the performance characteristics of historical traces in terms of average task wait time and machine resource utilization. Through experiments using real workload traces from Google production clusters, we find that simply modeling the mean task usage can achieve high accuracy in reproducing resource utilization and task wait time in Google's compute clusters.

While this result may seem surprising at first glance, a closer examination shows that it is due to both (1) the low variability of task resource usage in the workload, and (2) the behavior of the evaluation metrics (i.e., task wait time and machine resource utilization) under different workload conditions. Our work not only presents a simple technique for generating workload traces that closely resemble real workload traces in terms of the key performance metrics, but also provides helpful insights into understanding workload performance in production compute clusters.

The rest of the paper is organized as follows: Section 2 describes the historical traces used in our analysis. The experimental results are reported in Section 3. Section 4 is devoted to a discussion of the evaluation results; specifically, we analyze the correlation between the theoretical model errors (i.e., variability in task usage) and the empirical model errors observed in the simulations. Section 5 surveys related work in this area. Finally, Section 6 concludes the paper.

2. DATASET DESCRIPTION

The data set used in our study consists of historical traces of compute clusters spanning several days in June. In total, our analysis covers many cluster-days of traces from six production clusters. These historical traces contain the CPU, memory, and disk usage of every task scheduled in each cluster, sampled at regular intervals. Generally speaking, the workload running on Google compute clusters can be divided into four task types: at one end are production tasks that process end-user requests, and at the other are low-priority, non-production tasks that do not directly interact with users; the remaining two types have characteristics falling between these extremes. Table 1 summarizes the size of each cluster as well as the workload composition in terms of the task types. We purposely selected clusters with sizes ranging over two orders of magnitude. Typically, one task type dominates the task population while another accounts for the smallest share; there are exceptional cases, such as cluster F, which has a large percentage of one of the intermediate types.

[Table 1: Cluster size and workload composition (number of machines and task-type breakdown for clusters A-F).]

Table 2 summarizes the mean and average coefficient of variation (CV) of CPU, memory, and disk usage for tasks in every cluster over the course of the trace. The CV of a task for a particular resource is computed by dividing the standard deviation of the measured usage values by their mean. From Table 2, it can be seen that CPU and disk have the highest and lowest CVs, respectively. Even though in many cases the average CV can exceed 1, this does not imply high resource usage variability, since the CV is sensitive to small mean values. For example, even though some tasks in compute cluster E have the highest CV for CPU, their average CPU usage is very close to zero, hence their absolute variability in resource usage is small. Similar results have also been reported in [9] and related studies. We can therefore conclude that the run-time variability of task resource usage is low.

[Table 2: Data set used in the experiment (mean and average CV of CPU (cores), memory (GB), and disk (GB) usage per task type for clusters A-F).]

The analysis above suggests that simply modeling the mean values of run-time task resource consumption is a promising way to model task usage shapes.
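As a concrete illustration, the per-task CV described above can be computed directly from a task's usage samples. The following Python sketch assumes a hypothetical flat list of usage samples for one task and one resource; it is not taken from the paper's tooling:

```python
# Minimal sketch: coefficient of variation (CV) of one task's usage
# samples for a single resource, i.e. stdev / mean as in Section 2.
from statistics import mean, stdev

def task_cv(samples):
    """CV of a task's measured usage values; 0.0 if undefined."""
    if len(samples) < 2:
        return 0.0
    m = mean(samples)
    if m == 0:
        return 0.0  # constant-zero usage: no meaningful variability
    return stdev(samples) / m

# Example: CPU usage (in cores) that is stable around 2.0 -> small CV.
print(task_cv([1.9, 2.0, 2.1, 2.0, 1.95]))
```

A large CV here only signals high relative variability; as noted above, a task whose mean usage is near zero can show a large CV while its absolute variability stays negligible.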
As a starting point, we call this characterization model the mean usage model of task usage shapes. Specifically, the mean usage model stores the mean CPU, memory, and disk usage and the running time of each task in the workload. Our hypothesis is that the mean usage model can perform reasonably well at reproducing the performance of the real workload.

3. EXPERIMENTS

This section presents our experimental results. We first describe our evaluation methodology. Given a historical workload trace from a real compute cluster, we modify the trace by overwriting the actual task resource usage with the model-predicted usage values. Specifically, to evaluate the mean usage model, we replace the measured resource usage records with their mean value for each task and each resource type. The other components of the workload, including user-specified resource requirements, task placement constraints, and request arrival times, are kept intact. We then run two experiments: the first runs the benchmark using the unmodified historical trace, and the second runs the benchmark using the modified trace after the treatment. Once finished, we compare the benchmark results of both experiments. As mentioned previously, the two performance metrics of interest are task wait time and machine resource utilization.

In addition, during our experiments we realized that it is necessary to increase the load on individual clusters in order to make the differences more apparent. For example, when there is ample free capacity in a cluster, every task can be scheduled almost immediately and never has to wait during its course of execution. In this case, the task wait time will be low regardless of the quality of the characterization. Hence, we developed a stress generator that increases the load on the cluster by randomly removing a fraction of its machines. We will discuss the effect of the load increase on the performance metrics in Section 4.
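To make the treatment and the stress generator concrete, the sketch below shows one plausible implementation under an assumed trace layout (per-task lists of (timestamp, usage) samples plus a list of machine identifiers); these names and structures are our own, not Google's benchmark infrastructure:

```python
# Sketch of the two workload transformations described above.
import random
from statistics import mean

def apply_mean_usage_model(usage_by_task):
    """Replace each task's measured usage samples with their mean,
    keeping timestamps (and hence running time) intact."""
    treated = {}
    for task_id, samples in usage_by_task.items():
        m = mean(value for _, value in samples)
        treated[task_id] = [(t, m) for t, _ in samples]
    return treated

def stress_cluster(machines, remove_fraction, seed=0):
    """Stress generator: raise cluster load by randomly removing
    a fraction of the cluster's machines."""
    rng = random.Random(seed)
    keep = int(round(len(machines) * (1.0 - remove_fraction)))
    return rng.sample(machines, keep)
```

Arrival times, resource requests, and placement constraints are untouched by either function, mirroring the methodology above.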

We conducted trace-driven simulations for all cluster-days. We first report the basic characteristics of our performance metrics. Figure 2 shows the total task wait time and resource utilization for cluster A across the trace days. It can be observed that the day-to-day variability of resource utilization is rather small. On the other hand, the day-to-day variability of task wait time can be quite high, especially for some task types, for which the total task wait time on one day can be several times larger than on another. These observations are consistent across clusters, which suggests that resource utilization is a more robust metric than total task wait time.

[Figure 2: Day-to-day variability of the two metrics (resource utilization and total task wait time, in millions of seconds) for cluster A.]

The average machine resource utilization and task wait time for all clusters under different utilization levels are shown in Figures 3 and 4, respectively. As expected, both the utilization and the total task wait time grow with the utilization level (i.e., the percentage of machines removed). The task wait time grows rapidly at high utilization levels. More analysis of this observation is given in Section 4.

[Figure 3: Average machine resource utilization for clusters A-F at increasing machine-removal levels.]
[Figure 4: Average task wait time (millions of seconds) by task type for clusters A-F at increasing machine-removal levels.]

Next we present our evaluation of the mean usage model. The results for resource utilization and task wait time are shown in Figures 5 and 6, respectively. It can be observed that the model error for resource utilization is quite small (a few percent) under all circumstances. For task wait time, however, the percent error has very high variability. For example, one cluster produces a significant error for one task type at a particular machine-removal level, but the large error bar (representing the standard error) indicates that the error is likely caused by one or two samples. This is also explained by our previous result that task wait time is a less robust metric than resource utilization.

[Figure 5: Percent model error for resource utilization for clusters A-F at increasing machine-removal levels.]
[Figure 6: Percent model error for task wait time by task type for clusters A-F at increasing machine-removal levels.]

The average performance for machine resource utilization and task wait time across all clusters is summarized in Figure 7. The model error for machine resource utilization is uniformly low under all utilization levels. On the other hand, despite the large variation in the results, the model errors for task wait time follow decreasing trends for some task types and increasing trends for others. As the task type with the increasing trend typically has the largest population in the workload, it is reasonable to say that the model error for task wait time tends to increase with machine resource utilization. Overall, these observations suggest that the mean usage model performs well at reproducing the performance of the real workload in terms of task wait time and resource utilization.

[Figure 7: Summary of the percent model error of the performance metrics for the mean usage model.]
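The percent model error reported in Figures 5-7 can be read as the relative difference between a metric measured on the treated trace and the same metric on the unmodified trace. The paper does not spell out the exact definition, so the sketch below is one natural reading:

```python
# Sketch: percent model error of a benchmark metric, e.g. machine
# resource utilization or average task wait time, relative to the
# value measured on the unmodified historical trace.
def percent_error(treated_value, original_value):
    return 100.0 * abs(treated_value - original_value) / original_value

# Example: treated run reports 62% utilization vs. 60% originally.
print(percent_error(0.62, 0.60))  # about 3.3% model error
```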

4. DISCUSSION

The experimental results described in Section 3 suggest that the mean usage model performs well at reproducing the average task wait time and machine resource utilization. It is intuitive to explain why machine resource utilization is reproduced well, as most tasks have low resource usage variability for all resource types. However, it is the fact that the mean usage model also accurately reproduces task wait time that makes the result surprising. It should also be pointed out that we may occasionally still see large errors for task wait time. Hence, this section is dedicated to analyzing the model errors for both task wait time and machine resource utilization.

To start our analysis, note that in addition to modifying the task shapes in the treatment process, we also used a stress generator to introduce additional load in order to make task wait times more apparent. The stress generator increases the utilization of the cluster by randomly removing a percentage of machines from the cluster. To understand the impact of resource usage variability on the model errors for both task wait time and machine resource utilization, we must first determine the impact of utilization on the model errors.

From the discussion in Section 3, we know that the average task wait time increases with resource utilization, a trend driven by the most numerous task type. For the model error of machine cluster utilization, our hypothesis was that it should decrease with the utilization level of the cluster, as higher utilization implies less room for model errors. Furthermore, when there are many tasks waiting to be scheduled, the scheduler will try to bin-pack tasks onto physical machines as tightly as possible, further reducing the model error. To validate this hypothesis, we plot the model errors of the performance metrics against utilization for all clusters in Figure 8. Even though there seems to be a trend that the model errors for machine cluster utilization decrease with the utilization level, the trend is not significant, as the noise in the percent error can be of equal magnitude. This is mainly because the model errors for machine cluster utilization are small to begin with (a few percent).

[Figure 8: Percent model error of the performance metrics vs. machine resource utilization of the bottleneck resource.]

For task wait time, queueing theory tells us that the average task wait time E(w_i) grows hyperbolically with resource utilization util, i.e., E(w_i) is proportional to 1/(1 - util), for every compute cluster i [7]. Specifically, as util approaches 1, E(w_i) grows towards infinity. To see this, we plot E(w_i) against 1/(1 - util) for every cluster i in Figure 9(a). The diagram clearly indicates this relationship, as the points for each compute cluster roughly lie on a single line. We also plotted the average difference in task wait time E(Δw) against 1/(1 - util) in Figure 9(b).
It turns out that the points for each compute cluster again roughly lie on a single line in Figure 9(b). Denote by a_i and b_i the slopes of the lines for cluster i in Figures 9(a) and 9(b), respectively. Our hypothesis is that higher task resource variability causes a higher growth rate of the difference in task wait time, as differences in scheduling decisions at higher utilization levels have a larger impact on task wait time. To validate this hypothesis, we plotted the ratio of the two slopes for each cluster (i.e., b_i/a_i) against the average CV of the bottleneck resource type (i.e., the resource type with the highest utilization, as it generally has the largest impact on task schedulability) in Figure 10(a). The average CV is weighted by task duration, as long-running tasks have a higher impact on the model error than short-running tasks. As Figure 10(a) shows, there is a direct relationship between these two quantities.
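The straight lines in Figures 9(a) and 9(b) can be recovered by an ordinary least-squares fit of wait time against x = 1/(1 - util); the slope ratio b_i/a_i then falls out directly, as analyzed in the next paragraph. The sketch below uses made-up data points purely for illustration:

```python
# Sketch: fit w = intercept + slope * x with x = 1/(1 - util),
# then form the slope ratio b_i / a_i for one cluster.
def fit_line(xs, ys):
    """Least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

utils = [0.3, 0.5, 0.7, 0.9]
xs = [1.0 / (1.0 - u) for u in utils]
wait = [4.0, 5.9, 9.8, 22.1]      # E(w) per load level (illustrative)
dwait = [0.20, 0.31, 0.52, 1.10]  # E(dw), wait-time difference (illustrative)
_, a_i = fit_line(xs, wait)
_, b_i = fit_line(xs, dwait)
print(b_i / a_i)  # limiting relative model error as util -> 1
```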

[Figure 9: Total task wait time (a) and difference in task wait time (b) vs. 1/(1 - util) of the bottleneck resource.]

Another way to interpret this relationship is as follows: we can model E(w_i) = α_i + a_i/(1 - util) and E(Δw_i) = β_i + b_i/(1 - util), where a_i and b_i are the slopes from Figures 9(a) and 9(b). The relative model error can then be expressed as

Err = E(Δw_i)/E(w_i) = ((1 - util)·β_i + b_i) / ((1 - util)·α_i + a_i),

which approaches b_i/a_i, the ratio of the two slopes, as util approaches 1. Intuitively, this result means that task usage variability does cause a difference in task wait time, but the difference is not significant, considering that the wait time of most tasks also grows at a rapid rate.

For machine resource utilization, unlike the case of task wait time, the average model error tends to be quite small (a few percent), and the impact of utilization on the model error is also quite small. In this case, we can simply average the model errors over all utilization levels and plot them against the CVs of each cluster. Notice that since the utilization is the sum of resource usage, the CV we use should be the CV of the sum of the total usage (CV_sum). To estimate this value, we assume the resource usage of each task follows a normal distribution; CV_sum can then be estimated by summing up the variance (i.e., (mean_t · CV_t)^2) of each task t weighted by its duration d_t, divided by the simulation interval, using the fact that for independent normal variables the sum of the variances is the variance of the sum (a sketch of this computation appears at the end of this section).

The results are shown in Figures 10(b), (c), and (d). There is a correlation between the resource variability and the observed model error for memory and disk. On the other hand, the correlation for CPU seems less accurate. The reason is that task CPU usage generally has much higher variability than memory and disk usage, so the benchmark is more conservative in computing CPU utilization to account for potential future variability in usage. This leads to the inaccuracy observed in Figure 10(b).

[Figure 10: Correlating model error in performance metrics with variability in task usage shapes: (a) b_i/a_i vs. average weighted task CV of the bottleneck resource; (b)-(d) average percent model error for CPU, memory, and disk utilization vs. estimated CV of total usage.]

Overall, our analysis shows that although ignoring run-time task usage variability does introduce inaccuracies relative to real historical traces, the difference is small in all cases. Hence, we believe the mean usage model is sufficiently accurate for reproducing the performance of real workloads.
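One plausible reading of the CV_sum estimate above, under the stated normality and independence assumptions, is sketched below; the duration weighting is our interpretation, since the formula is not given explicitly:

```python
# Sketch: estimate the CV of total (summed) resource usage.
# Each task t contributes variance (mean_t * CV_t)^2, weighted by its
# duration d_t relative to the simulation interval T; for independent
# (roughly normal) tasks, the variance of the sum is the sum of variances.
import math

def estimate_cv_sum(tasks, interval):
    """tasks: iterable of (mean_usage, cv, duration) tuples."""
    var_sum = sum((m * cv) ** 2 * (d / interval) for m, cv, d in tasks)
    mean_sum = sum(m * (d / interval) for m, cv, d in tasks)
    return math.sqrt(var_sum) / mean_sum if mean_sum > 0 else 0.0

# Illustrative values only.
print(estimate_cv_sum([(2.0, 0.1, 600), (0.5, 0.8, 300)], interval=600))
```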
5. RELATED WORK

There is a long history of research on workload characterization. Specifically, there has been work on characterizing workloads in various application domains, such as the Web, multimedia, distributed file systems, databases, and scientific computing. Furthermore, different aspects of workload characterization, including arrival patterns, resource requirements, and network traffic, have also been studied. However, the focus of existing work has been on revealing workload characteristics rather than on evaluating the quality of the workload characterization. In contrast, our work focuses on studying the quality of characterizations using performance benchmarks.

Our work is directly related to our previous work on task shape classification [9]. The goal in [9] is to construct a task classification model that divides the workload into distinct classes using the k-means clustering algorithm. The features used by the clustering algorithm are the mean CPU usage, the mean memory usage, and the task execution time. The accuracy of the model is evaluated by computing the intra- and inter-cluster similarity in terms of the standard deviation from the mean values of each cluster. However, it is unclear whether the task classification criteria are sufficient for generating synthetic workloads that can reproduce the performance characteristics of real workloads. More recently, Chen et al. analyzed the publicly available traces from Google's clouds and performed k-means clustering on jobs using a variety of features. They also used correlation scores to infer relationships between job types and job clusters. This differs from our work, which focuses on task shape characterization.

6. CONCLUSIONS

In this paper we studied the problem of deriving characterization models for task usage shapes in Google's compute cloud. Our goal is to construct workload models that accurately reproduce the performance characteristics of real workloads. To our surprise, we found that simply capturing the mean usage of each task (i.e., the mean usage model) is sufficient for generating synthetic workloads that produce low model error for both resource utilization and task wait time. The direct implication of our work is that we can realistically estimate the total wait time and resource utilization for existing or hypothetical workloads (e.g., a workload scaled up by some factor) using synthetic workloads generated from the distribution of task mean usages. Our future work includes using compute cluster benchmarks to find effective clustering algorithms that will produce simpler task shape characterization models with performance similar to that of the mean usage model.
