Future Generation Computer Systems

Volume 86, September 2018, Pages 836-850
Minimizing SLA violation and power consumption in Cloud data centers using adaptive energy-aware algorithms

https://doi.org/10.1016/j.future.2017.07.048

Highlights

  • We address the problem of reducing the high energy consumption of Cloud datacenters with minimal Service Level Agreement (SLA) violation.

  • We propose two novel adaptive energy-aware algorithms for maximizing energy efficiency and minimizing SLA violation rate in Cloud datacenters.

  • The proposed energy-aware algorithms take into account the application types as well as the CPU and memory resources during the deployment of VMs.

  • We performed extensive experimental analysis using real-world workload.

Abstract

In this paper, we address the problem of reducing the high energy consumption of Cloud datacenters with minimal Service Level Agreement (SLA) violation. Although there are many energy-aware resource management solutions for Cloud datacenters, existing approaches focus on minimizing energy consumption while ignoring SLA violation at the time of virtual machine (VM) deployment. Also, they do not consider the types of applications running in the VMs and thus may not effectively reduce energy consumption with minimal SLA violation under a variety of workloads. In this paper, we propose two novel adaptive energy-aware algorithms for maximizing energy efficiency and minimizing the SLA violation rate in Cloud datacenters. Unlike existing approaches, the proposed energy-aware algorithms take into account the application types as well as the CPU and memory resources during the deployment of VMs. To study the efficacy of the proposed approaches, we performed extensive experimental analysis using a real-world workload collected from more than a thousand PlanetLab VMs. The experimental results show that, compared with existing energy-saving techniques, the proposed approaches can effectively decrease energy consumption in Cloud datacenters while maintaining low SLA violation.

Introduction

Cloud computing [1], [2] has fundamentally transformed the way IT infrastructure is delivered to meet the needs of businesses and consumers. Cloud computing delivery models are generally classified into software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) [3]. By offering an on-demand IT infrastructure provisioning model, Cloud computing enables organizations to automatically scale their IT resource usage up and down based on current and future needs. It also enables great improvements in business or mission capabilities without a corresponding increase in resource (time, people, or money) requirements. Moreover, through its pay-as-you-go service model, it eliminates high initial acquisition costs, maintenance costs, and software licensing costs.

Although Cloud computing enables organizations to realize great benefits by minimizing operational and administrative costs, it suffers from high energy consumption, which could negate these benefits [4]. For example, an average datacentre consumes as much energy as 25,000 households [5]. Such high energy consumption leads to increased Operational Cost (OC) and consequently reduces the Return on Investment (ROI). Apart from the high OC and diminished ROI, it also results in substantial carbon dioxide (CO2) emissions, which contribute to global warming. Although advances in physical infrastructure have partly addressed this issue, effective resource management is vital to further decreasing the energy consumption of datacentres.

An important question is how to minimize datacentre energy consumption while ensuring the Quality of Service (QoS) delivered by the Cloud system. QoS is an important factor in Cloud environments and can be defined in the form of an SLA (Service Level Agreement) [6], [7]. The need to make datacentres efficient not only with regard to performance but also in energy and emissions reduction has motivated a flurry of recent research. Although remarkable improvements in hardware infrastructure have enabled techniques that reduce energy consumption, there is still considerable room for improvement. For instance, hosts in datacentres operate at only 10%–50% utilization most of the time [8]. As this low utilization results in a huge amount of wasted energy, improving host utilization in datacentres can help decrease energy consumption. However, naively increasing host utilization can degrade the QoS delivered by the system. One way to effectively improve host utilization in Cloud datacentres is dynamic consolidation of VMs [9], [10], [11]. Dynamic VM consolidation periodically reallocates VMs away from overloaded hosts via VM migration and minimizes the number of active hosts by switching idle hosts to a low-power mode. Although dynamic VM consolidation has been shown to be an NP-hard problem [12], [13], it has proven effective in minimizing energy consumption [6].

Dynamic VM consolidation generally involves detecting overloaded and underloaded hosts in the datacenter (host overload detection), choosing the VMs to reallocate from an overloaded host (VM selection), and selecting receiving hosts for the VMs marked for relocation (VM deployment) [6]. Existing approaches focus on only one of these sub-components (i.e., the host overload detection, VM selection, or VM deployment algorithm). Moreover, they consider only the CPU in the consolidation process and assume that the other system resources are insignificant, which leads to poor VM allocation decisions. In addition, previous works do not jointly consider energy efficiency (energy consumption and SLA violations) and VM placement. For example, when a VM is reallocated or migrated to another host, existing algorithms only aim to minimize energy consumption, whereas SLA violation should also be considered during VM reallocation and migration. Furthermore, previous approaches do not take into account the types of applications running in the VMs and thus may not sufficiently reduce the energy consumption of the Cloud datacentre or minimize the SLA violation rate under varying workloads.
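The three sub-steps above can be sketched as a single consolidation round. This is an illustrative sketch only, not the authors' algorithm: the `Host`/`VM` structures, the 0.9 overload threshold, and the greedy "largest-CPU VM first" rule are hypothetical placeholders standing in for the paper's own detection, selection, and deployment policies.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float   # CPU demand as a fraction of one host's capacity (0..1)
    mem: float   # memory demand as a fraction of one host's capacity (0..1)

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

    @property
    def cpu_util(self):
        return sum(vm.cpu for vm in self.vms)

def consolidation_step(hosts, overload_threshold=0.9):
    """One consolidation round: detect overloaded hosts, select VMs, redeploy them."""
    migrations = []
    for host in hosts:
        # 1) Host overload detection: keep offloading while above the threshold.
        while host.cpu_util > overload_threshold and host.vms:
            # 2) VM selection: here, simply the most CPU-demanding VM.
            victim = max(host.vms, key=lambda vm: vm.cpu)
            host.vms.remove(victim)
            # 3) VM deployment: first other host that stays under the threshold.
            placed = False
            for target in hosts:
                if target is not host and target.cpu_util + victim.cpu <= overload_threshold:
                    target.vms.append(victim)
                    migrations.append((victim.name, host.name, target.name))
                    placed = True
                    break
            if not placed:
                host.vms.append(victim)  # no capacity anywhere; keep the VM in place
                break
    return migrations
```

Note that this sketch, like the approaches criticized above, considers only CPU; the paper's point is precisely that memory and application type must also enter the selection and deployment decisions.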

In this paper, we propose two novel adaptive energy-aware algorithms for maximizing energy efficiency and minimizing the SLA violation rate in Cloud datacenters. To adapt well to the dynamic and unpredictable workloads commonly running in Cloud datacentres, the proposed algorithms use an adaptive three-threshold framework that classifies Cloud datacentre hosts into four classes (less loaded hosts, little loaded hosts, normally loaded hosts, and overloaded hosts). They also use two VM selection policies for choosing the VMs to migrate from overloaded hosts; these policies consider both CPU and memory during VM selection and deployment decision making. Finally, a VM deployment policy that jointly considers energy efficiency (energy consumption and SLA violations) and VM placement is presented; it too takes both CPU and memory utilization into account during VM deployment. All in all, the main contributions of the paper can be summarized as follows:

  • (1)

    A framework that divides datacentre hosts, according to the workload running on them, into less loaded hosts, little loaded hosts, normally loaded hosts, and overloaded hosts.

  • (2)

    We put forward an adaptive three-threshold framework, which differs from the existing two-threshold frameworks and adapts well to the dynamic and unpredictable workloads commonly running in Cloud datacentres. A new algorithm for host state detection (e.g., detecting overloaded hosts), based on the adaptive three-threshold framework, is presented.

  • (3)

    To handle CPU-intensive and I/O-intensive tasks, we present two methods for selecting the VMs to migrate from overloaded hosts. Both methods consider CPU as well as memory utilization in decision making.

  • (4)

    A new VM deployment policy that maximizes energy efficiency (i.e., jointly accounts for energy consumption and SLA violation) is presented.

  • (5)

    We evaluate the proposed algorithms using a real-world workload collected from more than a thousand PlanetLab VMs hosted on 800 hosts located in more than 500 places across the world.
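The four-class host classification underlying the contributions above can be sketched with three utilization thresholds. This is an illustrative sketch, not the paper's KMI algorithm: the threshold values below are fixed hypothetical placeholders, whereas the paper's framework adapts the three thresholds to the observed workload.

```python
def classify_host(cpu_util, t_low=0.3, t_mid=0.7, t_high=0.9):
    """Map a host's CPU utilization to one of the four load classes
    using three thresholds (values here are placeholders)."""
    if cpu_util < t_low:
        return "less loaded"
    elif cpu_util < t_mid:
        return "little loaded"
    elif cpu_util <= t_high:
        return "normally loaded"
    else:
        return "overloaded"
```

With two thresholds only three classes are possible; the extra threshold is what lets the framework distinguish "less loaded" hosts (candidates for switch-off) from "little loaded" ones.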

The rest of the paper is organized as follows: In Section 2, we present the related work. Adaptive three-threshold VM placement framework is proposed in Section 3. Experimental results and performance analysis are presented in Section 4. Section 5 concludes the paper.

Section snippets

Related work

Prior works on energy consumption management in data centers can be broadly divided into four categories: dynamic performance scaling (DPS) [14], [15], [16], [17], [18], [19], [20], [21], [22], threshold-based heuristics [4], [6], [23], [24], [25], [26], [27], [28], [29], [30], [31], decision-making based on statistical analysis of historical data [32], [33], [34], [35], and other methods [36], [37], [38]. In DPS [14], [15], [16], [17], [18], [19], [20], [21], [22], the system

Adaptive three-threshold VM placement framework

In this section, we present the proposed resource management algorithm along with its components for detecting and relieving overloaded hosts by reallocating some Virtual Machines (VMs), detecting underloaded hosts and performing consolidation, and allocating the VMs selected for relocation to other hosts in the datacenter in a holistic manner. The main notations used throughout the paper and their meanings are listed in Table 1.
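One distinguishing feature of the framework is that VM selection weighs both CPU and memory rather than CPU alone. The sketch below is illustrative only: the exact MRCU/MPCU scoring functions are defined in Section 3 of the paper, and the equal weighting used here is a hypothetical placeholder.

```python
def select_vm(vms, cpu_weight=0.5):
    """Pick the VM with the largest weighted CPU+memory demand, so that
    migrating it relieves the overloaded host fastest. Each VM is a dict
    with 'cpu' and 'mem' utilization fractions (a stand-in structure)."""
    return max(vms, key=lambda vm: cpu_weight * vm["cpu"] + (1 - cpu_weight) * vm["mem"])
```

A CPU-only rule would never distinguish a memory-hungry I/O-bound VM from a lightweight one, which is exactly the failure mode the two-resource policies are meant to avoid.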

Experimental results and performance analysis

In this section, after presenting some basic definitions (including the energy consumption model, SLA violation metrics, and energy efficiency metric), we focus on the experimental results and performance analysis. The experiments comprise three parts: (1) the first part evaluates only the performance of the VPME algorithm (see Section 3.4), which places VMs with the aim of maximizing energy efficiency, whereas existing algorithms place VMs only to minimize energy consumption.
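The contrast between energy-minimizing and energy-efficiency-maximizing placement can be illustrated as follows. This is a sketch under stated assumptions, not the paper's VPME algorithm: the linear power model, the quadratic SLA-risk term, and the weight `alpha` are all hypothetical stand-ins for the energy consumption model and SLA violation metrics defined in the paper.

```python
def power_increase(host_util, vm_util, p_idle=100.0, p_max=250.0):
    """Extra watts from placing the VM; waking an idle host also adds
    its idle power (assumed linear power model)."""
    startup = p_idle if host_util == 0 else 0.0
    return startup + (p_max - p_idle) * vm_util

def sla_risk(host_util, vm_util):
    """Toy SLA-violation risk that grows sharply as a host nears saturation."""
    u = host_util + vm_util
    return max(0.0, u - 0.8) ** 2

def place_min_energy(hosts, vm_util):
    # Energy-only placement (existing approaches): smallest power increase wins,
    # even if it packs the VM onto an almost-saturated host.
    return min(hosts, key=lambda h: power_increase(h["util"], vm_util))

def place_max_efficiency(hosts, vm_util, alpha=5000.0):
    # VPME-style idea: trade power increase off against SLA-violation risk.
    return min(hosts, key=lambda h: power_increase(h["util"], vm_util)
                                    + alpha * sla_risk(h["util"], vm_util))
```

With a nearly full host and an idle one, the energy-only rule packs the new VM onto the busy host (avoiding the idle host's startup power) at the cost of SLA risk, while the efficiency-maximizing rule accepts the extra power to keep the SLA violation low.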

Conclusion

This paper puts forward two energy-aware algorithms (KMI-MRCU-1.0 and KMI-MPCU-2.0) based on the adaptive three-threshold framework (ATF), the KMI algorithm, the VM selection policies (MRCU, MPCU), and the maximum-energy-efficiency VM placement (VPME), according to workload differences. The experimental results show that: (1) regarding energy efficiency, the algorithm that maximizes energy efficiency (VPME) performs better than the algorithm that minimizes energy consumption

Acknowledgments

This work was done while the first author had a visiting position at the School of Information Technology, Deakin University, Australia. This work was supported by the National Natural Science Foundation of China (nos. 61572525, 61373042, and 61404213), and China Scholarship Council. This work was also supported partially by Deakin University and the Deanship of Scientific Research at King Saud University, Riyadh, Saudi Arabia through the research group project No RGP-VPP-318. The help of

Zhou Zhou received the Ph.D. degree from Central South University, Changsha, China, in 2017, majoring in computer science. He is currently a lecturer at Changsha University. He has also accepted a two-year post-doctoral position at Hunan University (2017–2019). His research interests include Cloud computing, energy consumption models, energy-efficient resource management, and virtual machine deployment.

References (46)

  • M.R.V. Kumar et al., Heterogeneity and thermal aware adaptive heuristics for energy efficient consolidation of virtual machines in infrastructure clouds, J. Comput. System Sci. (2016)
  • G. Han et al., An efficient virtual machine consolidation scheme for multimedia cloud computing, Sensors (2016)
  • A. Verma, P. Ahuja, A. Neogi, pMapper: Power and migration cost aware application placement in virtualized systems, in:...
  • G. Jung, M.A. Hiltunen, K.R. Joshi, R.D. Schlichting, C. Pu, Mistral: Dynamically managing power, performance, and...
  • G. Buttazzo, Scalable applications for energy-aware processors
  • H. Hanson, S.W. Keckler, S. Ghiasi, K. Rajamani, F. Rawson, J. Rubio, Thermal response to DVFS: analysis with an Intel...
  • A. Wierman, L.L. Andrew, A. Tang, Power-aware speed scaling in processor sharing systems, in: Proceedings of the 28th...
  • L.L. Andrew, M. Lin, A. Wierman, Optimality, fairness, and robustness in speed scaling designs, in: Proceedings of the...
  • K. Flautner et al., Automatic performance setting for dynamic voltage scaling, Wirel. Netw. (2002)
  • A. Weissel, F. Bellosa, Process cruise control: event-driven clock scaling for dynamic power management, in:...
  • S. Lee, T. Sakurai, Run-time voltage hopping for low-power real-time systems, in: Proceedings of the 37th Annual...
  • J.R. Lorch et al., Improving dynamic voltage scaling algorithms with PACE, ACM SIGMETRICS Perform. Eval. Rev. (2001)
  • R. Buyya, R. Ranjan, R.N. Calheiros, Modeling and simulation of scalable cloud computing environments and the CloudSim...

    Jemal Abawajy (SM’11) is a full professor at the Faculty of Science, Engineering and Built Environment, Deakin University, Australia. Prof. Abawajy has delivered more than 50 keynotes and seminars worldwide and has been involved in the organization of more than 300 international conferences in various capacities, including chair and general co-chair. He has also served on the editorial boards of numerous international journals, including IEEE Transactions on Cloud Computing. Prof. Abawajy is the author/co-author of more than 250 refereed articles and has supervised numerous Ph.D. students to completion.

    Morshed Chowdhury received his Ph.D. degree in Computing from Monash University, Australia in 1999. He is a Senior Lecturer in the School of Information Technology, Deakin University, Burwood, Australia. He has more than 12 years of industry experiences and has published more than one hundred research papers. He has organized a number of international conferences and served as a member of program committees of several international conferences. He has also acted as reviewer of many IEEE and Elsevier journal papers.

    Zhigang Hu received his M.S. and Ph.D. degrees from Central South University in 1988 and 2002, respectively. He is now a professor at the School of Software, Central South University. His research interests include high performance computing, Cloud computing, energy-efficient resource management, and virtual machine deployment.

    Keqin Li is a SUNY distinguished professor of computer science. He is also an Intellectual Ventures endowed visiting chair professor at Tsinghua University, China. His research interests mainly include design and analysis of algorithms, parallel and distributed computing, and computer networking. He has more than 290 refereed research publications. He is currently or has served on the editorial board of IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Computers, IEEE Transactions on Cloud Computing, Journal of Parallel and Distributed Computing, International Journal of Parallel, Emergent and Distributed Systems, International Journal of High Performance Computing and Networking, International Journal of Big Data Intelligence, and Optimization Letters. He is a senior member of the IEEE.

    Hongbing Cheng received his Ph.D. degree in Network and Information Security from Nanjing University of Posts & Telecommunications in 2008. He worked as a research fellow at Nanjing University, China; the University of Stavanger, Norway; and Manchester University, England, from 2010 to 2013. He is a Professor at the College of Computer and Software, Zhejiang University of Technology, Hangzhou, China. He has authored more than 50 refereed journal and conference papers. His research interests include cloud computing, information and network security, big data security, and wireless sensor networks. Dr. Cheng is an Associate Editor of the Journal of Network and Information Security. He has served as Symposium Chair and Session Chair for several international conferences.

    Abdulhameed Alelaiwi is a faculty member of the Software Engineering Department at the College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia. He received his Ph.D. degree in Software Engineering from the College of Engineering, Florida Institute of Technology-Melbourne, USA. He has authored and co-authored many publications, including refereed IEEE/ACM/Springer journal and conference papers, books, and book chapters. His research interests include software testing analysis and design, cloud computing, and multimedia. He is a member of IEEE.

    Fangmin Li received the Ph.D. degree from Zhejiang University, Hangzhou, China, in 2001, majoring in computer science. He is currently a Professor at Changsha University. He has authored several books on embedded systems and over 30 academic papers in wireless networks, and also holds ten Chinese patents. His current research interests include cloud computing, wireless communications and networks, and energy-efficient resource management.
