
Computers & Electrical Engineering

Volume 47, October 2015, Pages 222-240

Novel energy and SLA efficient resource management heuristics for consolidation of virtual machines in cloud data centers

https://doi.org/10.1016/j.compeleceng.2015.05.006

Highlights

  • A multi-criteria resource allocation policy is proposed.

  • A multi-criteria policy for the determination of underloaded PMs is proposed.

  • A novel holistic resource management procedure is proposed.

  • The results show up to 45% reductions in energy consumption.

  • The results show up to 99% reductions in SLA violation.

Abstract

The proliferation of IT services provided by the cloud service delivery model, together with a diverse range of cloud users, has led to the establishment of huge, energy-hungry data centers all around the world. Cloud providers are therefore under great pressure to reduce both their energy consumption and their CO2 emissions. In this direction, consolidation has been proposed as an effective method of saving energy in cloud data centers. This paper proposes a new holistic cloud resource management procedure as well as novel heuristics, based on a multi-criteria decision making method, for both the determination of underloaded hosts and the placement of migrating VMs. Simulation results obtained with the CloudSim simulator validate the applicability of the proposed policies, which achieve up to 46%, 99%, and 95% reductions in energy consumption, SLA violation, and number of VM migrations, respectively, in comparison with the state of the art.

Introduction

Cloud computing has recently come into focus in both the academic and industrial communities due to increasingly pervasive applications and the economies of scale that cloud computing provides [1], [2]. Cloud computing is an operational management model that brings together several modern technologies to dynamically provide extensive services to the full range of cloud users. The research and development community has quickly reached consensus on the core concepts of cloud computing, such as on-demand computing, elastic scaling, elimination of up-front capital and operational expenses, and a pay-as-you-go business model for information technology services [3]. The three main types of cloud computing services are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [4]. Users can easily avail themselves of these services on a pay-as-you-use basis without any geographical restrictions [5].

As a direct result of cloud computing’s increasing popularity, cloud service providers such as Amazon, Google, IBM, and Microsoft have begun to establish growing numbers of energy-hungry data centers to satisfy customers’ growing resource demands (e.g., computational and storage resources) [6]. The continuous increase in the energy consumption of such huge data centers raises great concern among both governments and service providers about using energy more effectively. Apart from the overwhelming operating costs and the total cost of acquisition (TCA) caused by high energy consumption, another rising concern is the environmental impact in terms of carbon dioxide (CO2) emissions [7]. The main portion of energy waste in cloud data centers lies in their hardware infrastructure, including servers, storage, and network devices. Since hardware devices draw a substantial fraction of their peak power even when idle, failing to utilize them fully leads to enormous energy wastage. Forrester Research states that servers use nearly 30% of their peak power consumption while sitting idle 70% of the time [8]. Thus, the root cause of energy waste in data center infrastructure is underutilization. Clouds provide scalability through virtualization and host applications whose load peaks only at certain times [9].

Server consolidation using virtualization is an effective approach to achieving better energy efficiency in cloud data centers [1], [10], [11]. The reason is that at times of low load, VMs are consolidated onto a limited subset of the available physical resources, so that the remaining (idle) computing nodes can be switched to low-power modes or turned off [6]. Virtualization is an important feature of cloud computing that allows multiple VMs to run on a single physical machine and supports migration of VMs between machines [12]. Due to the heterogeneity of cloud resources, and the fact that cloud users may have sporadic and dynamic resource consumption, the cloud environment is highly dynamic. On the other hand, considering various goals that sometimes contradict each other makes resource management in cloud data centers a challenging problem that requires tuning trade-offs between targets. The cloud infrastructure controller has to guarantee pre-established contracts despite all the dynamism of workload changes, while also utilizing resources efficiently and reducing resource wastage [13]. The basic online consolidation problem in cloud data centers is divided into four parts [10]: (1) determining when a host is considered overloaded; (2) determining when a host is considered underloaded; (3) selecting the VMs that should be migrated from an overloaded host; and (4) finding a new placement for the VMs selected for migration from the overloaded and underloaded hosts. This paper focuses on the second and fourth phases and proposes novel heuristics for them.
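
The four-phase loop can be sketched in a few lines. This is a minimal illustration only: the `Host`/`VM` classes, the static CPU thresholds, largest-VM-first selection, and first-fit placement are all our assumptions for the sketch, not the heuristics proposed in the paper.

```python
class VM:
    def __init__(self, name, cpu):
        self.name, self.cpu = name, cpu

class Host:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.vms = name, capacity, []

    def utilization(self):
        return sum(v.cpu for v in self.vms) / self.capacity

def consolidate(hosts, upper=0.8, lower=0.2):
    """One pass of the four-phase loop; returns {vm: destination host}."""
    to_migrate = []

    # Phases 1 + 3: offload overloaded hosts, largest VM first, until
    # each drops back below the upper threshold.
    for h in [h for h in hosts if h.utilization() > upper]:
        for vm in sorted(h.vms, key=lambda v: v.cpu, reverse=True):
            if h.utilization() <= upper:
                break
            h.vms.remove(vm)
            to_migrate.append(vm)

    # Phase 2: every VM on an underloaded host becomes a migration
    # candidate, so the host can be switched off.
    for h in [h for h in hosts if h.vms and h.utilization() < lower]:
        to_migrate += h.vms
        h.vms = []

    # Phase 4: place migrating VMs on active hosts (first fit here;
    # the paper's TPSA policy would rank candidates instead).
    placement = {}
    for vm in to_migrate:
        dest = next(h for h in hosts if h.vms
                    and sum(v.cpu for v in h.vms) + vm.cpu <= upper * h.capacity)
        dest.vms.append(vm)
        placement[vm] = dest
    return placement
```

Note that phase 4 deliberately skips empty hosts, so an evacuated underloaded machine is not repopulated in the same pass.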

According to [14], solving the resource allocation problem with a vector packing algorithm is the best approach for static workloads. However, the fact that workloads in cloud environments are dynamic weakens this conclusion for clouds. Moreover, the vector packing problem is NP-hard [10], so heuristic algorithms such as Best Fit Decreasing (BFD) have been developed to solve it. BFD is shown to use no more than (11/9)·OPT + 1 bins, where OPT is the number of bins in the optimal solution [15]. For instance, [16] and [10] model the resource allocation problem as a bin packing problem with variable bin sizes and prices and solve it by applying the Modified Best Fit Decreasing (MBFD) algorithm and the Power Aware Best Fit Decreasing (PABFD) algorithm, respectively. The major drawback of current approaches to resource allocation in virtualized cloud data centers is that they consider only a single target, such as power consumption, at the core of their solutions. In contrast, this paper proposes multi-criteria algorithms based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [17] for both the resource allocation and underloaded-PM determination phases. The proposed policies simultaneously optimize energy consumption, SLA violation, and the number of VM migrations. Another major contribution of this paper is a novel procedure for the whole process of resource management in virtualized cloud data centers. The main idea behind this policy is to solve the resource allocation problem for the VMs selected for migration from either overloaded or underloaded PMs in one step rather than in separate steps for each. By doing so, a holistic view of resource allocation can be applied to the aggregated VMs, and consequently a more precise solution can be found.
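
TOPSIS [17], the ranking core behind the proposed multi-criteria policies, scores each alternative by its relative closeness to an ideal solution. The following is a sketch of the standard technique (vector normalization, weighted ideal best/worst points, closeness ratio); the choice of criteria, weights, and cost/benefit directions is left to the caller and is not taken from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives; matrix[i][j] scores alternative i on criterion j.
    benefit[j] is True if larger values of criterion j are better."""
    n_crit = len(weights)
    # 1. Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0
             for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)]
         for row in matrix]
    # 2. Ideal best/worst per criterion (direction depends on benefit[j]).
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    # 3. Relative closeness to the ideal solution, in [0, 1]; higher is better.
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / ((d_best + d_worst) or 1.0))
    return scores
```

In a TPSA-style use, each row would be a candidate PM scored on energy, SLA, and migration criteria, and the highest-scoring PM would receive the VM.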

The main contributions of this paper are:

  • Proposing a novel multi-criteria resource allocation method, namely the TOPSIS Power and SLA Aware Allocation (TPSA) policy, that simultaneously optimizes energy consumption, the number of VM migrations, and SLA violations.

  • Proposing novel multi-criteria methods for the determination of underloaded PMs, namely the Available Capacity (AC), Migration Delay (MDL), and TOPSIS-Available Capacity-Number of VMs-Migration Delay (TACND) policies.

  • Proposing the Enhanced Optimization (EO) policy for the whole process of resource management in cloud data centers, which aggregates the resource allocation phases for the VMs selected for migration from either overloaded or underloaded PMs into one single phase.

This paper begins by reviewing related work in Section 2. Section 3 presents the system model, including the data center model and the metrics used to evaluate the efficiency of the proposed policies. Section 4 presents our proposed EO policy for the whole process of resource management. Sections 5 and 6 present our proposed resource allocation policy and the policies proposed for the determination of underloaded PMs, respectively. Section 7 assesses the applicability of our proposed solutions using the CloudSim simulator. Finally, concluding remarks and future directions are presented in Section 8.


Motivation and related work

As stated in [2], resource management in cloud computing spans a wide area of research, including resource provisioning, resource allocation, resource adaptation, resource mapping, resource modeling, resource estimation, and resource brokering.

The authors in [18] were the first to investigate power management techniques in the context of large-scale virtualized systems. In addition to hardware scaling and VM consolidation, they proposed a new power management method

Target system model

The target system model consists of data centers with heterogeneous resources that host various users with different applications, who run multiple heterogeneous VMs on data center nodes, resulting in a dynamic mixed workload on each PM. VMs and PMs are characterized by parameters including CPU computation power, defined in Millions of Instructions Per Second (MIPS), RAM, disk capacity, and network bandwidth. The target system model is depicted in Fig. 1, which is a modified version of the model
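
The resource parameters listed above (MIPS, RAM, disk, bandwidth) suggest a simple multi-dimensional encoding. The sketch below is our own illustration of such a model; the class and field names, and the per-dimension fit check, are assumptions rather than the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Resources:
    mips: float   # CPU capacity/demand in Millions of Instructions Per Second
    ram_mb: int
    disk_gb: int
    bw_mbps: int

@dataclass
class PhysicalMachine:
    capacity: Resources
    vms: list  # each entry is the Resources demand of a hosted VM

    def fits(self, vm: Resources) -> bool:
        """True if vm's demand fits the remaining capacity on every dimension."""
        for dim in ("mips", "ram_mb", "disk_gb", "bw_mbps"):
            used = sum(getattr(v, dim) for v in self.vms)
            if used + getattr(vm, dim) > getattr(self.capacity, dim):
                return False
        return True
```

A placement heuristic would call `fits` to filter candidate PMs before ranking them, since a VM must fit on all four dimensions at once, not just CPU.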

Proposed policy for resource management procedure

This study improves the on-line resource allocation process in two respects. First, it proposes the EO policy as a novel flowchart for the on-line resource management procedure. EO gathers all the VMs to be migrated from either overloaded or underloaded PMs into a single migration list and solves the on-line resource allocation problem at once using novel heuristics, rather than in separate steps. Second, it proposes the TPSA policy as a novel heuristic for off-line and on-line resource
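
The structural difference EO introduces can be sketched abstractly: the migration lists produced for overloaded and underloaded PMs are pooled, and allocation runs once over the pooled list instead of twice. The helper parameters below are placeholders for the paper's detection, selection, and allocation policies (e.g. TPSA for `allocate`).

```python
def eo_step(hosts, detect_overloaded, detect_underloaded, select_vms, allocate):
    """One EO pass: build a single pooled migration list, then allocate once."""
    migration_list = []
    # VMs selected for migration off overloaded PMs.
    for host in detect_overloaded(hosts):
        migration_list += select_vms(host)
    # Underloaded PMs are evacuated entirely.
    for host in detect_underloaded(hosts):
        migration_list += list(host.vms)
    # A single holistic allocation over the pooled list, rather than
    # separate placement passes for each source of migrating VMs.
    return allocate(migration_list, hosts)
```

Plugging in trivial stub policies shows the flow: one VM leaves the overloaded PM, the underloaded PM's VM joins the same list, and both reach the allocator together.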

Proposed resource allocation policies

In this section, we present our proposed policy for the resource allocation problem in cloud data centers.

Proposed policies for determination of underloaded PMs

In this section, we describe our proposed policies for the determination of underloaded PMs: AC, MDL, and TACND.

Performance evaluation

In this section, we present a performance evaluation of the heuristics proposed in this paper. We compare our solutions against recent energy-aware resource allocation studies that are close to ours, namely [10] and [12], as benchmarks. Like our study, they consider the four-phase resource management process introduced in [10].

Concluding remarks and future directions

The development of huge cloud data centers all around the world has led to enormous energy consumption and a steady increase in carbon emissions. This paper has concentrated on consolidation in virtualized cloud data centers as a solution to this problem. It has proposed the EO policy as a novel resource management procedure in cloud data centers. Besides, it has highlighted the central importance of optimizing different targets in cloud data centers at the same time

Ehsan Arianyan received the M.S. degree from Amirkabir University of Technology, Tehran, Iran, in 2010. He is currently working toward the Ph.D. degree with the Department of Electrical Engineering. He is the author of more than 10 peer-reviewed papers as well as 3 books related to cloud computing. His research interests include cloud computing, parallel computing, and decision algorithms.

References (27)

  • S.-Y. Jing et al., "State-of-the-art research study for green cloud computing," J Supercomput (2013).

  • Y. Gao et al., "Service level agreement based energy-efficient resource management in cloud data centers," Comput Electr Eng (2013).

  • M. Poess et al., "Energy cost, the key challenge of today's data centers: a power consumption analysis of TPC-C results," Proc VLDB Endowment (2008).


Hassan Taheri (M’90) received the M.S. and Ph.D. degrees from the University of Manchester, Manchester, U.K., in 1978 and 1988, respectively. He is currently an associate professor with the Department of Electrical Engineering, Amirkabir University of Technology. His research interests include cloud computing, teletraffic engineering, and quality of service in fixed and mobile communication networks.

Saeed Sharifian received the M.S. and Ph.D. degrees from the Amirkabir University of Technology, Tehran, Iran, in 2002 and 2009, respectively. He is now an assistant professor with the Department of Electrical Engineering, Amirkabir University of Technology. His research interests include high-performance web server architecture, parallel computing and programming, sensor networks, as well as performance modeling and evaluation.

Reviews processed and recommended for publication to the Editor-in-Chief by Associate Editor Dr. Danielo Gomes.
