Towards virtual machine scheduling research based on multi-decision AHP method in the cloud computing platform

Virtual machine scheduling and resource allocation during dynamic virtual machine consolidation is a promising approach to alleviating the prominent energy consumption and service level agreement violations of cloud data centers while improving quality of service (QoS). In this article, we propose an efficient algorithm (AESVMP) based on the Analytic Hierarchy Process (AHP) for virtual machine scheduling. Firstly, we take into consideration three key criteria: the host's power consumption, its available resources and its resource allocation balance ratio, where the ratio is computed from the relationship between the host's overall three-dimensional resource (CPU, RAM, BW) flat surface and its resource allocation flat surface (after a newly migrated virtual machine (VM) consumes the target host's resources). Then, the virtual machine placement decision is made by applying the multi-criteria decision-making technique AHP to the above three criteria. Extensive experimental results on the CloudSim emulator using 10 PlanetLab workloads demonstrate that the proposed approach reduces the cloud data center's number of migrations, service level agreement violations (SLAV) and the aggregate energy-consumption indicator (ESV) by an average of 51.76%, 67.4% and 67.6%, respectively, compared with the cutting-edge method LBVMP, which validates its effectiveness.


INTRODUCTION
Cloud computing is a service model that delivers on-demand, elastic resources to users. Users send requests for computing resources such as storage, databases, servers, applications and networks to cloud providers, making it easier, cheaper and faster to access computing resources. With the application of virtualization technology, multiple cloud customers can share physical resources simultaneously, while cloud vendors create a dynamically scalable application, platform and hardware infrastructure for customers (Shu, Wang & Wang, 2014; Panda & Jana, 2019; El Mhouti, Erradi & Nasseh, 2017). As the number of cloud users proliferates and the scale of data centers increases (Bhardwaj et al., 2020; Dayarathna, Wen & Fan, 2016), the energy consumption of cloud data centers grows, continuously increasing their operating costs (Myerson, 2017). This article considers three key criteria, namely the host's power consumption, its available resources and its resource allocation balance ratio, which serve as the decision criteria for evaluating the targeted hosts for VM migration. Eventually, this article leverages the AHP to calculate scientific weights for these decision criteria when seeking the appropriate host. The main contributions of this article are as follows: an AHP-based, resource-balance-aware and energy-optimized virtual machine placement policy (AESVMP) is introduced to tackle the dynamic VMP problem; a standard function for balanced resource allocation is introduced to achieve balanced resource utilization of hosts; and simulations using the real-world PlanetLab workloads on CloudSim demonstrate improvements in energy consumption, the number of VM migrations and service level agreement violations (SLAV) compared with a current state-of-the-art VM scheduling strategy.
The remaining sections of this article are organized as follows. Related work is discussed in "Related Work". "Virtual Machine Placement Policy Based on AHP Resource Balancing Allocation" presents a virtual machine placement policy based on AHP resource balancing allocation. "Experimental Evaluation" evaluates the proposed approach, covering the experimental environment, performance metrics and comparative benchmarks, and illustrates extensive simulation results. "Conclusion" concludes and describes future work.

RELATED WORK
Virtualization is the core technology of cloud environments. How to find the most appropriate target host for a migrated VM with the assistance of this technology is an active research direction. Masdari, Nabavi & Ahmadi (2016) and Talebian et al. (2020) exhaustively survey the development of virtual machine placement. Table 1 compares the relevant features of related works. Beloglazov & Buyya (2012) propose a power-aware best fit decreasing (PABFD) algorithm for VM placement. The PABFD algorithm sorts the to-be-migrated VMs in descending order of CPU resource utilization, then sequentially seeks destination hosts for the VMs while guaranteeing the minimum increase in energy consumption of the destination hosts after VM placement. Load balancing plays a pivotal role in VM placement: uneven allocation of host resources can result in suboptimal resource utilization, performance deterioration and, consequently, diminished quality of service. Load balancing is considered in several of the following VM placement policies. Wang et al. (2022b) suggest a VM placement strategy called LBVMP, which defines two planes, the available resource plane of the PM (CPU, RAM, BW) and the resource plane required by the VM, and then calculates the distance between the two planes to evaluate a VM allocation solution. Karthikeyan (2023) devises a genetic algorithm to decide the best matching host based on CPU and memory usage. Wei et al. (2023) introduce deep reinforcement learning (DRL)-based strategies to enhance load balancing, aiming to ascertain the optimal mapping between VMs and PMs. Li, Pan & Yu (2022) design a virtual machine placement strategy based on multi-resource co-optimization control.
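As a concrete illustration of the PABFD idea described above, the following is a minimal Python sketch, not the authors' implementation: VMs are sorted by CPU demand in descending order, and each is placed on the host whose power draw increases least under a linear power model. All host and VM fields and wattages here are invented for the example.

```python
def power(host, extra_util=0.0):
    """Linear power model: idle power plus a utilization-proportional part."""
    u = min(1.0, host["util"] + extra_util)
    return host["p_idle"] + (host["p_max"] - host["p_idle"]) * u

def pabfd(vms, hosts):
    """Return a {vm_name: host_name} mapping, best-fit-decreasing by power."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        best, best_diff = None, float("inf")
        for h in hosts:
            extra = vm["cpu"] / h["mips"]          # utilization the VM adds
            if h["util"] + extra > 1.0:            # host cannot fit the VM
                continue
            diff = power(h, extra) - power(h)      # power increase if placed
            if diff < best_diff:
                best, best_diff = h, diff
        if best is not None:
            best["util"] += vm["cpu"] / best["mips"]
            placement[vm["name"]] = best["name"]
    return placement

# Illustrative two-host, two-VM scenario:
hosts = [
    {"name": "h1", "mips": 1860, "util": 0.5, "p_idle": 86, "p_max": 117},
    {"name": "h2", "mips": 2660, "util": 0.2, "p_idle": 93, "p_max": 135},
]
vms = [{"name": "v1", "cpu": 500}, {"name": "v2", "cpu": 250}]
print(pabfd(vms, hosts))
```

Note that each placement commits the chosen host's utilization before the next VM is considered, so later VMs see the updated load.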

Method: Strengths / Weaknesses

LBVMP (Wang et al., 2022b): Considers the ratio of the PM's available resources (CPU, RAM, BW) to the VM's requested resources as a reference standard / Ignores other criteria such as power consumption and the weighting of individual indicators.
PABFD (Beloglazov & Buyya, 2012): Selects the host with the least increase in power consumption after placement / Ignores resource contention and resource balance.
HPSOLF-FPO (Mejahed & Elshrkawey, 2022): Multi-objective decision making to optimize power consumption and resource utilization / Ignores resource balance.
SAI-GA (Karthikeyan, 2023): Selects the best host based on CPU and memory usage using an adaptive genetic algorithm / Ignores bandwidth cost and resource balance.

Aghasi et al. (2023) provide a decentralized Q-learning based VM placement strategy that optimizes energy consumption and keeps the host temperature as low as possible while satisfying service level agreements (SLAs). Omer et al. (2021) propose a VM placement strategy that considers both energy consumption and traffic priority: for critical applications it selects an energy-saving PM, and for normal applications a PM with sufficient resources. The suggestion reduces both energy consumption and resource wastage.
The application load submitted by users to the data center varies dynamically, so the resource utilization of hosts in the data center also fluctuates over time (Hieu, Francesco & Ylä-Jääski, 2020). Prediction methodologies can forecast the future workload conditions of hosts and VMs. Zhuo et al. (2014) propose a VM dynamic predictive scheduling algorithm (VM-DFS) that selects a PM meeting the predicted memory requirements; the number of active hosts is reduced while ensuring that the resource requirements of the VMs are satisfied. Wang et al. (2022a) propose a host state detection algorithm based on a combination of grey and ARIMA models, and additionally a CPU-utilization- and energy-aware VM placement strategy based on the prediction results. Hieu, Francesco & Ylä-Jääski (2020) propose a multi-purpose predictive virtual machine consolidation algorithm (VMCUP-M), which predicts the future utilization of several resources from a host's historical data and applies the multiple predictions to VM migration selection and target host placement, effectively improving the performance of the cloud data center.
The algorithm proposed in this article, based on the AHP, takes three key criteria into account: the power consumption increase, the available resources and the resource allocation balance ratio of the host. In addition to reducing energy consumption, it reduces SLAV, ensuring quality of service (Beloglazov & Buyya, 2017; Gmach et al., 2009) and guaranteed performance for the cloud data center.

VIRTUAL MACHINE PLACEMENT POLICY BASED ON AHP RESOURCE BALANCING ALLOCATION
We first formulate the energy consumption and the overhead of VM migration and briefly introduce the resource allocation balance ratio. Subsequently, we present a VM placement policy computed using the analytic hierarchy process (AHP) based on the resource allocation balance ratio and two other criteria, and illustrate the proposed algorithms at the end. Table 2 lists the symbols used throughout the rest of the article.

Model of energy consumption and overhead of virtual machine migration
In a cloud data center, memory, CPU, cooling systems and other devices all have significant energy demands. The CPU accounts for approximately 61% of the total power consumed in the data center (Mejahed & Elshrkawey, 2022), so the power consumption of the hosts varies with CPU utilization (Wang et al., 2022b). The power consumption of host h_i, P(U_i), is derived from Eq. (1):

P(U_i) = P_i^idle + (P_i^max - P_i^idle) × U_i    (1)

where P_i^max, P_i^idle and U_i denote the maximum power of the host at full CPU utilization, the minimum power of the host in the idle state, and the host's CPU utilization, respectively. The energy consumption of host h_i, E_i, comes from Eq. (2), and the total energy consumption of all active hosts in the cloud data center, E_total, from Eq. (3):

E_i = ∫ P(U_i(t)) dt    (2)

E_total = Σ_{i=1}^{M} E_i    (3)
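The power and energy model of Eqs. (1) to (3) can be sketched as follows, assuming the linear host power model stated above; the sample wattages and utilizations are illustrative, not the paper's Table 6 values.

```python
def host_power(p_idle, p_max, u):
    """Eq. (1): host power draw at CPU utilization u in [0, 1]."""
    return p_idle + (p_max - p_idle) * u

def host_energy(p_idle, p_max, util_samples, dt):
    """Eq. (2): integrate power over time; samples spaced dt hours give Wh."""
    return sum(host_power(p_idle, p_max, u) * dt for u in util_samples)

# Eq. (3): total energy is simply the sum of host_energy over active hosts.
utils = [0.2, 0.5, 0.8]            # CPU utilization sampled hourly
e1 = host_energy(86, 117, utils, 1.0)
print(round(e1, 1))                # energy of one host over three hours, Wh
```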
When the VM migration module is triggered, the average performance degradation of the VMs affected by migration is roughly 10% of their CPU utilization (Wang et al., 2022b). Therefore, the overhead of migrating VM v_j, PF_j^degradation, is defined as follows:

PF_j^degradation = 0.1 × ∫_{t_0}^{t_0 + t_j^mig} U_j(t) dt    (4)

where t_j^mig is calculated by Eq. (5):

t_j^mig = v_j^Ram / h_i^BW    (5)

Here U_j(t) denotes the CPU utilization of VM v_j at time t, v_j^Ram the RAM capacity requested by v_j, and h_i^BW the available BW capacity of host h_i.
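A minimal sketch of Eqs. (4) and (5); the RAM size, bandwidth and utilization samples below are illustrative values, not taken from the paper.

```python
def migration_time(vm_ram_mb, host_bw_mbps):
    """Eq. (5): time to transfer the VM's RAM over the available bandwidth."""
    return vm_ram_mb / host_bw_mbps

def degradation(cpu_util_during_mig, t_mig, dt=1.0):
    """Eq. (4): ~10% of the VM's CPU utilization, integrated over the
    migration window (utilization sampled every dt seconds)."""
    return 0.1 * sum(u * dt for u in cpu_util_during_mig[: int(t_mig / dt)])

t = migration_time(1024, 256)      # 1 GB of RAM over 256 MB/s
print(t, round(degradation([0.6, 0.6, 0.6, 0.6, 0.6], t), 2))
```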

Norpower_i: The normalized power consumption increase of host h_i.
AR_i: The available resources of host h_i.
Balance_ser_i: The resource allocation balance ratio of host h_i.
u_incre_i: The increase in CPU utilization of host h_i after placement.
v_mips_j: The CPU resource requirements of VM v_j.
h_mips_i: The CPU resources of host h_i.
W: The weighting matrix for the three decision-making criteria.

The formulation of balanced resource allocation

Unbalanced resource allocation means that a host has high resource utilization in one or more resource dimensions, so that its available resources are insufficient to allocate to migrated VMs and keep them working, which eventually leads to resource loss on the host. A balanced allocation of host resources can therefore further improve the efficiency of resource allocation in cloud data centers and guarantee their quality of service. CPU utilization accounts for a large proportion of a host's energy consumption, while RAM and BW are closely bound up with service level agreement violations (SLAV). Hence, the utilization of the physical resources (CPU, RAM, BW) reflects (Wang et al., 2022b) the impact on the working performance of VMs to some extent. To evaluate how evenly a target host allocates resources to a migrated VM, CPU, RAM and BW are considered jointly (Ferdaus et al., 2014). In the 3-D resource space shown in Fig. 1, the allocated-resource flat surface of host h_i according to real-time resource utilization, Dhost_allocated_i, is determined by Eq. (6), while the total-resource flat surface of host h_i, Dhost_total_i, is given by Eq. (7).
If the angle between the host's allocated-resource flat surface and the host's overall-resource flat surface is smaller, i.e., the surfaces are closer to parallel, the host's resources are more evenly allocated. Thus, the resource allocation balance ratio, denoted Balance_ser_i, is defined as the cosine value between the flat surfaces Dhost_total_i and Dhost_allocated_i.

The proposed approach

We assume that the data center contains a set of heterogeneous hosts H = {h_1, h_2, ..., h_M} (i ∈ [1, M]) and a set of VMs V = {v_1, v_2, ..., v_N} (j ∈ [1, N]), and each host hosts multiple VMs, as shown intuitively in Fig. 2. In this article, we mainly consider the resource types CPU, RAM and BW. When a user submits a resource request to the cloud provider, the cloud data center provides a real-time service to create the VM instance, which consumes the physical machine's CPU, RAM and BW resources. A host exhibiting high resource utilization can impact the performance of its VMs, because the running VMs compete for host resources to fulfill their variable workload demands. When the VMM manager module is triggered and communicates with the VMP manager, the VMP manager establishes a more appropriate mapping between VMs and hosts by employing the AHP-based resource-balance-aware VMP strategy. The overarching goals are to minimize energy consumption, reduce the number of additional VM migrations and alleviate service level agreement violations within the cloud data center.
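Assuming the balance ratio is the cosine between the normal vectors of the two resource flat surfaces, as described above, it can be sketched as follows. The plane form x/C + y/R + z/B = 1 (with normal (1/C, 1/R, 1/B)) and all capacity numbers are assumptions of this sketch, not the paper's exact Eqs. (6) to (9).

```python
import math

def plane_normal(cpu, ram, bw):
    """Normal vector of the plane x/cpu + y/ram + z/bw = 1."""
    return (1.0 / cpu, 1.0 / ram, 1.0 / bw)

def balance_ratio(total, allocated):
    """Cosine between the two plane normals; values near 1 mean the planes
    are nearly parallel, i.e., allocation is proportional across CPU/RAM/BW."""
    n1, n2 = plane_normal(*total), plane_normal(*allocated)
    dot = sum(a * b for a, b in zip(n1, n2))
    return dot / (math.hypot(*n1) * math.hypot(*n2))

# Allocation exactly proportional to capacity: cosine is 1 (balanced).
print(round(balance_ratio((2000, 4096, 1000), (1000, 2048, 500)), 6))
# CPU-heavy, RAM/BW-light allocation: cosine drops below 1 (unbalanced).
print(round(balance_ratio((2000, 4096, 1000), (1800, 512, 100)), 6))
```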

AHP-based virtual machine placement strategy
The analytic hierarchy process (AHP), one of the multi-attribute decision-making models (Saaty, 1990), decomposes a complicated problem into a number of levels and influential factors, hierarchizes the influential factors and expresses them in data form. It uses mathematical methods to calculate the relative weights of the influences affecting the decision, and ultimately finds the best solution to the problem. The overall process is to first identify the criteria that influence the decision and construct a hierarchical decision tree; subsequently a judgement matrix is developed based on the decision objectives; then the relative weights of each criterion are calculated, and the weight matrix of the criteria is obtained after passing the consistency test. The AHP is used here to solve the VM placement problem: the decision criteria affecting VM placement and their judgement matrices are first identified; then the relative weights of these decision criteria are calculated, and the optimal host is found for the migrated VMs based on the weights. The detailed steps are as follows.
Step 1: determining the decision criteria and the hierarchical decision tree

Firstly, a hierarchical model is constructed. The first (target) layer is dynamic virtual machine placement with energy savings and QoS guarantees, which alleviates energy consumption and SLAV for cloud data centers during VM scheduling. The second (decision) layer comprises three main criteria: power consumption, resource allocation balance ratio and available resources. The third layer represents the available physical hosts (h_i). The decision tree composed of these three layers is shown in Fig. 3. When the VMM communicates with the VMP manager, the VMP manager module executes scheduling with the following three criteria: the increased power consumption (Norpower_i) of host h_i, the resource allocation balance ratio (Balance_ser_i) of host h_i, and the available resources (AR_i) of host h_i.
In the virtual machine placement process, the AHP-based decision criteria are power consumption, available resources and the resource allocation balance ratio. This framework governs the dynamic execution of virtual machine placement (VMP) with the objectives of minimizing energy consumption, reducing the number of VM migrations, and mitigating the SLA violations and performance degradation that additional VM migrations may cause. We use X_ij to represent the mapping relationship between the VM and the host, as defined in Eq. (11): X_ij = 1 if and only if VM v_j is placed on host h_i, and X_ij = 0 otherwise.
Upon placing the migrated VM onto the designated host, both CPU utilization and power consumption rise. The increased CPU utilization and the resulting change in power consumption are denoted u_incre_i and powerDiff_i, respectively. powerDiff_i is computed using Eq. (12). To unify the calculation, powerDiff_i is normalized to Norpower_i using Eq. (13); Norpower_i serves as one of the three main decision criteria.
A host with more available resources provides performance guarantees for virtual machines and also reduces the number of virtual machine migrations. Compared with the energy consumed by RAM and BW resources, CPU resources account for the largest proportion of energy consumption in cloud data centers (61%) (Kusic et al., 2009). Therefore, the amount of available CPU resources (AR_i) is used as one of the three main decision criteria in the VM placement process. AR_i is calculated as in Eq. (14), where h_mips_i and v_mips_j represent the total CPU resources of host h_i and the CPU resource requirements of v_j, respectively.

Step 2: determining the weights of the criteria with respect to the goal

The second step, after determining the criteria, is to form the judgment matrix of the criteria based on the model's objective and determine the priority of the criteria. The construction of a 3 × 3 judgment matrix A from the three criteria above is shown in Table 3, where the element A_pq (p, q ∈ [1, 3]) indicates the importance of the criterion in row p relative to that in column q. This matrix is given as input to Algorithm 1.
Then hierarchical single ranking and consistency tests are performed separately. The weight matrix of the three decision criteria (W) is calculated by Eq. (15), and the standardised matrix (W) by Eq. (16), as shown in Table 4.
A consistency test follows. The judgement matrix A and the weight matrix W are used to calculate the maximum characteristic root λ_max according to Eq. (17). Then C.I. and C.R. are calculated from Eqs. (18) and (19):

C.I. = (λ_max - n) / (n - 1)    (18)

C.R. = C.I. / R.I.    (19)

The outcome is shown in Table 5. The value of C.R. is 0.0051, below the 0.1 threshold, which demonstrates that the criteria weight matrix passes the consistency test. Thus hosts can be selected based on the weight matrix W during VM placement.

Step 3: calculation of host scores based on the criteria

Since the purpose of using the AHP method in this article is to determine the relative weights between the three decision criteria, a hierarchical total ranking is not required.
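Steps 1 and 2 can be sketched as follows. The judgment matrix below is illustrative, not the paper's Table 3, and row geometric means are used as a common approximation of the AHP principal eigenvector; the consistency test then follows Eqs. (17) to (19) with R.I. = 0.58 for a 3 × 3 matrix.

```python
import math

# Illustrative pairwise judgment matrix over the three criteria.
A = [[1.0,   2.0, 3.0],
     [1/2.0, 1.0, 2.0],
     [1/3.0, 1/2.0, 1.0]]

def ahp_weights(A):
    """Normalized row geometric means: a standard eigenvector approximation."""
    gm = [math.prod(row) ** (1 / len(row)) for row in A]
    s = sum(gm)
    return [g / s for g in gm]

def consistency_ratio(A, w, ri=0.58):
    """C.R. = C.I. / R.I.; values below 0.1 pass the consistency test."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # lambda_max, Eq. (17)
    ci = (lam - n) / (n - 1)                        # Eq. (18)
    return ci / ri                                  # Eq. (19)

w = ahp_weights(A)
print([round(x, 3) for x in w], consistency_ratio(A, w) < 0.1)
```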
Ultimately, from the above two steps, the relative weight matrix (W) of the three decision criteria is obtained. The three criterion values of each available host are multiplied by the weight matrix to obtain the host's score (scoreHost) by Eq. (20); the host with the highest score is considered the most suitable host for placement. One strength of the proposed approach is its flexibility (Ahmadi et al., 2022): the calculated relative weights can be altered according to a data center's preferences.
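A minimal sketch of the host scoring in Eq. (20); the weight vector and the per-host criterion values below are illustrative assumptions, not the paper's numbers.

```python
def score_host(criteria, weights):
    """Weighted sum of the (Norpower_i, Balance_ser_i, AR_i)-style criteria."""
    return sum(c * w for c, w in zip(criteria, weights))

weights = (0.54, 0.30, 0.16)             # assumed AHP output, sums to 1
candidates = {
    "h1": (0.8, 0.9, 0.5),               # small power rise, balanced, some AR
    "h2": (0.6, 0.7, 0.9),
}
best = max(candidates, key=lambda h: score_host(candidates[h], weights))
print(best)
```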

AESVMP algorithm
The monitoring procedures of the data center periodically monitor the working status of the servers. A host with high resource utilization suffers from resource contention among the VMs running on it, resulting in performance degradation. Therefore, the management system triggers VM migration according to the AESVMP mechanism proposed in this article, shown in Algorithm 2, to build a new mapping between migrated VMs and hosts in the cloud data center with the goals of saving energy and reducing SLAV and the number of VM migrations. Algorithm 1 calculates the weight matrix of the decision criteria using the AHP method when a VM explores the target host. First, the judgment matrix of the decision criteria (A) and the dimension of the matrix (n) are the inputs of the algorithm. Various variables are defined (lines 2-5). The weights of each decision criterion are calculated from the judgment matrix to obtain the weight matrix (W) (line 7). The normalized weight matrix (W) is obtained through Eq. (15) (line 10). Then λ_max, C.I. and C.R. are calculated and used for the consistency test (lines 12-17). The standard weight matrix (W) is output after the final consistency test is passed (lines 18-23).
Algorithm 2 (AESVMP) illustrates the process of VM placement based on Algorithm 1. First, the inputs to the algorithm are the list of migrated VMs and the list of hosts. Hosts that are overloaded or dormant are excluded (lines 8-13). The next step continues only if the available resources of the host exceed the requested resources of the migrated VM (line 14). Then, the target host with the maximum score according to Eqs. (10)-(20) is found for the migrated VM (lines 15-19). Finally, the resulting mapping between VMs and hosts is returned.
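The control flow of Algorithm 2 as described above can be sketched as follows; the host and VM fields, and the weight vector, are assumptions of this sketch rather than the paper's pseudocode.

```python
def aesvmp(vms_to_migrate, hosts, weights=(0.54, 0.30, 0.16)):
    """For each VM: drop overloaded/dormant hosts, require enough free CPU,
    then map the VM to the candidate host with the highest weighted score."""
    mapping = {}
    for vm in vms_to_migrate:
        candidates = [
            h for h in hosts
            if not h["overloaded"] and not h["dormant"]
            and h["free_cpu"] >= vm["cpu"]
        ]
        if not candidates:
            continue                       # no feasible host for this VM
        best = max(
            candidates,
            key=lambda h: sum(c * w for c, w in zip(h["criteria"], weights)),
        )
        best["free_cpu"] -= vm["cpu"]      # commit the placement
        mapping[vm["name"]] = best["name"]
    return mapping

hosts = [
    {"name": "h1", "overloaded": True,  "dormant": False, "free_cpu": 900,
     "criteria": (0.9, 0.9, 0.9)},
    {"name": "h2", "overloaded": False, "dormant": False, "free_cpu": 600,
     "criteria": (0.6, 0.7, 0.9)},
]
print(aesvmp([{"name": "v1", "cpu": 500}], hosts))
```

Note that the overloaded h1 is filtered out even though it scores higher, mirroring the exclusion step of lines 8-13.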
Time complexity analysis: assuming N migrated VMs and a set of M hosts, the time complexity of performing a descending sort is O(M log M). When VMP is triggered, finding a target host for one VM takes O(M), so the time complexity of Algorithm 2 is O(M log M + NM); in the worst case, when M equals N, the time complexity is O(N²).

EXPERIMENTAL EVALUATION
In this section, we introduce the experimental environment, evaluation metrics and comparison benchmarks to validate the performance of the proposed approach.

Experimental environment
The performance of the proposed approach is validated using the CloudSim emulator (Calheiros et al., 2011). The relationship between energy consumption and CPU utilization of the hosts is shown in Table 6. Four types of Amazon EC2 VMs, whose specific information is given in Table 7, and the PlanetLab project with 10 workloads, whose specific information is given in Table 8, are used in the experiments.

Evaluation metrics
For the experimental results, the following mainstream performance indicators are used to evaluate the proposed algorithm: energy consumption, the number of virtual machine migrations, SLA time per active host (SLATAH), performance degradation due to migration (PDM), service level agreement violation (SLAV) and the aggregate energy-consumption indicator (ESV). These indicators are described below.

1. Energy consumption represents the total energy consumed (Garg, Singh & Goraya, 2018) by all hosts running the simulated workloads in the cloud data center.
2. The number of VM migrations is the total number of VM migrations performed during the experiment. If the data center detects a host in an overloaded or underloaded state, it starts VM migration. Since VM migration affects the performance of VM workloads, the fewer VMs migrated, the better.
3. The working performance of a migrated VM is affected when VM migration is triggered; the resulting performance degradation, denoted PDM, is defined as

PDM = (1/N) × Σ_{j=1}^{N} PF_j^degradation / C_j^demand

where N, PF_j^degradation and C_j^demand denote the number of VMs, the performance degradation of VM v_j, and the total CPU capacity of v_j, respectively.
4. The user submits a request to create a VM instance to the cloud data center and signs a service level agreement with the cloud vendor. As defined in Beloglazov & Buyya (2012), service level agreements refer to the ability of the host and the previously provisioned software environment to meet business quality requirements. SLA time per active host (SLATAH) indicates the percentage of an active host's time during which the host experiences 100% CPU utilization:

SLATAH = (1/M) × Σ_{i=1}^{M} T_i^over / T_i^active

where M, T_i^over and T_i^active denote the number of active hosts, the time host h_i experiences 100% CPU utilization, and the time host h_i is active, respectively.
5. SLAV is an indicator that evaluates host overload and performance degradation by combining SLATAH and PDM:

SLAV = SLATAH × PDM

6. ESV is a metric that combines the total energy consumption (E_total) and service level agreement violations (SLAV):

ESV = E_total × SLAV
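The metric definitions above can be sketched as follows; all per-host and per-VM numbers are illustrative.

```python
def slatah(t_over, t_active):
    """Mean fraction of active time each host spends at 100% CPU."""
    return sum(o / a for o, a in zip(t_over, t_active)) / len(t_over)

def pdm(degradation, capacity):
    """Mean migration-induced degradation relative to each VM's CPU capacity."""
    return sum(d / c for d, c in zip(degradation, capacity)) / len(capacity)

s = slatah([10, 0], [100, 100])            # two hosts: 10% and 0% overload time
p = pdm([50, 25], [1000, 1000])            # two VMs with 1000 MIPS capacity
slav = s * p                               # SLAV = SLATAH x PDM
esv = 1200.0 * slav                        # ESV = E_total x SLAV
print(round(s, 3), round(p, 4), round(slav, 6), round(esv, 3))
```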

Comparison benchmarks
To validate the efficacy of the method proposed in this article, we employed five distinct host state detection methods (THR, IQR, LR, MAD, LRR) and two VM migration techniques (minimum migration time-MMT, maximum correlation-MC) within CloudSim.These were utilized to conduct a comprehensive comparative analysis of the experimental outcomes involving the AESVMP algorithm, the PABFD algorithm (Beloglazov & Buyya, 2012), and the LBVMP algorithm (Wang et al., 2022b).For the sake of comparison, we computed the average results obtained from the five distinct host state detection methods (THR, IQR, LR, MAD, LRR).The safety parameter was set to 1.2 for IQR, LR, LRR, and MAD, while for THR, it was set to 0.8.All the comparative experiments were conducted using CloudSim, with a workload derived from 10 PlanetLab instances.

Experimental results
In this section, the 10 workloads (Park & Pai, 2006; Chun et al., 2003) and the performance metrics mentioned above are used to evaluate the performance of the proposed AESVMP algorithm against the VM placement algorithms discussed above.
Table 9 shows the simulation results comparing the AESVMP algorithm proposed in this article with the state-of-the-art LBVMP algorithm (Wang et al., 2022b) under the same conditions. Compared with the LBVMP algorithm, the AESVMP algorithm outperforms in terms of the number of VM migrations, SLAV and ESV, with average optimizations of 51.76%, 67.4% and 67.6% respectively, but AESVMP performs worse than LBVMP in energy consumption. We can conclude that the approach effectively optimizes the number of VM migrations and the QoS.

Evaluation based on energy consumption
Figure 4 shows the total energy consumption generated by the different methods in combination with the two VM selection algorithms MMT and MC. With the MMT and MC selection methods, the average energy consumption of the AESVMP strategy is reduced by 27.9% and 27.7% compared with the PABFD strategy, respectively. AESVMP takes the criteria Norpower_i and Balance_ser_i into consideration to select hosts with high energy efficiency and underlines resource allocation balance, which reduces energy consumption. However, the AESVMP strategy has slightly higher energy consumption than the LBVMP strategy. This may be because AESVMP, when selecting among hosts under the same conditions, prioritizes hosts with more available resources and a more balanced resource allocation; it focuses on meeting the resource requirements of VMs to improve the quality of service. As a result, the data center employing the AESVMP strategy has a higher number of active hosts than the one using the LBVMP strategy, leading to a slight increase in energy consumption, while LBVMP focuses more on optimizing energy consumption.

Evaluation based on number of migrations
Figure 5 depicts a comparison of the performance metrics (the number of VM migrations) for the PABFD, LBVMP, and AESVMP strategies.When comparing the AESVMP strategy to the PABFD strategy, there is an average reduction of 68.5% and 73.1% when using the VM selection algorithms MMT and MC, respectively.Similarly, when comparing the AESVMP strategy to the LBVMP strategy, there is an average reduction of 57% and 52.7% when using the VM selection algorithms MMT and MC, respectively.The AESVMP strategy considers the available resource criteria of the host and ensures the fulfillment of resource requests from virtual machines.Consequently, this approach proves effective in reducing the number of additional VM migrations.

Evaluation based on PDM
Figure 6 depicts a comparison of the PDM performance metric for the PABFD, LBVMP and AESVMP strategies. When comparing the AESVMP strategy to the PABFD strategy, there is an average reduction of 74.4% and 73.1% when using the VM selection algorithms MMT and MC, respectively. Similarly, when comparing the AESVMP strategy to the LBVMP strategy, there is an average reduction of 52.3% and 53.7% when using the VM selection algorithms MMT and MC, respectively. The reduction in the number of migrations directly contributes to the decline in PDM values, highlighting the effectiveness of the AESVMP strategy in optimizing both the number of migrations and the PDM metric.

Evaluation based on SLATAH

In an intuitive comparison, the AESVMP strategy, in contrast to the PABFD strategy, achieves an average reduction in SLATAH of 73.5% and 67.1% when the VM selection methods are MMT and MC, respectively. Similarly, when comparing the AESVMP strategy to the LBVMP strategy, there is an average reduction of 49.6% and 54.7% when using the VM selection algorithms MMT and MC, respectively. The AESVMP strategy prioritizes the criteria of available resources and resource allocation balance ratio. By selecting hosts with sufficient resource capacity and emphasizing balanced resource allocation, this approach reduces the likelihood of hosts becoming overloaded. Consequently, the SLATAH metric declines notably.

Evaluation based on SLAV
Figure 8 illustrates the SLAV performance metrics for various methods.When comparing the AESVMP strategy to the PABFD strategy, there is an average reduction of 94.2% and 90.3% when using the VM selection algorithms MMT and MC, respectively.Similarly, when comparing the AESVMP strategy to the LBVMP strategy, there is an average reduction of 74.1% and 77.4% when using the VM selection algorithms MMT and MC, respectively.This optimization of SLAV is directly linked to the reduction in both PDM and SLATAH.Thus the AESVMP strategy significantly reduces SLA violations.

CONCLUSION
In this article, we propose an AHP-based, resource-allocation-balance-aware virtual machine placement strategy (AESVMP) that uses dynamic virtual machine consolidation technology to dynamically schedule and allocate virtual machine resources, with the goals of optimizing energy consumption and reducing SLAV, ESV and the number of virtual machine migrations in the cloud data center. Compared with the benchmark method, AESVMP optimizes the cloud data center's energy consumption, SLAV and ESV by 27.8%, 92.25% and 94.25%, respectively. Compared with the state-of-the-art method LBVMP under the same conditions, the proposed mechanism outperforms in terms of the number of VM migrations, SLAV and ESV. Nevertheless, a few limitations need to be addressed in future work. The AESVMP algorithm optimizes energy consumption slightly less than the LBVMP algorithm and needs further optimization in this respect. In the future, we will also test our approach on real cloud platforms (e.g., a video cloud computing platform and OpenStack) to verify the effectiveness of the AESVMP algorithm in real environments.
GMPR (Wang et al., 2023): Further optimizes PM resource utilization and energy consumption by considering resource wastage / Ignores power consumption and resource balance.
VMP-A3C (Wei et al., 2023): Uses deep reinforcement learning to maximise load balancing and minimise energy consumption / Ignores bandwidth cost.
PRUVMS (Garg, Singh & Goraya, 2022): Uses resource utilization and power consumption as criteria for selecting the right PM / Ignores resource balance and SLA violations.
MRAT-MACO (Nikzad, Barzegar & Motameni, 2022): Finds optimal VM placement solutions using an SLA-aware multi-objective ant colony algorithm / Ignores bandwidth cost and resource balance.
CUECC (Wang et al., 2022a): Improves service quality by judging host CPU utilization and power consumption / Ignores resource balance and bandwidth cost.
MEEVMP (Sunil & Patel, 2023): Takes SLA violations, energy usage and power efficiency of PMs into account in VM placement / Ignores bandwidth cost and resource balance.
VM-DFS (Zhuo et al., 2014): Reduces the number of active hosts by predicting memory requirements for the next service cycle / Ignores CPU cost and bandwidth cost.
VMCUP-M (Hieu, Francesco & Ylä-Jääski, 2020): Predicts resource utilization of hosts for the next service cycle, reducing the number of VM migrations / Ignores resource balance and power consumption.
MOPFGA (Liu et al., 2023): Uses heat recirculation around the PM rack as a reference criterion for selecting the host / Ignores bandwidth cost and resource balance.
priority-aware (Omer et al., 2021): Further optimizes energy consumption and resource utilization by selecting the right PM through traffic priority and power consumption / Ignores resource balance.
VMDPA (Chang et al., 2022): Chooses a host with faster data transfer speeds and lower bandwidth costs / Ignores energy consumption and resource balance.
KCS (Mukhija & Sachdeva, 2023): Integrates bio-inspired cuckoo search with the unsupervised K-clustering machine learning algorithm / Ignores bandwidth cost and resource balance.
Q-learning (Aghasi et al., 2023): Uses a decentralized Q-learning approach to accomplish energy-efficient and thermal-aware placement of virtual machines.

The more nearly parallel the flat surfaces Dhost_total_i and Dhost_allocated_i are, the more balanced the host's resource allocation. We now give the computation of the normal vectors of the flat surfaces Dhost_total_i and Dhost_allocated_i, defined in Eqs. (8) and (9). Assuming Normal_total_i and Normal_allocated_i denote the normal vectors of Dhost_total_i and Dhost_allocated_i respectively, the value Balance_ser_i is calculated as their cosine:

Balance_ser_i = (Normal_total_i · Normal_allocated_i) / (|Normal_total_i| × |Normal_allocated_i|)    (10)

Figure 1
Figure 1. The allocated resource flat surface Dhost_allocated when a VM is placed on the host, and the host's total resource flat surface Dhost_total. Full-size DOI: 10.7717/peerj-cs.1675/fig-1

Figure 7
Figure 7 demonstrates variations in the SLATAH performance metric using different methods.

Figure 9
Figure 9 illustrates the performance metric ESV in relation to the different methods. Intuitively, compared with the PABFD strategy, the AESVMP strategy reduces ESV by an average of 95.7% and 92.8% when the VM selection methods are MMT and MC, respectively. Similarly, compared with the LBVMP strategy, the AESVMP strategy reduces ESV by an average of 69.7% and 73.2% with the MMT and MC selection methods. This substantial reduction in ESV is closely tied to energy consumption and SLAV.

Table 1
Summary of related works techniques.

Table 2
List of notations used in this article.

Table 4
Criteria weight.
Gu et al. (2023), PeerJ Comput. Sci., DOI 10.7717/peerj-cs.1675

In CloudSim, 800 heterogeneous hosts of two types are simulated: HP ProLiant ML110 G4 (Intel Xeon 3040) and HP ProLiant ML110 G5 (Intel Xeon 3075). The two types have the same number of CPU cores, RAM, BW and storage, but differ in CPU capacity, with 1860 MIPS and 2660 MIPS respectively.

Table 6
Power consumption of the servers at different load levels (in Watts).