Task arrival based energy efficient optimization in smart-IoT data center

Abstract: With the growth and expansion of cloud data centers, energy consumption has become an urgent issue for smart city systems. However, most current resource management approaches focus on traditional cloud computing scheduling scenarios and fail to consider the characteristics of workloads from Internet of Things (IoT) devices. In this paper, we analyze the characteristics of IoT requests and propose an improved Poisson task model with a novel mechanism to predict the arrival of IoT requests. To achieve a trade-off between energy saving and service level agreements, we introduce an adaptive energy efficiency model to adjust the priority of the optimization objectives. Finally, an energy-efficient virtual machine scheduling algorithm is proposed to maximize the energy efficiency of the data center. The experimental results show that our strategy achieves the best performance in comparison with other popular schemes.


Introduction
Due to the rapid development of technologies such as cloud computing, the Internet of Things, and artificial intelligence, smart cities have become more than just a new concept in the past few years [1]. According to a BCC Research report [2], the market size of smart cities reached $342.4 billion in 2016 and is expected to reach $774.8 billion in 2021. This rapid growth provides vast opportunities for governments and enterprises [3]. A smart city is a new form of urban development that is deeply integrated with digital technology. It can improve the efficiency of urban services and reduce resource consumption.
Smart cities utilize cloud computing and IoT to enhance urban infrastructure, enabling cities to achieve smart collaboration, resource sharing, interconnection, and comprehensive perception [4]. As shown in Figure 1, a smart city is composed of different smart-IoT networks. From the transportation sector to the medical sector, from smart campuses to smart industry, the concepts and technologies of the smart city have been integrated into all fields of modern society, closely linked with cloud computing platforms. More specifically, Figure 2 shows the data transmission architecture between IoT devices and the data center in a smart city. The edge computing nodes here act like the nerve endings of the smart city brain: they collect data from nearby IoT devices and forward requests that need further processing to the cloud. As the brain center of the smart city, the cloud analyzes and processes the large number of requests that cannot be handled by edge computing. With the increasing scale and complexity of smart cities [5], the sensor networks scattered throughout the city produce exponentially growing data, which makes data centers the backbone of smart city development. Cloud data centers need a customized resource management strategy to balance current demand against this exponential growth. An inefficient data center causes serious energy consumption problems and cannot guarantee quality of service (QoS) (e.g., response time, latency, and throughput) [6]. In general, the QoS requirements of a cloud service can be specified in terms of a service level agreement (SLA) [7,8]. Furthermore, with the growth and expansion of cloud data centers, energy consumption has become an urgent issue for smart city systems [9]. To prevent energy consumption from becoming a bottleneck that restricts the development of smart cities, this problem must be resolved.
Many studies have proposed resource management approaches to optimize the energy consumption and SLA compliance of cloud data centers. Toor et al. [10] presented a scheduling framework for allocating jobs in the data center through dynamic voltage and frequency scaling (DVFS) techniques.
Kulkarni et al. [11] proposed a green power management method to monitor and assign computing resources efficiently. Zhang et al. [12] proposed an energy-aware allocation algorithm using the conventional formulation of the bin-packing problem; the algorithm aims to reduce the number of servers in use and turn off servers that are idle. Aldossary et al. [13] proposed a novel cloud system architecture and an energy-aware cost estimation framework to predict the resource usage and energy consumption of cloud infrastructure services. These studies seek the best scheduling framework to fit all scenarios in a cloud data center. However, traditional cloud computing scheduling mechanisms have significant limitations when dealing with IoT requests: they can no longer efficiently handle the massive requests generated by IoT devices [14]. Furthermore, due to the strict real-time constraints of IoT requests, SLAs in data centers are difficult to guarantee. In this paper, an efficient task arrival model is proposed for real-time prediction of massive IoT requests. How to achieve a trade-off between energy saving and SLAs is another significant concern for energy-efficient scheduling. Zhou et al. [15] defined energy efficiency as the reciprocal of the product of energy consumption and SLA violations; Chang et al. [16] minimized the energy consumption of the data center without exceeding static SLA upper limits (80 and 90%). Malekloo et al. [17] introduced a multi-objective scheduling mechanism to balance the trade-off among energy efficiency, performance, and QoS. In the literature [18], a novel multi-dimensional resource usage model and virtual machine (VM) placement algorithm are proposed to improve resource utilization in a balanced way. Since the generation and arrival of IoT requests exhibit moderate fluctuations, the approaches mentioned above do not consider that the priorities of energy-saving and SLA optimization are not static: SLA optimization is the primary concern of energy efficiency management when the load is high; correspondingly, hosts in the idle state need to be consolidated to save energy when the load is low.
In this paper, we concentrate on analyzing and modelling the arrivals of IoT requests. We take into account the massiveness of IoT requests and the stability of IoT load fluctuations. A novel Poisson task model based on upper and lower bound is proposed to predict the future arrival of IoT requests. In order to balance the trade-off between energy consumption and SLA, we introduce an adaptive energy efficiency scheduling policy and propose an energy-efficient VM placement algorithm.
The main contributions of this paper can be summarized as follows:
1) We propose an improved Poisson arrival model to predict the future arrival rates of IoT requests. We take into account the fluctuations of IoT requests and introduce a novel upper and lower bound mechanism to enhance the applicability of the arrival model.
2) We present the energy consumption model, SLA violation metrics and an adaptive energy efficiency model to analyze and monitor the variations in a cloud data center.
3) We propose an energy-efficient VM scheduling algorithm based on task arrival rates to achieve the trade-off between energy saving and SLA.
The rest of the paper is organized as follows. Section 2 discusses related work on energy-efficient technology analysis. Section 3 introduces a novel task arrival model with upper and lower bound. In Section 4, we propose an adaptive resource scheduling strategy. Experiments and results are given in Section 5. Section 6 concludes the paper with a summary and future research directions.

Related works
With the rapid growth of computing resources and applications, energy saving is an immediate concern of consumers and industry. An increasing number of researchers are studying how to reduce energy consumption and improve the utilization rates of data centers. Energy-efficient optimization can be divided into four categories: dynamic voltage and frequency scaling techniques [10,19] based on energy efficiency models, energy-aware resource allocation heuristics [20][21][22][23][24], cloud workload characterization and prediction [25][26][27][28], and decentralized management methods [29]. A comparison of energy-efficient optimization techniques is shown in Table 1, which indicates the characteristics and differences of these strategies.
In [30], Krzywda et al. define a server power consumption model that relies mainly on the DVFS technique. It implies the existence of a cubic relationship between CPU frequency and power consumption. However, the DVFS technique does not consider the optimization of SLAs, which gives it significant limitations in existing cloud systems. In [31], the authors deploy cloud resources in the data center under client-level SLA requirements to minimize overall power consumption. This research concerns only the energy consumption of the environment; it does not take into account the computational performance provided by the cloud system when energy consumption is at its minimum. The literature [32] uses an Automatic Server Configuration System (ACES) to reduce energy consumption and meet load requirements, thereby improving system energy efficiency. The author of [33] proposes an online algorithm that takes reducing the energy consumption of the data center as its starting point, dynamically adjusting the load of each server and shutting down some servers when the load is low; this form of load balancing reduces overall energy consumption. In [34], an energy efficiency model that is convenient to calculate is proposed, together with the existence and conditions of maximum energy efficiency. In addition, research on measuring and monitoring system energy consumption has enabled the management of energy consumption in the data center; among the relevant results, those of the three research teams represented by Kansal et al. [35] are the most significant. Focusing on the measurement of energy consumption in the cloud computing environment, the literature [36] introduces an energy efficiency model based on the unit energy consumption of the data center to define system performance, providing a robust basis for optimizing energy efficiency.
Buyya et al. [7] find that an idle server's energy consumption is equivalent to 70% of a fully loaded server's. Based on this fact, they redesign the power consumption model of the data center. Since the CPU utilization of a server changes over time, the authors define total energy consumption as an integral function over a time slice. Based on this power consumption model, virtual machines can be dynamically reallocated depending on current CPU utilization, which is more energy-efficient than a static method. Balasubramanian et al. [37] design an online control policy framework to improve resource scheduling efficiency in the cloud environment. The framework optimizes traffic allocation in each time slice and reduces scheduling queue lengths to relieve network congestion. Xu et al. [38] propose an energy-aware virtual machine allocation policy that dynamically schedules VMs to minimize energy consumption. The policy first selects the VMs that need to be migrated and then places each chosen VM on a host using the Modified Best Fit Decreasing (MBFD) algorithm. The authors set lower and upper utilization thresholds and keep the CPU utilization of hosts within the range of the dual thresholds. Experimental results show that the dual-threshold policy helps energy efficiency and avoids service level agreement violations. Hussain et al. [39] define an energy-aware service framework that enables cloud providers to manage the resources and services of the cloud platform efficiently.
In the literature [40], the authors argue that a computing framework combining cloud computing and ambient intelligence has become a key component of future Internet development. This framework enables the cloud environment to adapt to user demands and realizes adaptive dynamic resource management. Prieta et al. [41] describe the characteristics of multiagent systems (such as dynamics, flexibility, and self-learning). A multiagent-based cloud computing model can effectively allocate computing demands among nodes without global coordination of the system. The authors of [42] argue that the variability of service demands in the cloud computing environment makes the decision-making process of resource allocation difficult, so they propose a multiagent system based on a virtual organization (VO) to manage resources in cloud environments automatically. This method has global self-adaptability and can reduce the computational load on nodes during resource management decision-making. Tseng et al. [43] demonstrate the benefits of setting up trusted nodes in scheduling problems through several practical cases, showing that trusted nodes can reduce energy consumption and latency while maintaining the same resilience. In [44,45], the authors analyze the limitations of traditional centralized systems in resource allocation and management and introduce the advantages of blockchain-based resource optimization. These methods use decentralization to reduce the load on a centralized system. Experimental results show that decentralized management can improve the scalability and robustness of the system and effectively cope with failures.
Most existing energy efficiency models consider energy consumption to be as crucial as SLA compliance. These studies all concentrate on seeking the best framework to fit all constraints of a cloud data center in common scenarios. Since the generation and arrival of IoT requests exhibit moderate fluctuations, traditional energy efficiency models are no longer suitable in these scenarios.

Task arrival model
Suppose there is a set of IoT request tasks with different constraints, each of which can be assigned to any given virtual machine. Assume that tasks occur randomly over a period of time and satisfy the following conditions: 1) the time interval can be divided into arbitrarily many small time slices, and the probability of one request arriving is proportional to the length of the time slice; 2) the probability of more than one request arriving in each minimal interval is negligibly small (treated as zero); 3) requests arrive independently in different time slices, i.e., the probabilities of IoT requests arriving in different tiny time slices are independent of each other. Table 2 summarizes the main notations used in the paper.
In a cloud computing platform, within a short time interval Δt, the probability of a new cloud task arriving is assumed to be λΔt, where λ is a fixed value whose size depends on the total number of users on the server. If Δt is small enough, the probability of two or more tasks arriving within Δt is negligible. Suppose the observation period [0, t] is divided into n sub-time slices, each of length t/n, and that the arrival of tasks in a given slice is independent of the arrival of tasks in other slices. When n is large enough, the n slices can be considered to constitute a sequence of independent Bernoulli trials with success probability p = λt/n, so that np = λt. The probability of having k tasks arrive during the period is then approximately:

P{N(t) = k} ≈ C(n, k) (λt/n)^k (1 − λt/n)^(n−k). (2)
Therefore, taking the limit of Eq (2) as n → ∞, and letting μ denote λt (so that p = λt/n and np = μ), the equation transforms into the Poisson probability model of cloud task access requests:

P{N(t) = k} = (λt)^k / k! · e^(−λt), k = 0, 1, 2, …. (4)

Lemma 1. For any feasible t, the probability distribution of the number of IoT device requests can be expressed as follows:

P{N(t) = k} = (λt)^k / k! · e^(−λt). (6)

Proof of Lemma 1.
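To illustrate the limit argument numerically, the following sketch compares the Bernoulli (binomial) approximation of Eq (2) with the Poisson limit of Eq (4); the arrival rate and interval length are hypothetical values chosen for illustration.

```python
import math

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli trials (Eq (2) form)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam, t):
    """Poisson limit: P{N(t) = k} = (lam*t)^k * exp(-lam*t) / k!  (Eq (4))."""
    return (lam * t)**k * math.exp(-lam * t) / math.factorial(k)

# Hypothetical arrival rate: 2 requests per second, observed over t = 1 s.
lam, t, k = 2.0, 1.0, 3
for n in (10, 100, 10_000):        # number of sub-slices of the interval
    p = lam * t / n                 # per-slice arrival probability
    print(f"n={n:6d}  binomial={binomial_pmf(k, n, p):.6f}  "
          f"poisson={poisson_pmf(k, lam, t):.6f}")
```

As n grows, the binomial values converge to the Poisson value, which is exactly the limit taken above.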
We define a minterm o(Δt) which satisfies lim_(Δt→0) o(Δt)/Δt = 0. When each time slice is partitioned small enough, the probability of a cloud task arriving within a slice becomes very low. Since this probability is proportional to the length of the time slice, the probability of more than one cloud task arriving becomes smaller still, and it approaches 0 much faster than the time slice length shrinks. Therefore, the probability that more than one cloud task arrives in Δt is of the order of the minterm:

P{N(Δt) > 1} = o(Δt). (7)

When k = 0, according to the definition of independent arrivals, the following equation is obtained:

P{N(Δt) = 0} = 1 − λΔt + o(Δt). (8)

The necessary and sufficient condition for the number of cloud tasks to be 0 at time t + Δt is that zero cloud tasks arrived in [0, t] and zero arrived in (t, t + Δt]; this holds regardless of whether any task arrives at the instant t itself. By independence, we obtain:

P₀(t + Δt) = P₀(t) · P{N(Δt) = 0} = P₀(t)(1 − λΔt + o(Δt)). (9)

According to Eqs (7)–(9), the difference quotient can be expressed as:

[P₀(t + Δt) − P₀(t)] / Δt = −λP₀(t) + P₀(t) · o(Δt)/Δt. (12)

Letting Δt → 0 and applying the definition and properties of the derivative, Eq (12) is transformed into:

dP₀(t)/dt = −λP₀(t). (13)

Separating variables gives Eq (15):

∫ dP₀(t)/P₀(t) = −λ ∫ dt. (15)

Solving the integral of Eq (15), the equation simplifies to P₀(t) = C·e^(−λt), where C is a constant. Because P₀(0) = 1, the value of C is 1, and the probability that zero tasks have arrived by time t is shown in Eq (18):

P₀(t) = e^(−λt). (18)

In summary, the conditions stated above are the necessary premises of the proof; in particular, the characteristics of service requests in a smart home environment guarantee that the minterm exists. Based on Eq (18) and mathematical induction, Eq (6) can be deduced for k ≥ 1. For brevity, the similar induction steps are omitted. Lemma 1 is thus proved.
After demonstrating that the Poisson distribution model is workable for the arrival series of IoT requests, we introduce an effective upper and lower bound mechanism and add it to the Poisson arrival model. The lower bound and upper bound can be expressed as in Eqs (19) and (20):

B_low = M − κσ, (19)
B_up = M + κσ, (20)

where the predicted number of arriving requests λ̂ is kept within [B_low, B_up]; M denotes the median value of the historical series; σ denotes the standard deviation of the series; and κ denotes the scale-level coefficient, a constant value.
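A minimal sketch of the bound mechanism, assuming the band is the historical median plus or minus κ standard deviations and that a raw prediction is clamped into this band (the function names and the clamping step are our illustrative assumptions, not the paper's exact procedure):

```python
import statistics

def prediction_bounds(history, kappa=1.0):
    """Lower/upper bounds around the historical median (Eqs (19)-(20) sketch).

    history -- recent per-interval arrival counts
    kappa   -- scale-level coefficient (an assumed tuning constant)
    """
    m = statistics.median(history)
    sigma = statistics.pstdev(history)
    return m - kappa * sigma, m + kappa * sigma

def clip_prediction(lam_hat, history, kappa=1.0):
    """Clamp a raw Poisson-model prediction into the [lower, upper] band."""
    low, up = prediction_bounds(history, kappa)
    return max(low, min(up, lam_hat))

history = [120, 135, 128, 400, 110, 125, 130]    # one spike in the series
print(clip_prediction(900, history, kappa=1.0))  # spike-driven prediction is clipped
```

The clamping step is what keeps a transient spike in the raw prediction from producing the large deviations discussed in the evaluation.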

Adaptive resource management approach
In this section, we define the energy consumption model and the SLA violation metric of the data center. Based on these definitions, an adaptive energy efficiency metric model is presented to balance the trade-off between energy consumption and SLA violations. The model can adaptively adjust the weight coefficients of energy consumption and SLA violations through the arrival rates given by the task arrival model.

Energy consumption model
According to previous research, server power can be expressed as a function of the power consumed by the CPU, memory, and disks. Moreover, recent studies show that CPU power consumption generally dominates the energy model in a Xeon processor-based server. Considering the importance of the CPU for server power, we present a utilization-based energy consumption model for cloud data centers. Let m denote the number of physical nodes in the data center, and P_j denote the power of host j (1 ≤ j ≤ m). Energy consumption is the product of time and power, so the real-time energy consumption of a cloud data center is defined in Eq (21):

E = Σ_(j=1..m) ∫ P_j(t) dt, (21)

where E denotes the overall energy consumption of the cloud data center. In the cloud computing scheduling process, resource constraints mean that the processing time of each subtask depends on its assigned resources. Preemption is not allowed: each subtask must be completed without interruption. The problem is to assign each task to appropriate resources and to order the tasks on those resources so as to minimize total energy consumption and SLA violations. The challenge is to carefully coordinate task scheduling and resource allocation to achieve overall cost and time optimization.
Resource scheduling in cloud computing is carried out in scheduling intervals preset by the cloud service provider. According to the utilization model described above, the energy consumption of the data center in each scheduling interval is:

E_total = Σ_(j=1..m) ∫₀^T P_j(u_j(t)) dt, (22)

where E_total denotes the total energy consumption generated by the hosts, T denotes the simulation time in the data center, and P_j(u_j(t)) denotes the power consumption corresponding to the current utilization u_j of host j. The power draw under different loads is detailed in Table 3.
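The interval energy model of Eq (22) can be sketched as a discrete sum of per-host power draws over utilization samples. The power table below is a hypothetical stand-in for the Table 3 entries, and linear interpolation between table points is our assumption:

```python
# Hypothetical power draw (watts) at 0%, 10%, ..., 100% CPU utilization,
# standing in for the Table 3 entries of a real server.
POWER_TABLE = [93.7, 97.0, 101.0, 105.0, 110.0, 116.0, 121.0, 125.0, 129.0, 133.0, 135.0]

def power(util):
    """Linearly interpolate host power draw from the utilization table."""
    if not 0.0 <= util <= 1.0:
        raise ValueError("utilization must lie in [0, 1]")
    idx = int(util * 10)
    if idx == 10:                    # exactly full load
        return POWER_TABLE[10]
    frac = util * 10 - idx           # position between the two table points
    return POWER_TABLE[idx] + frac * (POWER_TABLE[idx + 1] - POWER_TABLE[idx])

def interval_energy(util_samples, dt):
    """Discrete form of Eq (22): sum of P(u(t)) * dt over all hosts and samples.

    util_samples -- per-host lists of utilization samples
    dt           -- spacing between samples, in seconds
    """
    return sum(power(u) * dt for host in util_samples for u in host)

# Two hosts sampled every 300 s over one scheduling interval.
hosts = [[0.2, 0.5, 0.8], [0.0, 0.0, 0.1]]
print(f"{interval_energy(hosts, 300.0) / 3.6e6:.4f} kWh")
```

Note that the nearly idle second host still contributes substantially, which is why consolidating idle hosts matters in the later sections.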

SLA violations metric
In each scheduling interval, our scheduling goal is to minimize the SLA violations of each host while optimizing the total energy consumption of the data center. The SLA violation rate over all physical hosts in the data center is defined as:

SLAV = (1/m) Σ_(j=1..m) (T_(v,j) / T_(a,j)) · (U_(v,j) / U_(r,j)), (23)

where SLAV denotes the SLA violation rate of the data center; T_(v,j) denotes the SLA violation time of host j, i.e., the time during which its resource requirement was not satisfied, relative to its active time T_(a,j); and U_(v,j) denotes the MIPS that did not satisfy the resource requirement U_(r,j).
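A sketch of how such a per-host violation metric might be computed; the combination of the time ratio and the MIPS ratio below is our reconstruction of the metric described above, not a verbatim transcription of the paper's formula:

```python
def slav(host_records):
    """Mean SLA-violation ratio across hosts (sketch of the metric in this section).

    host_records -- per host: (violation_time, active_time, mips_unserved, mips_requested)
    The time ratio captures how long demand went unmet; the MIPS ratio, how severely.
    """
    total = 0.0
    for t_v, t_a, m_u, m_r in host_records:
        if t_a > 0 and m_r > 0:
            total += (t_v / t_a) * (m_u / m_r)
    return total / len(host_records)

records = [
    (600.0, 3600.0, 200.0, 1000.0),  # violated 1/6 of the time, 20% MIPS short
    (0.0, 3600.0, 0.0, 1000.0),      # no violation
]
print(slav(records))
```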

Adaptive energy efficiency model
An energy efficiency metric model consists of energy consumption and SLA violations. Previous energy efficiency model studies considered that energy consumption is as important as SLA violations. However, the generation and arrival of IoT requests have moderate fluctuations. The traditional energy efficiency models are no longer suitable in this scenario.
In this subsection, we present an adaptive energy efficiency model to balance the trade-off between energy consumption and SLA violations. Since the data center's optimization goal is to reduce both energy consumption and the number of SLA violations, smaller values of energy consumption and SLAV correspond to better energy efficiency. In order to simplify the solving process, we apply a log transformation to the energy consumption model and the SLAV model. Log transformation is a common form of data transformation whose purpose is to bring the data closer to our desired assumptions. Since the log function is monotonically increasing on its domain, taking the logarithm does not change the features or correlations of the data. In addition, it compresses the scale of the variables, making the data more stable and more convenient for subsequent calculations.
The energy efficiency model is defined as follows:

F = α·E′ + β·S′, (25)

where the weight coefficients α and β (α + β = 1, 0 ≤ α, β ≤ 1) represent the emphasis on energy consumption and the SLA violation rate, respectively. To express the model more compactly, we define the log-transformed terms:

E′ = ln E_total, S′ = ln SLAV. (26)

According to Eqs (25) and (26), F can be represented as follows:

F = α ln E_total + β ln SLAV. (27)

Apparently, the values of the weight coefficients α and β determine the emphasis of the energy efficiency model. According to the characteristics of E_total and SLAV, we introduce the idea of the sigmoid function to determine the weight coefficients, which can be expressed as follows:

α = 1 − β, (28)
β = 1 / (1 + e^(−(λ̂ − μ)/σ̂)), (29)

where λ̂ denotes the predicted arrival rate of IoT requests given by the Poisson arrival model; μ denotes the average arrival rate of the historical workload; and σ̂ denotes the standard deviation of the historical arrival rates. The values of μ and σ̂ are updated at the end of each scheduling interval. Regardless of how the IoT requests fluctuate, the proposed energy efficiency model can adaptively adjust the optimization priority of energy consumption and SLA violations. When the predicted arrival rate λ̂ is high, according to Eq (29) the value of β will be very close to 1, which means that optimizing SLA violations is the primary concern of energy efficiency management in the current scheduling interval. Conversely, when λ̂ is low, the model focuses on optimizing the energy consumption of the data center. If λ̂ equals the historical average arrival rate μ, then β = 0.5. The adaptive energy efficiency model can be formalized as follows:

α + β = 1. (30)

According to Eqs (27) and (30), F can be written as:

F = (1 − β) ln E_total + β ln SLAV. (31)
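The sigmoid weighting of Eq (29) can be sketched as follows; the historical mean and standard deviation values are hypothetical:

```python
import math

def slav_weight(lam_hat, mu, sigma):
    """Sigmoid weight beta for the SLA-violation term (Eq (29) sketch).

    alpha = 1 - beta is the corresponding weight on energy consumption.
    """
    return 1.0 / (1.0 + math.exp(-(lam_hat - mu) / sigma))

mu, sigma = 130.0, 20.0          # assumed historical mean and std of arrival rates
for lam_hat in (70.0, 130.0, 250.0):
    beta = slav_weight(lam_hat, mu, sigma)
    print(f"predicted rate {lam_hat:5.0f} -> alpha={1 - beta:.3f}, beta={beta:.3f}")
```

High predicted load pushes beta toward 1 (SLA-first), low load pushes it toward 0 (energy-first), and a prediction equal to the historical mean yields the neutral 0.5/0.5 split, exactly the behavior described above.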

VM placement based on arrival rates
In this subsection, we propose a novel VM placement policy to balance the trade-off between energy consumption and SLA. We refer to this policy as VM placement based on arrival rates (VPBAR). The pseudo-code for the VPBAR algorithm is presented in Algorithm 1.
In the algorithm, vmList denotes the list of VMs; hostList denotes the set of hosts in the data center; λ̂ denotes the predicted arrival rate of IoT requests given by the Poisson arrival model; μ denotes the average arrival rate of the historical workload; and σ̂ denotes the standard deviation of the historical arrival rates.
In each scheduling interval, the VPBAR algorithm first sorts the vmList in descending order of CPU utilization and then calculates the weight coefficients α and β according to Eqs (28) and (29). The VPBAR algorithm iterates over the entire hostList and checks whether each host satisfies the resource requirement of the VM. If a host meets the requirement, we calculate the utilization of this host after allocating the VM to it. VPBAR selects the host whose allocation yields the optimal energy efficiency. The algorithm terminates when the allocation of all VMs is complete. From the pseudocode of the algorithm, the complexity of VPBAR is O(M×N), where M represents the number of hosts and N represents the number of VMs.
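A simplified sketch of the VPBAR placement loop under stated assumptions: the per-host score below uses raw utilization as a stand-in for the energy model and an assumed 80% overload threshold for SLA risk, so it illustrates the O(M×N) greedy structure rather than the paper's exact objective:

```python
def vpbar(vms, hosts, beta):
    """Greedy VPBAR sketch: place each VM (descending CPU demand) on the host
    minimizing a weighted objective (1-beta)*energy_cost + beta*sla_risk.

    vms   -- list of (vm_id, mips_demand)
    hosts -- dict host_id -> {"capacity": mips, "used": mips}
    beta  -- SLA weight from the adaptive energy-efficiency model
    """
    placement = {}
    for vm_id, demand in sorted(vms, key=lambda v: v[1], reverse=True):
        best_host, best_score = None, float("inf")
        for h_id, h in hosts.items():
            if h["used"] + demand > h["capacity"]:
                continue                        # host cannot satisfy the VM
            util = (h["used"] + demand) / h["capacity"]
            energy_cost = util                  # stand-in for the power model
            sla_risk = max(0.0, util - 0.8)     # assumed overload threshold of 80%
            score = (1 - beta) * energy_cost + beta * sla_risk
            if score < best_score:
                best_host, best_score = h_id, score
        if best_host is not None:
            hosts[best_host]["used"] += demand  # commit the allocation
            placement[vm_id] = best_host
    return placement

hosts = {"h1": {"capacity": 1000.0, "used": 0.0},
         "h2": {"capacity": 1000.0, "used": 0.0}}
print(vpbar([("vm1", 600.0), ("vm2", 500.0)], hosts, beta=0.5))
```

The nested loop over VMs and hosts is what gives the O(M×N) complexity noted above.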

Performance analysis
This section evaluates and tests the task arrival model for IoT requests described in Section 3 and VPBAR algorithm introduced in Section 4. It is necessary to evaluate the experiments on a hyper-scale virtualized cloud computing infrastructure. However, it is tough to perform repeatable large-scale experiments on real data centers [15,38,46]. The CloudSim Toolkit [47] is chosen as an experimental platform to evaluate our work. CloudSim is one of the most robust simulation frameworks in cloud computing. It supports on-demand virtualization resource management and simulation of large-scale cloud computing infrastructure.
In the experiment, a real-world workload, ClarkNet-HTTP [48], is used to evaluate our models and algorithms. The workload contains request traces from an Internet service provider for the Baltimore-Washington DC area. During a two-week period, the servers received 3,328,587 requests. The ClarkNet-HTTP workload can simulate realistic IoT request arrivals in the smart building scenario as well as most situations that may be faced within a data center. The number of requests in this dataset fluctuates continuously over time, and the workload is widely used to analyze IoT request arrival scenarios.
Each experiment was repeated ten times to ensure the validity and accuracy of the experimental results. Table 3 shows the relationship between the power and CPU utilization of the servers used in our experiment. The experiment in this paper is divided into two parts: the first evaluates the effectiveness of the task arrival model, and the second evaluates the performance of the VPBAR algorithm. Figure 3 shows the prediction results of the Poisson arrival model with upper and lower bounds on the IoT requests of the ClarkNet-HTTP dataset. The orange line in the figure represents the prediction of the arrival model between August 28 and September 3. After adding the upper and lower bounds to the Poisson arrival model, the predicted result is a curve with moderate fluctuations. When the actual value falls within the range of the upper and lower bounds, the prediction is considered accurate for that scheduling period. Since IoT requests may fluctuate moderately, naively modelling the arrival of requests would result in large deviations in task arrival rates. The results in the figure show that our arrival model performs very well in predicting IoT requests.
We test the scheduling algorithms with different numbers of hosts, where the number of hosts is proportional to the number of tasks. We compare the VPBAR algorithm with DVFS [10], LRR_MMT_1.5 (Local Regression Robust, Minimum Migration Time) [49] and the VM-CONSOLIDATION [38] algorithm. Figure 4(a-d) shows the experimental results of the VPBAR algorithm on different metrics. In Figure 4(a), we focus on the algorithms' results in terms of energy consumption. The experimental results show that VPBAR achieves the lowest energy consumption, followed by the VM-CONSOLIDATION algorithm. DVFS yields the highest energy consumption because it does not consider the placement optimization of VMs. The DVFS algorithm mainly adjusts the CPU voltage and frequency according to the current load of each PM. This method can reduce part of the energy consumption, but it cannot deal with underloaded PMs; these physical machines consume considerable power even when they are entirely idle. LRR_MMT_1.5 attempts to optimize energy consumption by reducing the migration time of VMs. However, due to the volatility of task arrivals, this method cannot effectively consolidate idle PMs to save energy. The figure reveals that when the number of hosts created in the data center is relatively small, the advantage of VPBAR is not obvious; however, when the number of hosts in the working state is large, VPBAR significantly reduces the energy consumption of the data center. Figure 4(b) shows the results for average SLA violations in the data center. Since the DVFS technique does not involve a consolidation process, its results are not shown in this figure. The results show that LRR_MMT_1.5 causes a high number of SLA violations, followed by VM-CONSOLIDATION. LRR_MMT_1.5 can reduce the cost caused by migration.
However, it results in many overloaded hosts in the data center, and these overloads cause serious SLA violations. These algorithms do not consider the massiveness and time constraints of IoT requests, which causes performance degradation in the data center. VPBAR models the arrival of cloud tasks: to avoid overloading as much as possible, it can schedule VMs in advance according to the number of tasks arriving in a unit time slice.
In Figure 4(c), we compare the algorithms in terms of execution time. DVFS achieves the shortest execution time because it does not need to schedule VMs. Compared with LRR_MMT_1.5 and VM-CONSOLIDATION, the execution time of the VPBAR algorithm is reduced by about 19% thanks to the efficiency of the task arrival model. As can be seen from the figure, the execution time of the LRR_MMT_1.5 algorithm is not short: the algorithm uses a local regression method that requires repeated computation when making decisions, which increases its time complexity. The VM-CONSOLIDATION algorithm only needs to sort the PM and VM lists according to utilization, so its execution time is not excessive. In most scenarios, the Poisson task arrival model can produce predictions of arrival rates immediately, which saves considerable time in resource scheduling. Figure 4(d) illustrates the number of active hosts for all algorithms. The result shows that many hosts in the idle state are turned off to save energy through the dynamic consolidation technique. The VPBAR algorithm gradually shows its advantages when the number of requests is at a high level, which also reflects its excellent applicability to IoT requests.

Conclusions
In this era of a rapid increase in data volume, energy efficiency scheduling of cloud data centers has become extremely important. Existing scheduling methods do not consider the characteristics of IoT requests, which may lead to performance degradation or SLA violations in a data center. It is of great significance to design an efficient resource scheduling mechanism to handle IoT requests.
In this paper, we propose an improved Poisson arrival model to predict the future arrival rates of IoT requests. We take into account the fluctuations of IoT requests and introduce a novel upper and lower bound mechanism to enhance the applicability of the arrival model. Furthermore, an adaptive energy efficiency model is presented to adjust the priority between energy-saving and SLAs according to future arrival rates. Regardless of how the IoT requests fluctuate, the proposed energy efficiency model can adaptively adjust the optimization priority of energy consumption and SLA violations. SLA optimization is the primary concern of energy efficiency management when the load is high. Correspondingly, the energy efficiency model will focus on optimizing the energy consumption of the data center when the load is low.
Furthermore, we propose an energy-efficient VM placement algorithm based on task arrival rates (VPBAR) to ensure high availability and energy efficiency. The objective of our work is to help cloud service providers deliver better service to their customers. The experimental results show that our arrival model performs very well in predicting IoT requests; with the upper and lower bound mechanism, the adaptability of the arrival model becomes stronger. The VPBAR algorithm also achieves better results than other solutions, and its advantages grow when the number of requests is at a high level. Based on the evaluation results, we conclude that resource optimization approaches based on task arrival models can achieve a better trade-off between energy saving and SLA optimization, and that the method is efficient and workable for IoT requests.
Due to the rapid rise of edge computing technology, edge computing nodes also occupy a critical position in the development of smart cities. Our next goal is to address the energy-saving problems of edge computing nodes and IoT devices, taking into account factors such as the battery level and type of devices, data transmission costs, and data flow oscillations.