In-Network Caching and Edge Computing-Based Live Broadcasting Optimization for Football Competitions

Football is often called the number one sport in the world, attracting the attention of more than one billion people. With the development of big convergence media, the live broadcasting of football competitions has gradually become industrialized and commercialized, which is directly related to economic growth. For the live broadcasting of football competitions, users focus on quality of experience, i.e., definition and instantaneity. Current live broadcasting schemes struggle to cover these two metrics well. Therefore, this paper exploits the emerging in-network caching and edge computing technologies to optimize the live broadcasting of football competitions, abbreviated as IELB. First, a live broadcasting optimization framework based on in-network caching and edge computing is presented. Then, an auction-based method is used to address the task scheduling problem in edge computing. In addition, a video compression algorithm based on an adaptive convolution kernel is introduced to accelerate video transmission and help users obtain the contents of football competitions as quickly as possible. The proposed IELB has been verified on a collected dataset of real football competitions by evaluating response time, and the experimental results demonstrate that IELB is feasible and efficient.


Introduction
With the progress of the times, the industrialization and commercialization of football have been rapidly promoted. Among all outdoor sports, football has the highest output value, the biggest influence, and the most widespread attention. According to statistics, the total annual output value of football accounts for 43.5% of that of all sports, reaching 400 billion dollars and exceeding the GDP of some developed countries and regions.
Thus, football is worthy of being called the number one sport in the world [1,2]. Furthermore, according to statistics from FIFA, by the end of 2019 there were 1.6 million teams with more than 0.2 billion athletes worldwide playing various football competitions [3]. In particular, with the rapid development of the mobile Internet and big convergence media, the live broadcasting of football competitions [4] has gradually become industrialized and commercialized, attracting many Internet companies. Compared with a conventional router's buffer, the cache size of the ICN router is between them, and the ICN router can also be deployed at edge nodes to satisfy as many user requests as possible. In other words, this paper uses the in-network caching feature of ICN to help optimize the live broadcasting of football competitions so as to obtain high definition and good instantaneity. In fact, the usage of ICN has two obvious advantages. On the one hand, there is no need to deploy expensive CDN servers. On the other hand, the ICN router can be regarded as a large cache pool that accommodates more intermediate data streams. These two advantages motivate the application of ICN to the live broadcasting of football competitions.
As mentioned above, the ICN router can be deployed at edge nodes, which usually refer to mobile devices such as smartphones. Given this, mobile edge computing (MEC) [14-16] has to be considered and addressed. In fact, MEC has widespread application value, especially for the optimization of live broadcasting. To be specific, mobile devices have limited storage and computation resources, so it is very difficult for them to completely handle all tasks related to live broadcasting. Therefore, one possible solution is to offload complex tasks to the edge server for computing, while the remaining simple tasks are performed on the local device. Under this condition, it is very important to address the task scheduling problem in edge computing. At present, there have been some task scheduling methods [17-23], including artificial intelligence (AI) based ones [24,25], but they usually cannot achieve both fast response and low energy consumption. In conclusion, it is necessary to explore a new method to address the task scheduling problem arising from live broadcasting optimization.
Furthermore, the live broadcasting of football competitions cannot do without the transmission of data streams. In other words, video transmission is an indispensable operation connecting the contents provider and the users [26]. Regarding video transmission, the video compression algorithm is very important, since it can accelerate the transmission and help users obtain the contents of football competitions as quickly as possible. However, current video compression algorithms exhibit some limitations [27-31]; in particular, their frame loss rate and transmission time cannot reach a satisfactory level. Therefore, the video compression algorithm is also studied in this paper.
With the above considerations, this paper optimizes the live broadcasting of football competitions from three technical aspects, i.e., the in-network caching feature of ICN, task scheduling in edge computing, and video compression for video transmission, collectively called IELB. To sum up, the major contributions of this paper are as follows. (i) A live broadcasting optimization framework based on in-network caching and edge computing is presented. (ii) An auction-based method is used to address the task scheduling problem in edge computing. (iii) A video compression algorithm based on an adaptive convolution kernel is introduced to accelerate video transmission. The rest of this paper is structured as follows. Section 2 reviews the related work. Section 3 presents the comprehensive system framework. Section 4 introduces the auction-based task scheduling. Section 5 proposes the video compression algorithm. Section 6 shows the experimental results. Section 7 concludes this paper.

Task Scheduling.
There have been some research studies on task offloading in edge computing. In particular, some review papers have presented comprehensive summaries, such as [32-35]. Furthermore, in [17], the authors formulated the successful computation probability, successful communication probability, and successful edge computing probability for offloading tasks to the MEC server. In addition, they analyzed by simulation how the formulated probabilities vary with task size, task target latency, and task arrival rate at the MEC server, helping users make offloading decisions. In [18], the authors critically analyzed the resource-intensive nature of the latest computational offloading techniques for MEC and highlighted technical issues in establishing distributed application processing platforms at runtime, where a prototype application was evaluated with different computation intensities in a real MEC environment. In [19], the authors presented a collaborative approach based on MEC and cloud computing that offloaded services to automobiles in vehicular networks. Meanwhile, a cloud-MEC collaborative computation offloading problem was formulated by jointly optimizing the computation offloading decision and computation resource allocation. Besides, they proposed a collaborative computation offloading and resource allocation optimization scheme and designed a distributed computation offloading and resource allocation algorithm to achieve the optimal solution. In [20], the authors proposed a price-based distributed method to manage the offloaded computation tasks from users. Therein, a Stackelberg game was formulated to model the interaction between the edge cloud and users so as to maximize the revenue subject to its finite computation capacity. For given prices, each user locally made an offloading decision to minimize its own cost, defined as latency plus payment.
Depending on the edge cloud's knowledge of the network information, they developed uniform and differentiated pricing algorithms, both of which could be implemented in a distributed manner. In [21], the authors proposed a multiuser noncooperative computation offloading game to adjust the offloading probability of each vehicle in vehicular MEC networks and designed a payoff function considering the distance between the vehicle and the MEC access point, the application and communication model, and multivehicle competition for MEC resources. They also constructed a distributed best-response algorithm based on the computation offloading game model to maximize the utility of each vehicle and demonstrated that the strategy converges to a unique and stable equilibrium under certain conditions. In [22], the authors studied the partial computation offloading problem for multiple users in a mobile edge computing environment with multiple wireless channels. The computation overhead model was built based on game theory. Then, a partial computation offloading algorithm with low time complexity was given to achieve the Nash equilibrium. In [23], the authors took a user-centric view to tackle the offloading scheduling problem by jointly allocating communication and computation resources with consideration of the QoE of users, where they formulated the design as a mixed-integer nonlinear programming problem and solved it efficiently by the branch-and-bound method.

Video Compression.
A number of video compression methods have been proposed, including some comprehensive survey papers, such as [36-38]. Furthermore, in [27], the authors maintained that a significant reduction in file size without sacrificing visual quality could be achieved by using several efficient compression techniques. To this end, they proposed a video compression method using global affine frame reconstruction, which involved affine parameter estimation for motion estimation and affine warping for motion compensation, where the motion parameters were estimated and stored as compressed data. In [28], the authors proposed an adaptive transfer function based on the perceptual quantizer for video compression; instead of a fixed mapping curve from luminance to luma, the proposed transfer function adaptively mapped luminance to luma according to the contents. In [29], the authors introduced a hybrid spatially and temporally constrained content-adaptive tone mapping operator to convert the input high dynamic range video into a tone-mapped video sequence, which was then encoded using the high efficiency video coding standard. In particular, the proposed tone-mapped video simultaneously exploited intraframe spatial redundancies and preserved interframe temporal coherence. In [30], the authors proposed a novel rate control scheme in H.264 to control the compression ratio, where the level of compression was decided with the help of the rate controller. They measured the quality of the transmitted video and the available bandwidth with the proposed technique, built quality multimedia content, and transferred it over the transmitter. In [31], the authors provided a lightweight video compression scheme through interframe and intraframe compression. In interframe compression, redundant frames were removed by a proposed interpolation search-based method and a lightweight edge detection technique.
Then, intraframe compression was performed by a proposed adaptive column dropping technique that modified an existing technique. Besides, they devised two reconstruction filters to improve reconstruction quality.

System Framework
By using the in-network caching ability of ICN and edge computing, the proposed system framework for the live broadcasting optimization of football competitions is shown in Figure 1. We can see that there are two kinds of servers, i.e., the contents provider server, used to store football competition videos, and the edge computing server, used to compute complex tasks. The whole live broadcast is a video transmission from the contents provider to the mobile device via some intermediate ICN routers used to store hot data streams. In particular, during video transmission, a video compression algorithm based on an adaptive convolution kernel is introduced to accelerate the transmission. The live broadcasting contents then arrive at the mobile device. Given its limited storage and computation resources, complex tasks are scheduled to the edge computing server for computing, while the remaining simple tasks are performed on the local mobile device. For this, the auction-based method is used to address the task scheduling problem in edge computing.
In particular, the ICN router operates at the network level instead of the application level (unlike a CDN) and is assumed to have enough cache size to store the hot data stream. In addition, this paper assumes that all mobile devices have the same configuration. As for the two servers, the contents provider server has abundant space to store the videos of football competitions, like a top-level distribution server. In contrast, the edge computing server is only a high-performance computing and storage server, and its computing and storage abilities are no match for those of the contents provider server. The whole workflow of the live broadcasting optimization of football competitions is shown in Figure 2. According to the above statements, task scheduling and video compression are the two major research points of this paper, which are addressed in the following sections.

Task Scheduling
The whole task scheduling in edge computing includes two parts, i.e., the offloading decision, which determines whether tasks need to be offloaded, and the scheduling method, which performs resource computation and allocation.

Offloading Decision.
Whether a task is offloaded depends on whether its running time and energy consumption can be decreased by offloading. Given this, two conditions, i.e., local performing and offloading performing, are considered and analyzed.
At first, for an arbitrary task task_i, when it is performed on the local mobile device, the required time is defined as
t_local = c_i / v_l, (1)
where c_i is the required computation resource, i.e., the number of CPU cycles, and v_l is the execution rate of the local CPU. Furthermore, the required energy consumption is defined as
e_local = p_l · t_local = p_l · (c_i / v_l), (2)
where p_l is the power of the mobile device.
Mobile Information Systems
Then, when task_i is performed at the edge computing server, the required time t_off is composed of three parts, i.e., the transmission time of task_i from the mobile device to the edge computing server, the computing time at the edge computing server, and the returning time of the computing result from the edge computing server to the mobile device, denoted by t_up, t_e, and t_down, respectively. Mathematically,
t_off = t_up + t_e + t_down. (3)
Considering that the size of the output result is far smaller than that of the input data for most applications, the returning time has no significant influence on the total required time. On this basis, equation (3) is simplified as
t_off = t_up + t_e, with t_up = D_i / (W log_2(1 + Los·p_l / N)) and t_e = c_i / v_e. (4)
Among them, D_i is the input data size of task_i, N is the Gaussian noise power of the channel, W is the channel bandwidth, Los is the transmission gain, and v_e is the execution rate of the edge computing server's CPU. To sum up, the total required time under this condition is expressed as
t_off = D_i / (W log_2(1 + Los·p_l / N)) + c_i / v_e. (5)
Let e_off denote the total required energy consumption under this condition; since the mobile device consumes power p_l only during transmission,
e_off = p_l · t_up = p_l · D_i / (W log_2(1 + Los·p_l / N)). (6)
Moreover, when both the time and the energy consumed under the task offloading condition are smaller than those under the local performing condition, the task should be performed in the offloading way. For this purpose, the following two constraints should be satisfied:
t_off < t_local and e_off < e_local. (7)
Substituting equations (1), (2), (5), and (6) into these constraints yields
v_e > c_i / (c_i / v_l - t_up), (8)
W log_2(1 + Los·p_l / N) > D_i · v_l / c_i, (9)
which indicates that if the computing ability of the edge computing server satisfies equation (8) and the current network bandwidth environment satisfies equation (9), the task will be performed in the offloading way.
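The offloading decision above can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's implementation; all function names and parameter values are assumptions, and the uplink rate follows the standard Shannon-capacity form implied by the symbols W, Los, p_l, and N.

```python
import math

def local_cost(c_i, v_l, p_l):
    """Time and energy when task i runs on the local mobile device."""
    t_local = c_i / v_l          # required CPU cycles / local CPU rate
    e_local = p_l * t_local      # device power x execution time
    return t_local, e_local

def offload_cost(c_i, D_i, v_e, p_l, W, Los, N):
    """Time and energy when task i is offloaded (return time ignored)."""
    rate = W * math.log2(1 + Los * p_l / N)  # Shannon uplink rate (bits/s)
    t_up = D_i / rate                        # upload time for input data
    t_e = c_i / v_e                          # computing time at edge server
    e_off = p_l * t_up                       # device energy spent transmitting
    return t_up + t_e, e_off

def should_offload(c_i, D_i, v_l, v_e, p_l, W, Los, N):
    """Offload only if both time and energy decrease versus local execution."""
    t_l, e_l = local_cost(c_i, v_l, p_l)
    t_o, e_o = offload_cost(c_i, D_i, v_e, p_l, W, Los, N)
    return t_o < t_l and e_o < e_l
```

For example, a compute-heavy task (many CPU cycles, small input) with a fast edge CPU satisfies both constraints and is offloaded, while a slow edge CPU violates the time constraint and the task stays local.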

Scheduling Method.
This paper uses the auction method [39] to design the scheduling strategy, which involves two roles, i.e., buyers and sellers. The n buyers are the tasks, i.e., the set of buyers is denoted by TASK = {task_1, task_2, ..., task_n}. Each task has two attributes, task_i(t) and task_i(p), which denote the required time to complete task_i and the bid of task_i, respectively. The m sellers are the unoccupied virtual machines (VMs), i.e., the set of sellers is denoted by VM = {vm_1, vm_2, ..., vm_m}. Each VM has one attribute, vm_i(t), which denotes the rental time of vm_i. Furthermore, let price_ini denote the initial bid price, defined as
price_ini = task_i(p) / task_i(t). (10)
Although equation (10) considers the emergency degree of a task, it is unfair: a task that repeatedly fails in the auction gradually loses its price advantage. Therefore, this paper proposes a compensation strategy for the case of bidding failure. Mathematically, equation (10) is modified as
price = price_ini · (1 + N_fai · s), (11)
where s is the length of a time slice and N_fai is the number of bidding failures. Differentiating equation (11) with respect to s gives
∂price/∂s = price_ini · N_fai, (12)
which indicates that the derivative increases with N_fai. In other words, a larger derivative means a faster price growth. Therefore, another interpretation of equation (12) is that more price compensation is granted after more bidding failures.
Besides TASK and VM, there is a set of intermediate variables (denoted by CA) used to store the tasks satisfying the auction condition. If task_i(t) < vm_j(t) for some unoccupied vm_j, task_i satisfies the auction condition and is added into CA. Otherwise, task_i is marked as a bidding failure. Reference [39] has presented the detailed steps of the whole auction process, so this paper does not repeat them. In particular, the time complexity is O(m log m) rather than O(nm).
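The auction-style matching with failure compensation can be sketched as follows. This is a hypothetical Python sketch under stated assumptions: the initial price is the bid divided by the required time (the urgency weighting described above), compensation multiplies it by (1 + N_fai · s), and ties are broken by tightest-fitting VM; the full procedure is in [39], and none of this code is from the paper.

```python
def run_auction(tasks, vms, s=0.1):
    """tasks: list of dicts with 't' (required time) and 'p' (bid).
    vms: list of dicts with 't' (rental time).
    Returns a dict mapping task index -> assigned VM index."""
    n_fai = {i: 0 for i in range(len(tasks))}   # bidding-failure counts
    assigned = {}
    free_vms = list(range(len(vms)))
    progress = True
    while progress and free_vms:
        progress = False

        def price(i):
            base = tasks[i]['p'] / tasks[i]['t']  # urgency-weighted bid (10)
            return base * (1 + n_fai[i] * s)      # failure compensation (11)

        waiting = [i for i in range(len(tasks)) if i not in assigned]
        for i in sorted(waiting, key=price, reverse=True):
            # Auction condition: the task must fit within a VM's rental time.
            fits = [j for j in free_vms if tasks[i]['t'] < vms[j]['t']]
            if fits:
                j = min(fits, key=lambda j: vms[j]['t'])  # tightest fit
                assigned[i] = j
                free_vms.remove(j)
                progress = True
            else:
                n_fai[i] += 1   # bidding failure -> higher price next round
    return assigned
```

In this sketch a high-priced task is matched first, but a task that keeps failing sees its price grow each round, so it eventually outbids less urgent tasks instead of starving.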

Video Compression
The whole video compression process depends on motion compensation, which realizes motion evaluation among video frames and pixel information compensation. In particular, this part is completed based on an adaptive convolution kernel [40]. Inspired by Simon et al. [40], this paper learns the motion offset between consecutive video frames so as to realize motion evaluation among them. This process is defined as
I'_{c_i} = f(I_{c_i}, Δ_{i,i+1}). (13)
Among them, I_{c_i} denotes frame i, Δ_{i,i+1} denotes the offset from frame i to frame i+1, I'_{c_i} denotes the motion compensation on I_{c_i}, and f is the mapping function when the adaptive convolution kernel is used.
When the adaptive convolution kernel is used, two convolution kernels are needed for motion prediction, denoted by K_h(x, y) and K_v(x, y), which are regarded as the horizontal vector and the vertical vector of K(x, y), respectively. Mathematically,
K(x, y) = K_v(x, y) * K_h(x, y), (14)
where * denotes the convolution operation. In particular, a rectified linear unit function is added after each convolution operation so as to enhance the expressive ability of the network. On this basis, the general motion compensation on I_{c_i} is defined as
I'_{c_i}(x, y) = K(x, y) * P_{c_i}(x, y), (15)
where P_{c_i}(x, y) is the pixel patch of I_{c_i} centered at (x, y).
In summary, by using the network's self-learning operation, the motion compensation is analyzed and performed based on K_h(x, y) and K_v(x, y) with a nonlinear mapping relationship. For K(x, y) with a K × K convolution kernel, the number of parameters becomes 2K instead of K², which indicates that two one-dimensional convolution kernels have better computation efficiency than one two-dimensional convolution kernel.
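The parameter-count claim can be checked numerically. The sketch below (Python with NumPy, purely illustrative, with arbitrary kernel values) builds a K × K kernel as the outer product of a vertical and a horizontal 1-D kernel and verifies that applying the two 1-D kernels in sequence gives the same response as the full 2-D kernel, with 2K instead of K² parameters.

```python
import numpy as np

K = 5
k_v = np.random.rand(K)          # vertical 1-D kernel (K parameters)
k_h = np.random.rand(K)          # horizontal 1-D kernel (K parameters)
K2d = np.outer(k_v, k_h)         # equivalent K x K kernel (K^2 values)

patch = np.random.rand(K, K)     # pixel patch P(x, y) around one output pixel

# Full 2-D convolution response at the patch centre:
full = np.sum(K2d * patch)
# Separable version: weight rows with k_h, then the column result with k_v:
separable = k_v @ (patch @ k_h)

assert np.isclose(full, separable)   # identical responses, 2K vs K^2 params
```

The separable form holds exactly whenever the 2-D kernel is the outer product of the two 1-D kernels, which is what the adaptive separable formulation learns per output pixel.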

Experiment Results
This paper conducts three parts of experiments. First, the proposed auction-based task scheduling strategy, called ATS, is verified. Then, the proposed adaptive convolution kernel-based video compression scheme, called ACK, is verified. Finally, the whole proposed IELB is verified. The first part considers transmission time and energy consumption as the evaluation metrics, with references [21,22] used as the baselines, abbreviated as IoTJ and CoMNeT, respectively. The second part considers frame loss rate and transmission time as the evaluation metrics, with references [30,31] used as the baselines, abbreviated as CoMCoM and JVCIR, respectively. The third part considers response time and QoE of users as the evaluation metrics, collecting 1000 football competitions and testing 500 users.

Experiments on Task Scheduling.
The average transmission times under different data sizes (from 10 GB to 60 GB) for ATS, IoTJ, and CoMNeT are shown in Table 1. We can see that the proposed ATS task scheduling strategy has the smallest average transmission time, followed by IoTJ and CoMNeT. In addition, the average transmission time increases with the data size. This suggests that the auction-based task scheduling strategy is more efficient than IoTJ and CoMNeT. Furthermore, when the data size is 50 GB, the independent transmission times over different runs (from 1 to 8) for ATS, IoTJ, and CoMNeT are shown in Table 2. It is obvious that ATS has the smallest transmission time in each run. In particular, across the 8 independent transmission-time results, the proposed ATS has the best stability, followed by CoMNeT and IoTJ. This suggests that the proposed task scheduling is the best of the three.
Moreover, the average energy consumptions under different data sizes (from 10 GB to 60 GB) for ATS, IoTJ, and CoMNeT are shown in Table 3. We can see that the proposed ATS consumes the least energy. In summary, ATS has the smallest transmission time and the smallest energy consumption, which indicates that the proposed task scheduling strategy in edge computing is considerably satisfactory.

Experiments on Video Compression.
The average frame loss rates under different resolutions (i.e., 320P, 480P, 720P, and 1080P) for ACK, CoMCoM, and JVCIR are shown in Table 4. We can see that the average frame loss rate of ACK is the lowest. Although the average frame loss rates show no significant difference at 320P, the frame loss rate of ACK grows linearly with resolution while those of CoMCoM and JVCIR grow exponentially. This suggests that the proposed video compression scheme can guarantee complete video transmission. Furthermore, the average transmission times under different resolutions for ACK, CoMCoM, and JVCIR are shown in Table 5. We can see that the average transmission time of ACK from the contents provider server to the mobile device is the smallest. Similarly, at 320P their average transmission times show no significant difference. In particular, we also see that the average transmission time generated by video transmission is larger than that generated by task offloading.

Experiments on Live Broadcasting Optimization.
When the data size is 60 GB and the resolution is 720P, the independent response times over different runs (from 1 to 10) for IELB are shown in Table 6. We can see that the response time is around 120 ms, i.e., at the millisecond level, which is acceptable to users.
Furthermore, this paper evaluates the QoE of users, which is divided into five grades, i.e., very satisfied, satisfied, borderline, dissatisfied, and very dissatisfied. From two perspectives (i.e., users and football competitions), the evaluation results on QoE are shown in Table 7. We can see that the satisfaction rate reaches 100%, which indicates that the proposed live broadcasting optimization of football competitions is feasible and efficient.

Conclusions
This paper optimizes the live broadcasting of football competitions from three technical aspects, i.e., the in-network caching feature of ICN, task scheduling in edge computing, and video compression for video transmission. First, a live broadcasting optimization framework based on in-network caching and edge computing is presented. Second, an auction-based method is used to address the task scheduling problem in edge computing. Third, a video compression algorithm based on an adaptive convolution kernel is introduced to accelerate video transmission. For these proposed strategies, this paper conducts three kinds of experiments: (i) the proposed auction-based task scheduling strategy is verified by testing transmission time and energy consumption; (ii) the proposed adaptive convolution kernel-based video compression scheme is verified by testing frame loss rate and transmission time; and (iii) the whole proposed IELB is verified by testing response time and QoE of users. The experimental results demonstrate that the proposed IELB is feasible and efficient.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.