Joint Communication, Computing, and Caching Resource Allocation in LEO Satellite MEC Networks

Driven by the urgent demand for ubiquitous and reliable coverage for global users, low earth orbit (LEO) satellite networks have attracted considerable attention from both academia and industry. By deploying multi-access edge computing (MEC) servers in LEO satellites, computation offloading and content caching services can be provided for remote Internet-of-Things (IoT) devices without proximal servers. In this paper, the joint optimization of computation offloading, radio resource allocation and caching placement in LEO satellite MEC networks is investigated. The problem is formulated to minimize the total delay of all ground IoT devices subject to energy, computing and caching constraints. To solve this mixed-integer and non-convex problem, a Lagrange dual decomposition (LDD)-based algorithm is proposed to obtain the closed-form optimal solution. Then, a heuristic algorithm is proposed to further reduce the computational complexity. Numerical results validate that both algorithms are effective compared to the optimal exhaustive search, the full local computing and the full MEC methods. In addition, the offloading ratio and the average delay of all IoT devices under different numbers and computing capacities of devices and satellites are also demonstrated.


I. INTRODUCTION
With the rapid development of the Internet of Things (IoT), there is an ever-increasing demand for delay-sensitive and computation-intensive applications. In order to provide services for users in remote regions and disaster areas, as well as airborne and maritime users [1], low earth orbit (LEO) satellite networks, acting as a complement to terrestrial networks, have been regarded as a powerful solution to meet the requirements of broadband, ubiquitous and reliable coverage for global users [2]. On the other hand, since the computation and battery capacities of ground IoT devices are generally too limited to execute their delay-sensitive and computation-intensive applications locally, multi-access edge computing (MEC) has emerged as one of the potential technologies to broaden the capability of ground IoT devices [3], [4]. Thanks to the low altitudes of LEO satellites, the propagation delay from a ground device to its visible LEO satellites can be reduced to 1-4 ms [5]. Thus, with the deployment of MEC in satellite networks, the task execution latency and the energy consumption of ground IoT devices can be greatly reduced by offloading computation to the powerful MEC servers deployed in LEO satellites. Meanwhile, MEC can provide content caching services to efficiently decrease the traffic burden on the backhaul links of satellite networks and improve the quality of service (QoS) for IoT devices [6], [7].
Recently, satellite edge computing has attracted considerable attention and become an emerging research direction. In [8], a satellite-terrestrial network with double edge computing capacities is studied, where the MEC server allocation is optimized to minimize the energy consumption and latency. Wang et al. [9] propose a game-based computation offloading algorithm to minimize the offloading costs of all users in satellite edge computing systems. Besides, considering a three-tier computation architecture, hybrid cloud and edge computing in LEO satellite networks is proposed in [10], where heterogeneous computation resources can be provided to ground users. Nevertheless, the computation offloading strategy is investigated individually in [8], [9], and [10], while the communication and caching resource allocation is ignored. In fact, communication, computing and caching are both complementary and competitive. For example, caching can reduce the amount of data to be offloaded and thus the occupied communication resources, while content caching and computation offloading compete for computing and storage resources. To facilitate efficient edge computing and content caching in diversified application scenarios, the joint optimization of communication, computing and caching resource allocation is indispensable for satisfying the desired QoS requirements [11], [12].
To fully exploit both communication and computing resources in LEO satellite networks, the computation offloading and resource allocation for terrestrial-satellite networks are jointly optimized in [13] to minimize the weighted sum energy consumption of all users. In [14], aiming at minimizing the weighted sum latency of all users in satellite networks where edge servers are deployed in both satellites and gateway stations, the joint computing and communication resource management problem is investigated for two different satellite edge computing and local execution scenarios. However, the storage capacity constraint of the MEC server is not taken into account in either [13] or [14]. Differently, Li et al. [15] concentrate on cache-enabled LEO satellite networks. Considering the energy constraints of satellites and a coded caching model, an integrated satellite-terrestrial cooperative transmission scheme is designed for energy-efficient traffic offloading from base stations through the satellite's broadcast transmission. Based on dynamic satellite propagation links and time-varying topology, a cache content updating mechanism is derived in [16] for satellite-terrestrial networks, which enables a stable and sustainable quality of user experience under continuous time variation. Besides, in [17], the closely coupled request dispatching and service caching placement are investigated subject to both the computing and storage capacity constraints, while the radio resource allocation is not considered in the formulated problem.
In order to tackle the potential conflict between resource-hungry applications and resource-constrained IoT devices, appropriate resource management is indispensable. However, few studies jointly consider the communication, computing and caching resources in LEO satellite MEC networks. It is worth noting that the communication, computing and caching resource allocation in LEO satellite MEC is closely coupled, which renders the joint optimization an intractable problem. More importantly, the association between IoT devices and satellites should be carefully optimized by taking the computing capacity constraint and the cache deployment strategy into account. Motivated by these observations, this paper comprehensively investigates the computation offloading strategy, the communication and computing resource allocation, as well as the caching placement in LEO satellite MEC networks. We aim to minimize the total delay of all ground IoT devices, and the formulated problem is a challenging mixed-integer nonlinear programming (MINLP) problem. To tackle this difficulty, we first propose a Lagrange dual decomposition (LDD)-based algorithm with a closed-form optimal solution. Then, a low-complexity heuristic algorithm is designed to further reduce the computational complexity. Numerical results verify the effectiveness of our proposed algorithms in comparison with the optimal exhaustive search, the full local computing and the full MEC methods. Meanwhile, we also provide insights into the impacts of the computing and caching capacities on the offloading ratio and the average delay of all ground IoT devices.

II. SYSTEM MODEL AND PROBLEM FORMULATION
In this paper, we consider a LEO satellite MEC network composed of J ground IoT devices and N visible satellites, as shown in Fig. 1. Each satellite is equipped with one MEC server to execute the computational tasks offloaded from its associated ground IoT devices and to cache the related databases. We adopt the task model $(a_k, b_k, \varepsilon_k)$, where $a_k$ represents the data size of task $k \in \{1, 2, \ldots, K\}$ collected by ground IoT devices, $b_k$ denotes the size of the database required by task k and cached in the MEC servers, and $\varepsilon_k$ is the number of CPU cycles required for executing task k. Define $\xi_{jk} \in \{0, 1\}$, where $\xi_{jk} = 1$ indicates that task k is generated by device $j \in \{1, 2, \ldots, J\}$, and $\xi_{jk} = 0$ otherwise. We assume that each device has at most one computation task to handle in each scheduling period, i.e., $\sum_{k=1}^{K} \xi_{jk} \le 1, \forall j$. Besides, we consider the binary offloading scheme for each device. Let $\alpha_{jn} \in \{0, 1\}$ represent whether the task from the j-th device is processed by the MEC server in satellite $n \in \{1, 2, \ldots, N\}$: if so, $\alpha_{jn} = 1$, and $\alpha_{jn} = 0$ otherwise. Similarly, $\beta_{jn} \in \{0, 1\}$ indicates whether the database required by the task of the j-th device is downloaded from the n-th satellite.
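The task model and binary decision variables above can be sketched numerically. All sizes and ranges below are illustrative assumptions for a toy instance, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
J, N, K = 4, 2, 4                       # devices, satellites, tasks (toy sizes)

# Task model (a_k, b_k, eps_k): offloaded data, required database, CPU cycles.
a = rng.uniform(0.5e6, 3e6, K)          # task data size a_k, bits
b = rng.uniform(0.5e6, 3e6, K)          # required database size b_k, bits
eps = rng.uniform(1e8, 1e9, K)          # CPU cycles eps_k

# xi[j, k] = 1 iff task k is generated by device j; at most one task per device.
xi = np.eye(J, K, dtype=int)
assert (xi.sum(axis=1) <= 1).all()      # sum_k xi_jk <= 1, for all j

# alpha[j, n] / beta[j, n]: binary edge-computing / database-download association.
alpha = np.zeros((J, N), dtype=int)
beta = np.zeros((J, N), dtype=int)
alpha[0, 1] = 1                         # e.g., device 0 offloads to satellite 1
beta[1, 0] = 1                          # device 1 computes locally, fetching its database from satellite 0
assert ((alpha + beta).sum(axis=1) <= 1).all()   # at most one association per device
```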

A. LOCAL COMPUTING
For each device, to begin task execution by local computing, the corresponding database cached in the MEC servers must first be downloaded from a satellite. By applying orthogonal multiple access for the downlink transmission, the data rate of device j for downloading the database from the n-th satellite can be expressed as
$r_{jn}^{D}=y_{jn}B^{D}\log_{2}\left(1+\frac{p_{jn}^{D}g_{jn}^{D}}{\sigma^{2}}\right),$
where $y_{jn}$ denotes the ratio of downlink communication bandwidth allocated to the j-th device by the n-th satellite, and $B^{D}$ represents the total downlink bandwidth. $p_{jn}^{D}$ and $g_{jn}^{D}$ are the transmission power and channel gain from satellite n to device j, respectively, and $\sigma^{2}$ is the noise power. Thus, with the database size $D_{j}=\sum_{k=1}^{K}\xi_{jk}b_{k}$ and the required CPU cycles $\varepsilon_{j}=\sum_{k=1}^{K}\xi_{jk}\varepsilon_{k}$, the total delay of the j-th device to complete the local computing process can be calculated by
$T_{jn}^{L}=\frac{D_{j}}{r_{jn}^{D}}+t_{jn}^{\mathrm{prop}}+\frac{\varepsilon_{j}}{f_{j}},$
where $f_{j}$ represents the computation capacity of the j-th device and $t_{jn}^{\mathrm{prop}}$ indicates the one-way propagation delay between device j and satellite n. The CPU power consumption of the j-th device can be modelled as $P_{j}^{L}=\kappa f_{j}^{3}$, where $\kappa$ denotes the energy consumption coefficient depending on the chip architecture [18], [19]. Thus, the energy consumption for local computing of device j can be obtained as
$E_{j}^{L}=P_{j}^{L}\cdot\frac{\varepsilon_{j}}{f_{j}}=\kappa f_{j}^{2}\varepsilon_{j}.$
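As a numerical sanity check of the local-computing model, the snippet below evaluates the downlink rate, delay and energy for one device-satellite pair. Every parameter value here is an assumption chosen for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumed values, not the paper's settings).
B_D = 200e6        # total downlink bandwidth B^D, Hz
y_jn = 0.25        # downlink bandwidth fraction y_jn
snr = 1e3          # p^D_jn * g^D_jn / sigma^2, linear SNR
D_j = 2e6          # database size D_j to download, bits
eps_j = 5e8        # CPU cycles eps_j of the task
f_j = 1e9          # device computing capacity f_j, cycles/s
t_prop = 2e-3      # one-way propagation delay, s (LEO range is 1-4 ms)
kappa = 1e-27      # chip-dependent energy coefficient kappa

r_D = y_jn * B_D * np.log2(1 + snr)         # downlink rate r^D_jn, bit/s
T_local = D_j / r_D + t_prop + eps_j / f_j  # download + propagation + local compute
E_local = kappa * f_j**2 * eps_j            # kappa*f^3 * (eps/f) = kappa*f^2*eps
```

With these numbers the local delay is dominated by the computation term eps_j / f_j, which is the regime where offloading becomes attractive.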

B. EDGE COMPUTING
For the edge computing scheme, the uplink data rate for device j to offload its task to the n-th satellite can be expressed as
$r_{jn}^{U}=x_{jn}B^{U}\log_{2}\left(1+\frac{p_{jn}^{U}g_{jn}^{U}}{\sigma^{2}}\right),$
where $x_{jn}$ represents the ratio of uplink communication bandwidth allocated to device j by satellite n, and $B^{U}$ is the available uplink bandwidth. $p_{jn}^{U}$ and $g_{jn}^{U}$ are the uplink transmission power and channel gain from device j to satellite n, respectively.
After the task offloading and edge computing are finished, the computation results should be downloaded from the satellite to the device. Since the computation result is generally of small size, the result downloading time is negligible [20]. However, due to the much longer transmission distance compared with terrestrial communications, each device-satellite link suffers a long propagation delay, which cannot be ignored. Therefore, the total delay of the j-th device for accomplishing edge computing is approximated as
$T_{jn}^{E}=\frac{U_{j}}{r_{jn}^{U}}+2t_{jn}^{\mathrm{prop}}+\frac{\varepsilon_{j}}{f_{jn}^{S}},$
where $U_{j}=\sum_{k=1}^{K}\xi_{jk}a_{k}$ and $f_{jn}^{S}$ denotes the computing resource allocated to device j by the n-th satellite.
Thus, the energy consumption of device j for task offloading can be obtained as
$E_{j}^{E}=p_{jn}^{U}\cdot\frac{U_{j}}{r_{jn}^{U}}.$
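Under the same kind of assumed parameters as before, the edge execution delay and offloading energy can be compared against local execution. All constants below are illustrative assumptions; offloading wins here simply because the assumed satellite computing share dwarfs the device CPU.

```python
import numpy as np

# Assumed parameters for one device-satellite pair (not from the paper).
B_U, x_jn = 100e6, 0.25    # uplink bandwidth B^U, Hz, and fraction x_jn
snr_U = 1e3                # p^U_jn * g^U_jn / sigma^2, linear SNR
U_j = 1e6                  # offloaded task data U_j, bits
eps_j = 5e8                # CPU cycles eps_j
f_S = 8e9                  # computing resource f^S_jn granted by the satellite
f_j = 1e9                  # local CPU capacity, for comparison
t_prop = 2e-3              # one-way propagation delay, s
p_U = 0.5                  # uplink transmit power p^U_jn, W

r_U = x_jn * B_U * np.log2(1 + snr_U)            # uplink rate r^U_jn
T_edge = U_j / r_U + 2 * t_prop + eps_j / f_S    # offload + round-trip propagation + remote compute
E_off = p_U * U_j / r_U                          # device energy spent on transmission
T_local = eps_j / f_j                            # compute-only local delay, for reference
```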

C. DATABASE CACHING
It is worth noting that, for both local and edge computing, the required database must be cached in the MEC server deployed in the associated satellite. Let $s_{nk}\in\{0,1\}$ denote the deployment of databases in the satellites: if database k is deployed in satellite n, $s_{nk}=1$; otherwise, $s_{nk}=0$. Thus, we have the placement condition
$\xi_{jk}\left(\alpha_{jn}+\beta_{jn}\right)\le s_{nk},\ \forall j,n,k.$
Besides, the volume of the cached databases and the offloaded task data must not exceed the caching capacities of the MEC servers in the satellites. Hence, for the n-th satellite, the following constraint should be satisfied [21]:
$\sum_{k=1}^{K}s_{nk}b_{k}+\sum_{j=1}^{J}\alpha_{jn}\sum_{k=1}^{K}\xi_{jk}a_{k}\le C_{n}^{S}.$
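Both caching conditions, that a database may only be fetched from a satellite that stores it, and that cached databases plus offloaded task data must fit within each satellite's storage, can be checked mechanically. The shapes and sizes in this sketch are toy assumptions.

```python
import numpy as np

J, N, K = 3, 2, 3
xi = np.eye(J, K, dtype=int)               # device j carries task j (toy mapping)
b = np.array([2e6, 1e6, 3e6])              # database sizes b_k, bits (assumed)
a = np.array([1e6, 1e6, 1e6])              # task data sizes a_k, bits (assumed)
C_S = np.array([8e6, 8e6])                 # caching capacities C^S_n (assumed)

s = np.array([[1, 1, 0],                   # s[n, k]: database k cached on satellite n
              [0, 0, 1]])
alpha = np.array([[1, 0], [0, 0], [0, 1]]) # devices 0 and 2 offload
beta = np.array([[0, 0], [1, 0], [0, 0]])  # device 1 downloads its database

# Placement condition: xi_jk * (alpha_jn + beta_jn) <= s_nk for all j, n, k.
assoc = alpha + beta                                             # shape (J, N)
placement_ok = (xi[:, None, :] * assoc[:, :, None] <= s[None, :, :]).all()

# Storage condition: cached databases + offloaded task data within C^S_n.
load = s @ b + alpha.T @ (xi @ a)          # per-satellite occupied storage, bits
storage_ok = (load <= C_S).all()
```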

D. PROBLEM FORMULATION
In this paper, we aim to minimize the total delay of all ground devices in LEO satellite MEC networks by jointly optimizing the communication, computing and caching resource allocation. Thus, the optimization problem can be formulated as
$\min_{\boldsymbol{\alpha},\boldsymbol{\beta},\mathbf{s},\mathbf{x},\mathbf{y},\mathbf{f}^{S}}\ \sum_{j=1}^{J}\sum_{n=1}^{N}\left(\beta_{jn}T_{jn}^{L}+\alpha_{jn}T_{jn}^{E}\right)$ (9)
subject to
C1: $\sum_{n=1}^{N}\left(\beta_{jn}E_{j}^{L}+\alpha_{jn}E_{j}^{E}\right)\le E_{j}^{\max},\ \forall j$,
C2: $\sum_{j=1}^{J}\alpha_{jn}f_{jn}^{S}\le F_{n}^{S},\ \forall n$,
C3: the caching capacity constraint in Section II-C, $\forall n$,
C4: $\sum_{n=1}^{N}\left(\alpha_{jn}+\beta_{jn}\right)=1,\ \forall j$,
C5: the database placement constraint in Section II-C, $\forall j,n,k$,
C6-C7: $\sum_{j=1}^{J}x_{jn}\le 1$ and $\sum_{j=1}^{J}y_{jn}\le 1,\ \forall n$,
C8: $\alpha_{jn},\beta_{jn},s_{nk}\in\{0,1\},\ \forall j,n,k$, (9i)
C9: $x_{jn},y_{jn}\in[0,1],\ \forall j,n$.
In problem (9), C1-C3 are the energy consumption, computing capacity and caching capacity constraints, respectively, where $E_{j}^{\max}$ is the maximum energy reserved for the computational tasks of device j, and $F_{n}^{S}$ and $C_{n}^{S}$ denote the computing and caching capacities of the n-th satellite, respectively. C4 indicates that each device can be associated with only one satellite for computation offloading or database downloading, and C5 is the database placement constraint.

III. PROPOSED ALGORITHMS
Since $\alpha_{jn}$, $\beta_{jn}$ and $s_{nk}$ are integer variables, problem (9) is an MINLP and finding its optimum is nontrivial. To address this challenge, in the following, we first propose an LDD-based algorithm to obtain the optimal communication, computing and caching resource allocation with closed-form solutions. Then, a heuristic algorithm is proposed to further reduce the computational complexity.
A. LDD-BASED ALGORITHM
Then, from the stationarity conditions (15a)-(15c), the optimal communication and computing resource allocation can be obtained in closed form as (16). To minimize the Lagrangian function (13), we substitute (16) into (14a) and (14b), from which the optimal computation offloading and database downloading decisions, as well as the optimal caching decision, follow directly. Finally, problem (9) can be solved by tackling the dual problem (12) with the subgradient method [23]. The LDD-based algorithm for joint communication, computing and caching resource allocation is summarized in Algorithm 1.
Remark 1: As the computation offloading, database downloading and caching decisions $\alpha_{jn}$, $\beta_{jn}$, $s_{nk}$ are discrete in practice, a non-zero duality gap may arise. Nevertheless, the optimum of the dual problem often yields good primal solutions [23]. Besides, although the dual problem (12) is convex and the subgradient method is guaranteed to converge to its optimal solution as long as the step size is sufficiently small, convergence may be quite slow when a diminishing step size is adopted [23].
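The projected subgradient update behind the LDD approach can be illustrated on a toy dual problem (not the paper's): for min x² subject to x ≥ 1, the dual function is g(λ) = λ − λ²/4 with optimum λ* = 2, and the constraint residual at the Lagrangian minimizer serves as a subgradient.

```python
# Projected subgradient ascent on the dual of: minimize x^2 subject to x >= 1.
# As Remark 1 notes, diminishing step sizes (1/t here) guarantee convergence,
# but the approach to lambda* = 2 is slow.
lam = 0.0
for t in range(1, 20001):
    x_star = lam / 2.0                 # argmin_x of the Lagrangian x^2 + lam*(1 - x)
    subgrad = 1.0 - x_star             # constraint residual = dual subgradient
    lam = max(0.0, lam + subgrad / t)  # ascent step, projected onto lam >= 0

# lam is now close to the dual optimum 2; the error decays roughly like 1/sqrt(t).
```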

B. LOW-COMPLEXITY HEURISTIC ALGORITHM
As the convergence of the LDD-based algorithm to the optimum involves a considerable number of iterations, its computational cost grows quickly with the numbers of satellites, IoT devices and tasks. Hence, we next propose a novel low-complexity heuristic algorithm to find an efficient solution for the joint communication, computing and caching resource allocation. Specifically, the objective function of problem (9) can be re-expressed as (19), where $T_{jn}^{1}(x_{jn},f_{jn}^{S})$ and $T_{jn}^{2}(y_{jn})$ denote the edge computing delay and the local computing delay, respectively, both including the propagation delay $t_{jn}^{\mathrm{prop}}$. As observed in (19), the communication and computing resource allocation $x_{jn}$, $y_{jn}$, $f_{jn}^{S}$ is coupled with the computation offloading and database downloading decisions $\alpha_{jn}$, $\beta_{jn}$, which makes finding the optimal solution computationally expensive. To address this challenge, the proposed low-complexity heuristic algorithm proceeds in three steps: we first obtain the computation offloading and database downloading decisions under an equal-share resource allocation, then determine the database deployment in satellites, and finally derive the optimal communication and computing resource allocation.
Specifically, in the first step, we assume equal-share communication and computing resource allocation, i.e., $x_{jn}^{a}=\frac{1}{J_{n}^{e}+1}$, $y_{jn}^{a}=\frac{1}{J_{n}^{l}+1}$ and $f_{jn}^{S,a}=\frac{F_{n}^{S}}{J_{n}^{e}+1}$, where $J_{n}^{e}$ and $J_{n}^{l}$ denote the numbers of IoT devices, excluding device j, that have already decided to offload tasks to or download the database from the n-th satellite, respectively. Thus, to minimize the objective function (19) while balancing the load among satellites, the computation offloading and database downloading decisions of each device are obtained in sequence via (20), where $J_{n}^{e}$ and $J_{n}^{l}$ are updated in each loop. Then, considering the limited satellite storage capacity, the database corresponding to the k-th task is cached only on the satellites that provide edge computing or database downloading services for this task, as given in (21). Finally, by substituting the obtained $\alpha'$, $\beta'$, $s'$ into problem (9), we obtain the reduced communication and computing resource allocation problem (22).
Algorithm 2 Low-Complexity Heuristic Algorithm for Joint Communication, Computing and Caching Resource Allocation
Step 1. Initialize $j=1$, $J_{n}^{e}=0$, $J_{n}^{l}=0$, $\forall n$;
Step 2. Calculate $T_{jn}^{1}$ and $T_{jn}^{2}$ with the equal-share resource allocation $x_{jn}^{a}$, $y_{jn}^{a}$, $f_{jn}^{S,a}$, $\forall n$;
Step 3. Obtain the computation offloading and database downloading decisions $\alpha_{jn}'$, $\beta_{jn}'$ via (20);
Step 4. Update $J_{n}^{e}$ and $J_{n}^{l}$, $\forall n$, i.e., $J_{n}^{e}=J_{n}^{e}+1$ if $\alpha_{jn}'>0$, and $J_{n}^{l}=J_{n}^{l}+1$ if $\beta_{jn}'>0$;
Step 5. $j=j+1$;
Step 6. Repeat Steps 2-5 until $j>J$;
Step 7. Obtain the caching decision $s'$ via (21);
Step 8. Solve problem (22) to obtain $\mathbf{x}$, $\mathbf{y}$, $\mathbf{f}^{S}$.
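Steps 1-6 of Algorithm 2 amount to a sequential greedy association under the equal-share resource assumption. The sketch below uses random stand-in delay coefficients rather than the paper's $T^{1}/T^{2}$ expressions; only the load-balancing mechanism is shown.

```python
import numpy as np

rng = np.random.default_rng(2)
J, N = 6, 2
# Per-link delay at unit load (stand-in values; Algorithm 2 would use T^1/T^2).
base_delay = rng.uniform(0.1, 1.0, (J, N))

J_e = np.zeros(N, dtype=int)            # devices already offloading to each satellite
alpha = np.zeros((J, N), dtype=int)
for j in range(J):                      # Steps 2-6: devices decide in sequence
    # Equal-share allocation: each extra device shrinks everyone's share,
    # so the effective delay scales with the current load J_e[n] + 1.
    eff_delay = base_delay[j] * (J_e + 1)
    n_best = int(np.argmin(eff_delay))  # Step 3: pick the delay-minimizing satellite
    alpha[j, n_best] = 1
    J_e[n_best] += 1                    # Step 4: update the load counter
```

The load counter makes a heavily loaded satellite progressively less attractive, which is how the heuristic balances load without any iteration over the whole device set.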
Observing problem (22), we have the following proposition.
Proposition 1: Problem (22) is a convex problem.
Proof: It can be easily verified that the objective function of problem (22) is convex with respect to $\mathbf{x}$, $\mathbf{y}$, $\mathbf{f}^{S}$. Besides, constraint C1 is convex with respect to $\mathbf{x}$, and the remaining constraints of (22) are all linear. Therefore, problem (22) is a convex problem. ■
Since problem (22) is convex, it can be readily solved via standard convex optimization algorithms. Hence, the proposed low-complexity heuristic algorithm is summarized in Algorithm 2.
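A simplified single-resource instance of the same form as problem (22), a sum of terms $c_j/x_j$ minimized over a shared bandwidth budget, admits a closed-form KKT point with $x_j \propto \sqrt{c_j}$. The check below, with assumed constants $c_j$, verifies the convexity claim numerically.

```python
import numpy as np

# Simplified instance in the shape of (22):
# minimize sum_j c_j / x_j subject to sum_j x_j <= 1, x_j > 0,
# with assumed delay constants c_j. KKT: -c_j/x_j^2 + mu = 0  =>  x_j ~ sqrt(c_j).
c = np.array([1.0, 4.0, 9.0])
x = np.sqrt(c) / np.sqrt(c).sum()     # closed-form optimum: [1/6, 1/3, 1/2]
obj = np.sum(c / x)                   # optimal objective value

# Convexity/optimality check: no other feasible point does better.
for x_try in ([0.2, 0.3, 0.5], [1/3, 1/3, 1/3], [0.1, 0.4, 0.5]):
    assert np.sum(c / np.array(x_try)) >= obj - 1e-9
```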

C. COMPLEXITY ANALYSIS
For the LDD-based algorithm, i.e., Algorithm 1, the computational complexity of each iteration is $\mathcal{O}(JN)$, and the complexity of the outer Lagrangian multiplier update via the subgradient method is a polynomial function of the dual problem dimension, i.e., $JNK+4N+J$ for (12) [24]. Hence, the complexity of updating the dual multipliers can be expressed as $\mathcal{O}\left((JNK)^{\chi}\right)$, where $\chi$ represents a positive constant [25].
The heuristic algorithm, i.e., Algorithm 2, requires no iterations, and its computational complexity mainly depends on solving the communication and computing resource allocation problem (22), which is a standard convex problem. The interior-point method can be adopted to solve problem (22), with complexity $\mathcal{O}\left(\delta^{1/2}(\delta+\gamma)\gamma^{2}\right)$, where $\delta$ denotes the number of inequality constraints of problem (22) and $\gamma$ represents the number of variables [26], [27]. Hence, the computational complexity of Algorithm 2 is $\mathcal{O}\left((JN)^{3.5}\right)$.
Therefore, we can conclude that the two proposed algorithms (Algorithms 1 and 2) require only polynomial complexity to obtain a solution of problem (9).

IV. SIMULATION RESULTS
In the simulation, a LEO satellite network is considered, whose parameter settings follow the OneWeb constellation [28]. The default computing capacity of each MEC server in the satellites is randomly taken from [5, 10] Gcycles/s, and the default computing capacity of each device is randomly chosen from [0.5, 1] Gcycles/s. Furthermore, the free-space path loss model is employed for the LEO network. For the task model, it is assumed that the data size of each task, i.e., $a_k + b_k$, is randomly taken from [0.5, 3] Mbits, where $a_k$ randomly accounts for 20%-40% of the total data size. The simulation parameters are listed in Table 1.
As the proposed LDD-based algorithm is an iterative algorithm, its convergence performance is first demonstrated in Fig. 2, where one of the Lagrangian multipliers λ is taken as an example. Note that Fig. 2 is plotted based on one random channel sample. As shown in Fig. 2, the proposed LDD-based algorithm converges fast.
Then, we compare the performance of our proposed 'LDD-based algorithm' and 'Heuristic algorithm' with the 'Exhaustive search' algorithm. As the 'Exhaustive search' algorithm requires exponential computational complexity, a small-scale problem with J = 10, N = 2 is considered as an instance. Table 2 shows that the proposed 'LDD-based algorithm' approaches the 'Exhaustive search' in terms of the total delay, and the 'Heuristic algorithm' achieves no more than 110% of the minimum total delay obtained via 'Exhaustive search'. The two proposed algorithms are also compared with two baseline algorithms [10], [29], where the 'full local computing method' and the 'full MEC method' denote that all tasks are computed by the local devices and by the MEC servers in satellites, respectively. It can be observed from Fig. 3 that all four curves show a similar trend: the average delay increases with the task data size, while the proposed heuristic algorithm and LDD-based algorithm outperform both the 'full local computing method' and the 'full MEC method', which verifies the effectiveness of our proposed algorithms. For clarity, in the following, the performance of the proposed heuristic algorithm, i.e., Algorithm 2, is presented as representative in Figs. 4-7. In Figs. 4 and 5, we demonstrate the impact of the computing capacities of the satellites and IoT devices. It is observed from Fig. 4 that the average delay of all devices declines consistently as the computing capacities of both satellites and devices increase, while the descent rate of the average delay gradually diminishes. This phenomenon can be explained by the fact that although more computing capacity reduces the task execution delay, the limited radio resources and the long distance between ground IoT devices and satellites still introduce considerable transmission delay, which cannot be reduced by improving computing capacities. On the other hand, Fig. 5 illustrates the offloading ratio of devices for edge computing under different computing capacities of the satellites and IoT devices. It is reasonable to find that a larger satellite computing capacity improves the offloading ratio of IoT devices, while increasing the computing capacity of each device reduces it.
Finally, Figs. 6 and 7 show the impact of the numbers of IoT devices and satellites on the average delay of devices and the offloading ratio for edge computing. We can observe in Fig. 6 that the average delay increases with the number of IoT devices but declines when the number of satellites goes up. This is because more IoT devices create fiercer competition for both communication and computing resources, and fewer resources allocated to each device result in larger delays for data transmission and computation execution. Besides, more satellites bring greater computing capacity and a higher degree of freedom in task offloading for edge computing. For the same reason, in Fig. 7, the offloading ratio of all IoT devices for edge computing decreases with the number of devices but goes up as the number of satellites increases.

V. CONCLUSION
In this paper, we have investigated the joint communication, computing and caching resource allocation in LEO satellite MEC networks, where an LDD-based algorithm and a low-complexity heuristic algorithm were proposed, both of which require only polynomial computational complexity. Numerical results have demonstrated the effectiveness of our proposed algorithms compared to the optimal exhaustive search, the full local computing and the full MEC methods. It was found that the average delay of IoT devices declines consistently as the number and computing capacities of satellites and IoT devices increase. Meanwhile, the offloading ratio of IoT devices for edge computing decreases with the number of devices but goes up as the number of satellites increases.