Heuristics for scheduling data gathering with limited base station memory

In this paper, we analyze scheduling in data gathering networks with limited base station memory. The network nodes hold datasets that have to be gathered and processed by a single base station. A dataset transfer can only start if a sufficient amount of memory is available at the base station. As soon as a node starts sending a dataset, the base station allocates a block of memory of corresponding size. The memory is released when computations on the dataset finish. We prove that minimizing the total data gathering and processing time is strongly NP-hard. As this problem is a special case of a specific resource constrained flow shop scheduling problem, for which an exact exponential algorithm is known, we propose several simple polynomial-time heuristics and two groups of local search algorithms, and test their performance in computational experiments. We show that the local search algorithms produce very good schedules, and one of the simple heuristics delivers solutions of comparable quality in a very short time.


Introduction
Data gathering networks find a wide range of applications. Many computational tasks can be divided between a set of computers running in parallel. Each such worker produces some results, and all these data have to be collected on a single machine for aggregation, further processing and storage. Moreover, data gathering wireless sensor networks are used in environmental, military, health and home applications (Akyildiz et al. 2002). The efficiency of collecting the data has an impact on the performance of the whole distributed application. Therefore, scheduling for data gathering networks is an important research area. Moges and Robertazzi (2006) and Choi and Robertazzi (2008) studied data gathering networks on the grounds of divisible load theory. The analyzed problem was to assign the amounts of measured data to the network nodes and organize the communications in the network so as to minimize the total time of sensing and gathering the data. Later on, scheduling algorithms for networks with fixed sizes of data gathered by individual nodes were proposed. The analyzed objectives included maximizing the network lifetime (Berlińska 2014), minimizing the time of data gathering (Berlińska 2015; Luo et al. 2018a, b), and minimizing the maximum lateness (Berlińska 2018a).
In this paper, we analyze gathering data in networks with limited base station memory. Each worker holds a dataset that should be passed to the base station for processing. A dataset being transferred to, or processed by the base station, occupies a block of memory of a given size. The total size of coexisting memory blocks cannot exceed the base station buffer capacity. Our goal is to gather and process all data within the minimum possible time. We prove that this problem is strongly NP-hard. Then, we propose a group of polynomial-time heuristics and local search algorithms. Their performance and sensitivity to changes of instance parameters are tested in a series of computational experiments.
The rest of this paper is organized as follows. In Sect. 2 we describe the network model and formulate the scheduling problem. In Sect. 3 related work is outlined. The computational complexity of our problem is analyzed in Sect. 4. Heuristic algorithms are proposed in Sect. 5, and the results of computational experiments on their performance are presented in Sect. 6. The last section is dedicated to conclusions.

Problem formulation
We study a data gathering network consisting of m worker nodes P_1, ..., P_m, and a single base station. Node P_i has to transfer dataset D_i of size α_i directly to the base station. When dataset D_i starts being sent, a memory block of size α_i is allocated at the base station. The base station has limited memory of size B ≥ max_{1≤i≤m} α_i. The transfer of dataset D_i may start only if the amount of available memory is at least α_i. Sending dataset D_i requires time Cα_i. After dataset D_i is transferred, it has to be processed by the base station, which takes time Aα_i. Datasets are processed in the order in which they were received, without unnecessary delay. As soon as processing a dataset finishes, the corresponding memory block is released. Both communication and computation on a dataset are non-preemptive. The base station can communicate with at most one node at a time, and it can process at most one dataset at a time. The scheduling problem is to choose a sequence of dataset transfers such that the total data gathering and processing time is minimized.
Note that since at most one dataset can be transferred at a time, our data gathering network is a two-machine permutation flow shop, where the communication network is the first machine, and the base station is the second machine. For each dataset D_i, we have a corresponding job i consisting of two operations: sending dataset D_i in time p_{1i} = Cα_i, and processing this dataset in time p_{2i} = Aα_i. Executing job i requires α_i units of the base station memory buffer.
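The schedule for a fixed dataset permutation is fully determined by the rules above: each transfer starts as soon as the channel is free and a large enough memory block can be allocated, and datasets are processed in arrival order without unnecessary delay. A minimal sketch of the resulting makespan computation (a simulation of the model, not the authors' implementation; all parameter values in the example are illustrative):

```python
def makespan(order, alpha, A, C, B):
    """Makespan of the schedule defined by dataset permutation `order`.

    A memory block of size alpha[i] is held from the start of the
    transfer of D_i until its processing completes. Assumes
    B >= max(alpha), as in the model.
    """
    comm_end = 0.0   # time when the communication channel becomes free
    proc_end = 0.0   # time when the base station becomes free
    active = []      # (release_time, size) of blocks still held, FIFO order
    used = 0.0       # total size of currently held memory blocks
    for i in order:
        a = alpha[i]
        start = comm_end
        # wait for memory releases until a block of size a fits
        while used + a > B:
            release, size = active.pop(0)
            used -= size
            start = max(start, release)
        comm_end = start + C * a           # transfer of D_i
        proc_start = max(proc_end, comm_end)
        proc_end = proc_start + A * a      # processing of D_i
        used += a
        active.append((proc_end, a))       # block released when processing ends
    return proc_end

# Two unit datasets with A = C = 1: with B = 2 transfer and processing
# overlap, with B = 1 every dataset is handled separately.
print(makespan([0, 1], [1, 1], 1, 1, 2))  # 3.0
print(makespan([0, 1], [1, 1], 1, 1, 1))  # 4.0
```

With B = 1 the second example attains the worst case (A + C)(α_1 + α_2), since no two memory blocks can coexist.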
Related work

In the classical two-machine flow shop with an intermediate buffer, a limited number of jobs can be stored in the buffer after completing on the first machine and before being started on the second one. For the case when no job can be stored in a buffer (called the no-wait problem), a polynomial-time algorithm proposed by Gilmore and Gomory (1964) for solving a specific TSP can be used to obtain optimum solutions. The case with an unlimited buffer can also be solved in polynomial time, using Johnson's rule (Johnson 1954). The remaining case of positive, finite buffer size was shown to be strongly NP-hard by Papadimitriou and Kanellakis (1980).
The problem analyzed in this paper significantly differs from the one described above. Firstly, in our case the base station buffer can hold a fixed amount of data rather than a fixed number of datasets. For example, in a buffer of size 4, we can store only one dataset of size 4, but up to 4 datasets of size 1. Secondly, the buffer is occupied by a dataset not only between, but also during the two operations of its transfer and processing.
A flow shop with such a quantity-based buffer was first studied by Lin et al. (2009). The analyzed problem was to optimize the object sequence in a prefetch-enabled TV-like presentation. It was assumed that the execution time of the first operation of a job is proportional to its buffer requirement, but the execution time of the second operation may be arbitrary. Minimizing the schedule length was proved to be strongly NP-hard, and a branch and bound algorithm was proposed. The performance of this algorithm was further improved by adding new lower bounds by Kononov et al. (2012). Integer linear programming formulations and variable neighborhood search algorithms were proposed by Kononova and Kochetov (2013). The existence of optimal permutation schedules for this problem was analyzed by Fung and Zinder (2016). Lin and Huang (2006) analyzed a relocation problem with a second working crew for resource recycling. In their work, each job was executed on two machines in a permutation flow shop style. The execution time of job i on the first machine was denoted by p_i, and on the second machine by q_i. Job i required α_i units of a resource, and returned β_i units of this resource on completion. The goal was to minimize the makespan while not exceeding the available amount of the resource. This problem, denoted as F2|rp|C_max, was shown to be strongly NP-hard, and heuristic algorithms for solving it were proposed. The problem was further analyzed by Cheng et al. (2012), who formulated it as an integer linear program. Complexity results for a number of special cases were presented. The authors also studied the non-permutation version of the problem.
Since in our problem job i requires α_i units of memory and returns the same amount after completion, we solve a special case of the permutation version of F2|rp|C_max, which can be denoted as F2|rp, α_i = β_i|C_max.

In our preliminary work on the problem studied here (Berlińska 2018b), we proposed several heuristics and tested the quality of delivered solutions in computational experiments. In this paper, we design more algorithms and present their experimental comparison for a wider variety of instance parameters.
Complexity analysis

In this section, we prove that our scheduling problem is computationally hard, using a reduction from the following strongly NP-complete problem.

Bin-Packing: Given positive integers V and k, and a set of n positive integers {x_1, ..., x_n}, is it possible to partition the index set N = {1, 2, ..., n} into k disjoint sets N_1, N_2, ..., N_k, such that Σ_{i∈N_j} x_i ≤ V for each 1 ≤ j ≤ k?

Proposition 1 Makespan minimization in data gathering networks with limited base station memory is strongly NP-hard even if A = C = 1.
Proof It is clear that the decision version of our problem belongs to NP. We perform a pseudo-polynomial reduction from the Bin-Packing problem. Given an instance of Bin-Packing, we construct the following instance of our scheduling problem. The network has m = n + k + 1 workers, memory of size B = 2V + 1, and A = C = 1. There are n ordinary datasets D_1, ..., D_n of size α_i = x_i for i = 1, ..., n, and k + 1 enforcer datasets D_{n+1}, ..., D_{n+k+1} of size α_i = V + 1 for i = n + 1, ..., n + k + 1. We will show that a schedule not longer than T = 2(k + 1)(V + 1) exists if and only if the corresponding instance of Bin-Packing is a "yes"-instance.
Let us first assume that a schedule of length at most T exists. Note that the base station buffer cannot hold two or more enforcer datasets at a time, since 2(V + 1) > B. Thus, all enforcer datasets have to be sent and processed in disjoint time intervals. Therefore, the time required for transferring and processing all enforcer datasets is at least 2(k + 1)(V + 1) = T. Consequently, in a schedule of length at most T, the j-th enforcer dataset is transferred in interval [2j(V + 1), (2j + 1)(V + 1)) and processed in interval [(2j + 1)(V + 1), (2j + 2)(V + 1)), for j = 0, 1, ..., k (see Fig. 1). All the ordinary datasets that will be processed in a given interval [2j(V + 1), (2j + 1)(V + 1)) have to be sent before time 2j(V + 1), when the transfer of an enforcer dataset starts. Hence, at time 2j(V + 1) all the ordinary datasets assigned to the analyzed interval coexist in the base station memory with the block allocated for the enforcer dataset. Thus, their total size cannot exceed B − (V + 1) = V. Therefore, the sizes x_i of the ordinary datasets processed in interval [2j(V + 1), (2j + 1)(V + 1)) fit together in a bin of size V, for each j = 1, ..., k. Hence, we have the required feasible solution to the Bin-Packing problem.
Conversely, assume that the required partition N_1, ..., N_k exists. We construct a schedule in which the j-th enforcer dataset is transferred in interval [2j(V + 1), (2j + 1)(V + 1)) and processed in interval [(2j + 1)(V + 1), (2j + 2)(V + 1)), for j = 0, 1, ..., k. For j = 1, ..., k, the ordinary datasets with indices in N_j are sent one after another, starting at time (2j − 1)(V + 1), i.e., during the processing of the (j − 1)-th enforcer dataset. Since Σ_{i∈N_j} x_i ≤ V < V + 1, these transfers finish before time 2j(V + 1), and the total memory usage never exceeds (V + 1) + V = B. The ordinary datasets from N_j are then processed in interval [2j(V + 1), (2j + 1)(V + 1)), and the obtained schedule has length exactly T.

All in all, we showed that a feasible solution of a given instance of Bin-Packing exists if and only if there exists a schedule with makespan not greater than T for the corresponding instance of our scheduling problem, which completes the proof.

To conclude this section, we indicate two special cases of our problem that can be easily solved in polynomial time. If B < min_{i≠j} {α_i + α_j}, then no two datasets can overlap in the base station memory, and each dataset permutation yields an optimum schedule of length (A + C) Σ_{i=1}^m α_i. If B ≥ Σ_{i=1}^m α_i, then the optimum schedule is constructed in O(m log m) time by Johnson's algorithm (Johnson 1954). Note that the above constraint on B is very rough. In many cases it is enough that B ≥ max_{i≠j} {α_i + α_j}. In order to check if B is large enough, we can compute the schedule length for the dataset permutation returned by Johnson's algorithm and compare it with the makespan obtained for the same sequence and an infinite base station buffer. If these two makespans are equal, then it is certain that the optimum schedule has been found.
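The check based on Johnson's algorithm can be sketched as follows. The helper `makespan` simulates the memory-limited schedule as defined in Sect. 2; all function names and example values are illustrative, and equality of the two makespans only certifies optimality, it is not necessary for it:

```python
def johnson_order(alpha, A, C):
    # For p_{1i} = C*alpha_i and p_{2i} = A*alpha_i, Johnson's rule reduces
    # to sorting by size: ascending if A >= C, descending otherwise.
    return sorted(range(len(alpha)), key=lambda i: alpha[i], reverse=A < C)

def flow_shop_makespan(order, alpha, A, C):
    # Two-machine flow shop makespan with an infinite buffer
    c1 = c2 = 0.0
    for i in order:
        c1 += C * alpha[i]               # completion on the channel
        c2 = max(c2, c1) + A * alpha[i]  # completion on the base station
    return c2

def makespan(order, alpha, A, C, B):
    # schedule simulation with memory limit B (assumes B >= max(alpha))
    comm_end = proc_end = used = 0.0
    active = []  # (release_time, size) of held memory blocks, FIFO
    for i in order:
        a = alpha[i]
        start = comm_end
        while used + a > B:              # wait for memory releases
            release, size = active.pop(0)
            used -= size
            start = max(start, release)
        comm_end = start + C * a
        proc_end = max(proc_end, comm_end) + A * a
        used += a
        active.append((proc_end, a))
    return proc_end

def johnson_is_optimal(alpha, A, C, B):
    """True if Johnson's sequence provably yields an optimum schedule."""
    order = johnson_order(alpha, A, C)
    return makespan(order, alpha, A, C, B) == flow_shop_makespan(order, alpha, A, C)

print(johnson_is_optimal([1, 2, 3], 2, 1, 100))  # True: buffer is large enough
print(johnson_is_optimal([1, 1], 1, 1, 1))       # False: memory may be binding
```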

Heuristic algorithms
Before we start describing the proposed algorithms, let us observe the following symmetry property. Suppose that A = kC, where k ≥ 1, and Σ is a schedule of length T for given values of B and (α_i)_{i=1}^m. Then, by reversing schedule Σ and swapping communications with computations, we obtain a schedule of length T for the same values of B and (α_i)_{i=1}^m, computation rate A' = C, and communication rate C' = A = kA'. Therefore, from now on we will assume without loss of generality that A ≥ C.
As our problem is a special case of the permutation version of F2|rp|C_max, we do not propose an exact exponential algorithm, since the ILP formulation given by Cheng et al. (2012) can be used to find optimum schedules. However, this approach is not practical because of its high computational complexity. Therefore, we construct fast heuristic algorithms, and analyze the quality of delivered solutions.
We start with a group of "simple" heuristics, each of which constructs a single dataset sequence in O(m log m) time. Algorithm Inc sorts the datasets in order of increasing size. Note that since we assumed A ≥ C, such a dataset sequence would be returned by Johnson's algorithm, and hence, algorithm Inc delivers optimum solutions if the memory limit B is big enough. Algorithm Alter starts with sending the smallest dataset, then the greatest one, the second smallest, the second greatest, etc., thus alternating big and small datasets. The idea behind this approach is to avoid sending the biggest datasets one after another. Indeed, if the base station memory is not very large, big datasets will not fit in the buffer together, and hence, idle times due to waiting for memory release will occur. Therefore, we aim here at forming pairs of consecutively sent datasets, such that the total size of each pair is not too big and fits in a medium-sized buffer. Algorithm LF constructs a schedule step by step, always choosing to send the largest dataset that fits in currently available memory. If all datasets are too big, the communication network is idle until a sufficient amount of memory is released so that some dataset can be transferred. Finally, algorithm Rnd constructs a random dataset sequence. This algorithm is used mainly to verify if the remaining heuristics perform well in comparison to what can be achieved without effort.
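The four simple heuristics can be sketched as functions returning a dataset permutation; LF interleaves sequence construction with a simulation of the schedule. This is an illustrative sketch with our own function names, not the authors' implementation:

```python
import random

def inc_order(alpha):
    """Inc: datasets in order of increasing size."""
    return sorted(range(len(alpha)), key=lambda i: alpha[i])

def alter_order(alpha):
    """Alter: smallest, largest, second smallest, second largest, ..."""
    s = inc_order(alpha)
    order = []
    lo, hi = 0, len(s) - 1
    while lo <= hi:
        order.append(s[lo]); lo += 1
        if lo <= hi:
            order.append(s[hi]); hi -= 1
    return order

def rnd_order(alpha, seed=None):
    """Rnd: a random dataset sequence (baseline)."""
    order = list(range(len(alpha)))
    random.Random(seed).shuffle(order)
    return order

def lf_order(alpha, A, C, B):
    """LF: greedily send the largest dataset fitting in available memory.

    Assumes B >= max(alpha), as in the model."""
    remaining = set(range(len(alpha)))
    order = []
    t = 0.0            # time when the channel is free
    proc_end = 0.0     # time when the base station is free
    active = []        # (release_time, size) of held memory blocks, FIFO
    used = 0.0
    while remaining:
        # release memory blocks of datasets already processed by time t
        while active and active[0][0] <= t:
            used -= active.pop(0)[1]
        fitting = [i for i in remaining if used + alpha[i] <= B]
        if not fitting:
            # all remaining datasets are too big: wait for the next release
            release, size = active.pop(0)
            t = release
            used -= size
            continue
        i = max(fitting, key=lambda j: alpha[j])
        order.append(i)
        remaining.remove(i)
        t += C * alpha[i]                        # transfer of D_i
        proc_end = max(proc_end, t) + A * alpha[i]
        used += alpha[i]
        active.append((proc_end, alpha[i]))
    return order
```

For example, with sizes [1, 2, 3] and a large buffer, `lf_order` sends the datasets in decreasing size, while `inc_order` sends them in increasing size.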
The second group of proposed heuristics are local search algorithms with neighborhoods based on dataset swaps. Each of these algorithms starts with a schedule generated by one of the simple heuristics, and then applies the following local search procedure. For each pair of datasets, we check whether swapping their positions in the current sequence decreases the schedule length. The swap that results in the shortest schedule is executed, and the search continues until no further improvement is possible. These algorithms are called IncSwap, AlterSwap, LFSwap and RndSwap. The beginning of each name shows which simple heuristic is used to construct the initial schedule.
Local search algorithms based on swaps were used for solving problem F2|rp|C_max by Lin and Huang (2006), and they were shown to deliver good solutions. However, using a different neighborhood in a local search algorithm may yield even better results. Therefore, we also analyze local search algorithms based on dataset shifts. Here, instead of swapping a pair of datasets, we move a single dataset into a different position in the schedule. Reflecting the method used for generating the initial schedule, these algorithms are called IncShift, AlterShift, LFShift and RndShift. Lin and Huang (2006) proposed three heuristics H_1, H_2, H_3 for solving F2|rp|C_max. However, for the special case of this problem analyzed in our work, both H_1 and H_2 yield the same results as IncSwap, and H_3 is equivalent to RndSwap. Thus, it is not necessary to additionally include algorithms H_1, H_2 and H_3 in our study.
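Both neighborhood types fit into one steepest-descent loop. The sketch below is our own illustration of the procedure (the `makespan` helper simulates the memory-limited schedule for a fixed permutation, as defined in Sect. 2):

```python
def makespan(order, alpha, A, C, B):
    # schedule simulation for a fixed dataset permutation
    # (assumes B >= max(alpha), as in the model)
    comm_end = proc_end = used = 0.0
    active = []  # (release_time, size) of held memory blocks, FIFO
    for i in order:
        a = alpha[i]
        start = comm_end
        while used + a > B:              # wait for memory releases
            release, size = active.pop(0)
            used -= size
            start = max(start, release)
        comm_end = start + C * a
        proc_end = max(proc_end, comm_end) + A * a
        used += a
        active.append((proc_end, a))
    return proc_end

def local_search(order, alpha, A, C, B, neighborhood="shift"):
    """Steepest-descent local search over swap or shift neighborhoods."""
    best, best_len = list(order), makespan(order, alpha, A, C, B)
    n = len(best)
    improved = True
    while improved:
        improved = False
        cand, cand_len = None, best_len
        for i in range(n):
            for j in range(n):
                if i == j or (neighborhood == "swap" and j <= i):
                    continue
                o = list(best)
                if neighborhood == "swap":
                    o[i], o[j] = o[j], o[i]      # swap two datasets
                else:
                    o.insert(j, o.pop(i))        # move dataset i to position j
                length = makespan(o, alpha, A, C, B)
                if length < cand_len:
                    cand, cand_len = o, length
        if cand is not None:                     # apply the best move found
            best, best_len, improved = cand, cand_len, True
    return best, best_len
```

Starting from an Inc, Alter, LF or Rnd sequence yields the corresponding *Swap or *Shift variant.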
To finish this section, let us note that regardless of the selected dataset sequence, the schedule length never exceeds (A + C) Σ_{i=1}^m α_i. Moreover, the total computation time A Σ_{i=1}^m α_i is a lower bound on the makespan. Thus, the approximation ratio of any algorithm for solving our problem is at most 1 + C/A. Hence, we can say that our problem is easier to solve when A is large in comparison to C.

Experimental results
In this section, we compare the quality of delivered solutions and the computational costs of the proposed heuristics. The algorithms were implemented in C++ and run on an Intel Core i7-7820HK CPU @ 2.90 GHz with 32 GB RAM. Integer linear programs were solved using Gurobi (Gurobi Optimization 2016). The test instances were constructed as follows. The network parameters were C = 1 and A ∈ {1, 2}. We used only two values of parameter A because, as we explained at the end of Sect. 5, instances with A ≫ C are less demanding. We generated "small" tests with m = 10 and "big" instances with m = 100. Dataset sizes α_i were chosen randomly from the interval [1, 1 + δ_α], where δ_α ∈ {0.5, 1, 1.5, 2}. Note that if δ_α is very small, then all datasets have similar sizes, and in consequence, the differences between makespans obtained for various dataset sequences are also small. For a given set of sizes α_i, we computed the minimum amount of memory that makes it possible to hold more than one dataset in the buffer, B_min = min_{i≠j} {α_i + α_j}. Then, the memory limit was set to B = δ_B B_min, where δ_B = 1 + i/10 for i = 1, 2, ..., 8.
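The instance generation procedure above can be sketched as follows; the function and parameter names are our own, and the experiments in the paper were of course run with the authors' C++ code:

```python
import random

def generate_instance(m, delta_alpha, delta_B, seed=None):
    """Random instance following the experimental setup:
    sizes uniform on [1, 1 + delta_alpha], B = delta_B * B_min."""
    rng = random.Random(seed)
    alpha = [rng.uniform(1.0, 1.0 + delta_alpha) for _ in range(m)]
    # B_min = min over i != j of (alpha_i + alpha_j),
    # i.e., the sum of the two smallest dataset sizes
    b_min = sum(sorted(alpha)[:2])
    return alpha, delta_B * b_min

# e.g., a "small" instance with delta_alpha = 1 and delta_B = 1.3
alpha, B = generate_instance(m=10, delta_alpha=1.0, delta_B=1.3, seed=7)
```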
The quality of schedules constructed for small instances was measured by the ratio T/OPT, where T is the makespan obtained by a given heuristic, and OPT is the optimum schedule length delivered by the ILP formulation proposed by Cheng et al. (2012) for the permutation version of F2|rp|C_max. Finding optimum solutions for big instances was not possible because of the exponential complexity of the exact algorithm. Therefore, we computed a lower bound LoBo on the schedule length by disregarding the memory limit and solving the resulting instance of problem F2||C_max using Johnson's rule. Thus, the measure of schedule quality for the big instances is T/LoBo. Each point on the following charts represents an average over 100 instances.
In the first set of experiments, we analyze the influence of the size of available memory on the obtained solutions, for small instances with A = 1 and δ_α = 1. The simple heuristics Inc, Alter and Rnd deliver the worst results (see Fig. 2a). Still, it is interesting to identify the factors determining the performance of Inc and Alter. When the memory limit is very small, algorithm Inc performs well, because it places the smallest datasets together, so that they can overlap in the base station memory. Algorithm Alter performs worse even than Rnd, because a pair of a big and a small dataset does not fit in the buffer. When δ_B ∈ [1.4, 1.6], algorithm Inc performs badly. Indeed, as big datasets are sent one after another, they have to be transferred and processed separately, instead of overlapping with smaller datasets. Algorithm Alter can now create a lot of pairs of overlapping datasets, and hence, its performance is better. When the buffer becomes very big (δ_B ≥ 1.7), idle times due to waiting for a memory release are rarely necessary, and the schedule length can often be minimized using Johnson's rule. Therefore, Inc obtains good results again, while Alter loses performance. It can be seen that algorithm LF performs best of all simple heuristics. The schedules it constructs are almost as good as the ones found by the local search algorithms. The maximum average error of LF (reached for δ_B = 1.3) is below 3%. When the memory limit is big enough (δ_B ≥ 1.6), LF almost always finds optimum solutions. Very good results are obtained by the local search algorithms. A closer look at their performance is presented in Fig. 2b. It can be seen that the instances with a very small or very big memory limit are the easiest to solve. It is worth noting that the algorithms based on shifting datasets in the sequence perform slightly better than their counterparts using swaps.
However, as in most cases all local search algorithms deliver solutions at most 1% longer than the optimum, the differences are not very significant. The best results for this test set were produced by algorithm IncShift, which found optimum schedules for all 800 instances.
The results obtained for m = 10, A = 1 and δ_α = 2 are shown in Fig. 3. A bigger δ_α means that the dataset sizes are more diversified, and larger datasets appear than for δ_α = 1. Thus, increasing δ_B by 0.1 results in a smaller change of the buffer size in relation to the largest dataset size. As a result, algorithm Alter now reaches its worst point at δ_B = 1.3 instead of 1.2, and it is not yet outperformed by Inc at δ_B = 1.7, 1.8. The schedules delivered by LF are still very good, with the maximum average error below 3.5%. Among the local search algorithms, the best results are again obtained by IncShift, although it does not always find optimum solutions for instances with big δ_B. For δ_B ∈ [1.5, 1.7], algorithm IncShift is even outperformed by LFShift.
The effects of increasing A to 2 (for m = 10, δ_α = 1) are presented in Fig. 4. It can be seen that the results of all heuristics improve in comparison to the case of A = 1 (Fig. 2). This is caused by the decrease of the upper bound 1 + C/A on the schedule quality. The instances with δ_B ≥ 1.5 are now particularly easy to solve for all algorithms except Inc and Rnd. The relationships between the results delivered by different algorithms are similar to the ones observed in the previous experiments. The best results overall are again achieved by IncShift. Figure 5 shows the quality of solutions obtained for big instances. Recall that the measure of schedule quality is now the relative distance from the lower bound rather than from the optimum solution. When the base station buffer is big, the lower bound is close to the actual optimum, but when the memory is small, the distance between them may grow up to 1 + C/A. Hence comes the slope of the lines in Fig. 5. Taking that into account, the results are similar to the ones obtained for the small instances. By comparing Fig. 5a, b, where A = 1 and A = 2, respectively, we confirm that although the value of A determines the maximum possible errors of the heuristics, it has almost no influence on the relationships between the individual algorithms.
In Fig. 6 we present the trade-off between schedule quality and algorithm execution time for all tests with m = 100 and A = 1. Here, each point represents an average over 3200 instances. All the simple heuristics are very fast, although Alter seems to be slower than the other ones. Algorithms Inc, Alter and Rnd produce results of similar average quality. This is explained by the fact that each of the heuristics Inc and Alter performs much better than Rnd on instances with some values of δ_B, but is counterproductive (i.e., achieves worse results than Rnd) on the remaining tests. The results returned by heuristic LF are as good as those delivered by the local search algorithms. The average quality of LF schedules is even a little better than that of both variants of local search starting with the Rnd or Alter sequence. Local search algorithms based on shifts are slower than their counterparts using swaps. The running time of local search also depends on the initial schedule. Algorithms starting with the LF schedules are the fastest, and the order of the remaining algorithms is: Inc, Alter, Rnd. Small differences in the quality of the local search algorithms are also visible. The best results are found by starting with LF or Inc schedules. Using Alter for generating the initial solution does not yield such good results, and Rnd is even worse. Thus, choosing a bad initial sequence results both in a longer execution time and in worse solutions found. Still, the differences between the average results obtained by LF and all local search algorithms are very small. As LF is several orders of magnitude faster than local search, we recommend it as the best heuristic in terms of the trade-off between quality and time.

Conclusions
In this work, we analyzed scheduling data gathering with limited base station memory. We proved that this problem is strongly NP-hard. Simple heuristics and local search algorithms were proposed, and their performance was tested in computational experiments. We showed that the amount of available memory is the key parameter determining the performance of the heuristics. In general, it is easier to find good solutions if the memory limit is very small or very large. However, algorithm Alter performs best for medium values of the buffer size. Increasing the time A of processing one unit of data causes all algorithms to return better results, as the upper bound 1 + C/A on the schedule quality becomes smaller. Changing the dispersion of dataset sizes does not significantly influence the performance of the heuristics. The average quality of results delivered by algorithms Inc and Alter is close to that of the random algorithm Rnd. Thus, these heuristics should not be used for solving our problem. In contrast, algorithm LF produces very good schedules. Local search turned out to be a very effective way of constructing good solutions. It finds high-quality schedules even if we choose a bad initial dataset sequence. Local search using dataset shifts is slower than that based on swaps, but it generates slightly better solutions. Still, the results obtained by the simple LF heuristic are close to those delivered by the local search algorithms, and the running time of LF is much shorter. Thus, LF offers the best trade-off between the quality of results and the algorithm execution time, and is recommended as a very good tool for solving our problem. When time is not an important factor, algorithm IncShift should be used, since its average results are the best among all analyzed heuristics, and it very often finds optimum schedules for the small instances.
In the future, the existence of theoretical guarantees (tighter than 1 + C/A) on the approximation ratios of the presented heuristics should be investigated. A possible extension of this work is to analyze data gathering networks in which the size of available base station memory varies in time. Such changes may be caused by other applications and services running on the base station. Constructing good scheduling algorithms for such systems will be an interesting challenge.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.