ALGORITHMS FOR OPTIMIZING FLEET STAGING OF AIR AMBULANCES

In a disaster, the speed of an air ambulance response is often the determining factor in patient survival. Geographical remoteness and constraints on vehicle placement compound the difficulty, making the positioning of responders a critical decision. Using real mission data, this research formulated an optimal coverage problem as an integer linear program. For an accurate baseline, the developed model was implemented in the Gurobi optimizer and timed for performance. A solution was then created that applies base ranking followed by both local search and Tabu search-based algorithms. The local search algorithm proved insufficient for maximizing coverage, while the Tabu search achieved near-optimal results: the total vehicle travel distance was minimized and the runtime significantly outperformed that of Gurobi. Furthermore, variations utilizing parallel CUDA processing further decreased the algorithmic runtime. These proved superior as the number of test missions increased, while maintaining the same minimized distance.


Introduction
Rapid disaster response can be the difference in determining a patient's survival. In urban environments, ambulance retrieval is a standard procedure; however, the process becomes increasingly complicated as incidents grow more remote and populations more dispersed. As such, placement of responders for optimal area coverage is a critical decision. Additionally, many air ambulance services operate a comparatively small fleet over vast areas [1]. Given such a small contingent, proper placement of these vehicles becomes all the more crucial.
Multiple solutions have been proposed in previous works, with some implementing near-optimal metaheuristics [2] or concentrating specifically on scheduling [3]. For this research, the problem was formulated to maximize coverage while minimizing the solution runtime. Real-mission data aided in developing a more realistic scenario than relying on synthetic generation. Each mission began at a potential base, performed a pickup, dropped off a patient, and then returned to the same base. The primary purpose was determining at which bases the vehicles should be placed to maximize coverage. While exact methods are an option when time is not a factor, in emergency situations there are instances where vehicles must be repositioned quickly to meet demand; in these cases, metaheuristics are far more practical. Furthermore, many past works only considered sequential implementations, whereas the Compute Unified Device Architecture (CUDA) platform provided an opportunity for further improvement through parallelization. This research aimed to model the problem in terms of integer linear programming and then use custom algorithms to achieve a near-optimal solution.
The remainder of this paper is arranged as follows. Section 2 describes the related work, emphasizing previous or similar techniques for addressing the topic. Section 3 presents the problem domain and description, along with the constraints. Section 4 describes the base ranking, local search, and Tabu search-based solutions. Sections 5 and 6 explain the results and conclude the paper.

arXiv:2001.05291v1 [cs.AI] 10 Jan 2020

RELATED WORK
The healthcare industry is only one candidate when considering the optimization of air assets. The present work explores the minimization of total travel distance for placement; however, others have examined cost in conjunction with distance. Fernandez-Cuesta et al. [4] looked at this problem from the perspective of the oil industry and suggested two heuristics for optimizing the positioning of a fleet of helicopters. Placement of vehicles is considered an NP-hard problem, making it impractical to compute an optimal solution or to rely on purely iterative techniques. Dong et al. [5] confronted this issue with a relaxed approach, solving for a subset of decision variables and then locking the solved variables. An elementary approach was then utilized to resolve the remaining variables, minimizing operating costs while encouraging maximized profits on fleet composition and service levels.
Regarding fleet management, an optimized solution must provide coverage with a minimized retrieval distance and, potentially, a minimal resource cost. Using approximate dynamic programming, Schmid [6] solved a dynamic ambulance reallocation problem. The approach satisfied two of the previously mentioned criteria (coverage and cost) by relocating among a fixed set of stations, with the added benefit of reducing the cost of subsequent ambulance requests. Maleki et al. [7] tackled a similar relocation problem, but additionally attempted to minimize the total transit time of ambulances on succeeding calls. In [8], the authors confronted an added constraint relating to time windows. The objective was similar, providing minimum coverage at a reduced cost, yet the approach instead used a hybrid metaheuristic: based on mixed-integer programming formulations, a hybrid evolutionary search algorithm (HESA) was developed. The algorithm shared similarities with genetic algorithms, while also using an embedded local search operator to improve offspring generated by the crossover operation.
Empirical data allows a model to make use of past trends for present solutions. Utilizing information such as travel time, dispatch delay, and pickup time, McCormack et al. [9] developed a simulation for land-based ambulances; for the actual optimization, the simulation relied on a genetic algorithm for fleet assignment. Similarly, the work of Zhen et al. [10] took a simulation approach with a genetic algorithm for optimization, looking to maximize the expected survival probability across variable patient classes; the authors applied this tactic to develop a model for actual deployment and redeployment. As with the previously mentioned works, Pond et al. [2] implemented a genetic algorithm, although they directed the solution space towards air ambulance vehicle placement. That paper asserted that population density alone was not an accurate enough determinant for vehicle placement and instead relied on a large volume of past data to reach a resolution. Other generalized solutions for these types of set-covering problems have implemented fuzzy parameters, such as those in [11,12].
Local and Tabu search are well-documented methodologies for resolving optimization and set coverage problems. Literature on these algorithms is substantial and is only briefly touched upon in this section. In local search, small localized changes are made repeatedly until a solution is approximately optimal. Tabu search can be seen as an extension of local search-based algorithms: it excludes recently explored areas of the search space and can allow moves that do not improve the objective. A list of previous states is held in a Tabu list, which prevents the search from settling into a local optimum; it records only recent moves and disallows a solution that has been explored within a particular period. The list clears after a predetermined number of iterations, although its size can vary depending on the problem [13]. In [14], Zimmermann exploited local search to resolve a mobile facility location problem, whereby clients were assigned to existing facilities so that the total movement and client travel costs were minimized. The model reduced the problem into smaller solvable subproblems and then implemented a modified local search for optimization. Gendreau et al. designed and modeled an ambulance location problem, resolving it with Tabu search [15]. The objective was to maximize coverage using two ambulances, constrained by actual requirements imposed by EMS service laws. Real and randomly generated data points were used, approaching near-optimal results in a reasonable computing time compared to the CPLEX optimizer. Oberscheider and Hirsch looked into efficient transport for non-emergency patients utilizing real patient data from the Red Cross of Lower Austria [16]. They generated all combinations of patient transports, then performed a set partitioning on these combinations to gain an initial solution, which was fed into a Tabu search to optimize the routing.
Implementing parallelization to improve algorithms is not a new trend and has generally been used to speed up calculations through the graphics processing unit (GPU) [17]. Reorganizing algorithms to take advantage of multiple simultaneous threads can dramatically improve the runtime of certain techniques. Hussain et al. altered the particle swarm optimization algorithm through the use of the CUDA platform [18]; by partially coalescing memory accesses, they achieved a massive time improvement on benchmark problems. Fabris and Krohling utilized similar benchmarks to test the application of the CUDA platform to a co-evolutionary differential evolution algorithm for solving min-max optimization problems [19], finding that the algorithm converged to a near-optimal fitness and scaled far better than non-parallel variations. Following review, there is currently little published research on applying GPU parallelization to ambulance problems. Similarly, Schulz et al. noted in their survey that there is comparatively little literature on applying GPU parallelization to local or Tabu search [20]; much of the current research has been directed towards swarm algorithms, though the survey suggests GPU parallelization remains useful for local search-based methods.
As previously discussed in [2], multiple maximum coverage problems have relied on population density to develop an optimized solution space. This is not feasible when considering a sizeable non-uniform density over a large region; utilizing such traditional methodologies would mean that a significant area is ignored, risking patient survival through invalid placement. The problem can instead be treated as a coverage problem with the additional requirement of ensuring equal importance for those in the sparsely populated north. It should be noted that research on the effect of ambulance response time remains contested [21,22,23,24]; however, from an economic standpoint, there is a clear interest in reducing travel distance.
Prior research has relied on genetic algorithms to achieve optimization [2]; that approach was not utilized in this paper, as further constraints were implemented and tested. Additionally, the methodology of this research was compared against optimal results, itself achieving near-optimality. Similar work has been completed with real provided data [3], although it related more directly to scheduling, did not list substantial constraints, and made use of a set-partitioning integer program. Given the organization of the data and prior works, this research explores both local and Tabu search-based solutions, while also assessing the usefulness of parallelizing both algorithms.

MODEL
Placement of ambulances for maximum coverage is a nuanced problem that cannot rely on demographic data alone. Patients typically move to specialized facilities when the required care is more particular. Additionally, if a region's population is sparse, then population density is a poor predictor for developing an optimized solution. As a result, historical mission data can instead be utilized to anticipate possible future demands. For this paper, two years of mission data collected by Ornge were employed, consisting of both hospital transfers and area pickups by rotary-wing aircraft.
A mission consists of a pickup, a delivery, and a return to base. To simplify the calculation, the distances between pickup and delivery points are ignored, since excluding them does not affect the final coverage determination. As this is not a scheduling problem, the formulation considers that each base can only hold a single aircraft. In general, dispersing the vehicles becomes more beneficial as missions spread out over a larger area. In essence, this process would be completed prior to scheduling in order to determine the best placement of vehicles for servicing missions. Additionally, cost determination for travel relates primarily to scheduling and less to coverage; the objective function can be modified for cost if needed, but distance is more useful in determining the best regional coverage. Similarly, vehicle speed is ignored for this problem, as some areas are clustered with vehicle-specific missions (ones requiring rotary-wing helicopters); for such missions, vehicle speed is irrelevant, as they cannot make use of the faster aircraft.
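As a concrete sketch, the per-mission distance described above (base to pickup plus delivery back to base, with the fixed pickup-to-delivery leg excluded) can be computed with the standard Haversine great-circle formula. The function names here are illustrative, not taken from the paper's implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dpsi = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dpsi / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

def mission_distance(base, pickup, delivery):
    """Distance credited to a base for one mission: out to the pickup plus
    home from the delivery; the pickup-to-delivery leg is the same for
    every candidate base and is therefore excluded."""
    return haversine_km(*base, *pickup) + haversine_km(*base, *delivery)
```

Using Euclidean distance on raw latitude/longitude here would distort the comparison between candidate bases, which is why the curvature-aware formula is preferred.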
The aircraft fleet is made up of the following two sets:

R: the set of all rotary-wing helicopters, r_i ∈ R, ∀i = 1, ..., 8
F: the set of all fixed-wing planes, f_j ∈ F, ∀j = 9, ..., 12

The potential bases consist of the following two sets:

A: the set of all aerodromes, capable of supporting both rotary-wing helicopters and fixed-wing planes, with each being a 3-tuple
H: the set of all helipads, capable of supporting rotary-wing helicopters only

To perform optimal vehicle placement, distances needed to be calculated. D represents the distance between each potential aerodrome and the sum of each mission's patient pickup and delivery locations; E describes the same measurement applied instead to helipads. For determining optimal distances, a number of distance formulas could be substituted into the model. If the data were purely simulated, then Euclidean distance would have been a valid option, though this is not practical when using real coordinate data based on latitude and longitude. The Earth is not flat, and its curvature must be considered in order to garner an accurate measurement. In this case, Haversine distance is a far more reliable metric for the model and can be calculated with the following formula:

d = 2r \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_p - \varphi_k}{2}\right) + \cos(\varphi_k)\cos(\varphi_p)\sin^2\left(\frac{\psi_p - \psi_k}{2}\right)}\right)  (1)

In the above formula, φ represents row (latitude) location values, φ_k or φ_n, for aerodromes and helipads respectively; similarly, ψ applies to the column (longitude) location values ψ_k or ψ_n, and φ_p, ψ_p are the mission pickup coordinates. The remaining distance matrices use the same formula, with φ_d and ψ_d (the delivery coordinates) substituted for φ_p and ψ_p. The resulting matrices are both of dimension z (number of missions) by k or n (number of aerodromes or helipads) and are referenced to obtain total distances for each mission. The optimization model comprises the objective in Equation 2 and the constraints in Equations 3 to 11. The objective function is described by Equation 2, where the total distance flown by each aircraft assigned to a mission is minimized. The binary decision variables are used with reference to matrices D and E respectively.
These are used to sum the total distances for each assigned base. The equation is separated into three parts, with each part indicating a possible assignment that can occur:
• Rotary-wing helicopter located at an aerodrome, assigned to a mission.
• Fixed-wing plane located at an aerodrome, assigned to a mission.
• Rotary-wing helicopter located at a helipad, assigned to a mission.
Furthermore, the objective function is constrained by the rules set out in Equations 3 to 11. Each mission can be served by only a single aircraft, as constrained by Equation 3. Equations 4 and 5 limit each aircraft to a single base, while Equations 6 and 7 ensure that every helipad or aerodrome holds at most one vehicle. Enforced alongside the prior constraints, Equation 8 guarantees that each rotary-wing assigned mission has an occupied base if the corresponding variable is set; as a result of the previous constraints, both decision variables for a base assignment will never be set simultaneously. Similarly, Equation 9 performs the same type of operation for fixed-wing aircraft and aerodromes. As some missions are rotary-wing only, Equation 10 prevents a fixed-wing aircraft from being assigned to these missions. Lastly, Equation 11 restricts the decision variables to binary values.
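To make the constraint interactions concrete, the following toy sketch enumerates feasible placements for a miniature instance (all numbers are hypothetical) and applies the rules described above: one base per aircraft, at most one aircraft per base, fixed-wing aircraft restricted to aerodromes, and rotary-only missions barred from fixed-wing service. It is an illustrative brute force, not the paper's Gurobi model:

```python
from itertools import permutations

# Toy instance: D[b][m] is the distance a vehicle at base b flies for
# mission m (base->pickup plus delivery->base), all values hypothetical.
D = [[10, 40, 30],
     [25, 15, 35],
     [30, 20, 12]]
rotary_only = {2}          # mission 2 requires a helicopter (Eq. 10 analogue)
fixed_bases = {0, 1}       # aerodromes only; base 2 acts as a helipad
aircraft = ["rotary", "rotary", "fixed"]

best = None
# Each permutation gives every aircraft a distinct base (Eqs. 4-7:
# one base per aircraft, at most one aircraft per base).
for bases in permutations(range(len(D)), len(aircraft)):
    if any(t == "fixed" and b not in fixed_bases
           for t, b in zip(aircraft, bases)):
        continue  # fixed-wing aircraft cannot occupy a helipad
    total = 0
    for m in range(len(D[0])):
        # Each mission picks one compatible occupied base (Eqs. 3, 8-10).
        options = [D[b][m] for t, b in zip(aircraft, bases)
                   if not (m in rotary_only and t == "fixed")]
        total += min(options)
    if best is None or total < best[0]:
        best = (total, bases)
```

Enumeration like this is only viable at toy scale; the NP-hardness noted in the related work is exactly why the paper turns to metaheuristics for realistic fleet sizes.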

ALGORITHM
Solutions were designed around the previously discussed model while respecting its constraints and minimization requirements. The solution is described in Algorithms 1, 2, and 3. All versions had a sequential and a parallel CUDA implementation, with minor differences in design. The primary difference is the lack of outer loops in the CUDA versions, as those indices are acted upon simultaneously by individual threads. Additional differences are given following the description of each algorithm. Given that the differences between the parallel and sequential algorithms are minor, the algorithms are outlined using their sequential versions.

Base Ranking
Algorithm 1 reduced the problem scope by ranking the most effective bases for covering every mission. The idea was to provide a more reasonable starting position for the local search than a randomly assigned set. Specifically, the algorithm determined which bases had the lowest total Haversine distance and assigned a vehicle to each top-ranked base. The number of vehicles was predetermined by the fleet operated by Ornge and is represented in lines 1 and 2. The input for the solution consisted of coordinate data (longitude and latitude) for base locations, mission pickups, and mission destinations. In lines 3-5, this information was assigned to the Destination, Pickup, and Base matrices respectively. The organization of the matrix indices corresponded to each mission (the exception being Unused); for example, the first index of Destination, Pickup, and Base together describe one complete mission from start to end.
Lines 8-12 of the algorithm summed the Haversine distance of each base relative to every mission's pickup and destination. Following this, lines 13-18 determined the top bases based on the minimum summed Haversine calculation. For this particular set, 12 bases were chosen, corresponding to the number of vehicles available for assignment. Per the model's decision variables and Equations 3-10, a fixed-wing vehicle could not be assigned to a helipad; as such, lines 14-16 prevented the number of helipads chosen from exceeding the number of rotary-wing vehicles. The ranking for each mission was then taken at line 19, where each distance was assessed based on the chosen bases in Top_Vals relative to Distances. Each vehicle was then assigned to a respective aerodrome or helipad at lines 20-21, with helipads allocated first to ensure that fixed-wing aircraft had an aerodrome available. In lines 22-29 the actual mission-to-base assignment took place. One constraint handled by the algorithm was that each selected base had to accommodate its specific assigned vehicle (in this case 8 helicopters and 4 planes), meaning the top base was occasionally the second-best option rather than the first choice. Additionally, some missions specifically required a rotary-wing aircraft, further constraining the rankings (lines 23-25). Lines 24 and 27 finalized the algorithm, assigning the bases to each mission (placing them at the corresponding index in Base). This solution did not guarantee the best possible placement, only that the local search had a strong starting point; swaps among unused bases still needed to be considered, as alternatives might yield better results. As such, line 30 assigned any unused bases to Unused.
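A simplified sketch of this ranking step, omitting the helipad cap and vehicle-type bookkeeping described above (names and data layout are illustrative):

```python
def rank_bases(distance, fleet_size):
    """distance[b][m]: summed pickup+delivery Haversine distance of base b
    for mission m. Returns the top-ranked bases and an initial
    mission-to-base assignment."""
    # Score each base by its total distance over all missions and keep
    # the lowest-scoring fleet_size bases (line 17 analogue).
    totals = sorted((sum(row), b) for b, row in enumerate(distance))
    top = [b for _, b in totals[:fleet_size]]
    # Assign every mission to its nearest top-ranked base (lines 22-29 analogue).
    assign = [min(top, key=lambda b: distance[b][m])
              for m in range(len(distance[0]))]
    return top, assign
```

The output is only a starting point: the subsequent search phases may still swap in bases that the ranking discarded.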
The Base Ranking algorithm operates quite efficiently, and it can easily be made parallel by eliminating the outer loop at line 8. CUDA operates through simultaneous thread organization, so by separating i into individual threads, each can perform the inner loop over j at the same time. As there are no race conditions when writing to Distances, each thread can write to its row i without issue. Another modification can be made at line 17, which requires summing every row i of the Distances matrix to determine the total distance of each base to every mission; a thread can again be applied to each row i to compute the respective sums. These modifications remove the bottleneck associated with scaling to larger numbers of missions. The remainder of the algorithm did not require parallelism, as it consists only of small, quick accesses using single loops.
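The one-thread-per-row pattern described above can be approximated on the CPU with NumPy broadcasting, which evaluates every base/mission pair in a single vectorized pass. This is an analogy only; the paper's implementation launches actual GPU threads, and the names here are illustrative:

```python
import numpy as np

def haversine_matrix(base_lat, base_lon, pt_lat, pt_lon, radius_km=6371.0):
    """All-pairs great-circle distances: bases on the rows, mission
    points on the columns, coordinates in degrees."""
    b_lat = np.radians(np.asarray(base_lat))[:, None]  # one "thread" per row
    b_lon = np.radians(np.asarray(base_lon))[:, None]
    p_lat = np.radians(np.asarray(pt_lat))[None, :]
    p_lon = np.radians(np.asarray(pt_lon))[None, :]
    a = (np.sin((p_lat - b_lat) / 2) ** 2
         + np.cos(b_lat) * np.cos(p_lat) * np.sin((p_lon - b_lon) / 2) ** 2)
    return 2 * radius_km * np.arcsin(np.sqrt(a))
```

The per-base totals of line 17 then become a single row-wise sum over the resulting matrix, mirroring the second kernel described above.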

Local Search Fleet Optimization
Algorithm 2 accepted the ranked data and unused bases assigned in Algorithm 1. As there would otherwise be no variation between successive runs (and no guarantee of optimal placement), two permutation matrices were generated corresponding to the mission indices (lines 5 and 6). Essentially, this meant that swaps or takeovers would occur at different points each time, distinguishing results between runs and allowing the exploration of varying neighbourhoods. Per line 9, the goal of the algorithm was to minimize the total Haversine distance across all missions. The entire set would iterate completely multiple times, stopping only after no further improvement could be found.
The algorithm ran through every mission at least once (line 11), while ensuring that each had a chance to swap with every corresponding option (line 12). Two sets of changes were possible, depending on whether a given unused base contained a vehicle. The choices in lines 13-22 were for the corresponding mission's vehicle either to be taken over by another vehicle-assigned base or to be moved to a new compatible base (an empty base in Unused), depending on whether the base in Unused was occupied. In the latter case, all subsequent missions utilizing that vehicle needed to be updated. These changes only held if they lowered the total Haversine distance, after which the previously used (or occupied) base was transferred into Unused. Once all missions were explored, the iterators were reset and the algorithm repeated if any improvement had been found during the run.
The remaining changes occurred in lines 23-27, where the vehicle at Permutation_b index i was compatible with the mission at Permutation_b index j. The vehicles located at the respective indexed bases had to be of a compatible type (for example, rotary-wing only, as per Equation 10) and were updated if they minimized the total Haversine distance. Additionally, if the replaced base was no longer assigned to any other mission, it was moved to Unused.
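A minimal sequential sketch of these moves, keeping only the move-to-unused-base case described above (it omits vehicle-type compatibility, occupied-base takeovers, and the permutation orderings; names are illustrative):

```python
def local_search(distance, assign, unused):
    """Repeatedly move missions to cheaper unused bases until no move
    lowers the total Haversine distance (a local optimum)."""
    assign, unused = list(assign), set(unused)
    improved = True
    while improved:
        improved = False
        for m in range(len(assign)):
            cur = assign[m]
            # Cheapest unused base for this mission, if any remain.
            best = min(unused, key=lambda b: distance[b][m], default=None)
            if best is not None and distance[best][m] < distance[cur][m]:
                unused.remove(best)
                assign[m] = best
                if cur not in assign:      # old base freed once unassigned
                    unused.add(cur)
                improved = True
    return assign, unused
```

Because only strictly improving moves are accepted, the loop terminates, but it can stop at a local optimum, which is the weakness the Tabu extension addresses.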

Algorithm 1: Base Ranking
Data: Base, mission pickup, and mission destination data
Result: Assignment of vehicles to top aerodromes
1  heli_number ← number of helicopters;
2  plane_number ← number of airplanes;
3  Destination ← coordinates of destinations for respective missions;
4  Pickup ← coordinates of each mission;
5  Base ← coordinates of aerodromes;
6  Distances ← empty list sized to length of Pickup by length of Base;
7  Top_Vals ← empty list sized to heli_number + plane_number;
8  for i ← 1 to number of bases do
9      for j ← 1 to number of missions do
10         Add to Distances the summed Haversine distance of Base_i to each respective mission Pickup_j and Destination_j;

The parallelization of this algorithm was a bit more complex than the Base Ranking, as there were simultaneous accesses and dependencies. The outer loop at line 11 was eliminated and each index i of the Permutation_a matrix was assigned a thread, allowing the CUDA platform to perform the checks of each inner loop j simultaneously. To prevent race conditions, a lock was placed between lines 13 and 14, lines 18 and 19, and lines 23 and 24. Once its checks were completed, a thread was allowed to perform the adjustment; additional threads would only act once the thread released the lock. This added a level of sequential access to the algorithm, although checks were performed in parallel and the lock was only activated when a change occurred. Since updates did not occur as often as checks, much of the algorithm's bottleneck was eliminated.
A visualization of this algorithm can be seen in Figure 1. Assigned vehicles were applied to each mission (summing the distances to pickup and destination), meeting Equations 3 to 10. So long as the change was viable, one assigned base could attempt to take over the assignment of another; the new base would gain the mission, and the old base would be moved to Unused if it had no assignments remaining. A similar change took place depending on whether an unused base contained a vehicle: if it did, a similar takeover occurred, with the now-assigned base moving out of Unused; if it did not, a swap occurred, with the vehicle moving to the new location and the previously assigned base moving to Unused. Changes only held if they reduced the total Haversine distance summed across all missions.

Tabu Search Fleet Optimization
Figure 1: Visualization of local search

The Tabu search algorithm is an extension of Algorithm 2. For the most part, the operations remain the same, and even the parallelization operates in the same fashion; it still uses Algorithm 1 as a starting point. The primary contribution of this algorithm is the introduction of the TabuList at line 7. A Tabu search prevents recently explored neighbourhoods that improved the results from being explored again; the purpose is to give other possible changes a fair chance and to prevent the algorithm from becoming stuck in a local optimum. Prior to each iteration j, a check is performed in lines 15-23. If the neighbourhood to be explored exists in the TabuList, then it is skipped, depending on whether the selection is based on index i or j (lines 17-22). Selections are only added to the list if they improve the result, as suggested by lines 28, 34, and 40. Each neighbourhood in the list has a counter, expressed by TabuCounter and decremented at either line 16 or 42. After a set number of iterations, the selection is removed from the TabuList and is allowed to be explored again.
It should be noted that a traditional Tabu search can potentially allow moves which do not improve the result. This was not done in this algorithm, as it almost always produced a significantly poorer answer. One way to combat this problem was the introduction of more variability through the vectors Permutation_a and Permutation_b. Originally, neither Algorithm 2 nor Algorithm 3 utilized these, which significantly impacted performance: the two loops traversed indices sequentially in the normal order of the Pickup matrix. As already suggested, this caused the results to be identical each time, since Algorithm 1 would never produce a different starting point. Using one permutation vector did improve the result, though the introduction of two allowed for significant variability and the exploration of neighbourhoods left unexplored by older versions.
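The Tabu mechanism can be sketched by extending the earlier move loop with a tenure counter per recent move. This is a simplification: it keys the list on (mission, base) moves and omits the paper's two permutation vectors, and all names are illustrative:

```python
def tabu_search(distance, assign, unused, tenure=3, max_iters=50):
    """Local search whose improving moves become tabu for `tenure`
    iterations, steering later passes toward unexplored neighbourhoods."""
    assign, unused = list(assign), set(unused)
    tabu = {}                                 # (mission, base) -> remaining tenure
    for _ in range(max_iters):
        improved = False
        for m in range(len(assign)):
            cur = assign[m]
            for cand in sorted(unused):
                if tabu.get((m, cand), 0) > 0:
                    continue                  # recently used: skip (TabuList check)
                if distance[cand][m] < distance[cur][m]:
                    unused.remove(cand)
                    assign[m] = cand
                    if cur not in assign:     # freed base returns to the pool
                        unused.add(cur)
                    tabu[(m, cand)] = tenure  # only improving moves are recorded
                    cur = cand
                    improved = True
        # Age the list: expired entries become explorable again.
        tabu = {k: v - 1 for k, v in tabu.items() if v > 1}
        if not improved:
            break
    return assign
```

The tenure value plays the role of the TabuCounter described above; tuning it trades off diversification against revisiting genuinely good moves.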
In terms of a parallel implementation, the algorithm performs almost identically to Algorithm 2. That said, while the parallel local search produces the same answer as the sequential variation, the parallel Tabu search does not. The reason lies in the nature of Tabu search itself: neighbourhoods are only explored if they are not in the list, otherwise they are skipped. Since multiple neighbourhoods are explored simultaneously, the result of each can potentially be added to the list, and the respective counters are decremented at different intervals. This forces it to explore differently than the non-parallel Tabu search, giving a comparable yet different result.

RESULTS
Eleven datasets were analyzed for testing the previously described algorithms. Each ran for 10 separate attempts, the results of which are summarized in Figures 2-5, Table 1, and Table 2. The datasets were generated as randomized subsets of the 13,824 missions previously recorded in the real-mission dataset. 12 vehicles (8 rotary-wing and 4 fixed-wing) and 378 bases (274 aerodromes and 104 helipads) were used for an individual assignment. Each dataset initially ran through an instance of base ranking to generate a strong starting point, and adjustments were then made with the local search algorithm. The latter was performed until no further improvement could be found, at which point the results were recorded. As previously mentioned, this occurred ten times for each set, with the average and limits being documented upon completion. Two separate instances were executed for each: one sequential and the other a parallel CUDA implementation. For validation, the Gurobi optimizer was employed to measure the algorithmic results against the optimized solution.
The purpose of the base ranking algorithm was to provide an improved starting position over a randomized permuted assignment. The results of this algorithm are displayed in Figure 2 and Figure 3. As there was no randomization in the operation, the runtimes and values were the same for every specific sequence. Figure 2 compares this result against the average random starting position generated from 100 sets. In all cases the base ranking algorithm outperformed a random assignment of bases to missions. While base ranking in conjunction with Tabu search allowed for a near-optimal result, it could not be used on its own for assignment: despite being an improvement over randomization, Figure 2 shows that base ranking remained substantially above the optimal ground truth. While the values were fairly uniform as mission size increased, they were not acceptable without the additional algorithms. The speed of the base ranking should be noted, as it was far faster than the local or Tabu search. Furthermore, Figure 3 shows that the runtime was nearly constant for the CUDA variation of the algorithm, while it increased linearly with the number of missions for the sequential version. This implies that at this scale the primary slowdown for the CUDA variation is the kernel call to the GPU itself. The consistency over randomization, and the negligible runtime using CUDA, made base ranking worth performing on the increasing sets.

Figure 2: Base Ranking versus random average starting position over an optimal ground truth

On its own, the local search algorithm proved unsuccessful at achieving a close-to-optimal solution. Figure 4 shows that even in the best scenarios the increase over optimal was still just under 30%, and the average rose as high as over 60%. The bounds were also not acceptable, with sets like 120 showing a very wide gap. These results imply that the search was getting stuck in a local optimum and that the nature of the algorithm prevented it from continuing.
The Tabu search modification greatly improved the results, bringing them much closer to the Gurobi solutions. Per Figure 4, all were consistently under 20% above optimal and contained much smaller bounding gaps, suggesting that successive tests converged to similar solutions. Likewise, the parallel Tabu search garnered a similar result and even outperformed the sequential variation in some instances. There is no guarantee that the parallel version will always achieve a better metric than the sequential version, as they essentially use the same process and differ mainly in runtime. However, they should always achieve a similar answer, as shown by Table 1, which reports not only similar averages (A) but also comparable upper (U) and lower (L) bounds. There remains the possibility of outliers, meaning that for real use the algorithm should be run multiple times.
For the local search variation, the CUDA and sequential algorithms will always produce the same answer so long as matching permutations are used. As such, the only differentiating metric was time, which is summarized in Table 2. The CUDA running time was a significant improvement over the sequential algorithms, increasing only by a small amount with the number of missions. Figure 5 shows that its trend line is much flatter than the sequential one at higher mission counts. The same pattern appears in the parallel Tabu search: admittedly, it displays comparable timing to the sequential variation at a low mission count, though this changes with more missions, and it even surpasses the sequential local search after 110 missions. In all cases, the runtime of the sequential versions grew faster than that of the CUDA variations as the number of missions increased. As previously speculated, this trend will continue as the number of missions grows, validating the CUDA implementation. Regardless, the timing, in conjunction with near-optimal results, justifies the use of parallelization for this purpose.

Conclusion
The results generated demonstrate the usefulness of the ranking and algorithmic solutions in both sequential and parallel forms. For the empirical data collected by Ornge, Gurobi achieved an optimized solution, but in a significantly longer time. Though this time may be acceptable in cases where missions are known in advance, it may not be viable in an emergency where reorganization is required; in such cases, a solution that delivers near-optimization becomes all the more critical. Across all datasets, the runtimes far surpassed those of traditional methodologies while remaining within an admissible range of the optimal, and were further enhanced by parallelization through the CUDA platform. On its own the local search proved insufficient, although extending the algorithm into a Tabu search greatly enhanced the result. It should be noted that this model is adaptable to possible future changes in the data and could be updated quickly, further denoting the advantage of these techniques over other similar solutions.