GTOPX Space Mission Benchmarks

This contribution introduces the GTOPX space mission benchmark collection, an extension of the GTOP database published by the European Space Agency (ESA). GTOPX consists of ten individual benchmark instances representing real-world interplanetary space trajectory design problems. Compared to the original GTOP collection, GTOPX includes three new problem instances featuring mixed-integer and multi-objective properties. GTOPX offers simplified user handling, a unified benchmark function call, and some minor bug corrections to the original GTOP implementation. Furthermore, GTOPX is linked from its original C++ source code to Python and Matlab via dynamic link libraries, ensuring computationally fast and accurate reproduction of the benchmark results in all three programming languages. Space mission trajectory design problems such as those represented in GTOPX are known to be highly non-linear and difficult to solve. The GTOPX collection therefore aims particularly at researchers wishing to put advanced (meta)heuristic and hybrid optimization algorithms to the test. The goal of this paper is to provide researchers with a manual and reference to the newly available GTOPX benchmark software.


Motivation and significance
The optimal design of interplanetary space mission trajectories is an active and challenging research area in aerospace and related communities, such as the evolutionary and metaheuristic optimization community. Since 2005, the European Space Agency (ESA) has maintained a set of real-world space mission trajectory design problems in the form of numerical black-box optimization problems, known as the GTOP database [8]. Given the usefulness and success of this endeavor in the past, here we present an extended and refurbished version, named GTOPX, as a continuation of the no longer maintained GTOP database.
While all problems of the original GTOP database were single-objective and continuous in the nature of the search space domain, the GTOPX collection includes three new problem instances featuring mixed-integer and multi-objective problem properties. The general mathematical form of the optimization problems considered in GTOPX is that of a multi-objective Mixed-integer Nonlinear Programming (MINLP) problem:

Minimize  f_i(x, y)    (i = 1, ..., o ∈ N)

subject to:  g_i(x, y) ≥ 0,  i = 1, ..., m ∈ N
             x_l ≤ x ≤ x_u   (x_l, x_u ∈ R^ncon)
             y_l ≤ y ≤ y_u   (y_l, y_u ∈ N^nint)          (1)

where f_i(x, y) and g_i(x, y) denote the objective and constraint functions, which depend on the continuous (x) and discrete (y) decision variables, box-constrained by some lower (x_l, y_l) and upper (x_u, y_u) bounds. The ten individual benchmark instances of GTOPX are listed in Table 2 with their name, the number of objectives, variables and constraints, together with the best known objective function value f(x, y).
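To make the general form (1) concrete, the following Python sketch evaluates a hypothetical toy instance of such a MINLP; the objective, constraint and bounds here are purely illustrative and are not part of GTOPX.

```python
import numpy as np

def toy_minlp(x, y):
    """Hypothetical toy instance of the general form (1): one objective
    f(x, y), one inequality constraint g(x, y) >= 0, continuous
    variables x and integer variables y."""
    f = float(np.sum(x**2) + np.sum(y))   # objective to be minimized
    g = np.array([1.0 - x[0] - y[0]])     # feasible iff all entries >= 0
    return f, g

# box bounds: x_l <= x <= x_u (real) and y_l <= y <= y_u (integer)
x_l, x_u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
y_l, y_u = np.array([0]), np.array([3])

x, y = np.array([0.5, -0.25]), np.array([0])
f, g = toy_minlp(x, y)
feasible = bool(np.all(g >= 0)
                and np.all((x_l <= x) & (x <= x_u))
                and np.all((y_l <= y) & (y <= y_u)))
```

A candidate solution is feasible only when the inequality constraints and both pairs of box constraints hold simultaneously, exactly as in (1).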
In the past, the GTOP database has attracted a significant amount of attention and many results have been published, see for example [1,2,3,4,7,5,10,11,12,14,15,16,22,23]. It is to note here that the majority of publications discussing results on GTOP focus on only one or a few problem instances. That is due to the difficulty of these problems, which often require many millions of function evaluations for an optimization algorithm to converge to the best known solution. This difficulty is what makes this collection of benchmark problems interesting and a real challenge, even requiring the use of massively distributed computing power on supercomputers in the most difficult cases (see Shuka [21] or Schlueter [19]). While the focus of the original GTOP database was rather on the nature of the application itself, the GTOPX collection introduced here aims to address the broad community of numerical optimization researchers. It does so with a simplified and more user-friendly source code base that allows easy execution of the benchmarks in three programming languages: C/C++, Python and Matlab. The entire source code base of GTOPX has been compressed into a single file (namely "gtopx.cpp") that is linked via dynamic link libraries (DLL) into the Python and Matlab languages. Linking a pre-compiled DLL has the significant advantage of computation speed and accurate reproduction of results among all considered programming languages, in contrast to a native re-implementation in other languages, which is computationally slower and error prone.
The function call to the individual GTOPX benchmark instances has been generalized, so that any problem instance can easily be accessed by switching only one integer parameter (namely the benchmark number). Furthermore, it is to note that all GTOPX benchmark instances are thread-safe for execution. This means that the GTOPX problems can be executed in parallel, which is a highly desired feature in modern optimization algorithm design. The GTOPX source code files are free software and published under the GPL license [6].
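As an illustration of how such a DLL-based interface can be driven from Python, the sketch below marshals the arguments for a hypothetical gtopx(benchmark, f, g, x) entry point via ctypes. The exported symbol name and argument layout are assumptions for illustration only; consult the GTOPX documentation [9] for the actual interface.

```python
import ctypes

def make_args(benchmark, x, n_obj, n_con):
    """Marshal Python data into ctypes buffers for a hypothetical
    gtopx(benchmark, f, g, x) entry point (assumed layout)."""
    xs = (ctypes.c_double * len(x))(*x)   # input: decision variable vector
    f = (ctypes.c_double * n_obj)()       # output: objective value(s)
    g = (ctypes.c_double * n_con)()       # output: constraint values
    return ctypes.c_int(benchmark), f, g, xs

def call_benchmark(lib_path, benchmark, x, n_obj, n_con):
    """Load a pre-compiled GTOPX shared library and evaluate one
    benchmark; the symbol name 'gtopx' is an assumption."""
    lib = ctypes.CDLL(lib_path)           # load the DLL / shared object
    b, f, g, xs = make_args(benchmark, x, n_obj, n_con)
    lib.gtopx(b, f, g, xs)                # C code fills f and g in place
    return list(f), list(g)
```

Because the same compiled library is loaded from every language, the returned objective and constraint values are bit-identical across C/C++, Python and Matlab.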
The remainder of this paper is structured as follows. Section 2 describes each of the ten individual instances from Table 2 in detail. Section 3 contains a comprehensive fitness landscape analysis of the GTOPX single-objective benchmarks to provide a better basis for proper optimisation algorithm selection. The paper closes with a brief summary and some concluding remarks. The goal of this paper is to provide researchers with a manual and reference to the newly available GTOPX benchmark software.

Software description
This section gives detailed information on the ten GTOPX benchmark instances. Note that the definitions of the first seven instances are taken from Schlueter [18], while the last three instances are new.

Cassini1
The Cassini1 benchmark models an interplanetary space mission to Saturn. The objective of the mission is to get captured by Saturn's gravity into an orbit having a pericenter radius of 108,950 km and an eccentricity of 0.98. The sequence of fly-by planets for this mission is given by Earth-Venus-Venus-Earth-Jupiter-Saturn, where the first item is the start planet and the last item is the final target. The objective function of this benchmark is to minimize the total velocity change ∆V accumulated during the mission, including the launch and capture manoeuvres. The benchmark involves six decision variables, listed in Table 3: the time intervals between events (e.g. departure, fly-by, capture). This benchmark further considers four constraints, which impose an upper limit on the pericenters of the four fly-by maneuvers. The best known solution to this benchmark has an objective function value of f(x) = 4.9307, and the respective vector of solution decision variables x is available online [9].

Cassini2
The Cassini2 benchmark models an interplanetary space mission to Saturn, including deep space maneuvers (DSM), and is therefore considerably more difficult than its counterpart benchmark Cassini1. The sequence of fly-by planets for this mission is given by Earth-Venus-Venus-Earth-Jupiter-Saturn, where the first item is the start planet and the last item is the final target. The objective of this benchmark is again to minimize the total ∆V accumulated during the full mission; however, the aim here is a rendezvous, while in Cassini1 the aim is an orbit insertion. The benchmark involves 22 decision variables, described in Table 4. The best known solution to this benchmark has an objective function value of f(x) = 8.3830, and the respective vector of solution decision variables x is available online [9].

Messenger (reduced)
This benchmark models an interplanetary space mission to Mercury and does not include resonant fly-bys at Mercury. The sequence of fly-by planets for this mission is given by Earth-Earth-Venus-Venus-Mercury, where the first item is the start planet and the last item is the final target. The objective of this benchmark, to be minimized, is again the total ∆V accumulated during the full mission. The benchmark involves 18 decision variables, described in Table 5. The best known solution to this benchmark has an objective function value of f(x) = 8.6299, and the respective vector of solution decision variables x is available online [9].

Messenger (full)
This benchmark models an interplanetary space mission to Mercury, including resonant fly-bys at Mercury. The sequence of fly-by planets for this mission is given by Earth-Venus-Venus-Mercury-Mercury-Mercury-Mercury. The objective of this benchmark, to be minimized, is, as before, the total ∆V accumulated during the full mission. The benchmark involves 26 decision variables, described in Table 6. The best known solution to this benchmark has an objective function value of f(x) = 1.9579, and the vector of solution decision variables x is available online [9].

GTOC1
This benchmark models a multi gravity assist space mission to asteroid TW229. The mission model drew inspiration from the first edition of the Global Trajectory Optimisation Competition (GTOC) held by ESA in 2007 [13]. The objective of the mission is to maximize the change in the semi-major axis of the asteroid orbit. The sequence of fly-by planets for this mission is given by Earth-Venus-Earth-Venus-Earth-Jupiter-Saturn-TW229. This benchmark involves eight decision variables, described in Table 7: the time intervals between events (e.g. departure, fly-by, capture). This benchmark further considers four constraints, which impose an upper limit on the pericenters of the four fly-by maneuvers. The best known solution to this benchmark has an objective function value of f(x) = −1581950.0, and the vector of solution decision variables x is available online [9].

Rosetta
The Rosetta benchmark models a multi gravity assist space mission to comet 67P/Churyumov-Gerasimenko, including deep space maneuvers (DSM). The sequence of fly-by planets for this mission is given by Earth-Earth-Mars-Earth-Earth-67P. The objective of this benchmark is to minimize the total ∆V accumulated during the full mission. The benchmark involves 22 decision variables, described in Table 8. The best known solution to this benchmark has an objective function value of f(x) = 1.3434, and the vector of solution decision variables x is available online [9].

Sagas
This benchmark is described as a ∆V-EGA manoeuvre to fly by Jupiter and reach a Keplerian orbit of 50 AU. The sequence of fly-by planets for this mission is given by Earth-Earth-Jupiter, where the first item is the start planet and the last item is the final target. The objective of this benchmark is to minimize the total ∆V accumulated during the full mission. The benchmark involves 12 decision variables, described in Table 9.
This benchmark further considers two constraints, which impose an upper limit on the on-board fuel and launcher performance.The best known solution to this benchmark has an objective function value of f (x) = 18.1877, and the respective vector of solution decision variables x is available online [9].

Cassini1-MINLP
This new benchmark is a mixed-integer extension of the Cassini1 instance. While in the original Cassini1 instance the sequence of fly-by planets is fixed as Venus-Venus-Earth-Jupiter, Cassini1-MINLP considers all four fly-by planets as discrete decision variables. Each planet of the solar system is a feasible choice for any of the four fly-by planets. Table 10 lists all possible choices for fly-by planets together with their numerical values, to be used as variables in the GTOPX solution vector x. Table 11 lists the ten decision variables: the first six are time intervals between events (e.g. departure, fly-by, capture), while variables 7 ∼ 10 are the fly-by planets (discrete values, see Table 10). The Cassini1-MINLP problem has previously been investigated with numerical results in Schlueter [20]. That paper revealed that this benchmark has a strong local minimum corresponding to the fly-by planet sequence {Earth, Earth, Earth, Jupiter}. This local minimum appeared to be the second best solution (in regard to the combinatorial part) and holds an objective function value of f(x) = 3.6307. The best known solution to Cassini1-MINLP has the fly-by planet sequence {Earth, Venus, Earth, Jupiter} and corresponds to an objective function value of f(x) = 3.5007. The deceptive nature of the local minimum described above makes Cassini1-MINLP exceptionally hard to solve to the best known solution. It is therefore noteworthy that the Cassini1 instance and the Cassini1-MINLP instance differ significantly in their complexity and difficulty.

Cassini1-MO
This benchmark is a multi-objective extension of the Cassini1 instance. While the original Cassini1 instance considers only one objective (the total ∆V), the Cassini1-MO benchmark adds the total time of flight (measured in days) as a second objective. The decision variables for Cassini1-MO are identical to those of the Cassini1 instance: the time intervals between events (e.g. departure, fly-by, capture). Cassini1-MO has one more constraint than Cassini1. This additional constraint bounds the objective space to solutions with a first objective function value of f_1(x) ≤ 7.0. It is introduced to focus the Pareto front on the truly challenging region of low fuel consuming space mission trajectories.
No comprehensive analysis has yet been conducted on this benchmark, but an approximation of the Pareto front was obtained by the MIDACO solver [17] and is displayed in Figure 1. There, the Pareto front of Cassini1-MO reveals a very distinctive non-separated shape, exhibiting convex (right half of Figure 1) as well as non-convex (left half of Figure 1) areas. The full set of non-dominated solutions is available online at [9] as a text file.
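For readers who want to post-process such a solution file themselves, a minimal sketch of a non-dominated filter, assuming both objectives are minimized as in Cassini1-MO, could look as follows; the quadratic pairwise check is chosen for clarity, not efficiency.

```python
def non_dominated(points):
    """Return the non-dominated subset of a list of (f1, f2) objective
    vectors, where both objectives are minimized (as in Cassini1-MO).
    A point p is dominated if some other point q is at least as good
    in both objectives and differs from p."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front
```

Applied to a list of (∆V, time-of-flight) pairs, this reproduces the Pareto-front membership test used when plotting fronts such as the one in Figure 1.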

Cassini1-MO-MINLP
The Cassini1-MO-MINLP benchmark combines the previous two extensions, Cassini1-MINLP and Cassini1-MO. This benchmark is therefore a multi-objective mixed-integer problem and very challenging to solve. The decision variables are the same as for Cassini1-MINLP (time intervals between events and, for variables 7 ∼ 10, the discrete fly-by planets), and the fly-by planet choices are listed in Table 10. As in Cassini1-MO, the first objective of Cassini1-MO-MINLP is the total ∆V and the second objective is the total time of flight of the space mission. Like the Cassini1-MO benchmark, Cassini1-MO-MINLP has an additional constraint on the objective space. However, in contrast to Cassini1-MO, this benchmark bounds the objective space to solutions with a first objective function value of f_1(x) ≤ 6.0 instead of 7.0, as the changed search space landscape allows for better solutions than in the Cassini1-MO benchmark.
The Pareto front of Cassini1-MO-MINLP (as approximated by the MIDACO solver in [17], see Figure 2) reveals a distinctive separated shape; note that the upper left part mostly corresponds to solutions with the best known integer sequence, while the lower part corresponds to the second best known integer sequence. The full set of non-dominated solutions is also available online at [9] as a text file.

Landscape Analysis
Landscape analysis methods can be employed to characterize the features of fitness landscapes by representing the levels of ruggedness, smoothness, multi-modality and neutrality [24].Characteristics like these can be exploited by algorithm designers in the construction of an algorithm, but also in an online fashion by algorithms themselves.For example, when fitness values vary very little in a particular region (and the algorithm might be stuck on a plateau), an optimization method can adjust its parameters (like mutation step size) to generate candidates that are further away from the current solutions.
In the following, we use two ways of characterising the benchmarks. First, we sample around the best-known solutions in a manner similar to that used by some local search algorithms, with the goal of simulating how a local search would perceive the area around the best solutions; due to the high dimensionality, we use parallel coordinate plots for the visualisation. Second, we use systematic grid searches for a holistic picture of the entire problem, visualised as 3D plots.
First, we investigate the local sampling around the best trajectories known. We sample the neighbourhood in a manner similar to that employed by some local search approaches: (1) each variable is mutated with probability 1/N, where N is the number of decision variables; (2) mutated variables are drawn from a normal distribution centred on the best-known solution, with a standard deviation proportional to (Ub_i − Lb_i), where Ub_i and Lb_i are the upper and lower bounds of the i-th variable; and (3) we draw one million samples.
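A minimal sketch of this sampling scheme might look as follows; the standard-deviation fraction sigma_frac and the clipping to the box bounds are assumptions for illustration, as the exact settings of the experiments are not restated here.

```python
import numpy as np

def sample_neighbourhood(best, lb, ub, n_samples=1000, sigma_frac=0.01, rng=None):
    """Sample around a best-known solution: each of the N variables is
    mutated with probability 1/N, using normally distributed
    perturbations with standard deviation sigma_frac * (ub - lb) per
    variable (sigma_frac is an assumed value)."""
    rng = rng or np.random.default_rng(0)
    best, lb, ub = (np.asarray(a, dtype=float) for a in (best, lb, ub))
    N = len(best)
    mutate = rng.random((n_samples, N)) < 1.0 / N        # mutation mask, prob. 1/N
    noise = rng.normal(0.0, sigma_frac * (ub - lb), size=(n_samples, N))
    return np.clip(best + mutate * noise, lb, ub)        # stay inside the box bounds
```

Each row of the returned array is one neighbour of the best-known solution, ready to be evaluated by the benchmark function and plotted as one coloured line in a parallel coordinate plot.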
In Figures 3 to 5 (for the eight single-objective problems), the columns represent the decision variables of the benchmarks, and coloured lines connect the values that belong to one solution vector. The colour denotes the respective solution quality: the most efficient solutions are dark blue. The primary observation is that some variables play an important role in optimizing the trajectory cost (∆V), for instance X1 to X8 in Cassini2 (Figure 5), where small changes in the decision variable can result in large changes in the objective value. For practical approaches, this means that they need to be able to focus differently on the different decision variables.
Next, we use a grid search, a systematic approach with a very small step (mesh) size (µ = (Ub − Lb) * 0.001), as a sampling method across the entire search space. The grid search provides a uniform and discrete sampling of the landscape. While systematic, grid searches can be an expensive landscape analysis method when the number of decision variables is high (curse of dimensionality [26]) or the objective function is computationally expensive; hence, we limit ourselves to pairs of grid searches, i.e. one for each possible pair of decision variables.
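A pairwise grid search of this kind can be sketched as follows; the objective f stands in for any single-objective GTOPX benchmark, and fixing all remaining variables at the best-known solution is an assumption about the experimental setup.

```python
import numpy as np

def pairwise_grid(f, best, lb, ub, i, j, step_frac=0.001):
    """Systematic grid search over decision variables i and j with mesh
    size mu = (ub - lb) * step_frac per variable; all other variables
    are held fixed at the best-known solution. Returns the grid of
    objective values, suitable for a 3D surface plot."""
    gi = np.arange(lb[i], ub[i], (ub[i] - lb[i]) * step_frac)
    gj = np.arange(lb[j], ub[j], (ub[j] - lb[j]) * step_frac)
    Z = np.empty((len(gi), len(gj)))
    x = np.array(best, dtype=float)
    for a, vi in enumerate(gi):
        for b, vj in enumerate(gj):
            x[i], x[j] = vi, vj      # vary only the selected pair
            Z[a, b] = f(x)
    return Z
```

With step_frac = 0.001 this produces roughly one million evaluations per variable pair, which is why the text restricts the analysis to two-dimensional slices rather than a full grid over all variables.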
Figure 6 shows results for Messenger (full), Messenger (reduced) and Rosetta. We can see a high level of ruggedness and a large number of spikes in the landscapes. Based on this observed multi-modality (i.e. multiple local optima), it might be worth considering, in future work, optimization methods such as niching techniques [27], crowding [28], fitness sharing [29], species conserving [30] and covariance matrix adaptation [31].
As a side-effect of our grid search and of the local sampling, we have found small improvements over the previously best known solutions for the Rosetta benchmark. A new solution with a percentage change improvement of 0.00075% was found. Note that, in the case of the GTOP benchmarks, new solutions must traditionally hold a percentage change improvement of at least 0.1% to be considered significant. Nonetheless, other researchers (e.g. [7]) have published improvements below the 0.1% threshold, and thus the new solutions displayed here might be of interest to some.
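The percentage change improvement used in this significance convention can be computed with a small helper; the exact convention (sign and normalisation) is an assumption, sketched here for minimized objectives.

```python
def improvement_pct(old_f, new_f):
    """Percentage change improvement of a new objective value over the
    previously best known one; positive means the new value is better
    (smaller, since the objectives are minimized)."""
    return (old_f - new_f) / abs(old_f) * 100.0

# example: a 0.2% improvement clears the traditional 0.1% significance
# threshold mentioned in the text, while 0.00075% does not
significant = improvement_pct(100.0, 99.8) >= 0.1
```

By this measure the Rosetta improvement of 0.00075% reported above falls well below the traditional 0.1% threshold, which is why it is presented only as a curiosity.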

Table 3
Description of optimization variables for Cassini1

Table 4
Description of optimization variables for Cassini2
Time interval between events (e.g. departure, fly-by, capture)
10 ∼ 14  Fraction of the time interval after which the DSM occurs
15 ∼ 18  Radius of fly-by (in planet radii)
19 ∼ 22  Angle of the planet approach vector, measured in the B-plane of the planet

Table 5
Description of optimization variables for Messenger (reduced)

Table 6
Description of optimization variables for Messenger (full)

Table 7
Description of optimization variables for GTOC1

Table 8
Description of optimization variables for Rosetta

Table 9
Description of optimization variables for Sagas

Table 11
Description of optimization variables for Cassini1-MINLP

Table 12
Description of optimization variables for Cassini1-MO

Table 13
Description of optimization variables for Cassini1-MO-MINLP

Table 14
New solutions for Rosetta