EFFICIENT ALGORITHM FOR GLOBALLY COMPUTING THE MIN–MAX LINEAR FRACTIONAL PROGRAMMING PROBLEM

Abstract. In this paper, we consider the min–max linear fractional programming problem (MLFP), which is NP-hard. We first introduce some auxiliary variables to derive a problem equivalent to the problem (MLFP). An outer space branch-and-bound algorithm is then designed by integrating some basic operations, such as the linear relaxation method and the branching rule. The global convergence of the proposed algorithm is proved by means of the subsequent solutions of a series of linear relaxation programming problems, and the computational complexity of the proposed algorithm is estimated based on the branching rule. Finally, numerical experimental results demonstrate that the proposed algorithm can efficiently compute the globally optimal solutions of the test examples.

Over the past decades, numerous algorithms have been proposed to solve the problem (MLFP). These algorithms can be classified into the following categories: cutting plane algorithms [12], branch-and-bound approaches [13,14], partial linearization algorithms [15], interior point algorithms [16], Dinkelbach-type algorithms [17], and proximal regularization methods [18]. Beyond the aforementioned algorithms, theoretical research has also made substantial progress; see [19-25] for details. Recently, Khajavirad and Sahinidis [26] introduced a new relaxation paradigm for global optimization, which can be used to solve the problem (MLFP). Jiao and Liu [27] proposed a two-level linearizing method for solving the problem (MLFP) with a small number of variables. Wang et al. [28] also presented a linear relaxation algorithm for computing the problem (MLFP) with a small number of variables. In addition, some related algorithms [29-35] can also provide new ideas for solving the problem (MLFP). However, the above algorithms consider only special forms of the problem (MLFP) and the situation where the problem (MLFP) is small-scale. Thus, it remains a challenging problem to design an effective algorithm for solving the problem (MLFP) with a large number of variables.
This paper presents an efficient outer space branch-and-bound algorithm. First, an equivalent problem of the original problem is obtained by introducing some auxiliary variables. Then, we establish the linear relaxation problem of the equivalent problem by constructing a new linear relaxation method. The linear relaxation problem not only provides a valid lower bound for the optimal value of the equivalent problem, but its globally optimal solution can also approximate the optimal solution of the equivalent problem arbitrarily closely. Next, we give the convergence analysis of the proposed algorithm. Furthermore, we estimate the maximum number of iterations of the algorithm for the first time. Finally, numerical experimental results compared with other algorithms and BARON show that the proposed algorithm is efficient and robust.
Compared with the existing algorithms, the novelties of this study are as follows. First, a new equivalent problem is constructed and a tight linear relaxation problem is established for it; the relaxation problem provides a tight lower bound for the equivalent problem. Second, the branching search takes place in the outer space $\mathbb{R}^p$ of the denominators $\sum_{j=1}^{n} c_{ij}x_j + d_i$, $i = 1, 2, \ldots, p$, instead of the space $\mathbb{R}^n$ of the variable $x$, which reduces the computational complexity of the algorithm and improves its convergence speed. Third, by analyzing the computational complexity of the algorithm, we estimate the maximum number of iterations of the algorithm for the first time. Finally, compared with the existing algorithms and the BARON solver, numerical experimental results indicate that the algorithm proposed in this paper has better computational performance.
It should be noted that the basic differences between the proposed algorithm and the Lagrangian relaxation method for solving the considered problem are as follows. The considered problem is a global optimization problem. The proposed algorithm is a global optimization algorithm that can find a globally optimal solution of the considered problem, whereas the Lagrangian relaxation method is a local optimization method that can only find a locally optimal solution. In addition, the basic principle of the Lagrangian relaxation method is to absorb the constraints that make the problem difficult into the objective function while maintaining the linearity of the objective function, which makes the problem easier to solve; however, the Lagrangian relaxation of the considered problem does not preserve the linearity of the objective function. Therefore, the Lagrangian relaxation method is not suitable for solving the considered problem.
In addition, the differences between the proposed branch-and-bound algorithm and the one proposed by Wang et al. [28] are as follows. First, the branching spaces are different. The branching of the proposed algorithm takes place in the outer space $\mathbb{R}^p$ of the denominators $\sum_{j=1}^{n} c_{ij}x_j + d_i$, $i = 1, 2, \ldots, p$, whereas the branching of the algorithm of Wang et al. [28] takes place in the $n$-dimensional variable space $\mathbb{R}^n$. When $p$ is much smaller than $n$, the proposed algorithm greatly reduces the computational complexity. Second, the linear relaxation methods are different. By exploiting the functional properties of the equivalent problem, the proposed algorithm and the algorithm of [28] construct two different linear relaxation methods and, based on them, establish two different linear relaxation problems. Third, the computational efficiency is different. The numerical comparisons show that the computational efficiency of the proposed algorithm is significantly higher than that of the algorithm in [28].
The remainder of this paper is organized as follows. In Section 2, the equivalent problem of the problem (MLFP) and its linear relaxation problem are established. In Section 3, an outer space branch-and-bound algorithm is introduced, and the convergence proof and the computational complexity analysis of the algorithm are given. Numerical experimental results are presented in Section 4. Section 5 summarizes this paper.

Equivalent problem and its linear relaxation programming
The goal of this section is to derive the equivalent problem of the problem (MLFP) and to construct its linear relaxation problem.
To achieve this goal, first, for each $i = 1, 2, \ldots, p$,
Next, by introducing the auxiliary variables $h$ and $y$, the equivalent problem (EP) is obtained as follows.
Thus, we can solve the problem (EP) instead of the problem (MLFP), because they have the same optimal solution and optimal value. Next, in order to globally solve the problem (EP), its linear relaxation problem needs to be constructed through the following procedure.
Based on the above discussion, for any $\Omega \subseteq \Omega^0$, the linear relaxation problem is constructed as follows.
(LRP): From the construction process of the linear relaxation problem, denoting the optimal value of the problem (LRP) by $v(\mathrm{LRP})$ and the optimal value of the problem (EP) by $v(\mathrm{EP})$, it is easy to see that over any rectangle $\Omega$ we have $v(\mathrm{LRP}) \le v(\mathrm{EP})$. Thus, the problem (LRP) provides a valid lower bound for the optimal value of the problem (EP). In addition, since the proposed linearization technique directly linearizes the problem (EP), the established linear relaxation problem provides a tight lower bound for the problem (EP).
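The bounding principle at work here, that a relaxation's optimal value never exceeds the true optimal value over the same box, can be illustrated with a standard linearization device. The sketch below underestimates a bilinear term with McCormick planes over a rectangle; this is a generic illustration of box-based linear relaxation only, not the paper's specific linearization of the problem (EP):

```python
import itertools

def mccormick_under(x, y, xl, xu, yl, yu):
    """McCormick underestimator of the bilinear term x*y on [xl, xu] x [yl, yu]:
    the pointwise maximum of two supporting planes, which satisfies
    mccormick_under(...) <= x*y everywhere on the box."""
    return max(xl * y + x * yl - xl * yl,
               xu * y + x * yu - xu * yu)

def box_min(f, xl, xu, yl, yu, n=60):
    """Approximate minimum of f over the box by a uniform grid (corners included)."""
    xs = [xl + (xu - xl) * i / n for i in range(n + 1)]
    ys = [yl + (yu - yl) * j / n for j in range(n + 1)]
    return min(f(x, y) for x, y in itertools.product(xs, ys))

# Nonconvex term x*y over the box [-1, 2] x [-1, 1].
xl, xu, yl, yu = -1.0, 2.0, -1.0, 1.0
lb = box_min(lambda x, y: mccormick_under(x, y, xl, xu, yl, yu), xl, xu, yl, yu)
true_min = box_min(lambda x, y: x * y, xl, xu, yl, yu)
print(lb, true_min)   # lb never exceeds true_min; here both equal -2.0
```

As the box shrinks under branching, the gap between the relaxation and the true term closes, which is what drives convergence of bound-based algorithms of this type.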

Algorithm and its computational complexity
Throughout this section, based on the previous conclusions, an outer space branch-and-bound algorithm for solving the problem (MLFP) is proposed. The convergence of the algorithm is proved, and the computational complexity of the algorithm is analyzed.

Branching rule
To ensure the convergence of the proposed algorithm, we choose a standard dichotomy. Consider any node sub-problem identified by the rectangle $\Omega = \{y \in \mathbb{R}^p \mid l_i \le y_i \le u_i,\ i = 1, 2, \ldots, p\} \subseteq \Omega^0$; the exact branching rule is described below. By the branching rule, we split the rectangle $\Omega$ into two new sub-rectangles $\Omega^1$ and $\Omega^2$.
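The dichotomy can be sketched in a few lines. The edge being bisected is chosen here as the longest edge of the rectangle, which is one common instantiation of the standard rule; the paper's exact edge-selection formula is not reproduced in this excerpt, so treat that choice as an assumption:

```python
def branch(lo, hi):
    """Standard dichotomy: bisect the rectangle [lo, hi] along its longest edge
    at the midpoint, producing two sub-rectangles Omega_1 and Omega_2."""
    k = max(range(len(lo)), key=lambda i: hi[i] - lo[i])   # longest edge index
    mid = 0.5 * (lo[k] + hi[k])
    hi_left = list(hi); hi_left[k] = mid
    lo_right = list(lo); lo_right[k] = mid
    return (list(lo), hi_left), (lo_right, list(hi))

# Splitting [0, 4] x [0, 1] bisects the first (longest) coordinate:
left, right = branch([0.0, 0.0], [4.0, 1.0])
print(left, right)   # ([0.0, 0.0], [2.0, 1.0]) and ([2.0, 0.0], [4.0, 1.0])
```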

Outer space branch-and-bound algorithm
From the above discussions, the fundamental steps of the proposed algorithm are summarized below.
Step 2. Set $\mathrm{UB}_k = \mathrm{UB}_{k-1}$. According to the branching rule given earlier, subdivide $\Omega^{k-1}$ into two new sub-rectangles, and designate the new set of partitioned sub-rectangles as $\Omega_k$.
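The loop that Step 2 belongs to can be sketched generically as follows. The functions `lower_bound` and `feasible_value` are placeholders standing in for solving the problem (LRP) over a box and evaluating a feasible point of the problem (EP); the toy one-dimensional instance at the end is purely illustrative, not the paper's problem:

```python
import heapq

def branch_and_bound(lo, hi, lower_bound, feasible_value, eps=1e-6, max_iter=10000):
    """Generic rectangle branch-and-bound skeleton: lower_bound(lo, hi) must
    return a valid lower bound of the objective over the box, and
    feasible_value(lo, hi) the objective value of some feasible point in it."""
    best = feasible_value(lo, hi)                  # incumbent upper bound
    heap = [(lower_bound(lo, hi), lo, hi)]         # active nodes keyed by lower bound
    for _ in range(max_iter):
        if not heap:
            break
        node_lb, lo, hi = heapq.heappop(heap)
        if best - node_lb <= eps:                  # epsilon-optimality test: stop
            break
        k = max(range(len(lo)), key=lambda i: hi[i] - lo[i])
        mid = 0.5 * (lo[k] + hi[k])                # bisect the longest edge
        left_hi = list(hi); left_hi[k] = mid
        right_lo = list(lo); right_lo[k] = mid
        for clo, chi in ((list(lo), left_hi), (right_lo, list(hi))):
            best = min(best, feasible_value(clo, chi))
            clb = lower_bound(clo, chi)
            if clb < best - eps:                   # keep only nodes that may improve
                heapq.heappush(heap, (clb, clo, chi))
    return best

# Toy 1-D check: minimize f(y) = (y - 1)^2 over [-1, 2].
f = lambda y: (y - 1.0) ** 2
def lb_box(lo, hi):   # valid interval bound: 0 if the box contains the minimizer
    return 0.0 if lo[0] <= 1.0 <= hi[0] else min(f(lo[0]), f(hi[0]))
def fv_box(lo, hi):   # evaluate f at the box midpoint, always feasible here
    return f(0.5 * (lo[0] + hi[0]))
val = branch_and_bound([-1.0], [2.0], lb_box, fv_box)
print(val)            # converges to (nearly) 0
```

Selecting the node with the smallest lower bound makes the popped bound a global lower bound, so the test `best - node_lb <= eps` matches the usual branch-and-bound termination criterion.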

Convergence analysis
Throughout this subsection, for brevity, we denote the global optimal value of the problem (EP) by $h^*$, denote by $h^k$ the objective function value of the problem (EP) at $(y^k, x^k)$, and define the quantities below; the convergence analysis of the algorithm is elaborated as follows.
Theorem 2. Given $\epsilon > 0$, if the algorithm terminates after finitely many iterations, then the incumbent solution $x^k$ is a global $\epsilon$-optimal solution of the problem (MLFP); if the algorithm does not terminate, then an infinite sequence $\{x^k\}$ is generated such that any accumulation point of $\{x^k\}$ is a globally optimal solution of the problem (MLFP).
Proof. Assume that the algorithm terminates after $k$ iterations. According to the termination criterion, we have Furthermore, by solving the linear relaxation problem (LRP) over $\Omega^k$, the best currently known feasible solution of the problem (EP), i.e., $(y^k, x^k)$, is obtained. According to Step 3 of the algorithm, we have Thus, we have So $(y^k, x^k)$ is a global $\epsilon$-optimal solution of the problem (EP) in the sense of Therefore, $x^k$ is a global $\epsilon$-optimal solution of the problem (MLFP).
In the case that the algorithm does not terminate, an infinite sequence $\{(y^k, x^k)\}$ must be generated. It is easy to see that each $(y^k, x^k)$ is a feasible solution of the problem (EP). Let $(y^*, x^*)$ be an accumulation point of the sequence $\{(y^k, x^k)\}$, and assume that lim By the steps of the algorithm, we have lim Thus, $x^*$ is a feasible solution of the problem (MLFP) over $\Omega^0$, and it follows that $h^*$ does not exceed the objective value of the problem (MLFP) at $x^*$.
Also, we have By the steps of the proposed algorithm and the continuity of the objective function, we have From the above discussion, we obtain lim Therefore, any accumulation point $x^*$ is a globally optimal solution of the problem (MLFP), and the proof is complete.

Computational complexity of the algorithm
Throughout this subsection, the computational complexity of the proposed algorithm is analyzed. For this purpose, define the size of the rectangle $\Omega$ as and define

Lemma 1. With the given convergence tolerance $\epsilon > 0$, for any $\Omega^k \subseteq \Omega^0$, if at the $k$-th iteration there exists a rectangle $\Omega^k = [l^k, u^k]$ generated by the proposed algorithm satisfying $\Delta(\Omega^k) \le \epsilon$, then we have where the optimal value of the problem (LRP) is denoted by $\mathrm{LB}(\Omega^k)$ and the best currently known upper bound of the optimal value of the problem (EP) is denoted by $\mathrm{UB}_k$.
Proof. First assume that $x^k$ is the optimal solution of the problem (MLFP), and let so that $(y^k, x^k, h^k)$ is a feasible solution of the problem (EP) over $\Omega^k$. Assume that $(\hat{y}^k, \hat{x}^k, \hat{h}^k)$ is an optimal solution of the linear relaxation programming problem (LRP); by the construction of the problem (LRP) in Section 2, we have According to the definitions of $\mathrm{UB}_k$ and $\mathrm{LB}(\Omega^k)$, it follows that Thus, based on the above discussion and Theorem 1, for any $i = 1, 2, \ldots, p$, we have Further, combining this with $\Delta(\Omega^k) \le \epsilon$, we have and the proof is complete.
Theorem 3. Given $\epsilon > 0$, the proposed algorithm is able to find a global $\epsilon$-optimal solution of the problem (MLFP) in at most iterations.

Proof. Suppose that there exists a sub-rectangle $\Omega = \{y \in \mathbb{R}^p \mid l_i \le y_i \le u_i,\ i = 1, 2, \ldots, p\}$ of $\Omega^0$ that is selected to be partitioned at each iteration. By Lemma 1, suppose that there is a sub-interval after $k_i$ iterations satisfying Based on the branching process, we have Thus, we have That is, Then, after $K_1 = \sum_{i=1}^{p} k_i$ iterations, at most $K_1 + 1$ rectangles will be produced by the proposed algorithm, which can be denoted as $\Omega^1, \Omega^2, \ldots, \Omega^{K_1+1}$, and they satisfy
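Assuming the worst-case bound has the form $2\sum_{i=1}^{p}\lceil \log_2((u_i^0 - l_i^0)/\epsilon_i)\rceil - 1$, where $u_i^0 - l_i^0$ are the initial box widths and $\epsilon_i$ are per-coordinate tolerances (these symbols are reconstructed from the garbled formula, not a verbatim transcription), the bound can be evaluated numerically:

```python
import math

def max_iterations(widths, tols):
    """Worst-case iteration count of the assumed form
    2 * sum_i ceil(log2(w_i / eps_i)) - 1. Treat the formula as a
    reconstruction of the paper's bound, not a verbatim copy."""
    return 2 * sum(math.ceil(math.log2(w / e)) for w, e in zip(widths, tols)) - 1

# A box with coordinate widths 8 and 4 and unit tolerances needs
# ceil(log2 8) + ceil(log2 4) = 3 + 2 = 5 bisections per coordinate chain:
print(max_iterations([8.0, 4.0], [1.0, 1.0]))   # 2*5 - 1 = 9
```

Note that the bound grows with the number of fractions $p$, not with the number of variables $n$, which is the source of the complexity advantage claimed for the outer space branching.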

Numerical experiments
In this section, some numerical comparisons with the software BARON [26] and the known branch-and-bound algorithms presented by Feng et al. [13], Jiao and Liu [27], and Wang et al. [28] are given. It should be noted that BARON was the first commercial optimization software for solving nonlinear and mixed-integer nonlinear problems with deterministic guarantees. We have implemented the algorithms proposed by Feng et al. [13], Khajavirad and Sahinidis [26], Jiao and Liu [27], and Wang et al. [28]. All numerical tests are implemented in MATLAB R2014a and run on a microcomputer equipped with an 11th Gen Intel(R) Core(TM) i7-1165G7 @2.80 GHz processor and 16 GB RAM. In all numerical tests, the maximum CPU time limit is set to 3600 s.
First, with the given convergence tolerance $\epsilon = 10^{-6}$, we test the randomly generated Problem 1 with small-size variables, and the numerical results compared with the algorithms of Feng et al. [13], Jiao and Liu [27], and Wang et al. [28] are shown in Table 1. Next, with the given convergence tolerance $\epsilon = 10^{-2}$, we test the randomly generated Problem 1 with large-size variables to further verify the computational efficiency of our algorithm, and the numerical outcomes compared with the software BARON are shown in Table 2. Tables 1 and 2 record the best, worst, and average results among the ten test examples, and the winners of the average results in the comparisons are highlighted in bold. It should be noted that, in order to compare the proposed algorithm with the algorithms [13,27,28] and clearly demonstrate its advantages, we list the numerical comparison results of Problem 1 with small-size variables in Table 1. Because the algorithms [13,27,28] have difficulty solving Problem 1 with large-size variables, and in order to clearly demonstrate the advantages of the proposed algorithm over BARON, we list only the numerical comparison results of Problem 1 with large-size variables in Table 2.
A number of symbols are used in Tables 1 and 2. Iter: the number of iterations of the algorithm; CPU Time: the execution CPU time of the algorithm in seconds; "−": the algorithm failed to terminate within 3600 s.
From Table 1, we can see that when $p \ge 2$, $m \ge 10$, and $n \ge 6$, the algorithm of Feng et al. [13] failed to solve some of the ten independently generated test examples within 3600 s; when $p \ge 2$, $m \ge 10$, and $n \ge 10$, the algorithm of Wang et al. [28] failed to solve some of the ten independently generated test problems within 3600 s; but in all cases, the proposed algorithm and the algorithm of Jiao and Liu [27] can globally solve all ten independently generated test examples. Moreover, when $p \ge 2$, $m \ge 10$, and $n \ge 6$, the proposed algorithm takes less time than the algorithms presented by Feng et al. [13], Jiao and Liu [27], and Wang et al. [28]. Thus, the proposed algorithm has significant superiority. From Table 2, we can see that when solving the large-size problems, although the software BARON requires fewer iterations than our algorithm, it takes more computing time. In addition, when $p = 2$, $m = 100$, $n \ge 7000$ and when $p = 3$, $m = 100$, $n \ge 7000$, the software BARON failed to terminate within 3600 s for some of the ten independently generated test problems, whereas in all cases the proposed algorithm can globally solve all of them. Therefore, the proposed algorithm outperforms the software BARON in terms of computational performance.
In addition, from the numerical results in Tables 1 and 2, it can be seen that the number of variables has a significant impact on the computational efficiency of the algorithms [13,27,28], while the number $p$ of fractions has a significant impact on the computational efficiency of the proposed algorithm. This is mainly because the computational complexity of the proposed algorithm is related to the number $p$ of fractions, while the computational complexity of the algorithms [13,27,28] is related to the number $n$ of variables. Therefore, when the number $p$ of fractions in the problem (MLFP) is relatively small and the number $n$ of variables is relatively large, the algorithm proposed in this paper solves it with high computational efficiency.
Consider the following optimal investment problem, whose mathematical model can be given as below: It should be noted that this problem can be transformed into a min–max linear fractional programming problem. Using the algorithm presented in this paper to solve it, the globally optimal solution is obtained as $x = (0.8104, 0.0000, 0.1896)$.
That is to say, the optimal investment percentages of the three kinds of investment are 0.8104, 0.0000, and 0.1896, respectively.
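The coefficients of this investment model are not reproduced in this excerpt, so as a purely hypothetical illustration of the problem class, the sketch below minimizes the largest of two linear ratios over the three-asset simplex by brute-force grid search. All data are invented, and grid search is used only to keep the example self-contained; it is not the paper's branch-and-bound method:

```python
# Hypothetical 3-asset min-max linear fractional program:
#   minimize   max_i (a_i . x + b_i) / (c_i . x + d_i)
#   subject to x >= 0 and x1 + x2 + x3 = 1   (investment percentages)
# Every coefficient below is invented for illustration; it is NOT the paper's data.
A = [([1.0, 2.0, 1.5], 0.5), ([2.0, 1.0, 1.0], 1.0)]   # numerator pairs (a_i, b_i)
C = [([3.0, 1.0, 2.0], 1.0), ([1.0, 3.0, 2.0], 1.0)]   # denominator pairs (c_i, d_i)

def objective(x):
    """Largest of the linear ratios at the portfolio x."""
    dot = lambda v, w: sum(p * q for p, q in zip(v, w))
    return max((dot(a, x) + b) / (dot(c, x) + d)
               for (a, b), (c, d) in zip(A, C))

def grid_min(n=200):
    """Brute-force search over the probability simplex with step 1/n."""
    best_x, best_v = None, float("inf")
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x = [i / n, j / n, (n - i - j) / n]
            v = objective(x)
            if v < best_v:
                best_x, best_v = x, v
    return best_x, best_v

x_star, v_star = grid_min()
print(x_star, v_star)
```

For instances of realistic size, such enumeration is hopeless, which is precisely why global methods like the outer space branch-and-bound algorithm of this paper are needed.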

Conclusion
This paper studies the min–max linear fractional programming problem and presents an efficient branch-and-bound algorithm. The proposed branching search takes place in the outer space $\mathbb{R}^p$ of the denominators $\sum_{j=1}^{n} c_{ij}x_j + d_i$, $i = 1, 2, \ldots, p$, which mitigates the required computational effort. The proposed algorithm can achieve an $\epsilon$-approximate globally optimal solution in at most $2\sum_{i=1}^{p}\left\lceil \log_2 \frac{u_i^0 - l_i^0}{\epsilon_i} \right\rceil - 1$ iterations. Numerical comparisons indicate the higher computational efficiency of the proposed algorithm. The limitation of this algorithm is that it is only suitable for solving min–max linear fractional programming problems and not min–max nonlinear fractional programming problems, so future research will focus on the min–max nonlinear fractional programming problems.

Table 1 .
Numerical comparisons with some algorithms for Problem 1.