A Deterministic Method for Solving the Sum of Linear Ratios Problem

Since the sum of linear ratios problem (SLRP) has many applications in real life, an efficient branch and bound algorithm for solving it globally is presented in this paper. By exploiting the structure of the problem (SLRP), we propose a convex separation technique and a two-part linearization technique, which can be used to generate a sequence of linear programming relaxations of the initial nonconvex programming problem. To improve the convergence speed of this algorithm, a deleting rule is presented. The convergence of this algorithm is established, and numerical experiments are reported to show the feasibility and efficiency of the proposed algorithm.


Introduction
This paper considers the sum of linear ratios problem (SLRP) of the following form:

min f(x) = Σ_{i=1}^p c_i (g_i(x)/h_i(x))  s.t.  x ∈ Λ,

where p ≥ 2, g_i(x) = Σ_{j=1}^n c_ij x_j + d_i and h_i(x) = Σ_{j=1}^n e_ij x_j + f_i are finite affine functions such that g_i(x) > 0 and h_i(x) > 0 for all x ∈ Λ = {x | Ax ≤ b, x ∈ X^0}, A = (a_ij)_{m×n}, b ∈ R^m, and the c_i, i = 1, . . . , p, are real constant coefficients.
As we know, fractional programming is an important branch of nonlinear optimization. As a special case of fractional programming, the sum of linear ratios problem has received wide attention. The first reason is that SLRP has many important applications in the real world, such as certain government contracting problems [1], cluster analysis [2], and multiobjective bond portfolio [3]. The second reason is that the objective function is neither quasiconvex nor quasiconcave, so it usually has multiple local optima that are not global optima, and finding its global optima is a very challenging issue. In addition, as pointed out by Matsui [4], a special case of SLRP was proved to be NP-hard (Theorem 1, [4]), so SLRP is an NP-hard problem.
Although many algorithms have been proposed to find the global optimal solution of fractional programming, as far as we know, research results on the SLRP considered in this paper are still relatively scarce. Most existing algorithms are intended only for the sum of linear ratios problem without the coefficients c_i. The aim of the present paper is to propose a new global optimization algorithm for solving SLRP. To globally solve SLRP, a convex separation technique and a two-part relaxation technique are designed to convert SLRP into a series of linear programming problems. Through a successive refinement process, the solutions of these linear programming problems can be made as close as desired to the global optimum of SLRP. The main features of this algorithm are as follows: (1) compared with the methods reviewed above ([3, 6-9, 11, 15], for example), the method given in this paper can solve the general problem SLRP; (2) compared with the method in [2], this method need not introduce new variables; (3) by using a convex separation technique and a two-part relaxation technique, the initial problem SLRP can be converted into a series of linear programming problems, which is more convenient computationally than the parametric programming (or concave minimization) methods [8]; (4) numerical results show that the proposed method can solve all of the test problems, finding globally optimal solutions with the given precision.
The organizational structure of this paper is as follows. To obtain the relaxed linear programming of SLRP, Section 2 introduces a convex separation technique and a two-part relaxation technique. To improve the convergence speed of the proposed algorithm, a deleting rule is designed. In Section 3, the proposed branch and bound algorithm is described based on the relaxed subproblems, and the convergence of the algorithm is proved. Numerical results are given in Section 4, and some concluding remarks are provided in Section 5.

Linear Relaxation of SLRP
For solving the problem SLRP, one of the key parts is to construct lower bounds for SLRP and its partitioned subproblems. Towards this end, we propose a novel strategy to generate lower bounds by underestimating f(x) with a linear function. The detailed construction process is given below.
Let X = [x, x̄] be either the initial box X^0 of the problem SLRP or a partitioned subbox generated in the branch and bound scheme. For convenience of expression, for i = 1, . . . , p, let

gl_i = Σ_{j=1}^n min{c_ij x_j, c_ij x̄_j} + d_i,  gu_i = Σ_{j=1}^n max{c_ij x_j, c_ij x̄_j} + d_i,
hl_i = Σ_{j=1}^n min{e_ij x_j, e_ij x̄_j} + f_i,  hu_i = Σ_{j=1}^n max{e_ij x_j, e_ij x̄_j} + f_i.

(2)
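For illustration, the bounds in (2) amount to standard interval arithmetic on an affine function. A minimal sketch, assuming the function receives the coefficient row (c_ij or e_ij), the constant term (d_i or f_i), and the box [x, x̄] (the function and argument names are ours, not the paper's):

```python
def linear_bounds(coeffs, const, lo, hi):
    """Lower and upper bound of sum_j coeffs[j]*x_j + const over the box
    lo[j] <= x_j <= hi[j], via interval arithmetic as in (2)."""
    lb = const + sum(min(c * l, c * u) for c, l, u in zip(coeffs, lo, hi))
    ub = const + sum(max(c * l, c * u) for c, l, u in zip(coeffs, lo, hi))
    return lb, ub
```

Each coefficient contributes its smaller endpoint product to the lower bound and its larger one to the upper bound, so the bounds are exact for affine functions over a box.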
To generate the linear relaxation of f(x), a convex separation technique and a two-part relaxation technique are designed.

First-Part Relaxation.
For each term g_i(x)/h_i(x), i = 1, . . . , p, since g_i(x) > 0 and h_i(x) > 0 for all x ∈ Λ, we can denote the corresponding variable y_i, whose lower bound y_i and upper bound ȳ_i can be computed over X. By introducing the vector y = (y_1, . . . , y_p), an equivalent form φ(y) of f(x) can be derived. For φ(y), it is not difficult to calculate its gradient and Hessian matrix; therefore, for a suitable λ > 0, the function (1/2)λ‖y‖^2 + φ(y) is convex on Y = [y, ȳ]. Based on the above results, we can decompose φ(y) into the difference of two convex functions as follows:

φ(y) = ϕ(y) − ψ(y),

where ϕ(y) = (1/2)λ‖y‖^2 + φ(y) and ψ(y) = (1/2)λ‖y‖^2.

Mathematical Problems in Engineering
Let y^mid = (1/2)(y + ȳ). Since ϕ(y) is a convex function, it is underestimated on Y by its linearization at y^mid, that is, ϕ(y) ≥ ϕ(y^mid) + ∇ϕ(y^mid)^T (y − y^mid). In addition, for all y_i ∈ [y_i, ȳ_i], it is not difficult to show that (y_i − y_i)(y_i − ȳ_i) ≤ 0, i.e., y_i^2 ≤ (y_i + ȳ_i) y_i − y_i ȳ_i, so that ψ(y) = (1/2)λ‖y‖^2 is overestimated by the corresponding secant. Since λ > 0, combining (10), (11), and (14) yields a linear underestimator of φ(y) over Y, and from (3) a corresponding underestimator of f(x) follows.
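The two estimates above, the tangent underestimation of the convex part ϕ at y^mid and the secant overestimation of ψ(y) = (1/2)λ‖y‖^2, combine into a single linear underestimator of φ(y) on the box. The sketch below assumes φ and its gradient are available as callables and that λ already makes ϕ(y) = φ(y) + (λ/2)‖y‖^2 convex; all names are ours:

```python
import numpy as np

def dc_linear_lower_bound(phi, grad_phi, lam, y_lo, y_hi):
    """Linear underestimator of phi(y) = varphi(y) - psi(y) over [y_lo, y_hi],
    where varphi(y) = phi(y) + (lam/2)||y||^2 is convex and
    psi(y) = (lam/2)||y||^2. Returns (a, b) with a @ y + b <= phi(y) on the box
    (a sketch of the convex separation construction, under our assumptions)."""
    y_mid = 0.5 * (y_lo + y_hi)
    # tangent of convex varphi at the midpoint: varphi(y) >= v_mid + g @ (y - y_mid)
    g = grad_phi(y_mid) + lam * y_mid
    v_mid = phi(y_mid) + 0.5 * lam * float(y_mid @ y_mid)
    # secant overestimator of y_i^2 on [lo_i, hi_i]: y_i^2 <= (lo_i + hi_i) y_i - lo_i hi_i
    a = g - 0.5 * lam * (y_lo + y_hi)
    b = v_mid - float(g @ y_mid) + 0.5 * lam * float(y_lo @ y_hi)
    return a, b
```

Subtracting the secant overestimator of ψ from the tangent underestimator of ϕ preserves the lower-bound property, since both pieces err in the favorable direction.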

Second-Part Relaxation.
For ln(z) over an interval [z, z̄] with 0 < z < z̄, we can derive its linear lower bound and linear upper bound as follows:

ln(z) + K(z − z) ≤ ln(z) ≤ ln(1/K) + Kz − 1,

where K = (ln(z̄) − ln(z))/(z̄ − z). Therefore, applying these bounds to each term (i = 1, . . . , p), we can obtain the linear lower bound function f^l(x) of f(x). Consequently, the corresponding relaxation linear programming (RLP) of SLRP on X is obtained.
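Numerically, the chord with slope K lies below the concave function ln on [z, z̄], while a tangent of the same slope, touching at z = 1/K (our reading where the paper's display is garbled), lies above it everywhere:

```python
import math

def log_bounds(z, z_lo, z_hi):
    """Chord lower bound and tangent upper bound of ln(z) over [z_lo, z_hi],
    both with slope K = (ln z_hi - ln z_lo)/(z_hi - z_lo); the tangent point
    1/K is an assumption on our part."""
    K = (math.log(z_hi) - math.log(z_lo)) / (z_hi - z_lo)
    lower = math.log(z_lo) + K * (z - z_lo)   # chord lies below the concave ln
    upper = math.log(1.0 / K) + K * z - 1.0   # tangent at z = 1/K lies above
    return lower, upper
```

Both bounds are exact at the interval endpoints (lower bound) and at 1/K (upper bound), and their gap shrinks to zero as z̄ − z → 0, which is what drives the relaxation to tightness under subdivision.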
Combining the above results, we have Theorem 1. By Theorem 1, it can be seen that f^l(x) approximates the objective function f(x) as δ_j ⟶ 0, j = 1, . . . , n.
From the above discussion, it can be seen that the objective value of RLP is smaller than or equal to that of SLRP at every feasible point; thus, RLP provides a valid lower bound for the optimal value of SLRP. For any problem (P), let V(P) denote its optimal value; then V(RLP) ≤ V(SLRP). To improve the convergence speed of the proposed algorithm, a deleting technique is proposed, which can be used to eliminate regions that contain no optimal solution of SLRP.
Let UB be the current known upper bound of the optimal objective value f* of the problem SLRP. The following theorem gives the deleting technique.

Theorem 2. For any subrectangle
If there exists k ∈ {1, 2, . . . , n} such that α_k > 0 and ρ_k < α_k x_k, then there is no global optimal solution of SLRP over X^1; if there exists k ∈ {1, 2, . . . , n} such that α_k < 0 and ρ_k < α_k x_k, then there is no global optimal solution of SLRP over X^2. Proof. We first show that f(x) > UB for all x ∈ X^1. Consider the k-th component x_k of x; note that α_k > 0, and then, from the definition of ρ_k and the above inequality, we have f(x) ≥ f^l(x) > UB. Hence, there is no global optimal solution of SLRP over X^1.
Similarly, if α_k < 0 and ρ_k < α_k x_k for some k, then f(x) > UB for all x ∈ X^2, so there is no global optimal solution of SLRP over X^2.
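The deleting rule can be read as a range-reduction test driven by a linear underestimator f^l(x) = Σ_j α_j x_j + β and the incumbent UB: any x with f^l(x) > UB cannot be globally optimal. Since the paper's exact definition of ρ_k is lost to extraction, the sketch below is our generic reading, not a verbatim implementation:

```python
def shrink_box(alpha, beta, UB, lo, hi):
    """Range-reduction sketch: assuming f_l(x) = sum_j alpha[j]*x[j] + beta
    underestimates f on the box, any x with f_l(x) > UB can be discarded,
    which tightens each coordinate range (our generic reading of Theorem 2;
    rho_k here is UB minus the smallest possible value of the other terms)."""
    n = len(alpha)
    lo, hi = list(lo), list(hi)
    for k in range(n):
        # smallest possible contribution of all terms except the k-th
        rest = beta + sum(min(alpha[j] * lo[j], alpha[j] * hi[j])
                          for j in range(n) if j != k)
        rho = UB - rest
        if alpha[k] > 0 and rho < alpha[k] * hi[k]:
            hi[k] = min(hi[k], rho / alpha[k])  # x_k above rho/alpha_k forces f_l > UB
        elif alpha[k] < 0 and rho < alpha[k] * lo[k]:
            lo[k] = max(lo[k], rho / alpha[k])
    return lo, hi
```

If a shrunken range becomes empty (hi[k] < lo[k]), the whole subrectangle can be fathomed, which matches the theorem's "no global optimal solution over X^1 (or X^2)" conclusion.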

Algorithm and Its Convergence
Based on the foregoing linear relaxation programming (RLP), this section presents a branch and bound algorithm to solve the problem SLRP. In this method, a sequence of RLP problems over partitioned subsets of X^0 needs to be solved.
As the algorithm goes on, the set X 0 will be partitioned into some subrectangles. Each subrectangle is associated with a node of the branch and bound tree and a relaxation linear subproblem.
At the k-th iteration of the algorithm, let Q_k denote the collection of active nodes, i.e., each node in Q_k is associated with a rectangle X ⊆ X^0. For each X ∈ Q_k, a lower bound LB(X) of the optimal value of SLRP over X needs to be computed. The lower bound of the optimal value of SLRP over the whole initial rectangle X^0 at the k-th iteration is then LB_k = min{LB(X) | X ∈ Q_k}. Let X^k be an active node X with LB_k = LB(X); it is partitioned into two subrectangles as described below. Meanwhile, the upper bound for each new node is computed as before, and, if necessary, the upper bound UB_k is updated. After deleting all nodes that cannot yield an improvement, the collection of active nodes for the next stage is obtained. The above process is repeated until the termination conditions are met.

Branching Rule.
In the proposed algorithm, a simple and standard bisection rule is used. Consider any node subproblem identified by the rectangle X = {x ∈ R^n | x_j ≤ x_j ≤ x̄_j, j = 1, . . . , n} ⊆ X^0. The branching rule is briefly introduced as follows: (a) let i = argmax{x̄_j − x_j | j = 1, . . . , n}; (b) let γ = (x_i + x̄_i)/2; (c) let X^1 = {x ∈ X | x_i ≤ γ} and X^2 = {x ∈ X | x_i ≥ γ}. According to this branching rule, the rectangle X is partitioned into two subrectangles X^1 and X^2.
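A sketch of this bisection rule (the midpoint split along the widest edge is the standard reading of the partially garbled steps):

```python
def bisect_widest(lo, hi):
    """Standard bisection: split the box [lo, hi] along its widest edge
    at the midpoint, returning the two subboxes."""
    i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])  # widest coordinate
    mid = 0.5 * (lo[i] + hi[i])
    hi1, lo2 = list(hi), list(lo)
    hi1[i] = mid
    lo2[i] = mid
    return (list(lo), hi1), (lo2, list(hi))
```

Splitting the widest edge guarantees that along any infinite nested sequence of boxes the edge lengths all tend to zero, which is exactly the exhaustiveness property the convergence proof relies on.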
Since this branching rule drives the edge lengths of the rectangles to zero along any infinite branch of the branch and bound tree, it is sufficient to ensure convergence. This is very important for the convergence of the algorithm.

Algorithm Statement.
Let LB(X) be the optimal objective function value of the problem (RLP) over the rectangle X. Based on the results above, we give the basic steps of the proposed global optimization algorithm.
Step 1. Choose ε ≥ 0. Find an optimal solution x^0 and the optimal value LB(X^0) of the problem (RLP) with X = X^0, and set LB_0 = LB(X^0) and UB_0 = f(x^0). If UB_0 − LB_0 ≤ ε, then stop: x^0 is a global ε-optimal solution of the problem SLRP. Otherwise, set k = 1 and F = ∅. Step 2. Set LB_k = LB_{k−1}. Subdivide X^{k−1} into two rectangles X^{k,1}, X^{k,2} ⊆ R^n via the branching rule, and let F = F ∪ {X^{k−1}}.
Step 3. For each node X^{k,1}, X^{k,2}, the lower bound of each linear constraint function Σ_{j=1}^n a_ij x_j (i = 1, . . . , m) is computed over the rectangle under consideration, i.e., the lower bound Σ_{j=1}^n min{a_ij x_j, a_ij x̄_j} is computed, where x_j and x̄_j denote the lower bound and the upper bound of the rectangle under consideration, respectively.
If this lower bound exceeds b_i for some i, then the corresponding node is infeasible and is put into F. If X^{k,1} and X^{k,2} are both put into F, then go to Step 7.
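The Step-3 test can be sketched as follows, using the same interval lower bound on each constraint row (the function name is ours):

```python
def node_infeasible(A, b, lo, hi):
    """Step-3 fathoming test: the node [lo, hi] is infeasible if, for some row i,
    even the smallest value of sum_j A[i][j]*x_j over the box exceeds b[i]."""
    for Ai, bi in zip(A, b):
        lb = sum(min(a * l, a * u) for a, l, u in zip(Ai, lo, hi))
        if lb > bi:
            return True
    return False
```

This cheap test discards subrectangles that cannot contain any feasible point before the more expensive RLP solve is attempted.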
Step 4. For the undeleted subrectangle X^{k,1} and/or X^{k,2}, the parameters K_1i, K_2i, gl_i, gu_i, hl_i, hu_i, y_i, ȳ_i (i = 1, . . . , p) are updated. Compute LB(X^{k,t}) and find an optimal solution x^{k,t} of the problem (RLP) with X = X^{k,t}, where t = 1, t = 2, or t = 1, 2. If necessary, update UB_k = min{UB_{k−1}, f(x^{k,t})}, and let x^k denote the point satisfying UB_k = f(x^k).
Step 5. If UB_k ≤ LB(X^{k,t}), then set F = F ∪ {X^{k,t}}. Step 6. Apply the deleting technique of Theorem 2 to the undeleted subrectangles. Step 7. Set Q_k = (Q_{k−1} ∪ {X^{k,1}, X^{k,2}}) \ F. Step 8. Set LB_k = min{LB(X) | X ∈ Q_k}, and let X^k ∈ Q_k satisfy LB_k = LB(X^k). If UB_k − LB_k ≤ ε, then stop: x^k is a global ε-optimal solution of the problem SLRP. Otherwise, set k = k + 1 and go to Step 2.
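Steps 1-8 follow the generic branch-and-bound pattern. A compact sketch with the RLP solve and the SLRP objective abstracted as callables (all names, and the use of a sorted active list rather than the paper's node bookkeeping, are our assumptions):

```python
def branch_and_bound(f, lower_bound, bisect, box, eps=1e-2, max_iter=10000):
    """Skeleton of Steps 1-8: keep active boxes with relaxation lower bounds,
    always refine the box with the smallest bound, and stop when the gap to the
    best feasible value is within eps. lower_bound(box) stands in for solving
    the RLP relaxation (returns its optimal value and point) and f for
    evaluating the SLRP objective; both are abstractions of the paper's steps."""
    lb0, x0 = lower_bound(box)
    UB, best = f(x0), x0                      # Step 1: initial incumbent
    active = [(lb0, box)]
    for _ in range(max_iter):
        active.sort(key=lambda t: t[0])
        LB, cur = active.pop(0)               # Step 8: node with smallest bound
        if UB - LB <= eps:                    # termination test
            return best, UB
        for child in bisect(cur):             # Step 2: branching
            lb, x = lower_bound(child)        # Step 4: relax and bound
            fx = f(x)
            if fx < UB:
                UB, best = fx, x              # Step 4: update incumbent
            if lb <= UB - eps:                # Steps 5-7: fathom hopeless nodes
                active.append((lb, child))
        if not active:
            return best, UB
    return best, UB
```

In the paper's algorithm, `lower_bound` would solve the linear program RLP built from the two-part relaxation, and the deleting rule of Theorem 2 would additionally shrink each child box before it is queued.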

Convergence of the Algorithm.
The following theorem gives some convergence properties of the algorithm.

Theorem 3
(i) If the algorithm is finite, then upon termination, x^k is a global ε-optimal solution of SLRP. (ii) If the algorithm is infinite, then any accumulation point of the sequence {x^k} generated along any infinite branch of the branch and bound tree is a global solution of SLRP.
Proof. (i) If the algorithm is finite, then it terminates at some stage k ≥ 0. Upon termination, by the algorithm, we have UB_k − LB_k ≤ ε. According to Steps 1 and 4, UB_k = f(x^k). Let v denote the optimal value of the problem SLRP. By Section 2, LB_k ≤ v, so f(x^k) − ε = UB_k − ε ≤ LB_k ≤ v. Since x^k is a feasible solution of SLRP, we also have v ≤ f(x^k); hence v ≤ f(x^k) ≤ v + ε, and the proof of part (i) is complete. (ii) If the algorithm is infinite, then, by the algorithm, {LB_k} is a nondecreasing sequence bounded above by min_{x∈Λ} f(x), so LB = lim_{k⟶∞} LB_k ≤ min_{x∈Λ} f(x). Since X^0 is a compact set, there exists a convergent subsequence {x^s} ⊆ {x^k}; suppose lim_{s⟶∞} x^s = x̂. By the proposed algorithm, there exists a decreasing subsequence of rectangles containing these iterates whose edge lengths tend to zero, so that LB = lim_{s⟶∞} LB_s = f(x̂). By the definitions of RLP and SLRP, x̂ is obviously a feasible solution of SLRP. Combining the above, x̂ is a global solution of SLRP, and the proof of part (ii) is complete.

Numerical Experiment
To test the performance of the proposed global optimization algorithm, it is compared with several other algorithms on five test problems. The convergence tolerance is set to ε = 1.0e−2 in our experiments. The results are summarized in Table 1.

Concluding Remarks
In this paper, we present a branch and bound algorithm for solving the problem SLRP. In this algorithm, we utilize a convex separation technique and a two-part relaxation technique to obtain a sequence of linear programming relaxations of the initial nonconvex programming problem SLRP, which are embedded in a branch and bound framework. Furthermore, to improve the convergence speed of the algorithm, a deleting rule is designed. Numerical results show that the algorithm can solve the problem SLRP effectively.

Data Availability
No data were used to support this study.

Conflicts of Interest
All authors declare that they have no conflicts of interest.