Expanding maximum capacity path under weighted sum-type distances

This paper investigates an inverse version of maximum capacity path problems. The goal is to increase arc capacities under a budget constraint so that the maximum capacity among all paths from an origin to a destination is improved as much as possible. Two distinct cases are considered: fixed costs and linear costs. In the former, an algorithm is designed to solve the problem in polynomial time. In the latter, a two-phase polynomial-time approach is developed. The first phase applies the first algorithm as a subroutine to find a small interval containing the optimal value. The second phase converts the reduced problem into a minimum ratio path problem; then, a Secant-Newton hybrid algorithm is proposed to obtain the exact optimal solution. Theoretical analysis as well as computational experiments are provided to guarantee the correctness and performance of the proposed algorithm.


Introduction
Several variants of combinatorial optimization problems are defined on the path structure in graphs. One of the most important is the maximum capacity path problem: find a path joining two prescribed nodes whose capacity, that is, the minimum capacity of its arcs, is maximized. This problem is also known as the bottleneck path problem, the max-min path problem, and the widest path problem. Since the problem can be seen as a network flow optimization problem, it admits a wide range of applications. For example, in telecommunication networks, Multiprotocol Label Switching (MPLS) is a protocol to send a packet along a path whose minimum bandwidth is maximized [11]. As another application, single service units (e.g., firefighters, police) estimate the quality of a route by the maximum distance to an available service location [5].
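The widest-path objective described above can be computed by a Dijkstra-style search that propagates bottleneck capacities instead of distance sums. The sketch below is illustrative only (the linear-time recursive algorithm of [15] mentioned later is a different, faster method), and the adjacency-list format is our own assumption:

```python
import heapq

def max_capacity_path(adj, s, t):
    """Maximum capacity (widest) st-path value via a Dijkstra-style search:
    best[v] is the largest bottleneck capacity of any s-v path found so far.
    adj: {u: [(v, capacity), ...]} describing a directed network."""
    best = {s: float("inf")}
    heap = [(-float("inf"), s)]        # max-heap via negated capacities
    while heap:
        neg_cap, u = heapq.heappop(heap)
        cap = -neg_cap
        if cap < best.get(u, -float("inf")):
            continue                   # stale heap entry
        if u == t:
            return cap                 # capacity of a maximum capacity st-path
        for v, c_uv in adj.get(u, ()):
            new_cap = min(cap, c_uv)   # path capacity = min arc capacity
            if new_cap > best.get(v, -float("inf")):
                best[v] = new_cap
                heapq.heappush(heap, (-new_cap, v))
    return -float("inf")               # t unreachable from s
```

On a toy network with arcs 0→1 (cap 5), 0→2 (cap 3), 1→3 (cap 2), 2→3 (cap 4), the widest 0-3 path is 0→2→3 with capacity min(3, 4) = 3.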
As other instances of combinatorial optimization problems defined on the path structure, the shortest and longest path problems have completely different behaviours: the shortest path problem is polynomially solvable, whereas the longest path problem is NP-hard in general [3]. However, the maximum capacity path problem and its minimum version, namely the min-max path problem, behave the same: any algorithm solving one can be used to obtain an optimal path of the other. A recursive algorithm is the best-known method, capable of solving maximum capacity path problems in linear time [15]. This paper concentrates on an inverse version of maximum capacity path problems, called the maximum capacity path expansion problem. In general, three types of inverse problems are definable for any optimization problem:
• Given an initial feasible solution, the goal is to change some parameters of the problem as little as possible so that the solution becomes optimal.
• The problem is to change some parameters as little as possible such that the objective value of the optimization problem is bounded by a prescribed value.
• The goal is to modify some parameters under a budget constraint in a way that the optimal value of the optimization problem is improved as much as possible.
It is customary to call the first category inverse optimization [9,10,20]. To distinguish them from the first, problems in the second category are referred to as reverse optimization problems [18,19] or optimization improvement problems [24,25]. The third category has no formal name in the literature. However, for the special case that the objective function is bottleneck-type (namely, min-max or max-min), problems of the third category are called bottleneck capacity expansion problems (BCEPs) [13,23,26]. Clearly, the second and third categories are closely related: if one interchanges the objective function and the budget constraint of a problem in the third category, its corresponding problem in the second category is obtained.

Literature review on capacity expansion problems
Suppose that E = {e_1, e_2, ..., e_m} is a ground set and F ⊆ 2^E. Let each e_i ∈ E be associated with a capacity c_i. A (max-min) bottleneck-type combinatorial optimization problem is to find an f ∈ F so that the minimum capacity of its elements is maximized, i.e.,

max_{f ∈ F} min_{e_i ∈ f} c_i.    (1.1)

This problem generalizes several combinatorial optimization problems, such as the maximum capacity path problem [14] and the max-min spanning tree problem [21]. For every problem of type (1.1), one can define a capacity expansion problem: increase the capacities under some budget and bound constraints so that the capacity of the optimal solution of problem (1.1) is increased as much as possible. This problem is the same BCEP, which can be formulated as

max_{y ∈ X} max_{f ∈ F} min_{e_i ∈ f} (c_i + y_i),

in which X is a feasible region restricting the capacity increments y.
Yang and Zhang [22] were the first in considering BCEPs defined on networks. They studied the problem of increasing arc capacities under a budget constraint so that the capacities of maximum capacity paths between every pair of nodes are maximized.
They proposed a strongly polynomial-time algorithm which requires solving a minimum spanning tree problem at every iteration. Subsequently, Chao and Jinglin [8] investigated the capacity expansion problem in spanning trees. They showed that the problem is reducible to a parametric spanning tree problem and presented a strongly polynomial-time algorithm to solve it. Li and Zhu [13] used dynamic programming to present a pseudopolynomial-time algorithm for expanding the capacity of a maximum capacity path from a source to a sink; they then designed polynomial-time algorithms for two special cases of the problem. Burkard et al. [6] studied weight reduction problems with the min-max objective function, instead of max-min. They proved that the problem is NP-hard in general and reduced it to a parametric optimization problem which can be used to solve the special case of spanning trees. Chao and Jianzhong [7] considered three types of BCEPs (path, spanning tree, and maximum flow) in both arc-expanding and node-expanding cases and presented some results on their complexity.
BCEPs have also been studied in the general case: a modified binary search algorithm [23] and a discrete-type Newton algorithm [26] have been presented to solve general BCEPs.

Our contribution
As a special case of BCEPs, this paper focuses on the maximum capacity path expansion problem (MCPEP) in two distinct cases: (I) fixed modification costs; (II) linear modification costs. To the best of our knowledge, the fixed-cost case has not been investigated in the literature. For this case, a simple algorithm based on binary search is presented to solve the problem in strongly polynomial time. In the linear-cost case, a two-phase algorithm is developed to solve the problem in polynomial time. The first phase applies the fixed-cost algorithm to find an interval containing the optimal value. The second phase converts the resulting problem into a minimum ratio path problem and then uses a proposed hybrid Secant-Newton algorithm to solve the problem exactly. To validate our proposed algorithm, we compare it theoretically and computationally with the algorithms presented in [23] and [26].
Let us now state an application of MCPEP in transportation systems. Depending on the type of vehicle allowed to cross a road, a capacity can be assigned to each road. For example, if only light vehicles are allowed to pass, the capacity can be a small number, and if any type of light or heavy vehicle is allowed to pass, the capacity can be a large number. The capacity of a road depends on various factors such as road width and road infrastructure. Clearly, the capacity of a route equals the minimum capacity of its roads. Now suppose we want to increase road capacities under a budget constraint so that heavy vehicles can pass between two specific points. This is a typical application of MCPEPs.
The rest of this paper is organized as follows. Section 2 states formally the problem and gives some initial results. Section 3 concentrates on the problem with fixed costs. Section 4 focuses on the problem with linear costs. It presents a strongly polynomial-time algorithm and compares it with the others. Section 5 presents our concluding remarks.

Problem statement
This section recalls the definition of maximum capacity path problems and introduces its expansion version. Then, it presents some initial results used throughout the paper.
Suppose that a connected and directed network G(V, A, c) is given in which V = {1, 2, ..., n} is the node set and A is the arc set containing m arcs. Each arc (i, j) is associated with a nonnegative capacity c_ij. The network contains two specific nodes s and t, called the origin and the destination, respectively. A walk is a sequence of consecutive arcs, and a path is a walk without repeated nodes; it is obvious that one can obtain a path from a given walk P by removing all its cycles. This paper concentrates on paths from the origin s to the destination t, called st-paths. Any st-path P has a capacity c_P which is the minimum capacity of its arcs, i.e., c_P = min_{(i,j)∈P} c_ij.
A classical combinatorial optimization problem, called the maximum capacity path problem (MCPP), is to find an st-path whose capacity is maximum among all st-paths. Similar to the formulation of shortest path problems [3], one can formulate MCPP as a zero-one linear programming problem:

max z    (2.1a)
s.t. z ≤ c_ij x_ij + M(1 − x_ij), (i, j) ∈ A,    (2.1b)
Σ_{(i,j)∈A_i^+} x_ij − Σ_{(j,i)∈A_i^−} x_ji = 1 if i = s, −1 if i = t, 0 otherwise,    (2.1c)
Σ_{(i,j)∈A_i^+} x_ij ≤ 1, i ∈ V,    (2.1d)
x_ij ∈ {0, 1}, (i, j) ∈ A,

where M is a sufficiently large constant. Constraint (2.1b) guarantees that z equals the capacity of the path P determined by the flow conservation constraints (2.1c). x_ij is a zero-one variable which takes the value one if and only if (i, j) belongs to the desired path P. A_i^+ (A_i^−) denotes the set of arcs emanating from (terminating at) node i. Moreover, constraint (2.1d) is added to prevent node repetitions in the obtained path; in other words, the solution is a path, and not merely a walk. If the objective value of problem (2.1) equals z*, then we simply say that the maximum capacity of G(V, A, c) is z*. Now let us define the expansion version of problem (2.1). Suppose that we are capable of increasing arc capacities under some budget and bound constraints, and we would like to improve the optimal value of problem (2.1) as much as possible. This problem can be formulated as follows:

max z(c + y)    (2.2)
s.t. Σ_{(i,j)∈A} b_ij(y_ij) ≤ B,    (2.2e)
0 ≤ y_ij ≤ c^u_ij − c_ij, (i, j) ∈ A,    (2.2f)

where z(c + y) denotes the maximum capacity of G(V, A, c + y); y_ij is a continuous variable determining the increment of the capacity of (i, j); c^u_ij is an upper bound for increasing the capacity of (i, j); and b_ij(·) is a nonnegative, real-valued, and nondecreasing function measuring the cost of increasing the capacity of (i, j), satisfying b_ij(0) = 0. The other notations are defined as in problem (2.1).
Without loss of generality, we may impose some mild assumptions on the problem. This paper focuses on problem (2.2) for two distinct cost functions.
Fixed costs: In this case, b_ij(y_ij) equals zero for y_ij = 0 and a positive fixed value w̄_ij for y_ij > 0. In the inverse optimization context [12,17], this is w̄_ij times the Hamming distance between 0 and y_ij, defined as H(0, y_ij) = 0 if y_ij = 0 and H(0, y_ij) = 1 otherwise, so that b_ij(y_ij) = w̄_ij H(0, y_ij).
Linear costs: In this case, b_ij(y_ij) = w_ij y_ij, in which w_ij > 0 is the cost per unit increment of the capacity. So b_ij(·) is a linear function with slope w_ij and y-intercept 0. In the literature, the sum of these linear cost functions is called the weighted Manhattan distance [4,21].
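As a small illustration (the function names are ours, not the paper's), the two cost models can be written as cost functions satisfying b_ij(0) = 0:

```python
def fixed_cost(w_bar):
    """Fixed-charge model: the full fee w_bar is paid as soon as the
    capacity changes at all (w_bar times the Hamming distance H(0, y))."""
    return lambda y: 0.0 if y == 0 else w_bar

def linear_cost(w):
    """Linear model: pay w per unit of capacity increment (summed over
    all arcs, this is the weighted Manhattan distance)."""
    return lambda y: w * y
```

Note that the fixed-cost model charges the same amount for a tiny increment as for a large one, which is what makes the candidate set of Lemma 3.1 finite.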
A simple observation about problem (2.2) is that y_ij = 0, (i, j) ∈ A, is a feasible solution whose objective value z_min equals the maximum capacity of G(V, A, c). So it provides a lower bound for the optimal value z* of problem (2.2). On the other hand, y_ij = c^u_ij − c_ij, (i, j) ∈ A, makes bound constraints (2.2f) tight, but it may not satisfy budget constraint (2.2e). So its objective value, denoted by z_max, provides an upper bound for z*. Notice that z_max is obtained directly from the bound constraints, without regard to the budget constraint. In the case of linear costs, a further upper bound can be obtained: any st-path has capacity at most z_min in the original network, so the optimal path contains an arc (i, j) with c_ij ≤ z_min, and

w_L (z* − z_min) ≤ w_ij (z* − c_ij) ≤ w_ij y_ij ≤ B,

where the last inequality holds due to the budget constraint. This implies the upper bound z* ≤ z_min + B/w_L. In the case of fixed costs, a similar argument yields the corresponding upper bound. Let us state these observations formally.
Lemma 2.1. The optimal value of problem (2.2) belongs to [z_min, min{z_max, z_min + B/w_L}], where z_min and z_max are respectively the maximum capacities of G(V, A, c_ij) and G(V, A, c^u_ij), and w_L = min_{(i,j)∈A} w_ij for linear costs and w_L = min_{(i,j)∈A} w̄_ij for fixed costs.
Proof. The proof is straightforward based on the preceding paragraph.
For an st-path P as well as a fixed value z ∈ [z_min, z_max], consider a capacity vector c^(z,P) defined as

c^(z,P)_ij = max{c_ij, z} if (i, j) ∈ P, and c^(z,P)_ij = c_ij otherwise.    (2.3)

The following lemma shows that we can restrict ourselves to such capacity vectors to find an optimal solution of problem (2.2).

Lemma 2.2.
If c̄ is a feasible capacity vector for problem (2.2), then c^(z̄,P̄) is also feasible, where P̄ is a maximum capacity path with respect to c̄, and z̄ is its capacity. Moreover, the objective values of both vectors are the same.
Proof. It is easy to see that c^(z̄,P̄)_ij ≤ c̄_ij for every (i, j) ∈ A. This simple observation, together with the feasibility of c̄, implies that c^(z̄,P̄) satisfies the bound and budget constraints. On the other hand, by definition, the capacity of P̄ with respect to c^(z̄,P̄) is exactly z̄. So the objective value of c^(z̄,P̄) is the same as that of c̄.
Notice that the capacity vector c^(z,P) is defined with respect to an st-path P as well as a value z ∈ [z_min, z_max]. As seen in Lemma 2.2, z depends on P: if P is determined, then we can simply compute z as the minimum capacity of the arcs belonging to P. Let us now ask the converse question: is there a convenient st-path P for a given value z? To answer this question, consider a cost vector w^(z) defined as follows:

w^(z)_ij = b_ij(max{z − c_ij, 0}) if z ≤ c^u_ij, and w^(z)_ij = +∞ otherwise,

i.e., w^(z)_ij is the cost of raising the capacity of (i, j) to at least z. The following lemma clarifies the relationship between w^(z) and our question.
Lemma 2.3. For a fixed value z, let P̄ be a shortest st-path with respect to w^(z). Then,
1). c^(z,P̄) is a feasible solution of problem (2.2) if the length of P̄ is less than or equal to B.
2). There is no st-path P such that c^(z,P) is a feasible solution of problem (2.2) if the length of P̄ is greater than B.
Proof. The first part is trivial. To prove the second part, suppose by contradiction that c^(z,P̂) is feasible for some st-path P̂. Then c^(z,P̂) satisfies the budget constraint, so the length of P̂ with respect to w^(z) is at most B. Together with the assumption that the length of P̄ is greater than B, this implies that the length of P̂ is less than that of P̄, a contradiction because P̄ is a shortest st-path.
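Lemma 2.3 turns the question "is capacity z achievable within budget B?" into a single shortest-path computation under w^(z). A minimal sketch, assuming an arc-dictionary format of our own choosing:

```python
import heapq

def cheapest_expansion_cost(arcs, s, t, z):
    """Length of a shortest st-path under w^(z), where w^(z)_ij is the cost
    of raising arc (i, j) to capacity z: 0 if c_ij >= z already, b(z - c_ij)
    if z <= c^u_ij, and +inf when z exceeds the arc's upper bound.
    arcs: {(i, j): (c, c_up, b)} with b a cost function of the increment."""
    # Build the parametric weight of each arc for this value of z.
    w_z = {}
    for (i, j), (c, c_up, b) in arcs.items():
        if z <= c:
            w_z[(i, j)] = 0.0
        elif z <= c_up:
            w_z[(i, j)] = b(z - c)
        else:
            w_z[(i, j)] = float("inf")
    # Plain Dijkstra on the nonnegative weights w_z.
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry
        if u == t:
            return d
        for (i, j), wij in w_z.items():
            if i == u and d + wij < dist.get(j, float("inf")):
                dist[j] = d + wij
                heapq.heappush(heap, (d + wij, j))
    return dist.get(t, float("inf"))    # +inf if t is unreachable
```

By Lemma 2.3, a value z is then achievable if and only if `cheapest_expansion_cost(arcs, s, t, z) <= B`.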

Fixed costs
This section studies problem (2.2) with fixed costs. It presents an algorithm to solve the problem in strongly polynomial time.
According to Lemma 2.3, problem (2.2) reduces to finding the greatest value z ∈ [z_min, z_max] so that the length of the shortest st-path with respect to w^(z) does not exceed B. This result motivates us to apply a binary search procedure for finding such a value z. The following lemma helps us limit the search to a finite set.
Lemma 3.1. The optimal value of problem (2.2) with fixed costs belongs to the finite set S = ∪_{(i,j)∈A} {c_ij, c^u_ij} ∩ [z_min, z_max].
Proof. By contradiction, assume that the optimal value z* does not belong to S. Then there is an st-path P* so that c^(z*,P*) is an optimal capacity vector. Sort the distinct elements of S in nondecreasing order and let z_1 < z_2 < ... < z_k be the sorted list. There is an index l ∈ {2, ..., k} so that z_{l−1} < z* < z_l. Now consider the capacity vector c^(z_l,P*). It is easy to verify that this vector satisfies the bound and budget constraints (with fixed costs, raising an already-modified arc from z* to z_l ≤ c^u_ij incurs no additional cost), and moreover, its objective value equals z_l > z*. This contradicts the optimality of z*. Algorithm 1 presents the binary search in complete detail for solving problem (2.2).

Algorithm 1 Solving problem (2.2) with fixed costs.
Input: An instance of problem (2.2) with fixed costs defined on a network G(V, A, c, w̄) with two specified nodes s and t, in which c is the initial capacity vector and w̄ is the fixed cost vector.
Output: An optimal st-path P*, and the optimal value z*.
Set z_max to the capacity of the maximum capacity path with respect to the capacities c^u_ij.
Find a shortest path P with respect to w^(z_max).
if the length of P is less than or equal to B then
Stop: the problem has the optimal path P* = P and the maximum capacity z* = z_max.
end if
Set z_min to the capacity of the maximum capacity path with respect to the capacities c_ij.
Set z_max = min{z_max, z_min + B/w_L}, where w_L = min_{(i,j)∈A} w̄_ij.
Sort the distinct elements of S = ∪_{(i,j)∈A} {c_ij, c^u_ij} ∩ [z_min, z_max]; let z_1 < z_2 < ... < z_k be the sorted list.
Set l = 1, u = k.
while l < u do
Set mid = ⌈(l + u)/2⌉.
Find a shortest path P with respect to w^(z_mid).
if the length of P is less than or equal to B then
Set l = mid, z* = z_mid, and P* = P.
else
Set u = mid − 1.
end if
end while

Theorem 3.2. Algorithm 1 solves problem (2.2) in strongly polynomial time.
Proof. The correctness of Algorithm 1 follows from the results presented in Lemma 3.1 and the preceding section. So let us argue only about its complexity. The bottleneck operation of Algorithm 1 is to solve a shortest path problem in the While loop.
Since the While loop runs O(log(2m)) = O(log(n)) times, and the Fibonacci heap implementation of Dijkstra's algorithm requires O(m + n log(n)) computations, it follows that Algorithm 1 solves problem (2.2) in O(m log(n) + n log 2 (n)) time.
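The binary search at the heart of Algorithm 1 can be sketched independently of the graph machinery: `path_cost` below stands for the shortest-path subroutine evaluating b(z), and the lo/hi update is a standard terminating variant of the loop in the listing.

```python
def fixed_cost_expansion(candidates, budget, path_cost):
    """Algorithm 1 sketch: binary search over the sorted candidate values
    z_1 < ... < z_k (arc capacities and upper bounds inside [z_min, z_max]).
    path_cost(z) must return the length of a shortest st-path under w^(z);
    a value z is achievable iff path_cost(z) <= budget (Lemma 2.3)."""
    zs = sorted(set(candidates))
    lo, hi = 0, len(zs) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if path_cost(zs[mid]) <= budget:
            best = zs[mid]      # z_mid is achievable; try larger values
            lo = mid + 1
        else:
            hi = mid - 1        # too expensive; shrink from above
    return best
```

Since `path_cost` is nondecreasing in z, the achievable candidates form a prefix of the sorted list, which is exactly the monotone structure binary search needs.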

Linear costs
This section concentrates on problem (2.2) whenever the cost of (i, j) is a linear function as b i j (y i j ) = w i j y i j . Although Lemma 2.3 remains valid in this case, the issue is not as simple as the fixed-cost case because the search space is a continuous interval, instead of a finite set.
To develop an algorithm for solving problem (2.2) with linear costs, consider the function b(z), the length of a shortest st-path with respect to the weight vector w^(z), i.e.,

b(z) = min_{P ∈ P^(z)} Σ_{(i,j)∈P} w^(z)_ij,

where P^(z) denotes the set of all st-paths avoiding the set of prohibited arcs {(i, j) ∈ A : c^u_ij < z}. In the special case that there is no such st-path for some value z, we set b(z) = +∞. Since b(z) is a parametric shortest path function in terms of z, it is nondecreasing and piecewise linear, but unfortunately, it enjoys no convexity or concavity property in general. However, we show that b(z) is concave in a small subinterval of [z_min, z_max] containing the optimal value.
Before paying attention to these details, let us state a general description of our proposed algorithm which contains two phases. In the first phase, the binary search is applied to find a subinterval of [z min ,z max ] in which b(z) is concave. In the second phase, the exact optimal value is found by a new hybrid Secant-Newton algorithm.
Let us now return to analysing the behaviour of b(z). It is obvious that b(z) is increasing. As z increases continuously, the breakpoints of b(z) are created in one of the three following situations.
• The shortest path changes. In this case, the slope of the corresponding line in b(z) either stays fixed or decreases.
• The shortest path does not change, but the weight of one of its arcs increases from zero to a positive value. In this case, the slope of the corresponding line in b(z) increases.
• For an arc (i, j) of the shortest path P, we reach z > c^u_ij. In this situation, the length of P jumps to +∞ because w^(z)_ij = +∞, so the second shortest path replaces P. Consequently, the slope of the corresponding line in b(z) is either unchanged or increased.
If the search interval [z_min, z_max] is reduced so that the second and third cases never occur, then b(z) is concave by the first case. For this purpose, we reconsider the sorted list z_1 < z_2 < ... < z_k of the set S. Then we find an index l ∈ {1, 2, ..., k − 1} so that the optimal value belongs to [z_l, z_{l+1}). Obviously, the second and third cases do not occur inside this interval. This search can be carried out by Algorithm 1, using the costs b_ij(z_l) = w_ij(z_l − c_ij) in place of the fixed costs w̄_ij. Hence, the first phase applies Algorithm 1 as a subroutine to find an interval [z_l, z_{l+1}) containing the optimal value. The following result states the concept used in the second phase.
Lemma 4.1. If the optimal value z* is less than z_max, then b(z*) = B.
Proof. By definition, the function b(z) is strictly increasing in [z_min, z_max) and constant elsewhere. Since z* < z_max, there is an index l so that z* ∈ [z_l, z_{l+1}). Based on Lemma 2.2, there is an st-path P* so that c^(z*,P*) is the optimal capacity vector. Now assume, by contradiction, that b(z*) is less than B. Then we can find a value z′ ∈ (z*, z_{l+1}) so that P* is the shortest path with respect to w^(z′) and c^(z′,P*) satisfies the budget constraint. Since c^(z*,P*) satisfies the bound constraints and both values z*, z′ belong to [z_l, z_{l+1}), we conclude that c^(z′,P*) also satisfies the bound constraints. Hence, c^(z′,P*) is a feasible capacity vector with objective value greater than z*, which contradicts the optimality of c^(z*,P*).

Problem (4.7) is a minimum cost-reliability ratio path problem, which is NP-hard in general because the network may contain negative cycles with respect to the cost vector w^(z) [1,2]. Fortunately, since the costs w^(z)_ij are assumed nonnegative here, the problem can be solved efficiently by the discrete-type Newton method [16]. This is the same concept used by Zhang and Liu [26] to solve BCEPs. Let us recall their method. It generates an increasing sequence {z^(k)}_{k=0,1,...} starting from z^(0) = z_l. Suppose that the algorithm has computed the value z^(k). To calculate the next element of the sequence, the algorithm solves a shortest path problem with respect to the nonnegative length vector

v = w^(z^(k)).    (4.8)

Assume that a path P* is found. We may then encounter two distinct situations:
• The length of P* is less than B. In this case, the Newton iterate determined by the current linear piece of b(z) strictly exceeds z^(k), and we set z^(k+1) equal to it.
• The length of P* is exactly equal to B. In this case, P* is an optimal solution of problem (4.7) because P* satisfies (4.4) and all st-paths satisfy (4.5) for z* = z^(k), so inequalities (4.6) hold for z* = z^(k).
It is remarkable that the case in which the length of P* is greater than B never occurs [26]. This algorithm generates an increasing sequence converging to the optimal value. However, due to the special structure of b(z), it approaches the optimal value from one side only and never uses points on the other side. In particular, if z* is very close to z_{l+1}, then the algorithm has to traverse approximately the whole length of [z_l, z_{l+1}) to find the optimal value. Here, we present a hybrid Secant-Newton algorithm that approaches the optimal value from both sides. This algorithm generates two sequences {z^(k)} and {y^(k)} starting from z^(0) = z_l and y^(0) = z_{l+1}. Assume that we have computed the kth elements of both sequences. Their next elements are calculated as follows: y^(k+1) is obtained by solving the equation L(z) = B, where L(z) is the line joining the two points (y^(k), b(y^(k))) and (z^(k), b(z^(k))); this is the well-known Secant step for computing the root of an equation. ẑ^(k) is the approximation obtained from the Newton method, and ŷ^(k) is the approximation obtained from the Secant method using the two consecutive values y^(k) and y^(k+1). Finally, z^(k+1) is the maximum of the two calculated approximations. Since z^(k+1) ≥ ẑ^(k), the number of iterations of the hybrid method is at most that of the Newton method, so {z^(k)} converges to the optimal value. Figure 1 depicts a visual description of the hybrid Secant-Newton method. It is easy to see that the sequence {z^(k)} ({y^(k)}) is increasing (decreasing) and approaches the optimal value from the left (right) side. The running time of an iteration of the hybrid algorithm is at most three times that of the Newton method because it calls the function b(z) three times.
Hence its worst-case complexity is the same as the Newton method's. This proves the following result.
Theorem 4.2. The hybrid Secant-Newton algorithm solves problem (2.2) with linear costs in strongly polynomial time.
Proof. The proof is immediate from the above discussion and the fact that the Newton method solves the problem in strongly polynomial time [26].
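The two-sided iteration can be sketched on any nondecreasing, strictly increasing, concave piecewise-linear b with b(z0) ≤ B ≤ b(y0). Here `b` and `slope` are toy stand-ins for the shortest-path evaluations of the paper (in the path setting, `slope(z)` would be the total w_ij weight of the modified arcs on the current shortest path); the code is an illustrative sketch, not the authors' implementation.

```python
def hybrid_secant_newton(b, slope, B, z0, y0, tol=1e-9, max_iter=100):
    """Hybrid Secant-Newton sketch for solving b(z*) = B, where b is
    strictly increasing, concave and piecewise linear on [z0, y0] with
    b(z0) <= B <= b(y0).  slope(z) returns the slope of the linear piece
    active at z (assumed positive on the interval)."""
    z, y = z0, y0
    for _ in range(max_iter):
        if abs(b(z) - B) <= tol:
            return z
        # Newton step from the left: follow the current piece up to level B.
        z_newton = z + (B - b(z)) / slope(z)
        # Secant step: chord through (z, b(z)) and (y, b(y)) hits level B.
        y_next = z + (B - b(z)) * (y - z) / (b(y) - b(z))
        # Second secant through the two right-side points (y_next, y):
        # for concave b this extrapolates to a valid left-side iterate.
        if abs(b(y) - b(y_next)) > tol:
            y_secant = y_next + (B - b(y_next)) * (y - y_next) / (b(y) - b(y_next))
        else:
            y_secant = z_newton
        z = max(z_newton, y_secant)  # best available left-side approximation
        y = y_next                   # right-side sequence decreases
    return z
```

Concavity is what makes both updates safe: tangents lie above b, so the Newton step never overshoots the root from the left, while chords lie below b inside their interval, so the right-side secant sequence never undershoots it.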

Computational and theoretical comparison
We now compare Algorithm 2 with the available algorithms in the literature. Recall that a modified binary search algorithm and a discrete-type Newton algorithm were presented in [23] and [26], respectively, to solve BCEPs. Since MCPEP is a special case of BCEPs, we can apply them to solve MCPEP. Based on our preceding discussion, Algorithm 2 has the same worst-case complexity as the algorithm proposed in [26], which runs in strongly polynomial time, whereas the algorithm presented in [23] is only a (weakly) polynomial-time algorithm. This implies that Algorithm 2, as well as the one proposed in [26], is theoretically better than the one provided in [23].
Let us now compare them in practice. Recall that b(z) is a piecewise-linear function. On instances of MCPEP for which b(z) has many breakpoints in the search interval [z_l, z_{l+1}] of the second phase, Algorithm 2 performs better than the others. This occurs especially when the upper bounds are infinite and the budget is very large because, in this case,
• the initial search interval [z_min, z_max] = [z_0, z_k], especially the subinterval [z_{k−1}, z_k], is very wide;
• the search interval in the second phase is usually the same interval [z_{k−1}, z_k], containing many of b(z)'s breakpoints.
Let us state an example of this situation. Consider the instance of MCPEP shown in Figure 2. Here, it is assumed that B = 39, s = 0 and t = 37. Moreover, all upper bounds are +∞ and w_ij = 0.01 for every arc (i, j) ∈ A. In this example, we have [z_min, min{z_min + B/w_L, z_max}] = [3200, 7100], on which the function b(z) contains 8 breakpoints (see Figure 3). This interval is the interval [z_l, z_{l+1}] containing the optimal value. For this example, the algorithms were implemented on a laptop with an Intel Core i5-2520M CPU (2.50 GHz) and 4 GB of RAM, using Python 2.7.5 together with the library NetworkX 1.8.1. Table 1 compares the performance of the algorithms; it is evident that Algorithm 2 solves the problem in fewer iterations.

Algorithm 2 A hybrid Secant-Newton method to solve problem (2.2) with linear costs
Input: A network G(V, A, c, w) with two specified nodes s and t.
Output: An optimal st-path P*, and the optimal value z*.
Phase I:
Apply Algorithm 1 as a subroutine with arc costs b_ij(z_l) = w_ij(z_l − c_ij) to either detect that the optimal value is z_max or find an interval [z_l, z_{l+1}) containing the optimal value. In the former case, stop because the optimal value is found. In the latter, go to Phase II.
Phase II:
Set z = z_l and y = z_{l+1}.
while true do
Obtain the length vector v defined in (4.8) for z^(k) = z.
Find a shortest path P* with respect to the length vector v.
if the length of P* equals B then
Stop because the optimal solution is found (see Lemma 4.1).
else
Set y_next = (y b(z) − z b(y) + B(z − y)) / (b(z) − b(y)).
Set ẑ to the Newton iterate obtained from P*, and ŷ to the secant iterate through (y_next, b(y_next)) and (y, b(y)).
Set z = max{ẑ, ŷ} and y = y_next.
end if
end while

Conclusions
This paper studied maximum capacity path expansion problems with two types of modification costs: fixed costs and linear costs. For the former, it proposed a strongly polynomial-time algorithm based on binary search. For the latter, it developed a two-phase algorithm: the first phase uses the fixed-cost algorithm to find a small interval containing the optimal value on which the cost function is concave; then, a hybrid Secant-Newton algorithm obtains the optimal value exactly. The hybrid algorithm performs favourably compared with the available algorithms in the literature.
Since the hybrid algorithm can be applied to solve general fractional combinatorial optimization problems, it would be worthwhile to compare it with the other customary approaches, such as binary search, the discrete-type Newton method, and Megiddo's parametric search [16].
Although there is a variety of studies on capacity expansion problems, some aspects have not yet been considered (for example, max-min matching expansion problems and max-min cut expansion problems). It will be worthwhile to focus on these other expansion problems in future work.