Algorithm for Solving Linear Fractional Programming

To solve a Linear Fractional Programming problem, the Charnes-Cooper Transformation is used to convert the original fractional objective into an equivalent, simplified linear program. To establish the congruence of the two problems, we prove that the feasibility and the optimality of the linear fractional program, which maximizes or minimizes a ratio of affine functions over a polyhedral set, are preserved under the transformation. In the last part of the paper, a MATLAB function implements the resulting algorithm: it accepts the parameters of the fractional program as input and outputs an optimal solution.


1. Introduction
A linear fractional program, which maximizes or minimizes a ratio of affine functions over a polyhedral set, has many applications in engineering, management, and other industries [1]. The Linear Fractional Programming problem and its Charnes-Cooper Transformation [2] give us a basic idea of how feasibility and optimality carry over to real-world problems. The standard form of the Linear Fractional Program (LFP) is

$$\max_{x} \ \frac{c^T x + \alpha}{d^T x + \beta} \quad \text{s.t.} \quad Ax \le b, \ x \ge 0,$$

where $A$ is an $m \times n$ matrix, $c, d, x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and $\alpha, \beta \in \mathbb{R}$ are given; we also assume $d^T x + \beta > 0$ for any feasible $x$. As we know from linear programming, this problem cannot simply be solved as an ordinary linear program, so we must link it to our existing tools. Applying the Charnes-Cooper Transformation $y = x/(d^T x + \beta)$, $t = 1/(d^T x + \beta)$ yields the equivalent Linear Program 1 (LP1):

$$\max_{y,\,t} \ c^T y + \alpha t \quad \text{s.t.} \quad Ay - bt \le 0, \ d^T y + \beta t = 1, \ y \ge 0, \ t \ge 0.$$

Note that the conditions on the variables and constraints are the same as those stated in the abstract above [3].
In order to connect the feasible solutions $x$ and $(y, t)$, several steps are needed to show that each feasible solution to (LP1) corresponds to a feasible solution to (LFP), and vice versa.
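The transformation and its inverse can be sketched numerically. The paper itself uses MATLAB; the following is a minimal Python illustration with hypothetical data (the vectors `d`, `beta`, and `x` below are not from the paper's example):

```python
import numpy as np

def charnes_cooper(x, d, beta):
    """Map a feasible LFP point x to the LP1 point (y, t).

    Assumes d^T x + beta > 0, as required by the LFP standard form.
    """
    denom = d @ x + beta
    t = 1.0 / denom
    y = t * x
    return y, t

# Hypothetical data for illustration only.
d = np.array([1.0, 2.0])
beta = 3.0
x = np.array([0.5, 1.0])

y, t = charnes_cooper(x, d, beta)
# The equality constraint of (LP1) holds by construction: d^T y + beta t = 1.
# The inverse map recovers x as y / t whenever t > 0.
x_back = y / t
```

The key design point is that the map is invertible exactly when $t > 0$, which the assumption $d^T x + \beta > 0$ guarantees for points coming from (LFP).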

2. Proof of Equivalence
In general, we need to prove that the feasibility and the optimal value of (LFP) and of its Charnes-Cooper Transformation (LP1) are equivalent. First and foremost, we check that if one of them is feasible, the other is also feasible: when (LP1) is feasible, the original (LFP) is feasible, and when (LFP) is feasible, its transformation (LP1) is also feasible. Moreover, after the equivalence of feasibility is established, the equivalence of the optimal values must also be proved [4].

PART 1: Proof of the equivalence from (LFP) to (LP1)
Firstly, let us show that the feasibility of (LFP) implies that (LP1) is feasible.
(a) Let $x$ be a feasible solution to (LFP), and let $\bar{t} = 1/(d^T x + \beta)$ and $\bar{y} = \bar{t}\,x$. We show that $(\bar{y}, \bar{t})$ is a feasible solution to (LP1) and that $c^T \bar{y} + \alpha \bar{t} = \frac{c^T x + \alpha}{d^T x + \beta}$.

Proof: To show the feasibility of $(\bar{y}, \bar{t})$, we verify that it meets all the constraints of the alternative linear program (LP1). Since $d^T x + \beta > 0$, we have $\bar{t} > 0$, and $x \ge 0$ gives $\bar{y} = \bar{t}\,x \ge 0$. Multiplying $Ax \le b$ by $\bar{t} > 0$ yields $A\bar{y} - b\bar{t} = \bar{t}(Ax - b) \le 0$, and $d^T \bar{y} + \beta \bar{t} = \bar{t}(d^T x + \beta) = 1$. Hence $(\bar{y}, \bar{t})$ is a feasible solution to (LP1). For the objective values, $c^T \bar{y} + \alpha \bar{t} = \bar{t}(c^T x + \alpha) = \frac{c^T x + \alpha}{d^T x + \beta}$. So we have proved that feasibility of (LFP) implies feasibility of (LP1). Next we relate the optimal values, starting with $z^* \le w^*$.

(b) Let $z^*$ and $w^*$ be the optimal values of (LFP) and (LP1), respectively, so that no feasible value exceeds $z^*$ (resp. $w^*$) in these maximization problems. Using the result of (a), we prove that $z^* \le w^*$.

Proof: Since $z^*$ is the optimal value of (LFP), there exists $x$ optimal to (LFP) with $\frac{c^T x + \alpha}{d^T x + \beta} = z^*$. By (a), the corresponding $(\bar{y}, \bar{t})$ is feasible to (LP1), and therefore $w^* \ge c^T \bar{y} + \alpha \bar{t} = z^*$, because the optimal value $w^*$ is at least the value of every feasible solution of (LP1). This yields $z^* \le w^*$. Proof complete.

Having proved $z^* \le w^*$, which shows that the optimal value of (LP1) is at least that of (LFP), we must also prove $w^* \le z^*$ to conclude that the two optimal values coincide.
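The claim in (a) is easy to check numerically. The following Python sketch uses hypothetical problem data (none of these coefficients come from the paper's example) and confirms that a feasible LFP point maps to an LP1-feasible point with the same objective value:

```python
import numpy as np

# Hypothetical LFP data for a quick numerical check (illustrative only).
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 3.0]); alpha = 1.0
d = np.array([1.0, 2.0]); beta = 3.0

x = np.array([1.0, 1.0])            # feasible for (LFP): Ax <= b, x >= 0
t_bar = 1.0 / (d @ x + beta)        # t-bar = 1/(d^T x + beta) > 0
y_bar = t_bar * x                   # y-bar = t-bar * x

# (y_bar, t_bar) satisfies the (LP1) constraints:
ineq_ok = bool(np.all(A @ y_bar - b * t_bar <= 1e-12))
eq_ok = abs(d @ y_bar + beta * t_bar - 1.0) < 1e-12

# ...and the two objective values coincide:
lfp_val = (c @ x + alpha) / (d @ x + beta)
lp1_val = c @ y_bar + alpha * t_bar
```

With this data, both objective values equal $5/6$, matching the identity $c^T \bar{y} + \alpha \bar{t} = (c^T x + \alpha)/(d^T x + \beta)$ proved above.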

PART 2: Proof of the equivalence from (LP1) to (LFP)
Secondly, let us show that the feasibility of (LP1) implies that (LFP) is feasible.
(c) Let $(y, t)$ be a feasible solution to (LP1) with $t > 0$, and let $x = y/t$. We prove that $x$ is a feasible solution to (LFP) and that $\frac{c^T x + \alpha}{d^T x + \beta} = c^T y + \alpha t$.

Proof: Similarly to (a), since $(y, t)$ is feasible to (LP1), we have $Ay - bt \le 0$, $d^T y + \beta t = 1$, $y \ge 0$, and $t > 0$. Dividing $Ay \le bt$ by $t > 0$ gives $Ax \le b$, and $x = y/t \ge 0$, so $x$ is feasible to (LFP). Moreover, $d^T x + \beta = (d^T y + \beta t)/t = 1/t$, hence $\frac{c^T x + \alpha}{d^T x + \beta} = t(c^T x + \alpha) = c^T y + \alpha t$.

If instead $t = 0$, then $Ay \le 0$, $d^T y = 1$, and $y \ge 0$. For any $x_0$ feasible to (LFP), the point $x_0 + \lambda y$ remains feasible for all $\lambda \ge 0$, and as $\lambda \to \infty$ the ratio $\frac{c^T(x_0 + \lambda y) + \alpha}{d^T(x_0 + \lambda y) + \beta}$ tends to $\frac{c^T y}{d^T y} = c^T y = c^T y + \alpha t$. In conclusion, if $(y, t)$ is feasible to (LP1) and the feasible set of (LFP) is nonempty, then there always exist $x$ feasible to (LFP) whose objective values equal, or come arbitrarily close to, $c^T y + \alpha t$, regardless of whether $t > 0$.

Now let $(y^*, t^*)$ be optimal to (LP1) with optimal value $w^* = c^T y^* + \alpha t^*$, and let $x^*$ be optimal to (LFP) with optimal value $z^*$. By the argument above, there exist $x$ feasible to (LFP) with objective values arbitrarily close to $w^*$, so $z^* \ge w^*$. Combining this with the result of (b) in Part 1, both $z^* \le w^*$ and $z^* \ge w^*$ hold, which gives $z^* = w^*$. Proof complete.
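The reverse direction (c) can likewise be spot-checked in code. The Python sketch below again uses illustrative data (not the paper's example): it takes an LP1-feasible point with $t > 0$, recovers $x = y/t$, and confirms LFP feasibility and the matching objective value:

```python
import numpy as np

# Hypothetical problem data and an (LP1)-feasible point with t > 0.
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 3.0]); alpha = 1.0
d = np.array([1.0, 2.0]); beta = 3.0

y = np.array([1.0, 1.0]) / 6.0
t = 1.0 / 6.0
# Check (y, t) is feasible for (LP1): Ay - bt <= 0 and d^T y + beta t = 1.
assert np.all(A @ y - b * t <= 1e-12) and abs(d @ y + beta * t - 1.0) < 1e-12

x = y / t                           # recovered (LFP) point
lfp_ok = bool(np.all(A @ x <= b + 1e-12) and np.all(x >= 0))
ratio = (c @ x + alpha) / (d @ x + beta)
lp1_val = c @ y + alpha * t         # should equal the ratio
```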
The above shows that feasibility and optimality in (LFP) and (LP1) are equivalent. Applying the steps above to the example problem yields the corresponding (LP1), which we then solve in MATLAB by constructing the coefficient matrices and calling linprog().

3. Application of Linear Fractional Programming
After running the MATLAB code in Appendix I and printing the result at the command line, we obtain $x^* = (0,\ 2.5217,\ 0.5230)^T$ with optimal value $z^* = 1.0173$.

4. Implementation of Linear Fractional Programming
The solver must handle both maximization and minimization problems. Furthermore, the original problem must be decomposed into the components of the equivalent linear program (LP1). The linprog() function is then called on these inputs, and the optimal solution and the optimal value are output.
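The paper's implementation uses MATLAB's linprog(); the same procedure can be sketched in Python with scipy.optimize.linprog. This is a minimal sketch under two assumptions: the maximization case is handled by negating the objective (since both solvers minimize), and the data below is illustrative, not the paper's example:

```python
import numpy as np
from scipy.optimize import linprog

def solve_lfp(A, b, c, alpha, d, beta, maximize=True):
    """Solve max/min (c^T x + alpha)/(d^T x + beta) s.t. Ax <= b, x >= 0
    via the Charnes-Cooper linear program (LP1)."""
    m, n = A.shape
    # LP1 variables are (y, t); linprog minimizes, so negate for a max problem.
    obj = np.concatenate([c, [alpha]])
    if maximize:
        obj = -obj
    # Inequalities: A y - b t <= 0.
    A_ub = np.hstack([A, -b.reshape(-1, 1)])
    b_ub = np.zeros(m)
    # Equality: d^T y + beta t = 1.
    A_eq = np.concatenate([d, [beta]]).reshape(1, -1)
    b_eq = np.array([1.0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    y, t = res.x[:n], res.x[n]
    x = y / t                        # recover the LFP solution (t > 0 assumed)
    val = (c @ x + alpha) / (d @ x + beta)
    return x, val

# Illustrative example (not the problem from the paper):
# maximize (x1 + 3 x2 + 1)/(x1 + 2 x2 + 3) over x1+x2 <= 4, 2x1+x2 <= 5, x >= 0.
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 5.0])
x_opt, z_opt = solve_lfp(A, b, np.array([1.0, 3.0]), 1.0,
                         np.array([1.0, 2.0]), 3.0)
# z_opt -> 13/11, attained at the vertex x = (0, 4).
```

For this toy instance the optimum is attained at the vertex $x = (0, 4)$ with value $13/11 \approx 1.1818$, which can be confirmed by evaluating the ratio at all four vertices of the polytope.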

5. Conclusion
The proofs above show that this approach is a very effective way to solve the original Linear Fractional Programming problem, and using it reduces the workload for both the computer and the practitioner. Both the feasibility and the optimality of the transformed linear program (LP1) and the linear fractional program (LFP) are proved equivalent, so the methods for the equivalent linear program can be used to solve the original fractional program [5].