A VARIANT OF DAI-YUAN CONJUGATE GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION AND ITS APPLICATION IN PORTFOLIO SELECTION

Abstract. The quasi-Newton (QN) methods are among the efficient variants of the conjugate gradient (CG) method for solving unconstrained optimization problems. The QN method utilizes the gradients of the function while ignoring the available function value information at every iteration. In this paper, we extend the Dai-Yuan [39] coefficient to design a new CG method for large-scale unconstrained optimization problems. An interesting feature of our method is that its algorithm not only uses the available gradient value but also considers the function value information. The global convergence of the proposed method is established under suitable Wolfe conditions. Extensive numerical computations have been carried out, showing that the average performance of the new algorithm is efficient and promising. In addition, the proposed method is extended to solve a practical application problem of portfolio selection.


INTRODUCTION
The nonlinear conjugate gradient (CG) algorithms are among the most efficient numerical algorithms for solving unconstrained optimization problems, especially when the problems are of large dimension. The CG method is very popular among mathematicians, engineers, and other practitioners because of its robustness and ability to solve large-scale optimization problems [22]. Consider the unconstrained optimization model

min_{x ∈ R^n} f(x),

where f : R^n → R is a smooth function and g denotes the gradient of f. The CG algorithm generates a sequence of iterates {x_k} via the recurrence formula

(1) x_{k+1} = x_k + s_k, k ≥ 0, s_k = α_k d_k,

[23]. The parameter α_k > 0 is known as the step-size, which is often computed along the search direction d_k defined by

(2) d_{k+1} = -g_{k+1} + β_k d_k, d_0 = -g_0,

where β_k represents the conjugate gradient parameter that characterizes the different CG methods.
The classical formulas for β_k are grouped into two families. The first group includes the HS method [27], the PRP method [9,11], and the LS method [37], with formulas given as

β_k^{HS} = (g_{k+1}^T y_k)/(d_k^T y_k),  β_k^{PRP} = (g_{k+1}^T y_k)/||g_k||^2,  β_k^{LS} = (g_{k+1}^T y_k)/(-d_k^T g_k),

where y_k = g_{k+1} - g_k. This group is characterized by its restart properties and efficient numerical performance [21,38]. A restart strategy is usually employed in conjugate gradient algorithms to improve their computational efficiency. However, the convergence of most of these methods is yet to be established under some line search conditions [24,35]. The second group includes the FR method [31], the CD method [30], and the DY method [39], with formulas given as

β_k^{FR} = ||g_{k+1}||^2/||g_k||^2,  β_k^{CD} = ||g_{k+1}||^2/(-d_k^T g_k),  β_k^{DY} = ||g_{k+1}||^2/(d_k^T y_k).
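For concreteness, the six classical coefficients above can be collected in one helper. This is an illustrative sketch (not part of any of the cited papers), assuming NumPy vectors for g_k, g_{k+1}, and d_k:

```python
import numpy as np

def classical_betas(g_new, g_old, d):
    """Classical CG coefficients, with y = g_{k+1} - g_k.

    g_new : gradient at the new iterate, g_{k+1}
    g_old : gradient at the current iterate, g_k
    d     : current search direction, d_k
    """
    y = g_new - g_old
    return {
        "HS":  (g_new @ y) / (d @ y),
        "PRP": (g_new @ y) / (g_old @ g_old),
        "LS":  (g_new @ y) / (-(d @ g_old)),
        "FR":  (g_new @ g_new) / (g_old @ g_old),
        "CD":  (g_new @ g_new) / (-(d @ g_old)),
        "DY":  (g_new @ g_new) / (d @ y),
    }
```

Note that the three coefficients of each family share a numerator (g_{k+1}^T y_k for the first group, ||g_{k+1}||^2 for the second) and differ only in the denominator.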
On the other hand, the FR, CD, and DY methods do not possess the restart property and can thus perform poorly due to the jamming phenomenon [3]. However, the convergence of these methods has been established under various line search methods [24,35]. One of the most frequently used line search methods is the inexact line search, particularly the Wolfe line search [18]. For the Wolfe line search, the step-size α_k is computed such that

(3) f(x_k + α_k d_k) ≤ f(x_k) + ϕ α_k g_k^T d_k,
(4) g(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k,

where 0 < ϕ < σ < 1 [32]. Numerous researchers have studied the conjugate gradient method under the strong Wolfe line search [16]. For more references on advances in the conjugate gradient method, see [1,4,5,6,7,8,15,17,25,26,36].
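The two Wolfe conditions translate directly into a feasibility check for a trial step-size. A minimal sketch, assuming f and grad are callables on NumPy arrays and that d is a descent direction:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, phi=1e-4, sigma=0.9):
    """Check the Wolfe conditions (3)-(4) for a trial step-size alpha.

    Condition (3): sufficient decrease of f along d.
    Condition (4): curvature condition on the new directional derivative.
    Requires 0 < phi < sigma < 1 and d a descent direction (grad(x) @ d < 0).
    """
    g_d = grad(x) @ d  # g_k^T d_k, negative for a descent direction
    sufficient_decrease = f(x + alpha * d) <= f(x) + phi * alpha * g_d
    curvature = grad(x + alpha * d) @ d >= sigma * g_d
    return sufficient_decrease and curvature
```

A very small alpha typically satisfies (3) but fails (4), which is exactly what the curvature condition is designed to rule out.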
Motivated by the method of Dai and Yuan [39], we propose a modification of the conjugate gradient coefficient for solving unconstrained optimization models. The global convergence of the method is established under some mild conditions. Furthermore, the method is extended to solve a practical application problem of portfolio selection.
The rest of this paper is structured as follows. In Section 2, we present the derivation of our new method and its algorithm. The convergence of the proposed method is discussed in Section 3. We report preliminary results of the numerical computations carried out on some benchmark test problems in Section 4. An application to portfolio selection is discussed in Section 5. Lastly, Section 6 presents the conclusion of the paper.
DERIVATION OF THE NEW METHOD

By using the unified quasi-Newton equation together with the relations above, we derive a new conjugate gradient coefficient.
Multiplying both sides of equation (2) by s_k^T, we have: From the above equation, we get: On the other hand, by using the conjugacy condition, we obtain: where G is the Hessian matrix. As a result, substituting (5) into (6) yields: To obtain an algorithm with global convergence properties, we modify the above formula as follows: where ϑ ∈ [0, 1] and y_k = g_{k+1} − g_k.
Algorithm (BMS).
Step 1. Given an initial point x_0 and a tolerance ε > 0, set d_0 = -g_0 and k = 0.
Step 2. Calculate g_k; if ||g_k|| ≤ ε, then stop: x_k is the optimal point. Else, go to Step 3.
Step 3. Compute the step-size α_k satisfying the Wolfe conditions (3) and (4).
Step 4. Update x_{k+1} = x_k + α_k d_k.
Step 5. Compute β_k^{BMS} by the new formula and set d_{k+1} = -g_{k+1} + β_k^{BMS} d_k.
Step 6. Set k = k + 1 and go to Step 2.
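The iteration loop can be sketched in code. Since the exact β_k^{BMS} formula depends on the derivation details, the classical DY coefficient is used below as an illustrative stand-in, and SciPy's Wolfe line search plays the role of the step-size computation:

```python
import numpy as np
from scipy.optimize import line_search

def cg_dy(f, grad, x0, eps=1e-6, max_iter=10000):
    """Conjugate gradient iteration with the Dai-Yuan coefficient
    beta_DY = ||g_{k+1}||^2 / (d_k^T y_k), used here as a stand-in
    for the paper's modified (BMS) coefficient."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:       # stopping test (Step 2)
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.9)[0]
        if alpha is None:                  # line search failed: restart
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
        x_new = x + alpha * d              # update iterate (Step 4)
        g_new = grad(x_new)
        y = g_new - g
        beta = (g_new @ g_new) / (d @ y)   # DY coefficient (Step 5)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

Under the Wolfe curvature condition, d_k^T y_k > 0, so the DY denominator is safe along the descent path.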

CONVERGENCE ANALYSIS
In this section, we prove the global convergence of the new algorithm under the following assumption, which has often been used in the convergence analysis of conjugate gradient methods.
Assumption 3.1. The function f is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant L > 0 such that ||g(x) − g(y)|| ≤ L ||x − y|| for all x, y.

In [39], the direction generated by the Dai-Yuan (DY) method is defined as: By using the new formula, we have: From (9) and (10), we obtain: The above relation can be rewritten as: Since (1 + ϑ) and β_k^{DY} are positive, β_k^{BMS} is always positive; now from (11), this yields: This finishes the proof.
The formula (10) is very important in our convergence analysis. Because it plays a central role in analyzing the convergence of conjugate gradient methods, Zoutendijk's condition [13] will be proved to hold for the proposed algorithm in this study.
Lemma 3.3. Let d_{k+1} be generated by (2) and let the step-size α_k fulfill (3) and (4). If f(x) satisfies Assumption 3.1, then

Σ_{k≥0} (g_k^T d_k)^2 / ||d_k||^2 < ∞.

Theorem 3.4. Suppose that Assumption 3.1 holds. Let {x_k} be generated by the Algorithm, where the step length satisfies the Wolfe line search conditions. Then

(13) lim inf_{k→∞} ||g_k|| = 0.

Proof. By contradiction, suppose that the conclusion is not true. Then there exists a constant ξ > 0 such that ||g_{k+1}||^2 ≥ ξ^2 for all k. From the search direction (2), it follows that d_{k+1} + g_{k+1} = β_k d_k.
Squaring both sides implies: Dividing both sides of this inequality by (d_{k+1}^T g_{k+1})^2, we get: The above inequality implies that Σ_k (g_k^T d_k)^2 / ||d_k||^2 diverges. This contradicts Lemma 3.3. Therefore, (13) holds.

NUMERICAL EXPERIMENTS
In this section, we report the numerical results of a comparison between the proposed method (denoted by BMS) and the RMIL+ method [40]. Our experiments use 49 test functions selected from Andrei [28] and Jamil-Yang [20], as listed in Table 1, with dimensions varying from 2 to 50,000. Runs on each test function are limited by the termination criterion ||g_k|| ≤ 10^{-6} or a maximum of 10,000 iterations. All the algorithms are coded in MATLAB R2019a on a personal laptop with the following specifications: Intel Core i7 processor, 16 GB RAM, Windows 10 Pro 64-bit. The algorithms implement the Wolfe line search conditions with σ = 0.001 and ϕ = 0.0001. The detailed results are reported in Table 2. The overall performance appraisal of the solvers is obtained from the performance profile function

ρ_s(τ) = (1/|P|) |{p ∈ P : log_2(r_{p,s}) ≤ τ}|,

where τ ≥ 0 and r_{p,s} is the ratio of the cost of solver s on problem p to the best cost among all solvers. The performance profiles are plotted in Figs. 1-3, where Fig. 1 shows the performance profiles based on the number of iterations (NOI) and Fig. 2 shows the performance profiles based on the number of function evaluations (NOF).
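The performance profile computation can be sketched as follows, assuming the standard Dolan-More definition with a log_2 scale (the exact formula used in the paper may differ slightly):

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profiles.

    costs : (n_problems, n_solvers) array of costs such as NOI or NOF;
            use np.inf for a failed run.
    taus  : iterable of tau >= 0 values.
    Returns rho with shape (len(taus), n_solvers): the fraction of problems
    each solver solves within a factor 2**tau of the best solver.
    """
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)   # best cost per problem
    log_ratios = np.log2(costs / best)        # log2 of r_{p,s}
    return np.array([(log_ratios <= tau).mean(axis=0) for tau in taus])
```

At tau = 0, rho gives the fraction of problems on which each solver is the outright winner; as tau grows, rho approaches the solver's overall success rate.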

APPLICATION IN PORTFOLIO SELECTION
Portfolio selection plays an important role in financial mathematics, risk management, and economics. It is useful for assessing combinations of the available alternative securities, and it aims to maximize the investment return of investors, which can be done either by maximizing return or by minimizing risk [14].
(15) T_t = (I_t − I_{t−1}) / I_{t−1},

where T_t is the stock return in period t, I_t is the closing stock price in period t, I_{t−1} is the closing stock price one period before t, and N is the number of observation periods.
By using data from http://finance.yahoo.com together with (15), (16) and (17), we obtain the mean, variance, and covariance of each stock's return, as shown in Tables 3 and 4.
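The computation of means and covariances from raw closing prices can be sketched as follows; the sample covariance (np.cov) is assumed here, which may differ from the paper's exact formulas (16)-(17):

```python
import numpy as np

def return_stats(prices):
    """Per-stock return statistics from closing prices.

    prices : (N+1, n_stocks) array of closing prices I_t.
    Returns the mean returns and the sample covariance matrix of the
    returns T_t = (I_t - I_{t-1}) / I_{t-1}.
    """
    prices = np.asarray(prices, dtype=float)
    returns = (prices[1:] - prices[:-1]) / prices[:-1]   # formula (15)
    return returns.mean(axis=0), np.cov(returns, rowvar=False)
```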
Since our portfolio consists of four stocks, the expected return and the variance of the portfolio return are given by [33]:

E(T_p) = Σ_{i=1}^{4} b_i E(T_i),  Var(T_p) = Σ_{i=1}^{4} Σ_{j=1}^{4} b_i b_j Cov(T_i, T_j),

where b_1, b_2, b_3, b_4 are the proportions of the BBCA, ACES, ADRO, and GGRM stocks, respectively, and Cov(T_i, T_j) is the covariance of returns between two stocks. Since what we want here is risk avoidance, we seek a small variance of the return (i.e., low risk), so our portfolio selection problem can be written as minimizing Var(T_p) subject to:

b_1 + b_2 + b_3 + b_4 = 1.

Substituting the values from Table 4, problem (20) becomes an unconstrained optimization problem as follows:
Hence, the proportions of each stock that make up the optimal portfolio with minimal risk are 57% for BBCA, 19% for ACES, −3% for ADRO, and 27% for GGRM, with an expected portfolio return of 0.0001 and a portfolio risk of 0.001. In this case, the investor is allowed to short sell, as with the ADRO stock. Another consideration regarding the application of the CG method in portfolio selection can be found in [2].
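As a check on the substitution step, the reduced unconstrained problem can be solved numerically. The covariance matrix below is an illustrative placeholder, not the data of Tables 3-4, and SciPy's CG-based minimizer stands in for the proposed method:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 4-stock covariance matrix (placeholder values only,
# not the actual figures from Tables 3-4).
cov = np.array([[4.0, 1.0, 0.5, 1.0],
                [1.0, 9.0, 0.3, 0.8],
                [0.5, 0.3, 16.0, 0.6],
                [1.0, 0.8, 0.6, 6.0]]) * 1e-4

def portfolio_variance(b_free):
    """Unconstrained objective after substituting b4 = 1 - b1 - b2 - b3,
    which eliminates the budget constraint b1 + b2 + b3 + b4 = 1."""
    b = np.append(b_free, 1.0 - b_free.sum())
    return float(b @ cov @ b)

# Solve the reduced 3-variable problem with a conjugate gradient method.
res = minimize(portfolio_variance, x0=np.full(3, 0.25), method="CG")
weights = np.append(res.x, 1.0 - res.x.sum())
```

The recovered weights automatically sum to one by construction, and negative entries correspond to short selling, as in the ADRO case above.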

CONCLUSION
The conjugate gradient methods have recently been explored by many researchers. This is due to their nice convergence properties, low memory requirements, and efficient numerical results, in addition to real-life practical applications. In this paper, we have derived a new conjugate gradient parameter for unconstrained optimization problems. The global convergence of the proposed method is established under some mild conditions. An interesting feature of our method is its ability to reduce to the classical DY method. Numerical results have been presented to illustrate the performance of the method, especially for large-scale problems.
The proposed method was further extended to solve real-life application problem of portfolio selection.