Article

A Filter and Nonmonotone Adaptive Trust Region Line Search Method for Unconstrained Optimization

1 School of Science, Southwest Petroleum University, Chengdu 610500, China
2 School of Artificial Intelligence, Southwest Petroleum University, Chengdu 610500, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(4), 656; https://doi.org/10.3390/sym12040656
Submission received: 20 March 2020 / Revised: 1 April 2020 / Accepted: 3 April 2020 / Published: 21 April 2020
(This article belongs to the Special Issue Advance in Nonlinear Analysis and Optimization)

Abstract:
In this paper, a new nonmonotone adaptive trust region algorithm is proposed for unconstrained optimization by combining a multidimensional filter and the Goldstein-type line search technique. A modified trust region ratio is presented which results in more reasonable consistency between the accurate model and the approximate model. When a trial step is rejected, we use a multidimensional filter to increase the likelihood that the trial step is accepted. If the trial step is still not successful with the filter, a nonmonotone Goldstein-type line search is used in the direction of the rejected trial step. The approximation of the Hessian matrix is updated by the modified Quasi-Newton formula (CBFGS). Under appropriate conditions, the proposed algorithm is globally convergent and superlinearly convergent. The new algorithm shows better performance in terms of the Dolan–Moré performance profile. Numerical results demonstrate the efficiency and robustness of the proposed algorithm for solving unconstrained optimization problems.

1. Introduction

Consider the following unconstrained optimization problem:
$\min_{x \in \mathbb{R}^n} f(x),$  (1)
where $f : \mathbb{R}^n \to \mathbb{R}$ is a twice continuously differentiable function. Problems of this form arise widely in applications such as medical science, optimal control, and function approximation. Many methods exist for solving unconstrained optimization problems, such as the conjugate gradient method [1,2,3], the Newton method [4,5], and the trust region method [6,7,8]. Constrained optimization problems can also be handled by processing the constraint conditions and transforming them into unconstrained optimization problems. Motivated by this, it is worthwhile to propose a new modified trust region method for solving unconstrained optimization problems.
As is commonly known, the trust region method and the line search method are the two most frequently used iterative frameworks. Line search methods compute a step length $\alpha_k$ along a specific direction $d_k$ and generate a new point as $x_{k+1} = x_k + \alpha_k d_k$. The primary idea of the trust region method is as follows: at the current iteration point $x_k$, the trial step $d_k$ is obtained by solving the following subproblem:
$\min_{d \in \mathbb{R}^n} m_k(d) = g_k^T d + \frac{1}{2} d^T B_k d,$  (2)
$\text{s.t.} \quad \|d\| \le \Delta_k,$  (3)
where $\|\cdot\|$ is the Euclidean norm, $f_k = f(x_k)$, $g_k = \nabla f(x_k)$, $B_k$ is a symmetric approximation of $G_k = \nabla^2 f(x_k)$, and $\Delta_k$ is the trust region radius.
Traditional trust region methods have some disadvantages; for example, the subproblem may need to be solved many times before an acceptable trial step is obtained within one iteration, which leads to high computational costs. One way to overcome this disadvantage is to use a line search strategy in the direction of the rejected trial step. Based on this idea, Nocedal and Yuan [9] proposed an algorithm in 1998 that combined the trust region method and the line search method for the first time. Inspired by this, Michael et al. [10], Li et al. [11], and Zhang et al. [12] proposed trust region methods equipped with line search strategies.
As can be seen in other works [4,7,8], monotone techniques are distinguished from nonmonotone techniques in that the function value must decrease at each iteration; nonmonotone techniques, by contrast, not only retain effective global convergence but can also improve the convergence rate of an algorithm. The watchdog technique was presented by Chamberlain et al. [13] in 1982 to overcome the Maratos effect in constrained optimization. Motivated by this idea, a nonmonotone line search technique was proposed by Grippo et al. [14] in 1986, in which the step length $\alpha_k$ satisfies the following inequality:
$f(x_k + \alpha_k d_k) \le f_{l(k)} + \sigma \alpha_k g_k^T d_k,$  (4)
where $\sigma \in (0,1)$, $f_{l(k)} = \max_{0 \le j \le m(k)} \{f_{k-j}\}$, $m(0) = 0$, $0 \le m(k) \le \min\{m(k-1)+1, N\}$, and $N \ge 0$ is an integer constant.
However, the common nonmonotone term $f_{l(k)}$ suffers from several drawbacks. For example, good function values produced at some iterations are essentially discarded, and the numerical results depend strongly on the choice of $N$. To overcome these drawbacks, Cui et al. [15] proposed another nonmonotone line search method as follows:
$f(x_k + \alpha_k d_k) \le C_k + \sigma \alpha_k g_k^T d_k,$  (5)
where the nonmonotone term C k is defined by
$C_k = \begin{cases} f(x_k), & k = 0, \\ \dfrac{\eta_{k-1} Q_{k-1} C_{k-1} + f(x_k)}{Q_k}, & k \ge 1, \end{cases}$  (6)
and
$Q_k = \begin{cases} 1, & k = 0, \\ \eta_{k-1} Q_{k-1} + 1, & k \ge 1, \end{cases}$  (7)
where $\sigma \in (0,1)$, $\eta_k \in [\eta_{\min}, \eta_{\max}]$, $\eta_{\min} \in [0,1]$, and $\eta_{\max} \in [\eta_{\min}, 1]$.
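For illustration, the recursion in Equations (6) and (7) takes only a few lines of MATLAB. The following minimal sketch (not part of the authors' implementation; the function name update_nonmonotone_term is hypothetical) performs one step of the update for a given $\eta_{k-1}$:
      function [C, Q] = update_nonmonotone_term(C, Q, fk, eta)
      % One step of Equations (6) and (7):
      %   Q_k = eta_{k-1}*Q_{k-1} + 1
      %   C_k = (eta_{k-1}*Q_{k-1}*C_{k-1} + f(x_k)) / Q_k
      Qnew = eta*Q + 1;
      C = (eta*Q*C + fk)/Qnew;
      Q = Qnew;
      end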
Based on this idea, in order to include the minimum value of $\alpha_k$ in an acceptable interval and keep the nonmonotone term consistent, we propose a trust region method with a Goldstein-type line search technique, in which the step length $\alpha_k$ satisfies the following inequalities (a simple search procedure is sketched after the parameter list below):
$f(x_k + \alpha_k d_k) \le R_k + c_1 \alpha_k g_k^T d_k,$  (8)
$f(x_k + \alpha_k d_k) \ge R_k + c_2 \alpha_k g_k^T d_k,$  (9)
where
$R_k = \eta_k f_{l(k)} + (1 - \eta_k) f_k,$  (10)
and $c_1 \in (0, \frac{1}{2})$, $c_2 \in (c_1, 1)$, $\eta_k \in [\eta_{\min}, \eta_{\max}]$, $\eta_{\min} \in [0,1]$, and $\eta_{\max} \in [\eta_{\min}, 1]$.
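A simple way to locate a step length satisfying Equations (8) and (9) is interval bisection: shrink the trial step when Equation (8) fails and enlarge it when Equation (9) fails. The sketch below is illustrative only and is not the code of Appendix A; the names fhandle and goldstein_step are hypothetical.
      function alpha = goldstein_step(fhandle, x, d, g, Rk, c1, c2)
      % Bisection search for a Goldstein-type step; g'*d is assumed negative.
      a = 0; b = inf; alpha = 1;
      gd = g'*d;
      for iter = 1:50
          fa = fhandle(x + alpha*d);
          if fa > Rk + c1*alpha*gd          % Equation (8) violated: step too long
              b = alpha; alpha = (a + b)/2;
          elseif fa < Rk + c2*alpha*gd      % Equation (9) violated: step too short
              a = alpha;
              if isinf(b), alpha = 2*a; else, alpha = (a + b)/2; end
          else
              return;                       % both conditions hold
          end
      end
      end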
To evaluate the consistency between the quadratic model and the objective function, the following ratio was defined by Ahookhosh et al. [16]:
$\hat{\rho}_k = \frac{R_k - f(x_k + d_k)}{m_k(0) - m_k(d_k)}.$  (11)
It is well known that an adaptive radius plays a valuable role in performance. In 1997, an adaptive strategy for automatically determining the initial trust region radius was proposed by Sartenaer [17]. However, that scheme does not explicitly use gradient or Hessian information to update the radius. Motivated by the first-order and second-order information of the objective function, Zhang et al. [18] proposed a new scheme to determine the trust region radius in 2002, namely $\Delta_k = c^p \|g_k\| \|\hat{B}_k^{-1}\|$, where $\hat{B}_k = B_k + iI$, $i \in \mathbb{N}$. In order to avoid computing the inverse of the matrix and the Euclidean norm of $\hat{B}_k^{-1}$ at each iteration point $x_k$, Zhou et al. [19] proposed an adaptive trust region radius as follows: $\Delta_k = c^p \frac{\|d_{k-1}\|}{\|y_{k-1}\|} \|g_k\|$, where $y_{k-1} = g_k - g_{k-1}$, and $c$ and $p$ are parameters. Prompted by the adaptive technique, Wang et al. [8] proposed a new adaptive trust region radius as follows: $\Delta_k = c^k \|g_k\|^\gamma$, which reduces the related workload and calculation time. Based on this fact, other authors have also proposed modified adaptive trust region methods [20,21,22].
In order to overcome the difficulty of selecting penalty factors when using penalty functions, Fletcher and Leyffer first recommended filter techniques for constrained nonlinear optimization (see [23] for details). More recently, Gould et al. [24] explored a new nonmonotone trust region method with multidimensional filter techniques for solving unconstrained optimization problems. This approach incorporates the concept of nonmonotonicity to build a filter that can reject poor iteration points and enforce convergence from arbitrary starting points. At the same time, the multidimensional filter relaxes the monotonicity requirements of the classic trust region framework. This idea has been popularized by several authors [25,26,27].
In the following, we denote $\nabla f(x_k)$ by $g_k = (g_k^1, g_k^2, \ldots, g_k^n)^T$; when the $i$th component of $g_k = g(x_k)$ is needed, it is denoted by $g_k^i$, where $i \in \{1, 2, \ldots, n\}$. We say that an iteration point $x_1$ dominates $x_2$ whenever
$|g_1^i| \le |g_2^i| - \gamma_g \|g_2\|, \quad i \in \{1, 2, \ldots, n\},$  (12)
where $\gamma_g \in (0, \frac{1}{\sqrt{n}})$ is a small positive constant.
Based on [8], a multidimensional filter $F$ is a list of $n$-tuples of the form $(g_k^1, g_k^2, \ldots, g_k^n)$ such that, whenever $g_k$ and $g_l$ both belong to $F$,
$|g_k^j| < |g_l^j| \quad \text{for at least one } j \in \{1, 2, \ldots, n\}.$  (13)
For all $g_l \in F$, a new trial point $x_k$ is acceptable if there exists $j \in \{1, 2, \ldots, n\}$ such that
$|g_k^j|^{\gamma_2} + \lambda_2 \|g_k\|^{\gamma_1} \le |g_l^j|^{\gamma_2} + \lambda_1 \|g_l\|^{\gamma_1},$  (14)
where $\gamma_1$ and $\gamma_2$ are positive constants, and $\lambda_1$ and $\lambda_2$ satisfy $0 \le \lambda_1 < \lambda_2 < \frac{1}{\sqrt{n}}$.
When an iteration point $x_k$ is accepted by the filter, we add $g(x_k)$ to the filter, and any $g(x_l) \in F$ with the property
$|g_k^j|^{\gamma_2} + \lambda_2 \|g_k\|^{\gamma_1} \le |g_l^j|^{\gamma_2} + \lambda_1 \|g_l\|^{\gamma_1} \quad \text{for all } j \in \{1, 2, \ldots, n\}$  (15)
is removed from the filter.
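In practice, the filter can be stored as a matrix whose columns are the recorded gradients, and acceptability can be checked entry by entry. The sketch below (the helper name filter_accepts is hypothetical) uses the simpler margin test of Equation (12) rather than the envelope of Equation (14); this margin variant is the one actually coded in Appendix A.
      function ok = filter_accepts(gplus, Filter, gamma_g)
      % gplus is accepted if, against every stored gradient gl, at least one
      % component improves by the margin gamma_g*norm(gl).
      ok = true;
      for i = 1:size(Filter, 2)
          gl = Filter(:, i);
          if ~any(abs(gplus) <= abs(gl) - gamma_g*norm(gl))
              ok = false;                   % no component improves: reject
              return;
          end
      end
      end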
The rest of this article is organized as follows. In Section 2, we describe a new nonmonotone adaptive trust region algorithm. We establish the global convergence and superlinear convergence of the algorithm in Section 3. In Section 4, numerical results are given, which show that the new method is effective. Finally, some concluding comments are provided in Section 5.

2. The New Algorithm

In this section, a new filter and nonmonotone adaptive trust region method with a Goldstein-type line search is proposed. The trust region ratio is used to determine whether the trial step $d_k$ is accepted. Following the trust region ratio of Ahookhosh et al. [16], we define a modified form as follows:
$\hat{\rho}_k = \frac{R_k - f(x_k + d_k)}{f_{l(k)} - f_k - m_k(d_k)}.$  (16)
We can see that the effect of nonmonotonicity can be controlled through the numerator and the denominator, respectively. Thus, the new trust region ratio may help find the global optimal solution effectively. Compared with the general filter trust region algorithm in [24], we propose a new criterion: when the trial point $x_k^+$ satisfies $0 < \hat{\rho}_k < \mu_1$, we verify whether it is accepted by the filter $F$.
At the same time, a new adaptive trust region radius is presented as follows:
$\Delta_k = c^p \|g_k\|^\gamma,$  (17)
where $0 < \gamma < 1$, $0 < c < 1$, and $p$ is a nonnegative integer. Compared with the adaptive trust region method in [8], the new method has the following effective properties: the parameter $p$ plays a vital role in adjusting the radius, and the scheme reduces the workload and computational time. However, the new trust region radius uses only gradient information, not function value information.
On the other hand, at each iteration, the trial step $d_k$ is calculated by solving
$\min_{d \in \mathbb{R}^n} m_k(d) = g_k^T d + \frac{1}{2} d^T B_k d,$  (18)
$\text{s.t.} \quad \|d\| \le \Delta_k := c^p \|g_k\|^\gamma.$  (19)
More formally, a filter and nonmonotone adaptive trust region line search method, which we call the FNATR, is described as follows.
Algorithm 1. A new filter and nonmonotone adaptive trust region line search method (FNATR).
Step 0. (Initialization) Start with $x_0 \in \mathbb{R}^n$ and a symmetric matrix $B_0 \in \mathbb{R}^{n \times n}$. The constants $\varepsilon > 0$, $N > 0$, $0 < \mu_1 < 1$, $p = 0$, $0 < \beta_1 < 1 < \beta_2$, $0 < c_1 < \frac{1}{2} < c_2 < 1$, and $\Delta_0 = \|g_0\|$ are also given. Set $F = \emptyset$ and $k = 0$.
Step 1. If $\|g_k\| \le \varepsilon$, stop.
Step 2. Solve the subproblem of Equations (18) and (19) to find the trial step $d_k$; set $x_k^+ = x_k + d_k$.
Step 3. Compute $R_k$ and $\hat{\rho}_k$, respectively.
Step 4. Test the trial step.
 If $\hat{\rho}_k \ge \mu_1$, set $x_{k+1} = x_k^+$ and $F_{k+1} = F_k$, and go to Step 5.
 Otherwise, compute $g_k^+ = \nabla f(x_k^+)$:
  if $x_k^+$ is accepted by the filter $F$, set $x_{k+1} = x_k^+$, add $g_k^+ = \nabla f(x_k^+)$ to the filter $F$, and go to Step 5;
  otherwise, find a step length $\alpha_k$ satisfying Equations (8) and (9), set $x_{k+1} = x_k + \alpha_k d_k$, set $p = p + 1$, and go to Step 5.
Step 5. Update the symmetric matrix $B_k$ by the modified quasi-Newton (CBFGS) formula. Set $k = k + 1$, $p = 0$, and go to Step 1.
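To make the flow of Steps 1–5 concrete, the following compact sketch codes one iteration under the stated assumptions. It is a simplified illustration, not the Appendix A implementation; ffun and gfun are function handles for $f$ and $\nabla f$, and trsolve, goldstein_step, and filter_accepts are hypothetical helpers for the subproblem of Equations (18) and (19), the line search of Equations (8) and (9), and the filter test sketched earlier.
      function [x, Filter, p] = fnatr_step(ffun, gfun, x, B, Filter, flk, ...
                                           eta, c, p, gam, mu1, c1, c2, gamma_g)
      fk = ffun(x);  g = gfun(x);
      delta = c^p*norm(g)^gam;                   % adaptive radius, Equation (17)
      d = trsolve(g, B, delta);                  % trial step, Equations (18) and (19)
      mkd = g'*d + 0.5*d'*B*d;                   % model value m_k(d_k)
      Rk = eta*flk + (1 - eta)*fk;               % nonmonotone reference, Equation (10)
      rho = (Rk - ffun(x + d))/(flk - fk - mkd); % modified ratio, Equation (16)
      if rho >= mu1
          x = x + d;                             % Step 4: successful trust region step
      elseif filter_accepts(gfun(x + d), Filter, gamma_g)
          x = x + d;                             % trial point accepted by the filter
          Filter = [Filter, gfun(x)];            % record its gradient
      else
          alpha = goldstein_step(ffun, x, d, g, Rk, c1, c2);
          x = x + alpha*d;  p = p + 1;           % line search along the rejected step
      end
      end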
In particular, we consider the following assumptions to analyze the convergence properties of Algorithm 1.
Assumption 1. 
The level set $L(x_0) = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ satisfies $L(x_0) \subset \Omega$; $f(x)$ is continuously differentiable and has a lower bound.
Assumption 2. 
The matrix $B_k$ is uniformly bounded; i.e., there exists a constant $M_1 > 0$ such that $\|B_k\| \le M_1$ for all $k$.
Remark 1.
There is a constant $\tau \in (0, 1)$; $B_k$ is a positive definite symmetric matrix, and $d_k$ satisfies the following inequalities:
$m_k(0) - m_k(d_k) \ge \tau \|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\},$  (20)
$-g_k^T d_k \ge \tau \|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\}.$  (21)
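Conditions (20) and (21) hold, for example, for any step that is at least as good as the Cauchy point, the minimizer of the model along $-g_k$ inside the trust region (which attains them with $\tau = \frac{1}{2}$). A standard sketch of its computation, under the stated assumptions:
      function d = cauchy_point(g, B, delta)
      % Minimize m(t) = -t*norm(g)^2 + 0.5*t^2*(g'*B*g) over t in [0, delta/norm(g)].
      gBg = g'*B*g;
      if gBg <= 0
          t = delta/norm(g);                     % model decreases along -g: go to the boundary
      else
          t = min(norm(g)^2/gBg, delta/norm(g)); % interior minimizer, clipped at the boundary
      end
      d = -t*g;
      end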
Remark 2.
If $f$ is continuously differentiable and $\nabla f(x)$ is Lipschitz continuous, there is a positive constant $L$ such that
$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|, \quad \forall x, y \in \Omega.$  (22)

3. Convergence Analysis

In order to derive the convergence results conveniently, we define the following index sets: $D = \{k : \hat{\rho}_k \ge \mu_1\}$, $A = \{k : 0 < \hat{\rho}_k < \mu_1 \text{ and } x_k^+ \text{ is accepted by the filter } F\}$, and $S = \{k : x_{k+1} = x_k + d_k\}$. Then $S = \{k : \hat{\rho}_k \ge \mu_1 \text{ or } x_k^+ \text{ is accepted by the filter } F\}$. When $k \notin S$, we have $x_{k+1} = x_k + \alpha_k d_k$.
Lemma 1.
Suppose that Assumptions 1 and 2 hold, and $d_k$ is the solution of Equation (18); then,
$f_{l(k)} - f_k - m_k(d_k) \ge \tau \|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\}.$  (23)
Proof. 
According to $f_{l(k)} = \max_{0 \le j \le m(k)} \{f_{k-j}\}$, we have $f_{l(k)} \ge f_k$. Thus, we obtain
$f_{l(k)} - f_k - m_k(d_k) \ge m_k(0) - m_k(d_k).$  (24)
Taking into account Equation (24) and Remark 1, we can conclude that Equation (23) holds. □
Lemma 2.
For all k , we can find that
$|f_k - f(x_k + d_k) - (m_k(0) - m_k(d_k))| \le O(\|d_k\|^2).$  (25)
Proof. 
The proof can be obtained by Taylor expansion together with the Lipschitz continuity of $\nabla f$ in Remark 2. □
Lemma 3.
Suppose that the infinite sequence $\{x_k\}$ is generated by Algorithm 1 and the number of successful iterations is infinite, that is, $|S| = +\infty$. Then we have $\{x_k\} \subset L(x_0)$.
Proof. 
We proceed by induction. When $k = 0$, clearly $x_0 \in L(x_0)$.
Assuming that $x_k \in L(x_0)$ ($k \ge 0$) holds, we get $f_k \le f_0$. We now prove $x_{k+1} \in L(x_0)$. Consider the following two cases:
Case 1: When $k \in D$, according to Equation (16) we have
$\frac{R_k - f_{k+1}}{f_{l(k)} - f_k - m_k(d_k)} \ge \mu_1.$  (26)
Thus,
$R_k \ge f_{k+1} + \mu_1 (f_{l(k)} - f_k - m_k(d_k)).$  (27)
According to Equations (23) and (27), we obtain $R_k \ge f_{k+1}$. Using the definitions of $R_k$ and $f_{l(k)}$, we get
$R_k = \eta_k f_{l(k)} + (1 - \eta_k) f_k \le \eta_k f_{l(k)} + (1 - \eta_k) f_{l(k)} = f_{l(k)}.$  (28)
The above two inequalities show that
$f_{k+1} \le R_k \le f_{l(k)} \le f_0.$  (29)
Case 2: When $k \in A$, according to $0 < \hat{\rho}_k < \mu_1$ and Equation (23), we have $R_k - f(x_k + d_k) > 0$. Thus, we again obtain $f_{k+1} \le R_k \le f_{l(k)} \le f_0$. This shows that the sequence $\{x_k\} \subset L(x_0)$. □
Lemma 4.
Suppose that Assumptions 1 and 2 hold, and the sequence $\{x_k\}$ is generated by Algorithm 1. Then the sequence $\{f_{l(k)}\}$ is monotonically non-increasing and convergent.
Proof. 
The proof is similar to the proof of Lemma 5 in [8] and is here omitted. □
Lemma 5.
Suppose that Assumptions 1 and 2 hold, the sequence $\{x_k\}$ is generated by Algorithm 1, and there exists a constant $0 < \varepsilon < 1$ such that $\|g_k\| > \varepsilon$ for all $k$. Then Algorithm 1 is well defined; that is, the inner cycle terminates in a finite number of steps.
Proof. 
Suppose, by contradiction, that Algorithm 1 cycles infinitely at iteration $k$. Then we have
$\hat{\rho}_k^p < \mu_1, \quad \forall p.$  (30)
Following Equation (17), we have $c^p \to 0$ as $p \to \infty$. Thus, we get
$\|d_k^p\| \le \Delta_k^p \to 0,$  (31)
where $d_k^p$ is a solution of the subproblem of Equation (18) corresponding to $p$ at the $k$th iteration. Combining Lemma 1, Lemma 2, and Equation (28), we obtain
$|\hat{\rho}_k^p - 1| = \left|\frac{R_k - f(x_k + d_k^p)}{f_{l(k)} - f_k - m_k(d_k^p)} - 1\right| = \left|\frac{R_k - f(x_k + d_k^p) - f_{l(k)} + f_k + m_k(d_k^p)}{f_{l(k)} - f_k - m_k(d_k^p)}\right| \le \left|\frac{f_k - f(x_k + d_k^p) + m_k(d_k^p)}{f_{l(k)} - f_k - m_k(d_k^p)}\right| \le \frac{O(\|d_k^p\|^2)}{\tau \|g_k\| \min\{\Delta_k^p, \|g_k\| / \|B_k\|\}} \le \frac{O(\|d_k^p\|^2)}{\tau \varepsilon \min\{\Delta_k^p, \varepsilon / M_1\}} \le \frac{O((\Delta_k^p)^2)}{O(\Delta_k^p)} \to 0 \quad (p \to \infty),$  (32)
which implies that $\hat{\rho}_k^p \ge \mu_1$ for all sufficiently large $p$. This contradicts Equation (30) and shows that Algorithm 1 is well defined. □
Lemma 6.
Suppose that Assumptions 1 and 2 hold, and there exists a constant $\varepsilon$ such that $\|g_k\| \ge \varepsilon$ for all $k$. Then there is a constant $\upsilon > 0$ such that
$\Delta_k > \upsilon, \quad k = 0, 1, 2, \ldots$  (33)
Proof. 
The proof is similar to that of Theorem 6.4.3 in [28], and is therefore omitted here. □
In what follows, we establish the global convergence of Algorithm 1 based on the above lemmas.
Theorem 1.
(Global Convergence) Suppose that Assumptions 1 and 2 hold, and the sequence $\{x_k\}$ is generated by Algorithm 1. Then,
$\liminf_{k \to \infty} \|g_k\| = 0.$  (34)
Proof. 
Divide the proof into the following two cases:
Case 1: The numbers of successful iterations and filter iterations are both infinite; i.e., $|S| = +\infty$ and $|A| = +\infty$.
Suppose, on the contrary, that Equation (34) does not hold. Then there exists a constant $\varepsilon > 0$ such that $\|g_k\| > \varepsilon$ for all sufficiently large $k$. Introduce the index set $S = \{k_i\}$. Following Assumption 1, $\{g_k\}$ is bounded. Therefore, there is a subsequence $\{k_t\} \subset \{k_i\}$ such that
$\lim_{t \to \infty} \|g_{k_t}\| = \bar{\varepsilon},$  (35)
where $\bar{\varepsilon}$ is a constant. Since the iteration point $x_{k_t}$ is accepted by the filter $F_{k_t}$, for every $t > 1$ there exists $j \in \{1, 2, \ldots, n\}$ such that
$|g_{k_t}^j| \le |g_{k_{t-1}}^j| - \gamma_g \|g_{k_{t-1}}\|.$  (36)
Letting $t \to \infty$, we have
$\lim_{t \to \infty} \left( |g_{k_t}^j| - |g_{k_{t-1}}^j| \right) = 0.$  (37)
However, we obtain $-\gamma_g \|g_{k_{t-1}}\| \le -\gamma_g \varepsilon < 0$, which means that Equation (37) cannot hold. This completes the proof of Case 1.
Case 2: The number of successful iterations is infinite and the number of filter iterations is finite; i.e., $|S| = +\infty$ and $|A| < +\infty$.
We proceed by contradiction. Suppose that there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ for sufficiently large $k$. Since $|A| < +\infty$, for all sufficiently large $k \in S$ we have $\hat{\rho}_k \ge \mu_1$. For a sufficiently large index $p$, set
$T = \{p, p+1, \ldots, k\} \cap S, \quad \xi_k = |T|.$  (38)
Based on Assumption 2, Equation (28), Lemma 1, and Lemma 6, we write
$\sum_{k \in T} (f_{l(k)} - f_{k+1}) \ge \sum_{k \in T} (R_k - f_{k+1}) \ge \xi_k \mu_1 \tau \varepsilon \min\left\{\upsilon, \frac{\varepsilon}{M_1}\right\}.$  (39)
As $p$ and $k$ grow, since $|S| = +\infty$ and $|A| < +\infty$, $\xi_k$ becomes arbitrarily large. Thus $\xi_k \mu_1 \tau \varepsilon \min\{\upsilon, \varepsilon / M_1\} \to +\infty$, so the left-hand side of Equation (39) is unbounded above. On the other hand, we can deduce that
$\sum_{k \in T} (f_{l(k)} - f_{k+1}) \le \sum_{j=p}^{k} (f_{l(j)} - f_{l(j+1)}) = f_{l(p)} - f_{l(k+1)}.$  (40)
Using Lemma 4, the right-hand side of Equation (40) is bounded as $p$ and $k$ grow, so the left-hand side of Equation (40) is bounded above, which contradicts Equation (39). This completes the proof of Theorem 1. □
Now, based on the appropriate conditions, the following superlinear convergence is presented for Algorithm 1.
Theorem 2.
(Superlinear Convergence) Suppose that Assumptions 1 and 2 hold, and the sequence $\{x_k\}$ generated by Algorithm 1 converges to $x^*$. Moreover, assume that the Hessian matrix $\nabla^2 f(x^*)$ is positive definite, $\|d_k\| \le \Delta_k$ with $d_k = -B_k^{-1} g_k$, and
$\lim_{k \to \infty} \frac{\|(B_k - \nabla^2 f(x^*)) d_k\|}{\|d_k\|} = 0;$  (41)
then the sequence $\{x_k\}$ converges to $x^*$ superlinearly.
Proof. 
The proof follows the same lines as that of Theorem 4.1 in [29]. □

4. Preliminary Numerical Experiments

In this section, we present numerical results to illustrate the performance of Algorithm 1 in comparison with the standard nonmonotone trust region algorithm of Pang et al. [30] (ASNTR), the nonmonotone adaptive trust region algorithm of Ahookhosh et al. [16] (ANATR), and the multidimensional filter trust region algorithm of Wang et al. [8] (AFTR). All algorithms were implemented in double precision in MATLAB 9.4; the codes are given in Appendix A. A set of unconstrained optimization test problems, including medium-scale and large-scale problems, was selected from Andrei [31]. The stopping criteria are that the number of iterations exceeds 10,000 or $\|g_k\| \le 10^{-6} (1 + |f(x_k)|)$. $n_f$, $n_i$, and CPU represent the total number of function evaluations, the total number of gradient evaluations, and the running time in seconds, respectively. Following Step 0, we use the following values: $\mu_1 = 0.25$, $\beta_1 = 0.25$, $\beta_2 = 1.5$, $\eta_0 = 0.25$, $N = 5$, $\varepsilon = 0.5$, $c_1 = 0.25$, $c_2 = 0.75$, and $B_0 = I \in \mathbb{R}^{n \times n}$. In addition, $\eta_k$ is updated by the following recursive formula:
$\eta_k = \begin{cases} \eta_0 / 2, & k = 1, \\ (\eta_{k-1} + \eta_{k-2}) / 2, & k \ge 2. \end{cases}$  (42)
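For instance, with $\eta_0 = 0.25$ the first few values of this recursion can be generated as follows (a small illustrative script, not part of the test code):
      eta0 = 0.25;  K = 10;
      eta = zeros(1, K);
      eta(1) = eta0/2;                  % k = 1
      eta(2) = (eta(1) + eta0)/2;       % k = 2 uses eta_1 and eta_0
      for k = 3:K
          eta(k) = (eta(k-1) + eta(k-2))/2;
      end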
The matrix B k is updated using a CBFGS formula [32]:
$B_{k+1} = \begin{cases} B_k + \dfrac{y_k y_k^T}{d_k^T y_k} - \dfrac{B_k d_k d_k^T B_k}{d_k^T B_k d_k}, & \dfrac{y_k^T d_k}{\|d_k\|^2} \ge \varepsilon \|g_k\|^\alpha, \\ B_k, & \dfrac{y_k^T d_k}{\|d_k\|^2} < \varepsilon \|g_k\|^\alpha, \end{cases}$  (43)
where $d_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.
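A minimal sketch of this cautious update, where eps_c and alpha_c stand for the constants $\varepsilon$ and $\alpha$ in Equation (43) and the helper name cbfgs_update is hypothetical:
      function B = cbfgs_update(B, dk, yk, gk, eps_c, alpha_c)
      if (yk'*dk)/norm(dk)^2 >= eps_c*norm(gk)^alpha_c
          B = B - (B*(dk*dk')*B)/(dk'*B*dk) + (yk*yk')/(yk'*dk);  % BFGS update
      end                                        % otherwise B is kept unchanged
      end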
From Table 1, it can easily be seen that Algorithm 1 outperforms the ASNTR, ANATR, and AFTR algorithms with respect to $n_f$, $n_i$, and CPU time, especially on some problems. The Dolan–Moré [33] performance profile was used to compare efficiency in terms of the number of function evaluations, the number of gradient evaluations, and the running time. A performance index is selected as the measure of comparison among the mentioned algorithms, and the results are illustrated by a performance profile: for every $\tau \ge 1$, the profile gives the proportion $\rho(\tau)$ of the test problems on which the performance of each considered algorithm is within a factor $\tau$ of the best.
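For reference, a performance profile of this kind can be computed directly from a cost table; in the sketch below (the function name perf_profile is hypothetical), T(i,s) is the measure ($n_f$, $n_i$, or CPU time) of solver s on problem i, with Inf marking failures.
      function [tau, rho] = perf_profile(T)
      r = T ./ min(T, [], 2);                % performance ratios r(i,s)
      tau = unique(r(isfinite(r)))';         % breakpoints of the step functions
      rho = zeros(numel(tau), size(T, 2));
      for s = 1:size(T, 2)
          for j = 1:numel(tau)
              rho(j, s) = mean(r(:, s) <= tau(j));  % fraction within factor tau of best
          end
      end
      end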
It can be easily seen from Figure 1, Figure 2 and Figure 3 that the new algorithm shows a better performance than the other algorithms from the perspective of the number of function evaluations, the number of gradient evaluations, and running time, especially in contrast to ASNTR. As a general result, we can infer that the new algorithm is more efficient and robust than the other mentioned algorithms in terms of the total number of iterations and running time.

5. Conclusions

In this paper, we combine the nonmonotone adaptive line search strategy with multidimensional filter techniques, and propose a nonmonotone trust region method with a new adaptive radius. Our method possesses the following attractive properties:
(1) The new algorithm is quite different from the standard trust region method; in order to avoid re-solving the subproblem, a new nonmonotone Goldstein-type line search is performed in the direction of the rejected trial step.
(2) A new adaptive trust region radius is presented, which decreases the amount of work and computational time. However, full use of the function information for the new trust region radius is not made. A modified trust region ratio is computed which provides more information about evaluating the consistency between the quadratic model and the objective function.
(3) The approximation of the Hessian matrix is updated by the modified BFGS method.
Convergence analysis has shown that the proposed algorithm preserves global convergence as well as superlinear convergence. Numerical experiments were performed on a set of unconstrained optimization test problems from [31]. The numerical results showed that the proposed method is more competitive than the ASNTR, ANATR, and AFTR algorithms for medium-scale and large-scale problems with respect to the performance profile of Dolan and Moré [33]. Thus, we can conclude that the new algorithm works quite well for solving unconstrained optimization problems. In the future, it will be interesting to apply the new nonmonotone trust region method to constrained optimization problems and nonlinear equations with constraints. It will also be interesting to combine an improved conjugate gradient algorithm with an improved nonmonotone trust region method to solve further optimization problems.

Author Contributions

Conceptualization, Q.Q.; writing—original draft, Q.Q.; methodology, Q.Q.; writing—review and editing, X.D.; resources, X.D.; data curation, X.W.; project administration, X.W.; software, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank all those who helped improve the quality of the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

      function [xstar,ystar,fnum,gnum,k,val]=nonmonotone40(x0,N,npro)
      flag=1;
      k=1;
      j=0;
      x=x0;
      n=length(x);
      f(k)=f_test(x,n,npro);
      g=g_test(x,n,npro);
      H=eye(n,n);
      eta1=0.25;
      fnum=1;
      gnum=1;
      flk=f(k);
      p=0;
      delta=norm(g);
      eps=1e-6;
      t=1;
      F(:,t)=x;
      t=t+1;
      while flag
          if (norm(g)<=eps*(1+abs(f(k))))
              flag=0;
              break;
          end
          [d, val] = Trust_q(f(k), g, H, delta);
          faiafa=f_test(x+d,n,npro);
          fnum=fnum+1;
          flk=max(f(k-j:k));   % nonmonotone term f_l(k): max over the last j+1 values
          Rk=0.25*flk+0.75*f(k);
          dq = flk - val;      % f_l(k) - f_k - m_k(d_k); val from Trust_q equals f(k) + m_k(d)
          df=Rk-faiafa;
          rk = df/dq;
          flag_filter=0;
           if rk > eta1
             x1=x+d;
            faiafa=f_test(x1,n,npro);
           else
               g0=g_test(x+d,n,npro);
               for i=1:(t-1)
               gg=g_test(F(:,i),n,npro);
               end
               for l =1:n
                   rg=1/sqrt(n-1);
               if abs(g0(l))<=abs(gg(l))-rg*norm(gg)
                   flag_filter=1;
               end
               end
                 m=0;
                 mk=0;
                rho=0.6;
               sigma=0.25;
          while (m<20)
              if f_test(x+rho^m*d,n,npro)<f_test(x,n,npro )+sigma*rho^m*g'*d
                  mk=m;
                 break;
             end
              m=m+1;
           end
           x1=x+rho^mk*d;
           faiafa=f_test(x1,n,npro);
           fnum=fnum+1;
           p=p+1;
           end
           flag1=0;
           if flag_filter==1
                  flag1=1;
                  g_f2=abs(g);
                   for i=1:t-1
                    g_f1=abs(g_test(F(:,i),n,npro));
                     if g_f1>g_f2
                         F(:,i)=x0;
                     end
                   end
          end
              %%%%%%%%%%%%%%%%%%%%
               if flag1==1
                   F(:,t)=x;
                   t=t+1;
               else
               for i=1:t-1
                                 if F(:,i)==x
                                     F(:,i)=[];
                                     t=t-1;
                                 end
                    end
               end
          dx = x1-x;
          dg=g_test(x1, n,npro)-g;
          if dg'*dx > 0
                  H= H- (H*(dx*dx')*H)/(dx'*H*dx) + (dg*dg')/(dg'*dx);  % BFGS update when curvature holds
          end
           delta=0.5^p*norm(g)^0.75;
           k=k+1;
           f(k)=faiafa;
           j=min ([j+1, N]);   % grow the nonmonotone memory up to N
           g=g_test(x1, n,npro);
           gnum=gnum+1;
           x0=x1;
           x=x0;
           p=0;
      end
      val = f(k)+ g'*d + 0.5*d'*H*d;
      xstar=x;
      ystar=f(k);
      end
      function [d, val] = Trust_q(Fk, gk, H, deltak)
      % min qk(d) = fk + gk'*d + 0.5*d'*Bk*d,  s.t. ||d|| <= deltak
      n = length(gk);
      rho = 0.6;
      sigma = 0.4;
      mu0 = 0.5;
      lam0 = 0.25;
      gamma = 0.15;
      epsilon = 1e-6;
      d0 = ones(n, 1);
      zbar = [mu0, zeros(1, n + 1)]';
      i = 0;
      mu = mu0;
      lam = lam0;
      d = d0;
      while i <= 100
          HB = dah (mu, lam, d, gk, H, deltak);
          if norm(HB) <= epsilon
              break;
          end
          J = JacobiH(mu, lam, d,H, deltak);
          b = psi (mu, lam, d, gk, H, deltak, gamma) *zbar - HB;
          dz = J\b;
          dmu = dz(1);
          dlam = dz(2);
          dd = dz(3 : n + 2);
          m = 0;
          mi = 0;
          while m < 20
              t1 = rho^m;
              Hnew = dah (mu + t1*dmu, lam + t1*dlam, d + t1*dd, gk, H, deltak);
              if norm(Hnew) <= (1 - sigma*(1 - gamma*mu0) *rho^m) *norm(HB)
                  mi = m;
                  break;
              end
              m = m+1;
          end
          alpha = rho^mi;
          mu = mu + alpha*dmu;
          lam = lam + alpha*dlam;
          d = d + alpha*dd;
          i = i + 1;
      end
      val = Fk+ gk'*d + 0.5*d'*H*d;
      end
      function p = phi (mu, a, b)
      p = a + b - sqrt((a - b)^2 + 4*mu^2);
      end
      function HB = dah (mu, lam, d, gk,H, deltak)
      n = length(d);
      HB = zeros (n + 2, 1);
      HB (1) = mu;
      HB (2) = phi (mu, lam, deltak^2 - norm(d)^2);
      HB (3: n + 2) = (H + lam*eye(n)) *d + gk;
      end
      function J = JacobiH(mu, lam, d, H, deltak)
      n = length(d);
      t2 = sqrt((lam + norm(d)^2 - deltak^2)^2 + 4*mu^2);
      pmu = -4*mu/t2;
      thetak = (lam + norm(d)^2 - deltak^2)/t2;
      J= [1,                0,               zeros(1, n);
          pmu,           1 - thetak,  -2*(1 + thetak)*d';
          zeros (n, 1),  d,               H+ lam*eye(n)];
      end
      function si = psi (mu, lam, d, gk,H, deltak, gamma)
      HB = dah (mu, lam, d, gk,H, deltak);
      si = gamma*norm(HB)*min (1, norm(HB));
      end
      % Partial test functions
        function f = f_test(x,n,nprob)
      %      integer i,iev,ivar,j
      %      real ap,arg,bp,c2pdm6,cp0001,cp1,cp2,cp25,cp5,c1p5,c2p25,c2p625,
      %           c3p5,c25,c29,c90,c100,c10000,c1pd6,d1,d2,eight,fifty,five,
      %           four,one,r,s1,s2,s3,t,t1,t2,t3,ten,th,three,tpi,two,zero
      %      real fvec(50), y(15)
           zero = 0.0e0; one = 1.0e0; two = 2.0e0; three = 3.0e0; four = 4.0e0;
           five = 5.0e0; eight = 8.0e0; ten = 1.0e1; fifty = 5.0e1;
           c2pdm6 = 2.0e-6; cp0001 = 1.0e-4; cp1 = 1.0e-1; cp2 = 2.0e-1;
           cpp2=2.0e-2; cp25 = 2.5e-1; cp5 = 5.0e-1; c1p5 = 1.5e0; c2p25 = 2.25e0;
           c2p625 = 2.625e0; c3p5 = 3.5e0; c25 = 2.5e1; c29 = 2.9e1;
           c90 = 9.0e1; c100 = 1.0e2; c10000 = 1.0e4; c1pd6 = 1.0e6;
           ap = 1.0e-5; bp = 1.0e0;
      if nprob == 1
      % extended rosenbrock function
            f = zero;
            for j = 1: 2: n
               t1 = one - x(j);
               t2 = ten*(x(j+1) - x(j)^2);
               f = f + t1^2 + t2^2;
            end
       elseif nprob == 3
      % Extended White & Holst function
            f = zero;
            for j = 1: 2: n
               t1 = one - x(j);
               t2 = ten*(x(j+1) - x(j)^3);
               f = f + t1^2 + t2^2;
            end
      elseif nprob == 4
      %EXT beale function.
            f=zero;
            for j=1:2: n
              s1=one-x(j+1);
              t1=c1p5-x(j)*s1;
              s2=one-x(j+1) ^2;
              t2=c2p25-x(j)*s2;
              s3 = one - x(j+1) ^3;
              t3 = c2p625 - x(j)*s3;
            f = f+t1^2 + t2^2 + t3^2;
          end
      elseif nprob == 5
      % penalty function i.
            t1 = -cp25;
            t2 = zero;
            for j = 1: n
               t1 = t1 + x(j)^2;
               t2 = t2 + (x(j) - one) ^2;
            end
            f = ap*t2 + bp*t1^2;
      elseif nprob == 6
      % Pert.Quad
            f1=zero;
            f2=zero;
            f=zero;
            for j=1: n
            t=j*x(j)^2;
           f1=t+f1;
            end
       for j=1: n
          t2=x(j);
          f2=f2+t2;
      end
      f=f+f1+1/c100*f2^2;
      elseif nprob == 7
       % Raydan 1
        f=zero;
        for j=1: n
          f1=j*(exp(x(j))-x(j))/ten;
          f=f1+f;
        end
      elseif nprob == 8
      % Raydan 2 function
          f=zero;
          for j=1: n
          ff=exp(x(j))-x(j);
          f=ff+f;
          end
      elseif nprob==9
       % Diagonal 1
          f=zero;
          for j=1: n
           ff=exp(x(j))-j*x(j);
           f=ff+f;
          end
      elseif nprob==10
      % Diagonal 2
      f=zero;
      for j=1: n
          ff=exp(x(j))-x(j)/j;
          f=ff+f;
          x0(j)=1/j;
      end
      elseif nprob==11
      % Diagonal 3
        f=zero;
        for i=1: n
          ff=exp(x(i))-i*sin(x(i));
          f=ff+f;
        end
      elseif nprob==12
       % Hager
      f=zero;
      for j=1: n
           f1=exp(x(j))-sqrt(j)*x(j);
           f=f+f1;
      end
      elseif nprob==13
      %Gen. Trid 1
      f=zero;
      for j=1: n-1
          f1=(x(j)-x(j+1) +one) ^4+(x(j)+x(j+1)-three) ^2;
          f=f+f1;
      end
      elseif nprob==14
      %Extended Tridiagonal 1 function
      f=zero;
      for j=1:2: n
         f1=(x(j)+x(j+1)-three) ^2+(x(j)+x(j+1) +one) ^4;
         f=f1+f;
      end
      elseif nprob==15
      %Extended TET function
      f=zero;
      for j=1:2: n
          f1=exp(x(j)+three*x(j+1)-cp1) + exp(x(j)-three*x(j+1)-cp1) +exp(-x(j)-cp1);
          f=f1+f;
      end
       end
       end
      function g = g_test(x,n,nprob)
      %      integer i,iev,ivar,j
      %      real ap,arg,bp,c2pdm6,cp0001,cp1,cp2,cp25,cp5,c1p5,c2p25,c2p625,
      %     *     c3p5,c19p8,c20p2,c25,c29,c100,c180,c200,c10000,c1pd6,d1,d2,
      %     *     eight,fifty,five,four,one,r,s1,s2,s3,t,t1,t2,t3,ten,th,
      %     *    three,tpi,twenty,two,zero
      %      real fvec(50), y(15)
      %      real float
      %      data zero,one,two,three,four,five,eight,ten,twenty,fifty
      %     *     /0.0e0,1.0e0,2.0e0,3.0e0,4.0e0,5.0e0,8.0e0,1.0e1,2.0e1,
      %     *      5.0e1/
      %      data c2pdm6, cp0001, cp1, cp2, cp25, cp5, c1p5, c2p25, c2p625, c3p5,
      %     *     c19p8, c20p2, c25, c29, c100, c180, c200, c10000, c1pd6
      %     *     /2.0e-6,1.0e-4,1.0e-1,2.0e-1,2.5e-1,5.0e-1,1.5e0,2.25e0,
      %     *      2.625e0,3.5e0,1.98e1,2.02e1,2.5e1,2.9e1,1.0e2,1.8e2,2.0e2,
      %     *      1.0e4,1.0e6/
      %      data ap,bp /1.0e-5,1.0e0/
      %      data y(1),y(2),y(3),y(4),y(5),y(6),y(7),y(8),y(9),y(10),y(11),
      %     *     y (12), y (13), y (14), y (15)
      %     *     /9.0e-4,4.4e-3,1.75e-2,5.4e-2,1.295e-1,2.42e-1,3.521e-1,
      %     *      3.989e-1,3.521e-1,2.42e-1,1.295e-1,5.4e-2,1.75e-2,4.4e-3,
      %     *      9.0e-4/
           zero = 0.0e0; one = 1.0e0; two = 2.0e0; three = 3.0e0; four = 4.0e0;
           five = 5.0e0; eight = 8.0e0; ten = 1.0e1; twenty = 2.0e1; fifty = 5.0e1;
           cpp2=2.0e-2; c2pdm6 = 2.0e-6; cp0001 = 1.0e-4; cp1 = 1.0e-1; cp2 = 2.0e-1;
           cp25 = 2.5e-1; cp5 = 5.0e-1; c1p5 = 1.5e0; c2p25 = 2.25e0; c40=4.0e1;
           c2p625 = 2.625e0; c3p5 = 3.5e0; c25 = 2.5e1; c29 = 2.9e1;
           c180 = 1.8e2; c100 = 1.0e2; c400=4.0e4; c200=2.0e2; c600=6.0e2; c10000 = 1.0e4; c1pd6 = 1.0e6;
           ap = 1.0e-5; bp = 1.0e0; c200 = 2.0e2; c19p8 = 1.98e1;
           c20p2 = 2.02e1;
      if nprob == 1
      %extended rosenbrock function.
         for j = 1: 2: n
               t1 = one - x(j);
               g(j+1) = c200*(x(j+1) - x(j)^2);
               g(j) = -two*(x(j)*g(j+1) + t1);
         end
      elseif nprob == 3
      % Extended White & Holst function
       for j = 1: 2: n
               t1 = one - x(j);
          g(j)=two*t1-c600*(x(j+1)-x(j)^3) *x(j);
          g(j+1) =c200*(x(j+1)-x(j)^3);
       end
      elseif nprob == 4
       % extended beale function.
          for j=1:2: n
            s1 = one - x(j+1);
            t1 = c1p5 - x(j)*s1;
            s2 = one - x(j+1) ^2;
            t2 = c2p25 - x(j)*s2;
            s3 = one - x(j+1) ^3;
            t3 = c2p625 - x(j)*s3;
            g(j) = -two*(s1*t1 + s2*t2 + s3*t3);
            g(j+1) = two*x(j)*(t1 + x(j+1) *(two*t2 + three*x(j+1) *t3));
          end
      elseif nprob == 5
      % penalty function i.
         for j=1: n
            g(j)=four*bp*x(j)*(x(j)^2-cp25) +two*(x(j)-one);
         end
      elseif nprob == 6
        % Perturbed Quadratic function
          f2=zero;
      for j=1: n
          t2=x(j);
          f2=f2+t2;
      end
      for j=1: n
            g(j)=two*j*x(j)+cpp2*f2;   % d/dx_j of (1/100)*(sum x)^2 is (2/100)*sum x
      end
      elseif nprob == 7
      % Raydan 1
      for j=1: n
          g(j)=j*(exp(x(j))-one)/ten;
      end
      elseif nprob ==8
      % Raydan 2
      for j=1: n
          g(j)=exp(x(j))-one;
      end
      elseif nprob==9
      % Diagonal 1 function
      for j=1: n
       g(j)=exp(x(j))-j;
      end
      elseif nprob==10
      % Diagonal 2 function
        for j=1: n
          g(j)=exp(x(j))-1/j;
        end
      elseif nprob==11
      % Diagonal 3 function
        for j=1: n
          g(j)=exp(x(j))-j*cos(x(j));
        end
      elseif nprob==12
      % Hager function
        for j=1: n
          g(j)=exp(x(j))-sqrt(j);
        end
      elseif nprob==13
      % Gen. Trid 1
        for j=1:2: n-1
           g(j)=four*(x(j)-x(j+1)+one)^3+two*(x(j)+x(j+1)-three);
           g(j+1)=-four*(x(j)-x(j+1)+one)^3+two*(x(j)+x(j+1)-three);
        end
      elseif nprob==14
      %Extended Tridiagonal 1 function
        for j=1:2: n
          g(j)=two*(x(j)+x(j+1)-three)+four*(x(j)+x(j+1)+one)^3;
          g(j+1)=two*(x(j)+x(j+1)-three)+four*(x(j)+x(j+1)+one)^3;
        end
      elseif nprob==15
      % Extended TET function
      for j=1:2: n
          g(j)=exp(x(j)+three*x(j+1)-cp1)+ exp(x(j)-three*x(j+1)-cp1)-exp(-x(j)-0.1);
           g(j+1) =three*exp(x(j)+three*x(j+1)-cp1)-three*exp(x(j)-three*x(j+1)-cp1);
       end
       end
       end
      tic;
      npro=1;
      %Extended Rosenbrock
      if npro==1
          x0=zeros (500,1);
          for i=1:2:500
              x0(i)=-1.2;
              x0(i+1) =1;
          end
      %Generalized Rosenbrock
      elseif npro==2
       x0=zeros (1000,1);
          for i=1:2:1000
              x0(i)=-1.2;
              x0(i+1) =1;
          end
      %Extended White & Holst function
      elseif npro==3
        x0=zeros (500,1);
          for i=1:2:500
              x0(i)=-1.2;
              x0(i+1) =1;
          end
      %Extended Beale
      elseif npro==4
         x0=zeros (500,1);
         for i=1:2:500
             x0(i)=1;
              x0(i+1)=0.8;
         end
       %Penalty
      elseif npro==5
          x0=zeros (500,1);
          for i=1:500
          x0(i)=i;
          end
      % Perturbed Quadratic function
      elseif npro==6
          x0=0.5*ones (36,1);
      % Raydan 1
      elseif npro == 7
          x0=ones (100,1);
      %Raydan 2
      elseif npro==8
         x0=ones (500,1);
      %Diagonal 1 function
      elseif npro==9
        x0=0.5*ones (500,1);
      %Diagonal 2 function
       elseif npro==10
         x0=zeros (500,1);
         for i=1:500
          x0(i)=1/i;
         end
      %Diagonal 3 function
      elseif npro==11
          x0=ones (500,1);
      % Hager function
       elseif npro==12
          x0=ones (500,1);
      %Gen. Trid 1
       elseif npro==13
          x0=2*ones (500,1);
      %Extended Tridiagonal 1 function
       elseif npro==14
          x0=2*ones (500,1);
      %Extended TET function
       elseif npro==15
          x0=0.1*ones (500,1);
      end
      N=5;
      [xstar,ystar,fnum,gnum,k,val]=nonmonotone40(x0,N,npro);
       fprintf('%d, %d, %g\n',fnum,gnum,val);
      xstar;
      ystar;
      toc
	  

References

  1. Jiang, X.Z.; Jian, J.B. Improved Fletcher–Reeves and Dai–Yuan conjugate gradient methods with the strong Wolfe line search. J. Comput. Appl. Math. 2019, 328, 525–534.
  2. Fatemi, M. A new efficient conjugate gradient method for unconstrained optimization. J. Comput. Appl. Math. 2016, 300, 207–216.
  3. Andrei, N. New hybrid conjugate gradient algorithms for unconstrained optimization. Encycl. Optim. 2009, 141, 2560–2571.
  4. Gao, H.; Zhang, H.B.; Li, Z.B. A nonmonotone inexact Newton method for unconstrained optimization. Optim. Lett. 2017, 11, 947–965.
  5. Ueda, K.; Yamashita, N. A regularized Newton method without line search for unconstrained optimization. Comput. Optim. Appl. 2014, 59, 321–351.
  6. Xue, Y.Q.; Liu, H.W.; Liu, H.Z. An improved nonmonotone adaptive trust region method. Appl. Math. 2019, 64, 335–350.
  7. Rezaee, S.; Babaie-Kafaki, S. A modified nonmonotone trust region line search method. J. Appl. Math. Comput. 2017.
  8. Wang, X.Y.; Ding, X.F.; Qu, Q. A new filter nonmonotone adaptive trust region method for unconstrained optimization. Symmetry 2020, 12, 208.
  9. Nocedal, J.; Yuan, Y. Combining trust region and line search techniques. Adv. Nonlinear Program. 1998, 153–175.
  10. Michael, G.E. A quasi-Newton trust region method. Math. Program. 2004, 100, 447–470.
  11. Li, C.Y.; Zhou, Q.H.; Wu, X. A non-monotone trust region method with non-monotone Wolfe-type line search strategy for unconstrained optimization. J. Appl. Math. Phys. 2015, 3, 707–712.
  12. Zhang, H.C.; Hager, W.W. A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 2004, 14, 1043–1056.
  13. Chamberlain, R.M.; Powell, M.J.D.; Lemarechal, C.; Pedersen, H.C. The watchdog technique for forcing convergence in algorithms for constrained optimization. Math. Program. Stud. 1982, 16, 1–17.
  14. Grippo, L.; Lampariello, F.; Lucidi, S. A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 1986, 23, 707–716.
  15. Cui, Z.C.; Wu, B.; Qu, S.J. Combining nonmonotone conic trust region and line search techniques for unconstrained optimization. J. Comput. Appl. Math. 2011, 235, 2432–2441.
  16. Ahookhosh, M.; Amini, K.; Peyghami, M.R. A nonmonotone trust region line search method for large-scale unconstrained optimization. Appl. Math. Model. 2012, 36, 478–487.
  17. Sartenaer, A. Automatic determination of an initial trust region in nonlinear programming. SIAM J. Sci. Comput. 1997, 18, 1788–1803.
  18. Zhang, X.S.; Zhang, J.L.; Liao, L.Z. An adaptive trust region method and its convergence. Sci. China 2002, 45, 620–631.
  19. Zhou, S.; Yuan, G.L.; Cui, Z.R. A new adaptive trust region algorithm for optimization problems. Acta Math. Sci. 2018, 38B, 479–496.
  20. Kimiaei, M. A new class of nonmonotone adaptive trust-region methods for nonlinear equations with box constraints. Calcolo 2017, 54, 769–812.
  21. Amini, K.; Shiker, M.A.K.; Kimiaei, M. A line search trust-region algorithm with nonmonotone adaptive radius for a system of nonlinear equations. 4OR Q. J. Oper. Res. 2016, 14, 132–152.
  22. Peyghami, M.R.; Tarzanagh, D.A. A relaxed nonmonotone adaptive trust region method for solving unconstrained optimization problems. Comput. Optim. Appl. 2015, 61, 321–341.
  23. Fletcher, R.; Leyffer, S. Nonlinear programming without a penalty function. Math. Program. 2002, 91, 239–269.
  24. Gould, N.I.M.; Sainvitu, C.; Toint, P.L. A filter-trust-region method for unconstrained optimization. SIAM J. Optim. 2005, 16, 341–357.
  25. Wächter, A.; Biegler, L.T. Line search filter methods for nonlinear programming: Motivation and global convergence. SIAM J. Optim. 2005, 16, 1–31.
  26. Miao, W.H.; Sun, W. A filter trust-region method for unconstrained optimization. Numer. Math. J. Chin. Univ. 2007, 19, 88–96.
  27. Zhang, Y.; Sun, W.; Qi, L. A nonmonotone filter Barzilai–Borwein method for optimization. Asia-Pac. J. Oper. Res. 2010, 27, 55–69.
  28. Conn, A.R.; Gould, N.I.M.; Toint, P.L. Trust-Region Methods; MPS-SIAM Series on Optimization; SIAM: Philadelphia, PA, USA, 2000.
  29. Gu, N.Z.; Mo, J.T. Incorporating nonmonotone strategies into the trust region method for unconstrained optimization. Comput. Math. Appl. 2008, 55, 2158–2172.
  30. Pang, S.M.; Chen, L.P. A new family of nonmonotone trust region algorithms. Math. Pract. Theory 2011, 10, 211–218.
  31. Andrei, N. An unconstrained optimization test functions collection. Adv. Model. Optim. 2008, 10, 147–161.
  32. Toint, P.L. Global convergence of the partitioned BFGS algorithm for convex partially separable optimization. Math. Program. 1986, 36, 290–306.
  33. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
Figure 1. Performance profile for the number of function evaluations ($n_f$).
Figure 2. Performance profile for the number of gradient evaluations ($n_i$).
Figure 3. Performance profile for running time (CPU).
Table 1. Numerical comparisons on a subset of test problems. ASNTR: the standard nonmonotone trust region algorithm of Pang et al.; ANATR: the nonmonotone adaptive trust region algorithm of Ahookhosh et al.; AFTR: the multidimensional filter trust region algorithm of Wang et al. Each algorithm column reports $n_f$/$n_i$ followed by CPU time in seconds.
Problem | n | ASNTR $n_f$/$n_i$ | CPU | ANATR $n_f$/$n_i$ | CPU | AFTR $n_f$/$n_i$ | CPU | Algorithm 1 $n_f$/$n_i$ | CPU
Extended Rosenbrock | 500 | 2649/1326 | 1867.254 | 1071/840 | 1545.386 | 547/387 | 642.091 | 86/47 | 70.369
Extended White and Holst function | 500 | 13/7 | 26.788 | 5/3 | 6.524 | 5/3 | 2.125 | 3/2 | 0.218
Extended Beale | 500 | 29/15 | 4.386 | 43/22 | 15.351 | 40/36 | 8.532 | 22/17 | 2.953
Penalty i | 500 | 13/8 | 32.186 | 5/3 | 6.593 | 7/4 | 2.176 | 3/2 | 0.171
Pert.Quad | 36 | 153/80 | 0.5523 | 128/67 | 0.4704 | 101/73 | 0.8631 | 86/45 | 0.167
Raydan 1 | 100 | 26/14 | 0.862 | 130/98 | 2.263 | 208/105 | 3.5009 | 82/42 | 0.923
Raydan 2 | 500 | 13/8 | 0.9660 | 13/8 | 0.9966 | 11/6 | 0.9549 | 9/5 | 0.780
Diagonal 1 | 500 | 82/42 | 40.591 | 1459/812 | 1957.794 | 59/43 | 21.091 | 21/11 | 9.107
Diagonal 2 | 500 | 4765/3529 | 1532.176 | 251/198 | 106.641 | 390/201 | 43.252 | 2116/1062 | 430.600
Diagonal 3 | 500 | 1634/933 | 1822.091 | 1389/766 | 1536.226 | 349/288 | 327.056 | 201/101 | 88.049
Hager | 500 | 42/23 | 30.258 | 1418/760 | 270.837 | 87/46 | 45.342 | 51/26 | 14.278
Generalized Tridiagonal 1 | 500 | 63/32 | 5.6490 | 53/28 | 8.349 | 46/24 | 13.419 | 70/36 | 11.163
Extended Tridiagonal 1 | 500 | 25/13 | 0.9857 | 25/13 | 3.448 | 14/10 | 3.2337 | 8/7 | 0.823
Extended TET | 500 | 15/8 | 4.2638 | 15/9 | 1.632 | 17/9 | 2.5044 | 17/9 | 1.452
Diagonal 4 | 500 | 7/4 | 0.3293 | 7/4 | 0.857 | 9/8 | 4.0362 | 5/4 | 0.419
Diagonal 5 | 500 | 106/54 | 43.3048 | 134/112 | 57.032 | 127/106 | 41.096 | 155/79 | 19.024
Diagonal 7 | 1000 | 96/78 | 29.197 | 88/73 | 22.309 | 34/15 | 10.265 | 19/15 | 2.561
Diagonal 8 | 1000 | 159/122 | 18.542 | 133/126 | 43.067 | 76/36 | 6.781 | 27/21 | 1.550
Extended Him | 1000 | 35/18 | 7.150 | 30/16 | 17.975 | 108/87 | 514.843 | 28/18 | 22.572
Full Hessian FH3 | 1000 | 11/6 | 1.755 | 11/6 | 5.555 | 17/13 | 5.1472 | 11/6 | 3.912
Extended BD1 | 1000 | 43/25 | 61.358 | 30/16 | 17.9073 | 35/19 | 23.4119 | 30/19 | 26.971
Quadratic QF1 | 1000 | 287/195 | 157.332 | 293/219 | 0.259 | 400/274 | 87.043 | 197/99 | 43.280
FLETCHCR34 | 1000 | 847/505 | 67.511 | 345/225 | 100.676 | 24/16 | 73.265 | 8/5 | 33.145
ARWHEAD | 1000 | 47/24 | 38.4334 | 29/16 | 24.338 | 64/41 | 38.552 | 24/17 | 18.299
NONDIA | 1000 | 197/104 | 96.176 | 92/47 | 56.432 | 33/23 | 34.726 | 51/35 | 22.318
DQDRTIC | 1000 | 23/12 | 52.102 | 36/19 | 40.949 | 46/37 | 86.265 | 22/15 | 16.526
EG2 | 1000 | 55/30 | 79.991 | 28/16 | 16.042 | 19/19 | 14.169 | 51/26 | 32.424
Broyden Tridiagonal | 1000 | 1978/1488 | 1545.221 | 1553/1288 | 1266.076 | 1226/987 | 782.560 | 754/646 | 456.105
Almost Perturbed Quadratic | 1600 | 2548/2267 | 1960.433 | 2118/1829 | 1543.253 | 1078/718 | 1067.206 | 657/425 | 279.316
Perturbed Tridiagonal Quadratic | 3000 | 1342/1025 | 1672.434 | 1132/876 | 1033.255 | 745/552 | 835.265 | 453/357 | 572.371
DIXMAANA | 3000 | 576/463 | 132.240 | 223/198 | 88.211 | 378/320 | 108.452 | 209/165 | 78.542
DIXMAANB | 3000 | 248/201 | 64.215 | 165/122 | 40.233 | 67/56 | 25.109 | 48/32 | 37.120
DIXMAANC | 3000 | 279/197 | 177.221 | 246/167 | 134.272 | 95/43 | 30.140 | 58/24 | 19.011
Extended DENSCH | 3000 | 673/418 | 476.214 | 533/388 | 309.605 | 254/105 | 199.421 | 87/42 | 219.167
SINCOS | 3000 | 2067/1554 | 1045.301 | 1653/1274 | 836.022 | 337/233 | 472.032 | 275/141 | 165.665
HIMMELH | 3000 | 967/721 | 526.211 | 506/349 | 255.629 | 197/196 | 109.276 | 45/32 | 40.127
BIGGSB1 | 3000 | 3760/2045 | 2321.509 | 2254/1886 | 1308.227 | 1836/1025 | 904.234 | 4051/2381 | 1987.456
ENGVAL1 | 3000 | 1784/1087 | 1643.092 | 587/423 | 960.421 | 63/43 | 243.840 | 58/32 | 167.991
BDEXP | 3000 | 2259/1876 | 978.432 | 1342/978 | 832.013 | 172/137 | 385.439 | 67/43 | 59.276
INDEF | 3000 | 325/209 | 2430.215 | 178/156 | 1023.211 | 34/31 | 721.343 | 19/11 | 479.263
NONSCOMP | 3000 | 264/107 | 1742.856 | 96/47 | 1389.123 | 34/18 | 921.324 | 22/14 | 679.120
QUARTC | 3000 | 167/123 | 643.254 | 332/289 | 921.313 | 22/20 | 425.995 | 67/54 | 356.762
