Article

A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications

1
KMUTTFixed Point Research Laboratory, SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2
Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
3
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
5
Department of Mathematics, Faculty of Science, Gombe State University, Gombe 760214, Nigeria
6
Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, 1518 Pracharat 1 Road, Wongsawang, Bangsue, Bangkok 10800, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 745; https://doi.org/10.3390/math7080745
Submission received: 24 June 2019 / Revised: 1 August 2019 / Accepted: 5 August 2019 / Published: 15 August 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

One of the fastest growing and most efficient methods for solving unconstrained minimization problems is the conjugate gradient (CG) method. Recently, considerable efforts have been made to extend the CG method to the solution of monotone nonlinear equations. In this research article, we present a modification of the Fletcher–Reeves (FR) conjugate gradient projection method for constrained monotone nonlinear equations. The method possesses the sufficient descent property, and its global convergence is proved under some appropriate assumptions. Two sets of numerical experiments are reported to show the good performance of the proposed method compared with some existing ones. The first experiment concerns solving constrained monotone nonlinear equations on some benchmark test problems, while the second applies the method to signal and image recovery problems arising in compressive sensing.

1. Introduction

In this paper, we consider a system of nonlinear monotone equations of the form
$$F(x) = 0, \quad \text{subject to} \quad x \in E, \qquad (1)$$
where $E \subseteq \mathbb{R}^n$ is closed and convex, and $F : \mathbb{R}^n \to \mathbb{R}^m$ ($m \le n$) is continuous and monotone, which means
$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in \mathbb{R}^n.$$
A well-known fact is that, under the above assumptions, the solution set of (1) is convex whenever it is nonempty. It is important to mention that nonlinear monotone equations arise in many practical applications, and this has motivated researchers to develop a large number of iterative methods for solving such systems; see, for example, [1,2,3,4,5,6,7] among others. In addition, convex constrained equations have applications in many scientific fields, such as economic equilibrium problems [8] and chemical equilibrium systems [9]. Several algorithms have been developed to solve (1), among them the trust-region method [10] and the Levenberg–Marquardt method [11]. However, the requirement to compute and store a matrix at every iteration makes these methods ineffective for large-scale nonlinear equations.
Conjugate gradient (CG) methods are efficient for solving large-scale optimization problems and nonlinear systems because of their low memory requirements. This is part of the reason several iterative methods with CG-like directions have been proposed in recent years [12,13]. Initially, CG methods and their modified versions were proposed for unconstrained optimization problems [14,15,16,17,18,19]. Inspired by them, in the last decade many authors have used CG directions to solve nonlinear monotone equations in both the constrained and unconstrained cases. Since in this article we are interested in solving nonlinear monotone equations with convex constraints, we will only discuss existing methods with that property.
Many methods for solving nonlinear monotone equations with convex constraints have been presented in the last decade. For example, Xiao and Zhu [20] presented a CG method which combines the well-known CG-DESCENT method in [17] and the projection method of Solodov and Svaiter [21]. Liu et al. [22] proposed two CG methods with a projection strategy for solving (1). In [23], a modification of the method in [20] was presented by Liu and Li; one of the reasons for the modification was to improve the numerical performance of the method in [20]. Also, Sun and Liu [24] presented derivative-free projection methods for solving nonlinear equations with convex constraints; these methods combine some existing CG methods with the well-known projection method. In addition, a hybrid CG projection method for convex constrained equations was developed in [25]. Ou and Li [26] proposed a combination of a scaled CG method and the projection strategy to solve (1). Furthermore, Ding et al. [27] extended the Dai–Kou (DK) CG method to solve (1), also by combining it with the projection method. Recently, to extend the Dai–Yuan (DY) method, Liu and Feng [28] proposed a modified DY method for solving convex constrained monotone equations; its global convergence was obtained under certain assumptions, and numerical results were reported to show its efficiency.
Inspired by some of the above proposals, we present a simple modification of the Fletcher–Reeves (FR) conjugate gradient method [19] considered in [12] to solve nonlinear monotone equations with convex constraints. The modification ensures that the direction is automatically a descent direction, improves its numerical performance, and still inherits the nice convergence properties of the method. Under suitable assumptions, we establish the global convergence of the proposed algorithm. The numerical experiments presented show the good performance and competitiveness of the method. In addition, the proposed method has some of the advantages of direct methods [29], such as the boundary control method by Belishev and Kurylev [30], the globally convergent method proposed by Beilina and Klibanov [31], and methods based on the multidimensional analogs of the Gelfand–Levitan–Krein equations [32,33]. The proposed method can be seen as a local method that looks for the closest root. However, there are several global nonlinear solvers that guarantee finding all roots inside a domain within very fine double-float accuracy; in some cases, a subdivision-based polynomial solver is combined with a decomposition algorithm in order to handle large and complex systems (see, for example, [34,35,36] and references therein).
The remaining part of this article is organized as follows. In Section 2, we mention some preliminaries and present the proposed method. The global convergence of the method is established in Section 3. Finally, Section 4 reports some numerical results to show the performance of the method in solving monotone nonlinear equations with convex constraints, and also applies it to recover a noisy signal and a blurred image.

2. Algorithm

In this section, we define the projection map together with its well-known properties, state some useful assumptions, and finally present the proposed algorithm. Throughout this article, $\|\cdot\|$ denotes the Euclidean norm.
Definition 1.
Let $E \subseteq \mathbb{R}^n$ be a nonempty, closed and convex set. Then for any $x \in \mathbb{R}^n$, its projection onto E is defined as
$$P_E(x) = \arg\min\{\|x - y\| : y \in E\}.$$
The following lemma gives some properties of the projection map.
Lemma 1 
([37]). Suppose $E \subseteq \mathbb{R}^n$ is a nonempty, closed and convex set. Then the following statements are true:
1. $\langle x - P_E(x), P_E(x) - z \rangle \ge 0$, $\forall x \in \mathbb{R}^n$, $z \in E$.
2. $\|P_E(x) - P_E(y)\| \le \|x - y\|$, $\forall x, y \in \mathbb{R}^n$.
3. $\|P_E(x) - z\|^2 \le \|x - z\|^2 - \|x - P_E(x)\|^2$, $\forall x \in \mathbb{R}^n$, $z \in E$.
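For intuition, the projection map and the properties of Lemma 1 can be checked numerically. The following sketch is ours (the paper's experiments use MATLAB, not Python) and assumes, for illustration only, the simple box constraint set $E = [0, 2]^5$, whose projection is componentwise clipping:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box E = {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
lo, hi = np.zeros(5), np.full(5, 2.0)   # E = [0, 2]^5 (illustrative choice)
x = rng.normal(size=5) * 3.0
y = rng.normal(size=5) * 3.0
px, py = project_box(x, lo, hi), project_box(y, lo, hi)
z = np.full(5, 1.0)                     # an arbitrary point of E

# Property 2 (nonexpansiveness): ||P_E(x) - P_E(y)|| <= ||x - y||
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12

# Property 3: ||P_E(x) - z||^2 <= ||x - z||^2 - ||x - P_E(x)||^2 for z in E
assert np.linalg.norm(px - z)**2 <= np.linalg.norm(x - z)**2 - np.linalg.norm(x - px)**2 + 1e-12
```

For general closed convex E the projection rarely has such a closed form, but the same three properties hold.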
Throughout, we suppose the following:
(C1) The solution set of (1), denoted by $E^*$, is nonempty.
(C2) The mapping F is monotone.
(C3) The mapping F is Lipschitz continuous, that is, there exists a positive constant L such that $\|F(x) - F(y)\| \le L\|x - y\|$, $\forall x, y \in \mathbb{R}^n$.
Our algorithm is motivated by the work of Papp and Rapajić [12]. In that paper, they modified the well-known Fletcher–Reeves conjugate gradient method to solve unconstrained nonlinear monotone equations. The modification consists of adding the term $\theta_k F(x_k)$ to the Fletcher–Reeves direction. The parameter $\theta_k$ was then determined in three different ways, yielding three different directions, namely M3TFR1, M3TFR2 and M3TFR3. The direction we are interested in is M3TFR1, defined as:
$$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -F(x_k) + \beta_k^{FR} w_{k-1} + \theta_k F(x_k), & \text{if } k \ge 1, \end{cases} \qquad (2)$$
where
$$\beta_k^{FR} = \frac{\|F(x_k)\|^2}{\|F(x_{k-1})\|^2}, \quad \theta_k = -\frac{F(x_k)^T w_{k-1}}{\|F(x_{k-1})\|^2}, \quad w_{k-1} = z_{k-1} - x_{k-1}, \quad z_{k-1} = x_{k-1} + \alpha_{k-1} d_{k-1}.$$
It follows that
$$F(x_k)^T d_k = -\|F(x_k)\|^2.$$
Using the same modification proposed in [3], we modify the direction (2) as follows:
$$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -F(x_k) + \dfrac{\|F(x_k)\|^2 w_{k-1} - F(x_k)^T w_{k-1}\, F(x_k)}{\max\{\mu \|w_{k-1}\| \|F(x_k)\|,\ \|F(x_{k-1})\|^2\}}, & \text{if } k \ge 1, \end{cases} \qquad (3)$$
where $\mu > 0$ is a positive constant. The difference between the M3TFR1 direction and the direction proposed in this paper is the scaling term appearing in the denominator of Equation (3), i.e., $\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\}$. This modification was shown to give very good numerical performance in [3] and also helps in obtaining the boundedness of the direction easily.
Remark 1.
Note that the parameter μ is chosen to be strictly positive because if μ = 0, then
$$\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\} = \|F(x_{k-1})\|^2,$$
which means that the direction $d_k$ reduces to the M3TFR1 direction given by (2).
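The sufficient descent identity $F(x_k)^T d_k = -\|F(x_k)\|^2$ holds for direction (3) by construction, independently of $w_{k-1}$ and $F(x_{k-1})$. A small Python check of this identity and of the two-sided bound proved in Lemma 2 below (illustrative only, with randomly generated vectors standing in for the iterates) is:

```python
import numpy as np

MU = 0.01  # same value used in the numerical experiments of Section 4

def mfrm_direction(Fk, Fk_prev, w_prev, mu=MU):
    """Direction (3): modified Fletcher-Reeves direction with the max-scaled denominator."""
    denom = max(mu * np.linalg.norm(w_prev) * np.linalg.norm(Fk),
                np.linalg.norm(Fk_prev) ** 2)
    return -Fk + (Fk @ Fk * w_prev - (Fk @ w_prev) * Fk) / denom

rng = np.random.default_rng(1)
Fk, Fk_prev, w = rng.normal(size=6), rng.normal(size=6), rng.normal(size=6)
d = mfrm_direction(Fk, Fk_prev, w)

# Sufficient descent: F(x_k)^T d_k = -||F(x_k)||^2 (the two w-terms cancel exactly)
assert abs(Fk @ d + Fk @ Fk) < 1e-10
# Boundedness (Lemma 2): ||F(x_k)|| <= ||d_k|| <= (1 + 2/mu) ||F(x_k)||
assert np.linalg.norm(Fk) <= np.linalg.norm(d) <= (1 + 2 / MU) * np.linalg.norm(Fk) + 1e-10
```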

3. Convergence Analysis

To prove the global convergence of Algorithm 1, the following results are needed.
Algorithm 1: A modified descent Fletcher–Reeves CG method (MFRM).
Step 0. Select an initial point $x_0 \in \mathbb{R}^n$ and parameters $\gamma > 0$, $\mu > 0$, $\sigma > 0$, $0 < \rho < 1$, $Tol > 0$, and set $k := 0$.
Step 1. If $\|F(x_k)\| \le Tol$, stop; otherwise go to Step 2.
Step 2. Find $d_k$ using (3).
Step 3. Find the step length $\alpha_k = \gamma \rho^{m_k}$, where $m_k$ is the smallest non-negative integer m such that
$$-\langle F(x_k + \alpha_k d_k), d_k \rangle \ge \sigma \alpha_k \|F(x_k + \alpha_k d_k)\| \|d_k\|^2. \qquad (4)$$
Step 4. Set $z_k = x_k + \alpha_k d_k$. If $z_k \in E$ and $\|F(z_k)\| \le Tol$, stop. Else compute
$$x_{k+1} = P_E[x_k - \zeta_k F(z_k)],$$
where
$$\zeta_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}.$$
Step 5. Let $k = k + 1$ and go to Step 1.
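As a concrete illustration, the steps of Algorithm 1 can be sketched in Python (the paper's experiments use MATLAB; the implementation details below, such as the loop structure, are our own). The test run uses Problem 4 of Section 4 ($f_i(x) = e^{x_i} - 1$, $E = \mathbb{R}^n_+$) and the MFRM parameter values reported there:

```python
import numpy as np

def mfrm(F, project, x0, gamma=1.0, rho=0.9, mu=0.01, sigma=1e-4, tol=1e-5, maxit=1000):
    """Sketch of Algorithm 1 (MFRM): F monotone map, project = projection onto E."""
    x = np.asarray(x0, float)
    Fx = F(x)
    d = -Fx                                    # k = 0 branch of direction (3)
    for _ in range(maxit):
        if np.linalg.norm(Fx) <= tol:
            return x
        # Step 3: backtracking line search (4)
        alpha = gamma
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -np.dot(Fz, d) >= sigma * alpha * np.linalg.norm(Fz) * np.dot(d, d):
                break
            alpha *= rho
        # Step 4: stop at z if accurate enough, else hyperplane projection step
        if np.linalg.norm(Fz) <= tol:
            return z
        zeta = np.dot(Fz, x - z) / np.dot(Fz, Fz)
        x_new = project(x - zeta * Fz)
        # Direction (3) for the next iteration, with w_k = z_k - x_k
        Fx_new = F(x_new)
        w = z - x
        denom = max(mu * np.linalg.norm(w) * np.linalg.norm(Fx_new),
                    np.linalg.norm(Fx) ** 2)
        d = -Fx_new + (np.dot(Fx_new, Fx_new) * w - np.dot(Fx_new, w) * Fx_new) / denom
        x, Fx = x_new, Fx_new
    return x

# Problem 4: f_i(x) = e^{x_i} - 1 on E = R^n_+; the solution is x = 0
F = lambda x: np.exp(x) - 1.0
sol = mfrm(F, lambda v: np.maximum(v, 0.0), x0=np.full(8, 0.5))
assert np.linalg.norm(F(sol)) <= 1e-5
```

Note that the membership test $z_k \in E$ of Step 4 is simplified here to the residual check alone; for $E = \mathbb{R}^n_+$ this does not affect the run.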
Lemma 2.
Let $d_k$ be defined by Equation (3); then
$$d_k^T F(x_k) = -\|F(x_k)\|^2 \qquad (7)$$
and
$$\|F(x_k)\| \le \|d_k\| \le \left(1 + \frac{2}{\mu}\right)\|F(x_k)\|.$$
Proof. 
By Equation (3), suppose k = 0; then
$$d_k^T F(x_k) = -F(x_k)^T F(x_k) = -\|F(x_k)\|^2.$$
Now suppose $k \ge 1$; then
$$\begin{aligned} d_k^T F(x_k) &= -F(x_k)^T F(x_k) + \frac{\|F(x_k)\|^2 w_{k-1}^T F(x_k) - (F(x_k)^T w_{k-1})\, F(x_k)^T F(x_k)}{\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\}} \\ &= -\|F(x_k)\|^2 + \frac{\|F(x_k)\|^2 w_{k-1}^T F(x_k) - \|F(x_k)\|^2 w_{k-1}^T F(x_k)}{\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\}} \\ &= -\|F(x_k)\|^2. \end{aligned}$$
Using the Cauchy–Schwarz inequality, we get
$$\|F(x_k)\| \le \|d_k\|. \qquad (8)$$
Furthermore, since $\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\} \ge \mu \|w_{k-1}\| \|F(x_k)\|$, we have
$$\begin{aligned} \|d_k\| &= \left\| -F(x_k) + \frac{\|F(x_k)\|^2 w_{k-1} - (F(x_k)^T w_{k-1})\, F(x_k)}{\max\{\mu \|w_{k-1}\| \|F(x_k)\|, \|F(x_{k-1})\|^2\}} \right\| \\ &\le \|F(x_k)\| + \frac{\|F(x_k)\|^2 \|w_{k-1}\|}{\mu \|w_{k-1}\| \|F(x_k)\|} + \frac{|F(x_k)^T w_{k-1}|\, \|F(x_k)\|}{\mu \|w_{k-1}\| \|F(x_k)\|} \\ &\le \|F(x_k)\| + \frac{\|F(x_k)\|}{\mu} + \frac{\|F(x_k)\|}{\mu} = \left(1 + \frac{2}{\mu}\right)\|F(x_k)\|. \qquad (9) \end{aligned}$$
Combining (8) and (9), we get the desired result. □
Lemma 3.
Suppose that assumptions (C1)–(C3) hold and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then we have
$$\alpha_k \ge \rho \min\left\{1, \frac{\|F(x_k)\|^2}{(L + \sigma)\left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2}\right\}.$$
Proof. 
If $\alpha_k \ne \gamma$, then by the line search rule $\frac{\alpha_k}{\rho}$ does not satisfy Equation (4), that is,
$$-\left\langle F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right), d_k \right\rangle < \sigma \frac{\alpha_k}{\rho} \left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2.$$
This, combined with (7) and the fact that F is Lipschitz continuous, yields
$$\begin{aligned} \|F(x_k)\|^2 = -F(x_k)^T d_k &= \left(F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right) - F(x_k)\right)^T d_k - F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)^T d_k \\ &\le L \frac{\alpha_k}{\rho} \left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2 + \sigma \frac{\alpha_k}{\rho} \left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2 \\ &= \frac{L + \sigma}{\rho} \alpha_k \left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2. \end{aligned}$$
The above inequality implies
$$\alpha_k \ge \rho \min\left\{1, \frac{\|F(x_k)\|^2}{(L + \sigma)\left\|F\!\left(x_k + \frac{\alpha_k}{\rho} d_k\right)\right\| \|d_k\|^2}\right\},$$
which completes the proof. □
Lemma 4.
Suppose that assumptions (C1)–(C3) hold; then the sequences $\{x_k\}$ and $\{z_k\}$ generated by Algorithm 1 are bounded. Moreover, we have
$$\lim_{k \to \infty} \|x_k - z_k\| = 0 \qquad (11)$$
and
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0. \qquad (12)$$
Proof. 
We will start by showing that the sequences $\{x_k\}$ and $\{z_k\}$ are bounded. Let $\bar{x}$ be a solution of (1); then by the monotonicity of F, we get
$$\langle F(z_k), x_k - \bar{x} \rangle \ge \langle F(z_k), x_k - z_k \rangle.$$
Also, by the definition of $z_k$ and the line search (4), we have
$$\langle F(z_k), x_k - z_k \rangle \ge \sigma \alpha_k^2 \|F(z_k)\| \|d_k\|^2 \ge 0. \qquad (14)$$
So we have
$$\begin{aligned} \|x_{k+1} - \bar{x}\|^2 &= \|P_E[x_k - \zeta_k F(z_k)] - \bar{x}\|^2 \le \|x_k - \zeta_k F(z_k) - \bar{x}\|^2 \\ &= \|x_k - \bar{x}\|^2 - 2\zeta_k \langle F(z_k), x_k - \bar{x} \rangle + \zeta_k^2 \|F(z_k)\|^2 \\ &\le \|x_k - \bar{x}\|^2 - 2\zeta_k \langle F(z_k), x_k - z_k \rangle + \zeta_k^2 \|F(z_k)\|^2 \\ &= \|x_k - \bar{x}\|^2 - \frac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2} \\ &\le \|x_k - \bar{x}\|^2. \qquad (15) \end{aligned}$$
Thus the sequence $\{\|x_k - \bar{x}\|\}$ is nonincreasing and convergent, and hence $\{x_k\}$ is bounded. Furthermore, from Equation (15), we have
$$\|x_{k+1} - \bar{x}\|^2 \le \|x_k - \bar{x}\|^2,$$
and we can deduce recursively that
$$\|x_k - \bar{x}\|^2 \le \|x_0 - \bar{x}\|^2, \quad \forall k \ge 0.$$
Then from assumption (C3), we obtain
$$\|F(x_k)\| = \|F(x_k) - F(\bar{x})\| \le L \|x_k - \bar{x}\| \le L \|x_0 - \bar{x}\|.$$
If we let $L \|x_0 - \bar{x}\| = \kappa$, then the sequence $\{\|F(x_k)\|\}$ is bounded, that is,
$$\|F(x_k)\| \le \kappa, \quad \forall k \ge 0. \qquad (17)$$
By the definition of $z_k$, Equation (14), the monotonicity of F and the Cauchy–Schwarz inequality, we get
$$\sigma \|x_k - z_k\| = \frac{\sigma \|\alpha_k d_k\|^2}{\|x_k - z_k\|} \le \frac{\langle F(z_k), x_k - z_k \rangle}{\|x_k - z_k\|} \le \frac{\langle F(x_k), x_k - z_k \rangle}{\|x_k - z_k\|} \le \|F(x_k)\|. \qquad (18)$$
The boundedness of the sequence $\{x_k\}$, together with Equations (17) and (18), implies that the sequence $\{z_k\}$ is bounded.
Now, as $\{z_k\}$ is bounded, for any solution $\bar{x}$ of (1) the sequence $\{\|z_k - \bar{x}\|\}$ is also bounded, that is, there exists a positive constant $\nu > 0$ such that
$$\|z_k - \bar{x}\| \le \nu.$$
This, together with assumption (C3), yields
$$\|F(z_k)\| = \|F(z_k) - F(\bar{x})\| \le L \|z_k - \bar{x}\| \le L\nu.$$
Therefore, using Equation (15), we have
$$\frac{\sigma^2}{(L\nu)^2} \|x_k - z_k\|^4 \le \|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2,$$
which implies
$$\frac{\sigma^2}{(L\nu)^2} \sum_{k=0}^{\infty} \|x_k - z_k\|^4 \le \sum_{k=0}^{\infty} \left( \|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2 \right) \le \|x_0 - \bar{x}\|^2. \qquad (19)$$
Equation (19) implies
$$\lim_{k \to \infty} \|x_k - z_k\| = 0.$$
Moreover, using statement 2 of Lemma 1, the definition of $\zeta_k$ and the Cauchy–Schwarz inequality, we have
$$\|x_{k+1} - x_k\| = \|P_E[x_k - \zeta_k F(z_k)] - x_k\| \le \|x_k - \zeta_k F(z_k) - x_k\| = \zeta_k \|F(z_k)\| \le \|x_k - z_k\|,$$
which yields
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$$
 □
Remark 2.
By Equation (11) and the definition of $z_k$, we have
$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$$
Theorem 1.
Suppose that assumptions (C1)–(C3) hold, and let the sequence $\{x_k\}$ be generated by Algorithm 1; then
$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \qquad (22)$$
Proof. 
Assume that Equation (22) is not true; then there exists a constant $\epsilon > 0$ such that
$$\|F(x_k)\| \ge \epsilon, \quad \forall k \ge 0. \qquad (23)$$
Combining (8) and (23), we have
$$\|d_k\| \ge \|F(x_k)\| \ge \epsilon, \quad \forall k \ge 0.$$
Since $z_k = x_k + \alpha_k d_k$ and $\lim_{k \to \infty} \|x_k - z_k\| = 0$, we get $\lim_{k \to \infty} \alpha_k \|d_k\| = 0$ and hence
$$\lim_{k \to \infty} \alpha_k = 0. \qquad (24)$$
On the other hand, letting $M = \left(1 + \frac{2}{\mu}\right)\kappa$, Lemma 3 and Equation (9) imply $\alpha_k \|d_k\| \ge \frac{\rho \epsilon^2}{(L + \sigma) M L \nu}$, which contradicts (24). Therefore, (22) must hold. □

4. Numerical Experiments

To test the performance of the proposed method, we compare it with the accelerated conjugate gradient descent (ACGD) and projected Dai–Yuan (PDY) methods in [27,28], respectively. In addition, the MFRM method is applied to solve signal and image recovery problems arising in compressive sensing. All codes were written in MATLAB R2018b and run on a PC with an Intel Core i5 processor, 4 GB of RAM and a 2.3 GHz CPU. All runs were stopped whenever $\|F(x_k)\| \le 10^{-5}$. The parameters chosen for each method are as follows:
  • MFRM method: $\gamma = 1$, $\rho = 0.9$, $\mu = 0.01$, $\sigma = 0.0001$.
  • ACGD method: all parameters are chosen as in [27].
  • PDY method: all parameters are chosen as in [28].
We tested eight problems with dimensions $n = 1000, 5000, 10{,}000, 50{,}000, 100{,}000$ and six initial points: $x_1 = (0.1, 0.1, \ldots, 0.1)^T$, $x_2 = (0.2, 0.2, \ldots, 0.2)^T$, $x_3 = (0.5, 0.5, \ldots, 0.5)^T$, $x_4 = (1.2, 1.2, \ldots, 1.2)^T$, $x_5 = (1.5, 1.5, \ldots, 1.5)^T$, $x_6 = (2, 2, \ldots, 2)^T$. In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, the number of iterations (Iter), the number of function evaluations (Fval), the CPU time in seconds (time) and the norm at the approximate solution (NORM) are reported. The symbol ‘−’ is used when the number of iterations exceeds 1000 and/or the number of function evaluations exceeds 2000.
The test problems are listed below, where the function F is taken as F ( x ) = ( f 1 ( x ) , f 2 ( x ) , , f n ( x ) ) T .
Problem 1 [38] Exponential Function:
$$f_1(x) = e^{x_1} - 1, \quad f_i(x) = e^{x_i} + x_{i-1} - 1, \ \text{for } i = 2, 3, \ldots, n, \quad \text{and } E = \mathbb{R}^n_+.$$
Problem 2 [38] Modified Logarithmic Function:
$$f_i(x) = \ln(x_i + 1) - \frac{x_i}{n}, \ \text{for } i = 1, 2, \ldots, n, \quad \text{and } E = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i > -1, \ i = 1, 2, \ldots, n \right\}.$$
Problem 3 [6] Nonsmooth Function:
$$f_i(x) = 2x_i - \sin|x_i|, \ i = 1, 2, \ldots, n, \quad \text{and } E = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i \ge 0, \ i = 1, 2, \ldots, n \right\}.$$
It is clear that Problem 3 is nonsmooth at x = 0.
Problem 4 [38] Strictly Convex Function I:
$$f_i(x) = e^{x_i} - 1, \ \text{for } i = 1, 2, \ldots, n, \quad \text{and } E = \mathbb{R}^n_+.$$
Problem 5 [38] Strictly Convex Function II:
$$f_i(x) = \frac{i}{n} e^{x_i} - 1, \ \text{for } i = 1, 2, \ldots, n, \quad \text{and } E = \mathbb{R}^n_+.$$
Problem 6 [39] Tridiagonal Exponential Function:
$$f_1(x) = x_1 - e^{\cos(h(x_1 + x_2))}, \quad f_i(x) = x_i - e^{\cos(h(x_{i-1} + x_i + x_{i+1}))}, \ \text{for } i = 2, \ldots, n-1,$$
$$f_n(x) = x_n - e^{\cos(h(x_{n-1} + x_n))}, \quad h = \frac{1}{n+1}, \quad \text{and } E = \mathbb{R}^n_+.$$
Problem 7 [40] Nonsmooth Function:
$$f_i(x) = x_i - \sin|x_i - 1|, \ i = 1, 2, \ldots, n, \quad \text{and } E = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i \ge -1, \ i = 1, 2, \ldots, n \right\}.$$
Problem 8 [27] Penalty 1:
$$t_i = \sum_{i=1}^n x_i^2, \quad c = 10^{-5}, \quad f_i(x) = 2c(x_i - 1) + 4(t_i - 0.25)x_i, \ i = 1, 2, \ldots, n, \quad \text{and } E = \mathbb{R}^n_+.$$
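A few of these benchmark residual maps, written as vectorized Python functions (the paper's code is MATLAB; these sketches are ours and unverified against it), are:

```python
import numpy as np

def exponential_fn(x):            # Problem 1
    """f_1 = e^{x_1} - 1; f_i = e^{x_i} + x_{i-1} - 1 for i >= 2."""
    f = np.exp(x) - 1.0
    f[1:] += x[:-1]
    return f

def strictly_convex_II(x):        # Problem 5
    """f_i = (i/n) e^{x_i} - 1."""
    n = x.size
    return (np.arange(1, n + 1) / n) * np.exp(x) - 1.0

def tridiag_exponential(x):       # Problem 6
    """f_i = x_i - e^{cos(h (x_{i-1} + x_i + x_{i+1}))}, missing neighbors dropped."""
    n = x.size
    h = 1.0 / (n + 1)
    s = np.convolve(x, np.ones(3), mode="same")   # neighbor sums, zero-padded at the ends
    return x - np.exp(np.cos(h * s))

# x = 0 solves Problem 1: f_1 = 0 and f_i = 1 + 0 - 1 = 0
assert np.allclose(exponential_fn(np.zeros(5)), 0.0)
```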
To show in detail the efficiency and robustness of the methods, we employ the performance profile developed in [41], which is a helpful tool for standardizing the comparison of methods. Suppose that we have $n_s$ solvers and $n_l$ problems, and we are interested in using the number of iterations, the CPU time or the number of function evaluations as our performance measure; we let $k_{l,s}$ be the number of iterations, the CPU time or the number of function evaluations required to solve problem l by solver s. To compare the performance on problem l by solver s with the best performance by any other solver on this problem, we use the performance ratio $r_{l,s}$ defined as
$$r_{l,s} = \frac{k_{l,s}}{\min\{k_{l,s} : s \in S\}},$$
where S is the set of solvers.
The overall performance of a solver is obtained using the (cumulative) distribution function for the performance ratio. So if we let
$$P(t) = \frac{1}{n_l} \, \text{size}\{l \in L : r_{l,s} \le t\},$$
then $P(t)$ is the probability for solver $s \in S$ that the performance ratio $r_{l,s}$ is within a factor $t \in \mathbb{R}$ of the best possible ratio. If the set of problems L is large enough, the solvers with the largest probability $P(t)$ are considered the best.
Figure 1 reveals that MFRM performed best in terms of the number of iterations, as it solves and wins over 70 percent of the problems with the fewest iterations, while ACGD and PDY solve and win over 40 and almost 10 percent, respectively. The story is a little different in Figure 2, as the ACGD method was very competitive. However, the MFRM method performed slightly better, solving and winning over 50 percent of the problems with the least CPU time, against the ACGD method, which solves and wins less than 50 percent of the problems considered. The PDY method had the weakest performance, with just 10 percent success. The interpretation of Figure 3 is similar to that of Figure 1. Finally, in Table 11 we report numerical results for MFRM, ACGD and PDY on Problem 2 with the given initial points and dimensions to double-float ($10^{-16}$) accuracy.

4.1. Experiments on Solving Sparse Signal Problems

Many problems in signal processing and statistical inference involve finding sparse solutions to ill-conditioned linear systems of equations. A popular approach is to minimize an objective function which contains a quadratic ($\ell_2$) error term and a sparse $\ell_1$-regularization term, i.e.,
$$\min_x \frac{1}{2} \|y - Bx\|_2^2 + \eta \|x\|_1, \qquad (25)$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$ is an observation, $B \in \mathbb{R}^{k \times n}$ ($k \ll n$) is a linear operator, $\eta$ is a non-negative parameter, $\|x\|_2$ denotes the Euclidean norm of x and $\|x\|_1 = \sum_{i=1}^n |x_i|$ is the $\ell_1$-norm of x. It is easy to see that problem (25) is a convex unconstrained minimization problem. Since the original signal is sparse or approximately sparse in some orthogonal basis, problem (25) frequently appears in compressive sensing, and an exact restoration can be produced by solving (25).
Iterative methods for solving (25) have been presented in many papers (see [42,43,44,45]). The most popular among them is the gradient-based method, and the earliest gradient projection method for sparse reconstruction (GPSR) was proposed by Figueiredo et al. [44]. The first step of the GPSR method is to express (25) as a quadratic program using the following process. Consider a point $x \in \mathbb{R}^n$ such that $x = u - v$, where $u, v \ge 0$. Here u and v are chosen in such a way that x is split into its positive and negative parts, i.e., $u_i = (x_i)_+$ and $v_i = (-x_i)_+$ for all $i = 1, 2, \ldots, n$, where $(\cdot)_+ = \max\{0, \cdot\}$. By the definition of the $\ell_1$-norm, we have $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Now (25) can be written as
$$\min_{u, v} \frac{1}{2} \|y - B(u - v)\|_2^2 + \eta e_n^T u + \eta e_n^T v, \quad u \ge 0, \ v \ge 0, \qquad (26)$$
which is a bound-constrained quadratic program. However, from [44], Equation (26) can be written in standard form as
$$\min_z \frac{1}{2} z^T D z + c^T z, \quad \text{such that } z \ge 0, \qquad (27)$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \eta e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad b = B^T y, \quad D = \begin{pmatrix} B^T B & -B^T B \\ -B^T B & B^T B \end{pmatrix}.$$
Clearly, D is a positive semi-definite matrix, which implies that Equation (27) is a convex quadratic problem.
Xiao and Zhu [20] translated (27) into a linear variational inequality problem, which is equivalent to a linear complementarity problem. Moreover, z is a solution of the linear complementarity problem if and only if it is a solution of the following nonlinear equation:
$$F(z) = \min\{z, Dz + c\} = 0, \qquad (28)$$
where F is a vector-valued function and the "min" is interpreted componentwise. Furthermore, F was proved to be continuous and monotone in [46]. Therefore, problem (25) can be translated into problem (1), and thus the MFRM method can be applied to solve it.
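The map $F(z) = \min\{z, Dz + c\}$ can be evaluated without ever forming the $2n \times 2n$ matrix D, by exploiting its block structure; a sketch (ours, with random illustrative data in place of the paper's compressive-sensing setup) is:

```python
import numpy as np

def build_F(B, y, eta):
    """Return F(z) = min(z, Dz + c) from (27)-(28), with z = [u; v] and b = B^T y."""
    n = B.shape[1]
    b = B.T @ y
    c = eta * np.ones(2 * n)
    c[:n] -= b                                # c = eta*e_{2n} + [-b; b]
    c[n:] += b

    def F(z):
        u, v = z[:n], z[n:]
        Bx = B @ (u - v)                      # block structure of D: D z = [B^T B x; -B^T B x]
        BtBx = B.T @ Bx
        Dz = np.concatenate([BtBx, -BtBx])
        return np.minimum(z, Dz + c)          # componentwise minimum

    return F

rng = np.random.default_rng(2)
B, y = rng.normal(size=(8, 20)), rng.normal(size=8)   # toy sizes, k << n in practice
F = build_F(B, y, eta=0.1)
z = np.abs(rng.normal(size=40))
assert F(z).shape == (40,)
```

Since $F(z) = \min\{z, Dz + c\} \le z$ componentwise, a solver such as MFRM can be applied to this F directly.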
In this experiment, we consider a typical compressive sensing scenario, where our goal is to reconstruct a sparse signal of length n from k observations. The quality of recovery is assessed by the mean squared error (MSE) with respect to the original signal $\tilde{x}$,
$$MSE = \frac{1}{n} \|\tilde{x} - x^*\|^2,$$
where $x^*$ is the recovered signal. The signal size is chosen as $n = 2^{11}$, $k = 2^9$, and the original signal contains $2^6$ randomly placed nonzero elements. In addition, the measurement y is corrupted with noise, that is, $y = B\tilde{x} + \varrho$, where B is a randomly generated Gaussian matrix and $\varrho$ is Gaussian noise distributed normally with mean 0 and variance $10^{-4}$.
To demonstrate the performance of the MFRM method on signal recovery problems, we compare it with the conjugate gradient descent (CGD) [20] and projected conjugate gradient (PCG) [23] methods. The parameters in the PCG and CGD methods are chosen as $\gamma = 10$, $\sigma = 10^{-4}$, $\rho = 0.5$. However, we chose $\gamma = 1$, $\sigma = 10^{-4}$, $\rho = 0.9$ and $\mu = 0.01$ for the MFRM method. For fairness of comparison, each code was run from the same initial point with the same continuation technique on the parameter η, and we observed only the convergence behavior of each method to a solution of similar accuracy. The experiment was initialized with $x_0 = B^T y$ and terminated when
$$\frac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < 10^{-5},$$
where $f(x_k) = \frac{1}{2} \|y - Bx_k\|_2^2 + \eta \|x_k\|_1$.
As Figure 4 and Figure 5 show, the MFRM, CGD and PCG methods all recovered the disturbed signal almost exactly. The experiment was repeated for 20 different noise samples (see Table 9). It can be observed that MFRM is more efficient in terms of the number of iterations and CPU time than the CGD and PCG methods in most cases. Furthermore, MFRM achieved the least MSE in nine (9) out of the twenty (20) experiments. To reveal the performance of the methods visually, two figures were plotted to demonstrate their convergence behavior based on the MSE, the objective function values, the number of iterations and the CPU time (see Figure 6 and Figure 7). It can also be observed that MFRM requires less computing time to achieve a similar quality of resolution. This can be seen graphically in Figure 6 and Figure 7, which illustrate that the objective function values obtained by MFRM decrease faster throughout the entire iteration process.

4.2. Experiments on Blurred Image Restoration

In this subsection, we test the performance of MFRM in restoring a blurred image. We use the following well-known gray test images for the experiments: (P1) Cameraman, (P2) Lena, (P3) House and (P4) Peppers. We use four different Gaussian blur kernels with standard deviation υ to compare the robustness of the MFRM method with the CGD method proposed in [20].
To assess the performance of each tested algorithm with respect to metrics that indicate better restoration quality, in Table 10 we report the objective function value (ObjFun) at the approximate solution, the MSE, the signal-to-noise ratio (SNR), which is defined as
$$SNR = 20 \times \log_{10} \left( \frac{\|\bar{x}\|}{\|x - \bar{x}\|} \right),$$
and the structural similarity (SSIM) index, which measures the similarity between the original image and the restored image [47], for each of the 16 experiments. The MATLAB implementation of the SSIM index can be obtained at http://www.cns.nyu.edu/~lcv/ssim/.
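The MSE and SNR metrics used in the tables are straightforward to compute; a small Python sketch (ours, with a hypothetical toy signal; the paper computes these in MATLAB) is:

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error: (1/n) ||x_true - x_rec||^2."""
    return np.mean((x_true - x_rec) ** 2)

def snr_db(x_true, x_rec):
    """SNR = 20 log10(||x_true|| / ||x_rec - x_true||), in decibels."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_rec - x_true))

x_true = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical original signal
x_rec = x_true + 0.01                        # small uniform reconstruction error
assert snr_db(x_true, x_rec) > 40.0          # small error -> high SNR
```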
The original, blurred and restored images produced by each algorithm are given in Figure 8, Figure 9, Figure 10 and Figure 11. The figures demonstrate that both algorithms can restore the blurred images. In contrast to CGD, the quality of the image restored by MFRM is superior in most cases. Table 11 reports numerical results for MFRM, ACGD and PDY for Problem 2.

5. Conclusions

In this paper, a modified conjugate gradient method for solving monotone nonlinear equations with convex constraints, similar to that in [3], was presented. The proposed method is suitable for nonsmooth equations. Under suitable assumptions, the global convergence of the proposed method was established. Numerical results were presented to show the effectiveness of the MFRM method compared with the ACGD and PDY methods on the given constrained monotone equation problems. Finally, the MFRM method was also shown to be effective in decoding sparse signals and restoring blurred images.

Author Contributions

conceptualization, A.B.A.; methodology, A.B.A.; software, H.M.; validation, P.K., A.M.A. and K.S.; formal analysis, P.K. and K.S.; investigation, P.K. and H.M.; resources, P.K. and K.S.; data curation, H.M. and A.M.A.; writing–original draft preparation, A.B.A.; writing–review and editing, H.M.; visualization, A.M.A. and K.S.; supervision, P.K.; project administration, P.K. and K.S.; funding acquisition, P.K. and K.S.

Funding

Petchra Pra Jom Klao Doctoral Scholarship for the Ph.D. program of King Mongkut's University of Technology Thonburi (KMUTT). This project was partially supported by the Thailand Research Fund (TRF) and King Mongkut's University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047). Moreover, Kanokwan Sitthithakerngkiet was supported by the Faculty of Applied Science, King Mongkut's University of Technology North Bangkok (Contract No. 6242104).

Acknowledgments

We thank Associate Professor Jin Kiu Liu for providing us with the access of the CGD-CS MATLAB codes. The authors acknowledge the financial support provided by King Mongkut’s University of Technology Thonburi through the “KMUTT 55th Anniversary Commemorative Fund”. The first author was supported by the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abubakar, A.B.; Kumam, P.; Awwal, A.M. A Descent Dai-Liao Projection Method for Convex Constrained Nonlinear Monotone Equations with Applications. Thai J. Math. 2018, 17, 128–152. [Google Scholar]
  2. Abubakar, A.B.; Kumam, P. A descent Dai-Liao conjugate gradient method for nonlinear equations. Numer. Algorithms 2019, 81, 197–210. [Google Scholar] [CrossRef]
  3. Abubakar, A.B.; Kumam, P. An improved three-term derivative-free method for solving nonlinear equations. Comput. Appl. Math. 2018, 37, 6760–6773. [Google Scholar] [CrossRef]
  4. Mohammad, H.; Abubakar, A.B. A positive spectral gradient-like method for nonlinear monotone equations. Bull. Comput. Appl. Math. 2017, 5, 99–115. [Google Scholar]
  5. Muhammed, A.A.; Kumam, P.; Abubakar, A.B.; Wakili, A.; Pakkaranang, N. A New Hybrid Spectral Gradient Projection Method for Monotone System of Nonlinear Equations with Convex Constraints. Thai J. Math. 2018, 16, 125–147. [Google Scholar]
  6. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240. [Google Scholar] [CrossRef]
  7. Yan, Q.R.; Peng, X.Z.; Li, D.H. A globally convergent derivative-free method for solving large-scale nonlinear monotone equations. J. Comput. Appl. Math. 2010, 234, 649–657. [Google Scholar] [CrossRef] [Green Version]
  8. Dirkse, S.P.; Ferris, M.C. MCPLIB: A collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 1995, 5, 319–345. [Google Scholar]
  9. Meintjes, K.; Morgan, A.P. A methodology for solving chemical equilibrium systems. Appl. Math. Comput. 1987, 22, 333–361. [Google Scholar] [CrossRef]
  10. Bellavia, S.; Macconi, M.; Morini, B. STRSCNE: A Scaled Trust-Region Solver for Constrained Nonlinear Equations. Comput. Optim. Appl. 2004, 28, 31–50. [Google Scholar] [CrossRef]
  11. Kanzow, C.; Yamashita, N.; Fukushima, M. Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints. J. Comput. Appl. Math. 2004, 172, 375–397. [Google Scholar] [CrossRef]
  12. Papp, Z.; Rapajić, S. FR type methods for systems of large-scale nonlinear monotone equations. Appl. Math. Comput. 2015, 269, 816–823. [Google Scholar] [CrossRef]
  13. Zhou, W.; Wang, F. A PRP-based residual method for large-scale monotone nonlinear equations. Appl. Math. Comput. 2015, 261, 1–7. [Google Scholar] [CrossRef]
  14. Dai, Y.H.; Yuan, Y. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 1999, 10, 177–182. [Google Scholar] [CrossRef]
  15. Polak, E.; Ribiere, G. Note sur la convergence de méthodes de directions conjuguées. Revue Française D’informatique et de Recherche Opérationnelle Série Rouge 1969, 3, 35–43. [Google Scholar] [CrossRef]
  16. Polyak, B.T. The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112. [Google Scholar] [CrossRef]
  17. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef]
  18. Dai, Y.H.; Liao, L.Z. New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 2001, 43, 87–101. [Google Scholar] [CrossRef]
  19. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154. [Google Scholar] [CrossRef] [Green Version]
  20. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
  21. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Boston, MA, USA, 1998; pp. 355–369. [Google Scholar]
  22. Liu, S.Y.; Huang, Y.Y.; Jiao, H.W. Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations. In Abstract and Applied Analysis; Hindawi: New York, NY, USA, 2014; Volume 2014. [Google Scholar]
  23. Liu, J.; Li, S. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  24. Sun, M.; Liu, J. Three derivative-free projection methods for nonlinear equations with convex constraints. J. Appl. Math. Comput. 2015, 47, 265–276. [Google Scholar] [CrossRef]
  25. Sun, M.; Liu, J. New hybrid conjugate gradient projection method for the convex constrained equations. Calcolo 2016, 53, 399–411. [Google Scholar] [CrossRef]
  26. Ou, Y.; Li, J. A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints. J. Appl. Math. Comput. 2018, 56, 195–216. [Google Scholar] [CrossRef]
  27. Ding, Y.; Xiao, Y.; Li, J. A class of conjugate gradient methods for convex constrained monotone equations. Optimization 2017, 66, 2309–2328. [Google Scholar] [CrossRef]
  28. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2018, 1–18. [Google Scholar] [CrossRef]
  29. Kabanikhin, S.I. Definitions and examples of inverse and ill-posed problems. J. Inverse Ill-Posed Probl. 2008, 16, 317–357. [Google Scholar] [CrossRef]
  30. Belishev, M.I.; Kurylev, Y.V. Boundary control, wave field continuation and inverse problems for the wave equation. Comput. Math. Appl. 1991, 22, 27–52. [Google Scholar] [CrossRef] [Green Version]
  31. Beilina, L.; Klibanov, M.V. A Globally Convergent Numerical Method for a Coefficient Inverse Problem. SIAM J. Sci. Comput. 2008, 31, 478–509. [Google Scholar] [CrossRef]
  32. Kabanikhin, S.; Shishlenin, M. Boundary control and Gel’fand–Levitan–Krein methods in inverse acoustic problem. J. Inverse Ill-Posed Probl. 2004, 12, 125–144. [Google Scholar] [CrossRef]
  33. Lukyanenko, D.; Grigorev, V.; Volkov, V.; Shishlenin, M. Solving of the coefficient inverse problem for a nonlinear singularly perturbed two dimensional reaction diffusion equation with the location of moving front data. Comput. Math. Appl. 2019, 77, 1245–1254. [Google Scholar] [CrossRef]
34. Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput.-Aided Des. 2017, 90, 37–47. [Google Scholar]
35. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Des. 2012, 29, 265–279. [Google Scholar]
36. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput.-Aided Des. 2011, 43, 1870–1878. [Google Scholar]
  37. Wang, X.Y.; Li, S.J.; Kou, X.P. A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints. Calcolo 2016, 53, 133–145. [Google Scholar] [CrossRef]
  38. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
39. Bing, Y.; Lin, G. An Efficient Implementation of Merrill's Method for Sparse or Partially Separable Systems of Nonlinear Equations. SIAM J. Optim. 1991, 1, 206–221. [Google Scholar] [CrossRef]
  40. Yu, Z.; Lin, J.; Sun, J.; Xiao, Y.H.; Liu, L.Y.; Li, Z.H. Spectral gradient projection method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 2009, 59, 2416–2423. [Google Scholar] [CrossRef]
  41. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
42. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. CAAM Technical Report TR07-07; Rice University: Houston, TX, USA, 2007. [Google Scholar]
  43. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  44. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  45. Birgin, E.G.; Martínez, J.M.; Raydan, M. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 2000, 10, 1196–1211. [Google Scholar] [CrossRef]
46. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
  47. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Performance profiles for the number of iterations.
Figure 2. Performance profiles for the CPU time (in seconds).
Figure 3. Performance profiles for the number of function evaluations.
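The profiles in Figures 1–3 follow the benchmarking methodology of Dolan and Moré [41]. A minimal sketch of how such a profile can be computed (the array layout, the `performance_profile` helper name, and the use of `np.inf` for failed runs are illustrative assumptions, not the authors' script):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile (illustrative sketch).

    T    : (n_problems, n_solvers) array of costs, e.g. iterations or
           CPU time; use np.inf where a solver failed on a problem.
    taus : sequence of performance-ratio thresholds.
    Returns rho with shape (len(taus), n_solvers): the fraction of
    problems each solver finishes within a factor tau of the best solver.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])
```

Plotting each column of `rho` against `tau` (one curve per solver) produces figures of the kind shown above; the higher a curve lies, the more robust the corresponding method.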
Figure 4. From (top) to (bottom): the original signal, the measurement, and the signals recovered by the projected conjugate gradient (PCG) and modified descent Fletcher–Reeves CG (MFRM) methods.
Figure 5. From (top) to (bottom): the original signal, the measurement, and the signals recovered by the conjugate gradient descent (CGD) and MFRM methods.
Figure 6. Comparison results for PCG and MFRM. The x-axis represents the number of iterations ((top left) and (bottom left)) and the CPU time in seconds ((top right) and (bottom right)); the y-axis represents the MSE ((top left) and (top right)) and the objective function values ((bottom left) and (bottom right)).
Figure 7. Comparison results for PCG and MFRM. The x-axis represents the number of iterations ((top left) and (bottom left)) and the CPU time in seconds ((top right) and (bottom right)); the y-axis represents the MSE ((top left) and (top right)) and the objective function values ((bottom left) and (bottom right)).
Figure 8. The original image (top left), the blurred image (top right), the restored image by CGD (bottom left) with time = 3.70, signal-to-noise ratio (SNR) = 20.05, and structural similarity (SSIM) = 0.83, and by MFRM (bottom right) with time = 1.97, SNR = 21.28, and SSIM = 0.86.
Figure 9. The original image (top left), the blurred image (top right), the restored image by CGD (bottom left) with time = 1.95, SNR = 25.65, and SSIM = 0.86, and by MFRM (bottom right) with time = 3.59, SNR = 27.59, and SSIM = 0.88.
Figure 10. The original image (top left), the blurred image (top right), the restored image by CGD (bottom left) with time = 5.38, SNR = 25.97, and SSIM = 0.88, and by MFRM (bottom right) with time = 38.77, SNR = 26.26, and SSIM = 0.90.
Figure 11. The original image (top left), the blurred image (top right), the restored image by CGD (bottom left) with time = 2.48, SNR = 21.50, and SSIM = 0.84, and by MFRM (bottom right) with time = 4.93, SNR = 22.90, and SSIM = 0.87.
Table 1. Numerical results for the modified Fletcher–Reeves (MFRM), accelerated conjugate gradient descent (ACGD), and projected Dai–Yuan (PDY) methods for problem 1 with the given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 23980.426399.01 × 10 6 8340.215569.26 × 10 6 12490.193499.18 × 10 6
x 2 7350.0198858.82 × 10 6 9390.0865823.01 × 10 6 13530.073186.35 × 10 6
x 3 8400.0112389.74 × 10 6 9380.0343594.02 × 10 6 14570.014055.59 × 10 6
x 4 15700.0666596.01 × 10 6 16670.0171889.22 × 10 6 15610.014214.07 × 10 6
x 5 5310.16103018750.116464.46 × 10 6 14570.086909.91 × 10 6
x 6 311340.032327.65 × 10 6 251040.0429676.74 × 10 6 401620.040609.70 × 10 6
5000 x 1 8380.0538655.63 × 10 6 9380.0237293.89 × 10 6 13530.027756.87 × 10 6
x 2 8400.0366532.59 × 10 6 9380.0219516.65 × 10 6 14570.029744.62 × 10 6
x 3 8400.0300896.41 × 10 6 9390.0193178.01 × 10 6 15610.043534.18 × 10 6
x 4 16740.0817414.71 × 10 6 17710.052358.12 × 10 6 15610.032889.08 × 10 6
x 5 5310.030748018750.0388948.14 × 10 6 15610.035567.30 × 10 6
x 6 311340.0875318.1 × 10 6 261080.0534737.96 × 10 6 391580.104199.86 × 10 6
10,000 x 1 5260.038293.7 × 10 6 9390.0449615.5 × 10 6 13530.055449.70 × 10 6
x 2 8400.0550993.64 × 10 6 9390.03589.39 × 10 6 14570.062016.53 × 10 6
x 3 8400.0499745.44 × 10 6 10430.041762.12 × 10 6 15610.087045.90 × 10 6
x 4 16740.1256.61 × 10 6 18750.0663164.58 × 10 6 16650.077974.28 × 10 6
x 5 5310.048751018750.118077.86 × 10 6 391580.207517.97 × 10 6
x 6 281220.136497.18 × 10 6 271120.105936.22 × 10 6 873510.366789.93 × 10 6
50,000 x 1 5260.15843.58 × 10 6 10430.159182.33 × 10 6 14570.231297.12 × 10 6
x 2 8400.180448.1 × 10 6 10430.162523.97 × 10 6 15610.239754.91 × 10 6
x 3 8400.1864.54 × 10 6 10430.157074.67 × 10 6 16650.247354.37 × 10 6
x 4 17780.315675.47 × 10 6 19790.274744.1 × 10 6 381540.552777.54 × 10 6
x 5 5310.18586018750.271185.06 × 10 6 1777122.299509.44 × 10 6
x 6 20900.392376.44 × 10 6 281160.351977.69 × 10 6 36114494.637809.74 × 10 6
100,000 x 1 5260.261164.59 × 10 6 10420.280383.29 × 10 6 15610.500903.39 × 10 6
x 2 9430.352881.59 × 10 6 10420.289995.62 × 10 6 15610.458766.94 × 10 6
x 3 8400.358094.96 × 10 6 10420.292556.59 × 10 6 16650.513806.18 × 10 6
x 4 17780.593477.73 × 10 6 19790.512615.79 × 10 6 1757044.489209.47 × 10 6
x 5 321380.984637.09 × 10 6 18750.460864.05 × 10 6 1767084.494109.91 × 10 6
x 6 17780.577019.31 × 10 6 291200.716786.05 × 10 6 36014459.101709.99 × 10 6
Table 2. Numerical results for MFRM, ACGD and PDY for problem 2 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 380.0070925.17 × 10 7 380.0360615.17 × 10 7 10390.010536.96 × 10 6
x 2 380.0124016.04 × 10 6 380.0061436.04 × 10 6 11430.009379.23 × 10 6
x 3 4110.0039934.37 × 10 7 4110.0064764.37 × 10 7 13510.011116.26 × 10 6
x 4 5140.0103631.52 × 10 7 5140.0059681.52 × 10 7 14550.021549.46 × 10 6
x 5 5140.0072341.1 × 10 6 5140.023491.1 × 10 6 15590.018504.60 × 10 6
x 6 6170.0064961.74 × 10 8 6170.006771.74 × 10 8 15590.019387.71 × 10 6
5000 x 1 380.0115611.75 × 10 7 380.0097941.75 × 10 7 11430.035284.86 × 10 6
x 2 380.0104523.13 × 10 6 380.0095913.13 × 10 6 12470.040326.89 × 10 6
x 3 4110.015161.42 × 10 7 4110.0137671.42 × 10 7 14550.048894.61 × 10 6
x 4 5140.0197333.94 × 10 8 5140.0142743.94 × 10 8 15590.048266.96 × 10 6
x 5 5140.0184624.05 × 10 7 5140.0117284.05 × 10 7 16630.059693.37 × 10 6
x 6 6170.0285362.36 × 10 9 6170.0163452.36 × 10 9 16630.062535.64 × 10 6
10,000 x 1 380.0190531.21 × 10 7 380.01351.21 × 10 7 11430.067326.85 × 10 6
x 2 380.017912.79 × 10 6 380.0158072.79 × 10 6 12470.122329.72 × 10 6
x 3 4110.0330429.73 × 10 8 4110.0207529.73 × 10 8 14550.082886.51 × 10 6
x 4 5140.0315762.56 × 10 8 5140.044832.56 × 10 8 15590.084139.82 × 10 6
x 5 5140.0327472.93 × 10 7 5140.0269752.93 × 10 7 16630.095894.75 × 10 6
x 6 6170.0360021.24 × 10 9 6170.0324451.24 × 10 9 16640.114998.55 × 10 6
50,000 x 1 380.07376.32 × 10 8 7260.169252.94 × 10 6 12470.278265.23 × 10 6
x 2 380.069643.37 × 10 6 9340.188012.78 × 10 6 13510.296427.11 × 10 6
x 3 4110.0930274.87 × 10 8 7250.153759.11 × 10 6 15590.356024.82 × 10 6
x 4 5140.112191.11 × 10 8 7240.153829.18 × 10 6 351410.694706.69 × 10 6
x 5 5140.11731.84 × 10 7 9320.181646.71 × 10 6 351410.684889.12 × 10 6
x 6 6170.137944.01 × 10 10 6190.112165.2 × 10 6 351410.709739.91 × 10 6
100,000 x 1 380.130215.4 × 10 8 7260.26094.14 × 10 6 12470.445417.39 × 10 6
x 2 380.132674.27 × 10 6 9340.326663.93 × 10 6 14550.532993.39 × 10 6
x 3 4110.173384.05 × 10 8 8290.31133.33 × 10 6 15600.586038.71 × 10 6
x 4 5140.200368.15 × 10 9 8280.29973.34 × 10 6 722902.706308.31 × 10 6
x 5 5140.252741.8 × 10 7 9320.320989.46 × 10 6 722902.722208.68 × 10 6
x 6 6170.249522.71 × 10 10 6190.219727.01 × 10 6 722902.758508.96 × 10 6
Table 3. Numerical results for MFRM, ACGD and PDY for problem 3 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 6240.0240623.11 × 10 6 6400.029514.44 × 10 6 12480.012554.45 × 10 6
x 2 6240.0053455.94 × 10 6 6400.00776818.75 × 10 6 12480.013119.02 × 10 6
x 3 6240.0061099.94 × 10 6 6440.00670495.09 × 10 6 13520.014868.34 × 10 6
x 4 8330.0061273.1 × 10 6 8440.0071425.04 × 10 6 14560.016988.04 × 10 6
x 5 11460.0104272.71 × 10 6 11400.0104113.12 × 10 6 14560.015519.72 × 10 6
x 6 16680.0106828.38 × 10 6 16770.0147595.98 × 10 6 14560.015349.42 × 10 6
5000 x 1 6240.0204556.96 × 10 6 6400.0203689.93 × 10 6 12480.036609.94 × 10 6
x 2 7280.0215521.33 × 10 6 7440.0296225.09 × 10 6 13520.036166.85 × 10 6
x 3 7280.0230562.22 × 10 6 7480.0300442.96 × 10 6 14560.045946.14 × 10 6
x 4 8330.0229846.92 × 10 6 8480.0227772.93 × 10 6 15600.043426.01 × 10 6
x 5 11460.0314666.06 × 10 6 11400.0192266.97 × 10 6 15600.042967.25 × 10 6
x 6 17720.0493087.67 × 10 6 17810.0360956.05 × 10 6 321290.100818.85 × 10 6
10,000 x 1 6240.030649.85 × 10 6 6440.039973.65 × 10 6 13520.061924.77 × 10 6
x 2 7280.0358061.88 × 10 6 7440.0372217.19 × 10 6 13520.064429.68 × 10 6
x 3 7280.0357953.14 × 10 6 7480.0532264.18 × 10 6 14560.094998.69 × 10 6
x 4 8330.0410179.79 × 10 6 8480.0579844.15 × 10 6 15600.076968.5 × 10 6
x 5 11460.064488.58 × 10 6 11400.0474139.85 × 10 6 331330.186256.45 × 10 6
x 6 18760.096514.44 × 10 6 18810.0852388.56 × 10 6 331330.155487.51 × 10 6
50,000 x 1 7280.143232.2 × 10 6 7440.171758.17 × 10 6 14560.236423.51 × 10 6
x 2 7280.136254.2 × 10 6 7480.184844.18 × 10 6 14560.248137.12 × 10 6
x 3 7280.132467.03 × 10 6 7480.18279.36 × 10 6 15600.270496.53 × 10 6
x 4 9370.182614.16 × 10 6 9480.189939.27 × 10 6 341370.545457.13 × 10 6
x 5 12500.217435.2 × 10 6 12440.170435.73 × 10 6 682741.023309.99 × 10 6
x 6 18760.346459.93 × 10 6 18850.329388.66 × 10 6 692781.038108.05 × 10 6
100,000 x 1 7280.270783.11 × 10 6 7480.361443 × 10 6 14560.454754.96 × 10 6
x 2 7280.269745.94 × 10 6 7480.375155.91 × 10 6 15600.490183.39 × 10 6
x 3 7280.254759.94 × 10 6 7520.390713.44 × 10 6 15600.490169.24 × 10 6
x 4 9370.30895.88 × 10 6 9520.359613.41 × 10 6 1395594.031109.01 × 10 6
x 5 12500.418397.35 × 10 6 12440.331058.1 × 10 6 702822.071008.54 × 10 6
x 6 19800.647735.75 × 10 6 19890.613295.54 × 10 6 1395594.024409.38 × 10 6
Table 4. Numerical results for MFRM, ACGD and PDY for problem 4 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 6240.008551.65 × 10 6 10400.0146623.65 × 10 6 12480.009894.60 × 10 6
x 2 5200.0042342.32 × 10 6 10400.00641155.79 × 10 6 12480.009669.57 × 10 6
x 3 10420.0074266.42 × 10 6 10400.00548183.29 × 10 6 13520.008878.49 × 10 6
x 4 21900.0116035.84 × 10 6 271100.0128548.97 × 10 6 12480.012075.83 × 10 6
x 5 16710.0107358.48 × 10 6 261060.0156035.97 × 10 6 291170.053719.43 × 10 6
x 6 1150.0059320361470.0250399.56 × 10 6 291170.023966.65 × 10 6
5000 x 1 6240.0199953.68 × 10 6 10400.0182838.15 × 10 6 13520.025033.49 × 10 6
x 2 5200.009345.2 × 10 6 11440.0167333.36 × 10 6 13520.026267.24 × 10 6
x 3 11460.021563.89 × 10 6 10400.0170737.37 × 10 6 14560.033496.29 × 10 6
x 4 22940.0433256.81 × 10 6 291180.0474367.09 × 10 6 13520.022584.25 × 10 6
x 5 18790.0966926.15 × 10 6 271100.0584057.95 × 10 6 311250.054717.59 × 10 6
x 6 1150.0121990391590.0594487.33 × 10 6 632540.100648.54 × 10 6
10,000 x 1 6240.0192645.2 × 10 6 11440.0268773 × 10 6 13520.037614.93 × 10 6
x 2 5200.0178917.35 × 10 6 11440.031184.76 × 10 6 14560.041003.37 × 10 6
x 3 11460.0360795.5 × 10 6 11440.0346732.71 × 10 6 14560.039198.90 × 10 6
x 4 22940.0697789.63 × 10 6 301220.0699715.97 × 10 6 321290.096136.02 × 10 6
x 5 18790.0628218.69 × 10 6 281140.0668666.68 × 10 6 321290.091776.44 × 10 6
x 6 1150.0172370401630.0937497.26 × 10 6 642580.207919.39 × 10 6
50,000 x 1 7280.0934731.16 × 10 6 11440.167496.7 × 10 6 14560.171933.63 × 10 6
x 2 6240.0722061.64 × 10 6 12480.113912.77 × 10 6 14560.152377.54 × 10 6
x 3 12500.142853.33 × 10 6 11440.110366.06 × 10 6 15600.165496.66 × 10 6
x 4 241020.303135.86 × 10 6 311260.309037.94 × 10 6 672700.762837.81 × 10 6
x 5 20870.289556.31 × 10 6 291180.302668.89 × 10 6 672700.761578.80 × 10 6
x 6 1150.0613270421710.411587.96 × 10 6 26910802.925109.41 × 10 6
100,000 x 1 7280.150381.65 × 10 6 11440.24349.48 × 10 6 14560.302295.13 × 10 6
x 2 6240.131262.32 × 10 6 12480.26143.91 × 10 6 15600.316483.59 × 10 6
x 3 12500.315854.71 × 10 6 11440.21618.57 × 10 6 321290.728389.99 × 10 6
x 4 241020.580238.29 × 10 6 321300.652896.68 × 10 6 1355432.867809.73 × 10 6
x 5 20870.51228.92 × 10 6 301220.616377.48 × 10 6 27210925.741409.91 × 10 6
x 6 1150.116960431750.827597.88 × 10 6 548219711.441309.87 × 10 6
Table 5. Numerical results for MFRM, ACGD and PDY for problem 5 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 26980.0235553.51 × 10 6 391540.0222859.7 × 10 6 16630.075756.03 × 10 6
x 2 401540.0245395.9 × 10 6 22850.0156715.03 × 10 6 16630.014705.42 × 10 6
x 3 371440.0216597.11 × 10 6 431730.0295697.96 × 10 6 331320.022086.75 × 10 6
x 4 492060.0306969.52 × 10 6 301220.0149426.05 × 10 6 301210.018358.39 × 10 6
x 5 461940.115897.06 × 10 6 291180.0404066.5 × 10 6 321290.027008.47 × 10 6
x 6 431820.0274718.7 × 10 6 401630.03119.83 × 10 6 301210.017126.95 × 10 6
5000 x 1 381470.0733154.96 × 10 6 301170.0608779.56 × 10 6 17670.043945.64 × 10 6
x 2 20770.0562254.98 × 10 6 16600.0279115.91 × 10 6 17670.046355.07 × 10 6
x 3 411570.0821518.92 × 10 6 783150.127749.7 × 10 6 351400.083119.74 × 10 6
x 4 482020.101669.19 × 10 6 311260.0679118.39 × 10 6 331330.080756.02 × 10 6
x 5 1475623.3081588.44 × 10 7 311260.0678567.81 × 10 6 351410.100917.51 × 10 6
x 6 451900.0902767.14 × 10 6 441790.093717.37 × 10 6 321290.080548.55 × 10 6
10,000 x 1 371430.126659.28 × 10 6 773080.286789.85 × 10 6 17670.068168.81 × 10 6
x 2 22840.0772889.78 × 10 6 16600.0716577.52 × 10 6 17670.088337.80 × 10 6
x 3 391490.12976.74 × 10 6 1054240.342129.08 × 10 6 371480.147326.36 × 10 6
x 4 602500.21757.56 × 10 6 321300.119377.17 × 10 6 371490.142938.25 × 10 6
x 5 441860.17277.68 × 10 6 321300.119218.26 × 10 6 361450.147198.23 × 10 6
x 6 461940.17288.62 × 10 6 451830.156349.01 × 10 6 742980.264567.79 × 10 6
50,000 x 1 441700.622021 × 10 5 9053931.752992.56 × 10 7 421690.581137.78 × 10 6
x 2 692800.96626.87 × 10 6 311220.338177.09 × 10 6 421690.584567.13 × 10 6
x 3 11946425.876579.34 × 10 7 26010472.88249.67 × 10 6 411650.587178.87 × 10 6
x 4 502100.715998.38 × 10 6 331340.390399.98 × 10 6 401610.564317.17 × 10 6
x 5 461940.655388.47 × 10 6 351420.408077.19 × 10 6 823301.089208.44 × 10 6
x 6 502100.691178.12 × 10 6 491990.577028.97 × 10 6 803221.066707.82 × 10 6
100,000 x 1 311210.841834.48 × 10 6 8853061.978065.53 × 10 7 431731.096208.47 × 10 6
x 2 13551859.192948.37 × 10 7 1104422.26619.55 × 10 6 431731.100407.77 × 10 6
x 3 461781.13226.99 × 10 6 34513887.19389.76 × 10 6 421691.083309.66 × 10 6
x 4 502101.37378.85 × 10 6 341380.743628.65 × 10 6 853422.118809.22 × 10 6
x 5 471981.38798.31 × 10 6 361460.790128.09 × 10 6 843382.106409.78 × 10 6
x 6 522181.43187.37 × 10 6 512071.16018.42 × 10 6 1676714.062009.90 × 10 6
Table 6. Numerical results for MFRM, ACGD and PDY for problem 6 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 11440.0111568.32 × 10 6 12480.027867.88 × 10 6 15600.016714.35 × 10 6
x 2 11440.0160927.32 × 10 6 12480.010427.58 × 10 6 15600.013464.18 × 10 6
x 3 11440.0104468.83 × 10 6 12480.00926.68 × 10 6 15600.016303.68 × 10 6
x 4 10400.0112337.38 × 10 6 12480.0136174.57 × 10 6 14560.013397.48 × 10 6
x 5 9360.0113258.29 × 10 6 12480.0114923.67 × 10 6 14560.012676.01 × 10 6
x 6 7280.0094528.25 × 10 6 11440.0163518.32 × 10 6 14560.016853.54 × 10 6
5000 x 1 8320.0269241.87 × 10 6 13520.0360254.59 × 10 6 15600.050389.73 × 10 6
x 2 8320.0434881.8 × 10 6 13520.0408974.42 × 10 6 15600.047759.36 × 10 6
x 3 8320.027091.59 × 10 6 13520.0399373.89 × 10 6 15600.049238.25 × 10 6
x 4 8320.0263511.1 × 10 6 13520.0330132.66 × 10 6 15600.057935.64 × 10 6
x 5 7280.0234428.62 × 10 6 12480.0304628.22 × 10 6 15600.045974.53 × 10 6
x 6 7280.0229525.08 × 10 6 12480.0287864.85 × 10 6 14560.050707.93 × 10 6
10,000 x 1 8320.0613742.62 × 10 6 13520.0923726.5 × 10 6 682740.407249.06 × 10 6
x 2 8320.062852.52 × 10 6 13520.0597786.25 × 10 6 682740.418188.72 × 10 6
x 3 8320.0599132.22 × 10 6 13520.0773265.5 × 10 6 341370.219056.22 × 10 6
x 4 8320.0570031.52 × 10 6 13520.0877453.77 × 10 6 15600.100767.98 × 10 6
x 5 8320.0703771.22 × 10 6 13520.0772173.02 × 10 6 15600.126806.40 × 10 6
x 6 7280.0527187.18 × 10 6 12480.0673756.85 × 10 6 15600.119843.78 × 10 6
50,000 x 1 8320.212585.85 × 10 6 14560.329653.78 × 10 6 1435753.091209.42 × 10 6
x 2 8320.212035.63 × 10 6 14560.312973.63 × 10 6 1435753.062009.06 × 10 6
x 3 8320.208854.96 × 10 6 14560.300893.2 × 10 6 1425713.049509.04 × 10 6
x 4 8320.204833.4 × 10 6 13520.268558.42 × 10 6 692781.539209.14 × 10 6
x 5 8320.214672.72 × 10 6 13520.263046.76 × 10 6 682741.494909.43 × 10 6
x 6 8320.209331.61 × 10 6 13520.261433.99 × 10 6 15600.381778.44 × 10 6
100,000 x 1 8320.417018.28 × 10 6 14560.588535.34 × 10 6 292117213.595309.53 × 10 6
x 2 8320.415117.96 × 10 6 14560.588975.14 × 10 6 290116413.309309.75 × 10 6
x 3 8320.440617.01 × 10 6 14560.573184.53 × 10 6 1445796.681509.96 × 10 6
x 4 8320.438054.8 × 10 6 14560.587123.1 × 10 6 1415676.508009.92 × 10 6
x 5 8320.411473.85 × 10 6 13520.563849.56 × 10 6 702823.305108.07 × 10 6
x 6 8320.439252.27 × 10 6 13520.533435.64 × 10 6 341371.645106.37 × 10 6
Table 7. Numerical results for MFRM, ACGD and PDY for problem 7 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 4210.0118343.24 × 10 7 10420.0085282.46 × 10 6 14570.009535.28 × 10 6
x 2 4210.0062281.43 × 10 7 9380.0082893.91 × 10 6 13530.008969.05 × 10 6
x 3 3170.0040965.81 × 10 8 8340.0067027.43 × 10 6 3120.004268.47 × 10 6
x 4 7340.005853.89 × 10 6 11460.0095795.94 × 10 6 15610.011696.73 × 10 6
x 5 7340.0061336.36 × 10 6 11460.0153288.97 × 10 6 311260.036469.03 × 10 6
x 6 8370.0061061.9 × 10 6 12490.014262.87 × 10 6 15600.010823.99 × 10 6
5000 x 1 4210.0158367.25 × 10 7 10420.0239535.49 × 10 6 15610.032154.25 × 10 6
x 2 4210.0145213.2 × 10 7 9380.0210658.74 × 10 6 14570.029427.40 × 10 6
x 3 3170.0145171.3 × 10 7 9380.0254374.01 × 10 6 4160.011071.01 × 10 7
x 4 7340.0283888.71 × 10 6 12500.0286073.21 × 10 6 16650.043315.43 × 10 6
x 5 8380.027871.49 × 10 6 12500.0378064.84 × 10 6 331340.093797.78 × 10 6
x 6 8370.0278984.26 × 10 6 12490.0292266.43 × 10 6 15600.040778.92 × 10 6
10,000 x 1 4210.0285281.02 × 10 6 10420.0455857.77 × 10 6 15610.064846.01 × 10 6
x 2 4210.0337824.52 × 10 7 10420.0417152.98 × 10 6 15610.077343.77 × 10 6
x 3 3170.0292651.84 × 10 7 9380.0364225.67 × 10 6 4160.027071.42 × 10 7
x 4 8380.0433011.29 × 10 6 12500.0635274.53 × 10 6 16650.079417.69 × 10 6
x 5 8380.0437412.1 × 10 6 12500.0496046.85 × 10 6 341380.149426.83 × 10 6
x 6 8370.0536666.02 × 10 6 12490.0501539.09 × 10 6 341380.152248.81 × 10 6
50,000 x 1 4210.108162.29 × 10 6 11460.206244.19 × 10 6 16650.259954.89 × 10 6
x 2 4210.119691.01 × 10 6 10420.163646.67 × 10 6 15610.246748.42 × 10 6
x 3 3170.0686444.11 × 10 7 10420.15393.06 × 10 6 4160.094053.18 × 10 7
x 4 8380.160672.88 × 10 6 13540.207282.45 × 10 6 361460.552076.39 × 10 6
x 5 8380.144844.7 × 10 6 13540.194213.69 × 10 6 351420.546799.05 × 10 6
x 6 9410.1611.41 × 10 6 13530.193864.9 × 10 6 361460.557647.59 × 10 6
100,000 x 1 4210.218253.24 × 10 6 11460.325125.93 × 10 6 17690.525955.68 × 10 6
x 2 4210.164351.43 × 10 6 10420.309499.43 × 10 6 16650.521024.34 × 10 6
x 3 3170.130725.81 × 10 7 10420.310314.32 × 10 6 4160.148644.50 × 10 7
x 4 8380.290124.07 × 10 6 13540.388333.46 × 10 6 361461.053609.04 × 10 6
x 5 8380.328216.65 × 10 6 13540.35225.22 × 10 6 742992.107308.55 × 10 6
x 6 9410.436491.99 × 10 6 13530.35616.94 × 10 6 371501.082406.66 × 10 6
Table 8. Numerical results for MFRM, ACGD and PDY for problem 8 with given initial points and dimensions.
Dimension | Initial Point | MFRM: Iter / Fval / Time / Norm | ACGD: Iter / Fval / Time / Norm | PDY: Iter / Fval / Time / Norm
1000 x 1 8270.15021.52 × 10 6 8260.0498266.09 × 10 6 692790.055388.95 × 10 6
x 2 8270.0422481.52 × 10 6 8260.0175946.09 × 10 6 27010850.187989.72 × 10 6
x 3 261140.038777.85 × 10 6 8260.0108886.09 × 10 6 24520.024396.57 × 10 6
x 4 261140.0175427.85 × 10 6 8260.0078736.09 × 10 6 27580.015207.59 × 10 6
x 5 261140.0676927.85 × 10 6 8260.0607336.09 × 10 6 28610.043309.21 × 10 6
x 6 261140.0451737.85 × 10 6 8260.0068896.09 × 10 6 40850.021168.45 × 10 6
5000 x 1 6280.0239258.77 × 10 6 4130.0110055.76 × 10 6 65826391.130309.98 × 10 6
x 2 15700.0435127.94 × 10 6 4130.0091315.76 × 10 6 27580.051017.59 × 10 6
x 3 15700.0464587.94 × 10 6 4130.0113115.76 × 10 6 491040.080358.11 × 10 6
x 4 15700.0447887.94 × 10 6 4130.0104755.75 × 10 6 40850.079798.45 × 10 6
x 5 15700.0446397.94 × 10 6 4130.0110345.77 × 10 6 18400.091289.14 × 10 6
x 6 15700.0439747.94 × 10 6 4130.007855.76 × 10 6 17380.185288.98 × 10 6
10,000 x 1 11540.065956.15 × 10 6 5200.0242322.19 × 10 6 491040.204437.62 × 10 6
x 2 11540.0681256.15 × 10 6 5200.0235112.19 × 10 6 40850.158018.45 × 10 6
x 3 11540.0654866.15 × 10 6 5200.0230042.19 × 10 6 19420.378807.66 × 10 6
x 4 11540.0645156.15 × 10 6 5200.0304352.19 × 10 6 901871.258029.7 × 10 6
x 5 11540.0562616.15 × 10 6 5200.0219632.19 × 10 6 988198812.682599.93 × 10 6
x 6 11540.0677856.15 × 10 6 5200.0218892.21 × 10 6 27580.328597.59 × 10 6
50,000 x 1 7380.178564.5 × 10 6 5230.0875442.45 × 10 6 19420.522916.42 × 10 6
x 2 7380.178624.5 × 10 6 5230.0932272.45 × 10 6 1483043.930639.92 × 10 6
x 3 7380.177464.5 × 10 6 5230.0874842.45 × 10 6 937188622.970979.87 × 10 6
x 4 7380.173924.5 × 10 6 5230.0863292.4 × 10 6 27580.684677.59 × 10 6
x 5 7380.180354.5 × 10 6 5230.089542.4 × 10 6 3467028.450439.79 × 10 6
x 6 7380.175044.5 × 10 6 5230.0932032.5 × 10 6 40850.992308.45 × 10 6
100,000 x 1 281220.914488.61 × 10 6 4200.147432.71 × 10 6 ----
x 2 281220.936628.61 × 10 6 4200.148232.7 × 10 6 ----
x 3 281220.906048.61 × 10 6 4200.14972.79 × 10 6 ----
x 4 281220.923518.61 × 10 6 4200.148442.37 × 10 6 ----
x 5 281220.918968.61 × 10 6 4200.123461.66 × 10 6 ----
x 6 281220.912948.61 × 10 6 4200.125222.11 × 10 6 ----
Table 9. Results of twenty experiments on the ℓ1-norm regularization problem for the CGD, PCG, and MFRM methods.
S/N | Iter: CGD / PCG / MFRM | Time: CGD / PCG / MFRM | MSE: CGD / PCG / MFRM
1248138982.281.281.336.16 × 10 5 6.32 × 10 5 1.97 × 10 5
22341381173.371.261.194.08 × 10 5 3.36 × 10 5 5.40 × 10 5
32241521041.901.290.972.78 × 10 5 1.78 × 10 5 1.02 × 10 5
42301431173.212.481.174.08 × 10 5 3.36 × 10 5 5.40 × 10 5
51521191141.651.031.151.23 × 10 5 2.07 × 10 5 5.49 × 10 5
62231271101.892.561.833.33 × 10 5 6.08 × 10 5 6.50 × 10 6
71561201251.371.011.204.25 × 10 5 3.26 × 10 5 1.46 × 10 5
821389101.900.781.121.86 × 10 5 3.77 × 10 4 1.31 × 10 5
92271521182.141.531.452.75 × 10 5 1.54 × 10 5 8.11 × 10 6
102011421012.221.641.016.75 × 10 5 1.86 × 10 5 1.17 × 10 5
11200151901.701.420.902.36 × 10 5 1.29 × 10 5 3.81 × 10 5
12202153911.751.340.846.94 × 10 5 2.99 × 10 5 9.21 × 10 5
132081281251.891.121.261.71 × 10 5 1.42 × 10 5 9.20 × 10 6
141611451221.471.281.261.15 × 10 5 8.75 × 10 6 4.36 × 10 6
152271601001.971.421.003.41 × 10 5 2.40 × 10 5 1.54 × 10 5
16269172882.511.670.983.90 × 10 5 6.59 × 10 5 2.08 × 10 4
172101291051.841.191.112.11 × 10 5 1.89 × 10 5 6.22 × 10 5
18225132961.931.151.003.87 × 10 5 7.78 × 10 5 9.49 × 10 5
19152120921.371.090.872.12 × 10 5 1.32 × 10 5 4.03 × 10 5
201511281131.311.151.064.48 × 10 5 1.85 × 10 5 1.71 × 10 5
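For context, the ℓ1-norm regularization problem reported in Table 9 is usually posed as min over x of ½‖Ax − b‖² + λ‖x‖₁. A minimal sketch of evaluating this objective (the names `A`, `b`, and `lam` are illustrative, not taken from the authors' code):

```python
import numpy as np

def l1_objective(A, b, x, lam):
    """Evaluate 0.5*||A x - b||^2 + lam*||x||_1, the l1-regularized
    least-squares objective standard in compressive sensing."""
    r = A @ x - b
    return 0.5 * (r @ r) + lam * np.abs(x).sum()
```

For example, with A the identity and b = (1, 0), the objective at x = 0 is 0.5 (pure data misfit) and at x = b it is exactly the penalty λ‖b‖₁.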
Table 10. Efficiency comparison based on the value of the objective function (ObjFun), the mean squared error (MSE), the SNR, and the SSIM index under different Pi(υ).

| Image | ObjFun (MFRM) | ObjFun (CGD) | MSE (MFRM) | MSE (CGD) | SNR (MFRM) | SNR (CGD) | SSIM (MFRM) | SSIM (CGD) |
|---|---|---|---|---|---|---|---|---|
| P1(1 × 10⁻⁴) | 1.43 × 10⁶ | 1.47 × 10⁶ | 133.90 | 177.57 | 21.28 | 20.05 | 0.86 | 0.83 |
| P1(1 × 10⁻¹) | 1.43 × 10⁶ | 1.48 × 10⁶ | 130.60 | 177.69 | 21.39 | 20.5 | 0.86 | 0.83 |
| P1(0.25) | 1.47 × 10⁶ | 1.48 × 10⁶ | 145.27 | 177.72 | 20.93 | 20.05 | 0.85 | 0.83 |
| P1(6.25) | 1.58 × 10⁶ | 1.65 × 10⁶ | 146.06 | 183.96 | 20.9 | 19.9 | 0.75 | 0.79 |
| P2(1 × 10⁻⁴) | 1.61 × 10⁶ | 1.65 × 10⁶ | 36.88 | 57.55 | 27.59 | 25.65 | 0.88 | 0.86 |
| P2(1 × 10⁻¹) | 1.61 × 10⁶ | 1.65 × 10⁶ | 36.85 | 57.61 | 27.59 | 25.65 | 0.88 | 0.86 |
| P2(0.25) | 1.62 × 10⁶ | 1.66 × 10⁶ | 37.78 | 57.68 | 27.48 | 25.64 | 0.88 | 0.86 |
| P2(6.25) | 1.77 × 10⁶ | 1.82 × 10⁶ | 56.65 | 58.96 | 25.72 | 25.55 | 0.76 | 0.83 |
| P3(1 × 10⁻⁴) | 5.74 × 10⁶ | 5.89 × 10⁶ | 41.63 | 44.48 | 26.26 | 25.97 | 0.9 | 0.88 |
| P3(1 × 10⁻¹) | 5.75 × 10⁶ | 5.90 × 10⁶ | 42.42 | 44.54 | 26.17 | 25.96 | 0.89 | 0.88 |
| P3(0.25) | 5.76 × 10⁶ | 5.91 × 10⁶ | 43.33 | 44.65 | 26.08 | 25.95 | 0.88 | 0.88 |
| P3(6.25) | 6.35 × 10⁶ | 6.60 × 10⁶ | 106.79 | 48.47 | 22.16 | 25.6 | 0.63 | 0.85 |
| P4(1 × 10⁻⁴) | 1.40 × 10⁶ | 1.48 × 10⁶ | 88.81 | 122.44 | 22.9 | 21.5 | 0.87 | 0.84 |
| P4(1 × 10⁻¹) | 1.41 × 10⁶ | 1.48 × 10⁶ | 89.22 | 122.56 | 22.88 | 21.5 | 0.87 | 0.84 |
| P4(0.25) | 1.41 × 10⁶ | 1.49 × 10⁶ | 89.86 | 122.56 | 22.85 | 21.5 | 0.87 | 0.84 |
| P4(6.25) | 1.56 × 10⁶ | 1.69 × 10⁶ | 116.79 | 138.97 | 21.71 | 20.95 | 0.76 | 0.82 |
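The MSE and SNR figures of merit used in these comparisons can be computed with standard formulas. The SNR convention sketched below (10·log₁₀ of signal power over error power, in dB) is one common choice and may differ in detail from the authors' implementation:

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error between the original and restored signals."""
    return float(np.mean((x_true - x_rec) ** 2))

def snr_db(x_true, x_rec):
    """SNR in dB: 10*log10(||x_true||^2 / ||x_true - x_rec||^2)."""
    err = x_true - x_rec
    return float(10.0 * np.log10(np.sum(x_true ** 2) / np.sum(err ** 2)))
```

Higher SNR and lower MSE both indicate a better reconstruction, which is how the MFRM and CGD columns above are to be read.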
Table 11. Numerical results of the modified Fletcher–Reeves method (MFRM), the accelerated conjugate gradient descent (ACGD) method, and the projected Dai–Yuan (PDY) method for Problem 2, with the given initial points and dimensions, at double-float (10⁻¹⁶) accuracy.

| Dimension | Initial point | MFRM Iter | MFRM Fval | MFRM Time | MFRM Norm | ACGD Iter | ACGD Fval | ACGD Time | ACGD Norm | PDY Iter | PDY Fval | PDY Time | PDY Norm |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | x1 | 8 | 27 | 0.14061 | 9.47 × 10⁻¹⁹ | 12 | 53 | 0.030479 | 3.32 × 10⁻¹⁸ | 30 | 119 | 0.04027 | 4.76 × 10⁻¹⁹ |
| | x2 | 8 | 36 | 0.010782 | 1.49 × 10⁻¹⁸ | 7 | 20 | 0.013503 | 1.08 × 10⁻¹⁸ | 36 | 153 | 0.034454 | 3.51 × 10⁻¹⁸ |
| | x3 | 7 | 20 | 0.008263 | 1.21 × 10⁻¹⁸ | 13 | 56 | 0.021302 | 3.26 × 10⁻¹⁸ | 38 | 161 | 0.038168 | 3.51 × 10⁻¹⁸ |
| | x4 | 8 | 23 | 0.015654 | 1.80 × 10⁻¹⁹ | 12 | 51 | 0.02056 | 3.31 × 10⁻¹⁸ | 39 | 165 | 0.057793 | 3.51 × 10⁻¹⁸ |
| | x5 | 11 | 38 | 0.018461 | 1.59 × 10⁻¹⁸ | 14 | 59 | 0.088858 | 3.34 × 10⁻¹⁸ | 41 | 173 | 0.069756 | 3.51 × 10⁻¹⁸ |
| | x6 | 10 | 34 | 0.016788 | 1.07 × 10⁻¹⁸ | 10 | 32 | 0.012069 | 5.83 × 10⁻¹⁹ | 40 | 169 | 0.03311 | 3.50 × 10⁻¹⁸ |
| 5000 | x1 | 9 | 33 | 0.028658 | 7.22 × 10⁻¹⁹ | 12 | 54 | 0.041685 | 1.52 × 10⁻¹⁸ | 35 | 149 | 0.10692 | 1.57 × 10⁻¹⁸ |
| | x2 | 7 | 23 | 0.024046 | 2.18 × 10⁻¹⁹ | 9 | 41 | 0.049194 | 1.55 × 10⁻¹⁸ | 37 | 157 | 0.12219 | 1.57 × 10⁻¹⁸ |
| | x3 | 6 | 17 | 0.03436 | 3.89 × 10⁻¹⁹ | 14 | 61 | 0.094129 | 1.47 × 10⁻¹⁸ | 33 | 131 | 0.10635 | 1.06 × 10⁻¹⁹ |
| | x4 | 8 | 26 | 0.03133 | 7.17 × 10⁻¹⁹ | 14 | 60 | 0.065147 | 1.47 × 10⁻¹⁸ | 39 | 165 | 0.18361 | 1.57 × 10⁻¹⁸ |
| | x5 | 9 | 31 | 0.036727 | 5.84 × 10⁻¹⁹ | 10 | 43 | 0.1165 | 1.47 × 10⁻¹⁸ | 36 | 144 | 0.2178 | 7.43 × 10⁻²⁰ |
| | x6 | 10 | 34 | 0.030168 | 6.41 × 10⁻¹⁹ | 12 | 51 | 0.038218 | 1.51 × 10⁻¹⁸ | 38 | 161 | 0.13144 | 1.57 × 10⁻¹⁸ |
| 10,000 | x1 | 8 | 28 | 0.064617 | 1.89 × 10⁻¹⁹ | 11 | 50 | 0.068567 | 1.03 × 10⁻¹⁸ | 35 | 149 | 0.2253 | 1.11 × 10⁻¹⁸ |
| | x2 | 6 | 19 | 0.044204 | 1.90 × 10⁻¹⁹ | 14 | 62 | 0.15949 | 1.09 × 10⁻¹⁸ | 32 | 128 | 0.34325 | 8.21 × 10⁻²⁰ |
| | x3 | 6 | 17 | 0.045192 | 1.45 × 10⁻¹⁹ | 18 | 78 | 0.10766 | 1.04 × 10⁻¹⁸ | 39 | 165 | 0.23899 | 1.11 × 10⁻¹⁸ |
| | x4 | 10 | 35 | 0.055408 | 4.99 × 10⁻¹⁹ | 12 | 52 | 0.061589 | 1.06 × 10⁻¹⁸ | 39 | 165 | 0.23162 | 1.11 × 10⁻¹⁸ |
| | x5 | 7 | 20 | 0.038439 | 2.06 × 10⁻¹⁹ | 14 | 60 | 0.087394 | 1.05 × 10⁻¹⁸ | 40 | 169 | 0.28998 | 1.11 × 10⁻¹⁸ |
| | x6 | 9 | 29 | 0.065318 | 5.27 × 10⁻¹⁹ | 16 | 68 | 0.09917 | 1.03 × 10⁻¹⁸ | 40 | 170 | 0.22564 | 1.11 × 10⁻¹⁸ |
| 50,000 | x1 | 7 | 26 | 0.21017 | 1.93 × 10⁻¹⁹ | 23 | 100 | 0.51879 | 4.79 × 10⁻¹⁹ | 34 | 145 | 0.92896 | 4.96 × 10⁻¹⁹ |
| | x2 | 6 | 21 | 0.24752 | 2.09 × 10⁻¹⁹ | 25 | 108 | 0.64677 | 4.90 × 10⁻¹⁹ | 36 | 153 | 0.9954 | 4.96 × 10⁻¹⁹ |
| | x3 | 6 | 17 | 0.11243 | 6.27 × 10⁻²⁰ | 23 | 99 | 0.50402 | 4.93 × 10⁻¹⁹ | 38 | 161 | 0.96768 | 4.96 × 10⁻¹⁹ |
| | x4 | 7 | 20 | 0.13442 | 1.02 × 10⁻¹⁹ | 24 | 102 | 0.63664 | 4.75 × 10⁻¹⁹ | 79 | 326 | 1.7542 | 4.96 × 10⁻¹⁹ |
| | x5 | 9 | 30 | 0.20288 | 7.25 × 10⁻²⁰ | 25 | 106 | 0.51116 | 4.78 × 10⁻¹⁹ | 78 | 322 | 1.7246 | 4.96 × 10⁻¹⁹ |
| | x6 | 12 | 52 | 0.36526 | 2.28 × 10⁻¹⁹ | 23 | 97 | 0.56342 | 4.76 × 10⁻¹⁹ | 80 | 330 | 1.6812 | 4.96 × 10⁻¹⁹ |
| 100,000 | x1 | 7 | 27 | 0.36065 | 6.53 × 10⁻²⁰ | 23 | 100 | 0.88236 | 3.26 × 10⁻¹⁹ | 30 | 119 | 1.2102 | 9.26 × 10⁻²¹ |
| | x2 | 5 | 14 | 0.20041 | 3.91 × 10⁻²⁰ | 25 | 108 | 0.90777 | 3.27 × 10⁻¹⁹ | 35 | 149 | 1.5699 | 3.51 × 10⁻¹⁹ |
| | x3 | 7 | 24 | 0.34075 | 1.47 × 10⁻¹⁹ | 25 | 107 | 0.95898 | 3.26 × 10⁻¹⁹ | 40 | 170 | 1.7126 | 3.51 × 10⁻¹⁹ |
| | x4 | 8 | 31 | 0.40444 | 2.09 × 10⁻²⁰ | 24 | 102 | 0.83332 | 3.38 × 10⁻¹⁹ | 151 | 614 | 5.8306 | 3.51 × 10⁻¹⁹ |
| | x5 | 8 | 26 | 0.52598 | 5.03 × 10⁻²⁰ | 25 | 106 | 1.0223 | 3.47 × 10⁻¹⁹ | 151 | 614 | 5.6777 | 3.50 × 10⁻¹⁹ |
| | x6 | 7 | 20 | 0.33434 | 1.45 × 10⁻¹⁹ | 23 | 97 | 0.87438 | 3.33 × 10⁻¹⁹ | 153 | 622 | 5.7906 | 3.51 × 10⁻¹⁹ |

Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M.; Sitthithakerngkiet, K. A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications. Mathematics 2019, 7, 745. https://doi.org/10.3390/math7080745
