A new parallel VM algorithm for solving large-scale optimization problems.

In this paper, a new optimal parallel line-search step size is designed to improve the parallel VM algorithm; it satisfies the Wolfe-Powell conditions and is evaluated on thirty-two non-linear test problems. The new proposed algorithm performs well on the selected test problems and outperforms the standard algorithm.


Introduction
The parallelization of QN methods has been considered by several researchers in the past decades. The first work was done by Straeter (1973) for the symmetric rank-one method, and it was later modified by Van Laarhoven (1985). At each step, this algorithm updates the quasi-Newton matrix along m independent directions by evaluating the function and gradient values at m points in parallel.
For a positive definite quadratic function, it can be shown that the method converges in one iteration regardless of the initial starting point. The computational results of Van Laarhoven (1985) for a set of well-known problems with no more than four variables indicate that the algorithm improves both the total number of parallel function evaluations and the total number of iterations.
In the well-known variable metric algorithms, we start from an arbitrary point x_0 ∈ E^n. The search direction s_k at the k-th iteration of the algorithm is constructed through the following relation:

s_k = -H_k ∇f(x_k)    (1)

where f is the function to be minimized (assumed to have continuous second derivatives), x_k is the current iteration point and H_k an approximation to the inverse Hessian matrix at x_k. The next iteration point x_{k+1} is obtained by

x_{k+1} = x_k + λ_k s_k    (2)

where λ_k stands for an approximation to a minimum of f along the search direction s_k from x_k. Finally, we update H_k by an additive correction D_k, yielding

H_{k+1} = H_k + D_k    (5)

There are many possible choices for the correction matrix D_k. We consider first the so-called rank-two updating formula (Fletcher, 1980), where D_k is given by

D_k = (v_k v_k^T)/(v_k^T y_k) - (H_k y_k y_k^T H_k)/(y_k^T H_k y_k)    (6)

where v_k = x_{k+1} - x_k and y_k = ∇f(x_{k+1}) - ∇f(x_k). Alternatively, we update H_k with the conjugate gradient (CG) method by calculating the search direction s_{k+1} as

s_{k+1} = -H_{k+1} g_{k+1} + (g_{k+1}^T H_{k+1} y_k / y_k^T s_k) s_k    (7)

where H_{k+1} is any positive-definite symmetric matrix (the preconditioned CG (PCG) method).
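As a concrete illustration of the rank-two correction, the following minimal NumPy sketch applies the update and exhibits the secant property H_{k+1} y_k = v_k that it enforces (the function name is ours, not from the paper):

```python
import numpy as np

def dfp_update(H, v, y):
    """Apply the rank-two (DFP-type) correction D_k to the inverse-Hessian
    approximation H, with v = x_{k+1} - x_k and y = g_{k+1} - g_k."""
    Hy = H @ y
    D = np.outer(v, v) / (v @ y) - np.outer(Hy, Hy) / (y @ Hy)
    return H + D
```

The updated matrix stays symmetric, and by construction it maps y_k onto v_k, which is the defining secant condition of quasi-Newton updates.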
The search direction defined in (7) was first proposed by Nazareth (1979) and is also called the preconditioned Hestenes and Stiefel method. It has been shown that for a quadratic q given by

q(x) = (1/2) x^T A x + b^T x + c,

it must be true that H_1 = A^{-1}. Thus, it takes only one linear search to find the minimum of q, for an arbitrary initial point x_0 ∈ E^n, since the first search direction is the Newton direction. A sequential variable metric method, however, needs n linear searches. We expect a similar reduction in linear searches for the non-quadratic case. This is extremely important, since linear searches take a large number of function evaluations. A further reduction in computing time can be achieved by parallelizing the linear search itself (see Van Laarhoven, 1985).
Step 5: calculate in parallel ∇f(x_k), ∇f(x_k1), …, ∇f(x_kn), where x_kj = x_k + δ_j and y_kj = ∇f(x_kj) - ∇f(x_k).
Step 6: develop the search direction, where g_{k+1} = ∇f(x_{k+1}).
Step 7: if the available storage is exceeded, then employ a restart option, either with k = n, with the Powell switching criterion, or by guaranteeing the positive definiteness of H. For more details see Al-Bayati & Aaref (2004).

New Parallel VM algorithm
The proposed parallel VM (PVM) algorithm consists of the following steps:
1) Initialization: let k = 0, let x_0 be the initial guess of the minimum and H_0 = I be the identity matrix. Set ε > 0 as the required accuracy.
2) Compute the function and gradient values at x_k: let f_k = f(x_k) and g_k = ∇f(x_k).
3) Compute the parallel search directions: let m_1 > 0 be the number of processors available for computing the search directions in parallel, and compute

… (10)

In this paper, we apply the parallel line-search algorithm to the Al-Bayati (1991) VM algorithm. Proceed as follows: call the line-search routine in parallel along each search direction d_k^j, and stop executing this procedure once a line minimum λ_k has been found that satisfies the following Wolfe condition along any search direction d_k^j. Let d_k^* be the search direction along which λ_k has been found successfully.
In this step, if line minimum points are found along more than one search direction, then d_k^* is chosen to be the search direction that attains the lowest minimum point. The details of the new line searches used in our proposed parallel algorithm are given below:
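The selection of d_k^* can be sketched as follows, with a toy fixed-step search standing in for the full parallel line-search procedure (all names here are illustrative, not from the paper):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simple_search(f, x, d, steps=(0.25, 0.5, 1.0, 2.0)):
    """Toy stand-in for the line search: try a fixed set of step sizes
    along d and return the (step, value) pair with the lowest value."""
    trials = [(lam, f(x + lam * d)) for lam in steps]
    return min(trials, key=lambda t: t[1])

def best_direction(f, x, directions):
    """Search along each candidate direction concurrently and return the
    direction d_k^* that attains the lowest minimum, with its step and value."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda d: simple_search(f, x, d), directions))
    j = min(range(len(directions)), key=lambda i: results[i][1])
    return directions[j], results[j][0], results[j][1]
```

A thread pool is used here only to mirror the "one processor per direction" structure; the paper's FORTRAN implementation would distribute the searches differently.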

Parallel Line Search Step (PLS)
The parallel line-search procedure works as follows. Let m_2 be the number of parallel processors available for locating the minimum along a particular search direction d_k^j, let λ_max be the maximum allowable step size, and denote ψ(λ) = f(x_k + λ d_k^j). The parallel line-search process consists of the following steps:
1. Choose the step sizes: λ_1, λ_2, …, λ_{m_2} are m_2 different, appropriately chosen step sizes. For instance, we may choose λ_1 = 0.5, λ_2 = 1.0, …, λ_{m_2} = λ_max. Let Φ be the set of these step sizes.

Test for successful points
Let Φ* be the set of step sizes λ_i such that each λ_i ∈ Φ* satisfies the Wolfe conditions (2.3)-(2.4). If Φ* ≠ ∅ (the empty set), set λ_k to the step size in Φ* which corresponds to the minimum functional value, that is, λ_k = argmin_{λ_i ∈ Φ*} ψ(λ_i), and return to the main PQN routine; otherwise, proceed to step 4.
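The filtering of Φ into Φ* can be sketched as follows; the Wolfe constants c1 and c2 are conventional defaults, assumed here because the paper does not state its values:

```python
import numpy as np

def wolfe_ok(f, g, x, d, lam, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for step lam along direction d:
    sufficient decrease (Armijo) and the curvature condition."""
    phi0, dphi0 = f(x), g(x) @ d
    xn = x + lam * d
    armijo = f(xn) <= phi0 + c1 * lam * dphi0
    curvature = g(xn) @ d >= c2 * dphi0
    return armijo and curvature

def successful_steps(f, g, x, d, candidates):
    """Keep the trial step sizes that pass the Wolfe test (the set Φ*);
    each trial is independent, so each could run on its own processor."""
    return [lam for lam in candidates if wolfe_ok(f, g, x, d, lam)]
```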

Choose interpolation points:
Let Φ + be the set of step sizes such that for each λ i ∈Φ + , λ i satisfies

Apply the cubic interpolation technique
Let ϕ(λ) be the cubic polynomial passing through the two points λ_1 and λ_2. Let λ* be the minimum of ϕ(λ). Compute ψ(λ*) and ∇ψ(λ*). If λ* satisfies the Wolfe conditions (2.3)-(2.4), then set λ_k = λ* and return to the main PQN routine; otherwise, replace λ_2 with λ* and repeat step 5.
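The minimizer λ* of the interpolating cubic has a closed form in terms of the two endpoint values and slopes. A sketch, using the standard two-point cubic interpolation formula (an assumption, since the paper does not spell its formula out):

```python
import math

def cubic_minimizer(l1, p1, dp1, l2, p2, dp2):
    """Minimizer of the cubic through (l1, p1) and (l2, p2) with slopes
    dp1 and dp2, via the standard two-point interpolation formula."""
    d1 = dp1 + dp2 - 3.0 * (p1 - p2) / (l1 - l2)
    d2 = math.copysign(1.0, l2 - l1) * math.sqrt(d1 * d1 - dp1 * dp2)
    return l2 - (l2 - l1) * (dp2 + d2 - d1) / (dp2 - dp1 + 2.0 * d2)
```

For a quadratic ψ the interpolant is exact, so one application already lands on the line minimum.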

Outline of the new proposed algorithm
In this section, we derive a new optimal step by modifying the parallel Al-Bayati self-scaling (1991) algorithm, using the parallel line-search procedure (λ_i) which satisfies the Wolfe conditions.

Outline of the new parallel VM algorithm
Step 1: for any starting point x_0, an initial matrix (usually H_0 = I), and n linearly independent directions δ_1, δ_2, …, δ_n, set k = 0.
Step 5: calculate in parallel ∇f(x_k), ∇f(x_k1), …, ∇f(x_kn), where x_kj = x_k + δ_j and y_kj = ∇f(x_kj) - ∇f(x_k).
Step 6: develop the search direction, where g_{k+1} = ∇f(x_{k+1}).
Step 7: if the available storage is exceeded, then employ a restart option, either with k = n, with the Powell switching criterion, or by guaranteeing the positive definiteness of H.
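The steps above can be sketched serially as follows; a simple Armijo backtracking search stands in for the parallel line-search procedure, and the restart of step 7 is triggered when the curvature condition v^T y > 0 fails (a minimal sketch under these assumptions, not the paper's FORTRAN implementation):

```python
import numpy as np

def parallel_vm_sketch(f, g, x0, tol=5e-5, max_iter=200):
    """Minimal serial sketch of the VM iteration: DFP-style update of the
    inverse-Hessian approximation H, with backtracking standing in for the
    parallel line search and a reset to I as the restart option."""
    x = np.asarray(x0, float)
    H = np.eye(x.size)
    for _ in range(max_iter):
        gk = g(x)
        if np.linalg.norm(gk) < tol:          # stopping test of Section 5
            break
        d = -H @ gk
        lam, fx = 1.0, f(x)
        while f(x + lam * d) > fx + 1e-4 * lam * (gk @ d):  # Armijo backtracking
            lam *= 0.5
        xn = x + lam * d
        v, y = xn - x, g(xn) - gk
        if v @ y > 1e-12:                     # curvature holds: rank-two update
            Hy = H @ y
            H = H + np.outer(v, v) / (v @ y) - np.outer(Hy, Hy) / (y @ Hy)
        else:                                  # restart option of step 7
            H = np.eye(x.size)
        x = xn
    return x
```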

Numerical Results and Conclusions:
The comparison tests involve well-known test functions with different dimensions (Bunday, 1984). All the results were obtained using programs written in FORTRAN.
The comparative performance of the algorithms is evaluated by considering both the total number of iterations (NOI) and the total number of function evaluations (NOF). The stopping criterion is taken to be ‖∇f(x_{k+1})‖ < 5×10^-5. The line search employed is the cubic fitting technique, which uses function values and their gradients and is fully described in Bunday (1984). Two algorithms were tested, namely 1) the standard parallel Al-Bayati and Aaref (2004) algorithm.
2) The new proposed algorithm.
Our numerical results are presented in two tables. Table (5.1) compares the two algorithms using the cubic fitting line search with the Wolfe condition, for sixteen small-dimensional test functions with 4 ≤ n ≤ 80. The results confirm that the parallel approach is effective in practice.
Namely, there are about 17% NOI and 12% NOF improvements over the standard Al-Bayati & Aaref (2004) method. Also, Table (5.2) presents the results of our numerical comparison between the two algorithms for sixteen large-dimensional test functions with 100 ≤ n ≤ 500. In fact, the new algorithm in this case, for the selected set of test functions, yields an improvement of about 19% NOI and 15% NOF over the standard well-known algorithm.