A Generalized Pattern Search Algorithm Methodology for solving an Under-Determined System of Equality Constraints to achieve Power System Observability using Synchrophasors

This work proposes the application of the generalized pattern search algorithm (GPSA) to achieve complete power system observability using synchrophasors. The algorithmic technique is a natural extension of phasor measurement unit (PMU) minimization in a derivative-free framework: it evaluates a linear objective function under a set of equality constraints that is smaller in number than the set of decision variables. A comprehensive study of such a system of equality constraints under a quadratic objective was given in our previous paper. The first issue studied in this paper is the impact of a linear cost function on detecting optimality in fewer iterations while the cost is minimized. The GPSA evaluates a linear cost function through the iterations needed to satisfy the feasibility and optimality criteria. The second issue is how to improve convergence towards optimality using a gradient-free mathematical algorithm. The GPSA detects an optimal solution in fewer iterations than those spent by a recursive quadratic programming (RQP) algorithm. Numerical studies on standard benchmark power networks show a significant improvement in maximum observability over the measurement redundancy obtained by the RQP optimization scheme published in our former paper.


Introduction
State Estimation (SE) is a required mathematical procedure for any Energy Management System (EMS) to accomplish reliable, secure and stable power grid operations. SE can be successfully implemented using synchrophasor measurements delivered by PMUs placed around a modern transmission power network. PMUs placed within a transmission network achieve full network observability while the SE process is satisfied [1]. However, installing PMUs at all network buses is not possible due to the high price of a PMU device, which varies from $1,260 to $10,000 depending on its channel capacity. Therefore, it is essential to deploy these measurement devices strategically in any power transmission system [2]. Mathematical and heuristic algorithms are adopted to construct and solve an optimization process that strategically places these measurement devices around a power network, ensuring a reliable state estimation. A comparative study of these classes of algorithms and their effectiveness in solving the optimal PMU placement (OPP) problem has been recently published in [2]. Stochastic methods have a feature of randomness, while deterministic methods do not [3]. The main objective of such a minimization problem is to find the least number of PMUs that ensures whole power system observability [1]. Many studies have been published during the two recent decades contributing to OPP problem solving under normal conditions [2]. The fundamental algorithmic optimization approach to the PMU placement problem is a binary integer linear programming (BILP) methodology.
The BILP approach is able to bound the OPP solution from an absolute global point of view [2]. More details about the BILP approach to solving Boolean optimization models can be found in [4]-[5]. A pioneering BILP work for solving the OPP problem is presented in [6]. That work analyzes power network observability using synchronized measurements jointly with power flow and injection measurements. A generalized integer linear programming (GILP) methodology is proposed in [7], covering different placement topics such as complete and incomplete observability. Reference [8] proposes optimal multi-phase scheduling of PMU placement around a power network. Meanwhile, measurement indices such as the bus observability index (BOI) and the system observability redundancy index (SORI) are used in [8] to rank and refine the optimal solutions. An integer quadratic programming formulation determines OPP solutions with maximum observability in [9]. A semi-definite programming (SDP) problem is adopted in [10] to solve the OPP problem while the SE is numerically checked [1]. However, the numerical results derived by the existing optimization model are not optimal in terms of measurement redundancy [8]. Despite what [6]-[10] claim, references [11] and [12] use a branch-and-bound method implemented in two stages to show that the BILP model can produce non-unique optima. The authors in [13] employ a weighted least squares (WLS) method to find the OPP solution using only synchronized measurements. Although they introduce an innovative nonlinear programming method to obtain the OPP solution, the solution algorithm is not able to deliver a workable solution that satisfies feasibility and optimality at once; it is only capable of providing approximate solutions that are not mathematically efficient.
Nonlinear programming algorithms are adopted in references [14]-[16] to solve a combinatorial Boolean optimization problem such as the OPP problem. The recursive quadratic programming (RQP) [14]-[15] and interior-point methods (IPM) [16] are based on the Newton approach to solving the OPP problem. RQP and IPM convert the nonlinear programming problem into an easier set of subproblems and solve them successively until a local minimum point is located. Their remarkable advantage over the WLS formulation is that they solve the optimization process while satisfying the Boolean optimality restriction and feasibility simultaneously. A Groebner-basis algorithm is proposed in [17] to solve the WLS optimization problem towards optimality. Heuristic approaches are also adopted to solve the OPP problem in [18]-[24]. Recursive tabu search [18], binary particle swarm optimization (BPSO) [19]-[21] and genetic algorithms [22]-[23] are used to find optimality. A modified binary cuckoo optimization algorithm is presented in [24] to obtain the OPP solution with maximum observability. All heuristic algorithms find optimality on a longer time scale than the BILP approaches [6]-[8], [11]-[12], [23]. More details about the applicability of heuristic algorithms to combinatorial optimization problems can be found in [3]. On the other hand, an SDP formulation is proposed in [25], which may or may not ensure maximum network observability, in contrast to [21], [24], which give optimal solutions with maximum observability for all case studies. One important application of synchrophasors is their use as fault recorders during power network fault conditions. Several optimization algorithms for fault location using synchrophasor measurements have been recently published in [2].
Optimization algorithms are presented in [26]-[28] to determine any fault on a power transmission system, using standard IEEE power systems in numerical simulation. Mathematical or heuristic algorithms are used to yield a minimum number of PMUs and the corresponding optimal sites around the power network for fault location purposes [28]. This paper follows [15], using a linear cost function to be minimized, where the number of equality constraints is fewer than the number of optimization variables [29]-[33]. Direct search methods are derivative-free methods that try to minimize the objective function at a finite number of points at each iteration and determine an optimal solution without any derivative information. These optimization techniques can be applied to problems in which analytical derivatives cannot be calculated or the objective function is nonsmooth [34]-[44]. The generalized pattern search algorithms (GPSA) are a subfield of direct search methods and can thereby be applied to nonsmooth optimization problems [34]-[44].
The GPSA is presented for finding a globally optimal solution of a real-valued linear objective function with an equality constraint function and simple bounds for the OPP problem. The GPSA methodology delivers more than one optimal solution with maximum SORI [8], in complete agreement with those found in [21]. The RQP methodology, as presented in [14]-[15], relies mainly on a local search process using derivative information as the convergence indicator towards optimality [29]-[32]. In contrast, the GPSA builds a search space by computing a sequence of points towards optimality without any analytical information about the cost function under the equality constraints defined on a compact decision variable set [34]-[44]. The GPSA converges to the desired optimal solutions in fewer iterations than those spent by the RQP algorithm implemented in [14]-[15], without needing any analytical or approximate derivative calculation [34]-[44]. The GPSA converges to an optimal solution within a maximum observability framework with absolute precision, in practice in less time than the BPSO in [21]. The GPSA succeeds in finding optimal solutions with the highest SORI compared to the previous nonlinear approaches published in [14]-[16]. The rest of the paper is organized as follows: Section 2 presents the originality of the proposed work. The deterministic optimization approach to OPP problem solving is analyzed in Section 3. The mathematical model is presented in Section 4, while the simulation results derived by the GPSA are presented in Section 5. Section 6 examines the convergence speed of the GPSA towards optimality within a maximum observability framework. Section 7 discusses the simulation results. Finally, Section 8 concludes the paper.

Original Contribution of this Paper
In an optimization problem, gradient-based algorithms obtain a local or global minimum by requiring continuously differentiable functions [29]-[32]. In that case, applying finite-difference derivative approximations or supplying the analytical gradients of the objective and the constraints is the way to detect optimality [29]-[32]. On the other hand, derivative-free mathematical algorithms belong to a category of nonlinear approaches that avoid the need for differentiable functions [34]-[44]. Because direct search methods neither calculate nor approximate derivatives, they are in many cases described as ''derivative-free'' [39]. A direct search algorithm, also known as a non-gradient method, depends on the objective function only and does not employ derivatives of the function [34]-[44]. Pattern search methods represent a subset of direct search algorithms in which the minimum point of a continuous objective function is sought without the use of derivatives [34]-[44]. Torczon proposed the category of generalized pattern search (GPS) methods for solving unconstrained NLP problems and proved that it includes coordinate search with fixed step sizes towards optimality. At each iteration of Torczon's algorithm, the objective function is evaluated on a finite set of neighboring points on a constructed discrete mesh without ever computing or approximating derivatives [34]. To handle nonlinear constraints, Lewis and Torczon introduced a derivative-free augmented Lagrangian pattern search method for solving nonlinear programming problems, on the condition that the functions are continuously differentiable. However, the proposed method requires evaluations of Lagrange multipliers and a penalty parameter that must be updated at every iteration [35]. The GPSA is a derivative-free optimizer in which the current iterate is improved by minimizing the cost function at a limited number of points using coordinate search directions [34]-[44].
This optimization process only uses function evaluation values per iteration towards optimality [37]. Thereby, this paper presents a GPSA for searching a non-convex region for a global solution point. Optimization techniques such as an initial mesh size and a poll methodology are incorporated in the GPSA framework in order to achieve optimality with precision [34]-[44]. Pattern search methods have been applied to classical benchmark mathematical problems to validate their ability to detect the optimum point in [37], [39], [41]-[43]. This work reveals the effectiveness of the GPSA applied to a real design optimization problem, the OPP problem. The contribution is stated as follows: 1. The GPSA is applied to an OPP decision problem through ''black-box optimization'' without any analytical or approximate derivative information. 2. The GPSA minimizes a linear objective function efficiently under a bounded non-convex nonlinear problem formulation. The simulation results demonstrate that convergence towards optimality is assured, while the optimal solution is proven to comply with the binary restriction; thus, optimality and feasibility are both readily achieved. 3. A GPSA is presented for determining a global optimum in a category of non-convex programming problems defined on a restricted region called a ''hyper-box'' [29]-[33]. 4. Although the GPSA is a ''black-box'' derivative-free optimizer [34]-[44], it presents a reasonable convergence rate towards optimality. 5. An advantage of the GPSA is its scalability to high-dimensional objective functions [42], while the optimal solutions achieve maximum observability. Maximizing observability has the advantage that a larger area of the power transmission network continues to be monitored in case of a PMU outage [8].

Mathematical Principle of deterministic algorithms applied to an optimization problem
Various optimization methods exist for identifying locally optimal solutions of a nonlinear non-convex formulation of a decision-making problem such as the PMU allocation monitoring problem. A decision-making problem provides an optimization answer to a question of the form yes/no [5]. PMU placement is such an optimization problem: the yes/no question is whether or not to place a PMU at a power network node so that the node is monitored with accuracy [2]. Nonlinear algorithms used to obtain an OPP solution are discussed in detail in [14]-[16]. These nonlinear algorithms require analytical derivatives or approximate differentiation in order to achieve optimality [29]-[33]. Direct search methods are a category of deterministic optimization techniques designed for ''black-box'' optimization problems. Pattern search methods are completely derivative-free algorithms: they neither employ nor approximate any derivative information, nor attempt to linearize any constraint [34]-[44]. A derivative-free algorithm is proposed to solve the OPP problem for a sufficient and reliable power network state estimation [34]-[44]. A GPSA is proposed to improve the efficiency of a nonlinear OPP model in finding optimality and to present a set of non-unique globally optimal solutions with maximum measurement redundancy. The GPSA solves the nonlinear problem and converges readily towards optimality within a reasonable time scale. The GPSA is applied to a nonlinear problem that involves a linear objective function and an equality constraint function defined on the compact region [0, 1], ensuring complete power observability. The bound decision constraint is a relaxation of the binary restriction of the equivalent problem in BILP format [6]-[8], [11]-[12], [23].
The GPSA accomplishes optimality by means of computational search steps during the iteration procedure, generating a sufficient and guaranteed lower bound within a small number of iterations [34]-[44]. The GPSA is able to find a neighborhood of a locally optimal solution rapidly and then detect the minimum point without significant computational effort. A global minimizer exists in a non-empty and compact set according to the Weierstrass theorem [29]-[33], and the GPSA is able to detect it.

Generalized Pattern Search Algorithm Theory
Due to its derivative-free nature, the GPSA is proposed for solving non-smooth optimization problems. This paper shows that a linear objective function defined on a bounded non-convex nonlinear problem can be solved by a GPSA optimizer routine. The GPSA is implemented through a pattern-search optimizer. Because it is derivative-free, the pattern-search optimizer reaches an optimal solution with more function evaluations than nonlinear algorithms such as RQP and IPM applied to an NLP model [14]-[16]. During the iterative process, the GPSA optimizer examines a neighborhood of a locally optimal solution and then converges to an optimal solution after a reasonable number of function evaluations [34]-[44]. The optimizer routine starts at multiple points within the bound variable constraint and finds single or multiple local solutions [45]-[46]. Although the GPSA optimizer routine does not use gradients and is slower than gradient-based algorithms, it delivers a feasible optimal solution at the final iterate with adequate efficiency.
Thus, the applicability of this optimizer solver is demonstrated for a variety of optimization problems. The GPSA methodology, a direct-search algorithm, calculates a sequence of points that approaches an optimum point. At every iterate, the algorithm examines a set of points, called a mesh, around the current iterate, where the current iterate is the point calculated by the previous GPSA iteration [45]. The GPSA seeks a point in the mesh that improves the objective function over the current point. At the next stage, the newly discovered point becomes the current point, allowing the GPSA to terminate successfully at an optimal solution [34]-[44]. Thus, the GPSA is able to minimize a cost function subject to nonlinear constraints and bounds using the MATLAB patternsearch optimizer routine [45]-[46]. The patternsearch optimizer routine is implemented to solve a minimization problem without inequalities, involving only 0-1 equality constraints and bounds [34]-[44].
To reduce the number of function evaluations required to reach optimality, an accelerator parameter can be used. This method includes ways to accelerate convergence for problems of higher dimension [42]. This accelerator is an optional parameter within the MATLAB patternsearch routine [45]-[46]. The GPSA can deliver a feasible optimal solution satisfying tolerances such as MeshTolerance, StepTolerance and FunctionTolerance. MeshTolerance provides a tolerance on the mesh size, whereas StepTolerance is an absolute criterion on the change from the current point to the next one. FunctionTolerance is used as the minimum limit on the change in the objective between iterations towards optimality [45]-[46].
Optimization parameters such as an optional search step and a poll step are computed in each GPSA iteration towards optimality [34]-[44]. The search step is an optimization strategy that selects mesh points as candidates for the next iteration, given that only a finite number of points are chosen. The poll step is a local exploration near the incumbent point, and its properties guarantee convergence. Both steps compute the cost function value at a restricted number of points of a convergent sequence around the mesh [34]-[44]. The GPSA uses a search exploration that takes place prior to the poll to optimize a constrained nonlinear minimization problem, as shown in Figure 1.
The optimization process tries to find a better point than the current point. By default, the MATLAB patternsearch optimizer routine does not employ a search methodology when solving an optimization process. Figure 1 gives a GPSA flowchart for the case where the direct-search algorithm adopts an exploratory search methodology, as proposed in [34]-[44]. This adaptation is a parameter option of the patternsearch routine, a MATLAB implementation of the GPSA in a mathematical programming platform [46]. As depicted in the flowchart of Figure 1, two options are offered in the MATLAB patternsearch optimizer routine: patternsearch refines the mesh if the poll is deemed unsuccessful, and expands the mesh if the poll is deemed successful [34]-[44]. At each stage, the GPS algorithm polls the points in the current mesh by calculating their cost function values [38]. The direct search algorithm stops polling the points on the mesh when it detects a point whose objective value is less than that of the current point. In this case, the poll is considered successful and that point becomes the current point at the next iteration [38], [46]. On the other hand, the poll is considered unsuccessful if the GPSA does not succeed in discovering a point that improves the objective function; then the current point remains the same at the next iteration [38], [46]. The Refine Mesh option depicted in the GPSA flowchart is used here: the pattern search method multiplies the mesh size by 0.5 after each unsuccessful poll [46]. The GPSA needs many function evaluations to reach an optimal solution compared to derivative-based algorithms [39]. The mesh accelerator can force the GPSA to converge more rapidly to an optimum point by decreasing the number of iterations and function evaluations needed to satisfy the mesh tolerance [46].
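The refine/expand logic described above can be sketched in a few lines. The following Python fragment is an illustrative sketch only, not MATLAB's patternsearch implementation: it polls the 2n coordinate directions (the analogue of the 'gpspositivebasis2n' poll method), doubles the mesh after a successful poll, halves it after an unsuccessful one, and stops on a mesh-size tolerance.

```python
import numpy as np

def pattern_search(f, x0, mesh=1.0, mesh_tol=1e-6, max_iter=10_000):
    """Minimal GPS poll loop: poll the 2n coordinate directions,
    expand the mesh (x2) on a successful poll and refine it (x0.5)
    on an unsuccessful one, stopping when the mesh tolerance is met."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])  # 2n coordinate directions
    for _ in range(max_iter):
        if mesh < mesh_tol:              # mesh-size stopping criterion
            break
        success = False
        for d in directions:             # poll the mesh points around x
            trial = x + mesh * d
            f_trial = f(trial)
            if f_trial < fx:             # successful poll: accept the point
                x, fx = trial, f_trial
                success = True
                break
        mesh = 2.0 * mesh if success else 0.5 * mesh
    return x, fx

# Example: minimize a smooth convex quadratic from an arbitrary start
xopt, fopt = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            x0=[0.0, 0.0])
```

Note how no derivative of f appears anywhere in the loop: only function values at polled mesh points drive the iteration, which is the defining property of the GPSA.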
The patternsearch optimizer routine, a GPSA implementation on the MATLAB platform, stops if a maximum number of iterations or a maximum number of function evaluations is exhausted during the iterative process towards optimality [45]-[46]. In that case, a sufficient maximum number of iterations or function evaluations should be specified in order to reach an optimal solution within the feasibility and optimality criteria [46].

Figure 1. Flow chart of GPSA and polling method strategy [46]
A stopping criterion based on the mesh size is available in the patternsearch optimization routine [46]. The termination and tolerance criteria are summarized as follows [46]: 1. The GPSA, implemented via the MATLAB patternsearch optimizer routine, minimizes the objective until the mesh size is less than the given tolerance value. 2. The patternsearch optimizer evaluates the objective function within the predefined number of function evaluations before terminating at an optimal solution. 3. The patternsearch optimizer terminates when the distance from the point of one successful poll to the point of the next successful poll is less than the predefined tolerance value. 4. The patternsearch optimizer finds a solution without the iterations exceeding a maximum predefined number of iterations for the algorithm's termination.

Solution of a system of equality constraints: the optimal PMU placement problem
A PMU placement monitoring problem is stated as a system of nonlinear equality constraints whose number is smaller than the number of decision variables [29]-[32]:

g_i(x_1, x_2, ..., x_n) = 1,   i = 1, ..., m,   m < n   (1)

where x_1, ..., x_n are the unknown optimization variables with x_i ∈ [0, 1] [29]-[33]. The optimal solution minimizes the linear cost function:

min f(x) = Σ_{i=1}^{n} w_i x_i   (2)

subject to (1) and the simple bounds:

0 ≤ x_i ≤ 1,   i = 1, ..., n   (3)

For the sake of simplicity, all weights w_i reflecting the PMU installation cost at each bus are assumed to be unity [6]. Problem (1)-(3) is a nonlinear programming (NLP) problem, that is, a minimization model with a linear objective function under a set of nonlinear equality constraints and simple bounds [29]-[32]. A point with the lowest objective function value is defined as a local minimizer, and the corresponding function value is the optimal value at that point [29]-[32]. That point must satisfy all the equality constraints; such a point is called a feasible point [33]. All the minimum points of the feasible set defined by the linear cost function, the equality constraints and the bounded design variables are feasible solutions [33]. As is well known, this kind of NLP problem can be solved by nonlinear algorithms adopted for unconstrained problems [29]-[32]. Two main mathematical categories are identified for solving the nonlinear programming problem (1)-(3):

1. gradient-based algorithms
2. direct search algorithms
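The structure of the model can be illustrated with a miniature example. The toy system below is hypothetical (three variables and two equality constraints of the logical-OR form x_i + x_j - x_i x_j = 1; it is not the constraint set of [15]), but it shows the ingredients the GPSA works with: a linear cost, a constraint-violation measure, and simple bounds.

```python
import numpy as np

# Toy illustration (hypothetical system, not the one in [15]):
# n = 3 decision variables, m = 2 equality constraints (m < n).
w = np.ones(3)                         # unit PMU installation costs

def objective(x):
    return float(w @ x)                # linear cost f(x) = sum of w_i x_i

def constraints(x):
    # Two nonlinear equalities g_i(x) = 1 coupling neighbouring variables
    return np.array([x[0] + x[1] - x[0] * x[1],
                     x[1] + x[2] - x[1] * x[2]])

def violation(x):
    x = np.clip(x, 0.0, 1.0)           # simple bounds 0 <= x_i <= 1
    return float(np.max(np.abs(constraints(x) - 1.0)))

# x = (0, 1, 0) satisfies both equalities at the least cost
x = np.array([0.0, 1.0, 0.0])
```

A derivative-free solver only ever needs to call `objective` and `violation` at trial points; no gradient of either function is required, which is precisely why the GPSA can treat this model as a black box.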
Gradient-based mathematical algorithms are frequently used in nonlinear programming. Their characteristic feature is that they need the objective function and constraints to be continuous with continuous first derivatives [29]-[33]. Gradient-based algorithms use first- and second-order derivatives, gradients and Hessians, to derive a local minimum point [29]-[32]. Newton, quasi-Newton, conjugate gradient, active-set, trust-region-reflective, recursive quadratic programming and interior-point methods are mostly applied to obtain an NLP optimal solution [29]-[33]. The theoretical background of these nonlinear methods can be found in [29]-[32], and their applicability to real optimization problems in [14]-[15] and [16]. On the other hand, direct search algorithms are relatively new and use neither explicit nor approximate derivatives [34]-[44]. Direct search methods such as the suggested GPSA need many function evaluations compared to derivative-based mathematical algorithms. Despite this, the GPSA can rapidly discover the neighborhood of a minimizer, although it is relatively slow to pinpoint the minimum in the final iterations. This is an acceptable cost of not employing derivative information about the objective and the constraints when detecting optimality [38]. The GPSA is investigated in the present work. The proposed system of equations (1) was initially presented in [15], where a quadratic cost function is minimized. In this work, a linear objective function is adopted to simplify the optimization process further, and the problem is then solved in a GPSA framework [34]-[44].
All the optimization variables in the constraint function (1) lie within the region [0, 1]. The bound decision variable constraint is a relaxation of the 0-1 variables of the equivalent BILP format in [6]-[8], [11]-[12], [23]. Such transformations are proposed in [5]. The GPSA is a category of direct search methods for solving a nonlinear programming problem [37]. Local or global gradient-based methods may become stuck at locally optimal solutions [29]-[33]. On the other hand, GPSA-based optimization algorithms can deliver a global solution [34]-[44]. However, the GPSA requires many more function evaluations to detect optimality; this arises from not using derivatives during the iteration process. The computational effort is an acceptable price for the GPSA to terminate at optimality with sufficient precision. Each iteration of a GPSA methodology comprises two stages [34]-[44]. First, the poll step is a local descent procedure that calculates appropriate steps leading to a solution close to a locally optimal one. Then, the search step explores the solution space globally and helps to avoid being trapped in a local solution. This optimization technique does not guarantee a global solution, but it has a higher probability of achieving a global optimum point due to the search step involved in the optimization process [34]-[44]. The GPSA optimizes the linear objective under the constraint function (1) and results in an optimal solution whose components take the value 0 or 1. A mathematical explanation of why the optimal solution satisfies the binary restriction is given in [14]-[16], [28]. Given a local minimum of the nonlinear minimization problem, some of the x_i will be equal to 1, satisfying the equality constraints (1). The rest of the x_i will be equal to 0, because the objective function, defined over a closed and bounded set, is minimized when x_i = 0 [14], [28].
For the IEEE-14 bus power network shown in Figure 2 [47], the system of equality constraints (4) is built from the bus connectivity as in [15], with each constraint coupling the decision variable of a bus with those of its neighbors. The GPSA delivers an optimal solution that satisfies the equality constraint function and minimizes the linear cost function:

min f(x) = Σ_{i=1}^{14} x_i   (5)

The GPSA takes a step that decreases the cost function; simultaneously, the value of the constraint violation is measured, so that the routine terminates at a feasible optimal solution, as shown in Figure 3. That optimal solution involves decision variables that are proven to satisfy the Boolean restriction [14]-[16]. A non-unique global minimum point is defined, satisfying the system of nonlinear equality constraints with the highest SORI, which achieves maximum power network observability. When the feasibility requirement is met, the GPSA yields the best optimal solution, which is the current best point depicted in Figure 3 [33]. The optimal solution depicted in Figure 3 has maximum observability, ensuring maximum reliability for this specific power network [8]. The large number of function evaluations needed to converge towards optimality is due to the derivative-free nature of this direct search algorithm [34]-[44]. Feasibility and optimality are met simultaneously within six iterations, which is a remarkable advantage compared to the iterations spent by RQP [14]-[15]. The feasible region is defined by the constraint function g(x) = 1 with g: R^n → R^m, m < n [29]-[32], an underdetermined system of equations, where x belongs to a closed and bounded, that is, compact, decision variable set [29]-[32]. The constraint function determines a feasible set constituted by the bounded optimization variables and equality constraints that are fewer than the variables in number [15].
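The observability rule encoded by the equality constraints, namely that a bus is observed if it hosts a PMU or is adjacent to one, can be checked directly. The Python sketch below is for illustration only: the branch list is the standard IEEE-14 topology, and {2, 6, 7, 9} is one commonly reported minimum-size placement for this network (not necessarily the specific solution of Figure 3).

```python
# Standard IEEE-14 branch (edge) list [47]
branches = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5),
            (4, 7), (4, 9), (5, 6), (6, 11), (6, 12), (6, 13), (7, 8),
            (7, 9), (9, 10), (9, 14), (10, 11), (12, 13), (13, 14)]

def observed(pmu_buses, n_bus=14):
    """Return the set of buses observed: a PMU observes its own bus
    and every bus adjacent to it."""
    adj = {b: {b} for b in range(1, n_bus + 1)}
    for i, j in branches:
        adj[i].add(j)
        adj[j].add(i)
    seen = set()
    for b in pmu_buses:
        seen |= adj[b]
    return seen

# A commonly reported 4-PMU placement for the IEEE-14 network
placement = {2, 6, 7, 9}
fully_observable = observed(placement) == set(range(1, 15))
```

For this placement the union of observed buses covers all 14 buses, so the candidate satisfies every observability equality constraint while using only four PMUs, matching the objective value of eqn. (5) at the optimum.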

Simulation Results: indicators of the GPSA properties towards optimality
Pattern search methods are used to optimize a linear cost function under a non-convex set of equality constraints with simple bounds [15]. Derivatives are neither computed nor approximated by direct search methods [34]-[44]. The GPS algorithm is a derivative-free approach, in contrast to the derivative-based RQP algorithm presented in [14]-[15] for obtaining an optimal PMU placement solution. The GPSA routine is applied to the minimization model that involves a linear objective function and polynomial constraints. The GPSA optimizes a linear cost function where the equality constraints are fewer than the decision variables in number [15], [29]-[33]. The efficiency of the GPSA in detecting the solution point minimizing a linear objective function under a non-convex constraint set within the compact region [0, 1] is demonstrated and analyzed. The objective function of the OPP problem is written in linear form, in contrast to the objective presented in [15], to simplify the given optimization problem further and to detect optimality within fewer iterations than [15]. The mesh is constructed by adding to the current point a scalar multiple, called the mesh size, of a set of vectors called a pattern. The pattern determines which points to search at each iteration. The GPSA seeks a point whose objective function value is lower than the incumbent [43]. In that case, the poll is successful, and the point found becomes the current point at the next iteration [38]. Various poll techniques are found in the MATLAB optimization library; the poll method is the polling strategy employed in the GPSA optimizer routine. In this study, 'gpspositivebasis2n' is used to optimize the nonlinear programming problem [46]. The optimization process detects the global solution when the stopping criteria are simultaneously satisfied [41].
The stopping criteria incorporate the maximum constraint violation, the maximum number of function evaluations and the accuracy of the best solution, together with termination parameters such as the mesh size, Lagrange multiplier estimates and a penalty parameter that penalizes infeasible regions [34]-[35], [41], [45]-[46]. Thereby, the nonlinear program is efficiently solved in terms of convergence rate, computational time, function evaluations and the number of iterations spent towards global optimality [33].

Results for the bound-constrained nonlinear problem
Pattern search methods handle nonlinear constraints by formulating a sub-problem that combines the objective function with the nonlinear constraint functions; the Lagrangian and a set of penalty parameters are used to drive the iterates toward optimality [35]. The GPS algorithm is applied in the MATLAB environment to an optimization problem reflecting the IEEE-118 bus transmission network. The GPSA succeeds in delivering global optimality for this medium-sized minimization problem, as shown in Table 1 and Figure 4 [47], detecting a minimum solution point that satisfies the constraint function. The MATLAB ''black-box'' pattern search optimizer routine requires an initial starting point to minimize the objective function. Given an arbitrary initial point, the GPSA moves along coordinate directions and converges to optimality through the iterative process [34]- [44]: it starts with an initial guess and performs a sequence of iterations that improve this guess toward optimality [34]. During the iterative process, the GPSA performs a number of function evaluations to derive an optimum point; the evaluated points provide an estimate of the cost function value at the current iteration [34]. The function evaluations are counted, together with the estimated objective function values, and used to identify the minimum objective at the final iteration [46]. The GPSA implements a step-acceptance rule based on penalty constants and Lagrange multiplier estimates [35]; these multipliers and the penalty parameter must be updated at every iteration, as shown in Table 1 [35]. At each iteration, the objective function is evaluated at a finite number of mesh points in an attempt to find one with a lower objective function value than the incumbent point [46].
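The sub-problem just described, combining the objective with the equality constraints through Lagrange multipliers and a penalty parameter, can be sketched as follows. This is a minimal augmented-Lagrangian sketch under assumed update rules (the function names, tolerances, and the growth factor are illustrative, not the internals of the MATLAB solver):

```python
import numpy as np

def augmented_lagrangian(f, c, lam, rho):
    """Sub-problem objective L(x) = f(x) + lam.c(x) + (rho/2)||c(x)||^2.

    f is the cost function, c returns the equality-constraint residuals
    (c(x) = 0 at feasibility), lam holds multiplier estimates, and rho
    is the penalty parameter on infeasible points.
    """
    return lambda x: f(x) + lam @ c(x) + 0.5 * rho * float(np.dot(c(x), c(x)))

def outer_update(lam, rho, cx, feas_tol=1e-3, growth=10.0):
    """Illustrative outer iteration: penalize harder while infeasible,
    otherwise take a first-order multiplier step."""
    if np.linalg.norm(cx) > feas_tol:
        return lam, rho * growth      # still infeasible: increase penalty
    return lam + rho * cx, rho        # near-feasible: update multipliers

# Example: cost = number of PMUs, one equality constraint x0 + x1 - 1 = 0.
f = lambda x: float(np.sum(x))
c = lambda x: np.array([x[0] + x[1] - 1.0])
L = augmented_lagrangian(f, c, lam=np.array([0.0]), rho=10.0)
print(L(np.zeros(2)))   # the infeasible origin is penalized: 0 + 0 + 5.0
```

Minimizing L with a derivative-free poll and then applying the outer update is the pattern the text attributes to the solver: the increasing penalty parameter drives the iterates toward the feasible region while the poll decreases the objective.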

An incumbent solution is the trial point at which the GPSA has estimated the lowest cost function value found so far [43]. At each iteration, this direct search algorithm tries to decrease both the objective function and the constraint violation, depending on the extent to which the iterates are infeasible, as shown in Table 1. Based on the function evaluations, the MATLAB patternsearch optimizer routine reports an optimal solution point once the termination criteria are reached [46]. Convergence to a feasible iterate is of primary concern; convergence of the GPSA is established once the stopping conditions are satisfied [32]- [33]. Specifically, the MATLAB GPSA optimizer is designed so that the sequence of points terminates when the mesh size falls below the predefined tolerance [46]. Pattern search methods bring an infeasible point into the feasible region by decreasing the constraint function value [34]-[44] while the objective function value is also decreased. This is an attractive optimization process, since a solution point with the least objective can thereby be detected; a local minimizer of the constrained problem is found by penalizing infeasible regions [29]- [33]. Furthermore, the constraint violation lies within the feasibility tolerance at the final iteration: the sequence of iterates ends at a solution point without any constraint violation, meaning that the pattern search methods converge to a feasible optimal solution [33]. The iterative process converges toward optimality within the predefined termination and feasibility criteria, as pointed out in [32]- [33], and the ''black-box'' pattern search optimizer routine delivers a feasible optimal solution under appropriate optimality conditions, as established in [34]- [44]. The termination criterion is met when the mesh size falls below the MeshTolerance value, and the optimal solution satisfies the feasibility criteria in 28.39 seconds [45]- [46].
The number of function evaluations required for optimality is reasonable, owing to the derivative-free character of the ''black-box'' optimizer routine [45]- [46]; the GPSA iterative process is therefore computationally inexpensive. Minimizing the objective subject to the equality constraint function with simple bounds is the central concern. The GPSA converges to a non-unique global optimum point, avoiding the approximate solutions found in [13]. Pattern search methods are thus an effective tool for solving a 0-1 nonlinear programming problem with simple bounds, and they detect optimality with high precision [34]-[44]. As shown in Table 1, the objective function is evaluated at a finite number of mesh points in order to find the one that yields a lower objective function value than the incumbent [38]. The GPSA solves the non-convex problem involving 118 design variables within six iterations, determining the global rather than a local minimum of the objective [37]. As is well known, optimization algorithms can generally decrease the constraint violation but cannot guarantee finding a feasible solution from an infeasible one [33]. Pattern search methods decrease the constraint violation during the iterative process by using penalty terms [35], and the GPSA here detects a feasible solution point attaining the lowest value of the objective function. As shown in Table 1, infeasible solutions are penalized during the iterative process with an increasing penalty parameter; in this way, pattern search methods decrease the constraint violation and recover a feasible solution from an infeasible one [34]. The GPSA attains global optimality for the linear objective subject to the system of equality constraints defined on a compact region [15], and convergence is achieved within six iterations [46].
Owing to the derivative-free nature of the GPSA, a reasonable number of function evaluations is spent toward optimality [34]- [44]. The sequence of points generated by the GPSA is globally convergent to a constrained optimal point [35], and the objective function value is non-increasing at every GPSA iteration [45]- [46]. Balancing feasibility, expressed by the constraint function violation, against minimization of the objective function is a major concern in nonlinear programming [33]. As illustrated in Table 1, feasibility and optimality are met simultaneously without any constraint violation; that is, the maximum constraint violation is zero. The iterative process combines the determination of a minimum bound on the objective through the counted function evaluations with the minimization of the constraint violations using penalty terms, governed by the mesh size parameter [45]- [46]. This optimization process yields an acceptable optimal feasible solution for the PMU placement problem in a well-stated nonlinear framework [29]- [32]. Using the IEEE-118 bus system as an illustrative example, the GPSA's effectiveness is shown in Table 2: the GPSA detects the best optimal solution covering the maximum observability, and the PMU placement sites are listed there. This optimal solution is ranked with the maximum SORI, ensuring the best power network reliability [8]. Five plot diagrams are generated during the iterative process; they show the best function value attained, the total number of function evaluations needed for optimality, and an acceptable current mesh size [46]. The optimal solution is feasible, since the constraint function satisfies the predefined feasibility and stopping criteria [32]- [33]. The final plot diagram illustrates the PMU placement sites, confirming that the GPSA reaches an optimal solution point satisfying the binary restriction on the decision variables.
The simulation results depicted in Figure 4 demonstrate in practice that pattern search methods are globally convergent for general nonlinear programming [35]. The OPP monitoring problem is well stated in a nonlinear framework with an equality constraint function and simple bounds, and this kind of nonlinear program is solved by this subclass of direct search methods in fewer iterations than other nonlinear algorithms [14]- [16] and in a reasonable time. The best objective function value obtained by RQP, IPM, and GPSA is given in Table 3 for each transmission power network [47]; the numbers of PMUs delivered by the GPSA are comparable with those found by RQP and IPM for all IEEE test systems [47]. A GPSA, which moves along the coordinate directions, converges to an optimal solution for all user-supplied starting points [37]; thereby, multiple optima with the minimum function value are located. The optimal results obtained from various initial starting points for the constrained nonlinear problem are given in Tables 4-5. The GPSA can find a globally optimal solution even when it lies far from the initial starting point [34]- [44]. The computation time was compared with that spent by RQP [14]- [15] and IPM [16] and found to be reasonable, owing to the GPSA's derivative-free nature [34]- [44]. Although the number of function evaluations is large, the optimization terminates within a few iterations [45]- [46]. The GPSA is scalable across problem dimensions, each reflecting a real transmission power network [47], and it continues to locate globally optimal points as the problem dimension increases, detecting a set of non-unique optima with the least cost function value. BOI indicates the number of times a bus is observed by the PMU placement set, whereas SORI is the sum of the BOIs [8]. Table 4 lists the PMU allocation sets and the SORI value for every power network [47].
Table 4 (placement sets recovered from the extracted text; each line is one PMU placement set, with the SORI values positioned as they appeared):
4, 6, 9, 15, 20, 24, 25, 28, 32, 36, 38, 39, 41, 46, 50, 53
1, 4, 6, 9, 15, 20, 24, 28, 31, 32, 36, 38, 41, 47, 50, 53, 57
1, 5, 9, 15, 19, 22, 25, 27, 29, 32, 36, 38, 39, 41, 46, 50, 53
SORI: 69
1, 4, 9, 13, 20, 22, 25, 27, 29, 32, 36, 41, 44, 47, 51, 53, 57
1, 4, 9, 13, 20, 22, 26, 29, 30, 32, 36, 41, 44, 47, 51, 53, 57
1, 4, 9, 15, 20, 24, 28, 29, 30, 32, 36, 38, 39, 41, 46, 51, 53
SORI: 71
1, 4, 9, 15, 20, 24, 25, 28, 29, 32, 36, 38, 39, 41, 46, 50, 54
1, 4, 8, 9, 15, 20, 24, 25, 28, 32, 36, 38, 39, 41, 47, 51, 53
1, 4, 9, 15, 19, 22, 25, 27, 29, 32, 36, 38, 39, 41, 46, 50, 53
1, 4, 7, 9, 15, 20, 24, 25, 28, 32, 36, 38, 39, 41, 47, 50, 53
1, 5, 9, 12, 15, 17, 21, 25, 29, 34, 37, 40, 45, 49, 53, 56, 62, 64, 68, 70, 71, 78, 85, 86, 89, 92, 96, 100, 105, 110

The GPSA delivers optimal points with maximum power network observability in a reasonable computational time compared with the optimal solutions delivered by the RQP algorithm [14]- [15]. Table 5 lists a set of non-unique constrained global optima with maximum power network observability for each power system [47]; these optimal points are in complete agreement with those found in [21]. A remarkable advantage of the GPSA is its capability to detect multiple solutions with maximum SORI [8], whereas BPSO detects only one optimal solution with the best SORI [21]. The GPSA convergence analysis for the OPP problem makes clear why this optimization method is characterized by the robustness its proponents have long claimed [34]- [44]. The results delivered by the GPSA are presented in Table 5 in terms of the optimal solutions and the corresponding maximum network observability index [8]. Table 6 gives a comparative study of the optimal cost function value and the SORI between the proposed GPSA and the other algorithms implemented in [9]- [10], [14]- [16], [21].
The optimal results delivered by the GPS algorithms either equal or exceed, in measurement redundancy, those derived by RQP [14]- [15] and SDP [10] on all IEEE power systems, and they match the SORI values found by BPSO [21]. Thus, maximum power network observability and maximum reliability are achieved, as presented in Table 6.

Table 6 (placement sets recovered from the extracted text; each line is one PMU placement set, with the SORI values and row label positioned as they appeared):
4, 6, 9, 10, 12, 15, 18, 25, 27
SORI: 52
2, 4, 6, 9, 10, 12, 15, 20, 25, 27
2, 4, 6, 9, 10, 12, 15, 19, 25, 27
57 bus: 1, 4, 6, 9, 15, 20, 24, 28, 30, 32, 36, 38, 41, 46, 50, 53, 57
SORI: 72

Convergence Rate of GPSA in solving the OPP problem for maximum observability
In this section, the convergence speed is evaluated in terms of maximum observability [8]. The GPSA is applied to the IEEE-30 and IEEE-57 bus systems, employing a search method within the proposed direct search algorithm as depicted in the flowchart of Figure 1 [46]. The goal of scalability is to find the global optimal solution with rapid convergence [32]; this noteworthy property of the GPSA is demonstrated in practice on a real design optimization problem, the PMU monitoring problem [2]. The mesh expansion factor specifies the factor by which the mesh size is increased after a successful poll; it is set to its default value of 2. The scale-mesh option specifies whether the patternsearch routine scales the mesh points, multiplying the pattern vectors by constants proportional to the logarithms of the absolute values of the components of the current point; its predefined value of 1 is used. The initial mesh size specifies the length of the shortest vector from the initial starting point to a mesh point; its predefined value of 1.0 is used during the simulation runs of the patternsearch routine. The search method is GPSPositiveBasis2N, and the mesh tolerance, step tolerance, and constraint tolerance are all set to 10^-16 so that optimality and feasibility are satisfied simultaneously [33]. To improve the convergence speed toward optimality, an accelerated-mesh option is used to decrease the number of function evaluations needed to reach an acceptable constrained optimum point. The polling order, which states the order in which the GPSA examines points on the current mesh, is an important property: the poll order is selected either as 'consecutive', a deterministic approach to the problem, or as 'success', which reuses the best direction from the previous poll.
Finally, a "random" polling order can be used and yields optimal results of comparable quality. Randomly distributed binary variables, generated with the MATLAB randi command, are used to start the patternsearch optimizer routine toward an acceptable optimum point. The iterative process shows that the GPSA reaches an optimal solution point whose current best point and best function value are shown in Figures 5-6. The optimal PMU sets achieved by the GPSA are presented in Tables 7-8, which list the PMU placement sites for each power system. As shown in Figures 5-6, the patternsearch optimizer routine generates a sequence of points that converges toward optimality within fewer iterations than the previously published nonlinear algorithms [14]- [16]. As can be observed, the GPSA discovers optimality for the 30-bus and 57-bus systems within six iterations, that is, with rapid convergence [45]- [46], while the maximum observability is satisfied, as illustrated in the tabulated results. The effectiveness is evidenced by the fact that the GPSA reaches an optimal solution within six iterations, in contrast to other nonlinear algorithms for the OPP problem [14]- [16]. The GPSA produces a sequence of points that converges to optimality regardless of how the initial starting point is selected. In each case study, the globally optimal solution was located successfully, with the GPSA spending 5-6 iterations to converge within the predefined optimality and feasibility tolerances [33].
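The random initialization just described, MATLAB's randi producing binary start vectors, can be mimicked as in the following sketch; the seed and the sizes are arbitrary illustrations, and a Python generator stands in for the MATLAB command.

```python
import numpy as np

# Mimic the MATLAB `randi`-based initialization the text describes:
# each restart hands the pattern search a different binary incumbent,
# which is how multiple distinct optimal placements can be located.
rng = np.random.default_rng(0)          # fixed seed for reproducibility

def random_binary_starts(n_buses, n_restarts):
    """Return n_restarts random 0/1 placement vectors of length n_buses."""
    return rng.integers(0, 2, size=(n_restarts, n_buses))

starts = random_binary_starts(57, 5)    # five start vectors for a 57-bus case
print(starts.shape)                     # (5, 57)
```

Running the optimizer once from each row and keeping every distinct minimum-cost solution reproduces the multi-start behavior the text attributes to the random initialization.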

Discussion About the Simulation Results
Although the price of PMUs has been decreasing recently, the cost of a PMU device remains high and depends on the number of channels required to monitor the bus at which it is installed [2]. An efficient and cost-effective installation of PMUs across the power network with maximum observability therefore remains a challenging task [2]. Pattern search methods solve a non-convex minimization problem and thus make a notable impact on OPP problem solving. The nonlinear model is a non-convex formulation with multiple optimal solutions over a feasible region constituted by a linear objective function, equality constraints, and bounds on the decision variables. The GPSA is applied to this model via the MATLAB patternsearch optimizer routine, starting from different initial points, including decision variables randomly distributed over the binary set {0,1}, and it is applied to power systems of different sizes [47].
The GPSA is a derivative-free optimization method that uses a sample of finitely many points to deliver a sufficient decrease in the objective function within a small number of iterations, as shown in the previous sections. GPS methods are direct search algorithms, initially introduced and studied for unconstrained minimization problems and then extended to problems with bounds [36]. The GPSA can also solve constrained optimization problems in which an objective function is minimized under a set of inequality constraints [34]-[44].
This study examines the effectiveness of the GPSA in minimizing a linear cost function subject to equality constraints fewer in number than the design variables, together with simple bounds. Derivative-free direct-search algorithms are used to obtain the optimal solution instead of the gradient-based algorithms adopted for the OPP in [14]- [16]. The GPSA succeeds in practice on a constrained optimization formulation of the optimal PMU placement problem. The GPSA is easy to implement through the MATLAB pattern search optimizer [45]- [46], and it converges to a non-unique global optimum point. The remarkable feature of the GPSA routine is that neither the objective gradient nor the constraint gradients are needed to deliver the constrained minimum solution point [40]. By contrast, the gradients of the objective and constraint functions must be calculated to enable the fmincon optimizer routine [45] to solve the nonlinear program, as demonstrated in [14]- [16]. The GPSA employs a globalization strategy based on a sufficient-decrease condition [34]- [44]. As shown in Figure 1 and Figures 3-6, the GPSA follows a search direction that yields a suitable step decreasing the objective function, while the constraints are not violated at the final iteration [46]. The effectiveness of the nonlinear methodology solved by the GPSA is shown by the fact that it delivers the same optimal results as found in [14]- [16]. Moreover, the GPSA minimizes the total number of PMUs placed strategically at bus locations around a power transmission network without significant computational burden, and it determines optimal solutions that are best in terms of the optimal value and maximize the measurement redundancy. From the standpoint of maximum observability [8], the PMU placement sites are better than those generated by the RQP [14]- [15] and IPM [16] methods adopted in previous OPP studies.
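To make the placement model concrete, the sketch below encodes the standard connectivity-based observability rule (a PMU observes its own bus and every adjacent bus) for a small hypothetical network and brute-forces the minimum-cost placement. The 7-bus edge list is an invented illustration, and exhaustive search stands in for the GPSA here; the paper's actual polynomial equality constraints are those formulated in [15].

```python
import itertools
import numpy as np

# Illustrative 7-bus network (this edge list is an assumption, not an
# IEEE test case). A PMU at bus i observes bus i and all its neighbors.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]
n = 7
A = np.eye(n, dtype=int)                 # bus-connectivity matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1

def observable(x):
    """Full observability: every bus observed at least once, A @ x >= 1."""
    return bool(np.all(A @ np.asarray(x) >= 1))

# Linear cost: the number of PMUs placed. The tiny model is solved by
# enumerating all binary placement vectors and keeping the cheapest
# feasible one.
best = min((x for x in itertools.product([0, 1], repeat=n) if observable(x)),
           key=sum)
print(sum(best))   # minimum number of PMUs for this toy network: 3
```

The same linear objective and binary decision vector carry over to the IEEE systems; only the connectivity matrix and the constraint encoding change, and the GPSA replaces the enumeration.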
The measurement indices BOI and SORI are adopted to quantify the measurement redundancy of a specific optimal solution [8]. The GPSA is able to find multiple optimal solutions, each with its own SORI, as illustrated in Table 4; some of these placements are the maximal-measurement-redundancy solution points, as illustrated in Table 5. The scalability of the GPSA is thereby demonstrated: it finds optimal solutions with the highest SORI within limited iterations while satisfying the feasibility and stopping criteria. The optimal solutions with the highest SORI, presented in Table 5, are in complete agreement with those delivered by BPSO but are obtained on a shorter time scale [21]; BPSO finds optimality with maximum measurement redundancy in a larger computation time owing to its stochastic nature [3]. As observed, the proposed GPSA delivers multiple globally optimal solution points with the maximum SORI, whereas BPSO gives only one solution set, and the GPSA reaches constrained optima with better convergence behavior than BPSO [21]. Figures 3-6 confirm that the best global minima are solutions of the non-convex optimization problem from the standpoint of the maximum observability criteria pointed out in [8]. The GPSA finds optimal solutions with higher measurement indices than those derived by the RQP [14]- [15] and IPM [16] methods. Finally, an optimal solution with the highest measurement redundancy is located for every test system, as presented in Table 5.
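Both indices follow directly from the bus-connectivity matrix and the placement vector, as defined above: BOI counts how many placed PMUs observe each bus, and SORI sums the BOIs. The 4-bus matrix in this sketch is an assumed toy example, not one of the IEEE cases.

```python
import numpy as np

# BOI/SORI computed per the definitions in the text: BOI_i is the number
# of times bus i is observed by the placement set, SORI = sum_i BOI_i.
# A is a toy 4-bus connectivity matrix (self plus neighbors).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])

def boi(A, x):
    return A @ np.asarray(x)          # per-bus observation counts

def sori(A, x):
    return int(boi(A, x).sum())       # summed redundancy index

x = [0, 1, 0, 1]                      # PMUs at buses 2 and 4 (1-indexed)
print(boi(A, x), sori(A, x))          # BOI [1 1 2 1], SORI 5
```

Among several minimum-cardinality placements, ranking by SORI in this way selects the one with the greatest measurement redundancy, which is how the tables compare alternative optimal sets.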
The fundamental first step in solving an optimization problem is to determine whether a feasible solution exists at all before any constrained optimum can be sought by any optimization algorithm. The termination and feasibility criteria are evaluated after each GPSA iteration [45]-[46]. By penalizing infeasible regions, a non-unique global minimum of the constrained nonlinear programming problem is delivered [33]. The optimization process ends when an upper limit on the number of iterations is reached [32]- [33], [45]- [46]. The GPS algorithms are considered to converge to an optimal solution when the feasibility and stopping conditions are satisfied within the predefined tolerances. From that feasibility point of view, all the constrained optima lying in the feasible region defined by the linear objective function, the equality constraint function, and the bounded decision variables are feasible solutions [33].
The GPSA generates a sequence of points that converges globally to an optimum at the final iterate, satisfying feasibility and optimality simultaneously [42]. The convergence behavior of the GPSA via the MATLAB patternsearch optimizer routine is examined for more complicated networks such as the IEEE-118 bus system in Section 4 [47]; this efficient optimization process is shown in Table 1. As observed, the GPSA rapidly locates the neighborhood of a constrained optimal solution and converges to a final iterate without constraint violation; this final iterate is a limit point that minimizes the objective function among all feasible solutions [33]. The algorithmic success of GPS algorithms is shown in practice by the fact that more than one constrained optimum can easily be found from an appropriate initial starting point [45]-[46].

Final Remarks
This paper introduces a globalized optimization algorithm to distribute the least possible number of PMUs in electrical power networks so as to maximize system observability. GPS algorithms are highly effective and accurate for solving nonlinear models, and a GPSA is implemented for solving the OPP problem in a bounded, non-convex nonlinear framework. Simulation results show that the GPS algorithms perform efficiently: the algorithm generates a sequence of points that converges to an optimal solution no matter how the initial guess is selected. The two phases of the GPSA are a global search on the mesh and a local poll around the incumbent solution; the search step is the global procedure within the GPSA framework for reaching optimality, while the poll step acts locally. GPS algorithms are directional algorithms that admit a straightforward analysis through the Clarke calculus. They neither require nor explicitly approximate the gradient of the function in order to deliver an optimal feasible solution. The GPSA exhibits convergence toward optimality in solving the nonlinear model, supported by the fact that it explicitly enforces a sufficient-decrease condition on the objective function at each iteration. Starting from an arbitrary initial point, the GPSA is shown to find a neighborhood of optimality rapidly and to converge efficiently to an optimum point: an appropriate decrease of the objective is achieved at each GPSA iteration, and a feasible optimal solution is detected. The numerical results were compared with those found by previous nonlinear and stochastic algorithms, and the optimal results in the GPSA framework prove superior from the standpoint of maximum observability while requiring fewer iterations.
These optimal solutions are determined effectively, with guaranteed precision, in a reasonable computational time. Optimization settings such as the poll method and the initial MeshSize are tuned within the GPSA framework to select a suitable search direction toward optimality; the objective function is thus minimized with appropriate PMU placement sites, without becoming trapped at the same local solution point. For all IEEE power systems, the numerical results are similar to those delivered by other optimization algorithms such as RQP, IPM, and BPSO. Moreover, the GPSA shows global convergence to preferable best solutions, and it performs efficiently on larger-scale optimization problems. Pattern search methods demonstrate the applicability of ''derivative-free'' direct search methods in nonlinear optimization, and these algorithms are promising optimization methods for further study and implementation in real engineering problems.