Solving Max-Cut Optimization Problem

The goal of this paper is to find a faster-converging method for the Max-Cut problem. One strategy is to compare the bundle method and the augmented Lagrangian method. We also develop the theoretical convergence properties of these methods.


INTRODUCTION
Optimization is a primary mathematical method aimed at finding the values of variables that minimize a mathematical function. Optimization algorithms are a basic and efficient technique in mathematical programming for arriving at a solution, generally with the help of a computer. Optimization algorithms start with a first estimate of the values of the variables and, by an iterative technique, generate a sequence of improved estimates, or iterates, until an optimal solution is reached.
A good algorithm should be accurate, fast, efficient, and robust, and it should generate a good approximation of an optimal solution.
We present here a short overview of multiplier methods [1]. Work in the field of multiplier methods begins with Joh (1943). Kuhn and Tucker (1951) conducted extensive research in this field; their results on necessary and sufficient conditions are important here. Arrow and Hurwicz (1956) introduced the Lagrangian function [2]. King (1966) [3] developed augmented Lagrangian algorithms.
Hestenes and Powell [4] showed that the algorithm is locally convergent if the second-order sufficient conditions are satisfied. Miele et al. (1971) [5] and Rockafellar (1970) [6] introduced an augmented Lagrangian method for inequality-constrained convex programming.
This method has been studied by Rockafellar in several papers [7], and the augmented Lagrangian method has become a powerful theoretical tool for convex programming. Arrow, Gould, and Howe (1971) [8] studied Rockafellar's augmented Lagrangian method, and Pierre (1971) [9] introduced a special augmented Lagrangian method with local convergence properties. This method was also studied by Lowe (1974) [10] and Bertsekas (1982) [11]. Recently, many researchers have been interested in augmented Lagrangian methods, such as Leyffer (2016) [12], Kanzow et al. (2018) [13], and Lourenço (2018) [14]. The benefit of the augmented Lagrangian method is that it is robust and does not need a feasible starting point. The augmented Lagrangian method has been used to solve optimization problems with both equality and inequality constraints [11].
The bundle method was independently created by Claude Lemarechal [15] and Philip Wolfe [16] in 1975. Since then, a large number of variants of bundle methods have been developed, such as the proximal bundle method (1990) [17] and the trust-region bundle method (2001) [18]. Bundle methods are currently among the most efficient and promising methods for nonsmooth optimization, and they have been used successfully in many practical applications, for example in engineering, economics, mechanics, and optimal control (2002) [19]. The convergence of the minimization algorithm was studied and compared with different versions of the bundle methods using the results of numerical experiments (2013) [20]. Bundle methods have been studied extensively for solving convex and nonconvex optimization problems (2015) [21]. A simple version of the bundle method based on linear programming was suggested (2019) [22].

MAX-CUT Problem
The maximum cut (MAX-CUT) problem is a fascinating area of combinatorial optimization with several applications in different fields, for instance physics, computer science, and mathematics. The problem is NP-hard [23]; the abbreviation NP stands for non-deterministic polynomial time, meaning that no polynomial-time algorithm is known that solves the problem exactly. Several papers have studied the MAX-CUT problem. This line of research was started by Goemans and Williamson (1995) [24] with their approximation algorithm for the MAX-CUT problem based on semidefinite programming relaxation. Poljak showed that linear programming techniques cannot achieve a better approximation [25], which is why semidefinite programming has attracted great interest and research activity (for more details, see [26]).

Figure 1: Example MAX-CUT.
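To make the problem concrete, here is a small illustrative sketch (not from the paper; the graph and weights are invented for illustration) that computes the maximum cut of a tiny weighted graph by brute force. Each vertex is assigned to side 0 or 1, and the cut value is the total weight of edges crossing between the sides.

```python
from itertools import product

def cut_value(weights, assignment):
    """Sum of weights of edges whose endpoints lie on opposite sides."""
    return sum(w for (i, j), w in weights.items()
               if assignment[i] != assignment[j])

def max_cut_brute_force(n, weights):
    """Enumerate all 2^n bipartitions; feasible only for tiny graphs."""
    best_val, best_assign = -1, None
    for assign in product([0, 1], repeat=n):
        v = cut_value(weights, assign)
        if v > best_val:
            best_val, best_assign = v, assign
    return best_val, best_assign

# 4-node example: a cycle with unit weights plus one diagonal edge
weights = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1, (0, 2): 1}
val, assign = max_cut_brute_force(4, weights)
print(val, assign)  # the four cycle edges can all be cut, so the optimum is 4
```

The exponential enumeration is exactly what makes the problem NP-hard; the semidefinite relaxation of Goemans and Williamson avoids it at the price of an approximation.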

The Bundle and Augmented Lagrangian Methods
In the optimization problem, we wish to minimize or maximize some function subject to some constraints. The general optimization problem is given by [27,28]:
\[
\min_{x \in \Omega} f(x), \tag{1}
\]
where the function $f$ is defined from a convex set $\Omega \subseteq \mathbb{R}^n$ into $\mathbb{R}$. A point $x^* \in \Omega$ is a local solution of problem (1) if there exists a neighborhood $N$ of $x^*$ such that $f(x^*) \le f(x)$ for every $x \in N \cap \Omega$.

Optimality Conditions for Unconstrained Optimization
In this section, we consider the problem of unconstrained optimization, i.e., minimization without constraints [27,28], which can be expressed as
\[
\min_{x \in \mathbb{R}^n} f(x). \tag{2}
\]
• If $f$ is continuously differentiable, then a necessary condition for $x^*$ to be a solution of problem (2) is $\nabla f(x^*) = 0$.
• If $f$ is twice continuously differentiable, then necessary conditions for $x^*$ to be a solution of problem (2) are $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succeq 0$ (positive semidefinite).
• Sufficient conditions for $x^*$ to be a local solution of problem (2) are $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succ 0$ (positive definite).
Proof: Suppose $x^*$ is a local minimizer with $\nabla f(x^*) = 0$. We want to prove that $\nabla^2 f(x^*) \succeq 0$. By the Taylor expansion of $f$ at $x^*$ in a direction $d$,
\[
f(x^* + t d) = f(x^*) + t \, \nabla f(x^*)^T d + \tfrac{1}{2} t^2 \, d^T \nabla^2 f(x^*) \, d + o(t^2).
\]
Since $\nabla f(x^*) = 0$, dividing both sides by $t^2$ gives
\[
\frac{f(x^* + t d) - f(x^*)}{t^2} = \tfrac{1}{2} \, d^T \nabla^2 f(x^*) \, d + \frac{o(t^2)}{t^2}.
\]
Taking the limit as $t \to 0$ on both sides, and using the fact that $x^*$ is a local minimizer (so the left-hand side is nonnegative for small $t$), we get $d^T \nabla^2 f(x^*) \, d \ge 0$ for every direction $d$. So $\nabla^2 f(x^*)$ is positive semidefinite.
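As a quick numerical illustration of these optimality conditions (a sketch of our own, not from the paper), we can check the first- and second-order conditions at the known minimizer of $f(x,y) = (x-1)^2 + 2y^2$ using finite differences:

```python
def f(x, y):
    return (x - 1) ** 2 + 2 * y ** 2

h = 1e-5
x_star, y_star = 1.0, 0.0  # analytic minimizer of f

# First-order condition: the central-difference gradient should vanish
gx = (f(x_star + h, y_star) - f(x_star - h, y_star)) / (2 * h)
gy = (f(x_star, y_star + h) - f(x_star, y_star - h)) / (2 * h)

# Second-order condition: the diagonal Hessian entries should be positive
hxx = (f(x_star + h, y_star) - 2 * f(x_star, y_star) + f(x_star - h, y_star)) / h ** 2
hyy = (f(x_star, y_star + h) - 2 * f(x_star, y_star) + f(x_star, y_star - h)) / h ** 2

print(gx, gy, hxx, hyy)  # gradient ~ 0; Hessian diagonal ~ (2, 4) > 0
```

The Hessian of this quadratic is diagonal with entries 2 and 4, so both necessary and sufficient conditions hold at $(1, 0)$.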

The Augmented Lagrangian Method
This method started to be used in the 1970s. Initially, it was called the method of multipliers; now, it is called the augmented Lagrangian method. The goal of this method is to solve constrained optimization problems. This is done by replacing a constrained problem with a series of unconstrained problems [4]. The augmented Lagrangian method is analogous to the penalty method, since in both of them a penalty term is added to the objective. The difference is that in the augmented Lagrangian method a Lagrange multiplier term is added as well [27].
The augmented Lagrangian method was introduced by Hestenes [4]. For the equality-constrained problem
\[
\min f(x) \quad \text{subject to} \quad h(x) = 0, \tag{3}
\]
the augmented Lagrangian function combines the Lagrangian with a quadratic penalty term:
\[
L_A(x, \lambda; \mu) = f(x) - \lambda^T h(x) + \frac{\mu}{2} \, \|h(x)\|^2.
\]
We apply the equality-constrained augmented Lagrangian and bundle methods to the linear programming (LP) problem; the initial results provide an idea for extending the work to semidefinite programming. The general LP problem is given as
\[
\min c^T x \quad \text{subject to} \quad Ax = b, \; x \ge 0, \tag{4}
\]
and the dual of problem (4) is given by
\[
\max b^T y \quad \text{subject to} \quad A^T y \le c. \tag{5}
\]
The optimal solution of problem (4) is $x^*$, with optimal value $c^T x^*$; an optimal solution of problem (5) is $y^*$, with optimal value $b^T y^*$.
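As an illustrative sketch (not the paper's implementation), the following pure-Python code applies a basic augmented Lagrangian iteration, with objective $f$, equality constraint $h(x) = 0$, multiplier $\lambda$, and penalty $\mu$, to the toy problem $\min x^2 + y^2$ subject to $x + y = 1$, whose solution is $(1/2, 1/2)$ with multiplier $\lambda^* = 1$. Each unconstrained subproblem is minimized by plain gradient descent; the step size and iteration counts are choices made for this example only.

```python
def augmented_lagrangian(mu=10.0, lam=0.0, outer=20):
    """Minimize f(x,y)=x^2+y^2 s.t. h(x,y)=x+y-1=0 via
    L_A = f - lam*h + (mu/2)*h^2."""
    x, y = 0.0, 0.0
    for _ in range(outer):
        # Inner loop: gradient descent on the augmented Lagrangian in (x, y)
        for _ in range(2000):
            h = x + y - 1.0
            gx = 2 * x - lam + mu * h   # dL_A/dx
            gy = 2 * y - lam + mu * h   # dL_A/dy
            step = 1.0 / (2 + 2 * mu)   # safe step for this quadratic
            x -= step * gx
            y -= step * gy
        # Outer loop: first-order multiplier update lam <- lam - mu * h
        lam -= mu * (x + y - 1.0)
    return x, y, lam

x, y, lam = augmented_lagrangian()
print(round(x, 4), round(y, 4), round(lam, 4))  # -> 0.5 0.5 1.0
```

The multiplier update drives the constraint violation to zero without sending $\mu \to \infty$, which is the practical advantage over a pure penalty method.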

Algorithms and Numerical Computation
In this section, we discuss the numerical results of the algorithms obtained using the Julia language (JuliaBox). The numerical results were generated using the augmented Lagrangian method and validated against the bundle method. This test was done on a specific graph imported from the Biq Mac library [29].

Bundle Methods [30]
We define another method that can be considered a stabilization of the cutting-plane method. We start by adding an additional point, called the center, to the bundle of information. We continue to use the same linear model of our function, but we no longer solve an LP at each iteration. Instead, we compute the next iterate as in Algorithm [2].
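To illustrate the stabilized cutting-plane idea, here is a minimal one-dimensional sketch under our own assumptions (it is not the paper's Algorithm [2]). We minimize the nonsmooth convex function $f(x) = |x - 1|$: the piecewise-linear model built from subgradient cuts is augmented with a quadratic term centered at the stability center, and the center moves only on "serious" steps that actually decrease $f$.

```python
def f(x):            # nonsmooth convex objective
    return abs(x - 1.0)

def subgrad(x):      # a subgradient of f at x
    return 1.0 if x >= 1.0 else -1.0

def prox_model_argmin(bundle, center, t, lo=-10.0, hi=10.0):
    """Minimize max of cuts + (1/2t)(x-center)^2 by ternary search
    (valid because the function is convex in 1-D)."""
    def phi(x):
        model = max(fx + g * (x - xi) for xi, fx, g in bundle)
        return model + (x - center) ** 2 / (2 * t)
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

center = 4.0
bundle = [(center, f(center), subgrad(center))]
for _ in range(30):
    x_new = prox_model_argmin(bundle, center, t=1.0)
    bundle.append((x_new, f(x_new), subgrad(x_new)))  # enrich the model
    if f(x_new) < f(center):   # serious step: move the stability center
        center = x_new
print(round(center, 3))  # converges to 1.0, the minimizer of f
```

The quadratic term keeps iterates near the center, which is exactly what distinguishes this scheme from the unstabilized cutting-plane method.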

Numerical Results
In this section, we review our results and assess the performance of the proposed algorithm. The figures in this section illustrate the number of function calls for the approaches used to solve the MAX-CUT problems. Cases of different sizes were tested, and the results are shown in this section.
In Figure [3] and Figure [4], it is obvious that the augmented Lagrangian method provides more rapid convergence: the bundle method converges after 4 s, while the augmented Lagrangian method requires only 3 s. CPU time may change between runs due to other software running on the same computer. Accordingly, it was more accurate to plot the number of function calls rather than CPU time, since the function-call count cannot be affected by any other program running at the same time as ours. It is evident that the augmented Lagrangian method performed faster and required fewer function calls, while the bundle method required more function calls [31]. In the next section, we present methods for solving the minimization problem (the L-BFGS, BFGS, and CG methods).

L-BFGS and BFGS methods
• Limited-memory BFGS (L-BFGS) is an optimization algorithm in the quasi-Newton family that uses a limited amount of computer memory to approximate the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. • It is a common algorithm for parameter estimation in machine learning. The target problem for the algorithm is to minimize an objective over unconstrained values of a real vector.
• The L-BFGS algorithm solves the minimization problem, given the objective's gradient, by iteratively building approximations of the inverse Hessian matrix.
• The conjugate gradient (CG) method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric positive-definite.

In Figure [5] we plotted the bounds against CPU time to compare the performance of the augmented Lagrangian and bundle methods. This test was performed on a specific graph imported from the Biq Mac library [29]. It is evident that the augmented Lagrangian method performed faster, requiring 28 function calls to converge, while the bundle method required more than 37 function calls.
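The conjugate gradient method mentioned above can be sketched in a few lines (an illustrative pure-Python implementation, not the paper's code), solving $Ax = b$ for a small symmetric positive-definite matrix $A$ chosen here for demonstration:

```python
def conjugate_gradient(A, b, tol=1e-10):
    """Solve Ax = b for symmetric positive-definite A (plain Python lists)."""
    n = len(b)
    x = [0.0] * n

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # initial residual
    p = r[:]                                          # first search direction
    rs_old = dot(r, r)
    for _ in range(n):  # converges in at most n steps in exact arithmetic
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive-definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])  # exact solution is (1/11, 7/11)
```

Symmetry and positive-definiteness of $A$ are what guarantee that the successive search directions are conjugate and that the method terminates in at most $n$ steps.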