Neural network for solving convex quadratic bilevel programming problems
Introduction
Bilevel programming problems (BPPs) are hierarchical optimization problems in which the constraint region is implicitly determined by another optimization problem. The BPP can be formulated as follows:

min_x F(x, y)
s.t. G(x, y) <= 0,
where, for each fixed x, y solves the lower level problem
min_y f(x, y)
s.t. g(x, y) <= 0,

where G and g are vector-valued functions of appropriate dimensions, and F and f are real-valued functions of appropriate dimensions.
Numerous applications in science and engineering, such as network design, transport system planning, management and economic policy, can be formulated as BPPs. Fernandez-Blanco, Arroyo, and Alguacil (2012) constructed a general bilevel programming framework for alternative market-clearing procedures dependent on market-clearing prices. Using bilevel programming and a swarm intelligence technique, Zhang, Zhang, Gao, and Lu (2011) presented a competitive strategic bidding optimization problem in electricity markets. Yang, Zhang, He, and Yang (2009) constructed a bilevel programming model for the flow interception problem with customer choice. Many researchers have studied this field in depth, including the theory, algorithms and applications of bilevel programming (Amouzegar, 1999, Dempe, 2002, Etoa, 2010, Luo et al., 1996, Teng and Li, 2002, Vicente et al., 1994, Wang et al., 2005). Over the past years, a variety of numerical algorithms have been developed for BPPs. However, many engineering applications require real-time solutions, and for such applications neural networks based on circuit implementation (Hopfield & Tank, 1985) are better suited.
Over the years, neural networks for optimization and their engineering applications have been widely investigated. Tank and Hopfield applied the Hopfield network to solving linear programming problems (Hopfield and Tank, 1985, Tank and Hopfield, 1986), which motivated the development of neural networks for solving linear programming (Liu et al., 2010, Wang, 1993, Xia, 1996, Xia and Wang, 1995), variational inequalities (Cheng et al., 2008, Gao et al., 2005, Hu and Wang, 2006, Hu and Wang, 2007), nonlinear programming (Bian and Chen, 2012, Forti et al., 2004, Forti et al., 2006, Hosseini et al., 2013, Liu et al., 2013, Liu et al., 2012, Liu and Wang, 2011, Liu and Wang, 2013, Xia and Wang, 2004) and so on. These neural networks are essentially governed by a set of dynamic systems characterized by an energy function that combines the objective function and the constraints of the original optimization problem. Three common techniques, namely penalty functions, Lagrange functions, and primal-dual functions, are used to construct neural networks for solving optimization problems.
Recently, neural networks for solving BPPs have received attention in the literature (Shih, Wen, Lee, Lan, & Hsiao, 2004), (Hu et al., 2010, Lan et al., 2007, Lv et al., 2010, Lv et al., 2008, Sheng et al., 1996). Based on the Frank-Wolfe method, Sheng et al. (1996) first proposed a neural network to solve a class of BPPs appearing in the algorithm. Shih et al. (2004) utilized the dynamic behavior of neural networks to solve multiobjective programming and multilevel programming problems. Lv and his colleagues (Hu et al., 2010, Lan et al., 2007, Lv et al., 2010, Lv et al., 2008) presented neural networks for solving the bilevel linear programming problem (BLPP), the convex quadratic BPP and the nonlinear BPP. These neural networks for solving BPPs are based on Lagrange functions. However, due to the use of Lagrange multipliers, the number of state variables is roughly doubled, which enlarges the scale of the network. Recently, some neural networks (Liu et al., 2010, Liu et al., 2013, Liu et al., 2012, Liu and Wang, 2011) for nonlinear optimization have been constructed based on penalty functions, which can be used to reduce the scale of neural networks. Therefore, there is a significant need to reduce the scale of neural networks for solving BPPs.
In this paper, following the Karush-Kuhn-Tucker optimality conditions (Facchinei, Jiang, & Qi, 1999), we first transform the convex quadratic bilevel programming problem (CQBPP) into a single level problem. Then an approximately equivalent nonlinear optimization problem is obtained by smoothing the single level problem. In order to solve this approximately equivalent problem effectively, a nonautonomous neural network and neural sub-networks are constructed based on the method of penalty functions and the theory of differential inclusions. Compared with existing neural networks for the CQBPP (Lv et al., 2010), the main advantages of our neural networks are their simple structure and smaller number of state variables; their dynamical behavior and optimization capabilities are analyzed in the framework of nonsmooth analysis (Clarke, 1983) and the theory of differential inclusions (Aubin & Cellina, 1984). It is shown that the limit equilibrium point sequence of the proposed neural networks can approximately converge to an optimal solution of the CQBPP under certain conditions. Simulation results on numerical examples and the portfolio selection problem show the effectiveness and performance of the neural network for solving the CQBPP.
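The specific smoothing used in the paper is not reproduced here; in the spirit of Facchinei, Jiang, and Qi (1999), one standard way to smooth the KKT complementarity conditions is Kanzow's smoothed Fischer-Burmeister function, sketched below (the function and parameters are illustrative assumptions, not the paper's exact construction):

```python
import math

def fb_smooth(a, b, mu):
    """Smoothed Fischer-Burmeister function (Kanzow's variant):
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu).
    For mu > 0, phi_mu(a, b) = 0 iff a > 0, b > 0 and a*b = mu, so its
    roots approach the exact complementarity condition
    a >= 0, b >= 0, a*b = 0 as mu -> 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu)

# As mu -> 0, root pairs (a, b) with a*b = mu approach exact complementarity.
for mu in (1e-1, 1e-3, 1e-6):
    a = 1.0          # keep a fixed ...
    b = mu / a       # ... then a*b = mu makes phi_mu vanish
    print(mu, fb_smooth(a, b, mu))  # prints values near 0
```

Replacing each complementarity pair by such a smooth equation turns the single level KKT reformulation into a smooth nonlinear program parameterized by mu.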
The remainder of this paper is organized as follows. In the next section, the preliminaries relevant to CQBPP are introduced. In Section 3, the nonautonomous neural network is derived. The convergence of the proposed neural network is proved in Section 4. In Section 5, neural sub-networks for solving CQBPP are constructed. Simulation results on two numerical examples and the portfolio selection problem are given in Section 6 to demonstrate the effectiveness and performance of the neural network. Finally, Section 7 concludes this paper.
Notation: Given column vectors x and y, x^T y denotes their scalar product.
Preliminaries
In this section, some models, assumptions and lemmas about the CQBPP are introduced, which are needed in the following development. If F and f are quadratic functions and the constraint functions G and g are linear, problem (1) gives rise to the CQBPP, in which the matrices and vectors of the problem data are of appropriate dimensions. The term (UP) is called the upper level problem.
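The explicit problem data are not reproduced above; for orientation, a standard CQBPP of this type (the notation here is illustrative and may differ from the paper's) can be written as:

```latex
\begin{array}{ll}
(\mathrm{UP})\quad \displaystyle\min_{x} & F(x,y)=\tfrac{1}{2}x^{\top}Px+p^{\top}x+\tfrac{1}{2}y^{\top}Qy+q^{\top}y\\[2pt]
\text{s.t.} & Ax+By\le b,\\[4pt]
(\mathrm{LP})\quad \displaystyle\min_{y} & f(x,y)=\tfrac{1}{2}y^{\top}Sy+x^{\top}Ry+s^{\top}y\\[2pt]
\text{s.t.} & Cx+Dy\le d,
\end{array}
```

with P and Q symmetric positive semidefinite and S symmetric positive definite, so that both the upper and lower level objectives are convex.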
Proposed neural network
In this section, based on an unconstrained optimization problem derived from (5), we construct a neural network modeled by a nonautonomous differential inclusion to solve problem (5). Some definitions and properties concerning set-valued maps and nonsmooth analysis are given in the Appendix.
Take a lower bound of the optimal value of problem (5). Based on Section 2, we construct an energy function that combines the objective of problem (5) with penalty terms for its constraints.
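The exact energy function is not reproduced above; as a hedged sketch, an exact-penalty energy of the generic form E(x) = f(x) + sigma * sum_i max(0, g_i(x)) can be illustrated on a one-dimensional toy problem (the problem data and sigma below are illustrative, not the paper's):

```python
import numpy as np

def energy(x, f, g_list, sigma):
    """Exact-penalty energy E(x) = f(x) + sigma * sum_i max(0, g_i(x)).
    For sigma large enough (exact penalty), minimizers of E coincide with
    minimizers of f subject to g_i(x) <= 0."""
    return f(x) + sigma * sum(max(0.0, g(x)) for g in g_list)

# Toy problem: minimize (x-2)^2 subject to x <= 1 (constrained optimum x* = 1).
f = lambda x: (x - 2.0) ** 2
g = [lambda x: x - 1.0]

sigma = 10.0  # any sigma > |f'(1)| = 2 makes the penalty exact here
xs = np.linspace(-1.0, 3.0, 4001)
x_star = xs[np.argmin([energy(x, f, g, sigma) for x in xs])]
print(round(x_star, 2))  # → 1.0
```

The grid search is only for illustration; the paper instead follows the flow of a dynamical system whose equilibria minimize the energy.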
Theoretical analysis
In this section, the convergence and optimality of the proposed neural network are proven. According to the theory of differential inclusions (Aubin & Cellina, 1984), we have the following.
Definition 2 A solution of system (9) on [0, T], T > 0, with initial condition x(0) = x_0, is an absolutely continuous function x(t) on [0, T] such that x(0) = x_0 and x(t) satisfies the differential inclusion (9) for almost every t in [0, T].
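To make Definition 2 concrete, the following minimal sketch simulates the scalar differential inclusion x' ∈ -Sign(x), where Sign(0) = [-1, 1] (an illustrative inclusion, not the paper's system (9)); its Filippov solution is absolutely continuous and reaches the equilibrium x = 0 in finite time:

```python
def simulate_inclusion(x0, steps=1000, dt=0.01):
    """Forward-Euler simulation of x' in -Sign(x) with Sign(0) = [-1, 1].
    Away from 0 the right-hand side is single-valued; at (or within one step
    of) 0 we pick the selection 0 from [-1, 1], which keeps the state at the
    equilibrium, matching the Filippov solution."""
    x = x0
    traj = [x]
    for _ in range(steps):
        if abs(x) <= dt:      # within one step of the set-valued region
            x = 0.0           # selection from [-1, 1] that holds x at 0
        else:
            x -= dt * (1.0 if x > 0 else -1.0)
        traj.append(x)
    return traj

traj = simulate_inclusion(1.0)
print(traj[-1])  # → 0.0 (equilibrium reached in finite time t = |x0|)
```

Finite-time arrival at a nonsmooth equilibrium is exactly the behavior that the differential-inclusion framework is needed to describe.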
Assumption 3 .
Definition 3 is said to be a
Neural sub-networks for CQBPP
In this section, we will design neural sub-networks for solving problem (5). From Section 4, neural network (9) is Lyapunov stable and converges to a solution of problem (7) under some conditions. However, that solution is not guaranteed to be a solution of (5). From Section 3, one should choose a larger value of the penalty parameter to obtain an approximate solution of problem (5). In the following, neural sub-networks are constructed for solving problem (5) by using neural network (9).
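The cascade idea can be illustrated with a hedged toy sketch (not the paper's sub-network scheme (19)): a sequence of gradient-flow stages with increasing penalty weight, each warm-started from the output of its predecessor, drives the state toward the constrained optimum:

```python
def stage_flow(x0, grad_E, sigma, steps=5000):
    """One sub-network stage: Euler-discretized gradient flow x' = -grad E(x),
    with a step size scaled down as the penalty weight sigma grows stiffer."""
    x, dt = x0, 0.5 / (1.0 + sigma)
    for _ in range(steps):
        x -= dt * grad_E(x)
    return x

# Toy cascade: minimize (x-2)^2 subject to x <= 1 via smoothed quadratic
# penalties E_k(x) = (x-2)^2 + sigma_k * max(0, x-1)^2 with increasing
# sigma_k; each stage is warm-started from the previous stage's output.
x = 0.0
for sigma in (1.0, 10.0, 100.0, 1000.0):
    grad = lambda z, s=sigma: 2.0 * (z - 2.0) + 2.0 * s * max(0.0, z - 1.0)
    x = stage_flow(x, grad, sigma)
print(round(x, 2))  # → 1.0, the constrained optimum
```

Each stage's equilibrium, (2 + sigma) / (1 + sigma) here, only approximates the constrained optimum, which is why the penalty parameter must be driven upward across sub-networks.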
Numerical simulations
In this section, we use two numerical examples and the portfolio selection problem to illustrate the effectiveness and performance of neural network (19) for solving problem (5).
Example 1 Consider the following optimization problem (Muu & Quy, 2003).
For neural network (9), we choose the parameter obtained by solving (20). Fig. 1 illustrates the solution trajectories of neural network (9).
Conclusions
Based on the method of penalty functions, this paper has presented a recurrent neural network for solving CQBPPs. Compared with existing neural networks, no Lagrange multipliers or slack variables are involved in the proposed recurrent neural network. Using the theory of nonsmooth analysis, differential inclusions and a Lyapunov-like method, it has been proven that the limit equilibrium point sequence of the proposed neural networks can approximately converge to an optimal solution of the CQBPP under certain conditions.
Acknowledgment
This publication was made possible by NPRP grant # NPRP 4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. This work was also supported by Natural Science Foundation of China (grant no: 61374078).
References (43)
- et al. (2013). A recurrent neural network for solving a class of generalized convex optimization problems. Neural Networks.
- et al. (2010). A neural network approach for solving linear bilevel programming problem. Knowledge-Based Systems.
- et al. (2007). A hybrid neural network approach to bilevel programming problems. Applied Mathematics Letters.
- et al. (2012). A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for portfolio optimization. Neural Networks.
- et al. (2010). A neural network for solving a convex quadratic bilevel programming problem. Journal of Computational and Applied Mathematics.
- et al. (2008). A neural network approach for solving nonlinear bilevel programming problem. Computers & Mathematics with Applications.
- et al. (2004). A neural network approach to multiobjective and multilevel programming problems. Computers & Mathematics with Applications.
- et al. (2004). A one-layer recurrent neural network for support vector machine learning. IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics).
- et al. (2009). Bi-level programming model and hybrid genetic algorithm for flow interception problem with customer choice. Computers & Mathematics with Applications.
- (1999). An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme. IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics).
- A DC programming approach for a class of bilevel programming problems and its application in portfolio selection. Numerical Algebra, Control and Optimization.
- Differential inclusions: set-valued maps and viability theory.
- Convex two-level optimization. Mathematical Programming.
- Smoothing neural network for constrained non-Lipschitz optimization with applications. IEEE Transactions on Neural Networks and Learning Systems.
- A neutral-type delayed projection neural network for solving nonlinear variational inequalities. IEEE Transactions on Circuits and Systems II: Express Briefs.
- Optimization and nonsmooth analysis.
- Foundations of bilevel programming.
- Solving convex quadratic bilevel programming problems using an enumeration sequential quadratic programming algorithm. Journal of Global Optimization.
- A smoothing method for mathematical programs with equilibrium constraints. Mathematical Programming.
- A unified bilevel programming framework for price-based market clearing under marginal pricing. IEEE Transactions on Power Systems.