LINEARIZED ALTERNATING DIRECTION METHOD OF MULTIPLIERS FOR SEPARABLE CONVEX OPTIMIZATION OF REAL FUNCTIONS IN COMPLEX DOMAIN

The alternating direction method of multipliers (ADMM) for separable convex optimization of real functions in complex variables was proposed recently [22]. Furthermore, the convergence and the O(1/K) convergence rate of ADMM in the complex domain have also been derived [23]. In this paper, a fast linearized ADMM in the complex domain is presented for the case where the subproblems do not admit closed-form solutions. First, some useful results in the complex domain are developed by using the Wirtinger Calculus technique. Second, the convergence of the linearized ADMM in the complex domain is established via a variational inequality (VI) approach. Third, an extended model of the least absolute shrinkage and selection operator (LASSO) is solved by the linearized ADMM in the complex domain. Finally, numerical simulations are provided to show that the linearized ADMM in the complex domain converges rapidly.


Introduction
Many nonlinear optimization problems with complex variables are encountered in applied mathematics and engineering, for instance in signal processing and control theory [1,4,5,16,20,26,34]. The usual approach to analyzing an optimization problem in the complex domain is to separate it into real and imaginary parts and then recast it as an equivalent real optimization problem, doubling the size of the constraint conditions; see [18,25,31,32] and references therein. However, this may lose the coupling relationship within the signal itself [22,30].
Under the assumption that $L_0(x, y, \lambda)$ has a saddle point, the ADMM iterations are well defined. In some convex optimization problems, however, the matrix $A$ or $B$ is not a scalar matrix. For example, when the structure of the matrix $A$ is general, the $x$-update subproblem does not have a closed-form solution. If an inner iterative method is used to solve the $x$-update subproblem, the complexity of the algorithm increases greatly. In the real domain, the quadratic term $\|Ax + By^k - b + \frac{1}{\rho}\lambda^k\|_2^2$ can be linearized by classical calculus. He [13] proposed a new alternating direction method (ADM) which approximately solves two strongly monotone sub-VI problems in each iteration and allows the parameters to vary from iteration to iteration. Osher [27] proposed a fast linearized Bregman method applied to the denoising of undersampled signals in compressive sensing. Yang and Yuan [35] proposed a linearized ADMM to solve the subproblems of some nuclear norm optimization problems. Li [24] proposed a linearized ADMM to solve two general LASSO models: the sparse group LASSO and the fused LASSO. In [36], an implementable numerical algorithm called fast linearized ADMM was proposed for signal reconstruction by solving the augmented $\ell_1$-regularized problem. Ouyang [28] presented a novel acceleration framework for linearized ADMM whose basic idea is to incorporate a multi-step acceleration scheme into linearized ADMM; the accelerated method is demonstrated to have a better rate of convergence than the plain linearized ADMM. Li and Sun [21] presented a majorized ADMM with indefinite proximal terms for solving linearly constrained 2-block convex composite optimization problems in which each block of the objective is the sum of a nonsmooth convex function and a smooth convex function; by choosing the indefinite proximal terms properly, the global convergence and the iteration complexity were established in the non-ergodic sense.
In [14], a new linearized ADMM is proposed by choosing a positive-indefinite proximal regularization term. The global convergence of the new linearized ADMM is proved, and its worst-case convergence rate measured by the iteration complexity is also established. He [15] gave a unified approach, based on a VI framework in the real domain, to show the O(1/K) convergence rate for both the original ADMM and its linearized variant.
Although the convergence and the linear convergence rate of ADMM in the complex domain have been obtained in [22], and many linearized ADMM methods have been presented in the real domain, the convergence of linearized ADMM in the complex domain has not yet been given. The purpose of this paper is to present a fast linearized ADMM for separable convex optimization of real functions in complex variables. By using the Wirtinger Calculus technique, we develop some new results in the complex domain that are needed in this paper. The convergence of the proposed linearized ADMM is established under some mild assumptions. As an application, we apply the new linearized ADMM in the complex domain to an extended LASSO model. Numerical simulation results are provided to show that the proposed algorithm is indeed more efficient and more robust.
The outline of the paper is as follows. In Section 2, we develop some new results in the complex domain by using the Wirtinger Calculus technique. In Section 3, the form of the VI in the complex domain is given by using the Wirtinger Calculus technique. The linearized ADMM in the complex domain based on a VI approach and its convergence are presented in Section 4. In Section 5, an extended LASSO model in the complex domain is solved by the linearized ADMM. In Section 6, the numerical simulations are reported. Finally, some conclusions are drawn in Section 7.

Preliminaries of Wirtinger Calculus
In this section, we recall and develop some well-known results in complex domain by using Wirtinger Calculus technique. For a comprehensive treatment of Wirtinger Calculus, we refer to [2,3,29].

Complex-to-Real and Complex-to-Complex Mappings
Let $z = u + jv \in \mathbb{C}^n$, where $u \in \mathbb{R}^n$ and $v \in \mathbb{R}^n$ are the real part and the imaginary part of $z$, respectively. The most commonly used mapping $\mathbb{C}^n \to \mathbb{R}^{2n}$ takes a very simple form,
$$ z \in \mathbb{C}^n \mapsto z_r = \begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^{2n}, $$
which is obtained by stacking the real part $u$ on top of the imaginary part $v$. The second mapping is defined by simple concatenation of the complex vector and its complex conjugate,
$$ z \in \mathbb{C}^n \mapsto z_c = \begin{pmatrix} z \\ \bar{z} \end{pmatrix} \in \mathcal{C} = \{(z, \bar{z}) \,|\, z \in \mathbb{C}^n\} \subseteq \mathbb{C}^{2n}, $$
which is obtained by stacking $z$ on top of its complex conjugate $\bar{z}$. Similarly, we also have the mapping $z \in \mathbb{C}^n \mapsto \tilde{z}_c = (\bar{z}, z) \in \mathcal{C}$.
The complex vector $z_c \in \mathbb{C}^{2n}$ is related to the real vector $z_r \in \mathbb{R}^{2n}$ by
$$ z_c = J_n z_r, \qquad z_r = \tfrac{1}{2} J_n^H z_c, \qquad J_n = \begin{pmatrix} I_n & jI_n \\ I_n & -jI_n \end{pmatrix}, \tag{2.1} $$
where the superscript $(\cdot)^H$ denotes the Hermitian conjugate, and the transformation satisfies $J_n J_n^H = J_n^H J_n = 2I_{2n}$. The linear map $J_n$ is an isomorphism from $\mathbb{R}^{2n}$ onto $\mathcal{C} \subseteq \mathbb{C}^{2n}$, and its inverse is given by $J_n^{-1} = \tfrac{1}{2} J_n^H$.
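As a quick numerical sanity check (not part of the original development), the following sketch verifies the stated properties of $J_n$, assuming the block form $J_n = [[I, jI], [I, -jI]]$:

```python
import numpy as np

def J(n):
    """Assumed block form of J_n = [[I, jI], [I, -jI]], mapping z_r = (u; v) to z_c = (z; conj(z))."""
    I = np.eye(n)
    return np.block([[I, 1j * I], [I, -1j * I]])

n = 4
rng = np.random.default_rng(0)
u, v = rng.standard_normal(n), rng.standard_normal(n)
z = u + 1j * v
z_r = np.concatenate([u, v])   # real representation (u stacked on v)
Jn = J(n)
z_c = Jn @ z_r                 # augmented complex representation

# z_c stacks z on top of its complex conjugate
assert np.allclose(z_c[:n], z) and np.allclose(z_c[n:], np.conj(z))
# J_n J_n^H = J_n^H J_n = 2 I_{2n}, hence J_n^{-1} = (1/2) J_n^H
assert np.allclose(Jn @ Jn.conj().T, 2 * np.eye(2 * n))
assert np.allclose(0.5 * Jn.conj().T @ z_c, z_r)
print("J_n properties verified")
```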

Wirtinger's Derivatives
Let $f(z)$ be a complex function defined on $\mathbb{C}^n$, where $z = u + jv$. The function $f(z)$ may be regarded as defined either on $\mathbb{R}^{2n}$ or on $\mathbb{C}^n$. These functions may take different forms, but they are equally valued; for convenience, we use the same symbol $f$ to denote them [3]. The Wirtinger derivatives are defined as
$$ \frac{\partial f}{\partial z} = \frac{1}{2}\Big(\frac{\partial f}{\partial u} - j\frac{\partial f}{\partial v}\Big), \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\Big(\frac{\partial f}{\partial u} + j\frac{\partial f}{\partial v}\Big). \tag{2.2} $$
Although Definition (2.2) often allows the partial derivatives to be expressed elegantly in terms of $z$ and $\bar{z}$, neither of them contains enough information by itself to express the change of a function with respect to a change in $z$. This motivates the following definitions of the real gradient and the complex gradient [2],
$$ \frac{\partial f}{\partial z_r} = \begin{pmatrix} \partial f / \partial u \\ \partial f / \partial v \end{pmatrix}, \qquad \frac{\partial f}{\partial z_c} = \begin{pmatrix} \partial f / \partial z \\ \partial f / \partial \bar{z} \end{pmatrix}, $$
respectively.
The linear map $J_n$ in (2.1) also defines a one-to-one correspondence between $\partial f / \partial z_r$ and $\partial f / \partial z_c$, namely [2]
$$ \frac{\partial f}{\partial z_r} = J_n^T \frac{\partial f}{\partial z_c}, \qquad \frac{\partial f}{\partial z_c} = \frac{1}{2}\bar{J}_n \frac{\partial f}{\partial z_r}. $$
Similarly, we can define the Hessian matrix in the complex domain [2,29]. The real Hessian $H_{rr}$ can be defined as
$$ H_{rr} = \frac{\partial}{\partial z_r}\Big(\frac{\partial f}{\partial z_r}\Big)^T = \frac{\partial^2 f}{\partial z_r \, \partial z_r^T}. $$
If the real-valued function $f(z_r)$ has second partial derivatives with respect to the components of $z_r$, it is well known that the real Hessian $H_{rr}$ is symmetric.
It is also convenient to define the corresponding complex-domain Hessian quantities; see [2,29] for details.

First-order complex Taylor series expansion
The function $f(z_r): \mathbb{R}^{2n} \to \mathbb{R}$ may be regarded as defined either on $\mathbb{R}^{2n}$ or on $\mathcal{C}$; its first-order real Taylor series expansion is
$$ f(z_r + \Delta z_r) \approx f(z_r) + \Big(\frac{\partial f}{\partial z_r}\Big)^T \Delta z_r. $$
Since the set $\mathbb{R}^{2n}$ is isomorphic to the set $\mathcal{C}$, by (2.7), (2.12) and $\frac{\partial f}{\partial \bar{z}} = \overline{\big(\frac{\partial f}{\partial z}\big)}$, the function $f(z_c)$ has the first-order complex Taylor series expansion
$$ f(z_c + \Delta z_c) \approx f(z_c) + \Big(\frac{\partial f}{\partial z_c}\Big)^T \Delta z_c = f(z_c) + 2\,\mathrm{Re}\Big\{\Big(\frac{\partial f}{\partial z}\Big)^T \Delta z\Big\}. $$
This means that the first-order complex Taylor series expansion of a real-valued function is real-valued.

Second-order Series of real-valued function in Complex Domain
The function $f(z_r): \mathbb{R}^{2n} \to \mathbb{R}$ has the second-order real Taylor series expansion
$$ f(z_r + \Delta z_r) \approx f(z_r) + \Big(\frac{\partial f}{\partial z_r}\Big)^T \Delta z_r + \frac{1}{2}\Delta z_r^T H_{rr} \Delta z_r. \tag{2.16} $$
By (2.11), the second-order term of (2.16) can be represented equivalently as
$$ \frac{1}{2}\Delta z_r^T H_{rr} \Delta z_r = \frac{1}{2}\Delta z_c^H H_{cc} \Delta z_c, \qquad H_{cc} = \frac{1}{4} J_n H_{rr} J_n^H. \tag{2.17} $$
From (2.16) and (2.17), the second-order complex Taylor series expansion of $f(z)$ is given by
$$ f(z_c + \Delta z_c) \approx f(z_c) + 2\,\mathrm{Re}\Big\{\Big(\frac{\partial f}{\partial z}\Big)^T \Delta z\Big\} + \frac{1}{2}\Delta z_c^H H_{cc} \Delta z_c. \tag{2.18} $$
Note that all of the terms in (2.18) are real-valued. As for the Taylor series, we consider the following example, which will be used in the subsequent analysis.

Example 2.1. Let
$$ f(z) = \|Az - b\|_2^2, \tag{2.19} $$
where $z \in \mathbb{C}^n$, $b \in \mathbb{C}^p$ and $A \in \mathbb{C}^{p \times n}$. It follows from (2.19) that
$$ \frac{\partial f}{\partial z} = A^T \overline{(Az - b)} \qquad \text{and} \qquad \frac{\partial f}{\partial \bar{z}} = A^H (Az - b). $$
So the augmented complex gradient and the augmented complex Hessian matrix of $f(z)$ are given by
$$ \frac{\partial f}{\partial z_c} = \begin{pmatrix} A^T \overline{(Az - b)} \\ A^H (Az - b) \end{pmatrix} \qquad \text{and} \qquad H_{cc} = \begin{pmatrix} A^H A & 0 \\ 0 & \overline{A^H A} \end{pmatrix}. \tag{2.25} $$
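The gradient identities of Example 2.1 can be checked numerically. The following illustrative sketch (with made-up matrix sizes) compares the conjugate Wirtinger gradient $A^H(Az-b)$ against finite differences of the real representation, using $\partial f/\partial u = 2\,\mathrm{Re}(\partial f/\partial\bar z)$ and $\partial f/\partial v = 2\,\mathrm{Im}(\partial f/\partial\bar z)$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5, 3
A = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))
b = rng.standard_normal(p) + 1j * rng.standard_normal(p)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

f = lambda z: np.linalg.norm(A @ z - b) ** 2   # real-valued function of a complex vector

# Conjugate Wirtinger gradient: df/d(conj z) = A^H (A z - b)
g_conj = A.conj().T @ (A @ z - b)

# Central finite differences in the real coordinates u and v
eps = 1e-6
for k in range(n):
    e = np.zeros(n); e[k] = eps
    du = (f(z + e) - f(z - e)) / (2 * eps)            # partial wrt u_k
    dv = (f(z + 1j * e) - f(z - 1j * e)) / (2 * eps)  # partial wrt v_k
    assert abs(du - 2 * g_conj[k].real) < 1e-4
    assert abs(dv - 2 * g_conj[k].imag) < 1e-4
print("Wirtinger gradient of ||Az - b||^2 verified")
```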

VI in Complex Domain
A VI is an inequality involving a functional which has to be solved for all possible values of a given variable, usually belonging to a convex set [9,19]. The mathematical theory of VIs was initially developed to deal with equilibrium problems, notably the Signorini problem [10]. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory [6,11,17,37]. In this section, the form of the VI in the complex domain is presented.

VI of Linear Constrained Convex Optimization in Complex Domain
We consider the linearly constrained convex optimization problem
$$ \min_x \; f(x) \quad \text{s.t.} \quad Ax = b, \tag{3.1} $$
where $f$ is a real-valued convex function in the complex variable $x \in \mathbb{C}^n$, $A \in \mathbb{C}^{p \times n}$ is a given matrix and $b \in \mathbb{C}^p$ is a given vector. An equivalent form of (3.1) in the augmented variables $(x, \bar{x})$ is given in [22]. The Lagrangian function of the optimization problem is
$$ L(x, \lambda) = f(x) + \mathrm{Re}\{\lambda^H (Ax - b)\}, \tag{3.3} $$
and a saddle point $(x^*, \lambda^*)$ satisfies $L(x^*, \lambda) \le L(x^*, \lambda^*) \le L(x, \lambda^*)$ for all $x$ and $\lambda$. An equivalent expression of (3.4) is the following VI: find $\mu^* = (x^*, \lambda^*) \in \Omega$ such that
$$ f(x) - f(x^*) + \mathrm{Re}\{(\mu - \mu^*)^H F(\mu^*)\} \ge 0 \quad \forall\, \mu \in \Omega, \qquad F(\mu) = \begin{pmatrix} A^H \lambda \\ -(Ax - b) \end{pmatrix}, \tag{3.5} $$
where $\Omega = \mathbb{C}^n \times \mathbb{C}^p$.
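As an illustration (a toy instance, not taken from the paper), the VI characterization can be checked numerically. The sketch below uses $f(x) = \|x\|_2^2$, for which the minimum-norm solution $x^* = A^H(AA^H)^{-1}b$ with the multiplier choice $\lambda^* = -2(AA^H)^{-1}b$ satisfies the VI; the inequality is then tested at random points:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 6
A = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))
b = rng.standard_normal(p) + 1j * rng.standard_normal(p)

# Minimum-norm solution of min ||x||^2 s.t. Ax = b, and a multiplier with 2 x* + A^H lam* = 0
w = np.linalg.solve(A @ A.conj().T, b)
x_star = A.conj().T @ w
lam_star = -2 * w

f = lambda x: np.linalg.norm(x) ** 2

# VI: f(x) - f(x*) + Re{(x - x*)^H A^H lam*} + Re{(lam - lam*)^H (-(A x* - b))} >= 0
for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    lam = rng.standard_normal(p) + 1j * rng.standard_normal(p)
    vi = (f(x) - f(x_star)
          + np.real((x - x_star).conj() @ (A.conj().T @ lam_star))
          - np.real((lam - lam_star).conj() @ (A @ x_star - b)))
    assert vi >= -1e-9
print("VI inequality verified on random samples")
```

For this quadratic $f$, the left-hand side of the VI reduces to $\|x - x^*\|^2 \ge 0$, which is why the check passes with equality only at $x = x^*$.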

VI of Separable Convex Optimization with Linear Constraints in Complex Domain
By Assumption 1, the Lagrangian function $L_0$ in (1.2) has a saddle point, i.e., there exists $(x^*, y^*, \lambda^*)$ for which
$$ L_0(x^*, y^*, \lambda) \le L_0(x^*, y^*, \lambda^*) \le L_0(x, y, \lambda^*) \tag{3.7} $$
holds for all $x$, $y$, $\lambda$. An equivalent expression of (3.7) is the following VI (3.8); simplifying (3.8)–(3.9), the VI reformulation is to find $w^* = (x^*, y^*, \lambda^*) \in \Omega = \mathbb{C}^n \times \mathbb{C}^m \times \mathbb{C}^p$ such that
$$ \theta(u) - \theta(u^*) + \mathrm{Re}\{(w - w^*)^H F(w^*)\} \ge 0 \quad \forall\, w \in \Omega, \tag{3.10} $$
where $u = (x, y)$, $\theta(u) = f(x) + g(y)$ and
$$ F(w) = \begin{pmatrix} A^H \lambda \\ B^H \lambda \\ -(Ax + By - b) \end{pmatrix}. $$
We use $\Omega^*$ to denote the solution set of the variational inequality (3.10), namely $w^* = (x^*, y^*, \lambda^*) \in \Omega^*$. (3.11)

Linearized ADMM in Complex Domain
In this section, we present the linearized ADMM for separable convex optimization in complex domain based on the VI approach.

Linearized ADMM for Separable Convex Optimization in Complex Domain
From Example 2.1, the first-order Taylor series expansion of the quadratic term in (1.4a) at $x^k$, together with a proximal term, gives
$$ \frac{\rho}{2}\Big\|Ax + By^k - b + \frac{1}{\rho}\lambda^k\Big\|_2^2 \approx \frac{\rho}{2}\Big\|Ax^k + By^k - b + \frac{1}{\rho}\lambda^k\Big\|_2^2 + \rho\,\mathrm{Re}\Big\{(x - x^k)^H A^H \Big(Ax^k + By^k - b + \frac{1}{\rho}\lambda^k\Big)\Big\} + \frac{\xi}{2}\|x - x^k\|_2^2. $$
So the $x$-update subproblem becomes
$$ x^{k+1} = \arg\min_x \; f(x) + \frac{\xi}{2}\Big\|x - x^k + \frac{\rho}{\xi} A^H \Big(Ax^k + By^k - b + \frac{1}{\rho}\lambda^k\Big)\Big\|_2^2. $$
The linearized ADMM iterations for the optimization problem (1.1) are given by
$$ \begin{aligned} x^{k+1} &= \arg\min_x \; f(x) + \frac{\xi}{2}\Big\|x - x^k + \frac{\rho}{\xi} A^H \Big(Ax^k + By^k - b + \frac{1}{\rho}\lambda^k\Big)\Big\|_2^2, \\ y^{k+1} &= \arg\min_y \; g(y) + \frac{\rho}{2}\Big\|Ax^{k+1} + By - b + \frac{1}{\rho}\lambda^k\Big\|_2^2, \\ \lambda^{k+1} &= \lambda^k + \rho\,(Ax^{k+1} + By^{k+1} - b). \end{aligned} $$
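The linearization is justified by a majorization argument: when $\xi I - \rho A^H A \succeq 0$, the linearized quadratic plus the proximal term dominates the original quadratic term. A small numerical sketch of this inequality (with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 5, 4
A = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))
c = rng.standard_normal(p) + 1j * rng.standard_normal(p)
rho = 1.5
xi = rho * np.linalg.norm(A.conj().T @ A, 2)   # xi >= rho * lambda_max(A^H A)

def q(x):
    """Quadratic term (rho/2)||Ax + c||^2."""
    return 0.5 * rho * np.linalg.norm(A @ x + c) ** 2

def q_lin(x, xk):
    """Its first-order expansion at xk plus the proximal term (xi/2)||x - xk||^2."""
    r = A @ xk + c
    return (q(xk) + rho * np.real((x - xk).conj() @ (A.conj().T @ r))
            + 0.5 * xi * np.linalg.norm(x - xk) ** 2)

for _ in range(100):
    x  = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    xk = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    # exact identity: q(x) = q_lin with (rho/2)||A(x-xk)||^2 in place of the proximal term,
    # so the majorization holds whenever xi I - rho A^H A >= 0
    assert q(x) <= q_lin(x, xk) + 1e-9
print("majorization verified")
```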
Proof. Since (4.8) holds for both the $k$-th iteration and the $(k-1)$-th iteration, we obtain the two inequalities (4.19) and (4.20). Setting $y = y^k$ in (4.19) and $y = y^{k+1}$ in (4.20), and then adding the two resulting inequalities, we obtain the conclusion.
This completes the proof.
Assumption: $\xi I - \rho A^H A \succeq 0$, so that the matrix $P$ is positive semi-definite. We define the $P$-norm of a vector $w$ as $\|w\|_P^2 = w^H P w$.

Theorem 4.1. Let $w^{k+1} = (x^{k+1}, y^{k+1}, \lambda^{k+1})$ be generated by (1.4). Then the contraction-type inequality (4.21) holds.

Proof. It follows from (4.11) and (4.16) that the stated estimate holds; rewriting it, we obtain the conclusion. This completes the proof.

The inequality (4.21) is essential for the convergence of the alternating direction method. Note that $P$ is positive semi-definite.
The inequality (4.21) can be written as

Extended LASSO with Linearized ADMM in complex domain
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of large amounts of data. In the early stage of modeling, in order to minimize deviation, one usually includes as many independent variables as possible. In the modeling process, however, one needs to find the subset of independent variables with the most explanatory power for the dependent variable; selecting the independent variables properly improves the explanatory power and predictive precision of the model. Variable selection is therefore an extremely important problem in statistical modeling. The LASSO is an effective estimation method for streamlining the index set. In typical applications, there are many more features than training examples, and the goal is to find a parsimonious model for the data. For general background on the LASSO, see [33]. The LASSO has been widely applied, particularly in the analysis of biological data, where only a small fraction of a huge number of possible factors are actually predictive of some outcome of interest; see [12] for a representative case study.
Consider the extended LASSO problem in the complex domain:
$$ \min_{x, y} \; \|Ax - b\|_2^2 + \alpha \|y\|_1 \quad \text{s.t.} \quad Fx + Gy = c, \tag{5.1} $$
where $x \in \mathbb{C}^n$, $y \in \mathbb{C}^m$, $A \in \mathbb{C}^{p_1 \times n}$ is a given full-row-rank matrix with $\mathrm{Rank}(A) = p_1$, $b \in \mathbb{C}^{p_1}$, $\alpha > 0$ is a scalar regularization parameter, $F \in \mathbb{C}^{p_2 \times n}$, $G \in \mathbb{C}^{p_2 \times m}$ and $c \in \mathbb{C}^{p_2 \times 1}$. By (1.4), the ADMM iterations in the complex domain for (5.1) are
$$ \begin{aligned} x^{k+1} &= \arg\min_x \; \|Ax - b\|_2^2 + \frac{\rho}{2}\|Fx + Gy^k - c + u^k\|_2^2, \\ y^{k+1} &= \arg\min_y \; \alpha\|y\|_1 + \frac{\rho}{2}\|Fx^{k+1} + Gy - c + u^k\|_2^2, \\ u^{k+1} &= u^k + Fx^{k+1} + Gy^{k+1} - c, \end{aligned} \tag{5.2} $$
where $\rho > 0$ is a penalty parameter and $u = \frac{1}{\rho}\lambda$ is the scaled dual variable.
The analytic solution of the $x$-update subproblem (5.2a) is
$$ x^{k+1} = \big(2A^H A + \rho F^H F\big)^{-1}\big(2A^H b + \rho F^H (c - Gy^k - u^k)\big). \tag{5.3} $$
As the matrix $G$ is not an identity matrix, the $y$-update subproblem (5.2b) does not have a closed-form solution, so it is not possible to obtain the exact solution $y^{k+1}$ directly. If inner iterative procedures are required to pursue approximate solutions of the subproblems, the complexity of the algorithm increases greatly.
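The closed-form $x$-update follows from setting the conjugate Wirtinger gradient of the subproblem objective to zero, which yields the normal equations $(2A^H A + \rho F^H F)x = 2A^H b + \rho F^H(c - Gy^k - u^k)$. A sketch verifying this minimizer numerically (toy sizes, assuming the scaled-form subproblem above):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, p1, p2 = 6, 7, 8, 5
cplx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
A, b = cplx(p1, n), cplx(p1)
F, G, c = cplx(p2, n), cplx(p2, m), cplx(p2)
y, u = cplx(m), cplx(p2)   # current y^k and scaled dual u^k
rho = 2.0

h = lambda x: (np.linalg.norm(A @ x - b) ** 2
               + 0.5 * rho * np.linalg.norm(F @ x + G @ y - c + u) ** 2)

# Closed-form x-update from the normal equations of the two quadratic terms
lhs = 2 * A.conj().T @ A + rho * F.conj().T @ F
rhs = 2 * A.conj().T @ b + rho * F.conj().T @ (c - G @ y - u)
x_plus = np.linalg.solve(lhs, rhs)

# x_plus should be the global minimizer: random perturbations never decrease h
for _ in range(100):
    d = 1e-3 * cplx(n)
    assert h(x_plus) <= h(x_plus + d) + 1e-12
print("x-update optimality verified")
```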
Therefore, the $y$-subproblem is replaced by the following linearized optimization problem
$$ y^{k+1} = \arg\min_y \; \alpha\|y\|_1 + \frac{\xi}{2}\Big\|y - y^k + \frac{\rho}{\xi} G^H \big(Fx^{k+1} + Gy^k - c + u^k\big)\Big\|_2^2. \tag{5.5} $$
The analytic solution of (5.5) is
$$ y^{k+1} = S_{\alpha/\xi}\Big(y^k - \frac{\rho}{\xi} G^H \big(Fx^{k+1} + Gy^k - c + u^k\big)\Big), \tag{5.6} $$
where $S_\kappa(\cdot)$ denotes the soft-thresholding operator in the complex domain.

Proof. Let $y = u + jv$, $p = p_{\mathrm{re}} + jp_{\mathrm{im}}$ and $y^k = a + jb$. By the soft-threshold operator in the complex domain [22], we obtain the result (5.6). This completes the proof.

By (5.3) and (5.6), the linearized ADMM iterations in the complex domain for the extended LASSO (5.1) are as follows:
$$ \begin{aligned} x^{k+1} &= \big(2A^H A + \rho F^H F\big)^{-1}\big(2A^H b + \rho F^H (c - Gy^k - u^k)\big), \\ y^{k+1} &= S_{\alpha/\xi}\Big(y^k - \frac{\rho}{\xi} G^H \big(Fx^{k+1} + Gy^k - c + u^k\big)\Big), \\ u^{k+1} &= u^k + Fx^{k+1} + Gy^{k+1} - c. \end{aligned} \tag{5.9} $$
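The complex soft-thresholding operator shrinks each entry toward zero along its phase, $S_\kappa(p)_i = \max(1 - \kappa/|p_i|,\, 0)\, p_i$. A sketch checking that this is indeed the proximal operator of $\kappa\|\cdot\|_1$ in the complex domain:

```python
import numpy as np

def soft_threshold(p, kappa):
    """Complex soft-thresholding: shrink each entry toward 0 along its phase."""
    mag = np.abs(p)
    scale = np.maximum(1.0 - kappa / np.maximum(mag, 1e-30), 0.0)
    return scale * p

# Prox property: y* = argmin_y kappa*||y||_1 + 0.5*||y - p||^2
rng = np.random.default_rng(5)
p = rng.standard_normal(8) + 1j * rng.standard_normal(8)
kappa = 0.7
obj = lambda y: kappa * np.sum(np.abs(y)) + 0.5 * np.linalg.norm(y - p) ** 2
y_star = soft_threshold(p, kappa)

# random perturbations of y_star never decrease the prox objective
for _ in range(200):
    d = 1e-3 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
    assert obj(y_star) <= obj(y_star + d) + 1e-12
print("complex soft-threshold verified as the l1 prox")
```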

Numerical Simulation
All our numerical experiments are carried out on a PC with an Intel(R) Core(TM) i7-4710MQ CPU at 2.50 GHz and 8 GB of physical memory. The PC runs MATLAB R2013a on the Windows 7 Enterprise 64-bit operating system. In model (5.1), $x \in \mathbb{C}^n$ is a discrete complex signal whose entries are generated randomly from $N(0, 1)$, with length $n = 500$, and $y \in \mathbb{C}^m$ is a discrete $r$-sparse complex signal generated randomly from $N(0, 1)$, with length $m = 600$ and at most $r = 50$ nonzero entries. We take $p = 8r$ ($p < n$) measurements via a random matrix $A \in \mathbb{C}^{p \times n}$: $Ax = b$. The signals $x$ and $y$ are related by the equation $Fx - b = Qy$ with $F \in \mathbb{C}^{l \times n}$, $Q \in \mathbb{C}^{l \times m}$ and $l = 8r$. Hence, reconstructing the signals $x$ and $y$ from the measurement $b$ and the relation $Fx - b = Qy$ is generally an ill-posed problem, i.e., an underdetermined system of linear equations [7,8]. As $y$ is sparse, the sparsest solution can be obtained by solving the LASSO model (5.1). The $x$-update subproblem is solved by the iteration formula (5.2a). The $y$-update subproblem does not have a closed-form solution, so we solve it by formula (5.9b). Figure 1 shows the change of the objective function value during the iterations; as seen in the figure, the objective function value decreases monotonically through the iterations.
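For reproducibility, a toy-scale sketch of this experiment can be written as follows. The dimensions here are made up (much smaller than the paper's $n = 500$, $m = 600$), and the generic constraint $Fx + Gy = c$ is used in place of the relation above:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, p1, p2 = 8, 10, 12, 6          # toy sizes, not the paper's instance
cplx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)

x_true = cplx(n)
y_true = np.zeros(m, dtype=complex)
y_true[:2] = cplx(2)                  # sparse y
A, F, G = cplx(p1, n), cplx(p2, n), cplx(p2, m)
b = A @ x_true
c = F @ x_true + G @ y_true

alpha, rho = 0.1, 1.0
xi = 1.01 * rho * np.linalg.norm(G.conj().T @ G, 2)   # xi I - rho G^H G > 0

def soft(p, k):
    """Complex soft-thresholding."""
    return np.maximum(1 - k / np.maximum(np.abs(p), 1e-30), 0) * p

x = np.zeros(n, dtype=complex)
y = np.zeros(m, dtype=complex)
u = np.zeros(p2, dtype=complex)       # scaled dual variable
lhs = 2 * A.conj().T @ A + rho * F.conj().T @ F
for _ in range(5000):
    x = np.linalg.solve(lhs, 2 * A.conj().T @ b + rho * F.conj().T @ (c - G @ y - u))
    y = soft(y - (rho / xi) * (G.conj().T @ (F @ x + G @ y - c + u)), alpha / xi)
    u = u + F @ x + G @ y - c

res = np.linalg.norm(F @ x + G @ y - c)
print("primal residual:", res)
```

The primal residual should shrink toward zero as the iterations proceed, mirroring the monotone decrease reported in Figure 1.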
In Figure 2 (top), the solid red line describes the changes of the primal residual $r^k$ and the dotted line represents the primal residual tolerance $\epsilon^{\mathrm{pri}}$. In Figure 2 (bottom), the solid blue line describes the changes of the dual residual $s^k$ and the dotted line represents the dual residual tolerance $\epsilon^{\mathrm{dual}}$. In Figure 3 and Figure 4, comparisons of the original signal (blue dots) and the reconstructed signal (red circles) are displayed for $x$ and $y$, respectively. As the agreement of the circles and dots in the figures shows, the reconstructed signals recover the original signals very well.
Keeping the parameters fixed, we repeat the experiment 100 times with different random signals, recording the maximum, minimum and average relative errors (of four kinds: the real part of signal $x$, the imaginary part of signal $x$, the real part of signal $y$ and the imaginary part of signal $y$), as well as the iteration numbers and running times. The results are shown in Table 1. From Table 1, it can be seen that the linearized ADMM has both fast convergence and robustness. We also compare the linearized ADMM algorithm (5.9) with the ADMM (5.2) solved by CVX, in terms of relative error, iteration number and run time. Here, the lengths of the signals $x$ and $y$ are $n = 50$ and $m = 60$. The sparsity of $y$ is $r = n/10$. We take $p = 8r$ ($p < n$) measurements via a random matrix $A \in \mathbb{C}^{p \times n}$: $Ax = b$. The signals $x$ and $y$ are related by the equation $Fx - b = Qy$ with $F \in \mathbb{C}^{p \times n}$ and $Q \in \mathbb{C}^{p \times m}$. The results are shown in Table 2. From Table 2, it can be seen that the linearized ADMM converges much faster than the ADMM solved by CVX.

Conclusions
In this paper, the linearized ADMM for separable convex optimization of real functions in the complex domain has been explored. First, based on the theory of complex analysis, the Taylor series of a real-valued function in the complex domain was presented by using the Wirtinger Calculus. After that, the convergence of the linearized ADMM in the complex domain was established via a VI approach. In addition, the closed-form solution of the extended LASSO subproblem was obtained within the linearized ADMM. Finally, numerical simulations were provided to show that the linearized ADMM converges quickly and robustly compared with the CVX-based ADMM in the complex domain.
The authors thank Wu-Sheng Lu (University of Victoria) for insightful comments and suggestions on an earlier draft of this article.