Stability of 1-Bit Compressed Sensing in Sparse Data Reconstruction

1-bit compressed sensing (CS) is an important class of sparse optimization problems. This paper focuses on the stability theory for 1-bit CS with a quadratic constraint. The model is rebuilt by reformulating the sign measurements as linear equality and inequality constraints, and the quadratic constraint with noise is approximated by polytopes to any prescribed level of accuracy. A new concept, the restricted weak RSP of a transposed sensing matrix with respect to the measurement vector, is introduced. Our results show that this property is a necessary and sufficient condition for the stability of 1-bit CS in the noiseless case and a sufficient condition when noise is present.


Introduction
The standard noiseless compressed sensing (CS) model is to solve the following optimization problem:

min ‖x‖_0  s.t.  Ax = y,  (1)

where A ∈ R^{m×n} is a sensing (or measurement) matrix and x is a sparse signal to be reconstructed from a given nonadaptive measurement vector y [1-4]. The l_0-minimization problem is well known to be NP-hard. Hence, to overcome this difficulty, a typical treatment is to resort to the l_1-norm. Along this line, a great many algorithms are available, e.g., the orthogonal matching pursuit algorithm [5], the basis pursuit algorithm [6], the iterative hard thresholding algorithm [7], and the iteratively reweighted least squares algorithm [8]. Moreover, additional assumptions have to be imposed on the measurement matrix A to ensure that a sparse solution/signal can be exactly recovered by l_1-minimization. These conditions include the restricted isometry property [9-11], the coherence condition [12], the null space property [8, 13, 14], and the range space property [15, 16]. In recent research, some work has been done on robust reconstruction conditions (RRC) based on the above traditional properties and their variants, e.g., the exact reconstruction condition [17], the double null space property [18], and the null space property [19].
However, the above CS model cannot be applied to some practical problems; for example, in brain signal processing and sigma-delta converters, only the sign or the support of a signal is measured. This motivates one to consider sparse signal recovery from low-bit measurements. An extreme quantization is only one bit per measurement, which gives rise to the theory of 1-bit compressed sensing (see Boufounos and Baraniuk [20]).
In this paper, we further consider a constrained 1-bit compressed sensing model involving a noisy constraint. Precisely, let A ∈ R^{m×n} and B ∈ R^{l×n} be two given full-row-rank matrices, let y ∈ {1, −1, 0}^m be a given sign measurement vector, let b ∈ R^l be a given vector, and let ε be a positive number. The constrained 1-bit compressed sensing model is described as follows:

(P)  min ‖x‖_0  s.t.  sign(Ax) = y,  ‖b − Bx‖_2 ≤ ε,  (2)

where the last constraint ‖b − Bx‖_2 ≤ ε accounts for noise.
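To make the model (2) concrete, the following small numpy sketch (with illustrative dimensions, random data, and an assumed noise level of our own choosing, not taken from the paper) generates a k-sparse signal, records the sign measurements y = sign(Ax), and checks the noisy side constraint ‖b − Bx‖_2 ≤ ε.

import numpy as np

rng = np.random.default_rng(0)
m, l, n, k = 20, 5, 40, 3            # illustrative sizes, not from the paper
A = rng.standard_normal((m, n))      # sensing matrix
B = rng.standard_normal((l, n))      # matrix in the noisy side constraint

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = np.sign(A @ x_true)              # 1-bit measurements, entries in {-1, 0, 1}
eps = 0.1                            # assumed noise level
b = B @ x_true + 0.02 * rng.standard_normal(l)   # noisy linear side information

# A vector x is feasible for (P) (and for its relaxation (3)) iff both hold:
print(np.array_equal(np.sign(A @ x_true), y))    # True by construction
print(np.linalg.norm(b - B @ x_true) <= eps)     # True for this small noise draw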
The corresponding convex relaxation via the l_1-norm is expressed as

min ‖x‖_1  s.t.  sign(Ax) = y,  ‖b − Bx‖_2 ≤ ε.  (3)

Compared with the recovery of a given signal, it is equally important to study whether the recovered signal is stable. Stability of recovery means that the recovery error stays under control even if the measurements are slightly inaccurate and the data are not exactly sparse. Recent stability studies for CS can be found in [21-25]. However, few theoretical results are available on the stability of 1-bit CS. In general, it is impossible to exactly reconstruct a sparse signal by using only 1-bit information. For example, if sign(Ax*) = (1, 1), then A(x* + v) remains positive for any sufficiently small perturbation v, so x* + v also satisfies the sign measurements. Hence, we turn our attention to recovering part of the information in 1-bit CS, such as the support set or the sign of a target signal. For this reason, the criterion

‖ x/‖x‖_2 − x*/‖x*‖_2 ‖_2 ≤ Δ,  (4)

where x ≠ 0 and x* ≠ 0 and Δ denotes a sufficiently small positive scalar, has been widely used in the 1-bit CS literature. Inspired by this observation, the 1-bit CS problem is said to be stable for noisy reconstruction if, for any nonzero vector x ∈ R^n, there is a nonzero solution x* of (3) such that

‖ x/‖x‖_2 − x*/‖x*‖_2 ‖_2 ≤ C_1 σ_k(x)_1/‖x‖_2 + C_2 ε,  (5)

where C_1 and C_2 are constants depending on the primal problem data (A, y, ε, B, b). If ε = 0 and x is k-sparse, then the right-hand side of (5) is zero and hence x/‖x‖_2 = x*/‖x*‖_2, which in turn implies that sign(x) = sign(x*); i.e., the sign of the target signal is exactly recovered.

The main goal of this paper is to study necessary and/or sufficient conditions for (5). First, a new notion called the restricted weak RSP with respect to y is introduced. Our results show that, for 1-bit CS, this condition is necessary and sufficient for stability if there is no noise, while it is sufficient when noise is present. The analysis is based on the duality theory of linear programming and on the fact that the ball constraint can be approximated by polytopes to any level of accuracy.

The notation used in this paper is standard. Let R^n_+ be the set of nonnegative vectors in R^n. Given a set S, |S| denotes its cardinality. The l_0-norm ‖x‖_0 counts the number of nonzero components of x, and the l_1-norm of x is defined as ‖x‖_1 ≔ Σ_{i=1}^n |x_i|. Let e stand for the vector of ones, i.e., e = (1, . . . , 1)^T. For a vector x, |x| denotes the vector of componentwise absolute values. For any two norms ‖·‖_p and ‖·‖_q with p, q ≥ 1, the induced matrix norm ‖A‖_{p→q} is defined as ‖A‖_{p→q} ≔ max_{‖x‖_p ≤ 1} ‖Ax‖_q. The segment between the points x_1 and x_2 is written as [x_1, x_2], i.e., [x_1, x_2] ≔ {λx_1 + (1 − λ)x_2 : λ ∈ [0, 1]}. Given a vector y ∈ {1, −1, 0}^m, denote the index sets J_+(y) ≔ {i : y_i = 1}, J_−(y) ≔ {i : y_i = −1}, and J_0(y) ≔ {i : y_i = 0}; the sign function is applied componentwise, with sign(t) = 1 for t > 0, sign(t) = −1 for t < 0, and sign(0) = 0. The error of the best k-term approximation of a vector x is defined as σ_k(x)_1 ≔ min{‖x − u‖_1 : ‖u‖_0 ≤ k}. The Hausdorff metric of two sets M_1, M_2 ⊆ R^n is δ^H(M_1, M_2) ≔ max{ sup_{x∈M_1} inf_{z∈M_2} ‖x − z‖_2, sup_{z∈M_2} inf_{x∈M_1} ‖x − z‖_2 }. (10) Robinson's constant σ_{∞,2}(M′, M″) associated with a pair of matrices (M′, M″) is defined as in [25]; it appears in Hoffman's error bound (Lemma 3 below).
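The two quantities entering the criterion (4)-(5) are easy to compute; the sketch below (with invented sample vectors) evaluates the best k-term approximation error σ_k(x)_1 and the normalized direction error, which is the natural error measure here because 1-bit measurements carry no magnitude information.

import numpy as np

def sigma_k(x, k):
    # l1-error of the best k-term approximation: keep the k largest magnitudes,
    # sum the magnitudes of everything that is dropped.
    idx = np.argsort(np.abs(x))[::-1]
    return np.abs(x[idx[k:]]).sum()

def direction_error(x, x_star):
    # Left-hand side of (5): distance between the normalized vectors.
    return np.linalg.norm(x / np.linalg.norm(x) - x_star / np.linalg.norm(x_star))

x = np.array([0.0, 2.0, 0.0, -1.0, 0.03])        # nearly 2-sparse target
x_star = np.array([0.0, 1.9, 0.0, -1.1, 0.0])    # a hypothetical recovered vector
print(sigma_k(x, 2))               # 0.03: x is close to 2-sparse
print(direction_error(x, x_star))  # small when the two directions nearly agree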

Reformulation and Approximation of (3)
The 1-bit CS problem is NP-hard and hence difficult to solve exactly. This motivates us to reformulate the 1-bit CS problem by removing the sign function. The advantage of such a reformulation is that it yields a decoding method based on the theory of linear programming.

Given sign measurements y ∈ {−1, 1, 0}^m, denote by A_+, A_−, and A_0 the submatrices of A whose rows correspond to the index sets J_+(y), J_−(y), and J_0(y), respectively. To simplify notation, we simply write J_+, J_−, and J_0 for J_+(y), J_−(y), and J_0(y). In the following analysis, we always assume that J_+ ∪ J_− ≠ ∅, because otherwise y = 0 and nothing is measured.
The constraint sign(Ax) = y can be rewritten equivalently as

A_+x > 0,  A_−x < 0,  A_0x = 0.

By rearranging the order of the components of y and the order of the associated rows of A if necessary, we may assume without loss of generality that the rows of A indexed by J_+ appear first, followed by those indexed by J_− and J_0. It is clear that

{x : sign(Ax) = y} = ∪_{α>0} {x : A_+x ≥ αe, A_−x ≤ −αe, A_0x = 0}.  (15)

In fact, the inclusion "⊇" is clear. For "⊆," take x satisfying sign(Ax) = y and let α be the smallest magnitude of the nonzero components of Ax; then x belongs to the set on the right-hand side associated with this α. For any fixed α > 0, define the following relaxed problem (denoted by the α-problem for short):

min ‖x‖_1  s.t.  A_+x ≥ αe,  A_−x ≤ −αe,  A_0x = 0,  ‖b − Bx‖_2 ≤ ε.  (18)

Formula (15) shows that F = ∪_{α>0} F_α, where F and F_α denote the feasible regions of the primal problem and of the α-problem, respectively. In addition, F_β ⊆ F_α whenever β ≥ α. Thus, F_α converges to F as α → 0^+, where the limit is understood in the Painlevé-Kuratowski sense.
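The index sets, the submatrices A_+, A_−, A_0, and the union formula (15) can be checked numerically. The sketch below (random data, our own choice of α) verifies that any x with sign(Ax) = y satisfies the α-constraints once α does not exceed the smallest nonzero magnitude of Ax.

import numpy as np

def split_rows(A, y):
    # Submatrices of A whose rows correspond to J+(y), J-(y), and J0(y).
    return A[y > 0], A[y < 0], A[y == 0]

def alpha_feasible(A, y, x, alpha, tol=1e-12):
    Ap, Am, A0 = split_rows(A, y)
    return (np.all(Ap @ x >= alpha - tol)
            and np.all(Am @ x <= -alpha + tol)
            and np.all(np.abs(A0 @ x) <= tol))

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
x = rng.standard_normal(6)
y = np.sign(A @ x)

# Any alpha below the smallest nonzero magnitude of Ax keeps x feasible for the
# alpha-problem constraints, which is the content of the union formula (15).
alpha = 0.5 * np.min(np.abs((A @ x)[y != 0]))
print(alpha_feasible(A, y, x, alpha))   # True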

Proposition 1. A vector x* is an optimal solution of the primal problem (P) if and only if x* is an optimal solution of the α-problem for all sufficiently small α > 0.
Proof. "⇒." Suppose that x* is an optimal solution of the primal problem. Since x* ∈ F = ∪_{α>0} F_α, there exists α′ > 0 such that x* ∈ F_{α′}; hence, for any β ∈ (0, α′], x* is a feasible solution of the β-problem. Since x* is an optimal solution of the primal problem and F_β ⊆ F by (15), x* is an optimal solution of the β-problem.

"⇐." Suppose that x* is an optimal solution of the α-problem for all α ∈ (0, α′). Let x̂ be an optimal solution of the primal problem, and let α″ > 0 be such that x̂ ∈ F_{α″}. Take β ∈ (0, α₀), where α₀ ≔ min{α′, α″}. Then x*, x̂ ∈ F_β due to the monotonicity of F_α with respect to α. By assumption, x* is an optimal solution of the β-problem; since x̂ ∈ F_β, the objective value of x* does not exceed that of x̂. As x* ∈ F_β ⊆ F and x̂ is an optimal solution of the primal problem, x* is also an optimal solution of the primal problem. □
Denote by T* and T*_α the optimal solution sets of (3) and (18), respectively. By a similar argument to the one above, we obtain the following result.

where B stands for the unit l_2-ball, i.e., B ≔ {z ∈ R^l : ‖z‖_2 ≤ 1}. By the separation theorem for convex sets, B can be described as an intersection of infinitely many half-spaces, i.e., B = ∩_{‖a‖_2=1} {z : a^T z ≤ 1}. Notice that

T*_α = {x : A_+x ≥ αe, A_−x ≤ −αe, A_0x = 0, b − Bx ∈ εB, ‖x‖_1 ≤ θ*_α},  (24)

where θ*_α denotes the optimal value of (18). Replacing B in (24) by a polytope P ⊇ B yields a relaxation of T*_α, denoted by T^P_α. The following lemma states that T^P_α can approximate T*_α to any level of accuracy, provided the polytope P is chosen suitably.
Lemma 1 (see [25], Corollary 6.5.2). For any ε > 0, there exists a polytope approximation P of B satisfying P ⊇ B and

δ^H(T^P_α, T*_α) ≤ ε.  (26)

In the remainder of the paper, we fix ε > 0 and choose a polytope P such that T^P_α and T*_α satisfy (26). The polytope can be described as an intersection of a finite number of half-spaces:

P = {z ∈ R^l : a_i^T z ≤ 1, i = 1, . . . , L},

where the a_i, i = 1, . . . , L, are unit vectors (i.e., ‖a_i‖_2 = 1) and L is an integer. For convenience in the subsequent analysis, we further add 2l half-spaces to P, namely β_j^T z ≤ 1 and −β_j^T z ≤ 1 for j = 1, . . . , l, where β_j is the j-th column of the l × l identity matrix. This yields the following polytope:

P_0 ≔ {z ∈ R^l : a_i^T z ≤ 1, i = 1, . . . , L; ±β_j^T z ≤ 1, j = 1, . . . , l}.  (29)

Denote by Ω the collection of the vectors a_i and ±β_j defining P_0. Clearly, P_0 still satisfies (26). Let N ≔ |Ω| and let M_{P_0} be the matrix whose columns are the vectors in Ω. Thus, P_0 can be written as

P_0 = {z ∈ R^l : (M_{P_0})^T z ≤ e_N},

where e_N is the vector of ones in R^N. Replacing B by P_0, we obtain the following approximation of (3):

min ‖x‖_1  s.t.  A_+x ≥ αe,  A_−x ≤ −αe,  A_0x = 0,  (M_{P_0})^T(b − Bx) ≤ εe_N,  (33)

and the solution set of (33) is

T^{P_0}_α = {x : A_+x ≥ αe, A_−x ≤ −αe, A_0x = 0, (M_{P_0})^T(b − Bx) ≤ εe_N, ‖x‖_1 ≤ (θ^{P_0}_α)*},  (34)

where (θ^{P_0}_α)* denotes the optimal value of (33). Since B ⊆ P_0, we have (θ^{P_0}_α)* ≤ θ*_α.
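The construction of P_0 can be mimicked numerically. In the sketch below the unit directions a_i are sampled at random purely for illustration; Lemma 1 requires a suitably chosen (and possibly large) family of half-spaces to reach a prescribed accuracy, which this random choice does not guarantee.

import numpy as np

def outer_polytope_matrix(l, L, rng):
    # Each half-space a_i^T z <= 1 with ||a_i||_2 = 1 contains the unit l2-ball,
    # so any finite collection of them defines a polytope P containing the ball.
    dirs = rng.standard_normal((L, l))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Append the 2l half-spaces +-beta_j^T z <= 1 used in the definition of P0.
    rows = np.vstack([dirs, np.eye(l), -np.eye(l)])
    return rows.T                        # columns are the vectors in Omega

rng = np.random.default_rng(2)
l, L = 3, 50
M = outer_polytope_matrix(l, L, rng)     # plays the role of M_{P0}, shape (l, L + 2l)

# Membership test: z is in P0 iff M^T z <= e_N; points of the unit ball always pass.
z = rng.standard_normal(l)
z /= 1.01 * np.linalg.norm(z)            # a point strictly inside the unit ball
print(np.all(M.T @ z <= 1.0))            # True: P0 contains the unit l2-ball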

Stability Analysis
The concept of the range space property (RSP) was first introduced in [15] to develop a necessary and sufficient condition for uniform recovery of sparse signals via l_1-minimization. It was extended in [26] to the weak RSP for developing a stability theory of convex optimization methods. Recently, the restricted RSP (RRSP) was introduced in [16, 25] to develop sign recovery conditions for sparse signals from 1-bit measurements.
Definition 1 (weak RSP). Given a matrix A ∈ R^{m×n}, the transposed matrix A^T is said to satisfy the weak RSP of order k if, for any two disjoint sets S_1, S_2 ⊆ {1, . . . , n} with |S_1| + |S_2| ≤ k, there exists a vector η ∈ R(A^T) such that η_i = 1 for i ∈ S_1, η_i = −1 for i ∈ S_2, and |η_i| ≤ 1 for i ∉ S_1 ∪ S_2.

To investigate the stability of 1-bit compressed sensing with noise constraints, the notion of weak RSP needs to be extended to the following restricted weak RSP with respect to y.
Definition 2 (restricted weak RSP with respect to y). Given matrices A ∈ R^{m×n}, B ∈ R^{l×n}, and y ∈ {−1, 1, 0}^m, the pair (A^T, B^T) is said to satisfy the restricted weak RSP of order k with respect to y if, for any disjoint subsets S_1, S_2 of {1, . . . , n} with |S_1| + |S_2| ≤ k, there exist vectors w and h such that η ≔ A^T w + B^T h satisfies η_i = 1 for i ∈ S_1, η_i = −1 for i ∈ S_2, and |η_i| ≤ 1 for i ∉ S_1 ∪ S_2, where w = (w^(1), w^(2), w^(3))^T ∈ R^m is partitioned according to (J_+, J_−, J_0) and satisfies w^(1) ≥ 0 and w^(2) ≤ 0.

Theorem 1. Let A ∈ R^{m×n} and B ∈ R^{l×n} be given matrices and b ∈ R^l. Suppose that, for any given vector y ∈ {sign(Ax) : ‖x‖_0 ≤ k}, the following holds: for any x ∈ R^n satisfying y = sign(Ax), there is a solution x* of

min_x ‖x‖_1  s.t.  A_+x ≥ αe,  A_−x ≤ −αe,  A_0x = 0,  ‖b − Bx‖_2 ≤ ε,  (39)

where α > 0 and A_+, A_0, and A_− are the submatrices of A whose rows correspond to the index sets J_+(y), J_−(y), and J_0(y), such that

‖ x/‖x‖_2 − x*/‖x*‖_2 ‖_2 ≤ C σ_k(x)_1/‖x‖_2.

Here, C is a constant depending only on the problem data (A, B, y, b). Then, (A^T, B^T) must satisfy the restricted weak RSP of order k with respect to y.
Proof. Let (S_1, S_2) be any pair of disjoint subsets of {1, . . . , n} with |S_1| + |S_2| ≤ k. To prove that (A^T, B^T) satisfies the restricted weak RSP of order k with respect to y, it suffices to show that there exists a vector η ∈ R(A^T, B^T), η = A^T w + B^T h, such that

η_i = 1 for i ∈ S_1,  η_i = −1 for i ∈ S_2,  |η_i| ≤ 1 for i ∉ S_1 ∪ S_2,  (42)

where w ≔ (w^(1), w^(2), w^(3))^T ∈ R^m is partitioned according to (J_+, J_−, J_0) and satisfies w^(1) ≥ 0 and w^(2) ≤ 0.

Consider the k-sparse vector x ∈ R^n defined by

x_i = 1 for i ∈ S_1,  x_i = −1 for i ∈ S_2,  x_i = 0 otherwise.  (43)

Let y ≔ sign(Ax). By assumption, there is a solution x* of (39) such that ‖x/‖x‖_2 − x*/‖x*‖_2‖_2 ≤ C σ_k(x)_1/‖x‖_2. Since x is k-sparse, σ_k(x)_1 = 0, which in turn implies x/‖x‖_2 = x*/‖x*‖_2. So, sign(x) = sign(x*). This, together with (43), implies that x*_i > 0 for i ∈ S_1, x*_i < 0 for i ∈ S_2, and x*_i = 0 for i ∉ S_1 ∪ S_2. Since x* is a solution of the optimization problem (39), the KKT conditions hold; i.e., there exist w = (w^(1), w^(2), w^(3)) with w^(1) ≥ 0, w^(2) ≤ 0, and a vector h ∈ R^l such that

A^T w + B^T h ∈ ∂‖x*‖_1,  (45)

where ∂‖x*‖_1 is the subdifferential of the l_1-norm at x*, i.e.,

∂‖x*‖_1 = {ξ ∈ R^n : ξ_i = sign(x*_i) if x*_i ≠ 0, |ξ_i| ≤ 1 if x*_i = 0}.  (46)

Hence, (46) ensures that any ξ ∈ ∂‖x*‖_1 satisfies ξ_i = 1 for i ∈ S_1, ξ_i = −1 for i ∈ S_2, and |ξ_i| ≤ 1 for i ∉ S_1 ∪ S_2. This, together with (45), means that η ≔ A^T w + B^T h satisfies (42). Since S_1 and S_2 are arbitrary disjoint subsets of {1, . . . , n} with |S_1| + |S_2| ≤ k, we conclude that (A^T, B^T) satisfies the restricted weak RSP of order k with respect to y. □
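For a fixed pair (S_1, S_2), the condition in Definition 2 is a linear feasibility problem and can be tested with an LP solver. The sketch below follows the reading of Definition 2 given above (in particular, the sign restrictions w^(1) ≥ 0 on J_+ and w^(2) ≤ 0 on J_− are part of that reconstruction); checking the property of order k would mean repeating the test over all disjoint pairs with |S_1| + |S_2| ≤ k.

import numpy as np
from scipy.optimize import linprog

def restricted_weak_rsp_pair(A, B, y, S1, S2):
    # Feasibility test: find w, h with eta = A^T w + B^T h equal to 1 on S1,
    # -1 on S2, in [-1, 1] elsewhere, and w >= 0 on J+(y), w <= 0 on J-(y).
    m, n = A.shape
    l = B.shape[0]
    C = np.hstack([A.T, B.T])                      # eta = C @ [w; h]
    rest = [i for i in range(n) if i not in set(S1) | set(S2)]

    A_eq = C[list(S1) + list(S2)]
    b_eq = np.concatenate([np.ones(len(S1)), -np.ones(len(S2))])
    A_ub = np.vstack([C[rest], -C[rest]])          # |eta_i| <= 1 off S1 and S2
    b_ub = np.ones(2 * len(rest))

    bounds = []
    for i in range(m):
        if y[i] > 0:
            bounds.append((0, None))               # w^(1) >= 0 on J+
        elif y[i] < 0:
            bounds.append((None, 0))               # w^(2) <= 0 on J-
        else:
            bounds.append((None, None))            # w^(3) free on J0
    bounds += [(None, None)] * l                   # h unrestricted

    res = linprog(np.zeros(m + l), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.status == 0                         # 0 means a feasible point exists

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 8))
B = rng.standard_normal((2, 8))
y = np.sign(A @ rng.standard_normal(8))
print(restricted_weak_rsp_pair(A, B, y, S1=[0, 2], S2=[5]))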
We now further show that the restricted weak RSP with respect to y is a sufficient condition for (3) to be stable.
First, for the approximation problem (33), we introduce auxiliary variables t and s to obtain an equivalent form, problem (49). Recall that the solution set of (49) is given by (34). Problem (49) is a linear program, and its dual problem can be written down explicitly. According to the duality theory of linear programming, the solutions of (49) can be characterized by the KKT conditions. □
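The variable-splitting idea behind (49) is the standard reformulation ‖x‖_1 = min{e^T t : −t ≤ x ≤ t}. The sketch below applies it to a generic problem min ‖x‖_1 s.t. Gx ≤ g, Hx = h (a stand-in for the constraint structure, not the exact constraint matrix of (49)) and solves the resulting LP with scipy.

import numpy as np
from scipy.optimize import linprog

def l1_min_via_lp(G, g, H, h):
    # min ||x||_1 s.t. Gx <= g, Hx = h, rewritten with variables (x, t) as
    # min e^T t s.t. Gx <= g, x - t <= 0, -x - t <= 0, Hx = h, t >= 0.
    n = G.shape[1]
    I = np.eye(n)
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[G, np.zeros((G.shape[0], n))],
                     [I, -I],
                     [-I, -I]])
    b_ub = np.concatenate([g, np.zeros(2 * n)])
    A_eq = np.hstack([H, np.zeros((H.shape[0], n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=h,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n] if res.status == 0 else None

rng = np.random.default_rng(4)
H = rng.standard_normal((3, 8))
x0 = np.zeros(8); x0[[1, 5]] = [1.0, -2.0]
x_hat = l1_min_via_lp(np.zeros((1, 8)), np.zeros(1), H, H @ x0)
print(np.round(x_hat, 3))   # a minimum-l1 solution consistent with Hx = Hx0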

For notational convenience, the set in (51) can be written equivalently in the compact form (54). The following two lemmas play a key role in establishing the stability theory for the 1-bit CS problem.

Lemma 3 (Hoffman's error bound).
Let M′ ∈ R^{m×q} and M″ ∈ R^{l×q} be given matrices, and let F ≔ {x ∈ R^q : M′x ≤ d′, M″x = d″} be nonempty for given vectors d′ and d″. For any vector x ∈ R^q, there is a point x* ∈ F such that the distance ‖x − x*‖ is bounded by σ_{∞,2}(M′, M″) times the size of the residual of the linear system at x, i.e., of the vector ((M′x − d′)_+, M″x − d″), where the constant σ_{∞,2}(M′, M″) is Robinson's constant determined by M′ and M″.
Hoffman's error bound indicates that, for a linear system with solution set F, the distance from any point in the space to F can be bounded in terms of Robinson's constant and the amount by which the linear system is violated at that point.
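A small numeric illustration of the two quantities in Hoffman's bound (toy data of our own; Robinson's constant itself is not computed): the distance from a point to F is obtained by a projection, and the residual measures how much the linear system is violated at that point.

import numpy as np
from scipy.optimize import minimize

def residual(Mp, dp, Mpp, dpp, x):
    # Violation of the linear system at x: positive part of the inequality part
    # stacked with the equality part, measured here in the max norm.
    r = np.concatenate([np.maximum(Mp @ x - dp, 0.0), Mpp @ x - dpp])
    return np.max(np.abs(r))

def dist_to_F(Mp, dp, Mpp, dpp, x):
    # Euclidean distance from x to F = {z : Mp z <= dp, Mpp z = dpp},
    # computed as a small projection problem (SLSQP is enough at this scale).
    cons = [{"type": "ineq", "fun": lambda z: dp - Mp @ z},
            {"type": "eq",   "fun": lambda z: Mpp @ z - dpp}]
    res = minimize(lambda z: np.sum((z - x) ** 2), x0=np.zeros_like(x),
                   constraints=cons, method="SLSQP")
    return np.linalg.norm(res.x - x)

rng = np.random.default_rng(5)
Mp, dp = rng.standard_normal((4, 3)), np.ones(4)
Mpp, dpp = rng.standard_normal((1, 3)), np.zeros(1)
x = 3.0 * rng.standard_normal(3)
# Hoffman's bound asserts dist_to_F <= sigma * residual for a constant sigma
# independent of x; both quantities vanish exactly when x lies in F.
print(dist_to_F(Mp, dp, Mpp, dpp, x), residual(Mp, dp, Mpp, dpp, x))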
Lemma 4 (see [25], Lemma 6.2.2). Let T_1, T_2, and T_3 be convex compact sets with T_1 ⊆ T_2 and T_3 ⊆ T_2. Then the Hausdorff distance δ^H(T_1, T_3) can be estimated in terms of δ^H(T_1, T_2) and δ^H(T_2, T_3).

Inspired by [25, 26], we obtain the following result, which states that the restricted weak RSP with respect to y is a sufficient condition for the l_1-minimization problem (3) to be stable in sparse vector recovery.

Theorem 2. Suppose that the problem data (A, B, ε, b, y) are given as in (3) and that rank(A; B) = m + l. Let ε′ > 0 be any prescribed small number, and let P_0 be the polytope given in (29) satisfying (26). If C^T = (A^T, B^T) satisfies the restricted weak RSP of order k with respect to y, then for any nonzero x ∈ R^n, there is an optimal solution x* of (3) satisfying a stability estimate of the form (5), with constants determined by ε′ and by Robinson's constant of the matrices given in (53). In particular, if x is a feasible solution of (3), then there is an optimal solution x* of (1) satisfying a corresponding error bound.

Proof. Let x ∈ R^n be an arbitrary nonzero vector, and let P_0 be the fixed polytope given in (29) satisfying (26) in Lemma 1. The proof is divided into the following four steps.

Step 1: construction of (t, s, w). The first step is to construct the vectors t, s, and w.

Constructing (t, s). Let t ≔ |x|, so that e^T t = ‖x‖_1, and choose s accordingly. Let S be the support set of the k largest components of |x|. Define S_1 ≔ {i ∈ S : x_i > 0}, S_2 ≔ {i ∈ S : x_i < 0}, and S_3 ≔ {1, . . . , n} \ (S_1 ∪ S_2). Clearly, |S_1| + |S_2| ≤ k. Hence, S_1, S_2, and S_3 are disjoint. Since C^T = (A^T, B^T) satisfies the restricted weak RSP of order k with respect to y, there exists a vector η ∈ R(A^T, B^T) such that η_i = 1 for i ∈ S_1, η_i = −1 for i ∈ S_2, and |η_i| ≤ 1 for i ∈ S_3. Now, we construct a dual feasible solution w = (w_1, . . . , w_7).
Proof. Following the argument given in Theorem 2, we know that the restricted weak RSP of order k of C^T with respect to y is a sufficient condition for the l_1-minimization problem (104) to be stable.
Conversely, Theorem 1 shows that if the l_1-minimization problem is stable for any given y ∈ {sign(Ax) : ‖x‖_0 ≤ k}, then the matrix C^T must satisfy the restricted weak RSP of order k with respect to y. □

Conclusions
In this paper, the stability theory for 1-bit CS with a quadratic constraint is established. The analysis relies on the duality theory of linear programming, Hoffman's error bound, and the fact that the ball constraint induced by the Euclidean norm can be approximated by polytopes to any level of accuracy. An interesting and challenging topic for future work is the stability theory for 1-bit CS with other norms, e.g., the l_p-norm, particularly with p ∈ (0, 1). In this case, the nonconvex structure of the l_p quasi-norm requires the error bound theory (also called metric subregularity) for nonlinear systems, instead of the linear systems used in this paper.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest.