A Note on Guaranteed Stable Recovery of Sparse Signals in Compressed Sensing via the RIP of Order s

In this paper, we continue the study of the CS-recovery of signals begun in [1]. Under the assumption that an m × n matrix A obeys the RIP of order s, we decompose the space of unknown vectors into sets M0, M1, · · · , M7 defined by a bias function px on the location T0 = {1, 2, · · · , s}, and investigate sufficient conditions for CS-recovery.


Introduction
This paper concerns the theory of compressed sensing (CS). For a signal x ∈ R^n, let ∥x∥0 denote the l0-norm of x, defined as the number of nonzero coordinates, ∥x∥1 the l1-norm of x, and ∥x∥2 the l2-norm of x. Let x be a sparse or nearly sparse vector. Compressed sensing aims to recover a high-dimensional signal (for example, an image, voice, or code signal) from only a few samples or linear measurements. The efficient recovery of sparse signals has been a very active field in applied mathematics, statistics, machine learning and signal processing. Formally, one considers the following model:

y = Ax + z,    (1.1)

where A is an m × n matrix (m < n) and z is an unknown noise term.
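As a small illustration of the three norms defined above (not part of the paper's argument), they can be computed in pure Python:

```python
# The l0-, l1-, and l2-norms of a vector, as defined in the introduction.
import math

def l0_norm(x):
    """Number of nonzero coordinates of x."""
    return sum(1 for xi in x if xi != 0)

def l1_norm(x):
    """Sum of the absolute values of the coordinates of x."""
    return sum(abs(xi) for xi in x)

def l2_norm(x):
    """Euclidean length of x."""
    return math.sqrt(sum(xi * xi for xi in x))

x = [3.0, 0.0, -4.0, 0.0]                  # a 2-sparse vector in R^4
print(l0_norm(x), l1_norm(x), l2_norm(x))  # -> 2 7.0 5.0
```

Note that ∥x∥0 is not a norm in the strict sense (it is not homogeneous); the notation is standard in the CS literature.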
Our goal is to reconstruct the unknown signal x from the given A and y. We consider reconstructing x as the solution x⋆ to the optimization problem

x⋆ = argmin ∥x̃∥1 subject to ∥y − Ax̃∥2 ≤ ε,    (1.2)

where ε is an upper bound on the size of the noise contribution.
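A toy instance of the noiseless case of (1.2) (ε = 0) can be solved by hand; this sketch uses the hypothetical 1 × 2 system A = [1 2], y = [2], whose solutions form the line x = (2 − 2t, t), and scans t for the point of smallest l1-norm:

```python
# Minimize ||x||_1 subject to Ax = y for A = [1 2], y = [2].
# The feasible set is {(2 - 2t, t)}, so a one-dimensional scan suffices.
def l1(x):
    return sum(abs(xi) for xi in x)

best = min(((2 - 2 * t / 100.0, t / 100.0) for t in range(-200, 301)),
           key=l1)
print(best)  # -> (0.0, 1.0): the l1-minimizer is the sparsest solution
```

The scan concentrates all the energy on the second coordinate, illustrating why l1 minimization promotes sparsity.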
In fact, a crucial issue is to find good conditions under which the inequality

∥x⋆ − x∥2 ≤ C0 ε + C1 ∥x − xT∥1/√s    (1.3)

holds for suitable constants C0 and C1, where T is any subset of {1, 2, · · · , n} with |T| = s elements and xT is the restriction of x to the indices in T. One of the best-known conditions in CS theory is the restricted isometry property (RIP) introduced in [2]; it is an important notion in the discussion of our results. The RIP requires that the subsets of columns of A corresponding to all locations in {1, 2, · · · , n} behave like a nearly orthonormal system. In detail, a matrix A satisfies the RIP of order s if there exists a constant δ with 0 < δ < 1 such that

(1 − δ)∥a∥2^2 ≤ ∥Aa∥2^2 ≤ (1 + δ)∥a∥2^2

for all s-sparse vectors a, where a vector is said to be s-sparse if it has at most s nonzero entries. The minimum δ satisfying these restrictions is called the restricted isometry constant and is denoted by δs.
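For s = 1 the restricted isometry constant has a closed form, which makes a small sketch possible (the matrix below is a toy example, not from the paper): a 1-sparse vector is a multiple of a standard basis vector e_j, so the RIP inequality reduces to |∥Ae_j∥2^2 − 1| ≤ δ for every column, and δ1 = max_j |∥Ae_j∥2^2 − 1|.

```python
# delta_1 = max over columns j of | (l2-norm of column j)^2 - 1 |.
def delta_1(A):
    m, n = len(A), len(A[0])
    return max(abs(sum(A[i][j] ** 2 for i in range(m)) - 1.0)
               for j in range(n))

A = [[1.0, 0.0, 0.5],
     [0.0, 0.5, 0.5]]   # columns with squared norms 1, 0.25, 0.5
print(delta_1(A))        # -> 0.75, coming from the second column
```

For s ≥ 2 no such closed form exists; computing δs exactly requires examining all supports of size s and is NP-hard in general.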
Many works have shown that l1 optimization can recover an unknown signal, in both noiseless and noisy cases, under various sufficient conditions on δs or δ2s when A obeys the RIP. For example, E.J. Candès and T. Tao proved that if δ2s < √2 − 1, then an unknown signal can be recovered [3]. Later, S. Foucart and M. Lai improved the bound to δ2s < 0.4531 [4]. Among other results, δ2s < 0.4652 is obtained in [5], δ2s < 0.4721 for the cases where s is a multiple of 4 or s is very large in [6], δ2s < 0.4734 for the case where s is very large in [5], and δs < 0.307 in [7]. In a recent paper, Q. Mo and S. Li improved the sufficient condition to δ2s < 0.4931 in the general case and δ2s < 0.6569 in the special case n ≤ 4s [8]. J. Ji and J. Peng improved the sufficient condition to δs < 0.308 [9]. T. Cai and A. Zhang improved the sufficient condition to δs < 0.333 in the general case [10], and obtained a sufficient condition on δk in the case k ≥ (4/3)s, in particular δ2s < 0.707 [11]. By using a rescaling method, H. Inoue obtained the sufficient conditions δs < 0.5 and δ2s < 0.828 in [12].
Recently, in [1], we investigated good conditions for the recovery of sparse signals by studying the difference between the l∞-norm of h ≡ x⋆ − x and the mean (|h1| + |h2| + · · · + |hs|)/s of {|h1|, · · · , |hs|}. In more detail, we considered the bias function p on T0 ≡ {1, 2, · · · , s} defined in [1], where the indices of h are sorted so that |h1| ≥ |h2| ≥ · · · ≥ |hn|, and showed that, for c > 1 with c/s < p(1), if A obeys the RIP of order 2s/c with a suitable bound on its isometry constant, then we have stable recovery of approximately sparse signals, where rc is a natural number such that (c/s)(rc − 1) < p(rc) < (c/s)rc and 2 ≤ rc < s/c. However, the function p on T0 and the number rc depend on x, and rc is not easily found. In this paper, in order to compensate for these defects, we decompose Kε(y, A) ≡ {x ∈ R^n ; ∥y − Ax∥2 ≤ ε} into subsets {M0, M1, · · · , M7}, where Mk corresponds to T0 ∩ ((k + 3)s/20, (k + 4)s/20] for k = 1, · · · , 6 and M7 to T0 ∩ ((1/2)s, s], and we show for any x ∈ Mk (k = 1, 2, · · · , 7) that if A obeys the RIP of order s and δs < 1/(1 + √(20/(k + 3) − 1)), then the inequality (1.3) holds. We also establish in Section 2 the existence of CS-solutions.
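A minimal sketch of the resulting thresholds, assuming the bound for x ∈ Mk takes the reconstructed form δs < 1/(1 + √(20/(k + 3) − 1)):

```python
# Threshold on delta_s for x in M_k (k = 1, ..., 7), under the assumed
# form 1 / (1 + sqrt(20/(k+3) - 1)).  The bound weakens (grows) with k.
import math

def delta_threshold(k):
    assert 1 <= k <= 7
    return 1.0 / (1.0 + math.sqrt(20.0 / (k + 3) - 1.0))

for k in range(1, 8):
    print(k, round(delta_threshold(k), 4))
```

Under this reading, k = 1 gives δs < 1/3 (matching the general bound of [10]) and k = 7 gives δs < 1/2 (matching [12]), which is why the decomposition improves the condition for all x outside M0.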

CS-Solution
In this section, we discuss the existence of CS-solutions mathematically.
Let an m × n matrix A (m < n) and a datum y ∈ R^m be given. For ε ≥ 0 we define the closed convex subsets of R^n

Kε(y, A) ≡ {x ∈ R^n ; ∥y − Ax∥2 ≤ ε};

in particular, K0(y, A) = {x ∈ R^n ; Ax = y}. Suppose y ∉ AR^n. Since AR^n is a closed subspace of R^m, there exists a unique vector y0 ∈ AR^n such that ∥y − y0∥2 = min{∥y − Ax∥2 ; x ∈ R^n}. Then y0 is the vector in AR^n such that y − y0 lies in the orthogonal complement (AR^n)^⊥ of AR^n. It is clear that Kε(y, A) ≠ ∅ if and only if ∥y − y0∥2 ≤ ε. In this paper, we assume that K0(y, A) ≠ ∅ in noiseless cases and Kε(y, A) ≠ ∅ in noisy cases. We now show the existence of CS-solutions.
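The feasibility criterion above can be illustrated numerically (the matrix and data below are toy values, not from the paper): Kε(y, A) is nonempty iff ∥y − y0∥2 ≤ ε, where y0 is the orthogonal projection of y onto AR^n. Here A is 2 × 3 with rank 1, so AR^n is the line spanned by (1, 2) and the projection is a one-line formula.

```python
# Feasibility of K_eps(y, A): project y onto range(A) and measure the gap.
import math

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]            # rank 1: range(A) = span{(1, 2)}
y = [1.0, 0.0]

u = [1.0, 2.0]                   # spanning vector of range(A)
c = (y[0] * u[0] + y[1] * u[1]) / (u[0] ** 2 + u[1] ** 2)
y0 = [c * u[0], c * u[1]]        # orthogonal projection of y onto range(A)
gap = math.sqrt((y[0] - y0[0]) ** 2 + (y[1] - y0[1]) ** 2)

print(y0, gap)                   # K_eps(y, A) is nonempty iff eps >= gap
```

Here y − y0 = (0.8, −0.4) is orthogonal to (1, 2), confirming that y0 is the closest point of AR^n to y.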
For any t > 0 we put Dt ≡ {x ∈ R^n ; ∥x∥1 ≤ t}. Then ADt is a closed convex subset of AR^n such that A(∂Dt) = ∂(ADt), where ∂K denotes the boundary of a set K. Assume that y0 ∉ ADt. Then there exists a vector xt in ∂Dt such that ∥y0 − Axt∥2 = min{∥y0 − Ax∥2 ; x ∈ Dt}. Since

∥y − Ax∥2^2 = ∥y − y0∥2^2 + ∥y0 − Ax∥2^2 for all x ∈ R^n,

there exists a vector x⋆t in (xt + ker A) ∩ Dt such that ∥y − Ax⋆t∥2 = min{∥y − Ax∥2 ; x ∈ Dt}. Thus we obtain the existence of CS-solutions.

CS-Recovery
Take an arbitrary x ∈ Kε(y, A). For a subset T of {1, 2, · · · , n}, we denote by x^T the vector obtained by changing the coefficients of x as follows: the coordinates indexed by T are kept and all others are set to zero. Then Kε(y, A) = M0 ∪ M1 ∪ · · · ∪ M7.

Proof. Take an arbitrary x ∈ Mk. Let rk be a natural number such that (k + 3)s/20 < rk ≤ (k + 4)s/20, and let T1 = {1, 2, · · · , rk} and T2 = {rk + 1, · · · , n} be a decomposition of {1, 2, · · · , n}. By (3.1) and (3.2) we obtain an estimate for h^{T2}. By the definition of the CS optimization (1.2), we have ∥x⋆∥1 ≤ ∥x∥1. Hence it follows from (3.3) and (3.4) that h^{T1} is controlled as well, which implies, since A obeys the RIP of order s, that the inequality (1.3) holds. This completes the proof.
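The restriction operation used throughout the proof can be sketched in a few lines (pure-Python illustration with 1-based indices, not from the paper): x^T keeps the coordinates of x indexed by T and zeroes the rest, so x = x^{T1} + x^{T2} whenever {T1, T2} partitions {1, · · · , n}.

```python
# Restriction of a vector to an index set T (1-based, matching the paper).
def restrict(x, T):
    return [xi if (i + 1) in T else 0.0 for i, xi in enumerate(x)]

x = [5.0, -3.0, 2.0, 0.0, 1.0]
T1 = {1, 2}                      # analogue of T1 = {1, ..., r_k}
T2 = {3, 4, 5}                   # analogue of T2 = {r_k + 1, ..., n}
print(restrict(x, T1))           # -> [5.0, -3.0, 0.0, 0.0, 0.0]

# x decomposes exactly into its restrictions to the two blocks:
assert [a + b for a, b in zip(restrict(x, T1), restrict(x, T2))] == x
```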

Conclusion
In a previous paper [1], we discussed sufficient conditions on the isometry constant δ by investigating a bias function px defined by each unknown vector x. In this paper, we decompose the space of unknown vectors into sets M0, M1, · · · , M7 defined by the bias function px. More precisely, when x is contained in Mk (1 ≤ k ≤ 7), the sufficient condition on δs is improved, so this method is useful. When x ∈ M0, the sufficient condition on δs is not improved by this method. We believe this method is more practical than the previous one in [1].