Ultimate resolution limits of speckle-based compressive imaging

Compressive imaging using sparsity constraints is a very promising field of microscopy that provides a dramatic enhancement of the spatial resolution beyond the Abbe diffraction limit. Moreover, it simultaneously overcomes the Nyquist limit by reconstructing an N-pixel image from fewer than N single-point measurements. Here we present fundamental resolution limits of noiseless compressive imaging via sparsity constraints, speckle illumination and single-pixel detection. We address the experimental setup that uses randomly generated speckle patterns (in a scattering medium or a multimode fiber). The optimal number of measurements, the ultimate spatial resolution limit and the surprisingly important role of discretization are demonstrated by theoretical analysis and numerical simulations. We show that, in contrast to conventional microscopy, oversampling may decrease the resolution and reconstruction quality of compressive imaging. © 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Introduction
Irrespective of the microscope system, a conventional far-field optical image is a convolution of the object function with the point-spread function of the microscope, distorted by additive noise. In the Fourier domain, this is equivalent to multiplying the object's spectrum by the transfer function of the optical system. For far-field imaging, diffraction acts as a low-pass filter. The Abbe limit [1] for an optical system with numerical aperture NA and wavelength λ is generally defined by the cut-off frequency ν_cutoff = 2NA/λ. The loss of information carried by the high spatial frequency components results in a loss of spatial resolution and 'blurred' images. The Abbe diffraction limit has long been considered an inevitable limit on the spatial resolution achievable by a conventional optical system.
Optics and chemistry joined efforts to revolutionize fluorescence microscopy [2] by proposing many methods for super-resolution microscopy [3][4][5][6]. Conceptually, imaging with more than a 2-fold increase in the spatial resolution can be achieved via two main concepts. The general idea of the first concept is to use fluorophores that make themselves distinguishable for a short period of time, preventing molecules located within the same diffraction region from being detected together [5]. A second concept that does not require any specific fluorescent labelling is based on bandwidth extrapolation or numerical reconstruction of the full spatial frequency range [7]. Computational super-resolution can be achieved in fluorescent Fourier light field microscopy [8]. The recently emerging approach of sparsity-based super-resolution imaging is related to well-known compressive sensing techniques [9][10][11][12]. Sparsity-based methods of super-resolution image reconstruction work for both coherent [13][14][15] and incoherent [16,17] light as well as scanning electron microscopy [18]. Cell imaging beyond the diffraction limit using sparse deconvolution has been demonstrated [19]. Sparsity-based nanoscale imaging paves the way towards the ultimate goal of optical microscopy (studying subcellular structures and molecular details in vivo in bulk tissue and whole-animal contexts) by implementing super-resolution imaging into an ultra-thin endoscopic probe [20,21].
The Nyquist-Shannon sampling theorem states that, in order to properly reproduce a signal, it should be periodically sampled at a rate at least twice the highest frequency present in the signal. For images, the sampling frequency is related to the minimum structure size: the imaging pixel size (the analog of the sampling rate) should be about half the size of the smallest feature one wishes to resolve. However, in contrast to conventional signal sampling, the resolution of far-field optical imaging is not determined by the Nyquist limit alone. In order to sample the image adequately, we also need to consider the optical transfer function and the resolving power of the optical system, i.e. the Abbe diffraction limit.
To summarize, the Abbe and Nyquist limits affect any imaging procedure, requiring the pixel size to be approximately half the diffraction limit. Oversampling does not add any useful information and does not provide better resolution, while it increases acquisition time. This conclusion is valid for conventional far-field imaging as well as for some super-resolution techniques, such as STED. The concept of super-resolution optical imaging using the sparsity constraint should be considered differently, because it allows overcoming both the Nyquist and Abbe limits simultaneously.
Here we address the question of how the Nyquist and Abbe limits affect each other within the super-resolution sparsity-based compressive imaging approach. We simulate compressive imaging with speckle illumination and single-point detection and use two reconstruction algorithms for performance comparison. We investigate the optimal number of measurements (illumination patterns) and the ultimate super-resolution limit as a function of sparsity. We address the role of discretization and demonstrate that, in contrast to conventional microscopy, oversampling decreases the resolution and image quality of compressive imaging.

Theoretical model
We focus on a single-pixel detection approach to super-resolution microscopy using the sparsity constraint. In this modality of incoherent imaging, the sample is illuminated with a structured (speckled) beam and the signal is collected with a point detector, as sketched in Fig. 1(a). For each speckle pattern, the total response from the sample is detected. We note that the term 'compressive imaging' is often used to describe a different technique, in which the object is sparse not in the canonical basis but after some transform (e.g. in the DCT basis). The approach we present here is also often referred to as compressive ghost imaging or single-pixel imaging and has many applications in optical microscopy [22][23][24], bioimaging [25][26][27] and endo-microscopy [20,21].
As a sample, we use a pseudo-random object with no implicit symmetries to make the explanation more succinct. Such a sample is presented in Fig. 1(a) and its spatial Fourier spectrum is shown in Fig. 1(c) as the solid red line. The sample consists of randomly distributed groups ranging from 1×1 to 5×5 pixels and is set to be K-sparse, meaning that only K pixels are nonzero. The intensity of each group was set randomly. We vary the pixel size from 60 to 500 nm. The field of view (FOV) consists of N by N pixels, varied from 16×16 to 80×80 pixels.
As illumination, we use speckle patterns, which can be generated through a multimode fiber or any other scattering medium, for the following reasons. Firstly, random matrices are known to provide reconstruction for any type of sample and any sample basis chosen, if enough measurements are taken [9]. Secondly, speckle illumination does not require special wavefront shaping equipment, which makes it easy to use in many different experimental conditions. We simulate the imaging procedure as follows. The sample is illuminated with M artificially generated, fully developed two-dimensional speckle patterns I(x, y), as proposed in [28]. This approach allows simulating realistic speckle patterns with a negative-exponential intensity probability density function and a predefined cutoff frequency (diffraction limit). The intensity distribution of each speckle pattern is

I(x, y) = |F^-1{H(k_x, k_y) F[sqrt(r_0) exp(iΩ_1(x, y)) + sqrt(1 - r_0) exp(iΩ_2(x, y))]}|^2,

where F is the two-dimensional Fourier transform; (k_x, k_y) are the spatial frequencies; r_0 = 0.5 is the correlation coefficient factor; Ω_1(x, y) and Ω_2(x, y) are two-dimensional matrices composed of independent uniformly distributed variables on the interval (-π, π); H(k_x, k_y) is a circular window function in Fourier space with a radius equal to the cutoff frequency defined by ν_cutoff = 2NA/λ. In our simulations, the number of speckle patterns M varies from 5 to 500. An example of a speckle pattern is presented in Fig. 1(a). We simulate a system with NA = 0.22 and λ = 0.532 µm, corresponding to a diffraction limit l_dl = 1.2 µm. The results can be generalized to different NA's. We cut the spatial Fourier components of the speckle patterns as presented in Fig. 1(c) by the solid blue line. Such a sharp cut corresponds to the window function used to compute the speckle patterns and is aimed at providing a strict diffraction limit.
We represent our 2D image as a 1D vector x, and the set of M two-dimensional N × N illumination patterns as the illumination matrix A (M × N²), as presented in Fig. 1(b). Typically, M << N², meaning that the object x is undersampled and the problem is ill-posed. As a result of the imaging procedure, M intensity values are collected (one per speckle pattern) and form a measurement vector b. Under the general experimental conditions described above and presented in Fig. 1, our problem amounts to the reconstruction of an image from the single-point intensity measurements, assuming that the image is sparse. Precise knowledge of the illumination matrix is required, in contrast to recently developed methods of structured illumination microscopy (SIM) with unknown patterns [29]. We used two very common reconstruction algorithms, described in the next subsections. The main goal of a sparse recovery algorithm is to estimate the x that satisfies the underdetermined least-square equation b = Ax, when only b and A are given. No prior information about the sparsity K was assumed during the reconstruction, as may be needed for other algorithms [30].
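The speckle-generation recipe described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, assuming the circular band limit H is applied to the random phasor field in Fourier space; the exact formula of Ref. [28] may differ in detail, and the names `speckle_pattern`, `cutoff_frac` and `seed` are our own:

```python
import numpy as np

def speckle_pattern(n, cutoff_frac, r0=0.5, seed=None):
    """Fully developed speckle on an n x n grid.

    cutoff_frac: cutoff frequency as a fraction of the grid Nyquist
    frequency (the hard diffraction limit); r0 mimics the
    correlation-coefficient factor of the text.
    """
    rng = np.random.default_rng(seed)
    # Two independent uniform phase screens on (-pi, pi).
    omega1 = rng.uniform(-np.pi, np.pi, (n, n))
    omega2 = rng.uniform(-np.pi, np.pi, (n, n))
    field = (np.sqrt(r0) * np.exp(1j * omega1)
             + np.sqrt(1.0 - r0) * np.exp(1j * omega2))
    # Circular window H(kx, ky) enforcing a sharp frequency cutoff.
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    H = (np.hypot(kx, ky) <= cutoff_frac * 0.5).astype(float)
    # Band-limit the random field and take the intensity.
    speckle = np.abs(np.fft.ifft2(H * np.fft.fft2(field))) ** 2
    return speckle / speckle.mean()
```

The band-limited circular-Gaussian field obtained this way has close to unit speckle contrast, i.e. approximately negative-exponential intensity statistics.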
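The single-pixel measurement model itself is simple to state in code: each N × N pattern is flattened into one row of A, and one bucket value b_i is recorded per pattern. A minimal sketch (the helper name `build_measurements` is ours):

```python
import numpy as np

def build_measurements(patterns, sample):
    """Stack M speckle patterns (each N x N) into the M x N^2 sensing
    matrix A and simulate the single-pixel readings b = A @ x,
    where x is the flattened sample."""
    A = np.stack([p.ravel() for p in patterns])  # shape (M, N^2)
    x = sample.ravel()
    b = A @ x  # one detector value per illumination pattern
    return A, b
```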
Generally, the results of compressive sensing reconstruction depend on many parameters and may vary significantly. We used two methods that can handle large-scale optimization problems [31]: basis pursuit (l1_ls) and gradient projection for sparse reconstruction (GPSR). The two algorithms differ in how the next point in the optimization process is chosen. l1_ls uses an interior-point method with truncated Newton optimization: the next step is taken as an approximate solution of Newton's equation. In GPSR, the optimization steps are proportional to the negative gradient of the function to be minimized [32].

Basis pursuit (L1_LS)
Basis Pursuit (BP) algorithms find the sparse solution x with the minimum l_1 norm that satisfies the underdetermined least-square equation Ax = b, using convex optimization techniques. One of the available BP solvers is l1_ls [31]; its MATLAB code can be accessed and used for free at https://stanford.edu/∼boyd/l1_ls/. The general BP approach solves an optimization problem of the form

minimize ||x||_1 subject to Ax = b,

while l1_ls solves the optimization problem

minimize ||Ax - b||_2^2 + λ||x||_1,

with ||x||_1 = Σ_i |x_i| the l_1 norm of x, ||x||_2 = (Σ_i x_i^2)^(1/2) the l_2 norm, and λ > 0 the regularization parameter. This problem is known as the l_1-regularized least-squares problem (LSP). Since generic solvers for the LSP are slow, the l_1-regularized LSP is transformed into a convex quadratic problem with linear inequality constraints, and a solution can be found using standard convex optimization methods [31], such as a truncated Newton interior-point method (with log-barrier) using preconditioned conjugate gradients (PCG) to accelerate convergence [33]. It can handle large problems and is shown to be faster than other common interior-point methods for convex optimization.
Since we are interested in measuring real objects, the obtained observations should take only positive values. The l1_ls package offers a solver with nonnegativity constraints (l1_ls_nonneg.m), which solves the optimization problem

minimize ||Ax - b||_2^2 + λ Σ_i x_i subject to x_i ≥ 0,

resulting in a nonnegative solution x.
In MATLAB, the algorithm is simply called as x = l1_ls_nonneg(A, b). For the presented simulations, the regularization parameter is set to λ = 10^-3 and kept constant. The maximum number of iterations for the interior-point method (MAX_NT_ITER) is set to 5000 and the maximum number of backtracking line search iterations (MAX_LS_ITER) is set to 500, while all other parameters keep their default values.
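For readers without MATLAB, the same nonnegative l_1-regularized least-squares objective can be minimized with a few lines of NumPy. The sketch below uses projected ISTA rather than the interior-point method of l1_ls_nonneg, so it illustrates the objective, not the actual solver; the function name and iteration budget are our own choices:

```python
import numpy as np

def l1ls_nonneg_ista(A, b, lam=1e-3, n_iter=2000):
    """Minimize 0.5*||A x - b||_2^2 + lam*sum(x) subject to x >= 0
    by projected ISTA -- a simple stand-in for l1_ls_nonneg
    (not the same algorithm, same objective)."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # Gradient step, l1 shrinkage, then projection onto x >= 0.
        x = np.maximum(x - (grad + lam) / L, 0.0)
    return x
```

With a small λ (here 10^-3, as in the text) and an overdetermined, well-conditioned A, the iteration recovers a nonnegative sparse x up to a small regularization bias.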

Gradient projection for sparse reconstruction (GPSR)
For comparison, we also use the Gradient Projection for Sparse Reconstruction (GPSR) solver [32] in version 5.0. The MATLAB code is available at http://www.lx.it.pt/∼mtf/GPSR/. In general, GPSR solves the following convex unconstrained optimization problem:

minimize (1/2)||Ax - b||_2^2 + τ||x||_1,

with ||x||_1 = Σ_i |x_i| the l_1 norm of x and τ a non-negative parameter. To simplify the problem, the vector x is split as x = u - v, with u, v ≥ 0, so that the l_1 norm is replaced by the linear function Σ_i (u_i + v_i); the l_2-norm term stays unaffected. The problem can now be described by a bound-constrained quadratic program:

minimize (1/2)||A(u - v) - b||_2^2 + τ Σ_i (u_i + v_i) subject to u ≥ 0, v ≥ 0.

GPSR solves this problem by using backtracking line search and making updates in the negative gradient direction [34]. The updates are fed back to the iterative algorithm until the convergence criterion (a change smaller than the stopping tolerance tolA) is met and the function is minimized. The package offers two solvers, which solve the same problem with slightly different algorithms. In what follows we use GPSR_Basic.m with the maxiter variable set to 10000 and τ fixed to 0.1. Further tweaking of the solver was applied based on the suggestion in [33], setting the tolerance variable tolA to 10^-3. The number of continuation steps (cont_steps) was set to 5, while all other parameters keep the default values given by the function.
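The u - v reformulation above is easy to demonstrate with a plain projected-gradient iteration. This is a sketch of the idea only, under our own fixed-step assumption; GPSR_Basic itself chooses step lengths via backtracking line search:

```python
import numpy as np

def gpsr_sketch(A, b, tau=0.1, n_iter=3000):
    """Projected-gradient sketch of the GPSR idea: write x = u - v with
    u, v >= 0, so ||x||_1 becomes the linear term sum(u + v), and
    minimize 0.5*||A(u - v) - b||^2 + tau*sum(u + v) over the bounds.
    Illustrative only -- GPSR itself uses line-searched steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ (u - v) - b  # residual
        g = A.T @ r          # gradient of the quadratic part w.r.t. x
        # Gradient step on (u, v), then projection onto the bounds.
        u = np.maximum(u - step * (g + tau), 0.0)
        v = np.maximum(v - step * (-g + tau), 0.0)
    return u - v  # x = u - v
```

Unlike the nonnegative l1_ls variant, this formulation recovers signed sparse vectors, since negative entries of x are carried by v.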

Optimal number of measurements
In this section, we investigate the optimal number of measurements required for sparsity-based super-resolution computational imaging. The concept of compressive sensing allows far fewer measurements than the number of pixels to be resolved (M << N²). Naively, one might expect that the more measurements are performed, the better the image can be reconstructed, which, as we show here, is not necessarily correct. To address the question of what the optimal number of measurements is for a given set of imaging-system parameters (diffraction limit and pixel size), we perform the following simulations.
As a sample, we used an image that consisted of K (varied from 1 to 80) non-zero pixels randomly distributed over a FOV of N² = 32 × 32 pixels with a pixel size of 200 nm. We simulated the compressive sensing imaging procedure by using different numbers of random illumination patterns M, from 5 to 200. The quality of reconstruction was estimated by using the cross-correlation between the original ground-truth sample and the reconstructed image. The cross-correlation coefficients r as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²) are presented in Fig. 2(a) for the L1_LS algorithm and in Fig. 2(b) for the GPSR algorithm. The dashed red lines in Fig. 2 represent the number of measurements predicted by compressive sensing theory for random matrices for a good image reconstruction.
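The quality metric used throughout is the cross-correlation coefficient between ground truth and reconstruction, which in NumPy is essentially a one-liner (the helper name is ours):

```python
import numpy as np

def reconstruction_quality(ground_truth, reconstruction):
    """Pearson cross-correlation coefficient r between the flattened
    ground-truth and reconstructed images."""
    g = ground_truth.ravel()
    h = reconstruction.ravel()
    return np.corrcoef(g, h)[0, 1]
```

A value of r = 1 indicates a perfect (up to affine scaling) reconstruction; the paper's threshold for a "good" reconstruction is r ≥ 0.8.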
The total number of measurements needed for a BP reconstruction algorithm is known to be linearly proportional to the sparsity level [35,36]: M ∝ K·log(N²). In our case the length of the signal equals N², as each illumination pattern is an N × N matrix. Our simulations nicely coincide with these theoretical predictions in the low-K regime. However, above a certain sparsity level we see a saturation effect: additional measurements do not improve the quality of the reconstruction. In the case of the GPSR algorithm, an increase in the number of measurements leads to an even worse reconstruction quality (lower correlation with the original sample), as can be seen in Fig. 2(b). We estimated the maximum level of sparsity as presented by the dotted white line in Fig. 2(a). A good reconstruction was defined as one that has a cross-correlation coefficient with the original sample of r ≥ 80%. Our results seem to contradict the predictions of compressive sensing theory for random matrices. However, we are considering the super-resolution limit of compressive sensing theory. As we show later, the saturation results from the fact that not all M speckle patterns are independent.
Our main goal is to estimate the ultimate limits of speckle-based compressive imaging, and the noiseless case sets the limit on super-resolution. However, here we also briefly address the role of noise. We simulated the compressive imaging procedure as described before and normalized the measurement vector b to a certain number of "detected" photons, ranging from 100 to 10^8. To simulate noise, the measurement vector was replaced by random numbers from the Poisson distribution with rate parameter b. As a result, Poisson noise with different signal-to-noise ratios (SNRs) was generated in the detected signal. We consider our sensing matrix to be stable (i.e. noise-free) compared to the signal, because the illumination speckle patterns can potentially be measured much more accurately (with a high number of photons and a long averaging time) than the actual signal.
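The Poisson-noise step can be sketched as follows. We assume here that the photon budget refers to the total count over all measurements (a per-measurement convention would only change the scaling); the function name and `rng` argument are our own:

```python
import numpy as np

def add_poisson_noise(b, n_photons, rng=None):
    """Rescale the (nonnegative) measurement vector b to a total photon
    budget and replace it with Poisson counts, then scale back to the
    original units, as in the noise simulations described in the text."""
    rng = np.random.default_rng(rng)
    scale = n_photons / b.sum()          # photons per unit signal
    return rng.poisson(b * scale) / scale
```

For large photon numbers the relative fluctuations shrink as 1/sqrt(counts), which is why the reconstruction becomes insensitive to noise at high SNR.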
The results are presented in Fig. 3. The quality of the acquired images reconstructed by the L1_LS algorithm, characterized via the cross-correlation coefficient r between the reconstructed and ground-truth images, is plotted as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²) for different noise levels: (a) SNR = 32; (b) SNR = 100 and (c) SNR = 1000. In Fig. 3(d) the maximum level of sparsity for which a reconstruction with r > 80% is achievable is presented as a function of the SNR in the signal. We show that noise does not play a significant role for SNR > 1000. The case of infinite SNR is expected to be indistinguishable from the low-noise case [37]. Please note that with L1_LS we aimed to focus on the noiseless case and on one of the most common algorithms, which is not designed to handle low SNR. The tolerance towards noise can be further improved by using denoising algorithms, e.g. the Basis Pursuit DeNoising (BPDN) algorithm.
We estimated the optimal number of measurements as the number of measurements for which a further increase does not change the reconstruction quality, as illustrated in Fig. 2(a) by the dotted black line for M_optimal/N² ≈ 0.13. We repeated the simulations presented in Fig. 2 for different pixel sizes from 150 to 500 nm and determined the optimal number of measurements. Our results demonstrate that the saturation level depends on the pixel size. In Fig. 4(a), we plot the optimal number of measurements, normalized to the total number of pixels, as a function of the resolution improvement factor (RIF), defined as the ratio between the diffraction limit of the imaging system (l_dl) and the pixel size (l): RIF = l_dl/l. The open red circles in Fig. 4(a) show the results of our numerical simulations.
The results presented in Fig. 4(a) lead to an interesting conclusion: for a better resolution, a lower number of measurements suffices. We can increase the number of illumination patterns only as long as we are able to gain new information with each measurement. However, the total number of spatial modes supported by our system is limited by the numerical aperture. Consequently, the total number of independent speckle patterns is also limited.
In the low-noise case, the information capacity of the imaging system is the main limiting factor. The limited information capacity shows up as the (limited) rank of the measurement matrix. To analyze the optimal number of measurements and confirm that it is determined by the Abbe diffraction limit, we investigated the illumination matrix A. We calculated the maximum number of linearly independent vectors in A by using singular value decomposition. The number of non-zero singular values is equal to the rank of A, which is presented by the black dots in Fig. 4(a) and perfectly coincides with our simulation results for the optimal number of measurements. To characterize the information capacity of our imaging system, we calculated the space-bandwidth product (SBP) of the sampling function I(x, y), as proposed by Goodman [7]. The total number of significant samples required to represent I(x, y), or SBP, can be described as

SBP = (2Nl/l_dl)^2, (7)

where l is the pixel size and l_dl is the diffraction limit, so that SBP/N² = 4/RIF². The results are presented in Fig. 4(a) by the solid red line and nicely coincide with our numerical simulations and the theoretical analysis of the illumination matrix. As can be seen from Fig. 4(a), the rank of the illumination matrix is correlated with the space-bandwidth product of the sampling function. This can be explained as follows: the rank is equal to the maximum number of linearly independent rows (equivalently, columns) of a matrix, which in our case is the maximum number of independent illumination patterns. On the other hand, the SBP characterizes the total number of significant samples required to represent the illumination [7]. We can conclude that compressive optical imaging has an optimal number of illumination patterns defined by the Abbe diffraction limit, or equivalently by the total number of independent illumination patterns. This optimal number of measurements also limits the level of sparsity of the sample that may be reconstructed.
Going further beyond the diffraction limit (a larger resolution improvement factor) requires a sparser sample and a lower optimal number of measurements.
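The rank analysis is straightforward to reproduce: count the significant singular values of the illumination matrix. The function name and tolerance below are illustrative choices:

```python
import numpy as np

def sensing_matrix_rank(A, tol=1e-10):
    """Number of significant singular values of the illumination matrix
    A, i.e. the number of linearly independent illumination patterns.
    tol is a relative threshold on the largest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > tol * s[0]).sum())
```

For band-limited speckle patterns, this rank saturates well below M once M exceeds the number of spatial modes passed by the aperture, which is the saturation behavior seen in Fig. 4(a).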
Normally, the efficiency of the compressive sensing approach can be improved by optimizing the mutual coherence of the sensing matrix. However, in our work we addressed a particular experimental setup that uses randomly generated speckle patterns (in a scattering medium or a multimode fiber) as illumination. The solid blue line in Fig. 4(a) represents the mutual coherence of our sensing matrix A (with N² = 144 × 144) as a function of RIF. As expected, a lower rank corresponds to a higher mutual coherence.

Ultimate resolution limit
Here we derive the ultimate resolution limit of speckle-based super-resolution compressive imaging. As we showed in the previous subsection, the number of measurements that provide useful information for the imaging procedure is bounded from above by the maximum rank of the measurement matrix. By increasing the ratio between the size of the diffraction-limited spot (l_dl) and the size of a resolved pixel (l), we decrease the rank of the illumination matrix as well as the space-bandwidth product, and therefore the information capacity. This means that for higher resolution the (optimal) number of measurements decreases.
It is known that for the reconstruction of a K-sparse signal x, only M = 2K random projections are required when the signal and the measurements are noise-free [38]. To estimate the best possible resolution we use the SBP as the total number of significant (independent) measurements M, and therefore consider the lower limit for the number of measurements, M = 2K, as suggested by [38]. Substituting l_dl/l by RIF in Eq. (7) and setting SBP = 2K, we get the limit for resolution improvement in the noiseless case:

RIF = sqrt(2N²/K) = sqrt(2/(K/N²)). (8)

We simulated a super-resolution compressive imaging procedure for N² = 32 × 32 pixels, as described above, for different pixel sizes, and estimated the maximum sparsity level for a given RIF, as presented by the dotted white line in Fig. 2(a). In Fig. 4(b), the RIF extracted from our data is presented as a function of sparsity by the red dots. It clearly shows the expected behavior: the sparser the original sample, the better the resolution we can get. The red line in Fig. 4(b) is plotted by using Eq. (8) without any free parameters and represents the result of our theoretical analysis of the resolution limit. Our numerical simulations are in very good agreement with this theoretical prediction.
We can conclude that Eq. (8) provides the ultimate spatial resolution limit for the noiseless compressive imaging approach: the resolution improvement is inversely proportional to the square root of the relative fraction of nonzero basis functions. A similar result was described in the work of Gazit et al. [13], where they state that the resolution limit is inversely proportional to the relative fraction of nonzero basis functions, without the square root. It seems that they were considering 1D signal reconstruction, which would explain the difference. We focus on the 2D case because it is more relevant for microscopy applications.
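As a quick numerical check of this square-root scaling, reading Eq. (8) as RIF = sqrt(2/(K/N²)), the two sparsity levels considered in this work give the quoted resolution improvement factors:

```python
import math

def rif_limit(sparsity_fraction):
    """Ultimate resolution-improvement factor for a noiseless K-sparse
    sample, reading Eq. (8) as RIF = sqrt(2 / (K / N^2)), i.e. setting
    the space-bandwidth product equal to the M = 2K measurement bound."""
    return math.sqrt(2.0 / sparsity_fraction)

# The two cases discussed in the text:
print(round(rif_limit(0.06), 1))  # K/N^2 = 6% -> prints 5.8 (RIF ~ 6)
print(round(rif_limit(0.02), 1))  # K/N^2 = 2% -> prints 10.0
```

These match the RIF ≈ 6 (resolution limit ≈ 200 nm) and RIF ≈ 10 (≈ 120 nm) values used in the discretization study below.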

Optimal discretization
Any imaging procedure makes a discrete representation of an original sample. The resulting image can only be observed at a finite number of points due to the inevitable spatial discretization during analog-to-digital conversion. Following the Nyquist-Shannon sampling theorem, we normally choose the discretization step to be half the diffraction limit. Finer discretization is not considered harmful and typically results in a nice, smooth image, while it does not really provide better resolution and may require additional imaging time. This conclusion is valid for almost any imaging procedure, including super-resolution approaches, and one could wrongly extrapolate this concept to compressive imaging methods. In this section, we investigate the optimal spatial discretization parameters for sparsity-based super-resolution imaging.
We performed simulations using the same sample represented by a different number of pixels. The original sample is shown in Fig. 5(a) and consists of N² = 16 × 16 pixels of 300 nm each, with K = 15 randomly distributed non-zero values. The spatial cutoff frequency of the speckle patterns was set to 2NA/λ, where NA = 0.22 and λ = 0.532 µm. We divided each pixel of the original image into d × d subpixels, as illustrated in the first column of Fig. 5(b), and repeated the simulations for d from 1 (original sample, top row) to 5 (high discretization, bottom row). Each row in Fig. 5(b) thus shows acquired images consisting of 16d × 16d pixels with a pixel size l = 300/d nm and K = 15d² non-zero values grouped into d × d blocks. The normalized sparsity level was kept constant at K/N² = 6%. The results obtained with the conventional diffraction-limited approach are presented in the second column of Fig. 5(b). The images become smoother with increasing discretization; however, the limited optical resolution does not allow resolving all the features, independently of the pixel size. The third and fourth columns in Fig. 5(b) show the results of compressive imaging with the optimized number of measurements by the L1_LS and GPSR algorithms, respectively. Both algorithms perfectly reconstruct the original sample, including all the subdiffractional details, if the pixel size is equal to the feature size [the first row in Fig. 5(b)]. However, in contrast to conventional microscopy, oversampling decreases the image quality.
This counterintuitive fact can be explained as follows. According to Eq. (8), the resolution improvement factor of our system is RIF ≈ 6 and does not change with discretization or grid size. This means that our resolution limit is approximately 200 nm. Consequently, when the pixel size l = 300 nm is equal to the feature size [the first row of Fig. 5(b)], we see a perfect super-resolution image reconstruction. With a finer grid, the pixel size becomes smaller. The second row represents the same image but with a pixel size of 150 nm, which cannot be reconstructed perfectly according to Eq. (8). We can see some image degradation; however, both algorithms still provide reasonable results (correlation coefficient r = 0.93) because the pixel size is still approximately equal to the resolution limit. A further decrease of the pixel size to l ≤ 100 nm leads to noticeably worse reconstruction, because the pixel size is then significantly smaller than the resolution limit (200 nm).
It is important to highlight that different sparsity-based reconstruction algorithms produce different results on a finer discretization grid. The GPSR algorithm is more robust to the finer grid, providing a smooth image, as can be seen from the last column of Fig. 5(b). This is due to the substitution of the vector under the l_1 norm by a linear function, which results in a smoother solution [33]. In contrast, the Basis Pursuit L1_LS algorithm results in a very pixelated image: the algorithm tries to resolve each pixel separately and fails when the pixel size becomes significantly smaller than the diffraction limit.
In line with our observations presented above, the reconstruction efficiency strongly depends on the sparsity level. For example, for a very low sparsity, K/N² = 2%, we can get a good image even for a high discretization level, as presented in Fig. 5(d). We used the sample presented in Fig. 5(c) and repeated the simulations for d from 1 (original sample, top row) to 5 (high discretization, bottom row). According to Eq. (8), for this system RIF ≈ 10, leading to a resolution limit of about 120 nm. Consequently, when the pixel size is bigger than the resolution limit, l = 300 nm [first row of Fig. 5(d)] and l = 150 nm [second row of Fig. 5(d)], we see a perfect reconstruction. The third row represents the image with l = 100 nm, which is slightly smaller than the resolution limit, and we see some degradation. With a finer grid, the pixel size becomes smaller than the resolution limit and the L1_LS algorithm fails to reconstruct a perfect image. However, the smoothing properties of the GPSR algorithm provide a very good reconstruction even for a high discretization level in the case of low sparsity.
To summarize, we have shown that speckle-based compressive imaging imposes restrictions on the reconstruction procedure as one approaches the super-resolution regime. The maximum achievable resolution improvement below the diffraction limit is determined by the sparsity and discretization levels. We showed that compressive imaging is very sensitive to the discretization level and, in contrast to conventional microscopy, image quality may significantly decrease for a high discretization level. Optimal reconstruction requires the pixel size to be approximately equal to the feature size.

Conclusions
The Abbe and Nyquist limits strongly influence almost any imaging procedure, requiring the pixel size to be approximately half the diffraction (or super-resolution) limit. Here we addressed the question of how the Nyquist and the diffraction limits are connected within the speckle-based super-resolution compressive imaging approach. We simulated super-resolution imaging via sparsity constraints, speckle illumination and single-pixel detection, and used two reconstruction algorithms for performance comparison. The fundamental limits of compressive super-resolution imaging were demonstrated by both numerical simulations and theoretical analysis. We derived the ultimate spatial resolution limit as a function of sparsity and the optimal number of measurements as a function of resolution improvement.
Usually, oversampling makes images look nicer, while it does not provide better resolution and may reduce imaging speed. Here we demonstrated that this concept cannot be directly extended to super-resolution optical imaging via the sparsity constraint. We demonstrated the connection between the sparsity and discretization levels, which depends on the resolution improvement below the diffraction limit. Our simulations show that the optimal pixel size should be equal to the feature size we aim to resolve, and oversampling may cause a noticeable degradation of image quality.