Compressive Optical Deflectometric Tomography: A Constrained Total-Variation Minimization Approach

Optical Deflectometric Tomography (ODT) provides an accurate characterization of transparent materials whose complex surfaces present a real challenge for manufacture and control. In ODT, the refractive index map (RIM) of a transparent object is reconstructed by measuring light deflection under multiple orientations. We show that this imaging modality can be made "compressive", i.e., a correct RIM reconstruction is achievable with far fewer observations than required by traditional Filtered Back Projection (FBP) methods. Assuming a cartoon-shape RIM model, this reconstruction is driven by minimizing the map Total Variation under a fidelity constraint with the available observations. Moreover, two other realistic assumptions are added to improve the stability of our approach: the positivity of the map and a boundary condition. Numerically, our method relies on an accurate ODT sensing model and on a primal-dual minimization scheme that easily accommodates the sensing operator and the proposed RIM constraints. We conclude this paper by demonstrating the power of our method on synthetic and experimental data under various compressive scenarios. In particular, the compressiveness of the stabilized ODT problem is demonstrated by observing a typical gain of 20 dB compared to FBP at only 5% of 360 incident light angles for moderately noisy sensing.


INTRODUCTION
Optical Deflectometric Tomography (ODT) is an imaging modality that aims at reconstructing the spatial distribution of the refractive index of a transparent object from the deviation of the light passing through the object. By reconstructing the refractive index map (RIM), we can optically characterize transparent materials such as optical fibers or multifocal intra-ocular lenses (IOLs), which is of great importance in manufacturing and control processes.
ODT is attractive for its high sensitivity and its effective detection of local details and object contours. The technique is insensitive to vibrations and does not require coherent light. Compared to interferometry, ODT can be applied to objects with a higher refractive index difference with respect to the surrounding solution.
The deflectometer used for the data acquisition is based on the phase-shifting Schlieren method [18]. For each orientation of the sample, two-dimensional (2-D) maps of local light deviations are measured. As these light deviations are proportional to the transverse gradient of the RIM integrated along the light ray, ODT can reconstruct the RIM from these angular measurements.
Early works on deflectometric tomography [4,17] focused on common reconstruction techniques such as the Filtered Back Projection (FBP). They proved that FBP provides an accurate estimation of the RIM when a sufficient number of object orientations is available. However, in a scenario with a limited number of incident light angles, and in the presence of noise in the ODT observations, FBP introduces spurious artifacts in the estimated RIM, lowering the reconstruction quality.
Inspired by the Compressed Sensing (CS) paradigm [6,13], which demonstrates that few measurements are enough for an accurate reconstruction of low complexity (e.g., sparse) signals, recent works in ODT have started to exploit sparsity to reconstruct the RIM from few acquisitions, i.e., in the presence of an ill-posed inverse problem. Foumouo et al. [18] and Antoine et al. [1] used a sparse representation in a B-spline basis and regularized the problem using the ℓ₁-norm. The reconstruction was performed using iterative schemes, and the results show that, although the method is capable of reproducing the shape of spatially localized objects, the image dynamics is not well preserved and the borders are smoothed.
Although sparsity-based methods are new in ODT, they have been used for image reconstruction under partial observation models in other applications, such as Magnetic Resonance Imaging (MRI) [27], Absorption Tomography (AT) [32,34], Radio Interferometry [37] and Phase-Contrast Tomography [11,12]. More details are given in Sec. 4.
An additional problem that arises in all physical applications is the estimation of the sensing operator that best fits the physical acquisition. Most operators present a non-ideal behavior, which constrains the numerical methods used to solve the inverse problem. In tomographic problems, this operator must map spatial data on a Cartesian grid to Fourier data on a polar grid. Previous works have used gridding techniques to interpolate data from a polar to a Cartesian or pseudo-polar grid before applying the Fourier transform [22,38]. However, the error introduced by these techniques is not bounded and produces an uncontrolled distortion.

Contribution
In this work, we show that ODT can be made both compressive and robust to Gaussian noise. In the context of a simplified 2-D sensing model, we propose a constrained method based on the minimization of the Total Variation (TV) norm that provides high-quality reconstructions even when few acquisitions are available and in the presence of high levels of Gaussian noise. This is motivated by assuming that the RIM is composed of slowly varying areas separated by sharp transitions corresponding to material interfaces. Such a behavior is known to be well represented by spatial functions having bounded variations (small TV norm), such as those following a cartoon-shape model. This also distinguishes our work from two previous studies focused on a continuous RIM decomposition in a B-spline basis, where deflection integrals were estimated in the spatial domain using complex numerical methods, leading to a smoother RIM estimation [1,18].
To account for the noise and the raw data consistency, we add an ℓ₂ data-fidelity term adapted to Gaussian and uniformly distributed noise. Moreover, the proposed method offers the flexibility to work with more than one constraint. In contrast with [20], we add here two more constraints based on general prior information on the RIM in order to converge to an optimal solution: (i) the RIM is positive everywhere, and (ii) the object is completely contained in the experimental field of view. These extra constraints provably guarantee the uniqueness of the ODT solution.
As for the operator, we use the NFFT algorithm, a fast computation of the non-equispaced DFT. This algorithm provides an efficient estimation of the polar Fourier transform with a controlled distortion with respect to the true polar transform.
The compressive ODT problem is solved by means of the primal-dual algorithm proposed by Chambolle et al. [9], complemented by an adaptive choice of its parameters that improves the global convergence speed, as proposed in a recent work of Goldstein et al. [19]. The results are compared with a minimum energy (ME) method and a common FBP approach, showing clear gains over these two approaches in terms of compressiveness, noise robustness and reconstruction quality.

Outline
In Sec. 2 we provide a brief background on optical deflectometric tomography, describing also the experimental setup used for the data acquisition. Then, the ODT discrete model is presented in Sec. 3. In Sec. 4 we depict related works on tomographic reconstruction, which provide a basis for the methods adopted to recover the RIM: the commonly used FBP method, a standard minimum energy (ME) approach (or penalized least square) and the proposed regularized method coined TV-ℓ₂. These methods are described in Sec. 5. In Sec. 6 we present the identification and estimation of the noise in both synthetic and experimental data, and the analysis of the noise impact when comparing common absorption tomography and deflection tomography. Sec. 7 presents the numerical methods used to recover the RIM from the noisy measurements by means of the proposed regularized formulation. Finally, in Sec. 8 some reconstruction results are shown, focusing first on the comparison between common tomographic and ODT reconstructions, and then on the comparison of the reconstruction methods on the basis of reconstruction quality and convergence for both synthetic and experimental data.

Conventions
Most domain dimensions are denoted by capital roman letters, e.g., M, N, ... Vectors and matrices are associated to bold symbols, while lowercase light letters are associated to scalar values, e.g., Φ ∈ R^{M×N} or u ∈ R^M. The i-th component of a vector u reads either u_i or (u)_i, while the vector u_i may refer to the i-th element of a vector set. The identity matrix in R^D reads I_D, or simply I when its dimension is clear from the context. The vectors of ones and zeros in R^D are denoted 1 and 0, respectively. The set of indices in R^D is [D] = {1, ..., D}. The scalar product between two vectors u, v ∈ R^D, for some dimension D ∈ N, is denoted equivalently by u^T v = u · v = ⟨u, v⟩. For any p ≥ 1, the ℓ_p-norm of u is ‖u‖_p^p = Σ_i |u_i|^p, with ‖·‖ = ‖·‖₂. The notation ‖u‖₀ = # supp u represents the cardinality of the support supp u = {i : u_i ≠ 0} ⊂ [D]. For a subset S ⊂ [D], given u ∈ R^D and Φ ∈ R^{D′×D}, u_S ∈ R^{#S} (or Φ_S) denotes the vector (resp. the matrix) obtained by retaining the components (resp. columns) of u (resp. Φ) belonging to S. Alternatively, we have u_S = R_S u or Φ_S = Φ R_S^T, where R_S := (I_S)^T ∈ {0, 1}^{#S×D} is the restriction operator. The kernel (or null space) of a matrix Φ is ker(Φ) := {x ∈ R^D : Φx = 0}. The norm of an operator K is defined as |||K||| = max{‖Kx‖ : x ∈ R^N, ‖x‖ = 1}.
We denote by Γ₀(V) the class of proper, convex and lower-semicontinuous functions from a finite-dimensional vector space V (e.g., R^D) to (−∞, +∞] [10,16]. The non-negativity thresholding function is (x)₊, defined componentwise as (x_i)₊ = (x_i + |x_i|)/2. The (convex) indicator function ı_C(x) of the set C is equal to 0 if x ∈ C and +∞ otherwise. The interior of a set Ω is int Ω, which consists of all points of Ω that do not belong to its boundary ∂Ω. The superscript * denotes the conjugate transpose (or adjoint) of a matrix (or the complex conjugate for scalars), while the superscript ⋆ denotes the convex conjugation of a convex function.

OPTICAL DEFLECTOMETRIC TOMOGRAPHY
In this section, the principles of optical deflectometric tomography are explained and completed with a brief description of the experimental setup used for actual deflectometric acquisition.

Principles
Optical Deflectometric Tomography aims at inferring the refractive index spatial distribution (or refractive index map, RIM) of a transparent object. This is achieved by measuring, under various incident angles, the deflection angles of multiple parallel light rays when passing through this transparent object (see Fig. 1-(top)). The (indirect) observation of the RIM, allowing its further reconstruction, is guaranteed by the relation between the total light ray deflection and the integration of the RIM transverse gradient along the light ray path [4].
In this work, we restrict ourselves to two-dimensional (2-D) ODT by assuming that the refractive index n of the observed object is constant along the e₃-axis for a convenient coordinate system {e₁, e₂, e₃}, i.e., ∂n(r)/∂r₃ = 0 (with r = (r₁, r₂, r₃)^T ∈ R³) and deflections occur in the e₁-e₂ plane. This assumption is validated in the experimental setup described later in this section, where the light probes a very thin 2-D slice of the 3-D sample and where the particular geometry of the test objects makes the refractive index variation along the e₃ direction negligible, i.e., |∂n/∂r₃| ≪ max(|∂n/∂r₁|, |∂n/∂r₂|). Given the refractive index n : r = (r₁, r₂)^T ∈ R² → n(r), for a particular light ray trajectory R = {r(s) : s ∈ R} ⊂ R² parametrized by r(s) = (r₁(s), r₂(s))^T ∈ R², with s describing its curvilinear parameter, the relation between light deflections and the refractive index n is provided by the light ray equation

  d/ds ( n(r) dr/ds ) = ∇n(r),   (1)

established from the eikonal equation [5, Section 3.2, p. 129].
For small deflection angles, we can adopt a (first order) approximation and assume that the trajectory R is a straight line. The error committed by removing the trajectory dependence in the deflection angle can be estimated by a ray tracing method (not developed here) relying on Snell's law. In our tests, for simple continuous object models, the absolute error between the two deflection models is lower than 0.7° for deflection angles smaller than 7° (the angular acceptance of the optical deflectometer). Moreover, in [1], the authors have qualitatively shown that, for the same range of deflection angles and for limited refractive index variations, the model mismatch stays limited.
In this simplified model, the 2-D RIM is probed by ∆(θ, τ; n), the sine of the deflection angle α(θ, τ; n) of a light ray R(θ, τ) = {r ∈ R² : r · p_θ = τ}, where τ ∈ R is the affine parameter determining the distance between R and the origin, θ ∈ [0, 2π) is the incident angle of R with the e₁ axis, and p_θ = (−sin θ, cos θ)^T is the (constant) perpendicular vector to the light ray direction t_θ = (cos θ, sin θ)^T. The simplified ODT model depicted in Fig. 1-(top) is then obtained from the light ray equation (1) as

  ∆(τ, θ; n) = (1/n_r) ∫_R ∇n(r_{τ,θ}(s)) · p_θ ds = (1/n_r) ∫_{R²} ∇n(r) · p_θ δ(τ − r · p_θ) d²r,   (2)

where n_r is the (constant) reference refractive index of the surrounding medium, r_{τ,θ}(s) = s t_θ + τ p_θ ∈ R(θ, τ), and the Dirac distribution δ turns the line integral into an integration over R². In short, the above equation relates the deflection angle ∆(τ, θ) to the Radon transform of the transverse gradient of n within the straight line trajectory approximation. Interestingly, this first order deflectometric model is also used in computer graphics for the rendering of refractive gas flows [2]. As for traditional parallel absorption tomography [25, Section 3.2, p. 56], there exists a Deflectometric Fourier Slice Theorem (DFST) that relates the 1-D (radial) Fourier transform of the deflection angle along the affine parameter τ, i.e.,

  ŷ(ω, θ) := ∫_R ∆(τ, θ) e^{−2πiωτ} dτ,   (3)

to the 2-D Fourier transform of the RIM. Mathematically, the DFST establishes the following equivalence [22] (proved in Appendix A):

  ŷ(ω, θ) = (2πiω/n_r) n̂(ω p_θ),   (4)

where n̂(k) = ∫_{R²} n(r) e^{−2πik·r} d²r stands for the 2-D Fourier transform of n. Hereafter, the value ω is called the affine frequency.
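As a sanity check, the DFST (4) can be verified numerically for a radially symmetric Gaussian RIM bump, for which both the deflection profile ∆(τ, θ) and the 2-D Fourier transform n̂ are analytic. This is an illustrative sketch: the amplitude A, width σ and reference index n_r below are arbitrary choices, not values from the paper.

```python
import numpy as np

# Gaussian RIM bump n(r) = n_r + A*exp(-|r|^2/(2*sigma^2)) (arbitrary example values)
A, sigma, n_r = 0.01, 1.0, 1.5

# By (2), Delta(tau, theta) = (1/n_r) * d/dtau of the Radon projection of n - n_r.
# For a radially symmetric bump, the projection is A*sqrt(2*pi)*sigma*exp(-tau^2/(2*sigma^2)),
# independent of theta.
tau = np.arange(-10, 10, 0.01)                      # dense affine-parameter grid
dtau = tau[1] - tau[0]
delta = -(A * np.sqrt(2 * np.pi) / (n_r * sigma)) * tau * np.exp(-tau**2 / (2 * sigma**2))

# Left-hand side of (4): the radial Fourier transform (3), via a Riemann sum.
omega = np.array([0.1, 0.3, 0.5])                   # a few affine frequencies
y_lhs = (delta[None, :] * np.exp(-2j * np.pi * omega[:, None] * tau[None, :])).sum(axis=1) * dtau

# Right-hand side of (4): (2*pi*i*omega/n_r) * n_hat(omega * p_theta), with
# n_hat(k) = A*2*pi*sigma^2*exp(-2*pi^2*sigma^2*|k|^2) (bump only; the DC part is irrelevant).
n_hat = A * 2 * np.pi * sigma**2 * np.exp(-2 * np.pi**2 * sigma**2 * omega**2)
y_rhs = (2j * np.pi * omega / n_r) * n_hat

print(np.max(np.abs(y_lhs - y_rhs)))   # close to machine precision: (3) of Delta matches (4)
```

The imaginary-valued result also illustrates the factor 2πiω that distinguishes (4) from the standard Fourier Slice Theorem.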
We may remark from (4) that there are different ways to cover the 2-D frequency plane with deflectometric measurements. This can be observed from the following symmetry relations, valid for τ, ω ∈ R and θ ∈ [0, 2π):

  ∆(τ, θ + π) = −∆(−τ, θ),   (5a)
  ŷ(ω, θ + π) = −ŷ(−ω, θ),   (5b)
  ŷ(−ω, θ) = ŷ*(ω, θ),   (5c)

where the first two relations come from the change p_{θ+π} = −p_θ in (2) and (4), and the last equation is due to the Fourier conjugate symmetry of real spatial functions. In particular, from the symmetry (5c), we can restrict ω to positive values by taking θ in the whole circle [0, 2π). Alternatively, from (5b) and (5c), we can deduce ŷ(ω, θ) = −ŷ*(ω, (θ + π) mod 2π) and restrict θ to the half circle [0, π) with ω ∈ R. We stress that this symmetry is only preserved when the first order approximation is valid.
When comparing the relation (4) with the standard tomographic Fourier Slice Theorem (FST) [25, Section 3.2, p. 56], the main difference comes from the transverse gradient in the deflectometric relation (2), which results in the RIM Fourier transform being multiplied by 2πiω/n_r. In particular, from (2) or from (4) (since ω vanishes at the frequency origin), we see that the ODT sensing is blind to any constant RIM component. As we will see in Sec. 5, this implies that the RIM can only be estimated relative to the known reference RIM n_r.
Let us conclude by insisting on the impact of the equivalence (4). Similarly to the use of the Fourier Slice theorem in common (absorption) tomography, (4) is of great importance for defining a discrete ODT sensing model which can be computed efficiently in the Fourier domain given a discretized refractive index map n.

Deflection Measurements
Experimentally, the deflection angles can be measured by phase-shifting Schlieren deflection tomography (schematically represented in Fig. 1-(bottom)). We briefly explain this system for the sake of completeness in order to set the experimental background surrounding the actual deflection measurement process. More details can be found in [4,18,22,24].
This system proceeds by encoding the light ray deflection α into intensity variations. A transparent object is illuminated with an incoherent uniform light source I₀ modulated by a sinusoidal pattern m using a Spatial Light Modulator (SLM). From classical optics, the light deviation angle α is related to a phase shift ∆x = f tan α, where f is the focal length of Lens 1. This phase shift is associated with an intensity variation thanks to the modulation m, as I(−τ, θ) = m(∆x) I₀. These intensity variations are processed by phase-shifting methods to recover the deflection measurements α(τ, θ) for each pair of parameters (τ, θ) ∈ R × [0, 2π). Up to some linear coordinate rescaling, the affine parameter τ corresponds to the horizontal pixel coordinate on the 2-D CCD detector collecting the light (assuming the object refractive index is constant along the CCD vertical direction). This correspondence is implicitly allowed by the telecentric system formed by the combination of Lenses 2 and 3. The pinhole guarantees that only parallel light rays outgoing from the object are collected. Rather than rotating the whole incident light beam around the object, it is the object itself which is rotated by an angle −θ around an axis parallel to the CCD pixel vertical direction [4]. Finally, since the system is invariant under time inversion, i.e., under inversion of the light progression, measuring the deflection angle α in Fig. 1-(bottom) is equivalent to measuring the same angle in Fig. 1-(top).
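The phase-extraction step can be illustrated with the standard four-step phase-shifting algorithm, one common way to recover a phase from sinusoidal intensity records; the modulation model and test values below are illustrative assumptions, not parameters of the actual deflectometer:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    # Standard four-bucket phase shifting: intensities recorded with the SLM
    # pattern shifted by 0, pi/2, pi and 3*pi/2 recover the phase phi.
    return np.arctan2(I4 - I2, I1 - I3)

# Simulated sinusoidal modulation m = 0.5*(1 + cos(phi0 + shift))
phi0, I0 = 0.8, 1.0                                # illustrative phase and intensity
shifts = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
I = [I0 * 0.5 * (1 + np.cos(phi0 + s)) for s in shifts]

phi = four_step_phase(*I)
print(phi)   # recovers phi0 = 0.8
```

In the actual instrument, the recovered phase maps to the shift ∆x of the sinusoidal pattern (and hence to α via ∆x = f tan α); here we only illustrate the phase extraction itself.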

DISCRETE FORWARD MODEL
In order to efficiently reconstruct the RIM from ODT measurements, the recorded data must be treated appropriately, jointly considering the data discretization, the polar geometry of the ODT sensing and the unavoidable measurement noise. We present here the discrete formulation of the ODT sensing and the construction of the forward model from the recorded data.

Discrete domains
Let us first assume that the object of interest is fully contained in a square field of view (or FoV) Ω ⊂ R² centered on the spatial origin. The physical dimensions of this FoV can be provided by the deflectometric device itself. In other words, the RIM is constant and equal to the reference index n_r outside of Ω. This also implies that the deflection measurement vanishes, i.e., ∆(τ, θ) = 0, if |τ| is bigger than the typical width of Ω in a section of direction θ + π/2.
We can sample the continuous domains as follows. We define a Cartesian spatial grid C_N ⊂ Ω of N := N₀ × N₀ points with pixel pitch δr, and a polar grid P_M of size M := N_τ N_θ, with N_τ the number of parallel light rays passing through the object (assumed even), N_θ the number of incident angles in the ODT sensing, and δτ and δθ = π/N_θ the distances between two consecutive affine parameters and angles, respectively.
The value δτ can be known experimentally from the pixel size of the CCD detector in a Schlieren deflectometer (Sec. 2.2). Moreover, the value δτ and the resolution N_τ are also related to the FoV Ω, so that δτ N_τ ≈ δr N₀. Since there is no reason to ask for more resolution in the sampling C_N than is available in the affine variations of the ODT measurements, we will work with δτ ≈ δr and N_τ ≈ N₀.
Finally, in this discretized setting, the affine frequency ω in (4) must also be sampled with N_τ values. As described in the next section, this comes from the replacement of the (radial) Fourier transform in (3) by its discrete counterpart. This leads to a (signed) frequency polar grid P̂_M of the same size, with δω = 1/(N_τ δτ). As will become clearer in the following, only half of this polar grid is necessary to bring independent observations of the RIM, i.e., we will often work on the half grid P̂_M⁺ ⊂ P̂_M restricted to positive affine frequencies.

Discretized Functions
From the discrete domains defined above, the continuous RIM n observed in the experimental FoV is discretized into a set of N values n(r_{m,n}) at the coordinates r_{m,n} ∈ C_N. This description can always be arranged into a one-dimensional N-length vector n ∈ R^N, given a convenient mapping between the component indices of n = (n₁, ..., n_N)^T and the pixel coordinates in C_N. This "vectorization" provides us with a simplified representation of any linear operator acting on the sampled RIM n as a matrix multiplying n.
For the different functions discretized on P_M or on P̂_M, we use the same vectorization trick: a function u sampled on P_M is associated to a vector u ∈ R^M with the right correspondence between the components of u and the polar coordinates in P_M (and similarly for functions defined in the Fourier domain and sampled on P̂_M).
Therefore, the ODT observations {∆(τ, θ) : (τ, θ) ∈ P_M} are gathered in a vector z ∈ R^M with z_j = ∆(τ_s, θ_t) for a certain index mapping j = j(s, t) ∈ [M]. In this discrete representation, the equivalent transformation of (3) reads

  y_comp = F_rad z,   (6)

where y_comp ∈ C^M is associated to a (vectorized) sampling of ŷ on P̂_M, and F_rad : R^M → C^M performs a 1-D DFT on the radial τ-variations of z, i.e.,

  (F_rad z)_k = Σ_{s=1}^{N_τ} z_{j(s,t)} e^{−2πi ω_{s′} τ_s},

for a vectorized index k = k(s′, t). In other words, if δτ is sufficiently small, e.g., if ∆(τ, θ) is band-limited with a cut-off frequency smaller than 1/δτ = N_τ δω, then

  δτ (y_comp)_{k(s′,t)} ≈ ŷ(ω_{s′}, θ_t),   (7)

using a Riemann sum approximation of (3) and knowing that ∆ vanishes on Ω^c. Despite the fact that y_comp belongs to C^M ≃ R^{2M}, this vector brings only M independent real observations of n ∈ R^N. This is due to the central symmetry (5c) induced by the realness of z, which allows us to consider y_comp only on P̂_M⁺. This is clarified by the definition of the useful operator Θ : C^M → R^M, which performs the two following linear operations. First, it restricts any vector ξ ∈ C^M to the indices associated to the half grid P̂_M⁺. Second, it appends the M/2 imaginary values of the restricted vector to its M/2 real values in order to form an M-length vector in R^M. The adjoint operation Θ*, which is also the inverse of Θ for vectors in C^M respecting the Hermitian symmetry, is easily obtained by first re-forming an M/2-length complex vector from the separated real and imaginary parts, then inserting the result in C^M according to the indices of P̂_M⁺, and finally completing the information in P̂_M \ P̂_M⁺ with the central symmetry (5c). Consequently, thanks to Θ, we can form the real vector

  y := Θ y_comp = Θ F_rad z ∈ R^M,   (8)

with (Θ F_rad) : R^M → R^M. We call y the Frequency Deflectometric Measurements (FDM) vector. This will be our direct source of ODT observations instead of z.
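The operator Θ and its inverse can be sketched in a 1-D setting (one θ-slice of the grid, using a plain FFT as the radial DFT); the bookkeeping below, which discards the unobserved DC bin, is an illustrative simplification of the half-grid restriction described above:

```python
import numpy as np

def theta(Y):
    # Y: length-N (N even) spectrum of a real signal, Hermitian-symmetric.
    # Keep the positive-frequency bins 1..N/2 and stack real/imag parts.
    N = len(Y)
    half = Y[1:N // 2 + 1]
    return np.concatenate([half.real, half.imag])

def theta_adjoint(y):
    # Inverse of theta on Hermitian spectra with a zero DC bin: re-form the
    # complex half-spectrum, then complete the other half by conjugate symmetry.
    N = len(y)
    half = y[:N // 2] + 1j * y[N // 2:]
    Y = np.zeros(N, dtype=complex)
    Y[1:N // 2 + 1] = half
    Y[N // 2 + 1:] = np.conj(half[-2::-1])   # mirror bins N/2+1 .. N-1
    return Y

rng = np.random.default_rng(0)
z = rng.standard_normal(16)
z -= z.mean()                 # the ODT model is blind to the DC component anyway
Y = np.fft.fft(z)             # 16 complex values, but only 16 real degrees of freedom
assert np.allclose(theta_adjoint(theta(Y)), Y)
```

The roundtrip shows that the M complex FFT values of a real signal indeed carry only M independent real observations, which is exactly what Θ extracts.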

Forward Model
We can now explain how we use the DFST relation (4) to define the forward model that links any discrete 2-D RIM representation to its FDM vector. In a previous work [22], the data available in the frequency polar grid P̂_M were first interpolated to a Cartesian frequency grid in order to reconstruct the 2-D RIM using a Fast Fourier Transform (FFT). However, the polar-to-Cartesian frequency interpolation introduced a hardly controlled colored distortion.
We use here another operator F : R^N → C^M performing a non-equispaced Discrete Fourier Transform (NDFT) to directly relate functions sampled on the Cartesian spatial grid C_N to those sampled on the polar frequency grid P̂_M.
More precisely, given a function f : Ω → C sampled on C_N, the NDFT computes

  f̂(k) := Σ_{r ∈ C_N} f(r) e^{−2πi k·r}

on the M nodes k of P̂_M. Gathering all the values of f and f̂ into vectors f ∈ C^N and f̂ ∈ C^M respectively, this relation is conveniently summarized as

  f̂ = F f,   (9)

where the matrix F ∈ C^{M×N} stands for the linear NDFT operation. Its internal entry indexing follows that of the components of f and f̂. We explain in Appendix B how the Non-equispaced Fast Fourier Transform (NFFT) algorithm allows us to compute the multiplications F u and F* v efficiently, in O(N log(N/ε)) operations for a prescribed accuracy ε, for any u ∈ R^N and v ∈ C^M, with a controlled distortion with respect to the true NDFT. The action of F on a discretized RIM n is related to the continuous Fourier transform of n as follows. Let j = j(s′, t) ∈ [M] be the index of the j-th point k_j of the grid P̂_M associated to the polar coordinates (ω_{s′}, θ_t). Then, for a sufficiently small δr,

  (F n)_j ≈ (δr)^{−2} n̂(k_j).   (10)

To take into account the multiplication by 2πiω/n_r in (4) and the existence of the factor 1/(δr)² in the equivalence (10), we introduce the diagonal operator D ∈ C^{M×M} defined as

  D_jj := (2πi δr²/n_r) ω^(j),

where ω^(j) refers to the ω-coordinate of the j-th point of P̂_M, i.e., if as before j = j(s′, t) is associated to (ω_{s′}, θ_t) ∈ P̂_M, then ω^(j) = ω_{s′}. The operator D models the effect of the transverse gradient in the Fourier domain.
In parallel to the discussion ending Sec. 3.2, we also restrict the action of DF to the domain P̂_M⁺. Consequently, using the operator Θ (Sec. 3.2), the final linear forward model linking the real FDM to the 2-D NDFT of the discrete RIM n reads

  y = Θ D F n + η.   (11)

The additive noise η ∈ R^M integrates the different distortions explained and estimated in Sec. 6, i.e., the numerical computations, the model discretization, the discrepancy from the first order approximation (2), and the actual noise corrupting the observation of z.
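A brute-force sketch of the operators F and D (using an explicit NDFT matrix as a stand-in for the NFFT, and the Gaussian bump example with arbitrary parameters) illustrates how D F n reproduces the continuous DFST prediction (4):

```python
import numpy as np

A, sigma, n_r = 0.01, 1.0, 1.5                      # illustrative Gaussian bump and medium
N0, L = 96, 12.0                                    # spatial grid: N0 x N0 pixels over [-L/2, L/2)^2
dr = L / N0
x = (np.arange(N0) - N0 // 2) * dr
r1, r2 = np.meshgrid(x, x, indexing="ij")
n = A * np.exp(-(r1**2 + r2**2) / (2 * sigma**2))   # RIM deviation n - n_r, sampled on C_N

# Small polar frequency grid: nodes k_j = omega * p_theta
omegas = np.array([0.25, 0.5, 1.0, 2.0])
thetas = np.array([0.0, np.pi / 3, np.pi / 2])
om, th = [a.ravel() for a in np.meshgrid(omegas, thetas, indexing="ij")]
k = np.stack([-om * np.sin(th), om * np.cos(th)], axis=1)

# Explicit NDFT matrix F and diagonal operator D (feasible only for tiny M, N)
R = np.stack([r1.ravel(), r2.ravel()], axis=1)
F = np.exp(-2j * np.pi * (k @ R.T))                 # M x N matrix
D = np.diag(2j * np.pi * om * dr**2 / n_r)

y = D @ (F @ n.ravel())                             # discrete model, without Theta and noise

# Continuous DFST prediction for the Gaussian bump: (2*pi*i*omega/n_r) * n_hat(omega*p_theta)
n_hat = A * 2 * np.pi * sigma**2 * np.exp(-2 * np.pi**2 * sigma**2 * om**2)
y_true = 2j * np.pi * om / n_r * n_hat
assert np.allclose(y, y_true, atol=1e-8)            # Riemann-sum equivalence holds
```

In practice, the explicit M × N matrix is replaced by the NFFT, which provides the same products F u and F* v at a much lower cost.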
Notice that in the absence of noise (η = 0), the model (11) could easily be turned into a classical tomographic model where the DC frequency is not observed. Indeed, forgetting the formal action of Θ, if the frequency origin has been carefully removed from P̂_M, then D is invertible, and we can solve the common tomographic problem

  D^{−1} y = F n.   (12)

However, as we present in Sec. 8.1.1, this transformation is not suited to noisy ODT sensing. Even for a simple additive white Gaussian noise η (or AWGN), the multiplication by D^{−1} breaks the homoscedasticity of η, i.e., the variance of each (D^{−1}η)_j varies with j ∈ [M]. This interferes with common reconstruction techniques used in classical tomography. Obviously, a noise whitening could be applied to stabilize such methods but, at least for AWGN, this strictly amounts to solving the model (11) directly.
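The loss of homoscedasticity is easy to visualize: for i.i.d. Gaussian η, the per-component standard deviation of D⁻¹η grows as 1/|ω^(j)| (an illustrative simulation with arbitrary grid values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_r, dr = 1.5, 0.125                           # illustrative constants
omega = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # |omega|-coordinates of a few grid points
d = 2 * np.pi * omega * dr**2 / n_r            # magnitudes |D_jj|

eta = rng.standard_normal((100000, omega.size))   # AWGN with unit variance
std = (eta / d).std(axis=0)                    # empirical std of each component of D^{-1} eta

# The inverted noise is no longer identically distributed: std scales like 1/|omega|
print(std / std[-1])                           # approximately [16, 8, 4, 2, 1]
```

Low affine frequencies are thus dramatically amplified, which is why reconstruction should be performed directly on (11) rather than on the "whitened-by-inversion" model.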

RELATED WORKS
This section describes some recently proposed methods for tomographic reconstruction in the domains of differential phase-contrast tomography and common absorption tomography.
In differential phase-contrast tomography, the refractive index distribution is recovered from phase-shift measurements. These consist of derivatives of the refractive index map, which introduces the affine frequency ω when using the FST, as happens in the ODT sensing model (Sec. 2). In this application, Pfeiffer et al. [30] used the FBP algorithm to reconstruct the refractive index map from a fully covered set of projections. Cong et al. [11,12] used different iterative schemes based on the minimization of the TV norm to reconstruct the refractive index distribution over a region of interest. These methods are accurate and provide similar results, but the iterative scheme based on the TV norm has proved to be better than FBP when the number of acquisitions decreases.
In common Absorption Tomography (AT), we deal with the reconstruction of the absorption index distribution from intensity measurements. As these measurements are directly related to the absorption index, the AT sensing model does not include the affine frequency ω. In this domain, several works have exploited sparsity-based methods, and most recent works in AT have focused on promoting a small TV norm [32,34]. Sidky et al. [34] use a Lagrangian formulation of the tomographic reconstruction problem, promoting a small TV norm under a Kullback-Leibler data divergence and a positivity constraint. They aim at reconstructing a breast phantom from 60 projections with Poisson distributed noise. For this, they use the primal-dual optimization algorithm proposed by Chambolle et al. [9]. The method results in high-quality reconstructions compared to FBP, but with a convergence behavior that is highly dependent on the chosen Lagrangian parameter.
Ritschl et al. [32] use a constrained optimization formulation to reconstruct the absorption index from a low number of clinical acquisitions in the presence of metal implants and Gaussian noise. This problem is solved by means of an alternating method that optimizes separately the raw data consistency function and the sparsity cost function, without the need for prior information on the observations. The fast convergence of the method is based on the estimation of the optimization steps. A gradient descent method is used to minimize the TV norm, and the consistency term is minimized via an algebraic reconstruction technique. The method is shown to give better results than FBP.
These works have proved that some tomographic applications can be made compressive in the CS sense [6,13]. In short, accurate tomographic image reconstructions are reachable from a number of samples smaller than the desired image resolution, provided the image is sparse in some basis, i.e., its expansion in that basis contains only a small number of nonzero coefficients. However, most of these works have considered Cartesian frequency grids, while actual sensing often occurs on polar or non-equispaced grids. Besides, they address problems different from ODT, where the sensing model and the type of measurement change from one application to the other. One important difference between the AT and ODT sensing models lies in the presence of the affine frequency, as materialized by the diagonal operator D, whose impact is analyzed in detail in Sec. 6.2 and Sec. 8.1.1 for perfect and noisy sensing.

REFRACTIVE INDEX MAP RECONSTRUCTION
Three methods can be considered for recovering the discrete RIM n ∈ R^N: the common Filtered Back Projection (FBP); a penalized least square solution, which is related to a minimum energy (ME) solution [7]; and a regularized approach called TV-ℓ₂ minimization. Since FBP and ME are widely used in tomographic problems, we will use them as baselines against which to assess the quality of TV-ℓ₂.

Filtered Back Projection
The filtered back projection (FBP) can be briefly defined as an analytical method that consists of first filtering the tomographic projections with an appropriate function, i.e., a "ramp filter" for absorption tomography (AT) [25, Section 3.3, p. 60] or a simple Hilbert transform for deflection tomography [4,30]. The result is then back-projected in the spatial domain by angular integration. Despite its simplicity, this technique suffers from severe artifacts when the number of angular observations decreases. Moreover, FBP does not integrate the noise distortion in its processing.
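The deflectometric filter can be motivated from (4): dividing by 2πiω/n_r and applying the FBP ramp |ω| leaves the transfer function n_r|ω|/(2πiω) = (n_r/2π)(−i sign ω), i.e., up to the constant n_r/2π, a Hilbert transform along τ. A minimal numpy sketch of this frequency-domain filter acting on a pure cosine:

```python
import numpy as np

def hilbert_filter(proj):
    # Frequency-domain multiplication by -1j*sign(omega), i.e., a discrete Hilbert
    # transform: the per-projection filtering step of deflectometric FBP
    # (up to the constant n_r/(2*pi)). DC and Nyquist bins are zeroed.
    N = len(proj)
    sign = np.zeros(N)
    sign[1:N // 2] = 1.0
    sign[N // 2 + 1:] = -1.0
    return np.real(np.fft.ifft(-1j * sign * np.fft.fft(proj)))

t = np.arange(256) / 256
filtered = hilbert_filter(np.cos(2 * np.pi * 4 * t))
# The Hilbert transform maps cos to sin, as expected
assert np.allclose(filtered, np.sin(2 * np.pi * 4 * t), atol=1e-12)
```

This also makes the contrast with AT visible: there the full ramp |ω| is applied, while in deflectometry the ramp is flattened to a sign function by the gradient already present in the data.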

Minimum Energy Reconstruction
Given a linear sensing model of some image (vector) x ∈ R^N by a sensing operator Φ ∈ R^{M×N} with M ≤ N, a common procedure is to estimate x by applying the (right) pseudo-inverse of Φ to the observations, i.e., by computing Φ†y = Φ*(ΦΦ*)^{−1} y (for non-singular ΦΦ*).
This solution is actually equivalent to computing the minimum energy (or penalized least square) solution x_ME solving the convex problem

  x_ME = arg min_{x ∈ R^N} ‖x‖² s.t. y = Φx.   (13)

In words, the general inverse problem (13) is solved by a regularization promoting a small ℓ₂-norm (or energy). For large scale problems, solving this convex formulation is preferred to the estimation of the pseudo-inverse, which requires the costly inversion of an M × M matrix. Common reconstruction approaches (e.g., in Medical Imaging) follow this minimum energy principle [7]. They generally proceed by zeroing the Fourier coefficients of all unobserved frequencies, a process that minimizes the energy of the solution for frequency samplings defined on regular grids. The ME solution is also close to a discretization of the FBP procedure for a densely covered frequency space.
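The equivalence between the right pseudo-inverse and the minimum energy solution can be checked directly on a small random model; any feasible x differs from x_ME by a kernel vector of Φ, which can only increase the energy (a generic sketch, not the actual ODT operator):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 50                            # underdetermined model, M < N
Phi = rng.standard_normal((M, N))
y = Phi @ rng.standard_normal(N)

# Right pseudo-inverse solution Phi^T (Phi Phi^T)^{-1} y
x_me = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

# It is feasible and matches numpy's minimum-norm least-squares solution
assert np.allclose(Phi @ x_me, y)
assert np.allclose(x_me, np.linalg.lstsq(Phi, y, rcond=None)[0])

# Any other feasible solution x_me + v, with v in ker(Phi), has larger energy
_, _, Vt = np.linalg.svd(Phi)
v = Vt[-1]                               # a unit kernel vector of Phi (since M < N)
assert np.allclose(Phi @ v, 0, atol=1e-10)
assert np.linalg.norm(x_me + v) > np.linalg.norm(x_me)
```

The last assertion holds because x_ME lies in the row space of Φ and is thus orthogonal to ker(Φ), so adding any kernel vector strictly increases the ℓ₂-norm.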
As shown later, the ME method presents strong limitations when the model (13) is severely ill-conditioned or when noise corrupts the observation of y. For instance, when the tomographic problem subsamples a regular sampling of the Fourier domain, artifacts and blurring appear from the convolution of the pure image with the filter associated to the subsampling pattern (or dirty map [37]).
In ODT, the sensing operator and the sensing model read Φ = ΘDF and y = ΘDF n, respectively, and we will denote by n ME the corresponding ME solution.

TV-ℓ2 minimization
In order to overcome the limitations of both the FBP and ME methods, we introduce a new reconstruction method that is less sensitive both to unwanted oscillations due to a low-density frequency sampling P_M and to additional observational noise η in (11).
In particular, since the spatial dimensions in P_M and in C_N are expected to be equal, i.e., N₀ ≈ N_τ (Sec. 3.1), we are interested in lowering the density of P_M in the Fourier plane by decreasing the number of angular observations N_θ. In other words, with respect to this reduction, we aim at developing a numerical reconstruction that makes Optical Deflectometric Tomography "compressive", in a way similar to other compressive imaging devices which, inspired by the Compressed Sensing paradigm [6,13], reconstruct high resolution images from few (indirect) observations of their content [15,27,34,37]. This ability would of course lead to a significant reduction of the ODT observation time, with potential impact, for instance, on fast industrial object quality control relying on this technology. This objective of compressiveness can only be reached by regularizing the ODT inverse problem with an appropriate prior model on the configuration of the expected RIM n. Interestingly, the actual RIM of most human-made transparent materials (e.g., optical fiber bundles, lenses, optics, etc.) is composed of slowly varying areas separated by sharp boundaries (material interfaces) (see Fig. 2-(left)). This can be interpreted with a Bounded Variation (BV) or "Cartoon Shape" model [33], like the typical Shepp-Logan phantom in Fig. 2-(right). Therefore, the inverse problem in (11) can be regularized by promoting a small Total-Variation norm, which in its discrete formulation is defined as ‖n‖_TV = ‖∇n‖_{2,1} = Σ_k ‖(∇n)_k‖₂ [8], where ∇ is the (finite difference) gradient operator. Reusing the 2-D coordinates of n, this operator is defined along each direction as ∇n = (∇₁n, ∇₂n) with (∇₁n)_{kl} = n_{k+1,l} − n_{k,l} and (∇₂n)_{kl} = n_{k,l+1} − n_{k,l}.
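A minimal sketch of the discrete gradient and isotropic TV norm above (with forward differences and an implicit zero boundary, one common convention) shows why the TV prior favors cartoon-shape images: a flat disc has a far smaller TV norm than a noisy image of comparable dynamics.

```python
import numpy as np

def gradient(n):
    """Finite-difference gradient: (grad_1 n)_kl = n_{k+1,l} - n_{k,l},
    (grad_2 n)_kl = n_{k,l+1} - n_{k,l}, zero on the last row/column."""
    g1 = np.zeros_like(n); g1[:-1, :] = n[1:, :] - n[:-1, :]
    g2 = np.zeros_like(n); g2[:, :-1] = n[:, 1:] - n[:, :-1]
    return g1, g2

def tv_norm(n):
    """Isotropic TV norm ||grad n||_{2,1} = sum_k ||(grad n)_k||_2."""
    g1, g2 = gradient(n)
    return np.sqrt(g1**2 + g2**2).sum()

# A "cartoon-shape" image (flat disc on a flat background) versus noise
# of the same dynamics: the TV norm clearly separates the two.
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
disc = ((x - 32)**2 + (y - 32)**2 < 15**2).astype(float)
noise = np.random.default_rng(1).random((64, 64))
assert tv_norm(disc) < tv_norm(noise)
```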
In order to obtain a reconstruction method that is also robust to additive observation noise, we must lighten the strict fidelity constraint implicitly used by the LS method in (14). Therefore, assuming the data corrupted by an additive white Gaussian noise (AWGN), we impose a data fidelity requirement using the ℓ₂-norm, i.e., if η = y − Φn is known to have a bounded norm (or energy) ‖η‖ ≤ ε, we force any reconstruction candidate u to satisfy ‖y − Φu‖ ≤ ε. The way ε can be estimated is explained in Sec. 6.
In addition to the fidelity criterion with the observed data, other requirements can be imposed on the reconstruction. First, we can often assume that the reference refractive index n_r (i.e., the one of some optical fluid surrounding the object) is lower than the object RIM. Second, if the object is completely contained in the field-of-view Ω of the ODT experiment, we can force any candidate RIM u to match n_r on the boundary of Ω, i.e., imposing u_k = n_r for all indices k belonging to the border of C_N. These indices are associated with pixels r_{m,n} ∈ C_N for which at least one of the two 2-D coordinates is equal to either −N₀/2 or N₀/2 − 1. The corresponding index set is denoted ∂Ω for simplicity.
Gathering all these aspects, we can propose a reconstruction program combining the TV prior with these constraints, denoting by 1 ∈ R^N the vector of ones, i.e., the unit RIM in C_N. However, the program can be slightly simplified by observing that the kernel of the sensing operator Φ = ΘDF in ODT contains the set of constant vectors in R^N. This is a consequence of the vanishing affine frequency ω (which mainly defines the action of D) at the frequency origin or, more simply, this relies on the occurrence of the RIM gradient in the deflection model (2).
Therefore, the change of variable u → u − n_r 1 does not disturb the previous reconstruction, which can be recast as the program (15), remembering that the true RIM estimate is actually n_{TV−ℓ2} + n_r 1.
Without the frontier constraint (u_∂Ω = 0), the uniqueness of the solution is not guaranteed. In that case, for one minimizer u*, all the vectors of {u* + λ1 : λ ∈ R, u* + λ1 ≥ 0} also minimize the problem, since the kernels of both Φ and ∇ contain the constant vectors. Considering the frontier constraint is thus essential for enforcing the uniqueness of the solution, as expressed in the following lemma.
Lemma 5.1. If there is at least one feasible point for the constraints of (15), then the solution of this problem is unique.
Proof. Using the TV norm definition (and squaring it), the TV-ℓ2 optimization (15) is equivalent to solving argmin_{u ∈ R^N} ‖∇u‖²_{2,1} s.t. ‖y − Φu‖₂ ≤ ε, u ≥ 0, u_∂Ω = 0. The kernel of ∇ is the set of constant vectors, while that of R_∂Ω (defining the frontier constraint) is the set of vectors equal to 0 on ∂Ω. Therefore, Ker ∇ ∩ Ker R_∂Ω = {0}. Moreover, since the domain of ‖∇ · ‖²_{2,1} is R^N, and since we assume at least one feasible point, (15) has at least one solution.
Let x₁ and x₂ be two distinct minimizers. Since both satisfy the frontier constraint, x₁ − x₂ ∈ Ker R_∂Ω; as Ker ∇ ∩ Ker R_∂Ω = {0} and x₁ ≠ x₂, we have x₁ − x₂ ∉ Ker ∇ and thus ∇x₁ ≠ ∇x₂. By the strict convexity of the function ϕ(·) = ‖·‖²_{2,1}, writing x_λ = λx₁ + (1 − λ)x₂ for λ ∈ (0, 1), we get ‖∇x_λ‖²_{2,1} < λ‖∇x₁‖²_{2,1} + (1 − λ)‖∇x₂‖²_{2,1}, showing that x_λ, which also satisfies the convex constraints, is a better minimizer, which is a contradiction.
Therefore, we see that the uniqueness is actually reached by the stabilizing condition u_∂Ω = R_∂Ω u = 0, which keeps the optimization away from Ker ∇ \ {0}.
As explained in Section 7, the program (15) can be efficiently solved using proximal methods [10] and operator splitting techniques, like the primal-dual algorithm recently proposed by Chambolle and Pock in [9].

NOISE IDENTIFICATION, ESTIMATION AND ANALYSIS
In this section we first discuss the different sources of noise and how to estimate the noise energy present in the experimental data. Then, we analyze the noise impact on both AT and ODT measurements.

Noise sources
When a real sensing scenario such as ODT is being studied, different sources of noise are present and have to be considered when determining the global noise energy bound ε in (15). First, we have the observation noise. Under high light source intensity, the images collected by a Schlieren deflectometer (Fig. 1-(bottom)), and used for computing the ODT deflections z, are mainly affected by electronic noise such as the CCD thermal noise. This induces a noise in the measured deflection angles that can reasonably be assumed Gaussian and homoscedastic, i.e., with a homogeneous variance through all the measurements. By computing the 1-D Fourier transform of the ODT measurements using ΘF_rad in (7), the corresponding noise η_obs remains Gaussian [29] in the FDM y, i.e., η_obs ∼ N(0, σ²_obs). Actually, from the orthonormality of the Fourier basis, (7) provides σ²_obs = (δτ)² N_τ σ²_z, where σ²_z is the variance of the noise present in the ODT measurements z. The latter can be estimated from the noisy observations using the Robust Median Estimator [14,36].
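A minimal sketch of one common form of the Robust Median Estimator: the noise standard deviation is recovered as the median absolute value of fine-scale detail coefficients divided by 0.6745 (the median absolute deviation of a standard Gaussian). Here, simple pairwise differences stand in for a finest-scale wavelet band, which is a simplification of the wavelet-based estimator referenced in [14,36].

```python
import numpy as np

def sigma_rme(z):
    """Robust Median Estimator of the noise std from a smooth noisy signal.
    Pairwise differences kill the smooth part; differences of iid N(0, s^2)
    samples are N(0, 2 s^2), hence the 1/sqrt(2) normalization."""
    d = np.diff(z) / np.sqrt(2.0)
    return np.median(np.abs(d)) / 0.6745

rng = np.random.default_rng(2)
z = np.sin(np.linspace(0, 4 * np.pi, 20000))       # smooth deflection profile
sigma_true = 0.05
z_noisy = z + sigma_true * rng.standard_normal(z.size)
est = sigma_rme(z_noisy)
assert abs(est - sigma_true) / sigma_true < 0.2    # within 20% of the truth
```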
This noise, defined as the difference between the noisy FDM y and the noiseless FDM y_true, has an energy that can be bounded with high probability by the Chernoff-Hoeffding bound [21] for some c = O(1). Second, there is the modeling error that comes with every mathematical discrete representation of a physical continuous system. In the ODT system, this error is due to (i) the first order approximation used to formulate (2), (ii) the sampling of the continuous RIM and (iii) the discrete model itself. The modeling noise is related to the difference between the noiseless FDM and the sensing model Φ_true n = ΘDF_true n, where F_true performs the exact Polar Fourier Transform. As explained in Sec. 2.1, the modeling noise can be estimated by means of a ray tracing method based on the Snell law (not detailed here). This shows that, for simple objects, an absolute error of 0.7° is expected on light deflections smaller than 7°. A Gaussian noise model then provides a rough estimation of 10 dB for the corresponding measurement SNR, from which the bound ε_model follows. Third, we must consider the interpolation noise given by the mathematical error committed by estimating the polar Fourier Transform with the NFFT algorithm, i.e., the noise Φ_true n − Φn.
To determine a bound ε_nfft on the energy of this error, we first estimate the NFFT distortion (i.e., without the action of D), defined as the difference between the NFFT polar Fourier Transform F_app and the true NDFT F. Theoretically, for any vector f ∈ R^N, the ℓ∞-norm of this distortion is bounded, with high probability for c = O(1), by a term proportional to the NFFT accuracy parameter ϵ. Finally, ε_nfft can be crudely computed from this bound, with ω_max ≈ ½ N_τ δω = 1/(2δτ) ≈ 1/(2δr) representing the maximum frequency amplitude in P_M. In practice, because of the RIM shift n → n − 1 n_r explained in Sec. 5, we can bound ‖n‖₁, and hence C(n), with the expected RIM dynamics δn, i.e., ‖n‖₁ ≤ N δn, and we adjust ϵ in order to keep ε_nfft much lower than the other sources of noise.
Finally, we may also have an error introduced by the instrument calibration, when determining the exact τ and θ associated to the projections. We are going to neglect this error by assuming a pre-calibration process that provides an exact knowledge of these values (see Sec. 8.2).
In conclusion, gathering all the previous noise identifications and assuming the different noises are of zero mean and independent of each other, we can bound the difference between the actual ODT measurement and the sensing model. Therefore, we have ε ≈ (ε²_obs + ε²_model + ε²_nfft)^{1/2}.
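Since the error terms are assumed zero-mean and mutually independent, their energy bounds add in quadrature. For illustration, using the bounds reported later for the homogeneous sphere experiment (Sec. 8.5):

```python
import numpy as np

# Independent zero-mean errors add in quadrature:
#   eps = sqrt(eps_obs^2 + eps_model^2 + eps_nfft^2)
eps_obs, eps_model, eps_nfft = 0.008, 0.035, 4.24e-16  # sphere experiment values
eps = np.sqrt(eps_obs**2 + eps_model**2 + eps_nfft**2)
assert abs(eps - 0.0359) < 1e-3   # dominated by the modeling noise
```

Note how the global bound is essentially set by the largest term (here the modeling noise), so refining the much smaller NFFT bound would not change ε in practice.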

Comparative study of the noise impact on AT and ODT
As we have seen in Sec. 3.3, the main difference between the AT and ODT problems is the appearance of the diagonal operator D in the latter. Therefore, the AT sensing operator is described as Φ_AT = ΘF and the ODT sensing operator as Φ_ODT = ΘDF. We will now analyze the impact of an additive white Gaussian noise on the measurements with regard to the presence of this operator. For this, we apply the sensing operators Φ_AT and Φ_ODT to a section of the fiber bundle (see Fig. 5-(left)) in order to obtain the AT and ODT acquisitions, respectively. In Fig. 3 and Fig. 4 we show the real part of the Fourier measurements in AT and ODT.
For the class of images we are interested in, we can notice that in AT the magnitude presents a peak around ω = 0 and then decreases significantly as the distance to the center increases, tending to zero at the borders (see Fig. 3). In ODT, by contrast, the presence of the operator D makes the image intensity quite spread over all the pixels (see Fig. 4). This has a direct impact on the reconstruction when the measurement is affected by additive Gaussian noise. As the noise spreads evenly through the image, the pixels that are not around ω = 0 will be more affected in the AT model because their intensity is significantly lower.

NUMERICAL METHODS
This section first provides an overview of the Chambolle-Pock primal-dual algorithm [9] used for solving (15) and hence recovering the RIM from the noisy FDM. The algorithm is then generalized into a product space optimization that allows the minimization of more than two convex functions. We next show how to use this generalization to solve our ODT problem under multiple constraints, before briefly presenting an adaptive selection of the optimization parameters due to [19], offering faster convergence.

Chambolle-Pock Primal-Dual Algorithm
The Chambolle-Pock (CP) algorithm (Algorithm 1 in [9]) is an efficient, robust and flexible algorithm for solving minimization problems of the form min_{x ∈ R^N} F(Kx) + G(x), for a linear operator K : R^N → R^W and a variable x ∈ R^N. The functions F and G belong to the functional sets Γ₀(R^W) and Γ₀(R^N), respectively. In short, CP solves the primal problem described above simultaneously with its dual problem, until the difference between their objective functions, the primal-dual gap, is zero.
For any variable v ∈ R^W, the primal-dual optimization can be formulated as the following saddle-point problem: min_{x ∈ R^N} max_{v ∈ R^W} ⟨Kx, v⟩ + G(x) − F*(v), where F* is the convex conjugate of the function F provided by the Legendre transform, F*(v) = max_z ⟨z, v⟩ − F(z). Using the Legendre transform we obtain the primal version described in (17) and also the dual version as follows: max_{v ∈ R^W} −G*(−K*v) − F*(v), where K* is the exact adjoint of the operator K, such that ⟨Kx, v⟩ = ⟨x, K*v⟩. The CP algorithm is defined by the following iterations: v^{(k+1)} = prox_{νF*}(v^{(k)} + ν K x̄^{(k)}), x^{(k+1)} = prox_{µG}(x^{(k)} − µ K* v^{(k+1)}), x̄^{(k+1)} = x^{(k+1)} + ϑ(x^{(k+1)} − x^{(k)}). The quantity prox_f denotes the proximal operator of a convex function f ∈ Γ₀(V) for a certain finite dimensional vector space V [10,28]. This operator is defined as prox_f(z) = argmin_{u ∈ V} ½‖u − z‖² + f(u). The proximal operator admits the use of non-smooth convex functions such as the TV norm, making the algorithm suitable for solving the TV-ℓ2 problem described in Sec. 5. Many numerical methods require the operator K to form a tight frame, which is not the case for our sensing operator Φ. The CP algorithm relaxes this convergence requirement: we only need to tune the step sizes µ and ν such that the condition µν|||K|||² < 1 holds for any operator K. Moreover, as presented in Sec. 7.4, these two parameters can be adaptively tuned during the iterations in order to reach faster convergence [19].
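The CP iterations above can be sketched generically. This is a toy instance, not the ODT problem: it solves an ℓ₁-regularized least-squares problem, min_x ½‖Kx − y‖² + λ‖x‖₁, where F(z) = ½‖z − y‖² and G(x) = λ‖x‖₁, whose proximal operators (via the Legendre transform for F*) have closed forms.

```python
import numpy as np

def chambolle_pock(K, prox_Fstar, prox_G, n_iter=3000, theta=1.0):
    """Generic CP iterations for min_x F(Kx) + G(x):
         v <- prox_{nu F*}(v + nu K xbar)
         x <- prox_{mu G}(x - mu K^T v)
         xbar <- x + theta (x - x_old)
       with step sizes satisfying mu * nu * |||K|||^2 < 1."""
    L = np.linalg.norm(K, 2)              # |||K|||: largest singular value
    mu = nu = 0.9 / L
    x = xbar = np.zeros(K.shape[1]); v = np.zeros(K.shape[0])
    for _ in range(n_iter):
        v = prox_Fstar(v + nu * (K @ xbar), nu)
        x_old = x
        x = prox_G(x - mu * (K.T @ v), mu)
        xbar = x + theta * (x - x_old)
    return x

# Toy problem: F = 0.5 ||. - y||^2  ->  prox_{nu F*}(w) = (w - nu y) / (1 + nu)
#              G = lam ||.||_1      ->  soft-thresholding with mu * lam
rng = np.random.default_rng(3)
K = rng.standard_normal((30, 60)); y = rng.standard_normal(30); lam = 1.0
prox_Fstar = lambda w, nu: (w - nu * y) / (1.0 + nu)
prox_G = lambda w, mu: np.sign(w) * np.maximum(np.abs(w) - mu * lam, 0.0)
x = chambolle_pock(K, prox_Fstar, prox_G)

# First-order optimality for this problem: |K^T (K x - y)|_i <= lam everywhere.
g = K.T @ (K @ x - y)
assert np.max(np.abs(g)) <= 1.05 * lam
```

In the ODT reconstruction, K, F and G are replaced by the product-space operator and the TV/fidelity/positivity terms described in the following subsections.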
There is an easy way to estimate the convergence of the primal-dual algorithm (20). It relies on evaluating the primal and dual residuals defined by the subgradients of the saddle-point problem (18) with respect to the primal and dual variables, namely, P(x, v) := ∂G(x) + K*v and D(x, v) := ∂F*(v) − Kx, respectively [19].
By definition, for an optimal point (x̂, v̂) of (18), zero must belong to these residuals; therefore, by tracking the size of these residuals we can analyze the algorithm convergence. Explicit formulas for the residuals are obtained by observing the optimality conditions of (20) at each iteration. This provides P^{(k+1)} = (x^{(k)} − x^{(k+1)})/µ and D^{(k+1)} = (v^{(k)} − v^{(k+1)})/ν + K(x̄^{(k)} − x^{(k+1)}), or, equivalently, P^{(k+1)} ∈ P(x^{(k+1)}, v^{(k+1)}) and D^{(k+1)} ∈ D(x^{(k+1)}, v^{(k+1)}). Goldstein et al. [19] show experimentally that a converging algorithm respects ‖P^{(k)}‖* + ‖D^{(k)}‖* → 0 for some norm ‖·‖* (e.g., ℓ₁). We will analyze the same convergence measure during our experiments in Sec. 8.

Product Space Optimization
In this paper, we are interested in minimization problems containing more than two convex functions. In particular, we aim at solving the general optimization min_{x ∈ R^N} H(x) + Σ_{j=1}^{p} F_j(K_j x), with K_j : R^N → R^{W_j} and p + 1 the number of convex functions. Such a problem does not allow the direct use of the CP algorithm as described before. However, it is easy to adapt it by considering a p-times expanded optimization space R^{pN}, composed of p copies x = (x₁^T, · · · , x_p^T)^T of the original variable. In this context, we define p − 1 bisector planes Π_{1,j} = {x ∈ R^{pN} : x₁ = x_j} (2 ≤ j ≤ p) in order to work with an equivalent primal problem constrained to these planes. The CP shape (17) is thus recovered by working in this bigger space R^{pN}, with F gathering the functions F_j and G gathering H and the bisector constraints. In this expanded optimization space, the equivalent dual problem is written in Appendix C.1. For the functions described above, it is easy to see that, for any ν > 0 and ζ = (ζ₁^T, · · · , ζ_p^T)^T ∈ R^W, the proximal operator of F acts separately on each component, i.e., prox_{νF} ζ = ((prox_{νF₁} ζ₁)^T, · · · , (prox_{νF_p} ζ_p)^T)^T, and, for any µ > 0, we have (more details in Appendix D.1): prox_{µG} ζ = (I_N, · · · , I_N)^T prox_{(µ/p)H}((1/p) Σ_j ζ_j).

ODT numerical reconstruction
Now we need to transform our TV-ℓ2 problem into an expanded form in order to use the CP algorithm. Having two constraints, the optimization space needs to be expanded by p = 2. Using (23), we can reformulate the primal problem from (15) as (24), where C = {v ∈ R^M : ‖y − v‖ ≤ ε} and P₀ = {u ∈ R^N : u_i ≥ 0 if i ∈ int Ω; u_i = 0 if i ∈ ∂Ω}. We show easily that (24) has the shape of (23) with F₁(s₁) = ‖s₁‖_{2,1} for s₁ = ∇u, F₂ = ı_C for s₂ = Φu, and H = ı_{P₀}. For building the dual problem, we need the conjugate functions of F₁, F₂ and H, which are easily computed using the Legendre transform. As a matter of fact, F₁* is the indicator function of the convex set Q = {q = (q₁, q₂) ∈ R^{N×2} : ‖q‖_{2,∞} ≤ 1}, with the mixed ℓ∞/ℓ₂-norm defined as ‖q‖_{2,∞} = max_k ((q₁)_k² + (q₂)_k²)^{1/2} [9]. The conjugate function F₂* is computed as (see Appendix C.2) F₂*(s₂) = s₂^T y + ε‖s₂‖. In order to apply (20), we must compute the proximal operators of F₁*, F₂* and H. The one of F₁* is the projection onto Q, given by [9] (prox_{νF₁*} ζ)_k = ζ_k / max(1, ‖ζ_k‖). The proximal operator of F₂* is determined via the proximal operator of F₂ by means of the conjugation property defined in [10], the latter being the projection onto the convex set C: prox_{F₂} ζ = y + (ζ − y) min(1, ε/‖ζ − y‖). The proximal operator of the function H represents a projection onto the positive orthant with zero borders: (prox_{µH} u)_i = max(u_i, 0) for i ∈ int Ω and (prox_{µH} u)_i = 0 for i ∈ ∂Ω. Finally, making use of the above computations and taking ϑ = 1, the CP algorithm applied to our TV-ℓ2 problem in ODT reduces to the iterations (25) detailed in Appendix D.2. In our experiments, the variables are initialized to zero. The algorithm (25) stops when it achieves a stable behavior, i.e., when ‖x^{(k+1)} − x^{(k)}‖/‖x^{(k)}‖ ≤ Th. The threshold Th is defined for ODT in the next section. In parallel, we analyze the convergence of the algorithm by means of the primal and dual residuals as described in (21).
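The three proximal operators above have simple closed forms; a minimal sketch (function names are ours, and the border is encoded here as a boolean mask):

```python
import numpy as np

def proj_C(s, y, eps):
    """Projection onto C = {v : ||y - v||_2 <= eps} (the l2 fidelity ball)."""
    r = s - y; nrm = np.linalg.norm(r)
    return y + r * min(1.0, eps / nrm) if nrm > 0 else s

def proj_P0(u, border):
    """Projection onto P0: positive orthant with zero values on the border."""
    u = np.maximum(u, 0.0)
    u[border] = 0.0
    return u

def prox_F1_star(q1, q2):
    """prox of the conjugate of ||.||_{2,1}: pixel-wise projection onto the
    l_inf/l_2 unit ball Q = {q : ||q||_{2,inf} <= 1} [9]."""
    scale = np.maximum(1.0, np.sqrt(q1**2 + q2**2))
    return q1 / scale, q2 / scale

# sanity checks
y = np.array([1.0, 2.0]); s = np.array([4.0, 2.0])
assert np.linalg.norm(y - proj_C(s, y, 1.0)) <= 1.0 + 1e-12
u = np.array([-1.0, 3.0, 2.0]); border = np.array([False, False, True])
assert np.allclose(proj_P0(u, border), [0.0, 3.0, 0.0])
q1, q2 = prox_F1_star(np.array([3.0, 0.1]), np.array([4.0, 0.1]))
assert np.allclose(np.sqrt(q1**2 + q2**2), [1.0, np.sqrt(0.02)])
```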
For the iterates in (25), the primal and dual residuals (26) follow from the general definitions in (21). In order to guarantee the convergence of the algorithm, i.e., to ensure that x^{(k)} converges to the solution of (15) when k increases, we need to set µ and ν such that µν|||K|||² < 1. The induced norm |||K||| was computed as explained in [34], using the standard power iteration algorithm to calculate the largest singular value of the associated matrix K.

Adaptive optimization procedure
Until now, a non-adaptive optimization method, i.e., with constant step sizes µ and ν, has been considered. However, such a procedure often presents slow convergence, as shown in Section 8. Therefore, motivated by the recent work of Goldstein et al. [19], an adaptive version of the CP algorithm is used for the ODT reconstructions (see Algorithm 1). While this variant also depends on a couple of parameters that adjust the algorithm adaptivity, these are more naturally related to the characteristics of the inverse problem solved by this optimization. In this adaptive approach, the stepsize updates are tuned to the size of the primal and dual residuals at each iteration. If the primal residual is large with respect to the dual residual, then the primal stepsize µ^{(k)} is increased by a factor 1/(1 − ρ^{(k)}), and the dual stepsize ν^{(k)} is decreased by the same factor. If the dual residual is large with respect to the primal residual, then the primal stepsize is decreased and the dual stepsize is increased. The parameter ρ^{(k)} controls the adaptivity level of the method, and it is updated as ρ^{(k+1)} = βρ^{(k)}, for some β < 1. In Algorithm 1, the parameter Γ > 1 is used to compare the sizes of the primal and dual residuals, and the scaling parameter c > 0, which depends on the expected image dynamics, is used to balance the residuals. The specific values selected for the parameters of Algorithm 1 are those recommended in [19].
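One adaptive update in the spirit of the rule just described can be sketched as follows (a simplified reading of Algorithm 1, not its exact listing: here the adaptivity level ρ decays only when a stepsize change is actually made, and the function name and defaults are illustrative):

```python
def adapt_stepsizes(mu, nu, p_norm, d_norm, rho, Gamma=1.5, c=1.0, beta=0.95):
    """Grow the stepsize of the side whose (balanced) residual dominates,
    shrink the other by the same factor, and decay the adaptivity level rho."""
    if p_norm > Gamma * c * d_norm:        # primal residual dominates
        mu, nu = mu / (1.0 - rho), nu * (1.0 - rho)
        rho *= beta
    elif c * d_norm > Gamma * p_norm:      # dual residual dominates
        mu, nu = mu * (1.0 - rho), nu / (1.0 - rho)
        rho *= beta
    return mu, nu, rho

mu, nu, rho = adapt_stepsizes(1.0, 1.0, p_norm=10.0, d_norm=1.0, rho=0.5)
assert mu == 2.0 and nu == 0.5 and mu * nu == 1.0  # product mu*nu preserved
```

Keeping the product µν constant preserves the convergence condition µν|||K|||² < 1 while rebalancing the primal and dual progress.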

EXPERIMENTS
In this section, the ODT reconstruction is first compared to the common tomographic (AT) reconstruction using the FBP and ME methods. Then, the proposed regularized reconstruction (TV-ℓ2) is compared with the FBP and ME procedures on synthetic and experimental ODT data.

Synthetic Data
Three kinds of discrete synthetic 2-D RIM are selected to test the reconstruction. They are defined on a 256 × 256 pixel grid (N = 256²). In the first object, the RIM n simulates a 2-D section of a bundle of 10 fibers of radius 8 pixels each, immersed in an optical fluid (the background). The two media have a refractive index difference of δn = 12.1 × 10⁻³ (see Fig. 5-(left)). The second object consists of a homogeneous ball centered at pixel (154, 154) with a radius of 60 pixels, immersed in a liquid with δn = 2.8 × 10⁻³ (see Fig. 5-(right)). These two objects were selected in correspondence with the available experimental data we use for reconstruction later in this section. The third object is the well-known Shepp-Logan phantom (see Fig. 2-(right)), a more complex image following the "Cartoon-shape" model.
The measurements were simulated according to (11) by means of the operator Φ, and additive white Gaussian noise η_obs ∼ iid N(0, σ²_obs) was then added in order to simulate a realistic ODT scenario.
The operator Φ is defined as Φ = ΘDF, with ϵ representing the distortion of F with respect to a true operator F_true that would provide the actual NDFT. As discussed in Sec. 3.3, the NDFT computational time is inversely proportional to this parameter in O(N log(N/ϵ)). Therefore, we need to strike a compromise between an accurate and an efficient computation of the NDFT. For this reason we use two different operators: (i) an accurate and high dimensional operator Φ₀ = ΘDF₀ for the acquisition, with a small ϵ₀ = 10⁻¹⁴; and (ii) a less accurate but lower dimensional operator Φ₁ = ΘDF₁ for the reconstruction, with ϵ₁ > ϵ₀. The error caused by using a larger ϵ for the reconstruction is taken into account in ε_nfft (see Sec. 6).
For each object, the ODT measurements are obtained with N_τ = 367 for a varying number of orientations N_θ, which allows analyzing the compressiveness of the reconstruction method. In this synthetic experiment, the orientations θ are taken in [0, π) so that N_θ = 360 corresponds to two orientations per degree. Hereafter, we consider this last situation, i.e., δθ = π/360, as a "full observation" scenario since, given the considered RIM resolutions, the discrete frequency plane is almost fully covered in this case. More generally, we say that a given orientation number N_θ is associated with (100 N_θ/N_full)% of the full coverage, N_full being the number of orientations for having δθ = π/360. The reconstruction robustness with respect to the noise level has been considered for a Measurement SNR, MSNR = 20 log₁₀(‖∆‖/‖η‖), taken in {10 dB, 20 dB, ∞}. This last case with MSNR close to +∞ corresponds to the noiseless scenario, where no Gaussian noise is added and only the NFFT interpolation error (η_nfft) is taken into account. This actually provides a high MSNR value around 270 dB.
The reconstruction quality of n̂ ∈ {n_FBP, n_ME, n_TV−ℓ2} is measured using the Reconstruction SNR (RSNR), RSNR = 20 log₁₀(‖n‖/‖n − n̂‖).
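The RSNR metric is straightforward to implement; a one-line helper (function name ours) and a sanity check:

```python
import numpy as np

def rsnr(n_true, n_rec):
    """Reconstruction SNR: RSNR = 20 log10(||n|| / ||n - n_rec||), in dB."""
    return 20.0 * np.log10(np.linalg.norm(n_true) /
                           np.linalg.norm(n_true - n_rec))

# A uniform 1% relative error corresponds to 40 dB.
n = np.ones(100)
assert np.isclose(rsnr(n, n + 0.01), 40.0)
```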

Robustness comparison for AT and ODT
In order to assess numerically the impact of the operator D in ODT, we compare the RSNR between AT and ODT in similar noisy acquisition scenarios. The comparison is made using the FBP and ME procedures, commonly applied in tomographic reconstructions. In ODT the mean of the image cannot be estimated, but this is not taken into account by the FBP and ME procedures; thus the mean of the reconstructed image was removed before computing the RSNR in order to correctly compare the two reconstructions. We analyze the impact of the affine frequency ω, present in ODT, via the compressiveness and noise robustness. For this, we focus on the reconstruction of the bundle of fibers for different numbers of orientations N_θ ∈ {4, 18, 90, 180, 360}. The results are depicted in Fig. 6 and Fig. 7.
In Fig. 6 we can see that, when no Gaussian noise is added, we obtain similar RSNR for both AT and ODT. The same behavior is observed for both FBP and ME reconstructions, with FBP always providing a lower RSNR. The impact of the parameter ω is evident only in the convergence time of ME, causing the ODT reconstruction to be 4 times slower than the AT one. However, when we add Gaussian noise in such a way that both data have an MSNR = 20 dB, the AT reconstruction presents a fast degradation while the ODT reconstruction remains almost unaffected by the noise (see Fig. 7). These results, observed for both FBP and ME, corroborate the discussion in Sec. 6.2.
Following the discussion of Sec. 3.3, we now analyze the reconstruction of the RIM using a simplified ODT sensing model that is close to a classical tomographic model (12). In Fig. 7 we show a third curve that corresponds to the RIM reconstruction from a noisy ODT sensing where the Fourier measurements are divided by the diagonal operator D (ODT\D) as in (12). The results were obtained using the FBP and ME procedures for an MSNR = 20 dB. As expected, when dividing the measurements by the operator D, the reconstruction quality decreases significantly compared to the results obtained with the complete ODT sensing model (11). Moreover, the regularized formulation TV-ℓ2 cannot be used for this ODT reconstruction because the noise is then heteroscedastic.

TV-ℓ2 Reconstruction method
The TV-ℓ2 reconstruction is compared with the common FBP and ME methods. The reconstruction quality is investigated with respect to compressiveness and noise robustness. Unlike FBP and ME, the TV-ℓ2 method takes into account the zero mean of the image through the frontier constraint. Therefore, the mean of the reconstructed image is only removed from the ME and FBP results. Fig. 8 presents comparison graphs of FBP, ME and TV-ℓ2, showing the RSNR vs the number of orientations N_θ ∈ {4, 18, 90, 180, 360} for the three noise scenarios. These results correspond to the reconstruction of the bundle of fibers for Th = 10⁻⁵.
In Fig. 8-(left) we present the scenario without added noise, i.e., MSNR = ∞. We can see that for a full coverage, i.e., N_θ = 360, as the TV-ℓ2 method takes into account the small noise coming from the NFFT interpolation error, it provides a very good reconstruction that outperforms the ME reconstruction quality by 62 dB and the FBP reconstruction quality by 68 dB.
Both FBP and ME methods degrade rapidly when the problem is ill-posed, i.e., when the projection space is not fully covered, whereas the TV-ℓ2 method maintains a high performance. By promoting a small TV norm, the regularized method presents high compressiveness, as can be observed in the graph, where a high reconstruction quality is still achieved at only 5% of 360 incident angles, with a gain of 62 dB over ME and of 68 dB over FBP. Although the performance of the algorithm decreases significantly for a coverage of 1%, it still provides a higher reconstruction quality than both ME and FBP. The high compressiveness properties of the TV-ℓ2 method are preserved when we add Gaussian noise. We are able to obtain good quality images even for a compressive and highly noisy sensing. With TV-ℓ2, at an MSNR = 10 dB, we get an RSNR of 22 dB for a 5% radial coverage, compared to 5 dB for ME and −1 dB for FBP. However, we can notice how the reconstruction quality of TV-ℓ2 diminishes with respect to the noiseless scenario, whereas FBP and ME are less affected by the noise. Fig. 9 presents the resulting images when reconstructing the bundle of fibers in a noiseless scenario and for N_θ ∈ {18, 90}, which represent, respectively, a coverage of 5% and 25% of the frequency plane. The algorithm is set to stop when Th = 10⁻⁵ is reached.
We notice how the TV-ℓ2 method preserves the image dynamics even for 5% coverage, while FBP and ME provide images with implausible negative values. Moreover, in the low measurement regime, some artifacts appear in both FBP and ME results. The image dynamics are less preserved in the FBP reconstruction, where the artifacts also affect the center of the fibers for 5% coverage.
Regarding numerical complexity, the FBP method takes less than 1 second for the reconstruction, while both the ME and TV-ℓ2 methods require more time. For a coverage of 25% of the frequency plane, to reach the same stopping threshold value of 10⁻⁵, ME requires 6280 iterations (1h05') while TV-ℓ2 requires 1900 iterations (28'). For a coverage of 5% of the frequency plane, the situation reverses and, to reach Th = 10⁻⁵, ME requires 3450 iterations (51') while TV-ℓ2 requires 6620 iterations (1h49'). However, the reconstruction quality is clearly higher when using our regularized method. In cases where the quality of the image reconstruction is sufficiently high, the threshold can be relaxed to a less restrictive value. At the end of this section we perform a convergence analysis for different threshold values, which allows us to choose the suitable threshold for a required quality or convergence time. Although the FBP method requires less time than ME, we have noticed that the ME method outperforms FBP at low to medium reduction of the total number of angles. Henceforth, only the ME reconstruction method will be used for comparison with the TV-ℓ2 method. In Fig. 10 we present the reconstructed images of the bundle of fibers for a moderately noisy sensing (MSNR = 20 dB). We also show the error images in order to provide a better appreciation of the difference between the two methods.
In this noisy scenario, the fiber contours are no longer well estimated using ME and, as we have coverage of only 25%, some oscillating (Gibbs) artifacts appear. On the contrary, the regularized method provides a good estimation on the borders, with no visible artifacts. We notice certain loss in the dynamics of the image, which causes a lower RSNR compared to the noiseless scenario.
Let us now show some results obtained for the other two synthetic images. Fig. 11 and Fig. 12 present the results obtained using ME and TV-ℓ2 on the sphere and the Shepp-Logan phantoms, respectively. This comparison was performed for MSNR = 20 dB and N_θ = 90, i.e., a coverage of 25% of the frequency plane. As the expected image dynamics change for the Shepp-Logan phantom, the parameter c for the residual balancing was set to 250 instead of 1000.
The regularized method provides better image dynamics for these phantoms than when reconstructing the fibers at the same noise level. For these phantoms, it can also be observed that the ME reconstructions present a poor estimation of the borders and oscillating artifacts. Moreover, the error image shows a higher discordance with respect to the actual image. Table 1 presents a more complete comparison of the RSNR obtained using ME and TV-ℓ2 on the three synthetic images, analyzed for the three noise scenarios. When comparing the behavior of the algorithm for the different synthetic images, we can notice that the TV-ℓ2 method outperforms the ME method in all cases.

Algorithm Convergence
Finally, we analyze the convergence of the algorithm by studying the evolution of the primal and dual residual norms and of the RSNR for different threshold values. We also analyze the convergence difference between the adaptive and the non-adaptive method. For this, only the reconstruction of the bundle of fibers for N_θ = 360 and MSNR = 20 dB is used; the other cases present a similar behavior.
The evolution of the primal and dual residual norms (26) and of the convergence condition (22) along the iterations is depicted in Fig. 13. We notice that, as the number of iterations increases, the primal and dual residual norms tend to zero, along with the energy ‖P^{(k)}‖² + ‖D^{(k)}‖².
In order to compare the adaptive and non-adaptive methods, we analyze the evolution of ‖x^{(k+1)} − x^{(k)}‖/‖x^{(k)}‖ and of the RSNR along the iterations for two cases: (i) "Adapt", where the stepsizes are adaptively updated using Algorithm 1, and (ii) "Non-Adapt", where we use constant stepsizes µ = ν = 0.9/|||K|||. Results are presented in Fig. 14.
The evolution of ‖x^{(k+1)} − x^{(k)}‖/‖x^{(k)}‖ helps us analyze the stability of the algorithm. We can see the curves are not smooth, especially for the non-adaptive method, which indicates a non-stable behavior mainly due to a bad conditioning of the global operator K in the product space optimization. This could be improved by a preconditioning procedure as described in [3,31]. We also notice that, for the same number of iterations, the adaptive method converges faster and provides a better result than the non-adaptive one. Moreover, the adaptive method does not require empirically setting the parameters µ and ν, as the algorithm will converge to the optimal parameters independently of the initialization. Table 2 presents the values of the time and RSNR for different threshold values. These results show that a lower threshold provides higher reconstruction quality but significantly increases the number of CP iterations. In this specific reconstruction, setting the threshold to 10⁻⁵ or running more than 500 CP iterations guarantees an RSNR higher than 42 dB.

Experimental Data
The reconstruction algorithm was tested on two transparent objects similar to the synthetic data studied in the previous section: a homogeneous sphere and a bundle of 10 fibers, both immersed in an optical fluid. The experimental setup is based on the Schlieren Deflectometric Tomography described in Sec. 2. A 696 × 523 pixel CCD camera was used for the acquisition, covering a field of view of 3.25 mm × 2.43 mm. This corresponds to N_τ = 696 parallel, equally spaced light rays and 523 2-D slices, which leads to δτ = 4.7 × 10^−3 mm, and thus to δr = 4.7 × 10^−3 mm. Fig. 15 presents one measurement of the deflection angles on the CCD camera grid for the two analyzed optical phantoms. This observation corresponds to θ = 0°.
The experimental configuration leads to a calibration problem: as the object rotates, the rotation center moves within a small range and the origin of the affine parameter τ is altered. A post-acquisition calibration method was therefore implemented to correct this effect. In short, for each angle, the method estimates the true center location by averaging the locations of the maximum and minimum deflection values along τ. The next two paragraphs present the characteristics of the two objects of interest and the reconstruction results obtained with the three tested methods (FBP, ME and TV-2) from the collected experimental observations. In all these experiments, and as discussed in Sec. 6.1, the TV-2 method was considered in the context of a 10 dB modeling noise. This choice seems close to optimal for our experimental conditions: in several tests not reported here, higher or lower noise SNR values led either to severe artifacts in areas where no object is expected or to a significant loss in the expected RIM dynamics.
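A minimal sketch of this recentering step, assuming the deflection data for one slice is stored as an (angles × τ) array and using integer-pixel circular shifts (the function name and the shift convention are illustrative):

```python
import numpy as np

def recenter_sinogram(sino):
    """Post-acquisition calibration sketch: for each projection angle, the
    center along tau is estimated as the midpoint between the locations of
    the maximum and minimum deflection values, and the profile is shifted
    so that this midpoint sits at the center of the array."""
    n_theta, n_tau = sino.shape
    out = np.empty_like(sino)
    for i in range(n_theta):
        prof = sino[i]
        c = 0.5 * (np.argmax(prof) + np.argmin(prof))   # estimated center
        shift = int(round(n_tau / 2 - c))               # integer-pixel correction
        out[i] = np.roll(prof, shift)
    return out
```

For sub-pixel accuracy the integer `np.roll` could be replaced by an interpolated shift, but the principle is the same.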

Homogeneous Sphere
The first observed object is a homogeneous sphere with a diameter of 1.5 mm. The refractive index difference between the sphere and the optical fluid in which it is immersed is δn = 2.8 × 10^−3. The deviation map was measured for N_θ = 45 angular positions over 360 degrees (i.e., 13% of the measurements). The dominant noise in the measurements is the modeling noise obtained by assuming MSNR = 10 dB (16), which provides ε_model = 0.035, while the estimated observation noise provides ε_obs = 0.008. The NFFT interpolation noise is considerably smaller, with ε_nfft = 4.24 × 10^−16. This noise estimation provides an MSNR = 9.79 dB. Fig. 16 shows the reconstruction results obtained with the FBP, ME and TV-2 reconstruction methods for the 300th 2-D slice of the observed 3-D object. The results are shown for a threshold Th = 10^−5, where ME converges in 7180 iterations and TV-2 in 5180 iterations.
We observe a behavior similar to the one found in the synthetic reconstructions. Compared to the FBP and ME results, the sphere frontier is sharper with the TV-2 estimation and the RIM vanishes on the background. The image dynamics recovery is also more accurate with our regularized method, whereas the FBP and ME reconstructions present several artifacts with implausible negative values. It is important to notice that the preservation of the image dynamics depends mainly on the noise estimation and on the proper definition of the constants included in the operator D (see Sec. 3). With the appropriate constants, we are able to establish an equivalence between the physical problem and its discrete mathematical formulation.

Bundle of fibers
The second measured object is a bundle of 10 fibers of 200 µm diameter each. The refractive index difference with respect to the solution in which the fibers are immersed is δn = 12.1 × 10^−3. The experimental data was measured for 60 angular positions over 360 degrees (i.e., 17% of the measurements). As in the case of the sphere, the noise with the most influence on the measurements is the modeling noise obtained by assuming MSNR = 10 dB (16), which provides ε_model = 0.093, while the observation noise provides ε_obs = 0.005. The NFFT interpolation noise is considerably smaller (ε_nfft = 1 × 10^−15). This noise estimation provides an MSNR = 9.98 dB. Fig. 17 shows the reconstruction results obtained with the FBP, ME and TV-2 reconstruction methods for the 300th 2-D slice. For a threshold of 10^−5, ME converges in 33920 iterations and TV-2 in 4560 iterations. Compared to the FBP and ME reconstructions, TV-2 provides a much sharper estimation of the true RIM and the background is correctly estimated to 0. However, the image dynamics is not properly recovered. This reconstruction error appears for the fibers because the refractive index difference between the material and the optical fluid is higher than for the sphere. In this case, the deflection angle is larger, which increases the modeling error. The actual light ray trajectory could be estimated and inserted in an iterative process, as done by Antoine et al. [1]. However, the forward model could then no longer be represented in the frequency domain and we would lose its fast computation.
We can also notice that a section of the bundle of fibers is a complex image to reconstruct, since the light enters and exits multiple fibers, which amplifies the modeling error.
Furthermore, investigating the TV-2 program (15) and assuming that the unknown true RIM is a feasible point of the constraints, i.e., that the noise power ε is correctly bounded, the TV-norm of the solution is necessarily smaller than that of the true map. The norm reduction actually increases with the noise power, since the optimization then has more freedom to reduce the TV-norm of the solution. A direct impact for cartoon-shape maps is thus a reduction of the RIM dynamics in the reconstruction compared to the expected one. Studying whether this dynamics loss could be limited (e.g., by constraining the mean) is a matter of future study.
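Writing the fidelity constraint generically as ‖D(x) − y‖ ≤ ε (with D the sensing operator and the positivity and FoV constraints, also satisfied by the true map, left implicit), this feasibility argument reads:

```latex
\underbrace{\|D(x^\dagger) - y\| \le \varepsilon}_{x^\dagger \text{ feasible}}
\quad \Longrightarrow \quad
\|x^\star\|_{TV} \;=\; \min_{\|D(x) - y\| \le \varepsilon} \|x\|_{TV}
\;\le\; \|x^\dagger\|_{TV},
```

where x† denotes the unknown true RIM and x⋆ the TV-2 solution. Enlarging ε only enlarges the feasible set, so the minimum TV-norm, and hence the reconstructed dynamics, can only decrease.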

CONCLUSIONS
We have demonstrated how regularized reconstruction methods, such as TV-2, can be used in the framework of Optical Deflectometric Tomography to tackle the lack of deflectometric observations and to make the ODT imaging process robust to noise. The proposed constrained optimization problem shows significant improvements in the reconstructions compared to the well-known Filtered Back Projection and Minimum Energy methods. The results confirm that, in a compressive setting, the Total Variation regularization and the prior-information constraints (positivity and FoV restriction) help in providing a unique and accurate estimation of the RIM, promoting sharp edges and preserving the image dynamics (when the modeling noise is limited). By working with the Chambolle-Pock algorithm, we exploit the advantages of proximal operators and primal-dual algorithms, and their flexibility to integrate multiple constraints. We have also shown that the fast NFFT algorithm efficiently approximates (with a controlled error) the ODT sensing model involving the polar NDFT.
Noticeably, some artifacts remain in the experimental data reconstructions, and they originate in the modeling error. To handle this problem, we would have to abandon the assumption of linear light propagation. The actual curved light trajectories, which depend on the RIM itself through the eikonal equation, could then be traced. However, such a model improvement quickly leads to nonconvex optimization procedures and it breaks the fast computability of the forward sensing operator through the Fourier domain. Knowing how to insert this new scheme in our reconstruction method is a matter of future work.
Following the guidelines given in [19], the algorithm convergence is greatly improved by making CP adaptive. However, as the CP algorithm has been shown to converge slowly when the problem is badly conditioned [3,31], preconditioning our global operator K could stabilize it and thus further improve the convergence. We note also that most of the algorithms are implemented in Matlab, except for the NFFT algorithm, which is compiled in C++. A faster implementation could be reached, for instance, by adopting parallel (GPU) processing techniques, a framework particularly adapted to proximal optimization (e.g., see the UNLocBoX project 6 ). Such a numerical improvement will be mandatory for addressing 3-D RIM estimation. In this case, the whole (high dimensional) optimization driving the RIM reconstruction must be considered in a (discretized) three-dimensional space, and a fast forward model estimation will have to be developed.
We finally remark that the Optical Deflectometric framework treated in this paper can also be applied to other imaging techniques, such as X-ray phase contrast tomography [30]. Since energetic X-rays are barely deflected, the linear light trajectories assumed by the first-order paraxial approximation become accurate, and the proposed methods are expected to provide even better results.

B NON-EQUISPACED FOURIER TRANSFORM
The non-equispaced Fourier transform (NFFT) allows us to compute rapidly, i.e., in O(N log N), the NDFT defined in (8) of a function defined on C_N. This computation is performed with a controllable error, which can be reduced further at the price of an increased computational time.
In a nutshell, the NFFT algorithm replaces the exact matrix multiplication f̂ = F f of (9) by s = F̃ f, where F̃ admits a fast matrix-vector multiplication. In this scheme, the discrepancy between f̂ and s, as measured by E_∞(f) := ‖f̂ − s‖_∞, is controlled and kept small.
More precisely, the matrix F̃ starts by embedding R^N in a bigger regular space R^n of size n = n_0^2, with n_0 > N_0. This is obtained by multiplying f with a matrix W ∈ R^(n×N) that performs a component-wise weighting of f by a vector w ∈ R^N_+ followed by a symmetric zero-padding on each side of the function domain C_N. Once in R^n, the common DFT matrix F_n is applied. It can be computed with the FFT algorithm in order to obtain an oversampled Fourier transform of f ∈ R^N. Finally, a sparse matrix V ∈ R^(M×n) multiplies the output of F_n in order to reach the M-dimensional space P. Each row of V corresponds to the translation in the 2-D Fourier domain of a compact and separable 2-D filter ψ to one specific point of the non-regular grid P_M. As a final result, the matrix F̃ is thus factorized as F̃ = V F_n W [26]. Without entering into unnecessary technicalities, the NFFT scheme is characterized by a precise connection between the function ψ defining V and the weighting performed by W. In particular, each component of w is set to the inverse of the Fourier transform of a filter ϕ, while ψ is a periodization of (a truncation of) the same filter.
There exist several choices of windows ϕ/ψ associated with different numerical properties (e.g., localized support in frequency and time, simple precomputations of the windows, ...). We select here the translates of Gaussian bells [35], involving a Gaussian behavior for ϕ(k), which provides a fast error decay for E_∞(f).
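To make the F̃ = V F_n W structure concrete, the following self-contained 1-D Python sketch implements Gaussian gridding (oversampling κ = 2, spreading half-width m_sp) and compares it with the direct NDFT. The function names, parameter choices and the 1-D setting are illustrative simplifications of the actual 2-D NFFT library.

```python
import numpy as np

def ndft(f, nu):
    """Direct non-equispaced DFT: s_m = sum_k f_k exp(-2i*pi*k*nu_m),
    with centered frequencies k = -N/2..N/2-1. Cost O(M*N)."""
    N = f.size
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * np.outer(nu, k)) @ f

def nfft(f, nu, kappa=2, m_sp=12):
    """Gaussian-gridding sketch of the V * F_n * W factorization:
    weighting/deconvolution (W), oversampled FFT (F_n), then sparse
    Gaussian interpolation at the non-equispaced points (V)."""
    N = f.size
    n = kappa * N                                         # fine (oversampled) grid
    tau = m_sp / (4 * np.pi * N**2 * kappa * (kappa - 0.5))  # Gaussian width
    k = np.arange(-N // 2, N // 2)
    psi_hat = np.sqrt(4 * np.pi * tau) * np.exp(-4 * np.pi**2 * k**2 * tau)
    # W: divide by the window spectrum and zero-pad onto the fine grid
    g_hat = np.zeros(n, dtype=complex)
    g_hat[k % n] = f / (n * psi_hat)
    # F_n: ordinary FFT on the oversampled grid
    g = np.fft.fft(g_hat)
    # V: for each point, a short sum against the truncated Gaussian window
    s = np.zeros(len(nu), dtype=complex)
    for m, x in enumerate(nu):
        l0 = int(np.round(n * x))
        l = np.arange(l0 - m_sp, l0 + m_sp + 1)
        s[m] = np.sum(g[l % n] * np.exp(-(x - l / n) ** 2 / (4 * tau)))
    return s
```

On random data this approximation typically agrees with the direct sum to roughly ten decimal digits, while replacing the O(MN) matrix product by one FFT plus a sparse interpolation with 2 m_sp + 1 terms per output point.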
In particular, denoting by κ = n/N > 1 the oversampling factor and using the FFT for the matrix-vector multiplications involving F_n, the total complexity of applying F̃ (or its adjoint) scales as O(n log n + M), i.e., as O(N log N + M) for a fixed oversampling factor and window size.

C.1 Convex Conjugate of the function G
Given a vector x = (x_1^T, x_2^T)^T ∈ R^2N, we can define the function G(x) = H(x_1) + ı_Π1,2(x_1, x_2), with the bisector plane Π_1,2 = {x : x_1 = x_2}. Therefore, for a vector u = (u_1^T, u_2^T)^T ∈ R^2N, the conjugate function G*(u) can be computed via the Legendre transform as follows:

G*(u) = max_x ⟨x, u⟩ − G(x) = max_z ⟨z, u_1 + u_2⟩ − H(z) = H*(u_1 + u_2),

where the second equality uses the fact that any maximizer must satisfy x_1 = x_2 = z.

C.2 Convex Conjugate of the indicator function ı_C
Given the indicator function ı_C(u) of the convex set C = {v ∈ R^M : ‖v − y‖ ≤ ε}, its conjugate can be computed via the Legendre transform as follows:

ı*_C(v) = max_{u ∈ C} ⟨v, u⟩ = max_{‖b‖ ≤ ε} ⟨v, y + b⟩ = ⟨v, y⟩ + max_{‖b‖ ≤ ε} ⟨v, b⟩.

The value of b in {b : ‖b‖ ≤ ε} that maximizes the last expression is b = ε v/‖v‖, and we get ı*_C(v) = ⟨v, y⟩ + ε‖v‖.
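This closed form is easy to sanity-check numerically; the sketch below (plain NumPy, illustrative variable names) compares it with a brute-force maximization of ⟨v, u⟩ over random points on the boundary of C:

```python
import numpy as np

# conjugate of the indicator of C = {u : ||u - y|| <= eps}:
# i_C^*(v) = <v, y> + eps * ||v||  (closed form derived above)
rng = np.random.default_rng(0)
M, eps = 5, 0.7
y = rng.standard_normal(M)
v = rng.standard_normal(M)
closed_form = v @ y + eps * np.linalg.norm(v)

# brute force: maximize <v, u> over u = y + b with ||b|| = eps
# (the maximum of a linear form over a ball is attained on its boundary)
b = rng.standard_normal((200_000, M))
b *= eps / np.linalg.norm(b, axis=1, keepdims=True)
brute_force = np.max((y + b) @ v)
```

The brute-force value never exceeds the closed form and approaches it as the sampling gets denser.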

D DETAILS ON THE PRODUCT SPACE OPTIMIZATION
D.1 Proximal operator of the function G
For every x = (x_1, · · · , x_p) ∈ R^pN, the function G(x) is defined as

G(x) = H(x_1) + Σ_{j=2}^{p} ı_Π1,j(x),

where, for j ∈ [p], Π_1,j denotes the bisector plane Π_1,j = {x ∈ R^pN : x_1 = x_j}.
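Because the bisector constraints force all blocks to a common value z, the proximal operator of G reduces to a single proximal step on the block average: minimizing ½ Σ_j ‖z − x_j‖² + γH(z) gives z = prox_{(γ/p)H}(x̄). A minimal NumPy sketch of this standard product-space identity (function names illustrative; the paper's H may differ):

```python
import numpy as np

def prox_G(x_blocks, prox_H, gamma=1.0):
    """Proximal operator of G(x) = H(x_1) + sum_{j>=2} i_{Pi_{1,j}}(x):
    the bisector constraints force every block to a common value z, and
    minimizing 0.5*sum_j ||z - x_j||^2 + gamma*H(z) yields
    z = prox_{(gamma/p) H} applied to the average of the blocks."""
    p = len(x_blocks)
    z = prox_H(np.mean(x_blocks, axis=0), gamma / p)
    return [z.copy() for _ in range(p)]

# example: H = indicator of the nonnegative orthant, whose prox is a projection
prox_pos = lambda v, g: np.maximum(v, 0.0)
blocks = [np.array([1.0, -2.0]), np.array([3.0, 0.0]), np.array([-1.0, 2.0])]
out = prox_G(blocks, prox_pos)   # every block collapses to [1.0, 0.0]
```

This is what makes the product-space trick attractive in practice: one cheap prox per iteration enforces both the consensus and the prior constraint.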