TV-based Spline Reconstruction with Fourier Measurements: Uniqueness and Convergence of Grid-Based Methods

We study the problem of recovering piecewise-polynomial periodic functions from their low-frequency information. This means that we only have access to possibly corrupted versions of the Fourier samples of the ground truth up to a maximum cutoff frequency $K_c$. The reconstruction task is specified as an optimization problem with total-variation (TV) regularization (in the sense of measures) involving the $M$-th order derivative regularization operator $\mathrm{L} = \mathrm{D}^M$. The order $M \geq 1$ determines the degree of the reconstructed piecewise polynomial spline, whereas the TV regularization norm, which is known to promote sparsity, guarantees a small number of pieces. We show that the solution of our optimization problem is always unique, which, to the best of our knowledge, is a first for TV-based problems. Moreover, we show that this solution is a periodic spline matched to the regularization operator $\mathrm{L}$ whose number of knots is upper-bounded by $2 K_c$. We then consider the grid-based discretization of our optimization problem in the space of uniform $\mathrm{L}$-splines. On the theoretical side, we show that any sequence of solutions of the discretized problem converges uniformly to the unique solution of the gridless problem as the grid size vanishes. Finally, on the algorithmic side, we propose a B-spline-based algorithm to solve the grid-based problem, and we demonstrate its numerical feasibility experimentally. On both of these aspects, we leverage the uniqueness of the solution of the original problem.


Introduction
In recent years, TV-based regularization techniques for continuous-domain inverse problems have proven to be very fruitful, with rapidly growing theoretical developments [2][3][4][5], algorithmic progress [6][7][8], and data-science applications [9][10][11]. The goal of these techniques is to recover a continuous-domain signal of interest from a finite number of measurements. To tackle the obvious ill-posedness of this recovery problem, the prior assumption is that the signal is sparse in a certain basis. This prior is enforced by solving an optimization problem with a regularization term involving the TV norm.
In this work, we study the reconstruction of an unknown periodic real function $f_0 : \mathbb{T} \to \mathbb{R}$ from the knowledge of its possibly noise-corrupted low-frequency Fourier-series coefficients, where $\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z} = [0, 2\pi]$ is the torus whose endpoints $0$ and $2\pi$ are identified. Fourier-domain measurements have historically been of particular interest in the TV-based community; some of the earliest works in this field focus specifically on this setting [2]. This interest is driven by the favorable theoretical recovery guarantees of Fourier-domain sampling, as well as by potential applications such as magnetic resonance imaging (MRI), as demonstrated by existing discrete-domain sparsity-based methods [12].
Let $K_c \geq 0$ be the cutoff frequency; we therefore have access to

$y = (y_0, y_1, \ldots, y_{K_c}) \in \mathbb{R} \times \mathbb{C}^{K_c}, \quad (1)$

where $y_k \approx \hat f_0[k]$, the $k$th Fourier-series coefficient of $f_0$. Note that since $f_0$ is a real function, $y_0 \in \mathbb{R}$ is approximately the mean $\hat f_0[0] = \langle f_0, 1 \rangle$ of $f_0$, while $y_k \in \mathbb{C}$ for $k \neq 0$. Moreover, the Fourier series of $f_0$ is Hermitian-symmetric, meaning that $\hat f_0[-k] = \overline{\hat f_0[k]} \in \mathbb{C}$ for every $k \in \mathbb{Z}$, so that the coefficients with negative indices carry no additional information. The observation vector $y$ in (1) has $2K_c + 1$ (real) degrees of freedom: one for the real mean $y_0$ and two for each of the other complex Fourier-series coefficients.
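To make the measurement model concrete, the following self-contained sketch (ours, not part of the paper; the piecewise-constant signal below is a hypothetical example) approximates the Fourier-series coefficients by a scaled FFT on a fine grid and checks the degrees-of-freedom count.

```python
import numpy as np

# Illustrative sketch (ours, not the paper's code): approximate the
# Fourier-series coefficients f^[k] = (1/2pi) int_0^{2pi} f(x) e^{-ikx} dx
# of a periodic signal by a Riemann sum on a fine grid, i.e. a scaled FFT.
n_grid = 4096
x = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)

# Hypothetical real-valued piecewise-constant ground truth.
f0 = np.where((x > 1.0) & (x < 4.0), 1.0, -0.5)

F = np.fft.fft(f0) / n_grid        # F[k] ~ f^[k] for |k| << n_grid
Kc = 3
y = F[: Kc + 1]                    # observation vector y = (y_0, ..., y_Kc)

# y_0 is (numerically) real, and negative frequencies are redundant:
# f^[-k] = conj(f^[k]) for a real signal, so y carries 2*Kc + 1 real
# degrees of freedom.
assert abs(y[0].imag) < 1e-9
assert np.allclose(F[-Kc:], np.conj(y[1:][::-1]))
```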

Reconstruction via TV-based optimization
We formulate the reconstruction task as a regularized optimization problem with a sparsity prior, which is enforced via TV-based regularization. More precisely, the reconstruction $f^*$ of $f_0$ is a solution of

$f^* \in \arg\min_{f} E(y, \nu(f)) + \lambda \|\mathrm{L}f\|_{\mathcal{M}}, \quad (2)$

where the minimization runs over a suitable native space specified in Section 2; $y \in \mathbb{R} \times \mathbb{C}^{K_c}$ is the observation vector;

$\nu(f) = (\hat f[0], \hat f[1], \ldots, \hat f[K_c]) \in \mathbb{R} \times \mathbb{C}^{K_c} \quad (3)$

is the measurement vector of low-frequency Fourier coefficients of $f$; $E(y, \cdot) : \mathbb{R} \times \mathbb{C}^{K_c} \to \mathbb{R}_+ \cup \{\infty\}$ is a data-fidelity functional which is a proper convex function, strictly convex over its effective domain, lower semi-continuous (lsc), and coercive; $\| \cdot \|_{\mathcal{M}}$ is the total-variation (TV) norm on periodic Radon measures; and $\mathrm{L}$ is a regularization operator acting on periodic functions. The regularization operator specifies the transform domain in which the signal $f_0$ is assumed to be sparse. The composition of the TV norm with an operator $\mathrm{L}$ is known as generalized TV regularization [5]. For the sake of clarity, we focus on derivative operators of any order, i.e., $\mathrm{L} = \mathrm{D}^M$ where $M \geq 1$, although our results can be extended to more general classes of operators such as fractional derivatives.
The data-fidelity term encourages the measurement vector $\nu(f)$ to be close to the observations $y$. A typical example is the quadratic functional

$E(y, \nu(f)) = \frac{1}{2}\|y - \nu(f)\|_2^2. \quad (4)$

The data fidelity (4) is well-suited to an additive noise model where the measurements $y$ are generated as $y = \nu(f_0) + n$ with $n$ a complex Gaussian vector (see [14, Section IV-B] for more details). Another case of interest is the indicator function $E(y, \nu(f)) = 0$ if $y = \nu(f)$ and $\infty$ otherwise, which leads to the constrained optimization problem

$f^* \in \arg\min_{f} \|\mathrm{L}f\|_{\mathcal{M}} \ \text{ s.t. } \ \nu(f) = y. \quad (5)$

Other classical data-fidelity functionals can be found in [10, Section 7.5].
The choice of the TV norm promotes sparse and adaptive continuous-domain reconstruction, and has recently received a lot of attention (see Section 1.3). The operator $\mathrm{L}$ specifies the transform domain in which sparsity is enforced, together with the regularity properties of the recovery. In the absence of a regularization operator $\mathrm{L}$, Problem (2) leads to the recovery of periodic Dirac masses; this scenario is the subject of our previous paper [15]. In the latter, we thoroughly analyze the cases of uniqueness of the solution of the resulting optimization problem. In particular, we show that, unlike in this manuscript, both cases (uniqueness and nonuniqueness) can occur, and we give a necessary and sufficient condition on the data vector $y$ that guarantees uniqueness. The general theory of periodic TV-based problems [16] guarantees the existence of a solution to Problem (2), but without adjudicating on its uniqueness. Moreover, it gives the form of the extreme-point solution(s) as periodic L-splines, i.e., functions $f$ such that

$\mathrm{L}f = w = \sum_{n=1}^{N} a_n X(\cdot - x_n) \quad (6)$

is a finite sum of shifted Dirac combs, the distinct Dirac locations $x_n$ being the knots of the spline (see Definition 1).
Moreover, known proof techniques [5,16,17] allow us to show that the number of knots N is bounded by N ≤ 2K c + 2.
Our contributions can be detailed as follows.
(i) Uniqueness of the Solution. Our main result is Theorem 1, in which we prove that the solution to Problem (2) is always unique. Moreover, we slightly improve the upper bound on the number of knots to $N \leq 2K_c$ in (6). To the best of our knowledge, Theorem 1 is the first systematic uniqueness result for the analysis of TV-based variational problems such as (2). This result has both theoretical and algorithmic implications, which we leverage in our other contributions. Our proof of Theorem 1 relies on a uniqueness result of our previous paper [15, Corollary 1] for Problem (2) in the absence of a regularization operator $\mathrm{L}$. Interestingly, contrary to Problem (2), uniqueness is not systematic in that setting.
(ii) Uniform Convergence of Grid-Based Methods. We study the grid-based discretization of Problem (2). More precisely, we restrict its search space to the finite-dimensional space of uniform L-splines, i.e., L-splines whose knots lie on a uniform grid. We show that as the grid gets finer, any sequence of solutions of the discretized problems converges in uniform norm towards the unique solution of Problem (2). This form of convergence is remarkably strong: in particular, it implies convergence for any L p norm with 1 ≤ p ≤ ∞. This result is a significant improvement over known convergence results for grid-based methods which exhibit weak * -type convergence such as [18,Proposition 4] and [19,Theorem 3].
(iii) Grid-Based Algorithm. We propose an adaptation of the B-spline-based algorithm developed in [19] to solve Problem (2). The main difference with the latter is the periodic setting, which does not add major difficulties but requires a different treatment of the boundaries. Thanks to our aforementioned uniform convergence result, the reconstructed signal is guaranteed to be uniformly close to the gridless solution when the grid is sufficiently fine. We provide some experimental results of our algorithm on some simulated data that demonstrate its numerical feasibility.

Related works
Optimization over Radon measures. There exists a vast literature concerned with Dirac recovery using the TV norm as a regularizer, that is, problems of the form (2) in the absence of a regularization operator $\mathrm{L}$. Some of the more recent works concerning this topic include [2,3,8,18,[20][21][22][23][24]. However, our focus in this paper is on generalized TV regularization with a nontrivial operator $\mathrm{L}$. We refer to the introduction of our previous paper [15] for a more detailed coverage of the Dirac recovery literature.
From sparse measures to splines and beyond. In recent years, several works have extended the TV-based Dirac recovery framework to smoother continuous-domain signals by considering generalized TV regularization, i.e., problems such as (2) with a nontrivial regularization operator L. In [5], Unser et al. revealed the connection between the constrained Problem (2) (in a nonperiodic setting) and spline theory for general measurement functionals: the extreme-point solutions are necessarily L-splines. This result was revisited, extended, and refined by several authors [10,16,[25][26][27][28][29][30][31]. This manuscript will strongly rely on the periodic theory of TV-based optimization problems recently developed in [16], of which Problem (2) is a special case.
The purpose of most of these works is to describe the solution sets of certain relevant optimization problems, which are typically nonunique. However, in our setting, as we show in Theorem 1, the solution to Problem (2) is always unique. The closest work in this direction is our recent paper [32], where we provide a full description of the solution set of nonperiodic TV-based optimization problems with a regularization operator L = D 2 (which leads to piecewise-linear reconstructions), and spatial sampling measurements ν. This study includes the characterization of the cases of uniqueness [32, Proposition 6 and Theorem 2], which, contrary to Problem (2), is not systematic.
Convergence results and algorithms for discretized problems. The convergence of discretized optimization schemes to the solutions of continuous-domain TV-regularized problems has been studied by several authors, such as [3,4,7,8,33]. Grid-based methods have specifically been considered in [4,18,23]. In these works, the authors prove convergence results in the weak* sense, which is suited to the space of Radon measures, in a setting where no systematic uniqueness results are known. To the best of our knowledge, our work is the first to prove the convergence of solutions of discretized generalized TV-based problems towards the solution of the original problem in such a strong sense as the uniform norm. To achieve this, we leverage our uniqueness result of Theorem 1. On the algorithmic side, grid-based methods to solve optimization problems with TV-based regularization have been proposed in [19,25,[34][35][36].

Outline
The paper is organized as follows. Section 2 introduces the mathematical material used in this paper. In Section 3, we present our optimization Problem (2) of interest and prove that it always has a unique solution. In Section 4, we present our proposed grid-based discretization of Problem (2), and prove that its solutions converge uniformly to that of the original problem when the grid size goes to zero. We present our method for solving this discretized problem using a B-spline basis in Section 5. Finally, we exemplify our results on simulations in Section 6.

Periodic functions and periodic splines
We first introduce some notation and recall some basic facts concerning periodic functions and their Fourier series. More details can be found in [16, Section 2]. The Schwartz space of infinitely smooth periodic functions is denoted by $S(\mathbb{T})$, endowed with its usual Fréchet topology. Its topological dual is the space of periodic generalized functions $S'(\mathbb{T})$.
For $k \in \mathbb{Z}$, let $e_k : \mathbb{T} \to \mathbb{C}$ be the complex exponential function $e_k(x) = \exp(\mathrm{i}kx)$, which is clearly in $S(\mathbb{T})$. The Fourier-series coefficients of $f \in S'(\mathbb{T})$ are given by $\hat f[k] = \langle f, e_k \rangle \in \mathbb{C}$. For a real function $f$, these coefficients are Hermitian-symmetric. We denote by $X$ the Dirac comb, i.e., the periodic counterpart of the Dirac impulse. Its Fourier coefficients are $\hat X[k] = 1$ for any $k \in \mathbb{Z}$. The derivative operator is denoted by $\mathrm{D}$. More generally, we consider the $M$th-order derivative operator $\mathrm{L} = \mathrm{D}^M$ for a fixed integer $M \geq 1$. We then have that $\mathrm{L}f = \sum_{k \in \mathbb{Z}} (\mathrm{i}k)^M \hat f[k] e_k$. We define the periodic Green's function $g_{\mathrm{L}}$ of $\mathrm{L} = \mathrm{D}^M$ as the function

$g_{\mathrm{L}} = \sum_{k \neq 0} \frac{e_k}{(\mathrm{i}k)^M}. \quad (7)$

Then, $g_{\mathrm{L}}$ is the unique periodic and zero-mean function such that $\mathrm{D}^M g_{\mathrm{L}} = X - 1$. It is worth noting that $g_{\mathrm{L}}$ is not a Green's function in the usual sense: there is no periodic function $g$ such that $\mathrm{D}^M g = X$, since $\mathrm{D}^M g$ necessarily has zero mean, whereas the Dirac stream does not. See [16, Section 2.2] for more details on this matter.
We say that $f$ is a periodic L-spline (or simply an L-spline) if

$\mathrm{L}f = w = \sum_{n=1}^{N} a_n X(\cdot - x_n), \quad (8)$

where $N \geq 0$, $a_n \in \mathbb{R}\setminus\{0\}$, and the knots $x_n \in \mathbb{T}$ are pairwise distinct. We call $w$ the innovation of the L-spline $f$.

A function $f$ satisfies (8) if and only if

$f = a_0 + \sum_{n=1}^{N} a_n g_{\mathrm{L}}(\cdot - x_n) \quad (9)$

for some $a_0 \in \mathbb{R}$. In this case, we necessarily have that $\sum_{n=1}^{N} a_n = 0$. This is a particular case of [16, Proposition 2.8] and simply follows from taking the mean (or 0th Fourier coefficient) in (8). It is worth noting that the Green's function $g_{\mathrm{L}}$ is not an L-spline.
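As an independent numerical sanity check (ours, not part of the paper), the Green's function series (7) can be evaluated by truncation; for $M = 2$, the series sums to the classical quadratic polynomial $-\pi^2/3 + \pi x - x^2/2$ on $[0, 2\pi)$, which has zero mean as required.

```python
import numpy as np

# Sketch (ours): evaluate g_L = sum_{k != 0} e_k / (ik)^M by truncating the
# Fourier series (7). For M = 2, the series has the closed form
# g(x) = -pi^2/3 + pi*x - x^2/2 on [0, 2*pi) (a Bernoulli-type polynomial).
def green_DM(x, M, K=2000):
    k = np.arange(1, K + 1)
    terms = np.exp(1j * np.outer(x, k)) / (1j * k) ** M
    return 2.0 * terms.real.sum(axis=1)   # pair +k with -k (Hermitian sum)

x = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
g_closed = -np.pi**2 / 3 + np.pi * x - x**2 / 2
assert np.max(np.abs(green_DM(x, M=2) - g_closed)) < 1e-2
```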

Periodic Radon measures and native spaces
Let $\mathcal{M}(\mathbb{T})$ be the space of periodic Radon measures. By the Riesz–Markov theorem [37], it is the continuous dual of the space $C(\mathbb{T})$ of continuous periodic functions endowed with the supremum norm. The total-variation norm on $\mathcal{M}(\mathbb{T})$, for which it forms a Banach space, is given by

$\|w\|_{\mathcal{M}} = \sup_{\varphi \in C(\mathbb{T}), \, \|\varphi\|_\infty \leq 1} \langle w, \varphi \rangle. \quad (10)$

We also denote by $\mathcal{M}_0(\mathbb{T}) = \{ w \in \mathcal{M}(\mathbb{T}) : \hat w[0] = 0 \}$ the subspace of zero-mean periodic Radon measures. Let $\mathrm{L} = \mathrm{D}^M$ for some $M \geq 1$. We define the native space associated to $\mathrm{L}$ as

$\mathcal{M}_{\mathrm{L}}(\mathbb{T}) = \{ f \in S'(\mathbb{T}) : \mathrm{L}f \in \mathcal{M}(\mathbb{T}) \}. \quad (11)$

Periodic native spaces have been studied for general spline-admissible operators (i.e., periodic operators with finite-dimensional null space and which admit a pseudoinverse) in [16, Section 3]. Proposition 1 recalls some important properties of native spaces for the particular case of the $M$th-order derivative operator. For this purpose, we define the pseudoinverse operator $\mathrm{L}^\dagger$ such that

$\mathrm{L}^\dagger f = \sum_{k \neq 0} \frac{\hat f[k]}{(\mathrm{i}k)^M} e_k \quad (12)$

for any $f \in S'(\mathbb{T})$. In particular, we have that $\mathrm{L}^\dagger X = g_{\mathrm{L}}$.
Proposition 1 (Theorem 3.2 in [16]). Let $\mathrm{L} = \mathrm{D}^M$ for some $M \geq 1$. We have the direct-sum relation

$\mathcal{M}_{\mathrm{L}}(\mathbb{T}) = \mathrm{L}^\dagger(\mathcal{M}_0(\mathbb{T})) \oplus \mathbb{R}, \quad (13)$

and any $f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})$ has a unique decomposition as

$f = \mathrm{L}^\dagger w + a, \quad (14)$

where $w \in \mathcal{M}_0(\mathbb{T})$ and $a \in \mathbb{R}$ are given by $w = \mathrm{L}f$ and $a = \hat f[0]$. Then, $\mathcal{M}_{\mathrm{L}}(\mathbb{T})$ is a Banach space for the norm

$\|f\|_{\mathcal{M}_{\mathrm{L}}} = \|\mathrm{L}f\|_{\mathcal{M}} + |\hat f[0]|. \quad (15)$
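The decomposition of Proposition 1 is straightforward to verify coefficient-wise; here is a minimal sketch (ours, on hypothetical random Fourier coefficients truncated to $|k| \leq K$).

```python
import numpy as np

# Sketch (ours): check f = L†(Lf) + f^[0] on a truncated Fourier vector,
# using (Lf)^[k] = (ik)^M f^[k] and (L† w)^[k] = w^[k]/(ik)^M for k != 0.
M, K = 3, 8
rng = np.random.default_rng(0)
fh = rng.normal(size=2 * K + 1) + 1j * rng.normal(size=2 * K + 1)
k = np.arange(-K, K + 1)
ik_M = np.where(k == 0, 1.0, (1j * k) ** M)      # placeholder 1 at k = 0
Lf = np.where(k == 0, 0.0, ik_M * fh)            # L kills the mean
Ldag_Lf = np.where(k == 0, 0.0, Lf / ik_M)       # L† restores k != 0 exactly
recon = Ldag_Lf + np.where(k == 0, fh, 0.0)      # add back the mean f^[0]
assert np.allclose(recon, fh)
```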

Uniqueness of TV-based penalized problems
It is well known that TV-based optimization problems with regularization operators lead to spline solutions [5]. This is both an existence result and a representer theorem, which provides the form of the (extreme-point) solutions of the optimization task. Here, we focus on problems of the form (2), whose main specificity compared to related works in the literature is the periodic setting and the Fourier-domain measurement operator $\nu$.
We now state our main result, which guarantees the uniqueness of the solution to Problem (2). The proof relies on a result of our previous paper that focuses on the recovery of Dirac impulses from low-frequency Fourier measurements. We recall this result here.
Proposition 2 (Corollary 1 in [15]). Let $z \in \mathbb{R} \times \mathbb{C}^{K_c}$ be such that $z_0 = 0$ and $z \neq 0$. Then, the optimization problem

$\arg\min_{w \in \mathcal{M}(\mathbb{T})} \|w\|_{\mathcal{M}} \ \text{ s.t. } \ \nu(w) = z \quad (16)$

has a unique solution, which is the sum of at most $2K_c$ Dirac impulses.
Our proof technique consists in reformulating Problem (2) over the space of Radon measures and applying Proposition 2 to this equivalent problem. Interestingly, the regularization operator L = D M leads to systematic uniqueness, which is not true of Problem (16) (where there is no regularization operator) in the general case.
Theorem 1. Let $K_c \geq 0$, $\lambda > 0$, and let $E(y, \cdot) : \mathbb{R} \times \mathbb{C}^{K_c} \to \mathbb{R}_+ \cup \{\infty\}$ be a functional which is a proper convex function, strictly convex over its effective domain, lsc, and coercive. Then, the optimization problem

$V(y) = \arg\min_{f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})} E(y, \nu(f)) + \lambda \|\mathrm{L}f\|_{\mathcal{M}} \quad (17)$

admits a unique solution, which is an L-spline whose number of knots is bounded by $2K_c$. In particular, this covers the constrained problem

$\arg\min_{f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})} \|\mathrm{L}f\|_{\mathcal{M}} \ \text{ s.t. } \ \nu(f) = y, \quad (18)$

which corresponds to choosing for $E(y, \cdot)$ the indicator function of $\{y\}$.
Proof. Using a classical argument based on the strict convexity of $E(y, \cdot)$ (see for instance [32, Proposition 7]), we deduce that all solutions $f^*$ of Problem (17) share an identical measurement vector $\nu(f^*) = y_\lambda$. Hence, Problem (17) is equivalent to

$\arg\min_{f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})} \|\mathrm{L}f\|_{\mathcal{M}} \ \text{ s.t. } \ \nu(f) = y_\lambda. \quad (19)$

By Proposition 1, any $f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})$ admits a unique decomposition $f = \mathrm{L}^\dagger w + a$ with $w = \mathrm{L}f \in \mathcal{M}_0(\mathbb{T})$ and $a = \hat f[0] \in \mathbb{R}$. We then observe that $\nu(\mathrm{L}^\dagger w + a) = (a, \hat w[1]/\hat L[1], \ldots, \hat w[K_c]/\hat L[K_c])$ with $\hat L[k] = (\mathrm{i}k)^M$, and thus that $(w^*, a^*)$ is a solution of (19) if and only if $a^* = (y_\lambda)_0$ and $w^*$ is a solution of the new problem

$\arg\min_{w \in \mathcal{M}(\mathbb{T})} \|w\|_{\mathcal{M}} \ \text{ s.t. } \ \nu(w) = z, \quad (20)$

where $z \in \mathbb{R} \times \mathbb{C}^{K_c}$ is given by $z_0 = 0$ and $z_k = \hat L[k](y_\lambda)_k$ for $k \neq 0$. Note that the condition $w \in \mathcal{M}_0(\mathbb{T})$ in (19) (i.e., that $w$ has zero mean) has been removed in (20), since it is automatically satisfied via the constraint $\nu(w) = z$, which gives $\hat w[0] = z_0 = 0$. We now prove that Problem (20) has a unique solution $w^*$ which is a sum of at most $2K_c$ Dirac impulses. If $z = 0$, then the result trivially holds, the unique solution being $w^* = 0$. We now assume that $z \neq 0$. In this case, Problem (20) has a nonzero optimal value due to the fact that $w = 0$ cannot be a solution. Since $z_0 = 0$ and $z \neq 0$, Problem (20) satisfies the assumptions of Proposition 2 and therefore has a unique solution $w^*$, which is a sum of at most $2K_c$ Dirac impulses. The unique solution of (17) is then $f^* = \mathrm{L}^\dagger w^* + (y_\lambda)_0$, which proves the uniqueness. The fact that the solution is an L-spline is already known; it has been proved for our setting in [16, Theorem 4]. Finally, concerning the knot bound, known proof techniques [5,16,17] allow one to reach the bound $2K_c + 1$, which we improve to $2K_c$. □
One can actually be slightly more precise and show that the mean of the solution is known under very mild conditions on the cost functional $E$. Under these conditions, we also provide a reformulation of Problem (17) over the space of Radon measures.

Proposition 3.
We assume that we are under the conditions of Theorem 1 and that the data-fidelity cost functional $E$ is such that, for any fixed $(z_1, \ldots, z_{K_c}) \in \mathbb{C}^{K_c}$, we have

$y_0 \in \arg\min_{z_0 \in \mathbb{R}} E(y, z), \quad (21)$

where $y = (y_0, y_1, \ldots, y_{K_c}) \in \mathbb{R} \times \mathbb{C}^{K_c}$ and $z = (z_0, z_1, \ldots, z_{K_c}) \in \mathbb{R} \times \mathbb{C}^{K_c}$. Then, the unique solution $f^*$ to (17) admits the decomposition

$f^* = y_0 + \mathrm{L}^\dagger w^*, \quad (22)$

where $w^* \in \mathcal{M}_0(\mathbb{T})$ is the unique solution of the reformulated problem over the space of Radon measures

$\arg\min_{w \in \mathcal{M}_0(\mathbb{T})} E\big(y, \nu(y_0 + \mathrm{L}^\dagger w)\big) + \lambda \|w\|_{\mathcal{M}}. \quad (23)$

In particular, this implies that $\hat f^*[0] = y_0$.

Remark.
The relation (21) holds for virtually all classical cost functionals, including any ℓ p norm-based cost such as the quadratic data fidelity (4), or any separable cost whose minimum over each component is reached when y m = z m , such as indicator functions. Proposition 3 ensures that the mean of the solution of Problem (17) is given byf * [0] = y 0 .

Uniform convergence of grid-based methods
A common way to solve infinite-dimensional continuous-domain problems such as (17) algorithmically is to discretize them using a uniform finite grid [18,23]. In this section, we propose such a discretization method for the problem

$f^* = \arg\min_{f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})} \frac{1}{2}\|y - \nu(f)\|_2^2 + \lambda \|\mathrm{L}f\|_{\mathcal{M}}, \quad (24)$

that is, Problem (17) with a quadratic data-fidelity cost $E(y, z) = \frac{1}{2}\|y - z\|_2^2$. Note that we no longer denote the solution of Problem (24) as a set but as a function $f^*$, since Theorem 1 guarantees that this solution is unique. We restrict ourselves to the case of the quadratic data fidelity for the sake of simplicity, although our results hereafter hold for more general choices of $E$. Our choice clearly satisfies the assumption of Proposition 3; hence, the solution $f^*$ of (24) satisfies $\hat f^*[0] = y_0$.
Our discretization method, which was introduced for similar problems in [19], consists in restricting the search space of Problem (24) to the space $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$ of uniform L-splines, i.e., L-splines in the sense of Definition 1 whose knots $x_n$ lie on a uniform grid. For a grid size $h = 2\pi/P$, where $P \in \mathbb{N}$, $P \geq 1$, is the number of grid points, this space is defined as

$\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T}) = \Big\{ f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T}) : \mathrm{L}f = \sum_{p=0}^{P-1} a_p X(\cdot - 2\pi p/P), \ a \in \mathbb{R}^P \Big\}. \quad (25)$

Our choice of restricting the search space of Problem (24) to $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$ is guided by Theorem 1, which states that the unique solution to this problem is an L-spline. Hence, this choice of space is compatible with the sparsity-promoting regularization $\|\mathrm{L} \cdot \|_{\mathcal{M}}$. Although, in general, the solution of our problem does not have knots on a uniform grid, it can be approximated arbitrarily closely by an element of $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$ when $P$ is large. The other main feature of our method is that the computations are exact in the continuous domain, both for the forward model and for the regularization term. Our discretized optimization problem then becomes

$V_{\lambda,P}(y) = \arg\min_{f \in \mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})} \frac{1}{2}\|y - \nu(f)\|_2^2 + \lambda \|\mathrm{L}f\|_{\mathcal{M}}. \quad (26)$

Note that, contrary to the original Problem (24), the solution set $V_{\lambda,P}(y)$ of the discretized Problem (26) is not necessarily a singleton.
As we shall demonstrate in Section 5, Problem (26) can be solved algorithmically with standard finite-dimensional solvers. However, the important question of how well its solutions approximate that of the original Problem (24) still remains. We answer this question in Theorem 2 by proving that any sequence of elements of $V_{\lambda,P}(y)$ converges in a strong sense — namely, uniform convergence — towards $f^*$ when $P \to \infty$.
Theorem 2. We denote by $f^*$ the unique solution to (24). For any $P \geq 1$, we set $f^*_P \in V_{\lambda,P}(y)$. Then, we have that

$\lim_{P \to \infty} \|f^*_P - f^*\|_\infty = 0. \quad (27)$

Remark 1. Despite the fact that the solutions to (26) may not be unique, Theorem 2 ensures that the convergence (27) holds for any choice of the $f^*_P$.

Remark 2.
Uniform convergence implies convergence with respect to any $L_p$ norm for $1 \leq p \leq \infty$, since we have $\|f\|_p \leq (2\pi)^{1/p} \|f\|_\infty$ for any $f \in \mathcal{M}_{\mathrm{L}}(\mathbb{T})$.

Remark 3.
Theorem 2 holds for more general settings than Problem (24). More specifically, our proof seamlessly extends to the more general setting of Theorem 1 for any cost functional E that is continuous with respect to its second argument, such as ℓ p losses of the form E(y, z) = ∥y − z∥ p p . Compared to the setting of Theorem 1, this notably excludes indicator functions, i.e., the constrained optimization Problem (18). Concerning the regularization operator L, Theorem 2 readily extends to any operator L such that L{1} = 0 and whose periodic Green's function is Lipschitz. This notably excludes the case L = D, i.e., M = 1.

Proof. We first introduce

$\mathcal{M}_{0, 2\pi/P}(\mathbb{T}) = \Big\{ \sum_{p=0}^{P-1} a_p X(\cdot - 2\pi p/P) : a \in \mathbb{R}^P, \ \sum_{p=0}^{P-1} a_p = 0 \Big\}, \quad (28)$

the uniform discretization of $\mathcal{M}_0(\mathbb{T})$ using Dirac impulses. Then, using Proposition 3 (with a restriction of the search space, which does not affect the proof), we have that $f^*_P = y_0 + \mathrm{L}^\dagger w^*_P$, where $w^*_P$ is a solution of the measure-domain problem (23) restricted to $\mathcal{M}_{0, 2\pi/P}(\mathbb{T})$. We now prove that the Radon measures $w^*_P$ converge towards the unique solution

$w^* = \arg\min_{w \in \mathcal{M}_0(\mathbb{T})} E\big(y, \nu(y_0 + \mathrm{L}^\dagger w)\big) + \lambda \|w\|_{\mathcal{M}} \quad (29)$

for the weak* topology when $P \to \infty$, where the uniqueness of $w^*$ follows from (22) in Proposition 3. This convergence is proved by following [18, Proposition 4]; the fact that the search space in (29) is $\mathcal{M}_0(\mathbb{T})$ rather than $\mathcal{M}(\mathbb{T})$ does not impact the proof. Then, the operator $\mathrm{L}^\dagger$ is linear and continuous between $\mathcal{M}_0(\mathbb{T})$ and $\mathcal{M}_{\mathrm{L}}(\mathbb{T})$ for their respective weak* topologies. This implies that $f^*_P$ converges to $f^*$ for the weak* topology over $\mathcal{M}_{\mathrm{L}}(\mathbb{T})$. According to [16, Proposition 9], $\mathrm{L} = \mathrm{D}^M$ is sampling-admissible for $M \geq 2$, which implies in particular that $X$ is in the predual of $\mathcal{M}_{\mathrm{L}}(\mathbb{T})$. Equivalently, the functional $f \mapsto f(x)$ is weak*-continuous over $\mathcal{M}_{\mathrm{L}}(\mathbb{T})$, which implies that $f^*_P(x) \to f^*(x)$ for any $x \in \mathbb{T}$ (pointwise convergence).

We now prove that the family $(f^*_P)_{P \in \mathbb{N}}$ is equicontinuous. Then, by [38, Theorem 15, Chapter 7], pointwise and uniform convergence are equivalent, which will conclude the proof. Since $f^*_P$ is an L-spline, using the expansion (9), we have

$f^*_P = a_P[0] + \sum_{n=1}^{N_P} a_P[n] \, g_{\mathrm{L}}(\cdot - x_{P,n}) \quad (30)$

for some coefficients $a_P[n]$, $0 \leq n \leq N_P$, and knots $x_{P,n}$, where $g_{\mathrm{L}}$ is the Green's function of $\mathrm{L}$ defined in (7). Moreover, $g_{\mathrm{L}}$ is a periodic Lipschitz function for $\mathrm{L} = \mathrm{D}^M$ and $M \geq 2$; hence, $\|g_{\mathrm{L}}\|_{\mathrm{Lip}} := \sup_{x \neq y} |g_{\mathrm{L}}(x) - g_{\mathrm{L}}(y)| / |x - y| < \infty$. For any $x, y \in \mathbb{R}$, we have that

$|f^*_P(x) - f^*_P(y)| \leq \sum_{n=1}^{N_P} |a_P[n]| \, \|g_{\mathrm{L}}\|_{\mathrm{Lip}} |x - y| = \|w^*_P\|_{\mathcal{M}} \|g_{\mathrm{L}}\|_{\mathrm{Lip}} |x - y|. \quad (31)$

We have seen that $w^*_P \to w^*$ when $P \to \infty$ for the weak* topology. The sequence $(w^*_P)$ is therefore bounded for the total-variation norm, thanks to the uniform boundedness principle. We therefore deduce from (31) that the $f^*_P$ are uniformly Lipschitz, and therefore equicontinuous, which proves the desired result. □

The first part of the proof of Theorem 2, dealing with the pointwise convergence, mostly relies on a generalization of the weak* convergence studied in [18, Proposition 4]. Duval and Peyré use tools from Γ-convergence (see [39] for an introduction) and are themselves inspired by [40].
Theorem 2 shows that our grid-based discretization yields spline solutions that are arbitrarily close to the unique solution f * of (24) in the uniform sense when the discretization step h = 2π P vanishes. It leverages the uniqueness of the spline reconstruction from Fourier measurements ensured by Theorem 1.

B-spline-based algorithm
We now introduce our proposed algorithm to solve the discretized Problem (26) in an exact way, i.e., without any discretization error. The algorithm is based on [19] and uses the B-spline basis to represent the space of uniform splines M L, 2π P (T). The main difference here with [19] is the periodic setting, which actually simplifies the treatment of the boundary conditions. Moreover, for the sake of conciseness, we focus here on discretizing for a fixed grid; we do not present the multiresolution aspect of the algorithm introduced in [19], although it can seamlessly be adapted to our setting.

Preliminaries on uniform periodic polynomial splines
A convenient feature of the space $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$ is that it is generated by periodic B-splines, as will be proved in Proposition 4. To this end, we first provide some background information on B-splines and their periodized versions. B-splines are popular basis functions [41] that are widely used in signal-processing applications [42,43], in part due to their short support, which leads to well-conditioned optimization tasks. In the non-periodic setting, the scaled B-spline $\beta_{\mathrm{L},h}$ of the operator $\mathrm{L} = \mathrm{D}^M$ with grid size $h > 0$ is characterized by its Fourier transform

$\hat\beta_{\mathrm{L},h}(\omega) = \left( \frac{1 - \mathrm{e}^{-\mathrm{i}h\omega}}{\mathrm{i}\omega} \right)^M. \quad (32)$

The scaled B-spline $\beta_{\mathrm{L},h}$ is a piecewise polynomial of order $(M-1)$ with continuous $(M-2)$th derivative, and is supported over the interval $[0, hM]$. For any integer $P \geq 1$, the periodized B-spline with grid size $h = 2\pi/P$ is then defined as

$\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P} = \sum_{n \in \mathbb{Z}} \beta_{\mathrm{L}, 2\pi/P}(\cdot - 2\pi n), \quad (33)$

which is a converging sum due to the finite support of $\beta_{\mathrm{L}, 2\pi/P}$. Moreover, $\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}$ is a periodic L-spline in the sense of Definition 1, and its innovation is given by

$\mathrm{L}\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P} = \sum_{p=0}^{P-1} d_{\mathrm{L}}[p] \, X(\cdot - 2\pi p/P), \quad (34)$

where the $P$-periodic sequence $d_{\mathrm{L}}$ is characterized by its discrete Fourier transform (DFT) $D_{\mathrm{L}}[k] = (1 - \mathrm{e}^{-\mathrm{i}k 2\pi/P})^M$. This relation is easily verified in the Fourier domain. As an example, for $\mathrm{L} = \mathrm{D}$, $d_{\mathrm{L}}$ is the $P$-periodized finite-difference sequence $(1, -1, 0, \ldots, 0)$.

Proposition 4. For any $P \geq 1$, the space $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$ is spanned by the $P$ shifted periodized B-splines $\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}(\cdot - 2\pi p/P)$, $0 \leq p \leq P-1$.

Proof. We first observe that each shifted periodized B-spline belongs to $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$: by (34), its innovation is supported on the uniform grid. Conversely, both spaces have dimension $P$ (the uniform L-splines are parameterized by the $P - 1$ degrees of freedom of their zero-sum innovation together with their mean, while the map $c \mapsto \sum_{p} c_p \beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}(\cdot - 2\pi p/P)$ is injective, as can be verified in the Fourier domain), which proves that they are in fact equal. □
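The DFT characterization of $d_{\mathrm{L}}$ is easy to check numerically. In this small sketch (ours, not part of the paper), we build the $M$-fold circular finite difference of the unit impulse and compare its DFT with $(1 - \mathrm{e}^{-\mathrm{i}2\pi k/P})^M$.

```python
import numpy as np

# Sketch (ours): for L = D^M, the innovation sequence d_L of the periodized
# B-spline is the M-fold circular finite difference of the discrete unit
# impulse, with DFT D_L[k] = (1 - exp(-2i*pi*k/P))^M.
def d_L(P, M):
    d = np.zeros(P)
    d[0] = 1.0
    for _ in range(M):
        d = d - np.roll(d, 1)   # one circular finite difference
    return d

P, M = 8, 2
d = d_L(P, M)
k = np.arange(P)
assert np.allclose(np.fft.fft(d), (1 - np.exp(-2j * np.pi * k / P)) ** M)
# For M = 1, d_L is the finite-difference sequence (1, -1, 0, ..., 0).
assert np.allclose(d_L(P, 1), [1, -1, 0, 0, 0, 0, 0, 0])
```

The zero at $k = 0$ of the DFT reflects the fact that innovations of periodic splines have zero mean.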

Discrete problem formulation
In practice, to solve Problem (26), we expand the unknown uniform spline in the periodic B-spline basis of Proposition 4 and optimize over its coefficients, which leads to a computationally feasible finite-dimensional problem, as demonstrated in the following proposition.

Proposition 5. Let $f = \sum_{p=0}^{P-1} c_p \beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}(\cdot - 2\pi p/P)$ with $c \in \mathbb{R}^P$. Then, solving Problem (26) is exactly equivalent to solving the finite-dimensional problem

$W_{\lambda,P}(y) = \arg\min_{c \in \mathbb{R}^P} \frac{1}{2}\|y - \mathbf{H}c\|_2^2 + \lambda \|d_{\mathrm{L}} \circledast c\|_1, \quad (38)$

where $\circledast$ denotes the $P$-periodic circular convolution and the matrix $\mathbf{H} \in \mathbb{C}^{(K_c+1) \times P}$ is given by $\mathbf{H}_{k,p} = \hat\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}[k] \, \mathrm{e}^{-\mathrm{i}k 2\pi p/P}$ for $0 \leq k \leq K_c$ and $0 \leq p \leq P-1$.

Proof. The $k$th measurement of $f$ is $\hat f[k] = \sum_{p=0}^{P-1} c_p \hat\beta^{\mathrm{per}}_{\mathrm{L}, 2\pi/P}[k] \mathrm{e}^{-\mathrm{i}k 2\pi p/P} = (\mathbf{H}c)_k$, from which the expression of the system matrix $\mathbf{H}$ immediately follows. The expression of the regularization term follows from the innovation relation (34), which gives $\mathrm{L}f = \sum_{p=0}^{P-1} (d_{\mathrm{L}} \circledast c)[p] \, X(\cdot - 2\pi p/P)$, and the fact that $\|\sum_{p=0}^{P-1} a_p X(\cdot - x_p)\|_{\mathcal{M}} = \|a\|_1$ for pairwise-distinct knot locations $x_p$. □
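To illustrate how a problem of this form can be handled, here is our own simplified, self-contained sketch of a generic ADMM iteration for an $\ell_1$-analysis problem of the same shape as (38). The forward matrix below is a plain truncated-Fourier stand-in for the exact B-spline system matrix $\mathbf{H}$, and all sizes and values are hypothetical.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_gtv(H, y, D, lam, rho=0.05, n_iter=5000):
    """ADMM sketch for min_c 0.5*||y - H c||_2^2 + lam*||D c||_1,
    with complex measurements H c and a real coefficient vector c."""
    Hr = np.vstack([H.real, H.imag])            # realify the forward model
    yr = np.concatenate([y.real, y.imag])
    P = H.shape[1]
    A = Hr.T @ Hr + rho * D.T @ D               # normal matrix of the c-update
    u = np.zeros(P)
    v = np.zeros(P)
    for _ in range(n_iter):
        c = np.linalg.solve(A, Hr.T @ yr + rho * D.T @ (u - v))
        u = soft(D @ c + v, lam / rho)          # prox step on the split variable
        v = v + D @ c - u                       # dual (running-residual) update
    return c

# Toy setup (hypothetical, M = 1): P grid points, simplified Fourier-type
# forward matrix standing in for the exact B-spline system matrix.
P, Kc = 32, 10
p = np.arange(P)
H = np.exp(-2j * np.pi * np.outer(np.arange(Kc + 1), p) / P) / P
D = np.array([[1.0 if q == r else (-1.0 if (r - q) % P == 1 else 0.0)
               for q in range(P)] for r in range(P)])  # circular difference
c0 = np.zeros(P)
c0[8:24] = 1.0                                  # two jumps -> sparse D c0
y = H @ c0
c = admm_gtv(H, y, D, lam=1e-3)
```

With a small $\lambda$ and noiseless data, the recovered coefficient vector nearly interpolates the measurements while keeping the circular differences $\mathbf{D}c$ sparse.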

Experiments
In this section, we present results of the discretization method of the previous sections in various experimental settings. Our method amounts to solving Problem (38), which is a standard discrete problem with $\ell_1$ regularization. This can be achieved using standard proximal solvers; in our experiments, we use the ADMM solver [44] of GlobalBioIm [45], a Matlab-based inverse-problem library developed by the Biomedical Imaging Group at EPFL. We solve the linear-inversion step of ADMM with a direct matrix inversion, which is feasible due to the relatively small dimension of our problems.

Qualitative effect
We first present a toy experiment to illustrate the effect of gridding in our discretization method, i.e., restricting the search space to M L, 2π P (T). We therefore design an experiment in which the solution of the problem is known, in order to observe whether our algorithm is able to reconstruct it. To this end, we take L = D 2 , and generate a ground-truth signal f 0 which is a periodic D 2 -spline with 2 knots (the locations and amplitudes of the knots are picked at random).
We then compute the noiseless data vector y = ν(f 0 ) for K c = 3, and solve the corresponding problem (24) with a small regularization parameter λ = 10 −7 in order to enforce the constraints ν(f ) ≈ y with very low error. Since the form of f 0 is compatible with that of the solution given by Theorem 1, the hope is that f 0 will be very close to the solution f * to problem (24), which is confirmed by our experiments.
In Fig. 1(a), we show the reconstruction result of our algorithm, using a voluntarily coarse grid with P = 16 points for visualization purposes. We observe that, since the knots of $f_0$ are quite far from the grid, it is difficult to approximate $f_0$ with an element of $\mathcal{M}_{\mathrm{L}, 2\pi/P}(\mathbb{T})$. The reconstruction therefore requires several knots on the grid to mimic a single knot of $f_0$, and is thus considerably less sparse ($N = 7$ knots versus $N = 2$ for $f_0$).
However, as we increase the number of grid points, the effect of gridding is greatly reduced, as illustrated in Fig. 1. With P = 512, the reconstruction using our algorithm is visually indistinguishable from $f_0$ (which is why we do not show it). However, the knot locations of $f_0$ still do not exactly lie on the grid, and thus our reconstruction still requires multiple knots to mimic a single knot of $f_0$, which leads to $N = 4$ knots. Specifically, our reconstruction has two knots at consecutive grid points 1.3990 and 1.4113 mimicking the knot at 1.4103 of $f_0$, and two knots at 4.8106 and 4.8228 mimicking the knot at 4.8122 of $f_0$. This effect of knot multiplication due to gridding has already been observed and studied extensively in [18] in the absence of a regularization operator L.
The conclusion is thus that gridding leads to visually near-perfect reconstruction when the number of grid points is very large, which is in line with Theorem 2; however, the sparsity of the reconstruction is a poor indicator of the sparsity of the true solution of Problem (24), since gridding induces clusters of knots. Note that this effect is also present in the noisy scenario since, as shown in the proof of Theorem 1, Problem (17) can be reformulated as a noiseless problem of the form (18).

Quantitative effect
In Theorem 2, we have proved that any sequence of continuous-domain solutions f * P to the grid-restricted problem converges uniformly towards the unique solution f * of problem (24) when P goes to infinity. In order to quantify the speed of this convergence, using the same experimental setting as in Fig. 1, we compute the error ∥f * P − f 0 ∥ ∞ where f * P is the reconstructed signal using our grid-based algorithm, and the ground truth f 0 is a proxy for the solution f * to problem (24). As explained earlier, this is a reasonable proxy due to the very small regularization parameter λ = 10 −7 .
In order to limit the effect of randomness in the choice of the knots of the ground truth, we apply a Monte Carlo-type method by generating 100 different ground-truth signals (following the methodology described in the previous section) and averaging the error over these 100 runs. These average errors for different grid sizes are shown in Fig. 2. The trend appears to be linear in log-log scale, which indicates an empirical speed of convergence of $\|f^*_P - f_0\|_\infty \approx C/P^s$ for some constant $C > 0$, where $-s < 0$ is the slope of the linear function. We observe here that $s \approx 1$ (with $s$ slightly smaller than 1). Note that there is no hope of having $s > 1$, since a nonuniform piecewise-linear spline cannot be approximated by a uniform piecewise-linear spline with an error smaller than $O(h)$ in uniform norm, where $h = 2\pi/P$ is the grid size.

Fig. 2. Average error $\|f^*_P - f_0\|_\infty$ over 100 runs for different grid sizes $P$ (in log-log scale).
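The $O(h)$ limitation can be reproduced with a small stand-alone experiment (ours, with a hypothetical off-grid knot location): interpolate a piecewise-linear function whose kink lies off the uniform grid, and fit the log-log slope of the sup-norm error.

```python
import numpy as np

# Sketch (ours): a piecewise-linear function with a knot off the uniform grid
# is approximated by grid-knot piecewise-linear interpolation with sup-norm
# error O(h), where h = 2*pi/P. We estimate the rate s from a log-log fit.
x0 = 1.4103                                   # hypothetical off-grid knot
f = lambda x: np.abs(((x - x0 + np.pi) % (2 * np.pi)) - np.pi)  # kink at x0
xx = np.linspace(0.0, 2.0 * np.pi, 20001)     # fine evaluation grid
Ps, errs = [64, 128, 256, 512, 1024], []
for P in Ps:
    xg = np.linspace(0.0, 2.0 * np.pi, P + 1)  # uniform grid (periodic ends)
    errs.append(np.max(np.abs(np.interp(xx, xg, f(xg)) - f(xx))))
s = -np.polyfit(np.log(Ps), np.log(errs), 1)[0]
# s comes out close to 1: the sup-norm error decays roughly like 1/P.
```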

Fig. 3. Recovery of a piecewise-constant spline with 7 knots with $K_c = 20$. For our reconstruction (gTV), we use $\lambda = 10^{-2}$ and $P = 256$ grid points; the reconstruction has $N = 20$ knots. The data $y$ is noisy in the gTV case, whereas the noiseless data $\nu(f_0)$ is used for the low-pass reconstruction (partial Fourier series of the ground-truth signal up to the cutoff frequency $K_c$).

Noisy recovery of sparse splines
In our next experiment, we attempt to recover a ground-truth signal f_0 from noisy data y ∈ R × C^{K_c} with K_c = 20 and the regularization operator L = D. Once again, the ground-truth signal fits the signal model of Problem (24), i.e., f_0 is a periodic D-spline (piecewise-constant signal) with N = 7 knots. Each knot x_n is chosen at random within consecutive intervals of length 2π/7, and the vector of amplitudes a = (a_1, …, a_N) is an i.i.d. Gaussian random vector projected onto the space of zero-mean vectors. The measurements are corrupted by additive i.i.d. Gaussian noise⁵ n ∈ R × C^{K_c} with standard deviation σ = 10⁻³, i.e., y = ν(f_0) + n.
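To make the setup concrete, the following sketch (our own naming and normalization conventions, not the authors' code) generates such a ground truth and its noisy Fourier measurements. For a piecewise-constant f_0 with Df_0 = ∑_n a_n δ(· − x_n), the nonzero Fourier coefficients follow, up to the chosen normalization, from f̂_0[k] = (1/(ik)) ∑_n a_n e^{−ikx_n}:

```python
import cmath
import math
import random

def make_ground_truth(N=7, seed=0):
    """One knot drawn uniformly in each interval of length 2*pi/N, with
    i.i.d. Gaussian amplitudes projected onto the zero-mean space."""
    rng = random.Random(seed)
    knots = [(n + rng.random()) * 2 * math.pi / N for n in range(N)]
    a = [rng.gauss(0.0, 1.0) for _ in range(N)]
    mean = sum(a) / N
    a = [v - mean for v in a]  # zero-mean projection (needed for periodicity)
    return knots, a

def fourier_coeffs(knots, a, Kc):
    """Coefficients f_hat[k], k = 1..Kc, of f with Df = sum_n a_n delta(. - x_n);
    the normalization is illustrative and convention-dependent."""
    return [sum(an * cmath.exp(-1j * k * xn) for an, xn in zip(a, knots)) / (1j * k)
            for k in range(1, Kc + 1)]

def add_noise(y, sigma, rng):
    """Additive i.i.d. Gaussian noise on the real and imaginary parts."""
    return [v + complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) for v in y]
```

As a sanity check, for a square wave with jumps +1 at x = 0 and −1 at x = π, this yields f̂[1] = 2/i = −2i and f̂[2] = 0, as expected.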
The signal reconstructed with our algorithm is shown in Fig. 3. Despite the presence of noise, the reconstruction of the ground truth f_0 is almost perfect. As observed in the previous experiment, the sparsity of the reconstruction (N = 20) is higher than that of the ground truth (N = 7) due to clusters of knots. We compare our reconstruction to the truncated Fourier series of f_0 up to K_c, i.e., f_{K_c} = ∑_{k=−K_c}^{K_c} f̂_0[k] e_k, which depends solely on the noiseless data vector ν(f_0). Without any prior knowledge, this is the simplest reconstruction one can think of based on the available data ν(f_0). As it turns out, f_{K_c} is also the unique solution to the constrained L²-regularized problem (39), as demonstrated in [25, Theorem 3]. In fact, adding any LSI regularization operator L in (39) still yields the same solution, since the basis functions ϕ_m in [25, Theorem 3] span the same space. This is due to the fact that the measurement functionals ν_m, i.e., complex exponentials, are eigenfunctions of LSI operators.
As expected from the fact that f_{K_c} is a trigonometric polynomial whereas f_0 has sharp jumps, this reconstruction is quite poor and exhibits Gibbs-like oscillations, despite the absence of noise. This clearly demonstrates the superiority of gTV over L² regularization for the reconstruction of sparse periodic splines. Note, however, that the gap in performance decreases as the order of L = D^M increases, since Gibbs-like phenomena are less pronounced for smoother functions.

⁵ For complex entries, both the real and imaginary parts are i.i.d. Gaussian variables with the same σ.
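The low-pass baseline can be evaluated directly from the Fourier coefficients. Below is a minimal sketch (the function name is ours), assuming a real-valued signal described by its coefficients c[k] for k = 0, …, K_c, with c[−k] = conj(c[k]):

```python
import cmath

def lowpass_reconstruction(coeffs, Kc):
    """Partial Fourier series f_Kc(x) = sum_{|k| <= Kc} c[k] e^{ikx},
    folded via conjugate symmetry for a real-valued signal."""
    def f(x):
        val = coeffs[0].real
        for k in range(1, Kc + 1):
            val += 2.0 * (coeffs[k] * cmath.exp(1j * k * x)).real
        return val
    return f
```

For a smooth signal such as f_0(x) = cos(x) (coefficients c = [0, 0.5]), this reconstruction is exact; for a piecewise-constant f_0, it instead exhibits the Gibbs-like oscillations discussed above.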

Sensitivity to the regularization parameter
In our final experiment, we investigate the sensitivity of our reconstruction method to the choice of the regularization parameter λ. We choose the regularization operator L = D³ (M = 3) and a cutoff frequency of K_c = 5. The ground-truth signal f_0 is generated in the same fashion as described in Section 6.2, only with N = 4 knots. The measurements y are corrupted by additive i.i.d. Gaussian noise n with standard deviation σ. We monitor the evolution of the signal-to-noise ratio (SNR) between the ground truth f_0 and the reconstructed signal f*_λ for varying values of the regularization parameter λ. To reduce the effect of randomness, in the same spirit as in Fig. 2, the results are averaged over 10 realizations of the ground truth f_0. All the experiments are performed with a fixed grid size of P = 256.
The results are shown in Fig. 4 for a low noise level (σ = 0.01 × (∑_{k=0}^{K_c} |ν_k(f_0)|)/(K_c + 1), i.e., one hundredth of the mean amplitude of the noiseless measurements ν(f_0)) and a high one (σ = 0.5 × (∑_{k=0}^{K_c} |ν_k(f_0)|)/(K_c + 1)). We observe that, in general, our method is more sensitive to the value of λ when the latter is too high than when it is too low. However, this effect is attenuated when the noise level increases: in Fig. 4(a), we observe that for low noise levels, any sufficiently small regularization parameter λ yields similar reconstruction results, whereas the SNR decreases dramatically past a certain value of λ. In comparison, as expected, for high noise levels (Fig. 4(b)), excessively low values of λ lead to a larger deterioration of the reconstruction quality.
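The noise calibration and the SNR criterion used here are straightforward to state in code; the following sketch uses our own naming conventions:

```python
import math

def noise_sigma(meas, fraction):
    """Sigma as a fraction of the mean amplitude of the noiseless
    measurements nu_k(f0), k = 0..Kc."""
    return fraction * sum(abs(v) for v in meas) / len(meas)

def snr_db(ref, rec):
    """SNR (in dB) between samples of the ground truth f0 and of the
    reconstruction f*_lambda."""
    num = sum(r * r for r in ref)
    den = sum((r - q) ** 2 for r, q in zip(ref, rec))
    return 10.0 * math.log10(num / den)
```

With fraction = 0.01 and fraction = 0.5, noise_sigma reproduces the low and high noise levels described above.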

Conclusion
This paper deals with continuous-domain inverse problems, where the goal is to recover a periodic function from its low-pass measurements. The reconstruction task is formalized as an optimization problem with a TV-based regularization involving a high-order derivative operator. It was known that spline solutions always exist (representer theorem); our main result proves that the solution is in fact always unique. We then studied the grid-based discretization of our optimization problem. We leveraged our uniqueness result to show that any sequence of solutions of the discretized problems converges in uniform norm, a remarkably strong form of convergence, to the solution of the original problem as the grid size vanishes. Finally, we proposed a B-spline-based algorithm to solve the discretized problem, and we illustrated the relevance of our approach on simulations.

Data availability
No data was used for the research described in the article.