Trajectory Generation for the Unicycle Model Using Semidefinite Relaxations

Abstract—We develop a convex relaxation for the minimum energy control problem of the well-known unicycle model in the form of a semidefinite program. Through polynomialization techniques, the infinite-dimensional optimal control problem is first reformulated as a non-convex, infinite-dimensional quadratic program which can be viewed as a trajectory generation problem. This problem is then discretized to arrive at a finite-dimensional, albeit non-convex, quadratically constrained quadratic program. By applying the moment relaxation method to this quadratic program, we obtain a sequence of semidefinite relaxations. We construct an approximate solution for the infinite-dimensional trajectory generation problem by solving the first- or second-order moment relaxation. A comprehensive simulation study provided in this paper suggests that the second-order moment relaxation is lossless.


I. INTRODUCTION
The development of reliable and efficient trajectory planning techniques is of tremendous importance to meet the high safety standards imposed on autonomous vehicles. Considerable attention has been devoted to designing optimization-based trajectory generation methods relying on convex optimization due to their successful application to real-world systems, e.g., to space vehicles [1], [11], [14], [16], [24], [28]. This approach typically involves solving an infinite-dimensional optimal control problem which is usually neither readily amenable to computation nor can a closed-form solution be specified.
Thus, several approaches in the literature, e.g., [6], [7], [13], [17], [18], relax and reformulate general infinite-dimensional optimal control problems, such as the minimum energy control problem for the unicycle model, in some manner leading to finite-dimensional convex optimization problems which can be solved with a dedicated convex optimization solver.
More specifically, the authors in [17] and [18] rely on local optimization algorithms or, more precisely, sequential convex programming methods to produce an approximate solution to an optimal control problem. Using this approach, the non-convex components in the optimal control problem are successively discretized and linearized around a reference trajectory to arrive at a sequence of convex approximations of the original problem. In [7], possible non-convex input constraints are relaxed and the nonlinear dynamics are handled by a piecewise linear approximation. Combined with a discretization step, this leads to a mixed integer linear program to be solved.
In [6] and [13], sum-of-squares and moment relaxation techniques are invoked, respectively, to approximately solve optimal control problems. The result in [13] uses the weak formulation of optimal control problems and assumes that the cost function is a polynomial and the constraints form a semi-algebraic set. This weak formulation can be viewed as a special instance of the generalized moment problem. Accordingly, a hierarchy of semidefinite relaxations is constructed which approximately solves the optimal control problem. The authors in [6] build upon this result from the sum-of-squares perspective in a reproducing kernel Hilbert space, allowing for non-polynomial and smooth data in the optimal control problem. They obtain a semidefinite program (SDP) by subsampling the "kernel sum-of-squares relaxation", approximating the optimal value of the optimal control problem.
As a key contribution, we present a novel approach to construct global approximate solutions to the minimum energy control problem for the unicycle model through semidefinite relaxations. While this paper exclusively addresses this particular setup, the proposed approach should set the stage for constructing approximate solutions to general optimal control problems that can be "polynomialized", i.e., reformulated as an infinite-dimensional polynomial program through rational transformations. Nevertheless, the minimum energy control problem for the unicycle model is of high practical importance itself, e.g., in aerospace applications. A concrete practical example is landing a spacecraft using a parafoil as described in [8]. By extending the unicycle model to include the altitude component, the resulting model can be viewed as a kinematic model for a parafoil. Note that this extension does not alter the considered optimal control problem since this altitude component can be calculated independently from the optimization problem.
This paper is structured as follows. In Section II the considered optimal control problem setup is presented and cast into an equivalent formulation in the form of a trajectory optimization problem. Section III demonstrates that loosening a particular constraint in the trajectory generation problem enables a reformulation as a non-convex quadratically constrained quadratic program (QCQP). In Section IV we show how the first and second semidefinite relaxation in the moment hierarchy can be exploited to construct an approximate solution for the trajectory generation problem. Numerical experiments are provided in Section V. A technical proof for the reformulation of the optimal control problem as well as proofs and properties of the solution space of the relaxed optimal control problem are found in the appendix.
Notation. The absolute value of x ∈ R is denoted as |x|. The closure of an open interval Ω ⊂ R is denoted by Ω̄. For an interval I = (a, b) ⊂ R we denote by C^k(I) the set of k-times continuously differentiable functions on I and by C^k(Ī) the set of k-times continuously differentiable functions on I such that the derivatives up to order k can be continuously extended to Ī. If we let L²(I) be the space of square integrable functions on I, then the Sobolev space H^k(I) is the set of functions f ∈ L²(I) whose weak derivatives up to k-th order are in L²(I). The k-th (weak) derivative of a function x with respect to t is denoted as x^(k)(t), provided that it exists. For first and second order (weak) derivatives, we use the notation ẋ(t) := (d/dt) x(t) and ẍ(t) := (d²/dt²) x(t), respectively. We make use of the standard embeddings H¹(I) ⊂ C(Ī) and H²(I) ⊂ C¹(Ī), see, e.g., Theorem 5.12 in [3].

II. PROBLEM STATEMENT
The minimum energy control problem for the unicycle model with fixed initial and final configuration is given by

minimize_{u ∈ L²(Ω)} ∫_Ω u(t)² dt
subject to ẋ(t) = v_p cos(θ(t)), ẏ(t) = v_p sin(θ(t)), θ̇(t) = u(t) for a.e. t ∈ Ω,
(x(0), y(0), θ(0)) = (x_s, y_s, θ_s), (x(t_f), y(t_f), θ(t_f)) = (x_f, y_f, θ_f),     (1)

where Ω := (0, t_f), t_f ∈ R. Here, x, y ∈ H²(Ω) denote the vehicle's position as functions of time, θ ∈ H¹(Ω) its heading angle as a function of time and v_p ∈ R its constant horizontal speed. The vehicle's angular velocity can be manipulated through the control input u ∈ L²(Ω). Note that, given u ∈ L²(Ω), the constraints in problem (1) uniquely define θ ∈ H¹(Ω) and x, y ∈ H²(Ω) by integration. The constraints in (1) are well-defined due to the embeddings H¹(Ω) ⊂ C(Ω̄) and H²(Ω) ⊂ C¹(Ω̄). The question of existence of a solution of (1) is not addressed in this paper.
In order to convexify (1) in the upcoming sections, we rewrite this problem as

minimize_{x, y ∈ H²(Ω)} ∫_Ω ẍ(t)² + ÿ(t)² dt
subject to ẋ(t)² + ẏ(t)² = v_p² for all t ∈ Ω̄,     (2a)
(x(0), y(0)) = (x_s, y_s), (x(t_f), y(t_f)) = (x_f, y_f),
(ẋ(0), ẏ(0)) = v_p (cos θ_s, sin θ_s), (ẋ(t_f), ẏ(t_f)) = v_p (cos θ_f, sin θ_f).     (2)

Recall that the assumption x, y ∈ H²(Ω) renders the constraint (2a) well-defined. In words, this reformulation means that a controller expending minimum energy corresponds to a trajectory with minimum bending energy, as stated in the next theorem, with a proof given in Appendix A.

Theorem 1: A solution of problem (1) can be obtained from a solution of problem (2) and vice versa.
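The relation behind Theorem 1 can be made plausible by a short calculation (our illustration, not the proof from Appendix A), assuming the unicycle dynamics of (1):

```latex
\dot{x} = v_p \cos\theta, \qquad \dot{y} = v_p \sin\theta
\quad\Longrightarrow\quad
\ddot{x} = -v_p\,\dot{\theta}\sin\theta, \qquad \ddot{y} = v_p\,\dot{\theta}\cos\theta,

\ddot{x}^2 + \ddot{y}^2 = v_p^2\,\dot{\theta}^2 = v_p^2\,u^2
\quad\Longrightarrow\quad
\int_\Omega u(t)^2 \,\mathrm{d}t
= \frac{1}{v_p^2} \int_\Omega \ddot{x}(t)^2 + \ddot{y}(t)^2 \,\mathrm{d}t .
```

Thus minimizing the control energy is, up to the constant factor v_p^{-2}, the same as minimizing the bending energy of the planar trajectory.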
In the sequel, we will relax problem (2) in order to arrive at a numerically tractable finite-dimensional formulation.

III. RELAXATION TO A FINITE-DIMENSIONAL PROBLEM
In a next step, we impose the constraint (2a) only at discrete points in time 0 = t_0 < t_1 < ... < t_{N+1} = t_f, leading to the optimization problem

minimize_{x, y ∈ H²(Ω)} ∫_Ω ẍ(t)² + ÿ(t)² dt
subject to ẋ(t_i)² + ẏ(t_i)² = v_p², i = 0, ..., N + 1,
and the boundary conditions of (2).     (3)

As argued below, this relaxation of the constraint renders the corresponding solution space finite-dimensional. In fact, the solution space of (3) can be confined to a particular function space, which we call the space of variational generalized splines.
Definition 1: Let a = t_0 < t_1 < ... < t_n = b be a partition of the interval [a, b] ⊂ R and let I := (a, b). Then, the space of variational generalized spline functions consists of the functions which solve the optimization problem

minimize_{s ∈ H²(I)} ∫_I s̈(t)² dt
subject to s(a) = α_0, s(b) = α_1,     (4a)
ṡ(t_i) = β_i, i = 0, ..., n,     (4b)

for some α_0, α_1, β_i ∈ R, i = 0, ..., n. The space of variational generalized spline functions for a fixed partition of I in n subintervals is denoted as S({t_i}_{i=0}^n).

Remark 1: While Definition 1 may seem almost identical to definitions of other types of splines (see, e.g., [20], [25]), specifically cubic splines, our definition differs slightly. To be more precise, we do not require s ∈ C²(I) in Definition 1, only that s ∈ H²(I), i.e., the second derivative of s may be discontinuous when considering the entire interval I. In addition, while we require the function's first derivative to interpolate given data points, we do not require this of the function itself.
In Appendix B we show that the space S({t_i}_{i=0}^n) is finite-dimensional, and in particular, that any s ∈ S({t_i}_{i=0}^n) satisfies

s|_{(t_{i−1}, t_i)} is a polynomial of degree at most three, i = 1, ..., n,     (5)

i.e., the function s is a cubic polynomial on the subintervals of the partition. Most of the techniques used to prove the results in Appendix B are reminiscent of proofs from spline theory. However, as mentioned in Remark 1, we did not find a type of spline in the literature which matches Definition 1 precisely. For this reason, we included Appendix B. The next theorem is a direct consequence of the results established thus far. Hence, its proof is omitted.

Theorem 2: Let Ω_i := (t_{i−1}, t_i) for i = 1, ..., N + 1. Further, let the pair (x, y) with x, y ∈ H²(Ω) be a solution of (3). Then, x, y ∈ S({t_i}_{i=0}^{N+1}).

By invoking Theorem 2, we derive a genuine finite-dimensional formulation of (3) as given in the next theorem.
Theorem 3: Define the set

T := { z = (x_1, ..., x_N, y_1, ..., y_N)^⊤ ∈ R^{2N} : h_i(z) := x_i² + y_i² − v_p² = 0, i = 1, ..., N }.

Then, there exists a matrix Q ∈ R^{(2N+1)×(2N+1)} with Q ≽ 0 such that a solution to problem (3) can be obtained by solving the finite-dimensional QCQP

minimize_{z ∈ T} f(z),     (6)

where f(z) := [1 z^⊤] Q [1 z^⊤]^⊤.

Proof: Due to Theorem 2, we can restrict the solution space of (3) to the space of variational generalized spline functions. Let us now introduce basis functions φ_j ∈ S({t_i}_{i=0}^{N+1}), j = −1, ..., N + 2, for our variational generalized spline space, which can be determined by solving an interpolation problem. Here, a representation in the Lagrange basis is adopted, where the functions φ_j are defined such that for i = 0, ..., N + 1 we have

φ̇_j(t_i) = δ_{ij}, j = 0, ..., N + 1,     (7)

while the two additional basis functions φ_{−1}, φ_{N+2} account for the boundary values at t_0 and t_{N+1}. Then, the functions x and y can be parameterized as

x(t) = Σ_{j=−1}^{N+2} x_j φ_j(t), y(t) = Σ_{j=−1}^{N+2} y_j φ_j(t),     (8)

where x_1, ..., x_N, y_1, ..., y_N ∈ R are unknown coefficients and represent the new decision variables. The remaining coefficients are obtained by using the constructed basis functions (7) and evaluating the boundary values of x and y in (3), yielding fixed values for x_{−1}, x_0, x_{N+1}, x_{N+2} and y_{−1}, y_0, y_{N+1}, y_{N+2}. Next, we represent our discretized constraint in problem (3) in the basis (7), resulting in

x_i² + y_i² = v_p², i = 1, ..., N.     (9)

In order to transform the cost functional, we insert the parameterization (8) into ∫_Ω ẍ(t)² + ÿ(t)² dt. Collecting the Gram matrix of the second derivatives of the basis functions together with the fixed boundary coefficients defines a matrix

Q ∈ R^{(2N+1)×(2N+1)}, Q ≽ 0,     (10)

such that, with z := (x_1, ..., x_N, y_1, ..., y_N)^⊤, the cost becomes

f(z) = [1 z^⊤] Q [1 z^⊤]^⊤.     (11)

Then, (11) corresponds to the cost function f and (9) to the constraint set T of (6). Since the constraint set T is diffeomorphic to a torus, a compact set in R^{2N}, and since the function f is continuous, problem (6) admits a minimizer. Hence, problem (6) can be solved to obtain a solution z = (x_1, ..., x_N, y_1, ..., y_N)^⊤. By inserting the coefficients x_1, ..., x_N, y_1, ..., y_N into (8), a solution for (3) can be recovered. ■

Even though we were able to render problem (2) finite-dimensional by loosening the constraint (2a), the resulting problem (6) remains non-convex due to the non-convexity of the constraint set T. Subsequently, we concentrate on solving (6) and discuss how to generate an (inexact) solution for (6) using convex relaxations based on the theory of moments. This allows us to retrieve an approximate solution for the original problem (2) through the parameterization (8).
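The structure of the QCQP (6) can be illustrated in a few lines of Python. This is our own sketch, not the authors' implementation: the matrix Q from (10) is problem data which is not reproduced here, so a random positive semidefinite stand-in is used; only the torus constraints and the quadratic cost are as in the paper.

```python
import numpy as np

def h(z, vp):
    """Constraint residuals h_i(z) = x_i^2 + y_i^2 - vp^2 defining the torus T."""
    N = len(z) // 2
    return z[:N] ** 2 + z[N:] ** 2 - vp ** 2

def cost(z, Q):
    """Quadratic objective f(z) = [1 z^T] Q [1 z^T]^T of the QCQP."""
    w = np.concatenate(([1.0], z))
    return float(w @ Q @ w)

# Any feasible point can be parameterized by N angles, one per circle.
N, vp = 4, 2.0
rng = np.random.default_rng(0)
ang = rng.uniform(0.0, 2.0 * np.pi, N)
z = np.concatenate((vp * np.cos(ang), vp * np.sin(ang)))

# Random PSD stand-in for the matrix Q of (10).
A = rng.standard_normal((2 * N + 1, 2 * N + 1))
Q = A @ A.T

assert np.allclose(h(z, vp), 0.0)  # z lies on the torus T
assert cost(z, Q) >= 0.0           # Q is PSD, so the cost is nonnegative
```

The angle parameterization makes the non-convexity explicit: the feasible set is a product of N circles, which is why local methods can get trapped and a global relaxation is of interest.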

IV. SEMIDEFINITE RELAXATIONS OF THE QCQP
As we are interested in global solutions of (6), as opposed to local solutions, we turn to the technique of constructing hierarchies of semidefinite relaxations based on the theory of moments and the dual theory of sum-of-squares. These types of semidefinite relaxations mainly originate from the works of Lasserre [12], Nesterov [21], Parrilo [22], [23] and Shor [27]. In our setup we require the availability of (inexact) solutions for (6) in order to generate suboptimal trajectories for (2). Below, we consider two procedures to construct (inexact) solutions for (6). Both of these methods involve solving an SDP from the moment perspective. Thus, we follow Lasserre [12] and consider the sequence of moment relaxations of (6) given by

f_d^{mom} := inf_y L_y(f) subject to M_d(y) ≽ 0, M_{d−1}(h_i y) = 0, i = 1, ..., N, y_0 = 1,     (12)

with relaxation order d > 0, where L_y denotes the Riesz linear functional, M_d(y) the moment matrix and M_{d−1}(h_i y) the localizing matrix associated with h_i. Each value f_d^{mom} is a lower bound on the optimal value f* of (6) that improves monotonically with d, and can be obtained by solving an SDP. Since the considered constraint set T satisfies the so-called Archimedean property, we can establish asymptotic convergence, i.e., lim_{d→∞} f_d^{mom} = f*, due to Theorem 2.9 in [12]. However, note that the computational burden grows significantly as the relaxation order d is increased.
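The lower-bound property of the hierarchy can be made tangible with a small sketch (ours, first-order case only): the moment sequence of a Dirac measure at any feasible z yields a rank-one positive semidefinite moment matrix on which the Riesz functional reproduces the cost f(z), so the relaxed value can never exceed the optimal value of (6). The cost matrix below is an arbitrary stand-in.

```python
import numpy as np

# First-order moment matrix of the Dirac measure at a feasible point z:
# M_1(y) = [[1, z^T], [z, z z^T]] = w w^T with w = (1, z)^T.
N, vp = 3, 1.5
rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2.0 * np.pi, N)
z = np.concatenate((vp * np.cos(ang), vp * np.sin(ang)))
w = np.concatenate(([1.0], z))
M = np.outer(w, w)

# M is PSD and rank one, hence feasible for the first-order relaxation,
# and the Riesz functional L_y(f) = <Q, M_1(y)> recovers f(z) exactly.
Q = np.diag(np.arange(1.0, 2 * N + 2))  # arbitrary symmetric cost matrix
L_y_f = float(np.trace(Q @ M))
f_z = float(w @ Q @ w)

assert np.min(np.linalg.eigvalsh(M)) >= -1e-9  # M_1(y) is PSD
assert np.linalg.matrix_rank(M) == 1           # Dirac measures give rank one
assert np.isclose(L_y_f, f_z)                  # relaxed value matches f(z)
```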
In the sequel, we consider the first and second relaxation in the hierarchy (12). Using the first-order moment relaxation, we can always retrieve inexact solutions to (6) through a randomized projection algorithm inspired by the approximation scheme for the maximum cut (MAXCUT) problem [9]. On the other hand, in all cases considered numerically, see Section V-C, we obtain an optimality certificate through the second-order moment relaxation, meaning that this relaxation is an exact convex formulation of (6). For this reason, higher relaxation orders are not considered in this work.

A. First-order moment relaxation
The first moment relaxation in hierarchy (12) is also referred to as the standard SDP relaxation and is equivalent to the dual of the well-known Shor relaxation for QCQPs. For (6) this relaxation can be written as

inf_Z ⟨Q, Z⟩ subject to Z ≽ 0, Z_{11} = 1, ⟨H_i, Z⟩ = 0, i = 1, ..., N,     (13)

where H_i denotes the symmetric matrix representing the quadratic constraint h_i. If a solution Z* of (13) has rank one, then this relaxation is lossless and a solution to (6) can be extracted from the first column of Z*. Whenever rank(Z*) > 1, the above relaxation is inexact and we do not directly recover an optimal or even feasible point for (6). For this purpose, we propose a randomized approximation algorithm, based on the MAXCUT algorithm [9], to produce an inexact solution for (6). Like the MAXCUT algorithm, our algorithm exploits that any positive semidefinite matrix is admissible as a matrix of second-order moments of a measure. As a consequence, if the matrix Z* is partitioned as

Z* = [ 1  ẑ^⊤ ; ẑ  Ẑ ],     (14)

then the matrix Ẑ − ẑẑ^⊤ can be interpreted as the covariance matrix of a multivariate normal distribution. This can be used to generate a candidate point ζ from this normal distribution, i.e., ζ ∼ N(ẑ, Ẑ − ẑẑ^⊤). By projecting this candidate onto the set T in some way, a feasible point for (6) can be produced. In contrast to the MAXCUT algorithm, our algorithm, given in Algorithm 1, performs a different projection step. More specifically, we exploit that T is diffeomorphic to a torus and project onto this set. The reason for this modification is that, for the considered setup, we have observed through numerical experiments that Algorithm 1 yields feasible points associated with a lower objective value when compared to the MAXCUT algorithm. However, we could not establish a performance guarantee for (6) with either algorithm.

Algorithm 1 Approximation algorithm for (6)
Require: Z* as in (14), obtained by solving (13)
for k = 1, ..., 2000 do
    draw ζ ∼ N(ẑ, Ẑ − ẑẑ^⊤)
    for i = 1, ..., N do
        ((z_k)_i, (z_k)_{N+i}) ← v_p (ζ_i, ζ_{N+i}) / ‖(ζ_i, ζ_{N+i})‖
    end for
end for
Choose the vector from (z_k)_{k=1}^{2000} which is associated with the lowest objective value.
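The sampling-and-projection step of Algorithm 1 can be sketched as follows. This is our own illustration: the pair (ẑ, Ẑ − ẑẑ^⊤) is mimicked by the first- and second-order moments of a two-point mixture of feasible points rather than an actual SDP solution, and Q is again a random positive semidefinite stand-in.

```python
import numpy as np

def project_torus(zeta, vp):
    """Radially project each pair (zeta_i, zeta_{N+i}) onto the circle of radius vp."""
    N = len(zeta) // 2
    x, y = zeta[:N], zeta[N:]
    r = np.hypot(x, y)
    r = np.where(r == 0.0, 1.0, r)  # guard against division by zero
    return np.concatenate((vp * x / r, vp * y / r))

def randomized_rounding(z_hat, Sigma, Q, vp, samples=500, seed=0):
    """Sample candidates from N(z_hat, Sigma), project them onto the torus T,
    and return the projected candidate with the lowest objective value."""
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(samples):
        z = project_torus(rng.multivariate_normal(z_hat, Sigma), vp)
        w = np.concatenate(([1.0], z))
        val = float(w @ Q @ w)
        if val < best_val:
            best, best_val = z, val
    return best, best_val

# Mimic a rank-deficient relaxation solution: moments of a two-point mixture
# of feasible z's give the mean z_hat and the covariance Sigma.
N, vp = 3, 2.0
rng = np.random.default_rng(1)
p0, p1 = (np.concatenate((vp * np.cos(a), vp * np.sin(a)))
          for a in (rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 2 * np.pi, N)))
z_hat = 0.5 * (p0 + p1)
Sigma = 0.5 * (np.outer(p0, p0) + np.outer(p1, p1)) - np.outer(z_hat, z_hat)

A = rng.standard_normal((2 * N + 1, 2 * N + 1))
Q = A @ A.T
z_best, val = randomized_rounding(z_hat, Sigma, Q, vp)

assert np.allclose(z_best[:N] ** 2 + z_best[N:] ** 2, vp ** 2)  # feasible point
```

As in the paper, every output of the rounding step is feasible for the QCQP by construction; only its objective value is heuristic.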

B. Second-order moment relaxation
Let us now consider the next relaxation in hierarchy (12), i.e., the second-order moment relaxation. As supported by numerical results, see Section V-C, we conjecture the following statement and emphasize that this conjecture does not rely on the concrete structure of the cost function. In other words, this conjecture includes problem (6) as a special case.
Conjecture 1: Let p be a quadratic polynomial with real coefficients in the variable z. Then, the second-order moment relaxation of

minimize_{z ∈ T} p(z)     (15)

is lossless.

Remark 2: One could also attempt to prove Conjecture 1 from the sum-of-squares perspective, in which case it needs to be shown that, for real-valued quadratic polynomials p, the difference p − p* admits a sum-of-squares decomposition modulo the constraints defining T whose degree is at most 4. Another approach which might help to prove Conjecture 1 is to replace the ordinary polynomials in the sum-of-squares decomposition by trigonometric polynomials, e.g., as in [19].
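In sum-of-squares terms, the certificate alluded to in Remark 2 would take the following Putinar-type form on the torus (our reading of the requirement: σ_0 is a sum of squares of degree at most 4, and the λ_i are arbitrary polynomials of degree at most 2, since equality constraints admit sign-free multipliers):

```latex
p(z) - p^\star \;=\; \sigma_0(z) \;+\; \sum_{i=1}^{N} \lambda_i(z)\,\bigl(x_i^2 + y_i^2 - v_p^2\bigr),
\qquad \sigma_0 \in \Sigma[z],\;\; \deg \sigma_0 \le 4,\;\; \deg \lambda_i \le 2 .
```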
By implementing the linear algebra routine devised in [10], we can numerically determine whether the second-order moment relaxation of (15) is tight and also extract a global minimizer for (15). This optimality certificate relies on checking a certain rank condition on the moment matrix. We present a convincing numerical indicator in Section V-C to support Conjecture 1, especially concerning the specific QCQP (6). We note that numerically examining this conjecture involves considering all natural numbers for the parameter N as well as gridding the space of quadratic polynomials and the parameter v_p. For the particular case (6), this translates into varying the number of samples N and all model parameters, consisting of the velocity v_p as well as the initial and final values x_s, y_s, θ_s, x_f, y_f, θ_f, t_f.

V. NUMERICAL RESULTS

This section presents some numerical results for the trajectory planning problem (2) as well as a numerical indicator for the validity of Conjecture 1. The involved SDPs were solved using the parser Yalmip [15] and the solver Mosek [2].
We generate an approximate solution for (2) by constructing an (inexact) solution for (3), which translates into problem (6) by using the parameterization (8). In case the first-order moment relaxation (13) is lossless, we obtain an optimal trajectory for (3) for a given N. Otherwise, either Algorithm 1 is used to generate a suboptimal trajectory for (3) or the second-order moment relaxation is solved, which, through the algorithm presented in [10], always yielded an optimal trajectory for (3) in our simulation studies.
To numerically construct an approximate solution for (2) using our approach, the discretization parameter N as well as the model parameters v_p, x_s, y_s, θ_s, x_f, y_f, θ_f, t_f need to be fixed. We note that for a given set of model parameters, the exactness of the moment relaxations (12) seems to be independent of N when N > 5. In particular, if t_f, x_s, y_s, x_f, y_f are held constant, we have observed that the exactness of the first-order moment relaxation depends on the parameter values v_p, θ_s, θ_f. This aspect is further elaborated on throughout this section. In the following simulation results, the constraint (2a) is sampled equidistantly. Moreover, we fix the final time to t_f = 1 as well as the initial and final position to (x_s, y_s) = (1, −1) and (x_f, y_f) = (0, 0), respectively. The values of the horizontal speed v_p, initial angle θ_s, final angle θ_f and the number of samples N are varied depending on the specific simulation goal. More specifically, we vary the values of v_p, θ_s, θ_f to examine the exactness of the first- and second-order moment relaxation and N to investigate the empirical convergence behavior as discussed next.

A. On the empirical convergence of solutions to problem (3)

We emphasize that the only difference between (2) and (3) lies in the discretization of constraint (2a). Hence, by reducing the mesh size t_f N^{−1}, i.e., by increasing N, we expect an optimal trajectory for (3) to better approximate an optimal trajectory for (2). This means that for increasing N, the constraint (2a) should be violated less in-between the sampling instances and the trajectory's shape should appear smoother. Indeed, if (13) is lossless, we have observed that values between N = 10 and N = 30 already yield satisfactory results. This is exemplified in Fig. 1, which depicts trajectories resulting from an exact first-order moment relaxation for increasing N in (a), using the parameter values v_p = 4, θ_s = 0, θ_f = 3π/2. The corresponding velocity functions v(t) are shown in (c), illustrating the satisfaction of constraint (2a). Magnified portions of (a) and (c) are shown in (b) and (d), respectively. Starting with N = 30, the trajectories resulting from larger N can barely be distinguished from each other. In addition, the constraint (2a) is violated only by a negligible discretization error. If the standard SDP relaxation is not tight and Algorithm 1 is employed, this discretization error is larger. Depending on the particular parameter configuration, the number of samples N can be increased to reduce this error. Still, in selected cases a significant error persists, which is also apparent in the corresponding non-smooth appearance of the trajectory. To mitigate this problem, the second-order moment relaxation can be solved instead, albeit at the expense of an increased computational cost. Nevertheless, comparatively low values of N are sufficient to improve the trajectory's quality. Unfortunately, due to this high computational burden, the convergence behavior resulting from the second-order moment relaxation could not be studied thoroughly. However, for small N, it could be observed that trajectories resulting from the second-order moment relaxation display similar properties compared to trajectories stemming from an exact first-order moment relaxation. On the other hand, a slightly larger error in the satisfaction of constraint (2a) in-between the sampling instances could be noticed. Hence, we expect the convergence rate for these trajectories to be comparable but slightly slower.
Remark 3: The problem size of (12) for d = 2 can be reduced by eliminating variables through the constraints h_i(z) = 0, i = 1, ..., N. For instance, Yalmip translates the second-order moment relaxation for N = 10 into a problem with 10 625 constraints, 2310 scalar variables and one 231 × 231 matrix variable if the equality constraints are not exploited. In contrast, by eliminating decision variables through the given constraints, the resulting problem reduces to 8360 constraints and one 231 × 231 matrix variable. While certain decision variables in the moment matrix can be eliminated in this way, we could not reduce the size of the moment matrix without sacrificing exactness of the second-order moment relaxation for some parameter configurations.
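The matrix size quoted in Remark 3 can be cross-checked: the moment matrix M_d(y) is indexed by all monomials of degree at most d in the 2N scalar variables, so its side length is the binomial coefficient C(2N + d, d). A quick sanity check (our own):

```python
from math import comb

def moment_matrix_size(n_vars, d):
    """Side length of the moment matrix M_d(y): the number of monomials of
    degree at most d in n_vars variables, i.e. C(n_vars + d, d)."""
    return comb(n_vars + d, d)

# N = 10 samples give 2N = 20 scalar variables; at order d = 2 the moment
# matrix is 231 x 231, matching the figure quoted in Remark 3.
assert moment_matrix_size(20, 2) == 231
assert moment_matrix_size(20, 1) == 21  # first order: the 21 entries of (1, z)
```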

B. Trajectory generation for problem (2)
For the simulation result depicted in Fig. 2, we set v_p = 4, θ_s = 0 and N = 12, while θ_f is varied. We note that a symmetric behavior is observed if θ_f is held constant while θ_s is varied, i.e., it does not matter whether θ_f is held fixed while varying θ_s or vice versa. This is coherent with our problem setup since any trajectory generated for (1) is undirected. Further, we note that the value of the parameter v_p can have an impact on the exactness of the first-order moment relaxation. More precisely, for certain fixed parameters θ_s, θ_f, N, it is possible to make an inexact first-order moment relaxation become exact by increasing v_p.
Panels (a) and (b) in Fig. 2 each display a trajectory produced by Algorithm 1 and an optimal trajectory for (3), with θ_f = 0 and θ_f = π/4, respectively. The optimal trajectories are obtained through the second-order moment relaxation combined with the algorithm from [10]. The associated velocity functions are depicted in (c) and (d). In the configuration θ_f = θ_s = 0 there exists an additional optimal trajectory which is not plotted in (a) for the purpose of clarity. As can be inferred from this figure, the shape of the suboptimal trajectory with θ_f = θ_s deviates more from the optimal trajectory than in the case θ_f ≠ θ_s. Indeed, the cost value associated with the suboptimal trajectory drifts further from the optimal value of (6) as the configuration θ_f = θ_s is approached. We note that throughout our simulations, we have never identified a case for which the first-order moment relaxation is exact if the values of θ_s and θ_f coincide, even if v_p is increased. In fact, it seems as if the rank of a solution Z* to (13) becomes more "unstable" as θ_s and θ_f approach each other, i.e., as rank(Z*) = 1 transitions to rank(Z*) = 3.

C. Exactness of the second-order moment relaxation
This subsection presents the results of two simulation studies dedicated to showing that the second-order moment relaxation of (6) or (15) is sufficient to exactly solve these two problems. To render the simulation time tolerable while retaining an adequate dimension of the sample space R^{2N}, the number of samples of constraint (2a) is set to N = 5 for the larger study and to N = 7 for the smaller study. While larger values of N were not considered, we expect that the exactness of the second-order moment relaxation of (6) and (15) is maintained.
The larger simulation study focuses on our specific problem setup (6). For this simulation we proceeded as follows.
We equidistantly sample the parameter space (θ_s, θ_f, v_p) ∈ R³, where θ_s, θ_f ∈ [0, 2π] and v_p ∈ [3, 6]. We iterate through each parameter configuration and check whether a solution Z* of (13) is of rank one. If rank(Z*) > 1, the second-order moment relaxation, i.e., (12) for d = 2, is solved. In total, this procedure was conducted 10^6 times. In about 74.91% of the cases, (13) was lossless, whereas in 24.85% + 0.24% of the cases we established that rank(Z*) = 3, so that the second-order moment relaxation was solved. More specifically, the 24.85% signify that the second-order moment relaxation yielded a global minimizer for (6), while in the remaining 0.24% numerical problems occurred in solving this SDP. We note that for increasing values of v_p, numerical issues in solving (12) for d = 2 emerged more frequently. This could not be observed for d = 1.
In an effort to prove the exactness of (12) with d = 2 for (6), we noticed that the particular structure of the cost function, as determined by Q in (10), may not be relevant for this phenomenon. This is supported by the smaller simulation study, which we discuss in the following. First, we express each quadratic polynomial p from Conjecture 1 as p(z) = [1 z^⊤] A [1 z^⊤]^⊤ and randomly draw the entries of the coefficient matrix A = A^⊤ from the uniform distribution on the interval [−T, T], with different values for T ∈ R. Then, for varying v_p and T, the second-order moment relaxation is solved in case the first-order moment relaxation did not yield an optimal point for (15). This procedure was performed 10^5 times in total with different parameter choices of T and v_p, which are specified in Table I. In this study, we decided to conduct more trials for lower values of v_p since we have observed that higher values of v_p typically tighten the first-order moment relaxation for some fixed A. Therefore, if lower values of v_p induce an exact second-order moment relaxation, we also expect this relaxation to be exact for higher values of v_p. Beyond this, the chosen parameter values and numbers of trials do not follow any particular system. For a more convincing result in this setup, covering a wider range of parameter configurations, more trials would need to be performed. In almost all cases specified in Table I, we obtained an optimality certificate by recovering a global minimizer for (15). Only a negligible number of cases could be identified for which a minimizer could not be constructed with the algorithm from [10]. In these cases, however, the first- and second-order moment relaxations had the same optimal value, indicating that the global optimum of (15) is attained.

VI. CONCLUSIONS
We have proposed a novel approach for convexifying the minimum energy problem for the unicycle model through semidefinite moment relaxations. As has been supported by numerical results, the constructed approximate solutions are highly satisfactory. Beyond that, we conjecture that the second-order moment relaxation of minimizing a quadratic polynomial over an n-dimensional torus represents a tight and tractable convex formulation.
Possible future work includes incorporating constraints in the problem setup, e.g., hard input constraints. Furthermore, this approach could be applied to other optimal control problems, e.g., the Brockett integrator. In this context it should be considered that, with the proposed approach, an infinite-dimensional optimization problem with more decision variables directly leads to a finite-dimensional optimization problem with more decision variables and thus to a higher computational burden. In this case, it may be possible to exploit further sparsities in the corresponding semidefinite relaxation, see, e.g., [26] or [29].
For every t ∈ Ω̄, we can hence construct an interval I_t := (t − ϵ, t + ϵ) with some ϵ > 0 as above. Let I_{t_1} ∪ ... ∪ I_{t_n} be a finite sub-covering of the open covering ∪_{t∈Ω̄} I_t of Ω̄. Then, we construct a partition 0 = t_0 < t_1 < ... < t_m = t_f of Ω̄ such that for any j = 0, ..., m − 1, there exists some k = 1, ..., n with (t_j, t_{j+1}) ⊂ I_{t_k}. We then infer θ ∈ H¹(t_j, t_{j+1}), j = 0, ..., m − 1. Again by Theorem 5.14 in [3], we conclude that θ ∈ H¹(Ω), as was to be shown. ■

B. Some properties of the space S({t_i}_{i=0}^n)

In the following, let the interval I = (a, b) and its partition be as in Definition 1 and denote I_i := (t_{i−1}, t_i) for i = 1, ..., n. Further, let α_0, α_1, β_i ∈ R, i = 0, ..., n in (4) be fixed. We introduce some additional notation. The space of infinitely differentiable functions with compact support on I is denoted as C_0^∞(I). We denote the closure of C_0^∞(I) in H²(I) by H_0^{2,2}(I). Lastly, we define J(s) := ∫_I s̈(t)² dt and the induced seminorm |s| := J(s)^{1/2}. Before we show that any s ∈ S({t_i}_{i=0}^n) has to satisfy (5), we need to establish the following lemma.
Proof: We begin the proof by choosing s_1 such that s_1(t_i) = s(t_i), i = 1, ..., n − 1, where s̈|_{I_i}(t) := (d²/dt²) s|_{I_i}(t) and we take the limit ϵ → 0 since the second derivatives of s and s_1 might be discontinuous. As I_i was chosen arbitrarily, (18) shows J(s_1) ≤ J(s). ■
Equipped with this lemma, we can show that any s ∈ S({t_i}_{i=0}^n) is a cubic polynomial on I_i.

Lemma 2: The space S({t_i}_{i=0}^n) is finite-dimensional and linear. Specifically, any s ∈ S({t_i}_{i=0}^n) satisfies (5).

Proof: The strong form of the cost functional in problem (4) is given by (5), which can be derived through the Euler-Lagrange equations. We argue now that any solution to (4) needs to satisfy (5). To this end, let I_i be fixed but arbitrary. Further, let s_1 ∈ H²(I) be some solution for (4). Due to Lemma 1 we can construct another solution s_2 ∈ H²(I) to (4) satisfying (5) such that

s_1(t_{i−1}) = s_2(t_{i−1}), s_1(t_i) = s_2(t_i).     (19)

The weak solution of the corresponding boundary value problem (20) is unique in H²(I_i). A classical solution to (20) is given by a function satisfying (5). Due to the condition (19), and since a classical solution is also a weak solution and therefore unique, every solution of (4) has to satisfy (5).
Consequently, any minimizing function for (4) is a cubic polynomial on I_i satisfying (4a) and (4b). Since the space of polynomials of fixed order is a finite-dimensional linear space, only finitely many unknowns need to be determined in order to characterize the space S({t_i}_{i=0}^n).


TABLE I: Number of times the first- or second-order moment relaxation of (15) is solved, partitioned according to different parameter combinations.