Elastic analysis of irregularly or sparsely sampled curves

We provide statistical analysis methods for samples of curves when the image but not the parametrisation of the curves is of interest. A parametrisation invariant analysis can be based on the elastic distance of the curves modulo warping, but existing methods have limitations in common realistic settings where curves are irregularly and potentially sparsely observed. We provide methods and algorithms to approximate the elastic distance for such curves via interpreting them as polygons. Moreover, we propose to use spline curves for modelling smooth or polygonal Fr\'echet means of open or closed curves with respect to the elastic distance and show identifiability of the spline model modulo warping. We illustrate the use of our methods for elastic mean and distance computation by application to two datasets. The first application clusters sparsely sampled GPS tracks based on the elastic distance and computes smooth means for each cluster to find new paths on Tempelhof field in Berlin. The second classifies irregularly sampled handwritten spirals of Parkinson's patients and controls based on the elastic distance to a mean spiral curve computed using our approach. All developed methods are implemented in the \texttt{R}-package \texttt{elasdics} and evaluated in simulations.


Introduction
Elastic analysis of curves β : [0, 1] → R^d, d ∈ N, refers to an analysis of the curves' image without taking their parametrisation over the interval [0, 1] into account. Examples for such curves in R^2 are handwritten letters or the outline of an object. Here only the image of the curve represents the object, not the speed with which the parametrisation traverses the outline. Hence, for statistical analysis of such curves including mean computation, clustering or classification, the analysis should be invariant under different possible parametrisations. Ideally, the analysis should also yield an optimal alignment of different curves to allow comparison of corresponding points such as bumps and other features. Consider for instance the handwritten symbols in Fig. 1. Here we would like the mouths of the different fish and the branches of the trees to correspond after alignment. As in this example, curves are often observed at a differing number of discrete points. The aim of this paper is to extend elastic statistical methodology to such realistic cases where curves are irregularly and sparsely sampled. In particular, this includes suitable algorithms for alignment and distance computation for samples of such curves, as well as identifying appropriate spline model spaces for elastic (Fréchet) mean curves. These means can be smooth curves, as shown for the fish in Fig. 1, or polygonal curves, better suited for curves with sharp corners like the trees in Fig. 1. To this end, we derive, among other things, a useful simplification of the warping problem when interpreting the observed curves as polygons, and show that certain first and second order splines meet the identifiability properties required in a modulo warping context.
The alignment problem for curves in R^d is closely related to the registration problem in functional data analysis (Ramsay and Silverman [18]), which is the alignment problem in the case d = 1. For two functions f_1 and f_2, registration (also called warping) has commonly been treated as an optimisation problem inf_{γ∈Γ} ‖f_1 − f_2 ∘ γ‖_{L²} on a suitable function space Γ of warping functions γ. This choice is problematic as the mapping (f_1, f_2) ↦ inf_{γ∈Γ} ‖f_1 − f_2 ∘ γ‖_{L²} does not define a proper distance on the space of curves modulo parametrisation. This yields two major pitfalls: First, the mapping is not symmetric, which means aligning f_2 to f_1 will not be equivalent to aligning f_1 to f_2. Furthermore, inf_{γ∈Γ} ‖f_1 − f_2 ∘ γ‖_{L²} can be zero even if f_2 is not a warped version of f_1, which is related to the so-called 'pinching' problem (Marron et al. [15]). Intuitively speaking, warping 'pushes' the integration mass to parts of the domain where f_1 and f_2 are close. To avoid this 'pinching' effect, a regularisation term can be added to the loss function (Ramsay and Silverman [18]). This is done in various dynamic time warping algorithms, where usually large values of the derivative of the warping function are penalised (Sakoe and Chiba [20], Keogh and Ratanamahatana [9]). Alternatively, one can choose a small number of basis functions for the warping, or combine both approaches and use penalised basis functions (Ramsay and Li [19]). Moreover, Bayesian approaches to modelling warping functions have been suggested (Cheng et al. [4], Lu et al. [14]). Recently, Matuk et al. [16] developed such a Bayesian approach for sparse one-dimensional functions.
All of these approaches restrict the amount of warping, thus the analysis is not completely independent of the observed parametrisation. This seems more suitable for one-dimensional functions f 1 , f 2 : [0, 1] → R where one seeks to separate phase (parametrisation) and amplitude (image) but considers both as informative. If we analyse curves β : [0, 1] → R d with d > 1 however, we are usually only interested in the image representing the curve, which makes penalised, restricted or Bayesian approaches for the warping less suitable. In this case the object of interest is the equivalence class of the curve with respect to (w.r.t.) parametrisation, hence a proper distance on the resulting quotient space modulo warping is desirable.
To overcome the shortcomings of the usual L² distance for curve alignment, Srivastava et al. [24] propose an elastic distance based on the Fisher–Rao Riemannian metric. This gives a proper metric on the quotient space of absolutely continuous curves modulo parametrisation and translation. They show that for two absolutely continuous curves β_1 and β_2, the Fisher–Rao metric can be simplified to the L² distance between the corresponding square-root-velocity (SRV) curves, which can be minimised over re-parametrisations to obtain the elastic distance. For more details on this SRV framework see Srivastava and Klassen [22].
Srivastava and Klassen [22] observe the following properties of the SRV transformation and the elastic distance. Remark 1.2.
i) To obtain a proper quotient space structure on the space of absolutely continuous curves, we need to consider the closure of SRV curves with respect to re-parametrisation as equivalence classes. That is, for a curve β with SRV transformation q, [β] consists of all curves whose SRV transformation is in the closure of {(q ∘ γ) · √γ̇ | γ ∈ Γ}, with Γ being the set of monotonically increasing, onto and differentiable warping functions.
ii) In (1), it is in fact sufficient to re-parametrise only one of the two curves, that is, to take the infimum over warpings of one curve only, with Γ being the set of monotonically increasing, onto and differentiable warping functions γ : [0, 1] → [0, 1].
iii) Every square integrable SRV curve q uniquely defines an absolutely continuous curve β up to translation, with the back-transform given as β(t) = β(0) + ∫_0^t q(s)‖q(s)‖ ds.
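As an illustration, the SRV transform and the back-transform from Remark 1.2 iii) can be discretised in a few lines. The following is a minimal numpy sketch (the function names and the discretisation are ours, not those of the elasdics package), treating the curve as a polygon with piecewise constant derivative:

```python
import numpy as np

def srv_transform(beta, t):
    """Discrete SRV q = beta' / sqrt(||beta'||) for a polygon.

    beta: (m, d) array of curve evaluations, t: (m,) parameter grid.
    """
    deriv = np.diff(beta, axis=0) / np.diff(t)[:, None]       # piecewise constant derivative
    norms = np.linalg.norm(deriv, axis=1, keepdims=True)
    return deriv / np.sqrt(np.maximum(norms, 1e-12))          # guard against zero segments

def srv_back_transform(q, t, beta0):
    """Back-transform beta(t) = beta(0) + int_0^t q(s) ||q(s)|| ds (Remark 1.2 iii)."""
    integrand = q * np.linalg.norm(q, axis=1, keepdims=True)  # q(s) ||q(s)|| = beta'(s)
    steps = integrand * np.diff(t)[:, None]                   # piecewise constant integration
    return beta0 + np.vstack([np.zeros_like(beta0), np.cumsum(steps, axis=0)])

# Round trip on a polygon: back-transforming the SRV recovers the corners.
t = np.linspace(0.0, 1.0, 6)
beta = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
q = srv_transform(beta, t)
beta_rec = srv_back_transform(q, t, beta[0])
```

Since the polygon's derivative is piecewise constant, the round trip is exact up to floating point error.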
Note that any statistical analysis based on this elastic distance will be modulo translation as a result of taking derivatives. If the actual position of the curve in space is of interest as well, it has to be included separately in the analysis. On the other hand, if curves are used to model shape objects, translation invariance is a desired property. As in classical shape data analysis (Dryden and Mardia [5]), the analysis should then additionally be independent of the size and the orientation of the shape in space. In this paper, we solely discuss the invariance under re-parametrisation but not the invariance under rotation and scaling and give examples of GPS tracks and handwritten spirals where this elastic analysis is suitable. However, re-parametrisation invariance presents a key aspect of functional shape analysis (Srivastava and Klassen [22]) and may, therefore, also be viewed in this context.
A solution to the variational problem in the distance (3) is usually approximated using a dynamic programming algorithm or gradient-based optimisation (for instance in Srivastava et al. [24]). Both approaches discretise the warping space Γ. The dynamic programming algorithm for instance assumes a discrete grid for the domain of the warping function. An extension by Bernal et al. [2] allows for an unequal number of points on both curves and improves computation time. Lahiri et al. [13] provide an algorithm to align two piecewise linear curves and show that an optimal warping exists if at least one of the curves is piecewise linear. Such an optimal warping also exists if both curves are continuously differentiable (Bruveris [3]).
A direct application of computing pairwise elastic distances for a sample of observed curves is their use for distance-based statistical analysis such as various clustering or classification algorithms (Kurtek et al. [11], Laborde et al. [12], Strait and Kurtek [26]). A more challenging but important task is to compute the mean of a random sample of such objects. Srivastava et al. [23] suggest approximating the Fréchet mean (which they call Karcher mean) for curves w.r.t. the elastic distance via alternating between optimising the alignment to the current mean and computing the L² mean of the SRV curves given the current alignment. Their perspective is focused on the curves as functions and, in practice, they rely on evaluating the SRV curves on a regular grid for the mean computation, which works well in the case of densely observed curves. Nevertheless, in real-world applications, we observe curves only at a finite (and often small) number of discrete points, where even the number of points might differ between curves (the so-called sparse and irregular setting). This is the case in our example of handwritten symbols displayed in Fig. 1. Here the number of points differs randomly across curves (for the fish in the top row) or with the number of prominent features (for the Christmas trees in the bottom row). We show in examples that (elastic) methods designed for densely observed curves have limitations in such sparse settings. This problem is well-known in functional data analysis (d = 1), where spline representations or other smoothing methods are frequently used to model sparsely and/or irregularly observed functions (e.g. Yao et al. [29], Greven and Scheipl [7]).
The main contribution of this paper is to carefully introduce spline functions for modelling elastic curves in R^d on SRV level, extending approaches for functional data also to d ≥ 2 and to the elastic setting. This includes piecewise constant SRV curves, which correspond to polygonal curves, as a special case. The main advances of this work are the following points:
• We provide algorithms to fit elastic spline means for open and closed curves, show that the proposed spline curves are identifiable via their coefficients modulo parametrisation, and discuss limitations of this identifiability.
• We develop algorithms to align open and closed curves if at least one of them is piecewise linear, for instance a sparsely observed curve which is treated as a polygon. In the special case of open curves with both curves piecewise linear, Lahiri et al. [13] proposed an alternative algorithm. Our algorithm is, however, far simpler to understand and implement. We show local maximisation properties of our algorithm.
• We demonstrate how the elastic distance can be used for statistical analysis of irregularly or sparsely observed curves in two examples, involving mean computation, clustering and classification of curves. We provide an implementation of our methods in the R-package elasdics [25].
Moreover, the proposed methodology for elastic spline mean estimation can be viewed as a first step towards an elastic regression analysis for sparsely observed curves when including covariates.
We structure our results as follows: In Section 2, we present algorithms to approximate the elastic distance in (3), introduce spline functions to compute a smooth representative of the Fréchet mean of observed curves and discuss identifiability properties of such elastic spline curves. In Section 3 we test the developed methods via simulation and compare our implementation to the one available in the R-package fdasrvf [28]. We demonstrate how the elastic distance can be used to cluster GPS tracks and compute smooth mean paths in Section 4. A second example dataset comprises handwritten spirals of Parkinson's patients and a healthy control group, which we classify based on the elastic distance to the mean spiral curve computed using our method. Section 5 closes with a discussion.

Elastic analysis of observed curves
In practice, we observe curves in R^d, d ∈ N, not continuously but only discretely, via evaluations of these curves on discrete (and potentially sparse and curve-specific) grids. An elastic analysis needs to explicitly address this point for distance and subsequently mean computation. We propose to treat a discretely observed curve β with SRV transformation q as a polygon parametrised with constant speed between the observed corners β(s_0), . . . , β(s_m). In this case, the problem of finding an optimal re-parametrisation β ∘ γ of β to another curve with SRV transformation p can be simplified (similar as in Lahiri et al. [13]). We can show that instead of solving the minimisation problem (3) over the function space of all suitable warping functions γ, we only need to solve the maximisation problem (4) over the set of ordered points 0 ≤ t_1 ≤ · · · ≤ t_{m−1} ≤ 1, where ⟨·, ·⟩_+ denotes the positive part of the d-dimensional scalar product. For a maximiser (t_1, . . . , t_{m−1}) of (4) there is a γ : [0, 1] → [0, 1] with γ(t_j) = s_j for all j = 1, . . . , m − 1 which is a minimiser of (3).
A proof can be found in Appendix A.1. It includes an explicit construction of the minimising warping function γ (or of a minimising sequence of warping functions). Although we formulated the warping problem in (3) only for open curves (or closed curves with known start and end point), we can formulate a similar criterion for closed curves, using a different set of warping functions γ which additionally allows the starting point on the closed curve to move. Then the optimisation problem for closed curves is equivalent to the maximisation problem (5).
Thus, the warping problem for open or closed curves can be simplified if one of the SRV curves is piecewise constant, no matter which form the second SRV curve p has. If p is at least continuous, for example the SRV curve of a model-based smooth mean curve, the loss functions in (4) and (5) are differentiable. We propose to tackle the remaining maximisation problem with a gradient descent algorithm that can handle linear constraints (for instance method 'BFGS' in constrOptim from the R-package stats [17]) and provide a derivation of the gradient in Appendix A.2.
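To illustrate the constrained optimisation, the following Python sketch mirrors the role of constrOptim: it maximises a smooth stand-in criterion (the actual loss (4) is not reproduced here; the sqrt-of-widths objective is purely illustrative) over ordered points 0 ≤ t_1 ≤ · · · ≤ t_{m−1} ≤ 1, with the ordering expressed as linear inequality constraints:

```python
import numpy as np
from scipy.optimize import minimize

m = 5  # number of polygon segments; t_1, ..., t_{m-1} are free

def neg_phi(t):
    """Stand-in smooth criterion; the paper's loss (4) would go here."""
    knots = np.concatenate([[0.0], t, [1.0]])
    widths = np.diff(knots)
    return -np.sum(np.sqrt(np.maximum(widths, 0.0)))  # maximised by equal spacing

# Ordering constraints 0 <= t_1 <= ... <= t_{m-1} <= 1 as linear inequalities,
# mirroring the feasible region handled by constrOptim in R.
cons = [{"type": "ineq", "fun": lambda t, j=j: t[j + 1] - t[j]} for j in range(m - 2)]
cons += [{"type": "ineq", "fun": lambda t: t[0]},
         {"type": "ineq", "fun": lambda t: 1.0 - t[-1]}]

t0 = np.linspace(0, 1, m + 1)[1:-1] * 0.5 + 0.25   # feasible interior start
res = minimize(neg_phi, t0, method="SLSQP", constraints=cons)
```

For this concave toy criterion the maximiser is the equally spaced grid (0.2, 0.4, 0.6, 0.8).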
In the following, Subsection 2.1 provides an algorithm to compute the elastic distance if the second curve is piecewise linear, for instance an observed polygon as well. In Subsections 2.2 and 2.3 we introduce spline functions to model smooth or polygonal elastic mean curves and discuss identifiability of such spline curves modulo reparametrisation in Subsection 2.4.

Elastic distance for two piecewise linear curves
We present an algorithm that can be used to find an optimal warping function, and therefore compute the elastic distance, in the case where both curves are piecewise linear. This is relevant either because we model one of the curves as a linear spline (see Subsection 2.2), or because we want to compute the elastic distance between two observed curves. The latter allows us to perform any distance-based analysis of the data such as clustering or classification, for instance.
To obtain an optimal warping for a curve with piecewise constant SRV transformation q to another curve with SRV transformation p, we first notice that the maximisation of the loss function given in (4) in one coordinate direction t_j only depends on the current values of t_{j−1} and t_{j+1}, for any p. Moreover, if p is a piecewise constant SRV curve as well, we can even derive a closed form solution of the maximisation problem in (4) with respect to each coordinate direction t_j ∈ [t_{j−1}, t_{j+1}] (see Appendix A.3). Hence we propose a coordinate-wise maximisation procedure, in which we iterate two steps.
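The sweep pattern of this coordinate-wise procedure can be sketched as follows. Note that this is a generic illustration: the paper's closed-form per-coordinate update is replaced by a bounded one-dimensional maximisation, and the concave toy criterion stands in for the loss in (4):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_ascent(phi, t, n_sweeps=50):
    """Sketch of the coordinate-wise sweep: maximise phi(t) in each t_j over
    [t_{j-1}, t_{j+1}], holding the other coordinates fixed."""
    t = t.copy()
    for _ in range(n_sweeps):
        for j in range(len(t)):
            lo = t[j - 1] if j > 0 else 0.0
            hi = t[j + 1] if j < len(t) - 1 else 1.0
            res = minimize_scalar(
                lambda x: -phi(np.concatenate([t[:j], [x], t[j + 1:]])),
                bounds=(lo, hi), method="bounded")
            t[j] = res.x
    return t

# Toy concave criterion, maximised at equally spaced t (stand-in for (4)).
phi = lambda t: np.sum(np.sqrt(np.diff(np.concatenate([[0.0], t, [1.0]]))))
t_opt = coordinate_ascent(phi, np.array([0.1, 0.2, 0.9]))
```

Each per-coordinate problem here places t_j at the midpoint of its current neighbours, so the sweeps converge to the equally spaced grid.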
The warping problem for two (open) piecewise linear curves has been previously discussed by Lahiri et al. [13]. They propose a precise matching algorithm which produces a globally optimal re-parametrisation of q. Our algorithm can be seen as an alternative which is much more straightforward to implement (we provide an implementation in the R-package elasdics [25]) but does not guarantee finding a globally optimal solution. Nevertheless, we observe convincing results in simulations (Section 3) and we can prove local maximisation in the sense of the following theorem. To prove this theorem, we first establish that the directional derivatives exist and are non-positive for all coordinate directions. Then we show that this carries over to all directional derivatives using local convexity of the loss function. More details can be found in Appendix A.4.
If the sequence has more than one accumulation point, all of them give the same loss Φ(t). This means they correspond to different re-parametrisations of the second curve, but give the same distance between the two curves. This can happen as the warping problem does not guarantee unique solutions (see the example given in Appendix A.5). In practice, one can pick any maximising t = (t_1, . . . , t_{m−1}) to obtain a locally optimal warping function. As we cannot guarantee that this locally optimal t is also a global maximiser, we also propose to use varying starting points, and therefore construct multiple sequences, to find a global maximum. A further advantage of Algorithm 1 is that it can be easily adapted to closed curves, which has not been explicitly addressed by Lahiri et al. [13]. We adjust our algorithm for closed polygons via appropriately updating t_0 and t_m.
Thus, we provide algorithms to compute the elastic distance between two (open or closed) piecewise linear and continuous curves. These curves form a subspace in the space of absolutely continuous curves and are called splines of degree 1. If we would like to model smooth (that is differentiable) curves, for example for a mean function, a spline space of higher degree might be more suitable.

Modelling spline curves or spline SRV curves
As common in functional data analysis (Ramsay and Silverman [18]), we would like to model curves or means for samples of curves as piecewise polynomial functions. This is in particular beneficial as we target our analysis at sparsely observed curves, which cannot be evaluated at arbitrary points. Moreover, spline models impose parsimonious models for smooth curves, which can help to avoid overfitting the observed curves given limited information. Definition 2.4 (Spline curves). We call ξ = (ξ_1, . . . , ξ_d)^T : [0, 1] → R^d with d ∈ N a d-dimensional spline curve of degree l ∈ N_0 if all its components ξ_1, . . . , ξ_d : [0, 1] → R are spline curves of degree l with common knot set 0 = κ_0 < κ_1 < · · · < κ_{K−1} < κ_K = 1 for some K ≥ 2. That means ξ_1, . . . , ξ_d are piecewise polynomial of degree l between the knots κ_1, . . . , κ_K, as well as continuous and (l − 1)-times continuously differentiable on the whole domain [0, 1] for l ≥ 1. Denote by S^l_{K;κ_0,...,κ_K} the set of all spline curves of degree l with common knot set 0 = κ_0 < κ_1 < · · · < κ_{K−1} < κ_K = 1.
We can either model the curve β as a d-dimensional spline curve, or its SRV transformation p (see Fig. 2). If β is a spline of degree l ≥ 2, the corresponding SRV curve p will not be a spline curve. The same holds for the curve β if we model the SRV curve as a spline of degree l ≥ 1. Only if β has degree l = 1 are both the piecewise linear curve itself and its piecewise constant SRV transformation splines. However, if we use linear spline curves, we need a large number of knots to obtain curves that visually appear as smooth as curves modelled with linear splines on SRV level; thus, we expect less parsimonious models.
To use these spline curves or spline SRV curves as model spaces for curves modulo parametrisation, we need to ensure model identifiability, that is, that each equivalence class contains at most one spline curve. The unique spline representative then allows to identify and interpret the equivalence class of a curve modulo warping via its spline basis coefficients. We will see in Subsection 2.4 that this is true for quadratic or cubic splines on curve level and for linear spline SRV curves (under mild conditions). Linear spline curves are identifiable under additional assumptions. Therefore, we can use the space of cubic, quadratic or linear spline curves as a model space for smooth curves. However, using quadratic or cubic splines to model on the curve level would not imply a vector space structure on the SRV level, on which the distance is computed. We therefore propose to consider linear spline (and thus continuous) SRV curves to model smooth curves. If p is the SRV transformation of β and p is continuous, the back-transform β(t) = β(0) + ∫_0^t p(s)‖p(s)‖ ds is differentiable, as the norm ‖·‖ is continuous as well. Alternatively, constant spline SRV curves can be used to model less regular, polygonal mean curves. We thus work with a linear or constant spline model on SRV level in the following.

Elastic means for samples of curves
Since the space of curves modulo parametrisation and translation does not form a Euclidean space, standard statistical techniques for describing probability distributions cannot be applied directly. In particular, taking sums or integrals requires a linear structure of the space, which means that we cannot define the expected value as an integral or the mean as a weighted average here. In order to generalise the mean as a notion of location to arbitrary metric spaces, Fréchet [6] proposed to use its property of being the minimiser of the expected squared distances. Definition 2.5 (Fréchet mean (Fréchet [6])). Let (Ω, F, P) be a probability space and X a metric space with distance function d, equipped with the Borel σ-algebra. For a random variable X : Ω → X we call every element in argmin_{x∈X} E[d(X, x)²] an expected element of X. For a set of observations x_1, . . . , x_n ∈ X we define the Fréchet mean as an element in argmin_{x∈X} Σ_{i=1}^n d(x_i, x)². That means Fréchet means are empirical versions of expected elements, and neither of them needs to exist or be unique. Consider for example a uniform distribution on the sphere, where every point on the sphere is a valid Fréchet mean. This non-uniqueness can occur for the elastic distance as well; see the example given in Appendix A.5. Nevertheless, Ziezold [30] showed a set version of the law of large numbers for the Fréchet mean, which means that for independently and identically distributed random variables X_1, . . . , X_n : Ω → X the set of Fréchet means converges to the set of the expected elements.
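For a finite candidate set, the definition translates directly into code. The following hypothetical Python sketch computes a Fréchet mean on the circle with arc-length distance, a metric space where the minimiser of the summed squared distances need not coincide with a naive average of angle representatives:

```python
import numpy as np

def frechet_mean(points, dist, candidates):
    """Fréchet mean over a finite candidate set: the candidate minimising
    the sum of squared distances to the sample (Definition 2.5, discretised)."""
    costs = [sum(dist(x, c) ** 2 for x in points) for c in candidates]
    return candidates[int(np.argmin(costs))]

# Arc-length distance on the circle, with angles as representatives in [0, 2*pi).
arc = lambda a, b: min(abs(a - b), 2 * np.pi - abs(a - b))

sample = [0.1, 2 * np.pi - 0.1]   # two points straddling angle 0
grid = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
mean_angle = frechet_mean(sample, arc, grid)
```

Here the Fréchet mean lies at angle 0, while the arithmetic mean of the representatives would be near π, on the opposite side of the circle.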
As discussed in the previous subsection, we propose to use linear or constant splines on SRV level as model spaces.
Hence, to compute a Fréchet mean w.r.t. the elastic distance (3) for a set of curves with SRV transformations q 1 , . . . , q n , we need to solve the following minimisation problem for a given degree l ∈ {0, 1} (constant or linear splines).
A solution p̂ to this optimisation problem is a piecewise constant or piecewise linear approximation, respectively, of the SRV transformation of a Fréchet mean. Hence, the corresponding mean curve β̂ is either a polygon or a differentiable approximation of the Fréchet mean. If we consider the optimisation problem (6) for only one observed curve (n = 1), we obtain as a solution a spline approximation of this curve as a special case. Similarly to the proposal of Srivastava and Klassen [22] for densely observed curves, we tackle the minimisation problem (6) with an iterative approach in Algorithm 3, alternating between fitting the mean and optimising the warping for each of the observations, but now using our warping approach for sparse curves and modelling the mean with a constant or linear spline.
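The alternating structure of this approach can be illustrated with a deliberately simplified stand-in, in which SRV warping is replaced by cyclic shifts of regularly sampled one-dimensional functions; only the iterate-align-then-average pattern carries over to Algorithm 3:

```python
import numpy as np

def alternating_mean(curves, n_iter=5):
    """Skeleton of the alternating scheme: align each curve to the current
    mean (here: best cyclic shift in L2, a stand-in for SRV warping), then
    recompute the mean as the pointwise average of the aligned curves."""
    mean = curves[0].copy()
    for _ in range(n_iter):
        aligned = []
        for c in curves:
            shifts = [np.roll(c, s) for s in range(len(c))]          # alignment step
            aligned.append(min(shifts, key=lambda a: np.sum((a - mean) ** 2)))
        mean = np.mean(aligned, axis=0)                              # L2 fitting step
    return mean

grid = np.linspace(0, 2 * np.pi, 32, endpoint=False)
base = np.sin(grid)
curves = [np.roll(base, s) for s in (0, 3, 7)]   # same curve, different "parametrisations"
m = alternating_mean(curves)
```

Because all three inputs are shifted copies of one function, the alignment step undoes the shifts and the fitted mean reproduces the common curve.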

Algorithm 3: Elastic spline mean for open curves
For the warping step we update the optimal warpings γ_i of the observed curves β_i, i = 1, . . . , n, via interpreting them as observed polygons with piecewise constant SRV transformations q_i, i = 1, . . . , n, as in Lemma 2.1. We tackle the remaining maximisation problem (4) using a gradient descent algorithm as discussed before if p̂ is piecewise linear, and Algorithm 1 if p̂ is piecewise constant. In the L² spline fitting step the integrals in the sum need to be approximated, since the curves β_i are only observed on a finite grid 0 = s_{i,0} ≤ s_{i,1} ≤ · · · ≤ s_{i,m_i} = 1, which means the SRV curves q_1, . . . , q_n are unobserved. One option is to assume that the SRVs q_i of the observed curves are piecewise constant, as we do in the warping step. Since p̂ is piecewise linear (or even piecewise constant), (q_i ∘ γ_i)√γ̇_i will be piecewise linear as well (see the proof of Lemma 2.1 in the appendix), which leads to a closed form solution of the integral. If we use this approximation of the integral, the resulting mean tends to overfit the edges of the observed polygons (see for an example the mean plotted in blue on the left hand side of Figure 3). Alternatively, we derive an approximation of the integrals in the L² fitting step of Algorithm 3 using the mean value theorem and the monotonicity of the warping, applied on each interval [s_{i,j}, s_{i,j+1}], j = 0, . . . , m_i − 1.
While this is exact for unknown intermediate points t_{i,j}, we use an approximation by evaluating this mean value of the derivative at the observed points, for all j = 0, . . . , m_i − 1. Thus, for i = 1, . . . , n, the integral in (7) is replaced by a weighted sum. This leaves us with a quadratic minimisation problem w.r.t. the spline coefficients in p̂, for which we compute the solution analytically as a generalised least squares estimate. There are different options to choose the weights ω_{i,j} in this integral approximation. Weights based on the trapezoidal rule for numerical integration give equal importance to each of the observed curves, independent of the number of points m_i observed on each of them. The alternative choice ω_{i,j} = 1 puts more weight on single observations on a specific curve. Consequently, curves or parts of curves with more observations have a higher influence on the estimated mean than curves or parts of curves with fewer observations. The difference between this approximation (with ω_{i,j} = 1) and the one based on assuming observed polygons also for the L² spline fitting step is displayed in Fig. 3 on the left. In this example, the estimated mean based on the discrete integral approximation (in red) is closer to a proper spiral shape. In the following we use weights ω_{i,j} = 1 unless stated otherwise.
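The resulting quadratic problem in the spline coefficients has an analytic solution. The following sketch (the design matrix, hat-function basis and weights are illustrative choices of ours, not the elasdics implementation) solves such a weighted least squares problem for a linear spline basis:

```python
import numpy as np

def weighted_spline_fit(B, y, omega):
    """Generalised least squares: minimise sum_j omega_j * (y_j - B_j c)^2
    analytically via the weighted normal equations."""
    W = np.diag(omega)
    return np.linalg.solve(B.T @ W @ B, B.T @ W @ y)

def linear_basis(s, knots=(0.0, 0.5, 1.0)):
    """Linear B-spline (hat function) basis with one interior knot at 0.5."""
    return np.column_stack([np.maximum(1 - np.abs(s - k) / 0.5, 0.0) for k in knots])

s = np.linspace(0, 1, 21)
y = 2.0 * s                                    # data lying exactly on a linear spline
coef = weighted_spline_fit(linear_basis(s), y, np.ones_like(s))
```

With hat functions, the coefficients are the fitted values at the knots, so data on the line 2s yields coefficients (0, 1, 2); changing the weights ω_j reweights the influence of individual observation points exactly as described above.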
Remark 2.6 (Smooth elastic mean for closed curves). Algorithm 3 can be adapted for closed curves. We replace the first step by updating the optimal parametrisations γ_i via the corresponding minimisation problem for closed curves (5), using gradient descent or Algorithm 2 depending on the spline degree. In the second step, that is, updating the least-squares estimate for given parametrisations, we use a penalty function method to deal with the non-linear constraint of closedness for p̂ (see for example Sun and Yuan [27]). Thus, we add a cost function penalising openness with increasing weight: in the k-th iteration step, we add a penalty term with weight λ_k to the loss function. Since, if p̂ is the SRV of β̂, the penalty term vanishes if and only if β̂ is closed, increasing the weights enforces closedness in the limit. Figure 3 shows three iterations of this adapted algorithm for calculating a smooth mean of four irregularly sampled, closed heart shapes. The initial mean (iteration 0) was computed as a least-squares estimate assuming the curves were parametrised by relative arc length. The sequence (λ_k)_{k∈N} was chosen as λ_k = 10^{−3}k for all k ∈ N.
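The penalty function method can be sketched on a toy least-squares problem, with 'closedness' replaced by a generic equality constraint (coordinates summing to zero, an assumption made purely for illustration) and a penalty weight that grows over iterations:

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([1.0, 2.0, 3.0])

def penalised_loss(x, lam):
    # stand-in "openness" penalty: here "closed" means the coordinates sum to
    # zero, mimicking the penalty that vanishes iff the mean curve is closed
    return np.sum((x - target) ** 2) + lam * np.sum(x) ** 2

x = target.copy()
for k in range(6):
    lam = 10.0 ** k          # increasing penalty weights, analogous to lambda_k
    x = minimize(penalised_loss, x, args=(lam,), method="BFGS").x
```

Each iterate warm-starts the next; as the weight grows, the unconstrained minimiser is driven towards the constrained solution (−1, 0, 1), whose coordinates sum to zero.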

Identifiability of spline curves
We model curves or means for samples of curves using basis representations. If we study equivalence classes of curves modulo re-parametrisation, we have to ensure unique spline representatives in each class, meaning that elements of the quotient space are identifiable via their basis coefficients. To see why this is not self-evident, consider as a simple counterexample in R^1 the space of quadratic polynomials P : [0, 1] → R, a subspace of the quadratic spline space. Note that γ_a(x) = ax² + (1 − a)x defines a feasible warping function for all a ∈ ]0, 1[, since γ_a is differentiable with γ_a'(x) ≥ 0 and γ_a(0) = 0, γ_a(1) = 1. Hence all quadratic polynomials of the form P(x) = p_1 γ_a(x) + p_0 with p_0, p_1 ∈ R are elements of the same equivalence class, although they have varying basis coefficients a p_1, (1 − a) p_1 and p_0 for a ∈ ]0, 1[ w.r.t. the monomial basis expansion. This counterexample shows in particular that one-dimensional spline functions do not have unique representatives in the space of functions modulo re-parametrisation. As identifiability plays an important role in any spline based modelling approach, it is fortunate that, in contrast to the one-dimensional case, we can show that in R^d with d ≥ 2 nearly all quadratic or cubic spline curves have unique basis representations.
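This counterexample is easy to verify numerically. The following check confirms that γ_a is a valid warping function and that the warped polynomial differs pointwise from the identity-parametrised one (i.e. has different monomial coefficients) while sharing its boundary values:

```python
import numpy as np

a, p0, p1 = 0.3, 1.0, 2.0
x = np.linspace(0.0, 1.0, 101)
gamma_a = a * x**2 + (1 - a) * x        # gamma_a(x) = a x^2 + (1-a) x

# gamma_a is a warping function: correct boundary values and monotone
ok_boundary = np.isclose(gamma_a[0], 0.0) and np.isclose(gamma_a[-1], 1.0)
ok_monotone = bool(np.all(np.diff(gamma_a) > 0))

P_warped = p1 * gamma_a + p0            # monomial coefficients (p0, (1-a)p1, a*p1)
P_plain = p1 * x + p0                   # monomial coefficients (p0, p1, 0)
coeffs_differ = not np.allclose(P_warped, P_plain)
same_endpoints = np.isclose(P_warped[0], P_plain[0]) and np.isclose(P_warped[-1], P_plain[-1])
```

By construction P_warped = P_plain ∘ γ_a, so both functions lie in the same equivalence class modulo warping despite their differing coefficients.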
This means nearly all equivalence classes modulo re-parametrisation contain at most one spline curve. Hence we can identify these curves modulo warping via their spline basis coefficients. Only if the spline has a linear image are there splines with differing coefficients in its equivalence class. This is the case if and only if the splines in each coordinate direction are multiples of each other modulo translation. For more details refer to the proof of this theorem in Appendix A.6. Note that we do not make any assumptions on the knots here; in particular the knots could be different for Q and P. That means there is almost always a unique representative modulo warping in ⋃_{K,κ_0,...,κ_K} S^l_{K;κ_0,...,κ_K} for given l = 2, 3, i.e. in the union of all spline spaces with varying knots (also varying in number). Considering only quadratic or cubic splines is crucial, as a counterexample with splines of degree four shows: there exist spline curves P and Q of degree four for which γ(t) = 0.5(t² + t) is a suitable warping function, since it fulfils P = Q ∘ γ and is monotonically increasing and onto, but the monomial coefficients differ between P and Q and are thus not identifiable modulo warping. The result for cubic spline curves also implies uniqueness of representatives for linear spline SRV curves, another useful result for identifiable modelling of elastic curves. Corollary 2.8. Let β_1, β_2 : [0, 1] → R^d with SRV curves q_1 and q_2, respectively. If q_1 and q_2 are nowhere constant linear splines and q_2(t) = q_1(γ(t))√γ̇(t), then q_1 = q_2.
Proof. Let q_1² and q_2² denote the component-wise squares of q_1 and q_2, respectively. We compute ∫_0^s q_2²(t) dt = ∫_0^s q_1²(γ(t)) γ̇(t) dt = ∫_0^{γ(s)} q_1²(u) du. Here we have cubic splines P(s) = ∫_0^s q_2²(t) dt and Q(s) = ∫_0^s q_1²(t) dt on both sides, with P = Q ∘ γ. Hence we deduce γ = id by Theorem 2.7 and consequently q_2 = q_1. Note that the cubic spline curve P(s) = ∫_0^s q_2²(t) dt is linear on an interval if and only if q_2 is constant on this interval, which is excluded by the assumptions.
To sum up, the space of linear SRV spline curves seems particularly suitable to model smooth curves modulo parametrisation and translation, as these curves can be identified via their basis coefficients, i.e. there is a unique representation in this space, and the corresponding curves are differentiable, which leads to visually smooth curves. Remark 2.9 (Splines of lower or higher degree). Piecewise linear spline curves, or equivalently piecewise constant SRV curves, are identifiable via their spline basis coefficients if we consider one spline space S^1_{K;κ_0,...,κ_K} but not the union of several such spaces, and assume that the curve is not differentiable at any of its knots (i.e. no knot is superfluous). Hence, with this weaker identifiability result, piecewise constant SRV curves are a suitable model space as well, with curves modelled as polygons instead of smooth curves. For more details see Appendix A.7. For SRV splines of higher degree, first note that the counterexample for splines of degree 4 could similarly be constructed for all splines whose degree is not a prime number. If the degree of the splines is a prime number, it seems possible that one can show a similar identifiability result. This would imply identifiability for quadratic SRV curves using an analogous argument as in Corollary 2.8.
Since we want to use these spline spaces for estimation of smooth or polygonal curves, we need the following result on continuity of the embedding. It allows us to interpret estimated coefficients, for instance to compare the coefficients of estimated group means to investigate local differences, as it ensures convergence of the coefficients if the curves converge. Hence, if we construct a sequence that converges to the mean with respect to the elastic distance, as we aim to do in Algorithm 3, we can conclude that the estimated spline coefficients converge to the spline coefficients of the mean as well. We show that this continuity property holds whenever we consider a (subset of a) finite-dimensional spline space of the following form as a model space $\Xi$. Definition 2.10. Let $\Xi$ be one of the following for given fixed $K \ge 2$, $0 = \kappa_0 < \dots < \kappa_K = 1$: • A subset of $S^l_{K;\kappa_0,\dots,\kappa_K}$, $l = 2, 3$, which consists of identifiable splines as described in Theorem 2.7, additionally centred (i.e. with integral zero) to account for translation.
• A set of identifiable curves with linear spline SRV curves in $S^1_{K;\kappa_0,\dots,\kappa_K}$ from Corollary 2.8.
• The set of curves with piecewise constant SRV curves in $S^1_{K;\kappa_0,\dots,\kappa_K}$ from Remark 2.9.
Note that we do not consider unions of spline spaces here for simplicity in considering convergence of corresponding coefficients.
Lemma 2.11 (Topological embedding). Let $f : (\Xi, \|\cdot\|) \to (\mathcal{A}, d)$ be the embedding of the spline coefficients defining the functions in $\Xi$, equipped with the usual Euclidean distance $\|\cdot\|$, into the space $\mathcal{A}$ of absolutely continuous curves with respect to the elastic distance $d$. Then $f$ is a topological embedding, i.e. $f$ is a homeomorphism onto its image.
A proof for this statement can be found in Appendix A.8. It shows that the distance on spline coefficients and elastic distance of curves modulo translation are topologically equivalent on suitable spline spaces. This means a sequence of curves converges with respect to the spline coefficients if and only if it converges with respect to the elastic distance.
Overall, we thus have that any spline model $\Xi$ in Definition 2.10 yields an identifiable model for the Fréchet mean of observed curves, with the possibility to interpret spline coefficients, and this also holds for convergent sequences of estimators such as we aim to construct in our algorithms.

Simulations
We test our methods, which we made available for public use in the R-package elasdics [25], on simulated data. Since an implementation of the SRV framework is already available for R in the package fdasrvf [28], based on Srivastava et al. [24], we compare our results to its output whenever possible.

Simulation: Aligning sparsely and irregularly sampled curves
In this first simulation, we compare our methods for aligning sparsely and irregularly sampled curves to the implementation of the dynamic programming (DP) algorithm in fdasrvf [28]. Since this DP implementation only allows for an equal number of observed points on both curves, we restrict the simulation to this case, although we developed our methods in particular for differing numbers of observed points per curve. In Figure 14 in Appendix B, we present one simulated example for open and closed curves each.
For the open setting we choose a parametrised curve $\beta(t) = \sin(t)\,(\cos(12t) + 2t,\ \sin(12t) + t)^\top$, which we use as a template for both curves. The first curve $\beta_1$ (displayed in red in Figure 14 in Appendix B) is obtained via sampling an unbalanced observation grid $t_1, \dots, t_m$ with $m \in \{10, 30, 50\}$ and adding a Gaussian random walk error (with standard deviation $sd = 0.01$) to the evaluations $\beta(t_1), \dots, \beta(t_m)$. The second curve $\beta_2$ is re-sampled 30 times (displayed in grey in Figure 14) using the same sampling scheme as for $\beta_1$.
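This sampling scheme for the open setting can be sketched as follows (a Python illustration of the described scheme, not the R code used in the paper; the seed and $m = 30$ are arbitrary choices):

```python
import numpy as np

def template(t):
    # beta(t) = sin(t) * (cos(12t) + 2t, sin(12t) + t)^T
    return np.sin(t)[:, None] * np.column_stack(
        (np.cos(12 * t) + 2 * t, np.sin(12 * t) + t))

rng = np.random.default_rng(1)
m = 30                                                  # one of m in {10, 30, 50}
t_obs = np.sort(rng.uniform(0, 1, m))                   # unbalanced observation grid
walk = np.cumsum(rng.normal(0, 0.01, (m, 2)), axis=0)   # Gaussian random walk error
beta1_obs = template(t_obs) + walk                      # noisy template evaluations
```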
For the closed setting we choose two butterfly shapes available in fdasrvf [28]. These are discretely observed curves with 100 observations each. We down-sample the curves such that $m \in \{30, 60, 90\}$ points per curve are left and such that points with high estimated curvature are more likely to be included. This way, the images of the curves are well preserved, as we are more likely to remove points on straight lines. Furthermore, we add an error term $\sin\!\big(\pi\,\tfrac{i-1}{m-1}\big)\,\epsilon_i$ to the $i$-th remaining observation for all $i = 1, \dots, m$, where $\epsilon_i$ is distributed according to a Gaussian random walk with standard deviation $sd = 0.5$ and the modification with the sine function ensures closedness. According to this sampling scheme, we draw one copy (plotted in red) of $\beta_1$ from the first butterfly shape and 30 copies (plotted in grey) of $\beta_2$ from the second butterfly shape.
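The down-sampling and the closedness-preserving noise can be sketched as follows; this is our own Python illustration, with exterior angles as a simple discrete-curvature proxy (not necessarily the estimator used in the paper) and a circle standing in for the butterfly shape:

```python
import numpy as np

def downsample_by_curvature(points, m, rng):
    """Keep m of the given polygon points, preferring vertices with a large
    exterior angle (a simple discrete-curvature proxy; our own choice)."""
    p = np.asarray(points, dtype=float)
    d1, d2 = p[1:-1] - p[:-2], p[2:] - p[1:-1]
    cos_ang = np.einsum('ij,ij->i', d1, d2) / (
        np.linalg.norm(d1, axis=1) * np.linalg.norm(d2, axis=1))
    angle = np.arccos(np.clip(cos_ang, -1.0, 1.0))
    w = np.concatenate(([angle.max()], angle, [angle.max()])) + 1e-9
    idx = rng.choice(len(p), size=m, replace=False, p=w / w.sum())
    return p[np.sort(idx)]

def closed_noise(m, sd, rng):
    """sin(pi * (i-1)/(m-1)) * eps_i with eps a Gaussian random walk;
    the sine weight vanishes at both ends, preserving closedness."""
    eps = np.cumsum(rng.normal(0, sd, (m, 2)), axis=0)
    return np.sin(np.pi * np.arange(m) / (m - 1))[:, None] * eps

rng = np.random.default_rng(2)
s = np.linspace(0, 2 * np.pi, 100, endpoint=False)
shape = np.column_stack((np.cos(s), np.sin(s)))   # stand-in for a butterfly shape
kept = downsample_by_curvature(shape, 30, rng)
noisy = kept + closed_noise(30, 0.5, rng)
```

By construction the first and last retained points are left unperturbed, so a closed polygon stays closed.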
For each of the settings we compare the optimal alignment of each copy of $\beta_2$ to the corresponding $\beta_1$ computed by our coordinate-wise-optimisation (CWO) algorithm with the alignment produced by the dynamic programming (DP) from fdasrvf [28]. When looking at the coordinates separately, we visually observe slightly better alignment for our method CWO compared to DP. This is also evident in a smaller average elastic distance, e.g. a reduction of 33% and 26% on average for $m = 30$ in the open and closed setting, respectively. For moderate $m$ we observe a reduction of 48% (open, $m = 50$) and 13% (closed, $m = 60$). As expected, this difference decreases if 90 points of the butterfly shapes are selected (4% reduction on average), as in this case the points are observed on a nearly regular, fairly dense grid, which is the setting the implementation in fdasrvf is designed for.
A highly unbalanced distribution of observed points on the curves described above causes difficulties for the mean computation in fdasrvf [28] as well. Figure 15 in Appendix B demonstrates this for sets of curves that are partially densely and partially sparsely observed, for which we compute means with respect to the elastic distance. The means in red, which are computed by the \texttt{curve\_karcher\_mean} function in fdasrvf [28], do not capture the image of the observed curves well (with e.g. a butterfly no longer recognisable). In contrast, our methods are specifically developed for such unbalanced data, which results in visually appealing mean curves displayed in blue (e.g. of butterfly shape). Since the implementation in fdasrvf aims at computing a mean with respect to the geodesic shape distance, i.e. minimises the geodesic distance on the sub-manifold of (closed) curves with fixed curve length, the results are not completely comparable. Nevertheless, in particular for the open curves, which are of similar length, we expect the impact of this aspect to be relatively small compared to the warping.

Simulation: Convergence of spline mean coefficients
The second simulation is concerned with the convergence and the identifiability of spline means and their associated coefficients, now also with varying numbers of points per curve. For a known template curve $\beta$ with known B-spline coefficients $\xi_1, \dots, \xi_B$ we generate a set of observed curves $\beta_1, \dots, \beta_n$ via independently sampling the coefficients $\xi_{i,b} \sim N(\xi_b, \sigma^2)$ for all $i = 1, \dots, n$, $b = 1, \dots, B$. If the template curve is closed, we additionally close the sampled curves via minimising the penalty function given in Remark 2.6 (for estimating a closed mean) in gradient direction. The points $t_{i,1}, \dots, t_{i,m_i-1}$ on which $\beta_i$ is observed are sampled uniformly on $[0, 1]$, where the number of observed points $m_i$ is sampled uniformly either from $\{10, \dots, 15\}$ (very sparse and unbalanced setting) or $\{30, \dots, 50\}$ (less sparse but still unbalanced setting). Examples of curves sampled from two open template curves, modelled as linear splines on SRV level with three or nine equally spaced inner knots, respectively, are displayed in Figure 16 in Appendix B. Here we choose the standard deviation as $\sigma = 0.3$ and $\sigma = 0.4$ for the two open template curves, respectively. Examples of curves sampled from a heart-shaped template (with standard deviation $\sigma = 4$) are displayed in Figure 4. The closed, heart-shaped curve is modelled as a linear spline on SRV level with ten equally spaced inner knots. In this very sparse setting, the sampled curves are hardly recognisable as heart shapes (cf. Figure 4). However, the elastic mean curve over $n = 5$ observations, estimated using the true knot set and linear SRV splines to allow comparison of estimated and true coefficients, represents the original heart surprisingly well even in this challenging setting. We repeated this simulation 40 times (Figure 5), each for varying numbers of observations $n \in \{5, 20\}$ and observed points per curve $m_i$. For $m_i \in \{10, \dots, 15\}$ observations per curve we generally obtain a heart-shaped curve, which seems smaller and has less pronounced features than the template. If we increase the number of observed curves from $n = 5$ to $n = 20$, the variance of the mean curves decreases but a certain bias due to under-sampling the curves remains. This also manifests in the coefficients of the spline means: for $n = 20$ we observe lower variance of the estimated coefficients than for $n = 5$, but the distribution of the estimated spline coefficients is still not centred at the coefficients of the template (indicated as red dots in Figure 5).
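The coefficient-sampling step can be sketched as follows; for simplicity this toy Python version samples B-spline coefficients of the curve directly rather than on SRV level, with made-up template coefficients:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
deg, inner = 1, 9                                  # linear splines, nine inner knots
knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, inner + 2), [1.0] * deg))
B = len(knots) - deg - 1                           # number of basis functions
xi = rng.normal(size=(B, 2))                       # made-up template coefficients

def sample_curve(sigma):
    xi_i = xi + rng.normal(0, sigma, xi.shape)     # xi_{i,b} ~ N(xi_b, sigma^2)
    spline = BSpline(knots, xi_i, deg)
    m_i = int(rng.integers(10, 16))                # very sparse setting
    t = np.sort(rng.uniform(0, 1, m_i))            # irregular observation grid
    return t, spline(t)

t_i, curve_i = sample_curve(0.3)
```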
If we increase the number of points on each curve to $m_i \in \{30, \dots, 50\}$, the estimated means with respect to the elastic distance adapt more closely to the template. Moreover, both the variance of the estimated spline coefficients and the distance of the estimated coefficients to the template decrease. The reduction in variance indicates convergence of the spline coefficients for $n \to \infty$, although we do not expect them to converge precisely to the coefficients of the template, not even if $m_i \to \infty$ for all $i = 1, \dots, n$. This is because we draw the sample curves $\beta_1, \dots, \beta_n$ such that $\beta$ is the mean with respect to the $L^2$ distance on SRV level, but this does in general not imply that $\beta$ is the mean with respect to the elastic distance. Nevertheless, we expect this difference to be small, as the coefficients in the rightmost barplot are close to the red dots indicating the template's coefficients. In addition, their low variance for $n = 20$ confirms our theoretical results on identifiability of spline coefficients in our model (Corollary 2.8) and continuity of the embedding (Lemma 2.11). We observe similar behaviour of the estimated means and associated coefficients for the open curves displayed in Figures 17 and 18 in Appendix B.
So far, we have only elaborated on the convergence of correctly specified spline means, also to show convergence of the corresponding spline coefficients. Since this assumption is usually questionable in real data applications, we demonstrate the behaviour of our methods in the case of model misspecification. Figure 19 in Appendix B shows means with varying knots using linear SRV splines (smooth means in blue) or constant SRV splines (polygonal means in red). All means are computed for the same set of $n = 20$ heart-shaped curves, which have been sampled as described above from the third template with $m_i \in \{30, \dots, 50\}$ points per curve. For a sufficient number of knots, both the smooth and the polygonal means reproduce the original heart shape well. If we consider the number of coefficients $n_{\text{coefs}}$ as a measure of model complexity, we observe that the smooth means are closer to the template than the polygonal ones, given the same number of coefficients, with a local minimum at the correctly specified model. This shows that one can obtain more parsimonious models for smooth means using linear SRV curves. Even though the distance to the template for a polygonal mean can be reduced by using more knots, it does not seem to become as low as for the linear SRV mean. This indicates that using linear SRV splines for modelling a smooth 'true' mean might reduce the bias due to under-sampling the curves. While we see a local minimum at the $n_{\text{coefs}}$ used to generate the data, close values of $n_{\text{coefs}}$ give similar results; in particular, values larger than the true one give similarly good results, with the distance generally decreasing in $n_{\text{coefs}}$. This indicates that results are not very sensitive to $n_{\text{coefs}}$, given it is sufficiently large.

Applications on real data
Clustering and modelling smooth means of GPS-tracks
As our main goal is the development of statistical (elastic) analysis methods for discretely observed data curves, we demonstrate their practical usefulness on two datasets. The first one comprises GPS waypoints tracked on Tempelhof Field, a former airfield (until 2008) in Berlin, which is now publicly used as a recreation area. Clustering and smooth mean estimation allow us to find new paths on Tempelhof Field not yet included in OpenStreetMap. The dataset consists of 55 paths with 15 to 45 waypoints each, recorded by members of our working group using their mobile phones for tracking. Due to the variety of mobile devices used, the number of points per curve differs considerably, hence the data is highly irregular and quite sparsely observed (see Figure 6). We are solely interested in analysing the paths the participants walked on, not the trajectories over time. Separately looking at longitude and latitude over time suggests that the individuals had quite different walking patterns, i.e. they did not move with constant speed. This implies that classical functional analysis of the trajectories is not suitable to study the paths used by the test subjects.
From the GPS data we recover the paths the individuals walked on while tracking their trajectories. This is done in two steps. First, the tracks are clustered using average linkage based on the elastic distance and the elbow criterion for stopping. Here we apply Algorithm 1 to approximate the pairwise distance between the irregularly observed open tracks. Afterwards we compute a smooth elastic Fréchet mean for each of the four largest clusters using Algorithm 3 and linear splines on SRV level with 10 inner knots. The clustering result displayed in Figure 7 on the left is visually satisfying. Looking again at longitude and latitude separately (on the right) clearly indicates that clustering based on the usual $L^2$ distance would lead to worse results. In particular, elements of the first and third largest clusters might be classified differently using a non-elastic distance.
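The clustering step can be sketched as follows, assuming the pairwise elastic distances have already been computed (e.g. with the \texttt{elasdics} package in \texttt{R}); to keep the example self-contained, a Euclidean stand-in matrix replaces the elastic distances here:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# D[i, j] would hold the elastic distance between tracks i and j;
# here a stand-in distance matrix on toy points
rng = np.random.default_rng(3)
X = rng.normal(size=(8, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

Z = linkage(squareform(D, checks=False), method='average')  # average linkage
labels = fcluster(Z, t=4, criterion='maxclust')             # cut into <= 4 clusters
```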
The smooth mean curves for each of the four largest clusters displayed in Figure 8 on the left seem to describe the observed tracks well, although the number of estimated spline coefficients and therefore model parameters is low (24 coefficients per mean curve compared to 30 to 90 values per observed curve). Thus, we obtain a smooth mean curve for irregularly sampled curves based on the elastic distance that captures the data well and allows dimension reduction.
One application of the procedure outlined above can be to identify new paths not yet included in an existing map. The smooth mean curves can be added to an OpenStreetMap, for instance, where we only add parts of our estimated means that are notably different from already existing paths. An example of the resulting map is displayed in Figure 8 on the right.

Classifying spiral curve drawings for detecting Parkinson's disease
The Archimedes spiral-drawing test is a common, non-invasive tool for diagnosing patients with Parkinson's disease. Usually, the drawing task is performed on paper and analysed by medical experts to identify deviations of the drawn shape from the spiral template (Alty et al. [1]). The dataset we analyse is available through https://www.kaggle.com/team-ai/parkinson-disease-spiral-drawings. Figure 9 displays the spiral curves drawn by the participants. It is visually notable that the non-impaired subjects in the control group follow the template more closely than the patients with Parkinson's disease. This difference seems to become even more severe for the dynamic spiral test.
While the authors of the original study based their analysis on differences in speed distributions of both tasks, Kurt et al. [10] imposed pre-alignment of the spiral curves using a heuristic dynamic time warping algorithm. We will follow up on this, but use the elastic distance defined in Section 1 as a proper distance between the observed curve and a template instead. Moreover, we are only looking for highly interpretable classifiers giving decision rules of the following form: Classify an individual as being at high risk of having Parkinson's disease if the distance of the curve drawn by this individual to the template exceeds a certain threshold. This procedure mimics the decision made by medical experts based on the spiral drawing and allows us to assess whether the additional information provided by time or speed is actually necessary for good classification.
For our analysis, we only use 10% of the values per curve, which results in irregularly sampled curves with 55 to 269 points each. Based on visual inspection, the images of the curves still almost coincide with the original curves. We compute the elastic mean (see Subsection 2.3) of all curves drawn in the static spiral test using piecewise constant splines with 201 knots on SRV level. Afterwards, we use the resulting polygonal mean (displayed in black in Figure 9 on the right) as a template curve. Alternatively, a parametrised version of the original template curve could be directly used in practice, if available. Figure 10 shows the elastic distances of the curves drawn by the participants to the template curve. As expected, this distance is generally greater for Parkinson's patients than for the control group in both settings. Moreover, looking at the scatter plot on the right of Figure 10, there seems to be a strong positive correlation between the distance in the static test and the distance in the dynamic test for healthy individuals (in blue). For Parkinson's patients, this trend is not strongly present. Note that one observation for a Parkinson's patient with an extreme distance greater than 35 in the dynamic setting is not displayed. The grey areas indicate the decision rule based on the zero-one loss with in-sample accuracies of 77.5%, 92.5% and 97.5% for the classifiers based on the static test, dynamic test or both tests, respectively.
We propose intuitive decision rules of the form: classify as status 'Parkinson' if the distance of the curve drawn by the test subject to the mean curve exceeds a threshold. Here we either analyse the curves in the static or the dynamic spiral test (Figure 10 on the left and in the middle). The grey areas in Figure 10 indicate the corresponding decision rule. Alternatively, we classify as status 'Parkinson' if either of the distances in the two tests exceeds a respective threshold (as indicated in grey in Figure 10 on the right). To estimate these thresholds, we directly optimise the zero-one loss (also called misclassification loss), as this is feasible for a small dataset and a small set of possible decision functions. For the classifiers with a single variable, that is the distance in either the static or the dynamic setting, we would expect similar decision boundaries for alternative classifiers such as logistic regression or support vector machines with linear kernel and the hinge loss.
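Such a threshold rule and its evaluation can be sketched as follows, with made-up distances in place of the study data (our own Python illustration, not the analysis code):

```python
import numpy as np

def best_threshold(dist, status):
    """Threshold c minimising the zero-one loss of the rule
    'classify as Parkinson iff dist > c'."""
    cand = np.concatenate(([dist.min() - 1.0], np.sort(dist)))
    errs = [np.mean((dist > c) != status) for c in cand]
    return cand[int(np.argmin(errs))]

def loocv_accuracy(dist, status):
    """Leave-one-out cross-validated accuracy of the threshold rule."""
    hits = 0
    for i in range(len(dist)):
        mask = np.arange(len(dist)) != i
        c = best_threshold(dist[mask], status[mask])
        hits += int((dist[i] > c) == status[i])
    return hits / len(dist)

# made-up distances to the template; True marks a Parkinson's patient
dist = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
status = np.array([False, False, False, True, True, True])
acc_in = np.mean((dist > best_threshold(dist, status)) == status)
acc_cv = loocv_accuracy(dist, status)
```

On these separable toy distances the in-sample accuracy is 1, while leaving one observation out can move the threshold and lower the cross-validated accuracy, mirroring the in-sample versus leave-one-out gap reported below.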
To evaluate our classifiers we use leave-one-out cross-validation for which we obtain 72.5% accuracy for the static setting, 90.0% accuracy for the dynamic setting and 92.5% accuracy for the classifier based on static and dynamic spiral drawings. Since we observe in-sample only one misclassified observation for the classifier based on both distances, including additional features like the difference or the quotient of the two distances is not advisable. Nevertheless, if more data were available, those variables might improve the classification further. To see that an elastic analysis of the observed curves is favourable we compare our results to classification based on the usual L 2 -distance. For this analysis, we re-parametrise the curves according to their relative arc length to account for different speed patterns but do not align them in an elastic manner. For these L 2 -distances we obtain accuracies of 55.0% for the static setting, 80.0% for the dynamic setting and 77.5% for the classifier based on both distances. Hence the elastic distance performs better for all three classifiers.
Elastic alignment of the observed curves to a template allows us to separate phase and amplitude variation. Our classifiers depend on the elastic distance, which means we rely only on the amplitude variation. To see if the phase, that is the temporal pattern, yields additional information compared to only the image, we look at the warping functions in Figure 11, which shows the optimal warping in both settings, separated by the actual status and the status predicted by the classifiers based on only the corresponding distance each and leave-one-out cross-validation. To assess whether the warping functions add information to the elastic distance of the curve to the template, we further inspect the warping curves which belong to misclassified subjects. There are two Parkinson's patients with conspicuous speed patterns whom we misclassify as 'control' in the static setting. Their speed patterns show starting and stopping motions, which are not present in the curves of any of the healthy control subjects. Contrarily, we do not observe any noticeably different speed pattern for the misclassified individuals in the dynamic setting. Here the image of the curve seems to capture all available information on the status of the participant.
In conclusion, the elastic distance of the curve drawn by the patient to a template curve is an intuitive measure of performance for both the static and the dynamic spiral drawing test. Using this feature for classification, we mimic and objectify the medical diagnosis process of a doctor. Our classifier performs particularly well for the dynamic spiral test, as the struggle of the Parkinson's patients to follow the curve is captured in the image of the curve here. If more data were available, maybe even from patients with differing but related neurological conditions like essential tremor, it might also be beneficial to analyse the whole aligned curves and not only use their distance to the template, or to additionally analyse the warping functions, which our approach allows to separate from the images.

Discussion
The SRV framework has been developed to analyse curves in R d without taking their parametrisation into account (elastic analysis). Analogously to developments in functional data analysis, these methods were at first targeted at densely observed curves. Since curves are usually observed on a discrete grid in statistical practice, existing methods (as in Srivastava and Klassen [22]) relied on discretising the warping functions and on interpolation of the curves. However, this approach has limitations if curves are sparsely observed. The main contribution of our work is to address the discrete and often sparse nature of observed curves explicitly, and to go beyond the pairwise alignment of curves to develop statistical elastic analysis methods for samples of irregularly or sparsely observed curves in R d .
To do so, we proposed to interpret observed curves as polygons with constant-speed parametrisation between the corners to make the alignment problem accessible, either between two such curves or between an observed curve and a model-based curve such as a mean curve. We suggested using splines on SRV level for modelling the mean, either piecewise linear splines, leading to smooth mean curves, or piecewise constant splines, implying polygonal mean curves, and developed corresponding estimation algorithms for open and closed curves. Our approach for elastic mean estimation does not need to interpolate the discretely observed curves, as other approaches developed for densely observed curves do, but directly approximates the integral that appears in the corresponding optimisation problem. However, since polygons underestimate the curvature of the real unobserved curves, the polygonal assumption does lead to a kind of shrinkage bias in the estimated elastic mean for sparsely observed curves. While this bias towards curves with smaller curvature decreases with an increasing number of observed points per curve, it would be of interest to develop correction methods for (very) sparse settings in future work.
We have shown that the SRV splines modulo parametrisation used for modelling the elastic mean are in general identifiable via their coefficients and we confirmed this result in simulations. While we did not explicitly address the choice of the optimal number of knots for such splines, a further simulation has shown that the estimation of the mean curve is not sensitive to the specific spline degree and choice of knots, given the number of knots is sufficiently large. It may be interesting in the future to investigate penalised estimation with a large number of spline basis functions, although the interpretation of coefficients and identifiability modulo parametrisation would need to be studied in this setting.
Another appealing direction for further research is to include our methods for sparsely and irregularly sampled curves in existing approaches for functional shape analysis. Here the curves have to be aligned with respect to scaling and/or rotation in addition to the alignment with respect to parametrisation and translation. Since this is usually done iteratively, it seems promising to combine this with the iterative warping and mean fitting steps in our methods. Furthermore, elastic mean estimation for irregularly and/or sparsely sampled curves can be seen as a first step towards elastic regression models for such data. That means our methods might be useful building blocks for modelling curves or shapes depending on continuous and/or discrete covariates.

Appendices
A Proofs and Computations
In this part of the appendix we provide proofs of all statements presented in Section 2.
A.1 Proof of Lemma 2.1
To calculate the elastic distance between two square root velocity curves $p, q : [0, 1] \to \mathbb{R}^d$ one has to consider the minimisation problem
$$\min_{\gamma} \left\| p - (q \circ \gamma)\sqrt{\dot\gamma} \right\|_{L^2}^2$$
over all suitable warping functions $\gamma$. The objective function can be written as
$$\|p\|_{L^2}^2 + \|q\|_{L^2}^2 - 2\int_0^1 \langle p(t), q(\gamma(t))\rangle \sqrt{\dot\gamma(t)}\,dt.$$
Hence the minimisation problem stated above is equivalent to maximising
$$\int_0^1 \langle p(t), q(\gamma(t))\rangle \sqrt{\dot\gamma(t)}\,dt.$$
We assume that $q$ is the square root velocity curve of a polygon (for example a polygon with observations at its corners). Hence $q$ is piecewise constant, which means there exist time points $0 = s_0 < s_1 < \dots < s_{m-1} < s_m = 1$ such that $q|_{[s_j, s_{j+1}]} = q_j \in \mathbb{R}^d$ for all $j = 0, \dots, m-1$. Since $\gamma$ is increasing and onto, this gives time points $0 = t_0 < \dots < t_m = 1$ such that $\gamma(t_j) = s_j$ for all $j = 1, \dots, m$. Hence the optimisation problem equivalently becomes maximising
$$\sum_{j=0}^{m-1} \int_{t_j}^{t_{j+1}} \langle p(t), q_j \rangle \sqrt{\dot\gamma(t)}\,dt.$$
We can split this optimisation problem into an outer maximisation over $t_1, \dots, t_{m-1}$ and an inner one, where for fixed $j = 0, \dots, m-1$ the following maximisation problem needs to be solved:
$$\sup_{\dot{\tilde\gamma} \ge 0,\ \int_{t_j}^{t_{j+1}} \dot{\tilde\gamma}(t)\,dt = s_{j+1} - s_j}\ \int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle \sqrt{\dot{\tilde\gamma}(t)}\,dt. \tag{8}$$
We obtain an upper bound for these objective functions using the Cauchy–Schwarz inequality. We have
$$\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle \sqrt{\dot{\tilde\gamma}(t)}\,dt \le \int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+ \sqrt{\dot{\tilde\gamma}(t)}\,dt \le \sqrt{\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt}\ \sqrt{\int_{t_j}^{t_{j+1}} \dot{\tilde\gamma}(t)\,dt} = \sqrt{(s_{j+1}-s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt}. \tag{9}$$
To show that this upper bound is actually the supremum over all feasible functions $\tilde\gamma$, we consider two distinct cases.
i) If $\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt > 0$, define $\tilde\gamma$ via
$$\dot{\tilde\gamma}(t) = \frac{(s_{j+1} - s_j)\,\langle p(t), q_j\rangle_+^2}{\int_{t_j}^{t_{j+1}} \langle p(u), q_j\rangle_+^2\,du}.$$
This choice of $\tilde\gamma$ is feasible as it attains only non-negative values and $\int_{t_j}^{t_{j+1}} \dot{\tilde\gamma}(t)\,dt = s_{j+1} - s_j$ for all $j = 0, \dots, m-1$. We calculate
$$\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle \sqrt{\dot{\tilde\gamma}(t)}\,dt = \sqrt{\frac{s_{j+1}-s_j}{\int_{t_j}^{t_{j+1}} \langle p(u), q_j\rangle_+^2\,du}}\ \int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle\, \langle p(t), q_j\rangle_+\,dt = \sqrt{(s_{j+1}-s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt},$$
where the last equality is due to $\langle p(t), q_j\rangle\,\langle p(t), q_j\rangle_+ = \langle p(t), q_j\rangle_+^2$, since $\langle p(t), q_j\rangle < 0$ implies $\langle p(t), q_j\rangle_+ = 0$. Hence $\tilde\gamma$ is a maximising function.
ii) If $\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt = 0$, the objective function is bounded above by 0 due to (9), and we construct a sequence $(\tilde\gamma_k)_{k\in\mathbb{N}}$ of feasible functions to reach that upper bound. For all $k \in \mathbb{N}$ let
$$\dot{\tilde\gamma}_k(t) = k\,(s_{j+1}-s_j)\,\mathbf{1}_{[t_j,\, t_j + 1/k]}(t),$$
which shows that the functions $\tilde\gamma_k$ are feasible for $k \ge \frac{1}{t_{j+1}-t_j}$. Since $\|p\|_\infty < \infty$ we have for sufficiently large $k \in \mathbb{N}$
$$\left| \int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle \sqrt{\dot{\tilde\gamma}_k(t)}\,dt \right| \le \|p\|_\infty\, \|q_j\|\, \sqrt{k\,(s_{j+1}-s_j)}\ \frac{1}{k} \xrightarrow{k \to \infty} 0.$$
This shows that $(\tilde\gamma_k)$ is a maximising sequence of warping functions. In this case, we do not find a maximising warping function $\gamma$ but only a sequence of maximising warping functions $\gamma_k$.
In both cases i) and ii), the inner optimisation (8) takes the value $\sqrt{(s_{j+1}-s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt}$ for given $j = 0, \dots, m-1$. The overall optimisation thus becomes the outer optimisation of the sum of these terms over $t_1, \dots, t_{m-1}$, i.e. it takes the form (4).
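The resulting loss of the form (4) can be sketched numerically; the following is our own illustration, with the integral approximated by the trapezoidal rule rather than computed exactly from the piecewise structure:

```python
import numpy as np

def phi(t_inner, p, q_vals, s):
    """Loss of the form (4): Phi(t_1, ..., t_{m-1}) =
    sum_j sqrt((s_{j+1} - s_j) * I_j) with
    I_j = int_{t_j}^{t_{j+1}} <p(t), q_j>_+^2 dt,
    the integral approximated here by the trapezoidal rule."""
    t = np.concatenate(([0.0], np.asarray(t_inner, dtype=float), [1.0]))
    total = 0.0
    for j in range(len(q_vals)):
        grid = np.linspace(t[j], t[j + 1], 201)
        f = np.clip(p(grid) @ q_vals[j], 0.0, None) ** 2    # <p, q_j>_+^2
        I_j = np.sum((f[:-1] + f[1:]) / 2) * (grid[1] - grid[0])
        total += np.sqrt((s[j + 1] - s[j]) * I_j)
    return total

# sanity check: if p is identically the single constant SRV value q_0 = (1, 0),
# the loss reduces to |q_0|^2 = 1
p_const = lambda g: np.column_stack((np.ones_like(g), np.zeros_like(g)))
val = phi([], p_const, np.array([[1.0, 0.0]]), [0.0, 1.0])
```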

A.2 Gradient of the loss function in Lemma 2.1
The simplified loss function given in Lemma 2.1 is differentiable if $p$ is at least continuous. In this case the partial derivatives can be computed as
$$\frac{\partial \Phi}{\partial t_j} = \frac{(s_j - s_{j-1})\,\langle p(t_j), q_{j-1}\rangle_+^2}{2\sqrt{(s_j - s_{j-1})\int_{t_{j-1}}^{t_j} \langle p(t), q_{j-1}\rangle_+^2\,dt}} - \frac{(s_{j+1} - s_j)\,\langle p(t_j), q_j\rangle_+^2}{2\sqrt{(s_{j+1} - s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt}}, \quad j = 1, \dots, m-1.$$
If $p$ is piecewise linear, $t \mapsto \langle p(t), q_j\rangle_+^2$ is piecewise quadratic and one can compute the integral in the denominator exactly.

A.3 Closed-form solution for the coordinate-wise maximisation needed in Algorithm 1
For fixed $j \in \{1, \dots, m-1\}$ and fixed $0 = t_0 \le \dots \le t_{j-1} \le t_{j+1} \le \dots \le t_m = 1$ we need to solve
$$\max_{t_j \in [t_{j-1}, t_{j+1}]} L(t_j), \quad L(t_j) = \sqrt{(s_j - s_{j-1})\int_{t_{j-1}}^{t_j} \langle p(t), q_{j-1}\rangle_+^2\,dt} + \sqrt{(s_{j+1} - s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt}. \tag{11}$$
Since $p$ is assumed to be piecewise constant on $[t_{j-1}, t_{j+1}]$, there exist $t_{j-1} = r_0 < \dots < r_l = t_{j+1}$ such that $p|_{[r_\iota, r_{\iota+1}[} = p_\iota \in \mathbb{R}^d$ for all $\iota = 0, \dots, l-1$. Hence the integrals above are piecewise linear in $t_j$, and for all $\iota = 0, \dots, l-1$ there are constant values $A_{\iota 1}, A_{\iota 2} \ge 0$ and $C_{\iota 1}, C_{\iota 2}$ such that the objective function restricted to $[r_\iota, r_{\iota+1}[$ can be written as
$$L(t_j) = \sqrt{C_{\iota 1} + A_{\iota 1}\, t_j} + \sqrt{C_{\iota 2} - A_{\iota 2}\, t_j}.$$
Without loss of generality we assume $A_{\iota 1}, A_{\iota 2} > 0$, since otherwise the objective function is monotonic, hence attains its maximum on the boundary, and this case can be included separately below. Thus $L|_{[r_\iota, r_{\iota+1}]}$ is twice continuously differentiable on $]r_\iota, r_{\iota+1}[$ with
$$L'(t_j) = \frac{A_{\iota 1}}{2\sqrt{C_{\iota 1} + A_{\iota 1} t_j}} - \frac{A_{\iota 2}}{2\sqrt{C_{\iota 2} - A_{\iota 2} t_j}}.$$
Therefore, every maximiser $t_j$ within $]r_\iota, r_{\iota+1}[$ fulfills
$$A_{\iota 1}^2\,(C_{\iota 2} - A_{\iota 2}\, t_j) = A_{\iota 2}^2\,(C_{\iota 1} + A_{\iota 1}\, t_j).$$
We conclude that every solution to the coordinate-wise maximisation problem (11) is contained in the finite set consisting of the breakpoints $r_0, \dots, r_l$ and these interior stationary points, and we can compare function values of $L$ over this set to find the maximiser.
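The resulting one-dimensional maximisation over a single interval of constant $p$ admits a closed-form interior candidate, which can be sketched as follows (our own illustration; the constants correspond to intercepts and slopes of the piecewise linear integrals):

```python
import numpy as np

def maximise_edge(C1, A1, C2, A2, lo, hi):
    """Maximise L(t) = sqrt(C1 + A1*t) + sqrt(C2 - A2*t) on [lo, hi]; any
    interior stationary point solves A1^2*(C2 - A2*t) = A2^2*(C1 + A1*t),
    otherwise the maximum lies on the boundary."""
    cand = [lo, hi]
    if A1 > 0 and A2 > 0:
        t_star = (A1**2 * C2 - A2**2 * C1) / (A1 * A2 * (A1 + A2))
        if lo < t_star < hi:
            cand.append(t_star)
    return max(cand, key=lambda t: np.sqrt(C1 + A1 * t) + np.sqrt(C2 - A2 * t))

# L(t) = sqrt(t) + sqrt(1 - t) is maximal at the stationary point t = 1/2
t_opt = maximise_edge(0.0, 1.0, 1.0, 1.0, 0.0, 1.0)
```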
We prove this main result in three steps. First, we show that the accumulation point $t^* = (t^*_1, \dots, t^*_{m-1})$ is a maximiser of $\Phi$ restricted to coordinate directions. Then we conclude that $\Phi$ is semi-differentiable at $t^*$ in every direction $u \in \mathbb{R}^{m-1}$. Lastly, we use Lemma A.1 below, which establishes local concavity of the loss function, to see that $t^*$ is a local maximum of $\Phi$.
Since $t^*$ is an accumulation point, there is a subsequence $(t^{(\iota_k)})_{k\in\mathbb{N}}$ with $\lim_{k\to\infty} t^{(\iota_k)} = t^*$. Denote by $\Phi^{(k)}_{\text{odd}}$ and $\Phi^{(k)}_{\text{even}}$ the restrictions of $\Phi$ at the current sequence value with either the odd or the even coordinate entries fixed. $\Phi$ is continuous, hence we have point-wise limits $\Phi^*_{\text{odd}}$ and $\Phi^*_{\text{even}}$, the restrictions to odd and even coordinate directions at the accumulation point $t^*$. Since at each step we either update all odd or all even entries, $\Phi^{(k)}_{\text{odd}}$ and $\Phi^{(k)}_{\text{even}}$ attain their maximum at either the current or the next sequence value for all $k \in \mathbb{N}$. Thus, $\Phi^*_{\text{odd}}$ and $\Phi^*_{\text{even}}$ are bounded as well: $\|\Phi^*_{\text{odd}}\|_\infty \le \Phi(t^*)$. We can conclude this as coordinate-wise maximisation produces a monotonically increasing sequence $\Phi(t^{(\iota+1)}) \ge \Phi(t^{(\iota)})$ for all $\iota \in \mathbb{N}$, and the subsequence $\Phi(t^{(\iota_k)})$ converges to $\Phi(t^*)$ due to $\Phi$ being continuous, which implies the whole sequence converges. Analogously we have $\|\Phi^*_{\text{even}}\|_\infty \le \Phi(t^*)$, hence $t^*$ is a maximiser of $\Phi$ restricted to any coordinate direction (i.e. $t^*_j$ maximises $\Phi(t^*_1, \dots, t_j, \dots, t^*_{m-1})$ over $t_j$ for all $j = 1, \dots, m-1$).
To show that this implies that $\Phi$ is partially semi-differentiable at $t^*$, first note that $\Phi$ is partially semi-differentiable at every point $t = (t_1, \dots, t_{m-1})$ with $(s_{j+1} - s_j)\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt > 0$ for all $j = 1, \dots, m-1$, since the square-root function is differentiable for strictly positive values and $\int_{t_j}^{t_{j+1}} \langle p(t), q_j\rangle_+^2\,dt$ is semi-differentiable in its boundaries $t_j$ and $t_{j+1}$. We show that $\Phi$ is still partially semi-differentiable at $t^*$ in direction $t_j$ if this term vanishes; a similar argument shows semi-differentiability in direction $t_{j+1}$. Let $L$ be the relevant part of the loss function $\Phi$ in direction $t_j$.
We need to show that both the left- and the right-sided derivative of $L$ at $t^*_j$ exist. If $L(t^*_j) = 0$, then $L(t_j) \le L(t^*_j) = 0$ for all $t_j$ in the domain of $L$, since $t^*_j$ is a maximiser (in $t_j$ coordinate direction) and $L$ is non-negative. Therefore $L \equiv 0$, which means $L$ is differentiable on its whole domain.
If $L(t^*_j) > 0$ but one of the terms under a square root vanishes at $t^*_j$, consider $H(t_j) = \int_{t_j}^{t_{j+1}} \langle p(t), q_j \rangle_+^2 \, dt$, which is piecewise linear, non-negative and monotonically decreasing in $t_j$. Since it attains $0$ at $t^*_j$, it is also $0$ in a right neighbourhood of $t^*_j$. If $H$ were strictly positive in a neighbourhood left of $t^*_j$, the left-sided derivative of $\sqrt{H}$ would tend to $-\infty$ at $t^*_j$, as $H(t^*_j) = 0$ and the derivative of the square root tends to $\infty$ for values tending linearly to $0$. But $\frac{\partial_-}{\partial t_j} \sqrt{H}(t^*_j) = -\infty$ would imply $\frac{\partial_-}{\partial t_j} L(t^*_j) = -\infty$, which contradicts $t^*_j$ being a maximiser. Hence $H \equiv 0$ also in a left neighbourhood of $t^*_j$, and the corresponding summand of $L$ is semi-differentiable at $t^*_j$.
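The behaviour of the square root at a linear zero, which drives the contradiction above, can be made explicit in a one-line computation (with a generic slope $c > 0$ standing in for the slope of the piecewise linear $H$):

```latex
H(t_j) = c\,(t^*_j - t_j)_+ \quad \Longrightarrow \quad
\frac{\partial_-}{\partial t_j} \sqrt{H}(t^*_j)
= \lim_{h \searrow 0} \frac{\sqrt{H(t^*_j)} - \sqrt{H(t^*_j - h)}}{h}
= \lim_{h \searrow 0} -\sqrt{\frac{c}{h}} = -\infty .
```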
Taking all these cases into account, we conclude that $\Phi$ is partially semi-differentiable at the accumulation point $t^*$ produced by coordinate-wise maximisation. Since we already know that $t^*_j$ is the coordinate-wise maximiser of $\Phi$ for all $j = 1, \ldots, m-1$, the left-sided partial derivatives need to be non-negative and the right-sided partial derivatives non-positive.
To show that this implies that $t^*$ is a local maximiser, consider sets $U = \big( \times_{j=1}^{m-1} U_j \big) \cap \{0 \le t_1 \le \dots \le t_{m-1} \le 1\}$ such that $t^* \in U$ and $p$ is constant on the interior of the interval $U_j \neq \emptyset$ for all $j = 1, \ldots, m-1$.
For $u \in U$ consider the path $\alpha(s) = t^* + s(u - t^*)$, $s \in [0, 1]$. Considering the limit $s \searrow 0$ yields
$$\lim_{s \searrow 0} \frac{\partial}{\partial s} \Phi(\alpha(s)) = \sum_{j=1}^{m-1} (u_j - t^*_j) \lim_{s \searrow 0} \frac{\partial \Phi}{\partial t_j}(\alpha(s)).$$
Hence the right-sided partial derivative will be attained if $u_j - t^*_j$ is positive and the left-sided if $u_j - t^*_j$ is negative. This implies $(u_j - t^*_j) \lim_{s \searrow 0} \frac{\partial \Phi}{\partial t_j}(\alpha(s)) \le 0$ for all $j = 1, \ldots, m-1$ and therefore $\lim_{s \searrow 0} \frac{\partial}{\partial s} \Phi(\alpha(s)) \le 0$. But since $U$ is a convex set, $\Phi$ is concave on the interior of $U$ (see Lemma A.1) and therefore on $U$, as it is continuous, and we compute
$$\Phi(u) = \Phi(\alpha(1)) \le \Phi(\alpha(s)) + (1 - s) \, \frac{\partial}{\partial s} \Phi(\alpha(s)) \xrightarrow{s \searrow 0} \Phi(t^*) + \lim_{s \searrow 0} \frac{\partial}{\partial s} \Phi(\alpha(s)) \le \Phi(t^*).$$
Thus, $t^*$ is a maximum of $\Phi|_U$. This means it is a maximum on the union of such $U$'s, whose interior is a relatively open neighbourhood of $t^*$ with respect to the relative topology on $\{0 \le t_1 \le \dots \le t_{m-1} \le 1\}$. Hence $t^*$ is a local maximiser of $\Phi$.

Lemma A.1. Let $\Phi$ be the loss function defined in Equation (4), $p$ piecewise constant and $U \subset \mathbb{R}^{m-1}$ a convex set such that $p(t_j)$ is constant for all $j = 1, \ldots, m-1$ and all $(t_1, \ldots, t_{m-1}) \in U$. Then $\Phi|_U$ is concave.
Proof. Note that $\Phi$ is twice continuously differentiable on the interior $\mathring{U}$ of $U$. We show that all second directional derivatives $\partial^2_{uu} \Phi$ are non-positive. This implies that the Hessian $H$ is negative semi-definite, since $u^T H u = \partial^2_{uu} \Phi$ for all $u \in \mathbb{R}^{m-1}$. Hence $\Phi|_U$ is concave. To show that the second derivative at $t = (t_1, \ldots, t_{m-1}) \in \mathring{U}$ is non-positive in any direction, let $\alpha \in \mathbb{R}$ and $u = (u_1, \ldots, u_{m-1}) \in \mathbb{R}^{m-1}$, and define
$$Q_j(\alpha) = \int_{t_j + \alpha u_j}^{t_{j+1} + \alpha u_{j+1}} \langle p(t), q_j \rangle_+^2 \, dt.$$
$Q_j$ is linear around $\alpha = 0$ and therefore differentiable with constant derivative $Q'_j(\alpha) =: c_j \in \mathbb{R}$. If $Q_j(0) \neq 0$ for all $j$, we compute the directional derivative of the loss function as
$$\frac{\partial}{\partial \alpha} \Phi(t + \alpha u) = \sum_j \frac{\sqrt{s_{j+1} - s_j} \, c_j}{2 \sqrt{Q_j(\alpha)}},$$
and the second derivative becomes
$$\frac{\partial^2}{\partial \alpha^2} \Phi(t + \alpha u) = -\sum_j \frac{\sqrt{s_{j+1} - s_j} \, c_j^2}{4 \, Q_j(\alpha)^{3/2}} \le 0.$$
If $Q_j(0) = 0$ for some $j$, we have in particular $\langle p(t_j), q_j \rangle_+ = 0$ and $\langle p(t_{j+1}), q_j \rangle_+ = 0$, which means $Q_j$ is zero in a neighbourhood of $\alpha = 0$. Hence the corresponding summand $\sqrt{(s_{j+1} - s_j) \, Q_j(\alpha)}$ vanishes in a neighbourhood of $\alpha = 0$ and does not contribute to the sum.
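The mechanism of Lemma A.1 can be sanity-checked numerically on a toy function of the same structure, a sum of square roots of non-negative affine pieces (the coefficients below are arbitrary stand-ins, not quantities from the paper):

```python
import math
import random

# Toy check: a sum of square roots of non-negative affine functions
# (the form Phi takes on U, with affine Q_j) is concave on [0, 1].
def f(t):
    # hypothetical affine pieces Q_j(t) = c_j + d_j * t, non-negative on [0, 1]
    pieces = [(1.0, 0.5), (2.0, -1.0), (0.3, 0.2)]
    return sum(math.sqrt(c + d * t) for c, d in pieces)

random.seed(0)
for _ in range(1000):
    a = random.uniform(0, 1)
    b = random.uniform(0, 1)
    lam = random.random()
    mid = lam * a + (1 - lam) * b
    # midpoint concavity: f(lam*a + (1-lam)*b) >= lam*f(a) + (1-lam)*f(b)
    assert f(mid) >= lam * f(a) + (1 - lam) * f(b) - 1e-12
```

This is exactly the sign argument in the proof: each summand $\sqrt{Q_j}$ is a concave transformation of an affine function, and sums of concave functions are concave.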

A.5 Optimal warpings and Fréchet means are not unique
We give an example illustrating that both the optimal warping function minimising the elastic distance and the Fréchet mean for a set of curves with respect to the elastic distance are not necessarily unique. Consider two piecewise linear curves $\beta_1$ and $\beta_2$ with respective piecewise constant SRV curves $p$ and $q$; the curves $\beta_1$ and $\beta_2$ are displayed in Figure 12 on the left. The objective function $\Phi$, which needs to be maximised in order to find the optimal warping of the second curve to the first, depends only on one parameter $t_1$. Computing $\Phi$ shows that it has two maximisers, $t_1 = 0.25$ and $t_1 = 0.75$, which correspond to two different optimal warping functions $\gamma_1$ and $\gamma_2$ of $\beta_2$ to $\beta_1$. For $t_1 = 0.25$ we obtain $\dot\gamma_1$ according to (10) in Appendix A.1. Both $\dot\gamma_1(t)$ for $t_1 = 0.25$ and analogously $\dot\gamma_2(t)$ for $t_1 = 0.75$ are piecewise constant, where the constant values are given as $c_1 = \frac{2 \cdot 14^2}{14^2 + 9^2}$ and $c_2 = \frac{2 \cdot 9^2}{14^2 + 9^2}$. Here, the form of the derivative of the second optimal warping function $\gamma_2$ of the second curve to the first is due to the symmetry of this particular problem. Thus, both SRV curves $(q \circ \gamma_1) \sqrt{\dot\gamma_1}$ and $(q \circ \gamma_2) \sqrt{\dot\gamma_2}$ are SRV transformations of optimally aligned curves. This also means that both $L_2$-means of $p$ and the SRV transformations $(q \circ \gamma_i) \sqrt{\dot\gamma_i}$, $i = 1, 2$, of either optimally aligned $\beta_2$ are SRV transformations of Fréchet means of $\beta_1$ and $\beta_2$ (in red and blue in Figure 12).
To see this, let $\bar\beta$ be a curve with SRV transformation $\bar p = \frac{1}{2} p + \frac{1}{2} (q \circ \gamma_i) \sqrt{\dot\gamma_i}$ for $i = 1$ or $i = 2$. For any curve $\beta$ we have
$$d(\beta_1, \beta)^2 + d(\beta_2, \beta)^2 \ge \frac{1}{2} \big( d(\beta_1, \beta) + d(\beta_2, \beta) \big)^2 \ge \frac{1}{2} d(\beta_1, \beta_2)^2,$$
where the first inequality is due to the square being convex and the second due to the triangle inequality. For $\beta = \bar\beta$ we compute for $i = 1, 2$
$$d(\beta_1, \bar\beta)^2 + d(\beta_2, \bar\beta)^2 \le \| \bar p - p \|_{L_2}^2 + \| \bar p - (q \circ \gamma_i) \sqrt{\dot\gamma_i} \|_{L_2}^2 = \frac{1}{2} \| p - (q \circ \gamma_i) \sqrt{\dot\gamma_i} \|_{L_2}^2 = \frac{1}{2} d(\beta_1, \beta_2)^2,$$
which shows that all inequalities have to be equalities and, therefore, the optimal warping aligning $\bar\beta$ is the identity function. This also implies that $\bar\beta$ is optimally aligned to $\beta_1$ and $\beta_2 \circ \gamma_i$, $i = 1, 2$. This shows that each such $\bar\beta$ is a minimiser of the sum of squared distances and therefore a Fréchet mean. Hence, both $\frac{1}{2} p + \frac{1}{2} (q \circ \gamma_i) \sqrt{\dot\gamma_i}$, $i = 1, 2$, are equally valid SRV transformations of Fréchet mean curves.
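The two-point Fréchet mean argument rests on an elementary Hilbert-space fact: the midpoint attains the lower bound $\frac{1}{2} d(\beta_1, \beta_2)^2$ for the sum of squared distances, while no point can fall below it. A minimal numerical check in $\mathbb{R}^3$, as a toy stand-in for the $L_2$ space of SRV transformations:

```python
import random

# Toy check of the Frechet mean lower bound in R^3 (stand-in for L2):
# the midpoint m = (p + q) / 2 attains
#   ||m - p||^2 + ||m - q||^2 = 0.5 * ||p - q||^2,
# and any point x satisfies
#   ||x - p||^2 + ||x - q||^2 >= 0.5 * ||p - q||^2.
def norm2(v):
    return sum(vi * vi for vi in v)

random.seed(1)
for _ in range(200):
    p = [random.gauss(0, 1) for _ in range(3)]
    q = [random.gauss(0, 1) for _ in range(3)]
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    lower = 0.5 * norm2([pi - qi for pi, qi in zip(p, q)])
    attained = (norm2([mi - pi for mi, pi in zip(m, p)])
                + norm2([mi - qi for mi, qi in zip(m, q)]))
    assert abs(attained - lower) < 1e-9  # midpoint attains the bound
    x = [random.gauss(0, 1) for _ in range(3)]
    total = (norm2([xi - pi for xi, pi in zip(x, p)])
             + norm2([xi - qi for xi, qi in zip(x, q)]))
    assert total >= lower - 1e-9         # no point falls below it
```

Non-uniqueness then arises exactly because two distinct optimal alignments $\gamma_1, \gamma_2$ produce two distinct "second endpoints" $(q \circ \gamma_i) \sqrt{\dot\gamma_i}$, each with its own midpoint.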
A.6 Proof of Theorem 2.7

Let $Q = (Q_1, Q_2, \ldots, Q_d)$. Without loss of generality we assume $d = 2$. For $d > 2$, perform a coordinate transformation such that $(Q_1, Q_2)$ has a non-linear image between its knots and consider the first two coordinates.
Note that either $\det \begin{pmatrix} q_{13} & q_{12} \\ q_{23} & q_{22} \end{pmatrix} = q_{13} q_{22} - q_{23} q_{12} \neq 0$ or $\det \begin{pmatrix} q_{13} & q_{11} \\ q_{23} & q_{21} \end{pmatrix} = q_{13} q_{21} - q_{23} q_{11} \neq 0$, because otherwise $(q_{12}, q_{22})^T$ and $(q_{11}, q_{21})^T$ would both be multiples of $(q_{13}, q_{23})^T$, which would mean $Q$ has a linear image on $\gamma(I)$. Thus we need to consider two cases.
The only part left to show is that $f^{-1}$ (which exists if we restrict the co-domain of $f$ to its image) is continuous as well. To prove this, let $(\xi_n)_{n \in \mathbb{N}} \subseteq \Xi$ with $\beta_n = f(\xi_n)$ for all $n \in \mathbb{N}$ and $d(\beta_n, \beta) \to 0$ for $n \to \infty$ with respect to the elastic distance. We have to show $\xi_n \to \xi := f^{-1}(\beta)$ for $n \to \infty$.
Denote by $p_n$ the SRV transformation of $\beta_n$ for all $n \in \mathbb{N}$ and by $q$ the SRV transformation of $\beta$. Then
$$d(\beta_n, \beta) = \inf_{\gamma \in \Gamma} \| p_n - (q \circ \gamma) \sqrt{\dot\gamma} \|_{L_2} \ge \inf_{\gamma \in \Gamma} \big| \| p_n \|_{L_2} - \| (q \circ \gamma) \sqrt{\dot\gamma} \|_{L_2} \big| = \big| \| p_n \|_{L_2} - \| q \|_{L_2} \big|,$$
which shows that $\| p_n \|_{L_2}$ is bounded, as $d(\beta_n, \beta)$ is bounded as a convergent sequence. Since $\xi_n \mapsto \| p_n \|_{L_2}$ induces a norm on $\Xi$, which is a subset of a finite-dimensional vector space, $\xi_n$ is bounded as well, as all norms are equivalent on finite-dimensional vector spaces. Consider an arbitrary subsequence of $(\xi_n)_{n \in \mathbb{N}}$. Since this subsequence is bounded in $(\Xi, \| \cdot \|)$ as well, it contains a convergent subsequence $(\xi_{n_k})_{k \in \mathbb{N}}$. Let $\xi^* := \lim_{k \to \infty} \xi_{n_k}$. Since the embedding $f$ is continuous, we have $f(\xi^*) = \lim_{k \to \infty} f(\xi_{n_k}) = \lim_{k \to \infty} \beta_{n_k} = \beta = f(\xi)$ and therefore $\xi^* = \xi$, as $f$ is injective. Hence, every subsequence of $(\xi_n)_{n \in \mathbb{N}}$ has a further subsequence which converges to $\xi$ with respect to $\| \cdot \|$. Thus, $(\xi_n)_{n \in \mathbb{N}}$ converges to $\xi$ in $(\Xi, \| \cdot \|)$ and $f^{-1}$ is hence continuous.

B Supplementary plots
In this part of the appendix we show plots for the simulations in Section 3.

Figure 14: Comparison of the optimal alignment produced by our method CWO and the one computed with DP. The 40 grey curves are sampled with m points per curve and aligned to the red curve. The first row shows the sampled curves in the moderately sparse setting (m = 30 or m = 60 points per curve for the open or closed curve, respectively). The optimal alignments found by both methods are depicted in the lower rows, with the resulting mean elastic distances given in the headings. To make the alignment visually comparable, the aligned curves are evaluated at the observation grid of the red curve for DP.