Generalization of the Kimeldorf-Wahba correspondence for constrained interpolation

In this paper, we extend the correspondence between Bayes' estimation and optimal interpolation in a Reproducing Kernel Hilbert Space (RKHS) to the case of linear inequality constraints such as boundedness, monotonicity or convexity. In the unconstrained interpolation case, the mean of the posterior distribution of a Gaussian Process (GP) given the interpolation data is known to be the optimal interpolation function minimizing the norm in the RKHS associated with the GP. In the constrained case, we prove that the Maximum A Posteriori (MAP), or mode, of the posterior distribution is the optimal constrained interpolation function in the RKHS. So, the general correspondence is achieved with the MAP estimator and not the mean of the posterior distribution. A numerical example is given to illustrate this last result.

where E denotes expectation. In this case, the usual Bayesian estimator ŷ of y is the mean of the posterior distribution of the GP {Y(x)}_{x∈X} given the data:

ŷ(x) = E[ Y(x) | Y(x^{(1)}) = y_1, …, Y(x^{(n)}) = y_n ].

From [9], we have the following explicit expression for ŷ:

ŷ(x) = k(x)^⊤ K^{-1} y,   (2)

where k(x) = (K(x, x^{(1)}), …, K(x, x^{(n)}))^⊤, K is the matrix (K(x^{(i)}, x^{(j)}))_{1≤i,j≤n} and y = (y_1, …, y_n)^⊤.
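As a concrete illustration, expression (2) can be evaluated in a few lines of NumPy. This is only a sketch: the squared-exponential kernel and the data below are assumptions for the example (any positive definite kernel would do).

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0, theta=0.2):
    """Squared-exponential covariance; an assumed choice for this sketch."""
    return sigma**2 * np.exp(-(x - y)**2 / (2 * theta**2))

def posterior_mean(x, X, y):
    """Unconstrained kriging mean (2): yhat(x) = k(x)^T K^{-1} y."""
    K = gauss_kernel(X[:, None], X[None, :])                 # Gram matrix (K(x^(i), x^(j)))
    k = gauss_kernel(np.atleast_1d(x)[:, None], X[None, :])  # vector k(x)
    return k @ np.linalg.solve(K, y)

# Hypothetical interpolation data.
X = np.array([0.1, 0.4, 0.8])
y = np.array([1.0, -0.5, 2.0])
print(posterior_mean(X, X, y))  # interpolates the data: ≈ [1.0, -0.5, 2.0]
```

Since ŷ interpolates the data, evaluating it at the design points returns the observations themselves.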
On the other hand, it is well known (see [12]) that this estimation function (2) is the unique solution of the following optimization problem:

ŷ = arg min_{h ∈ H ∩ I} ‖h‖_H,

where H is the Reproducing Kernel Hilbert Space (see [1]) associated with the positive definite kernel K defined by (1) and I is the set of interpolating functions:

I = { f ∈ R^X : f(x^{(i)}) = y_i, i = 1, …, n }.

This result will be referred to as the correspondence between Bayes' estimation and optimal interpolation in a RKHS, or Kimeldorf-Wahba correspondence. Now, we suppose that the function y is known to satisfy some properties or constraints such as boundedness, monotonicity or convexity. Formally, let C be a closed convex set of R^X corresponding to such constraints. For instance, for boundedness constraints, C is of the form

C = { f ∈ R^X : ∀x ∈ X, a ≤ f(x) ≤ b }.

If H ∩ C ∩ I ≠ ∅, the following convex optimization problem:

(P)   h_opt = arg min_{h ∈ H ∩ C ∩ I} ‖h‖_H

has a unique solution denoted by h_opt (see e.g. [7] and [11]), which can be seen as the optimal constrained interpolation function associated to the knots x^{(i)}, i = 1, …, n.
In the Bayesian framework, the problem is now to make inference from the conditional distribution of the GP {Y(x)}_{x∈X} given Y ∈ C (prior information) and given the data Y(x^{(i)}) = y_i, i = 1, …, n. This conditional distribution can be thought of as a truncated multivariate normal distribution, but in an infinite-dimensional linear space.
The aim of this paper is to prove that the constrained interpolation function h opt solution of problem (P ) is the mode or Maximum A Posteriori (MAP) of this posterior distribution {Y | Y ∈ C ∩ I}.
The paper is organized as follows: in Section 2, we consider the finite-dimensional case to get insight into the natural correspondence between constrained interpolation functions and Bayes' estimators. Section 3 is devoted to the main result. We approximate the original Gaussian process by a sequence of finite-dimensional Gaussian processes (see e.g. [5], [8] and [10]). The MAP estimator of the finite-dimensional approximation process is well defined. Furthermore, this sequence of MAP estimators is shown to converge to the optimal constrained interpolation function, solution of problem (P). As a consequence, we can interpret h_opt as the most likely function, or mode, of the posterior distribution {Y | Y ∈ C ∩ I}. This result can be seen as a generalization of the Kimeldorf-Wahba correspondence for curve fitting (interpolation case) taking into account linear inequality constraints. This new correspondence is illustrated in Section 4.

The natural correspondence for finite-dimensional Gaussian processes
In this section, we assume that {Y_m(x)}_{x∈X} is a finite-dimensional or degenerate GP in the sense that

Y_m(x) = Σ_{j=1}^m ξ_j φ_j(x),   (4)

where {φ_j, 1 ≤ j ≤ m} is a set of m linearly independent functions in R^X and ξ = (ξ_1, …, ξ_m)^⊤ ∈ R^m is a zero-mean Gaussian vector with covariance matrix Γ_m assumed to be invertible. The covariance function of Y_m can be expressed as

K_m(x, x') = Cov(Y_m(x), Y_m(x')) = φ(x)^⊤ Γ_m φ(x'),   (5)

where φ(x) = (φ_1(x), …, φ_m(x))^⊤. Let

H_m = Vect{φ_j, 1 ≤ j ≤ m}   (6)

be the linear space spanned by the basis functions φ_j and consider on H_m the dot product

(h_1, h_2)_m = c_{h_1}^⊤ Γ_m^{-1} c_{h_2},

where c_{h_i} are the coordinates of h_i with respect to the basis {φ_j}. Since Γ_m φ(x) is the vector of coordinates of K_m(·, x) ∈ H_m (see equation (5)), we have

(h, K_m(·, x))_m = c_h^⊤ Γ_m^{-1} Γ_m φ(x) = h(x).

Hence, (H_m, (·, ·)_m) is the RKHS with reproducing kernel K_m. In the following proposition, we denote by ‖·‖_{H_m} the associated norm.

Proposition 1. Let {Y_m(x)}_{x∈X} be a process of the form (4) and H_m defined by (6) be the RKHS associated with the kernel function K_m given in (5). Let us assume that C is a closed convex subset of R^X (for the pointwise topology) and H_m ∩ C ∩ I ≠ ∅. Then, the MAP estimator ŷ_m, defined as the mode of the posterior distribution of {Y_m | Y_m ∈ C ∩ I}, is well defined and is equal to the constrained interpolation function h_opt,m, solution of

arg min_{h ∈ H_m ∩ C ∩ I} ‖h‖_{H_m}.

Proof. Remark that the sample paths of Y_m are in H_m by definition. Hence, it makes sense to define the density of Y_m with respect to the uniform reference measure λ_m on H_m (m-dimensional volume or Lebesgue measure). This density is defined up to a multiplicative constant; to give it an explicit expression, we consider the following linear isomorphism:

T : R^m → H_m,   c ↦ Σ_{j=1}^m c_j φ_j.

We can define the measure λ_m on H_m as the image measure under T of the Lebesgue measure on R^m. To calculate the probability density function (pdf) of Y_m, write Y_m = T(ξ). Using the fact that ξ is a zero-mean Gaussian vector N(0, Γ_m), and by the transfer formula, the (unconstrained) density of Y_m with respect to λ_m is the function

f_m(h) = (2π)^{-m/2} det(Γ_m)^{-1/2} exp( −(1/2) c_h^⊤ Γ_m^{-1} c_h ) = (2π)^{-m/2} det(Γ_m)^{-1/2} exp( −(1/2) ‖h‖²_{H_m} ).

Let us now introduce the inequality constraints described by the convex set C.
In the Bayesian framework, the prior is the following truncated pdf (with respect to λ_m):

f_m^C(h) ∝ exp( −(1/2) ‖h‖²_{H_m} ) 1_{h ∈ C}.

If H_m ∩ C ∩ I is nonempty, the posterior likelihood L_pos, defined as the pdf of Y_m given the interpolation data, is given by

L_pos(h) ∝ exp( −(1/2) ‖h‖²_{H_m} ) 1_{h ∈ C ∩ I}.

Maximizing L_pos thus amounts to minimizing ‖h‖_{H_m} over H_m ∩ C ∩ I, so that ŷ_m = h_opt,m, which proves the proposition.
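Proposition 1 turns the MAP computation into a finite-dimensional convex program: in the coordinates c of h = Σ_j c_j φ_j, one minimizes c^⊤ Γ_m^{-1} c over the coordinate vectors compatible with the data and the constraints. A minimal sketch for a hypothetical two-dimensional example (Γ_m = I, one interpolation condition with φ_1(x^{(1)}) = φ_2(x^{(1)}) = 1, and positivity constraints):

```python
import numpy as np
from scipy.optimize import minimize

# Coordinates c of h = c1*phi_1 + c2*phi_2; here Gamma_m = I (toy assumption),
# so the squared RKHS norm of h is simply ||c||^2.
Gamma_inv = np.eye(2)

def sq_norm(c):
    return c @ Gamma_inv @ c

# Interpolation condition h(x^(1)) = 1, i.e. c1 + c2 = 1, plus c >= 0.
res = minimize(
    sq_norm,
    x0=np.array([1.0, 0.0]),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda c: c[0] + c[1] - 1.0}],
    bounds=[(0.0, None), (0.0, None)],
)
print(res.x)  # → approximately [0.5, 0.5]
```

The minimizer (0.5, 0.5) is at once the mode of the truncated Gaussian and the smallest-norm feasible coordinate vector, as the proposition asserts.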

The main result
In a Bayesian statistical framework, the prior is the probability distribution of a zero-mean GP {Y(x)}_{x∈X} with covariance function K defined by (1) and assumed to be positive definite. We suppose that the realizations of Y are in the Banach space E = C^0(X), the set of continuous functions defined on a compact set X. For the sake of simplicity, we suppose that X = [0, 1].
The results presented in this paper can be generalized to the multi-dimensional case. Let H be the RKHS associated with the positive definite function K. Then, H is a Hilbertian subspace of E: for all f ∈ H and x ∈ X,

|f(x)| = |(f, K(·, x))_H| ≤ K(x, x)^{1/2} ‖f‖_H ≤ c ‖f‖_H,

where c = sup_{x∈X} K(x, x)^{1/2} < +∞ by continuity of the kernel function K. Here, we suppose that we also have a priori information such as boundedness, monotonicity or convexity constraints. Assume that these properties are mathematically described by the set C, where C is a closed convex subset of R^X as in Section 2 (a fortiori, C ∩ E is also a closed convex subset of E). Finally, let I be the set of data-interpolating functions

I = { f ∈ R^X : f(x^{(i)}) = y_i, i = 1, …, n }.

Our aim is to make inference from the posterior distribution of the Gaussian process Y, so we need to handle the conditional distribution

{ Y | Y ∈ C ∩ I }.

Approximation of the Gaussian process Y
Keeping in mind Section 2, we approximate the GP Y by the following finite-dimensional Gaussian process:

Y^N(x) = Σ_{j=0}^N Y(t_j) ϕ_{N,j}(x),   (8)

where ∆_N = {t_0, …, t_N} is a subdivision of [0, 1], ϕ_{N,j} is the continuous piecewise linear (hat) function associated with the knot t_j, and (Y(t_0), …, Y(t_N))^⊤ is a zero-mean Gaussian vector. By continuity of the sample paths of Y and continuous piecewise linear approximation in the Banach space E = C([0, 1]), Y^N converges uniformly to the original GP Y as N tends to infinity, with probability one.
To simplify the proof of the main result (see Theorem 2 below), block matrix structures will be used. To obtain this structure, we rename the knots of the subdivision ∆_N = {t_0, …, t_N} so that the subdivisions are nested:

∆_{N+1} = ∆_N ∪ {t_{N+1}},   (9)

i.e. each refinement appends one knot, which receives the last index. The finite-dimensional approximation (8) can then be rewritten as

Y^N(x) = Σ_{j=0}^N Y(t_j) ϕ_{N,j}(x),

where ϕ_{N,j} is the hat function associated to the knot t_j.
From Section 2, Y^N is a finite-dimensional GP with covariance function

K_N(x, x') = Σ_{i=0}^N Σ_{j=0}^N ϕ_{N,i}(x) K(t_i, t_j) ϕ_{N,j}(x'),

i.e. of the form (5) with basis functions ϕ_{N,j} and covariance matrix Γ_N = (K(t_i, t_j))_{0≤i,j≤N}. Now, we can compute the posterior likelihood function and the mode (or MAP) estimator ŷ_N as a function defined on X.
Proposition 2. Assume that H_N ∩ C ∩ I ≠ ∅, where H_N = Vect{ϕ_{N,j}, 0 ≤ j ≤ N}. Then the problem

(P_N)   arg min_{h ∈ H_N ∩ C ∩ I} ‖h‖_{H_N}

has a unique solution denoted by h_opt,N. Additionally, the posterior likelihood function of Y^N, incorporating the inequality constraints and given the data, is of the form

L^N_pos(h) = k_N exp( −(1/2) ‖h‖²_{H_N} ) 1_{h ∈ C ∩ I},   (10)

where k_N is a normalizing constant. Then, the MAP estimator ŷ_N of the posterior distribution (10) is the solution h_opt,N of the problem (P_N).
Proof. It is a consequence of Proposition 1 of Section 2.
According to the uniform convergence of Y^N to Y, it is natural to define the MAP estimator ŷ of the Gaussian process Y as the limit, if it exists, of the MAP estimators ŷ_N of Y^N as N tends to infinity.
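Under assumption (H0) that the data points lie on the knots, problem (P_N) is a finite-dimensional quadratic program in the knot values v_j = h(t_j): minimize v^⊤ Γ_N^{-1} v subject to v_i = y_i at the data knots and, for boundedness constraints, a ≤ v_j ≤ b. A sketch with a hypothetical exponential covariance (chosen for numerical stability) and illustrative data — all names and values below are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def kern(x, y, theta=0.3):
    # hypothetical exponential covariance for the illustration
    return np.exp(-np.abs(x - y) / theta)

N = 8
t = np.linspace(0.0, 1.0, N + 1)          # knots of Delta_N
Gamma = kern(t[:, None], t[None, :])      # Gamma_N = (K(t_i, t_j))
Gamma_inv = np.linalg.inv(Gamma)

# Data placed on knots (assumption (H0)); boundedness constraints 0 <= h <= 1.
data_idx, data_val = [1, 4, 7], [0.2, 0.9, 0.1]
lo, hi = 0.0, 1.0

sq_norm = lambda v: v @ Gamma_inv @ v     # ||h||^2_{H_N} in knot values

res = minimize(
    sq_norm,
    x0=np.full(N + 1, 0.5),
    method="SLSQP",
    bounds=[(lo, hi)] * (N + 1),
    constraints=[{"type": "eq", "fun": (lambda v, i=i, yi=yi: v[i] - yi)}
                 for i, yi in zip(data_idx, data_val)],
)
y_hat_N = res.x   # knot values of the MAP estimator / constrained interpolant
print(y_hat_N[data_idx])   # matches the data, up to solver tolerance
```

Letting N grow along nested subdivisions makes ŷ_N converge to h_opt, which is the content of Theorem 2.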

Asymptotic analysis
This subsection is devoted to the main result of the paper. The aim is to prove that the limit ŷ := lim_{N→+∞} ŷ_N of the MAP estimators ŷ_N of Y^N exists in E and is the optimal constrained interpolation function h_opt in H:

h_opt = arg min_{h ∈ H ∩ C ∩ I} ‖h‖_H,

where H is the RKHS associated to the process Y, C is the closed convex set of R^X describing the inequality constraints and I is the set of interpolating functions. To reach this goal, we need to analyze the link between the nested linear subspaces H_N in E and the RKHS H associated with the reproducing kernel K. To do this, we denote by π_N the projection operator from E onto H_N defined by:

π_N(f) = Σ_{j=0}^N f(t_j) ϕ_{N,j}.

Theorem 1. For any f ∈ E, let us define the sequence of real numbers (m_N(f))_{N≥1} by

m_N(f) = f_N^⊤ Γ_N^{-1} f_N,   with f_N = (f(t_0), …, f(t_N))^⊤.

Then, (m_N(f))_{N≥1} is nonnegative and increasing. Furthermore, the RKHS H associated to the covariance function K is characterized by

H = { f ∈ E : lim_{N→+∞} m_N(f) < +∞ }

and, for all f ∈ H,

‖f‖²_H = lim_{N→+∞} m_N(f).

In particular, for f ∈ H and N ≥ 1,

‖π_N(f)‖²_{H_N} = m_N(f) ≤ ‖f‖²_H.   (12)

Proof. As Γ_N is symmetric positive definite, the sequence (m_N(f))_N is nonnegative. The indexing of the knots (see (9)) leads to the following block structure:

Γ_{N+1} = [ Γ_N   a
            a^⊤   α ],

where a = (K(t_0, t_{N+1}), …, K(t_N, t_{N+1}))^⊤ and α = K(t_{N+1}, t_{N+1}).
The monotonicity property of the sequence (m_N(f))_{N≥1} is a consequence of Lemma 1 (see Section 3.3). Thus, lim_{N→+∞} m_N(f) exists in [0, +∞]. When this limit is finite, a weak compactness argument provides a function f_∞ ∈ H which agrees with f at the knots. Hence, for all j, f_∞(t_j) = f(t_j) and f = f_∞ ∈ H by continuity and density of the knots in [0, 1]. This ends the proof of the first part of the characterization. To conclude, let F be defined as F := Vect{K(·, t_j), j ≥ 0}. If g ∈ F^⊥, we have (g, K(·, t_j))_H = g(t_j) = 0, j ≥ 0. Hence, by continuity, g = 0 and F^⊥ = {0}. So, by classical approximation in a Hilbert space, the orthogonal projection f_N of any f ∈ H onto the subspace F_N := Vect{K(·, t_j), j = 0, …, N} satisfies f_N → f in H, with ‖f_N‖²_H = m_N(f), which yields ‖f‖²_H = lim_{N→+∞} m_N(f).

Theorem 2. Assume that

(H1) H ∩ C ∩ I ≠ ∅;
(H2) π_N(C) ⊂ C for all N ≥ 1.

Then the solution h_opt,N of problem (P_N) converges uniformly on X, as N → +∞, to h_opt, the unique solution of problem (P). Furthermore, the MAP estimator ŷ_N, solution of

arg max_{h ∈ H_N ∩ C ∩ I} L^N_pos(h),

where L^N_pos(h) is defined in (10), coincides with h_opt,N and we also have

ŷ := lim_{N→+∞} ŷ_N = h_opt.

Proof. To avoid some technical difficulties, we suppose that the data points belong to ∆_N for N large enough:

(H0)   {x^{(1)}, …, x^{(n)}} ⊂ ∆_N.

The proof without this last assumption can be found in [2] and [4]. Let g ∈ H ∩ C ∩ I; then π_N(g) ∈ H_N. As π_N(C) ⊂ C, π_N(g) ∈ C, and π_N(g) ∈ I due to (H0). So, H_N ∩ C ∩ I is a nonempty closed convex subset of H_N. Therefore, (P_N) has a unique solution h_opt,N. Write

h_opt,N − h_opt = (h_opt,N − π_N(h_opt)) + (π_N(h_opt) − h_opt).   (15)

We know from approximation theory in the Banach space E = C^0(X) that

‖π_N(h_opt) − h_opt‖_∞ → 0 as N → +∞.

According to Lemma 2 of Section 3.3, uniform convergence is controlled by convergence in the RKHS norms. As h_opt,N is the orthogonal projection of 0 onto the convex set H_N ∩ C ∩ I in the Hilbert space H_N and π_N(h_opt) ∈ H_N ∩ C ∩ I, we have

‖h_opt,N‖_{H_N} ≤ ‖π_N(h_opt)‖_{H_N}.   (16)

From (15), it is sufficient to prove

‖h_opt,N − π_N(h_opt)‖_∞ → 0 as N → +∞.

As π_N(h_opt) ∈ H_N ∩ C ∩ I and by (12),

‖h_opt,N‖_{H_N} ≤ ‖π_N(h_opt)‖_{H_N} ≤ ‖h_opt‖_H,

so the norms ‖h_opt,N‖_{H_N} are uniformly bounded. Let h̃_N ∈ H be the interpolant, in the subspace Vect{K(·, t_j), j = 0, …, N}, of the values of h_opt,N at the knots. It can be expressed as

h̃_N = c_{h_opt,N}^⊤ Γ_N^{-1} (K(·, t_0), …, K(·, t_N))^⊤.

Then, we get ‖h̃_N‖²_H = c_{h_opt,N}^⊤ Γ_N^{-1} c_{h_opt,N} = ‖h_opt,N‖²_{H_N}. By (16), (‖h̃_N‖_H)_N is a bounded sequence in H. By weak compactness, there exists a subsequence (h̃_{N_k}) such that

h̃_{N_k} ⇀ h_∞ weakly in H.   (17)

Let us prove that h_∞ ∈ C ∩ I. For fixed j and for k large enough, t_j ∈ ∆_{N_k} and h̃_{N_k}(t_j) = h_opt,N_k(t_j); since weak convergence in a RKHS implies pointwise convergence, h_∞(t_j) is a limit of values satisfying the constraints. Hence h_∞ ∈ C by closedness of C and density of the knots, and h_∞ ∈ I due to (H0). From property (17), the equality ‖h̃_N‖_H = ‖h_opt,N‖_{H_N} and inequality (16), we have

‖h_∞‖_H ≤ lim inf_k ‖h̃_{N_k}‖_H ≤ ‖h_opt‖_H.

As h_∞ ∈ H ∩ C ∩ I and h_opt is the unique minimizer of the norm on this set, h_∞ = h_opt and ‖h̃_{N_k}‖_H → ‖h_opt‖_H. Since norm convergence and weak convergence (see (17)) imply strong convergence, we have h̃_{N_k} → h_opt in H, hence uniformly on X, and by uniqueness of the limit the whole sequence (h̃_N) converges to h_opt. Finally, h_opt,N = π_N(h̃_N); since π_N is a contraction for the sup norm and π_N(h_opt) → h_opt uniformly, h_opt,N → h_opt uniformly on X. The second part is a consequence of Proposition 2.
Comments. Remark that assumption (H1) is not restrictive and that assumption (H2) is ensured for the applications considered in this paper (boundedness, monotonicity or convexity constraints). For instance, if f is a non-decreasing function on [0, 1], then the piecewise linear interpolation π_N(f) is also non-decreasing for any N. For a general convex set C, the sequence of approximations (π_N(f))_N must be adapted to satisfy assumption (H2). Now, the constrained optimization problem (P) has a nice probabilistic interpretation as a Bayesian estimator of a function y ∈ C^0(X). The function h_opt = ŷ := lim_N ŷ_N can be thought of as the most likely function in the subset C of constrained functions h satisfying h(x^{(i)}) = y_i, i = 1, …, n. Theorem 2 proves that this estimator ŷ is independent of the choice of the subdivision {t_j} and is a smooth function, since ŷ = h_opt is the solution of a constrained interpolation problem in a RKHS.
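The remark that π_N preserves monotonicity (assumption (H2) in the monotone case) can be checked directly: the piecewise linear interpolant of a non-decreasing function at any set of knots is again non-decreasing. A minimal check, with a hypothetical test function:

```python
import numpy as np

f = lambda x: x**3                      # a non-decreasing function on [0, 1]
knots = np.linspace(0.0, 1.0, 11)       # knots of Delta_N

# pi_N(f): continuous piecewise linear interpolation at the knots.
x_fine = np.linspace(0.0, 1.0, 1001)
pi_N_f = np.interp(x_fine, knots, f(knots))

# The interpolant inherits monotonicity, so pi_N(C) ⊂ C for this C.
assert np.all(np.diff(pi_N_f) >= -1e-12)
```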

Technical lemmas
Lemma 1. Let

B = [ A    a
      a^⊤  α ]

be a real block matrix, where A is an N × N matrix, a is an N × 1 vector and α ∈ R. Assume that B is symmetric positive definite. Let y = (x^⊤, y_{N+1})^⊤, where x is an N × 1 vector and y_{N+1} ∈ R. Then,

y^⊤ B^{-1} y ≥ x^⊤ A^{-1} x.

Proof. Let (u^⊤, v_{N+1})^⊤ := B^{-1} y. By block matrix multiplication, we have x = Au + v_{N+1} a and y_{N+1} = a^⊤ u + α v_{N+1}. Comparing the expressions y^⊤ B^{-1} y and x^⊤ A^{-1} x, we only need to prove the inequality α ≥ a^⊤ A^{-1} a. For this, consider the block vector z = (−(A^{-1} a)^⊤, 1)^⊤: since B is positive definite,

0 < z^⊤ B z = α − a^⊤ A^{-1} a,

which proves the inequality.

Lemma 2. For all N ≥ 1 and all h ∈ H_N,

‖h‖_∞ ≤ c_N ‖h‖_{H_N},   with c_N = sup_{x∈X} K_N(x, x)^{1/2}.

Proof. For x ∈ X, we have

|h(x)| = |(h, K_N(·, x))_{H_N}| ≤ ‖h‖_{H_N} K_N(x, x)^{1/2} ≤ c_N ‖h‖_{H_N},

which completes the proof of the lemma.
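Lemma 1, together with the Schur-complement identity underlying its proof, can be verified numerically on a random symmetric positive definite matrix; the sizes and the seed below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive definite B with block structure [[A, a], [a^T, alpha]].
M = rng.standard_normal((6, 6))
B = M @ M.T + 6.0 * np.eye(6)
A, a, alpha = B[:5, :5], B[:5, 5], B[5, 5]

y = rng.standard_normal(6)
x = y[:5]

lhs = y @ np.linalg.solve(B, y)          # y^T B^{-1} y
rhs = x @ np.linalg.solve(A, x)          # x^T A^{-1} x

# The gap is a nonnegative square divided by the (positive) Schur complement:
# y^T B^{-1} y - x^T A^{-1} x = (y_{N+1} - a^T A^{-1} x)^2 / (alpha - a^T A^{-1} a).
schur = alpha - a @ np.linalg.solve(A, a)
gap = (y[5] - a @ np.linalg.solve(A, x))**2 / schur
assert lhs >= rhs - 1e-9 and abs((lhs - rhs) - gap) < 1e-8
```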

Numerical illustration
The aim of this section is to illustrate the correspondence established in the previous sections between the MAP estimator and the constrained interpolation function, solution of problem (P). We are interested in the case where the real function f satisfies boundedness constraints. Thus, the convex set C is equal to

C = { f ∈ R^X : ∀x ∈ X, a ≤ f(x) ≤ b }.

In both Figures 1a and 1b, the Gaussian covariance function is used, defined as

K(x, x') = σ² exp( −(x − x')² / (2θ²) ),

where the hyper-parameters (σ, θ) are fixed to (25, 0.2). In Figure 1a, we choose N = 50 and generate 100 sample paths of the finite-dimensional approximation of the Gaussian process (8), conditionally on the interpolation conditions and the boundedness constraints, where the lower and upper bounds are respectively a = −25 and b = 20 (the R package 'constrKriging' is used in the simulation; see [6] for more details). Notice that the sample paths of the conditional Gaussian process (gray solid lines) respect the boundedness constraints on the entire domain, unlike the unconstrained mean (2). In Figure 1b, we relax the boundedness constraints just enough that the unconstrained mean respects them. In that case, the unconstrained mean coincides with the MAP estimator, but not with the mean of the simulations (i.e. the posterior mean). Hence, in the constrained case, the mean of the posterior distribution does not correspond to the optimal interpolation function.
In Figure 2, we relax the boundedness constraints further, so that they no longer have an impact on the model. In that case, the unconstrained mean, the mean and the maximum of the posterior distribution coincide, as expected.

Conclusion
In this paper, the correspondence between two approaches to solve an interpolation problem in the case of linear inequality constraints is established. On the one hand, a deterministic approach leads to a constrained optimization problem under both interpolation conditions and inequality constraints in a Hilbert space. On the other hand, a probabilistic approach considers an estimation problem in a Bayesian framework. In the case of a finite-dimensional Gaussian process, the correspondence between the MAP estimator (maximum of the posterior distribution) and the constrained interpolation function is proved. In the infinite-dimensional case, the correspondence is obtained by finite-dimensional approximation and convergence of the MAP estimators to the constrained interpolation function. This result can be seen as a generalization of the correspondence established by Kimeldorf and Wahba in [3] between Bayesian estimation on stochastic processes and curve fitting.