C^{1,ω} extension formulas for 1-jets on Hilbert spaces


We provide necessary and sufficient conditions for a 1-jet (f, G) : E → R × X to admit an extension (F, ∇F) for some F ∈ C^{1,ω}(X). Here E stands for an arbitrary subset of a Hilbert space X and ω is a modulus of continuity. As a corollary, in the particular case X = R^n, we obtain an extension (nonlinear) operator whose norm does not depend on the dimension n. Furthermore, we construct extensions (F, ∇F) in such a way that: (1) the (nonlinear) operator (f, G) → (F, ∇F) is bounded with respect to a natural seminorm arising from the constants in the given condition for extension (and the bounds we obtain are almost sharp); (2) F is given by an explicit formula; (3) (F, ∇F) depends continuously on the given data (f, G); (4) if f is bounded (resp. if G is bounded) then so is F (resp. F is Lipschitz). We also provide similar results on superreflexive Banach spaces.

Introduction and main results
Throughout this paper we will assume that ω : [0, +∞) → [0, +∞) is a concave and increasing function such that ω(0) = 0 and lim_{t→+∞} ω(t) = +∞. Also, we will denote

ϕ(t) = ∫_0^t ω(s) ds    (1.1)

for every t ≥ 0, and if X is a Banach space then C^{1,ω}(X) will stand for the set of all functions g : X → R which are Fréchet differentiable and such that Dg : X → X^* is uniformly continuous with modulus of continuity ω; that is to say, there exists some constant C > 0 such that

‖Dg(x) − Dg(y)‖_* ≤ C ω(‖x − y‖)

for all x, y ∈ X. Here ‖ · ‖_* denotes the usual norm of the dual space X^*, defined by ‖ξ‖_* = sup{ξ(x) : x ∈ X, ‖x‖ ≤ 1} for every ξ ∈ X^*. If E is a subset of R^n and we are given functions f : E → R, G : E → R^n, Glaeser's C^{1,ω} version of the classical Whitney extension theorem (see [43,22]) tells us that there exists a function F ∈ C^{1,ω}(R^n) with (F, ∇F) = (f, G) on E, provided that the inequalities (1.2) hold for all x, y ∈ E. We can trivially extend (f, G) to the closure Ē of E so that the inequalities (1.2) hold on Ē with the same constant M, and the function F can be explicitly defined by the Whitney Extension Operator (1.3). We will write A(f, G; E) for the quantity defined in (1.6), shortened to A(f, G) whenever the subset E is understood. In particular, for a differentiable function F : X → R, we let A(F, ∇F) stand for A(F, ∇F; X). As we said, if we construct such an F by means of the Whitney Extension Operator (1.3), then we necessarily have lim_{n→∞} k(n) = ∞ for all possible choices of k(n). Nevertheless, in the case ω(t) = t (which gives rise to the important class of C^{1,1} functions), J.C. Wells [42] and other authors [30,10,4] showed, by very different means, that the C^{1,1} version of the Whitney extension theorem holds true if we replace R^n with any Hilbert space and, moreover, there is a (nonlinear) extension operator (f, G) → (F, ∇F) which is minimal, in the following sense.
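For the reader's convenience, the compatibility condition (1.2) can be recalled in its standard Whitney–Glaeser form; this is the classical statement, and the paper's normalization of the constant M may differ slightly:

```latex
% Classical Whitney–Glaeser condition for a 1-jet (f, G) on E \subset \mathbb{R}^n:
|f(x) - f(y) - \langle G(y),\, x - y\rangle| \le M\,|x - y|\,\omega(|x - y|)
\quad\text{and}\quad
|G(x) - G(y)| \le M\,\omega(|x - y|)
\qquad\text{for all } x, y \in E.
```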
Given a Hilbert space X, with norm denoted by | · |, a subset E of X, and functions f : E → R, G : E → X, a necessary and sufficient condition for the 1-jet (f, G) to have a C^{1,1} extension (F, ∇F) to the whole space X is condition (1.7). Moreover, the extension (F, ∇F) can be taken with best Lipschitz constants, in the sense that Lip(∇F) equals the C^{1,1} trace seminorm of the jet (f, G) on E. In particular, considering X = R^n we deduce the remarkable corollary that in the case ω(t) = t one can take k(n) = 1 for all n in Theorem 1.1. Let us point out that condition (1.7) appears in Le Gruyer's paper [30]. Wells' theorem was stated and proved in [42] with the following equivalent condition: there exists a number M > 0 such that the inequality (W^{1,1}) holds for all y, z ∈ E. That this condition is equivalent to (1.7) can be easily checked as follows: for each M > 0 consider the associated quadratic function V_M and find the point x_M ∈ X that minimizes V_M. Then we have A(f, G) ≤ M < ∞ if and only if V_M(x_M) ≥ 0, which after a straightforward computation is easily seen to be equivalent to condition (W^{1,1}). We should also mention that Wells' proof [42] was rather elaborate (and constructive only in the case of a finite set E), and that Le Gruyer's proof [30], though very elegant and simple, was not constructive either (Zorn's lemma was used in an essential part of the argument). Very recently, the papers [4,10] supplied constructive proofs of Wells' theorem by means of two different explicit formulas, and also provided new proofs (with explicit formulas) for a related C^{1,1} convex extension problem for 1-jets that had been previously considered in [2]; see also [3] for the C^1 convex case.
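In the case ω(t) = t, Wells' condition (W^{1,1}) with constant M can be written as follows; this is a standard formulation consistent with the minimization argument just described, and the normalization of the constants may differ from that of [42]:

```latex
% Wells' condition (W^{1,1}) with constant M:
f(z) \le f(y) + \tfrac{1}{2}\,\langle G(y) + G(z),\, z - y\rangle
       + \tfrac{M}{4}\,|y - z|^2 - \tfrac{1}{4M}\,|G(y) - G(z)|^2
\qquad\text{for all } y, z \in E.
```

As a sanity check, equality holds at every pair of points for the extreme C^{1,1} function F(x) = (M/2)|x|^2, for which Lip(∇F) = M.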
In this paper we will consider the following questions: is Theorem 1.1 true if we replace R^n with a Hilbert space X? Or equivalently, is there a version of Wells' theorem for not necessarily linear moduli of continuity ω? In particular, is Theorem 1.1 true with bounded k(n)? And what can be said about other Banach spaces X? Let us mention that, as was shown in [26], a similar question for the class C^1(X) has a positive answer, but to the best of our knowledge nothing is known for nonlinear ω and the class C^{1,ω}(X), where X is a Hilbert space (or more generally a Banach space). It is also important to notice that for the classes C^{k,ω}(X) with k ≥ 2 this kind of result is no longer true: the best possible constants k(n) in the higher order versions of Theorem 1.1 established in [22] must go to ∞ as n → ∞; see [42, Theorem 1 of Section 5].
As we will see, the main result of our paper gives a positive answer to the first question: a jet (f, G) defined on an arbitrary subset E of a Hilbert space X has an extension (F, ∇F) with F ∈ C^{1,ω}(X) if and only if A(f, G) < ∞. Moreover, we can take F such that A(F, ∇F) ≤ 2A(f, G). In particular, considering X = R^n, this shows that in Theorem 1.1 one can always take k(n) ≤ 2 for all n ∈ N. We will also prove similar results for superreflexive Banach spaces X for a certain class of moduli of continuity.
In order to state and explain our results more precisely, let us introduce some more notation and definitions. Recall that, given a function g : R → R, the Fenchel conjugate of g is defined by g^*(t) = sup_{s∈R} { ts − g(s) }, where g^* may take the value +∞ at some t. If (X, ‖ · ‖) is a Banach space, with dual (X^*, ‖ · ‖_*), for any ξ ∈ X^* we let ⟨ξ, v⟩ := ξ(v) denote the duality product.
Definition 1.2. We will say that a 1-jet (f, G) defined on a subset E of a Banach space X satisfies condition (W^{1,ω}) with constant M > 0 on E provided that the defining inequality holds for all y, z ∈ E. On the other hand, for any function F ∈ C^{1,ω}(X), the jet (F, ∇F) satisfies (W^{1,ω}) with constant M = M_ω(∇F); see Proposition 3.1(2) below for a proof. Consequently, if M^* denotes the infimum of those numbers M > 0 for which (f, G) satisfies (W^{1,ω}) with constant M, Theorem 1.3 yields an estimate of M_ω(∇F) in terms of M^*. We may obtain slightly better constants in the estimate of the gradient if we consider the following extension condition.
Definition 1.4. We will say that a 1-jet (f, G) defined on a subset E of a Banach space X satisfies condition (mg^{1,ω}) with constant M on E provided that

f(y) + ⟨G(y), x − y⟩ ≤ f(z) + ⟨G(z), x − z⟩ + M ( ϕ(‖x − y‖) + ϕ(‖x − z‖) )

for all y, z ∈ E and all x ∈ X.
Thus (f, G) satisfies (mg^{1,ω}) for some M > 0 if and only if A(f, G) < ∞, and A(f, G) is precisely the smallest M for which (f, G) satisfies (mg^{1,ω}) with constant M. Condition (mg^{1,ω}) is half-intrinsic and half-extrinsic (in what refers to points x ∈ X), as opposed to (W^{1,ω}), which is completely intrinsic (it only concerns points y, z ∈ E). In principle condition (W^{1,ω}) should be easier to check, but conditions like (mg^{1,ω}) may also appear very naturally in some applications (see, for instance, the paper [1] in the convex setting). Anyhow, both conditions are useful, and in fact they are equivalent up to an absolute factor; see Proposition 3.1 below. In the case of a nonlinear modulus of continuity ω these conditions, though equivalent, are no longer identical. This is due to the fact that the minimization of the corresponding function V_M leads us in this case to rather perplexing equations which are difficult to handle and solve. Therefore a condition of the type V_M(x_M) ≥ 0 would be much more complicated than (W^{1,ω}).
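Since A(f, G) is the smallest constant M for which (mg^{1,ω}) holds, the quantity of (1.6) can be expressed as the following supremum; this is a reconstruction consistent with that characterization rather than the paper's verbatim display:

```latex
A(f, G; E) \;=\; \sup_{\substack{x \in X,\ y, z \in E \\ \varphi(\|x - y\|) + \varphi(\|x - z\|) > 0}}
\frac{f(y) - f(z) + \langle G(y),\, x - y\rangle - \langle G(z),\, x - z\rangle}
     {\varphi(\|x - y\|) + \varphi(\|x - z\|)}.
```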
With this extrinsic condition we have the following.
Theorem 1.5. Let E be a nonempty subset of a Hilbert space X, and f : E → R, G : E → X two functions. The proof of the preceding theorem also gives us the following nearly optimal result.
Note that in the particular case α = 1 this result yields Wells' theorem. According to Theorem 1.5 we always have the estimate (1.11), and, in the special case ω(t) = t^α, we will see that this estimate can be improved. On the other hand, for any extension (H, ∇H) of (f, G) with H ∈ C^{1,ω}(X) we always have the trivial estimate A(f, G) ≤ A(H, ∇H). Hence we may conclude the following.
and A(f, G) is defined by (1.6).
It should be noted that for every function F ∈ C^{1,1}(X) defined on a Hilbert space we always have the identity Lip(∇F) = A(F, ∇F), but this is no longer true for the class C^{1,ω}. For instance, for a suitable power-type function f it is easy to see that the relevant supremum is attained at couples of points (x, y) with x < 0 < y, and, using the homogeneity of f and f′, it is not difficult to check that it is equal to

sup_{t>0} (t^{3/2} + 3t^{1/2} + 2) / (2(t + 1)^{3/2}) ≤ 1.3066.
We will also prove that Theorem 1.5 extends to the class of superreflexive spaces: if X is such a Banach space then, thanks to Pisier's results (see [34, Theorem 3.1]), we can find an equivalent norm ‖ · ‖ on X which is uniformly smooth with modulus of smoothness of power type p = 1 + α for some 0 < α ≤ 1. Hence there exists a constant C > 0, depending only on this norm, such that (1.13) holds for all x, y ∈ X, λ ∈ [0, 1]. In particular, we have (1.14). We will consider moduli of continuity ω such that the function t → t^α/ω(t) is nondecreasing, which includes the cases ω(t) = t^β with β ≤ α. We will then show that an inequality similar to (1.13) holds true with ψ_ω = ϕ_ω ∘ ‖ · ‖ instead of ‖ · ‖^{1+α}, where ϕ_ω(t) = ∫_0^t ω(s) ds. As a consequence, we will obtain the following theorem in terms of condition (mg^{1,ω}).
Theorem 1.9. Let X be a superreflexive Banach space with an equivalent norm ‖ · ‖ satisfying (1.13), and let ω be a modulus of continuity such that t → t^α/ω(t) is nondecreasing. Let E ⊂ X be a nonempty subset and f : E → R, G : E → X^* two functions. There exists F ∈ C^{1,ω}(X) such that (F, DF) = (f, G) on E if and only if (f, G) satisfies (mg^{1,ω}) for some M > 0. Moreover, we can arrange that M_ω(DF) ≤ κ(α) C M, where κ(α) > 0 depends only on α.
And if we consider the intrinsic condition (W^{1,ω}) we have the following. It is worth noting that the proofs of Theorems 1.9 and 1.10 show that the sufficiency parts of these results still hold true for moduli ω not necessarily satisfying that the function t → t^α/ω(t) is nondecreasing, if we only assume that the function ψ_ω := ϕ_ω ∘ ‖ · ‖ is of class C^{1,ω}. However, such an assumption implies superreflexivity of the space X (see [11, Theorem V.3.2]), hence also the existence of an equivalent norm with modulus of smoothness of power type p = 1 + α for some α ∈ (0, 1]. Let us also mention that in [4, Section 6] it was shown that a necessary condition on a Banach space X for the validity of a Whitney-type extension theorem in X for some class C^{1,ω} is that X is superreflexive. Let us finish this introduction by making a few comments on our method of proof and honoring the title of this paper (where we promised some formulas). If one tries to adapt the proof of Wells' theorem given in [4] to the C^{1,ω} situation, one sees that the argument breaks down for the following reason: when ω(t) is not linear, it is no longer true that a function u is of class C^{1,ω} if and only if there exists a convex function ψ of class C^{1,ω} such that u + ψ is convex and u − ψ is concave. As it turns out, the appropriate class of functions for tackling this more general problem seems to be not that of convex functions, but that of strongly ϕ-paraconvex functions; see Definition 2.5 below.
The main ideas of the proof of Theorem 1.6 are the following: if A(f, G) < ∞ and we set M := A(f, G), then the functions

m(x) := sup_{y∈E} { f(y) + ⟨G(y), x − y⟩ − M ϕ(|x − y|) },
g(x) := inf_{z∈E} { f(z) + ⟨G(z), x − z⟩ + M ϕ(|x − z|) }

are well defined and satisfy m(x) ≤ g(x) for all x ∈ X, and m(y) = g(y) = f(y) for all y ∈ E.
Then one can check that the functions m and (−g) are strongly 2Mϕ-paraconvex, and define

F(x) := sup{ h(x) : h ≤ g, h is strongly 2Mϕ-paraconvex }, x ∈ X. (1.15)

One may call F the strongly 2Mϕ-paraconvex envelope of g. As we will show, both F and −F are strongly 2Mϕ-paraconvex, and this implies that F is of class C^{1,ω}(X). It is also worth noting that, in the very particular case ω(t) = t, one can also define F above with 2 replaced with 1. In this special case, another expression for F is available: for each t ∈ R, p ∈ X, ξ ∈ X^* one considers a function H_{p,t,ξ} (an affine function corrected by a ϕ-term with the appropriate paraconvexity constant), and then we have the formula (1.17); see Lemma 2.9 below. From this formula we can see that, in the case that E is finite, say that E has m points, then for each x ∈ X, F(x) can be computed by solving a maximization problem in R × X × X^* with m constraints, where the function to be maximized and the constraining functions are linear combinations of bilinear functions and quadratic functions. Hence the computation of F(x) is much easier than in the general case of a nonlinear modulus ω.
When ω(t) is not necessarily linear, we may also provide an alternate formula for an admissible extension F of (f, G), as the supremum of a smaller family of functions than that of (1.15); this is the formula (1.18) considered at the end of Section 3. In the case that ω(t) is linear, it is easily seen that this extension F coincides with (1.17), and also with conv(g + ψ) − ψ, where ψ = (M/2)| · |² and conv(g + ψ) denotes the convex envelope of g + ψ, that is, the supremum of all lower semicontinuous convex functions lying below g + ψ. This is a consequence of the fact that a function h : X → R is strongly ϕ-paraconvex if and only if h + ψ is convex, where ϕ(t) = (M/2)t² (however, this is no longer true for nonlinear moduli of continuity).
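In the linear case the equivalence just invoked reduces to a one-line computation for the quadratic ψ = (M/2)| · |²: in a Hilbert space,

```latex
\lambda\,\psi(x) + (1 - \lambda)\,\psi(y) - \psi(\lambda x + (1 - \lambda)y)
  \;=\; \tfrac{M}{2}\,\lambda(1 - \lambda)\,|x - y|^2
  \;=\; \lambda(1 - \lambda)\,\varphi(|x - y|),
\qquad \varphi(t) = \tfrac{M}{2}\,t^2,
```

so the inequality defining strong ϕ-paraconvexity of h is exactly the convexity inequality for h + ψ, which is consistent with the failure of the equivalence for nonlinear ω noted above.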
These results will all be shown in Section 3 below. In Sections 4 and 5 we will give some variants of our techniques which will allow us to establish similar results for the subclasses of C^{1,ω}(X) consisting of bounded and/or Lipschitz functions, and also a certain continuous dependence of the extensions on the initial data, meaning that if a sequence {(f_n, G_n)}_{n∈N} of jets converges uniformly on E to a jet (f, G), then the corresponding extensions satisfy lim_{n→∞}(F_n, ∇F_n) = (F, ∇F) uniformly on X. Finally, in Section 6 we will consider the class C^{1,u}_B(X) of differentiable functions whose derivatives are uniformly continuous on bounded subsets of X, and we will show the following result: if the jet (f, G) is bounded on each bounded subset of E and satisfies the appropriate extension condition there, then there exists an extension F of class C^{1,u}_B(X). Also note that, in the particular case X = R^n, we have C^1(R^n) = C^{1,u}_B(R^n), and this statement is thus equivalent to Whitney's extension theorem for C^1.

Some technical tools
Recall that the Fenchel conjugate of a function g is denoted by g * and defined as in (1.10).
Proposition 2.1. The following properties hold.
Abusing terminology, we will consider the Fenchel conjugate of nonnegative functions defined only on [0, +∞), say δ : [0, +∞) → [0, +∞). In order to avoid problems, we will assume that all the functions involved are extended to all of R by setting δ(t) = δ(−t) for t < 0. Hence δ will be an even function on R, and therefore so is its conjugate δ^*. In the following proposition we collect some elementary facts concerning the functions ω, ω^{−1}, ϕ and ϕ^*.
Proposition 2.2. (1) ϕ is convex. If, in addition, ω is increasing and lim_{t→∞} ω(t) = ∞, then ω^{−1} and ϕ^* are well defined.
Lemma 2.3. Let (X, | · |) be a Hilbert space, and ω a modulus of continuity as in the preceding proposition. Then the function ψ(x) = ϕ(|x|), x ∈ X, satisfies a two-sided smoothness inequality with constant 2: using the duality theorem (see [44, Proposition 3.5.3], for instance), we obtain that ψ = (ψ^*)^* is uniformly smooth with modulus of smoothness δ^*. For Hölder moduli of continuity, the preceding lemma is true with constant 2^{1−α} instead of 2.
Lemma 2.4. Let (X, | · |) be a Hilbert space, and ω(t) = t^α for α ∈ (0, 1]. Then the function ψ(x) = ϕ(|x|), x ∈ X, satisfies the corresponding smoothness inequality with constant 2^{1−α}. Proof. We know from [41] that ψ^* is uniformly convex, with a modulus of convexity that can be estimated thanks to Proposition 2.2. By using the duality theorem as in Lemma 2.3, we obtain the desired inequality.
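In view of the restatement given after Definition 2.5 below, the inequality of Lemma 2.4 presumably takes the following form; this is a reconstruction matching the constant 2^{1−α} mentioned above, and the exact shape of the display is an assumption:

```latex
% Smoothness of \psi(x) = \varphi(|x|) = |x|^{1+\alpha}/(1+\alpha) in a Hilbert space:
\psi(\lambda x + (1 - \lambda)y) \;\ge\; \lambda\,\psi(x) + (1 - \lambda)\,\psi(y)
   - 2^{1-\alpha}\,\lambda(1 - \lambda)\,\varphi(|x - y|)
\qquad\text{for all } x, y \in X,\ \lambda \in [0, 1],
```

that is, −ψ is strongly 2^{1−α}ϕ-paraconvex.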
Definition 2.5. If C ≥ 0 is a constant, we will say that a function u is strongly Cϕ-paraconvex on a Banach space X if we have

u(tx + (1 − t)y) ≤ t u(x) + (1 − t) u(y) + C t(1 − t) ϕ(‖x − y‖) (2.1)

for all x, y ∈ X and all t ∈ [0, 1].
Thus the preceding two lemmas can be restated by saying that −ψ is strongly Cϕ-paraconvex for some C > 0. On the other hand, since ψ is also convex, ψ is trivially strongly ϕ-paraconvex.
Some authors call such functions u semiconvex, or ϕ-semiconvex, but we prefer not to use this terminology because it may make the reader think that the function u + Cϕ(| · |) will be convex, at least locally for some large C, which is generally false unless ω is linear. See [8,27,35,36] and the references therein for background on paraconvex and strongly ϕ-paraconvex functions.
Next we recall a well-known fact about this kind of function, which we will use in our proofs. This result is usually shown in more specialized settings with the help of subdifferentials or Clarke's generalized gradients. For the reader's convenience (and also because we need precise estimates and the literature's terminology varies from author to author), we include a self-contained elementary proof of this result. Proposition 2.6. Let (X, ‖ · ‖) be a Banach space, ω a modulus of continuity, ϕ(t) = ∫_0^t ω(s) ds, and u : X → R a continuous function. Assume that both u and −u are strongly Cϕ-paraconvex. Then u is everywhere Fréchet differentiable and, with the notation of (1.6), A(u, Du) ≤ C. In particular u is of class C^{1,ω}(X), with the corresponding estimate holding for all x, y ∈ X. Moreover, if X is a Hilbert space, this estimate can be improved. Proof. Taking y = a and h = x − a in (2.1) we see that u satisfies the inequality (2.2), and since −u is strongly Cϕ-paraconvex too, we obtain the reverse inequality (2.3). For the moment, let us fix a and h in X, and consider s, t ∈ (0, 1]. The inequality (2.2) implies an estimate on difference quotients; similarly, because −u is also strongly Cϕ-paraconvex, we have the symmetric estimate (2.4) for all s, t ∈ (0, 1], a, h ∈ X. This entails the existence and local uniform boundedness of the limit

D_v u(a) := lim_{t→0^+} ( u(a + tv) − u(a) ) / t  (2.5)

for a, v ∈ X. Indeed, on the one hand, by taking s = 1 and using that u is locally bounded, we see that there are some r > 0 and a constant k_r bounding the relevant difference quotients. On the other hand, if the limit in (2.5) did not exist, then there would be some ε > 0 and two sequences (s_n), (r_n) of strictly positive numbers converging to 0 such that the corresponding difference quotients at scales s_n and r_n differ by at least ε for all n. Up to extracting subsequences we may assume that 0 < r_n < s_n for all n, and then find (t_n) ⊂ (0, 1] such that r_n = t_n s_n for every n, so that the above inequality stands in contradiction with (2.4). Next, by using (2.3) and (2.7) we also get that the corresponding two-sided limit exists and equals D_v u(a). Furthermore, by letting t go to 0 in (2.4) we also obtain an estimate valid for every a, v ∈ X and s ∈ (0, 1], and in particular a bound on |D_v u(a)| for all a, v ∈ X.
In order to finish the proof that u is differentiable, we will now combine some calculations from [8, Theorem 3.3.7] and [27, Theorem 6.1]. We do not yet know that the function v → D_v u(a) is linear, but we do easily get that D_{λv} u(a) = λ D_v u(a) for all a ∈ X and λ ∈ R; this fact is a straightforward consequence of (2.8), which we will use before establishing the linearity of v → D_v u(a). We next show an estimate comparing D_v u(a) and D_v u(b) for all a, b ∈ X. Indeed, writing b = a + h with h ≠ 0, this follows by using the strong ϕ-paraconvexity of u and −u. Observe also that sup_{‖v‖≤1} |D_v u(a)| is finite for every a, thanks to (2.6). From these facts we easily deduce that D_{v+w} u(a) = D_v u(a) + D_w u(a). We thus have that u is everywhere Fréchet differentiable, and from (2.9) we obtain that the jet (u, Du) : X → R × X^* satisfies A(u, Du) ≤ C. The estimates for the modulus of continuity of Du are a consequence of Proposition 3.1(3) below.
Let us finish this section by studying what one could fairly call the Cϕ-paraconvex envelope of a function.
Definition 2.7. Given a Hilbert space X, a continuous function g : X → R, and a number C > 0, let us define W_C(x) := sup{ h(x) : h ≤ g, h is strongly Cϕ-paraconvex }, x ∈ X.
Lemma 2.8. If h is strongly Cϕ-paraconvex, then for every x ∈ X there exists ξ ∈ X^* such that h(y) ≥ h(x) + ⟨ξ, y − x⟩ − Cϕ(|y − x|) for all y ∈ X.
Proof. Indeed, since h is strongly Cϕ-paraconvex, it is locally Lipschitz (see [27, Proposition 6.1] for a proof of this fact), and then the Clarke subdifferential ∂_C h(x) is nonempty for every x ∈ X. Moreover, according to [27, p. 219], the Clarke subdifferential of h admits a description in terms of limits of derivatives at nearby points of differentiability of h. Using that h is strongly Cϕ-paraconvex, we can prove that, in fact, we have the formula (2.10). Letting t → 0^+ and taking into account that ϕ(t) ≤ tω(t) and lim_{t→0^+} ω(t) = 0, we get the required inequality. We have thus shown (2.10). Since h is locally Lipschitz, ∂_C h(x) ≠ ∅ for every x ∈ X, and the result follows.
Lemma 2.9. Assume that ω(t) = at, where a > 0. Then we have, for every x ∈ X, the identity (1.17).

Proof. Let us call H(x) := sup{ H_{p,t,ξ}(x) : H_{p,t,ξ} ≤ g }, x ∈ X.
On the one hand, by using Lemma 2.4 with α = 1, we have that each H_{p,t,ξ} is strongly Cϕ-paraconvex, hence it is clear that H ≤ W_C on X. On the other hand, if h is strongly Cϕ-paraconvex and h ≤ g then, according to the previous lemma, for each x ∈ X there exists some ξ ∈ X^* such that h(y) ≥ h(x) + ⟨ξ, y − x⟩ − Cϕ(|y − x|) for all y ∈ X. Because y → H_{x,h(x),ξ}(y) is strongly Cϕ-paraconvex and lies below g, we have, by definition of H, that H(x) ≥ H_{x,h(x),ξ}(x) = h(x). Therefore H ≥ h for every h that is strongly Cϕ-paraconvex and lies below g. Since W_C is the supremum of all such h, we also have W_C(x) ≤ H(x) for all x ∈ X. Thus we conclude H = W_C.

Proofs of the main results
Let us start by showing the equivalence between conditions (mg^{1,ω}) and (W^{1,ω}), and their relation with the quantity M_ω(G), for jets (f, G) defined on subsets of Banach and Hilbert spaces.
Proposition 3.1. (1) Assume that (f, G) satisfies condition (W^{1,ω}) with constant M > 0. Then a corresponding estimate of type (mg^{1,ω}) holds and, in particular, M_ω(G) ≤ 3M. Moreover, if X is a Hilbert space, this estimate can be sharpened. Furthermore, if X is a Hilbert space and ω(t) = t^α, α ∈ (0, 1], the bound on M_ω(G) improves further.
Proof.
(3) Let y, z ∈ E and v ∈ X. For the point x = ½(y + z) + v, condition (mg^{1,ω}) gives one inequality; reversing the roles of y and z, and taking x = ½(y + z) − v in condition (mg^{1,ω}), we obtain a second one. By summing both inequalities we arrive at (3.4). These estimates hold for any v ∈ X, and in particular for every v ∈ X with ‖v‖ = ‖y − z‖. Then, using that ϕ is convex, we conclude the desired bound on M_ω(G). Let us now assume that X is a Hilbert space. Note that the function [0, +∞) ∋ s → ϕ(√s) is concave, because its derivative s → ω(√s)/(2√s) is nonincreasing, since so is s → ω(s)/s. Using the concavity of ϕ(√·) in (3.4) we obtain (3.5). Writing |v| = t|y − z| with t > 0 and using Proposition 2.2, we deduce the refined estimate. Taking t = √15/2 and t = √3/2 in (3.5), the desired estimate follows immediately. Finally, assume that X is a Hilbert space and ω(t) = t^α for α ∈ (0, 1]. From (3.5) we derive a bound in which |v| = t|y − z| for t > 0. It is straightforward to see that the function h(t) = 2(t² + 1/4)^{(1+α)/2}/t, t > 0, attains its minimum at t = 1/(2√α). The desired estimate easily follows.
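Assuming the form of (mg^{1,ω}) in which f(y) + ⟨G(y), x − y⟩ ≤ f(z) + ⟨G(z), x − z⟩ + M(ϕ(‖x − y‖) + ϕ(‖x − z‖)) (see Definition 1.4), the summation step in part (3) can be sketched as follows: the choices x = ½(y + z) ± v make the zero-order terms cancel, because ‖½(z − y) + v‖ = ‖½(y − z) − v‖, and one is left with

```latex
\langle G(y) - G(z),\, v\rangle \;\le\;
M\Big(\varphi\big(\big\|\tfrac{1}{2}(y - z) + v\big\|\big)
    + \varphi\big(\big\|\tfrac{1}{2}(y - z) - v\big\|\big)\Big)
\qquad\text{for all } v \in X,
```

which is the shape of estimate needed to bound ‖G(y) − G(z)‖ after optimizing over ‖v‖.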
(4) Given y, z ∈ E, we have two inequalities provided by the hypothesis; by summing them we easily get a symmetric estimate. Applying Jensen's inequality to both sides of this estimate (bearing in mind that ω^{−1} is convex and ω is concave), we obtain the desired conclusion. Now we prove Theorems 1.3, 1.5, 1.6, 1.7, 1.9, 1.10, and Corollary 1.8. Part of the proof of these results, as of those of Sections 4 and 5 below, will be deduced from the following technical theorem.
Theorem 3.2. Let E be a nonempty subset of X, and suppose that for every y ∈ E we are given two functions ψ^+_y, ψ^−_y : X → R such that:
(1) ψ^−_y and −ψ^+_y are strongly Cϕ-paraconvex for each y ∈ E;
(2) ψ^±_y(y) = f(y), and ψ^±_y is differentiable at y with Dψ^±_y(y) = G(y), for each y ∈ E;
(3) ψ^−_z(x) ≤ ψ^+_y(x) for all x ∈ X and all y, z ∈ E.
Then the function F defined in (3.8) below satisfies F = f and DF = G on E, and both F and −F are strongly Cϕ-paraconvex.
Let us define functions m, g, F : X → R by

m(x) := sup_{z∈E} ψ^−_z(x),  g(x) := inf_{y∈E} ψ^+_y(x),

and F(x) := sup{ h(x) : h ≤ g, h is strongly Cϕ-paraconvex }. (3.8)

Proof. Condition (3) is obviously equivalent to saying that m and g are finite everywhere and satisfy m ≤ g on X. Also, if y ∈ E, it is obvious that m(y) ≥ f(y) and g(y) ≤ f(y), which implies m(y) = g(y) = f(y). Thus we have (3.10): m(x) ≤ g(x) for all x ∈ X, and m(y) = g(y) = f(y) for all y ∈ E. That m and −g are strongly Cϕ-paraconvex follows from the elementary observation that the supremum of a family of strongly Cϕ-paraconvex functions is strongly Cϕ-paraconvex. Once we know that m is strongly Cϕ-paraconvex, since m ≤ g on X, we deduce that F is well defined, with m ≤ F ≤ g on X. According to (3.10), this implies F = f on E. Finally, applying the mentioned observation again, we obtain that F is strongly Cϕ-paraconvex as well.
Proof. Fix x, y ∈ X and λ ∈ [0, 1] and define the corresponding auxiliary function h. Using that F is strongly Cϕ-paraconvex, it is straightforward to check that h is strongly Cϕ-paraconvex as well. Also, since F ≤ g, we have h(z) ≤ g(z) + λ(1 − λ)Cϕ(‖x − y‖) for all z ∈ X, where the last inequality follows from the fact that −g is strongly Cϕ-paraconvex; see Lemma 3.3. We have thus shown that h − λ(1 − λ)Cϕ(‖x − y‖) is strongly Cϕ-paraconvex and less than or equal to g. By the definition of F, we must have h − λ(1 − λ)Cϕ(‖x − y‖) ≤ F. In particular, evaluating at a suitable point yields the desired inequality. This proves the lemma. Proof. We already know that both F and −F are strongly Cϕ-paraconvex. Then by Proposition 2.6 we have that F is of class C^{1,ω}(X), with the corresponding estimate for all x, y ∈ X, and also the bound A(F, DF) ≤ C. Finally, let us check that DF = G on E. By the definitions of m and g and the fact that m ≤ F ≤ g on X we have, for every y ∈ E and x ∈ X, that ψ^−_y ≤ F ≤ ψ^+_y on X, and since by condition (2) we also know that ψ^±_y is differentiable at y, with Dψ^±_y(y) = G(y) and ψ^±_y(y) = f(y) = F(y), we conclude that DF(y) = G(y).
The proof of Theorem 3.2 is complete.
Proofs of Theorems 1.3, 1.5 and 1.6. Let us first note that in the case A(f, G) = 0 our results are trivial. Indeed, if A(f, G) = 0, we may fix a point z_0 ∈ E, and we have f(y) + ⟨G(y), x − y⟩ = f(z_0) + ⟨G(z_0), x − z_0⟩ for all y ∈ E, x ∈ X; then the affine function x → f(z_0) + ⟨G(z_0), x − z_0⟩ provides the desired extension. On the other hand, it is clear that A(f, G) is the infimum of all constants M > 0 for which (mg^{1,ω}) holds. In particular, A(f, G) = 0 if and only if (f, G) satisfies condition (mg^{1,ω}) with all M > 0.
According to these observations, in our proofs we may assume that A(f, G) > 0. Also note that if A(f, G) is finite and strictly positive then (f, G) satisfies condition (mg 1,ω ) with M = A(f, G).
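The necessity part of the proof below uses the standard Taylor-type estimate for C^{1,ω} functions, which can be derived in one line (a routine computation, not taken verbatim from the paper):

```latex
F(x) - F(y) - \langle \nabla F(y),\, x - y\rangle
  = \int_0^1 \langle \nabla F(y + t(x - y)) - \nabla F(y),\, x - y\rangle\, dt,
\qquad\text{hence}\qquad
|F(x) - F(y) - \langle \nabla F(y),\, x - y\rangle|
  \le M_\omega(\nabla F)\,\|x - y\| \int_0^1 \omega(t\,\|x - y\|)\, dt
  = M_\omega(\nabla F)\,\varphi(\|x - y\|),
```

the last equality being the substitution s = t‖x − y‖ in ϕ(r) = ∫_0^r ω(s) ds.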
Let us start with the proof of Theorem 1.6. To prove the necessity of (mg^{1,ω}), which is obviously equivalent to A(f, G) < ∞, we just use Taylor's theorem, valid for all x, y, z ∈ X, from which (mg^{1,ω}) follows immediately (in fact this shows that A(F, ∇F) ≤ M_ω(∇F)). Let us now show the sufficiency of condition (mg^{1,ω}). Assume that (f, G) satisfies (mg^{1,ω}) with constant M := A(f, G) > 0. For all y, z ∈ E, define the functions

ψ^+_y(x) := f(y) + ⟨G(y), x − y⟩ + Mϕ(|x − y|),  ψ^−_z(x) := f(z) + ⟨G(z), x − z⟩ − Mϕ(|x − z|).

Condition (mg^{1,ω}) tells us that ψ^−_z(x) ≤ ψ^+_y(x) for all x ∈ X, y, z ∈ E, so the functions ψ^±_y meet condition (3) of Theorem 3.2, and it is obvious from the definition that they also satisfy condition (2). By Lemma 2.3 we have that x → −Mϕ(|x − z|) is strongly 2Mϕ-paraconvex, which immediately implies condition (1) with C = 2M, and hence that the resulting function F is of class C^{1,ω}(X). Theorem 1.5 is an immediate consequence of Theorem 1.6 and Proposition 3.1(3). Finally, in order to prove Theorem 1.3 we slightly modify the proof of Theorem 1.6. Assume that the jet (f, G) satisfies condition (W^{1,ω}) with constant M > 0 on a subset E of a Hilbert space X. Defining ϕ̃ = ϕ(2 ·) and ψ^±_y(x) := f(y) + ⟨G(y), x − y⟩ ± Mϕ̃(|x − y|) for every x ∈ X, y ∈ E, we see from the arguments in the proof of Theorem 1.6, together with Proposition 3.1(1), that the families of functions {ψ^±_y}_{y∈E} satisfy all the assumptions of Theorem 3.2 for C = 2M and with ϕ̃ in place of ϕ. Thus, if F is the function defined in (3.8) (with ϕ̃ in place of ϕ), then both F and −F are strongly 2Mϕ̃-paraconvex, with F = f and ∇F = G on E. Proposition 2.6 tells us that F ∈ C^{1,ω̃}(X) with ω̃ = 2ω(2 ·), together with the corresponding estimates. Proofs of Theorem 1.7 and Corollary 1.8. In the preceding proof we may use Lemma 2.4 instead of Lemma 2.3 to obtain that m and −g are strongly 2^{1−α}Mϕ-paraconvex, and the rest of the proof goes through just replacing 2M with 2^{1−α}M at the appropriate points, yielding Theorem 1.7. On the other hand, Corollary 1.8 is an obvious consequence of the previous results and some remarks made in the introduction, together with the following observation.
If α ∈ (0, 1] and ω(t) = t^α, then one can combine Lemma 2.4 and Proposition 2.6 to improve the estimates of A(F, ∇F) in Theorem 1.6 and of the trace seminorm ‖(f, G)‖_{E,ω} in (1.11). Proofs of Theorems 1.9 and 1.10. We start with the proof of Theorem 1.9. Assume that X is a superreflexive space with an equivalent norm ‖ · ‖ satisfying (1.13) for some α ∈ (0, 1] and C > 0. Let ω be a modulus of continuity such that t → t^α/ω(t) is nondecreasing.
Finally, in order to prove the desired inequality, let λ ∈ [0, 1] and x, y ∈ X; the required estimate follows from a direct computation. Let us now define m and g as in the proof of Theorem 1.6, where now ϕ(t) := ∫_0^t ω(s) ds and ψ(x) := ϕ(‖x‖). Bearing in mind Lemma 3.6, we see that each function appearing in the definition of g has a strongly C^*Mϕ-paraconvex negative, where C^* is as in Lemma 3.6. Then we define F(x) := sup{ h(x) : h ≤ g and h is strongly C^*Mϕ-paraconvex }, x ∈ X, and exactly as in the proof of Theorem 1.5 one may use Theorem 3.2 to show that F and −F are strongly C^*Mϕ-paraconvex, which by Proposition 2.6 implies that F is of class C^{1,ω}(X), with the corresponding estimate for every x, y ∈ X. As in that proof, we also have m ≤ F ≤ g on X, m = f = g = F on E, and DF = G on E.
Finally, Theorem 1.10 is a consequence of Theorem 1.9 and Proposition 3.1. Indeed, assuming that (f, G) satisfies condition (W^{1,ω}) on E, the functions F and −F are strongly C^*Mϕ̃-paraconvex, with ϕ̃ = ϕ(2 ·). Bearing in mind that ϕ̃(t) = ∫_0^t ω̃(s) ds with ω̃ = 2ω(2 ·), the estimate in Proposition 2.6 yields the corresponding bound for DF. Let us conclude this section with a proof that the alternate formula (1.18) also provides an admissible extension F in Theorem 1.6: given a nonempty subset E of a Hilbert space X and a 1-jet (f, G) : E → R × X satisfying (mg^{1,ω}) with constant M, the function F given by (1.18) is of class C^{1,ω}(X) and satisfies (F, ∇F) = (f, G) on E. Proof. By replacing ω(t) with Mω(t) if necessary, we may assume without loss of generality that M = 1. Let us observe the following: with a, ξ, λ_i, p_i as in the definition of F, for every x, y ∈ X and t ∈ [0, 1] we can write the corresponding convexity estimate, where we have used that −ϕ ∘ | · | is strongly 2ϕ-paraconvex.
• F is well defined and satisfies F ≤ g. Indeed, since m ≤ g and any function x → f(y) + ⟨G(y), x − y⟩ − ϕ(|x − y|) belongs to F, we have that F is well defined and m ≤ F ≤ g on X. We then deduce that F = f on E, and comparing these upper and lower bounds at the points of E shows that F is differentiable on E, with ∇F = G on E.
• F is strongly 2ϕ-paraconvex. This is a consequence of the general and obvious fact that the supremum of a family of strongly Cϕ-paraconvex functions is also strongly Cϕ-paraconvex.
In order to show that F ∈ C^{1,ω}(X), let us also note the following.
• The function −F is strongly 2ϕ-paraconvex. Indeed, let x, y ∈ X, t ∈ [0, 1] and ε > 0. We can find h_1, h_2 ∈ F with h_i ≤ g and F(x) ≤ h_1(x) + ε, F(y) ≤ h_2(y) + ε. Define h as the corresponding combination of h_1 and h_2; it is straightforward to see that h ∈ F. Using the fact that −g is strongly 2ϕ-paraconvex, we then obtain the required paraconvexity estimate for −F up to an error ε. Letting ε go to 0 we thus obtain that −F is strongly 2ϕ-paraconvex. Now we can apply Proposition 2.6 to conclude that F ∈ C^{1,ω}(X) and A(F, ∇F) ≤ 2 = 2A(f, G).

The bounded case
If a jet (f, G) defined on a subset E of a Hilbert space X satisfies A(f, G) < ∞, then we already know that there exists F ∈ C^{1,ω}(X) such that (F, ∇F) = (f, G) on E. If the given functions f, G are bounded on E, then it is natural to ask whether (F, ∇F) can be taken to be bounded. The extensions F defined by (1.15) may not be bounded (in fact they are never bounded when E is bounded), but in this section we will see how we can modify the proofs of Theorems 1.3 and 1.5 so as to get (F, ∇F) bounded. Also, with a different modification of the proof, we can obtain a certain continuous dependence of the extensions on the initial data, meaning that if a sequence {(f_n, G_n)}_{n∈N} of jets converges uniformly on E to a jet (f, G), then the corresponding extensions satisfy lim_{n→∞}(F_n, ∇F_n) = (F, ∇F) uniformly on X.
In order to formulate our results more precisely, let us introduce some more notation. Let C^{1,ω}_b(X) denote the space of functions F ∈ C^{1,ω}(X) such that F and ∇F are bounded, and let J^{1,ω}_b(E) denote the space of bounded 1-jets (f, G) on E with A(f, G) < ∞; we endow C^{1,ω}_b(X) with its natural norm, which makes C^{1,ω}_b(X) a Banach space. Also observe that the mapping (f, G) → A(f, G) is a seminorm on the vector space of 1-jets and therefore, combined with the sup norms of f and G, defines a norm on J^{1,ω}_b(E).
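A natural choice of norms consistent with this discussion is the following sketch; the exact normalization used in the paper is an assumption here:

```latex
\|F\|_{C^{1,\omega}_b(X)} := \|F\|_\infty + \|\nabla F\|_\infty + M_\omega(\nabla F),
\qquad
\|(f, G)\|_{J^{1,\omega}_b(E)} := \|f\|_\infty + \|G\|_\infty + A(f, G).
```

With the second expression, A(·) alone is only a seminorm, and adding the sup norms of f and G is what turns it into a norm on J^{1,ω}_b(E).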
Theorem 4.1. Let X be a Hilbert space. There exist a nonlinear operator E : J^{1,ω}_b(E) → C^{1,ω}_b(X) and a constant C > 0, depending only on ω, with the following properties. Proof. Given a jet (f, G) ∈ J^{1,ω}_b(E), let us define E(f, G) as follows. If |x − y| ≥ 1, the relevant terms can be estimated directly; on the other hand, if |x − y| < 1, a second estimate is available. In either case we have (4.1) for all x ∈ X, y ∈ E, and similarly we see that (4.2) holds for all x ∈ X, z ∈ E. Let us define, for each y ∈ E, the functions ψ^±_y. By using (4.1) and (4.2) and the assumption that A(f, G) ≤ M < ∞, it is immediately checked that these functions satisfy conditions (2) and (3) of Theorem 3.2. Besides, recalling Lemma 2.3 and the fact that the maximum of two strongly 2Mϕ-paraconvex functions is strongly 2Mϕ-paraconvex, we also have that m is strongly 2Mϕ-paraconvex. Similarly, −g is strongly 2Mϕ-paraconvex too. Then we can apply Theorem 3.2 with C = 2M, obtaining that F and −F are strongly 2Mϕ-paraconvex, hence F ∈ C^{1,ω}(X), and that (F, ∇F) extends (f, G). Since m ≤ F ≤ g, it is obvious that F is bounded, with ‖F‖_∞ controlled by the bounds for m and g. Let us now estimate ‖∇F‖_∞. By using (2.4) with s = 1 in the proof of Proposition 2.6, and recalling that both F and −F are strongly 2Mϕ-paraconvex, we obtain (4.6), and by setting |h| = 1 and letting t → 0^+ we obtain (4.7). In conclusion, by combining (4.5), (4.6) and (4.7) we obtain that ‖E(f, G)‖ ≤ C‖(f, G)‖, where C > 0 is a constant depending only on ω.
Theorem 4.2. Let X be a Hilbert space. There exists a nonlinear operator E as in Theorem 4.1 with the additional property (3) that E is continuous: if a sequence {(f_n, G_n)}_n converges uniformly on E to (f, G), then (E(f_n, G_n), ∇E(f_n, G_n)) converges uniformly on X to (E(f, G), ∇E(f, G)).

Proof. That the mapping (f, G) → E(f, G) satisfies properties (1) and (2) of the statement can be checked exactly as in the proof of the previous theorem.
In order to prove (3) we need to localize the infima defining the associated functions g and g_n.

Lemma 4.3. For all x ∈ X and n ∈ N, the infima defining g(x) and g_n(x) are unchanged if they are restricted to E ∩ B(x, 1). (4.8)
Proof. If x ∈ X, y ∈ E and |x − y| ≥ 1, then the term that y contributes to the infimum defining g(x) admits a lower bound showing that it cannot be smaller than the infimum restricted to E ∩ B(x, 1). Therefore points of E at distance at least 1 from x may be discarded. Obviously, the same holds true for g_n.
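The localization phenomenon of Lemma 4.3 can be seen concretely: since ϕ grows superlinearly while f and G stay bounded, the penalty term Mϕ(|x − z|) dominates for |x − z| ≥ 1, so far-away points cannot attain the infimum. A numerical sketch (with the assumed envelope formula for g, the choice ω(t) = t, and bounded sample data, all illustrative):

```python
import numpy as np

# C^{1,1}-type penalty: omega(t) = t, phi(t) = t^2/2, so M*phi(|x - z|)
# grows quadratically while f is bounded and <G(z), x - z> grows linearly.
phi = lambda t: t**2 / 2.0

rng = np.random.default_rng(1)
E = np.sort(rng.uniform(-5.0, 5.0, size=40))
f = np.sin(E)            # bounded data
G = np.cos(E)            # bounded "gradients"
M = 20.0                 # large compared with ||f||_inf and ||G||_inf

def majorant_profiles(x):
    return f + G * (x - E) + M * phi(np.abs(x - E))

# For points x at distance <= 1/2 from E, the infimum defining g(x) is
# attained inside the ball B(x, 1): far-away z are beaten by the penalty.
for x in np.linspace(E.min(), E.max(), 300):
    if np.min(np.abs(x - E)) <= 0.5:
        z_star = E[np.argmin(majorant_profiles(x))]
        assert abs(x - z_star) < 1.0
```

Here any z with |x − z| ≥ 1 contributes at least −1 − |x − z| + 10|x − z|² ≥ 8, while a point of E within distance 1/2 of x already yields a value at most 1 + 1/2 + 20·ϕ(1/2) = 4.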
Lemma 4.4. The sequence (g_n) converges to g uniformly on X.
Proof. Let ε > 0, and choose n_0 ∈ N large enough so that ‖f_n − f‖_∞ + ‖G_n − G‖_∞ is small enough (depending on ε) for all n ≥ n_0. Then, given x ∈ X, one of two alternatives holds for the infimum defining g(x). In the first case we have g_n(x) ≤ g(x) + ε for all n ≥ n_0. In the second case, thanks to (4.8) we may find y_x ∈ E ∩ B(x, 1) nearly attaining the infimum, hence g_n(x) ≤ g(x) + ε again for all n ≥ n_0. In either case we see that if n ≥ n_0 then g_n(x) ≤ g(x) + ε for all x ∈ X. Similarly one can check that g(x) ≤ g_n(x) + ε for all x ∈ X, n ≥ n_0. Thus we conclude that ‖g_n − g‖_∞ ≤ ε for all n ≥ n_0.
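The ε/3-type argument above ultimately rests on the elementary fact that the infimum is 1-Lipschitz with respect to uniform perturbations of the competing functions: |inf_y a_y − inf_y b_y| ≤ sup_y |a_y − b_y|. The sketch below illustrates this for the assumed envelope formula for g over a finite set (the data, the modulus ω(t) = t, and the perturbation rate 1/n are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.uniform(-3.0, 3.0, size=25)
f, G = np.tanh(E), 1.0 / np.cosh(E)**2      # bounded sample jet
M, phi = 5.0, (lambda t: t**2 / 2.0)

def g(x, fv, Gv):
    # assumed majorant-envelope formula for g, evaluated at one point x
    return np.min(fv + Gv * (x - E) + M * phi(np.abs(x - E)))

grid = np.linspace(-4.0, 4.0, 200)
for n in [10, 100, 1000]:
    fn = f + rng.uniform(-1.0, 1.0, size=E.size) / n   # ||fn - f||_inf <= 1/n
    Gn = G + rng.uniform(-1.0, 1.0, size=E.size) / n   # ||Gn - G||_inf <= 1/n
    err = max(abs(g(x, fn, Gn) - g(x, f, G)) for x in grid)
    # Each competing profile moves by at most (1 + |x - y|)/n <= 8/n here,
    # so the 1-Lipschitz property of inf gives a matching bound on g.
    assert err <= 9.0 / n + 1e-12
```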
Lemma 4.5. The sequence (F_n) converges to F uniformly on X.

Proof. Observe that the family of functions {g, g_n, F, F_n}_n is uniformly bounded, thanks to property (2) and the fact that {(f_n, G_n)}_n converges uniformly to (f, G). Together with Lemma 4.4, this implies that, given ε > 0, we can choose n_0 ∈ N so that the inequalities (4.9) and (4.10) hold for every n ≥ n_0. In particular, (4.9) implies that M M_n^{-1} g_n ≤ g + |M M_n^{-1} − 1| ‖g_n‖_∞ + ε/6 ≤ g + ε/3 for each n ≥ n_0. (4.11) Observing that a function h is strongly aϕ-paraconvex if and only if b a^{-1} h + c is strongly bϕ-paraconvex, where a, b > 0 and c ∈ R are any constants, the inequalities in (4.9) and (4.11) yield F_n(x) ≤ F(x) + ε for every x ∈ X and n ≥ n_0. Similarly (using the inequalities of (4.10)), we obtain F(x) ≤ F_n(x) + ε for all x ∈ X, n ≥ n_0. Therefore ‖F_n − F‖_∞ ≤ ε for all n ≥ n_0.
It only remains to be shown that lim n→∞ ∇F n − ∇F ∞ = 0. This is a consequence of the following fact (which is of course well known; we include a short proof here for the reader's convenience).
Lemma 4.6. Let u : X → R be differentiable, and let (u_k) be a sequence of differentiable functions such that u_k converges to u uniformly on X, and such that, for some constant M > 0, |∇u_k(x) − ∇u_k(y)| ≤ M ω(|x − y|) and |∇u(x) − ∇u(y)| ≤ M ω(|x − y|) for all k ∈ N and all x, y ∈ X. Then ‖∇u_k − ∇u‖_∞ converges to 0.
Proof. The modulus bounds yield, for all x, y ∈ X, the Taylor-type estimates |u_k(y) − u_k(x) − ⟨∇u_k(x), y − x⟩| ≤ M |y − x| ω(|y − x|), and the same for u. By subtracting the estimate for u from the estimate for u_k we get ⟨∇u_k(x) − ∇u(x), y − x⟩ ≤ 2‖u_k − u‖_∞ + 2M |y − x| ω(|y − x|). (4.12) Given ε > 0 we may choose k_0 ∈ N so that ‖u_k − u‖_∞ ≤ ε²/4 for all k ≥ k_0. By taking h ∈ X with |h| = ε and y = x + h in (4.12) we obtain |∇u_k(x) − ∇u(x)| ≤ ε/2 + 2Mω(ε) for all x ∈ X and k ≥ k_0, and the result follows by letting ε → 0⁺.

It is clear that property (2), together with the fact that {(f_n, G_n)}_n converges uniformly to (f, G), implies that max{M_ω(∇F_n), M_ω(∇F)} ≤ A* C for n large enough, where A* > 0 is a constant comparable to ‖f‖_∞ + ‖G‖_∞ + A(f, G). Combining Lemma 4.5 and this observation, we can apply Lemma 4.6 to (F_n)_n and F, and conclude that lim_{n→∞} ∇F_n = ∇F uniformly on X.
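The quantitative content of Lemma 4.6 can be checked numerically in one dimension. With ω(t) = t, optimizing the bound |∇u_k − ∇u| ≤ 2‖u_k − u‖_∞/ε + 2Mε over ε > 0 gives ‖u_k′ − u′‖_∞ ≤ 4√(M ‖u_k − u‖_∞). The sketch below (with the hypothetical test functions u = sin and u_k = u + sin(k·)/k², whose derivatives share the Lipschitz constant M = 2) verifies this inequality:

```python
import numpy as np

# One-dimensional illustration of Lemma 4.6 with omega(t) = t: uniform
# convergence plus a common Lipschitz bound M on the derivatives forces
# ||u_k' - u'||_inf <= 4 * sqrt(M * ||u_k - u||_inf).
x = np.linspace(0.0, 2.0 * np.pi, 20001)
u = np.sin(x)
du = np.cos(x)
M = 2.0  # common Lipschitz constant of all the derivatives below

for k in [5, 50, 500]:
    uk = np.sin(x) + np.sin(k * x) / k**2     # ||uk - u||_inf = 1/k^2
    duk = np.cos(x) + np.cos(k * x) / k       # exact derivative of uk
    delta = np.max(np.abs(uk - u))
    grad_gap = np.max(np.abs(duk - du))
    assert grad_gap <= 4.0 * np.sqrt(M * delta) + 1e-9
```

Note that dropping the common Lipschitz bound breaks the conclusion: u_k = u + sin(k·)/k still converges uniformly to u, yet its derivative does not converge to u′.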
The proof of Theorem 4.2 is complete.

Remarks. (1) Theorems 4.1 and 4.2 have analogues for superreflexive Banach spaces X and the classes C^{1,ω}(X), under the assumption that t^α/ω(t) is a nondecreasing function; we leave their formulation to the reader. The proofs are the same, with obvious changes. (2) It would be interesting to know whether one can improve Theorems 1.5 and 4.1 so as to find an extension operator with an additional property.

The Lipschitz case
In this section we will show a variant of our main result in which we are given (f, G) with G bounded but f unbounded, and we want an extension F with ∇F bounded.
Let us consider the function ψ := ϕ̃ ∘ |·|, where ω̃ := ω on [0, 1], ω̃ := ω(1) on [1, +∞), and ϕ̃(t) := ∫₀ᵗ ω̃(s) ds, and let us check that −ψ is strongly C̃ϕ̃-paraconvex for some absolute constant C̃ > 0. Indeed, observe that the relevant estimate holds for all u, v ∈ X such that |u|, |v| ≥ 1. If one of the vectors, say u, is inside the unit ball and the other is not, then the segment [u, v] intersects the unit sphere at a unique point z, and the estimate follows by splitting [u, v] at z. In any case we arrive at (5.1). Now, given x, y ∈ X and λ ∈ [0, 1], we can use (5.1) to conclude that −ψ is strongly C̃ϕ̃-paraconvex, with C̃ = 2A + 4. This constant C̃ is not to be confused with that of the statement of the theorem. Now let M := A(f, G) < ∞ and let y, z ∈ E, x ∈ X. Observe that if |x − y|, |x − z| ≤ 1, then the required inequality follows at once. On the other hand, if |x − y| > 1 or |x − z| > 1, then, using that f is Lipschitz, that G is bounded, the fact that t ≤ ϕ(1)^{-1} ϕ(t) for every t ≥ 1, and finally the convexity of ϕ, we can draw the same conclusion. The preceding observations show that the family of functions associated with the jet (f, G) and ψ satisfies conditions (1), (2) and (3) of Theorem 3.2. In addition, since (ϕ̃)′ = ω̃ ≤ ω(1), the function m := sup_{y∈E} ψ_y is Lipschitz with Lip(m) ≤ ‖G‖_∞ + ω(1)M. Applying Theorem 3.2 (by means of formula (3.9)), we obtain a Lipschitz function F ∈ C^{1,ω̃}(X) such that (F, ∇F) = (f, G) on E, A(F, ∇F) ≤ C̃M and Lip(F) ≤ ‖G‖_∞ + ω(1)M. Notice that Theorem 3.2 can be applied with ω̃ and ϕ̃ because the assumption that the modulus of continuity ω satisfy lim_{t→∞} ω(t) = ∞ is not needed in the proof of Proposition 2.6. Finally, observe that, since ω̃ ≤ ω, we have F ∈ C^{1,ω}(X) as well.
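The elementary properties of the truncated modulus ω̃ and its primitive ϕ̃ used above can be verified numerically. The sketch below takes ω(t) = √t as a concrete modulus (an illustrative choice) and checks the inequality t ≤ ϕ(1)^{-1}ϕ(t) for t ≥ 1, the convexity of ϕ̃, and the fact that ψ = ϕ̃(|·|) is Lipschitz with constant ω(1):

```python
import numpy as np

# Truncated modulus from the Lipschitz case, with omega(t) = sqrt(t):
# omega~ = omega on [0, 1] and omega~ = omega(1) beyond; phi~ = its integral.
omega = np.sqrt
omega_t = lambda t: np.minimum(omega(t), omega(1.0))                # omega~
phi = lambda t: (2.0 / 3.0) * t**1.5                                # int of sqrt
phi_t = lambda t: np.where(t <= 1.0, phi(t), phi(1.0) + (t - 1.0))  # phi~

t = np.linspace(0.0, 5.0, 2001)

# t <= phi(1)^{-1} phi(t) for t >= 1 (used in the far-away estimate):
# it follows from convexity of phi and phi(0) = 0, i.e. phi(t)/t increasing.
far = t[t >= 1.0]
assert np.all(far <= phi(far) / phi(1.0) + 1e-12)

# phi~ is convex: nonnegative second differences on a uniform grid.
vals = phi_t(t)
assert np.all(vals[2:] - 2.0 * vals[1:-1] + vals[:-2] >= -1e-12)

# psi := phi~(|.|) is Lipschitz with constant omega(1), since (phi~)' = omega~.
x = np.linspace(-5.0, 5.0, 1001)
psi = phi_t(np.abs(x))
assert np.max(np.abs(np.diff(psi) / np.diff(x))) <= omega(1.0) + 1e-9
```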

The class C 1,u B (X)
Let X be a Hilbert space, and let C 1,u B (X) stand for the space of all Fréchet differentiable functions on X whose derivatives are uniformly continuous on each bounded subset of X.
In this section we combine Theorem 4.1 with a standard partition of unity in order to characterize the 1-jets (f, G) which are restrictions to E of some (F, ∇F) with F ∈ C^{1,u}_B(X).

Proof. For every x ∈ X and y, z ∈ E with |x − y| + |x − z| > 0, let us denote

θ(x, y, z) := |f(y) + ⟨G(y), x − y⟩ − f(z) − ⟨G(z), x − z⟩| / (|x − y| + |x − z|).
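The quantity θ(x, y, z) measures the first-order incompatibility of the jet at the pair y, z as seen from x. For the jet of a smooth function it is controlled by the distances involved; the sketch below evaluates θ for the jet of f₀(x) = |x|² on R² (a hypothetical example, not data from the paper) and confirms the bound θ(x, y, z) ≤ |x − y| + |x − z| on random triples:

```python
import numpy as np

rng = np.random.default_rng(3)

def theta(x, y, z, f, G):
    """The quantity theta(x, y, z) from the proof, computed in R^n."""
    num = abs(f(y) + G(y) @ (x - y) - f(z) - G(z) @ (x - z))
    return num / (np.linalg.norm(x - y) + np.linalg.norm(x - z))

# Jet of the C^{1,1} function f0(x) = |x|^2 on R^2, with G = its gradient.
f0 = lambda p: p @ p
g0 = lambda p: 2.0 * p

# For this jet the numerator equals | |x-z|^2 - |x-y|^2 |, so
# theta(x, y, z) <= |x - y| + |x - z| for every triple.
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2)) * 3.0
    d = np.linalg.norm(x - y) + np.linalg.norm(x - z)
    if d > 1e-9:
        assert theta(x, y, z, f0, g0) <= d + 1e-9
```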
Proof. From the above construction of ω_k and α_k it is clear that the corresponding estimate holds for all x ∈ B_{3k} and all y, z ∈ E ∩ B_{3k}. This implies the required inequality for all x ∈ B_{3k} and all y, z ∈ E ∩ B_{3k}. On the other hand, if x ∉ B_{3k} and y, z ∈ E_k, then we have |x − y| ≥ 1 and |x − z| ≥ 1, and using the convexity of ϕ_k we get the same bound.

Therefore we obtain the desired estimate in every case.
Now we can apply Theorem 4.1 to find a function F_k ∈ C^{1,ω_k}(X) such that (F_k, ∇F_k)|_{E_k} = (f_k, G_k), with (F_k, ∇F_k) bounded. Let us finally define F := Σ_{k=1}^∞ ψ_k F_k. Since the sum defining F is finite on every bounded subset of X, the functions ψ_k, ∇ψ_k, F_k and ∇F_k are bounded, and ∇ψ_k and ∇F_k are uniformly continuous on X, it is clear that F ∈ C^{1,u}_B(X). Also, using the facts that Σ_{k=1}^∞ ∇ψ_k = 0 and that F_k(y) = f_k(y) = f(y) and ∇F_k(y) = G_k(y) = G(y) whenever y ∈ supp(ψ_k) ∩ E, we have, for each y ∈ E, F(y) = f(y) and ∇F(y) = Σ_{k=1}^∞ ψ_k(y)∇F_k(y) + Σ_{k=1}^∞ F_k(y)∇ψ_k(y) = G(y) + f(y) Σ_{k=1}^∞ ∇ψ_k(y) = G(y).
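The gluing step can be illustrated in one dimension: a family of bumps is normalized into a partition of unity (so that Σ_k ψ_k = 1, hence Σ_k ∇ψ_k = 0), and a function assembled as F = Σ_k ψ_k F_k reproduces any target with which all the active local pieces agree. The bump profile and the centers below are illustrative choices, not the paper's construction:

```python
import numpy as np

# 1-D sketch of the gluing step: smooth bumps psi_k summing to 1, local
# extensions F_k, and F = sum_k psi_k * F_k.  Because sum_k psi_k' = 0,
# the cross terms sum_k F_k * psi_k' vanish wherever all active F_k agree.
x = np.linspace(0.0, 10.0, 5001)

def bump(c):
    # smooth bump centered at c, supported in (c - 2, c + 2)
    s = np.clip(1.0 - ((x - c) / 2.0)**2, 0.0, None)
    return np.exp(-1.0 / np.where(s > 0, s, 1.0)) * (s > 0)

centers = np.arange(-2.0, 13.0, 1.0)
raw = np.array([bump(c) for c in centers])
psi = raw / raw.sum(axis=0)                  # normalize: a partition of unity

assert np.allclose(psi.sum(axis=0), 1.0)     # sum_k psi_k = 1 on [0, 10]

# If every local extension agrees with a target f on all of [0, 10],
# the glued function reproduces f exactly.
f = np.sin(x)
F = (psi * f).sum(axis=0)                    # here F_k = f for each k
assert np.allclose(F, f)
```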