A Constructive Sharp Approach to Functional Quantization of Stochastic Processes

We present a constructive approach to the functional quantization problem for stochastic processes, with an emphasis on Gaussian processes. The approach is constructive in the sense that we reduce the infinite-dimensional functional quantization problem to a finite-dimensional quantization problem that can be solved numerically. Our approach achieves the sharp rate of the minimal quantization error and can be used to quantize the path space of Gaussian processes and also, for example, of Lévy processes.


Introduction
We consider a separable Banach space (E, ‖·‖) and a Borel random variable X : (Ω, F, ℙ) → (E, B(E)) with finite r-th moment 𝔼‖X‖^r < ∞ for some r ∈ [1, ∞).
For a given natural number n ∈ ℕ, the quantization problem consists in finding a set α ⊂ E with card(α) ≤ n that minimizes

e_r(X, E, α) := (𝔼 min_{a∈α} ‖X − a‖^r)^{1/r}  (1.1)

over all such sets. Sets α of this type are called n-codebooks or n-quantizers. The corresponding infimum

e_{n,r}(X, E) := inf_{α⊂E, card(α)≤n} e_r(X, E, α)  (1.2)

is called the n-th L^r-quantization error of X in E, and any n-quantizer α fulfilling

e_r(X, E, α) = e_{n,r}(X, E)  (1.3)

is called an r-optimal n-quantizer. For a given n-quantizer α one defines the nearest neighbor projection

π_α : E → α,  x ↦ Σ_{a∈α} a 1_{C_a(α)}(x),  (1.4)

where the Voronoi partition {C_a(α), a ∈ α} is defined as a Borel partition of E satisfying

C_a(α) ⊂ {x ∈ E : ‖x − a‖ = min_{b∈α} ‖x − b‖},  a ∈ α.  (1.5)

The random variable π_α(X) is called the α-quantization of X. One can easily verify that π_α(X) is the best quantization of X in α ⊂ E, which means that for every random variable Y with values in α we have

e_r(X, E, α) = (𝔼‖X − π_α(X)‖^r)^{1/r} ≤ (𝔼‖X − Y‖^r)^{1/r}.  (1.6)

Applications of quantization go back to the 1940s, when quantization was used in the finite-dimensional setting E = ℝ^d (so-called optimal vector quantization) for signal compression and information processing (see, e.g., [1, 2]). Since the beginning of the 21st century, quantization has also been applied in finance, especially for pricing path-dependent and American-style options; here, vector quantization [3] as well as functional quantization [4, 5] is useful. The term functional quantization is used when the Banach space E is a function space, such as (E, ‖·‖) = (L^p([0,1]), ‖·‖_p) or (C([0,1]), ‖·‖_∞). In this case, the realizations of X can be seen as the paths of a stochastic process.
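To make the definitions concrete, the nearest neighbor projection and a Monte Carlo estimate of e_r can be sketched in a few lines for E = ℝ^d (the sample law and the 3-point codebook below are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def nearest_neighbor_projection(x, codebook):
    """Map each row of x to its nearest codebook point (Euclidean norm)."""
    # distances: shape (num_samples, num_codewords)
    d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
    idx = np.argmin(d, axis=1)          # Voronoi cell index of each sample
    return codebook[idx]

def quantization_error(x, codebook, r=2):
    """Monte Carlo estimate of e_r(X, R^d, codebook) from samples x."""
    q = nearest_neighbor_projection(x, codebook)
    return (np.mean(np.linalg.norm(x - q, axis=1) ** r)) ** (1.0 / r)

rng = np.random.default_rng(0)
x = rng.standard_normal((10000, 2))          # samples of X ~ N(0, I_2)
codebook = np.array([[0.0, 0.0], [1.5, 0.0], [-1.5, 0.0]])
err = quantization_error(x, codebook, r=2)   # estimate of e_2 for this codebook
```

Replacing the hand-picked codebook by the output of an optimization routine (as in Section 7) turns this into an estimate of e_{n,r}.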
A question of theoretical as well as practical interest is the issue of high-resolution quantization, which concerns the behavior of e_{n,r}(X, E) as n tends to infinity. By separability of (E, ‖·‖) we can choose a dense subset {c_i, i ∈ ℕ} and deduce, in view of (1.2), that e_{n,r}(X, E) tends to zero as n tends to infinity. A natural question is then whether it is possible to describe the asymptotic behavior of e_{n,r}(X, E). It will be convenient to write, for sequences (a_n)_{n∈ℕ} and (b_n)_{n∈ℕ},

a_n ∼ b_n if a_n/b_n → 1,  a_n ≲ b_n if lim sup_n a_n/b_n ≤ 1,  a_n ≳ b_n if lim inf_n a_n/b_n ≥ 1,

and a_n ≈ b_n if 0 < lim inf_n a_n/b_n ≤ lim sup_n a_n/b_n < ∞. In the finite-dimensional setting (ℝ^d, ‖·‖) this behavior is satisfactorily described by the Zador theorem (see [6]) for nonsingular distributions ℙ_X. In the infinite-dimensional case no such global result holds so far without additional restrictions. To describe one of the most famous results in this field, we call a measurable function ρ : (s, ∞) → (0, ∞), s ≥ 0, regularly varying at infinity with index b ∈ ℝ if for every c > 0

lim_{x→∞} ρ(cx)/ρ(x) = c^b.

Theorem 1.1 (see [7]). Let X be a centered Gaussian random variable with values in the separable Hilbert space (H, ⟨·,·⟩) and (λ_n)_{n∈ℕ} the decreasing eigenvalues of the covariance operator C_X : H → H, u ↦ 𝔼(⟨u, X⟩X) (which is a symmetric trace class operator). Assume that λ_n ∼ ρ(n) for some regularly varying function ρ with index −b < −1. Then, the asymptotics of the quantization error is given by

e_{n,2}(X, H) ∼ ((b/2)^{b−1} b/(b−1))^{1/2} ω(log n)^{−1/2},  n → ∞,  (1.9)

where ω(x) := 1/(x ρ(x)).
Note that any change of ∼ in the assumption λ_n ∼ ρ(n) to either ≲, ≈, or ≳ leads to the same change in (1.9). Theorem 1.1 can also be extended to the index b = 1 (see [7]). Furthermore, a generalization to arbitrary moments r (see [8]) as well as similar results for special Gaussian random variables and diffusions in non-Hilbertian function spaces (see, e.g., [9–11]) have been developed. Moreover, several authors established a precise link between the quantization error and the behavior of the small ball function of a Gaussian measure (see, e.g., [12, 13]), which can be used to derive asymptotics of quantization errors. More recently, sharp optimal rates have been developed for several types of Lévy processes by several authors (see, e.g., [14–17]). Coming back to the practical use of quantizers as good approximations of a stochastic process, one is strongly interested in a constructive approach that allows one to implement the coding strategy and to compute, at least numerically, good codebooks.
Considering again Gaussian random variables in a Hilbert space setting, the proof of Theorem 1.1 shows how to construct asymptotically r-optimal n-quantizers for these processes, that is, sequences of n-quantizers (α_n)_{n∈ℕ} satisfying

e_r(X, E, α_n) ∼ e_{n,r}(X, E),  n → ∞.  (1.10)

These quantizers can be constructed by reducing the quantization problem to the quantization problem of a finite-dimensional normally distributed random variable. Even if almost no explicit formulas for optimal codebooks in finite dimensions are known, their existence is guaranteed (see [6, Theorem 4.12]), and there exist many deterministic and stochastic numerical algorithms to compute optimal codebooks (see, e.g., [18, 19] or [20]). Unfortunately, one needs to know the eigenvalues and eigenvectors of the covariance operator C_X explicitly to pursue this approach.
If we consider other, non-Hilbertian function spaces (E, ‖·‖) or non-Gaussian random variables in an infinite-dimensional Hilbert space, much less is known on how to construct asymptotically optimal quantizers. Most approaches to the asymptotics of the quantization error are either nonconstructive (e.g., [12, 13]), or tailored to one specific process type (e.g., [9–11]), or the constructed quantizers do not achieve the sharp rate in the sense of (1.10) (e.g., [17] or [20]) but just the weak rate

e_r(X, E, α_n) ≈ e_{n,r}(X, E),  n → ∞.  (1.11)

In this paper, we develop a constructive approach to calculate sequences of asymptotically r-optimal n-quantizers in the sense of (1.10) for a broad class of random variables in infinite-dimensional Banach spaces (Section 2). Constructive means in this case that we reduce the quantization problem to the quantization problem of an ℝ^d-valued random variable, which can be solved numerically. This approach can be used either in Hilbert spaces when the eigenvalues and eigenvectors of the covariance operator of a Gaussian random variable are unknown (Sections 3.1 and 3.2), or for quantization problems in other Banach spaces (Sections 4 and 5).
In Section 4, we discuss Gaussian random variables in (C([0,1]), ‖·‖_∞). This part is related to the PhD thesis of Wilbertz [20]. More precisely, we sharpen his constructive results by showing that the quantizers constructed in the thesis also achieve the sharp rate for the asymptotic quantization error in the sense of (1.10). Moreover, we show that the dimensions of the subspaces containing these quantizers can be reduced without losing the sharp asymptotics property.
In Section 5, we use some ideas of Luschgy and Pagès [17] and develop asymptotically optimal quantizers for Gaussian random variables and a broad class of Lévy processes in the Banach space (L^p([0,1]), ‖·‖_p).
It is worth mentioning that all these quantizers can be constructed without knowing the true rate of the quantization error. More precisely, we only need a rough lower bound for the quantization error, that is, e_{n,r}(X, E) ≳ C₁(log n)^{−b₁}, whereas the true rate e_{n,r}(X, E) ≈ C₂(log n)^{−b₂} holds for optimal but still unknown constants C₂, b₂. The crucial factors for the numerical implementation are the dimensions of the subspaces containing the asymptotically optimal quantizers. We will calculate the dimensions of the subspaces obtained through our approach, and we will see that for all analyzed Gaussian processes, and also for many Lévy processes, we come very close to the known asymptotics of the optimal dimension in the case of Gaussian processes in infinite-dimensional Hilbert spaces.
We will give some important examples of Gaussian and Lévy processes in Section 6, and finally illustrate some of our results in Section 7.

Notations and Definitions
Unless explicitly defined otherwise, the following notation holds throughout the paper.
(i) We denote by X a Borel random variable in the separable Banach space (E, ‖·‖) with card(supp(ℙ_X)) = ∞.

(ii) ‖·‖ will always denote the norm in E, whereas ‖·‖_{L^r(ℙ)} will denote the norm in L^r(Ω, F, ℙ).

(iii) The scalar product in a Hilbert space H will be denoted by ⟨·,·⟩.

(iv) The smallest integer above a given real number x will be denoted by ⌈x⌉.

(v) A sequence (g_j)_{j∈ℕ} ∈ E^ℕ is called admissible for a centered Gaussian random variable X in E if and only if for any sequence (ξ_i)_{i∈ℕ} of independent N(0,1)-distributed random variables the series Σ_{i=1}^∞ ξ_i g_i converges a.s. in (E, ‖·‖) with Σ_{i=1}^∞ ξ_i g_i =^d X, so that the partial sums Σ_{i=1}^m ξ_i g_i approximate X as m → ∞. A precise characterization of admissible sequences can be found in [21].

(vi) An orthonormal system (ONS) (h_i)_{i∈ℕ} is called rate optimal for X in the Hilbert space H if and only if

𝔼‖X − Σ_{i=1}^m ⟨h_i, X⟩ h_i‖² ≈ Σ_{j>m} λ_j

as m → ∞, where (λ_j)_{j∈ℕ} denote the decreasing eigenvalues of the covariance operator C_X.

Asymptotically Optimal Quantizers
The main idea is contained in the subsequent abstract result. Its proof is based on elementary but very useful properties of quantization errors (Lemma 2.1, see [22]). We impose the following conditions, for an infinite subset J ⊂ ℕ.

Condition 1. There exist subspaces F_m ⊂ E with dim F_m = m and linear operators V_m : E → F_m with ‖V_m‖_op ≤ 1 for m ∈ J.

Condition 2. There exist linear, isometric, and surjective operators φ_m : (F_m, ‖·‖) → (ℝ^m, ‖·‖_m) for m ∈ J, where ‖·‖_m denotes some norm on ℝ^m.

Condition 3. There exist random variables Z_m for m ∈ J in E with Z_m =^d X, such that the approximation error (𝔼‖Z_m − V_m(Z_m)‖^r)^{1/r} tends to zero as m → ∞ along J.
Remark 2.2.The crucial point in Condition 1 is the norm one restriction for the operators V m .
Condition 2 becomes important when constructing the quantizers in ℝ^m, equipped with, in the best case, some well-known norm. As we will see in the proof of the subsequent theorem, to show asymptotic optimality of a constructed sequence of quantizers one needs to know only a rough lower bound for the asymptotic quantization error. In fact, this lower bound allows us, in combination with Condition 3, to choose explicitly a sequence (m_n)_{n∈ℕ} ∈ J^ℕ such that

(𝔼‖Z_{m_n} − V_{m_n}(Z_{m_n})‖^r)^{1/r} = o(e_{n,r}(X, E)),  n → ∞.  (2.7)

Theorem 2.3. Assume that Conditions 1–3 hold for some infinite subset J ⊂ ℕ. One chooses a sequence (m_n)_{n∈ℕ} ∈ J^ℕ such that (2.7) is satisfied. For n ∈ ℕ, let α_n be an r-optimal n-quantizer for φ_{m_n}(V_{m_n}(Z_{m_n})) in (ℝ^{m_n}, ‖·‖_{m_n}). Then, (φ_{m_n}^{−1}(α_n))_{n∈ℕ} is an asymptotically r-optimal sequence of n-quantizers for X in E, and

e_r(X, E, φ_{m_n}^{−1}(α_n)) ∼ (𝔼‖X − π_{φ_{m_n}^{−1}(α_n)}(X)‖^r)^{1/r} ∼ e_{n,r}(X, E)

as n → ∞.
Proof. Using Condition 3 and the fact that e_{n,r}(X, E) > 0 for all n ∈ ℕ (since card(supp(ℙ_X)) = ∞), we can choose a sequence (m_n)_{n∈ℕ} fulfilling (2.7). Using Lemma 2.1 and Condition 2, we see that φ_{m_n}^{−1}(α_n) is an r-optimal n-quantizer for V_{m_n}(Z_{m_n}) in F_{m_n}. Then, by using Condition 1, (2.7), and Lemma 2.1, we get the chain of estimates (2.9). The last equivalence of the assertion follows from (1.6).
Remark 2.5. We will usually choose Z_m = X for all m ∈ ℕ (with an exception in Section 3) and J = ℕ.
Remark 2.6. The crucial factor for the numerical implementation of the procedure is the dimension sequence (m_n)_{n∈ℕ} of the subspaces (F_{m_n})_{n∈ℕ}. For the well-known case of Brownian motion in the Hilbert space H = L²([0,1]) it is known that this dimension sequence can be chosen as m_n ≈ log n, n → ∞. In the following examples we will see that we can often obtain similar orders (log n)^c for constants c just slightly larger than one.
We point out that there is a nonasymptotic version of Theorem 2.3 for nearly optimal n-quantizers, that is, for n-quantizers that are optimal up to some ε > 0. Its proof is analogous to the proof of Theorem 2.3.

Proposition 2.7. Assume that Conditions 1–3 hold. Let m ∈ J, and for n ∈ ℕ set ξ_n := φ_m(V_m(Z_m)). Then, it holds for every n ∈ ℕ and for every r-optimal n-quantizer α_n for ξ_n in (ℝ^m, ‖·‖_m) that

e_r(X, E, φ_m^{−1}(α_n)) ≤ e_{n,r}(X, E) + (𝔼‖Z_m − V_m(Z_m)‖^r)^{1/r}.

Gaussian Processes with Hilbertian Path Space
In this section, let X be a centered Gaussian random variable in the separable Hilbert space (H, ⟨·,·⟩). Following the approach used in the proof of Theorem 1.1, we have for every sequence (ξ_i)_{i∈ℕ} of independent N(0,1)-distributed random variables

X =^d Σ_{i=1}^∞ √λ_i ξ_i f_i,  (3.1)

where λ_i denote the eigenvalues and f_i the corresponding orthonormal eigenvectors of the covariance operator C_X of X (Karhunen–Loève expansion). If these parameters are known, we can choose a sequence (d_n)_{n∈ℕ} such that a sequence of optimal quantizers α_n for X_n := Σ_{i=1}^{d_n} √λ_i ξ_i f_i is asymptotically optimal for X in H. In order to construct asymptotically optimal quantizers for Gaussian random variables with unknown eigenvalues or eigenvectors of the covariance operator, we start from more general expansions. In fact, we just need one of the two orthogonalities, either in L²(ℙ) or in H.
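For standard Brownian motion in H = L²([0,1]) the Karhunen–Loève eigenpairs are known explicitly, λ_i = (π(i − 1/2))^{−2} and f_i(t) = √2 sin((i − 1/2)πt), so the truncated expansion can be sketched directly (truncation level and grid are illustrative choices):

```python
import numpy as np

def bm_karhunen_loeve(t, xi):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1].

    t  : array of time points in [0, 1]
    xi : d independent N(0,1) coefficients; d is the truncation level
    """
    d = len(xi)
    i = np.arange(1, d + 1)
    lam = 1.0 / (np.pi * (i - 0.5)) ** 2                       # eigenvalues of C_X
    f = np.sqrt(2.0) * np.sin(np.outer(t, (i - 0.5) * np.pi))  # eigenfunctions
    return f @ (np.sqrt(lam) * xi)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)
path = bm_karhunen_loeve(t, rng.standard_normal(50))  # one approximate BM path
```

Quantizing the coefficient vector (√λ_i ξ_i)_{i≤d} and mapping the codewords through the eigenfunctions yields path quantizers, which is the reduction described above.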
Before we use these representations for X to find suitable triples (V_m, F_m, φ_m) as in Theorem 2.3, note that for Gaussian random variables in H, under suitable assumptions, the following expansions are available.

(1) Let (h_i)_{i∈ℕ} be an orthonormal basis of H. Then

X = Σ_{i=1}^∞ ⟨h_i, X⟩ h_i  a.s.  (3.2)

Compared to (3.1), the coefficients ⟨h_i, X⟩ are still Gaussian but in general not independent.
(2) Let (g_j)_{j∈ℕ} be an admissible sequence for X in H, such that

X =^d Σ_{j=1}^∞ ξ_j g_j.  (3.3)

Compared to (3.1), the sequence (g_j)_{j∈ℕ} is in general not orthogonal.

Moreover, for Gaussian random variables one has

e_{n,2}(X, H) ≈ e_{n,s}(X, H),  n → ∞,  (3.4)

for all s ≥ 1; see [13]. Thus, we will focus on the case s = 2 when searching for lower bounds for the quantization errors.

Orthonormal Basis
Let (h_i)_{i∈ℕ} be an orthonormal basis of H. For the subsequent subsection we use the following notation.

(1) As subspaces we set F_m := span{h_1, …, h_m} for m ∈ ℕ.

(2) As operators V_m : H → F_m we take the orthogonal projections V_m(x) := Σ_{i=1}^m ⟨h_i, x⟩ h_i.

(3) Define the linear, surjective, and isometric operators φ_m by φ_m : F_m → (ℝ^m, ‖·‖_2), h_i ↦ e_i, where e_i denotes the i-th unit vector in ℝ^m, 1 ≤ i ≤ m.
Theorem 3.1. Assume that the eigenvalue sequence (λ_j)_{j∈ℕ} of the covariance operator C_X satisfies λ_j ≈ j^{−b} for some b > 1, and let ε > 0 be arbitrary. Assume further that (h_j)_{j∈ℕ} is a rate optimal ONS for X in H. One sets m_n := ⌈(log n)^{1+ε}⌉ for n ∈ ℕ. Then, one gets for every sequence (α_n)_{n∈ℕ} of r-optimal n-quantizers for φ_{m_n}(V_{m_n}(X)) in (ℝ^{m_n}, ‖·‖_2)

e_r(X, H, φ_{m_n}^{−1}(α_n)) ∼ e_{n,r}(X, H)

as n → ∞.
Proof. Let (f_j)_{j∈ℕ} be the corresponding orthonormal eigenvector sequence of C_X. Classical eigenvalue theory yields for every m ∈ ℕ

𝔼‖X − Σ_{i=1}^m ⟨f_i, X⟩ f_i‖² = Σ_{j>m} λ_j ≈ m^{−(b−1)}.  (3.7)

Combining this with rate optimality of the ONS (h_j)_{j∈ℕ} for X, we get

𝔼‖X − V_m(X)‖² ≈ Σ_{j>m} λ_j ≈ m^{−(b−1)}.  (3.8)

Using the equivalence of the r-norms of Gaussian random variables [23, Corollary 3.2], and since X − V_{m_n}(X) is Gaussian, we get for all r ≥ 1

(𝔼‖X − V_{m_n}(X)‖^r)^{1/r} ≈ (𝔼‖X − V_{m_n}(X)‖²)^{1/2} ≈ (log n)^{−(1+ε)(b−1)/2}.

With ω as in Theorem 1.1, we get by using (3.4) and Theorem 1.1 the weak asymptotics e_{n,r}(X, H) ≈ ω(log n)^{−1/2} ≈ (log n)^{−(b−1)/2}, so that the sequence (m_n)_{n∈ℕ} satisfies (2.7), and the assertion follows from Theorem 2.3.

Admissible Sequences
In order to show that linear operators V_m similar to those used in the subsection above satisfy the requirements of Theorem 2.3, we need some preparation. Since the covariance operator C_X of a Gaussian random variable is symmetric and compact (in fact trace class), we will use a well-known result concerning such operators. This result can be used for quantization in the following way.
Lemma 3.2. Let X be a centered Gaussian random variable with values in the Hilbert space H and X = X₁ + X₂, where X₁ and X₂ are independent centered Gaussian random variables. Then, for all j ∈ ℕ,

λ_j(C_{X₁}) ≤ λ_j(C_X) and λ_j(C_{X₂}) ≤ λ_j(C_X),  (3.12)

where λ_j(·) denotes the j-th largest eigenvalue of the respective covariance operator.

Proof. By independence, C_X = C_{X₁} + C_{X₂}, and the covariance operator of a centered Gaussian random variable is positive semidefinite. Hence, by using a result on the relation of the eigenvalues of such operators (see, e.g., [24, page 213]), we get the inequalities (3.12).
Let (g_i)_{i∈ℕ} be an admissible sequence for X, and assume that Σ_{i=1}^∞ ξ_i g_i = X a.s. In this subsection, we use the following notation.

(1) For m ∈ ℕ we set X^{(m)} := Σ_{i=1}^m ξ_i g_i, and we denote by λ_j and f_j the eigenvalues and corresponding orthonormal eigenvectors of C_X, and by λ_j^{(m)} and f_j^{(m)} those of C_{X^{(m)}}.

(2) As subspaces we take F_m := span{f_1^{(m)}, …, f_m^{(m)}}, and we define the linear operators V_m : H → F_m by

V_m(f_j) := (λ_j^{(m)}/λ_j)^{1/2} f_j^{(m)} for j ≤ m,  V_m(f_j) := 0 for j > m.

Furthermore, it is important to mention that one does not need to know λ_j and f_j explicitly to construct the subsequent quantizers, since for any m ∈ ℕ one can find a random variable Z_m =^d X such that V_m(Z_m) = Σ_{i=1}^m ξ_i g_i (see the proof of Theorem 3.3), which is explicitly known and sufficient for the construction.

(3) Define the linear, surjective, and isometric operators φ_m by φ_m : F_m → (ℝ^m, ‖·‖_2), f_i^{(m)} ↦ e_i, where e_i denotes the i-th unit vector of ℝ^m, 1 ≤ i ≤ m.
Theorem 3.3. Assume that the eigenvalue sequence (λ_j)_{j∈ℕ} of the covariance operator C_X satisfies λ_j ≈ j^{−b} for some b > 1, and let ε > 0 be arbitrary. Assume further that (g_j)_{j∈ℕ} is a rate optimal admissible sequence for X in H. One sets m_n := ⌈(log n)^{1+ε}⌉ for n ∈ ℕ. Then, there exist random variables Z_m, m ∈ ℕ, with Z_m =^d X such that for every sequence (α_n)_{n∈ℕ} of r-optimal n-quantizers for φ_{m_n}(V_{m_n}(Z_{m_n})) in (ℝ^{m_n}, ‖·‖_2)

e_r(X, H, φ_{m_n}^{−1}(α_n)) ∼ e_{n,r}(X, H)

as n → ∞.
Proof. Linearity of (V_m)_{m∈ℕ} follows from the orthogonality of the eigenvectors. In view of the inequalities for the eigenvalues in Lemma 3.2 and the orthonormality of the family (f_i)_{i∈ℕ}, we have for every h ∈ H

‖V_m(h)‖² = Σ_{j=1}^m (λ_j^{(m)}/λ_j) ⟨f_j, h⟩² ≤ ‖h‖²,

so that Condition 1 holds. Note next that for every m ∈ ℕ there exist independent N(0,1)-distributed random variables (ξ_j^{(m)})_{1≤j≤m} such that X^{(m)} = Σ_{j=1}^m (λ_j^{(m)})^{1/2} ξ_j^{(m)} f_j^{(m)}. Then, we choose random variables Z_m =^d X such that V_m(Z_m) = X^{(m)} = Σ_{i=1}^m ξ_i g_i, where (ξ_i)_{1≤i<∞} is a sequence of independent N(0,1)-distributed random variables. We set m_n := ⌈(log n)^{1+ε}⌉ and get, by using rate optimality of the admissible sequences (g_j)_{j∈ℕ} and (√λ_j f_j)_{j∈ℕ},

𝔼‖Z_m − V_m(Z_m)‖² ≈ Σ_{j>m} λ_j ≈ m^{−(b−1)},  (3.23)

where rate optimality of (√λ_j f_j)_{j∈ℕ} follows from the Karhunen–Loève expansion (3.1).

Using the equivalence of the r-norms of Gaussian random variables [23, Corollary 3.2], and since Z_{m_n} − V_{m_n}(Z_{m_n}) is Gaussian, we get for all r ≥ 1

(𝔼‖Z_{m_n} − V_{m_n}(Z_{m_n})‖^r)^{1/r} ≈ (𝔼‖Z_{m_n} − V_{m_n}(Z_{m_n})‖²)^{1/2} ≈ (log n)^{−(1+ε)(b−1)/2}.  (3.24)

With ω as in Theorem 1.1, we get by using (3.4) and Theorem 1.1 the weak asymptotics e_{n,r}(X, H) ≈ ω(log n)^{−1/2} ≈ (log n)^{−(b−1)/2}, n → ∞. Therefore, the sequence (m_n)_{n∈ℕ} satisfies (2.7) since (1+ε)(b−1)/2 > (b−1)/2, and the assertion follows from Theorem 2.3.

Comparison of the Different Schemes
At least in the case r = 2, we have a strong preference for the method described in Section 3.1. We use the notation of the above subsections, with an additional index i = 1, 2 distinguishing the two constructions for m, n ∈ ℕ, where the quantizers α_n^{(i)}, i = 1, 2, are defined as in Theorems 3.1 and 3.3. Note that for this purpose the codebook size n and the subspace dimension dim F_m = m can be chosen arbitrarily (i.e., m does not depend on n). The ONS (h_i)_{i∈ℕ} is chosen as the ONS derived by the Gram–Schmidt procedure from the admissible sequence (g_j)_{j∈ℕ} for the Gaussian random variable X in the Hilbert space H, so that the definition of F_m coincides in the two subsections. Then

e_2(X, H, φ_m^{−1}(α_n^{(1)})) ≤ e_2(X, H, φ_m^{−1}(α_n^{(2)})).  (3.26)

Proof. Consider for X the decomposition X = pr_{F_m^⊥}(X) + pr_{F_m}(X). The key is the orthogonality of pr_{F_m^⊥}(X) and pr_{F_m}(X) − π_{φ_m^{−1}(α_n^{(i)})}(X), which gives the two equalities in the calculation (3.27). The inequality (*) in (3.27) follows from the optimality of the codebook φ_m^{−1}(α_n^{(1)}) for the projected random variable pr_{F_m}(X).
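The ONS (h_i)_{i∈ℕ} used here is obtained from an admissible sequence by Gram–Schmidt orthonormalization in L²([0,1]); a sketch on a discretization grid (the trapezoidal quadrature and the polynomial test inputs are implementation choices, not from the paper):

```python
import numpy as np

def l2_inner(u, v, t):
    """Trapezoidal approximation of the L^2([0,1]) inner product <u, v>."""
    w = u * v
    return float(np.sum((w[1:] + w[:-1]) * np.diff(t)) / 2.0)

def gram_schmidt_l2(g, t):
    """Orthonormalize functions (rows of g, sampled at grid t) in L^2([0,1])."""
    h = []
    for gj in g:
        v = gj.astype(float).copy()
        for hi in h:                      # remove components along previous h_i
            v -= l2_inner(v, hi, t) * hi
        norm = np.sqrt(l2_inner(v, v, t))
        if norm > 1e-12:                  # skip (numerically) dependent inputs
            h.append(v / norm)
    return np.array(h)

t = np.linspace(0.0, 1.0, 1001)
g = np.array([np.ones_like(t), t, t ** 2])  # illustrative inputs, not from the paper
h = gram_schmidt_l2(g, t)
```

In practice one would feed the sampled admissible functions g_j (e.g., those of [28] for fractional Brownian motion) into `gram_schmidt_l2` instead of the polynomials used here.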

Gaussian Processes with Paths in C([0, 1])
In the previous section, where we worked with Gaussian random variables in Hilbert spaces, we saw that special Hilbertian subspaces, projections, and other operators linked to the Gaussian random variable are good tools for developing asymptotically optimal quantizers based on Theorem 2.3. Since we now consider the non-Hilbertian separable Banach space (C([0,1]), ‖·‖_∞), we have to find different tools suitable for applying Theorem 2.3.
The tools used in [20] are B-splines of order s ∈ ℕ. In the case s = 2, which we consider in the sequel, these splines span the same subspace of (C([0,1]), ‖·‖_∞) as the classical Schauder basis. We set for x ∈ [0,1], m ≥ 2, and 1 ≤ i ≤ m the knots t_i^m := (i−1)/(m−1) and the hat functions

f_i^m(x) := max(0, 1 − (m−1)|x − t_i^m|).

For the remainder of this subsection, we use the following notation.

(1) As subspaces F_m we set F_m := span{f_j^m, 1 ≤ j ≤ m}.

(2) As linear and continuous operators V_m : C([0,1]) → F_m we take the quasi-interpolant

V_m(f) := Σ_{j=1}^m f(t_j^m) f_j^m.

(3) The linear, surjective, and isometric mappings φ_m one defines as φ_m : F_m → (ℝ^m, ‖·‖_∞), Σ_{j=1}^m a_j f_j^m ↦ (a_1, …, a_m). It is easy to see that ‖Σ_{j=1}^m a_j f_j^m‖_∞ = max_{1≤j≤m} |a_j|, so φ_m is indeed isometric.

For the application of Theorem 2.3, we need error bounds for the approximation of X by the quasi-interpolant V_m(X). For Gaussian random variables, we can provide the following result based on the smoothness of an admissible sequence for X in E.
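With the uniform knots t_i^m = (i − 1)/(m − 1), the hat functions and the order-2 quasi-interpolant V_m can be sketched as follows (the target function sin is an arbitrary illustrative choice):

```python
import numpy as np

def hat(x, i, m):
    """Hat function f_i^m on [0,1] with knots t_j^m = (j-1)/(m-1), m >= 2."""
    t_i = (i - 1.0) / (m - 1.0)
    return np.maximum(0.0, 1.0 - (m - 1.0) * np.abs(x - t_i))

def quasi_interpolant(f, x, m):
    """V_m(f)(x) = sum_j f(t_j^m) f_j^m(x); reproduces f at the knots."""
    knots = np.arange(m) / (m - 1.0)
    return sum(f(knots[j]) * hat(x, j + 1, m) for j in range(m))

x = np.linspace(0.0, 1.0, 501)
vf = quasi_interpolant(np.sin, x, m=9)   # piecewise-linear approximation of sin
```

Since the hat functions form a partition of unity, ‖V_m‖_op ≤ 1, which is exactly the norm-one property required by Condition 1.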
Proposition 4.1. Let (g_j)_{j∈ℕ} be admissible for the centered Gaussian random variable X in (C([0,1]), ‖·‖_∞). Assume that the g_j are continuously differentiable with

‖g_j‖_∞ ≤ C₁ j^{−θ},  ‖g_j′‖_∞ ≤ C₂ j^{1−θ}

for some θ > 1/2 and constants C₁, C₂ < ∞. Then, for any ε > 0 and some constant C < ∞, it holds that

(𝔼‖X − V_m(X)‖_∞^r)^{1/r} ≤ C m^{−(θ−1/2)+ε}

for every r ≥ 1.
Proof. Using [25, Theorem 1], we get for an arbitrary ε₁ > 0, some constant C₃ < ∞, and every k ∈ ℕ a bound on the expansion tail Σ_{j>k} ξ_j g_j. Using [26, Chapter 7, Theorem 7.3], we get for some constant C₄ < ∞ a bound on the quasi-interpolation error in terms of the modulus of smoothness ω(f, δ), defined by

ω(f, δ) := sup_{0<h≤δ} sup_{x, x+h ∈ [0,1]} |f(x + h) − f(x)|.

For an arbitrary f ∈ C²([0,1]) we can bound the modulus of smoothness by Taylor expansion. Combining this, we get for an arbitrary ε₂ > 0 and constants C₅, C₆, C₇ < ∞, using again the equivalence of Gaussian moments, a bound of the form (4.10). To balance the two contributions over k, we choose k = k(m) = ⌈m^{0.8}⌉. Thus, we get for some constant C < ∞ and an arbitrary ε > 0 the asserted bound (4.11).

Now, we are able to prove the main result of this section.

Theorem 4.2. Let X be a centered Gaussian random variable and (g_j)_{j∈ℕ} an admissible sequence for X in C([0,1]) fulfilling the assumptions of Proposition 4.1 with θ = b/2, where the constant b > 1 satisfies λ_j ≳ K j^{−b}, with (λ_j)_{j∈ℕ} denoting the monotone decreasing eigenvalues of the covariance operator C_X of X in H = L²([0,1]) and K > 0. One sets m_n := ⌈(log n)^{5/4+ε}⌉ for some ε > 0. Then, for every sequence (α_n)_{n∈ℕ} of r-optimal n-quantizers for φ_{m_n}(V_{m_n}(X)) in (ℝ^{m_n}, ‖·‖_∞),

e_r(X, C([0,1]), φ_{m_n}^{−1}(α_n)) ∼ e_{n,r}(X, C([0,1]))

as n → ∞.

Proof. For every f ∈ C([0,1]) we have

‖V_m(f)‖_∞ = ‖Σ_{i=1}^m f(t_i^m) f_i^m‖_∞ ≤ ‖f‖_∞,

since {f_i^m, 1 ≤ i ≤ m} is a partition of unity for every m ∈ ℕ, so that ‖V_m‖_op ≤ 1.
We get a lower bound for the quantization error e_{n,r}(X, C([0,1])) from the inequality ‖f‖_{L²([0,1])} ≤ ‖f‖_∞ for all f ∈ C([0,1]) ⊂ L²([0,1]). Consequently, we have

e_{n,r}(X, C([0,1])) ≥ e_{n,r}(X, L²([0,1])).  (4.15)

From Theorem 1.1 and (3.4) we obtain

e_{n,r}(X, L²([0,1])) ≈ ω(log n)^{−1/2} ≈ (log n)^{−(b−1)/2},  (4.16)

where ω is given as in Theorem 1.1. Finally, combining (4.16) and Proposition 4.1, we get for sufficiently small δ > 0

(𝔼‖X − V_{m_n}(X)‖_∞^r)^{1/r} ≤ C m_n^{−(b−1)/2+δ} = o((log n)^{−(b−1)/2}),  (4.17)

and the assertion follows from Theorem 2.3.

Processes with Path Space (L^p([0, 1]), ‖·‖_p)
Another useful tool for our purposes is the Haar basis in L^p([0,1]) for 1 ≤ p < ∞, which is defined by

e₀ := 1_{[0,1]},  e_{n,i} := 2^{n/2}(1_{[(i−1)/2^n, (i−1/2)/2^n)} − 1_{[(i−1/2)/2^n, i/2^n)}),  n ∈ ℕ₀, 1 ≤ i ≤ 2^n.  (5.1)

This is an orthonormal basis of L²([0,1]) and a Schauder basis of L^p([0,1]) for 1 ≤ p < ∞. The Haar basis was used in [17] to construct rate optimal sequences of quantizers for mean regular processes. These processes are specified through the property that for all 0 ≤ s ≤ t ≤ 1

𝔼|X_t − X_s| ≤ ρ(t − s),  (5.2)

where ρ : ℝ₊ → [0, ∞) is regularly varying with index b > 0 at 0, which means that lim_{x↓0} ρ(cx)/ρ(x) = c^b for all c > 0. Condition (5.2) also guarantees that the paths t ↦ X_t lie in L^p([0,1]).
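The Haar functions can be written down explicitly; a minimal sketch (the indexing e_{n,i} follows (5.1), the grid-based inner product is an implementation choice):

```python
import numpy as np

def haar(n, i, t):
    """Haar function e_{n,i} on [0,1): +2^{n/2} on the left half of its support,
    -2^{n/2} on the right half, for n >= 0 and 1 <= i <= 2^n."""
    t = np.asarray(t)
    left = (i - 1.0) / 2 ** n
    mid = (i - 0.5) / 2 ** n
    right = i / 2 ** n
    s = np.where((t >= left) & (t < mid), 1.0,
        np.where((t >= mid) & (t < right), -1.0, 0.0))
    return 2 ** (n / 2.0) * s

# midpoint grid on [0,1) with uniform weight 2^-12, for checking orthonormality
t = (np.arange(2 ** 12) + 0.5) / 2 ** 12

def inner(u, v):
    return float(np.sum(u * v)) / 2 ** 12
```

On the dyadic midpoint grid the discrete inner products of Haar functions are exact, so the orthonormality in L²([0,1]) can be verified numerically.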
For our approach, it will be convenient to define for m ∈ ℕ and 1 ≤ i ≤ m+1 the knots t_i^m := (i−1)/m and the operators

V_m(f) := Σ_{i=1}^m m (∫_{t_i^m}^{t_{i+1}^m} f(s) ds) 1_{[t_i^m, t_{i+1}^m)}.

Note that for f ∈ L¹([0,1]), m = 2^{n+1}, and n ∈ ℕ₀, V_m(f) coincides with the partial sum ⟨e₀, f⟩e₀ + Σ_{k=0}^n Σ_{i=1}^{2^k} ⟨e_{k,i}, f⟩e_{k,i} of the Haar expansion of f. For the remainder of the subsection, we set F_m := V_m(L^p([0,1])) and define the linear, surjective, and isometric operators φ_{m,p} : F_m → (ℝ^m, m^{−1/p}‖·‖_p) by 1_{[t_i^m, t_{i+1}^m)} ↦ e_i. (5.8)

Theorem 5.1. Let X be a random variable in the Banach space (E, ‖·‖) = (L^p([0,1]), ‖·‖_p) for some p ∈ [1, ∞) fulfilling the mean pathwise regularity property

(𝔼|X_t − X_s|^{p∨r})^{1/(p∨r)} ≤ C(t − s)^a  (5.9)

for constants C, a > 0 and all 0 ≤ s < t ≤ 1. Moreover, assume that K(log n)^{−b} ≲ e_{n,r}(X, E) for constants K, b > 0. Then, for an arbitrary ε > 0 and m_n := ⌈(log n)^{b/a+ε}⌉, every sequence of r-optimal n-quantizers (α_n)_{n∈ℕ} for φ_{m_n,p}(V_{m_n}(X)) in (ℝ^{m_n}, ‖·‖_p) satisfies

e_r(X, L^p([0,1]), φ_{m_n,p}^{−1}(α_n)) ∼ e_{n,r}(X, L^p([0,1]))

as n → ∞.
Proof. As in the subsections above, we check that the sequences (V_m) and (φ_{m,p}) satisfy Conditions 1–3. Since V_m(f) = 𝔼_λ(f | F_m), where F_m denotes the σ-algebra generated by the intervals [t_i^m, t_{i+1}^m), 1 ≤ i ≤ m, we get for f ∈ L^p([0,1]) with ‖f‖_p ≤ 1 and p ∈ [1, ∞), by using Jensen's inequality,

‖V_m(f)‖_p ≤ ‖f‖_p ≤ 1,  (5.12)

and thus ‖V_m‖_op ≤ 1. The operators φ_{m,p} satisfy Condition 2 of Theorem 2.3 since they map F_m isometrically onto ℝ^m. (5.13)

For Condition 3, we note that for t ∈ [t_i^m, t_{i+1}^m)

X_t − V_m(X)_t = m ∫_{t_i^m}^{t_{i+1}^m} (X_t − X_s) ds.  (5.14)

Using the inequalities (5.9) and (5.14) together with Jensen's inequality, we obtain

𝔼‖X − V_m(X)‖_p^{p∨r} ≤ C_{p∨r} m^{−a(p∨r)}.  (5.16)

Therefore, the sequence (m_n)_{n∈ℕ} satisfies (2.7), since with (5.16) we get

(𝔼‖X − V_{m_n}(X)‖_p^r)^{1/r} ≲ m_n^{−a} = (log n)^{−b−aε} = o(e_{n,r}(X, E))

as n → ∞, and the assertion follows from Theorem 2.3.

Examples
In this section, we present some processes that fulfill the requirements of Theorems 3.1, 3.3, 4.2, and 5.1. First, we give some examples of Gaussian processes to which all four theorems apply; second, we describe how our approach can be applied to Lévy processes in view of Theorem 5.1.

Examples 6.1. Gaussian Processes and Brownian Diffusions (i) Brownian Motion and Fractional Brownian Motion
Let (X_t^H)_{t∈[0,1]} be a fractional Brownian motion with Hurst parameter H ∈ (0, 1) (in the case H = 1/2 we have an ordinary Brownian motion). Its covariance function is given by

𝔼(X_s^H X_t^H) = (1/2)(s^{2H} + t^{2H} − |t − s|^{2H}).

Note that, except for the case of an ordinary Brownian motion, the eigenvalues and eigenvectors of the fractional Brownian motion are not known explicitly. Nevertheless, the sharp asymptotics of the eigenvalues has been determined (see, e.g., [7]).
In [28] the authors constructed an admissible sequence (g_j)_{j∈ℕ} in C([0,1]) that satisfies the requirements of Proposition 4.1 with θ = 1/2 + H. Furthermore, the eigenvalues λ_j of C_{X^H} in L²([0,1]) satisfy λ_j ≈ j^{−(1+2H)} (see, e.g., [7]), so that the requirements of Theorem 4.2 are satisfied. Additionally, this sequence is a rate optimal admissible sequence for X^H in L²([0,1]), so that the requirements of Theorem 3.3 are also met. Constructing recursively an orthonormal sequence (h_j)_{j∈ℕ} by applying the Gram–Schmidt procedure to the sequence (g_j)_{j∈ℕ} yields a rate optimal ONS for X^H in L²([0,1]) that can be used in the application of Theorem 3.1. In Section 7 we will illustrate the quantizers constructed for X^H with this ONS for several Hurst parameters H. Note that there are several other admissible sequences for the fractional Brownian motion that can be applied similarly; see, for example, [29] or [30]. Moreover, we have for s, t ∈ [0,1] the mean regularity property

(𝔼|X_t^H − X_s^H|^q)^{1/q} = c_q |t − s|^H,  q ≥ 1,

and the asymptotics of the quantization error is given by

e_{n,r}(X^H, L^p([0,1])) ≈ (log n)^{−H},  n → ∞,

for all r, p ≥ 1 (see [13]), so that the requirements of Theorem 5.1 are met by setting a = b = H. Note that in [11] the authors showed the existence of constants k(H, E) for E = C([0,1]) and E = L^p([0,1]), independent of r, such that

e_{n,r}(X^H, E) ∼ k(H, E)(log n)^{−H},  n → ∞.

Therefore, the quantization errors of the sequences of quantizers constructed via Theorems 3.1, 3.3, 4.2, and 5.1 also fulfill this sharp asymptotics.
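Since closed-form eigenpairs are unavailable for H ≠ 1/2, fBm marginals on a time grid can be sampled directly from the covariance function above by a Cholesky factorization; a sketch (Cholesky sampling is a standard generic method, not the construction of [28]; grid and jitter are illustrative choices):

```python
import numpy as np

def fbm_paths(t, hurst, n_paths, rng):
    """Sample fractional-BM paths at grid t (t[0] > 0) via Cholesky of
    cov(s, u) = 0.5 * (s^2H + u^2H - |s - u|^2H)."""
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))  # jitter for stability
    z = rng.standard_normal((len(t), n_paths))
    return (chol @ z).T                                      # (n_paths, len(t))

rng = np.random.default_rng(2)
t = np.linspace(0.01, 1.0, 100)
paths = fbm_paths(t, hurst=0.7, n_paths=4000, rng=rng)
```

Such samples can serve as the Monte Carlo input for the gradient steps described in Section 7.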

(ii) Brownian Bridge
Let (B_t)_{t∈[0,1]} be a Brownian bridge with covariance function

𝔼(B_s B_t) = min(s, t) − st.  (6.5)

Since the eigenvalues and eigenvectors of the Brownian bridge are explicitly known (λ_j = 1/(πj)², f_j(t) = √2 sin(πjt)), we do not have to search for any other admissible sequence or ONS for (B_t)_{t∈[0,1]} to be applied in H = L²([0,1]). This eigenvalue-eigenvector admissible sequence also satisfies the requirements of Theorem 4.2. The mean pathwise regularity of the Brownian bridge can be deduced from

(𝔼|B_t − B_s|^p)^{1/p} ≤ c_p |t − s|^{1/2}

for any p ≥ 1. Combining [31, Theorem 3.7] and [13, Corollary 1.3] yields

e_{n,r}(B, L^p([0,1])) ≈ (log n)^{−1/2},  n → ∞,

for all r, p ≥ 1, so that Theorem 5.1 can be applied with a = b = 1/2.

(iii) Stationary Ornstein-Uhlenbeck Process
The stationary Ornstein–Uhlenbeck process (X_t)_{t∈[0,1]} is a centered Gaussian process given through the covariance function

𝔼(X_s X_t) = (σ²/(2α)) e^{−α|t−s|}

with parameters α, σ > 0. An admissible sequence for the stationary Ornstein–Uhlenbeck process in C([0,1]) and L²([0,1]) can be found in [21]. This sequence can be applied to Theorems 3.3 and 4.2 and, after applying the Gram–Schmidt procedure, also to Theorem 3.1.

According to [13] we have

e_{n,r}(X, L^p([0,1])) ≈ (log n)^{−1/2},  n → ∞,

for all r, p ≥ 1. Furthermore, it holds that

(𝔼|X_t − X_s|^p)^{1/p} ≤ c_p |t − s|^{1/2},

and therefore we can choose a = b = 1/2 to apply Theorem 5.1.

(iv) Fractional Ornstein-Uhlenbeck Process
The fractional Ornstein–Uhlenbeck process (X_t^H)_{t∈[0,1]} for H ∈ (0, 2) is a continuous stationary centered Gaussian process with covariance function

𝔼(X_s^H X_t^H) = e^{−α|t−s|^H},  α > 0.

In [22] the authors constructed an admissible sequence (g_j^H)_{j∈ℕ} for H ∈ (0, 1) that satisfies the requirements of Proposition 4.1 with θ = 1/2 + H/2. Since the eigenvalues λ_j of C_{X^H} in L²([0,1]) satisfy λ_j ≈ j^{−(1+H)}, we get again that the assumptions of Theorem 4.2 are satisfied. Similarly, we can use this sequence in Theorems 3.3 and 3.1.

(v) Brownian Diffusions
We consider a one-dimensional Brownian diffusion (X_t)_{t∈[0,1]} fulfilling the SDE

dX_t = b(t, X_t) dt + σ(t, X_t) dW_t,  X_0 = x_0 ∈ ℝ,

where the deterministic functions b, σ : [0,1] × ℝ → ℝ satisfy the linear growth assumption

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|)

for some constant C < ∞. Under an additional ellipticity assumption on σ, the asymptotics of the quantization error in (L^p([0,1]), ‖·‖_p) is then given by

e_{n,r}(X, L^p([0,1])) ≈ (log n)^{−1/2},  n → ∞.

Examples 6.2. Lévy Processes

Let X = (X_t)_{t∈[0,1]} be a Lévy process whose characteristic triplet (a, σ, Π) contains constants a ∈ ℝ, σ ≥ 0, and a measure Π on ℝ\{0} satisfying ∫_ℝ (1 ∧ x²) Π(dx) < ∞. By the Lévy–Ito decomposition, X can be written as the sum of independent Lévy processes

X = X¹ + X² + X³,

where X³ is a Brownian motion with drift, X² is a compound Poisson process, and X¹ is a Lévy process with bounded jumps and without Brownian component.

Firstly, we analyze the mean pathwise regularity of these three types of Lévy processes, to combine these results with lower bounds for the asymptotic quantization error.
(1) Mean Pathwise Regularity of the Three Components of the Lévy–Ito Decomposition

(i) According to an extended version of Millar's lemma [17, Lemma 5], for every Lévy process X¹ with bounded jumps and without Brownian component there is for every p ≥ 2 a constant C < ∞ such that for every t ∈ [0,1]

𝔼|X¹_t|^p ≤ C t.

(ii) We consider the compound Poisson process

X²_t = Σ_{k=1}^{K_t} U_k,

where K denotes a standard Poisson process with intensity λ = 1 and (U_k)_{k∈ℕ} is an i.i.d. sequence of random variables with ‖U_1‖_{L^p(ℙ)} < ∞. Then, one shows that 𝔼|X²_t|^p ≤ C t for some constant C ∈ (0, ∞).

(iii) X³_t = at + σW_t, where W denotes a Brownian motion, so that (𝔼|X³_t − X³_s|^p)^{1/p} ≤ C|t − s|^{1/2}.

We consider the Lévy–Ito decomposition X = X¹ + X² + X³ and combine the bounds above. Therefore, we receive the mean pathwise regularity for X: for all p, r ≥ 1 there are constants C_p < ∞ and an index α > 0 such that we can choose ρ(x) = C_p x^{1/α}. The asymptotics of the quantization error for X is given by

e_{n,r}(X, L^p([0,1])) ≈ (log n)^{−1/α},  n → ∞,  (6.31)

for r, p ≥ 1 [14], so that we meet the requirements of Theorem 5.1 by setting a = b = 1/α.
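The compound Poisson component X² can be simulated exactly from its jump times, which is what makes the path functionals needed later computable; a minimal sketch with unit intensity and, as an illustrative assumption, N(0,1)-distributed jumps U_k:

```python
import numpy as np

def compound_poisson_path(rng, intensity=1.0, horizon=1.0):
    """Jump times and jump sizes of one compound Poisson path on [0, horizon]."""
    n_jumps = rng.poisson(intensity * horizon)
    times = np.sort(rng.uniform(0.0, horizon, size=n_jumps))
    sizes = rng.standard_normal(n_jumps)   # U_k ~ N(0,1), illustrative choice
    return times, sizes

def value_at(times, sizes, t):
    """X^2_t = sum of jump sizes with jump time <= t."""
    return float(sizes[times <= t].sum())

rng = np.random.default_rng(3)
times, sizes = compound_poisson_path(rng)
x1 = value_at(times, sizes, 1.0)           # terminal value of one path
```

With the jump representation in hand, interval averages of the path (as used by the operators V_m of this section) reduce to finite sums and are therefore exactly computable.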

Numerical Illustrations
In this section, we want to highlight the steps needed for a numerical implementation of our approach and also give some illustrative results. For this purpose, it is useful to regard an n-quantizer α_n as an element of E^n (again denoted by α_n) instead of as a subset of E.
The differentiability of the distortion function was treated in [6] for finite-dimensional Banach spaces, which is sufficient for our purpose, and later in [33] for the general case.

Proposition 7.1 (see [6, Lemma 4.10]). Assume that the norm ‖·‖ of ℝ^d is smooth. Let r > 1, and assume that ℙ_X(C_{a_i}(α) ∩ C_{a_j}(α)) = 0 for i ≠ j. Then, the distortion function

D(α) := 𝔼 min_{1≤i≤n} ‖X − a_i‖^r

is differentiable at every admissible n-tuple α = (a_1, …, a_n) (i.e., a_i ≠ a_j for i ≠ j) with

∂D/∂a_i(α) = −r 𝔼(1_{C_{a_i}(α)}(X) ‖X − a_i‖^{r−1} ∇‖·‖(X − a_i)),  (7.2)

where {C_{a_i}(α) : 1 ≤ i ≤ n} denotes any Voronoi partition induced by α = {a_1, …, a_n}.
Remark 7.2. For r = 1, the above result extends to admissible n-tuples with ℙ_X({a_1, …, a_n}) = 0. Furthermore, if the norm is smooth only on a set A ∈ B(E) with ℙ_X(A) = 1, then the result still holds true. This is, for example, the case for (E, ‖·‖) = (ℝ^d, ‖·‖_∞) and random variables X with ℙ(X ∈ H) = 0 for all hyperplanes H, which includes normally distributed random variables.
Classical optimization theory now yields that any local minimum of the distortion function is contained in the set of stationary points. So let n ∈ ℕ, m = m_n ∈ ℕ, r ≥ 1, X, V_m, and φ_m be given. The procedure looks as follows.
Step 1. Calculation of the Distribution of the ℝ^m-Valued Random Variable ζ := φ_m(V_m(X)). This step strongly depends on the shape of the random variable X and the operators V_m.
In the setting of Section 3.1, one starts with an orthonormal system (h_i)_{i∈ℕ} in H, providing

ζ = φ_m(V_m(X)) = Σ_{i=1}^m ⟨h_i, X⟩ e_i,

where (e_i)_{1≤i≤m} denote the unit vectors in ℝ^m. Thus, the covariance matrix of the centered Gaussian random vector ζ admits the representation

𝔼(ζζ^⊤) = (𝔼⟨h_i, X⟩⟨h_j, X⟩)_{1≤i,j≤m} = (⟨C_X h_i, h_j⟩)_{1≤i,j≤m},  (7.4)

with C_X being the covariance operator of X.
Similarly, we get tractable representations for Gaussian random variables in the framework of Section 3.2, in the setting of Section 4, and in the setting of Section 5; in the latter case the coordinates of ζ are the coefficients associated with f_j^m via f ↦ ∫_0^1 f(s) f_j^m(s) ds. If one considers in the latter framework a non-Brownian Lévy process, for example a compound Poisson process (we use the notation of Examples 6.2(1)(ii)), the simulation of the gradient leads to the problem of simulating these path functionals, which is still possible.
Step 2. Use a stochastic optimization algorithm to solve the stationarity equation

∇D(α) = 0  (7.9)

for ζ = φ_m(V_m(X)). For this purpose, the computability of the gradient (7.2) is of enormous importance. One may either apply a deterministic gradient-based optimization algorithm (e.g., BFGS) combined with a quasi-Monte Carlo approximation of the gradient, such as the one used in [20], or use a stochastic gradient algorithm, which in the Hilbert space setting is also known as the CLVQ (competitive learning vector quantization) algorithm (see, e.g., [19] for more details). In both cases, the distribution of ζ needs to be simulated, which is possible for the examples described above.
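For r = 2 and the Euclidean norm, each CLVQ step moves the codeword nearest to a fresh sample of ζ toward that sample with a decaying step size; a minimal sketch (the step-size schedule, the initialization, and the standard-normal stand-in for ζ are illustrative choices, not from the paper):

```python
import numpy as np

def clvq(sample, n_codewords, n_steps):
    """Competitive learning vector quantization for r = 2.

    sample : callable returning one draw of the R^m-valued variable zeta
    """
    codebook = np.array([sample() for _ in range(n_codewords)])  # init from data
    for k in range(n_steps):
        z = sample()
        i = np.argmin(np.sum((codebook - z) ** 2, axis=1))  # winning codeword
        gamma = 1.0 / (1.0 + 0.01 * k)                      # decaying step size
        codebook[i] += gamma * (z - codebook[i])            # move winner toward z
    return codebook

rng = np.random.default_rng(4)
sample = lambda: rng.standard_normal(2)
codebook = clvq(sample, n_codewords=5, n_steps=20000)
```

In the actual procedure, `sample` would draw from the simulated distribution of ζ obtained in Step 1, and a few deterministic gradient steps can polish the result.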
Step 3. Reconstruct the quantizer β = {b_1, …, b_n} for the random variable X by setting b_i := φ_m^{−1}(a_i) for 1 ≤ i ≤ n, with α = (a_1, …, a_n) being a solution of the stationarity equation (7.9).

Illustration
For illustration purposes, we will concentrate on the case described in Section 3.1 for r 2.
Examples of quantizers as constructed in Section 4 can be found in [20]. The quantizers shown in the sequel were calculated numerically using the widely used CLVQ algorithm as described in [19]. To achieve better accuracy, we finally performed a few steps of a gradient algorithm, approximating the gradient by Monte Carlo simulation. Let X^H be a fractional Brownian motion with Hurst parameter H. We used the admissible sequence described in [28], where c_H is an explicit constant, J_{1−H} and J_{−H} are Bessel functions with the corresponding parameters, and x_n and y_n are the ordered positive roots of the Bessel functions with parameters −H and 1 − H. After ordering the elements of the two parts of the expansion in an alternating manner and applying the Gram–Schmidt orthogonalization procedure to construct a rate optimal ONS, we used the method described in Section 3.1. We show the results obtained for n = 10, m = 4 and the Hurst parameters H = 0.3, 0.5, and 0.7 (Figures 1, 2, and 3). To show the effects of changing parameters, we also present the quantizers obtained after increasing the dimension of the containing subspace to m = 8 (Figures 4, 5, and 6) and, in addition, the effect of increasing the quantizer size to n = 30 (Figures 7, 8, and 9). Since X^H is for H = 0.5 an ordinary Brownian motion, one can compare these results with those obtained for the Brownian motion via the Karhunen–Loève expansion (see, e.g., [18]).

