Asymptotic behavior of the Laplacian quasi-maximum likelihood estimator of affine causal processes

We prove the consistency and asymptotic normality of the Laplacian Quasi-Maximum Likelihood Estimator (QMLE) for a general class of causal time series including ARMA, AR($\infty$), GARCH, ARCH($\infty$), ARMA-GARCH, APARCH and ARMA-APARCH processes. We notably exhibit the advantages (moment order and robustness) of this estimator compared to the classical Gaussian QMLE. Numerical simulations confirm the accuracy of this estimator.


Introduction
This paper is devoted to establishing the consistency and the asymptotic normality of a parametric estimator for a general class of time series. This class was already defined and studied in Doukhan and Wintenberger (2007), Bardet and Wintenberger (2009) and Bardet et al. (2012). Hence, we will consider an observed sample (X_1, …, X_n) where (X_t)_{t∈Z} is a solution of the following equation:

X_t = M_{θ_0}(X_{t-1}, X_{t-2}, …) ζ_t + f_{θ_0}(X_{t-1}, X_{t-2}, …),  t ∈ Z,    (1.1)

where:
• θ_0 ∈ Θ ⊂ R^d is an unknown vector of parameters, also called the "true" parameters;
• (ζ_t)_{t∈Z} is a sequence of centred independent identically distributed random variables (i.i.d.r.v.) with symmetric probability distribution, i.e. ζ_0 and −ζ_0 have the same distribution;
• f_θ and M_θ are known functions of the infinite past, depending on θ (for instance, if M_θ ≡ σ and f_θ(x_1, x_2, …) = θ x_1, then X is a causal AR(1) process).

In Doukhan and Wintenberger (2007) and Bardet and Wintenberger (2009), it was proved that the most famous stationary time series used in econometrics, such as ARMA, AR(∞), GARCH, ARCH(∞), TARCH and ARMA-GARCH processes, can be written as causal stationary solutions of (1.1). In Bardet and Wintenberger (2009), it was also established that, under several conditions on M_θ and f_θ and if E[|ζ_0|^r] < ∞ with r ≥ 2, the usual Gaussian Quasi-Maximum Likelihood Estimator (QMLE) of θ is strongly consistent, and when r ≥ 4 it is asymptotically normal. This estimator was first defined by Weiss (1986) for ARCH processes, and its asymptotic study was first obtained by Lumsdaine (1996) for GARCH(1, 1) processes, Berkes et al. (2003) for GARCH(p, q) processes, Francq and Zakoian (2004) for ARMA-GARCH processes, Straumann and Mikosch (2006) for general heteroskedastic models, and Robinson and Zaffaroni (2006) for ARCH(∞) processes. The results of Bardet and Wintenberger (2009), devoted to processes satisfying (1.1) almost everywhere as well as its multivariate generalisation, provide a general and unified framework for studying the asymptotic properties of the Gaussian QMLE. However, the Gaussian QMLE is explicitly derived under the assumption that (ζ_t) is a Gaussian sequence, and even if it can be applied when the probability distribution of (ζ_t) is non-Gaussian, it keeps some drawbacks of this initial assumption. Indeed, the computation of this estimator requires the minimization of a least squares contrast (typically Σ_{t=1}^n M_θ^{-2} (X_t − f_θ)²), which implies that r = 2 is required for the consistency and r = 4 for the asymptotic normality (and therefore for confidence intervals or tests). For numerous real data sets such a requirement is too strong (for instance, the kurtosis of economic data is frequently considered as infinite). Moreover, such an estimator is not robust to potential outliers. Hence, the reference probability distribution of (ζ_t) can instead be chosen as a Laplace distribution, which avoids both drawbacks. Roughly speaking, this choice amounts to minimizing a least absolute deviations contrast (typically Σ_{t=1}^n M_θ^{-1} |X_t − f_θ|) instead of the previous least squares contrast. Therefore, r = 1 will be sufficient for ensuring the strong consistency of this Laplacian-QMLE, while only r = 2 is required for the asymptotic normality (see below). This choice of probability distribution is not new, since it leads to Least Absolute Deviations (LAD) estimation. Hence, for ARMA processes, Davis and Dunsmuir (1997) proved the consistency and asymptotic normality of the LAD estimator. For ARCH or GARCH processes, the same results concerning the LAD estimator were established by Peng and Yao (2003), while Berkes and Horváth (2004) proved the consistency and asymptotic normality of the Laplacian-QMLE. Newey and Steigerwald (1997) also considered such estimators for other conditional heteroskedasticity models. Recently, Francq et al. (2011) proposed a two-stage non-Gaussian-QML estimation for GARCH models, and Francq and Zakoian (2015) proposed an alternative one-step procedure based on an appropriate non-Gaussian-QML estimator; the asymptotic properties of both approaches were studied. In this paper we unify all these studies of the Laplacian-QMLE in a single framework, i.e. causal stationary solutions of (1.1). This notably allows us to recover known results on ARMA or GARCH processes, but also to establish for the first time the consistency and the asymptotic normality of the Laplacian-QMLE for APARCH, ARMA-GARCH, ARMA-ARCH(∞) and ARMA-APARCH processes. Numerical Monte-Carlo experiments were carried out to illustrate the theoretical results. The results of these simulations are convincing, especially when the accuracy of the Laplacian-QMLE is compared with that of the Gaussian-QMLE: except for a Gaussian distribution of (ζ_t), the Laplacian-QMLE provides a sharper estimation than the Gaussian-QMLE for all the other probability distributions we considered. This is notably the case, and this is not a surprise, for a Gaussian mixture which mimics the presence of outliers. This provides an effective advantage of the Laplacian QMLE compared to the Gaussian QMLE.
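The least-squares versus least-absolute-deviations contrast is, in its simplest form, the mean versus the median. As a toy illustration of the robustness claim (this is not the paper's model; the contamination rate and outlier location are arbitrary assumptions), 5% of outliers drag the sample mean away from the centre while the sample median barely moves:

```python
import numpy as np

# Least squares corresponds to the sample mean, least absolute deviations
# to the sample median.  Contaminate a standard Gaussian sample with 5%
# of one-sided outliers and compare the two location estimates.
rng = np.random.default_rng(4)
clean = rng.standard_normal(950)           # "regular" observations, centred at 0
outliers = rng.normal(8.0, 1.0, size=50)   # 5% contamination far from the centre
sample = np.concatenate([clean, outliers])

mean_est = sample.mean()        # least-squares location estimate
median_est = np.median(sample)  # LAD location estimate
```

With these draws the mean is pulled towards the contamination (its population value is 0.05 × 8 = 0.4) while the median stays close to 0, mirroring the behaviour of the Gaussian versus Laplacian QMLE under outliers.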
The following Section 2 is devoted to the definitions and assumptions. In Section 3 the main results are stated with numerous examples of application, while Section 4 presents the results of Monte-Carlo experiments and Section 5 contains the proofs.
Definitions and assumptions

The quasi-log-likelihood L_n(θ) is approximated by a computable version L̂_n(θ), obtained by replacing the unobserved past values (X_0, X_{−1}, …) with u = (u_n)_{n∈N}, a finitely non-zero sequence. The choice of (u_n)_{n∈N} has no consequence on the asymptotic behaviour of L_n, and (u_n) can typically be chosen as a sequence of zeros. Finally, if it exists, a Quasi-Maximum Likelihood Estimator (QMLE) is defined by θ̂_n = argmax_{θ∈Θ} L̂_n(θ). Usually, the "instrumental" probability density h is the Gaussian density, i.e. h(x) = (2π)^{−1/2} e^{−x²/2}, and this provides the Gaussian-QMLE of θ.

Here, we choose as instrumental probability density the Laplacian density, i.e. h(x) = (1/2) e^{−|x|}, and this implies E|ζ_0| = 1. Therefore, we respectively define the Laplacian-likelihood and Laplacian-quasi-likelihood by

L_n(θ) = − Σ_{t=1}^n ( log(2 M_t^θ) + |X_t − f_t^θ| / M_t^θ )  and  L̂_n(θ) = − Σ_{t=1}^n ( log(2 M̂_t^θ) + |X_t − f̂_t^θ| / M̂_t^θ ).

Hence, if it exists, a Laplacian-QMLE θ̂_n is a maximizer of L̂_n:

θ̂_n = argmax_{θ∈Θ} L̂_n(θ).

We restrict the set Θ in such a way that a stationary solution (X_t) of order 1 or 2 of (1.1) exists. Additional conditions are also required for ensuring the consistency and the asymptotic normality of θ̂_n. More details are given below.
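As a minimal numerical sketch of this definition (assuming the simplest case f_θ(X_{t−1}, …) = θ X_{t−1} and M_θ ≡ σ, i.e. an AR(1) with constant scale; the variable names and grid are illustrative), maximizing the Laplacian quasi-likelihood reduces, after profiling out σ, to minimizing the mean absolute residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) with Laplace noise: X_t = theta0 * X_{t-1} + sigma0 * zeta_t, E|zeta_0| = 1.
theta0, sigma0, n = 0.5, 1.0, 5000
zeta = rng.laplace(scale=1.0, size=n)   # standard Laplace => E|zeta_0| = 1
X = np.zeros(n)
for t in range(1, n):
    X[t] = theta0 * X[t - 1] + sigma0 * zeta[t]

# For fixed theta, -sum(log(2*sigma) + |r_t|/sigma) is maximized at
# sigma = mean|r_t|, so profiling out sigma leaves a pure LAD criterion.
thetas = np.linspace(-0.99, 0.99, 1981)
mad = np.array([np.mean(np.abs(X[1:] - th * X[:-1])) for th in thetas])
theta_hat = thetas[np.argmin(mad)]   # Laplacian-QMLE of theta
sigma_hat = mad.min()                # Laplacian-QMLE of sigma
```

The grid search is only for transparency; any LAD solver would do, and the profiled form makes the "least absolute deviations" nature of the estimator explicit.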

Existence and stationarity
As was already done in Doukhan and Wintenberger (2007) and Bardet and Wintenberger (2009), several Lipschitz-type inequalities on f_θ and M_θ are required for obtaining the existence of an r-order stationary ergodic causal solution of (1.1). First, denote ‖g_θ‖_Θ = sup_{θ∈Θ} ‖g_θ‖ with m ∈ N* and ‖·‖ the usual Euclidean norm (for vectors or matrices). Now, let us introduce the generic symbol K for any of the functions f or M. For k = 0, 1, 2 and some subset Θ̃ of R^d, define a Lipschitz assumption on the function K_θ. For ensuring a stationary r-order solution of (1.1), for r ≥ 1, define the set Θ(r). Then, from Doukhan and Wintenberger (2007), we obtain:

Proposition 2.1. If θ_0 ∈ Θ(r) for some r ≥ 1, then there exists a unique causal (X_t is independent of (ζ_i)_{i>t} for t ∈ Z) solution X of (1.1), which is stationary, ergodic and satisfies E[|X_0|^r] < ∞.

The following lemma ensures that if a process X satisfies Proposition 2.1, then a causal predictable ARMA process with X as innovation also satisfies Proposition 2.1. We first recall the classical following notion: a sequence (u_n)_{n∈N} of real numbers is an exponentially decreasing sequence (EDS) if there exist C > 0 and ρ ∈ (0, 1) such that |u_n| ≤ C ρ^n for all n ∈ N.

Lemma 2.1. Let X be a.s. a causal stationary solution of (1.1) for θ_0 ∈ R^d. Let X̃ be such that P_{β_0}(B) X̃_t = Q_{β_0}(B) X_t for t ∈ Z, where (P_{β_0}, Q_{β_0}) are the coprime polynomials of a causal invertible ARMA(p, q) process with vector of parameters β_0. Then X̃ is a.s. a causal stationary solution of an equation of the same type as (1.1), where f̃_{θ̃_0} and M̃_{θ̃_0} are given in (5.1) and θ̃_0 = (θ_0, β_0). Moreover, for i = 0, 1, 2 and with K = f or M and K̃ = f̃ or M̃, the corresponding Lipschitz coefficients satisfy the same type of decay conditions.
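The flavour of Proposition 2.1 can be checked numerically on a GARCH(1, 1), one of the processes covered by (1.1) (the parameter values below are illustrative assumptions): when a + b < 1, a second-order stationary solution exists and its variance equals w/(1 − a − b).

```python
import numpy as np

rng = np.random.default_rng(5)

# GARCH(1,1): X_t = sigma_t * zeta_t,  sigma_t^2 = w + a*X_{t-1}^2 + b*sigma_{t-1}^2.
# A second-order stationary solution exists when a + b < 1, with
# stationary variance w / (1 - a - b).
w, a, b, n = 0.2, 0.15, 0.70, 100_000
z = rng.standard_normal(n)
sig2 = w / (1 - a - b)          # start at the stationary variance
xs = np.empty(n)
for t in range(n):
    x = np.sqrt(sig2) * z[t]
    xs[t] = x
    sig2 = w + a * x**2 + b * sig2

emp_var = xs[20_000:].var()     # discard a burn-in, then compare
theo_var = w / (1 - a - b)      # = 0.2 / 0.15
```

The empirical variance of the simulated path matches the stationary value, as the proposition predicts for parameters inside the stationarity region.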

Assumptions required for the convergence of the Laplacian-QMLE
The Laplacian-QMLE can converge and be asymptotically Gaussian, but this requires some additional assumptions on Θ and on the functions f_θ and M_θ:
• Condition C1 (Compactness): Θ is a compact set;
• Condition C2 (Lower bound): there exists M > 0 such that inf_{θ∈Θ} M_θ ≥ M;
• Condition C3 (Identifiability): for θ, θ′ ∈ Θ, (f_θ = f_{θ′} and M_θ = M_{θ′} a.s.) implies θ = θ′.

Consistency and asymptotic normality
First we prove the strong consistency of a sequence of Laplacian-QMLE for a solution of (1.1). The proof of this theorem is postponed to Section 5, as are the other proofs.
Of course, the conditions required for this strong consistency of a sequence of Laplacian-QMLE are almost the same as those required for the strong consistency of a sequence of Gaussian-QMLE, except that r ∈ [1, 2) is proved to be possible in Theorem 3.1, which is not the case for the Gaussian-QMLE (see Bardet and Wintenberger (2009)).
Moreover, if r = 2, the condition (3.1) on the Lipschitz coefficients is weaker for the Laplacian-QMLE than for the Gaussian-QMLE. As we will see below, many usual time series satisfy the assumptions of Theorem 3.1; for example, an AR(∞) process can be defined that satisfies the conditions for strong consistency of the Laplacian-QMLE, while the conditions given in Bardet and Wintenberger (2009) do not ensure the strong consistency of the Gaussian-QMLE. Now we state an extension of Theorem 1 of Davis and Dunsmuir (1997), which will be an essential step of the proof of the asymptotic normality of the estimator.
Theorem 3.2. Let (Z_t)_{t∈Z} be a sequence of i.i.d.r.v. such that Var(Z_0) = σ² < ∞, with common distribution function F which is symmetric (F(−x) = 1 − F(x) for x ∈ R) and continuously differentiable in a neighborhood of 0 with derivative f(0) at 0.
Then, the asymptotic normality of the Laplacian-QMLE can be established under additional assumptions (A_i(f, Θ)) and (A_i(M, Θ)), where r ≥ 2 and Θ̊ denotes the interior of Θ.

Theorem 3.3. Let X be the stationary solution of equation (1.1). Assume that the conditions of Theorem 3.1 hold and, for i = 1, 2, that (A_i(f, Θ)) and (A_i(M, Θ)) hold. Then, if the cumulative distribution function of ζ_0 is continuously differentiable in a neighborhood of 0 with derivative g(0) at 0, and if the matrices Γ_f and Γ_M, defined in (5.21), are positive definite symmetric matrices, then the asymptotic normality (3.3) of θ̂_n holds.

As was already proved for the median estimator (see van der Vaart (2000)) or for the least absolute deviations estimator of ARMA processes (see Davis and Dunsmuir (1997)), it is not surprising that the probability density function g of the white noise (ζ_i)_i impacts the asymptotic covariance in (3.3). However, when f_θ = 0 this is no longer the case, and this is what happens for GARCH processes (see Francq et al. (2011)), where the probability density g does not appear in the asymptotic covariance.
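The role of the density at zero in the asymptotic covariance can be illustrated on the simplest LAD estimator, the sample median: for i.i.d. noise with symmetric density g, √n times the median converges to N(0, 1/(4 g(0)²)). A quick Monte-Carlo check for standard Gaussian noise, where g(0) = 1/√(2π) and the limit variance is therefore π/2 (sample sizes and replication counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# sqrt(n) * median_n  ->  N(0, 1/(4 g(0)^2)) for i.i.d. noise with density g.
# For N(0,1): g(0) = 1/sqrt(2*pi), hence the limit variance is pi/2.
n, reps = 500, 2000
meds = np.array([np.median(rng.standard_normal(n)) for _ in range(reps)])
emp_var = n * meds.var()   # empirical variance of sqrt(n) * median
theo_var = np.pi / 2
```

Changing the noise density (e.g. to Laplace, with a larger g(0)) shrinks this variance, which is exactly how g(0) enters the asymptotic covariance of the Laplacian-QMLE when f_θ ≠ 0.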

Comments on these limit theorems
Essentially, these limit theorems could appear close, or even very close, to the results of three other references, which we list chronologically below while highlighting the differences:

• The first related paper is Davis and Dunsmuir (1997), which is cited many times. The framework of this paper is restricted to the LAD estimator (similar to the Laplacian-QMLE) of the parameters of an ARMA(p, q) process, or to residuals of least-squares estimation with ARMA(p, q) errors. While the framework (1.1) is clearly more general, since it includes for instance GARCH, ARMA-GARCH or APARCH processes, the proof we used for establishing the asymptotic normality of the Laplacian estimator is clearly inspired by the one of Davis and Dunsmuir (1997). Thus our results could appear as extensions of this paper.

• The second, and certainly closest, paper is Bardet and Wintenberger (2009). The considered framework is exactly the same, i.e. general causal affine models, and the estimation method is the same, i.e. quasi-maximum likelihood estimation (QMLE). However, in Bardet and Wintenberger (2009) the QMLE is based on an "instrumental" Gaussian density instead of a Laplacian one. As is the case, for instance, when comparing quantile regression with least squares regression, this implies three main differences:

1. The moment conditions r of both limit theorems (strong consistency and asymptotic normality) are weaker with the Laplacian QMLE than with the Gaussian one. Indeed, the absolute value of the conditional log-density q_t(θ) is bounded by an affine function of |X_t| in the Laplacian case, while it is bounded by a quadratic polynomial of X_t in the Gaussian case. As a consequence, r = 1 (respectively r = 2) can be required for the strong consistency (resp. asymptotic normality) of the Laplacian QMLE, while r = 2 (resp. r = 4) is required for the Gaussian QMLE. This gain on the moment condition can be crucial, for instance in an econometric framework where the kurtosis of the data is sometimes infinite.
2. The proof of Theorem 3.1 is simpler and sharper than the proof of strong consistency in Bardet and Wintenberger (2009). Indeed, in our new proof, we use a condition of almost sure uniform convergence based on a general and powerful result established in Kounias and Weng (1969), while a Feller-type condition was "only" used in Bardet and Wintenberger (2009). This difference leads to a very sharp condition on the decreasing rate of the Lipschitz coefficients (α_k) for the Laplacian QMLE, namely ℓ > 1 in (3.1), while ℓ > 3/2 is required for the Gaussian QMLE.
3. The proof of Theorem 3.3 is totally different from the one for the Gaussian QMLE, since the conditional log-density is no longer differentiable with respect to the parameters. A proof similar to the one used for establishing the asymptotic normality of the median is required. Hence, in a first step we had to prove an extension of a central limit theorem for adapted processes established in Davis and Dunsmuir (1997), i.e. our Theorem 3.2, and in a second step we used it for establishing the asymptotic normality of the Laplacian QMLE. Note also that the conditions on the derivatives of the functions f_θ and M_θ are clearly weaker with the Laplacian than with the Gaussian QMLE.
• The third related paper is Francq et al. (2011). The framework of this paper is restricted to linear causal models (X_t = σ_t(θ) ξ_t), in contrast with the affine causal models (X_t = M_t^θ ξ_t + f_t^θ) considered in (1.1). Hence ARMA, but also ARMA-GARCH or ARMA-APARCH processes, are not considered in this framework. Moreover, the required moment condition is r = 4 (instead of r = 2 in our conditions), and their condition on the approximation of σ_t(θ), i.e. sup_θ |σ_t(θ) − σ̃_t(θ)| ≤ C_1 ρ^t, is clearly more restrictive than our Lipschitz-type condition (for instance, ARCH(∞) processes with Riemannian decay of the coefficients can satisfy our conditions but not theirs). In Francq et al. (2011), a large family of instrumental probability densities, i.e. generalized Gaussian densities including the Laplace density, is considered, but their proof of asymptotic normality mimics the proof using derivatives of the Gaussian QMLE, since the "shift" component f_t^θ, typically present for ARMA processes, is not considered in their models. Note also that Francq and Zakoian (2015) also study non-Gaussian QMLE, but their assumption A9 implies that the Laplace density is not covered by their asymptotic normality result for the QMLE.
Finally, it appears that our results provide an original extension of, or counterpart to, these three related references.

Examples
In this section, several examples of time series satisfying the conditions of the previous results are considered. Since it would be tedious to state the results for all well-known processes, we refer, mutatis mutandis, to Bardet and Wintenberger (2009) and Bardet et al. (2012) for ARCH(∞) and TARCH(∞) processes.
3/ ARMA-ARCH(∞) processes. ARMA(p, q)-ARCH(∞) processes are a natural extension of ARMA-GARCH processes. They are the solution of the system of equations

X_t = ε_t + Σ_{i=1}^p a_i X_{t-i} + Σ_{j=1}^q b_j ε_{t-j},  ε_t = σ_t ζ_t,  σ_t² = c_0 + Σ_{i=1}^∞ c_i ε_{t-i}².    (3.9)

ARCH(∞) processes were introduced by Robinson (1991), and the asymptotic properties of the Gaussian-QMLE were studied in Robinson and Zaffaroni (2006), Straumann and Mikosch (2006) and Bardet and Wintenberger (2009). Hence, we assume that there exists β = (β_1, …, β_m) such that for all i ∈ N, c_i = c(i, β), with c(·) a known function. Let θ = (β, a_1, …, a_p, b_1, …, b_q). We are going to use Lemma 2.1. Since (ε_t) is supposed to be an ARCH(∞) process, direct computations imply that the Lipschitz coefficients of (ε_t) are driven by the coefficients (c_i). Therefore we assume that there exists ℓ > 1 such that

c(i, β) = O(i^{−ℓ}).    (3.10)

Considering the ARMA part and denoting by (ψ_j) the coefficients of the corresponding linear representation, we deduce from Lemma 2.1 that the conditions on the Lipschitz coefficients hold, and X is a.s. a solution of (1.1) for θ included in the r-order stationarity set Θ(r) defined by (3.11). Now the strong consistency and asymptotic normality of the Laplacian-QMLE for ARMA-ARCH(∞) processes can be established:

Proposition 3.3. Assume that X is a stationary solution of (3.9) where (3.10) holds and with θ_0 ∈ Θ, where Θ is a compact subset of Θ(r) defined in (3.11). Then:
1. If r ≥ 1 and ℓ ≥ 2/min(r, 2), then θ̂_n → θ_0 a.s.
2. If r = 2, ℓ > 1, ∂_β^i c(j, β) = O(j^{−ℓ}) for i = 1, 2, and if Γ_f and Γ_M defined in (5.21) are positive definite symmetric matrices, then the asymptotic normality (3.3) of θ̂_n holds.

This result is new. Note that ℓ > 1 and r = 2 are required for the asymptotic normality of the Laplacian-QMLE, while r = 4 and ℓ > 2 are required for the Gaussian-QMLE for such processes (see for instance Bardet and Wintenberger (2009)). This confers a clear advantage to the Laplacian-QMLE.
Proposition 3.4. Assume that X is a stationary solution of (3.12) with θ_0 ∈ Θ, where Θ is a compact subset of Θ(r) defined in (3.8). Then:
2. If r = 2, and if Γ_f and Γ_M defined in (5.21) are positive definite symmetric matrices, then the asymptotic normality (3.3) of θ̂_n holds.
This result is stated for the first time for the Laplacian-QMLE. The case of the Gaussian-QMLE for ARMA-APARCH processes could also be obtained following the previous decomposition and the paper Bardet and Wintenberger (2009). Once again, the asymptotic normality of the Laplacian-QMLE only requires r = 2, while r = 4 is required for the Gaussian-QMLE.

Numerical Results
To illustrate the asymptotic results stated previously, we carried out Monte-Carlo experiments on the behavior of the Laplacian-QMLE (denoted θ̂_n^{LQL}) for several time series models, sample sizes and probability distributions. A comparison with the results obtained by the Gaussian-QMLE (denoted θ̂_n^{GQL}) is also proposed. More precisely, the considered probability distributions of (ζ_t) are:
• Centred Gaussian distribution, denoted N;
• Centred Laplacian distribution, denoted L;
• Centred Uniform distribution, denoted U;
• Centred Student distribution with 3 degrees of freedom, denoted t_3;
• Normalized centred Gaussian mixture with probability distribution 0.05 N(−2, 0.16) + 0.90 N(0, 1) + 0.05 N(2, 0.16), denoted M.
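The mixture M can be sampled as follows (a sketch; the middle-component weight 0.90 is an assumption made so that the weights sum to one, and the rescaling to unit variance implements the "normalized" qualifier):

```python
import numpy as np

def mixture_noise(size, rng):
    """Sample the contaminated noise M: 0.05*N(-2,0.16) + 0.90*N(0,1) + 0.05*N(2,0.16),
    rescaled to unit variance (the mixture is symmetric, hence centred)."""
    comp = rng.choice(3, size=size, p=[0.05, 0.90, 0.05])  # pick a component
    means = np.array([-2.0, 0.0, 2.0])[comp]
    sds = np.array([0.4, 1.0, 0.4])[comp]                  # 0.16 is a variance
    raw = means + sds * rng.standard_normal(size)
    var = 2 * 0.05 * (0.16 + 4.0) + 0.90 * 1.0             # mixture variance = 1.316
    return raw / np.sqrt(var)

rng = np.random.default_rng(7)
z = mixture_noise(200_000, rng)
```

The two side components at ±2 play the role of symmetric outliers, which is precisely the regime where a robust contrast is expected to pay off.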
Hence we computed the root-mean-square error (RMSE) from 1000 independent replications of θ̂_n^{LQL} and θ̂_n^{GQL} for ARMA(1, 1)-GARCH(1, 1) and ARMA(1, 1)-APARCH(1, 1) processes; the results are presented in Tables 1 and 2. Conclusion of the numerical results: On the one hand, it is clear that the RMSE decreases as the sample size increases, which validates the theoretical results (consistency of the estimators). On the other hand, Tables 1 and 2 show that the Laplacian-QMLE provides a more accurate estimation than the Gaussian-QMLE for several types of noise, except of course in the case of a Gaussian distribution (and even in this case the RMSE of both estimators is almost the same).
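The qualitative finding can be reproduced on a toy AR(1) with Student t_3 noise (an illustrative setting, not the paper's ARMA-GARCH design; sample size, replication count and grid are arbitrary choices): the LAD-type slope estimate, the Laplacian analogue, has a smaller RMSE than the least-squares, Gaussian-type, one.

```python
import numpy as np

rng = np.random.default_rng(8)
theta0, n, reps = 0.5, 300, 500
thetas = np.linspace(-0.99, 0.99, 397)       # grid for the LAD criterion

err_ls, err_lad = [], []
for _ in range(reps):
    zeta = rng.standard_t(df=3, size=n)      # heavy-tailed noise (infinite 4th moment)
    X = np.zeros(n)
    for t in range(1, n):
        X[t] = theta0 * X[t - 1] + zeta[t]
    y, x = X[1:], X[:-1]
    th_ls = (x @ y) / (x @ x)                # least squares (Gaussian-QMLE-type)
    mad = np.abs(y[None, :] - thetas[:, None] * x[None, :]).mean(axis=1)
    th_lad = thetas[np.argmin(mad)]          # LAD (Laplacian-QMLE-type)
    err_ls.append(th_ls - theta0)
    err_lad.append(th_lad - theta0)

rmse_ls = np.sqrt(np.mean(np.square(err_ls)))
rmse_lad = np.sqrt(np.mean(np.square(err_lad)))
```

With t_3 noise the fourth moment is infinite, which is exactly the regime where the theory above predicts an advantage for the Laplacian-type contrast.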

Proofs
Proof of Lemma 2.1. First, as X is a stationary process and the ARMA(p, q) process is causal and invertible, X̃ is also a stationary process (the coefficients of Λ_{β_0} are EDS). Moreover, it is well known that (ψ_j(β_0))_{j∈N} is EDS. Then we have (5.1). Moreover, we also have (5.3). The same kinds of computations can also be done by considering the first and second derivatives of f̃ and M̃ with respect to θ. Note, and this is important, that the first and second derivatives of Λ_β^{−1} with respect to θ are also EDS. Finally, the same kind of computation can also be done for (α_j^{(i)}(K̃, {θ̃_0}))_j, since the first and second derivatives of Λ_{β_0}^{−1} with respect to β, and therefore to θ̃, are also EDS. Now we recall two lemmas already proved in Bardet and Wintenberger (2009):

Lemma 5.1. Assume that θ_0 ∈ Θ(r) for r ≥ 1 and X is the causal stationary solution of equation (1.1). If (A_0(K, Θ)) holds (with K = f or K = M), then K_t^θ ∈ L^r(C(Θ, R^m)) and there exists C > 0, not depending on t, such that E[‖K_t^θ‖_Θ^r] ≤ C for all t ∈ N*.
Proof of Theorem 3.1. The proof of the theorem is divided into two parts and follows the same kind of procedure as in Jeantheau (1998). In (i), a uniform (on Θ) strong law of large numbers satisfied by (1/n) L_n(θ), converging to L(θ) := −E[q_0(θ)], is established. In (ii), it is proved that L(θ) admits a unique maximum at θ_0. These two conditions lead to the strong consistency of θ̂_n (from Jeantheau (1998)).
(i) In the same way and for the same reasons as in the proof of Theorem 1 of Bardet and Wintenberger (2009), the uniform strong law of large numbers satisfied by the sample mean of (q̂_t)_{t∈N*} (defined in (2.3)) is implied by establishing E[‖q̂_t(θ)‖_Θ] < ∞. But new computations have to be done in the case of the Laplacian conditional log-density q̂_t(θ). From Lemma 5.1, this bound holds for all t ∈ Z. Hence, the uniform strong law of large numbers for (q_t(θ)) follows (5.5). Now, we establish the negligibility of the approximation error. Indeed, for all θ ∈ Θ and t ∈ N*, the difference is bounded with some C > 0. By Corollary 1 of Kounias and Weng (1969), the proof is achieved if there exists s ∈ (0, 1] such that (5.6) holds. Let us prove (5.6) with s = r/2 when r ∈ [1, 2]. From the Cauchy-Schwarz inequality and assumptions A_0(f, Θ) and A_0(M, Θ), and using Lemma 5.1 and the previously proved results, the last inequality is obtained from condition (3.1) of Theorem 3.1. Hence, we obtain a bound which is finite when rℓ > 2.
When r ≥ 2, it is sufficient to consider the case r = 2. As a consequence, using (5.5), we obtain a limit which can also be considered as a Kullback-Leibler discrepancy. Using a central limit theorem for martingale differences (see for instance Billingsley (1968)) and Lemma 5.1, we obtain (5.12). From the strong law of large numbers for martingale differences (see again Billingsley (1968)), we obtain (5.13). The previous arguments imply that I_1^{(3)}(v) has the same limit distribution as the corresponding main term. From the strong law of large numbers for martingale differences (see Billingsley (1968)), we obtain (5.14). Finally, from (5.12), (5.13) and (5.14), we obtain the limit of W_n(v). Using again a Taylor expansion, and since (ζ_t)_t admits a symmetric probability distribution with null median and expectation, there is no covariance term, and finally we obtain (5.21). 2/ Now, we consider the approximation W̃_n(v) of W_n(v). From the assumptions of Theorem 3.1 and (5.7), the difference between W̃_n and W_n converges to 0 as n → ∞, and then W̃_n converges to W defined in (5.20).
3/ Now, from (5.22), the proof of Theorem 3.2 and the same arguments as in the proof of Theorem 2 of Davis and Dunsmuir (1997), we deduce the convergence of the finite-dimensional distributions. Moreover, still following the proof of Theorem 2, (W̃_n(v))_v converges to (W(v))_v as a process on the space C_0 of continuous functions. As a consequence, a maximizer v̂ of W̃_n(v) satisfies the stated convergence with N defined in (5.21), and this implies (3.3).
Proof of Proposition 3.1. First, Condition C2 is satisfied since b_0 > 0. The other conditions on the Lipschitz coefficients are also satisfied from Lemma 2.1 (see the arguments above). The identifiability condition C3 is also satisfied from the following arguments, which are divided into two parts. In (i) we prove that δ, b_0, (b_i^+(θ), b_i^−(θ))_{i≥1} (defined in (3.5)) are unique; thereafter in (ii) we prove that θ = (ω, (α_i)_{1≤i≤p}, (γ_i)_{1≤i≤p}, (β_i)_{1≤i≤q}) is also unique.
(i) The proof of this result follows the same reasoning as in Berkes et al. (2003). First we have (5.23). We prove the result by contradiction. Suppose that there exist two distinct vectors of parameters. On the one hand, since x ∈ (0, ∞) → x^δ is a one-to-one map and since P(X_t = ±1, ∀t ∈ Z) = 0, we have δ = δ′. On the other hand, by definition of m, we have σ_{t−m}^δ > b_0 > 0, so ζ_{t−m}^δ is well defined. Let F_k be the σ-algebra generated by (ζ_i, i < k). The causal representation of the APARCH(δ, p, q) process shows that X_j is F_j-measurable, and thus the right-hand side of the above equations (and consequently also ζ_{t−m}^δ in the case ζ_{t−m} ≥ 0 or the case ζ_{t−m} < 0) is a real-valued random variable, measurable with respect to F_{t−m−1}. Since (ζ_j) is a sequence of independent random variables, this implies that ζ_{t−m} is a.s. constant when ζ_{t−m} ≥ 0 or when ζ_{t−m} < 0, contradicting the hypothesis that ζ_0^δ has a non-degenerate distribution. This proves (i).
Proof of Proposition 3.2. Since Lemma 2.1 implies the conditions on the Lipschitz coefficients (α_j^{(i)}(f, Θ))_j and (α_j^{(i)}(M, Θ))_j, it remains to prove conditions C2 and C3. Condition C2 holds since c_0 is supposed to be a positive number. Finally, condition C3 also holds since f_θ = f_{θ′} implies ψ_j(θ) = ψ_j(θ′) for all j ∈ Z. Therefore the parameters of the ARMA part of the process are identified, and the identification of the GARCH parameters can then be deduced from the proof of Proposition 3.1.
Proof of Proposition 3.4. This proof mimics exactly the proof of Proposition 3.2.
and let (Y_t)_{t∈Z} and (V_t)_{t∈Z} be two stationary processes adapted to (F_t)_t and such that E[Y_0²] < ∞

Table 1
Root Mean Square Error of the components of θ̂_n^{LQL}

Table 2
Root Mean Square Error of the components of θ̂_n^{LQL}