Quasi maximum likelihood estimation for strongly mixing state space models and multivariate Lévy-driven CARMA processes

We consider quasi maximum likelihood (QML) estimation for general non-Gaussian discrete-time linear state space models and equidistantly observed multivariate Lévy-driven continuous-time autoregressive moving average (MCARMA) processes. In the discrete-time setting, we prove strong consistency and asymptotic normality of the QML estimator under standard moment assumptions and a strong-mixing condition on the output process of the state space model. In the second part of the paper, we investigate probabilistic and analytical properties of equidistantly sampled continuous-time state space models and apply our results from the discrete-time setting to derive the asymptotic properties of the QML estimator for discretely recorded MCARMA processes. Under natural identifiability conditions, the estimators are again consistent and asymptotically normally distributed for any sampling frequency. We also demonstrate the practical applicability of our method through a simulation study and a data example from econometrics.


Introduction
Linear state space models have been used in time series analysis and stochastic modelling for many decades because of their wide applicability and analytical tractability (see, e. g., Brockwell and Davis, 1991; Hamilton, 1994, for a detailed account). In discrete time they are defined by the equations
\[ X_n = F X_{n-1} + Z_{n-1}, \qquad Y_n = H X_n + W_n, \qquad n \in \mathbb{Z}, \tag{1.1} \]
where $X = (X_n)_{n\in\mathbb{Z}}$ is a latent state process, $F, H$ are coefficient matrices, and $Z = (Z_n)_{n\in\mathbb{Z}}$, $W = (W_n)_{n\in\mathbb{Z}}$ are sequences of random variables; see Definition 2.1 for a precise formulation of this model. In this paper we investigate the problem of estimating the coefficient matrices $F, H$ as well as the second moments of $Z$ and $W$ from a sample of observed values of the output process $Y = (Y_n)_{n\in\mathbb{Z}}$, using a quasi maximum likelihood (QML) or generalized least squares approach. Given the importance of this problem in practice, it is surprising that a proper mathematical analysis of QML estimation for the model (1.1) has only been performed in cases where the model is in the so-called innovations form
\[ X_n = F X_{n-1} + K \varepsilon_{n-1}, \qquad Y_n = H X_n + \varepsilon_n, \qquad n \in \mathbb{Z}, \tag{1.2} \]
where the innovations ε have constant conditional variance and satisfy some higher-order moment conditions (Hannan and Deistler, 1988, Chapter 4). This includes state space models in which the noise sequences $Z, W$ are Gaussian, because then the innovations, which are uncorrelated by definition, form an i. i. d. sequence. Restriction to these special cases excludes, however, the state space representations of aggregated linear processes, as well as of equidistantly observed continuous-time linear state space models.
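As a concrete illustration of the model (1.1), the following minimal Python sketch simulates a state space model with non-Gaussian (Student-t) noise; all matrices, dimensions and noise laws are hypothetical choices for illustration only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional state, one-dimensional observation model (1.1):
#   X_n = F X_{n-1} + Z_{n-1},   Y_n = H X_n + W_n.
F = np.array([[0.5, 0.2],
              [0.0, 0.3]])      # state transition matrix, spectral radius < 1
H = np.array([[1.0, 0.0]])      # observation matrix

L = 500
X = np.zeros((L + 1, 2))        # latent state process
Y = np.zeros((L, 1))            # observed output process
for n in range(L):
    Z = 0.1 * rng.standard_t(df=5, size=2)   # heavy-tailed state noise
    W = 0.1 * rng.standard_t(df=5, size=1)   # heavy-tailed observation noise
    X[n + 1] = F @ X[n] + Z
    Y[n] = H @ X[n + 1] + W
```

Since this noise is i. i. d. with an absolutely continuous distribution, the resulting output process falls into the strongly mixing setting studied in Section 2.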
In the first part of the present paper we shall prove consistency (Theorem 2.4) and asymptotic normality (Theorem 2.5) of the QML estimator for the general linear state space model (1.1) under the assumptions that the noise sequences Z, W are ergodic, and that the output process Y satisfies a strong-mixing condition in the sense of Rosenblatt (1956). This assumption is not very restrictive; it is, in particular, satisfied if the noise sequence Z is i. i. d. with an absolutely continuous component, and W is strongly mixing. Our results are a multivariate generalization of Francq and Zakoïan (1998), who considered the QML estimation for univariate strongly mixing ARMA processes. The very recent paper Boubacar Mainassara and Francq (2011), which deals with the structural estimation of weak vector ARMA processes, instead makes a mixing assumption about the innovations sequence ε of the process under consideration, which is very difficult to verify for state space models; their results can therefore not be used for the estimation of general discretely observed linear continuous-time state space models. As alluded to above, one advantage of relaxing the assumption of i. i. d. innovations in a discrete-time state space model is the inclusion of sampled continuous-time state space models. These were introduced in the form of continuous-time ARMA (CARMA) models in Doob (1944) as stochastic processes satisfying the formal analogue of the familiar autoregressive moving average equations of discrete-time ARMA processes, namely
\[ a(\mathrm{D}) Y(t) = b(\mathrm{D}) \mathrm{D} W(t), \qquad \mathrm{D} = \mathrm{d}/\mathrm{d}t, \tag{1.3} \]
where a and b are suitable polynomials, and W denotes a Brownian motion. In the recent past, a considerable body of research has been devoted to these processes. One particularly important extension of the model (1.3) was introduced in Brockwell (2001), where the driving Brownian motion was replaced by a Lévy process with finite logarithmic moments.
This allowed for a wide range of possibly heavy-tailed marginal distributions of the process Y as well as for the occurrence of jumps in the sample paths, both characteristic features of many observed time series, e. g. in finance (Cont, 2001). Recently, Marquardt and Stelzer (2007) further generalized Eq. (1.3) to the multivariate setting, which gave researchers the possibility to model several dependent time series jointly by one linear continuous-time process. This extension is important because many time series exhibit strong dependencies and can therefore not be modelled adequately on an individual basis. In that paper, the multivariate non-Gaussian equivalent of Eq. (1.4) is central, and it is essential for many of our arguments. Taking a different route, multivariate CARMA processes can be defined as the continuous-time analogue of discrete-time vector ARMA models, described in detail in Hannan and Deistler (1988). As continuous-time processes, CARMA processes are particularly well suited to model irregularly spaced and high-frequency data, which makes them a flexible and efficient tool for building stochastic models of time series arising in the natural sciences, engineering and finance (e. g. Benth and Šaltytė Benth, 2009; Todorov and Tauchen, 2006). In the univariate Gaussian setting, several different approaches to the estimation problem for CARMA processes have been investigated (see, e. g., Larsson, Mossberg and Söderström, 2006, and references therein). Maximum likelihood estimation based on a continuous record was considered in Brown and Hewitt (1975); Feigin (1976); Pham (1977).
Since processes are typically not observed continuously, and because of the limitations of digital computer processing, inference based on discrete observations has become more important in recent years; these approaches include variants of the Yule-Walker algorithm for time-continuous autoregressive processes (Hyndman, 1993), maximum likelihood methods (Brockwell, Davis and Yang, 2011), and randomized sampling (Rivoira, Moudden and Fleury, 2002) to overcome the aliasing problem. Alternative methods include discretization of the differential operator (Söderström et al., 1997) and spectral estimation (Gillberg and Ljung, 2009; Lii and Masry, 1995). For the special case of Ornstein-Uhlenbeck processes, least squares and moment estimators have also been investigated without the assumption of Gaussianity (Hu and Long, 2009; Spiliopoulos, 2009). In the second part of this paper we consider the estimation of general multivariate CARMA (MCARMA) processes with finite second moments based on equally spaced discrete observations, exploiting our results about the QML estimation of general linear discrete-time state space models. Under natural identifiability assumptions we obtain, in the main Theorem 3.16, strongly consistent and asymptotically normal estimators for the coefficient matrices of a second-order MCARMA process and the covariance matrix of the driving Lévy process, which together determine the second-order structure of the process. It is a natural restriction of the QML method that distributional properties of the driving Lévy process which are not determined by its covariance matrix cannot be estimated. However, once the autoregressive and moving average coefficients of a CARMA process are (approximately) known, and if high-frequency observations are available, a parametric model for the driving Lévy process can be estimated by the methods described in Brockwell and Schlemm (2012).
It should be noted that Brockwell and Schlemm (2012) consider the same model; whereas the present paper considers the estimation of the autoregressive and moving average parameters from equidistant observations, letting the number of observations go to infinity, Brockwell and Schlemm (2012) assume that these parameters are known and show how to estimate the driving Lévy process and its parameters when both the observation frequency and the time horizon go to infinity. A further related paper is Schlemm and Stelzer (2012), whose result on the equivalence of MCARMA processes and state space models provides the foundation for the estimation procedure considered here. That paper also aimed at using the results of Boubacar Mainassara and Francq (2011) directly to estimate the autoregressive and moving average parameters of an MCARMA process, and therefore provided conditions for the noise of the induced discrete-time state space model to be strongly mixing. However, when we investigated this route further, it turned out that the approach taken in the present paper is more general and far more convenient, since any stationary discretely sampled MCARMA process with finite second moments is strongly mixing, whereas assumptions ensuring a non-trivial absolutely continuous component of the noise are needed in order to use the results of Boubacar Mainassara and Francq (2011). Hence, the approach taken in the present paper appears rather natural for MCARMA processes. Finally, we note that the estimation of the spectral density of univariate CARMA processes, as well as estimation in the case of infinite variance, has recently been considered in Fasen and Fuchs (2012a,b), and that Fasen (2012) studies the behaviour of the sample autocovariance function of discretely observed MCARMA processes in a high-frequency limit.
Outline of the paper The organization of the paper is as follows. In Section 2 we develop a QML estimation theory for general non-Gaussian discrete-time linear stochastic state space models with finite second moments. In Section 2.1 we precisely define the class of linear stochastic state space models as well as the QML estimator. The main results, that under a set of technical conditions this estimator is strongly consistent and asymptotically normally distributed as the number of observations tends to infinity, are given as Theorems 2.4 and 2.5 in Section 2.2. The following two Sections 2.3 and 2.4 present the proofs.
In Section 3 we use the results from Section 2 to establish asymptotic properties of a QML estimator for multivariate CARMA processes which are observed on a fixed equidistant time grid. As a first step, we review in Section 3.1 their definition as well as their relation to the class of continuous-time state space models. This is followed by an investigation of the probabilistic properties of a sampled MCARMA process in Section 3.3 and an analysis of the important issue of identifiability in Section 3.4. Finally, we are able to state and prove our main result, Theorem 3.16, about the strong consistency and asymptotic normality of the QML estimator for equidistantly sampled multivariate CARMA processes in Section 3.5.
In the final Section 4, we present canonical parametrizations, and we demonstrate the applicability of the QML estimation for continuous-time state space models with a simulation study.
Notation We use the following notation: The space of m × n matrices with entries in the ring K is denoted by $M_{m,n}(K)$, or $M_m(K)$ if m = n. The set of symmetric matrices is denoted by $S_m(K)$, and the symbols $S_m^+(\mathbb{R})$ ($S_m^{++}(\mathbb{R})$) stand for the subsets of positive semidefinite (positive definite) matrices, respectively. $A^T$ denotes the transpose of the matrix A, im A its image, ker A its kernel, σ(A) its spectrum, and $1_m \in M_m(K)$ is the identity matrix. The vector space $\mathbb{R}^m$ is identified with $M_{m,1}(\mathbb{R})$, so that $u = (u_1, \ldots, u_m)^T \in \mathbb{R}^m$ is a column vector. $\|\cdot\|$ represents the Euclidean norm, $\langle \cdot, \cdot \rangle$ the Euclidean inner product, and $0_m \in \mathbb{R}^m$ the zero vector. $K[X]$ ($K\{X\}$) denotes the ring of polynomial (rational) expressions in X over K, $I_B(\cdot)$ the indicator function of the set B, and $\delta_{n,m}$ the Kronecker symbol. The symbols E, Var, and Cov stand for the expectation, variance and covariance operators, respectively. Finally, we write $\partial_m$ for the partial derivative operator with respect to the mth coordinate and $\nabla = (\partial_1, \ldots, \partial_r)^T$ for the gradient operator. When there is no ambiguity, we use $\partial_m f(\vartheta_0)$ and $\nabla_\vartheta f(\vartheta_0)$ as shorthands for $\partial_m f(\vartheta)|_{\vartheta=\vartheta_0}$ and $\nabla_\vartheta f(\vartheta)|_{\vartheta=\vartheta_0}$, respectively. A generic constant, the value of which may change from line to line, is denoted by C.

Quasi maximum likelihood estimation for state space models
In this section we investigate QML estimation for general linear state space models in discrete time, and prove consistency and asymptotic normality. On the one hand, due to the wide applicability of state space systems in stochastic modelling and control, these results are interesting and useful in their own right. In the present paper they will be applied in Section 3 to prove asymptotic properties of the QML estimator for discretely observed multivariate continuous-time ARMA processes.
Our theory extends existing results from the literature, in particular concerning the QML estimation of Gaussian state space models, of state space models with independent innovations (Hannan, 1975), and of weak univariate ARMA processes satisfying a strong mixing condition (Francq and Zakoïan, 1998). The techniques used in this section are similar to those of Boubacar Mainassara and Francq (2011).

Preliminaries and definition of the QML estimator
The general linear stochastic state space model is defined as follows.
Definition 2.1. An $\mathbb{R}^d$-valued discrete-time linear stochastic state space model (F, H, Z, W) of dimension N is specified by: a strictly stationary, centred noise sequence $((Z_n^T, W_n^T)^T)_{n\in\mathbb{Z}}$ with finite covariance matrix
\[ E \begin{pmatrix} Z_n \\ W_n \end{pmatrix} \begin{pmatrix} Z_n \\ W_n \end{pmatrix}^T = \begin{pmatrix} Q & R \\ R^T & S \end{pmatrix}; \tag{2.1} \]
a state transition matrix $F \in M_N(\mathbb{R})$; and an observation matrix $H \in M_{d,N}(\mathbb{R})$. It consists of a state equation
\[ X_n = F X_{n-1} + Z_{n-1}, \qquad n \in \mathbb{Z}, \tag{2.2a} \]
and an observation equation
\[ Y_n = H X_n + W_n, \qquad n \in \mathbb{Z}. \tag{2.2b} \]
The $\mathbb{R}^N$-valued autoregressive process $X = (X_n)_{n\in\mathbb{Z}}$ is called the state vector process, and $Y = (Y_n)_{n\in\mathbb{Z}}$ is called the output process.
The assumption that the processes Z and W are centred is not essential for our results, but it simplifies the notation considerably. Basic properties of the output process Y are described in Brockwell and Davis (1991, §12.1); in particular, if the eigenvalues of F are less than unity in absolute value, then Y has the moving average representation
\[ Y_n = W_n + H \sum_{\nu=0}^{\infty} F^{\nu} Z_{n-1-\nu}, \qquad n \in \mathbb{Z}. \tag{2.3} \]
Before we turn our attention to the estimation problem for this class of state space models, we review the necessary aspects of the theory of Kalman filtering; see Kalman (1960) for the original control-theoretic account and Brockwell and Davis (1991, §12.2) for a treatment in the context of time series analysis. The linear innovations of the output process Y are of particular importance for the QML estimation of state space models.
Definition 2.2. Let $Y = (Y_n)_{n\in\mathbb{Z}}$ be an $\mathbb{R}^d$-valued stationary stochastic process with finite second moments. The linear innovations $\varepsilon = (\varepsilon_n)_{n\in\mathbb{Z}}$ of Y are then defined by
\[ \varepsilon_n = Y_n - P_{n-1} Y_n, \qquad P_n = \text{orthogonal projection onto } \overline{\operatorname{span}}\{ Y_\nu : -\infty < \nu \leq n \}, \]
where the closure is taken in the Hilbert space of square-integrable random variables with inner product $(X, Y) \mapsto E\langle X, Y \rangle$. This definition immediately implies that the innovations ε of a stationary stochastic process Y are stationary and uncorrelated. The following proposition is a combination of Brockwell and Davis (1991, Proposition 12.2.3) and Hamilton (1994, Proposition 13.2).
Proposition 2.1. Assume that Y is the output process of the state space model (2.2), that at least one of the matrices Q and S is positive definite, and that the absolute values of the eigenvalues of F are less than unity. Then the following hold.
i) The discrete-time algebraic Riccati equation
\[ \Omega = F \Omega F^T + Q - \left( F \Omega H^T + R \right) \left( H \Omega H^T + S \right)^{-1} \left( F \Omega H^T + R \right)^T \tag{2.5} \]
has a unique positive semidefinite solution $\Omega \in S_N^+(\mathbb{R})$.
ii) The absolute values of the eigenvalues of the matrix $F - KH \in M_N(\mathbb{R})$ are less than one, where
\[ K = \left( F \Omega H^T + R \right) \left( H \Omega H^T + S \right)^{-1} \in M_{N,d}(\mathbb{R}) \tag{2.6} \]
is the steady-state Kalman gain matrix.
iii) The linear innovations ε of Y are the unique stationary solution to
\[ \hat{X}_n = (F - KH) \hat{X}_{n-1} + K Y_{n-1}, \qquad \varepsilon_n = Y_n - H \hat{X}_n. \tag{2.7a} \]
Using the backshift operator B, which is defined by $\mathrm{B} Y_n = Y_{n-1}$, this can be written equivalently as
\[ \varepsilon_n = \left[ 1_d - H \left( 1_N - (F - KH) \mathrm{B} \right)^{-1} K \mathrm{B} \right] Y_n. \tag{2.7b} \]
The covariance matrix $V = E \varepsilon_n \varepsilon_n^T \in S_d^+(\mathbb{R})$ of the innovations ε is given by
\[ V = H \Omega H^T + S. \tag{2.8} \]
iv) The process Y has the innovations representation
\[ X_n = F X_{n-1} + K \varepsilon_{n-1}, \qquad Y_n = H X_n + \varepsilon_n, \qquad n \in \mathbb{Z}, \tag{2.9a} \]
which, similar to Eqs. (2.7), allows for the moving average representation
\[ Y_n = \left[ 1_d + H \left( 1_N - F \mathrm{B} \right)^{-1} K \mathrm{B} \right] \varepsilon_n. \tag{2.9b} \]
For some parameter space $\Theta \subset \mathbb{R}^r$, $r \in \mathbb{N}$, the mappings
\[ F_{(\cdot)} \colon \Theta \to M_N(\mathbb{R}), \qquad H_{(\cdot)} \colon \Theta \to M_{d,N}(\mathbb{R}), \tag{2.10a} \]
together with a collection of strictly stationary stochastic processes $Z_\vartheta$, $W_\vartheta$, ϑ ∈ Θ, with finite second moments determine a parametric family $(F_\vartheta, H_\vartheta, Z_\vartheta, W_\vartheta)_{\vartheta\in\Theta}$ of linear state space models according to Definition 2.1. For the variance and covariance matrices of the noise sequences Z, W we use the notation (cf. Eq. (2.1)) $Q_\vartheta = E Z_{\vartheta,n} Z_{\vartheta,n}^T$, $S_\vartheta = E W_{\vartheta,n} W_{\vartheta,n}^T$, and $R_\vartheta = E Z_{\vartheta,n} W_{\vartheta,n}^T$, which defines the functions
\[ Q_{(\cdot)} \colon \Theta \to S_N^+(\mathbb{R}), \qquad S_{(\cdot)} \colon \Theta \to S_d^+(\mathbb{R}), \qquad R_{(\cdot)} \colon \Theta \to M_{N,d}(\mathbb{R}). \tag{2.10b} \]
It is well known (Brockwell and Davis, 1991, Eq. (11.5.4)) that for this model, minus twice the logarithm of the Gaussian likelihood of ϑ based on a sample $y^L = (Y_1, \ldots, Y_L)$ of observations can be written as
\[ \mathscr{L}(\vartheta, y^L) = \sum_{n=1}^{L} \left[ d \log 2\pi + \log \det V_\vartheta + \varepsilon_{\vartheta,n}^T V_\vartheta^{-1} \varepsilon_{\vartheta,n} \right], \tag{2.11} \]
where $\varepsilon_{\vartheta,n}$ and $V_\vartheta$ are given by analogues of Eqs. (2.7a) and (2.8), namely
\[ \hat{X}_{\vartheta,n} = (F_\vartheta - K_\vartheta H_\vartheta) \hat{X}_{\vartheta,n-1} + K_\vartheta Y_{n-1}, \qquad \varepsilon_{\vartheta,n} = Y_n - H_\vartheta \hat{X}_{\vartheta,n}, \tag{2.12} \]
and $K_\vartheta$, $\Omega_\vartheta$ are defined in the same way as K, Ω in Eqs. (2.5) and (2.6). In the following we always assume that $y^L = (Y_{\vartheta_0,1}, \ldots, Y_{\vartheta_0,L})$ is a sample from the output process of the state space model $(F_{\vartheta_0}, H_{\vartheta_0}, Z_{\vartheta_0}, W_{\vartheta_0})$ corresponding to the parameter value $\vartheta_0 \in \Theta$. We therefore call $\vartheta_0$ the true parameter value.
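The steady-state quantities Ω, K and V of Proposition 2.1 can be computed numerically by iterating the Riccati recursion of the Kalman filter; the following Python sketch does this for a hypothetical example model (in production code a dedicated DARE solver would be preferable).

```python
import numpy as np

def steady_state_kalman(F, H, Q, S, R, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for the discrete-time algebraic Riccati
    equation of Proposition 2.1; returns (Omega, K, V).  A sketch only."""
    N = F.shape[0]
    Omega = np.eye(N)
    for _ in range(max_iter):
        M = F @ Omega @ H.T + R                 # cross term F*Omega*H' + R
        V = H @ Omega @ H.T + S                 # innovation covariance
        Omega_new = F @ Omega @ F.T + Q - M @ np.linalg.solve(V, M.T)
        if np.max(np.abs(Omega_new - Omega)) < tol:
            Omega = Omega_new
            break
        Omega = Omega_new
    M = F @ Omega @ H.T + R
    V = H @ Omega @ H.T + S
    K = M @ np.linalg.inv(V)                    # steady-state Kalman gain
    return Omega, K, V

# Hypothetical example model (not from the paper)
F = np.array([[0.5, 0.2], [0.0, 0.3]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)          # Var(Z_n)
S = np.array([[0.05]])       # Var(W_n)
R = np.zeros((2, 1))         # Cov(Z_n, W_n)

Omega, K, V = steady_state_kalman(F, H, Q, S, R)
# Proposition 2.1, ii): F - K H has spectral radius strictly less than one
rho = np.max(np.abs(np.linalg.eigvals(F - K @ H)))
```

The iteration is the Riccati difference equation of the Kalman filter, whose convergence to the unique positive semidefinite fixed point is guaranteed here by the stability of F and positive definiteness of Q.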
It is important to note that $\varepsilon_{\vartheta_0}$ are the true innovations of $Y_{\vartheta_0}$, and that therefore $E \varepsilon_{\vartheta_0,n} \varepsilon_{\vartheta_0,n}^T = V_{\vartheta_0}$, but that this relation fails to hold for other values of ϑ, because $\varepsilon_\vartheta$ is then not the true innovations sequence of the state space model corresponding to the parameter value ϑ. We therefore call the sequence $\varepsilon_\vartheta$ pseudo-innovations.
The goal of this section is to investigate how the value $\vartheta_0$ can be estimated from $y^L$ by minimizing Eq. (2.11). The first difficulty one is confronted with is that the pseudo-innovations $\varepsilon_\vartheta$ are defined in terms of the full history of the process $Y = Y_{\vartheta_0}$, which is not observed. It is therefore necessary to use an approximation to these innovations which can be computed from the finite sample $y^L$. One such approximation is obtained if, instead of using the steady-state Kalman filter described in Proposition 2.1, one initializes the filter at n = 1 with some prescribed values. More precisely, we define the approximate pseudo-innovations $\hat{\varepsilon}_\vartheta$ via the recursion
\[ \hat{X}_{\vartheta,n} = (F_\vartheta - K_\vartheta H_\vartheta) \hat{X}_{\vartheta,n-1} + K_\vartheta Y_{n-1}, \qquad \hat{\varepsilon}_{\vartheta,n} = Y_n - H_\vartheta \hat{X}_{\vartheta,n}, \tag{2.13} \]
and the prescription $\hat{X}_{\vartheta,1} = \hat{X}_{\vartheta,\text{initial}}$. The initial values $\hat{X}_{\vartheta,\text{initial}}$ are usually either sampled from the stationary distribution of $X_\vartheta$, if that is possible, or set to some deterministic value. Alternatively, one can additionally define a positive semidefinite matrix $\Omega_{\vartheta,\text{initial}}$ and compute Kalman gain matrices $K_{\vartheta,n}$ recursively via Brockwell and Davis (1991, Eq. (12.2.6)). While this procedure might be advantageous for small sample sizes, the computational burden is significantly smaller when the steady-state Kalman gain is used.
The asymptotic properties which we are dealing with in this paper are expected to be the same for both choices, because the Kalman gain matrices $K_{\vartheta,n}$ converge to their steady-state values as n tends to infinity (Hamilton, 1994, Proposition 13.2). The QML estimator $\hat{\vartheta}^L$ for the parameter ϑ based on the sample $y^L$ is defined as
\[ \hat{\vartheta}^L = \operatorname{argmin}_{\vartheta \in \Theta} \hat{\mathscr{L}}(\vartheta, y^L), \tag{2.14} \]
where $\hat{\mathscr{L}}(\vartheta, y^L)$ is obtained from $\mathscr{L}(\vartheta, y^L)$ by substituting $\hat{\varepsilon}_{\vartheta,n}$ from Eq. (2.13) for $\varepsilon_{\vartheta,n}$, i. e.
\[ \hat{\mathscr{L}}(\vartheta, y^L) = \sum_{n=1}^{L} \left[ d \log 2\pi + \log \det V_\vartheta + \hat{\varepsilon}_{\vartheta,n}^T V_\vartheta^{-1} \hat{\varepsilon}_{\vartheta,n} \right]. \tag{2.15} \]
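The objective (2.15) can be evaluated by running the recursion (2.13) through the sample; the following is a minimal sketch (the function name and the zero initialization are our own illustrative choices). The estimator $\hat{\vartheta}^L$ would then be obtained by numerically minimizing this function over the parameter space.

```python
import numpy as np

def minus_two_log_qlik(Y, F, H, K, V, X1=None):
    """Evaluate Eq. (2.15), minus twice the log Gaussian quasi-likelihood,
    using the approximate pseudo-innovations of Eq. (2.13).  The filter is
    started at a prescribed initial state (zero by default).  Sketch only;
    F, H, K, V are the steady-state system matrices for one value of theta."""
    L_obs, d = Y.shape
    N = F.shape[0]
    X = np.zeros(N) if X1 is None else X1       # prescribed initial state
    Vinv = np.linalg.inv(V)
    _, logdetV = np.linalg.slogdet(V)
    val = 0.0
    for n in range(L_obs):
        eps = Y[n] - H @ X                      # approximate pseudo-innovation
        val += d * np.log(2 * np.pi) + logdetV + eps @ Vinv @ eps
        X = F @ X + K @ eps                     # innovations-form state update
    return val
```

Note that the update `X = F @ X + K @ eps` is algebraically identical to the recursion $\hat{X}_{\vartheta,n} = (F_\vartheta - K_\vartheta H_\vartheta)\hat{X}_{\vartheta,n-1} + K_\vartheta Y_{n-1}$.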

Technical assumptions and main results
Our main results about the QML estimation for discrete-time state space models are Theorem 2.4, which states that the estimator $\hat{\vartheta}^L$ given by Eq. (2.14) is strongly consistent, meaning that $\hat{\vartheta}^L$ converges to $\vartheta_0$ almost surely, and Theorem 2.5, which asserts the asymptotic normality of $\hat{\vartheta}^L$ with the usual $L^{1/2}$ scaling. In order to prove these results, we need to impose the following conditions.
Assumption D1. The parameter space Θ is a compact subset of $\mathbb{R}^r$.
Assumption D2. The mappings $F_{(\cdot)}$, $H_{(\cdot)}$, $Q_{(\cdot)}$, $S_{(\cdot)}$, and $R_{(\cdot)}$ in Eqs. (2.10) are continuous.
The next condition guarantees that the models under consideration describe stationary processes.
Assumption D3. For every ϑ ∈ Θ, the following hold: i) the eigenvalues of F ϑ have absolute values less than unity, ii) at least one of the two matrices Q ϑ and S ϑ is positive definite, iii) the matrix V ϑ is non-singular.
The next lemma shows that the assertions of Assumption D3 hold in fact uniformly in ϑ.
Lemma 2.2. Suppose that Assumptions D1 to D3 are satisfied. Then the following hold.
i) There exists a positive number ρ < 1 such that, for all ϑ ∈ Θ, it holds that
\[ \max\{ |\lambda| : \lambda \in \sigma(F_\vartheta) \} \leq \rho. \]
ii) There exists a positive number ρ < 1 such that, for all ϑ ∈ Θ, it holds that
\[ \max\{ |\lambda| : \lambda \in \sigma(F_\vartheta - K_\vartheta H_\vartheta) \} \leq \rho, \]
where $K_\vartheta$ is defined by Eqs. (2.5) and (2.6).
iii) There exists a positive number C such that $\| V_\vartheta^{-1} \| \leq C$ for all ϑ ∈ Θ.
Proof. Assertion i) is a direct consequence of Assumption D3, i), the assumed smoothness of ϑ → F ϑ (Assumption D2), the compactness of Θ (Assumption D1), and the fact (Bernstein, 2005, Fact 10.11.2) that the eigenvalues of a matrix are continuous functions of its entries. Claim ii) follows with the same argument from Proposition 2.1, ii) and the fact that the solution of a discrete-time algebraic Riccati equation is a continuous function of the coefficient matrices (Sun, 1998). Moreover, by Eq. (2.8), the function ϑ → V ϑ is continuous, which shows that Assumption D3, iii) holds uniformly in ϑ as well, and so iii) is proved.
For the following assumption about the noise sequences Z and W we use the usual notion of ergodicity (see, e. g., Durrett, 2010, Chapter 6).
Assumption D4. The process $((Z_{\vartheta_0,n}^T, W_{\vartheta_0,n}^T)^T)_{n\in\mathbb{Z}}$ is ergodic.
The assumption that the processes $Z_{\vartheta_0}$ and $W_{\vartheta_0}$ are ergodic implies, via the moving average representation (2.3) and Krengel (1985, Theorem 4.3), that the output process $Y = Y_{\vartheta_0}$ is ergodic. As a consequence, the pseudo-innovations $\varepsilon_\vartheta$ defined in Eq. (2.12) are ergodic for every ϑ ∈ Θ.
Our first identifiability assumption precludes redundancies in the parametrization of the state space models under consideration and is therefore necessary for the true parameter value ϑ 0 to be estimated consistently. It will be used in Lemma 2.10 to show that the quasi likelihood function given by Eq. (2.15) asymptotically has a unique global minimum at ϑ 0 .
Assumption D5. For all $\vartheta \neq \vartheta_0$ in Θ, there exists a z ∈ C such that
\[ H_\vartheta(z) \neq H_{\vartheta_0}(z) \quad \text{or} \quad V_\vartheta \neq V_{\vartheta_0}, \tag{2.17} \]
where $H_\vartheta(z) = 1_d + z H_\vartheta (1_N - F_\vartheta z)^{-1} K_\vartheta$ denotes the transfer function of the moving average representation (2.9b). Assumption D5 can be rephrased in terms of the spectral densities $f_{Y_\vartheta}$ of the output processes $Y_\vartheta$ of the state space models $(F_\vartheta, H_\vartheta, Z_\vartheta, W_\vartheta)$. This characterization will be very useful when we apply the estimation theory developed in this section to state space models that arise from sampling a continuous-time ARMA process.
Lemma 2.3. If, for all $\vartheta \neq \vartheta_0$ in Θ, there exists an ω ∈ [−π, π] such that $f_{Y_\vartheta}(\omega) \neq f_{Y_{\vartheta_0}}(\omega)$, then Assumption D5 holds.
Proof. We recall from Hamilton (1994, Eq. (10.4.43)) that the spectral density $f_{Y_\vartheta}$ of the output process $Y_\vartheta$ of the state space model $(F_\vartheta, H_\vartheta, Z_\vartheta, W_\vartheta)$ is given by
\[ f_{Y_\vartheta}(\omega) = \frac{1}{2\pi}\, H_\vartheta\!\left(e^{-i\omega}\right) V_\vartheta\, H_\vartheta\!\left(e^{-i\omega}\right)^*, \qquad \omega \in [-\pi, \pi]. \]
If Assumption D5 does not hold, we have that both $H_\vartheta(z) = H_{\vartheta_0}(z)$ for all z ∈ C and $V_\vartheta = V_{\vartheta_0}$, and, consequently, that $f_{Y_\vartheta}(\omega) = f_{Y_{\vartheta_0}}(\omega)$ for all ω ∈ [−π, π], contradicting the assumption of the lemma.
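For a numerical illustration, the spectral density of the output process can be evaluated through the transfer function of the moving average representation (2.9b) and the innovation covariance V. The sketch below assumes the standard formula $f_Y(\omega) = (2\pi)^{-1} H(e^{-i\omega}) V H(e^{-i\omega})^*$; the function name is our own.

```python
import numpy as np

def spectral_density(omega, F, H, K, V):
    """Spectral density of the output process implied by the innovations
    representation (2.9): f_Y(w) = (2 pi)^{-1} T(w) V T(w)^*, where
    T(w) = 1_d + z H (1_N - F z)^{-1} K with z = exp(-i w).  Sketch only."""
    d, N = H.shape
    z = np.exp(-1j * omega)
    T = np.eye(d) + z * (H @ np.linalg.solve(np.eye(N) - z * F, K))
    return (T @ V @ T.conj().T).real / (2.0 * np.pi)
```

By Parseval's identity, integrating this density over [−π, π] recovers the variance of the output, which provides a convenient numerical sanity check of an implementation.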
Under the assumptions described so far we obtain the following consistency result.
Theorem 2.4 (Strong consistency of $\hat{\vartheta}^L$). Assume that $(F_\vartheta, H_\vartheta, Z_\vartheta, W_\vartheta)_{\vartheta\in\Theta}$ is a parametric family of state space models according to Definition 2.1, and let $y^L = (Y_{\vartheta_0,1}, \ldots, Y_{\vartheta_0,L})$ be a sample of length L from the output process of the model corresponding to $\vartheta_0$. If Assumptions D1 to D5 hold, then the QML estimator $\hat{\vartheta}^L$ defined by Eq. (2.14) is strongly consistent, i. e. $\hat{\vartheta}^L$ converges to $\vartheta_0$ almost surely as L → ∞.
We now describe the conditions which we need to impose in addition to Assumptions D1 to D5 for the asymptotic normality of the QML estimator to hold. The first one excludes the case that the true parameter value ϑ 0 lies on the boundary of the domain Θ.
Assumption D6. The true parameter value ϑ 0 is an element of the interior of Θ.
Next we need to impose a higher degree of smoothness than stated in Assumption D2 and a stronger moment condition than Assumption D4. Assumption D7. The mappings F (·) , H (·) , Q (·) , S (·) , and R (·) in Eqs. (2.10) are three times continuously differentiable.
By the results of the sensitivity analysis of the discrete-time algebraic Riccati equation in Sun (1998), the same degree of smoothness, namely C 3 , also carries over to the mapping ϑ → V ϑ .
Assumption D8. The process $((Z_{\vartheta_0,n}^T, W_{\vartheta_0,n}^T)^T)_{n\in\mathbb{Z}}$ has finite $(4+\delta)$th moments for some δ > 0.
Assumption D8 implies that the process Y has finite (4+δ)th moments. In the definition of the general linear stochastic state space model and in Assumption D4, it was only assumed that the sequences Z and W are stationary and ergodic. This structure alone does not entail a sufficient amount of asymptotic independence for results like Theorem 2.5 to be established. We assume that the process Y is strongly mixing in the sense of Rosenblatt (1956), and we impose a summability condition on the strong mixing coefficients, which is known to be sufficient for a Central Limit Theorem for Y to hold (Bradley, 2007; Ibragimov, 1962).
Assumption D9. Denoting by $\alpha_Y$ the strong mixing coefficients of the process $Y = Y_{\vartheta_0}$, it holds that $\sum_{m=1}^{\infty} [\alpha_Y(m)]^{\delta/(2+\delta)} < \infty$.
In the case of exponential strong mixing, Assumption D9 is always satisfied, and it is no restriction to assume that the δ appearing in Assumptions D8 and D9 are the same. It has been shown in Mokkadem (1988); Schlemm and Stelzer (2012) that, because of the autoregressive structure of the state equation (2.2a), exponential strong mixing of the output process Y ϑ 0 can be assured by imposing the condition that the process Z ϑ 0 is an i. i. d. sequence whose marginal distributions possess a non-trivial absolutely continuous component in the sense of Lebesgue's decomposition theorem.
Finally, we require another identifiability assumption, which will be used to ensure that the Fisher information matrix of the QML estimator is non-singular. This is necessary because the asymptotic covariance matrix in the asymptotic normality result for $\hat{\vartheta}^L$ is directly related to the inverse of that matrix. Assumption D10 is formulated in terms of the first derivative of the parametrization of the model, which makes it relatively easy to check in practice; the Fisher information matrix, in contrast, is related to the second derivative of the logarithmic Gaussian likelihood. For j ∈ N and ϑ ∈ Θ, the vector $\psi_{\vartheta,j} \in \mathbb{R}^{(j+2)d^2}$ is defined in terms of the Kronecker product ⊗ of two matrices and the linear operator vec that transforms a matrix into a vector by stacking its columns on top of each other.

Assumption D10. There exists an integer $j_0 \in \mathbb{N}$ such that the $(j_0+2)d^2 \times r$ matrix $\nabla_\vartheta \psi_{\vartheta_0, j_0}$ has rank r.
Our main result about the asymptotic distribution of the QML estimator for discrete-time state space models is the following theorem. Equation (2.20) shows in particular that this asymptotic distribution is independent of the choice of the initial values $\hat{X}_{\vartheta,\text{initial}}$.
Theorem 2.5 (Asymptotic normality of $\hat{\vartheta}^L$). Assume that $(F_\vartheta, H_\vartheta, Z_\vartheta, W_\vartheta)_{\vartheta\in\Theta}$ is a parametric family of state space models according to Definition 2.1, and let $y^L = (Y_{\vartheta_0,1}, \ldots, Y_{\vartheta_0,L})$ be a sample of length L from the output process of the model corresponding to $\vartheta_0$. If Assumptions D1 to D10 hold, then the QML estimator $\hat{\vartheta}^L$ defined by Eq. (2.14) is asymptotically normally distributed, i. e.
\[ \sqrt{L} \left( \hat{\vartheta}^L - \vartheta_0 \right) \xrightarrow{d} \mathscr{N}(0, \Xi), \qquad \Xi = J^{-1} I J^{-1}, \tag{2.20} \]
where $I = I(\vartheta_0)$ and $J = J(\vartheta_0)$ are given by
\[ I = \lim_{L\to\infty} L^{-1} \operatorname{Var}\left( \nabla_\vartheta \mathscr{L}(\vartheta_0, y^L) \right), \qquad J = \lim_{L\to\infty} L^{-1} \nabla_\vartheta^2 \mathscr{L}(\vartheta_0, y^L). \]
Note that I and J, which determine the asymptotic covariance matrix Ξ of the estimators, are deterministic and depend only on the true parameter value $\vartheta_0$. The matrix J is in fact the Fisher information; an alternative expression for J can be found in Lemma 2.17. Despite being deterministic, the asymptotic variance Ξ is not immediate to obtain and needs to be estimated, as is usual in connection with QML estimators. This is a non-trivial task, and a detailed analysis is beyond the scope of the present paper, but worthy of consideration in future work. It should be noted, however, that if $\hat{\Xi}_L$ is a consistent estimator of Ξ, then Theorem 2.5 implies that
\[ \sqrt{L}\, \hat{\Xi}_L^{-1/2} \left( \hat{\vartheta}^L - \vartheta_0 \right) \xrightarrow{d} \mathscr{N}(0, 1_r). \]
Observe that no stable convergence in law (in the sense originally introduced by Rényi (1963)) is needed to obtain the latter result for our QML estimator, as this stronger convergence concept is required only when the limiting variance in a "mixed normal limit theorem" is random. In practice, estimating the asymptotic covariance matrix Ξ is important for constructing confidence regions for the estimated parameters and for performing statistical tests. The problem of estimating it has also been considered in the framework of estimating weak VARMA processes in Boubacar Mainassara and Francq (2011), where the following procedure, which is also applicable in our set-up, has been suggested. First, $J(\vartheta_0)$ is estimated consistently by $\hat{J}_L = L^{-1} \nabla_\vartheta^2 \hat{\mathscr{L}}(\hat{\vartheta}^L, y^L)$.
For the computation of $\hat{J}_L$ we rely on the fact that the Kalman filter can be used to evaluate not only the Gaussian log-likelihood of a state space model but also its gradient and Hessian. The most straightforward way of achieving this is direct differentiation of the Kalman filter equations, which increases the number of passes through the filter to r + 1 for the gradient and r(r + 3)/2 for the Hessian. The construction of a consistent estimator of $I = I(\vartheta_0)$ is based on the observation that $I = \sum_{\Delta\in\mathbb{Z}} \operatorname{Cov}(\ell_{\vartheta_0,n}, \ell_{\vartheta_0,n+\Delta})$, where $\ell_{\vartheta,n}$ denotes the gradient of the nth summand of the quasi log-likelihood (2.11). Assuming that $(\ell_{\vartheta_0,n})_n$ satisfies an autoregressive representation $\Phi(\mathrm{B})\ell_{\vartheta_0,n} = U_n$, where $U = (U_n)_n$ is a weak white noise with covariance matrix $\Sigma_U$, it follows from the interpretation of I/(2π) as the value of the spectral density of $(\ell_{\vartheta_0,n})_{n\in\mathbb{N}^+}$ at frequency zero that I can also be written as $I = \Phi(1)^{-1} \Sigma_U \left[\Phi(1)^{-1}\right]^T$. The idea is to fit a long autoregression to $(\ell_{\hat{\vartheta}^L,n})_{n=1,\ldots,L}$, the empirical counterparts of $(\ell_{\vartheta_0,n})_{n\in\mathbb{N}^+}$, which are defined by replacing $\vartheta_0$ with the estimate $\hat{\vartheta}^L$ in the definition of $\ell_{\vartheta_0,n}$. This is done by choosing an integer s > 0 and performing a least-squares regression of $\ell_{\hat{\vartheta}^L,n}$ on $\ell_{\hat{\vartheta}^L,n-1}, \ldots, \ell_{\hat{\vartheta}^L,n-s}$, for $s+1 \leq n \leq L$. Denoting by $\hat{\Phi}_s^L(z) = 1_r + \sum_{i=1}^{s} \hat{\Phi}_{i,s}^L z^i$ the obtained empirical autoregressive polynomial and by $\hat{\Sigma}_s^L$ the empirical covariance matrix of the residuals of the regression, it was claimed in Boubacar Mainassara and Francq (2011, Theorem 4) that $\hat{I}_s^L \coloneqq \hat{\Phi}_s^L(1)^{-1} \hat{\Sigma}_s^L \left[\hat{\Phi}_s^L(1)^{-1}\right]^T$ is a consistent estimator of I. The covariance matrix of $\hat{\vartheta}^L$ is then estimated consistently as $\hat{\Xi}_L = \hat{J}_L^{-1} \hat{I}_s^L \hat{J}_L^{-1}$. In the simulation study performed in Section 4.2, we estimate the covariance matrix Ξ of the estimators in the way just described. A comparison with the standard deviations of the estimators obtained from the simulations shows that the approach performs convincingly. A possible alternative approach to estimating the asymptotic covariance matrix Ξ is the use of bootstrap techniques; however, it seems that for this purpose the existing bootstrapping techniques would need to be extended considerably (cf. Brockwell, Kreiß and Niebuhr (2012)).
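The long-autoregression step just described can be sketched as follows; the function name and array layout are our own, and `ell` stands for the L × r array of empirical score contributions evaluated at the estimated parameter.

```python
import numpy as np

def long_ar_spectral_estimate(ell, s):
    """Estimate I = Phi(1)^{-1} Sigma_U [Phi(1)^{-1}]^T by fitting an
    order-s autoregression by least squares to the empirical score
    contributions `ell` (an L x r array).  A sketch of the procedure
    described in the text, not a definitive implementation."""
    L, r = ell.shape
    # Least-squares regression of ell_n on ell_{n-1}, ..., ell_{n-s}
    Yreg = ell[s:]                                                  # (L-s) x r
    Xreg = np.hstack([ell[s - i:L - i] for i in range(1, s + 1)])   # (L-s) x (r s)
    coef, *_ = np.linalg.lstsq(Xreg, Yreg, rcond=None)              # (r s) x r
    resid = Yreg - Xreg @ coef
    Sigma_U = resid.T @ resid / (L - s)          # residual covariance matrix
    # With the convention Phi(z) = 1_r + sum_i Phi_i z^i and the fitted
    # regression ell_n ~ sum_i A_i ell_{n-i}, we have Phi_i = -A_i, hence
    # Phi(1) = 1_r - sum_i A_i.
    Phi1 = np.eye(r)
    for i in range(s):
        A_i = coef[i * r:(i + 1) * r].T          # i-th lag coefficient matrix
        Phi1 -= A_i
    Phi1_inv = np.linalg.inv(Phi1)
    return Phi1_inv @ Sigma_U @ Phi1_inv.T
```

For a weak white noise input, the fitted autoregressive coefficients are close to zero and the estimate reduces to the sample covariance of the scores, as it should.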

Proof of Theorem 2.4 -Strong consistency
In this section we prove the strong consistency of the QML estimator $\hat{\vartheta}^L$.
The standard idea of why QML (or, sometimes, Gaussian maximum likelihood) estimators work in a linear time series/state space setting is that the QML approach is essentially equivalent to estimating the parameters via the spectral density, which is in turn in a one-to-one relation with the second moment structure (see, e. g., Brockwell and Davis (1991, Chapter 10)). The reason is, of course, that a Gaussian process is completely characterized by its mean and autocovariance function. Hence, as soon as one knows that the parameters to be estimated are identifiable from the autocovariance function (and the mean) and that the process is ergodic, the QML estimators should be strongly consistent. Despite this simple standard idea, the upcoming proof of strong consistency is lengthy as well as technical and consists of the following steps:
1. When we use the Kalman filter with fixed parameter ϑ on the finite sample y^L, the obtained pseudo-innovations ε̂_ϑ approximate the true pseudo-innovations ε_ϑ (obtainable in theory from the steady-state Kalman filter) well; see Lemma 2.6.
2. The quasi-likelihood (QL) function L̂ obtained from the finite sample y^L (via ε̂_ϑ) converges, as the sample size L → ∞, uniformly on the parameter space to the true QL function L (obtained from the pseudo-innovations ε_ϑ); see Lemma 2.7.
3. As the number L of observations grows, the QL function L divided by L converges uniformly on the parameter space to the expected QL function Q; see Lemma 2.8.
4. The expected QL function Q has a unique minimum at the true parameter ϑ_0; see Lemmas 2.9 and 2.10.
5. The QL function L̂, divided by the number of observations and evaluated at its minimum over the parameter space (i.e., at the QML estimator), converges almost surely to the expected QL function Q evaluated at the true parameter ϑ_0 (its minimum).
6. Finally, one shows that the argument of the minimum of the QL function L̂ (i.e., the QML estimator) also converges to ϑ_0 as L → ∞, which proves the strong consistency.
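Steps 1 to 3 and 5 all revolve around the quasi-likelihood computed from Kalman-filter pseudo-innovations. As a concrete illustration, the following minimal sketch (function name hypothetical, written for the generic model (1.1) with transition matrix F, observation matrix H and noise covariances Q, R, rather than the paper's exact parametrization) accumulates the Gaussian quasi log-likelihood that the QML estimator minimizes:

```python
import numpy as np

def qml_objective(F, H, Q, R, y):
    """Gaussian quasi log-likelihood (up to additive constants) of a linear
    state space model, computed from Kalman-filter pseudo-innovations:
    the sum over n of  log det V_n + eps_n^T V_n^{-1} eps_n."""
    N = F.shape[0]
    x = np.zeros(N)          # initial state estimate
    P = np.eye(N)            # initial state covariance
    total = 0.0
    for yn in y:
        # prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # pseudo-innovation and its pseudo-covariance
        eps = yn - H @ x
        V = H @ P @ H.T + R
        total += np.log(np.linalg.det(V)) + eps @ np.linalg.solve(V, eps)
        # update step via the Kalman gain
        K = P @ H.T @ np.linalg.inv(V)
        x = x + K @ eps
        P = P - K @ H @ P
    return total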
As a first step we show that the stationary pseudo-innovations processes defined by the steady-state Kalman filter are uniformly approximated by their counterparts based on the finite sample y L .
Lemma 2.6. i) If the initial values X̂_{ϑ,initial} are such that sup_{ϑ∈Θ} ‖X̂_{ϑ,initial}‖ is almost surely finite, then, with probability one, there exist a positive number C and a positive number ρ < 1 such that sup_{ϑ∈Θ} ‖ε_{ϑ,n} − ε̂_{ϑ,n}‖ ≤ Cρ^n, n ∈ N. In particular, ε̂_{ϑ_0,n} converges to the true innovations ε_n = ε_{ϑ_0,n} at an exponential rate.
Proof. We first prove part i) about the uniform exponential approximation of ε by ε̂. Iterating the Kalman equations (2.7a) and (2.13), we find that, for n ∈ N, Thus, using the fact that, by Lemma 2.2, the spectral radii of F_ϑ − K_ϑ H_ϑ are bounded by ρ < 1, it follows that where ‖H‖_{L^∞(Θ)} ≔ sup_{ϑ∈Θ} ‖H_ϑ‖ denotes the supremum norm of H_(·), which is finite by the Extreme Value Theorem. Since the last factor is almost surely finite by assumption, the claim follows. For part ii), we observe that Eq. (2.7a) and Lemma 2.2, ii) imply that ε_ϑ has the infinite-order moving average representation This completes the proof.
Lemma 2.7. Let L̂ and L be given by Eqs.
Proof. We first observe that The fact that, by Lemma 2.2, iii), there exists a constant C such that sup_{ϑ∈Θ} ‖V_ϑ^{-1}‖ ≤ C, Lemma 2.6, ii), and the assumption that Y has finite second moments imply that E sup_{ϑ∈Θ} ‖ε_{ϑ,n}‖ is finite. Applying Markov's inequality, one sees that, for every positive ǫ, because ρ < 1. The Borel–Cantelli Lemma shows that ρ^n sup_{ϑ∈Θ} ‖ε_{ϑ,n}‖ converges to zero almost surely, as n → ∞. In an analogous way one can show that ρ^n sup_{ϑ∈Θ} ‖ε̂_{ϑ,n}‖ converges to zero almost surely, and, consequently, so does the Cesàro mean in Eq. (2.22). The claim thus follows.
Lemma 2.8. If Assumptions D1 to D4 hold, then, with probability one, the sequence of random functions ϑ → L^{-1} L̂(ϑ, y^L) converges, as L tends to infinity, uniformly in ϑ to the limiting function Q. Proof. In view of the approximation results in Lemma 2.7, it is enough to show that the sequence of random functions ϑ → L^{-1} L(ϑ, y^L) converges uniformly to Q. The proof of this assertion is based on the observation following Assumption D4 that for each ϑ ∈ Θ the sequence ε_ϑ is ergodic, and on its consequence that, by Birkhoff's Ergodic Theorem (Durrett, 2010, Theorem 6.2.1), the sequence L^{-1} L(ϑ, y^L) converges to Q(ϑ) point-wise. The stronger statement of uniform convergence follows from Assumption D1 that Θ is compact, by an argument analogous to the proof of Ferguson (1996, Theorem 16).
Proof. Assume, for the sake of contradiction, that ϑ ≠ ϑ_0. By Assumption D5, there exist matrices C_j ∈ M_d(R), j ∈ N_0, such that, for |z| ≤ 1, where C_{j_0} ≠ 0, for some j_0 ≥ 0. Using Eq. (2.7b) and the assumed equality of ε_{ϑ,1} and ε_{ϑ_0,1}, this implies that 0_d = Σ_{j=j_0}^∞ C_j Y_{j_0−j} almost surely; in particular, the random variable C_{j_0} Y_0 is equal to a linear combination of the components of Y_n, n < 0. It thus follows from the interpretation of the innovations sequence ε_{ϑ_0} as linear prediction errors for the process Y that C_{j_0} ε_{ϑ_0,0} is equal to zero; since the covariance matrix of ε_{ϑ_0,0} is assumed to be non-singular, this implies that the matrix C_{j_0} is the null matrix, a contradiction to Eq. (2.24). Proof. We first observe that the difference ε_{ϑ,1} − ε_{ϑ_0,1} is an element of the Hilbert space spanned by the random variables {Y_n, n ≤ 0}, and that ε_{ϑ_0,1} is, by definition, orthogonal to this space. Thus, the expectation E[(ε_{ϑ,1} − ε_{ϑ_0,1})^T V_ϑ^{-1} ε_{ϑ_0,1}] is equal to zero and, consequently, Q(ϑ) can be written as It remains to argue that this chain of inequalities is in fact strict if ϑ ≠ ϑ_0. If V_ϑ ≠ V_{ϑ_0}, the first inequality is strict, and we are done. If V_ϑ = V_{ϑ_0}, the first alternative of Assumption D5 is satisfied. The second inequality is an equality if and only if ε_{ϑ,1} = ε_{ϑ_0,1} almost surely, which, by Lemma 2.9, implies that ϑ = ϑ_0. Thus, the function Q has a unique global minimum at ϑ_0.

Proof of Theorem 2.4.
We shall first show that the sequence L^{-1} L̂(θ̂^L, y^L), L ∈ N, converges almost surely to the deterministic number Q(ϑ_0) as the sample size L tends to infinity. Assume that, for some positive number ǫ, it holds that sup_{ϑ∈Θ} |L^{-1} L̂(ϑ, y^L) − Q(ϑ)| ≤ ǫ. It then follows that where it was used that θ̂^L is defined to minimize L̂(·, y^L) and that, by Lemma 2.10, ϑ_0 minimizes Q(·). In particular, it follows that |L^{-1} L̂(θ̂^L, y^L) − Q(ϑ_0)| ≤ ǫ. This observation and Lemma 2.8 immediately imply that To complete the proof of the theorem, it suffices to show that, for every neighbourhood U of ϑ_0, with probability one, θ̂^L will eventually lie in U. For every such neighbourhood U of ϑ_0, we define the real number δ(U) ≔ inf_{ϑ∈Θ\U} Q(ϑ) − Q(ϑ_0), which is strictly positive by Lemma 2.10. Then the following sequence of inequalities holds: The last probability is equal to one by Eq. (2.25) and Lemma 2.8.

Proof of Theorem 2.5 - Asymptotic normality
In this section we prove the assertion of Theorem 2.5, namely that the distribution of L^{1/2}(θ̂^L − ϑ_0) converges to that of a normal random variable with mean zero and covariance matrix Ξ = J^{-1} I J^{-1}, an expression for which is given in Eq. (2.20). The idea behind the proof of the asymptotic normality is essentially that the strong mixing property implies various central limit theorems. As already said, the QML estimators are intuitively close to moment-based estimators, so the main task is to show that the central limit results translate into asymptotic normality of the estimators. The individual steps of the following, again lengthy and technical, proof are:
1. First, we extend the result that the pseudo-innovations ε̂_ϑ obtained via the Kalman filter from the finite sample y^L approximate the true pseudo-innovations ε_ϑ (obtainable in theory from the steady-state Kalman filter) well to their first and second derivatives; see Lemma 2.11.
2. The first derivatives of the QL function L obtained from the pseudo-innovations ε_ϑ have finite variance for every possible parameter ϑ; see Lemma 2.12.
3. Certain fourth moments (viz. covariances of scalar products of the values of the process at different times) of a strongly mixing process with finite (4 + δ)th moments can be uniformly bounded in terms of the strong mixing coefficients; see Lemma 2.13.
4. The covariance matrix of the gradients of the QL function L, divided by the number of observations, converges for every possible parameter ϑ; see Lemma 2.14.
5. The result that the QL function L̂ obtained from the finite sample y^L (via ε̂_ϑ) converges, as the sample size L → ∞, uniformly on the parameter space to the true QL function L (obtained from the pseudo-innovations ε_ϑ) is extended to the first and second derivatives; see Lemma 2.15.
6. The previous steps allow us to show that the gradient of the QL function L at the true parameter ϑ_0, divided by the square root of the number of observations, is asymptotically normal with the limiting variance determined in step 4; see Lemma 2.16.
7. The limit of the rescaled second derivative of the QL function L at the true parameter exists, equals the Fisher information and is invertible; see Lemma 2.17.
8. A zeroth-order Taylor expansion of the gradient of the QL function L̂, divided by the number of observations, at the true parameter ϑ_0 is combined with the asymptotic normality result of step 6 and the already established strong consistency of the QML estimator. Using the third derivatives of L̂, the error of the Taylor approximation, expressed in terms of second derivatives of L̂, is controlled, and using the result of step 7 the asymptotic normality of the QML estimator is deduced.
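The sandwich form Ξ = J^{-1} I J^{-1} that these steps produce can be illustrated numerically. The following sketch (function name hypothetical) combines per-observation score vectors with a given Hessian limit J; note that for the serially dependent scores treated in the paper, I is a long-run variance, so production code would additionally use HAC-type weighting rather than the plain sample covariance of the scores used here:

```python
import numpy as np

def sandwich_covariance(score_terms, hessian):
    """Asymptotic covariance  Xi = J^{-1} I J^{-1}  of a QML estimator.
    score_terms: (L, r) array of per-observation gradients of the QL function.
    hessian:     (r, r) limit J of the rescaled second derivative.
    I is estimated here by the sample covariance of the scores (adequate only
    for uncorrelated scores; a HAC estimator is needed otherwise)."""
    L = score_terms.shape[0]
    centered = score_terms - score_terms.mean(axis=0)
    I_hat = centered.T @ centered / L
    J_inv = np.linalg.inv(hessian)
    return J_inv @ I_hat @ J_inv
```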
Proof. The claim follows from Assumption D8, the exponential decay of the coefficient matrices c ϑ,ν and c (m) ϑ,ν proved in Lemma 2.6, ii) and Lemma 2.11, and the Cauchy-Schwarz inequality.
We need the following covariance inequality which is a consequence of Davydov's inequality and the multidimensional generalization of an inequality used in the proof of Francq and Zakoïan (1998, Lemma 3). For a positive real number α, we denote by ⌊α⌋ the greatest integer smaller than or equal to α.

(2.26)
where α_X denotes the strong mixing coefficients of the process X.
The next lemma is a multivariate generalization of Francq and Zakoïan (1998, Lemma 3). In the proof of Boubacar Mainassara and Francq (2011, Lemma 4) this generalization is used without providing details and, more importantly, without imposing Assumption D9 about the strong mixing of Y. In view of the derivative terms ∂_m ε_{ϑ,n} in Eq. (2.28) it is not immediately clear how the result of the lemma can be proved under the mere assumption of strong mixing of the innovations sequence ε_{ϑ_0}. We therefore think that a detailed account, properly generalizing the arguments in the original paper (Francq and Zakoïan, 1998) to the multidimensional setting, is justified.
Proof. It is enough to show that, for each ϑ ∈ Θ and all k, l = 1, . . . , r, the sequence of real-valued random variables I^{(k,l)}_{ϑ,L}, defined by converges to a limit as L tends to infinity, where ℓ^{(m)}_{ϑ,n} = ∂_m l_{ϑ,n} is the partial derivative of the nth term in expression (2.11) for L(ϑ, y^L). It follows from well-known differentiation rules for matrix functions (see, e. g., Horn and Johnson, 1994, Sections 6.5 and 6.6) that (2.28) By the assumed stationarity of the processes ε_ϑ, the covariances in the sum (2.27) depend only on the difference n − t. For the proof of the lemma it suffices to show that the sequence c^{(k,l)}_{ϑ,∆} = Cov(ℓ^{(k)}_{ϑ,n}, ℓ^{(l)}_{ϑ,n+∆}), ∆ ∈ Z, is absolutely summable for all k, l = 1, . . . , r, because then In view of the symmetry c^{(k,l)}_{ϑ,∆} = c^{(k,l)}_{ϑ,−∆}, it is no restriction to assume that ∆ ∈ N. In order to show that Σ_∆ c^{(k,l)}_{ϑ,∆} is finite, we first use the bilinearity of Cov(·; ·) to estimate Each of these four terms can be analysed separately. We give details only for the first one, the arguments for the other three terms being similar. Using the moving average representations for ε_ϑ, ∂_k ε_ϑ and ∂_l ε_ϑ, it follows that This sum can be split into one part I_+ in which at least one of the summation indices ν, ν′, µ and µ′ exceeds ∆/2, and one part I_− in which all summation indices are less than or equal to ∆/2. Using the fact that, by the Cauchy–Schwarz inequality, it follows from Assumption D8 and the uniform exponential decay of c_{ϑ,ν} and c^{(m)}_{ϑ,ν} proved in Lemma 2.6, ii) and Lemma 2.11, ii) that there exist constants C and ρ < 1 such that For the contribution from all indices smaller than or equal to ∆/2, Lemma 2.13 implies that there exists a constant C such that (2.31) It thus follows from Assumption D9 that the sequences c^{(k,l)}_{ϑ,∆}, ∆ ∈ N, are summable, and Eq. (2.29) completes the proof of the lemma. Proof. Similar to the proof of Lemma 2.7.

Proof.
Because of Lemma 2.15, i) it is enough to show that L^{-1/2} ∇_ϑ L(ϑ_0, y^L) is asymptotically normally distributed with mean zero and covariance matrix I(ϑ_0). First, we note that which holds for every component i = 1, . . . , r. The facts that E ε_{ϑ_0,n} ε^T_{ϑ_0,n} equals V_{ϑ_0}, and that ε_{ϑ_0,n} is orthogonal to the Hilbert space generated by {Y_t, t < n}, of which ∂_i ε^T_{ϑ_0,n} is an element, show that E ∂_i L(ϑ_0, y^L) = 0. Using Lemma 2.6, ii), expression (2.32) can be rewritten as where, for every m ∈ N, the processes Y^{(i)}_m and Z^{(i)}_m are defined by It is convenient to also introduce the notations Y_{m,n} = (Y^{(1)}_{m,n} · · · Y^{(r)}_{m,n})^T and Z_{m,n} = (Z^{(1)}_{m,n} · · · Z^{(r)}_{m,n})^T. (2.34) The rest of the proof proceeds in three steps: in the first we show that, for each natural number m, the sequence L^{-1/2} Σ_n (Y_{m,n} − EY_{m,n}) is asymptotically normally distributed with asymptotic covariance matrix I_m, and that I_m converges to I(ϑ_0) as m tends to infinity. We then prove that L^{-1/2} Σ_n (Z_{m,n} − EZ_{m,n}) goes to zero uniformly in L, as m → ∞, and the last step is devoted to combining the first two steps to prove the asymptotic normality of L^{-1/2} ∇_ϑ L(ϑ_0, y^L).
Step 1 Since Y is stationary, it is clear that Y_m is a stationary process. Moreover, Y_m inherits the strong mixing property from Y (cf. 1.8 b)). In particular, by Assumption D9, the strong mixing coefficients of the processes Y_m satisfy the summability condition Σ_k [α_{Y_m}(k)]^{δ/(2+δ)} < ∞. Since, by the Cramér–Wold device, weak convergence of the sequence L^{-1/2} Σ_{n=1}^L (Y_{m,n} − EY_{m,n}) to a multivariate normal distribution with mean zero and covariance matrix Σ is equivalent to the condition that, for every vector u ∈ R^r, the sequence L^{-1/2} u^T Σ_{n=1}^L (Y_{m,n} − EY_{m,n}) converges to a one-dimensional normal distribution with mean zero and variance u^T Σ u, we can apply the Central Limit Theorem for univariate strongly mixing processes (Ibragimov, 1962, Theorem 1.7) to obtain that The claim that I_m converges to I(ϑ_0) will follow if we can show that and that Cov(Y^{(k)}_{m,n}; Y^{(l)}_{m,n+∆}) is dominated by an absolutely summable sequence. For the first condition, we note that the bilinearity of Cov(·; ·) implies that These two terms can be treated in a similar manner, so we restrict our attention to the second one. The definitions of Y^{(i)}_{m,n} (Eq. (2.33a)) and ℓ^{(i)}_{ϑ,n} (Eq. (2.27)) allow us to compute As a consequence of the Cauchy–Schwarz inequality, Assumption D8 and the exponential bounds in Lemma 2.6, i), we therefore obtain that Var(Y^{(k)}_{m,n} − ℓ^{(k)}_{ϑ_0,n}) ≤ Cρ^m, independently of n. The L^2-continuity of Cov(·; ·) thus implies that the sequence Cov(Y^{(k)}_{m,n} − ℓ^{(k)}_{ϑ_0,n}; ℓ^{(l)}_{ϑ_0,n+∆}) converges to zero, as m tends to infinity, at an exponential rate uniformly in ∆. The existence of a summable sequence dominating Cov(Y^{(k)}_{m,n}; Y^{(l)}_{m,n+∆}) is ensured by the arguments given in the proof of Lemma 2.14, reasoning as in the derivation of Eqs. (2.30) and (2.31).
Step 2 We shall show that there exist positive constants C and ρ < 1, independent of L, such that tr Var it suffices to consider the latter two terms. We first observe that tr Var As before, under Assumption D8, the Cauchy–Schwarz inequality and the exponential bounds for c_{ϑ_0,ν} and c^{(k)}_{ϑ_0,ν} imply that u^{(k,l)}_{m,∆} < Cρ^m. By arguments similar to the ones used in the proof of Lemma 2.13, Davydov's inequality implies that, for m < ⌊∆/2⌋, It thus follows that, independently of the values of k and l, and therefore, by Eq. (2.39), that tr Var(L^{-1/2} Σ_{n=1}^L U_{m,n}) ≤ Cρ^m. In an analogous way one can show that tr Var(L^{-1/2} Σ_{n=1}^L V_{m,n}) ≤ Cρ^m, and thus the claim (2.37) follows with Eq. (2.38).
Step 3 In step 1 it has been shown that L^{-1/2} Σ_n (Y_{m,n} − EY_{m,n}) converges in distribution, as L → ∞, to N(0_r, I_m), and that I_m converges to I(ϑ_0), as m → ∞. In particular, the limiting normal random variables with covariance matrices I_m converge weakly to a normal random variable with covariance matrix I(ϑ_0).
Step 2 together with the multivariate Chebyshev inequality implies that, for every ǫ > 0, Proposition 6.3.9 of Brockwell and Davis (1991) thus completes the proof.
A very important step in the proof of asymptotic normality of QML estimators is to establish that the Fisher information matrix J, evaluated at the true parameter value, is non-singular. We shall now show that Assumption D10 is sufficient to ensure that J^{-1} exists for linear state space models. For vector ARMA processes, formulae similar to Eqs. (2.40) below have been derived in the literature (see, e. g., Klein, Mélard and Saidi, 2008; Klein and Neudecker, 2000); in fact, the resultant property of the Fisher information matrix of a vector ARMA process implies that J in this case is non-singular if and only if its autoregressive and moving average polynomials have no common eigenvalues (Klein, Mélard and Spreij, 2005). In conjunction with the equivalence of linear state space and vector ARMA models, this provides an alternative way of checking that J is non-singular. We continue to work with Assumption D10, however, because it avoids the transformation of the state space model (2.13) into an equivalent ARMA form.
Proof. It can be shown as in the proof of Boubacar Mainassara and Francq (2011, Lemma 4) that J exists and is equal to J = J_1 + J_2, where (2.40) J_2 is positive semidefinite because it can be written as Since J_1 is positive semidefinite as well, proving that J is non-singular is equivalent to proving that, for any non-zero vector c ∈ R^r, the numbers c^T J_i c, i = 1, 2, are not both zero. Assume, for the sake of contradiction, that there exists such a vector c = (c_1, . . . , c_r)^T. The condition c^T J_1 c = 0 implies that, almost surely, Σ_{k=1}^r c_k ∂_k ε_{ϑ_0,n} = 0_d, for all n ∈ Z. Since the sequence ε_{ϑ_0} is uncorrelated with positive definite covariance matrix, it follows that Σ_{k=1}^r c_k ∂_k M_{ϑ_0,ν} = 0_d, for every ν ∈ N. Using the relation vec(ABC) = (C^T ⊗ A) vec B (Bernstein, 2005, Proposition 7.1.9), we see that the last display is equivalent to By the definition of ψ_{ϑ,j} in Eq. (2.18) it thus follows that ∇_ϑ ψ_{ϑ_0,j} c = 0_{(j+2)d^2}, for every j ∈ N, which, by Assumption D10, is equivalent to the contradiction that c = 0_r.

Proof of Theorem 2.5. Since the estimate θ̂^L
where ∇²_ϑ L̂(ϑ̄^L, y^L) denotes the matrix whose ith row, i = 1, . . . , r, is equal to the ith row of ∇²_ϑ L̂(ϑ̄_i, y^L). By Lemma 2.16 the first term on the right-hand side converges weakly to a multivariate normal random variable with mean zero and covariance matrix I = I(ϑ_0). As in Lemma 2.8 one can show that the sequence ϑ → L^{-1} ∇³_ϑ L̂(ϑ, y^L), L ∈ N, of random functions converges almost surely uniformly to the continuous function ϑ → ∇³_ϑ Q(ϑ) taking values in the space R^{r×r×r}. Since on the compact space Θ this function is bounded in the operator norm obtained from identifying R^{r×r×r} with the space of linear functions from R^r to M_r(R), that sequence is almost surely uniformly bounded, and we obtain that because, by Theorem 2.4, the second factor almost surely converges to zero as L tends to infinity. It follows from Lemma 2.17 that L^{-1} ∇²_ϑ L̂(ϑ̄^L, y^L) converges to the matrix J almost surely, and thus from Eq. (2.41) that L^{1/2}(θ̂^L − ϑ_0) converges in distribution to N(0_r, J^{-1} I J^{-1}), as L → ∞. This shows Eq. (2.19) and completes the proof.

Quasi maximum likelihood estimation for multivariate continuous-time ARMA processes
In this section we pursue the second main topic of the present paper, a detailed investigation of the asymptotic properties of the QML estimator of discretely observed multivariate continuous-time autoregressive moving average processes. We will make use of the equivalence between MCARMA and continuous-time linear state space models, as well as of the important observation that the state space structure of a continuous-time process is preserved under equidistant sampling, which allows the results of the previous section to be applied. The conditions we need to impose on the parametrization of the models under consideration are therefore closely related to the assumptions made in the discrete-time case, except that the mixing and ergodicity assumptions D4 and D9 are automatically satisfied (Marquardt and Stelzer, 2007, Proposition 3.34). We start the section with a short recapitulation of the definition and basic properties of Lévy-driven continuous-time ARMA processes and their equivalence to state space models (based mainly on Marquardt and Stelzer (2007); Schlemm and Stelzer (2012)). Thereafter, we work towards applying our results on QML estimation for discrete-time state space models to QML estimators for MCARMA processes, culminating in our main result, Theorem 3.16. To this end, we first recall the second-order structure of continuous-time state space models and provide auxiliary results on the transfer function in Section 3.2. This is followed in Section 3.3 by recalling that equidistant observations of an MCARMA process follow a state space model in discrete time, as well as by discussions of the minimality of a state space model and of how to make the relation between the continuous-time and discrete-time state space models unique. Section 3.4 then looks at the second-order properties of a discretely observed MCARMA process and the aliasing effect.
Together, the results of Sections 3.2 to 3.4 allow us to give accessible identifiability conditions needed to apply the QML estimation theory developed in Section 2. Finally, Section 3.5 introduces further technical assumptions needed to employ the theory for strongly mixing state space models and then derives our main result about the consistency and asymptotic normality of the QML estimator for equidistantly sampled MCARMA processes, Theorem 3.16.

Lévy-driven multivariate CARMA processes and continuous-time state space models
A natural source of randomness in the specification of continuous-time stochastic processes is the class of Lévy processes. For a thorough discussion of these processes we refer the reader to the monographs Applebaum (2004); Sato (1999).
Definition 3.1. A two-sided R^m-valued Lévy process (L(t))_{t∈R} is a stochastic process, defined on a probability space (Ω, F, P), with stationary, independent increments, which is continuous in probability and satisfies L(0) = 0_m almost surely.
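For simulation studies one needs to sample increments of a driving Lévy process. A minimal sketch (all parameter names hypothetical) for a one-dimensional Lévy process consisting of a drift, a Brownian component and compound Poisson jumps, whose increments over disjoint intervals are i.i.d. in accordance with Definition 3.1:

```python
import numpy as np

def levy_increments(n, dt, drift, sigma, jump_rate, jump_scale, rng=None):
    """Simulate n increments L(t + dt) - L(t) of a one-dimensional Levy
    process: drift * dt, plus a Gaussian part with variance sigma^2 * dt,
    plus a Poisson(jump_rate * dt) number of N(0, jump_scale^2) jumps."""
    rng = np.random.default_rng() if rng is None else rng
    gaussian = sigma * np.sqrt(dt) * rng.standard_normal(n)
    counts = rng.poisson(jump_rate * dt, size=n)
    jumps = np.array([jump_scale * rng.standard_normal(k).sum() for k in counts])
    return drift * dt + gaussian + jumps
```

Feeding such increments into an Euler scheme for the state equation gives sample paths of the corresponding state space model.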
The characteristic function of a Lévy process L has the Lévy–Khintchine form E e^{i⟨u,L(t)⟩} = exp{tψ^L(u)}, u ∈ R^m, t ∈ R^+, where the characteristic exponent ψ^L is given by The vector γ^L ∈ R^m is called the drift, Σ^G is a non-negative definite, symmetric m × m matrix called the Gaussian covariance matrix, and the Lévy measure ν^L satisfies the two conditions ν^L({0_m}) = 0 and Just as i. i. d. sequences are used in time series analysis to define ARMA processes, Lévy processes can be used to construct (multivariate) continuous-time autoregressive moving average processes, called (M)CARMA processes. If L is a two-sided Lévy process with values in R^m and p > q are integers, the d-dimensional L-driven MCARMA(p, q) process with autoregressive polynomial and moving average polynomial is defined as the solution to the formal differential equation P(D)Y(t) = Q(D)DL(t), D ≡ d/dt. It is often useful to allow the dimensions of the driving Lévy process L and of the L-driven MCARMA process to differ, which is a slight extension of the original definition of Marquardt and Stelzer (2007); the results obtained in that paper remain true if our definition is used. In general, the paths of a Lévy process are not differentiable, so we interpret the defining differential equation as being equivalent to the state space representation where A, B, and C are given by It follows from representation (3.3) that MCARMA processes are special cases of linear multivariate continuous-time state space models and, in fact, the class of linear state space models is equivalent to the class of MCARMA models (Schlemm and Stelzer, 2012, Corollary 3.4). By considering the class of linear state space models, one can define representations of MCARMA processes which are different from Eq. (3.3) and better suited for the purpose of estimation.
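The companion structure of the state matrix in the representation (3.3) can be sketched concretely. The following helper (name hypothetical) builds the pd-dimensional state matrix from the autoregressive coefficient matrices A_1, ..., A_p, with identity blocks on the superdiagonal and the block row (−A_p, ..., −A_1) at the bottom; the input matrix B, whose blocks are defined by a recursion involving the moving average coefficients, and C = (I, 0, ..., 0) are omitted here:

```python
import numpy as np

def mcarma_companion(ar_coeffs, d):
    """Companion state matrix A of the MCARMA state space representation.
    ar_coeffs = [A_1, ..., A_p] are the d x d autoregressive coefficient
    matrices of P(z) = I z^p + A_1 z^{p-1} + ... + A_p; the eigenvalues of
    the returned matrix are the zeros of det P(z)."""
    p = len(ar_coeffs)
    A = np.zeros((p * d, p * d))
    # identity blocks on the block superdiagonal
    for i in range(p - 1):
        A[i * d:(i + 1) * d, (i + 1) * d:(i + 2) * d] = np.eye(d)
    # last block row: -A_p, -A_{p-1}, ..., -A_1
    for j, Aj in enumerate(reversed(ar_coeffs)):
        A[(p - 1) * d:, j * d:(j + 1) * d] = -Aj
    return A
```

Checking that all eigenvalues of this matrix have negative real parts verifies Assumption E for a given parameter value.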
and an observation equation The R N -valued process X = (X(t)) t∈R is the state vector process, and Y = (Y(t)) t∈R the output process.
A solution Y to Eq. (3.5) is called causal if, for all t, Y(t) is independent of the σ-algebra generated by {L(s) : s > t}. Every solution to Eq. (3.5a) satisfies (3.6) The following can be seen as the multivariate extension of Brockwell, Davis and Yang (2011, Proposition 1) and recalls conditions for the existence of a stationary causal solution of the state equation (3.5a) for easy reference. We always work under the following assumption.
Assumption E. The eigenvalues of the matrix A have strictly negative real parts.

Second order structure and the transfer function
Proposition 3.1 (Sato and Yamazato (1983, Theorem 5.1)). If Assumptions E and L1 hold, then Eq. (3.5a) has a unique strictly stationary, causal solution X given by X(t) = ∫_{−∞}^{t} e^{A(t−u)} B dL(u). Moreover, X(t) has mean zero and second-order structure It is an immediate consequence that the output process Y has mean zero and autocovariance function γ_Y(h) = C e^{Ah} Γ_0 C^T, h ≥ 0, and that Y itself can be written succinctly as a moving average of the driving Lévy process as Y(t) = ∫_{−∞}^{t} C e^{A(t−u)} B dL(u). This representation shows that the behaviour of the process Y depends on the values of the individual matrices A, B, and C only through the products Ce^{At} B, t ∈ R. The following lemma relates this analytical statement to an algebraic one about rational matrices, allowing us to draw a connection to the identifiability theory of discrete-time state space models.
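The stationary state covariance Γ_0 implicit in Proposition 3.1 solves a continuous-time Lyapunov equation and can be computed directly from the model matrices. A sketch (function name hypothetical), using the vectorization identity vec(AΓ + ΓA^T) = (A ⊗ 1 + 1 ⊗ A) vec(Γ):

```python
import numpy as np

def stationary_state_covariance(A, B, Sigma_L):
    """Stationary covariance Gamma_0 = Var(X(t)) of the state process: the
    solution of the Lyapunov equation  A Gamma_0 + Gamma_0 A^T = -B Sigma_L B^T,
    valid when all eigenvalues of A have negative real parts (Assumption E).
    Solved here by vectorization via the Kronecker sum A (+) A."""
    N = A.shape[0]
    K = np.kron(A, np.eye(N)) + np.kron(np.eye(N), A)
    rhs = -(B @ Sigma_L @ B.T).reshape(N * N)
    return np.linalg.solve(K, rhs).reshape(N, N)
```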
Proof. If we start at the first equality and replace the matrix exponentials by their spectral representations (see Lax, 2002, Theorem 17.5), we obtain ∮_γ e^{zt} C(z1 − A)^{-1} B dz = ∮_{γ̃} e^{zt} C̃(z1 − Ã)^{-1} B̃ dz, where γ is a closed contour in C winding around each eigenvalue of A exactly once, and likewise for γ̃. Since we can always assume that γ = γ̃ by taking γ to be R times the unit circle, R > max{|λ| : λ ∈ σ(A) ∪ σ(Ã)}, it follows that, for each t ∈ R, ∮_γ e^{zt} [C(z1 − A)^{-1} B − C̃(z1 − Ã)^{-1} B̃] dz = 0. Since the rational matrix function ∆(z) = C(z1 − A)^{-1} B − C̃(z1 − Ã)^{-1} B̃ has only poles with modulus less than R, it has an expansion around infinity, ∆(z) = Σ_{n=0}^∞ A_n z^{-n}, A_n ∈ M_d(C), which converges in a region {z ∈ C : |z| > r} containing γ. Using the fact that this series converges uniformly on the compact set γ and applying the Residue Theorem from complex analysis, which implies (2πi)^{-1} ∮_γ e^{zt} z^{-(n+1)} dz = t^n/n!, one sees that Σ_{n=0}^∞ (t^n/n!) A_{n+1} ≡ 0. Consequently, by the Identity Theorem, A_n is the zero matrix for all n ≥ 1, and since ∆(z) → 0 as z → ∞, also A_0 = 0; it follows that ∆(z) ≡ 0_{d,m}.
The rational matrix function H : z → C(z1_N − A)^{-1} B is called the transfer function of the state space model (3.5) and is closely related to the spectral density f_Y of the output process Y, which is defined as f_Y(ω) = ∫_R e^{−iωh} γ_Y(h) dh, the Fourier transform of γ_Y. Before we make this relation explicit, we prove the following lemma.
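The transfer function and the associated spectral density are straightforward to evaluate numerically. A sketch (function names hypothetical; the 1/(2π) normalization is an assumption on our part, and should be dropped if the spectral density is defined without that prefactor):

```python
import numpy as np

def transfer_function(A, B, C, z):
    """Transfer function H(z) = C (z 1_N - A)^{-1} B of the model (3.5)."""
    N = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(N) - A, B)

def spectral_density(A, B, C, Sigma_L, omega):
    """Spectral density of the output process evaluated at frequency omega:
    f_Y(omega) = (1 / 2 pi) H(i omega) Sigma_L H(i omega)^*  -- assuming the
    convention with the 1/(2 pi) prefactor."""
    H = transfer_function(A, B, C, 1j * omega)
    return H @ Sigma_L @ H.conj().T / (2 * np.pi)
```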

Lemma 3.3. For any real number v, and matrices
(3.8)

Proof.
We define functions l and r; using Proposition 3.1, one sees immediately that (d/dv)l(v) = (d/dv)r(v), for all v ∈ R. Hence, l and r differ only by an additive constant. Since l(0) equals r(0) by the definition of Γ_0, the constant is zero, and l(v) = r(v) for all real numbers v. Proof. First, we recall (Bernstein, 2005, Proposition 11.2.2) that the Laplace transform of any matrix A is given by its resolvent, that is, (z1 − A)^{-1} = ∫_0^∞ e^{−zu} e^{Au} du, for every complex number z whose real part exceeds the real parts of all eigenvalues of A. We are now ready to compute Introducing the new variable h = u − v and using Lemma 3.3, this becomes By Eq. (3.7b) and the fact that the spectral density and the autocovariance function of a stochastic process are Fourier duals of each other, the last expression equals the asserted formula, which completes the proof.
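The resolvent identity used at the start of this proof can be checked numerically. The sketch below (function name hypothetical, A assumed diagonalizable for simplicity) approximates the Laplace-transform integral by truncated trapezoidal quadrature and recovers (z1 − A)^{-1}:

```python
import numpy as np

def resolvent_via_laplace(A, z, T=40.0, n=20001):
    """Numerical check of  (z 1 - A)^{-1} = int_0^infty e^{-zu} e^{Au} du,
    valid when Re(z) exceeds the largest real part of an eigenvalue of A.
    The matrix exponential is taken via an eigendecomposition (so A is
    assumed diagonalizable); the integral is truncated at T, where the
    integrand has decayed to a negligible level."""
    w, V = np.linalg.eig(A)
    Vinv = np.linalg.inv(V)
    u = np.linspace(0.0, T, n)
    # E[i, j] = exp((lambda_j - z) * u_i); trapezoidal rule column-wise
    E = np.exp(np.outer(u, w) - z * u[:, None])
    du = u[1] - u[0]
    integ = ((E[:-1] + E[1:]) * 0.5 * du).sum(axis=0)
    return (V * integ) @ Vinv
```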
A converse of Proposition 3.4, which will be useful in our later discussion of identifiability, is the Spectral Factorization Theorem. Its proof can be found in Rozanov (1967, Theorem 1.10.1).

Equidistant observations
We now turn to properties of the sampled process Y^{(h)} = (Y^{(h)}_n)_{n∈Z}, defined by Y^{(h)}_n = Y(nh), which represents observations of the process Y at equally spaced points in time. A fundamental observation is that the linear state space structure of the continuous-time process is preserved under sampling, as detailed in the following proposition. Of particular importance is the explicit formula (3.10) for the spectral density of the sampled process Y^{(h)}. Proposition 3.6 (partly Schlemm and Stelzer (2012, Lemma 5.1)). Assume that Y is the output process of the state space model (3.5). Then the sampled process Y^{(h)} has the state space representation The sequence (N^{(h)}_n)_{n∈Z} is i. i. d. with mean zero and covariance matrix Σ^{(h)}; in particular, f^{(h)}_Y : [−π, π] → S^+_d(R) is a rational matrix function.
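The sampled model of Proposition 3.6 can be computed explicitly from the continuous-time matrices: the discrete-time transition matrix is e^{Ah}, and the covariance of the i.i.d. noise is the integral Σ^{(h)} = ∫_0^h e^{Au} B Σ_L B^T e^{A^T u} du. A sketch (function name hypothetical) that evaluates this integral for stable A via the standard Lyapunov-equation identity Σ^{(h)} = Γ − e^{Ah} Γ e^{A^T h}:

```python
import numpy as np
from scipy.linalg import expm

def sample_state_space(A, B, Sigma_L, h):
    """Sampled state space model: returns the transition matrix F = e^{Ah}
    and the noise covariance  Sigma^{(h)} = int_0^h e^{Au} B Sigma_L B^T e^{A^T u} du.
    For stable A (Assumption E), Sigma^{(h)} = Gamma - F Gamma F^T, where
    Gamma solves the Lyapunov equation  A Gamma + Gamma A^T = -B Sigma_L B^T."""
    N = A.shape[0]
    F = expm(A * h)
    K = np.kron(A, np.eye(N)) + np.kron(np.eye(N), A)
    Gamma = np.linalg.solve(K, -(B @ Sigma_L @ B.T).reshape(N * N)).reshape(N, N)
    return F, Gamma - F @ Gamma @ F.T
```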
In the following we derive conditions for the sampled state space model (3.9) to be minimal, in the sense that the process Y^{(h)} is not the output process of any state space model of dimension less than N, and for the noise covariance matrix Σ^{(h)} to be non-singular. We begin by recalling some well-known notions from discrete-time realization and control theory. For a detailed account we refer to Åström (1970); Sontag (1998), which also explain the origin of the terminology. Every rational matrix function has many algebraic realizations of various dimensions. A particularly convenient class is that of realizations of minimal dimension, which have a number of useful properties.

Definition 3.4. Let H ∈ M_{d,m}(R(z)) be a rational matrix function. A minimal realization of H is an algebraic realization of H of dimension smaller than or equal to the dimension of every other algebraic realization of H. The dimension of a minimal realization of H is the McMillan degree of H.
Two other important properties of algebraic realizations, which are related to the notion of minimality and play a key role in the study of identifiability, are introduced in the following definitions.
Definition 3.5. An algebraic realization (A, B, C) of dimension N is controllable if the controllability matrix C = [B AB · · · A^{N−1} B] ∈ M_{N,Nm}(R) has full rank.

Definition 3.6. An algebraic realization (A, B, C) of dimension N is observable if the observability matrix O = [C^T (CA)^T · · · (CA^{N−1})^T]^T ∈ M_{dN,N}(R) has full rank.
We will often say that a state space system (3.5) is minimal, controllable or observable if the corresponding transfer function has this property. In the context of ARMA processes these concepts have been used to investigate the non-singularity of the Fisher information matrix (Klein and Spreij, 2006). The next theorem characterizes minimality in terms of controllability and observability. Proof. By Theorem 3.7, minimality of (A, B, C) implies controllability, and by Lemma 3.8, this is equivalent to Σ^{(h)} having full rank. (3.11) Proof. We will prove the assertion by showing that the N-dimensional state space representation (3.9) is both controllable and observable, and thus, by Theorem 3.7, minimal. Observability has been shown in Sontag (1998, Proposition 5.2.11) using the Hautus criterion (Hautus, 1969). The key ingredient in the proof of controllability is Corollary 3.9, where we showed that the autocovariance matrix Σ^{(h)} of N^{(h)}_n, given in Proposition 3.6, has full rank; this shows that the representation (3.9) is indeed minimal and completes the proof.
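The rank conditions of Definitions 3.5 and 3.6, and the characterization of minimality in Theorem 3.7, translate directly into a numerical check (function names hypothetical):

```python
import numpy as np

def controllability_matrix(A, B):
    """C = [B, AB, ..., A^{N-1} B]  (Definition 3.5)."""
    N = A.shape[0]
    blocks, M = [], B.copy()
    for _ in range(N):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

def observability_matrix(A, C):
    """O = [C; CA; ...; C A^{N-1}]  (Definition 3.6)."""
    N = A.shape[0]
    blocks, M = [], C.copy()
    for _ in range(N):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def is_minimal(A, B, C):
    """By Theorem 3.7, (A, B, C) is minimal iff it is both controllable
    and observable, i.e. both matrices above have full rank N."""
    N = A.shape[0]
    return (np.linalg.matrix_rank(controllability_matrix(A, B)) == N
            and np.linalg.matrix_rank(observability_matrix(A, C)) == N)
```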
Since, by Hannan and Deistler (1988, Theorem 2.3.4), To prove the claim that the operator M is injective, it is thus sufficient to show that the matrix 𝒜 ≔ ∫_0^h e^{Au} ⊗ e^{Au} du ∈ M_{N^2}(R) is non-singular. We write A ⊕ A ≔ A ⊗ 1_N + 1_N ⊗ A. By Bernstein (2005, Fact 11.14.37), 𝒜 = ∫_0^h e^{(A⊕A)u} du, and since σ(A ⊕ A) = {λ + µ : λ, µ ∈ σ(A)} (Bernstein, 2005, Proposition 7.2.3), Assumption E implies that all eigenvalues of the matrix A ⊕ A have strictly negative real parts; in particular, A ⊕ A is invertible. Consequently, it follows from Bernstein (2005, Fact 11.13.14) that 𝒜 = (A ⊕ A)^{-1} [e^{(A⊕A)h} − 1_{N^2}]. Since, for any matrix M, it holds that σ(e^M) = {e^λ : λ ∈ σ(M)} (Bernstein, 2005, Proposition 11.2.3), the spectrum of e^{(A⊕A)h} is a subset of the open unit disk, and it follows that 𝒜 is invertible.
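The closed form for the matrix 𝒜 derived above can be verified numerically (function name hypothetical):

```python
import numpy as np
from scipy.linalg import expm

def kron_integral(A, h):
    """The matrix  A_cal = int_0^h e^{Au} (x) e^{Au} du  from the proof:
    with the Kronecker sum  A (+) A = A (x) I + I (x) A  one has
    A_cal = (A (+) A)^{-1} (e^{(A (+) A) h} - I),  which is invertible under
    Assumption E since every eigenvalue of A (+) A has negative real part."""
    N = A.shape[0]
    S = np.kron(A, np.eye(N)) + np.kron(np.eye(N), A)
    return np.linalg.solve(S, expm(S * h) - np.eye(N * N))
```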

Overcoming the aliasing effect
One goal in this paper is the estimation of multivariate CARMA processes or, equivalently, continuous-time state space models, based on discrete observations. In this brief section we concentrate on the issue of identifiability, and we derive sufficient conditions that prevent redundancies from being introduced into an otherwise properly specified model by the process of sampling, an effect known as aliasing (Hansen and Sargent, 1983).
For ease of notation we choose to parametrize the state matrix, the input matrix, and the observation matrix of the state space model (3.5), as well as the driving Lévy process L; from these one can always obtain an autoregressive and a moving average polynomial describing the same process by applying a left matrix fraction decomposition to the corresponding transfer function. We hence assume that there is a compact parameter set Θ ⊂ R^r and that, for each ϑ ∈ Θ, one is given matrices A_ϑ, B_ϑ and C_ϑ of matching dimensions, as well as a Lévy process L_ϑ. A basic assumption is that we always work with second-order processes (cf. Assumption L1).
To ensure that the model corresponding to ϑ describes a stationary output process we impose the analogue of Assumption E.
Proof. We will show that for every ϑ₁, ϑ₂ ∈ Θ with ϑ₁ ≠ ϑ₂, the sampled output processes Y^{(h)}_{ϑ₁} and Y^{(h)}_{ϑ₂} are not L²-observationally equivalent. Suppose, for the sake of contradiction, that the spectral densities of the sampled output processes were the same. Then the Spectral Factorization Theorem (Theorem 3.5) would imply the existence of an orthogonal N × N matrix O linking the two spectral factorizations, where Σ^{(h),1/2}_{ϑᵢ} are the unique positive definite matrix square roots, defined by spectral calculus, of the matrices Σ^{(h)}_{ϑᵢ}. This means that the two triples are algebraic realizations of the same rational matrix function. Since Assumption C5 clearly implies the Kalman-Bertram criterion (3.11), it follows from Proposition 3.10 in conjunction with Assumption C3 that these realizations are minimal, and hence from Hannan and Deistler (1988, Theorem 2.3.4) that there exists an invertible matrix T ∈ M_N(R) satisfying the three equations (3.12). It follows from the power series representation of the matrix exponential that T^{−1} e^{A_{ϑ₂} h} T equals e^{T^{−1} A_{ϑ₂} T h}. Under Assumption C5, the first equation, in conjunction with Lemma 3.11, therefore implies that A_{ϑ₁} = T^{−1} A_{ϑ₂} T. Using this, the second of the three equations (3.12) gives, by Lemma 3.12, a corresponding identity for the remaining defining matrices. Together with the last of the equations (3.12) and Proposition 3.6 it follows that f_{ϑ₁} = f_{ϑ₂}, which contradicts Assumption C4 that Y_{ϑ₁} and Y_{ϑ₂} are not L²-observationally equivalent.

Asymptotic properties of the QML estimator
In this section we apply the theory developed in Section 2 for the QML estimation of general discrete-time linear state space models to the estimation of continuous-time linear state space models or, equivalently, of multivariate CARMA processes. We have already seen that a discretely observed MCARMA process can be represented by a discrete-time state space model and that, consequently, a parametric family of MCARMA processes induces a parametric family of discrete-time state space models. Eqs. (3.9) show that sampling with spacing h maps the continuous-time state space models (A_ϑ, B_ϑ, C_ϑ, L_ϑ)_{ϑ∈Θ} to the discrete-time state space models (3.13). The initial value X̂_{ϑ,1} may be chosen in the same way as in the discrete-time case. The steady-state Kalman gain matrices K^{(h)}_ϑ and pseudo-covariances V^{(h)}_ϑ are computed as functions of the unique positive definite solution Ω^{(h)}_ϑ of the discrete-time algebraic Riccati equation. In order to obtain the asymptotic normality of the QML estimator for multivariate CARMA processes, it is therefore only necessary to make sure that Assumptions D1 to D10 hold for the model (3.13). The discussion of identifiability in the previous section allows us to specify accessible conditions on the parametrization of the continuous-time model under which the QML estimator is strongly consistent. In addition to the identifiability assumptions C3 to C5, we impose the following conditions.

Assumption C6. The parameter space Θ is a compact subset of R^r.

Assumption C7. The functions ϑ → A_ϑ, ϑ → B_ϑ, ϑ → C_ϑ, and ϑ → Σ^L_ϑ are continuous. Moreover, for each ϑ ∈ Θ, the matrix C_ϑ has full rank.
Proof. Assumption D1 is clear. Assumption D2 follows from the observation that the functions A → e^A and (A, B, Σ) → ∫₀ʰ e^{Au} B Σ B^T e^{A^T u} du are continuous. By Assumptions C2, C6 and C7, and the fact that the eigenvalues of a matrix are continuous functions of its entries, there exists a positive real number ε such that, for each ϑ ∈ Θ, the eigenvalues of A_ϑ have real parts less than or equal to −ε. The observation that the eigenvalues of e^A are the exponentials of the eigenvalues of A thus shows that Assumption D3, i) holds with ρ ≔ e^{−εh} < 1. Assumption C1, that the matrices Σ^L_ϑ are non-singular, and the minimality assumption C3 imply, by Corollary 3.9, that the noise covariance matrices Σ^{(h)}_ϑ are non-singular, and thus Assumption D3, ii) holds. Further, by Proposition 2.1, the matrices Ω_ϑ are non-singular, and so are, because the matrices C_ϑ are assumed to have full rank, the matrices V_ϑ; this means that Assumption D3, iii) is satisfied. Assumption D4 is a consequence of Proposition 3.6, which states that the noise sequences N_ϑ are i. i. d. and in particular ergodic; their second moments are finite because of Assumption C1.
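The quantities entering the sampled model are straightforward to compute. The sketch below uses an illustrative continuous-time triple (A, B, C) with driving-noise covariance Σ^L (all values hypothetical, not the paper's parametrization): it forms the sampled transition matrix and noise covariance via the Kronecker-sum identity, then solves the discrete-time algebraic Riccati equation for the steady-state error covariance Ω, the Kalman gain K, and the pseudo-innovation covariance V. The small observation-noise regularizer R is an added assumption for numerical well-posedness, since the sampled model here has no observation noise.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# Illustrative continuous-time system (eigenvalues of A are -1 and -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Sigma_L = np.array([[1.0]])
h = 1.0
N = A.shape[0]

# Sampled transition matrix e^{Ah} and noise covariance
# Q = ∫_0^h e^{Au} B Σ^L B^T e^{A^T u} du, in closed form via A ⊕ A
F = expm(A * h)
ks = np.kron(A, np.eye(N)) + np.kron(np.eye(N), A)
vec_Q = np.linalg.solve(ks, (expm(ks * h) - np.eye(N * N)) @
                        (B @ Sigma_L @ B.T).reshape(-1, order="F"))
Q = vec_Q.reshape(N, N, order="F")

# Discrete-time algebraic Riccati equation for the steady-state error
# covariance Ω; R is a tiny regularizer standing in for observation noise
R = 1e-6 * np.eye(1)
Omega = solve_discrete_are(F.T, C.T, Q, R)

# Steady-state Kalman gain and pseudo-innovation covariance
V = C @ Omega @ C.T + R
K = F @ Omega @ C.T @ np.linalg.inv(V)
```

A useful check is that the filter matrix F − KC has spectral radius below one, which is the discrete-time analogue of the stability needed for the pseudo-innovations to be well defined.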
In order to be able to show that the QML estimator ϑ̂^{L,(h)} is asymptotically normally distributed, we impose the following conditions in addition to those described so far.
Assumption C8. The true parameter value ϑ 0 is an element of the interior of Θ.
Assumption C10. There exists a positive number δ such that E‖L_{ϑ₀}(1)‖^{4+δ} < ∞.

Proof. It follows from Rajput and Rosiński (1989, Theorem 2.7) that N is infinitely divisible with characteristic triplet (γ, Σ, ν), and that the (4+δ)th absolute moment of ν restricted to {‖x‖ > 1} is bounded by a product of two factors. The first factor on the right side is finite by Assumptions C6 and C9, the second by Assumption C10 and the equivalence of finiteness of the αth absolute moment of an infinitely divisible distribution and finiteness of the αth absolute moment of the corresponding Lévy measure restricted to the exterior of the unit ball (Sato, 1999, Corollary 25.8). The same corollary shows that E‖N‖^{4+δ} < ∞, and thus Assumption D8 holds.
Our final assumption is the analogue of Assumption D10. It will ensure that the Fisher information matrix of the QML estimator ϑ̂^{L,(h)} is non-singular by imposing a non-degeneracy condition on the parametrization of the model.

Assumption C11.
There exists a positive index j₀ such that the (j₀ + 2)d² × r matrix has rank r.
Theorem 3.16 (Consistency and asymptotic normality of ϑ̂^{L,(h)}). Assume that (A_ϑ, B_ϑ, C_ϑ, L_ϑ)_{ϑ∈Θ} is a parametric family of continuous-time state space models, and denote by y^L a sample of the sampled output process; the asymptotic covariance matrix is Ξ = J^{−1} I J^{−1}.

Proof. Strong consistency of ϑ̂^{L,(h)} is a consequence of Theorem 2.4 if we can show that the parametric family (e^{A_ϑ h}, C_ϑ, N_ϑ, 0)_{ϑ∈Θ} of discrete-time state space models satisfies Assumptions D1 to D5. The first four of these are shown to hold in Lemma 3.14. For the last one, we observe that, by Lemma 2.3, Assumption D5 is equivalent to the family of state space models (3.13) being identifiable from the spectral density. Under Assumptions C3 to C5 this is guaranteed by Theorem 3.13.
In order to prove Eq. (3.16), we shall apply Theorem 2.5 and therefore need to verify Assumptions D6 to D10 for the state space models (e^{A_ϑ h}, C_ϑ, N_ϑ, 0)_{ϑ∈Θ}. The first three hold by Lemma 3.15, the last one as a reformulation of Assumption C11. Assumption D9, that the strong mixing coefficients α of a sampled multivariate CARMA process satisfy Σ_m [α(m)]^{δ/(2+δ)} < ∞, follows from Assumption C1 and Marquardt and Stelzer (2007, Proposition 3.34), where it was shown that MCARMA processes with a finite logarithmic moment are exponentially strongly mixing.

Practical applicability
In this section we complement the theoretical results from Sections 2 and 3 by commenting on their applicability in practical situations. Canonical parametrizations are a classical subject of research on discrete-time dynamical systems, and most of the results also apply to the continuous-time case; without going into detail we present the basic notions and results about these parametrizations. The assertions of Theorem 3.16 are confirmed by a simulation study for a bivariate non-Gaussian CARMA process. Finally, we estimate the parameters of a CARMA model for a bivariate time series from economics using our QML approach.

A simulation study
We present a simulation study for a bivariate CARMA process with Kronecker indices (1, 2), i. e. CARMA indices (p, q) = (2, 1). As the driving Lévy process we chose a zero-mean normal-inverse Gaussian (NIG) process (L(t))_{t∈R}. Such processes have been found to be useful in the modelling of stock returns and stochastic volatility, as well as of turbulence data (see, e. g., Barndorff-Nielsen, 1997; Rydberg, 1997). The distribution of the increments L(t) − L(t − 1) of a bivariate normal-inverse Gaussian Lévy process is characterized by the density

f_NIG(x; µ, α, β, δ, ∆) = (δ exp(δκ) / (2π)) · exp⟨β, x⟩ (1 + α g(x)) / (exp(α g(x)) g(x)³), x ∈ R²,

where g(x) = √(δ² + ⟨x − µ, ∆(x − µ)⟩), κ² = α² − ⟨β, ∆β⟩ > 0, and µ ∈ R² is a location parameter, α > 0 is a shape parameter, β ∈ R² is a symmetry parameter, δ > 0 is a scale parameter, and ∆ ∈ M⁺₂(R), det ∆ = 1, determines the dependence between the two components of (L(t))_{t∈R}. For our simulation study we chose the parameters

δ = 1, α = 3, β = (1, 1)^T, ∆ = [5/4, −1/2; −1/2, 1], µ = −(2√31)^{−1} (3, 2)^T, (4.5)

resulting in a skewed distribution with mean zero and covariance Σ^L ≈ [0.4751, −0.1622; −0.1622, 0.3708]. A sample of 350 independent replicates of the bivariate CARMA_{1,2} process (Y(t))_{t∈R} driven by a normal-inverse Gaussian Lévy process (L(t))_{t∈R} with parameters given in Eq. (4.5) was simulated on the equidistant time grid 0, 0.01, . . . , 2000 by applying an Euler scheme to the stochastic differential equation (3.5), making use of the canonical parametrization given in Table 1. For the simulation, the initial value X(0) = 0₃ and the parameters ϑ_{1:7} = (−1, −2, 1, −2, −3, 1, 2) were used. Each realization was sampled at integer times (h = 1), and QML estimates of ϑ₁, . . . , ϑ₇ as well as (ϑ₈, ϑ₉, ϑ₁₀) ≔ vech Σ^L were computed by numerical maximization of the quasi log-likelihood function, using a differential evolution optimization routine (Price, Storn and Lampinen, 2005) in conjunction with a subspace trust-region method. The sample means and sample standard deviations of the estimates are reported in Table 3. Moreover, the standard deviations were estimated using the square roots of the diagonal entries of the asymptotic covariance matrix (2.21) with s(L) = ⌊L/ log L⌋^{1/3}, and these estimates are also displayed in Table 3.

Table 3. QML estimates for the parameters of a bivariate NIG-driven CARMA_{1,2} process observed at integer times over the time horizon [0, 2000]. The second column reports the empirical mean of the estimators as obtained from 350 independent paths; the third and fourth columns contain the resulting bias and the sample standard deviation of the estimators, respectively, while the last column reports the average of the expected standard deviations of the estimators as obtained from the asymptotic normality result, Theorem 3.16.

One sees that the bias, the difference between the sample mean and the true parameter value, is very small, in accordance with the asymptotic consistency of the estimator. Moreover, the estimated standard deviation is always slightly larger than the sample standard deviation, yet close enough to provide a useful approximation for, e. g., the construction of confidence regions. In order not to underestimate the uncertainty in the estimate, such a conservative approximation to the true standard deviations is desirable in practice. Overall, the estimation procedure performs very well in the simulation study.
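As a sanity check on the parameter choice in Eq. (4.5), the bivariate NIG increments can be simulated through a standard normal variance-mean mixture representation, X = µ + Z∆β + √Z ∆^{1/2}W with an inverse Gaussian mixing variable Z ~ IG(δ, κ) and W ~ N(0, 1₂); numpy's Wald sampler with mean δ/κ and scale δ² realizes this IG law. The sketch below (sample size and seed are arbitrary) confirms numerically that the resulting distribution has mean zero and covariance close to the stated Σ^L:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from Eq. (4.5)
delta, alpha = 1.0, 3.0
beta = np.array([1.0, 1.0])
Delta = np.array([[1.25, -0.5], [-0.5, 1.0]])
kappa = np.sqrt(alpha**2 - beta @ Delta @ beta)      # κ² = α² − ⟨β, Δβ⟩ = 31/4
mu = -np.array([3.0, 2.0]) / (2.0 * np.sqrt(31.0))   # chosen so that the mean is zero

# NIG increments as a normal variance-mean mixture: Z ~ IG(δ, κ) is sampled
# via the Wald distribution with mean δ/κ and scale δ²
n = 200_000
Z = rng.wald(mean=delta / kappa, scale=delta**2, size=n)
W = rng.standard_normal((n, 2))
L_chol = np.linalg.cholesky(Delta)
X = mu + Z[:, None] * (Delta @ beta) + np.sqrt(Z)[:, None] * (W @ L_chol.T)

# Theoretical covariance Σ^L = (δ/κ) Δ + (δ/κ³) Δβ (Δβ)^T,
# which evaluates to approximately [[0.4751, -0.1622], [-0.1622, 0.3708]]
Sigma_L = (delta / kappa) * Delta + (delta / kappa**3) * np.outer(Delta @ beta, Delta @ beta)

assert np.allclose(X.mean(axis=0), 0.0, atol=0.02)
assert np.allclose(np.cov(X.T), Sigma_L, atol=0.03)
```

The same mixture representation, applied to increments over step 0.01, can serve as the input to the Euler scheme used in the simulation study.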