Variable selection in generalized random coefficient autoregressive models

In this paper, we consider the variable selection problem for the generalized random coefficient autoregressive (GRCA) model. Instead of a parametric likelihood, we use the non-parametric empirical likelihood in an information theoretic approach, and we propose an empirical likelihood-based Akaike information criterion (EAIC) and Bayesian information criterion (EBIC).


Introduction
Consider the following p-order generalized random coefficient autoregressive model:

$$Y_t = \Phi_t^{\tau} Y(t-1) + \varepsilon_t, \qquad (1)$$

where τ denotes the transpose of a matrix or vector, $\Phi_t = (\Phi_{t1}, \ldots, \Phi_{tp})^{\tau}$ is a random coefficient vector, $Y(t-1) = (Y_{t-1}, \ldots, Y_{t-p})^{\tau}$, and $\{(\Phi_t^{\tau}, \varepsilon_t)^{\tau}, t = 0, \pm 1, \pm 2, \ldots\}$ is a sequence of i.i.d. random vectors with $E(\Phi_t) = \varphi = (\varphi_1, \ldots, \varphi_p)^{\tau}$, $E(\varepsilon_t) = 0$, and

$$\operatorname{Var}\begin{pmatrix}\Phi_t\\ \varepsilon_t\end{pmatrix} = \begin{pmatrix} V_{\varphi} & \sigma_{\varphi\varepsilon}\\ \sigma_{\varphi\varepsilon}^{\tau} & \sigma_{\varepsilon}^{2}\end{pmatrix}.$$

As a generalization of the usual autoregressive model, the random coefficient autoregressive (RCAR) model (cf. [1, 2]), the Markovian bilinear model and its generalization, and the random coefficient exponential autoregressive model (cf. [3-5]), model (1) was first introduced by Hwang and Basawa [6]. The GRCA has become an important model in the nonlinear time series context, and in recent years it has been studied by many authors. For instance, Hwang and Basawa [7] established the local asymptotic normality of a class of generalized random coefficient autoregressive processes. Carrasco and Chen [8] provided tractable sufficient conditions that simultaneously imply strict stationarity, finiteness of higher-order moments, and β-mixing with geometric decay rates. Zhao and Wang [9] constructed confidence regions for the parameters of model (1) by using an empirical likelihood method. Furthermore, Zhao et al. [10] considered the problem of testing the constancy of the coefficients in the stationary first-order generalized random coefficient autoregressive model. In this paper, we consider the variable selection problem for the GRCA based on the empirical likelihood method.
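To fix ideas, the following minimal simulation sketch (ours, not from the paper) generates a first-order GRCA path, assuming Gaussian random coefficients and innovations purely for illustration; here $E(\Phi_t^2) = 0.3^2 + 0.2^2 < 1$, so the process is stationary.

```python
import numpy as np

def simulate_grca1(n, phi=0.3, sd_coef=0.2, sd_eps=1.0, burn_in=200, seed=0):
    """Simulate a first-order GRCA: Y_t = Phi_t * Y_{t-1} + eps_t, where
    Phi_t ~ N(phi, sd_coef^2) and eps_t ~ N(0, sd_eps^2) are i.i.d.
    (Gaussian choices are illustrative; the model only restricts moments.)"""
    rng = np.random.default_rng(seed)
    y, path = 0.0, []
    for t in range(n + burn_in):
        coef = rng.normal(phi, sd_coef)   # random coefficient Phi_t
        y = coef * y + rng.normal(0.0, sd_eps)
        if t >= burn_in:                  # discard burn-in to approximate stationarity
            path.append(y)
    return np.array(path)

y = simulate_grca1(500)
```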
Many model selection procedures have been proposed in the statistical literature, including the adjusted R² (see Theil [11]), the AIC (see Akaike [12]), the BIC (see Schwarz [13]), and Mallows's C_p (see Mallows [14]). Other criteria in the literature include Hannan and Quinn's criterion [15], Geweke and Meese's criterion [16], Cavanaugh's Kullback information criterion [17], and the deviance information criterion of Spiegelhalter et al. [18]. Also, Tsay [19], Hurvich and Tsai [20], and Pötscher [21] have studied model selection methods in time series models. Recently, the model selection problem has been extended to moment selection, as in Andrews [22], Andrews and Lu [23], and Hong et al. [24]. These model selection methods are concerned with parsimony, as stressed in Zellner et al. [25], as well as with accuracy or power in choosing models.
In this paper, we develop an information theoretic approach to the variable selection problem for the GRCA. Specifically, instead of a parametric likelihood, we use the non-parametric empirical likelihood (see Owen [26, 27]) in the information theoretic approach. We propose an empirical likelihood-based Akaike information criterion (EAIC) and a Bayesian information criterion (EBIC).
The paper proceeds as follows. The next section is concerned with the methodology and the main results. Section 3 is devoted to the proofs of the main results.
Throughout the paper, we use the symbols "$\xrightarrow{d}$" and "$\xrightarrow{p}$" to denote convergence in distribution and convergence in probability, respectively. We abbreviate "almost surely" and "independent and identically distributed" to "a.s." and "i.i.d.", respectively. $o_p(1)$ denotes a term that converges to zero in probability, and $O_p(1)$ denotes a term that is bounded in probability. Furthermore, the Kronecker product of matrices A and B is denoted by $A \otimes B$, and $\|M\|$ denotes the $L_2$ norm of a vector or matrix M.

Methods and main results
In this section, we first propose the empirical likelihood-based information criteria for the choice of a GRCA, and then we investigate the asymptotic properties of the new variable selection method.

Empirical likelihood-based information criteria
Hwang and Basawa [6] derived the conditional least-squares estimator $\hat\varphi$ of φ, which is given by

$$\hat{\varphi} = \Biggl(\sum_{t=1}^{n} Y(t-1)Y^{\tau}(t-1)\Biggr)^{-1} \sum_{t=1}^{n} Y_t Y(t-1).$$

By using the estimating equation of the conditional least-squares estimator, we can obtain the following score function:

$$\sum_{t=1}^{n} G_t(\varphi) = 0, \qquad \text{where } G_t(\varphi) = Y_t Y(t-1) - Y(t-1)Y^{\tau}(t-1)\varphi.$$

Following Owen [26], the empirical likelihood statistic for φ is defined as

$$L(\varphi) = \sup\Biggl\{\prod_{t=1}^{n} (np_t) : p_t \ge 0,\ \sum_{t=1}^{n} p_t = 1,\ \sum_{t=1}^{n} p_t G_t(\varphi) = 0\Biggr\},$$

where $p_1, \ldots, p_n$ range over all sets of nonnegative numbers summing to 1. By the Lagrange multiplier method, let

$$H = \sum_{t=1}^{n}\log(np_t) + \gamma\Biggl(\sum_{t=1}^{n} p_t - 1\Biggr) - n\lambda^{\tau}\sum_{t=1}^{n} p_t G_t(\varphi).$$

After a simple algebraic calculation, and noting that $\sum_{t=1}^{n} p_t = 1$ and $\sum_{t=1}^{n} p_t G_t(\varphi) = 0$, we have $\gamma = -n$ and

$$p_t = \frac{1}{n(1+\lambda^{\tau}G_t(\varphi))},$$

which implies that

$$l(\varphi) = -2\log L(\varphi) = 2\sum_{t=1}^{n}\log\bigl(1+\lambda^{\tau}G_t(\varphi)\bigr),$$

where λ is the solution of the equation

$$\frac{1}{n}\sum_{t=1}^{n}\frac{G_t(\varphi)}{1+\lambda^{\tau}G_t(\varphi)} = 0.$$

The definition of $l(\varphi)$ relies on finding positive $p_t$'s such that $\sum_{t=1}^{n} p_t G_t(\varphi) = 0$ for each φ. The solution exists if and only if the convex hull of $\{G_t(\varphi), t = 1, 2, \ldots, n\}$ contains zero as an inner point. When the model is correct, the solution exists with probability tending to 1 as the sample size n → ∞ for φ in a neighborhood of $\varphi_0$. However, for finite n and at some values of φ, the equation often has no solution in the $p_t$'s. To avoid this problem, we introduce the adjusted empirical likelihood.
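Before turning to the adjustment, here is a numerical sketch of the unadjusted quantities above (our own implementation choices: a plain Newton iteration for λ, with only a crude positivity safeguard); it stacks the $G_t(\varphi)$ and evaluates $l(\varphi)$ via Owen's dual problem.

```python
import numpy as np

def estimating_functions(y, phi):
    """Stack G_t(phi) = Y_t Y(t-1) - Y(t-1) Y(t-1)^T phi over t."""
    p = len(phi)
    G = []
    for t in range(p, len(y)):
        ylag = y[t - p:t][::-1]                    # Y(t-1) = (Y_{t-1}, ..., Y_{t-p})
        G.append(y[t] * ylag - np.outer(ylag, ylag) @ phi)
    return np.array(G)

def el_log_ratio(G, n_iter=50, tol=1e-10):
    """l(phi) = 2 * sum_t log(1 + lam' G_t), where lam solves
    sum_t G_t / (1 + lam' G_t) = 0; lam is found by Newton ascent of
    the concave dual function sum_t log(1 + lam' G_t)."""
    n, p = G.shape
    lam = np.zeros(p)
    for _ in range(n_iter):
        w = 1.0 + G @ lam
        if np.any(w < 1.0 / n):                   # weights must stay positive
            break
        grad = (G / w[:, None]).sum(axis=0)
        hess = -(G / w[:, None] ** 2).T @ G       # -sum_t G_t G_t^T / w_t^2
        lam = lam - np.linalg.solve(hess, grad)   # Newton step
        if np.linalg.norm(grad) < tol:
            break
    return 2.0 * np.sum(np.log(1.0 + G @ lam))
```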
Further, let $\bar G_n(\varphi) = n^{-1}\sum_{t=1}^{n} G_t(\varphi)$ and define $G_{n+1}(\varphi) = -a_n \bar G_n(\varphi)$ for some positive constant $a_n$. We adjust the profile empirical log-likelihood ratio function to

$$l^{*}(\varphi) = 2\sum_{t=1}^{n+1}\log\bigl(1+\tilde\lambda^{\tau}G_t(\varphi)\bigr),$$

with $\tilde\lambda = \tilde\lambda(\varphi)$ being the solution of

$$\frac{1}{n+1}\sum_{t=1}^{n+1}\frac{G_t(\varphi)}{1+\tilde\lambda^{\tau}G_t(\varphi)} = 0.$$

Since 0 always lies on the line connecting $\bar G_n$ and $G_{n+1}$, the adjusted empirical log-likelihood ratio function is well defined after the pseudo-value $G_{n+1}$ is added to the data set. The adjustment is particularly useful in that a numerical program does not crash simply because some undesirable value of φ is assessed.
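On top of the previous sketch, the adjustment amounts to a one-line augmentation of the stacked estimating functions. The default $a_n = \max(1, \log(n)/2)$ used below is a common recommendation in the adjusted empirical likelihood literature and is our assumption, not the paper's prescription.

```python
import numpy as np

def adjusted_el_log_ratio(G, a_n=None):
    """Append the pseudo-point G_{n+1} = -a_n * G_bar_n so that zero always
    lies inside the convex hull of the augmented estimating functions."""
    n = G.shape[0]
    if a_n is None:
        a_n = max(1.0, 0.5 * np.log(n))   # assumed default; satisfies a_n = o(n^{1/2})
    G_adj = np.vstack([G, -a_n * G.mean(axis=0)])
    return el_log_ratio(G_adj)            # solver from the previous sketch
```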
A full GRCA assumes that $Y_t$ is related to $\Phi_t^{\tau}Y(t-1)$ with $E(\Phi_t) = \varphi$ being an unknown parameter of dimension p. Let s be a subset of $\{1, 2, \ldots, p\}$, and let $Y_{[s]}(t-1)$ and $\varphi_{[s]}$ be the subvectors of $Y(t-1)$ and φ containing the entries in the positions specified by s. Consider the pth-order GRCA specified by $E(G_t(\varphi)) = 0$ and the submodel in which the components of φ outside s are set to zero. For the submodel we again define $G_{n+1}(\varphi_{[s]}) = -a_n \bar G_n(\varphi_{[s]})$ for some positive constant $a_n$, and the adjusted empirical log-likelihood ratio becomes $l^{*}(\varphi_{[s]})$. We define the adjusted profile empirical log-likelihood ratio as

$$l(s) = \min_{\varphi_{[s]}} l^{*}(\varphi_{[s]}).$$

The empirical likelihood versions of the AIC and BIC are then defined as

$$\mathrm{EAIC}(s) = l(s) + 2k \quad\text{and}\quad \mathrm{EBIC}(s) = l(s) + k\log n,$$

where k is the cardinality of s. After $l(s)$ has been evaluated for all s, we select the model with the minimum EAIC or EBIC value.
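Combining the pieces, a brute-force sketch of the selection rule: for every candidate subset s, profile the adjusted ratio over the free components (zeros elsewhere) and add the penalty. The optimizer and all function names are our choices; for small p, enumerating all $2^p - 1$ non-empty subsets is feasible.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def profile_l(y, p, s):
    """l(s): minimize the adjusted EL ratio over phi_[s], zeros elsewhere."""
    def objective(theta):
        phi = np.zeros(p)
        phi[list(s)] = theta
        return adjusted_el_log_ratio(estimating_functions(y, phi))
    return minimize(objective, np.zeros(len(s)), method="Nelder-Mead").fun

def select_model(y, p, criterion="EBIC"):
    n = len(y) - p                          # effective sample size
    best_s, best_score = None, np.inf
    for k in range(1, p + 1):
        for s in combinations(range(p), k):
            penalty = k * np.log(n) if criterion == "EBIC" else 2 * k
            score = profile_l(y, p, s) + penalty
            if score < best_score:
                best_s, best_score = s, score
    return best_s, best_score
```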

Asymptotic properties
It is well known that under some mild conditions the parametric BIC is consistent for variable selection while the parametric AIC is not. Similarly, we can prove that, when p is constant, EBIC is consistent but EAIC is not.
For purposes of illustration, in what follows we rewrite the model in matrix form (see Hwang and Basawa [6]). Let

$$Y(t) = (Y_t, Y_{t-1}, \ldots, Y_{t-p+1})^{\tau}, \qquad U_t = (\varepsilon_t, 0, \ldots, 0)^{\tau}$$

be p × 1 vectors, and let B and $C_t$ be the p × p matrices

$$B = \begin{pmatrix} \varphi^{\tau} \\ I_{p-1} \quad 0 \end{pmatrix}, \qquad C_t = \begin{pmatrix} (\Phi_t - \varphi)^{\tau} \\ 0 \end{pmatrix}.$$

Then model (1) can be written as

$$Y(t) = (B + C_t)Y(t-1) + U_t.$$

In order to obtain our theorems, we need the following regularity conditions:

(A1) All the eigenvalues of the matrix $E(C_t \otimes C_t) + (B \otimes B)$ are less than unity in modulus.
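For a given mean vector φ and coefficient covariance $V_{\varphi}$, condition (A1) can be checked numerically; below is a sketch under the companion-form construction above (the helper name is ours). Since $C_t = e_1(\Phi_t - \varphi)^{\tau}$, one has $E(C_t \otimes C_t) = (e_1 \otimes e_1)\operatorname{vec}(V_{\varphi})^{\tau}$.

```python
import numpy as np

def check_A1(phi, V_phi):
    """Return True if all eigenvalues of E(C_t (x) C_t) + B (x) B have
    modulus < 1, using E(C_t (x) C_t) = (e1 (x) e1) vec(V_phi)^T."""
    p = len(phi)
    B = np.zeros((p, p))
    B[0, :] = phi
    if p > 1:
        B[1:, :p - 1] = np.eye(p - 1)     # shift block of the companion form
    e1 = np.zeros(p)
    e1[0] = 1.0
    ECC = np.outer(np.kron(e1, e1), V_phi.reshape(-1))  # V_phi is symmetric
    M = ECC + np.kron(B, B)
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)

# p = 1, phi = 0.3, Var(Phi_t) = 0.04: spectral radius 0.3^2 + 0.04 = 0.13 < 1
print(check_A1(np.array([0.3]), np.array([[0.04]])))
```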
Note that when a submodel s is true, the components of $\varphi_0$ not in s are zero; that is, $Y_t$ is related only to the variables in the positions specified by s. The following theorem shows that when the submodel is true, the adjusted empirical log-likelihood ratio statistic has a chi-squared limiting distribution with p − k degrees of freedom. When the null hypothesis that the components of $\varphi_0$ outside s are zero is not true, the likelihood ratio goes to ∞ as n → ∞. We state the following theorem in terms of the adjusted empirical likelihood; it also applies to the usual empirical likelihood.
The following theorem indicates that, when p is constant, EBIC is consistent but EAIC is not.

Proofs of the main results
In order to prove the main theorems, we first present several lemmas.

Lemma 3.1 Assume that (A1) and (A2) hold. Then A is positive definite and B has rank p.
Proof After a simple algebraic calculation, we have, for any nonzero vector $c = (c_1, \ldots, c_p)^{\tau} \in \mathbb{R}^{p}$,

$$c^{\tau}Ac = E\bigl[(c^{\tau}Y(t-1))^{2}\operatorname{Var}(Y_t \mid Y(t-1))\bigr].$$

Note that the conditional distribution of $Y_t$, given $Y(t-1)$, is not degenerate, which implies that $\operatorname{Var}(Y_t \mid Y(t-1)) > 0$ a.s. It follows that $(c^{\tau}Y(t-1))^{2}\operatorname{Var}(Y_t \mid Y(t-1)) \ge 0$ a.s., and therefore $c^{\tau}Ac = 0$ if and only if $c^{\tau}Y(t-1) = 0$ a.s. Without loss of generality, suppose that the first component $c_1$ of c is 1, so that $Y_{t-1} = -c_2 Y_{t-2} - \cdots - c_p Y_{t-p}$ a.s., which contradicts the fact that the conditional distribution of $Y_{t-1}$, given $(Y_{t-2}, \ldots, Y_{t-p})$, is not degenerate. Hence $c^{\tau}Ac > 0$; that is, A is positive definite. Similarly, we can prove that B has rank p. The proof of Lemma 3.1 is thus complete.
Proof By the ergodic theorem, together with (16), we obtain (18). Again by the ergodic theorem, we can prove (19). Similar to the proof of (18), we can show (21). In what follows, we consider $n^{-1}\|\sum_{t=1}^{n} G_t(\varphi_0)\|$. Denote the ith component of $G_t(\varphi_0)$ by $G_{ti}(\varphi_0)$. Then $\{G_{ti}(\varphi_0), 1 \le i \le p\}$ is a stationary ergodic martingale difference sequence with $E(G_{ti}(\varphi_0)) = 0$ and $E(G_{ti}^{2}(\varphi_0)) < \infty$. By the law of the iterated logarithm for martingale difference sequences, we obtain (22) for each $1 \le i \le p$. Then (23) follows from (21) and (22). This, together with (18) and (19), proves Lemma 3.2.

Lemma 3.3
Assume that (A1) and (A2) hold and that $a_n = o(n^{1/2})$.

Proof From (23), together with $a_n = o(n^{1/2})$, the first conclusion follows immediately. It remains to establish the second, almost sure, conclusion. By the Fubini theorem, for any positive constant k, the relevant expectation is finite; thus, using the ergodic theorem and the Borel-Cantelli lemma, the corresponding exceptional event occurs only finitely often almost surely. Taking k = 1/m, there exists $Q_m$ with $P(Q_m) = 0$ such that the desired bound holds for any $\omega \in Q_m^{c}$. Setting $Q = \bigcup_{m} Q_m$, we have $P(Q) = 0$, which implies the second conclusion. The proof is complete.
Proof of Theorem 2.2 Let $\hat\lambda$ be the Lagrange multiplier corresponding to $\hat\varphi_{[s]}$, the maximum point of $l(\varphi_{[s]})$. With this notation, we may expand the adjusted empirical log-likelihood ratio around the true parameter. This, together with (49), yields the leading quadratic approximation of $l(s)$. Further, note that $Q_{1,n+1}$ is asymptotically normal with covariance matrix A, and the matrix of the limiting quadratic form is idempotent with rank p − k. Therefore, we have $l(s) \to \chi^{2}(p-k)$ in distribution as n → ∞. The proof is complete.
Proof of Theorem 2.4 First, we consider EAIC. Consider the situation where $s_0$ is empty, and let s = {1}, which contains a single covariate. Based on the expansion in the proof of Theorem 2.2, we can prove that $l(s_0) - l(s) \to \chi^{2}_{1}$ in distribution, which implies that $\lim_{n\to\infty} P(l(s_0) - l(s) > 2) > 0$. Therefore, EAIC is not consistent.
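To make the last step concrete, the limiting exceedance probability is a fixed positive constant; the following evaluation is ours, with Φ denoting the standard normal distribution function:

$$\lim_{n\to\infty} P\bigl(l(s_0) - l(s) > 2\bigr) = P\bigl(\chi^{2}_{1} > 2\bigr) = 2\bigl(1 - \Phi(\sqrt{2})\bigr) \approx 0.157 > 0.$$

Thus, even in this simplest case, EAIC retains an asymptotic probability of about 0.16 of preferring the overfitted model s = {1}.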
Next, we consider EBIC. If a model s does not contain $s_0$, then the corresponding moment conditions are misspecified and, by Theorem 2.2, $l(s)$ diverges to ∞ as n → ∞; it can further be shown that the divergence is faster than the penalty term $k \log n$, so that $P(\mathrm{EBIC}(s) < \mathrm{EBIC}(s_0)) \to 0$. That is, EBIC will not select any model s that does not contain $s_0$.
Furthermore, if s contains $s_0$ together with k > 0 additional insignificant variables, then, by Theorem 2.2, we have $l(s_0) - l(s) \to \chi^{2}_{k}$ in distribution, which implies that

$$P\bigl(\mathrm{EBIC}(s) < \mathrm{EBIC}(s_0)\bigr) = P\bigl(l(s_0) - l(s) > k\log n\bigr) \to 0 \quad \text{as } n \to \infty.$$

Thus the model s will not be selected by EBIC as n → ∞. Because p is finite, there are only a finite number of submodels s competing against $s_0$, and each of them has $o(1)$ probability of being selected. So EBIC is consistent. The proof is complete.

Conclusions
It should be pointed out that variable selection has always been an important problem for statisticians, and many variable selection methods have been proposed in the statistical literature. However, no variable selection method has so far been available for the GRCA. In this paper, instead of a parametric likelihood, we propose an empirical likelihood-based Akaike information criterion (EAIC) and Bayesian information criterion (EBIC) for the variable selection problem of the GRCA. Moreover, we prove that, under some mild conditions and when p is constant, EBIC is consistent while EAIC is not.