Panel Data Econometrics in R: The plm Package

This introduction to the plm package is a slightly modified version of Croissant and Millo (2008), published in the Journal of Statistical Software. Panel data econometrics is obviously one of the main fields in the profession, but most of the models used are difficult to estimate with R. plm is a package for R which intends to make the estimation of linear panel models straightforward. plm provides functions to estimate a wide variety of models and to make (robust) inference.


Introduction
Panel data econometrics is a continuously developing field. The increasing availability of data observed on cross-sections of units (like households, firms, countries etc.) and over time has given rise to a number of estimation approaches exploiting this double dimensionality to cope with some of the typical problems associated with economic data, first of all that of unobserved heterogeneity.
Timewise observation of data from different observational units has long been common in other fields of statistics (where they are often termed longitudinal data). In the panel data field as well as in others, the econometric approach is nevertheless peculiar with respect to experimental contexts, in that it emphasizes model specification and testing and tackles a number of issues arising from the particular statistical problems associated with economic data.
Thus, while a very comprehensive software framework for (among many other features) maximum likelihood estimation of linear regression models for longitudinal data, packages nlme (Pinheiro, Bates, DebRoy, and the R Core team 2007) and lme4 (Bates 2007), is available in the R (R Development Core Team 2008) environment and can be used, e.g., for estimation of random effects panel models, its use is not intuitive for a practicing econometrician, and maximum likelihood estimation is only one of the possible approaches to panel data econometrics. Moreover, economic panel datasets often happen to be unbalanced (i.e., they have a different number of observations between groups), a case which needs some adaptation to the methods and is not compatible with those in nlme. Hence the need for a package doing panel data "from the econometrician's viewpoint" and featuring at a minimum the basic techniques econometricians themselves are used to: random and fixed effects estimation of static linear panel data models, variable coefficients models, generalized method of moments estimation of dynamic models; and the basic toolbox of specification and misspecification diagnostics.

The linear panel model
The basic linear panel models used in econometrics can be described through suitable restrictions of the following general model:

$$y_{it} = \alpha_{it} + \beta_{it}^\top x_{it} + u_{it} \qquad (1)$$

where i = 1, ..., n is the individual (group, country ...) index, t = 1, ..., T is the time index and $u_{it}$ a random disturbance term of mean 0.
Of course the latter is not estimable with N = n × T data points. A number of assumptions are usually made about the parameters, the errors and the exogeneity of the regressors, giving rise to a taxonomy of feasible models for panel data.
The most common one is parameter homogeneity, which means that $\alpha_{it} = \alpha$ for all i, t and $\beta_{it} = \beta$ for all i, t. The resulting model is a standard linear model pooling all the data across i and t.
To model individual heterogeneity, one often assumes that the error term has two separate components, one of which is specific to the individual and doesn't change over time. This is called the unobserved effects model:

$$y_{it} = \alpha + \beta^\top x_{it} + u_{it}, \qquad u_{it} = \mu_i + \varepsilon_{it} \qquad (3)$$

The appropriate estimation method for this model depends on the properties of the two error components. The idiosyncratic error $\varepsilon_{it}$ is usually assumed well-behaved and independent of both the regressors $x_{it}$ and the individual error component $\mu_i$. The individual component may in turn be either independent of the regressors or correlated with them.
If it is correlated, the ordinary least squares (ols) estimator for β would be inconsistent, so it is customary to treat the $\mu_i$ as a further set of n parameters to be estimated, as if in the general model $\alpha_{it} = \alpha_i$ for all t. This is called the fixed effects (a.k.a. within or least squares dummy variables) model, usually estimated by ols on transformed data, and gives consistent estimates for β.
If the individual-specific component $\mu_i$ is uncorrelated with the regressors, a situation which is usually termed random effects, the overall error $u_{it}$ also is, so the ols estimator is consistent. Nevertheless, the common error component across individuals induces correlation between the composite error terms, making ols estimation inefficient, so one has to resort to some form of feasible generalized least squares (gls) estimator. These are based on the estimation of the variance of the two error components, for which a number of different procedures are available.
If the individual component is missing altogether, pooled ols is the most efficient estimator for β. This set of assumptions is usually labelled the pooling model, although this actually refers to the errors' properties and the appropriate estimation method rather than to the model itself.
If one relaxes the usual hypotheses of well-behaved, white noise errors and allows for the idiosyncratic error $\varepsilon_{it}$ to be arbitrarily heteroskedastic and serially correlated over time, a more general kind of feasible gls is needed, called the unrestricted or general gls. This specification can also be augmented with individual-specific error components possibly correlated with the regressors, in which case it is termed fixed effects gls.
Another way of estimating unobserved effects models by removing time-invariant individual components is first-differencing the data: lagging the model and subtracting, the time-invariant components (the intercept and the individual error component) are eliminated, and the model

$$\Delta y_{it} = \beta^\top \Delta x_{it} + \Delta u_{it}$$

(where $\Delta y_{it} = y_{it} - y_{i,t-1}$, $\Delta x_{it} = x_{it} - x_{i,t-1}$ and, from (3), $\Delta u_{it} = u_{it} - u_{i,t-1} = \Delta \varepsilon_{it}$ for t = 2, ..., T) can be consistently estimated by pooled ols. This is called the first-difference, or fd, estimator. Its relative efficiency, and so the reason for choosing it over other consistent alternatives, depends on the properties of the error term. The fd estimator is usually preferred if the errors $u_{it}$ are strongly persistent in time, because then the $\Delta u_{it}$ will tend to be serially uncorrelated.
Lastly, the between model, which is computed on time (group) averages of the data, discards all the information due to intragroup variability but is consistent in some settings (e.g., nonstationarity) where the others are not, and is often preferred to estimate long-run relationships.
Variable coefficients models relax the assumption that $\beta_{it} = \beta$ for all i, t. Fixed coefficients models allow the coefficients to vary along one dimension, like $\beta_{it} = \beta_i$ for all t. Random coefficients models instead assume that coefficients vary randomly around a common average, as $\beta_{it} = \beta + \eta_i$ for all t, where $\eta_i$ is a group- (time-) specific effect with mean zero.
The hypotheses on parameters and error terms (and hence the choice of the most appropriate estimator) are usually tested by means of:

- pooling tests, to check poolability, i.e. the hypothesis that the same coefficients apply across all individuals;
- if the homogeneity assumption over the coefficients is established, tests for the presence of unobserved effects, comparing the null of spherical residuals with the alternative of group (time) specific effects in the error term;
- Hausman-type tests for the choice between fixed and random effects specifications, comparing the two estimators under the null of no significant difference: if this is not rejected, the more efficient random effects estimator is chosen;
- even after this step, departures of the error structure from sphericity can further affect inference, so that either screening tests or robust diagnostics are needed.
Dynamic models, and in general the lack of strict exogeneity of the regressors, pose further problems for estimation, which are usually dealt with in the generalized method of moments (gmm) framework.
These were, in our opinion, the basic requirements of a panel data econometrics package for the R language and environment. Some, as often happens with R, were already fulfilled by packages developed for other branches of computational statistics, while others (like the fixed effects or the between estimators) were straightforward to compute after transforming the data; but in every case there were either language inconsistencies w.r.t. the standard econometric toolbox or subtleties to be dealt with (like, for example, appropriate computation of standard errors for the demeaned model, a common pitfall), so we felt there was a need for an "all in one" econometrics-oriented package allowing the user to carry out specification searches, estimation and inference in a natural way.

Data structure
Panel data have a special structure: each row of the data corresponds to a specific individual and time period. In plm the data argument may be an ordinary data.frame but, in this case, an argument called index has to be added to indicate the structure of the data. This can be:

- NULL (the default value), in which case it is assumed that the first two columns contain the individual and the time index and that observations are ordered by individual and by time period;
- a character string, which should be the name of the individual index;
- a character vector of length two containing the names of the individual and the time index;
- an integer, which is the number of individuals (only in the case of a balanced panel with observations ordered by individual).
The pdata.frame function is then called internally, which returns a pdata.frame, i.e. a data.frame with an attribute called index. This attribute is a data.frame that contains the individual and the time indexes.
It is also possible to call the pdata.frame function directly and then use the resulting pdata.frame in the estimation functions.
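A minimal sketch (the Grunfeld data ship with plm; firm and year are the names of its index columns):

R> library("plm")
R> data("Grunfeld", package = "plm")
R> pGrunfeld <- pdata.frame(Grunfeld, index = c("firm", "year"))
R> head(attr(pGrunfeld, "index"))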

Interface
Estimation interface

plm provides four functions for estimation:

- plm: estimation of the basic panel models, i.e. within, between and random effects models. Models are estimated by applying the lm function to transformed data;
- pvcm: estimation of models with variable coefficients;
- pgmm: estimation of generalized method of moments models;
- pggls: estimation of general feasible generalized least squares models.
The interface of these functions is consistent with the lm() function. Namely, their first two arguments are formula and data (which should be a data.frame and is mandatory). Three additional arguments are common to these functions:

- index: this argument enables the estimation functions to identify the structure of the data, i.e. the individual and the time period for each observation;
- effect: the kind of effects to include in the model, i.e. individual effects, time effects or both;
- model: the kind of model to be estimated, most of the time a model with fixed effects or a model with random effects.
The results of these four functions are stored in an object whose class has the same name as the function. They all inherit from class panelmodel. A panelmodel object contains: coefficients, residuals, fitted.values, vcov, df.residual and call, and functions that extract these elements are provided.
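A minimal sketch of this common interface on the Grunfeld data shipped with the package (output omitted):

R> grun.fe <- plm(inv ~ value + capital, data = Grunfeld,
+    index = c("firm", "year"), effect = "individual", model = "within")
R> coef(grun.fe)
R> head(residuals(grun.fe))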

Testing interface
The diagnostic testing interface provides both formula and panelmodel methods for most functions, with some exceptions.The user may thus choose whether to employ results stored in a previously estimated panelmodel object or to re-estimate it for the sake of testing.
Although the first strategy is the most efficient one, diagnostic testing on panel models mostly employs ols residuals from pooling model objects, whose estimation is computationally inexpensive. Therefore most examples in the following are based on formula methods, which are perhaps the cleanest for illustrative purposes.

Computational approach to estimation
The feasible gls methods needed for efficient estimation of unobserved effects models have a simple closed-form solution: once the variance components have been estimated, and hence the covariance matrix of the errors $\hat V$, the model parameters can be estimated as

$$\hat\beta = (X^\top \hat V^{-1} X)^{-1} (X^\top \hat V^{-1} y) \qquad (5)$$

Nevertheless, in practice plain computation of $\hat\beta$ has long been an intractable problem even for moderate-sized datasets because of the need to invert the N × N matrix $\hat V$. With advances in computer power, this is no longer so, and it is possible to program the "naive" estimator (5) in R with standard matrix algebra operators and have it work seamlessly for the standard "guinea pigs", e.g. the Grunfeld data. Estimation with a couple of thousand data points also becomes feasible on a modern machine, although excruciatingly slow and definitely not suitable for everyday econometric practice. Memory limits would also be very near because of the storage needs related to the huge $\hat V$ matrix. An established solution exists for the random effects model which reduces the problem to an ordinary least squares computation.

The (quasi-)demeaning framework
The estimation methods for the basic models in panel data econometrics, the pooled ols, random effects and fixed effects (or within) models, can all be described inside the ols estimation framework. In fact, while pooled ols simply pools data, the standard way of estimating fixed effects models with, say, group (time) effects entails transforming the data by subtracting the average over time (group) from every variable, which is usually termed time-demeaning. In the random effects case, the various feasible gls estimators which have been put forth to tackle the issue of serial correlation induced by the group-invariant random effect have been proven to be equivalent (as far as estimation of the βs is concerned) to ols on partially demeaned data, where partial demeaning is defined as:

$$y_{it} - \theta \bar y_i = (X_{it} - \theta \bar X_i)^\top \beta + (v_{it} - \theta \bar v_i)$$

where $\theta = 1 - [\sigma_u^2/(\sigma_u^2 + T \sigma_e^2)]^{1/2}$, $\bar y$ and $\bar X$ denote time means of y and X, and the disturbance $v_{it} - \theta \bar v_i$ is homoskedastic and serially uncorrelated. Thus the feasible re estimate for β may be obtained by estimating θ and running an ols regression on the transformed data with lm(). The other estimators can be computed as special cases: for θ = 1 one gets the fixed effects estimator, for θ = 0 the pooled ols one.
Moreover, instrumental variable estimators of all these models may also be obtained using several calls to lm().
For this reason the three estimators above have been grouped inside the same function, as sketched below.
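A minimal sketch of how the single plm function covers the three cases (output omitted):

R> pool <- plm(inv ~ value + capital, data = Grunfeld, model = "pooling")  # theta = 0
R> fe <- plm(inv ~ value + capital, data = Grunfeld, model = "within")     # theta = 1
R> re <- plm(inv ~ value + capital, data = Grunfeld, model = "random")     # estimated theta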
On the output side, a number of diagnostics and a very general coefficients' covariance matrix estimator also benefit from this framework, as they can be readily calculated by applying the standard ols formulas to the demeaned data, which are contained inside plm objects. This will be the subject of Subsection 3.4.

The object oriented approach to general GLS computations
The covariance matrix of errors in general gls models is too generic to fit the quasi-demeaning framework, so this method calls for a full-blown application of gls as in (5). On the other hand, this estimator relies heavily on n-asymptotics, making it theoretically most suitable for situations which forbid it computationally: e.g., "short" micropanels with thousands of individuals observed over few time periods. R has general facilities for fast matrix computation based on object orientation: particular types of matrices (symmetric, sparse, dense etc.) are assigned the relevant class and the additional information on structure is used in the computations, sometimes with dramatic effects on performance (see Bates 2004) and packages Matrix (see Bates and Maechler 2007) and SparseM (see Koenker and Ng 2007). Some optimized linear algebra routines are available in the R package kinship (see Atkinson and Therneau 2007) which exploit the particular block-diagonal and symmetric structure of V, making it possible to implement a fast and reliable full-matrix solution to problems of any practically relevant size.
The V matrix is constructed as an object of class bdsmatrix. The peculiar properties of this matrix class are exploited for efficiently storing the object in memory and then by ad hoc versions of the solve and crossprod methods, dramatically reducing computing times and memory usage. The resulting matrix is then used "the naive way" as in (5) to compute $\hat\beta$, with speed comparable to that of the demeaning solution.

Inference in the panel model
General frameworks for restrictions and linear hypotheses testing are available in the R environment. These are based on the Wald test, constructed as $\hat\beta^\top \hat V^{-1} \hat\beta$, where $\hat\beta$ and $\hat V$ are consistent estimates of β and V(β). The Wald test may be used for zero-restriction (i.e., significance) testing and, more generally, for linear hypotheses in the form $(R\hat\beta - r)^\top [R \hat V R^\top]^{-1} (R\hat\beta - r)$. To be applicable, the test functions require extractor methods for coefficients' and covariance matrix estimates to be defined for the model object to be tested. Model objects in plm all have coef() and vcov() methods and are therefore compatible with the above functions.
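For instance, a significance test via coeftest from package lmtest works out of the box on a plm object (a minimal sketch; output omitted):

R> library("lmtest")
R> gw <- plm(inv ~ value + capital, data = Grunfeld, model = "within")
R> coeftest(gw)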
In the same framework, robust inference is accomplished by substituting ("plugging in") a robust estimate of the coefficient covariance matrix into the Wald statistic formula. In the panel context, the estimator of choice is the White system estimator. This called for a flexible method for computing robust coefficient covariance matrices à la White for plm objects. A general White system estimator for panel data is:

$$\hat V_R(\beta) = (X^\top X)^{-1} \sum_{i=1}^{n} X_i^\top E_i X_i \, (X^\top X)^{-1}$$

where $E_i$ is a function of the residuals $\hat e_{it}$, t = 1, ..., T, chosen according to the relevant heteroskedasticity and correlation structure. Moreover, it turns out that the White covariance matrix calculated on the demeaned model's regressors and residuals (both part of plm objects) is a consistent estimator of the relevant model's parameters' covariance matrix, so the method is readily applicable to models estimated by random or fixed effects, first difference or pooled ols methods. Different pre-weighting schemes taken from package sandwich (Zeileis 2004) are also implemented to improve small-sample performance. Robust estimators with any combination of covariance structures and weighting schemes can be passed on to the testing functions.

Managing data and formulae
The package is now illustrated by application to some well-known examples. It is loaded using

R> library("plm")

The four datasets used are EmplUK, which was used by Arellano and Bond (1991), the Grunfeld data (Kleiber and Zeileis 2008), which is used in several econometric books, the Produc data used by Munnell (1990) and the Wages data used by Cornwell and Rupert (1988).

Data structure
As observed above, the current version of plm is capable of working with a regular data.frame without any further transformation, provided that the individual and time indexes are in the first two columns, as in all the example datasets but Wages. If this were not the case, the optional index argument would have to be passed on to the estimating and testing functions.

Data transformation
Panel data estimation requires applying different transformations to the raw series. If x is a series of length nT (where n is the number of individuals and T is the number of time periods), the transformed series $\tilde x$ is obtained as $\tilde x = Mx$, where M is a transformation matrix. Denoting by j a vector of ones of length T and by $I_n$ the identity matrix of dimension n, we get:

- the between transformation: $P = \frac{1}{T} I_n \otimes j j^\top$ returns a vector containing the individual means;
- the within transformation: $Q = I_{nT} - P$ returns a vector containing the values in deviation from the individual means;
- the first-difference transformation: $D = I_n \otimes d$, where d is the differencing matrix of dimension (T − 1, T).

Note that R's diff() and lag() functions don't compute these transformations correctly for panel data, because they are unable to identify when there is a change in individual in the data. Specific methods for pseries objects have therefore been written in order to handle panel data correctly. Functions called Between, between and Within are also provided to compute the between and the within transformations: between returns the unique values (a vector of length n), whereas Between duplicates the values and returns a vector whose length is the number of observations (nT); Within returns the deviations from the individual means.
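A minimal sketch of these tools on a pdata.frame built from the Grunfeld data (output omitted):

R> pG <- pdata.frame(Grunfeld, index = c("firm", "year"))
R> head(between(pG$inv))   # one value per firm
R> head(Between(pG$inv))   # individual means, repeated to length nT
R> head(Within(pG$inv))    # deviations from individual means
R> head(lag(pG$inv))       # panel-aware lag: NA at each firm's first year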

Formulas
There are two circumstances where standard formulas are not very useful: formulas for dynamic models and formulas for instrumental variable estimation. To deal with the first situation, we provide a class called dynformula; for the second, we use the Formula package.
Formula for dynamic models

Using the lag and diff functions to write a formula can be very cumbersome for dynamic models. A dynformula function is provided to ease the writing of such formulas. Suppose for example that one wants to estimate a model where employment depends on its own first two lags and on the second and third lags of wages and capital, capital being differenced. The expanded formula would be:

log(emp) ~ lag(log(emp), 1) + lag(log(emp), 2) + lag(log(wage), 2) +
    lag(log(wage), 3) + lag(diff(log(capital)), 2) + lag(diff(log(capital)), 3)

which can be obtained much more easily with dynformula. The arguments lag and diff are lists/vectors which can be:

- unnamed: in this case, the length of the list/vector must equal the number of variables;
- named: in this case the missing variables get the default values (no lags, no logs, no differences);
- partially named: in this case the unnamed element is the user-defined default value.
The diff vector is logical. The elements of lag are either a number or a vector of two numbers: 3 means the current value and the first three lags, c(1, 2) means the first two lags.
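A sketch of a corresponding dynformula call, passing the lag, diff and log specifications positionally (illustrative only; released versions of plm name these arguments lag.form, diff.form and log.form):

R> dynformula(emp ~ wage + capital,
+    list(c(1, 2), c(2, 3), c(2, 3)),   # lags: emp 1-2, wage 2-3, capital 2-3
+    list(FALSE, FALSE, TRUE),          # differences: capital only
+    list(TRUE, TRUE, TRUE))            # logs: all variables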

Formula for instrument variable estimation
Two-part formulas are a very convenient way to describe models estimated using instrumental variables, the first part indicating the covariates and the second part the instruments. The Formula package provides a class which enables the construction of multi-part formulas, each part being separated by a pipe sign. plm provides a pFormula object, which is a Formula with specific methods.
The two formulas below are identical: in the second one, the . refers to the previous part, which describes the covariates, and this part is then "updated". This is particularly interesting when there are only a few external instruments.
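As a hedged illustration of this equivalence (not the paper's original example), the two calls below specify the same instrument set, replacing capital by its first lag:

R> plm(inv ~ value + capital | value + lag(capital, 1),
+    data = Grunfeld, model = "random")
R> plm(inv ~ value + capital | . - capital + lag(capital, 1),
+    data = Grunfeld, model = "random")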

Model estimation
For a random effects model, the summary method gives information about the variance of the components of the errors. Fixed effects may be extracted easily using fixef. An argument type indicates how the fixed effects should be computed: in levels (type = "level", the default), in deviations from the overall mean (type = "dmean") or in deviations from the first individual (type = "dfirst"). For example:

R> fixef(grun.fe, type = "dmean")

The fixef function returns an object of class fixef. A summary method is provided, which prints the effects (in deviation from the overall intercept), their standard errors and the test of equality to the overall intercept.

Random effects estimators
As observed above, the random effects model is obtained as a linear estimation on quasi-demeaned data; the parameter of this transformation is obtained using preliminary estimations.
For example, to use the Amemiya estimator:

R> grun.amem <- plm(inv ~ value + capital, data = Grunfeld, model = "random",
+    random.method = "amemiya")

The estimation of the variance of the error components is performed by the ercomp function, which has method and effect arguments and can also be used by itself:

R> ercomp(inv ~ value + capital, data = Grunfeld, method = "amemiya",
+    effect = "twoways")

Introducing time or two-ways effects
The default behavior of plm is to introduce individual effects. Using the effect argument, one may also introduce time effects (effect = "time") or individual and time effects (effect = "twoways").
For example, to estimate a two-ways effect model for the Grunfeld data:

R> grun.tways <- plm(inv ~ value + capital, data = Grunfeld, effect = "twoways",
+    model = "random", random.method = "amemiya")
R> summary(grun.tways)

Twoways effects Random Effect Model (Amemiya's transformation)

Call:
plm(formula = inv ~ value + capital, data = Grunfeld, effect = "twoways",
    model = "random", random.method = "amemiya")

Balanced Panel: n=10, T=20, N=200

In the "effects" section of the result, the variance of the three elements of the error term and the three parameters used in the transformation are now printed. The two-ways effect model is for the moment only available for balanced panels.

Unbalanced panels
Most of the features of plm are implemented for unbalanced panels as well, with some limitations: the random two-ways effect model is not implemented, and the only estimator of the variance of the error components available is the one proposed by Swamy and Arora (1972). The following example uses the data of Harrison and Rubinfeld (1978) to estimate a hedonic housing prices function; it is reproduced in Baltagi (2001).

Instrumental variable estimators
All of the models presented above may be estimated using instrumental variables. The instruments are specified at the end of the formula, after a | sign, as illustrated in the following example from Baltagi (2001).
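A sketch of the syntax on the Grunfeld data (a generic illustration rather than Baltagi's original specification; output omitted):

R> iv <- plm(inv ~ value + capital | lag(value, 1) + capital,
+    data = Grunfeld, model = "random")
R> summary(iv)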

Variable coefficients model
The pvcm function enables the estimation of variable coefficients models. Time or individual effects are introduced if effect is set to "time" or "individual" (the default value).
Coefficients are assumed to be fixed if model = "within" and random if model = "random". In the first case, a separate model is estimated for each individual (or time period). In the second case, the Swamy (1970) model is estimated: a generalized least squares model which uses the results of the previous one. Denoting by $\hat\beta_i$ the vectors of coefficients obtained for each individual, we get:

$$\hat\beta = \left(\sum_{i=1}^n \left(\hat\Delta + \hat V_i\right)^{-1}\right)^{-1} \sum_{i=1}^n \left(\hat\Delta + \hat V_i\right)^{-1} \hat\beta_i$$

where $\hat V_i = \hat\sigma_i^2 (X_i^\top X_i)^{-1}$, $\hat\sigma_i^2$ is the unbiased estimator of the variance of the errors for individual i obtained from the preliminary estimation, and:

$$\hat\Delta = \frac{1}{n-1} \sum_{i=1}^n \left(\hat\beta_i - \bar{\hat\beta}\right)\left(\hat\beta_i - \bar{\hat\beta}\right)^\top - \frac{1}{n} \sum_{i=1}^n \hat V_i$$

If this matrix is not positive-definite, the second term is dropped.
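A minimal sketch of both variants on the Grunfeld data (output omitted):

R> vc.fixed <- pvcm(inv ~ value + capital, data = Grunfeld, model = "within")
R> vc.random <- pvcm(inv ~ value + capital, data = Grunfeld, model = "random")
R> summary(vc.random)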

Generalized method of moments estimator
The generalized method of moments is mainly used in panel data econometrics to estimate dynamic models (Arellano and Bond 1991; Holtz-Eakin, Newey, and Rosen 1988).
The model is first differenced to get rid of the individual effect:

$$\Delta y_{it} = \rho \Delta y_{i,t-1} + \beta^\top \Delta x_{it} + \Delta \varepsilon_{it}$$

Least squares are inconsistent because $\Delta\varepsilon_{it}$ is correlated with $\Delta y_{i,t-1}$. $y_{i,t-2}$ is a valid, but weak, instrument (see Anderson and Hsiao 1981). The gmm estimator uses the fact that the number of valid instruments grows with t. For individual i, the matrix of instruments is then:

$$W_i = \begin{pmatrix}
y_{i1} & 0 & 0 & 0 & 0 & 0 & \dots & 0 & x_{i3}^\top \\
0 & y_{i1} & y_{i2} & 0 & 0 & 0 & \dots & 0 & x_{i4}^\top \\
0 & 0 & 0 & y_{i1} & y_{i2} & y_{i3} & \dots & 0 & x_{i5}^\top \\
\vdots & & & & & & \ddots & & \vdots
\end{pmatrix}$$

The moment conditions are $\sum_{i=1}^n W_i^\top e_i(\beta)$, where $e_i(\beta)$ is the vector of residuals for individual i. The gmm estimator minimizes

$$\left(\sum_{i=1}^n e_i(\beta)^\top W_i\right) A \left(\sum_{i=1}^n W_i^\top e_i(\beta)\right)$$

where A is the weighting matrix of the moments.
One-step estimators are computed using a known weighting matrix. For the model in first differences, one uses

$$A^{(1)} = \left(\sum_{i=1}^n W_i^\top H^{(1)} W_i\right)^{-1}$$

where $H^{(1)}$ is the known covariance structure of the differenced errors (2 on the main diagonal, −1 on the first subdiagonals, 0 elsewhere). Two-step estimators are obtained using $A^{(2)} = \left(\sum_{i=1}^n W_i^\top H_i^{(2)} W_i\right)^{-1}$ with $H_i^{(2)} = \hat e_i^{(1)} \hat e_i^{(1)\top}$, where the $\hat e_i^{(1)}$ are the residuals of the one-step estimate.
The gmm estimator is provided by the pgmm function. Its main argument is a dynformula which describes the variables of the model and the lag structure.
The effect argument is either NULL, "individual" (the default), or "twoways". In the first case, the model is estimated in levels. In the second case, the model is estimated in first differences to get rid of the individual effects. In the last case, the model is estimated in first differences and time dummies are included.
In a gmm estimation, there are "normal" instruments and "gmm" instruments. gmm instruments are indicated with the gmm.inst argument (a one-sided formula) and the lags with the lag.gmm argument. By default, all the variables of the model that are not used as gmm instruments are used as normal instruments, with the same lag structure.
The complete list of instruments can also be specified with the argument instruments, which should be a one-sided formula (or a dynformula).
The model argument specifies whether a one-step or a two-steps model is required ("onestep" or "twosteps").
The following example is from Arellano and Bond (1991).
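A sketch of such a call on the EmplUK data, combining a dynformula with the gmm arguments introduced above (an illustrative specification, not necessarily the paper's exact one; output omitted):

R> emp.gmm <- pgmm(dynformula(log(emp) ~ log(wage) + log(capital), list(2, 1, 1)),
+    data = EmplUK, effect = "twoways", model = "twosteps",
+    gmm.inst = ~ log(emp), lag.gmm = list(c(2, 99)))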

General FGLS models
General fgls estimators are based on a two-step estimation process: first an ols model is estimated, then its residuals $\hat u_{it}$ are used to estimate an error covariance matrix more general than the random effects one, for use in a feasible-gls analysis. Formally, the estimated error covariance matrix is $\hat V = I_n \otimes \hat\Omega$, with

$$\hat\Omega = \frac{1}{n} \sum_{i=1}^n \hat u_i \hat u_i^\top$$

(see Wooldridge 2002, 10.4.3 and 10.5.5).
This framework allows the error covariance structure inside every group (if effect = "individual") of observations to be fully unrestricted and is therefore robust against any type of intragroup heteroskedasticity and serial correlation. This structure is, on the other hand, assumed identical across groups, so general fgls is inefficient under groupwise heteroskedasticity. Cross-sectional correlation is excluded a priori.
Moreover, the number of variance parameters to be estimated with N = n × T data points is T(T + 1)/2, which makes these estimators particularly suited to situations where n ≫ T, as e.g. in labour or household income surveys, while problematic for "long" panels, where $\hat V$ tends to become singular and standard errors therefore become biased downwards.
In a pooled time series context (effect = "time"), symmetrically, this estimator is able to account for arbitrary cross-sectional correlation, provided that the latter is time-invariant (see Greene 2003, 13.9.1-2, pp. 321-2). In this case serial correlation has to be assumed away, and the estimator is consistent with respect to the time dimension, keeping n fixed.
The function pggls estimates general fgls models, with either fixed or "random" effects.
The "random effect" general fgls is estimated by: R> zz <-pggls(log(emp) ~log(wage) + log(capital), data = EmplUK, + model = "random") R> summary(zz) The pggls function is similar to plm in many respects.An exception is that the estimate of the group covariance matrix of errors (zz$sigma, a matrix, not shown) is reported in the model objects instead of the usual estimated variances of the two error components.

Hausman test
phtest computes the Hausman test, which is based on the comparison of two sets of estimates (see Hausman 1978). Its main arguments are two panelmodel objects or a formula. A classical application of the Hausman test for panel data is to compare the fixed and the random effects models:

R> gw <- plm(inv ~ value + capital, data = Grunfeld, model = "within")
R> gr <- plm(inv ~ value + capital, data = Grunfeld, model = "random")
R> phtest(gw, gr)

        Hausman Test

data:  inv ~ value + capital
chisq = 2.3304, df = 2, p-value = 0.3119
alternative hypothesis: one model is inconsistent

Tests of serial correlation
A model with individual effects has composite errors that are serially correlated by definition. The presence of the time-invariant error component gives rise to serial correlation which does not die out over time, so standard tests applied on pooled data always end up rejecting the null of spherical residuals. There may also be serial correlation of the "usual" kind in the idiosyncratic error terms, e.g. as an AR(1) process. By "testing for serial correlation" we mean testing for this latter kind of dependence.
For these reasons, the subjects of testing for individual error components and for serially correlated idiosyncratic errors are closely related. In particular, simple (marginal) tests for one direction of departure from the hypothesis of spherical errors usually have power against the other one: in case it is present, they are substantially biased towards rejection. Joint tests are correctly sized and have power against both directions, but usually do not give any information about which one actually caused rejection. Conditional tests for serial correlation that take into account the error components are correctly sized under presence of both departures from sphericity and have power only against the alternative of interest. While most powerful if correctly specified, the latter, being based on the likelihood framework, are crucially dependent on normality and homoskedasticity of the errors.
In plm we provide a number of joint, marginal and conditional ml-based tests, plus some semiparametric alternatives which are robust against heteroskedasticity and free from distributional assumptions.

Unobserved effects test
The unobserved effects test à la Wooldridge (see Wooldridge 2002, 10.4.4) is a semiparametric test for the null hypothesis that $\sigma^2_\mu = 0$, i.e. that there are no unobserved effects in the residuals. Given that under the null the covariance matrix of the residuals for each individual is diagonal, the test statistic is based on the average of the elements in the upper (or lower) triangle of its estimate, diagonal excluded: $n^{-1/2} \sum_{i=1}^n \sum_{t=1}^{T-1} \sum_{s=t+1}^{T} \hat u_{it} \hat u_{is}$ (where the $\hat u$ are the pooled ols residuals), which must be "statistically close" to zero under the null, scaled by its standard deviation:

$$W = \frac{\sum_{i=1}^n \sum_{t=1}^{T-1} \sum_{s=t+1}^{T} \hat u_{it} \hat u_{is}}{\left[\sum_{i=1}^n \left(\sum_{t=1}^{T-1} \sum_{s=t+1}^{T} \hat u_{it} \hat u_{is}\right)^2\right]^{1/2}}$$

This test is (n-)asymptotically distributed as a standard Normal regardless of the distribution of the errors; nor does it rely on homoskedasticity.
It has power both against the standard random effects specification, where the unobserved effects are constant within every group, as well as against any kind of serial correlation.As such, it "nests" both random effects and serial correlation tests, trading some power against more specific alternatives in exchange for robustness.
While not rejecting the null favours the use of pooled ols, rejection may follow from serial correlation of different kinds, and in particular, quoting Wooldridge (2002), "should not be interpreted as implying that the random effects error structure must be true".
Below, the test is applied to the data and model in Munnell (1990):
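The relevant function is plm's pwtest; a minimal sketch of the call (output omitted):

R> pwtest(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc)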

Locally robust tests for serial correlation or random effects
The presence of random effects may affect tests for residual serial correlation, and vice versa. One solution is to use a joint test, which has power against both alternatives. A joint LM test for random effects and serial correlation under normality and homoskedasticity of the idiosyncratic errors has been derived by Baltagi and Li (1991) and Baltagi and Li (1995) and is implemented as an option in pbsytest:

R> pbsytest(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc,
+    test = "j")

On the other hand, the statistical properties of these "locally corrected" tests are inferior to those of the non-corrected counterparts when the latter are correctly specified. If there is no serial correlation, then the optimal test for random effects is the likelihood-based LM test of Breusch and Godfrey (with refinements by Honda, see plmtest), while if there are no random effects the optimal test for serial correlation is, again, Breusch-Godfrey's test. If the presence of a random effect is taken for granted, then the optimal test for serial correlation is the likelihood-based conditional LM test of Baltagi and Li (1995) (see pbltest).
The serial correlation version is the default:

R> pbsytest(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc)

Conditional LM test for AR(1) or MA(1) errors under random effects

Baltagi and Li (1991) and Baltagi and Li (1995) derive a Lagrange multiplier test for serial correlation in the idiosyncratic component of the errors under (normal, heteroskedastic) random effects. Under the null of serially uncorrelated errors, the test turns out to be identical for both the alternative of AR(1) and MA(1) processes. One- and two-sided versions are provided, the one-sided having power against positive serial correlation only. The two-sided version is the default, while for the other one must set the alternative option to "onesided":

R> pbltest(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc,
+    alternative = "onesided")

As usual, the LM test statistic is based on residuals from the maximum likelihood estimate of the restricted model (random effects with serially uncorrelated errors). In this case, though, the restricted model cannot be estimated by ols any more, so the testing function depends on lme() in the nlme package for estimation of a random effects model by maximum likelihood. For this reason, the test is applicable only to balanced panels.
No test has been implemented to date for the symmetric hypothesis of no random effects in a model with errors following an AR(1) process, but an asymptotically equivalent likelihood ratio test is available in the nlme package (see Section 7).

General serial correlation tests
A general testing procedure for serial correlation in fixed effects (fe), random effects (re) and pooled-ols panel models alike can be based on considerations in Wooldridge (2002, 10.7.2).
Recall that plm model objects are the result of ols estimation performed on "demeaned" data, where, in the case of individual effects (else symmetric), this means time-demeaning for the fe (within) model, quasi-time-demeaning for the re (random) model and no demeaning at all (the original data) for the pooled ols (pooling) model (see Section 3).
For the random effects model, Wooldridge (2002) observes that under the null of homoskedasticity and no serial correlation in the idiosyncratic errors, the residuals from the quasi-demeaned regression must be spherical as well. Otherwise, as the individual effects are wiped out in the demeaning, any remaining serial correlation must be due to the idiosyncratic component. Hence, a simple way of testing for serial correlation is to apply a standard serial correlation test to the quasi-demeaned model. The same applies in a pooled model, w.r.t. the original data.
The fe case needs some qualification. It is well known that if the original model's errors are uncorrelated, then the fe residuals are negatively serially correlated, with $cor(\hat u_{it}, \hat u_{is}) = -1/(T-1)$ for each t ≠ s (see Wooldridge 2002, 10.5.4). This correlation clearly dies out as T increases, so this kind of AR test is applicable to within model objects only for T "sufficiently large". Conversely, in short panels the test gets severely biased towards rejection (or, as the induced correlation is negative, towards acceptance in the case of the one-sided DW test with alternative = "greater"). See below for a serial correlation test applicable to "short" fe panel models.
plm objects retain the "demeaned" data, so the procedure is straightforward for them. The wrapper functions pbgtest and pdwtest re-estimate the relevant quasi-demeaned model by ols and apply, respectively, standard Breusch-Godfrey and Durbin-Watson tests from package lmtest:

R> grun.fe <- plm(inv ~ value + capital, data = Grunfeld, model = "within")
R> pbgtest(grun.fe, order = 2)

        Breusch-Godfrey/Wooldridge test for serial correlation in panel models

data:  inv ~ value + capital
chisq = 42.5867, df = 2, p-value = 5.655e-10
alternative hypothesis: serial correlation in idiosyncratic errors

The tests share the features of their ols counterparts; in particular, pbgtest allows testing for higher-order serial correlation, which might turn out useful, e.g., on quarterly data. Analogously, from the software point of view, as the functions are simple wrappers around bgtest and dwtest, all arguments of the latter two apply and may be passed on through the '...' operator.
Wooldridge's test for serial correlation in "short" FE panels

For the reasons reported above, under the null of no serial correlation in the errors, the residuals of a fe model must be negatively serially correlated, with $cor(\hat\varepsilon_{it}, \hat\varepsilon_{is}) = -1/(T-1)$ for each t ≠ s. Wooldridge suggests basing a test for this null hypothesis on a pooled regression of fe residuals on themselves, lagged one period:

$$\hat\varepsilon_{i,t} = \alpha + \delta \hat\varepsilon_{i,t-1} + \eta_{i,t}$$

Rejecting the restriction δ = −1/(T − 1) makes us conclude against the original null of no serial correlation.
The building blocks available in plm, together with the function linear.hypothesis() in package car, make it easy to construct a function carrying out this procedure: first the fe model is estimated and the residuals retrieved, then they are lagged and a pooling AR(1) model is estimated. The test statistic is obtained by applying linear.hypothesis() to the latter model to test the above restriction on δ, supplying a heteroskedasticity- and autocorrelation-consistent covariance matrix (vcovHC with the appropriate options, in particular method = "arellano").
R> pwfdtest(log(emp) ~ log(wage) + log(capital), data = EmplUK, h0 = "fe")

        Wooldridge's first-difference test for serial correlation in panels

data:  plm.model
chisq = 131.5482, p-value < 2.2e-16
alternative hypothesis: serial correlation in original errors

Not rejecting one of the two nulls is evidence in favour of using the estimator corresponding to h0. Should the truth lie in the middle (both rejected), whichever estimator is chosen will have serially correlated errors: it will therefore be advisable to use the autocorrelation-robust covariance estimators from Subsection 6.6 for inference.

Tests for cross-sectional dependence
Next to the more familiar issue of serial correlation, in recent years a growing body of literature has been dealing with cross-sectional dependence (henceforth: xsd) in panels, which can arise, e.g., if individuals respond to common shocks (as in the literature on factor models) or if spatial diffusion processes are present, relating individuals in a way depending on a measure of distance (spatial models).
The subject is huge, and here we touch only on some general aspects of misspecification testing and valid inference. If xsd is present, the consequence is, at a minimum, inefficiency of the usual estimators and invalid inference when using the standard covariance matrix. The plan is to have in plm both misspecification tests to detect xsd and robust covariance matrices to perform valid inference in its presence, as in the serial dependence case. For now, though, only misspecification tests are included.

CD and LM-type tests for global cross-sectional dependence
The function pcdtest implements a family of xsd tests which can be applied in different settings, ranging from those where T grows large with n fixed to "short" panels with a big n dimension and a few time periods. All are based on (transformations of) the product-moment correlation coefficient of a model's residuals, defined as

$$\hat\rho_{ij} = \frac{\sum_{t=1}^T \hat u_{it} \hat u_{jt}}{\left(\sum_{t=1}^T \hat u_{it}^2\right)^{1/2} \left(\sum_{t=1}^T \hat u_{jt}^2\right)^{1/2}}$$

i.e., as averages over the time dimension of pairwise correlation coefficients for each pair of cross-sectional units. The Breusch-Pagan (Breusch and Pagan 1980) LM test, based on the squares of the $\hat\rho_{ij}$, is valid for T → ∞ with n fixed; it is defined as

$$LM = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} T_{ij} \, \hat\rho_{ij}^2$$

where, in the case of an unbalanced panel, only pairwise complete observations are considered and $T_{ij} = \min(T_i, T_j)$, with $T_i$ being the number of observations for individual i; if the panel is balanced, $T_{ij} = T$ for each i, j. The test is distributed as $\chi^2_{n(n-1)/2}$ and is inappropriate whenever the n dimension is "large". A scaled version, applicable also if T → ∞ and then n → ∞ (as in some pooled time series contexts), is defined as

$$SCLM = \sqrt{\frac{1}{n(n-1)}} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \left(T_{ij} \, \hat\rho_{ij}^2 - 1\right)$$

and distributed as a standard Normal.
Pesaran's (Pesaran 2004) CD test, based on the $\hat\rho_{ij}$ without squaring (and also distributed as a standard Normal), is appropriate both in n- and in T-asymptotic settings. It has remarkable properties in samples of any practically relevant size and is robust to a variety of settings. Its only big drawback is that it loses power against the alternative of cross-sectional dependence if the latter is due to a factor structure with factor loadings averaging zero, i.e. when some units react positively to common shocks and others negatively.
The default version of the test is "cd". These tests are originally meant to use the residuals of separate estimation of one time-series regression for each cross-sectional unit, so this is the default behaviour of pcdtest.
R> pcdtest(inv ~ value + capital, data = Grunfeld)

        Pesaran CD test for cross-sectional dependence in panels

data:  formula
z = 5.3401, p-value = 9.292e-08
alternative hypothesis: cross-sectional dependence

If a different model specification (within, random, ...) is assumed consistent, one can resort to its residuals for testing by specifying the relevant model type. The main argument of this function may be either a model of class panelmodel or a formula and a data.frame; in the second case, unless model is set to NULL, all the usual parameters relative to the estimation of a plm model may be passed on. The test is compatible with any consistent panelmodel for the data at hand, with any specification of effect. E.g., specifying effect = "time" or effect = "twoways" allows testing for residual cross-sectional dependence after the introduction of time fixed effects to account for common shocks.
R> pcdtest(inv ~ value + capital, data = Grunfeld, model = "within")

        Pesaran CD test for cross-sectional dependence in panels

data:  formula
z = 4.6612, p-value = 3.144e-06
alternative hypothesis: cross-sectional dependence

If the time dimension is insufficient and model = NULL, the function defaults to estimation of a within model and issues a warning.

CD(p) test for local cross-sectional dependence
A local variant of the CD test, called the CD(p) test (Pesaran 2004), takes into account an appropriate subset of neighbouring cross-sectional units to check the null of no xsd against the alternative of local xsd, i.e. dependence between neighbours only. To do so, the pairs of neighbouring units are selected by means of a binary proximity matrix like those used in spatial models. In the original paper, a regular ordering of observations is assumed, so that the m-th cross-sectional observation is a neighbour to the (m − 1)-th and to the (m + 1)-th.
Extending the CD(p) test to irregular lattices, we employ the binary proximity matrix as a selector, discarding the correlation coefficients relative to pairs of observations that are not neighbours when computing the CD statistic. The test is then defined as

$$CD = \sqrt{\frac{1}{\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} [w(p)]_{ij}}} \left(\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} [w(p)]_{ij} \sqrt{T_{ij}} \, \hat\rho_{ij}\right)$$

where $[w(p)]_{ij}$ is the (i, j)-th element of the p-th order proximity matrix, so that if h, k are not neighbours, $[w(p)]_{hk} = 0$ and $\hat\rho_{hk}$ gets "killed"; this is easily seen to reduce to formula (14) in Pesaran (2004) for the special case considered in that paper. The same selection can be applied to the LM and SCLM tests.
Therefore, the local version of either test can be computed by supplying an n × n matrix (of any kind coercible to logical), providing information on whether any pair of observations are neighbours or not, to the w argument. If w is supplied, only neighbouring pairs will be used in computing the test; else, w defaults to NULL and all observations are used. The matrix need not actually be binary, so commonly used "row-standardized" matrices can be employed as well: it is enough that neighbouring pairs correspond to nonzero elements in w.
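A minimal sketch on the Grunfeld data, using a hypothetical first-order proximity matrix that treats the ten firms as ordered on a line (output omitted):

R> w <- matrix(0, 10, 10)
R> w[abs(row(w) - col(w)) == 1] <- 1   # hypothetical: firm m neighbours m-1 and m+1
R> pcdtest(inv ~ value + capital, data = Grunfeld, w = w)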

Robust covariance matrix estimation
Robust estimators of the covariance matrix of coefficients are provided, mostly for use in Wald-type tests. vcovHC estimates three "flavours" of White's heteroskedasticity-consistent covariance matrix (known as the sandwich estimator). Interestingly, in the context of panel data the most general version also proves consistent against serial correlation.
All types assume no correlation between the errors of different groups while allowing for heteroskedasticity across groups, so that the full covariance matrix of errors is $V = I_n \otimes \Omega_i$; i = 1, ..., n. As for the intragroup error covariance matrix of every single group of observations, "white1" allows for general heteroskedasticity but no serial correlation, i.e.

$$\hat\Omega_i = \mathrm{diag}\left(\hat\sigma_{i1}^2, \dots, \hat\sigma_{iT}^2\right)$$

while "white2" is "white1" restricted to a common variance inside every group, estimated as $\hat\sigma_i^2 = \sum_{t=1}^T \hat e_{it}^2 / T$ (see Greene 2003, 13.7.1-2 and Wooldridge 2002, 10.7.2); "arellano" (see ibid. and the original reference, Arellano 1987) allows a fully general structure w.r.t. heteroskedasticity and serial correlation:

$$\hat\Omega_i = \hat e_i \hat e_i^\top$$

The latter is, as already observed, consistent w.r.t. timewise correlation of the errors but, conversely, unlike the white1 and white2 methods, it relies on large-n asymptotics with small T.
The fixed effects case, as already observed in Section 6.4 on serial correlation, is complicated by the fact that the demeaning induces serial correlation in the errors. The original White estimator (white1) turns out to be inconsistent for fixed T as n grows, so in this case it is advisable to use the arellano version (see Stock and Watson 2006).
The errors may be weighted according to the schemes proposed by MacKinnon and White (1985) and Cribari-Neto (2004) to improve small-sample performance.
The main use of vcovHC is together with testing functions from the lmtest and car packages. These typically allow passing the vcov parameter either as a matrix or as a function (see Zeileis 2004). A specific vcovHC method for pgmm objects is also provided, which implements the robust covariance matrix proposed by Windmeijer (2005) for generalized method of moments estimators.
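A minimal sketch of both calling styles (output omitted):

R> library("lmtest")
R> re <- plm(inv ~ value + capital, data = Grunfeld, model = "random")
R> coeftest(re, vcov = vcovHC)   # vcov passed as a function
R> coeftest(re, vcov = vcovHC(re, method = "arellano", type = "HC3"))   # as a matrix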

plm versus nlme/lme4
The models termed panel by econometricians have counterparts in the statistics literature on mixed models (or hierarchical models, or models for longitudinal data), although there are both differences in jargon and more substantial distinctions. This language inconsistency between the two communities, together with the more complicated general structure of statistical models for longitudinal data and the associated notation in the software, is likely to scare some practicing econometricians away from some potentially useful features of the R environment, so it may be useful to provide here a brief reconciliation between the typical panel data specifications used in econometrics and the general framework used in statistics for mixed models (this discussion does not consider gmm models; one of the basic reasons for econometricians not to choose maximum likelihood methods in estimation is that the strict exogeneity of regressors assumption required for consistency of the ml models reported in the following is often inappropriate in economic settings). R is particularly strong on mixed models' estimation, thanks to the long-standing nlme package (see Pinheiro et al. 2007) and the more recent lme4 package, based on S4 classes (see Bates 2007); the standard reference on the subject of mixed models in S/R is Pinheiro and Bates (2000). In the following we will refer to the more established nlme to give some examples of "econometric" panel models that can be estimated in a likelihood framework, also including some likelihood ratio tests. Some of them are not feasible in plm and make a useful complement to the econometric "toolbox" available in R.

Fundamental differences between the two approaches
Econometrics deals mostly with non-experimental data. Great emphasis is put on specification procedures and misspecification testing. Model specifications tend therefore to be very simple, while great attention is paid to the issues of endogeneity of the regressors, dependence structures in the errors and robustness of the estimators under deviations from normality. The preferred approach is often semi- or non-parametric, and heteroskedasticity-consistent techniques are becoming standard practice both in estimation and testing.
For all these reasons, although the maximum likelihood framework is important in testing (Lagrange multiplier tests based on the likelihood principle are suitable for testing against more general alternatives on the basis of a maintained model with spherical residuals, and therefore find application in testing for departures from the classical hypotheses on the error term; the seminal reference is Breusch and Pagan 1980) and sometimes used in estimation as well, panel model estimation in econometrics is mostly accomplished in the generalized least squares framework based on Aitken's Theorem and, when possible, in its special case ols, which are free from distributional assumptions (although these kick in at the diagnostic testing stage). On the contrary, longitudinal data models in nlme and lme4 are estimated by (restricted or unrestricted) maximum likelihood. While under normality, homoskedasticity and no serial correlation of the errors ols is also the maximum likelihood estimator, in all other cases there are important differences.
The econometric gls approach has closed-form analytical solutions computable by standard linear algebra and, although the latter can sometimes get computationally heavy on the machine, the expressions for the estimators are usually rather simple. ml estimation of longitudinal models, on the contrary, is based on numerical optimization of nonlinear functions without closed-form solutions, and is thus dependent on approximations and convergence criteria. For example, the "gls" functionality in nlme is rather different from its "econometric" counterpart. "Feasible gls" estimation in plm is based on a single two-step procedure, in which an inefficient but consistent estimation method (typically ols) is employed first in order to obtain a consistent estimate of the errors' covariance matrix, to be used in gls at the second step; by contrast, "gls" estimators in nlme are based on iteration until convergence of a two-step optimization of the relevant likelihood.

Some false friends
The fixed/random effects terminology in econometrics is often recognized to be misleading, as both kinds of effects are treated as random variates in modern econometrics (see e.g. Wooldridge 2002, 10.2.1). It has been recognized since Mundlak's classic paper (Mundlak 1978) that the fundamental issue is whether the unobserved effects are correlated with the regressors or not. If they are not, they can safely be left in the error term, and the serial correlation they induce is cared for by means of appropriate gls transformations. In the case of correlation, on the contrary, "fixed effects" methods such as least squares dummy variables or time-demeaning are needed, which explicitly, although inconsistently, estimate a group- (or time-) invariant additional parameter for each group (or time period).
Thus, from the point of view of model specification, having fixed effects in an econometric model has the meaning of allowing the intercept to vary with group, or time, or both, while the other parameters are generally still assumed to be homogeneous. Having random effects means having a group- (or time-, or both) specific component in the error term.
In the mixed models literature, on the contrary, fixed effect indicates a parameter that is assumed constant, while random effects are parameters that vary randomly around zero according to a joint multivariate Normal distribution.
So, the fe model in econometrics has no counterpart in the mixed models framework, unless one reduces it to ols on a specification with one dummy for each group (often termed the least squares dummy variables, or lsdv, model). The re model is instead a special case of mixed model where only the intercept is specified as a random effect, while the "random" type variable coefficients model can be seen as one that has the same regressors in the fixed and random sets. The unrestricted generalized least squares can in turn be seen, in the nlme framework, as a standard linear model with a general error covariance structure within the groups and errors uncorrelated across groups.

A common taxonomy
To reconcile the two terminologies, in the following we report the specification of the panel models in plm according to the general expression of a mixed model in Laird-Ware form (see the web appendix to Fox 2002) and the nlme estimation commands for maximum likelihood estimation of an equivalent specification.
The Laird-Ware representation for mixed models

A general representation for the linear mixed effects model is given in Laird and Ware (1982):

$$y_{ij} = \beta_1 x_{1ij} + \dots + \beta_p x_{pij} + b_{i1} z_{1ij} + \dots + b_{iq} z_{qij} + \varepsilon_{ij}$$
$$b_{ik} \sim N(0, \psi_k^2), \qquad Cov(b_{ik}, b_{ik'}) = \psi_{kk'}$$
$$\varepsilon_{ij} \sim N(0, \sigma^2 \lambda_{ijj}), \qquad Cov(\varepsilon_{ij}, \varepsilon_{ij'}) = \sigma^2 \lambda_{ijj'}$$

where the $x_1, \dots, x_p$ are the fixed effects regressors and the $z_1, \dots, z_q$ are the random effects regressors, assumed to be normally distributed across groups. The covariance of the random effects coefficients, $\psi_{kk'}$, is assumed constant across groups, and the covariances between the errors in group i, $\sigma^2 \lambda_{ijj'}$, are described by the term $\lambda_{ijj'}$ representing the correlation structure of the errors within each group (e.g., serial correlation over time), scaled by the common error variance $\sigma^2$.

Pooling and Within
The pooling specification in plm is equivalent to a classical linear model (i.e., no random effects regressor and spherical errors: $b_{iq} = 0 \; \forall i, q$; $\lambda_{ijj} = \sigma^2$ for j = j', 0 else). The within one is the same, with the regressors' set augmented by n − 1 group dummies. There is no point in using nlme, as the parameters can be estimated by ols, which is also ml.

Random effects
In the Laird and Ware notation, the re specification is a model with only one random effects regressor: the intercept. Formally, z_{1ij} = 1 ∀ i, j; z_{qij} = 0 ∀ i, ∀ j, ∀ q ≠ 1; and λ_{ijj'} = 1 for j = j', 0 else. The composite error is therefore u_{ij} = 1 · b_{i1} + ε_{ij}. Below we report the coefficients of Grunfeld's model estimated by gls and then by ml:

R> reGLS <- plm(inv ~ value + capital, data = Grunfeld, model = "random")
R> reML <- lme(inv ~ value + capital, data = Grunfeld, random = ~ 1 | firm)
R> coef(reGLS)

Variable coefficients, "random"

Swamy's variable coefficients model (Swamy 1970) has coefficients varying randomly (and independently of each other) around a set of fixed values, so the equivalent specification is z_q = x_q ∀ q, i.e., the fixed effects and the random effects regressors are the same, with random coefficients covariance ψ = σ²_µ I (i.e., the random coefficients are mutually uncorrelated) and λ_{ijj'} = 1 for j = j', 0 else. Estimation of a mixed model with random coefficients on all regressors is rather demanding from the computational side; some models from our examples fail to converge. The example below is estimated on the Grunfeld data and model with time effects:
R> vcm <- pvcm(inv ~ value + capital, data = Grunfeld, model = "random", effect = "time")
R> vcmML <- lme(inv ~ value + capital, data = Grunfeld, random = ~ value + capital | year)

Variable coefficients, "within"
This specification actually entails the separate estimation of T different standard linear models, one for each time period in the data, so the estimation approach is the same: ols. In nlme this is done by creating an lmList object, so that the two models below are equivalent (output suppressed):

R> vcmf <- pvcm(inv ~ value + capital, data = Grunfeld, model = "within", effect = "time")
R> vcmfML <- lmList(inv ~ value + capital | year, data = Grunfeld)
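The equivalence can be checked by inspecting the period-wise coefficients from the two fits (a sketch, reusing the objects just created):

R> coef(vcmfML)   # one row of coefficients per year
R> summary(vcmf)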

Unrestricted fgls
The general, or unrestricted, feasible gls (pggls in the plm nomenclature) is equivalent to a model with no random effects regressors (b_{iq} = 0 ∀ i, q) and an error covariance structure which is unrestricted within groups apart from the usual requirements. The nlme function for estimating such models, with correlation in the errors but no random effects, is gls().
This very general serial correlation and heteroskedasticity structure is not estimable for the original Grunfeld data, which have more time periods than firms; therefore we restrict them to firms 4 to 6. The within case is analogous, with the regressors' set augmented by n − 1 group dummies.
R> sGrunfeld <- Grunfeld[Grunfeld$firm %in% 4:6, ]
R> ggls <- pggls(inv ~ value + capital, data = sGrunfeld, model = "random")
R> gglsML <- gls(inv ~ value + capital, data = sGrunfeld,
+    correlation = corSymm(form = ~ 1 | firm))  # grouping formula is our reconstruction: unrestricted correlation within each firm
Some useful "econometric" models in nlme

Finally, amongst the many possible specifications estimable with nlme, we report a couple of cases that might be especially interesting to applied econometricians.

AR(1) pooling or random effects panel
Linear models with groupwise structures of time-dependence may be fitted by gls(), specifying the correlation structure in the correlation option:

R> Grunfeld$year <- as.numeric(as.character(Grunfeld$year))
R> lmAR1ML <- gls(inv ~ value + capital, data = Grunfeld,
+    correlation = corAR1(0, form = ~ year | firm))

Analogously, the random effects panel with, e.g., AR(1) errors (see Baltagi 2001, chap. 5), which is a very common specification in econometrics, may be fit by lme(), specifying an additional random intercept:

R> reAR1ML <- lme(inv ~ value + capital, data = Grunfeld, random = ~ 1 | firm,
+    correlation = corAR1(0, form = ~ year | firm))

The regressors' coefficients may be retrieved this way:

R> summary(reAR1ML)$coef$fixed

Significance statistics for the regressors' coefficients are to be found in the usual summary object, while to get a significance test of the serial correlation coefficient one can do a likelihood ratio test, as shown in the following.
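The estimated serial correlation coefficient, in turn, can be extracted from the fitted correlation structure (a sketch using standard nlme accessors on the object above; Phi is nlme's label for the AR(1) parameter):

R> coef(reAR1ML$modelStruct$corStruct, unconstrained = FALSE)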
An LR test for serial correlation and one for random effects

A likelihood ratio test for serial correlation in the idiosyncratic residuals can be done as a nested models test, by anova(), comparing the model with spherical idiosyncratic residuals with the more general alternative featuring AR(1) residuals. The test takes the form of a zero restriction test on the autoregressive parameter.
This can be done on pooled or random effects models alike. First we report the simpler case.
We already estimated the pooling AR(1) model above. The gls model without correlation in the residuals is the same as ols, and one could well use lm() for the restricted model. Here we estimate it by gls():
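A minimal sketch of this comparison (lmML is our name for the restricted fit; anova() on the two nested fits yields the likelihood ratio test):

R> lmML <- gls(inv ~ value + capital, data = Grunfeld)
R> anova(lmML, lmAR1ML)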
The AR(1) test on the random effects model is done in much the same way, using the random effects model objects estimated above. The random effects, AR(1) errors model in turn nests the AR(1) pooling model; therefore a likelihood ratio test for random effects sub AR(1) errors may be carried out, again, by comparing the two autoregressive specifications, whence we see that the Grunfeld model specification doesn't seem to need any random effects once we control for serial correlation in the data.
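In code, the two comparisons might read as follows (a sketch reusing the reML, reAR1ML and lmAR1ML objects estimated above):

R> anova(reML, reAR1ML)     # LR test for AR(1) serial correlation in the random effects model
R> anova(lmAR1ML, reAR1ML)  # LR test for random effects, given AR(1) errors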

Conclusions
With plm we aim at providing a comprehensive package containing the standard functionalities that are needed for the management and the econometric analysis of panel data. In particular, we provide: functions for data transformation; estimators for pooled, random and fixed effects static panel models and variable coefficients models; general gls for general covariance structures; generalized method of moments estimators for dynamic panels; and specification and diagnostic tests. Instrumental variables estimation is supported. Most estimators allow working with unbalanced panels. While among the different approaches to longitudinal data analysis we take the perspective of the econometrician, the syntax is consistent with the basic linear modeling tools, like the lm function.
On the input side, formula and data arguments are used to specify the model to be estimated. Special functions are provided to make writing formulas easier, and the structure of the data is indicated with an index argument.
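For example (a sketch; the index argument names the individual and time identifiers in the data):

R> plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")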
On the output side, the model objects (of the new class panelmodel) are compatible with the general restriction testing frameworks of packages lmtest and car. Specialized methods are also provided for the calculation of robust covariance matrices; heteroskedasticity- and correlation-consistent testing is accomplished by passing these on to testing functions, together with a panelmodel object.
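For instance, robust significance tests may be carried out along these lines (a sketch on a pooled Grunfeld fit):

R> library(lmtest)
R> g <- plm(inv ~ value + capital, data = Grunfeld, model = "pooling")
R> coeftest(g, vcov = vcovHC)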
The main functionalities of the package have been illustrated here by applying them to some well-known datasets from the econometric literature. The similarities and differences with the maximum likelihood approach to longitudinal data have also been briefly discussed.
We plan to expand the methods in this paper to systems of equations and to the estimation of models with autoregressive errors. The addition of covariance estimators robust against cross-sectional correlation is also in the offing. Lastly, conditional visualization features in the R environment seem to offer a promising toolbox for visual diagnostics, which is another subject for future work.
