A design-sensitive approach to fitting regression models with complex survey data

Abstract: Fitting regression models to complex survey data is explored under a design-sensitive model-based framework. A robust version of the standard model assumes that the expected value of the difference between the dependent variable and its model-based prediction is zero no matter what the values of the explanatory variables. The extended model assumes only that the difference is uncorrelated with the covariates. Little is assumed about the error structure of this difference under either model other than independence across primary sampling units. The standard model often fails in practice, but the extended model very rarely does. Under this framework some of the methods developed in the conventional design-based, pseudo-maximum-likelihood framework, such as fitting weighted estimating equations and sandwich mean-squared-error estimation, are retained but their interpretations change. Few of the ideas here are new to the refereed literature. The goal instead is to collect those ideas and put them into a unified conceptual framework.


Introduction
The standard "design-based" framework for fitting a regression model to survey data was introduced by Fuller (1975) for linear regression and by Binder (1983) more generally. This framework treats the finite population as a realization of independent trials from a conceptual population. A maximum likelihood estimator for regression-model parameters could, in principle, be estimated from the finite-population values. Given a complex sample drawn from the finite population, either that impractical-to-calculate finite-population estimate or its limit as the finite population grows infinitely large is treated as the target of estimation in the Fuller/Binder design-based framework That is not what most analysts think they are estimating when they fit regression models. We will explore an alternative model-based framework for estimating regression models introduced in Kott (2007) that is sensitive to the 2 P. S. Kott complex sampling design and to the possibility that the usual model assumptions may not hold in the population. Under this design-sensitive robust model-based framework some of the methods developed in the conventional design-based framework, such as fitting weighted estimating equations and sandwich meansquared-error estimation, are retained but their interpretations change. Few of the ideas in this paper are new to the refereed literature. The goal here is to collect those ideas and put them into a unified conceptual framework.
Section 2 lays out the standard and extended models which underlie the design-sensitive approach to fitting vaguely-specified regression models taken here, as opposed to the more conventional design-based or pseudo-maximum-likelihood (pseudo-ML) approach seen in Skinner (1989), Binder and Roberts (2003), Pfeffermann (1993; 2011), Lumley and Scott (2015; 2017), and elsewhere. Section 3 describes the weighted estimating equation used to estimate model parameters in a consistent manner under the design-sensitive framework.
The design-sensitive framework described here is model based for the most part. Probability-sampling ("design-based") principles are invoked in setting up an asymptotic framework to more deeply robustify the model. A parallel framework for robustifying the regression model can be found in the econometric literature (e.g., White, 1980; 1984). Probability-based inference is also invoked in Section 6.2 to provide some justification for a mean-squared-error estimator.
The design-sensitive robust model-based framework is like the randomization-assisted model-based approach to estimating population means and totals proposed in Kott (2005) as a contrast to the popular model-assisted (design-based) approach advocated in Särndal et al. (1992). Kott replaced the conventional term "design" with the more accurate "randomization" ("probability-sampling" would have been even better) because some aspects of the sampling design, like stratification, often depend explicitly or implicitly on a model. Here we use the modifier "design-sensitive" because our robust model-based framework is quite literally sensitive to how the sample has been designed. In what follows, we shorten "design-sensitive robust model-based framework" to "design-sensitive framework" for convenience.
For linear and logistic regression estimation, there is no difference in the estimator under the design-sensitive and pseudo-ML approaches. As Section 4 shows, that is not the case when estimating a cumulative logistic model, where both approaches lead to consistent, but not identical, parameter estimators.
Section 5 discusses the alternative weights introduced in Pfeffermann and Sverchkov (1999). These produce consistent and potentially more efficient estimators under the standard model. Section 6 describes sandwich-like mean-squared-error estimation (from Fuller, 1975, and Binder, 1983) under the design-sensitive framework in the absence of calibration weight adjustments. In their presence, replication is advocated. The section also discusses the not-completely-resolved issue of how to handle the stratification of primary sampling units. Sections 7 and 8 discuss tests of whether the standard model holds, and Section 9 offers some concluding remarks.

The design-sensitive robust model-based approach
We start by defining the standard model in the following robust (i.e., distribution-free) manner:

y_k = f(x_k^T β) + ε_k, k ∈ U,    (2.1)

where E(ε_k | x_k) = 0 for all realized x_k, k ∈ U,    (2.2)

y_k is the dependent random variable being modeled in the population U, x_k is a vector of r explanatory variables (covariates), one of which is 1, and f(.) is a monotonic function. Observe that f(t) = t for linear regression, and f(t) = exp(t)/[1 + exp(t)] for logistic regression. Few additional assumptions about the distribution and variance structure of the ε_k are needed in this vaguely-specified version of the model underpinning a regression analysis until the issue of estimating the mean-squared error of an estimator of β arises, as we shall see in Section 6. Although apparently very general, there is a key restriction imposed by the standard model in equation (2.2); namely, that the expected value of the error term ε_k is zero no matter the value of x_k. This assumption can fail, and the standard model not be appropriate in the population being analyzed. For example, suppose y_k = x_k^2 in the population, and suppose one tries to fit the linear model y_k = α + βx_k + ε_k to this population. The standard-model assumption E(ε_k | x_k) = 0 for all realized x_k, k ∈ U, would fail.
A generalization of the standard model is the extended model, under which E(ε_k | x_k) = 0 in equation (2.2) is replaced by

E(x_k ε_k) = 0.    (2.3)

In other words, ε_k has mean zero unconditionally (i.e., E(ε_k) = 0, since one component of x_k is 1) and is uncorrelated with each of the components of x_k. Unlike the standard model, the more general extended model rarely fails, so long as the first three central moments of the random variable x_k are finite.
Observe that the standard version of the simple linear model through the origin, y_k = βx_k + ε_k, is not exactly of the form specified by equation (2.1) because it is missing an intercept. It similarly assumes E(ε_k | x_k) = 0. The extended version of this model assumes only E(x_k ε_k) = 0.
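A quick numerical sketch of the distinction, using a hypothetical quadratic population built with numpy (the uniform covariate and the quadratic relationship are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population in which y_k = x_k^2, so the standard linear model
# y_k = alpha + beta*x_k + eps_k with E(eps_k | x_k) = 0 fails.
N = 100_000
x = rng.uniform(0.0, 2.0, N)
y = x ** 2

# The extended model asks only that the errors from the best linear fit have
# mean zero and be uncorrelated with x, which least squares delivers by
# construction.
X = np.column_stack([np.ones(N), x])
alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
eps = y - (alpha + beta * x)

mean_eps = eps.mean()                # ~0: E(eps) = 0 holds
cross_moment = (eps * x).mean()      # ~0: eps uncorrelated with x
cond_mean_low = eps[x < 0.1].mean()  # far from 0: E(eps | x) is not 0
```

The unconditional moment conditions of the extended model hold by construction of the least-squares fit, while the conditional mean of the error is visibly nonzero near x = 0.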

The asymptotic framework and the weighted estimating equation
Although populations from which survey samples are drawn are almost always finite, the samples themselves are usually large. That is why it is reasonable to use asymptotics (arbitrarily-large-sample properties) when analyzing survey data. Assume there is an infinite sequence of nested populations growing arbitrarily large and that a sample is drawn from each using the same selection mechanism (which includes the self-selection of responding to the survey and the possibly-imperfect quasi-selection of individual population units into the sampling frame), where the samples are not necessarily nested but also grow arbitrarily large. It is then possible to take the probability limit of a statistic based on a sample as the expected sample size and population grow arbitrarily large (formally, as we advance ad infinitum from one population in the sequence to the next). "Expected sample size" is used because under many selection mechanisms the sample size is not fixed.
Suppose t is an estimator for T. A sufficient condition for the probability limit of t, denoted p lim(t), to be T is for the mean-squared error of t to converge to 0, in which case t is a consistent estimator for T.
Letting N denote the size of population U, it is not difficult to see that under the extended model (where E(x_k ε_k) = 0),

p lim N^{-1} Σ_{k∈U} x_k ε_k = 0,    (3.1)

given mild assumptions about the values of the components of x_k (e.g., they are bounded in number and each have finite moments) and the variance structure of the ε_k (e.g., they are uncorrelated across primary sampling units, each of which consists of a fraction of the population that converges to 0 as the population grows arbitrarily large). Given a complex sample S with weights {w_k}, each nearly equal to the inverse of the corresponding element's selection probability (i.e., any difference tends to zero as the expected sample size grows arbitrarily large), under mild additional conditions on the sampling design and population,

Ψ_z = N^{-1} (Σ_{k∈S} w_k z_k − Σ_{k∈U} z_k) converges to 0 in probability    (3.2)

for z_k = 1, y_k, any component of x_k, or any product of two of these. Sufficient additional assumptions are that the z_k have finite moments and that the joint probability of drawing two primary sampling units in the first stage of random sampling is bounded by the product of their individual first-stage selection probabilities.
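A small simulation makes the role of the weights concrete. This sketch (Poisson sampling with invented, size-related selection probabilities) checks that the weighted sample sum N^{-1} Σ_S w_k z_k tracks the population mean N^{-1} Σ_U z_k:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical population variable z and selection probabilities pi_k that
# deliberately depend on z (so unweighted sample means would be biased).
N = 200_000
z = rng.gamma(2.0, 1.0, N)
pi = np.clip(0.05 * (0.5 + z) / (0.5 + z).mean(), 0.0, 1.0)
sampled = rng.uniform(size=N) < pi      # Poisson sampling
w = 1.0 / pi[sampled]                   # inverse-selection-probability weights

pop_mean = z.mean()
wtd_mean = (w * z[sampled]).sum() / N   # N^{-1} sum_S w_k z_k
```

With an expected sample size of about 0.05 N, the two quantities agree up to sampling error, as equation (3.2) asserts.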
The modifier "nearly" needs to be added to "equal" when weights are calibrated either to increase the statistical efficiency of the resulting estimators (as in Deville and Särndal, 1992) or to account for unit nonresponse or frame-coverage errors (the element is missing from the frame or is contained on the frame more than once) as in Kott (2006). The latter set of calibration adjustments is based on fits of selection models. Consequently, even assuming (as we do) that the selection models employed (for nonresponse and/or coverage adjustment) are correct, the calibrated weights are only asymptotically equal to the inverses of the selection probabilities. We return to the issues surrounding calibrated weights in Section 6.3.
The w_k are inserted into equation (3.2) in case E(ε_k | x_k, w_k) ≠ 0, a situation in which the weights are said to be nonignorable in expectation (with respect to the model, a phrase that usually goes without saying). Full ignorability of the weights or, equivalently, of the selection probabilities in the sense of Little and Rubin (2002), obtains when the conditional ε_k | x_k are independent of the w_k. If the original random sample is selected with probability proportional to some component of x_k, while the variance of ε_k | x_k is a function of that same component, then ε_k | x_k is clearly not independent of w_k, and the weights are not ignorable, but they could still be ignorable in expectation (i.e., E(ε_k | x_k, w_k) = 0 for every realized x_k and w_k, k ∈ U).
Whether the standard or extended model is assumed to hold in the population, solving for b in the weighted estimating equation (Godambe and Thompson, 1974)

Σ_{k∈S} w_k [y_k − f(x_k^T b)] x_k = 0    (3.4)

provides a consistent estimator for β under mild conditions, because the mean-value theorem allows equation (3.4) to be rewritten as

A_θ (b − β) = N^{-1} Σ_{k∈S} w_k x_k ε_k,    (3.5)

where A_θ = N^{-1} Σ_{k∈S} w_k f′(θ_k) x_k x_k^T    (3.6)

for some θ_k between x_k^T b and x_k^T β. In addition to equation (3.3), the mild conditions assume that A_θ in equation (3.6) and its probability limit are positive definite. Because N^{-1} Σ_{k∈S} w_k x_k ε_k converges to 0 in probability as the expected sample size grows arbitrarily large (equations (3.1) and (3.2)), b is a consistent estimator for β.
It is not hard to show that Σ_{k∈U} [y_k − f(x_k^T b)] x_k = 0 is the maximum-likelihood (ML) estimating equation for the population under the independent and identically distributed (iid) linear regression model and under logistic regression with independent observations (i.e., sampled population elements). Nevertheless, the solution to equation (3.4) is not ML when the weights vary or the ε_k within primary sampling units are correlated. Instead, the b solving (3.4) is referred to as pseudo-ML (Skinner, 1989). Strictly speaking, the full-population solution to Σ_{k∈U} [y_k − f(x_k^T b)] x_k = 0 in linear or logistic regression need not be ML estimates under the design-based approach because the standard model may not hold in the population. Some advocates of model fitting using pseudo-ML, however, assume that the model does hold in the population (e.g., Pfeffermann, 1993; 2011).
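A minimal sketch of solving the weighted estimating equation for logistic regression by Newton-Raphson (all data are simulated, and the weights are drawn independently of everything else so that they are ignorable; a production fitter would add step control and convergence checks):

```python
import numpy as np

def fit_weighted_logistic(y, X, w, iters=25):
    """Solve sum_S w_k [y_k - f(x_k'b)] x_k = 0 with f(t) = 1/(1 + exp(-t))
    by Newton-Raphson.  A sketch only."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        score = X.T @ (w * (y - p))                        # estimating function
        info = X.T @ ((w * p * (1.0 - p))[:, None] * X)    # its negative derivative
        b = b + np.linalg.solve(info, score)
    return b

# Toy check with weights independent of (y, x), hence ignorable:
rng = np.random.default_rng(0)
n = 20_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)
w = rng.uniform(1.0, 3.0, n)
b_hat = fit_weighted_logistic(y, X, w)
```

Because the weights here are ignorable, the weighted and unweighted solutions estimate the same β; the point of the weighted equation is that it remains consistent even when they are not.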

The cumulative logistic model
More generally, the pseudo-ML estimating equation in Binder is

Σ_{k∈S} w_k [y_k − f(x_k^T b)] f′(x_k^T b) x_k / v_k = 0,    (4.1)

where v_k is the model variance of y_k. For logistic, Poisson, and ordinary least squares (OLS) linear regression, f′(x_k^T β)/v_k ∝ 1, so equation (4.1) coincides with equation (3.4). This equality does not hold for generalized least squares (GLS) linear regression, however, where v_k varies across the elements of the population.
We will return to GLS linear regression in the next section. For now, we consider another example of when the pseudo-ML and design-sensitive estimators differ. The cumulative logistic model is a multinomial logistic regression model for ordered data, where there are L categories with a natural ordering (e.g., always, frequently, sometimes, never). Being in the first category is assumed to fit a logistic model. Being in either the first or second category is assumed to fit a different logistic model. Being in the first, second, or third category is assumed to fit yet another logistic model, and so forth.
The generalized cumulative logistic model is (splitting out the intercept from the rest of the covariates)

E(y_lk | x_k) = f(α_l + x_k^T β_l), l = 1, . . . , L − 1,    (4.2)

where f(t) = exp(t)/[1 + exp(t)] and y_lk = 1 if k is in one of the first l categories, 0 otherwise. When β_l = β for all categories, but each category has its own intercept, the cumulative logistic model is also called a proportional-odds model. Finding the b_l that satisfy the estimating equations

Σ_{k∈S} w_k [y_lk − f(a_l + x_k^T b_l)] (1, x_k^T)^T = 0, l = 1, . . . , L − 1,    (4.3)

can be used for the generalized cumulative logistic model or, with each b_l set equal to a common b, the proportional-odds model. This is not the pseudo-ML estimating equation in the surveylogistic routine in SAS/STAT 14.1 (SAS Institute Inc., 2015), the logistic routine in SUDAAN 11 (Research Triangle Institute, 2012), or the gologit2 routine in STATA (Williams, 2005).
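The stacked-binary data construction behind these estimating equations can be sketched as follows (toy data; the category codes, sample size, and variable names are invented). Each element contributes L − 1 rows, one per cutpoint, all carrying the same element/PSU identifier so that a sandwich-type mean-squared-error estimator can account for their correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 6, 4                       # 6 elements, 4 ordered categories
cat = rng.integers(1, L + 1, n)   # observed category, 1 (e.g. always) .. 4 (never)
x = rng.normal(size=n)            # a single covariate for the sketch

rows = []
for l in range(1, L):             # one binary equation per cutpoint l = 1..L-1
    for k in range(n):
        y_l = 1.0 if cat[k] <= l else 0.0    # in one of the first l categories?
        intercepts = [0.0] * (L - 1)
        intercepts[l - 1] = 1.0              # each cutpoint gets its own intercept
        rows.append([y_l, *intercepts, x[k], k])   # last entry: element/PSU id
data = np.array(rows)             # columns: y_l, L-1 intercepts, x, id
```

A binary logistic fit to this stacked data set (shared slope columns for the proportional-odds version, cutpoint-specific slopes for the generalized version) solves the estimating equations above.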
When the standard model fails, that is, when E(ε_k | x_k) = 0 does not hold for all realized x_k, the solution for the b_l in equation (4.3) may not be estimating the same parameters as the pseudo-ML b_PML. This is not a bad thing. Unlike the pseudo-ML solution, the solution to equation (4.3) has this reasonable property:

Σ_{k∈S} w_k [y_lk − f(a_l + x_k^T b_l)] = 0 for each l.

This is a property retained at the asymptotic limit of b_l but not necessarily the asymptotic limit of b_PML. That is to say,

E(w_k [y_lk − f(α_l + x_k^T β_l)]) = 0,

where (α_l, β_l) is the estimand of (a_l, b_l). The equality need not hold when (α_l, β_l) is replaced by the estimand of b_PML.

Alternative weights
Pfeffermann and Sverchkov (1999) showed that under the standard model one can replace the w_k with the P-S modified weights w_{k·g} = w_k g_k, where g_k is a function of the components of x_k, say g(x_k), computed to reduce as much as possible the variability of the w_{k·g} in the hopes of decreasing the variability of the linear regression-coefficient estimates under an iid model. Kott (2007) pointed out that the Pfeffermann-Sverchkov result is a simple repercussion of the assumption that E(ε_k | x_k) = 0 for all realized x_k, k ∈ U. We can see that by replacing w_k in equations (3.2) and (3.4) by w_k g(x_k) and noting that

p lim N^{-1} Σ_{k∈S} w_k g(x_k) x_k ε_k = E(g(x_k) x_k E(ε_k | x_k)) = 0.

Listwise-deletion of observations with missing values
Interestingly, the Pfeffermann-Sverchkov result justifies the often-reviled practice of deleting sampled observations with any missing values from a regression analysis (see, for example, Wilkinson et al., 1999). Under the standard model, listwise deletion leads to consistent coefficient estimates when the probability p_k that a sampled unit remains in the listwise-deleted sample is a function of the components of x_k, say p_k = p(x_k). (The probability an item value is missing is thus 1 − p_k, which is also a function of the components of x_k.) Consequently, the true inverse-selection-probability weight is w_k/p_k, and a potential P-S modified weight is (w_k/p_k) × p(x_k) = w_k, the original weight. Not only can item nonresponse be missing at random, explanatory-variable values can be missing not at random so long as their missingness does not depend on y_k | x_k. Moreover, the functional form of p(.) need not be known.
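A tiny numeric sketch of why this works (the logistic response model p(x) and the weight distribution are invented for illustration). Multiplying the true weight w_k/p_k of a complete case by g(x_k) = p(x_k) gives back the original weight w_k, so the analyst can simply run the weighted regression on the complete cases:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
x = rng.normal(size=n)
w = rng.uniform(1.0, 4.0, n)                 # original analysis weights
p = 1.0 / (1.0 + np.exp(-(0.2 + 0.5 * x)))  # P(no item missing | x): depends on x only
complete = rng.uniform(size=n) < p           # which cases survive listwise deletion

# True inverse-selection-probability weight for the listwise-deleted sample:
w_true = w[complete] / p[complete]
# P-S modification with g(x_k) = p(x_k) recovers the original weights:
w_ps = w_true * p[complete]
```

The functional form of p(.) is used here only to make the identity visible; as the text notes, in practice it need not be known.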

Generalized least squares
When the variance of ε_k given x_k is v_k, and v_k is a function of x_k, the pseudo-ML estimator for a linear regression in equation (4.1) is consistent under the standard model (note that f(x_k^T b) = x_k^T b). When the weights are ignorable, however, a more efficient estimator is the solution to

Σ_{k∈S} (1/v_k) [y_k − x_k^T b] x_k = 0.

This suggests that when the weights are not ignorable, a more efficient estimator than the solution to equation (4.1) would factor each weight w_k/v_k by 1/w(x_k), where w(x_k) is a good approximation for w_k. A method for creating a reasonable form for w(x_k) is setting w(x_k) = exp(h(x_k)), where h(x_k) is the fitted value of the ordinary-least-squares regression in the sample of log(w_k) on the components of x_k = (x_1k, . . . , x_rk)^T and, perhaps, functions of those components (e.g., x_1k^2).
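The recipe in the last sentence can be sketched as follows (the weight-generating mechanism and the form of v_k are invented; numpy only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(size=n)
w = np.exp(0.3 + 0.8 * x + rng.normal(scale=0.1, size=n))  # weights tied to x
v = 1.0 + x ** 2                                           # v_k, a function of x_k

# h(x): fitted values of an OLS regression of log(w_k) on components of x_k
Z = np.column_stack([np.ones(n), x])
coef = np.linalg.lstsq(Z, np.log(w), rcond=None)[0]
w_hat = np.exp(Z @ coef)          # w(x_k) = exp(h(x_k))

gls_weight = w / (v * w_hat)      # each w_k / v_k factored by 1 / w(x_k)
```

When w(x_k) tracks w_k well, gls_weight is close to 1/v_k, so the estimator approaches the efficient GLS solution while retaining consistency under nonignorable weights.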

Mean-squared-error estimation
In this section, we restrict attention to stratified or single-stratum probability samples of primary sampling units (PSUs) of fixed size. Additional stages of probability samples can then be conducted independently within each PSU to draw the sample elements. We do not rule out samples of elements where the PSUs are completely enumerated or where each PSU is composed of a single element.
In our asymptotic framework, the number of PSUs grows infinitely large along with the population. The number of strata may also grow arbitrarily large or it can be fixed. In the former situation, the number of PSUs in a stratum is fixed, while in the latter that number grows infinitely large.
Let h denote one of H strata and u_k the H-vector of stratum-inclusion identifiers for element k (i.e., if k is in stratum h, then u_hk, the h-th component of u_k, is 1; otherwise it is 0).

When first-stage stratification is ignorable
Mean-squared-error estimation given a stratified multistage sample can be tricky unless a simplifying assumption is made. Usually, the assumption is that the PSUs were randomly selected with replacement within strata.
Instead, we assume for now that the ε_k are uncorrelated for elements from different PSUs, have bounded variances, and satisfy E(ε_k | x_k, u_k) = 0 (E(x_k ε_k | u_k) = 0 for the extended model), which is to say that the first-stage stratification is ignorable in expectation. Although it is likely that strata were chosen such that the mean of the y_k differed across strata, it is nonetheless not unreasonable to assume that the E(ε_k | x_k) (or E(x_k ε_k)) are unaffected by the first-stage stratum identifiers, especially since x_k in equation (2.1) may contain a bounded number (as the number of PSUs grows arbitrarily large) of stratum identifiers or functions of stratum identifiers (e.g., u_hk x_jk).
To estimate the mean-squared error of the consistent estimator b (i.e., its probability limit is β as n grows arbitrarily large), one starts with

A_θ (b − β) = N^{-1} Σ_{k∈S} w_k x_k ε_k    (equation (3.5))

for some θ_k between x_k^T b and x_k^T β, and the additional assumption that A_θ = N^{-1} Σ_{k∈S} w_k f′(θ_k) x_k x_k^T (equation (3.6)) and its probability limit are finite and positive definite. Let ω_k be the inverse of the selection probability of k. We will assume that w_k = ω_k until Subsection 6.3.
The design-based mean-squared-error estimator for b (from Binder, 1983) is

var(b) = D^{-1} (N^{-2} Σ_h Σ_j E_hj E_hj^T) D^{-1},    (6.2)

where D = N^{-1} Σ_{k∈S} w_k f′(x_k^T b) x_k x_k^T estimates the positive-definite probability limit of A_θ in equation (3.6), E_hj = Σ_{k∈S_hj} w_k x_k e_k is the weighted score total for PSU j of stratum h, and e_k = y_k − f(x_k^T b). Our assumptions assure the near unbiasedness of the variance estimator in equation (6.2) (as n grows arbitrarily large) given a sampling design and population such that p lim(nΨ_z^2) is bounded, where Ψ_z is defined in equation (3.2). They also assure the near unbiasedness of

var(b) = D^{-1} [N^{-2} Σ_h {n_h/(n_h − 1)} Σ_j (E_hj − Ē_h)(E_hj − Ē_h)^T] D^{-1},    (6.3)

where n_h is the number of sampled PSUs in stratum h and Ē_h is the mean of the E_hj within it. Under the standard model (but not necessarily the extended model), the w_k in both equations (6.2) and (6.3) can be replaced by w_{k·g}.
From a model-based viewpoint, the keys to both mean-squared-error estimators are that the E hj = k∈S hj w k x k ε k on the right-hand side of equation (3.5) have mean 0 and are uncorrelated across PSUs and that the probability limit of D, like A θ , is finite and positive definite. The use of robust sandwich-type mean-squared-error estimates like equations (6.2) and (6.3) (the D being the bread of the sandwich) allows the variance matrices of the E hj to be unspecified. The additional asymptotic assumptions allow e k = ε k − x T k (b − β) to be used in place of ε k within the E hj , and D to replace the probability limit of A θ .
Additional variations of the mean-squared-error estimator in equation (6.2) can be made if the analyst is willing to assume that the ε_k are uncorrelated across secondary sampling units or across elements. The more components there are in x_k, the more reasonable the assumption that the ε_k are uncorrelated across elements (or another higher stage of sampling, like housing unit in a household-based sample of individuals) and the more reasonable the assumption that the first-stage stratification is ignorable.
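A sketch of the PSU-level sandwich estimators of this section for weighted OLS (so f(t) = t, f′(t) = 1, and D reduces to the weighted cross-product matrix; the data, strata, and PSU layout are invented, and the optional stratum centering corresponds to the design-based variant):

```python
import numpy as np

def sandwich_mse(y, X, w, psu, strat=None):
    """PSU-level sandwich mean-squared-error estimate for weighted OLS.
    psu, strat: integer labels per element.  A sketch only; pass strat to
    center the PSU score totals E_hj at stratum means."""
    D = X.T @ (w[:, None] * X)                    # "bread" (N factors cancel)
    b = np.linalg.solve(D, X.T @ (w * y))
    e = y - X @ b
    scores = w[:, None] * X * e[:, None]          # element scores w_k x_k e_k
    psus = np.unique(psu)
    E = np.vstack([scores[psu == j].sum(axis=0) for j in psus])   # the E_hj
    if strat is not None:
        s = np.array([strat[psu == j][0] for j in psus])
        for h in np.unique(s):                    # center within each stratum
            m = s == h
            nh = m.sum()
            E[m] = (E[m] - E[m].mean(axis=0)) * np.sqrt(nh / (nh - 1.0))
    Dinv = np.linalg.inv(D)
    return b, Dinv @ (E.T @ E) @ Dinv             # sandwich: bread-meat-bread

# Toy stratified sample with two PSUs per stratum:
rng = np.random.default_rng(6)
n = 400
strat = np.repeat(np.arange(10), 40)
psu = strat * 2 + (np.arange(n) % 2)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
w = rng.uniform(1.0, 2.0, n)
b_hat, V = sandwich_mse(y, X, w, psu, strat)
```

Leaving strat as None gives the simpler uncentered version appropriate when the first-stage stratification is ignorable.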

When first-stage stratification is not ignorable
When the first-stage stratification is not ignorable and again w_k = ω_k, it is tempting to follow design-based theory and argue that under probability-sampling theory the E_hj are independent and have a common mean within strata, justifying the use of the mean-squared-error estimator in equation (6.3) but not (6.2). This argument is only valid when the first-stage PSUs are indeed selected with replacement (which would allow the same PSU to be selected more than once, with independent subsampling in each selection) or their selection can reasonably be approximated by that design.
A well-known problematic without-replacement sample is a systematic sample of elements drawn from a list ordered in cycles such that each possible sample severely over- or underestimates the estimand. Even when a systematic probability-proportional-to-size sample of PSUs is drawn from a randomly-ordered list, invoking large-sample or large-population properties (see Hartley and Rao, 1962) when the actual sample or population of PSUs in each stratum is not large is dubious. Graubard and Korn (2002) point out that, even under with-replacement sampling of PSUs, equation (6.3) provides a nearly unbiased mean-squared-error estimator only when the relative sizes of the nonignorable strata are fixed as the population grows arbitrarily large. Otherwise, there is a component of the mean-squared error of b that equation (6.3) fails to capture: the random number of elements within each first-stage stratum when the number of strata is bounded (e.g., when the strata are the four US regions and the fraction of elements in each region is random).

An extension for calibrated weights
Let d_k denote the inverse of the probability that element k is chosen for the sample. The ratio ω_k/d_k is the product of calibration factors accounting for the probability of response, which can occur at various levels (e.g., household and person for a survey of individuals), and the expected number of times the element is in the sampling frame. When k is known not to be on the frame more than once, this value is the probability that k is on the frame at all.
To simplify the exposition, we will assume that there is a single calibration factor of the form ω_k/d_k = q(m_k^T γ), where q(t) is a monotonic function, such as q(t) = 1 + exp(t), m_k is a vector of variables with a finite number of components, and γ is an unknown parameter consistently estimated by a g such that the following calibration equation holds:

Σ_{k∈S} d_k q(m_k^T g) c_k = T_c,    (6.4)

where c_k is a vector of calibration variables with the same number of components as, and often equal to, m_k, and the population total of c_k, or a nearly unbiased estimate of that total, is known and denoted T_c. When used to account for nonresponse, the components of T_c can be estimates based on the full sample before nonresponse. When the calibration factor is used strictly to increase the efficiency of estimated means and totals, q(t) is often set at 1 + t (linear calibration), exp(t) (raking), or 1/(1 + t) (pseudo-empirical likelihood), and γ = 0. Linear calibration and raking are also popular for coverage and nonresponse adjustment, but γ is no longer 0. For nonresponse adjustment, setting q(t) at 1 + exp(t) > 0 assumes the probability of response is a logistic function of m_k, while setting q(t) at (L + exp(t))/(1 + exp(t)/U) assumes response is a truncated logistic function with response probabilities between 1/U and 1/L. Under mild conditions paralleling those used to justify equation (3.5) and the consistency of b, assuming the selection model for response/nonresponse or coverage error implied by q(m_k^T γ) is correct and that, if T_c is a random variable, it is uncorrelated with whether element k is a respondent when sampled, a mean-value expansion (for some ϕ_k between m_k^T g and m_k^T γ) shows that d is a consistent estimator for δ.
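A sketch of solving a calibration equation of this form with q(t) = exp(t) (raking) and c_k = m_k, by Newton-Raphson on invented data (the totals and the 0/1 calibration variable are made up, and the sketch assumes a solution exists):

```python
import numpy as np

def calibrate_raking(d, M, T_c, iters=50):
    """Solve sum_S d_k q(m_k'g) c_k = T_c with q(t) = exp(t) and c_k = m_k,
    by Newton-Raphson.  A sketch: assumes feasibility and full-rank M."""
    g = np.zeros(M.shape[1])
    for _ in range(iters):
        q = np.exp(M @ g)
        resid = M.T @ (d * q) - T_c              # calibration-equation residual
        J = M.T @ ((d * q)[:, None] * M)         # Jacobian of the left-hand side
        g = g - np.linalg.solve(J, resid)
    return g

# Toy respondent sample calibrated to known totals T_c:
n = 100
d = np.ones(n)                               # inverse selection probabilities
z = (np.arange(n) < 35).astype(float)        # a 0/1 calibration variable
M = np.column_stack([np.ones(n), z])         # m_k = (1, z_k)'
T_c = np.array([110.0, 40.0])                # known (or estimated) totals
g = calibrate_raking(d, M, T_c)
w_cal = d * np.exp(M @ g)                    # calibrated weights omega_k
```

After calibration the weighted totals of the calibration variables reproduce T_c exactly, which is what equation (6.4) requires.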
Just as in the near unbiasedness of b, the differences between 1/q(m T k γ) and the value it estimates (i.e., 1 if element k is a respondent, 0 if not; the number of times element k of the population is in the sampling frame) can be correlated within PSUs, although it is more common to assume that the differences are uncorrelated across all elements.
Often many of the components of m_k will also be components of x_k. If they all were, or if we replace the standard-model assumption in equation (2.2) by

E(ε_k | x_k, m_k) = 0,

then it is easy to see that equation (6.3) (or (6.2)) can be used to estimate the mean-squared error of b given T_c under the standard model. The conditioning is needed when T_c itself is an estimator. More generally, we would be better served using a replication method (i.e., one that replicates the calibration factors so that equation (6.4) holds for each replicate) to estimate the (conditional) mean-squared error of b in a way that accounts for the mean-squared-error contribution from d as well as the ε_k (and, perhaps, T_c as well). Although the mean-squared error with both these contributions can be linearized as well, linearization becomes increasingly cumbersome as the number of calibration factors increases. So too does replication, because there is a calibration equation for each factor, but once replicate weights are determined they can be used for estimating any number of regression models. Linearization does not share this convenient property.

Degrees of freedom
When fitting a regression model to survey data, design-based practice often treats the diagonals of the mean-squared-error estimator in equation (6.3) as if they had a chi-squared distribution with n − H degrees of freedom (Lohr, 2010, p. 438). There is no justification for this under probability-sampling theory, which relies entirely on the asymptotic normality of b. This questionable practice apparently comes from var(b) in equation (6.3) looking a bit like a multiple of a chi-squared statistic with n − H degrees of freedom.
In fact, if the E hj were all independent and identically distributed multidimensional normal random variables, then the diagonals of var(b) would indeed be close to a multiple of a chi-squared statistic with n − H degrees of freedom. Unfortunately, the E hj in practice are not likely to be normally distributed, and even if they are close enough to being normal for them to be treated as such, they rarely have the same variances.
A model-based approach in Kott (1994) assumes that the first-stage stratification is ignorable and that the ε_k (as opposed to the E_hj) are normally distributed, have mean zero and a common variance, and are uncorrelated. The approximate relative mean-squared error of a diagonal of var(b), call it rv, can be calculated under those assumptions. Using the Satterthwaite approximation, the effective degrees of freedom for the corresponding component of b would then be 2/rv and could vary across coefficients. Although this procedure is itself more than a little questionable, when it is employed in the generation of t statistics the result will likely produce better coverage intervals than conventional design-based practice. Better yet would be to compute alternative measures of the effective degrees of freedom under different assumptions about the variance structure of the ε_k within a sensitivity analysis. Valliant and Rust (2010) address the degrees-of-freedom problem in design-based regression analyses.
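The Satterthwaite step can be sketched numerically. If a variance estimator is distributed like a weighted sum of independent one-degree-of-freedom chi-squared terms, its relative variance and effective degrees of freedom follow directly (the contribution vector a below is invented; it stands in for unequal PSU contributions to one diagonal of var(b)):

```python
import numpy as np

# If a variance estimator behaves like sum_i a_i * chi2_1 with independent
# terms, its relative variance is rv = 2 * sum(a_i^2) / (sum a_i)^2, and the
# Satterthwaite effective degrees of freedom are 2 / rv = (sum a)^2 / sum(a^2).
a = np.array([1.0, 1.0, 0.2, 0.2, 0.2])   # hypothetical unequal contributions
rv = 2.0 * np.sum(a ** 2) / np.sum(a) ** 2
df_eff = 2.0 / rv
```

With equal contributions df_eff would equal the number of terms; the unequal contributions here shrink it well below that nominal count, which is the paper's point about variance-estimator stability.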

Tests for choosing weights
Suppose an analyst wants to determine whether b and b′, each computed with its own set of weights, are estimating the same thing. For example, to test whether weights are ignorable in expectation, the analyst could compare b computed using inverse-selection-probability weights with b′ computed using equal weights. If the vectors are not significantly different, then weights might be ignored. Similarly, b could be compared with a different b′ computed using P-S modified weights. This would provide an indirect test of the standard model, since using the P-S modified weights produces a consistent estimator for β under the standard model but not more generally.
Under the null hypothesis that b and b′ are estimating the same thing,

X^2 = (b − b′)^T [var(b − b′)]^{-1} (b − b′)

is asymptotically chi-squared with r degrees of freedom, where r is the dimension of x_k and var(.) is a mean-squared-error estimator analogous to the one in either equation (6.2) or (6.3). A popular design-based test statistic for whether b and b′ are estimating the same thing is

F = {(d − r + 1)/(dr)} X^2,    (7.1)

where d is the nominal degrees of freedom, that is, n − H. The F test in equation (7.1) is called the adjusted Wald F test (Fellegi, 1980) in SUDAAN 11 (RTI International, 2012, p. 217), which also offers a host of variations, of which it and the Satterthwaite-adjusted F, based on Rao and Scott's (1981) Satterthwaite-adjusted chi-squared test, are the best (see Korn and Graubard, 1990). The comparison test itself, proposed in Kott (1991), owes much to the more assumption-dependent Hausman (1978) test and is relatively easy to conduct using popular design-based software in the following manner. Two copies are made for each element in the data set. Both are assigned to the same PSU, which accounts for their being strongly correlated in mean-squared-error estimation. The first copy is assigned the weight used to compute b and the second the weight used to compute b′. The row vector of covariates x_k^T of the regression is replaced by (x_k^T, x_k^T) for the first copy and by (x_k^T, 0^T) for the second. The estimated regression coefficient is then (b′^T, (b − b′)^T)^T, and testing whether b − b′ is significantly different from 0 becomes a straightforward regression exercise using design-based software, such as SUDAAN 11 or the survey procedures in SAS/STAT 14.1 (SAS Institute Inc., 2015).
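The two-copies construction can be sketched directly (simulated data; the small weighted-least-squares helper is ours, not from any package). The stacked fit reproduces b′ in its first r components and b − b′ in its last r:

```python
import numpy as np

def doubled_design(X, y, w_b, w_bp, psu):
    """Two copies per element: copy 1 gets covariates (x, x) and the weights
    behind b; copy 2 gets (x, 0) and the weights behind b'.  Both copies keep
    the element's PSU id so sandwich MSE estimation sees them as correlated."""
    X1 = np.hstack([X, X])
    X2 = np.hstack([X, np.zeros_like(X)])
    return (np.vstack([X1, X2]), np.concatenate([y, y]),
            np.concatenate([w_b, w_bp]), np.concatenate([psu, psu]))

def wls(y, X, w):
    """Plain weighted least squares (normal equations)."""
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

rng = np.random.default_rng(8)
n, r = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)
w_b = rng.uniform(1.0, 3.0, n)          # e.g. inverse-probability weights
w_bp = np.ones(n)                       # e.g. equal weights
psu = np.arange(n)

Xs, ys, ws, _ = doubled_design(X, y, w_b, w_bp, psu)
gamma = wls(ys, Xs, ws)
b = wls(y, X, w_b)
bp = wls(y, X, w_bp)
```

The decoupling is exact: the normal equations for the last r coefficients involve only the first copy, forcing the combined coefficients to b, and the remaining equations then force the first r coefficients to b′.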
The design-sensitive model-based approach allows each component of d to have its own model-based effective degrees of freedom in a t test and then uses a conservative Bonferroni adjustment to test whether the components in the bottom half of d are significantly different from 0 (i.e., the smallest p value among the components is compared to α/r when testing for significance at the α level). Using a Bonferroni-adjusted t test in place of an F test when analyzing a regression with complex survey data was previously advocated by Korn and Graubard (1990). Holm (1979) proposed a method for determining which components of b − b′ are significantly different from zero. Using the Holm-Bonferroni procedure, the component with the v-th lowest p value is deemed statistically significant at the α level when the (v − 1)-th lowest p value has been deemed significant and the v-th lowest p value is less than α/(1 + r − v).

Another test for the standard model
Determining whether using P-S modified weights yields significantly different regression-coefficient estimates from using inverse-selection-probability weights is one way of testing whether the standard model holds. Here is another test that may prove useful for determining whether the standard logistic model holds in the population, which can be difficult with a clustered sample (Graubard et al., 1997).
Estimate b in equation (2.1) using inverse-selection-probability weights, calibrated weights, or P-S modified weights. Compute f_k = f(x_k^T b), which nearly equals f(x_k^T β). Apply "design-based" software to the linear model: E(y_k) = α + β f_k + γ f_k^2. If g, the estimator for γ, is significantly different from 0, then the standard model fails for the model in equation (2.1) because E(ε_k | x_k) is clearly not 0 (f_k being a function of x_k, and the "design-based" mean-squared-error estimator being robust to the heteroskedasticity of the y_k − α − β f_k − γ f_k^2). That g is not significantly different from 0 is necessary for the standard model to hold but not sufficient to establish that it holds. Observe that when the standard model holds, a, the estimator for α, should also not be significantly different from 0. This suggests testing whether a and g are simultaneously not significantly different from 0.
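The mechanics of this check can be sketched with simulated data. The following numpy illustration assumes a simple logistic model fit by Newton-Raphson and ordinary least squares standing in for design-based software (the data, weights, and function names are all illustrative); it computes f_k from the fitted b and then regresses y on an intercept, f_k, and f_k^2 to obtain a and g.

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logistic(X, y, w, iters=25):
    # Weighted logistic fit via Newton-Raphson on the estimating
    # equations sum_k w_k x_k (y_k - f(x_k' b)) = 0
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = logistic(X @ b)
        grad = X.T @ (w * (y - p))
        H = (X * (w * p * (1 - p))[:, None]).T @ X
        b += np.linalg.solve(H, grad)
    return b

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, logistic(X @ np.array([-0.5, 1.0])))
w = np.ones(n)

b = fit_logistic(X, y, w)
f = logistic(X @ b)                 # f_k = f(x_k' b)

# Regress y on (1, f_k, f_k^2); under the standard model the intercept a
# and the coefficient g on f_k^2 should both be statistically near 0.
Z = np.column_stack([np.ones(n), f, f * f])
a, beta_f, g = np.linalg.solve(Z.T @ Z, Z.T @ y)
```

In practice the significance of a and g would be judged with a design-based (sandwich) variance estimator computed by survey software, not the naive OLS one.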

Concluding remarks
The goal of this paper has been to show that some of the techniques in conventional design-based practice can be justified in the design-sensitive framework for estimating the vaguely-specified regression models defined herein. Nevertheless, although inserting weights into an estimating equation is often justified, it is not always necessary, depending on what assumptions are made. In addition, the sandwich-type mean-squared-error estimator used in design-based practice (equation (6.3)) does not fully account for first-stage stratification when first-stage stratification is not ignorable in expectation. When it is ignorable, a simpler mean-squared-error estimator (equation (6.2)) can be used that is likely more stable (i.e., its diagonal elements have less relative mean-squared error). Other, even more stable, mean-squared-error estimators can be constructed if one is willing to assume that element errors are not correlated across smaller levels of clustering than PSUs (e.g., across households but not within households).
In practice the standard and extended models described here rarely produce estimators different from the popular pseudo-ML methodology. An exception to this is the cumulative logistic model. Ironically, it is a simple matter to employ SAS/STAT or SUDAAN to estimate a generalized cumulative logistic model using the methodology discussed here even though the analogous pseudo-ML estimator cannot be computed with SUDAAN. To do so, one treats the L − 1 equations involving the same element as if they were different elements from the same PSU and runs a (binary) logistic regression, relying on the sandwich-like design-based mean-squared-error estimators to handle the correlation of the equations. Testing the "parallel lines" assumption of the proportional-odds model, namely that the β_l in equation (4.1) are all equal, is straightforward.
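The data expansion behind this trick can be sketched as follows, assuming an ordinal response coded 1, ..., L and cumulative indicators of the form y_k ≤ l (the function and field names are hypothetical illustrations, not part of any package):

```python
def expand_ordinal(records, n_levels):
    """Expand each element into L-1 binary records, one per cutpoint l,
    all sharing the element's PSU so a sandwich-type variance estimator
    picks up the correlation among them.

    records: list of dicts with keys "y" (ordinal in 1..L), covariates,
    and "psu".
    """
    long_rows = []
    for rec in records:
        for l in range(1, n_levels):          # cutpoints 1, ..., L-1
            row = dict(rec)
            row["cutpoint"] = l
            row["y_binary"] = int(rec["y"] <= l)
            long_rows.append(row)
    return long_rows

data = [{"y": 1, "x": 0.2, "psu": 1},
        {"y": 3, "x": -1.0, "psu": 1},
        {"y": 2, "x": 0.5, "psu": 2}]
long_rows = expand_ordinal(data, n_levels=3)
# Each element now contributes L - 1 = 2 binary records; a (binary)
# logistic regression of y_binary on x and cutpoint dummies can then be
# run in design-based software with psu as the cluster identifier.
```

Allowing the coefficient on x to interact with the cutpoint dummies yields the generalized (non-parallel-lines) model, which is what makes the test of the proportional-odds assumption straightforward.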
One interesting repercussion of assuming the robust standard model is that listwise deletion turns out to be an appropriate technique for regression analysis so long as the probability that an element is deleted from the analysis does not depend on the value of the dependent variable given the independent variables.
The problem with making assumptions is that they can be wrong. Survey statisticians have, for the most part, accepted a design-based framework that effectively focuses on robustness by relying on as few model assumptions as possible. That framework is not particularly helpful when the goal is to fit a regression model. Moreover, it can be misleading when survey statisticians graft a notion like degrees of freedom onto what is an asymptotic theory.
The paper has reviewed statistical tests for determining whether inverse-selection-probability weights are ignorable in expectation when fitting a regression model and, if so, whether the standard model nonetheless holds, allowing the use of P-S modified weights. The practicality of these tests, which may not have much power, will need to be determined by further research.
Design-based practice has been to fear that any such test will incorrectly fail to see that the weights are not ignorable or that the standard model fails. In fact, the standard model, like all models, is almost never completely true. In the same vein, inverse-selection-probability weights are rarely entirely ignorable. Still, the standard model may be useful, and the efficiency gains from ignoring the weights may overwhelm the resulting bias. We need better tools for making such determinations. The design-sensitive model-based approach may be the key to developing those tools.