Local optimization-based statistical inference

This paper introduces a local optimization-based approach to test statistical hypotheses and to construct confidence intervals. This approach can be viewed as an extension of the bootstrap, and yields asymptotically valid tests and confidence intervals as long as there exist consistent estimators of the unknown parameters. We present simple algorithms, including a neighborhood bootstrap method, to implement the approach. Several examples in which theoretical analysis is not easy are presented to show the effectiveness of the proposed approach.


Introduction
More and more complex datasets call for sophisticated statistical methods in the modern era. Compared with other fields that analyze data, such as computer science and applied mathematics, statistics can quantify the uncertainty of a phenomenon via hypothesis testing and/or interval estimation, which is a distinctive feature of the discipline. In conventional frequentist statistics, to test a hypothesis or construct a confidence interval, we need to find a proper test statistic or pivotal quantity whose distribution satisfies certain properties (Lehmann and Romano 2006). However, this is quite difficult for many complex problems. The bootstrap method (Efron 1979) relaxes the above requirement on test statistics or pivotal quantities via its ability to approximate distributions, and thus strengthens the power of conventional frequentist inference. Another advantage of bootstrap is that it provides explicit resampling-based solutions if the underlying model is well estimated.
Consequently, bootstrap has been well received in statistics and other fields. The frequentist properties of bootstrap inferential procedures, such as bootstrap interval estimation, can be guaranteed by the consistency of the bootstrap distribution estimator (Shao and Tu 1995). This is also true for related methods like subsampling (Politis, Romano, and Wolf 1999). Generally speaking, it is more difficult to prove such consistency than to derive the asymptotic distribution of the corresponding test statistic or pivotal quantity.
From the above discussion it can be seen that we have to do much theoretical work before claiming that a proposed method is a frequentist one. This is not easy for complex problems, and thus limits the applicability of the frequentist approach. In this paper we provide a very general approach based on local optimization to complement current frequentist inference. Our approach can be viewed as an extension of the classical bootstrap method, and reduces to it when the region for optimization shrinks to its centre.
On the theoretical side, the tests and confidence intervals constructed by our approach possess asymptotic frequentist properties as long as we have consistent estimators of the unknown parameters. This feature indicates that we do not need to derive any (asymptotic) distribution or to prove the consistency of a distribution estimator before using the proposed approach. In addition, with a proper region for optimization, the proposed approach is first order asymptotically equivalent to the bootstrap method for regular problems. On the computational side, our approach only requires the optimal objective value of an optimization problem over a local region, which can be obtained by standard optimization techniques. We also present simple experimental design-based algorithms, including a neighborhood bootstrap method, to solve the optimization problem. These algorithms are easy for practitioners to implement, and produce satisfactory results in our simulations.
The rest of this paper is organized as follows. Sections 2 and 3 introduce local optimization-based hypothesis testing and interval estimation, respectively. Their asymptotic frequentist properties are studied in Section 4. Some implementation issues are discussed in Section 5.
Section 6 presents four non-regular examples including a high-dimensional problem and a nonparametric regression problem to illustrate the proposed approach. We end the paper with some discussion in Section 7.

Local optimization-based hypothesis testing
Let the random sample X be drawn from a distribution F(·, θ), where θ lies in the parameter space Θ. Here Θ can be a subset of a Euclidean space or an infinite-dimensional space. We are interested in testing

H0: θ ∈ Θ0 ↔ H1: θ ∉ Θ0,  (1)

where Θ0 is a closed subset of Θ. Let T = T(X) ∈ R be a test statistic. Suppose that T tends to take a large value when H0 does not hold. The p-value for testing (1) is defined as

P = sup_{φ∈Θ0} Pr(T*_φ ≥ t),  (2)

where T*_φ = T(X*), X* is an independent copy of X from F(·, φ), and t is the realization of T (Fisher 1959). Given a significance level α ∈ (0, 1), we reject H0 if P < α. This test strictly controls Type I error within the Neyman-Pearson framework, as shown in the following proposition.
Proposition 1. For any α ∈ (0, 1) and θ ∈ Θ0, we have Pr_θ(P < α) ≤ α.

Proof. For θ ∈ Θ0, we have P ≥ Pr(T*_θ ≥ t), and therefore

Pr_θ(P < α) ≤ Pr_θ(Pr(T*_θ ≥ T) < α) ≤ α,  (4)

where the last inequality holds because Pr(T*_θ ≥ T) is stochastically no smaller than the uniform distribution on (0, 1). This completes the proof.
Proposition 1 is a general result, which does not require any assumption on T. From Proposition 1, a test is obtained by solving the stochastic optimization problem in (2), which can be rewritten as

P = sup_{φ∈Θ0} E[I(T*_φ ≥ t)],  (5)

where I is the indicator function and t is the realization of T. In principle, any hypothesis testing problem can be solved in this way as long as the corresponding optimization problem in (5) is solvable. In a few trivial cases, the problem in (5) has an obvious solution; an example is the one-sided Z-test. Beyond such cases, however, this method faces computational difficulties: the stochastic optimization problem is generally very hard to solve, especially when Θ0 is an unbounded set.
In statistical literature, a commonly used strategy to overcome these difficulties is based on the asymptotic distribution of the test statistic T . The optimization problem in (5) is often solvable when replacing the distribution of T by its asymptotic distribution. For example, with a T whose asymptotic distribution is free of unknown parameters, it is trivial to solve (5). For complex problems, it is often not easy to derive the asymptotic distribution, or to find such a T whose asymptotic distribution has desirable properties. A Bayesian remedy is Meng (1994)'s posterior predictive p-value, which averages the objective function in (5) over the posterior distribution of the parameter under the null hypothesis.
Here we provide a more general strategy without any requirement on the distribution of T. Suppose that H0 holds. For the true parameter θ ∈ Θ0, it suffices to obtain a p-value that controls Type I error by optimizing the objective function in (2) over any set that contains θ, instead of over the whole Θ0; see the first inequality in (4). Consequently, we need only to solve

max_{φ∈N(θ)} E[I(T*_φ ≥ t)],  (6)

where N(θ) is a closed neighborhood containing θ. Here "sup" in (5) is replaced by "max" when the objective function is continuous with respect to φ. In practice, we use a consistent estimator θ̂ of θ under H0 to replace θ in (6), and obtain

P = max_{φ∈N(θ̂)∩Θ0} E[I(T*_φ ≥ t)].  (7)

If the probability of θ ∈ N(θ̂) tends to one, then the test based on the p-value in (7) is asymptotically valid. We call this test the local optimization-based test (LOT) throughout the paper. LOT only requires the maximum value of the objective function over a neighborhood of θ̂, which can be obtained by standard optimization techniques. This feature makes LOT work for many complex problems in which it is hard to analyze the distribution of T.
When N(θ̂) shrinks to the single point θ̂, (7) becomes

P_B = E[I(T*_θ̂ ≥ t)] = Pr(T*_θ̂ ≥ t),  (8)

which is the p-value of the bootstrap test (Davison and Hinkley 1997). Therefore, LOT can be viewed as an extension of the bootstrap test. LOT always controls Type I error asymptotically as long as θ̂ is a consistent estimator, whereas the bootstrap test can fail in non-regular cases where the bootstrap distribution estimator is inconsistent (Bickel and Ren 2001). From (2) through (7) to (8), LOT is a bridge connecting Fisher's significance test and Efron's bootstrap test; see Table 1.

Local optimization-based interval estimation
The idea of approximating the p-value via local optimization can be modified to construct confidence intervals. Suppose that the parameter of interest is ξ = ξ(θ) ∈ R, and that ξ̂ = ξ̂(X) is an estimator of ξ. Let H_θ denote the c.d.f. of the pivotal quantity ξ − ξ̂, i.e., H_θ(x) = Pr_θ(ξ − ξ̂ ≤ x). It should be pointed out that the (asymptotic) distribution of ξ − ξ̂ is allowed to depend on unknown parameters, which differs from the standard definition of a pivotal quantity in textbooks. Define H_θ^{-1} as in (3).

Asymptotic properties
This section discusses asymptotic properties of the proposed local optimization-based methods. Further results involving the computational method are deferred to the Appendix. Here we only consider one-sided LOCIs; similar results also hold for two-sided LOCIs and LOTs. Some notation and definitions are needed. The parameter space Θ is assumed to be a metric space with metric ρ. For A ⊂ Θ, let |A| denote max{ρ(a, b) : a, b ∈ A}. For two c.d.f.'s F1 and F2, the Kolmogorov distance between them is defined as d_K(F1, F2) = sup_{x∈R} |F1(x) − F2(x)|. We allow the neighborhood N(·) to depend on n and write N_n(·) for clarity. We use "→d" to denote convergence in distribution, and let "a.s." abbreviate "almost surely". As in Section 3, let H_θ denote the c.d.f. of ξ − ξ̂. Since N_n(θ̂) is a random set, for φ ∈ N_n(θ̂), H_φ is actually a random c.d.f.

Assumption 1. As n → ∞, Pr(θ ∈ N_n(θ̂)) → 1 for all θ ∈ Θ.
If θ̂ is consistent, then an N_n(θ̂) satisfying Assumption 1 is easy to construct; see (15) in Section 5.1. We immediately have the following theorem.
Theorem 1. Under Assumption 1, for all θ ∈ Θ and α ∈ (0, 1),

lim inf_{n→∞} Pr_θ(ξ ≤ ξ̂ + sup_{φ∈N_n(θ̂)} H_φ^{-1}(1 − α)) ≥ 1 − α.  (13)

We next show that LOCIs are first order asymptotically equivalent to the bootstrap confidence intervals under regularity conditions. Specifically, if the bootstrap distribution estimator of ξ − ξ̂ is consistent, then the "≥" in (13) can be replaced by "=". Several assumptions are needed.
Assumption 4. (i) There exists a sequence of numbers a_n → ∞ such that a_n(ξ − ξ̂) →d K, where K is a continuous c.d.f. that is strictly increasing on its support.
Assumption 4 indicates that the bootstrap distribution estimator of a_n(ξ − ξ̂) is consistent (Shao and Tu 1995). We can use the conditional distribution of a_n[ξ(θ̂) − ξ̂(X*)] given X to approximate that of a_n(ξ − ξ̂), and this approximation leads to asymptotically valid confidence intervals for ξ. Assumption 4 holds in general regular cases. We present two simple examples.
Example 1. Let X_n be a random number from a binomial distribution BN(n, π) with parameter π ∈ (0, 1). Consider the pivotal quantity π − X_n/n. It is clear that √n(π − X_n/n) →d N(0, π(1 − π)). This result also holds for any strongly consistent estimator π̂_n of π. Specifically, with X*_n ∼ BN(n, π̂_n), we can easily prove that d_K(Ĥ_π̂, K) → 0 (a.s.) by the central limit theorem for triangular arrays, where Ĥ_π̂(x) = Pr(√n(π̂_n − X*_n/n) ≤ x | X_n), and then Assumption 4 holds.
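As a quick numerical check of Example 1 (our illustration, not part of the original analysis), the following Python sketch bootstraps √n(π̂_n − X*_n/n) from BN(n, π̂_n) and computes the Kolmogorov distance to the limiting normal c.d.f. K; the values n = 2000, π = 0.3, and M = 2000 are arbitrary choices.

```python
import math
import random
from statistics import NormalDist

random.seed(0)

def binom(n, p):
    """Crude BN(n, p) sampler via Bernoulli summation (stdlib only)."""
    return sum(random.random() < p for _ in range(n))

n, pi = 2000, 0.3
x_n = binom(n, pi)
pi_hat = x_n / n                      # strongly consistent estimator of pi

# Bootstrap the pivotal quantity: draw X*_n ~ BN(n, pi_hat) and form
# sqrt(n) * (pi_hat - X*_n / n), whose c.d.f. approximates H-hat.
M = 2000
draws = sorted(math.sqrt(n) * (pi_hat - binom(n, pi_hat) / n) for _ in range(M))

# Kolmogorov distance between the bootstrap c.d.f. and the limit K = N(0, pi(1 - pi)).
K = NormalDist(0.0, math.sqrt(pi * (1 - pi)))
d_K = max(max(abs((i + 1) / M - K.cdf(v)), abs(i / M - K.cdf(v)))
          for i, v in enumerate(draws))
print(f"Kolmogorov distance: {d_K:.3f}")
```

The distance should shrink further as n and M increase, consistent with d_K(Ĥ_π̂, K) → 0.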
Here we do not assume a parametric form for F, so the parameter space Θ is a suitable class F of c.d.f.'s. It is easy to verify Assumptions 1-3. Furthermore, for F_n ∈ N_n(F̂) and X*_1, ..., X*_n i.i.d. from F_n, Assumption 4 can be checked by verifying the Lindeberg condition in the central limit theorem for triangular arrays.
When applying bootstrap to a specific problem, we need to verify Assumptions 3 and 4 to guarantee its frequentist properties. Theorems 1 and 2 indicate that we do not need such theoretical work when using LOCI. With a proper N_n(θ̂), LOCI possesses both the basic frequentist property in (13) and a potential bonus: it enjoys the same first order frequentist property as the bootstrap method when the two assumptions hold (even though we may not know whether they do). It can be expected that, under much stronger conditions, LOCI has high-order asymptotic properties like bootstrap (Hall 1992). We do not pursue this here since it is difficult to specify an N_n(θ̂) satisfying such conditions for complex problems.

Implementation
This section discusses how to implement LOT and LOCI. We focus on the cases where Θ is a subset of a Euclidean space. Therefore, it suffices to solve finite-dimensional optimization problems in LOT and LOCI. For some problems with infinite-dimensional parameter spaces, LOT or LOCI is still available through reasonable simplification; see Section 6.4.

Specification of N(θ̂)
The first issue is to determine the neighborhood N(θ̂) in (7) and (12) over which we solve the optimization problem. Suppose that the dimension of Θ is q and θ̂ = (θ̂1, ..., θ̂q)' is a consistent estimator of θ = (θ1, ..., θq)'. The basic principle is to select N(θ̂) satisfying Assumption 1; a simple choice is

N(θ̂) = [θ̂1 − δ, θ̂1 + δ] × · · · × [θ̂q − δ, θ̂q + δ]  (15)

for some small constant δ > 0. If we further know the convergence rate of θ̂, then the second principle is to select N(θ̂) satisfying Assumption 2. By Theorem 2, this selection makes the local optimization-based method asymptotically equivalent to bootstrap if the bootstrap distribution estimator is consistent. For example, with θ̂ − θ = O_p(1/√n), a selection of N(θ̂) simultaneously satisfying Assumptions 1 and 2 is

N(θ̂) = [θ̂1 − δ log(n)/√n, θ̂1 + δ log(n)/√n] × · · · × [θ̂q − δ log(n)/√n, θ̂q + δ log(n)/√n]  (16)

for some constant δ > 0. The constant δ in (15) or (16) can be specified empirically. For complex problems, the exact convergence rate of θ̂ is difficult to know. We will see in Section 6 that LOT and LOCI have good finite-sample performance even with a simple N(θ̂) like (15) that only satisfies Assumption 1.
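For concreteness, hypercube neighborhoods of the two kinds just described can be built coordinatewise; the helper below is an illustrative sketch (the function name and the numerical values are ours).

```python
import math

def neighborhood(theta_hat, delta, n=None):
    """Coordinatewise hypercube N(theta_hat).

    With n=None, the half-width is the fixed constant delta, as in (15)
    (Assumption 1 only); with a sample size n and a root-n consistent
    estimator, the half-width delta * log(n) / sqrt(n) mimics (16).
    """
    half = delta if n is None else delta * math.log(n) / math.sqrt(n)
    return [(t - half, t + half) for t in theta_hat]

# A hypothetical 3-dimensional estimate with delta chosen empirically.
print(neighborhood([0.2, 0.5, 0.3], delta=0.5, n=30))
```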
It may seem more reasonable to use the variances of the θ̂_j's to construct N(θ̂). When variance estimators are not straightforward, the jackknife, bootstrap (Shao and Tu 1995), or even Bayesian methods can be used to estimate the variances. However, such methods add extra theoretical and computational work, and the final form of N(θ̂) still contains constants that need to be specified empirically. Therefore, we suggest using variance estimators only when they are straightforward.

Importance sampling-based approach
Suppose that F(·, θ) has a probability density function (p.d.f.) f(·, θ) with respect to a σ-finite measure ν, and that {f(·, φ) : φ ∈ Θ0} has a common support. We use an importance sampling-based approach to solve the stochastic optimization problem in (7).
First we approximate the objective function in (7) by importance sampling. Note that, for φ ∈ N(θ̂) ∩ Θ0,

E[I(T*_φ ≥ t)] = E[I(T(X*) ≥ t) f(X*, φ)/f(X*, θ̂)] = r(φ),

where X* ∼ f(·, θ̂). According to the sample averaging approximation method in stochastic optimization (Shapiro 2003), we compute the p-value as

P_IS = max_{φ∈N(θ̂)∩Θ0} r̂(φ),  (17)

where

r̂(φ) = (1/M) Σ_{m=1}^M I(T(X*_m) ≥ t) f(X*_m, φ)/f(X*_m, θ̂)  (18)

is the approximation of r(φ) based on X*_1, ..., X*_M i.i.d. from f(·, θ̂) with Monte Carlo sample size M. With sufficiently large M, P_IS can be arbitrarily close to P in (7). There are many iterative algorithms available for solving the deterministic optimization problem in (17), such as the interior point method (Boyd and Vandenberghe 2004).
We can also use an experimental design-based method to approximate the p-value in (17). Take L points φ1, ..., φL uniformly spaced over N(θ̂) ∩ Θ0, and then compute

P_IS-D = max_{l=1,...,L} r̂(φ_l),  (19)

where r̂ is defined in (18). We call these points try points throughout this paper; they can be constructed from so-called space-filling designs in experimental design; see Section 5.4. Since N(θ̂) is a small neighborhood, P_IS-D often performs well with a moderate L. The design-based method is very easy to implement, and is suitable for those who are not familiar with optimization methods. More sophisticated space-filling design-based optimization methods can be found in Fang, Hickernell, and Winker (1996).
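To make the recipe in (17)-(19) concrete, the sketch below applies it to a toy one-sided normal-mean test (H0: μ ≤ 0 with T = √n X̄); this model, δ = 0.2, and L = 21 equally spaced try points are our illustrative assumptions, and the closed-form normal density ratio plays the role of f(X*, φ)/f(X*, θ̂).

```python
import math
import random

random.seed(1)

# Toy setting (ours, not the paper's): X_1,...,X_n i.i.d. N(mu, 1),
# H0: mu <= 0, test statistic T = sqrt(n) * mean(X); large T rejects H0.
n, mu_true = 50, 0.0
x = [random.gauss(mu_true, 1.0) for _ in range(n)]
t_obs = math.sqrt(n) * sum(x) / n
theta_hat = min(sum(x) / n, 0.0)      # consistent estimator restricted to H0

# Monte Carlo sample from f(., theta_hat); keep (T*, sum of X*) per dataset.
M = 5000
stats = []
for _ in range(M):
    xs = [random.gauss(theta_hat, 1.0) for _ in range(n)]
    stats.append((math.sqrt(n) * sum(xs) / n, sum(xs)))

def r_hat(phi):
    """Importance-sampling estimate of Pr_phi(T* >= t_obs), as in (18).
    For normal densities, f(x, phi)/f(x, theta_hat) depends on the data
    only through s = sum(x)."""
    log_c = -n * (phi ** 2 - theta_hat ** 2) / 2.0
    return sum(math.exp((phi - theta_hat) * s + log_c)
               for t_star, s in stats if t_star >= t_obs) / M

# Design-based p-value (19): maximize r_hat over try points in N(theta_hat) ∩ Θ0.
delta, L = 0.2, 21
lo, hi = theta_hat - delta, min(theta_hat + delta, 0.0)
try_points = [lo + (hi - lo) * l / (L - 1) for l in range(L)]
p_value = max(r_hat(phi) for phi in try_points)
print(f"design-based IS p-value: {p_value:.3f}")
```

Because the densities are normal, the importance weights depend on each bootstrap dataset only through Σ_i X*_i, which keeps the computation cheap.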
Here we only consider the computation of upper limits, i.e., the first optimization problem in (12). Let ϕ = H_φ^{-1}(γ) and S(φ, ϕ) = H_φ(ϕ). Suppose that H_φ is continuous and strictly increasing on its support for φ ∈ N(θ̂). The problem in (12) is then equivalent to the constrained optimization problem

max_{φ∈N(θ̂), ϕ} ϕ subject to S(φ, ϕ) = γ.  (20)

For a q-dimensional space Θ, the problem involves q + 1 decision variables. Similar to the importance sampling-based sample averaging approximation in (18), we can construct an importance-sampling estimate Ŝ(φ, ϕ) of S(φ, ϕ), and the solution to

max_{φ∈N(θ̂), ϕ} ϕ subject to Ŝ(φ, ϕ) = γ  (21)

can be used to approximate that of (20). Note that Ŝ(φ, ϕ) may not equal γ exactly in (21).
In practice we handle the equivalent problem

max_{φ∈N(θ̂), ϕ} ϕ subject to Ŝ(φ, ϕ) ≤ γ  (22)

instead of (21). A design-based method similar to (19) can also be used to solve (22). Since (22) has no straightforward solution even for a given φ ∈ N(θ̂), we do not recommend such a method. A simpler and more general method for computing LOCIs is to directly compute the quantiles of H_φ for a given φ; this method is discussed in the next subsection.

Neighborhood bootstrap
This subsection discusses a general method, called neighborhood bootstrap, to implement LOT and LOCI. This method still works in cases where the importance sampling-based approach in Section 5.2 fails. We first consider LOT. Like the design-based p-value in (19), take L try points φ1, ..., φL uniformly spaced over N(θ̂) ∩ Θ0. The difference from (19) is that the neighborhood bootstrap method directly approximates the objective value in (7) by the Monte Carlo method. Specifically, for each φ_l, l = 0, 1, ..., L, generate X*_{l,1}, ..., X*_{l,M} i.i.d. from F(·, φ_l), where φ0 = θ̂. Then the p-value in (7) can be approximated by

P_NB = max_{l=0,1,...,L} (1/M) Σ_{m=1}^M I(T(X*_{l,m}) ≥ t).

For LOCI, we still consider the computation of upper limits in (12). With {φ1, ..., φL} uniformly spaced over N(θ̂), take bootstrap samples X*_{l,1}, ..., X*_{l,M} i.i.d. from F(·, φ_l) for each l, and approximate the upper limit by

ξ̂ + max_{l=1,...,L} Ĥ_{φ_l}^{-1}(1 − α),  (23)

where Ĥ_{φ_l} is the empirical c.d.f. of ξ(φ_l) − ξ̂(X*_{l,m}), m = 1, ..., M. Neighborhood bootstrap is a very general method. In principle, it can be applied to infinite-dimensional parameter spaces if there are well-defined space-filling designs for such spaces. Another advantage of neighborhood bootstrap is its easy implementation, especially for computing LOCIs. For LOT, neighborhood bootstrap is slightly more time-consuming than the importance sampling-based approach.
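The neighborhood bootstrap upper limit can be sketched for a toy normal-mean problem; the model, the neighborhood constant, and the Monte Carlo sizes below are our illustrative choices, not the paper's.

```python
import math
import random

random.seed(2)

# Toy setting (ours): X_1,...,X_n i.i.d. N(mu, 1); parameter of interest
# xi = mu, estimator xi_hat = mean(X), pivotal quantity mu - mean(X).
n, mu_true, alpha = 100, 1.0, 0.05
x = [random.gauss(mu_true, 1.0) for _ in range(n)]
mu_hat = sum(x) / n

def quantile(vals, gamma):
    """Empirical gamma-quantile (order-statistic definition)."""
    vals = sorted(vals)
    return vals[min(math.ceil(gamma * len(vals)) - 1, len(vals) - 1)]

# Try points uniformly spaced over N(mu_hat) = [mu_hat - d_n, mu_hat + d_n],
# with d_n = delta * log(n) / sqrt(n) as in (16).
d_n = 0.5 * math.log(n) / math.sqrt(n)
L, M = 11, 2000
try_points = [mu_hat - d_n + 2 * d_n * l / (L - 1) for l in range(L)]

# Neighborhood bootstrap: for each phi_l, approximate H_phi^{-1}(1 - alpha) by
# the empirical quantile of phi_l - mean(X*) over M samples from F(., phi_l).
upper = mu_hat + max(
    quantile([phi - sum(random.gauss(phi, 1.0) for _ in range(n)) / n
              for _ in range(M)], 1 - alpha)
    for phi in try_points)
print(f"{1 - alpha:.0%} upper confidence limit for mu: {upper:.3f}")
```

For this regular problem the result is close to the textbook limit X̄ + z_{1−α}/√n, illustrating the first order equivalence with bootstrap.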

Design of try points
The design-based p-value in (19) and the neighborhood bootstrap method in Section 5.3 both need L try points φ1, ..., φL uniformly spaced over N(θ̂). This subsection presents some discussion on the design of these points. Usually N(θ̂) is selected as a q-dimensional hypercube like (15) or (16), say N(θ̂) = [c1, d1] × · · · × [cq, dq]. For points ψ1, ..., ψL in [0, 1]^q, let φ_ij = c_j + (d_j − c_j)ψ_ij for i = 1, ..., L, j = 1, ..., q, and we have φ_i = (φ_i1, ..., φ_iq)' ∈ N(θ̂) for i = 1, ..., L. Therefore, it suffices to consider the design of ψ1, ..., ψL in [0, 1]^q, called the initial design in the following. As mentioned in Section 5.2, the initial design can be constructed from space-filling designs in [0, 1]^q. Such designs include grids, Latin hypercube designs (McKay, Beckman, and Conover 1979), and uniform designs (Fang et al. 2000), among others. A simple choice is the grid

{((2u1 − 1)/(2U), ..., (2uq − 1)/(2U))' : u_j = 1, ..., U, j = 1, ..., q},  (24)

where U is a positive integer. There are L = U^q points in the grid, which leads to unaffordable computations for large q. Another choice is the Latin hypercube design (LHD) (McKay, Beckman, and Conover 1979), which is easy to construct for any L and q. The LHD is spaced uniformly in each dimension, and its space-filling properties over the whole [0, 1]^q can be improved by iterative algorithms (Park 2001). There are functions for generating LHDs in both MATLAB and R.
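A minimal pure-Python LHD construction, mapped into a hypothetical hypercube N(θ̂), might look as follows (in practice one can instead call the LHD generators available in MATLAB and R).

```python
import random

random.seed(3)

def latin_hypercube(L, q):
    """L-run LHD in [0,1]^q: each coordinate is stratified into L equal cells,
    with one point placed uniformly inside each cell (McKay et al. 1979)."""
    cols = []
    for _ in range(q):
        perm = list(range(L))
        random.shuffle(perm)
        cols.append([(k + random.random()) / L for k in perm])
    return [tuple(col[i] for col in cols) for i in range(L)]

def to_neighborhood(psi, bounds):
    """Map an initial-design point psi in [0,1]^q into the hypercube N(theta_hat)."""
    return tuple(c + (d - c) * u for u, (c, d) in zip(psi, bounds))

bounds = [(0.1, 0.3), (0.4, 0.6)]     # a hypothetical N(theta_hat)
try_points = [to_neighborhood(psi, bounds) for psi in latin_hypercube(30, 2)]
```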
Note that in fact we need to design φ1, ..., φL in N(θ̂) ∩ Θ0 for LOT or in N(θ̂) ∩ Θ for LOCI. For irregular or constrained parameter spaces, this problem becomes complicated. A feasible solution is to design more points in N(θ̂) and then keep those in the intersection.

Illustrative examples
This section presents four examples to illustrate LOT and LOCI, in which the (asymptotic) distributions of the test statistics or pivotal quantities are non-regular or unclear.
The LOCI of π_max can be easily constructed by the neighborhood bootstrap method in (23), where the pivotal quantity is π_max − π̂_max. We next conduct a simulation study to compare the LOCI with the ordinary bootstrap and m-out-of-n bootstrap methods. Here we focus on two-sided 1 − α confidence intervals with α = 0.05. In our simulation study, k is fixed at 5, and n = 30 and 60 are considered. We use six vectors of cell probabilities; see Table 2. In the m-out-of-n bootstrap method, m is set as the integer part of 2√n. The neighborhood N(π̂) is

[π̂1 − δ log(n)/√n, π̂1 + δ log(n)/√n] × · · · × [π̂k − δ log(n)/√n, π̂k + δ log(n)/√n],

where two values of δ, 0.1 and 0.5, are used. It is clear that N(π̂) satisfies Assumptions 1 and 2. We use two grids in (24) to design the try points, with U = 3 for δ = 0.1 and U = 5 for δ = 0.5. Note that there is a constraint Σ_{i=1}^k π_i = 1 on the parameter space. There are 51 and 101 try points in the two grids, respectively. The bootstrap sample size is 5000 in all the above methods.
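A compact sketch of this construction is given below; to keep it short we use our own simplifications, namely random renormalized perturbations of π̂ as a stand-in for the constrained grid design, and much smaller Monte Carlo sizes than in the simulation study.

```python
import math
import random

random.seed(6)

def rmultinomial(n, p):
    """Multinomial counts by inversion of n uniform draws."""
    cum, s = [], 0.0
    for v in p:
        s += v
        cum.append(s)
    counts = [0] * len(p)
    for _ in range(n):
        u = random.random()
        for j, c in enumerate(cum):
            if u <= c:
                counts[j] += 1
                break
        else:                      # guard against floating-point shortfall
            counts[-1] += 1
    return counts

n, pi, alpha = 60, [0.3, 0.3, 0.2, 0.1, 0.1], 0.05
counts = rmultinomial(n, pi)
pi_hat = [c / n for c in counts]
pm_hat = max(pi_hat)

# Try points: random perturbations of pi_hat within half-width d_n, renormalized
# onto the simplex (a simple stand-in for the constrained grid design).
d_n = 0.1 * math.log(n) / math.sqrt(n)
try_points = []
for _ in range(50):
    p = [max(ph + random.uniform(-d_n, d_n), 1e-6) for ph in pi_hat]
    s = sum(p)
    try_points.append([v / s for v in p])

# For each try point, bootstrap the pivotal quantity pi_max - pi_max_hat.
M = 500
devs_lo, devs_hi = [], []
for p in try_points:
    pm = max(p)
    devs = sorted(pm - max(c / n for c in rmultinomial(n, p)) for _ in range(M))
    devs_lo.append(devs[int(alpha / 2 * M)])
    devs_hi.append(devs[int((1 - alpha / 2) * M) - 1])
ci = (pm_hat + min(devs_lo), pm_hat + max(devs_hi))
print(f"two-sided LOCI for pi_max: ({ci[0]:.3f}, {ci[1]:.3f})")
```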
We repeat the simulation 5000 times to compute the coverage rates (CRs), mean lengths (MLs), and standard deviations of lengths (SDLs) of the confidence intervals. The results are shown in Table 2. We can see that the bootstrap interval usually has a low CR. For dispersed π, the m-out-of-n bootstrap method lacks efficiency with longer MLs, whereas the two LOCIs perform better. As expected, the LOCI with δ = 0.5 is more conservative than that with δ = 0.1. In summary, the LOCI is at least comparable to the m-out-of-n bootstrap interval.

Interval estimation for the location parameter of the three-parameter Weibull distribution
The Weibull distribution is widely used in many fields such as survival analysis (Cox and Oakes 1984) and reliability (Murthy, Xie, and Jiang 2004). Let X 1 , . . . , X n be i.i.d.
observations from the Weibull distribution Wbl(a, b, τ), whose p.d.f. is

f(x) = (b/a)((x − τ)/a)^{b−1} exp{−((x − τ)/a)^b}

for x > τ, a > 0, b > 0, and τ ∈ R. The parameters a, b, and τ are known as the scale, shape, and location parameters, respectively. If τ is known, then likelihood-based inference for the parameters is straightforward (Murthy, Xie, and Jiang 2004). With an unknown τ, the standard method faces difficulties since the distributions do not have a common support (Blischke 1974). Estimation of the parameters of the three-parameter Weibull distribution remains an active topic, and many estimators have been proposed; see Lockhart and Stephens (1994), Cousineau (2009), and Teimouri, Hoseini, and Nadarajah (2013), among others. Since the (asymptotic) distributions of these estimators are difficult to derive, there are limited results on interval estimation for the parameters.
This subsection constructs LOCIs for τ based on maximum product of spacings (MPS) estimation (Cheng and Amin 1983). Obviously our method is also applicable to the other parameters. The MPS estimators â, b̂, and τ̂ are constructed by maximizing

Π_{i=1}^{n+1} [F(X_(i), a, b, τ) − F(X_(i−1), a, b, τ)],  (25)

where X_(1) ≤ · · · ≤ X_(n) are the order statistics, X_(0) = τ, and X_(n+1) = ∞. For all a, b, and τ, the MPS estimators are consistent (Cheng and Amin 1983). Furthermore, for b > 2, they have the same asymptotic distributions as the MLEs. It is not straightforward to construct confidence intervals for τ from the asymptotic properties of τ̂ since b is unknown. Furthermore, the validity of the corresponding bootstrap confidence interval is unclear.
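The MPS criterion (25) is straightforward to code. The sketch below evaluates the log product of spacings for Wbl(a, b, τ) and profiles it over τ on a grid, with a and b held at their true values; this is a simplification for illustration, whereas the paper maximizes over all three parameters.

```python
import math
import random

random.seed(4)

def weibull_cdf(x, a, b, tau):
    """C.d.f. of Wbl(a, b, tau): 1 - exp(-((x - tau)/a)^b) for x > tau."""
    return 1.0 - math.exp(-(((x - tau) / a) ** b)) if x > tau else 0.0

def log_mps(x, a, b, tau):
    """Log product of spacings (25); -inf whenever tau >= min(x)."""
    xs = sorted(x)
    if tau >= xs[0] or a <= 0 or b <= 0:
        return -math.inf
    cdf = [0.0] + [weibull_cdf(v, a, b, tau) for v in xs] + [1.0]
    return sum(math.log(cdf[i + 1] - cdf[i]) if cdf[i + 1] > cdf[i] else -math.inf
               for i in range(len(cdf) - 1))

# Simulate from Wbl(1, 2, 1) by inversion, then profile the MPS criterion
# over tau on a grid strictly below min(x).
a0, b0, tau0, n = 1.0, 2.0, 1.0, 200
x = [tau0 + a0 * (-math.log(1.0 - random.random())) ** (1.0 / b0) for _ in range(n)]
taus = [min(x) - 0.5 + 0.005 * k for k in range(100)]
tau_hat = max(taus, key=lambda t: log_mps(x, a0, b0, t))
print(f"profiled MPS estimate of tau: {tau_hat:.3f} (true value {tau0})")
```

Unlike the likelihood, the MPS criterion stays bounded as τ approaches X_(1), which is why it yields consistent estimators for all b.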
We use neighborhood bootstrap to construct two-sided 1 − α confidence intervals for τ, and conduct a simulation study to evaluate their performance. The pivotal quantity is τ − τ̂.
The initial design is the grid in (24) with U = 3, which corresponds to L = 27. Since the results are sensitive to the value of b, we set the neighborhood N(â, b̂, τ̂) as

[â − δ_n, â + δ_n] × [b̂ − δ_n, b̂ + δ_n] × [τ̂ − δ_n, τ̂ + δ_n],

where δ_n = 4 exp{−(1/b̂)^5} log(n)/√n. It is clear that N(â, b̂, τ̂) satisfies Assumptions 1 and 2 for all a, b, and τ by the asymptotic properties of the MPS estimators. For τ = 1, two values of n, and several combinations of (a, b), the simulation results based on 1000 repetitions are reported in Table 3 with α = 0.05. The bootstrap sample sizes used in the bootstrap interval and the LOCI are both 1000. We can see that, for b = 0.5, the CR of the bootstrap interval is satisfactory, and the LOCI performs similarly with slightly longer ML. For larger b, the bootstrap interval performs poorly, and the LOCI is much better in terms of CR.

Testing whether all the coefficients in the high-dimensional regression are nonnegative
High-dimensional data analysis, which deals with models where the number of parameters is larger than the sample size, has been a very active research area in recent years. We consider the regression model

y = Xβ + ε,  (26)

where X = (x_ij) is the n × p regression matrix, y = (y1, ..., yn)' is the response vector, β = (β1, ..., βp)' is the vector of regression coefficients, and ε = (ε1, ..., εn)' is a vector of i.i.d. normal random errors with zero mean and finite variance σ². Let p0 denote the number of elements in {j = 1, ..., p : β_j ≠ 0}. For p ≫ n, we make the sparsity assumption that p0 ≪ n. Many methods have been proposed to estimate the sparse β in (26), such as the lasso (Tibshirani 1996), the smoothly clipped absolute deviation method (Fan and Li 2001), and the minimax concave penalty method (Zhang 2010). Under the assumption that all the coefficients are known to be nonnegative, Efron et al. (2004) introduced a nonnegative lasso method to estimate β, which solves

min_{β1,...,βp ≥ 0} ‖y − Xβ‖² + λ Σ_{j=1}^p β_j,  (27)

where λ > 0 is a tuning parameter. Applications of this method can be found in Frank and Heiser (2006) and Wu, Yang, and Liu (2014). In this subsection we use the data to test whether the assumption of the nonnegative lasso method is reasonable, i.e., test the hypotheses

H0: β_j ≥ 0, j = 1, ..., p ↔ H1: H0 does not hold.  (28)
In classical n > p settings, testing (28) has been addressed by the likelihood ratio test; see Silvapulle and Sen (2011). However, this method cannot be directly extended to the high-dimensional case since the MLEs perform very poorly there. Here we borrow the idea of the generalized likelihood ratio test in nonparametric statistics (Fan, Zhang, and Zhang 2001), and construct the test statistic T, where β̂_{H0} and β̂_{H1} are the estimators of β under H0 and H1, respectively. A natural choice is to use the nonnegative lasso estimator in (27) and the lasso estimator as β̂_{H0} and β̂_{H1}, respectively. Since the distribution of T under H0 is unclear, we use LOT to test (28).
First of all we need to estimate all the unknown parameters under H0. Wu, Yang, and Liu (2014) showed that the nonnegative lasso estimator in (27) is consistent under H0. By Fan, Guo, and Hao (2012), a consistent estimator of σ² is σ̂² = ‖y − Xβ̂_LS‖²/n, where β̂_LS is the ordinary least squares estimator of β under the submodel selected by the nonnegative lasso. Since p is large, the neighborhood N(β̂_{H0}, σ̂²) should be selected carefully to avoid high-dimensional optimization. We select N(β̂_{H0}, σ̂²) as a hypercube of half-width δ over σ̂² and the nonzero components of β̂_{H0} only, keeping the remaining components fixed at zero (29), where δ > 0 is a constant. By the importance sampling-based approach in Section 5.2, the p-value of the LOT for (28) is given by (17).
Note that the asymptotic results in Section 4 cannot be directly applied for diverging p.
However, it is not hard to show that, if H0 holds, then Pr((β, σ²) ∈ N(β̂_{H0}, σ̂²)) → 1 as n → ∞ under regularity conditions, by the selection consistency properties of the nonnegative lasso (Wu, Yang, and Liu 2014). Therefore, similar to Theorem 1, the asymptotic frequentist property of the LOT can be guaranteed.
We conduct a simulation study to compare the above LOT with the bootstrap test whose p-value is given in (8). In the simulation, the rows of X in (26) are i.i.d. from a multivariate normal distribution N(0, Σ), whose covariance matrix Σ = (σ_ij)_{p×p} has entries σ_ii = 1, i = 1, ..., p, and σ_ij = 0.1, i ≠ j. The random errors ε1, ..., εn are i.i.d. N(0, 1). We use three configurations of n and p: (n, p) = (20, 40), (40, 80), and (60, 120). We take the tuning parameter λ = 4 log(p)/n in the lasso and nonnegative lasso estimators, as recommended by Wu, Yang, and Liu (2014). In the LOT, δ in (29) is set as 0.03, and we compute the p-value in (19) with 30 try points. Here the initial design of the try points is an LHD whose dimension is the number of nonzero β̂_{H0,j}'s; see (29). In the two methods, the bootstrap sample sizes are both 2000. The significance levels α = 0.05 and α = 0.1 are considered.
Four vectors of the coefficients under H0 are used: (i) β1 = · · · = βp = 0; (ii) β1 = 2 and β_j = 0 for other j; (iii) β1 = β2 = 2 and β_j = 0 for other j; (iv) β1 = β2 = β3 = 2 and β_j = 0 for other j. To compute the power, we consider β1 = 2, β2 = c < 0, and β_j = 0 for other j. For each model, we simulate 2000 data sets, and report the Type I errors and powers in Table 4 and Figure 1, respectively. It can be seen that the bootstrap test cannot control Type I error well, and that the LOT has reasonable performance in terms of both Type I error and power. The power performance of the LOT is similar for other parameter configurations.

Interval estimation for the minimum of an unknown function
Consider the nonparametric regression model

y_i = r(x_i) + ε_i, i = 1, ..., n,  (30)

where r is a continuous function defined on [0, 1] and the ε_i's are i.i.d. N(0, σ²) random errors.
However, to the best of the author's knowledge, there is no result on interval estimation for ξ in the regression setting.
Under regularity conditions, sup_{x∈[0,1]} |r̂(x) − r(x)| → 0 in probability (Härdle and Luckhaus 1984), which implies that ξ̂ is a consistent estimator of ξ. Additionally, a consistent estimator σ̂² of σ² can be obtained from the residual sum of squares of r̂. A choice of N_n(θ̂) satisfying the condition in Proposition 3 is given in (31), where δ > 0 is a constant.
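As an illustration of the plug-in estimator ξ̂ (with our own toy choices of r, n, σ, and h, and a Nadaraya-Watson-type kernel estimator standing in for r̂), consider:

```python
import math
import random

random.seed(5)

def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def r_hat(x0, xs, ys, h):
    """Nadaraya-Watson kernel estimate of r(x0) with bandwidth h."""
    w = [epanechnikov((x0 - xi) / h) for xi in xs]
    sw = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / sw if sw > 0 else math.inf

# Toy instance of model (30): r(x) = (x - 0.5)^2, so xi = min r = 0 at x = 0.5.
n, sigma, h = 400, 0.1, 0.1
xs = [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]
ys = [(xi - 0.5) ** 2 + random.gauss(0.0, sigma) for xi in xs]

# Plug-in estimator: xi_hat = minimum of the estimated curve over a grid.
grid = [j / 200 for j in range(201)]
xi_hat = min(r_hat(g, xs, ys, h) for g in grid)
print(f"estimated minimum: {xi_hat:.3f} (true minimum 0)")
```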
We next conduct a simulation study to compare bootstrap two-sided 1 − α confidence intervals with LOCIs. We fix σ² = 1/4, and x_i = (2i − 1)/(2n) for i = 1, ..., n. The kernel function K in r̂ is the Epanechnikov kernel, and the bandwidth h is set as n^{-1/5}/5. In the LOCIs, we use δ = 0.25 in (31), and take 60-run LHDs as the initial designs of try points for implementing neighborhood bootstrap. The bootstrap sample size is 5000. Based on 5000 repetitions, we report the simulation results in Table 5. It can be seen that the bootstrap method performs poorly in terms of CR, and that the LOCI is much better in all cases.

Discussion
In this paper we have introduced local optimization-based inference, including LOT and LOCI. The main advantage of our approach is that, unlike the current frequentist approach, it does not require hard work in deriving (asymptotic) distributions, since its asymptotic frequentist properties hold as long as we have consistent estimators of the unknown parameters. The implementation of our approach is based on standard computational methods such as importance sampling and Monte Carlo, which are easy for practitioners to master. Local optimization-based inference can be viewed as an extended bootstrap that complements current frequentist inference. It can quickly provide frequentist solutions to complex problems in practice, and has broad potential applications. The illustrative examples have shown this to some extent. Although local optimization-based inference loses little for regular problems (see Theorem 2), it is most suitable for non-regular problems in which theoretical derivation is difficult.
We give a further discussion on the specification of the neighborhood N(θ̂) here. Generally speaking, the choice of N(θ̂) is flexible; see Section 6. In real applications, for a dataset with fixed sample size n, it is not hard to find a proper N(θ̂) that guarantees satisfactory performance of LOT or LOCI via empirical evaluations. Besides the methods in Section 5.1, we can also use informative priors, if any, to inform the construction of N(θ̂). This provides a way to connect our approach with Bayesian statistics, and is worth studying in the future. A problem related to the specification of N(θ̂) is that it is difficult to obtain the exact solution, or to know how close an approximate solution is to it, even for a small N(θ̂). This problem is not very serious in practice since our ultimate goal is inference rather than optimization.
Simulation results in Section 6 show that the design-based approximation with a moderate L yields satisfactory finite-sample performance of LOT and LOCI even for high-dimensional N(θ̂). In fact, when bootstrap gives anti-conservative results, local optimization-based inference can always improve its performance, even with a relatively poor optimization algorithm, since the corresponding optimization problem possesses a better solution principle (Xiong 2014): a better approximation to the exact solution yields a smaller Type I error or a higher coverage rate.
A disadvantage of local optimization-based inference is its computational cost, which can be viewed as the price of generality. We can replace the Monte Carlo method in the implementation with LHD sampling or quasi-Monte Carlo to improve computational efficiency (Homem-de-Mello 2008). Iterative algorithms such as stochastic approximation (Kushner and Yin 1997) are also available to solve the stochastic programming problem in (7). Another future topic is to apply the proposed approach to general infinite-dimensional problems, which calls for infinite-dimensional optimization techniques. In the neighborhood bootstrap method, this requires new space-filling designs in infinite-dimensional spaces.
Appendix: Asymptotic properties of the design-based algorithm

As mentioned in Section 5, max_{l=1,...,L_n} H_{φ_l}^{-1}(1 − α) can be used to approximate the upper limit sup_{φ∈N_n(θ̂)} H_φ^{-1}(1 − α) in LOCI, where {φ1, ..., φ_{L_n}} is a dense subset of N_n(θ̂). We next prove frequentist properties of this approximation. These results are less important in practice since we can obtain an arbitrarily accurate approximation with a powerful computer. We include them here because they may still be of interest in theory.