coneproj: An R Package for the Primal or Dual Cone Projections with Routines for Constrained Regression

The coneproj package contains routines for cone projection and quadratic programming, plus applications in estimation and inference for constrained parametric regression and shape-restricted regression problems. A short routine, check_irred, is included to check the irreducibility of a matrix whose rows are taken to be a set of cone edges used by coneA or coneB. For the coneA and coneB functions, the vector to project is provided by the user, along with the cone specification and a weight vector. For coneA, a constraint matrix is specified to define the cone; for coneB, the cone edges are provided. The coneA and coneB algorithms are coded and compiled in C++ and called from R. The qprog function transforms a quadratic programming problem into a cone projection problem and calls coneA. The constreg function does estimation and inference for parametric least-squares regression with constraints on the parameters (using coneA); a p value for the "one-sided" test is provided. The shapereg function uses coneB to provide a least-squares estimator for a regression function under several choices of constraints, including isotonic and convex regression functions, as well as estimates of parametrically modeled covariate effects. Results from hypothesis tests for the significance of the effects are also provided. This package is now available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=coneproj.


Introduction and overview
The projection of $y \in \mathbb{R}^n$ onto a set $C \subseteq \mathbb{R}^n$ is defined as the point $\hat\theta \in C$ that minimizes the Euclidean distance $\|y - \theta\|$ over $\theta \in C$. A unique minimizer exists if $C$ is closed and convex. We are concerned with projecting onto convex polyhedral cones expressed as

$$C = \{\theta \in \mathbb{R}^n : A\theta \ge 0\} \quad (1)$$

for an $m \times n$ constraint matrix $A$. The set $C$ is a cone because given $\theta \in C$ we have $\alpha\theta \in C$ for all non-negative real numbers $\alpha$, and it is straightforward to verify that $C$ is convex. We require that $A$ be "irreducible" as defined by Meyer (1999); the intuitive meaning is "nonredundant." The term "polyhedral" means finitely generated, so that points in the cone can be characterized as linear combinations of a finite set of points (generators) where the coefficients of the linear combination are non-negative.
The function coneA will return the projection given the vector $y \in \mathbb{R}^n$ and the $m \times n$ matrix $A$. The function coneB will return the projection given $y$ and the generators of $C$. In addition, if a positive weight vector $w$ is provided, the functions return the minimizer of $\sum_{i=1}^n w_i (y_i - \theta_i)^2$ over $C$.
Cone projection is a special case of quadratic programming, which is concerned with minimizing $\theta^\top Q\theta - 2c^\top\theta$ over $C$. If $Q$ is positive definite, there is a unique minimum. The function qprog returns this minimum, given $Q$, $c$, and a constraint matrix $A$.
Constrained parametric regression is an application of quadratic programming. Given an $n \times k$ design matrix $X$, a data vector $y \in \mathbb{R}^n$, and the regression model

$$y = X\beta + \varepsilon, \quad (2)$$

where the errors $\varepsilon$ are assumed to be mean-zero and identically distributed, the least-squares estimator $\hat\beta \in \mathbb{R}^k$ is found by projecting $y$ onto the linear space spanned by the columns of $X$. However, if it is known a priori that $A\beta \ge 0$ for a given $m \times k$ matrix $A$, the constrained least-squares estimator can be found using a quadratic programming routine with $Q = X^\top X$. The function constreg provides the constrained estimate of $\beta$ and, in addition, tests $H_0\colon \beta \in V$ versus $H_1\colon \beta \in C$, where $V$ is the null space of $A$ and $C$ is the cone defined by $A$. The one-sided test was shown to have more power than the two-sided test in Meyer and Wang (2012). For a derivation of the test, see Raubertas, Lee, and Nordheim (1986).
The semi-parametric regression model

$$y_i = f(t_i) + x_i^\top\alpha + \varepsilon_i \quad (3)$$

is fit using the function shapereg. The function $f$ can be assumed to be increasing or decreasing, convex or concave, or any combination of monotonicity and convexity. The standard errors for $\hat\alpha$ are provided, with approximate p values for the (two-sided) tests of significance. A hypothesis test of $H_0\colon f$ is constant is performed if the shape assumption includes monotonicity, and $H_0\colon f$ is linear is tested if the shape assumption is convexity or concavity.
The methods are discussed in detail in the next section. In Section 3 a user's guide for the package coneproj (Meyer and Liao 2014) in R (R Core Team 2014) is provided, and the data sets in the package are analyzed and discussed. The discussion in Section 4 compares our routines with some existing R packages, and presents some simulations showing how these packages can be useful in practice.

Cone projection routines in this package
The cone projection algorithm of Meyer (2013b) forms the main subroutine of the package. Previous methods include interior point methods (see Fang and Puthenpura 1993 for further details), which can be used more generally for projecting onto a convex set. The methods of Goldfarb and Idnani (1983) and Fraser and Massam (1989) use primal-dual constraint ideas. Silvapulle and Sen (2005, Chapter 3) provide a thorough treatment of optimization problems involving convex cones. The algorithm behind coneA and coneB, which are the fundamental building blocks of all routines in this package, can be summarized in three simple steps.
The basic cone projection: coneA, coneB and check_irred

For the coneA routine, the user specifies $y \in \mathbb{R}^n$, an $m \times n$ matrix $A$, and an optional weight vector $w$ with positive elements. The routine returns $\hat\theta$ to minimize

$$\sum_{i=1}^n w_i (y_i - \theta_i)^2 \quad \text{subject to} \quad A\theta \ge 0.$$

The matrix $A$ is required to be irreducible; that is, the rows of $A$ form an irreducible set. The rows also form the "primal basis" for a cone projection problem as defined in Fraser and Massam (1989). A set of vectors is irreducible if none can be written as a positive linear combination of two or more of the others, and the origin is not a positive linear combination of two or more vectors in the set. (The phrase "positive linear combination" means a linear combination with positive coefficients.)

Let $V$ be the null space of $A$, that is, the linear space orthogonal to the space spanned by the rows of $A$. The space $V$ is contained in $C$. An element of $C$ can be written as the sum of a vector in $V$ and a linear combination of the edges, or generators, of $C$ with non-negative coefficients. If $A$ has full row rank, it is shown in Meyer (2013b) that the edges $\delta_1, \dots, \delta_m$ of the cone are the columns of $\Delta = A^\top(AA^\top)^{-1}$. If $A$ does not have full row rank, Proposition 1 of Meyer (1999) can be used to obtain the edges $\delta_1, \dots, \delta_M$, where $M \ge m$. The cone (1) can alternatively be written as

$$C = \Big\{\theta \in \mathbb{R}^n : \theta = v + \sum_{j=1}^{M} b_j\delta_j,\ v \in V,\ b_1, \dots, b_M \ge 0\Big\}, \quad (4)$$

where $M$ is the number of generators and $M = m$ if $A$ has full row rank. The generators $\delta_1, \dots, \delta_M$ and the basis for $V$ form the "dual basis" for a cone projection problem as defined in Fraser and Massam (1989).

The generators $\delta_1, \dots, \delta_M$ are orthogonal to $V$, so the projection of $y$ onto $C$ is the sum of the projections onto $V$ and onto the cone

$$\Omega = \Big\{\theta \in \mathbb{R}^n : \theta = \sum_{j=1}^{M} b_j\delta_j,\ b_1, \dots, b_M \ge 0\Big\}.$$

The algorithm of Meyer (2013b) provides the projection onto $\Omega$ by determining the face of the cone on which the projection lands. The faces are used for the inference methods and are indexed by subsets of $\{1, \dots, M\}$. For such a subset $J$, the corresponding face is

$$F_J = \Big\{\theta \in \mathbb{R}^n : \theta = \sum_{j \in J} b_j\delta_j,\ b_j > 0 \text{ for } j \in J\Big\}.$$

The faces cover the cone; once the face containing the projection is determined, the projection onto $\Omega$ is simply the projection onto the linear space spanned by the edges making up the face. For more details and proofs, see Meyer (1999). The algorithm finds the projection by determining the set $J$.
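To illustrate the edge formula $\Delta = A^\top(AA^\top)^{-1}$, here is a toy base R computation (illustrative only, assuming $A$ has full row rank):

## Edges of C = {theta : A theta >= 0} for increasing order on 3 points.
A <- rbind(c(-1, 1, 0),
           c(0, -1, 1))
Delta <- t(A) %*% solve(A %*% t(A))   # columns are the edges delta_j
round(A %*% Delta, 10)                # identity matrix: A delta_j = e_j >= 0

The check in the last line confirms that each column $\delta_j$ satisfies the constraints, so each edge indeed lies in the cone.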
The initial guess $J_0$ can be any subset of $\{1, \dots, M\}$ for which the corresponding $\delta_j$, $j \in J_0$, form a linearly independent set. At the $k$th iteration:

1. Project $y$ onto the linear space spanned by $\delta_j$, $j \in J_k$, obtaining $\theta_k = \sum_{j \in J_k} b_j^{(k)}\delta_j$.

2. Check whether the coefficients $b_j^{(k)}$ are all non-negative. If yes, go to Step 3. If no, choose the $j$ for which $b_j^{(k)}$ is minimized, remove it from $J_k$, and go to Step 1.

3. Compute $\langle y - \theta_k, \delta_j\rangle$ for each $j \notin J_k$. If these are all non-positive, then stop. If not, choose the $j$ for which this inner product is largest, add it to the set, and go to Step 1.
See Meyer (2013b) for the proof of convergence.
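To make the three steps concrete, here is a minimal R sketch of the algorithm for the case with no linear space in the cone ($V = \{0\}$); the function name, the tolerance, and the use of qr.solve are illustrative choices, not the package's compiled C++ implementation:

## y: vector to project; delta: M x n matrix whose rows are the cone edges.
hinge_project <- function(y, delta, tol = 1e-10, maxit = 1000) {
  J <- integer(0)                     # working set indexing the current face
  theta <- rep(0, length(y))
  for (k in seq_len(maxit)) {
    ip <- as.vector(delta %*% (y - theta))   # Step 3: <y - theta_k, delta_j>
    ip[J] <- -Inf
    if (max(ip) <= tol)               # all non-positive: theta is the projection
      return(list(thetahat = theta, df = length(J), steps = k))
    J <- c(J, which.max(ip))          # add the edge with the largest inner product
    repeat {
      D <- t(delta[J, , drop = FALSE])        # edges in J as columns
      b <- qr.solve(D, y)             # Step 1: least-squares coefficients b^(k)
      if (min(b) >= -tol) break       # Step 2: all non-negative?
      J <- J[-which.min(b)]           # drop the most negative coefficient
    }
    theta <- as.vector(D %*% b)
  }
  stop("cone projection did not converge")
}

For a cone containing a nontrivial linear space, the projection onto $V$ would be added separately, as described above.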
The polar cone is defined as

$$C^o = \{\rho \in \mathbb{R}^n : \langle \rho, \theta\rangle \le 0 \text{ for all } \theta \in C\},$$

and it can be shown that the projection $\hat\rho$ of $y$ onto $C^o$ is $y - \hat\theta$, i.e., the residual of the projection onto $C$.
The constraint cone edges $\delta_1, \dots, \delta_M$ are not needed if $A$ is provided, because the rows of $-A$ are the edges of the polar cone. See Meyer (1999) for a proof.
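As a quick numerical check of this decomposition (a sketch assuming the package is installed; the increasing-order constraints here are illustrative):

library("coneproj")
set.seed(1)
n <- 8
y <- rnorm(n)
## increasing-order constraints: theta_{i+1} - theta_i >= 0
amat <- cbind(-diag(n - 1), 0) + cbind(0, diag(n - 1))
ans <- coneA(y, amat)
rhohat <- y - ans$thetahat            # residual = projection onto the polar cone
round(sum(rhohat * ans$thetahat), 8)  # ~0: the two projections are orthogonal
round(min(amat %*% ans$thetahat), 8)  # >= 0: thetahat satisfies the constraints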
The function coneA requires the specification of the constraint matrix (and hence the polar cone edges), while the function coneB requires the user to specify the cone edges $\delta_1, \dots, \delta_M$ and a basis for the linear space $V$ contained in the cone. When there is no linear space in the cone, the user only needs to provide $\delta_1, \dots, \delta_M$. In either case, the function returns the projection, the dimension of the face of the cone on which the projection lands (which may be used as a surrogate for the degrees of freedom of the model), and the number of iterations. It also returns a message concerning convergence when the algorithm does not converge: although in theory the algorithm must converge, rounding error leaves a small possibility of non-convergence in practice.
The function check_irred takes as its argument a matrix whose rows are supposed to form a set of edges, and checks the irreducibility of those rows. For example, the user can check the constraint matrix $A$ required by coneA, or the cone edges $\delta_1, \dots, \delta_M$ required by coneB, for irreducibility before performing a cone projection. If a row is a positive linear combination of other rows, it can be removed without affecting the problem, and if some positive linear combination of rows equals the zero vector, then there is an implicit equality constraint in the matrix, which can be dealt with separately; see Meyer (2013b). In the former case, the routine deletes the redundant rows and returns a set of irreducible rows. In the latter case, it returns the original matrix and leaves the equality issue to the user. In either case, it returns a vector storing the positions of the redundant rows and the number of equality constraints found among the rows.

Quadratic programming
Given a positive definite $n \times n$ matrix $Q$ and a constant vector $c \in \mathbb{R}^n$, the object of qprog is to find $\hat\theta \in \mathbb{R}^n$ to minimize $\theta^\top Q\theta - 2c^\top\theta$ subject to $A\theta \ge b$. We require that the $m \times n$ constraint matrix $A$ be irreducible and that there exist a $\theta_0 \in \mathbb{R}^n$ such that $A\theta_0 = b$.
The unconstrained solution is $Q^{-1}c$. For the constrained solution, we transform the problem into a cone projection as follows. Let $Q = U^\top U$ be the Cholesky decomposition of $Q$, and define

$$\phi = U(\theta - \theta_0) \quad \text{and} \quad z = (U^\top)^{-1}(c - Q\theta_0);$$

then minimizing $\theta^\top Q\theta - 2c^\top\theta$ subject to $A\theta \ge b$ is equivalent to minimizing $\|z - \phi\|^2$ subject to $\tilde{A}\phi \ge 0$, where $\tilde{A} = AU^{-1}$. The quadratic programming routine returns the projection $\hat\phi$, and $\hat\theta = U^{-1}\hat\phi + \theta_0$. The routine also returns the dimension of the face of the cone on which the projection $\hat\phi$ lands, the number of iterations before the algorithm converges, and a message concerning convergence when the algorithm does not converge.
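The reduction is easy to express in R; the following is a hedged sketch using coneA (the helper name is illustrative; the package's qprog performs this transformation internally in C++):

library("coneproj")
## Solve min theta'Q theta - 2 c'theta subject to A theta >= b,
## given any feasible theta0 with A theta0 = b.
qp_via_coneA <- function(Q, c, A, b, theta0) {
  U <- chol(Q)                                           # Q = t(U) %*% U
  z <- backsolve(U, c - Q %*% theta0, transpose = TRUE)  # (U')^{-1}(c - Q theta0)
  Atil <- t(backsolve(U, t(A), transpose = TRUE))        # A U^{-1}
  phihat <- coneA(as.vector(z), Atil)$thetahat           # cone projection of z
  as.vector(backsolve(U, phihat) + theta0)               # U^{-1} phihat + theta0
}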

Constrained parametric regression
The least-squares regression model (2) is considered, where the object is to find $\hat\beta$ to minimize $\|y - X\beta\|^2$ subject to the constraints $A\beta \ge 0$. We assume that $X$ is an $n \times k$ design matrix with full column rank, and $A$ is an $m \times k$ irreducible constraint matrix. The constrained estimator $\hat\beta$ is found using the quadratic programming routine.

Interest is in testing $H_0\colon \beta \in V$ versus $H_1\colon \beta \in C$, where $V$ is the null space of $A$ and $C = \{\beta \in \mathbb{R}^k : A\beta \ge 0\}$.
For example, suppose that in a clinical trial there are three treatments and a placebo, and several covariates. The response is improvement of a condition, and the researchers want to know if any of the treatments is significantly better than the placebo. If $\beta_0$ is the placebo effect and the treatment effects are $\beta_1$, $\beta_2$, and $\beta_3$, then the first four columns, say, of the design matrix might be dummy variables for the placebo and treatment groups, and the other $k - 4 \ge 0$ columns represent the covariate values. If the researchers assume that each treatment is at least as good as the placebo, the constraint matrix (for $k = 7$, say)

$$A = \begin{pmatrix} -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}$$

may be used. The null hypothesis describes the "no treatment effect" scenario ($A\beta = 0$) and the alternative is that at least one treatment has a greater effect than the placebo. This "one-sided" test will have greater power than the standard two-sided F test for sub-models.
Defining $SSE_0$ to be the sum of squared residuals for the null hypothesis fit ($\hat\beta$ in the null space of $A$) and $SSE_1$ to be the sum of squared residuals for the constrained least-squares fit, the test statistic $T = (SSE_0 - SSE_1)/SSE_0$ has a mixture-of-betas distribution when $H_0$ is true and $\varepsilon \sim N_n(0, \sigma^2 I)$ for $\sigma^2 > 0$. Specifically, the null distribution is

$$P(T \ge c) = \sum_{d=d_0}^{m} p_d\, P\big(B_{(d-d_0)/2,\,(n-d)/2} \ge c\big)$$

for $c > 0$, where $d_0 = \dim(V)$, $m$ is the dimension of the smallest linear space containing $C$, and $B_{a,b}$ denotes a beta random variable with parameters $a$ and $b$ (degenerate at zero when $a = 0$). The mixing parameters $p_d$ are the probabilities under $H_0$ that the projection of $y$ onto $C$ lands on a face of dimension $d$; these can readily be found through simulations. For more details see Meyer and Wang (2012).
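Since the $p_d$ are just face-dimension probabilities under $H_0$, they can be estimated by simulation. A minimal sketch for a cone given directly by a constraint matrix, using the face dimension df returned by coneA (the function name is illustrative, not the package's internal code):

library("coneproj")
## Estimate the mixing parameters p_d: simulate standard normal errors,
## project onto the cone, and tabulate the dimension of the landing face.
sim_mixing <- function(amat, n, nsim = 10000) {
  d <- replicate(nsim, coneA(rnorm(n), amat)$df)
  table(d) / nsim          # estimated p_d for each observed face dimension d
}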
The function constreg has inputs y, X, and A. The optional parameter test is a logical scalar determining whether the test is performed. If test = TRUE is specified, the mixing distribution parameters are obtained by simulation (with 10,000 simulations by default) and the test is performed; otherwise the test is skipped. The optional parameter w is a vector of weights: if w is specified, the fit minimizing $(y - X\beta)^\top W (y - X\beta)$ over $C$ is returned, where $W$ is the diagonal matrix with elements w. The default value of w is a vector of ones.
Returned are the constrained fit, the unconstrained fit, the constrained parameters and the p value for the test if test = TRUE.

Shape-restricted regression
The regression model (3) is considered, where the only assumptions about $f$ concern its shape. The shapereg function allows eight options, indicated by the value of shape: 1 = increasing, 2 = decreasing, 3 = convex, 4 = concave, 5 = increasing-convex, 6 = decreasing-convex, 7 = increasing-concave, and 8 = decreasing-concave. The vector expression for the model is

$$y = \theta + Z\alpha + \varepsilon,$$

where the $n \times k$ matrix $Z$ is assumed to be of full column rank and $\theta_i = f(t_i)$. The column space of $Z$ must contain the constant vector, so that if there are no covariates, $Z$ should be a column of ones. The shape restrictions can be written as $\theta \in C$, where $C$ is a convex polyhedral cone; see Meyer (2013a) for details and conditions for identifiability of $\theta$ and $\alpha$.
The isotonic regression estimator ($f$ increasing, no covariates) was proposed in Brunk (1958); Robertson, Wright, and Dykstra (1988, Chapter 1) provide details of the least-squares solution. An algorithm for convex regression was proposed by Fraser and Massam (1989). Both can be solved with the cone projection algorithm, and combinations of constraints, such as monotone and convex, are simple extensions.
For the monotone case, Cheng (2009) proposed a back-fitting algorithm to find $\theta$ and $\alpha$; the algorithm for shapereg is taken from Meyer (2013a), which employs a single cone projection to find $\theta$ and $\alpha$ simultaneously and accommodates more general shapes.
The test of constant versus increasing regression function uses the same family of statistics as Raubertas et al. (1986), and is discussed in Robertson et al. (1988, Chapter 2). The test for linear versus convex regression was derived in Meyer (2003). More generally, the test of $H_0\colon E(y) \in V$ versus $H_1\colon E(y) \in C$ uses an $E_{01}$ statistic and is exact for iid mean-zero normal errors. The optional parameter test is a logical scalar determining whether the test is performed. If test = TRUE is specified, the mixing distribution parameters are obtained by simulation (with 10,000 simulations by default) and the test is performed; otherwise the test is skipped. The test might be computationally intensive for large sample sizes.
Cheng (2009) argued that $\hat\alpha$ is consistent and asymptotically normal. The shapereg routine returns the standard errors for $\hat\alpha$ as the square roots of the first $k$ diagonal elements of $(X^\top X)^{-1}\hat\sigma^2$, where the columns of $X$ comprise the columns of $Z$ plus the edges of the cone indexed by the set $J$ determined by the cone projection. The variance is estimated by $\hat\sigma^2 = SSE/(n - 1.5\,df)$, where $SSE$ is the sum of squared residuals and the effective degrees of freedom $df$ are returned by coneB. For more details about this variance estimator and the motivation for the 1.5 multiplier, see Meyer and Woodroofe (2000). To get approximate p values for the significance of $\hat\alpha$, a t distribution with $n - 1.5\,df$ degrees of freedom is used.
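The following sketch shows how these quantities combine; all names are hypothetical stand-ins for values produced during a shapereg fit, not the package's internals:

## 'alphahat': covariate coefficient estimates; 'bigX': columns of Z plus the
## cone edges indexed by J; 'sse': residual sum of squares; 'df': face
## dimension returned by coneB; 'k': number of columns of Z.
se_pvals <- function(alphahat, bigX, sse, n, df, k) {
  sigma2 <- sse / (n - 1.5 * df)                 # variance estimate
  se <- sqrt(diag(solve(crossprod(bigX)))[1:k] * sigma2)
  tstat <- alphahat / se
  data.frame(estimate = alphahat, se = se,       # approximate two-sided p values
             p.value = 2 * pt(-abs(tstat), df = n - 1.5 * df))
}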
Returned are the constrained parameters, the constrained fit, the linear fit, the approximate standard errors, the approximate p values for the significance of the parameters, the sum of squared residuals for the linear part, the sum of squared residuals for the full model, and the p value for the test H 0 : E(y) ∈ V versus H 1 : E(y) ∈ C if test = TRUE.

User guide
We demonstrate the functions using simulated data sets and real data sets. For a more complete demonstration including code for making the plots, see the official reference manual of this package at http://CRAN.R-project.org/package=coneproj.

Constrained regression using coneA and coneB
We demonstrate the cone projection algorithm by fitting a regression function that is increasing over a range but then decreasing.
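The calls below assume a simulated vector y whose underlying trend increases and then decreases, an irreducible constraint matrix amat encoding that shape, and, for coneB, the corresponding edges delta and null-space basis vmat. A minimal setup along these lines (illustrative; the published example's simulation details may differ) is:

R> library("coneproj")
R> set.seed(123)
R> n <- 50
R> x <- seq(-2, 2, length.out = n)
R> y <- -x^2 + rnorm(n)
R> amat <- matrix(0, n - 1, n)
R> for (i in 1:(n - 1)) {
+    s <- if (i < n / 2) 1 else -1              # increasing, then decreasing
+    amat[i, i] <- -s; amat[i, i + 1] <- s
+  }
R> delta <- solve(amat %*% t(amat)) %*% amat    # rows as edges; check the manual
R> vmat <- matrix(1, n, 1)                      # basis of the null space of amat

First, we check that the rows of amat are irreducible: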

R> check <- check_irred(amat)

[1] "edges are irreducible!"

Then we call coneA to find the non-decreasing vector closest to the first half of y and the non-increasing vector closest to the rest, without and with a weight vector w:

R> ans1 <- coneA(y, amat)
R> ans2 <- coneA(y, amat, w = (1:n)/n)

The unweighted and weighted fits are compared in Figure 1. coneA returns df, thetahat, and steps as shown:

R> names(ans1)

[1] "df"       "thetahat" "steps"
Then we call coneB to make a constrained fit to y, without and with a weight vector w:

R> ans3 <- coneB(y, delta, vmat)
R> ans4 <- coneB(y, delta, vmat, w = (1:n)/n)

The fit to the data and the effective degrees of freedom returned by coneB are identical to those of coneA, but the number of steps for coneB is considerably smaller. This is because the number of constraint cone edges used in the fit is smaller than the number of polar cone edges used in the residual vector.

R> ans3$steps
[1] 14

The coefs returned by coneB are the coefficients of the basis of the linear space contained in the convex cone (4) and of the constraint cone edges.

R> head(ans3$coefs)
To show how the routine check_irred works when the rows of a matrix are reducible, we consider a simple example: x is a sequence of 4 elements, and we want a constraint matrix forcing the 4 elements into increasing order. We do not need to compare the 1st and 3rd elements if we already know that the 1st element is smaller than the 2nd, and the 2nd is smaller than the 3rd.
First, we can make a reducible set of edges which makes a total of 6 comparisons:
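A reconstruction of such a matrix and the check (illustrative; the published example may list the rows explicitly):

R> amat <- matrix(0, 6, 4)
R> cmp <- t(combn(4, 2))                  # all 6 pairs (j, k) with j < k
R> for (i in 1:6) {
+    amat[i, cmp[i, 1]] <- -1             # row i encodes x_j <= x_k
+    amat[i, cmp[i, 2]] <- 1
+  }
R> check <- check_irred(amat)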

Application of qprog to "cubic" data set
To show how qprog works, we use the simulated data set "cubic". In this data set, y is a continuous response variable and x is a continuous predictor with values in [0, 2]. The constraint is that the true regression function is increasing, convex, and non-negative for x ∈ [0, 2]; the corresponding constraint matrix is of full row rank and hence irreducible.
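A hedged sketch of such a fit: we take the constrained model to be a cubic polynomial whose non-negativity, monotonicity, and convexity are enforced at the ends of the interval (sufficient for a cubic on [0, 2]). The design matrix, constraint matrix, and argument order below are illustrative reconstructions, and the published example may differ:

R> data("cubic", package = "coneproj")
R> x <- cubic$x
R> y <- cubic$y
R> xmat <- cbind(1, x, x^2, x^3)           # cubic polynomial design
R> q <- crossprod(xmat)                    # Q = X'X
R> c <- crossprod(xmat, y)                 # c = X'y
R> amat <- rbind(c(1, 0, 0, 0),            # f(0)   >= 0
+                c(0, 1, 0, 0),            # f'(0)  >= 0
+                c(0, 0, 2, 0),            # f''(0) >= 0
+                c(0, 0, 2, 12))           # f''(2) >= 0
R> ans <- qprog(q, c, amat, b = rep(0, 4))
R> betahat <- ans$thetahat                 # constrained coefficient estimates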

Constrained parametric fit to "FEV" data
To see how constreg works, we analyze the "FEV" data set provided by Kahn (2005). This data set consists of 654 observations on children aged 3 to 19. "FEV" stands for forced expiratory volume, a measure of lung capacity, and is the response variable. "age" and "height" are continuous predictors, while "sex" and "smoke" are categorical predictors. One reasonable assumption is that "FEV" is non-decreasing in "age" for any value of "height", and similarly non-decreasing in "height" when "age" is fixed. To allow for interaction between the continuous predictors, we fit the model

$$y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{1i}x_{2i} + \beta_4 d_{1i} + \beta_5 d_{2i} + \varepsilon_i,$$

where $x_{1i}$ is the age of the $i$th child, $x_{2i}$ is the height of the $i$th child, $d_{1i}$ is an indicator for "boy," and $d_{2i}$ is an indicator for "smoking." Without loss of generality, we scale "age" and "height" to be within the unit interval (0, 1], and the corresponding constraint matrix is

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \end{pmatrix},$$

which is not of full row rank but is irreducible. First, we load the data set, extract the variables, and scale "age" and "height" to be within the unit interval:

R> data("FEV", package = "coneproj")
R> y <- FEV$FEV
R> age <- FEV$age
R> height <- FEV$height
R> sex <- FEV$sex
R> smoke <- FEV$smoke
R> scale_age <- (age - min(age)) / (max(age) - min(age))
R> scale_height <- (height - min(height)) / (max(height) - min(height))

If we specify test = TRUE, we get a p value for the $E_{01}$ test described in the section on constrained parametric regression. The null hypothesis is that the regression function depends only on the "smoking" and "sex" variables, so we are not surprised that $H_0$ is rejected.
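Before calling constreg, we construct the design and constraint matrices; this construction follows the model and matrix A displayed above (assuming "sex" and "smoke" are 0/1 coded, as in the package's FEV data):

R> xmat <- cbind(1, scale_age, scale_height, scale_age * scale_height,
+                sex, smoke)
R> amat <- matrix(0, 4, 6)
R> amat[1, 2] <- 1                         # beta_1 >= 0
R> amat[2, 3] <- 1                         # beta_2 >= 0
R> amat[3, c(2, 4)] <- 1                   # beta_1 + beta_3 >= 0
R> amat[4, c(3, 4)] <- 1                   # beta_2 + beta_3 >= 0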
R> ans <- constreg(y, xmat, amat, test = TRUE)
R> names(ans)

[1] "constr.fit"   "unconstr.fit" "pval"         "coefs"

R> ans$pval

The constrained fit is compared with the unconstrained fit in Figure 3, where the surfaces are shown for non-smoking girls. For the unconstrained fit, "FEV" increases with "age" when "height" is large, but decreases as "age" increases when "height" is small. The constrained fit avoids this by keeping the fit of "FEV" non-decreasing in "age".

Shape-restricted regression example
To show how shapereg works, we use another real data set, namely "feet". This data set was collected by the second author in a fourth-grade classroom, with the purpose of determining if boys have wider feet than girls at this age. We use the shapereg function to make a shape-restricted fit to this data set. "width" is a continuous response variable, "length" is a continuous predictor variable, and "sex" is a categorical covariate. The constraint is that "width" is increasing in "length".
First, we load the data set and extract the continuous, constrained predictor "length" and the continuous response variable "width":

R> data("feet", package = "coneproj")
R> l <- feet$length
R> w <- feet$width

Next, we extract the categorical covariate "sex" and code it as a 0-1 numerical vector:

R> s <- feet$sex
R> n <- length(s)
R> x <- as.numeric(s != "G")

Next, we create the X matrix. The user provides an n by 1 vector or an n by k matrix, k ≥ 1; here X represents the categorical covariate "sex". The user can choose whether or not to include a constant vector in X.
R> xmat <- x
R> shape <- 1
R> ans <- shapereg(w, l, shape, xmat)

If we let $\mu_B$ represent the average foot width for fourth-grade boys and $\mu_G$ the average foot width for fourth-grade girls, the p value is 0.0512 for the test $H_0\colon \mu_B = \mu_G$ versus $H_1\colon \mu_B > \mu_G$. At the test size $\alpha = 0.05$, the evidence that $\mu_B$ exceeds $\mu_G$ thus falls just short of significance. Moreover, from the coefs returned by shapereg, which are the coefficients for X, we find that, for a fixed foot length, fourth-grade boys' foot width is estimated to be 0.24 cm larger than fourth-grade girls', on average.

R> names(ans)
[1] "coefs" "constr.fit" "linear.fit" "se.beta" "pvals.beta" [6] "shape" "test" "SSE0" "SSE1" "call" R> ans$coefs We can call the R function summary to check the estimate, the standard error, the t value and the p value for the coefficient of the linear part. Also we can check the sum of squared residuals for the linear part and the sum of squared residuals for the full model.

R> ans$pval
[1] 0.000177307

We can also call summary to check this p value if test = TRUE is specified.

R> summary(ans)
Call:
shapereg(y = w, t = l, shape = shape, xmat = xmat, test = TRUE)

[coefficient table with estimates, standard errors, t values, and p values]

The scatterplot in Figure 4 shows that while boys tend to have wider feet, they also tend to have longer feet (boys in this classroom were, on average, older and larger), so it makes sense to control for foot length when comparing foot width by sex. By specifying the argument shape = 1, we make an increasing fit of "width" on "length" with "sex" as a categorical covariate.

Extension of isotonic regression
The R routines gpava() in the package isotone (de Leeuw, Hornik, and Mair 2009) and pava() in the package Iso (Turner 2013) perform isotonic regression, and conreg() in the package cobs (Ng and Maechler 2011) provides convex or concave regression. However, none of these allows for parametrically modeled covariates. The shapereg semi-parametric regression routine estimates a constrained regression function and a parameter vector using a single cone projection instead of a cyclical back-fitting algorithm. For practical problems, dealing with covariates is essential, for two main reasons. First, users often want to determine the effect of a predictor when other important predictors are controlled for, in order to avoid problems with confounding, as can happen when predictors are related to each other. Second, the power of the test for significance of the predictor of interest is often larger when other predictors are included, because the sum of squared residuals is decreased.
Because multiple parametric regression is so widely available (e.g., the function lm in R), practitioners will often use a plausible parametric form even when there is no a priori evidence or theory supporting such a form for f. Instead, there is often only vague information, such as the likely shape of f.
To demonstrate that the $E_{01}$ test can have higher power than a standard one-sided t test, we consider a simulated data set shown in Figure 5, with one continuous predictor (t) and one categorical predictor (x). The responses $y_i \sim N(f(t_i) + \alpha_0 + \alpha_1 x_i, \sigma^2)$, $i = 1, \dots, n$, are independent; t is equally spaced on (0, 3], and x is a categorical covariate with two levels, 0 and 1. Suppose the researchers are interested in whether or not the relationship between y and t is linear, while controlling for the effect of x. From the scatterplot of t, x, and y, one might be tempted to use a quadratic alternative. However, because the true function is a spliced-together linear and exponential function rather than a quadratic, better power is obtained with the more general convex alternative. The powers are compared in Figure 6 for three test sizes and σ = 1, 2, 4.

More importantly, a shape-restricted fit can help eliminate a spurious relationship between the response variable and a confounding variable. In the above example, suppose $\alpha_1 = 0$, so that E(y) does not depend on x. Because x and t are related, confounding can occur; that is, we can get spurious significance for x if the effect of t is not "controlled for" by including t in the model. However, if the functional form for t is mis-specified, confounding can still occur. Using only vague assumptions about f guards against confounding due to mis-specification.
To demonstrate this, we simulated 100,000 data sets with $\alpha_1 = 0$ and $\sigma^2 = 1$, to compare results for the test of $H_0\colon \alpha_1 = 0$ versus $H_a\colon \alpha_1 \ne 0$, using a model that specifies a quadratic in t and using a semi-parametric model that assumes only that E(y) is convex in t. For both tests, we use a test size of 0.05. With the quadratic model, we (erroneously) reject $H_0$ for 84.2% of the simulated data sets, whereas with the semi-parametric convex model, we reject $H_0$ for only 5.7% of the data sets.
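Here is a hedged sketch of this confounding simulation. The forms of f, the dependence of x on t, the number of replications, and the position of the x coefficient in pvals.beta are all assumptions; the authors' exact design may differ:

library("coneproj")
set.seed(42)
n <- 50
t <- seq(3 / n, 3, length.out = n)
f <- ifelse(t < 2, t, exp(t - 2) + 1)     # spliced linear/exponential, convex
x <- as.numeric(t > 1.5)                  # binary covariate related to t
nsim <- 1000                              # the paper uses 100,000
rej <- c(quadratic = 0, convex = 0)
for (s in seq_len(nsim)) {
  y <- f + rnorm(n)                       # alpha_1 = 0: no true x effect
  ## misspecified parametric model: quadratic in t
  p_quad <- summary(lm(y ~ x + t + I(t^2)))$coefficients["x", 4]
  ## semi-parametric model: convex in t (shape = 3)
  fit <- shapereg(y, t, shape = 3, xmat = x)
  p_cone <- fit$pvals.beta[1]             # assumed position of the x p value
  rej <- rej + c(p_quad < 0.05, p_cone < 0.05)
}
rej / nsim                                # empirical rejection rates at size 0.05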

Speed
The routines coneA, coneB, and qprog call a C++ subroutine for the core part of their computations, which makes them considerably faster than if they were coded entirely in R. For example, coneB is used for the shapereg simulations above: with 100,000 simulations and a data set of 50 observations, the time needed to get the power of the $E_{01}$ test is roughly between 40 and 90 seconds on a laptop with a 2.53 GHz dual-core Intel(R) Core(TM) i3 CPU. When we call coneB 10,000 times to get the p value for the "feet" data set (sample size 39), the time is roughly between 1 and 3 seconds. Finally, when we call coneA 10,000 times to get the p value for the "FEV" data set (sample size 654), the time is roughly between 2 and 5 seconds.
We compare the main routines coneA and coneB in this package with the routine solve.QP in the package quadprog (Turlach and Weingessel 2013) for solving quadratic programming problems. The scenario is as follows: given a predictor x equally spaced on [0, 1], the response is $y = (x - \bar{x})^2 + \varepsilon$, where the elements of $\varepsilon$ are independent normal with standard deviation 0.2. Fitting a convex regression function is a quadratic programming problem, and we time each routine at four sample sizes. For small sample sizes, coneB is the fastest, and the speeds of coneA and solve.QP are close; for large sample sizes, coneB is faster than coneA and solve.QP by orders of magnitude. Approximate timings:

n       coneA          coneB         solve.QP
10      0.08 ms        0.05 ms       0.1 ms
100     9 ms           0.5 ms        11 ms
1500    170-350 s      0.2-0.4 s     50-140 s
3000    2000-4300 s    1-3 s         500-1100 s
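As a sketch of how such timings can be reproduced for one sample size, using second-difference convexity constraints (the authors' exact timing harness may differ):

library("coneproj")
library("quadprog")
set.seed(1)
n <- 100
x <- seq(0, 1, length.out = n)
y <- (x - mean(x))^2 + rnorm(n, sd = 0.2)
amat <- matrix(0, n - 2, n)               # second differences non-negative
for (i in 1:(n - 2)) amat[i, i:(i + 2)] <- c(1, -2, 1)
system.time(fit1 <- coneA(y, amat))
system.time(fit2 <- solve.QP(Dmat = diag(n), dvec = y,
                             Amat = t(amat), bvec = rep(0, n - 2)))
max(abs(fit1$thetahat - fit2$solution))   # the two solutions should agree

Here solve.QP minimizes $\frac{1}{2}b^\top Db - d^\top b$ subject to $A^\top b \ge b_0$, so with $D = I$ and $d = y$ it computes the same projection as coneA.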