Response-Surface Methods in R, Using rsm. Updated to version 2.10.2, 3 September 2020

This introduction to the R package rsm is a modified version of Lenth (2009), published in the Journal of Statistical Software. The package rsm was designed to provide R support for standard response-surface methods. Functions are provided to generate central-composite and Box-Behnken designs. For analysis of the resulting data, the package provides for estimating the response surface, testing its lack of fit, displaying an ensemble of contour plots of the fitted surface, and doing follow-up analyses such as steepest ascent, canonical analysis, and ridge analysis. It also implements a coded-data structure to aid in this essential aspect of the methodology. The functions are designed in hopes of providing an intuitive and effective user interface. Potential exists for expanding the package in a variety of ways.


Introduction
Response-surface methodology comprises a body of methods for exploring for optimum operating conditions through experimental methods. Typically, this involves doing several experiments, using the results of one experiment to provide direction for what to do next. This next action could be to focus the experiment around a different set of conditions, or to collect more data in the current experimental region in order to fit a higher-order model or confirm what we seem to have found.
Different levels or values of the operating conditions comprise the factors in each experiment. Some may be categorical (e.g., the supplier of raw material) and others may be quantitative (feed rates, temperatures, and such). In practice, categorical variables must be handled separately by comparing our best operating conditions with respect to the quantitative variables across different combinations of the categorical ones. The fundamental methods for quantitative variables involve fitting first-order (linear) or second-order (quadratic) functions of the predictors to one or more response variables, and then examining the characteristics of the fitted surface to decide what action is appropriate.
Given that, it may seem like response-surface analysis is simply a regression problem. However, there are several intricacies in this analysis, and in how it is commonly used, that differ enough from routine regression problems that some special help is warranted. These intricacies include the common use (and importance) of coded predictor variables; the assessment of the fit; the different follow-up analyses that are used depending on what type of model is fitted, as well as the outcome of the analysis; and the importance of visualizing the response surface. Response-surface methods also involve some unique experimental-design issues, due to the emphasis on iterative experimentation and the need for relatively sparse designs that can be built up piece by piece according to the evolving needs of the experimenter.
The rsm package for R (R Development Core Team 2009) provides several functions to facilitate classical response-surface methods, as described in texts such as Box and Draper (1987), Khuri and Cornell (1996, Chapters 1-5), Wu and Hamada (2000, Chapter 9), Myers, Montgomery, and Anderson-Cook (2009), Box, Hunter, and Hunter (2005, Chapters 11-12), and Ryan (2007, Chapter 10). In its current form, rsm covers only the most standard first- and second-order designs and methods for one response variable; but it covers those reasonably well, and it could be expanded in the future. Multiple-response optimization is not covered in this package, but the desirability package (Kuhn 2009) may be used in conjunction with predictions obtained using the rsm package. The rsm package is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=rsm.
Here is a general overview of rsm. First, it provides functions and data types that provide for the coding and decoding of factor levels, since appropriate coding is an important element of response-surface analysis. These are discussed in Section 2. Second, it provides functions for generating standard designs (currently, central-composite and Box-Behnken), and building blocks thereof, and examining their variance function; see Section 3. Third (Section 4), it extends R's lm function to simplify the specification of standard response-surface models, and provide appropriate summaries. Fourth (Section 5), it provides means of visualizing a fitted response surface (or in fact any lm object). Finally (Section 6), it provides guidance for further experimentation, e.g., along the path of steepest ascent. Most rsm functions take advantage of R's formula capabilities to provide intuitive and transparent ways of obtaining the needed results.
To provide some context, there is good commercial software available to help with designing and analyzing response-surface experiments. The most popular include Design-Expert (Stat-Ease, Inc. 2009), JMP (SAS Institute, Inc. 2009), and Statgraphics (StatPoint Technologies, Inc. 2009). These all provide for generating Box-Behnken and central-composite designs, fitting first- and second-order response surfaces, and visualizing them. These programs generally exceed rsm's capabilities (for example, more types of designs, provisions for mixture experiments, etc.); but rsm makes the most important methods available in R. To my knowledge, the functionality of rsm's ccd.pick function is not provided in other software, and rsm may exceed the capabilities of these programs in the generality of central-composite designs that it can create.
The goal of this article is to present an overview of rsm and how its functions may be used to design and analyze response-surface experiments. While most important functions in the package are illustrated, we do not provide comprehensive documentation here; instead, the reader is referred to the manual and online documentation provided with the package. Further note that rsm's features were extended and somewhat modified in version 2.0, and the vignette "Response-Surface Illustration" illustrates using the newer building-block approach to generating designs and some other newer features.

Coding of data
An important aspect of response-surface analysis is using an appropriate coding transformation of the data. The way the data are coded affects the results of canonical analysis (see Section 4) and steepest-ascent analysis (see Section 6); for example, unless the scaling factors are all equal, the path of steepest ascent obtained by fitting a model to the raw predictor values will differ from the path obtained in the coded units, decoded to the original scale. Using a coding method that makes all coded variables in the experiment vary over the same range is a way of giving each predictor an equal share in potentially determining the steepest-ascent path. Thus, coding is an important step in response-surface analysis.
Accordingly, the rsm package provides for a coded.data class of objects, an extension of data.frame. The functions coded.data, as.coded.data, decode.data, recode.data, code2val, and val2code create or decode such objects. If a coded.data object is used in place of an ordinary data.frame in the call to other rsm functions such as rsm (Section 4) or steepest (Section 6), then appropriate additional output is provided that translates the results to the original units. The print method for a coded.data object displays the coding formulas and the data in either coded or decoded form.
As an example, consider the provided dataset ChemReact, which comes from Table 7.6 of Myers et al. (2009).
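A minimal sketch of how such an object might be created follows. The coding formulas shown here (Time centered at 85 with half-range 5, Temp centered at 175 with half-range 5) match the scaling used for this dataset later in the article; the decode argument to print is based on the description of the print method above.

```r
library(rsm)

# Create a coded.data object from the raw ChemReact data; each formula
# maps an original variable to a coded one via (value - center)/scale
CR <- coded.data(ChemReact,
                 x1 ~ (Time - 85)/5,
                 x2 ~ (Temp - 175)/5)

CR                        # prints coded values plus the coding formulas
print(CR, decode = TRUE)  # prints the data in the original units
```

The same CR object can then be passed directly to rsm or steepest, which will report results in both coded and original units.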

Generating a design
The functions ccd and bbd are available to generate standard response-surface designs. For example, here we generate a 3-factor Box-Behnken design (Box and Behnken 1960). One of the most popular response-surface designs is the central-composite design (CCD), due to Box and Wilson (1951). A simple example is the chemical-reaction experiment presented in the preceding section. These designs allow for sequential augmentation, so that we may first experiment with just one block suitable for fitting a first-order model, and then add more block(s) if a second-order fit is needed. The blocks in a CCD are of two types: one type, called a "cube" block, contains design points from a two-level factorial or fractional factorial design, plus center points; the other type, called a "star" block, contains axis points plus center points.
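As a sketch, a call along these lines generates a 3-factor Box-Behnken design (the choice of n0, the number of center points, is illustrative):

```r
library(rsm)

# A Box-Behnken design in 3 factors with 3 center points (illustrative);
# blocking and coding arguments are also available
des.bb <- bbd(3, n0 = 3)
des.bb
```

By default the runs are randomized; the factors are given the default coded names x1, x2, x3 unless coding formulas are supplied.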
In the following discussion, the term "design points" refers to the non-center points in a block. The levels of the factors are coded, so that the cube blocks contain design points with coordinate values all equal to ±1, and center points at (0, 0, . . ., 0). The design points in the star blocks are at positions of ±α along each coordinate axis. The value of α, and choices of replications of design points and center points, are often selected based on considerations of rotatability (i.e., the variance of the prediction depends only on the distance from the center) and orthogonality of blocks (so that the coefficients of the fitted response-surface equation are not correlated with block effects).
Table 1 displays the parameters of a CCD, along with the names used by the function ccd.pick, to be described shortly. Suppose that there are k variables to be varied.

Parameter(s)      Cube block(s)          Star block(s)
Design points     (±1, ±1, . . ., ±1)    (±α, 0, . . ., 0), . . ., (0, 0, . . ., ±α)
Center points     (0, 0, . . ., 0)       (0, 0, . . ., 0)
Total observations (N) = blks.c * bbr.c * (n.c + n0.c) + bbr.s * (n.s + n0.s)

Table 1: Parameters of a central-composite design, and names used by ccd.pick.

For the cube blocks, we start with a given 2^(k−p) fractional factorial design (or full factorial, when p = 0). We may either use this design as-is to define the design points in the cube block(s). Alternatively, we may confound one or more effects with blocks to split this design into blks.c smaller cube blocks, in which case each cube block contains 2^(k−p)/blks.c distinct design points. The star blocks always contain all 2k distinct design points, two on each axis.
Once the designs are decided, we may, if we like, replicate them within blocks. The names wbr.c and wbr.s (for "within-block reps") refer to the number of replicates of each design point within each cube block or star block, respectively. Thus, each cube block has a total of n.c = wbr.c × 2^(k−p)/blks.c design points, and each star block contains n.s = wbr.s × 2k design points. We may also replicate the center points: n0.c times in each cube block, n0.s times within each star block.
Finally, we may replicate the blocks themselves; the numbers of such between-block replications are denoted bbr.c and bbr.s for cube blocks and star blocks, respectively. It is important to understand that each block is separately randomized, in effect a mini-experiment within the larger experiment. Having between-block replications means repeating these mini-experiments. We run an entire block before running another block.
The function ccd.pick is designed to help identify good CCDs. It simply creates a grid of all combinations of design choices, computes the α values required for orthogonality and rotatability, sorts them by a specified criterion (by default, a measure of the discrepancy between these two αs), and presents the best few.
For example, suppose that we want to experiment with k = 5 factors, and we are willing to consider CCDs with blks.c = 1, 2, or 4 cube blocks of sizes n.c = 8 or 16 each. With this many factors, the number of distinct star points (2k = 10) is relatively small compared with the size of some cube blocks (16), so it seems reasonable to consider either one or two replications (wbr.s ∈ {1, 2}) of each point within each star block. Finally, suppose that we want the total size of the experiment to be no more than N = 65 runs (see restrict in the call below). Here are the ten best choices based on these criteria:

> ccd.pick(5, n.c = c(8, 16), blks.c = c(1, 2, 4), wbr.s = 1:2,
+          restrict = "N <= 65")

The first design listed is also the smallest; it consists of one cube block of 16 runs, plus 6 center points, and one star block with the points replicated once and one center point; thus, the total number of runs is N = (16 + 6) + (10 + 1) = 33. If we choose α = 2, this design is both orthogonal and rotatable, as seen by noting that alpha.rot and alpha.orth are both equal to 2. The 16 design points in the cube block may be generated by a 2^(5−1) fractional factorial design.
While this is a small design, we have only one center point in the star block, not providing a way to test lack of fit in the star portion. The second and third designs remedy this slightly, but all these designs are fairly lopsided in that the cube block is much larger than the star block. The next few designs require considerably more runs. Design number 4 is nicely balanced in that it consists of three blocks of 21 runs each, and it is both rotatable and orthogonal. However, we still have no lack-of-fit test in the star blocks. Designs 5 and 6 differ only in whether they use two 2^(5−1) cubes or four 2^(5−2) cubes, but they provide several center points for a lack-of-fit test. If we position the axis points at α = 2.38, the design is almost orthogonal and almost rotatable. The remaining designs also come close to meeting both criteria, but are also somewhat smaller, so that Designs 9 and 10 are essentially down-sized versions of Designs 5 and 6.
The choice of which design is best depends on the tradeoff between economy and ability to assess the fitted surface. Design 1 is the only one of these that is included in Table 7.6 of Myers et al. (2009). It is good to be able to look at a broader range of choices.
Once we decide the design, the ccd function is used to generate it. (Alternatively, starting with rsm version 2.0, the cube, star, foldover, and dupe functions are available for generating and randomizing a CCD in separate blocks, which may then be combined using djoin.) We first illustrate the generation of Design 1 above. This design requires a 2^(5−1) fraction for the cube block. Typically, this is done by confounding the five-way interaction with the mean; or equivalently, by creating a four-factor design and generating the levels of the fifth factor as the four-way interaction of the others. That is the approach implemented by ccd. Suppose that we denote the design factors by A, B, C, D, E; let's opt to use E = −ABCD as the generator.
The following call generates the design (results not shown). The value of α was not specified, and by default it uses the α for orthogonality. The first argument could have been just 4, but then the generator would have had to have been given in terms of the default variable names x1, x2, . . . . The optional left-hand side in the formula creates place-holders for response variable(s), to be filled in with data later. As in bbd, we could have added coding formulas to create a coded.data object.
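A sketch of such a call (the argument details are illustrative; n0 gives the center-point counts for the cube and star blocks, respectively):

```r
library(rsm)

# Design 1: one 2^(5-1) cube block with 6 center points, plus one star
# block with 1 center point.  The left-hand side creates placeholders
# for two responses; the generator sets E = -ABCD.
des1 <- ccd(y1 + y2 ~ A + B + C + D,
            generators = E ~ -A * B * C * D,
            n0 = c(6, 1))
```

With no alpha argument, the axis distance defaults to the value required for orthogonal blocking, which for this design equals the rotatable value of 2.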
Next, we illustrate the generation of Design 10. This design has four 2^(5−2) cube blocks with 2 center points each, and one unreplicated star block with 4 center points. The non-center points in the cube blocks comprise 4 × 8 = 32 runs, so we most likely want to create them by dividing the full 2^5 factorial into four fractional blocks. We can, for example, opt to generate the blocks via the factors b1 = ABC and b2 = CDE, so that the blocks are determined by the four combinations of b1 and b2. Then the block effects will be confounded with the effects ABC, CDE, and also the b1b2 interaction ABC·CDE = ABDE. It is important in response-surface work to avoid confounding second-order interactions, and this scheme is thus acceptable. Unlike Design 1, this design includes all 2^5 factor combinations, so we do not use the generators argument; instead, we use blocks to do the fractionation. Each block is randomized separately, but the order of the blocks is not randomized. In practice, we may opt to run the blocks in a different sequence. With this design, just one of the cube blocks is sufficient to estimate a first-order response surface.
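A call along these lines (a sketch; the blocking formula encodes b1 = x1x2x3 and b2 = x3x4x5, and n0 gives the center points per cube block and per star block):

```r
library(rsm)

# Design 10: full 2^5 factorial split into four cube blocks by the
# blocking generators x1*x2*x3 and x3*x4*x5, plus one star block;
# 2 center points per cube block and 4 in the star block
des10 <- ccd(5,
             blocks = Blk ~ c(x1 * x2 * x3, x3 * x4 * x5),
             n0 = c(2, 4))
```

The result carries a blocking factor (here named Blk) that should be included in any model fitted to data from this design.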
It is also important to examine a design's capabilities.First of all, is it adequate to fit the needed first-or second-order model, and how effective is it in predicting the response surface?
The varfcn function (a new addition starting with rsm version 2.0) is helpful in this regard. It calculates a scaled version of the variance of the fitted values over a specified set of design points. By default, it computes this along paths through (1, 0, . . ., 0), (1, 1, . . ., 0), . . ., (1, 1, . . ., 1), or on a grid with the first two variables. The right-hand side of the intended model must be provided.
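A sketch of such a call, assuming a five-factor blocked CCD named des10 with a blocking factor Blk (the construction shown here is illustrative):

```r
library(rsm)

# A five-factor blocked CCD (illustrative construction)
des10 <- ccd(5,
             blocks = Blk ~ c(x1 * x2 * x3, x3 * x4 * x5),
             n0 = c(2, 4))

# Scaled prediction variance for a second-order model: a profile plot
# along the standard paths, then a contour plot over (x1, x2)
varfcn(des10, ~ Blk + SO(x1, x2, x3, x4, x5))
varfcn(des10, ~ Blk + SO(x1, x2, x3, x4, x5), contour = TRUE)
```

Flat profiles at a given distance from the origin indicate near-rotatability; circular contours indicate the same thing graphically.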
Figure 1 illustrates this for des10. It shows that the design is nearly rotatable (it would be exactly so if we had chosen alpha = "rotatable" in the call to ccd). It can also be verified that any two of the cube blocks plus the axis block is sufficient to estimate a second-order response surface. Just one cube block plus the axis points, however, is not sufficient.
It is possible to imagine a CCD that consists of a fractional factorial divided into blocks.
For such a design, both generators and blocks would be needed. For smaller numbers of factors, most CCDs require no fractionation of either type, and obviously these are simple to generate.
Starting in version 1.40 of rsm, an inscribed argument is available in ccd. This scales the entire design so that it fits within a unit cube; this is useful for situations when there are constraints on the region of operability. Note in this example that it is now the axis points that are at ±1, while the cube points are at ±1/2. (Incidentally, this example also illustrates the default codings used when no coding formulas are specified.) There are several other types of designs that are useful for response surfaces, as mentioned in several of the books referenced in this article. Provisions for generating those designs are an area of future development in the rsm package.
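A minimal sketch of the inscribed option (a two-factor design chosen for illustration):

```r
library(rsm)

# An inscribed CCD in 2 factors: the whole design is shrunk by a factor
# of alpha, so the axis points sit at +/-1 and the cube points move
# strictly inside the unit square
des.in <- ccd(2, inscribed = TRUE)
des.in
```

This is the natural choice when the coded range [-1, 1] represents hard operating limits rather than a comfortable experimental region.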

Fitting a response-surface model
A response surface is fitted using the rsm function. This is an extension of lm, and works almost exactly like it; however, the model formula for rsm must make use of the special functions FO, TWI, PQ, or SO (for "first-order," "two-way interaction," "pure quadratic," and "second-order," respectively), because the presence of these specifies the response-surface portion of the model. Other terms that don't involve these functions may be included in the model; often, these terms would include blocking factors and other categorical predictors.
To illustrate this, let us revisit the ChemReact data introduced in Section 2. We have one response variable, Yield, and two coded predictors x1 and x2, as well as a blocking factor Block. Supposing that the experiment was done in two stages, we first act as though the data in the second block have not yet been collected, and fit a first-order response-surface model to the data in the first block. What we see in the summary is the usual summary for a lm object (with a subtle difference), followed by some additional information particular to response surfaces. The subtle difference is that the labeling of the regression coefficients is simplified (we don't see "FO" in there). The analysis-of-variance table shown includes a breakdown of lack of fit and pure error, and we are also given information about the direction of steepest ascent. Since the dataset is a coded.data object, the steepest-ascent information is also presented in original units. (While rsm does not require a coded.data dataset, the use of one is highly recommended.) In this particular example, the steepest-ascent information is of little use, because there is significant lack of fit for this model (p ≈ 0.01). It suggests that we should try a higher-order model. For example, we could add two-way interactions:
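As a sketch (the coding formulas and the block label "B1" are assumptions consistent with the ChemReact data as described above):

```r
library(rsm)

CR <- coded.data(ChemReact,
                 x1 ~ (Time - 85)/5,
                 x2 ~ (Temp - 175)/5)

# First-order fit to the first block only (block label assumed "B1")
CR1.rsm <- rsm(Yield ~ FO(x1, x2), data = CR, subset = (Block == "B1"))
summary(CR1.rsm)

# Augment the model with the two-way interaction term
CR1.rsmi <- update(CR1.rsm, . ~ . + TWI(x1, x2))
summary(CR1.rsmi)
```

Because rsm extends lm, standard tools such as update, anova, and predict work on the fitted object as well.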
To go further, we need more data. Thus, let us pretend that we now collect the data in the second block, and combine it with the first using djoin. Notice that djoin figures out that the second block is not coded but has the appropriate uncoded variables Time and Temp, so it codes those variables appropriately; also, the Block factor is added automatically. We are now in a position to fit a full second-order model to the combined data. This can be done by adding PQ(x1, x2) to the above model with interaction, but the easier way is to use SO, which is shorthand for a model with FO, TWI, and PQ terms. We also need to account for the block effect, since the data are collected in separate experiments. The lack of fit is now non-significant (p ≈ 0.69). The summary for a second-order model provides results of a canonical analysis of the surface rather than for steepest ascent. The analysis indicates that the stationary point of the fitted surface is at (0.37, 0.33) in coded units, well within the experimental region, and that both eigenvalues are negative, indicating that the stationary point is a maximum. This is the kind of situation we dream of in response-surface experimentation: clear evidence of a nearby set of optimal conditions. We should probably collect some confirmatory data near this estimated optimum, at Time ≈ 87, Temp ≈ 177, to make sure.
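The combined-data fit just described might be sketched as follows (assuming the package's ChemReact1 and ChemReact2 datasets hold the two blocks, and using the coding formulas introduced earlier):

```r
library(rsm)

CR1 <- coded.data(ChemReact1,
                  x1 ~ (Time - 85)/5,
                  x2 ~ (Temp - 175)/5)

# Append the second block; djoin codes ChemReact2 to match CR1 and
# adds a Block factor distinguishing the two stages
CR2 <- djoin(CR1, ChemReact2)

# Full second-order surface, with a block effect
CR2.rsm <- rsm(Yield ~ Block + SO(x1, x2), data = CR2)
summary(CR2.rsm)
```

The summary of this second-order fit is where the canonical analysis described above appears.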
Another example that comes out a different way is a paper-helicopter experiment (Box et al. 2005, Table 12.5). This is another central-composite experiment, in four variables and two blocks. The data are provided in the rsm dataset heli; these data are already coded. The original variables are wing area A, wing shape R, body width W, and body length L. The goal is to make a paper helicopter that flies for as long as possible. Each observation in the dataset represents the results of ten replicated flights at each experimental condition. Here we study the average flight time, variable name ave, using a second-order surface.
> heli.rsm <- rsm(ave ~ block + SO(x1, x2, x3, x4), data = heli)

From the analysis of variance, it is clear that the second-order (TWI and PQ) terms contribute significantly to the model, so the canonical analysis is relevant. Again, the stationary point is fairly near the experimental region, but the eigenvalues are of mixed sign, indicating that it is a saddle point (neither a maximum nor a minimum). We will do further analysis of these results in subsequent sections.

Displaying a response surface
While the canonical analysis gives us a handle on the behavior of a second-order response surface, an effective graph is a lot easier to present and explain. To that end, rsm includes a function for making contour plots of a fitted response surface. This function is not restricted to rsm results, however; it can be used for plotting any regression surface produced by lm. For more detailed information, see the associated vignette "Surface Plots in the rsm Package." We provide the lm or rsm object, a formula for which predictors to use, and various optional parameters. Consider the paper-helicopter example in the preceding section; there are four response-surface predictors, making six pairs of predictors. If we want to visualize the behavior of the fitted surface around the stationary point, we can provide that location as the at argument:

> par(mfrow = c(2, 3))
> contour(heli.rsm, ~ x1 + x2 + x3 + x4, image = TRUE,
+         at = summary(heli.rsm)$canonical$xs)

The plots are shown in Figure 2. The image argument causes each plot to display a color image overlaid by the contour lines. When multiple plots like this are produced, the color levels are held consistent across all plots. Note that the at condition does not set the center of the coordinate systems (the default variable ranges are derived from the data); it sets the values at which to hold variables other than those on one of the coordinate axes, as shown in the subtitles.

Direction for further experimentation
In many first-order cases, as well as second-order cases where we find a saddle point or the stationary point is distant, the most useful further action is to decide in which direction to explore further. In the case of first-order models, one can follow the direction of steepest ascent. As already seen in Section 4, the summary method for rsm objects provides some information about this path; more detailed information is available via the steepest function. In general, we can specify any set of distances along the path. The decoded coordinate values are displayed if the model was fitted to a coded.data dataset.
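For example (a sketch; the coding formulas, the block label "B1", and the choice of distances are illustrative assumptions):

```r
library(rsm)

CR <- coded.data(ChemReact,
                 x1 ~ (Time - 85)/5,
                 x2 ~ (Temp - 175)/5)

# First-order fit to the first block, as in Section 4
CR1.rsm <- rsm(Yield ~ FO(x1, x2), data = CR, subset = (Block == "B1"))

# Points along the path of steepest ascent, at coded distances 0 to 2.5
steepest(CR1.rsm, dist = seq(0, 2.5, by = 0.5))
```

Because the model was fitted to a coded.data object, the output includes the corresponding Time and Temp values at each distance.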
At this point it is worth emphasizing that, although the fitted values are also displayed, one must be careful to understand that these are only predictions and that, as the distance increases, they are very poor predictions and should be taken with a grain of salt.What one should do is to conduct actual experimental runs at points along this path, and use the observed response values, not these predictions, for guidance on where to locate the next factorial experiment.
In the second-order case, the steepest function still works, but it uses the ridge analysis method (Hoerl 1959; Draper 1963), which is the analog of steepest ascent in the sense that, for a specified distance d, it finds the point at which the predicted response is a maximum among all predictor combinations at radius d. This method makes sense when the stationary point is some distance away; but when this point is nearby, it makes more sense to start at the saddle point (rather than the origin) and follow the most steeply rising ridge in both directions. This path is obtained using the canonical.path function. In this function, distance is a signed quantity, according to the direction along the ridge.
In the heli example, we do have a nearby stationary point. Here are some points within a radius of 5 along the canonical path:

> canonical.path(heli.rsm, dist = seq(-5, 5, by = 0.5))

Stationary and rising-ridge situations
Canonical analysis becomes unstable in cases where the matrix B of second-order coefficients is singular or nearly so. As an example, consider the dataset codata provided with rsm and used as an example in Box et al. (2005); for the fitted surface CO.rsm, the stationary point is very distant from the design center. Note that, due to an automatic thresholding provision, one of the eigenvalues has been set to zero. This causes the stationary point to be estimated based only on the surviving eigenvector, with the other one assumed to be a stationary ridge. To ignore this thresholding, set the threshold to zero. The following statements produce an illustrative plot, shown in Figure 3.

> contour(CO.rsm, x2 ~ x1,
+         zlim = c(-100, 100), col = "gray", decode = FALSE)
> lines(c(-1, 1, 1, -1, -1), c(-1, -1, 1, 1, -1), col = "green")  # design region
> points(x2 ~ x1, data = canonical.path(CO.rsm),
+        col = "blue", pch = 1 + 6*(dist == 0))
> points(x2 ~ x1, data = canonical.path(CO.rsm, threshold = 0),
+        col = "red", pch = 1 + 6*(dist == 0))
> points(x2 ~ x1, data = steepest(CO.rsm),
+        col = "magenta", pch = 1 + 6*(dist == 0))

It displays the fitted response surface, as well as the results from canonical.path with and without the threshold (blue and red points, respectively). The region of the design is shown as a green box. The stationary point (different symbol) is seen to be a saddle point near the upper-left corner when not thresholded, and near the design center when thresholded. Otherwise, the canonical paths are much the same but with different origins. Both form a path along the rising ridge that occurs in the vicinity of the design.
It is important to note that the stationary point obtained by the default thresholding is not really a stationary point, but rather a nearby point that represents a center for the most important canonical directions. In this example, the true stationary point is very distant, and the thresholded stationary point is the nearest place on a rising ridge that emanates from the true stationary point. The thresholded canonical.path results give us a much more usable set of factor settings to explore than the ones without a threshold.

Technical details
This subsection provides some technical backing for canonical analysis and what we do when a threshold is active, in case you're interested.
Let b and B denote the first-order and second-order coefficients of the fitted second-order surface, so that the fitted value at a coded point x is ŷ(x) = b0 + b′x + x′Bx. The stationary point xs solves the equation 2Bxs + b = 0, i.e., xs = −(1/2)B⁻¹b. The canonical analysis yields the decomposition B = UΛU′, where there are k predictors, the uj form the orthonormal columns of U, the λj are the eigenvalues, and Λ = diag(λ1, λ2, . . ., λk). It also happens to be true that B = Σ_{j=1}^{k} λj uj uj′ and B⁻¹ = Σ_{j=1}^{k} λj⁻¹ uj uj′.
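These computations are easy to reproduce in base R; here is a sketch with hypothetical coefficients (not taken from any dataset in this article):

```r
# Hypothetical second-order fit in k = 2 coded predictors
b <- c(1.0, -0.5)                      # first-order coefficients
B <- matrix(c(-2.0,  0.6,
               0.6, -1.5), nrow = 2)   # symmetric matrix of second-order terms

# Stationary point: solves 2 B xs + b = 0
xs <- -0.5 * solve(B, b)

# Canonical analysis: eigenvalues and directions of B
eig <- eigen(B)
eig$values    # both negative here, so the stationary point is a maximum
eig$vectors   # columns are the orthonormal directions u_j
```

In rsm these quantities are computed for you and reported in the canonical analysis portion of summary output; the sketch above only shows where they come from.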

Discussion
The current version of rsm provides only the most standard tools for first- and second-order response-surface design and analysis. The package can be quite useful for those standard situations, and it implements many of the analyses presented in textbooks. However, clearly a great deal of work has been done in response-surface methods that is not represented here. Even a quick glance at a review article such as Myers, Montgomery, Vining, Borror, and Kowalski (2004), or even an older one such as Hill and Hunter (1989), reveals that there is a great deal that could be added to future editions of rsm. There are many other useful designs besides central composites and Box-Behnken designs. We can consider higher-order models or the use of predictor transformations. Mixture designs are not yet provided for. There are important relationships between these methods and robust parameter design, and with computer experiments. The list goes on. However, we now at least have a good collection of basic tools for the R platform, and that is a starting point.

Figure 1: Variance function plots for des10, with respect to a second-order model.

Figure 2: Fitted response-surface contour plots near the stationary point for the helicopter experiment.

Figure 3: Fitted response surface for CO.rsm, and the canonical paths with (blue) and without (red) a threshold. Also shown is the path of steepest ascent (magenta), which aligns closely with one direction in the thresholded canonical path. The design region is shown as a green box. The stationary point is at about (−15, 15) in coded units, very distant from the design center.

Thus, a really small value of λj hardly affects B, but has a huge influence on B⁻¹. Now, for some m < k, let Λ* be the m × m diagonal matrix containing only some subset of m of the eigenvalues, and let U* be the k × m matrix with the corresponding uj. If we exclude the eigenvalues that are smallest in absolute value, then B* = U*Λ*U*′ ≈ B. Moreover, by orthogonality, U′B = ΛU′ and U*′B* = Λ*U*′, so the stationary-point calculation can be confined to the retained directions, which is the basis of the thresholded analysis described above.