Constructing Optimal Designs for Order-of-Addition Experiments Using a Hybrid Algorithm

Abstract: In order-of-addition experiments, the response is affected by the order in which the experimental materials are added. Consequently, the main interest lies in building a predictive model and finding an optimal design for optimizing the response. Van Nostrand proposed the pairwise-order (PWO) model for detecting PWO effects. Under the PWO model, the full PWO design is optimal under various criteria but is often unaffordable because of its large run size. In this paper, we consider D-, A- and M.S.-optimal fractional PWO designs. We first present some results on information matrices. Then, a flexible and efficient algorithm is given for generating these optimal PWO designs. Numerical simulation shows that the generated design has appealing efficiency in comparison with the full PWO design, though with only a small fraction of its runs. Several comparisons with existing designs illustrate that the generated designs achieve better efficiencies, and the best PWO designs and some selected 100% efficient PWO designs generated by the new algorithm are reported.


Introduction
During the past decades, order-of-addition (OofA) experiments have been widely applied in biochemistry [1], the food industry [2], and chemical-related areas [3]. In such experiments, several components are added into an apparatus sequentially rather than simultaneously, and different addition sequences of reactants yield different responses. However, as the number of components increases, the sample space suffers from a combinatorial explosion, and performing a full design becomes infeasible. Accordingly, our goal is to model the OofA experiment and construct an informative and economical fractional design.
For the objective of optimizing and predicting the response, several statistical models have been created for OofA experiments: Refs. [4][5][6] proposed a PWO model with diminishing effects for pairwise factors; Ref. [7] added PWO factor interactions to account for sequencing effects not captured by pairwise main effects alone; Ref. [8] proposed a component-position (CP) model for order of addition using categorical explanatory variables. In this article, we focus only on the first-order PWO model. Note that the PWO model is an ordinary regression model, so a family of criteria can be applied to find optimal designs under it, such as the D-, A- and M.S.-criteria. The optimality proof indicates that a full PWO design with m! distinct permutations of components is D-, A-, and M.S.-optimal, but its run size is extremely large. Taking m = 10 as an example, there are m! = 3,628,800 distinct permutations. Consequently, over three million runs would have to be implemented, which is impractical. Therefore, fractions of full PWO designs with smaller numbers of runs are preferable.
Recently, four kinds of fractional PWO designs have been studied. Ref. [4] introduced a method for constructing optimal PWO designs; this method limits the run size to m!/r! (2 ≤ r ≤ m/2), which is still too large for experimenters to afford. For instance, if m = 10, the method needs at least 30,240 runs. Refs. [8,9] provided construction methods based on orthogonal arrays; the resulting designs are component orthogonal arrays (COAs) and OofA orthogonal arrays (OofA-OAs), whose run sizes are also inflexible. Ref. [10] provided a minimal-point design with m(m − 1)/2 + 1 runs; the run size is small, but the efficiency is relatively low. In short, theoretical constructions of these fractional PWO designs depend heavily on the run size. Ref. [11] applied the threshold-accepting algorithm to construct optimal designs (D-efficiency in their application) based on the pairwise-order (PWO) model and the tapered PWO model; the designs obtained by the threshold-accepting algorithm for 4 ≤ m ≤ 30 with n = m(m − 1)/2 + 1, m(m − 1) + 1, and 3m(m − 1)/2 + 1, respectively, are provided for practical use.
This article gives serious consideration to constructing D-, A-, and M.S.-optimal PWO designs. The main contributions of this article are as follows. First, we construct a new hybrid algorithm for generating PWO designs that is flexible, with no restriction on run size. Second, D-, A-, and M.S.-optimal PWO designs can all be constructed using the proposed algorithm. Third, the constructed designs possess high efficiencies compared with the full PWO design, though with a small fraction of its runs. Some optimal PWO designs and fully efficient PWO designs are listed in the tables. This paper is organized as follows. We first introduce the PWO model in Section 2. Section 3 reviews Fedorov's exchange algorithm for constructing D-optimal designs; this algorithm is then modified and extended for constructing A- and M.S.-optimal designs, and some theoretical results on the information matrix and the algorithm are also provided there. In Section 4, based on the exchange algorithm and the particle swarm optimization (PSO) algorithm, a novel hybrid algorithm is proposed to achieve D-, A- and M.S.-optimal PWO designs. Some numerical results are given in Section 5. Finally, concluding remarks are provided in Section 6.

Model Specification
Now, we introduce the Van Nostrand PWO model. Suppose there are m components denoted as 1, . . . , m. Any treatment in the OofA experiment corresponds to a permutation of 1, . . . , m, denoted as α, and the first-order PWO model can be expressed as

y(α) = β_0 + Σ_{1≤j<k≤m} β_jk z_jk(α) + ε, (1)

where each z_jk(α) is a PWO indicator between j and k,

z_jk(α) = 1 if j precedes k in α, and z_jk(α) = −1 otherwise.

For an n-point PWO design, let Y be the n-dimensional response vector, Z be the design matrix with C(m, 2) columns corresponding to the PWO indicators z_12, z_13, . . . , z_(m−1)m, and β = (β_12, β_13, . . . , β_(m−1)m)′, where ′ denotes the transpose. Then, the first-order PWO model can be written as

Y = β_0 1 + Zβ + ε, (2)

or

Y = Xβ* + ε, (3)

where X = (1 Z)_{n×p} with p = C(m, 2) + 1 is the model matrix, β* = (β_0, β′)′ represents the parameters of interest, and the random error ε ∼ N(0, σ²I). Ref. [7] extended the PWO model to the high-order case. Here, we only consider the first-order PWO model; the proposed algorithms also apply to higher-order PWO models. Furthermore, we refer to M = X′X/n as the information matrix of an n-point PWO design. Under the PWO model (3), the variance-covariance matrix of the least squares estimator of β* is proportional to M⁻¹. Hence, it is desirable to make M as large as possible under some criterion. The popular criteria include the D-criterion, which maximizes det(M)^{1/p}; the A-criterion, which minimizes tr(M⁻¹); and the M.S.-criterion, which minimizes tr(M²) (see Ref. [12]). Note that tr(M⁻¹) is interpreted as +∞ for singular X′X. Let X_f be the full PWO design and M_f = X_f′X_f/n (with n = m!) be the corresponding information matrix. For clarity, we take m = 3 as an example to illustrate the characteristics of the full PWO design under the D-, A- and M.S.-criteria; the levels of the PWO factors in the full PWO design with three components are shown in Table 1.
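To make the PWO indicators concrete, the following sketch (Python with NumPy; the helper names are ours, not from the paper) builds the model matrix X = (1 Z) of the full PWO design for m = 3 and evaluates the three criteria on M = X′X/n.

```python
from itertools import combinations, permutations

import numpy as np

def pwo_row(alpha):
    """PWO indicators for a permutation alpha of 1..m:
    z_jk = +1 if component j is added before component k (j < k), else -1."""
    pos = {c: i for i, c in enumerate(alpha)}
    m = len(alpha)
    return [1.0 if pos[j] < pos[k] else -1.0
            for j, k in combinations(range(1, m + 1), 2)]

def model_matrix(design):
    """Model matrix X = (1 Z) with p = C(m, 2) + 1 columns."""
    Z = np.array([pwo_row(a) for a in design])
    return np.hstack([np.ones((len(design), 1)), Z])

# Full PWO design for m = 3: all 3! = 6 addition orders.
full = list(permutations(range(1, 4)))
X = model_matrix(full)
M = X.T @ X / len(full)                  # information matrix
p = X.shape[1]                           # here p = C(3, 2) + 1 = 4
d_value = np.linalg.det(M) ** (1 / p)    # D-criterion (to be maximized)
a_value = np.trace(np.linalg.inv(M))     # A-criterion (to be minimized)
ms_value = np.trace(M @ M)               # M.S.-criterion (to be minimized)
```

Note that every diagonal entry of M equals 1, because each column of X has entries ±1 (or 1 for the intercept); the M.S.-criterion therefore effectively measures the squared off-diagonal (correlation) terms.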

Exchange Algorithms for Constructing D-, A-, and M.S.-Optimal Designs
Theoretical constructions of optimal designs are always complicated; hence, computer algorithms are applied in the literature for constructing approximate and exact optimal designs. The exchange algorithm is one of the most popular computer algorithms for constructing optimal designs when the design points are selected from a finite design space. Fedorov [13] first proposed an exchange algorithm for generating D-optimal designs. This algorithm chooses n points to include in the design from a finite set of possible points called candidate points. It starts with a nonsingular n-point design and then adds and deletes one observation at a time to achieve increases in the determinant. Subsequently, several improved implementations based upon Fedorov's exchange algorithm have been proposed, such as the Kiefer round-off algorithm, the Mitchell algorithm, the van Schalkwyk algorithm, the combined Fedorov algorithm, and the Wynn-Mitchell algorithm; see Refs. [14,15].

The Single-Point Exchange Procedure
Consider an n-point design D = {α_i}_{i=1}^n under model (3), with corresponding model matrix X = (x_1, . . . , x_n)′. If M = X′X/n, then the D-, A- and M.S.-criteria maximize det(M)^{1/p}, −tr(M⁻¹) and −tr(M²), respectively, which is equivalent to maximizing φ(D) = det(M), −tr(M⁻¹) or −tr(M²), respectively, with the unnormalized matrix M = X′X = Σ_{i=1}^n x_i x_i′. Inspired by Fedorov's exchange algorithm, we develop a new exchange algorithm for generating D-, A- and M.S.-optimal designs simultaneously. The algorithm is realized by multiple iterations of the single-point exchange procedure, which works as follows.

Single-point exchange procedure:
Let X be the model matrix of the original design and M = X X.
(1) Find a vector x among the vectors of the complementary design such that u(x) is maximal, and add x to the current n-point design;
(2) Find a vector x_i among the n + 1 vectors of the current (n + 1)-point design such that v(x_i) is minimal, and remove x_i.
When the single-point exchange procedure is used for generating the D-, A- and M.S.-optimal designs, the objective functions are denoted as u_*(x) and v_*(x_i) with * = D, A, M.S. They follow from the rank-one update identities

det(M + xx′) = det(M)(1 + x′M⁻¹x),
tr((M + xx′)⁻¹) = tr(M⁻¹) − x′M⁻²x/(1 + x′M⁻¹x),
tr((M + xx′)²) = tr(M²) + 2x′Mx + (x′x)²,

and are defined as

u_D(x) = x′M⁻¹x, v_D(x_i) = x_i′M_x⁻¹x_i, (4)
u_A(x) = x′M⁻²x/(1 + x′M⁻¹x), v_A(x_i) = x_i′M_x⁻²x_i/(1 − x_i′M_x⁻¹x_i), (5)
u_M.S.(x) = −(2x′Mx + (x′x)²), v_M.S.(x_i) = (x_i′x_i)² − 2x_i′M_x x_i, (6)

where M = Σ_{i=1}^n x_i x_i′ is the moment matrix of the current design, and M is updated to M_x = M + xx′ when a candidate point from the complementary design is added to the current design.
Here, the complementary design consists of all candidate points in the design space except the n points of the current design. We have the following result.

Theorem 1. Each iteration of the single-point exchange procedure does not decrease φ(D), for φ(D) = det(M), −tr(M⁻¹) and −tr(M²).

The proof of this theorem uses some matrix theory, and we present it in Appendix A. The theorem implies that the exchange algorithm returns local D-, A- and M.S.-optimal designs over multiple iterations of the single-point exchange procedure.
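As an illustration, the sketch below (Python/NumPy; all helper names are ours) implements one iteration of the single-point exchange procedure for the D-criterion case: add the candidate maximizing u_D(x) = x′M⁻¹x, then delete the point minimizing v_D(x_i) = x_i′M_x⁻¹x_i. It assumes a nonsingular starting design, an issue discussed in the next subsection.

```python
from itertools import combinations, permutations

import numpy as np

def model_matrix(design):
    """X = (1 Z): intercept plus PWO indicators z_jk (j < k)."""
    m = len(design[0])
    rows = []
    for a in design:
        pos = {c: i for i, c in enumerate(a)}
        rows.append([1.0] + [1.0 if pos[j] < pos[k] else -1.0
                             for j, k in combinations(range(1, m + 1), 2)])
    return np.array(rows)

def d_exchange_step(design, candidates):
    """One single-point exchange for the D-criterion."""
    X = model_matrix(design)
    Minv = np.linalg.inv(X.T @ X)
    # Step (1): add the complementary point with maximal u_D(x) = x' M^-1 x.
    comp = [a for a in candidates if a not in design]
    Xc = model_matrix(comp)
    u = np.einsum('ij,jk,ik->i', Xc, Minv, Xc)
    design = design + [comp[int(np.argmax(u))]]
    # Step (2): remove the point with minimal v_D(x_i) = x_i' M_x^-1 x_i.
    X1 = model_matrix(design)
    M1inv = np.linalg.inv(X1.T @ X1)
    v = np.einsum('ij,jk,ik->i', X1, M1inv, X1)
    i = int(np.argmin(v))
    return design[:i] + design[i + 1:]

# Demo for m = 4: improve a nonsingular random 12-run design.
rng = np.random.default_rng(0)
cand = list(permutations(range(1, 5)))
while True:  # redraw until the initial moment matrix is nonsingular
    D = [cand[i] for i in rng.choice(len(cand), size=12, replace=False)]
    if np.linalg.matrix_rank(model_matrix(D)) == 7:
        break
det0 = np.linalg.det(model_matrix(D).T @ model_matrix(D))
for _ in range(10):
    D = d_exchange_step(D, cand)
det1 = np.linalg.det(model_matrix(D).T @ model_matrix(D))
```

Consistent with the theorem, det(M) never decreases across iterations, so det1 ≥ det0 here.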

The Technique for Avoiding the Singularity of the Matrix for the Exchange Algorithm
When generating an optimal design using a computer search algorithm, the solution is often trapped at a locally optimal design; thus, a random exchange method is commonly used to avoid this drawback. When constructing D-, A- and M.S.-optimal designs by computer search, however, a randomly selected initial design possibly corresponds to a singular moment matrix, and a computational problem then arises. This is a significant issue, especially when n is rather small. Take the case n = C(m, 2) + 1 (m = 4, 5) as an example. Among all C(m!, n) possible n-point designs, a large proportion correspond to a badly conditioned matrix M. As shown in Figure 1, all the reciprocal condition numbers are near 0, and reciprocal condition numbers below 10⁻¹² fall into the first bin of each histogram, with probability exceeding 50%.
A randomly selected initial design will therefore return a computationally singular matrix M with large probability. For this reason, we address the issue of avoiding singularity of M and M_x in the single-point exchange algorithm. Two techniques are provided for this issue. The first is to start with a nonsingular design instead of a randomly selected one. This technique is practical, since a nonsingular initial design with n points can be obtained by appending n − C(m, 2) − 1 randomly selected distinct points to the minimal-point design provided in [10]. However, in the hybrid algorithm, the design is updated via both the single-point exchange procedure and some random exchange procedures, so nonsingularity of the initial design alone does not suffice.
The second technique is inspired by the DETMAX algorithm in [14]: a specified nonsingular matrix multiplied by a very small positive parameter θ is added to the matrix M or M_x. Taking M as an example, we do not consider M⁻¹ directly, but instead work with

(M + θ X_f′X_f/N_f)⁻¹,

where N_f is the number of candidate points and X_f is the model matrix of the full design composed of all N_f candidate points. The technique for avoiding the singularity of the matrix is then as follows.

Remark 2.
To avoid singularity, x′(M + θ(X_f′X_f/N_f))⁻¹x and x_i′(M_x + θ(X_f′X_f/N_f))⁻¹x_i are maximized and minimized, respectively, in the single-point exchange algorithm, with u(x) and v(x_i) defined as in Equations (4) and (5). The degree of error involved in considering these alternative matrices is less than θ.
To appreciate the degree of error involved in considering the alternative matrix, one can make the following calculation. Let

f(θ) = x′(M + θ X_f′X_f/N_f)⁻¹x.

Then, applying the technique in Mitchell (2000), expand f(θ) in a Taylor series about θ = 0 to obtain the linear approximation

f(θ) ≈ f(0) − θ x′M⁻¹(X_f′X_f/N_f)M⁻¹x.

For small θ, the error in considering f(θ) instead of f(0) = x′M⁻¹x is therefore approximately θ x′M⁻¹X_f′X_f M⁻¹x/N_f. In the proposed algorithm, the value of θ is set at 0.005, which was found to be quite satisfactory in simulations: the run size N_f of the full PWO design is sufficiently large that x′M⁻¹X_f′X_f M⁻¹x/N_f < 1, so the error is less than 0.5%.
Note that in this paper, we adopt the technique described in Remark 2 to avoid the singularity of the matrix.
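The effect of the regularization in Remark 2 can be checked numerically; the sketch below (Python/NumPy; variable names are ours) shows that even when the current design has fewer runs than parameters, so that M itself is singular, the matrix M + θX_f′X_f/N_f is invertible and u(x) stays well defined.

```python
from itertools import combinations, permutations

import numpy as np

def model_matrix(design):
    """X = (1 Z): intercept plus PWO indicators z_jk (j < k)."""
    m = len(design[0])
    rows = []
    for a in design:
        pos = {c: i for i, c in enumerate(a)}
        rows.append([1.0] + [1.0 if pos[j] < pos[k] else -1.0
                             for j, k in combinations(range(1, m + 1), 2)])
    return np.array(rows)

theta = 0.005
cand = list(permutations(range(1, 5)))   # full PWO design for m = 4, N_f = 24
Xf = model_matrix(cand)
R = theta * (Xf.T @ Xf) / len(cand)      # theta * X_f' X_f / N_f

D = cand[:5]                             # only 5 runs, but p = 7 columns,
X = model_matrix(D)                      # so M = X'X is singular here...
M = X.T @ X
Minv_reg = np.linalg.inv(M + R)          # ...while M + R is positive definite
u = np.einsum('ij,jk,ik->i', Xf, Minv_reg, Xf)  # regularized u(x) values
```

Because X_f has full column rank, X_f′X_f is positive definite, and so is M + R; the argmax of the regularized u(x) can thus always be taken, regardless of the rank of M.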

The Performance of the Exchange Algorithm
Now, we discuss the performance of the exchange algorithm. The single-point exchange procedure is performed recursively, and the D-, A- and M.S.-efficiencies of the generated designs are calculated. For brevity, cases with m = 4, 5, 6, 7 components are considered, and the run sizes are fixed at n = m(m − 1). Figure 2 shows that the efficiencies increase sharply in the early iterations and then stabilize at one value as the number of iterations increases. Therefore, the exchange algorithm yields locally optimal designs that approximate a globally optimal design within a reasonable number of iterations. To further assess the performance of the exchange algorithm for constructing D-, A- and M.S.-optimal designs, 1000 designs are generated by the exchange algorithm for each of the objective functions defined in Equations (4)-(6), with randomly selected initial designs. We list the minimum, average, and maximum efficiencies of the generated designs in Table 2. The generated designs depend heavily on the initial designs; most of them are only locally optimal, and some even have efficiencies lower than 80% (see the numbers in bold font). Thus, in the next section, we propose a more robust hybrid algorithm that combines the exchange algorithm and the particle swarm algorithm to produce approximately optimal designs with higher efficiency than those generated by the exchange algorithm alone.

Constructions on D-, A-and M.S.-Optimal PWO Designs Using a Hybrid Algorithm Combining the Exchange Algorithm and PSO Algorithm
Before introducing the new algorithm, we give some details of the PSO algorithm. PSO is a population-based stochastic optimization algorithm. Each population member is described as a particle that moves around a search space testing new criterion values. All particles survive from the beginning of a trial until its end, and their interactions result in iterative improvement of the quality of the problem solutions over time. The most common implementation has each particle adjust toward both its personal best position (local best) and the global best position, so that its trajectory shifts toward new regions of the search space and the particles gradually cluster around the optima. When generating optimal experimental designs, a particularly challenging task is to define the particle designs' movement toward their personal local-best designs and the global-best design. A review of some recent applications of PSO and its variants to various types of efficient experimental design is given in [16]. Since finding optimal PWO designs for the OofA experiment amounts to solving a discrete optimization problem, we utilize an update procedure for the particle designs similar to the modified PSO algorithms in [17,18]. Each particle design is associated with a personal local-best design, derived by exchange procedures starting from the particle itself. During each iteration, the current particle design is adjusted toward both its personal local-best design and the global-best design by exchanging points with them. We now introduce a new hybrid algorithm, called the Ex-PSO algorithm, which combines the single-point exchange algorithm and the PSO algorithm for generating D-, A-, and M.S.-optimal designs. The single-point exchange algorithm generates the local-best design for each particle design, while the PSO algorithm ensures that the particle designs gradually cluster around the optimal PWO design.
To avoid singularity, the technique proposed in Remark 2 is used; hence, a parameter θ with a small value is involved in this algorithm.
Since the Ex-PSO algorithm involves a set of parameters denoted as s, t_ex, t_pso, θ, c_1, c_2, we also refer to it as Ex-PSO(m, n; s, t_ex, t_pso, θ, c_1, c_2) when generating an optimal PWO design with m components and n runs. For clarity, we provide a programming chart illustrating the steps of Ex-PSO(m, n; s, t_ex, t_pso, θ, c_1, c_2). We explain the optimization process and the roles of these parameters as follows. Denote by L_k (k = 1, . . . , s) and G the local-best designs and the global-best design, respectively; these designs are updated during each iteration of the Ex-PSO algorithm. Each local-best design L_k is derived from the current particle design D_k via a fixed number t_ex of iterations of the single-point exchange procedure. The global-best design G is the local-best design that maximizes φ(L_k). The number of iterations of the PSO stage is denoted as t_pso. Meanwhile, two parameters, c_1 and c_2, control the PSO behavior of the Ex-PSO algorithm; they govern the velocities at which each current design drifts toward the corresponding local-best and global-best designs. More specifically, during each iteration of the PSO stage, we randomly exchange c_1 points from the difference set D_k\L_k with c_1 points from L_k\D_k, and then randomly exchange c_2 points from the difference set D_k\G with c_2 points from G\D_k. This procedure corresponds to the "Update D_k by PSO" box in the programming chart.
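The "Update D_k by PSO" step can be sketched as follows (plain Python; the function name and signature are ours): swap c_1 points of D_k\L_k for points of L_k\D_k, then c_2 points of D_k\G for points of G\D_k, so the particle drifts toward both best designs while keeping its run size and avoiding duplicate runs.

```python
import random

def pso_update(Dk, Lk, G, c1=1, c2=1, rng=random.Random(0)):
    """Drift particle design Dk toward its local-best Lk and the
    global-best G by randomly exchanging points."""
    Dk = list(Dk)
    for target, c in ((Lk, c1), (G, c2)):
        out = [p for p in Dk if p not in target]   # points Dk can give up
        inc = [p for p in target if p not in Dk]   # points Dk can pull in
        k = min(c, len(out), len(inc))             # cannot swap more than exist
        for old, new in zip(rng.sample(out, k), rng.sample(inc, k)):
            Dk[Dk.index(old)] = new
    return Dk

# Toy example with 3-run designs over orders of three components:
Dk = [(1, 2, 3), (2, 1, 3), (3, 1, 2)]
Lk = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]
G  = [(1, 3, 2), (2, 3, 1), (3, 2, 1)]
new_Dk = pso_update(Dk, Lk, G)
```

Because the incoming points are drawn only from target\D_k, the updated particle remains a set of n distinct runs, as the algorithm requires.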
Finally, we note that the time complexity of each iteration is O(n³). However, the search procedure of PSO also involves the full design, which is a combinatorially large set with m! points, so the Ex-PSO algorithm may be time-consuming for large m. The Ex-PSO algorithm (Algorithm 1) was implemented in MATLAB running on an Intel(R) Core(TM) i7-8550U CPU with 8 GB of memory. Taking the case m = 6, n = 16 as an example, running Ex-PSO(6, 16; 10, 20, 100, 0.005, 1, 1) takes 30.42 seconds.
Algorithm 1: Ex-PSO(m, n; s, t_ex, t_pso, θ, c_1, c_2)
1: Randomly generate s particle designs D_k, k = 1, . . . , s
2: Calculate φ(D_k), k = 1, . . . , s
3: L_k ← D_k, k = 1, . . . , s and G ← argmax_{1≤k≤s} φ(L_k)
4: for i = 1 to t_pso do
5:   Update each D_k by single-point exchange procedure
6:   Calculate φ(D_k), k = 1, . . . , s
7:   if φ(D_k) > φ(L_k) then
8:     Update L_k by D_k
9:   end if
10:  if φ(L_k) > φ(G) then
11:    Update G by L_k
12:  end if
13:  Update D_k by PSO
14: end for
15: return G

To evaluate the generated designs, the D-, A- and M.S.-efficiencies are defined as

(det(M)/det(M_f))^{1/p}, tr(M_f⁻¹)/tr(M⁻¹), tr(M_f²)/tr(M²),

respectively, where M and M_f are the information matrices of the obtained design and the full PWO design, respectively, and p is the number of columns of the model matrix X.
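These efficiencies can be computed directly; the sketch below (Python/NumPy; the helper is ours, and the 12-run fraction is only illustrative) evaluates a random nonsingular fraction for m = 4 against the full PWO design.

```python
from itertools import combinations, permutations

import numpy as np

def model_matrix(design):
    """X = (1 Z): intercept plus PWO indicators z_jk (j < k)."""
    m = len(design[0])
    rows = []
    for a in design:
        pos = {c: i for i, c in enumerate(a)}
        rows.append([1.0] + [1.0 if pos[j] < pos[k] else -1.0
                             for j, k in combinations(range(1, m + 1), 2)])
    return np.array(rows)

full = list(permutations(range(1, 5)))     # full PWO design, m = 4, 24 runs
Xf = model_matrix(full)
Mf = Xf.T @ Xf / len(full)
p = Xf.shape[1]

rng = np.random.default_rng(0)
while True:  # an illustrative nonsingular 12-run fraction
    D = [full[i] for i in rng.choice(len(full), size=12, replace=False)]
    X = model_matrix(D)
    if np.linalg.matrix_rank(X) == p:
        break
M = X.T @ X / len(D)

d_eff = (np.linalg.det(M) / np.linalg.det(Mf)) ** (1 / p)   # D-efficiency
a_eff = np.trace(np.linalg.inv(Mf)) / np.trace(np.linalg.inv(M))  # A-efficiency
ms_eff = np.trace(Mf @ Mf) / np.trace(M @ M)                # M.S.-efficiency
```

Since the full PWO design is optimal under all three criteria, each efficiency lies in (0, 1], with 1 attained exactly by the fully efficient designs discussed later in this section.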
The number of PSO particles (s), the maximum iteration counts of the single-point exchange algorithm and the PSO stage (t_ex, t_pso), and the numbers of pairs of exchanged points by which each particle design drifts toward the local-best and global-best designs (c_1, c_2) control the optimization process of the Ex-PSO algorithm. It seems reasonable that these parameters should increase with n and m. Based on our tests searching for optimal PWO designs with m = 4, 5, 6, 7 components, we recommend setting s = 10, t_ex = 20, t_pso = 100 and c_1 = c_2 = 1, which achieves high computational efficiency. To demonstrate the performance of this set of parameters, we ran the algorithm Ex-PSO(m, n; 10, 20, 100, 0.005, 1, 1) one hundred times to generate the Ex-PSO-D designs, since the exchange algorithm appears inefficient under the A-optimal criterion, as shown in Figure 3. One hundred Ex-PSO-D designs with m = 4, 5, 6, 7 components and n = m(m − 1) runs were thus generated, and Table 3 highlights that all Ex-PSO-D designs reach at least 93% of the efficiency of the full PWO design. For cases with large m, the settings of the maximum iteration counts t_ex and t_pso may not be enough, but the Ex-PSO algorithm still returns approximately optimal PWO designs; see Tables 4-6 in the following part.
To illustrate the advantages of the obtained designs for fitting the PWO model, we compare the Ex-PSO-D design with four components and 12 runs with the optimal PWO design in [4].

[Table: the 12-run Ex-PSO-D design for four components, with columns Run, Order-of-Addition, and the PWO indicators z_12, z_13, z_14, z_23, z_24, z_34, together with its information matrix under the first-order PWO model.] If the rows are rearranged, this design is the same as the optimal PWO design with 4!/2! runs constructed by [4]. This design also features projective properties [6]: all four subsets of three components correspond to two-times-replicated three-component designs.
Furthermore, for the OofA experiment with four components, we generated optimal PWO designs with seven to 23 runs using the Ex-PSO algorithm. Figure 4 shows the efficiencies of these designs. All the obtained designs with n ≥ 12 reach at least 95% of the efficiency of the full PWO design, though with less than one-fifth of the runs. In particular, for n = 12, the design attains the same efficiency as the full PWO design. Furthermore, we compare four types of fractional PWO designs: the COA, and the corresponding designs obtained by the threshold-accepting algorithm [11], Fedorov's exchange algorithm (which iteratively optimizes a delta function of x_i and x, where x_i is in the design and x is not; see Section 3.3 in [13]) and the Ex-PSO algorithm, denoted as D_coa, D_ta, D_ex and D_ex-pso, respectively. D_ta is the best result obtained over repeated runs of the threshold-accepting algorithm with up to 10,000,000 iterations; D_ex is generated by the optFederov function (implemented in the R library AlgDesign) with nRepeats = 5; and D_ex-pso is the best result obtained over five repeated runs of the Ex-PSO algorithm with t_ex = 20 and t_pso = 100. The optimal PWO design constructed by [4], which serves as a benchmark for evaluating fractional PWO designs, is also listed here and denoted as D_peng. In addition, the new hybrid algorithm needs an exhaustive search over the design space during the single-point exchange procedure, which can be computationally expensive if m is large. Hence, we only report designs with n = m(m − 1)/2 + 1, m(m − 1), m!/r! (r = m/2) for 4 ≤ m ≤ 7. Nevertheless, given the tremendous growth in available computational resources, it is feasible to run the Ex-PSO algorithm for constructing designs with m > 7. The comparison results are reported in Tables 5 and 6, from which we find that D_ex-pso reaches a higher efficiency than the other types of designs under the PWO model in most cases.
Further, we report the best PWO designs with n = m(m − 1)/2 + 1 and m(m − 1) for 4 ≤ m ≤ 7 under the D-, A- and M.S.-optimality criteria in Tables A1-A12 in the Appendix.
We conclude this section with some numerical results on the construction of fractional PWO designs that have the same correlation structure as the full PWO design. Since these designs are 100% efficient under diverse design criteria, including the D-, A- and M.S.-optimality criteria, we call them fully efficient PWO designs. Using the Ex-PSO algorithm, we find the following result.

Remark 3.
Removing p (< m) components from a fully efficient PWO design with m components results in a fully efficient PWO design with m − p components.
To save space, some selected fully efficient PWO designs with minimal run sizes for m = 4, 5, 6, 7 are exhibited in Table 7; other fully efficient PWO designs and the MATLAB code for the Ex-PSO algorithm are available upon request. Note: D_ex with n = m(m − 1)/2 + 1 is omitted because the optFederov function from the AlgDesign package in R reports a "singular design" error when run at this size.

Concluding Remarks
In this article, we have presented a new hybrid algorithm for searching for optimal PWO designs for OofA experiments. The algorithm starts with a set of randomly selected particle designs. It combines the single-point exchange algorithm, which generates the local-best design related to each particle, with the PSO algorithm, which gradually clusters the particle designs around the global optimal design. We have addressed two points of this stochastic optimization technique: (i) the selection of initial particle designs; and (ii) the definition of each particle design's movement toward its personal local-best or the global-best design. As the sample space is a large combinatorial set of m! points, it is satisfactory to select the initial designs at random. The movement is defined as randomly exchanging points to adjust the particle design toward the target design. The generated designs have appealing efficiency in comparison with the full PWO design, though with only a small fraction of its runs, and also achieve better efficiency than existing designs. In addition, we have reported optimal PWO designs and some selected 100% efficient PWO designs for application.
To illustrate the effectiveness of our model and designs, we consider a real-life OofA experiment. The data come from the four-drug data in Table 3 of [8]. The first-order PWO model was fitted using the data from the fully efficient PWO design with 12 runs (runs {1,2,6,7,10,11,16,18,20,21,22,24} of the four-drug data) as well as the full design with 24 runs, yielding (i) an adjusted R-squared of 0.93 (vs. 0.77) and (ii) an RMSE of 2.19 (vs. 3.53) based on 5 (vs. 17) degrees of freedom. For further comparison, we consider variable selection to simplify the first-order PWO model. We start with an intercept-only model and perform stepwise regression with the AIC criterion. Using the data of the fully efficient design, the stepwise regression leads to the PWO model y = 44.22 − 3.82z_23 + 6.02z_34 + 2.79z_12 − 2.96z_14 + 1.37z_13, which has an adjusted R-squared of 0.95 and an RMSE of 2.00 with 6 degrees of freedom. Using the data of the full design, the stepwise regression leads to the PWO model y = 45.22 − 3.91z_12 + 3.86z_23 − 1.59z_13, which has an adjusted R-squared of 0.79 and an RMSE of 3.43 with 20 degrees of freedom. Thus, the first-order PWO model is sufficient for the four-drug OofA experiment. Meanwhile, the fully efficient design has appealing efficiency, though with only half the runs of the full design. However, in some applications, the first-order PWO model may not account for all systematic effects caused by component ordering; in that case, one may consider the second-order PWO model to obtain a lower error, see [7].
In addition, from the viewpoint of model robustness, it is interesting to examine how our designs behave under higher-order PWO models. From the perspective of model parsimony, we only consider the second-order PWO model in which only a few interactions of two PWO factors sharing a common component are entertained, and we recommend using the fully efficient PWO designs for analysis. We omit the details to save space but note that, even under such augmented models, the fully efficient PWO designs tend to perform well, particularly under the D- and M.S.-criteria and often under the A-criterion. For example, the fully efficient design with m = 6 and n = 24 has D-, A- and M.S.-efficiencies of at least 90% under such an augmented model. This allows further flexibility in model selection, with a provision for penalizing the additional parameters that correspond to two-factor interactions. As before, one remains assured of relatively high design efficiency under the model one arrives at.
As pointed out by the reviewers, our work can be extended using alternative optimization techniques, such as the memetic optimization algorithm based on local searches [19]. We conclude with the hope that the present endeavor will generate further interest in optimal designs for OofA experiments and related topics.

Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments:
The authors thank the editor and two referees for their valuable comments and suggestions.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. The Proof of Theorem 1
To prove Theorem 1, the following two lemmas are useful.

Lemma A1. Let M be a nonsingular matrix and x be a vector. Then (M + xx′)⁻¹ = M⁻¹ − M⁻¹xx′M⁻¹/(1 + x′M⁻¹x).

The proof of this lemma is straightforward according to matrix theory and is thus omitted.

Lemma A2. Let M be a nonsingular matrix and M_x = M + xx′. Then tr(M_x⁻¹) = tr(M⁻¹) − x′M⁻²x/(1 + x′M⁻¹x).
Proof. According to Lemma A1, we have

tr(M_x⁻¹) = tr(M⁻¹) − tr(M⁻¹xx′M⁻¹)/(1 + x′M⁻¹x) = tr(M⁻¹) − x′M⁻²x/(1 + x′M⁻¹x).

Proof of Theorem 1. Since Fedorov's exchange algorithm has proved this result for the case φ(D) = det(M), with u_D(x) and v_D(x_i) defined as in Equation (4), we only prove the result for the other two cases.
First, we prove that the design generated by the single-point exchange procedure leads to no increase in tr(M⁻¹).

Let X be the model matrix of the current design and denote M = X′X, which is updated to M_x − x_i x_i′ after exchanging x for x_i according to the single-point exchange procedure. The following delta function evaluates the multiplicative change in tr(M⁻¹):

Δ(x_i, x) = tr((M_x − x_i x_i′)⁻¹) / tr(M⁻¹). (A1)

Based on step (2) of this procedure, we know that v_A(x_i) ≤ v_A(x) for the removed point x_i, since x itself is among the n + 1 candidates for removal; moreover, removing x from M_x recovers M, so v_A(x) = u_A(x). In combination with Lemma A2,

tr((M_x − x_i x_i′)⁻¹) = tr(M⁻¹) − u_A(x) + v_A(x_i) ≤ tr(M⁻¹).

Thus, Δ(x_i, x) ≤ 1 is obtained, which means tr(M⁻¹) does not increase in the single-point exchange procedure.
Second, we prove that the design generated by the single-point exchange procedure leads to no increase in tr(M²).

To evaluate the change in tr(M²) during each iteration of the single-point exchange procedure, we define the delta function

Δ(x_i, x) = tr((M_x − x_i x_i′)²) / tr(M²). (A2)

By step (2) of this procedure, the removed point x_i satisfies x′M_x x ≤ x_i′M_x x_i, and since all points of the PWO model matrix satisfy x′x = x_i′x_i, we have

tr((M_x − x_i x_i′)²) = tr(M_x²) − 2x_i′M_x x_i + (x_i′x_i)² ≤ tr(M_x²) − 2x′M_x x + (x′x)² = tr((M_x − xx′)²) = tr(M²). (A3)

Thus, Δ(x_i, x) ≤ 1 is obtained from (A2) and (A3). Theorem 1 is proved.

Table A1. D-Optimal PWO designs for m = 4.