The Porcupine Measure for Comparing the Performance of Multi-Objective Optimization Algorithms

In spite of being introduced over twenty-five years ago, Fonseca and Fleming’s attainment surfaces have not been widely used. This article investigates some of the shortcomings that may have led to the lack of adoption of this performance measure. The quantitative measure based on attainment surfaces, introduced by Knowles and Corne, is analyzed. The analysis shows that the results obtained by the Knowles and Corne approach are influenced (biased) by the shape of the attainment surface. Improvements to the Knowles and Corne approach for bi-objective Pareto-optimal front (POF) comparisons are proposed. Furthermore, assuming M objective functions, an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, is proposed for comparing the performance of multi-objective optimization algorithms. A computationally optimized version of the porcupine measure is presented and empirically analyzed.


Introduction
First introduced by Fonseca and Fleming [1], attainment surfaces provide researchers in multi-objective optimization with a means to accurately visualize the region dominated by a Pareto-optimal front (POF). In many studies, approximated Pareto optimal fronts (POFs) are shown by joining the non-dominated solutions using a curve. Fonseca and Fleming reasoned that it is not correct to use a curve to join these non-dominated solutions. The use of a curve creates a false impression that intermediate solutions exist between any two non-dominated solutions. In reality, there is no guarantee that any intermediate solutions exist. Fonseca and Fleming suggested that, instead of a curve, the non-dominated solutions can be used to create an envelope that separates the dominated and non-dominated spaces. The envelope formed by the non-dominated solutions is referred to as an attainment surface.
Despite being proposed in 1995, attainment surfaces have not seen wide use in the comparison of multi-objective algorithms (MOAs). Instead, the well-known hypervolume [2,3], inverted generational distance [4,5] and its improvements [6], and spread [7] measures are frequently used to quantify and to compare the quality of approximated POFs. This study provides an analysis of the shortcomings of attainment surfaces as a multi-objective performance measure. Specifically, the attainment-surface-based measure proposed by Knowles and Corne [8] is analyzed. Improvements to Knowles and Corne's approach for bi-objective optimization problems are developed and analyzed in this paper. Additionally, an M-dimensional (where M is the number of objectives) attainment-surface-based quantitative measure, named the porcupine measure, is proposed and analyzed.
The porcupine measure quantifies the proportion of the Pareto front over which one algorithm performs statistically significantly better than another algorithm. The objective of this paper is to introduce this new attainment-surface-based measure and to illustrate its use.

Background and Related Work
Fonseca and Fleming [1] suggested that the non-dominated solutions that make up the approximated POF be used to construct an attainment surface. The attainment surface's envelope is defined as the boundary in the objective space that separates those points that are dominated by, or equal to, at least one of the non-dominated solutions that make up the approximated POF from those points for which no non-dominated solution dominates or equals. Figure 1 depicts an attainment surface and the corresponding approximated POF.
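The separation the envelope makes can be stated operationally: a point lies on the dominated side of the attainment surface exactly when at least one non-dominated solution is less than or equal to it in every objective (assuming minimization). A minimal Python sketch of this membership test follows; the function name and data are illustrative, not from the original.

```python
def weakly_dominated(point, front):
    # True if some front solution is <= point in every objective
    # (minimization assumed), i.e., the point lies on or behind the
    # attainment surface envelope.
    return any(all(q <= p for q, p in zip(sol, point)) for sol in front)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(weakly_dominated((3.0, 3.0), front))  # True: dominated by (2.0, 2.0)
print(weakly_dominated((1.5, 1.5), front))  # False: beyond the envelope
```

Plotting the boundary of the region where this predicate holds reproduces the staircase envelope of Figure 1.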
The attainment surface envelope is identical to the envelope used during the calculation of the hypervolume metric [2,3]. In contrast to the hypervolume calculation, in the case of an attainment surface, the envelope is not used directly in the calculation of a performance metric. Instead, the attainment surface can be used to visually compare algorithms' performance by plotting the attainment surfaces for both algorithms.
For stochastic algorithms, variations in the performance over multiple runs (also referred to as samples) are expected. Fonseca and Fleming [1] described a procedure to generate an attainment surface that represents a given algorithm's performance over multiple independent runs. The attainment surface for multiple independent runs is computed by first determining the attainment surface for each run's approximated POF. Next, a number of random imaginary lines is chosen, each pointing in the direction of improvement for all the objectives. For each line, the points of intersection with each of the attainment surfaces are calculated. Figure 2a,b depict three attainment surfaces with intersection lines and intersection points. For each line, the intersection points form a unidimensional sample distribution and can thus be strictly ordered. By calculating the median for each of the sample distributions, the objective vectors that are likely to be attained in exactly 50% of the runs can be identified. The envelope formed by the median points is known as the 50% grand attainment surface. Similar to how the median is used to construct the 50% grand attainment surface, the lower and upper quartiles (25th and 75th percentiles) are used to construct the 25% and 75% grand attainment surfaces.
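Under the simplifying assumptions of minimization, strictly positive objective values, and lines cast from the origin, the intersection of a line with an attainment surface has a closed form: the ray t·d first enters the dominated region at t = min over front points q of max over objectives m of (q_m / d_m). The median construction can then be sketched as follows; all names and data are illustrative assumptions, not the original procedure verbatim.

```python
import statistics

def intersection_distance(direction, front):
    # Distance t along the ray t*direction at which the ray first
    # reaches a point weakly dominated by some front solution
    # (minimization with positive objectives assumed).
    return min(max(q[m] / direction[m] for m in range(len(direction)))
               for q in front)

def grand_attainment_point(direction, fronts):
    # The median intersection distance over the runs gives a point on
    # the 50% grand attainment surface; other quantiles work the same way.
    t = statistics.median(intersection_distance(direction, f) for f in fronts)
    return tuple(t * d for d in direction)

runs = [[(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)],  # run 1
        [(1.5, 1.5)],                          # run 2
        [(3.0, 3.0)]]                          # run 3
print(grand_attainment_point((1.0, 1.0), runs))  # (2.0, 2.0)
```

Repeating this for many directions and joining the resulting points yields the 50% grand attainment surface.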
The sample distribution approach can also be used to compare performance between algorithms. In order to compare two algorithms, two sample distributions-one for each of the algorithms-are calculated per intersection line. Standard non-parametric statistical test procedures can then be used to determine if there is a statistically significant difference between the two sample distributions. Using the statistical test results, a combined grand attainment surface, as depicted in Figure 2c, can be constructed, showing the regions where each of the algorithms outperforms the other. Fonseca and Fleming [1] suggested that suitable test procedures include the median test, its extensions to other quantiles, and tests of the Kolmogorov-Smirnov type [9].
Knowles and Corne [8] extended the work carried out by Fonseca and Fleming and used attainment surfaces to quantify the performance of their Pareto archived evolution strategy (PAES) algorithm. Two practical questions had to be answered: how many intersection lines should be used, and in what form should the results be presented?
From their empirical analysis, Knowles and Corne found that at least 1000 lines should be used. In order to generate the intersection lines, the minimum and maximum values for each objective over the non-dominated solutions were found. The objective values were then normalized according to the minimum and maximum values into the range [0, 1]. Intersection lines were then generated as equally spread lines from the origin rotated from (0, 1) to (1, 0), effectively rotating 90°, covering the complete approximated POF.
For M-dimensional problems, where the number of objectives is M > 2, Knowles and Corne suggested using a grid-based approach where points are spread equally on the M, (M − 1)-dimensional hyperplanes. Each hyperplane corresponds to an objective value fixed at the value 1.0. The intersection lines are drawn from the origin to these equally distributed points. In the case of 3-dimensional problems, a 6 × 6 grid would result in 108 (3 × 6 × 6) points and, thus, 108 intersection lines. Similarly, using a 16 × 16 grid on a 3-dimensional problem would result in 768 intersection lines, and so forth.
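The grid-based line generation can be sketched as follows. The original does not specify exact cell positions on each hyperplane, so placing one line through the center of each grid cell is an assumption of this sketch.

```python
import itertools

def grid_directions(M, G):
    # One (M-1)-dimensional G x ... x G grid per objective, with that
    # objective fixed at 1.0; intersection lines run from the origin
    # to each grid point (cell centers assumed).
    dirs = []
    for fixed in range(M):
        for idx in itertools.product(range(G), repeat=M - 1):
            point = [(i + 0.5) / G for i in idx]  # cell centers in [0, 1]
            point.insert(fixed, 1.0)
            dirs.append(tuple(point))
    return dirs

print(len(grid_directions(3, 6)))   # 108 = 3 * 6 * 6
print(len(grid_directions(3, 16)))  # 768 = 3 * 16 * 16
```

The counts match the examples in the text: 3 hyperplanes of 6 × 6 points give 108 lines, and 16 × 16 gives 768.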
For statistical significance testing, Knowles and Corne used the Mann-Whitney U test [9] with a significance level of α = 0.05.
Finally, Knowles and Corne found that a convenient way to report the comparison results was to use simple value pairs [a, b], hereafter referred to as the Knowles-Corne measure (KC), where a gives the percentage of space for which algorithm A was found to be statistically superior to algorithm B, and b gives the percentage where algorithm B was found to be statistically superior to algorithm A. It can be noted that 100 − (a + b) gives the percentage where neither algorithm was found to be statistically superior to the other.
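A self-contained Python sketch of the KC value pair follows. It uses a normal-approximation Mann-Whitney U test in place of an exact implementation and decides the direction of a significant difference by comparing medians, which is a simplification of the per-line distribution comparison described above; all names are illustrative.

```python
import math
from statistics import NormalDist, median

def mann_whitney_p(xs, ys):
    # Two-sided Mann-Whitney U test using the normal approximation
    # (no tie correction); adequate for this illustration only.
    n1, n2 = len(xs), len(ys)
    u = sum(1.0 for x in xs for y in ys if x < y) \
        + 0.5 * sum(1 for x in xs for y in ys if x == y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return 2.0 * (1.0 - NormalDist().cdf(abs((u - mu) / sigma)))

def kc_measure(samples_a, samples_b, alpha=0.05):
    # samples_a[l], samples_b[l]: intersection distances of line l over
    # the runs of algorithms A and B; smaller is better (minimization).
    wins_a = wins_b = 0
    for xs, ys in zip(samples_a, samples_b):
        if mann_whitney_p(xs, ys) < alpha:
            if median(xs) < median(ys):
                wins_a += 1
            else:
                wins_b += 1
    n = len(samples_a)
    return 100.0 * wins_a / n, 100.0 * wins_b / n

line1_a = [0.1 * i for i in range(20)]          # A clearly closer on line 1
line1_b = [0.1 * i + 10.0 for i in range(20)]
line2 = [float(i) for i in range(20)]           # identical on line 2
print(kc_measure([line1_a, line2], [line1_b, line2]))  # (50.0, 0.0)
```

With one of two lines showing a significant difference in A's favour, the pair [a, b] = [50, 0] is reported, and 100 − (a + b) = 50% of the lines show no significant difference.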
Knowles and Corne [8] generalized the definition of the comparison to compare more than two algorithms. For K algorithms, the above comparison is carried out for all K(K − 1)/2 algorithm pairs. For each algorithm k, two percentages are reported: a_k, which is the region where algorithm k was not worse than any other algorithm, and b_k, which is the region where algorithm k performed better than all the other (K − 1) algorithms. Note that a_k ≥ b_k because the region described by b_k is contained in the region described by a_k.
Knowles and Corne [10] found that visualization of attainment surfaces in three dimensions is difficult due to the intersection lines not being evenly spread. As an alternative, Knowles presented an algorithm inspired by the work conducted by Smith et al. [11] to visually draw summary attainment surfaces using axis-aligned lines. The algorithm was found to be particularly well suited for drawing 3-dimensional attainment surfaces.
Fonseca et al. [12] continued work on attainment surfaces by introducing the empirical attainment function (EAF). The EAF is a mean-like, first-order moment measure of the solutions found by a multi-objective optimiser. The EAF allows for intuitive visual comparisons between bi-objective optimization algorithms by plotting the solution probabilities as a heat map [13]. Fonseca et al. [14] studied the use of the second-order EAF, which allows for the pairwise relationship between random Pareto-set approximations to be studied.
It should be noted that calculation of the EAF for three or more dimensions is not trivial [15]. Efficient algorithms to calculate the EAF for two and three dimensions have been proposed in [15]. Tušar and Filipič [16] developed approaches to visualize the EAFs in two and three dimensions.

Regarding 2-Dimensional Attainment Surfaces
The attainment surface calculation approach developed by Fonseca and Fleming [1] did not describe in detail how the intersection lines should be generated. Instead, it was only stated that a number of random intersection lines, each pointing in the direction of improvement for all the objectives, should be used. This approach worked well to construct a visualization of the attainment surface.
When Knowles and Corne [8] extended the intersection line approach to develop a quantitative comparison measure, they needed the lines to be equally distributed. If the lines are not equally distributed, as depicted in Figure 2b, certain regions of the attainment surface contribute more than others, leading to misleading results. Figure 3 depicts two example attainment surfaces with rotation-based intersection lines. Figure 3a depicts a concave attainment surface, for which the rotation-based intersection lines appear to be equally distributed. Figure 3b, however, depicts a convex attainment surface, for which the length of the attainment surface between consecutive intersection lines is visibly larger in the regions closer to the objective axes than in the middle regions. Clearly, the rotation-based intersection lines are not equally spaced for convex-shaped fronts when comparing the length of the attainment surface represented by each intersection line. In order to address the unequal spacing of the rotation-based intersection lines, a new approach to placing the intersection lines is proposed in this paper. To compensate for the shape of the front, the intersection lines can be generated either inwardly or outwardly positioned on a line running between the extreme values of the attainment surface, based on the shape of the attainment surfaces being compared. Figure 4 depicts the inward and outward intersection line approaches for a convex-shaped front. The regions are clearly more equally spread for the inward intersection line approach.
However, the direction of the intersection lines is less desirable for comparison purposes. At the edges, the intersection lines are parallel with the opposite objective's axis. Intuitively, it is more desirable that the intersection lines should be parallel to the closest objective's axis. Another disadvantage of the inward and outward approaches is that the approach to be selected depends on the shape of the front, which is typically unknown. For attainment surfaces that are not fully convex or concave, neither approach is suitable.
An alternative approach, referred to as attainment-surface-shaped intersection lines (ASSIL) in this paper, is to generate the intersection lines along the shape of the attainment surface. In order to spread the intersection lines equally, the Manhattan distance is used to calculate equal spacings for the intersection lines along the attainment surface. Figure 5 depicts the Manhattan distance calculation between two points on the approximated POF, q_3 and q_1 in this case. ASSIL can be generated using Algorithm 1, which walks the attainment surface in equal Manhattan-distance increments and draws an intersection line at each step. The intersection lines are rotated incrementally such that the intersection lines at the ends of the attainment surface are parallel to the objective axes. Figure 6 depicts the attainment-surface-shaped intersection line approach. The generation of the intersection lines along the shape of the attainment surface allows for an equal spacing of the intersection lines independent of the shape of the front: for all shapes that the attainment surface can assume, whether convex, concave, or mixed, the intersection lines are equally spread out. The KC measure is then computed per intersection line: wins_A is incremented if O_A is statistically significantly less than O_B, and wins_B is incremented if O_B is statistically significantly less than O_A.
An evaluation of the rotation-based and random intersection line approaches is presented using six artificially generated POF test cases based on those used by Knowles and Corne [8]. Figure 7 depicts the six artificially generated POF test cases. Each of these artificially generated POF test cases was tested using five POF shape geometries, namely concave, convex, linear, mixed, and disconnected geometries.
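The equal Manhattan-distance spacing used by ASSIL can be sketched in Python for a 2-dimensional minimization front, whose attainment surface is a staircase of axis-aligned segments. This sketch places points only (not the incrementally rotated lines of Algorithm 1) and assumes distinct coordinate values; all names are illustrative.

```python
def staircase_segments(front):
    # Axis-aligned segments of the attainment surface (minimization):
    # from each point, horizontally to the next point's f1 value,
    # then vertically down to the next point.
    pts = sorted(front)
    segs = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        segs.append(((x0, y0), (x1, y0)))
        segs.append(((x1, y0), (x1, y1)))
    return segs

def equally_spaced_points(front, n):
    # n points at equal Manhattan arc length along the staircase.
    segs = staircase_segments(front)
    lengths = [abs(bx - ax) + abs(by - ay) for (ax, ay), (bx, by) in segs]
    total = sum(lengths)
    pts, walked, i = [], 0.0, 0
    for k in range(n):
        target = (k + 0.5) * total / n
        while walked + lengths[i] < target:
            walked += lengths[i]
            i += 1
        frac = (target - walked) / lengths[i]
        (ax, ay), (bx, by) = segs[i]
        pts.append((ax + frac * (bx - ax), ay + frac * (by - ay)))
    return pts

print(equally_spaced_points([(0.0, 2.0), (2.0, 0.0)], 2))
# [(1.0, 2.0), (2.0, 1.0)]
```

Because spacing follows the surface's own arc length, the placement is independent of whether the front is convex, concave, or mixed.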
Figure 8 depicts the five POF shape geometries. Table 1 summarises the true KC, the KC with rotation-based and random intersection lines, and the KC with ASSIL results. Values in red indicate results that exceeded the control method (i.e., the true KC measure) by more than 5%, while values in blue indicate results that were more than 5% below the control method. For each of the approaches, 1000 intersection lines were used for the calculation. As expected, the ASSIL generation approach produced results much closer to the true KC measure: the closer the POFs being compared are to the true POF, the more accurate the comparison using the ASSIL generation approach becomes.
Tables 2 and 3 present a comparison of the results obtained using the various intersection line generation approaches. Results comparing the vector evaluated particle swarm optimization (VEPSO) [17], optimized multi-objective particle swarm optimization (OMOPSO) [18], and speed-constrained multi-objective particle swarm optimization (SMPSO) [19] algorithms using the Zitzler-Deb-Thiele (ZDT) [20] and Walking Fish Group (WFG) [21] test sets are presented. The choice of algorithms was arbitrary and only for illustrative purposes. Results were obtained over 30 independent runs. For more details on the algorithms and parameters used, the interested reader is referred to [22]. The characteristics of the problems are summarized in Table 4.
Variations in the results between the different intersection line generation approaches can be seen. ZDT1, ZDT3, ZDT4, and ZDT6 all show variations greater than 5% for each of the comparisons. WFG1, WFG2, WFG5, WFG6, and WFG9 all show variations greater than 5% for at least one of the comparisons. These variations are indicative of the bias towards certain attainment surface shapes exhibited by the various intersection line generation approaches.

Weighted 2-Dimensional Attainment-Surface-Shaped Intersection Lines
As an alternative to the equally spread intersection lines used by ASSIL, intersection lines can be generated along the shape of the POF, with at least one intersection line per attainment surface line segment. Because the attainment surface segments are not all of equal lengths, a weight is associated with each intersection line to balance the KC measure result. The weighted attainment-surface-shaped intersection lines (WASSIL) generation algorithm is given in Algorithm 3. Figure 9 depicts a convex POF with WASSIL-generated intersection lines. The figure clearly shows that the intersection lines are positioned along the attainment surface, and due to the positioning, the lines are angled slightly differently from the intersection lines in Figure 3b. The WASSIL algorithm should, for the test cases, result in a weighted KC measure result that matches the true KC measure result.
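The per-segment weighting of WASSIL can be sketched for a 2-dimensional minimization front. Anchoring each line at its segment's midpoint is an assumption of this sketch (Algorithm 3 additionally angles the lines along the surface); each weight equals the segment's Manhattan length, so longer segments count for more.

```python
def wassil_lines(front):
    # One intersection line anchor per attainment-surface segment,
    # weighted by the segment's Manhattan length (minimization assumed).
    pts = sorted(front)
    lines = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        for (ax, ay), (bx, by) in (((x0, y0), (x1, y0)),
                                   ((x1, y0), (x1, y1))):
            weight = abs(bx - ax) + abs(by - ay)
            if weight > 0.0:  # skip degenerate zero-length segments
                mid = ((ax + bx) / 2.0, (ay + by) / 2.0)
                lines.append((mid, weight))
    return lines

for mid, w in wassil_lines([(0.0, 2.0), (2.0, 0.0)]):
    print(mid, w)
```

The weights sum to the total Manhattan length of the surface, which is what normalizes the weighted KC measure.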
Note that the weighted KC measure is calculated as shown in Algorithm 4. Table 5 summarises the true KC measure, the KC measure with rotation-based and random intersection lines, the KC measure with ASSIL, and the KC measure with WASSIL results. For each of the approaches, 1000 intersection lines were used for the calculation. For POF test cases 1 through 3, only 2 of the 15 measurements using the random intersection line generation approach had a deviation from the true KC of less than 5%. Overall, 50% of the measurements using the random intersection line generation approach had a deviation greater than 5%. This confirms that the random intersection line generation approach is not well suited for the KC calculation.
The rotation-based intersection line generation approach presented by Knowles and Corne fared better than the random intersection line generation approach. Only 7 of the 30 measurements using the rotation-based intersection line generation approach had a deviation greater than 5%. Case 1 with a convex POF fared worse with a deviation of almost 15% for both algorithms. Four of the five case 2 measurements using the rotation-based intersection line generation approach had a deviation greater than 5%. For the remaining case 2 measurement, a deviation of at least 3% is noted. The results indicate the rotation-based intersection line generation approach outperformed the random intersection line generation approach with respect to accuracy. However, the results also indicate that the rotation-based intersection line generation approach is not well suited for the KC calculation and that the results vary based on the POF shape and the spread of the solutions.
As expected, the WASSIL generation approach produced results much closer to the true KC: the closer the approximated POFs being compared are to the true POF, the more accurate the comparison using the WASSIL generation approach becomes. The core of Algorithm 4 accumulates, for each intersection line l, the line's weight w_l into wins_A if O_A is statistically significantly less than O_B, into wins_B if O_B is statistically significantly less than O_A, and into w_total in all cases.
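Once the per-line statistical outcomes are known, the weighted KC measure reduces to weighted sums. The following sketch assumes the outcomes have been precomputed as labels ('A', 'B', or None for no statistically significant difference); that representation is an illustrative choice, not the paper's.

```python
def weighted_kc(line_results):
    # line_results: list of (w_l, winner) per intersection line, where
    # winner is 'A', 'B', or None when no significant difference holds.
    wins_a = sum(w for w, r in line_results if r == 'A')
    wins_b = sum(w for w, r in line_results if r == 'B')
    w_total = sum(w for w, _ in line_results)
    return 100.0 * wins_a / w_total, 100.0 * wins_b / w_total

print(weighted_kc([(2.0, 'A'), (1.0, None), (1.0, 'B')]))  # (50.0, 25.0)
```

A long segment where A wins (weight 2.0 of 4.0 total) contributes 50%, illustrating how the weights correct for unequal segment lengths.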

M-Dimensional Attainment Surfaces
For M-dimensional problems, Knowles and Corne [8] recommended that a grid-based intersection line generation approach, as explained in Section 3, be used. Similar to the rotational approach for 2-dimensional problems, the grid-based approach would lead to unbalanced intersection lines when measuring irregularly shaped POFs. Figure 10 shows an example of an irregularly shaped 3-dimensional attainment surface. Section 6.1 discusses the challenges that need to be addressed in order to generate intersection lines for M-dimensional attainment surfaces. Sections 6.2 and 6.3 introduce two algorithms to generate M-dimensional attainment surface intersection lines. The first uses a naive (and computationally expensive) approach that produces all possible intersection lines. The second is computationally more efficient, considering a minimal set of intersection lines. Section 6.4 presents a stability analysis of the two proposed algorithms to show that the computationally efficient approach performs similarly to the naive approach with respect to comparison accuracy.

Generalizing Attainment Surface Intersection Line Generation to M Dimensions
For 2-dimensional problems, the ASSIL approach generates equally spread intersection lines. Intuitively, generalization of the ASSIL approach to M dimensions requires the calculation of equally spread points over the M-dimensional attainment surface.
The calculation of equally spread points requires that the surface is divided into equally sized (M − 1)-dimensional hypercubes. For the 3-dimensional case, this would require dividing the attainment surface into equally sized squares. The intersection vectors would be positioned from the middle of each square. The length of the edges would need to be set to the greatest common divisor of the lengths of the edges that make up the attainment surface. Even for simple cases, this would lead to an excessive number of squares. The more squares, the higher the computational cost of the measure.
In order to lower the computational cost of the measure, the number of squares needs to be reduced. Because the square edge lengths are based on the greatest common divisor, there is no way to reduce the number of squares as long as the squares must be equal in size. If the square sizes differ, the measure will be biased towards areas with smaller squares, because those areas will contain more squares and thus carry more weight in the calculation.
In contrast to the ASSIL approach, the WASSIL approach does not require the intersection lines to be equally spread; instead, only a weight factor must be known for each intersection line. For the 3-dimensional case, the weight factor for each intersection line is calculated as the area of the square from which it originates, obtained by multiplying the edge lengths. The weight factor also allows the use of rectangles in the 3-dimensional case (hyper-rectangles in the M-dimensional case) instead of equally sized squares.

Porcupine Measure (Naive Implementation)
This section presents the naive implementation of the M-dimensional attainment-surface-based quantitative measure named the porcupine measure. The naive implementation uses each of the M dimensional values of each Pareto-optimal point to subdivide the attainment surface in each of the dimensions. Figure 11 depicts an example of the subdivision approach for each of the three dimensions, considering all intersection lines. Figure 12 depicts the attainment surface, in 3-dimensional space, with the subdivisions visible. In addition to the calculation of the hyper-rectangles, the center point and intersection vector need to be calculated. The naive implementation of the porcupine measure is summarized in Algorithm 5. For a more detailed algorithm listing, please refer to Appendix A.
Using the intersection lines generated by the above algorithm, two algorithms can now be compared using a nonparametric statistical test, such as the Mann-Whitney U test [9]. The porcupine measure is defined, similar to the weighted KC measure, as the weighted sum of the intersection lines where a statistically significant difference exists over the sum of all the weights (i.e., the percentage of the surface area of the attainment surface, as determined by the weights, where one algorithm statistically performs superior to another).
The core of Algorithm 5 proceeds as follows. The projected attainment surface is subdivided, in each dimension at every Pareto-optimal point's dimensional value, into hyper-rectangles. For each hyper-rectangle, let c be its center point and let w be its weight, equal to the product of the hyper-rectangle's edge lengths. For each dimension in objective space, m ∈ {1, . . . , M}, let min_m be the smallest of the mth-dimensional values of all the Pareto-optimal points where at least one other dimensional value is less than or equal to that of the corresponding center vector's dimensional value, and let max_m be the largest of the mth-dimensional values of all the Pareto-optimal points. The intersection vector p is then calculated componentwise as p_m = (c_m − min_m)/(max_m − min_m), where p_m is the mth component of the vector p, corresponding to the mth objective function, and the intersection line with weight w is drawn from c − p to c + p.
Figure 13 depicts an attainment surface with subdivisions and intersection vectors generated using the naive approach. The porcupine measure's name is derived from the fact that the intersection vectors resemble the spikes of a porcupine.
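The hyper-rectangle subdivision at the heart of the naive implementation can be sketched in Python: given, per projected dimension, the sorted coordinate values taken from the Pareto-optimal points, each cell's center and weight (the product of its edge lengths) follow directly. The intersection vectors are omitted from this sketch; names and data are illustrative.

```python
import itertools

def subdivide(breaks_per_dim):
    # breaks_per_dim: for each projected dimension, the sorted
    # coordinate values at which the surface is subdivided.
    cells = []
    intervals = [list(zip(b, b[1:])) for b in breaks_per_dim]
    for cell in itertools.product(*intervals):
        center = tuple((lo + hi) / 2.0 for lo, hi in cell)
        weight = 1.0
        for lo, hi in cell:
            weight *= hi - lo  # product of edge lengths
        cells.append((center, weight))
    return cells

for center, w in subdivide([[0.0, 1.0, 3.0], [0.0, 2.0]]):
    print(center, w)
```

The weights sum to the total projected area, so the measure's normalization is preserved however finely the surface is cut.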

Porcupine Measure (Optimized Implementation)
The large number of subdivisions that result from using the naive implementation of the porcupine measure creates a computationally complex problem when performing the statistical calculations required by the porcupine measure. To reduce the computational cost of the porcupine measure, the naive implementation can be optimized by subdividing the attainment surface only as necessary to accommodate the shape of the attainment surface. Figure 14 depicts an attainment surface with the subdivision lines (dashed) as generated by the optimized implementation. Note that the algorithm yields the minimum number of subdivisions such that the results are independent of the dimension ordering of the Pareto-optimal points. This is by design to allow for the reproducibility and increased stability of the results.
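The side-length cap used by the optimized implementation (Δ_max, set to 0.1 in the experiments below) can be illustrated in one dimension: extra breakpoints are inserted so that no interval between consecutive breakpoints exceeds Δ_max. Uniform splitting and the function name are assumptions of this sketch.

```python
import math

def refine(breaks, delta_max):
    # Insert extra breakpoints so that no interval between consecutive
    # breakpoints exceeds delta_max (the maximum side length).
    out = [breaks[0]]
    for lo, hi in zip(breaks, breaks[1:]):
        k = max(1, math.ceil((hi - lo) / delta_max))
        out.extend(lo + (hi - lo) * j / k for j in range(1, k + 1))
    return out

print(refine([0.0, 0.35], 0.1))  # splits one interval into 4 of width 0.0875
```

Applying this refinement per dimension bounds the hyper-rectangle edge lengths without reintroducing the naive implementation's full subdivision.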
The optimized implementation of the porcupine measure is summarized in Algorithm 6. For a more detailed algorithm listing, please refer to Appendix B.
Similar to the naive implementation, the porcupine measure is defined as the weighted sum of the intersection lines where a statistically significant difference exists over the sum of all the weights (the percentage of the surface area of the attainment surface, as determined by the weights, where one algorithm statistically performs superior to another).

Stability Analysis
In order to show that the optimized implementation provides results similar to the naive implementation, 30 independent runs of each measure were executed. Each measurement run calculated the porcupine measure using the approximated POFs as calculated by 30 independent runs of each of the algorithms being compared. A total of 30 × 30 = 900 runs were executed for each algorithm being compared. Table 6 lists the results for the algorithm pairs that were compared. The algorithm pairs are listed without a separator line between them. The results should thus be interpreted by looking at both lines of the comparison. For each algorithm, the mean, standard deviation (σ), minimum, and maximum for the naive and optimized implementations of the porcupine measure with a maximum side length of 0.1 are listed.
The experimental results in [22] show that a maximum side length of 0.1 for the optimized implementation yielded a good trade-off between accuracy of the results and performance when compared with the naive implementation.
Statistical testing was performed to determine if there were any statistically significant differences between the naive and optimized implementations' results. The Mann-Whitney U test was used at a significance level of α = 0.05. The purpose of the statistical testing was to determine if there was any information loss due to using the optimized implementation compared to the naive implementation. The results indicated that for 52 out of the 54 measurements, or 96%, there were no statistically significant differences. Only for two cases, namely the OMOPSO in the WFG3 OMOPSO vs. VEPSO comparison and the SMPSO in the WFG3 VEPSO vs. SMPSO comparison, was a statistically significant difference noted. In spite of the statistical difference, the ranking of the algorithms did not change. Based on the results, it can, therefore, be concluded that the optimized implementation yielded the same results as the naive implementation and no information was lost. Because no information was lost when using the optimized implementation, it can be concluded that the optimized implementation, with less computational complexity when compared to the naive implementation, can be used when conducting comparisons of multi-objective algorithms. 
In outline, the subdivision step of the optimized implementation proceeds as follows. The attainment surface is projected, parallel to the basis vector m_c, onto the orthogonal (M − 1)-dimensional subspace. For each Pareto-optimal point q_s, a maximum boundary point q̄ is determined, with dimensional values set to the minimum dimensional values that are dominated, or the corresponding q_max value if no such minimum exists. The set Q̂ contains all points that dominate q̄ but not q_s, filtered for non-dominance, and, for each dimension m, the set Q_m collects the mth dimensional values of the points in Q̂ that fall inside the range [q_sm, q̄_m]. The sets Q_min and Q_max of minimum and maximum points that affect the calculation of the intersection vector p are then constructed, their dimensional values inside [q_sm, q̄_m] are added to the corresponding Q_m sets, and additional values are inserted to limit the side lengths to Δ_max. Finally, the projected attainment surface is subdivided, in each dimension at every value in the Q_m sets, into hyper-rectangles whose centers, weights, and intersection vectors are computed as in the naive implementation.
The mean standard deviation was 2.743, and the maximum standard deviation was 7.022. The conclusion that can be drawn from the data is that the optimized implementation of the porcupine measure is very robust, because the measurement values for each of the samples were close to the average.
For the experimentation carried out in this study, the runtime of the optimized implementation was notably faster than that of the naive implementation; a difference of several orders of magnitude was observed.
The computational complexity of the naive implementation is directly proportional to the size of the Q_m sets. It should be noted that, for the tested algorithms with an approximated POF size of 50 points tested over 30 independent samples, the optimal POF had a typical size of 1250 points. The size of the Q_m sets was thus approximately 1250 points. For three dimensions, this resulted in a minimum computational complexity of at least 1250^3, or rather 1,953,125,000 (almost two billion). The optimized implementation resulted in a much-reduced computational complexity because only the necessary subdivisions were made, and the Q_m sets were much smaller. For the three-dimensional case, the maximum edge length leads to a minimum complexity of at least (1/0.1)^3 = 1000, which is orders of magnitude lower than that of the naive version.

Conclusions
This article investigated shortcomings that may have led to the lack of adoption of attainment-surface-based quantitative performance measurements for multi-objective optimization algorithms. It was shown that the quantitative measure proposed by Knowles and Corne was biased against convex Pareto-optimal fronts (POFs) when using rotational intersection lines. The attainment-surface-shaped intersection lines (ASSIL) generation approach was proposed. The ASSIL generation approach was shown not to be biased against any attainment surface shape.
An algorithm for an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, was presented. Additionally, a computationally optimized implementation of the porcupine measure was introduced and analyzed. The results indicated that the optimized implementation performed as well as the naive implementation.
The porcupine measure allows for a quantitative comparison between M-dimensional approximated POFs through the use of attainment surfaces. The porcupine measure provides additional information on an algorithm's performance when compared to another algorithm, which was previously not quantifiable. A thorough comparison comparing the state-of-the-art multi-objective optimization algorithms using the porcupine measure is left as future work.

Algorithm A2 Cont.
The set Q_m is defined as the set of all the dimensional values that are dominated by q_s, along with the maximum boundary value q̄_m and the minimum boundary value q_sm.
12: for all m ∈ {1, . . . , M}, m ≠ m_c do
13:   let Q_m = {q_im : q_sm < q_im < q̄_m, q_i ∈ Q̂} ∪ {q̄_m} ∪ {q_sm}
14: end for
15: let z = q_s,m_c
The Q_m sets can now be used similarly to how they are used in the naive implementation to calculate the hyper-rectangle subdivisions. However, additional subdivisions are necessary to allow for accurate calculation of the intersection vector, p. The set Q_min is iteratively constructed as the set that contains all the minimum points that will influence the intersection vectors for hyper-rectangles that lie between the selected point q_s and the maximum boundary point q̄.
16: let Q_min = ∅
17: for all m_p ∈ {1, . . . , M} do
18:   let q*_1 = q̄
19:   t = 1
20:   repeat
21:     find q_i such that q_i,m_p = min{q_i,m_p : ∃ m ∈ {1, . . . , M}, m ≠ m_p : q_im < q*_tm}
22:     add q_i to Q_min iff q_i ∉ Q_min
      …
     add q_im to Q_m ∀ q_i ∈ Q_min, q_sm < q_im < q̄_m
29: end for
Similar to Q_min, the set Q_max is iteratively constructed as the set that contains all the maximum points that will influence the intersection vectors for hyper-rectangles that lie between the selected point q_s and the maximum boundary point q̄.
      …
35:     find q_i such that q_i,m_p = max{q_i,m_p : ∃ m ∈ {1, . . . , M}, m ≠ m_p : q_im < q*_tm}
36:     add q_i to Q_max iff q_i ∉ Q_max
37:     q*_(t+1),m = q_im if q_im > q_sm ∧ q_im < q*