Optimal designs for regression with spherical data

In this paper optimal designs for regression problems with spherical predictors of arbitrary dimension are considered. Our work is motivated by applications in material sciences, where crystallographic textures such as the misorientation distribution or the grain boundary distribution (depending on a four-dimensional spherical predictor) are represented by series of hyperspherical harmonics, which are estimated from experimental or simulated data. For this type of estimation problem we explicitly determine optimal designs with respect to Kiefer's $\Phi_p$-criteria and a class of orthogonally invariant information criteria recently introduced in the literature. In particular, we show that the uniform distribution on the $m$-dimensional sphere is optimal and construct discrete and implementable designs with the same information matrices as the continuous optimal designs. Finally, we illustrate the advantages of the new designs for series estimation by hyperspherical harmonics which are symmetric with respect to the first and second crystallographic point group.


Introduction
Regression problems with a predictor of spherical nature arise in various fields such as geology, crystallography, astronomy (cosmic microwave background radiation data), the calibration of electromagnetic motion-tracking systems or the representation of spherical viruses [see Chapman et al. (1995), Zheng et al. (1995), Chang et al. (2000), Schaeben and van den Boogaart (2003), Genovese et al. (2004), Shin et al. (2007) among many others], and their parametric and nonparametric estimation has found considerable attention in the literature.
Several methods for estimating a spherical regression function nonparametrically have been proposed in the literature. Di Marzio et al. (2009, 2014) investigate kernel type methods, while spherical splines have been considered by Wahba (1981) and Alfeld et al. (1996). A frequently used technique is that of series estimators based on spherical harmonics [see Abrial et al. (2008) for example], which, roughly speaking, generalise estimators of a regression function on the line based on Fourier series to data on the sphere. Alternative series estimators have been proposed by Narcowich et al. (2006), Baldi et al. (2009) and Monnier (2011), who suggest using spherical wavelets (needlets) in situations where better localisation properties are required. Most authors consider the 2-dimensional sphere $S^3$ in $\mathbb{R}^3$, as they are interested in the development of statistical methodology for concrete applications such as earth and planetary sciences.
On the other hand, regression models with spherical predictors of dimension larger than three have also found considerable attention in the literature, mainly in physics, chemistry and material sciences. Here predictors on the unit sphere $S^m = \{x \in \mathbb{R}^m : \|x\|_2 = 1\}$ with $m > 3$ and series expansions in terms of the so-called hyperspherical harmonics are considered. These functions form an orthonormal system with respect to the uniform distribution on the sphere $S^m$ and have, for example, been widely used to solve the Schrödinger equation by reducing the problem to a system of coupled ordinary differential equations in a single variable [see for example Avery and Wen (1982) or Krivec (1998) among many others]. Further applications in this field can be found in Meremianin (2009), who proposed the use of hyperspherical harmonics for the representation of the wave function of the hydrogen atom in momentum space. Similarly, Lombardi et al. (2016) suggested representing the potential energy surfaces (PES) of atom-molecule or molecular dimer interactions in terms of a series of four-dimensional hyperspherical harmonics. Their method consists in fitting a certain number of previously determined points of the PES, selected on the basis of geometrical and physical characteristics of the system. The resulting potential energy function is suitable to serve as a PES for molecular dynamics simulations. Hosseinbor et al. (2013) applied four-dimensional hyperspherical harmonics in medical imaging and estimated the coefficients in the corresponding series expansion via least squares methods to analyse brain subcortical structures. A further important application of series expansions appears in material sciences, where crystallographic textures, viewed as quaternion distributions, are represented by means of series expansions based on (symmetrized) hyperspherical harmonics [see Bunge (1993), Zheng et al. (1995), Mason and Schuh (2008) and Mason (2009) among many others].
It is well known that a carefully designed experiment can improve the statistical inference in regression analysis substantially, and numerous authors have considered the problem of constructing optimal designs for various regression models [see, for example, the monographs of Fedorov (1972), Silvey (1980) and Pukelsheim (2006)]. On the other hand, despite its importance, the problem of constructing optimal or efficient designs for least squares (or alternative) estimation of the coefficients in series expansions based on hyperspherical harmonics has not found much interest in the statistical literature, in particular if the dimension $m$ is large. The case $m = 2$, corresponding to Fourier regression models, has been discussed intensively [see Karlin and Studden (1966), page 347, Lau and Studden (1985), Kitsos et al. (1988) and Dette and Melas (2003) among many others]. Furthermore, optimal designs for series estimators in terms of spherical harmonics (that is, for $m = 3$) have been determined by Dette et al. (2005) and Dette and Wiens (2009); however, to the best of our knowledge no results are available for hyperspherical harmonics if the dimension of the predictor is larger than 3.
In the present paper we consider optimal design problems for regression models with a spherical predictor of dimension $m > 3$ and explicitly determine optimal designs for series estimators based on hyperspherical harmonic expansions. In Section 2 we introduce some basic facts about optimal design theory and hyperspherical harmonics which will be required for the results presented in this paper. Analytic solutions of the optimal design problem are given in Section 3.1, where we determine optimal designs with respect to all of Kiefer's $\Phi_p$-criteria [see Kiefer (1974)] as well as with respect to a class of optimality criteria recently introduced by Harman (2004). As it turns out, the approximate optimal designs are absolutely continuous distributions on the sphere and thus cannot be directly implemented in practice. Therefore, in Section 3.2 we provide discrete designs with the same information matrices as the continuous optimal designs. To achieve this we construct new Gaussian quadrature formulas for integration on the sphere, which are of independent interest. In Section 4 we investigate the performance of the optimal designs determined in Section 3.2 when they are used in typical applications in material sciences. Here energy functions are represented in terms of series of symmetrized hyperspherical harmonics, which are defined as linear combinations of the hyperspherical harmonics such that the symmetry of a crystallographic point group is reflected in the energy function. It is demonstrated that the derived designs have very good efficiencies (for the first crystallographic point group the design is in fact D-optimal). Finally, a proof of a technical result can be found in Appendix A.
The results obtained in this paper provide a first step towards the solution of optimal design problems for regression models with spherical predictors if the dimension is m > 3 and offer a deeper understanding of the general mathematical structure of hyperspherical harmonics, which so far were only considered in the cases m = 2 and m = 3.

Optimal designs and hyperspherical harmonics

2.1 Optimal design theory
We consider the linear regression model
$$Y = c^T f(x) + \varepsilon,$$
where $f(x) = (f_1(x), \ldots, f_D(x))^T$ is a vector of linearly independent regression functions, $c \in \mathbb{R}^D$ is the vector of unknown parameters, $x$ denotes a covariate which varies in a compact design space, say $\mathcal{X}$ (which will be $S^m$ in later sections), and different observations are assumed to be independent with the same variance, say $\sigma^2 > 0$. Following Kiefer (1974) we define an approximate design as a probability measure $\xi$ on the set $\mathcal{X}$ (more precisely, on its Borel field). If the design $\xi$ has finite support with masses $w_i$ at the points $x_i$ ($i = 1, \ldots, k$) and $n$ observations can be made by the experimenter, the quantities $w_i n$ are rounded to integers, say $n_i$, satisfying $\sum_{i=1}^k n_i = n$, and the experimenter takes $n_i$ observations at each location $x_i$ ($i = 1, \ldots, k$). The information matrix of the least squares estimator is defined by
$$M(\xi) = \int_{\mathcal{X}} f(x) f^T(x) \, d\xi(x)$$
[see Pukelsheim (2006)] and measures the quality of the design $\xi$, as the matrix $\frac{\sigma^2}{n} M^{-1}(\xi)$ can be considered as an approximation of the covariance matrix $\sigma^2 (X^T X)^{-1}$ of the least squares estimator in the corresponding linear model $Y = Xc + \varepsilon$. Similarly, if the main interest is the estimation of $s$ linear combinations $K^T c$, where $K \in \mathbb{R}^{D \times s}$ is a given matrix of rank $s \le D$, the covariance matrix of the least squares estimator of these linear combinations is given by $\sigma^2 K^T (X^T X)^- K$, where $(X^T X)^-$ denotes a generalized inverse of the matrix $X^T X$ and it is assumed that $\mathrm{range}(K) \subset \mathrm{range}(X^T X)$. The corresponding analogue of its inverse for an approximate design $\xi$ satisfying the range inclusion $\mathrm{range}(K) \subset \mathrm{range}(M(\xi))$ is given (up to the constant $\frac{\sigma^2}{n}$) by
$$C_K(\xi) = \big( K^T M^-(\xi) K \big)^{-1}.$$
It follows from Pukelsheim (2006), Section 8.3, that for each design $\xi$ there always exists a design $\tilde\xi$ with at most $s(s+1)/2$ support points such that $C_K(\xi) = C_K(\tilde\xi)$.
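As a concrete illustration of these definitions, the following sketch (not from the paper; all function names are ours) computes the information matrix of a finitely supported design for the Fourier regression model on the circle mentioned in the introduction (the case $m = 2$):

```python
import numpy as np

def information_matrix(f, points, weights):
    """M(xi) = sum_i w_i f(x_i) f(x_i)^T for a finitely supported design."""
    D = len(f(points[0]))
    M = np.zeros((D, D))
    for x, w in zip(points, weights):
        v = f(x)
        M += w * np.outer(v, v)
    return M

# Fourier regression f(x) = (1, cos x, sin x)^T, uniform design on
# 5 equally spaced angles in [0, 2*pi)
f = lambda x: np.array([1.0, np.cos(x), np.sin(x)])
pts = 2 * np.pi * np.arange(5) / 5
M = information_matrix(f, pts, np.full(5, 1 / 5))
# the orthogonality of the trigonometric system makes M diagonal,
# M = diag(1, 1/2, 1/2), mirroring the structure exploited for
# (hyper)spherical harmonics below
```

The diagonal form of $M(\xi)$ under an "orthogonality-respecting" design is exactly the mechanism behind the optimality results in Section 3.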
An optimal design maximises an appropriate functional of the matrix $M(\xi)$, and numerous criteria have been proposed in the literature to discriminate between competing designs [see Pukelsheim (2006)]. Throughout this paper we consider Kiefer's $\Phi_p$-criteria, which are defined for $-\infty \le p < 1$ as
$$\Phi_p(\xi) = \Big( \tfrac{1}{s} \, \mathrm{tr}\big( C_K^p(\xi) \big) \Big)^{1/p},$$
with the cases $p = 0$ and $p = -\infty$ understood as the corresponding limits. Following Kiefer (1974), a design $\xi^*$ is called $\Phi_p$-optimal for estimating the linear combinations $K^T c$ if $\xi^*$ maximises the expression $\Phi_p(\xi)$ among all approximate designs $\xi$ for which $K^T c$ is estimable, that is, $\mathrm{range}(K) \subset \mathrm{range}(M(\xi))$. This family of optimality criteria includes the well-known criteria of D-, E- and A-optimality, corresponding to the cases $p = 0$, $p = -\infty$ and $p = -1$, respectively. Moreover, we consider a generalised version of the criterion of E-optimality introduced by Harman (2004) [see also Filová et al. (2011)]. For the information matrix $M(\xi)$ let $\lambda(M(\xi)) = (\lambda_1(M(\xi)), \ldots, \lambda_D(M(\xi)))^T$ be the vector of the eigenvalues of $M(\xi)$ in nondecreasing order. Then, for $s \in \{1, \ldots, D\}$, we define $\Phi_{E_s}(\xi)$ as the sum of the $s$ smallest eigenvalues of $M(\xi)$, that is,
$$\Phi_{E_s}(\xi) = \sum_{i=1}^{s} \lambda_i(M(\xi)).$$
For a fixed $s \in \{1, \ldots, D\}$ we call a design $\xi^*$ $\Phi_{E_s}$-optimal if it maximises the term $\Phi_{E_s}(\xi)$ among all approximate designs $\xi$.
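These criteria can be sketched numerically as follows (an illustrative implementation under our own naming, with the cases $p = 0$ and $p = -\infty$ handled as the limiting geometric mean and smallest eigenvalue, respectively):

```python
import numpy as np

def phi_p(M, p):
    """Kiefer's Phi_p criterion applied to a symmetric information matrix."""
    lam = np.linalg.eigvalsh(M)           # eigenvalues, ascending
    if p == 0:                            # D-criterion: geometric mean
        return np.exp(np.mean(np.log(lam)))
    if p == -np.inf:                      # E-criterion: smallest eigenvalue
        return lam.min()
    return np.mean(lam ** p) ** (1 / p)   # e.g. p = -1 is the A-criterion

def phi_Es(M, s):
    """Harman's criterion: sum of the s smallest eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(M))
    return lam[:s].sum()

M = np.diag([4.0, 1.0, 1.0])              # toy information matrix
d_val = phi_p(M, 0)                       # geometric mean = 4**(1/3)
a_val = phi_p(M, -1)                      # harmonic mean = 4/3
e_val = phi_p(M, -np.inf)                 # smallest eigenvalue = 1
es_val = phi_Es(M, 2)                     # two smallest eigenvalues: 1 + 1
```

For a multiple of the identity matrix, as arises for the uniform design in Theorem 3.1, all of these criteria are maximised simultaneously.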
In general, the determination of $\Phi_p$-optimal designs and of $\Phi_{E_s}$-optimal designs in an explicit form is a very difficult task, and the corresponding optimal design problems have only been solved in rare circumstances [see for example Cheng (1987), Dette and Studden (1993), Pukelsheim (2006), p. 241, and Harman (2004)]. In the following discussion we will explicitly determine $\Phi_p$-optimal designs for regression models which arise from a series expansion of a function on the $m$-dimensional sphere $S^m$ in terms of hyperspherical harmonics. It turns out that the $\Phi_p$-optimal designs are also $\Phi_{E_s}$-optimal for an appropriate choice of $s$. We introduce the hyperspherical harmonics next.

Hyperspherical harmonics
Assume that the design space is given by the $m$-dimensional sphere $S^m = \{x \in \mathbb{R}^m : \|x\|_2 = 1\}$. The hyperspherical harmonics are functions of $m - 1$ dimensionless variables, namely the hyperangles, which describe the points $x = (x_1, \ldots, x_m)^T \in S^m$ on the hypersphere by the equations
$$x_1 = \cos\theta_1, \quad x_2 = \sin\theta_1 \cos\theta_2, \quad \ldots, \quad x_{m-1} = \sin\theta_1 \cdots \sin\theta_{m-2} \cos\phi, \quad x_m = \sin\theta_1 \cdots \sin\theta_{m-2} \sin\phi,$$
where $\theta_i \in [0, \pi]$ for all $i = 1, \ldots, m-2$ and $\phi \in [-\pi, \pi]$ [see, for example, Andrews et al. (1991) or Meremianin (2009)]. As noted by Dokmanić and Petrinović (2010), this choice of coordinates is not unique but rather a matter of convenience, since it is a natural generalisation of the spherical polar coordinates in $\mathbb{R}^3$. In the literature, hyperspherical harmonics are given explicitly in a complex form [see, for example, Vilenkin (1968) and Avery and Wen (1982)]. Following the notation in Avery and Wen (1982), they are defined in terms of products of Gegenbauer polynomials in the hyperangles $\theta_{m-2} = (\theta_1, \ldots, \theta_{m-2})$, indexed by $\mu_k = (\mu_1, \ldots, \mu_k)$ for $k = m-2, m-3$, where the Gegenbauer polynomials are orthogonal with respect to the corresponding weight measure (here $I_A(x)$ denotes the indicator function of the set $A$). The complex hyperspherical functions are orthogonal to their corresponding complex conjugates and form an orthonormal basis of the space of square integrable functions with respect to the uniform distribution on the sphere. In fact, the constants $\tilde{A}_{\lambda, \mu_{m-2}}$ are chosen based on this property [see, for example, Avery and Wen (1982) for more details].
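For $m = 4$, the case relevant for the applications in Section 4, the coordinate transform can be sketched as follows (illustrative code; the function name is ours):

```python
import numpy as np

def hypersphere_point(theta1, theta2, phi):
    """Map hyperangles (theta1, theta2, phi) to a point on S^4 in R^4,
    following the m = 4 case of the coordinate equations above."""
    return np.array([
        np.cos(theta1),
        np.sin(theta1) * np.cos(theta2),
        np.sin(theta1) * np.sin(theta2) * np.cos(phi),
        np.sin(theta1) * np.sin(theta2) * np.sin(phi),
    ])

x = hypersphere_point(0.7, 1.2, -2.0)
# every hyperangle triple maps to a unit vector in R^4
```

The unit-norm property holds for every choice of hyperangles, which is exactly why the design problem can be formulated on $[0, \pi]^{m-2} \times [-\pi, \pi]$.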
However, as mentioned in Mason and Schuh (2008), expansions of real-valued functions on the sphere are easier to handle in terms of real hyperspherical harmonics, which are obtained from the complex hyperspherical harmonics via linear transformations; here $P^{\mu_{m-2}}_{\mu_{m-3}}(\cos\theta_{m-2})$ is the associated Legendre polynomial, which can be expressed in terms of a Gegenbauer polynomial. It is easy to check that in the case of $\mathbb{R}^3$ the expressions in (2.7), (2.8) and (2.9) give the well-known spherical harmonics involving only the associated Legendre polynomial [see Chapter 9 in Andrews et al. (1991) for more details]. The real hyperspherical harmonics defined in (2.7), (2.8) and (2.9) preserve the orthogonality properties of the complex hyperspherical harmonics proven in Avery and Wen (1982). In other words, the real hyperspherical harmonics form an orthonormal basis of the Hilbert space of functions that are square integrable with respect to the uniform distribution on the sphere, where $d\Omega$ is the element of solid angle.
Note that the dimension of the vectors $f_d$ and $c$ is $D = \sum_{\lambda=0}^{d} s(\lambda)$ (see (2.12)), where the expression for the sums over the $\mu_i$'s ($i = 1, \ldots, m-2$) is obtained from Avery and Wen (1982). The regression model is defined by (2.11) and, as mentioned in Section 2.1, a $\Phi_p$-optimal (approximate) design maximises the criterion (2.4) in the class of all probability measures $\xi$ on the set $[0, \pi]^{m-2} \times [-\pi, \pi]$ satisfying the range inclusion $\mathrm{range}(K) \subset \mathrm{range}(M(\xi))$, where $M(\xi)$ denotes the information matrix of the design $\xi$. We are interested in finding a design that is efficient for the estimation of the Fourier coefficients corresponding to the $s(k)$ hyperspherical harmonics, where $k \in \{0, \ldots, d\}$ denotes a given level of resolution. To relate this to the definition of the $\Phi_p$-optimality criteria, let $q \in \mathbb{N}_0$, $0 \le k_0 < k_1 < \ldots < k_q \le d$ and let $0_{k,l}$ be the $s(k) \times s(l)$ matrix with all entries equal to $0$. Define the block matrix $K$ as in (3.2), where $I_a$ denotes the $a \times a$ identity matrix and $0_{a,b}$ is an $a \times b$ matrix with all entries equal to $0$. Note that $K \in \mathbb{R}^{D \times s}$, where $D = \sum_{\lambda=0}^{d} s(\lambda)$ is defined in (2.12), and that $K^T c \in \mathbb{R}^s$ is the vector of Fourier coefficients at the resolution levels $k_0, \ldots, k_q$. The following theorem shows that the uniform distribution on the hypersphere is $\Phi_{E_s}$- and $\Phi_p$-optimal for estimating the parameters $K^T c$ (for any $-\infty \le p < 1$).
Theorem 3.1. Let $q \in \mathbb{N}_0$, let $0 \le k_0 < k_1 < \ldots < k_q \le d$ be given indices and denote by $K \in \mathbb{R}^{D \times s}$ the matrix defined by (3.2) and (3.3). Consider the design $\xi^*$ given by the uniform distribution on the hypersphere, that is, the design with density (3.6), where $\tilde{\Omega}$ is a normalising constant given by (3.7), $N_m = 2(m-2)!! \, \pi^{m/2} / \Gamma(m/2)$, and the double factorial $n!!$ for $n \in \mathbb{N}$ is defined by $n!! = n(n-2)(n-4)\cdots$, where the product terminates at $2$ (for even $n$) or $1$ (for odd $n$).
(ii) The design $\xi^*$ is $\Phi_p$-optimal for estimating the linear combination $K^T c$ in the regression model (2.11).
(iii) Let $s = \sum_{i=0}^{q} s(k_i)$ be the number of considered hyperspherical harmonics, where $s(k_i)$ is defined by (3.1). Then the design $\xi^*$ defined by (3.6) is also $\Phi_{E_s}$-optimal.
Proof. We note that the explicit expression for the normalising constant $\tilde{\Omega}$ in (3.7) is given in equation (30) in Wen and Avery (1985). Let $\xi^*$ denote the design corresponding to the density defined by (3.6) and (3.7). Then, due to the orthonormality property of the real hyperspherical harmonics given in equation (2.10), it follows that $M(\xi^*)$ is a multiple of the identity matrix, with the constant of proportionality determined by $\tilde{\Omega}$ in equation (3.7). This proves part (i) of the theorem. For a proof of (ii) let $p > -\infty$. According to the general equivalence theorem in Pukelsheim (2006), Section 7.20, the measure $\xi^*$ is $\Phi_p$-optimal if and only if the corresponding inequality (3.10) holds for all hyperangles. From the definition of the matrix $K$ given in equations (3.2) and (3.3), the left-hand side of (3.10) can be evaluated explicitly. Now the right-hand side can be simplified observing the sum rule for real hyperspherical harmonics (3.11), where the constant $N_m$ is as given above [see Avery and Wen (1982)]. Therefore, the right-hand side of (3.10) reduces to a constant, where the last equality follows from the definition of $s$ in (3.4). Consequently, the right-hand side and the left-hand side of (3.10) coincide, which proves that the design $\xi^*$ corresponding to the density defined by (3.6) and (3.7) is $\Phi_p$-optimal for any $p \in (-\infty, 1)$ and any matrix $K$ of the form (3.2) and (3.3). The remaining case $p = -\infty$ follows from Lemma 8.15 in Pukelsheim (2006), which completes the proof of part (ii). For a proof of part (iii) let $\mathrm{diag}(\gamma_1, \ldots, \gamma_D)$ denote a diagonal matrix with entries $\gamma_1, \ldots, \gamma_D$ and let $\partial\Phi_{E_s}(\xi^*)$ denote the subgradient of $\Phi_{E_s}$. Then it follows from Theorem 4 of Harman (2004) that the design $\xi^*$ is $\Phi_{E_s}$-optimal if and only if there exists a matrix $\Gamma \in \partial\Phi_{E_s}(\xi^*)$ such that the inequality (3.14) holds for all $\theta_{m-2} \in [0, \pi]^{m-2}$ and $\phi \in [-\pi, \pi]$. We now set $\Gamma = KK^T$, where $K$ is defined by the equations (3.2) and (3.3). Therefore $\Gamma$ is a diagonal matrix with entries $0$ or $1$, and $\mathrm{tr}(\Gamma) = \mathrm{tr}(KK^T) = \mathrm{tr}(K^T K) = \mathrm{tr}(I_s) = s$, that is, the matrix $\Gamma$ is contained in the subgradient $\partial\Phi_{E_s}(\xi^*)$.
Using this matrix in (3.14), the left-hand side of the inequality reduces to the left-hand side of (3.10), and part (i) yields for the right-hand side of the inequality a constant involving $\tilde{\Omega}$, which is defined by (3.7). Consequently, the inequality (3.14) is equivalent to (3.10), which has been verified in the proof of part (ii). This completes the proof of Theorem 3.1.
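The constants appearing in Theorem 3.1 can be sanity-checked numerically for $m = 4$. The following sketch (ours, not the paper's) compares the standard surface-area formula $2\pi^{m/2}/\Gamma(m/2)$ for the sphere in $\mathbb{R}^m$ with the integral of the volume element $\sin^2\theta_1 \sin\theta_2 \, d\theta_1 \, d\theta_2 \, d\phi$ over the hyperangles, which is the quantity normalised by $\tilde{\Omega}$:

```python
import math
from scipy.integrate import quad

m = 4
# surface area of the unit sphere in R^m: 2 * pi^(m/2) / Gamma(m/2) = 2*pi^2
area_formula = 2 * math.pi ** (m / 2) / math.gamma(m / 2)

# integrate the volume element sin^2(theta1) * sin(theta2) over the
# hyperangle box [0, pi] x [0, pi] x [-pi, pi] (the integrand factorises)
i1, _ = quad(lambda t: math.sin(t) ** 2, 0, math.pi)   # = pi / 2
i2, _ = quad(math.sin, 0, math.pi)                     # = 2
i3 = 2 * math.pi                                       # length of [-pi, pi]
area_integral = i1 * i2 * i3                           # = 2 * pi^2
```

Both quantities agree, confirming that the uniform design integrates to one once divided by this normalising constant.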

Discrete $\Phi_p$- and $\Phi_{E_s}$-optimal designs
While the result of the previous section provides a very elegant solution to the $\Phi_p$-optimal design problem from a mathematical point of view, the derived design $\xi^*$ cannot be directly implemented, as the optimal probability measure is absolutely continuous. In practice, if $n \in \mathbb{N}$ observations are available to estimate the parameters in the linear regression model (2.11), one has to specify a number, say $k$, of distinct points, which define via (2.6) the locations on the sphere where observations should be taken, and relative frequencies $n_j/n$ defining the proportion of observations taken at each point ($\sum_{j=1}^{k} n_j = n$). The maximisation of the function (2.4) in the class of all measures of this type yields a non-linear and non-convex discrete optimisation problem, which is usually intractable.
Therefore, for the construction of optimal or (at least) efficient designs we proceed as follows. Due to Caratheodory's theorem [see, for example, Silvey (1980)] there always exists a probability measure $\xi$ on the set $[0, \pi]^{m-2} \times [-\pi, \pi]$ with at most $D(D+1)/2$ support points such that the information matrices of $\xi$ and $\xi^*$ coincide, that is, $M(\xi) = M(\xi^*)$. (3.15) We now identify such a design $\xi$ by assigning appropriate weights to suitably chosen points such that the identity (3.15) is satisfied, where we simultaneously try to keep the number $k$ of support points small. The numbers $n_j$, specifying the numbers of repetitions at the different experimental conditions in the concrete experiment, are finally obtained by rounding the numbers $n\omega_j$ to integers [see, for example, Pukelsheim and Rieder (1992)]. We begin with an auxiliary result about Gaussian quadrature which is of independent interest and is proven in the appendix.

Lemma 3.1. Let $a(x)$ be a nonnegative weight function and let $z \ge r$. A quadrature formula $\{x_j, \omega_j\}_{j=1}^{r}$ integrates all polynomials $Q$ of degree at most $z$ exactly, that is,
$$\int Q(x) \, a(x) \, dx = \sum_{j=1}^{r} \omega_j Q(x_j), \qquad (3.16)$$
if and only if the following two conditions are satisfied:
(A) the polynomial $V_r(x) = \prod_{j=1}^{r}(x - x_j)$ is orthogonal with respect to the weight function $a(x)$ to all polynomials of degree $z - r$, that is,
$$\int V_r(x) P(x) \, a(x) \, dx = 0 \quad \text{for all polynomials } P \text{ of degree at most } z - r; \qquad (3.17)$$
(B) the weights $\omega_j$ are given by
$$\omega_j = \int \ell_j(x) \, a(x) \, dx, \qquad (3.18)$$
where $\ell_j(x) = \prod_{k=1, k \ne j}^{r} \frac{x - x_k}{x_j - x_k}$ denotes the $j$th Lagrange interpolation polynomial with nodes $x_1, \ldots, x_r$.
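Lemma 3.1 can be checked numerically for the Legendre weight $a(x) = 1$ (a sketch under our own naming; `leggauss` returns the classical Gauss-Legendre rule, which is exact up to degree $z = 2r - 1$):

```python
import numpy as np

r = 3
nodes, weights = np.polynomial.legendre.leggauss(r)

# exactness: x^4 has degree 4 <= 2r - 1 = 5, and its integral
# over [-1, 1] is 2/5
approx = np.sum(weights * nodes ** 4)

def lagrange_weight(j):
    """Condition (B): omega_j as the integral of the j-th Lagrange
    polynomial over [-1, 1] with weight a(x) = 1."""
    ell = np.poly1d([1.0])
    for k in range(r):
        if k != j:
            ell *= np.poly1d([1.0, -nodes[k]]) / (nodes[j] - nodes[k])
    antideriv = np.polyint(ell)
    return antideriv(1.0) - antideriv(-1.0)
```

Here `lagrange_weight(j)` reproduces `weights[j]`, illustrating condition (B) of the lemma for this weight function.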
In the following, we use Lemma 3.1 for $z = 2d$ and the weight function $a(x) = (1 - x^2)^{(m-i-2)/2}$. Note that the Gegenbauer polynomials $C_r^{(m-i-1)/2}(x)$ are orthogonal on the interval $[-1, 1]$ with respect to this weight function [see Andrews et al. (1991), p. 302]. Hence the $r$ roots of $C_r^{(m-i-1)/2}(x)$ have multiplicity $1$, are real and located in the interval $(-1, 1)$. As condition (3.17) is satisfied for $a(x) = (1 - x^2)^{(m-i-2)/2}$, they define together with the corresponding (positive) weights in (3.18) a Gaussian quadrature formula. Therefore, it follows that for any $r \in \{d+1, \ldots, 2d\}$ there exists at least one quadrature formula $\{x_j^i, \omega_j^i\}_{j=1}^{r}$ for every $i = 1, \ldots, m-2$ such that (3.16) holds with $a(x) = (1 - x^2)^{(m-i-2)/2}$. We consider quadrature formulas of this type and define the designs $\zeta_i$ supported at the nodes $x_j^i$ with weights proportional to $\omega_j^i$. Similarly, we define for any $t \in \mathbb{N}$ and any $\beta \in (-\frac{t+1}{t}\pi, -\pi]$ a design $\nu = \nu(\beta, t)$ on the interval $[-\pi, \pi]$ with equal masses at the points $\phi_j = \beta + 2\pi j/t$ ($j = 1, \ldots, t$). The following theorem shows that designs of the product form $\zeta_1 \otimes \cdots \otimes \zeta_{m-2} \otimes \nu$ are $\Phi_p$- as well as $\Phi_{E_s}$-optimal designs.
Consequently, by Fubini's theorem the system above is satisfied if the equations (3.25) and (3.26) hold. It is well known [see Pukelsheim (2006)] that equation (3.25) is satisfied for measures of the form (3.21). Hence in what follows we can restrict ourselves to the case $\mu_{m-2} = \mu'_{m-2}$. Now the integrand in equation (3.26) is a polynomial of degree $\mu_{m-3} + \mu'_{m-3} \le 2d$. Furthermore, since $\zeta_{m-2}$ corresponds to a quadrature formula for $a(x) = 1$ that integrates polynomials of degree $2d$ exactly, we have from Lemma 3.1 for $z = 2d$ and $a(x) = 1$ that the integral in (3.26) coincides with the corresponding quadrature sum. From Andrews et al. (1991), p. 457, we obtain the required representation of the integrand. Also, since $\zeta_{m-3}$ corresponds to a quadrature formula for $a(x) = \sqrt{1 - x^2}$ that integrates polynomials of degree $2d$ exactly, the analogous identity follows from Lemma 3.1 for $z = 2d$ and $a(x) = \sqrt{1 - x^2}$. From Andrews et al. (1991), Corollary 6.8.4, we have that the Gegenbauer polynomials $C_{\mu_{m-4} - \mu_{m-3}}(x)$ (with the appropriate parameter) are orthogonal with respect to $(1 - x^2)^{\mu_{m-3} + 1/2}$ on the interval $[-1, 1]$. This implies (3.28), and in what follows we can restrict ourselves to the case $\mu_{m-4} = \mu'_{m-4}$. It remains to show that if (3.27) holds for $i = k + 1$, that is (3.29), then (3.27) holds for $i = k$, that is (3.30). Note that we use a kind of "backward" induction step, since $\mu_k \ge \mu_{k+1}$. Now, since (3.29) holds, for proving (3.30) we can restrict ourselves to the case $\mu_k = \mu'_k$. The integrand in (3.30) is a polynomial of degree $2\mu_k + \mu_{k-1} - \mu_k + \mu'_{k-1} - \mu_k = \mu_{k-1} + \mu'_{k-1} \le 2d$. Furthermore, since $\zeta_k$ corresponds to a quadrature formula for $a(x) = (1 - x^2)^{\frac{m-k-2}{2}}$ that integrates polynomials of degree $2d$ exactly, we have from Lemma 3.1 for $z = 2d$ that the integral in (3.30) coincides with the quadrature sum, and the corresponding Gegenbauer polynomials are again orthogonal with respect to the appropriate weight [see again Andrews et al. (1991), Corollary 6.8.4]. This implies (3.30), and by induction the system of equations (3.27) is established, which completes the proof of the theorem.
Example 3.1. To illustrate our approach we consider the dimension $m = 4$ and a series expansion of order $d = 4$. By Theorem 3.2 with $r = d + 1 = 5$ we have to consider the weight functions $a_1(x) = (1 - x^2)^{1/2}$ and $a_2(x) = 1$.
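The nodes and weights for the weight function $a_1$ in this example can be obtained with SciPy (a sketch; note that `roots_gegenbauer(n, alpha)` uses the weight $(1 - x^2)^{\alpha - 1/2}$, so $\alpha = 1$ gives $a_1(x) = (1 - x^2)^{1/2}$, and with $r = 5$ the rule is exact up to degree $2r - 1 = 9$):

```python
import numpy as np
from scipy.special import roots_gegenbauer

r = 5
# nodes are the roots of the Gegenbauer polynomial C_5^{(1)};
# weights are positive and the nodes lie in (-1, 1), as stated above
nodes, weights = roots_gegenbauer(r, 1.0)

# exactness check: the integral of x^2 * (1 - x^2)^{1/2} over [-1, 1]
# equals pi / 8
approx = np.sum(weights * nodes ** 2)
```

The analogous computation for $a_2(x) = 1$ uses the Gauss-Legendre rule; tensoring such one-dimensional rules with the equally weighted angles of $\nu$ yields the discrete designs of Theorem 3.2.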
To compare the uniform designs with the optimal design $\zeta_1^* \otimes \zeta_2^* \otimes \nu^*$ obtained by Theorem 3.2, we consider the efficiency $\mathrm{eff}_\Phi(\xi) = \Phi(\xi) / \Phi(\zeta_1^* \otimes \zeta_2^* \otimes \nu^*)$, where $\Phi$ is either the D-, E- or $\Phi_{E_s}$-optimality criterion.
We focus on the estimation of $K^T c$, where we fix $q = 1$, $k_0 = 0$ and $k_1 = 4$, and $K$ is a block matrix of the form (3.2) with appropriate blocks given by (3.3). For the case of $\Phi_{E_s}$-optimality we set $s = s(k_0) + s(k_1) = 26$. The D-, E- and $\Phi_{E_s}$-efficiencies of the designs $\tilde\zeta_1 \otimes \tilde\zeta_2 \otimes \tilde\nu$ and $\bar\zeta_1 \otimes \bar\zeta_2 \otimes \bar\nu$ are presented in Table 1. For the modified optimal design $\tilde\zeta_1 \otimes \tilde\zeta_2 \otimes \tilde\nu$ (with the same support points as the optimal design) we observe a good D-efficiency; however, the $\Phi_{E_s}$- and the E-efficiencies are substantially smaller (54.58% and 49.56%, respectively). The uniform design $\bar\zeta_1 \otimes \bar\zeta_2 \otimes \bar\nu$ performs worse with respect to all the considered criteria, which shows that this uniform design is inefficient in applications.
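The efficiency comparison can be sketched as follows (toy information matrices and our own function names, not the paper's actual designs):

```python
import numpy as np

def d_efficiency(M, M_opt):
    """D-efficiency: ratio of determinants, normalised by the dimension."""
    D = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_opt)) ** (1 / D)

def e_efficiency(M, M_opt):
    """E-efficiency: ratio of the smallest eigenvalues."""
    return np.linalg.eigvalsh(M).min() / np.linalg.eigvalsh(M_opt).min()

def es_efficiency(M, M_opt, s):
    """Phi_Es-efficiency: ratio of the sums of the s smallest eigenvalues."""
    crit = lambda A: np.sort(np.linalg.eigvalsh(A))[:s].sum()
    return crit(M) / crit(M_opt)

M_opt = np.eye(3)               # information matrix of an "optimal" design
M = np.diag([1.0, 0.8, 0.5])    # information matrix of a competing design
effs = (d_efficiency(M, M_opt),
        e_efficiency(M, M_opt),
        es_efficiency(M, M_opt, 2))
```

As in Table 1, a design can have a high D-efficiency while its E- and $\Phi_{E_s}$-efficiencies are considerably lower, since the latter are driven by the smallest eigenvalues.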

Symmetrized hyperspherical harmonics
In the previous example we have already shown that the use of the optimal designs yields a substantially more accurate statistical inference in series estimation with hyperspherical harmonics. In this section we consider a typical application of these functions (more precisely, of linear combinations of hyperspherical harmonics) in material sciences and demonstrate some advantages of the new designs in this context. Due to space limitations we are not able to provide the complete background on the representation of crystallographic textures; however, we explain the main ideas and refer to Bunge (1993), Mason and Schuh (2008) and Patala et al. (2012) for further explanation. Some helpful background with more details can also be found in the monograph of Marinucci and Peccati (2011).
In applications in material sciences the dimension is $m = 4$ and the groups under consideration are much more complicated and induce crystal symmetries. For example, Mason and Schuh (2008) define representations of crystallographic textures as quaternion distributions (this corresponds to the case $m = 4$ in our notation) by series expansions in terms of hyperspherical harmonics which reflect sample and crystal symmetries, such that the resulting expansions are more efficient. For this purpose they define the symmetrized hyperspherical harmonics as specific linear combinations of real hyperspherical harmonics which remain invariant under rotations corresponding to the simultaneous application of a crystal symmetry and a sample symmetry operation. The exact definition of the symmetrized hyperspherical harmonics is complicated and requires sophisticated arguments from representation theory [see Sections 2-4 in Mason and Schuh (2008)], but, in principle, it follows essentially the same arguments as described in Example 4.1.
More precisely, the groups induced by the crystal symmetry, the sample symmetry operation and the level of resolution $\lambda$ define $N(\lambda)$ symmetrized hyperspherical harmonics given as linear combinations of real hyperspherical harmonics, where the coefficients $\alpha^{\eta}_{\lambda, \mu_1, \mu_2}$ are well defined and can be determined from the symmetry properties. A list of the first symmetrized hyperspherical harmonic polynomials (at least 30) for each of the 11 point groups can be found in the online supplement of Mason and Schuh (2008). If the coefficients are standardized appropriately, the symmetrized hyperspherical harmonics also form a complete orthonormal system. Moreover, any square integrable function $g$ that satisfies the same requirements of crystal and sample symmetry can be uniquely represented as a linear combination of these symmetrized hyperspherical harmonics in the form (4.4). Patala et al. (2012) obtained estimates of the misorientation distribution function by fitting experimentally measured misorientation data to a linear combination of symmetrized hyperspherical harmonics, while Patala and Schuh (2013) used truncated series to obtain estimates of the grain boundary distribution from simulated data. As these experiments are very expensive and the simulations are very time consuming, it is of particular importance to obtain good designs for the estimation by series of hyperspherical harmonics. Therefore, we now consider the linear regression model (2.1), where the vector of regression functions is obtained by the truncated expansion of the function $g$ of order $d$ (see (4.5)), and investigate the performance of the designs determined in Section 3 in models of the form (4.5). Due to space restrictions we concentrate on the case $d = 4$ and on the symmetrized hyperspherical harmonics for samples with orthorhombic symmetry and crystal symmetry corresponding to the crystallographic point groups 1 and 2.
Similar results for expansions of higher order and different crystallographic point groups can be obtained along the same lines. For the crystallographic point group 1 there are 11 symmetrized hyperspherical harmonics up to order $d = 4$, which can be obtained from the online supplement of Mason and Schuh (2008). To illustrate the symmetries induced by the crystallographic group in the symmetrized hyperspherical harmonics we use a visualization described by Mason and Schuh (2008). For a fixed hyperangle $(\theta_1, \theta_2, \phi)$, the functional value of $\therefore Z_4^j(\theta_1, \theta_2, \phi)$ is presented by using a projection of the hyperangle to an appropriate two-dimensional disk. More precisely, we project the hyperangle $(\theta_1, \theta_2, \phi)$ onto a two-dimensional disk by
$$P(\theta_1, \theta_2, \phi) = \begin{pmatrix} x_1(\theta_1, \theta_2, \phi) \\ x_2(\theta_1, \theta_2, \phi) \end{pmatrix} = \begin{pmatrix} R(\theta_1, \theta_2)\cos(\phi) \\ R(\theta_1, \theta_2)\sin(\phi) \end{pmatrix}, \qquad (4.8)$$
where the function $R(\theta_1, \theta_2)$ is given by $R(\theta_1, \theta_2) = (3/2)^{1/3} \, (\theta_1 - \sin(\theta_1)\cos(\theta_1))^{1/3} \, 2(1 - |\cos(\theta_2)|)$.

A A Technical Result
A.1 Proof of Lemma 3.1

Assume that conditions (A) and (B) are satisfied and let $Q(x)$ be an arbitrary polynomial of degree $z$. The polynomial $Q$ can be represented in the form $Q(x) = V_r(x) P(x) + R(x)$, where $V_r(x) = \prod_{j=1}^{r}(x - x_j)$ is of degree $r$, the polynomial $P(x)$ is of degree $z - r$ and the polynomial $R(x)$ is of degree less than $r$. Since $x_1, \ldots, x_r$ are the zeros of $V_r(x)$, we have $Q(x_j) = R(x_j)$ for all $j = 1, \ldots, r$, and furthermore, because the degree of $R(x)$ is at most $r - 1$, it can be represented in terms of the Lagrange interpolation polynomials as $R(x) = \sum_{j=1}^{r} R(x_j) \, \ell_j(x)$.