Numerical Quadrature for Gregory Quads

We investigate quadrature rules in the context of quadrilateral Gregory patches, Gregory quads for short. We provide numerical and, where possible, symbolic quadrature rules for the space spanned by the twenty polynomial / rational functions associated with Gregory quads, as well as for the derived spaces including derivatives, products, and products of derivatives of these functions. This opens up the possibility for a wider adoption of Gregory quads in numerical simulations.


Introduction
Numerical quadratures are indispensable in most numerical analyses and simulations. They provide an efficient means of numerically evaluating or approximating integrals of (spline) functions over various domains [1]. In the bivariate setting, a quadrature rule for a function f from a certain space is defined as

∫_Ω f(t) dt = Σ_{i=1}^{m} ω_i f(t_i) + R_m(f),

where Ω ⊂ R² represents the domain under consideration, m is the number of quadrature points t_i (often called nodes) with their corresponding weights ω_i, and R_m(f) represents the error term of the rule. The rule is said to be exact if the error term vanishes for all elements of the linear function space under consideration. The rule is called Gaussian if there is no exact rule that uses fewer nodes. In this paper, we focus on the basis functions associated with Gregory quads as well as the derived spaces useful in the context of numerical analysis.
As isogeometric analysis [2] gained in popularity, so did the search for (Gaussian) quadrature rules for various function and spline spaces. In the fully structured tensor-product setting, the situation is fairly well developed and understood. B-spline quadrature has been studied in [3,4,5,6], NURBS quadrature in [7,8], and the tensor-product setting has been dealt with in [9,10,11,12].
In contrast, the case of non-tensor-product splines and other function spaces has received much less attention. To date, only a few recent advances in this direction have been made. In the case of Catmull-Clark subdivision, an approach to the quadrature for subdivision surfaces based on Catmull-Clark quad meshes was introduced in [13,14]. This research discussed grouping strategies of quads around extraordinary vertices (EVs), as the surface around EVs has non-tensor-product structure. This is an improvement over the naive, per-quad integration. In the area of triangles, quadrature rules for spline spaces over macro-triangles have been studied in [15,16]. The Hammer and Stroud rules generalise to the macro-element spaces of Clough-Tocher and Powell-Sabin provided that some conditions on the split points are met.
Quadrature rules for rational (spline) spaces have not received much attention [7]. Gregory quads, the focus of the present paper, which belong to the class of rational surfaces, were used in [17,18,19]. The last reference mentions the use of a 4 × 4 quadrature scheme, which is in general non-exact in the given context. We improve on this in this paper.
As shown in [20], Gregory patches provide an effective way of constructing globally G^1 surfaces of arbitrary manifold topology defined by quadrilateral meshes, for example as approximations to the more numerically demanding Catmull-Clark subdivision surfaces. This further motivated us to investigate quadrature rules for the space of functions spanned by the basis functions of Gregory quads as well as for the derived spaces needed in (isogeometric) analysis.
We start by introducing the concepts and basis functions behind bicubic Bézier patches and quadrilateral Gregory patches in Section 2, as well as standard techniques for deriving quadrature rules. Our results regarding the quadrature rules for quadrilateral Gregory patches are presented in Section 3 for the basis functions themselves. This is followed by rules for partial derivatives (Section 4) and mixed derivatives (Section 5). In Section 6 we devise a numerical approach to quadrature finding, which we then apply to the cases of basis function products and products of their derivatives in Section 7. Several numerical tests of the quadratures are presented in Section 8. The paper is concluded in Section 9.

Preliminaries
To fix notation and for the convenience of the reader, we recall here the definitions of bicubic Bézier patches and the Gaussian quadrature rule for their basis functions (Section 2.1), Gregory quads (Section 2.2), and provide a brief overview of standard methods for deriving quadrature rules (Section 2.3).

Bicubic Bézier Patches
A bicubic Bézier patch is a parametric surface of polynomial bidegree (3, 3) defined over the unit square □ ⊂ R². Let I = {0, 1, 2, 3}. Given 16 control points p_ij, (i, j) ∈ I² (see Figure 1, left), the bicubic Bézier patch is defined as

B(u, v) = Σ_{i∈I} Σ_{j∈I} p_ij B_i^3(u) B_j^3(v),

where B_i^3(t) = (3 choose i) t^i (1 − t)^{3−i} are the cubic Bernstein polynomials. The 16 polynomial functions B_i^3(u) B_j^3(v) span the linear space

B = span{ B_i^3(u) B_j^3(v) : (i, j) ∈ I² }.

It is well known that exact quadrature for B requires four nodes, known as the 2 × 2 tensor-product Gauss-Legendre rule. Specifically,

∫∫_□ B(u, v) du dv = Σ_{i=1}^{4} ω_i B(u_i, v_i),

with the nodes (u_i, v_i) and weights ω_i specified in Table 1, for any function B(u, v) ∈ B. Note the symmetry of the rule with respect to the bimedians and diagonals of □; we will rely on this later on.
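As a quick sanity check of this rule, the following self-contained Python sketch verifies exactness on all sixteen Bernstein products. The node values 1/2 ± √3/6 and equal weights 1/4 are the standard 2 × 2 tensor-product Gauss-Legendre data on the unit square, matching Table 1.

```python
from math import comb, sqrt

def bernstein3(i, t):
    """Cubic Bernstein polynomial B_i^3(t)."""
    return comb(3, i) * t**i * (1 - t)**(3 - i)

# 2x2 tensor-product Gauss-Legendre rule on the unit square.
a, b = 0.5 - sqrt(3) / 6, 0.5 + sqrt(3) / 6
nodes = [(a, a), (a, b), (b, a), (b, b)]
weights = [0.25] * 4

def quad(f):
    """Apply the 4-node rule to a bivariate function f."""
    return sum(w * f(u, v) for (u, v), w in zip(nodes, weights))

# Each product B_i^3(u) B_j^3(v) integrates to (1/4)*(1/4) = 1/16 over
# the unit square, and the rule reproduces this up to round-off.
for i in range(4):
    for j in range(4):
        val = quad(lambda u, v: bernstein3(i, u) * bernstein3(j, v))
        assert abs(val - 1.0 / 16.0) < 1e-14
```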

Gregory Quads
Quadrilateral Gregory patches [21] are a generalisation of bicubic Bézier surfaces. At the expense of introducing rational blending functions, they provide the possibility to join patches with G^1 continuity over unstructured quad meshes [20,22]. This is achieved by separating the two mixed partial derivatives at each of the four patch corners. Each of the four internal control points of a bicubic Bézier patch is 'split' into two control points, and each such pair is blended rationally into one 'moving' point according to the (u, v) ∈ □ parameter position. More precisely,

p_11(u, v) = (u p_11u + v p_11v) / (u + v),
p_12(u, v) = (u p_12u + (1 − v) p_12v) / (u + (1 − v)),
p_21(u, v) = ((1 − u) p_21u + v p_21v) / ((1 − u) + v),
p_22(u, v) = ((1 − u) p_22u + (1 − v) p_22v) / ((1 − u) + (1 − v)),

with removable singularities [23] at the four corners of □. The other 12 control points (along the boundary) are associated with the same basis functions as in the bicubic Bézier quad. As such, the boundaries of a Gregory quad are cubic Bézier curves. Note that there also exist rational boundary Gregory patches [24], but these are not considered here. The Gregory quad is thus governed by 20 control points, indexed by the set (see Figure 1, right)

J = {00, 01, 02, 03, 10, 13, 20, 23, 30, 31, 32, 33, 11u, 11v, 12u, 12v, 21u, 21v, 22u, 22v},

and defined as

G(u, v) = Σ_{j∈J} p_j G_j(u, v).

The 20 functions G_j, some of which are shown in Figure 2, span the linear space

G = span{ G_j : j ∈ J }.

We now show that these functions are in fact basis functions and that dim(G) = 20. While this may be considered obvious, we have not found a formal proof of this fact in the existing literature.
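To make the blending concrete, here is a minimal Python sketch of Gregory-quad evaluation. The orientation of the four rational blends follows the standard convention from the literature and is an assumption here, as is the small `eps` guard for the removable corner singularities. When each split pair coincides, the patch must reduce to a bicubic Bézier patch, which the final check exercises with scalar 'control points'.

```python
from math import comb

def B3(i, t):
    """Cubic Bernstein polynomial B_i^3(t)."""
    return comb(3, i) * t**i * (1 - t)**(3 - i)

INTERIOR = [(1, 1), (1, 2), (2, 1), (2, 2)]

def gregory_point(P, u, v, eps=1e-12):
    """Evaluate a Gregory quad with scalar control values P at (u, v).

    P maps index strings ('00', ..., '33', '11u', '11v', ...) to values.
    The blend orientation below is the assumed standard convention."""
    blend = {
        (1, 1): (u * P['11u'] + v * P['11v']) / (u + v + eps),
        (1, 2): (u * P['12u'] + (1 - v) * P['12v']) / (u + (1 - v) + eps),
        (2, 1): ((1 - u) * P['21u'] + v * P['21v']) / ((1 - u) + v + eps),
        (2, 2): ((1 - u) * P['22u'] + (1 - v) * P['22v'])
                / ((1 - u) + (1 - v) + eps),
    }
    return sum((blend[(i, j)] if (i, j) in blend else P[f'{i}{j}'])
               * B3(i, u) * B3(j, v)
               for i in range(4) for j in range(4))

# With coinciding split pairs, the Gregory quad degenerates to a
# bicubic Bézier patch.
P = {f'{i}{j}': float(i + 2 * j)
     for i in range(4) for j in range(4) if (i, j) not in INTERIOR}
for k in ('11', '12', '21', '22'):
    P[k + 'u'] = P[k + 'v'] = 5.0
bezier = sum((5.0 if (i, j) in INTERIOR else float(i + 2 * j))
             * B3(i, 0.3) * B3(j, 0.7)
             for i in range(4) for j in range(4))
assert abs(gregory_point(P, 0.3, 0.7) - bezier) < 1e-8
```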
Theorem 2.1. The twenty Gregory quad functions G_j, j ∈ J, are linearly independent and therefore dim(G) = 20.
Proof. Suppose for contradiction that Σ_{k∈J} c_k G_k = 0 over □ where not all c_k vanish. Using the G_j also as 'test' functions and integrating over □ gives the 20 × 20 system Nc = 0, where the elements of N have the form ∫∫_□ G_j G_k du dv with j, k ∈ J and the elements of c are simply the c_k. Any solution c must be an element of ker(N). A direct computation confirms that dim(ker(N)) = 0, which concludes the proof.
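The linear-independence claim can also be checked numerically without forming N: sampling the 20 functions at random interior points and computing the rank of the sample matrix suffices. The sketch below assumes the standard rational blending weights for the eight interior functions.

```python
import numpy as np
from math import comb

def B3(i, t):
    return comb(3, i) * t**i * (1 - t)**(3 - i)

def gregory_basis(u, v):
    """The 20 Gregory basis values at (u, v), 0 < u, v < 1. The 12
    boundary functions are Bernstein products; the 8 interior ones
    carry rational blending weights (standard convention assumed)."""
    bb = lambda i, j: B3(i, u) * B3(j, v)
    vals = [bb(i, j) for i in range(4) for j in range(4)
            if (i, j) not in [(1, 1), (1, 2), (2, 1), (2, 2)]]
    vals += [u / (u + v) * bb(1, 1),
             v / (u + v) * bb(1, 1),
             u / (u + (1 - v)) * bb(1, 2),
             (1 - v) / (u + (1 - v)) * bb(1, 2),
             (1 - u) / ((1 - u) + v) * bb(2, 1),
             v / ((1 - u) + v) * bb(2, 1),
             (1 - u) / ((1 - u) + (1 - v)) * bb(2, 2),
             (1 - v) / ((1 - u) + (1 - v)) * bb(2, 2)]
    return vals

rng = np.random.default_rng(0)
pts = rng.uniform(0.05, 0.95, size=(200, 2))
M = np.array([gregory_basis(u, v) for u, v in pts])  # 200 x 20 samples
assert np.linalg.matrix_rank(M) == 20  # full column rank: independence
```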

Classical Techniques for Deriving Quadratures
We now review classical quadrature derivation techniques, outline why they are insufficient in our context, and prepare the ground for our contribution in this direction.
When considering a univariate function space S spanned by a weighted polynomial basis (that is, a polynomial basis in which each function is multiplied by one and the same weighting function w(x)), quadratures can be derived by considering polynomials P_k(x) that are orthogonal with respect to a known weighted inner product (using the same weighting function) on a certain domain Ω. The derivation works by dividing a function w(x)f(x) ∈ S, where f(x) is a polynomial of degree at most 2d − 1, by the polynomial P_d(x) of degree d. This yields

w(x)f(x) = w(x)Q(x)P_d(x) + w(x)R(x),

where both Q(x) and R(x) are polynomials of degree at most d − 1. Expressing Q(x) = Σ_k c_k P_k(x) and subsequently integrating over Ω then gives ∫_Ω w(x)f(x) dx = ∫_Ω w(x)R(x) dx because of orthogonality. Finally, expressing R(x) in a degree d − 1 Lagrange basis using the d roots x_m of P_d(x) as nodes, i.e., R(x) = Σ_m R(x_m) ℓ_m(x), and noting that R(x_m) = f(x_m) at these roots, yields the quadrature

∫_Ω w(x)f(x) dx = Σ_{m=1}^{d} ω_m f(x_m), with ω_m = ∫_Ω w(x) ℓ_m(x) dx.

A familiar example are the Gauss-Legendre rules, which follow from the Legendre basis, orthogonal with respect to the standard L² inner product (with w(x) = 1). This means that the resulting quadratures apply to polynomial spaces. Other examples include Gauss-Jacobi and Gauss-Chebyshev quadratures, which are based on the Jacobi and Chebyshev polynomials orthogonal with respect to the inner products using w(x) = (1 − x)^α (1 + x)^β and w(x) = 1/√(1 − x²), respectively.

In the setting where a weighting function is given but its associated set of orthogonal polynomials is unknown, another method considering the moments of w(x) can be used to derive quadratures. These moments M_k are the values of the integrals ∫_Ω w(x) x^k dx, with k ∈ N_0. With k going up to 2d − 1, we then assume the existence of d nodes x_m and associated weights w_m such that M_k = Σ_m x_m^k w_m for all k ∈ {0, ..., 2d − 1}. Next, we define a polynomial P_d(x) = Π_m (x − x_m) = x^d + Σ_{m=0}^{d−1} α_m x^m, where the α_m now have become the unknowns. In order to solve for these unknowns, we form weighted combinations of the moments: since each node x_m is a root of P_d, we have 0 = Σ_m w_m x_m^n P_d(x_m), and expanding P_d yields the linear system

Σ_{k=0}^{d−1} α_k M_{n+k} = −M_{n+d}, n ∈ {0, ..., d − 1},

whose solvability is discussed for instance in [25, Chapter 19]. Upon solving for the α_m, we know P_d(x) and can solve for its d roots x_m. Finally, we can form another linear system using the equations for M_k for k ∈ {0, ..., d − 1} to solve for the d weights w_m. An example using the above procedure is w(x) = −ln(x), which yields the logarithmic quadratures used in the 2D collocation boundary element method [26].
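The moment procedure is easy to implement. The sketch below applies it with w(x) = 1 on [0, 1] (moments M_k = 1/(k + 1)) and recovers the familiar two-point Gauss-Legendre rule with nodes 1/2 ± √3/6 and equal weights 1/2; the same code applies to any weight whose moments are known.

```python
import numpy as np

def quad_from_moments(M):
    """Derive a d-node rule from the first 2d moments M_0..M_{2d-1}."""
    d = len(M) // 2
    # Solve for the monic polynomial P_d(x) = x^d + sum_m a_m x^m via
    # sum_m a_m M_{n+m} = -M_{n+d}, n = 0..d-1.
    A = np.array([[M[n + m] for m in range(d)] for n in range(d)])
    rhs = -np.array([M[n + d] for n in range(d)])
    a = np.linalg.solve(A, rhs)
    # Nodes are the roots of P_d (coefficients highest degree first).
    nodes = np.sort(np.roots(np.concatenate(([1.0], a[::-1]))))
    # Weights from the Vandermonde system sum_m x_m^k w_m = M_k.
    V = np.vander(nodes, d, increasing=True).T
    weights = np.linalg.solve(V, np.array(M[:d]))
    return nodes, weights

moments = [1.0 / (k + 1) for k in range(4)]  # d = 2, w(x) = 1 on [0, 1]
nodes, weights = quad_from_moments(moments)
# Recovers the 2-point Gauss-Legendre rule on [0, 1].
```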
The notion of orthogonal functions can be extended to the multivariate setting. However, the quadrature derivation method based on these does not directly apply to the Gregory setting: there is no common factor that can be extracted from these 20 functions (acting as the weighting function), and clearly there is no bivariate orthogonal polynomial basis spanning the same space as G. Similarly, the method based on moments of the weighting function (w(x) = 1 in our setting) does not apply as we are in a rational setting (rather than a polynomial one, which is what permits the use of moments in the first place). As such, another method must be devised, which we describe in Section 6, to derive quadrature rules for the challenging case of products of (derivatives of) Gregory basis functions. However, before that, we investigate the simpler cases first.

Gaussian Quadrature for Gregory Quads
We now prove that the quadrature for bicubic Bézier patches (see Table 1) is also exact for Gregory patches. In other words, the quadrature of (4) is exact not only for B but also for G. Naturally, this can be proved directly by checking that all 20 basis functions G_j(u, v) in (6) are integrated exactly by it. However, this offers no insight into the structure of the basis. Therefore, we now provide a more geometric proof.
We first observe that, due to symmetry, the paired rational functions recombine into Bernstein products:

Lemma 3.1. For each interior index pair k = ij ∈ {11, 12, 21, 22}, it holds that G_ku(u, v) + G_kv(u, v) = B_i^3(u) B_j^3(v).

This, unsurprisingly, shows that B ⊂ G. The next step is to find four 'convenient' linearly independent functions D_k, k ∈ {11, 12, 21, 22}, that extend the basis of B to a basis of G. By convenient, we mean the following: the four functions D_k should have vanishing integrals over □ as well as vanishing quadrature sums based on the four nodes listed in Table 1. As the 8 rational basis functions in (7), again due to symmetry, share the same integral over □ (the actual value is not important but, for the curious reader, it equals 1/32), we can simply take

D_k = G_ku − G_kv, k ∈ {11, 12, 21, 22}.   (10)

One of these blends, D_11, is shown in Figure 3. Note that it is, by construction, anti-symmetric with respect to one of the diagonals of □. It follows that all four D functions are anti-symmetric with respect to one of the diagonals of □. Further, for each of them, the values at the four quadrature nodes are either 0 or pair-wise opposite to each other; this is the case due to the symmetry of the quadrature and the anti-symmetry of the D_k. It remains to show linear independence.
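The anti-symmetry argument can be illustrated directly in a few lines. Assuming the common construction D_11 = G_11u − G_11v, one gets D_11(u, v) = (u − v)/(u + v) · B_1^3(u) B_1^3(v), which flips sign under (u, v) → (v, u); the four-node quadrature sum then cancels pairwise. The exact scaling of D_11 is our assumption and does not affect the cancellation.

```python
from math import sqrt, comb

def B3(i, t):
    return comb(3, i) * t**i * (1 - t)**(3 - i)

def D11(u, v):
    """Assumed form of the blend D_11 = G_11u - G_11v (a sketch; the
    paper's exact normalisation is immaterial for the cancellation)."""
    return (u - v) / (u + v) * B3(1, u) * B3(1, v)

a, b = 0.5 - sqrt(3) / 6, 0.5 + sqrt(3) / 6
nodes = [(a, a), (a, b), (b, a), (b, b)]

# Anti-symmetry across the diagonal u = v ...
assert abs(D11(0.2, 0.7) + D11(0.7, 0.2)) < 1e-15
# ... makes the 4-node quadrature sum vanish, matching the zero integral.
assert abs(sum(0.25 * D11(u, v) for u, v in nodes)) < 1e-15
```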
All combined, and given the fact that exact quadrature is a linear operator, we have proved the following theorem: for any function G(u, v) ∈ G it holds that

∫∫_□ G(u, v) du dv = Σ_{i=1}^{4} ω_i G(u_i, v_i),

with the nodes (u_i, v_i) and weights ω_i specified in Table 1. The quadrature is Gaussian.
Furthermore, a symbolic computation in Maple shows that this is the only exact four-node quadrature for G that exists.

Quadrature for Partial Derivatives
We now turn our attention to the space G_u = span{∂G_j/∂u : j ∈ J}, i.e., the space spanned by the 20 partial derivatives of the Gregory quad basis functions with respect to u. Analogous results hold, due to symmetry, for the partials in the v direction.
The 20 functions spanning G_u do not form a basis. That is apparent from Lemma 3.1: B_u (defined analogously to G_u) does not have dimension 16, but only 12, as that space is spanned by tensor-product polynomials of degree 2 in u and degree 3 in v. Indeed, when using the approach taken in the proof of Theorem 2.1, we obtain dim(ker(N_u)) = 4 with N_u now constructed with respect to G_u, which confirms that dim G_u = 16. The search for a quadrature for this space leads to equations of the form

∫∫_□ ∂G_j/∂u du dv = Σ_{i=1}^{m} ω_i ∂G_j/∂u (u_i, v_i), j ∈ J,   (13)

with ω_i and (u_i, v_i) as unknowns. Due to dim G_u = 16 it is sufficient to consider only 16 linearly independent equations. As G_u contains all biquadratic polynomials, one cannot expect the existence of a quadrature with fewer than four nodes. We thus first set m = 4. Using the computer algebra system Maple and the Gröbner basis computation it provides (after converting the rational equations in the system to polynomial ones) shows that the system admits no solution. We thus conclude that there is no four-node quadrature rule for G_u.
Next, we investigate the case m = 5. We first observe that the derivatives in G_u exhibit, as a set, (only) one-fold symmetry: that with respect to v = 1/2. We thus set v_1 = 1/2 and place the remaining four nodes as corners of a rectangle symmetric with respect to v = 1/2. Maple again shows that this leads to no solutions. Although an asymmetric solution is unlikely, we tried to find one both using Maple and numerically using Python (following the approach described in Section 6). The numerical approach did not converge to a solution from any of our random initialisations, and Maple's computation with fully generic nodes and weights ran out of memory (64GB). We thus conjecture that the system admits no solution when m = 5.
Finally, we set m = 6, which does lead to solutions. Not unexpectedly, Maple is not able to handle the system (13) in full generality, or even with the v = 1/2 symmetry built in. We therefore turned to our numerical approach again, which delivered multiple solutions. Upon inspection, it led us to the symmetric case (14), whose last line of equalities follows from symmetry and the fact that the weights have to sum to one. This leaves only u_1 and ω_1 as unknowns. Maple is able to solve the resulting system, via its Gröbner basis, in radical form, but the expressions are far too long to be useful. With a precision of 30 digits, one obtains the solution u_1 = 0.0934779568755708578531910506923, ω_1 = 0.126063849132135287691741790291, which, combined with (14), yields a 6-node symmetric quadrature for G_u.

Gaussian Quadrature for Mixed Partial Derivatives
The mixed partial derivatives ∂²G_j/∂u∂v, j ∈ J, of the Gregory quad basis functions exhibit two-fold symmetry when considered as a set. Let G_uv = span{∂²G_j/∂u∂v : j ∈ J}. Using Lemma 3.1 again, it is clear that the dimension of G_uv is at most 9 + 4 = 13, where the 9 comes from the biquadratic polynomials spanning B_uv. Turning again to the proof of Theorem 2.1, one obtains that dim(ker(N_uv)) = 7 with N_uv constructed according to G_uv, confirming that dim G_uv = 13.
We approached the problem of quadrature-finding for G_uv as in Section 4. To our surprise, we found a solution in the very first step, using only four nodes. More surprisingly still, the quadrature rule for mixed derivatives of Gregory quads is the same as the one for Gregory quads; see Table 1. Further, a Gröbner basis computation confirms that this is the only solution with two-fold symmetry and four nodes (although 4 non-symmetric four-node solutions exist; see Figure 4). Since G_uv contains the biquadratic polynomials, this quadrature is optimal. We have thus proved the following theorem.

Theorem 5.1. For any function H(u, v) ∈ G_uv it holds that

∫∫_□ H(u, v) du dv = Σ_{i=1}^{4} ω_i H(u_i, v_i),

with the nodes (u_i, v_i) and weights ω_i specified in Table 1. The quadrature is Gaussian.
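Theorem 5.1 is easy to spot-check numerically on a single rational basis function. The sketch below assumes the standard blending weight u/(u + v) for G_11u and differentiates symbolically with SymPy; the exact integral of the mixed derivative equals the alternating sum of corner values of G_11u, which vanishes because the Bernstein factor B_1^3 vanishes at 0 and 1, so the 4-node sum should vanish as well.

```python
import sympy as sp
from math import sqrt

u, v = sp.symbols('u v')
B1 = lambda t: 3 * t * (1 - t)**2            # cubic Bernstein B_1^3
G11u = u / (u + v) * B1(u) * B1(v)           # a rational basis function
# Mixed partial derivative, turned into a fast numeric function.
H = sp.lambdify((u, v), sp.diff(G11u, u, v), 'math')

# The exact integral of H over the unit square is
# G11u(1,1) - G11u(1,0) - G11u(0,1) + G11u(0,0) = 0.
a, b = 0.5 - sqrt(3) / 6, 0.5 + sqrt(3) / 6
rule = sum(0.25 * H(p, q) for p, q in [(a, a), (a, b), (b, a), (b, b)])
assert abs(rule) < 1e-12  # the 4-node rule reproduces the zero integral
```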
Upon closer inspection, the result is actually not that surprising after all. Lemma 3.1 allows us to focus only on the mixed partial derivatives of the D-functions defined in (10); the other functions are simply biquadratic and thus integrated exactly by the rule. And thus again, the symmetry of the quadrature rule and the anti-symmetry of these functions allow us to arrive at Theorem 5.1.

Numerical Optimisation Using Python
Before moving on to deriving quadratures for products of the Gregory functions or their first-order partial derivatives, we describe a general numerical method that can be used to obtain quadratures for virtually any set of functions on a given domain Ω. This follows the approach of [27] and similar non-linear optimisation schemes.
Given n bivariate functions f_j(u, v), we assume the existence of m quadrature nodes (u_k, v_k) and weights ω_k such that

Σ_{k=1}^{m} f_j(u_k, v_k) ω_k = ∫∫_Ω f_j(u, v) du dv =: I_j, j ∈ {1, ..., n}.

Unlike in the two approaches described in Section 2.3, the quadrature nodes and weights must now be solved for simultaneously. This can be done using a nonlinear least-squares approach, for which we use Python's SciPy library: scipy.optimize.least_squares with the trust region reflective ('trf') minimisation algorithm and numerical approximations of Jacobians.
Writing the system of equations above as Fω = I, we define the n residuals r_j = Σ_k f_j(u_k, v_k) ω_k − I_j. Their partial derivatives with respect to the unknowns u_k, v_k and ω_k, required for the optimisation procedure, are

∂r_j/∂u_k = ∂f_j/∂u (u_k, v_k) ω_k, ∂r_j/∂v_k = ∂f_j/∂v (u_k, v_k) ω_k, ∂r_j/∂ω_k = f_j(u_k, v_k).   (17)

The values of the exact integrals I_j can be computed using e.g. Python's SymPy library or Maple. This only has to be done once.
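A minimal version of this least-squares search, on a toy bilinear space where the unique one-node solution (1/2, 1/2, 1) is known, might look as follows. The restart loop and tolerances are our choices for illustration, not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy target space: the four bilinear monomials u^a v^b, a, b in {0, 1}.
exps = [(0, 0), (1, 0), (0, 1), (1, 1)]
I = np.array([1.0 / ((a + 1) * (b + 1)) for a, b in exps])  # exact integrals

def residuals(x):
    u, v, w = x
    return np.array([w * u**a * v**b for a, b in exps]) - I

# Random restarts with the trust-region-reflective solver; keep the best.
rng = np.random.default_rng(42)
best = min((least_squares(residuals, rng.uniform(0.1, 0.9, 3),
                          method='trf',
                          xtol=1e-14, ftol=1e-14, gtol=1e-14)
            for _ in range(20)),
           key=lambda res: np.linalg.norm(res.fun))
assert np.linalg.norm(best.fun, np.inf) < 1e-8
assert np.allclose(best.x, [0.5, 0.5, 1.0], atol=1e-5)
```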
There are multiple challenges in solving for the m triples (u_k, v_k, ω_k). First, there is the choice of m. On the one hand, it is desirable to set this as low as possible (we note that lower bounds on m are only known for specific cases, which means that in other cases they have to be estimated). On the other hand, a symmetric rule potentially using a slightly higher number of nodes/weights might be preferred. We get back to this point below.
Secondly, as is usually the case in the context of optimisation, an initial guess is required. For a decent initial guess reasonably close to a solution (note that there might indeed be multiple solutions), we need to have extra insights. Without these, the only resort is a random initial guess, which does not come with any guarantee of convergence.
Remark 1. While random guesses may not seem to be the best initialisation, they are an option that allows us to quickly explore the solution spaces with the least number of quadrature points. One could eventually follow the approach of [28] and initialise the quadratures, for example, via the pivoted Gram-Schmidt method, starting with a higher-degree tensor-product Gauss rule and eliminating those quadrature points with the smallest positive weight. However, such a search requires an excessive number of initial quadrature points, and does not guarantee finding a quadrature with the least possible number of quadrature points (see Table 1 in [28]). In contrast, we impose symmetry assumptions, which greatly reduces the search space, as we detail below.

Finally, there is the matter of accuracy. Although symbolic solutions are arguably the best outcome, numerical solutions that are accurate to float64 machine precision are certainly an acceptable alternative. After all, it is likely that the obtained quadratures will be used in a numerical setting, where exact integration is not required. The tolerance we use is ∥Fω − I∥_∞ < 1e−16, which we achieve by using NumPy's longdouble format for all computations.
In order to test the implementation, we set f = G and m = 4. Lacking further insight, we initialise the triples (u_k, v_k, ω_k) with random values in (0, 1)³.
Running the script 10,000 times resulted (somewhat surprisingly) in an equal number of successfully converged runs. The 4-node solution, which Maple confirms to be unique, is shown in a single scatter plot in Figure 4 (left).
Next, setting f = G_u, we get converging runs starting from m = 6. There appear to be many different solutions. The results of 1000 successfully converged runs are plotted in a single scatter plot (semi-transparently, using an alpha/blending value of 1/100), which we refer to as a density scatter plot; see Figure 5 (left). One of these solutions was discussed in Section 4. We note that not all runs converged satisfactorily; about 25 of them did not. Likewise, for f = G_v we obtain the density scatter plot shown in Figure 5 (right).
Out of curiosity, we also considered f = G_u ∪ G_v. We obtain converging runs for m = 7. Interestingly enough, there appear to be just two solutions integrating all 40 functions; see Figure 4 (middle). Still using random initialisation, in addition to 1000 successfully converged runs, we get about 350 that do not converge satisfactorily (to local minima or saddle points).
Finally, we return to m = 4 to check the uniqueness of the quadrature rule for f = G_uv. There turn out to be four (mutually symmetric) solutions in addition to the 2 × 2 Gauss-Legendre rule mentioned and verified in Section 5; see Figure 4 (right). We note that in this case convergence is problematic; only about 1 in every 45 runs converges successfully.

Exploiting Symmetry
Up to this point, no symmetry has been enforced in the optimisation process. However, when moving on to deriving quadratures for products of (derivatives of) functions, as described in the next section, the numbers of functions and quadrature nodes/weights generally increase considerably. In such a setting, restricting the set of feasible solutions to symmetric quadratures can greatly speed up the optimisation process. A slight drawback is that potentially more efficient non-symmetric quadrature rules remain undiscovered. However, this potential drawback is outweighed by the independence/invariance with respect to the orientation of the quadrilateral element and the more efficient storage of symmetric quadratures. Moreover, it is a common assumption that, when dealing with a symmetric parametric domain, the corresponding quadrature is expected to be symmetric as well [28,29].
The symmetries of the unit square are rather straightforward. For a point not on a bimedian or diagonal, we have an 8-fold symmetry. For those that do lie on a bimedian or diagonal, there is only 4-fold symmetry (though by using a pair of points, it could be interpreted as an 8-fold symmetry again).

Figure 5: Density scatter plots for 1000 successfully converged runs (see also [14]) for G_u and G_v. An alpha value of 1/100 was used.
As such, we now consider the symmetrised functions

g_j(u, v) = f_j(u, v) + f_j(1−u, v) + f_j(u, 1−v) + f_j(1−u, 1−v) + f_j(v, u) + f_j(v, 1−u) + f_j(1−v, u) + f_j(1−v, 1−u),

with a single representative node per 8-fold orbit. For nodes that move to a location on a bimedian or diagonal, this symmetry approach results in a pair of overlapping nodes. Note that this comes with a certain challenge: a tolerance to decide whether nodes indeed overlap or not. Rather than setting up such tolerances, we add the option to explicitly constrain nodes to a bimedian or diagonal. That is, for (u_k, v_k) on the bimedian u = 1/2, we consider the functions

g_j^+(u, v) = f_j(u, v) + f_j(u, 1−v) + f_j(v, u) + f_j(1−v, u),

whereas for (u_k, v_k) on the diagonal v = u, we consider the functions

g_j^×(u, v) = f_j(u, v) + f_j(1−u, v) + f_j(u, 1−v) + f_j(1−u, 1−v).

Naturally, this introduces a different issue: the a-priori choice of the numbers of nodes on a bimedian, on a diagonal, or in an 'ordinary' position. It is very difficult to obtain a good initial guess for a non-linear optimisation problem of this kind. Unlike in the case of spline spaces (and tensor products built upon them), where one can deduce the number of quadrature points lying in each particular knot span (see e.g. [30]), in our case the support of all the involved functions is the whole unit square. Another common initialisation strategy, namely a uniform distribution of quadrature points with equal weights, which is simple for a 1D domain, is impossible over the unit square for a number of nodes such as m = 7 that does not form a perfect square. Therefore, we opted for random initial guesses, which turned out to be a good choice in our setting.
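An 8-fold symmetrisation of this kind can be written down in a few lines. The concrete form below is our reading of the construction (summing a function over the eight symmetric images of a point), and the assertions confirm the invariance that makes a single representative node per orbit sufficient.

```python
def symmetrize8(f):
    """Eight-fold symmetrisation of a bivariate function over the unit
    square: sum f over the eight symmetric images of (u, v)."""
    def g(u, v):
        return (f(u, v) + f(1 - u, v) + f(u, 1 - v) + f(1 - u, 1 - v)
                + f(v, u) + f(v, 1 - u) + f(1 - v, u) + f(1 - v, 1 - u))
    return g

# An arbitrary asymmetric test function.
g = symmetrize8(lambda u, v: u * u * v)
# g is invariant under all symmetries of the square.
for u, v in [(0.2, 0.7), (0.35, 0.1)]:
    assert abs(g(u, v) - g(1 - u, v)) < 1e-12   # reflection
    assert abs(g(u, v) - g(v, u)) < 1e-12       # diagonal flip
```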
Next, an important observation is that there might be certain sets of basis functions satisfying the same symmetries. That is, we could have e.g. f_p(u, v) = f_q(1 − u, v), which would then allow us to remove either f_p or f_q from the system of equations. Although this does not influence the space of feasible solutions, it can considerably speed up the optimisation. We have implemented this function elimination procedure symbolically in Maple (using parameter substitutions such as u ← 1 − u and detecting scaled versions by division), reducing the functions involved in the optimisations to the relevant ones, i.e. a set with no symmetric or scaled copies left.
The resulting system of equations (here, as an example, using a single point on a bimedian and another on a diagonal, with s the number of relevant functions and t the number of ordinary symmetric quadrature points) is now as follows:

Σ_{k=1}^{t} g_j(u_k, v_k) ω_k + g_j^+(1/2, v_{t+1}) ω_{t+1} + g_j^×(u_{t+2}, u_{t+2}) ω_{t+2} = I_j, j ∈ {1, ..., s}.   (21)

Figure 6: Quadratures for the spaces GG (we conjecture this one to be unique), G_u G_u, G_u G_v, and GG_u. The highlighted nodes are listed in Tables 3-6, respectively. The nodes are again scaled proportionally to their associated weights.
Without bimedial or diagonal nodes, the partial derivatives of the residuals r_j = Σ_k g_j(u_k, v_k) ω_k − I_j now correspond to

∂r_j/∂u_k = ( ∂f_j/∂u (u_k, v_k) − ∂f_j/∂u (1−u_k, v_k) + ∂f_j/∂u (u_k, 1−v_k) − ∂f_j/∂u (1−u_k, 1−v_k) + ∂f_j/∂v (v_k, u_k) − ∂f_j/∂v (v_k, 1−u_k) + ∂f_j/∂v (1−v_k, u_k) − ∂f_j/∂v (1−v_k, 1−u_k) ) ω_k.

Note that, due to the chain rule, half of the terms come with a negative sign. In addition, half of the entries are partial derivatives of f_j with respect to v rather than u. The partial derivatives with respect to v_k are obtained similarly. As for the partials w.r.t. ω_k, these are just the functions g_j (similar to the ordinary setting; see (17)).
The partial derivatives of the residuals involving g_j^+ or g_j^× are slightly different. For bimedial nodes, only the partial derivative w.r.t. v_k matters (and consists of four terms rather than eight). As for diagonal nodes, only the partial derivative w.r.t. u_k matters (and again consists of four terms, which are generally more complex in this setting as it involves the product rule).
Using the above approach to derive quadratures for G, while constraining the (now single) quadrature point to a diagonal, we indeed obtain the same result as in Section 3 (that is, the 2×2 Gauss-Legendre rule). In addition, note that there are merely three (rather than twenty) relevant functions in the symmetric setting; see Figure 2. The function spaces G_u, G_v, and G_uv, already addressed in Sections 4 and 5, were not considered in this symmetric context.

Table 2: An overview for the spaces GG, G_u G_u, G_u G_v, and GG_u. The second column lists the number of relevant functions in each space (thus up to symmetries and scaling), followed by the counts of nodes on bimedians, on diagonals, and in general position over □. The final column lists the total number of nodes in the quadratures.
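The count of three relevant functions for G under the full symmetry group can be sanity-checked at the index level: the symmetries of the square act on Bernstein index pairs (i, j) via i → 3 − i, j → 3 − j and (i, j) → (j, i), and the orbits of I² correspond to corner, edge and interior functions (for Gregory quads, the split interior functions map onto each other within the interior orbit). A small Python check:

```python
from itertools import product

def orbit(i, j):
    """Orbit of the index pair (i, j) under the symmetries of the
    square, acting as i -> 3 - i, j -> 3 - j and (i, j) -> (j, i)."""
    imgs = set()
    for a, b in [(i, j), (j, i)]:
        for p in (a, 3 - a):
            for q in (b, 3 - b):
                imgs.add((p, q))
    return frozenset(imgs)

orbits = {orbit(i, j) for i, j in product(range(4), repeat=2)}
# Three classes: corner, edge and interior functions.
assert len(orbits) == 3
assert sum(len(o) for o in orbits) == 16  # orbits partition all 16 pairs
```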

Relevant products | Bimedial nodes | Diagonal nodes | Ordinary nodes | Total nodes

Integrating Products and Products of Derivatives
In this section, we describe the main numerical contribution of the paper: symmetric quadrature rules for the product of G with itself (GG), of G_u = ∂G/∂u with itself (G_u G_u), the mixed case G_u G_v, as well as the product space GG_u. The first step is to combine products of functions with the notion of symmetry as described in the previous section. For two functions f_p(u, v) and f_q(u, v), we define

h_pq(u, v) = Σ f_p(σ(u, v)) f_q(σ(u, v)),

where σ runs over the eight symmetries of □. Note that h_pq(u, v) ≠ g_p(u, v) g_q(u, v), as the g_j(u, v) are sums of functions rather than factors (in which case we would have had h_pq = g_p · g_q). The functions h^+_pq(u, v) and h^×_pq(u, v) are defined analogously, but with symmetries with respect to the bimedians and the diagonals of □, respectively.
As for symmetries of products, note that to start with, there are n(n−1)/2 pairs of products that are identical, since f_p f_q = f_q f_p. In other words, focusing on our setting with n = 20, we obtain a 20 × 20 matrix of products, of which all 190 entries below (or above) the diagonal can be ignored.
In addition, following the same symmetry detection approach as described above, we can remove scaled and/or symmetric versions of functions. In the case of GG, we are left with merely 29 of the 400 products. For G_u G_u and G_u G_v, the number of relevant products turns out to be 56 and 51, respectively; see the second column of Table 2.
The system of equations has the same structure as (21), which we now express using the shorthand notation Hw = J, where J contains the exact integrals of the relevant products of functions (computed symbolically using Maple). The partial derivatives of the residuals are more involved in this setting, as every term now triggers the product rule (and those of h^×_pq even a quadruple product rule). As the resulting expressions do not have much added value, they are omitted from the paper.
With everything now in place, we resort to educated guesses for the number of bimedial, diagonal, and ordinary nodes which we denote as triples (b, d, o), and initialise them randomly.
For GG, we have found rules satisfying the same tolerance as before, i.e. ∥Hw − J∥_∞ < 1e−16, starting from (b, d, o) = (1, 1, 4). Moreover, the rule using this nodal count appears to be unique. As for the cases of G_u G_u and G_u G_v, we obtained results starting from (2, 2, 4) and (1, 1, 4), respectively. In both cases there appear to be a small number of different rules. For the mixed space GG_u we obtained several results with at least 5 ordinary nodes plus additional bimedial and diagonal ones, such as with the (2, 1, 5) node configuration. We selected the rules which appear the most uniform, i.e. which have a fair distribution of nodes with gradually changing associated weights.
The results are summarised in Table 2 and Figure 6. Individual quadratures are listed in Tables 3-6. Note that all nodes are safely away from the corners of □, thus avoiding any potential numerical instabilities when evaluating the functions near the singularities in the four corners of the domain.

Numerical Tests
We perform several preliminary numerical tests to see how the proposed quadrature rules behave on functions outside the considered linear spaces. To this end, we define three test functions F_1, F_2 and F_3, shown over the unit square domain in Figure 7. Their exact integrals and the errors induced by a selection of our rules are summarised in Table 7.
To gain insight into the behaviour of the quadrature errors under refinement of the domain, as is common in adaptive (isogeometric) methods, we proceed as follows. We recursively (and uniformly) split the original domain □ into four smaller squares, apply our quadrature rules, appropriately shifted and scaled, to the sub-squares, sum all contributions, and compute the error with respect to the exact integral at each refinement step. Figure 8 shows, on a logarithmic scale, the result of this test using F_1 and F_2 across several of our rules. The results for F_3 are similar to those for F_2 and thus not reported. As expected, the absolute error goes down with every refinement step, at least until we hit the limits of our numerical precision, after which we see no further improvement. Note that the GG rule integrates F_1, a bi-degree 7 function, numerically exactly.
For instance, in the case of G_u G_v, the error becomes approximately 4^3 = 64 times smaller with each refinement step, and for G and G_u this ratio is approximately 4^2 = 16 for both F_1 and F_2. The corresponding exact rates 4^r are marked by r in the figure, next to the dotted triangles indicating these rates.
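The observed rate for a fourth-order rule is easy to reproduce: composite application of the 2 × 2 Gauss-Legendre rule to a smooth non-polynomial integrand (here cos(u)e^v, chosen purely for illustration) yields error ratios close to 4^2 = 16 per quadrisection.

```python
from math import sqrt, sin, cos, exp, e

a, b = 0.5 - sqrt(3) / 6, 0.5 + sqrt(3) / 6
nodes = [(a, a), (a, b), (b, a), (b, b)]

def composite(f, levels):
    """Apply the shifted/scaled 2x2 Gauss rule on a 2^L x 2^L uniform
    split of the unit square and sum all contributions."""
    n = 2 ** levels
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for u, v in nodes:
                total += 0.25 * h * h * f((i + u) * h, (j + v) * h)
    return total

f = lambda u, v: cos(u) * exp(v)      # smooth, non-polynomial test
exact = sin(1.0) * (e - 1.0)          # separable exact integral
errs = [abs(composite(f, L) - exact) for L in range(2, 6)]
ratios = [errs[k] / errs[k + 1] for k in range(len(errs) - 1)]
# Each ratio is close to 4^2 = 16 (fourth-order convergence).
```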
It is worth noting that, since the rules for the spaces G and G uv are the standard 2 × 2 Gauss-Legendre rules, the known error estimates apply to them. The error analysis of our other rules when applied to functions outside our function spaces goes beyond the scope of the current paper. One could mimic the approach of [31, Theorem 5.2-3] and construct the corresponding uni- and bivariate kernels. However, the analysis in [31] assumes exactness of the rule for a space of polynomials, while our spaces are rational. Another issue is that some of our rules were computed numerically, i.e. only up to some error; since the quadrature points and weights appear in the estimates, any inaccuracy in the points/weights would also affect the estimates.

Conclusion and future work
We have explored quadrature rules for the space G spanned by the 20 basis functions of Gregory quads, as well as for the derived spaces G u and G uv . We have found symbolic quadratures for these three spaces. In the cases of G and G uv , the standard 2 × 2 Gauss-Legendre quadrature is exact and Gaussian. We have provided a 6-node quadrature for G u and conjectured that a 5-node quadrature for this space does not exist. We have also investigated quadratures for the product spaces GG, G u G u , G u G v , and GG u . Using numerical optimisation and relying heavily on symmetry, we have found numerical quadratures for these four spaces.
Our investigation is, however, not complete; it suggests several directions for future work. One concerns finding optimal quadrature rules for the product spaces: the rules we have found are likely not Gaussian. Another direction is quadratures for higher-order derivatives, which may arise in higher-order problems; the second-order derivatives of the 20 Gregory functions are square integrable, which opens up this possibility. It would also be interesting to apply these new quadrature rules to solve actual problems using Gregory patches, in the vein of [17,18,19]. Further, building on the initial tests presented in Section 8, it would be useful to investigate the errors that our quadratures introduce when applied to functions outside the spaces for which they were designed. Finally, focusing on the case of triangular Gregory patches is a natural continuation of our investigations into quadratures for Gregory patches.

Figure 1: The control net and labeling scheme of a bicubic Bézier patch (left) and a Gregory patch (right).

Figure 2: Up to symmetries, there are only three basis function shapes in G. From left to right: G 03 , G 13 , and G 12u .

Figure 3: Basis functions G 11v (top) and G 11u (bottom), their sum B 11 (left) and their difference D 11 (right), all scaled up by a factor of 4 for visualisation purposes. Note that D 11 is antisymmetric with respect to the dashed diagonal of □ (along which it vanishes).

Lemma 3.1. It holds that G = B ⊕ span(D 12 , D 11 , D 21 , D 22 ). (11)

Figure 4: From left to right: the quadrature for G, two quadratures for G u and G v combined, and five quadratures for G uv . The colours of the nodes correspond to different quadrature rules. In all cases, these are the only quadratures found for the given number of nodes. The nodes are scaled proportionally to their associated weights.

Figure 7: Our test functions. From left to right: F 1 , F 2 , and F 3 , all plotted over the unit square domain □.

Figure 8: Plots of the logarithm of the absolute value of the error/difference e i between the exact integral I i = ∫□ F i du dv of F 1 (left) and F 2 (right) over the unit square and the progressively finer approximations obtained by applying our quadrature rules over the recursively refined unit square. Due to numerical precision limitations, the error stops decreasing after a certain number of refinement steps. Each dotted triangle, marked with its associated r, indicates the slope corresponding to an error improvement rate of 4^r .

Table 1: The four quadrature nodes and weights for bicubic polynomials over the unit square.
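As a quick sanity check (a sketch, assuming □ = [0, 1]^2), the 2 × 2 Gauss-Legendre rule of Table 1, with nodes 1/2 ± 1/(2√3) in each direction and equal weights 1/4, can be verified to integrate all bicubic monomials exactly:

```python
import math

# 2 x 2 tensor-product Gauss-Legendre nodes and weights on [0, 1]^2
# (the standard rule that is exact for bicubic polynomials)
g = [0.5 - 0.5 / math.sqrt(3.0), 0.5 + 0.5 / math.sqrt(3.0)]
rule = [((u, v), 0.25) for u in g for v in g]

def quad(f):
    return sum(w * f(u, v) for (u, v), w in rule)

# check exactness on the full bicubic monomial basis u^p v^q with p, q <= 3;
# the exact integral of u^p v^q over the unit square is 1 / ((p+1)(q+1))
max_err = max(
    abs(quad(lambda u, v, p=p, q=q: u**p * v**q) - 1.0 / ((p + 1) * (q + 1)))
    for p in range(4) for q in range(4)
)
# max_err is at the level of machine precision
```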

Table 3: A 40-node quadrature for the space GG. Only the 6 dark nodes shown in Figure 6, top left, are listed; the remaining nodes are obtained by symmetry.

Table 4: A 48-node quadrature for the space G u G u . Only the 8 dark nodes shown in Figure 6, top right, are listed; the remaining nodes are obtained by symmetry.

Table 5: A 40-node quadrature for the space G u G v . Only the 6 dark nodes shown in Figure 6, bottom left, are listed; the remaining nodes are obtained by symmetry.

Table 6: A 52-node quadrature for the space GG u . Only the 8 dark nodes shown in Figure 6, bottom right, are listed; the remaining nodes are obtained by symmetry.