Optimization Algorithm for Reducing the Size of the Dixon Resultant Matrix: A Case Study on a Mechanical Application

Abstract: In the process of eliminating variables from a symbolic polynomial system, extraneous factors are the unwanted components that appear alongside the resultant in the resulting polynomial. This paper aims to reduce the number of these factors by optimizing the size of the Dixon matrix. An optimal configuration of the Dixon matrix enhances the process of computing the resultant, which is used for solving polynomial systems. To this end, an optimization algorithm is introduced, together with a number of new polynomials that replace the original ones, and a complexity analysis is carried out. Moreover, monomial multipliers are optimally positioned to multiply each of the polynomials. Finally, the efficiency of the method is evaluated through practical implementation on standard and mechanical examples.

Introduction
A key step to find the resultant and solve a polynomial system is the elimination of variables [Yang, Zeng and Zhang (2012)]. There are two major classes of formulations for eliminating the variables of a polynomial system in matrix-based resultant methods: the Sylvester resultant [Zhao and Fu (2010)] and the Bezout-Cayley resultant [Palancz (2013); Bézout (2010)]. Both of these methods aim at eliminating n variables from n + 1 polynomials by constructing resultant matrices. The algorithm adapted for this article is inspired by the Dixon method, which is of Bezout-Cayley type. The Dixon method is considered an efficient method for identifying a polynomial that contains the resultant of a polynomial system; in some literature this polynomial is known as the projection operator [Chtcherba (2003)]. The Dixon method produces a dense resultant matrix, in the sense that its rows and columns contain few zero entries. In addition, the Dixon method produces a small resultant matrix of lower dimension. The uniformity of the Dixon method, that is, computing the projection operator without singling out a particular set of variables, is a considerable property. Besides, the method is automatic, and therefore eliminates all the variables at once [Chtcherba (2003); Kapur, Saxena and Yang (1994)]. The majority of multivariate resultant methods perform some multiplications for the resultant [Faugère, Gianni, Lazard et al. (1992); Feng, Qin, Zhang et al. (2011)]. These multiplications do not deliver any insight into the solutions of the polynomial system [Saxena (1997)]. In fact, they only produce a multiplicative product containing the resultant together with a number of extraneous factors. These extraneous factors are not desirable and cause problems in a number of critical cases [Chtcherba (2003); Saxena (1997)].
It is worth mentioning that the Dixon method suffers severely from this drawback [Chtcherba and Kapur (2003); Chtcherba and Kapur (2004)]. However, a Dixon matrix deals well with conversions of the exponents of the polynomials in a system [Lewis (2010)]. With this property, the Dixon method is highly capable of controlling the size of the matrix. Utilizing this property, this research aims at optimizing the size of the Dixon matrix in order to simplify the solving process and gain accuracy. In this regard, similar work on optimizing the Dixon resultant formulation has been reported in the literature, e.g., [Chtcherba and Kapur (2004); Saxena (1997)]. This paper presents a method for finding an optimally designed Dixon matrix that yields a projection operator of smaller degree, using the dependency of the size of the Dixon matrix on the supports of the polynomials in the system, and the dependency of the total degree of the projection operator on the size of the Dixon matrix. Having investigated these relations, some virtual polynomials are presented to replace the original polynomials in the system in order to suppress the effects of the supports of the polynomials on each other. Applying this replacement, the support hulls of the polynomials can be moved independently to find the position giving the smallest Dixon matrix. These virtual polynomials are generated by considering the widest space needed for the support hulls to move freely without entering negative coordinates. In order to find the best position of the support hulls relative to each other, monomial multipliers are created to multiply the polynomials of the system, while the original polynomial system is considered in the configuration where the supports of the monomial multipliers are located at the origin. Starting from the origin and examining all neighboring points as the support of the monomial multiplier of each polynomial in search of a smaller Dixon matrix, the steps of the optimization algorithm are constructed.
This procedure is carried out iteratively to find a set of monomial multipliers that, applied to the polynomials of the system, optimizes the Dixon matrix. This leads to fewer extraneous factors in the decomposed form. Further, a number of sample problems are solved using the proposed method and the results are compared with conventional methods. The paper is organized as follows. In Section 2, we describe the construction of the Dixon matrix in detail, and present an algorithm for optimizing the size of the Dixon matrix, implemented and tested on several examples, together with a complexity analysis of the optimization algorithm. An advantage of the presented algorithm is that it derives the conditions under which the support hulls of the polynomials in a polynomial system do not affect each other during the optimization. Comparisons with the relevant optimization heuristic of Chtcherba show the superiority of the new algorithm in terms of accuracy. Finally, in Section 3, discussion and concluding remarks are given.

Methodology
In this section, the optimization algorithm for the Dixon matrix and the related formulation procedure are illustrated via a flowchart, together with the relevant definitions and theorems.

Degree of the projection operator and the size of the Dixon matrix
Consider a multivariate polynomial f ∈ ℤ[c, x]. The finite set 𝒜 ⊂ ℕ^d of exponents is referred to as the support of f. Further, the polynomial system ℱ = {f_0, f_1, …, f_d}, with support set 𝒜 = 〈𝒜_0, 𝒜_1, ⋯, 𝒜_d〉, is called unmixed if 𝒜_0 = 𝒜_1 = ⋯ = 𝒜_d, and mixed otherwise. In the Dixon formulation, the computed projection operator does not adapt efficiently to mixed systems; therefore, for mixed systems, the Dixon resultant formulation is almost guaranteed to produce extraneous factors. This is a direct consequence of the exact Dixon conditions theorem, stated by Saxena [Saxena (1997)] as follows:
Theorem 1. In generic d-degree cases, the determinant of the Dixon matrix is exactly its resultant (i.e., it does not have any extraneous factors).
In general, however, most polynomial systems are non-generic or not d-degree. Considering the simplex form of the Dixon polynomial [Chtcherba and Kapur (2004)], every entry of the Dixon matrix Θ is a polynomial in the coefficients of the system ℱ whose degree in the coefficients of any single polynomial is at most 1. Hence the computed projection operator has total degree at most |∆| in the coefficients of any single polynomial, where |∆| is the number of columns of the Dixon matrix. A similar argument applies to |∆̄| (the number of rows of the Dixon matrix) when the transpose of the Dixon matrix is considered. Therefore max{|∆|, |∆̄|} is an upper bound on the degree, in the coefficients of any single polynomial, of the projection operator produced by the Dixon formulation, where ∆ and ∆̄ are the supports of the x and x̄ variables (the new variables introduced in the Dixon construction) in the Dixon polynomial, respectively. Consequently, minimizing the size of the Dixon matrix minimizes the degree of the projection operator and decreases the number of extraneous factors that accompany the resultant.
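The bound above can be inspected concretely. The following is a minimal sketch using SymPy's `DixonResultant` (an assumption: a SymPy version shipping `sympy.polys.multivariate_resultants` is available); it builds the Dixon matrix for a small system of d + 1 = 3 polynomials in d = 2 variables and reads off max{rows, cols}, the upper bound on the degree of the projection operator in the coefficients of any single polynomial.

```python
from sympy import symbols
from sympy.polys.multivariate_resultants import DixonResultant

x, y = symbols('x y')

# Three polynomials in two variables (d = 2 variables, d + 1 polynomials).
p = x + y
q = x**2 + y**3
h = x**2 + y

dixon = DixonResultant(polynomials=[p, q, h], variables=[x, y])
delta = dixon.get_dixon_polynomial()          # Dixon polynomial in x, y and the new variables
M = dixon.get_dixon_matrix(polynomial=delta)  # the Dixon matrix

rows, cols = M.shape
bound = max(rows, cols)   # upper bound on the projection-operator degree
print(rows, cols, bound)
```

Shrinking the matrix therefore directly shrinks this bound, which is the motivation for the optimization developed below.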

Supports converting and its effects on Dixon matrix
The size of the Dixon matrix is not invariant with respect to the relative position of the support hulls (the smallest convex set containing the points of a support in an affine space) of the polynomials [Lewis (2010)]. In other words, the Dixon resultant matrix for a polynomial system ℱ with the support set 𝒜 + c = 〈𝒜_0 + c_0, …, 𝒜_d + c_d〉 is not the same as that of a system with support set 𝒜 = 〈𝒜_0, 𝒜_1, …, 𝒜_d〉, where c = 〈c_0, c_1, …, c_d〉 and c_i = (c_{i,1}, …, c_{i,d}) ∈ ℕ^d for i = 0, 1, …, d. We call 𝒜_i + c_i the converted support of f_i, and 𝒜 + c = 〈𝒜_0 + c_0, …, 𝒜_d + c_d〉 the converted support set of the polynomial system ℱ. However, by Theorem 2 [Chtcherba (2003)], if a conversion is performed uniformly on all supports of the polynomials in a system, the size of the Dixon matrix does not change.
Theorem 2. Consider 𝒜 + c as the support set of a generic polynomial system, where c_i = c_j = t = (t_1, t_2, …, t_d) for all i, j = 0, 1, …, d. Then
∆_{𝒜+t} = σ(t) + ∆_𝒜  and  ∆̄_{𝒜+t} = σ̄(t) + ∆̄_𝒜,
for translation vectors σ(t), σ̄(t) determined by t, where "+" is the Minkowski sum [Chtcherba (2003)], and ∆, ∆̄ are the supports of the Dixon polynomial in the original variables and the new variables, respectively. In particular, |∆_{𝒜+t}| = |∆_𝒜| and |∆̄_{𝒜+t}| = |∆̄_𝒜|, so the size of the Dixon matrix is unchanged.
Consider a polynomial system ℱ and assume f_i = h f′_i for some polynomials h and f′_i. Clearly,
Res_V(f_0, …, f_i, …, f_d) = Res_V(f_0, …, h, …, f_d) · Res_V(f_0, …, f′_i, …, f_d),   (2)
where Res_V denotes the resultant of ℱ over the variety V. In particular, if h is a monomial, the points satisfying the factor Res_V(f_0, …, f_{i−1}, h, f_{i+1}, …, f_d) lie on the coordinate axes, and the degree of this factor is at most the maximum total degree of f_i. Excluding solutions on the axes, we have
Res_V(f_0, …, x^{c_i} f_i, …, f_d) = Res_V(f_0, …, f_i, …, f_d).   (3)
Since we consider d + 1 polynomials in d variables and the resultant computed using the Dixon formulation has property (3), a polynomial in a system can be multiplied by a monomial without changing the resultant [Chtcherba and Kapur (2004)]. A direct consequence of this observation, together with the simplex form of the Dixon formulation [Chtcherba and Kapur (2004)], is the sensitivity of the size of the matrix to the exponents of the monomial multipliers applied to the original polynomial system.
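This sensitivity is easy to observe experimentally. The sketch below (again assuming SymPy's `DixonResultant`) compares the Dixon matrix size of a small system before and after one polynomial is multiplied by a monomial; by property (3) the resultant is unchanged, but the matrix dimensions generally are not.

```python
from sympy import symbols
from sympy.polys.multivariate_resultants import DixonResultant

x, y = symbols('x y')
p, q, h = x + y, x**2 + y**3, x**2 + y

def dixon_size(system):
    """Return (rows, cols) of the Dixon matrix of a system in x, y."""
    d = DixonResultant(polynomials=system, variables=[x, y])
    return d.get_dixon_matrix(polynomial=d.get_dixon_polynomial()).shape

original = dixon_size([p, q, h])
shifted = dixon_size([p, x*y*q, h])   # multiply f_1 by the monomial x*y

print(original, shifted)
```

Because only one support hull was translated (a non-uniform conversion), Theorem 2 does not apply, and the two sizes may differ.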

Optimizing method for Dixon matrix
Given that |∆_𝒜| ≠ |∆_{𝒜+c}| and/or |∆̄_𝒜| ≠ |∆̄_{𝒜+c}| unless c_i = c_j for all i, j = 0, 1, …, d (Sections 2.1, 2.2 and [Saxena (1997)]), it can be said that the size of the Dixon matrix depends on the positions of the support hulls of the polynomials relative to each other. Besides, there is a direct relationship between the area overlapped by the convex hulls of the polynomials and the size of the Dixon matrix [Saxena (1997)]. Taking advantage of these properties, this paper presents an optimization algorithm with a conversion set c = 〈c_0, c_1, …, c_d〉, where c_i = (c_{i,1}, …, c_{i,d}) ∈ ℕ^d for i = 0, 1, …, d, used to convert the support of a polynomial system 𝒜 = 〈𝒜_0, 𝒜_1, …, 𝒜_d〉 so as to make |∆| and |∆̄| smaller. The converted support set amounts to shifting the support hulls of the polynomials in search of a smaller Dixon matrix. The minimization method is sequential and begins with an initial guess for c_0.
The choice of c_0 should be such that the other c_i can be chosen without entering negative coordinates. The search for c_1 then begins from the origin and proceeds by trial and error. Once the support of the multiplier for f_1 is fixed, c_2 is sought, then c_3, and so on, until all the c_i are found.
A disadvantage of the previous minimization method [Chtcherba (2003)] is that it does not consider the effect of the support hulls on each other. In fact, the interactions of the support hulls sometimes lead the algorithm in the wrong optimization direction. In other words, given a large relative distance between the convex hulls of the supports, the optimization algorithm fails and ends up with incorrect results, as noted at the end of the solved examples in this section.
To suppress the effect of the support hulls on each other while running the new algorithm presented in this paper, some polynomials are replaced by new polynomials, called virtual polynomials. The rules for selecting and using virtual polynomials are presented in detail in this section. In the presented optimization approach, the movement of the support hulls of the polynomials in the coordinate system is divided into four phases:
Phase 1: choosing c_0;
Phase 2: shifting the support hull of f_0 by multiplying f_0 by x^{c_0};
Phase 3: introducing the virtual polynomials;
Phase 4: converting the other supports.
Phases 1, 2: The space needed for executing the optimization algorithm has dimensions S = (s_1, s_2, …, s_d). Here, γ = (γ_1, …, γ_d) denotes a d-dimensional integer exponent vector. The choice of S depends strongly on the choice of c_0: a bigger c_{0,i} makes s_i bigger and vice versa. Since the complexity of the algorithm also depends on the choice of c_0, the following definition of c_0 = (c_{0,1}, c_{0,2}, …, c_{0,d}) is adopted to control the time complexity, although one may choose larger entries:
c_{0,i} = max_{γ ∈ ⋃_{j=1,…,d} 𝒜_j} γ_i,  for i = 1, 2, …, d.   (5)
This choice of c_0 guarantees that the other support hulls can approach 𝒜_0 + c_0 without leaving the positive coordinates. Hence c_{0,i} should be chosen as in the above equation, or larger.
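The formula for c_0 can be sketched in a few lines of Python (the helper name `choose_c0` is hypothetical): each component c_{0,i} is the largest i-th exponent appearing in any support of f_1, …, f_d.

```python
def choose_c0(supports):
    """c_0 per formula (5): supports is a list of sets of exponent tuples,
    one set per polynomial f_1, ..., f_d (f_0's support is excluded).
    Each component is the maximum i-th exponent over all listed supports."""
    d = len(next(iter(supports[0])))                    # number of variables
    points = [pt for sup in supports for pt in sup]     # union of the supports
    return tuple(max(pt[i] for pt in points) for i in range(d))

# Example: supports of two bivariate polynomials f_1, f_2.
supports = [
    {(1, 0), (0, 7), (2, 9), (3, 9)},
    {(0, 0), (2, 5), (15, 4), (11, 15)},
]
print(choose_c0(supports))  # -> (15, 15)
```

With this c_0, any shifted hull 𝒜_i + c_i that moves toward 𝒜_0 + c_0 stays in non-negative coordinates.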
Phase 3: While searching for c_p, the best monomial multiplier for f_p (p = 1, …, d − 1), the virtual polynomials v_{p+1}, v_{p+2}, …, v_d are introduced, of total degrees s, s + 1, …, s + (d − p − 1) respectively, where s = Σ_{i=1}^{d} s_i. These virtual polynomials carry no symbolic coefficients, and each should have a multifaceted, vertical-corner support hull. For example, in the two-dimensional case (x = (x_1, x_2)) with S = (s_1, s_2), the following polynomial is taken as the virtual polynomial replacing f_2 while the algorithm searches for the optimal c_1:
v_2 = 1 + x_1^{s_1} + x_2^{s_2} + x_1^{s_1} x_2^{s_2}.
Having investigated the above relations, virtual polynomials are thus found to replace the original polynomials and suppress the effects of the polynomials' supports on each other.
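The two-dimensional virtual polynomial above is the product (1 + x_1^{s_1})(1 + x_2^{s_2}), whose support is exactly the four corners of the box [0, s_1] × [0, s_2]. A small sketch in SymPy (the helper name is hypothetical, and the corner-product construction is one way to realize the support hull described above):

```python
from sympy import Integer, expand, symbols

x1, x2 = symbols('x1 x2')

def virtual_polynomial(variables, S):
    """Return prod_i (1 + x_i**s_i): its support is the corner set of the
    box [0, s_1] x ... x [0, s_d], with all coefficients equal to 1."""
    expr = Integer(1)
    for v, s in zip(variables, S):
        expr *= 1 + v**s
    return expand(expr)

v2 = virtual_polynomial((x1, x2), (3, 4))
print(v2)  # the four terms 1, x1**3, x2**4, x1**3*x2**4
```

Because the coefficients are the constant 1 rather than symbols, replacing f_2 by v_2 during the search for c_1 keeps the computation light while spanning the whole working box S.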
Applying this replacement, the support hulls of the polynomials can be moved independently to find the position giving the smallest Dixon matrix. These virtual polynomials are generated by considering the widest space needed for the support hulls to move freely without entering negative coordinates or affecting each other. Phase 4: To find the optimal positions of the support hulls relative to each other, monomial multipliers are created to multiply the polynomials of the system, while the original polynomial system is considered in the configuration where the supports of the monomial multipliers are located at the origin. Starting from the origin and examining all neighboring points as the support of the monomial multiplier of each polynomial in search of a smaller Dixon matrix, the steps of the optimization algorithm are constructed. This procedure is carried out iteratively to find a set of monomial multipliers that optimizes the Dixon formulation. The algorithm is presented by the flowchart shown in Fig. 1. Therefore, in each phase of examining all neighboring points and keeping the smallest Dixon matrix, the complexity is O(kd^3 n^d).
To obtain the complete answer, the optimization is repeated d times, once for each polynomial (see the step checking "if i = d?" in Fig. 1). The total complexity of the presented algorithm is therefore O(kd^4 n^d).
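The greedy neighbor search of Phase 4 can be sketched as follows. This is a simplified, runnable reconstruction (not the authors' implementation): starting from c_i = (0, 0), it tries each neighboring exponent point for the monomial multiplier of f_i, accepts a move whenever it shrinks max(rows, cols) of the Dixon matrix, and stops when no neighbor improves. The virtual-polynomial replacement step is omitted for brevity.

```python
from sympy import symbols
from sympy.polys.multivariate_resultants import DixonResultant

x, y = symbols('x y')

def dixon_size(system):
    """max(rows, cols) of the Dixon matrix of a bivariate system in x, y."""
    d = DixonResultant(polynomials=system, variables=[x, y])
    m = d.get_dixon_matrix(polynomial=d.get_dixon_polynomial())
    return max(m.shape)

def greedy_multiplier(system, i, neighbours=((1, 0), (0, 1), (1, 1))):
    """Greedily choose the exponent c_i of a monomial multiplier for f_i."""
    c, best = (0, 0), dixon_size(system)
    improved = True
    while improved:
        improved = False
        for step in neighbours:               # examine neighbouring points
            cand = (c[0] + step[0], c[1] + step[1])
            trial = list(system)
            trial[i] = x**cand[0] * y**cand[1] * system[i]
            size = dixon_size(trial)
            if size < best:                   # keep the smallest Dixon matrix
                c, best, improved = cand, size, True
    return c, best

system = [x + y, x**2 + y**3, x**2 + y]
print(greedy_multiplier(system, 1))
```

Each outer pass costs a handful of Dixon-matrix constructions, which is the per-phase O(kd^3 n^d) term in the complexity analysis; repeating over all d polynomials gives the quoted O(kd^4 n^d) total.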
The size of the Dixon matrix is 765 × 675. The above polynomial system can be considered with c_1 = (0, 0). The best monomial multiplier for f_1, found by trial and error, is x^9 y^6. Now that c_0 and c_1 are fixed, f_2 can return to its original place, and we have the following system, ready for the search for c_2:
f_0 = a_{01} x^8 y^9 + a_{02} x^10 y^9 + a_{03} x^11 y^15 + a_{04} x^15 y^15,
f_1 = a_{11} x^10 y^6 + a_{12} x^9 y^13 + a_{13} x^11 y^15 + a_{14} x^12 y^15,
f_2 = a_{21} + a_{22} x^2 y^5 + a_{23} x^8 y^4.
The size of the Dixon matrix is 201 × 216. Using the same method as for c_1, the best monomial multiplier for f_2 is found to be x^8 y^10, giving a Dixon matrix of size 82 × 82. The optimal resulting polynomial system is
f_0 = a_{01} x^8 y^9 + a_{02} x^10 y^9 + a_{03} x^11 y^15 + a_{04} x^15 y^15,
f_1 = a_{11} x^10 y^6 + a_{12} x^9 y^13 + a_{13} x^11 y^15 + a_{14} x^12 y^15,
f_2 = a_{21} x^8 y^10 + a_{22} x^10 y^15 + a_{23} x^16 y^14,
with optimized support hulls as shown in Fig. 3. The steps of the trial-and-error search for the optimal c_1 and c_2 are summarized in the following tables.
Comparing the size of the Dixon matrix after executing the algorithm with its size at the beginning of Example 1, the advantage of the new optimization method is evident.
Optimizing the size of the Dixon matrix using Chtcherba's heuristic [Chtcherba (2003)] fails badly here: the Dixon matrix never becomes smaller than 369 × 306.
Example 2. Here the strophoid is considered. The strophoid is a curve widely studied by mathematicians over the past two centuries. It can be written in parametric form as follows.
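For reference, a common parametrization of the right strophoid with scale parameter a is the following (a standard form; the specific parametrization used in Example 2 may differ):

```latex
x(t) = a\,\frac{t^{2}-1}{t^{2}+1}, \qquad
y(t) = a\,t\,\frac{t^{2}-1}{t^{2}+1}.
```

Clearing the denominators t^2 + 1 turns these rational equations into a polynomial system in x, y, t suitable for elimination of t.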
The process is summarized in Tab. 2.
Comparing the result presented in Tab. 4 with the Dixon matrix reported in [Chtcherba (2003)], whose size is 6 × 5, we could further minimize the size of the Dixon matrix.
Example 3. The Stewart platform problem is a standard benchmark elimination problem for the mechanical motion of certain types of robots. The quaternion formulation presented here is due to Emiris [Emiris (1994)]. It contains 7 polynomials in 7 variables. Let x = [x_0, x_1, x_2, x_3] and q = [1, q_1, q_2, q_3] be two unknown quaternions, to be determined, and let q* = [1, −q_1, −q_2, −q_3]. Let a_i and b_i for i = 2, …, 6 be known quaternions and let α_i for i = 1, …, 6 be six predetermined scalars. The 7 polynomials are:
f_0 = x^T x − α_1 q^T q,
f_i = b_i^T (xq) − a_i^T (qx) − (q b_i q*)^T a_i − α_i q^T q,  i = 1, …, 5,
f_6 = x^T q*.
Out of the 7 variables x_0, x_1, x_2, x_3, q_1, q_2, q_3, any six are to be eliminated to compute the resultant as a polynomial in the seventh. Saxena [Saxena (1997)] successfully computed the Dixon matrix of the Stewart problem by eliminating the 6 variables x_0, x_1, x_2, x_3, q_2, q_3; the size of his Dixon matrix is 56 × 56. To optimize Saxena's matrix using the presented method, we compute c_0 according to the variable vector (x_0, x_1, x_2, x_3, q_2, q_3) to avoid negative coordinates. Using formula (5), c_0 = (2, 2, 2, 2, 2, 2), and the corresponding space is S = (6, 6, 6, 6, 6, 6), which allows us to construct the appropriate virtual polynomials v_2, …, v_6 as detailed in the algorithm formulation. We can now search for the optimal c_1 giving the best monomial multiplier for f_1. Because the process of finding the best optimization direction for the 6 variables is long, it is summarized in Tab. 5.
Table 5: Steps for finding c_1 using the presented method, with c_0 = (2, 2, 2, 2, 2, 2).
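The quaternion polynomials above can be built symbolically. The sketch below (helper names `qmul` and `dot` are hypothetical, and only f_0 and f_6 are constructed; the five f_i with known quaternions a_i, b_i follow the same pattern via `qmul`) shows the construction in SymPy.

```python
from sympy import expand, symbols

def qmul(p, q):
    """Hamilton quaternion product of [scalar, i, j, k] 4-tuples."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def dot(p, q):
    """Euclidean inner product p^T q of two 4-tuples."""
    return sum(a*b for a, b in zip(p, q))

x0, x1, x2, x3, q1, q2, q3, alpha1 = symbols('x0 x1 x2 x3 q1 q2 q3 alpha1')
X = (x0, x1, x2, x3)
Q = (1, q1, q2, q3)
Qstar = (1, -q1, -q2, -q3)

f0 = expand(dot(X, X) - alpha1*dot(Q, Q))   # f_0 = x^T x - alpha_1 q^T q
f6 = expand(dot(X, Qstar))                  # f_6 = x^T q*
print(f0)
print(f6)
```

Products such as xq and qx inside the f_i are quaternion products, i.e. `qmul(X, Q)` and `qmul(Q, X)`, which do not commute.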

Discussion and conclusion
Considering the simplex form of the Dixon polynomial, the maximum of the numbers of rows and columns of the Dixon matrix was presented as an upper bound on the degree of the projection operator in the coefficients of any single polynomial. In addition, since we were working in affine space, each polynomial in a system could be multiplied by a monomial without changing the resultant. Moreover, another useful property of the Dixon matrix construction revealed here is the sensitivity of the size of the Dixon matrix to the support hull set of the given polynomial system. Therefore, multiplying the polynomials of the original system by suitable monomials changed the size of the Dixon matrix yet had no effect on the resultant; it only changed the total degree of the projection operator, of which the resultant is a factor. Given this property, the size of the Dixon matrix could be optimized by properly selecting the multipliers for the polynomials in the system. Using this property, this paper sought to optimize the size of the Dixon matrix for the purpose of enhancing the efficiency of the solving process and obtaining better results. By considering the properties of the Dixon formulation, it was concluded that the size of the Dixon matrix depends on the positions of the support hulls of the polynomials relative to each other. Furthermore, in order to suppress the effects of the support hulls of the polynomials on each other, virtual polynomials were introduced to replace the original polynomials in the system. Applying this replacement, the support hulls of the polynomials could be moved independently to find the position giving the smallest Dixon matrix. These virtual polynomials were generated by considering the widest space needed for the support hulls to move freely without entering negative coordinates.
To identify the optimal positions of the support hulls relative to each other, monomial multipliers were created to multiply the polynomials of the system, with the original polynomial system considered in the configuration where the supports of the monomial multipliers were located at the origin. A complexity analysis was performed for the corresponding algorithm, namely the minimization method for the resultant matrix of the polynomial system ℱ = {f_0, f_1, …, f_d}, by characterizing the search area along with the time cost of obtaining a better Dixon matrix in every phase. To verify the results, the presented algorithm for optimizing the Dixon matrix of general polynomial systems was implemented, and its applicability was demonstrated in Example 1 (2 dimensions), Example 2 (3 dimensions) and Example 3 (6 dimensions). The results of the method for minimizing the size of the Dixon resultant matrix were presented in tables that reveal the advantages and practicality of the new method. Even when the support hulls were optimally positioned at the outset, the algorithm worked properly.