Matrix rank and inertia formulas in the analysis of general linear models

Abstract Matrix mathematics provides a powerful tool set for addressing statistical problems. In particular, the theory of matrix ranks and inertias has been developed into an effective methodology for simplifying complicated matrix expressions and for establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for the ranks and inertias of covariance matrices of predictors and estimators of parameter spaces in general linear models (GLMs), and how to use these formulas in the statistical analysis of GLMs. We first derive analytical expressions for the best linear unbiased predictors/best linear unbiased estimators (BLUPs/BLUEs) of all unknown parameters in the model by solving a constrained quadratic matrix-valued optimization problem, and present some well-known results on ordinary least-squares predictors/ordinary least-squares estimators (OLSPs/OLSEs). We then establish fundamental rank and inertia formulas for covariance matrices related to BLUPs/BLUEs and OLSPs/OLSEs, and use them to characterize a variety of equalities and inequalities for these covariance matrices. As applications, we use the equalities and inequalities to compare the covariance matrices of BLUPs/BLUEs with those of OLSPs/OLSEs. The work on the formulations of BLUPs/BLUEs and OLSPs/OLSEs and their covariance matrices under GLMs provides direct access, as a standard example, to a very simple algebraic treatment of predictors and estimators in linear regression analysis, which yields a deep insight into the linear nature of GLMs and an efficient way of summarizing the results.


Introduction
Throughout this paper, the symbol R^{m×n} stands for the collection of all m × n real matrices. The symbols A′, r(A), and R(A) stand for the transpose, the rank, and the range (column space) of a matrix A ∈ R^{m×n}, respectively; I_m denotes the identity matrix of order m. The Moore–Penrose generalized inverse of A, denoted by A⁺, is defined to be the unique matrix G satisfying the four matrix equations AGA = A, GAG = G, (AG)′ = AG, and (GA)′ = GA. Further, let P_A, E_A, and F_A stand for the three orthogonal projectors (symmetric idempotent matrices) P_A = AA⁺, E_A = A^⊥ = I_m − AA⁺, and F_A = I_n − A⁺A. Details on the orthogonal projectors P_A, E_A, and F_A and their applications in linear statistical models can be found in [1–3]. The symbols i₊(A) and i₋(A) for A = A′ ∈ R^{m×m}, called the positive inertia and negative inertia of A, denote the numbers of positive and negative eigenvalues of A counted with multiplicities, respectively. For brevity, we use i±(A) to denote both numbers. A ≻ 0, A ≽ 0, A ≺ 0, and A ≼ 0 mean that A is a symmetric positive definite, positive semi-definite, negative definite, or negative semi-definite matrix, respectively. Two symmetric matrices A and B of the same size are said to satisfy the inequalities A ≻ B, A ≽ B, A ≺ B, and A ≼ B in the Löwner partial ordering if A − B is positive definite, positive semi-definite, negative definite, or negative semi-definite, respectively. It is well known that the Löwner partial ordering is a surprisingly strong and useful relation between two symmetric matrices. For more results on the connections between inertias and the Löwner partial ordering of real symmetric (complex Hermitian) matrices, as well as applications of inertias and the Löwner partial ordering in statistics, see, e.g., [2, 4–9].
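The notation above can be illustrated numerically. The following sketch (using NumPy; the matrices are arbitrary illustrative examples, not taken from the paper) verifies the four Penrose equations for A⁺, forms the projectors P_A, E_A, F_A, and counts the inertia of a small symmetric matrix.

```python
import numpy as np

# An arbitrary 3 x 2 example matrix of rank 2
A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])

Ap = np.linalg.pinv(A)          # Moore-Penrose inverse A^+
# The four Penrose equations AGA = A, GAG = G, (AG)' = AG, (GA)' = GA
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)

# Orthogonal projectors P_A = AA^+, E_A = I_m - AA^+, F_A = I_n - A^+A
P_A = A @ Ap
E_A = np.eye(3) - P_A
F_A = np.eye(2) - Ap @ A
assert np.allclose(P_A @ P_A, P_A)        # idempotent
assert np.allclose(P_A, P_A.T)            # symmetric

# Inertia i_+(S), i_-(S): numbers of positive/negative eigenvalues
S = np.array([[2.0, 0.0], [0.0, -3.0]])
eig = np.linalg.eigvalsh(S)
i_plus = int(np.sum(eig > 1e-10))
i_minus = int(np.sum(eig < -1e-10))
```

Here eigenvalue counting with a small tolerance stands in for the exact integer-valued inertia used throughout the paper.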
Recall that linear models were the first type of regression model to be studied extensively in statistical inference; they have had a profound impact on the field of statistics and its applications and are regarded without doubt as a kernel part of current statistical theory. A typical linear model is defined by

y = Xβ + ε,  E(ε) = 0,  D(ε) = σ²Σ,  (1)

where y ∈ R^{n×1} is a vector of observable response variables, X ∈ R^{n×p} is a known model matrix of arbitrary rank, β ∈ R^{p×1} is a vector of fixed but unknown parameters, E(ε) and D(ε) denote the expectation vector and the dispersion matrix of the random error vector ε ∈ R^{n×1}, Σ ∈ R^{n×n} is a known positive semi-definite matrix of arbitrary rank, and σ² is an unknown positive number.
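Model (1) permits Σ to be singular. A minimal simulation sketch (assumed dimensions and values are illustrative only) generates data from (1) with a rank-deficient dispersion matrix, the general setting treated in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.standard_normal((n, p))          # known model matrix
beta = np.array([1.0, -2.0])             # fixed but unknown in practice
B = rng.standard_normal((n, 3))
Sigma = B @ B.T                          # PSD with rank <= 3 < n (singular)
sigma2 = 0.5

# Random error with E(eps) = 0 and D(eps) = sigma^2 * Sigma
eps = np.sqrt(sigma2) * (B @ rng.standard_normal(3))
y = X @ beta + eps                       # observed response vector
```

Because Σ is singular here, y is constrained to lie in R[X, Σ] with probability 1, the consistency condition invoked in Section 3.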
Once a general linear model (GLM) is formulated, the first and most important task is to estimate or predict the unknown parameters in the model by using various mathematical and statistical tools. As a foundation of current regression theory, there is already a substantial body of results on estimators and predictors of parameter spaces in GLMs. Even so, new and valuable results on the statistical inference of GLMs continue to be obtained. Estimation of β and prediction of ε in (1) are major concerns in the statistical inference of (1), and it is always desirable, as claimed in [10,11], to identify estimators and predictors of all unknown parameters in GLMs simultaneously. As formulated in [10,11], a general vector of linear parametric functions involving the two unknown parameter vectors β and ε in (1) is given by

φ = Kβ + Jε,  (2)

where K ∈ R^{k×p} and J ∈ R^{k×n} are given matrices of arbitrary ranks. Eq. (2) includes all the vector operations in (1) as special cases. For instance, (i) if K = X and J = I_n, (2) becomes φ = Xβ + ε = y, the observed response vector; (ii) if J = 0, (2) becomes φ = Kβ, a general vector of linear parametric functions; (iii) if K = X and J = 0, (2) becomes φ = Xβ, the mean vector; (iv) if K = 0 and J = I_n, (2) becomes φ = ε, the random error vector.
Theoretical and applied research seeks to develop various possible predictors/estimators of (2) and its special cases. In these approaches, the unbiasedness of a predictor/estimator with respect to the parameter spaces in (1) is an important property. In the statistical inference of a GLM, there are usually many unbiased predictors/estimators of the same parameter space. In that situation it is natural to look for an unbiased predictor/estimator that has the smallest dispersion matrix among all unbiased predictors/estimators. Thus, unbiasedness and minimality of the dispersion matrix are the most intrinsic requirements in the statistical analysis of GLMs. Based on these requirements, we introduce the following classic concepts of predictability, estimability, and BLUPs/BLUEs of φ in (2) and its special cases, which originate from [12]. If a linear statistic Ly is unbiased for φ and the dispersion matrix minimality condition (3) holds in the Löwner partial ordering, then Ly is defined to be the best linear unbiased predictor (BLUP) of φ under (1), and is denoted by Ly = BLUP(φ) = BLUP(Kβ + Jε). If J = 0 in (2), or K = 0 in (2), then the Ly satisfying (3) is called the best linear unbiased estimator (BLUE) of Kβ and the BLUP of Jε under (1), respectively, and is denoted accordingly. Definition 1.3. Let φ be as given in (2). The ordinary least-squares estimator (OLSE) of the unknown parameter vector β in (1) is defined to be a minimizer of the norm in (6), while the OLSE of Kβ under (1) is defined to be OLSE(Kβ) = K·OLSE(β); the ordinary least-squares predictor (OLSP) of the random error vector ε in (1) is defined to be OLSP(ε) = y − OLSE(Xβ), while the OLSP of Jε under (1) is defined to be OLSP(Jε) = J·OLSP(ε). The OLSP of φ under (1) is defined to be OLSP(φ) = OLSE(Kβ) + OLSP(Jε). The above definitions enable us to deal with various prediction and estimation problems under the most general assumptions on GLMs.
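Definition 1.3 is purely algebraic and can be sketched directly in code. The following example (a sketch; the choice K = I_p, J = 0, i.e. φ = β, is an illustrative assumption, and X⁺y is one representative OLSE of β) builds OLSE(β), OLSP(ε), and OLSP(φ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Illustrative choice of phi = K beta + J eps with K = I_p, J = 0
K = np.eye(p)
J = np.zeros((p, n))

Xp = np.linalg.pinv(X)
P_X = X @ Xp
beta_olse = Xp @ y               # a representative OLSE(beta)
eps_olsp = y - P_X @ y           # OLSP(eps) = y - OLSE(X beta)
phi_olsp = K @ beta_olse + J @ eps_olsp   # OLSP(phi)

# Fitted mean plus predicted error recovers y exactly
assert np.allclose(X @ beta_olse + eps_olsp, y)
```

The identity checked in the last line is simply y = P_X y + (I_n − P_X)y, the decomposition underlying the OLSE/OLSP definitions.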
The purpose of this paper is to investigate the performance of BLUP(φ) and OLSP(φ) and their special cases, under the assumption that φ is predictable under (1), by using the matrix rank/inertia methodology. Our work in this paper includes: (I) establishing linear matrix equations and exact analytical expressions for the BLUPs and the OLSPs of (2); (II) characterizing algebraic and statistical properties of the BLUPs and the OLSPs of (2); (III) establishing formulas for calculating the ranks and inertias of differences between the dispersion matrices of these predictors/estimators and a symmetric matrix A, such as the dispersion matrices of other predictors and estimators; (IV) establishing necessary and sufficient conditions for the corresponding matrix equalities and inequalities to hold in the Löwner partial ordering; (V) establishing equalities and inequalities between the BLUPs and OLSPs of φ and between their dispersion matrices.
As defined in (1), we do not attach any restrictions to the ranks of the given matrices in the model, so as to obtain general results on this group of problems. Regression analysis is an important statistical method that investigates the relationship between a response variable and a set of explanatory variables. Linear regression models were the first type of model to be studied rigorously in regression analysis, and they are regarded without doubt as a central part of current statistical theory. As demonstrated in most statistical textbooks, the common estimators of the unknown parameters in a linear regression model, such as the well-known OLSEs and BLUEs, are usually formulated through algebraic operations on the observed response vector, the given model matrix, and the covariance matrix of the error term. Hence, the standard inference theory of linear regression models can be established, without tedious or ambiguous assumptions, from the exact algebraic expressions of the estimators, which is easily acceptable from both mathematical and statistical points of view. In fact, linear regression models are the only type of statistical model with complete and solid support from linear algebra and matrix theory, since almost all results on linear regression models can be formulated in terms of matrix expressions and calculations. It is precisely this fact that has attracted many linear algebraists to make matrix-theoretic contributions to statistical analysis.
The paper is organized as follows. In Section 2, we introduce a variety of mathematical tools that can be used to simplify matrix expressions and to characterize matrix equalities, and we also present a known result on analytical solutions of a quadratic matrix-valued function optimization problem. In Section 3, we present a group of known results on the predictability of (2), the exact expression of the BLUP of (2) and its special cases, and various statistical properties of the BLUP. In Section 4, we establish a group of formulas for calculating the rank and inertia in (9) and use them to characterize the equality and inequalities in (11). In Section 5, we first give a group of results on the OLSP of (2) and its statistical properties; we then establish a group of formulas for calculating the rank and inertia in (10) and use them to characterize the equality and inequalities in (12). The connections between the OLSP and the BLUP of (2), as well as the equality and inequalities between their dispersion matrices, are investigated in Section 6. Conclusions and a discussion of the algebraic tools in matrix theory, as well as of the applications of the rank/inertia formulas in statistical analysis, are presented in Section 7.

Preliminaries in linear algebra
This section begins by introducing various formulas for the ranks and inertias of matrices and explaining their usefulness in matrix analysis and statistical theory. Recall that the rank of a matrix and the inertia of a real symmetric matrix are two basic concepts in matrix theory; they are among the most significant finite nonnegative integers reflecting the intrinsic properties of matrices, and are thus cornerstones of matrix mathematics. The mathematical prerequisites for understanding ranks and inertias are minimal and do not go beyond elementary linear algebra, and many simple and classic formulas for calculating ranks and inertias can be found in most linear algebra textbooks. The intriguing connections between generalized inverses and ranks of matrices were recognized in the 1970s. A variety of fundamental formulas for calculating the ranks of matrices and their generalized inverses were established, and many applications of these rank formulas in matrix theory and statistics were presented in [13]. Since then, matrix rank theory has been greatly developed and has become an influential and effective tool for simplifying complicated matrix expressions and establishing various matrix equalities.
In order to establish and characterize various possible equalities for predictors and estimators under GLMs, and to simplify matrix equalities composed of Moore–Penrose inverses, we need the following well-known results on ranks and inertias of matrices, which make the paper self-contained. The assertions in Lemma 2.1 follow directly from the definitions of rank, inertia, definiteness, and semi-definiteness of (symmetric) matrices, and were first summarized and effectively utilized in [4]. This lemma shows that once explicit formulas for the ranks/inertias of differences of (symmetric) matrices are established, we can use them to characterize the corresponding matrix equalities and inequalities. Establishing formulas for the ranks/inertias of matrices therefore has important consequences. This fact reflects the most useful role of matrix ranks/inertias in matrix analysis and its applications, and it is thus technically worthwhile to establish as many exact matrix rank/inertia formulas as possible, from both the theoretical and the applied points of view.

Lemma 2.2 ([13]). Let A ∈ R^{m×n}, B ∈ R^{m×k}, and C ∈ R^{l×n}. Then

r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A),
r[A′, C′]′ = r(A) + r(C F_A) = r(C) + r(A F_C),
r of the block matrix with rows [A, B] and [C, 0] equals r(B) + r(C) + r(E_B A F_C).

In particular, the following results hold: r[A, B] = r(A) if and only if R(B) ⊆ R(A), and r[A′, C′]′ = r(A) if and only if R(C′) ⊆ R(A′). The results collected in the following lemma are obvious or well known.
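The well-known rank additivity formulas of [13] can be checked numerically. A small sketch (random matrices; the equalities are theorems, so they hold exactly up to the rank tolerance of the SVD):

```python
import numpy as np

def rank(M):
    return np.linalg.matrix_rank(M)

rng = np.random.default_rng(2)
m, n, k, l = 5, 4, 3, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))

E_A = np.eye(m) - A @ np.linalg.pinv(A)    # E_A = I_m - AA^+
F_A = np.eye(n) - np.linalg.pinv(A) @ A    # F_A = I_n - A^+A

# r[A, B] = r(A) + r(E_A B)
assert rank(np.hstack([A, B])) == rank(A) + rank(E_A @ B)
# r[A', C']' = r(A) + r(C F_A)
assert rank(np.vstack([A, C])) == rank(A) + rank(C @ F_A)
```

The projectors E_A and F_A annihilate the column and row spaces of A, which is exactly why the extra columns B (rows C) contribute only their parts outside R(A) (outside R(A′)).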
Let A = A′ ∈ R^{m×m}, and assume that P ∈ R^{m×m} is nonsingular. Then i±(PAP′) = i±(A); that is, the inertia of a symmetric matrix is invariant under congruence transformations. In the statistical inference of parametric regression models, the unknown parameters are usually predicted/estimated by various optimization methods or algorithms. A brief survey of modern optimization methods in statistical analysis can be found in [16]. In any case, we expect the optimization problems that occur in parameter prediction/estimation under a GLM to have analytical solutions, so that we can use these solutions to establish a complete theory of the statistical inference of the GLM. The notion of the BLUP is well established in the literature, but analytical results on BLUPs of a very general nature were long unavailable owing to the lack of explicit solutions of the BLUPs' optimization problems. The present author recently developed an algebraic method for solving quadratic matrix-valued function optimization problems in [17], and used it to derive many new analytical matrix equations and formulas for the BLUPs of all unknown parameters in GLMs with random effects. By this optimization method, we can directly obtain exact formulas for the BLUPs of φ in (2) and their dispersion matrices, and use them to address various statistical inference problems under general assumptions. In order to solve the matrix minimization problem associated with the BLUPs under a GLM directly, we need the following known result on analytical solutions of a constrained quadratic matrix-valued function minimization problem.
where A ∈ R^{p×q}, B ∈ R^{n×q}, and C ∈ R^{n×p} are given, and where K = BA⁺ + C and U ∈ R^{n×p} is arbitrary. The assertions in this lemma provide a clear way of obtaining analytical solutions of a typical constrained quadratic matrix-valued function minimization problem, and all the properties and features of the minimization problem can be read off from these analytical solutions. With the support of the assertions in Lemma 2.1 and the rank/inertia formulas in Lemmas 2.2–2.6, we are able to convert many inference problems in statistics into algebraic problems of characterizing matrix equalities and inequalities composed of matrices and their generalized inverses, and to derive, as demonstrated in Sections 4–6 below, analytical solutions of these problems by using matrix equations, matrix rank formulas, and various partitioned matrix calculations.
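The congruence invariance of inertia used in Lemmas 2.3–2.6 (Sylvester's law of inertia) is easy to confirm numerically; a sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
S = rng.standard_normal((m, m))
A = S + S.T                                  # an arbitrary symmetric matrix
P = rng.standard_normal((m, m))
while abs(np.linalg.det(P)) < 1e-6:          # ensure P is nonsingular
    P = rng.standard_normal((m, m))

def inertia(M, tol=1e-9):
    """Return (i_+, i_-): counts of positive and negative eigenvalues."""
    w = np.linalg.eigvalsh(M)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)))

# Sylvester's law of inertia: i_pm(PAP') = i_pm(A)
assert inertia(P @ A @ P.T) == inertia(A)
```

This invariance is what permits the block congruence operations used throughout Sections 4–6 to read off inertias from reduced block forms.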

Formulas and properties of BLUPs
In what follows, we assume that (1) is consistent, that is, y ∈ R[X, Σ] holds with probability 1; see [18,19]. Notice from (3) that the BLUP of φ in (2) is defined by minimizing the dispersion matrix of Ly − φ subject to E(Ly − φ) = 0. This minimization property of the BLUP has motivated many problems in the statistical inference of GLMs. In particular, the minimal dispersion matrix of Ly − φ can be used to assess the optimality and efficiency of other types of predictor of φ under (1). In fact, BLUPs are a primary choice among all possible predictors because of their simplicity and optimality properties, and they have wide applications in both pure and applied branches of statistical inference. The theory of BLUPs under GLMs belongs to the classical methods of mathematical statistics and has been a core research issue in the field of statistics and its applications. It should be pointed out that (3) can equivalently be formulated as a constrained matrix-valued function optimization problem in the Löwner partial ordering. This kind of equivalence between dispersion matrix minimization problems and matrix-valued function minimization problems was first characterized in [11]; see also [20,21]. Along with recent developments of optimization methods in matrix theory, it is now easy to deal with the various complicated matrix operations that occur in the statistical inference of (2).
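The dispersion-minimization problem in (3) can be illustrated numerically. The sketch below makes two simplifying assumptions not required by the paper (Σ nonsingular and φ = Xβ, so that the unbiasedness constraint reads LX = X); it compares the classical BLUE coefficient matrix against the OLS projector P_X, which is also unbiased, and checks the Löwner minimality of the BLUE:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 8, 3
X = rng.standard_normal((n, p))
B = rng.standard_normal((n, n))
Sigma = B @ B.T + np.eye(n)      # nonsingular PSD dispersion (assumption)

# phi = X beta (K = X, J = 0): minimize L Sigma L' subject to LX = X
Si = np.linalg.inv(Sigma)
L_blue = X @ np.linalg.inv(X.T @ Si @ X) @ X.T @ Si   # classical BLUE of X beta
L_ols = X @ np.linalg.pinv(X)                          # P_X, also unbiased

# Both satisfy the unbiasedness constraint LX = X
assert np.allclose(L_blue @ X, X)
assert np.allclose(L_ols @ X, X)

# Loewner minimality: L_ols Sigma L_ols' - L_blue Sigma L_blue' is PSD
diff = L_ols @ Sigma @ L_ols.T - L_blue @ Sigma @ L_blue.T
assert np.min(np.linalg.eigvalsh(diff)) > -1e-8
```

Among all unbiased coefficient matrices L, the BLUE's dispersion L Σ L′ is smallest in the Löwner ordering, which is exactly the content of (3) in this special case.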
We next show how to translate the above statistical problems under GLMs into problems in matrix analysis, and how to solve them by using various results and methods in matrix algebra. Under (1) and (2), Ly − φ can be rewritten as Ly − φ = (LX − K)β + (L − J)ε. Then the expectation of Ly − φ can be expressed as E(Ly − φ) = (LX − K)β, and the matrix mean square error of Ly − φ is f(L) = D(Ly − φ) = σ²(L − J)Σ(L − J)′. Hence, the constrained covariance matrix minimization problem in (3) converts to the mathematical problem of minimizing the quadratic matrix-valued function f(L) subject to (LX − K)β = 0. A general method for solving this kind of matrix optimization problem in the Löwner partial ordering was formulated in Lemma 2.8. In particular, the following comprehensive results were established in [22]; see also [23].
Eq. (35) shows that the BLUPs of all unknown parameters in (2) can be determined jointly by a linear matrix equation composed of the two given coefficient matrices K and J, the model matrix X, the dispersion matrix of the observed random vector y, and the covariance matrix between φ and y. It is therefore convenient to present a simple yet general algebraic treatment of the BLUPs of all unknown parameters in a GLM via a basic linear matrix equation, and a large number of algebraic and statistical properties of the BLUPs follow directly from the analytical solutions of this matrix equation. Matrix equations and formulas for BLUPs like those in (35) and (37) under GLMs have been established in the statistical literature by various direct and indirect methods; for instance, the BLUE of Kβ and the BLUP of Jε, as well as (52), were established separately in [11]. In comparison, the results collected in Theorem 3.2 provide a unified theory of the BLUPs and BLUEs of parameter spaces and their properties under GLMs. As demonstrated in [22,23], the results in Theorem 3.2 can serve as basic references in the statistical inference of GLMs. From the fundamental matrix equation and formulas in Theorem 3.2, we can derive many new and valuable consequences on the properties of BLUPs of parameter spaces in GLMs under various assumptions. For instance, one well-known special case of (1) is

y = Xβ + ε,  E(ε) = 0,  D(ε) = σ²I_n,  (56)

where σ² is an unknown positive scalar. In this setting, the BLUP and the OLSP of φ in (2) coincide under (56), and Theorem 3.2 reduces to the following results: BLUE(Xβ) = XX⁺y and BLUP(ε) = X^⊥y. Furthermore, the following results hold: (a) BLUP(φ) satisfies the corresponding covariance matrix equalities; (c) BLUP(Tφ) = T·BLUP(φ) holds for any matrix T ∈ R^{t×k}; (d) BLUE(Xβ) and BLUP(ε) satisfy

BLUE(Xβ) = P_X y,  BLUP(ε) = (I_n − P_X)y,
D[BLUE(Xβ)] = σ²P_X,
Cov{BLUP(ε), ε} = D[BLUP(ε)] = σ²(I_n − P_X),
D[ε − BLUP(ε)] = D(ε) − D[BLUP(ε)] = σ²P_X;

(e) y, BLUE(Xβ), and BLUP(ε) satisfy

y = BLUE(Xβ) + BLUP(ε),  Cov{BLUE(Xβ), BLUP(ε)} = 0,  D(y) = D[BLUE(Xβ)] + D[BLUP(ε)].

Rank/inertia formulas for dispersion matrices of BLUPs
Once predictors/estimators of the parameter spaces in a GLM are established, attention turns to their algebraic and statistical properties. Since BLUPs/BLUEs are the fundamental statistical methodology for predicting and estimating unknown parameters under GLMs, they play an important role in statistical inference theory and are often taken as benchmarks for comparing the efficiency of different predictors/estimators, owing to the minimality of the BLUPs'/BLUEs' dispersion matrices. As demonstrated in Section 3, we can now give exact expressions for BLUPs/BLUEs under GLMs, so that we can derive various algebraic and statistical properties of BLUPs/BLUEs and utilize them in the inference of GLMs. Since the dispersion matrix of a random vector is a conceptual foundation of statistical analysis and inference, statisticians are interested in the dispersion matrices of predictors/estimators and their algebraic properties. Some previous and recent work on the dispersion matrices of BLUPs/BLUEs and their properties under GLMs can be found, e.g., in [24–27]. As is well known, equalities and inequalities for the dispersion matrices of predictors/estimators under GLMs play an essential role in characterizing the behavior of the predictors/estimators. Once such equalities and inequalities are established under various assumptions, we can use them to describe the performance of the predictors/estimators. This is, however, not an easy task from either the mathematical or the statistical point of view, because the dispersion matrices of predictors/estimators often involve complicated operations on the given matrices in GLMs and their generalized inverses, as formulated in (40)–(43). In recent years, the theory of matrix ranks and inertias has been introduced into the statistical analysis of GLMs.
We are able to establish various equalities and inequalities for the dispersion matrices of predictors/estimators under GLMs by using the matrix rank/inertia methodology; see [5,6,9]. Note from (3) that BLUP(φ) has the smallest dispersion matrix among all Ly with E(Ly − φ) = 0, so that the dispersion matrix D[BLUP(φ)] plays a key role in characterizing the performance of the BLUP of φ in (2). In order to establish possible equalities and inequalities for the dispersion matrices of BLUPs, we first establish three basic formulas for calculating the rank/inertia described in (9). Proof. Note from (43) the expression of D[BLUP(φ)]. Also note that R([K, JΣX^⊥]′) ⊆ R([X, ΣX^⊥]′) and R(Σ) ⊆ R[X, ΣX^⊥]. Then, applying (26) to (60), simplifying by Lemmas 2.4 and 2.5 and congruence operations, and using (22), (21), and (20) in turn, we obtain (57) and (58). Adding the two equalities in (57) and (58) yields (59). Applying Lemma 2.1 to (57)–(59) yields (a)–(e).
Eqs. (57)–(59) establish links between the dispersion matrices of the BLUPs of φ and an arbitrary symmetric matrix. Hence, they can be applied to characterize the behavior of the BLUPs; in particular, they can be used to establish many equalities and inequalities for the BLUPs' dispersion matrices under various assumptions. Because the five matrices A, K, J, X, and Σ occur separately in the symmetric block matrix M in Theorem 4.1, it is easy to further simplify (57)–(59) for different choices of these matrices. We next present several special cases of (57)–(59) and derive a number of consequences on the dispersion matrices of BLUPs/BLUEs and their operations.
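The way inertia formulas characterize Löwner inequalities can be seen in a small numerical sketch. Under the simplifying assumptions of a nonsingular Σ and a full-column-rank X (not required by the theorems above), we compare D(Ay) for the OLS coefficient A = X⁺ against the dispersion of the classical BLUE of β, and confirm that the negative inertia of the difference vanishes, i.e. D(Ay) ≽ D[BLUE(β)]:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 8, 3
X = rng.standard_normal((n, p))          # full column rank (generically)
B = rng.standard_normal((n, n))
Sigma = B @ B.T + np.eye(n)              # nonsingular PSD (assumption)

Si = np.linalg.inv(Sigma)
D_blue = np.linalg.inv(X.T @ Si @ X)     # proportional to D[BLUE(beta)]
A_ols = np.linalg.pinv(X)                # OLS: beta_hat = A_ols y
D_ols = A_ols @ Sigma @ A_ols.T          # proportional to D(Ay)

def inertia(M, tol=1e-9):
    w = np.linalg.eigvalsh(M)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)))

i_plus, i_minus = inertia(D_ols - D_blue)
# i_-(D(Ay) - D[BLUE]) = 0  <=>  D(Ay) >= D[BLUE] in the Loewner ordering
assert i_minus == 0
```

This is Lemma 2.1 in action: the matrix inequality is verified not by inspecting eigenvectors or factorizations, but by a single inertia count of the difference.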
Many consequences can be derived from the previous two theorems for different choices of K, J, and A in them.
Here, we give only the rank/inertia formulas for the difference D(Ay) − D[BLUP(φ)]. Assume that φ in (2) is predictable under (1), let BLUP(φ) be as given in (37), let A ∈ R^{k×n}, and denote

Formulas and properties of OLSPs
The method of least squares is a standard approach for estimating the unknown parameters in linear statistical models; it was first proposed as an algebraic procedure for solving overdetermined systems of equations by Gauss (in unpublished work) in 1795 and independently by Legendre in 1805, as remarked in [28–31]. The notion of least-squares estimation is well established in the literature, and we briefly review the derivations of OLSEs and OLSPs. It is easy to verify that the norm (y − Xβ)′(y − Xβ) in (6) can be decomposed as the sum

(y − Xβ)′(y − Xβ) = y′E_X y + (P_X y − Xβ)′(P_X y − Xβ),

where y′E_X y ≥ 0 and (P_X y − Xβ)′(P_X y − Xβ) ≥ 0. Hence,

min over β ∈ R^{p×1} of (y − Xβ)′(y − Xβ) = y′E_X y + min over β ∈ R^{p×1} of (P_X y − Xβ)′(P_X y − Xβ);

see also [7,8]. The matrix equation Xβ = P_X y, which is equivalent to the so-called normal equation X′Xβ = X′y obtained by pre-multiplying by X′, is always consistent; see, e.g., [32, p. 114] and [33, pp. 164–165]. Solving this linear matrix equation by Lemma 2.7 yields the following general results.
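The norm decomposition and the consistency of the normal equation can be checked numerically; a minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 6, 2
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
P_X = X @ np.linalg.pinv(X)
E_X = np.eye(n) - P_X

# The decomposition holds for EVERY beta, since E_X X = 0 kills the cross term
beta = rng.standard_normal(p)
lhs = (y - X @ beta) @ (y - X @ beta)
rhs = y @ E_X @ y + (P_X @ y - X @ beta) @ (P_X @ y - X @ beta)
assert np.isclose(lhs, rhs)

# The normal equation X'X beta = X'y is always consistent; X^+ y solves it
b = np.linalg.pinv(X) @ y
assert np.allclose(X.T @ X @ b, X.T @ y)
```

The cross term (E_X y)′(P_X y − Xβ) vanishes because E_X P_X = 0 and E_X X = 0, which is exactly why the minimum over β is attained by any solution of Xβ = P_X y.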
Note that (K − JX)X⁺ΣX^⊥ = 0 in (ii) of Theorem 6.1(a) is a linear matrix equation that K and J must satisfy. Solving this equation produces all φ such that OLSP(φ) = BLUP(φ) holds. Concerning the relationships between D[BLUP(φ)] and D[OLSP(φ)], we have the following results.

Conclusions
We have offered a predominantly theoretical coverage of statistical prediction and estimation by establishing two groups of standard results on the exact algebraic expressions of the BLUPs and OLSPs of all unknown parameters and their fundamental properties under (1). We have also established a variety of exact algebraic formulas for calculating the ranks and inertias of matrices associated with the BLUPs and OLSPs, and have used these formulas to characterize many interesting and valuable equalities and inequalities for the dispersion matrices of the BLUPs and OLSPs under (1). The whole work contains a large number of useful results on GLMs and can serve as a comprehensive account of the rank/inertia theory in parameter prediction and estimation problems under GLMs.
Statistical theory and methods often require various mathematical computations with vectors and matrices. In particular, formulas and algebraic techniques for handling matrices play important roles in the derivation and characterization of estimators and predictors and of their features and performance under linear regression models; thus, matrix theory provides a powerful tool set for addressing statistical problems. A long list of handbooks on matrix algebra for statistical analysis has been published since the 1960s; see, e.g., [2, 59–74], and new algebraic methods continue to be developed in matrix mathematics. But it is rarely the case that the algebraic techniques in matrix theory are ready-made to address statistical challenges; this is why the dialogue between matrix theory and statistics benefits both disciplines.
Although the ranks/inertias of matrices are a conceptual foundation of elementary linear algebra and are among the most significant finite integers reflecting the intrinsic properties of matrices, it took a long time in the development of mathematics to establish analytical and valuable formulas for calculating ranks/inertias and to use them, as demonstrated in the previous sections, in the rigorous derivation of matrix equalities and inequalities in the statistical analysis of GLMs. The present author has been devoted to this subject and has proved thousands of matrix rank/inertia formulas since the 1980s by using various skillful calculations for partitioned matrices. This work has provided significant advances in general algebraic and statistical methodology, and a state-of-the-art theory of matrix ranks/inertias with applications has been established. We are now able to use matrix rank/inertia formulas to describe many fundamental properties and features of matrices, such as simplifying complicated matrix expressions, deriving matrix equalities and inequalities that involve generalized inverses, characterizing the definiteness and semi-definiteness of symmetric matrices, and solving matrix optimization problems in the Löwner sense. In the past decade, the present author has been working on applications of the matrix rank/inertia methodology in linear regression analysis; much experience in using rank/inertia formulas in the statistical inference of GLMs has been gained, and many fundamental mathematical and statistical features of predictors/estimators under GLMs have been obtained in this process. Some recent contributions on this topic by the present author and his collaborators are presented in [5–7, 9, 17, 22, 57, 58, 75–79], which contain a large number of useful results on GLMs. The findings in these papers provide significant advances in the algebraic methodology for the statistical analysis and inference of GLMs, and can merge into the essential part of a unified theory of GLMs.