Elementary triangular matrices and inverses of $k$-Hessenberg and triangular matrices

We use elementary triangular matrices to obtain some factorization, multiplication, and inversion properties of triangular matrices. We also obtain explicit expressions for the inverses of strict $k$-Hessenberg matrices and banded matrices. Our results can be extended to the cases of block triangular and block Hessenberg matrices.


Introduction
The importance of triangular, Hessenberg, and banded matrices is well-known. Many problems in linear algebra and matrix theory are solved by some kind of reduction to problems involving such types of matrices. This occurs, for example, with LU factorizations and the QR algorithm.
In this paper, we study first some simple properties of triangular matrices using a particular class of such matrices that we call elementary. Using elementary matrices we obtain factorization and inversion properties and a formula for powers of triangular matrices.
Some of our results may be useful to develop parallel algorithms to compute powers and inverses of triangular matrices and also of block-triangular matrices.
In the second part of the paper we obtain an explicit formula for the inverse of a strict k-Hessenberg matrix in terms of the inverse of an associated triangular matrix. Our formula is obtained by extending n × n k-Hessenberg matrices to (n + k) × (n + k) invertible triangular matrices and using some natural block decompositions. Our formula can be applied to find the inverses of tridiagonal and banded matrices.

Elementary triangular matrices
In this section n denotes a fixed positive integer, N = {1, 2, . . . , n}, and T denotes the set of lower triangular n × n matrices with complex entries. An element of T is called elementary if it is of the form I + C_k for some k ∈ N, where I is the identity matrix and C_k is lower triangular and has all of its nonzero entries in the k-th column.
Let A = [a_{j,k}] ∈ T. For each k ∈ N we define E_k as the matrix obtained from the identity matrix I_n by replacing its k-th column with the k-th column of A, that is, (E_k)_{j,k} = a_{j,k} for j = k, k + 1, . . . , n, (E_k)_{j,j} = 1 for j ≠ k, and all the other entries of E_k are zero. The matrices E_k are called the elementary factors of A because

A = E_1 E_2 · · · E_n.   (2.1)

Let us note that performing the multiplications in (2.1) does not require any arithmetical operations. It is just putting the columns of A in their proper places.
If for some k we have a_{k,k} ≠ 0 then E_k is invertible and it is easy to verify that

(E_k)^{−1} = I − (a_{k,k})^{−1} (E_k − I).   (2.2)

Note that (E_k)^{−1} is also an elementary lower triangular matrix. If A is invertible then all of its elementary factors are invertible and from (2.1) we obtain

A^{−1} = (E_n)^{−1} (E_{n−1})^{−1} · · · (E_1)^{−1}.   (2.3)

Therefore A^{−1} is the product of the elementary factors of the matrix B = (E_1)^{−1}(E_2)^{−1} · · · (E_n)^{−1}, but in reverse order.
Notice that B = I + (I − A)D^{−1}, where D = Diag(a_{1,1}, a_{2,2}, . . . , a_{n,n}); therefore B is obtained from A by dividing each column of I − A by the corresponding diagonal entry of A. If we are only interested in computing the inverse of A we can find the inverse of Ã = AD^{−1}, which has all of its diagonal entries equal to one, and then we have A^{−1} = D^{−1}Ã^{−1}. This means that it is enough to consider matrices with diagonal entries equal to one in order to study the construction of inverses of triangular matrices. If we consider general triangular matrices we can also obtain results about the computation of powers of such matrices. We obtain next some results about products of elementary factors.
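The factorization (2.1), the inverses of the elementary factors, and the formula B = I + (I − A)D^{−1} can be checked directly on a small example. The following sketch is our own illustration (function names and the test matrix are ours, not from the paper); it uses plain lists of rows and 0-based indices.

```python
# Sketch (our names): elementary factors of a lower triangular matrix,
# the factorization A = E_1 E_2 ... E_n, and the matrix
# B = (E_1)^{-1} (E_2)^{-1} ... (E_n)^{-1} = I + (I - A) D^{-1}.

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n, p, q = len(X), len(Y), len(Y[0])
    return [[sum(X[i][l] * Y[l][j] for l in range(p)) for j in range(q)]
            for i in range(n)]

def elementary_factor(A, k):
    """E_k: the identity with its k-th column replaced by the k-th column of A."""
    E = identity(len(A))
    for i in range(k, len(A)):
        E[i][k] = A[i][k]
    return E

def elementary_factor_inv(A, k):
    """(E_k)^{-1} = I - (1/a_kk)(E_k - I), valid when a_kk != 0."""
    F = identity(len(A))
    F[k][k] = 1.0 / A[k][k]
    for i in range(k + 1, len(A)):
        F[i][k] = -A[i][k] / A[k][k]
    return F

A = [[2.0, 0.0, 0.0],
     [3.0, 4.0, 0.0],
     [5.0, 6.0, 7.0]]
n = len(A)

product = identity(n)              # E_1 E_2 ... E_n (0-based loop)
for k in range(n):
    product = matmul(product, elementary_factor(A, k))

B = identity(n)                    # (E_1)^{-1} (E_2)^{-1} ... (E_n)^{-1}
for k in range(n):
    B = matmul(B, elementary_factor_inv(A, k))

# B = I + (I - A) D^{-1}: column j of I - A divided by a_jj
B_formula = [[(float(i == j) - A[i][j]) / A[j][j] + float(i == j)
              for j in range(n)] for i in range(n)]
```

As the factorization predicts, `product` reproduces A exactly, and the two constructions of B coincide, so B is indeed obtained without any matrix multiplications.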
We define C_k = E_k − I for k ∈ N. Note that (C_k)_{k,k} = a_{k,k} − 1, (C_k)_{j,k} = a_{j,k} for j = k + 1, . . . , n, and all the other entries are zero, that is, all the nonzero entries of C_k lie on the k-th column. It is clear that A has the following additive decomposition

A = I + C_1 + C_2 + · · · + C_n.   (2.5)

Let L be the n × n matrix such that L_{k+1,k} = 1 for k = 1, 2, . . . , n − 1 and all its other entries are zero. We call L the shift matrix, since ML is M shifted one column to the left. Note that T is invariant under the maps M → ML and M → LM.
In order to simplify the notation we will write the entries in the main diagonal of A as a_{k,k} = x_k for k = 1, 2, . . . , n.
The matrices C_k have simple multiplication properties that we list in the following theorem.

Theorem 2.1. Let j, k ∈ N and let m be a positive integer. Then:
1. C_j C_k = 0 if j < k, and C_j C_k = a_{j,k} C_j L^{j−k} if j > k.
2. (C_k)^m = (x_k − 1)^{m−1} C_k.
3. If j > i then (C_j L^{j−i}) C_k = a_{i,k} C_j L^{j−k} for i > k, (C_j L^{j−i}) C_k = (x_k − 1) C_j L^{j−k} for i = k, and (C_j L^{j−i}) C_k = 0 for i < k.

The proofs are straightforward computations. Notice that all the nonzero entries of C_k L^{k−j} are in the j-th column.
From part 3 of the previous theorem we obtain immediately the following multiplication formula. If r ≥ 2 and k_1 > k_2 > · · · > k_r then

C_{k_1} C_{k_2} · · · C_{k_r} = a_{k_1,k_2} a_{k_2,k_3} · · · a_{k_{r−1},k_r} C_{k_1} L^{k_1 − k_r}.   (2.6)

If K is a subset of N with at least two elements we define the matrix

G(K) = C_{k_1} C_{k_2} · · · C_{k_r},   (2.7)

where K = {k_1, k_2, . . . , k_r} and k_1 > k_2 > · · · > k_r. Let us note that all the nonzero entries of G(K) are in its k_r-th column. If K contains only one element, that is, K = {j}, then we put G(K) = C_j. Since E_k = I + C_k, the multiplication properties of the C_k can be used to obtain some corresponding properties of the E_k.

Theorem 2.2. Let j, k ∈ N, let m be a positive integer, and let K = {k_1, k_2, . . . , k_r} ⊆ N with k_1 > k_2 > · · · > k_r. Then:
1. If j < k then E_j E_k = I + C_j + C_k.
2. (E_k)^m = I + (1 + x_k + x_k^2 + · · · + x_k^{m−1}) C_k.
3. E_{k_1} E_{k_2} · · · E_{k_r} = I + Σ_J G(J), where the sum runs over all the nonempty subsets J of K.
Proof: The proof of the first part is trivial. For the second part, use part 2 of Theorem 2.1 and the binomial formula. For part 3, write each factor in the form E_{k_j} = I + C_{k_j}, expand the product, collect terms, and use the definition of the function G. Alternatively, we can use part 1 repeatedly and then use the definition of G.
Observe that the number of summands in part 3 is at most equal to the number of subsets of K. If some of the entries a_{j,k} are equal to zero then some of the matrices G(J) are zero.
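The multiplication formula for products of the matrices C_{k_1} C_{k_2} · · · C_{k_r} can be checked numerically. The following sketch is our own illustration; it takes n = 4 and the set K = {4, 2, 1} of the paper's 1-based indexing, written here with 0-based indices.

```python
# Our numerical check of the product formula
# C_{k1} C_{k2} ... C_{kr} = a_{k1,k2} a_{k2,k3} ... a_{k_{r-1},k_r} C_{k1} L^{k1-kr}
# for a 4 x 4 lower triangular A and K = {4, 2, 1} (0-based: 3, 1, 0).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
A = [[2.0, 0.0, 0.0, 0.0],
     [3.0, 4.0, 0.0, 0.0],
     [5.0, 6.0, 7.0, 0.0],
     [8.0, 9.0, 1.5, 2.5]]

def C(k):
    """C_k = E_k - I: nonzero entries only in column k."""
    M = [[0.0] * n for _ in range(n)]
    M[k][k] = A[k][k] - 1.0
    for i in range(k + 1, n):
        M[i][k] = A[i][k]
    return M

L = [[float(i == j + 1) for j in range(n)] for i in range(n)]  # shift matrix

def power(M, m):
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        P = matmul(P, M)
    return P

k1, k2, k3 = 3, 1, 0                 # K = {4, 2, 1} in 1-based indexing
lhs = matmul(matmul(C(k1), C(k2)), C(k3))
scalar = A[k1][k2] * A[k2][k3]       # a_{k1,k2} a_{k2,k3}
rhs = [[scalar * x for x in row] for row in matmul(C(k1), power(L, k1 - k3))]
```

Both sides have a single nonzero entry, in column k_r of row k_1, as the formula predicts.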
Taking K = N in part 3 of Theorem 2.2 we obtain

E_n E_{n−1} · · · E_2 E_1 = I + Σ_J G(J),   (2.8)

where the sum runs over all the nonempty subsets J of N. The summands in (2.8) that may have nonzero entries in the k-th column (other than I) are the matrices G({k} ∪ J) where J ⊆ {k + 1, k + 2, . . . , n}, and thus the number of such matrices is at most equal to 2^{n−k}. For k = n there is only one, which is C_n; for k = n − 1 they are C_{n−1} and G({n − 1, n}), and so on. Since G({k} ∪ J) is a scalar multiple of C_m L^{m−k}, where m is the largest element of J, we can group together all the terms having the same largest element m, and therefore the k-th column of the product in (2.8) is a linear combination of C_k, C_{k+1} L, C_{k+2} L^2, . . . , C_n L^{n−k}, and the k-th column of the identity matrix. Therefore, if we have computed E_n E_{n−1} · · · E_k then the columns with indices k, k + 1, . . . , n are determined and do not change when we multiply on the right by E_{k−1}, E_{k−2}, and so on. Thus E_n E_{n−1} · · · E_{k+1} and E_n E_{n−1} · · · E_{k+1} E_k only differ in their k-th columns. Note that E_n E_{n−1} · · · E_1 is the inverse of the matrix B = (E_1)^{−1}(E_2)^{−1} · · · (E_n)^{−1} introduced above. This means that computing the sequence E_n E_{n−1} · · · E_k for k = n, n − 1, . . . , 2, 1 we obtain B^{−1} column by column, starting from the last one. This procedure may be useful to develop parallel algorithms for the computation of inverses of triangular matrices.
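The column-by-column behavior just described can be observed directly. In the sketch below (our code, 0-based indices) the partial products of the elementary factors of A, taken in reverse order, agree with the final product in the columns already processed, and the final product inverts B = I + (I − A)D^{−1}.

```python
# Our illustration: multiplying the elementary factors of A in reverse order
# fills in the columns of B^{-1} one at a time, where B = I + (I - A) D^{-1}.

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 0.0, 0.0, 0.0],
     [3.0, 4.0, 0.0, 0.0],
     [5.0, 6.0, 7.0, 0.0],
     [8.0, 9.0, 1.5, 2.5]]
n = len(A)

def E(k):
    M = identity(n)
    for i in range(k, n):
        M[i][k] = A[i][k]
    return M

# B = I + (I - A) D^{-1}
B = [[(float(i == j) - A[i][j]) / A[j][j] + float(i == j)
      for j in range(n)] for i in range(n)]

partials = []                      # partials[t] = E_n E_{n-1} ... E_{n-t}
P = identity(n)
for k in range(n - 1, -1, -1):     # multiply on the right by E_k
    P = matmul(P, E(k))
    partials.append([row[:] for row in P])

B_inv = partials[-1]               # the full product equals B^{-1}
```

Each right-multiplication changes only one column, so the procedure produces one column of the inverse per step; the independent columns could be assigned to parallel tasks.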
We consider now explicit expressions for the positive powers of a triangular matrix A in terms of its elementary factors. We start with A^2. Using (2.5) and part 1 of Theorem 2.1 we get

A^2 = I + 2(C_1 + C_2 + · · · + C_n) + Σ_k (C_k)^2 + Σ_{j>k} C_j C_k.

Therefore, by Theorem 2.1 we have

A^2 = I + Σ_{k=1}^{n} (x_k + 1) C_k + Σ a_{k,j} C_k L^{k−j},   (2.9)

where the last sum runs over all pairs (j, k) such that 1 ≤ j < k ≤ n. Let K = {k_1, k_2, . . . , k_r} ⊆ N, where k_1 > k_2 > · · · > k_r, and let m be a nonnegative integer. We define the scalar valued function g(K, m) as follows:

g(K, m) = ∆[1, x_{k_1}, x_{k_2}, . . . , x_{k_r}] t^{m+r},   (2.10)

where ∆[1, x_{k_1}, x_{k_2}, . . . , x_{k_r}] denotes the divided differences functional with respect to the numbers 1, x_{k_1}, x_{k_2}, . . . , x_{k_r}, applied here to the monomial t^{m+r}. Note that g(K, m) is a symmetric polynomial in the x_j. For the properties of divided differences see [6]. Using induction and some basic properties of divided differences we can obtain an expression for A^m in terms of matrices of the form G(J):

A^m = I + Σ g(J, m − |J|) G(J),   (2.11)

where the sum runs over all the nonempty subsets J of N with at most min(m, n) elements. Note that the numbers g(J, 0), which appear in the terms with |J| = m, are all equal to 1. A similar result for triangular matrices with distinct diagonal entries x_j was obtained by Shur [4]. Other formulas for powers of general square matrices appear in [7].
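The expansion of A^2 in terms of the matrices C_k can be verified numerically. The sketch below is our own check (0-based indices); it compares A^2 with I + Σ (x_k + 1) C_k + Σ_{j<k} a_{k,j} C_k L^{k−j}.

```python
# Our numerical check of the expansion of A^2 in terms of the matrices C_k:
# A^2 = I + sum_k (x_k + 1) C_k + sum_{j<k} a_{k,j} C_k L^{k-j}.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(c, X):
    return [[c * x for x in row] for row in X]

A = [[2.0, 0.0, 0.0],
     [3.0, 4.0, 0.0],
     [5.0, 6.0, 7.0]]
n = len(A)
I = [[float(i == j) for j in range(n)] for i in range(n)]
L = [[float(i == j + 1) for j in range(n)] for i in range(n)]  # shift matrix

def C(k):
    M = [[0.0] * n for _ in range(n)]
    M[k][k] = A[k][k] - 1.0
    for i in range(k + 1, n):
        M[i][k] = A[i][k]
    return M

def Lpow(m):
    P = I
    for _ in range(m):
        P = matmul(P, L)
    return P

lhs = matmul(A, A)
rhs = I
for k in range(n):
    rhs = madd(rhs, scale(A[k][k] + 1.0, C(k)))     # (x_k + 1) C_k
for k in range(n):
    for j in range(k):
        rhs = madd(rhs, scale(A[k][j], matmul(C(k), Lpow(k - j))))
```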

Inverses of Hessenberg matrices
In this section we use the fact that a k-Hessenberg matrix H is a submatrix of a larger triangular matrix to obtain a formula for the inverse of H, in case it exists. We also characterize the invertible k-Hessenberg matrices in terms of properties of the blocks in a natural block decomposition.
We call an n × n matrix H = [h_{i,j}] lower k-Hessenberg if h_{i,j} = 0 for i < j − k. We say that H is strict lower k-Hessenberg if, in addition, h_{i,j} ≠ 0 for i = j − k.
Any lower k-Hessenberg n × n matrix H has the following block decomposition. Let m = n − k. Then

H = [ B  A ]
    [ D  C ],   (3.1)

where B consists of the first m rows and the first k columns of H, A consists of the first m rows and the last m columns, D consists of the last k rows and the first k columns, and C consists of the last k rows and the last m columns. Note that A is lower triangular and it is invertible if H is strict k-Hessenberg.
Extending the block decomposition of equation (3.1) to form a lower triangular matrix we obtain the following result.

Theorem 3.1. Let H be an n × n strict lower k-Hessenberg matrix, decomposed as in (3.1), and define E = A^{−1}B, F = CA^{−1}, and G = CA^{−1}B − D. Then H is invertible if and only if G is invertible, and in that case

H^{−1} = [ 0       0 ] + [  G^{−1}   ] [ F  −I_k ].   (3.2)
         [ A^{−1}  0 ]   [ −E G^{−1} ]

Proof: Suppose that G = CA^{−1}B − D is invertible. Define the lower triangular (n + k) × (n + k) matrix T by

T = [ I_k  0  0   ]
    [ B    A  0   ]
    [ D    C  I_k ].   (3.3)

It is easy to verify that

T^{−1} = [ I_k  0       0   ]
         [ −E   A^{−1}  0   ]
         [ G    −F      I_k ],   (3.4)

where E, F, and G are as previously defined. Consider now the block decomposition

T = [ J  0 ],    T^{−1} = [ P  Q ],
    [ H  N ]              [ S  R ]

where J = [ I_k  0 ] has size k × n, N is the n × k matrix whose last k rows form I_k, and, by (3.4), P = [ I_k ; −E ] is n × k, Q = [ 0  0 ; A^{−1}  0 ] is n × n, S = G, and R = [ −F  I_k ] is k × n. From T T^{−1} = I_{n+k}, the last n rows give the equations

H P + N S = 0   (3.5)

and

H Q + N R = I_n.   (3.6)

Since S = G is invertible by hypothesis, we can solve for N in equation (3.5) and substitute the resulting expression N = −HPG^{−1} in equation (3.6). In this way we obtain

H (Q − P G^{−1} R) = I_n,

which implies that H is invertible, and expanding Q − P G^{−1} R in terms of the blocks listed above shows that (3.2) holds. Now suppose that H is invertible and let

H^{−1} = [ U  V ],
         [ W  Z ]

where the block decomposition is compatible with the decomposition of H given in (3.1), that is, U is k × m and V is k × k. Then, from the equation H^{−1}H = I_n we obtain

U B + V D = I_k,    U A + V C = 0.   (3.7)

Since A is invertible the second equation in (3.7) yields U + V CA^{−1} = 0, and multiplying by B on the right we obtain UB + V CA^{−1}B = 0. Combining this last equation with the first one in (3.7) we get V (CA^{−1}B − D) = −I_k and therefore G is invertible and G^{−1} = −V.

Let us observe that an important part of the computation of H^{−1} is the computation of the inverse of the triangular matrix A. Note also that any given strict k-Hessenberg matrix can be modified to become invertible by changing the k × k block D of (3.1) in a suitable way.
In the case of k = 1 the matrix G reduces to a number and then the second term in the right-hand side of (3.2) is the product of a column vector times a row vector. See [2].
Note that Theorem 3.1 holds for tridiagonal matrices and also for banded matrices. Suppose that k = 1, H is tridiagonal, and n ≥ 3. Then the matrices in the block decomposition of H are B = [h_{1,1}  h_{2,1}  0  0 · · · 0]^T, C = [0  0 · · · 0  h_{n,n−1}  h_{n,n}], and D = 0. In this case A is lower triangular and tridiagonal. Using the row version of the theory of elementary triangular matrices, which we describe in the next section, it is easy to construct a recursive algorithm to compute the inverses of tridiagonal matrices as the size n increases.
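To illustrate the inversion formula of Theorem 3.1 in the case k = 1, the following sketch (our code; it assumes the blocks are taken as in (3.1), with 0-based indices) inverts a strict lower 1-Hessenberg matrix, computing the inverse of the triangular block A by forward substitution. Here G is a scalar.

```python
# Our illustration of Theorem 3.1 for k = 1: with m = n - 1, the blocks of
# (3.1) are B (m x 1), A (m x m lower triangular), D (1 x 1), C (1 x m);
# G = C A^{-1} B - D is a scalar, E = A^{-1} B, F = C A^{-1}, and
# H^{-1} = [[G^{-1} F, -G^{-1}], [A^{-1} - E G^{-1} F, E G^{-1}]].

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def tri_inv(A):
    """Inverse of a lower triangular matrix by forward substitution."""
    m = len(A)
    inv = [[0.0] * m for _ in range(m)]
    for j in range(m):
        inv[j][j] = 1.0 / A[j][j]
        for i in range(j + 1, m):
            s = sum(A[i][l] * inv[l][j] for l in range(j, i))
            inv[i][j] = -s / A[i][i]
    return inv

H = [[1.0, 2.0, 0.0, 0.0],      # tridiagonal, strict lower 1-Hessenberg
     [3.0, 4.0, 5.0, 0.0],
     [0.0, 6.0, 7.0, 8.0],
     [0.0, 0.0, 9.0, 10.0]]
n = len(H)
m = n - 1

A = [[H[i][j + 1] for j in range(m)] for i in range(m)]   # first m rows, last m cols
B = [[H[i][0]] for i in range(m)]                         # first m rows, first col
C = [[H[n - 1][j + 1] for j in range(m)]]                 # last row, last m cols
D = H[n - 1][0]

Ainv = tri_inv(A)
E = matmul(Ainv, B)                # m x 1
F = matmul(C, Ainv)                # 1 x m
G = matmul(C, E)[0][0] - D         # scalar

Hinv = [[0.0] * n for _ in range(n)]
Hinv[0] = [F[0][j] / G for j in range(m)] + [-1.0 / G]
for i in range(m):
    for j in range(m):
        Hinv[i + 1][j] = Ainv[i][j] - E[i][0] * F[0][j] / G
    Hinv[i + 1][m] = E[i][0] / G
```

Since k = 1, the second term of the formula is indeed a column vector times a row vector, scaled by 1/G.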
The proof of Theorem 3.1 only uses the hypothesis that the block A is invertible. Therefore the theorem holds also for other types of matrices, such as block Hessenberg matrices.

Row and block versions of the theory of elementary triangular matrices
In this section we present a brief description of two variations on the theory of elementary triangular matrices presented in section 2. The first one is obtained when we consider lower triangular matrices that are the sum of the identity matrix plus a matrix that has all of its nonzero entries in a single row. In the second one we consider block lower triangular matrices with square blocks along the diagonal that may have different sizes.
Recall that T denotes the set of n × n lower triangular matrices. An element of T is called row elementary if it is of the form I + R_k, for some k ∈ N, where I is the identity matrix and R_k is lower triangular and has all of its nonzero entries in the k-th row.
Let A = [a_{k,j}] ∈ T. For k ∈ N we define F_k as the matrix obtained from the identity matrix I by replacing its k-th row with the k-th row of A, that is, (F_k)_{k,j} = a_{k,j} for j = 1, 2, . . . , k, (F_k)_{j,j} = 1 for j ≠ k, and all the other entries of F_k are zero. The matrices F_k are called the elementary factors by rows of A because

A = F_1 F_2 · · · F_n.   (4.1)

If for some k we have a_{k,k} ≠ 0 then F_k is invertible and

(F_k)^{−1} = I − (a_{k,k})^{−1} (F_k − I).   (4.2)

Therefore, if A is invertible then

A^{−1} = (F_n)^{−1} (F_{n−1})^{−1} · · · (F_1)^{−1}.   (4.3)

The matrices (F_j)^{−1} are the elementary factors by rows of the matrix

B = (F_1)^{−1} (F_2)^{−1} · · · (F_n)^{−1} = I + D^{−1}(I − A),   (4.4)

where D = Diag(a_{1,1}, a_{2,2}, . . . , a_{n,n}), and A^{−1} is the product of the elementary factors by rows of B, but in reverse order. Define R_k = F_k − I for k ∈ N. Then A = I + R_1 + R_2 + · · · + R_n. It is easy to see that F_k F_{k−1} · · · F_2 F_1 and F_{k−1} · · · F_2 F_1 only differ in the k-th row, and the difference is a linear combination of translates of R_1, R_2, . . . , R_k. Note that the leading k × k block of F_k F_{k−1} · · · F_2 F_1 is the inverse of the submatrix of B obtained by deleting the rows and columns with indices k + 1, k + 2, . . . , n, which is often called the k × k section of B. Therefore, computing the sequence of matrices F_k F_{k−1} · · · F_2 F_1 for k = 1, 2, 3, . . . yields a recursive algorithm that gives the inverse of B row by row. That algorithm can also be used to find the inverses of the sections of infinite lower triangular matrices such as the ones considered in [5]. The inversion algorithm introduced in [5] can be combined with the computation of inverses of diagonal blocks of a triangular matrix, using multiplication of elementary matrices, by rows or by columns.
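The row-by-row procedure can be sketched as follows (our code, 0-based indices). Given a lower triangular matrix B with nonzero diagonal entries, the matrix A = I + D^{−1}(I − B), where D = Diag(b_{1,1}, . . . , b_{n,n}), has elementary factors by rows whose product in reverse order is B^{−1}; each left-multiplication by F_k adds one row of the inverse, and after step k the leading k × k corner inverts the k × k section of B.

```python
# Our sketch of the recursive row algorithm: inverting a lower triangular
# matrix B row by row using the elementary factors by rows of
# A = I + D^{-1}(I - B), where D = Diag(b_11, ..., b_nn).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

B = [[2.0, 0.0, 0.0, 0.0],
     [3.0, 4.0, 0.0, 0.0],
     [5.0, 6.0, 7.0, 0.0],
     [8.0, 9.0, 1.5, 2.5]]
n = len(B)

# a_{k,j} = -b_{k,j} / b_{k,k} for j < k, and a_{k,k} = 1 / b_{k,k}
A = [[(1.0 / B[k][k] if j == k else -B[k][j] / B[k][k]) if j <= k else 0.0
      for j in range(n)] for k in range(n)]

M = [[float(i == j) for j in range(n)] for i in range(n)]
sections = []
for k in range(n):
    # left-multiplying by F_k replaces row k of M by a combination of rows 0..k
    M[k] = [sum(A[k][j] * M[j][col] for j in range(k + 1)) for col in range(n)]
    sections.append([row[:k + 1] for row in M[:k + 1]])

B_inv = M   # after the last step, M = F_n ... F_1 = B^{-1}
```

Each step touches a single row, so the inverses of all the leading sections of B are obtained along the way, which is what makes the algorithm suitable for growing (and, in the limit, infinite) triangular matrices.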
The concept of elementary triangular matrices (by columns or by rows) can be generalized to the case of block triangular matrices in a natural way. We describe next how this is done in the case of column block elementary matrices.
Let k_1, k_2, . . . , k_r be positive integers such that n = k_1 + k_2 + · · · + k_r. Let X_j be a k_j × k_j matrix for 1 ≤ j ≤ r and let A be an n × n block matrix that has the matrices X_j along the diagonal and all of its other nonzero entries below the diagonal blocks.
For j ∈ {1, 2, . . . , r} let E_j, called the block elementary factor of A by columns, be the matrix that coincides with A in all the columns corresponding to the diagonal block X_j, that is, the columns with indices between k_1 + k_2 + · · · + k_{j−1} + 1 and k_1 + k_2 + · · · + k_j, and coincides with the identity matrix in the rest of the columns. Then we have A = E_1 E_2 · · · E_r.
If the block X_j is invertible then E_j is also invertible and

(E_j)^{−1} = I − (E_j − I) Diag(I_{m_j}, (X_j)^{−1}, I_{p_j}),   (4.5)

where Diag(I_{m_j}, (X_j)^{−1}, I_{p_j}) is the block diagonal matrix that has (X_j)^{−1} in the j-th diagonal block and identity blocks elsewhere. Note that m_j = k_1 + k_2 + · · · + k_{j−1} and p_j = n − m_j − k_j. If all the diagonal blocks X_j are invertible then A is invertible and

A^{−1} = (E_r)^{−1} (E_{r−1})^{−1} · · · (E_1)^{−1}.   (4.6)
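A small numerical check of the block inversion formula (4.5) can be sketched as follows. This is our own illustration, with two diagonal blocks of sizes 2 and 1; the 2 × 2 block is inverted by the adjugate formula.

```python
# Our check of (4.5) for a block elementary factor: n = 3 with block sizes
# k_1 = 2, k_2 = 1; E_1 coincides with A in its first two columns, and
# (E_1)^{-1} = I - (E_1 - I) Diag((X_1)^{-1}, I).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0, 0.0],      # X_1 = [[2, 1], [1, 3]] is the first diagonal block
     [1.0, 3.0, 0.0],
     [4.0, 5.0, 6.0]]      # X_2 = [6] is the second diagonal block
n = 3
I3 = [[float(i == j) for j in range(n)] for i in range(n)]

E1 = [[A[i][j] if j < 2 else I3[i][j] for j in range(n)] for i in range(n)]
E2 = [[A[i][j] if j == 2 else I3[i][j] for j in range(n)] for i in range(n)]

# X_1^{-1} by the 2 x 2 adjugate formula; det X_1 = 2*3 - 1*1 = 5
X1_inv = [[3.0 / 5.0, -1.0 / 5.0], [-1.0 / 5.0, 2.0 / 5.0]]
Dg = [[X1_inv[i][j] if i < 2 and j < 2 else I3[i][j]
       for j in range(n)] for i in range(n)]

E1mI = [[E1[i][j] - I3[i][j] for j in range(n)] for i in range(n)]
T = matmul(E1mI, Dg)
E1_inv = [[I3[i][j] - T[i][j] for j in range(n)] for i in range(n)]
```

The same computation applied to every block factor yields the factored inverse of a block triangular matrix.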