On sets of eigenvalues of matrices with prescribed row sums and prescribed graph

Motivated by a work of Boros, Brualdi, Crama and Hoffman, we consider the sets of (i) possible Perron roots of nonnegative matrices with prescribed row sums and associated graph, and (ii) possible eigenvalues of complex matrices with prescribed associated graph and row sums of the moduli of their entries. To characterize the set of Perron roots or possible eigenvalues of matrices in these classes we introduce, following an idea of Al'pin, Elsner and van den Driessche, the concept of row uniform matrix, which is a nonnegative matrix where all nonzero entries in every row are equal. Furthermore, we completely characterize the sets of possible Perron roots of the class of nonnegative matrices and the set of possible eigenvalues of the class of complex matrices under study. Extending known results to the reducible case, we derive new sharp bounds on the set of eigenvalues or Perron roots of matrices when the only information available is the graph of the matrix and the row sums of the moduli of its entries. In the last section of the paper a new constructive proof of the Camion-Hoffman theorem is given.


Background and motivation
The use of the row sums of a matrix to determine nonsingularity or to bound its spectrum has its origins in the 19th century [18, Section 2] and has led to a vast literature associated with the name of Geršgorin and his circles [21]. One of the first observations, due to Frobenius, was that the Perron root ρ(A) (i.e., the largest nonnegative eigenvalue, or the spectral radius) of a nonnegative matrix A ∈ R^{n×n}_+ is bounded by

min_{i=1,...,n} r_i(A) ≤ ρ(A) ≤ max_{i=1,...,n} r_i(A),   (1)

where r_i(A) denotes the ith row sum of the elements of A. If A is irreducible then the inequalities in (1) are strict except when min_{i=1,...,n} r_i(A) = max_{i=1,...,n} r_i(A). In a recent development, Al'pin [2] and Elsner and van den Driessche [11] sharpened the classical bounds of Frobenius by considering a matrix B which has the same zero-nonzero pattern as A and whose entries are equal to the row sums of A in the corresponding rows. We formalize this idea in the following definition.
For A ∈ R^{n×n}_+, its auxiliary matrix Aux(A) is the row uniform matrix (i.e., the nonnegative matrix all of whose nonzero entries in every row are equal) with the same zero-nonzero pattern as A, whose nonzero entries in row i are equal to r_i(A). For a general complex matrix A ∈ C^{n×n}, its auxiliary matrix is defined as Aux(|A|).
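As a small numerical sketch of the row-sum bounds (1) and of the auxiliary matrix, the code below approximates the Perron root by power iteration and checks min_i r_i(A) ≤ ρ(A) ≤ max_i r_i(A). The 3×3 matrix and the helper names are illustrative, not taken from the paper; power iteration is adequate here because the example matrix is nonnegative and primitive.

```python
# Illustration of the Frobenius row-sum bounds (1) and of Aux(A).
# The example matrix is ours, not from the paper.

def row_sums(A):
    return [sum(row) for row in A]

def aux(A):
    # Aux(A): same zero-nonzero pattern, every nonzero entry of row i
    # replaced by the i-th row sum r_i(A)
    r = row_sums(A)
    n = len(A)
    return [[r[i] if A[i][j] != 0 else 0.0 for j in range(n)] for i in range(n)]

def perron_root(A, iters=2000):
    """Approximate rho(A) by power iteration (A nonnegative, primitive)."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)
        x = [v / lam for v in y]
    return lam

A = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 2.0, 0.0]]

r = row_sums(A)                      # [3.0, 4.0, 4.0]
rho = perron_root(A)
assert min(r) < rho < max(r)         # strict: A is irreducible, row sums differ
assert aux(A) == [[0.0, 3.0, 3.0], [4.0, 0.0, 4.0], [4.0, 4.0, 0.0]]
```

Since the row sums of A are not all equal and A is irreducible, both inequalities come out strict, in line with the remark after (1).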
Next, recall the concepts of minimal and maximal cycle (geometric) means. For an arbitrary matrix A ∈ R^{n×n}_+ these quantities are defined by

µ(A) = max_{(i_1,...,i_ℓ) ∈ C(A)} (a_{i_1 i_2} · a_{i_2 i_3} · ... · a_{i_ℓ i_1})^{1/ℓ},   ν(A) = min_{(i_1,...,i_ℓ) ∈ C(A)} (a_{i_1 i_2} · a_{i_2 i_3} · ... · a_{i_ℓ i_1})^{1/ℓ},

where C(A) denotes the set of cycles of the associated graph. Recall that the directed weighted graph associated with an arbitrary complex matrix A ∈ C^{n×n} is defined by the set of nodes N = {1, ..., n} and the set of edges E such that (i, j) ∈ E if and only if a_{ij} ≠ 0, in which case edge (i, j) is assigned the weight a_{ij}. By the results of Al'pin [2] and Elsner and van den Driessche [11], for any nonnegative matrix A with B = Aux(A) we have

ν(B) ≤ ρ(A) ≤ µ(B).   (4)

If A, and hence B, is irreducible then either ν(B) = ρ(A) = µ(B) or (if ν(B) < µ(B)) the inequalities in (4) are strict. Exploiting similar ideas, Boros, Brualdi, Crama and Hoffman [4] investigated a class of complex matrices A ∈ C^{n×n} with prescribed off-diagonal row sums of the moduli of their entries, prescribed associated graph, and prescribed moduli of all diagonal entries. In the case when G(A) is strongly connected with at least two cycles (scwaltcy), they investigated the existence of a positive vector x satisfying

|a_{ii}| x_i ≥ Σ_{j≠i} |a_{ij}| x_j,   i = 1, ..., n,   (5)

for all matrices from the class simultaneously, and described the cases when all inequalities in (5) are strict [4, Theorem 1.1], at least one of the inequalities is strict [4, Theorem 1.2], or all inequalities hold with equality [4, Theorem 1.3]. These results imply generalizations of Geršgorin's theorem due to Brualdi [5]. Following the statement of [4, Theorem 1.4] the authors provide a detailed outline of the proof that Brualdi's conditions are sharp.
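The cycle-mean bounds (4) can also be checked numerically: the sketch below enumerates the simple cycles of G(B) by brute force (adequate for small n) and verifies ν(B) ≤ ρ(A) ≤ µ(B) for B = Aux(A). The example matrix is illustrative, not from the paper.

```python
# Numerical check of the bounds (4): nu(B) <= rho(A) <= mu(B) with B = Aux(A).

from itertools import permutations

def cycle_means(B):
    """Geometric means of all simple (directed) cycles of G(B)."""
    n = len(B)
    means = []
    for L in range(1, n + 1):
        for nodes in permutations(range(n), L):
            if nodes[0] != min(nodes):        # one rotation per cycle
                continue
            w = [B[nodes[k]][nodes[(k + 1) % L]] for k in range(L)]
            if all(v != 0 for v in w):
                p = 1.0
                for v in w:
                    p *= v
                means.append(p ** (1.0 / L))
    return means

def perron_root(A, iters=2000):
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)
        x = [v / lam for v in y]
    return lam

A = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 2.0, 0.0]]
B = [[0.0, 3.0, 3.0],      # B = Aux(A): row sums 3, 4, 4 on the same pattern
     [4.0, 0.0, 4.0],
     [4.0, 4.0, 0.0]]

nu, mu = min(cycle_means(B)), max(cycle_means(B))
rho = perron_root(A)
assert nu <= rho <= mu     # here sqrt(12) <= rho <= 4
```

For this B the extreme cycle means are sqrt(12) ≈ 3.46 and 4, noticeably tighter than the row-sum bounds 3 and 4 from (1).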
In this paper we mainly deal with the two classes of matrices described in the abstract. These classes are similar to those in [4], but we drop the requirement that G(B) is scwaltcy; in particular, we also handle the reducible (not strongly connected) case. However, we do not prescribe the moduli of the diagonal entries, including these moduli in the row sums instead. This allows us, in particular, to combine the problem statement of Boros, Brualdi, Crama and Hoffman [4] with that of Al'pin [2] and Elsner and van den Driessche [11], and to generalize all of the above-mentioned results by removing the restriction that B is irreducible. The main results of this paper characterize the Perron roots or the sets of eigenvalues of the classes of matrices under consideration.
At the end of the paper we present a new constructive proof of the Camion-Hoffman theorem [8] (see also [10]). This theorem characterizes the regularity of a class of complex matrices with prescribed moduli of their entries. The scaling result of Section 2.3 is crucial for our new proof (which also makes use of one of the previously mentioned characterization results). Since we are dealing here with complex rather than nonnegative matrices, the triangle inequality (implicit in Lemma 4.10) also plays a role.
Other proofs of the Camion-Hoffman theorem have been given by Levinger and Varga [16], and Engel [12].

Contents of the paper
The rest of this paper is organized as follows. Section 1.3 is a reminder of the Frobenius normal form of nonnegative matrices. Section 2 is devoted to a form of diagonal similarity scaling called visualization scaling [19] or Fiedler-Pták scaling [14] (see also [1]). Interest in this scaling has been motivated by its use in max algebra, see for example [6] and [7]. Lemmas 2.4 and 2.5 can be used to generalize the simultaneous scaling results of Boros, Brualdi, Crama and Hoffman [4, Theorems 1.1-1.3] to include the reducible case. This also yields a derivation of the bounds of Al'pin, Elsner and van den Driessche (Theorem 2.6). Theorem 2.8 establishes the existence of an advanced visualization scaling, which is applied in the proof of the Camion-Hoffman theorem.
In Section 3 we consider the class of nonnegative matrices with prescribed graph and prescribed row sums. Theorem 3.7 characterizes the set of possible Perron roots of such matrices, also when B is reducible; this is one of the main results of this paper. The proof is based on analyzing the sunflower subgraphs of G(B), a technique well known in max algebra [15]. As an immediate corollary of Theorem 3.7, for irreducible B with ν(B) < µ(B) and any r with ν(B) < r < µ(B) there exists A with Aux(A) = B such that ρ(A) = r.
In Section 4.1 we consider the class of complex matrices with prescribed graph and prescribed row sums of the moduli of their entries. We seek a characterization of the set of nonzero eigenvalues of such matrices, starting with the irreducible case in Theorem 4.4. In this case we show, in particular, that when B has more than one cycle, the set of possible nonzero eigenvalues of A satisfying Aux(A) = B consists of all s satisfying 0 < |s| < µ(B) when ν(B) < µ(B), or of all s satisfying 0 < |s| ≤ µ(B) when ν(B) = µ(B). Then, based on the irreducible case, the full characterization in the reducible case is given in Theorem 4.9. In addition, the occurrence of a zero eigenvalue is treated in Theorem 4.2.
In Section 4.2 a new proof of the Camion-Hoffman theorem [8] is given, based on the advanced visualization scaling of Section 2.3 and the characterization result of Theorem 4.9.

Frobenius normal form
Let A be a square nonnegative matrix. If A is irreducible (i.e., the associated digraph is strongly connected) then according to the Perron-Frobenius theorem A has a unique (up to a multiple) positive eigenvector corresponding to the Perron root ρ(A) (which is also the greatest modulus of all eigenvalues of A). If A is reducible then by means of simultaneous permutations of rows and columns or, equivalently, an application of the similarity P^{-1}AP, where P is a permutation matrix, A can be brought to the form

P^{-1}AP =
( A_1  *    ...  *   )
( 0    A_2  ...  *   )
( ...  ...  ...  ... )
( 0    0    ...  A_m ),

where the square blocks A_1, ..., A_m correspond to the maximal strongly connected components of the associated graph. These diagonal blocks A_1, ..., A_m will be further referred to as classes of A. Note that each class A_i is either a nonzero irreducible matrix, in which case it is called nontrivial, or a zero diagonal entry (and then it is called trivial). If some component G(A_i) of the associated graph G(A) does not have access to any other component, which means that there is no edge connecting one of its nodes to a node in another component, then this component, and the corresponding class A_i, is called final. Otherwise, this component and the corresponding class are called transient. The entries denoted by 0 are off-diagonal blocks of zeros of appropriate dimensions, and * denotes submatrices of appropriate dimensions whose zero-nonzero pattern is unimportant.
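The classes of A and their final/transient status can be computed mechanically: the sketch below finds the strongly connected components of G(A) via a transitive closure (adequate for small n) and marks a class as final when it has no edge leaving its node set. The 5×5 example and helper names are illustrative, not from the paper.

```python
# Classes of a nonnegative matrix: strongly connected components of G(A),
# each marked final (no access to any other component) or transient.

def classes(A):
    n = len(A)
    reach = [[A[i][j] != 0 or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                      # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    comps, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        comp = [j for j in range(n) if reach[i][j] and reach[j][i]]
        seen.update(comp)
        comps.append(comp)
    def is_final(comp):
        return all(A[i][j] == 0 for i in comp for j in range(n) if j not in comp)
    return [(comp, is_final(comp)) for comp in comps]

A = [[0, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 0, 0, 1, 0],     # trivial class {2} (zero diagonal entry)
     [0, 0, 0, 0, 2],
     [0, 0, 0, 2, 0]]

out = classes(A)
assert out == [([0, 1], False), ([2], False), ([3, 4], True)]
```

Here {0, 1} and {2} are transient (both have access to {3, 4}), while the nontrivial class {3, 4} is final.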

Visualization of auxiliary matrices
In this section we assume that A is a nonnegative matrix such that G(A) contains at least one cycle. Let us introduce some terminology related to max algebra and visualization. By the critical graph of A we mean the subgraph of G(A) consisting of all nodes and edges on the cycles whose geometric mean equals µ(A) (also speaking of critical nodes and edges); a node is called strictly critical if all edges emanating from it are critical. Similarly, by the anticritical graph we mean the subgraph of G(A) consisting of all nodes and edges on the cycles whose geometric mean equals ν(A) (also speaking of anticritical nodes and edges). A node is called strictly anticritical if all edges emanating from it are anticritical.
The existence of such a vector was proved by Engel and Schneider [13, Theorem 7.2] in the irreducible case, and the result was extended to reducible matrices in [19].
The existence of such a scaling follows from the existence of a visualization scaling, applied to the matrix obtained from A by elementwise inversion of its nonzero entries.
The following lemmas are based on the results on simultaneous scaling found in [4, Theorems 1.1-1.3]. We make the arguments of [4] more precise by basing them on the existence of strictly visualizing vectors [19]. Proof: Assume that µ(B) = 1. If i is strictly critical, we obtain that x_j = x_k for all j and k such that both (i, j) ∈ E(A) and (i, k) ∈ E(A); hence we can take any k with (i, k) ∈ E(A). Suppose now that i is not strictly critical. If i is not critical, the required inequality follows directly; if i is critical (but not strictly), we obtain x_l = x_k > x_h for the corresponding indices l and h.

Bounds of Al'pin, Elsner and van den Driessche
We call a nonnegative matrix A truly substochastic if Σ_j a_ij ≤ 1 for all i and Σ_j a_ij < 1 for some i. In a similar way, A is called truly superstochastic if Σ_j a_ij ≥ 1 for all i and Σ_j a_ij > 1 for some i. The following known result can now be obtained from Lemmas 2.4 and 2.5.
Since ν(B) < µ(B), not all nodes of G(B) are strictly critical. Taking any strictly visualizing vector x and setting X = diag(x), the matrix X^{-1}AX is a truly substochastic matrix multiplied by µ(B), as claimed. As X^{-1}AX is also irreducible, it follows that ρ(A) = ρ(X^{-1}AX) < µ(B). The inequality ρ(A) < µ(B) can also be obtained (following an argument found, for instance, in [11]) by multiplying the system Ax ≤ µ(B)x, in which at least one of the inequalities is strict, from the left by a row vector z such that zA = ρ(A)z (such z has no zero components if A is irreducible).
Not all nodes of G(B) are strictly anticritical, either. Taking any strictly anticritically visualizing vector y and setting Y = diag(y), the matrix Y^{-1}AY is a truly superstochastic matrix multiplied by ν(B), whence ρ(A) > ν(B). The inequality ρ(A) > ν(B) can also be obtained by multiplying the system Ay ≥ ν(B)y, in which at least one of the inequalities is strict, from the left by a row vector z such that zA = ρ(A)z (such z has no zero components if A is irreducible).

Sum visualization
Definition 2.7. For A ∈ R^{n×n}_+ and a > 0, a vector x ∈ R^n_+ is called an a-sum visualizing vector of A if the entries of C = X^{-1}AX with X = diag(x) satisfy c_ij ≤ a for all i, j and Σ_j c_ij ≥ a for all i. In this case C is called an a-sum visualization of A.
Recall that we have µ(A) ≤ ρ(A) for any nonnegative matrix; indeed, for any positive x and any cycle (i_1, ..., i_ℓ), the geometric cycle means of A and of X^{-1}AX coincide. Let a ∈ α(A) and let C = X^{-1}AX (for some diagonal X) be such that c_ij ≤ a for all i, j and Σ_j c_ij ≥ a for all i. Then µ(C) ≤ a and ρ(C) ≥ a, and as µ and ρ are invariant under diagonal similarity, we obtain µ(A) ≤ a ≤ ρ(A). Conversely, let µ(A) ≤ a ≤ ρ(A). We can assume without loss of generality (dividing A by a if necessary) that a = 1 and µ(A) ≤ 1 ≤ ρ(A).
As µ(A) ≤ 1, there exists a nonsingular diagonal matrix X such that all entries g_ij of G := X^{-1}AX satisfy 0 ≤ g_ij ≤ 1. Since G is diagonally similar to A, ρ(A) is also the spectral radius of G, and hence there exists a vector z whose entries satisfy max_i z_i = 1 and Σ_j g_ij z_j / z_i ≥ 1 for all i. We will now construct an entrywise nonincreasing sequence of vectors {y^(s)}_{s≥0} bounded from below by z. Such a sequence obviously converges, and as we will argue, the limit, denoted by y, satisfies g_ij y_j / y_i ≤ 1 for all i, j, and Σ_j g_ij y_j / y_i ≥ 1 for all i (and, obviously, y ≥ z).
Let us define a continuous mapping f coordinatewise as follows. Now let y^(0) = (1, 1, ..., 1) and consider the sequence {y^(s)}_{s≥0} defined by y^(s+1) = f(y^(s)). It follows by induction that y^(s) ≥ z for all s. The case s = 0 is the basis of the induction (since z_i ≤ 1 for all i); we have to show that y^(s+1) ≥ z knowing that y^(s) ≥ z, which amounts to verifying that y^(s+1)_i ≥ z_i for each i. As the sequence {y^(s)}_{s≥0} is nonincreasing and bounded from below, it has a limit, which we denote by y. As f is continuous, this limit satisfies f(y) = y, which by the definition of f implies that Σ_j g_ij y_j / y_i ≥ 1 for all i. We now show by induction on s that g_ij y^(s)_j / y^(s)_i ≤ 1 for all i ≠ j. Denote by I_s the corresponding set of indices at step s. The case s = 0 is the basis of the induction, so we assume that the claim holds for s and prove it for s + 1. For i, j ∉ I_s the inequality follows since g_ij y^(s)_j is just one of the nonnegative terms of the sum in the denominator.
Finally, if i ∉ I_s and j ∈ I_s, then g_ij y^(s+1)_j / y^(s+1)_i ≤ 1 as well. Thus the inequalities g_ij y^(s)_j / y^(s)_i ≤ 1 hold for all i ≠ j and all s, and this implies that for the limit point y all the inequalities g_ij y_j / y_i ≤ 1 hold as well. The case i = j is trivial, since the inequality g_ii y_i / y_i = g_ii ≤ 1 holds for all i. Let D be the diagonal matrix with d_ii = y_i x_i for all i. For the entries c_ij of C = D^{-1}AD we have c_ij ≤ 1 for all i, j and Σ_j c_ij ≥ 1 for all i, so the theorem is proved.
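The fixed-point iteration from the proof above can be sketched numerically. The display defining the map f is not reproduced in the text, so the concrete choice below, f(y)_i = min(y_i, Σ_j g_ij y_j), is our assumption, consistent with the stated properties: the sequence is nonincreasing, and at a fixed point Σ_j g_ij y_j / y_i ≥ 1; on this example g_ij y_j / y_i ≤ 1 also holds, so diag(y)^{-1} G diag(y) is a 1-sum visualization.

```python
# Iterating y <- f(y), with f(y)_i = min(y_i, sum_j g_ij y_j), for a matrix G
# with entries in [0, 1] (so mu(G) <= 1) and rho(G) >= 1. The map f is an
# assumption: the proof's defining display is not reproduced in the text.

def one_sum_visualize(G, tol=1e-12, max_iter=10000):
    n = len(G)
    y = [1.0] * n
    for _ in range(max_iter):
        y_new = [min(y[i], sum(G[i][j] * y[j] for j in range(n)))
                 for i in range(n)]
        if all(abs(y_new[i] - y[i]) < tol for i in range(n)):
            return y_new
        y = y_new
    return y

# Example with mu(G) <= 1 <= rho(G): entries bounded by 1, row sums 0.4 and 1.9.
G = [[0.0, 0.4],
     [1.0, 0.9]]
y = one_sum_visualize(G)                      # converges to (0.4, 1.0)
C = [[G[i][j] * y[j] / y[i] for j in range(2)] for i in range(2)]
assert all(C[i][j] <= 1.0 + 1e-9 for i in range(2) for j in range(2))
assert all(sum(C[i]) >= 1.0 - 1e-9 for i in range(2))
```

The deficient first row (sum 0.4) forces y_1 down to 0.4, after which the scaled matrix C has entries at most 1 and every row sum at least 1, as Definition 2.7 requires with a = 1.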
Note that elementwise inversion of the nonzero entries exchanges the roles of µ and ν (there is no such inversion for the Perron root), and let us formulate the following corollary of Theorem 2.8.
The following are equivalent: Proof: The corollary follows by elementwise inversion of the nonzero entries and applying Theorem 2.8.

Nonnegative reducible matrices
Here we characterize the Perron roots of nonnegative matrices with prescribed row sums and prescribed graph. Section 3.1 is devoted to sunflower graphs, which will be used in the proof of the main result. Section 3.2 contains the main result and an example.

Sunflowers
We introduce the following definition, inspired by the description of Howard's algorithm in [9] and [15, Chapter 6].
Definition 3.1. Let G be a weighted graph. A subgraph G̃ of G is called a sunflower subgraph of G if the following conditions hold: (i) if a node in G has an outgoing edge, then it has a unique outgoing edge in G̃; (ii) every edge in G̃ has the same weight as the corresponding edge in G.
It is easy to see ([15]) that such a digraph can be decomposed into several isolated components, each of them either acyclic or consisting of a unique cycle and some walks leading to it. A sunflower subgraph G̃ of G is called a simple γ-sunflower subgraph of G if γ is the unique cycle of G̃. The set of all sunflower subgraphs of the weighted digraph G(B), with full node set {1, ..., n}, will be denoted by S(B).
Denoting by µ(G) the maximal cycle mean of a subgraph G ⊆ G(B), we introduce the following parameters:

Lemma 3.2. Let G be a strongly connected graph. Then, for any cycle γ of G there exists a simple γ-sunflower subgraph of G.
Observe first that we can construct a simple γ-sunflower on nodes 1, . . . , k: this is just the cycle γ itself.
The proof is by contradiction. Assume that a simple γ-sunflower G̃ can be constructed for a subgraph induced by a set of nodes M which contains the nodes 1, ..., k and is a proper subset of {1, ..., n}, and that M is a maximal such set. However, since G is strongly connected, there is a walk W from {1, ..., n}\M to M, and we can pick the last edge of that walk and its last node before the walk enters M. Adding that node and that edge to G̃ enlarges it while it remains a simple γ-sunflower (of a subgraph induced by a larger node set). This contradicts the maximality of M and shows that we can construct a simple γ-sunflower of G.
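The proof of Lemma 3.2 is constructive and can be sketched as follows: start with the cycle γ itself and repeatedly attach the last edge of a walk leading from the still-missing nodes into the current node set M. The edge-list encoding and helper names below are ours, not the paper's.

```python
# Constructing a simple gamma-sunflower of a strongly connected graph by
# greedily attaching edges that enter the already-covered node set.

def simple_sunflower(edges, n, gamma):
    """Return {node: successor} for a simple gamma-sunflower of a strongly
    connected graph on nodes 0..n-1 with directed edge set `edges`."""
    succ = {gamma[k]: gamma[(k + 1) % len(gamma)] for k in range(len(gamma))}
    M = set(gamma)
    while len(M) < n:
        # an edge entering M from outside exists by strong connectivity
        u, v = next((u, v) for (u, v) in edges if u not in M and v in M)
        succ[u] = v                 # the unique outgoing edge chosen for u
        M.add(u)
    return succ

edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 0), (2, 0)]
s = simple_sunflower(edges, 4, (0, 1))
assert s[0] == 1 and s[1] == 0      # gamma is the unique cycle
for start in (2, 3):                # every other node walks into gamma
    node, steps = start, 0
    while node not in (0, 1) and steps < 4:
        node, steps = s[node], steps + 1
    assert node in (0, 1)
```

Every node ends up with exactly one outgoing edge, and all walks lead into the unique cycle γ, as in the decomposition described after Definition 3.1.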
Let us also recall the following. A sunflower subgraph which has cycles only in the final classes of G(B) will be called thin. In the proof of Proposition 3.4 we actually established the following result.

Lemma 3.6. Let G be a graph in which each node has an outgoing edge, and let G_i for i = 1, ..., q be the nontrivial final components of G. For each collection of cycles α_i in G_i, i = 1, ..., q, there is a (thin) sunflower subgraph of G whose cycles are α_1, ..., α_q. If all final components of G are trivial, then there exists an acyclic sunflower subgraph of G (i.e., a directed forest).

Range of the Perron root
We are going to extend Theorem 2.6 to include the reducible case and describe η(B) for a general row uniform nonnegative matrix B. As an immediate corollary we obtain the following result in the irreducible case.

Example. Given an irreducible row uniform matrix B ∈ R^{n×n}_+ and a constant ρ ∈ (ν(B), µ(B)) = (m(B), M(B)), we describe a method for constructing a matrix A such that Aux(A) = B and ρ(A) = ρ. Take two simple sunflowers: a γ_1-sunflower where the cycle γ_1 has cycle mean equal to µ(B), and a γ_2-sunflower where the cycle γ_2 has cycle mean equal to ν(B). Denote by A_1 the matrix associated with the first sunflower and by A_2 the matrix associated with the second one. We have ρ(A_1) = µ(B) and ρ(A_2) = ν(B). For the convex combinations A_λ := (1 − λ)A_1 + λA_2 with 0 ≤ λ ≤ 1, the Perron root ρ(A_λ) assumes all values between ν(B) and µ(B); this follows from the continuity of the spectral radius as a function of λ (as in the more general construction above). The value of λ for which ρ(A_λ) = ρ can be found from the system A_λ x = ρx, which has n + 1 unknowns (the n components of x and the parameter λ). However, since x can be multiplied by any scalar, one of the coordinates of x can be chosen equal to 1. Then, for at least one such choice, the existence of a solution is guaranteed.
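The convex-combination step can be sketched numerically. The two matrices below play the roles of A_1 and A_2 (they are illustrative sunflower-type matrices, not taken from the paper, with ρ(A_1) = √10 and ρ(A_2) = 2), and bisection locates the λ with ρ(A_λ) equal to a prescribed target, using the continuity of the spectral radius in λ.

```python
# A_lambda = (1 - lambda) * A1 + lambda * A2 interpolates between Perron
# roots sqrt(10) and 2; bisection finds lambda with rho(A_lambda) = target.

def perron_root(A, iters=3000):
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)
        x = [v / lam for v in y]
    return lam

A1 = [[0.0, 0.0, 2.0], [4.0, 0.0, 0.0], [5.0, 0.0, 0.0]]  # cycle mean sqrt(10)
A2 = [[2.0, 0.0, 0.0], [4.0, 0.0, 0.0], [5.0, 0.0, 0.0]]  # self-loop, mean 2

def rho_lambda(lam):
    A = [[(1 - lam) * A1[i][j] + lam * A2[i][j] for j in range(3)]
         for i in range(3)]
    return perron_root(A)

target = 3.0                       # any value strictly between 2 and sqrt(10)
lo, hi = 0.0, 1.0                  # rho(A_lambda) is decreasing here
for _ in range(60):
    mid = (lo + hi) / 2
    if rho_lambda(mid) > target:
        lo = mid
    else:
        hi = mid
assert abs(rho_lambda(lo) - target) < 1e-6
assert abs(lo - 0.25) < 1e-6       # exact solution for this example
```

For this pair one can solve by hand: ρ(A_λ) = λ + √(λ² + 10(1 − λ)), which equals 3 exactly at λ = 1/4, matching the bisection output.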

Complex matrices
In Section 4.1 we characterize the set of eigenvalues of complex matrices with prescribed graph and prescribed row sums of the moduli of their entries.
In Section 4.2, a new proof of the Camion-Hoffman theorem is presented. Here |A| denotes the matrix whose entries are the moduli of the (complex) entries of A. We first consider the conditions under which 0 ∈ σ(B). In what follows, the imaginary unit "i" is denoted by ℑ. By a generalized diagonal product of B we mean a product of the form b_{1σ(1)} · b_{2σ(2)} · ... · b_{nσ(n)}, where σ is a permutation of {1, ..., n}. Suppose that B has no nonzero generalized diagonal products, and let A be such that Aux(A) = B. Since all the generalized diagonal products of A are then zero, we have det(A) = 0 and 0 ∈ σ(B).
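Whether B admits a nonzero generalized diagonal product can be tested by brute force over permutations (for larger n this is a bipartite matching problem); if none exists, every term of the Leibniz determinant expansion of any A with Aux(A) = B vanishes, so det(A) = 0. The patterns below are illustrative, not from the paper.

```python
# Testing for a nonzero generalized diagonal product of a pattern matrix B.

from itertools import permutations

def has_nonzero_diagonal_product(B):
    n = len(B)
    return any(all(B[i][s[i]] != 0 for i in range(n))
               for s in permutations(range(n)))

B1 = [[0, 1, 1],
      [1, 0, 0],
      [1, 0, 0]]   # rows 2 and 3 both need column 1: no nonzero product
B2 = [[1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]   # the identity permutation already gives a nonzero product

assert not has_nonzero_diagonal_product(B1)
assert has_nonzero_diagonal_product(B2)
```

For B1, any matrix with this pattern has its last two rows supported on a single column and is therefore singular, illustrating the implication 0 ∈ σ(B).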

Complex matrices with prescribed row sums of moduli
Suppose that B has two or more nonzero generalized diagonal products. Let us permute the columns of B in order to put one of these generalized diagonal products on the (main) diagonal; in other words, consider BP, where P is a permutation matrix such that all diagonal entries of BP are nonzero. We have Aux(A) = B if and only if Aux(AP) = BP, and det(A) = 0 if and only if det(AP) = 0; therefore 0 ∈ σ(B) if and only if 0 ∈ σ(BP). As BP has at least one nonzero generalized diagonal product different from the main diagonal, the Frobenius normal form of BP has a nontrivial diagonal block of dimension greater than 1.
Denote the index set of that block by M, and let us take any row uniform nonnegative matrix D = (d_kl) such that Aux(D) = BP. For each k ∈ M, denote by n_k the number of outgoing edges of the kth node in the associated digraph of BP that go to nodes in M. As the block is irreducible, of dimension greater than 1, and has all diagonal entries nonzero, we have n_k > 1. Let t_k be a bijection between these outgoing edges of k and {1, 2, ..., n_k}, and define the matrix C = (c_kl) by c_kl = d_kl · e^{2πℑ t_k((k,l))/n_k} when k, l ∈ M and (k, l) is an edge, and c_kl = d_kl otherwise. Then Aux(|C|) = BP. In addition, C_MM v = 0, where C_MM is the principal submatrix of C extracted from the rows and columns with indices in M, and v is the vector with all components equal to 1: since D is row uniform, the n_k nonzero entries of row k of C_MM are a common value multiplied by the n_k-th roots of unity, and hence sum to zero. This implies det(C_MM) = 0, hence det(C) = 0, so 0 ∈ σ(BP) and 0 ∈ σ(B).
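The roots-of-unity construction behind C_MM v = 0 can be checked numerically: distributing the n_k-th roots of unity over the n_k equal nonzero entries of each row of a row uniform block makes every row sum to zero, hence the all-ones vector is in the kernel. The 3×3 block D and the helper name are illustrative, not from the paper.

```python
# Assigning roots-of-unity phases to the equal nonzero entries of each row
# of a row uniform block so that every row sums to zero.

import cmath

def zero_row_sum_block(D_MM):
    """D_MM: row uniform nonnegative block, each row with >= 2 nonzeros."""
    n = len(D_MM)
    C = [[0j] * n for _ in range(n)]
    for k in range(n):
        targets = [l for l in range(n) if D_MM[k][l] != 0]
        nk = len(targets)
        for t, l in enumerate(targets):       # t plays the role of t_k((k,l))
            C[k][l] = D_MM[k][l] * cmath.exp(2j * cmath.pi * t / nk)
    return C

D = [[1.0, 1.0, 0.0],
     [0.0, 2.0, 2.0],
     [3.0, 0.0, 3.0]]
C = zero_row_sum_block(D)
for row in C:
    assert abs(sum(row)) < 1e-9               # C v = 0 for v = (1, 1, 1)
assert all(abs(abs(C[k][l]) - D[k][l]) < 1e-9
           for k in range(3) for l in range(3))
```

Since |C| = D entrywise, the moduli pattern (and hence the auxiliary matrix) is unchanged, while C is singular.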
We now describe σ(B)\{0}, starting from the irreducible case. In the remaining case λ ≤ ν(B) we will construct a row uniform matrix H such that ν(H) ≤ λ ≤ µ(H). Since B is irreducible and has at least two cycles, there exists a row whose index belongs to one of those cycles and which contains at least two nonzero elements, one of which must lie on that cycle. Let t be the index of such a row, and consider a cycle α going through that row, with cycle mean c and length ℓ. If c ≤ λ, it follows that ν(B) ≤ λ ≤ µ(B), and we select H = B. If c > λ, then we multiply all entries of row t by the scalar z such that c · z^{1/ℓ} = λ, i.e., z = (λ/c)^ℓ. Let H be the resulting matrix, so that 0 < ν(H) ≤ λ ≤ µ(H).
Thus we can assume that either ν(H) = λ = µ(H) or ν(H) < λ < µ(H), where H is obtained from B by multiplying row t, which has at least two nonzero entries, by a nonnegative scalar z ≤ 1. Then by Theorem 3.7 there is a nonnegative matrix E with an eigenvector v such that Ev = λv and Aux(E) = H, where row t has at least two nonzero entries, which we denote by e_tk and e_tl. Since E is irreducible, all components of v are positive. We now modify row t of E to form a matrix C such that Aux(C) = B and Cv = λv: let x be such that the tth equation of Cv = λv holds (it can be observed that this equation can be explicitly resolved with respect to x), and let C coincide with E otherwise. Then Aux(|C|) = B and Cv = λv, so λ is an eigenvalue of C. The claim follows.
We call a class of complex matrices regular if all matrices in the class are nonsingular. Remark 4.6. As noted in the abstract and introduction of [4], the theorems in that paper imply Brualdi's [5] conditions for the nonsingularity of matrices and show that they are sharp. There is no essential difference or simplification in assuming that the main diagonal of the matrices considered there is the identity, and in that case the spectral content of [4] is the same. (b) For all i ∈ K we have b̃_ij < b_ij for all j where b_ij > 0.
(c) For all i ∉ K and all j we have b̃_ij = b_ij.
We will also need the following variation of Definition 4.1.

Case 3: µ(B) = ν(B) and there is a cycle avoiding the nodes with indices in K. In this case µ(Aux(|A|)) = µ(B) for all A with Aux(A) K B, but ν(Aux(|A|)) < µ(Aux(|A|)) for all such matrices.
In all three cases we obtain the claim by applying Theorem 4.4 to all Aux(|A|) satisfying Aux(|A|) K B.
We are now ready to deal with the general reducible case.
Proof: It is known that λ is an eigenvalue of a matrix A ∈ C^{n×n} if and only if det(A − λI) = 0, which implies that the spectrum of A ∈ C^{n×n} (i.e., the set of eigenvalues of A) is the union of the spectra of its nontrivial classes in the Frobenius normal form.

Example. To illustrate the last theorem, let us consider the following row uniform matrices: The moduli of the eigenvalues in σ(C) assume all values in (0, 3] ∪ {4} ∪ {5}. Here M̃(C) = 3, which is the maximum cycle mean of the only final class, which is multicyclic. As the means of all cycles in that class are equal to each other, the value of M̃(C) belongs to σ(C).

Camion-Hoffman theorem
We will now apply Theorem 2.8 and Theorem 4.9 to give a new proof of a theorem of Camion and Hoffman [8].
Let us first recall the following known facts and a definition. Proof: Since the conditions of Lemma 4.10 are satisfied by a_i1, ..., a_in for all i, there exists a complex matrix C with |C| = A such that Σ_j c_ij = 0 for all i. This implies det(C) = 0.
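The triangle-inequality step behind Lemma 4.10 and Corollary 4.11 admits a constructive sketch: given nonnegative a_1, ..., a_n with max a_j not exceeding the sum of the others, one can find complex numbers z_j with |z_j| = a_j and z_1 + ... + z_n = 0 (a matrix whose every row does this annihilates the all-ones vector, hence is singular). The three-bucket strategy below is our own illustrative algorithm, not the paper's proof: the largest value forms one side of a triangle, the rest are split greedily into two balanced groups, and the law of cosines closes the triangle.

```python
# Constructing complex numbers of prescribed moduli that sum to zero,
# assuming max(a) <= sum of the remaining entries (all a_j >= 0).

import cmath, math

def zero_sum_phases(a):
    order = sorted(range(len(a)), key=lambda j: -a[j])
    bucket = {order[0]: 0}              # largest entry: side 0 of the triangle
    sums = [a[order[0]], 0.0, 0.0]
    for j in order[1:]:                 # greedy: keep sides 1 and 2 balanced
        b = 1 if sums[1] <= sums[2] else 2
        bucket[j] = b
        sums[b] += a[j]
    p, q, r = sums
    if q == 0.0:                        # then all entries must be zero
        return [0j] * len(a)
    c = max(-1.0, min(1.0, (r * r - p * p - q * q) / (2 * p * q)))
    d0, d1 = 1.0 + 0j, cmath.exp(1j * math.acos(c))
    v2 = -(p * d0 + q * d1)             # third side closes the polygon, |v2| = r
    dirs = [d0, d1, v2 / r if r > 0 else 0j]
    return [a[j] * dirs[bucket[j]] for j in range(len(a))]

z = zero_sum_phases([3.0, 1.0, 1.0, 1.0])
assert abs(sum(z)) < 1e-9
assert all(abs(abs(zj) - aj) < 1e-9 for zj, aj in zip(z, [3.0, 1.0, 1.0, 1.0]))
```

The greedy split guarantees |sums[1] − sums[2]| never exceeds the largest remaining entry, so the three side lengths always satisfy the triangle inequality and the law-of-cosines angle exists.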
We will investigate the following matrix class: Ω(A) := {C ∈ C^{n×n} : |C| = A}. The following statements are equivalent: (i) the class Ω(A) is regular; (ii) there exist a permutation matrix P and a diagonal matrix D such that P AD is strictly diagonally dominant; (iii) there exist a permutation matrix P and nonsingular diagonal matrices D_1, D_2 such that all diagonal entries of D_1 P AD_2 are equal to 1 and µ(Aux(D_1 P AD_2 − I)) < 1.
Proof: (i) ⇒ (ii): Assume that Ω(A) is regular. Let P be a permutation matrix such that the diagonal product of P A is greater than or equal to any generalized diagonal product of A. Since A is nonsingular, the diagonal elements of E = P A are nonzero. Let D be the diagonal matrix whose entries are the inverses of the corresponding diagonal elements of P A. Since all diagonal entries of P AD are equal to 1, for any cycle α we can find a generalized diagonal product of P AD equal to the product of the entries of α. Since any generalized diagonal product of P AD is less than or equal to 1, it follows that µ(P AD) = 1.
We will now establish that ρ(P AD − I) < 1. The proof is by contradiction: assume that ρ(P AD − I) ≥ 1. Then P AD − I has a class B such that ρ(B) ≥ 1 and b_ii = 0 for all i. Since µ(P AD − I) ≤ 1 we also have µ(B) ≤ 1. Applying Theorem 2.8 to B, we obtain a nonnegative diagonal matrix Y such that the matrix E := Y^{-1}(B + I)Y has entries satisfying 0 ≤ e_ij ≤ 1 for all i, j, with e_ii = 1 and Σ_{k≠i} e_ik ≥ 1 for all i. By Corollary 4.11 there is a matrix H = (h_ij) with complex entries satisfying |H| = E and det(H) = 0. Replacing the class B + I of P AD by Y HY^{-1}, we obtain a matrix G with det(G) = 0 and G ∈ Ω(P AD). As P is a permutation matrix and D is diagonal, there is a bijective correspondence between Ω(P AD) and Ω(A) under which singularity and nonsingularity are preserved. This contradicts the assumption that Ω(A) does not contain a singular matrix, and hence ρ(P AD − I) < 1.
Since ρ(P AD − I) < 1, there exists a diagonal matrix Z such that Z^{-1}(P AD − I)Z has all row sums strictly less than 1; see [3, Chapter 6] or [17] for a detailed argument. (Such a diagonal matrix Z can be constructed using the Perron eigenvectors of the nontrivial classes.) As all row sums of the matrix Z^{-1}(P AD − I)Z = Z^{-1}P ADZ − I are strictly less than 1, it follows that the matrix P ADZ is strictly diagonally dominant, with P a permutation matrix and DZ a diagonal matrix, as required.
(ii) ⇒ (iii): If P AD is strictly diagonally dominant, then there is a diagonal matrix D_1 such that the diagonal entries of D_1 P AD are equal to 1 and the row sums of D_1 P AD − I are strictly less than 1. As each entry of Aux(D_1 P AD − I) is strictly less than 1, we also have µ(Aux(D_1 P AD − I)) < 1, as claimed.
(iii) ⇒ (i): The proof is by contradiction. Assume that (iii) holds but (i) does not; that is, there exist a permutation matrix P and nonsingular diagonal matrices D_1, D_2 such that µ(Aux(D_1 P AD_2 − I)) < 1, and (in contradiction with (i)) there exists C ∈ Ω(A) with det(C) = 0. Then µ(Aux(D_1 P |C| D_2 − I)) = µ(Aux(D_1 P AD_2 − I)) < 1, and by Theorem 4.9 we have 1 ∉ σ(Aux(D_1 P AD_2 − I)). However, det(D_1 P CD_2) = 0, and we can multiply the rows of D_1 P CD_2 by complex numbers of modulus 1 to obtain a matrix with zero determinant and all diagonal entries equal to −1. Adding the identity matrix to this matrix, we obtain a matrix in the class Ω(D_1 P AD_2 − I) for which 1 is an eigenvalue. The set of eigenvalues of matrices in Ω(D_1 P AD_2 − I) is a subset of σ(Aux(D_1 P AD_2 − I)), so 1 ∈ σ(Aux(D_1 P AD_2 − I)), a contradiction.
Let us also reformulate the Camion-Hoffman theorem in terms of M-matrices and comparison matrices. Recall that a real matrix B is a nonsingular M-matrix if B = ρI − C, where C is a nonnegative matrix whose Perron root is strictly less than ρ (see [3] for many other equivalent definitions). For a nonnegative matrix A ∈ R^{n×n}_+, its comparison matrix E = comp(A) has entries e_ii = a_ii for i = 1, ..., n and e_ij = −a_ij for i ≠ j.
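The comparison-matrix reformulation is easy to sketch in code: comp(A) keeps the diagonal of A and negates the off-diagonal entries, and a matrix B with nonpositive off-diagonal entries is a nonsingular M-matrix exactly when, writing B = ρI − C with ρ = max_i b_ii and C ≥ 0, the Perron root of C is strictly below ρ. The example matrices and the power-iteration helper are illustrative, not from the paper.

```python
# comp(A) and the nonsingular M-matrix test B = rho*I - C, perron_root(C) < rho.

def comparison_matrix(A):
    """comp(A): keep the diagonal, negate the off-diagonal entries."""
    n = len(A)
    return [[A[i][j] if i == j else -A[i][j] for j in range(n)] for i in range(n)]

def perron_root(C, iters=3000):
    n = len(C)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)
        x = [v / lam for v in y]
    return lam

def is_nonsingular_M_matrix(B):
    """B must have nonpositive off-diagonal entries (a Z-matrix)."""
    n = len(B)
    rho = max(B[i][i] for i in range(n))
    C = [[rho - B[i][j] if i == j else -B[i][j] for j in range(n)]
         for i in range(n)]
    return perron_root(C) < rho

A = [[4.0, 1.0, 2.0],      # diagonally dominant: comp(A) is a nonsingular M-matrix
     [1.0, 5.0, 1.0],
     [2.0, 2.0, 6.0]]
A2 = [[1.0, 2.0, 2.0],     # off-diagonal mass too large: the test fails
      [2.0, 1.0, 2.0],
      [2.0, 2.0, 1.0]]
assert is_nonsingular_M_matrix(comparison_matrix(A))
assert not is_nonsingular_M_matrix(comparison_matrix(A2))
```

For A the extracted nonnegative part C has spectral radius below ρ = 6, while for A2 it reaches 4 against ρ = 1, matching the strict diagonal dominance picture of the Camion-Hoffman theorem.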