On the class of matrices with rows that weakly decrease cyclically from the diagonal

We consider $n\times n$ real-valued matrices $A = (a_{ij})$ satisfying $a_{ii} \geq a_{i,i+1} \geq \dots \geq a_{in} \geq a_{i1} \geq \dots \geq a_{i,i-1}$ for $i = 1,\dots,n$. With such a matrix $A$ we associate a directed graph $G(A)$. We prove that the solutions to the system $A^T x = \lambda e$, with $\lambda \in \mathbb{R}$ and $e$ the vector of all ones, are linear combinations of 'fundamental' solutions to $A^T x=e$ and vectors in $\ker A^T$, each of which is associated with a closed strongly connected component (SCC) of $G(A)$. This allows us to characterize the sign of $\det A$ in terms of the number of closed SCCs and the solutions to $A^T x = e$. In addition, we provide conditions for $A$ to be a $P$-matrix.


1. Introduction
We consider the class of real $n \times n$ matrices with rows that weakly decrease in a cyclic fashion from the diagonal, i.e., matrices $A = (a_{ij})$ that satisfy
$$a_{ii} \geq a_{i,i+1} \geq \cdots \geq a_{in} \geq a_{i1} \geq \cdots \geq a_{i,i-1}, \qquad i = 1, \dots, n. \tag{C1}$$
Our interest lies in characterizing sgn(det A), the sign of the determinant of such matrices, with the convention that sgn 0 = 0. In our exposition, we regard the dimension n as fixed and denote the vector of all ones by e.
1.1. Statement of the main result. Consider an $n \times n$ matrix $A$ that satisfies (C1). We say there is a gap right before the matrix element $a_{ij}$ if $a_{ij}$ is strictly smaller than the previous element $a_{i,j-1}$ ($a_{in}$ in case $j = 1$) in the same row. Let $K = (\kappa_{ij})$ be the $0$-$1$ matrix obtained from $A$ by replacing each matrix element right after a gap by a $1$, and every other element by a $0$; that is, $K$ is defined by
$$\kappa_{ij} := \begin{cases} 1 & \text{if } j = 1 \text{ and } a_{in} > a_{i1}, \text{ or } 2 \leq j \leq n \text{ and } a_{i,j-1} > a_{ij}, \\ 0 & \text{otherwise.} \end{cases}$$
We interpret $K$ as the adjacency matrix of a directed graph with vertex set $\{1, \dots, n\}$ and a directed edge from $i$ to $j$ if and only if $\kappa_{ij} = 1$, and call this graph $G(A)$. We say two distinct vertices $i$ and $j$ are strongly connected if there are paths in $G(A)$ from $i$ to $j$ and from $j$ to $i$, and declare each vertex $i$ to be strongly connected to itself. This defines an equivalence relation on the vertex set, the equivalence classes of which we call strongly connected components (SCCs). We call an SCC $C$ open if there is an edge from a vertex in $C$ to a vertex not in $C$, and closed otherwise. Necessarily, at least one of the SCCs of the graph $G(A)$ has to be closed. Figure 1 illustrates these definitions.
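These definitions are easy to make concrete. The following sketch (our own illustration, assuming numpy; `gap_matrix` and `sccs` are hypothetical helper names, and indices are 0-based) builds the adjacency matrix $K$ from a matrix in the class (C1) and splits the SCCs of $G(A)$ into open and closed ones:

```python
import numpy as np

def gap_matrix(A):
    """Adjacency matrix K of G(A): kappa_ij = 1 iff there is a gap right
    before a_ij, i.e. the previous entry of row i (a_in for j = 1, by
    wrap-around) is strictly larger."""
    n = len(A)
    K = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if A[i][j - 1] > A[i][j]:       # index -1 wraps to the last column
                K[i, j] = 1
    return K

def sccs(K):
    """All SCCs of the graph with adjacency matrix K, plus the closed ones."""
    n = len(K)
    R = ((K + np.eye(n, dtype=int)) > 0).astype(int)
    for _ in range(n):                      # transitive closure by squaring
        R = ((R @ R) > 0).astype(int)
    comps, seen = [], set()
    for i in range(n):
        if i not in seen:
            comp = {j for j in range(n) if R[i, j] and R[j, i]}
            seen |= comp
            comps.append(comp)
    closed = [C for C in comps
              if not any(K[i, j] for i in C for j in set(range(n)) - C)]
    return comps, closed

# Rows weakly decrease cyclically from the diagonal; the only gaps give the
# edges 1->2, 2->1, 3->4, 4->3 (1-based), so there are two closed SCCs.
A = [[2, 1, 1, 1],
     [0, 2, 2, 2],
     [1, 1, 2, 1],
     [2, 2, 0, 2]]
comps, closed = sccs(gap_matrix(A))
print(comps, closed)    # [{0, 1}, {2, 3}] for both (0-based indices)
```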
The main result of this paper is that one of the following four possibilities must apply to the matrix $A$:
1. There are at least two closed SCCs and $\det A = 0$.
2. There is exactly one closed SCC and either
(a) the system $A^{\top} x = e$ has no solution and $\det A = 0$; or
(b) the system $A^{\top} x = e$ has a solution $x \geq 0$ and $\det A > 0$; or
(c) the system $A^{\top} x = e$ has a solution $x \leq 0$ and $\det A < 0$.
We also show using Farkas' lemma that 2(a) and 2(c) are impossible if $Ae > 0$, while 2(a) and 2(b) are ruled out if $Ae < 0$. So if the row sums of $A$ are all strictly positive or all strictly negative, then whether or not $A$ is singular is determined by the number of closed SCCs of $G(A)$.
Our main result generalizes the following theorem by T. S. Motzkin.

Motzkin's Theorem ([1], Theorem 8). If the real $n \times n$ matrix $A$ is nonnegative and satisfies (C1), then $\det A \geq 0$; and if the inequalities in (C1) are strict, then $\det A > 0$.
Compared to Motzkin's theorem, in the case of a non-negative matrix A our result provides a precise condition on which of the inequalities in (C1) need to be strict for det A to be strictly positive. Moreover, our result also covers matrices in the class (C1) with negative entries.
Although we focus on matrices with weakly decreasing rows, our analysis can also be used if the matrix A has rows that either weakly decrease or weakly increase from the diagonal in a cyclic fashion. Indeed, for such a matrix there exists a diagonal matrix D with diagonal entries 1 and −1 such that the product DA satisfies (C1). The analysis of our paper applied to the matrix DA then translates into results for the matrix A. A somewhat related class of matrices with rows that decrease from the diagonal in both directions has been studied by Rousseau in [2].

1.2. Application and motivation.
Our motivation for considering the class of matrices satisfying (C1) comes from a model used to study particle flows in ring-topology networks. Such network structures occur for instance in traffic and communication networks. Here we describe only the main characteristics of the model; we refer to [3] for a more detailed account of the dynamics and an extended motivation for studying ring-topology networks.
The system consists of $n$ stations and $n$ cells arranged in a ring, with cell 1 followed by station 1, cell 2, station 2, and so on, up to station $n$. Each cell can contain at most one particle at a time. At each station $i$, particles arrive from outside at a fixed rate $p_i$ and are placed in a buffer before they can enter the ring. At integer times, each station $i$ can perform one of three actions: if cell $i$ contains a particle, the station either forwards this particle to the next cell or removes it from the ring; if cell $i$ is empty but the buffer at station $i$ is not, the station moves one particle from the buffer to the next cell; otherwise, the station does nothing. This is illustrated in Fig. 2.
Consider a particle moving onto the ring at station $i$. We assume the time it spends on the ring before being removed follows a probability distribution depending only on $i$. Let $a_{ij}$ be the expected number of times the particle passes through station $j$, and let $b_{ij}$ be the expected number of times it occupies cell $j$. Then, by the nature of the model, the $n \times n$ matrix $A = (a_{ij})$ satisfies (C1) and is related to the matrix $B = (b_{ij})$ by $A = I + B$.
We are interested in determining the possible stationary particle flows through the network. To this end, suppose that for each cell $i$ there is a stationary rate $\pi_i$ at which the cell is vacant. Then the stationary rate at which station $i$ moves particles from the buffer onto the ring is the smaller of $\pi_i$ and $p_i$, since the station can only move a particle onto the ring when cell $i$ is empty. Using that $b_{ij}$ is the expected number of times such a particle occupies cell $j$, it follows that the vector $\pi = (\pi_i)$ has to satisfy
$$\pi = e - B^{\top}(\pi \wedge p), \tag{1}$$
where $p = (p_i)$ is the vector of arrival rates, and $\pi \wedge p$ is the component-wise minimum of $\pi$ and $p$. This is called the throughput equation. Its solutions for $\pi$ characterize all the candidates for a stationary particle flow.
In a separate paper [4] we show that every solution to (1) can be expressed in terms of the solutions to the system $A^{\top} x = e$ and analogous systems for principal submatrices of $A$. A solution always exists, but whether it is unique depends on the invertibility of these matrices. In particular, we prove in [4] that the throughput equation has a unique solution for every vector $p$ if and only if $A$ is a $P$-matrix. The current paper supplements [4] by studying the solutions to the system $A^{\top} x = \lambda e$, for any $\lambda \in \mathbb{R}$, and characterizing the sign of $\det A$ (as explained above) for matrices in the class (C1), in Section 2. In Section 3 we provide conditions under which (non-negative) matrices in this class are $P$-matrices, considering their importance for our application. Section 4 summarizes the results and conclusions of the paper.

1.3. Notation and preliminaries. A matrix $A$ is called: a $P$-matrix if all its principal minors are strictly positive; a $Z$-matrix if $a_{ij} \leq 0$ for $i \neq j$; an $M$-matrix if $A$ can be expressed in the form $sI - B$ with $B \geq 0$ and $s \geq \rho(B)$; and semi-positive if $Ax > 0$ for some vector $x > 0$. A result we use is that a semi-positive $Z$-matrix is a non-singular $M$-matrix [5, Theorem 1, Condition K33].
Finally, throughout the paper, $K = (\kappa_{ij})$ will be the adjacency matrix of the directed graph $G(A)$ associated with the matrix $A$ under consideration, as defined in Section 1.1.

2. Main results and proofs
The basis of our approach is an investigation into the solutions of the system $A^{\top} x = \lambda e$, with $\lambda \in \mathbb{R}$. We first show that these solutions are zero outside the union of the closed SCCs of the graph $G(A)$. We then investigate the 'fundamental' solutions to $A^{\top} x = e$ that are characterized by the fact that they are supported on a single closed SCC. We prove that the solutions to $A^{\top} x = \lambda e$ are linear combinations of these fundamental solutions and vectors in $\ker A^{\top}$ that also are associated with a particular closed SCC. We then establish that every closed SCC admits at most one fundamental solution, which is either non-positive or non-negative. This finally leads to the characterization of the sign of $\det A$ stated in Section 1.1.
2.1. The support of solutions. We begin our exposition by showing that the solutions to $A^{\top} x = \lambda e$ are supported on the closed SCCs of $G(A)$. To do so, we make use of the notations introduced in Section 1.3.

Lemma 1. Let $\lambda \in \mathbb{R}$. Suppose the real $n \times n$ matrix $A$ satisfies (C1), the vector $x$ solves the system $A^{\top} x = \lambda e$, and $D$ is the union of the open SCCs of the graph $G(A)$. Then $[x]_D = 0$.

Proof. Let $C$ be the union of the closed SCCs of the graph $G(A)$, so that $C = \{1, \dots, n\} \setminus D$, and define the $n \times n$ matrix $M = (m_{ij})$ by
$$m_{ij} := a_{ij} - a_{i[j-1]},$$
with the cyclic convention $a_{i[0]} = a_{in}$. Observe that if $i \in C$ and $j \in D$, then we must have $m_{ij} = 0$, because there cannot be an edge in the graph $G(A)$ from $i$ to $j$. Moreover, since $A^{\top} x = \lambda e$, we have $(M^{\top} x)_j = (A^{\top} x)_j - (A^{\top} x)_{[j-1]} = 0$ for every $j$. It follows that, for $j \in D$,
$$\sum_{i \in D} m_{ij} x_i = (M^{\top} x)_j - \sum_{i \in C} m_{ij} x_i = 0, \tag{2}$$
that is, $[M]_{DD}^{\top} [x]_D = 0$. By (C1), $m_{ij} \leq 0$ for $j \neq i$ and $m_{ii} \geq 0$, and the cyclic sum $\sum_j m_{ij}$ telescopes to zero. So each row sum of $[M]_{DD}$ is non-negative, and the sum of the $i$-th row is strictly positive if $\kappa_{ij} = 1$, and hence $m_{ij} < 0$, for some $j$ in $C$.
Let $d$ be the number of elements in $D$. Note that every non-empty subset $I$ of $D$ (including $I = D$) contains an index $i$ for which there is some index $j$ outside $I$ with $\kappa_{ij} = 1$, for if not, then $I$ would contain a closed SCC of $G(A)$. This allows us to recursively choose the indices $i_1, \dots, i_d$ so that they are distinct and each $i_k$ has an edge to an index outside $\{i_k, \dots, i_d\}$. We now define the vector $z = (z_i : i \in D)$ recursively over $i_1, \dots, i_d$, where for $k = 1$ the first sum is zero. We claim that, by induction in $k$, it follows from this definition that $z_{i_k} > 0$ and $([M]_{DD}\, z)_{i_k} > 0$ for $k = 1, \dots, d$. Indeed, this is not difficult to verify using (2) and the fact that $[M]_{DD}$ is a $Z$-matrix with strictly positive diagonal elements and non-negative row sums. But then $[M]_{DD}$ is a semi-positive $Z$-matrix and hence a non-singular $M$-matrix, so $[M]_{DD}^{\top} [x]_D = 0$ implies $[x]_D = 0$.

2.2. The fundamental solutions. We now start our investigation into the fundamental solutions to $A^{\top} x = e$ and their relation with the solutions to the general system $A^{\top} x = \lambda e$ with $\lambda \in \mathbb{R}$. We begin with a lemma concerning the non-negative solutions to the general system, in the proof of which we use the definition of the numbers $d(i, j)$ from Section 1.3.
Lemma 2. Let $\lambda \in \mathbb{R}$. Suppose the real $n \times n$ matrix $A$ satisfies (C1) and $x$ is a non-zero, non-negative solution to the system $A^{\top} x = \lambda e$. Then there is a closed SCC $C$ of $G(A)$ with $[x]_C > 0$.

Proof. Note that there must be an index $r$ such that $x_r > 0$, and by Lemma 1, $r$ lies in a closed SCC. We will prove below that if $x_r > 0$ and $\kappa_{rj} = 1$, then $x_j > 0$. This suffices to prove the lemma, because it implies $x_j > 0$ for every index $j$ to which there is a path from $r$ in the graph $G(A)$.
So assume $x_r > 0$ and $\kappa_{rj} = 1$. Suppose, towards a contradiction, that $x_j = 0$. Now let $I$ be the set of indices $i$ for which $x_i > 0$, and let $k$ be the element of $I$ for which $d(k, j)$ is minimal. Note that by (C1) we then have $a_{ij} \leq a_{ik}$ for every element $i$ of $I$, because $d(i, k) < d(i, j)$. Moreover, since there is a gap just before element $a_{rj}$ in row $r$, we have $a_{rj} < a_{rk}$. Therefore,
$$(A^{\top} x)_j = \sum_{i \in I} a_{ij} x_i < \sum_{i \in I} a_{ik} x_i = (A^{\top} x)_k.$$
But this contradicts $A^{\top} x = \lambda e$, so it must be the case that $x_j > 0$.
We now focus on solutions to the system $A^{\top} x = \lambda e$ that are supported on a specific closed SCC of the graph $G(A)$. Our next two lemmas complement each other. The first says that if the vector $x$ solves $A^{\top} x = \lambda e$ and $C$ is a closed SCC of $G(A)$, then the subvector $[x]_C$ is mapped to a constant vector by the submatrix $[A]_{CC}^{\top}$. Conversely, the second lemma says that if $x$ is supported on $C$ and the subvector $[x]_C$ is mapped to the constant vector $[\lambda e]_C$ by $[A]_{CC}^{\top}$, then $x$ solves the system $A^{\top} x = \lambda e$.
Lemma 3. Let $\lambda \in \mathbb{R}$. Suppose the real $n \times n$ matrix $A$ satisfies (C1), $C$ is a closed SCC of $G(A)$, and $x$ is a vector that solves the system $A^{\top} x = \lambda e$. Then $[A]_{CC}^{\top} [x]_C$ is a constant vector.
Proof. If $C$ has only one element there is nothing to prove, so assume $C$ has at least two elements. Let $\{i_1, \dots, i_k\}$ be the set of all indices that are an element of a closed SCC of $G(A)$, where $i_1 < i_2 < \cdots < i_k$. Now choose two indices $i_{m-1}$ and $i_m$ from this set that both lie in $C$, such that $i_{m-1} < i_m$ and no index of the set between $i_{m-1}$ and $i_m$ lies in $C$. If $j$ is an index in a closed SCC other than $C$, then $a_{j,i_{m-1}} = a_{j,i_m}$, because the gaps on row $j$ occur only right before elements $a_{j,i_\ell}$ with $i_\ell$ in the same SCC as $j$, and no such index lies strictly between $i_{m-1}$ and $i_m$. Finally, by Lemma 1, if $j$ is an index in an open SCC, then $x_j = 0$. It follows that
$$\sum_{j \in C} a_{j,i_{m-1}} x_j = (A^{\top} x)_{i_{m-1}} - \sum_{j \notin C} a_{j,i_{m-1}} x_j = (A^{\top} x)_{i_m} - \sum_{j \notin C} a_{j,i_m} x_j = \sum_{j \in C} a_{j,i_m} x_j,$$
so the entries of $[A]_{CC}^{\top} [x]_C$ at consecutive indices of $C$ agree (cyclically), and $[A]_{CC}^{\top} [x]_C$ is a constant vector.

Lemma 4. Let $\lambda \in \mathbb{R}$. Suppose the real $n \times n$ matrix $A$ satisfies (C1), $C$ is a closed SCC of $G(A)$, and $x$ is a vector with $[x]_{C^c} = 0$ and $[A]_{CC}^{\top} [x]_C = [\lambda e]_C$, where $C^c := \{1, \dots, n\} \setminus C$. Then $x$ solves the system $A^{\top} x = \lambda e$.

Proof. To see that $A^{\top} x = \lambda e$, it only remains to show that $(A^{\top} x)_i = \lambda$ for $i \notin C$. So suppose $i \notin C$. Let $k$ be the element of $C$ for which $d(k, i)$ is minimal. Then we must have $a_{ji} = a_{jk}$ for every $j$ in $C$, because row $j$ of the matrix $A$ cannot have any gaps between columns $k$ and $i$. Therefore,
$$(A^{\top} x)_i = \sum_{j \in C} a_{ji} x_j = \sum_{j \in C} a_{jk} x_j = (A^{\top} x)_k = \lambda.$$

We introduce some new terminology to proceed. Suppose the matrix $A$ satisfies (C1) and $C$ is a closed SCC of the graph $G(A)$. Consider the system
$$[A]_{CC}^{\top} [x]_C = [e]_C, \qquad [x]_{C^c} = 0, \tag{3}$$
where $C^c := \{1, \dots, n\} \setminus C$. We call $C$ a fundamental closed SCC if this system has a solution $x$, in which case $x$ solves $A^{\top} x = e$ by Lemma 4 and is called a fundamental solution for $C$ to $A^{\top} x = e$. If, on the other hand, the system (3) has no solutions, then we say $C$ is null. In that case, $\ker [A]_{CC}^{\top}$ is non-trivial and we call any vector $y$ such that $[y]_C \in \ker [A]_{CC}^{\top}$ and $[y]_{C^c} = 0$ a null vector for $C$. Note that by Lemma 4, such a null vector lies in $\ker A^{\top}$.
We will show later that every fundamental solution is unique and either non-positive or non-negative. But for the moment, we only assume that for each fundamental closed SCC there is a fundamental solution that is non-positive or non-negative. Our next theorem states that then the solutions to the system $A^{\top} x = \lambda e$ are precisely the sums of a linear combination of these fundamental solutions, with coefficients summing to $\lambda$, and null vectors for the closed SCCs that are null. In case the closed SCCs are all fundamental or all null, the theorem is to be interpreted as stating that the solutions to the system $A^{\top} x = \lambda e$ are, respectively, linear combinations of only fundamental solutions or sums of only null vectors. In particular, the system has no solutions for $\lambda \neq 0$ if every closed SCC is null.
Theorem 5. Suppose the real $n \times n$ matrix $A$ satisfies (C1), $G(A)$ has exactly $k$ fundamental closed SCCs $C_1, \dots, C_k$ and exactly $\ell$ closed SCCs $C_{k+1}, \dots, C_{k+\ell}$ that are null, and for $i = 1, \dots, k$ there is a non-positive or non-negative fundamental solution $x^i$ for $C_i$. Let $\lambda \in \mathbb{R}$. Then the solutions to the system $A^{\top} x = \lambda e$ are precisely the vectors
$$x = \alpha_1 x^1 + \cdots + \alpha_k x^k + y^{k+1} + \cdots + y^{k+\ell},$$
where $\alpha_1 + \cdots + \alpha_k = \lambda$ and $y^{k+i}$ is a null vector for $C_{k+i}$, $i = 1, \dots, \ell$.

Proof. By Lemma 4, every vector of this form solves $A^{\top} x = \lambda e$. Conversely, suppose $x$ solves $A^{\top} x = \lambda e$, and for $i = 1, \dots, k+\ell$ let $y^i$ be the vector with $[y^i]_{C_i} = [x]_{C_i}$ and $[y^i]_{C_i^c} = 0$. Then by Lemma 1 and the fact that the closed SCCs are disjoint, we have $x = y^1 + \cdots + y^{k+\ell}$. Now note that for $i = 1, \dots, \ell$, Lemma 3 implies that $y^{k+i}$ is a null vector because $C_{k+i}$ is null, and then $A^{\top} y^{k+i} = 0$ by Lemma 4. Finally, for $i = 1, \dots, k$, using that either $[x^i]_{C_i} \leq 0$ or $[x^i]_{C_i} \geq 0$, one constructs coefficients $\alpha_i$ and a vector $z$ from the $y^i$ such that $z \geq 0$ and for every closed SCC there is at least one index $j$ in that SCC with $z_j = 0$. Moreover, $z$ again solves a system of this form. By Lemma 2 it then has to be the case that $z = 0$ and $\sum_{i=1}^{k} \alpha_i = \lambda$.
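Lemma 4 and Theorem 5 are easy to illustrate numerically (a sketch of ours, assuming numpy; indices are 0-based and `fundamental_solution` is a hypothetical helper name). We take a (C1) matrix whose graph has two fundamental closed SCCs, solve system (3) for each, and check that combinations with coefficients summing to $\lambda$ solve $A^{\top} x = \lambda e$:

```python
import numpy as np

# A (C1) matrix whose graph has two closed SCCs, {1,2} and {3,4}
# (0-based: {0,1} and {2,3}); both turn out to be fundamental.
A = np.array([[2., 1, 1, 1],
              [0, 2, 2, 2],
              [1, 1, 2, 1],
              [2, 2, 0, 2]])

def fundamental_solution(A, C):
    """Solve system (3): [A]_CC^T [x]_C = [e]_C, with x zero outside C."""
    x = np.zeros(len(A))
    x[C] = np.linalg.solve(A[np.ix_(C, C)].T, np.ones(len(C)))
    return x

x1 = fundamental_solution(A, [0, 1])
x2 = fundamental_solution(A, [2, 3])

# Lemma 4: each fundamental solution solves the full system A^T x = e.
assert np.allclose(A.T @ x1, 1) and np.allclose(A.T @ x2, 1)

# Theorem 5: coefficients summing to lam give a solution of A^T x = lam*e;
# in particular x1 - x2 lies in ker A^T, so A must be singular.
lam, alpha = 3.0, 1.25
assert np.allclose(A.T @ (alpha * x1 + (lam - alpha) * x2), lam)
assert np.allclose(A.T @ (x1 - x2), 0)
```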

2.3. Existence and uniqueness.
In the previous section we showed that the solutions to the system $A^{\top} x = \lambda e$ can be expressed in terms of fundamental solutions to $A^{\top} x = e$, under the assumption that these fundamental solutions are either non-positive or non-negative. In this section we prove that every fundamental solution is unique and non-positive or non-negative. We also provide conditions that imply the existence of a fundamental solution for a closed SCC. We start by noting that the following special case of Farkas' Lemma (see, e.g., [6] for a proof) provides a necessary and sufficient condition for the existence of a non-negative solution to $A^{\top} x = e$.

Farkas' Lemma. For a real $n \times n$ matrix $A$, exactly one of the following two assertions is true: (a) There exists a vector $x$ such that $A^{\top} x = e$ and $x \geq 0$. (b) There exists a vector $z$ such that $Az \geq 0$ and $e^{\top} z < 0$.
It is, however, not obvious how to assess which of the two alternatives in Farkas' Lemma holds for a generic matrix $A$ in the class (C1). But our next lemma, based on Farkas' Lemma, shows that the system $A^{\top} x = e$ has a non-negative solution if $Ae > 0$ and a non-positive solution if $Ae < 0$.

Lemma 6. Suppose the real $n \times n$ matrix $A$ satisfies (C1). If $Ae > 0$, then $Az = 0$ implies $e^{\top} z = 0$, and the system $A^{\top} x = e$ has a solution $x \geq 0$. Likewise, if $Ae < 0$, then $Az = 0$ implies $e^{\top} z = 0$, and the system $A^{\top} x = e$ has a solution $x \leq 0$.
Proof. This proof uses ideas from Motzkin's original paper [1, proof of Theorem 8]. We first consider the case $Ae > 0$. Let $z$ be a vector such that $e^{\top} z = z_1 + \cdots + z_n < 0$. We will show this implies that at least one component of $Az$ is strictly negative. To this end, define $\bar z := \frac{1}{n} \sum_{i=1}^{n} z_i$ and let $y$ be the vector with components $y_i := z_i - \bar z$. Note that then $\bar z < 0$ and $y_1 + \cdots + y_n = 0$. Choose the row index $r$ so that it maximizes the prefix sums $\sum_{i=1}^{r} y_i$; then
$$\sum_{i=1}^{m} y_{[r+i]} \leq 0, \qquad m = 1, \dots, n. \tag{4}$$
Next, note that component $[r+1]$ of the vector $Az$ can be expressed as
$$(Az)_{[r+1]} = \sum_{m=1}^{n-1} \bigl(a_{[r+1][r+m]} - a_{[r+1][r+m+1]}\bigr) \sum_{i=1}^{m} y_{[r+i]} + a_{[r+1][r]} \sum_{i=1}^{n} y_{[r+i]} + \bar z \sum_{i=1}^{n} a_{[r+1]i}. \tag{5}$$
By (C1) and (4), the first term on the right is non-positive. The second term is zero because $\sum_{i=1}^{n} y_{[r+i]} = 0$. Finally, since $Ae > 0$ and $\bar z < 0$, the third term is strictly negative. We conclude that $(Az)_{[r+1]} < 0$. This rules out option (b) in Farkas' Lemma, leaving option (a). Moreover, $Az = 0$ implies both $Az \geq 0$ and $A(-z) \geq 0$, and hence $e^{\top} z = 0$ by the argument above.

The proof in the case $Ae < 0$ is similar. Again, let $z$ be a vector such that $e^{\top} z < 0$, and define $\bar z$ and $y$ as before. This time, let $r$ be the row index that minimizes the prefix sums $\sum_{i=1}^{r} y_i$, so that $\sum_{i=1}^{m} y_{[r+i]} \geq 0$ for $m = 1, \dots, n$. From equation (5) we now conclude that $(Az)_{[r+1]} > 0$. Farkas' Lemma applied to the matrix $-A$ then says that there exists a non-negative vector $x$ such that $A^{\top}(-x) = e$, so $A^{\top} x' = e$ has the solution $x' = -x \leq 0$. Also, $Az = 0$ again implies $e^{\top} z = 0$, as before.
Lemma 6 does not specifically target fundamental solutions to $A^{\top} x = e$, which are supported on a particular closed SCC of the graph $G(A)$. But if the matrix $A$ satisfies (C1) and $C$ is a closed SCC of $G(A)$, then the submatrix $[A]_{CC}$ also satisfies (C1). Therefore, Lemma 6 combined with Lemma 4, applied to the submatrix $[A]_{CC}$, provides simple conditions for the existence of a non-positive or non-negative fundamental solution for $C$. Now observe that because $C$ is a closed SCC, any gaps on a row of the matrix $A$ with a row index in $C$ occur right before elements with a column index in $C$. It follows that the adjacency matrix associated with the submatrix $[A]_{CC}$ is simply $[K]_{CC}$. As a consequence, all vertices of the graph $G([A]_{CC})$ are mutually strongly connected. Our next lemma applied to the submatrix $[A]_{CC}$ therefore shows that if $C$ is fundamental, then the fundamental solution for $C$ is unique and non-positive or non-negative.
Lemma 7. Suppose the real $n \times n$ matrix $A$ satisfies (C1) and $\{1, \dots, n\}$ is the only SCC of $G(A)$. Then the system $A^{\top} x = e$ has at most one solution, and if $x$ is a solution, then either $x < 0$ or $x > 0$.
Proof. Suppose $x^1$ and $x^2$ are solutions to the system $A^{\top} x = e$. Recall that $E$ is the $n \times n$ matrix of all ones, and consider the matrices $A + \lambda E$ for $\lambda > 0$. These matrices clearly satisfy (C1), and because they have gaps in the same positions on all rows as $A$, $\{1, \dots, n\}$ is their only SCC. Moreover, $A + \lambda E > 0$ for $\lambda$ large enough. So we can choose $\lambda$ such that $A + \lambda E > 0$ and, in addition, $\lambda e^{\top} x^1 \neq -1$ and $\lambda e^{\top} x^2 \neq -1$. Then by Lemma 6 and Theorem 5, there is a unique non-negative vector $y$ that satisfies $(A + \lambda E)^{\top} y = e$. By Lemma 2, $y > 0$. But for $i = 1, 2$ we also have
$$(A + \lambda E)^{\top} x^i = A^{\top} x^i + \lambda (e^{\top} x^i)\, e = (1 + \lambda e^{\top} x^i)\, e.$$
It follows that $x^1 = \mu_1 y$ and $x^2 = \mu_2 y$, where $\mu_i = 1 + \lambda e^{\top} x^i$. But then $A^{\top} x^1 = A^{\top} x^2 = e$ implies $\mu_1 = \mu_2 \neq 0$. So $x^1 = x^2$, and since $y > 0$, it is the case that either $x^1 < 0$ or $x^1 > 0$.

2.4. Main result.
We are now in a position to prove our main result. We separate this into two theorems, the first one covering possibilities 1 and 2(a) from Section 1.1, and the second covering possibilities 2(b) and 2(c).
Theorem 8. The real $n \times n$ matrix $A$ is singular if it satisfies (C1) and $G(A)$ has at least two closed SCCs, or exactly one closed SCC which is null.
Proof. Note that under the assumptions of the theorem, the graph $G(A)$ has at least one closed SCC which is null, or at least two closed SCCs that are fundamental. Now if $C$ is any closed SCC of $G(A)$, then by Lemma 7 applied to the submatrix $[A]_{CC}$, either there is a unique fundamental solution for $C$ which is non-positive or non-negative, or $C$ is null. If $C$ is null, then $\ker [A]_{CC}^{\top}$ is non-trivial and there are infinitely many null vectors for $C$. It therefore follows from Theorem 5 that the system $A^{\top} x = 0$ has infinitely many solutions, hence $\det A = \det A^{\top} = 0$.
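For example (a numerical illustration of ours, assuming numpy), a (C1) matrix with two closed SCCs is singular by Theorem 8, while a matrix with strictly decreasing rows has a single closed SCC and positive determinant:

```python
import numpy as np

# Two closed SCCs ({0,1} and {2,3}, 0-based): Theorem 8 forces det A = 0.
A = np.array([[2., 1, 1, 1],
              [0, 2, 2, 2],
              [1, 1, 2, 1],
              [2, 2, 0, 2]])
assert abs(np.linalg.det(A)) < 1e-9

# Strictly decreasing rows: every vertex has an edge to every other vertex,
# so the whole vertex set is the single (closed) SCC, and A^T x = e has the
# positive solution x = e/10; by Theorem 9 below, det B > 0 (here 160).
B = np.array([[4., 3, 2, 1],
              [1, 4, 3, 2],
              [2, 1, 4, 3],
              [3, 2, 1, 4]])
x = np.linalg.solve(B.T, np.ones(4))
assert np.all(x > 0) and np.linalg.det(B) > 0
```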
Theorem 9. Suppose the real $n \times n$ matrix $A$ satisfies (C1), the graph $G(A)$ has exactly one closed SCC, and the system $A^{\top} x = e$ has a solution $x$. Then either $x \geq 0$ and $\det A > 0$, or $x \leq 0$ and $\det A < 0$.
Proof. Let $C$ be the closed SCC of $G(A)$, and let $D$ be the union of the open SCCs of $G(A)$. By Lemma 1, $[x]_D = 0$ if $D$ is non-empty, so $x$ must be a fundamental solution for $C$. Lemma 7 applied to the submatrix $[A]_{CC}$ yields $[x]_C < 0$ or $[x]_C > 0$. Theorem 5 then tells us that $A^{\top} z = 0$ implies $z = 0$, hence $\det A \neq 0$. It remains to determine the sign of $\det A$.
We start with the case $x \geq 0$. Let $C$ (the closed SCC) consist of the indices $i_1, \dots, i_k$, where $i_1 < i_2 < \cdots < i_k$. Define the $n \times n$ matrix $B = (b_{ij})$ by
$$b_{ij} := \begin{cases} x_{i_\ell}^{-1} & \text{if } i = i_\ell \in C \text{ and } j \in \{i_\ell, [i_\ell + 1], \dots, [i_{\ell+1} - 1]\}, \\ 1 & \text{if } i \in D \text{ and } j = i, \\ 0 & \text{otherwise}, \end{cases}$$
where $i_{k+1}$ is to be interpreted as $i_1$. Note that the matrix $B$ satisfies (C1) by definition. Moreover, in the corresponding graph $G(B)$ every vertex $i$ has exactly one outgoing edge: for $i \in D$ this edge links $i$ to the next vertex $[i+1]$, and for $i = i_\ell \in C$ it links $i_\ell$ to $i_{\ell+1}$, which is the next vertex in $C$. It follows that $C$ is the only closed SCC for the matrix $B$.
Next, observe that the non-zero entries on the rows of $B$ with an index in $C$ together span all the columns of the matrix. That is, for every column index $j$ there is exactly one row index $i$ in $C$ such that $b_{ij}$ is non-zero, and in fact $b_{ij} = x_i^{-1}$. From this it follows immediately that
$$(B^{\top} x)_j = \sum_i b_{ij} x_i = 1, \qquad j = 1, \dots, n,$$
hence $B^{\top} x = e$. Furthermore, if $j$ is an index in $C$, then $b_{jj}$ is the only non-zero element in column $j$ of the matrix $B$, and if $i$ is an index in $D$, then the only non-zero element on row $i$ is a $1$ on the diagonal. This means that we can bring the matrix $B$ into lower triangular form using elementary column operations that leave the diagonal elements alone, and it follows that
$$\det B = \prod_{\ell=1}^{k} x_{i_\ell}^{-1} > 0.$$
Now consider the matrices $A_t := (1-t)A + tB$ for $t \in [0, 1]$. Observe that if $t$ is strictly between $0$ and $1$, then there is a gap right before the element $(A_t)_{ij}$ in row $i$ of the matrix $A_t$ if and only if there is a gap right before $a_{ij}$ in $A$ or right before $b_{ij}$ in $B$. It follows that for each of the matrices $A_t$, with $t \in [0, 1]$, $C$ is the only closed SCC of the graph $G(A_t)$. Clearly, it is also the case that $x$ satisfies $A_t^{\top} x = e$, so by Theorem 5 it follows that $A_t^{\top} z = 0$ implies $z = 0$. Hence, $\det A_t \neq 0$ for each $t$ in $[0, 1]$, and since $\det A_t$ is a continuous function of $t$, the determinants of the matrices $A_t$ must all have the same sign. In particular, $\det A > 0$ because $\det B > 0$.
We now move on to the case $x \leq 0$. We can use essentially the same argument as before, but because $x_i$ is now negative for $i \in C$ we need to define the matrix $B$ differently. To be specific, we now set
$$b_{ij} := \begin{cases} x_{i_\ell}^{-1} & \text{if } i = i_\ell \in C \text{ and } j \in \{i_{\ell-1}, [i_{\ell-1} + 1], \dots, [i_\ell - 1]\}, \\ -1 & \text{if } i \in D \text{ and } j = [i-1], \\ 0 & \text{otherwise}, \end{cases}$$
where $i_0$ is to be interpreted as $i_k$. In the graph $G(B)$, this links every index $i$ in $D$ exclusively to the previous index $[i-1]$, and every index in $C$ exclusively to the previous index in $C$. So again, $C$ is the only closed SCC for the matrix $B$, and it is also not difficult to see that $B^{\top} x = e$ again. It then follows by the same argument as before that $\det A$ and $\det B$ have the same sign. So all that remains is to compute $\det B$.
To compute $\det B$, we first permute the columns so that column $n$ becomes the first column and each of the columns $1, \dots, n-1$ is moved one position to the right. This permutation can be carried out using $n-1$ column exchanges, and results in the matrix $B^*$ with entries given by $b^*_{i1} = b_{in}$ and $b^*_{ij} = b_{i,j-1}$ for $j = 2, \dots, n$. For this matrix, if $j$ is an index in $C$ then $b^*_{jj}$ is the only non-zero element in column $j$, and if $i$ is an index in $D$, then the only non-zero element on row $i$ is a $-1$ on the diagonal. So this time we can bring the matrix into upper triangular form using elementary column operations that leave the diagonal elements alone. We conclude that
$$\det B = (-1)^{n-1} \det B^* = (-1)^{n-1} (-1)^{n-k} \prod_{\ell=1}^{k} x_{i_\ell}^{-1} < 0.$$
This completes the proof.
Remark. Note that by Lemma 6 the system $A^{\top} x = e$ always has a non-negative solution for a matrix $A$ that satisfies both $Ae > 0$ and (C1). So in this case Theorems 8 and 9 say that $\det A$ is zero if there is more than one closed SCC, and strictly positive otherwise. A similar statement can be made if instead of $Ae > 0$ we have $Ae < 0$: in that case $\det A$ is zero if there is more than one closed SCC, and strictly negative otherwise.
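The whole characterization can be wrapped into a small procedure (a sketch of ours, assuming numpy; `sign_det_c1` and `closed_scc_count` are hypothetical names, and the solvability of $A^{\top} x = e$ is tested numerically via a least-squares residual):

```python
import numpy as np

def closed_scc_count(A):
    """Number of closed SCCs of G(A) (see Section 1.1); 0-based indices."""
    n = len(A)
    K = np.array([[1 if A[i][j - 1] > A[i][j] else 0 for j in range(n)]
                  for i in range(n)])
    R = ((K + np.eye(n, dtype=int)) > 0).astype(int)
    for _ in range(n):                  # transitive closure by squaring
        R = ((R @ R) > 0).astype(int)
    comps, seen = [], set()
    for i in range(n):
        if i not in seen:
            comp = {j for j in range(n) if R[i, j] and R[j, i]}
            seen |= comp
            comps.append(comp)
    return sum(not any(K[i, j] for i in C for j in set(range(n)) - C)
               for C in comps)

def sign_det_c1(A, tol=1e-9):
    """sgn(det A) for A in the class (C1), following the main result:
    >= 2 closed SCCs -> 0; otherwise decided by the solution of A^T x = e."""
    A = np.asarray(A, float)
    n = len(A)
    if closed_scc_count(A) >= 2:
        return 0
    x, *_ = np.linalg.lstsq(A.T, np.ones(n), rcond=None)
    if np.linalg.norm(A.T @ x - 1) > tol:
        return 0                        # case 2(a): A^T x = e has no solution
    return 1 if np.all(x >= -tol) else -1

# Ae > 0 gives a positive determinant, Ae < 0 a negative one (Remark above).
P = np.array([[4., 3, 2, 1], [1, 4, 3, 2], [2, 1, 4, 3], [3, 2, 1, 4]])
N = np.array([[0., -1, -2, -3], [-3, 0, -1, -2], [-2, -3, 0, -1], [-1, -2, -3, 0]])
print(sign_det_c1(P), sign_det_c1(N))   # 1 -1
```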

3. Conditions for a $P$-matrix
Recall that a real n × n matrix is a P -matrix if all its principal minors are strictly positive. As we have motivated in Section 1.2, we have a special interest in non-negative matrices in the class (C1) that are P -matrices. Of course, we can in principle determine whether such a matrix A is a P -matrix by applying Lemma 6 and Theorems 8 and 9 to each principal submatrix of A. But ideally, we would like to find simple conditions that are more easily verified in the setting of our application and imply that A is a P -matrix. In this section, we provide such a condition.
We start by observing that a non-negative matrix $A$ is a $P$-matrix if all inequalities in (C1) are strict. Indeed, this is the original condition from Motzkin's Theorem (see Section 1.1) for $\det A$ to be strictly positive. That $A$ is actually a $P$-matrix follows because every principal submatrix of $A$ clearly satisfies the same condition. In fact, our main results show that it is sufficient to require only that the first inequality in (C1) is strict. That is, we claim that a non-negative $n \times n$ matrix $A$ is a $P$-matrix if it satisfies
$$a_{ii} > a_{i[i+1]} \geq a_{i[i+2]} \geq \cdots \geq a_{i[i+n-1]} \geq 0, \qquad i = 1, \dots, n. \tag{C2}$$
To see this, note that such a matrix has strictly positive row sums, and the graph $G(A)$ has only one closed SCC, because there is an edge from every vertex $i$ to the next vertex $[i+1]$. So $\det A > 0$ by Lemma 6 and Theorem 9, and $A$ is a $P$-matrix because every principal submatrix also satisfies (C2).
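The claim is easy to check by brute force (a sketch of ours, assuming numpy; `is_p_matrix` is a hypothetical helper that enumerates all principal minors, which is only feasible for small $n$):

```python
from itertools import combinations
import numpy as np

def is_p_matrix(A, tol=1e-9):
    """Brute-force check that every principal minor of A is strictly positive."""
    A = np.asarray(A, float)
    n = len(A)
    return all(np.linalg.det(A[np.ix_(S, S)]) > tol
               for k in range(1, n + 1)
               for S in combinations(range(n), k))

# Satisfies (C2): only the first inequality of each row is strict.
A = np.array([[5., 3, 3, 3],
              [3, 5, 3, 3],
              [3, 3, 5, 3],
              [3, 3, 3, 5]])
assert is_p_matrix(A)

# With no strict inequality at all (constant rows) the property fails:
assert not is_p_matrix(np.ones((3, 3)))
```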
But we can do better. Our next theorem provides a condition for a matrix A to be a P -matrix that covers the special cases discussed above.
Theorem 10. A non-negative real $n \times n$ matrix $A$ is a $P$-matrix if, in addition to $Ae > 0$ and (C1), it also satisfies the following condition:
There is an index $k$ such that $a_{rr} > a_{rk}$ for all rows $r$ other than $k$. (C3)
Proof. The proof consists of two parts: in the first part we show that (C1) and (C3) together imply that $G(A)$ has exactly one closed SCC, so that we have $\det A > 0$ by Lemma 6 and Theorem 9; in the second part we show that for a non-negative matrix the properties $Ae > 0$, (C1) and (C3) are preserved when we remove one component (i.e., a row and the corresponding column) of the matrix. The two parts combined establish that all principal minors of $A$ are strictly positive.
For the first part, suppose $A$ satisfies conditions (C1) and (C3), and consider the graph $G(A)$. Let $r$ be a row index other than the index $k$ from condition (C3). We claim that there is a path in $G(A)$ from $r$ to $k$. Indeed, we can construct such a path step by step, as follows. We first set $r_0 := r$. Now assume that after $m$ steps we have obtained a path $(r_0, \dots, r_m)$ that does not yet visit vertex $k$. Then by (C3) there must be some index $i$ with $d(i, k) < d(r_m, k)$ such that there is a gap right before the element $a_{r_m i}$ on row $r_m$ of $A$. We set $r_{m+1} := i$, and since there is an edge in $G(A)$ from $r_m$ to $i$, we can append this vertex to our path. Continuing like this, the path must eventually visit $k$, because each added vertex is strictly closer to $k$ than the previous one. So there is a path to $k$ from every other vertex of $G(A)$; hence every vertex is either in an open SCC or in the SCC that contains $k$. It follows that the SCC containing $k$ is the only closed SCC of $G(A)$.
For the second part of the proof, suppose the matrix $A$ is non-negative and satisfies $Ae > 0$, (C1) and (C3). We claim that these properties are preserved when we remove one component of the matrix. Without loss of generality we can assume we remove component $n$, provided we allow the index $k$ from condition (C3) to be arbitrary. Let $A'$ be the resulting submatrix. It is easy to see that $A'$ is non-negative, has strictly positive row sums, and satisfies (C1); our only concern is whether condition (C3) is preserved. In the case $k < n$ this is clearly true, because for $r = 1, \dots, n-1$ both $a_{rr}$ and $a_{rk}$ are elements of the submatrix $A'$. It remains to consider the case in which $A$ satisfies condition (C3) only for $k = n$. But in this case the submatrix $A'$ satisfies (C3) with $k$ replaced by $1$, because for $r = 2, \dots, n-1$ we have $a_{rr} > a_{rk} = a_{rn} \geq a_{r1}$, using the fact that $A$ satisfies (C1).
Remark. Inspection of the proof reveals that the assumption that A is nonnegative in Theorem 10 can be weakened to the assumption that for every row of A, the sum of the diagonal entry and all negative entries on that row is strictly positive. Furthermore, the assumptions A ≥ 0 and Ae > 0 are only used in the proof to determine the sign of every principal minor; if we replace them by the single assumption that A is a strictly negative matrix, then the proof of the theorem still goes through but shows that all principal minors of A are strictly negative, i.e., that the matrix −A is a P -matrix.
We close off this section with a brief discussion of conditions that are both necessary and sufficient for a non-negative matrix $A$ in the class (C1) with positive row sums to be a $P$-matrix. We first note that if $A$ has a constant row, then condition (C3) is in fact necessary (as well as sufficient) for $A$ to be a $P$-matrix. Indeed, suppose $A$ is a $P$-matrix, let $k$ be the index of the constant row, and let $r$ be any row other than $k$. Then it must be the case that $a_{rr} > a_{rk}$, because the alternative is that the $2 \times 2$ submatrix formed by the entries $a_{rr}, a_{rk}$ and $a_{kr}, a_{kk}$ has two constant rows, hence determinant zero.
Condition (C3) is, however, not necessary in general for $A$ to be a $P$-matrix. For example, suppose the dimension $n$ is at least $3$ and the matrix $A$ satisfies
$$a_{ii} = a_{i[i+1]} > a_{i[i+2]} > \cdots > a_{i[i+n-1]} \geq 0, \qquad i = 1, \dots, n.$$
Then $A$ does not satisfy (C3), because $a_{rr} = a_{rk}$ if $r = [k-1]$. But $A$ is a $P$-matrix: clearly, every $2 \times 2$ principal minor is strictly positive, and it is easy to see that for every principal submatrix $A'$ of dimension at least $3 \times 3$, the row sums are strictly positive and the corresponding graph $G(A')$ has only one SCC, so that $\det A' > 0$ by Lemma 6 and Theorem 9. Nonetheless, our observation about condition (C3) does allow us to formulate a condition that is necessary for a generic non-negative $n \times n$ matrix $A$ in the class (C1) with positive row sums to be a $P$-matrix. For any row $r$ of the matrix, let $k_r$ denote the largest number in $\{0, 1, \dots, n-1\}$ such that $a_{rr} = a_{r[r+k_r]}$, and let $A_r$ be the principal submatrix consisting of the rows and columns of $A$ with indices $r, [r+1], \dots, [r+k_r]$. Then $A_r$ has a constant row. So for $A$ to be a $P$-matrix, it is necessary that condition (C3) holds for each of the principal submatrices $A_r$; that is, $A$ has to satisfy
For every row $r$ such that $k_r > 0$: $a_{ss} > a_{sr}$ for $s = [r+1], \dots, [r+k_r]$. (C4)
There are, however, matrices that satisfy (C4) but are not $P$-matrices. So although (C4) is necessary for a generic non-negative matrix $A$ satisfying $Ae > 0$ and (C1) to be a $P$-matrix, it is not sufficient. We leave finding a simple condition that is both necessary and sufficient as an open problem.
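Both phenomena around condition (C3) are easy to verify numerically (a sketch of ours, assuming numpy; 0-based indices, so the paper's index $k = 1$ becomes $k = 0$, and `satisfies_c3`, `is_p_matrix` are hypothetical helper names):

```python
from itertools import combinations
import numpy as np

def is_p_matrix(A, tol=1e-9):
    """Brute-force check that every principal minor is strictly positive."""
    A = np.asarray(A, float)
    n = len(A)
    return all(np.linalg.det(A[np.ix_(S, S)]) > tol
               for k in range(1, n + 1)
               for S in combinations(range(n), k))

def satisfies_c3(A):
    """(C3): some index k with a_rr > a_rk for every row r != k."""
    n = len(A)
    return any(all(A[r][r] > A[r][k] for r in range(n) if r != k)
               for k in range(n))

# Nonnegative, Ae > 0, (C1) and (C3) with k = 0: a P-matrix by Theorem 10,
# even though row 0 is constant, so the strict condition (C2) fails.
A = [[2, 2, 2],
     [1, 3, 3],
     [1, 1, 3]]
assert satisfies_c3(A) and is_p_matrix(A)

# a_ii = a_i[i+1] > a_i[i+2]: (C3) fails, yet B is still a P-matrix,
# showing that (C3) is sufficient but not necessary.
B = [[3, 3, 1],
     [1, 3, 3],
     [3, 1, 3]]
assert not satisfies_c3(B) and is_p_matrix(B)
```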

4. Conclusions
To conclude, we summarize our findings. For a real $n \times n$ matrix $A$ satisfying condition (C1), we explicitly described the set of solutions to the system $A^{\top} x = \lambda e$ in terms of fundamental solutions and null vectors. This led to our main result, a characterization of the sign of the determinant of $A$ in terms of the number of closed SCCs of the graph $G(A)$ associated with $A$ and the solutions to the system $A^{\top} x = e$. We also showed that if the row sums of $A$ are all strictly positive or all strictly negative, then the number of closed SCCs determines $\operatorname{sgn}(\det A)$. Finally, we provided an easy-to-verify sufficient condition under which a non-negative matrix $A$ in the class (C1) is a $P$-matrix. The problem of finding a simple condition that is both necessary and sufficient for $A$ to be a $P$-matrix remains open.