Sign-restricted matrices of $0$'s, $1$'s, and $-1$'s

We study {\em sign-restricted matrices} (SRMs), a class of rectangular $(0, \pm 1)$-matrices generalizing the alternating sign matrices (ASMs). In an SRM each partial column sum, starting from row 1, equals 0 or 1, and each partial row sum, starting from column 1, is nonnegative. We determine the maximum number of nonzeros in SRMs and characterize the possible row and column sum vectors. Moreover, a number of results on interchange operations are shown, both for SRMs and, more generally, for $(0, \pm 1)$-matrices. The Bruhat order on ASMs can be extended to SRMs, and the resulting partially ordered set is a distributive lattice. Also, we study polytopes associated with SRMs and some related decompositions.


Introduction
Let m and n be positive integers and let Δ_{m,n} be the set of all m × n matrices each of whose entries is 0, +1, or −1, that is, (0, ±1)-matrices. Perhaps the best known class of (0, ±1)-matrices are the n × n alternating sign matrices (ASMs) [5,6,7,8,14]. These are square matrices in which the ±1's in each row and column alternate, beginning and ending with a +1, and hence for which all row and column sums equal 1. The set of n × n ASMs is denoted by A_n. In [2,13] a generalization of ASMs, called sign matrices, has been defined, and these matrices can be rectangular. We prefer to call these matrices "sign-restricted matrices" to emphasize the restrictions on the signs; they are defined next.
A sign-restricted matrix (abbreviated here to SRM) is an m × n (0, ±1)-matrix A such that each partial column sum, starting from row 1, equals 0 or 1, and each partial row sum, starting from column 1, is nonnegative. This definition arose in [2], where these matrices are shown to be in bijection with combinatorial objects called semistandard Young tableaux, and they were further investigated in [13]. Row 1 and column 1 of an SRM can only contain 0's and +1's and, in particular, column 1 can contain at most one +1. Also, the +1's and −1's in a column alternate, and the corresponding full column sum is 1 or 0 depending on whether the last nonzero entry of the column, if any, is a 1 or a −1. Note that the transpose of an SRM need not be an SRM. If, in addition, the partial row sums of A were 0 or 1, all row and column sums equaled 1, and A were square, then A would be an ASM. Any leading r × s submatrix of an ASM is an SRM. Unlike ASMs, the last nonzero entry in each row and column of an SRM may be a −1. We denote the set of m × n SRMs by S_{m,n}. The subset of S_{m,n} consisting of those matrices with no −1's (so (0,1)-matrices) is denoted by S^+_{m,n}. A 1 × n SRM is just a (0,1)-vector, and the column sum vector of an m × n SRM is a (0,1)-vector, so an SRM can be considered as a generalization of a (0,1)-vector. A zero matrix is a sign-restricted matrix, as is every permutation matrix and subpermutation matrix.
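The two defining conditions can be tested directly; the following sketch (the function name `is_srm` is ours) does so for a matrix given as a list of rows.

```python
def is_srm(A):
    """Test the SRM conditions: every partial column sum (from row 1 down)
    is 0 or 1, and every partial row sum (from column 1 rightward) is
    nonnegative."""
    m, n = len(A), len(A[0])
    for j in range(n):
        s = 0
        for i in range(m):
            s += A[i][j]
            if s not in (0, 1):
                return False
    for i in range(m):
        s = 0
        for j in range(n):
            s += A[i][j]
            if s < 0:
                return False
    return True
```

For instance, `[[0, 1, 1], [1, 0, -1]]` passes, while its transpose `[[0, 1], [1, 0], [1, -1]]` fails (its first column has partial sums 0, 1, 2), illustrating that the transpose of an SRM need not be an SRM.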
Let A be an m × n SRM with row sum vector R = (r_1, r_2, …, r_m) and column sum vector S = (s_1, s_2, …, s_n). As remarked above, S is a (0,1)-vector, but R may have integer entries larger than 1. If each partial row sum of A equals 0 or 1, then R is also a (0,1)-vector and the transpose of A is also a sign-restricted matrix. In general, the number of 1's in S equals ∑_{i=1}^m r_i. For instance, the row sum vectors of the SRMs in Example 1.1 are (2, 0, 0), (2, 0, 1), and (2, 0, 0, 1), respectively. The column sum vectors are (1, 1, 0), (1, 1, 0, 1), and (1, 1, 1), respectively. The set of all SRMs with row sum vector R and column sum vector S is denoted by S(R, S), or by S_{m,n}(R, S) if we want to emphasize the dimensions of R and S. Similarly, we use the notations S^+(R, S) and S^+_{m,n}(R, S) to denote the SRMs with nonnegative entries. Notation: We let M_{m,n} denote the set of real m × n matrices, and simply write M_n when m = n.
The remainder of the paper is organized as follows. Section 2 considers the maximum number of nonzeros in SRMs and characterizes the possible row and column sum vectors of SRMs. Also, we study a connection to another class of (0, ±1)-matrices containing the incidence matrices of directed graphs. In Section 3 we consider the class of SRMs with specified row and column sum vectors and investigate their connectivity properties under interchanges. Section 4 is devoted to the Bruhat order for the class S_{m,n}; we show that this determines a distributive lattice, and that it is the Dedekind-MacNeille completion of the Bruhat order restricted to S^+_{m,n}. In Section 5 we study a polytope associated with SRMs and some related decompositions.

Some basic properties of SRMs
First we consider the row and column sum vectors of an SRM.
Proposition 2.1. Let R = (r_1, r_2, …, r_m) and S = (s_1, s_2, …, s_n) be integral vectors. Then there exists an SRM A with row sum vector R and column sum vector S if and only if S is a (0,1)-vector and
r_1 + r_2 + ⋯ + r_m = s_1 + s_2 + ⋯ + s_n. (1)
In fact, A can be taken to be a matrix in S^+_{m,n}.
Proof. The conditions are clearly necessary, as an SRM has column sums 0 or 1, and (1) is trivial. Conversely, assume S is a (0,1)-vector and (1) holds. Let k = ∑_{i=1}^m r_i = ∑_{j=1}^n s_j. Then k ≤ n as S is a (0,1)-vector. Initially, let A = [a_ij] be the (0,1)-matrix with a 1 in the first row in those columns for which s_j = 1, while all other entries are zero. Thus A has column sum vector S. Next, we modify A by shifting the ones in the first row to other rows so as to obtain the row sum vector R; this may be done as ∑_{i=1}^m r_i = k. For instance, this may be done so that the ones are in a "staircase" pattern, in the sense that whenever a_ij = a_kl = 1 and j < l, then i ≤ k. The resulting matrix is a (0,1)-matrix with at most one 1 in each column, so a matrix in S^+_{m,n}. For m and n fixed, the maximum of ∑_{i=1}^m r_i over all m × n SRMs is n. This follows from Proposition 2.1, as each column sum is 0 or 1, so that ∑_i r_i = ∑_j s_j ≤ n. This bound is attained by taking S to be the all ones vector and R = (n, 0, …, 0).
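The construction in the proof can be sketched as follows (the function name is ours); it assumes S is a (0,1)-vector with sum(R) equal to sum(S).

```python
def staircase_srm(R, S):
    """Place sum(R) ones into the columns j with S[j] = 1, left to right:
    the first R[0] ones go in row 0, the next R[1] ones in row 1, and so on.
    The result is a (0,1)-matrix with row sums R and column sums S whose
    ones form a 'staircase' pattern, hence a matrix in S^+_{m,n}."""
    m, n = len(R), len(S)
    A = [[0] * n for _ in range(m)]
    cols = [j for j in range(n) if S[j] == 1]  # columns that receive a 1
    pos = 0
    for i in range(m):
        for _ in range(R[i]):
            A[i][cols[pos]] = 1
            pos += 1
    return A
```

Since the result has at most one 1 per column and no negative entries, it automatically satisfies both SRM conditions.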
The first column of an m × n SRM can contain at most one nonzero, and this nonzero is a 1. The second column can then contain at most three nonzeros, the third column at most five nonzeros, and so on, until we reach column ⌈(m+1)/2⌉, which can contain m nonzeros. After that we can alternate between columns containing (m − 1) and m nonzeros. This construction gives an m × n SRM with the maximum number ζ_{m,n} of nonzeros.
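For very small sizes, ζ_{m,n} can be confirmed by exhaustive search; the brute-force sketch below (names ours) is feasible only for tiny m and n, but it matches the column-by-column pattern 1, 2, 1 for a 2 × 3 SRM.

```python
from itertools import product

def is_srm(A, m, n):
    """SRM test: partial column sums in {0,1}, partial row sums >= 0."""
    for j in range(n):
        s = 0
        for i in range(m):
            s += A[i][j]
            if s not in (0, 1):
                return False
    for i in range(m):
        s = 0
        for j in range(n):
            s += A[i][j]
            if s < 0:
                return False
    return True

def max_nonzeros(m, n):
    """Exhaustive search over all 3^(m*n) matrices; tiny sizes only."""
    best = 0
    for entries in product((-1, 0, 1), repeat=m * n):
        A = [entries[i * n:(i + 1) * n] for i in range(m)]
        if is_srm(A, m, n):
            best = max(best, sum(e != 0 for e in entries))
    return best
```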
Example 2.2. We give two examples to illustrate how the maximum is obtained.
In particular, if m = n, the formula takes a simpler form.
Proof. This is a straightforward computation using the above construction.
The maximum difference between the number of 1's and the number of −1's in an m × n SRM is n. In fact, in each column, due to the alternating property, the difference between the number of 1's and the number of −1's is either 0 or 1. Thus, for a matrix in S m,n , the maximum difference between the number of 1's and the number of −1's is n, and this is attained for the matrix whose first row is the all ones vector, and all other entries are 0.
The incidence matrices associated with directed graphs give a well-known class of (0, ±1)-matrices. Let D = (V, E) be a directed graph with at least one (directed) edge and vertices {1, 2, …, n}. An edge from a vertex i to a vertex j is denoted by (i, j); we assume that D does not have any loops, so that i ≠ j. The incidence matrix M of D then has rows corresponding to its vertices (in some order) and columns corresponding to its edges (again, in some order). The column corresponding to the edge (i, j) has a 1 in row i, a −1 in row j, and otherwise only zeros. In particular, each column contains exactly two nonzeros. The first column of M contains a −1 no matter how the edges of D are ordered, and so some row will begin with a −1; hence M is never an SRM. We can remedy this by using loops.
Let a loop of a digraph at a vertex i correspond to a column in the incidence matrix with a 1 in row i and otherwise all 0's. If we put loops at all vertices and let these loops correspond to the first columns of the incidence matrix (so that the incidence matrix begins with the identity matrix I_n), then no row will begin with a −1. But this does not guarantee that the incidence matrix is an SRM under some ordering of the other edges. We now determine when including certain loops leads to an SRM.
Let S ⊆ V and let D(S) be the digraph obtained from D by putting a loop at each vertex in S. Let M(S) be the incidence matrix obtained from the incidence matrix M of D by augmenting M by distinct unit vectors corresponding to the vertices in S, where these unit vectors come first. We call M(S) the generalized incidence matrix for D = (V, E) and S. Let d^+(v) (resp. d^−(v)) denote the number of edges of D with v as tail (resp. head).
Example 2.4. The generalized incidence matrix, using the vertex order given by their indices and a suitable edge order, is an SRM.
Theorem 2.5. Let D = (V, E) be a digraph and let S ⊆ V. Then the rows and columns of M(S) can be ordered so that M(S) is an SRM if and only if (i) D is acyclic, and (ii) d^−(v) ≤ d^+(v) + 1 for every vertex v, and v ∈ S whenever d^−(v) = d^+(v) + 1.
Proof. First assume that the rows and columns of M are ordered so that M(S) is an SRM. Suppose to the contrary that D contains a directed cycle C, and let i be the first row in M whose corresponding vertex lies in C. Now, C contains exactly two edges incident to vertex i, one with i as its head and one with i as its tail. For each of these two edges, the other end vertex (different from i) corresponds to some row below row i. Therefore, since each column of M contains exactly two nonzero entries, one of the columns of M (and so of M(S)) corresponding to these two edges must have its first nonzero equal to −1, contradicting that M(S) is an SRM. This shows that D must be acyclic, so (i) holds.
Next consider (ii). Suppose some vertex v satisfies d^−(v) ≥ d^+(v) + 2, i.e., the indegree is at least 2 larger than the outdegree. But then the row sum of M in the row corresponding to v is at most −2, and hence the corresponding row sum of M(S) is negative, which is impossible for an SRM. So d^−(v) ≤ d^+(v) + 1 for every vertex v. Moreover, if d^−(v) = d^+(v) + 1, then v ∈ S; otherwise the row sum of M(S) in the corresponding row would be negative. So, (ii) holds.
Conversely, assume conditions (i)-(ii) hold. As D is acyclic, its vertices, and the rows of M, may be ordered v_1, v_2, …, v_n such that each edge has the form (v_i, v_j) for some i < j. Next we describe a suitable ordering of the columns of M so that M(S) is an SRM. Choose v_j with d^+(v_j) = 0, d^−(v_j) = 1 and j maximal with this property; such a vertex must exist. Choose a directed path P in D with a maximal number of edges and with terminal vertex v_j. Order the edges consecutively along the path P, with the edge having v_j as its head as the first one, and order the corresponding columns of M in the same way. In the submatrix defined by the columns corresponding to P, each row contains a 1 and a −1, in that order, except the row corresponding to v_j, where the only nonzero is a −1. However, that row has a 1 in an earlier column, as the unit columns corresponding to S come first. Now, remove the edges of P from D and repeat this procedure in the resulting digraph, choosing such a path and ordering the corresponding columns accordingly. Then the resulting matrix is an SRM, as desired.
Example 2.4 illustrates Theorem 2.5 and the vertex and edge orders used in the proof. In summary, the theorem asserts that if a digraph D satisfies (i) and (ii), one may insert as the initial columns in its incidence matrix a set of distinct unit columns so that the resulting matrix is an SRM. Moreover, the unit columns needed are identified.
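As a small hand-worked instance consistent with the proof's ordering (the digraph and loop set are our own choice): take D with vertices 1, 2, 3 and edges (1,2) and (2,3). Then D is acyclic, vertex 3 is the only vertex whose indegree exceeds its outdegree, so we place a loop there, and we order the edge columns backwards along the path, as in the proof.

```python
def is_srm(A):
    """SRM test: partial column sums in {0,1}, partial row sums >= 0."""
    m, n = len(A), len(A[0])
    for j in range(n):
        s = 0
        for i in range(m):
            s += A[i][j]
            if s not in (0, 1):
                return False
    for i in range(m):
        s = 0
        for j in range(n):
            s += A[i][j]
            if s < 0:
                return False
    return True

# Generalized incidence matrix M(S) for D = ({1,2,3}, {(1,2), (2,3)})
# with S = {3}: the loop column comes first, then the edge columns
# ordered (2,3), (1,2).
M_S = [
    [0, 0, 1],   # vertex 1: tail of (1,2)
    [0, 1, -1],  # vertex 2: head of (1,2), tail of (2,3)
    [1, -1, 0],  # vertex 3: loop, head of (2,3)
]
```

One checks that every partial row sum of `M_S` is nonnegative precisely because the loop column for vertex 3 precedes the edge columns.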

Interchanges
In this section we first consider certain connectivity properties of the class S_{m,n}. Define E to be the 2 × 2 matrix with rows (1, −1) and (−1, 1). (4)
Let A be an SRM and let B be obtained from A by adding or subtracting E in some 2 × 2 submatrix of A (not necessarily with consecutive rows and consecutive columns). We call this operation an interchange. Whether or not B is an SRM depends on A and the chosen submatrix. In any case R(A) = R(B) and S(A) = S(B), where R(A) (resp., S(A)) is the row sum (resp., column sum) vector of A, and similarly for B.
We now establish the following interchange result, the first conclusion of which shows that by a sequence of interchanges every matrix in S m,n (R, S) can be brought to a matrix in S + m,n (R, S). The result is related to the construction in [2] of the "key" of an ASM.

Theorem 3.1. Let A, B ∈ S_{m,n}(R, S). Then:
(i) There exist SRMs A = A_1, A_2, …, A_k = A*, each obtained from the previous one by an interchange, such that A* has no −1's, i.e., A* ∈ S^+_{m,n}(R, S).
(ii) A can be transformed into B by a sequence of interchanges.
Proof. Assume first that A = [a_ij] has at least one −1. Choose a position (i, j) with a_ij = −1 and i + j minimal with this property; we then call (i, j) a top-left position of a −1. (Such a position may not be unique, but this has no importance.) Since A is an SRM, there must exist a k < i such that a_kj = 1, and there exists l < j such that a_il = 1 (as the first nonzero in a row or column cannot be a −1). Now, we must have a_kl = 0. In fact, a_kl cannot be −1 as (i, j) is a top-left position of a −1. Moreover, a_kl cannot be 1, because then column l would have to contain a −1 in some position (i′, l) with k < i′ < i, again contradicting that (i, j) is a top-left position of a −1.
Let the matrix A ′ be obtained from A by adding the 2 × 2 matrix E (see (4)) to the submatrix of A corresponding to rows k, i and columns l, j. Then, from the properties just mentioned, it follows that A ′ is an SRM. Moreover, the number of −1's in A ′ is one less than the number in A. We can therefore repeat this procedure of interchanges and find a sequence of SRMs, each obtained by an interchange applied to the previous one, such that the final matrix A * does not have any entries equal to −1, i.e., it is a (0, 1)-matrix. As mentioned, interchanges do not change any row or column sums, so R(A * ) = R(A) and S(A * ) = S(A). This proves (i).
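The procedure in the proof can be sketched as follows (the function name is ours); it assumes the input is an SRM and returns the (0,1)-matrix reached after all −1's have been removed. Choosing the nearest 1 above and the nearest 1 to the left is one valid choice of k and l.

```python
def remove_minus_ones(A):
    """Repeatedly pick a -1 in a top-left position (i, j) (i + j minimal),
    take the nearest 1 above it in column j (row k) and the nearest 1 to
    its left in row i (column l), and add E = [[1, -1], [-1, 1]] on rows
    (k, i) and columns (l, j).  Each interchange removes one -1 and
    preserves all row and column sums."""
    A = [row[:] for row in A]
    while True:
        negs = [(i, j) for i, row in enumerate(A)
                for j, v in enumerate(row) if v == -1]
        if not negs:
            return A
        i, j = min(negs, key=lambda p: p[0] + p[1])
        k = max(r for r in range(i) if A[r][j] == 1)  # nearest 1 above
        l = max(c for c in range(j) if A[i][c] == 1)  # nearest 1 to the left
        A[k][l] += 1
        A[k][j] -= 1
        A[i][l] -= 1
        A[i][j] += 1
```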
Similarly, we may find interchanges and intermediary SRMs connecting B to a (0, 1)-matrix B * . Then R(B * ) = R(B) = R(A) = R(A * ) and S(B * ) = S(B) = S(A) = S(A * ), so A * and B * are both contained in A(R, S), and by the Ryser interchange theorem (see e.g., [3]), one can use interchanges to go from A * to B * such that intermediary matrices are (0, 1)-matrices. In fact, the last interchange result is easy to show directly, because A * and B * are (0, 1)-SRMs, so each column contains at most one 1. The theorem now follows.
The algorithm given in the previous proof is illustrated in the next example.
Example 3.2. Consider the SRM A below, and its transformation into an SRM which is a (0,1)-matrix. Here we first added the matrix E to the submatrix given by the first two rows and columns, and then we added E to the submatrix given by rows 1, 3 and columns 2, 3 to get the final matrix.
From the proof of Theorem 3.1 it follows that every matrix class S(R, S), consisting of SRMs with row sum vector R and column sum vector S, contains a unique (0,1)-matrix A = Ā(R, S) with the following structure: ignoring zero columns (where s_j = 0), the first row has ones in the first r_1 columns, the second row has ones in the next r_2 columns, etc. The example above shows the canonical matrix when R = (2, 0, 0, 1) and S = (1, 1, 1).
We now turn to interchange properties of general (0, ±1)-matrices with a prescribed row sum vector R and column sum vector S. Here again an interchange means adding or subtracting the matrix E in (4) to some 2 × 2 submatrix in such a way that one obtains a new (0, ±1)-matrix, necessarily with the same row sum vector R and column sum vector S.
Let R = (r_1, r_2, …, r_m) and S = (s_1, s_2, …, s_n) be nonnegative integral vectors with r_1 + r_2 + ⋯ + r_m = s_1 + s_2 + ⋯ + s_n. Let A(R, S) denote the class of (0,1)-matrices with specified row sum vector R and specified column sum vector S. Also, let A^±(R, S) be the set of all (0, ±1)-matrices with row sum vector R and column sum vector S. Let J be the m × n matrix of all 1's. Then the mapping A → A + J is a bijection between A^±(R, S) and the set A^{0,1,2}(R′, S′) of all m × n (0,1,2)-matrices with row sum vector R′ = R + n e^{(m)} and column sum vector S′ = S + m e^{(n)}, where e^{(m)} (resp. e^{(n)}) is the all 1's vector of size m (resp. size n). The special case of Theorem 6.2.4 in [3] obtained by taking p = 2 gives a necessary and sufficient condition for the nonemptiness of A^{0,1,2}(R′, S′) and thus of A^±(R, S). Without loss of generality, S can be assumed to be non-increasing.
Lemma 3.3. Let R = (r_1, r_2, …, r_m) and S = (s_1, s_2, …, s_n) be nonnegative integral vectors with r_1 + r_2 + ⋯ + r_m = s_1 + s_2 + ⋯ + s_n. Assume that S is nonincreasing.
By the bijection of the previous paragraph, we have the following as a corollary of Lemma 3.3.
Example 3.5. Let m = n = 2, and R = S = (2, 0). Then A^±(R, S) is nonempty: it contains the (in fact unique) matrix with rows (1, 1) and (1, −1). One checks that the condition in the theorem holds. Note that in this example A(R, S) is empty.
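The shift bijection of the previous paragraphs is easy to verify on this class (our own check; note that with R = S = (2, 0) the member of A^±(R, S) is forced to have rows (1, 1) and (1, −1)):

```python
def add_J(A):
    """The bijection A -> A + J: it sends (0, +-1)-matrices to
    (0,1,2)-matrices, shifting each row sum by n and each column sum by m."""
    return [[a + 1 for a in row] for row in A]

A = [[1, 1], [1, -1]]            # the matrix in A^{pm}((2,0), (2,0))
B = add_J(A)                     # a (0,1,2)-matrix
row_sums = [sum(r) for r in B]   # R' = R + n*(1,1) = (4, 2)
col_sums = [sum(c) for c in zip(*B)]
```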
In [1] (see also Theorem 4.4.6 and the paragraph following its proof in [4]), the following result is established.
Lemma 3.6. Any two matrices in A^{0,1,2}(R′, S′) can be obtained from one another by a sequence of interchanges with all intermediary matrices also in A^{0,1,2}(R′, S′).
As an immediate corollary we obtain the following.
Corollary 3.7. Any two matrices in A^±(R, S) can be obtained from one another by a sequence of interchanges with all intermediary matrices also in A^±(R, S).
Note that in order to get a matrix with a −1 from a matrix A in A(R, S) by an interchange, A needs to have a 2 × 2 submatrix with at most one 1. It follows from Corollary 3.7 that if a nonempty class A^±(R, S) contains a (0,1)-matrix A, that is, a matrix in A(R, S), then A can be obtained from any matrix in A^±(R, S) by a sequence of interchanges where all intermediary matrices are in A^±(R, S). In particular, this is the case when m = n and R = S = (1, 1, …, 1), for then A^±(R, S) includes all permutation matrices. The next example illustrates Corollary 3.7.
Example 3.8. Consider I_5 and a (0, ±1)-matrix with all row and column sums equal to 1; by a sequence of interchanges the latter can be transformed into I_5. The next theorem characterizes when A(R, S) = A^±(R, S), and shows (when the class is nonempty) the structure of a certain matrix in that class.
Theorem 3.9. Let A(R, S) be nonempty. Then A(R, S) = A^±(R, S) if and only if R is a permutation of (n, …, n, …) and S is a permutation of (m, …, m, …).
Proof. Let A ∈ A(R, S). Suppose there does not exist a matrix in A^±(R, S) having a −1. Then every 0 in A must be the only 0 in its row, or the only 0 in its column (or both); otherwise A has a 2 × 2 submatrix with at most one 1, and then an interchange creates a matrix in A^±(R, S) with a −1. Thus the row and column sum vectors of A are of the form given in the theorem. It is easy to see that if R and S are of this form, then A(R, S) ≠ ∅ and A^±(R, S) = A(R, S).
Theorem 3.11. The convex hull of A^±(R, S) equals the set of n × n real matrices satisfying (5).
Proof. This follows from the fact that the vertex-edge incidence matrix of a bipartite graph is totally unimodular; see [15] (Section 19.3). In fact, this general fact implies that each extreme point A = [a_ij] of the polyhedron defined by (5) is integral, so A is a (0, ±1)-matrix satisfying the equations in (5). Therefore the set of extreme points is equal to A^±(R, S).

Bruhat order
We return to sign-restricted matrices. Recall that S^+_{m,n} denotes the set of m × n (0,1)-SRMs, equivalently, the set of m × n SRMs without any −1's. Thus the matrices in S^+_{m,n} have at most one 1 in each column and there is no restriction on the number of 1's in each row. The matrices in S^+_{m,n} are the incidence matrices of an ordered partition (X_1, X_2, …, X_m) of a subset of {1, 2, …, n} in which, contrary to the usual definition of a partition, some of the parts X_i may be empty. Two extreme cases are (∅, ∅, …, ∅), corresponding to the zero matrix O_{m,n} in S^+_{m,n}, and ({1, 2, …, n}, ∅, …, ∅), corresponding to the matrix in S^+_{m,n} whose first row is all 1's and other rows are all 0's. We can also think of S^+_{m,n} as a generalization of the set P^*_{m,n} of m × n subpermutation matrices (or, when m = n, the set of n × n permutation matrices P_n) where the restriction of at most one 1 in each row is removed, but the restriction of at most one 1 in each column is retained.
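The correspondence with ordered partitions is immediate to code (a sketch; the name is ours): row i of the matrix is the indicator vector of the part X_i.

```python
def to_ordered_partition(A):
    """Matrix in S^+_{m,n} -> ordered partition (X_1, ..., X_m) of a subset
    of {1, ..., n}, where X_i collects the columns having a 1 in row i."""
    return [{j + 1 for j, v in enumerate(row) if v == 1} for row in A]
```

The two extreme cases above correspond to all parts empty and to X_1 = {1, …, n} with all other parts empty.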
Let n be a positive integer. Consider the partially ordered set (actually a distributive lattice) (Q_n, ⊆) of subsets of {1, 2, …, n} ordered by inclusion. We may identify the elements of (Q_n, ⊆) with the partially ordered set of 2^n n-tuples of 0's and 1's, where (a_1, a_2, …, a_n) ≤ (b_1, b_2, …, b_n) if and only if a_i ≤ b_i for i = 1, 2, …, n. As is well known, the n × n permutation matrices are in bijective correspondence with the saturated chains of (Q_n, ⊆) from ∅ to {1, 2, …, n}; for instance, if n = 4, then (0, 0, 0, 0) < (0, 0, 1, 0) < (1, 0, 1, 0) < (1, 0, 1, 1) < (1, 1, 1, 1), and this corresponds to the permutation (3, 1, 4, 2) and its permutation matrix. There is a similar equivalence for the matrices in S^+_{m,n}, which we now discuss. Consider as above the partially ordered set (Q_n, ⊆). A multichain of length m in (Q_n, ⊆) is a sequence of subsets of {1, 2, …, n} of the form ∅ = X_0 ⊆ X_1 ⊆ X_2 ⊆ ⋯ ⊆ X_m. Notice that the definition of a multichain implies that it starts with ∅. Since column sums may equal 0, a multichain need not end with X_m = {1, 2, …, n}. Also, in contrast to the usual notion of a chain in a partially ordered set, in a multichain there may be repeats in the chain. Let C_{m,n} be the set of all multichains of (Q_n, ⊆) of length m. In terms of the identification of Q_n with n-tuples of 0's and 1's, a multichain allows for the possibility that successive n-tuples are equal.
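The passage from a chain of 0/1-tuples to a matrix can be sketched by taking successive differences (our own helper):

```python
def multichain_to_matrix(chain):
    """Multichain X_0 <= X_1 <= ... <= X_m of subsets of {1, ..., n}, given
    as 0/1-tuples with X_0 the zero tuple -> the matrix whose row i is the
    indicator vector of X_i minus X_{i-1}."""
    return [[b - a for a, b in zip(prev, cur)]
            for prev, cur in zip(chain, chain[1:])]
```

Applied to the saturated chain for n = 4 above, this recovers the permutation matrix of (3, 1, 4, 2).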
Generalizing the above, the matrices in S^+_{m,n} are in bijective correspondence with the multichains in C_{m,n}; for instance, if m = 4 and n = 6, then (0, 0, 0, 0, 0, 0) ≤ (0, 1, 0, 0, 1, 0) ≤ (0, 1, 1, 0, 1, 0) ≤ (0, 1, 1, 0, 1, 0) ≤ (1, 1, 1, 1, 1, 1). There is a partial order, denoted by ≤_B and called the Bruhat order, on the set P_n of n × n permutation matrices (and other classes of matrices as well, including ASMs), which can be defined as follows. For an m × n matrix A = [a_ij], let the sum-matrix Σ(A) be the m × n matrix whose (i, j) entry is the sum of the entries of A in its leading i × j submatrix. Then A ≤_B A′ provided that Σ(A) ≥ Σ(A′) (entrywise). The partially ordered set (P_n, ≤_B) is not a lattice if n ≥ 3. The Dedekind-MacNeille completion of (P_n, ≤_B), the (unique up to isomorphism) smallest lattice extension of (P_n, ≤_B), was shown by Lascoux and Schützenberger [11] to be the Bruhat order on the set A_n of n × n ASMs. The minimum element of the lattice (A_n, ≤_B) is the n × n identity matrix and the maximum element is the n × n anti-identity matrix L_n. In [10] the Dedekind-MacNeille completion of the poset of partial injective functions was determined. This is similar, but not identical, to our result in Theorem 4.2, since the posets considered in [10] are subposets of ours.
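The sum-matrix and the induced comparison can be sketched as follows (names ours):

```python
def sum_matrix(A):
    """Sigma(A): entry (i, j) is the sum of the entries of A in its leading
    (i+1) x (j+1) submatrix (0-based indexing)."""
    m, n = len(A), len(A[0])
    S = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            S[i][j] = (A[i][j]
                       + (S[i - 1][j] if i else 0)
                       + (S[i][j - 1] if j else 0)
                       - (S[i - 1][j - 1] if i and j else 0))
    return S

def bruhat_leq(A, C):
    """A <=_B C exactly when Sigma(A) >= Sigma(C) entrywise."""
    SA, SC = sum_matrix(A), sum_matrix(C)
    return all(SA[i][j] >= SC[i][j]
               for i in range(len(A)) for j in range(len(A[0])))
```

With this convention the identity matrix is the minimum among the 2 × 2 permutation matrices and the anti-identity is the maximum, as in the text.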
We can extend the Bruhat order, as defined above using the sum-matrix, to S^+_{m,n} and S_{m,n}, thereby obtaining two partially ordered sets (S^+_{m,n}, ≤_B) and (S_{m,n}, ≤_B).
Example 4.1. Let m = n = 2. Examining Figure 1, which shows the matrices in (S^+_{2,2}, ≤_B) along with their sum-matrices, we see that every pair of elements except {b, g} has a unique LUB (least upper bound in the Bruhat order), and every pair of elements except {c, d} has a unique GLB (greatest lower bound in the Bruhat order). There is only one matrix in S_{2,2} that is not in S^+_{2,2}, namely the matrix (p) with rows (0, 1) and (1, −1). We see that Σ(p) ≥ Σ(c), Σ(p) ≥ Σ(d), Σ(p) ≤ Σ(b), and Σ(p) ≤ Σ(g). We conclude that (p) is the GLB of (c) and (d), and (p) is the LUB of (b) and (g) in (S_{2,2}, ≤_B), and that (S_{2,2}, ≤_B) is a lattice; indeed (S_{2,2}, ≤_B) is therefore the Dedekind-MacNeille completion of (S^+_{2,2}, ≤_B); see Figure 1.
It follows that (P_n, ≤_B) is a subposet of (S_{n,n}, ≤_B), and (P^*_{m,n}, ≤_B) is a subposet of (S_{m,n}, ≤_B). Clearly, the maximal element of both (S^+_{m,n}, ≤_B) and (S_{m,n}, ≤_B) is O_{m,n}, and the minimal element is the m × n matrix Υ_{m,n} with all 1's in row 1 and 0's elsewhere. We have that Σ(Υ_{m,n}) has all of its rows equal to (1, 2, …, n), and hence the sum of the entries of Σ(Υ_{m,n}) equals m·n(n+1)/2.
We use the notations A ∨ B = LUB{A, B} and A ∧ B = GLB{A, B} for A and B in a lattice. Also a ∨ b = max{a, b} and a ∧ b = min{a, b} for real numbers a, b.
Theorem 4.2. (S_{m,n}, ≤_B) is a distributive lattice, and it is the Dedekind-MacNeille completion of (S^+_{m,n}, ≤_B).
Proof. Let A = [a_ij], A′ = [a′_ij] ∈ S_{m,n}. For a position (i, j), consider the four sums s_1, s_1 + a_ij, s_2, s_2 + a_ij, where s_1 = ∑_{r=1}^{i−1} a_rj and s_2 = ∑_{s=1}^{j−1} a_is. Similarly, for A′, we obtain the four sums s′_1, s′_1 + a′_ij, s′_2, s′_2 + a′_ij. Here s_1, s′_1 ∈ {0, 1} as A, A′ ∈ S_{m,n}. Similarly, s_1 + a_ij, s′_1 + a′_ij ∈ {0, 1}, and each of the four numbers s_2, s_2 + a_ij, s′_2, s′_2 + a′_ij is nonnegative.
Let T = [t_ij] be the entrywise maximum of Σ(A) and Σ(A′), and let C = [c_ij] be the unique matrix with Σ(C) = T, so that
c_ij = t_ij − t_{i−1,j} − t_{i,j−1} + t_{i−1,j−1}. (7)
Here t_0j = 0 for each j and t_i0 = 0 for each i (as for A and A′).
We now prove that C ∈ S_{m,n}. First, let 1 < j ≤ n. Define I^+_j = {i : t_ij = t_{i,j−1} + 1}, and note that 0 ∉ I^+_j. Assume i − 1 ∉ I^+_j, i ∈ I^+_j, i + 1 ∈ I^+_j, …, k ∈ I^+_j, k + 1 ∉ I^+_j. From (7) we get c_ij = 1, c_{i+1,j} = ⋯ = c_{kj} = 0, and c_{k+1,j} = −1. This implies that the nonzeros in column j of C alternate between 1 and −1, starting with a 1 (if any), as 0 ∉ I^+_j. Also, the first column of T is the entrywise maximum of the first columns of Σ(A) and Σ(A′), and therefore the first column of C is either zero or contains a single 1.
This proves that C ∈ S_{m,n}. Thus each pair A, A′ of matrices in S_{m,n} has a unique greatest lower bound (meet) in the Bruhat order, given by the matrix C above. Since (S_{m,n}, ≤_B) is a finite partially ordered set (or use a similar argument), the corresponding statement for the least upper bound (join) holds as well. Therefore (S_{m,n}, ≤_B) is a lattice. In order that (S_{m,n}, ≤_B) be distributive, we must have A ∧ (A′ ∨ A″) = (A ∧ A′) ∨ (A ∧ A″) for all A, A′, A″. The corresponding property for the real numbers with the usual ≤ order relation (with max and min as join and meet) holds. Since the join and meet in (S_{m,n}, ≤_B) are computed componentwise on the sum-matrices, it follows that (S_{m,n}, ≤_B) is distributive.
It remains to prove that (S_{m,n}, ≤_B) is the Dedekind-MacNeille completion of (S^+_{m,n}, ≤_B). This will follow by showing that any given matrix A ∈ S_{m,n} is the meet of some set of matrices in S^+_{m,n}. Let S = Σ(A) = [s_ij]. For i = 1, 2, …, m, let S^(i) be the m × n matrix whose first (i − 1) rows are zero and whose (m − i + 1) remaining rows are each equal to row i of S. Then clearly S = max{S^(1), S^(2), …, S^(m)} (entrywise), as S has monotone columns. Now, S^(i) = Σ(A^(i)), where A^(i) is the (0,1)-matrix whose only nonzero row is row i, and which contains a 1 in position (i, j) precisely when row i of S has an increase in column j, i.e., s_{i,j−1} < s_ij (j ≤ n) (where we think of the zeroth row and column of S as containing only zeros). Note that every increase in a row of S is by 1, as the columns of A have alternating signs, so A^(i) ∈ S^+_{m,n}. Therefore A is the meet of {A^(i) : i ≤ m} in the Bruhat order. Thus (S_{m,n}, ≤_B) is the Dedekind-MacNeille completion of (S^+_{m,n}, ≤_B). The construction in the final part of the proof is illustrated by the next example, in which the matrix A is the meet of A^(1), A^(2), …, A^(6) in (S_{m,n}, ≤_B).
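The decomposition in the last part of the proof can be sketched as follows (names and the illustrating matrix A are ours); the entrywise maximum of the Σ(A^(i)) recovers Σ(A), so A is the meet of the A^(i).

```python
def sum_matrix(A):
    """Sigma(A): leading-submatrix sums (0-based indexing)."""
    m, n = len(A), len(A[0])
    S = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            S[i][j] = (A[i][j]
                       + (S[i - 1][j] if i else 0)
                       + (S[i][j - 1] if j else 0)
                       - (S[i - 1][j - 1] if i and j else 0))
    return S

def row_components(A):
    """The matrices A^(i) of the proof: A^(i) has a single nonzero row i,
    with a 1 in column j exactly where row i of S = Sigma(A) increases."""
    S = sum_matrix(A)
    m, n = len(A), len(A[0])
    comps = []
    for i in range(m):
        B = [[0] * n for _ in range(m)]
        for j in range(n):
            if S[i][j] > (S[i][j - 1] if j else 0):
                B[i][j] = 1
        comps.append(B)
    return comps

A = [[0, 1, 1], [1, 0, -1]]   # a small SRM used for illustration
S = sum_matrix(A)
comps = row_components(A)
# entrywise maximum of the Sigma(A^(i)) -- should equal Sigma(A)
maxS = [[max(sum_matrix(B)[i][j] for B in comps)
         for j in range(len(A[0]))] for i in range(len(A))]
```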
A finite lattice has a unique smallest element called its zero element. A join-irreducible element of a finite lattice is a nonzero element of the lattice which cannot be expressed as the join of two elements different from it. A nonzero element is join-irreducible if and only if it covers exactly one element in the lattice. A meet-irreducible element is defined analogously. These and other properties can be found in [9]. It is straightforward to verify that the meet-irreducible elements of S_{m,n} are the matrices in S^+_{m,n} with exactly one nonzero row. This follows as in Example 4.3, using the observation that two distinct (0,1)-vectors of the same size have different sum-matrices.
By Birkhoff's representation theorem for finite distributive lattices (Theorem 8.17 in [9]), the lattice (S_{m,n}, ≤_B) can be represented as the lattice (J_{m,n}, ⊆) whose elements are the set J_{m,n} of join-irreducible elements of (S_{m,n}, ≤_B), where we identify each element x with the set {u ≤_B x : u ∈ J_{m,n}} of join-irreducibles below x, and the partial order is that of set-containment. Equivalently, we represent (S_{m,n}, ≤_B) as the lattice whose elements are the set M_{m,n} of meet-irreducible elements of (S_{m,n}, ≤_B), where we identify each element x with the set {u ≥_B x : u ∈ M_{m,n}} of meet-irreducibles above x, and the partial order is that of reverse set-containment.
Next, we characterize the sum-matrices for the class S^+_{m,n}.
Theorem 4.5. Let S = [s_ij] be an m × n integral matrix. Then S is the sum-matrix of a matrix in S^+_{m,n} if and only if
s_ij − s_{i−1,j} − s_{i,j−1} + s_{i−1,j−1} ≥ 0 and 0 ≤ s_ij − s_{i,j−1} ≤ 1 (for all i, j), (8)
where we define s_0j = s_i0 = 0 for each i and j.
Proof. Let A ∈ S^+_{m,n} and let S = Σ(A) = [s_ij]. Then the first set of constraints in (8) holds as A is nonnegative and a_ij = s_ij − s_{i−1,j} − s_{i,j−1} + s_{i−1,j−1} for each i, j. The other constraints hold as each column of A has at most one 1.
Conversely, assume S satisfies (8). As mentioned, the linear map T : M_{m,n} → M_{m,n} given by T(A) = Σ(A) is an isomorphism, so there is a unique matrix A ∈ M_{m,n} such that T(A) = S, and this A = [a_ij] is given by a_ij = s_ij − s_{i−1,j} − s_{i,j−1} + s_{i−1,j−1} for each i, j. This matrix A is integral. Also, the first set of constraints in (8) implies a_ij ≥ 0 for each i, j. Moreover, the second set of constraints in (8) gives 0 ≤ a_1j + a_2j + ⋯ + a_ij ≤ 1 for each i and j, which (as A is nonnegative and integral) means that A is a (0,1)-matrix with at most one 1 in every column, so A ∈ S^+_{m,n}, as desired.
Let A ∈ A(R, S). A Bruhat interchange (applied to A) is to replace a 2 × 2 submatrix of A equal to the 2 × 2 anti-identity matrix (with 1's in positions (1,2) and (2,1)) by the identity matrix I_2; the two rows and two columns involved need not be consecutive.
Lemma 4.6. Let A, C ∈ A(R, S), where S = (1, 1, …, 1). Then Σ(A) ≥ Σ(C) if and only if A can be transformed into C by a sequence of inverse Bruhat interchanges.
Proof. The assumptions on R and S assure that A(R, S) is nonempty. If A can be transformed into C by inverse Bruhat interchanges, then clearly Σ(A) ≥ Σ(C). Now suppose that Σ(A) ≥ Σ(C). Let the k'th row be the first row where Σ(A) and Σ(C) differ, and let l be the first position in row k where they differ. Thus a_kl = 1 and c_kl = 0. Since A and C have the same row sums, let t > l be the first position where a_kt = 0 and c_kt = 1. Consider the submatrices of A and C in the region determined by rows k + 1, …, m and columns l + 1, …, t. Note that column t of A has a 1 in this region: A and C agree in column t above row k, and both have only 0's in column t above row k, so the single 1 in column t of A lies below row k. Consider the uppermost 1 of A in this region, say in position (k′, l′). Let A′ = [a′_ij] be obtained from A by interchanging columns l and l′; as each column has exactly one 1, this is the inverse of a Bruhat interchange. Therefore A ≤_B A′ and A′ ≠ A. One checks that Σ(A′) ≥ Σ(C), so A ≤_B A′ ≤_B C, as desired. Also, A′ and C agree in one more position in row k, namely (k, l). The desired result now follows by induction.
Proof (of Theorem 4.7). We first extend A to A′ and C to C′ by appending a new row of 0's and 1's so that all column sums of A′ and C′ are equal to 1.
Let the row sums of A′ be p_1, p_2, …, p_{m+1} and the row sums of C′ be q_1, q_2, …, q_{m+1}. Since A′ and C′ have exactly one 1 in each column, we have p_1 + p_2 + ⋯ + p_{m+1} = q_1 + q_2 + ⋯ + q_{m+1}. We extend A′ and C′ to (m + 1) × (n + t) (0,1)-matrices, for a suitable integer t, by including columns with exactly one 1 so that the resulting matrices A″ and C″ have the same row sum vector R; in each case we use the earliest column as we go down the rows. Thus A″ and C″ belong to the class A(R, S) where S is a vector of all 1's. Moreover, as Σ(A) ≥ Σ(C), it follows that Σ(A″) ≥ Σ(C″); in fact, Σ(A″)_{m+1,j} = Σ(C″)_{m+1,j} = j for each j. Thus by Lemma 4.6, A″ can be transformed to C″ by a sequence of inverse Bruhat interchanges. There are four types of interchanges, depending on where the corresponding 2 × 2 matrix lies in the partition of the extended matrix into the blocks X_1 (the original columns of the first m rows), X_2 (the appended columns of the first m rows), X_3 (the original columns of the appended row), and X_4 (the appended columns of the appended row). Here X_3 and X_4 have only one row. The relation between the position of the interchange and the type of operation in the theorem is as follows:
• Wholly in X_1, and so (i).
• In X_1 and X_2, and so (iii).
• In X_1 and X_3, and so (iv).
Hence if Σ(A) ≥ Σ(C), we can get from A to C by a sequence of inverses of the operations (i), (ii), (iii), and (iv).
We remark that the proof of Theorem 4.7 actually contains an efficient algorithm which, for given matrices A, C ∈ S^+_{m,n} with A ≤_B C, constructs matrices K^(i) with … (ii) replacing a zero column n with a column with exactly one 1, where this 1 is in the last position, or, more generally, replacing column n, which has a 1 in row j, with a column which has a 1 in row (j − 1); (iii) interchanging column j with column (j − 1), where the 1 in column j is in the last position and column (j − 1) is a zero column.
Proof. This follows from Theorem 4.7, as these are the operations in that theorem which increase the sum of the entries of the sum matrix $\Sigma$ by exactly 1.
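The claim that these operations change the total of the entries of $\Sigma$ by exactly 1 can be checked on an example of operation (iii) with a small Python sketch (function names are ours; indices 0-based, so "column $j-1$" is the left neighbor):

```python
def sigma_total(A):
    """Sum of all entries of the sum matrix Sigma(A): the entry in position (i, j)
    (0-indexed) is counted in (m - i) * (n - j) of the partial sums."""
    m, n = len(A), len(A[0])
    return sum(A[i][j] * (m - i) * (n - j) for i in range(m) for j in range(n))

def swap_with_zero_left_neighbor(A, j):
    """Operation (iii): interchange column j (whose single 1 is in the last row)
    with the zero column j - 1."""
    B = [row[:] for row in A]
    for row in B:
        row[j - 1], row[j] = row[j], row[j - 1]
    return B
```

On the example below the 1 in the last row moves one column to the left, and the total of the $\Sigma$-entries increases by exactly 1.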

Polytope and decomposition
In [13] the sign matrix polytope $P_{m,n}$ is defined as the convex hull of the matrices in $S_{m,n}$ (the SRMs of size $m \times n$). It is stated in [13] that "all mn entries contribute to the dimension" and thus that the dimension of $P_{m,n}$ is $mn$ for $m > 1$. In fact, every $m \times n$ $(0,1)$-matrix $E_{ij}$ with all 0's except for a single 1 in position $(i,j)$ is in $S_{m,n}$, and these $mn$ matrices are linearly independent; since the zero matrix also lies in $S_{m,n}$, the polytope $P_{m,n}$ contains the standard simplex in $M_{m,n}$ and so has dimension $mn$. In [13] the following theorem is proved: Theorem 5.1. The set of extreme points of $P_{m,n}$ is $S_{m,n}$.
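The defining conditions of an SRM translate directly into a membership test; the sketch below (function name ours) checks the partial column sums and partial row sums, and confirms that each matrix $E_{ij}$ is indeed an SRM:

```python
def is_srm(A):
    """Check the SRM conditions: every partial column sum (from row 1)
    equals 0 or 1, and every partial row sum (from column 1) is nonnegative."""
    m, n = len(A), len(A[0])
    for j in range(n):
        s = 0
        for i in range(m):
            s += A[i][j]
            if s not in (0, 1):
                return False
    for i in range(m):
        s = 0
        for j in range(n):
            s += A[i][j]
            if s < 0:
                return False
    return True
```

For instance, every matrix with a single 1 passes, the zero matrix passes, and a matrix whose row begins with a $-1$ fails.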
This theorem admits a simple proof based on the proof in [6] that the extreme points of the convex hull of the $n \times n$ ASMs are precisely the $n \times n$ ASMs. We formulate the following lemma, which is essentially the proof given in [6]. Proof. Suppose that $x \in X_n$ and $x$ is a unit vector. Then, since the $\pm 1$'s in the $x^{(i)}$ alternate, $x^{(j)} = x$ for some $j$ with $\lambda_j = 1$. Now suppose that $x$ is not a unit vector and that $k \ge 2$. Then $x$ contains both a 1 and a $-1$, so there exist $p$ and $q$ such that $p + 1 < q$ and $x_p = 1$, $x_{p+1} = \cdots = x_{q-1} = 0$, $x_q = -1$.
It follows that all $x^{(i)}$ have a 1 in position $p$ and a $-1$ in position $q$. But then all $x^{(i)}$ have either 0 or $-1$ in position $p+1$, with at least one $-1$, and hence position $p+1$ of $\lambda_1 x^{(1)} + \lambda_2 x^{(2)} + \cdots + \lambda_k x^{(k)}$ does not equal 0, a contradiction. Theorem 5.1 follows immediately from Lemma 5.2 by considering any nonzero column of a matrix in $S_{m,n}$.
In [6] the following notion was introduced.
Letting $B = J$, the all 1's matrix, we see that an integral matrix $A$ is sum-majorized by $J$ if and only if $A$ is an ASM. Another special case is $B = rJ$ for some positive integer $r$, and this corresponds to the notion of higher spin ASMs that was studied in [5]. The following polyhedral result was shown in [6]. Now, we connect this to SRMs and consider the following variation of (9) for a given $m \times n$ matrix $A = [a_{ij}]$ and a nonnegative integer $c$:
$$0 \le \sum_{i'=1}^{i} a_{i'j} \le 1, \qquad \sum_{j'=1}^{j} a_{ij'} \ge 0, \qquad \sum_{j'=1}^{j} a_{ij'} \le c \qquad (1 \le i \le m,\ 1 \le j \le n). \tag{10}$$
We call an integral matrix $A$ satisfying (10) a $c$-SRM. Such a matrix must be a $(0,\pm 1)$-matrix with its nonzeros alternating in every column. When $c \ge n$, a $c$-SRM is precisely an SRM (since an SRM has each row sum at most $n$, and then the third set of constraints in (10) is redundant). In general, the parameter $c$ bounds the row sums of the matrix. Let $S^{c}_{m,n}$ denote the class of $c$-SRMs of size $m \times n$, and let the $c$-SRM polytope $P^{c}_{m,n}$ be defined as the convex hull of the matrices in $S^{c}_{m,n}$. So, when $c \ge n$, we have $S^{c}_{m,n} = S_{m,n}$ and $P^{c}_{m,n} = P_{m,n}$. The following result generalizes the linear inequality description of $P_{m,n}$ given in [13], and the proof is different and short.
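A membership test for $c$-SRMs only adds the row-sum bound $c$ to the ordinary SRM conditions; the sketch below (function name ours, following the constraints as described in the text) illustrates this:

```python
def is_c_srm(A, c):
    """Check the c-SRM conditions: every partial column sum equals 0 or 1,
    and every partial row sum lies between 0 and c (the bound c is the only
    extra constraint compared with an ordinary SRM)."""
    m, n = len(A), len(A[0])
    for j in range(n):
        s = 0
        for i in range(m):
            s += A[i][j]
            if s not in (0, 1):
                return False
    for i in range(m):
        s = 0
        for j in range(n):
            s += A[i][j]
            if not 0 <= s <= c:
                return False
    return True
```

For example, a row $(1, 1)$ is a $2$-SRM but not a $1$-SRM, and for $c \ge n$ the test agrees with the SRM conditions.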
The coefficient matrix of the linear system in (12) is totally unimodular; in fact, it is the arc-vertex incidence matrix of the directed graph $D$ introduced above (with some repeated arcs/columns). Moreover, all the constants in the system are integers as $c$ is integral. A standard result from polyhedral theory (see [15]) then implies that $\Sigma(P^*)$ is an integral polyhedron, so all its extreme points are integral. From the properties of the isomorphism, $P^*$ is integral, and this proves the theorem.
Proof. As already remarked, we may assume that all column sums are equal. If $m < n$, then we can include $n - m$ zero rows on the bottom of $A$, and this keeps all column sums equal to 1. If $m > n$, then we can include $m - n$ columns on the right of $A$, each with a single 1, and this keeps all column sums equal to 1. Thus we may also assume that $m = n$, that is, that $A$ is a square matrix with all column sums equal to 1, and hence its row sum vector $R = (r_1, r_2, \ldots, r_n)$ satisfies $\sum_{i=1}^{n} r_i = n$. Let $p$ be the maximum row sum of $A$. Then we attach to $A$ on the right an $n \times n(p-1)$ matrix $A_1$ with exactly one 1 in each column so that all row sums now equal $p$. (Note that the arithmetic is correct here: to get all row sums equal to $p$ we need to attach $np - \sum_{i=1}^{n} r_i = np - n = n(p-1)$ columns with a single 1.) We may attach on the bottom of $A$ an $n(p-1) \times n$ matrix $A_2$ with $p-1$ 1's in each column and one 1 in each row in order to make each column sum equal to $p$. Let $A_3$ be an $n(p-1) \times n(p-1)$ $(0,1)$-matrix with $p-1$ 1's in each row and column. Then the matrix
$$B = \left[\begin{array}{cc} A & A_1 \\ A_2 & A_3 \end{array}\right]$$
is an $np \times np$ $(0,\pm 1)$-matrix whose row and column sums all equal $p$ and whose only $-1$'s are in $A$, and hence $B + J_{np}$ is a $(0,1,2)$-matrix with all row and column sums equal to $p + np$. Hence $B + J_{np}$ is a sum of $p + np$ permutation matrices, and since $J_{np}$ is a sum of permutation matrices, $B$ is a sum of permutation matrices and the negatives of permutation matrices. Restricting this sum to $A$ completes the proof.
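The block construction in this proof can be sketched in Python. The particular fillings chosen for $A_1$, $A_2$, and $A_3$ below (greedy unit columns, a block pattern, and a circulant) are one concrete instance of the construction, not the paper's canonical choice:

```python
def build_B(A):
    """Sketch of B = [[A, A1], [A2, A3]] from the proof: A is n x n with all
    column sums 1; B is np x np with every row and column sum equal to p,
    where p is the maximum row sum of A."""
    n = len(A)
    p = max(sum(row) for row in A)
    k = n * (p - 1)
    # A1: n x k, one 1 per column, bringing each row sum of [A A1] up to p.
    A1 = [[0] * k for _ in range(n)]
    col = 0
    for i in range(n):
        for _ in range(p - sum(A[i])):
            A1[i][col] = 1
            col += 1
    # A2: k x n, (p-1) 1's in each column and one 1 in each row.
    A2 = [[1 if r // (p - 1) == j else 0 for j in range(n)] for r in range(k)]
    # A3: k x k circulant with (p-1) 1's in each row and column.
    A3 = [[1 if (c - r) % k < (p - 1) else 0 for c in range(k)] for r in range(k)]
    top = [A[i] + A1[i] for i in range(n)]
    bot = [A2[r] + A3[r] for r in range(k)]
    return top + bot
```

On a small $(0,\pm 1)$-matrix with all column sums 1, the resulting $B$ has all line sums equal to $p$, as the proof requires.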

Coda
In this final section we discuss a connection with an unsolved problem concerning disjoint realization of (0, 1)-matrices with specified row and column sums.
Consider the class $A(R, S)$ of $m \times n$ $(0,1)$-matrices, where $R = R_1 + R_2$ and $S = S_1 + S_2$ for nonnegative integral vectors $R_1, R_2, S_1, S_2$. If there are matrices $B_1 \in A(R_1, S_1)$ and $B_2 \in A(R_2, S_2)$ such that $B = B_1 + B_2$ is a matrix in $A(R, S)$, then $A(R, S)$ has an $(R_1, S_1; R_2, S_2)$ joint realization and $(B_1, B_2)$ is a joint realization of $B$ (and of $A(R, S)$); see, e.g., pages 188–190 in [3]. For a joint realization, the matrices $B_1$ and $B_2$ cannot have 1's in common positions, and we denote this by $B_1 \sqcap B_2 = \emptyset$.
Let $A \in A^{\pm}(R, S)$. Then $A$ can be uniquely expressed in the form $A = A_1 - A_2$ where $A_1$ and $A_2$ are $(0,1)$-matrices such that $A_1 \sqcap A_2 = \emptyset$. Let the row and column sum vectors of $A_1$ and $A_2$ be, respectively, $R_1, S_1$ and $R_2, S_2$, and let $R' = R_1 + R_2$ and $S' = S_1 + S_2$. Then $A' = A_1 + A_2$ belongs to $A(R', S')$ and $(A_1, A_2)$ is an $(R_1, S_1; R_2, S_2)$ joint realization of $A'$ (and of $A(R', S')$). Thus, every matrix in $A^{\pm}(R, S)$ with at least one 1 and at least one $-1$ gives some joint realization, and, conversely, every joint realization gives a matrix in a class $A^{\pm}(R, S)$.
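The decomposition $A = A_1 - A_2$ is simply the split into positive and negative parts, which makes the correspondence easy to illustrate (helper names are ours):

```python
def split_parts(A):
    """Unique decomposition A = A1 - A2 into (0,1)-matrices with no common 1's:
    A1 carries the +1's of A and A2 carries the -1's."""
    A1 = [[1 if x == 1 else 0 for x in row] for row in A]
    A2 = [[1 if x == -1 else 0 for x in row] for row in A]
    return A1, A2

def line_sums(M):
    """Row sum and column sum vectors of a matrix."""
    return [sum(row) for row in M], [sum(col) for col in zip(*M)]
```

Here $A_1 \sqcap A_2 = \emptyset$ holds automatically, and the line sums of $A' = A_1 + A_2$ give the vectors $R' = R_1 + R_2$ and $S' = S_1 + S_2$.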
Given $R_1, S_1$ and $R_2, S_2$ such that both $A(R_1, S_1)$ and $A(R_2, S_2)$ are nonempty, it is an unsolved problem to determine whether or not $A(R_1 + R_2, S_1 + S_2)$ has an $(R_1, S_1; R_2, S_2)$ joint realization. A necessary condition is that $A(R_1 + R_2, S_1 + S_2)$ is nonempty, but this is not sufficient in general. The following sufficient condition is due to Anstee, as a generalization of a theorem of Brualdi and Ross (see Theorem 4.4.14 of [4]).
An immediate corollary of this theorem is the following.