University of Birmingham On integer images of max-plus linear mappings

Let us extend the pair of operations (⊕, ⊗) = (max, +) over the real numbers to matrices in the same way as in conventional linear algebra. We study integer images of mappings x → A ⊗ x, where A ∈ R m×n and x ∈ R n. The question whether A ⊗ x is an integer vector for at least one x ∈ R n has been studied for some time, but polynomial solution methods seem to exist only in special cases. In the terminology of combinatorial matrix theory this question reads: is it possible to add constants to the columns of a given matrix so that all row maxima are integer? This problem has been motivated by attempts to solve a class of job-scheduling problems. We present two polynomially solvable special cases, aiming to move closer to a polynomial solution method in the general case. © 2018 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
Since the 1960s max-algebra has provided modelling and solution tools for a class of problems in discrete mathematics and matrix algebra. The key feature is the development of an analogue of linear algebra for the pair of operations (⊕, ⊗), where a ⊕ b = max(a, b) and a ⊗ b = a + b. This pair is extended to matrices and vectors as in conventional linear algebra. That is, if A = (a_ij), B = (b_ij) and C = (c_ij) are matrices of compatible sizes with entries from R, we write A ⊕ B = (a_ij ⊕ b_ij) and A ⊗ C = (⊕_k a_ik ⊗ c_kj). For simplicity we will use the convention of not writing the symbol ⊗. Thus in what follows the symbol ⊗ will not be used (except when necessary for clarity), and unless explicitly stated otherwise, all multiplications indicated are in max-algebra.
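These matrix operations can be sketched in a few lines of Python (a minimal illustration; the helper names and the use of float("-inf") for −∞ are ours, not part of the paper):

```python
EPS = float("-inf")  # stands in for -infinity

def maxplus_mul(A, B):
    """Max-plus product: (A (x) B)[i][j] = max_k (A[i][k] + B[k][j])."""
    m, p, n = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

def maxplus_add(A, B):
    """Max-plus sum: (A (+) B)[i][j] = max(A[i][j], B[i][j])."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
```

Python's float("-inf") behaves exactly like the neutral element: adding it to anything gives −∞, and it never wins a max against a real number.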
The interest in max-algebra (today also called tropical linear algebra) was originally motivated by the possibility of dealing with a class of non-linear problems in pure and applied mathematics, operational research, science and engineering as if they were linear, due to the fact that (R, ⊕, ⊗) is a semifield. Pioneering papers appeared in the 1960s [17,18] and [36], followed by substantial contributions in the 1970s and 1980s such as [19,23,24,37] and [16]. Since 1995 we have seen a remarkable expansion of this research field following a number of findings and applications in areas as diverse as algebraic geometry [31] and [35], geometry [27], control theory and optimization [1], phylogenetics [34], modelling of cellular protein production [3] and railway scheduling [25]. A number of research monographs have been published [1,7,25] and [30]. A chapter on max-algebra appears in a handbook of linear algebra [26] and a chapter on idempotent semirings can be found in a monograph on semirings [22]. Max-algebra covers a range of linear-algebraic problems in the max-linear setting, such as systems of linear equations and inequalities, linear independence and rank, bases and dimension, polynomials, characteristic polynomials, matrix scaling, matrix equations, matrix orbits and periodicity of matrix powers [1,7,19,14] and [25]. Among the most intensively studied questions was the eigenproblem, that is the question, for a given square matrix A, of finding all values of λ and nontrivial vectors x such that Ax = λx. This and related questions, such as z-matrix equations Ax ⊕ b = λx [15], have been answered [10,19,24,20,2] and [7] with numerically stable low-order polynomial algorithms. The same applies to the subeigenproblem, that is the problem of finding solutions to Ax ≤ λx [33], and the supereigenproblem, that is solutions to Ax ≥ λx [8] and [32]. Max-linear and integer max-linear programs have also been studied [37,7,9,21] and [13].
A specific area of interest is solving the above mentioned problems with integrality requirements. It seems that in general there is no polynomial solution method to find an integer eigenvector of a real matrix in max-algebra or to decide that there is none. A closely related question [13] is whether the mapping x → Ax has an integer image, that is, whether Ax is an integer vector for at least one x ∈ R n. The motivation for the latter comes from operational problems such as the following job-scheduling task [19] and [7]: products P_1, . . ., P_m are prepared using n machines (processors), every machine contributing to the completion of each product by producing a component. It is assumed that each machine can work for all products simultaneously and that all these actions on a machine start as soon as the machine starts to work. Let a_ij be the duration of the work of the jth machine needed to complete the component for P_i (i = 1, . . ., m; j = 1, . . ., n). If this interaction is not required for some i and j then a_ij is set to −∞. The matrix A = (a_ij) is called the production matrix. Let us denote by x_j the starting time of the jth machine (j = 1, . . ., n). Then all components for P_i (i = 1, . . ., m) will be ready at time max(x_1 + a_i1, . . ., x_n + a_in).
Hence if b_1, . . ., b_m are given completion times then the starting times have to satisfy the system of equations max(x_1 + a_i1, . . ., x_n + a_in) = b_i (i = 1, . . ., m). Using max-algebra this system can be written in compact form as a system of linear equations: A ⊗ x = b. (1) A system of the form (1) is called a one-sided system of max-linear equations (or briefly a one-sided max-linear system or just a max-linear system). Such systems are easily solvable [17,37] and [7], see also Section 2. However, sometimes the vector b of completion times is not given explicitly; instead it is only required that completions of individual products occur at discrete time intervals, for instance at integer times. This motivates the study of integer images of max-linear mappings, to which this paper aims to contribute. More precisely, we deal with the question: given a real matrix A, find a real vector x such that Ax is integer or decide that none exists. In the terminology of combinatorial matrix theory this question reads: is it possible to add constants to the columns of a given matrix so that all row maxima are integer? We will call this problem the Integer Image Problem (IIP). This problem has been studied for some time [12,13] and [29], yet it seems to be still open whether it can be answered in polynomial time. In this paper we present two polynomially solvable special cases aiming to suggest a direction in which an efficient method could be found for general matrices in the future. We also provide a brief summary of a selection of already achieved results.
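In the job-scheduling reading of (1), the completion times are exactly the max-plus product A ⊗ x; a toy sketch (the matrix and starting times below are made up for illustration):

```python
def completion_times(A, x):
    """Product P_i is finished at time max_j (x_j + a_ij) = (A (x) x)_i."""
    return [max(xj + aij for xj, aij in zip(x, row)) for row in A]

# Two machines, two products: a_ij = work of machine j needed for P_i.
A = [[3, 1],
     [2, 4]]
x = [0, 1]  # starting times of the two machines
print(completion_times(A, x))  # -> [3, 5]
```

Product P_1 is finished at max(0 + 3, 1 + 1) = 3 and P_2 at max(0 + 2, 1 + 4) = 5, so the right-hand side b of (1) would have to equal (3, 5) for these starting times.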

Definitions, notation and previous results
Throughout the paper we denote −∞ by ε (the neutral element with respect to ⊕) and for convenience we also denote by the same symbol any vector all of whose components are −∞, or a matrix all of whose entries are −∞. A matrix or vector with all entries equal to 0 will also be denoted by 0. If a ∈ R then the symbol a^−1 stands for −a. Matrices and vectors whose entries are all real numbers are called finite. We assume everywhere that m, n ≥ 1 are integers and denote M = {1, . . ., m} and N = {1, . . ., n}.
It is easily proved that if A, B, C and D are matrices of compatible sizes (including vectors considered as m × 1 matrices) then the usual laws of associativity and distributivity hold and also isotonicity is satisfied: A ≤ B and C ≤ D imply A ⊕ C ≤ B ⊕ D and A ⊗ C ≤ B ⊗ D. (2) A square matrix is called diagonal if all its diagonal entries are real numbers and its off-diagonal entries are ε. More precisely, diag(d_1, . . ., d_n) denotes the matrix with diagonal entries d_1, . . ., d_n ∈ R and off-diagonal entries ε. The matrix diag(0) is called the unit matrix and denoted I. Obviously, AI = IA = A whenever A and I are of compatible sizes.
A matrix obtained from a diagonal matrix by permuting the rows and/or columns is called a generalized permutation matrix.
It is known that in max-algebra generalized permutation matrices are the only type of invertible matrices [19] and [7].
If A is a square matrix then the iterated product AA · · · A, in which the symbol A appears k times, will be denoted by A^k. For A ∈ R n×n the symbol D_A stands for the weighted digraph (N, E, w) (called associated with A), where E = {(i, j) : a_ij > ε} and w(i, j) = a_ij. The symbol λ(A) denotes the maximum cycle mean of A, that is λ(A) = max_σ µ(σ, A), where the maximization is taken over all elementary cycles σ in D_A, and µ(σ, A) denotes the mean of a cycle σ, that is its weight divided by its length. With the convention max ∅ = ε the value λ(A) always exists since the number of elementary cycles is finite. It can be computed in O(n³) time [28], see also [7]. We say that A is definite if λ(A) = 0 and strongly definite if it is definite and all diagonal entries of A are zero.
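Since (A^k)_ii is the greatest weight of a closed walk of length k through node i, one has λ(A) = max over k = 1, . . ., n of max_i (A^k)_ii / k. This gives a brute-force O(n⁴) sketch in Python (not the O(n³) algorithm of [28]; names are ours):

```python
EPS = float("-inf")  # plays the role of epsilon

def maxplus_mul(A, B):
    n = len(B)
    return [[max(A[i][k] + B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

def max_cycle_mean(A):
    """lambda(A) = max_{k=1..n} max_i (A^k)_ii / k (EPS if D_A has no cycle)."""
    n = len(A)
    P, lam = A, EPS
    for k in range(1, n + 1):
        best = max(P[i][i] for i in range(n))  # best closed walk of length k
        if best > EPS:
            lam = max(lam, best / k)
        P = maxplus_mul(P, A)
    return lam
```

For A = [[-1, 2], [0, -3]] the two loops have means −1 and −3 while the 2-cycle has mean (2 + 0)/2 = 1, so λ(A) = 1.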
Given A ∈ R n×n it is usual [19,1,25] and [7] in max-algebra to define the infinite series A* = I ⊕ A ⊕ A² ⊕ A³ ⊕ · · ·. The matrix A* is called the strong transitive closure of A, or the Kleene Star.
It follows from the definitions that every entry of the matrix sequence I ⊕ A ⊕ · · · ⊕ A^k (k = 0, 1, . . .) is a nondecreasing sequence in R and therefore either it converges to a real number (if bounded) or its limit is +∞. If λ(A) ≤ 0 then I ⊕ A ⊕ · · · ⊕ A^k = A* for every k ≥ n, and A* can be found using the Floyd–Warshall algorithm in O(n³) time [7].
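A (max, +) variant of Floyd–Warshall computing A* when λ(A) ≤ 0 can be sketched as follows (finite entries are assumed for simplicity; the function name is ours):

```python
def kleene_star(A):
    """A* = I (+) A (+) A^2 (+) ...  for lambda(A) <= 0,
    via a (max, +) Floyd-Warshall (all-pairs heaviest paths)."""
    n = len(A)
    S = [row[:] for row in A]
    for k in range(n):                       # allow k as intermediate node
        for i in range(n):
            for j in range(n):
                S[i][j] = max(S[i][j], S[i][k] + S[k][j])
    for i in range(n):                       # the (+) I term: empty walks
        S[i][i] = max(S[i][i], 0)
    return S
```

Because λ(A) ≤ 0 rules out positive cycles, the heaviest walk between two nodes is an elementary path, which is exactly what the triple loop computes.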
The matrix λ −1 A for λ ∈ R will be denoted by A λ and (A λ ) * will be shortly written as A * λ .
The eigenvalue-eigenvector problem (briefly the eigenproblem) is the following: given A ∈ R n×n, find all λ (eigenvalues) and nontrivial vectors x (eigenvectors) such that Ax = λx. This problem has been studied since the work of R. A. Cuninghame-Green [18]. An n × n matrix has up to n eigenvalues, with λ(A) always being the largest eigenvalue (called principal). This finding was first presented by R. A. Cuninghame-Green [19] and M. Gondran and M. Minoux [23], see also N. N. Vorobyov [36]. The full spectrum was first described by S. Gaubert [20] and R. B. Bapat, D. Stanford and P. van den Driessche [2]. The spectrum and bases of all eigenspaces can be found in O(n³) time [10] and [7].
The aim of this paper is to study the existence of integer images of max-linear mappings and therefore we summarize here only the results on finite solutions and for finite A. For A ∈ R n×n and λ ∈ R we denote V(A, λ) = {x ∈ R n : Ax = λx}. In this case there are no eigenvalues other than the principal one and we can easily describe all eigenvectors:

Theorem 2.1 ([18,19,23]). If A ∈ R n×n then λ(A) is the unique eigenvalue of A and all eigenvectors of A are finite. If A is strongly definite and λ = λ(A) then V(A, λ) = {A* ⊗ z : z ∈ R n}.

In what follows V(A) will stand for V(A, λ(A)).
As usual, for any a ∈ R we denote the lower integer part, upper integer part and fractional part of a by ⌊a⌋, ⌈a⌉ and fr(a). Hence fr(a) = a − ⌊a⌋. For any matrix A the symbol ⌊A⌋ stands for the matrix obtained by replacing every entry of A by its lower integer part; similarly ⌈A⌉ and fr(A). The same conventions apply to vectors. The set of integer eigenvectors of A ∈ R n×n will be denoted by IV(A), that is IV(A) = V(A) ∩ Z n. Given an A ∈ R m×n we will use the following notation: Im(A) = {Ax : x ∈ R n} and IIm(A) = Im(A) ∩ Z m. If we randomly generate two real numbers then their fractional parts are different with probability 1. As the next theorem shows, strongly definite matrices are an important class for which there is an easy solution to the integer eigenvalue problem.
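These fractional-part notions, together with the typical/uniform vector tests recalled in this section (a vector is typical if no two of its components share a fractional part, and uniform if all of them do), can be sketched as follows (function names are ours):

```python
import math

def fr(a):
    """Fractional part: fr(a) = a - floor(a), always in [0, 1)."""
    return a - math.floor(a)

def is_typical(v):
    """No two components of v have the same fractional part."""
    fracs = [fr(a) for a in v]
    return len(set(fracs)) == len(fracs)

def is_uniform(v):
    """All components of v have the same fractional part."""
    return len({fr(a) for a in v}) == 1
```

Note that fr(−1.5) = −1.5 − (−2) = 0.5, so the definition also behaves as expected for negative entries.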

Theorem 2.3 ([12]). Let A ∈ R n×n be strongly definite. Then

The max-algebraic permanent of a matrix A ∈ R n×n is an analogue of the conventional permanent: maper(A) = ⊕_{π∈P_n} ⊗_{i∈N} a_{i,π(i)}, where P_n is the set of all permutations of N. In conventional notation this reads maper(A) = max_{π∈P_n} Σ_{i∈N} a_{i,π(i)}, which is the optimal value of the linear assignment problem for the matrix A [5,4] and [7]. Using the notation w(A, π) = Σ_{i∈N} a_{i,π(i)} for π ∈ P_n we can then define the set of optimal permutations: ap(A) = {π ∈ P_n : maper(A) = w(A, π)}. Uniqueness of optimal permutations plays a significant role in max-algebra, see for instance the question of regularity of matrices [11] and [5]. It is also important for integer images, as shown in Theorem 2.4. Note that it follows from the definitions that IIm(A) = IIm(AQ) for any generalized permutation matrix Q. It is known [7] that for every A ∈ R n×n with |ap(A)| = 1 there exists a unique generalized permutation matrix Q such that AQ is strongly definite.

Theorem 2.4 ([12]). Let A ∈ R n×n be column typical. Then IIm(A) ≠ ∅ if and only if |ap(A)| = 1.

Checking all m × m submatrices of an m × n matrix is polynomial when m is fixed. In particular, this immediately yields an O(n³) method for answering the problem for 3 × n column typical matrices.
One of the aims of this paper is to present an O(n²) method for this particular special case.
It will be useful to also define min-algebra over R [19] and [7]: a ⊕′ b = min(a, b) and a ⊗′ b = a + b for all a and b. We extend this pair of operations to matrices and vectors in the same way as in max-algebra. We also define the conjugate A# = −A^T. Note that isotonicity holds for (⊕′, ⊗′) similarly as for (⊕, ⊗), see (2). We will usually not write the operator ⊗′, and for matrices the convention applies that if no multiplication operator appears then the product is in min-algebra whenever it follows the symbol #, and otherwise it is in max-algebra. In this way a residuated pair of operations (a special case of a Galois connection) has been defined, namely: Ax ≤ b if and only if x ≤ A# b.
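Conjugation and the residuation property can be illustrated in code (function names are ours; the computed x = A# b is the greatest solution of Ax ≤ b):

```python
def conjugate(A):
    """A# = -A^T."""
    return [[-A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def minplus_vec(A, b):
    """Min-plus product A (x)' b for a matrix and a vector."""
    return [min(aij + bj for aij, bj in zip(row, b)) for row in A]

def maxplus_vec(A, x):
    """Max-plus product A (x) x for a matrix and a vector."""
    return [max(aij + xj for aij, xj in zip(row, x)) for row in A]

A = [[0, 3],
     [2, 1]]
b = [3, 3]
x = minplus_vec(conjugate(A), b)       # x = A# b, greatest solution of Ax <= b
print(x, maxplus_vec(A, x))            # -> [1, 0] [3, 3]
```

Here A ⊗ x meets b with equality, so x even solves Ax = b; in general one only gets Ax ≤ b, with every other solution lying below x.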

Finding an integer image for a 3 × n matrix
We start with what is historically the first result in max-algebra. In what follows, if A ∈ R m×n and j ∈ N then A_j will denote the jth column of A.
Problem P1. Given A ∈ R m×n and b ∈ R m, find x ∈ R n satisfying Ax = b or decide that no such x exists.

Define x̄ = A# b, (7) that is, x̄_j = min_{i∈M} (b_i − a_ij) for all j ∈ N, and M_j(A, b) = {i ∈ M : x̄_j + a_ij = b_i}. The notation M_j(A, b) will be shortened to M_j if no confusion can arise. The answer to P1 is summarized in the following statement.
Proposition 3.1 ([17,19,7]). Let x̄ be as defined in (7) and x ∈ R n. Then

Suppose now that Ax = b, x ≤ d for some x ∈ R n. By Proposition 3.1(c) we then have x ≤ x̄ and ⋃_{x_j = x̄_j} M_j = M. It follows from the definition of z that z ≤ x̄. Hence if x_j = x̄_j then x_j ≤ d_j, and therefore z_j = x̄_j, from which the statement follows. ■

Problem P3. Given A ∈ R m×n, find a point in IIm(A) or decide that there is none.
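The resulting solvability test of Corollary 3.3 (compute x̄ = A# b and check whether A x̄ = b) can be sketched as follows (function names are ours):

```python
def principal_solution(A, b):
    """x-bar_j = min_i (b_i - a_ij), i.e. x-bar = A# (x)' b."""
    m, n = len(A), len(A[0])
    return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]

def solve_one_sided(A, b):
    """Return a solution of A (x) x = b, or None if the system is unsolvable."""
    x = principal_solution(A, b)
    Ax = [max(x[j] + row[j] for j in range(len(x))) for row in A]
    return x if Ax == b else None
```

By Proposition 3.1, A x̄ ≤ b always holds, so the system is solvable exactly when the greatest candidate x̄ already attains b in every row.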
The solution to P3 is easy for m = 2, as can be seen from the next few lines, where we give a full description of Im(A) and IIm(A). The primary objective of this paper is to present a solution for m = 3 provided that A is column typical, which is done later on.

Proposition 3.5 ([12]). Let A ∈ R 2×n. Then Im(A) = {y ∈ R² : α(A) ≤ y_2 − y_1 ≤ ᾱ(A)}, where α(A) = min_{j∈N} (a_2j − a_1j) and ᾱ(A) = max_{j∈N} (a_2j − a_1j).
Proof. Let y ∈ R². Then by Corollaries 3.2 and 3.3, y ∈ Im(A) if and only if x̄_j + a_1j = y_1 and x̄_l + a_2l = y_2 for some j, l ∈ N. Equivalently, a_2j − a_1j ≤ y_2 − y_1 and y_2 − y_1 ≤ a_2l − a_1l for some j, l ∈ N, from which the statement follows. ■

Note that we will write α, ᾱ for short instead of α(A), ᾱ(A) if no confusion can arise.
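Independently of any explicit description, membership y ∈ Im(A) can be tested for any m via the check A(A# y) = y of Corollary 3.3; a sketch (the function name is ours):

```python
def in_image(A, y):
    """Test y in Im(A), i.e. whether A (x) x = y has a solution."""
    m, n = len(A), len(A[0])
    x = [min(y[i] - A[i][j] for i in range(m)) for j in range(n)]  # A# (x)' y
    return [max(x[j] + A[i][j] for j in range(n)) for i in range(m)] == y

A = [[0, 3],
     [2, 1]]
```

For this A we have α = min(2, −2) = −2 and ᾱ = 2, so y = (0, 1) (difference 1) lies in Im(A) while y = (0, 5) (difference 5) does not, and the residuation check agrees.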
Corollary 3.6. Let A ∈ R 2×n. Then IIm(A) = {y ∈ Z² : α ≤ y_2 − y_1 ≤ ᾱ}.

By Corollary 3.6 the set S (if non-empty) consists of integer points on adjacent parallel line segments. We may assume L ∈ Z² (or take ⌈L⌉ if necessary). An answer to P4 is given in the next proposition, which follows from Corollary 3.6 immediately.
Proposition 3.8. The set S = {y ∈ IIm(A) : L ≤ y ≤ U} consists of integer points on line segments described by the following conditions:

Problem P5. Given A ∈ R 2×n and L ∈ R², find an x ∈ R n satisfying Ax ∈ Z², Ax ≥ L and x ≤ 0, (9) or decide that there is none.
In order to solve P5 we first prove a few auxiliary statements.
A set S ⊆ R n is called max-convex if λx ⊕ µy ∈ S for any x, y ∈ S and λ, µ ∈ R satisfying λ ⊕ µ = 0.
Since (R, ⊕, ⊗) is a semifield, the proof follows the lines of the proofs of the corresponding conventional statements:

Proposition 3.10. Let S ⊆ R n be a max-convex set and f(x) = c^T x, where c ∈ R n. If f(x) ≤ f(y) for some x, y ∈ S then for every f̂ ∈ [f(x), f(y)] there exists a z ∈ S such that f(z) = f̂.
Then λ ⊕ µ = 0 and thus by Proposition 3.9, z ∈ S, where z = λx ⊕ µy. Also, f(z) = λf(x) ⊕ µf(y) = f̂. ■

Proposition 3.11. Let S = {x ∈ R n : Ax = b} ≠ ∅ and let f be as in Proposition 3.10. Then the set F = {f(x) : x ∈ S} is a non-empty closed interval.
Then clearly x̄ ∈ S and x ≤ x̄ for any x ∈ S by Proposition 3.1. It follows by isotonicity that f(x) ≤ f(x̄) for every x ∈ S, and so f(x̄) is an upper bound of F attained on S.
On the other hand, define the vectors x^(k) ∈ S for k = 1, . . ., n. For every x ∈ S there exists a k ∈ N such that x_k = x̄_k, and for this k we have f(x) ≥ f(x^(k)).
Since min_{j∈N} f(x^(j)) = f(x^(j_0)) for some j_0 ∈ N and x^(j_0) ∈ S, we have that F has a lower bound and this bound is attained on S. The statement now follows from Propositions 3.9 and 3.10. ■

Corollary 3.12.

Let us return to P5. We denote by X the set of vectors x satisfying (9). Note that we may assume without loss of generality that L = (l_1, l_2)^T ∈ Z² (otherwise we replace L by ⌈L⌉). By isotonicity we have Ax ≤ A0 for every x ∈ X, and we denote A0 by U = (u_1, u_2)^T. If L ≤ U is not satisfied then X = ∅, hence we will assume in what follows that L ≤ U. So the task is to find integer points y = (y_1, y_2)^T in the rectangle L ≤ y ≤ U (see Fig. 1) of the form y = Ax, x ≤ 0, or to decide that there are none. For ease of reference we will denote this rectangle by T. Recall that the integer points of the form y = Ax in this rectangle are described by (8). In Fig. 1 the lines containing integer images of A are dashed. A little more challenging is the task of identifying those of them (if any) that are of the form y = Ax where x ≤ 0.
First we observe in the following statement that every integer point in T (if any) can be "diagonally projected" onto the left-hand side or the bottom side of T.
Proposition 3.13. If x ∈ X then there exists λ ≤ 0 such that the vector x′ = λx is in X and satisfies either (Ax′)_1 = l_1 or (Ax′)_2 = l_2.

We will also use the diagonal projection of the point U onto the left-hand side or the bottom side of T. For this we will need to distinguish two possibilities, according to which side of T this projection reaches, to which we will refer as Case 1 and Case 2. Under the assumption of Case 1 the diagonal projection of U, denoted P, lies on the left-hand side of T. Due to Proposition 3.13 it is sufficient to search integer points Ax satisfying (Ax)_1 = l_1 or (Ax)_2 = l_2 and find those (if any) for which the condition x ≤ 0 is satisfied. As there is possibly a non-polynomial number of such points, we will narrow the set of candidates. All candidates have the form (l_1, l_1 + α) or (l_2 − α, l_2), where α ∈ [α(A), ᾱ(A)] ∩ Z by Corollary 3.6.

Let us denote S = {Ax : x ≤ 0}, S_1 = {y ∈ S : y_1 = l_1} and S_2 = {y ∈ S : y_2 = l_2}.

Clearly, U ∈ S, and P ∈ S_1 in Case 1 and P′ ∈ S_2 in Case 2. We can also describe S_1 and S_2 as follows: note that (see Fig. 1) among the candidate points of the form (l_1, l_1 + α) with α ∈ [α, ᾱ] ∩ Z we only need to check those that are closest to P. More precisely, we distinguish 3 subcases. Note that both these points are ≥ L.
Case 2 is treated similarly. "Checking" a point y means verifying that there is an x ≤ 0 for which Ax = y. By Proposition 3.4 this can be done by checking that A(x̄ ⊕′ 0) = y, which is O(n). Finding α(A), ᾱ(A) and U is obviously O(n) as well, so the whole method is O(n).

Problem P6. Given A ∈ R 3×n, find an x ∈ R n such that Ax ∈ IIm(A) or decide there is none.

Remark 3.14. Since (AD)x = A(Dx) for any D = diag(d_1, . . ., d_n) ∈ R n×n, we have IIm(AD) = IIm(A). Therefore in P6 we may assume without loss of generality that a_3j = 0 for all j ∈ N, by taking d_j = −a_3j for j ∈ N if necessary.
Remark 3.16. Since α(Ax) = A(αx) for any α ∈ R, we have A(αx) ∈ Z^m if Ax ∈ Z^m and α ∈ Z. Therefore in P6 we may assume without loss of generality that for every j ∈ N there is an x ∈ R n satisfying Ax ∈ Z^m and ⌊x_j⌋ = 0 whenever IIm(A) ≠ ∅, by taking α = −⌊x_j⌋ if necessary.

We are now ready to present the main result of this paper: a solution method for P6 for column typical matrices. So let A ∈ R 3×n be a column typical matrix. We assume without loss of generality (see Remark 3.14) that a_3j = 0 for all j ∈ N. Suppose that Ax ∈ Z³ for some x ∈ R n and, again without loss of generality (see Remark 3.15), that (Ax)_3 = 0. Hence there is a k ∈ N such that x_k = 0 ≥ x_j for every j ∈ N. Let Ā be the matrix obtained from A by removing row 3. Define extended by zero in the mth component is an integer image of Ā, and Ax ∈ Z^m where x = −(f_1, . . ., f_{k−1}, 0, f_{k+1}, . . ., f_n)^T.
Checking condition (12) is O(mn); normalization of A with respect to the last row is also O(mn). In general this check needs to be done for all k = 1, . . ., n, and so the method is O(mn²).

Conclusions
We have shown in two special cases how to find an integer image of a matrix or decide that none exists, where the matrix is obtained by adding one row to a matrix for which the answer is known. An iterative procedure for answering this question in the general case suggests itself; however, the complexity issues remain to be resolved.

Theorem 2.4 effectively solves the IIP for column typical square matrices. It is not difficult to see that in general m ≤ n is a necessary condition for IIm(A) ≠ ∅ if A ∈ R m×n is column typical. If m ≤ n then a necessary and sufficient condition for IIm(A) ≠ ∅ is the existence of a submatrix A′ ∈ R m×m for which IIm(A′) ≠ ∅. Hence there is the possibility of solving the IIP for m × n matrices by checking all m × m submatrices. The number of such submatrices is (n over m), which is polynomial when m is fixed.

It follows immediately from Proposition 3.1 that a one-sided system Ax = b has a solution if and only if A(A# b) = b (see Corollary 3.3), and using isotonicity the system Ax ≤ b always has an infinite number of solutions, with A# b being the greatest solution.

(a) A x̄ ≤ b.
(b) Ax ≤ b if and only if x ≤ x̄.
(c) Ax = b if and only if x ≤ x̄ and ⋃_{j : x_j = x̄_j} M_j = M.

Corollary 3.2. A x̄ = b if and only if ⋃_{j∈N} M_j = M.

Corollary 3.3. Ax = b has a solution if and only if A x̄ = b.
is a (non-empty) closed interval. Proof. If S′ ≠ ∅ then x̄_j ≤ d_j for at least one j ∈ N. The rest of the proof follows the lines of the proof of Proposition 3.11, with x̄ replaced by x̄ ⊕′ d. See Proposition 3.4. ■

Being motivated by this, we say that a real vector v is typical if no two components of v have the same fractional part. On the other hand, if every component of a vector v has the same fractional part then we say that v is uniform. If every column of a real matrix A is typical [uniform] then we say that A is column typical [column uniform].
Remark 2.2. Observe that IIm(A) ≠ ∅ if A has at least one uniform column.
In Fig. 1 the set S_1 is shown as the bold line segment on the left-hand side of T. By Corollary 3.12 the set {(l_1, (Ax)_2)^T : x ≤ 0, (Ax)_1 = l_1} is a closed interval or ∅, and so S_1 is a closed interval or ∅; similarly S_2. The point P is in S_1 in Case 1 and P′ in S_2 in Case 2, so at least one of the sets S_1 and S_2 is non-empty in each case.
Consider now Case 1. Since the point P