Two cores of a nonnegative matrix

We prove that the sequence of eigencones (i.e., cones of nonnegative eigenvectors) of the positive powers A^k of a nonnegative square matrix A is periodic both in max algebra and in nonnegative linear algebra. Using an argument of Pullman, we also show that the Minkowski sum of the eigencones of the powers of A is equal to the core of A, defined as the intersection of the nonnegative column spans of the matrix powers, also in max algebra. Based on this, we describe the set of extremal rays of the core. The spectral theory of matrix powers and the theory of the matrix core are developed in max algebra and in nonnegative linear algebra simultaneously wherever possible, in order to unify and compare both versions of the same theory.


Introduction
The nonnegative reals R_+ under the usual multiplication give rise to two semirings, with addition defined in two ways: first with the usual addition, and second where the role of addition is played by the maximum. Thus we consider the properties of nonnegative matrices with entries in two semirings: the semiring of nonnegative numbers with the usual addition and multiplication, called "nonnegative algebra", and the semiring called "max(-times) algebra".
Our chief object of study is the core of a nonnegative matrix A. This concept was introduced by Pullman in [33], and is defined as the intersection of the cones generated by the columns of the matrix powers A^k. Pullman provided a geometric approach to the Perron-Frobenius theory of nonnegative matrices based on the properties of the core. He investigated the action of a matrix on its core, showing that it is bijective and that the extremal rays of the core can be partitioned into periodic orbits. In other words, the extremal rays of the core of A are nonnegative eigenvectors of the powers of A (associated with positive eigenvalues).
One of the main purposes of the present paper is to extend Pullman's core to max algebra, thereby investigating the periodic sequence of eigencones of max-algebraic matrix powers. However, following the line of [10,11,24], we develop the theory in max algebra and nonnegative algebra simultaneously, in order to emphasize common features as well as differences, and to provide general (simultaneous) proofs where this is possible. We do not aim to obtain new results, relative to [33,43], on the usual core of a nonnegative matrix. However, our unifying approach leads in some cases (e.g., Theorem 6.5 (iii)) to new and more elementary proofs than those given previously. Our motivation is closely related to the Litvinov-Maslov correspondence principle [27], viewing idempotent mathematics (in particular, max algebra) as a "shadow" of the "traditional" mathematics over the real and complex fields.
To the authors' knowledge, the core of a nonnegative matrix has not received much attention in linear algebra. However, a more detailed study has been carried out by Tam and Schneider [43], who extended the concept of the core to linear mappings preserving a proper cone. The case when the core is a polyhedral (i.e., finitely generated) cone was examined in detail in [43, Section 3], and the results were applied to the case of a nonnegative matrix in [43, Section 4]. This work has found further applications in the theory of dynamical systems acting on the path space of a stationary Bratteli diagram.
In particular, Bezuglyi et al. [4] describe and exploit a natural correspondence between ergodic measures and extremals of the core of the incidence matrix of such a diagram.
On the other hand, there is much more literature on the related but distinct question of the limiting sets of homogeneous and non-homogeneous Markov chains in nonnegative algebra; see the books by Hartfiel [22] and Seneta [37] and, e.g., the works of Chi [13] and Sierksma [42]. In max algebra, see the results on the ultimate column span of matrix powers for irreducible matrices ([7, Theorem 8.3.11], [38]), and by Merlet [28] on the invariant max cone of non-homogeneous matrix products.
The theory of the core relies on the behaviour of matrix powers. In nonnegative algebra, recall the works of Friedland-Schneider [17] and Rothblum-Whittle [34] (on the role of distinguished classes, which we call "spectral classes", algebraic and geometric growth rates, and various applications). The theory of max-algebraic matrix powers is similar. However, the max-algebraic powers have a well-defined periodic ultimate behaviour starting after a sufficiently large time. This ultimate behaviour has been known since the works of Cuninghame-Green [15] and Cohen et al. [14] (irreducible case), and is described in greater generality and detail, e.g., by Akian, Gaubert and Walsh [1], Gavalec [21], De Schutter [36], and the authors [7,39,40] of the present paper.
In particular, the Cyclicity Theorem of Cohen et al. [2,7,14,23] implies that the extremals of the core split into periodic orbits for any irreducible matrix (see Subsection 4.2 below)¹. Some results on the eigenvectors of max-algebraic matrix powers have been obtained by Butkovič and Cuninghame-Green [7,8]. The present paper also aims to extend and complete the research initiated in that work. This paper is organized as follows. In Section 2 we introduce the basics of irreducible and reducible Perron-Frobenius theory in max algebra and in nonnegative linear algebra.
In Section 3 we formulate the two key results of this paper. The first key result is Main Theorem 1, stating that the matrix core is equal to the Minkowski sum of the eigencones of matrix powers (that is, for each positive integer k, we take the sum of the eigencones associated with A^k, and then we sum over all k). The second key result is Main Theorem 2, stating that the sequence of eigencones of matrix powers is periodic and defining the period. This section also contains a table of notations used throughout the paper. Section 4 is devoted to the proof of Main Theorem 1, taking in "credit" the result of Main Theorem 2 (whose proof is deferred to the end of the paper). In Section 5 we explain the relation between spectral classes of different matrix powers, and how the eigencones associated with general eigenvalues can be reduced to the case of the greatest eigenvalue, see in particular Theorems 5.4 and 5.7. In Section 6 we describe the extremals of the core in both algebras, extending [43, Theorem 4.7], see Theorem 6.5. Prior to this result we formulate the Frobenius-Victory Theorems 6.1 and 6.2, giving a parallel description of the extremals of eigencones in both algebras. In Section 7, our first goal is to show that the sequence of eigencones of matrix powers in max algebra is periodic, comparing this result with the case of nonnegative matrix algebra, see Theorem 7.1. Then we study the inclusion relation on eigencones and deduce Main Theorem 2. The key results are illustrated by a pair of examples in Section 8.

¹ In fact, many of the cited works and monographs like [2,7,21,23] are written in the setting of max-plus algebra. However, this algebra is isomorphic to the max algebra considered here, so the results can be readily translated to the present (max-times) setting.

Preliminaries
2.1. Nonnegative matrices and associated graphs. In this paper we are concerned only with nonnegative eigenvalues and nonnegative eigenvectors of a nonnegative matrix.
In order to bring our terminology into line with the corresponding theory for max algebra, we use the terms eigenvalue and eigenvector in a restrictive fashion appropriate to our semiring point of view. Thus we shall call ρ an eigenvalue of a nonnegative matrix A (only) if there is a nonnegative eigenvector x of A for ρ. Further, x will be called an eigenvector (only) if it is nonnegative. (In the literature, ρ is called a distinguished eigenvalue and x a distinguished eigenvector of A.) For x ∈ R^n_+, the support of x, denoted by supp(x), is the set of indices i with x_i > 0.
In this paper we are led to state the familiar Perron-Frobenius theorem in slightly unusual terms: an irreducible nonnegative matrix A has a unique eigenvalue, denoted by ρ_+(A), which is positive (unless A is the 1 × 1 matrix 0). Further, the eigenvector x associated with ρ_+(A) is essentially unique, that is, all eigenvectors are multiples of x.
The nonnegative multiples of x constitute the cone of eigenvectors (in the above sense). A general (reducible) matrix A ∈ R^{n×n}_+ may have several nonnegative eigenvalues with associated cones of nonnegative eigenvectors (eigencones), and ρ_+(A) will denote the biggest such eigenvalue in general. The eigenvalue ρ_+(A) is also called the principal eigenvalue, and V_+(A, ρ_+(A)) is called the principal eigencone.

Recall that a subset V ⊆ R^n_+ is called a cone if αu + βv ∈ V for all u, v ∈ V and α, β ∈ R_+.
Note that cones in the nonnegative orthant can be considered as "subspaces" with respect to the semiring of nonnegative numbers (with usual addition and multiplication). In this vein, a cone V is said to be generated by S ⊆ R^n_+ if each v ∈ V can be represented as a nonnegative combination v = ∑_{x∈S} α_x x, where only finitely many α_x ∈ R_+ are different from zero. When V is generated (we also say "spanned") by S, this is denoted V = span_+(S). A vector z in a cone V is called an extremal of V if z = u + v with u, v ∈ V implies that z is a nonnegative multiple of u or of v. Any closed cone in R^n_+ is generated by its extremals; in particular, this holds for any finitely generated cone.
Let us recall some basic notions related to (ir)reducibility, which we use also in max algebra. With a matrix A = (a_ij) ∈ R^{n×n}_+ we associate a weighted (di)graph G(A) with the set of nodes N = {1, . . ., n} and the set of edges E ⊆ N × N containing a pair (i, j) if and only if a_ij ≠ 0; the weight of an edge (i, j) ∈ E is defined to be w(i, j) := a_ij. A graph with just one node and no edges will be called trivial. A graph with at least one node and at least one edge will be called nontrivial.
A path P in G(A) consisting of the edges (i_0, i_1), (i_1, i_2), . . ., (i_{t−1}, i_t) has length l(P) := t and weight w(P) := w(i_0, i_1) · w(i_1, i_2) · · · w(i_{t−1}, i_t); if i_0 = i_t, then P is called a cycle. A matrix A is irreducible if G(A) is trivial or for any i, j ∈ {1, . . ., n} there is an i − j path. Otherwise A is reducible.
The notation A^{×k} will stand for the usual kth power of a nonnegative matrix.

2.2. Max algebra.
By max algebra we understand the set of nonnegative numbers R_+ where the role of addition is played by taking the maximum of two numbers: a ⊕ b := max(a, b), and the multiplication is as in the usual arithmetic. This is carried over to matrices and vectors as in the usual linear algebra, so that for two matrices A = (a_ij) and B = (b_ij) of appropriate sizes we have (A ⊕ B)_ij := a_ij ⊕ b_ij and (A ⊗ B)_ij := ⊕_k a_ik b_kj. The notation A^{⊗k} will stand for the kth max-algebraic power.
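These operations are straightforward to experiment with numerically. The following is a minimal numpy sketch (the function names max_mul and max_power are our own, not notation from the paper):

```python
import numpy as np

def max_mul(A, B):
    """Max-algebraic matrix product: (A ⊗ B)_ij = max_k a_ik * b_kj."""
    # Broadcast to shape (n, m, p) and maximize over the middle index k.
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def max_power(A, k):
    """kth max-algebraic power A^{⊗k}, k >= 1, by repeated products."""
    P = A
    for _ in range(k - 1):
        P = max_mul(P, A)
    return P
```

For example, for A = [[0, 2], [3, 0]] one gets A^{⊗2} = [[6, 0], [0, 6]]: the greatest weights of paths of length 2.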
In max algebra, we have the following analogue of a convex cone: a subset V ⊆ R^n_+ closed under componentwise maximum and under multiplication by nonnegative scalars is called a max cone.
Max cones are also known as idempotent semimodules [26,27]. A max cone V is said to be generated by S ⊆ R^n_+ if each v ∈ V can be represented as a max combination v = ⊕_{x∈S} α_x x, where only finitely many (nonnegative) α_x are different from zero. When V is generated (we also say "spanned") by S, this is denoted V = span_⊕(S). When V is generated by the columns of a matrix A, this is denoted V = span_⊕(A). This cone is closed with respect to the usual Euclidean topology [10].
The maximum cycle geometric mean of A is defined by λ(A) := max_C w(C)^{1/l(C)}, where the maximization is taken over all cycles C of G(A). The cycles attaining this maximum are called critical, and the critical graph C(A) consists of all nodes and edges of G(A) belonging to critical cycles. An important property of the critical graph is that it never has any edges connecting different strongly connected components.
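Since (A^{⊗k})_ii is the greatest weight of a closed walk of length k through i, and the optimal cycle geometric mean is attained on an elementary cycle of length at most n, λ(A) can be computed from the diagonals of the first n max-algebraic powers. A brute-force sketch under these observations (the function names are ours; this is not a Karp-style efficient algorithm):

```python
import numpy as np

def max_mul(A, B):
    # (A ⊗ B)_ij = max_k a_ik * b_kj
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def max_cycle_geometric_mean(A):
    """lambda(A): the maximum of w(C)^(1/l(C)) over cycles C.

    (A^{⊗k})_ii is the greatest weight of a closed walk of length k
    through i; the optimum is attained on an elementary cycle, whose
    length is at most n, so k = 1, ..., n suffices."""
    n = A.shape[0]
    lam, P = 0.0, A.copy()
    for k in range(1, n + 1):
        lam = max(lam, float(np.max(np.diag(P))) ** (1.0 / k))
        P = max_mul(P, A)
    return lam
```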
If for A ∈ R^{n×n}_+ we have A ⊗ x = ρx with ρ ∈ R_+ and a nonzero x ∈ R^n_+, then ρ is a max(-algebraic) eigenvalue and x is a max(-algebraic) eigenvector associated with ρ. The set of max eigenvectors x associated with ρ, with the zero vector adjoined to it, is a max cone. It is denoted by V_⊕(A, ρ).
An irreducible A ∈ R^{n×n}_+ has a unique max-algebraic eigenvalue equal to λ(A) [2,7,15,23]. In general, A may have several max eigenvalues, and the greatest of them equals λ(A). The greatest max eigenvalue will also be denoted by ρ_⊕(A) (thus ρ_⊕(A) = λ(A)), and called the principal max eigenvalue of A. In the irreducible case, the unique max eigenvalue ρ_⊕(A) = λ(A) is also called the max(-algebraic) Perron root. When max algebra and nonnegative algebra are considered simultaneously (e.g., in Section 3), the principal eigenvalue is denoted by ρ(A).
Unlike in nonnegative algebra, there is an explicit description of V_⊕(A, ρ_⊕(A)), see Theorem 6.2. This description uses the Kleene star

(2) A* := I ⊕ A ⊕ A^{⊗2} ⊕ A^{⊗3} ⊕ · · ·

Series (2) converges if and only if ρ_⊕(A) ≤ 1, in which case A* = I ⊕ A ⊕ · · · ⊕ A^{⊗(n−1)}.
The path interpretation of the max-algebraic matrix powers A^{⊗l} is that each entry a^{⊗l}_{ij} is equal to the greatest weight of an i − j path of length l. Consequently, for i ≠ j, the entry a*_{ij} of A* is equal to the greatest weight of an i − j path (with no length restrictions).
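Assuming ρ_⊕(A) ≤ 1, the Kleene star can be accumulated from the first n − 1 powers, in line with the path interpretation above. A small numpy sketch (the function name kleene_star is ours):

```python
import numpy as np

def max_mul(A, B):
    # (A ⊗ B)_ij = max_k a_ik * b_kj
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def kleene_star(A):
    """A* = I ⊕ A ⊕ A^{⊗2} ⊕ ...; the series stabilizes after n - 1
    terms when lambda(A) <= 1 (it diverges otherwise)."""
    n = A.shape[0]
    S = np.eye(n)   # the identity term: a*_ii >= 1 always
    P = np.eye(n)
    for _ in range(n - 1):
        P = max_mul(P, A)       # next power A^{⊗k}
        S = np.maximum(S, P)    # accumulate with ⊕ (entrywise max)
    return S
```

For i ≠ j, the (i, j) entry of the result is the greatest weight of an i − j path.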
2.3. Cyclicity and periodicity. Consider a nontrivial strongly connected graph G (that is, a strongly connected graph with at least one node and one edge). Define its cyclicity σ as the gcd of the lengths of all its elementary cycles. It is known that for any nodes i, j there exists a number l such that l(P) ≡ l (mod σ) for all i − j paths P.
When the length of an i − j path is a multiple of σ (and hence the same holds for all j − i paths), i and j are said to belong to the same cyclic class. When the length of this path is congruent to 1 modulo σ (in other words, if l(P) − 1 is a multiple of σ), the cyclic class of i (resp., of j) is called previous (resp., next) with respect to the class of j (resp., of i).
The cyclicity of a trivial graph is defined to be 1, and the unique node of a trivial graph is defined to be its only cyclic class.
We define the cyclicity of a (general) graph containing several strongly connected components to be the lcm of the cyclicities of the components.
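Computationally, the cyclicity of a strongly connected graph need not be found by enumerating cycles: with BFS distances d from any fixed node, σ equals the gcd of |d(u) + 1 − d(v)| over all edges (u, v), a standard trick. A sketch (the adjacency-dict input format and the function name are our choices); for a general graph one then takes the lcm of the values over its strongly connected components:

```python
import math
from collections import deque

def cyclicity(adj):
    """Cyclicity of a nontrivial strongly connected graph given as an
    adjacency dict {node: set of successors}.  Take BFS distances d
    from any fixed node; sigma is the gcd of |d[u] + 1 - d[v]| over
    all edges (u, v)."""
    start = next(iter(adj))
    d = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                queue.append(v)
    sigma = 0
    for u in adj:
        for v in adj[u]:
            sigma = math.gcd(sigma, abs(d[u] + 1 - d[v]))
    return sigma
```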
For a graph G = (N, E) with N = {1, . . ., n}, define the associated matrix A = (a_ij) ∈ {0, 1}^{n×n} by a_ij = 1 ⇔ (i, j) ∈ E. This is a matrix over the Boolean semiring B := {0, 1}, where addition is the disjunction and multiplication is the conjunction operation. This semiring is a subsemiring of max algebra, so it is possible to consider the associated matrix as a matrix in max algebra whose entries are either 0 or 1.
For a graph G and any k ≥ 1, define G^k as the graph that has the same node set as G, in which (i, j) is an edge of G^k if and only if there is a path of length k in G connecting i to j. Thus, if a Boolean matrix A is associated with G, then the Boolean matrix power A^{⊗k} is associated with G^k. Powers of Boolean matrices (over the Boolean semiring) are a topic of independent interest, see Brualdi-Ryser [5], Kim [25]. We will need the following observation.
The node sets of the components of G^k and G^{gcd(l,σ)} are the same: since gcd(k, σ) divides k, each component of G^{gcd(k,σ)} splits into several components of G^k, but the total number of components is the same (as gcd(gcd(k, σ), σ) = gcd(k, σ)), hence their node sets are the same. The claim follows since the node set of each component of G^{gcd(k,σ)} splits into several components of G^{gcd(l,σ)}.
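The correspondence between G^k and Boolean matrix powers is easy to check by direct computation; a small numpy sketch with Boolean arrays (the function name bool_power is ours):

```python
import numpy as np

def bool_power(A, k):
    """kth Boolean matrix power of an adjacency matrix A: the (i, j)
    entry is True iff there is an i-j path of length exactly k."""
    P = A.copy()
    for _ in range(k - 1):
        # Boolean product: OR over m of (P_im AND A_mj)
        P = np.any(P[:, :, None] & A[None, :, :], axis=1)
    return P
```

For the 3-cycle 1 → 2 → 3 → 1, the cube of the adjacency matrix is the identity pattern, matching cyclicity σ = 3.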
Let us formally introduce the definitions related to periodicity and ultimate periodicity of sequences (whose elements are of arbitrary nature). A sequence {Ω_k}_{k≥1} is called periodic if there exists an integer p such that Ω_{k+p} is identical with Ω_k for all k. The least such p is called the period of the sequence; if Ω_{k+p} = Ω_k holds only for all sufficiently large k, the sequence is called ultimately periodic, and the least k after which this holds is called the periodicity threshold. The following observation is crucial in the theory of Boolean matrix powers.
Theorem 2.3. Let G be a strongly connected graph on n nodes with cyclicity σ.
(i) The sequence {G^k}_{k≥1} is ultimately periodic with period σ. The periodicity threshold, denoted by T(G), does not exceed (n − 1)² + 1.
(ii) If G is nontrivial, then for k ≥ T(G) and a multiple of σ, the graph G^k consists of σ complete subgraphs not accessing each other.
For brevity, we will refer to T(G) as the periodicity threshold of G. We have the following two max-algebraic extensions of Theorem 2.3.

When the sequence {(A/ρ)^{⊗k}}_{k≥1} (resp. {(A/ρ)^{⊗k} ⊗ x}_{k≥1}) is ultimately periodic, we also say that the sequence {A^{⊗k}}_{k≥1} (resp. {A^{⊗k} ⊗ x}_{k≥1}) is ultimately periodic with growth rate ρ. Let us conclude with a well-known number-theoretic result concerning the coin problem of Frobenius, which we see as basic for both Boolean and max-algebraic cyclicity.
2.4. Diagonal similarity and visualization. For any x ∈ R^n_+, we can define X = diag(x) as the diagonal matrix whose diagonal entries are equal to the corresponding entries of x, and whose off-diagonal entries are zero. If x does not have zero components, the diagonal similarity scaling A → X^{−1}AX does not change the weights of cycles and the eigenvalues (both nonnegative and max); if z is an eigenvector of X^{−1}AX, then Xz is an eigenvector of A with the same eigenvalue. This scaling does not change the critical graph; moreover, (X^{−1}AX)^{⊗k} = X^{−1}A^{⊗k}X, also showing that the periodicity thresholds of max-algebraic matrix powers (Theorems 2.4 and 2.5) do not change after scaling. Of course, we also have (X^{−1}AX)^{×k} = X^{−1}A^{×k}X in nonnegative algebra. The technique of nonnegative scaling can be traced back to the works of Fiedler-Pták [16].
When working with max-algebraic matrix powers, it is often convenient to "visualize" the powers of the critical graph. Let A have λ(A) = 1. A diagonal similarity scaling A → X^{−1}AX is called a strict visualization scaling [7,41] if the matrix B = X^{−1}AX has b_ij ≤ 1, and moreover, b_ij = 1 if and only if (i, j) ∈ E_c(A) (= E_c(B)). Any matrix B satisfying these properties is called strictly visualized.
If A = (a_ij) has all entries a_ij ≤ 1, then we define the Boolean matrix A^{[1]} with entries a^{[1]}_{ij} = 1 if a_ij = 1, and a^{[1]}_{ij} = 0 otherwise. If A has all entries a_ij ≤ 1, then
(4) (A^{⊗k})^{[1]} = (A^{[1]})^{⊗k}.
Similarly, if a vector x ∈ R^n_+ has all x_i ≤ 1, we define x^{[1]} by x^{[1]}_i = 1 if x_i = 1, and x^{[1]}_i = 0 otherwise. Obviously, if A and x have all entries not exceeding 1, then (A ⊗ x)^{[1]} = A^{[1]} ⊗ x^{[1]}.
If A is strictly visualized, then a^{[1]}_{ij} = 1 if and only if (i, j) is a critical edge of G(A). Thus A^{[1]} can be treated as the associated matrix of C(A) (disregarding the formal difference in dimension). We now show that C(A^{⊗k}) = C(A)^k and that any power of a strictly visualized matrix is strictly visualized.
Proof. Using Theorem 2.7, we can assume without loss of generality that A is strictly visualized. Also note that both in C(A^{⊗k}) and in C(A)^k each node has ingoing and outgoing edges, hence for part (i) it suffices to prove that the two graphs have the same set of edges.
Applying Theorem 2.1 (i) to every component of C(A), we obtain that C(A)^k also consists of several isolated nontrivial strongly connected graphs. In particular, each edge of C(A)^k lies on a cycle, so C(A)^k contains cycles. Observe that G(A^{⊗k}) does not have edges with weight greater than 1, while all edges of C(A)^k have weight 1; hence all cycles of C(A)^k have weight 1 and are critical cycles of G(A^{⊗k}). Since each edge of C(A)^k lies on a critical cycle, all edges of C(A)^k are critical edges of G(A^{⊗k}).
G(A^{⊗k}) does not have edges with weight greater than 1, hence every edge of C(A^{⊗k}) has weight 1. Equation (4) implies that if a^{⊗k}_{ij} = 1, then there is a path from i to j composed of edges with weight 1. Since A is strictly visualized, such edges are critical. This shows that if a^{⊗k}_{ij} = 1, and in particular if (i, j) is an edge of C(A^{⊗k}), then (i, j) is an edge of C(A)^k. Hence A^{⊗k} is strictly visualized, and all edges of C(A^{⊗k}) are edges of C(A)^k. Thus C(A^{⊗k}) and C(A)^k have the same set of edges, so C(A^{⊗k}) = C(A)^k (and we have also shown that A^{⊗k} is strictly visualized).
Let T(C(A)) be the greatest periodicity threshold of the strongly connected components of C(A). The following corollary of Lemma 2.8 will be required in Section 7.

2.5. Frobenius normal form. Every matrix A ∈ R^{n×n}_+ can be brought by a simultaneous permutation of its rows and columns to the block lower triangular Frobenius normal form (5), where A_11, . . ., A_rr are irreducible square submatrices of A. They correspond to the sets of nodes N_1, . . ., N_r of the strongly connected components of G(A). Note that in (5) an edge from a node of N_µ to a node of N_ν in G(A) may exist only if µ ≥ ν.
Generally, A_KL denotes the submatrix of A extracted from the rows with indices in K ⊆ {1, . . ., n} and the columns with indices in L ⊆ {1, . . ., n}, and A_µν is a shorthand for A_{N_µ N_ν}. Accordingly, the subvector x_{N_µ} of x will be written as x_µ.
If A is in the Frobenius normal form (5), then the reduced graph, denoted by R(A), is the (di)graph whose nodes correspond to N_µ, for µ = 1, . . ., r, and whose arcs connect the nodes µ and ν such that A_µν has a nonzero entry. In max algebra and in nonnegative algebra, the nodes of R(A) are marked by the corresponding eigenvalues (Perron roots) of the diagonal blocks, denoted by ρ_µ when both algebras are considered simultaneously.
By a class of A we mean a node µ of the reduced graph R(A). It will be convenient to attribute to class µ the node set and the edge set of G(A_µµ), as well as the cyclicity and other parameters; that is, we will say "nodes of µ", "edges of µ", "cyclicity of µ", etc. Class µ accesses class ν, denoted by µ → ν, if there is a path from a node of µ to a node of ν in G(A). A class is called initial, resp. final, if it is not accessed by, resp. if it does not access, any other class. Node i of G(A) accesses class ν, denoted by i → ν, if i belongs to a class µ such that µ → ν.
Note that simultaneous permutations of the rows and columns of A are equivalent to calculating P^{−1}AP, where P is a permutation matrix. Such transformations do not change the eigenvalues, and the eigenvectors before and after such a transformation may differ only in the order of their components. Hence we will assume without loss of generality that A is in the Frobenius normal form (5). Note that a permutation bringing a matrix to this form is (relatively) easy to find [5]. We will refer to the transformation A → P^{−1}AP as permutational similarity.
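Such a permutation can be computed with any strongly-connected-components algorithm. The sketch below uses Kosaraju's algorithm and orders the components so that, as in (5), an edge from N_µ to N_ν can exist only if µ ≥ ν (the function name and the returned block format are our choices; recursive DFS, intended for small examples):

```python
import numpy as np

def frobenius_normal_form(A):
    """Permute A simultaneously in rows and columns so that the
    diagonal blocks are the strongly connected components of G(A)
    and the result is block lower triangular.  Returns the permuted
    matrix and the list of index blocks (original indices)."""
    n = A.shape[0]
    adj = [np.nonzero(A[i])[0] for i in range(n)]
    radj = [np.nonzero(A[:, j])[0] for j in range(n)]
    order, seen = [], [False] * n
    def dfs1(u):                      # first pass: record finish order
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    comp, blocks = [-1] * n, []
    def dfs2(u, block):               # second pass on the reverse graph
        comp[u] = len(blocks)
        block.append(u)
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, block)
    for u in reversed(order):
        if comp[u] == -1:
            block = []
            dfs2(u, block)
            blocks.append(block)
    # Kosaraju emits components with sources of the condensation first
    # (edges go from earlier to later blocks); reverse to obtain the
    # convention of (5), where an edge mu -> nu forces mu >= nu.
    blocks.reverse()
    perm = [i for b in blocks for i in b]
    return A[np.ix_(perm, perm)], blocks
```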
2.6. Elements of the Perron-Frobenius theory. In this section we recall the spectral theory of reducible matrices in max algebra and in nonnegative linear algebra. All results are standard: the nonnegative part goes back to Frobenius [18], Sect. 11, and the max-algebraic counterpart is due to Gaubert [19], Ch. IV (also see [7] for other references).
A class ν of A is called a spectral class of A associated with eigenvalue ρ ≠ 0, or sometimes an (A, ρ)-spectral class for short, if
(6) ρ_ν = ρ and µ → ν implies ρ_µ ≤ ρ (max algebra), resp. ρ_ν = ρ and µ → ν, µ ≠ ν implies ρ_µ < ρ (nonnegative algebra).
In both algebras, note that there may be several spectral classes associated with the same eigenvalue.
In nonnegative algebra, spectral classes are called distinguished classes [35], and there are also semi-distinguished classes, associated with distinguished generalized eigenvectors of order two or more [44]. However, these vectors are not contained in the core. Also, no suitable max-algebraic analogue of generalized eigenvectors is known to us.
If all classes of A consist of just one element, then the nonnegative and max-algebraic Perron roots are the same. In this case, the spectral classes in nonnegative algebra are also spectral in max algebra. However, in general this is not so. In particular, for a nonnegative matrix A, a cycle of G(A) attaining the maximum cycle geometric mean ρ_⊕(A) = λ(A) need not lie in a strongly connected component corresponding to a class with spectral radius ρ_+(A). This is because, if A_1, A_2 are irreducible nonnegative matrices such that ρ_+(A_1) < ρ_+(A_2), then we need not have ρ_⊕(A_1) < ρ_⊕(A_2). For example, if A_1 is the 1 × 1 matrix (2) and A_2 is the 3 × 3 matrix of all 1's, then ρ_+(A_1) = 2 < 3 = ρ_+(A_2), while ρ_⊕(A_1) = 2 > 1 = ρ_⊕(A_2).
Denote by Λ_+(A), resp. Λ_⊕(A), the set of nonzero eigenvalues of A ∈ R^{n×n}_+ in nonnegative linear algebra, resp. in max algebra. It will be denoted by Λ(A) when both algebras are considered simultaneously, as in the following standard description.
Theorem 2.10 encodes two parallel statements, one for max algebra and one for nonnegative algebra, where the notion of spectral class is defined in the two corresponding ways by (6).
In both algebras, for each ρ ∈ Λ(A) one defines an auxiliary matrix A_ρ extracted from A and normalized by ρ. By "ν is (A, ρ)-spectral" we mean that ν is a spectral class of A with ρ_ν = ρ. The next proposition, holding both in max algebra and in nonnegative algebra, allows us to reduce the case of an arbitrary eigencone to the case of the principal eigencone; here we assume that A is in Frobenius normal form. Proposition 2.11 identifies V(A, ρ) with the principal eigencone of A_ρ, where 1 is the principal eigenvalue of A_ρ.
For a parallel description of the extremals of eigencones⁵ in both algebras, see Section 6.1.
⁵ In nonnegative algebra, [35, Th. 3.7] immediately describes both spectral classes and eigencones associated with any eigenvalue. However, we prefer to split the formulation, following the exposition of [7]. An alternative simultaneous exposition of spectral theory in both algebras can be found in [24].
In max algebra, using Proposition 2.11, we define the critical graph associated with ρ ∈ Λ_⊕(A) as the critical graph of A_ρ. By a critical component of A we mean a strongly connected component of the critical graph associated with some ρ ∈ Λ_⊕(A). In max algebra, the role of the spectral classes of A is rather played by these critical components, which will be denoted (in analogy with the classes of the Frobenius normal form) by µ̃, with node set N_µ̃. See Section 5.2.

Notation table and key results
The following notation table shows how various objects are denoted in nonnegative algebra, max algebra and when both algebras are considered simultaneously.

Object               | Nonnegative algebra | Max algebra | Both algebras
Matrix power         | A^{×k}              | A^{⊗k}      | A^k
Column span          | span_+(A)           | span_⊕(A)   | span(A)
Principal eigenvalue | ρ_+(A)              | ρ_⊕(A)      | ρ(A)
Set of eigenvalues   | Λ_+(A)              | Λ_⊕(A)      | Λ(A)
Eigencone            | V_+(A, ρ)           | V_⊕(A, ρ)   | V(A, ρ)
Sum of eigencones    | V_Σ+(A)             | V_Σ⊕(A)     | V_Σ(A)
Core                 | core_+(A)           | core_⊕(A)   | core(A)

In the case of max algebra, we also have the critical graph C(A) (with related concepts and notation), not used in nonnegative algebra.
The core and the sum of eigencones appearing in the table have not been formally introduced. These are the two central notions of this paper, and we now introduce them for both algebras simultaneously.
The core of a nonnegative matrix A is defined as the intersection of the column spans (in other words, images) of its powers:
(9) core(A) := ⋂_{k≥1} span(A^k).
The (Minkowski) sum of eigencones of a nonnegative matrix A is the cone consisting of all sums of vectors in all V(A, ρ):
(10) V_Σ(A) := ∑_{ρ∈Λ(A)} V(A, ρ).
If Λ(A) = ∅, which happens when ρ(A) = 0, then we assume that the sum on the right-hand side is {0}.
The following notations can be seen as the "global" definition of cyclicity in nonnegative algebra and in max algebra.
1. Let σ_ρ be the lcm of the cyclicities of all spectral classes associated with ρ ∈ Λ_+(A) (nonnegative algebra), or the cyclicity of the critical graph associated with ρ ∈ Λ_⊕(A) (max algebra).
2. Let σ_Λ be the lcm of all σ_ρ where ρ ∈ Λ(A).
The difference between the definitions of σ_ρ in max algebra and in nonnegative algebra comes from the corresponding versions of the Perron-Frobenius theory. In particular, let A ∈ R^{n×n}_+ be an irreducible matrix. While in nonnegative algebra the eigencone associated with the Perron root of A is always reduced to a single ray, the number of (appropriately normalized) extremals of the eigencone of A in max algebra is equal to the number of critical components, so that there may be up to n such extremals.
One of the key results of the present paper relates the core with the sum of eigencones.
Main Theorem 1. In both algebras, core(A) = ∑_{k≥1} V_Σ(A^k).
The nonnegative part of this result can be found in Tam-Schneider [43, Th. 4.2, part (iii)].
The main part of the proof is given in Section 4, for both algebras simultaneously.
However, this proof takes in "credit" some facts which we will have to show. First of all, we need the equality Λ(A^k) = {ρ^k : ρ ∈ Λ(A)}. This simple relation between Λ(A^k) and Λ(A), which can be seen as a special case of [24, Theorem 3.6(ii)], will also be proved below as Corollary 5.5.
To complete the proof of Main Theorem 1, we also have to study the periodic sequence of eigencones of matrix powers and their sums. Along the way we obtain the following key result, both in max and nonnegative algebra.
Main Theorem 2 is proved in Section 7 as a corollary of Theorems 7.3 and 7.4, where the inclusion relations between eigencones of matrix powers are studied in detail. Theorem 6.5, which gives a detailed description of the extremals of both cores, can also be seen as a key result of this paper. However, it is too long to be formulated in advance.

Two cores
4.1. Basics. In this section we investigate the core of a nonnegative matrix defined by (9).
In the main argument, we consider the cases of max algebra and nonnegative algebra simultaneously.
One of the most elementary and useful properties of the intersection in (9) is that the column spans of matrix powers form a nested sequence:
(12) span(A) ⊇ span(A^2) ⊇ span(A^3) ⊇ · · ·
Generalizing an argument of Pullman [33], we will prove that
(13) core(A) = ∑_{k≥1} V_Σ(A^k)
also in max algebra.
Note that the following inclusion is almost immediate.
Proof. x ∈ V(A^k, ρ) implies that A^k x = ρx and hence x = ρ^{−t} A^{kt} x for all t ≥ 1 (using the invertibility of multiplication). Hence x ∈ ⋂_{t≥1} span(A^{kt}) = ⋂_{t≥1} span(A^t).
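Membership of a vector in a max-algebraic column span, as used in statements like the lemma above, can be tested by residuation: z_j = min_i {x_i / m_ij : m_ij > 0} is the greatest z with M ⊗ z ≤ x, and x ∈ span_⊕(M) if and only if M ⊗ z = x. A numpy sketch of this standard test (the function name is ours):

```python
import numpy as np

def in_max_span(M, x):
    """Test whether x lies in the max cone spanned by the columns of M.

    The principal candidate z_j = min_i { x_i / M_ij : M_ij > 0 } is
    the greatest vector with M ⊗ z <= x; a representation of x exists
    iff this candidate attains equality."""
    with np.errstate(divide='ignore', invalid='ignore'):
        Q = np.where(M > 0, x[:, None] / M, np.inf)
    z = Q.min(axis=0)
    z = np.where(np.isinf(z), 0.0, z)       # unconstrained columns get 0
    y = np.max(M * z[None, :], axis=1)      # y = M ⊗ z
    return bool(np.allclose(y, x))
```

For instance, with M = [[1, 1], [1, 2]], the vector (1, 1) is a max combination of the columns while (2, 1) is not.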
So it remains to show the opposite inclusion
(14) core(A) ⊆ ∑_{k≥1} V_Σ(A^k).
Let us first treat the trivial case ρ(A) = 0, i.e., Λ(A) = ∅. Then there are only trivial classes in the Frobenius normal form, and G(A) is acyclic. This implies A^k = 0 for some k ≥ 1.

4.2. Max algebra: cases of ultimate periodicity. In max algebra, unlike nonnegative algebra, there are wide classes of matrices for which (14) and (13) follow almost immediately. We list some of them below.
S1: Irreducible matrices. By the Cyclicity Theorem 2.4, for an irreducible matrix the sequence {A^{⊗k}}_{k≥1} is ultimately periodic with growth rate ρ_⊕(A).
S2: Ultimately periodic matrices. This is when the sequence {A^{⊗k}}_{k≥1} is ultimately periodic with a growth rate ρ (in other words, when the sequence {(A/ρ)^{⊗k}}_{k≥1} is ultimately periodic). As shown by Molnárová-Pribiš [30], this happens if and only if the Perron roots of all nontrivial classes of A equal ρ_⊕(A) = ρ.
S3: Robust matrices. For any vector x ∈ R^n_+, the orbit {A^{⊗k} ⊗ x}_{k≥1} hits an eigenvector of A, meaning that A^{⊗T} ⊗ x is an eigenvector of A for some T. This implies that the whole remaining part {A^{⊗k} ⊗ x}_{k≥T} of the orbit (the "tail" of the orbit) consists of multiples of that eigenvector A^{⊗T} ⊗ x. The notion of robustness was introduced and studied in [9].
S4: Orbit periodic matrices. For any vector x ∈ R^n_+, the orbit {A^{⊗k} ⊗ x}_{k≥1} hits an eigenvector of A^{⊗σ_x} for some σ_x, implying that the remaining "tail" of the orbit {A^{⊗k} ⊗ x}_{k≥1} is periodic with some growth rate. See [40, Section 7] for a characterization.
S5: Column periodic matrices. This is when for all i we have (A^{⊗(t+σ_i)})_{·i} = ρ_i^{σ_i} (A^{⊗t})_{·i} for all large enough t and some ρ_i and σ_i.
Observe that S1 ⊆ S2 ⊆ S4 ⊆ S5 and S3 ⊆ S4. Indeed, S1 ⊆ S2 is the Cyclicity Theorem 2.4. For the inclusion S2 ⊆ S4, observe that if A is ultimately periodic, then A^{⊗(t+σ)} = ρ^σ A^{⊗t} and hence A^{⊗(t+σ)} ⊗ x = ρ^σ A^{⊗t} ⊗ x holds for all x ∈ R^n_+ and all big enough t. Observe that S3 is a special case of S4, which is a special case of S5, since the columns of matrix powers can be considered as orbits of the unit vectors.
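The ultimate periodicity behind S1 and S2 is easy to observe numerically: for a matrix normalized so that its growth rate is 1, one can search by brute force for the least T and σ with A^{⊗(k+σ)} = A^{⊗k} for all k ≥ T. A diagnostic sketch (the function names are ours; this is not a substitute for the thresholds of Theorems 2.4 and 2.5):

```python
import numpy as np

def max_mul(A, B):
    # (A ⊗ B)_ij = max_k a_ik * b_kj
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def ultimate_period(A, k_max=60):
    """Search for (T, sigma) with A^{⊗(k+sigma)} = A^{⊗k} for all
    k >= T, assuming the powers of A have growth rate 1 (A already
    normalized by its Perron root).  Returns the least such pair
    detectable within the first k_max powers, or None."""
    powers = [A]
    for _ in range(k_max - 1):
        powers.append(max_mul(powers[-1], A))
    for T in range(k_max // 2):
        for sigma in range(1, k_max // 4):
            if all(np.allclose(powers[k], powers[k + sigma])
                   for k in range(T, k_max - sigma)):
                return T + 1, sigma      # powers[i] holds A^{⊗(i+1)}
    return None
```

For the 2-cycle permutation matrix the powers alternate with period 2 from the start.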
To see that (14) holds in all these cases, note that in the column periodic case all column sequences {(A^{⊗t})_{·i}}_{t≥1} end up with periodically repeating eigenvectors of A^{⊗σ_i} or the zero vector, which implies that span_⊕(A^{⊗t}) ⊆ ∑_{k≥1} V_Σ⊕(A^{⊗k}) ⊆ core_⊕(A), and hence span_⊕(A^{⊗t}) = core_⊕(A) for all large enough t. Thus, finite stabilization of the core occurs in all these classes. A necessary and sufficient condition for this finite stabilization is described in [12].
In the case of max algebra, Nitica and Singer [32] showed that at each point x ∈ R^n_+ there are at most n maximal max cones not containing this point. These conical semispaces, used to separate x from any max cone not containing x, turn out to be open. Hence they can be used in the max version of Pullman's argument.
However, for the sake of a simultaneous proof, we will exploit the following analytic argument instead of separation. By B(x, ε) we denote the intersection of the open ball centered at x ∈ R^n_+ of radius ε with R^n_+. In the remaining part of Section 4 we consider both algebras simultaneously.
Lemma 4.2. Let x_1, . . ., x_m ∈ R^n_+ be nonzero and let z ∉ span(x_1, . . ., x_m). Then there exists ε > 0 such that z ∉ span(B(x_1, ε), . . ., B(x_m, ε)).
Proof. By contradiction, assume that for each ε there exist points y_i(ε) ∈ B(x_i, ε) and nonnegative scalars µ_i(ε) such that
(15) z = ∑_{i=1}^m µ_i(ε) y_i(ε).
Since y_i(ε) → x_i as ε → 0 and the x_i are nonzero, we can assume that the y_i(ε) are bounded from below by nonzero vectors v_i, and then z ≥ ∑_{i=1}^m µ_i(ε) v_i for all ε, implying that the µ_i(ε) are uniformly bounded from above. By compactness, we can assume that the µ_i(ε) converge to some µ_i ∈ R_+, and then (15) implies by continuity that z = ∑_{i=1}^m µ_i x_i, a contradiction.

Theorem 4.3 ([33, Theorem 2.1]). Assume that {K
+ such that K l+1 ⊆ K l for all l, and each of them generated by no more than k nonzero vectors.Then the intersection K = ∩ ∞ l=1 K l is also generated by no more than k vectors.
Proof.Let K l = span(y l1 , . . ., y lk ) (where some of the vectors y l1 , . . ., y lk may be repeated when K l is generated by less than k nonzero vectors), and consider the sequences of normalized vectors {y li /||y li ||} l≥1 for i = 1, . . ., k, where ||u|| := max u i (or any other norm).As the set {u : ||u|| = 1} is compact, we can find a subsequence {l t } t≥1 such that for i = 1, . . ., k, the sequence {y lti /||y lti ||} t≥1 converges to a finite vector u i , which is nonzero since ||u i || = 1.We will assume that ||y lti || = 1 for all i and t.
We now show that u_1, …, u_k ∈ K. Consider any i = 1, …, k. For each s, y_{l_t i} ∈ K_s for all sufficiently large t. As {y_{l_t i}}_{t≥1} converges to u_i and K_s is closed, we have u_i ∈ K_s. Since this is true for each s, we have u_i ∈ ∩_{s=1}^∞ K_s = K. Thus u_1, …, u_k ∈ K, and span(u_1, …, u_k) ⊆ K. We claim that also K ⊆ span(u_1, …, u_k). Indeed, take any z ∈ K and assume by contradiction that z ∉ span(u_1, …, u_k). By Lemma 4.2 there exists ε > 0 such that z ∉ span(B(u_1, ε), …, B(u_k, ε)). But for all large enough t we have y_{l_t i} ∈ B(u_i, ε) for all i, hence z ∈ K_{l_t} = span(y_{l_t 1}, …, y_{l_t k}) ⊆ span(B(u_1, ε), …, B(u_k, ε)), a contradiction.
Theorem 4.3 applies to the sequence {span(A^t)}_{t≥1}, so core(A) is generated by no more than n vectors.
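For intuition (our own numerical sketch, not part of the paper's argument), one can watch the nested spans span_⊕(A^{⊗t}) shrink toward the core in a small max-algebra example. The matrix `A` and the helper `maxtimes` below are hypothetical choices: a weight-one critical 3-cycle plus a transient node whose column decays.

```python
import numpy as np

def maxtimes(A, B):
    """Max-times matrix product: C[i, j] = max_k A[i, k] * B[k, j]."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

# Hypothetical example: a critical 3-cycle of weight 1 on nodes 0, 1, 2,
# plus a transient node 3 with a decaying loop of weight 0.3.
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.3],
])

# powers[t] holds A^{(x)(t+1)}; the column sequences become (up to a decaying
# transient in the last column) periodic with period 3, the cyclicity of the
# critical cycle, so the spans of high powers approach the core.
powers = [A.copy()]
for _ in range(30):
    powers.append(maxtimes(powers[-1], A))
```

The period here is 3 and not 1, which can be checked by comparing high powers.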

Proposition 4.4 ([33, Lemma 2.3]). The mapping induced by A on its core is a surjection.
Proof. First note that A does induce a mapping on its core. If z ∈ core(A), then for each t there exists x_t such that z = A^t x_t. Hence Az = A^{t+1} x_t, so Az ∈ ∩_{t≥2} span(A^t) = core(A).
Next, let m be such that A^m has the greatest number of zero columns (we assume that A is not nilpotent; recall that a zero column in A^k remains zero in all subsequent powers). If z = A^t x_t for t ≥ m + 1, we can also represent it as A^{m+1} u_t, where u_t := A^{t−m−1} x_t. The components of u_t corresponding to the nonzero columns of A^{m+1} are bounded, since A^{m+1} u_t = z. So we can assume that the sequence of subvectors of u_t with these components converges. Then the sequence y_t := A^m u_t also converges, since the indices of nonzero columns of A^m coincide with those of A^{m+1}, which are the indices of the converging subvectors of u_t. Let y be the limit of y_t. Since y_s = A^{s−1} x_s are in span(A^t) for all s > t, and since the cones span(A^t) are closed, we obtain y ∈ span(A^t) for all t. Thus we have found y ∈ core(A) satisfying Ay = z.
Theorem 4.3 and Proposition 4.4 show that the core is generated by finitely many vectors in R^n_+ and that the mapping induced by A on its core is "onto". Now we use the fact that a finitely generated cone in the nonnegative orthant (and, more generally, any closed cone) is generated by its extremals, both in nonnegative algebra and in max algebra; see [10, 45].

Proposition 4.5 ([33, Theorem 2.2]). The mapping induced by A on the extremal generators of its core is a permutation (i.e., a bijection).
Proof. Let core(A) = span(u_1, …, u_k), where u_1, …, u_k are extremals of the core. Suppose that x_j is a preimage of u_j in the core, that is, Ax_j = u_j for some x_j ∈ core(A), j = 1, …, k. Then x_j = ∑_{i=1}^k α_i u_i for some nonnegative coefficients α_1, …, α_k, and u_j = ∑_{i=1}^k α_i Au_i. Since u_j is extremal, it follows that u_j is proportional to Au_i for some i. Thus for each j ∈ {1, …, k} there exists an i ∈ {1, …, k} such that Au_i is a positive multiple of u_j. But since for each i ∈ {1, …, k} there is at most one j such that Au_i is a positive multiple of u_j, it follows that A induces a bijection on the set of extremal generators of its core.
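The permutation of Proposition 4.5 can be observed on a toy example (our own hypothetical choice, not from the paper): a weight-one 3-cycle in max algebra, whose core is generated by the three unit vectors, which A permutes cyclically.

```python
import numpy as np

def maxtimes_vec(A, x):
    """Max-times matrix-vector product: (A (x) x)[i] = max_k A[i, k] * x[k]."""
    return np.max(A * x[None, :], axis=1)

# Hypothetical example: a weight-one 3-cycle in max algebra.
# The core is generated by the unit vectors e0, e1, e2.
A = np.array([
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])
extremals = [np.eye(3)[:, i] for i in range(3)]

# A maps each extremal ray onto another extremal ray: e0 -> e1 -> e2 -> e0,
# so each extremal is an eigenvector of A^{(x)3} (with eigenvalue 1).
images = [maxtimes_vec(A, e) for e in extremals]
```

Applying A three times to any extremal returns that extremal, in line with the statement that the extremals split into periodic orbits.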
We are now ready to prove (13) and Main Theorem 1, taking the periodicity of the eigencone sequence (Main Theorem 2) on credit.
Proof of Main Theorem 1. Proposition 4.5 implies that all extremals of core(A) are eigenvectors of A^q, where q denotes the order of the permutation induced by A on the extremals of core(A). Hence core(A) is a subcone of the sum of all eigencones of all powers of A, which is the inclusion relation (14). Combining this with the reverse inclusion of Lemma 4.1, we obtain that core(A) is precisely the sum of all eigencones of all powers of A, and using (11) (proved in Section 5 below), we obtain the first part of the equality of Main Theorem 1. The last part of the equality of Main Theorem 1 now follows from the periodicity of eigencones formulated in Main Theorem 2, or more precisely, from the weaker result of Theorem 7.1 proved in Section 7.

Spectral classes and critical components of matrix powers
This section is mainly of technical importance. It shows that the union of the node sets of all spectral classes is invariant under matrix powering, and that the access relations between spectral classes in all matrix powers are essentially the same. Further, the case of an arbitrary eigenvalue can be reduced to the case of the principal eigenvalue for all powers simultaneously (in both algebras). At the end of the section we consider the critical components of max-algebraic powers.

5.1. Classes and access relations.
As in Section 4, the arguments are presented in both algebras simultaneously. This is due to the fact that the edge sets of G(A^{⊗k}) and G(A^{×k}) are the same for any k, and that the definitions of spectral classes in both algebras are alike. Results of this section can be traced back, for the case of nonnegative algebra, to the classical work of Frobenius [18]; see the remarks on the very first page of [18] concerning the powers of an irreducible nonnegative matrix. The reader is also referred to the monographs of Minc [29], Berman-Plemmons [3] and Brualdi-Ryser [5], and we will often cite the work of Tam-Schneider [43, Section 4], which contains all of our results of this section in nonnegative algebra.
…as the number of eigenvalues that lie on the spectral circle. He then remarks: "If A is primitive, then every power of A is again primitive and a certain power and all subsequent powers are positive". This is followed by: "If A is imprimitive, then A^m consists of d irreducible parts where d is the greatest common divisor of m and k. Further, A^m is completely reducible. The characteristic functions of the components differ only in the powers of the variable" (which provides a converse to the preceding assertion). And then: "The matrix A^k is the lowest power of A whose components are all primitive". These three quotations cover Lemma 5.1 in the case of nonnegative algebra.

Proof of Lemma 5.1. (i): Assuming without loss of generality ρ = 1, let X = diag(x) for a positive eigenvector x ∈ V(A, ρ), and consider B := X^{−1}AX, which is stochastic (nonnegative algebra), or max-stochastic, i.e., such that max_{j=1,…,n} b_{ij} = 1 holds for all i (max algebra). By Theorem 2.1, B^k is permutationally similar to a direct sum of gcd(k, σ) irreducible isolated blocks. These blocks are stochastic (or max-stochastic), hence they all have the eigenvector (1, …, 1) associated with the unique eigenvalue 1. If y ∈ V(A^k, ρ̃) for some ρ̃, then its subvectors corresponding to the irreducible blocks of A^k are also eigenvectors of those blocks, or zero vectors. Hence ρ̃ = 1, which is the only eigenvalue of A^k.
(ii): By Theorem 2.1, G(A^k) splits into gcd(k, σ) = σ components, and each of them contains exactly one cyclic class of G(A).
(iii): Use the definition of cyclic classes and the fact that each node has an ingoing edge.

Lemma 5.2. Both in max algebra and in nonnegative linear algebra, the trivial classes of A^k are the same for all k.
Proof. In both algebras, an index belongs to a class with nonzero Perron root if and only if the associated graph contains a cycle with nonzero weight traversing the node with that index. This property is invariant under taking matrix powers, hence the claim.
In both algebras, each class µ of A with cyclicity σ corresponds to an irreducible submatrix A_{µµ}. It is easy to see that (A^k)_{µµ} = (A_{µµ})^k. Applying Lemma 5.1 to A_{µµ}, we see that µ gives rise to gcd(k, σ) classes in A^k, which are said to be derived from their common ancestor µ. If µ is trivial, then it gives rise to a unique trivial derived class of A^k, and if µ is non-trivial then all the derived classes are non-trivial as well. The classes of A^k and A^l derived from a common ancestor will be called related. Note that this is an equivalence relation on the set of classes of all powers of A. Evidently, a class of A^k is derived from a class of A if and only if its index set is contained in the index set of the latter class. It is also clear that each class of A^k has an ancestor in A.
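For illustration (our own sketch, not from the paper): taking the ancestor µ to be a single σ-cycle with σ = 6, the number of strongly connected components of G(A^k), and hence of derived classes, equals gcd(k, σ).

```python
import numpy as np
from math import gcd

def bool_power(B, k):
    """Boolean adjacency matrix of G(A^k): edge i -> j iff a path of length k."""
    n = len(B)
    P = np.eye(n, dtype=bool)
    for _ in range(k):
        P = (P[:, :, None] & B[None, :, :]).any(axis=1)
    return P

def num_sccs(B):
    """Number of strongly connected components, via Warshall transitive closure."""
    n = len(B)
    R = B | np.eye(n, dtype=bool)
    for m in range(n):
        R = R | (R[:, [m]] & R[[m], :])
    mutual = R & R.T  # i, j lie in one class iff they access each other
    return len({tuple(row) for row in mutual})

# Hypothetical ancestor class: a single 6-cycle, so its cyclicity is sigma = 6.
sigma = 6
C = np.zeros((sigma, sigma), dtype=bool)
for i in range(sigma):
    C[i, (i + 1) % sigma] = True

# The number of classes of C^k derived from the ancestor is gcd(k, sigma).
derived = [num_sccs(bool_power(C, k)) for k in range(1, 13)]
```

For k = 1, …, 12 this yields 1, 2, 3, 2, 1, 6, 1, 2, 3, 2, 1, 6, matching gcd(k, 6).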
We now observe that access relations in matrix powers are "essentially the same". The following lemma has an identical proof in max algebra and in nonnegative algebra.

Lemma 5.3. For all k, l ≥ 1 and ρ > 0, if an index i ∈ {1, …, n} accesses (resp. is accessed by) a class with Perron root ρ^k in A^k, then i accesses (resp. is accessed by) a related class with Perron root ρ^l in A^l.
Proof. We deduce from Lemma 5.1 and Lemma 5.2 that the index set of each class of A^k with Perron root ρ^k is contained in an ancestor class of A with Perron root ρ. Then, i accessing (resp. being accessed by) a class in A^k implies i accessing (resp. being accessed by) its ancestor in A. Since ρ > 0, this ancestor class is non-trivial, so the access path can be extended to have a length divisible by l, by means of a path contained in the ancestor class. By Lemma 5.1, the ancestor decomposes in A^l into several classes with the common Perron root ρ^l, and i accesses (resp. is accessed by) one of them.

Proof of Theorem 5.4. (i): We will prove the following equivalent statement: For each pair µ, ν, where µ is a class in A and ν is a class derived from µ in A^k, we have that µ is non-spectral if and only if ν is non-spectral.
Observe that by Lemma 5.2, the Perron root of µ is 0 if and only if the Perron root of ν is 0. In this case, both µ and ν are non-spectral (by definition). Further, let ρ > 0 be the Perron root of µ. Then, by Lemma 5.1, the Perron root of ν is ρ^k. Let i be an index in ν. It also belongs to µ.
If µ is non-spectral, then i is accessed in A by a class with Perron root ρ′ such that ρ′ > ρ in max algebra, resp. ρ′ ≥ ρ in nonnegative algebra. By Lemma 5.3, there is a class of A^k which accesses i in A^k and has Perron root (ρ′)^k. Since (ρ′)^k > ρ^k in max algebra, resp. (ρ′)^k ≥ ρ^k in nonnegative algebra, we obtain that ν, being the class to which i belongs in A^k, is also non-spectral.
Conversely, if ν is non-spectral, then i is accessed in A^k by a class θ with Perron root equal to ρ̃^k for some ρ̃, and such that ρ̃^k > ρ^k in max algebra, resp. ρ̃^k ≥ ρ^k in nonnegative algebra. The ancestor of θ in A accesses i in A and has Perron root ρ̃. Since ρ̃ > ρ in max algebra, resp. ρ̃ ≥ ρ in nonnegative algebra, we obtain that µ, being the class to which i belongs in A, is also non-spectral. Part (i) is proved.
(ii): This part follows directly from Lemma 5.1 parts (i) and (ii).
Proof. By Theorem 2.10, the nonzero eigenvalues of A (resp. A^k) are precisely the Perron roots of the spectral classes of A (resp. A^k). By Theorem 5.4(i), if a class of A is spectral, then so is any class derived from it in A^k. This implies that {ρ^k : ρ ∈ Λ(A)} ⊆ Λ(A^k). The converse inclusion follows from the converse part of Theorem 5.4(i).
Let us note yet another corollary of Theorem 5.4. For A ∈ R^{n×n}_+ and ρ ≥ 0, let N(A, ρ) be the union of the index sets of all classes of A with Perron root ρ, and let N_s(A, ρ) be the union of the index sets of all spectral classes of A with Perron root ρ. Obviously, N_s(A, ρ) ⊆ N(A, ρ), and both sets (as defined for arbitrary ρ ≥ 0) are possibly empty. For the eigencones of A ∈ R^{n×n}_+, the case of an arbitrary ρ ∈ Λ(A) can be reduced to the case of the principal eigenvalue: V(A, ρ) = V(A_ρ, 1) (Proposition 2.11). Now we extend this reduction to the case of V(A^k, ρ^k), for any k ≥ 1. As in the case of Proposition 2.11, we assume that A is in Frobenius normal form.
Proof. (i): Apply Corollary 5.6 part (ii) and Lemma 5.3. (ii): Use that M_ρ is initial in G(A). (iii): By Proposition 2.11 (applied to A^k) we have V(A^k, ρ^k) = V((A^k)_{ρ^k}, 1), where, instead of (8), the definition uses the set M_{ρ^k} computed for A^k. By part (i), M_{ρ^k} = M_ρ, hence (A^k)_{ρ^k} = (A_ρ)^k and the claim follows.
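In display form, the reduction in part (iii) reads (a restatement of the proof above, under the paper's notation):

```latex
V(A^k,\rho^k) \;=\; V\bigl((A^k)_{\rho^k},\,1\bigr) \;=\; V\bigl((A_\rho)^k,\,1\bigr),
```

where the first equality is Proposition 2.11 applied to A^k, and the second uses M_{ρ^k} = M_ρ from part (i), which gives (A^k)_{ρ^k} = (A_ρ)^k.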
5.2. Critical components. In max algebra, when A is assumed to be strictly visualized, each component μ of C(A) with cyclicity σ corresponds to an irreducible Boolean submatrix A^{[1]}_{μμ} (as in the case of classes, A_{μμ} is a shorthand for A_{N_μ N_μ}). Using the strict visualization and Lemma 2.8 applied to (A^{⊗k})_{μμ}, we see that μ gives rise to gcd(k, σ) critical components in A^{⊗k}. As in the case of classes, these components are said to be derived from their common ancestor μ.

(iii) If A is strictly visualized, x_i ≤ 1 for all i, and supp(x^{[1]}) is a cyclic class of μ, then supp((A ⊗ x)^{[1]}) is the previous cyclic class of μ.
Proof. (i), (ii): Both statements are based on the fact that C(A^{⊗k}) = (C(A))^k, shown in Lemma 2.8. To obtain (i), also apply Theorem 2.1 to a component μ of C(A). (iii): Use (A ⊗ x)^{[1]} = A^{[1]} ⊗ x^{[1]}, the definition of cyclic classes, and the fact that each node in μ has an ingoing edge.

Describing extremals
The aim of this section is to describe the extremals of the core, in both algebras.To this end, we first give a parallel description of extremals of eigencones (the Frobenius-Victory theorems).
6.1. Extremals of the eigencones. We now describe the principal eigencones in nonnegative linear algebra and then in max algebra. By means of Proposition 2.11, this description can be obviously extended to the general case. As in Section 2.6, both descriptions are essentially known: see [7, 18, 19, 35].
We emphasize that the vectors x^{(µ)} and x^{(μ)} appearing below are full-size.

Theorem 6.1. (i) Each spectral class µ with ρ^+_µ = 1 corresponds to an eigenvector x^{(µ)}, whose support consists of all indices in the classes that have access to µ, and all vectors x ∈ V_+(A, 1) with supp x = supp x^{(µ)} are multiples of x^{(µ)}.
(ii) V_+(A, 1) is generated by the x^{(µ)} of (i), for µ ranging over all spectral classes with ρ^+_µ = 1.
(iii) The x^{(µ)} of (i) are extremals of V_+(A, 1). (Moreover, the x^{(µ)} are linearly independent.)

Note that the extremality and the usual linear independence of the x^{(µ)} (involving linear combinations with possibly negative coefficients) can be deduced from the description of supports in part (i), and from the fact that in nonnegative algebra, spectral classes associated with the same ρ do not access each other. This linear independence also means that V_+(A, 1) is a simplicial cone. See also [35, Th. 4.1].

In max algebra, the corresponding description (Theorem 6.2) includes:
(ii) V_⊕(A, 1) is generated by the x^{(μ)} of (i), for μ ranging over all components of C(A).
(iii) The x^{(μ)} of (i) are extremals in V_⊕(A, 1). (Moreover, the x^{(μ)} are strongly linearly independent in the sense of [6].)

To verify (i'), which is not explicitly stated in the references, use (i) and the path interpretation of A*. The vectors x^{(μ)} of Theorem 6.2 are also called the fundamental eigenvectors of A, in max algebra. Applying a strict visualization scaling (Theorem 2.7) allows us to get further details on these fundamental eigenvectors: x^{(μ)}_i ≤ 1 for all i, and moreover, supp((x^{(μ)})^{[1]}) = N_μ.
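As an illustration of fundamental eigenvectors (a hypothetical 3×3 example of ours, with maximum cycle geometric mean ρ_⊕(A) = 1): the Kleene star A* = I ⊕ A ⊕ … ⊕ A^{⊗(n−1)} can be computed directly, and its critical columns are max-algebraic eigenvectors, with columns from the same critical component being multiples of each other.

```python
import numpy as np

def maxtimes(A, B):
    """Max-times matrix product: C[i, j] = max_k A[i, k] * B[k, j]."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def kleene_star(A):
    """A* = I (+) A (+) A^{(x)2} (+) ... (+) A^{(x)(n-1)} in max-times algebra."""
    n = len(A)
    S, P = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        P = maxtimes(P, A)
        S = np.maximum(S, P)
    return S

# Hypothetical matrix with maximum cycle geometric mean 1: the critical graph
# consists of the loop at node 0 and the 2-cycle on nodes {1, 2}.
A = np.array([
    [1.0, 0.2, 0.0],
    [0.4, 0.0, 1.0],
    [0.0, 1.0, 0.3],
])
S = kleene_star(A)

# Critical columns of A* satisfy A (x) x = x; the columns with indices in the
# same critical component (here columns 1 and 2) are multiples of each other.
x0, x1 = S[:, 0], S[:, 1]
```

Here column 0 corresponds to the loop at node 0, and columns 1, 2 both correspond to the 2-cycle, so they coincide up to a scalar.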
(iv): In both algebras, the converse part of Theorem 5.4(i) shows that there are no spectral classes of A^σ other than the ones derived from the spectral classes of A. In nonnegative algebra, this shows that there are no extremals other than those described in (ii). In max algebra, on top of that, the converse part of Theorem 5.8(i) shows that there are no components of C((A_ρ)^{⊗σ}) other than the ones derived from the components of C(A_ρ), for ρ ∈ Λ_⊕(A); hence there are no extremals other than those described in (ii). In both algebras, it remains to count the extremals described in (ii).

Sequence of eigencones
The main aim of this section is to investigate the periodicity of eigencones and to prove Main Theorem 2. Unlike in Section 4, the proof of periodicity will be different in the cases of max algebra and nonnegative algebra. The periods of the eigencone sequences in max algebra and in nonnegative linear algebra are also, in general, different for the same nonnegative matrix (see Section 8 for an example). To this end, recall the definitions of σ_ρ and σ_Λ given in Section 3, which will be used below.
7.1. Periodicity of the sequence. We first observe that in both algebras
(17) V(A, ρ) ⊆ V(A^k, ρ^k) for all k ≥ 1
(if Ax = ρx then A^k x = ρ^k x). We now prove that the sequence of eigencones is periodic.
Proof. We will give two separate proofs of part (i), for the case of max algebra and the case of nonnegative algebra. Part (ii) follows from part (i). (Only the cases with ρ̃ = ρ are important.) By the Frobenius-Victory Theorems 6.1 and 6.2 and Theorem 5.4(i), there is a unique spectral class µ of A to which all indices in supp(x) have access. Since supp(y^i) ⊆ supp(x), we are restricted to the submatrix A_{JJ}, where J is the set of all indices accessing µ in A. In other words, we can assume without loss of generality that µ is the only final class in A, hence ρ is the greatest eigenvalue, and ρ = 1. Note that supp(x) ∩ N_µ ≠ ∅.
In nonnegative algebra, restricting the equality x = ∑_i y^i to N_µ, we obtain
(20) supp(x_µ) = ∪_i supp(y^i_µ).
If supp(y^i_µ) is non-empty, then y^i is associated with a spectral class of A^{×l} whose nodes are in N_µ. Theorem 6.1(i) implies that supp(y^i_µ) consists of all indices in a class of A^{×l}_{µµ}. As x can be any extremal eigenvector of A^{×k} with supp x ∩ N_µ ≠ ∅, (20) shows that each class of A^{×k}_{µµ} (corresponding to x) splits into several classes of A^{×l}_{µµ} (corresponding to the y^i). By Corollary 2.2 this is only possible when gcd(k, σ) divides gcd(l, σ), where σ is the cyclicity of the spectral class µ.
In max algebra, since ρ = 1, assume without loss of generality that A is strictly visualized. In this case A and x have all coordinates not exceeding 1. Recall that x^{[1]} is the Boolean vector defined by x^{[1]}_i = 1 ⇔ x_i = 1. Vector x corresponds to a unique critical component μ of C(A) with the node set N_μ. Then instead of (20) we obtain
(21) supp(x^{[1]}_μ) = ∪_i supp((y^i)^{[1]}_μ),
since x = ⊕_i y^i implies x^{[1]} = ⊕_i (y^i)^{[1]}, where supp(x^{[1]}) = supp(x^{[1]}_μ) by Proposition 6.3(ii) and Theorem 5.8(i). If supp((y^i)^{[1]}_μ) is non-empty, then so is supp(y^i_{N_μ}), so that y^i is associated with the eigenvalue 1. As y^i is extremal, Proposition 6.3(ii) implies that supp((y^i)^{[1]}_μ) consists of all indices in a class of (A^{[1]}_{μμ})^{⊗l}. As x can be any extremal eigenvector of A^{⊗k} with supp(x^{[1]}) ∩ N_μ ≠ ∅, (21) shows that each class of (A^{[1]}_{μμ})^{⊗k} splits into several classes of (A^{[1]}_{μμ})^{⊗l}. By Corollary 2.2 this is only possible when gcd(k, σ) divides gcd(l, σ), where σ is the cyclicity of the critical component μ.
Let us also formulate the following version, restricted to some ρ ∈ Λ(A).

Theorem 7.4. Let A ∈ R^{n×n}_+, and let σ be either the cyclicity of a spectral class (nonnegative algebra) or the cyclicity of a critical component (max algebra) associated with some ρ ∈ Λ(A). The following are equivalent for all positive k, l:

Proof. (i)⇒(ii) follows from elementary number theory, and (ii)⇒(iii) follows from (17) and Lemma 7.2(i). The proof of (iii)⇒(i) follows the lines of the proof of Theorem 7.3 (v)⇒(i), with a slight simplification: ρ̃ = ρ, and further, x and all y^i in x = ∑_i y^i are associated with the same eigenvalue.
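The role of gcd(k, σ) can be sketched numerically (our own illustration, in line with Lemma 5.1 and Theorem 5.8): for a single critical σ-cycle, the number of extremals of the eigencone of the k-th power equals the number of orbits of i ↦ (i + k) mod σ, which is gcd(k, σ); in particular, it is periodic in k with period σ.

```python
from math import gcd

def orbit_count(sigma, k):
    """Number of orbits of i -> (i + k) mod sigma on {0, ..., sigma - 1}:
    the number of cyclic classes of the k-th power of a sigma-cycle."""
    seen, count = set(), 0
    for start in range(sigma):
        if start in seen:
            continue
        count += 1
        i = start
        while i not in seen:
            seen.add(i)
            i = (i + k) % sigma
    return count

sigma = 6
# Number of extremals of the eigencone of the k-th power, for k = 1..12:
dims = {k: orbit_count(sigma, k) for k in range(1, 13)}
```

The counts agree with gcd(k, 6) and repeat with period 6, matching the equivalence gcd(k, σ) = gcd(l, σ) behind Theorem 7.4.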
We are now ready to deduce Main Theorem 2.

Proof of Main Theorem 2. We prove the first part. The inclusion was proved in Theorem 7.1(i), and we are left to show that σ_ρ is the least p such that V(A^{k+p}, ρ^{k+p}) = V(A^k, ρ^k) for all k ≥ 1. Taking k = σ_ρ and using Theorem 7.4 (ii)⇔(iii), we obtain gcd(σ_ρ + p, σ_ρ) = σ_ρ, implying that σ_ρ divides σ_ρ + p, so σ_ρ divides p. Since Theorem 7.1(i) also shows that V(A^{k+σ_ρ}, ρ^{k+σ_ρ}) = V(A^k, ρ^k) for all k ≥ 1, the result follows.
The second part can be proved similarly, using Theorem 7.1(ii) and Theorem 7.3 (iii)⇔(v).

Examples
We consider two examples of reducible nonnegative matrices, examining their cores in max algebra and in nonnegative linear algebra. The eigencones V_⊕(A, 1), V_⊕(A^{⊗2}, 1), V_⊕(A^{⊗3}, 1) and V_⊕(A^{⊗4}, 1) are generated by the last four columns of the Kleene stars A*, (A^{⊗2})*, (A^{⊗3})*, (A^{⊗4})*. Namely,

V_⊕(A, 1) = V_⊕(A^{⊗3}, 1) = span_⊕{(0, 1, 1, 1, 1)},
V_⊕(A^{⊗2}, 1) = span_⊕{(0, 1, 0.8797, 1, 0.8797), (0, 0.8797, 1, 0.8797, 1)},
V_⊕(A^{⊗4}, 1) = span_⊕{(0, 1, 0.6807, 0.7738, 0.8797), (0, 0.8797, 1, 0.6807, 0.7738), (0, 0.7738, 0.8797, 1, 0.6807), (0, 0.6807, 0.7738, 0.8797, 1)}.

By Main Theorem 1, core_⊕(A) is equal to V_⊕(A^{⊗4}, 1). Computing the max-algebraic powers of A, we see that the sequence of submatrices (A^{⊗t})_{MM} becomes periodic after t = 10, with period 4. In particular, …, where 0 < α < 0.0001. Observe that the last four columns are precisely the ones that generate V_⊕(A^{⊗4}, 1). Moreover, if α were 0 then the first column would be the following max-combination of the last four columns: … On the one hand, the first column of A^{⊗t} cannot be a max-combination of the last four columns for any t > 0, since a^{⊗t}_{11} > 0. On the other hand, a^{⊗t}_{11} → 0 as t → ∞, ensuring that the first column belongs to the core "in the limit". …, where 0 < α < 0.0001, and the first four digits of all entries in all higher powers are the same. It can be verified that the submatrix ((A/ρ_+(A))^{×12})_{MM} is, approximately, the outer product of the Perron eigenvector with itself, while the first column is also almost proportional to it. The first two columns are the generators of V_⊕(A^{⊗2}, 1). However, the last column is still not proportional to the third one, which shows that span_⊕(A^{⊗2}) ≠ core_⊕(A). It can be checked, however, that proportionality is achieved in span_⊕(A^{⊗4}), with the first two columns still equal to the generators of V_⊕(A^{⊗2}, 1); this shows that span_⊕(A^{⊗4}) is the sum of the above-mentioned max cones, and hence span_⊕(A^{⊗4}) = span_⊕(A^{⊗5}) = … = core_⊕(A). Hence we see that A is column periodic (S_5) and the core finitely stabilizes. See Figure 2 for a symbolic illustration. Here core_+(A) is equal to the ordinary (Minkowski) sum of V_+(A^{×2}, 1) and V_+(A, ρ^+_ν). To this end, it can be observed that, within the first 4 digits, the first two columns of A^{×t} become approximately periodic after t = 50, and the columns of the powers of the normalized submatrix A_{νν}/ρ^+_ν approximately stabilize after t = 7. Of course, there is no finite stabilization of the core in this case. However, the structure of the nonnegative core is similar to its max-algebraic counterpart described above.
This research was supported by EPSRC grant RRAH15735.Sergeȋ Sergeev also acknowledges the support of RFBR-CNRS grant 11-0193106 and RFBR grant 12-01-00886.Bit-Shun Tam acknowledges the support of National Science Council of the Republic of China (Project No. NSC 101-2115-M-032-007).
The cycles with cycle geometric mean equal to λ(A) are called critical, and the nodes and edges of G(A) that belong to critical cycles are also called critical. The set of critical nodes is denoted by N_c(A), the set of critical edges by E_c(A); these nodes and edges give rise to the critical graph of A, denoted by C(A) = (N_c(A), E_c(A)). A maximal strongly connected subgraph of C(A) is called a strongly connected component of C(A). Observe that C(A), in general, consists of several nontrivial strongly connected components.

Corollary 2.9. Let A ∈ R^{n×n}_+. Then T_c(A) ≥ T(C(A)).

2.5. Frobenius normal form. Every matrix A = (a_{ij}) ∈ R^{n×n}_+ can be transformed by simultaneous permutations of the rows and columns, in almost linear time, to a Frobenius normal form [3, 5].

4.3. Core: a general argument. The original argument of Pullman [33, Section 2] used the separation of a point from a closed convex cone by an open homogeneous halfspace (that contains the cone and does not contain the point).

Lemma 5.1 (cf. [3, Ch. 5, Ex. 6.9], [43, Lemma 4.5]). Let A be irreducible with the unique eigenvalue ρ, let G(A) have cyclicity σ, and let k be a positive integer.
(i) A^k is permutationally similar to a direct sum of gcd(k, σ) irreducible blocks with eigenvalues ρ^k, and A^k does not have eigenvalues other than ρ^k.
(ii) If k is a multiple of σ, then the sets of indices in these blocks coincide with the cyclic classes of G(A).

(iii) If supp(x) is a cyclic class of G(A), then supp(Ax) is the previous cyclic class.

Theorem 5.4 ([43, Corollary 4.6]). Let A ∈ R^{n×n}_+.
(i) If a class µ is spectral in A, then so are the classes derived from it in A^k. Conversely, each spectral class of A^k is derived from a spectral class of A.
(ii) For each class µ of A with cyclicity σ, there are gcd(k, σ) classes of A^k derived from it. If k is a multiple of σ, then the index sets of the derived classes are the cyclic classes of µ.
Proof. (i): This part follows from Lemmas 5.1 and 5.2. (ii): The inclusion N_s(A, ρ) ⊆ N_s(A^k, ρ^k) follows from the direct part of Theorem 5.4(i), and the inclusion N_s(A^k, ρ^k) ⊆ N_s(A, ρ) follows from the converse part of Theorem 5.4(i).

Theorem 5.7. Let k ≥ 1 and ρ ∈ Λ(A).
(i) The set of all indices having access to the spectral classes of A^k with the eigenvalue ρ^k equals M_ρ, for each k.
Evidently, a component of C(A^{⊗k}) is derived from a component of C(A) if and only if its index set is contained in the index set of the latter component. Following this line, we now formulate an analogue of Theorem 5.4 (and some other results).

Theorem 5.8 (cf. [7, Theorem 8.2.6], [8, Theorem 2.3]). Let A ∈ R^{n×n}_+.
(i) For each component μ of C(A) with cyclicity σ, there are gcd(k, σ) components of C(A^{⊗k}) derived from it. Conversely, each component of C(A^{⊗k}) is derived from a component of C(A). If k is a multiple of σ, then the index sets of the derived components are the cyclic classes of μ.
(ii) The sets of critical indices of A^{⊗k} for k = 1, 2, … are identical.

Theorem 6.2 ([7, Th. 4.3.5], [41, Th. 2.8]). Let A ∈ R^{n×n}_+ have ρ_⊕(A) = 1.
(i) Each component μ of C(A) corresponds to an eigenvector x^{(μ)}, defined as one of the columns A*_{•i} with i ∈ N_μ, all columns with i ∈ N_μ being multiples of each other.
(i') Each component μ of C(A) is contained in a (spectral) class µ with ρ^⊕_µ = 1, and the support of each x^{(μ)} of (i) consists of all indices in the classes that have access to µ.

x^{[1]}_i = 1 ⇔ x_i = 1. Vector x corresponds to a unique critical component μ of C(A) with the node set N_μ. Then instead of (20) we obtain (21).

Figure 1 gives a symbolic illustration of what is going on in this example. In nonnegative algebra, the block A_{MM} with M = {2, 3, 4, 5} is also the only spectral block. Its Perron root is approximately ρ_+(A) = 2.2101, and the corresponding eigencone …

Figure 1. The spans of matrix powers (upper curve) and the periodic sequence of their eigencones (lower graph) in Example 1 (max algebra).

Figure 2. The spans of matrix powers (upper graph) and the periodic sequence of their eigencones (lower graph) in Example 2 (max algebra).

Observe that the node sets of the components G_k and G_{gcd(k,σ)} (or G_l …). Then gcd(k, σ) divides gcd(l, σ) if and only if G_k and G_l are such that the node set of every component of G_l is contained in the node set of a component of G_k.