Asymptotic properties for Markovian dynamics in quantum theory and general probabilistic theories

We address asymptotic decoupling in the context of Markovian quantum dynamics. Asymptotic decoupling is an asymptotic property of a bipartite quantum system, and asserts that any correlation between the two quantum systems is broken after a sufficiently long time. In this paper, we show that asymptotic decoupling is equivalent to local mixing, which asserts the convergence to a unique stationary state on at least one of the two quantum systems. If the dynamics is asymptotically decoupling, any correlation between the two quantum systems is broken exponentially fast. Also, we give a criterion for mixing that is a system of linear equations. All results in this paper are proved in the framework of general probabilistic theories, but we also summarize them in quantum theory.

Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Introduction
Decoupling asserts that a quantum channel breaks a correlation between two quantum systems, and attracts much attention in open quantum systems [1] and quantum information [2][3][4] because the correlation between a system of interest and its environment should be small, for instance when evaluating information leakage [5,6]. One approach to decoupling is the decoupling theorem [3] by Dupuis et al, who clarified a relation between the accuracy of decoupling and conditional entropies. Here we would like to give another approach to decoupling in the context of quantum dynamics: how long does it take for a correlation or entanglement between two quantum systems to be broken? This is a natural question in the context of quantum repeaters [7], in order to evaluate an effective time to implement a quantum protocol. Related to this question, entanglement saving channels, asymptotic entanglement saving channels, and the PPT² conjecture [8][9][10][11][12] have been studied.
To study decoupling in the above approach, we employ Markovian time evolution, which is often discussed as a simple model in both the discrete and continuous cases. Markovian dynamics has often been discussed in the context of statistical mechanics [13][14][15]. In such studies, it is important how an initial state changes after a sufficiently long time. In particular, the convergence of the state (mixing) and the convergence of its long-time average (ergodicity) have attracted the interest of many researchers [13][14][15][16][17][18][19][20][21]. Hence, mixing and ergodicity have been discussed in the context of relaxation to thermal equilibrium [13][14][15] and in several applications beyond relaxation, e.g., quantum control [22], quantum estimation [23], quantum communication [24], and quantum many-body systems [25].
In this paper, we introduce asymptotic decoupling in the context of Markovian quantum dynamics, and clarify that asymptotic decoupling is equivalent to the convergence to a unique stationary state on at least one of the two quantum systems. The latter property is called local mixing, since the convergence to a unique stationary state on the whole quantum system is called mixing, which is a fundamental property in the study of discrete/continuous Markovian dynamics. In addition to this simple equivalence, we examine the decoupling speed at finite times. Since asymptotic decoupling is equivalent to local mixing, a criterion for mixing is useful to determine whether a given dynamics is asymptotically decoupling or not. A well-known criterion for mixing requires us to solve an eigenequation [16], but we give another criterion for mixing that is a system of linear equations.
As a simple application of the result stated in the second paragraph, we give a relation between irreducibility and primitivity, which are important properties in Perron-Frobenius theory. Irreducibility and primitivity guarantee the existence of Perron-Frobenius eigenvalues, which play an important role in analyzing hidden Markovian processes [26,27]. For example, Perron-Frobenius eigenvalues characterize the asymptotic performance of the average of observed values in a classical hidden Markovian process: the central limit theorem, large deviation, and moderate deviation [26]. The same method can be applied to a hidden Markovian process with a quantum hidden system [27]. Moreover, irreducibility and primitivity are useful to investigate entanglement loss in Markovian quantum dynamics thanks to their stability [12]. This importance motivates us to address irreducibility and primitivity. Since irreducibility and primitivity are close to ergodicity and mixing, respectively, we derive a relation between irreducibility and primitivity from the relation between ergodicity and mixing.
In general, quantum dynamics is composed of quantum channels given as completely positive and trace-preserving linear maps (CPTP maps). However, in order to handle stochastic operations, including stochastic local operations and classical communication (SLOCC), we need to address quantum channels that are not trace-preserving. Fortunately, most results in this paper do not require the trace-preserving property, and complete positivity is also not necessarily required. Therefore, we mainly state our results for positive maps.
So far, we have stated the asymptotic properties in the context of Markovian classical/quantum dynamics, but our results hold in the framework of general probabilistic theories (GPTs). GPTs are a general framework that includes quantum theory and classical probability theory [28][29][30]. Although quantum theory is widely accepted in current physics, general requirements based only on states and measurements imply GPTs and do not single out quantum theory. Hence, some researchers in physics study GPTs to explore conditions characterizing quantum theory. Since our results hold in GPTs, the asymptotic properties of discrete/continuous Markovian dynamics do not require the framework of quantum theory and are common properties of GPTs. GPTs also enable us to easily address Markovian quantum dynamics composed of (not necessarily CP) positive maps. Therefore, our results are proved in the framework of GPTs after the quantum versions of our results are stated in section 2. Furthermore, due to the generality of our setting, our results can be used to investigate the structure of linear maps preserving a cone, which is studied in Perron-Frobenius theory.
The remainder of this paper is organized as follows. Section 2 summarizes the results of sections 5 and 6 in the quantum case. As preparation for the later discussion, section 3 describes the framework of GPTs. Section 4 characterizes dynamical maps in the framework of GPTs. In this framework, section 5 investigates asymptotic properties of discrete Markovian dynamics, namely, asymptotic decoupling, ergodicity, and mixing. Section 6 proceeds to continuous Markovian dynamics, and shows the same results as in the discrete case by using the discrete results. Section 7 gives a simple application of a result in section 5.3 to Perron-Frobenius theory. Section 8 is our conclusion.

Summary of our results in quantum theory
First, we remark on the quantum channels that compose Markovian quantum dynamics. In quantum theory, quantum channels are given as CPTP maps on the set T(H) of all Hermitian matrices on a finite-dimensional quantum system H. The set T(H) can be regarded as a finite-dimensional real Hilbert space equipped with the Hilbert-Schmidt inner product ⟨X, Y⟩ = Tr XY. In particular, quantum channels are given as linear maps on T(H), i.e., superoperators. Most results in this paper require only linearity and positivity, but for simplicity we focus only on CPTP maps in this section.
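As a concrete illustration of this viewpoint, the following Python sketch represents a qubit channel as a superoperator, i.e., as a matrix acting on vectorized Hermitian matrices; the depolarizing channel and the input state are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

hs = lambda X, Y: np.trace(X.conj().T @ Y).real    # Hilbert-Schmidt inner product

# A qubit channel as a superoperator: the depolarizing channel
#   Gamma(rho) = p * rho + (1 - p) * Tr(rho) * I/2,
# written as a 4x4 real matrix acting on row-major vectorized matrices.
p = 0.7
I2 = np.eye(2)
Gamma = p * np.eye(4) + (1 - p) * 0.5 * np.outer(I2.ravel(), I2.ravel())

rho = np.array([[0.8, 0.3],
                [0.3, 0.2]])                        # Hermitian, PSD, trace 1
out = (Gamma @ rho.ravel()).reshape(2, 2)

assert np.isclose(hs(I2, out), hs(I2, rho))        # trace is preserved
assert np.all(np.linalg.eigvalsh(out) >= -1e-12)   # output is again a state
```

Iterating `Gamma @ ...` then realizes the discrete Markovian dynamics discussed below.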
Next, let us introduce some notation on a bipartite quantum system H_1 ⊗ H_2. Throughout this paper, we use a tilde to indicate bipartite objects. For instance, a bipartite quantum state is denoted by ρ̃, and a bipartite quantum channel is denoted by Γ̃. For a number i ∈ {1, 2} and a quantum state ρ̃ on H_1 ⊗ H_2, the reduced state on H_i of ρ̃ is denoted by π_i(ρ̃) to emphasize the remaining system instead of the deleted system. That is, by using the partial traces Tr_1 and Tr_2, we have π_1(ρ̃) = Tr_2 ρ̃ and π_2(ρ̃) = Tr_1 ρ̃.

Discrete-time evolution
As shown in figure 1, let us consider the dynamics obtained when a quantum channel Γ is applied to an initial quantum state ρ many times. This dynamics is a Markovian discrete-time evolution. Now, in the bipartite case, we are interested in the asymptotic behavior of the nth state Γ̃^n(ρ̃) as n → ∞, and especially in whether the correlation between the two quantum systems vanishes asymptotically. If the correlation vanishes asymptotically, we say that Γ̃ is asymptotically decoupling. For later convenience, its non-asymptotic version is called decoupling. Mathematically, asymptotic decoupling and (non-asymptotic) decoupling are defined as follows.
Since the right-hand sides above are product states, these definitions express the absence of correlation. Next, to clarify a necessary and sufficient condition for asymptotic decoupling, we introduce another asymptotic property, namely, mixing.

Definition 2.3 (Mixing).
A quantum channel Γ is mixing if there exists a state ρ_0 such that any state ρ satisfies

lim_{n→∞} Γ^n(ρ) = ρ_0.    (1)

The state ρ_0 is called the stationary state.
Using this term, let us give a necessary and sufficient condition for asymptotic decoupling. For simplicity, we first give it for a tensor product quantum channel Γ_1 ⊗ Γ_2.
Theorem 2.4. For any two quantum channels Γ 1 and Γ 2 , the following conditions are equivalent.
Condition (b) can be summarized by the simple phrase 'local mixing'. Hence, simply speaking, asymptotic decoupling is equivalent to local mixing. Since the spectral criterion [16, theorem 7] is a known criterion for mixing, the computation of eigenequations determines whether a tensor product quantum channel is asymptotically decoupling or not. This is why theorem 2.4 is simple and meaningful. Moreover, the above equivalence also holds for a general quantum channel, as stated in the next theorem.
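The spectral criterion can be checked numerically once a channel is written as a matrix. The sketch below illustrates this for classical channels (column-stochastic matrices, which are a special case of CPTP maps); the two matrices are illustrative examples of our choosing.

```python
import numpy as np

def is_mixing_spectral(T, tol=1e-9):
    """Spectral criterion (sketch): a probability-preserving positive map is
    mixing iff 1 is its only eigenvalue on the unit circle and it is simple."""
    lam = np.linalg.eigvals(T)
    peripheral = lam[np.abs(np.abs(lam) - 1.0) < tol]
    return len(peripheral) == 1 and abs(peripheral[0] - 1.0) < tol

# Column-stochastic matrices acting on probability vectors (classical channels).
T_mix = np.array([[0.9, 0.2],
                  [0.1, 0.8]])      # eigenvalues {1, 0.7}: mixing
T_swap = np.array([[0.0, 1.0],
                   [1.0, 0.0]])     # permutation, eigenvalues {1, -1}: not mixing

print(is_mixing_spectral(T_mix))    # True
print(is_mixing_spectral(T_swap))   # False
```

The same check applies verbatim to a quantum channel written as a superoperator matrix.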
However, from the viewpoint of quantum physics, it is more important to examine the decoupling speed. To discuss the decoupling speed, we need to evaluate the error term o(1) for finite n ∈ N. For this purpose, let us define the remaining and vanishing parts of a quantum channel Γ below. The Jordan decomposition theorem implies that there exists a unique tuple of r linear maps Θ_1, …, Θ_r such that Γ = Θ_1 + ⋯ + Θ_r and Θ_i ∘ Θ_j = 0 for i ≠ j, where λ_1, …, λ_r are the distinct eigenvalues of Γ, λ_i is the unique eigenvalue of Θ_i on its support, and the symbol id_{T(H)} denotes the identity map on T(H). Since the absolute value of any eigenvalue λ_i is less than or equal to one, the linear maps Γ_rem and Γ_vani are defined as

Γ_rem = Σ_{i : |λ_i| = 1} Θ_i,   Γ_vani = Σ_{i : |λ_i| < 1} Θ_i.

Then Γ_rem ∘ Γ_vani = Γ_vani ∘ Γ_rem = 0 and Γ = Γ_rem + Γ_vani. If we take an arbitrary λ_vani ∈ (max_{|λ_i|<1} |λ_i|, 1), the relation ‖Γ^n_vani‖ = O(λ_vani^n) follows from the definition. (For details, see section 4.) Since Γ^n_vani vanishes as n → ∞, the asymptotic behavior of Γ^n is characterized by Γ_rem. Also, it turns out that (Γ^n)_rem = Γ^n_rem and (Γ^n)_vani = Γ^n_vani. Theorem 2.5. For any bipartite quantum channel Γ̃, the following conditions are equivalent.
There exist a number i_1 ∈ {1, 2} and a state ρ_{0,i_1} on H_{i_1} such that any bipartite quantum state ρ̃ satisfies the decoupled form of Γ̃^n(ρ̃) below, where i_2 is the element of {1, 2} other than i_1. Moreover, the error term equals the expression in (3). Condition (c) can be explicitly written as

Case (i_1, i_2) = (1, 2): Γ̃^n(ρ̃) = ρ_{0,1} ⊗ π_2(Γ̃^n(ρ̃)) + o(1),
Case (i_1, i_2) = (2, 1): Γ̃^n(ρ̃) = π_1(Γ̃^n(ρ̃)) ⊗ ρ_{0,2} + o(1).

Since the state ρ_{0,1} or ρ_{0,2} plays the role of a stationary state, condition (c) can also be regarded as local mixing. Hence, for a general quantum channel, the same simple equivalence holds: asymptotic decoupling is equivalent to local mixing. Since Γ̃^n_vani vanishes exponentially, the error term in condition (c) also vanishes exponentially as n → ∞. Remark 2.6. Some readers might feel that asymptotic decoupling resembles the non-linear operation ρ̃ ↦ π_1(ρ̃) ⊗ π_2(ρ̃), which is studied as an open timelike curve (OTC). It is known [31,32] that the use of OTCs can violate the uncertainty principle, solve NP-complete problems efficiently, and clone unknown quantum states. As is easily understood, this operation with OTCs cannot be realized as the limit of Markovian quantum dynamics: no bipartite quantum channel Γ̃ satisfies

lim_{n→∞} Γ̃^n(ρ̃) = π_1(ρ̃) ⊗ π_2(ρ̃)    (4)

for all bipartite quantum states ρ̃. Indeed, every Γ̃^n is linear and the limit of Γ̃^n is also linear, but the operation with OTCs is non-linear, and thus (4) is impossible. Although asymptotic decoupling resembles the operation with OTCs at first glance, asymptotic decoupling can be realized, as already shown in theorem 2.5.
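The decomposition Γ = Γ_rem + Γ_vani introduced above can be sketched numerically for a diagonalizable map; the Jordan-block subtleties are omitted, and the stochastic matrix below is an illustrative classical channel of our choosing.

```python
import numpy as np

def rem_vani_split(A, tol=1e-9):
    """Split a diagonalizable map A with r(A) <= 1 into A_rem (eigenvalues on
    the unit circle) and A_vani (eigenvalues strictly inside it)."""
    lam, V = np.linalg.eig(A)
    W = np.linalg.inv(V)
    on_circle = np.abs(np.abs(lam) - 1.0) < tol
    A_rem = ((V * np.where(on_circle, lam, 0)) @ W).real
    A_vani = ((V * np.where(on_circle, 0, lam)) @ W).real
    return A_rem, A_vani

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])               # stochastic; eigenvalues 1 and 0.7
T_rem, T_vani = rem_vani_split(T)

assert np.allclose(T, T_rem + T_vani)
assert np.allclose(T_rem @ T_vani, 0) and np.allclose(T_vani @ T_rem, 0)
# T^n approaches T_rem^n (= T_rem here) at the geometric rate 0.7^n:
err = np.linalg.norm(np.linalg.matrix_power(T, 20) - T_rem)
assert err < 0.7 ** 15
```

Here the exponential decay of the error term is visible directly: the distance to T_rem shrinks like λ_vani^n with λ_vani slightly above 0.7.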
Given this equivalence, it is meaningful to investigate a necessary and sufficient condition for mixing. Of course, as already mentioned, the spectral criterion [16, theorem 7] is a known criterion for mixing, but it requires us to solve an eigenequation. Hence, let us give another criterion for mixing that is a system of linear equations. For this purpose, we introduce another asymptotic property, namely, ergodicity. Definition 2.7 (Ergodicity). A quantum channel Γ is ergodic if there exists a state ρ_0 such that any state ρ satisfies

lim_{n→∞} (1/n) Σ_{k=1}^{n} Γ^k(ρ) = ρ_0.    (5)

The state ρ_0 is called the stationary state.
Since the expression in (5) is the Cesàro mean of the expression in (1), mixing implies ergodicity, but the converse does not necessarily hold. Definition 2.7 cannot be checked directly by computation, but a numerical check is enabled by the following proposition. Proposition 2.8. For any quantum channel Γ, the following conditions are equivalent.
Actually, as proved in section 5.3, a quantum channel Γ is mixing if and only if Γ^{⊗2} is ergodic. This equivalence and proposition 2.8 imply the following theorem. Theorem 2.9. For any quantum channel Γ, the following conditions are equivalent.
Proposition 2.8 is well known (for instance, see [16, appendix], [17, corollary 2]), but theorem 2.9 is not fully known as far as we know, and only partial results have been published. For instance, the equivalence of conditions (a) and (c) in theorem 2.9 was proved for unital quantum channels in finite dimensions [18, theorem 2.10]. An equivalence of conditions similar to (a) and (c) is known for unital normal CP maps in infinite dimensions [19, theorem 6.3]. However, only a few preceding studies considered tensor product channels in the first place [18][19][20]. Their proofs rely on operator algebras, while our proof relies only on linear algebra; thus theorem 2.9 also holds in GPTs and, moreover, extends to the case without trace preservation and positivity.
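The phenomenon behind the equivalence "Γ is mixing iff Γ^{⊗2} is ergodic" can be illustrated classically, with channels as column-stochastic matrices; the swap channel below is an example of our choosing that is ergodic but not mixing, and its two-fold tensor power fails to be ergodic.

```python
import numpy as np

def cesaro_mean(T, n):
    """(1/n) * sum_{k=1}^{n} T^k, the Cesaro mean of the channel powers."""
    acc, P = np.zeros_like(T, dtype=float), np.eye(len(T))
    for _ in range(n):
        P = P @ T
        acc += P
    return acc / n

P = np.array([[0., 1.],
              [1., 0.]])             # swap channel: ergodic but not mixing

# Ergodic: the Cesaro mean sends every distribution to (1/2, 1/2).
M = cesaro_mean(P, 1000)
assert np.allclose(M @ np.array([1., 0.]), [0.5, 0.5])
assert np.allclose(M @ np.array([0., 1.]), [0.5, 0.5])

# P (x) P is NOT ergodic: its Cesaro limit still depends on the input,
# which confirms (via the equivalence) that P is not mixing.
M2 = cesaro_mean(np.kron(P, P), 1000)
assert np.allclose(M2 @ np.array([1., 0., 0., 0.]), [0.5, 0., 0., 0.5])
assert np.allclose(M2 @ np.array([0., 1., 0., 0.]), [0., 0.5, 0.5, 0.])
```

Two different inputs give two different Cesàro limits for P ⊗ P, so no single stationary state exists for the tensor power.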
Finally, we remark on two related theorems. One is the long-time-average version of theorem 2.4, which states a relation between asymptotic decoupling of the long-time average and local ergodicity. The other is an extension of theorem 2.9, which follows immediately from theorems 2.4 and 2.9. Theorem 2.10 (Asymptotic decoupling of the long-time average). For any two quantum channels Γ_1 and Γ_2, the following conditions are equivalent.
Asymptotic decoupling of the long-time average implies local ergodicity, but the converse does not hold. This is because condition (b) in theorem 2.10 does not necessarily hold even if both Γ_1 and Γ_2 are ergodic. Although the error term in (3) vanishes exponentially as n → ∞, the error term in (19) decreases only as O(1/n).

Continuous-time evolution
Next, we address continuous-time evolution. While Γ^n(ρ) denoted the state at time n ∈ N in the discrete case, we denote by Γ^{(t)}(ρ) the state at time t > 0 in the continuous case. Moreover, it is natural that the family {Γ^{(t)}}_{t>0} of quantum channels should satisfy

Γ^{(t+s)} = Γ^{(s)} ∘ Γ^{(t)} for all s, t > 0,   lim_{t→+0} Γ^{(t)}(ρ) = ρ for any state ρ.

The first condition is illustrated in figure 2. It asserts that the state at time t + s equals the state obtained after a time s passes from the state at time t. A family {Γ^{(t)}}_{t>0} satisfying the above two conditions is called a C_0 semigroup. It can easily be checked that any C_0 semigroup {Γ^{(t)}}_{t>0} is right-continuous everywhere. The definition of a C_0 semigroup is simple and intuitive in the finite-dimensional case, but the infinite-dimensional case needs a few technical conditions. In the context of Markovian quantum dynamics, a C_0 semigroup is also called a quantum dynamical semigroup [38,39].
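The two semigroup conditions can be verified numerically for a classical continuous-time channel Γ^{(t)} = e^{tL}; the rate matrix L below is an illustrative choice, and the matrix exponential is computed by eigendecomposition for simplicity.

```python
import numpy as np

def expm_t(L, t):
    """e^{tL} via eigendecomposition (L is assumed diagonalizable)."""
    lam, V = np.linalg.eig(L)
    return ((V * np.exp(t * lam)) @ np.linalg.inv(V)).real

# A classical generator (rate matrix): nonnegative off-diagonal rates and
# columns summing to zero, so expm_t(L, t) is column-stochastic for t >= 0.
L = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])

t, s = 0.3, 1.1
# Semigroup property: evolving for t and then for s equals evolving for t + s.
assert np.allclose(expm_t(L, t + s), expm_t(L, s) @ expm_t(L, t))
# Channel property: columns of e^{tL} are nonnegative and sum to one.
G = expm_t(L, t)
assert np.all(G >= 0) and np.allclose(G.sum(axis=0), 1.0)
# Right-continuity at zero: e^{tL} -> identity as t -> +0.
assert np.allclose(expm_t(L, 1e-8), np.eye(2), atol=1e-6)
```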
In order to state our results in the continuous case, we need to define the continuous versions of asymptotic decoupling, mixing, and ergodicity. Fortunately, once Γ^n and n → ∞ are replaced with Γ^{(t)} and t → ∞ respectively, the continuous versions of asymptotic decoupling and mixing are obtained. However, to define the continuous version of ergodicity, we need to use an integral as follows: a C_0 semigroup {Γ^{(t)}}_{t>0} is ergodic if there exists a state ρ_0 such that any state ρ satisfies

lim_{T→∞} (1/T) ∫_0^T Γ^{(t)}(ρ) dt = ρ_0.

The state ρ_0 is called the stationary state.
Any mixing C_0 semigroup {Γ^{(t)}}_{t>0} is ergodic, in the same way as the discrete case. This fact follows from L'Hôpital's rule [40]. In the above setting, our results are as follows. Theorem 2.13. For any two C_0 semigroups {Γ^{(t)}_1}_{t>0} and {Γ^{(t)}_2}_{t>0}, the following conditions are equivalent.
Here Γ̃_{avg,t} and (Γ^{(s)})_{avg,n} are defined by Γ̃_{avg,t} = (1/t) ∫_0^t Γ̃^{(s)} ds and the corresponding discrete Cesàro mean, respectively. Actually, since it is known that mixing and ergodicity are equivalent to each other for any C_0 semigroup of CPTP maps [33, theorem 1], theorem 2.16 is redundant and can be simplified. However, we do not know whether the same equivalence holds without complete positivity. If the same equivalence does not hold for a C_0 semigroup of positive TP maps, theorem 2.16 is meaningful because it remains true for any C_0 semigroup of positive TP maps. Furthermore, mixing and ergodicity are not necessarily equivalent to each other for continuous dynamics in GPTs, but theorems 2.15 and 2.16 hold for continuous dynamics in GPTs.
Some readers might consider theorems 2.13-2.16 to be more general than theorems 2.4, 2.5, 2.10 and 2.9. However, this is not true, for the following reason. Any C_0 semigroup {Γ^{(t)}}_{t>0} can be represented as an exponential function [35, theorem I.3.7]: there exists a superoperator L such that

Γ^{(t)} = e^{tL}    (6)

for all t > 0. Thus, Γ^{(t)} does not have the eigenvalue zero, whereas in the discrete case Γ is allowed to have the eigenvalue zero. Moreover, {Γ^{(t)}}_{t>0} is a C_0 semigroup of CP maps if and only if {(Γ^{(t)})^{⊗2}}_{t>0} is a C_0 semigroup of positive maps [36, theorem 1]. This is in sharp contrast to the case of a single map: the positivity of Γ^{⊗2} does not imply the complete positivity of Γ. These are the reasons why the continuous case is somewhat restricted. In fact, a superoperator L is called a Lindbladian when it generates a semigroup of quantum channels as in (6); such generators were characterized by Lindblad [38] and Gorini et al [39]. For a Lindbladian L, the continuous Markovian dynamics {e^{tL}(ρ)}_{t>0} with an initial state ρ is the unique solution of the master equation (d/dt) ρ(t) = L(ρ(t)) with ρ(0) = ρ.
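The restriction noted above, that e^{tL} never has the eigenvalue zero, follows from det(e^{tL}) = e^{t tr L} ≠ 0, and can be checked numerically; the classical rate matrix and the singular discrete channel below are illustrative choices of ours.

```python
import numpy as np

def expm_t(L, t):
    lam, V = np.linalg.eig(L)              # e^{tL} via eigendecomposition
    return ((V * np.exp(t * lam)) @ np.linalg.inv(V)).real

L = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])               # illustrative classical generator

# det(e^{tL}) = e^{t tr L} > 0: e^{tL} never has the eigenvalue zero.
for t in (0.1, 1.0, 3.0):
    d = np.linalg.det(expm_t(L, t))
    assert d > 0 and np.isclose(d, np.exp(t * np.trace(L)))

# In contrast, a discrete channel may have the eigenvalue zero: this rank-one
# stochastic matrix sends every distribution to (1/2, 1/2) in a single step.
T = np.array([[0.5, 0.5],
              [0.5, 0.5]])
assert abs(np.linalg.det(T)) < 1e-12
```

Such a one-step "reset" channel therefore cannot arise as a time slice of any C_0 semigroup.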

GPTs with cones
On the basis of GPTs, let us define states and measurements. GPTs are a general framework including classical probability theory and quantum theory [28][29][30]. Simply speaking, a GPT consists of a proper cone K and a unit effect u, because the set S(K, u) of all states is defined by using K and u. Throughout this paper, we consider finite-dimensional GPTs alone. Also, we summarize basic lemmas on convex cones used implicitly/explicitly in this paper in appendix C. First, we provide some preliminary knowledge. We denote by V* the dual space of a finite-dimensional real vector space V, and assume dim V ≥ 2 throughout this paper. The dual space V* can be naturally identified with V by using an inner product ⟨·, ·⟩ on V; hence, we identify V* with V. A convex set K ⊂ V is called a convex cone if K contains every non-negative multiple of every x ∈ K. When a convex cone K is a closed set, we say that K is a closed convex cone (for short, a cone). For any non-empty convex cone K ⊂ V, the dual cone is defined as

K* = {y ∈ V | ⟨y, x⟩ ≥ 0 for all x ∈ K}.

It is well known [41,42] that for every non-empty convex cone K, the dual cone K* is a closed convex cone and K** = cl(K), where cl(X) denotes the closure of a subset X ⊂ V; the closedness and convexity of K* follow immediately from the definition. If K* = K, the cone K is called self-dual. The norm on V is defined as ‖x‖ = √⟨x, x⟩ for all x ∈ V. Figure 3 shows an instance of a cone in R² and its dual cone. The boundary of the cone in the left figure consists of the two rays l_1 and l_2, while the boundary of the dual cone in the right figure consists of the two rays l_1^⊥ and l_2^⊥. Hence, the larger the angle between l_1 and l_2 is, the smaller the angle between l_1^⊥ and l_2^⊥ is. Next, we describe a GPT, which requires the following components. Suppose that K ⊂ V is a proper cone, i.e., a cone satisfying int(K) ≠ ∅ and K ∩ (−K) = {0}, where int(X) denotes the interior of a subset X ⊂ V.
Once we fix a unit effect u ∈ int(K*), the set of all states is given as

S(K, u) = {x ∈ K | ⟨u, x⟩ = 1}.

The set S(K, u) is a compact convex set (see lemma C.6). A measurement is given as a decomposition {e_i}_i of u, namely, a family satisfying e_i ∈ K* and Σ_i e_i = u, where each i corresponds to an outcome. When the state is x and a measurement {e_i}_i is performed, the probability of obtaining outcome i is ⟨e_i, x⟩. Therefore, once we fix the tuple (V, ⟨·, ·⟩, K, u), our GPT is established.
Next, we give two typical examples of GPTs. Table 1 summarizes the tuples (V, ⟨·, ·⟩, K, u) that appear below. Example 3.1 (Classical probability theory). In order to recover classical probability theory with d outcomes, put V = R^d and K = [0, ∞)^d. Also, choose the inner product ⟨·, ·⟩ to be the standard inner product. Hence, the relation K* = [0, ∞)^d holds, and thus K is self-dual. Furthermore, choosing the unit effect u = (1, …, 1)^T, we find that the states are exactly the probability vectors. Thus, we obtain classical probability theory with d outcomes.
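The classical GPT can be made concrete in a few lines; the state and the two-outcome measurement below are illustrative choices of ours.

```python
import numpy as np

# Classical GPT with d = 3 outcomes: K = [0, inf)^3, u = (1, 1, 1),
# standard inner product.  States are exactly the probability vectors.
u = np.ones(3)
x = np.array([0.2, 0.5, 0.3])
assert np.all(x >= 0) and np.isclose(u @ x, 1.0)       # x lies in S(K, u)

# A measurement is a decomposition {e_i} of u with each e_i in K* = K.
e = [np.array([1., 0., 0.]), np.array([0., 1., 1.])]
assert np.allclose(sum(e), u)

probs = [ei @ x for ei in e]     # outcome probabilities <e_i, x>
assert np.isclose(sum(probs), 1.0)
print(probs)                     # [0.2, 0.8]
```

Replacing vectors by Hermitian matrices, the nonnegative orthant by the PSD cone, and u by the identity matrix yields example 3.2 in exactly the same pattern.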
Example 3.2 (Quantum theory). Next, let us recover quantum theory on a finite-dimensional complex Hilbert space H. Choose V to be the set T(H) of all Hermitian matrices on H. Also, choose K to be the set T_+(H) of all positive semi-definite matrices, which has non-empty interior and satisfies T_+(H) ∩ (−T_+(H)) = {0}. Furthermore, define the inner product ⟨·, ·⟩ as ⟨Y, X⟩ = Tr YX for all X, Y ∈ T(H). Hence, the relation K* = T_+(H) holds, and thus K is self-dual. Choosing the unit effect u to be the identity matrix on H, we find that the states are exactly the density matrices. Finally, we note that measurements are given as positive-operator valued measures (POVMs). Now, we return to general GPTs and focus on two GPTs given by two tuples (V_i, ⟨·, ·⟩, K_i, u_i) with i = 1, 2. The joint system is given as the tensor product space V_1 ⊗ V_2 equipped with the natural inner product induced by the inner products of V_1 and V_2. The joint unit effect is then u_1 ⊗ u_2. When two states x_1 ∈ S(K_1, u_1) and x_2 ∈ S(K_2, u_2) are prepared independently in the respective systems, it is natural that the state on the joint system is the product state x_1 ⊗ x_2. Since any convex combination of product states is also realized by randomization, any state x ∈ S(K_1 ⊗ K_2, u_1 ⊗ u_2) can be realized. Here we define the tensor product cone K_1 ⊗ K_2 as

K_1 ⊗ K_2 = {Σ_i x_i ⊗ y_i | x_i ∈ K_1, y_i ∈ K_2}.

Thus, the following proposition holds.

Proposition 3.3.
If K_1 and K_2 are cones, then K_1 ⊗ K_2 is a cone and K_1 ⊗ K_2 ⊂ (K_1* ⊗ K_2*)*.

Next, we consider what properties a cone K̃ of the joint system should satisfy. From the argument in the previous paragraph, the cone K̃ must contain K_1 ⊗ K_2. Similarly, by considering that two measurements are performed independently in the respective systems, we obtain K_1* ⊗ K_2* ⊂ K̃*. Hence, K̃ must satisfy

K̃_min := K_1 ⊗ K_2 ⊂ K̃ ⊂ (K_1* ⊗ K_2*)* =: K̃_max,

where we note that the relation K̃_min ⊂ K̃_max holds due to proposition 3.3. Of course, the cone K̃ is not necessarily unique, since K̃_min ≠ K̃_max in general. In fact, even if both K_1 and K_2 are self-dual, the cone K̃_min is not necessarily self-dual. To see this fact, we discuss two quantum systems as follows.
Example 3.4. We consider quantum theory on finite-dimensional complex Hilbert spaces H_1 and H_2. Then the vector spaces of the first, second, and joint systems are T(H_1), T(H_2), and T(H_1 ⊗ H_2), respectively. States on the joint system are density matrices in T_+(H_1 ⊗ H_2). Let |a_i⟩, |b_i⟩ ∈ H_i be orthonormal vectors for each i = 1, 2. Defining the unit vector |ψ⟩ = (|a_1⟩ ⊗ |a_2⟩ + |b_1⟩ ⊗ |b_2⟩)/√2, we find that the entangled state |ψ⟩⟨ψ| belongs to T_+(H_1 ⊗ H_2) but not to the separable cone Sep(H_1; H_2) = T_+(H_1) ⊗ T_+(H_2). Thus, the cone T_+(H_1 ⊗ H_2) strictly contains Sep(H_1; H_2). Noting the inclusion relation Sep(H_1; H_2) ⊂ T_+(H_1 ⊗ H_2) = T_+(H_1 ⊗ H_2)* ⊂ Sep(H_1; H_2)*, we obtain Sep(H_1; H_2) ≠ Sep(H_1; H_2)*. Therefore, the cones T_+(H_1) and T_+(H_2) are self-dual, but Sep(H_1; H_2) is not self-dual.
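The strictness of the inclusion T_+ ⊂ Sep* can also be witnessed numerically: the swap operator on C² ⊗ C² is nonnegative on all product vectors (so it lies in Sep(H_1; H_2)*) yet is not positive semi-definite. This sketch uses random real product vectors as a spot check, not a proof.

```python
import numpy as np

# SWAP|i, j> = |j, i> on C^2 (x) C^2, built entrywise.
SWAP = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        SWAP[2 * j + i, 2 * i + j] = 1.0

# Not PSD: SWAP has the eigenvalue -1 (antisymmetric subspace) ...
assert np.linalg.eigvalsh(SWAP).min() < -0.5

# ... yet <a (x) b, SWAP (a (x) b)> = <a, b>^2 >= 0 on all product vectors,
# so SWAP lies in the dual of the separable cone.
rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.normal(size=2), rng.normal(size=2)
    v = np.kron(a, b)
    assert v @ SWAP @ v >= -1e-12
```

Since Sep* contains an operator outside T_+ while Sep is contained in T_+, the cone Sep cannot equal its dual.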

K-positive maps and (K, u)-dynamical maps
In order to address Markovian dynamics, we describe dynamical maps. Suppose that two tuples (V, ⟨·, ·⟩, K, u) and (V′, ⟨·, ·⟩′, K′, u′) give GPTs. Since a dynamical map A sends states x ∈ S(K, u) to states Ax ∈ S(K′, u′), a dynamical map must satisfy some properties. First, a dynamical map A is a linear map from V into V′ because A needs to preserve the convex combination structure. Moreover, the linear map A must satisfy A(K) ⊂ K′ ((K; K′)-positivity) and A*u′ = u (the dual (u; u′)-preserving property).
If a linear map A is (K; K′)-positive and dual (u; u′)-preserving, we say that A is a (K, u; K′, u′)-dynamical map. In this paper, we consider the case (V, ⟨·, ·⟩, K, u) = (V′, ⟨·, ·⟩′, K′, u′) unless otherwise noted. In this case, the above properties are called K-positivity, the dual u-preserving property, and the (K, u)-dynamical property, respectively. Unless there is confusion, a (K, u)-dynamical map is simply called a dynamical map.
The definition of dynamical maps differs from that of quantum channels. In quantum theory, quantum channels are given as CPTP maps. Trace preservation and positivity in quantum theory correspond to the dual u-preserving property and K-positivity, respectively. Hence, a (K, u)-dynamical map is the counterpart of a trace-preserving positive map in quantum theory. Moreover, our class is strictly larger than the class of CPTP maps. The correspondences are summarized in table 2.
As stated in section 1, to handle stochastic operations including SLOCC, we need to address dynamical maps that are not dual u-preserving. Fortunately, most results in this paper do not require the dual u-preserving property. Hence, we mainly focus on K-positive maps. As known from preceding studies [43,44], K-positivity guarantees a few good properties. Before stating them, we introduce several basic terms of linear algebra below.
For a linear map A and its eigenvalue λ, the multiplicity of λ as a root of the characteristic polynomial is called the algebraic multiplicity and denoted by am(A; λ). Also, the (complex) dimension of the eigenspace W(A; λ) := Ker(A − λ id_V) is called the geometric multiplicity and denoted by gm(A; λ), where id_V denotes the identity map on V. Here, the eigenspace is a subspace of the complexification V_C := V + √−1 V, and A is naturally identified with the linear map on V_C given by A(x + √−1 y) = Ax + √−1 Ay. Unless there is confusion, √−1 is denoted by i. For convenience, we define am(A; λ) = gm(A; λ) = 0 if λ is not an eigenvalue of A. For any linear map A and any λ ∈ C, the inequality gm(A; λ) ≤ am(A; λ) holds. As another basic term, the greatest absolute value of all eigenvalues of a linear map A is called the spectral radius and denoted by r(A) [21]. The spectral radius satisfies

r(A) = lim_{n→∞} ‖A^n‖^{1/n},    (7)

where ‖A‖ is the operator norm of A based on the norm on V. Since any two norms on a finite-dimensional vector space are uniformly equivalent to each other, we can select any norm instead of the above one. Moreover, the equations r(A_1 ⊗ A_2) = r(A_1) r(A_2) and r(A*) = r(A) hold. For details, see [21]. Table 2. Properties for linear maps in quantum theory and GPTs.

Quantum theory            GPTs
Positivity                K-positivity
Trace-preserving (TP)     Dual u-preserving
Complete positivity (CP)
TP and positivity         (K, u)-dynamical property

In addition to the above terms, we also define the degree of an eigenvalue of a linear map, namely, the size of the largest Jordan block associated with it. For instance, the matrix

[0 1 0]
[0 0 0]
[0 0 1],

which is already in Jordan canonical form, has the eigenvalues 0 and 1. In this case, the degrees of 0 and 1 are two and one, respectively. Now we are ready to state the following proposition [43, theorem 3.1].

• r(A) is an eigenvalue of A.
• K contains an eigenvector of A associated with r(A).
• The degree of r(A) is greater than or equal to the degree of any other eigenvalue whose absolute value is r(A).

Next, we describe a decomposition of a linear map A, which is defined in the same way as in section 2.1. For simplicity, assume r(A) = 1 here. The Jordan decomposition theorem implies that there exists a unique tuple of r linear maps T_1, …, T_r such that A = T_1 + ⋯ + T_r and T_i ∘ T_j = 0 for i ≠ j, where λ_1, …, λ_r are the distinct eigenvalues of A and λ_i is the unique eigenvalue of T_i on its support. The linear maps A_rem and A_vani are defined as

A_rem = Σ_{i : |λ_i| = 1} T_i,   A_vani = Σ_{i : |λ_i| < 1} T_i,

and A^n_vani vanishes exponentially as n → ∞. To see this fact, take arbitrary λ_vani ∈ (max_{|λ_i|<1} |λ_i|, 1) and ε ∈ (0, λ_vani − max_{|λ_i|<1} |λ_i|). Since the equation r(T_i) = |λ_i| holds, equation (7) implies that ‖T_i^n‖ ≤ (|λ_i| + ε)^n ≤ λ_vani^n for any integer 1 ≤ i ≤ r with |λ_i| < 1 and any sufficiently large n ∈ N. Therefore, ‖A^n_vani‖ = O(λ_vani^n). Since A^n_vani vanishes as n → ∞, the asymptotic behavior of A^n is characterized by A_rem. Also, it turns out that (A^n)_rem = A^n_rem and (A^n)_vani = A^n_vani. Finally, we show the following basic propositions. Proof. Proposition 4.4 implies that A has an eigenvector x_0 ∈ K associated with r(A). Since
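Several of the linear-algebraic quantities in this section can be checked numerically. The sketch below computes the spectral radius, verifies Gelfand's formula and the product rule r(A_1 ⊗ A_2) = r(A_1) r(A_2), and recovers the multiplicities of a 3 × 3 Jordan-form matrix with eigenvalues 0 and 1 whose degrees are two and one (the specific matrices are illustrative choices of ours).

```python
import numpy as np

def spectral_radius(A):
    return np.abs(np.linalg.eigvals(A)).max()

A = np.array([[0.5, 0.3],
              [0.1, 0.4]])

# Gelfand's formula: ||A^n||^(1/n) -> r(A) as n grows.
n = 200
gelfand = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
assert abs(gelfand - spectral_radius(A)) < 1e-2

# Product rule for tensor products: r(A (x) A) = r(A)^2.
assert np.isclose(spectral_radius(np.kron(A, A)), spectral_radius(A) ** 2)

# Multiplicities for a Jordan-form matrix with eigenvalues 0 and 1:
J = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])
gm0 = 3 - np.linalg.matrix_rank(J)               # geometric multiplicity of 0
am0 = int(np.isclose(np.linalg.eigvals(J), 0).sum())  # algebraic multiplicity
assert (gm0, am0) == (1, 2)
# Degree of 0 (largest Jordan block size) is 2: Ker(J^2) is larger than Ker(J).
assert 3 - np.linalg.matrix_rank(J @ J) == 2
```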

Asymptotic properties for discrete Markovian dynamics
In this section, we address asymptotic properties of discrete Markovian dynamics in GPTs. Theorems 2.4, 2.5, 2.9, and 2.10 are derived from theorems 5.4, 5.5, 5.21, and 5.16, respectively. Throughout this section and the next, suppose that a tuple (V_i, ⟨·, ·⟩, K_i, u_i) gives a GPT for each i = 1, 2. Also, suppose that a tuple (V_1 ⊗ V_2, ⟨·, ·⟩, K̃, u_1 ⊗ u_2) gives a GPT on the joint system, where K̃_min ⊂ K̃ ⊂ K̃_max. We use a tilde to indicate bipartite objects, as in section 2.

Asymptotic decoupling
First, to define asymptotic decoupling in the framework of GPTs, we must define, for a state x̃ on the joint system, the reduced state π_i x̃ on the ith system. The reduced state π_i x̃ represents the state from the viewpoint of an observer of the ith system. In our context, it is convenient to define the reduced states π_1 x̃ and π_2 x̃ for a state x̃ ∈ S(K̃_max, u_1 ⊗ u_2) because the cone K̃_max is the largest of all cones K̃ of the joint system. The reduced state π_1(x̃; u_1, u_2) of x̃ ∈ S(K̃_max, u_1 ⊗ u_2) is defined by the condition

⟨y_1, π_1(x̃; u_1, u_2)⟩ = ⟨y_1 ⊗ u_2, x̃⟩ for all y_1 ∈ V_1.    (9)

Since K_1* has non-empty interior, the Riesz representation theorem guarantees the unique existence of π_1(x̃; u_1, u_2) ∈ V_1. Let us verify π_1(x̃; u_1, u_2) ∈ S(K_1, u_1) as follows. Since x̃ ∈ S(K̃_max, u_1 ⊗ u_2), the right-hand side of (9) is greater than or equal to zero for every y_1 ∈ K_1*, which implies π_1(x̃; u_1, u_2) ∈ K_1. Moreover, putting y_1 = u_1 in (9), we find that ⟨u_1, π_1(x̃; u_1, u_2)⟩ = ⟨u_1 ⊗ u_2, x̃⟩ = 1. Therefore, π_1(x̃; u_1, u_2) ∈ S(K_1, u_1). The reduced state π_2(x̃; u_1, u_2) ∈ S(K_2, u_2) is defined similarly. Unless there is confusion, we express π_i(x̃; u_1, u_2) simply as π_i x̃. Then π_i can be naturally regarded as a linear map from V_1 ⊗ V_2 onto V_i.
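In the classical GPT, the reduced state defined by condition (9) is just the marginal distribution; the joint probability table below is an illustrative choice, and the random vectors spot-check the defining condition.

```python
import numpy as np

# Classical bipartite GPT: a joint state is a probability table p(i, j).
p = np.array([[0.1, 0.2],
              [0.3, 0.4]])           # joint state on {0, 1} x {0, 1}
u2 = np.ones(2)                      # unit effect of the second system

marg1 = p.sum(axis=1)                # pi_1: sum out the second system
assert np.allclose(marg1, [0.3, 0.7])

# Verify the defining condition: <y1, pi_1 x> = <y1 (x) u2, x> for all y1.
rng = np.random.default_rng(1)
for _ in range(20):
    y1 = rng.normal(size=2)
    assert np.isclose(y1 @ marg1, np.kron(y1, u2) @ p.ravel())
```

The same inner-product condition defines π_1 in any GPT; only the cone and the unit effect change.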
Next, to define asymptotic decoupling for (not necessarily K̃-positive) linear maps, we introduce the set D_n(Ã) for a linear map Ã and n ∈ N. Then the monotonicity D_1(Ã) ⊂ D_2(Ã) ⊂ ⋯ holds.
The assumption (10) allows us to consider the limit (11). Definition 5.1 needs (10), but this assumption is mild because for any two dynamical maps A_1 and A_2, the tensor product map Ã = A_1 ⊗ A_2 satisfies (10) (see lemma A.1). In this case, (11) can be simplified. Next, to clarify a necessary and sufficient condition for asymptotic decoupling, we introduce (non-asymptotic) decoupling and mixing.
Then Ã is (K̃, u_1, u_2)-decoupling if any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies the above condition. The state x_0 is called the stationary state.
In the same way as in section 2, we give a necessary and sufficient condition for asymptotic decoupling, first for a tensor product map A_1 ⊗ A_2 and then for a general linear map Ã.
Theorem 5.4. For any two dynamical maps A 1 and A 2 , the following conditions are equivalent.
Theorem 5.5. For any dual u_1 ⊗ u_2-preserving map Ã satisfying (10), the following conditions are equivalent.
where i_2 is the element of {1, 2} other than i_1. Moreover, the error term equals an expression involving Ã^n_vani. Since Ã^n_vani vanishes exponentially, the error term in condition (c) also vanishes exponentially as n → ∞. Also, theorem 5.4 follows from theorem 5.5 immediately. To prove theorem 5.5, we give a necessary and sufficient condition for decoupling, which can be regarded as the non-asymptotic version of theorem 5.4. Lemma 5.6. For any dual u_1 ⊗ u_2-preserving map Ã with (12), the following conditions are equivalent.
(b) ⇒ (a). This implication follows from the definition. (a) ⇒ (b). Assume condition (a). Then condition (b) follows from the following steps.
Step 2 implies that π_i Ã x̃ does not depend on x̃ ∈ S(K̃, u_1 ⊗ u_2). That is, there exists a state x_{0,i} ∈ S(K_i, u_i) such that any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies π_i Ã x̃ = x_{0,i}. Thus, this equation and condition (a) yield condition (b).
Proof of theorem 5.5. (c) ⇒ (a). This implication follows from the definition.
(a) ⇒ (b). Assume condition (a). We can take a convergent subsequence {Ã^n}_{n∈N} (N ⊂ ℕ) of {Ã^n}_{n=1}^∞ such that its limit is Ã_rem (see lemma A.5). Letting N ∋ n → ∞ in (11), we obtain exactly condition (b). (b) ⇒ (c). Assume condition (b). Lemma 5.6 implies that there exist a number i_1 ∈ {1, 2} and a state x_{0,i_1} ∈ S(K_{i_1}, u_{i_1}) such that any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies the displayed equation, where i_2 is the element of {1, 2} other than i_1. Without loss of generality, we may assume (i_1, i_2) = (1, 2). Then the displayed expansion holds for any state x̃ ∈ S(K̃, u_1 ⊗ u_2) and any n ∈ ℕ. Since the second and third terms on the right-hand side vanish as n → ∞, condition (c) holds.
So far, we have defined asymptotic decoupling for dynamical maps and have given a necessary and sufficient condition for it. However, as stated in section 1, we need to address dynamical maps that are not dual u_1 ⊗ u_2-preserving in order to handle stochastic operations including SLOCC. In the case without dual u_1 ⊗ u_2-preservation, the definitions of asymptotic decoupling and mixing are modified as follows.
Lemma 5.10 is proved in appendix A.
Proof of theorem 5.9. Without loss of generality, due to lemma 5.10, we may show the equivalence under the assumption that the eigenvector y_{0,i} ∈ int(K_i) equals the unit effect u_i for each i = 1, 2. In the case y_{0,i} = u_i, theorem 5.9 follows from theorem 5.4. Therefore, the proof is completed.

Ergodicity
In this subsection, we investigate ergodicity, which is a property weaker than mixing. In quantum theory, ergodicity is defined for quantum channels. In GPTs, however, the definition of dynamical maps on the joint system depends on the cone K̃ of the joint system, and the cone K̃ is not unique. For this reason, it is convenient to define ergodicity (and mixing) for linear maps in our context. Therefore, we employ the following definition.

Definition 5.11 (Ergodicity). A linear map
The vectors x 0 and y 0 are called a stationary vector and a dual stationary vector, respectively.
If A is a dynamical map, definition 5.11 can be represented like definition 2.7. Indeed, if a (K, u)-dynamical map A is ergodic, then proposition 4.7 implies r(A) = 1, and we can choose a stationary vector x_0 and a dual stationary vector y_0 as x_0 ∈ S(K, u) and y_0 = u. The above relation is similar to the relation between definitions 5.3 and 5.8. Now, definition 5.11 cannot be checked directly by numerical computation; its numerical check is realized by the following proposition. (b) ⇒ (a). Assume condition (b). All we need is to show condition (b) in proposition 5.12. Let λ ∈ ℂ be an eigenvalue of A whose absolute value is r(A). Then corollary 4.5 implies that am(A; λ) = gm(A; λ). Therefore, condition (b) in proposition 5.12 holds.
If A is a dynamical map, condition (b) in proposition 5.13 reduces to a weaker condition. Proposition 5.14 is well known in quantum theory [16, appendix], [17, corollary 2]. Here, to prove proposition 5.14, we use [16, lemma 5] and proposition 5.13. Applying [16, lemma 5] to our setting, we obtain the following proposition immediately. Since ergodicity is almost never discussed for (not necessarily dual u-preserving) K-positive maps, propositions 5.12 and 5.13 do not seem to have been published before. (For K-positive maps, K-irreducibility is often discussed; see section 7.) Proposition 5.14 should also be known because its quantum version is well known, and the proof is the same as that of the quantum version. However, the three propositions above are not trivial consequences of one another; if we omitted their proofs, a large logical gap would occur, and thus we have proved the three propositions in this subsection.
Next, as an application of lemma 5.6 and proposition 5.14, we show the next theorem on asymptotic decoupling of the long-time average. To state it, for a dynamical map A we define the linear map A as follows. By the Jordan decomposition theorem, there exists a unique tuple of r linear maps T_1, . . . , T_r such that (8) holds. Since A is dynamical, without loss of generality, we may assume λ_1 = 1, and then define A = T_1. In particular, for any two dynamical maps A_1 and A_2, the linear map (A_1 ⊗ A_2) can be defined because A_1 ⊗ A_2 is (K̃_min, u_1 ⊗ u_2)-dynamical. Also, the average A_avg,n is defined by A_avg,n = (1/n) Σ_{k=0}^{n−1} A^k for a dynamical map A. (a) Any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies (19). Although the error term in (13) vanishes exponentially as n → ∞, the decreasing order of the error term in (19) is O(1/n). To prove theorem 5.16, the following lemmas are used.

Lemma 5.17. For any dynamical map A, the average A avg,n converges to A .
Proof. Recall the decomposition (8). Then any T_i with |λ_i| = 1 is similar to a diagonal matrix whose diagonal elements are λ_i, . . . , λ_i, 0, . . . , 0, due to lemma A.3. Since A^k = T_1^k + · · · + T_r^k for all k ∈ ℕ and (1/n) Σ_{k=0}^{n−1} T_i^k vanishes as n → ∞ for every i with λ_i ≠ 1, the average A_avg,n converges to T_1. Lemma 5.18. For any two dynamical maps A_1 and A_2, the following conditions are equivalent.

Proof. Recall the definition W(A; λ) = Ker(A − λid V ). Consider the decompositions of A 1 and A 2 like (8):
where λ_{k,1}, . . . , λ_{k,r_k} are the distinct eigenvalues of A_k. Since the displayed relation always holds, we find that condition (a) is equivalent to condition (b).
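Lemma 5.17 can be checked numerically in the simplest classical setting. The matrix below is a 2-state permutation channel whose powers oscillate forever, yet whose Cesàro average A_avg,n converges; the example and helper names are our own illustrative sketch, not taken from the paper:

```python
# Cesaro average of a stochastic matrix (illustrative sketch, lemma 5.17):
# the powers of this period-2 permutation channel never converge, but the
# average A_avg_n = (1/n) * sum_{k=0}^{n-1} A^k does.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def cesaro_average(A, n):
    d = len(A)
    P = [[float(i == j) for j in range(d)] for i in range(d)]  # A^0 = id
    S = [[0.0] * d for _ in range(d)]
    for _ in range(n):
        S = [[S[i][j] + P[i][j] for j in range(d)] for i in range(d)]
        P = mat_mul(P, A)
    return [[s / n for s in row] for row in S]

A = [[0.0, 1.0], [1.0, 0.0]]     # ergodic but not mixing (a 2-cycle)
avg = cesaro_average(A, 1000)
# avg approaches [[0.5, 0.5], [0.5, 0.5]] even though A^n oscillates.
```

The limit is the projection T_1 onto the eigenvalue-1 part, matching the convergence asserted in lemma 5.17.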

Mixing
In this subsection, we give a criterion of mixing that differs from the spectral criterion. Our criterion consists of two parts: (i) proposition 5.14 and (ii) the equivalence between mixing of a dynamical map A and ergodicity of the two-fold tensor product map A^⊗2. Since (i) is already known as stated in section 5.2, (ii) is the essential additional part. First, we begin with the spectral criterion. The spectral criterion is well known in quantum theory [16, theorem 7], but the general case above seems not to have been published. However, proposition 5.19 can be proved in the same way as the quantum case, namely, by considering the Jordan canonical form of A. Moreover, the statement does not change in the three cases, namely, for linear maps, for K-positive maps, and for dynamical maps. This is why we do not prove proposition 5.19.
Next, let us give other conditions equivalent to mixing.

Theorem 5.20. For any linear map A with r(A)
> 0, the following conditions are equivalent.

(a) A is mixing.
(b) A^⊗2 is mixing, and A has the eigenvalue r(A).
(c) A^⊗2 is ergodic, and A has the eigenvalue r(A).
(d) gm(A^⊗2; r(A)^2) = 1, and A has the eigenvalue r(A).
If A is K-positive, proposition 4.4 guarantees that A has the eigenvalue r(A). Thus, in this case, the conditions in theorem 5.20 reduce to simpler ones. (a) A is mixing.
If A is a dynamical map, proposition 4.7 guarantees r(A) = 1, and thus condition (d) can be represented as dim Ker(A^⊗2 − id_{V⊗V}) = 1. Hence, theorem 5.21 gives a criterion of mixing that is a system of linear equations. On the other hand, proposition 5.19 requires us to solve an eigenequation. An eigenequation cannot be solved in finitely many steps in general, but a system of linear equations can always be solved, which is an advantage of theorem 5.21. Of course, some criteria of mixing that do not require solving an eigenequation are known for special classes of quantum channels. For instance, (i) Baumgartner and Narnhofer [34, proposition 28] and (ii) Burgarth et al [17, theorem 13] proved that (i) a quantum channel Γ with a strictly positive semi-definite fixed point is mixing if and only if gm(Γ; 1) = 1 and gm(Γ; e^{2πij/k}) = 0 for all integers 1 ⩽ j < k ⩽ dim H; (ii) a unital quantum channel Γ is mixing if Γ* ∘ Γ is ergodic.
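The linear-equation criterion of theorem 5.21 is easy to test numerically for classical channels. The sketch below uses our own helper functions (not from the paper) and exact rational arithmetic via `fractions` to compute dim Ker(A ⊗ A − id) for a mixing channel and for a non-mixing permutation:

```python
from fractions import Fraction

# Theorem 5.21 in the classical case: a stochastic matrix A with r(A) = 1
# is mixing iff dim Ker(A (x) A - id) = 1.  Exact rational arithmetic keeps
# the rank computation free of rounding issues (illustrative sketch).

def kron(A, B):
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def rank(M):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def kernel_dim_A2_minus_id(A):
    d2 = len(A) ** 2
    M = kron(A, A)
    for i in range(d2):
        M[i][i] -= 1
    return d2 - rank(M)

half = Fraction(1, 2)
A_mix = [[half, half], [half, half]]      # mixing channel
A_per = [[Fraction(0), Fraction(1)],      # period-2 permutation:
         [Fraction(1), Fraction(0)]]      # ergodic but not mixing

assert kernel_dim_A2_minus_id(A_mix) == 1
assert kernel_dim_A2_minus_id(A_per) == 2
```

The permutation fails the criterion because its eigenvalue −1 produces a second fixed vector of A ⊗ A, exactly the phenomenon the spectral criterion detects via peripheral eigenvalues.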
To prove theorem 5.20, let us show the following preliminary lemma.
From now on, the symbol span(X) denotes the linear span of a subset X ⊂ V, and the symbols z̄ and Im z denote the complex conjugate and the imaginary part of z ∈ ℂ, respectively. Also, recall the definition W(A; λ) = Ker(A − λ id_V) for a linear map A and its eigenvalue λ. Next, we prove am(A; 1) = 1 by contradiction. Due to condition (d), the linear map A has an eigenvector x_0 ∈ V associated with r(A) = 1. Suppose that am(A; 1) > gm(A; 1) = 1. Then there exists a nonzero x ∈ V such that Ax = x + x_0. Thus, x ⊗ x_0 − x_0 ⊗ x is fixed by A^⊗2; since x_0^⊗2 ≠ 0 is an eigenvector of A^⊗2 associated with r(A)^2 = 1, condition (d) implies that αx_0^⊗2 = x ⊗ x_0 − x_0 ⊗ x for some α ∈ ℝ. Applying a linear functional f with ⟨f, x_0⟩ = 1 to the first tensor factor, we obtain αx_0 = ⟨f, x⟩x_0 − x, and the vector x lies in span(x_0). Hence, x is an eigenvector of A associated with r(A) = 1, but the equation x = Ax = x + x_0 is a contradiction. Therefore, am(A; 1) = gm(A; 1) = 1. Since A satisfies condition (b) in proposition 5.19, the linear map A is mixing.
Finally, summarizing theorems 5.4 and 5.21, we have the following theorem immediately.
Theorem 5.23. Let A 1 and A 2 be dynamical maps. If A 2 is not mixing, then the following conditions are equivalent.

Asymptotic properties for continuous Markovian dynamics
On the basis of the results in section 5, we prove the continuous versions of theorems 5.4, 5.5, 5.16, and 5.21. For simplicity, we assume dual u-preservation in this section, but one can verify the continuous versions without it, i.e., the continuous versions of theorems 5.9, 5.20, and 5.21. Theorems 2.13–2.16 are derived from theorems 6.2–6.5, respectively.
For an initial state x ∈ S(K, u), the state at time t > 0 is denoted by A^(t) x in the same way as in section 2.2. Moreover, the family {A^(t)}_{t>0} of linear maps is a C_0 semigroup. It can be easily checked that any C_0 semigroup {A^(t)}_{t>0} is right-continuous everywhere. Next, to discuss decoupling speed, let us define the remaining and vanishing parts for a C_0 semigroup {A^(t)}_{t>0} of linear maps. For simplicity, assume r(A^(t)) = 1 here. As defined in section 4, for each t > 0 the remaining and vanishing parts A^(t)_rem and A^(t)_vani are defined, and the following good properties hold. These properties can be verified by using the exponential expression A^(t) = e^{tL} [35, theorem I.3.7]. Also, if we decompose A^(1) as in (8) and take an arbitrary λ_vani ∈ (max_{|λ_i|<1} |λ_i|, 1), the relation ‖A^(t)_vani‖ = o(λ_vani^t) holds. Therefore, A^(t)_vani vanishes exponentially as t → ∞.
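For a classical master equation these properties can be illustrated directly. The sketch below uses a hypothetical generator L and helper names of our own choosing; it approximates the semigroup e^{tL} by an Euler product and exhibits the exponential approach to the stationary state:

```python
# Euler-product approximation of the C_0 semigroup A^(t) = e^{tL} for a
# hypothetical 2-state classical master equation (rates of our choosing).
# The column sums of L are zero, so total probability is preserved.

def semigroup_apply(L, t, x, n=10_000):
    """Approximate e^{tL} x by n Euler steps x <- x + (t/n) L x."""
    h = t / n
    d = len(L)
    for _ in range(n):
        x = [x[i] + h * sum(L[i][j] * x[j] for j in range(d)) for i in range(d)]
    return x

L = [[-1.0, 1.0],
     [1.0, -1.0]]
x_t = semigroup_apply(L, 10.0, [1.0, 0.0])
# x_t is numerically very close to the stationary state (0.5, 0.5);
# the distance to it decays like e^{-2t}, i.e. exponentially in t.
```

For this generator the vanishing part corresponds to the eigenvalue −2 of L, so ‖A^(t)_vani‖ decays like e^{−2t}, consistent with the o(λ_vani^t) bound above.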
In order to define asymptotic decoupling for C_0 semigroups of (not necessarily K̃-positive) linear maps, we introduce the continuous counterpart of the set D_n(Ã). The continuous version of asymptotic decoupling is defined by replacing D_n(Ã) and Ã^n with their continuous counterparts, and the continuous version of mixing is defined by replacing A^n and n → ∞ with A^(t) and t → ∞, respectively. However, the continuous version of ergodicity should be defined by using an integral. For later convenience, let us define the continuous version of ergodicity for C_0 semigroups of (not necessarily K-positive) linear maps.
The vector x 0 is called the stationary vector.
If all A^(t) are (K, u)-dynamical, the stationary vector x_0 is a state in S(K, u). Also, A^(t) x_0 = x_0 for all t > 0. Now, the continuous versions of theorems 5.4, 5.5, and 5.16 are given below. With (17), the following conditions are equivalent.
(b) ⇒ (c). Assume condition (b). Lemma 5.6 implies that there exist a number i_1 ∈ {1, 2} and a state x_{0,i_1} ∈ S(K_{i_1}, u_{i_1}) such that any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies the displayed equation, where i_2 is the element of {1, 2} other than i_1. Without loss of generality, we may assume (i_1, i_2) = (1, 2). Then the displayed expansion holds for any state x̃ ∈ S(K̃, u_1 ⊗ u_2) and any t > 1. Since the second and third terms on the right-hand side vanish as t → ∞, condition (c) holds.
(b) ⇒ (a). Assume condition (b). Theorem 5.16 implies that at least one of A^( )_1 and A^( )_2 is ergodic. Without loss of generality, we may assume that A^( )_1 is ergodic: there exists a state x_{0,1} ∈ S(K_1, u_1) such that any state x ∈ S(K_1, u_1) satisfies the corresponding convergence. Thus, (20) turns into (Ã^( ))_avg,n x̃ = π_1(Ã^( ))_avg,n x̃ ⊗ π_2(Ã^( ))_avg,n x̃ + o(1). Put B̃ = (1/ )∫_0 Ã^(s) ds and B_2 = (1/ )∫_0 A^(s)_2 ds. Then B̃ and B_2 satisfy (i) Ã_avg,n = B̃(Ã^( ))_avg,n for all n ∈ ℕ. Thus, any state x̃ ∈ S(K̃, u_1 ⊗ u_2) satisfies the stated relation. Also, putting n = n(t) := t/ , we obtain the stated limit. We show theorem 6.5 by using theorem 5.21. To prove theorem 6.5, we need the following preliminary lemma. That is, A^(n ) converges to A_0, where A_0 is the (K, u)-dynamical map defined as A_0 x = ⟨u, x⟩x_0. Also, let n = n(t) := t/ and δ = δ(t) := t − n(t) for all t. Then any state x ∈ S(K, u) satisfies the stated convergence. Moreover, we have the displayed bound, where the norm of the first factor on the right-hand side denotes the operator norm induced by the norm on V: for a linear map B on V, ‖B‖ := sup_{‖x‖⩽1} ‖Bx‖. The first factor on the right-hand side vanishes as t → ∞. The second factor is bounded because of the continuity of A^(t) and the inequality 0 ⩽ δ < 1. Thus, the right-hand side vanishes as t → ∞, whence (24) turns into A^(t) x = x_0 + o(1).

Application to Perron-Frobenius theory
In this section, we apply theorem 5.20 to Perron-Frobenius theory, which involves the Perron-Frobenius theorem. The Perron-Frobenius theorem is a famous theorem in linear algebra and has many applications in applied mathematics. For example, assuming irreducibility, which is a property studied in Perron-Frobenius theory, the references [26, 27] gave large deviation, moderate deviation, and central limit theorems in analyzing hidden Markovian processes. Moreover, assuming primitivity, which is also a property studied in Perron-Frobenius theory, they gave a calculation formula for the asymptotic variance in the central limit theorem. These facts motivate us to study irreducibility and primitivity. Another motivation is that irreducibility in classical probability theory can be easily checked. To see this fact, we state irreducibility of a classical channel (stochastic matrix) W: for all integers 1 ⩽ i, j ⩽ d, there exists n ∈ ℕ such that ⟨j|W^n|i⟩ > 0, where {|i⟩}_{i=1}^d is the standard basis of ℝ^d. This condition is called irreducibility [26]. Classical irreducibility can be easily checked by investigating the supports of outputs for a finite number of inputs. Hence, one often uses irreducibility rather than ergodicity, although irreducibility is close to ergodicity. This closeness is clarified by definition 7.1 and proposition 7.4.
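The support-based check described above can be sketched in a few lines; the matrices and the helper `is_irreducible` are our own illustrative choices, not from the paper:

```python
# Classical irreducibility of a stochastic matrix W via its support:
# W is irreducible iff every state j is reachable from every state i in the
# directed graph with an edge i -> j whenever <j|W|i> = W[j][i] > 0.

def is_irreducible(W):
    d = len(W)
    for i in range(d):
        reached, frontier = {i}, [i]
        while frontier:                      # graph search from vertex i
            a = frontier.pop()
            for j in range(d):
                if W[j][a] > 0 and j not in reached:
                    reached.add(j)
                    frontier.append(j)
        if len(reached) < d:
            return False
    return True

W_cycle = [[0.0, 1.0], [1.0, 0.0]]           # irreducible (a 2-cycle)
W_trap = [[1.0, 0.5], [0.0, 0.5]]            # state 0 is absorbing
assert is_irreducible(W_cycle) and not is_irreducible(W_trap)
```

Only the supports (zero patterns) of the columns matter, which is why the check needs just finitely many inputs.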
However, if we generalize the classical definition to the case with a general proper cone K, it is difficult to check irreducibility in general. To explain this difficulty, let us introduce a few terms. We say that x ∈ K is extreme if for all x_1, x_2 ∈ K the relation x = x_1 + x_2 implies x_1, x_2 ∈ span(x) [43, section 2]. Also, an extreme ray of a cone K is defined as the subset {αx | α ⩾ 0} ⊂ K with a nonzero extreme vector x ∈ K. For a K-positive map A, it is natural to define irreducibility as follows: for all nonzero extreme x ∈ K and y ∈ K*, there exists n ∈ ℕ such that ⟨y, A^n x⟩ > 0. If the number of extreme rays of K is finite, irreducibility can be easily checked. The cone K = [0, ∞)^d in classical probability theory certainly has exactly d extreme rays. However, the number of extreme rays of K is not finite in general, and hence it is difficult to check irreducibility in general.
In order to apply theorem 5.20 to Perron-Frobenius theory, we introduce K-irreducibility and K-primitivity as stronger conditions than ergodicity and mixing, respectively. The definition of K-irreducibility is equivalent to that in the previous paragraph (see proposition 7.4). From now on, let K andK be proper cones of V and V ⊗2 satisfyingK min ⊂K ⊂K max . Then K-irreducibility and K-primitivity are defined as follows.

Definition 7.1 (K-irreducibility). A linear map A is K-irreducible if the following conditions hold.
• A is ergodic. • The interiors of K and K * contain a stationary vector x 0 and a dual stationary vector y 0 , respectively.

Definition 7.2 (K-primitivity). A linear map
A is K-primitive if the following conditions hold.
• A is mixing.
• The interiors of K and K * contain a stationary vector x 0 and a dual stationary vector y 0 , respectively.
In the above definitions, K-irreducibility and K-primitivity are defined for linear maps, but it is usual to define K-irreducibility and K-primitivity for K-positive maps. If A is K-positive, definitions 7.1 and 7.2 coincide with the usual ones. Since K-irreducibility and K-primitivity are clearly related to ergodicity and mixing, theorem 5.20 and lemma C.8 yield the following corollary immediately.
[Table 3. Relations among ergodicity, mixing, K-irreducibility, and K-primitivity. The symbols A, x_0, and x are a (K, u)-dynamical map, the stationary state, and an initial state, respectively. The limit n → ∞ is taken in the upper table. The table classifies the four properties according to the definitions and according to proposition 5.14 and theorem 5.21.]

Corollary 7.3. Let A be a linear map having the eigenvalue r(A). Then A is K-primitive if and only if A^⊗2 is K̃-irreducible.
The upper table in table 3 summarizes the relations among ergodicity, mixing, irreducibility, and primitivity according to their definitions. The lower table in table 3 summarizes the relations among them according to proposition 5.14 and theorem 5.21. Now, we use corollary 7.3 to obtain conditions equivalent to K-primitivity. In preceding studies, only a few conditions equivalent to K-primitivity are known, whereas many conditions equivalent to K-irreducibility are known, for instance the following. (f) A and A* have eigenvectors x_0 ∈ int(K) and y_0 ∈ int(K*) associated with r(A) > 0, respectively, and gm(A; r(A)) = 1. (g) For any nonzero extreme x ∈ K and y ∈ K*, there exists n ∈ ℕ such that ⟨y, A^n x⟩ > 0.

Here d denotes the dimension of V.
Although most equivalences of conditions in proposition 7.4 can be found in [43, 45, 46], for completeness, we prove proposition 7.4 in appendix B. Combining corollary 7.3 and proposition 7.4, we obtain the following conditions equivalent to K-primitivity. Theorem 7.5. For any K-positive map A such that A^⊗2 is K̃-positive, the following conditions are equivalent. Here d denotes the dimension of V. Remark 7.6. If A is a classical channel (stochastic matrix), corollary 7.3 can also be derived from an existing result in graph theory. In order to explain it, we rewrite irreducibility and primitivity of stochastic matrices in terms of graph theory. A directed graph is called strongly connected if every two vertices are connected by a path in each direction [47]. A stochastic matrix W is irreducible if and only if the directed graph corresponding to W is strongly connected [48]. The directed graph corresponding to W has the d vertices 0, . . . , d − 1, and it has a directed edge (j, i) if ⟨j|W|i⟩ > 0. Figure 4 gives an instance of a stochastic matrix, the directed graph corresponding to it, and the adjacency matrix of the graph. Since the path 0 → 1 → 0 → 2 → 3 → 0 passes through all the vertices and returns to the first vertex, the graph in figure 4 is strongly connected. Furthermore, the period of a strongly connected graph is defined as the greatest common divisor of the lengths of all cycles contained in it. The period of an irreducible stochastic matrix is defined as the period of the directed graph corresponding to it [49]. The reference [48] calls it the index of imprimitivity. The period of an irreducible stochastic matrix W equals 1 if and only if W is primitive (for the only if part, see [48, 49]; the if part can be proved from the definition).
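The period of an irreducible stochastic matrix can be computed from BFS levels: it is the gcd of level(u) + 1 − level(v) over all edges u → v, a standard graph-theoretic recipe. The sketch below (our own helper names and example matrices, valid only for irreducible W) uses this to test primitivity:

```python
from math import gcd

# Period of an irreducible stochastic matrix W (illustrative sketch):
# run BFS from vertex 0, then take the gcd of level(u) + 1 - level(v)
# over all edges u -> v.  W is primitive iff the period is 1.

def period(W):
    d = len(W)
    edges = [(i, j) for i in range(d) for j in range(d) if W[j][i] > 0]
    level, queue = {0: 0}, [0]
    while queue:                             # BFS levels from vertex 0
        u = queue.pop(0)
        for (a, b) in edges:
            if a == u and b not in level:
                level[b] = level[u] + 1
                queue.append(b)
    g = 0
    for (a, b) in edges:                     # requires strong connectivity
        g = gcd(g, abs(level[a] + 1 - level[b]))
    return g

W_cycle = [[0.0, 1.0], [1.0, 0.0]]           # irreducible, period 2
W_lazy = [[0.5, 1.0], [0.5, 0.0]]            # irreducible with a self-loop
assert period(W_cycle) == 2 and period(W_lazy) == 1
```

The self-loop in `W_lazy` creates a cycle of length 1, forcing the gcd down to 1; this matches the statement that an irreducible W is primitive exactly when its period is 1.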
In graph theory, the tensor product of two directed graphs G_1 and G_2 is defined as the directed graph corresponding to the adjacency matrix A_1 ⊗ A_2, where A_1 and A_2 are the adjacency matrices of G_1 and G_2, respectively. McAndrew [47] called it simply the product and proved the following statement [47, theorem 1]: if two directed graphs G_1 and G_2 are strongly connected, then their tensor product has exactly d_12 strongly connected components, where d_12 is the greatest common divisor of the periods of G_1 and G_2. If both G_1 and G_2 are the directed graph corresponding to an irreducible stochastic matrix W, the above statement implies corollary 7.3 because d_12 equals the period of the directed graph corresponding to W in this case.
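McAndrew's theorem can be verified directly on the smallest nontrivial case. For the 2-cycle, which has period 2, the tensor product with itself should split into exactly 2 strongly connected components; the sketch below (our own helper names) confirms this:

```python
# McAndrew's theorem in miniature: the tensor product of a strongly
# connected digraph with itself has exactly p strongly connected
# components, where p is the period.  For the 2-cycle (period 2) the
# product on 4 vertices splits into {00, 11} and {01, 10}.

def reach(adj, s):
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def scc_count(adj):
    verts, comps, assigned = list(adj), 0, set()
    for v in verts:
        if v not in assigned:
            # SCC of v = vertices reachable from v that also reach v
            comp = reach(adj, v) & {u for u in verts if v in reach(adj, u)}
            assigned |= comp
            comps += 1
    return comps

cycle = {0: [1], 1: [0]}                     # strongly connected, period 2
prod = {(a, b): [(x, y) for x in cycle[a] for y in cycle[b]]
        for a in cycle for b in cycle}
assert scc_count(prod) == 2
```

The two components are exactly the "diagonal" and "off-diagonal" vertex classes, mirroring how gm(A^⊗2; 1) = 2 obstructs mixing for a period-2 channel.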
Finally, as a simple application of corollary 7.3, we show that K-irreducibility and Kprimitivity are preserved under a suitable deformation of a K-positive map. The deformation is used in considering a cumulant generating function [27].

Corollary 7.7.
Let Ω be a non-empty finite set, A_ω be a K-positive map, and a_ω > 0 for each ω ∈ Ω. If A := Σ_{ω∈Ω} A_ω is K-irreducible, then so is A_a := Σ_{ω∈Ω} a_ω A_ω. If A := Σ_{ω∈Ω} A_ω is K-primitive, then so is A_a := Σ_{ω∈Ω} a_ω A_ω.
Proof. First, assume that the K-positive map A is K-irreducible, and let us use condition (d) in proposition 7.4. Let x ∈ K\{0}, α ⩾ 0, αx − A_a x ∈ K, and a_min := min_{ω∈Ω} a_ω > 0. Then A_a x − a_min Ax ∈ K and thus αx − a_min Ax = (αx − A_a x) + (A_a x − a_min Ax) ∈ K. Since A is K-irreducible, we obtain x ∈ int(K). Therefore, A_a is also K-irreducible.

Conclusion
We have addressed asymptotic properties for discrete/continuous Markovian dynamics, and proved that (i) asymptotic decoupling is equivalent to local mixing, (ii) any correlation is broken exponentially if the dynamics is asymptotically decoupling, and (iii) asymptotic decoupling of the long-time average is equivalent to local ergodicity plus a certain condition. Since the equivalence (i) shows the importance of investigating criteria of mixing, we have given a criterion of mixing that is a system of linear equations. Although we have considered only time-homogeneous Markovian dynamics, it would also be interesting to investigate asymptotic properties for time-inhomogeneous Markovian dynamics. For instance, one can investigate a divisible family instead of a C_0 semigroup, where a continuous family {Γ_t}_{t>0} of quantum channels, satisfying Γ_t → id_{T(H)} as t ↓ 0, is CP divisible [36, 37] if for any t > s > 0 there exists a CP map Γ_{t,s} such that Γ_t = Γ_{t,s} ∘ Γ_s. If Γ_{t,s} = Γ_{t−s} can be chosen for all t > s > 0, the family {Γ_t}_{t>0} is a C_0 semigroup. We do not know whether, for a CP divisible family of quantum channels, asymptotic decoupling is equivalent to local mixing, which is a future study. It is also a future study to clarify the relation to asymptotic properties for non-Markovian dynamics beyond the Markovian property [50, 51]. Although we have not given an application of asymptotic decoupling, finding an application to information processing is also an important future study.
As a simple application, we have stated conditions equivalent to primitivity by using conditions equivalent to irreducibility. Irreducibility and primitivity guarantee the existence of Perron-Frobenius eigenvalues, which characterize the asymptotic performance of observed values in analyzing a hidden Markovian process. They also play an important role in investigating entanglement loss in Markovian quantum dynamics thanks to their stability. Although the structure of linear maps leaving a cone invariant is studied in Perron-Frobenius theory, our results would be useful in Perron-Frobenius theory because of the generality of our setting.

Appendix A. Proofs of technical lemmas
In this appendix, we prove lemma 5.10, proposition 5.12, and some lemmas used in the previous sections. Also, we show that any two dynamical maps A_1 and A_2 satisfy D_1(A_1 ⊗ A_2) = K̃\{0}, which implies that Ã = A_1 ⊗ A_2 satisfies (10).
Proof. If we show that A_1 ⊗ A_2 is K̃_max-positive, the assertion follows immediately. Instead of the K̃_max-positivity of A_1 ⊗ A_2, we may show that (A_1 ⊗ A_2)* is K̃*_max-positive, thanks to proposition 4.6. It can be easily checked that the following three facts hold. Lemma A.2. Any dual u_1 ⊗ u_2-preserving map Ã with (10) satisfies r(Ã) = 1.

Proposition A.4 (Dirichlet's approximation theorem). For any l ∈
Lemma A.5. For any dual u 1 ⊗ u 2 -preserving map Ã with (10), there exists an infinite set N ⊂ N such that {Ã n } n∈N converges to Ã rem .
Proof of proposition 5.12. Without loss of generality, we may assume that A is in Jordan canonical form and satisfies r(A) = 1. Then the convergence of (1/n) Σ_{k=0}^{n−1} A^k can be reduced to that of (1/n) Σ_{k=0}^{n−1} J_s(λ)^k for each Jordan block J_s(λ) of A, where J_s(λ) denotes the s × s Jordan block with eigenvalue λ. Third, if (s, λ) = (1, 1), any n ∈ ℕ satisfies (1/n) Σ_{k=0}^{n−1} J_s(λ)^k = [1]. From the above three cases and am(A; 1) = 1, it follows that A is ergodic.
(a) ⇒ (b). Assume condition (a). First, we show by contradiction that there is no Jordan block J_s(λ) of A satisfying s ⩾ 2 and |λ| = 1. Suppose that there is a Jordan block J_2(λ) of A satisfying |λ| = 1. If λ = 1, the displayed equation holds; using the displayed formula, a contradiction with condition (a) follows. Therefore, there is no Jordan block J_2(λ) of A satisfying |λ| = 1. It can be proved in the same way that there is no Jordan block J_s(λ) of A satisfying s ⩾ 2 and |λ| = 1. This fact can be rewritten as am(A; λ) = gm(A; λ) for any λ ∈ ℂ whose absolute value is one. Next, we show by contradiction that A has the eigenvalue 1. Suppose that A did not have the eigenvalue 1. Since any Jordan block J_s(λ) with |λ| = 1 must be J_1(λ) due to the result in the previous paragraph, any Jordan block J_s(λ) of A satisfies (1/n) Σ_{k=0}^{n−1} J_s(λ)^k → 0. Therefore, (1/n) Σ_{k=0}^{n−1} A^k → 0, which contradicts condition (a). This contradiction implies that A has the eigenvalue 1.
Since we have already shown that am(A; 1) = gm(A; 1) ⩾ 1, the remaining part of condition (b) is gm(A; 1) = 1, but it follows from condition (a) immediately.
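The divergence used in the proof above can be made concrete: for the Jordan block J_2(1) one has J_2(1)^k = [[1, k], [0, 1]], so the top-right entry of the Cesàro average is (n − 1)/2, which grows without bound. A minimal numerical check (helper name ours):

```python
# Why am(A; 1) > gm(A; 1) breaks ergodicity: for J_2(1) = [[1, 1], [0, 1]]
# the power J_2(1)^k has top-right entry k, so the Cesaro average
# (1/n) * sum_{k=0}^{n-1} J_2(1)^k has top-right entry (n - 1)/2 -> infinity.

def cesaro_topright(n):
    total, topright_of_power = 0, 0          # top-right entry of J_2(1)^k is k
    for k in range(n):
        total += topright_of_power
        topright_of_power += 1
    return total / n

assert cesaro_topright(10) == 4.5            # (10 - 1) / 2
assert cesaro_topright(1000) == 499.5        # grows without bound
```

This is exactly the obstruction ruled out when proposition 5.12 requires am(A; 1) = gm(A; 1) = 1.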

Appendix B. Proof of proposition 7.4
In this appendix, let us prove proposition 7.4. For x, x′ ∈ V, we define x ⩽_K x′ and x <_K x′ as x′ − x ∈ K and x′ − x ∈ int(K), respectively. Proof. Without loss of generality, we may assume r(A) = 1. Thanks to proposition 5.12, we can take an eigenvector x_1 of A associated with r(A) = 1. The ergodicity of A then yields the displayed relation. Since x_1 ≠ 0, the stationary vector x_0 is an eigenvector of A associated with r(A) = 1. Also, y_0 is an eigenvector of A* associated with r(A) = 1 thanks to lemma B.1. Lemma B.3. If a K-positive map A has an eigenvector x_0 ∈ int(K) associated with 0, then A = 0.
Proof of proposition 7.4. (f) ⇒ (a). Assume condition (f). Since r(A)^{−1}A is (K, y_0)-dynamical, proposition 5.14 implies that A is ergodic. From lemma B.2 and proposition 5.13, it follows that x_0 and y_0 are a stationary vector and a dual stationary vector, respectively. Therefore, A is K-irreducible.
Second, we show gm(A; r(A)) = 1. Let x be an eigenvector of A associated with r(A). From lemma C.5, we can take a real number α ≠ 0 such that the boundary of K contains x′ := x_0 − αx. Since Ax′ = r(A)x′, condition (e) implies x′ = 0, whence x ∈ span(x_0). Therefore, gm(A; r(A)) = 1.
Third, we prove y_0 ∈ int(K*) by contradiction. Suppose that y_0 ∈ K*\{0} belonged to the boundary of K*. Defining V_0 and K_0 as displayed, we find that dim V_0 ⩾ 1 and AK_0 ⊂ K_0. Since K_0 ⊂ V_0 is a proper cone, proposition 4.4 implies that the restriction of A to V_0 has an eigenvector in K_0. However, the boundary of K contains K_0, which contradicts condition (e). Therefore, y_0 ∈ int(K*). The results above and in the previous paragraphs are exactly condition (f).
(a) ⇒ (g). Assume condition (a). Let x and y be nonzero extreme vectors of K and K*, respectively. Thanks to condition (a), the map A has a stationary vector x_0 ∈ int(K) and a dual stationary vector y_0 ∈ int(K*). The ergodicity of A then implies that a sufficiently large n ∈ ℕ satisfies Σ_{k=1}^n (r(A)^{−1}A)^k x ∈ int(K). Therefore, ⟨y, Σ_{k=1}^n (r(A)^{−1}A)^k x⟩ > 0, whence ⟨y, A^k x⟩ > 0 for some integer 1 ⩽ k ⩽ n.
(g) ⇒ (b). Assume condition (g). Let x be a nonzero extreme vector of K. Since for any nonzero extreme y ∈ K* there exists n ∈ ℕ such that ⟨y, A^n x⟩ > 0, any nonzero extreme y ∈ K* satisfies ⟨y, e^A x⟩ ⩾ (1/n!)⟨y, A^n x⟩ > 0. Since K* is generated by its extreme vectors (which is proved by using the Krein-Milman theorem; for details, see [43]), any y ∈ K*\{0} satisfies ⟨y, e^A x⟩ > 0, which implies e^A x ∈ int(K) due to lemma C.1. Since K is generated by its extreme vectors, any x ∈ K\{0} satisfies e^A x ∈ int(K).

Appendix C. Basic lemmas on convex cones
For readers' convenience, we prove basic lemmas on convex cones. These lemmas are used implicitly or explicitly in our main discussion. Recall that K is called a cone if K is a closed convex cone, and K is called a proper cone if K is a cone satisfying int(K) ≠ ∅ and K ∩ (−K) = {0}. Suppose that some y ∈ V satisfies ⟨y, x⟩ > 0 for any x ∈ K ∩ S; then y ≠ 0. Any y′ ∈ V with ‖y′‖ < min_{x∈K∩S} ⟨y, x⟩ satisfies min_{x∈K∩S} ⟨y + y′, x⟩ ⩾ min_{x∈K∩S} ⟨y, x⟩ − ‖y′‖ > 0.
This inequality implies that y + y′ ∈ K* whenever ‖y′‖ < min_{x∈K∩S} ⟨y, x⟩. Thus, y ∈ int(K*), which concludes that int(K*) contains the right-hand side of (C.1). Next, we prove the remaining part of (C.1) by contradiction. Take y ∈ int(K*), and suppose that ⟨y, x⟩ = 0 for some x ∈ K ∩ S. Since y ∈ int(K*), we can take a sufficiently small ε > 0 such that y − εx ∈ K*. However, the inequality 0 ⩽ ⟨y − εx, x⟩ = −ε < 0 is a contradiction. Thus, the relation (C.1) holds. Since K is a cone, we obtain the third relation.
Recall that for any cone K ≠ ∅ the relation K** = K holds. Using this relation and the third relation, we obtain the fourth relation. The first and second relations follow from the third and fourth ones, respectively. Lemma C.2. Let K be a convex cone having non-empty interior. Then V = K + (−K). Proof. From the definition, it follows that K + (−K) is a linear subspace of V. Since K has non-empty interior, so does K + (−K). Therefore, V = K + (−K). This inequality implies the assertion. We verify the above inequality as follows. First, the equality (b) holds because the function ⟨y, x⟩ of x ∈ K_0 achieves its minimum when x is an extreme point of K_0. Second, the equality (c) follows from the minimax theorem. Third, the equality (a) can be shown as follows. Define f(y) = min_{x∈K∩S} ⟨y, x⟩ for y ∈ V. Then the function f is concave. Since int(K) ≠ ∅, we can take y_0 ∈ int(K) ∩ cl(B). Thus, any y_1 ∈ ∂K ∩ cl(B) and 0 ⩽ t < 1 satisfy y_t := (1 − t)y_0 + ty_1 ∈ int(K) ∩ cl(B). From the concavity of f, we have f(y_t) ⩾ (1 − t)f(y_0) + tf(y_1), and the corresponding lim inf bound follows. In particular, K ∩ int(K*) ≠ ∅, which also holds in the case with K = {0}.
Lemma C.6. Let K be a cone and let u ∈ int(K * ). Then S(K, u) is a compact convex set.
Proof. Since S(K, u) is a closed convex set, all we need to do is show that S(K, u) is bounded. We prove it by contradiction. Suppose that some sequence {x_n}_{n=1}^∞ ⊂ S(K, u) satisfied ‖x_n‖ → ∞. The set K ∩ S is compact, and thus 0 < min_{x∈K∩S} ⟨u, x⟩ ⩽ ⟨u, x_n/‖x_n‖⟩ = 1/‖x_n‖ → 0, where S is the unit sphere of V. This contradiction concludes that S(K, u) is bounded.
Lemma C.7. If K is a proper cone, then K * is also a proper cone.
Lemma C.8. Let K_1 and K_2 be proper cones of V_1 and V_2 respectively, K̃ be a convex cone with K̃_min ⊂ K̃ ⊂ K̃_max, and let x_i ∈ K_i for i = 1, 2. Then x_1 ⊗ x_2 ∈ int(K̃) if and only if x_i ∈ int(K_i) for each i = 1, 2.
Conversely, letting x_i ∈ int(K_i) for i = 1, 2, we can take a basis {x_{i,j}}_{j=1}^{d_i} ⊂ K_i of V_i with Σ_{j=1}^{d_i} x_{i,j} = x_i for each i = 1, 2. Since the tuple {x_{1,j_1} ⊗ x_{2,j_2}}_{j_1,j_2} ⊂ K̃_min is a basis of V_1 ⊗ V_2, the set Õ is an open set of V_1 ⊗ V_2. Thus, x_1 ⊗ x_2 ∈ Õ ⊂ int(K̃). Lemma C.9. If K_1 and K_2 are proper cones of V_1 and V_2 respectively, then K̃_min is also a proper cone of V_1 ⊗ V_2.
Proof. From the definition, it follows that K̃_min is a convex cone. Also, lemma C.8 implies int(K̃_min) ≠ ∅ because there exist u_1 ∈ int(K_1) and u_2 ∈ int(K_2).