Article

Reduction of Markov Chains Using a Value-of-Information-Based Approach

by Isaac J. Sledge 1,* and José C. Príncipe 2,3,4
1 Advanced Signal Processing and Automated Target Recognition Branch, US Naval Surface Warfare Center—Panama City Division, Panama City, FL 32407, USA
2 Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
3 Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
4 Computational NeuroEngineering Laboratory (CNEL), University of Florida, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
Entropy 2019, 21(4), 349; https://doi.org/10.3390/e21040349
Submission received: 18 February 2019 / Revised: 24 March 2019 / Accepted: 25 March 2019 / Published: 30 March 2019
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)

Abstract:
In this paper, we propose an approach to obtain reduced-order models of Markov chains. Our approach is composed of two information-theoretic processes. The first is a means of comparing pairs of stationary chains on different state spaces, which is done via the negative, modified Kullback–Leibler divergence defined on a model joint space. Model reduction is achieved by solving a value-of-information criterion with respect to this divergence. Optimizing the criterion leads to a probabilistic partitioning of the states in the high-order Markov chain. A single free parameter that emerges through the optimization process dictates both the partition uncertainty and the number of state groups. We provide a data-driven means of choosing the ‘optimal’ value of this free parameter, which sidesteps the need to know, a priori, the number of state groups in an arbitrary chain.

1. Introduction

Markov models have seen widespread adoption in a variety of disciplines. Part of their appeal is that the application and simulation of such models are rather efficient, provided that the corresponding state space has a small to moderate size. Dealing with large state spaces is often troublesome, in comparison, as it may not be possible to adequately simulate the underlying models. Such large-scale spaces are frequently encountered in reinforcement learning, for instance [1,2,3,4,5].
A means of rendering the simulation of large-scale models tractable is crucial for many applications. One way of doing this is to reduce the overall size of the Markov-chain state space by aggregation [6]. Aggregation entails either explicitly or implicitly defining and utilizing a function to partition nodes in the probability transition graph associated with the large-scale chain. Groups of nodes, which are related by their inter-state transition probabilities and have strong interactions, are combined and treated as a single aggregated node in a new graph. This results in a lower-order chain with a reduced state space. A stochastic matrix for the lower-order chain is then specified, which describes the transitions from one super-state to another. This stochastic matrix should roughly mimic the dynamics of the original chain despite the potential loss in information incurred from the state combination.
There are a variety of methods for aggregating Markov chains, as we discuss shortly. In this paper, we develop and analyze a novel approach for aggregating Markov chains, which is composed of two information-theoretic processes [7]. The first process entails quantifying the dissimilarity of nodes in the original and reduced-order probability transition graphs, despite the difference in state space sizes. The second process involves iteratively partitioning similar nodes without explicit knowledge of the number of groups.
For the first process, we adopt the reasonable view that nodes in a pair of chains are dissimilar if their associated rows of the stochastic matrix are sufficiently distinct. We employ an information-theoretic measure, the negative, modified Kullback–Leibler divergence [8], $g(\Pi, \Phi) = \mathbb{E}[\gamma \log(\Pi/\Phi)\,|\,\Pi]$, to gauge distinctiveness and hence identify candidate nodes in the original chain for aggregation. Here, Π and Φ are stochastic matrices associated with two Markov chains, while γ is the stationary distribution associated with the first chain. This divergence assesses the overlap between probability distributions. It coincides with the Donsker–Varadhan rate function appearing in the large-deviations theory of Markov chains [9,10] and measures the ‘distance’ between two Markov chains defined on the same discrete state space.
In the aggregation process that we consider, a reduced-order stochastic matrix Φ is constructed on a discrete state space of a different size than that of the stochastic matrix Π associated with the original chain. To facilitate assessing the negative, modified Kullback–Leibler divergence between the original and reduced-order models, we construct a so-called joint model that incorporates details from both models. This model encodes the salient properties of the lower-order transition matrix. Its corresponding stochastic matrix Θ is of the proper dimensionality to compare against rows of the original transition matrix; Θ will be specified by $\Phi = \Theta\Psi$, where Ψ is a partition matrix. A byproduct of using this joint model is that we can sidestep considering all possible liftings of the reduced-order models to the original space by averaging their dynamics according to a given distribution [11,12]. Our approach therefore avoids having to solve an additional optimization problem, which is a boon when aggregating chains with large state spaces.
The problem of finding an aggregated Markov chain that captures much of the dynamics in the original chain can be posed as a cost function that uses the above divergence. For the second process, we consider the use of an information-theoretic criterion known as the value of information [13,14,15] to efficiently segment the probability transition graph. It provides a partition matrix Ψ as the optimal solution of minimizing $\mathbb{E}[\mathbb{E}[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma] - \mathbb{E}[\mathbb{E}[\log(\gamma/\Psi)\,|\,\Psi]\,|\,\alpha]/\beta$ with respect to Ψ, where $\alpha = \mathbb{E}[\Psi\,|\,\gamma]$ is a marginal probability and β is a hyperparameter. This criterion is a constrained, modified free-energy difference that describes the maximum benefit associated with a given quantity of information in order to minimize average losses [16,17]. It is an optimal, non-linear conversion between information, $\mathbb{E}[\mathbb{E}[\log(\gamma/\Psi)\,|\,\Psi]\,|\,\alpha]$, in the Shannon sense [18], and either costs or utilities, $\mathbb{E}[\mathbb{E}[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma]$, in the sense of von Neumann and Morgenstern [19].
In the context of aggregating Markov chains, the value of information describes the change in the distortion between the high-order and low-order transition models that occurs from potentially modifying the number of state groups and elements of those groups. The number of groups is implicitly determined by the bounded information that a given row of the original chain’s transition matrix shares with a corresponding row of the reduced-order chain’s transition matrix. Low information bounds lead to small numbers of groups with many states per group. A potentially good qualitative partitioning is often observed in such cases, as the reduced-order chain is parsimonious. Higher information bounds can lead to large numbers of groups with fewer states per group. The partitioning of the original chain can be over-complete, as related states may be unnecessarily split to yield a lower free energy.
Directly optimizing the value of information via a gradient-based approach yields a pair of updates that are iterated in an alternating manner. The first update, $\alpha \leftarrow \mathbb{E}[\Psi\,|\,\gamma]$, revises the marginal probability. The second update makes use of the marginal probability to adjust the partition matrix using a modified Gibbs distribution, $\Psi \leftarrow \alpha e^{-\beta g(\Pi,\Theta)} / \mathbb{E}[e^{-\beta g(\Pi,\Theta)}\,|\,\alpha]$, where the division is element-wise. The stochastic matrix associated with the joint model, $\Theta = (\gamma\Psi / \mathbb{E}[\Psi\,|\,\gamma])\Pi$, is also changed as a part of the second update equation.
The second update relies on the hyperparameter β, which captures the effect of the information bound. Increasing β from some base value yields a hierarchy of partitions. Each element of this hierarchy corresponds to a partition with an increasing information bound and hence a potentially increasing number of state groups. Finer-scale group structure in the transition matrix is captured as β rises. After some value, however, there are diminishing returns on the quality of the aggregation results. Determining the “optimal” value, in a completely data-driven fashion, is hence crucial. To find such values for arbitrary Markov chains, we apply perturbation theory. In particular, we calculate the underestimation error of the information constraint in the value of information that occurs when considering finite-state chains. We then augment the value-of-information criterion by subtracting out this underestimation. Finally, we determine a lower bound for β that minimizes the underestimation error. The corresponding aggregation process empirically avoids fitting more to the noise than the structure in the stochastic matrix of the high-order model.
As a part of our treatment of the value of information, we furnish convergence and convergence-rate proofs to demonstrate the optimality of the criterion for the aggregation problem.
The remainder of this paper is organized as follows. We begin, in Section 2, with a survey of aggregation techniques for Markov chains. Our approach is given in Section 3. In Section 3.1, we introduce our notation and some fundamental concepts for binary-partition-based aggregation. In Section 3.1.1, we introduce the concept of a joint model so that the differently sized transition matrices of the original and reduced-order chains can be compared. We outline, in Section 3.1.2, how this joint model facilitates the formulation of a minimum-dissimilarity aggregation optimization problem. Properties of this problem are analyzed for general divergence measures. At the end of Section 3.1.2, we discuss practical issues associated with this initial optimization problem, which motivates the use of the value of information. We show, in Section 3.2.1 and Section 3.2.2, how this information-theoretic criterion can be applied to probabilistically partition transition matrices. We also cover how the criterion can be efficiently solved, how to construct the reduced-order transition matrices after partitioning, and how to bound the free parameter that emerges from optimizing this information-theoretic criterion. Lastly, in Section 3.2.3, we furnish a bound on the expected criterion performance. In Section 4, we assess the empirical capabilities of the value of information for Markov chain aggregation. We begin by covering our experimental protocols in Section 4.1. In Section 4.2, we present our simulation results for a series of synthetic datasets. We first assess the performance of our value-of-information-based reduction for multiplier values that are either manually selected or chosen in a perturbation-theoretic manner. We also comment on the convergence properties. The appropriateness of the Shannon information constraint over an entropy constraint is additionally investigated in these sections. Discussions of these results are given at the end of this section. We summarize these findings in the broader context of our theoretical results in Section 5. Additionally, we outline directions for future research. Appendix A contains all of our proofs.

2. Literature Review

A variety of Markov model aggregation techniques have been proposed over the years. Some of the earliest work exploited the strong–weak interaction structure of nearly completely decomposable Markov chains to obtain reduced-order approximations [20,21]. Both uncontrolled [22,23] and controlled Markov chains [24,25,26] have been extensively studied in the literature.
The aggregation of nearly completely decomposable Markov chains has been investigated by Courtois [27] and other researchers [28,29,30]. Courtois developed an aggregation procedure that yields an approximation of the steady-state probability distribution with respect to a parameter that represents the weak interaction between state groups. This process was later augmented to provide more accurate approximations [31]. It was also combined with various iterative schemes, like the Gauss–Seidel method, to improve the speed of convergence [32,33,34,35]. Years later, Phillips and Kokotovic presented a singular perturbation interpretation of Courtois’ aggregation approach [36]. They developed a similarity transformation that converts the system into a singularly perturbed form, whose slow model coincides with the aggregated matrix found by Courtois’ approach. The use of singular perturbation has also been considered by other researchers [37,38,39].
There are additional approaches that have been developed. A few are worth noting here, as they resemble our contributions in various ways [11,12,40,41,42,43]. For example, Deng and Huang [41] used the Kullback–Leibler divergence as a cost function to obtain a low-rank approximation of the original transition matrix via nuclear-norm regularization. This preserved the cardinality of the state space. Here, we employ the negative, modified Kullback–Leibler divergence as a means of measuring the change in the original and modified chains. We, however, consider a modified chain that is of a reduced order, not the same order. This change should provide more tangible benefits for the simulation of large-scale systems.
Another scheme that is related to ours is that of Vidyasagar [43]. Vidyasagar investigated an information-theoretic metric, the variation of information, between distributions on sets with different cardinalities. Actually computing the metric that he proposed turns out to be computationally intractable for large-scale systems, though. He therefore considered an efficient greedy approximation for finding an upper bound of the distance and studied its use for optimal order reduction. He demonstrated that the optimal reduced-order distribution of a set of a particular cardinality is obtained by projecting the original distribution. That is, the reduced-order distribution should have maximal entropy. This condition is equivalent to requiring that the partition function induces the minimum information loss. In our work, the metric that we consider is tractable for different-cardinality sets. The partitioning process is not, however, which motivates the use of the value of information for efficiently finding approximate partitions. An advantage of using the value of information is that it directly minimizes the information loss, as it relies on a Shannon information constraint that quantifies the mutual dependence of the high-order and low-order chain states.
In [11,12,40], Deng et al. and Geiger et al. developed two-step, information-theoretic approaches for Markov chain aggregation. In the first step, the optimal model reduction problem is solved on the reduced space defined by a fixed partition function. In the second step, Deng et al. [11,40] select an optimal partition function according to a non-convex relaxation of the bi-partition problem, while Geiger et al. [12] find an approximate partition using the information-bottleneck method. In both works, the distortion between the original and reduced-order models was assessed via the Kullback–Leibler divergence. The authors defined an optimization-based lifting procedure so that both chains would have the same cardinality. The lifting employed by Geiger et al. incorporates one-step transition probabilities of the original chain, which minimizes information loss. They obtained a tight bound for lumpable chains. The lifting employed by Deng et al. was based only on the stationary distribution of the original chain, which maximizes the redundancy of the aggregated Markov chain. Here, we consider the formation of a joint model based on a similar approach to Deng et al.: we form a probabilistically weighted average of the entries from the original stochastic matrix. However, our formulation of this joint model occurs in a more natural manner; see Definitions 4 and 7 for the details.
In [11,40], Deng et al. note that their optimization problem for partitioning the state space of Markov chains is both non-linear and non-convex. Instead of attempting to solve this problem for a general number of state groups, they focused on addressing the simpler bi-partition problem. In contrast, our formulation of the state-partition process using the value of information is convex. For the case of discrete state spaces, we derive expectation-maximization-like updates for efficiently uncovering partitions with arbitrary numbers of groups; see Proposition 1. These updates converge at a linear rate; we refer to Propositions 2 and 3 for details about the iterative error decrease and appeal to the Picard-iteration theory of Zangwill to establish convergence.
There are other topical differences between these approaches. For example, Geiger et al. [12], through the use of the information bottleneck, attempt to compress the original-model states into reduced-model states, in a lossy way, while keeping as much information about the original transition probabilities as possible. Optimizing the value of information achieves a similar effect, albeit in a different manner. It limits the information lost during quantization by both bounding the divergence between the original and reduced-order models and simultaneously maximizing the mutual dependence between the states in both models. Despite this similar effect, the value of information has practical advantages. We prove that the dynamical system underlying the partitioning process undergoes phase changes, for certain values of the criterion’s single free-parameter, where a new state-group emerges in the reduced-order model. Between critical values of the free parameter, no phase changes occur, which implies that only a finite number of distinct values must be considered; refer to Proposition 3 for the details. For the information bottleneck, investigators would have to sweep over many parameter values, often far more than we consider, and repeatedly solve the aggregation problem. Using an information-bottleneck scheme can hence be computationally prohibitive for large-scale Markov chains.
We further enhance the practicality of the value of information by deriving an expression for the “optimal” free-parameter value; see Proposition 4 for the details. This value performs a second-order minimization of the estimation error associated with the Shannon-information term in the value of information. Empirically, using this value causes the partitioning process to fit more to the structure of well-defined state groups in the original model than outlier states. It also tends to yield parsimonious partitions that quantize the state space neither too much nor too little.
Our motivation for considering the value of information arose from our use of this criterion in reinforcement learning. We have previously applied this criterion, in [13,14,15], for resolving the exploration-exploitation dilemma in Markov-decision-process-based reinforcement learning. In our experiments on a variety of complicated application domains, we found that the value of information would consistently outperform existing search heuristics. We originally attributed this improved learning rate solely to a systematic partitioning of the state space. That is, groups of states would be partitioned, according to their action-value function magnitude, and assigned the same action. The problem of determining an action that works well for an entire group of related states is easier than doing the same for each state individually. However, it is our hypothesis that there is an aggregation of the Markov chains underlying the Markov decision processes. The aggregation theory developed in this paper represents a necessary first step to showing that the criterion can perform reinforcement learning on a simpler Markov decision process whose dynamics roughly mirror those of the original problem.
We are not the first to consider the aggregation of Markov chains that appear in Markov-decision-process-based reinforcement learning, though [1,2,3,4,5]. Aldhaheri and Khalil [2] focused on the optimal control of nearly completely decomposable Markov chains. They adapted Howard’s policy-iteration algorithm to work on an aggregated model. They showed that they could provide optimal control that minimizes the average cost over an infinite horizon. Sun et al. [4] employed time aggregation to reduce the state space for complicated Markov decision processes. They divided the original process into segments, by certain states, to form an embedded Markov decision process. Value iteration is then executed on this lower-order model. In [5], Jia provided a polynomial-time means of aggregating states of a Markov decision process when the optimal value function is known. For approximate value functions, he showed how to apply ordinal optimization to uncover a good state reduction with a high probability of being the correct aggregation. A commonality of these works is that they are model-based: they assume that the transition probabilities are explicitly known. Our previous work [14,15], however, focused on model-free learning, where these probabilities are not available a priori. Model reduction should therefore occur implicitly during the exploration process if the concepts we develop as part of our aggregation theory extend to Markov decision processes.

3. Methodology

Our approach for aggregating Markov chains can be described as follows. Given a stochastic matrix of transition probabilities between states, we seek to partition this matrix to produce a reduced-size matrix, which we refer to as an aggregated stochastic matrix. The aggregated stochastic matrix has an equivalent graph-based interpretation, as it characterizes the edge weights of an undirected graph. The vertices in this graph correspond to states of a reduced-order chain. There is a one-to-many mapping of a vertex from the aggregated stochastic matrix to the vertices of the original transition-matrix graph for the high-order chain. Edges of the aggregated stochastic matrix codify the transition probability between pairs of states in the low-order model.
There are many possible aggregated stochastic matrices that can be formed for a given Markov chain. We would like to find a matrix that yields the least total distortion for some measure, particularly the negative, modified Kullback–Leibler divergence, since it is viewed as a measure of change between Markov chains with many beneficial properties [8]. Due to the different sizes of the original transition matrix and the aggregated stochastic matrix, though, directly applying this divergence is not possible. While we could re-define the Kullback–Leibler divergence for probability vectors with different cardinalities, we have opted to instead transform the aggregated stochastic matrices so that they are of the same size as the original transition matrix. We specify how to construct a so-called joint model that encodes all of the dynamics of the reduced-order chain. We provide a straightforward objective function for constructing a binary partition of the original transition matrix to uncover the optimal aggregated stochastic matrix.
The objective function that we specify leads to another issue: finding the optimal aggregated stochastic matrix is not trivial due to the binary-valuedness of the one-to-many mappings. It can quickly become computationally intractable as the size of the state space rises. To make our aggregation approach more computationally efficient, we relax the binary assumption by considering an alternate objective function, which is based on the value of information. Optimization of the value of information yields a probabilistic partitioning process for finding the aggregated stochastic matrix in a subspace. A single parameter associated with this function dictates both the uniformity of the probabilistic partitions and the number of state groups that emerge. In the limit of large parameter values, the solution of the value of information approaches a global solution of the original objective function. A hierarchy of possible partitions, each with a different number of groups, is produced for other parameter values; these are approximate solutions of the binary-partition-based objective function.
Much of our notation for the theory that follows is summarized in Nomenclature at the end of the paper.

3.1. Aggregating Markov Chains

3.1.1. Preliminaries

For our approach, we consider a first-order, homogeneous Markov chain defined on a finite state space. Our analyses of such chains focus on graph-based transition abstractions.
Definition 1.
The transition model of a first-order, homogeneous Markov chain is a weighted, directed graph $R_\pi$ given by the three-tuple $(V_\pi, E_\pi, \Pi)$ with the following elements:
(i) 
A set of n vertices $V_\pi = \{v_\pi^1, \ldots, v_\pi^n\}$ representing the states of the Markov chain.
(ii) 
A set of $n \times n$ edge connections $E_\pi \subseteq V_\pi \times V_\pi$ between reachable states in the Markov chain.
(iii) 
A stochastic transition matrix $\Pi \in \mathbb{R}_+^{n \times n}$. Here, $[\Pi]_{i,j} = \pi_{i,j}$ represents the non-negative transition probability between states i and j. We impose the constraint that the probability of experiencing a state transition is independent of time.
The subscripts on the vertices and edges represent the dependence on the matrix Π.
Throughout, we assume that all Markov chains are irreducible and aperiodic. As a consequence, there is a unique invariant probability distribution γ associated with each chain such that $\gamma\Pi = \gamma$. We will sometimes write this distribution as γ(Π) to indicate the matrix with which the distribution is associated.
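For concreteness, γ can be computed as the left eigenvector of Π associated with the unit eigenvalue. The following is a minimal NumPy sketch of this step (the helper name is ours, not the paper's):

```python
import numpy as np

def stationary_distribution(P):
    """Invariant distribution gamma of an irreducible, aperiodic chain with
    row-stochastic transition matrix P, i.e., the gamma with gamma @ P = gamma."""
    vals, vecs = np.linalg.eig(P.T)                   # right eigenvectors of P.T
    gamma = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return gamma / gamma.sum()                        # normalize to a distribution
```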
We are interested in comparing pairs of Markov chains. A means to do this is by considering given rows of the stochastic transition matrix. We represent the ith row of Π by $\pi_{i,1:n} = [\pi_{i,1}, \ldots, \pi_{i,n}]$; it is a probability vector describing the chance of transitioning from state $v_\pi^i$ to any possible next state. We assume that $\pi_{i,j} = 0$ if and only if there is no directed edge from state $v_\pi^i$ to state $v_\pi^j$ and hence no chance of transitioning between the corresponding states. Note that it does not make sense to quantify the distortion between columns of the stochastic matrices, since they are not guaranteed to be state-transition probabilities.
If a pair of transition models for different Markov chains, $R_\pi$ and $R_\varphi$, have the same number of states, then they can be compared according to a measure $g: \mathbb{R}_+^n \times \mathbb{R}_+^n \to \mathbb{R}_+$ acting on $\pi_{i,1:n}$ and $\varphi_{i,1:n}$ for all i. Here, we take this measure to be the negative, modified Kullback–Leibler divergence; the theory that follows is applicable to many general divergence measures, though.
Definition 2.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n states. The negative, modified relative entropy, or negative, modified Kullback–Leibler divergence, between a given set of states for these two chains is a function given by $g(\pi_{i,1:n}, \varphi_{i,1:n}) = \sum_{j=1}^n \gamma_i \pi_{i,j} \log(\pi_{i,j}/\varphi_{i,j})$, where γ is the invariant probability distribution associated with $R_\pi$. The divergence rate is finite provided that Π is absolutely continuous with respect to Φ.
The modified Kullback–Leibler divergence that we consider includes a term for the stationary distribution of the Markov chain, which is not present in the standard definition of the Kullback–Leibler divergence. This is the relative entropy between two time-invariant Markov sources, as defined by Rached et al. [8], which admits a closed-form expression. Note that much like in the standard definition of Kullback–Leibler divergence, division by zero can occur for this modified divergence. This event is largely avoided for our problem, though, since the density in the subspace should be broader than that in the original space.
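As an illustration, the row-wise divergence of Definition 2 can be evaluated directly; the sketch below (our own, with the sum restricted to the support of the first row to respect the absolute-continuity requirement) mirrors the formula above:

```python
import numpy as np

def modified_kl(pi_row, phi_row, gamma_i):
    """g(pi_i, phi_i) from Definition 2: gamma_i-weighted relative entropy
    between matching rows of Pi and Phi. Assumes phi_row > 0 wherever
    pi_row > 0, so that the divergence is finite."""
    s = pi_row > 0                                    # restrict to the support
    return gamma_i * np.sum(pi_row[s] * np.log(pi_row[s] / phi_row[s]))
```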
Since we are considering the problem of chain aggregation, the state spaces will be different. One chain R π will have n states while another R φ will have m states, with m < n . The dimensionalities of given rows in the corresponding transition matrices will hence not be equivalent, which precludes a direct comparison using conventional measures.
To facilitate the application of measures to R π and R φ when they have different discrete state spaces, we consider construction of a joint model R ϑ . This joint model defines a joint state space composed of V π and V φ . It consequently possesses a weighting matrix Θ with the same number of columns as Π , which is outlined in Definition 4 and illustrated in Figure 1.
The joint model relies on the specification of a binary partition function $\psi: \mathbb{Z}_+ \to \mathbb{Z}_+$, which is given in Definition 3. This function provides a one-to-many mapping between states in $V_\pi$ and $V_\varphi$ and hence can be seen as a means of delineating which states of the original chain should be combined.
Definition 3.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively. A binary partition function ψ is a surjective mapping between two state index sets, $\mathbb{Z}_{1:n}$ and $\mathbb{Z}_{1:m}$, such that $\psi^{-1}(\mathbb{Z}_{1:m})$ is a partition of $\mathbb{Z}_{1:n}$. That is, $\psi^{-1}(j) \subseteq \mathbb{Z}_{1:n}$ is not empty, $\psi^{-1}(1) \cup \cdots \cup \psi^{-1}(m) = \mathbb{Z}_{1:n}$, and $\psi^{-1}(j) \cap \psi^{-1}(k) = \emptyset$ for $j \ne k$. It can be seen that a partition of a state index set induces a binary partition matrix $[\Psi]_{i,j} = \psi_{i,j}$, where $\psi_{i,j} = 1$ if $i \in \psi^{-1}(j)$ and $\psi_{i,j} = 0$ if $i \notin \psi^{-1}(j)$. Thus, $[\Psi]_{1:n,k} = \sum_{i\in\psi^{-1}(k)} e_i$, where $e_i$ is the ith unit vector. The set of all binary partition matrices is $\{\Psi \in \mathbb{R}_+^{n\times m} \,|\, [\Psi]_{i,j} = \psi_{i,j} \in \{0,1\},\ \sum_{j=1}^m \psi_{i,j} = 1\}$.
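A binary partition matrix in the sense of Definition 3 is easy to materialize from group labels; a small illustrative helper (ours, not the paper's):

```python
import numpy as np

def binary_partition_matrix(labels, m):
    """Binary partition matrix Psi from Definition 3; labels[i] is the
    (zero-indexed) group psi(i) assigned to state i, out of m groups."""
    Psi = np.zeros((len(labels), m))
    Psi[np.arange(len(labels)), labels] = 1.0         # one unit entry per row
    return Psi

# e.g., five states split into three groups:
# binary_partition_matrix([0, 0, 1, 2, 1], 3)
```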
Definition 4.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively, where m < n . R ϑ = ( V ϑ , E ϑ , Θ ) is a joint model, with m + n states, that is defined by
(i) 
A vertex set $V_\vartheta = V_\pi \cup V_\varphi$, which is the union of all state vertices in $R_\pi$ and $R_\varphi$. For simplicity, we assume that the vertex set for the intermediate transition model is indexed such that the first m nodes are from $R_\varphi$ and the remaining n nodes are from $R_\pi$.
(ii) 
An edge set $E_\vartheta \subseteq V_\varphi \times V_\pi$, which comprises one-to-many mappings from the states in the original transition model $R_\pi$ to the reduced-order transition model $R_\varphi$.
(iii) 
A weighting matrix $\Theta \in \mathbb{R}_+^{m\times n}$, $\Theta = [\vartheta_{1,1:n}, \vartheta_{2,1:n}, \ldots, \vartheta_{m,1:n}]$. The partition function ψ provides a relationship between the stochastic matrices Φ and Θ of $R_\varphi$ and $R_\vartheta$, respectively. This is given by $\varphi_{j,k} = \sum_{i\in\psi^{-1}(k)} \vartheta_{j,i}$ for all j, k, or, rather, $\Theta\Psi = \Phi$, where $[\Psi]_{1:n,k} = \sum_{i\in\psi^{-1}(k)} e_i$.
An illustration of the joint model relative to the other models is provided in Figure 2.
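To make the joint model concrete: for a fixed partition matrix Ψ, one natural choice of the weighting matrix Θ is the γ-weighted row averaging that later appears in Proposition 4(i). The sketch below reflects our reading of that construction and assumes every group is non-empty:

```python
import numpy as np

def joint_model(P, gamma, Psi):
    """Weighting matrix Theta of a joint model for a given partition Psi,
    via the gamma-weighted row averaging of Proposition 4(i). The low-order
    stochastic matrix then follows from Definition 4 as Phi = Theta @ Psi."""
    U = (gamma[:, None] * Psi) / (gamma @ Psi)[None, :]   # assumes non-empty groups
    Theta = U.T @ P                                       # m x n weighting matrix
    return Theta, Theta @ Psi                             # (Theta, Phi)
```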

3.1.2. Partitioning Process and State Aggregation

For any given transition model R π , we would like to find, by way of the joint model R ϑ , another transition model R φ with fewer states that resembles the dynamics encoded by R π . We therefore seek a R ϑ with a weighting matrix Θ that has the least total distortion with respect to the transition matrix Π of R π for some partition. In what follows, we specify how to find R φ .
Before we can define the notion of least total distortion, we must first specify the concept of the total distortion of Θ with respect to Π .
Definition 5.
Let R π = ( V π , E π , Π ) , R φ = ( V φ , E φ , Φ ) , and R ϑ = ( V ϑ , E ϑ , Θ ) be transition models of two Markov chains over n and m states and the joint model over n + m states, respectively. The total distortion between Π and Θ and hence Π and Φ is
$$q(R_\pi, R_\varphi) = \min_{\Theta \in \mathbb{R}_+^{m\times n}} \bigg\{ \sum_{i=1}^n p(i)\, g(\pi_{i,1:n}, \vartheta_{\psi(i),1:n}) \;\bigg|\; R_\vartheta \in R_{\pi\varphi} \bigg\}$$
for some unit-sum weights p(i). We can take these weights to be the invariant distribution of the original Markov chain, i.e., $p(i) = \gamma_i$. For this objective function, we have the constraint that $R_\vartheta$ must be a member of the set $R_{\pi\varphi}$ of all joint models for $R_\pi$ and $R_\varphi$.
It can be seen from Definition 5 that the total distortion is over the set of all possible binary partitions. We, however, seek the best binary partition. Best, in this context, means that it would yield an R φ with the least total distortion to R π . It hence would lead to a lower-order model R φ that most resembles R π according to the chosen measure g.
Definition 6.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively. The accumulation matrix $\Phi = [\varphi_{1,1:m}, \varphi_{2,1:m}, \ldots, \varphi_{m,1:m}]$ for $R_\varphi$ that achieves the least total distortion to Π of $R_\pi$, according to $g: \mathbb{R}_+^n \times \mathbb{R}_+^n \to \mathbb{R}_+$, is given by
$$\mathop{\arg\min}_{\Psi \in \mathbb{R}_+^{n\times m},\, \Phi \in \mathbb{R}_+^{m\times m}} \bigg\{ q(R_\pi, R_\varphi) \;\bigg|\; [\Psi]_{1:n,k} = \sum_{i\in\psi^{-1}(k)} e_i \bigg\}.$$
Here, $q: \mathbb{R}_+^{n\times n} \times \mathbb{R}_+^{m\times m} \to \mathbb{R}_+$ is the total distortion.
At least one minimizer exists for both the total distortion and the least total distortion. This is because both are continuous functions operating on closed and bounded sets and hence, according to the Weierstrass extreme value theorem, attain both a maximum and a minimum on those sets.
From Definition 5, we can now specify the optimization problem for aggregating a Markov chain. This problem can be solved in a two-step process. The first step entails finding the optimal partition that leads to the least total distortion between the original chain Π and Φ, as described by Θ. The second step involves constructing the corresponding low-order transition matrix Φ from Θ and Ψ.
Definition 7.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively. The optimal reduced-order transition model R φ with respect to the original model R π can be found as follows
(i) 
Optimal partitioning: Find a binary partition matrix Ψ that leads to the least total distortion between the models R π and R φ . As well, find the corresponding weighting matrix Θ that satisfies
$$\mathop{\arg\min}_{\Psi \in \mathbb{R}_+^{n\times m},\, \Theta \in \mathbb{R}_+^{m\times n}} \bigg\{ \sum_{i=1}^n p(i)\, g(\pi_{i,1:n}, \vartheta_{\psi(i),1:n}) \;\bigg|\; R_\vartheta \in R_{\pi\varphi},\; [\Psi]_{1:n,k} = \sum_{i\in\psi^{-1}(k)} e_i \bigg\}.$$
Solving this problem partitions the n vertices of the relational matrix R π into m groups.
(ii) 
Transition matrix construction: Obtain the transition matrix for $R_\varphi$ from the following expression: $\varphi_{j,k} = \sum_{i\in\psi^{-1}(k)} \vartheta_{j,i}$, using the optimal weights Θ and the binary partition matrix Ψ from step (i).
It is important to note, for the first step in Definition 7, that there is no efficient way to find an $R_\varphi$ with least total distortion to $R_\pi$. This is due to the binary nature of the partitions, which renders the problem NP-hard. For practical problems, which may contain thousands or even millions of states, this aggregation procedure will not be tractable. A more efficient alternative is therefore required.

3.2. Approximately Aggregating Markov Chains

3.2.1. Preliminaries

A straightforward way to make the aggregation problem more efficient is by approximating the least total distortion optimization given in Definition 6. This can be effectuated by relaxing the constraint that the state–state assignments specified by the partition matrix are binary. Instead, each state from the high-order chain can have a chance to map to states in the low-order chain. Such changes lead to the notion of a probabilistic partition matrix.
Definition 8.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively. A probabilistic partition function ψ is a surjective mapping between two state index sets, $\mathbb{Z}_{1:n}$ and $\mathbb{Z}_{1:m}$, such that $\psi^{-1}(\mathbb{Z}_{1:m})$ is a partition of $\mathbb{Z}_{1:n}$ that has a given probabilistic chance of occurring. That is, $\psi^{-1}(j) \subseteq \mathbb{Z}_{1:n} \times \mathbb{R}_+^n$ is not empty and $\psi^{-1}(1) \cup \cdots \cup \psi^{-1}(m) = \mathbb{Z}_{1:n}^m \times \mathbb{R}_+^{m\times n}$, with the real-valued responses being non-negative and summing to one. The probabilistic partition of a state index set induces a probabilistic partition matrix $[\Psi]_{i,j} = \psi_{i,j}$, where $\psi_{i,j} = \zeta$ if $i \in \psi^{-1}(j)$ occurs with probability ζ. The set of all probabilistic partition matrices for the two chains specified above is given by $\{\Psi \in \mathbb{R}_+^{n\times m} \,|\, [\Psi]_{i,j} = \psi_{i,j} \in [0,1],\ \sum_{j=1}^m \psi_{i,j} = 1\}$.
An example of a probabilistic partitioning is given in Figure 3. Here and in Figure 4, $U \in \mathbb{R}_+^{n\times m}$ is a matrix where $[U]_{i,j} = u_{i,j}$, $u_{i,j} = \gamma_i\psi_{i,j}\big/\sum_{k=1}^n \gamma_k\psi_{k,j}$; the proof of Proposition 4 explains how this matrix arises when finding a probabilistic partition.
As before, we will partition and compare the dynamics for pairs of chains according to rows of the corresponding stochastic transition matrices. We will, therefore, still encounter issues when trying to compare transition models R π and R φ with differing state spaces. We again consider the construction of a joint model R ϑ to avoid this issue. The only difference between this joint model and the one defined for binary-valued partitions is that the weighting matrix has a different form. The connectivity of the joint-space graph can hence be different.
Definition 9.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively, where m < n . R ϑ = ( V ϑ , E ϑ , Θ ) is a joint model, with m + n states, that is defined by
(i) 
A vertex set $V_\vartheta = V_\pi \cup V_\varphi$, which is the union of all state vertices in $R_\pi$ and $R_\varphi$.
(ii) 
An edge set $E_\vartheta \subseteq V_\varphi \times V_\pi$, which comprises one-to-many mappings from the states in the original transition model $R_\pi$ to the reduced-order transition model $R_\varphi$.
(iii) 
A weighting matrix $\Theta \in \mathbb{R}_+^{m\times n}$. The partition function ψ provides a relationship between the stochastic matrices Φ and Θ of $R_\varphi$ and $R_\vartheta$, respectively. This is given by $\varphi_{i,j} = \sum_{k=1}^n \vartheta_{i,k}\psi_{k,j}$ for all i, j, or, rather, $\Phi = \Theta\Psi$, where $\Psi \in \mathbb{R}_+^{n\times m}$ is the probabilistic partition matrix.
An illustration of this joint model is given in Figure 4 for the stochastic matrix presented in Figure 3. Unlike the joint model for binary-valued partitions, using probabilistic partitions allows for each state in the high-order chain to map to multiple states in the low-order chain.

3.2.2. Partitioning Process and State Aggregation

For any transition model R π , we, again, would like to find a joint model R ϑ that facilitates the construction of another transition model R φ . R φ should have fewer states than R π while still possessing similar intra-group transition dynamics. Since we are now considering probabilistic partitions, we instead seek a Θ with the least expected distortion to Π to ensure that the dynamics of Φ largely match those of Π . Definition 5 is hence modified as follows.
Definition 10.
Let R π = ( V π , E π , Π ) , R φ = ( V φ , E φ , Φ ) , and R ϑ = ( V ϑ , E ϑ , Θ ) be transition models of two Markov chains over n and m states and the joint model over n + m states, respectively. The least expected distortion between Π and Θ and hence Π and Φ is
$$q(R_\pi, R_\varphi) = \min_{\Psi \in \mathbb{R}_+^{n\times m},\, \Theta \in \mathbb{R}_+^{m\times n}} \bigg\{ \sum_{i=1}^n \sum_{j=1}^m \gamma_i \psi_{i,j}\, g(\pi_{i,1:n}, \vartheta_{j,1:n}) \;\bigg|\; 0 \le \vartheta_{i,k}, \psi_{i,k} \le 1,\; \sum_{k=1}^n \vartheta_{i,k} = 1,\; \sum_{k=1}^m \psi_{i,k} = 1 \bigg\}.$$
There are few constraints on the probabilistic partitions in Definition 10, which can make finding viable solutions difficult. To address this issue, we impose that the partitions should minimize the information loss associated with the state quantization process. That is, the mutual dependence between states in the high-order and low-order chains should be maximized with respect to a supplied upper bound. Simultaneously, the least expected distortion, for this supplied bound, should be achieved.
Aggregating Markov chains in this fashion can be done via a two-step process similar to Definition 7.
Definition 11.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively. The optimal reduced-order transition model R φ with respect to the original model R π can be found as follows:
(i) 
Optimal partitioning: Find a probabilistic partition matrix Ψ that leads to the least expected distortion between the models R π and R φ . As well, find the corresponding weighting matrix Θ that satisfies
$$\mathop{\arg\min}_{\Psi \in \mathbb{R}_+^{n\times m},\, \Theta \in \mathbb{R}_+^{m\times n}} \bigg\{ \sum_{i=1}^n \sum_{j=1}^m \gamma_i \psi_{i,j}\, g(\pi_{i,1:n}, \vartheta_{j,1:n}) \;\bigg|\; \sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j} \log(\psi_{i,j}/\gamma_i) \le r,\; 0 \le \vartheta_{i,k}, \psi_{i,k} \le 1,\; \sum_{k=1}^n \vartheta_{i,k} = 1,\; \sum_{k=1}^m \psi_{i,k} = 1 \bigg\}$$
for some positive value of r; r has an upper bound of $-\sum_{i=1}^n \gamma_i \log(\gamma_i)$. The variables α, γ, and ψ all have probabilistic interpretations: $\alpha_j = p(v_\varphi^j)$ and $\gamma_i = p(v_\pi^i)$ correspond to marginal probabilities of states $v_\varphi^j$ and $v_\pi^i$, while $\psi_{i,j} = p(v_\varphi^j \,|\, v_\pi^i)$ is the conditional probability of state $v_\pi^i$ mapping to state $v_\varphi^j$.
(ii) 
Transition matrix construction: Obtain the transition matrix for $R_\varphi$ from the following expression: $\varphi_{i,j} = \sum_{k=1}^n \vartheta_{i,k}\psi_{k,j}$, using the optimal weights Θ and the probabilistic partition matrix Ψ from step (i).
The optimization problem presented in Definition 11 trades off between the minimum expected distortion and the information contained by the states in the low-order chain $R_\varphi$ about those in the original, high-order chain $R_\pi$ after partitioning. It hence describes the value of quantizing the high-order model by a certain amount [16,17]; this is the value of information formulated for Markov chains, which is, itself, an analogue of rate-distortion theory [18]. Coarsely quantizing Π, as dictated by the parameter r, leads to a parsimonious low-order stochastic matrix Φ that may not greatly resemble the dynamics of Π. Finely quantizing Π, again determined by r, yields a Φ that is similar to the high-order model's transition matrix Π yet may contain many redundant details.
In the value of information, the role of the Shannon information term $\sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j}\log(\psi_{i,j}/\gamma_i)$ is to impose a certain level of randomness, or uncertainty, in the partition matrix to ensure that the entries can be non-binary. A similar effect could be achieved by considering a Shannon entropy constraint on the partition matrix, $-\sum_{i=1}^n \sum_{j=1}^m \psi_{i,j}\log(\psi_{i,j})$. However, a Shannon entropy constraint is rather non-restrictive on the entries of the partition matrix: it is a projection of Shannon information that introduces ambiguities. There is hence the potential that a given column $\psi_{1:n,j} \in \mathbb{R}_+^n$ could be a duplicate of another, thereby over-inflating the number of states in the reduced-order chain and leading to a poor aggregation. We have found, empirically, that Shannon mutual information does not share this defect, except when all states have a uniform chance of being grouped together in every group. This is because we are bounding the informational overlap between the original and aggregated states. Coincident partitions often violate this bound. In the Shannon-entropy case, however, we are only bounding the uncertainty of the entries of the partition, so there is no direct constraint between the original and aggregated states.
Definitions 8, 9 and 11 provide a means of approximating the computationally intractable aggregation process outlined in Section 3.1. Actually solving the constrained optimization problem in Definition 11 can be efficiently performed in a few ways. Here, we opt to optimize the Lagrangian. This provides an expectation-maximization-like procedure for specifying the probabilistic partitions Ψ.
Proposition 1.
For a transition model $R_\pi = (V_\pi, E_\pi, \Pi)$ over n states and a joint model $R_\vartheta = (V_\vartheta, E_\vartheta, \Theta)$ with m + n states, the Lagrangian of the relevant terms for the minimization problem given in Definition 11 is $F(\Psi, \alpha; \Pi, \Theta, \gamma) = \mathbb{E}[\mathbb{E}[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma] - \mathbb{E}[D_{\mathrm{KL}}(\gamma\,\|\,\Psi)\,|\,\alpha]/\beta$, or, rather,
$$F(\Psi, \alpha; \Pi, \Theta, \gamma) = \sum_{i=1}^n \sum_{j=1}^m \gamma_i \psi_{i,j}\, g(\pi_{i,1:n}, \vartheta_{j,1:n}) + \frac{1}{\beta} \sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j} \log(\psi_{i,j}/\gamma_i).$$
Here, $\beta \ge 0$ is a Lagrange multiplier that emerges from the Shannon mutual information constraint in the value of information. Probabilistic partitions $[\Psi]_{i,j} = \psi_{i,j}$, which are local solutions of $\nabla F(\Psi, \alpha; \Pi, \Theta, \gamma) = 0$, can be found by the following expectation-maximization-based alternating updates
$$\alpha_j \leftarrow \sum_{i=1}^n \gamma_i \psi_{i,j}, \qquad \psi_{i,j} \leftarrow \frac{\alpha_j\, e^{-\beta g(\pi_{i,1:n}, \vartheta_{j,1:n})}}{\sum_{p=1}^m \alpha_p\, e^{-\beta g(\pi_{i,1:n}, \vartheta_{p,1:n})}},$$
which are iterated until convergence.
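A compact NumPy sketch of these alternating updates follows; it is our own reconstruction of Proposition 1 (with the $e^{-\beta g}$ form assumed from the reconstruction above and small numerical guards added), not the authors' reference implementation:

```python
import numpy as np

def voi_aggregate(P, gamma, m, beta, n_iter=200, seed=0):
    """Expectation-maximization-like updates from Proposition 1 (a sketch,
    not the authors' reference code). P: n x n row-stochastic matrix;
    gamma: its invariant distribution; m: number of low-order states;
    beta: the Lagrange multiplier from the information constraint."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    Psi = rng.random((n, m))
    Psi /= Psi.sum(axis=1, keepdims=True)             # rows of Psi sum to one
    plogp = np.sum(np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0), axis=1)
    for _ in range(n_iter):
        alpha = gamma @ Psi                           # alpha_j = sum_i gamma_i psi_ij
        U = (gamma[:, None] * Psi) / np.maximum(alpha, 1e-300)[None, :]
        Theta = U.T @ P                               # joint-model weighting matrix
        # g[i, j] = gamma_i * sum_k P[i, k] * log(P[i, k] / Theta[j, k])
        g = gamma[:, None] * (plogp[:, None] - P @ np.log(np.maximum(Theta, 1e-300)).T)
        W = alpha[None, :] * np.exp(-beta * (g - g.min(axis=1, keepdims=True)))
        Psi = W / W.sum(axis=1, keepdims=True)        # Gibbs-style partition update
    return Psi, Theta, Theta @ Psi                    # (Psi, Theta, Phi = Theta Psi)
```

Shifting g row-wise inside the exponential leaves the normalized partition unchanged; it only guards against underflow for large β.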
Proposition 2 shows that the alternating optimization updates in Proposition 1 yield monotonic decreases in the modified free energy associated with the value of information. Global convergence to solutions can therefore be obtained. Proposition 3 bounds the approximation error as a function of the number of alternating-optimization iterations. Linear-speed convergence to solutions is hence obtained, which coincides with the interpretation of the updates as an expectation-maximization-type algorithm.
Proposition 2.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively, where m < n. If $[\Psi^*]_{i,j} = \psi_{i,j}^*$ is an optimal probabilistic partition and $[\alpha^*]_j = \alpha_j^*$ an optimal marginal probability vector, then, for the updates in Proposition 1, we have that:
(i) 
The approximation error is non-negative:
$$F(\Psi^{(k)}, \alpha^{(k)}; \Pi, \Theta, \gamma) - F(\Psi^*, \alpha^*; \Pi, \Theta, \gamma) = \sum_{i=1}^n \gamma_i \log\Bigg( \frac{\sum_{j=1}^m \alpha_j^*\, e^{-\beta g(\pi_{i,1:n}, \vartheta_{j,1:n})}}{\sum_{j=1}^m \alpha_j^{(k)}\, e^{-\beta g(\pi_{i,1:n}, \vartheta_{j,1:n})}} \Bigg) \ge 0.$$
(ii) 
The modified free energy monotonically decreases, $F(\Psi^{(k)}, \alpha^{(k)}; \Pi, \Theta, \gamma) \ge F(\Psi^{(k+1)}, \alpha^{(k+1)}; \Pi, \Theta, \gamma)$, across all iterations k.
(iii) 
For any $K \ge 1$, we have the following bound for the sum of approximation errors:
$$\sum_{k=1}^K \Big( F(\Psi^{(k)}, \alpha^{(k)}; \Pi, \Theta, \gamma) - F(\Psi^*, \alpha^*; \Pi, \Theta, \gamma) \Big) \le \sum_{i=1}^n \sum_{j=1}^m \gamma_i\, \psi_{i,j}^* \log\big(\psi_{i,j}^*/\psi_{i,j}^{(1)}\big).$$
In both (i) and (ii), $F(\Psi, \alpha; \Pi, \Theta, \gamma) = \mathbb{E}[\mathbb{E}[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma] - \mathbb{E}[D_{\mathrm{KL}}(\gamma\,\|\,\Psi)\,|\,\alpha]/\beta$ is the Lagrangian.
Proposition 3.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively, where m < n. If $[\Psi^*]_{i,j} = \psi_{i,j}^*$ is an optimal probabilistic partition and $[\alpha^*]_j = \alpha_j^*$ an optimal marginal probability vector, then the approximation error
$$F(\Psi^{(k)}, \alpha^{(k)}; \Pi, \Theta, \gamma) - F(\Psi^*, \alpha^*; \Pi, \Theta, \gamma) \le \frac{1}{k} \sum_{i=1}^n \sum_{j=1}^m \gamma_i\, \psi_{i,j}^* \log\big(\psi_{i,j}^*/\psi_{i,j}^{(1)}\big)$$
falls off as a function of the inverse of the iteration count k. Here, the constant factor of the error bound is a Kullback–Leibler divergence between the initial partition matrix $\Psi^{(1)}$ and the global-best partition matrix $\Psi^*$.
As shown in Proposition 1, a Lagrange multiplier β is introduced to account for the mutual information constraint. The effects of β are as follows. As β tends to zero, minimizing the Lagrangian is approximately the same as minimizing the negative Shannon information. The information loss associated with the quantization process takes precedence, albeit at the expense of a potentially poor reconstruction. In this case, there are few state clusters defined by the partition; that is, there are few rows in Θ . Every state in the high-order transition model R π has an almost uniform chance to map to each state in the low-order model R φ . The alternating updates from Proposition 1 yield a global minimizer of the value of information, which follows from the convexity of the dual criterion and the Picard-iteration theory of Zangwill [44].
As β is increased, the probabilistic partitions become more binary. Higher probabilities are therefore assigned for a state in R π to map to either a small set of states or a single state in R φ . This is because the effects of the Shannon information term are increasingly ignored in favor of achieving the minimum expected distortion. The value-of-information problem given in Definition 11 therefore approaches the binary aggregation problem from Definition 7. An increasing number of clusters are formed by the partition matrix, which increases the number of rows in the weighting matrix Θ and hence Φ . When β tends to infinity, we obtain a completely binary partition matrix. We hence recover the least total distortion function given in Definition 6. This binary partition can contain as many clusters as states in the high-order model R π ; that is, no aggregation may be performed, so R φ is typically equal to R π .
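This behavior can be observed numerically. Below is a hypothetical run that combines the earlier sketches (stationary_distribution and voi_aggregate) on a random chain, counting how many groups become occupied as β grows; the chain and parameter values are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(8), size=8)                 # random 8-state row-stochastic matrix
gamma = stationary_distribution(P)
for beta in (0.1, 1.0, 10.0, 100.0):
    Psi, Theta, Phi = voi_aggregate(P, gamma, m=4, beta=beta)
    used = np.unique(Psi.argmax(axis=1)).size         # groups actually occupied
    print(f"beta = {beta:6.1f}: {used} occupied group(s)")
```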
The number of state clusters in the high-order chain, or, rather, the number of distinct rows of the weight matrix Θ , does not increase continuously as a function of β . Instead, it increases only for certain critical values of β where a bifurcation occurs in the underlying gradient flow of the Lagrangian. Critical values of β can be explicitly determined when using the negative, modified Kullback–Leibler divergence by looking at the second derivative of the Lagrangian F ( Ψ , α ; Π , Θ , γ ) at Θ .
Proposition 4.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively, where m < n. Let $g(\pi_{i,1:n}, \vartheta_{i,1:n}) = \sum_{j=1}^n \gamma_i \pi_{i,j} \log(\pi_{i,j}/\vartheta_{i,j})$, where $R_\vartheta = (V_\vartheta, E_\vartheta, \Theta)$ is the joint model. The following hold:
(i) 
The transition matrix Φ of a low-order Markov chain over m states is given by $\Phi = \Theta\Psi$, where $\Theta = U\Pi$. Here, $[U]_{i,j} = \gamma_i\psi_{i,j}\big/\sum_{k=1}^n \gamma_k\psi_{k,j}$ for the probabilistic partition matrix $[\Psi]_{i,j} = \psi_{i,j}$ found using the updates in Proposition 1.
(ii) 
Suppose that we have a low-order chain over m states with a transition matrix Φ and weight matrix Θ given by (i). For some $\beta_0$, suppose that $\Theta_{\beta_0}$, the matrix Θ for that value of $\beta_0$, satisfies the inequality $d^2/d\epsilon^2\, F(\Psi, \alpha; \Pi, \Theta_{\beta_0} + \epsilon Q, \gamma)\big|_{\epsilon=0} > 0$. Here, $Q \in \mathbb{R}^{m\times n}$ is a perturbation matrix such that $\sum_{k=1}^m \langle q_{k,1:n}, q_{k,1:n}\rangle = 1$ and $\sum_{i=1}^n q_{j,i} = 0$ for all j. A critical value $\beta_c = \min_{\beta > \beta_0}\{\beta : d^2/d\epsilon^2\, F(\Psi, \alpha; \Pi, \Theta_{\beta} + \epsilon Q, \gamma)\big|_{\epsilon=0} \le 0\}$ occurs whenever the minimum eigenvalue of the matrix
$$\mathrm{diag}\bigg(\sum_{i=1}^n \psi_{i,k}\, \pi_{i,1:n}\big/\vartheta_{k,1:n}^{2}\bigg) - \beta \sum_{i=1}^n \psi_{i,k}\, \big(\pi_{i,1:n}\big/\vartheta_{k,1:n}^{2}\big)\big(\pi_{i,1:n}\big/\vartheta_{k,1:n}^{2}\big)^{\!\top}$$
is zero for some state group k, where the divisions are element-wise. The number of rows in Θ and columns in Ψ needs to be increased for $\beta > \beta_c$.
Proposition 4 illustrates a major advantage of the value-of-information cost function for partitioning Markov chains: the number of states in a low-order model does not need to be manually specified. It is dictated implicitly by the value of the Lagrange multiplier β, which captures the effects of favoring information retention over achieving a minimal expected distortion. This automatic increase in the number of state groups is depicted in Figure 5.
Choosing a good value for β is crucial for practical problems. There are a variety of ways to do this. One such approach entails applying perturbation theory to obtain an upper bound on β. More specifically, it is known that measurements of Shannon mutual information are always, on average, improperly estimated when considering finite samples [45]. That is, for finitely sized state spaces, the probability distributions that comprise the mutual information expression are approximations, thereby leading to errors that propagate into the aggregation process. Our approach therefore entails modeling this perturbation error and removing it from the value of information. This leads to a modified criterion for which a value of β can be determined that minimizes the estimation error and better fits to the structure of the transition matrix. Such values typically correspond to the beginning of an asymptotic region of the original value-of-information expression where favoring a minimum expected distortion over information loss leads to negligible improvements.
Proposition 5.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ and $R_\varphi = (V_\varphi, E_\varphi, \Phi)$ be transition models of two Markov chains over n and m states, respectively, where m < n. $R_\vartheta = (V_\vartheta, E_\vartheta, \Theta)$ is a joint model with m + n states. The systematic underestimation of the information cost of the Shannon mutual information term in Definition 11 can be second-order minimized by solving the following optimization problem
$$\min_{\Psi \in \mathbb{R}_+^{n\times m},\, \Theta \in \mathbb{R}_+^{m\times n}} \Bigg\{ \sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j}\log(\psi_{i,j}/\gamma_i) + \sum_{j=1}^m \sum_{i=1}^n \frac{\gamma_i \psi_{i,j}^2}{2n\log(2)\,\alpha_j} \;\Bigg|\; \sum_{i=1}^n \sum_{j=1}^m \gamma_i\psi_{i,j}\, g(\pi_{i,1:n}, \vartheta_{j,1:n}) \le r,\; 0 \le \vartheta_{i,k}, \psi_{i,k} \le 1,\; \sum_{k=1}^n \vartheta_{i,k} = 1,\; \sum_{k=1}^m \psi_{i,k} = 1 \Bigg\},$$
where $\beta = 2^{\sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j}\log(\psi_{i,j}/\gamma_i)}\big/2n$.
This corrected version of the value of information has a rescaled slope compared to the original, where a lower bound on the rescaling is given by $\log(2)/\beta - \log(2)\, 2^{\sum_{j=1}^m \alpha_j \sum_{i=1}^n \psi_{i,j}\log(\psi_{i,j}/\gamma_i)}\big/2\beta n$.

3.2.3. Partitioning Process and State Aggregation

The preceding theory outlines how Markov chains can be aggregated by trading off between expected distortion and expected relative entropy. We have shown that global-optimal solutions can be uncovered. However, we have not bounded the aggregation quality of those solutions for arbitrary problems; such bounds are important for understanding how our approach will behave in general.
Toward this end, we quantify the relationship between the stationary distributions of the original and reduced-order stochastic matrices for nearly-completely-decomposable systems. Many practical Markov chains are nearly completely decomposable: groups of states possess similar transition dynamics, and the chance of jumping between states within a group is higher than the chance of jumping to states outside of it.
Definition 12.
The transition model of a first-order, homogeneous, nearly-completely-decomposable Markov chain is a weighted, directed graph $R_\pi$ given by the three-tuple $(V_\pi, E_\pi, \Pi)$ with
(i) 
A set of n vertices $V_\pi = \{v_\pi^1, \ldots, v_\pi^n\}$ representing the states of the Markov chain.
(ii) 
A set of $n \times n$ edge connections $E_\pi \subseteq V_\pi \times V_\pi$ between reachable states in the Markov chain.
(iii) 
A stochastic transition matrix $\Pi \in \mathbb{R}_+^{n \times n}$. Here, $[\Pi]_{i,j} = \pi_{i,j}$ represents the non-negative transition probability between states i and j. We impose the constraint that the probability of experiencing a state transition is independent of time. Moreover, we have that $\Pi = \Pi^* + \varepsilon C$, where $\Pi^* \in \mathbb{R}_+^{n\times n}$ is a completely-decomposable, block-diagonal stochastic matrix with m indecomposable sub-matrix blocks $\Pi_i^*$ of order $n_i$:
$$\Pi^* = \begin{pmatrix} \Pi_1^* & 0 & \cdots & 0 \\ 0 & \Pi_2^* & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Pi_m^* \end{pmatrix}.$$
Since Π and $\Pi^*$ are both stochastic, the matrix $C \in \mathbb{R}^{n\times n}$ must satisfy the equality constraint $\sum_{k=1}^{n_i} c_{p_i,k}^{\,i} = -\sum_{j\ne i} \sum_{q=1}^{n_j} c_{p_i,q}^{\,j}$ for all $p_i$, for blocks $\Pi_i^*$ and $\Pi_j^*$. That is, they must obey $\max_{p_i}\big(\sum_{k=1}^{n_i} |c_{p_i,k}^{\,i}|\big) = 1$. Additionally, the maximum degree of coupling between sub-systems $\Pi_i^*$ and $\Pi_j^*$, given by the perturbation factor ε, must obey $\varepsilon = \max_{i}\big(\sum_{j\ne i} \sum_{q=1}^{n_j} \pi_{p_i,q}^{\,j}\big)$.
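For illustration, a nearly-completely-decomposable chain can be synthesized by coupling row-stochastic blocks with a small leakage term. The toy construction below is ours, not the paper's:

```python
import numpy as np

def ncd_chain(blocks, eps):
    """A toy nearly-completely-decomposable chain Pi = Pi* + eps*C (an
    illustrative construction): row-stochastic diagonal blocks Pi_i*, with
    an eps fraction of each row's mass spread uniformly over the
    off-block states."""
    sizes = [B.shape[0] for B in blocks]
    n, offs = sum(sizes), np.cumsum([0] + sizes)
    P = np.zeros((n, n))
    for b, B in enumerate(blocks):
        s, e = offs[b], offs[b + 1]
        P[s:e, s:e] = (1.0 - eps) * B      # weakly perturbed diagonal block
        out = np.ones(n)
        out[s:e] = 0.0                     # mass leaks only to other blocks
        P[s:e, :] += eps * out / out.sum()
    return P

# e.g., P = ncd_chain([B1, B2], eps=0.01) for row-stochastic B1, B2.
```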
Proposition 6.
Let $R_\pi = (V_\pi, E_\pi, \Pi)$ be a transition model of a Markov chain with n states, where $\Pi \in \mathbb{R}_+^{n\times n}$ is nearly completely decomposable into m Markov sub-chains.
(i) 
The associated low-order stochastic matrix $\Phi \in \mathbb{R}_+^{m\times m}$ found by solving the value of information is given by $\varphi_{i,j} = \sum_{p_i=1}^{n_i} \sum_{q_j=1}^{n_j} \pi_{p_i,q_j}\, \gamma_{p_i} \big/ \sum_{q_i=1}^{n_i} \gamma_{q_i}$, where $p_i, q_i$ represent state indices $p = 1, \ldots, n_i$ associated with block i, while $q_j$ represents a state index $q = 1, \ldots, n_j$ into block j. The variable $\gamma_{p_i} = \gamma_{p_i}(\Pi)$ denotes the invariant-distribution probability of state p in block i of Π.
(ii) 
Suppose that $\gamma_{p_i} \big/ \sum_{q_i=1}^{n_i} \gamma_{q_i} = v_{p_i}^{(1_i)}$ is approximated by the entries of the first left-eigenvector $v^{(1_i)}$ for block i of $\Pi^*$. We then have that
$$\bigg\| \gamma\bigg( \sum_{p_i=1}^{n_i} v_{p_i}^{(1_i)} \sum_{q_j=1}^{n_j} \pi_{p_i,q_j} \bigg) - \gamma(\Pi)\Psi \bigg\|_1 \in O(\varepsilon^2),$$
where the first term is the invariant distribution of the low-order matrix, γ(Φ), under the simplifying assumption, and $\Psi \in \mathbb{R}_+^{n\times m}$ is the probabilistic partition matrix found by solving the value of information.
Proposition 6 elucidates the behavior of the value-of-information aggregation results: a reduced Markov chain will have similar long-run dynamics as a projected version of the original Markov chain. This result is made possible by the work of Simon and Ando [20]. They proved that, for nearly-completely-decomposable chains, there are two types of dynamics that influence the stationary distribution: short term and long term. In the short term, each completely-decomposable block evolves almost independently toward a local equilibrium, as if the system were completely decomposable. In the long run, the entire aggregated chain moves toward the steady state defined by the first left-eigenvector of the original stochastic matrix. The equilibrium states attained for each block of the original stochastic matrix are approximately the same as those for the short-run dynamics.
More specifically, the local-equilibrium states for the short-term dynamics may be closely approximated by the steady-state vectors of the sub-systems of the completely-decomposable stochastic matrix $\Pi^*$. The macro-transition probability between blocks $\Pi_i^*$ and $\Pi_j^*$ of $\Pi^*$ remains, in the long term, more or less constant in time and is approximately equal to $\Phi$. Hence, the elements of the steady-state probability vector $\gamma_{1:m}^{(1)}$, where $\gamma_{1:m}^{(1)}(\Phi-I_{m\times m})=0$, are so-called macro-variables that yield good approximations to the steady-state probabilities of being in any one state of block $\Pi_i^*$. The so-called micro-variables $\gamma_{p_i}^{(1_i)}=\gamma_i^{(1)}v_{p_i}^{(1_i)}$ are good approximations to the steady-state probabilities of being in any particular state $p$ of block $i$. That is, as we showed in the previous section, both the macro- and micro-variables have an $\ell_1$-norm error that is a square of the perturbation factor compared to those for the original stochastic matrix. The aggregated chain will thus possess similar long-term dynamics as the original. A small numerical illustration of Proposition 6 is given below.
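The sketch below is our own numerical check of Proposition 6(i), reusing the ncd_chain helper sketched earlier. It forms the low-order matrix from block-normalized invariant probabilities and compares the long-run block masses of the reduced and original chains; the helper names and the block sizes are illustrative assumptions rather than the authors' implementation.

```python
# A small numerical check of Proposition 6(i), assuming NumPy and the ncd_chain
# helper from the earlier sketch.
import numpy as np

def invariant_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to one."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def aggregate_ncd(Pi, block_sizes):
    """Form phi_{i,j} = sum_{p_i} sum_{q_j} pi_{p_i,q_j} gamma_{p_i} / sum_{q_i} gamma_{q_i}."""
    gamma = invariant_distribution(Pi)
    offsets = np.cumsum([0] + list(block_sizes))
    m = len(block_sizes)
    Phi = np.zeros((m, m))
    for i, (a, b) in enumerate(zip(offsets[:-1], offsets[1:])):
        w = gamma[a:b] / gamma[a:b].sum()          # block-normalized invariants
        for j, (c, d) in enumerate(zip(offsets[:-1], offsets[1:])):
            Phi[i, j] = w @ Pi[a:b, c:d].sum(axis=1)
    return Phi, gamma

blocks = [3, 3, 3]
Phi, gamma = aggregate_ncd(ncd_chain(blocks, eps=0.01), blocks)
# Long-run block masses of the low-order chain versus the original chain; per
# Proposition 6(ii), these should agree up to an O(eps^2) discrepancy.
print(invariant_distribution(Phi))
print([gamma[a:a + s].sum() for a, s in zip([0, 3, 6], blocks)])
```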

4. Simulations

In the previous section, we provided an information-theoretic criterion, the value of information, for quantifying the effects of quantizing stochastic matrices associated with Markov chains. We also provided a first-order approach for optimizing this criterion, which provides a mechanism for simultaneously partitioning and aggregating chain states. In this section, we assess the empirical performance of this criterion. Our simulations have several aims. First, we ascertain how well the value of information reduces the complexity of Markov chains when they possess either simple or complex state-transition dynamics. We also discuss various facets of the criterion within the context of these results. We then gauge how well the results for the "optimal" free-parameter value, as predicted via perturbation theory, align with the ground truth. Lastly, we illustrate that using Shannon mutual information, versus Shannon entropy, as a constraint for the expected-distortion objective function avoids returning coincident partitions.

4.1. Simulation Protocols

For each of the examples that follow, we adopted the following simulation protocols for value-of-information-based aggregation. We initialized the aggregation process with a partition matrix of all ones, $\Psi=[1]_{9\times1}$, signifying that each state belongs to a single group. This is the globally optimal solution of Markov-chain aggregation for both the binary- and probabilistic-partition cases. For the latter case, it coincides with a parameter value $\beta$ of zero for the value of information. We then found the subsequent critical values of $\beta$ and increased the column count of the partition matrix $\Psi$. We determined which state group would be further split and modified both the new column and an existing column of $\Psi$ to randomly allocate the appropriate states. This initialization process bootstraps the quantization for the new cluster and typically achieves convergence in only a few iterations. It also permits the value of information to reliably track the global minimizer of the binary-partition aggregation problem as $\beta$ increases.
For certain problems, specifying a fixed number of partition updates a priori may not permit finding a steady-state solution. We therefore run the alternating updates until no entries of the partition matrix change across two iterations. A sketch of this alternating procedure is given below.
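The following is one plausible realization of this protocol in Python with NumPy, using the alternating updates from Proposition A1 with $\Theta=U\Pi$ recomputed each sweep and $g$ taken as the negative, modified Kullback–Leibler divergence. The random initialization, tolerance, and numerical safeguards are our own assumptions, not the authors' code; it reuses the invariant_distribution helper sketched earlier.

```python
# A minimal sketch of the alternating value-of-information updates, assuming
# NumPy; variable names mirror the paper (gamma, alpha, Psi, Theta).
import numpy as np

def voi_aggregate(Pi, m, beta, n_iter=200, tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    n = Pi.shape[0]
    gamma = invariant_distribution(Pi)        # helper from the earlier sketch
    Psi = rng.random((n, m))
    Psi /= Psi.sum(axis=1, keepdims=True)     # rows are conditional probabilities
    eps = 1e-300
    for _ in range(n_iter):
        # Theta = U Pi, with [U]_{i,j} = gamma_i psi_{i,j} / sum_k gamma_k psi_{k,j}.
        U = gamma[:, None] * Psi
        U /= U.sum(axis=0, keepdims=True)
        Theta = U.T @ Pi                       # m-by-n accumulation matrix
        # g_{i,j}: negative, modified KL divergence between rows of Pi and Theta.
        KL = (Pi[:, None, :] * np.log((Pi[:, None, :] + eps)
                                      / (Theta[None, :, :] + eps))).sum(-1)
        G = -gamma[:, None] * KL
        alpha = gamma @ Psi                    # marginal over aggregated states
        # psi_{i,j} proportional to alpha_j * exp(beta * g_{i,j}), row-normalized.
        logits = np.log(alpha[None, :] + eps) + beta * G
        logits -= logits.max(axis=1, keepdims=True)
        Psi_new = np.exp(logits)
        Psi_new /= Psi_new.sum(axis=1, keepdims=True)
        converged = np.abs(Psi_new - Psi).max() < tol
        Psi = Psi_new
        if converged:
            break
    return Psi, Theta, gamma @ Psi
```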

4.2. Simulation Results and Analyses

4.2.1. Value-of-Information Aggregation

Aggregation Performance. We establish the performance of value-of-information aggregation through two examples. The first, shown in Figure 6, corresponds to a Markov chain with nine states and four state groups with strong intra-group interactions and weak inter-group interactions. This is a relatively simple aggregation problem. The second example, presented in Figure 7, is of a nine-state Markov chain with a single dominant state group and six outlying states with near-equal transition probabilities. This is a more challenging problem than the first, as the outlying states cannot be reliably combined without adversely impacting the mutual dependence. In both cases, the transition probabilities were randomly generated through knowledge of a limit distribution γ .
In Figure 6 and Figure 7, we provide partitions and aggregated Markov chains for four critical values of the free parameter $\beta$. The "optimal" value of $\beta$, as predicted by our perturbation-theory formulation of the value of information, leads to four and seven state groups for the first and second examples, respectively. The associated partitions align with an inspection of the dynamics of the stochastic matrix: the partitions separate states that are more likely to transition to each other from those that are not. The "optimal" aggregated stochastic matrix encodes this behavior well. The remaining aggregated chains do too for their respective partitions, as they all mimic the interaction dynamics of the original chain for the given state groups. However, the partitions for "non-optimal" $\beta$s either over- or under-quantize the chain states, which is illustrated by the plot of expected distortion $E[E[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma]$ versus the critical values of $\beta$; these plots are given in Figure 8. That is, for critical $\beta$s before the "optimal" value, there is a steep drop in the distortion, while the remaining $\beta$s only yield modest decreases. The "optimal" value of $\beta$ for both examples, in contrast, lies at the "knee" of this curve, which is where the expected-distortion minimization, $\min_{\Psi}E[E[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma]$, is roughly balanced against the competing objective of state-mutual-dependence maximization with respect to some bound, $E[D_{\mathrm{KL}}(\gamma\|\Psi)]\le r$.
For both examples, we aggregated at critical values of $\beta$ where the number of state groups increases. We also considered non-critical values of $\beta$ between two phase changes; a thousand Monte Carlo trials were conducted for random $\beta$s. For each of these trials, the partitions produced between two related critical values were virtually identical, up to a permutation of the rows. Only minute differences, attributable to finite-precision arithmetic, were encountered. Such results illustrate the validity of our theory: only a finite number of critical values of $\beta$ need to be used for reducing finite-cardinality stochastic matrices.
Convergence. In Figure 8, we furnished plots of the decrease in the expected distortion, $E[E[g(\Pi,\Theta^{(k)})\,|\,\Psi^{(k)}]\,|\,\gamma]$, across each iteration $k=1,2,\ldots$. This provides a means of gauging the per-iteration solution improvement and hence convergence. We also provided plots of the partition-matrix cross-entropy for consecutive iterations, $E[-\log(\Psi^{(k-1)})\,|\,\Psi^{(k)}]$. The partition cross-entropy is a bounded measure of change between consecutive partitions and captures how greatly the partition changed across a single update. Taken together, these offer alternate views of the aggregation improvement during intermediate stages of the dynamics-reduction process. In either example, the average of these quantities across the Monte Carlo trials exhibits a nearly linear decrease before plateauing, regardless of the critical value of $\beta$. This finding suggests rapid convergence to the global solution, which was anticipated from our convergence analysis. That is, due to how we initialize the partitions, we roughly ensure that they are in close proximity to the next global optimum $\Psi$, up to a permutation of the rows. The $D_{\mathrm{KL}}(\Psi\|\Psi^{(1)})$ term in the approximation-error bound dominates the $k^{-1}$ term in this situation, and hence few changes in $\Psi$ are needed.
To assess the convergence stability of the aggregation process, we performed a thousand Monte Carlo trials on both examples. In only a very small fraction of the trials did the partitions deviate from those presented in Figure 6 and Figure 7 by more than a simple permutation. Such occurrences were largely due to a degenerate initialization of a new partition column, whereby no states would be associated with the new state group. Imposing a constraint that a new group must contain at least a single state fixed this issue and led to consistent partitions being produced. The expectation-maximization-based procedure for solving the value of information was then able to reliably discover global optima in just a few iterations; the optima often were binary partitions like those presented in Figure 6 and Figure 7.
Avoiding Coincident Partitions. The results for the preceding examples indicate that the Shannon-information constraint, $E[D_{\mathrm{KL}}(\gamma\|\Psi^{(k)})]\le r$, has the potential to yield non-coincident partitions. We now demonstrate, using two additional Markov chains, that a Shannon-entropy penalty, $E[-\log(\Psi^{(k)})]\le r$, is more likely to return partitions with duplicated rows. This unnecessarily inflates the state-group cardinality, leading to aggregations with redundant details.
Both of these examples are for Markov chains with nine states. The first example, shown in the top left-hand corner of Figure 9, contains three state groups with a high chance to both jump to states in different groups and jump to states within a group. Moreover, many of the rows in the matrix are the same. We anticipate that coincident partitions will easily materialize due to these properties. The second example is given in the bottom left-hand corner of Figure 9. It contains two state groups with weakly interacting intra-group dynamics and strong inter-group dynamics. Each group has highly distinct transition probabilities. We hence expect that returning coincident partitions will be more difficult than in the first case. As before, the transition probabilities for each matrix were randomly generated through knowledge of a limit distribution.
Partitions for nine state groups are presented in Figure 9. The partitions in the middle column of Figure 9 are the results for the Shannon-mutual-information constraint, while those in the right column are for the Shannon-entropy constraint. The Shannon-mutual-information case quantizes the data in the manner we would expect for both examples: each state is, more or less, assigned to its own group so that the original stochastic matrix is exactly recovered. There are hence no degenerate clusters. For the Shannon-entropy case, three coincident clusters formed for the first problem at this value of $\beta$. Two states from the first state group were incorrectly viewed as being equivalent. Two states from the second group and three states from the third group were also improperly treated, leading to further coincident partitions. Four degenerate clusters thus emerged, and the original stochastic matrix could not be recovered; the Kullback–Leibler distortion for this and other values of $\beta$ illustrates this. For the second problem, every state in the second state group was considered equal. Seven degenerate groups were thus created, leading to a stochastic matrix with a very different invariant distribution, and hence different longer-term dynamics, than the original.
For these examples, we considered the same number of groups as states to highlight the severity of the coincident-partition issue when using an uncertainty constraint. Coincident clusters were also observed when the group count was below the number of states. A simple way to flag such duplicates is sketched below.
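As an illustration of how such degeneracies can be detected, the helper below marks aggregated state groups as coincident when their partition columns agree to within a tolerance. The function and its tolerance are hypothetical conveniences of ours, not part of the paper's method.

```python
# An illustrative check for coincident partitions: two aggregated states are
# flagged as coincident when their columns of Psi agree up to a tolerance.
import numpy as np

def coincident_groups(Psi, tol=1e-6):
    """Return index pairs (j, k) of duplicate columns of the partition matrix."""
    m = Psi.shape[1]
    return [(j, k) for j in range(m) for k in range(j + 1, m)
            if np.abs(Psi[:, j] - Psi[:, k]).max() < tol]
```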

4.2.2. Value-of-Information Aggregation Discussions

We have illustrated that the value-of-information criterion provides an effective mechanism for dynamics reduction of Markov chains for these examples. Consistently stable partitions of the transition probabilities are produced by optimizing this criterion. Such partitions induce reduced-order chains that do not have duplicate state groups and are often parsimonious representations. We have additionally demonstrated that only a finite number of free-parameter values need to be considered for this purpose, the “optimal” value of which can be discerned in a data-driven fashion.
Aggregation Performance: Binary Partitions. In the previous section, we relaxed the binary-valued constraint on the partition matrices to avoid exactly solving a potentially computationally intractable problem. However, our aggregation results for the first two examples indicate that either binary or nearly-binary partition matrices may still be returned when solving the value of information. The reason for this is the interplay between the expected distortion and the Shannon-mutual-information constraint: while the latter does not explicitly preclude their formation, the former naturally favors binary partitions.
More specifically, non-binary partitions will always have less Shannon mutual information than binary partitions. This is because the conditional entropy of the states in the original and aggregated chains increases more quickly than the marginal entropy, which is due to the additional uncertainty in the non-binary partitions. Hence, for a given upper bound on the Shannon information, if a binary partition can be formed for that bound, then a corresponding non-binary one can also be formed. The minimization of the distortion term, however, impedes the formation of non-binary partitions. In the binary case, provided that the partition reflects the underlying structure of the transitions, only related probability vectors will be compared to each other. Vastly different rows and columns of the stochastic and joint stochastic matrices will not factor into the expected distortion, since the state-assignment probability will be zero if the partition encodes well the underlying transition structure. Making highly non-binary state-group assignments can raise the expected distortion: the Kullback–Leibler divergence between two, possibly very distinct, probability vectors may be multiplied by a non-zero state-assignment probability.
This behavior contrasts with the use of a conditional Shannon entropy equality constraint on the entries of the partition matrix. Such a constraint directly imposes that the partition matrix should have a given amount of uncertainty, potentially at the expense of a worse distortion. Non-binary partitions hence can be more readily constructed.
Aggregation Performance: “Optimal” State Group Count. We considered a perturbation-theoretic approach for determining the ‘optimal’ number of state groups. The approach operates on the assumption that, for finitely sized stochastic matrices, there is an error in estimating the marginal distribution of the original states. This poor estimate leads to a systematic error in evaluating the Shannon-information term, which we quantified in a second-order sense. In the case of binary partitions, a second-order correction of this error introduces a penalty in the value of information for using more aggregated state groups than can be resolved for a particular finitely sized state space. Values for the free parameter were returned that, for our examples, aligned well with a balance between the expected distortion of the aggregation and the mutual dependence between states in the original and aggregated chains.
As shown in our experiments, the value of information monotonically decreases for an increasing number of state groups. The second-order-corrected version shares this trait, as it is a slope-rescaled version of the original value of information. Ideally, we would like to further transform this slope-rescaled value-of-information curve so that it possesses an extremum where both terms of the objective function are balanced. This would lend further credence to the notion that such a free-parameter value, and hence the number of state groups, is "optimal" for any stochastic matrix. In our upcoming work, we will demonstrate how to perform this transformation. We will show that the value of information can be written, in some cases, as a variational problem involving two Shannon-mutual-information terms. Applying the same perturbation-theoretic arguments to this version of the value of information reveals that the corrected criterion is monotonically increasing up to a certain point, after which it is monotonically non-increasing and often is strictly decreasing. This inflection point corresponds to the "optimal" parameter value determined here. This value minimizes the mutual dependence between the original and aggregated states while simultaneously retaining as much information about the original transition dynamics as possible. A sketch of the corresponding parameter computation is given below.
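For concreteness, the snippet below evaluates the data-driven parameter value under our reconstruction of Proposition A5, $\beta=2^{I}/2n$, with $I$ the Shannon-information term $\sum_{j}\alpha_j\sum_{i}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)$. The base-2 logarithm and the helper name are our assumptions.

```python
# A sketch of the data-driven free-parameter choice, assuming NumPy and our
# reading of Proposition A5: beta* = 2^I / (2n), with I computed in bits.
import numpy as np

def corrected_beta(Psi, alpha, gamma):
    n = Psi.shape[0]
    info = 0.0
    for j in range(Psi.shape[1]):
        for i in range(n):
            if Psi[i, j] > 0.0:
                info += alpha[j] * Psi[i, j] * np.log2(Psi[i, j] / gamma[i])
    return 2.0 ** info / (2.0 * n)
```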

5. Conclusions

In this paper, we have provided a novel, two-part information-theoretic approach for aggregating Markov chains. The first part of our approach is aimed at assessing the distortion between original- and reduced-order chains according to the negative, modified Kullback–Leibler divergence between rows of their corresponding stochastic matrices. The discrete nature of the graphs precludes the direct comparison of the transition probabilities according to this divergence, which motivated our construction of a joint transition model. This joint model encodes all of the information of the reduced-order Markov chain and is of the proper size to compare against the original Markov chain. The second part of our approach addresses how to combine states in the original chain, according to the chosen divergence, by solving a value-of-information criterion. This criterion aggregates states together if doing so reduces the total expected distortion between the low- and high-order chains and simultaneously maximizes the bounded mutual dependence between states in the high- and low-order chains. It thus attempts to find a low-order Markov chain that most resembles the global and local transition structure of the original, high-order Markov chain.
The value of information provides a principled and optimal trade-off between the quality of the aggregation, as measured by the total expected distortion, and the complexity of it, as measured by the state mutual dependence according to Shannon mutual information. The complexity constraint has dual roles. The first is that it explicitly dictates the number of states in the low-order chain. We proved that changing the value of a variable associated with this constraint causes the aggregation process to undergo phase transitions where new groupings emerge by splitting an existing state cluster. The second role of the constraint is that it relaxes the condition that the partition matrices must be strictly binary. This relaxation permits the formulation of an efficient procedure for approximately solving the aggregation problem. While the same effect could be achieved with a Shannon entropy constraint, it has the tendency to yield coincident partitions. This over-inflates the number of states in the reduced-order model.
We applied our approach to a series of Markov chains. Our simulation results showed that our value-of-information scheme achieved equal or better performance compared to a range of different aggregation techniques. A practical advantage of our methodology is that we have derived a data-driven expression for the “optimal” value for a parameter associated with the state mutual dependence constraint. This expression was based upon correcting the underestimation of the Shannon information term for finitely sized stochastic matrices. Employing this expression fits the partitions more to the structure of the data than to the noise, ensuring that it tends not to over-cluster states in the original chain. It also frees investigators from having to supply the number of state groups. Many existing aggregation approaches rely on the manual specification of the state-group count, in contrast; a reasonable number of state groups may not be immediately evident for certain problems, which complicates their effective application.
As we noted at the beginning of the paper, our emphasis is on understanding the effects of the value of information when it is applied to resolve the exploration–exploitation dilemma in reinforcement learning. In particular, we seek to address the question of whether the value of information is implicitly aggregating the Markov chains underlying the Markov decision process during exploration. Toward this end, our next step will be to show that hidden Markov models can be reduced in a value-of-information-based manner. Much like our work here, this will entail defining a joint model that allows for comparisons between pairs of hidden Markov models with different state spaces. We will need to construct the joint model so that it is Markovian, which will ensure that the theory in this paper applies with few modifications. Following this, for the Markov-decision-process case, we will need to show that a lumpable partition of the state space can be defined by the value of information, where the partition is bounded by the state-transition and cost effects. States in the same partition will then be viewed as a single state in a reduced-order, aggregated Markov chain. An aggregated Markov decision process with average cost on the aggregated Markov chain can then be obtained. As a part of this effort, we will also quantify the local-neighborhood performance difference between this aggregated Markov decision process and the optimal one.

Author Contributions

Conceptualization, J.C.P.; Investigation, I.J.S.; Methodology, I.J.S. and J.C.P.; Writing—original draft, I.J.S. and J.C.P.

Funding

The work of the authors was funded by grants N00014-15-1-2013 (Jason Stack), N00014-14-1-0542 (Marc Steinberg), and N00014-19-WX-00636 (Marc Steinberg) from the US Office of Naval Research. The first author was additionally supported by in-house laboratory independent research (ILIR) grant N00014-19-WX-00687 (Frank Crosby) from the US Office of Naval Research, a University of Florida Research Fellowship, a University of Florida Robert C. Pittman Research Fellowship, and an ASEE Naval Research Enterprise Fellowship.

Acknowledgments

The authors would like to thank Sean P. Meyn at the University of Florida for his suggestion to use the value of information for aggregating Markov chains.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

$R_\pi$: Weighted, directed graph associated with the stochastic matrix $\Pi$
$V_\pi$: Vertex set for $R_\pi$; represents the $n$ states of the original Markov chain, with the $i$th state denoted by $v_{\pi_i}$
$E_\pi$: Edge set for $R_\pi$; represents the reachability between states of the original Markov chain
$\Pi$: A stochastic matrix of size $n\times n$ for the original Markov chain; the $ij$th entry of this matrix is denoted by $[\Pi]_{i,j}=\pi_{i,j}$
$\pi_{i,1:n}$: The $i$th row of $\Pi$; describes the chance to transition from state $i$ to all other states in the original chain
$R_\varphi$: Weighted, directed graph associated with the stochastic matrix $\Phi$
$V_\varphi$: Vertex set for $R_\varphi$; represents the $m$ states of the reduced-order Markov chain, with the $j$th state denoted by $v_{\varphi_j}$
$E_\varphi$: Edge set for $R_\varphi$; represents the reachability between states of the reduced-order Markov chain
$\Phi$: A stochastic matrix of size $m\times m$ for the reduced-order Markov chain; the $ij$th entry of this matrix is denoted by $[\Phi]_{i,j}=\varphi_{i,j}$
$\varphi_{j,1:m}$: The $j$th row of $\Phi$; describes the chance to transition from state $j$ to all other states in the reduced-order chain
$\gamma$: The unique invariant distribution associated with the original Markov chain; the $i$th entry is denoted by $[\gamma]_i=\gamma_i$ and describes the probability of being in state $v_{\pi_i}$
$g$: A divergence measure; written either as $g(\Pi,\Phi)$, when comparing the entirety of $\Pi$ to $\Phi$, or $g(\pi_{i,1:n},\varphi_{j,1:m})$, when comparing rows $\pi_{i,1:n}$ and $\varphi_{j,1:m}$ of the stochastic matrices
$\psi$: A partition function
$\Psi$: A partition matrix; the $ij$th entry of this matrix is denoted by $[\Psi]_{i,j}=\psi_{i,j}$ and represents the conditional probability between states $v_{\varphi_j}$ and $v_{\pi_i}$
$R_\vartheta$: Weighted, directed graph associated with the stochastic matrix $\Theta$
$V_\vartheta$: Vertex set for $R_\vartheta$; represents the $n+m$ states of the joint model, with the $k$th state denoted by $v_{\vartheta_k}$
$E_\vartheta$: Edge set for $R_\vartheta$; represents the reachability between states of the joint model
$\Theta$: A stochastic matrix of size $m\times n$; the $jk$th entry of this matrix is denoted by $[\Theta]_{j,k}=\vartheta_{j,k}$
$U$: A matrix of size $n\times m$; the $ij$th entry of this matrix is denoted by $[U]_{i,j}=u_{i,j}$
$e$: A unit vector; the $i$th unit vector is denoted by $e_i$
$q$: The total distortion; written as $q(R_\pi,R_\varphi)$ for the weighted, directed graphs $R_\pi$ and $R_\varphi$
$\alpha$: The marginal probability vector; the $j$th entry, $[\alpha]_j=\alpha_j$, describes the probability of being in state $v_{\varphi_j}$
$r$: A non-negative value for the Shannon-mutual-information bound
$F$: The value of information
$\beta$: The hyperparameter associated with $r$
$\beta_0,\beta_c$: The initial and critical values of $\beta$
$Q$: An $m\times n$ offset matrix; the $ji$th entry of this matrix is denoted by $[Q]_{j,i}=q_{j,i}$
$\epsilon$: An offset amount
$C$: An $n\times n$ offset matrix; the $pk$th entry of this matrix is denoted by $[C]_{p,k}=c_{p,k}$

Appendix A

Proposition A1.
For a transition model $R_\pi=(V_\pi,E_\pi,\Pi)$ over $n$ states and a joint model $R_\vartheta=(V_\vartheta,E_\vartheta,\Theta)$ over $m+n$ states, the Lagrangian of the relevant terms for the minimization problem given in Definition 11 is $F(\Psi,\alpha;\Pi,\Theta,\gamma)=E[E[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma]-E[D_{\mathrm{KL}}(\gamma\|\Psi)]/\beta$, or, rather,
$$F(\Psi,\alpha;\Pi,\Theta,\gamma)=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\,g(\pi_{i,1:n},\vartheta_{j,1:n})-\frac{1}{\beta}\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right).$$
Here, $\beta\ge0$ is a Lagrange multiplier that emerges from the Shannon-mutual-information constraint in the value of information. Probabilistic partitions $[\Psi]_{i,j}=\psi_{i,j}$, which are local solutions of $\nabla_{\vartheta_{j,1:n}}F(\Psi,\alpha;\Pi,\Theta,\gamma)=0$, can be found by the following expectation-maximization-based alternating updates,
$$\alpha_j\leftarrow\sum_{i=1}^{n}\gamma_i\psi_{i,j},\qquad\psi_{i,j}\leftarrow\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\Big/\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})},$$
which are iterated until convergence.
Proof. 
We can convert the constrained value-of-information problem into an unconstrained one using the theory of Lagrange multipliers. There are five different constraints for which we need to account,
$$F(\Psi,\alpha;\Pi,\Theta,\gamma)=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\,g(\pi_{i,1:n},\vartheta_{j,1:n})-\frac{1}{\beta}\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)+\sum_{i=1}^{n}\kappa_i\Big(1-\sum_{j=1}^{m}\psi_{i,j}\Big)+\sum_{j=1}^{m}\omega_j\Big(1-\sum_{i=1}^{n}\vartheta_{j,i}\Big)-\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\psi_{i,j}-\sum_{i=1}^{n}\sum_{j=1}^{m}\mu_i\vartheta_{j,i}.$$
The first constraint corresponds to the Shannon mutual information term. The remaining constraints ensure that entries from Θ and Ψ correspond to valid probabilities.
We can derive the update for the probabilistic partition matrix Ψ by differentiating the Lagrangian and setting it to zero
$$\frac{\partial}{\partial\psi_{i,j}}F(\Psi,\alpha;\Pi,\Theta,\gamma)=\gamma_i\log\!\left(\frac{\psi_{i,j}}{\alpha_j}\right)-\gamma_i\,g(\pi_{i,1:n},\vartheta_{j,1:n})\,\beta-\frac{\gamma_i}{\beta}+\kappa_i+\omega_i-\xi_i-\mu_i=0.$$
Solving for $\psi_{i,j}$ yields $\psi_{i,j}=\sum_{r=1}^{n}\gamma_r\psi_{r,j}\,e^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\big/\sum_{p=1}^{m}\sum_{r=1}^{n}\gamma_r\psi_{r,p}\,e^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}$, where $\kappa_i$ and $\omega_j$ have been selected such that $\sum_{j=1}^{m}\psi_{i,j}=1$ and $\sum_{i=1}^{n}\vartheta_{j,i}=1$, respectively. It is apparent that the update for the marginal probabilities $\alpha$ is encoded in this update for $\Psi$.
We now can show that the update for the marginal probabilities is optimal. For a fixed probabilistic partition matrix Ψ , the following inequality
$$\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\alpha_j}\right)\ge\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\sum_{r=1}^{n}\gamma_r\psi_{r,j}}\right)$$
holds with equality if and only if $\alpha_j=\sum_{i=1}^{n}\gamma_i\psi_{i,j}$. This result is a consequence of applying the divergence inequality to $\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log(\psi_{i,j}/\alpha_j)-\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log(\psi_{i,j}/\sum_{r=1}^{n}\gamma_r\psi_{r,j})$. That is,
$$\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\alpha_j}\right)-\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\sum_{r=1}^{n}\gamma_r\psi_{r,j}}\right)=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\sum_{r=1}^{n}\gamma_r\psi_{r,j}}{\alpha_j}\right)\ge\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}-\sum_{j=1}^{m}\alpha_j\ge0,$$
where equality to zero is only obtained if and only if α j = i = 1 n γ i ψ i , j . This implies that, for a fixed Ψ , the update for α globally solves the problem min α F ( Ψ , α ; Π , Θ , γ ) , establishing its optimality.
We now demonstrate that the probabilistic partition update is optimal. For a fixed marginal probability vector α , the following inequality
$$-\frac{1}{\beta}\sum_{i=1}^{n}\gamma_i\log\sum_{j=1}^{m}\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\le\frac{1}{\beta}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\alpha_j}\right)-\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\,g(\pi_{i,1:n},\vartheta_{j,1:n})=\frac{1}{\beta}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{e^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\alpha_j}\right)$$
holds with equality if and only if $\psi_{i,j}=\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\big/\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}$. This follows from showing that
$$\begin{aligned}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{e^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\alpha_j}\right)&=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\dfrac{\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}}\cdot\frac{1}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right)\\&=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\big/\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right)+\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{1}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right)\\&\ge0+\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{1}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right)=\sum_{i=1}^{n}\gamma_i\log\!\left(\frac{1}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right).\end{aligned}$$
Here, we used the divergence inequality in the second-to-last step. As well, the last expression can be written as $\sum_{i=1}^{n}\gamma_i\log(1/\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})})=-\sum_{i=1}^{n}\gamma_i\log(\sum_{j=1}^{m}\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})})$, where the right-hand side is the desired expression. Substituting the update for $\Psi$ into the original inequality achieves it with equality. Hence, for a fixed $\alpha$, the update for $\Psi$ globally solves $\min_{\Psi}F(\Psi,\alpha;\Pi,\Theta,\gamma)$. ☐
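As an informal numerical cross-check of Propositions A1 and A2, one can evaluate the Lagrangian along the alternating updates and confirm that it is non-increasing. The sketch below does this under our sign conventions for $F$ and $g$; it is meant to be called between the updates of the voi_aggregate sketch from Section 4.1 and is illustrative only.

```python
# An informal check of the monotone decrease in Proposition A2(ii), assuming
# NumPy and our reconstruction: F = E[E[g|Psi]|gamma] - E[D_KL(gamma||Psi)]/beta,
# with g the negative, modified KL divergence between rows of Pi and Theta.
import numpy as np

def lagrangian(Pi, Theta, Psi, alpha, gamma, beta, eps=1e-300):
    # g_{i,j} = -gamma_i * sum_k pi_{i,k} log(pi_{i,k} / theta_{j,k})
    KL = (Pi[:, None, :] * np.log((Pi[:, None, :] + eps)
                                  / (Theta[None, :, :] + eps))).sum(-1)
    G = -gamma[:, None] * KL
    distortion = (gamma[:, None] * Psi * G).sum()      # E[E[g|Psi]|gamma]
    info = (alpha[None, :] * Psi
            * np.log((Psi + eps) / (gamma[:, None] + eps))).sum()
    return distortion - info / beta
```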
Proposition A2.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively, where m < n . If [ Ψ ] i , j = ψ i , j is an optimal probabilistic partition and [ α ] j = α j an optimal marginal probability vector, then, for the updates in Proposition A1, we have that:
(i) 
The approximation error is non-negative
$$F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)=\sum_{i=1}^{n}\gamma_i\log\!\left(\frac{\sum_{j=1}^{m}\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}}{\sum_{j=1}^{m}\alpha_j^{(k)}e^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}}\right)\ge0.$$
(ii) 
The modified free energy monotonically decreases, $F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)\ge F(\Psi^{(k+1)},\alpha^{(k+1)};\Pi,\Theta,\gamma)$, across all iterations $k$.
(iii) 
For any K 1 , we have the following bound for the sum of approximation errors
$$\sum_{k=1}^{K}\Big(F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)\Big)\le\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\psi_{i,j}^{(1)}}\right).$$
Here, $F(\Psi,\alpha;\Pi,\Theta,\gamma)=E[E[g(\Pi,\Theta)\,|\,\Psi]\,|\,\gamma]-E[D_{\mathrm{KL}}(\gamma\|\Psi)]/\beta$ is the Lagrangian.
Proof. 
Parts (i) and (ii) follow immediately from Proposition A1. For part (iii), we have the following equality expressions for iterations $k$ and $k+1$ of the expectation-maximization updates,
$$\begin{aligned}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}^{(k+1)}}{\psi_{i,j}^{(k)}}\right)&=\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\alpha_j^{(k)}e^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}}{\psi_{i,j}^{(k)}\sum_{p=1}^{m}\alpha_p^{(k)}e^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right)\\&=F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)+\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{1}{\psi_{i,j}^{(k)}}\cdot\frac{\alpha_j^{(k)}}{\alpha_j}\cdot\frac{\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}}{\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}}\right).\end{aligned}$$
The last factor inside the logarithm simplifies since, from part (i) of Proposition A1, $\psi_{i,j}=\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})}\big/\sum_{p=1}^{m}\alpha_pe^{\beta g(\pi_{i,1:n},\vartheta_{p,1:n})}$. We thus have that
$$\begin{aligned}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}^{(k+1)}}{\psi_{i,j}^{(k)}}\right)&=F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)+\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\psi_{i,j}^{(k)}}\cdot\frac{\alpha_j^{(k)}}{\alpha_j}\right)\\&\ge F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)+\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\left(1-\frac{\alpha_j\psi_{i,j}^{(k)}}{\alpha_j^{(k)}\psi_{i,j}}\right).\end{aligned}$$
Since $\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\,\alpha_j\psi_{i,j}^{(k)}/(\alpha_j^{(k)}\psi_{i,j})=\sum_{j=1}^{m}(\alpha_j/\alpha_j^{(k)})\sum_{i=1}^{n}\gamma_i\psi_{i,j}^{(k)}=\sum_{j=1}^{m}\alpha_j=1$, it can be seen that $\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}(1-\alpha_j\psi_{i,j}^{(k)}/\alpha_j^{(k)}\psi_{i,j})=0$. We hence recover the following inequality,
$$\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}^{(k+1)}}{\psi_{i,j}^{(k)}}\right)\ge F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma),$$
which holds for any $k=1,2,\ldots$. Summing this inequality from $k=1$ to $K$, we have that
$$\sum_{k=1}^{K}\Big(F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)\Big)\le\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}^{(K+1)}}{\psi_{i,j}^{(1)}}\right)\le\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\left(\frac{\psi_{i,j}^{(K+1)}}{\psi_{i,j}^{(1)}}-1\right).$$
Since $\log(\psi_{i,j}^{(K+1)}/\psi_{i,j}^{(1)})=\log(\psi_{i,j}^{(K+1)}/\psi_{i,j})+\log(\psi_{i,j}/\psi_{i,j}^{(1)})$, and the first summand contributes a negated Kullback–Leibler divergence, which is non-positive, we get the desired inequality. ☐
Proposition A3.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively, where m < n . If [ Ψ ] i , j = ψ i , j is an optimal probabilistic partition and [ α ] j = α j an optimal marginal probability vector, then, the approximation error
$$F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)\le\frac{1}{k}\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\psi_{i,j}^{(1)}}\right)$$
falls off as a function of the inverse of the iteration count k. Here, the constant factor of the error bound is a Kullback–Leibler divergence between the initial partition matrix Ψ ( 1 ) and the global-best partition matrix Ψ .
Proof. 
From Propositions A2 (ii) and A2 (iii), we get that
$$k\Big(F(\Psi^{(k)},\alpha^{(k)};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)\Big)\le\sum_{k'=1}^{k}\Big(F(\Psi^{(k')},\alpha^{(k')};\Pi,\Theta,\gamma)-F(\Psi,\alpha;\Pi,\Theta,\gamma)\Big).$$
The desired inequality follows after dividing both sides by $k$ and substituting the bound obtained in Proposition A2(iii) on the right-hand side of the inequality. ☐
Proposition A4.
Let $R_\pi=(V_\pi,E_\pi,\Pi)$ and $R_\varphi=(V_\varphi,E_\varphi,\Phi)$ be transition models of two Markov chains over $n$ and $m$ states, respectively, where $m<n$. Let $g(\pi_{i,1:n},\vartheta_{i,1:n})=-\sum_{j=1}^{n}\gamma_i\pi_{i,j}\log(\pi_{i,j}/\vartheta_{i,j})$, the negative, modified Kullback–Leibler divergence, where $R_\vartheta=(V_\vartheta,E_\vartheta,\Theta)$ is the joint model. The following hold:
(i) 
The transition matrix $\Phi$ of a low-order Markov chain over $m$ states is given by $\Phi=\Theta\Psi$, where $\Theta=U\Pi$. Here, $[U]_{i,j}=\gamma_i\psi_{i,j}/\sum_{k=1}^{n}\gamma_k\psi_{k,j}$ for the probabilistic partition matrix $[\Psi]_{i,j}=\psi_{i,j}$ found using the updates in Proposition A1.
(ii) 
Suppose that we have a low-order chain over $m$ states with a transition matrix $\Phi$ and weight matrix $\Theta$ given by (i). For some $\beta_0$, suppose $\Theta_{\beta_0}$, the matrix $\Theta$ for that value of $\beta_0$, satisfies the inequality $\frac{d^2}{d\epsilon^2}F(\Psi,\alpha;\Pi,\Theta_{\beta_0}+\epsilon Q,\gamma)\big|_{\epsilon=0}>0$. Here, $Q\in\mathbb{R}^{m\times n}$ is a perturbation matrix such that $\sum_{j=1}^{m}\langle q_{j,1:n},q_{j,1:n}\rangle=1$ and $\sum_{i=1}^{n}q_{j,i}=0$ for all $j$. A critical value $\beta_c=\min_{\beta>\beta_0}\{\beta:\frac{d^2}{d\epsilon^2}F(\Psi,\alpha;\Pi,\Theta_{\beta}+\epsilon Q,\gamma)\big|_{\epsilon=0}\le0\}$ occurs whenever the minimum eigenvalue of the matrix
$$\mathrm{diag}\!\left(\sum_{i=1}^{n}\psi_{k,j}\,\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\right)-\beta\sum_{i=1}^{n}\psi_{k,j}\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)^{\top}$$
is zero. The number of rows in $\Theta$ and columns in $\Psi$ needs to be increased once $\beta>\beta_c$.
Proof. 
For part (i), we first substitute the partition matrix update into the Lagrangian F ( Ψ , α ; Π , Θ , γ ) . After some simplification and when ignoring irrelevant terms, we find that
$$\frac{\partial}{\partial\vartheta_{j,i}}F(\Psi,\alpha;\Pi,\Theta,\gamma)=\frac{\partial}{\partial\vartheta_{j,i}}\sum_{r=1}^{n}\gamma_r\log\sum_{q=1}^{m}\alpha_qe^{\beta g(\pi_{r,1:n},\vartheta_{q,1:n})}.$$
Setting this expression to zero and solving, we arrive at $\vartheta_{j,i}\propto\sum_{r=1}^{n}\gamma_r\psi_{r,j}\pi_{r,i}$. Due to the conditions that $\sum_{i=1}^{n}\vartheta_{j,i}=\sum_{i=1}^{n}\pi_{r,i}=1$ for all $j$, we have $\vartheta_{j,i}=\sum_{r=1}^{n}u_{r,j}\pi_{r,i}$, where $u_{r,j}=\gamma_r\psi_{r,j}/\sum_{p=1}^{n}\gamma_p\psi_{p,j}$. It is apparent that the entries of $\Theta$ are non-negative. As well, each row of $\Phi=\Theta\Psi$ is a convex combination of the rows of $\Psi$, so its rows sum to one. Therefore, $\Phi$ is a probabilistic transition matrix.
For part (ii), the second variation of $\sum_{i=1}^{n}\gamma_i\log(\sum_{j=1}^{m}\alpha_je^{\beta g(\pi_{i,1:n},\vartheta_{j,1:n})})$ at the optimal weighting matrix $\Theta$ is given by
$$\begin{aligned}\Delta^2F(\Psi,\alpha;\Pi,\Theta,\gamma)&=\frac{d^2}{d\epsilon^2}F(\Psi,\alpha;\Pi,\Theta_{\beta_0}+\epsilon Q,\gamma)\Big|_{\epsilon=0}\\&=\sum_{j=1}^{m}\sum_{i=1}^{n}\gamma_i\psi_{i,j}\,q_{j,1:n}\left(\mathrm{diag}\!\left(\sum_{i=1}^{n}\psi_{k,j}\,\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\right)-\beta\sum_{i=1}^{n}\psi_{k,j}\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)^{\top}\right)q_{j,1:n}^{\top}\\&\qquad+\beta\sum_{i=1}^{n}\sum_{q=1}^{m}\gamma_i\psi_{i,q}\left(\sum_{r=1}^{n}\gamma_r\psi_{r,q}\big(\pi_{i,1:n}/\vartheta_{k,1:n}\big)q_{q,1:n}^{\top}\right)^{2}\\&\ge\min_j\left(\text{min-eig}\left(\mathrm{diag}\!\left(\sum_{i=1}^{n}\psi_{k,j}\,\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\right)-\beta\sum_{i=1}^{n}\psi_{k,j}\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)^{\top}\right)\right)\ge0.\end{aligned}$$
Here, we have used the fact that $\Delta^2F(\Psi,\alpha;\Pi,\Theta,\gamma)$ is continuous and strictly positive for $\beta_0<\beta<\beta_c$. We have also used the spectral theorem [46] to obtain a lower bound involving the lowest eigenvalue of a self-adjoint operator. Equality to zero, $\Delta^2F(\Psi,\alpha;\Pi,\Theta,\gamma)=0$, can thus first be obtained at $\beta=\beta_c$ if and only if
$$\text{min-eig}\left(\mathrm{diag}\!\left(\sum_{i=1}^{n}\psi_{k,j}\,\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\right)-\beta\sum_{i=1}^{n}\psi_{k,j}\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)\big(\pi_{i,1:n}/\vartheta_{k,1:n}^{2}\big)^{\top}\right)=0.$$
At such a point $\beta$, a bifurcation in $\frac{d^2}{d\epsilon^2}F(\Psi,\alpha;\Pi,\Theta_{\beta_0}+\epsilon Q,\gamma)\big|_{\epsilon=0}$ occurs for a finite perturbation term $Q$, and the minimum is no longer stable. That is, there is a bifurcation on a solution branch that is fixed by the algebraic group of all permutations on $m<n$ symbols, $\mathrm{Sym}(m)$, which follows from the equivariant branching lemma and the Smoller–Wasserman theorem; this bifurcation is symmetry breaking. The equivariant branching lemma gives explicit bifurcating directions for the $m$ branching solutions, each of which has symmetry $\mathrm{Sym}(m-1)$. The branches are hence associated with a $\Theta$ and $\Psi$ of different cardinalities $m$. ☐
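A numerical reading of Proposition A4(ii) is that, for each aggregated state, the matrix $D-\beta B$ formed from the diagonal and outer-product terms above first loses positive definiteness at $\beta=1/\lambda_{\max}(D^{-1/2}BD^{-1/2})$. The sketch below computes this smallest critical value; the reduction to a symmetric eigenvalue problem is our own convenience, and the helper names are hypothetical.

```python
# A sketch of the critical-value test in Proposition A4(ii), assuming NumPy.
import numpy as np

def critical_beta(Pi, Theta, Psi, eps=1e-12):
    n, m = Psi.shape
    betas = []
    for j in range(m):
        R = Pi / (Theta[j][None, :] ** 2 + eps)   # rows pi_{i,1:n} / theta_{j,1:n}^2
        D = np.diag((Psi[:, j][:, None] * R).sum(axis=0))
        B = (Psi[:, j][:, None, None]
             * R[:, :, None] * R[:, None, :]).sum(axis=0)
        # D - beta*B loses positive definiteness first at beta = 1/lambda_max,
        # where lambda_max is the largest eigenvalue of D^{-1/2} B D^{-1/2}.
        Dinv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D) + eps))
        lam = np.linalg.eigvalsh(Dinv_sqrt @ B @ Dinv_sqrt).max()
        betas.append(1.0 / lam if lam > 0 else np.inf)
    return min(betas)
```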
Proposition A5.
Let R π = ( V π , E π , Π ) and R φ = ( V φ , E φ , Φ ) be transition models of two Markov chains over n and m states, respectively, where m < n . R ϑ = ( V ϑ , E ϑ , Θ ) is a joint model, with m + n states. The systematic underestimation of the information cost of the Shannon mutual information term in Definition 11 can be second-order minimized by solving the following optimization problem
$$\min_{\Psi\in\mathbb{R}_+^{n\times m},\,\Theta\in\mathbb{R}_+^{m\times n}}\left\{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)+\frac{2^{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)}}{2n\log(2)}\;\middle|\;\sum_{i=1}^{n}\sum_{j=1}^{m}\gamma_i\psi_{i,j}\,g(\pi_{i,1:n},\vartheta_{j,1:n})\le r,\;0\le\vartheta_{i,k},\psi_{i,k}\le1,\;\sum_{k=1}^{m}\vartheta_{i,k}=1,\;\sum_{k=1}^{m}\psi_{i,k}=1\right\}$$
where $\beta=2^{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)}\big/2n$.
Proof. 
Let us assume that we approximate the true marginal distribution $\gamma_i$ by $\hat{\gamma}_i=\gamma_i+\delta\gamma_i$, where the perturbations $\delta\gamma_i$ have zero average over all possible realizations. There is hence an underestimation of the Shannon mutual information $\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)$, which can be found by taking the multi-order Taylor expansion about $\gamma_i$,
$$\begin{aligned}\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)\bigg|_{\gamma_i+\delta\gamma_i}&=\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)+\sum_{p=2}^{\infty}\frac{(-1)^p}{p(p-1)}\frac{1}{\log(2)\,\alpha_j^{p-1}}\sum_{j=1}^{m}\sum_{w=1}^{n}\sum_{s=1}^{m}\gamma_w\psi_{w,s}\left(\sum_{r=1}^{n}\psi_{r,j}\,\delta\gamma_r\right)^{p}\\&\approx\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)+\frac{1}{2p\log(2)}\sum_{j=1}^{m}\sum_{w=1}^{n}\sum_{s=1}^{m}\frac{\gamma_w\psi_{w,s}}{\alpha_j}\sum_{r=1}^{n}\frac{\psi_{r,j}^{2}}{\gamma_r}\\&\ge\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log\!\left(\frac{\psi_{i,j}}{\gamma_i}\right)+\frac{1}{2p\log(2)}\,2^{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)}.\end{aligned}$$
For the last and second-to-last steps, we considered only the second-order Taylor-series expansion term. For the last step, we used the equality $\sum_{i=1}^{n}\sum_{j=1}^{m}\rho_{i,j}\psi_{i,j}/\alpha_j=\sum_{i=1}^{n}\sum_{j=1}^{m}\rho_{i,j}2^{\log(\psi_{i,j}/\alpha_j)}$, which is bounded below by $2^{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)}$ by the convexity of the exponential. Here, $\rho_{i,j}$ is the joint distribution of the random variables. It can be seen that the Shannon mutual information term is underestimated by $2^{\sum_{j=1}^{m}\alpha_j\sum_{i=1}^{n}\psi_{i,j}\log(\psi_{i,j}/\gamma_i)}\big/2p\log(2)$ bits. The bound on the second-order Taylor expansion hence has a rescaled slope.
Plugging the augmented Shannon information term into the value-of-information Lagrangian, in place of the original Shannon mutual information constraint, and solving yields the following update for $\Psi$,
$$\psi_{i,j}=\frac{\alpha_j}{z}\exp\!\left(\beta\log(2)\,g(\pi_{i,1:n},\vartheta_{j,1:n})+\sum_{s=1}^{m}\sum_{p=2}^{\infty}\frac{(-1)^p}{p\,\alpha_j^{p}}\,\gamma_i\psi_{i,s}\left(\sum_{r=1}^{n}\psi_{r,j}\,\delta\gamma_r\right)^{p}-\frac{1}{(p-1)\,\gamma_i\alpha_j^{p-1}}\sum_{w=1}^{n}\sum_{s=1}^{m}\gamma_i\,\delta\gamma_w\,\psi_{w,s}\left(\sum_{r=1}^{n}\psi_{r,j}\,\delta\gamma_r\right)^{p-1}\right)$$
where z is a normalization factor that ensures the entries of Ψ are probabilities. Considering only the second-order terms from the Taylor expansion and using β = 2 j = 1 m α j i = 1 n ψ i , j log ( ψ i , j / γ i ) / 2 n leads to a second-order minimization of the underestimation of information. ☐
Proposition A6.
Let $R_\pi=(V_\pi,E_\pi,\Pi)$ be a transition model of a Markov chain with $n$ states, where $\Pi\in\mathbb{R}_+^{n\times n}$ is nearly completely decomposable into $m$ Markov sub-chains.
(i) 
The associated low-order stochastic matrix $\Phi\in\mathbb{R}_+^{m\times m}$ found by solving the value of information is given by $\varphi_{i,j}=\sum_{p_i=1}^{n_i}\sum_{q_j=1}^{n_j}\pi_{p_i,q_j}\,\gamma_{p_i}\big/\sum_{q_i=1}^{n_i}\gamma_{q_i}$, where $p_i,q_i$ represent state indices $p=1,\ldots,n_i$ associated with block $i$, while $q_j$ represents a state index $q=1,\ldots,n_j$ into block $j$. The variable $\gamma_{p_i}=\gamma_{p_i}(\Pi)$ denotes the invariant-distribution probability of state $p$ in block $i$ of $\Pi$.
(ii) 
Suppose that $\gamma_{p_i}\big/\sum_{q_i=1}^{n_i}\gamma_{q_i}=v_{p_i}^{(1_i)}$ is approximated by the entries of the first left-eigenvector $v^{(1_i)}$ for block $i$ of $\Pi^*$. We then have that
$$\left\|\gamma\!\left(\sum_{p_i=1}^{n_i}v_{p_i}^{(1_i)}\sum_{q_j=1}^{n_j}\pi_{p_i,q_j}\right)-\gamma(\Pi)\,\Psi\right\|_1\in O(\varepsilon^2),$$
where the first term is the invariant distribution of the low-order matrix γ ( Φ ) , under the simplifying assumption, and Ψ R + n × m is the probabilistic partition matrix found by solving the value of information.
Proof. 
For part (i), we note that the aggregated stochastic matrix for nearly-completely-decomposable chains, obtained via $\Theta=U\Pi$ and $\Phi=\Theta\Psi$, satisfies the implicit system
$$\vartheta_{j,1:n}=\sum_{k=1}^{m}\sum_{p_i=1}^{n_k}u_{j,p_i}\,\pi_{p_i,1:n},\quad\text{where}\quad u_{j,p_i}=\frac{\psi_{p_i,j}\,\gamma_{p_i}}{\sum_{q=1}^{m}\sum_{s_q=1}^{n_q}\psi_{s_q,q}\,\gamma_{s_q}}\quad\text{and}\quad\psi_{p_i,j}=\frac{e^{\beta g(\pi_{p_i,1:n},\,\vartheta_{j,1:n})}}{\sum_{q=1}^{m}\sum_{s_q=1}^{n_q}e^{\beta g(\pi_{s_q,1:n},\,\vartheta_{q,1:n})}}.$$
In what follows, we want to assess the form of the aggregated stochastic matrix Φ when it contains m state groups. However, β dictates the number of state groups in a manner that is dependent on the original stochastic matrix Π . We therefore simply consider what happens when β is infinite and note that the same expression for Φ can be obtained for finite β s. In the former case, we have that
$$\lim_{\beta\to\infty}u_{j,p_i}=\frac{\mathbb{I}\!\left[g(\pi_{p_i,1:n},\vartheta_{j,1:n})=\min_{j'}g(\pi_{p_i,1:n},\vartheta_{j',1:n})\right]\gamma_{p_i}}{\sum_{q=1}^{m}\sum_{s_q=1}^{n_q}\mathbb{I}\!\left[g(\pi_{s_q,1:n},\vartheta_{q,1:n})=\min_{q'}g(\pi_{s_q,1:n},\vartheta_{q',1:n})\right]\gamma_{s_q}}$$
$$\lim_{\beta\to\infty}\psi_{p_i,j}=\lim_{\beta\to\infty}\frac{e^{\beta\left[g(\pi_{p_i,1:n},\vartheta_{j,1:n})-\min_{j'}g(\pi_{p_i,1:n},\vartheta_{j',1:n})\right]}}{\sum_{q=1}^{m}e^{\beta\left[g(\pi_{p_i,1:n},\vartheta_{q,1:n})-\min_{j'}g(\pi_{p_i,1:n},\vartheta_{j',1:n})\right]}}$$
where $\mathbb{I}$ is the indicator function. Due to the nearly-completely-decomposable structure of the Markov chain, $\mathbb{I}\!\left[g(\pi_{p_i,1:n},\vartheta_{j,1:n})=\min_{j'}g(\pi_{p_i,1:n},\vartheta_{j',1:n})\right]\gamma_{p_i}=\gamma_{p_i}$ for each state's own group $j$. Hence,
$$\lim_{\beta\to\infty}\varphi_{i,j}=\Big(\lim_{\beta\to\infty}u_{j,p_i}\Big)\,\Pi\,\Big(\lim_{\beta\to\infty}\psi_{p_i,j}\Big)=\sum_{p_i=1}^{n_i}\frac{\gamma_{p_i}}{\sum_{q_i=1}^{n_i}\gamma_{q_i}}\sum_{q_j=1}^{n_j}\pi_{p_i,q_j}.$$
Part (ii) follows from the work of Courtois [21]. ☐

References

  1. Arruda, E.F.; Fragoso, M.D. Standard dynamic programming applied to time aggregated Markov decision processes. In Proceedings of the IEEE Conference on Decision and Control (CDC), Shanghai, China, 15–18 December 2009; pp. 2576–2580. [Google Scholar] [CrossRef]
  2. Aldhaheri, R.W.; Khalil, H.K. Aggregation of the policy iteration method for nearly completely decomposable Markov chains. IEEE Trans. Autom. Control 1991, 36, 178–187. [Google Scholar] [CrossRef]
  3. Ren, Z.; Krogh, B.H. Markov decision processes with fractional costs. IEEE Trans. Autom. Control 2005, 50, 646–650. [Google Scholar] [CrossRef]
  4. Sun, T.; Zhao, Q.; Luh, P.B. Incremental value iteration for time-aggregated Markov decision processes. IEEE Trans. Autom. Control 2007, 52, 2177–2182. [Google Scholar] [CrossRef]
  5. Jia, Q.S. On state aggregation to approximate complex value functions in large-scale Markov decision processes. IEEE Trans. Autom. Control 2011, 56, 333–334. [Google Scholar] [CrossRef]
  6. Aoki, M. Some approximation methods for estimation and control of large scale systems. IEEE Trans. Autom. Control 1978, 23, 173–182. [Google Scholar] [CrossRef]
  7. Príncipe, J.C. Information Theoretic Learning; Springer-Verlag: New York, NY, USA, 2010. [Google Scholar]
  8. Rached, Z.; Alajaji, F.; Campbell, L.L. The Kullback-Leibler divergence rate between Markov sources. IEEE Trans. Inf. Theory 2004, 50, 917–921. [Google Scholar] [CrossRef]
  9. Donsker, M.D.; Varadhan, S.R.S. Asymptotic evaluation of certain Markov process expectations for large time I. Commun. Pure Appl. Math. 1975, 28, 1–47. [Google Scholar] [CrossRef]
  10. Donsker, M.D.; Varadhan, S.R.S. Asymptotic evaluation of certain Markov process expectations for large time II. Commun. Pure Appl. Math. 1975, 28, 279–301. [Google Scholar] [CrossRef]
  11. Deng, K.; Mehta, P.G.; Meyn, S.P. Optimal Kullback-Leibler aggregation via spectral theory of Markov chains. IEEE Trans. Autom. Control 2011, 56, 2793–2808. [Google Scholar] [CrossRef]
12. Geiger, B.C.; Petrov, T.; Kubin, G.; Koeppl, H. Optimal Kullback-Leibler aggregation via information bottleneck. IEEE Trans. Autom. Control 2015, 60, 1010–1022. [Google Scholar] [CrossRef]
  13. Sledge, I.J.; Príncipe, J.C. An analysis of the value of information when exploring stochastic, discrete multi-armed bandits. Entropy 2018, 20, 155. [Google Scholar] [CrossRef]
  14. Sledge, I.J.; Príncipe, J.C. Analysis of agent expertise in Ms. Pac-Man using value-of-information-based policies. IEEE Trans. Comput. Intell. Artif. Intell. Games 2018. [Google Scholar] [CrossRef]
  15. Sledge, I.J.; Emigh, M.S.; Príncipe, J.C. Guided policy exploration for Markov decision processes using an uncertainty-based value-of-information criterion. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2080–2098. [Google Scholar] [CrossRef]
  16. Stratonovich, R.L. On value of information. Izv. USSR Acad. Sci. Technical Cybern. 1965, 5, 3–12. [Google Scholar]
  17. Stratonovich, R.L.; Grishanin, B.A. Value of information when an estimated random variable is hidden. Izv. USSR Acad. Sci. Technical Cybern. 1966, 6, 3–15. [Google Scholar]
  18. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley and Sons: New York, NY, USA, 2006. [Google Scholar]
  19. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1953. [Google Scholar]
  20. Simon, H.A.; Ando, A. Aggregation of variables in dynamic systems. Econometrica 1961, 29, 111–138. [Google Scholar] [CrossRef]
  21. Courtois, P.J. Error analysis in nearly-completely decomposable stochastic systems. Econometrica 1975, 43, 691–709. [Google Scholar] [CrossRef]
  22. Pervozvanskii, A.A.; Smirnov, I.N. Stationary-state evaluation for a complex system with slowly varying couplings. Kibernetika 1974, 3, 45–51. [Google Scholar] [CrossRef]
  23. Gaitsgori, V.G.; Pervozvanskii, A.A. Aggregation of states in a Markov chain with weak interaction. Kibernetika 1975, 4, 91–98. [Google Scholar] [CrossRef]
  24. Teneketzis, D.; Javid, S.H.; Sridhar, B. Control of weakly-coupled Markov chains. In Proceedings of the IEEE Conference on Decision and Control (CDC), Albuquerque, NM, USA, 10–12 December 1980; pp. 137–142. [Google Scholar] [CrossRef]
  25. Delebecque, F.; Quadrat, J.P. Optimal control of Markov chains admitting strong and weak interactions. Automatica 1981, 17, 281–296. [Google Scholar] [CrossRef]
  26. Zhang, Q.; Yin, G.; Boukas, E.K. Controlled Markov chains with weak and strong interactions. J. Optim. Theory Appl. 1997, 94, 169–194. [Google Scholar] [CrossRef]
  27. Courtois, P.J. Decomposability, Instabilities, and Saturation in Multiprogramming Systems; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  28. Aldhaheri, R.W.; Khalil, H.K. Aggregation and optimal control of nearly completely decomposable Markov chains. In Proceedings of the IEEE Conference on Decision and Control (CDC), Tampa, FL, USA, 13–15 December 1989; pp. 1277–1282. [Google Scholar] [CrossRef]
  29. Kotsalis, G.; Dahleh, M. Model reduction of irreducible Markov chains. In Proceedings of the IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 9–12 December 2003; pp. 5727–5728. [Google Scholar] [CrossRef]
  30. Dey, S. Reduced-complexity filtering for partially observed nearly completely decomposable Markov chains. IEEE Trans. Signal Process. 2000, 48, 3334–3344. [Google Scholar] [CrossRef]
  31. Vantilborgh, H. Aggregation with an error of O(ϵ2). J. ACM 1985, 32, 162–190. [Google Scholar] [CrossRef]
  32. Cao, W.L.; Stewart, W.J. Iterative aggregation/disaggregation techniques for nearly uncoupled Markov chains. J. ACM 1985, 32, 702–719. [Google Scholar] [CrossRef]
  33. Koury, J.R.; McAllister, D.F.; Stewart, W.J. Iterative methods for computing stationary distributions of nearly completely decomposable Markov chains. SIAM J. Algebraic Discrete Methods 1984, 5, 164–186. [Google Scholar] [CrossRef]
34. Barker, G.P.; Plemmons, R.J. Convergent iterations for computing stationary distributions of Markov chains. SIAM J. Algebraic Discrete Methods 1986, 7, 390–398. [Google Scholar] [CrossRef]
  35. Dayar, T.; Stewart, W.J. On the effects of using the Grassman-Taksar-Heyman method in iterative aggregation-disaggregation. SIAM J. Sci. Comput. 1996, 17, 287–303. [Google Scholar] [CrossRef]
  36. Phillips, R.; Kokotovic, P. A singular perturbation approach to modeling and control of Markov chains. IEEE Trans. Autom. Control 1981, 26, 1087–1094. [Google Scholar] [CrossRef]
  37. Peponides, G.; Kokotovic, P. Weak connections, time scales, and aggregation of nonlinear systems. IEEE Trans. Autom. Control 1983, 28, 729–735. [Google Scholar] [CrossRef]
  38. Chow, J.; Kokotovic, P. Time scale modeling of sparse dynamic networks. IEEE Trans. Autom. Control 1985, 30, 714–722. [Google Scholar] [CrossRef]
  39. Filar, J.A.; Gaitsgory, V.; Haurie, A.B. Control of singularly perturbed hybrid stochastic systems. IEEE Trans. Autom. Control 2001, 46, 179–180. [Google Scholar] [CrossRef]
  40. Deng, K.; Sun, Y.; Mehta, P.G.; Meyn, S.P. An information-theoretic framework to aggregate a Markov chain. In Proceedings of the American Control Conference (ACC), St. Louis, MO, USA, 10–12 June 2009; pp. 731–736. [Google Scholar] [CrossRef]
  41. Deng, K.; Huang, D. Model reduction of Markov chains via low-rank approximation. In Proceedings of the American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 2651–2656. [Google Scholar] [CrossRef]
  42. Vidyasagar, M. Reduced-order modeling of Markov and hidden Markov processes via aggregation. In Proceedings of the IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 1810–1815. [Google Scholar] [CrossRef]
  43. Vidyasagar, M. A metric between probability distributions on finite sets of different cardinalities and applications to order reduction. IEEE Trans. Autom. Control 2012, 57, 2464–2477. [Google Scholar] [CrossRef]
  44. Zangwill, W.I. Nonlinear Programming: A Unified Approach; Prentice-Hall: Upper Saddle River, NJ, USA, 1969. [Google Scholar]
45. Treves, A.; Panzeri, S. The upward bias in measures of information derived from limited data samples. Neural Comput. 1995, 7, 399–407. [Google Scholar] [CrossRef]
  46. Kato, T. Perturbation Theory for Linear Operators; Springer-Verlag: New York, NY, USA, 1966. [Google Scholar]
Figure 1. Depiction of the comparison process for exact, binary aggregation of a nine-state Markov chain. The transition matrix $\Phi$ associated with the low-order, three-state Markov chain cannot be directly compared to the transition matrix $\Pi$ of the high-order, nine-state chain for general measures $g$. For example, we may want to compare the fifth row of $\Pi$, $\pi_{5,1:9}$, which is highlighted in green, with the second row of $\Phi$, $\varphi_{2,1:3}$: $g(\pi_{5,1:9},\varphi_{2,1:3})$, which is highlighted in purple. To facilitate this comparison, we consider a joint model whose accumulation matrix $\Theta$ is of the proper size for comparison against $\Pi$. $\Theta$, when multiplied with the binary partition matrix $\Psi$, equals the low-order transition matrix $\Phi$. It can be seen that the accumulation matrix $\Theta$ of the joint model encodes all of the dynamics of $\Phi$. This relationship ensures that $g(\pi_{5,1:9},\vartheta_{2,1:9})$ is actually comparing the dynamics of $\Pi$ and $\Phi$. $\Theta$ has been automatically padded with zero entries, by way of the exact aggregation process developed in Section 3.1, to ensure that it is of the same size as $\Pi$. For this example, only the first, fourth, and eighth entries of any row in $\Pi$ are relevant for the comparison of the entries in $\Phi$. Such entries lead to the maximal preservation of the information between $\Pi$ and $\Phi$ when using the negative, modified Kullback–Leibler divergence for $g$.
Figure 2. Depictions of the various models when using binary-valued partitions for the transition matrices in Figure 1. In (a), we show the transition model for a high-order, nine-state Markov chain (right) and its low-order, three-state transition-model representation (left) after the aggregation process. The numbers along the edges represent the probabilities of transitioning to and from pairs of states. In (b), we show the joint model. The vertices of the joint model represent states in both the high-order and low-order chains. The edges between state pairs in both the high- and low-order chains, which are depicted using dashed lines, are removed. In the joint model, edges are inserted to connect states in the high-order chain with those in the low-order chain, thereby providing the state aggregation. Note that the edge weights in the joint model are unknown a priori and must be uncovered.
Figure 3. Depiction of the comparison process for approximate, probabilistic aggregation of a nine-state Markov chain. Here, we want to compare the second row of the high-order transition matrix $\Pi$ with the first row of a potential low-order stochastic matrix $\Phi$. We show the transition matrix $\Pi$ associated with a nine-state Markov chain on the left; four state clusters are visible along the main diagonal. The corresponding low-order transition matrix $\Phi$ for a four-state chain is given on the right. As before, comparisons between $\Pi$ and $\Phi$ occur by comparing rows of $\Pi$ with rows of $\Theta\Psi$. $\Phi$ can be found via the joint-model weight matrix $\Theta = U\Pi$ and the probabilistic partition $\Psi$: $\Phi = U\Pi\Psi$, where $[U]_{i,j} = u_{i,j}$ with $u_{i,j} = \gamma_i \psi_{i,j} / \sum_{k=1}^{n} \gamma_k \psi_{k,j}$. The least expected distortion between the high-order $\Pi$ and low-order $\Phi$ transition matrices is determined by way of $\Theta$ and $\Psi$. When performing exact aggregation, the dynamics of $\Phi$ are directly encoded in $\Theta$; $\Psi$ is only used to determine which columns of $\Theta$ can be ignored. For approximate aggregation, the dynamics of $\Phi$ are split between $\Theta$ and $\Psi$, since each state in the high-order model can map to multiple states in the low-order model.
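The construction $\Phi = U\Pi\Psi$ from the caption can be sketched directly. The snippet below assumes that $\gamma$ is the stationary distribution of $\Pi$ and that $U$ is stored as an $m \times n$ row-stochastic matrix so the matrix products conform; both conventions are our reading of the caption rather than the paper's stated implementation.

```python
import numpy as np

def stationary_distribution(Pi: np.ndarray) -> np.ndarray:
    """Left eigenvector of Pi for the unit eigenvalue, normalized to sum to one."""
    evals, evecs = np.linalg.eig(Pi.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / v.sum()

def aggregate(Pi: np.ndarray, Psi: np.ndarray, gamma: np.ndarray):
    """Form Theta = U Pi and Phi = U Pi Psi per the Figure 3 caption,
    with u_{i,j} = gamma_i psi_{i,j} / sum_k gamma_k psi_{k,j}."""
    weights = gamma[:, None] * Psi            # (n, m): gamma_i * psi_{i,j}
    U = (weights / weights.sum(axis=0)).T     # (m, n): one row per low-order state
    Theta = U @ Pi                            # joint-model weight matrix
    Phi = Theta @ Psi                         # low-order transition matrix
    return Theta, Phi

# Toy example: nine states, binary partition into three groups of three.
Pi = np.full((9, 9), 0.02)
for g in range(3):
    Pi[3 * g:3 * g + 3, 3 * g:3 * g + 3] = 0.88 / 3.0
Pi /= Pi.sum(axis=1, keepdims=True)
Psi = np.kron(np.eye(3), np.ones((3, 1)))     # (9, 3) binary partition
Theta, Phi = aggregate(Pi, Psi, stationary_distribution(Pi))
assert np.allclose(Phi.sum(axis=1), 1.0)      # Phi is row-stochastic
```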
Figure 4. Depictions of the various models for the transition matrices in Figure 3 when using probabilistic partitions. In (a), we show the transition model for a high-order, nine-state Markov chain (right) and its low-order, four-state transition model representation (left) after the approximate aggregation process. In (b), we show the joint model defined by $\Theta = U\Pi$ and a relatively low value of $\beta$ for this example. As before, each edge within both chains is removed and mappings between states in the two chains are established. For probabilistic partitions, each state in the high-order chain may map to more than one state in the low-order chain. This contrasts with the binary-valued partition case, where each state in the high-order chain could only be associated with a single state in the low-order chain.
Figure 5. An illustration of the phase-change property when the Lagrange multiplier $\beta$ is increased above three critical values. For $0 \le \beta < 0.095$, all of the states in the original chain are grouped together. As $\beta$ is increased slightly beyond this upper threshold, a new state group emerges, as we highlight on the left-hand side of the figure. For any $0.095 \le \beta < 0.119$, only two state groups are formed. As $\beta$ is increased to $\beta \approx 0.119$ and $\beta \approx 0.794$, three and four state groups are formed, respectively; these results are shown in the middle and on the right-hand side of the figure. The 'optimal' value of $\beta$, predicted by our perturbation-theory results, is close to $\beta = 0.119$. This yields a parsimonious aggregation where the state groups are compact and well separated. For $\beta \ge 0.794$, the original chain is over-partitioned: near-coincident clusters are defined in $\Psi$. The value of information hence starts to fit more to the noise in the state transitions than to the well-defined state groupings as $\beta$ is increased beyond the next critical point after the 'optimal' value.
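A practical way to read off the number of state groups at a given $\beta$ is to count the distinct columns of $\Psi$, merging near-coincident ones. The helper below, along with its tolerance and the hypothetical `fit_partition` routine in the usage comment, is an illustrative sketch rather than the paper's procedure.

```python
import numpy as np

def count_effective_groups(Psi: np.ndarray, tol: float = 1e-3) -> int:
    """Count distinct columns of a soft partition matrix Psi, treating
    columns within l1 distance `tol` of each other as the same group.
    This also exposes the duplicated columns that the Shannon-entropy
    constraint produces at high beta (see Figure 9)."""
    distinct = []
    for col in Psi.T:
        if all(np.abs(col - d).sum() > tol for d in distinct):
            distinct.append(col)
    return len(distinct)

# Sweeping beta with some alternating-update routine, here a
# hypothetical fit_partition(Pi, beta), would reveal the plateaus
# between critical values:
#   for beta in np.linspace(0.0, 1.0, 101):
#       Psi = fit_partition(Pi, beta)
#       print(beta, count_effective_groups(Psi))
```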
Figure 6. Value-of-information-based aggregation for a nine-state Markov chain with four discernible state groups. We show the original stochastic matrix $\Pi \in \mathbb{R}^{9 \times 9}$ with the partitions $\Psi \in \mathbb{R}^{m \times 9}$ overlaid for four critical values of $\beta$. We also show the resulting aggregation $\Theta \in \mathbb{R}^{m \times m}$, which, in each case, approximately mimics the dynamics of the original stochastic matrix.
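Test chains like those in Figures 6 and 7 can be synthesized by planting blocks of strong within-group transitions in an otherwise weakly connected stochastic matrix. The generator below, including its uniform sampling ranges, is an illustrative assumption rather than the construction used for the paper's experiments.

```python
import numpy as np

def block_chain(sizes, seed=None) -> np.ndarray:
    """Generate a row-stochastic matrix with len(sizes) discernible
    state groups: within-group entries are drawn from U(0.5, 1),
    between-group entries from U(0, 0.05), and rows are normalized."""
    rng = np.random.default_rng(seed)
    n = sum(sizes)
    Pi = rng.uniform(0.0, 0.05, size=(n, n))
    start = 0
    for s in sizes:
        Pi[start:start + s, start:start + s] = rng.uniform(0.5, 1.0, size=(s, s))
        start += s
    return Pi / Pi.sum(axis=1, keepdims=True)

# Four groups over nine states, roughly as in Figure 6.
Pi = block_chain([3, 2, 2, 2], seed=0)
```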
Figure 7. Value-of-information-based aggregation for a nine-state Markov chain with one discernible state group and six outlying states. We show the original stochastic matrix $\Pi \in \mathbb{R}^{9 \times 9}$ with the partitions $\Psi \in \mathbb{R}^{m \times 9}$ overlaid for four critical values of $\beta$. We also show the resulting aggregation $\Theta \in \mathbb{R}^{m \times m}$, which, in each case, approximately mimics the dynamics of the original stochastic matrix.
Figure 8. Expected-distortion (blue curves) and cross-entropy (red curves) plots for the aggregation results in Figure 6, shown in (a), and Figure 7, shown in (b). For both (a,b), the large, left-most plot shows the expected distortion as a function of the number of state groups $m$ after convergence has been achieved. The 'knee' of the plot in (a) occurs at $m = 4$, while for (b) it is at $m = 7$. These 'knee' regions correspond to the 'optimal' number of state groups returned by our perturbation-theoretic criterion; they indicate where there are diminishing returns for including more aggregated state groups. The four smaller plots in (a,b) show the change in the expected distortion and cross-entropy as a function of the number of alternating-update iterations, highlighting a rapid stabilization of the update process.
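A simple way to locate such a 'knee' numerically is the maximum-distance-from-chord heuristic sketched below. This is a common stand-in, not the perturbation-theoretic criterion the paper actually uses; the function name and the sample curve are illustrative.

```python
import numpy as np

def knee_index(ms, distortions) -> int:
    """Return the index of the 'knee' of a distortion-versus-m curve:
    the point farthest from the chord joining the curve's endpoints."""
    m = np.asarray(ms, dtype=float)
    d = np.asarray(distortions, dtype=float)
    m = (m - m.min()) / (m.max() - m.min())   # normalize both axes so the
    d = (d - d.min()) / (d.max() - d.min())   # chord distance is scale-free
    p0 = np.array([m[0], d[0]])
    chord = np.array([m[-1], d[-1]]) - p0
    chord /= np.linalg.norm(chord)
    pts = np.stack([m, d], axis=1) - p0
    # perpendicular distance of each point from the chord line
    dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
    return int(np.argmax(dist))

# A curve with diminishing returns past m = 4 yields index 2, i.e., m = 4.
print(knee_index([2, 3, 4, 5, 6, 7], [9.0, 4.0, 1.5, 1.2, 1.1, 1.05]))
```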
Figure 9. Value-of-information-based aggregation for a nine-state Markov chain with three discernible state groups (top row) and two discernible state groups (bottom row). The left-most column shows the original stochastic matrices $\Pi \in \mathbb{R}^{9 \times 9}$. The middle column gives the partitions $\Psi \in \mathbb{R}^{9 \times 9}$ found when using a Shannon mutual-information constraint for the expected-distortion objective function. The right-most column gives the partitions $\Psi \in \mathbb{R}^{9 \times 9}$ found when using a Shannon entropy constraint for the expected-distortion objective function. When using Shannon entropy, several columns of the partition matrix are duplicated for high values of $\beta$, leading to an incorrect aggregation of states.