Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality

Transfer entropy is a well-established measure of information flow, which quantifies the directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in two completely different settings: thermodynamics of information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of benefit, whereas the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics.


Introduction
In many scientific problems, we consider the directed influence between two component parts of a complex system. To extract meaningful influence between component parts, methods of time series analysis have been widely used [1][2][3]. In particular, time series analysis based on information theory [4] provides useful methods for detecting the directed influence between component parts. For example, the transfer entropy (TE) [5][6][7] is one of the most influential informational methods for detecting directed influence between two stochastic time series. The main idea behind TE is that, by conditioning on the history of one time series, an informational measure of correlation between two time series represents the information flow that is actually transferred at the present time. Transfer entropy has been widely adopted in a variety of research areas such as economics [8], neural networks [9][10][11], biochemical physics [12][13][14] and statistical physics [15][16][17][18][19]. Several efforts to improve the measure of TE have also been made [20][21][22].
In a variety of fields, concepts similar to TE have been discussed for a long time. In economics, the statistical hypothesis test called Granger causality (GC) has been used to detect the causal relationship between two time series [23,24]. Indeed, for Gaussian variables, the statement of GC is equivalent to TE [25]. In information theory, a closely related informational measure of information flow called the directed information (DI) [26,27] has been discussed as a fundamental bound on noisy channel coding with a causal feedback loop. As in the case of GC, DI can be applied to an economic situation [29,51], namely gambling with side information [4,30].
In this article, we provide a unified perspective on different measures of information flow, i.e., TE, DI, and DIF. We introduce TE for the backward time series [13,38], called the backward transfer entropy (BTE), and clarify the relationship between these informational measures. By considering BTE, we also obtain a tighter bound on the entropy change in a small subsystem, even for non-Markovian processes. In the context of time series analysis, BTE has a proper meaning: it is an informational measure for detecting a hidden Markov model. From the viewpoint of statistical hypothesis testing, BTE quantifies anti-causal predictability. These facts imply that BTE would be a useful directed measure of information flow, as is TE.
Furthermore, we also discuss the analogy between thermodynamics for a small system [32,44,45] and gambling with side information [4,30]. By considering this analogy, we find that TE and BTE play similar roles in both settings of thermodynamics and gambling: BTE quantifies a loss of some benefit, while TE quantifies the benefit itself. Our result reveals a deep connection between two different fields of science, thermodynamics and gambling.

Results
Setting. We consider stochastic dynamics of interacting systems X and Y, which are not necessarily Markovian. We consider a discrete time $k$ $(= 1, \ldots, N)$, and write the states of X and Y at time $k$ as $x_k$ and $y_k$, respectively. Let $x_k^{(l)} := \{x_k, \ldots, x_{k-l+1}\}$ ($y_k^{(l)} := \{y_k, \ldots, y_{k-l+1}\}$) be the path of system X (Y) from time $k-l+1$ to $k$, where $l \geq 1$ is the length of the path. The probability distribution of the composite system at time $k$ is represented by $p(X_k = x_k, Y_k = y_k)$, and that of paths is represented by $p(X_k^{(l)} = x_k^{(l)}, Y_{k'}^{(l')} = y_{k'}^{(l')})$, where capital letters (e.g., $X_k$) represent the random variables of the corresponding states (e.g., $x_k$). Here $p(A = a \mid B = b) := p(A = a, B = b)/p(B = b)$ denotes the conditional probability of $a$ under the condition of $b$.
The dynamics of the composite system are characterized by the conditional probability $p(X_{k+1} = x_{k+1}, Y_{k+1} = y_{k+1} \mid X_k^{(k)} = x_k^{(k)}, Y_k^{(k)} = y_k^{(k)})$.

Transfer entropy. Here, we introduce conventional TE as a measure of directed information flow, defined as the conditional mutual information [4] between two time series under the condition of the target series' past. The mutual information characterizes the static correlation between two systems. The mutual information between X and Y at time $k$ is defined as
$$I(X_k; Y_k) := \left\langle \ln \frac{p(X_k = x_k, Y_k = y_k)}{p(X_k = x_k)\, p(Y_k = y_k)} \right\rangle,$$
where $\langle \cdots \rangle$ denotes the ensemble average. The mutual information is a nonnegative quantity, and vanishes if and only if $x_k$ and $y_k$ are statistically independent, i.e., $p(X_k = x_k, Y_k = y_k) = p(X_k = x_k)\, p(Y_k = y_k)$ [4]. The mutual information quantifies how much the state $y_k$ includes information about $x_k$, or equivalently how much the state $x_k$ includes information about $y_k$. In the same way, the mutual information between two paths $x_k^{(l)}$ and $y_{k'}^{(l')}$ is defined as $I(X_k^{(l)}; Y_{k'}^{(l')})$. While the mutual information is very useful in a variety of fields of science [4], it only represents the statistical correlation between two systems in a symmetric way. To characterize the directed information flow from X to Y, Schreiber [5] introduced TE, defined as
$$T_{X_k^{(l)} \to Y_{k'+1}^{(l'+1)}} := \left\langle \ln \frac{p(X_k^{(l)} = x_k^{(l)} \mid Y_{k'+1}^{(l'+1)} = y_{k'+1}^{(l'+1)})}{p(X_k^{(l)} = x_k^{(l)} \mid Y_{k'}^{(l')} = y_{k'}^{(l')})} \right\rangle,$$
which is the informational difference about the path of system X that is newly obtained from the path of system Y between times $k'$ and $k'+1$. Thus, TE $T_{X_k^{(l)} \to Y_{k'+1}^{(l'+1)}}$ can be regarded as a directed information flow from X to Y at time $k'$. This TE can be rewritten as the conditional mutual information [4] between the path of X and the state of Y under the condition of the history of Y,
$$T_{X_k^{(l)} \to Y_{k'+1}^{(l'+1)}} = I(X_k^{(l)}; Y_{k'+1} \mid Y_{k'}^{(l')}),$$
which implies that TE is a nonnegative quantity, and vanishes if and only if the transition probability in Y from $y_{k'}^{(l')}$ to $y_{k'+1}$ does not depend on the path of X, i.e., $p(Y_{k'+1} = y_{k'+1} \mid Y_{k'}^{(l')} = y_{k'}^{(l')}, X_k^{(l)} = x_k^{(l)}) = p(Y_{k'+1} = y_{k'+1} \mid Y_{k'}^{(l')} = y_{k'}^{(l')})$ [see also Fig. 1(a)].

Backward transfer entropy. Here, we introduce BTE as a novel usage of TE for backward paths. We first consider the backward path of system X (Y), $x_k^{\dagger(l)} := \{x_{N-k+1}, \ldots, x_{N-k+l}\}$ ($y_k^{\dagger(l)} := \{y_{N-k+1}, \ldots, y_{N-k+l}\}$), which is the time-reversed trajectory of system X (Y) from time $N-k+l$ to $N-k+1$. We now introduce the concept of BTE, defined as TE for the backward paths, $T_{X_k^{\dagger(l)} \to Y_{k'+1}^{\dagger(l'+1)}}$ (see also Refs. [15,37]) [see Fig. 1]. Transfer entropy quantifies the dependence on $x_k$ of the transition from $y_k$ to $y_{k+1}$ [see Fig. 1(a)]. In the same way, BTE quantifies the dependence on $y_m$ of the correlation between $x_{m+1}$ and $y_{m+1}$, where $m := N - k$ [see Fig. 1(b)]. Thus, BTE measures how $x_{m+1}$ depends on $y_{m+1}$ beyond the dependence carried by the past state $y_m$. In other words, BTE $T_{X_k^{\dagger(1)} \to Y_{k+1}^{\dagger(2)}}$ corresponds to the edge from $Y_m$ to $X_{m+1}$ on the Bayesian network.

FIG. 1: Bayesian networks for TE and BTE. (a) The transfer entropy $T_{X_k^{(1)} \to Y_{k+1}^{(2)}}$ corresponds to the edge from $X_k$ to $Y_{k+1}$ on the Bayesian network. (b) The backward transfer entropy $T_{X_k^{\dagger(1)} \to Y_{k+1}^{\dagger(2)}}$ corresponds to the edge from $Y_m$ to $X_{m+1}$, where $m = N - k$; if this BTE is zero, the edge from $Y_m$ to $X_{m+1}$ vanishes, i.e., $p(X_{m+1} \mid Y_{m+1}, Y_m) = p(X_{m+1} \mid Y_{m+1})$.
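To make these definitions concrete, the following minimal sketch (not part of the original formalism) estimates TE and BTE from discrete time series by plug-in estimates of the conditional mutual information. It assumes unit path lengths ($l = l' = 1$, $k' = k$) and a small discrete state space; all function names and model parameters are illustrative.

```python
import numpy as np
from collections import Counter

def cond_mutual_info(triples):
    """Plug-in estimate of I(A; B | C) from a list of (a, b, c) samples."""
    n = len(triples)
    n_abc = Counter(triples)
    n_bc = Counter((b, c) for _, b, c in triples)
    n_ac = Counter((a, c) for a, _, c in triples)
    n_c = Counter(c for _, _, c in triples)
    # I(A;B|C) = sum_{a,b,c} p(a,b,c) ln[ p(a,b,c) p(c) / (p(a,c) p(b,c)) ]
    return sum((m / n) * np.log(m * n_c[c] / (n_ac[(a, c)] * n_bc[(b, c)]))
               for (a, b, c), m in n_abc.items())

def transfer_entropy(x, y):
    """TE from X to Y with unit path lengths: I(X_k; Y_{k+1} | Y_k)."""
    return cond_mutual_info(list(zip(x[:-1], y[1:], y[:-1])))

def backward_transfer_entropy(x, y):
    """BTE from X to Y: TE evaluated on the time-reversed series,
    i.e. I(X_{m+1}; Y_m | Y_{m+1})."""
    return transfer_entropy(x[::-1], y[::-1])

# Example: Y is a noisy, one-step-delayed sensor of the Markov chain X.
rng = np.random.default_rng(0)
x = [0]
for _ in range(200_000):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])
y = [0] + [xk if rng.random() < 0.8 else 1 - xk for xk in x[:-1]]
print(transfer_entropy(x, y))           # > 0: directed information flow from X to Y
print(backward_transfer_entropy(x, y))  # > 0: not a hidden Markov model in X (see below)
```

Plug-in estimates of this kind are biased upward for finite samples, so in practice the values are best compared against a surrogate (shuffled) baseline.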
In this sense, BTE may represent "the time-reversed directed information flow from the future to the past." Although BTE is well defined as a conditional mutual information, it is nontrivial whether such a concept makes any sense information-theoretically or physically, because the stochastic dynamics of the composite system do not necessarily have time-reversal symmetry. To clarify the proper meaning of BTE, we rewrite BTE $T_{X_k^{\dagger(1)} \to Y_{k+1}^{\dagger(2)}}$ as the conditional mutual information $I(X_{m+1}; Y_m \mid Y_{m+1})$ ($m = N - k$), which is nonnegative and vanishes if and only if the Markov chain $Y_m \to Y_{m+1} \to X_{m+1}$ exists, i.e.,
$$p(X_{m+1} = x_{m+1} \mid Y_{m+1} = y_{m+1}, Y_m = y_m) = p(X_{m+1} = x_{m+1} \mid Y_{m+1} = y_{m+1}), \qquad (7)$$
which implies that the dynamics of X are given by a hidden Markov model whose hidden process is Y. In general, BTE $T_{X_k^{\dagger(l)} \to Y_{k'+1}^{\dagger(l'+1)}}$ is nonnegative and vanishes if and only if the corresponding Markov chain of paths exists. Therefore, BTE from X to Y quantifies how far the composite dynamics of X and Y are from a hidden Markov model in X.

Thermodynamics of information. We next discuss a thermodynamic meaning of BTE. To clarify the interpretation of BTE in nonequilibrium stochastic thermodynamics, we consider the following non-Markovian interacting dynamics,
$$p(X_{k+1} = x_{k+1} \mid X_k^{(k)} = x_k^{(k)}, Y_k^{(k)} = y_k^{(k)}) = p(X_{k+1} = x_{k+1} \mid X_k = x_k, Y_{k-n} = y_{k-n}), \qquad (8)$$
where the nonnegative integer $n$ represents the time delay between X and Y. The stochastic entropy change in the heat bath B attached to the system X from time 1 to N in the presence of Y [15] is defined as
$$\Delta s^X_B := \sum_k \ln \frac{p(X_{k+1} = x_{k+1} \mid X_k = x_k, Y_{k-n} = y_{k-n})}{Q(X_k = x_k \mid X_{k+1} = x_{k+1}, Y_{k-n} = y_{k-n})}, \qquad (9)$$
where $Q$ is the transition probability of the backward dynamics, which satisfies the normalization $\sum_{x_k} Q(X_k = x_k \mid X_{k+1} = x_{k+1}, Y_{k-n} = y_{k-n}) = 1$. For example, if the systems X and Y do not include any odd variable that changes its sign under time reversal, the backward probability is given by $Q(X_k = x_k \mid X_{k+1} = x_{k+1}, Y_{k-n} = y_{k-n}) = p(X_{k+1} = x_k \mid X_k = x_{k+1}, Y_{k-n} = y_{k-n})$. This definition of the entropy change in the heat bath, Eq. (9), is well known as the local detailed balance condition or the detailed fluctuation theorem [45]. We define the entropy change in X and the heat bath as
$$\Delta S^X_B := \langle \Delta s^X + \Delta s^X_B \rangle,$$
where $\Delta s^X := -\ln p(X_N = x_N) + \ln p(X_1 = x_1)$ is the stochastic Shannon entropy change in X.
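As a sanity check on the hidden-Markov-model criterion (7), and independent of the thermodynamic setting below, the following sketch computes the single-step BTE $I(X_{m+1}; Y_m \mid Y_{m+1})$ exactly by enumerating a small joint distribution. The transition and emission matrices are arbitrary illustrative choices: when $X_{m+1}$ is emitted from $Y_{m+1}$ alone, the BTE vanishes identically; when the emission instead depends on $Y_m$, it does not.

```python
import itertools
import numpy as np

def exact_cmi(joint):
    """I(A; B | C) computed exactly from a joint distribution joint[(a, b, c)]."""
    pc, pac, pbc = {}, {}, {}
    for (a, b, c), p in joint.items():
        pc[c] = pc.get(c, 0.0) + p
        pac[(a, c)] = pac.get((a, c), 0.0) + p
        pbc[(b, c)] = pbc.get((b, c), 0.0) + p
    return sum(p * np.log(p * pc[c] / (pac[(a, c)] * pbc[(b, c)]))
               for (a, b, c), p in joint.items() if p > 0)

# Y_m -> Y_{m+1}: a two-state Markov step; X_{m+1}: an emission from Y_{m+1} alone.
T = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}   # p(y_{m+1} | y_m)
E = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}   # p(x_{m+1} | y_{m+1})
p_y = {0: 0.5, 1: 0.5}                                      # distribution of Y_m

# Single-step BTE is I(X_{m+1}; Y_m | Y_{m+1}); order the keys as (x1, y0, y1).
joint_hmm = {(x1, y0, y1): p_y[y0] * T[(y0, y1)] * E[(y1, x1)]
             for x1, y0, y1 in itertools.product((0, 1), repeat=3)}
print(exact_cmi(joint_hmm))   # ~ 0 (floating-point rounding): BTE vanishes for an HMM

# If the emission depends on the previous state Y_m instead, BTE becomes positive.
joint_lag = {(x1, y0, y1): p_y[y0] * T[(y0, y1)] * E[(y0, x1)]
             for x1, y0, y1 in itertools.product((0, 1), repeat=3)}
print(exact_cmi(joint_lag))   # > 0: the dynamics are not a hidden Markov model in X
```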
For the non-Markovian interacting dynamics Eq. (8), we have the following inequalities (see Methods):
$$-\Delta S^X_B \leq I_n(X_N^{(N)} \to Y_N^{(N)}) - I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)}), \qquad (12)$$
$$-\Delta S^X_B \leq I_n(X_N^{(N)} \to Y_N^{(N)}), \qquad (13)$$
where the informational quantities $I_n(X_N^{(N)} \to Y_N^{(N)})$ and $I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)})$ are defined below. We add that an additional term in the derivation, present only for a nonzero time delay, vanishes for the Markovian interacting dynamics (n = 0).
These results [Eqs. (12) and (13)] can be interpreted as a generalized second law of thermodynamics for the subsystem X in the presence of information flow from X to Y. If there is no interaction between X and Y, the informational terms vanish, i.e., $T_{X_k^{\dagger(1)} \to Y_{k+1}^{\dagger(2)}} = 0$, $I(X_N; Y_N) = 0$ and $I(X_1; Y_1) = 0$, and these results reproduce the conventional second law of thermodynamics, $\Delta S^X_B \geq 0$, which indicates the nonnegativity of the entropy change in X and the bath [45]. If there is some interaction between X and Y, $\Delta S^X_B$ can be negative, and its lower bound is given by the nonnegative quantity $I_n(X_N^{(N)} \to Y_N^{(N)}) \geq 0$, defined as the sum of TE from X to Y over the time steps plus the mutual information between X and Y at the initial time, $I(X_1; Y_1)$. In information theory, the quantity $I_0(X_N^{(N)} \to Y_N^{(N)})$ is known as DI from X to Y [27]. Intuitively speaking, $-\Delta S^X_B$ quantifies a kind of thermodynamic benefit, because the negativity of $\Delta S^X_B$ is related to the work extraction from X in the presence of Y [32]. Thus, the weaker bound (13) implies that the sum of TE quantifies a possible thermodynamic benefit of X in the presence of Y.
We next consider the sum of TE for the time-reversed trajectories, $I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)})$, which is given by the sum of BTE plus the mutual information between X and Y at the final time, $I(X_N; Y_N)$. The tighter bound (12) can then be rewritten as the difference between the sum of TE and the sum of BTE,
$$-\Delta S^X_B \leq I_n(X_N^{(N)} \to Y_N^{(N)}) - I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)}). \qquad (16)$$
This result implies that the possible benefit $I_n(X_N^{(N)} \to Y_N^{(N)})$ is reduced by the sum of BTE contained in $I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)})$. Thus, the sum of BTE represents a loss of thermodynamic benefit. We add that the tighter bound $I_n(X_N^{(N)} \to Y_N^{(N)}) - I_n(X_N^{\dagger(N)} \to Y_N^{\dagger(N)})$ is not necessarily nonnegative, while the weaker bound $I_n(X_N^{(N)} \to Y_N^{(N)})$ is.

We here consider the case of Markovian interacting dynamics (n = 0). For Markovian interacting dynamics, we have the following additivity of the tighter bound [see Supplementary Information (SI)]:
$$I_0(X_N^{(N)} \to Y_N^{(N)}) - I_0(X_N^{\dagger(N)} \to Y_N^{\dagger(N)}) = \sum_{k=1}^{N-1} \big[ I(X_k; Y_{k+1} \mid Y_k) - I(X_{k+1}; Y_k \mid Y_{k+1}) + I(X_k; Y_k) - I(X_{k+1}; Y_{k+1}) \big], \qquad (17)$$
where the first and second terms in the summand are the TE and the BTE for a single time step, respectively. This additivity implies that the tighter bound for multiple time steps is equal to the sum of the tighter bounds for single time steps. We stress that the tighter bound for a single time step has been derived in Ref. [13].

We next consider the continuous limit $x_k = x(t = k\Delta t)$, $y_k = y(t = k\Delta t)$ and $N = O(\Delta t^{-1})$, where $t$ denotes continuous time, $\Delta t \ll 1$ is an infinitesimal time interval, and the symbol $O$ is the Landau notation. Here we clarify the relationship between the single-time-step tighter bound (17) and DIF [34] (or the learning rate [18]), defined here as the change of the mutual information due to the dynamics of X in a single time step,
$$I^{\text{flow}}_k := I(X_{k+1}; Y_k) - I(X_k; Y_k).$$
For a bipartite Markov jump process [18] or two-dimensional Langevin dynamics without correlations between the thermal noises in X and Y [15], we have the following relationship [see also SI]:
$$-I^{\text{flow}}_k = I(X_k; Y_{k+1} \mid Y_k) - I(X_{k+1}; Y_k \mid Y_{k+1}) - \big[ I(X_{k+1}; Y_{k+1}) - I(X_k; Y_k) \big]. \qquad (18)$$
Thus the bound by TE and BTE is equivalent to a bound by DIF for such systems in the continuous limit, i.e., $-\Delta S^X_B \leq I_0(X_N^{(N)} \to Y_N^{(N)}) - I_0(X_N^{\dagger(N)} \to Y_N^{\dagger(N)}) = -\sum_k I^{\text{flow}}_k$.

Gambling with side information. In classical information theory, the formalism of gambling with side information is well known as another perspective on information theory, alongside data compression and coding over a noisy communication channel [4,30]. In gambling with side information, the mutual information between the result of the gamble and the side information gives a bound on the gambler's benefit. This formalism of gambling is similar to the above-mentioned result in thermodynamics of information: in thermodynamics, a thermodynamic benefit (e.g., work extraction) can be obtained by using information, while in gambling the gambler obtains a benefit by using side information. We here clarify the analogy between gambling and thermodynamics in the presence of information flow; in clarifying this analogy, BTE plays a crucial role as well as TE.
We introduce the basic concept of gambling with side information via the horse race [4,30]. Let $y_k$ be the horse that won the $k$-th race. Let $f_k \geq 0$ and $o_k \geq 0$ be the bet fraction and the odds for the $k$-th race, respectively. Let $m_k$ be the gambler's wealth before the $k$-th race. Let $s_k$ be the side information at time $k$. We consider the set of side information $x_{k-1} := \{s_1, \ldots, s_{k-1}\}$, which the gambler can access before the $k$-th race. The bet fraction $f_k$ is given by the function $f_k(y_k \mid y_{k-1}^{(k-1)}, x_{k-1})$ for $k \geq 2$, and by $f_1(y_1 \mid x_1)$ for the first race. This conditional dependence implies that the gambler can decide the bet fraction $f_k$ ($f_1$) by considering the past race results $y_{k-1}^{(k-1)} = \{y_1, \ldots, y_{k-1}\}$ and the accessible side information. We assume the normalizations of the bet fractions,
$$\sum_{y_k} f_k(y_k \mid y_{k-1}^{(k-1)}, x_{k-1}) = 1, \qquad \sum_{y_1} f_1(y_1 \mid x_1) = 1,$$
which mean that the gambler bets all of their money in every race. We also assume that $\sum_{y_k} 1/o_k(y_k) = 1$. This condition is satisfied if the odds in every race are fair, i.e., if $1/o_k(y_k)$ is given by a probability distribution of $Y_k$.
The stochastic wealth growth rate of the gambler at the $k$-th race is given by $\ln(m_{k+1}/m_k) = \ln[f_k(y_k \mid y_{k-1}^{(k-1)}, x_{k-1})\, o_k(y_k)]$, which implies that the gambler's wealth changes stochastically according to the bet fraction and the odds. The information theory of gambling with side information indicates that the ensemble average of the total wealth growth, $G := \langle \ln(m_{N+1}/m_1) \rangle$, is bounded as
$$G \leq I_0(X_N^{(N)} \to Y_N^{(N)}), \qquad (21)$$
where X here denotes the time series of the side information. This result (21) implies that the sum of TE can be interpreted as a possible benefit of the gambler.
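For illustration, the following sketch simulates the simplest instance of the bound (21): two horses, fair odds, and side information given by a binary symmetric channel, with i.i.d. races so that the informational bound reduces to the mutual information $I(Y_k; S_k)$ per race. The setup and names are illustrative, not the general construction used in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
p_y = np.array([0.5, 0.5])      # two horses, i.i.d. races
odds = 1.0 / p_y                # fair odds: 1/o_k(y) = p(y)
q = 0.85                        # side information equals the winner with prob. q
races = 100_000
growth_blind = growth_side = 0.0
for _ in range(races):
    y = rng.choice(2, p=p_y)
    s = y if rng.random() < q else 1 - y
    f_blind = p_y                                   # Kelly bet without side information
    f_side = np.where(np.arange(2) == s, q, 1 - q)  # Kelly bet: p(y | s)
    growth_blind += np.log(f_blind[y] * odds[y])
    growth_side += np.log(f_side[y] * odds[y])

mi = q * np.log(q / 0.5) + (1 - q) * np.log((1 - q) / 0.5)  # I(Y_k; S_k) per race
print(growth_blind / races)  # = 0: fair odds and no side information
print(growth_side / races)   # ~ mi: Kelly betting saturates the informational bound
print(mi)
```

Betting the conditional belief $p(y \mid s)$ (Kelly betting) is what makes the bound tight; any other bet fraction yields a strictly smaller average growth rate.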
We now discuss the analogy between thermodynamics of information and gambling with side information. The weaker bound in gambling with side information (21) is similar to the weaker bound (13) in thermodynamics of information, where the negative entropy change $-\Delta S^X_B$ corresponds to the total wealth growth $G$. On the other hand, the tighter bound in gambling with side information (20) is rather different from the tighter bound given by the sum of BTE in thermodynamics of information (16). We show that a tighter bound in gambling is also given by the sum of BTE if we consider the special case in which the bookmaker, who decides the odds $o_k$, cheats in the horse race: the odds $o_k$ can be decided using the inaccessible future side information $x_{k+1}$ and the results of the future races $y_N^{(N-k)} = \{y_{k+1}, \ldots, y_N\}$ [see also Fig. 2]. In this special case, the fair odds of the $k$-th race can be given by the conditional probability of the future information, $1/o_k(y_k) = p(Y_k = y_k \mid Y_N^{(N-k)} = y_N^{(N-k)}, X_{k+1} = x_{k+1})$ for $k \leq N-1$, and $1/o_N(y_N) = p(Y_N = y_N \mid X_N = x_N)$. The inequality (20) can then be rewritten as
$$G \leq I_0(X_N^{(N)} \to Y_N^{(N)}) - I_0(X_N^{\dagger(N)} \to Y_N^{\dagger(N)}), \qquad (22)$$
which implies that the sum of BTE in $I_0(X_N^{\dagger(N)} \to Y_N^{\dagger(N)})$ represents a loss of the gambler's benefit, due to the cheating by the bookmaker who can access the future information anti-causally. We stress that Eq. (22) has the same form as the thermodynamic inequality (16) for Markovian interacting dynamics (n = 0). This fact implies that thermodynamics of information can be interpreted as a special case of gambling with side information: the gambler uses the past information and the bookmaker uses the future information. If we regard thermodynamic dynamics as gambling, this anti-causal effect should be taken into account.
Causality. We here show that BTE itself is related to anti-causality, without reference to gambling. From the viewpoint of statistical hypothesis testing, TE is equivalent to GC for Gaussian variables [25]. Therefore, it is natural to expect that BTE can be interpreted as a kind of causality test.
Suppose that we consider the two linear regression models
$$y_{k'+1} = \alpha + A \cdot (y_{k'}^{(l')} \oplus x_k^{(l)}) + \epsilon, \qquad y_{k'+1} = \alpha' + A' \cdot y_{k'}^{(l')} + \epsilon',$$
where $\alpha$ ($\alpha'$) is a constant term, $A$ ($A'$) is a vector of regression coefficients, $\oplus$ denotes the concatenation of vectors, and $\epsilon$ ($\epsilon'$) is an error term. The Granger causality of X to Y quantifies how much the past time series of X in the first model reduces the prediction error of $y_{k'+1}$ compared with the error in the second model. Performing ordinary least squares to find the regression coefficients $A$ ($A'$) and $\alpha$ ($\alpha'$) that minimize the variance of $\epsilon$ ($\epsilon'$), the standard measure of GC is given by
$$\mathcal{G}_{X \to Y} := \ln \frac{\mathrm{var}(\epsilon')}{\mathrm{var}(\epsilon)},$$
where $\mathrm{var}(\epsilon)$ denotes the variance of $\epsilon$. Here we assume that the joint probability $p(X_k^{(l)} = x_k^{(l)}, Y_{k'+1}^{(l'+1)} = y_{k'+1}^{(l'+1)})$ is Gaussian. Under the Gaussian assumption, TE and GC are equivalent up to a factor of 2,
$$\mathcal{G}_{X \to Y} = 2\, T_{X_k^{(l)} \to Y_{k'+1}^{(l'+1)}}.$$
In the same way, we discuss BTE from the viewpoint of GC. Here we assume that the joint probability of the backward paths is Gaussian, and consider the two linear regression models
$$y_{k'+1}^{\dagger} = \alpha^{\dagger} + A^{\dagger} \cdot (y_{k'}^{\dagger(l')} \oplus x_k^{\dagger(l)}) + \epsilon^{\dagger}, \qquad y_{k'+1}^{\dagger} = \alpha'^{\dagger} + A'^{\dagger} \cdot y_{k'}^{\dagger(l')} + \epsilon'^{\dagger},$$
where $\alpha^{\dagger}$ ($\alpha'^{\dagger}$) is a constant term, $A^{\dagger}$ ($A'^{\dagger}$) is a vector of regression coefficients, and $\epsilon^{\dagger}$ ($\epsilon'^{\dagger}$) is an error term. These linear regression models give a prediction of a past state of Y using the future time series of X and Y.
Intuitively speaking, we consider GC of X to Y for a rewound playback of the composite dynamics of X and Y.
We call this causality test the Granger anti-causality of X to Y. Performing ordinary least squares to find $A^{\dagger}$ ($A'^{\dagger}$) and $\alpha^{\dagger}$ ($\alpha'^{\dagger}$) that minimize $\mathrm{var}(\epsilon^{\dagger})$ ($\mathrm{var}(\epsilon'^{\dagger})$), we define a measure of the Granger anti-causality of X to Y as
$$\mathcal{G}^{\dagger}_{X \to Y} := \ln \frac{\mathrm{var}(\epsilon'^{\dagger})}{\mathrm{var}(\epsilon^{\dagger})}.$$
The backward transfer entropy is equivalent to the Granger anti-causality up to a factor of 2,
$$\mathcal{G}^{\dagger}_{X \to Y} = 2\, T_{X_k^{\dagger(l)} \to Y_{k'+1}^{\dagger(l'+1)}}. \qquad (29)$$
This fact implies that BTE can be interpreted as a kind of anti-causality test. We stress that the composite dynamics of X and Y are not necessarily driven anti-causally even if the measure of Granger anti-causality is nonzero. Just as GC detects only predictive causality [23,24], the Granger anti-causality detects only predictive causality for the backward time series.
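The following sketch estimates both measures by ordinary least squares using the variance-ratio form above; the Granger anti-causality is obtained simply by running the same regression on the time-reversed series. The AR model and function names are illustrative.

```python
import numpy as np

def granger(x, y, lag=1, reverse=False):
    """ln var(eps')/var(eps): Granger causality of X to Y, or, with
    reverse=True, the Granger anti-causality (regression on reversed time)."""
    x = np.asarray(x, float)[::-1] if reverse else np.asarray(x, float)
    y = np.asarray(y, float)[::-1] if reverse else np.asarray(y, float)
    n = len(y) - lag
    target = y[lag:]
    # Lagged design columns: y_{k}, ..., y_{k-lag+1} (and likewise for x).
    hist_y = np.column_stack([y[lag - 1 - j: n + lag - 1 - j] for j in range(lag)])
    hist_x = np.column_stack([x[lag - 1 - j: n + lag - 1 - j] for j in range(lag)])
    ones = np.ones((n, 1))
    def resid_var(design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)
    full = resid_var(np.hstack([ones, hist_y, hist_x]))  # with X's history
    restricted = resid_var(np.hstack([ones, hist_y]))    # Y's own history only
    return np.log(restricted / full)

# Gaussian AR example: Y is driven by the past of X.
rng = np.random.default_rng(2)
x, y = np.zeros(50_000), np.zeros(50_000)
for t in range(1, 50_000):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()
print(granger(x, y))                # Granger causality (~ 2 x TE for Gaussian data)
print(granger(x, y, reverse=True))  # Granger anti-causality (~ 2 x BTE)
```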

Discussion
We proposed a directed measure of information flow called BTE, which is useful for detecting a hidden Markov model (7) and predictive anti-causality (29). In both settings of thermodynamics and gambling, the measurement of BTE has a profitable meaning: the detection of a loss of a possible benefit in the inequalities (16) and (22).
The concept of BTE can provide a clear perspective in studies of biochemical sensors and thermodynamics of information, because the difference between TE and DIF has recently attracted attention in these fields [14,35]. In Ref. [14], Hartich et al. proposed a novel informational measure for the biochemical sensor called the sensory capacity, defined as the ratio between DIF and TE, $C := -I^{\text{flow}}_k / T_{X_k^{(1)} \to Y_{k+1}^{(2)}}$. Because DIF can be rewritten in terms of TE and BTE [Eq. (18)] for Markovian interacting dynamics, we have the following expression for the sensory capacity in a stationary state:
$$C = 1 - \frac{T_{X_k^{\dagger(1)} \to Y_{k+1}^{\dagger(2)}}}{T_{X_k^{(1)} \to Y_{k+1}^{(2)}}}, \qquad (30)$$
where we used $I(X_{k+1}; Y_{k+1}) = I(X_k; Y_k)$ in a stationary state. This fact indicates that the ratio between TE and BTE could be useful to quantify the performance of a biochemical sensor. Using this expression (30), we find that the maximum value of the sensory capacity, $C = 1$, is achieved if the Markov chain of a hidden Markov model $Y_k \to Y_{k+1} \to X_{k+1}$ exists. In Ref. [35], Horowitz and Sandberg compared two thermodynamic bounds, by TE and by DIF, for two-dimensional Langevin dynamics. For the Kalman-Bucy filter, which is the optimal controller, they found that DIF is equivalent to TE in a stationary state. This fact can be clarified by the concept of BTE: because the Kalman-Bucy filter can be interpreted as a hidden Markov model, BTE vanishes, and hence DIF is equivalent to TE in a stationary state.
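As an illustration of the expression (30), the following sketch evaluates $C = 1 - T^{\dagger}/T$ exactly for a minimal stationary bipartite sensor: X is a two-state Markov chain and Y is a memoryless noisy reading of X. The model parameters are illustrative, and the exact_cmi helper repeats the one in the earlier sketch.

```python
import itertools
import numpy as np

def exact_cmi(joint):
    """I(A; B | C) from joint[(a, b, c)] = p(a, b, c) (same helper as above)."""
    pc, pac, pbc = {}, {}, {}
    for (a, b, c), p in joint.items():
        pc[c] = pc.get(c, 0.0) + p
        pac[(a, c)] = pac.get((a, c), 0.0) + p
        pbc[(b, c)] = pbc.get((b, c), 0.0) + p
    return sum(p * np.log(p * pc[c] / (pac[(a, c)] * pbc[(b, c)]))
               for (a, b, c), p in joint.items() if p > 0)

# Bipartite sensor: X flips with probability a; the next sensor state Y_{k+1}
# is a noisy reading (error e) of the current X_k.
a, e = 0.1, 0.2
T = np.array([[1 - a, a], [a, 1 - a]])   # p(x_{k+1} | x_k)
S = np.array([[1 - e, e], [e, 1 - e]])   # p(y_{k+1} | x_k)

# Stationary distribution pi(x, y) of the joint chain.
P = np.zeros((4, 4))
for x, y, x2, y2 in itertools.product((0, 1), repeat=4):
    P[2 * x + y, 2 * x2 + y2] = T[x, x2] * S[x, y2]
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def marginal(select):
    """Marginalize the one-step joint p(x, y, x', y') onto three chosen variables."""
    out = {}
    for x, y, x2, y2 in itertools.product((0, 1), repeat=4):
        key = select(x, y, x2, y2)
        out[key] = out.get(key, 0.0) + pi[2 * x + y] * T[x, x2] * S[x, y2]
    return out

te = exact_cmi(marginal(lambda x, y, x2, y2: (x, y2, y)))    # I(X_k; Y_{k+1} | Y_k)
bte = exact_cmi(marginal(lambda x, y, x2, y2: (x2, y, y2)))  # I(X_{k+1}; Y_k | Y_{k+1})
print(te, bte, 1 - bte / te)  # sensory capacity C = 1 - BTE/TE < 1 for this sensor
```

Because this memoryless sensor is not an optimal (Kalman-Bucy-like) filter, its BTE is strictly positive and the computed C falls below the maximum value 1.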
Our results can be interpreted as a generalization of previous works in thermodynamics of information [46][47][48]. In Refs. [46,47], S. Still et al. discussed prediction in thermodynamics for Markovian interacting dynamics. In our results, we show the connection between thermodynamics of information and predictive causality from the viewpoint of GC. Thus, our results give a new insight into these works on prediction in thermodynamics. In Ref. [48], G. Diana and M. Esposito introduced the time-reversed mutual information for Markovian interacting dynamics.

FIG. 2: Schematic of the special case of the horse race. The gambler can access only the past side information $x_{k-1}$ and the past race results $y_{k-1}^{(k-1)} = \{y_1, \ldots, y_{k-1}\}$, and decides the bet fraction $f_k$ on the $k$-th race. The bookmaker cheats: he can access the future side information $x_{k+1}$ and the future race results $y_N^{(N-k)} = \{y_{k+1}, \ldots, y_N\}$, and decides the odds on the $k$-th race.
