Article

Fisher Information Properties

Pablo Zegers

Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Monseñor Álvaro del Portillo 12.455, Las Condes, Santiago, Chile
Entropy 2015, 17(7), 4918-4939; https://doi.org/10.3390/e17074918
Submission received: 18 June 2015 / Accepted: 10 July 2015 / Published: 13 July 2015
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

A set of Fisher information properties is presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a new proof that Fisher information increases under conditioning; (iii) a proof that Fisher information decreases in Markov chains; and (iv) a bound on the estimation error based on Fisher information. This last result is especially important, because it complements Fano’s inequality, i.e., a lower bound for the estimation error, showing that Fisher information can be used to define an upper bound for this error. In this way, it is shown that Shannon’s differential entropy, which quantifies the behavior of the random variable, and the Fisher information, which quantifies the internal structure of the density function that defines the random variable, can be used to characterize the estimation error.

1. Introduction

The birth of information theory was signaled by the publication of Claude Shannon’s work [1], which is based on studying the behavior of systems described by density functions. However, well before that work was published, Ronald Fisher had already published the definition of a quantity called Fisher information [2], which imposes a hard bound on the capacity to estimate the parameters that define a system [3,4]. Hence, this quantity regulates how well it is possible to determine the internal structure of a system and provides another point of view that can be used to study systems: how they are composed, what they are made of. This work springs from the belief that the combination of these approaches is what completely defines systems: their behavior (Shannon) and their architecture (Fisher). In the following, a series of published results is summarized, together with new results, in order to present a coherent set of Fisher information properties that will hopefully be useful for those who work with this quantity.

1.1. Fisher Information and Other Fields

One connection between Fisher information and the Shannon differential entropy was stated by Kullback [5] (p. 26), who proved that the second derivatives of the Kullback–Leibler divergence with respect to the density function’s parameters produce the Fisher information matrix terms. Related results were presented by Blahut [6] (p. 300) and Frieden [7] (p. 37). Another important result that also relates these two frameworks is de Bruijn’s identity ([8,9] and [10] (p. 672)), which establishes a relation between the derivative of Shannon differential entropy and Fisher information when the underlying random variable is subject to Gaussian perturbations. This result was recently generalized to non-Gaussian perturbations [11,12]. A consequence of these results is the convolution inequality for Fisher information ([8,9,13–16]; [10] (p. 674)).
Others have studied the relation between Fisher information and physics. Here, it is important to point out the extreme physical information principle derived by Frieden and others in order to establish a general framework that explains physics [7,17–20]. Of special interest has been the role of Fisher information in generating thermodynamical theory [7,17–22]. It is very common in these approaches to use a special case of Fisher information where the estimated parameter is a location parameter. In this work, only the original and general Fisher information definition is addressed, not this special case.
Even though Shannon’s ideas have been part of the machine learning tool set for a long time, Fisher information has not followed the same track. Even though Fisher information is intimately connected to estimation theory [23], its use in the development of learning systems has not been fully developed yet. Nevertheless, Amari discovered that natural gradient descent, i.e., common gradient descent corrected with the Fisher information matrix terms, takes the topology into account in a more precise manner, allowing for more efficient training procedures [24,25]. Fisher information has also been taken into account in order to design objective functions that guide the estimation procedure. One of them mixes maximum entropy with minimum Fisher information [26,27]. On the other hand, mixing Shannon’s differential entropy, Fisher information and the central limit theorem has allowed proving that, in the presence of large datasets, it is natural to search for minimum Kullback–Leibler, or equivalent, solutions [28].

1.2. Contribution of This Work

This work is focused on presenting already known properties of Fisher information [3,4,7,8,10,29–32] and introducing new ones, such that the reader can have a better grasp of Fisher information and its usefulness. The main results presented in this work are: (i) the generalization of the mutual information concept using Fisher information expressions; (ii) a new proof that conditioning under certain assumptions increases Fisher information; (iii) a proof that in Markov chains, the Fisher information decreases as the random variables become further away from the estimated parameter; and (iv) an upper bound on the estimation error, which is regulated by the Fisher information.
This work is structured roughly in the same way as the first chapter of the well-known book by Cover and Thomas [30], in order to help the reader draw a parallel between Shannon and Fisher information.

2. Notation

In the following sections, vectors and matrices are denoted with a bold font [7,31]. Furthermore, density functions are denoted by $f_{X;\theta}(x)$, where the $f$ is reserved for density functions, the subscript $X$ corresponds to the name of the random variable, $\theta$ represents the parameters that define the density function and the symbol within the $(\cdot)$ stands for the instance of the random variable that is used to evaluate the density function. In this way, as an example, a different random variable could be denoted by $f_{Y;\theta}(y)$. A similar notation is used in [33].

3. Fisher Information

Let there be a random variable $X$ and its associated density function $f_{X;\theta}(x)$, which has support $S$ and depends on a set of parameters represented by the vector $\theta \in \Theta$. The value $\theta_k$ is the $k$-th component of $\theta$. According to the original definition designed by Fisher to characterize maximum likelihood estimation [2]:
Definition 1 (Fisher Information). Given a random variable X and its associated density function fX;θ(x), which depends on the parameter vector θ ∈ Θ, and θk is the k-th component of θ, then the Fisher information associated with θk is defined by:
$$ i_F(f_{X;\theta})_{\theta_k} \triangleq \int f_{X;\theta}(x) \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} \right)^2 dx \tag{1} $$
From the definition, it is clear that $i_F(f_{X;\theta})_{\theta_k} \geq 0$. Furthermore, if $f_{X;\theta}$ does not depend on $\theta_k$, then $i_F(f_{X;\theta})_{\theta_k} = 0$.
Example 1. In a Gaussian case with mean µ and standard deviation η, the density function is given by:
$$ f_{X;\mu,\eta}(x) = \frac{1}{\sqrt{2\pi}\,\eta} \exp\left( -\frac{(x-\mu)^2}{2\eta^2} \right) $$
In this case:
$$ \ln f_{X;\mu,\eta}(x) = \ln \frac{1}{\sqrt{2\pi}\,\eta} - \frac{(x-\mu)^2}{2\eta^2} = \ln \frac{1}{\sqrt{2\pi}\,\eta} - \frac{x^2 - 2\mu x + \mu^2}{2\eta^2} $$
If the parameter to be estimated is the mean $\mu$, the previous expression needs to be differentiated with respect to $\mu$:
$$ \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu} = \frac{2x}{2\eta^2} - \frac{2\mu}{2\eta^2} = \frac{x-\mu}{\eta^2} $$
Substituting into the definition of Fisher information:
$$ i_F(f_{X;\mu,\eta})_\mu = \int f_{X;\mu,\eta}(x) \left( \frac{x-\mu}{\eta^2} \right)^2 dx = \int f_{X;\mu,\eta}(x)\, \frac{x^2 - 2\mu x + \mu^2}{\eta^4}\, dx $$
$$ = \frac{1}{\eta^4} \int f_{X;\mu,\eta}(x)\, x^2\, dx - \frac{2\mu}{\eta^4} \int f_{X;\mu,\eta}(x)\, x\, dx + \frac{\mu^2}{\eta^4} \int f_{X;\mu,\eta}(x)\, dx $$
$$ = \frac{1}{\eta^4} \left\{ (\eta^2 + \mu^2) - 2\mu^2 + \mu^2 \right\} = \frac{1}{\eta^2} $$
This shows that for Gaussian density functions, the Fisher information about the mean is the reciprocal of the variance of the density function; as the Cramer–Rao bound presented below makes precise, the variance of any unbiased estimator of the mean is therefore bounded from below by the variance of the density function.
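As a quick numerical sanity check of this value (an illustration of ours, not part of the original derivation), the expectation in Definition 1 can be approximated by Monte Carlo; the sketch below, with arbitrarily chosen μ and η, estimates the second moment of the score and compares it with 1/η².

```python
# Monte Carlo check that the Fisher information of a Gaussian about its mean
# equals 1/eta^2 (a sketch; mu, eta and the sample size are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)
mu, eta = 1.5, 2.0
x = rng.normal(mu, eta, size=1_000_000)

score = (x - mu) / eta**2        # d ln f_{X;mu,eta}(x) / d mu

print(np.mean(score**2))         # approximately 0.25
print(1 / eta**2)                # exact value: 0.25
```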
There is another expression that can be used to represent the Fisher information.
Theorem 1. Given a random variable X and its associated density function fX;θ(x), which depends on the parameter vector θ ∈ Θ and complies with the boundary condition for θk (see Appendix A), where θk is the k-th component of θ, then the Fisher information associated with θk is equal to:
$$ i_F(f_{X;\theta})_{\theta_k} = -\int f_{X;\theta}(x)\, \frac{\partial^2 \ln f_{X;\theta}(x)}{\partial \theta_k^2}\, dx \tag{12} $$
A proof of this theorem can be found in [34] (p. 373).
Example 2. Continuing the Gaussian example, and using the alternative definition of the Fisher information, the required second derivative is first calculated:
$$ \frac{d^2 \ln f_{X;\mu,\eta}(x)}{d\mu^2} = \frac{d}{d\mu} \left( \frac{x-\mu}{\eta^2} \right) = -\frac{1}{\eta^2} $$
Replacing into Equation (12), the same result is obtained:
$$ i_F(f_{X;\mu,\eta})_\mu = -\int f_{X;\mu,\eta}(x)\, \frac{d^2 \ln f_{X;\mu,\eta}(x)}{d\mu^2}\, dx = -\int f_{X;\mu,\eta}(x) \left( -\frac{1}{\eta^2} \right) dx = \frac{1}{\eta^2} \int f_{X;\mu,\eta}(x)\, dx = \frac{1}{\eta^2} $$
The importance of the Fisher information quantity stems from the Cramer–Rao bound [3,4,23,35]:
Theorem 2 (Cramer–Rao Bound). Given a random variable X and its associated density function $f_{X;\theta}(x)$, which depends on the parameter vector $\theta \in \Theta$ and complies with the boundary condition for $\theta_k$ (see Appendix A), where $\theta_k$ is the k-th component of $\theta$, also given that there is an unbiased estimator $\hat{\theta}_k(x)$ of the scalar parameter $\theta_k$, then:
$$ \frac{1}{i_F(f_{X;\theta})_{\theta_k}} \leq \sigma^2_{\hat{\theta}_k} $$
where:
$$ \sigma^2_{\hat{\theta}_k} \triangleq \int f_{X;\theta}(x) \left( \hat{\theta}_k(x) - \theta_k \right)^2 dx $$
is the variance of the estimator. Proofs of this theorem can be found in [7] (p. 29) and [23] (p. 66).
The Cramer–Rao bound establishes that the reciprocal of the Fisher information is a lower bound on the variance of an estimator. Any estimator that reaches the bound imposed by the Cramer–Rao theorem is called efficient [34]. It is important to notice that the bound does not depend on the estimator itself; it only depends on $i_F(f_{X;\theta})_{\theta_k}$. In this work, neither biased estimators nor the case in which the parameters themselves are random variables will be analyzed.
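To illustrate the bound (an illustration of ours, not from the original text), consider the estimator $\hat{\theta}(x) = x$ of the Gaussian mean from a single observation; its variance equals η², which is exactly the Cramer–Rao limit 1/i_F, so it is efficient. The numeric values below are arbitrary.

```python
# Empirical illustration of the Cramer-Rao bound: for one Gaussian observation,
# the unbiased estimator mu_hat(x) = x attains the bound 1/i_F = eta^2
# (a sketch; mu and eta are arbitrary choices).
import numpy as np

rng = np.random.default_rng(1)
mu, eta = 1.5, 2.0
x = rng.normal(mu, eta, size=1_000_000)

mu_hat = x                       # unbiased estimator of the mean
print(mu_hat.var())              # approximately eta^2 = 4.0
print(eta**2)                    # Cramer-Rao bound: 1 / i_F
```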
The following theorem states that the topology of the Fisher information in the density function space is very simple:
Theorem 3. The Fisher information $i_F(f_{X;\theta})_{\theta_k}$ is convex in $f_{X;\theta}$. Proofs of this theorem can be found in [7] (p. 69) and [29].

4. Several Random Variables Depending on θk

4.1. Joint Fisher Information Definition

Definition 2. Given two random variables X and Y and the associated joint density function fX,Y;θ(x, y), which depends on the parameter vector θ ∈ Θ, and θk is the k-th component of θ, then the joint Fisher information associated with θk is defined by:
$$ i_F(f_{X,Y;\theta})_{\theta_k} \triangleq \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k} \right)^2 dx\, dy $$

4.2. An Equivalent Joint Fisher Information Definition

Theorem 4. Given two random variables X and Y and the associated joint density function $f_{X,Y;\theta}(x,y)$, which depends on the parameter vector θ ∈ Θ and complies with the boundary condition for θk (see Appendix A), where θk is the k-th component of θ, then the joint Fisher information associated with θk is equal to:
$$ i_F(f_{X,Y;\theta})_{\theta_k} = -\int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2 \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k^2}\, dx\, dy $$
Proof. This follows trivially from the alternative definition of the Fisher information. □

4.3. Conditional Fisher Information Definition

Definition 3. The conditional Fisher information associated with $\theta_k$ is defined by:
$$ i_F(f_{Y|X;\theta})_{\theta_k} \triangleq \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial \ln f_{Y|X;\theta}(y|x)}{\partial \theta_k} \right)^2 dx\, dy $$

4.4. Chain Rule for Two Random Variables

The following result was first published by Zamir [32], who used it to produce an alternative proof of the Fisher information inequality. In the following lines, the same chain rule is proven using the results presented in the previous sections.
Theorem 5 (Chain Rule for Two Random Variables). Given a joint density function fX,Y;θ(x, y), which depends on the parameter vector θ ∈ Θ, and given that the density functions comply with the boundary condition for θk (see Appendix A), where θk is the k-th component of θ, then:
$$ i_F(f_{X,Y;\theta})_{\theta_k} = i_F(f_{Y|X;\theta})_{\theta_k} + i_F(f_{X;\theta})_{\theta_k} $$
$$ = i_F(f_{X|Y;\theta})_{\theta_k} + i_F(f_{Y;\theta})_{\theta_k} $$
Proof.
$$ i_F(f_{X,Y;\theta})_{\theta_k} \triangleq \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k} \right)^2 dx\, dy $$
$$ = \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial \ln \left( f_{Y|X;\theta}(y|x)\, f_{X;\theta}(x) \right)}{\partial \theta_k} \right)^2 dx\, dy $$
$$ = \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial \ln f_{Y|X;\theta}(y|x)}{\partial \theta_k} + \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} \right)^2 dx\, dy $$
$$ = i_F(f_{Y|X;\theta})_{\theta_k} + i_F(f_{X;\theta})_{\theta_k} + 2 \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial \ln f_{Y|X;\theta}(y|x)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy $$
but,
$$ \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial \ln f_{Y|X;\theta}(y|x)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy = \int \int \frac{\partial f_{Y|X;\theta}(y|x)}{\partial \theta_k}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy $$
$$ = \int \frac{\partial f_{X;\theta}(x)}{\partial \theta_k} \left( \int \frac{\partial f_{Y|X;\theta}(y|x)}{\partial \theta_k}\, dy \right) dx $$
If $f_{Y|X;\theta}(y|x)$ complies with the boundary condition with respect to $\theta_k$ (see Appendix A), then:
$$ \int \frac{\partial f_{Y|X;\theta}(y|x)}{\partial \theta_k}\, dy = \frac{\partial}{\partial \theta_k} \int f_{Y|X;\theta}(y|x)\, dy = 0 $$
Therefore, the theorem is proven. The other result is proven analogously. □
When the chain rule is used to estimate the Fisher information associated with a parameter, it is important to take into account that all of the terms that come out after applying the chain rule contain derivatives with respect to the same parameter. Since some of the corresponding density functions may not depend on that parameter, some of these terms may be equal to zero.
Example 3. Given the random variable Y = X + N, where X is a Gaussian random variable with mean µ and standard deviation η and N is another Gaussian random variable, independent of X, with mean zero and standard deviation ν, if the joint density function is available and the parameter to be estimated is µ, then:
$$ i_F(f_{Y,X;\mu,\eta,\nu})_\mu = i_F(f_{Y|X;\mu,\eta,\nu})_\mu + i_F(f_{X;\mu,\eta})_\mu $$
$$ = i_F(f_{N;\nu})_\mu + i_F(f_{X;\mu,\eta})_\mu $$
$$ = i_F(f_{X;\mu,\eta})_\mu $$
$$ = \frac{1}{\eta^2} $$
The previous result implies that if the joint density function of the output Y and the input X is available, the noise does not affect the estimation process. This is not surprising, since Y is a corrupted version of X, and it cannot shed more information on µ than that contained in X. Because all of the information hidden in X is available through the joint density function, it makes sense to think that the Fisher information of the joint density function corresponds to that of the marginal distribution fX;µ,η.
Given the density functions mentioned above, it is possible to prove that:
$$ f_{Y;\mu,\eta,\nu}(y) = \frac{1}{\sqrt{2\pi(\eta^2+\nu^2)}} \exp\left( -\frac{(y-\mu)^2}{2(\eta^2+\nu^2)} \right) $$
with Fisher information associated with μ equal to:
$$ i_F(f_{Y;\mu,\eta,\nu})_\mu = \frac{1}{\eta^2+\nu^2} $$
Using the other expression for the chain rule:
$$ i_F(f_{Y,X;\mu,\eta,\nu})_\mu = i_F(f_{X|Y;\mu,\eta,\nu})_\mu + i_F(f_{Y;\mu,\eta,\nu})_\mu = i_F(f_{X|Y;\mu,\eta,\nu})_\mu + \frac{1}{\eta^2+\nu^2} $$
Using the previous results:
$$ \frac{1}{\eta^2} = i_F(f_{X|Y;\mu,\eta,\nu})_\mu + \frac{1}{\eta^2+\nu^2} $$
which implies:
$$ i_F(f_{X|Y;\mu,\eta,\nu})_\mu = \frac{\nu^2}{\eta^2(\eta^2+\nu^2)} $$
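The closed-form values of this example can be verified symbolically. The following SymPy sketch (symbol names chosen here) computes the Fisher information of f_Y about μ through the second-derivative form of Equation (12), which in this case is constant, and then solves the chain rule for the conditional term.

```python
# Symbolic check of Example 3 with SymPy (a sketch; symbol names are ours).
import sympy as sp

y, mu = sp.symbols('y mu', real=True)
eta, nu = sp.symbols('eta nu', positive=True)

# Density of Y = X + N: Gaussian with mean mu and variance eta^2 + nu^2.
fY = sp.exp(-(y - mu)**2 / (2 * (eta**2 + nu**2))) / sp.sqrt(2 * sp.pi * (eta**2 + nu**2))

# Second-derivative form (Equation (12)); the second derivative is constant in y,
# so its negative expectation equals that same constant.
iF_Y = sp.simplify(-sp.diff(sp.log(fY), mu, 2))
print(iF_Y)                                  # 1/(eta**2 + nu**2)

# Chain rule: 1/eta^2 = i_F(f_{X|Y})_mu + i_F(f_Y)_mu
iF_X_given_Y = sp.simplify(1 / eta**2 - iF_Y)
print(iF_X_given_Y)                          # equals nu^2 / (eta^2 (eta^2 + nu^2))
```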

4.5. Chain Rule for Many Random Variables

In the case of more than two density functions:
Theorem 6 (Chain Rule for Many Random Variables). Given a set of n random variables X1, X2, …, Xn, all of them depending on θk, if the density functions comply with the boundary condition for θk (see Appendix A), then:
$$ i_F(f_{X_1,X_2,\ldots,X_n;\theta})_{\theta_k} = \sum_{j=1}^{n} i_F(f_{X_j|X_{j-1},\ldots,X_1;\theta})_{\theta_k} $$
Proof.
$$ i_F(f_{X_1,X_2,\ldots,X_n;\theta})_{\theta_k} = i_F(f_{X_n,\ldots,X_2|X_1;\theta})_{\theta_k} + i_F(f_{X_1;\theta})_{\theta_k} $$
$$ = i_F(f_{X_n,\ldots,X_3|X_2,X_1;\theta})_{\theta_k} + i_F(f_{X_2|X_1;\theta})_{\theta_k} + i_F(f_{X_1;\theta})_{\theta_k} $$
Iterating the chain rule for two random variables in this way:
$$ = \sum_{j=1}^{n} i_F(f_{X_j|X_{j-1},\ldots,X_1;\theta})_{\theta_k} $$
□
If the n random variables in Theorem 6 are i.i.d., then $i_F(f_{X_1,X_2,\ldots,X_n;\theta})_{\theta_k} = n\, i_F(f_{X;\theta})_{\theta_k}$.
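For example (a sketch of ours with arbitrary values), for n i.i.d. Gaussian observations the joint score about μ is the sum of the individual scores, and its second moment is close to n/η²:

```python
# Monte Carlo check that n i.i.d. observations carry n times the Fisher
# information of a single one (a sketch; all numeric values are arbitrary).
import numpy as np

rng = np.random.default_rng(2)
mu, eta, n, trials = 1.5, 2.0, 10, 500_000

x = rng.normal(mu, eta, size=(trials, n))
joint_score = ((x - mu) / eta**2).sum(axis=1)   # score of the joint density

print(np.mean(joint_score**2))                  # approximately n/eta^2 = 2.5
print(n / eta**2)
```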

5. Relative Fisher Information Type I

In the following, the relative Fisher information is defined. As far as it was possible to determine, the first definition of the relative Fisher information was given by Otto and Villani [36], who defined it for the translationally-invariant case. This expression has been rediscovered or simply used in many applications thereafter, in different problems and fields [22,37–44]. Furthermore, it seems that the first general analysis of the relative Fisher information was presented by the author in [45]. The following sections focus on this latter general case, where there is no assumption of translational invariance.
Analogously to the Kullback–Leibler divergence [46], also known as relative entropy, which was designed to establish how much two density functions differ, the relative Fisher information of Type I is obtained when the ratio of the two intervening density functions is replaced into Equation (1), as is shown in the following definition.
Definition 4. The relative Fisher information Type I is defined by:
$$ d_F^{(I)}(f_{X;\theta}\, ||\, f_{Y;\theta})_{\theta_k} \triangleq \int f_{X;\theta}(x) \left( \frac{\partial}{\partial \theta_k} \left( \ln \left( \frac{f_{X;\theta}(x)}{f_{Y;\theta}(x)} \right) \right) \right)^2 dx $$
The same mechanism can be used to generate a second definition for the relative Fisher information. The same ratio can be replaced into Equation (12), producing an alternative and equally valid expression, which is designated as relative Fisher information Type II. This second expression is studied in the following sections.
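As a worked illustration of Definition 4 (an example of ours, not taken from the original text), consider two Gaussian densities that share the mean μ but have different spreads η and ν; the SymPy sketch below evaluates the relative Fisher information Type I with respect to μ and obtains a simple closed form.

```python
# Relative Fisher information Type I between two Gaussians that share the mean
# mu but have different standard deviations (a sketch; an illustration of ours).
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
eta, nu = sp.symbols('eta nu', positive=True)

fX = sp.exp(-(x - mu)**2 / (2 * eta**2)) / (sp.sqrt(2 * sp.pi) * eta)
fY = sp.exp(-(x - mu)**2 / (2 * nu**2)) / (sp.sqrt(2 * sp.pi) * nu)

ratio_score = sp.diff(sp.log(fX / fY), mu)          # d/dmu ln(fX/fY)
dF1 = sp.simplify(sp.integrate(fX * ratio_score**2, (x, -sp.oo, sp.oo)))
print(dF1)   # eta^2 * (1/eta^2 - 1/nu^2)^2, i.e., (eta^2 - nu^2)^2 / (eta^2 nu^4)
```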

6. Information Correlation

Definition 5. The information correlation with respect to θk is defined by:
$$ i_C(f_{X,Y;\theta})_{\theta_k} \triangleq \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k}\, dx\, dy $$
The name information correlation comes from the similarity between this definition and that of the classical correlation coefficient. It is important to keep in mind that it is different from the terms that fill the Fisher information matrix [23].
According to the definition, $i_C(f_{X,X;\theta})_{\theta_k} = i_F(f_{X;\theta})_{\theta_k}$ and $i_C(f_{X,Y;\theta})_{\theta_k} = i_C(f_{Y,X;\theta})_{\theta_k}$.
Example 4. Continuing with the example where Y = X + N, the information correlation between Y and X is given by:
$$ i_C(f_{Y,X;\mu,\eta,\nu})_\mu = \int \int f_{Y,X;\mu,\eta,\nu}(y,x)\, \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu}\, \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu}\, dy\, dx $$
$$ = \int \int f_{Y|X;\mu,\eta,\nu}(y|x)\, f_{X;\mu,\eta}(x)\, \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu}\, \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu}\, dy\, dx $$
$$ = \int \int f_{N;\nu}(y-x)\, f_{X;\mu,\eta}(x)\, \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu}\, \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu}\, dy\, dx $$
$$ = \int \int \left( \frac{1}{\sqrt{2\pi}\,\nu} \exp\left( -\frac{(y-x)^2}{2\nu^2} \right) \right) \left( \frac{1}{\sqrt{2\pi}\,\eta} \exp\left( -\frac{(x-\mu)^2}{2\eta^2} \right) \right) \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu}\, \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu}\, dy\, dx $$
$$ = \frac{1}{2\pi\nu\eta} \int \int \exp\left( -\frac{1}{2} \left( \frac{(y-x)^2}{\nu^2} + \frac{(x-\mu)^2}{\eta^2} \right) \right) \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu}\, \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu}\, dy\, dx $$
where:
$$ \frac{d \ln f_{Y;\mu,\eta,\nu}(y)}{d\mu} = \frac{d}{d\mu} \left( \ln \frac{1}{\sqrt{2\pi(\eta^2+\nu^2)}} - \frac{(y-\mu)^2}{2(\eta^2+\nu^2)} \right) = \frac{y-\mu}{\eta^2+\nu^2} $$
Analogously:
$$ \frac{d \ln f_{X;\mu,\eta}(x)}{d\mu} = \frac{x-\mu}{\eta^2} $$
Replacing these derivatives into the information correlation expression:
$$ i_C(f_{Y,X;\mu,\eta,\nu})_\mu = \frac{1}{2\pi\nu\eta} \int \int \exp\left( -\frac{1}{2} \left( \frac{(y-x)^2}{\nu^2} + \frac{(x-\mu)^2}{\eta^2} \right) \right) \left( \frac{y-\mu}{\eta^2+\nu^2} \right) \left( \frac{x-\mu}{\eta^2} \right) dy\, dx $$
$$ = \frac{1}{2\pi\nu\eta(\eta^2+\nu^2)\eta^2} \int \int (y-\mu)(x-\mu) \exp\left( -\frac{1}{2} \left( \frac{(y-x)^2}{\nu^2} + \frac{(x-\mu)^2}{\eta^2} \right) \right) dy\, dx $$
$$ = \frac{1}{2\pi\nu\eta(\eta^2+\nu^2)\eta^2} \int (x-\mu) \exp\left( -\frac{(x-\mu)^2}{2\eta^2} \right) \left( \int \left( (y-x) + (x-\mu) \right) \exp\left( -\frac{(y-x)^2}{2\nu^2} \right) dy \right) dx $$
$$ = \frac{1}{\eta^2+\nu^2} $$
where the last step uses the fact that the inner integral of the $(y-x)$ term vanishes, while the remaining Gaussian integrals evaluate to $\sqrt{2\pi}\,\nu$ and $\sqrt{2\pi}\,\eta\,\eta^2$, respectively.
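The value obtained in this example can also be approximated by simulation. The sketch below (with arbitrary numeric values) estimates the information correlation as the sample average of the product of the two marginal scores over draws of the pair (Y, X).

```python
# Monte Carlo estimate of the information correlation for Y = X + N
# (a sketch; all numeric values are arbitrary).
import numpy as np

rng = np.random.default_rng(3)
mu, eta, nu, trials = 1.5, 2.0, 1.0, 2_000_000

x = rng.normal(mu, eta, size=trials)
y = x + rng.normal(0.0, nu, size=trials)

score_x = (x - mu) / eta**2                 # d ln f_X / d mu
score_y = (y - mu) / (eta**2 + nu**2)       # d ln f_Y / d mu

print(np.mean(score_x * score_y))           # approximately 1/(eta^2 + nu^2) = 0.2
print(1 / (eta**2 + nu**2))
```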
Theorem 7. The information correlation is bounded according to:
$$ \left( i_C(f_{X,Y;\theta})_{\theta_k} \right)^2 \leq i_F(f_{X;\theta})_{\theta_k}\, i_F(f_{Y;\theta})_{\theta_k} $$
Proof. For any real number a:
$$ 0 \leq \int \int f_{X,Y;\theta}(x,y) \left( a\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} + \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} \right)^2 dx\, dy $$
which can be reexpressed as:
$$ 0 \leq a^2\, i_F(f_{X;\theta})_{\theta_k} + 2a\, i_C(f_{X,Y;\theta})_{\theta_k} + i_F(f_{Y;\theta})_{\theta_k} $$
This is a second-degree polynomial in a that is non-negative for every possible a. Therefore, its discriminant has to comply with $4 \left( i_C(f_{X,Y;\theta})_{\theta_k} \right)^2 - 4\, i_F(f_{X;\theta})_{\theta_k}\, i_F(f_{Y;\theta})_{\theta_k} \leq 0$, which proves the theorem. □
Definition 6. The information correlation coefficient is defined by:
$$ \rho_F = \frac{i_C(f_{X,Y;\theta})_{\theta_k}}{\sqrt{i_F(f_{X;\theta})_{\theta_k}\, i_F(f_{Y;\theta})_{\theta_k}}} $$
Theorem 8. The information correlation coefficient is limited by:
$$ -1 \leq \rho_F \leq 1 $$
Proof. This comes from the definition of the information correlation coefficient and Theorem 7.
Theorem 9. If at least one of the following conditions:
  • X and Y are independent.
  • Either fX;θ or fY;θ does not depend on θk.
is true, then:
$$ i_C(f_{X,Y;\theta})_{\theta_k} = 0 $$
Proof. Examination of the information correlation definition clearly shows that compliance with the first and second cases directly implies that this quantity is zero.

7. Mutual Fisher Information Type I

As happens in the treatment of Shannon’s differential entropy, in this work, the mutual Fisher information is also defined as a relative Fisher information Type I in which the argument is the ratio between a joint density function and the product of its marginals.

7.1. Definition

Definition 7. The mutual Fisher information Type I is defined by:
$$ m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} \triangleq \int \int f_{X,Y;\theta}(x,y) \left( \frac{\partial}{\partial \theta_k} \ln \left( \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)} \right) \right)^2 dx\, dy $$
From the definition, it is obvious that $m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} \geq 0$.
Theorem 10. If the boundary condition (see Appendix A) with respect to θk holds for fX,Y;θ(x,y), the mutual Fisher information Type I can be reformulated as a function of the Fisher information as follows:
$$ m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} = i_F(f_{X|Y;\theta})_{\theta_k} - i_F(f_{X;\theta})_{\theta_k} + 2\, i_C(f_{X,Y;\theta})_{\theta_k} $$
$$ = i_F(f_{Y|X;\theta})_{\theta_k} - i_F(f_{Y;\theta})_{\theta_k} + 2\, i_C(f_{X,Y;\theta})_{\theta_k} $$
Proof.
$$ \left( \frac{\partial}{\partial \theta_k} \ln \left( \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)} \right) \right)^2 = \left( \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k} - \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} - \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} \right)^2 $$
$$ = \left( \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k} \right)^2 + \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} \right)^2 + \left( \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} \right)^2 - 2\, \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} - 2\, \frac{\partial \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} + 2\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} $$
$$ = \left( \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k} \right)^2 + 2\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} + \left( \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} \right)^2 + \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} \right)^2 + \left( \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} \right)^2 - 2\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} - 2\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} - 2\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} - 2\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} + 2\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, \frac{\partial \ln f_{Y;\theta}(y)}{\partial \theta_k} $$
Simplifying:
$$ \left( \frac{\partial}{\partial \theta_k} \ln \left( \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)} \right) \right)^2 = \left( \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k} \right)^2 + \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} \right)^2 - 2\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k} $$
Now,
$$ 2 \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy = 2 \int \int \frac{f_{Y;\theta}(y)}{f_{X;\theta}(x)}\, \frac{\partial f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy $$
$$ = 2 \int \left( \frac{1}{f_{X;\theta}(x)}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k} \int f_{Y;\theta}(y)\, \frac{\partial f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, dy \right) dx $$
Assuming that fX,Y;θ complies with the boundary condition (see Appendix A) with respect to θk, then:
$$ \frac{\partial f_{X;\theta}(x)}{\partial \theta_k} = \frac{\partial}{\partial \theta_k} \int f_{X,Y;\theta}(x,y)\, dy $$
$$ = \int \frac{\partial \left( f_{X|Y;\theta}(x|y)\, f_{Y;\theta}(y) \right)}{\partial \theta_k}\, dy $$
$$ = \int f_{Y;\theta}(y)\, \frac{\partial f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, dy + \int f_{X|Y;\theta}(x|y)\, \frac{\partial f_{Y;\theta}(y)}{\partial \theta_k}\, dy $$
Hence,
$$ \int f_{Y;\theta}(y)\, \frac{\partial f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, dy = \frac{\partial f_{X;\theta}(x)}{\partial \theta_k} - \int f_{X|Y;\theta}(x|y)\, \frac{\partial f_{Y;\theta}(y)}{\partial \theta_k}\, dy $$
Using the previous result, it is obtained:
$$ 2 \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial \ln f_{X|Y;\theta}(x|y)}{\partial \theta_k}\, \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta_k}\, dx\, dy = 2 \int \frac{1}{f_{X;\theta}(x)}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k}\, dx - 2 \int \frac{1}{f_{X;\theta}(x)}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k} \left( \int f_{X|Y;\theta}(x|y)\, \frac{\partial f_{Y;\theta}(y)}{\partial \theta_k}\, dy \right) dx $$
$$ = 2\, i_F(f_{X;\theta})_{\theta_k} - 2 \int \int \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)}\, \frac{\partial f_{X;\theta}(x)}{\partial \theta_k}\, \frac{\partial f_{Y;\theta}(y)}{\partial \theta_k}\, dx\, dy = 2\, i_F(f_{X;\theta})_{\theta_k} - 2\, i_C(f_{X,Y;\theta})_{\theta_k} $$
This implies:
$$ m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} = i_F(f_{X|Y;\theta})_{\theta_k} - i_F(f_{X;\theta})_{\theta_k} + 2\, i_C(f_{X,Y;\theta})_{\theta_k} $$
The other result is obtained analogously. □
Example 5. Continuing with the example where Y = X + N, the mutual Fisher information Type I is given by:
$$ m_F^{(I)}(f_{Y,X;\mu,\eta,\nu})_\mu = i_F(f_{Y|X;\mu,\eta,\nu})_\mu - i_F(f_{Y;\mu,\eta,\nu})_\mu + 2\, i_C(f_{Y,X;\mu,\eta,\nu})_\mu $$
$$ = i_F(f_{N;\nu})_\mu - i_F(f_{Y;\mu,\eta,\nu})_\mu + 2\, i_C(f_{Y,X;\mu,\eta,\nu})_\mu $$
$$ = 0 - \frac{1}{\eta^2+\nu^2} + \frac{2}{\eta^2+\nu^2} $$
$$ = \frac{1}{\eta^2+\nu^2} $$
where, as in Example 3, $i_F(f_{N;\nu})_\mu = 0$ because $f_{N;\nu}$ does not depend on $\mu$.
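This value can be checked directly against Definition 7. In the sketch below (arbitrary numeric values), the derivative of the log-ratio reduces to −d ln f_Y/dμ, because f_{Y|X} does not depend on μ, and its averaged square comes out close to 1/(η²+ν²).

```python
# Monte Carlo estimate of the mutual Fisher information Type I for Y = X + N
# (a sketch; all numeric values are arbitrary).
import numpy as np

rng = np.random.default_rng(4)
mu, eta, nu, trials = 1.5, 2.0, 1.0, 2_000_000
s2 = eta**2 + nu**2

x = rng.normal(mu, eta, size=trials)
y = x + rng.normal(0.0, nu, size=trials)

# d/dmu ln(f_{X,Y}/(f_X f_Y)) = d/dmu [ln f_{Y|X} - ln f_Y] = -d ln f_Y / dmu
ratio_score = -(y - mu) / s2

print(np.mean(ratio_score**2))   # approximately 1/(eta^2 + nu^2) = 0.2
print(1 / s2)
```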

7.2. Conditional Mutual Fisher Information of Type I

Definition 8. The conditional information correlation with respect to θk of random variables X and Y given random variable Z is defined by:
$$ i_C(f_{X,Y|Z;\theta})_{\theta_k} \triangleq \int \int \int f_{X,Y,Z;\theta}(x,y,z)\, \frac{\partial \ln f_{X|Z;\theta}(x|z)}{\partial \theta_k}\, \frac{\partial \ln f_{Y|Z;\theta}(y|z)}{\partial \theta_k}\, dx\, dy\, dz $$
Definition 9. The conditional mutual Fisher information of Type I of random variables X and Y given random variable Z is defined by:
$$ m_F^{(I)}(f_{X,Y|Z;\theta})_{\theta_k} \triangleq \int \int \int f_{X,Y,Z;\theta}(x,y,z) \left( \frac{\partial}{\partial \theta_k} \left( \ln \left( \frac{f_{X,Y|Z;\theta}(x,y|z)}{f_{X|Z;\theta}(x|z)\, f_{Y|Z;\theta}(y|z)} \right) \right) \right)^2 dx\, dy\, dz $$
Corollary 1. If the boundary condition (see Appendix A) with respect to θk holds for fX,Y,Z;θ(x, y, z), the conditional mutual Fisher information of Type I of random variables X and Y given random variable Z can be reformulated as a function of the Fisher information as follows:
$$ m_F^{(I)}(f_{X,Y|Z;\theta})_{\theta_k} = i_F(f_{X|Y,Z;\theta})_{\theta_k} - i_F(f_{X|Z;\theta})_{\theta_k} + 2\, i_C(f_{X,Y|Z;\theta})_{\theta_k} $$
$$ = i_F(f_{Y|X,Z;\theta})_{\theta_k} - i_F(f_{Y|Z;\theta})_{\theta_k} + 2\, i_C(f_{X,Y|Z;\theta})_{\theta_k} $$
Proof. This follows analogously to that of the simpler case.

8. Relative Fisher Information Type II

Given that there is an alternative expression for the Fisher information (see Equation (12)), there is another way of defining the relative Fisher information.
Definition 10. The relative Fisher information Type II is defined by:
$$ d_F^{(II)}(f_{X;\theta}\, ||\, f_{Y;\theta})_{\theta_k} \triangleq -\int f_{X;\theta}(x)\, \frac{\partial^2}{\partial \theta_k^2} \left( \ln \left( \frac{f_{X;\theta}(x)}{f_{Y;\theta}(x)} \right) \right) dx $$
Even though both definitions of the relative Fisher information are derived from equivalent expressions, they are not equivalent. Why is this so? The two Fisher information expressions are equivalent only when their argument is a density function, whereas the argument of the relative Fisher information is a ratio of density functions, not a density function; hence the difference.

9. Mutual Fisher Information Type II

Analogously to the definition of the mutual Fisher information Type I, but in this case using the relative Fisher information of Type II, the following definition is obtained:
Definition 11. The mutual Fisher information Type II is defined by:
$$ m_F^{(II)}(f_{X,Y;\theta})_{\theta_k} \triangleq -\int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2}{\partial \theta_k^2} \ln \left( \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)} \right) dx\, dy $$
Theorem 11. The mutual Fisher information Type II can be reformulated as a function of the Fisher information as follows:
$$ m_F^{(II)}(f_{X,Y;\theta})_{\theta_k} = i_F(f_{X,Y;\theta})_{\theta_k} - i_F(f_{X;\theta})_{\theta_k} - i_F(f_{Y;\theta})_{\theta_k} $$
Proof.
$$ m_F^{(II)}(f_{X,Y;\theta})_{\theta_k} = -\int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2}{\partial \theta_k^2} \ln \left( \frac{f_{X,Y;\theta}(x,y)}{f_{X;\theta}(x)\, f_{Y;\theta}(y)} \right) dx\, dy $$
$$ = -\int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2 \ln f_{X,Y;\theta}(x,y)}{\partial \theta_k^2}\, dx\, dy + \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2 \ln f_{X;\theta}(x)}{\partial \theta_k^2}\, dx\, dy + \int \int f_{X,Y;\theta}(x,y)\, \frac{\partial^2 \ln f_{Y;\theta}(y)}{\partial \theta_k^2}\, dx\, dy $$
from which the theorem follows. □
Corollary 2.
$$ m_F^{(II)}(f_{X,Y;\theta})_{\theta_k} = i_F(f_{X|Y;\theta})_{\theta_k} - i_F(f_{X;\theta})_{\theta_k} $$
$$ = i_F(f_{Y|X;\theta})_{\theta_k} - i_F(f_{Y;\theta})_{\theta_k} $$
Proof. This comes from combining Theorem 11 and the chain rule for Fisher information. □
Example 6. For the example where Y = X + N, the mutual Fisher information Type II is given by:
$$ m_F^{(II)}(f_{Y,X;\mu,\eta,\nu})_\mu = i_F(f_{Y|X;\mu,\eta,\nu})_\mu - i_F(f_{Y;\mu,\eta,\nu})_\mu = i_F(f_{N;\nu})_\mu - i_F(f_{Y;\mu,\eta,\nu})_\mu $$
$$ = 0 - \frac{1}{\eta^2+\nu^2} $$
$$ = -\frac{1}{\eta^2+\nu^2} $$
since, again, $f_{N;\nu}$ does not depend on $\mu$.
Corollary 3.
$$ m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} = m_F^{(II)}(f_{X,Y;\theta})_{\theta_k} + 2\, i_C(f_{X,Y;\theta})_{\theta_k} $$
Proof. This can be deduced from the mutual Fisher information theorems.
Given that mF(I) is always greater than or equal to zero, the expression mF(II) can be positive or negative according to the value of the information correlation.
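For the Gaussian example Y = X + N, the identity of Corollary 3 can be checked by simulation; the sketch below (arbitrary values of ours) estimates the three quantities from samples and compares m_F^{(I)} with m_F^{(II)} + 2 i_C.

```python
# Monte Carlo check of Corollary 3 for the Gaussian example Y = X + N
# (a sketch; all numeric values are arbitrary).
import numpy as np

rng = np.random.default_rng(5)
mu, eta, nu, trials = 1.5, 2.0, 1.0, 2_000_000
s2 = eta**2 + nu**2

x = rng.normal(mu, eta, size=trials)
y = x + rng.normal(0.0, nu, size=trials)

ratio_score = -(y - mu) / s2                        # d/dmu ln(f_{X,Y}/(f_X f_Y))
mF1 = np.mean(ratio_score**2)                       # mutual Fisher info, Type I
mF2 = -1.0 / s2                                     # Type II: -E[d^2/dmu^2 ln ratio], constant here
iC = np.mean(((x - mu) / eta**2) * ((y - mu) / s2)) # information correlation

print(mF1)            # approximately 0.2
print(mF2 + 2 * iC)   # approximately 0.2 as well
```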

10. Other Properties

10.1. Lower Bound for Fisher Information

Stam’s inequality [8,9,40,47–50] states a lower bound for Fisher information, which links Fisher information and Shannon’s entropy power. However, this expression is limited to the special case where the parameters in the Fisher information expression correspond to a location parameter.
A more general result was recently proven by Stein et al. [51], which says that given a multidimensional random variable with density function fX;θ with:
$$ \mu(\theta) = \int_S x\, f_{X;\theta}(x)\, dx $$
$$ \Sigma(\theta) = \int_S (x - \mu(\theta))(x - \mu(\theta))^T f_{X;\theta}(x)\, dx $$
If the Fisher information matrix is defined by:
$$ \mathbf{F}(f_{X;\theta}) = \int_S f_{X;\theta}(x) \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta} \right)^T \left( \frac{\partial \ln f_{X;\theta}(x)}{\partial \theta} \right) dx $$
then:
$$ \mathbf{F}(f_{X;\theta}) \succeq \left( \frac{\partial \mu(\theta)}{\partial \theta} \right)^T \Sigma^{-1}(\theta) \left( \frac{\partial \mu(\theta)}{\partial \theta} \right) $$
if $\frac{\partial \mu(\theta)}{\partial \theta}$ exists. The authors of [51] explain that this is the same as saying that:
$$ 0 \leq x^T \left( \mathbf{F}(f_{X;\theta}) - \left( \frac{\partial \mu(\theta)}{\partial \theta} \right)^T \Sigma^{-1}(\theta) \left( \frac{\partial \mu(\theta)}{\partial \theta} \right) \right) x $$
The previous expression, which holds for every vector x, states that the matrix difference inside the large parentheses is a positive semi-definite matrix. Thus, its diagonal elements are non-negative, and it can be stated:
Corollary 4. The following lower bound for Fisher information holds:
$$ \sum_{i=1}^{m} \sum_{j=1}^{m} \frac{\partial \mu_i}{\partial \theta_k}\, c^{-1}_{ij}\, \frac{\partial \mu_j}{\partial \theta_k} \leq i_F(f_{X;\theta})_{\theta_k} $$
where $c^{-1}_{ij}$ stands for the ij-th element of $\Sigma^{-1}(\theta)$ and m is the dimension of the random variable X.
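As a minimal scalar illustration (ours, not from the original text), for a Gaussian with mean μ and standard deviation η the bound of Corollary 4 reduces to (∂μ/∂μ)²/η² = 1/η², which coincides with the Fisher information, so the bound is attained:

```python
# Scalar illustration of Corollary 4 for a Gaussian N(mu, eta^2)
# (a sketch; symbol names are ours).
import sympy as sp

mu = sp.symbols('mu', real=True)
eta = sp.symbols('eta', positive=True)

mean = mu           # mu(theta) for the Gaussian, with theta_k = mu
cov = eta**2        # Sigma(theta), here a 1x1 "matrix"

lower_bound = sp.diff(mean, mu) * (1 / cov) * sp.diff(mean, mu)
print(lower_bound)  # 1/eta**2, equal to the Fisher information, so the bound is attained
```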

10.2. In Some Cases, Conditioning Increases the Fisher Information

The following result states that in some cases, conditioning a random variable with another variable may increase the Fisher information. This result is a generalization of another published previously by Zamir [32].
Theorem 12 (Conditioning Increases Information). If fY|X;θ depends on θk and fX does not depend on it, then:
$$ i_F(f_{Y;\theta})_{\theta_k} \leq i_F(f_{Y|X;\theta})_{\theta_k} $$
Proof. Given that only fY|X;θ depends on θk, Theorem 9 guarantees that:
$$ i_C(f_{X,Y;\theta})_{\theta_k} = 0 $$
Hence, from the previous mutual Fisher information expressions:
$$ 0 \leq m_F^{(I)}(f_{X,Y;\theta})_{\theta_k} = i_F(f_{Y|X;\theta})_{\theta_k} - i_F(f_{Y;\theta})_{\theta_k} $$
Thus:
$$ i_F(f_{Y;\theta})_{\theta_k} \leq i_F(f_{Y|X;\theta})_{\theta_k} $$
□
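As a concrete illustration of ours (with arbitrary numeric values), let Y = X + N, where now the parameter to be estimated is the mean of the noise N, so that f_X does not depend on it while f_{Y|X} does; the Fisher information computed from Y alone is smaller than the one computed from Y conditioned on X.

```python
# Illustration of Theorem 12: when the estimated parameter is the mean of the
# noise N in Y = X + N, conditioning on X increases the Fisher information
# (a sketch; all numeric values are arbitrary).
import numpy as np

rng = np.random.default_rng(6)
mu_n, mu_x, eta, nu, trials = 0.5, 3.0, 2.0, 1.0, 2_000_000

x = rng.normal(mu_x, eta, size=trials)            # f_X does not depend on mu_n
y = x + rng.normal(mu_n, nu, size=trials)

score_y = (y - (mu_x + mu_n)) / (eta**2 + nu**2)  # d ln f_Y / d mu_n
score_y_given_x = (y - x - mu_n) / nu**2          # d ln f_{Y|X} / d mu_n

print(np.mean(score_y**2))           # i_F(f_Y):     approximately 1/(eta^2+nu^2) = 0.2
print(np.mean(score_y_given_x**2))   # i_F(f_{Y|X}): approximately 1/nu^2 = 1.0
```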

10.3. Data Processing Inequality

Following the same analysis done by Cover and Thomas to present the data processing theorem for Shannon entropy [30], and continuing with the work done by Zamir [32], the case where the joint density function of the random variables R, S and T can be expressed by fR,S,T;θ = fR;θ · fS|R;θ · fT|S;θ is considered. In this case, they form a short Markov chain that is represented by R → S → T. Because Markovicity implies conditional independence, it is true that fR,T|S;θ = fR|S;θ · fT|S;θ.
Theorem 13. Given a Markov chain R → S → T, where only fT|S;θ depends on θk, then:
$$ m_F^{(I)}(f_{R,T;\theta})_{\theta_k} \leq m_F^{(I)}(f_{S,T;\theta})_{\theta_k} $$
Proof. From the previous results:
$$ m_F^{(I)}(f_{(R,S),T;\theta})_{\theta_k} = i_F(f_{R,S|T;\theta})_{\theta_k} - i_F(f_{R,S;\theta})_{\theta_k} + 2\, i_C(f_{(R,S),T;\theta})_{\theta_k} $$
$$ = i_F(f_{R|S,T;\theta})_{\theta_k} + i_F(f_{S|T;\theta})_{\theta_k} - i_F(f_{R|S;\theta})_{\theta_k} - i_F(f_{S;\theta})_{\theta_k} + 2\, i_C(f_{(R,S),T;\theta})_{\theta_k} $$
$$ = \left( i_F(f_{R|S,T;\theta})_{\theta_k} - i_F(f_{R|S;\theta})_{\theta_k} + 2\, i_C(f_{R,T|S;\theta})_{\theta_k} \right) - 2\, i_C(f_{R,T|S;\theta})_{\theta_k} + \left( i_F(f_{S|T;\theta})_{\theta_k} - i_F(f_{S;\theta})_{\theta_k} + 2\, i_C(f_{S,T;\theta})_{\theta_k} \right) - 2\, i_C(f_{S,T;\theta})_{\theta_k} + 2\, i_C(f_{(R,S),T;\theta})_{\theta_k} $$
$$ = m_F^{(I)}(f_{R,T|S;\theta})_{\theta_k} + m_F^{(I)}(f_{S,T;\theta})_{\theta_k} - 2\, i_C(f_{R,T|S;\theta})_{\theta_k} - 2\, i_C(f_{S,T;\theta})_{\theta_k} + 2\, i_C(f_{(R,S),T;\theta})_{\theta_k} $$
Analogously:
$$ m_F^{(I)}(f_{(R,S),T;\theta})_{\theta_k} = m_F^{(I)}(f_{S,T|R;\theta})_{\theta_k} + m_F^{(I)}(f_{R,T;\theta})_{\theta_k} - 2\, i_C(f_{S,T|R;\theta})_{\theta_k} - 2\, i_C(f_{R,T;\theta})_{\theta_k} + 2\, i_C(f_{(R,S),T;\theta})_{\theta_k} $$
Because only fT|S;θ depends on θk, every information correlation term above contains the derivative of a density function that does not depend on this parameter; hence, all of the information correlation terms are zero. Therefore:
$$ m_F^{(I)}(f_{R,T|S;\theta})_{\theta_k} + m_F^{(I)}(f_{S,T;\theta})_{\theta_k} = m_F^{(I)}(f_{S,T|R;\theta})_{\theta_k} + m_F^{(I)}(f_{R,T;\theta})_{\theta_k} $$
Given that $m_F^{(I)}(f_{R,T|S;\theta})_{\theta_k} = 0$, because R and T are independent given S, and $m_F^{(I)}(f_{S,T|R;\theta})_{\theta_k} \geq 0$, then:
$$ m_F^{(I)}(f_{R,T;\theta})_{\theta_k} \leq m_F^{(I)}(f_{S,T;\theta})_{\theta_k} $$
□
Given that in the previous proof all of the information correlation terms are zero, Corollary 3 implies that $m_F^{(II)}(f_{R,T;\theta})_{\theta_k} = m_F^{(I)}(f_{R,T;\theta})_{\theta_k}$ and $m_F^{(II)}(f_{S,T;\theta})_{\theta_k} = m_F^{(I)}(f_{S,T;\theta})_{\theta_k}$. Thus, the following corollary is obtained:
Corollary 5. Given a Markov chain R → S → T, where only fT|S;θ depends on θk, then:
$$ m_F^{(II)}(f_{R,T;\theta})_{\theta_k} \leq m_F^{(II)}(f_{S,T;\theta})_{\theta_k} $$
Proof. Given the conditional independence provided by the Markovicity of the random variables, which enters directly through the mutual Fisher information Type II definition, the values of the mutual Fisher information Type I and Type II are identical in this case, and the result follows from Theorem 13. □
Using the definition of mutual Fisher information Type II and the previous expression, a result already proven by Plastino et al. [52] is readily obtained in a simpler way:
Corollary 6. From the previous results, it is obvious that:
$$ i_F(f_{T|R;\theta})_{\theta_k} \leq i_F(f_{T|S;\theta})_{\theta_k} $$
Proof. From Corollary 5:
$$ m_F^{(II)}(f_{R,T;\theta})_{\theta_k} \leq m_F^{(II)}(f_{S,T;\theta})_{\theta_k} $$
$$ i_F(f_{T|R;\theta})_{\theta_k} - i_F(f_{T;\theta})_{\theta_k} \leq i_F(f_{T|S;\theta})_{\theta_k} - i_F(f_{T;\theta})_{\theta_k} $$
$$ i_F(f_{T|R;\theta})_{\theta_k} \leq i_F(f_{T|S;\theta})_{\theta_k} $$
□
In other words, in any Markovian process, the further away the random variables used by the estimator are, the larger the variance of the estimate of the parameter can be.

10.4. Upper Bound on Estimation Error

A well-known result states that, given a fixed variance, of all possible density functions the one that maximizes the differential entropy is the Gaussian density function [30]. Hence, for an arbitrary density function fX, some side information Y and an estimator $\hat{X}$, it is possible to obtain an estimation version of the Fano inequality [10] (p. 255):
$$ \frac{1}{2\pi e}\, e^{2 h_S(f_{X|Y})} \leq E_X \left\{ \left( X - \hat{X}(Y) \right)^2 \right\} $$
In the context of Fisher information, the same question arises: is it possible to bound the estimation error using this quantity as well? Surprisingly, the answer is yes, but in the form of an upper bound. Thus, Shannon entropy can be used to set error lower bounds and Fisher information upper ones. In order to establish this bound, the following setup is defined, where a random variable R is given, and a related random variable Y is observed, which, in turn, is used to calculate a function $\hat{R} = g(Y)$. It is desired to bound the probability that $(R - \hat{R})^2 > \varepsilon$. It is important to note that R → Y → $\hat{R}$ is a Markov chain and that $\hat{R}$ depends on θ.
Theorem 14. Given a random variable R and an estimator of it named $\hat{R}$, the estimation error is defined by:
$$ E = (R - \hat{R})^2 $$
Then, the probability that the estimation error exceeds some value ε satisfies:
$$ P\{E > \varepsilon\} \leq \frac{i_F(f_{R|\hat{R};\theta})_{\theta_k}}{i_F(f_{R|\hat{R},E=\xi;\theta})_{\theta_k}} $$
for some ξ ∈ [ε, ∞].
Proof. Using the chain rule for Fisher information:
$$ i_F(f_{R,E|\hat{R};\theta})_{\theta_k} = i_F(f_{R|\hat{R},E;\theta})_{\theta_k} + i_F(f_{E|\hat{R};\theta})_{\theta_k} = i_F(f_{E|R,\hat{R};\theta})_{\theta_k} + i_F(f_{R|\hat{R};\theta})_{\theta_k} $$
Using the fact that, given R and $\hat{R}$, E is no longer a random variable:
$$ i_F(f_{E|R,\hat{R};\theta})_{\theta_k} = 0 $$
Hence,
$$ i_F(f_{R|\hat{R},E;\theta})_{\theta_k} + i_F(f_{E|\hat{R};\theta})_{\theta_k} = i_F(f_{R|\hat{R};\theta})_{\theta_k} $$
Neglecting $i_F(f_{E|\hat{R};\theta})_{\theta_k}$, because it is always greater than or equal to zero, it is obtained that:
$$ i_F(f_{R|\hat{R},E;\theta})_{\theta_k} \leq i_F(f_{R|\hat{R};\theta})_{\theta_k} $$
Moreover, the term:
$$ i_F(f_{R|\hat{R},E;\theta})_{\theta_k} = \int_{e \in E} \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R},E;\theta}(r,\hat{r},e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r}\, de $$
$$ = \int_{e \in E\, |\, e \leq \varepsilon} \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R},E;\theta}(r,\hat{r},e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r}\, de + \int_{e \in E\, |\, e > \varepsilon} \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R},E;\theta}(r,\hat{r},e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r}\, de $$
$$ \geq \int_{e \in E\, |\, e > \varepsilon} \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R},E;\theta}(r,\hat{r},e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r}\, de $$
$$ = \int_{e \in E\, |\, e > \varepsilon} \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R}|E;\theta}(r,\hat{r}|e)\, f_{E;\theta}(e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r}\, de $$
$$ = \int_{e \in E\, |\, e > \varepsilon} f_{E;\theta}(e) \left( \int_{r \in R,\, \hat{r} \in \hat{R}\, |\, (r-\hat{r})^2 = e} f_{R,\hat{R}|E;\theta}(r,\hat{r}|e) \left( \frac{\partial \ln f_{R|\hat{R},E;\theta}(r|\hat{r},e)}{\partial \theta_k} \right)^2 dr\, d\hat{r} \right) de $$
$$ = \int_{e \in E\, |\, e > \varepsilon} f_{E;\theta}(e)\, i_F(f_{R|\hat{R},E=e;\theta})_{\theta_k}\, de $$
Using the mean value theorem, for some ξ ∈ [ε, ∞]:
$$ i_F(f_{R|\hat{R},E=\xi;\theta})_{\theta_k} \int_{e \in E\, |\, e > \varepsilon} f_{E;\theta}(e)\, de = i_F(f_{R|\hat{R},E=\xi;\theta})_{\theta_k}\, P\{E > \varepsilon\} \leq i_F(f_{R|\hat{R},E;\theta})_{\theta_k} \leq i_F(f_{R|\hat{R};\theta})_{\theta_k} $$
Hence:
$$ P\{E > \varepsilon\} \leq \frac{i_F(f_{R|\hat{R};\theta})_{\theta_k}}{i_F(f_{R|\hat{R},E=\xi;\theta})_{\theta_k}} $$
□

11. Discussion

The Fisher information, which sets a bound on how precise the estimation of an unknown parameter of a density function can be, has an associated set of properties that are equivalent to those of Shannon’s differential entropy. The properties presented in this work help to understand how to manipulate and use Fisher information in ways that so far have been exclusive to Shannon’s differential entropy. The properties of special importance are the generalization of the mutual information concept to the Fisher information realm, a new version of the data processing theorem, which shows that Fisher information decreases in a Markov chain, and an upper bound on the estimation error of a random variable that is regulated by the Fisher information.

A. Boundary Condition

A general result from calculus (the Leibniz integral rule) establishes that, for a suitably smooth function g(x, θk) and integration limits l(θk) and u(θk), the following is true:
$$ \frac{\partial}{\partial \theta_k} \int_{l(\theta_k)}^{u(\theta_k)} g(x,\theta_k)\, dx = g(u(\theta_k),\theta_k)\, \frac{\partial u(\theta_k)}{\partial \theta_k} - g(l(\theta_k),\theta_k)\, \frac{\partial l(\theta_k)}{\partial \theta_k} + \int_{l(\theta_k)}^{u(\theta_k)} \frac{\partial g(x,\theta_k)}{\partial \theta_k}\, dx \tag{141} $$
In the case of a vector integral, the previous expression applies to all of the components without any loss of generality.
Some of the results in this work use the following condition:
Condition 1 (Boundary Condition). A function complies with the boundary condition if it is possible to neglect the boundary terms in Equation (141), such that:
$$ \frac{\partial}{\partial \theta_k} \int g(x,\theta_k)\, dx = \int \frac{\partial g(x,\theta_k)}{\partial \theta_k}\, dx $$
This condition corresponds to what are sometimes called regular cases [34] (p. 373).
It is important to keep in mind that not all density functions comply with this condition. As an example, in calculations that involve the uniform density function, where the parameters define the support, it is not possible to neglect the boundary terms, and the boundary condition does not hold. Hence, it is always necessary to check whether the condition holds or not. If not, one may arrive at false results.
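For instance (a SymPy sketch of ours), for the uniform density on [0, θ], whose support is defined by the parameter, the two sides of the boundary condition disagree:

```python
# The uniform density on [0, theta] violates the boundary condition:
# differentiating the integral and integrating the derivative give different
# results (a sketch using SymPy; symbol names are ours).
import sympy as sp

x = sp.symbols('x', real=True)
theta = sp.symbols('theta', positive=True)

f = 1 / theta                                          # uniform density on [0, theta]

lhs = sp.diff(sp.integrate(f, (x, 0, theta)), theta)   # d/dtheta of the total mass
rhs = sp.integrate(sp.diff(f, theta), (x, 0, theta))   # integral of the derivative

print(lhs, rhs)   # 0 and -1/theta: the boundary terms cannot be neglected
```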
However, it is always possible to add a smooth function, one that does not change the original function too much, such that the new expression does comply with the boundary condition. In this way, functions such as the uniform density function can be adjusted to comply with this condition.

Acknowledgments

The author thanks Alexis Fuentes and Carlos Alarcón for reviewing this work, and helping to improve some expressions. The author also thanks CONICYT Chile for its grant FONDECYT 1120680.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Shannon, C. A Mathematical Theory of Communication. Bell Syst. Tech. J 1948, 27, 379–423. [Google Scholar]
  2. Fisher, R. Theory of Statistical Estimation. Proc. Camb. Philos. Soc. 1925, 22, 700–725. [Google Scholar]
  3. Rao, C.R. Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–89. [Google Scholar]
  4. Cramer, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, NJ, USA, 1945. [Google Scholar]
  5. Kullback, S. Information Theory and Statistics; Dover Publications Inc.: Mineola, NY, USA, 1968. [Google Scholar]
  6. Blahut, R.E. Principles and Practice of Information Theory; Addison-Wesley Publishing Company: Boston, MA, USA, 1987. [Google Scholar]
  7. Frieden, B.R. Science from Fisher Information: A Unification; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  8. Stam, A.J. Some mathematical properties of quantities of information. Ph.D. Thesis, Technological University of Delft, Delft, The Netherlands, 1959. [Google Scholar]
  9. Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control. 1959, 2, 101–112. [Google Scholar]
  10. Cover, T.; Thomas, J. Elements of Information Theory; John Wiley and Sons, Inc: Hoboken, NJ, USA, 2006. [Google Scholar]
  11. Narayanan, K.R.; Srinivasa, A.R. On the Thermodynamic Temperature of a General Distribution; Cornell University Library: Ithaca, NY, USA, 2007. [Google Scholar]
  12. Guo, D. Relative Entropy and Score Function: New Information-Estimation Relationships through Arbitrary Additive Perturbation, Proceedings of the IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 814–818.
  13. Blachman, N.M. The Convolution Inequality for Entropy Powers. IEEE Trans. Inf. Theory 1965, 11, 267–271. [Google Scholar]
  14. Costa, M.H.M.; Cover, T.M. On the Similarity of the Entropy Power Inequality and the Brunn Minkowski Inequality; Technical Report; Stanford University: Stanford, CA, USA, 1983. [Google Scholar]
  15. Zamir, R.; Feder, M. A generalization of the entropy power inequality with applications. IEEE Trans. Inf. Theory 1993, 39, 1723–1728. [Google Scholar]
  16. Lutwak, E.; Yang, D.; Zhang, G. Cramer–Rao and Moment-Entropy Inequalities for Renyi Entropy and Generalized Fisher Information. IEEE Trans. Inf. Theory 2005, 51, 473–478. [Google Scholar]
  17. Frieden, B.R.; Plastino, A.; Plastino, A.R.; Soffer, B.H. Fisher-Based Thermodynamics: Its Legendre Transform and Concavity Properties. Phys. Rev. E 1999, 60, 48–53. [Google Scholar]
  18. Frieden, B.R.; Plastino, A.; Plastino, A.R.; Soffer, B.H. Non-equilibrium thermodynamics and Fisher information: An illustrative example. Phys. Lett. A 2002, 304, 73–78. [Google Scholar]
  19. Frieden, B.R.; Petri, M. Motion-dependent levels of order in a relativistic universe. Phys. Rev. E 2012, 86, 1–5. [Google Scholar]
  20. Frieden, B.R.; Gatenby, R.A. Principle of maximum Fisher information from Hardy’s axioms applied to statistical systems. Phys. Rev. E 2013, 88, 1–6. [Google Scholar]
  21. Flego, S.; Olivares, F.; Plastino, A.; Casas, M. Extreme Fisher Information, Non-Equilibrium Thermodynamics and Reciprocity Relations. Entropy 2011, 13, 184–194. [Google Scholar] [Green Version]
  22. Venkatesan, R.C.; Plastino, A. Legendre transform structure and extremal properties of the relative Fisher information. Phys. Lett. A 2014, 378, 1341–1345. [Google Scholar]
  23. Van Trees, H.L. Detection, Estimation, and Modulation Theory: Part 1; John Wiley and Sons, Inc: Hoboken, NJ, USA, 2001. [Google Scholar]
  24. Amari, S.I. Natural Gradient Works Efficiently in Learning. Neural Comput. 1998, 10, 251–276. [Google Scholar]
  25. Pascanu, R.; Bengio, Y. Revisiting Natural Gradient for Deep Networks; Cornell University Library: Ithaca, NY, USA, 2014; pp. 1–18. [Google Scholar]
  26. Luo, S. Maximum Shannon entropy, minimum Fisher information, and an elementary game. Found. Phys. 2002, 32, 1757–1772. [Google Scholar]
  27. Langley, R.S. Probability Functionals for Self-Consistent and Invariant Inference: Entropy and Fisher Information. IEEE Trans. Inf. Theory 2013, 59, 4397–4407. [Google Scholar]
  28. Zegers, P.; Fuentes, A.; Alarcon, C. Relative Entropy Derivative Bounds. Entropy 2013, 15, 2861–2873. [Google Scholar]
  29. Cohen, M. The Fisher Information and Convexity. IEEE Trans. Inf. Theory 1968, 14, 591–592. [Google Scholar]
  30. Cover, T.; Thomas, J. Elements of Information Theory; John Wiley and Sons, Inc: Hoboken, NJ, USA, 1991. [Google Scholar]
  31. Frieden, B.R. Physics from Fisher Information: A Unification; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  32. Zamir, R. A Proof of the Fisher Information Inequality Via a Data Processing Argument. IEEE Trans. Inf. Theory 1998, 44, 1246–1250. [Google Scholar]
  33. Taubman, D.; Marcellin, M. JPEG2000: Image Compression Fundamentals, Standards, and Practice; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2002. [Google Scholar]
  34. Hogg, R.V.; Craig, A.T. Introduction to Mathematical Statistics; Prentice Hall: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  35. Frieden, B.R. Probability, Statistical Optics, and Data Testing; Springer-Verlag: Berlin, Germany, 1991. [Google Scholar]
  36. Otto, F.; Villani, C. Generalization of an Inequality by Talagrand and Links with the Logarithmic Sobolev Inequality. J. Funct. Anal. 2000, 173, 361–400. [Google Scholar]
  37. Yáñez, R.J.; Sánchez-Moreno, P.; Zarzo, A.; Dehesa, J.S. Fisher information of special functions and second-order differential equations. J. Math. Phys. 2008, 49, 082104. [Google Scholar] [Green Version]
  38. Gianazza, U.; Savaré, G.; Toscani, G. The wasserstein gradient flow of the fisher information and the quantum drift-diffusion equation. Arch. Ration. Mech. Anal. 2009, 194, 133–220. [Google Scholar]
  39. Verdú, S. Mismatched Estimation and Relative Entropy. IEEE Trans. Inf. Theory 2010, 56, 3712–3720. [Google Scholar]
  40. Hirata, M.; Nemoto, A.; Yoshida, H. An integral representation of the relative entropy. Entropy 2012, 14, 1469–1477. [Google Scholar]
  41. Sánchez-Moreno, P.; Zarzo, A.; Dehesa, J.S. Jensen divergence based on Fisher’s information. J. Phys. A: Math. Theor. 2012, 45, 125305. [Google Scholar]
  42. Yamano, T. Phase space gradient of dissipated work and information: A role of relative Fisher information. J. Math. Phys. 2013, 54, 1–9. [Google Scholar]
  43. Yamano, T. De Bruijn-type identity for systems with flux. Eur. Phys. J. B 2013, 86, 363. [Google Scholar]
  44. Bobkov, S.G.; Chistyakov, G.P.; Gotze, F. Fisher information and the central limit theorem. Probab. Theory Relat. Fields. 2014, 159, 1–59. [Google Scholar]
  45. Zegers, P. Some New Results on The Architecture, Training Process, and Estimation Error Bounds for Learning Machines. Ph.D. Thesis, The University of Arizona, Tucson, AZ, USA, 2002. [Google Scholar]
  46. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar]
  47. Lutwak, E.; Yang, D.; Zhang, G. Renyi entropy and generalized Fisher information. IEEE Trans. Inf. Theory 2005, 51, 473–478. [Google Scholar]
  48. Kagan, A.; Yu, T. Some Inequalities Related to the Stam Inequality. Appl. Math. 2008, 53, 195–205. [Google Scholar]
  49. Lutwak, E.; Lv, S.; Yang, D.; Zhang, G. Extensions of Fisher Information and Stam’s Inequality. IEEE Trans. Inf. Theory 2012, 58, 1319–1327. [Google Scholar]
  50. Bercher, J.F. On Generalized Cramér-Rao Inequalities, and an Extension of the Shannon-Fisher-Gauss Setting; Cornell University Library: Ithaca, NY, USA, 2014. [Google Scholar]
  51. Stein, M.; Mezghani, A.; Nossek, J.A. A Lower Bound for the Fisher Information Measure. IEEE Signal Process. Lett. 2014, 21, 796–799. [Google Scholar]
  52. Plastino, A.; Plastino, A. Symmetries of the Fokker-Planck equation and the Fisher-Frieden arrow of time. Phys. Rev. E 1996, 54, 4423–4426. [Google Scholar]
