Article

Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels

by
Agnieszka Pregowska
Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
Entropy 2021, 23(1), 92; https://doi.org/10.3390/e23010092
Submission received: 1 December 2020 / Revised: 26 December 2020 / Accepted: 8 January 2021 / Published: 10 January 2021
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between the Information Transmission Rates (ITR) of spike trains and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuations, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations strongly depends on the parameter s, defined as the sum of the transition probabilities between the no-spike and the spike states. An estimate of the Information Transmission Rate was obtained in terms of the signal fluctuations and the parameter s. It turned out that for s < 1, the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for 1 < s, the quotient ITR/σ is separated from 0. Additionally, it was shown that the quotient of the ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula are replaced by their polynomial approximations. My results suggest that in a noisier environment (1 < s), to obtain appropriate reliability and efficiency of transmission, IS with a higher tendency of transition from the no-spike to the spike state should be applied. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.


1. Introduction

Information transmission processes in natural environments are typically affected by signal fluctuations due to the presence of noise-generating factors [1]. This is particularly visible in biological systems, in particular during signal processing in the brain [2,3,4,5,6]. The physical information carriers in the brain are small electrical currents [7]. Specifically, the information is carried by sequences of action potentials, also called spike trains. Assuming some time resolution, MacKay and McCulloch proposed a natural encoding method that associates a binary sequence to each spike train [8]. Thus, the information is represented by a sequence of bits which, from a mathematical point of view, can be treated as a trajectory of some stochastic process [9,10].
In 1948, C. Shannon developed his famous Communication Theory, in which he introduced the concept of information and its quantitative measure [11]. The occurrences of both the input symbols transmitted through a communication channel and the output symbols are described by sequences of random variables, which define stochastic processes and form Information Sources [9,12]. Following this line, the Information Transmission Rate (ITR) is applied to characterize the amount of information transmitted per symbol.
Spike-train Information Sources are often modeled as Poisson point processes [13,14]. Poisson point processes provide a good approximation of the experimental data, especially when the refractory time scale or, more generally, any memory time scale in the spike generation mechanism is short compared to time scales such as the mean interspike interval. The use of Poisson processes to model spike trains has been proposed since the earliest descriptions [15,16], due to the proportional relationship between the mean and variance of multiple neuronal responses. The Poisson property has been observed in many experimental data sets [17]. On the other hand, it is known that such processes exhibit Markov properties [18,19]. This is because, when describing spike arrival times in these processes, primarily the current time and the time elapsed since the last spike are taken into account [20]. There are a number of papers devoted to the modeling of spike trains by different types of Markov processes (Markov Interval Models, Hidden Markov processes, Poisson point processes), successfully applied to a variety of experimental data [20,21,22].
The description of complex systems dynamics, from financial markets [23] to the neural networks of living beings [24,25], requires appropriate mathematical tools. Among them are stochastic processes, Information Theory, statistical methods, and, recently, fuzzy numbers [26,27]. In recent years, extensive effort has been devoted to limiting, or even exploiting, the effect of noise and fluctuations on information transmission efficiency, specifically by designing spiking neuronal networks (SNNs) with appropriate learning mechanisms [28,29,30,31,32]. Moreover, different models of a neuron have been used to address noise resistance [33]. Traditionally, the complex nature of systems is characterized, mostly due to the presence of noise, by using fluctuations, variations, or other statistical tools [18,19,34,35,36]. A natural measure of fluctuations should, in general, reflect oscillations around the mean value of the signal. Therefore, in most systems in physics, economics, and fluid mechanics, fluctuations are most often quantified using the Standard Deviation [37,38,39].
In this paper, I analyze the relationship between the Information Transmission Rate of signals coming from a time-discrete two-state Markov Information Source and the fluctuations of these signals. As a measure of spike-train fluctuations, I consider the Standard Deviation of the encoded spikes. Moreover, to gain a better insight, I have also analyzed the case in which the ITR is referred to the signal Variance V instead of the Standard Deviation σ.
In the analysis of neuronal coding, specifically when studying neuronal signals, finding relationships between the main characteristics of these signals is of great importance [10]. Addressing this issue in our previous papers, I successively analyzed the relations:
(1)
between signal Information Transmission Rates (also Mutual Information) and signal correlations [40]. I showed that neural binary coding cannot be captured by straightforward correlations between input and output signals.
(2)
between signals information transmission rates and signal firing rates (spikes’ frequencies) [41]. By examining this dependence, I have found the conditions in which temporal coding rather than rate coding is used. It turned out that this possibility depends on the parameter characterizing the transition from state to state.
(3)
between the information transmission rates of (auto)correlated signals coming from Markov information sources and the information transmission rates of signals coming from the corresponding Bernoulli processes. Here, “corresponding” means the Bernoulli process with the stationary distribution of the given Markov process [42]. I have shown that, in the case of correlated signals, the loss of information is relatively small, and thus temporal codes, which are more energetically efficient, can effectively replace rate codes. These results were confirmed by experiments.
In this paper, I consider the next important issue, namely, the relation between the Information Transmission Rates of signals and the fluctuations of these signals [40]. I found that the character of the relation between ITR and signal fluctuations also strongly depends on the parameter s. It turned out that for small s (s < 1), the quotient ITR/σ has a maximum and tends to zero as the probability of transition from the no-spike state to the spike state tends to 0 (although this probability never reaches 0), while for large enough s, the quotient ITR/σ is bounded from below. A similar result appears when the Shannon entropy formula is replaced by appropriate polynomials.
On the other hand, I found that when I refer the quotient ITR/σ to σ once more, i.e., when I consider, in fact, the quotient ITR/V, this quotient behaves in a completely different way, and its behavior is not regular. Specifically, I observed that for 1 < s, there is some range of the parameter s for which ITR/V has a few local extrema, in contrast to the case of ITR/σ.
The paper is organized as follows. In Section 2, I briefly recall the concepts of Shannon Information Theory (entropy, information, binary Information Sources, Information Transmission Rate) and fluctuation measures (Standard Deviation and Root Mean Square). In Section 3, I analyze the quotients ITR/σ and ITR/V. Section 4 contains the discussion and final conclusions.

2. Theoretical Background and Methods

To introduce the necessary notation, I briefly recall Shannon Information Theory’s basic concepts [9,11,12], i.e., Information, Entropy, Information Source, and Information Transmission Rate.

2.1. Shannon’s Entropy and Information Transmission Rate

Let $Z^L$ be the set of all words of length L built of symbols (letters) from some finite alphabet Z. Each word $w \in Z^L$ can be treated as an encoded message sent by an Information Source $\mathcal{Z}$, being a stationary stochastic process. If $P(w)$ denotes the probability that the word $w \in Z^L$ occurs, then the information in the Shannon sense carried by this word is defined as
$$I(w) := -\log_2 P(w).$$
This means that less probable events carry more information. Thus, the average information of the random variable $Z^L$ associated with the words of length L is called the Shannon block entropy and is given by
$$H(Z^L) := -\sum_{w \in Z^L} P(w)\,\log_2 P(w).$$
The appropriate measure for the estimation of the transmission efficiency of an Information Source $\mathcal{Z}$ is the information transmitted on average by a single symbol, i.e., the ITR [9,12]
$$ITR^{(L)}(\mathcal{Z}) := \frac{1}{L}\, H(Z^L),$$
$$ITR(\mathcal{Z}) = \lim_{L \to \infty} \frac{1}{L}\, H(Z^L).$$
This limit exists if and only if the stochastic process Z is stationary [9].
In the special case of a two-letter alphabet Z = {0, 1} and words of length L = 1, I introduce the following notation:
$$H_2(p) := H(Z^1) = -p\,\log_2 p - (1 - p)\,\log_2(1 - p),$$
where P(1) = p and P(0) = 1 − p are the associated probabilities. This is, in fact, the formula for the entropy rate of a Bernoulli source [12]. The index 2 in (5) indicates that the logarithm is taken with base 2, meaning that the information is expressed in bits.
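As an illustration, the binary entropy $H_2(p)$ can be evaluated numerically with a minimal sketch such as the one below (Python; the function name h2 is purely illustrative and not part of the original formulation).

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy H_2(p) in bits, with the convention 0*log(0) = 0."""
    p = np.asarray(p, dtype=float)
    q = 1.0 - p
    with np.errstate(divide="ignore", invalid="ignore"):
        val = -p * np.log2(p) - q * np.log2(q)
    return np.where((p > 0) & (p < 1), val, 0.0)

print(h2(0.5))   # 1.0 bit per symbol: a fair coin is maximally informative
print(h2(0.11))  # ~0.5 bits per symbol
```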

2.2. Information Sources

In general, Information Sources are modeled as stationary stochastic processes [9,12]. The information is represented by trajectories of such processes. Here, to study the relation between the Information Transmission Rate (ITR) and the fluctuations of the trajectories, I consider Information Sources modeled as two-state Markov processes. The trajectories of these processes can be treated as encoded spike trains [3,10,43]. The commonly accepted natural encoding procedure leads to binary sequences [10,43]. Spike trains are, in fact, the main objects that carry information [3,7]. Among the Markov processes, I additionally consider, as a special case, the Bernoulli processes.

2.2.1. Information Sources—Markov Processes

I consider a time-discrete, two-state Markov process $\mathcal{M}$, defined by a set of conditional probabilities $p_{j|i}$ describing the transition from state i to state j, where i, j = 0, 1, and by the initial probabilities $P_0(0)$, $P_0(1)$. The Markov transition probability matrix P can be written as
$$P := \begin{bmatrix} p_{0|0} & p_{0|1} \\ p_{1|0} & p_{1|1} \end{bmatrix} = \begin{bmatrix} 1 - p_{1|0} & p_{0|1} \\ p_{1|0} & 1 - p_{0|1} \end{bmatrix}.$$
Each column of the transition probability matrix P sums to 1 (i.e., P is a stochastic matrix [9]).
The time evolution of the state probabilities is governed by the Master Equation [34]
$$\begin{bmatrix} P_{n+1}(0) \\ P_{n+1}(1) \end{bmatrix} = \begin{bmatrix} 1 - p_{1|0} & p_{0|1} \\ p_{1|0} & 1 - p_{0|1} \end{bmatrix} \cdot \begin{bmatrix} P_{n}(0) \\ P_{n}(1) \end{bmatrix},$$
where n stands for time and $P_n(0)$, $P_n(1)$ are the probabilities of finding states 0 and 1 at time n, respectively. The stationary solution of (7) is given by
$$\begin{bmatrix} P_{eq}(0) \\ P_{eq}(1) \end{bmatrix} = \begin{bmatrix} \dfrac{p_{0|1}}{p_{0|1} + p_{1|0}} \\[2mm] \dfrac{p_{1|0}}{p_{0|1} + p_{1|0}} \end{bmatrix}.$$
It is known [9,12] that for Markov process M , the Information Transmission Rate as defined by (4) is of the following form
$$ITR^{\mathcal{M}} = P_{eq}(0)\, H(p_{1|0}) + P_{eq}(1)\, H(p_{0|1}).$$
In previous papers [40,41,42], when I studied the relation between ITRs and firing rates, and when I compared the ITR for Markov processes and for the corresponding Bernoulli processes, I introduced a parameter s, which can be interpreted as the tendency of a transition from the no-spike state (“0”) to the spike state (“1”) and vice versa:
$$s := p_{0|1} + p_{1|0}.$$
It turned out that this parameter plays an essential role in the considerations of this paper as well. Note that $s = 2 - \mathrm{tr}\, P$ and $0 \le s \le 2$. One can observe that a two-state Markov process is a Bernoulli process if and only if s = 1.
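For concreteness, here is a minimal numerical sketch of the quantities introduced above: the stationary distribution (8), the entropy rate ITR of the two-state Markov process, and the jumping parameter s. The function names and example values are illustrative only.

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits, with the convention 0*log(0) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def markov_itr(p10, p01):
    """Stationary distribution and entropy rate of the two-state Markov chain
    with transition probabilities p10 = p(1|0) and p01 = p(0|1)."""
    s = p10 + p01                    # jumping parameter, s = 2 - tr(P)
    peq0, peq1 = p01 / s, p10 / s    # stationary probabilities of states 0 and 1
    itr = peq0 * h2(p10) + peq1 * h2(p01)   # entropy rate in bits per symbol
    return peq0, peq1, s, itr

peq0, peq1, s, itr = markov_itr(p10=0.2, p01=0.5)
print(f"P_eq(0)={peq0:.3f}  P_eq(1)={peq1:.3f}  s={s:.2f}  ITR={itr:.4f} bits/symbol")
```

For s = 1 (e.g., p10 = 0.3, p01 = 0.7) the chain reduces to a Bernoulli process, and the computed entropy rate coincides with $H_2(p_{1|0})$, in agreement with the remark above.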

2.2.2. Information Sources—Bernoulli Process Case

The Bernoulli processes play a special role among the Markov processes. A Bernoulli process is a stationary stochastic process $\mathcal{Z} = (Z_i)$, i = 1, 2, …, formed by binary, identically distributed, and independent random variables $Z_i$. In the case of the encoded spike trains, I assume that the corresponding process (to be more precise, its trajectories) takes successively the value 1 (when a spike has arrived in the bin) or 0 (when a spike has not arrived). I assume that, for a given size of the applied time bin (which depends, in turn, on the assumed time resolution), spike trains are encoded [44] in such a way that 1 is generated with probability p and 0 is generated with probability q, where q is equal to 1 − p. Following the definition, the Information Transmission Rate (3) of the Bernoulli process is
$$ITR^{\mathcal{B}}(p, q) = -p\,\log_2 p - q\,\log_2 q = H_2(p).$$

2.2.3. Generalized Entropy Variants

The form of the entropy H was derived under the assumptions of monotonicity, joint entropy and continuity properties, and the Grouping Axiom. In the classical case of the entropy rate $H^{\mathcal{M}}$ for the Markov process, the terms $H(p_{1|0})$ and $H(p_{0|1})$ in Formula (11) are understood in the Shannon sense (2). To obtain a better insight into the asymptotic behavior of the relations studied in this paper, I additionally consider Formula (11) with H replaced by its Taylor approximation (10 terms). I also studied the interesting case in which, instead of H, I used the well-known unimodal map U(p) = 4p(1 − p) [45], which is, in fact, close to H in the supremum norm [46] (Figure 1). This idea follows the research direction of generalized concepts of entropy developed, starting from Rényi [47], by many authors [48,49,50,51,52]. Figure 1 shows the approximation of the entropy (11) by polynomials: the unimodal map (black dashed line) and the first 10 terms of the Taylor series of H (gray dash-dot line). The square root of the unimodal map (black dotted line) is also included in this figure.
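The comparison in Figure 1 can be reproduced with a short sketch such as the one below. The closed-form series used for the truncated expansion, $H_2(p) = 1 - \frac{1}{2\ln 2}\sum_{k\ge 1}\frac{(1-2p)^{2k}}{k(2k-1)}$, is the standard expansion around p = 1/2; the paper only states that 10 terms of a Taylor approximation are used, so the expansion point is an assumption of this sketch.

```python
import numpy as np

def h2(p):
    """Exact binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def unimodal(p):
    """Unimodal map U(p) = 4p(1-p), a polynomial surrogate for H."""
    return 4.0 * p * (1.0 - p)

def h2_series(p, n_terms=10):
    """Truncated series of H_2 around p = 1/2 (expansion point assumed here)."""
    x = 1.0 - 2.0 * p
    k = np.arange(1, n_terms + 1)
    return 1.0 - np.sum(x ** (2 * k) / (k * (2 * k - 1))) / (2.0 * np.log(2.0))

for p in (0.1, 0.3, 0.5):
    print(f"p={p}:  H={h2(p):.4f}  U={unimodal(p):.4f}  Taylor(10)={h2_series(p):.4f}")
```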

2.3. Fluctuations Measure

It is commonly accepted that for a given random variable X, the fluctuations of the values of this random variable around its average can be characterized by the Standard Deviation σ [35]
$$\sigma := \left( E\,(X - EX)^2 \right)^{\frac{1}{2}},$$
where the symbol E denotes the average taken over the probability distribution of the values reached by X.
Considering a stochastic process $\mathcal{Y} = (X_k)$, k = 1, 2, 3, …, where the $X_k$ are random variables, each with the same probability distribution as X, the fluctuation of the trajectories of this process can be estimated by the Root-Mean-Square (RMS). For a given trajectory $(x_k)$, k = 1, …, n, the RMS is defined as the root of the arithmetic mean of the squared deviations, i.e.,
$$RMS(\mathcal{Y}) := \left( \frac{1}{n} \sum_{k=1}^{n} \left( x_k - x_n^{avr} \right)^2 \right)^{\frac{1}{2}},$$
where $x_n^{avr}$ is the average value, i.e., $x_n^{avr} = \frac{1}{n}\sum_{k=1}^{n} x_k$. Note that from this formula, the form of σ for Markov processes can be derived by using the stationary distribution (8) in Formula (12).
The Standard Deviation σ of a random variable depends, in fact, not only on its probability distribution but also on the values taken by this random variable. Here, I am interested in the fluctuation of bits, i.e., whether a spike occurs in a given bin or not. Thus, I have limited the considerations to the values 0 and 1.
To gain a better insight into the relation between ITR and signal/bit fluctuations, I also included an analysis of the quotient ITR/V. This is interesting due to the specific form of the Variance for the Bernoulli process, which leads to interesting observations when one considers, for example, the unimodal map to approximate the entropy (5). Moreover, when studying ITR/V, I, in fact, refer the quotient ITR/σ to σ, since simply (ITR/σ)/σ = ITR/V.
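The identification of the trajectory fluctuation with the stationary Standard Deviation can be checked by simulation. The sketch below generates a long binary trajectory of the two-state chain started from the stationary distribution (8) and compares its empirical RMS with $\sqrt{P_{eq}(0)\,P_{eq}(1)}$; the function name and parameter values are illustrative only.

```python
import numpy as np

def simulate_markov(p10, p01, n, seed=0):
    """Simulate a binary trajectory of the two-state Markov chain,
    with the initial state drawn from the stationary distribution."""
    rng = np.random.default_rng(seed)
    s = p10 + p01
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p10 / s              # P_eq(1) = p10 / s
    for k in range(1, n):
        leave = p10 if x[k - 1] == 0 else p01  # probability of leaving the current state
        x[k] = x[k - 1] ^ int(rng.random() < leave)
    return x

p10, p01 = 0.2, 0.5
x = simulate_markov(p10, p01, n=200_000)
rms = np.sqrt(np.mean((x - x.mean()) ** 2))    # empirical RMS of the bit stream
sigma_m = np.sqrt(p01 * p10) / (p10 + p01)     # sqrt(P_eq(0) * P_eq(1))
print(f"empirical RMS = {rms:.4f}   stationary sigma = {sigma_m:.4f}")
```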

3. Results

In this section, I study the quotients ITR/σ and ITR/V as functions of the transition probability $p_{1|0}$ from the no-spike state 0 to the spike state 1, for a fixed parameter s (10). Note that the probability $0 < p_{1|0} < 1$ and the parameter $0 < s < 2$ uniquely determine the transition probability matrix P (6) and, consequently, completely define the Markov process $\mathcal{M}$, provided that the initial probabilities $P_0(0)$, $P_0(1)$ are chosen. Here, to obtain a stationary process, I assume initial probabilities of the form (8). To study the relation between ITR and σ, I consider the quotient of these two quantities, which seems natural and easy to interpret. This idea was successfully applied in [40,41,42], which compared the ITR with signal correlations and with the frequency of jumps. I found that the key role is played by the parameter s, whose value determines the qualitative and quantitative form of these relations. In fact, both ITR and σ depend on s, and this is the reason why I analyze the quotient for fixed s.

3.1. Information against Fluctuations for Two-State Markov Processes—General Case

I start the considerations from the most general form of the two-state Markov process. To analyze the quotients ITR/σ and ITR/V, I first express the Standard Deviation of the Markov process $\mathcal{M}$ in terms of the conditional probability $p_{1|0}$ and the parameter s.

3.1.1. Standard Deviation in the Markov Process Case

For a given Markov process $\mathcal{M}$, to evaluate its fluctuation, specifically to address its long-time behavior, one considers its corresponding stationary probabilities as defined by (8). Thus, in the limiting case, the Standard Deviation σ for the Markov process can be assumed as
$$\sigma^{\mathcal{M}} = \sqrt{P_{eq}(0)\cdot P_{eq}(1)}.$$
Fixing the parameter s and expressing $\sigma^{\mathcal{M}}$ as a function of the conditional probability $p_{1|0}$, I arrive at the following formula:
$$\sigma_s^{\mathcal{M}}(p_{1|0}) = \sqrt{\frac{p_{0|1}}{s}\cdot\frac{p_{1|0}}{s}} = \frac{\sqrt{(s - p_{1|0})\, p_{1|0}}}{s}.$$
Note that in the case of the Variance, $V_s^{\mathcal{M}}(p_{1|0}) = [\sigma_s^{\mathcal{M}}(p_{1|0})]^2$, I have a polynomial dependence on $p_{1|0}$ (keeping in mind that s is fixed).
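A quick numerical cross-check of this reparameterization, treating $(p_{1|0}, s)$ as the free parameters (a small sketch with illustrative values):

```python
import numpy as np

def sigma_s(p10, s):
    """Stationary standard deviation expressed through p10 and s = p10 + p01."""
    return np.sqrt((s - p10) * p10) / s

s, p10 = 0.8, 0.3
p01 = s - p10
peq0, peq1 = p01 / s, p10 / s
# both expressions give the same value (~0.4841)
print(sigma_s(p10, s), np.sqrt(peq0 * peq1))
```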

3.1.2. Relation between Information Transmission Rate ITR of Markov Process and Its Standard Deviation

Let us start by establishing the relation between the Standard Deviation and the ITR for the Bernoulli process; this means that, in our notation, s is equal to 1. Making use of the classical inequality $\ln x \le x - 1$ (for all x > 0) and performing a few simple operations, one can arrive at the inequality $\frac{ITR(p_{1|0})}{\sigma} \le 2\cdot\log_2 e$. To find the relation between the entropy rate $ITR^{\mathcal{M}}$ and $\sigma^{\mathcal{M}}$ in more general cases, one can consider the quotient
$$Q_\sigma^{\mathcal{M},s}(p_{1|0}) := \frac{ITR^{\mathcal{M},s}(p_{1|0})}{\sigma_s^{\mathcal{M}}(p_{1|0})}.$$
Note that $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ is a symmetric function with respect to the axis $p_{1|0} = \frac{s}{2}$, i.e.,
$$Q_\sigma^{\mathcal{M},s}(p_{1|0}) = Q_\sigma^{\mathcal{M},s}(s - p_{1|0}).$$
For $0 \le s \le 2$, I consider the quotient $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ in two cases, taking into account the range of $p_{1|0}$:
$$(A)\quad 0 \le s \le 1, \ \text{which implies}\ 0 \le p_{1|0} \le s,$$
$$(B)\quad 1 < s < 2, \ \text{which implies}\ s - 1 \le p_{1|0} \le 1.$$
Substituting (8), (10) and (14) into (16), I obtain
$$Q_\sigma^{\mathcal{M},s}(p_{1|0}) = \frac{\frac{p_{0|1}}{s}\, H(p_{1|0}) + \frac{p_{1|0}}{s}\, H(p_{0|1})}{\frac{\sqrt{(s - p_{1|0})\, p_{1|0}}}{s}}$$
and after simple calculations, I have
$$Q_\sigma^{\mathcal{M},s}(p_{1|0}) = \frac{\frac{s - p_{1|0}}{s}\, H(p_{1|0}) + \frac{p_{1|0}}{s}\, H(s - p_{1|0})}{\frac{\sqrt{(s - p_{1|0})\, p_{1|0}}}{s}} = \sqrt{\frac{s - p_{1|0}}{p_{1|0}}}\, H(p_{1|0}) + \sqrt{\frac{p_{1|0}}{s - p_{1|0}}}\, H(s - p_{1|0}).$$
One can check that for small $s \in (0, 1)$, i.e., in case (18), for a given fixed s, when $p_{1|0}$ tends to the interval bounds 0 or s, the quotient $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ tends to 0, i.e.,
$$\lim_{p_{1|0}\to 0^+} Q_\sigma^{\mathcal{M},s}(p_{1|0}) = \lim_{p_{1|0}\to s^-} Q_\sigma^{\mathcal{M},s}(p_{1|0}) = 0.$$
By the form of (20) and the symmetry property (17), it is clear that the quotient $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ reaches its maximum at the symmetry point $p_{1|0} = \frac{s}{2}$, and this maximum is equal to
$$Q_\sigma^{\mathcal{M},s}\!\left(\frac{s}{2}\right) = 2\, H\!\left(\frac{s}{2}\right).$$
One can check that in case (B), i.e., for $s \in (1, 2)$ and a given fixed s, when $p_{1|0}$ tends to $s - 1$ or to 1, the quotient $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ tends to $\frac{H(s-1)}{\sqrt{s-1}}$, i.e.,
$$\lim_{p_{1|0}\to (s-1)^+} Q_\sigma^{\mathcal{M},s}(p_{1|0}) = \lim_{p_{1|0}\to 1^-} Q_\sigma^{\mathcal{M},s}(p_{1|0}) = \frac{H(s-1)}{\sqrt{s-1}}.$$
Thus, for $s \in (1, 2)$ I have
$$\frac{H(s-1)}{\sqrt{s-1}} \le Q_\sigma^{\mathcal{M},s}(p_{1|0}) \le 2\, H\!\left(\frac{s}{2}\right).$$
Finally, I obtained an interesting estimate of the Information Transmission Rate ITR by the level of fluctuation σ:
$$\frac{H(s-1)}{\sqrt{s-1}}\,\sigma_s^{\mathcal{M}}(p_{1|0}) \le ITR^{\mathcal{M},s}(p_{1|0}) \le 2\, H\!\left(\frac{s}{2}\right)\sigma_s^{\mathcal{M}}(p_{1|0}).$$
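These bounds can be verified numerically. The sketch below scans $p_{1|0}$ over its admissible range for a fixed s and compares the observed maximum and boundary values of ITR/σ with $2H(s/2)$ and, for s > 1, with $H(s-1)/\sqrt{s-1}$; the grid resolution and the sample values of s are arbitrary choices of this illustration.

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def q_sigma(p10, s):
    """Quotient ITR/sigma for the two-state chain parameterized by (p10, s)."""
    p01 = s - p10
    itr = (p01 * h2(p10) + p10 * h2(p01)) / s
    sigma = np.sqrt(p01 * p10) / s
    return itr / sigma

for s in (0.5, 1.3):
    lo, hi = (1e-6, s - 1e-6) if s <= 1 else (s - 1 + 1e-6, 1 - 1e-6)
    grid = np.linspace(lo, hi, 20001)
    q = np.array([q_sigma(p, s) for p in grid])
    print(f"s={s}: max Q = {q.max():.4f}   2*H(s/2) = {2 * h2(s / 2):.4f}")
    if s > 1:
        print(f"       boundary Q = {q[0]:.4f}   H(s-1)/sqrt(s-1) = {h2(s - 1) / np.sqrt(s - 1):.4f}")
```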
Typical courses of $Q_\sigma^{\mathcal{M},s}(p_{1|0})$ for some values of the parameter s are shown in Figure 2. Column A is devoted to lower values of the jumping parameter, $0 \le s \le 1$, while column B presents the $Q_\sigma^{\mathcal{M},s}$ courses for higher values of the jumping parameter, $1 < s < 2$. Observe that for $1 < s < 2$, the curves intersect, contrary to the case $0 \le s \le 1$. This is mostly because the limiting value (24) is not a monotonic function of s, while the maximal value (23) is monotonic.
Note that for the approximation of the entropy H by polynomials, specifically by the unimodal map U and by the Taylor series T, the corresponding quotients $Q_\sigma^{U,s}$ and $Q_\sigma^{T,s}$ behave similarly to the case of the Shannon form of H (see Figure 2).

3.1.3. Relation between Information Transmission Rate ITR of Markov Process and Its Variance

To find how the Variance of the trajectories of the Markov Information Source affects the Information Transmission Rate, one should consider the modified quotient
$$Q_V^{\mathcal{M},s}(p_{1|0}) := \frac{ITR^{\mathcal{M}}(p_{1|0})}{V(p_{1|0})} = \frac{ITR^{\mathcal{M}}(p_{1|0})}{P_{eq}(0)\cdot P_{eq}(1)}.$$
Substituting (8) and (10) into (27), I obtain
$$Q_V^{\mathcal{M},s}(p_{1|0}) = \frac{\frac{p_{0|1}}{s}\, H(p_{1|0}) + \frac{p_{1|0}}{s}\, H(p_{0|1})}{\frac{p_{0|1}}{s}\cdot\frac{p_{1|0}}{s}} = s\left[\frac{H(p_{1|0})}{p_{1|0}} + \frac{H(s - p_{1|0})}{s - p_{1|0}}\right].$$
First, observe that, as in the Standard Deviation case, I have the symmetry property around the value $\frac{s}{2}$, i.e.,
$$Q_V^{\mathcal{M},s}(p_{1|0}) = Q_V^{\mathcal{M},s}(s - p_{1|0}).$$
By this symmetry, it is clear that $Q_V^{\mathcal{M},s}(p_{1|0})$ reaches an extremum at the point $p_{1|0} = \frac{s}{2}$, where it is equal to $4\, H\!\left(\frac{s}{2}\right)$.
Observe that in case (A), i.e., for a given fixed $s \in (0, 1)$, when $p_{1|0}$ tends to the interval bounds, i.e., to 0 or to s, the quotient $Q_V^{\mathcal{M},s}(p_{1|0})$, in contrast to $Q_\sigma^{\mathcal{M},s}(p_{1|0})$, tends to infinity, i.e.,
$$\lim_{p_{1|0}\to 0^+} Q_V^{\mathcal{M},s}(p_{1|0}) = \lim_{p_{1|0}\to s^-} Q_V^{\mathcal{M},s}(p_{1|0}) = +\infty.$$
Thus, it is clear that $Q_V^{\mathcal{M},s}(p_{1|0})$ reaches a minimum at the point $p_{1|0} = \frac{s}{2}$.
In case (B), it turned out that the quotient $Q_V^{\mathcal{M},s}(p_{1|0})$, for any fixed $s \in (1, 2)$, is bounded both from below and from above. I have
$$\lim_{p_{1|0}\to (s-1)^+} Q_V^{\mathcal{M},s}(p_{1|0}) = \lim_{p_{1|0}\to 1^-} Q_V^{\mathcal{M},s}(p_{1|0}) = \frac{s\, H(s-1)}{s-1}.$$
Numerical calculations showed that for parameters $s > s_0$, the point $p_{1|0} = \frac{s}{2}$ is a minimum, while for $s < s_0$ this point is a maximum, where the critical parameter $s_0 \approx 1.33$ can be calculated from the equality
$$\frac{s_0\, H(s_0 - 1)}{s_0 - 1} = 4\, H\!\left(\frac{s_0}{2}\right).$$
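The critical value $s_0$ can be located with a few lines of code, e.g., by bisection on the difference of the two sides of the equality above (a sketch; the bracketing interval is chosen by inspection):

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def f(s):
    """f(s0) = 0 defines the critical parameter: s*H(s-1)/(s-1) - 4*H(s/2)."""
    return s * h2(s - 1.0) / (s - 1.0) - 4.0 * h2(s / 2.0)

# f is positive near s = 1 (the boundary value blows up) and negative near s = 1.9,
# so a sign change is bracketed in (1.05, 1.9)
a, b = 1.05, 1.9
for _ in range(60):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0.0:
        b = m
    else:
        a = m
print(f"s0 ~ {0.5 * (a + b):.4f}")   # ~1.33, in agreement with the value quoted above
```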
Typical courses of $Q_V^{\mathcal{M},s}(p_{1|0})$ for some values of the parameter s are shown in Figure 3. Panel A (left column) is devoted to lower values of the jumping parameter, $0 \le s \le 1$, while panel B presents graphs of $Q_V^{\mathcal{M},s}(p_{1|0})$ for higher values of the jumping parameter, $1 < s < 2$.
It turned out that the approximation of the entropy H by polynomials, namely by the unimodal map and by the Taylor series, leads to completely different behavior of $Q_V^{\mathcal{M},s}(p_{1|0})$. Note that for the approximation of H in (2) by the unimodal map, the quotient $Q_V^{U,s}(p_{1|0})$ is, for each s, a constant equal to $4s(2-s)$, while for the approximation by the Taylor series (10 terms), the quotient $Q_V^{T,s}(p_{1|0})$ preserves a course similar to that of the Shannon form of H.
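The constancy of the unimodal-map variant can be seen directly: substituting U(p) = 4p(1 − p) into the bracket of (28) gives $4(1 - p_{1|0}) + 4(1 - s + p_{1|0}) = 8 - 4s$, hence $Q_V^{U,s} = 4s(2-s)$ for every $p_{1|0}$. A tiny numerical check (illustrative values only):

```python
def unimodal(p):
    """Unimodal map U(p) = 4p(1-p) substituted for H in the entropy-rate formula."""
    return 4.0 * p * (1.0 - p)

def q_v_unimodal(p10, s):
    """Q_V with H replaced by the unimodal map: s*[U(p10)/p10 + U(s-p10)/(s-p10)]."""
    return s * (unimodal(p10) / p10 + unimodal(s - p10) / (s - p10))

s = 1.4
for p10 in (0.45, 0.7, 0.95):
    print(p10, q_v_unimodal(p10, s), 4.0 * s * (2.0 - s))   # constant 3.36 for every p10
```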

4. Discussion and Conclusions

In this paper, I studied the relation between the Information Transmission Rate carried by sequences of bits and the fluctuations of these bits. These sequences originate from Information Sources modeled by Markov processes. During the last 30 years, authors have modeled neuronal activity by different variants of Markov processes, e.g., inhomogeneous Markov Interval Models and Hidden Markov Processes [20,21,22]. The Poisson point processes commonly used to model experimental data of neuronal activity also exhibit the Markov property [18,19]. The results show that the qualitative and quantitative character of the relation between the Information Transmission Rate and the fluctuations of the signal bits strongly depends on the jumping parameter s, which we introduced in our previous papers [41,42]. This parameter characterizes the tendency of the process to transition from state to state; in some sense, it describes the variability of the signals.
It turned out that, similarly as in our previous papers, in which we studied the relations between Information Transmission Rates, spike correlations, and the frequencies of spike appearance, the critical value of s is equal to 1, which corresponds to the Bernoulli process. For small s (s < 1), the quotient ITR/σ can reach 0, while for larger s (s > 1), this quotient is always separated from 0. Specifically, for 1 < s < 1.7, the ITR is always above the level of fluctuations (i.e., σ < ITR), independent of the transition probabilities that form this s. This highlights the interesting fact that for large enough s, the information is never completely lost, independent of the level of fluctuations.
On the other hand, for each 0 < s < 2, the quotient ITR/σ is bounded from above by 2, and for each s its maximum is reached at $p_{1|0} = \frac{s}{2}$. Thus, the maximum is reached when $p_{1|0} = p_{0|1}$. This means that, when one compares ITR to σ, the most effective transmission occurs for symmetric communication channels. Note that the capacity C(s) of such a channel is equal to
$$C(s) = 1 - H\!\left(\frac{s}{2}\right).$$
Additionally, it turned out that ITR/σ behaves similarly when the Shannon entropy H is approximated by polynomials, specifically by the unimodal map and by its Taylor series. Observe that for all s these quotients, independent of the approximation applied, reach their maximum at $p_{1|0}$ equal to $\frac{s}{2}$, monotonically increase for $p_{1|0}$ less than $\frac{s}{2}$, and monotonically decrease for $p_{1|0}$ greater than $\frac{s}{2}$.
For a better insight into the relation between ITR and signal variability, I also referred ITR to the Variance. I observed that the behavior of ITR/V differs significantly from the behavior of ITR/σ. For each s < 1, the quotient ITR/V can tend to infinity and is separated from 0. For 1 < s < 2, it is bounded from above and never reaches 0 for any s. However, it behaves in a more complex way than ITR/σ, having even three local extreme points, as is visible, e.g., for s = 1.3 and s = 1.5. On the other hand, approximations of the Shannon entropy H by polynomials, such as the unimodal map or its Taylor series, lead, contrary to the case of ITR/σ, to significant qualitative differences in the behavior of ITR/V.
To summarize, the results obtained show that for Markov information sources, regardless of the level of fluctuation, the Information Transmission Rate does not reduce to zero, provided that the transition parameter s is sufficiently large. This means that to obtain more reliable communication, the spike trains should have a higher tendency of transition from the no-spike state to the spike state and vice versa. The inequality (26) allows for an estimation of the amount of information being transmitted by the level of signal fluctuations. Signal fluctuations characterize, in fact, the level of noise.
The results are presented in the context of signal processing in the brain, since information transmission is in this case a natural and fundamental phenomenon. However, the results have, in fact, a general character and can be applied to any communication system modeled by two-state Markov processes.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Huk, A.C.; Hart, E. Parsing signal and noise in the brain. Science 2019, 364, 236–237. [Google Scholar] [PubMed]
  2. Mainen, Z.F.; Sejnowski, T.J. Reliability of spike timing in neocortical neurons. Science 1995, 268, 1503–1506. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. van Hemmen, J.L.; Sejnowski, T. 23 Problems in Systems Neurosciences; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  4. Deco, G.; Jirsa, V.; McIntosh, A.R.; Sporns, O.; Kötter, R. Key role of coupling, delay, and noise in resting brain fluctuations. Proc. Natl. Acad. Sci. USA 2009, 106, 10302–10307. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Fraiman, D.; Chialvo, D.R. What kind of noise is brain noise: Anomalous scaling behavior of the resting brain activity fluctuations. Front. Physiol. 2012, 3, 1–11. [Google Scholar] [CrossRef] [Green Version]
  6. Gardella, C.; Marre, O.; Mora, T. Modeling the correlated activity of neural populations: A review. Neural Comput. 2019, 31, 233–269. [Google Scholar] [CrossRef] [Green Version]
  7. Adrian, E.D.; Zotterman, Y. The impulses produced by sensory nerve endings. J. Physiol. 1926, 61, 49–72. [Google Scholar] [CrossRef]
  8. MacKay, D.; McCulloch, W.S. The limiting information capacity of a neuronal link. Bull. Math. Biol. 1952, 14, 127–135. [Google Scholar] [CrossRef]
  9. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991. [Google Scholar]
  10. Rieke, F.; Warland, D.D.; de Ruyter van Steveninck, R.R.; Bialek, W. Spikes: Exploring the Neural Code; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  11. Shannon, C.E. A mathematical theory of communication. Bell Labs Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  12. Ash, R.B. Information Theory; John Wiley and Sons: New York, NY, USA, 1965. [Google Scholar]
  13. Teich, M.C.; Khanna, S.M. Pulse-number distribution for the neural spike train in the cat’s auditory nerve. J. Acoust. Soc. Am. 1985, 77, 1110–1128. [Google Scholar] [CrossRef]
  14. Daley, D.H.; Vere-Jones, D. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods; Springer: Berlin, Germany, 2003. [Google Scholar]
  15. Werner, G.; Mountcastle, V.B. Neural activity in mechanoreceptive cutaneous afferents: Stimulusresponse relations, weber functions, and information transmission. J. Neurophysiol. 1965, 28, 359–397. [Google Scholar] [CrossRef]
  16. Tolhurst, D.J.; Movshon, J.A.; Thompson, I.D. The dependence of response amplitude and variance of cat visual cortical neurones on stimulus contrast. Exp. Brain Res. 1981, 41, 414–419. [Google Scholar] [CrossRef] [PubMed]
  17. de Ruyter van Steveninck, R.R.; Lewen, G.D.; Strong, S.P.; Koberle, R.; Bialek, W. Reproducibility and variability in neural spike trains. Science 1997, 275, 1805–1808. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Ross, S.M. Stochastic Processes; Wiley-Interscience: New York, NY, USA, 1996. [Google Scholar]
  19. Papoulis, A.; Pillai, S.U. Probability, Random Variables, and Stochastic Processes; Tata McGraw-Hill Education: New York, NY, USA, 2002. [Google Scholar]
  20. Kass, R.E.; Ventura, V. A spike-train probability model. Neural Comput. 2001, 13, 1713–1720. [Google Scholar] [CrossRef] [PubMed]
  21. Radons, G.; Becker, J.D.; Dülfer, B.; Krüger, J. Analysis, classification, and coding of multielectrode spike trains with hidden Markov models. Biol. Cybern. 1994, 71, 359–373. [Google Scholar] [CrossRef]
  22. Berry, M.J.; Meister, M. Refractoriness and neural precision. J. Neurosci. 1998, 18, 2200–2211. [Google Scholar] [CrossRef] [Green Version]
  23. Bouchaud, J.P. Fluctuations and response in financial markets: The subtle nature of ‘random’ price changes. Quant. Financ. 2004, 4, 176–190. [Google Scholar] [CrossRef]
  24. Knoblauch, A.; Palm, G. What is signal and what is noise in the brain? Biosystems 2005, 79, 83–90. [Google Scholar] [CrossRef] [Green Version]
  25. Mishkovski, I.; Biey, M.; Kocarev, L. Vulnerability of complex networks. J. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 341–349. [Google Scholar] [CrossRef] [Green Version]
  26. Zadeh, L. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  27. Prokopowicz, P. The use of ordered fuzzy numbers for modeling changes in dynamic processe. Inf. Sci. 2019, 470, 1–14. [Google Scholar] [CrossRef]
  28. Zhang, M.L.; Qu, H.; Xie, X.R.; Kurths, J. Supervised learning in spiking, neural networks with noise-threshold. Neurocomputing 2017, 219, 333–349. [Google Scholar] [CrossRef] [Green Version]
  29. Lin, Z.; Ma, D.; Meng, J.; Chen, L. Relative ordering learning in spiking neural network for pattern recognition. Neurocomputing 2018, 275, 94–106. [Google Scholar] [CrossRef]
  30. Antonietti, A.; Monaco, J.; D’Angelo, E.; Pedrocchi, A.; Casellato, C. Dynamic redistribution of plasticity in a cerebellar spiking neural network reproducing an associative learning task perturbed by tms. Int. J. Neural Syst. 2018, 28, 1850020. [Google Scholar] [CrossRef] [PubMed]
  31. Kim, R.; Li, Y.; Sejnowski, T.J. Simple framework for constructing functional spiking recurrent neural networks. Proc. Natl. Acad. Sci. USA 2019, 116, 22811–22820. [Google Scholar] [CrossRef]
  32. Sobczak, F.; He, Y.; Sejnowski, T.J.; Yu, X. Predicting the fmri signal fluctuation with recurrent neural networks trained on vascular network dynamics. Cereb. Cortex 2020, 31, 826–844. [Google Scholar] [CrossRef]
  33. Qi, Y.; Wang, H.; Liu, R.; Wu, B.; Wang, Y.M.; Pan, G. Activity-dependent neuron model for noise resistance. Neurocomputing 2019, 357, 240–247. [Google Scholar] [CrossRef]
  34. van Kampen, N.G. Stochastic Processes in Physics and Chemistry; Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  35. Feller, W. An Introduction to Probability Theory and Its Applications; Wiley Series Probability and Statistics; John Wiley and Sons: New York, NY, USA, 1958. [Google Scholar]
  36. Salinas, E.; Sejnowski, T.J. Correlated neuronal activity and the flow of neural information. Nat. Rev. Neurosci. 2001, 2, 539–550. [Google Scholar] [CrossRef] [Green Version]
  37. Frisch, U. Turbulence; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
  38. Salinas, S.R.A. Introduction to Statistical Physics; Springer: Berlin, Germany, 2000. [Google Scholar]
  39. Kittel, C. Elementary Statistical Physics; Dover Publications, Inc.: Mineola, NY, USA, 2004. [Google Scholar]
  40. Pregowska, A.; Szczepanski, J.; Wajnryb, E. Mutual information against correlations in binary communication channels. BMC Neurosci. 2015, 16, 32. [Google Scholar] [CrossRef] [Green Version]
  41. Pregowska, A.; Szczepanski, J.; Wajnryb, E. Temporal code versus rate code for binary Information Sources. Neurocomputing 2016, 216, 756–762. [Google Scholar] [CrossRef] [Green Version]
  42. Pregowska, A.; Kaplan, E.; Szczepanski, J. How Far can Neural Correlations Reduce Uncertainty? Comparison of Information Transmission Rates for Markov and Bernoulli Processes. Int. J. Neural Syst. 2019, 29. [Google Scholar] [CrossRef] [Green Version]
  43. Amigo, J.M.; Szczepański, J.; Wajnryb, E.; Sanchez-Vives, M.V. Estimating the entropy rate of spike trains via Lempel-Ziv complexity. Neural Comput. 2004, 16, 717–736. [Google Scholar] [CrossRef] [PubMed]
  44. Bialek, W.; Rieke, F.; Van Steveninck, R.D.R.; Warl, D. Reading a neural code. Science 1991, 252, 1854–1857. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Collet, P.; Eckmann, J.P. Iterated Maps on the Interval as Dynamical Systems; Reports on Progress in Physics; Birkhauser: Basel, Switzerland, 1980. [Google Scholar]
  46. Rudin, W. Principles of Mathematical Analysis; McGraw-Hill: New York, NY, USA, 1964. [Google Scholar]
  47. Renyi, A. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; pp. 547–561. [Google Scholar]
  48. Amigo, J.M. Permutation Complexity in Dynamical Systems: Ordinal Patterns, Permutation Entropy and All That; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  49. Crumiller, M.; Knight, B.; Kaplan, E. The measurement of information transmitted by a neural population: Promises and challenges. Entropy 2013, 15, 3507–3527. [Google Scholar] [CrossRef] [Green Version]
  50. Bossomaier, T.; Barnett, L.; Harré, M.; Lizier, J.T. An Introduction to Transfer Entropy, Information Flow in Complex Systems; Springer: Berlin, Germany, 2016. [Google Scholar]
  51. Amigo, J.M.; Balogh, S.G.; Hernandez, S. A Brief Review of Generalized Entropies. Entropy 2018, 20, 813. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Jetka, T.; Nienałtowski, K.; Filippi, S.; Stumpf, M.P.H.; Komorowski, M. An information-theoretic framework for deciphering pleiotropic and noisy biochemical signaling. Nat. Commun. 2018, 9, 1–9. [Google Scholar] [CrossRef]
Figure 1. Approximation of the Shannon entropy (black solid lines) using the Taylor series expression (gray dash-dot line, 10 first terms), unimodal function (black dash line), and unimodal map root (black point line).
Figure 2. The quotient ITR/σ as a function of the transition probability $p_{1|0}$ for chosen values of the jumping parameter s: (A) for parameters $0 \le s \le 1$, due to (16), the range of $p_{1|0}$ is [0, s], and (B) for $1 < s < 2$, according to (17), the range of $p_{1|0}$ is $s - 1 \le p_{1|0} \le 1$. The courses of the quotients $Q_\sigma^{H,s}(p_{1|0})$, $Q_\sigma^{U,s}(p_{1|0})$, $Q_\sigma^{T,s}(p_{1|0})$ for the Shannon form, unimodal map, unimodal map root, and Taylor series applied as H in Formula (9) are presented.
Figure 3. The quotient ITR/V as a function of the transition probability $p_{1|0}$ for the chosen values of the jumping parameter s: (A) for parameters $0 \le s \le 1$, due to (16), the range is $0 < p_{1|0} < s$, and (B) for parameters $1 < s < 2$, due to (16), the range is $s - 1 \le p_{1|0} \le 1$. Observe that ITR/V has a completely different course than that of the quotient ITR/σ presented in Figure 2.