Chaos, Solitons & Fractals

Volume 40, Issue 5, 15 June 2009, Pages 2469-2474

Nonsymmetric entropy and maximum nonsymmetric entropy principle

https://doi.org/10.1016/j.chaos.2007.10.039

Abstract

Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann’s entropy and Shannon’s entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis’s entropy, for deriving power laws.

Introduction

Entropy (Boltzmann’s entropy) is the most important concept in statistical physics. Shannon’s entropy, which measures the degree of uncertainty of information, is likewise the most basic concept in information theory. The concept of entropy has been studied and applied extensively and deeply (see, for example, [1], [2], [3], [4], [5], [6], [7], [8]). For instance, entropy has important applications in the E-infinity theory of El Naschie and in superstring theory [4], [5], [6], [7]. At the same time, the concept of entropy has been generalized in various forms. For example, in 1988, Tsallis [8] proposed a generalized entropy $S_q = k\,\frac{1-\sum_{i=1}^{m}p_i^q}{q-1}$ to deal with nonextensive statistics. Tsallis’s entropy has been applied to other fields (see, for example, [9], [10]). In addition, there are the Rényi entropy $H_r = \frac{1}{1-r}\log\left(\sum_{i=1}^{m}p^{r}(x_i)\right)$, the r-order entropy $H_r = \frac{1}{1-r}\log\left(\sum_{i=1}^{m}p^{r}(x_i)-1\right)$, and so on. These generalized entropies have found applications in various special cases.
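As a quick numerical illustration of these definitions (a minimal sketch, not part of the paper; the test distribution and parameter values are arbitrary), the following Python snippet computes Shannon’s, Tsallis’s and Rényi’s entropies and checks that the latter two reduce to Shannon’s entropy as q, r → 1:

    import numpy as np

    def shannon(p):
        """Shannon entropy S = -sum_i p_i ln p_i (natural logarithm)."""
        p = np.asarray(p, dtype=float)
        return -np.sum(p * np.log(p))

    def tsallis(p, q, k=1.0):
        """Tsallis entropy S_q = k (1 - sum_i p_i^q) / (q - 1)."""
        p = np.asarray(p, dtype=float)
        return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

    def renyi(p, r):
        """Renyi entropy H_r = log(sum_i p_i^r) / (1 - r)."""
        p = np.asarray(p, dtype=float)
        return np.log(np.sum(p ** r)) / (1.0 - r)

    p = [0.5, 0.3, 0.2]
    print(shannon(p))         # ~1.0297
    print(tsallis(p, 1.001))  # -> Shannon value as q -> 1
    print(renyi(p, 0.999))    # -> Shannon value as r -> 1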

On the other hand, in 1957, Jaynes [11] proposed the maximum-entropy principle for setting up probability distributions on the basis of partial knowledge. But there are some systems for which Boltzmann’s entropy (or Shannon’s entropy) and the other entropies cannot be used to obtain the distribution via the corresponding maximum entropy principle. For example, consider a system in which a coin has two sides with different probabilities of coming down, and denote the corresponding probabilities by $p_1$ and $p_2$. If we assume $p_1 > p_2$, then the maximum of Shannon’s entropy $S = -p_1\ln p_1 - p_2\ln p_2$ cannot be reached. In fact, using $p_2 = 1 - p_1$ and $p_1 \in (\frac{1}{2}, 1]$, we see that $S$ cannot attain its maximum there, since $\frac{\mathrm{d}S}{\mathrm{d}p_1} < 0$ on this interval. Thus, for Shannon’s entropy (or Boltzmann’s entropy), we cannot obtain the needed distribution from the maximum entropy principle, and it is necessary to introduce another entropy to deal with this kind of system. Recently, I proposed a new entropy, named nonsymmetric entropy, from which Zipf’s law is derived naturally by maximization [12]. My key idea is to regard every event as carrying a kind of auxiliary information, so that the total information of the jth event is defined by $I = -\ln p_j - \ln\beta_j$, where $\beta_j$ is an auxiliary parameter of the jth event and $-\ln\beta_j$ is the auxiliary information. The nonsymmetric entropy is then defined as the average total information over all events. Maximizing the nonsymmetric entropy leads to Zipf’s law when $\beta_j = j^{\alpha}$. Of course, we can take other values of $\beta_j$ to obtain other distribution laws, indeed essentially any distribution law. I hope that the concept of nonsymmetric entropy has potential value for the study of complexity; for the time being, I do not know how to develop this concept and its further implications.
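The coin example can be verified numerically. The sketch below (illustrative only; the choice $\beta_1 = 1$, $\beta_2 = 3$ is an assumption made here for demonstration) confirms that Shannon’s entropy is strictly decreasing on (1/2, 1], while the nonsymmetric entropy attains an interior maximum at $p_1 = \beta_2/(\beta_1+\beta_2) = 3/4$:

    import numpy as np

    def shannon2(p1):
        """Shannon entropy of a two-sided coin, S = -p1 ln p1 - p2 ln p2."""
        p2 = 1.0 - p1
        return -p1 * np.log(p1) - p2 * np.log(p2)

    def nonsym2(p1, b1, b2):
        """Nonsymmetric entropy -sum_j p_j ln(beta_j p_j) for two events."""
        p2 = 1.0 - p1
        return -p1 * np.log(b1 * p1) - p2 * np.log(b2 * p2)

    grid = np.linspace(0.501, 0.999, 499)

    # Shannon's entropy only decreases on (1/2, 1]: the argmax sits at the boundary.
    print(grid[np.argmax(shannon2(grid))])           # -> 0.501

    # With beta_1 = 1, beta_2 = 3, the nonsymmetric entropy peaks inside the region.
    print(grid[np.argmax(nonsym2(grid, 1.0, 3.0))])  # -> ~0.75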

In the present paper, I discuss the nonsymmetric entropy further and obtain the maximum nonsymmetric entropy principle. As an application, we derive the power law from this principle.

This paper is organized as follows: in Section 2, we give all the definitions and theorems, together with some discussion. In Section 3, we give some applications. The last section is the conclusion.

Maximum nonsymmetric entropy principle

In Ref. [12], I defined the concept of nonsymmetric entropy in a linguistic model. In the following, I give a general definition within a statistical model. We first fix some notation: the set of states is $X = \{x_1, \ldots, x_m\}$, and $p_i$ is the probability of $x_i$.

Definition 1

The auxiliary information of a state $x_j$ is defined by $I_A = -\ln\beta_j$, where $\beta_j$ is the auxiliary information parameter. Correspondingly, we define the total information of the state $x_j$ by $I = -\ln\beta_j - \ln p_j = -\ln(\beta_j p_j)$.

Definition 2

We define the nonsymmetric entropy as the average total information over all states:

$$S = \sum_{j=1}^{m} p_j I_j = -\sum_{j=1}^{m} p_j \ln(\beta_j p_j).$$
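To see how the maximization works (a standard Lagrange-multiplier sketch consistent with Definitions 1 and 2, not necessarily the paper’s own proof), maximize $S$ subject to $\sum_{j=1}^{m} p_j = 1$:

$$\mathcal{L}(p,\lambda) = -\sum_{j=1}^{m} p_j \ln(\beta_j p_j) + \lambda\Big(\sum_{j=1}^{m} p_j - 1\Big), \qquad \frac{\partial\mathcal{L}}{\partial p_j} = -\ln(\beta_j p_j) - 1 + \lambda = 0.$$

Hence $\beta_j p_j = e^{\lambda-1}$ is the same constant for every $j$, so

$$p_j = \frac{C}{\beta_j}, \qquad C = \Big(\sum_{j=1}^{m} \frac{1}{\beta_j}\Big)^{-1}.$$

In particular, taking $\beta_j = j^{\alpha}$ gives $p_j \propto j^{-\alpha}$, which is Zipf’s law, as noted in the Introduction.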

Application

If $\int \frac{1}{\beta(x)}\,\mathrm{d}x < \infty$, we have

$$\rho_0(x) = \frac{C}{\beta(x)},$$

which is the maximum nonsymmetric entropy distribution, where $C = \left(\int \frac{1}{\beta(x)}\,\mathrm{d}x\right)^{-1}$.

If we take $\beta(x) = x^{\alpha}$ with $\alpha > 1$, and assume the range of the random variable $X$ is $(k, +\infty)$ with $k > 0$, then the maximum nonsymmetric entropy distribution is

$$\rho_0(x) = \frac{\alpha-1}{k^{1-\alpha}}\,x^{-\alpha} = (\alpha-1)\,k^{\alpha-1}x^{-\alpha},$$

which is just the power-law distribution in the continuous case. If there are some constraints, we will get other distributions, similar to those in Theorem 3 and Theorem 4.
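As a numerical sanity check of this distribution (a minimal sketch with illustrative values $\alpha = 2.5$, $k = 1$, assuming NumPy and SciPy are available):

    import numpy as np
    from scipy.integrate import quad

    alpha, k = 2.5, 1.0                  # illustrative values: any alpha > 1, k > 0

    beta = lambda x: x ** alpha          # auxiliary information parameter beta(x)
    C = 1.0 / quad(lambda x: 1.0 / beta(x), k, np.inf)[0]
    rho0 = lambda x: C / beta(x)         # maximum nonsymmetric entropy distribution

    # C agrees with the closed form (alpha - 1) k^(alpha - 1), and rho0 is normalized.
    print(np.isclose(C, (alpha - 1.0) * k ** (alpha - 1.0)))  # True
    print(quad(rho0, k, np.inf)[0])                           # ~1.0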

Power law, or Zipf’s law, which is an important non-exponential distribution law, appears in many complex systems, and the derivation above shows that it follows naturally from the maximum nonsymmetric entropy principle.

Conclusion

The above results suggest that nonsymmetric entropy may be a useful concept that will play a suitable role in complexity science. Of course, the meaning of nonsymmetric entropy still needs a more thorough explanation. I hope the concept of nonsymmetric entropy can stimulate further investigations in the future.

References (22)

  • C. Tsallis, Possible generalization of Boltzmann–Gibbs statistics, J Stat Phys (1988)