Information fractal dimension of mass function

Fractals play an important role in nonlinear science. The most important parameter when modelling a fractal is the fractal dimension. The existing information dimension can calculate the dimension of a probability distribution. However, given a mass function, which is a generalization of the probability distribution, how to determine its fractal dimension remains an open problem of immense interest. The main contribution of this work is to propose an information fractal dimension of the mass function. Numerical examples are given to show the effectiveness of the proposed dimension. We discover an important property: the dimension of the mass function with maximum Deng entropy is $\frac{\ln 3}{\ln 2} \approx 1.585$, which is the well-known fractal dimension of the Sierpiński triangle.


Introduction
Fractals are very common in nature [1,2]; the concept was first proposed to measure the length of a coastline. As the field has progressed, it is now widely used across many different fields, such as time fractals [3,4], calculation and geometry [5][6][7], and chaotic systems [8,9]. Irrefutably, the fractal dimension plays a vital role in fractal theory. To date, many measures [10] have been proposed to determine the fractal dimension, including the Hausdorff dimension [11], information dimension [12,13], correlation dimension [14], and multifractal dimension [15,16].
Uncertainty measurement is a topic of immense interest because it applies to uncertain environments. Many frameworks have been proposed, including probability theory [17], Dempster-Shafer evidence theory [18][19][20], and belief structures [21,22]. The information volume of a probability distribution can be measured by Shannon entropy [23]. The mass function in evidence theory, also called a basic probability assignment (BPA), is a generalization of the probability distribution for describing uncertain environments. The uncertainty of a mass function can be measured by Deng entropy [24,25]. Compared with the probability distribution, the mass function has been widely applied due to its ability to deal with uncertain information. Many measures and parameters have been developed for the mass function, such as entropy [26,27], negation [28], the correlation coefficient [29], and information quality [30]. The information volume of the mass function has also been studied recently [31,32].
However, how to determine the fractal dimension of a mass function is still an open problem. In this paper, we propose an information dimension of the mass function based on Deng entropy and fractal theory, which can be further applied in decision-making [33]. Importantly, we discover that the dimension of the mass function with maximum Deng entropy is 1.585, which is the same as the fractal dimension of the Sierpiński triangle.
Entropy is very important in complex systems [34][35][36]. There are many kinds of entropy functions, such as Tsallis entropy [37,38] and Rényi entropy [39]. In information theory, Shannon entropy plays a central role [40,41]. As part of our literature review, we briefly introduce these concepts.

Shannon entropy
Given a probability distribution P = {p_1, p_2, ..., p_n}, Shannon entropy is defined as follows [42]:

H_S = -∑_{i=1}^{n} p_i log p_i.

If and only if p_i = 1/n for all i, Shannon entropy reaches its maximum, H_S^max = log n.
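These two facts can be checked directly. Below is a minimal Python sketch; the helper name `shannon_entropy` is our own:

```python
import math

def shannon_entropy(probs, base=2.0):
    """Shannon entropy H_S = -sum(p_i * log(p_i)); 0*log(0) is taken as 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# The uniform distribution attains the maximum H_S = log(n).
n = 8
print(shannon_entropy([1.0 / n] * n))    # ≈ log2(8) = 3
print(shannon_entropy([1.0, 0.0, 0.0]))  # deterministic distribution: 0
```

Any skewed distribution on the same n outcomes yields a strictly smaller value.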

Fractal and information dimension
Fractals have been widely studied [43] and the fractal dimension has also attracted a lot of attention [44,45]. For example, the fractal dimension of the Sierpiński triangle is ln3/ln2 [46], that of the Koch curve is ln4/ln3 [47], and that of the Cantor set is ln2/ln3 [48]. Besides, the information dimension, as a kind of fractal dimension, plays a critical role in dealing with probability distributions. Below is a brief introduction to the information dimension.
The information dimension is defined as follows [49]:

D_I = lim_{ε→0} [ -∑_{i=1}^{N} P_i(ε) log P_i(ε) ] / log(1/ε),

where the numerator is the Shannon entropy, ε is the side length of the measured box, P_i(ε) is the probability of the measured object falling into the i-th box, and N is the number of measured boxes.
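The limit can be approximated at a finite scale. The sketch below (our own construction, not from the paper) places the uniform measure on the level-k approximation of the Cantor set, evaluates the ratio at ε = 3^(-k), and recovers the dimension ln2/ln3 ≈ 0.631 stated above:

```python
import math

def cantor_boxes(k):
    """Integer box indices (base-3 digits 0 or 2) of the level-k Cantor set
    approximation at box side eps = 3**-k; each box gets probability 2**-k."""
    idx = [0]
    for _ in range(k):
        idx = [3 * i for i in idx] + [3 * i + 2 for i in idx]
    return idx

def information_dimension_estimate(k):
    eps = 3.0 ** -k
    p = 2.0 ** -k                               # uniform mass per occupied box
    boxes = cantor_boxes(k)
    h = -sum(p * math.log(p) for _ in boxes)    # Shannon entropy at scale eps
    return h / math.log(1.0 / eps)

print(information_dimension_estimate(8))        # ≈ ln2/ln3 ≈ 0.6309
```

Because the measure is exactly self-similar, the ratio already equals ln2/ln3 at every finite level; for general measures one would take ε → 0.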

Mass function
The mass function is an important concept in evidence theory, which is an extension of probability theory. Let Θ denote the frame of discernment in evidence theory [19,20], and let 2^Θ be the power set of Θ. The mass function is defined as a mapping [19,20]

m: 2^Θ → [0, 1],

satisfying m(∅) = 0 and ∑_{A∈2^Θ} m(A) = 1.
A set A_i is called a focal element when m(A_i) > 0. The mass function can be seen as a generalization of the probability distribution, and it is more efficient in dealing with uncertainty [32,50].
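In code, a mass function can be represented as a mapping from focal elements (frozensets) to masses. The small validator below is our own helper for checking the defining conditions:

```python
def is_valid_mass_function(m, tol=1e-9):
    """Check m: 2^Theta -> [0, 1] with m(emptyset) = 0 and total mass 1."""
    if m.get(frozenset(), 0.0) != 0.0:
        return False
    if any(v < 0 or v > 1 for v in m.values()):
        return False
    return abs(sum(m.values()) - 1.0) < tol

# The BPA of Example 1 below: m(w1) = 5/6, m({w1, w2}) = 1/6.
m = {frozenset({"w1"}): 5 / 6, frozenset({"w1", "w2"}): 1 / 6}
print(is_valid_mass_function(m))                 # True
focal = [a for a, v in m.items() if v > 0]       # the focal elements
```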

Deng entropy and maximum Deng entropy
Recently, a new entropy, called Deng entropy, has been proposed to measure the uncertainty of a mass function. Given a mass function m on the frame of discernment Θ, its Deng entropy is obtained as

H_D = -∑_{A∈2^Θ} m(A) log ( m(A) / (2^{|A|} - 1) ),    (9)

where |A| is the cardinality of the focal element A. When the mass function is a Bayesian structure [20], i.e., every focal element is a singleton, Deng entropy degenerates to Shannon entropy.
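A direct implementation of Equation (9), with a check of the degeneration to Shannon entropy for a Bayesian mass function (helper names are our own):

```python
import math

def deng_entropy(m, base=2.0):
    """H_D = -sum m(A) * log(m(A) / (2^|A| - 1)) over focal elements."""
    return -sum(v * math.log(v / (2 ** len(a) - 1), base)
                for a, v in m.items() if v > 0)

# Bayesian mass function: every focal element is a singleton, so
# 2^|A| - 1 = 1 and Deng entropy reduces to Shannon entropy.
bayesian = {frozenset({i}): 0.25 for i in range(4)}
print(deng_entropy(bayesian))                    # ≈ log2(4) = 2

# A multi-element focal set contributes extra uncertainty via 2^|A| - 1.
print(deng_entropy({frozenset({"a", "b"}): 1.0}))  # ≈ log2(3)
```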
When the mass function satisfies the condition

m(A_i) = (2^{|A_i|} - 1) / ∑_j (2^{|A_j|} - 1),

where m(A_i) is the mass of A_i and i = 1, 2, ..., 2^N - 1 runs over all non-empty subsets of Θ, Deng entropy reaches its maximum [25]:

H_D^max = log ∑_i (2^{|A_i|} - 1).

In probability theory, Shannon entropy reaches its maximum when all outcomes are equally probable. In evidence theory, however, Deng entropy reaches its maximum when multi-element subsets receive larger assignments.
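The maximizing assignment and its entropy can be generated for any frame. A sketch with our own helpers (`deng_entropy` is repeated here so the block is self-contained):

```python
import math
from itertools import combinations

def max_deng_bpa(theta):
    """Mass function m(A) = (2^|A| - 1) / sum_j (2^|A_j| - 1) over all
    non-empty subsets A of theta, which maximizes Deng entropy."""
    subsets = [frozenset(c) for r in range(1, len(theta) + 1)
               for c in combinations(theta, r)]
    total = sum(2 ** len(a) - 1 for a in subsets)
    return {a: (2 ** len(a) - 1) / total for a in subsets}

def deng_entropy(m, base=2.0):
    return -sum(v * math.log(v / (2 ** len(a) - 1), base)
                for a, v in m.items() if v > 0)

# For |Theta| = 3, sum_j (2^|A_j| - 1) = 3^3 - 2^3 = 19, so the
# maximum Deng entropy is log2(19).
m = max_deng_bpa({"a", "b", "c"})
print(deng_entropy(m), math.log2(19))   # both ≈ 4.2479
```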
The rest of the paper is organized as follows. Section 2 presents the information dimension of the mass function. Some numerical examples are given in Section 3 to illustrate the effectiveness of the proposed dimension. Finally, Section 4 concludes the paper.

A new proposal for information dimension of mass function
In this section, the new dimension is proposed, and we begin with some fundamental definitions. For the reader's convenience, we also provide proofs of the properties discussed.

Definition 1. For a frame of discernment Θ with |Θ| = N, the power set is 2^Θ = {A_1, A_2, ..., A_{2^N}}, and m is a mass function on Θ. Its information dimension is defined as

D_m = H_D / log(2^{|Θ|} - 1),    (12)

where H_D is the Deng entropy of m, in which (2^{|A_i|} - 1), the size of the power set of the focal element A_i excluding the empty set, measures how each focal element splits, and 2^{|Θ|} - 1 is the number of non-empty subsets into which the frame itself splits.

Property 1: When m(A_i) = 1 with |A_i| = 1, the information is deterministic and both the numerator and the denominator of Equation (12) vanish (the frame reduces to the single element of A_i). In this case we define

D_m := 0.

Proof: Given a frame of discernment Θ = {ω_1, ..., ω_N}, consider a mass function that assigns m(A_i) = a to a singleton A_i and distributes the remaining mass b over the other subsets A_j, 1 ≤ j ≤ 2^N - 1, j ≠ i. As a → 1 and b → 0, the mass function degenerates to m(A_i) = 1 with |A_i| = 1. By Equation (9), H_D → 0, and hence, by Equation (12), D_m → 0. The special case (m(A_i) = 1, |A_i| = 1) in Property 1 indicates that the information is deterministic, so its information dimension is 0.
For a probability distribution, Equation (12) degenerates to

D = H_S / log N,

where H_S is Shannon entropy. As for the denominator, an elementary event is no longer split in probability theory; each singleton contributes 2^1 - 1 = 1, so the frame splits into only N pieces. In other words, the split of each singleton is itself.
For a single mass function or probability distribution, Equation (12) yields only a number. However, as the next section shows, when the mass function varies with a certain regularity, this number either stays the same or converges gradually to a constant, indicating a scale invariance between Deng entropy and the splitting of the mass function. This number is therefore used to represent the property and is named the information fractal dimension.
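Assuming Equation (12) takes the form sketched above, with the Deng entropy in the numerator, log(2^{|Θ|} - 1) in the denominator, and the deterministic 0/0 case defined as 0, a minimal implementation is (helper names are our own):

```python
import math

def deng_entropy(m, base=2.0):
    return -sum(v * math.log(v / (2 ** len(a) - 1), base)
                for a, v in m.items() if v > 0)

def information_dimension(m, theta, base=2.0):
    """D_m = H_D / log(2^|Theta| - 1); the deterministic case is defined as 0."""
    denom = math.log(2 ** len(theta) - 1, base)
    if denom == 0.0:                 # |Theta| = 1: both numerator and
        return 0.0                   # denominator vanish, so D_m := 0
    return deng_entropy(m, base) / denom

# Total uncertainty m(Theta) = 1: H_D = log(2^N - 1) = denominator, so D_m = 1.
theta = frozenset({"w1", "w2", "w3"})
print(information_dimension({theta: 1.0}, theta))   # ≈ 1.0
```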

Numerical examples
In this section, we provide some numerical examples to better illustrate the proposed dimension. In order to verify the results easily, all examples below use log base 2. Nevertheless, base two, base e, or any other base does not affect the calculation: in Equation (12), the numerator and the denominator both change with the base, but since the proposed dimension is a ratio reflecting scale invariance, the result is unchanged as long as the numerator and the denominator use the same base. Therefore we simply write log for convenience.

Example 1: Consider the frame of discernment Θ = {ω_1, ω_2} with the mass function m(ω_1) = 5/6, m({ω_1, ω_2}) = 1/6. Its dimension follows directly from Equation (9) and Equation (12).

Example 2: Consider m(Θ) = 1 for |Θ| = 1, 2, ..., 20; the results are listed in Table 1. In Fig. 1, the x axis is Deng entropy and the y axis is log(2^{|Θ|} - 1). As can be seen from Table 1, the dimension for |Θ| = 1 is 0; the complexity of deterministic information is 0. Fig. 1 indicates a linear relationship between Deng entropy and the size of the split of the mass function for |Θ| = 2, 3, ..., 20. The slope is 1, which means the information dimension of the total-uncertainty case is 1.

Example 3: Consider the uniform distribution m(ω_i) = 1/N on the singletons; the results are listed in Table 2 and Fig. 2. Comparing Example 2 with Example 3, total uncertainty in evidence theory (m(Θ) = 1) and the uniform distribution in probability theory (m(A_i) = 1/N) have equal information dimension. According to Equation (18), the calculation of Example 3 is

log(N )
The calculation of Example 2 according Equation (12) is Equation (20) can be rewritten as From above Equation (19) and Equation (21), the case of m(Θ) = 1, |Θ| = N and the case of average distribution in probability, where the number of elementary events are 2 N − 1, are equivalent in expressing the complexity of information. Example 4: Given a framework of discernment Θ, |Θ| = N = 1, 2, ..., 25. Its power set is 2 Θ = {∅, , A 1 , A 2 , ..., A 2 N −1 }. A mass function with average assignment in power set is m(A i ) = 1 2 N −1 . As can be seen from Table 3, with the increase of the size of Θ, D m is changed but eventually goes to 1.5. Different from Example 2 and Example 3, which is a constant from the beginning, we assume that for this example the convergent value is the final information fractal dimension of mass function with average distribution in power set. Example 5: Given a framework of discernment Θ, |Θ| = 1, 2, ..., 20 and a mass function with maximum Deng entropy: m(A) = 2 |A| −1 (2 |A i | −1) . Fig. 4 show the result and Fig. 5 is a Sierpinski triangle.
From Table 4, as the size of Θ increases, D_m forms a convergent sequence. The dimension of the mass function with maximum Deng entropy is 1.585, which is the well-known fractal dimension of the Sierpiński triangle.
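Under this reading of Equation (12) (denominator log(2^N - 1)), the two limits can be checked in closed form: for Example 5, H_D = log(3^N - 2^N), so D_m → ln3/ln2, and for Example 4 the sum over subset sizes gives D_m → 1.5. A sketch with our own helper names:

```python
import math

def dim_max_deng(n):
    """D_m for the maximum-Deng-entropy mass function on |Theta| = n:
    H_D = log(3^n - 2^n), denominator log(2^n - 1)."""
    return math.log(3 ** n - 2 ** n) / math.log(2 ** n - 1)

def dim_uniform_power_set(n):
    """D_m for m(A_i) = 1/(2^n - 1) over all non-empty subsets:
    H_D = log(2^n - 1) + (1/(2^n - 1)) * sum_k C(n, k) * log(2^k - 1)."""
    total = 2 ** n - 1
    h = math.log(total) + sum(math.comb(n, k) * math.log(2 ** k - 1)
                              for k in range(2, n + 1)) / total
    return h / math.log(total)

for n in (2, 5, 10, 20):
    print(n, dim_uniform_power_set(n), dim_max_deng(n))
# The two sequences approach 1.5 and ln(3)/ln(2) ≈ 1.585, respectively.
```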
Finally, we summarize the results of Example 2, Example 4, and Example 5 in a single picture. Fig. 6 shows that there is a linear relationship between Deng entropy and the size of the split of the mass function. However, given an arbitrary mass function such as that of Example 1, is there a special distribution form of the mass function that has the same dimension? What does the calculated dimension actually mean?
There is still no common agreement on the interpretation of dimension, but one plausible explanation relates to degrees of freedom. In Euclidean space, one-dimensional means a particle can move in only one direction. In two-dimensional space, a particle can move in two orthogonal directions, and in three-dimensional space, in three orthogonal directions. The higher the dimension, the more directions a particle can move in, and the more variables are needed to describe it. Here, however, we postulate that the information fractal dimension expresses complexity. That is to say, the larger the information dimension, the more complex the information represented by the mass function. Owing to this fractal property, the proposed information dimension can be applied to pattern recognition and multicriteria decision making in highly uncertain environments.

Table 4. The convergence process of Example 5 (m(A) = (2^{|A|} - 1) / ∑_i (2^{|A_i|} - 1)).

Conclusion
How to determine the fractal dimension of uncertain information is still an open problem. In this paper, an information fractal dimension of the mass function is proposed. The proposed method can calculate not only the dimension of a probability distribution in probability theory but also that of a mass function in evidence theory. Some interesting properties have been discussed. Importantly, we discover that the dimension of the mass function with maximum Deng entropy is 1.585, which is the same as the fractal dimension of the Sierpiński triangle.
Acknowledgements
Yong Deng greatly appreciates academicians of the Chinese Academy of Engineering, Professor Shan Zhong and Professor You He, for their encouragement of this research. Yong Deng also greatly appreciates Professor Yugeng Xi for supporting this work. Ph.D. student Lipeng Pan discussed the fractal dimension of the Sierpiński triangle.

Fig. 5. Sierpiński triangle, whose fractal dimension is ln3/ln2 ≈ 1.585. Interested readers can refer to Ref. [49] for the full construction steps.