Article

Some Notes on Maximum Entropy Utility

1 Department of Neurosurgery, Gachon University Gil Medical Center, 21 Namdongdaero 774, Namdong, Incheon 21565, Korea
2 College of Business and Economics, Chung-Ang University, 221 Heukseok Dongjak, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 637; https://doi.org/10.3390/e21070637
Submission received: 23 May 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 27 June 2019

Abstract

The maximum entropy principle is effective in solving decision problems, especially when it is not possible to obtain sufficient information to induce a decision. Among others, the concept of maximum entropy is successfully used to obtain the maximum entropy utility, which assigns cardinal utilities to ordered prospects (consequences). In some cases, however, the maximum entropy principle fails to produce a result that properly represents a set of partial preferences. Such a case occurs when ordered utility increments or an uncertain probability are incorporated into the well-known maximum entropy formulation. To overcome this shortcoming, we propose a distance-based solution, the so-called centralized utility increments, which are obtained by minimizing the expected quadratic distance to the set of vertices that varies with the partial preferences. The proposed method thus seeks utility increments that are adjusted to the center of the vertices. Centralized utility increments are also derived for other types of partial preferences about the prospects and compared to the maximum entropy utility.

1. Introduction

The maximum entropy principle is effective in solving decision problems, especially when it is not possible to obtain sufficient information to induce a decision [1,2,3,4,5]. Applications of the entropy principle to multiple criteria decision-making (MCDM) problems can be found in [6,7,8]. Abbas [9] presents a method to assign cardinal utilities to ordered prospects (consequences) in the presence of uncertainty. Ordered prospects, which fall into the category of partial preferences, are easily encountered in practice [9,10,11]. The use of partial preferences about the prospects can provide a decision-maker with comfort in specifying preferences but, from the viewpoint of decision-making, may fail to result in a final decision. Thus, an elegant approach to circumventing this problem is needed to solve real-world decision-making problems. To this end, Abbas [9] developed the maximum entropy approach to assigning a cardinal utility to each prospect when only the ordering of the prospects is known. However, we doubt whether the maximum entropy approach still results in cardinal utilities that properly represent a set of partial preferences when other partial preferences about the prospects are additionally incorporated. In another context, that of true maximum ignorance where the state of prior knowledge is not strong, the maximum a posteriori probability can be better estimated by classical Bayesian theory; it is not necessary to introduce a new and exotic approach such as maximum entropy [12].
We discuss the maximum entropy utility approach further using the notations and definitions from Abbas [9].

2. Does the Maximum Entropy Principle Always Guarantee a Good Solution?

A utility vector contains the utility values of prospects ordered from the lowest to the highest, where a utility value of zero (one) is assigned to the lowest (highest) prospect according to a von Neumann and Morgenstern type utility assessment. We assume that at least one strict preference holds between prospects, which excludes the case of absolute indifference. The utility vector for $(K + 1)$ prospects can be denoted by
$$ U \equiv (U_0, U_1, \ldots, U_{K-1}, U_K) = (0, U_1, U_2, \ldots, U_{K-1}, 1), $$
where $0 \le U_1 \le U_2 \le \cdots \le U_{K-1} \le 1$.
A utility increment vector $\Delta U$, whose elements are the differences between consecutive elements of the utility vector, can be denoted by
$$ \Delta U \equiv (U_1 - 0,\; U_2 - U_1, \ldots, 1 - U_{K-1}) = (\Delta u_1, \Delta u_2, \Delta u_3, \ldots, \Delta u_K). $$
A utility increment vector $\Delta U$ satisfies two properties: (1) $\Delta u_i \ge 0$, $i = 1, \ldots, K$ and (2) $\sum_{i=1}^{K} \Delta u_i = 1$. Thus, it represents a point in a $K$-dimensional simplex, the so-called utility simplex. To assign a cardinal utility to each $\Delta u_i$, Abbas [9] presumed that "If all we know about the prospects is their ordering, it is reasonable to assume, therefore, that the location of the utility increment vector is uniformly distributed over the utility simplex." This idea led to the following nonlinear program (Equation (1)), whose objective is to maximize entropy subject to a normalization condition and non-negativity constraints:
$$ \Delta U^{\mathrm{maxent}} = \mathop{\mathrm{maximize}}_{\Delta u_1, \ldots, \Delta u_K} \; -\sum_{i=1}^{K} \Delta u_i \log(\Delta u_i) \quad \text{s.t.} \quad \sum_{i=1}^{K} \Delta u_i = 1, \;\; \Delta u_i \ge 0, \; i = 1, \ldots, K. \tag{1} $$
The optimal solution to this program is a utility increment vector with equal increments, that is,
$$ \Delta u_i = \frac{1}{K}, \quad i = 1, \ldots, K. \tag{2} $$
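As a quick numerical sanity check (our own sketch, not code from the paper; the helper names are ours), one can sample the utility simplex and confirm that no sampled increment vector attains a larger entropy than the equal-increments solution:

```python
import math
import random

def entropy(v):
    # Shannon entropy -sum u*log(u), with 0*log(0) taken as 0
    return -sum(u * math.log(u) for u in v if u > 0)

def random_simplex_point(k):
    # uniform sampling on the simplex via normalized exponential draws
    draws = [random.expovariate(1.0) for _ in range(k)]
    s = sum(draws)
    return [d / s for d in draws]

K = 4
uniform = [1.0 / K] * K
random.seed(0)
best = max(entropy(random_simplex_point(K)) for _ in range(10000))

# no sampled point beats the equal-increments vector, whose entropy is log K
assert best <= entropy(uniform) + 1e-9
```

The entropy of the uniform vector is exactly $\log K$, the global maximum over the simplex.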
This result seems to properly represent the utility simplex, since its extreme points are simply the $K$ unit vectors $e_i$ (one in the $i$th element and zeroes elsewhere), whose coordinate-wise average yields $1/K$.
Let us assume an ordered increasing utility (OIU) increment (in the latter part of the paper, we treat the ordered decreasing utility (ODU) increment defined by $\Delta u_1 \ge \Delta u_2 \ge \cdots \ge \Delta u_K$):
$$ \Delta u_1 \le \Delta u_2 \le \cdots \le \Delta u_K, \tag{3} $$
which can be rewritten as $U_1 - 0 \le U_2 - U_1 \le \cdots \le 1 - U_{K-1}$ in terms of the utility vector. Studies relating this kind of partial preference, also called comparable preference differences, degree of preference, strength of preference, or preference intensity, to utility theory are found in Fishburn [13] and Sarin [14].
Incorporating the ordered utility increments into the system of constraints of Equation (1) leads to the mathematical program in Equation (4) and clearly restricts the utility simplex, as depicted in Figure 1 for the case of $K = 3$:
$$ \Delta U^{\mathrm{maxent}} = \mathop{\mathrm{maximize}}_{\Delta u_1, \ldots, \Delta u_K} \; -\sum_{i=1}^{K} \Delta u_i \log(\Delta u_i) \quad \text{s.t.} \quad \Delta u_1 \le \Delta u_2 \le \cdots \le \Delta u_K, \;\; \sum_{i=1}^{K} \Delta u_i = 1, \;\; \Delta u_i \ge 0, \; i = 1, \ldots, K. \tag{4} $$
The solution to Equation (4), however, still yields the utility increment vector with equal increments, $\Delta u_i = 1/K$, $i = 1, \ldots, K$, since, among the extreme points $\{v_1, v_2, \ldots, v_K\}$ with $v_1 = (0, 0, \ldots, 0, 1)$, $v_2 = (0, 0, \ldots, 0, \tfrac{1}{2}, \tfrac{1}{2})$, $\ldots$, $v_K = (\tfrac{1}{K}, \tfrac{1}{K}, \ldots, \tfrac{1}{K})$, none attains a larger entropy than $v_K$, which is the global entropy maximizer and remains feasible.
Technically, we always obtain this result when the constraints of the maximum entropy program admit $v_K$ as one of their extreme points. To illustrate, let us add the constraint $\Delta u_3 - \Delta u_2 \ge \Delta u_2 - \Delta u_1$, which further restricts the utility simplex in Figure 1 (see Figure 2). The set of extreme points becomes $v_1 = (0, 0, 1)$, $v_2 = (0, \tfrac{1}{3}, \tfrac{2}{3})$, $v_3 = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$, and the maximum entropy solution is again merely anchored at $v_3$.
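The anchoring behavior is easy to verify numerically. In the sketch below (our illustration; `extreme_point` is a hypothetical helper), the entropy of the vertex $v_j$ equals $\log j$, so it strictly increases from $v_1$ to $v_K$ and the maximum entropy program always lands on the equal-increments vertex:

```python
import math
from fractions import Fraction

def extreme_point(j, K):
    # v_j: K - j leading zeros followed by j trailing entries of 1/j
    return [Fraction(0)] * (K - j) + [Fraction(1, j)] * j

def entropy(v):
    # Shannon entropy, skipping zero entries (0*log 0 = 0)
    return -sum(float(u) * math.log(float(u)) for u in v if u > 0)

K = 5
ents = [entropy(extreme_point(j, K)) for j in range(1, K + 1)]
# entropy(v_j) = log j, strictly increasing, so v_K dominates every other vertex
assert all(ents[i] < ents[i + 1] for i in range(K - 1))
```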
Therefore, it is doubtful whether such equal utility increments adequately represent the feasible region (i.e., the restricted utility simplex) and whether assigning these values is eventually valid.
Let us consider another case where the maximum entropy principle does not work properly. We present the discrete version of preference inclusion in the maximum entropy utility, originally dealt with by Abbas [9] in a continuous setting. Assume that a decision-maker specifies indifference between a lottery $\langle x_1, p_1; x_2, p_2; \ldots; x_K, p_K \rangle$ and a reference lottery $\langle x_K, p, x_1 \rangle$ (i.e., $x_K$ with probability $p$ and $x_1$ otherwise), thus yielding
$$ \sum_{i=1}^{K} p_i U_i(x_i) = p, \quad \text{where } x_1 \preceq x_2 \preceq \cdots \preceq x_K. \tag{5} $$
Equation (5) can be rewritten in terms of the utility increments $\Delta u_i$ as $\sum_{i=1}^{K} F_{i-1} \Delta u_i = 1 - p$, where $F_i = \sum_{j=1}^{i} p_j$ ($F_0 = 0$, $F_K = 1$).
Then, the principle of maximum entropy utility leads to the following program:
$$ \Delta U^{\mathrm{maxent}} = \mathop{\mathrm{maximize}}_{\Delta u_1, \ldots, \Delta u_K} \; -\sum_{i=1}^{K} \Delta u_i \log(\Delta u_i) \quad \text{s.t.} \quad \sum_{i=1}^{K} F_{i-1} \Delta u_i = 1 - p, \;\; \sum_{i=1}^{K} \Delta u_i = 1, \;\; \Delta u_i \ge 0, \; i = 1, \ldots, K. \tag{6} $$
The solution to Equation (6) is given by
$$ \Delta u_i = \frac{\exp(\beta F_{i-1})}{\sum_{i=1}^{K} \exp(\beta F_{i-1})}, \tag{7} $$
where $\beta$ is the Lagrange multiplier, determined iteratively from the equation $\sum_{i=1}^{K} \big(F_{i-1} - (1 - p)\big) \exp(\beta F_{i-1}) = 0$. A formulation similar to Equation (6) and its solution (Equation (7)) are found in different contexts [15,16,17]. If a decision-maker is uncertain about the probability that equates the discrete lottery with the reference lottery and thus specifies a probability interval $\underline{p} \le \tilde{p} \le \bar{p}$ as in [18], the expected utility of the prospects in Equation (5) can be expressed as an interval:
$$ \underline{p} \le \sum_{i=1}^{K} p_i U_i(x_i) \le \bar{p}, \quad \text{or} \quad 1 - \bar{p} \le \sum_{i=1}^{K} F_{i-1} \Delta u_i \le 1 - \underline{p}. \tag{8} $$
With Equation (8) added to Equation (1), we obtain the program in Equation (9), whose optimal utility increment vector is anchored at either the lower or the upper bound in Equation (8):
$$ \Delta U^{\mathrm{maxent}} = \mathop{\mathrm{maximize}}_{\Delta u_1, \ldots, \Delta u_K} \; -\sum_{i=1}^{K} \Delta u_i \log(\Delta u_i) \quad \text{s.t.} \quad 1 - \bar{p} \le \sum_{i=1}^{K} F_{i-1} \Delta u_i \le 1 - \underline{p}, \;\; \sum_{i=1}^{K} \Delta u_i = 1, \;\; \Delta u_i \ge 0, \; i = 1, \ldots, K. \tag{9} $$
For example, let $K = 5$, $F_i = i/5$, $i = 1, \ldots, 5$, and $\tilde{p} \in [0.6, 0.7]$. Then, simply letting $\Delta u_i = 1/5$ for all $i$ attains the maximum entropy value while satisfying all the constraints in Equation (9), since $\sum_{i=1}^{5} F_{i-1} \cdot \tfrac{1}{5} = 0.4 = 1 - \underline{p}$ and $\sum_{i=1}^{5} \tfrac{1}{5} = 1$. Rather than this optimal solution, however, it would be more reasonable to obtain utility increments lying somewhere between $\Delta u_i(0.6)$ and $\Delta u_i(0.7)$ computed from Equation (7). Moreover, this undesirable result persists whenever the uncertain $\tilde{p}$ varies over $[0.6, 0.6 + \alpha]$ or $[0.6 - \alpha, 0.6]$, $\alpha > 0$.
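The example can be reproduced with a short script (our own sketch; `maxent_increments` is a hypothetical helper that solves Equation (7) by bisection on $\beta$, assuming the sign convention of Equation (7)). It confirms that the equal-increments vector already meets the interval constraint of Equation (9), yet coincides with the Equation (7) solution only at $p = 0.6$, not at $p = 0.7$:

```python
import math

def maxent_increments(F, p):
    """Solve Eq. (7): du_i proportional to exp(beta * F_{i-1}) with the
    constraint sum_i F_{i-1} du_i = 1 - p, finding beta by bisection."""
    target = 1.0 - p

    def residual(beta):
        w = [math.exp(beta * f) for f in F]
        return sum(f * wi for f, wi in zip(F, w)) / sum(w) - target

    a, b = -200.0, 200.0            # residual(beta) is increasing in beta
    while b - a > 1e-12:
        m = 0.5 * (a + b)
        a, b = (a, m) if residual(m) > 0 else (m, b)
    w = [math.exp(0.5 * (a + b) * f) for f in F]
    return [wi / sum(w) for wi in w]

K = 5
F = [i / K for i in range(K)]        # F_0, ..., F_4 = 0, 0.2, 0.4, 0.6, 0.8
du_equal = [1.0 / K] * K

# equal increments satisfy the interval constraint of Eq. (9) at 1 - 0.6 = 0.4
lhs = sum(f * d for f, d in zip(F, du_equal))
assert abs(lhs - 0.4) < 1e-12 and 1 - 0.7 <= lhs <= 1 - 0.6 + 1e-12

# ... but they agree with the Eq. (7) solution only at p = 0.6, not p = 0.7
du_06 = maxent_increments(F, 0.6)
du_07 = maxent_increments(F, 0.7)
assert max(abs(d - 0.2) for d in du_06) < 1e-4
assert max(abs(d - 0.2) for d in du_07) > 0.01
```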

3. Centralized Utility Increments

We have shown two examples in which the maximum entropy principle works improperly when the utility simplex is restricted by additional partial preferences. This undesirable outcome can be attributed to the fact that the maximum entropy value is always attained when the equal-increments vector is one of the extreme points characterizing the restricted utility simplex. Clearly, utility increments representative of the restricted utility simplex are more likely to be found by considering as many extreme points as possible. To this end, we propose new utility increments that minimize the sum of the squared distances from all the extreme points (MSDE), thereby locating the utility increments at the physical center of the restricted utility simplex. Specifically, the MSDE approach seeks the utility increments that minimize the expected quadratic distance to the set of vertices, which varies with the type of partial preferences. This leads to the MSDE program in Equation (10):
$$ \mathop{\mathrm{minimize}}_{\Delta u_1, \ldots, \Delta u_K} \; \sum_{i=1}^{K} \sum_{j=1}^{M} (\Delta u_i - v_{ij})^2 \quad \text{s.t.} \quad \sum_{i=1}^{K} \Delta u_i = 1, \;\; \Delta u_i \ge 0, \; i = 1, \ldots, K, \tag{10} $$
where $v_{ij}$ is the $i$th entry of the $j$th extreme point for the ordered prospects and $M$ is the number of extreme points.
The solution to Equation (10) yields
$$ \Delta u_i = \frac{1}{M} \sum_{j=1}^{M} v_{ij} = \frac{1}{K}, \tag{11} $$
since $M = K$ and $v_j = e_j$ for all $j$.
This result is identical to Equation (2), i.e., compatible with the maximum entropy utility on the unrestricted utility simplex. If we add the ordered utility increments (Equation (3)) to the system of constraints in Equation (10), the MSDE yields the solution
$$ \Delta u_i = \frac{1}{K} \sum_{j=K-i+1}^{K} \frac{1}{j}, \quad i = 1, \ldots, K, \tag{12} $$
since $v_1 = (0, \ldots, 0, 1)$, $v_2 = (0, \ldots, 0, \tfrac{1}{2}, \tfrac{1}{2})$, $\ldots$, $v_K = (\tfrac{1}{K}, \ldots, \tfrac{1}{K})$.
This solution, the so-called centralized utility increments, is quite different from the equal increments that would have resulted had we solved the program using the maximum entropy principle. In the case of $K = 3$, simply compare $(\tfrac{2}{18}, \tfrac{5}{18}, \tfrac{11}{18})$ based on the centralized utility increments with $(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$ based on the maximum entropy principle.
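A small script (ours, with hypothetical helper names) confirms that the closed form in Equation (12) is exactly the centroid of the vertices $v_1, \ldots, v_K$, reproducing the $K = 3$ numbers above:

```python
from fractions import Fraction

def centralized_increments(K):
    # centroid of the vertices v_j = (0, ..., 0, 1/j, ..., 1/j)
    verts = [[Fraction(0)] * (K - j) + [Fraction(1, j)] * j for j in range(1, K + 1)]
    return [sum(col) / K for col in zip(*verts)]

def closed_form(K):
    # du_i = (1/K) * sum_{j=K-i+1}^{K} 1/j
    return [Fraction(1, K) * sum(Fraction(1, j) for j in range(K - i + 1, K + 1))
            for i in range(1, K + 1)]

# the K = 3 case reproduces (2/18, 5/18, 11/18)
assert centralized_increments(3) == [Fraction(2, 18), Fraction(5, 18), Fraction(11, 18)]
for K in range(1, 8):
    assert centralized_increments(K) == closed_form(K)
```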
To show that the solution in Equation (12) truly represents the center of the vertices of the restricted utility simplex, we first develop a cumulative discrete utility increments function for $\Delta u_i$, $F(i/K) = \sum_{j=1}^{i} \Delta u_j$, $i = 1, \ldots, K$, and then its continuous counterpart as follows [19]:
$$ f_{OIU}(x) = \begin{cases} x + (1 - x)\ln(1 - x) & \text{for } 0 \le x < 1 \\ 1 & \text{for } x = 1. \end{cases} \tag{13} $$
Similar computations yield a continuous function for ordered decreasing utility increments:
$$ f_{ODU}(x) = \begin{cases} x(1 - \ln x) & \text{for } 0 < x \le 1 \\ 0 & \text{for } x = 0. \end{cases} \tag{14} $$
It is interesting to note that both $f_{OIU}(x)$ and $f_{ODU}(x)$ include the entropy expression $x \ln x$ as a component. As shown in Figure 3, the continuous functions $f_{OIU}(x)$ and $f_{ODU}(x)$ bisect the lower and upper triangles (each of area $\tfrac{1}{2}$), respectively, since $\int_0^1 f_{OIU}(x)\,dx = \tfrac{1}{4}$ and $\int_0^1 f_{ODU}(x)\,dx = \tfrac{3}{4}$. Noting that the straight line $f(x) = x$ generates equal increments for any $K$ (for $\Delta u_i = \tfrac{1}{K}$, $F(i/K) = \sum_{j=1}^{i} \Delta u_j = \tfrac{i}{K}$ and $\lim_{K \to \infty} F_K(x) = f(x) = x$), $f_{OIU}(x)$ is the one, among the numerous continuous functions generating utility increments that satisfy $\Delta u_1 \le \Delta u_2 \le \cdots \le \Delta u_K$, that produces the centralized utility increments.
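The two areas, $1/4$ and $3/4$, can be checked numerically with a simple midpoint rule (our own sketch; the endpoint values follow the piecewise definitions above):

```python
import math

def f_oiu(x):
    # cumulative function for ordered increasing increments
    return 1.0 if x >= 1.0 else x + (1.0 - x) * math.log(1.0 - x)

def f_odu(x):
    # cumulative function for ordered decreasing increments
    return 0.0 if x <= 0.0 else x * (1.0 - math.log(x))

def integrate(f, n=100000):
    # midpoint rule on [0, 1]; midpoints avoid the endpoint singularities
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

assert abs(integrate(f_oiu) - 0.25) < 1e-4   # bisects the lower triangle
assert abs(integrate(f_odu) - 0.75) < 1e-4   # bisects the upper triangle
```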
Further, we consider two categories of partial utility values that are widely used in MCDM problems: loose articulation (i.e., open-ended partial preferences of utility values) and interval expressions of utility values. The open-ended partial preferences of utility values may include the following types of preferences (see Ahn [20]):
  • Weak preference of utility values (WPU): $U_{WPU} = \{U_i \ge U_{i-1}, \; i = 1, \ldots, K\}$
  • Strict preference of utility values (SPU): $U_{SPU} = \{U_i - U_{i-1} \ge \varepsilon_i > 0, \; i = 1, \ldots, K\}$
  • Weak difference of utility values (DPU): $U_{DPU} = \{U_K - U_{K-1} \ge \cdots \ge U_1 - U_0\}$
  • Ratio preference of utility values (RPU): $U_{RPU} = \{U_i \ge \alpha_{i-1} U_{i-1}, \; \alpha_{i-1} \ge 1, \; i = 1, \ldots, K\}$.
The interval expressions of utility values may include the following types of preferences:
  • Interval utility values (IU): $U_{IU} = \{LB_i \le U_i \le UB_i, \; i = 2, \ldots, K-1\}$
  • Interval differences of utility values (IDU): $U_{IDU} = \{LB_i \le U_i - U_{i-1} \le UB_i, \; i = 1, \ldots, K\}$
  • Interval ratios of utility values (IRU): $U_{IRU} = \{LB_i \le U_i / U_{i-1} \le UB_i, \; i = 2, \ldots, K\}$
where L B i and U B i represent the lower and upper bounds, respectively.
Finally, we summarize in Table 1 the formulas of the maximum entropy utility and the centralized utility assignments for the case of open-ended partial preferences (see Appendix A for the types of open-ended partial utility values and Appendix B for the types of interval partial utility values, respectively).

4. Conclusions

We have shown two examples in which the maximum entropy principle fails to produce an outcome representative of partial preferences about prospects. We therefore have to be cautious when relying on the maximum entropy formulation to determine a representative vector over the feasible region of constraints. As an alternative, we have proposed the centralized utility increments, which minimize the sum of squared distances from all the extreme points and thereby locate the utility increments at the center of the restricted utility simplex. In particular, discrete and continuous functions were derived to demonstrate the better performance of the centralized utility increments over the maximum entropy utility when ordered utility increments are incorporated. Further, a range of partial utility values was introduced, and the corresponding centralized utility assignments were compared with the maximum entropy utilities. It should be mentioned, however, that the partial preferences beyond DPU were introduced to show how the MSDE approach extends to other types of partial preferences; they are not directly related to the resolution of the problem inherent in the maximum entropy utility approach.
A final remark is that our proposed approach is limited to a deterministic setting.

Author Contributions

Conceptualization, E.Y.K. and B.S.A.; methodology, B.S.A.; validation, E.Y.K. and B.S.A.; formal analysis, E.Y.K. and B.S.A.; writing—original draft preparation, E.Y.K. and B.S.A.; writing—review and editing, B.S.A.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Open-Ended Partial Preferences of Utility Values [20]

Appendix A.1. Strict Preference of Utility Values (SPU): $U_{SPU} = \{U_i - U_{i-1} \ge \varepsilon_i > 0, \; i = 1, \ldots, K\}$

Let $\Delta u_i = U_i - U_{i-1}$, $i = 1, \ldots, K$. This substitution leads to an equivalent set in terms of $\Delta u_i$:
$$ \{\Delta u_i \ge \varepsilon_i > 0, \; i = 1, \ldots, K, \;\; \sum_{i=1}^{K} \Delta u_i = 1\}. $$
Then, we introduce variables $r_i = \Delta u_i - \varepsilon_i$ and $s_i = r_i / (1 - \sum_{i=1}^{K} \varepsilon_i)$, which yield $U_r$ and $U_s$ in sequence:
$$ U_r = \{r_i \ge 0, \; i = 1, \ldots, K, \;\; \sum_{i=1}^{K} r_i = 1 - \sum_{i=1}^{K} \varepsilon_i\} $$
and
$$ U_s = \{s_i \ge 0, \; i = 1, \ldots, K, \;\; \sum_{i=1}^{K} s_i = 1\}. $$
The extreme points of $U_s$ form the identity matrix of dimension $K$, and from them we obtain the extreme points of $U_r$ as $(1 - \sum_{i=1}^{K} \varepsilon_i) e_i$, $i = 1, \ldots, K$. A few more computations give the extreme points in terms of $\Delta u_i$. For example, given $r_K = (0, \ldots, 0, 1 - \sum_{i=1}^{K} \varepsilon_i)$, we obtain $(\varepsilon_1, \varepsilon_2, \varepsilon_3, \ldots, \varepsilon_{K-1}, 1 - \sum_{i=1}^{K} \varepsilon_i + \varepsilon_K)$ from $\Delta u_i = r_i + \varepsilon_i$, $i = 1, \ldots, K$. Continuing in this manner for all $r_i$, we obtain the set of extreme points in terms of $\Delta u_i$: $(1 - t + \varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{K-1}, \varepsilon_K)$, $(\varepsilon_1, 1 - t + \varepsilon_2, \varepsilon_3, \ldots, \varepsilon_{K-1}, \varepsilon_K)$, $\ldots$, $(\varepsilon_1, \varepsilon_2, \varepsilon_3, \ldots, \varepsilon_{K-1}, 1 - t + \varepsilon_K)$, where $t = \sum_{i=1}^{K} \varepsilon_i$.
The coordinate-wise averages of these vectors yield the centralized utility assignments $\Delta u_i = \varepsilon_i + \frac{1}{K}(1 - \sum_{j=1}^{K} \varepsilon_j)$ and $U_i = \sum_{j=1}^{i} \varepsilon_j + \frac{i}{K}(1 - \sum_{j=1}^{K} \varepsilon_j)$ from $\Delta u_i = U_i - U_{i-1}$. If $\varepsilon_i = 0$ for all $i$, the set of strict preferences $U_{SPU}$ simply reduces to the set of weak preferences $U_{WPU} = \{1 = U_K \ge U_{K-1} \ge \cdots \ge U_1 \ge U_0 = 0\}$. Therefore, the centralized utility assignments for $U_{WPU}$ follow from those of $U_{SPU}$ as $\Delta u_i = \frac{1}{K}$ and $U_i = \frac{i}{K}$.
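This coordinate-averaging argument can be verified mechanically (our own sketch; the $\varepsilon_i$ values below are hypothetical):

```python
from fractions import Fraction

def spu_centralized(eps):
    """Centroid of the SPU extreme points: the j-th vertex puts
    1 - t + eps_j at position j and eps_i elsewhere, with t = sum(eps)."""
    K = len(eps)
    t = sum(eps)
    verts = []
    for j in range(K):
        v = list(eps)
        v[j] = 1 - t + eps[j]
        verts.append(v)
    return [sum(col) / K for col in zip(*verts)]

eps = [Fraction(1, 10), Fraction(1, 20), Fraction(1, 5)]   # hypothetical eps_i
K = len(eps)
closed = [e + Fraction(1, K) * (1 - sum(eps)) for e in eps]
assert spu_centralized(eps) == closed     # du_i = eps_i + (1 - sum eps)/K
```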

Appendix A.2. Weak Difference of Utility Values (DPU): $U_{DPU} = \{U_K - U_{K-1} \ge \cdots \ge U_1 - U_0\}$

Using the substitutions $\Delta u_i = U_i - U_{i-1}$, $i = 1, \ldots, K$, leads to the set
$$ \{\Delta u_K \ge \Delta u_{K-1} \ge \cdots \ge \Delta u_1, \;\; \sum_{i=1}^{K} \Delta u_i = 1\}, $$
whose extreme points are well known and widely used in multi-attribute decision analysis with ranked attribute weights [21,22,23,24]: $(0, \ldots, 0, 1)$, $(0, \ldots, 0, \tfrac{1}{2}, \tfrac{1}{2})$, $\ldots$, $(\tfrac{1}{K}, \ldots, \tfrac{1}{K})$. The coordinate-wise averages of these vectors yield the centralized utility assignments $\Delta u_i = \frac{1}{K} \sum_{j=K-i+1}^{K} \frac{1}{j}$ and $U_i = \frac{1}{K} \sum_{j=K-i+1}^{K} \frac{j + i - K}{j}$.

Appendix A.3. Ratio Preference of Utility Values (RPU)

The ratio preference of utility values, $U_{RPU} = \{U_i \ge \alpha_{i-1} U_{i-1}, \; \alpha_{i-1} > 0, \; i = 1, \ldots, K\}$, can be rewritten as
$$ U_{RPU} = \{U_K \ge \alpha_{K-1} U_{K-1} \ge \alpha_{K-1}\alpha_{K-2} U_{K-2} \ge \cdots \ge \alpha_{K-1}\cdots\alpha_1 U_1 \ge \alpha_{K-1}\cdots\alpha_0 U_0\}. $$
The use of variables $r_i$ such that
$$ r_i = \Big(\prod_{j=i}^{K-1} \alpha_j\Big) U_i - \Big(\prod_{j=i-1}^{K-1} \alpha_j\Big) U_{i-1}, \; i = 1, \ldots, K-1, \quad \text{and} \quad r_K = U_K - \alpha_{K-1} U_{K-1} $$
leads to the set
$$ U_r = \{r_i \ge 0, \; i = 1, \ldots, K, \;\; \sum_{i=1}^{K} r_i = 1\}. $$
Starting from the extreme points $e_i$, $i = 1, \ldots, K$, of $U_r$, we solve a set of equations recursively to obtain the extreme points in terms of $U_i$. For example, given $e_1 = (1, 0, \ldots, 0)$, we obtain $U_1 = \frac{1}{\alpha_1 \cdots \alpha_{K-1}}$, $U_2 = \frac{1}{\alpha_2 \cdots \alpha_{K-1}}$, $\ldots$, $U_{K-1} = \frac{1}{\alpha_{K-1}}$, $U_K = 1$ by solving the following set of equations:
$$ r_1 = \alpha_{K-1} \cdots \alpha_1 U_1 - \alpha_{K-1} \cdots \alpha_0 U_0 = 1, $$
$$ r_2 = \alpha_{K-1} \cdots \alpha_2 U_2 - \alpha_{K-1} \cdots \alpha_1 U_1 = 0, $$
$$ \vdots $$
$$ r_{K-1} = \alpha_{K-1} U_{K-1} - \alpha_{K-1} \alpha_{K-2} U_{K-2} = 0, $$
$$ r_K = U_K - \alpha_{K-1} U_{K-1} = 0. $$
Continuing in this manner for all $e_i$, we obtain the set of extreme points in terms of $U_i$: $(\frac{1}{\alpha_1 \cdots \alpha_{K-1}}, \frac{1}{\alpha_2 \cdots \alpha_{K-1}}, \ldots, \frac{1}{\alpha_{K-1}}, 1)$, $(0, \frac{1}{\alpha_2 \cdots \alpha_{K-1}}, \ldots, \frac{1}{\alpha_{K-1}}, 1)$, $\ldots$, $(0, \ldots, 0, \frac{1}{\alpha_{K-1}}, 1)$, $(0, \ldots, 0, 1)$. The coordinate-wise averages of these vectors yield the centralized utility assignments $\Delta u_i = \frac{i}{K} \big(\prod_{j=i}^{K-1} \alpha_j\big)^{-1} - \frac{i-1}{K} \big(\prod_{j=i-1}^{K-1} \alpha_j\big)^{-1}$, using $U_i = \frac{i}{K} \big(\prod_{j=i}^{K-1} \alpha_j\big)^{-1}$, $i = 1, \ldots, K-1$, and $U_K = 1$.
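The recursive construction and the closed form can be cross-checked for arbitrary ratios (our own sketch; the $\alpha$ values below are hypothetical):

```python
from fractions import Fraction

def rpu_extreme_points(alpha):
    """Extreme points (U_1, ..., U_K) obtained from the unit vectors e_m of U_r:
    coordinates 1..m-1 are zero and U_l = 1/(alpha_l * ... * alpha_{K-1}) for
    l >= m. alpha is the list [alpha_1, ..., alpha_{K-1}]."""
    K = len(alpha) + 1

    def U(l):
        prod = Fraction(1)
        for j in range(l, K):           # alpha_l, ..., alpha_{K-1}
            prod *= alpha[j - 1]
        return 1 / prod

    return [[Fraction(0)] * (m - 1) + [U(l) for l in range(m, K + 1)]
            for m in range(1, K + 1)]

def closed_form(alpha):
    # U_i = (i/K) * (alpha_i * ... * alpha_{K-1})^{-1}, which gives U_K = 1
    K = len(alpha) + 1
    out = []
    for i in range(1, K + 1):
        prod = Fraction(1)
        for j in range(i, K):
            prod *= alpha[j - 1]
        out.append(Fraction(i, K) / prod)
    return out

alpha = [Fraction(2), Fraction(3), Fraction(2)]   # hypothetical ratios, K = 4
K = len(alpha) + 1
pts = rpu_extreme_points(alpha)
centroid = [sum(col) / K for col in zip(*pts)]
assert centroid == closed_form(alpha)
```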

Appendix B. Interval Expressions of Utility Values

Appendix B.1. Interval Utility Values: $U_{IU} = \{LB_i \le U_i \le UB_i, \; i = 2, \ldots, K-1\}$

In this case, each extreme point is obtained by taking the lower or upper bound of each $U_i$ alternately, so the total number of extreme points is $2^{K-2}$:
$$ (0, LB_2, LB_3, \ldots, LB_{K-1}, 1), \;\; (0, LB_2, \ldots, LB_{K-2}, UB_{K-1}, 1), \;\; \ldots, \;\; (0, UB_2, UB_3, \ldots, UB_{K-1}, 1). $$

Appendix B.2. Interval Differences of Utility Values: $U_{IDU} = \{LB_i \le U_i - U_{i-1} \le UB_i, \; i = 1, \ldots, K\}$

To start with, we make the change of variables $\Delta u_i = U_i - U_{i-1}$, which transforms the original set of bounded differences into
$$ \{LB_i \le \Delta u_i \le UB_i, \; i = 1, \ldots, K, \;\; \sum_{i=1}^{K} \Delta u_i = 1\}. $$
The extreme points are easily identified by fixing at least $K - 1$ of the $\Delta u_i$ at their lower or upper bounds such that the entries sum to one. Suppose that $\sum_{i=1}^{K-1} LB_i + \alpha = 1$ with $\alpha \in [LB_K, UB_K]$. Then we solve a set of equations to determine the extreme point in terms of $U_i$:
$$ U_1 - U_0 = LB_1, \;\; U_2 - U_1 = LB_2, \;\; \ldots, \;\; U_{K-1} - U_{K-2} = LB_{K-1}, \;\; U_K - U_{K-1} = \alpha. $$
The resulting extreme point is $(0, LB_1, LB_1 + LB_2, \ldots, \sum_{i=1}^{K-1} LB_i, 1)$. To illustrate, suppose that with $K = 3$ ($U_0 = 0$, $U_3 = 1$),
$$ \{0.3 \le U_1 - U_0 \le 0.5, \;\; 0.2 \le U_2 - U_1 \le 0.3, \;\; 0.1 \le U_3 - U_2 \le 0.3\}. $$
By introducing $\Delta u_i = U_i - U_{i-1}$, $i = 1, 2, 3$, we obtain
$$ \{0.3 \le \Delta u_1 \le 0.5, \;\; 0.2 \le \Delta u_2 \le 0.3, \;\; 0.1 \le \Delta u_3 \le 0.3, \;\; \sum_{i=1}^{3} \Delta u_i = 1\}. $$
The extreme points then reduce to $\{(0.5, 0.2, 0.3), (0.5, 0.3, 0.2), (0.4, 0.3, 0.3)\}$. The first extreme point is determined by selecting three end points of the $\Delta u_i$, and the last two by selecting two end points and one interior point lying between the lower and upper bounds of a $\Delta u_i$ [25]. Now, we solve the set of equations
$$ U_1 - U_0 = 0.5, \;\; U_2 - U_1 = 0.2, \;\; U_3 - U_2 = 0.3 $$
to obtain the extreme point in terms of $U_i$, namely $(0, 0.5, 0.7, 1)$. Similar computations give the other extreme points $(0, 0.5, 0.8, 1)$ and $(0, 0.4, 0.7, 1)$.
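The vertex enumeration for this example can be automated (our own sketch; `idu_extreme_points` is a hypothetical helper that fixes all but one increment at a bound and lets the remaining one absorb the slack):

```python
from itertools import product
from fractions import Fraction

def idu_extreme_points(lb, ub):
    """Vertices of {lb_i <= du_i <= ub_i, sum_i du_i = 1}: fix all but one
    coordinate at a bound and let the remaining one absorb the slack."""
    K = len(lb)
    pts = set()
    for free in range(K):
        others = [i for i in range(K) if i != free]
        for combo in product(*[(lb[i], ub[i]) for i in others]):
            slack = 1 - sum(combo)
            if lb[free] <= slack <= ub[free]:
                v = [None] * K
                for i, val in zip(others, combo):
                    v[i] = val
                v[free] = slack
                pts.add(tuple(v))
    return sorted(pts)

# interval bounds from the K = 3 example above
lb = [Fraction(3, 10), Fraction(2, 10), Fraction(1, 10)]
ub = [Fraction(5, 10), Fraction(3, 10), Fraction(3, 10)]
pts = idu_extreme_points(lb, ub)
assert pts == sorted([
    (Fraction(5, 10), Fraction(2, 10), Fraction(3, 10)),
    (Fraction(5, 10), Fraction(3, 10), Fraction(2, 10)),
    (Fraction(4, 10), Fraction(3, 10), Fraction(3, 10)),
])

# cumulative sums recover the extreme points in terms of U_i,
# e.g. (0, 0.5, 0.7, 1) from (0.5, 0.2, 0.3)
U = [(Fraction(0),) + tuple(sum(p[:i + 1]) for i in range(3)) for p in pts]
assert (0, Fraction(1, 2), Fraction(7, 10), 1) in U
```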

Appendix B.3. Interval Ratios of Utility Values: $U_{IRU} = \{LB_i \le U_i / U_{i-1} \le UB_i, \; i = 2, \ldots, K\}$

To illustrate, suppose without loss of generality that every judgment on $U_i$, $i = 1, 2$, is made relative to the most preferred $U_3$:
$$ 2 \le U_3 / U_1 \le 3, \quad 4 \le U_3 / U_2 \le 5. $$
From the given interval ratios, we can further identify a ratio $U_2 / U_1$, namely $\tfrac{2}{5} \le U_2 / U_1 \le \tfrac{3}{4}$. Then, denoting $q_1 = U_2 / U_1$, $q_2 = U_3 / U_2$, and $q_3 = U_1 / U_3$, we obtain
$$ Q = \{\tfrac{2}{5} \le q_1 \le \tfrac{3}{4}, \;\; 4 \le q_2 \le 5, \;\; \tfrac{1}{3} \le q_3 \le \tfrac{1}{2}, \;\; q_1 q_2 q_3 = 1\}. $$
The extreme points of $Q$ are determined as follows:
$$ (\tfrac{2}{5}, 5, \tfrac{1}{2}), \;\; (\tfrac{3}{4}, 4, \tfrac{1}{3}), \;\; (\tfrac{3}{5}, 5, \tfrac{1}{3}), \;\; (\tfrac{1}{2}, 4, \tfrac{1}{2}). $$
To obtain the extreme points in terms of $U_i$, we solve a system of equations. For example, for $(\tfrac{2}{5}, 5, \tfrac{1}{2})$, we construct the following set of equations to obtain $(0, \tfrac{1}{2}, \tfrac{1}{5}, 1)$:
$$ \frac{U_2}{U_1} = \frac{2}{5}, \quad \frac{U_3}{U_2} = 5, \quad \frac{U_1}{U_3} = \frac{1}{2}. $$
Similarly, we find the other extreme points
$$ (0, \tfrac{1}{3}, \tfrac{1}{4}, 1), \;\; (0, \tfrac{1}{3}, \tfrac{1}{5}, 1), \;\; (0, \tfrac{1}{2}, \tfrac{1}{4}, 1). $$
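The conversion from the extreme points of $Q$ to utility vectors is mechanical (our own sketch using exact rational arithmetic):

```python
from fractions import Fraction

def ratios_to_utilities(q1, q2, q3):
    # q1 = U2/U1, q2 = U3/U2, q3 = U1/U3, with U_0 = 0 and U_3 = 1
    U3 = Fraction(1)
    U1 = q3 * U3
    U2 = q1 * U1
    assert q2 * U2 == U3           # consistency check: q1*q2*q3 = 1
    return (Fraction(0), U1, U2, U3)

Q_vertices = [
    (Fraction(2, 5), Fraction(5), Fraction(1, 2)),
    (Fraction(3, 4), Fraction(4), Fraction(1, 3)),
    (Fraction(3, 5), Fraction(5), Fraction(1, 3)),
    (Fraction(1, 2), Fraction(4), Fraction(1, 2)),
]
U_vertices = [ratios_to_utilities(*q) for q in Q_vertices]
assert U_vertices == [
    (0, Fraction(1, 2), Fraction(1, 5), 1),
    (0, Fraction(1, 3), Fraction(1, 4), 1),
    (0, Fraction(1, 3), Fraction(1, 5), 1),
    (0, Fraction(1, 2), Fraction(1, 4), 1),
]
```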

References

  1. Thomas, M.U. A generalized maximum entropy principle. Oper. Res. 1979, 27, 1188–1196. [Google Scholar] [CrossRef]
  2. Yeh, C.H. A problem-based selection of multi-attribute decision-making methods. Int. Trans. Oper. Res. 2002, 9, 169–181. [Google Scholar] [CrossRef]
  3. Dong, Q.; Guo, Y. Multiperiod multiattribute decision-making method based on trend incentive coefficient. Int. Trans. Oper. Res. 2013, 20, 141–152. [Google Scholar] [CrossRef]
  4. Su, W.; Peng, W.; Zeng, S.; Peng, B.; Pan, T. A method for fuzzy group decision making based on induced aggregation operators and Euclidean distance. Int. Trans. Oper. Res. 2013, 20, 579–594. [Google Scholar] [CrossRef]
  5. Ahn, B.S.; Yager, R.R. The use of ordered weighted averaging method for decision making under uncertainty. Int. Trans. Oper. Res. 2014, 21, 247–262. [Google Scholar] [CrossRef]
  6. Zhao, H.; Yao, L.; Mei, G.; Liu, T.; Ning, Y. A fuzzy comprehensive evaluation method based on AHP and Entropy for a landslide susceptibility map. Entropy 2017, 19, 396. [Google Scholar] [CrossRef]
  7. Wang, G.; Zhang, J.; Song, Y.; Li, Q. An entropy-based knowledge measure for Atanassov’s intuitionistic fuzzy sets and its application to multiple attribute decision making. Entropy 2018, 20, 981. [Google Scholar] [CrossRef]
  8. Lee, Y.C. Ranking DMUs by combining cross-efficiency scores based on Shannon’s entropy. Entropy 2019, 21, 467. [Google Scholar] [CrossRef]
  9. Abbas, A.E. Maximum entropy utility. Oper. Res. 2006, 54, 277–290. [Google Scholar] [CrossRef]
  10. Weber, M. Decision making with incomplete information. Eur. J. Oper. Res. 1987, 28, 44–57. [Google Scholar] [CrossRef]
  11. Kirkwood, C.W.; Sarin, R.K. Ranking with partial information: A method and an application. Oper. Res. 1985, 33, 38–48. [Google Scholar] [CrossRef]
  12. Frieden, B.R. Dice, entropy, and likelihood. Proc. IEEE 1985, 73, 1764–1770. [Google Scholar] [CrossRef]
  13. Fishburn, P.C. Utility theory with inexact preferences and degrees of preference. Synthese 1970, 21, 204–221. [Google Scholar] [CrossRef]
  14. Sarin, R.K. Strength of preference and risky choice. Oper. Res. 1982, 30, 982–997. [Google Scholar] [CrossRef]
  15. Barron, F.H.; Schmidt, C.P. Sensitivity analysis of additive multiattribute value models. Oper. Res. 1988, 36, 122–127. [Google Scholar] [CrossRef]
  16. Soofi, E.S. Generalized entropy-based weights for multiattribute value models. Oper. Res. 1990, 38, 362–363. [Google Scholar] [CrossRef]
  17. Filev, D.; Yager, R.R. Analytic properties of maximum entropy OWA operators. Inf. Sci. 1995, 85, 11–27. [Google Scholar] [CrossRef]
  18. Mateos, A.; Jimenez, A.; Rios-Insua, S. Modelling individual and global comparisons for multi-attribute preferences. J. Multi-Crit. Decis. Anal. 2003, 12, 177–190. [Google Scholar] [CrossRef]
  19. Ahn, B.S. Compatible weighting method with rank order centroid: Maximum entropy ordered weighted averaging approach. Eur. J. Oper. Res. 2011, 212, 552–559. [Google Scholar] [CrossRef]
  20. Ahn, B.S. Extreme point-based multi-attribute decision analysis with incomplete information. Eur. J. Oper. Res. 2015, 240, 748–755. [Google Scholar] [CrossRef]
  21. Sarin, R.K. Elicitation of subjective probabilities in the context of decision-making. Decis. Sci. 1978, 9, 37–48. [Google Scholar] [CrossRef]
  22. Claessens, M.N.A.; Lootsma, F.A.; Vogt, F.J. An elementary proof of Paelinck’s theorem on the convex hull of ranked criterion weights. Eur. J. Oper. Res. 1991, 52, 255–258. [Google Scholar] [CrossRef]
  23. Carrizosa, E.; Conde, E.; Fernandez, F.R.; Puerto, J. Multi-criteria analysis with partial information about the weighting coefficients. Eur. J. Oper. Res. 1995, 81, 291–301. [Google Scholar] [CrossRef]
  24. Barron, F.H.; Barrett, B.E. Decision quality using ranked attribute weights. Manag. Sci. 1996, 42, 1515–1523. [Google Scholar] [CrossRef]
  25. Ahn, B.S.; Park, H. Establishing dominance between strategies with interval judgments. Omega 2014, 49, 53–59. [Google Scholar] [CrossRef]
Figure 1. A utility simplex constrained by an ordered increasing utility increment.
Figure 2. Utility simplex with additional constraints in the case of $K = 3$. (a) $\Delta U_1 = \{\Delta u : \Delta u_3 \ge \Delta u_2 \ge \Delta u_1, \; \sum_{i=1}^{3} \Delta u_i = 1\}$; (b) $\Delta U_2 = \Delta U_1 \cap \{\Delta u : \Delta u_3 - \Delta u_2 \ge \Delta u_2 - \Delta u_1, \; \sum_{i=1}^{3} \Delta u_i = 1\}$.
Figure 3. Continuous functions for increasing and decreasing utility increments.
Table 1. Partial information about utility values and their centralized utility values.
| Partial Utility Value | Maximum Entropy Utility | Centralized Utility Assignment |
| --- | --- | --- |
| WPU | $\Delta u_i = \frac{1}{K}$ | $U_i = \frac{i}{K}$, $\Delta u_i = \frac{1}{K}$ |
| SPU | $\Delta u_i = \frac{1}{K}$ if $\varepsilon_i \le \frac{1}{K}$ for all $i$; otherwise $\Delta u_i = \varepsilon_i$ for $i \in L = \{l : \varepsilon_l \ge \frac{1}{K}\}$ and $\Delta u_i = \big(1 - \sum_{i \in L} \varepsilon_i\big)/(K - |L|)$ elsewhere | $U_i = \sum_{j=1}^{i} \varepsilon_j + \frac{i}{K}\big(1 - \sum_{j=1}^{K} \varepsilon_j\big)$, $\Delta u_i = \varepsilon_i + \frac{1}{K}\big(1 - \sum_{j=1}^{K} \varepsilon_j\big)$ |
| DPU | $\Delta u_i = \frac{1}{K}$ | $U_i = \frac{1}{K} \sum_{j=K-i+1}^{K} \frac{j+i-K}{j}$, $\Delta u_i = \frac{1}{K} \sum_{j=K-i+1}^{K} \frac{1}{j}$ |
| RPU | maximize $-\sum_{i=1}^{K} \Delta u_i \log(\Delta u_i)$ s.t. $\sum_{j=1}^{i} \Delta u_j \ge \alpha_{i-1} \sum_{j=1}^{i-1} \Delta u_j$ for all $i$, $\sum_{i=1}^{K} \Delta u_i = 1$, $\Delta u_i \ge 0$ | $U_i = \frac{i}{K} \big(\prod_{j=i}^{K-1} \alpha_j\big)^{-1}$, $\Delta u_i = \frac{i}{K} \big(\prod_{j=i}^{K-1} \alpha_j\big)^{-1} - \frac{i-1}{K} \big(\prod_{j=i-1}^{K-1} \alpha_j\big)^{-1}$ |

Kim, E.Y.; Ahn, B.S. Some Notes on Maximum Entropy Utility. Entropy 2019, 21, 637. https://doi.org/10.3390/e21070637
