An alternate method to determine λ⁰-measure values prior to applying the Choquet integral in a multi-attribute decision making environment

© 2018 by the authors; licensee Growing Science, Canada


Introduction
Aggregation is an important process in multi-attribute decision making (MADM) analysis, whereby the performance scores of an alternative from the set a_1, a_2, …, a_m with respect to a set of multiple conflicting attributes, c_1, c_2, …, c_n, are synthesized into a single score. The process is usually performed one alternative at a time. Based on these synthesized single scores, the alternatives are then ordered from the most to the least favourable (Marichal, 2000a), hence enabling the decision makers to select the alternative that best meets their decision goals. In most of the scholarly literature linked to MADM, a mathematical function which melds the performance scores of an alternative into a single score is generally referred to as an aggregation operator (Detyniecki, 2000).
Usually, additive operators such as the simple weighted average (SWA), quasi-arithmetic means, and the ordered weighted average are used for aggregation purposes, but unfortunately none of these operators exactly captures the interactions that usually exist between the evaluation attributes (Grabisch, 1996). Discounting the interactions between the attributes during aggregation may result in an improper ranking of alternatives that diverges from actuality, and this could subsequently lead to faulty decisions. However, lately, the application of the Choquet integral operator (Choquet, 1953) has been making inroads into various multi-attribute decision problems (Berrah et al., 2008; Demirel et al., 2017; Feng et al., 2010; Tzeng, 2005; Zhang et al., 2016), thanks to its ability to take into account the interactions held by the attributes during the aggregation process (Grabisch, 1996; Grabisch & Labreuche, 2010; Marichal, 2000b).

Choquet integral and λ-measure
The use of the Choquet integral requires the prior identification of fuzzy measure values. These values characterize not only the importance of each attribute, but also the importance of each possible coalition or combination of the attributes (Angilella et al., 2004). As a result, a respondent who participates in a particular MADM analysis is required to provide 2^n − 2 values in the process of estimating the importance of all possible combinations of the attributes (Bottero, 2013). Undoubtedly, this process can become a very arduous assignment, especially if the number of evaluation attributes, n, involved in the analysis is sufficiently large (Kojadinovic, 2008; Krishnan et al., 2015). Many patterns of fuzzy measure have been introduced in order to reduce the complexity involved in determining general fuzzy measure values, and the λ-measure is one such pattern. A survey of past literature showed that the λ-measure is preferred over other patterns of fuzzy measure due to its mathematical soundness and modest degree-of-freedom property (Ishii & Sugeno, 1985). Let C = {c_1, c_2, …, c_n} be a finite set of criteria. A set function g defined on the set of subsets of C, P(C), is called a λ-measure if it satisfies the following properties (Sugeno, 1974): a) g: P(C) → [0, 1], g(∅) = 0, and g(C) = 1 (boundary property: the value of the null subset is zero and the value of the subset containing all attributes is one).

b) ∀A, B ∈ P(C), if A ⊆ B, then g(A) ≤ g(B) (monotonicity property: adding any new attribute to a subset will not decrease the value of the subset).
c) ∀A, B ∈ P(C) with A ∩ B = ∅, g(A ∪ B) = g(A) + g(B) + λ g(A) g(B), where λ > −1 (the λ-rule).
Note that (a) and (b) are the fundamental properties of any pattern of fuzzy measure, and (c) is the additional property of the λ-measure. This measure is governed by an interaction parameter, λ, that provides information on the type of interaction held by the attributes, and in a way offers some hints to decision makers in developing appropriate strategies that could be executed to enhance the performance of alternatives or targets. Gürbüz et al. (2012) and Hu and Chen (2010) claimed that: a) If λ < 0, then the attributes share sub-additive (redundancy) effects. This means a significant increase in the performance of the alternatives can be achieved by simultaneously enhancing the attributes in C which have higher densities (densities refer to the importance of subsets consisting of a single attribute, without alliance with any other attribute). b) If λ > 0, then the attributes share super-additive (synergy support) effects. This means a significant increase in the performance of the alternatives can be achieved by simultaneously enhancing all the attributes in C regardless of their densities.
c) If λ = 0, then the attributes are non-interactive.
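For completeness, once the densities g_i are elicited, the interaction parameter λ is the root greater than −1 of the identity ∏_i (1 + λ g_i) = 1 + λ, which follows from applying the λ-rule to the boundary condition g(C) = 1. The sketch below is an illustrative Python bisection, not part of any of the cited methods, and the density values used are assumptions for the example.

```python
def sugeno_lambda(densities, tol=1e-10):
    """Find lam > -1 solving prod(1 + lam*g_i) = 1 + lam by bisection.

    If the densities already sum to one, the attributes are
    non-interactive and lam = 0.
    """
    s = sum(densities)
    if abs(s - 1.0) < 1e-9:
        return 0.0

    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)

    if s < 1.0:          # synergy effects: the root is positive
        lo, hi = tol, 1.0
        while f(hi) < 0.0:
            hi *= 2.0
    else:                # redundancy effects: the root lies in (-1, 0)
        lo, hi = -1.0 + tol, -tol
    for _ in range(200):  # bisection keeps the sign change between lo and hi
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Assumed densities summing to 0.6 < 1, so a synergy (lam > 0) case:
lam = sugeno_lambda([0.3, 0.2, 0.1])
```

Densities summing below one force λ > 0 (synergy), above one force λ < 0 (redundancy), which is how the bracketing interval is chosen.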
Many methods have been recommended in the past to simplify the process of identifying λ-measure values, but each of these methods requires a different amount of initial data from the respondents. With the intention of further minimizing the initial data required from a respondent, Larbani et al. (2011) introduced a unique pattern of fuzzy measure known as the λ⁰-measure. The suggested method to identify λ⁰-measure values only requires n(n + 1)/2 pieces of information from a respondent, as he or she just needs to offer the data on the dependence coefficient, λ_ij, of each pair of different attributes, c_i and c_j, and the densities, g_i, i = 1, …, n.

Let C = {c_1, c_2, …, c_n} be a finite set of attributes; then the λ⁰-measure value of a subset consisting of two attributes, c_i and c_j, can be identified using Eq. (1), whereas the value of a subset comprising more than two attributes can be identified using Eq. (2).

g({c_i, c_j}) = g_i + g_j + λ_ij g_i g_j (1)

g(A) = Σ_{c_i ∈ A} g_i + Σ_{{c_i, c_j} ⊆ A} λ_ij g_i g_j, where A denotes any subset of C consisting of more than two attributes (2)

Note that Eq. (1) and Eq. (2) ensure that the λ⁰-measure satisfies the two fundamental properties required of a fuzzy measure, namely the boundary and monotonicity properties. The overall procedure to determine λ⁰-measure values as proposed in the original work can be summarised as follows: a) Phase 1: The respondents are required to provide an estimation of the dependence coefficient, λ_ij, for each pair of different attributes, c_i and c_j, based on a scale that ranges from 0 to 1, where 0 indicates "no dependence" and 1 implies "complete dependence". b) Phase 2: Based on the dependence coefficients determined in phase 1, the density of each attribute, g_i, i = 1, …, n, is identified by solving the following system of inequalities (3):

0 ≤ λ_ij ≤ 1, for all c_i and c_j in C, (1st requirement)
Σ_i g_i + Σ_{i<j} λ_ij g_i g_j = 1, (2nd requirement) (3)
g_i ≥ 0, i = 1, …, n (3rd requirement)

c) Phase 3: The identified λ_ij and g_i values are substituted into Eq. (1) and Eq. (2) in order to identify the complete set of λ⁰-measure values. The identified λ⁰-measure values and the available performance scores of each alternative can then be substituted into the Choquet integral model (4) to compute their final aggregated scores:

C_g(f) = Σ_{i=1}^{n} [f(c_(i)) − f(c_(i+1))] g(A_i), with f(c_(n+1)) = 0 (4)

With regard to (4), A_i refers to the subset {c_(1), …, c_(i)} of C for i = 1, 2, …, n, and f(c_(i)) represents the performance score of the alternative with respect to criterion c_(i). Also, note that the permutation of the criteria in A_i parallels the descending order of the performance scores. For instance, if f(c_1) ≥ f(c_2) ≥ ⋯ ≥ f(c_n), then A_i = {c_1, c_2, …, c_i} (Murofushi & Sugeno, 1989).
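To make the aggregation step concrete, the Choquet integral model (4) can be sketched as follows; the attribute names and fuzzy measure values below are illustrative assumptions, not data from this paper.

```python
def choquet(scores, mu):
    """Discrete Choquet integral: sum over i of
    (f(c_(i)) - f(c_(i+1))) * mu(A_i), with the attributes sorted in
    descending order of score, A_i = {c_(1), ..., c_(i)}, and
    f(c_(n+1)) taken as 0."""
    order = sorted(scores, key=scores.get, reverse=True)
    total, subset = 0.0, frozenset()
    for i, c in enumerate(order):
        subset = subset | {c}  # grow A_i one attribute at a time
        nxt = scores[order[i + 1]] if i + 1 < len(order) else 0.0
        total += (scores[c] - nxt) * mu[subset]
    return total

# Illustrative two-attribute example (measure values are assumptions):
mu = {frozenset({"quality"}): 0.4,
      frozenset({"price"}): 0.5,
      frozenset({"quality", "price"}): 1.0}
score = choquet({"quality": 0.8, "price": 0.6}, mu)
```

When the measure happens to be additive, the integral collapses to the ordinary weighted average, which is a useful sanity check on an implementation.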
It can be noticed that the original λ⁰-measure identification method fails to offer sufficient information for developing proper strategies to significantly increase the performance of alternatives, unlike the λ-measure, where enhancement strategies can be determined based on the value of the interaction parameter, λ. Due to this gap, Krishnan et al. (2017) recently introduced a revised λ⁰-measure identification method by incorporating the DEMATEL method. This revised method uses DEMATEL to reveal the cause-effect relationships between the decision attributes and, at the same time, the outputs of DEMATEL (i.e. the digraph and importance scores) are utilized to estimate the inputs (i.e. the dependence coefficient, λ_ij, of each pair of attributes and the density, g_i, of each attribute) required to identify the complete set of λ⁰-measure values. Unlike the original version, with the presence of DEMATEL, the revised method provides better information to the decision makers with regard to the interactions shared by the attributes, and thereby enables them to determine and implement more sensible strategies to enhance the performance of the alternatives with more confidence. Nonetheless, it was found that the revised method requires triple the amount of initial data from a respondent compared to the original method. Therefore, through this study we propose an alternate λ⁰-measure identification method that is not too demanding in terms of initial data requirement and, at the same time, delivers some important clues to the decision makers when deciding the exact strategies to improve the performance of alternatives.

The alternate λ⁰-measure identification method
The proposed λ⁰-measure identification method is developed mainly by integrating interpretive structural modelling (ISM) into the original λ⁰-measure identification method. All in all, the employment of the proposed method involves five main phases. Fig. 1 is an illustrative representation of the proposed method. Further details regarding the purpose and the exact steps involved in each phase are summarised in the following sections.

Phase 1: Applying ISM
In phase 1, the ISM method is used to systematically visualise and comprehend the actual relationships (Sage, 1977) which exist among the evaluation attributes, c_1, c_2, …, c_n. From a mathematical viewpoint, ISM utilises some fundamental notions of graph theory to efficiently construct a directed graph, or network representation, of the complex system composed of various entangled attributes (Malone, 1975). One may develop a better understanding of the mathematical foundations involved in the usage of the method by referring to the work performed by Harary et al. (1965). There are nine important steps when undertaking an ISM analysis, as summarised from the studies carried out by Agrawal et al. (2017), Bhadani et al. (2016), Dwivedi et al. (2017), and Singh et al. (2017).
In step 1, the contextual relationships between each possible pair of attributes are determined through an in-depth discussion involving a group of respondents who are deemed to be well informed about the field under investigation. Normally, four different symbols, as depicted in Table 1, are used to express the direction of the relationship of attribute c_i as compared to c_j.
In step 2, the structural self-interaction matrix (SSIM) is developed by adhering to the initial judgments provided by the group of respondents in terms of the relationship held by each pair of attributes (c_i and c_j).

Table 1
Nominal scale for establishing contextual relationships

By the end of step 3, a complete relation matrix, D, as portrayed by Eq. (5), is obtained. Note that each entry d_ij in Eq. (5) indicates the relationship of attribute c_i as compared to c_j, expressed using the digit 0 or 1.
In step 4, the initial reachability matrix, M, is attained by adding the relation matrix, D, to its unit matrix, I, as expressed by Eq. (6) (Huang et al., 2005):

M = D + I (6)

In step 5, the final reachability matrix, M*, is determined by using Eq. (7), where k denotes the powers:

M* = M^k = M^(k+1), k ≥ 1 (7)

Note that the determination of M* is performed by utilising Boolean multiplication and addition rules (i.e. 1 × 1 = 1 and 1 + 1 = 1). In step 6, based on the final reachability matrix, the driving power of an attribute is computed by summing up the number of 1's in its row, and its dependence power is computed by summing up the number of 1's in its column.
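Steps 4 to 6 can be sketched as follows. This is an illustrative Python version; the three-attribute relation matrix is an assumed example, not data from the case study.

```python
def bool_matmul(A, B):
    """Boolean matrix product (1 * 1 = 1, 1 + 1 = 1)."""
    n = len(A)
    return [[1 if any(A[i][k] and B[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

def final_reachability(D):
    """Step 4: M = D + I under Boolean addition; step 5: raise M to
    successive Boolean powers until it stabilises, giving M*."""
    n = len(D)
    M = [[1 if (D[i][j] or i == j) else 0 for j in range(n)] for i in range(n)]
    while True:
        Mk = bool_matmul(M, M)
        if Mk == M:
            return M
        M = Mk

def powers(Mstar):
    """Step 6: driving power = row sums, dependence power = column sums."""
    n = len(Mstar)
    driving = [sum(row) for row in Mstar]
    dependence = [sum(Mstar[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence

# Assumed 3-attribute relation matrix encoding c1 -> c2 -> c3:
D = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
Mstar = final_reachability(D)
```

For this chain, the transitive link c1 → c3 (absent from D) appears in M*, which is exactly the derived relation the final reachability matrix is meant to expose.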
In step 7, a series of partitions is performed on the reachability matrix to identify the hierarchy (i.e. level) of the attributes in the decision system. This process commences by deriving the reachability and antecedent set of each attribute from the said matrix (Warfield, 1974). In this case, the reachability set of an attribute comprises the attribute itself together with the other attributes that it may influence, whilst the antecedent set consists of the attribute itself and the attributes that may influence it. Subsequently, the intersection set of each attribute is developed by identifying the attributes that are present in both the reachability and antecedent sets. The attributes whose reachability and intersection sets are identical occupy the topmost level (i.e. level I) in the hierarchy. Logically, the attributes at the topmost level should not influence any other attributes in the system. Therefore, before determining the attributes in the following level (i.e. level II), the attributes at the topmost level are discarded from further consideration. A similar process is iterated until the attributes at the lowest level of the system are identified.
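The level partitioning of step 7 can be sketched as follows. Note that "reachability set equals intersection set" is equivalent to the reachability set being contained in the antecedent set, which is the condition tested below. The matrix is the same assumed three-attribute example, not the case-study data.

```python
def level_partition(Mstar, labels):
    """ISM step 7: repeatedly peel off the attributes whose reachability
    set (restricted to the attributes still under consideration) is
    contained in their antecedent set; those form the current top level."""
    remaining = list(range(len(labels)))
    levels = []
    while remaining:
        top = [i for i in remaining
               if {j for j in remaining if Mstar[i][j]}
               <= {j for j in remaining if Mstar[j][i]}]
        levels.append([labels[i] for i in top])
        remaining = [i for i in remaining if i not in top]
    return levels

# Assumed final reachability matrix for the chain c1 -> c2 -> c3:
Mstar = [[1, 1, 1],
         [0, 1, 1],
         [0, 0, 1]]
levels = level_partition(Mstar, ["c1", "c2", "c3"])
```

As expected, the attribute that influences nothing else (c3) surfaces at level I, and the sole driver (c1) ends up at the bottom level.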
In step 8, the reachability matrix is converted into a lower triangular format by arranging the attributes according to their levels.
In step 9, based on the lower triangular form of the reachability matrix, the diagram that clearly displays the directed relationships between the attributes (also known as a digraph) can then be developed by means of nodes and edges, taking into account the levels of the attributes. If there is a relationship between attributes c_i and c_j, it is represented by sketching an arrow pointing from c_i to c_j.

Phase 2: Determining dependence coefficients
In phase 2, one of the outputs of ISM, the digraph that provides a clear visualization of the relationships shared by the attributes, is used as an aiding tool to determine the dependence coefficient, λ_ij, of each pair of different attributes, c_i and c_j. The sub-steps in determining the dependence coefficients can be outlined as follows.
First, each pair of attributes is categorised according to the following possible types of relationships: two-way direct relationship (category 1), one-way direct relationship (category 2), indirect relationship (category 3), and almost independent (category 4).
Second, for the sake of easy data elicitation and to mathematically capture the typical uncertainty embedded in human estimations, the r respondents involved in the analysis are permitted to linguistically express the dependency strength of each pair of attributes, c_i and c_j (e.g. "almost independent", "weak dependence", "moderate dependence", "complete dependence"). However, at this stage, on a logical basis, the respondents must be alerted to ensure that the dependency strength assigned to the pairs in the preceding categories is always equivalent or superior to that of the pairs in later categories (i.e. dependency strength of pairs in category 1 ≥ category 2 ≥ category 3 ≥ category 4).
Third, according to the provided linguistic judgments, one out of the eight fuzzy scales proposed by Chen and Hwang (1992) is chosen. Based on the chosen scale, each linguistic judgment concerning the dependency strength of each pair of attributes is then quantified into its corresponding fuzzy value, and is subsequently converted into its respective crisp value using the fuzzy scoring de-fuzzification method (as suggested in the same literature).
Fourth, the final dependence coefficient, λ_ij, of each pair of attributes is then computed by averaging the crisp values obtained from all the respondents.
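A minimal sketch of these steps: linguistic judgments are mapped to fuzzy numbers, defuzzified, and averaged across respondents. The triangular term-to-number mapping and the centroid defuzzification below are simplifying assumptions made for illustration; the paper itself uses one of the Chen and Hwang (1992) scales together with their fuzzy scoring method.

```python
# Illustrative triangular fuzzy numbers (a, b, c) for each linguistic term.
# These values are assumptions for the sketch, not the scale used in the
# paper, which adopts one of the Chen-Hwang (1992) conversion scales.
TERMS = {
    "almost independent":  (0.0, 0.0, 0.2),
    "weak dependence":     (0.1, 0.3, 0.5),
    "moderate dependence": (0.4, 0.6, 0.8),
    "complete dependence": (0.8, 1.0, 1.0),
}

def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def dependence_coefficient(judgments):
    """Average the defuzzified judgments of all respondents for one pair."""
    return sum(centroid(TERMS[t]) for t in judgments) / len(judgments)

# Three hypothetical respondents judging one pair of attributes:
lam_12 = dependence_coefficient(["weak dependence", "moderate dependence",
                                 "weak dependence"])
```

Because each term maps to a value in [0, 1], the averaged coefficient automatically lands on the required 0-to-1 dependence scale.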

Phase 3: Finding the importance degree of each attribute
In phase 3, by extending the idea of DEMATEL (Hung, 2011; Shieh et al., 2010), the importance degree, ID_i, of an attribute with respect to the overall decision system is determined by adding the driving power of that attribute to its dependence power, as expressed in Eq. (8). Note that the driving and dependence power of each attribute have already been computed earlier (i.e. phase 1, step 6).

ID_i = (driving power of c_i) + (dependence power of c_i) (8)

Phase 4: Identifying the complete set of λ⁰-measure values
In phase 4, the dependence coefficients, λ_ij, identified in phase 2 and the importance degrees, ID_i, calculated in phase 3 are used to develop and solve the following system of inequalities (9) in order to identify the density of each attribute, g_i:

0 ≤ λ_ij ≤ 1, for all c_i and c_j, (1st requirement)
Σ_i g_i + Σ_{i<j} λ_ij g_i g_j = 1, (2nd requirement)
g_i ≥ 0, i = 1, …, n, (3rd requirement) (9)
g_1 : g_2 : … : g_n = ID_1 : ID_2 : … : ID_n (4th requirement)

The proposed system of inequalities (9) differs somewhat from the original one given in Eq. (3) through the presence of the 4th requirement. The 4th requirement in Eq. (9) implies that the ratio between the densities, g_i, should be equivalent to the ratio between the importance degrees, ID_i. For instance, if ID_1 = 2, ID_2 = 8, and ID_3 = 4, then the requirement can be expressed as g_2 = 4g_1 and g_3 = 2g_1.
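Under the requirement forms assumed here, the 4th requirement makes the system solvable without a general-purpose solver: writing g_i = t · ID_i and substituting into the 2nd requirement yields a single quadratic in t. The sketch below is an illustrative consequence of those assumed forms, not the paper's Excel Solver procedure, and the pairwise coefficients are hypothetical.

```python
import math
from itertools import combinations

def densities_from_importance(ID, lam):
    """Solve for the densities under the assumed requirements:
    g_i = t * ID_i (4th requirement) substituted into
    sum_i g_i + sum_{i<j} lam_ij * g_i * g_j = 1 (2nd requirement)
    gives a*t^2 + b*t - 1 = 0; the positive root keeps every g_i >= 0."""
    idx = list(ID)
    b = sum(ID[i] for i in idx)
    a = sum(lam[frozenset((i, j))] * ID[i] * ID[j]
            for i, j in combinations(idx, 2))
    t = 1.0 / b if a == 0 else (-b + math.sqrt(b * b + 4.0 * a)) / (2.0 * a)
    return {i: t * ID[i] for i in idx}

# Importance degrees from the worked ratio example (ID1=2, ID2=8, ID3=4);
# the pairwise dependence coefficients are assumed values for illustration.
ID = {1: 2.0, 2: 8.0, 3: 4.0}
lam = {frozenset((1, 2)): 0.5, frozenset((1, 3)): 0.5, frozenset((2, 3)): 0.5}
g = densities_from_importance(ID, lam)
```

By construction the returned densities preserve the importance-degree ratios (g_2 = 4g_1, g_3 = 2g_1) and satisfy the boundary condition exactly.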
In other words, the determination of densities in this method is performed by taking into consideration some extra information derived through the use of ISM. The available λ_ij and g_i values are then appropriately substituted into Eq. (1) and Eq. (2) in order to identify the complete set of λ⁰-measure values.

Phase 5: Aggregation using Choquet integral
In phase 5, the identified λ⁰-measure values and the available performance scores of each alternative can then be substituted into the Choquet integral model (4) to compute their final aggregated scores for ranking purposes.

Illustrating the feasibility of the proposed method
In this section, the feasibility of the proposed method is demonstrated on a sample supplier selection problem. Presume that a decision analyst from the automotive industry wants to evaluate the overall performance of five spare parts suppliers based on the six evaluation attributes depicted in Table 2. Also, post evaluation, he intends to suggest some optimal strategies for the suppliers to maintain or enhance their current performance. He surveyed a sample of clients who had had experience in dealing with all five suppliers. The clients were asked to rate the performance of each supplier with respect to each attribute on a 0-1 numerical scale. The final performance matrix of the suppliers, obtained through a simple averaging process, is presented in Table 3. Further, we assume that a group of three experts had also been appointed by the decision analyst to gather all the crucial initial data required to initiate the analysis. The phase-by-phase solution of the problem using the proposed alternate λ⁰-measure identification method and the Choquet integral can then be summarised as in the following sections.

Table 2
Attributes for evaluating performance of spare parts suppliers

1. Quality: Supplier's ability to consistently meet the quality requirements of the spare parts
2. Delivery: Ability to meet delivery schedules
3. Price: Reasonableness of the net prices of the spare parts
4. Technical capability
5. Transportation & communication: Supplier's location and their communication facilities
6. After-sales services: Supplier's ability to provide after-sales service

Notes: These attributes were shortlisted from the study conducted by Mandal and Deshmukh (1994). For the sake of maintaining simplicity, only six attributes considered to be very pertinent to the investigated case were adopted in this analysis.
Notes: The closer the value to 1, the better the supplier's performance with respect to the attribute.

Phase 1 of supplier evaluation problem
Assume that Table 4 is the SSIM developed after the group of experts jointly assessed the relationships between each possible pair of attributes, and that the binary relation matrix as shown in Table 5 was derived by adhering to the four rules mentioned in section 3.1.

Table 4 SSIM for supplier evaluation problem
The relationships expressed here are part of the actual evaluations provided in the study conducted by Mandal and Deshmukh (1994).

Table 5 Binary relation matrix for supplier evaluation problem
Next, by sequentially using Eq. (6) and Eq. (7), the initial (Table 6) and final (Table 7) reachability matrices were obtained.

Table 6
Initial reachability matrix for supplier evaluation problem

Table 7
Final reachability matrix for supplier evaluation problem
Notes: (*) indicates a derived (transitive) relation which does not appear in the initial reachability matrix.
Subsequently, based on the final reachability matrix, the driving and dependence power of each evaluation attribute were computed, as shown in Table 8.

Table 8
Driving and dependence power of each attribute

At the same time, a partition analysis was carried out on the final reachability matrix in order to decide the level, or position, of each attribute in the yet-to-be-developed ISM digraph. For this particular case, a total of three iterations was systematically performed, which means that the six attributes were divided into three levels. The level determination can be performed by comparing the reachability set of each attribute with its intersection set, as elucidated in section 3.1, step 7. Tables 9, 10, and 11 clarify how the levels of the attributes were determined at each iteration. By reorganizing the attributes in the final reachability matrix according to their levels, the lower triangular version of the matrix, as shown in Table 12, was obtained. The initial digraph representing the relationships among the attributes was constructed by adhering to Table 12. This initial version of the digraph was then "trimmed" by eliminating all transitive links to construct the finalized one, as exemplified in Fig. 2.
Table 12
Lower triangular matrix for supplier evaluation problem

Phase 2 of supplier evaluation problem
Based on the digraph, each pair of attributes, c_i and c_j, was then categorised according to the type of relationship they share (i.e. one-way direct relationship, indirect relationship, or independency). It was assumed that the experts had linguistically described the dependency strength of each pair of attributes as shown in Table 13, obviously keeping in mind that the dependency of the pairs in one category should not be stronger than that of the pairs in the preceding category. Subsequently, as suggested by Chen and Hwang (1992), a seven-point fuzzy scale (as shown in Fig. 3) was chosen and amended accordingly to quantify the linguistic judgments into their corresponding fuzzy numbers before converting them into appropriate crisp values using the fuzzy scoring approach. The final dependence coefficient, λ_ij, of each pair of attributes was then identified by averaging the crisp judgments provided by the experts. The overall process of finding the λ_ij value of each pair of attributes is summarised in Table 13.

Phase 3 of supplier evaluation problem
At this phase, the importance degree of each attribute was identified using Eq. (8), as shown in Table 14.

Phase 4 of supplier evaluation problem
Based on the identified λ_ij values and the available importance degree, ID_i, of each attribute, the system of inequalities (9) was constructed and solved with the aid of Excel Solver. By solving the system, the following densities were obtained: g_1 = 0.035, g_2 = 0.026, g_3 = 0.044, g_4 = 0.044, g_5 = 0.044, and g_6 = 0.035. Then, using Eq. (1) and Eq. (2), the complete set of λ⁰-measure values listed in Table 15 was identified; the actual calculations involved in determining the fuzzy measure values of selected subsets of attributes (*) are provided alongside Table 15. The same evaluation was then performed using the conventional SWA operator, which assumes independence among the attributes. Note that since the application of SWA requires the sum of the individual weights of the attributes to equal one, the available densities g_1, …, g_6 were normalized in advance to derive the following weights: w_1 = 0.154, w_2 = 0.115, w_3 = 0.192, w_4 = 0.192, w_5 = 0.192, and w_6 = 0.154. Table 16 summarizes the results obtained using the Choquet integral (together with the identified λ⁰-measure) and the SWA operator.
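The SWA weights above are simply the densities normalized to unit sum, which a quick check reproduces to within rounding of the reported values:

```python
# Densities obtained from the solved system (as reported in the text):
densities = {"g1": 0.035, "g2": 0.026, "g3": 0.044,
             "g4": 0.044, "g5": 0.044, "g6": 0.035}
total = sum(densities.values())  # 0.228
weights = {k: v / total for k, v in densities.items()}
```

The resulting weights agree with the reported w_1, …, w_6 once rounded to three decimal places.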

Comparison of the λ⁰-measure identification methods
In the context of the types of initial data required, the proposed λ⁰-measure identification method demands two types of initial data from a respondent: the contextual relationship between every two attributes and the dependency coefficient of each pair of attributes. Meanwhile, the revised method requires a respondent to provide his or her initial preference on the direct influence between every two attributes and the dependency coefficient of each pair of attributes. The original method only requires a respondent to provide judgment on the dependency coefficient of each pair of attributes. As a result, the original, revised, and proposed alternate methods require n(n + 1)/2, 3n(n + 1)/2, and n(n + 1) initial data values from a respondent, respectively. A simple experiment, as presented in Fig. 4, indicates that the difference between these three versions of the identification method in terms of initial data requirement is negligible when n is small. Nevertheless, an apparent difference can be observed as n grows larger.
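The comparison in Fig. 4 can be reproduced from these counts, assuming the original method needs n(n + 1)/2 values and the proposed and revised methods need two and three times that amount, respectively:

```python
def initial_data_requirement(n):
    """Initial data values per respondent for the three identification
    methods, under the counts assumed above."""
    base = n * (n + 1) // 2
    return {"original": base, "proposed": 2 * base, "revised": 3 * base}

# For the six-attribute case study the gap is modest:
req6 = initial_data_requirement(6)   # {'original': 21, 'proposed': 42, 'revised': 63}
# ...but it widens linearly in the base count as n grows:
req20 = initial_data_requirement(20)
```

For n = 6 the proposed method asks for 21 more values than the original, whereas for n = 20 the revised method already asks for 420 more.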

Results and discussion
By applying the Choquet integral together with the λ⁰-measure values identified through the proposed method, the supplier with a final aggregated score of 0.722 was identified as the best spare parts supplier, with the remaining suppliers ranked behind it in descending order of their aggregated scores. In contrast, a different ranking was obtained using the SWA operator, where V was identified as the best supplier. This variation is rooted in the failure of the SWA operator to capture the existing interactions between the six attributes during the aggregation process. Therefore, it is more rational for the decision makers to make their final decisions based on the results generated through the usage of the λ⁰-measure and the Choquet integral operator. In fact, the ISM method had revealed the presence of a certain degree of interaction between the attributes. Also, by referring to the digraph, it can be concluded that in order to retain or improve their overall performance, suppliers should mainly channel their resources (e.g. budget, manpower, and time) towards enhancing their "technical capability", "transportation & communication", and "quality". Improving these bottom-level attributes would eventually increase the performance of the suppliers with respect to the top-level attributes (i.e. "price", "delivery", and "after-sales service"). However, from a logical point of view, suppliers in quest of a more cost-effective solution could simply focus their main improvement efforts on the two independent attributes, "technical capability" and "transportation & communication". This would then directly or indirectly improve the suppliers' performance in terms of the remaining four dependent attributes (refer to Fig. 2), and thereafter may significantly uplift their overall image from the perspective of potential customers. In other words, the proposed alternate method has proven able to provide transparent and useful hints to the decision makers in determining the appropriate schemes for enhancing the performance of the alternatives.
On the other hand, through a simple experiment, we managed to establish that the initial data requirement of the proposed method is less than that of the revised method. To be precise, the proposed alternate method only requires twice the amount of initial data required by the original method, unlike the revised one, which demands thrice. In short, the proposed method has partially, if not completely, compensated for the shortcomings associated with each existing identification method.

Summary and recommendations
In this study, a different version of the λ⁰-measure identification method was developed by integrating the ISM method into the original λ⁰-measure identification method. This alternate method utilizes ISM to reveal the actual relationships held by the decision attributes and, at the same time, the outputs of ISM (i.e. the digraph and the driving and dependence power of each attribute) are used to estimate the inputs (i.e. the dependence coefficient, λ_ij, of each possible pair of attributes, and the density, g_i, of each attribute) required to identify the complete set of λ⁰-measure values.
The usage of the proposed method was demonstrated by evaluating a simple spare parts supplier selection problem. Unlike the original version, the presence of ISM equips this alternate method with the ability to provide clearer information to the decision makers about the relationships shared by the evaluation attributes, thereby enabling them to determine and execute more effective strategies to enhance the performance of the alternatives with better confidence. On the other hand, as expected, a simple experiment showed that the amount of initial data required by the proposed method is much less than that required by the revised method. To be exact, the proposed alternate method requires only twice the amount of initial data needed by the original method, unlike the revised method, which requires thrice.
In a nutshell, this study has introduced a λ⁰-measure identification method that offsets the drawbacks associated with each existing method; a method that is not too demanding in terms of initial data requirement and, at the same time, delivers important clues to the decision makers to help them decide the exact strategies needed to improve the performance of alternatives. Note that although the proposed method needs slightly more initial data than the original one, its advantage (i.e. the ability to provide better information for identifying optimal enhancement strategies) can be regarded as more meaningful in the context of decision making than the advantage associated with the original one (i.e. minimal initial data requirement from a respondent).
However, future research may amend the proposed method to further minimize or simplify the overall initial data requirement associated with its implementation. For instance, ISM could be swapped with any other suitable interaction modelling technique that not only reveals the relationship structure of the attributes, but at the same time requires the minimum possible amount of initial data from the respondents.

Fig. 1 .
Fig. 1. The alternate λ⁰-measure identification method

Nominal scale for establishing contextual relationships (Table 1):
V: c_i influences or reaches c_j
A: c_j influences or reaches c_i
X: c_i and c_j influence each other
O: c_i and c_j are unrelated

In step 3, the SSIM is transformed into a binary relation matrix (i.e. entries of 0 and 1) based on the following rules (Singh & Kanth, 2008): a) If the cell (c_i, c_j) is assigned the symbol V in the SSIM, then the cell (c_i, c_j) entry becomes 1 and the cell (c_j, c_i) entry becomes 0 in the binary relation matrix. b) If the cell (c_i, c_j) is assigned the symbol A in the SSIM, then the cell (c_i, c_j) entry becomes 0 and the cell (c_j, c_i) entry becomes 1 in the binary relation matrix. c) If the cell (c_i, c_j) is assigned the symbol X in the SSIM, then both the (c_i, c_j) and (c_j, c_i) entries become 1 in the binary relation matrix. d) If the cell (c_i, c_j) is assigned the symbol O in the SSIM, then both the (c_i, c_j) and (c_j, c_i) entries become 0 in the binary relation matrix.

Fig. 2 .
Fig. 2. Final digraph showing the relationships between the attributes

Fig. 4
Fig. 4. Initial data requirement of the three versions of identification methods

Table 9
First iteration of partition analysis

Table 13
Identification of λ_ij values

Table 17 recaps the differences between the available λ⁰-measure identification methods as discussed in this section.

Table 17
Comparison of the λ⁰-measure identification methods