Article

Probabilistic Linguistic Aggregation Operators Based on Einstein t-Norm and t-Conorm and Their Application in Multi-Criteria Group Decision Making

1 School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
2 Center for West African Studies, University of Electronic Science and Technology of China, Chengdu 610054, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(1), 39; https://doi.org/10.3390/sym11010039
Submission received: 8 December 2018 / Revised: 22 December 2018 / Accepted: 24 December 2018 / Published: 2 January 2019

Abstract:
One of the major problems of varied knowledge-based systems has to do with aggregation and fusion. Pang's probabilistic linguistic term sets denote the aggregation of fuzzy information and have recently attracted tremendous interest from researchers. The purpose of this article is to investigate methods of information aggregation under the probabilistic linguistic environment. In this setting, we define certain Einstein operational laws on probabilistic linguistic term elements (PLTEs) based on the Einstein product and Einstein sum. Consequently, we develop some probabilistic linguistic aggregation operators, notably the probabilistic linguistic Einstein average (PLEA) operator, the probabilistic linguistic Einstein geometric (PLEG) operator, the weighted probabilistic linguistic Einstein average (WPLEA) operator and the weighted probabilistic linguistic Einstein geometric (WPLEG) operator. These operators extend the weighted averaging operator and the weighted geometric operator, respectively, for the purpose of aggregating probabilistic linguistic term values. The Einstein t-norm and Einstein t-conorm constitute effective aggregation tools, allowing the input arguments to reinforce each other downwardly and upwardly, respectively. We then derive various properties of these operators. With the aid of the WPLEA and WPLEG, we originate approaches for multiple attribute group decision making (MAGDM) with probabilistic linguistic term sets (PLTSs). Lastly, we apply an illustrative example to elucidate our proposed methods and validate their potential.

1. Introduction

Generally, when expressing preferences by means of linguistic information, decision-makers frequently face the challenges of uncertainty and vagueness (Pang et al. [1]). To overcome this shortfall, Zadeh [2] introduced fuzzy sets (FSs) for decision-making under such conditions. Torra [3] subsequently proposed hesitant fuzzy sets (HFSs), a compelling extension of fuzzy sets for managing situations where a set of values is possible in defining the membership of an element. However, due to their limitations, Rodriguez et al. [4] introduced hesitant fuzzy linguistic term sets (HFLTSs) to further handle vague and imprecise information in which two or more sources of vagueness appear simultaneously. Rodriguez et al. [4] further stated that the modelling tools of ordinary fuzzy sets are limited and, besides, the aforementioned tools are used to define quantitative problems. Considering that uncertainty mostly arises from the vagueness of the explications used by experts in problems of a qualitative nature, it is appropriate to introduce the fuzzy linguistic approach to provide tangible results. Nevertheless, regarding the current studies of HFLTSs, Pang et al. [1] stated that the values proposed by the decision-makers cannot all have the same relevance, because that idea does not follow a realistic pattern. To bring some clarification, Pang et al. [1] propounded the probabilistic linguistic term sets (PLTSs). PLTSs were introduced to extend HFLTSs via the addition of probabilities without loss of the original linguistic information given by the experts. PLTSs thus came to light as a generalization of the existing HFLTS and HFS models through the introduction of probabilities and hesitations. Under the decision-making environment, mention could be made of the useful and flexible nature of PLTSs, which allows them to depict the qualitative judgement of experts [1].
They were introduced into the decision-making process to bring more flexibility and accuracy. Due to their relevance in dealing with uncertainty and vagueness, they are nowadays considered an important concept in the group decision-making domain. For instance, Pang et al. [1] formed certain basic arithmetic aggregation operators, such as the probabilistic linguistic weighted averaging (PLWA) operator and the probabilistic linguistic weighted geometric (PLWG) operator, for aggregating PLTEs. Bai et al. [5] defined more appropriate comparison methods and, in addition, instituted a robust way to handle PLTSs. Gou and Xu [6] established new operational laws with regard to probabilistic information. A multi-criteria group decision-making algorithm with probabilistic interval preference orderings was proposed by He et al. [7]. Under the probabilistic linguistic environment, Kobina et al. [8] proposed a series of probabilistic linguistic power aggregation operators to manage multi-criteria decision-making problems.
In decision-making, the accuracy of the final results largely depends on the information aggregation phase. For the past decade, many scholars have studied and developed numerous aggregation operators for PLTS information [1,4,5,6,7]. These aggregation operators are based on the algebraic operational laws of LTSs and PLTSs. However, the algebraic operational laws are not the only operational laws for information fusion. The Einstein operations are equally useful tools to substitute for the algebraic operations [9]. Zhao et al. [10], in their research, introduced the Einstein product as a t-norm and the Einstein sum as a t-conorm. The Einstein t-norm and t-conorm have been successfully used for processing uncertainty and vagueness in system analysis, decision analysis, modeling and forecasting applications. For instance, Yu et al. [11] developed a family of hesitant fuzzy Einstein aggregation operators, such as the hesitant fuzzy Einstein Choquet ordered averaging operator and the hesitant fuzzy Einstein Choquet ordered geometric operator, to deal with multiple attribute group decision-making under hesitant fuzzy environments. Wang and Liu [12] developed the interval-valued intuitionistic fuzzy Einstein weighted averaging (IVIFEWA) operator and demonstrated its practicality and flexibility on a set of propulsion systems. Wang and Liu [13] investigated the intuitionistic fuzzy Einstein weighted average (IFEWA) operator to accommodate situations where the given arguments are AIFVs and applied it to MADM problems with intuitionistic fuzzy information. Yang and Yuan [14] developed the induced interval-valued intuitionistic fuzzy Einstein ordered weighted geometric (I-IVIFEOWG) operator and applied it to multiple attribute decision-making under interval-valued intuitionistic fuzzy environments. Cai and Han [15] developed the induced interval-valued Einstein ordered weighted averaging operator.
Wang and Sun [16] also examined the interval-valued intuitionistic fuzzy Einstein geometric Choquet integral operator. Rahman et al. [17] focused on the interval-valued Pythagorean fuzzy Einstein hybrid weighted averaging aggregation operator and its application to group decision-making. Rahman et al. [18] proposed some interval-valued Pythagorean fuzzy Einstein weighted averaging aggregation operators. However, there is little investigation in the literature of aggregation techniques that use the Einstein operations to aggregate probabilistic linguistic information. Hence, the aim of this paper is to explore some probabilistic linguistic aggregation operators based on the Einstein operational laws. Specifically, we develop the probabilistic linguistic Einstein average (PLEA), probabilistic linguistic Einstein geometric (PLEG), weighted probabilistic linguistic Einstein average (WPLEA) and weighted probabilistic linguistic Einstein geometric (WPLEG) aggregation operators. Taking into consideration the WPLEA and WPLEG operators, we design a new multi-criteria group decision-making (MCGDM) approach for PLTS information. The contributions of the study are as follows: (1) Our proposed methods provide more versatility in the aggregation process and have the ability to depict the interrelationship of the input arguments and the individual evaluations. (2) Considering different situations, our proposed methods use Einstein operations with transformed PLTSs, which are more competent in handling uncertainty and vagueness than the existing PLTSs, fuzzy sets (FSs), hesitant fuzzy sets (HFSs) and hesitant fuzzy linguistic term sets (HFLTSs). (3) The opinions of the decision-makers remain the same in situations where only a few different linguistic terms evaluated by the DMs are considered. (4) Finally, they take into consideration the probabilistic information of the input arguments and make use of the novel operational laws of PLTSs proposed by Gou and Xu [6].
The remainder of the paper is structured as follows: In Section 2, we introduce certain elementary concepts and operations related to PLTSs and the Einstein operations. Section 3 deals with the Einstein operations of the transformed probabilistic linguistic term sets (PLTSs). In Section 4, we design a set of probabilistic linguistic Einstein aggregation operators (PLEA, PLEG, WPLEA, WPLEG) and study their desirable properties. In Section 5, we formulate the approach for applying MCGDM using the WPLEA and WPLEG operators. In Section 6, an illustrative example is given to demonstrate and validate the proposed methods. In Section 7, we conclude and expand on future studies.

2. Preliminaries

2.1. Probabilistic Linguistic Term Sets (PLTSs)

The theory of PLTSs (Pang et al. [1]) is an extension of the concept of HFLTSs. In what follows, we present some basic concepts of PLTSs and the corresponding operations.
Definition 1 (Pang et al. [1]).
Let $S = \{ s_t \mid t = 0, 1, \ldots, \tau \}$ be a linguistic term set. Then a probabilistic linguistic term set (PLTS) is defined as:
$$L(p) = \left\{ L^{(k)}\left(p^{(k)}\right) \;\middle|\; L^{(k)} \in S,\ p^{(k)} \geq 0,\ k = 1, 2, \ldots, \# L(p),\ \sum_{k=1}^{\# L(p)} p^{(k)} \leq 1 \right\},$$
where $L^{(k)}\left(p^{(k)}\right)$ is the linguistic term $L^{(k)}$ associated with the probability $p^{(k)}$, $r^{(k)}$ is the subscript of $L^{(k)}$, and $\# L(p)$ is the number of all linguistic terms in $L(p)$.
In a PLTS, the positions of the elements can be swapped arbitrarily. To ensure that the operational results are straightforwardly ascertained, Pang et al. [1] proposed the ordered PLTS, described as follows:
Definition 2 (Pang et al. [1]).
Given a PLTS $L(p) = \{ L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \# L(p) \}$, where $r^{(k)}$ is the subscript of the linguistic term $L^{(k)}$, $L(p)$ is called an ordered PLTS if the linguistic terms $L^{(k)}(p^{(k)})$ are arranged according to the values of $r^{(k)} p^{(k)}$ in descending order.
With regards to comparing the PLTSs, Pang et al. [1] defined the scores and the deviation degree of a PLTS:
Definition 3 (Pang et al. [1]).
Let $L(p) = \{ L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \# L(p) \}$ be a PLTS, and let $r^{(k)}$ be the subscript of the linguistic term $L^{(k)}$. Then the score of $L(p)$ is defined as follows:
$$E(L(p)) = s_{\bar{\alpha}},$$
where $\bar{\alpha} = \sum_{k=1}^{\# L(p)} r^{(k)} p^{(k)} \big/ \sum_{k=1}^{\# L(p)} p^{(k)}$. The deviation degree of $L(p)$ is:
$$\sigma(L(p)) = \left( \sum_{k=1}^{\# L(p)} \left( p^{(k)} \left( r^{(k)} - \bar{\alpha} \right) \right)^2 \right)^{0.5} \Big/ \sum_{k=1}^{\# L(p)} p^{(k)}.$$
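As a quick numerical check of Definition 3, the score subscript and deviation degree can be computed directly from the subscript/probability pairs. The sketch below is illustrative only (not the authors' code), and the sample PLTS $\{s_1(0.3), s_2(0.2), s_3(0.5)\}$ is a hypothetical input:

```python
# Sketch: score subscript alpha-bar and deviation degree of a PLTS,
# following Definition 3 (illustrative helper, hypothetical data).
def score_and_deviation(plts):
    """plts: list of (r, p) pairs, where r is the subscript of the term."""
    total_p = sum(p for _, p in plts)
    # Subscript of the score s_alpha-bar (weighted mean of subscripts):
    alpha = sum(r * p for r, p in plts) / total_p
    # Deviation degree per Definition 3:
    dev = sum((p * (r - alpha)) ** 2 for r, p in plts) ** 0.5 / total_p
    return alpha, dev

alpha, dev = score_and_deviation([(1, 0.3), (2, 0.2), (3, 0.5)])
print(round(alpha, 2), round(dev, 4))  # 2.2 0.5396
```

For this input the score is $s_{2.2}$, so under Definition 4 this PLTS would outrank any PLTS whose score subscript is below 2.2.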
Based on the score and the deviation degree of a PLTS, Pang et al. [1] further proposed the following laws to compare them:
Definition 4 (Pang et al. [1]).
Given two PLTSs $L_1(p)$ and $L_2(p)$, let $E(L_1(p))$ and $E(L_2(p))$ be the scores of $L_1(p)$ and $L_2(p)$, respectively.
(1) 
If $E(L_1(p)) > E(L_2(p))$, then $L_1(p)$ is bigger than $L_2(p)$, denoted by $L_1(p) > L_2(p)$;
(2) 
If $E(L_1(p)) < E(L_2(p))$, then $L_1(p)$ is smaller than $L_2(p)$, denoted by $L_1(p) < L_2(p)$;
(3) 
If $E(L_1(p)) = E(L_2(p))$, then we need to compare their deviation degrees:
(a) 
if $\sigma(L_1(p)) = \sigma(L_2(p))$, then $L_1(p)$ is equal to $L_2(p)$, denoted by $L_1(p) \sim L_2(p)$;
(b) 
if $\sigma(L_1(p)) > \sigma(L_2(p))$, then $L_1(p)$ is smaller than $L_2(p)$, denoted by $L_1(p) < L_2(p)$;
(c) 
if $\sigma(L_1(p)) < \sigma(L_2(p))$, then $L_1(p)$ is greater than $L_2(p)$, denoted by $L_1(p) > L_2(p)$.
A careful examination of the comparison laws of PLTSs may reveal that the number of their corresponding linguistic terms may be unequal. In order to address this problem, Pang et al. [1] normalized the PLTSs by increasing the numbers of linguistic terms for PLTSs. Hence, the normalization of PLTSs is defined as follows:
Definition 5 (Pang et al. [1]).
Let $L_1(p) = \{ L_1^{(k)}(p_1^{(k)}) \mid k = 1, 2, \ldots, \# L_1(p) \}$ and $L_2(p) = \{ L_2^{(k)}(p_2^{(k)}) \mid k = 1, 2, \ldots, \# L_2(p) \}$ be two PLTSs, where $\# L_1(p)$ and $\# L_2(p)$ are the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$. If $\# L_1(p) > \# L_2(p)$, then we add $\# L_1(p) - \# L_2(p)$ linguistic terms to $L_2(p)$ so that the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$ are identical. The added linguistic terms are the smallest ones in $L_2(p)$ and their probabilities are zero. Analogously, if $\# L_1(p) < \# L_2(p)$, we apply the same method to $L_1(p)$.
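The padding step of Definition 5 can be sketched in a few lines; this is an illustrative helper (not the authors' implementation), and the two sample PLTSs reuse the normalized example data that appears later in the paper:

```python
# Sketch of Definition 5: pad the shorter PLTS with copies of its
# smallest linguistic term, each carrying probability zero.
def normalize_lengths(l1, l2):
    """l1, l2: lists of (r, p) pairs; returns both lists, the shorter padded."""
    l1, l2 = list(l1), list(l2)
    short, long_ = (l1, l2) if len(l1) < len(l2) else (l2, l1)
    # Smallest subscript already present in the shorter PLTS:
    smallest = min(short, key=lambda term: term[0])[0]
    short += [(smallest, 0.0)] * (len(long_) - len(short))  # in-place extend
    return l1, l2

a, b = normalize_lengths([(1, 0.3), (2, 0.2), (3, 0.5)], [(-1, 0.4), (0, 0.6)])
print(b)  # [(-1, 0.4), (0, 0.6), (-1, 0.0)]
```

Because the added terms carry probability zero, they leave the score and deviation degree of Definition 3 unchanged.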
Definition 6 (Pang et al. [1]).
Let $S = \{ s_t \mid t = 0, 1, \ldots, \tau \}$ be a linguistic term set. Given three PLTSs $L(p)$, $L_1(p)$ and $L_2(p)$, their basic operations are summarized as follows:
(1) 
$L_1(p) \oplus L_2(p) = \bigcup_{L_1^{(k)} \in L_1(p),\, L_2^{(k)} \in L_2(p)} \left\{ p_1^{(k)} L_1^{(k)} \oplus p_2^{(k)} L_2^{(k)} \right\}$;
(2) 
$L_1(p) \otimes L_2(p) = \bigcup_{L_1^{(k)} \in L_1(p),\, L_2^{(k)} \in L_2(p)} \left\{ \left( L_1^{(k)} \right)^{p_1^{(k)}} \otimes \left( L_2^{(k)} \right)^{p_2^{(k)}} \right\}$;
(3) 
$\lambda L(p) = \bigcup_{L^{(k)} \in L(p)} \left\{ \lambda p^{(k)} L^{(k)} \right\}$, $\lambda \geq 0$;
(4) 
$\left( L(p) \right)^{\lambda} = \bigcup_{L^{(k)} \in L(p)} \left\{ \left( L^{(k)} \right)^{\lambda p^{(k)}} \right\}$, $\lambda \geq 0$.

2.2. Einstein Operations

The concept of a triangular norm was introduced by Klement et al. [19] in order to generalize the triangular inequality of a metric. The existing notions of a t-norm and its dual operator (t-conorm) originated from Schweizer and Sklar [20]. These two operations can be applied as a generalization of the Boolean logic connectives to multi-valued logic. The t-norms generalize the conjunctive 'AND' operator and the t-conorms generalize the disjunctive 'OR' operator, which allows them to be used to define the intersection and union operations in fuzzy logic. The Einstein operations comprise the Einstein product and the Einstein sum, which are examples of a t-norm and a t-conorm, respectively; they are defined as follows.
Definition 7 ([21]).
The Einstein product, as a t-norm, is a function $T : [0,1] \times [0,1] \to [0,1]$ such that
$$x \otimes_{\varepsilon} y = \frac{x y}{1 + (1 - x)(1 - y)}, \quad (x, y) \in [0,1]^2.$$
Definition 8 ([21]).
The Einstein sum, as a t-conorm, is also a function $S : [0,1] \times [0,1] \to [0,1]$ such that
$$x \oplus_{\varepsilon} y = \frac{x + y}{1 + x y}, \quad (x, y) \in [0,1]^2.$$
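A direct encoding of Definitions 7 and 8 makes their closure on $[0,1]$ easy to verify numerically; the sketch below is illustrative, and the sample values 0.5 and 0.8 are arbitrary:

```python
# Einstein t-norm (product) and t-conorm (sum) on [0, 1],
# per Definitions 7 and 8.
def einstein_product(x, y):
    return (x * y) / (1 + (1 - x) * (1 - y))

def einstein_sum(x, y):
    return (x + y) / (1 + x * y)

# Both operations keep results inside [0, 1]:
print(einstein_product(0.5, 0.8))  # 0.4 / 1.1 = 0.3636...
print(einstein_sum(0.5, 0.8))      # 1.3 / 1.4 = 0.9285...
```

Note that the Einstein product yields a smaller value than the ordinary product's conjunction bound in the denominator would suggest, while the Einstein sum saturates at 1 (e.g., $1 \oplus_{\varepsilon} 1 = 2/2 = 1$), which is what allows the aggregated arguments to "reinforce each other downwardly and upwardly".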

3. Einstein Operations of Transformed Probabilistic Linguistic Term Sets

Since the Einstein operational laws require the values of the individual arguments to lie within the interval $[0,1]$, and some probabilistic linguistic elements (PLEs) might not belong to $[0,1]$, we need an equivalent transformation of PLTSs. Fortunately, Gou and Xu [6] defined the equivalent transformation of probabilistic linguistic term sets (PLTSs) as follows:
Definition 9 (Gou and Xu [6]).
Let $S = \{ s_t \mid t = -\tau, \ldots, -1, 0, 1, \ldots, \tau \}$ be a linguistic term set and $L(p)$ a PLTS. The equivalent transformation function of $L(p)$ is defined as:
$$g(L(p)) = \left\{ \left( \frac{r^{(k)}}{2\tau} + \frac{1}{2} \right) \left( p^{(k)} \right) \right\} = L_{\gamma}(p),$$
where $g : [-\tau, \tau] \to [0,1]$ and $\gamma = g(L^{(k)}) = \frac{r^{(k)}}{2\tau} + \frac{1}{2} \in [0,1]$.
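The transformation $g$ is a simple affine rescaling of the subscript range $[-\tau, \tau]$ onto $[0,1]$; a minimal sketch, with $\tau = 3$ chosen only for illustration:

```python
# Equivalent transformation g of Definition 9: maps a subscript
# r in [-tau, tau] onto [0, 1] via g = r/(2*tau) + 1/2.
def g(r, tau):
    return r / (2 * tau) + 0.5

# For tau = 3 (a scale s_-3, ..., s_3):
print([g(r, 3) for r in (-3, -1, 0, 1, 3)])
# [0.0, 0.333..., 0.5, 0.666..., 1.0]
```

The endpoints $s_{-\tau}$ and $s_{\tau}$ map to 0 and 1, and the middle term $s_0$ maps to 0.5, so the Einstein laws of Definitions 7 and 8 can be applied to the transformed values.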
Based on Definition 9, we can obtain new operational laws defined as follows:
Proposition 1.
Let $L_i(p) = \{ L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \# L_i(p) \}$ $(i = 1, 2, \ldots, n)$ be a collection of PLTSs and $g(L(p))$ its equivalent transformation. Given three transformed PLTSs $g(L(p))$, $g(L_1(p))$ and $g(L_2(p))$, then:
(1) 
$g(L_1(p)) \oplus g(L_2(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p)),\, g(L_2^{(k)}) \in g(L_2(p))} \left\{ \left( g(L_1^{(k)}) + g(L_2^{(k)}) - g(L_1^{(k)}) g(L_2^{(k)}) \right) \left( p_1^{(k)} p_2^{(k)} \right) \right\}$;
(2) 
$g(L_1(p)) \otimes g(L_2(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p)),\, g(L_2^{(k)}) \in g(L_2(p))} \left\{ \left( g(L_1^{(k)}) \times g(L_2^{(k)}) \right) \left( p_1^{(k)} p_2^{(k)} \right) \right\}$;
(3) 
$\lambda g(L(p)) = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ \left( 1 - \left( 1 - g(L^{(k)}) \right)^{\lambda} \right) \left( p^{(k)} \right) \right\}$, $\lambda \geq 0$;
(4) 
$g(L(p))^{\lambda} = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ g(L^{(k)})^{\lambda} \left( p^{(k)} \right) \right\}$, $\lambda \geq 0$.
Based on Definition 7 and Definition 8, we give some new operations on the transformed PLTEs as follows:
Proposition 2.
Let $S = \{ s_t \mid t = 0, 1, \ldots, \tau \}$ be a linguistic term set and $\lambda > 0$. Given three transformed PLTSs $g(L(p))$, $g(L_1(p))$ and $g(L_2(p))$, then:
(1) 
$g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p)),\, g(L_2^{(k)}) \in g(L_2(p))} \left\{ \dfrac{p_1^{(k)} g(L_1^{(k)}) + p_2^{(k)} g(L_2^{(k)})}{1 + \left[ p_1^{(k)} g(L_1^{(k)}) \right] \left[ p_2^{(k)} g(L_2^{(k)}) \right]} \right\}$;
(2) 
$g(L_1(p)) \otimes_{\varepsilon} g(L_2(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p)),\, g(L_2^{(k)}) \in g(L_2(p))} \left\{ \dfrac{\left[ p_1^{(k)} g(L_1^{(k)}) \right] \left[ p_2^{(k)} g(L_2^{(k)}) \right]}{1 + \left[ 1 - p_1^{(k)} g(L_1^{(k)}) \right] \left[ 1 - p_2^{(k)} g(L_2^{(k)}) \right]} \right\}$;
(3) 
$\lambda \cdot_{\varepsilon} g(L(p)) = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ \dfrac{\left[ 1 + p^{(k)} g(L^{(k)}) \right]^{\lambda} - \left[ 1 - p^{(k)} g(L^{(k)}) \right]^{\lambda}}{\left[ 1 + p^{(k)} g(L^{(k)}) \right]^{\lambda} + \left[ 1 - p^{(k)} g(L^{(k)}) \right]^{\lambda}} \right\}$;
(4) 
$g(L(p))^{\wedge_{\varepsilon} \lambda} = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ \dfrac{2 \left[ p^{(k)} g(L^{(k)}) \right]^{\lambda}}{\left[ 2 - p^{(k)} g(L^{(k)}) \right]^{\lambda} + \left[ p^{(k)} g(L^{(k)}) \right]^{\lambda}} \right\}$, where $p^{(k)} g(L^{(k)}) \in [0,1]$ and $g(L_i(p)) = g(L_i^{(k)}) p_i^{(k)}$.
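The four Einstein laws of Proposition 2 can be encoded for a single transformed element $x = p^{(k)} g(L^{(k)}) \in [0,1]$, which also gives a numerical check of the consistency conditions used in the proofs below, namely $2 \cdot_{\varepsilon} x = x \oplus_{\varepsilon} x$ and $x^{\wedge_{\varepsilon} 2} = x \otimes_{\varepsilon} x$. This is an illustrative sketch, with the value $x = 0.2$ chosen arbitrarily:

```python
# The four Einstein operational laws of Proposition 2, written for a
# single transformed element x = p * g(L) in [0, 1].
def e_sum(x, y):            # law (1): Einstein sum
    return (x + y) / (1 + x * y)

def e_prod(x, y):           # law (2): Einstein product
    return (x * y) / (1 + (1 - x) * (1 - y))

def e_scale(lam, x):        # law (3): Einstein scalar multiplication
    a, b = (1 + x) ** lam, (1 - x) ** lam
    return (a - b) / (a + b)

def e_power(x, lam):        # law (4): Einstein power
    a, b = (2 - x) ** lam, x ** lam
    return 2 * b / (a + b)

x = 0.2
assert abs(e_scale(2, x) - e_sum(x, x)) < 1e-12   # 2 ·ε x == x ⊕ε x
assert abs(e_power(x, 2) - e_prod(x, x)) < 1e-12  # x^ε2 == x ⊗ε x
```

These two identities are exactly the base cases used in the proofs of operational laws (3) and (4) that follow.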
Since operational laws (1) and (2) are straightforward, we prove only operational laws (3) and (4).
Proof. 
We first prove operational law (4) on the basis of operational law (2).
Based on Definitions 7 and 8, let $x = g(L_1(p))$ and $y = g(L_2(p))$; then
$$g(L_1(p)) \otimes_{\varepsilon} g(L_2(p)) = \frac{g(L_1(p)) g(L_2(p))}{1 + \left( 1 - g(L_1(p)) \right) \left( 1 - g(L_2(p)) \right)} = \frac{g(L_1(p)) g(L_2(p))}{2 - g(L_1(p)) - g(L_2(p)) + g(L_1(p)) g(L_2(p))}.$$
For $g(L_1(p)) = g(L_2(p)) = g(L(p))$, we obtain
$$g(L(p))^{\wedge_{\varepsilon} 2} = \frac{g(L(p))^2}{2 - 2 g(L(p)) + g(L(p))^2} = \frac{2 g(L(p))^2}{\left( 2 - g(L(p)) \right)^2 + g(L(p))^2}.$$
Generalizing to any $\lambda \in \mathbb{R}$, we have
$$g(L(p))^{\wedge_{\varepsilon} \lambda} = \frac{2 g(L(p))^{\lambda}}{\left( 2 - g(L(p)) \right)^{\lambda} + g(L(p))^{\lambda}},$$
and therefore, since $g(L(p)) = g(L^{(k)}) p^{(k)}$,
$$g(L(p))^{\wedge_{\varepsilon} \lambda} = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ \frac{2 \left[ p^{(k)} g(L^{(k)}) \right]^{\lambda}}{\left[ 2 - p^{(k)} g(L^{(k)}) \right]^{\lambda} + \left[ p^{(k)} g(L^{(k)}) \right]^{\lambda}} \right\}.$$
Proved as required. □
Considering operational law (1), we now prove operational law (3):
$$g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) = \frac{g(L_1(p)) + g(L_2(p))}{1 + g(L_1(p)) g(L_2(p))}.$$
If $g(L_1(p)) = g(L_2(p)) = g(L(p))$, then
$$2 \cdot_{\varepsilon} g(L(p)) = \frac{2 g(L(p))}{1 + g(L(p))^2},$$
so for any $\lambda \in \mathbb{R}$ we have
$$\lambda \cdot_{\varepsilon} g(L(p)) = \frac{\left( 1 + g(L(p)) \right)^{\lambda} - \left( 1 - g(L(p)) \right)^{\lambda}}{\left( 1 + g(L(p)) \right)^{\lambda} + \left( 1 - g(L(p)) \right)^{\lambda}},$$
and therefore, since $g(L(p)) = g(L^{(k)}) p^{(k)}$,
$$\lambda \cdot_{\varepsilon} g(L(p)) = \bigcup_{g(L^{(k)}) \in g(L(p))} \left\{ \frac{\left[ 1 + p^{(k)} g(L^{(k)}) \right]^{\lambda} - \left[ 1 - p^{(k)} g(L^{(k)}) \right]^{\lambda}}{\left[ 1 + p^{(k)} g(L^{(k)}) \right]^{\lambda} + \left[ 1 - p^{(k)} g(L^{(k)}) \right]^{\lambda}} \right\}. \qquad \square$$
Based on the operational laws (1)–(4) of Section 3, we can easily obtain the following properties.
(1)
$g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) = g(L_2(p)) \oplus_{\varepsilon} g(L_1(p))$.
(2)
$\left( g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) \right) \oplus_{\varepsilon} g(L_3(p)) = g(L_1(p)) \oplus_{\varepsilon} \left( g(L_2(p)) \oplus_{\varepsilon} g(L_3(p)) \right)$.
(3)
$\lambda \cdot_{\varepsilon} \left( g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) \right) = \lambda \cdot_{\varepsilon} g(L_1(p)) \oplus_{\varepsilon} \lambda \cdot_{\varepsilon} g(L_2(p))$.
(4)
$\lambda_1 \cdot_{\varepsilon} \left( \lambda_2 \cdot_{\varepsilon} g(L(p)) \right) = \left( \lambda_1 \lambda_2 \right) \cdot_{\varepsilon} g(L(p))$.
Proof. 
$g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) = \dfrac{p_1^{(k)} g(L_1^{(k)}) + p_2^{(k)} g(L_2^{(k)})}{1 + p_1^{(k)} g(L_1^{(k)}) \, p_2^{(k)} g(L_2^{(k)})}$ and $g(L_2(p)) \oplus_{\varepsilon} g(L_1(p)) = \dfrac{p_2^{(k)} g(L_2^{(k)}) + p_1^{(k)} g(L_1^{(k)})}{1 + p_2^{(k)} g(L_2^{(k)}) \, p_1^{(k)} g(L_1^{(k)})}$.
Since addition and multiplication of real numbers are commutative, the two numerators coincide and the two denominators coincide. Therefore
$$\frac{p_1^{(k)} g(L_1^{(k)}) + p_2^{(k)} g(L_2^{(k)})}{1 + p_1^{(k)} g(L_1^{(k)}) \, p_2^{(k)} g(L_2^{(k)})} = \frac{p_2^{(k)} g(L_2^{(k)}) + p_1^{(k)} g(L_1^{(k)})}{1 + p_2^{(k)} g(L_2^{(k)}) \, p_1^{(k)} g(L_1^{(k)})},$$
i.e., $g(L_1(p)) \oplus_{\varepsilon} g(L_2(p)) = g(L_2(p)) \oplus_{\varepsilon} g(L_1(p))$.
Hence, we complete the proof of Property 1. The remaining properties can easily be proved. □

4. Probabilistic Linguistic Aggregation Operators

Based on the probabilistic linguistic environment, we treat the input arguments as PLTSs and we deeply investigate the extension of Einstein t-norm and Einstein t-conorm aggregation operators.

4.1. Probabilistic Linguistic Einstein Average (PLEA) Aggregation Operators

In this section, we discuss the extension of Einstein t-conorm aggregation operators to accommodate the probabilistic linguistic environment. Specifically, we propose some probabilistic linguistic Einstein average operators, i.e., the probabilistic linguistic Einstein average (PLEA) and the weighted probabilistic linguistic Einstein average (WPLEA), which allow the input arguments to reinforce and support each other during the aggregation process.

4.1.1. PLEA

Based on the results of Definition 1 and operational law (1) of Proposition 1, we present the definition of the PLEA aggregation operator as follows:
Definition 10.
Let $L_i(p) = \{ L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \# L_i(p) \}$ $(i = 1, 2, \ldots, n)$ be a collection of PLTSs and $g(L_i(p))$ their equivalent transformations. A probabilistic linguistic Einstein average (PLEA) operator is a mapping $g(L^n(p)) \to g(L(p))$, such that
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{n} \frac{1}{n} g(L_i(p)).$$
Theorem 1.
Let $L_i(p) = \{ L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \# L_i(p) \}$ $(i = 1, 2, \ldots, n)$ be a collection of PLTSs and $g(L_i(p))$ their equivalent transformations. Then their aggregated value obtained by using the PLEA operator is also a PLTE, and
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \bigcup_{g(L_i^{(k)}) \in g(L_i(p)),\, i = 1, \ldots, n} \left\{ \frac{\prod_{i=1}^{n} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{n}} - \prod_{i=1}^{n} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{n}}}{\prod_{i=1}^{n} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{n}} + \prod_{i=1}^{n} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{n}}} \right\}. \quad (8)$$
Proof. 
We prove (8) by using mathematical induction on $n$. For $n = 2$, according to operational law (3) of Proposition 2, we have
$$\left( \tfrac{1}{2} \right) \cdot_{\varepsilon} g(L_1(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p))} \left\{ \frac{\left[ 1 + p_1^{(k)} g(L_1^{(k)}) \right]^{\frac{1}{2}} - \left[ 1 - p_1^{(k)} g(L_1^{(k)}) \right]^{\frac{1}{2}}}{\left[ 1 + p_1^{(k)} g(L_1^{(k)}) \right]^{\frac{1}{2}} + \left[ 1 - p_1^{(k)} g(L_1^{(k)}) \right]^{\frac{1}{2}}} \right\},$$
$$\left( \tfrac{1}{2} \right) \cdot_{\varepsilon} g(L_2(p)) = \bigcup_{g(L_2^{(k)}) \in g(L_2(p))} \left\{ \frac{\left[ 1 + p_2^{(k)} g(L_2^{(k)}) \right]^{\frac{1}{2}} - \left[ 1 - p_2^{(k)} g(L_2^{(k)}) \right]^{\frac{1}{2}}}{\left[ 1 + p_2^{(k)} g(L_2^{(k)}) \right]^{\frac{1}{2}} + \left[ 1 - p_2^{(k)} g(L_2^{(k)}) \right]^{\frac{1}{2}}} \right\}.$$
Then, applying the Einstein sum to the two terms,
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)) \right) = \left( \tfrac{1}{2} \right) \cdot_{\varepsilon} g(L_1(p)) \oplus_{\varepsilon} \left( \tfrac{1}{2} \right) \cdot_{\varepsilon} g(L_2(p)) = \bigcup_{g(L_1^{(k)}) \in g(L_1(p)),\, g(L_2^{(k)}) \in g(L_2(p))} \left\{ \frac{\prod_{i=1}^{2} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{2}} - \prod_{i=1}^{2} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{2}}}{\prod_{i=1}^{2} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{2}} + \prod_{i=1}^{2} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{2}}} \right\}.$$
That is, (8) holds for $n = 2$. Suppose (8) holds for $n = m$, i.e.,
$$\bigoplus\nolimits_{\varepsilon,\, i=1}^{m} \frac{1}{m} g(L_i(p)) = \bigcup_{g(L_i^{(k)}) \in g(L_i(p)),\, i = 1, \ldots, m} \left\{ \frac{\prod_{i=1}^{m} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m}} - \prod_{i=1}^{m} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m}}}{\prod_{i=1}^{m} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m}} + \prod_{i=1}^{m} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m}}} \right\}.$$
Then, for $n = m + 1$, based on Definition 10 and the operational laws of Proposition 2, we have
$$\bigoplus\nolimits_{\varepsilon,\, i=1}^{m+1} \frac{1}{m+1} g(L_i(p)) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{m} \frac{1}{m+1} g(L_i(p)) \oplus_{\varepsilon} \frac{1}{m+1} g(L_{m+1}(p)) = \bigcup_{g(L_i^{(k)}) \in g(L_i(p)),\, i = 1, \ldots, m+1} \left\{ \frac{\prod_{i=1}^{m+1} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m+1}} - \prod_{i=1}^{m+1} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m+1}}}{\prod_{i=1}^{m+1} \left( 1 + p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m+1}} + \prod_{i=1}^{m+1} \left( 1 - p_i^{(k)} g(L_i^{(k)}) \right)^{\frac{1}{m+1}}} \right\},$$
i.e., (8) holds for $n = m + 1$, which completes the proof of Theorem 1. □
We give an illustrative example to demonstrate the validity of the operational laws in Definition 10.
Considering [6], let $L_1(p) = \{ s_1(0.3), s_2(0.2), s_3(0.5) \}$, $L_2(p) = \{ s_{-1}(0.2), s_0(0.3) \}$ and $\lambda = \tfrac{1}{2}$. After normalization, we obtain $L_2(p) = \{ s_{-1}(0.4), s_0(0.6) \}$. Given that $g(L_i(p)) = \left( \frac{r_i^{(k)}}{2\tau} + \frac{1}{2} \right) \left( p_i^{(k)} \right)$ with $\tau = 3$, we obtain $g(L_1(p)) = \left\{ \tfrac{2}{3}(0.3), \tfrac{5}{6}(0.2), 1(0.5) \right\}$ and $g(L_2(p)) = \left\{ \tfrac{1}{3}(0.4), \tfrac{1}{2}(0.6) \right\}$.
Considering operational laws (1) and (3), we obtain, for the first pair of elements,
$$\tfrac{1}{2} \cdot_{\varepsilon} g(L_1(p)) \oplus_{\varepsilon} \tfrac{1}{2} \cdot_{\varepsilon} g(L_2(p)) = \left\{ \frac{\dfrac{(1 + 0.6667 \times 0.3)^{\frac{1}{2}} - (1 - 0.6667 \times 0.3)^{\frac{1}{2}}}{(1 + 0.6667 \times 0.3)^{\frac{1}{2}} + (1 - 0.6667 \times 0.3)^{\frac{1}{2}}} + \dfrac{(1 + 0.3333 \times 0.4)^{\frac{1}{2}} - (1 - 0.3333 \times 0.4)^{\frac{1}{2}}}{(1 + 0.3333 \times 0.4)^{\frac{1}{2}} + (1 - 0.3333 \times 0.4)^{\frac{1}{2}}}}{1 + \dfrac{(1 + 0.6667 \times 0.3)^{\frac{1}{2}} - (1 - 0.6667 \times 0.3)^{\frac{1}{2}}}{(1 + 0.6667 \times 0.3)^{\frac{1}{2}} + (1 - 0.6667 \times 0.3)^{\frac{1}{2}}} \times \dfrac{(1 + 0.3333 \times 0.4)^{\frac{1}{2}} - (1 - 0.3333 \times 0.4)^{\frac{1}{2}}}{(1 + 0.3333 \times 0.4)^{\frac{1}{2}} + (1 - 0.3333 \times 0.4)^{\frac{1}{2}}}}, \; \ldots \right\},$$
which yields
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)) \right) = \left( 0.1668, 0.2506, 0.1500, 0.2344, 0.3290, 0.4048 \right).$$
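The six aggregated values above can be reproduced with the closed form of Theorem 1 for $n = 2$. The sketch below is illustrative and assumes the normalized example data; small deviations in the last digit against the values in the text come from rounding:

```python
from itertools import product

# Reproduce the PLEA values of the example via Theorem 1's closed form
# (n = 2, equal weights 1/2); (g, p) pairs are the transformed example data.
gL1 = [(2 / 3, 0.3), (5 / 6, 0.2), (1.0, 0.5)]
gL2 = [(1 / 3, 0.4), (1 / 2, 0.6)]

def plea(pairs):
    n = len(pairs)
    plus = minus = 1.0
    for g, p in pairs:
        plus *= (1 + p * g) ** (1 / n)
        minus *= (1 - p * g) ** (1 / n)
    return (plus - minus) / (plus + minus)

results = [round(plea([a, b]), 4) for a, b in product(gL1, gL2)]
print(results)  # agrees with the text's six values to about +-0.0001
```

The enumeration order (each element of $g(L_1(p))$ paired with each element of $g(L_2(p))$) matches the order of the six values stated in the text.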
With respect to Definition 10 and Theorem 1, it can easily be proven that the PLEA aggregation operator has the following desirable properties.
Property 1 (Idempotency).
Let g ( L i ( p ) ) ( i = 1 , 2 , , n ) be a collection of transformed PLTSs. If all g ( L i ( p ) ) ( i = 1 , 2 , , n ) are equal, i.e., g ( L i ( p ) ) = g ( L ( p ) ) then
P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L ( p ) )
Proof. 
If $g(L_i(p)) = g(L(p))$ for all $i$, then
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{n} \frac{1}{n} g(L_i(p)) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{n} \frac{1}{n} g(L(p)) = g(L(p)). \qquad \square$$
Property 2 (Boundedness).
Let $g(L_i(p))$ $(i = 1, 2, \ldots, n)$ be a collection of transformed PLTSs; then we have:
$$\min_{i=1}^{n} \min_{k=1}^{\# L_i(p)} p_i^{(k)} g(L_i^{(k)}) \leq g(L) \leq \max_{i=1}^{n} \max_{k=1}^{\# L_i(p)} p_i^{(k)} g(L_i^{(k)}),$$
where $g(L) = \mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right)$.
Proof. 
According to the result of Theorem 1, $\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right)$ is computed as:
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{n} \frac{1}{n} g(L_i(p)).$$
Then, we can deduce the following relationships:
$$\min_{i=1}^{n} \min_{k=1}^{\# L_i(p)} p_i^{(k)} g(L_i^{(k)}) \leq p_i^{(k)} g(L_i^{(k)}) \leq \max_{i=1}^{n} \max_{k=1}^{\# L_i(p)} p_i^{(k)} g(L_i^{(k)}).$$
By using the result of Theorem 1, we can easily conclude the proof of Property 2. □
Property 3 (Monotonicity).
Let $g(L_i(p))$ and $g(L_i^{\prime}(p))$ be two sets of transformed PLTSs in which the numbers of linguistic terms are identical $(i = 1, 2, \ldots, n)$. If $g(L_i^{(k)})(p_i^{(k)}) \leq g(L_i^{\prime(k)})(p_i^{\prime(k)})$ for all $i$, i.e., $g(L_i(p)) \leq g(L_i^{\prime}(p))$, then
$$\mathrm{PLEA}\left( g(L_1(p)), \ldots, g(L_n(p)) \right) \leq \mathrm{PLEA}\left( g(L_1^{\prime}(p)), \ldots, g(L_n^{\prime}(p)) \right).$$
Property 4 (Commutativity).
Let $\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right)$ be a collection of transformed PLTSs and let $\left( g(L_1^{\prime}(p)), g(L_2^{\prime}(p)), \ldots, g(L_n^{\prime}(p)) \right)$ be any permutation of it; then
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \mathrm{PLEA}\left( g(L_1^{\prime}(p)), g(L_2^{\prime}(p)), \ldots, g(L_n^{\prime}(p)) \right).$$
Proof. 
Because $\left( g(L_1^{\prime}(p)), g(L_2^{\prime}(p)), \ldots, g(L_n^{\prime}(p)) \right)$ is any permutation of $\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right)$, by utilizing the results of Theorem 1 we can easily finish the proof of Property 4. □

4.1.2. WPLEA

In this section, we mainly consider the aggregation of the probabilistic linguistic information on the basis of the operational laws defined in Section 3. In the following, we present the weighted probabilistic linguistic Einstein average (WPLEA) operator based on the weighted arithmetic mean.
Definition 11.
Let $L_i(p)$ $(i = 1, 2, \ldots, n)$ be a collection of PLTSs, and let $w = (w_1, w_2, \ldots, w_n)^T$ denote the weighting vector of $L_i(p)$, with $w_i \in [0,1]$ and $\sum_{i=1}^{n} w_i = 1$. Given the weight vector $w$, we define the weighted probabilistic linguistic Einstein average (WPLEA) as follows:
$$\mathrm{WPLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \bigoplus\nolimits_{\varepsilon,\, i=1}^{n} w_i g(L_i(p)).$$
In particular, if $w = (1/n, 1/n, \ldots, 1/n)^T$, then the WPLEA operator reduces to the probabilistic linguistic Einstein average (PLEA) operator:
$$\mathrm{PLEA}\left( g(L_1(p)), g(L_2(p)), \ldots, g(L_n(p)) \right) = \frac{1}{n} g(L_1(p)) \oplus_{\varepsilon} \frac{1}{n} g(L_2(p)) \oplus_{\varepsilon} \cdots \oplus_{\varepsilon} \frac{1}{n} g(L_n(p)).$$
In light of the operations of the PLTSs described in Definition 10, we can deduce the following theorem.
Theorem 2.
Suppose that g ( L i ( p ) ) ( i = 1 , 2 , , n ) is a collection of transformed PLTSs, then their aggregated values by using the WPLEA operator is also a transformed PLTS, and:
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L i ( k ) ) g ( L i ( p ) ) { i = 1 n ( 1 + g ( L i ( k ) ) p i ( k ) ) w i i = 1 n ( 1 g ( L i ( k ) ) p i ( k ) ) w i i = 1 n ( 1 + g ( L i ( k ) ) p i ( k ) ) w i + i = 1 n ( 1 g ( L i ( k ) ) p i ( k ) ) w i }
where w = ( w 1 , w 2 , … , w n ) T is the weight vector of g ( L i ( p ) ) ( i = 1 , 2 , … , n ) with w i ∈ [ 0 , 1 ] and ∑ i = 1 n w i = 1 .
Proof. 
In the following, we first prove (11), by using mathematical induction on n:
For n = 2 : Since
w 1 g ( L 1 ( p ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) { ( ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ) }
w 2 g ( L 2 ( p ) ) = g ( L 2 ( k ) ) g ( L 2 ( p ) ) { ( ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ) }
Then
w 1 g ( L 1 ( p ) ) w 2 g ( L 2 ( p ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , { ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 1 + ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 1 + p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( 1 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 . ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 1 + p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( 1 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 } = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 ) { i = 1 2 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 2 ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 2 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 2 ( 1 p i ( k ) g ( L i ( k ) ) ) w i }
If (11) holds for n = m , that is
i = 1 m w i g ( L i ( p ) ) = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , m ) { i = 1 m ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 m ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 m ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 m ( 1 p i ( k ) g ( L i ( k ) ) ) w i }
Then, when n = m + 1 , by the operations of PLTEs, we have:
i = 1 m + 1 w i g ( L i ( p ) ) = i = 1 m w i g ( L i ( p ) ) w m + 1 g ( L m + 1 ( p ) ) = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , m ) { i = 1 m ( 1 + p i ( k ) g ( L i ( k ) ) ) i = 1 m ( 1 p i ( k ) g ( L i ( k ) ) ) i = 1 m ( 1 + p i ( k ) g ( L i ( k ) ) ) + i = 1 m ( 1 p i ( k ) g ( L i ( k ) ) ) } g ( L m + 1 ( k ) ) g ( L m + 1 ( p ) ) { ( 1 + p i ( k ) g ( L i ( k ) ) ) w m + 1 ( 1 p i ( k ) g ( L i ( k ) ) ) w m + 1 ( 1 + p i ( k ) g ( L i ( k ) ) ) w m + 1 + ( 1 p i ( k ) g ( L i ( k ) ) ) w m + 1 } = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , m + 1 ) { i = 1 m + 1 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 m + 1 ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 m + 1 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 m + 1 ( 1 p i ( k ) g ( L i ( k ) ) ) w i }
i.e., (11) holds for n = m + 1 . Thus, Equation (11) holds for all n . Then
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i }
This completes the proof of Theorem 2. □
Example 1.
Considering Gou et al. [6], let L 1 ( p ) = { s 1 ( 0.3 ) , s 2 ( 0.2 ) , s 3 ( 0.5 ) } and L 2 ( p ) = { s − 1 ( 0.2 ) , s 0 ( 0.3 ) } , and let w = ( 0.5 , 0.5 ) T . After normalization we obtain L 2 ( p ) = { s − 1 ( 0.4 ) , s 0 ( 0.6 ) } . Given that g ( L i ( p ) ) = ( r i ( k ) / 2 τ + 1 / 2 ) ( p i ( k ) ) , we obtain g ( L 1 ( p ) ) = { 2 / 3 ( 0.3 ) , 5 / 6 ( 0.2 ) , 1 ( 0.5 ) } and g ( L 2 ( p ) ) = { 1 / 3 ( 0.4 ) , 1 / 2 ( 0.6 ) } .
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) ) = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 ) { i = 1 2 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 2 ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 2 ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 2 ( 1 p i ( k ) g ( L i ( k ) ) ) w i } = { ( 1 + 0.2 ) 0.5 ( 1 + 0.13333 ) 0.5 ( 1 0.2 ) 0.5 ( 1 0.13333 ) 0.5 ( 1 + 0.2 ) 0.5 ( 1 + 0.13333 ) 0.5 + ( 1 0.2 ) 0.5 ( 1 0.13333 ) 0.5 , ( 1 + 0.2 ) 0.5 ( 1 + 0.3 ) 0.5 ( 1 0.2 ) 0.5 ( 1 0.3 ) 0.5 ( 1 + 0.2 ) 0.5 ( 1 + 0.3 ) 0.5 + ( 1 0.2 ) 0.5 ( 1 0.3 ) 0.5 , ( 1 + 0.16667 ) 0.5 ( 1 + 0.13333 ) 0.5 ( 1 0.16667 ) 0.5 ( 1 0.13333 ) 0.5 ( 1 + 0.16667 ) 0.5 ( 1 + 0.13333 ) 0.5 + ( 1 0.16667 ) 0.5 ( 1 0.13333 ) 0.5 , ( 1 + 0.16667 ) 0.5 ( 1 + 0.3 ) 0.5 ( 1 0.16667 ) 0.5 ( 1 0.3 ) 0.5 ( 1 + 0.16667 ) 0.5 ( 1 + 0.3 ) 0.5 + ( 1 0.16667 ) 0.5 ( 1 0.3 ) 0.5 , ( 1 + 0.5 ) 0.5 ( 1 + 0.13333 ) 0.5 ( 1 0.5 ) 0.5 ( 1 0.13333 ) 0.5 ( 1 + 0.5 ) 0.5 ( 1 + 0.13333 ) 0.5 + ( 1 0.5 ) 0.5 ( 1 0.13333 ) 0.5 , ( 1 + 0.5 ) 0.5 ( 1 + 0.3 ) 0.5 ( 1 0.5 ) 0.5 ( 1 0.3 ) 0.5 ( 1 + 0.5 ) 0.5 ( 1 + 0.3 ) 0.5 + ( 1 0.5 ) 0.5 ( 1 0.3 ) 0.5 , } W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) ) = ( 0.1668 , 0.2506 , 0.1500 , 0.2344 , 0.3290 , 0.4048 )
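The figures in Example 1 can be reproduced with a short script. This is an illustrative Python sketch (not part of the original paper): the helper `wplea` implements the closed form of Theorem 2 for one combination of element values, and `itertools.product` enumerates the combinations in the same order as the result set above.

```python
from itertools import product

def wplea(values, weights):
    # Einstein weighted average of scalars x_i = p_i(k) * g(L_i(k)) in [0, 1]
    a = b = 1.0
    for x, w in zip(values, weights):
        a *= (1 + x) ** w
        b *= (1 - x) ** w
    return (a - b) / (a + b)

# transformed PLTSs of Example 1, stored as (g, p) pairs
g_L1 = [(2/3, 0.3), (5/6, 0.2), (1.0, 0.5)]
g_L2 = [(1/3, 0.4), (1/2, 0.6)]

results = [wplea([g1 * p1, g2 * p2], [0.5, 0.5])
           for (g1, p1), (g2, p2) in product(g_L1, g_L2)]
# each result is within 1e-4 of the paper's
# (0.1668, 0.2506, 0.1500, 0.2344, 0.3290, 0.4048)
```

Tiny last-digit deviations from the printed figures come from the rounding of 0.13333 and 0.16667 in the worked example.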
Based on Definition 11 and Theorem 2, we can deduce the following desirable properties for the WPLEA aggregation operator.
Property 5 (Idempotency).
Let g ( L i ( p ) ) ( i = 1 , 2 , , n ) be a collection of PLTSs. If all g ( L i ( p ) ) ( i = 1 , 2 , , n ) are equal, i.e., g ( L i ( p ) ) = g ( L ( p ) ) , then
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L ( p ) )
Proof. 
If g ( L i ( p ) ) = g ( L ( p ) ) for all i , then W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) is computed as follows:
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = ε i = 1 n w i g ( L i ( p ) ) = ε i = 1 n w i g ( L ( p ) ) = i = 1 n w i g ( L ( p ) ) = g ( L ( p ) )
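Idempotency can also be verified numerically. Below is a minimal Python sketch, assuming the closed form of Theorem 2: because the weights sum to one, the products collapse and identical inputs are returned unchanged.

```python
def wplea(values, weights):
    # Einstein weighted average of scalars x_i in [0, 1]
    a = b = 1.0
    for x, w in zip(values, weights):
        a *= (1 + x) ** w
        b *= (1 - x) ** w
    return (a - b) / (a + b)

# with all inputs equal to x, the products reduce to (1 + x) and (1 - x),
# so the result is 2x / 2 = x
x = 0.35
result = wplea([x, x, x], [0.2, 0.3, 0.5])  # returns 0.35 (up to rounding)
```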
Property 6 (Boundedness).
Let g ( L i ( p ) ) ( i = 1 , 2 , , n ) be a collection of PLTSs, then we have
min i = 1 n min k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) ≤ g ( L ) ≤ max i = 1 n max k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) )
where g ( L ) ∈ W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) .
Proof. 
Let f ( x ) = ( 1 − x ) / ( 1 + x ) , 0 ≤ x ≤ 1 , and suppose that x 1 ≤ x 2 ; then f ( x 1 ) − f ( x 2 ) = ( 1 − x 1 ) / ( 1 + x 1 ) − ( 1 − x 2 ) / ( 1 + x 2 ) = 2 ( x 2 − x 1 ) / ( ( 1 + x 1 ) ( 1 + x 2 ) ) ≥ 0 , and thus the function f ( x ) is decreasing.
Because min i = 1 n min k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) ≤ p i ( k ) g ( L i ( k ) ) ≤ max i = 1 n max k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) for every g ( L i ( k ) ) ∈ g ( L i ( p ) ) , we have
( 1 − max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) / ( 1 + max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) ≤ ( 1 − p i ( k ) g ( L i ( k ) ) ) / ( 1 + p i ( k ) g ( L i ( k ) ) ) ≤ ( 1 − min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) / ( 1 + min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) )
where ( i = 1 , 2 , … , n ) and ( k = 1 , 2 , … , # g ( L i ( p ) ) ) . Then, for w i ≥ 0 , we have
( 1 max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 1 + max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) w i ( 1 p i ( k ) g ( L i ( k ) ) 1 + p i ( k ) g ( L i ( k ) ) ) w i ( 1 min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 1 + min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) w i ( i = 1 , 2 , , n ) g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 1 + max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) w i } g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) 1 + p i ( k ) g ( L i ( k ) ) ) w i } g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 1 + min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) w i } ( 2 1 + max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) 1 + g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) 1 + p i ( k ) g ( L i ( k ) ) ) w i } ( 2 1 + min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 2 1 + g ( L i ( k ) ) g ( L i ( p ) ) { i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) 1 + p i ( k ) g ( L i ( k ) ) ) w i } 1 max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) min i = 1 n min i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) g ( L i ( k ) ) g ( L i ( p ) ) i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i max i = 1 n max i = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) .
Hence, the proof of Property 6 is completed. □
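A quick numerical check of boundedness on the data of Example 1 (an illustrative Python sketch, not part of the original paper): every aggregated value stays between the smallest and the largest p i ( k ) g ( L i ( k ) ).

```python
from itertools import product

def wplea(values, weights):
    # Einstein weighted average of scalars x_i in [0, 1]
    a = b = 1.0
    for x, w in zip(values, weights):
        a *= (1 + x) ** w
        b *= (1 - x) ** w
    return (a - b) / (a + b)

# transformed PLTSs of Example 1, stored as (g, p) pairs
g_L1 = [(2/3, 0.3), (5/6, 0.2), (1.0, 0.5)]
g_L2 = [(1/3, 0.4), (1/2, 0.6)]

values = [g * p for g, p in g_L1 + g_L2]
lo, hi = min(values), max(values)

results = [wplea([g1 * p1, g2 * p2], [0.5, 0.5])
           for (g1, p1), (g2, p2) in product(g_L1, g_L2)]
bounded = all(lo <= r <= hi for r in results)  # True
```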
Property 7 (Monotonicity).
Let g ( L i ( p ) ) and g ( L i ′ ( p ) ) be two collections of PLTSs such that the numbers of linguistic terms in g ( L i ( p ) ) and g ( L i ′ ( p ) ) are identical ( i = 1 , 2 , … , n ) . Assume that w = ( w 1 , w 2 , … , w n ) T is an associated weighting vector with w i ∈ [ 0 , 1 ] and ∑ i = 1 n w i = 1 . If g ( L i ( k ) ) ( p i ( k ) ) ≤ g ( L i ′ ( k ) ) ( p i ′ ( k ) ) for all i , i.e., g ( L i ( p ) ) ≤ g ( L i ′ ( p ) ) , then
W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) ≤ W P L E A ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) .
Proof. 
Let W P L E A ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = g ( L ( p ) ) and W P L E A ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) = g ( L ′ ( p ) ) . Let f ( x ) = ( 1 + x ) / ( 1 − x ) , x ∈ [ 0 , 1 ) ; then f is an increasing function. If g ( L i ( p ) ) ≤ g ( L i ′ ( p ) ) for all i , then f ( g ( L i ( p ) ) ) ≤ f ( g ( L i ′ ( p ) ) ) , i.e., ( 1 + g ( L i ( p ) ) ) / ( 1 − g ( L i ( p ) ) ) ≤ ( 1 + g ( L i ′ ( p ) ) ) / ( 1 − g ( L i ′ ( p ) ) ) for all i . Therefore, we have:
g ( L ( p ) ) = g ( L i ( k ) ) g ( L i ( P ) ) ( i = 1 , 2 , , m ) { j = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i j = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i j = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + j = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i } = 1 2 1 + g ( L i ( k ) ) g ( L i ( P ) ) ( i = 1 , 2 , , m ) { i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) 1 p i ( k ) g ( L i ( k ) ) ) w i } 1 2 1 + g ( L i ( k ) ) g ( L i ( P ) ) ( i = 1 , 2 , , m ) { i = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) 1 p i ( k ) g ( L i ( k ) ) ) w i } = g ( L i ( k ) ) g ( L i ( P ) ) ( i = 1 , 2 , , m ) { j = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i j = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i j = 1 n ( 1 + p i ( k ) g ( L i ( k ) ) ) w i + j = 1 n ( 1 p i ( k ) g ( L i ( k ) ) ) w i } = g ( L ( p ) )
Therefore, we complete the proof of Property 7. □

4.2. Probabilistic Linguistic Einstein Geometric (PLEG) Aggregation Operators

In this section, we explore Einstein geometric aggregation under the probabilistic linguistic environment. We propose the probabilistic linguistic Einstein geometric (PLEG) operator and the weighted probabilistic linguistic Einstein geometric (WPLEG) operator.

4.2.1. PLEG

On the basis of the operational laws (4) of Definition 10 and (5) of Definition 8, we present the definition of the PLEG aggregation operator as follows:
Definition 12.
Let L i ( p ) = { L i ( k ) ( p i ( k ) ) / k = 1 , 2 , … , # L i ( p ) } ( i = 1 , 2 , … , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation. A probabilistic linguistic Einstein geometric (PLEG) operator is a mapping g ( L n ( p ) ) → g ( L ( p ) ) such that
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = ⊗ ε i = 1 n ( g ( L i ( p ) ) ) ( 1 / n )
Theorem 3.
Let L i ( p ) = { L i ( k ) ( p i ( k ) ) / k = 1 , 2 , … , # L i ( p ) } ( i = 1 , 2 , … , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation; then their aggregated value obtained by using the PLEG operator is also a PLTE and
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , , g ( L n ( k ) ) g ( L n ( p ) ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) 1 n i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) ) 1 n + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) 1 n }
where g ( L i ( k ) ) = ( r i ( k ) / 2 τ + 1 / 2 ) .
Proof. 
We prove (13) by using mathematical induction on n. For n = 2 , according to the operational law (4) of Definition 10, we have
g ( L 1 ( p ) ) ( 1 n ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) { ( p 1 ( k ) g ( L 1 ( k ) ) ) 1 n ( 2 p 1 ( k ) g ( L 1 ( k ) ) ) 1 n + ( p 1 ( k ) g ( L 1 ( k ) ) ) 1 n }
g ( L 2 ( p ) ) ( 1 n ) = g ( L 2 ( k ) ) g ( L 2 ( p ) ) { ( p 2 ( k ) g ( L 2 ( k ) ) ) 1 n ( 2 p 2 ( k ) g ( L 2 ( k ) ) ) 1 n + ( p 2 ( k ) g ( L 2 ( k ) ) ) 1 n }
Then
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) ) = g ( L 1 ( p ) ) ( 1 n ) ε g ( L 2 ( p ) ) ( 1 n ) = 2 ( p 1 ( k ) g ( L 1 ( k ) ) ) 1 n ( p 2 ( k ) g ( L 2 ( k ) ) ) 1 n ( 2 p 1 ( k ) g ( L 1 ( k ) ) ) 1 n ( 2 p 2 ( k ) g ( L 2 ( k ) ) ) 1 n + ( p 1 ( k ) g ( L 1 ( k ) ) ) 1 n ( p 2 ( k ) g ( L 2 ( k ) ) ) 1 n
That is, Equation (13) holds for n = 2 . Suppose that Equation (13) holds for n = m , i.e.,
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L m ( p ) ) ) = 2 i = 1 m ( p i ( k ) g ( L i ( k ) ) ) 1 n i = 1 m ( 2 p i ( k ) g ( L i ( k ) ) ) 1 n + i = 1 m ( p i ( k ) g ( L i ( k ) ) ) 1 n
Then, for n = m + 1 , based on Definition 12 and the operational laws of Definition 10, we have
ε i = 1 m + 1 ( g ( L i ( p ) ) ) 1 m + 1 = ε i = 1 m ( g ( L i ( p ) ) ) 1 m ε ( g ( L m + 1 ( p ) ) ) 1 m + 1 = ( 2 i = 1 m ( p i ( k ) g ( L i ( k ) ) ) 1 m i = 1 m ( 2 p i ( k ) g ( L i ( k ) ) ) 1 m + i = 1 m ( p i ( k ) g ( L i ( k ) ) ) 1 m ) ε ( 2 ( p i ( k ) g ( L m + 1 ( k ) ) ) 1 m + 1 ( 2 p i ( k ) g ( L m + 1 ( k ) ) ) 1 m + 1 + ( p i ( k ) g ( L m + 1 ( k ) ) ) 1 m + 1 ) = g ( L i ( k ) ) g ( L i ( p ) ) i = 1 , 2 , , n ( 2 i = 1 n ( p ( k ) g ( L i ( k ) ) ) 1 n i = 1 n ( 2 p ( k ) g ( L i ( k ) ) ) 1 n + i = 1 n ( p ( k ) g ( L i ( k ) ) ) 1 n )
Thus, the proof of Equation (13) is completed. □
For example, considering operational laws (2) and (4) and the information provided by Gou et al. [6], we obtain:
g ( L 1 ( p ) ) 1 2 ε g ( L 2 ( p ) ) 1 2 = { ( 0.2 ) 1 2 ( 2 0.2 ) 1 2 + ( 0.2 ) 1 2 ( 0.3 ) 1 2 ( 2 0.3 ) 1 2 + ( 0.3 ) 1 2 1 + ( 1 ( 0.2 ) 1 2 ( 2 0.2 ) 1 2 + ( 0.2 ) 1 2 ) ( 1 ( 0.3 ) 1 2 ( 2 0.3 ) 1 2 + ( 0.3 ) 1 2 ) , , , .... }
g ( L 1 ( p ) ) 1 / 2 ⊗ ε g ( L 2 ( p ) ) 1 / 2 = { 0.1636 , 0.2456 , 0.1491 , 0.2248 , 0.2673 , 0.3904 }
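These values can be checked directly. Below is an illustrative Python sketch of the PLEG closed form of Theorem 3, applied to the first combination 0.2 = (2/3)(0.3) and 0.13333 ≈ (1/3)(0.4):

```python
def pleg(values):
    # Einstein geometric mean (closed form of Theorem 3) for one
    # combination of element values x_i = p_i(k) * g(L_i(k)) in (0, 1]
    n = len(values)
    num = den = 1.0
    for x in values:
        num *= x ** (1.0 / n)
        den *= (2 - x) ** (1.0 / n)
    return 2 * num / (den + num)

value = pleg([0.2, 0.13333])  # ~ 0.1636, the first element of the set above
```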
Now, on the basis of Definition 12 and Theorem 3, it can be proved that the PLEG operator has the following properties.
Property 8 (Commutativity).
Let L i ( p ) ( i = 1 , 2 , … , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation. Let ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) be any permutation of ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) ; then
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = P L E G ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) )
Proof. 
Let
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = g ( L 1 ( p ) ) ( 1 / n ) ⊗ ε g ( L 2 ( p ) ) ( 1 / n ) ⊗ ε ⋯ ⊗ ε g ( L n ( p ) ) ( 1 / n )
P L E G ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) = g ( L 1 ′ ( p ) ) ( 1 / n ) ⊗ ε g ( L 2 ′ ( p ) ) ( 1 / n ) ⊗ ε ⋯ ⊗ ε g ( L n ′ ( p ) ) ( 1 / n )
Since ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) is any permutation of ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) , the two expressions contain the same factors and ⊗ ε is commutative, and then
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = P L E G ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) . □
Property 9 (Boundedness).
Let L i ( p ) ( i = 1 , 2 , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation, then we have:
min i = 1 n min k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) ≤ g ( L ) ≤ max i = 1 n max k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) )
The proof of Property 9 is similar to that of Property 6, so we omit it here.
Property 10 (Idempotency).
Let L i ( p ) ( i = 1 , 2 , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation. If all g ( L i ( p ) ) ( i = 1 , 2 , , n ) are equal, i.e., g ( L i ( p ) ) = g ( L ( p ) ) , then,
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L ( p ) )
The proof of this property is similar to that of Property 5, so we omit it here.

4.2.2. WPLEG

Considering the importance of the aggregated arguments, we extend the PLEG and give the definition of the weighted probabilistic linguistic Einstein geometric (WPLEG) operator as follows.
Definition 13.
Let L i ( p ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation, and let w = ( w 1 , w 2 , … , w n ) T denote the weighting vector of L i ( p ) , with w i ∈ [ 0 , 1 ] and ∑ i = 1 n w i = 1 . The weighted probabilistic linguistic Einstein geometric (WPLEG) operator is then defined as follows:
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = ⊗ ε i = 1 n ( g ( L i ( p ) ) ) w i
Based on the operations of the PLTSs described in Proposition 2 and Definition 13, we can derive the following theorem:
Theorem 4.
Let L i ( p ) ( i = 1 , 2 , … , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation; then their aggregated value obtained by using the WPLEG operator is also a PLTE, and:
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , , g ( L n ( k ) ) g ( L n ( p ) ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i }
where w = ( w 1 , w 2 , … , w n ) T is the weight vector of L i ( p ) ( i = 1 , 2 , … , n ) with w i > 0 and ∑ i = 1 n w i = 1 . In particular, when w i = 1 / n ( i = 1 , 2 , … , n ) , the WPLEG operator reduces to the probabilistic linguistic Einstein geometric (PLEG) operator:
P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , , g ( L n ( k ) ) g ( L n ( p ) ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) 1 n i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) ) 1 n + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) 1 n }
Proof. 
In the following, we first prove (15) by using mathematical induction on n .
For n = 2 : Since [ g ( L 1 ( p ) ) ] w 1 = g ( L 1 ( k ) ) g ( L 1 ( p ) ) { 2 ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 2 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 } and [ g ( L 2 ( p ) ) ] w 2 = g ( L 2 ( k ) ) g ( L 2 ( p ) ) { 2 ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 2 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 } , then
[ g ( L 1 ( p ) ) ] w 1 [ g ( L 2 ( p ) ) ] w 2 = g ( L 1 ( k ) ) g ( L 1 ( p ) ) g ( L 2 ( k ) ) g ( L 2 ( p ) ) , { 2 ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 2 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 · 2 ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 2 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 1 + ( 1 2 ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ( 2 p 1 ( k ) g ( L 1 ( k ) ) ) w 1 + ( p 1 ( k ) g ( L 1 ( k ) ) ) w 1 ) · ( 1 2 ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ( 2 p 2 ( k ) g ( L 2 ( k ) ) ) w 2 + ( p 2 ( k ) g ( L 2 ( k ) ) ) w 2 ) } = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , { 2 i = 1 2 ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 2 ( 2 p i ( k ) g ( L i ( k ) ) ) w i + i = 1 2 ( p i ( k ) g ( L i ( k ) ) ) w i }
If (15) holds for n = m , that is
⊗ ε i = 1 m g ( L i ( p ) ) w i = g ( L 1 ( k ) ) ∈ g ( L 1 ( p ) ) , g ( L 2 ( k ) ) ∈ g ( L 2 ( p ) ) , … , g ( L m ( k ) ) ∈ g ( L m ( p ) ) { 2 i = 1 m ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 m ( 2 − p i ( k ) g ( L i ( k ) ) ) w i + i = 1 m ( p i ( k ) g ( L i ( k ) ) ) w i }
then, when n = m + 1 , by the operations of PLTEs, we have
⊗ ε i = 1 m + 1 g ( L i ( p ) ) w i = ⊗ ε i = 1 m g ( L i ( p ) ) w i ⊗ ε g ( L m + 1 ( p ) ) w m + 1 = g ( L 1 ( k ) ) ∈ g ( L 1 ( p ) ) , g ( L 2 ( k ) ) ∈ g ( L 2 ( p ) ) , … , g ( L m ( k ) ) ∈ g ( L m ( p ) ) { 2 i = 1 m ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 m ( 2 − p i ( k ) g ( L i ( k ) ) ) w i + i = 1 m ( p i ( k ) g ( L i ( k ) ) ) w i } ⊗ ε g ( L m + 1 ( k ) ) ∈ g ( L m + 1 ( p ) ) { 2 ( p m + 1 ( k ) g ( L m + 1 ( k ) ) ) w m + 1 ( 2 − p m + 1 ( k ) g ( L m + 1 ( k ) ) ) w m + 1 + ( p m + 1 ( k ) g ( L m + 1 ( k ) ) ) w m + 1 } = g ( L 1 ( k ) ) ∈ g ( L 1 ( p ) ) , g ( L 2 ( k ) ) ∈ g ( L 2 ( p ) ) , … , g ( L m + 1 ( k ) ) ∈ g ( L m + 1 ( p ) ) { 2 i = 1 m + 1 ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 m + 1 ( 2 − p i ( k ) g ( L i ( k ) ) ) w i + i = 1 m + 1 ( p i ( k ) g ( L i ( k ) ) ) w i }
i.e., Equation (15) holds for n = m + 1 . Thus, Equation (15) holds for all n . Then
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L 1 ( k ) ) g ( L 1 ( p ) ) , g ( L 2 ( k ) ) g ( L 2 ( p ) ) , , g ( L n ( k ) ) g ( L n ( p ) ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i }
which completes the proof of (15). □
Example 2.
Considering Gou et al. [6], let L 1 ( p ) = { s 1 ( 0.3 ) , s 2 ( 0.2 ) , s 3 ( 0.5 ) } and L 2 ( p ) = { s − 1 ( 0.2 ) , s 0 ( 0.3 ) } , and let w = ( 0.5 , 0.5 ) T . After normalization we obtain L 2 ( p ) = { s − 1 ( 0.4 ) , s 0 ( 0.6 ) } . Given that g ( L i ( p ) ) = ( r i ( k ) / 2 τ + 1 / 2 ) ( p i ( k ) ) , we obtain g ( L 1 ( p ) ) = { 2 / 3 ( 0.3 ) , 5 / 6 ( 0.2 ) , 1 ( 0.5 ) } and g ( L 2 ( p ) ) = { 1 / 3 ( 0.4 ) , 1 / 2 ( 0.6 ) } .
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) ) = { 2 ( 0.2 ) 0.5 ( 0.13333 ) 0.5 ( 2 0.2 ) 0.5 ( 2 0.13333 ) 0.5 + ( 0.2 ) 0.5 ( 0.13333 ) 0.5 , 2 ( 0.2 ) 0.5 ( 0.3 ) 0.5 ( 2 0.2 ) 0.5 ( 2 0.3 ) 0.5 + ( 0.2 ) 0.5 ( 0.3 ) 0.5 , 2 ( 0.16667 ) 0.5 ( 0.13333 ) 0.5 ( 2 0.16667 ) 0.5 ( 2 0.13333 ) 0.5 + ( 0.16667 ) 0.5 ( 0.13333 ) 0.5 , 2 ( 0.16667 ) 0.5 ( 0.3 ) 0.5 ( 2 0.16667 ) 0.5 ( 2 0.3 ) 0.5 + ( 0.16667 ) 0.5 ( 0.3 ) 0.5 , 2 ( 0.5 ) 0.5 ( 0.13333 ) 0.5 ( 2 0.5 ) 0.5 ( 2 0.13333 ) 0.5 + ( 0.5 ) 0.5 ( 0.13333 ) 0.5 , 2 ( 0.5 ) 0.5 ( 0.3 ) 0.5 ( 2 0.5 ) 0.5 ( 2 0.3 ) 0.5 + ( 0.5 ) 0.5 ( 0.3 ) 0.5 , }
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) ) = { 0.1636 , 0.2456 , 0.1491 , 0.2248 , 0.2673 , 0.3904 }
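Example 2 can be reproduced in the same way as Example 1. This is an illustrative Python sketch of the WPLEG closed form of Theorem 4; `itertools.product` enumerates one aggregated value per combination of elements.

```python
from itertools import product

def wpleg(values, weights):
    # Einstein weighted geometric mean of scalars x_i = p_i(k) * g(L_i(k))
    num = den = 1.0
    for x, w in zip(values, weights):
        num *= x ** w
        den *= (2 - x) ** w
    return 2 * num / (den + num)

# transformed PLTSs of Example 2, stored as (g, p) pairs
g_L1 = [(2/3, 0.3), (5/6, 0.2), (1.0, 0.5)]
g_L2 = [(1/3, 0.4), (1/2, 0.6)]

results = [wpleg([g1 * p1, g2 * p2], (0.5, 0.5))
           for (g1, p1), (g2, p2) in product(g_L1, g_L2)]
# each result is within 1e-4 of the paper's
# {0.1636, 0.2456, 0.1491, 0.2248, 0.2673, 0.3904}
```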
Now, on the basis of Definition 13 and Theorem 4, it can be demonstrated that the WPLEG operator has the following properties.
Property 11 (Boundedness).
Let L i ( p ) ( i = 1 , 2 , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation, then we have:
min i = 1 n min k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) ≤ g ( L ) ≤ max i = 1 n max k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) )
Proof. 
Let g ( x ) = ( 2 − x ) / x = 2 / x − 1 , 0 < x < 1 . Since g ′ ( x ) = − 2 / x 2 < 0 , the function g ( x ) is decreasing. Because min i = 1 n min k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) ≤ p i ( k ) g ( L i ( k ) ) ≤ max i = 1 n max k = 1 # g ( L i ( p ) ) p i ( k ) g ( L i ( k ) ) for every g ( L i ( k ) ) ∈ g ( L i ( p ) ) .
Then ( 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ( 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) and then
[ ( 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] w i [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ] w i [ ( 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] w i
g ( L i ( k ) ) ∈ g ( L i ( p ) )   ( i = 1 , 2 , … , n )
Thus
i = 1 n [ ( 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] w i i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ] w i i = 1 n [ ( 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] w i [ ( 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) p i ( k ) ] i = 1 n w i g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) p i ( k ) ) p i ( k ) g ( L i ( k ) ) p i ( k ) ] w i } [ ( 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) p i ( k ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) p i ( k ) ] i = 1 n w i [ ( 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ] w i [ ( 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) ] 2 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ] w i + 1 } 2 min k = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 2 1 g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) p i ( k ) ] w i } + 1 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 2 min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 2 g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n [ ( 2 p i ( k ) g ( L i ( k ) ) ) p i ( k ) g ( L i ( k ) ) ] w i } + 1 max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) min i = 1 n min k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) ) 
g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { 2 i = 1 n p i ( k ) g ( L i ( k ) ) i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) p i ( k ) ) + i = 1 n p i ( k ) g ( L i ( k ) ) } max i = 1 n max k = 1 # L i ( p ) p i ( k ) g ( L i ( k ) )
Hence, the statement of Property 11 is true. □
Property 12 (Idempotency).
Let L i ( p ) ( i = 1 , 2 , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation. If all g ( L i ( p ) ) ( i = 1 , 2 , , n ) are equal, i.e., g ( L i ( p ) ) = g ( L ( p ) ) , then W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = g ( L ( p ) ) .
Proof. 
If g ( L i ( p ) ) = g ( L ( p ) ) for all i , then W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) is computed as follows:
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , , g ( L n ( p ) ) ) = ε i = 1 n g ( L i ( p ) ) ε w i = ε i = 1 n g ( L ( p ) ) ε w i = g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 2 p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i } = g ( L ( k ) ) g ( L ( p ) ) ( i = 1 , 2 , , n ) { 2 i = 1 n ( p ( k ) g ( L ( k ) ) ) w i i = 1 n ( 2 p ( k ) g ( L ( k ) ) ) w i + i = 1 n ( p ( k ) g ( L ( k ) ) ) w i } = g ( L ( k ) ) g ( L ( p ) ) ( i = 1 , 2 , , n ) { 2 ( p ( k ) g ( L ( k ) ) ) i = 1 n w i ( 2 p ( k ) g ( L ( k ) ) ) i = 1 n w i + ( p ( k ) g ( L ( k ) ) ) i = 1 n w i } = g ( L ( k ) ) g ( L ( p ) ) ( i = 1 , 2 , , n ) { p ( k ) g ( L ( k ) ) } = g ( L ( p ) )
Therefore, we complete the proof of Property 12. □
Property 13 (Commutativity).
Let L i ( p ) ( i = 1 , 2 , … , n ) be a collection of PLTSs and g ( L i ( p ) ) its equivalent transformation. Let ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) be any permutation of ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) ; then
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) = W P L E G ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) )
Proof. 
Because ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) ) is any permutation of ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) , we have ( p i ( k ) g ( L i ( k ) ) ) w i = ( p i ′ ( k ) g ( L i ′ ( k ) ) ) w i up to the order of the factors, so that
2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i = 2 i = 1 n ( p i ′ ( k ) g ( L i ′ ( k ) ) ) w i
i = 1 n ( 2 − p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i = i = 1 n ( 2 − p i ′ ( k ) g ( L i ′ ( k ) ) ) w i + i = 1 n ( p i ′ ( k ) g ( L i ′ ( k ) ) ) w i
Then
g ( L i ( k ) ) ∈ g ( L i ( p ) ) ( i = 1 , 2 , … , n ) { 2 i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i i = 1 n ( 2 − p i ( k ) g ( L i ( k ) ) ) w i + i = 1 n ( p i ( k ) g ( L i ( k ) ) ) w i } = g ( L i ′ ( k ) ) ∈ g ( L i ′ ( p ) ) ( i = 1 , 2 , … , n ) { 2 i = 1 n ( p i ′ ( k ) g ( L i ′ ( k ) ) ) w i i = 1 n ( 2 − p i ′ ( k ) g ( L i ′ ( k ) ) ) w i + i = 1 n ( p i ′ ( k ) g ( L i ′ ( k ) ) ) w i }
where g ( L i ( p ) ) = p i ( k ) g ( L i ( k ) ) and g ( L i ′ ( p ) ) = p i ′ ( k ) g ( L i ′ ( k ) ) . Thus, the proof of Property 13 is completed. □
Property 14 (Monotonicity).
Let g ( L i ( p ) ) and g ( L i ′ ( p ) ) be two sets of PLTSs such that the numbers of linguistic terms in g ( L i ( p ) ) and g ( L i ′ ( p ) ) are identical ( i = 1 , 2 , … , n ) . If p i ( k ) g ( L i ( k ) ) ≤ p i ′ ( k ) g ( L i ′ ( k ) ) for all i , i.e., g ( L i ( p ) ) ≤ g ( L i ′ ( p ) ) , then
W P L E G ( g ( L 1 ( p ) ) , g ( L 2 ( p ) ) , … , g ( L n ( p ) ) ) ≤ W P L E G ( g ( L 1 ′ ( p ) ) , g ( L 2 ′ ( p ) ) , … , g ( L n ′ ( p ) ) )
Proof. 
Let f ( x ) = ( 1 − x ) / ( 1 + x ) , x ∈ [ 0 , 1 ] , and suppose that x 1 ≤ x 2 ; then f ( x 1 ) − f ( x 2 ) = ( 1 − x 1 ) / ( 1 + x 1 ) − ( 1 − x 2 ) / ( 1 + x 2 ) = 2 ( x 2 − x 1 ) / ( ( 1 + x 1 ) ( 1 + x 2 ) ) ≥ 0 , so f ( x ) is a decreasing function. If p i ( k ) g ( L i ( k ) ) ≤ p i ′ ( k ) g ( L i ′ ( k ) ) for all i , then 1 − p i ( k ) g ( L i ( k ) ) ≥ 1 − p i ′ ( k ) g ( L i ′ ( k ) ) , i.e., ( 1 − p i ( k ) g ( L i ( k ) ) ) / ( 1 + p i ( k ) g ( L i ( k ) ) ) ≥ ( 1 − p i ′ ( k ) g ( L i ′ ( k ) ) ) / ( 1 + p i ′ ( k ) g ( L i ′ ( k ) ) ) ( i = 1 , 2 , … , n ) . Let w = ( w 1 , w 2 , … , w n ) T be the weight vector of g ( L i ( p ) ) ( i = 1 , 2 , … , n ) such that w i ∈ [ 0 , 1 ] and ∑ i = 1 n w i = 1 ; then we have:
( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i ,   i = 1 , 2 , , n .
Thus,
g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i }    i = 1 , 2 , , n 1 + g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } + 1 1 1 + g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } 1 g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } + 1 2 1 + g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } 2 g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } + 1 2 1 + g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } 1 2 g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) 1 + p i ( k ) g ( L i ( p ) ) ) w i } + 1 1
i.e.,
g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 + p i ( k ) g ( L i ( p ) ) ) i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) ) i = 1 n ( 1 + p i ( k ) g ( L i ( p ) ) ) + i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) ) } g ( L i ( k ) ) g ( L i ( p ) ) ( i = 1 , 2 , , n ) { i = 1 n ( 1 + p i ( k ) g ( L i ( p ) ) ) i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) ) i = 1 n ( 1 + p i ( k ) g ( L i ( p ) ) ) + i = 1 n ( 1 p i ( k ) g ( L i ( p ) ) ) }
Hence, the statement of Property 14 is true. □
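Monotonicity can be illustrated numerically as well (a minimal Python sketch, assuming the WPLEG closed form of Theorem 4): raising any element value p i ( k ) g ( L i ( k ) ) cannot decrease the aggregated value.

```python
def wpleg(values, weights):
    # Einstein weighted geometric mean of scalars x_i in (0, 1]
    num = den = 1.0
    for x, w in zip(values, weights):
        num *= x ** w
        den *= (2 - x) ** w
    return 2 * num / (den + num)

w = (0.4, 0.6)
smaller = wpleg([0.20, 0.30], w)
larger = wpleg([0.25, 0.35], w)  # every input increased, so the result grows
```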

5. Probabilistic Linguistic Einstein Aggregation Operators and Their Approaches to Multi-Criteria Group Decision Making

In this section, we present a MCGDM problem where the evaluation information is expressed by the transformed PLTSs. We then make use of the WPLEA or WPLEG operators to support the decision.
Let X = { x 1 , x 2 , … , x m } be a finite set of m alternatives and C = { c 1 , c 2 , … , c n } be a set of n attributes. Suppose that D = { d 1 , d 2 , … , d e } denotes the set of DMs. By using the linguistic scale S = { S α / α = − τ , … , − 1 , 0 , 1 , … , τ } , each DM d q provides his or her linguistic evaluations over the alternative x i with respect to the attribute c j , i.e., A q = ( L q i j ) m × n ( i = 1 , 2 , … , m ; j = 1 , 2 , … , n ; q = 1 , 2 , … , e ) .
Then, we determine the collective evaluations of DMs for each alternative in terms of PLTEs.
In the context of GDM, the linguistic evaluation values g(L_ij^(k)) (k = 1, 2, …, #g(L_ij(p))) with the corresponding probabilities p_ij^(k) are described by the PLTS g(L_ij(p)) = { g(L_ij^(k))(p_ij^(k)) | k = 1, 2, …, #g(L_ij(p)) }, where #g(L_ij(p)) is the number of linguistic terms in g(L_ij(p)). The PLTSs L_ij(p) denote the evaluation values of the alternatives x_i (i = 1, 2, …, m) with respect to the attributes c_j (j = 1, 2, …, n), where L_ij^(k) is the k-th value of L_ij(p) and p_ij^(k) is the probability of L_ij^(k) (k = 1, 2, …, #L_ij(p)). In this case, p_ij^(k) > 0 and Σ_{k=1}^{#L_ij(p)} p_ij^(k) = 1. All the PLTSs are collected in the probabilistic linguistic decision matrix R, shown as follows:
R = [g(L_ij(p))]_{m×n} =
( g(L_11(p))  g(L_12(p))  …  g(L_1n(p))
  g(L_21(p))  g(L_22(p))  …  g(L_2n(p))
  ⋮           ⋮              ⋮
  g(L_m1(p))  g(L_m2(p))  …  g(L_mn(p)) )
Without loss of generality, we assume that each transformed PLTS g(L_ij(p)) is an ordered transformed PLTS. Let w = (w_1, w_2, …, w_n)^T denote the weighting vector of the attributes C, with w_j ∈ [0, 1] and Σ_{j=1}^{n} w_j = 1. Based on the above results, we use the WPLEA or WPLEG aggregation operator to develop the corresponding approach for MCGDM with probabilistic linguistic information. The approach begins with the determination of the objective weights.
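Before giving the weight-determination and aggregation steps, it may help to recall the Einstein t-norm and t-conorm on which all of the operators below are built. A minimal sketch of these two standard formulas (the function names are ours):

```python
def einstein_product(a: float, b: float) -> float:
    """Einstein t-norm: T(a, b) = ab / (1 + (1 - a)(1 - b)) for a, b in [0, 1]."""
    return (a * b) / (1 + (1 - a) * (1 - b))

def einstein_sum(a: float, b: float) -> float:
    """Einstein t-conorm: S(a, b) = (a + b) / (1 + ab) for a, b in [0, 1]."""
    return (a + b) / (1 + a * b)

# The t-conorm reinforces its inputs upwardly and the t-norm downwardly:
# einstein_sum(0.5, 0.5) = 0.8, while einstein_product(0.5, 0.5) = 0.2.
```

Both operations stay within [0, 1] for inputs in [0, 1], which is why Step 3 of the approach below first transforms the PLTEs into that interval.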

5.1. The Determination of the Objective Weights Based on Entropy Measures

The entropy method originates in thermodynamics; Shannon first introduced entropy into information theory, and it is now widely used in engineering, socio-economic, and other fields [22]. Entropy measures are useful for computing the weights of the criteria and have been widely used in MCGDM problems [23]. The Shannon entropy method is one technique for determining criteria weights when they are difficult for the decision-maker to provide [24]. By computing the information entropy of a criterion, its weight is determined according to its relative degree of change: a criterion whose values vary more across the alternatives receives a larger weight. For example, Tian et al. [25] introduced two optimization models for determining the criterion weights in multi-criteria decision-making situations. In our study, emphasis is laid on determining reasonable weights of the criteria. This is necessary because the DMs are often influenced by their knowledge structure, personal bias, and familiarity with the decision alternatives. Consequently, we consider the MADM problem with completely unknown criteria weights and establish a weight determination method based on the entropy technique under the probabilistic linguistic environment.
Let L_i(p) = { L_i^(k)(p_i^(k)) | k = 1, 2, …, #L_i(p) } (i = 1, 2, …, n) be a PLTE and g(L_i(p)) its equivalent transformation. The steps for determining the weights are as follows:
Step 1: Calculate the score matrix E ( g ( L i j ( p ) ) ) m × n of R = g ( L i j ( p ) ) m × n
E ( g ( L i j ( p ) ) ) m × n = ( E ( g ( L 11 ( p ) ) ) E ( g ( L 12 ( p ) ) ) E ( g ( L 1 n ( p ) ) ) E ( g ( L 21 ( p ) ) ) E ( g ( L 22 ( p ) ) ) E ( g ( L 2 n ( p ) ) ) E ( g ( L m 1 ( p ) ) ) E ( g ( L m 2 ( p ) ) ) E ( g ( L m n ( p ) ) ) ) m × n
Step 2: Normalize the score matrix E ( g ( L i j ( p ) ) ) m × n as follows:
R ¯ = g ( L i j ( p ) ¯ ) m × n = ( g ( L 11 ( p ) ¯ ) g ( L 12 ( p ) ¯ ) g ( L 1 n ( p ) ¯ ) g ( L 21 ( p ) ¯ ) g ( L 22 ( p ) ¯ ) g ( L 2 n ( p ) ¯ ) g ( L m 1 ( p ) ¯ ) g ( L m 2 ( p ) ¯ ) g ( L m n ( p ) ¯ ) )
where g(L_ij(p)¯) = g(L_ij(p)) / Σ_{i=1}^{m} g(L_ij(p)) (i = 1, 2, …, m; j = 1, 2, …, n).
Step 3: Determine the attribute weights.
Let E_j = −(1 / ln m) Σ_{i=1}^{m} g(L_ij(p)¯) · ln g(L_ij(p)¯) (j = 1, 2, …, n).
The attribute weight w j ( j = 1 , , n ) is determined by
w_j = (1 − E_j) / Σ_{j=1}^{n} (1 − E_j)
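Steps 1–3 above can be sketched as follows. This is a minimal sketch under our own conventions: the hypothetical helper `entropy_weights` takes the m × n score matrix of Step 1 as a nested list, and the convention 0 · ln 0 = 0 is assumed for zero entries.

```python
import math

def entropy_weights(scores):
    """Objective criteria weights from a score matrix (Steps 1-3 above).

    scores: m x n matrix of positive scores E(g(L_ij(p))), one row per
    alternative and one column per criterion.
    """
    m, n = len(scores), len(scores[0])
    # Step 2: normalize each column so that it sums to 1
    col_sum = [sum(scores[i][j] for i in range(m)) for j in range(n)]
    norm = [[scores[i][j] / col_sum[j] for j in range(n)] for i in range(m)]
    # Step 3: Shannon entropy per criterion (0 * ln 0 is taken as 0)
    E = []
    for j in range(n):
        e = -sum(norm[i][j] * math.log(norm[i][j])
                 for i in range(m) if norm[i][j] > 0) / math.log(m)
        E.append(e)
    total = sum(1 - e for e in E)
    return [(1 - e) / total for e in E]  # w_j = (1 - E_j) / sum(1 - E_j)
```

A criterion whose normalized scores are identical across alternatives has entropy 1 and therefore weight 0, while a criterion with more variation receives a larger weight.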

5.2. Probabilistic Linguistic MADM Approach

Decision-making processes comprise a series of steps: identifying the problems, constructing the preferences, evaluating the alternatives, and determining the best alternatives. With the aid of the WPLEA and the WPLEG aggregation operators, we develop a decision-making procedure for the ranking of alternatives. The detailed approach is illustrated as follows:
Step 1. In a practical decision-making problem, we determine the alternatives X = {x_1, x_2, x_3, x_4, x_5} and the set of attributes C = {c_1, c_2, c_3, c_4}. Then we obtain the decision matrix A(q) = (L_qij)_{m×n} provided by the DM d_q.
Step 2. With regard to the collective matrix A(q) = (L_qij)_{m×n}, the entries of A are normalized as stated in Definition 5. The entries of the normalized matrix R = (L_ij(p))_{m×n} are arranged in decreasing order.
Step 3. Since the operational values may exceed the boundaries of LTSs and also the PLTEs must be within the interval [ 0 , 1 ] to satisfy the Einstein operational laws, we need to transform the PLTS R = ( L i j ( p ) ) m × n to the following equivalent form R = g ( L i j ( p ) ) m × n .
Step 4. We determine the criteria weights. The criteria weights can be determined by way of using the following formula:
w_j = (1 − E_j) / Σ_{j=1}^{n} (1 − E_j) (j = 1, 2, …, n)
Step 5. If the DM prefers the WPLEA operator, then the aggregated value of the alternative x i is determined based on (11). The result is:
Z_i = WPLEA(g(L_i1(p)), g(L_i2(p)), …, g(L_in(p))) = ∪_{η_ij^(k) ∈ g(L_ij(p)) (j = 1, 2, …, n)} { ( ∏_{j=1}^{n} (1 + g(L_ij^(k)) p_ij^(k))^{w_j} − ∏_{j=1}^{n} (1 − g(L_ij^(k)) p_ij^(k))^{w_j} ) / ( ∏_{j=1}^{n} (1 + g(L_ij^(k)) p_ij^(k))^{w_j} + ∏_{j=1}^{n} (1 − g(L_ij^(k)) p_ij^(k))^{w_j} ) } (i = 1, 2, …, m)
If the DM prefers the WPLEG operator, then the aggregated value of the alternative x i is determined based on (15). The result is:
Z_i = WPLEG(g(L_i1(p)), g(L_i2(p)), …, g(L_in(p))) = ∪_{η_ij^(k) ∈ g(L_ij(p)) (j = 1, 2, …, n)} { 2 ∏_{j=1}^{n} (p_ij^(k) g(L_ij^(k)))^{w_j} / ( ∏_{j=1}^{n} (2 − p_ij^(k) g(L_ij^(k)))^{w_j} + ∏_{j=1}^{n} (p_ij^(k) g(L_ij^(k)))^{w_j} ) } (i = 1, 2, …, m)
In this case, we denote the aggregated value of the alternative x i as Z i .
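Both aggregation rules of Step 5 can be sketched as below. This is a sketch under our own assumptions: each transformed PLTE is represented as a list of (g, p) pairs with g ∈ [0, 1], the function names `wplea`/`wpleg` are ours, and each output value corresponds to one combination of linguistic terms across the criteria.

```python
from itertools import product

def wplea(pltes, weights):
    """Weighted probabilistic linguistic Einstein average of transformed PLTEs.

    pltes: one list of (g, p) pairs per criterion, where g is the transformed
    linguistic value in [0, 1] and p its probability.
    weights: criterion weights summing to 1.
    """
    out = []
    for combo in product(*pltes):
        plus = minus = 1.0
        for (g, p), w in zip(combo, weights):
            plus *= (1 + g * p) ** w
            minus *= (1 - g * p) ** w
        out.append((plus - minus) / (plus + minus))
    return out

def wpleg(pltes, weights):
    """Weighted probabilistic linguistic Einstein geometric counterpart."""
    out = []
    for combo in product(*pltes):
        num = den = 1.0
        for (g, p), w in zip(combo, weights):
            num *= (g * p) ** w
            den *= (2 - g * p) ** w
        out.append(2 * num / (den + num))
    return out
```

With a single criterion of weight 1, both operators reduce to g · p itself, which is a quick sanity check on the Einstein forms.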
Step 6. Following the results of Definition 4 of Section 2, the score and the deviation degree of Z i of the alternative x i are computed, i.e., E ( Z i ) and σ ( Z i ) ( i = 1 , 2 , , m ) .
Step 7. Rank all the alternatives x i in accordance with the ranking results of Definition 4.

6. Illustrative Example

Information technology has become an antidote to the numerous problems health organizations face in improving healthcare delivery. In order to improve healthcare delivery, the adoption of health information technology (HIT) has become vital for health organizations. Many stakeholders, such as the government, information technology businesses, healthcare organizations, policy makers, and consumers, anticipate that healthcare problems can be addressed through technological innovations [26]. The adoption of HIT can help health administrators reduce clinical errors, support clinicians, improve the management of patients' information, and expand patients' access to both remote and continuing healthcare services [27,28,29,30,31]. Because of the various uncertainties and risks associated with the healthcare process, decision makers are very often involved in the decision-making process for healthcare issues. In line with this global concern, the Ghanaian government wants to assess and improve the quality and safety of care as far as health information technology (HIT) innovations in Ghana's public hospitals are concerned.
Hence, this problem can be considered a multi-criteria group decision making (MCGDM) problem, which requires MCGDM methods for effective problem-solving. In solving real-life decision-making problems, Pang et al. [1] stated that decision-makers can employ linguistic terms to evaluate the performance of the alternatives, which can help rank the hospitals and select the most desirable one via proper decision-making methods. To address the above problem, we adopt probabilistic linguistic values to handle uncertainty and qualitative factors. In many situations, the preference information on attributes is uncertain and inconsistent. To account for this insufficiency in decision making, we apply the probabilistic linguistic Einstein aggregation operators to select the ideal hospital based on probabilistic linguistic values. In this section, we extend the PLMCGDM method to the healthcare environment. The government wants to assess the performance of hospitals with respect to the implementation of HIT. For this purpose, five hospitals were randomly chosen, and a committee of three decision makers D = {d_1, d_2, d_3} was formed to select the most appropriate hospital. Therefore, we introduce an MCGDM problem in which PLTSs are used to express the evaluation information. Then, we apply the WPLEA or WPLEG operator to support our decision. Let X = {x_1, x_2, x_3, x_4, x_5} be a finite set of five hospitals and C = {c_1, c_2, c_3, c_4} be the set of criteria defined for the selection process.
The criteria, adopted from [32], are: c_1 = efficiency, thereby decreasing costs; c_2 = enhancing quality care; c_3 = enabling information exchange and communication in a standardized way between healthcare establishments; c_4 = encouraging a partnership relationship between patients and health professionals. The linguistic scale is S = { s_t | t = −4, …, −1, 0, 1, …, 4 }. Considering the results of [33], the evaluations of the decision makers are shown in Table 1, Table 2 and Table 3.

6.1. Decision Analysis with Our Proposed Approaches

Step 1: Following the proposed methods in Section 5, we integrate the individual decision matrices A_1–A_3 into a collective decision matrix by (17). The result is shown in Table 4, where each PLTS L_ij(p) (i = 1, 2, 3, 4, 5; j = 1, 2, 3, 4) is assumed to be an ordered PLTS. Based on the Shannon entropy measure determined in Step 4, the weighting vector of the attributes C is w = (w_1, w_2, w_3, w_4)^T = (0.1628, 0.3024, 0.2427, 0.2921)^T. We use the WPLEA and WPLEG aggregation operators to analyze the results of Table 5. With respect to the above results and the proposed methods in Section 5, the detailed steps are as follows.
Step 2. Considering the collective matrix A(q) = (L_qij)_{5×4}, we normalize the entries of R as stated in Definition 5. Then, the normalized entries are arranged in decreasing order. The results are presented in Table 5.
Step 3. Since the operational values may exceed the boundaries of LTSs and also the PLTEs must be within the interval [ 0 , 1 ] to satisfy the Einstein operational laws, we need to transform the normalized PLTS R = ( L i j ( p ) ) 5 × 4 to the following equivalent form R = g ( L i j ( p ) ) 5 × 4 . The results are given in Table 6.
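The equivalent transformation used in this step is the one of Gou et al. [6], g(s_α) = α/(2τ) + 1/2, which maps the scale S (τ = 4 here) onto [0, 1]; the values in Table 6 are consistent with it (e.g., s_2 ↦ 0.75 and s_{−1} ↦ 0.375). A one-line sketch:

```python
def g(alpha: float, tau: int = 4) -> float:
    """Equivalent transformation g(s_alpha) = alpha / (2*tau) + 1/2 (Gou et al. [6])."""
    return alpha / (2 * tau) + 0.5
```

The endpoints of the scale map to the endpoints of the unit interval: g(s_{−τ}) = 0 and g(s_τ) = 1.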
Step 4. We derive the criteria weights by utilizing (20). The weight vector of g(L_i(p)) is:
w = ( 0.1628 , 0.3024 , 0.2427 , 0.2921 ) T ( j = 1 , 2 , 3 , 4 )
Step 5. Aggregate the probabilistic linguistic values g ( L i j ( p ) ) for each alternative x i by the WPLEA (or WPLEG) operator.
If the decision-maker chooses the WPLEA operator, then the aggregated value of the alternatives x i ( i = 1 , 2 , , 5 ) is determined based on (11).
Z_1 = WPLEA(g(L_11(p)), g(L_12(p)), g(L_13(p)), g(L_14(p))) = {0.3708, 0.3225, …, 0.0204}
Z_2 = WPLEA(g(L_21(p)), g(L_22(p)), g(L_23(p)), g(L_24(p))) = {0.5463, 0.4734, …, 0}
Z_3 = WPLEA(g(L_31(p)), g(L_32(p)), g(L_33(p)), g(L_34(p))) = {0.3454, 0.3338, …, 0.0820}
Z_4 = WPLEA(g(L_41(p)), g(L_42(p)), g(L_43(p)), g(L_44(p))) = {0.3356, 0.3238, …, 0.0958}
Z_5 = WPLEA(g(L_51(p)), g(L_52(p)), g(L_53(p)), g(L_54(p))) = {0.4119, 0.3653, …, 0.1042}
If the decision-maker chooses the WPLEG operator, then the aggregated value of the alternative x_i (i = 1, 2, …, m) is determined based on (15). In the same way, we denote the aggregated value of the alternative x_i as Z_i. The results are:
Z_1 = WPLEG(g(L_11(p)), g(L_12(p)), g(L_13(p)), g(L_14(p))) = {0.3645, 0.3143, …, 0}
Z_2 = WPLEG(g(L_21(p)), g(L_22(p)), g(L_23(p)), g(L_24(p))) = {0.5422, 0.4295, …, 0}
Z_3 = WPLEG(g(L_31(p)), g(L_32(p)), g(L_33(p)), g(L_34(p))) = {0.3304, 0.3163, …, 0}
Z_4 = WPLEG(g(L_41(p)), g(L_42(p)), g(L_43(p)), g(L_44(p))) = {0.3318, 0.3176, …, 0}
Z_5 = WPLEG(g(L_51(p)), g(L_52(p)), g(L_53(p)), g(L_54(p))) = {0.3928, 0.3395, …, 0}
Step 6. On the basis of the results of Definition 10, the scores of the alternatives x_i can be computed, i.e., E(Z_i). If the DM uses the WPLEA operator to aggregate the decision information, the scores are determined below:
E ( Z 1 ) = 0.2200 ; E ( Z 2 ) = 0.2460 ; E ( Z 3 ) = 0.2222 ; E ( Z 4 ) = 0.2186 ; E ( Z 5 ) = 0.2682 .
If the DM uses the WPLEG operator to aggregate the decision information, the scores are determined as follows:
E ( Z 1 ) = 0.0888 ; E ( Z 2 ) = 0.0389 ; E ( Z 3 ) = 0.1182 ; E ( Z 4 ) = 0.1151 ; E ( Z 5 ) = 0.1371 .
Step 7. If the DM uses the WPLEA operator, we can determine the ranking of the scores of the alternatives based on the results of Step 6. It is as follows:
E ( Z 5 ) > E ( Z 2 ) > E ( Z 3 ) > E ( Z 1 ) > E ( Z 4 )
Hence, the ordering of the alternatives is:
x 5 > x 2 > x 3 > x 1 > x 4
If the DM uses the WPLEG operator, we can determine the ranking of the scores of the alternatives based on the results of Step 6. It is as follows:
E ( Z 5 ) > E ( Z 3 ) > E ( Z 4 ) > E ( Z 1 ) > E ( Z 2 )
In this case, the ordering of the alternatives is:
x 5 > x 3 > x 4 > x 1 > x 2

6.2. Comparison Analysis

Under probabilistic linguistic information, Ref. [1] developed an aggregation-based method for MAGDM. In order to validate the effectiveness of our proposed methods, we compute the decision results for the methods of Ref. [1] and Ref. [33]. The decision results are presented in Table 7.
Considering the linguistic power weighted average (LPWA) operator of Ref. [33], the ranking is x_5 > x_2 > x_4 > x_1 > x_3; that of Ref. [1] is also x_5 > x_2 > x_4 > x_1 > x_3. The two reference methods thus produce the same ranking, which differs from ours: WPLEA gives x_5 > x_2 > x_3 > x_1 > x_4 and WPLEG gives x_5 > x_3 > x_4 > x_1 > x_2. However, the optimal alternative agrees across all methods: x_5 is the best choice under both WPLEA and WPLEG.
For MCGDM problems under the probabilistic linguistic environment, we introduced our model to improve upon the existing techniques. Unlike the existing model of Ref. [33] considered in this paper, our model yields better results: because probabilities were not considered in that approach, the accuracy of the DMs' preference information might be questionable, and ignoring the probabilistic information may lead to erroneous decision results. Pang et al. [1] stated that completely ignoring the importance and probability of the possible linguistic term sets in GDM with linguistic information is not rational. In addition, without PLTSs it might not be easy for the DMs to provide several possible linguistic values for an alternative or an attribute. This exposes the shortcomings of the model proposed in Ref. [33], in spite of the power average involved in its aggregation process.
Comparing our results with those of Ref. [1], our model also yields better results. The reason is that the PLTS theory itself has some limitations and struggles to capture the relationships between input arguments, whereas our proposed model provides more versatility in the aggregation process and can depict the interrelationship of the input arguments. To shed more light on these weaknesses, Gou et al. [6] stated that the existing operational laws for linguistic terms and extended linguistic terms are unreasonable because their operational values occasionally exceed the bounds of the linguistic term sets (LTSs); moreover, their novel operational laws reduce the computational complexity experienced in [1] and keep the probability information complete after the operations. In general, WPLEG applies to the average of ratio data and is mainly used to calculate the average growth (or change) rate of the data. From the characteristics of Table 7, WPLEA performs much better than WPLEG.

7. Conclusions

In this paper, we investigated multi-criteria group decision-making problems in which the attribute values take the form of transformed PLTSs. We introduced Einstein operations into the probabilistic linguistic environment, defined some operations of PLTSs based on them together with the corresponding operational laws, and developed the corresponding new operators, i.e., the PLEA, PLEG, WPLEA and WPLEG operators. In light of the PLMCGDM, we described the decision-making problem and designed corresponding approaches employing the WPLEA and WPLEG. We thereby extend the literature on Einstein operations and enrich the research work on PLTSs. Unlike other existing studies, we assumed that the weights are unknown and calculated the criteria weights through the Shannon entropy method, which is another reliable way of determining criteria weights. Desirable properties of the operators have also been analyzed. Numerical experiments on hospital selection and comparative studies show the practicality and advantages of our proposed model. The ideas behind the implementation of our method have the potential to give more insights to researchers and practitioners in the area of PLTSs. Future research may propose new aggregation operators for PLTSs in the areas of pattern recognition and clustering analysis.

Author Contributions

K.A. designed the framework and the fundamental idea. He analyzed the data and dealt with the deduction procedure. A.P.D. modified the expression and equally analyzed the data.

Funding

The research received no external funding. It was solely sponsored by the first author through his stipend.

Acknowledgments

The authors thank the Editor-in-Chief, the Associate Editor and the anonymous reviewers for their helpful comments and suggestions, which have led to an improved version of this paper. We also thank Professor Decui Liang whose suggestions and advice yielded positive results/impact on the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pang, Q.; Wang, H.; Xu, Z. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  2. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  3. Torra, V. Hesitant Fuzzy Sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
  4. Rodriguez, R.M.; Martinez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  5. Bai, C.; Zhang, R.; Qian, L.; Wu, Y. Comparisons of probabilistic linguistic term sets for multi-criteria decision making. Knowl.-Based Syst. 2017, 119, 284–291. [Google Scholar] [CrossRef]
  6. Gou, X.; Xu, Z. Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets. Inf. Sci. 2016, 372, 407–427. [Google Scholar] [CrossRef]
  7. Yue, H.; Zeshui, X.; Jiang, W. Probabilistic interval reference ordering sets in multi-criteria group decision making. Int. J. Uncertain. Fuzz. Knowl.-Based Syst. 2017, 25, 189–212. [Google Scholar]
  8. Kobina, A.; Liang, D.; He, X. Probabilistic linguistic power aggregation operators for multi-criteria group decision making. Symmetry 2017, 9, 320. [Google Scholar] [CrossRef]
  9. Yu, Q.; Hou, F.; Zhai, Y.; Du, Y. Some Hesitant Fuzzy Einstein Aggregation Operators and Their Application to Multiple Attribute Group Decision Making. Int. J. Intell. Syst. 2016, 31. [Google Scholar] [CrossRef]
  10. Zhao, X.; Ju, R.; Yang, S.; Zhou, Y. Aggregation operators using einstein operations on intuitionistic trapezoidal fuzzy number. In Proceedings of the 2013 10th International Conference on Fuzzy Systems and Knowledge Discovery, Shenyang, China, 23–25 July 2013. [Google Scholar]
  11. Yu, D. Some hesitant fuzzy information aggregation operators based on einstein operational laws. Int. J. Intell. Syst. 2014, 29, 320–340. [Google Scholar] [CrossRef]
  12. Wang, W.; Liu, X. Some interval-valued intuitionistic fuzzy geometric aggregation operators based on einstein operations. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 604–608. [Google Scholar] [CrossRef]
  13. Wang, W.; Liu, X. Intuitionistic Fuzzy Information Aggregation Using Einstein Operations. IEEE Trans. Fuzzy Syst. 2012, 20. [Google Scholar] [CrossRef]
  14. Yang, Y.; Yuan, S. Induced interval-valued intuitionistic fuzzy Einstein ordered weighted geometric operator and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 26, 2945–2954. [Google Scholar]
  15. Cai, X.; Han, L. Some induced Einstein aggregation operators based on the data mining with interval-valued intuitionistic fuzzy information and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 27, 331–338. [Google Scholar] [CrossRef]
  16. Wang, Q.; Sun, H. Interval-Valued Intuitionistic Fuzzy Einstein Geometric Choquet Integral Operator and Its Application to Multiattribute Group Decision-Making. Math. Probl. Eng. 2018, 2018. [Google Scholar] [CrossRef]
  17. Rahman, K.; Abdullah, S.; Ali, A.; Amin, F. Interval-valued Pythagorean fuzzy Einstein hybrid weighted averaging aggregation operator and their application to group decision making. Complex Intell. Syst. 2018. [Google Scholar] [CrossRef]
  18. Rahman, K.; Abdullah, S.; Khan, M.S.A. Some Interval-Valued Pythagorean Fuzzy Einstein Weighted Averaging Aggregation Operators and Their Application to Group Decision Making. J. Intell. Syst. 2018. [Google Scholar] [CrossRef]
  19. Klement, E.P.; Mesiar, R.; Pape, E. Generated Triangular Norms. Kybernetika 2000, 36, 363–377. [Google Scholar]
  20. Schweizer, B.; Sklar, A. Probabilistic Metric Spaces; Courier Corporation: North Chelmsford, MA, USA, 2011. [Google Scholar]
  21. Gassert, H. Operators on Fuzzy Sets: Zadeh and Einstein Operations on Fuzzy Sets, Properties of T-Norms and T-Conorms. Available online: https://pdfs.semanticscholar.org/a045/52b74047208d23d77b8aa9f5f334b59e65ea.pdf (accessed on 8 December 2018).
  22. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  23. Joshi, D.K.; Kumar, S. Entropy of interval-valued intuitionistic hesitant fuzzy set and its application to group decision making problems. Granul. Comput. 2018, 3, 367–381. [Google Scholar] [CrossRef] [Green Version]
  24. Kacprzak, D. Objective weights based on ordered fuzzy numbers for fuzzy multiple criteria decision-making methods. Entropy 2017, 19, 373. [Google Scholar] [CrossRef]
  25. Tian, Z.P.; Zhang, H.Y.; Wang, J.; Wang, J.Q.; Chen, X.H. Multi-criteria decision-making method based on a cross-entropy with interval neutrosophic sets. Int. J. Syst. Sci. 2016, 47, 3598–3608. [Google Scholar] [CrossRef]
  26. Hillestad, R.; Bigelow, J.; Bower, A.; Girosi, F.; Meili, R.; Scoville, R.; Taylor, R. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff. 2005, 24, 1103–1117. [Google Scholar] [CrossRef] [PubMed]
  27. Ammenwerth, E.; Gräber, S.; Herrmann, G.; Bürkle, T.; König, J. Evaluation of health information systems—Problems and challenges. Int. J. Med. Inform. 2003, 71, 125–135. [Google Scholar] [CrossRef]
  28. Black, A.D.; Car, J.; Pagliari, C.; Anandan, C.; Cresswell, K.; Bokun, T.; McKinstry, B.; Procter, R.; Majeed, A.; Sheikh, A. The impact of ehealth on the quality and safety of health care: A systematic overview. PLoS Med. 2011, 8, e1000387. [Google Scholar] [CrossRef] [PubMed]
  29. Gagnon, M.P.; Desmartis, M.; Labrecque, M.; Car, J.; Pagliari, C.; Pluye, P.; Frémont, P.; Gagnon, J.; Tremblay, N.; Légaré, F. Systematic review of factors influencing the adoption of information and communication technologies by healthcare professionals. J. Med. Syst. 2012, 36, 241–277. [Google Scholar] [CrossRef] [PubMed]
  30. Lapointe, L.; Mignerat, M.; Vedel, I. The IT productivity paradox in health: A stakeholder’s perspective. Int. J. Med. Inform. 2011, 80, 102–115. [Google Scholar] [CrossRef]
  31. Li, J.; Talaei-Khoei, A.; Seale, H.; Ray, P.; MacIntyre, C.R. Health Care Provider Adoption of eHealth: Systematic Literature Review. Interact. J. Med. Res. 2013, 2, e7. [Google Scholar] [CrossRef] [PubMed]
  32. Eysenbach, G. What is e-health? J. Med. Internet Res. 2001, 3, 1–5. [Google Scholar] [CrossRef]
  33. Xu, Y.; Merigo, J.M.; Wang, H. Linguistic power aggregation operators and their application to multiple attribute group decision making. Appl. Math. Model. 2012, 36, 5427–5444. [Google Scholar] [CrossRef]
Table 1. Decision matrix A 1 provided by d 1 .
c 1 c 2 c 3 c 4
x 1 s 2 s 3 s 3 s 2
x 2 s 2 s 0 s 1 s 1
x 3 s 1 s 0 s 2 s 1
x 4 s 2 s 0 s 1 s 3
x 5 s 2 s 4 s 3 s 1
Table 2. Decision matrix A 2 provided by d 2 .
c 1 c 2 c 3 c 4
x 1 s 0 s 1 s 0 s 1
x 2 s 3 s 2 s 1 s 2
x 3 s 0 s 1 s 2 s 2
x 4 s 1 s 0 s 1 s 2
x 5 s 0 s 3 s 2 s 1
Table 3. Decision matrix A 3 provided by d 3 .
c 1 c 2 c 3 c 4
x 1 s 1 s 1 s 0 s 1
x 2 s 3 s 2 s 1 s 2
x 3 s 1 s 0 s 3 s 3
x 4 s 3 s 2 s 1 s 1
x 5 s 1 s 2 s 3 s 2
Table 4. The probabilistic linguistic decision matrix of the group.
c 1 c 2 c 3 c 4
x 1 { s 2 ( 0.33 ) , s 0 ( 0.33 ) , s 1 ( 0.33 ) } { s 3 ( 0.33 ) , s 1 ( 0.67 ) } { s 3 ( 0.33 ) , s 0 ( 0.67 ) } { s 2 ( 0.33 ) , s 1 ( 0.67 ) }
x 2 { s 2 ( 0.33 ) , s 3 ( 0.67 ) } { s 0 ( 0.33 ) , s 2 ( 0.67 ) } { s 1 ( 1 ) } { s 1 ( 0.33 ) , s 2 ( 0.67 ) }
x 3 { s 1 ( 0.33 ) , s 0 ( 0.33 ) , s 1 ( 0.33 ) } { s 0 ( 0.67 ) , s 1 ( 0.33 ) } { s 2 ( 0.67 ) , s 3 ( 0.33 ) } { s 1 ( 0.33 ) , s 2 ( 0.33 ) , s 3 ( 0.33 ) }
x 4 { s 2 ( 0.33 ) , s 1 ( 0.33 ) , s 3 ( 0.33 ) } { s 0 ( 0.67 ) , s 2 ( 0.33 ) } { s 1 ( 0.33 ) , s 1 ( 0.67 ) } { s 3 ( 0.33 ) , s 2 ( 0.33 ) , s 1 ( 0.33 ) }
x 5 { s 2 ( 0.33 ) , s 0 ( 0.33 ) , s 1 ( 0.33 ) } { s 4 ( 0.33 ) , s 3 ( 0.33 ) , s 2 ( 0.33 ) } { s 3 ( 0.67 ) , s 2 ( 0.33 ) } { s 1 ( 0.67 ) , s 2 ( 0.33 ) }
Table 5. Normalized probabilistic linguistic decision matrix arranged in decreasing order.
c 1 c 2 c 3 c 4
x 1 { s 2 ( 0.33 ) , s 0 ( 0.33 ) s 1 ( 0.33 ) } { s 3 ( 0.33 ) , s 1 ( 0.67 ) } { s 3 ( 0.33 ) , s 0 ( 0.67 ) } { s 2 ( 0.33 ) , s 1 ( 0.67 ) }
x 2 { s 3 ( 0.67 ) , s 2 ( 0.33 ) } { s 2 ( 0.67 ) , s 0 ( 0.33 ) } { s 1 ( 1 ) } { s 2 ( 0.67 ) , s 1 ( 0.33 ) }
x 3 { s 1 ( 0.33 ) , s 0 ( 0.33 ) , s 1 ( 0.33 ) } { s 1 ( 0.33 ) , s 0 ( 0.67 ) , } { s 2 ( 0.67 ) , s 3 ( 0.33 ) } { s 3 ( 0.33 ) , s 2 ( 0.33 ) , s 1 ( 0.33 ) }
x 4 { s 3 ( 0.33 ) , s 2 ( 0.33 ) s 1 ( 0.33 ) , } { s 2 ( 0.33 ) , s 0 ( 0.67 ) } { s 1 ( 0.67 ) , s 1 ( 0.33 ) } { s 3 ( 0.33 ) , s 2 ( 0.33 ) , s 1 ( 0.33 ) }
x 5 { s 2 ( 0.33 ) , s 1 ( 0.33 ) s 0 ( 0.33 ) } { s 4 ( 0.33 ) , s 3 ( 0.33 ) s 2 ( 0.33 ) } { s 3 ( 0.67 ) , s 2 ( 0.33 ) } { s 2 ( 0.33 ) , s 1 ( 0.67 ) }
Table 6. The transformed normalized probabilistic linguistic decision matrix.
c 1 c 2 c 3 c 4
x 1 { s 0.75 ( 0.33 ) , s 0.5 ( 0.33 ) s 0.375 ( 0.33 ) } { s 0.625 ( 0.67 ) , s 0.875 ( 0.33 ) , s 0.625 ( 0 ) } { s 0.5 ( 0.67 ) , s 0.875 ( 0.33 ) , s 0.5 ( 0 ) } { s 0.625 ( 0.67 ) , s 0.75 ( 0.33 ) , s 0.625 ( 0 ) }
x 2 { s 0.875 ( 0.67 ) , s 0.75 ( 0.33 ) , s 0.75 ( 0 ) } { s 0.75 ( 0.67 ) , s 0.5 ( 0.33 ) , s 0.5 ( 0 ) } { s 0.625 ( 1 ) , s 0.625 ( 0 ) , s 0.625 ( 0 ) } { s 0.75 ( 0.67 ) , s 0.625 ( 0.33 ) , s 0.625 ( 0 ) }
x 3 { s 0.625 ( 0.33 ) , s 0.5 ( 0.33 ) , s 0.375 ( 0.33 ) } { s 0.5 ( 0.67 ) , s 0.625 ( 0.33 ) , s 0.5 ( 0 ) } { s 0.75 ( 0.67 ) , s 0.875 ( 0.33 ) , s 0.75 ( 0 ) } { s 0.875 ( 0.33 ) , s 0.75 ( 0.33 ) , s 0.625 ( 0.33 ) }
x 4 { s 0.875 ( 0.33 ) , s 0.75 ( 0.33 ) , s 0.625 ( 0.33 ) } { s 0.5 ( 0.67 ) , s 0.75 ( 0.33 ) , s 0.5 ( 0 ) } { s 0.625 ( 0.67 ) , s 0.375 ( 0.33 ) , s 0.375 ( 0 ) } { s 0.875 ( 0.33 ) , s 0.75 ( 0.33 ) s 0.625 ( 0.33 ) }
x 5 { s 0.75 ( 0.33 ) , s 0.625 ( 0.33 ) s 0.5 ( 0.33 ) } { s 1 ( 0.33 ) , s 0.875 ( 0.33 ) , s 0.75 ( 0.33 ) } { s 0.875 ( 0.67 ) , s 0.75 ( 0.33 ) , s 0.75 ( 0 ) } { s 0.625 ( 0.67 ) , s 0.75 ( 0.33 ) , s 0.625 ( 0 ) }
Table 7. Comparative analysis of our proposed method with that in the existing literatures.
MethodRank
Aggregation-based method of Ref. [1] x 5 > x 2 > x 4 > x 1 > x 3
The method with LPWA of Ref. [33] x 5 > x 2 > x 4 > x 1 > x 3
Our proposed method with WPLEA x 5 > x 2 > x 3 > x 1 > x 4
Our proposed method with WPLEG x 5 > x 3 > x 4 > x 1 > x 2

Agbodah, K.; Darko, A.P. Probabilistic Linguistic Aggregation Operators Based on Einstein t-Norm and t-Conorm and Their Application in Multi-Criteria Group Decision Making. Symmetry 2019, 11, 39. https://doi.org/10.3390/sym11010039
