Algorithms for single-valued neutrosophic decision making based on TOPSIS and clustering methods with new distance measure

Single-valued neutrosophic set (SVNS) is an important tool for handling decision-making problems with unknown and indeterminate data, employing quantitative degrees of "acceptance", "indeterminacy", and "non-acceptance". Under this setting, the objective of this paper is to propose new distance measures for discriminating between SVNSs. The basic axioms of the measures are highlighted and their properties examined. Furthermore, to demonstrate the relevance of the proposed measures, an extended TOPSIS ("technique for order preference by similarity to ideal solution") method is introduced to solve group decision-making problems. Additionally, a new clustering technique based on the stated measures is proposed to classify objects. Advantages, a comparative analysis, and a superiority analysis are given to show the influence of the proposal over existing approaches.


Introduction
MCDM ("Multi-Criteria Decision Making") plays a vital role in our daily lives. In a competitive environment, the goal is to determine the best option among several inspected against numerous criteria. However, in many cases it is difficult for a person to choose a suitable option due to several kinds of uncertainties in the data, which may arise from a lack of knowledge or from human error. Thus, the field of MCDM has grown considerably in recent years; an MCDM process generally involves the following three phases.
The above-mentioned approaches are widely applicable in different fields, and among them, the approaches based on IMs are extensively studied. The "Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS)" [28] is a well-known approach that works on the principle of picking the best alternative according to its minimum distance from a target set. For this, two ideals are considered, namely the PIS ("positive ideal set") and the NIS ("negative ideal set"), on which the working of the TOPSIS method depends. In the TOPSIS method, both inclinations, similarity and dissimilarity, are considered together to reach the target set. Based on these features, several researchers have applied TOPSIS to solve MCDM problems under the SVNS environment. For example, [29] first presented a TOPSIS model for SVNSs. [30] presented an MCDM method based on the TOPSIS and VIKOR ("VIseKriterijumska Optimizacija I Kompromisno Resenje") methods. [31] presented an extended TOPSIS method based on the maximum-deviation method, while [32] presented a modified TOPSIS method to solve MCDM problems. [27] presented a divergence-measure-based TOPSIS method under SVNSs. [33] discussed the TOPSIS method for solving MCDM problems through statistical analysis. Apart from the above schemes, [34] built a strategy for arranging observations into groups, with the goal that each cluster is as homogeneous as possible, in the FS domain. This strategy, known as clustering, groups fuzzy information into a hierarchical structure based on a proximity matrix. Inspired by this idea, [35] introduced a clustering method for SVNSs based on the minimum spanning tree. Again, [36] presented another clustering algorithm based on a similarity measure obtained from a distance measure. Since clustering has applications in various fields such as image processing, data mining, medical diagnosis, and machine learning, authors have extended these applications of clustering analysis to the SVNS environment [37][38][39][40]. Thus, from the above studies, we conclude that SVNS is one of the most favorable environments for assessing alternatives.
Considering the versatility of SVNSs and the quality of the TOPSIS method, the theme of the present study is to examine new distance measures to compute the degree of discrimination between given sets. We also study their relevant axioms and properties to show their validity, and we extend the TOPSIS approach for DMPs. Holding all these points in mind, the main objectives of the present work are:
(i) to define some new distance measures for given numbers under the SVNS environment;
(ii) to develop an algorithm for solving MCDM problems based on the extended TOPSIS approach;
(iii) to test the presented approach on a numerical example;
(iv) to present a new clustering algorithm based on the proposed measures.
The major advantage of the presented TOPSIS method over others is the following: the basic TOPSIS method aggregates the decision matrices with an aggregation operator and then computes relative closeness coefficients from the aggregated matrix. In our approach, we instead find the relative coefficient of each decision matrix individually and report results for each alternative, and then aggregate the results of each decision-maker to obtain new relative coefficients for the final ranking. That is, this approach gives both the individual and the aggregated ranking results of the decision-makers. The rest of the paper is organized as follows. Section 2 gives a brief review of SVNSs. Section 3 deals with new distance measures along with their characteristics. In Section 4, we offer an extended group TOPSIS method based on the proposed measures to solve the MCDM problem; the applicability of the approach is discussed through a case study. In Section 5, a new clustering algorithm is presented and explained with a numerical example. Section 6 gives the advantages of the study. Finally, a concrete conclusion is given in Section 7.

Preliminaries
In this section, we discuss some basic terms associated with SVNSs over a universal set X.
Definition 2.1. [3] A neutrosophic set N over X is given as N = {(x, ς_N(x), τ_N(x), υ_N(x)) | x ∈ X}, where ς_N(x), τ_N(x) and υ_N(x) are the degrees of "acceptance", "indeterminacy" and "non-acceptance" of x, respectively. Throughout this article, we call the triple N = (ς_N, τ_N, υ_N) an SVN number (SVNN).
(iv) N 1 = N 2 if and only if N 1 ⊆ N 2 and N 2 ⊆ N 1 .
Definition 2.4. Let Ψ(X) be the collection of all SVNSs over X. A real-valued function D : Ψ(X) × Ψ(X) → [0, 1] is termed a distance measure if, for N_1, N_2, N_3 ∈ Ψ(X), D satisfies the following axioms.
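The axioms of Definition 2.4 can be sanity-checked in code. The sketch below is illustrative only: it represents an SVNN as a (truth, indeterminacy, falsity) triple and verifies boundedness, symmetry, and the identity property for a plain Hamming-type distance; the paper's own measure D_λ from Eq. (3.1) is not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SVNN:
    """Single-valued neutrosophic number (truth, indeterminacy, falsity)."""
    t: float
    i: float
    f: float

    def __post_init__(self):
        # Each degree lies in [0, 1] and their sum does not exceed 3.
        assert all(0.0 <= d <= 1.0 for d in (self.t, self.i, self.f))
        assert self.t + self.i + self.f <= 3.0

def hamming(a: SVNN, b: SVNN) -> float:
    """Hamming-type distance, used here only to illustrate the axioms."""
    return (abs(a.t - b.t) + abs(a.i - b.i) + abs(a.f - b.f)) / 3.0

n1, n2 = SVNN(0.7, 0.2, 0.1), SVNN(0.5, 0.3, 0.4)
assert 0.0 <= hamming(n1, n2) <= 1.0        # boundedness
assert hamming(n1, n2) == hamming(n2, n1)   # symmetry
assert hamming(n1, n1) == 0.0               # identity of indiscernibles
```

Any candidate distance measure for SVNSs, including the proposed D_λ, should pass checks of this shape.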

Proposed distance measure
This section presents new distance measures for SVNSs and investigates their properties.
Definition 3.1. For two SVNSs N_1 = {(x_j, ς_{N_1}(x_j), τ_{N_1}(x_j), υ_{N_1}(x_j)) | x_j ∈ X} and N_2 = {(x_j, ς_{N_2}(x_j), τ_{N_2}(x_j), υ_{N_2}(x_j)) | x_j ∈ X}, the proposed distance measure D between them is stated in Eq. (3.1). Next, we define the degree of similarity based on the proposed measure as follows.
Definition 3.2. A real-valued function S is termed a similarity measure between two SVNSs N_1 and N_2 when it is obtained from the proposed distance measure, as given in Eq. (3.2). Theorem 3.2. The measure defined in Definition 3.2 has the following features. Proof. For two SVNSs N_1 and N_2, property (S3) follows directly from the definition.
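Since Eqs. (3.1) and (3.2) are not reproduced in this excerpt, the following sketch substitutes an assumed Minkowski-type distance with flexibility parameter λ to illustrate the distance-to-similarity pattern of Definition 3.2 (similarity as the complement of distance); the exact form of the paper's measures may differ.

```python
def distance(A, B, lam=2.0):
    """Minkowski-type distance between two SVNSs given as equal-length
    lists of (t, i, f) triples; lam is the flexibility parameter.
    This is a stand-in for Eq. (3.1), not the paper's exact formula."""
    n = len(A)
    total = sum(
        (abs(a[0] - b[0]) ** lam
         + abs(a[1] - b[1]) ** lam
         + abs(a[2] - b[2]) ** lam) / 3.0
        for a, b in zip(A, B)
    )
    return (total / n) ** (1.0 / lam)

def similarity(A, B, lam=2.0):
    # Similarity derived from distance, following the pattern of Definition 3.2.
    return 1.0 - distance(A, B, lam)

A = [(1.0, 0.0, 0.0)]   # fully accepted element
B = [(0.0, 1.0, 1.0)]   # maximally different element
print(distance(A, B), similarity(A, A))   # extreme-case check
```

With this form, maximally different SVNSs attain distance 1 and any set has similarity 1 with itself, matching the boundary behavior a similarity measure must satisfy.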

MCDM method based on extended TOPSIS method
In this section, we offer a novel TOPSIS method based on the proposed measures to handle group DMPs. Further, a real-life example is given to demonstrate it, and a validity test is conducted to justify it.

Proposed approach
Consider an MCGDM ("Multi-Criteria Group Decision Making") problem with m alternatives V_i (i = 1, 2, . . . , m) evaluated under n criteria (j = 1, 2, . . . , n) in the SVNS environment, with the ratings recorded as SVNNs. Let a weight vector be associated with the criteria, and let ξ = (ξ_1, ξ_2, . . . , ξ_l), ξ_z > 0, Σ_{z=1}^{l} ξ_z = 1, be the weight vector of the l experts. The values given by expert R^(z) for the m alternatives are represented in the decision matrix R^(z) = (α^(z)_{ij})_{m×n}. To select the finest alternative(s), the procedure steps (whose flowchart is presented in Figure 1) are summarized as follows: Step 1: Arrange the SVN decision matrix R^(z) = (α^(z)_{ij})_{m×n} for each decision maker.
Step 2: Normalize the information if required, by converting the cost type criteria into the benefit type.
Step 5: Compute the closeness degree for each expert by Eq. (4.3). Step 6: From Eq. (4.3) we may obtain different orderings based on each expert's opinion, and hence it is difficult to settle on a single result. To overcome this, we aggregate the expert preferences by applying the weights ξ_z > 0, Σ_{z=1}^{l} ξ_z = 1, to each expert. Step 7: Compute the overall closeness degree of each alternative, provided (D_λ)^+_i ≠ 0, and rank the alternatives accordingly.
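Steps 1-7 above can be sketched as a toy implementation. The ideal sets and the Euclidean-type separations below are simplifying assumptions (the paper's Eqs. (4.1)-(4.3) and the measure D_λ are not reproduced); the point is the expert-wise closeness computation followed by weighted aggregation over experts.

```python
import numpy as np

def closeness(R):
    """R: m x n x 3 array of SVNN (t, i, f) triples for one expert.
    PIS takes max t, min i, min f per criterion; NIS the opposite."""
    pis = np.stack([R[:, :, 0].max(0), R[:, :, 1].min(0), R[:, :, 2].min(0)], -1)
    nis = np.stack([R[:, :, 0].min(0), R[:, :, 1].max(0), R[:, :, 2].max(0)], -1)
    d_pos = np.sqrt(((R - pis) ** 2).sum((1, 2)) / R.shape[1])
    d_neg = np.sqrt(((R - nis) ** 2).sum((1, 2)) / R.shape[1])
    return d_neg / (d_pos + d_neg)   # closeness degree per alternative

def group_rank(matrices, xi):
    """Aggregate expert-wise closeness degrees with expert weights xi
    and return alternative indices, best first."""
    agg = sum(w * closeness(R) for w, R in zip(xi, matrices))
    return np.argsort(-agg)

# One expert, two alternatives, one criterion: the dominating
# alternative should come out first.
R1 = np.array([[[0.9, 0.1, 0.1]], [[0.3, 0.5, 0.6]]])
print(group_rank([R1], [1.0]))
```

This mirrors the stated design: closeness is computed per decision matrix first, and only the resulting degrees are aggregated across experts.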

Illustrative example
To illustrate the approach, we consider the following example. A travel agency, Marricot Tripmate, has excelled in providing travel-related services to domestic and inbound tourists. The agency wants to offer its customers more facilities, such as detailed information, online booking capabilities, the ability to book and sell airline tickets, car rentals, hotels, and other travel-related services. For this purpose, the agency intends to find an appropriate information technology (IT) software development company that delivers affordable solutions through software development. To this end, the agency shortlists a set of companies (alternatives), namely Zensar Tech (V_1), NIIT Tech (V_2), HCL Tech (V_3) and Hexaware Tech (V_4), and the selection is held on the basis of different criteria, namely Technology Expertise (V_1), Service Quality (V_2), Project Management (V_3) and Industry Experience (V_4). The agency hires three experts R^(1), R^(2) and R^(3) to evaluate the considered V_i (i = 1, 2, . . . , 5) under V_j (j = 1, 2, 3, 4). For computation, we take λ = 2 and β_1 = β_2 = β_3 = 1/3. Then, the following steps of the stated method are executed to find the best alternative(s).
Step 1: The rating information of each expert is summarized in Table 1.
Step 2: As the V_j are of benefit type, no normalization is needed.
Step 3: The PIA and NIA are computed by Eqs. (4.1) and (4.2), and summarized in Table 2.
Step 4: By applying Eq. (3.1), the positive and negative separation values for each expert are computed and represented in Table 3; for instance, the values of (D_λ)^± for expert R^(1) appear in Table 3.

Figure 1. Flowchart of the proposed approach.
Step 5: By Eq. (4.3), the closeness degrees for each expert R^(z) are computed and summarized in the third column for each expert in Table 3 (measurement values from the ideal alternatives corresponding to each expert). It is seen that for expert R^(1) the best alternative is V_2, while V_1 is best for the other experts. As the best and worst alternatives change from expert to expert, it is hard to select a single optimal one. To resolve this ambiguity, we consider the expert importance and aggregate their values.
Steps 6-7: Since the aggregated closeness degree of V_1 exceeds those of V_2, V_3 and V_4 in turn, the ordering of the alternatives is V_1 ≻ V_2 ≻ V_3 ≻ V_4. Therefore, V_1 is the best choice.
Further, to assess the influence of the parameter λ on the process, we vary λ and re-run the proposed TOPSIS method on the same data. The final rankings are recorded in Table 4. It is clearly seen that the ranking order is not the same for all values of λ. For instance, when λ = 1, 2 the optimal alternative is V_1 while the worst one is V_4; for other values such as λ = 5, 10, . . ., we get V_2 as the best one. Hence, a person can examine the impact of λ during the process and pick the best value accordingly. This analysis helps the decision-maker follow the given DMP more profoundly and encourages him/her to select the parameter according to the requirements of the process. This parameter also makes the approach more manageable than others regarding the selection of the final decision. Table 4. Effect of λ on the ranking process.

Validity test
To verify the performance of the stated method, we examine its validity through the following three testing criteria, as established by [41]. Test criterion 1: "An effective decision-making method should not change the indication of the best alternative on replacing a non-optimal alternative by another worse alternative without changing the relative importance of each decision criterion". Test criterion 2: "An effective decision-making method should follow the transitive property". Test criterion 3: "When a decision-making problem is decomposed into smaller problems and the same decision-making method is applied to the smaller problems to rank the alternatives, the combined ranking of the alternatives should be identical to the original ranking of the un-decomposed problem".

Under criterion 1
For the given problem, V_1 is obtained as the best alternative and V_4 as a non-optimal one. To test under criterion 1, we update the rating values of V_3 with arbitrary worse values for each expert, as tabulated in Table 5. Implementing the proposed TOPSIS method, we compute the final closeness degree of each alternative and obtain 0.6653 for V_1, 0.5938 for V_2, 0.4022 for V_3, and 0.4503 for V_4. Based on this, we obtain V_1 ≻ V_2 ≻ V_4 ≻ V_3, so V_1 is still the best alternative. Therefore, the stated algorithm is valid under test criterion 1. Table 5. Updated rating values of V_3 for each expert.
By merging all the sub-problem rankings, we get V_1 ≻ V_2 ≻ V_3 ≻ V_4, which establishes the validity of the suggested method under test criteria 2 and 3.

Comparative study
For comparison, an examination has been arranged with the existing studies [25, 29] under SVNSs, interpreted as follows. [25] performed a logarithm-similarity-based MCGDM approach to solve DMPs. We implement their approach on the considered data; their procedure steps are organized as follows.

Comparison with the approach given by [25]
Step 1: The information about the alternatives are listed in Table 1.
Step 2: Aggregate the experts' preferences by taking the average of their numbers; the resulting values (the "central decision matrix") are listed in Table 6 (aggregated values by weighted average).
Step 3: From the values of Table 6, compute the ideal alternative V*.
Step 4: Compute the attribute weights ω_i.
Step 5: With the weight information ω, compute the logarithm similarity (LS) values.
Step 6: Based on these values, we obtain the ranking V_2 ≻ V_1 ≻ V_3 ≻ V_4, and hence V_2 is the best choice.

Comparison with the approach given by [29]
By implementing the TOPSIS approach of [29] on the considered data, we initially take all the experts and criteria at the same level. Then, to execute their approach, we aggregate the different expert preferences by using the WA operator as suggested by [5]. Using the Euclidean distance between each V_i and the PIA/NIA, we compute the closeness degrees C_i as C_1 = 0.6152, C_2 = 0.7381, C_3 = 0.5402 and C_4 = 0.3727. Thus, the ordering is V_2 ≻ V_1 ≻ V_3 ≻ V_4 and the best alternative is V_2.
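The WA aggregation step mentioned above can be sketched as follows. The operator form below is the weighted average commonly used for SVNNs in this literature; since [5] is not reproduced here, treat it as an assumed form.

```python
from math import prod

def svnn_wa(values, weights):
    """Weighted-average aggregation of SVNN (t, i, f) triples; weights
    are assumed to sum to 1. Assumed form:
    WA = (1 - prod (1-t_k)^w_k, prod i_k^w_k, prod f_k^w_k)."""
    t = 1 - prod((1 - v[0]) ** w for v, w in zip(values, weights))
    i = prod(v[1] ** w for v, w in zip(values, weights))
    f = prod(v[2] ** w for v, w in zip(values, weights))
    return (t, i, f)

# Aggregating identical opinions should reproduce the opinion itself.
print(svnn_wa([(0.5, 0.2, 0.3), (0.5, 0.2, 0.3)], [0.5, 0.5]))
```

A useful property of this form is idempotency: when all experts give the same SVNN, the aggregate equals that SVNN, which is the sanity check shown in the usage line.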
From the above-computed decisions, it is observed that the best alternative, as well as the ordering of the other alternatives, obtained by the existing approaches differs from that of the proposed approach. These changes are expected, since both existing approaches collapse all decision matrices into a single matrix before deciding the final results. The proposed approach, in contrast, offers a decision for each decision-maker individually and then derives the final decision by considering the decisions of all the experts. Moreover, in our approach each decision-maker has his/her own weight vector for the criteria, which is not available in the existing approaches. Thus, the proposed approach offers advantages over the existing ones.

Proposed clustering method
In this section, we present a novel SVN clustering method based on the proposed similarity measure S to group heterogeneous objects into homogeneous classes. The description of the analysis is given hereafter.

Definition 5.1. [36] For a collection of SVNNs, the matrix Ñ = (s_ik)_{m×m} with s_ik = S(N_i, N_k) (i, k = 1, 2, . . . , m) is called the similarity matrix of the SVNNs. The matrix Ñ has the following properties: (i) 0 ≤ s_ik ≤ 1; (ii) s_ii = 1; (iii) s_ik = s_ki, where i, k = 1, 2, . . . , m.

Next, we present a clustering algorithm based on the proposed measure S_λ, described as follows.
Assume m alternatives {Q_1, Q_2, . . . , Q_m} described by n criteria {B_1, B_2, . . . , B_n}, assessed by an expert in terms of SVNNs. The target of this task is to classify the given Q_i into their equivalence classes. The suggested method is summarized in the following steps:
Step 1: Construct the similarity matrix Ñ = (s_ik)_{m×m}, s_ik = S_λ(Q_i, Q_k) (i, k = 1, 2, . . . , m), where S_λ is computed by Eq. (3.2).
Step 2: Obtain the equivalent similarity matrix (ESM) Ñ^{2^p} by repeated composition of matrices as given in Definition 5.2, until the composition stabilizes.
Step 3: For a chosen confidence level α, construct the α-cutting matrix Ñ_α as per Definition 5.5.
Step 4: Classify identical Q_i and Q_k into the same class.
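Steps 1-4 can be sketched in code. Definitions 5.2 and 5.5 are not reproduced in this excerpt, so the max-min composition and the α-cut rule below are the standard fuzzy-clustering forms assumed from context.

```python
import numpy as np

def compose(S):
    """Max-min composition: (S ∘ S)_ik = max_j min(s_ij, s_jk)."""
    return np.max(np.minimum(S[:, :, None], S[None, :, :]), axis=1)

def equivalent_matrix(S, max_iter=20):
    """Square the similarity matrix until it stabilizes (an ESM)."""
    for _ in range(max_iter):
        S2 = compose(S)
        if np.allclose(S2, S):
            return S
        S = S2
    return S

def alpha_classes(S, alpha):
    """Alternatives whose alpha-cut rows coincide fall in one class."""
    cut = (S >= alpha).astype(int)
    classes = {}
    for i, row in enumerate(map(tuple, cut)):
        classes.setdefault(row, []).append(i)
    return list(classes.values())

# Three objects: the first two are highly similar, the third is not.
S = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
print(alpha_classes(equivalent_matrix(S), 0.5))
```

Raising α in `alpha_classes` splits the objects into more (smaller) classes, matching the α-cut behavior reported for Table 8.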
The above-mentioned algorithm is demonstrated through an example. Consider five brands of mobile phones, say Q_1, Q_2, Q_3, Q_4, Q_5, selected under six criteria, namely price of the mobile phone (B_1), appearance (B_2), memory (B_3), operating system (B_4), performance (B_5) and processor (B_6). The aim is to classify the phones against these criteria. An expert gives the rating of each phone over the B_j in terms of SVNNs; the complete summary of the ratings is listed in Table 7 (rating values of each object). To implement the stated algorithm, we choose λ = 2 and β_1 = β_2 = β_3 = 1/3 and utilize the proposed measure S to cluster the phones Q_i through the following steps: Step 1: Using Eq. (3.2), calculate the degrees of similarity between the phones, S(Q_i, Q_k) (i, k = 1, 2, . . . , 5), obtaining the similarity matrix Ñ. Step 2: Compute the matrix Ñ² using Definition 5.2. Since Ñ⁴ = Ñ², the matrix Ñ² is an ESM.
Step 3: Take α = 0.8637; by Definition 5.5, the α-cutting matrix Ñ_α is obtained. Step 4: From Eq. (5.1), we divide the Q_i into three classes. This means the phones Q_1, Q_2 and Q_3 are more similar to each other than to the alternatives in the other clusters.
Further, by examining various α-cuts, we obtain different classes. A comprehensive analysis based on the α-cut is placed in Table 8, from which we recognize that the decision-maker has only one way to partition the set of alternatives into a particular number of classes. This review unfolds the importance of different values of the confidence level α on the clustering process and investigates the role of α in the flexibility of the algorithm. The confidence level α is chosen by the decision-maker between 0 (smallest) and 1 (biggest). Based on these values, we summarize the clustering results and their corresponding α-level matrices. From the α-cutting matrices, it can easily be noticed that when 0 ≤ α ≤ 0.8309, 0.8309 < α < 0.8450, 0.8450 < α ≤ 0.8637, 0.8637 < α ≤ 0.8791 and 0.8791 < α ≤ 1, the alternatives are classified into 1, 2, 3, 4 and 5 classes, respectively. This reflects that the alternatives become more differentiated as the value of α increases.

Advantages of the proposed work
The major benefits from the proposed approach over the existing approaches are listed as below.
1) This paper highlights the significance of taking the degrees of acceptance, indeterminacy, and non-acceptance together in one envelope in the form of SVNSs. As in real DM problems the membership and non-membership degrees may act independently, this generalization of IFSs is more powerful and innovative for evaluating information.
2) The proposed methodology is extremely flexible due to the presence of the parameter λ in the formulation of both proposed measures. Because of this parameter, the decision-maker has the opportunity to express his/her decision under different semantics. This makes the proposed work friendlier and more adaptable for decision-makers working on different kinds of DM processes.
3) The proposed method uses the notion of TOPSIS for making the final decision. In this approach, the ranking results are not found by aggregating all the decision matrices; instead, they are evaluated first for each individual decision-maker, and the final decision is obtained by considering the results given by every decision-maker. In this manner, the approach displays both the individual and the aggregated decision on the final choice with respect to the reference points.
4) Moreover, the proposed similarity measure can be used to cluster heterogeneous data, which is useful in areas such as data mining, image processing, DM problems, medical imaging, and so on.

Conclusion
The key contributions of the work can be summarized as follows.
1) The examined study employs three independent degrees, namely the MD, the NMD, and the degree of indeterminacy, to capture the vagueness in the data.
2) This paper offers new distance measures for estimating the degree of discrimination between two or more SVNSs. Traditionally, such measurements are computed using either the Hamming or the Euclidean distance [18][19][20], which may not furnish the proper choice to the expert. To address this, revised distance measures are introduced in this work, supplying an alternative way to deal with SVNN information.
3) An extended TOPSIS method has been introduced with the stated distance measures and with the consideration of multiple experts. The advantage of the stated method is that it takes into account not only the degree of discrimination but also the degree of similarity between the observations, avoiding decisions based solely on small distances. Also, the ideal alternatives, i.e., the PIA (V⁺) and NIA (V⁻), are not taken as constants but depend on the given observations. Finally, the presented TOPSIS method includes the additional parameter λ, which gives the decision maker the flexibility to choose among alternatives according to his/her preferences or goals.
4) The MCGDM algorithm based on the recommended TOPSIS method is explained; it is more generalized and, through the parameter λ, more flexible for the decision-maker. The significance of the parameter λ is shown in detail (Table 4). To support its performance, a validity test is examined, ensuring the method's reliability and precision.

5) A new clustering algorithm is presented based on the proposed similarity measures under different confidence levels of the expert. The main objective of this algorithm is to classify heterogeneous objects into homogeneous classes. Its applicability is explained with a numerical example, classifying the objects under different levels of expert preference.
In the future, we shall extend the application of the proposed measures to diverse fuzzy environments as well as to different application fields such as supply chain management, emerging decision problems, brain hemorrhage, and risk evaluation [43][44][45][46][47].