A Study of Supplier Selection Method Based on SVM for Weighting Expert Evaluation

How to choose suppliers scientifically is an important part of the strategic decision-making management of enterprises. Expert evaluation is subjective and uncontrollable; sometimes there exist biased evaluations, which lead to controversial or unfair results in supplier selection. To tackle this problem, this paper proposes a novel method that employs machine learning to learn the credibility of experts from historical data, which is then converted into weights in the evaluation process. We first use a Support Vector Machine (SVM) classifier to classify the historical evaluation data of the experts and calculate the experts' evaluation credibility, then determine the weights of the evaluation experts, and finally assemble the weighted evaluation results to get a preference order for choosing suppliers. The main contribution of this method is that it overcomes the shortcomings of multiple conversions and large losses of evaluation information, maintains the initial evaluation information to the maximum extent, and improves the credibility of the evaluation results as well as the fairness and scientific rigor of supplier selection. The results show that it is feasible to classify the past evaluation data of the evaluation experts with the SVM classification model, and that the expert weights determined on the basis of the experts' evaluation credibility are adjustable.


Introduction
Supplier evaluation and selection is an important part of the strategic decision-making management of enterprises and an important branch of enterprise supply chain management research [1,2]. In the operation of enterprises, production and operation behaviors such as the procurement of raw materials, machinery, and equipment, as well as external technologies and services, are generally related to the choice of suppliers [3,4]. Choosing suppliers through scientific decision-making plays a vital role in improving the market competitiveness of enterprises and maximizing economic and social benefits.
The existing domestic and overseas research on supplier selection mainly focuses on two aspects. The first is the construction of the evaluation index system in the decision-making process of supplier selection, mainly concerning the industry areas of enterprises and the personalized requirements for suppliers [5][6][7]. The second is how to select the evaluation method and model scientifically [8][9][10], such as the best-worst method (BWM), TOPSIS, and fuzzy methods. The research in [11] identified the 5 essential barriers of the supply chain and proposed a Fuzzy-AHP methodology to compare the weights of these barriers. A combined FUCOM Rough SAW approach has been used in supplier selection to achieve sustainability in resources and environment [12]. The best-worst method was used to decide the weights in green supplier selection, which aims to provide environment-friendly information system products [13]. In [14], the Fuzzy-TOPSIS technique is used to decide the importance of selection criteria, which helps select dairy suppliers. Chakraborty et al. [15] try to resolve the uncertainty in supplier selection with D numbers, and MARCOS is used for ranking alternative suppliers. Zhu et al. [16] built a closed-loop supply chain model and focus on the recycling behaviors of the members of this supply chain. Kurpjuweit et al. [17] developed a typology of three supplier selection archetypes. The work in [18] shows that the application of a structured decision-making technique is vital, especially under complex conditions that include both qualitative and quantitative criteria. Pearn et al. [19] considered a two-stage method composed of quality verification and selection decision for multiple-line supplier selection problems. Xie et al. [20] try to solve the uncertain yield and demand in supplier selection. However, these assembly methods focus only on the mathematical operations during the process of assembling evaluations and do not fully consider the loss of evaluation information in the assembly process. In fact, the more times evaluation information is assembled and converted with mathematical methods, the larger the loss of information.
The Support Vector Machine (SVM) is a supervised machine learning method based on statistical learning theory, which has become a hot research topic in the field of artificial intelligence after artificial neural networks in recent years. The SVM method is built on the principles of the Vapnik-Chervonenkis (VC) dimension and structural risk minimization in statistical learning theory. The VC dimension is a core concept of statistical learning theory; it is an important indicator describing the learning ability and complexity of a function set. SVM uses limited sample information to find the best compromise between model complexity and learning ability, and thus obtains good generalization ability [21]. It has been widely used in many fields, such as classification [22,23], feature selection [24], pattern recognition [25], and troubleshooting [26].
This paper proposes to use an SVM classifier in the supplier selection process. Experts' past evaluation data are classified and used to calculate the evaluation credibility, and the evaluation credibility is used to determine the evaluation experts' weights.
Then, the evaluation results are directly assembled with simple mathematical calculations; this not only retains the initial evaluation information maximally, but also improves the credibility of the evaluation results and realizes fairness and scientific rigor in the decision-making of supplier selection.
The paper aims to solve these problems in supplier selection and improve its fairness and reasonability. The main contribution is that SVM is used to evaluate the credibility of experts, which is subsequently converted into weights in supplier evaluation. Our method avoids multiple conversions and large losses of evaluation information, largely keeps the initial evaluation information, and improves the credibility of the evaluation results.
The paper is organized as follows: Section 1 overviews the motivation, related works, and our basic idea. Section 2 describes the method and theories used in our paper. Section 3 describes how we process the data in an unbiased way to meet the requirements of the SVM classifier. Section 4 presents the whole processing flow of our method: the SVM classifier is trained and then used to infer the credibility of the experts, which is eventually converted into the experts' weights in supplier selection.

Theoretical Model and Methodology Design
2.1. Theoretical Model: SVM Classifier. Traditional statistical research is based on the law of large numbers, which is an approximation theory for huge numbers of samples; in reality, however, only limited numbers of samples are available, which cannot meet the requirements of the theory. To solve this problem, Vapnik et al. proposed a machine learning theory called statistical learning theory (SLT). Cortes and Vapnik proposed the linear support vector machine [27], Boser and Vapnik introduced kernel techniques and proposed the nonlinear support vector machine [28], and Drucker et al. extended it to support vector regression [29].
The original binary classification model was later extended to the multiclass support vector machine [30] and the structural support vector machine for structured prediction [31].
Assume the training set with n samples is

{(x_i, y_i)}, i = 1, 2, ..., n, (1)

where x_i is an input vector and y_i is its label. For a set of functions {f(x, w)}, there exists an optimal function f(x, w_0) that minimizes the expected risk when it is used to evaluate unknown samples:

R(w) = ∫ L(y, f(x, w)) dF(x, y), (2)

where {f(x, w)} is the set of prediction functions, L(y, f(x, w)) is the loss function that measures how far the prediction f(x, w) deviates from the real value, and F(x, y) is the joint probability distribution.
In a practical machine learning context, the expected risk can be neither calculated nor minimized, because the joint probability F(x, y) is unknown [32]. The Empirical Risk Minimization (ERM) method is widely used in traditional machine learning; it aims at minimizing the empirical risk R_emp(w), but this is not reasonable when only a limited number of samples is available. In statistical learning theory, under the worst-case distribution, the empirical risk satisfies the relation in equation (3) with probability 1 − η:

R(w) ≤ R_emp(w) + sqrt((h(ln(2n/h) + 1) − ln(η/4)) / n), (3)

where n is the number of samples and h is the VC dimension. For a practical classification problem, the number of samples is fixed; the higher the VC dimension (i.e., the more complex the classifier), the larger the confidence interval, which leads to a larger gap between the real risk and the empirical risk [33]. Therefore, when we design a classifier, not only the empirical risk but also the VC dimension must be minimized, so as to shrink the confidence interval and minimize the expected risk; this principle is called structural risk minimization (SRM) [34]. The Support Vector Machine is a machine learning method built on the principles of the VC dimension and structural risk minimization, and it is specialized for problems with a limited number of samples [35]. SRM improves the generalization ability of models and imposes no limitation on the dimension of the data. For linear classification, the classification plane is the plane that has the largest distance from each class [36,37]; for nonlinear classification, a high-dimensional transformation is applied to the data, turning the nonlinear classification problem into a linear one in a higher-dimensional space [38]. SVM was originally proposed to solve linearly separable problems, and its theory is developed from the optimal classification hyperplane of such problems. Suppose the training sample set given in formula (1) is linearly separable; that is, there exists a classification
hyperplane g(x) = w · x + b = 0 that can divide the n samples correctly and has the maximum distance from each class. This hyperplane is the optimal classification hyperplane, and the distance between the nearest sample in each class and the optimal classification plane is called the margin.
Therefore, the optimal hyperplane is also known as the maximum-margin hyperplane, as shown in Figure 1.
The optimal classification hyperplane separates the two classes of samples correctly and makes the samples of each class fall entirely on one side of the hyperplane, which means that all samples satisfy

y_i[(w · x_i) + b] > 0, i = 1, 2, ..., n.

By adjusting the scale of w and b, this condition can be written so that g(x) of the first class is larger than or equal to 1, and g(x) of the second class is less than or equal to −1. These two inequalities can be combined into a single one:

y_i[(w · x_i) + b] ≥ 1, i = 1, 2, ..., n.

The values of g(x) for the samples on the boundary of each class equal 1 and −1, respectively, so the margin between the two classes is M = 2/‖w‖; hence, the problem of finding an optimal hyperplane converts into an optimization problem under inequality constraints:

min (1/2)‖w‖², s.t. y_i[(w · x_i) + b] ≥ 1, i = 1, 2, ..., n,

which can be equivalently converted into the following optimization problem using the Lagrange method:

max Σ_i a_i − (1/2) Σ_i Σ_j a_i a_j y_i y_j (x_i · x_j), s.t. Σ_i a_i y_i = 0,

where a_i ≥ 0, i = 1, 2, ..., n, are the Lagrange coefficients. The optimal classification function can be obtained with a quadratic programming method, and the solution is

f(x) = sgn(Σ_i a_i y_i (x_i · x) + b).

For linearly nonseparable problems, we can use a nonlinear mapping Φ: R^n → H to map the samples of the original input space into a higher-dimensional feature space and then construct the optimal classification hyperplane there. The dot product operations involved in mapping samples into the higher-dimensional space are computationally intensive. Bajard et al. and Hamidzadeh et al. proposed replacing the dot product operation with kernel functions K(x_i, x_j) = Φ(x_i) · Φ(x_j) that satisfy the Mercer condition, which reduces the computational complexity [39,40].
The support vector machine can realize various kinds of nonlinear classifiers by selecting different kernel functions.
There are three common types of kernel functions:

(1) Polynomial kernel function:

K(x_i, x_j) = [(x_i · x_j) + 1]^q,

where q is the order of the polynomial.

(2) Radial basis function (RBF):

K(x_i, x_j) = exp(−‖x_i − x_j‖² / σ²),

in which σ is the width of the radial basis function. Each center of a basis function corresponds to a support vector, and its position, width, number, and weight can be determined by the training process.

(3) Sigmoid kernel function:

K(x_i, x_j) = tanh[v(x_i · x_j) + c].

An SVM classifier that employs the Sigmoid function, when v and c satisfy certain conditions, is equivalent to a multilayer perceptron neural network containing only one hidden layer, where the number of nodes in the hidden layer equals the number of support vectors.

Our methodology consists of four steps: (1) preprocess the experts' historical evaluation data; (2) train an SVM classifier and use it to classify each expert's past evaluation data and calculate the evaluation credibility; (3) determine the weight of each expert according to the credibility; (4) evaluate suppliers with the experts' weights and evaluation data. The whole workflow is shown in Figure 2.
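The three kernel types above can be sketched as plain Python functions (a minimal illustration; the parameter defaults and helper names are our own assumptions, not values from the paper):

```python
import math

def dot(x, y):
    # dot product of two equal-length vectors
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, q=2):
    # K(x_i, x_j) = [(x_i . x_j) + 1]^q, with q the polynomial order
    return (dot(x, y) + 1) ** q

def rbf_kernel(x, y, sigma=1.0):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma^2), with sigma the width
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / sigma ** 2)

def sigmoid_kernel(x, y, v=1.0, c=-1.0):
    # K(x_i, x_j) = tanh(v (x_i . x_j) + c)
    return math.tanh(v * dot(x, y) + c)

x1, x2 = [1.0, 2.0], [2.0, 0.0]
print(polynomial_kernel(x1, x2))  # (2 + 1)^2 = 9
print(rbf_kernel(x1, x1))         # identical inputs give 1.0
```

Each function takes two feature vectors and returns a scalar similarity, which is the only interface the SVM dual problem needs from a kernel.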
Our method simplifies the mathematical operations during the process of assembling evaluations, which reduces the loss of information. Moreover, the SVM classifier, which performs well with a limited number of samples, makes the evaluation of credibility straightforward. Our method is therefore reasonable and effective in supplier selection.

Sample Set of the SVM Classifier.
SVM is a supervised machine learning method; it involves two processes in solving a classification problem: learning and classification. During the learning process, training data is used to train a classifier according to a certain policy; then, the classifier is used to classify the input data samples [41]. In our paper, the experts' evaluation data set is defined as samples (x_i, y_i), where x_i = (x_i1, x_i2, ..., x_i(2l))^T is an input vector of 2l dimensions obtained from the preprocessed expert evaluation data, and y_i ∈ {−1, +1} is its corresponding output. In our work, "+1" means that the evaluation of an expert is credible, and x_i is called credible data; "−1" means that the evaluation of an expert is biased, and x_i is called biased data. In our experiment, 80% of all expert evaluation samples are used as training data, and the remaining 20% are used as validation data.
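The 80/20 split described above can be sketched as follows (the feature vectors here are random stand-ins for the preprocessed evaluation data, not real samples):

```python
import random

# Each sample is (x_i, y_i) with y_i in {+1 credible, -1 biased}.
# The 6-dimensional dummy features stand in for the 2l-dimensional vectors.
samples = [([random.random() for _ in range(6)], random.choice([+1, -1]))
           for _ in range(100)]

random.shuffle(samples)            # randomize order before splitting
cut = int(0.8 * len(samples))      # 80% training boundary
train, validation = samples[:cut], samples[cut:]

print(len(train), len(validation))  # 80 20
```

Shuffling before the split avoids any ordering bias in how the historical records were collected.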

Kernel Function and Parameter of SVM Classifier.
The kernel function is a very important part of the SVM classifier and directly affects the classification result. However, Mercer's theorem only gives some alternative functions that can be used in the support vector algorithm; it does not explain how to construct the nonlinear transformation function Φ(x) or the kernel functions K(x_i, x_j), so the type and parameters of the kernel function must be determined according to the task at hand.
Compared with the polynomial kernel function, the radial basis kernel function has fewer parameters, computes relatively fast, and adapts well to parameter adjustment [42]; the Sigmoid kernel function has only two parameters, but it cannot be represented as a dot product of two vectors in feature space [43,44]. Therefore, the radial basis function is used as the kernel function in our research, according to the characteristics of our samples.
The optimal classification function is then

f(x) = sgn(Σ_i a_i y_i exp(−‖x − x_i‖² / σ²) + b).

We employ the K-fold cross-validation method to select and optimize the parameters of the SVM classifier: first, the samples are divided into K mutually disjoint subsets of the same size; every subset is used as the validation set once, while the other K−1 subsets are used as the training set to train the classifier. Then, after traversing all K alternatives, we select the parameters that give the smallest validation error as the optimal parameters of the classifier.
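The K-fold procedure above can be sketched in a few lines. The `error_fn` below is a toy stand-in for "train on the K−1 folds, measure error on the held-out fold"; in the paper that step is the SVM training itself.

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k mutually disjoint folds (round-robin).
    return [list(range(i, n, k)) for i in range(k)]

def cross_validate(n, k, candidate_params, error_fn):
    # Keep the parameter with the smallest average validation error.
    best_param, best_err = None, float("inf")
    for param in candidate_params:
        errs = []
        for fold in k_fold_indices(n, k):
            held_out = set(fold)
            train_idx = [j for j in range(n) if j not in held_out]
            errs.append(error_fn(param, train_idx, fold))
        avg = sum(errs) / k
        if avg < best_err:
            best_param, best_err = param, avg
    return best_param

# Toy error function in which parameter 8 happens to be optimal.
best = cross_validate(100, 10, [1, 8, 64], lambda p, tr, va: abs(p - 8))
print(best)  # 8
```

The folds are disjoint and jointly cover all samples, so every sample is validated exactly once per parameter candidate.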

Evaluation Experts' Credibility and Evaluation Weight.
The evaluation data of an evaluation expert in the n most recent bidding activities is selected as the input of the SVM classifier, and the credibility of this evaluation expert is defined according to the output of the SVM classifier as

z_i = N_i / (N_i + M_i), (15)

where N_i is the number of times the output is "+1," and M_i is the number of times the output is "−1." The credibility of an evaluation expert reflects the quality of his past evaluation data. We define a normalized weight w_i of his historical evaluation data based on the evaluation credibility z_i:

w_i = z_i / Σ_j z_j. (16)
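A minimal sketch of this step, assuming credibility is the fraction of "+1" outputs among the classifier's verdicts (the counts below are made-up illustration data, not results from the paper):

```python
def credibility(n_credible, m_biased):
    # z_i = N_i / (N_i + M_i): fraction of past evaluations judged credible
    return n_credible / (n_credible + m_biased)

def normalized_weights(z):
    # w_i = z_i / sum(z): weights over all experts sum to 1
    total = sum(z)
    return [zi / total for zi in z]

# (N_i, M_i) per expert: counts of "+1" and "-1" classifier outputs
outputs = [(9, 1), (10, 0), (6, 4), (7, 3), (8, 2)]
z = [credibility(n, m) for n, m in outputs]
w = normalized_weights(z)
print([round(zi, 2) for zi in z])  # [0.9, 1.0, 0.6, 0.7, 0.8]
```

Because the weights are a simple normalization of the credibilities, an expert with more biased past evaluations contributes proportionally less to the assembled result.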

Data Preprocessing
During supplier bidding activities, different evaluation attributes and evaluation criteria are applied because of the differences between purchasing categories, and the evaluation experts and suppliers vary in each bidding activity. This means that the original evaluation data of the experts are not directly comparable and must be preprocessed beforehand to make them comparable, more reasonable, and suitable for feeding into an SVM classifier.
Because the original sample data itself carries the key information about the evaluation characteristics of the experts, and different preprocessing methods retain this characteristic information to different degrees, only an appropriate preprocessing method can unify the units of the data without affecting the classification result. In this paper, a combined method is used to preprocess the evaluation data of the experts; the process is as follows:

Normalization Processing of Group Evaluation Data.
Assume that x_ij^m is the evaluation data that the m-th evaluation expert gave to the i-th supplier's j-th attribute; after the group normalization processing, we get x_ij^m(1), where i indicates the i-th supplier, i = 1, 2, ..., k; j indicates the j-th evaluation attribute, j = 1, 2, ..., l; and m indicates the m-th evaluation expert, m = 1, 2, ..., h. Taking a public bidding activity for goods as an example, four suppliers (named P1, P2, P3, and P4) participated in the bid, and five evaluation experts (named A, B, C, D, and E) evaluated these suppliers. The total evaluation score is 100, of which 30 points is the objective score related to the quoted price, and 70 points is the score given by the evaluation experts. In this paper, we only consider the expert evaluation score; the original evaluation data for the different attributes of the suppliers are shown in Table 1, in which "bidding responsiveness" refers to the degree of matching with the bidding requirements, and the value in parentheses is the maximum score for each attribute.
To better understand the evaluation data from experts A, B, C, D, and E, we use the box plots in Figure 3 to illustrate the distribution of the evaluation data. Figures 3(a) to 3(e) correspond to the five different indices; e.g., Figure 3(a) shows the distribution of the scores that the experts gave to the technology index for suppliers P1 (blue box), P2 (orange box), P3 (grey box), and P4 (yellow box).
From this figure, we can see that the score distributions of P1 and P3 are more concentrated across all five indices than those of P2 and P4, and the average scores of all indices are relatively higher for P1 and P3.
The evaluation data in Table 1 contain the initial weights of the evaluation attributes; we need to remove these weights before normalization. The normalized evaluation data are shown in Table 2.
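The preprocessing above can be sketched as follows. The paper's exact group normalization formula is not reproduced in this excerpt, so as an assumption we (1) remove the initial attribute weight by dividing each score by the attribute's maximum points, then (2) normalize each attribute column over the group of experts by its column sum; the score values are illustrative, not those of Table 1.

```python
# expert -> scores per attribute (illustrative values only)
raw = {"A": [25, 18], "B": [27, 15], "C": [24, 17]}
max_points = [30, 20]  # the "value in parentheses": max score per attribute

def remove_attribute_weights(scores):
    # divide by the attribute maximum so attributes share the [0, 1] scale
    return [s / m for s, m in zip(scores, max_points)]

unweighted = {e: remove_attribute_weights(s) for e, s in raw.items()}

# group normalization: divide each value by its column sum over all experts
cols = range(len(max_points))
col_sums = [sum(v[j] for v in unweighted.values()) for j in cols]
normalized = {e: [v[j] / col_sums[j] for j in cols]
              for e, v in unweighted.items()}

print({e: [round(x, 3) for x in v] for e, v in normalized.items()})
```

After this step every attribute column sums to 1 over the group, so scores given under different criteria and magnitudes become comparable.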

Normalization of Individual Evaluation Data.
The original evaluation data x_ij^m of the m-th evaluation expert is normalized with respect to the individual expert to get the normalized data x_ij^m(2), where i indicates the i-th supplier, i = 1, 2, ..., k; j indicates the j-th evaluation attribute, j = 1, 2, ..., l; and m indicates the m-th evaluation expert, m = 1, 2, ..., h.
Similarly, we remove the attribute weights from the data in Table 1 and put them into formula (18) to get the normalized data of each individual expert, as shown in Table 3. After the data preprocessing, the evaluation data of the experts not only keep the critical information of the original evaluation data but also remove the barriers between different attribute criteria and magnitudes. The evaluation sample (x_i, y_i) of an expert in formula (11) is formed from the evaluation data in Tables 2 and 3 (which form x_i) and its corresponding output label y_i.

Sample Source and Sample Set Distribution.
The experimental sample data in this study are derived from the evaluation records of evaluation experts who are often involved in supplier bidding activities. We extracted 450 groups of evaluation data that meet the criteria as experimental samples, and all the original evaluation data were normalized by group and by individual with the method shown in the data preprocessing section to form a valid dataset. The numbers of positive and negative samples were well balanced when selecting the experimental samples to improve the accuracy of the SVM classifier; 80% of the total sample data was randomly selected to form the training set for learning and optimizing the parameters of the classifier, and the remaining 20% of the samples composed the test set used to test the accuracy of the classifier.

Training an SVM Classifier.
The experimental tools for this study were based on a popular SVM software package, LIBSVM [45]. Since the LIBSVM package has its own format requirements for input data, the training data and validation data mentioned above were first converted to the format required by the "svmtrain" and "svmpredict" functions. The grid.py tool of LIBSVM with 10-fold cross-validation was used to find the optimal values of the parameters c (penalty factor) and gamma (variance of the RBF kernel function). When model performance is the same, the parameter combination with the smaller penalty factor is preferred in order to reduce the computation time. Eventually, the optimal parameter combination c = 8, gamma = 0.0625 was selected. The SVM classifier was trained with the "svmtrain" function, and the "svmpredict" function was then applied to the validation samples to evaluate the classifier model; the accuracy of the classification is 96.67%.

Calculating Evaluation Credibility and Weight of Evaluation Expert.
Taking the five evaluation experts in a procurement of goods as an example, all the evaluation records of these five experts in the last 10 procurement evaluation activities were collected; if an expert's past evaluation data is insufficient, the evaluation credibility of that expert is assigned the average value. The extracted raw evaluation data was preprocessed, converted to the format required by the LIBSVM package, and then fed into the SVM classifier. The evaluation credibility of each expert was calculated using formula (15) based on the output of the SVM classifier, and the results are shown in Table 4. Based on the original evaluation values for the 4 suppliers given by the 5 evaluation experts, with the initial weights of the attributes removed, we can get the "expert-supplier" evaluation matrix E. With this evaluation matrix E and the normalized weights w_i of the 5 evaluation experts mentioned earlier, the following formula is used to combine all experts' evaluation results:

p_j = Σ_i w_i e_ij,

and we get the evaluation results for each supplier as p_1 = 4.568, p_2 = 4.124, p_3 = 4.619, p_4 = 4.201. Therefore, in this supplier evaluation and selection process, the preference order of the suppliers is p_3 > p_1 > p_4 > p_2.
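The assembly step above can be sketched as a weighted column sum p_j = Σ_i w_i·e_ij over the expert-supplier matrix. The matrix entries and weights below are illustrative stand-ins, not the paper's actual values; the ranking logic is the point.

```python
w = [0.22, 0.26, 0.14, 0.17, 0.21]  # normalized expert weights (sum to 1)
E = [                               # rows = experts i, columns = suppliers j
    [4.6, 4.1, 4.7, 4.2],
    [4.5, 4.0, 4.6, 4.3],
    [4.7, 4.2, 4.5, 4.1],
    [4.4, 4.1, 4.7, 4.2],
    [4.6, 4.2, 4.6, 4.2],
]

# p_j = sum over experts of w_i * e_ij
p = [sum(w[i] * E[i][j] for i in range(len(w))) for j in range(len(E[0]))]
order = sorted(range(len(p)), key=lambda j: -p[j])  # best supplier first

print([round(v, 3) for v in p])
print(["P%d" % (j + 1) for j in order])
```

Because the assembly is a single weighted sum of the normalized scores, no further conversions of the evaluation information are needed, which is the information-preservation argument made throughout the paper.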

Discussion of Weight Selection in Supplier Selection Management

When the weights of evaluation experts are determined by the experts' credibility, different weighting formulas can produce different expert weight coefficients. For example, if we use formula (23) to generate the evaluation expert weights w_i′ and put the experts' credibility z from Table 4 into formulas (24) and (23), we get the following weights of the evaluation experts: w′ = (w_1′, w_2′, w_3′, w_4′, w_5′) = (0.2118, 0.2634, 0.1410, 0.1741, 0.2097). The comparison of the weight coefficients obtained from the different weighting formulas for the 5 experts is shown in Table 5. The two sets of evaluation expert weight coefficients generated by formulas (16) and (23) differ in value, but their trends of change are the same; that is, both sets of weight coefficients are linearly related to the experts' evaluation credibility. Further analysis shows that weight coefficients with different degrees of dispersion can be obtained from different formulas while the evaluation credibility stays the same.
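The point above can be illustrated numerically. Formula (23) is not reproduced in this excerpt, so as an assumed alternative we compare the linear weighting w_i = z_i/Σz with a squared-credibility weighting w_i′ = z_i²/Σz²: both preserve the experts' credibility ordering, but the squared version is more dispersed. The credibility values are illustrative.

```python
z = [0.9, 1.0, 0.6, 0.7, 0.8]  # illustrative expert credibilities

def normalize(values):
    total = sum(values)
    return [v / total for v in values]

w_linear = normalize(z)                      # analogous to formula (16)
w_squared = normalize([zi ** 2 for zi in z]) # assumed alternative weighting

def spread(weights):
    # dispersion of a weight vector: range between max and min weight
    return max(weights) - min(weights)

# Both formulas rank the experts identically...
same_order = (sorted(range(5), key=lambda i: w_linear[i]) ==
              sorted(range(5), key=lambda i: w_squared[i]))
print(same_order)                                   # True
# ...but the squared weighting spreads the weights further apart.
print(spread(w_squared) > spread(w_linear))         # True
```

This matches the observation that an enterprise can tune the dispersion of expert weights to its procurement needs without changing the underlying credibility ranking.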
This result confirms that we can adjust the degree of dispersion of the expert weights according to the specific needs of an enterprise's procurement projects when using evaluation credibility to determine the experts' weights. In addition, the comparison between this method and the mean weight method is shown in Figure 4. The expert weight coefficients obtained on the basis of the evaluation credibility of experts all share the same trend of change, which also verifies that our method is scientific and universal.
The sensitivity analysis of this research method shows no significant difference in the experts' evaluation credibility whether the evaluation data of an expert are collected from the last 10 or the last 50 procurement evaluation activities.
This verifies that the evaluation credibility obtained for each expert has good stability. However, the SVM classifier is sensitive to the choice of the kernel function and its parameters; we selected the kernel function and its parameters in this research based on our experience.

Conclusions
With the help of artificial intelligence tools, this paper explores a novel method of using the evaluation credibility of experts to determine the weights of the experts' evaluations, so as to effectively assemble the evaluation results and optimize supplier selection. This study not only demonstrates the feasibility of using the SVM classification model to classify the experts' past evaluation data but also verifies that the expert weights determined on the basis of the experts' evaluation credibility are universal. In enterprise supplier selection practice, the method can be used to adjust the evaluation weights of different experts according to the specific needs of procurement projects, or to adjust the weight assignment of the same expert across different procurement projects. Certainly, there are still some limitations in our research: if the sample data is large, training the SVM classifier will take more time. In addition, the performance of the SVM classifier depends mainly on the selection of the kernel function; at present, the kernel function and its parameters are selected manually, and there is no better way to solve this problem than relying on experience. In the future, we will endeavor to find more appropriate kernels and parameters to improve the SVM classifier model and explore more effective ways to integrate the experts' evaluation results.

Figure 2 :
Figure 2: Workflow of our method.

Figure 3 :
Figure 3: Distribution of evaluation data of experts (a-e).

Table 1 :
Original evaluation data from evaluation experts.

Table 2 :
Normalization of a group of evaluation data.

Table 3 :
Normalization of individual evaluation data.

Table 4 :
Results of the classification of evaluation data and the evaluation credibility of the experts.

Table 5 :
Comparison of weight coefficients related to evaluation experts.