Software Defect Prediction Based on Fuzzy Weighted Extreme Learning Machine with Relative Density Information

To identify software modules that are more likely to be defective, machine learning has been used to construct software defect prediction (SDP) models. However, several previous works have found that the imbalanced nature of software defect data can decrease model performance. In this paper, we discuss how to handle imbalanced data distributions in the context of SDP, with the aim of finding better methods. Firstly, a relative density is introduced to reflect the significance of each instance within its class; it is irrelevant to the scale of the data distribution in feature space and hence more robust than absolute distance information. Secondly, a K-nearest-neighbors-based probability density estimation (KNN-PDE) alike strategy is utilised to calculate the relative density of each training instance. Furthermore, the fuzzy memberships of samples are designed based on relative density in order to eliminate classification errors coming from noise and outlier samples. Finally, two algorithms are proposed to train software defect prediction models based on the weighted extreme learning machine. This paper compares the proposed algorithms with traditional SDP methods on benchmark data sets. The results show that the proposed methods have much better overall performance in terms of G-mean, AUC, and Balance. The proposed algorithms are more robust and adaptive to SDP data distribution types, can more accurately estimate the significance of each instance, and assign identical total fuzzy coefficients to the two classes regardless of data scale.


Introduction
SDP (software defect prediction) [1] has become an active research topic in software engineering, drawing growing interest from both academia and industry. It can be formulated as a learning problem and is used to facilitate software testing and to save testing cost. Various machine learning methods have utilised software defect training data sets to build prediction models, among which Random Forest [2] and Naive Bayes [3] were proved to have relatively stable performance. However, class imbalance [4][5][6][7] is a common problem in SDP data sets, which can affect model performance. The distribution of software defects across software modules roughly conforms to the Pareto principle, also known as the 80-20 rule: 80% of the defects are concentrated in 20% of the program modules, and the number of nondefective modules is much larger than the number of defective modules. Hence, the prediction accuracy on the minority class tends to be lower.
Previous studies [8,9] have indicated that a model tends to fail when it is applied to data with the class imbalance problem. To address this problem, imbalance techniques such as ROS (random oversampling) [10], RUS (random undersampling) [11], and SMOTE (synthetic minority oversampling technique) [12] have been considered for constructing SDP models. In addition, Wang and Yao [13] analysed three different types of class imbalance methods for software defect prediction. They found that their proposed ensemble approach DNC (Dynamic Adaboost.NC) is better than ROS, RUS, and SMOTE. DNC adjusts its parameter automatically during the training process, which can further improve the prediction model's performance. However, the above prediction models may encounter the following problems: (1) choosing suitable coefficients for different classes, (2) abandoning the instances in small disjunctions, and (3) estimating the wrong class boundary, further resulting in unexpected SDP classification results.
In this paper, we present a more robust representation of data distribution information called relative density, which can be extracted by a K-nearest-neighbors-based probability density estimation (KNN-PDE) [14][15][16] alike strategy, to evaluate the significance of each training instance and to design the corresponding fuzzy membership function. In contrast to Euclidean-distance-based measures, the relative density is irrelevant to the scale of the data distribution in feature space. Meanwhile, it can also reflect the proportional relation of different instances within a class. Moreover, the KNN-PDE alike strategy has another merit: there is no need to normalize the fuzzy coefficients after acquiring the relative density information of all training instances. The fuzzy membership function is then designed based on KNN-PDE and assigns larger weights to high-density instances. This paper uses the fuzzy values as the weights of training instances and embeds them into the weighted extreme learning machine (WELM), which can handle noise and outliers effectively. WELM is selected as the baseline classifier based on three observations: (1) compared to other classifiers, WELM always has better or at least comparable generalization ability and classification performance [17]; (2) it can tremendously save training time compared to other classifiers [18]; and (3) it can deal with data with an imbalanced distribution based on cost-sensitive learning [19]. Finally, two algorithms based on WELM are proposed: one relies on the intraclass relative density and the other depends on the interclass relative density. That means the first membership function assigns larger weights to high-density instances, while the second designates larger weights to the examples nearer to the real classification boundary.
To evaluate the algorithms' effectiveness, this paper performed a comparison with previous works on the benchmark data sets, and the experimental results indicate that the proposed algorithms generally produce better or at least comparable performance in terms of the measures G-mean, AUC, and Balance. The remainder of this paper is structured as follows. Section 2 introduces some a priori knowledge related to this work, including software defect prediction, extreme learning machine, and weighted extreme learning machine. Section 3 describes the proposed methods in detail. The experiments and analysis are given in Section 4, and Section 5 concludes the research and provides suggestions for future work.

Related Work
In this section, some preliminaries are presented, including software defect prediction, extreme learning machine, and weighted extreme learning machine.
2.1. Software Defect Prediction. SDP models are expected to improve software quality and reduce the maintenance cost of software systems. Researchers have utilised defect prediction data sets to build comparable models for studies. So far, a great number of studies have been devoted to metrics describing code modules and to learning algorithms for creating SDP models. A variety of machine learning methods have been proposed and compared for SDP problems, such as neural networks [20], decision trees [21], Naive Bayes [22], and support vector machines [23]. However, the above methods ignore the effect of class imbalance on model performance [1]; that is, the number of defective instances differs greatly from the number of nondefective instances. It is a great challenge for most conventional classification algorithms to work with data that have an unbalanced class distribution, because they may ignore the minority class, which can be the more valuable one in a wide range of applications. Thus, some class imbalance learning techniques have been utilised to reduce this negative effect. The work in [24] studied which type of metrics is useful for handling class imbalance based on static code. An undersampling approach was proposed to balance training data [25] and to check how little information is required to learn a defect predictor; the authors found that throwing away data does not affect the performance of the selected predictors. In addition, ensemble algorithms [26] and their cost-sensitive variants were studied and shown to be effective if a proper cost ratio can be set. Issam et al. [27] implemented software defect prediction using ensemble learning on selected features (greedy forward selection). Yang et al. [9] proposed an ensemble learning approach for just-in-time defect prediction, which contains two layers to improve the performance of SDP.
Besides the works introduced above, there are other existing works on software defect prediction which will not be listed, because some of them do not consider the data distribution and several just chose basic sampling methods with different learning methods. In Section 1, it was argued that WELM has three advantages over traditional learning methods. Therefore, this paper only studies how to build more robust SDP models based on the data distribution and compares them with the sampling methods through extensive experiments and comprehensive analyses.

Extreme Learning Machine.
Extreme learning machine (ELM), proposed by Huang et al. [28], is a specific learning algorithm for single-hidden-layer feedforward neural networks (SLFN). The main characteristic of ELM which distinguishes it from conventional learning algorithms for SLFN is the random generation of hidden nodes. Therefore, ELM does not need to iteratively adjust parameters to approach their optimal values; thus it has faster learning speed and better generalization ability. Previous research has indicated that ELM can produce better or at least comparable generalization ability and classification performance to SVM and the multilayer perceptron (MLP) while consuming only tenths or hundredths of the training time of SVM and MLP.
Let us consider a classification problem with N training instances and m categories; the ith training instance can be represented as (x_i, t_i), where x_i is an n × 1 input vector and t_i is the corresponding m × 1 output vector. Suppose that there are L hidden nodes in ELM and that all weights and biases on these nodes are generated randomly. Then, for the instance x_i, its hidden layer output can be represented as a row vector h(x_i) = [h_1(x_i), h_2(x_i), . . . , h_L(x_i)]. The mathematical model of ELM can be described as

Hβ = T, (1)

where H is the hidden layer output matrix over all training instances, T is the target matrix, and β is the weight matrix of the output layer. In equation (1), only β is unknown, so the least-squares algorithm is applied to acquire its solution:

β = H†T, (2)

where H† denotes the Moore-Penrose generalized inverse of the hidden layer output matrix H, which guarantees that the solution is the least-norm least-squares solution of equation (1). According to previous work, ELM can also be trained from the viewpoint of optimization. In the optimization version of ELM, we wish to simultaneously minimize ‖Hβ − T‖² and ‖β‖², so the problem can be described as

minimize (1/2)‖β‖² + (C/2) Σ_{i=1}^{N} ‖ξ_i‖², subject to h(x_i)β = t_i^T − ξ_i^T, i = 1, . . . , N, (3)

where ξ_i = [ξ_{i,1}, ξ_{i,2}, . . . , ξ_{i,m}] denotes the training error vector of the m output nodes with respect to the training instance x_i and C is the penalty factor, representing the trade-off between the minimization of training errors and the maximization of generalization ability. Obviously, this is a typical quadratic programming problem that can be solved via the Karush-Kuhn-Tucker (KKT) theorem [29]. The solution of equation (3) can be described as

β = H^T (I/C + HH^T)^{-1} T when N ≤ L, and β = (I/C + H^T H)^{-1} H^T T when N > L. (4)
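As an illustration, the closed-form ELM training described above can be sketched in a few lines of NumPy. The function names, the sigmoid activation, and the uniform random initialisation are our own choices for this sketch, not prescribed by the text:

```python
import numpy as np

def elm_train(X, T, L=40, C=1.0, seed=0):
    """Minimal regularized ELM sketch: random hidden layer, closed-form
    output weights beta. X: (N, n) inputs; T: (N, m) one-hot targets."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.uniform(-1, 1, (n, L))          # random input weights (never tuned)
    b = rng.uniform(-1, 1, L)               # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))  # sigmoid hidden layer output matrix
    N = X.shape[0]
    if N <= L:  # beta = H^T (I/C + H H^T)^{-1} T
        beta = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)
    else:       # beta = (I/C + H^T H)^{-1} H^T T
        beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return np.argmax(H @ beta, axis=1)      # class with the largest output
```

Note that only `beta` is solved; the hidden layer stays random, which is exactly what gives ELM its speed advantage.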

Weighted Extreme Learning
Machine. Weighted extreme learning machine (WELM), which can be regarded as a cost-sensitive learning version of ELM, is an effective way to handle imbalanced data [19]. Similar to CS-SVM, the main idea of WELM is to assign different penalties to different categories, where the minority class has a larger penalty factor C while the majority class has a smaller C value. Then, WELM focuses on the training errors of the minority instances, making the classification hyperplane emerge in a more impartial position. A weighted matrix W is used to regulate the parameter C for different instances; that is, equation (3) can be rewritten as

minimize (1/2)‖β‖² + (C/2) Σ_{i=1}^{N} W_ii ‖ξ_i‖², subject to h(x_i)β = t_i^T − ξ_i^T, i = 1, . . . , N, (5)

where W is an N × N diagonal matrix in which each value on the diagonal represents the corresponding regulation weight of the parameter C. Zong et al. [19] provided two different weighting strategies, described as follows:

WELM1: W_ii = 1/#(t_i); WELM2: W_ii = 0.618/#(t_i) if #(t_i) > AVG(t_i), and W_ii = 1/#(t_i) otherwise, (6)

where W_ii, #(t_i), AVG(t_i), and 0.618 denote the weight of the ith training instance, the number of instances belonging to the class t_i, the average number of instances over all classes, and the golden-ratio value, respectively. Compared with WELM2, WELM1 is more practical and popular. Then, the solution can be shown as follows:

β = H^T (I/C + WHH^T)^{-1} WT when N ≤ L, and β = (I/C + H^T WH)^{-1} H^T WT when N > L. (7)

Obviously, no matter which weighting strategy is used, the minority-class samples are given larger weights. Hence, the higher the class imbalance ratio is, the higher the weight ratio between the two classes becomes. According to the work in [19], users can define W_ii for every sample x_i to improve the performance, so this paper considers constructing a new W_ii based on the data distribution.
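For concreteness, the two weighting schemes and the weighted closed-form solution can be sketched as follows. This is a minimal NumPy illustration; `welm_weights` and `welm_solve` are hypothetical names, not from [19]:

```python
import numpy as np

def welm_weights(y, scheme=1):
    """Per-instance weights following the two schemes of Zong et al. (sketch).
    WELM1: W_ii = 1/#(t_i).
    WELM2: W_ii = 0.618/#(t_i) for classes larger than average, else 1/#(t_i),
    so minority instances always receive the larger weight."""
    classes, counts = np.unique(y, return_counts=True)
    avg = counts.mean()
    w = np.empty(len(y), dtype=float)
    for c, cnt in zip(classes, counts):
        if scheme == 1 or cnt <= avg:
            w[y == c] = 1.0 / cnt
        else:
            w[y == c] = 0.618 / cnt
    return w

def welm_solve(H, T, w, C=1.0):
    """Weighted output-layer solution with diagonal weight matrix W."""
    N, L = H.shape
    W = np.diag(w)
    if N <= L:  # beta = H^T (I/C + W H H^T)^{-1} W T
        return H.T @ np.linalg.solve(np.eye(N) / C + W @ H @ H.T, W @ T)
    # beta = (I/C + H^T W H)^{-1} H^T W T
    return np.linalg.solve(np.eye(L) / C + H.T @ W @ H, H.T @ W @ T)
```

Under WELM1, the weights of each class sum to exactly 1, so both classes contribute the same total penalty regardless of their sizes.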

The Proposed Methods
Although WELM can alleviate the class imbalance problem, it does not consider the distribution of samples in feature space. In addition, there are noise and outliers in software defect data, which can further affect the performance.
Thus, this paper draws on the experience of the works in [30,31] and introduces the concept of fuzzy sets, which can mine the distribution of each instance in feature space and allow a more personalized setting of the weights. To describe our method, this section first introduces the relative density, which is applied to avoid the heavy computation of probability density in high-dimensional space. Then the fuzzy membership functions designed to replace WELM's weight matrix based on relative density are presented, and the two proposed SDP algorithms are described. Finally, experiments are designed to validate the methods. The whole framework can be seen in Figure 1.

Relative Density Estimation Strategy.
As is known, it would be easy to distinguish outliers and noise from the significant instances if we could estimate the probability density of each training instance. However, in high-dimensional feature space, it is always difficult to acquire exact measurements of the probability density, and it would be time-consuming even to obtain an approximately accurate estimation. To solve this problem, we introduce an improved method in this subsection. We consider that it is unnecessary to measure the probability density exactly; it is enough to precisely extract the proportional relation of the probability densities between any two training instances. We call the information reflecting this proportional relation the relative density.
To obtain the relative density, a K-nearest-neighbors-based probability density estimation (KNN-PDE) alike strategy is applied. As a nonparametric probability density estimation approach, KNN-PDE estimates the probability density distribution in multidimensional continuous space by measuring the K-nearest-neighbor distance of each training instance. When the number of training instances goes to infinity, the result obtained from KNN-PDE approximately converges to the real probability density distribution. Hence, the K-nearest-neighbor distances can be used to estimate the relative density, and Euclidean distance is selected as the distance measure in the proposed algorithms.
Suppose that a data set contains N instances; then, for each instance x_i, we can find its Kth nearest neighbor and record the distance between them as d_i^K. As is known, the larger d_i^K is, the lower the density that the instance x_i holds. Meanwhile, both noise and outliers tend to appear in low-density regions, so d_i^K can be used as a measure to evaluate the significance of each instance. However, to assign larger values to high-density instances and lower values to low-density instances such as noise and outliers, d_i^K should be transformed into its reciprocal 1/d_i^K. In this paper, the reciprocal of the K-nearest-neighbor distance is defined as the instance's relative density. It is not difficult to observe that the proportional relation of the relative densities between any two instances exactly equals the inverse of that of their K-nearest-neighbor distances:

(1/d_i^K) / (1/d_j^K) = d_j^K / d_i^K. (8)

It is also important to select the parameter K of the relative density appropriately. If the value of K is too small, it would fail to identify the noise and outliers among the normal instances; if the value of K is too large, the distinction between the significant instances and the noise or outliers might become ambiguous and some small disjunctions would not be captured. To avoid this problem, this paper assigns an appropriate value to K. It is empirically set to √N during the experiments, where N denotes the number of training instances.
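The relative density computation just described can be sketched as follows. This is a NumPy illustration; the brute-force pairwise distance matrix is only suitable for small data sets:

```python
import numpy as np

def relative_density(X, K=None):
    """KNN-PDE-alike relative density: the reciprocal of the Euclidean
    distance to the K-th nearest neighbour; K defaults to sqrt(N)."""
    N = len(X)
    if K is None:
        K = max(1, int(round(np.sqrt(N))))
    # full pairwise Euclidean distance matrix
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)            # an instance is not its own neighbour
    dK = np.sort(d, axis=1)[:, K - 1]      # distance to the K-th nearest neighbour
    return 1.0 / dK                        # low density -> small value
```

Instances inside a dense cluster receive large values, while isolated points (candidate noise or outliers) receive small values, matching the intuition above.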

Design of Fuzzy Membership Functions.
Based on the relative density, two different fuzzy membership functions are designed. One adopts intraclass relative density information, and the other uses interclass relative density information. The details are introduced in the following subsections.

Fuzzy Membership Function Based on Intraclass
Density Information. In this type of fuzzy membership function, f(x_i) is defined with respect to 1/d_i^K, the reciprocal of the distance between the instance x_i and its Kth nearest neighbor within the same class. The instances appearing in high-density regions are seen as more informative and are assigned higher f(x_i) values, while the examples far from high-density regions are seen as noise or outliers and assigned lower f(x_i) values. To avoid the impact of the data distribution scale, a normalized fuzzy membership function can be represented as follows:

f(x_i) = (1/d_i^K) / Σ_{j=1}^{N_c} (1/d_j^K), (9)

where N_c denotes the number of instances belonging to the class in which x_i falls. The merit lies in the fact that the fuzzy membership value only reflects the relative density within its own class and is irrespective of the number of instances in that class. Therefore, it is more robust to variance in the data distribution scale. In addition, since each class is handled independently, it is adaptive to the class imbalance problem.
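Under our reading of the normalization in equation (9), each class's memberships sum to the same total, so both classes receive identical total fuzzy coefficients. A small self-contained sketch:

```python
import numpy as np

def intraclass_membership(X, y, K=None):
    """Intraclass fuzzy memberships: relative density computed within each
    class only, then normalized per class (our reading of equation (9))."""
    f = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        k = K if K is not None else max(1, int(round(np.sqrt(len(idx)))))
        # pairwise Euclidean distances restricted to this class
        d = np.sqrt(((X[idx][:, None, :] - X[idx][None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        rho = 1.0 / np.sort(d, axis=1)[:, k - 1]  # intraclass relative density
        f[idx] = rho / rho.sum()                  # per-class normalization
    return f
```

Because the normalization is done per class, a minority class with few instances still receives the same total membership mass as the majority class.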

Fuzzy Membership Function Based on Interclass Relative Density Information.
In this type of fuzzy membership function, f(x_i) is associated with the estimated class boundary; that is, an instance closer to the estimated class boundary is assigned a higher membership value. To precisely estimate the class boundary, we investigate the characteristics of four kinds of instances with respect to different density distributions: normal, boundary, noise, and outliers. Figure 2 provides a visual description of these instances. Their characteristics can be summarized as follows: (a) Normal: the instance appears in a high-density region within its own class but a low-density region of the other class. (b) Boundary: the instance appears in a low- or medium-density region in both classes but always has a slightly higher density within its own class than in the other. (c) Noise: the instance appears in a low-density region within its own class but a higher-density region of the other class. (d) Outliers: the instance appears in a low-density region in both classes. According to the characteristics listed above, we can locate the boundary. First, for each instance, we compare its intraclass relative density with its interclass relative density to find the noise, which can be detected with a discriminant. If the instance x_i is from the positive class, its discriminant is shown in equation (10), where d′ denotes the distance calculated only over the instances of the other class, N+ and N− denote the numbers of instances in the positive and negative classes, respectively, ⌈·⌉ is the round-up operation, and IR is the class imbalance ratio, which equals N+/N−.
Meanwhile, if x_i comes from the negative class, the discriminant is modified as in equation (11). For each instance satisfying the discriminant condition in equation (10) or (11), this paper extracts it as noise and assigns it a very small membership value λ. Then, for the rest of the instances, we assign their membership values with interclass relative density information.
The fuzzy membership function can be represented as a piecewise function, given in equation (12), where N_c1 and N_c2 denote the numbers of nonnoise and noise instances in the same class as x_i, respectively.

Two Proposed Algorithms.
In this section, the two proposed algorithms based on WELM are described. In order to set personalized weights, this paper first considers the distribution information and obtains W_ii′, the new weight of each training sample based on the value of f(x_i), which replaces the original W_ii on the diagonal of W. Next, this section describes the algorithm based on the intraclass relative density information, called FWELM-INTRA, and the algorithm based on the interclass relative density information, called FWELM-INTER. Their flow paths are briefly described in Algorithms 1 and 2.

Data Sets.
During this study, the experimental data sets are taken from the public PROMISE repository [32], which has been commonly used in empirical studies of SDP. Detailed information about the data sets is shown in Table 1; for each data set, the number of instances, the number of defects, the number of metrics, and the percentage of defective modules are listed. According to the defective module ratios, every data set is imbalanced. To ensure the accuracy and convergence of the proposed solutions, zero padding is used to handle missing values in the data sets and data normalization [33] is adopted before conducting the experiments.
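The preprocessing steps can be sketched as follows; min-max scaling is our assumption here, since the exact normalization scheme of [33] is not restated in this section:

```python
import numpy as np

def preprocess(X):
    """Sketch of the preprocessing: missing values (NaN) are zero-padded,
    then each metric (column) is min-max normalized into [0, 1]."""
    X = np.where(np.isnan(X), 0.0, X)        # zero padding of missing values
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero on constant columns
    return (X - lo) / span
```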

Experimental Settings.
Firstly, to validate the effectiveness and superiority of the two proposed algorithms, this paper compared them not only with several representative class imbalance learning algorithms based on ELM but also with WELM1 and WELM2. In addition, we also compared them with the ensemble method DNC [13], which has been proved to be better than the traditional classifiers Naive Bayes and Random Forest. The baselines are briefly described as follows: (1) ELM [28]: the standard ELM algorithm without any operation to address the class imbalance problem of SDP data sets. (2) RUS [11]: it first adopts random undersampling to generate a totally balanced training set and then trains an ELM model on it. (3) ROS [10]: it first adopts random oversampling to generate a totally balanced training set and then trains an ELM model on it. (4) SMOTE [12]: it first adopts the SMOTE algorithm to generate a totally balanced training set and then trains an ELM model on this training set. (5) WELM1 and WELM2 [19]: the two weighting strategies of WELM, adopted as the balance control for a binary classification task. In particular, they can be regarded as baseline algorithms used to indicate the effect of noise or outliers in SDP data sets. (6) DNC [13]: an ensemble learning method for the imbalance problem of SDP data sets, regarded as a baseline for comparison with ensemble learning on performance.
Secondly, to measure the performance on the SDP data sets, the probability of detection (PD) and the probability of false alarm (PF) are used, following [13]. For a more comprehensive evaluation of predictors in the imbalanced context, G-mean [36] and AUC [37] are frequently used to measure how well a predictor can balance the performance between the two classes. In the SDP context, G-mean reflects the change in PD efficiently [38]. It can be calculated by

G-mean = √(PD × (1 − PF)).

AUC estimates the area under the ROC curve, formed by a set of (PF, PD) pairs. The ROC curve illustrates the trade-off between detection and false alarm rates and characterises the performance of a classifier across all possible decision thresholds. AUC provides a single number for performance comparison, varying in [0, 1]; a better classifier produces a higher AUC. AUC is equivalent to the probability that a randomly chosen example of the positive class will have a smaller estimated probability of belonging to the negative class than a randomly chosen example of the negative class.
In the work in [13], the point (PF = 0, PD = 1) was proposed as the ideal position on the ROC curve, where all defects are recognized without mistakes; the measure Balance is introduced by calculating the Euclidean distance from the actual (PF, PD) point to (0, 1) and is frequently used by software engineers in practice [39]. By definition,

Balance = 1 − √((0 − PF)² + (1 − PD)²) / √2.

In summary, this paper uses G-mean, AUC, and Balance to guarantee that the experiments are effective. All of them are expected to be high for a good predictor. The advantage of the three measures is their insensitivity to the class distribution in the data [40].
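The point measures above (AUC aside, which needs continuous scores) can be computed directly from the confusion matrix, for example:

```python
import numpy as np

def sdp_metrics(y_true, y_pred):
    """PD, PF, G-mean, and Balance for a binary SDP task (defective = 1)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    pd = tp / (tp + fn)                              # probability of detection
    pf = fp / (fp + tn)                              # probability of false alarm
    gmean = np.sqrt(pd * (1 - pf))
    balance = 1 - np.sqrt(pf ** 2 + (1 - pd) ** 2) / np.sqrt(2)
    return pd, pf, gmean, balance
```

A perfect predictor sits at (PF = 0, PD = 1), giving G-mean = 1 and Balance = 1.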
Thirdly, to avoid randomness in the experiments, this paper applied 10-fold cross validation, each time building models on nine of the ten partitions and testing on the remaining one. This procedure is repeated 100 times to calculate the average result for each algorithm, and the results are provided in the form of mean ± standard deviation. The whole setup can be seen in Figure 3.
Meanwhile, for each algorithm related to ELM and WELM, a sigmoid function is used to calculate the hidden layer output matrix, and the two main parameters L and C are determined by grid search, where L ∈ {10, 20, . . .}. In RQ1, we evaluate the effectiveness of the two proposed algorithms and compare them with previous studies on G-mean, AUC, and Balance. In RQ2, we investigate how the K value impacts our algorithms over its candidate range. In RQ3, we examine the time complexity of the two algorithms when the number of training instances and the number of attributes change, respectively. In the following sections, we provide the result analysis of these three research questions. The first steps shared by Algorithms 1 and 2 are: (1) Divide Θ into two sets, where Θ+ only contains positive instances and Θ− only contains negative instances. Here, SDP aims to discover the defective modules, so this paper uses positive instances to represent defective instances and negative instances to represent nondefective instances. (2) Count the numbers of instances in Θ+ and Θ−, and record them as N+ and N−, where N+ + N− = N.

RQ1: Do the Proposed Algorithms Perform Better?
(3) Calculate the parameter K for the positive and negative classes. (4) For each instance x+_i in Θ+, calculate its distance to the K+th nearest neighbor in Θ+ and record it as d+_i; likewise, for each instance x−_j in Θ−, calculate its distance to the K−th nearest neighbor in Θ− and record it as d−_j. (5) For each instance x_i in Θ, calculate its relative density by equation (8) and then its fuzzy membership value f(x_i) by equation (9).
Motivation. The benefits of identifying software defects lie in two aspects. First, once a software defect is predicted, we can provide a timely warning to the development team and save the effort and time of the developers. Second, identifying software defects can help to avoid defects in the future.

4.4.2.
Approach. To answer this RQ, we implement our approach (the data and code for the algorithms are available at https://github.com/Dark204/Work) and compare its performance with the baselines on the open-source projects. Then, we measure the performance and perform the statistical tests. Tables 2-4 show the mean and standard deviation values of G-mean, AUC, and Balance. By observing the results, it is not difficult to draw some conclusions as follows:

Results.
(1) The distribution of the training instances will affect the results. The results not only imply that the two algorithms achieve a better balance between PD and PF but also prove the importance of the distribution of samples in feature space. (3) The two proposed algorithms are based on WELM.
So, they are also compared with WELM1 and WELM2. On G-mean, AUC, and Balance, the two algorithms outperform them, which proves that the fuzzy values based on relative density information can improve WELM to train better SDP models. Unlike traditional WELM, the proposed algorithms assign different weights that avoid the effect of noise or outliers. (4) As illustrated in Section 2, ELM cannot deal with class imbalance, but the combination of ELM with data sampling techniques (ELM-RUS, ELM-ROS, and ELM-SMOTE) is effective. Table 2 shows that FWELM-INTRA and FWELM-INTER are still better than the three improved ELM algorithms on the G-mean metric. In Tables 3 and 4, the AUC and Balance results of FWELM-INTRA and FWELM-INTER are also higher on 9 of the 10 data sets. (5) Although WELM1 and WELM2 assign weights to instances, noise or outliers can still degrade WELM performance; the proposed algorithms are also able to achieve at least comparable performance with DNC.
Besides, G-mean [36] is frequently used to measure how well a predictor performs. Hence, this paper mainly chose G-mean for the statistical analysis in Tables 5-7. For each data set, nine predictive models are built following the algorithm settings described in the previous section.
The Friedman test is used to detect significant differences among a group of results, and Holm's post hoc test is adopted to examine whether the proposed algorithms are distinctive in a 1 × N comparison [41,42]. The post hoc procedure allows us to know whether a hypothesis of comparison of means can be rejected at a specified level of significance α. The adjusted p value (APV) is also calculated to denote the lowest level of significance of a hypothesis that results in a rejection. Furthermore, we consider the average rankings of the algorithms to measure how good an algorithm is with respect to its partners. The ranking is calculated by assigning a position to each algorithm depending on its performance on each data set. The algorithm that achieves the best performance on a specific data set is given rank 1; the algorithm with the second-best result is assigned rank 2, and so forth. This task is conducted over all data sets, and finally an average ranking is calculated.
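The average-ranking computation described above can be sketched as follows (rank 1 for the best algorithm on each data set, ties receiving the mean of the tied ranks):

```python
import numpy as np

def average_rankings(scores):
    """scores: (n_datasets, n_algorithms) matrix of a higher-is-better
    measure such as G-mean. Returns each algorithm's average rank across
    data sets, as used as input to the Friedman analysis."""
    n_d, n_a = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for i in range(n_d):
        order = (-scores[i]).argsort()       # descending: best score first
        r = np.empty(n_a)
        r[order] = np.arange(1, n_a + 1)     # provisional ranks 1..n_a
        for v in np.unique(scores[i]):       # average the ranks of ties
            mask = scores[i] == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    return ranks.mean(axis=0)
```

The algorithm with the lowest average ranking (here, FWELM-INTRA in Table 5) is the overall best under this scheme.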
In Table 5, this paper first calculates the average rankings and APVs. It can be found that the FWELM-INTRA algorithm has acquired the lowest average ranking value, indicating that it is the best among all algorithms. The FWELM-INTER algorithm ranks second, only a little worse than FWELM-INTRA. In addition, we observe that the APVs of ELM, ELM-RUS, and WELM2 are lower than the standard significance level of 0.05. That means these three algorithms are significantly different from the FWELM-INTRA algorithm. However, we cannot say that FWELM-INTRA is significantly different from FWELM-INTER, ELM-ROS, ELM-SMOTE, WELM1, and DNC, although it has a lower ranking value. Meanwhile, we know FWELM-INTRA is less complex than FWELM-INTER. Therefore, we can recommend FWELM-INTRA as an efficient choice for learning from SDP data sets.
Next, this paper selects FWELM-INTRA as the baseline to execute the one-versus-one comparison. Specifically, we calculated the average percentage of performance improvement and counted the win/same/loss numbers based on the pairwise t-test at the 5% significance level over all data sets. The results are presented in Table 6. They show that the proposed FWELM-INTRA algorithm improves the classification performance compared with the other algorithms, by 2.84% to 41.76%. In addition, we observed that FWELM-INTRA is better than every other algorithm on at least 8 data sets, indicating its superiority. In contrast with the FWELM-INTER algorithm, its average performance only increases by 0.59%. Therefore, we can say that the two proposed algorithms are fairly similar to each other.
Furthermore, the effect size is also computed, since it emphasises the practical size of the difference [43]. We use Cliff's delta [44], a nonparametric effect size measure that quantifies the amount of difference between two groups. In our context, Cliff's delta is computed to compare FWELM-INTRA with the other approaches. The magnitude is interpreted as follows: |delta| < 0.147 is "negligible," 0.147 ≤ |delta| < 0.33 is "small," 0.33 ≤ |delta| < 0.474 is "medium," and otherwise it is "large." Table 7 reports the results. Therefore, readers are suggested to choose an appropriate algorithm according to the characteristics of their practical applications.
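Cliff's delta and the magnitude thresholds quoted above can be sketched with a minimal implementation; the two score groups below are hypothetical examples, not the paper's measured results.

```python
def cliffs_delta(xs, ys):
    """delta = P(x > y) - P(x < y), estimated over all pairs of the two groups."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def magnitude(delta):
    """Map |delta| to the standard qualitative labels."""
    d = abs(delta)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

a = [0.70, 0.72, 0.75, 0.71]   # hypothetical scores, group 1
b = [0.60, 0.65, 0.68, 0.74]   # hypothetical scores, group 2
d = cliffs_delta(a, b)
print(round(d, 3), magnitude(d))   # -> 0.625 large
```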

RQ2: How Does the K Value Impact Performance?
4.5.1. Motivation. In the two proposed algorithms, we need to select an appropriate K for the relative density. If the value of K is too small, the algorithms would fail to identify the noise and outliers. If the value of K is too large, some distinctions would not be captured. Therefore, it is necessary to know whether there is a fixed K that can be applied to all data sets.

4.5.2. Approach. To assess the impact of the value of K used in the relative density calculation on the classification performance, we choose the parameter K based on the number of instances N. Figure 4 plots the variation of G-mean with the change of the parameter K for all ten data sets based on the two proposed algorithms.

4.5.3. Result. In Figure 4, the X-axis values 1-5 denote the five candidate settings of K, respectively. We observe that although there are some fluctuations, the performance still presents a rising trend with the increase of K in the initial phase on most data sets. Then, it arrives at a peak and, subsequently, the performance decreases. That means when the value of K is too small or too large, the performance of the proposed algorithms deteriorates. An excessively low K might assign oversized weights to some noise and outliers, while an excessively high K might make the weights of the instances in the same category converge. Although the optimal setting of the parameter K may differ between the two proposed algorithms, Figure 4 still offers some useful guidance; that is, choosing K between √N/2 and 2√N keeps the performance of the proposed algorithms safe. It also suggests that the parameter K can conservatively and empirically be set to √N in the experiments. Users are encouraged to choose an appropriate value for the parameter K themselves in their specific situations.
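The rule of thumb above (search K between √N/2 and 2√N, with K = √N as a conservative default) can be expressed directly; the function names here are illustrative, not from the paper.

```python
import math

def default_k(n_train):
    """Conservative default for the relative-density neighborhood size: K = sqrt(N)."""
    return max(1, round(math.sqrt(n_train)))

def k_range(n_train):
    """Recommended search range for K: [sqrt(N)/2, 2*sqrt(N)]."""
    root = math.sqrt(n_train)
    return max(1, round(root / 2)), round(2 * root)

print(default_k(900), k_range(900))   # -> 30 (15, 60)
```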

RQ3: What Is the Time Complexity of the Two Proposed Algorithms?
4.6.1. Motivation. Setting aside the time consumption of building the WELM classifier itself, the computational complexity of the two proposed algorithms is determined by the distance calculation, the search for the corresponding neighbors, and the computation of the fuzzy membership values. We therefore analyse the time complexity and relate it to the measured running time.

4.6.2. Approach. For a data set with n instances, where each instance holds a attributes, we first note that calculating the pairwise distances takes O(n²a) time, sorting the distances takes O(n² log n) time, and calculating and assigning the fuzzy membership values requires O(n) time. It follows that FWELM-INTER needs more training time than FWELM-INTRA, as the density calculation is run twice. For both algorithms, the computational complexity is dominated by the calculation of the relative density and the fuzzy membership value for each training instance. Taking jm1 as an example, this paper tracked the variation of the running time with the number of training instances n and the number of attributes a, respectively. When we track the variation with n, a is fixed at a constant value of 2, while when we track the variation with a, n is fixed at 100. The values of n are 100, 500, 1000, 2000, and 10000, and a = 1, 5, 10, 15, and 20. Figure 5 presents the variation of the running time with n and a, respectively.

4.6.3. Result. In Figure 5, the running time of the two algorithms is approximately linear with respect to the number of attributes a but grows superlinearly with respect to the number of training instances n, consistent with the quadratic term in the analysis above. In addition, we found that FWELM-INTER always costs more running time than FWELM-INTRA, which is consistent with the theoretical analysis listed above. The computation of the relative density is time-consuming, and the proposed algorithms sacrifice running-time efficiency to improve the learning model.
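To make the O(n²a) distance step and the O(n² log n) sorting step concrete, the sketch below computes a KNN-based relative density for the instances of one class. The exact density and membership formulas in the paper may differ; this sketch assumes density is the inverse of the distance to the K-th nearest neighbor, normalized within the class.

```python
import numpy as np

def relative_density(X, k):
    """X: (n, a) array of one class's instances; returns a normalized per-instance density."""
    # O(n^2 a): all pairwise Euclidean distances via broadcasting
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    # O(n^2 log n): sort each row of distances; column 0 is the instance
    # itself (distance 0), so column k is the K-th nearest neighbor.
    kth = np.sort(dist, axis=1)[:, k]
    dens = 1.0 / (kth + 1e-12)        # closer K-th neighbor -> denser region
    return dens / dens.sum()          # normalize so the class weights sum to 1

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))          # 50 synthetic instances, 2 attributes
w = relative_density(X, k=7)
print(w.shape, round(w.sum(), 6))
```

Noise and outliers sit far from their K-th neighbor, so they receive small weights, which is the behavior the fuzzy memberships rely on.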

Threats to Validity.
In this section, the threats to validity are described through internal aspect and external aspect.
Threats to internal validity concern the selection of data sets and methods. This research selected NASA data sets that have been commonly used in software defect prediction. These data sets come from software engineering projects and have different metrics. However, evaluating the proposed approach on a large scale of practical projects is always desirable. Meanwhile, for the NASA data sets, the parameter settings in the proposed algorithms were derived from the experimental results, yet the parameters need to be selected based on the chosen data sets. Thus, the proposed method should be verified on more data sets.
Threats to external validity concern the possibility of generalizing the experimental results and the comparisons with related algorithms. The proposed approach only requires that the metrics be known and computable in the data sets and that the data sets be available. The selected data sets are open source and provide the metrics used in our experiment. Meanwhile, we only compare against some of the state-of-the-art methods. Moreover, it is necessary to seek support from more software developers in companies to evaluate our approach and discuss the causal relationship between software defects and software development. Further studies using different data sets, analysing metrics in detail, and comparing more algorithms may prove fruitful.

Conclusions
SDP aims at finding as many defective software modules as possible without hurting the overall performance of the constructed predictor. Although many SDP models have been proposed in previous works, the imbalanced distribution between classes is still not handled well. Drawing on the experience of previous works, this paper improved WELM to solve the imbalance problem of SDP data sets. Firstly, a novel measure called relative density was proposed to estimate the significance of each instance in feature space. Then, fuzzy membership functions were designed to replace the weights of WELM. Next, the FWELM-INTRA and FWELM-INTER algorithms were created to train SDP models. Finally, the algorithms were evaluated on ten real-world SDP data sets with three performance measures: G-mean, AUC, and Balance.
To demonstrate the effectiveness and efficiency of our methods in tackling imbalanced SDP problems, this paper performed comparisons with six class imbalance learning methods related to ELM (ELM, ELM-RUS, ELM-ROS, ELM-SMOTE, WELM1, and WELM2), as well as with the two top-ranked predictors in the SDP literature (Naive Bayes and Random Forest). The results obtained show that the proposed algorithms yield significantly better classification results in the context of SDP.
In summary, the proposed algorithms are more robust as they adapt to different data distribution types, estimate the significance of each instance more accurately, and assign identical total fuzzy coefficients to the two different classes without being affected by data scale. Of course, they have several drawbacks as well, including the rather high time complexity induced by the K-nearest-neighbors calculation; the selection of the parameter K might also affect the quality of the classification model to some extent.
In future work, it will be interesting to develop more efficient cost-sensitive learning algorithms with low time complexity. In addition, how to adapt the proposed methods to address the cross-project SDP problem effectively will also be investigated.
Data Availability

The data and code used to support this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.