SXGBsite: Prediction of Protein–Ligand Binding Sites Using Sequence Information and Extreme Gradient Boosting

The prediction of protein–ligand binding sites is important in drug discovery and drug design. Computational methods for protein–ligand binding site prediction are inexpensive and fast compared with experimental methods. This paper proposes a new computational method, SXGBsite, which combines the synthetic minority over-sampling technique (SMOTE) with Extreme Gradient Boosting (XGBoost). SXGBsite uses the discrete cosine transform of the position-specific scoring matrix (PSSM-DCT) and predicted solvent accessibility (PSA) to extract features containing sequence information. A new balanced dataset was generated by SMOTE to improve classifier performance, and a prediction model was constructed using XGBoost, whose parallel computing and regularization techniques enable high-quality, fast predictions and mitigate the overfitting caused by SMOTE. An evaluation on independent test sets of 12 different ligand types showed that SXGBsite performs similarly to existing methods on eight of them with a faster computation time. SXGBsite may be applied as a complement to biological experiments.


Introduction
Accurate prediction of protein-ligand binding sites is important for understanding protein function and for drug design [1][2][3][4]. Experiment-based recognition of the three-dimensional (3D) structures of protein-ligand complexes and their binding sites is relatively expensive and time consuming [5,6]. Computational methods can predict binding sites rapidly and can be applied as a supplement to experimental methods. Structure-based, sequence-based, and hybrid methods are the commonly applied computational approaches [7,8].
Structure-based methods are usually applied to predict ligand binding sites when the 3D protein structure is known [2,9,10,11]. We focused on sequence-based methods that require no 3D structure information, so only a few structure-based methods are listed here, given how rapidly these methods are updated. Pockets on the protein surface can be identified by computing geometric measures, as in LIGSITE CSC [2,12], CASTp [13][14][15][16], LigDig [17], and Fpocket [18,19]. LIGSITE CSC [2,12] identifies pockets through the number of surface-solvent-surface events and clusters. CASTp [13][14][15][16] locates and measures pockets on 3D protein structures and annotates functional information for specific residues. Unlike traditional protein-centric approaches, LigDig [17] is a ligand-centric approach that identifies pockets using information from PDB [20], UniProt [21], PubChem [22], ChEBI [23], and KEGG [24]. Fpocket [18,19] identifies pockets using structure-based virtual screening (SBVS). RF-Score-VS [25] improves the performance of SBVS and can be used in the open-source ODDT [26,27]. FunFOLD [1] is another structure-based ligand binding site predictor.

During the training process, the position-specific scoring matrix (PSSM) feature of each residue was represented as a sparse evolution image, the discrete cosine transform (DCT) compressed the PSSM feature to obtain the PSSM-DCT feature, and the predicted solvent accessibility (PSA) feature was added to improve prediction quality. SMOTE generated a new balanced training set from the training set of PSSM-DCT + PSA features, and the prediction model of binding residues was constructed from the balanced training set with XGBoost. During the testing process, the unbalanced independent test set, from which the same PSSM-DCT + PSA features were extracted, was input into the prediction model to obtain the result.

Benchmark Datasets
The benchmark datasets were constructed based on the BioLip database [53] as developed by Yu et al. [7], including the training and independent test datasets of 12 different ligands. The 12 ligand types were five types of nucleotides, five types of metal ions, DNA, and Heme (Table 1). The source code and datasets are available at https://github.com/Lightness7/SXGBsite.

Note (Table 1): numP, positive (binding residue) sample numbers; numN, negative (non-binding residue) sample numbers; ATP, adenosine triphosphate; ADP, adenosine diphosphate; AMP, adenosine monophosphate; GDP, guanosine diphosphate; GTP, guanosine triphosphate.

Position-Specific Scoring Matrix
The position-specific scoring matrix (PSSM) encodes the evolutionary information of the protein sequence. The PSSM of each sequence was obtained by running PSI-BLAST [54] against the non-redundant protein sequence database (nr) with three iterations and an E-value of 0.001. The PSSM is an L × 20 matrix whose L rows represent the L amino acid residues in the protein sequence and whose 20 columns represent the probability that each residue mutates to each of the 20 native residues:

$$
\mathrm{PSSM} =
\begin{bmatrix}
P_{1,1} & P_{1,2} & \cdots & P_{1,20} \\
P_{2,1} & P_{2,2} & \cdots & P_{2,20} \\
\vdots  & \vdots  & \ddots & \vdots  \\
P_{L,1} & P_{L,2} & \cdots & P_{L,20}
\end{bmatrix}
$$

The PSSM feature of contiguous residues was extracted with a sliding window of size w, centered on the target residue and containing (w − 1)/2 adjacent residues on each side. The size of the PSSM feature matrix is w × 20, and the residue sparse evolution image [8,48] is shown in Figure 2. The window size w = 17 was selected after testing different values of w, giving a PSSM feature of 17 × 20 = 340 dimensions.
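The sliding-window extraction can be sketched as follows (a minimal illustration: a random matrix stands in for a real PSSM, and zero-padding at the sequence ends is an assumption about how boundary residues are handled, not the authors' stated scheme):

```python
import numpy as np

def window_features(pssm, w=17):
    """Extract a flattened w x 20 window of PSSM rows around each residue.

    pssm: (L, 20) array; rows falling outside the sequence are
    zero-padded (an assumption for this sketch).
    """
    L, n_cols = pssm.shape
    half = (w - 1) // 2
    padded = np.vstack([np.zeros((half, n_cols)),
                        pssm,
                        np.zeros((half, n_cols))])
    # one flattened w*20 feature vector per residue
    return np.array([padded[i:i + w].ravel() for i in range(L)])

# toy example: a 30-residue "protein"
pssm = np.random.rand(30, 20)
feats = window_features(pssm, w=17)
print(feats.shape)  # (30, 340)
```

With w = 17 each residue yields the 340-dimensional vector described above; the first residue's window begins with eight zero rows because it has no left-hand neighbors.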

Discrete Cosine Transform
Discrete Cosine Transform (DCT) [47] is widely applied for lossy compression of signals and images. In this study, we used DCT to concentrate the information of the PSSM into a few coefficients. For a given input matrix Mat ∈ ℝ^{M×N}, the 2D DCT is defined as:

$$
\mathrm{DCT}(p,q) = a_p\, a_q \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \mathrm{Mat}(m,n)\,
\cos\frac{\pi (2m+1)p}{2M}\, \cos\frac{\pi (2n+1)q}{2N}
$$

where

$$
a_p = \begin{cases}\sqrt{1/M}, & p = 0\\ \sqrt{2/M}, & 1 \le p \le M-1\end{cases},
\qquad
a_q = \begin{cases}\sqrt{1/N}, & q = 0\\ \sqrt{2/N}, & 1 \le q \le N-1\end{cases}
$$

The compressed PSSM feature of a residue was obtained by applying DCT to its PSSM feature matrix. After the transform, most of the information is concentrated in the low-frequency part of the compressed PSSM, so the first r rows were retained as the PSSM-DCT feature, giving r × 20 dimensions.
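The transform and row truncation can be sketched directly from the definition (a didactic NumPy implementation; a production pipeline would more likely call an optimized routine such as `scipy.fft.dctn`):

```python
import numpy as np

def dct_matrix(M):
    """Orthonormal DCT-II basis: C[p, m] = a_p * cos(pi*(2m+1)*p / (2M))."""
    p = np.arange(M)[:, None]
    m = np.arange(M)[None, :]
    C = np.cos(np.pi * (2 * m + 1) * p / (2 * M))
    a = np.full(M, np.sqrt(2.0 / M))
    a[0] = np.sqrt(1.0 / M)          # a_0 differs from the other scale factors
    return a[:, None] * C

def dct2(mat):
    """2D DCT as the double cosine sum above: C_M @ Mat @ C_N^T."""
    M, N = mat.shape
    return dct_matrix(M) @ mat @ dct_matrix(N).T

# compress a 17 x 20 PSSM window, keeping the first r = 9 low-frequency rows
pssm_feature = np.random.rand(17, 20)
compressed = dct2(pssm_feature)[:9]
print(compressed.shape)  # (9, 20)
```

A quick sanity check of the energy-compaction property: for a constant input matrix, all energy lands in the (0, 0) coefficient and every other coefficient is zero.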

Discrete Wavelet Transform
Discrete Wavelet Transform (DWT) [49] can decompose discrete sequences into high- and low-frequency coefficients. Four-level DWT [50] was applied to acquire the first five wavelet coefficients and the standard deviation, mean, maximum, and minimum values at the different scales, as shown in Figure 3. The PSSM-DWT feature of a residue was obtained from its PSSM feature via four-level DWT and had 1040 dimensions.
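A simplified sketch of the idea, using a hand-rolled Haar wavelet (the paper's actual wavelet family and coefficient layout are not specified here, so both are assumptions) with zero-padding for odd-length levels:

```python
import numpy as np

def haar_dwt_stats(signal, levels=4, keep=5):
    """Multilevel Haar DWT of one PSSM column: keep the first `keep`
    detail coefficients plus std/mean/max/min per level.
    Illustrative sketch only, not the paper's exact feature layout."""
    a = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        if len(a) % 2:                               # pad to even length
            a = np.append(a, 0.0)
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)    # low-frequency part
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)    # high-frequency part
        head = np.zeros(keep)
        head[:min(keep, len(detail))] = detail[:keep]
        feats.extend(head)
        feats.extend([detail.std(), detail.mean(),
                      detail.max(), detail.min()])
        a = approx                                   # recurse on low band
    return np.array(feats)

col = np.random.rand(17)          # one column of a PSSM window
print(haar_dwt_stats(col).shape)  # (36,) = 4 levels x (5 + 4) values
```

A constant signal has no high-frequency content, so every detail coefficient and statistic comes out zero, which makes a convenient correctness check.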

Predicting Solvent Accessibility
Solvent accessibility [52] is related to the spatial arrangement and packing of residues during protein folding and is an effective feature for protein-ligand binding site prediction [8,33,34]. We used Sann, which predicts the solvent accessibility of proteins by a nearest-neighbor method, to obtain the PSA feature of residues [55]; the PSA feature has three dimensions.

SMOTE Over-Sampling
As a common method for tackling unbalanced data, SMOTE over-samples the minority class by synthesizing new samples and under-samples the majority class, which yields better classifier performance in the receiver operating characteristic (ROC) space [45,56]. After feature extraction, SMOTE generates a balanced sample set from the unbalanced one. After a series of tests, a new sample set giving the best results was constructed with equal positive and negative sample numbers of 19,000 each.
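The interpolation step at the heart of SMOTE can be sketched in a few lines (a minimal NumPy illustration; a real study would typically use a full implementation such as imbalanced-learn's `SMOTE` class):

```python
import numpy as np

def smote(minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new samples by interpolating
    each picked minority sample toward one of its k nearest minority
    neighbors. Illustrative only."""
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbours
    idx = rng.integers(0, len(X), n_new)         # base samples
    nbr = nn[idx, rng.integers(0, k, n_new)]     # random neighbour each
    lam = rng.random((n_new, 1))                 # interpolation factor
    return X[idx] + lam * (X[nbr] - X[idx])

minority = np.random.rand(40, 3)                 # e.g. 40 binding residues
synthetic = smote(minority, n_new=100)
print(synthetic.shape)  # (100, 3)
```

Each synthetic point lies on a segment between two real minority samples, so the synthesized set never leaves the range of the original positive class.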

Extreme Gradient Boosting Algorithm
The Extreme Gradient Boosting (XGBoost) algorithm [46] is an improvement by Chen et al. on the Gradient Boosting algorithm [57], characterized by fast computation and high prediction accuracy.
XGBoost is widely used by data scientists in many applications and has produced state-of-the-art results [58,59]. The training samples after feature extraction and SMOTE, $x_i = \{x_1, x_2, \ldots, x_m\}$, $i = 1, 2, \ldots, n$, were input into the $K$ additive functions of XGBoost to build the model. The prediction for the independent test set, $y_i \in \{0, 1\}$, $i = 1, 2, \ldots, s$ (where 0 represents non-binding residues and 1 represents binding residues), is output as:

$$
\hat{y}_i = \phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \qquad f_k \in \mathcal{F}
$$

where $f_k$ is an independent tree function with leaf weights and $\mathcal{F}$ is the ensemble space of such tree functions. XGBoost avoids overly large models with the following regularized objective:

$$
\mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \sum_k \Omega(f_k)
$$

where $l$ is a differentiable convex loss function that measures the closeness of the prediction $\hat{y}_i$ to the target $y_i$, and $\Omega$ is a regularization term that penalizes model complexity as trees $f_t$ are greedily added to improve the ensemble. The regularization term avoids overfitting by penalizing the leaf weights:

$$
\Omega(f) = \gamma T + \frac{1}{2}\lambda \lVert \omega \rVert^2
$$

where $T$ is the number of leaves, $\omega$ is the vector of leaf weights, and the regularization coefficients $\gamma$ and $\lambda$ are constants. Traditional GBDT uses only first-order information of the loss function, whereas XGBoost introduces a second-order Taylor expansion of the loss to optimize the objective rapidly [60]. The simplified objective at step $t$ is:

$$
\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t)
$$

where $g_i = \partial_{\hat{y}^{(t-1)}}\, l(y_i, \hat{y}^{(t-1)})$ and $h_i = \partial^2_{\hat{y}^{(t-1)}}\, l(y_i, \hat{y}^{(t-1)})$ are the first- and second-order gradient statistics of the loss function, respectively. Defining $I_j = \{ i \mid q(x_i) = j \}$ as the set of samples assigned to leaf $j$, the objective simplifies to:

$$
\tilde{\mathcal{L}}^{(t)} = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) \omega_j + \frac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) \omega_j^2 \right] + \gamma T
$$

The optimal weight $\omega_j^*$ of leaf $j$ and the corresponding objective value are:

$$
\omega_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},
\qquad
\tilde{\mathcal{L}}^{(t)}(q) = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T
$$

which is used to score candidate tree structures and find the best split of a node. Supposing $I_L$ and $I_R$ are the sample sets of the left and right nodes after splitting the sample set $I$ of a leaf, with $I = I_L \cup I_R$, the loss reduction after the split is:

$$
\mathcal{L}_{split} = \frac{1}{2} \left[ \frac{\left( \sum_{i \in I_L} g_i \right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left( \sum_{i \in I_R} g_i \right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left( \sum_{i \in I} g_i \right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma
$$

To prevent overfitting, XGBoost uses shrinkage and column (feature) subsampling in addition to the regularized objective [57].
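The split-gain formula can be checked numerically with a small sketch (squared-error loss is assumed here purely to make the toy gradients easy to compute; the classifier itself uses a logistic loss):

```python
import numpy as np

def split_gain(g, h, left_mask, lam=1.0, gamma=0.0):
    """Loss reduction of a candidate split, following the XGBoost gain
    formula: g, h are per-sample first/second-order gradients."""
    def score(mask):
        return np.sum(g[mask]) ** 2 / (np.sum(h[mask]) + lam)
    right_mask = ~left_mask
    return 0.5 * (score(left_mask) + score(right_mask)
                  - score(np.ones_like(left_mask))) - gamma

# squared-error loss: g = pred - y, h = 1 for every sample
y = np.array([0., 0., 1., 1.])
pred = np.zeros(4)
g, h = pred - y, np.ones(4)
# split that separates the two classes perfectly
gain = split_gain(g, h, left_mask=np.array([True, True, False, False]))
print(round(gain, 3))  # 0.267
```

Here the left node contributes 0, the right node 4/3, and the unsplit node 4/5, so the gain is ½(4/3 − 4/5) = 4/15 ≈ 0.267; a positive gain larger than γ justifies the split.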

Results
Classification performance was evaluated using specificity (SP), sensitivity (SN), accuracy (ACC), and the Matthews correlation coefficient (MCC). The overall prediction quality of a binary model was evaluated using the area under the receiver operating characteristic curve (AUC). SN, SP, ACC, and MCC are defined as:

$$
SN = \frac{TP}{TP + FN}, \qquad SP = \frac{TN}{TN + FP}
$$

$$
ACC = \frac{TP + TN}{TP + FP + TN + FN}
$$

$$
MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}
$$

where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively.
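The four measures can be computed directly from confusion-matrix counts; the counts in the usage line below are invented for illustration:

```python
import math

def metrics(tp, fp, tn, fn):
    """SN, SP, ACC, and MCC from confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)    # accuracy
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sn, sp, acc, mcc

sn, sp, acc, mcc = metrics(tp=40, fp=10, tn=90, fn=20)
print(f"SN={sn:.3f} SP={sp:.3f} ACC={acc:.3f} MCC={mcc:.3f}")
```

Note how ACC (0.813 here) can look respectable even when SN is modest, which is exactly why MCC and AUC are preferred on unbalanced data.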

Parameter Selection
ACC is insufficient for performance evaluation in unbalanced learning [7,8], MCC is suitable for quality assessment in sequence-based predictions [3], and AUC is usually used to assess the overall prediction quality of models. The value of MCC changes with the threshold, whereas the AUC value does not. We evaluated prediction performance using MCC and AUC, and the probability threshold was selected by maximizing the MCC. The MCC value was used to select the first r rows of the PSSM-DCT matrix as the feature on the guanosine triphosphate (GTP) training and independent test sets. PSSM-DCT obtained the optimal MCC at r = 9, as shown in Figure 4, giving a PSSM-DCT feature of 9 × 20 = 180 dimensions.

The size of the positive and negative sample sets after SMOTE is usually an integer multiple of the positive sample size in the original dataset, and prediction quality may be affected by the amplification ratio of the positive samples. In this study, fixed-size positive and negative sample sets were generated by SMOTE to improve prediction quality, and the optimal sample number was selected according to the MCC on the GTP training and independent test sets. The best MCC was obtained with 19,000 positive and 19,000 negative samples, as shown in Figure 5. The parameters of XGBoost were adjusted with five-fold cross-validation and a grid search on the GTP training set.
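The MCC-maximizing threshold selection can be sketched as a simple grid sweep (the grid resolution here is an assumption for illustration, not the paper's stated procedure):

```python
import numpy as np

def best_mcc_threshold(y_true, prob, grid=None):
    """Sweep probability thresholds and return the one maximizing MCC."""
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    best_t, best_mcc = 0.5, -1.0
    for t in grid:
        pred = (prob >= t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        fp = np.sum((pred == 1) & (y_true == 0))
        tn = np.sum((pred == 0) & (y_true == 0))
        fn = np.sum((pred == 0) & (y_true == 1))
        denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        if mcc > best_mcc:
            best_t, best_mcc = t, mcc
    return best_t, best_mcc

# toy, perfectly separable probabilities
y = np.array([0, 0, 0, 1, 1])
p = np.array([0.1, 0.2, 0.4, 0.6, 0.9])
t, m = best_mcc_threshold(y, p)
print(m)  # 1.0
```

Any threshold between 0.4 and 0.6 separates this toy data perfectly, so the sweep returns an MCC of 1.0 at the first such grid point.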

Method Selection
Different feature combinations of PSSM, PSSM-DCT, PSSM-DWT, and PSA were evaluated on the GTP training and independent test sets. PSSM-DCT + PSA produced the optimal MCC and AUC values (Table 2), and the receiver operating characteristic (ROC) curves of the different feature combinations are shown in Figure 6.

Three sampling schemes were applied to the GTP training set to obtain three different training sets: the entire GTP training set, the training set after random under-sampling (RUS), and the training set after SMOTE over-sampling. On the GTP independent test set, the prediction qualities of the models constructed from the three training sets are shown in Table 3, and the ROC curves of the different sampling and classification algorithms are shown in Figure 7. SMOTE + XGBoost achieved the best prediction quality, performing better than SMOTE + SVM.

Results of Training Sets
The performance of SXGBsite was evaluated using five-fold cross-validation on the training sets. The results at a threshold of 0.5 and at the threshold maximizing the MCC value are listed in Table 4. The five-fold cross-validation results are basically consistent with the maximized-MCC threshold results of TargetS and EC-RUS. Setting aside the impact of the threshold, the results in Table 4 reveal the different characteristics of the two schemes for the class imbalance problem when comparing the default-threshold (0.500) results of SXGBsite and EC-RUS, which use the same features. The RUS + ensemble classifiers scheme is more sensitive to positive samples but loses information about negative samples. The SMOTE + XGBoost scheme reduces this information loss; because most positive samples in its training set are synthesized, its sensitivity to positive samples is lower.


Comparison with Existing Methods
On the independent test sets of the five nucleotides, SXGBsite is compared with TargetS, SVMPred, NsitePred, EC-RUS, and the alignment-based baseline predictor in Table 5. The results of TargetS, SVMPred, NsitePred, and EC-RUS are reported at the threshold maximizing the MCC value. On the ATP, ADP, AMP, GDP, and GTP independent test sets, the best AUC values among the compared methods were achieved by TargetS.

On the independent test sets of the five metal ions, SXGBsite is compared with TargetS, FunFOLD, CHED, EC-RUS, and the alignment-based baseline predictor in Table 6. The results of TargetS, FunFOLD, CHED, and EC-RUS are reported at the threshold maximizing the MCC value. On the Ca2+, Mg2+, Mn2+, Fe3+, and Zn2+ independent test sets, the differences between SXGBsite and the best AUC are 0.021 (0.758 to 0.779), 0.001 (0.779 to 0.780), 0.032 (0.856 to 0.888), 0.054 (0.891 to 0.945), and 0.052 (0.906 to 0.958), respectively, and the differences between SXGBsite and the best MCC are 0.046 (0.197 to 0.243), 0.026 (0.291 to 0.317), 0.067 (0.382 to 0.449), 0.094 (0.396 to 0.490), and 0.137 (0.390 to 0.527), respectively. SXGBsite showed good prediction performance on the Mg2+ independent test set; the reasons for the less satisfactory performance on the other metal ion test sets may be as follows: (1) TargetS uses a ligand-specific binding propensity feature to improve prediction quality, and the features used in this study did not perform as well for predicting metal ion binding residues; and (2) metal ions are smaller than nucleotides, so there are fewer binding residues (positive samples), and the lack of positive samples affected the prediction quality of the model. On the DNA and Heme independent test sets (Table 7), SXGBsite achieved MCC values lower than those of TargetS and EC-RUS, and AUC values inferior to those of TargetS.
The prediction performance of SXGBsite was similar to that of the two best methods, TargetS and EC-RUS, on the independent test sets of the five nucleotides, Mg2+, DNA, and Heme. Both TargetS and EC-RUS are serial combinations of under-sampling and ensemble classifiers, which require long calculation times. SXGBsite combines over-sampling with a single XGBoost classifier to quickly build high-quality prediction models.

Running Time Comparison
The running times of SXGBsite, EC-RUS (SVM), and EC-RUS (WSRC) on the independent test sets are compared in Table 9; the benchmark in this study is the EC-RUS (SVM) running time. EC-RUS is a sequence-based method proposed by Ding et al. with excellent prediction quality. Ding et al. selected 19 sub-classifiers for the ensemble classifier, compared the results of ensemble SVMs and ensemble WSRCs, and concluded that ensemble WSRCs are more time-consuming than ensemble SVMs. Both SXGBsite and EC-RUS use the PSSM-DCT + PSA features, with prediction models built by SMOTE + XGBoost and RUS + ensemble classifiers, respectively. Because the features are the same, Table 9 also amounts to a running-time comparison of SMOTE + XGBoost and RUS + ensemble classifiers, the two schemes for handling the class imbalance problem.

Case Study
The prediction results of SXGBsite are shown as 3D models in Figure 8; the protein-ligand complexes 2Y4K-A and 2Y6P-A belong to the independent test sets of GDP and Mg2+, respectively. (Table notes: 1 Results excerpted from Hu et al. [61]. 2 Results excerpted from Shen et al. [44]. - denotes unavailable.)


Discussion
Many excellent computational methods are available for protein-ligand binding site prediction; however, prediction efficiency can still be improved [8]. Because real protein-ligand binding site data contain far fewer binding sites than non-binding sites, we selected the unbalanced datasets of 12 different ligand types constructed by Yu et al. as the benchmark datasets. The adverse effects of unbalanced data on predictions are usually mitigated by the widely applied over- and under-sampling methods, and ensemble classifiers are often used together with under-sampling to overcome its information loss. Both TargetS and EC-RUS performed well on the independent test sets built by Yu et al. by applying the scheme of under-sampling plus ensemble classifiers. Although the information loss of repeated under-sampling can be reduced by ensemble classifiers, serial combinations of multiple machine learning algorithms and high-dimensional features increase the computation time.
SXGBsite uses the PSSM-DCT + PSA features and XGBoost with SMOTE to build prediction models; the Extreme Gradient Boosting algorithm developed by Chen et al. [46] was applied to handle the overfitting and large sample sets caused by over-sampling. XGBoost's regularization overcomes the overfitting problem, and its parallel computing allows prediction models to be constructed quickly from large sample sets; these properties constitute the basis of SXGBsite. Threshold moving was used in this study to obtain the best MCC for comparison with other existing methods. Because using both threshold moving and sampling methods complicates the interpretation of the results, the threshold-independent AUC was used to better evaluate the difference in prediction quality between SMOTE + XGBoost and RUS + ensemble classifiers. On the independent test sets of the five nucleotides, Mg2+, DNA, and Heme, the difference between the AUC of SXGBsite and the best AUC was within 0.020. Considering the decrease in running time, we consider this difference in AUC acceptable. On the independent test sets of the 12 ligands, the new method proposed here produced high prediction quality with a shorter computation time using two features and a single classifier, and produced results similar to those of the best-performing TargetS and EC-RUS on 8 of the 12 independent test sets.

Conclusions
This paper proposes a new computational method, SXGBsite, which uses sequence information for protein-ligand binding site prediction: features extracted by PSSM-DCT + PSA are combined with XGBoost and SMOTE to construct the prediction model. On the independent test sets of 12 different ligands, SXGBsite performed similarly to the best methods with less computation time, so it could complement biological experiments while reducing costs. The features we used did not perform well on the metal ion datasets, and adding features with better predictive power is the next step of this study.