Satisfiability Logic Analysis Via Radial Basis Function Neural Network with Artificial Bee Colony Algorithm

I. Introduction
Generally, Artificial Intelligence (AI) encompasses functional graphical and mathematical models that act as symbolic systems [1]. Greater impact can be achieved when symbolic operations are integrated inside an Artificial Neural Network (ANN). ANN, known as a connectionist system, has received considerable attention due to its ability to evaluate complex nonlinear datasets [1]. ANN has been successfully used in a wide range of applications, such as classification and the optimization of approximation functions. However, the functionality of ANN can only be realized by embedding the correct symbolic rule to govern the whole neural system. Logic programming has been a language of ANN for decades. Wan Abdullah [2] successfully explored a neural network governed by logic programming. In that work, a logical rule was embedded into the ANN and the characteristics of the network were examined using Lyapunov energy analysis. The minimization of energy as a solution to the combinatorial representation motivates the integration of logical rules in a neural network [3]-[6]. The question remains of how one can choose the best ANN model in which to embed logic programming. A reliable ANN model typically has the lowest prediction and classification errors. In that regard, the Radial Basis Function Neural Network (RBFNN) has attracted researchers from the science and engineering fields because of its simpler network structure, faster learning speed, and better approximation capabilities. As stated by Hamadneh et al. [7], RBFNN can be used to develop separate models for the shear stress and heat transfer rate owing to its simpler network structure. RBFNN is a feed-forward neural network that contains 3 neuron layers (input, hidden, and output layers). The input layer (containing input neurons) receives information, which is transferred to the hidden layer for data synthesis and training. The synthesized data are then used in the output layer (containing the output neuron). The purpose of having 3 layers is to minimize the classification and prediction errors in RBFNN [8]. Hamadneh et al. [4] initially implemented logic programming in RBFNN. Their proposed network explored the capability of HornSAT as a logical rule in RBFNN. In this case, the logical structure of RBFNN is solely dependent on 3 parameters: the centers of all input neurons, their widths, and the Gaussian activation function. Despite the fact that RBFNN can be applied effectively, the number of neurons in the hidden layer determines the complexity of the network [9]. If the number of neurons in the hidden layer is insufficient, learning in RBFNN fails to achieve optimal convergence. However, if the number of neurons in the hidden layer is very high, the network will experience overlearning [10]. Since the complexity of RBFNN increases as the number of clauses increases, an optimization algorithm becomes crucial.
Yang and Ma [11] successfully applied the Sparse Neural Network (SNN) algorithm to optimize the number of hidden neurons. The core mechanism of SNN reduces the error through a trial-and-error approach that determines the number of hidden neurons explicitly from the set of neurons. The limitation of the SNN paradigm is the extensive computational time required to determine the number of hidden neurons. Inspired by several works [12]-[14], the 2 Satisfiability (2SAT) logic representation is utilized with RBFNN to determine the important parameters of the hidden layer that control the number of hidden neurons. In fact, 2SAT is selected because its structure and representation are compatible with RBFNN.
Another major component of 2SAT in RBFNN is the training method, which has a significant influence on the performance of RBFNN. On this matter, a plethora of global optimization methods have been extensively applied due to their global search capability. Metaheuristic algorithms are popular choices for finding near-optimal solutions for RBFNN [15], [16]. There are various nature-inspired and recently developed optimization algorithms, such as the Genetic Algorithm, the Differential Evolution algorithm, the Particle Swarm Optimization algorithm, and the Artificial Bee Colony algorithm, many of which have proved suitable for engineering optimization problems [17].
The theoretical basis of the Genetic Algorithm (GA) was developed by Holland [18]. Goldberg and Holland [19] were the first to use GA in a problem involving the control of gas-pipeline transmission. Further studies were made by Hamadneh et al. [4], who used GA to train a hybrid RBFNN model with higher-order SAT logic; they used a full training paradigm to train RBFNN with higher-order SAT logic via the k-means clustering algorithm and GA. The quest for the optimal algorithm was continued by Pandey et al. [20], who compared Multiple Linear Regression (MLR) and the genetic algorithm for predicting temporal scour depth near a circular pier in non-cohesive sediment. That study utilized 1100 laboratory experimental data sets to develop a generalized scour equation using MLR and GA. In a recent publication, Jing and Li [21] developed a reliability analysis method by integrating GA with RBFNN. Their paper adopted GA to find the "potential" most probable point (MPP) in the optimization problem by controlling the density of samples to refine the RBFNN.

Differential Evolution (DE) was first introduced by Storn and Price [22] to solve various global optimization problems. DE is a manageable yet powerful evolutionary algorithm with the advantages of few parameters, high simplicity, and fast convergence [22]. DE has been beneficial to various networks, such as the Hopfield Neural Network [23] and feed-forward neural networks [24]. Chauhan and Chandra [22] proposed the DE algorithm to train a wavelet neural network (WNN) by minimizing the network error, obtaining the proper mapping from the input vector in the input layer to the output vector in the output layer. Tao et al. [25] utilized the DE algorithm to improve RBFNN as the prediction model for the coking energy consumption process. The Particle Swarm Optimization (PSO) algorithm is a nature-inspired evolutionary algorithm that imitates bird migration behavior [26]. The PSO algorithm is one of the evolutionary algorithms proposed by Kennedy and Eberhart [27]. In succeeding works, Qasem and Shamsuddin [28] proposed the PSO algorithm for enhancing RBFNN learning by optimizing the parameters of the hidden layer and output layer. Another study was made by Alexandridis et al. [29], who used the PSO algorithm to optimize the construction of RBFNN. Their model was able to solve classification and function approximation problems with improved generalization capability and accuracy.
Karaboga and Basturk [30], [31] proposed the Artificial Bee Colony (ABC) algorithm to gain a computational edge by optimizing both local and global search capabilities. ABC was inspired by the collective behavior of bees gathering honey in an optimized pattern. ABC has been beneficial to various networks, such as the Hopfield Neural Network [14] and the Hermite Neural Network [32]. Kurban and Besdok [33] utilized ABC to estimate the centers, widths, and weights as the main parameters of RBFNN. Yu and Duan [34] proposed an optimized ABC in RBFNN integrated with Fuzzy C-Means clustering; in their paper, two layers of optimization in ABC were reported to increase the accuracy of image fusion. Jafrasteh and Fathianpour [35] proposed a hybrid RBFNN by introducing perturbation in ABC; the proposed system was reported to capture the non-linear relationships in ore grade data. In another development, Satapathy et al. [36] combined the benefits of kernel-trained ABC to further optimize the capability of RBFNN; the proposed RBFNN managed to increase the classification accuracy of EEG signals for epileptic seizure identification. The perspective was expanded by Aljarah et al. [37], who introduced a hybrid of ABC with RBFNN to solve well-known data sets. From the perspective of logic programming in RBFNN, few studies have optimized the parameters of RBFNN by using ABC. Kasihmuddin et al. [14] demonstrated the ability of ABC to serve as an effective learning algorithm in the Hopfield Neural Network (HNN). One notable use of ABC was proposed by Jiang et al. [39], in which ABC was employed to optimize the parameters of RBFNN and predict ecological pressure. In another development, Menad et al. [38] utilized the RBFNN framework with the ABC algorithm (RBFNN-ABC) for predicting carbon dioxide solubility and concentration in brine; the results demonstrated the capability of ABC in optimizing RBFNN, resulting in higher accuracy. By hybridizing RBFNN with 2SAT logic, we examine here the effects of ABC on the training phase as a single framework, RBFNN-2SATABC. It is worth noting that the proposed model will be compared with the existing models. Thus, the main motivations for employing ABC in this research are as follows: 1. According to Kasihmuddin et al. [14], [62], ABC has outperformed other algorithms, such as those in [5] and [6], in enhancing the training phase for the bipolar 2SAT logical representation. We extend this to a non-binary representation for optimizing the parameters entrenched in the hidden layer of RBFNN, inspired by the operators that make up the employed bee and onlooker bee phases.
2. Several current studies, such as Menad et al. [38] and Jiang et al. [39], utilized ABC to optimize the prediction capability of RBFNN. The combination of local and global search capabilities reduces the chance of ABC settling at sub-optimal fitness. Motivated by these recent works, the ABC algorithm is applied to improve the quality of the output weights, thereby improving the performance of the RBFNN-2SAT structure.
To this end, the contributions of this paper are as follows: 1. This paper explores another perspective in approaching implicit knowledge by using an explicit learning model. A real-life problem (implicit representation) becomes learnable through a set of explicit mathematical representations (the 2SAT logical rule).
2. This is the first attempt to embed the 2SAT logical rule (knowledge) into a feed-forward neural network (learner). In this study, the 2SAT logical rule is embedded in RBFNN by systematically obtaining the optimal values of its parameters (center and width). The 2SAT logical rule is expected to optimize the structure of the RBFNN by fixing the number of hidden neurons involved.

3. Since the training of the proposed RBFNN always converges to a suboptimal output weight, this paper explores the capability of the Artificial Bee Colony (ABC) algorithm compared with other established metaheuristics. The aim of the training model in RBFNN is to obtain the optimal output weight with the lowest iteration error. Extensive experimentation with various performance metrics has been conducted to reveal the effectiveness of ABC in the proposed RBFNN-2SAT.

4. The proposed RBFNN provides an interesting perspective: it obtains the output weight of 2SAT by minimizing the objective function with structurally systematic parameters. This approach differs from that of Sathasivam [40], which utilized the Wan Abdullah method to find the correct synaptic weights (output weights). Although both paradigms utilize ABC to optimize the proposed methods, the method proposed in this paper deals with non-binary optimization, in contrast to the existing method. Therefore, the proposed method opens a new horizon for logic programming in neural networks.
The rest of this paper is arranged as follows. The 2SAT logical rule is formulated first. After an overview of the general RBFNN structure, the proposed hybrid model integrated with 2SAT is constructed. Accordingly, the proposed training models via the metaheuristic algorithms, namely GA, DE, PSO, and ABC, are discussed in detail. Finally, this paper presents numerical results to show the effectiveness of ABC in optimizing 2SAT in RBFNN, and we conclude with some remarks and future work.

II. Boolean 2 Satisfiability Representation
Satisfiability (SAT) is defined as a logical rule with an array of clauses composed of binary literals. SAT is effectively governed by positive [5] and negative outcomes. The main structure of the SAT representation is as follows:
(a) A set of $m$ variables, $v_1, v_2, \ldots, v_m$.
(b) A set of literals. A literal is a variable $v$ or the negation of a variable, $\neg v$.
(c) A set of $n$ distinct clauses, $l_1, l_2, l_3, \ldots, l_n$, combined strictly by the logical operator $\wedge$.
Every variable can take only a binary value of 1 or 0, which exemplifies the idea of true and false. Another variant of the SAT representation is 2 Satisfiability (2SAT), which consists of a set of clauses that each contain strictly 2 literals. The general formula for the 2SAT logic is as follows:

$$P_{2SAT} = \bigwedge_{i=1}^{n} l_i, \qquad l_i = \left( C_i \vee D_i \right)$$

where $l_i$ refers to the clauses of 2SAT, $C_i$ and $D_i$ denote the literals, $\vee$ refers to disjunction (OR), and $\wedge$ is the logical operator of conjunction (AND).
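As a concrete illustration of this definition, the following sketch checks whether a 0/1 assignment satisfies a 2SAT formula $P_{2SAT} = \bigwedge_i (C_i \vee D_i)$. The encoding of literals as (index, negated) pairs and all function names are our own conventions, not notation from the paper.

```python
# A minimal sketch of 2SAT evaluation, assuming the standard CNF reading
# above: two-literal clauses joined by AND, literals within a clause
# joined by OR, and binary truth values {0, 1}.

def literal_value(assignment, index, negated):
    """Truth value of a literal under a 0/1 assignment."""
    value = assignment[index]
    return 1 - value if negated else value

def satisfies_2sat(assignment, clauses):
    """Check that every 2-literal clause (C v D) has at least one true literal."""
    for (i, neg_i), (j, neg_j) in clauses:
        if not (literal_value(assignment, i, neg_i) or
                literal_value(assignment, j, neg_j)):
            return False
    return True

# Example: (A v ~B) ^ (~A v C) over variables [A, B, C].
clauses = [((0, False), (1, True)), ((0, True), (2, False))]
print(satisfies_2sat([1, 0, 1], clauses))  # True: both clauses are satisfied
```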
The goal of the 2SAT logic is to establish the ideal logical model of RBFNN for calculating the parameters of the hidden layer, which contribute to deciding the number of hidden neurons in the hidden layer. Ideally, a combinatorial problem is similar to an ordinary mathematical model with a quantifiable rate of change. Unfortunately, that statement does not hold if the combinatorial problem is dynamic and appears non-linear or is not linearly distributed. There have been several efforts to represent combinatorial problems via the 2SAT formulation [42], [43]. These combinatorial problems contain implicit knowledge and cannot be represented by standard rates of change [44]. From that perspective, 2SAT is chosen as the main representation because this logical rule has great flexibility in terms of state (1 or 0) compared to a standard mathematical representation.

III. Radial Basis Function Neural Network
Radial Basis Function Neural Network (RBFNN) is a variant of the feed-forward neural network with an interconnected hidden layer, pioneered by Lowe and Moody [45], [46]. Compared to other networks, RBFNN has a more integrated structure and architecture. In terms of structure, RBFNN contains three neuron layers for computation purposes (see Fig. 1) [47]. In the input layer, $m$ neurons represent the input data transferred to the system. During the training phase, the parameters (center and width) are calculated in the hidden layer. The parameters obtained are then used to calculate the output weight in the output layer. To reduce the dimensionality from the input to the output layer, a Gaussian activation function is introduced. The Gaussian activation function $\varphi_i(x)$ of hidden neuron $i$ in RBFNN is as follows [48], [49]:

$$\varphi_i(x) = \exp\left( -\frac{\left\| x - c_i \right\|^2}{2 \sigma_i^2} \right)$$

where $c_i$ and $\sigma_i$ are the center and width of the hidden neuron, respectively. In this case, $x_j$ is an input value for the $N$ input neurons, and the Euclidean norm from neuron $j$ to neuron $i$ is defined as follows:

$$\left\| x - c_i \right\| = \sqrt{\sum_{j=1}^{N} \left( w'_{ji} x_j - c_i \right)^2}$$

where $w'_{ji}$ is the input weight between input neuron $j$ and hidden neuron $i$. Structurally, $x_j$ is an input datum in the training set and $c_i$ is the center of hidden neuron $i$. The final output of RBFNN, $F(w)$, is given by the following:

$$F(w) = \sum_{i=1}^{N_h} w_i \varphi_i(x)$$

where $F(w)$ is the output value of RBFNN, $N_h$ is the number of hidden neurons, and $w_i$ is the output weight between hidden neuron $i$ and the output neuron. The aim of RBFNN is to obtain the optimal weights $w_i$ that satisfy the desired output value. In RBFNN, the hidden neurons provide a set of functions that represent the input patterns spanned by the hidden layer [4], [47].
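For readers who prefer code, a minimal sketch of this forward computation follows. It uses the plain Euclidean distance $\| x - c_i \|$ (omitting the input weights $w'_{ji}$ for simplicity); the variable names and toy values are our assumptions.

```python
import numpy as np

def gaussian_basis(x, centers, widths):
    """phi_i(x) = exp(-||x - c_i||^2 / (2 * sigma_i^2)) for each hidden neuron."""
    dists_sq = np.sum((centers - x) ** 2, axis=1)   # squared distances ||x - c_i||^2
    return np.exp(-dists_sq / (2.0 * widths ** 2))

def rbfnn_output(x, centers, widths, out_weights):
    """F(w) = sum_i w_i * phi_i(x): weighted sum of the hidden activations."""
    return np.dot(out_weights, gaussian_basis(x, centers, widths))

# Toy usage: 3 hidden neurons in a 2-dimensional input space.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
out_weights = np.array([0.2, -0.4, 0.9])
print(rbfnn_output(np.array([0.5, 0.5]), centers, widths, out_weights))
```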
In this section, we consider the conventional no-training method for the Radial Basis Function Neural Network. The no-training paradigm for RBFNN was proposed by Vakil-Baghmisheh and Pavešić [50]. No training is the simplest scheme because all the parameters are fixed. This method of training RBFNN-2SAT does not have any practical value, because the number of prototype vectors must equal the number of input data, and consequently the network becomes too complex. Fig. 2 shows the steps to integrate the no-training RBFNN with 2SAT, abbreviated as RBFNN-2SATNT [9]. The parameter $x_i$ is the input data, $c_i = x_i$ is the center, $\sigma_i$ is the width, and $\zeta$ is the tolerance value.

IV. 2SAT Programming in RBFNN
Kasihmuddin et al. proposed logic programming by integrating the 2SAT rule with a neural network [14], [51]. The weights of the network were determined by the Wan Abdullah method [2], in which the inconsistencies of the 2 Satisfiability logical rule are minimized. The only problem of the proposed network is the rigidity of the weight calculation. 2SAT can be embedded into RBFNN by representing each variable as an input neuron. Each input neuron $x_j$ takes a value in $\{0, 1\}$, signifying False and True. Using the values from the input neurons, the parameters $c_i$ and $\sigma_i$ are computed and the best number of hidden neurons is obtained. In other words, embedding 2SAT as a logical rule enables RBFNN to receive more input data with fixed values of center and width. Hence, the aim of the combination is to create an RBFNN model that classifies data based on the 2SAT logical rule. The representation of 2SAT in RBFNN is given by Eq. (7) and (8), which are vital in calculating the training data for each 2SAT clause. The implementation of 2SAT in RBFNN is hence abbreviated as RBFNN-2SAT. Table I illustrates the input data of RBFNN-2SAT for a given $P_{2SAT}$. After finding the centers and widths of the hidden layer, RBFNN uses the Gaussian function in Eq. (3) to calculate the output weight. As the number of clauses increases, RBFNN-2SAT requires a more efficient learning method to find the correct output weight. In this paper, metaheuristic algorithms are implemented to find the optimal output weights that minimize the objective function $f(w_i)$, where $f(w_i)$ is the final output classification of the RBFNN-2SAT.
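Since the exact form of $f(w_i)$ is not reproduced here, the sketch below illustrates one plausible least-squares reading of the training objective: the squared deviation between the network output and the target truth value of each 2SAT training pattern. The function and variable names are our assumptions.

```python
import numpy as np

def objective(out_weights, hidden_activations, targets):
    """Assumed sum-of-squared-errors objective for RBFNN-2SAT training.

    hidden_activations: (patterns, hidden) matrix of phi_i(x) values
    targets: desired 2SAT outputs, e.g. 1 for satisfying assignments
    """
    outputs = hidden_activations @ out_weights   # F(w) for every pattern
    return np.sum((outputs - targets) ** 2)
```

All of the metaheuristic sketches in the following sections minimize an objective of this signature over the output weights.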

V. Genetic Algorithm in RBFNN-2SAT
A Genetic Algorithm (GA) is a standard metaheuristic algorithm for solving various optimization problems. Given a finite solution space, the structure of a GA can be divided into local search and global search [52]. In a GA, string populations called chromosomes represent solutions to the optimization problem [53]. The quality of a chromosome is denoted by its fitness value. At every generation, the fitness value of each chromosome is evaluated, and the best fitness is selected as the final solution. The chromosomes improve their fitness through three (3) operators, namely selection, crossover, and mutation. Crossover promotes the exchange of information between chromosomes. Hamadneh et al. [4] used GA to decide the centers and widths of the hidden neurons and the number of hidden neurons by minimizing the sum of the absolute errors between the actual outputs and the desired outputs. During selection, several chromosomes are selected from the current population depending on their fitness values. Mutation is added to create genetic diversity among the chromosomes. In this paper, GA is used to optimize the output weight of RBFNN-2SAT by reducing the training error. The implementation of GA in RBFNN is defined as RBFNN-2SATGA. In RBFNN-2SATGA, GA calculates the output weight by using the centers and widths of the hidden neurons. The steps involved in RBFNN-2SATGA are as follows:
Step 1 Population Initialization: The output weights, each represented by a chromosome, are initialized as

$$w_i = \left( w_{i1}, w_{i2}, \ldots, w_{iN_N} \right)$$

The population has $N_{pop}$ chromosomes, each containing $N_N$ random output weights. The aim is to minimize the objective function $f_{GA}(w_i)$.

Step 2 Fitness Computation: The fitness of each chromosome is calculated via the basis function of RBFNN-2SAT. The fitness function used in this paper is:

$$fit_i = \frac{1}{1 + f_{GA}(w_i)}$$

where $f_{GA}(w_i)$ is the objective function and $fit_i$ is the fitness value.
Step 3 Selection: The chromosomes are arranged in descending order based on the value of the fitness function. Only the best chromosomes (with the highest fitness values) are kept, while the others are discarded. The selection probability $p_i$ for each chromosome is calculated as follows:

$$p_i = \frac{fit_i}{\sum_{j=1}^{N_{pop}} fit_j}$$

Step 4 Crossover: During the crossover phase, information from the parents is randomly exchanged to create offspring with a different genetic composition. The location of the crossover is randomly selected, and the crossover phase determines the number of crossed chromosomes in the population according to the crossover rate. Given two parents $w_k$ and $w_m$, the offspring $w_i^{new}$ is produced by combining the two parents [54], [55], where $p_i$ is the probability, $r$ is the crossover rate, $w_k$ is the chromosome with the higher probability, $w_m$ is the chromosome with the lower probability, and the parameter $k$ is chosen such that $k + m = n$ and $k > m$, with its value drawn uniformly.
Step 5 Mutation: During the mutation phase, chromosome information is randomly reassigned within a pre-determined range (often set by the user). Mutation is expected to create a newly bred chromosome $w_m^{new}$, governed by a mutation factor $\tau \in [0, 1]$.
Step 6 Termination: GA iterates for up to 10000 generations. If the termination criterion is met, the algorithm stops; otherwise, it returns to Step 2 with $i = i + 1$. The final output of RBFNN-2SATGA is the chromosome that contains the optimal output weights.
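To make Steps 1 to 6 concrete, here is a compact, illustrative sketch of the output-weight search. The population size, the weight range $(-5, 5)$, the elitist replacement scheme, and the early-stopping threshold are our assumptions rather than the paper's settings; the fitness follows the form $fit_i = 1/(1 + f_{GA}(w_i))$ given above.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_train(objective, n_weights, pop=50, rate=0.9, generations=1000):
    chroms = rng.uniform(-5, 5, (pop, n_weights))       # Step 1: initialization
    for _ in range(generations):
        cost = np.array([objective(c) for c in chroms])
        fit = 1.0 / (1.0 + cost)                        # Step 2: fitness
        if cost.min() < 1e-6:                           # Step 6: early termination
            break
        chroms = chroms[np.argsort(-fit)]               # Step 3: keep the fittest
        half = pop // 2
        for i in range(half, pop):                      # rebuild the weaker half
            pa = chroms[rng.integers(half)]
            pb = chroms[rng.integers(half)]
            child = pa.copy()
            if rng.random() < rate and n_weights > 1:   # Step 4: one-point crossover
                cut = rng.integers(1, n_weights)
                child[cut:] = pb[cut:]
            j = rng.integers(n_weights)                 # Step 5: mutate one gene
            child[j] = rng.uniform(-5, 5)
            chroms[i] = child
    return min(chroms, key=objective)                   # best chromosome found
```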

VI. Differential Evolution Algorithm in RBFNN-2SAT
Storn and Price [22] introduced a new evolutionary population-based algorithm called the Differential Evolution (DE) algorithm, which is typically used in numerical optimization. The fundamental framework of the DE algorithm can be divided into local and global search with an adaptable function optimizer [56]. The core difference between GA and DE is that the selection operator in DE elects parents with equal probability; hence, the chance of selection is independent of the fitness value of the solutions. In the DE algorithm, every individual solution competes with its parent, and the fitter one wins [57]. In this work, the DE algorithm is adopted as the learning mechanism during the training phase. The purpose of the training is to compute the output weights that connect the hidden neurons and the output neuron of RBFNN-2SAT. The stages involved in RBFNN-2SATDE for optimizing the connection weights between the hidden layer and the output layer are represented in Fig. 3.

[Fig. 3. Flowchart of RBFNN-2SATDE: the population $w = (w_1, w_2, w_3, \ldots, w_n)$ is initialized with randomly selected initial parameters, then mutation, crossover, and selection are applied repeatedly until the termination criterion is met.]
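Because Fig. 3 summarizes the procedure only at flowchart level, the following sketch shows a standard DE/rand/1/bin loop of the kind described in the text. The control parameters $F$ and $CR$, the population size, and the weight range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_train(objective, n_weights, pop=50, F=0.5, CR=0.9, iters=1000):
    w = rng.uniform(-5, 5, (pop, n_weights))            # initialize population
    cost = np.array([objective(x) for x in w])
    for _ in range(iters):
        for i in range(pop):
            others = [k for k in range(pop) if k != i]
            a, b, c = rng.choice(others, 3, replace=False)
            mutant = w[a] + F * (w[b] - w[c])           # DE mutation
            cross = rng.random(n_weights) < CR          # binomial crossover mask
            cross[rng.integers(n_weights)] = True       # guarantee one mutant gene
            trial = np.where(cross, mutant, w[i])
            f_trial = objective(trial)
            if f_trial <= cost[i]:                      # one-to-one greedy selection
                w[i], cost[i] = trial, f_trial
    return w[np.argmin(cost)]
```

The one-to-one selection at the end of the loop reflects the equal-probability parent competition noted above: each trial vector competes only against its own parent.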

VII. Particle Swarm Optimization Algorithm in RBFNN-2SAT
The PSO algorithm is a class of iterative swarm-based search algorithms, widely deployed as a learning algorithm or for general-purpose optimization. The pioneering work on PSO was carried out by Eberhart and Kennedy [26], who mathematically modelled the socio-behavioral features of bird flocking and fish schooling within their own populations. A remarkable feature of PSO is its adjustable free parameters, which make it easy to implement and optimize. Specifically, PSO adopts a vigorous search process by moving toward the best particle in the solution space [58]. The potential solutions, named particles, fly through the search space by following the current optimum particles, and the resulting changes in particle positions are vital in the search for the best particle. This study adopts the PSO algorithm to optimize the output weights between the hidden neurons and the output neuron of RBFNN-2SAT. The steps involved in RBFNN-2SATPSO are represented in Fig. 4.

[Fig. 4. Flowchart of RBFNN-2SATPSO.] The population $w_0$, $x_0$ is initialized and evaluated using the objective function $f(w_i)$. Each particle then updates its velocity and position as follows:

$$v_i^{t+1} = \Omega v_i^t + \varepsilon_1 \, rand_1 \left( p_i^{best} - x_i^t \right) + \varepsilon_2 \, rand_2 \left( g_i^{best} - x_i^t \right)$$

$$x_i^{t+1} = x_i^t + v_i^{t+1}$$

The parameter $\Omega$ is the inertia weight, $\varepsilon_1 = \varepsilon_2 = 2$ are acceleration constants, $rand_1$ and $rand_2$ are drawn arbitrarily within $[0, 1]$, $p_i^{best}$ refers to the individual best position attained by a particle of the primary swarm, $g_i^{best}$ denotes the global best position achieved by the particles of the successive swarm, and $x_i$ is the position of the new particle. Additionally, $\zeta$ is the tolerance value.
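A minimal sketch of these update rules follows; $\varepsilon_1 = \varepsilon_2 = 2$ is taken from the text, while the inertia value, swarm size, weight range, and iteration budget are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_train(objective, n_weights, pop=50, omega=0.7, eps=2.0, iters=1000):
    x = rng.uniform(-5, 5, (pop, n_weights))            # particle positions
    v = np.zeros((pop, n_weights))                      # particle velocities
    p_best = x.copy()                                   # individual best positions
    p_cost = np.array([objective(p) for p in x])
    g_best = p_best[np.argmin(p_cost)].copy()           # global best position
    for _ in range(iters):
        r1 = rng.random((pop, n_weights))
        r2 = rng.random((pop, n_weights))
        v = omega * v + eps * r1 * (p_best - x) + eps * r2 * (g_best - x)
        x = x + v                                       # position update
        cost = np.array([objective(p) for p in x])
        better = cost < p_cost                          # refresh personal bests
        p_best[better] = x[better]
        p_cost[better] = cost[better]
        g_best = p_best[np.argmin(p_cost)].copy()       # refresh global best
    return g_best
```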

VIII. Artificial Bee Colony Algorithm in RBFNN-2SAT
The Artificial Bee Colony (ABC) algorithm was introduced by Karaboga [59] for solving various mathematical optimization problems. In ABC, the colony of bees contains three groups, namely employed bees, onlooker bees, and scout bees. Generally, employed bees bring quantities of nectar from the food source to the hive. They share information about the food source with a certain probability by dancing inside the hive. Onlooker bees then stay in the dancing area and decide on a food source depending on the prospect (the probability) provided by the employed bees [32]. The third type of bee, the scout bee, conducts a random search for new food sources if the quality of a food source is unsatisfactory. In this paper, ABC is used to optimize the output weight of RBFNN-2SAT by reducing the training error. The implementation of ABC in RBFNN is defined as RBFNN-2SATABC. In this context, the function to be optimized is the objective function $f_{ABC}(w_i)$ of the RBFNN-2SATABC model. The algorithm involved in RBFNN-2SATABC is as follows:
Step 1 Initialization: The initial food sources (solutions) are generated randomly, where $n$ is the number of employed bees (the number of solutions) and $d$ is the dimension of the solution space (the number of hidden neurons).
Step 2 Employed Bee Phase: Each employed bee searches for a food source. The new food source (solution) for an employed bee, $w_{ji}^{employed}$, is given as follows:

$$w_{ji}^{employed} = w_{ji} + \phi_{ji} \left( w_{ji} - w_{jk} \right)$$

where $j$ and $k$ are selected randomly, $\phi_{ji}$ is a random number in $[-1, 1]$, and $w_{jk}$ is called the neighbor bee of $w_{ji}$. The value of $f_{ABC}(w_{ji}^{employed})$ is used to calculate the fitness as follows:

$$fit_i = \frac{1}{1 + f_{ABC}\left( w_{ji}^{employed} \right)}$$

where $fit_i$ is the fitness value of the bee.
Step 3 Onlooker Bee Phase: The probability value of each food source is calculated. Onlooker bees exchange information based on the following probability:

$$p_i = \frac{fit_i}{\sum_{j=1}^{n} fit_j}$$

Using this probability, the food source is obtained by using equation (21).
Step 4 Scout Bee Phase: If the fitness of an employed bee does not improve for a predetermined number of consecutive iterations (the limit), its food source is abandoned and the employed bee becomes a scout, which generates a new solution $w_i^{new}$ within the search range $(-5, 5)$ by using the following equation:

$$w_i^{new} = w_{min} + rand[0, 1] \left( w_{max} - w_{min} \right)$$

Step 5 Termination: If the stopping criterion is met, the algorithm stops and the best food source is memorized; otherwise, the algorithm returns to Step 2.
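The sketch below mirrors Steps 1 to 5: employed bees perturb one dimension of a solution against a random neighbour, onlooker bees repeat the move with roulette-wheel probabilities, and exhausted sources are replaced by scouts within the assumed $(-5, 5)$ range. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def abc_train(objective, d, n=25, limit=50, iters=1000):
    w = rng.uniform(-5, 5, (n, d))                      # Step 1: food sources
    cost = np.array([objective(s) for s in w])
    trials = np.zeros(n, dtype=int)                     # stagnation counters

    def try_move(i):
        k = rng.choice([j for j in range(n) if j != i]) # random neighbour bee
        j = rng.integers(d)
        cand = w[i].copy()
        cand[j] += rng.uniform(-1, 1) * (w[i, j] - w[k, j])
        c = objective(cand)
        if c < cost[i]:                                 # greedy acceptance
            w[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n):                              # Step 2: employed bees
            try_move(i)
        fit = 1.0 / (1.0 + cost)
        probs = fit / fit.sum()                         # Step 3: onlooker bees
        for i in rng.choice(n, size=n, p=probs):
            try_move(i)
        for i in np.where(trials > limit)[0]:           # Step 4: scout bees
            w[i] = rng.uniform(-5, 5, d)
            cost[i], trials[i] = objective(w[i]), 0
    return w[np.argmin(cost)]                           # Step 5: best food source
```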

IX. Experimental Setup
All the proposed RBFNN-2SAT models were executed and coded in Microsoft Visual C# 2008 Express on Microsoft Windows 7 (64-bit) with a 500 GB hard drive, 4096 MB RAM, and a 3.40 GHz processor. The parameters used in each RBFNN-2SAT model are summarized in Table II to Table V. Simulated data sets were obtained by randomly generating the input data. This choice of data reduces possible bias and covers a wider range of the search space. The number of neurons $NN$ used in the experiments varies within $6 \le NN \le 108$.

X. Results and Discussion
Hamadneh et al. [60] used the mean square error as a metric to appraise the performance of the trained RBFNN. In this paper, the proposed hybrid models are compared using four performance metrics: Root Mean Square Error (RMSE), Sum of Squares Error (SSE), Mean Absolute Percentage Error (MAPE), and CPU time:

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( f(w_i) - y_i \right)^2}, \qquad SSE = \sum_{i=1}^{n} \left( f(w_i) - y_i \right)^2, \qquad MAPE = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{f(w_i) - y_i}{y_i} \right|$$

where $f(w_i)$ is the actual output value, $y_i$ is the target output value, and $n$ is the number of iterations (a short code sketch of these metrics appears after the findings below). In addition, computation time is considered in order to evaluate the efficiency of the RBFNN models. In this study, the 2SAT logical rule is expected to perform comparatively better than non-systematic logical rules such as those in [6], [29], [61], [62], [63]. This is due to the variation in the number of variables in each clause, which causes RBFNN-2SAT to alter the dimension of the hidden layer. An imbalanced signal from the hidden layer to the output layer leads to imbalanced parameter values (center and width) and high computation error. The results in Fig. 5 to Fig. 8 lead to the following findings:
1. RBFNN-2SAT can receive more input data with fixed values of center and width. In this case, RBFNN-2SATABC creates a model that classifies data based on the 2SAT logical rule with minimum values of RMSE, SSE, and MAPE.
2. RBFNN-2SATABC shows the best performance in terms of errors as the number of neurons increases. On the exploration front (employed bees), ABC locates the general range of the optimal output weight; the value of the output weight then improves significantly during the exploitation phase (onlooker bees). Based on the results, the probability of RBFNN-2SATABC reaching the scout bee phase is approximately zero. In this case, RBFNN-2SATABC effectively explores different solution spaces in fewer iterations.
3. In terms of computation time, RBFNN-2SATABC was reported to be faster than the other RBFNN-2SAT models. At $NN > 20$, the possibility of the conventional RBFNN-2SATNT method being trapped in a trial-and-error state increases; trial and error causes RBFNN-2SATNT to reach premature convergence.

4. On the other hand, RBFNN-2SATGA has a relatively larger learning error because of ineffective initial crossover. RBFNN-2SATGA requires several iterations to produce high-quality output weights, and during that time the only effective operator is mutation. The problem worsens when the suboptimal output weight is a floating-point number.
5. RBFNN-2SATDE exhibits some drawbacks, such as a tendency to be trapped at suboptimal output weights and a slow convergence rate. In this case, RBFNN-2SATDE requires more iterations to satisfy the termination criterion, which results in the accumulation of error. In addition, the unbounded mutation operator in DE tends to create numerous alternative search spaces, reducing the probability of RBFNN-2SATDE achieving the optimal output weight.
6. From another perspective, RBFNN-2SATPSO has a relatively lower learning error compared to the other models. This is due to the use of particles in this algorithm, which mimics our proposed ABC algorithm. Although the results for RBFNN-2SATPSO seem promising, the algorithm lacks effective control of the local search. In this case, as $t \to 10000$, the search space for each particle magnifies indefinitely and results in suboptimal output weights. Hence, RBFNN-2SATPSO converges prematurely.
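As referenced above, here is a short sketch of the three error metrics under their standard definitions; the nonzero-target requirement for MAPE is our caveat.

```python
import numpy as np

def metrics(outputs, targets):
    """RMSE, SSE, and MAPE between network outputs f(w_i) and targets y_i."""
    err = outputs - targets
    sse = np.sum(err ** 2)
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / targets))   # targets must be nonzero
    return rmse, sse, mape
```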
These experiments show that the ABC algorithm can be successfully applied to train RBFNN-2SAT. Another observation is that the effectiveness of ABC becomes vividly apparent as the number of neurons increases. Moreover, the ABC algorithm in RBFNN achieves more promising performance, improving RMSE by 94.8%, SSE by 72.9%, MAPE by 99.1%, and CPU time by 39.8%. This indicates that ABC in RBFNN-2SAT could be used in practice to achieve better prediction results for 2SAT logic programming.

XI. Conclusion
A hybrid paradigm, the ABC algorithm incorporated with RBFNN and 2SAT (RBFNN-2SATABC), has been successfully developed to foster the learning phase with different numbers of neurons. The work reported in this paper reveals significant differences in the performance of RBFNN-2SATABC in terms of Root Mean Square Error (RMSE), Sum of Squares Error (SSE), Mean Absolute Percentage Error (MAPE), and processing time (computation time in seconds). Furthermore, the proposed paradigm achieves a MAPE of approximately 2% and a faster computation time compared to RBFNN-2SATGA. Hence, RBFNN-2SATABC is clearly more robust than RBFNN-2SATGA in certain aspects, including lower error and faster processing time in performing 2SAT logic programming. As future development, RBFNN-2SATABC can be improved by using different classes of Satisfiability logic, ranging from Majority Satisfiability (MAJ-SAT) and Weighted SAT to Maximum Satisfiability (MAX-SAT) and Minimum Unsatisfiability (MIN-UNSAT). This work can also be applied to classical optimization problems such as the travelling salesman problem and the N-queens problem.