Modified Election Algorithm in Accelerating the Performance of Hopfield Neural Network for Random k-Satisfiability

Election Algorithm (EA) is a powerful metaheuristic inspired by the socio-political mechanism of the presidential elections conducted in many countries. EA is selected here for its capability and robustness in handling complex problems such as the random-2SAT logic program. This paper utilizes a hybridized EA assimilated with the Hopfield neural network (HNN) to carry out a random logic program (HNN-R2SATEA). The efficiency of the proposed method was compared with the existing traditional exhaustive search model (HNN-R2SATES) and the recently introduced HNN-R2SATICA model. The results obtained clearly show that the proposed hybrid model outperformed the other existing models in terms of the Global Minima Ratio (Zm), Mean Absolute Error (MAE), Bayesian Information Criterion (BIC) and Execution Time (ET). The outcome shows that the EA outperformed the other two algorithms in carrying out the random-kSAT logic program. The results confirm the robustness, effectiveness and compatibility of the HNN-R2SATEA model.


Introduction
Artificial neural networks (ANNs) belong to the family of computational architecture-based models, viewed as analogous to the brain's processing: they imitate its design and attempt to mimic the nervous-system activity through which information is handled by the brain [1]. An ANN consists of many basic processing components (called artificial neurons) that are loosely based on biological neurons. It learns the relations between the processing elements and the system parameters through a cycle of adjustment. It contains a number of interconnected neurons, each with a specific synaptic weight that essentially determines how much a neuron's output can influence the input to the next neurons. Usually, neurons also have their own weight, called a "bias" term, that defines the neuron's effect on the network itself. The information is preserved in the weights and learned by the network; this process is carried out by several known algorithms [2]. There has been enormous development in computational technology and increased interest in the possible applications of artificial intelligence in various domains. Recently, ANN has been rated as one of the most significant and widespread branches of study in computational science and artificial intelligence (AI). Essentially, an ANN is a computer-generated mathematical algorithm that learns from regular data and extracts information from such data for meaningful and intelligent decision making. A trained ANN takes a very fundamental approach to the functioning of a small biological neural network. ANNs are digitalized prototypes of the biological brain and can detect complex non-linear interactions between dependent and independent variables in data that human experts may not detect. An ANN can approximate any function if appropriate information is given. It is now commonly used in various medical and health disciplines, especially radiology and cardiology, for detection purposes.
Many scholars have applied ANNs in medicine and clinical studies for modelling pharmacoepidemiology and for medical data mining [3].
It is crucial to choose an appropriate framework for neural network training. The predominant backpropagation (BP) algorithm is a gradient-descent algorithm in error space that is likely to become stuck in local minima and to converge very slowly. There are many studies in the field of ANN training methods; these include the work in [4], which analysed pattern retrieval in the Hopfield design via a genetic algorithm. The learning algorithms considered were the normal, robust and batch BP algorithms [5][6]. The improvement of the neural network BP algorithm was studied in [5]. ANNs "learn" arrays of input-output mappings from observations by optimizing the branch weights that connect the ANN nodes. In [6], a study assessing various training functions for predicting monthly streamflow was presented. [7] proposed a learning approach based on a genetic algorithm fused with particle swarm optimization and compared it with other learning algorithms. In [8], a hybrid approach based on imperialist competitive optimization incorporated in an ANN fuzzy logic program for reservoir forecasting was proposed. [9] focused on quantized neural networks, which train with low-precision weights and activations; the quantized weights and activations are used at training time to determine the gradients of the parameters. A novel hybrid solution has been suggested for globally optimal path planning based on an ICA-trained neural network [10]. An examination of the fundamental insights into how the ICA evolved and how it extended to industrial engineering disciplines is given in [11]. In [12], a new method named cyclical learning rates was described for determining the learning rate of an ANN, which virtually eliminates the need to determine the best values and schedule for the learning rate experimentally.
One of the major breakthroughs for AI and computational science is neuro-symbolic computation, which combines the benefits of metaheuristics, the Hopfield network and logic programming in finding optimal solutions to various optimization problems [4,[12][13][14]. Neural-symbolic computing strives to integrate the two most fundamental cognitive abilities, as projected by various scholars: the ability to learn, and the ability to reason from what has been learned [14][15][16]. Neural-symbolic computation has consistently been an active research area, internalizing the benefits of novel and robust learning together with the logical reasoning and interpretability of symbolic simplification of ANNs [17][18]. Recent advances in neural-symbolic computation as a fundamental approach for applied machine learning and reasoning have been explored in [14]. That work explains the usefulness of the technique by outlining the main characteristics of the methodology: the convergence of neural processing with the symbolic representation of intelligence and reasoning, which allows for the development of explainable AI systems. Neural-symbolic computation perspectives shed new light on the increasing need for interpretable and transparent AI systems [13][14][15].
The main idea of logic as a programming language in neural networks was introduced in [15] to serve, represent and interpret a problem. The motivating force behind it is the notion that a single formalism is adequate for both logic and computing, and that it subsumes computation. An algorithm may be viewed as consisting of a logic component that defines the knowledge to be used in solving a given problem, and a control component that decides the problem-solving technique by which that knowledge is utilized. From the perspective of combinatorial optimization, logic programming can therefore be viewed as an optimization problem. In [15], the logic programming idea was extended by incorporating a competent logical mapping of propositional knowledge through a symmetric connectionist network.
The proposed symmetric connectionist network (SCN) has therefore attracted the interest of the AI and computational science communities seeking to combine the advantages of both the logic program and the neural network in a single network [16][17][18]. In [17], Wan Abdullah developed what is now named the Wan Abdullah method. It is a method for calculating the synaptic weights of the Hopfield network that correspond to a given logical system, and the technique remains applicable particularly when working with recurrent neural networks. In [18], the Wan Abdullah method was used to determine the synaptic weights of the Hopfield neural network for a Horn logic program. [19] upgraded Horn logic programming by integrating an efficient relaxation method to generate the optimum final neuron states. The stochastic approach to Hopfield network logic programming was extended further by [20], which reduced neuron oscillations during the Hopfield network retrieval phase. The notion of logic programming merged with a radial basis function neural network (RBFNN) as a single computational model was achieved in [21]; the results reveal that RBFNN in logic programming worked well. Subsequently, [22] depicted the flexibility of HNN logic programming in Mean Field Theory. Furthermore, [23] launched Horn logical rules for the Hopfield neural network activation function and successfully implemented the kSAT logical rule, which was observed to be very closely linked to the HNN. The benefit of metaheuristic algorithms such as GA, ICA and EA is that the neural network can be improved at different stages, such as weight training and adjustment, system adaptation for determining the number of layers, node transfer functions, the retrieval phase and learning rules [24].
Recently, EA has emerged as a new evolutionary metaheuristic algorithm that has been applied to computational optimization and engineering applications [25][26]. Unlike many other metaheuristics, which are mainly inspired by swarms or natural evolutionary processes, EA adopts the socio-political process of the presidential elections conducted in many countries. The synthesis of metaheuristics and the Hopfield model has been proven effective in carrying out satisfiability as a logic program [27][28]. Therefore, this research incorporates an election algorithm to complement the Hopfield neural network and accelerate the random-kSAT solution search process. HNN-RkSATEA denotes the fusion of the Hopfield network and the election algorithm in finding the optimal solution of the random-kSAT logic program.

Boolean Satisfiability
The problem of whether a given propositional formula has a satisfying truth assignment (SAT) is regarded as one of the first problems proved to be NP-complete. The SAT problem involves finding a set of binary mappings that satisfy a set of constraints, or proving that no such mapping exists [29]. SAT is a key issue in computational science and mathematical optimization, as well as in many other areas of engineering and electronic design automation, alongside other well-known NP-complete problems such as timetabling, graph colourability, independent sets, circuit design verification, vertex cover and the Hamiltonian path [29][30][31]. Recently, the real-world applicability of SAT algorithms has improved dramatically due to their ability to solve large industrial SAT instances in a relatively short time [32]. Local search algorithms have been used to solve large random instances of SAT, as well as some classes of industrial and practical SAT instances [33][34]. Notwithstanding the SAT problem being NP-complete [35], SAT solver technology has improved significantly over the past decade. This has culminated in several successful SAT algorithms capable of optimizing thousands of variables with many constraints. Such algorithms include discrete mutation [28], conflict-driven clause learning [36], membrane computing [37], MiniSAT [38] and branch-and-bound algorithms [39][40]. One of the primary goals of a SAT algorithm is to reduce the computational complexity of the network. Since any NP problem can be converted to SAT in polynomial time, efforts to optimize SAT carry over to NP problems in general. In this study, random-kSAT will be embedded as a logical rule in the HNN.
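To make the satisfiability notion concrete, the following is a minimal Python sketch (not from the paper; DIMACS-style literal encoding is assumed, where literal k denotes variable |k| and a negative sign denotes negation) that checks a truth assignment against a CNF formula and searches all assignments by brute force:

```python
from itertools import product

def satisfies(clauses, assignment):
    """Check whether a truth assignment satisfies a CNF formula.
    Clauses are lists of non-zero ints: literal k means variable |k|,
    negative means negated (DIMACS-style)."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    """Exhaustively try all 2^n assignments; return one satisfying
    assignment as a dict, or None if the formula is unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if satisfies(clauses, assignment):
            return assignment
    return None

# (x1 OR ~x2) AND (~x1 OR x2) AND (x1 OR x2)
formula = [[1, -2], [-1, 2], [1, 2]]
print(brute_force_sat(formula, 2))  # {1: True, 2: True}
```

The exponential loop makes the NP-completeness tangible: doubling the variable count doubles the search space at every step.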
HNN-RkSATEA is a brand new model: there has been no previous effort to apply the benefits of the election algorithm to accelerate the performance of HNN in finding the optimal solution to the random-kSAT logic program. The proposed hybrid model is developed based on random-kSAT clauses. The main focus of this study is therefore to explore the feasibility of the hybrid election algorithm incorporated in the Hopfield neural network model based on the random-kSAT logical formula; the efficacy of HNN-RkSATEA is then compared with other current Hopfield models (HNN-RkSATES and HNN-RkSATICA).

Discrete Hopfield Neural Network
Hopfield, in an effort to model biological memory, proposed the so-called Hopfield Associative Memory (HAM). The HNN model is based on the McCulloch-Pitts paradigm of the artificial neuron [41]. In such an ANN, the state space constitutes the symmetric unit hypercube (i.e. the components of the state vectors are +1 or -1). The associated convergence theorem ensures that, in the serial mode of operation, the initial state converges to a stable state, while in the fully parallel mode it converges to a stable state or a cycle of length two [42]. In essence, the stable states are realized as the "memory states" of the associative memory. Hopfield was naturally interested in synthesizing a content-addressable memory (CAM) with certain preselected "desired stable states". It is a novel neural computational framework implemented through an auto-associative memory. HNNs belong to the recurrent, fully interconnected category of neural networks: all neurons are connected to each other, but there are no self-recurrent connections. An HNN consists of N interconnected nodes; each node can be expressed by a simplified Ising spin-glass variable from statistical mechanics [43]. The model dynamics include the state-evolving equation in eq. (1):

$S_j = \mathrm{sgn}\!\left(\sum_{k} U_{jk} S_k - \theta_j\right)$  (1)

where $U_{jk}$ is the synaptic weight between neurons j and k, $S_k$ is the state of neuron k and $\theta_j$ is the threshold of neuron j. HNN's energy dynamics and content-addressable memory offer a versatile system with high capacity, error tolerance, rapid memory recovery and tolerance of partial inputs [47][48][49], making it ideal for incorporation with combinatorial optimization problems such as SAT. HNN uses the logical rule to instruct the network's activity through the synaptic strength matrix. The logical formulation, consisting of vectors of variables, is framed in the form of N neurons.
The implementation of the random-kSAT logical rule in HNN is abbreviated as HNN-RkSAT, and the main objective is to minimize the logical inconsistency by reducing the network cost function, represented in eq. (2) as

$E_P = \sum_{i=1}^{NC} \prod_{j=1}^{k} M_{ij}$  (2)

where NC is the number of clauses and $M_{ij}$ is the inconsistency of variable j in clause i. The weight matrix represents the synaptic connections between the clauses and variables in a given logical formula. A simplified approach for computing the synaptic weight matrix of HNN, named the Wan Abdullah method, has been outlined in [17][18]; it is done by equating the cost function $E_P$ with the energy function of HNN. The local field of HNN is represented in eq. (4) as

$h_i = \sum_{j} U_{ij}^{(2)} S_j + U_i^{(1)}$  (4)

and the update rule $S_i = \mathrm{sgn}(h_i)$ ensures that the energy of the network decreases monotonically. The final energy of HNN is provided in eq. (6) as

$H = -\frac{1}{2}\sum_{i}\sum_{j} U_{ij}^{(2)} S_i S_j - \sum_{i} U_i^{(1)} S_i$  (6)

Convergence toward the minimum energy is considered optimal for an optimization search problem. The energy landscape of an HNN has many local minima; as a consequence, the network is likely to reach an equilibrium state which does not correspond to a problem solution. A major task in this field is to search for evolutionary strategies, such as EA, to move the network out of local minima.
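The asynchronous update and Lyapunov-energy dynamics described above can be sketched in a few lines of Python. This is an illustrative toy (the weights and biases below are invented for the example, not taken from the paper's synaptic-weight derivation):

```python
import numpy as np

def local_field(W, b, S, i):
    # h_i = sum_j W_ij * S_j + b_i  (no self-connection: W_ii = 0)
    return W[i] @ S + b[i]

def energy(W, b, S):
    # Lyapunov energy: E = -1/2 * S^T W S - b^T S
    return -0.5 * S @ W @ S - b @ S

def run_hopfield(W, b, S, max_sweeps=100, rng=None):
    """Asynchronous sign updates until the state is stable."""
    rng = rng or np.random.default_rng(0)
    S = S.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(S)):
            new = 1 if local_field(W, b, S, i) >= 0 else -1
            if new != S[i]:
                S[i], changed = new, True
        if not changed:
            break
    return S

# Two-neuron example: weights and biases favour the state (+1, +1)
W = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([1.5, 1.5])
S = run_hopfield(W, b, np.array([-1, -1]))
print(S, energy(W, b, S))  # converges to [1 1] with energy -4.0
```

Each accepted flip can only lower (or preserve) the energy, which is why the loop must terminate in a stable state, and also why it can terminate in a local rather than global minimum.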

Logic Program
The logic program has been used widely to represent connections and has both a declarative and a procedural meaning. It is made up of a set of program clauses that are triggered by an initial goal statement, and it offers a simple way of solving problems [14]. The architecture of a logic program is easier to develop, modify and understand relative to the black-box neuronal structure of the network. Logic programming is particularly appealing to novice database programmers and developers who do not want to be concerned with the complexities of monitoring the program's actions. A logic program (normal logic program) is described in eq. (7) as a finite collection of logical clauses of the following form:

$A \leftarrow B_1, \ldots, B_n$  (7)

where $n \in \mathbb{N}$ may differ for each clause, A is an atom in some first-order language, and $B_1, \ldots, B_n$ are literals, that is, atoms or their negations. As is common practice in logic programming, the universal quantifier is left implicit: A is described as the head of the program clause, each $B_i$ is referred to as a body literal of the clause, and their conjunction $B_1 \wedge \ldots \wedge B_n$ is referred to as the body of the program clause.

Satisfiability logic program (SAT)
An instance of SAT is generated from three parameters (p, q, r), where p, q and r represent the number of propositions, clauses, and literals per clause respectively. Each instance comprises q random clauses involving exactly r literals each. Each clause is independently and randomly selected from the set of $2^r \binom{p}{r}$ possible clauses of length r. The focus of the SAT problem, therefore, is to find out whether there exists a truth assignment to the variables making the formula in eq. (9) satisfiable.
The three components of a general random-kSAT formula can be outlined as follows:
i. a collection of variables, $x_1, x_2, x_3, \ldots, x_p$;
ii. a collection of literals, where a literal is a variable or its negation;
iii. a collection of distinct logical clauses, linked by the logical AND ($\wedge$) notation.
$P$ describes the Boolean formula for kSAT in conjunctive normal form (CNF); k-CNF is a special case of CNF which contains at most k literals in each clause. It has also been shown that a given Boolean formula can be translated into a k-CNF formula; satisfiability remains NP-complete for k ≥ 3, and the optimization variant MAX-2SAT is also NP-complete [29]. In the case of k = 2, the 2SAT formula is represented in eq. (10) as

$P = \bigwedge_{i=1}^{q} C_i$  (10)

where each logical clause $C_i$ is given in eq. (11) as

$C_i = (l_{i1} \vee l_{i2})$  (11)

where each $l_{ij}$ represents a propositional variable or its negation $\neg l_{ij}$. The first and most obvious applications of SAT have been the well-known NP-complete problems, which include vertex cover, TSP, independent set, Hamiltonian paths, etc.

Random 2-CNF formulas
The random 2-CNF model has been studied widely for a number of reasons. Firstly, it is a reasonable framework, comparable to the random graph prototype, that focuses on the fundamental structural properties of satisfiability. Secondly, randomly selected formulas with appropriately chosen parameters are experimentally difficult for SAT solvers, and are a frequently utilized benchmark for testing SAT algorithms. An instance of random-kSAT comes in the form of a collection of M = αN random clauses over N Boolean variables. Every logical clause contains exactly k variables associated by the logical OR operation, and each literal is negated with probability ½. Given n, m and β, to formulate a random-kSAT instance we generate m clauses independently at random from the set of $2^k \binom{n}{k}$ possible logical clauses, sampling the sign of each literal with probability 0.5 [50][51]. A probability distribution over 2-CNF formulas is obtained by selecting exactly one logical clause from each pair [52]. Let $F_2(n, m)$ be the distribution over 2-CNF formulas with n variables and m clauses, where each clause is selected randomly from all possible 2-clauses, and let r be a positive constant. The random 2SAT formula can be summarized in eq. (13). In general, the formula can be generated with different combinations of neurons (atoms) as the number of neurons (atoms) fluctuates. Comparatively, a high number of neurons (atoms) per clause increases the probability of the clauses being satisfied [18][19][28].
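The sampling procedure described above (k distinct variables per clause, each literal negated with probability ½) can be sketched as follows; the function name and the fixed seed are our own illustrative choices:

```python
import random

def random_ksat(n_vars, m_clauses, k=2, seed=42):
    """Draw m clauses uniformly: choose k distinct variables, then negate
    each literal independently with probability 1/2 (one of the
    2^k * C(n, k) possible clauses each time)."""
    rng = random.Random(seed)
    formula = []
    for _ in range(m_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in variables]
        formula.append(clause)
    return formula

phi = random_ksat(n_vars=10, m_clauses=5, k=2)
print(phi)  # five random 2-literal clauses over x1..x10
```

Varying the clause density α = m/n in such a generator is what produces the easy/hard instance regimes the benchmark literature studies.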
The cost function of the negated random 2SAT formula in bipolar representation is given in eq. (14):

$E_P = \sum_{i=1}^{NC} \prod_{j=1}^{2} M_{ij}, \qquad M_{ij} = \begin{cases} \frac{1}{2}(1 - S_x) & \text{if the literal is } x \\ \frac{1}{2}(1 + S_x) & \text{if the literal is } \neg x \end{cases}$  (14)

Random 2SAT can thus be regarded as a constrained optimization problem in the Hopfield model.
This can be implemented in the network by storing the atoms' truth values and minimizing the cost function so that the maximum number of clauses is fulfilled; a consistent (satisfying) interpretation yields $E_P = 0$. The aim of selecting the traditional exhaustive search framework is to determine the degree of efficacy of the HNN-R2SATES model in carrying out the random 2SAT logic program. Apart from that, feasible satisfiable assignments exist for any random 2SAT logical representation [53]. The satisfied clauses for the ES heuristic are extracted after a brute-force "trial and error" procedure is carried out. The efficiency of ES as a searching algorithm received attention in [53][54]. An exhaustive search performs the complete survey required to produce a precise topographical map of the solution space; this approach requires that an extremely large but finite solution space be checked over all combinations of the variable values. The objective function of ES is expressed in eq. (17):

$f_{ES} = \max_{j} f_j$  (17)

where $f_j$ designates the fitness of the j-th neuron combination, NC designates the number of logical clauses in the random 2SAT formula, and $C_j$ the number of logical clauses satisfied by combination j in the eligibility stage.

Stage 4: clause evaluation
Preserve the assignment with the highest possible fitness; otherwise, identify a new candidate variable vector. Exhaustive search does not get trapped in local minima when the sampling is fine enough, and it works for both continuous and discontinuous variable functions [29,53]. Nonetheless, achieving the global minimum requires an extremely long time. Another drawback of this strategy is that the global minimum may be skipped because of undersampling, and subsampling is tempting whenever evaluating the cost function takes a long time. For these reasons, exhaustive search over a minimal search area is only practical for a small number of variables [12,18,28,54].
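The ES baseline over bipolar states can be sketched as follows: enumerate all $2^n$ states, score each by the number of satisfied clauses $f_j$, and preserve the fittest. The literal encoding (positive integer for a variable, negative for its negation) is our own convention for the example:

```python
from itertools import product

def clause_satisfied(clause, state):
    # bipolar state: state[v] in {-1, +1}; a literal is true when the
    # neuron's sign matches the literal's sign
    return any(state[abs(lit)] == (1 if lit > 0 else -1) for lit in clause)

def fitness(formula, state):
    # f = number of satisfied clauses; f == NC marks a global solution
    return sum(clause_satisfied(c, state) for c in formula)

def exhaustive_search(formula, n_vars):
    """Enumerate all 2^n bipolar states and keep the fittest one."""
    best_state, best_f = None, -1
    for bits in product([-1, 1], repeat=n_vars):
        state = {i + 1: bits[i] for i in range(n_vars)}
        f = fitness(formula, state)
        if f > best_f:
            best_state, best_f = state, f
    return best_state, best_f

formula = [[1, 2], [-1, 2], [1, -2]]
state, f = exhaustive_search(formula, 2)
print(state, f)  # {1: 1, 2: 1} satisfies all 3 clauses
```

The nested full enumeration is exactly why the text limits ES to a small number of variables: the loop body runs $2^n$ times regardless of the formula's structure.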

Election Algorithm (EA)
The election algorithm is a computational method inspired by the socio-political phenomenon of presidential elections. The EA begins by defining the optimization variables and the cost function, and it ends by checking for convergence, like other optimization algorithms. It is considered a robust evolutionary methodology and has attracted a substantial amount of research in optimization. If an environment is considered as a function, the EA serves as a function optimizer; in this case, each individual in a population is a sample point in the function space [25][26].
In this simulation, a weight matrix (population) is generated at random at the beginning of the EA. In every phase, the vectors are updated through the campaign and coalition mechanisms, and their eligibility values are evaluated to ascertain the best candidate or voter positions. The cycle of reorganizing the weight vectors retains the best individual vectors and continues the search until an appropriate (best) solution is reached [25][26]. The HNN energy dynamics are used as a second eligibility assessment to pick the most appropriate weight matrix for the random-kSAT problem. The basic motivation of the election algorithm here is to discover the variable assignments that maximize the number of satisfied random clauses before joining the HNN. The election algorithm for random-kSAT is made up of the distinctive stages 1-5 below.
In general, the global optimization task can be stated (without loss of generality) as a minimization problem. The role of the eligibility function is to measure the goodness of each individual variable vector. The best $N_C$ individuals are selected to serve as the initial candidates forming the initial parties [32]. Thus, the number of remaining individuals, the voters, is represented in eq. (27) as

$N_v = N_p - N_C$  (27)

where $N_v$ denotes the total number of voters, $N_p$ the total population size and $N_C$ the total number of candidates. All supporters are divided among the candidates based on their similarity; the Euclidean distance is used as the similarity measure.
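A minimal sketch of this initialization stage follows, assuming a bipolar population and a user-supplied eligibility function (the toy eligibility in the example, counting +1 entries, is hypothetical): the fittest $N_C$ individuals become candidates, and each of the remaining $N_v = N_p - N_C$ voters joins the party of its nearest candidate by Euclidean distance.

```python
import numpy as np

def initialize_parties(pop_size, n_candidates, dim, eligibility, rng=None):
    """Sample a bipolar population, promote the fittest N_C individuals
    to candidates, and assign each voter to its nearest candidate."""
    rng = rng or np.random.default_rng(1)
    population = rng.choice([-1, 1], size=(pop_size, dim)).astype(float)
    scores = np.array([eligibility(ind) for ind in population])
    order = np.argsort(scores)[::-1]            # best first
    candidates = population[order[:n_candidates]]
    voters = population[order[n_candidates:]]   # N_v = N_p - N_C rows
    # similarity measure: Euclidean distance voter -> each candidate
    dists = np.linalg.norm(voters[:, None, :] - candidates[None, :, :], axis=2)
    party_of = dists.argmin(axis=1)
    return candidates, voters, party_of

# toy eligibility (hypothetical): count of +1 entries in the vector
cands, voters, party = initialize_parties(8, 2, 4, lambda x: (x > 0).sum())
print(cands.shape, voters.shape, party)
```

In the actual HNN-RkSATEA setting the eligibility would instead be the number of satisfied random-kSAT clauses.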

Stage 4 Advertising Campaign
Positive advertisement. In the process of modelling the EA positive-advertisement mechanism, the vector variable of a candidate in the solution space is randomly selected. Random numbers are sampled to pick the positions of the vector variables (of the voters) to be substituted. The number of vector variable values transferred from a candidate toward its supporters is determined by the selection rate, as represented in eq. (31) [25]:

$N_S = \lceil X_S \cdot S_C \rceil$  (31)

where $N_S$ is the number of sampled vector variables to be replaced, $X_S$ defines the selection rate and $S_C$ is the total number of vector variables of the candidate in the solution space. In an EA party, the effectiveness of an advertisement varies with the Euclidean eligibility distance between a candidate and its supporters. Positive advertisement occurs when a candidate with an excellent plan influences the voters' decisions: the number of its supporters increases, and the chances of improving the quality of the party's plans increase accordingly.
To model this goal, we represent an eligibility-distance coefficient; on the basis of eq. (33), during the campaign process the nearer supporters are much more influenced by their associated candidate than the other followers are.
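A sketch of the positive-advertisement operator, under the reading of eq. (31) as $N_S = \lceil X_S \cdot S_C \rceil$ (this ceiling form is our assumption): a random subset of $N_S$ entries of the supporter vector is overwritten with the candidate's values, pulling the supporter toward the candidate.

```python
import numpy as np

def positive_advertisement(candidate, supporter, selection_rate, rng=None):
    """Copy N_S = ceil(X_S * S_C) randomly chosen entries of the
    candidate vector into the supporter vector."""
    rng = rng or np.random.default_rng(7)
    s_c = len(candidate)                      # S_C: vector length
    n_s = int(np.ceil(selection_rate * s_c))  # N_S: entries to transfer
    idx = rng.choice(s_c, size=n_s, replace=False)
    supporter = supporter.copy()
    supporter[idx] = candidate[idx]
    return supporter

cand = np.ones(10)
supp = -np.ones(10)
new = positive_advertisement(cand, supp, selection_rate=0.3)
print(int((new == 1).sum()))  # exactly 3 entries copied from the candidate
```

A higher selection rate makes supporters converge on their candidate faster, at the cost of population diversity, mirroring the distance-weighted influence the text describes.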

Negative advertisement
In implementations of EA, contrast advertisement is used among the different negative-campaigning strategies. Candidates, through their campaigns of resistance, seek to attract the members of other parties towards themselves. This leads to an upsurge in support of the popular parties and a decline in the popularity of the marginalized parties. The difference in eligibility between the voters and the candidates of the defending party is measured first by applying the Euclidean metric.

Coalition
Candidates confederate if they share the same ideas. In EA, two or more parties with the same ideas and goals in the solution space can sometimes come together to create a new party. Some candidates then leave their campaigns and join another candidate, called the "leader"; a candidate leaving the election arena is referred to as a "follower". The followers unite with the leader and encourage their supporters to follow the leader, so all the followers' supporters become the leader's supporters. In our application, among the candidates who wish to unite, one candidate is randomly selected as the successor candidate and the remaining candidates are considered followers [25][26].

Stage 5: Election Day (Stopping condition)
Until a termination condition is met, the three operators (positive advertisement, negative advertisement and coalition) are applied to update the population. Ultimately, the candidate who gets the most votes is declared the winner, which corresponds to the best solution found for the optimization and search problem [25]. The modified election algorithm incorporated in the Hopfield network model to carry out random-kSAT is expected to be feasible. If a vector variable does not achieve the desired eligibility, its bits will continue to improve during the negative-campaign and coalition stages. To explore the solution space sufficiently, the 100 to 10000 iterations usually set in most metaheuristics are also adopted in our case [18]. A bipolar search involving 1 and -1 is used, since it is simple for a variable to converge to the global optimum. In this research, a modified election algorithm is proposed to accelerate the learning phase of the Hopfield network model as a single model carrying out the random 2SAT logic program.
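Putting stages 1-5 together, the loop structure can be sketched as follows. This is a simplified stand-in, not the paper's exact operators: positive advertisement pulls voters toward their candidate, and a voter that becomes fitter than its candidate takes over the party (a crude proxy for the negative-campaign and coalition dynamics); election day stops when a candidate reaches maximum eligibility or the iteration budget runs out.

```python
import numpy as np

def election_algorithm(eligibility, dim, pop_size=20, n_candidates=4,
                       max_iters=50, rng=None):
    """Minimal EA loop over bipolar vectors (illustrative only)."""
    rng = rng or np.random.default_rng(3)
    pop = rng.choice([-1.0, 1.0], size=(pop_size, dim))
    fit = np.array([eligibility(p) for p in pop])
    order = np.argsort(fit)[::-1]
    cands = pop[order[:n_candidates]].copy()
    voters = pop[order[n_candidates:]].copy()
    party = np.linalg.norm(voters[:, None] - cands[None], axis=2).argmin(axis=1)
    for _ in range(max_iters):
        for v in range(len(voters)):
            c = party[v]
            mask = rng.random(dim) < 0.5        # selection rate 0.5
            voters[v, mask] = cands[c, mask]    # positive advertisement
            if eligibility(voters[v]) > eligibility(cands[c]):
                # fitter voter replaces its candidate (takeover)
                cands[c], voters[v] = voters[v].copy(), cands[c].copy()
        best = max(cands, key=eligibility)
        if eligibility(best) == dim:            # election day: all clauses met
            break
    return max(cands, key=eligibility)

# toy eligibility (hypothetical): count of +1 entries, maximum = dim
winner = election_algorithm(lambda x: float((x > 0).sum()), dim=6)
print(winner)
```

In HNN-R2SATEA the eligibility would count satisfied random 2SAT clauses, and the winner's assignment would seed the Hopfield retrieval phase.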

Imperialist Competitive Algorithm (ICA)
The imperialist competitive algorithm (ICA) is a metaheuristic optimization method built on a socio-politically motivated strategy. It belongs to the class of evolutionary algorithms and is extracted from imperialist competition. It begins with an initial population known as countries; the countries are split into groups of colonies and imperialists that together form empires [55][56][57]. In reality, imperialist countries always try to conquer other countries to turn them into their own colonies, and compete actively among themselves for the colonization of other nations; the proposed evolutionary heuristic is the competition that arises between empires in the solution space. During this competition, the weakened empires collapse, and the stronger ones become more powerful [50]. ICA converges towards a scenario where only one empire exists in the solution space, which corresponds to the optimal solution. In this situation, the colonies have the same strength (fitness) and cost values as the imperialists, as colonies can be absorbed by imperialists [55][56][57].
The ICA consists of two primary phases: the movement of the colonies and the imperialist competition. In order to share the colonies proportionally among the imperialists, the normalized cost of an imperialist is defined as

$m_n = \max_i (M_i) - M_n$  (37)

where $M_n$ is the cost of the n-th imperialist and $m_n$ is its normalized cost; any imperialist with a higher cost will have a lower normalized cost. The power of each imperialist, which governs the distribution of the colony countries among the imperialist countries, is measured from the normalized cost using eq. (38):

$p_n = \left| \frac{m_n}{\sum_{i} m_i} \right|$  (38)

The total cost of an empire is given in eq. (41):

$TM_n = M(\text{imperialist}_n) + \xi \, \mathrm{mean}\{ M(\text{colonies of empire}_n) \}$  (41)

where $TM_n$ defines the overall cost of the n-th empire and $\xi$ denotes a positive number defined to be below one [49]. During the absorption strategy, a colony country moves y units toward its imperialist along the vector d between the colony and the imperialist, where y is a uniformly distributed random variable, $y \sim U(0, \beta \times d)$, with $\beta$ greater than 1 so that the colony can approach and move past the imperialist position; a reasonable choice is $\beta = 2$. A colony may reach a better position as it moves towards its imperialist country, in which case the colony and imperialist exchange positions. In ICA, the competition between imperialists strongly impacts the convergence of the algorithm: throughout colonial rivalries, the weakened empires give up their power and colonies. To model the competition process in ICA, we compute the probability that each empire will possess the weakest colony, taking each empire's total cost into consideration [55][56][57][58]. In this paper, the hybridization of ICA and HNN for random-kSAT is represented as HNN-RkSATICA.
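The normalized-cost, power, and assimilation formulas above can be sketched directly (the function names are ours; the power vector sums to one, so colonies can be allotted proportionally):

```python
import numpy as np

def imperialist_power(costs):
    """Normalized cost m_n = max_i(M_i) - M_n and power
    p_n = |m_n / sum_i m_i|: cheaper imperialists get more colonies."""
    costs = np.asarray(costs, dtype=float)
    normalized = costs.max() - costs
    return np.abs(normalized / normalized.sum())

def assimilate(colony, imperialist, beta=2.0, rng=None):
    """Move a colony y ~ U(0, beta) of the way along d toward its
    imperialist (beta > 1 lets it overshoot the imperialist)."""
    rng = rng or np.random.default_rng(5)
    d = imperialist - colony
    return colony + rng.uniform(0, beta) * d

p = imperialist_power([10.0, 20.0, 30.0])
print(p)  # the cheapest imperialist receives the largest share
```

With costs (10, 20, 30) the powers come out as (2/3, 1/3, 0): the most expensive imperialist receives no colonies at all, which is what drives weak empires toward collapse.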
The main stages in HNN-RkSATICA are as follows:
Stage 1. The random-kSAT problem is mapped into a Hopfield network, which is continuous and unconstrained.
Stage 2. The countries in ICA are initialized at random.
Stage 3. The HNN obtains a feasible solution for every single run.
Stage 4. The fitness $f_{country}$ of each country is calculated.

Model Performance Evaluation Criteria
The performance of our proposed HNN-R2SATEA model is compared with two existing models, HNN-RkSATES [18] and HNN-R2SATICA [58], in terms of the global minima ratio (Zm), Mean Absolute Error (MAE), Bayesian Information Criterion (BIC) and Execution Time (ET), to ascertain its efficiency, accuracy, robustness and suitability for model selection. To make a fair comparison, all experiments are implemented on the same computer with the same processor.

Global Minima Ratio (Zm)
This is described as the ratio of the number of global minimum energy states attained to the total number of runs in the solution space [18]. The Zm formula is given in eq. (51).

Mean Absolute Error (MAE)
This is described as the average absolute difference between the expected and the actual values in a given data set. It is used to ascertain the proximity of forecasts to outcomes in a given distribution. It is commonly used because of its capacity to estimate the error in the data, and it is identified as one of the effective metrics for capturing uniformly distributed error in a given sample [18,59]. The MAE equation is presented in eq. (52).
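The MAE computation itself is one line, $MAE = \frac{1}{n}\sum_i |\hat{y}_i - y_i|$:

```python
def mean_absolute_error(predicted, actual):
    """MAE = (1/n) * sum_i |predicted_i - actual_i|."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

print(mean_absolute_error([1.0, 2.0, 4.0], [1.0, 3.0, 2.0]))  # 1.0
```

Because every residual is weighted equally (no squaring), MAE is less sensitive to occasional large errors than MSE, which is why it suits uniformly distributed error.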

Bayesian Information Criterion (BIC)
This is a criterion for selecting a model from a finite set of models over a given distribution, used here to assess the computational efficiency of a model. The mean square error (MSE) is taken into consideration when computing the BIC values, so it is important to articulate the relationship between the two quantities [55]: in general, a lower MSE yields a lower BIC. The model with the minimum BIC value is regarded as the model of interest [60].
Cumulatively, the MSE is used to evaluate the BIC value during the training and retrieval process. The BIC formula is described as

BIC = n · ln(MSE) + pa · ln(n),

where n, pa and MSE indicate the number of solutions obtained, the number of model parameters and the mean square error of the model, respectively.
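A sketch of the BIC in its MSE form, BIC = n·ln(MSE) + pa·ln(n). This particular form is a common choice consistent with the description above, though the paper's exact equation may differ in detail; the example values are illustrative.

```python
import math

def bic(mse: float, n: int, pa: int) -> float:
    """BIC in the MSE form: n * ln(MSE) + pa * ln(n). Lower MSE drives
    BIC down, and the model with the lowest BIC is preferred."""
    return n * math.log(mse) + pa * math.log(n)

# With the same n and pa, the lower-MSE model gets the lower (better) BIC.
print(bic(mse=0.04, n=100, pa=3) < bic(mse=0.20, n=100, pa=3))  # True
```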

Execution Time (ET)
Execution time, commonly known as computational time, represents the time taken to complete one implementation cycle [18]. It is presented in eq. (54).
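Execution time can be measured with a simple wall-clock wrapper; the snippet below is a generic sketch rather than the paper's instrumentation.

```python
import time

def timed(fn, *args):
    """Run one implementation cycle and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

result, et = timed(sum, range(1_000_000))  # a stand-in workload
print(f"result={result}, ET={et:.6f}s")
```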

METHODOLOGY/IMPLEMENTATION/EXPERIMENTAL SETUP
The neuro-heuristic methods for random-kSAT in the HNN (HNN-R2SATEA, HNN-R2SATES and HNN-R2SATICA) were executed on an Intel® Celeron® CPU B800 @ 1.80 GHz with 4.00 GB RAM (2.85 GB usable), using Dev-C++ on Microsoft Windows 8. The program's main task is to find the best model that yields the optimal number of satisfied random-kSAT clauses. Both the parameters and the clauses were initially randomized. Simulations were performed with different numbers of neurons, from NN = 5 up to NN = 100. The execution of these models on the random 2SAT logical rule proceeds according to the following steps:
i. Translate all the random-kSAT logical clauses into Boolean algebra.
ii. Designate a neuron to each variable in the random-kSAT formula.
iii. Randomize the states of the neurons and initialize all connection strengths to zero.
iv. Derive the cost function of the HNN from the negation of all random 2SAT clauses.
A tolerance value of 0.001 for the Lyapunov energy dynamics is adopted, since [40] proved that this value yields a better sorting mechanism for the network.
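Step iv, deriving the cost function from the negated random 2SAT clauses, can be sketched using a standard (Wan Abdullah-style) formulation in which each clause contributes a product of inconsistency terms that vanishes exactly when the clause is satisfied. The bipolar states and the signed-integer clause encoding below are assumptions for illustration, not the paper's exact code.

```python
def clause_cost(states, clause):
    """Cost of one 2SAT clause: the product of (1/2)(1 - S_i) for a
    positive literal and (1/2)(1 + S_i) for a negated one, so the term
    is 0 iff the clause is satisfied. States are bipolar: S_i in {-1, +1}."""
    cost = 1.0
    for lit in clause:
        s = states[abs(lit) - 1]
        cost *= 0.5 * (1 - s) if lit > 0 else 0.5 * (1 + s)
    return cost

def total_cost(states, clauses):
    """Cost function E_P = sum of clause costs; E_P = 0 means all clauses
    are satisfied, which corresponds to the global minimum energy."""
    return sum(clause_cost(states, c) for c in clauses)

clauses = [(1, -2), (-1, 3)]      # (x1 OR NOT x2) AND (NOT x1 OR x3)
print(total_cost([+1, +1, +1], clauses))  # 0.0: every clause is satisfied
```

Expanding these products and comparing them with the Lyapunov energy is how the synaptic weights of the HNN are then read off.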

RESULTS AND DISCUSSIONS
The outcomes are reported after simulating the random 2SAT logic system under various performance evaluation metrics. The Zm, MAE, BIC and ET of the models are plotted in Figures 5-8. The objective is to review the performance of the three HNN models in doing random 2SAT logic programming via simulated data. The EA and the Imperialist Competitive Algorithm (ICA) serve as the search mechanisms for generating the optimally satisfied clauses, whereas the ES relies on an exhaustive trial-and-error search during clause compliance. Once the complexity increased, HNN-R2SATES was capable of sustaining only 60 neurons. This can be attributed to the nature of the exhaustive search, which raises the computational pressure in achieving the correct neuron states. On the other hand, the ability of EA to handle a high number of neurons, from NN = 5 up to NN = 100, may be due to the sheer potential of EA's campaign and coalition mechanisms and ICA's revolution and competition operators, which reduce the computational burden in finding the optimal states. However, some neuron states became trapped at NN = 35, 45 and 95 in the HNN-R2SATICA model, as seen in Figure 4. This indicates the greater efficiency of the neuro-searching technique generated by HNN-R2SATEA in carrying out random 2SAT logic programming. As the constraints expand, the network becomes more difficult as NN rises in terms of the program's complexity. Figures 6 and 7 show the MAE and BIC evaluation of HNN-R2SATEA, HNN-R2SATES and HNN-R2SATICA in carrying out random 2SAT logic programming, recording the models' performance throughout the learning phase from NN = 5 to NN = 100. HNN-R2SATEA explicitly outshines the HNN-R2SATES and HNN-R2SATICA models on the basis of the MAE and BIC assessments. HNN-R2SATES shows a steady rise in errors due to the brute-force search for a satisfiability mapping from NN = 5 up to NN = 100. The ICA displays a growing, though irregular, error pattern, for the most part lower than that of the ES.
The efficiency of the colonial operators reduces the number of iterations, which lets ICA find global solutions better than ES. However, HNN-R2SATEA outclasses both HNN-R2SATES and HNN-R2SATICA on the MAE and BIC measures. This is because the optimization mechanisms in EA, such as the coalition in the searching process, are much simpler and do not require additional iterations to reach a satisfying assignment. In fact, non-improving solutions are enhanced by the coalition strategy during the HNN learning phase. The HNN-R2SATES model was observed to accumulate MSE during the learning stage; as a result, more iterations were needed to achieve global convergence. The accumulation of MSE tends to penalize the BIC values, so the BIC for HNN-R2SATES is the highest among the three models. In terms of the MAE and BIC assessment, EA is therefore a more acceptable approach than ES and ICA for the Hopfield network in doing random 2SAT logic programming. Figure 8 demonstrates the execution time of the proposed HNN-R2SATEA model in comparison with the HNN-R2SATICA and conventional HNN-R2SATES models. A quick glance at the running times shows that as the program becomes more complex, more effort is needed to find global solutions. According to Figure 8, the proposed HNN-R2SATEA requires less execution time than HNN-R2SATES and HNN-R2SATICA. This is because more neurons are required to cross the energy barrier and relax into global solutions throughout the training phases of HNN [24][25]. In addition, the training process employing ES consumes more execution time due to the obvious trial-and-error procedure in locating the optimal number of randomly satisfying assignments. On the contrary, the execution time was shorter with the incorporation of EA into HNN, owing to the advocacy and coalition structures that speed up the training cycle to converge to global optimality.
The coalition process prevents an individual from becoming trapped in a local minimum (an unsatisfied clause). Hence, the individuals created by EA achieved the global minima more swiftly than the ICA and ES searching methods.

Conclusion
From the simulation results, it can be ascertained that the proposed HNN-R2SATEA model is a more improved and robust heuristic than HNN-R2SATICA and HNN-R2SATES for accelerating the learning phase of HNN in carrying out a random 2SAT logic program. The proposed model outperformed the HNN-R2SATICA and HNN-R2SATES models, as established by the reported results in terms of Zm, MAE, BIC and ET; most notably, a Zm of 1 was obtained throughout the runs even as the complexity of the network increased. This leads to the conclusion that EA is much more robust in boosting the training phase of HNN for the random 2SAT logic program, staying closest to the global minimum irrespective of the NN released into the HNN. Finally, other metaheuristic approaches can be implemented in the future to accelerate the computational phase of the Hopfield neural network model.