Artificial Neural Network (ANN)-Based Voltage Stability Prediction of a Test Microgrid

The Power Grid Initiative is working to transform the conventional power grid into a smart grid in order to improve power-system operation. According to the United States Department of Energy, the modern power grid comprises components such as information management systems, communication technologies, and field devices. When these components operate optimally, power quality improves, power losses fall, and the reliability of the power supply increases. During operation, the interdependence of interconnected components raises several challenges, including load forecasting, voltage stability assessment, and power grid security. This study applies an artificial neural network (ANN) to train on and predict the nodal voltage level of the IEEE 4-bus system. Four distinct models are proposed to validate the effectiveness of the ANN approach. The results show that the ensemble of models outperforms the individual models in forecasting the nodal voltage stability of this system, attaining the highest accuracy of 98.73% and the best (lowest) mean squared error and mean absolute error of 0.0095 and 0.0141, respectively. These outcomes demonstrate the efficacy of the ANN approach, particularly in its ensemble configuration, for predicting the voltage stability of the electrical power grid.

I. INTRODUCTION
A smart grid is an electricity network that uses digital communications to detect, respond to, and ideally prevent changes in electricity consumption and other problems. Because the many possible configurations of a system directly affect its safety, ML methods have been applied to studies of power systems [4]. The power grid is an intelligent system running from the generator to the consumer and back again; integrating renewable energy sources, intelligent transportation, and intelligent distribution are different system-level applications of ML. Through a network of power lines and substations, electricity is transmitted from generators to consumers' homes, businesses, and other facilities. These targets are relevant for both grid-connected power plants and off-grid facilities in remote regions. Three commonly used machine learning models, namely random forest, support vector machine, and artificial neural network, have correctly predicted the recovery of power grid nodes [5]. Renewable energy is derived from naturally replenished, non-dispatchable sources (for example, sunlight, wind, rain, tides, waves, and geothermal heat), and its use in the power system is increasing rapidly. Alongside renewable energy, demand response is an important part of the smart power network: it refers to actions taken by electric utility customers to adjust their energy consumption so that supply and demand are more closely aligned. Fiber broadband is the best option to connect, monitor, and back up wireless networks because of its reliable transmission. Numerous networks comprise the grid, but its three primary functions are energy generation, transmission, and distribution. An example of such an energy network is depicted in Figure 1. Consumers benefit from electric grid technology because it allows resources to flow in both directions [6].
The utilisation of renewable energy (RE) sources in the energy production portfolio is increasing relative to conventional energy sources such as natural gas, oil, and coal [6]. According to [5], [6], and [7], worldwide renewable energy production capacity grew by 7.9% (171 GW) in 2018, reaching a total of 2,351 GW by the end of that year. At the end of 2018, photovoltaic (PV) solar power accounted for 20% (486 GW) of total worldwide RE production, with a capacity increase of 94 GW (+24%) [8]. Owing to substantial cost reductions in solar energy production, solar PV modules, and the competitive procurement of solar PV systems, this energy source is emerging as a potentially significant contributor to global energy production [9]. The levelized cost of electricity (LCOE) of solar PV technology has declined steadily, falling by 73% between 2010 and 2017, accompanied by improvements in energy conversion efficiency [10], [11]. To make solar PV production an effective component of the energy mix, it is essential to have prediction models capable of providing precise power production forecasts [13], [14], [15]. Electricity traders require precise short-term forecasts (e.g., one day ahead) of solar PV system production [16]. Accurate day-ahead predictions of solar PV production offer numerous practical economic advantages, including dependable operational planning, efficient generation scheduling, proactive power trading, and effective electricity market operations.
The estimation of energy production from these sources depends on stochastic weather variables, including solar radiation, wind speed, and ambient temperature, which introduce significant uncertainties into the prediction of solar power production, as reported in previous studies. Power plants, solar panels, and wind turbines are among the sources shown in Figure 1, with the various pathways (transformers, substations, and towers) leading to end-users' homes, businesses, and other institutions. The functioning of this intricate system cannot be fully represented by hand-crafted models, because consumers all over the globe and thousands of generators are connected to a power grid. Today, state-of-the-art machine learning techniques allow us to model and anticipate a system's behavior [19]. Correctly supplying electricity to homes and businesses is crucial to avoiding power outages, and power grids help ensure that little energy is wasted during transmission. To better anticipate consumer preferences, businesses have turned to machine learning and deep learning [20]. Power quality (PQ) factors must be monitored, predicted, and optimized to keep fluctuations within acceptable limits [21], so that electronic devices can continue to function even when a problem occurs in the power system. When collecting PQ data, it is impossible to account for every potential configuration of connected devices. The power system can be represented by a circuit model with a group of generators acting as a source and loads acting as a sink. The real power transfer between source and sink is incremented to obtain P-V curves. The voltage at a particular bus is influenced by the real power transferred, the reactance of the line, and the power factor of the load.
As the load increases, the voltage decreases and reaches a nose point; any further increase in load causes instability in the system. Because power output is greatly affected by weather fluctuations, an AI model is required for the true ''off-grid'' operation of systems employing an ANN [13], [14], [15]. Power quality factors must be monitored, predicted, and optimized so that their variations remain within the acceptable range, which allows electrical devices to continue operating despite a disturbance [22]. In practice, it is impossible to collect PQ data for every conceivable configuration of the dozens of grid-connected appliances. This paper provides a method for predicting the voltage stability of a 4-node star power grid network using a neural network trained on both complete and incomplete data. A traditional ANN is built to predict, from a large dataset, how stable the power grid of a four-node star network will be. To handle input variables missing because of sensor, network connection, or other system failures, sub-neural networks are proposed: the missing input data are first forecast and then used to predict the voltage stability of the system. The effectiveness of the suggested method is measured in four separate case studies, each of which lacks at least one input variable. The rest of the article is structured as follows: Section II presents related work, and Section III presents the adopted methodology. Section IV offers the discussion and a comprehensive analysis of the results, and Section V concludes the study.
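The nose-point behaviour of the P-V curve described above can be reproduced numerically. The sketch below is illustrative only and not part of this paper's experiments: it computes the upper (stable) branch of the P-V curve for a lossless two-bus system with assumed values of E = 1.0 p.u. for the sending-end voltage, X = 0.5 p.u. for the line reactance, and a 0.95 lagging load power factor.

```python
import numpy as np

E = 1.0                              # sending-end voltage (p.u.), assumed
X = 0.5                              # line reactance (p.u.), assumed
pf_angle = np.arccos(0.95)           # lagging power factor of 0.95, assumed

def receiving_voltage(P):
    """Upper (stable) branch of the P-V curve for a lossless two-bus system.

    From the power-flow equations: V^4 + (2QX - E^2)V^2 + X^2(P^2 + Q^2) = 0,
    with Q = P * tan(phi). Returns NaN past the nose point, where the
    discriminant turns negative and no real operating voltage exists.
    """
    Q = P * np.tan(pf_angle)
    a = E**2 / 2 - Q * X
    disc = a**2 - X**2 * (P**2 + Q**2)
    with np.errstate(invalid="ignore"):
        return np.sqrt(a + np.sqrt(disc))

P = np.linspace(0.0, 1.2, 500)
V = receiving_voltage(P)
nose_P = P[~np.isnan(V)][-1]         # last loading with a real solution
print(f"nose point near P = {nose_P:.3f} p.u.")
```

At zero load the receiving voltage equals E; as P grows the voltage sags until the discriminant vanishes at the nose point, beyond which the system is voltage-unstable, exactly as the text describes.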

II. LITERATURE REVIEW
Advanced measurement and detection, information and communication technologies, simulation, analysis, and control decision-making systems [16], [17], [18] contribute to the superiority of modern power systems over conventional ones. Self-healing, the consumption of renewable energy, situational awareness, information interaction, and voltage stability are all areas where the smart power grid holds significant advantages over the conventional power grid [19].
As shown in Figure 2, the growth of the electricity market and the availability of intermittent, distributed generation have greatly increased the complexity and uncertainty of power-system operation. The power grid is evolving into a cyber-physical power system as it incorporates more external systems [20] into its measurement and communication infrastructure, and it continuously produces heterogeneous, multi-source, multidimensional data. The proliferation of big data poses a new challenge to power grid management but also offers data support for investigating power grid issues. The rapid evolution of the modern power system has required the integration of components across a wide range of decentralised power grids, such as smart meters, communication infrastructure, distributed energy resources, and electric vehicles. Power networks that effectively utilise renewable energy, together with electric grids employing cutting-edge technologies such as machine learning and deep learning, are essential for a sustainable future. Virtual inertia (VI) synchronisation in a synchronised topology, providing high-frequency voltage stability with distinct active and reactive power regulation [21], allows the integration of solar photovoltaic systems into the power infrastructure. Nowadays, features are commonly extracted using artificial neural networks trained with deep learning [22]. With advances in computing technology, many machine learning methods are now applicable to power grid applications, particularly those involving data management and analysis. To function properly, such methods must be compatible with the electricity grid system, which relies on data collection, analysis, and decision-making. If vulnerabilities exist in the system, machine learning can detect them, take appropriate measures to patch them, and report back to the system experts.
With the help of ML, power grid outages (a) can be planned for and avoided; brownouts (b) can be avoided through real-time analysis and AI prediction; faults (c) can be located in the electricity grid; symmetry (d) can be restored to the grid; and gaps (e) and electricity theft (f) can be identified in the power system. The term ''deep'' refers to the fact that deep learning employs several levels of network structure. Image and video features that do not lie along a straight path can be extracted with the help of deep learning [23]. The modern deep learning framework known as the topological convolutional network (TCNN) was originally designed to model the structure of power grids and extract information effectively [22]. The deep learning approach, which makes use of many hidden layers, outperforms classical ML algorithms in accuracy and practicality. Because it is built on a deep learning algorithm, the power-system simulation model can analyse faults in depth and locate them, which makes reducing losses in the electricity grid less burdensome [24]. In this article, an artificial neural network is used to improve the accuracy of the voltage stability measure of the power grid. Many researchers have therefore focused on artificial intelligence techniques that use massive amounts of data to improve the performance of the power grid and find solutions to its problems. The various uses of AI in power grid technology can generally be grouped as follows: the human-expert-in-the-loop technique, a method for addressing particular kinds of issues; methods that predict future results from a given set of inputs by analysing historical ones; and unsupervised learning, which is used to discover similarities and differences among unlabelled data.
Reinforcement learning, in contrast to supervised and unsupervised learning, employs a strategy based on an intelligent agent that seeks to maximise the notion of cumulative reward, while ensemble methods combine the outcomes of several AI algorithms to overcome the shortcomings of any single algorithm and improve overall performance. Supervised learning, in the context of machine learning, is the process of training a model on labelled input-output pairs [25], [26], [27], [28]; this involves developing general hypotheses mapping inputs to outputs. After initial training, the mapping function can be used to make accurate forecasts on subsequent data. Over the past two decades, many supervised learning algorithms have been developed and have found widespread application in improving power grid systems.

III. METHODOLOGY
As artificial intelligence techniques are applied to load prediction, power grid assessment, fault detection, and security issues in the power grid and power system, the importance of an AI strategy for the grid is growing. Three distinct machine learning components are utilised to prepare the data and generate the graphics [26]: the standard scaler, the confusion matrix (used for performance measurement), and K-fold cross-validation. Based on input features including power consumption, generation capacity, and transmission line parameters, an artificial neural network can be trained to forecast the voltage stability of a power grid. Power-against-voltage (P-V) curve analysis is a typical technique for examining voltage stability in power systems. The P-V curve illustrates the connection between the system's power flow and the voltage levels at various buses. The P-V curve is relatively flat when the system operates at a stable voltage, indicating that a small change in power flow results in only a small change in voltage. As the system nears instability, however, the slope steepens, a sign that the system is losing the ability to control voltage. To train the ANN for voltage stability prediction, historical data on power demand, generation capacity, and transmission line parameters can be used to generate input features for the network. The output of the network can be a binary classification indicating whether the system is stable or unstable, or a continuous value indicating the margin of stability. The ANN model is used to illustrate the application of machine learning techniques to the analysis of electrical grid testing data, as shown in Figure 3. During data analysis, the scikit-learn framework operates on the previously prepared labelled input data collection.
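The two preprocessing components named above, standard scaling and K-fold cross-validation, can be sketched in plain NumPy. The paper uses scikit-learn's StandardScaler and KFold; the minimal re-implementation below only illustrates what those operations do and is not the paper's code.

```python
import numpy as np

def standardize(X):
    """Zero-mean, unit-variance scaling per feature (what a standard scaler does)."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# Tiny illustrative feature matrix (two features, four samples).
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
Xs = standardize(X)
print(Xs.mean(axis=0))   # per-feature means are (numerically) zero
```

Each fold serves once as the held-out test set while the remaining folds train the model, giving k performance estimates whose average is less sensitive to any single split.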
The data collection consists of sixty thousand observations mapped to the independent variables. A value of 1 indicates a stable observation, while a value of 0 indicates an unstable one. In the following phase, referred to as ''feature evaluation,'' plots are generated for each of the 12 features in the dataset to show their distributions and their relationships to the dependent variable ''stab.'' A graph depicting the relationship between the dependent variable and the 12 features is also included in this section.
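In the widely used simulated grid-stability data on which such studies are based, the continuous ''stab'' value is the largest real part of the system's characteristic roots, and a negative value means the system returns to equilibrium. Assuming that convention (an assumption, since the paper does not state it explicitly), the binary labelling step can be sketched as:

```python
import numpy as np

# Hypothetical 'stab' values: the largest real part of the characteristic
# roots in the simulated grid-stability data (negative = system is stable).
stab = np.array([-0.03, 0.01, -0.10, 0.05])

# Binary target: 1 = stable, 0 = unstable, as used throughout the paper.
label = (stab < 0).astype(int)
print(label)  # → [1 0 1 0]
```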
The data set was split so that the models could be trained and evaluated: for the 4-bus system described in Figure 4, the algorithm is trained on the first 54,000 data points and tested on the remaining 6,000. The predictive features of the dataset are: the nominal power produced (positive) or consumed (negative) by each network participant, a real value within the range −2.0 to −0.5 for the consumers (''consumed 2'' to ''consumed 4''); the reaction time of each network participant, a real value within the range 0.5 to 10 (node 1 corresponds to the supplier node, and nodes 2-4 to the consumer nodes); and the price elasticity coefficient of each network participant, a real value between 0.05 and 1.00 (for the supplier node and ''price elasticity coefficient 2'' to ''price elasticity coefficient 4'' for the consumer nodes). The supplier node power is given by supplied = −(consumed2 + consumed3 + consumed4), because the total power consumed equals the total power generated. The artificial neural network is made up of a collection of interconnected neurons, much like neurons in a human brain. A nonlinear function known as the ''activation function'' is used to teach the system to adapt its input values to its environment. The ANN approach developed by Kalogirou is capable of predicting the amount of energy a structure will use and of tracking renewable energy sources [17]. The output of a synthesised neuron is a nonlinear function of the sum of its inputs [18]. For artificial neurons to function, a signal (a real number) must first be received and then analysed.
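The feature ranges and the power-balance constraint just described can be mimicked with synthetic data. The sketch below is illustrative only (the column ordering and random values are assumptions, not the paper's actual dataset): it builds a 60,000-row feature matrix respecting those ranges and applies the 54,000/6,000 split.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60_000

# Consumer nominal power: real values in [-2.0, -0.5] (negative = consumed).
consumed = rng.uniform(-2.0, -0.5, size=(n, 3))      # nodes 2-4
# Supplier power balances total consumption: supplied = -(p2 + p3 + p4).
supplied = -consumed.sum(axis=1)
# Reaction times in [0.5, 10] and price elasticity coefficients in [0.05, 1.00]
# for all four participants.
tau = rng.uniform(0.5, 10.0, size=(n, 4))
g = rng.uniform(0.05, 1.00, size=(n, 4))

X = np.column_stack([supplied, consumed, tau, g])    # 12 predictive features
# First 54,000 rows for training, remaining 6,000 for testing.
X_train, X_test = X[:54_000], X[54_000:]
print(X_train.shape, X_test.shape)
```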
During learning, the weights of the neurons and edges undergo frequent changes. The voltage on Nigeria's 330 kV transmission network has become more consistent and stable as a result of using an ANN controller.
An activation unit of a single neuron in the ANN is defined by the relation given in (1):

h = \varphi\Big(\sum_{i} w_i x_i + b\Big) \qquad (1)

where x_i are the inputs, w_i are the weights, b is the bias, \varphi is the nonlinear activation function, and h is the activation of a hidden unit. For a single perceptron layer, the output can be computed using the relation in (2):

y = \varphi\Big(\sum_{i} w_i h_i^{(n)} + b\Big) \qquad (2)

where y is the output of the layer and h_i^{(n)} is the ith unit of the nth hidden layer. Equation (2) can be modified for the multilayer perceptron (MLP) and is given in (3):

h^{(k)} = \varphi\big(W^{(k)} h^{(k-1)} + b^{(k)}\big) \qquad (3)

where W^{(k)} is the weight matrix containing the weights of layer k, h^{(k)} is the activation vector of layer k, and b^{(k)} is the bias vector of layer k. Each layer in the ANN is followed by a nonlinear activation function.
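The single-neuron and multilayer relations above describe a standard MLP forward pass, which can be written directly in NumPy. The weights and inputs below are arbitrary illustrative values, not the paper's trained parameters, and ReLU is applied at every layer purely to match the equations (the paper's output stage is a binary classification head).

```python
import numpy as np

def relu(z):
    """Nonlinear activation phi: max(0, z)."""
    return np.maximum(0.0, z)

def neuron(x, w, b):
    """Single-neuron activation: h = phi(sum_i w_i x_i + b)."""
    return relu(np.dot(w, x) + b)

def mlp_forward(x, weights, biases):
    """Layer-by-layer MLP pass: h(k) = phi(W(k) h(k-1) + b(k))."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h

# Arbitrary illustrative parameters: 3 inputs -> 2 hidden units -> 1 output.
x = np.array([1.0, -2.0, 0.5])
W1 = np.array([[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]])
b1 = np.array([0.1, -0.1])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.0])
print(mlp_forward(x, [W1, W2], [b1, b2]))  # → [0.7]
```

The second hidden unit's pre-activation is negative, so ReLU zeroes it out; only the first unit contributes to the output, illustrating how the activation function introduces nonlinearity.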
In this study, we use the rectified linear unit (ReLU), given by (4):

f(x) = \max(0, x) \qquad (4)

In Figure 5, which shows the architecture of the ANN model, each circular node represents an artificial neuron and each arrow represents a connection from the output of one artificial neuron to the input of another. With the ReLU (rectified linear unit) function, the output equals the input when the input is positive and is zero otherwise. ReLU has been used as the activation function in each of the hidden layers to obtain better performance. We use binary classification, where ''0'' means ''unstable'' and ''1'' means ''stable.'' In this work, we utilised three multilayer perceptron (MLP) models, each with a different number of layers; the table below gives the details and number of parameters of all three networks. Each model was trained on the dataset and tested individually, and the outputs were then combined in an ensemble to produce an optimal predictive model. Performance evaluation metrics for binary classification, namely accuracy, mean squared error (MSE), and mean absolute error (MAE), have been used. The accuracy of binary classification is given as:

\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

The MSE for binary classification is given as:

\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2

where y_i is the ith observed value, \hat{y}_i is the predicted value, and n is the number of observations. The MAE for binary classification is given as:

\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} |x_i - x|

where n is the number of errors, |x_i - x| is the absolute error, x_i is the predicted value, and x is the true value.
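The three metrics just defined can be computed directly. This is a minimal sketch with illustrative labels and model scores, not the paper's experimental values:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    return np.mean(y_true == y_pred)

def mse(y_true, y_score):
    """Mean squared error between labels and predicted scores."""
    return np.mean((y_true - y_score) ** 2)

def mae(y_true, y_score):
    """Mean absolute error between labels and predicted scores."""
    return np.mean(np.abs(y_true - y_score))

# Illustrative ground truth (1 = stable, 0 = unstable) and model scores.
y_true = np.array([1, 0, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.6, 0.8, 0.1])
y_pred = (y_score >= 0.5).astype(int)      # threshold at 0.5

print(accuracy(y_true, y_pred))  # → 1.0
print(mse(y_true, y_score))
print(mae(y_true, y_score))
```

Accuracy is computed on the thresholded class labels, while MSE and MAE are computed on the continuous scores, which is why a model can reach perfect accuracy yet still show nonzero error values, as in Tables 1 and 2.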

IV. RESULTS AND DISCUSSION
The experimental implementation of the proposed neural network was conducted in the Python programming language within a Jupyter Notebook environment, on a personal computer (PC) equipped with an Intel Core i7 processor clocked at 2.2 GHz, running the ''Microsoft Windows 10'' operating system, and possessing 16 GB of memory. We tested the cross-correlation of all the features available in the dataset; as the figure below shows, there is a very strong correlation between the features, so we fed all of them into the MLP network. The data set was divided in an 80-20 format, where 80% of the data was assigned to training and 20% to testing. All features were fed to each model, and each model was tested individually. Table 1 shows the results of each model when tested individually. As Table 2 illustrates, Model 1 achieves the lowest training and inference times, owing to its smaller number of parameters. Figure 7 shows the total computing cost of testing and training the three suggested models and their ensemble learning technique. In this empirical study, we employ a soft voting ensemble as our ensembling method. The results demonstrate that training and inference time are proportional to the total number of parameters in the model: the time required to train and make inferences is shortest for Model 1, which has the fewest parameters, and longest for Model 3, which has the most. The soft voting ensemble method combines the weighted outputs of all the models in use, and the voting score combination carries the most expensive computations. Model 1 achieved an accuracy, MSE, and MAE of 98.05%, 0.0151, and 0.0258, respectively.
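A soft voting ensemble averages the class-probability outputs of the individual models before thresholding, rather than counting hard votes. The sketch below uses hypothetical stability probabilities from three models, not the paper's actual predictions:

```python
import numpy as np

def soft_vote(prob_list):
    """Average the class-probability outputs of several models (soft voting)."""
    return np.mean(prob_list, axis=0)

# Hypothetical stability probabilities from three trained MLPs for 4 samples.
p1 = np.array([0.90, 0.40, 0.70, 0.20])
p2 = np.array([0.80, 0.55, 0.60, 0.30])
p3 = np.array([0.85, 0.45, 0.80, 0.10])

avg = soft_vote([p1, p2, p3])
labels = (avg >= 0.5).astype(int)   # 1 = stable, 0 = unstable
print(avg, labels)
```

Note the second sample: one model votes ''stable'' (0.55) but the averaged probability stays below 0.5, so the ensemble outputs ''unstable.'' Averaging probabilities lets a confident minority be outweighed, which is one reason the ensemble can outperform each model individually, at the cost of running all three models.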
Model 2 performed worse than all the other models, mainly due to its large number of parameters, achieving an accuracy of 97.94%. Its training and inference times, 329.869 s and 0.381 s, respectively, are slightly longer than those of Model 1.
Model 3 has the worst training and inference times of the three individual models, in contrast to Model 1 and Model 2; however, it achieved an accuracy of 98.10%, with an MSE of 0.0137 and an MAE of 0.0211, making it the best of the individually trained models, as shown in Figure 7. When the three models were combined into the ensemble, the ensemble provided higher accuracy than the remaining models, as shown in Figure 8. The ensemble of models achieved the highest accuracy despite a significant increase in training and inference time.
Figure 9 provides visuals for the other two critical performance assessment metrics: mean absolute error (MAE) and mean squared error (MSE). The overall MSE produced by the suggested models and the ensembling technique differs slightly from the MAE. Figure 9 also displays the overall accuracy of all the employed models, demonstrating the efficacy of the soft voting ensemble approach: Model 3 achieved 98.10% accuracy, but the ensembling algorithm achieved 98.73%, making it the most accurate overall. Model 2, which has more parameters but fewer layers than Model 1 (98.05% accuracy), achieved the lowest accuracy of 97.94%. It is therefore clear that the standard soft voting ensemble not only increases the computational cost over the individual models but also significantly improves their accuracy. When the three models were combined into the ensemble, it provided the best (highest) accuracy of 98.73% compared with the remaining models, as shown in Figures 8 and 9. At the same time, the ensemble model had the worst training and inference times, at 722.561 s and 0.567 s, respectively. Furthermore, the ensemble obtained the best (lowest) MSE and MAE values of 0.0095 and 0.0141, respectively, compared to the other models.

V. CONCLUSION
In this paper, an artificial neural network has been employed to improve the voltage stability level of the IEEE 4-bus system. Various indicators, including training and inference time, accuracy, mean squared error, and mean absolute error, were utilized to evaluate the effectiveness of the four proposed ANN models. The results indicate that the ensemble model provided the highest accuracy of 98.73%, along with the lowest mean squared error (0.0095) and mean absolute error (0.0141) compared to the other models. However, the training and inference times of the ensemble model were slightly higher than those of the remaining models. In the near future, an improved version of the ANN ensemble model will be developed by optimizing its setting parameters to reduce the training and inference times. This improved version will also be applicable to enhancing the voltage stability of an autonomous (isolated) microgrid.