Deep learning-based energy efficiency and power consumption modeling for optical massive MIMO systems

The fifth generation (5G) of wireless communication is a promising and active research area. Massive Multiple-Input Multiple-Output (MIMO) plays an influential role in delivering game-changing gains in area throughput and energy efficiency (EE). Improving EE is one of the simplest and most cost-effective ways to combat climate change, reduce energy costs for consumers, and improve the competitiveness of businesses. Deep Learning (DL) can significantly improve area throughput and EE, and it plays a crucial role in 5G wireless communication systems. Optical systems are closely related, since their optical components allow more accurate operation. To assess the overall power usage in uplink and downlink communications, a power dissipation model is introduced. The proposed model incorporates the overall power used by the base station (BS) power amplifier and circuit components as well as single-antenna user equipment (UE). In this paper, the EE and power consumption of massive MIMO systems are estimated using a Convolutional Neural Network hybridized with Long Short-Term Memory cells (CNNLSTM). This model is proposed to overcome high complexity and overfitting by replacing the inner dense connections with convolutional layers, resulting in improved model performance. Different linear processing schemes are applied for detection and precoding, namely Minimum Mean Squared Error (MMSE), Zero-Forcing (ZF), and Maximum Ratio Transmission/Maximum Ratio Combining (MRT/MRC). These schemes are used to train the proposed CNNLSTM. For EE performance, the results improve by 12.8% when using ZF (perfect CSI), and the system outperforms the other schemes by 10%, 10.44%, and 12.05% when using MRT, ZF (imperfect CSI), and MMSE, respectively. The obtained results also reveal that an improvement of 7.5% is achieved when using MRT.
For average power consumption per antenna using the CNNLSTM model, MRT outperforms the other schemes, ZF (perfect CSI), ZF (imperfect CSI), and MMSE, by 6.5%, 5%, and 5%, respectively. For area throughput performance, an improvement of 7.5% is achieved when using MRT, which outperforms the other schemes, ZF (perfect CSI), ZF (imperfect CSI), and MMSE, by 5.2%, 5%, and 5.2%, respectively.


Introduction
Building on the abstract, we introduce our proposal with a brief literature review to highlight the difference between this work and previous works. For the 3G and 4G standards, the BS permits up to 8 antennas. In 5G, the increased number of BS antennas raises the transmission data rate to several Gbps. Several studies have aimed to increase data rates and decrease complexity through optical systems (Shawky et al. 2020; Shawky et al. Oct. 2018; Shawky et al. 2018; Salama et al. 2022, vol. 54:584).
These enhancements are expected to be achieved through massive MIMO, a multicarrier technique. The BS has M antennas, and each BS connects with N UEs simultaneously. The definition of massive MIMO is given in (Arshad et al. 2017), with the ratio M/N >> 1. Each BS processes its signals independently using linear transmit precoding and receive combining. The massive MIMO system has many advantages over conventional MIMO systems, such as low power consumption due to the enlarged antenna aperture, and improved network capacity.
A massive MIMO network needs a realistic model of the total power consumed during uplink and downlink communications, as this plays a vital role in achieving optimal area throughput and maximum EE, as shown in (Asif et al. 2020).
Nowadays, two major machine learning (ML) branches are widely utilized to simulate the behavior of the 5G network before making informed judgments or forecasting the outcomes of complicated situations; this has a special bearing on EE (Abdellah et al. 2020). From the preceding considerations, it is clear that reducing 5G energy consumption is a significant issue that heavily depends on intricate BS and user equipment (UE) distributions, fluctuating traffic needs and wireless channels, and covert network trade-offs. To tailor the 5G network configuration, including MIMO (Tanveer et al. 2022), a lean carrier, and 5G sleep modes, and to address UE-specific communication needs with the lowest possible energy consumption, it is crucial to understand and predict UE behaviors and requirements, as well as their evolution in time and space. Additionally, as the 5G ecosystem gathers and makes accessible bigger data sets relating to these difficulties, the present promising ML findings are expected to improve over time. In reality, the amount of training data significantly affects how well ML algorithms work (Shehzad et al. 2022), i.e., more data typically yields better performance. EE optimization heavily depends on the accuracy of the adopted models. Unfortunately, many current models, primarily the theoretical ones, are rigid and incapable of adapting to particular channel characteristics, enabling technologies, or environmental changes, which may result in a sizable theory-to-practice gap. Instead, data-driven optimization could bridge this gap by using Artificial Intelligence (AI) to understand the actual status of the network and deduce the best operational strategies for mobile networks (Nguyen et al. 2022).
Deep Learning (DL) technology, which can automatically learn complex features from complex data structures, has been successfully applied in many fields, including signal classification (Peng et al. 2018), modulation recognition (O'Shea et al. 2016), sentiment analysis (Song et al. 2018; Chen et al. 2020), channel estimation (Soltani et al. 2019), and image processing (Ngo et al. 2021). The quantization and feedback of MIMO channels have seen substantial advancements recently, and certain channel state (CS) technology issues have been resolved thanks to the DL approach. The CsiNet technique, suggested by (Z. Lu et al. 2020), treats the MIMO channel matrix as an image and uses a recent image-processing network design for channel state information (CSI) compression and feedback.
The accuracy of CSI reconstruction at low compression ratios (CR) has recently been improved by Song et al. (2021) by employing several parallel networks (Zhang et al. 2020). However, the network training parameters in (Bi et al. 2022) are costly, and in the conventional Long Short-Term Memory (LSTM) technique the produced features contribute equally to the reconstruction of the final CSI, which is counterproductive for improving CSI reconstruction performance.
This paper focuses on the optimum usage of energy, i.e., energy efficiency, to decrease the total power consumed by the antennas in the massive MIMO system. The work builds mainly on the results of (Asif et al. 2020). Figure 1 shows the UE distribution in a single-cell uplink/downlink scenario: an M-antenna BS and N single-antenna UEs. The model investigates the impact of M, N, and the total power on the EE of a massive MIMO architecture utilizing the linear processing schemes MMSE, MRT/MRC, and ZF. The main aim is to improve the EE and increase the area throughput by applying the CNN hybrid with LSTM model, which mitigates the complexity and overfitting problems. In the proposed model, the data obtained from stage one is divided into training and testing sets to judge the model performance. The methods used in this work are similar to those used in optical systems (Shawky et al. 2020; Shawky et al. Oct. 2018; Shawky et al. 2018; Salama et al. 2022, vol. 54:584).
The main contributions are summarized as follows.
i) When using ZF (perfect CSI), our results outperform the other schemes (MRT, ZF (imperfect CSI), and MMSE) for EE performance.
ii) For saving power consumption and optimum area throughput, the MRT scheme achieves the best improvement over the other schemes (ZF (perfect CSI), ZF (imperfect CSI), and MMSE).
iii) Both the MMSE and ZF (perfect CSI) schemes yield the maximum EE gain. However, ZF is preferable to MMSE and MRT because of its lower complexity and better interference mitigation.
iv) Both the total circuit power and the transmit power increase with the number of BS antennas.
v) A BS with hundreds of antennas is a highly efficient way to serve UEs with optimal EE and area throughput.
The remainder of the paper is structured as follows. The methodologies used in the paper are listed in Sect. 2. Results and analysis based on simulation and assessment parameters are presented in Sect. 3. Section 4 concludes the findings and suggests directions for future work.

Dataset
The data entered for training into the CNN and LSTM is based on the results of (Asif et al. 2020). The datasets are collected from those results and are divided into three tables. The datasets cover three issues: EE performance, area throughput in a single cell, and the average power consumed per antenna (Watts). To evaluate EE performance, 9 datasets are used for each of the 4 techniques, MRT (perfect CSI), ZF (imperfect CSI), ZF (perfect CSI), and MMSE (perfect CSI), giving a total of 36 datasets for EE performance. The same is done for the area throughput and the average power consumed in a single cell. Table 1 shows the relation between M and EE for the different schemes of the massive MIMO architecture based on MMSE, MRT/MRC, ZF, and perfect CSI. Likewise, the relations of area throughput and power consumption with the total number of BS antennas, from Tables 2 and 3, respectively, provide the data entered into the CNN and LSTM for training and testing to improve the EE.
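As a minimal sketch of the data-preparation step described above, the collected (M, EE) pairs per scheme can be shuffled and divided into training and testing subsets. The values, split ratio, and seed below are illustrative assumptions, not the paper's actual tabulated data:

```python
import random

# Hypothetical (M, EE) pairs per scheme; the real values come from Tables 1-3.
dataset = {
    "MRT":  [(m, 10.0 + 0.05 * m) for m in range(20, 200, 20)],   # 9 samples
    "ZF":   [(m, 12.0 + 0.06 * m) for m in range(20, 200, 20)],
    "MMSE": [(m, 12.5 + 0.06 * m) for m in range(20, 200, 20)],
}

def train_test_split(samples, test_ratio=0.2, seed=0):
    """Shuffle one scheme's samples and split them into train/test subsets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_ratio))
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(dataset["ZF"])   # 8 training pairs, 1 test pair
```

With only 9 samples per scheme, a fixed seed keeps the split reproducible so that model comparisons across schemes are fair.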

Proposed convolutional neural network hybrid with LSTM
There is an urgent need for more adaptable and flexible solutions to reduce the energy consumption of wireless networks, since these networks are complicated and stochastic. To solve various problems, pre-trained models are trained on a large benchmark dataset. For the EE and power consumption in this study, the CNNLSTM model is utilized to overcome high complexity and overfitting, leading to improved throughput gain and energy efficiency and reduced power consumption.

CNN model
The CNN is a neural network whose layers may share parameters. A CNN is made up of several layers, each of which transforms one volume into another through a differentiable function (Alzubaidi et al. 2021; Abdeltawab et al. 2019). Several kinds of layers are utilized in a CNN. The input layer stores the raw image data, while the convolution layer uses a 5 × 5 kernel to compute the output volume by executing a dot product between each filter and the image patches. The output of the convolutional layer is then passed elementwise through the activation function layer. Moreover, the pooling layer is responsible for decreasing the volume and boosting computational efficiency, with the primary goal of preventing any form of overfitting.
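The convolution step described above (a dot product of a kernel with image patches, followed by pooling) can be sketched in plain Python. The 8 × 8 input and the uniform 5 × 5 averaging kernel are illustrative assumptions, not the model's trained filters:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and take
    the dot product with each patch (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: shrink the feature map to cut computation."""
    return [[max(feature_map[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(feature_map[0]) - size + 1, size)]
            for i in range(0, len(feature_map) - size + 1, size)]

image = [[float(i + j) for j in range(8)] for i in range(8)]  # toy 8x8 "image"
kernel = [[1.0 / 25.0] * 5 for _ in range(5)]                 # 5x5 averaging filter
fmap = conv2d_valid(image, kernel)   # 4x4 feature map (8 - 5 + 1 = 4)
pooled = max_pool(fmap)              # 2x2 map after 2x2 pooling
```

Note how the 5 × 5 kernel shrinks an 8 × 8 input to 4 × 4, and pooling halves that again; this volume reduction is what the pooling layer contributes.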

LSTM model
The LSTM can retain information over long periods of time (Awan et al. 2020). Operating on time-series data, it is used for planning, categorization, and prediction. Four neural networks and memory units called cells make up the chain structure of the LSTM. The cells are in charge of information retention, while the gates are in charge of memory modification. Information that is no longer helpful is deleted by the forget gate. The input gate is in charge of adding pertinent data to the cell. Meaningful information is extracted from the cells by the output gate. The LSTM structure is described mathematically in (Awan et al. 2020), where i_t, f_t, o_t, c_t, and h_t denote the input gate, forget gate, output gate, cell state, and hidden vector, respectively; W_ix, W_fx, W_fh, W_fc, W_ox, W_oh, and W_oc are the weight matrices; b_i, b_f, b_c, and b_o are the biases of the LSTM obtained during training; and σ denotes the sigmoid function, with ⊙ denoting elementwise multiplication.
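A single LSTM time step with the three gates described above can be sketched in plain Python. The scalar state and hand-picked weights are illustrative assumptions; real implementations use vectors and trained weight matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step: the forget gate drops stale memory, the input
    gate admits new information, and the output gate exposes part of the
    cell state as the hidden value."""
    f = sigmoid(w["fx"] * x + w["fh"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["ix"] * x + w["ih"] * h_prev + w["bi"])    # input gate
    g = math.tanh(w["cx"] * x + w["ch"] * h_prev + w["bc"])  # candidate cell
    o = sigmoid(w["ox"] * x + w["oh"] * h_prev + w["bo"])    # output gate
    c = f * c_prev + i * g       # updated cell state (long-term memory)
    h = o * math.tanh(c)         # updated hidden state (short-term output)
    return h, c

# Illustrative weights and biases (not trained values)
w = dict(fx=0.5, fh=0.1, bf=0.0, ix=0.6, ih=0.2, bi=0.0,
         cx=0.9, ch=0.1, bc=0.0, ox=0.7, oh=0.1, bo=0.0)

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:      # toy input sequence
    h, c = lstm_step(x, h, c, w)
```

Because the hidden output is the product of a sigmoid gate and a tanh of the cell state, it always stays in (-1, 1), which is what keeps the recurrence stable over long sequences.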

CNN hybrid with LSTM mode
The CNNLSTM model significantly reduces the number of parameters and increases the ability to capture information. CNNs are well suited to image classification problems, and they can be applied to network traffic data by treating it as an image, since the geographic space is represented as a matrix and the traffic distribution is given by the matrix elements. The suggested model combines an LSTM with a CNN. This model makes use of convolutional layers, which are useful for capturing information. Due to the simultaneous convolution processing in the suggested structure, our approach also reduces cost. Dropout is also used to avoid overfitting, which enhances the performance of the model. The CNNLSTM parameters are described in Table 4, and Fig. 2 shows the proposed model's block diagram.
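The hybrid idea (convolutional feature extraction feeding a recurrent memory, with dropout for regularization) can be sketched end-to-end in plain Python. The filter, recurrence weights, dropout rate, and input signal are illustrative assumptions rather than the paper's trained parameters, and the recurrent readout is a minimal stand-in for the full LSTM stage:

```python
import math, random

def conv1d(signal, kernel):
    """Valid 1D convolution: the CNN front end extracting local features."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def dropout(values, rate, rng):
    """Inverted dropout: randomly zero features (and rescale the rest)
    during training to curb overfitting."""
    return [0.0 if rng.random() < rate else v / (1.0 - rate) for v in values]

def recurrent_readout(features, w_in=0.8, w_rec=0.3):
    """A minimal recurrent pass (stand-in for the LSTM stage): each
    feature updates a bounded hidden state in sequence."""
    h = 0.0
    for f in features:
        h = math.tanh(w_in * f + w_rec * h)
    return h

rng = random.Random(42)
signal = [math.sin(0.4 * t) for t in range(16)]   # toy input sequence
features = conv1d(signal, [0.25, 0.5, 0.25])      # smoothing conv filter
features = dropout(features, rate=0.2, rng=rng)   # regularization
prediction = recurrent_readout(features)          # scalar output in (-1, 1)
```

The convolution compresses the 16-sample input into 14 local features before the recurrence runs, which is the parameter and cost saving the hybrid structure exploits compared with feeding raw samples into dense recurrent connections.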

Results and discussion
This research focuses on improving the performance of massive MIMO in three directions: EE performance, average power consumed per antenna, and area throughput in a single cell. The results are mainly compared with those found in (Asif et al. 2020), where R.M. Asif et al. used four techniques to estimate the performance with traditional algorithms: MRT (perfect CSI), ZF (imperfect CSI), ZF (perfect CSI), and MMSE (perfect CSI). The enhancement in this paper is achieved by utilizing DL with the CNN hybrid with LSTM model, which achieves better results than those illustrated in (Asif et al. 2020). The simulation results and a discussion of the predetermined model are presented in Table 5. Table 5 is analyzed in Fig. 3, where the percentage improvement of our results over those in (Asif et al. 2020) for the optimum EE versus M is ~10% for MRT at M = 60, 10.44% for ZF (imperfect CSI) at M = 140, 12.8% for ZF (perfect CSI) at M = 120, and 12.05% for MMSE. This means that the largest improvement in EE is achieved when using ZF (perfect CSI) with the CNN hybrid with LSTM model. Table 6 summarizes the average power consumed in the different scenarios: MRT (perfect CSI), ZF (imperfect CSI), ZF (perfect CSI), and MMSE (perfect CSI). Table 6 is analyzed in Figs. 4 and 5, where the percentage improvement of our results compared to (Shawky et al. 2018) for the average power consumed versus M is ~7.5% for MRT, 5% for ZF (imperfect CSI) at M = 20, 6.5% for ZF (perfect CSI), and 5% for MMSE at M = 20. Thus, the largest power saving, 7.5%, is obtained using MRT with the CNN hybrid with LSTM model. The values of area throughput in a single cell based on the different strategies, MRT (perfect CSI), ZF (imperfect CSI), ZF (perfect CSI), and MMSE (perfect CSI), are summarized in Table 7.
Table 7 is analyzed in Fig. 6, where the percentage improvement of our results compared to (Asif et al. 2020) for the optimum area throughput is ~7.5% for MRT at M = 80, 5% for ZF (imperfect CSI) at M = 60, and 5.2% for ZF (perfect CSI) at M = 20, which is the same as for MMSE. Hence, the largest improvement in area throughput is achieved when using MRT with the CNN hybrid with LSTM model.
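The improvement percentages quoted throughout this section follow the usual relative-change computation. A minimal sketch, with hypothetical baseline and proposed EE values rather than the paper's tabulated data:

```python
def improvement_pct(baseline, proposed):
    """Relative improvement of a proposed metric over a baseline, in percent."""
    return (proposed - baseline) / baseline * 100.0

# Hypothetical EE values for one scheme at one value of M (illustrative only)
baseline_ee = 20.0   # traditional-algorithm result
proposed_ee = 22.0   # CNNLSTM result
gain = improvement_pct(baseline_ee, proposed_ee)   # 10.0 percent
```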
It is observed from the results that the CNN-LSTM achieves better performance than the LSTM. Figures 3, 4, 5 and 6 illustrate the best results of our proposed framework.

Conclusion
This paper focuses on using the CNN hybrid with LSTM technique to improve both EE and area throughput performance. Using training and testing steps, the obtained results show an improvement in EE performance of ~12.8% when using ZF (perfect CSI), which outperforms the other schemes, where MRT, ZF (imperfect CSI), and MMSE achieve 10%, 10.44%, and 12.05% improvement, respectively. For the average power consumption per antenna using the CNN hybrid with LSTM technique, an improvement of 7.5% is obtained using MRT, which outperforms the other schemes, where ZF (perfect CSI), ZF (imperfect CSI), and MMSE give 6.5%, 5%, and 5% improvement, respectively. Compared to the work of (Asif et al. 2020), our results improve the optimum area throughput per antenna by ~7.5% for MRT at M = 80, 5% for ZF (imperfect CSI) at M = 60, and 5.2% for ZF (perfect CSI) at M = 20, the same as for MMSE (5.2%). This means that the largest improvement in area throughput occurs when using MRT with the CNN hybrid with LSTM model. We also compared our results to other references in the same field (Rusek et al. 2013; Younas et al. 2018). Relative to the results in (Rusek et al. 2013), utilizing ZF (imperfect CSI) improves the average power consumed by nearly 20%, which can be enhanced to 21.5% if ZF (perfect CSI) is used. With respect to the results in (Younas et al. 2018), nearly 16% improvement in EE is achieved by utilizing MMSE (perfect CSI).