Article

Deep Learning Models Applied to Prediction of 5G Technology Adoption

by
Ikhlas Fuad Zamzami
Management Information System Department, College of Business Rabigh, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Appl. Sci. 2023, 13(1), 119; https://doi.org/10.3390/app13010119
Submission received: 3 November 2022 / Revised: 27 November 2022 / Accepted: 29 November 2022 / Published: 22 December 2022
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

The issue addressed by this research is the public’s scepticism about the benefits of adopting 5G technology. Some have gone so far as to claim that the technology can be harmful to people, while others are still looking for reassurance. It is therefore crucial to understand the primary factors that will affect the spread of 5G networks. The method used here relies heavily on deep learning algorithms. Channel metrics, context metrics, cell metrics, and throughput data are the conceptualized variables that serve as the primary indicators for determining the adoption of 5G technology. Three deep learning models were applied: deep reinforcement learning (DR), long short-term memory (LSTM), and a convolutional neural network (CNN). The results show that the DR and CNN models are the most effective at predicting the elements that would affect 5G adoption. Although the LSTM models appear to achieve high accuracy, the quality of their outputs is poor. The logical inferences drawn from these findings show that the DR and CNN models can be applied in practice to the problem of predicting the rate at which 5G will be adopted with a high degree of accuracy. The novelty of this study lies in its emphasis on using channel metrics, context metrics, cell metrics, and throughput data to focus on predictions for the development of 5G networks themselves and on the generation of the elements that determine the adoption of 5G. Previous efforts in the literature have not established methods for adopting 5G technology based on the criteria considered in this study; this research therefore fills that gap.

1. Introduction

5G is the name used to refer to the fifth-generation technological standard for broadband across various categories of networks. It offers speeds up to one hundred times faster than 4G, enabling opportunities for businesses and organisations that have never been available before [1]. South Korea became the first country in the world to offer mobile 5G services for commercial use in 2019 [2]. Many industry stakeholders and analysts believe that the transition from 4G to 5G networks, which is currently underway, will constitute a significant technological advancement and shift the market for broadband services [3]. It is difficult to predict what the future holds for 5G service because the technology has not yet matured to its full potential [4]. Businesses need research findings on the emerging 5G mobile industry to inform long-term strategic plans so that they can make the most of the enormous opportunity it presents. It is therefore critical to conduct extensive research into the general demand for 5G services, customer preferences, and the impact of using 5G technology.
What is already known in the open literature [4,5,6,7,8,9] is that, as 4G networks continue to expand, they will eventually be unable to meet the ever-increasing demand for high-speed broadband service. Active network investment and high service uptake for 4G are widespread, confirming the earlier prediction obtained using a multi-generational diffusion model [5]. As a result, it is reasonable to conclude that a network administrator should not place excessive weight on adoption predictions alone. The inability of telecommunications industry oversight to properly evaluate implementation on a ground-share basis, an overabundance of competitors, and inadequate network service allocation have all contributed to the problems that have recently surfaced [6]. Inadequate network service allocation, an excessive number of rivals, and insufficient network management are among these issues [7].
What the literature lacks is the ability to predict the adoption of 5G technology in advance of its eventual implementation for 5G mobile. Because the world is moving closer to this deployment, a deep learning model is required as a prediction model for the rate at which 5G technology will be adopted, and this needs to be carried out as soon as possible. How this can be accomplished can be informed by past examples of how different parts of the world handled the rollout of 4G. For the transition from 4G to 5G to be successful, and for the 5G implementation to have any influence over user behaviour, prediction evaluations must become more efficient, which is why deep learning techniques are adopted for this study. The Ultra-Broadband Forum 2022 is planning to release five heavyweight industry white papers that provide insight into the future of ultra-broadband over the next five years, in order to promote and drive the ultra-broadband industry’s adoption of 5G and to discuss the strategic direction, requirements, and challenges of the industry [8]. Insight into the ultra-broadband trajectory over the next five years can be gleaned from these papers, which is useful for making educated estimates of the rate of 5G adoption. The study adopts deep learning algorithms as its key prediction paradigm, since they can be used to construct a model that reacts to changes in any amount of data; the ever-increasing volume of data over time was a further motivation. This paves the way for improved and more efficient performance predictions of the adoption of any technology, including 5G, using models generated by artificial intelligence techniques. Although the technology acceptance model (TAM) has been the most successful theory for modelling the adoption of technology by consumers, it is based on subjective judgments and assigns little weight to the particulars of the technology in question [9].
Deep learning is most commonly applied to problems whose potential solutions can be derived from massive amounts of data, and its extensive use for prediction has earned it a well-deserved reputation for the quality of the models it generates. The three most commonly used deep learning models are DR, LSTM, and CNN [10]. They are becoming increasingly popular as a result of the flaws present in the machine learning models developed up until this point [11]. Those machine learning models tend to struggle with overfitting, so their accuracy degrades as more data are added [12]. Deep learning methods, on the other hand, are most successful precisely when the dataset is extensive [13]. This is a further justification for adopting deep learning for the prediction of 5G technology adoption in the current study.
  • The most important contribution of this work is the application of deep learning to define what the deployment of 5G will require, how readily predictions about the adoption of 5G can be made, and what role each component plays in those forecasts. By highlighting deep learning’s potential to predict the adoption of 5G, the components for doing so, the contribution each component makes, and the implementation of 5G, this study extends the body of work on the potential applications of deep learning into the area of 5G adoption.
  • Another significant component of the contribution consists of the logical inferences obtained from the prediction models. These inferences show that the DR model and the CNN model can be applied in practice to the problem of predicting the rate at which 5G will be adopted with a high degree of accuracy. This suggests that organizations that plan to invest in 5G, or are considering doing so, may find it easier to make predictions now that this research has provided them with the relevant variables and models.
  • The final contribution is a novel deep learning approach that emphasises channel metrics, context metrics, cell metrics, and throughput data in order to predict the development of 5G networks themselves as well as the generation of the elements that determine the adoption of 5G. This indicates that the network capability of 5G is of the utmost importance and should be the primary focus of efforts directed toward the adoption of the technology.
The remainder of the paper consists of the following: the second section presents the related work, the third section presents the research methodology, the fourth section presents the results and discussion, and the fifth section concludes the work.

2. Related Work

Numerous empirical research studies linked with 5G technology and its implementation have been carried out in the past. According to Jahng and Park [14], it is crucial to obtain an accurate forecast of the potential size of the new mobile market that will capitalise on the potential benefits of 5G technology. In order to better comprehend the development of 5G services, their study presents a customer adoption model based on system dynamics paired with an agent-based model that considers 5G adoption estimates under three possible scenarios. One of the most significant findings is that the initial rate of acceptance for 5G is higher than that for 4G. Considering customer preferences and acquisition delay behaviour, Maeng et al. [5] assessed the accuracy of prediction for the 5G service industry. The study found that customer preference and purchase delay behaviour are key to the demand for 5G services, and that consumers exhibit a significant degree of heterogeneity regarding the characteristics of 5G services. As the study also establishes, understanding mobile communication service customers, who are anticipated to be crucial initial users for the creation and diffusion of 5G services, is essential at a time when 5G commercialization is in its infancy. The study therefore concludes that data transmission rates and data offers are crucial for 5G spread, and that price and a lack of perceived necessity are the key issues delaying the acquisition of 5G services; indeed, the adoption rate of 5G drops by an additional 50% when the purchasing delay is taken into account.
Deep learning algorithms have been used in a variety of different applications of 5G technology. In order to efficiently detect vulnerabilities in a 5G mobile wireless network and to gain users’ confidence in adopting 5G technology, Maimó et al. [15] present a deep learning anomaly detection method for network flows. The study used a deep belief network (DBN) and a long short-term memory (LSTM)-based anomaly prediction method to analyse the network data in real time, and the findings reveal that an alert was generated once a flaw was detected in the network’s traffic. In a similar approach, Thantharate et al. [16] utilised a dense deep neural network to detect and eliminate security threats before they attack the 5G core wireless network. The proposed model has the ability to sell network slices as a service, serving different services on a single infrastructure that is reliable and highly secure.
Abiko et al. [17] propose deep reinforcement learning for allocating 5G radio resources to meet slice number-independent service requirements. Similarly, Shahriari et al. [18] use deep reinforcement learning for the prediction of the adoption of a generic 5G online learning system. Other models that are used in the previous research work are associated with CNN and LSTM, where Huang et al. [19] propose CNN-LSTM multitasking for 5G mobile network traffic estimation. Furthermore, CNN-based LeNet-5 was proposed by Alhazmi et al. [20] to detect cellular signals. LeNet-5 successfully identifies 5G, 3G, and LTE signals. Li et al. [21] mapped 5G on-demand service function chaining using an adaptive deep Q-learning network. A low-complexity heuristic service function chaining mapping algorithm helps agents make customer-specific decisions. Luo et al. [22] propose dynamic transmission power regulation using CNN and deep Q-learning for 5G non-line-of-sight transmission. Godala et al. [23] utilised CNN for 5G mobile network radio state channel information estimation. Klus et al. [24] recommend the use of CNN for user position estimation in 5G technology. Doan and Zhang [25] utilised CNN to detect 5G mobile wireless network difficulties. Razaak et al. [26] adopted a deep generative adversarial network-based model for image processing using 5G wireless networks. Dai et al. [27] use DRL for designing a caching solution for 5G mobile networks and beyond. Gante et al. [28] propose a temporal CNN for 5G mobile network millimetre wave location.
Despite 5G’s impressive security architecture for wireless network communications, previous research has reported an important finding: it is not enough to thwart traffic analysis attacks. This finding has motivated researchers to focus their attention on the traffic analysis defence domain within 5G networks, where it is crucial to implement defence solutions that are both strong and efficient. The goal of the proposal by Abolfathi et al. [29] is to determine how real and false packets should be dispersed across several channels of varying capacity in order to best protect networks from traffic analysis attacks. The research presents the issue as a zero-sum game and demonstrates how the optimal defence distributes real and phoney packets across those channels. Using the proposed method considerably reduces the effectiveness of traffic analysis attacks, as shown by the results, and the approach is notable in that it is effective without real-time knowledge of protected traffic flows or any manipulation of production traffic. In addition, in a different setting, Javaheri et al. [30] demonstrate the efficacy of deep learning through the predictions that their trained models obtain from the dataset. Using the recovered data, Mughaid et al. [31] report high accuracy for detecting dropping attacks within the 5G network and recommend the use of multiple ML and DL algorithms for this task. Additionally, the intrusion detection architecture provided by Yadav et al. [32] is capable of quickly and accurately identifying genuine worldwide attackers, and the neural network used in that study performs very well at detecting intrusions. Morabito et al. [33] present further uses for deep convolutional neural networks in the classification of Alzheimer’s disease and mild cognitive impairment using scalp EEG recordings.
According to the findings of the previous research, the vast majority of the relevant work on the application of deep learning to 5G has focused on technical evaluations. Although the works by Jahng and Park [14] and Maeng et al. [5] emphasise the necessity of predicting the potential size of the new 5G mobile market and assessing the accuracy of predictions for the 5G service industry, it is essential to also address the prediction of 5G production itself, since those works stress the need to predict 5G adoption but propose few, if any, strategies for actually adopting 5G technology.

3. Methodology

An effective structure in the applied methodology allows for the realization of a reliable comparison of the different artificial intelligence models. Thus, in the first part of this section, the techniques applied to obtain suitable data are pointed out. The models are introduced in the subsequent part, focusing on the exposition of the cost function, i.e., the function to be minimized. This allows for accurate results in each of the models. Finally, the metrics used to carry out the comparative analysis with certainty are presented.

3.1. Deep Learning Models

The models and the 5G-related issues to be solved depend heavily on the deep learning architectures and on the tasks connected with each architecture, and therefore need to be studied carefully. The models found appropriate for use in 5G in this study are LSTM, CNN, and DR; this choice can be justified by the work of Almutairi [34].

3.1.1. Long Short-Term Memory

Among the many challenges that recurrent neural networks (RNNs) must contend with, vanishing and exploding gradients are two that the LSTM addresses effectively [35]. Because of the error caused by the vanishing gradient problem, RNNs cannot be trained when there is a delay of more than 5–10 timesteps between the input events and the signals that indicate success. The LSTM, by contrast, is able to bridge time lags of up to a thousand timesteps. Specialised units assembled from memory cells are responsible for enforcing a constant error flow, and multiplicative gate units control entry into each cell [36]. The memory blocks of a typical LSTM network are situated in the hidden layer of the network [37]. A memory block consists of a set of memory cells and a pair of multiplicative gate units responsible for processing input and output data. The constant error carousel within a memory cell solves the vanishing gradient problem by preventing the locally backpropagated error from decaying or growing when no input or error signals are present. The input and output gates shield the constant error carousel from irrelevant signals originating both inside and outside the block, and its activity determines the state of the cell. The switching states of the input gate $y^{in}$ and the output gate $y^{out}$ at discrete times $t = 1, 2, \dots$ are given by Equations (1) and (2):
$net_{out_j}(t) = \sum_{m} w_{out_j m}\, y^{m}(t-1), \qquad y^{out_j}(t) = f_{out_j}\!\left(net_{out_j}(t)\right)$ (1)
$net_{in_j}(t) = \sum_{m} w_{in_j m}\, y^{m}(t-1), \qquad y^{in_j}(t) = f_{in_j}\!\left(net_{in_j}(t)\right)$ (2)
where $j$ indexes the memory block, $f$ is the logistic sigmoid with range $[0, 1]$, and $w_{lm}$ denotes the connection weight from unit $m$ to unit $l$.
To compute the internal state $S_c(t)$ of a given memory cell, the squashed, gated input is added to the state at the previous time step $S_c(t-1)$, $t > 0$, using Equations (3) and (4):
$net_{c_j^v}(t) = \sum_{m} w_{c_j^v m}\, y^{m}(t-1)$ (3)
$S_{c_j^v}(t) = S_{c_j^v}(t-1) + y^{in_j}(t)\, g\!\left(net_{c_j^v}(t)\right)$ (4)
where $c_j^v$ denotes cell $v$ of memory block $j$, squashing of the cell input is performed by $g$, and $S_{c_j^v}(0) = 0$. To determine the output $y^{c}$ of a cell, the internal state $S_c$ is squashed by an output squashing function $h$ and gated by the activation of the output gate $y^{out}$, as expressed in Equation (5):
$y^{c_j^v}(t) = y^{out_j}(t)\, h\!\left(S_{c_j^v}(t)\right)$ (5)
where $h$ denotes a centered sigmoid with range $[-1, 1]$.
The output units $k$ of a network with a layered topology consisting of a hidden layer with memory blocks, a standard input layer, and an output layer are defined by Equations (6) and (7):
$net_{k}(t) = \sum_{m} w_{km}\, y^{m}(t-1)$ (6)
$y^{k}(t) = f_{k}\!\left(net_{k}(t)\right)$ (7)
where $f_k$ denotes the logistic sigmoid squashing function with range $[0, 1]$ and $m$ ranges over all input units and the cells in the hidden layer. The LSTM is capable of solving tasks with long time lags that could not be solved by standard RNNs.
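To make the gating mechanism above concrete, the following minimal NumPy sketch implements one forward step of a single-cell memory block along the lines of Equations (1)–(5); the weight shapes, the use of tanh for the squashing functions g and h, and all variable names are illustrative assumptions rather than the configuration used in this study.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_block_step(y_prev, s_prev, w_in, w_out, w_c):
    """One timestep of a single-cell LSTM memory block (cf. Equations (1)-(5)).

    y_prev : activations of the source units at time t-1, shape (m,)
    s_prev : internal cell state S_c(t-1), a scalar
    w_in, w_out, w_c : weight vectors into the input gate, output gate, and cell
    """
    y_in = sigmoid(w_in @ y_prev)    # input gate activation, Equation (2)
    y_out = sigmoid(w_out @ y_prev)  # output gate activation, Equation (1)
    g = np.tanh(w_c @ y_prev)        # squashed cell input, Equations (3)-(4)
    s = s_prev + y_in * g            # constant error carousel update, Equation (4)
    y_c = y_out * np.tanh(s)         # gated cell output, Equation (5)
    return y_c, s

# Toy usage with 4 source units and random weights (illustrative only).
rng = np.random.default_rng(0)
y_c, s = lstm_block_step(rng.normal(size=4), 0.0,
                         rng.normal(size=4), rng.normal(size=4), rng.normal(size=4))
```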

3.1.2. Convolutional Neural Network

The CNN is one of the most important deep learning approaches that can be applied in the context of 5G technology. In the realms of image processing and pattern recognition, feed-forward neural networks such as the convolutional neural network (CNN) are particularly useful. It has a straightforward design, is flexible, and requires little adjustment throughout training [38]. Layers such as the input layer, the convolution layer, the pooling layer, and the output layer make up a CNN’s overall structure. The input image is fed into the convolution layer, where a filter is used to generate a feature map. After the convolution layer sends its output, the pooling layer receives the feature maps and performs a downsampling operation on them [39]. When n neighbouring pixels are pooled into a single pixel, a narrower feature map is created by applying an activation function, weighting by a scalar $W_{x+1}$, and adding a bias $b_{x+1}$. Parallel learning is one of the CNN’s main benefits, since it reduces the network’s complexity, and the sub-sampling procedure can be used to increase resilience and scalability. Equation (8) represents how each convolution layer of a CNN processes its output [40]:
$O_{x,y}^{l,k} = \tanh\!\left(\sum_{t=0}^{f-1} \sum_{r=0}^{K_h} \sum_{c=0}^{K_w} W_{r,c}^{k,t}\, O_{x+r,\, y+c}^{l-1,\, t} + Bias^{l,k}\right)$ (8)
where $O_{x,y}^{l,k}$ is the output of the neuron at convolution layer $l$, feature pattern $k$, row $x$, and column $y$, and $f$ denotes the number of convolution cores in a given feature pattern. At the sub-sampling stage, the output of the neuron at the $l$-th sub-sampling layer, $k$-th feature pattern, row $x$, and column $y$ is expressed in Equation (9):
$O_{x,y}^{l,k} = \tanh\!\left(W^{k} \sum_{r=0}^{S_h} \sum_{c=0}^{S_w} O_{x \times S_h + r,\, y \times S_w + c}^{l-1,\, k} + Bias^{l,k}\right)$ (9)
At the $l$-th hidden layer $H$, the output of neuron $j$ is provided in Equation (10):
$O_{x,y}^{l,k} = \tanh\!\left(W^{k} \sum_{r=0}^{S_h} \sum_{c=0}^{S_w} O_{x \times S_h + r,\, y \times S_w + c}^{l-1,\, k} + Bias^{l,k}\right)$ (10)
where $s$ denotes the number of feature patterns in the sub-sampling layer.
At the output layer, the output of neuron $i$ at the $l$-th output layer is expressed in Equation (11):
$O^{l,i} = \tanh\!\left(\sum_{j=0}^{H} O^{l-1,j}\, W_{i,j}^{l} + Bias^{l,i}\right)$ (11)
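As an illustration of this layer structure, the following Keras sketch stacks convolution, pooling, hidden, and output layers in the spirit of Equations (8)–(11); the window length, the feature count, the layer widths, and the choice of a 1D architecture over the KPI features are assumptions made for illustration, not the exact configuration reported in this study.
```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(window=10, n_features=6):
    """Small 1D CNN for regression over windows of KPI features (illustrative)."""
    model = models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.Conv1D(32, kernel_size=3, activation="tanh"),  # convolution layer, cf. Equation (8)
        layers.MaxPooling1D(pool_size=2),                     # sub-sampling layer, cf. Equation (9)
        layers.Conv1D(64, kernel_size=3, activation="tanh"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="tanh"),                  # hidden layer, cf. Equation (10)
        layers.Dense(1),                                      # output layer, cf. Equation (11)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mse"])
    return model

model = build_cnn()
model.summary()
```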

3.1.3. Deep Reinforcement Learning

Reinforcement learning is one of the best strategies for making decisions in real time. As it acts and recognises things in the world, it learns [41]. At each level of interaction, the agent chooses an action that modifies the environment based on the current state of the environment. Whether an activity is useful or not, the agent receives feedback in the form of a reward or a penalty.
To describe RL, the research uses the notation of a Markov decision process (MDP) tuple, written as (S, A, R, P), where S is the state of the environment, A is the action taken, R is the reward, and P is the state transition probability. The goal of RL is to maximise the total discounted reward across all states by learning the optimal strategy [42]. This idea is presented in Equation (12):
$J(\pi^*) = \max_{\pi} J(\pi) = \max_{\pi} E_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right]$ (12)
where $\pi^*$ and $\pi$ denote the optimal policy and a policy, respectively, $J(\pi)$ denotes the total expected reward, $E_{\pi}[\cdot]$ is the expectation under the policy $\pi$ and the transition probabilities, and $\gamma$ is the discount factor in the range $[0, 1]$. The agent is opportunistic about the present reward when $\gamma = 0$ and strives for long-term reward when $\gamma = 1$. $r_t$ denotes the reward at time $t$.
The achievable return for executing an action $a$ in a state $s$ is represented by the value function $Q(s, a)$. This can be updated for each state–action pair until the largest change in value falls below a given threshold, as presented in Equation (13):
$Q(s, a) \leftarrow \sum_{s'} p(s' \mid s, a)\left[r(s, a, s') + \gamma \max_{a'} Q(s', a')\right]$ (13)
where $p(s' \mid s, a)$ denotes the transition probability from state $s$ to state $s'$ when action $a$ has been executed, and the reward is denoted by $r(s, a, s')$. Following the convergence of the algorithm, the optimal policy is obtained by acting greedily in each state $s$, as expressed in Equation (14):
$a^* = \arg\max_{a} Q^*(s, a), \quad s \in S$ (14)
In situations where the system has no prior knowledge of the environment, optimal policies can be obtained by a type of RL algorithm known as Q-learning. Let $\alpha_t$ be the learning rate, such that when $\alpha_t = 0$ the agent learns nothing and when $\alpha_t = 1$ the agent considers only the most recent information. The updating rule of Q-learning is provided in Equation (15):
$Q(s_t, a_t) \leftarrow (1 - \alpha_t)\, Q(s_t, a_t) + \alpha_t\left[r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a)\right]$ (15)
This means that at time step $t$ the agent observes state $s_t$, chooses an action $a_t$, and receives reward $r_{t+1}$ for executing $a_t$. Q-learning always tries to choose the optimal action by considering the state–action pair with the best Q value. RL algorithms are well suited to solving a variety of problems, especially those relating to messaging and mobile networks [43].
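A minimal tabular sketch of the Q-learning update in Equation (15) is shown below; the state and action space sizes, the learning rate, and the discount factor are arbitrary illustrative values, not those used in the experiments.
```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Apply the tabular Q-learning update of Equation (15) to Q[s, a]."""
    td_target = r + gamma * np.max(Q[s_next])            # r_{t+1} + gamma * max_a Q(s_{t+1}, a)
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * td_target
    return Q

# Toy usage: 5 states, 3 actions, one observed transition (illustrative only).
Q = np.zeros((5, 3))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)

# Greedy action for state 0 once learning has converged (cf. Equation (14)).
a_star = int(np.argmax(Q[0]))
```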

3.2. Dataset Preparation

The dataset used for this study is the 5G production dataset obtained from Raca et al. [44]. The data were collected from a major Irish mobile operator and consist of 5G trace data covering two mobility modes (stationary and mobile) and two application types (video streaming and file download). The collection includes client-side key performance indicators (KPIs) for cellular connections, such as channel metrics, context metrics, cell metrics, and throughput data, gathered with G-NetTrack Pro, a widely used network monitoring application for non-rooted Android devices. This is the first open-source dataset that includes throughput, channel, and context information for 5G networks. After collection, the data are cleaned and organised in a preprocessing step in which anomalous or duplicate records are identified, and data-scrubbing methods are used to correct records with only minor discrepancies.
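The following pandas sketch illustrates this kind of preprocessing; the file name, the KPI column names, and the specific cleaning rules are assumptions made for illustration and may differ from the actual trace files of Raca et al. [44].
```python
import pandas as pd

# Hypothetical trace file and KPI columns (names are illustrative).
df = pd.read_csv("5G_trace.csv")
kpi_cols = ["RSRP", "RSRQ", "SNR", "CQI", "RSSI", "DL_bitrate"]

# Remove duplicate records and coerce KPI readings to numeric values.
df = df.drop_duplicates()
df[kpi_cols] = df[kpi_cols].apply(pd.to_numeric, errors="coerce")

# Correct minor gaps by interpolation, then drop rows that remain anomalous.
df[kpi_cols] = df[kpi_cols].interpolate(limit=3)
df = df.dropna(subset=kpi_cols)
```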

3.3. Performance Metrics

The success of implementing 5G prediction using deep learning must be measured across a number of dimensions (channel metrics, context metrics, cell metrics, and throughput data). Performance metrics are used to evaluate the efficacy of a deep learning algorithm or model, and comparing a proposed method against existing ones can reveal its weaknesses. To evaluate the relative merits of the deep learning models for 5G, each component (channel metrics, context metrics, cell metrics, and throughput data) is considered individually. Although numerous success indicators exist, this study uses the performance indicators most widely adopted in the research community. Since the deep learning algorithms need to be compared in order to identify the differences among them, error metrics are the crucial measurements considered. Mean squared error (MSE) is adopted to measure how far the estimated value deviates, on average, from the actual value; the expected value of the squared error is represented by Equation (16):
$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2$ (16)
Root mean squared error (RMSE), computed from the residuals of the prediction errors, shows how closely the data gravitate toward the optimal line of fit and is given by Equation (17):
$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2}$ (17)
Furthermore, the coefficient of determination (R2), which indicates how closely the predicted values fall along the optimal line of fit at the target values, is measured by Equation (18):
$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2}{\sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^2}$ (18)
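As a quick illustration of Equations (16)–(18), the following sketch computes the three regression metrics with scikit-learn and NumPy; the example values are arbitrary and do not come from the 5G dataset.
```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([0.80, 0.50, 0.90, 0.40])   # illustrative targets
y_pred = np.array([0.70, 0.60, 0.85, 0.50])   # illustrative predictions

mse = mean_squared_error(y_true, y_pred)   # Equation (16)
rmse = np.sqrt(mse)                        # Equation (17)
r2 = r2_score(y_true, y_pred)              # Equation (18)
print(f"MSE={mse:.4f}  RMSE={rmse:.4f}  R2={r2:.3f}")
```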
A confusion matrix provides a concise summary of the prediction output of a technique in relation to specific test data. It is a two-dimensional matrix; the first dimension is indexed by the entity’s actual class and the second by the class that the algorithm assigns to the entity. In this particular use of the confusion matrix there are two classes, one referred to as the positive class and the other as the negative class, so the four cells of the matrix are the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).
Furthermore, accuracy is critical in the evaluation of a deep learning algorithm; a measurement or calculated value is accurate if and only if it agrees with the real value or meets some other criterion. The extent to which experimental values approximate the genuine value is what is meant by “accuracy”. The accuracy is given as Acc = (TP + TN)/(P + N), where P and N are the numbers of positive and negative instances; the sensitivity is the true positive rate, also referred to as recall, given as R = TP/(TP + FN); and the specificity is the true negative rate, given as S = TN/(TN + FP).
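A small scikit-learn sketch of these classification metrics follows; the labels are toy values chosen only to illustrate the calculation.
```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # illustrative predicted labels

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)   # Acc = (TP + TN) / (P + N)
sensitivity = tp / (tp + fn)                    # recall / true positive rate
specificity = tn / (tn + fp)                    # true negative rate
print(accuracy, sensitivity, specificity)
```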
The receiver operating characteristic (ROC) curve has been established as a reliable method for evaluating the performance of both deterministic and probabilistic models [45]. The ROC curve is an all-inclusive and visually appealing summary of prediction accuracy, and its usefulness extends to a variety of contexts and prediction methods. The ROC curve is generated by plotting the model’s sensitivity (true positive rate, Y axis) against 1 − specificity (false positive rate, X axis). When a model can accurately forecast whether or not 5G adoption depends on a component of the 5G production dataset, this is reflected by a high value of the area under the curve (AUC).
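The sketch below shows how such a curve and its AUC could be computed with scikit-learn; the scores are invented for illustration and are not taken from the experiments.
```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # illustrative labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]    # illustrative predicted probabilities

fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

plt.plot(fpr, tpr, label=f"AUC = {auc:.3f}")
plt.plot([0, 1], [0, 1], linestyle="--")              # chance diagonal
plt.xlabel("False positive rate (1 - specificity)")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```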

4. Presentation of the Results and Discussion

The deep learning algorithms were implemented on Google Colab using Python libraries, in particular Keras and TensorFlow. Several iterations of the algorithms were carried out, and the outcomes of the proposed DR, LSTM, and CNN models were evaluated and compared. Both mean squared error and root mean squared error were used as performance indicators to evaluate how well the algorithms performed. Keras and TensorFlow were chosen because both are open-source software platforms; TensorFlow is written in C++ and Python, which lends the pair an air of familiarity, and both are popular and frequently employed in academic contexts. Keras is a tool that can be used on the TensorFlow backend to easily define deep learning models. Regarding the hardware, the system’s specifications include Windows 11 (64-bit), a 12th Gen Intel® Core™ i5-12500 (18 MB cache, 6 cores, 12 threads, 3.00 GHz to 4.60 GHz Turbo, 65 W), and 8 GB (1 × 8 GB) DDR4 memory.
The insights into why the proposed approach performed much better than existing methods lie in determining whether or not the algorithms were effective when applied to the complete datasets. The outcomes were reviewed by contrasting the proposed DR, LSTM, and CNN approaches. The deep network algorithms used the following configuration: 60 epochs, with various steps per epoch for each set of data, and four hidden layers for the DR, CNN, and LSTM models, respectively. The ratio used for partitioning the data between training and testing is 70:30. The results of the investigation, obtained by carrying out the experiment with each of the models, are shown in Table 1, which summarises the findings and indicates how successful the proposed DR, LSTM, and CNN models were. When compared with other studies [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26], the MSE and RMSE achieved by the proposed DR, LSTM, and CNN models were the lowest, indicating the best performance when it comes to predicting the adoption of 5G technology. Table 1 shows that the MSE for DR is 0.000064 with an RMSE of 0.0080, CNN has an MSE of 0.000041 and an RMSE of 0.0071, and LSTM has an MSE of 0.000066 and an RMSE of 0.0074.
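A minimal sketch of this training setup is shown below, with placeholder arrays standing in for the preprocessed KPI features; the 70:30 split and 60 epochs follow the text, while the architecture, batch size, validation fraction, and random data are illustrative assumptions rather than the exact configuration used in the experiments.
```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the preprocessed KPI windows and targets.
X = np.random.rand(1000, 10, 6).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# 70:30 train/test partition, as reported in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

# A small stand-in network; the DR/CNN/LSTM models in the text used four hidden layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 6)),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mse"])

history = model.fit(X_train, y_train, validation_split=0.1,  # assumed dev split
                    epochs=60, batch_size=32, verbose=0)

test_mse = model.evaluate(X_test, y_test, verbose=0)[1]
print(f"Test MSE: {test_mse:.6f}, RMSE: {np.sqrt(test_mse):.4f}")
```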
The characteristics of 5G adoption in the present circumstance, together with general information about the generated models, are detailed here in terms of the mean errors obtained through the cross-validation method in each of the splits: training, validation (dev), and testing. These mean errors are high (see Table 2).
In addition, although the LSTM models appear to produce accurate findings, the quality of these outcomes is poor. Using the RMSE measure, it can be shown that the forecasts for the validation set and the test set differ significantly from those for the training set. It can therefore be concluded that the convergence of these models is insufficient due to overfitting, and that they cannot be used to make accurate predictions about how quickly 5G will be adopted.
Finally, the results produced by the other two models, DR and CNN, are very satisfactory and are consistent across every split. The CNN model stands out because it is able to extract patterns from its convolutions, which enables it to produce results that are consistent and accurate; it can therefore be regarded as the most accurate model for estimating the uptake of 5G. Both DR and CNN were modified, but based on the metrics presented it is not possible to determine which of the two is superior.
Adoption of 5G may still occur even though erroneous outcomes are possible and the models may not forecast exactly correctly. In order to evaluate the models’ accuracy in the relevant time periods, the results are presented with no regard assigned to night-time throughput, and Table 3 confirms the prior conclusions drawn from Table 2: the LSTM model is the weakest and prone to overfitting, the CNN model is the best, and the DR model is good but not as good as CNN. Table 3 also shows that the CNN model consistently produces better outcomes than the DR model, whereas the DR model provides more convincing fits during non-normal times. The results are likewise unaffected by considering the entire time span. When all factors are taken into account, continuity and expertise can make the modelling more resistant to data changes.
According to this assessment, DR and CNN are the models best suited to projecting 5G uptake. Since the models continue to produce the same results regardless of whether the throughput is taken into account, the variations between the two approaches do not produce significant a priori distinctions. The training split shows the positive behaviour produced by the DR, CNN, and LSTM models for both situations, with the DR model showing better accuracy during the periods where 5G adoption can be observed; the corresponding R2 values are 0.91 for DR, 0.93 for LSTM, and 0.91 for CNN (Table 2).
In this part of the study, the characteristics of the implementation of 5G in this environment are presented, along with general information on the models that were developed, such as the mean errors obtained using the cross-validation method during each of the training, validation, and testing stages. To find the optimal solution, the cross-validation method is used to evaluate and contrast the mean errors of each of the sub-splits (see Table 4).
Although the LSTM models appear to have a high degree of precision, the quality of the data they generate is quite low. Using the root-mean-squared error (RMSE) measure, it can be shown that the forecasts for the validation set and the test set differ significantly from those for the training set, and the two sets themselves reveal a number of intriguing discrepancies when compared side by side. Consequently, it can be concluded that the results have been overfit and that the convergence of these models is insufficient, which makes it difficult to use these models to provide accurate predictions regarding the uptake of 5G technology.
In conclusion, the outcomes generated by the DR model and the CNN model are equally satisfactory and hold across the other splits. The ability to recognise and extract patterns from convolutions distinguishes the CNN model from the others, and the results it generates are accordingly reliable and accurate; one may therefore claim that it delivers the most accurate forecasts of 5G’s adoption rate. Because improvements were made to both DR and CNN, it is impossible to identify which of the two is superior based on the outcomes of the supplied measures.
The possibility of erroneous results and the unpredictability of model forecasts do not exclude the broad use of 5G technology. The analysis here was carried out so that the models’ dependability during the pertinent time periods could be evaluated, and it supports the decisions made earlier: the LSTM model is overfitted and least accurate, the CNN model achieves the highest accuracy, and the DR model is good but inferior to CNN. The CNN model consistently outperforms the DR model in every case, which has a direct impact on the DR model’s capacity to provide compelling matches in out-of-the-ordinary situations. The outcomes are not changed by taking the entire time span into account, and when all relevant factors are considered, continuity and competence may make the model more resistant to changes in the data.
Figure 1 depicts the ROC curves for the first training case, in which some of the 5G parameters are used (NetworkMode, RSRP, RSRQ, SNR, CQI, and RSSI), whereas Figure 2 depicts the ROC curves for the validation. The AUC value based on the training dataset cannot be examined independently when validating a model; rather, it must be taken into account together with the validation. Hence, the AUC values from the validation dataset were also used in the construction of the model and are taken into consideration while validating it. The ROC curves show consistent behaviour, with the training set performing very well, at an AUC of 0.954, and the validation set performing at an AUC of 0.989. Each ROC curve for a different collection of features clearly shows that the model set is consistently superior, and this dominance increases for the remaining parameters in the training and validation sets. For a model to have a greater likelihood of making accurate predictions, the area under the curve (AUC) must be close to one. This supports the assertion that the model is the best in the more plausible real-world setting in which 5G networks are adopted.
Figure 3 depicts the ROC curve and the AUC value based on the training dataset after additional components from the 5G production dataset were added and the weight was increased, and Figure 4 depicts the ROC curve and the AUC value based on the validation dataset. Every model demonstrates some degree of predictive power, as indicated by the AUC values of 0.986 and 0.927 for training and validation, respectively, and the performance can be considered close to the best possible. When validating a model, the AUC value calculated from the training dataset cannot be considered in isolation. The AUC values for the validation dataset were not used in the construction of the model, but they ought to be taken into consideration while verifying it.
Further analysis of the ROC curves shows that the model with two parameters has the greater AUC value (0.867) with a high weight, followed by the model combining all of the parameters (0.851). That is, as shown in Figure 5, there is a decline in the ROC curve and the AUC value based on the training dataset, and Figure 6 shows a similar decrease for the validation dataset; in other words, the value of the ROC curve has decreased.
In a similar manner, Figure 7 depicts the ROC curve and the AUC value at the culmination of training, and Figure 8 depicts the same information based on the validation dataset. Both models have predictive power, although the AUC values of the latter rounds are lower than those of the first two rounds of tests. In general, all of the trials were quite successful, and the performance can be considered very good.
The findings of this study lend credence to the contention that it is critical to emphasise prediction of the production of 5G networks as well. This is the situation because it is extremely important to anticipate the adoption of 5G technology, yet relatively few of the recommended solutions have been put into practice [46,47,48,49,50]. In the present investigation, a deep learning method is used to investigate the numerous facets of 5G technology and to estimate the degree to which it will be generally accepted, based on components such as channel metrics, context metrics, cell metrics, and throughput data, which indicate where 5G is most likely to be adopted. Deep reinforcement learning (DR), long short-term memory (LSTM), and a convolutional neural network (CNN) were the three deep learning models put into use. Deep learning has been almost completely ignored in earlier research on the subject of the 5G rollout; previous studies have instead placed a higher priority on technical research, while others have stressed the significance of estimating the size of the new 5G mobile market and analysing the reliability of forecasts in the 5G service sector. The prediction findings indicate that, although the LSTM models may give the impression of accuracy, the quality of the outputs they supply is in fact rather poor. An examination of the root-mean-squared error (RMSE) revealed that the results generated by the DR model and the CNN model are on par with one another. These findings support the notion that it is essential to give equal weight to projections of the expansion of 5G networks, because accurate forecasts regarding the adoption of 5G technology are required yet few, if any, of the strategies advised for achieving this goal have been put into practice.

5. Conclusions

A deep learning system is used to determine whether or not 5G will be successful by utilising channel data, context metrics, cell metrics, and throughput statistics, together with an analysis of their application. The deep learning models used in this scenario were deep reinforcement learning (DR), LSTM, and a convolutional neural network (CNN). Deep learning has not been utilised in the majority of recent research on the implementation of 5G; rather, the emphasis has been placed on technical assessments, or on the need to estimate the size of the 5G mobile market and to evaluate the accuracy of such estimates. The results of the predictions showed that the LSTM models give the impression of being very accurate but generate outputs of poor quality, while the root-mean-squared error (RMSE) metric showed that the outputs of the DR model and the CNN model are both satisfactory. The novelty of this research lies in its emphasis on the utilisation of channel metrics, context metrics, cell metrics, and throughput data in order to concentrate on predictions regarding the development of 5G networks themselves as well as the generation of the factors that will determine the adoption of 5G. The main limitation of this study lies in the dataset itself, a 5G production dataset obtained from a large Irish mobile service provider; future research should consider using datasets from other countries in order to gain additional insights. The deep learning models used present a further limitation: although they are effective at mapping inputs to outputs, they are not very good at comprehending the context of the data they handle, so future work should consider architectural models that take the context into account.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Cited in reference.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Benalia, E.; Bitam, S.; Mellouk, A. Data dissemination for Internet of vehicle based on 5G communications: A survey. Trans. Emerg. Telecommun. Technol. 2020, 31, e3881. [Google Scholar] [CrossRef]
  2. Kim, D.K.; Lee, H.; Lee, S.C.; Lee, S. 5G commercialization and trials in Korea. Commun. ACM 2020, 63, 82–85. [Google Scholar] [CrossRef] [Green Version]
  3. Blind, K.; Niebel, C. 5G roll-out failures addressed by innovation policies in the E.U. Technol. Forecast. Soc. Change 2022, 180, 121673. [Google Scholar] [CrossRef]
  4. Nguyen, H.X.; Trestian, R.; To, D.; Tatipamula, M. Digital twin for 5G and beyond. IEEE Commun. Mag. 2021, 59, 10–15. [Google Scholar] [CrossRef]
  5. Maeng, K.; Kim, J.; Shin, J. Demand forecasting for the 5G service market considering consumer preference and purchase delay behavior. Telemat. Inform. 2020, 47, 101327. [Google Scholar] [CrossRef]
  6. Glisic, S.; Makela, J.P. Advanced Wireless Networks: 4G Technologies. In Proceedings of the 2006 IEEE Ninth International Symposium on Spread Spectrum Techniques and Applications, Manaus, Brazil, 28 August 2006; pp. 442–446. [Google Scholar]
  7. Varshney, U.; Jain, R. Issues in emerging 4G wireless networks. Computer 2001, 34, 94–96. [Google Scholar] [CrossRef]
  8. Available online: https://www.huawei.com/minisite/ubbf/en/ (accessed on 6 February 2022).
  9. Oyman, M.; Bal, D.; Ozer, S. Extending the technology acceptance model to explain how perceived augmented reality affects consumers’ perceptions. Comput. Hum. Behav. 2022, 128, 107127. [Google Scholar] [CrossRef]
  10. Chiroma, H.; Gital, A.Y.; Rana, N.; Abdulhamid, S.I.; Muhammad, A.N.; Umar, A.Y.; Abubakar, A.I. Nature inspired meta-heuristic algorithms for deep learning: Recent progress and novel perspective. In Advances in Computer Vision, Proceedings of the 2019 Computer Vision Conference (CVC), Las Vegas, NV, USA, 2–3 May 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 59–70. [Google Scholar]
  11. Le Cun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  12. Du, Y.; Ren, L.; Liu, X.; Wu, Z. Machine learning method intervention: Determine proper screening tests for vestibular disorders. Auris Nasus Larynx 2022, 49, 564–570. [Google Scholar] [CrossRef]
  13. Zuo, C.; Qian, J.; Feng, S.; Yin, W.; Li, Y.; Fan, P.; Han, J.; Qian, K.; Chen, Q. Deep learning in optical metrology: A review. Light Sci. Appl. 2022, 11, 39. [Google Scholar] [CrossRef] [PubMed]
  14. Jahng, J.H.; Park, S.K. Simulation-based prediction for 5G mobile adoption. ICT Express 2020, 6, 109–112. [Google Scholar] [CrossRef]
  15. Maimó, L.F.; Clemente, F.J.G.; Pérez, M.G.; Pérez, G.M. On the performance of a deep learning-based anomaly detection system for 5G networks. In Proceedings of the 2017 IEEE Smart World, Ubiquitous Intelligence and Computing, Advanced and Trusted Computed, Scalable Computing & Communications, Cloud and Big Data Computing, Internet of People and Smart City Innovation, San Francisco, CA, USA, 4–8 August 2017; pp. 1–8. [Google Scholar]
  16. Thantharate, A.; Beard, C. ADAPTIVE6G: Adaptive Resource Management for Network Slicing Architectures in Current 5G and Future 6G Systems. J. Netw. Syst. Manag. 2023, 31, 9. [Google Scholar] [CrossRef]
  17. Abiko, Y.; Mochizuki, D.; Saito, T.; Ikeda, D.; Mizuno, T.; Mineno, H. Proposal of allocating radio resources to multiple slices in 5G using deep reinforcement learning. In Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 15–18 October 2019; pp. 1–2. [Google Scholar]
  18. Shahriari, B.; Moh, M.; Moh, T.S. Generic online learning for partial visible dynamic environment with delayed feedback: Online learning for 5G C-RAN load-balancer. In Proceedings of the 2017 International Conference on High Performance Computing & Simulation (HPCS), Genoa, Italy, 17–21 July 2017; pp. 176–185. [Google Scholar]
  19. Huang, C.W.; Chiang, C.T.; Li, Q. A study of deep learning networks on mobile traffic forecasting. In Proceedings of the 2017 IEEE 28th annual international symposium on personal, indoor, and mobile radio communications (PIMRC), Montreal, QC, Canada, 8–13 October 2017; pp. 1–6. [Google Scholar]
  20. Alhazmi, M.H.; Alymani, M.; Alhazmi, H.; Almarhabi, A.; Samarkandi, A.; Yao, Y.D. 5G signal identification using deep learning. In Proceedings of the 2020 29th Wireless and Optical Communications Conference (WOCC), Newark, NJ, USA, 1–2 May 2020; pp. 1–5. [Google Scholar]
  21. Li, G.; Feng, B.; Zhou, H.; Zhang, Y.; Sood, K.; Yu, S. Adaptive service function chaining mappings in 5G using deep Q-learning. Comput. Commun. 2020, 152, 305–315. [Google Scholar] [CrossRef]
  22. Luo, C.; Ji, J.; Wang, Q.; Chen, X.; Li, P. Channel state information prediction for 5G wireless communications: A deep learning approach. IEEE Trans. Netw. Sci. Eng. 2018, 7, 227–236. [Google Scholar] [CrossRef]
  23. Godala, A.R.; Kadambar, S.; Chavva, A.K.; Tijoriwala, V.S. A deep learning based approach for 5G NR CSI estimation. In Proceedings of the 2020 IEEE 3rd 5G World Forum (5GWF), Bangalore, India, 10–12 September 2020; pp. 59–62. [Google Scholar]
  24. Klus, R.; Klus, L.; Solomitckii, D.; Valkama, M.; Talvitie, J. Deep learning based localization and HO optimization in 5G NR networks. In Proceedings of the 2020 International Conference on Localization and GNSS (ICL-GNSS), Tampere, Finland, 2–4 June 2020; pp. 1–6. [Google Scholar]
  25. Doan, M.; Zhang, Z. Deep learning in 5G wireless networks-anomaly detections. In Proceedings of the 2020 29th Wireless and Optical Communications Conference (WOCC), Newark, NJ, USA, 1–2 May 2020; pp. 1–6. [Google Scholar]
  26. Razaak, M.; Kerdegari, H.; Davies, E.; Abozariba, R.; Broadbent, M.; Mason, K.; Argyriou, V.; Remagnino, P. An integrated precision farming application based on 5G, UAV and deep learning technologies. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Salerno, Italy, 3 September 2019; pp. 109–119. [Google Scholar]
  27. Dai, Y.; Xu, D.; Maharjan, S.; Chen, Z.; He, Q.; Zhang, Y. Blockchain and deep reinforcement learning empowered intelligent 5G beyond. IEEE Netw. 2019, 33, 10–17. [Google Scholar] [CrossRef]
  28. Gante, J.; Falcao, G.; Sousa, L. Deep learning architectures for accurate millimeter wave positioning in 5G. Neural Process. Lett. 2020, 51, 487–514. [Google Scholar] [CrossRef]
  29. Abolfathi, M.; Shomorony, I.; Vahid, A.; Jafarian, J.H. A Game-Theoretically Optimal Defense Paradigm against Traffic Analysis Attacks Using Multipath Routing and Deception. In Proceedings of the 27th ACM on Symposium on Access Control Models and Technologies, New York, NY, USA, 8–10 June 2022; pp. 67–78. [Google Scholar]
  30. Javaheri, E.; Kumala, V.; Javaheri, A.; Rawassizadeh, R.; Lubritz, J.; Graf, B.; Rethmeier, M. Quantifying mechanical properties of automotive steels with deep learning based computer vision algorithms. Metals 2020, 10, 163. [Google Scholar] [CrossRef] [Green Version]
  31. Mughaid, A.; AlZu’bi, S.; Alnajjar, A.; AbuElsoud, E.; Salhi, S.E.; Igried, B.; Abualigah, L. Improved dropping attacks detecting system in 5g networks using machine learning and deep learning approaches. Multimed. Tools Appl. 2022, 81, 1–23. [Google Scholar] [CrossRef]
  32. Yadav, N.; Pande, S.; Khamparia, A.; Gupta, D. Intrusion detection system on IoT with 5G network using deep learning. Wirel. Commun. Mob. Comput. 2022, 2022, 9304689. [Google Scholar] [CrossRef]
  33. Morabito, F.C.; Campolo, M.; Ieracitano, C.; Ebadi, J.M.; Bonanno, L.; Bramanti, A.; Desalvo, S.; Mammone, N.; Bramanti, P. Deep convolutional neural networks for classification of mild cognitive impaired and Alzheimer’s disease patients from scalp EEG recordings. In Proceedings of the 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow (RTSI), Bologna, Italy, 7–9 September 2016; pp. 1–6. [Google Scholar]
  34. Almutairi, M.S. Deep learning-based solutions for 5G network and 5G-enabled Internet of vehicles: Advances, meta-data analysis, and future direction. Math. Probl. Eng. 2022, 2022, 6855435. [Google Scholar] [CrossRef]
  35. Chen, S.Y.; Yoo, S.; Fang, Y.L. Quantum long short-term memory. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 23 May 2022; pp. 8622–8626. [Google Scholar]
  36. Peng, L.; Wang, L.; Xia, D.; Gao, Q. Effective energy consumption forecasting using empirical wavelet transform and long short-term memory. Energy 2022, 238, 121756. [Google Scholar] [CrossRef]
  37. Chen, G.; Tang, B.; Zeng, X.; Zhou, P.; Kang, P.; Long, H. Short-term wind speed forecasting based on long short-term memory and improved BP neural network. Int. J. Electr. Power Energy Syst. 2022, 134, 107365. [Google Scholar] [CrossRef]
  38. Guo, J.; Han, K.; Wu, H.; Tang, Y.; Chen, X.; Wang, Y.; Xu, C. Cmt: Convolutional neural networks meet vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2022; pp. 12175–12185. [Google Scholar]
  39. Xie, Y.; Zaccagna, F.; Rundo, L.; Testa, C.; Agati, R.; Lodi, R.; Manners, D.N.; Tonon, C. Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics 2022, 12, 1850. [Google Scholar] [CrossRef]
  40. Wei, S.; Chen, Y.; Zhou, Z.; Long, G. A quantum convolutional neural network on NISQ devices. AAPPS Bull. 2022, 32, 2. [Google Scholar] [CrossRef]
  41. Panzer, M.; Bender, B. Deep reinforcement learning in production systems: A systematic literature review. Int. J. Prod. Res. 2022, 60, 4316–4341. [Google Scholar] [CrossRef]
  42. Wang, X.; Wang, S.; Liang, X.; Zhao, D.; Huang, J.; Xu, X.; Dai, B.; Miao, Q. Deep reinforcement learning: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2022. ahead of print. [Google Scholar] [CrossRef]
  43. Zhang, L.; Lai, S.; Xia, J.; Gao, C.; Fan, D.; Ou, J. Deep reinforcement learning based IRS-assisted mobile edge computing under physical-layer security. Phys. Commun. 2022, 55, 101896. [Google Scholar] [CrossRef]
  44. Raca, D.; Leahy, D.; Sreenan, C.J.; Quinlan, J.J. Beyond throughput, the next generation: A 5g dataset with channel and context metrics. In Proceedings of the 11th ACM Multimedia Systems Conference, Istanbul, Turkey, 8–11 June 2020; pp. 303–308. [Google Scholar]
  45. Khan, F.A.; Abubakar, A. Machine translation in natural language processing by implementing artificial neural network modelling techniques: An analysis. Int. J. Perceptive Cogn. Comput. 2020, 6, 9–18. [Google Scholar]
  46. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Smart Manufacturing and Tactile Internet Based on 5G in Industry 4.0: Challenges, Applications and New Trends. Electronics 2021, 10, 3175. [Google Scholar] [CrossRef]
  47. Huang, H.; Guo, S.; Gui, G.; Yang, Z.; Zhang, J.; Sari, H.; Adachi, F. Deep learning for physical-layer 5G wireless techniques: Opportunities, challenges and solutions. IEEE Wirel. Commun. 2019, 27, 214–222. [Google Scholar] [CrossRef] [Green Version]
  48. Santos, G.L.; Endo, P.T.; Sadok, D.; Kelner, J. When 5G meets deep learning: A systematic review. Algorithms 2020, 13, 208. [Google Scholar] [CrossRef]
  49. Sharma, P.; Jain, S.; Gupta, S.; Chamola, V. Role of machine learning and deep learning in securing 5G-driven industrial IoT applications. Ad Hoc Netw. 2021, 123, 102685. [Google Scholar] [CrossRef]
  50. Bega, D.; Gramaglia, M.; Fiore, M.; Banchs, A.; Costa-Perez, X. DeepCog: Cognitive network management in sliced 5G networks with deep learning. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April 2019–2 May 2019; pp. 280–288. [Google Scholar]
Figure 1. The ROC area of the first trained data: AUC (0.954).
Figure 2. The ROC area of the validation data: AUC (0.989).
Figure 3. The ROC area of all the trained data: AUC (0.986).
Figure 4. The ROC area of validation data: AUC (0.927).
Figure 5. The ROC area of all the modified trained data: AUC (0.851).
Figure 6. The ROC area of modified validation data: AUC (0.877).
Figure 7. The ROC area of all the final adjusted trained data: AUC (0.867).
Figure 8. The ROC area of all the final adjusted validation data: AUC (0.885).
Table 1. The performance of the first analysis.

Model    MSE         RMSE
DR       0.000064    0.0080
CNN      0.000041    0.0071
LSTM     0.000066    0.0074
Table 2. Average error obtained with the first cross-validation.

Partition   Metric   DR      LSTM    CNN
Train       MSE      0.03    0.06    0.06
            RMSE     0.173   0.245   0.245
Test        MSE      0.04    0.07    0.05
            RMSE     0.2     0.264   0.22
            R2       0.91    0.93    0.91
Table 3. Averaged error obtained with the second cross-validation.

Partition   Metric   DR       LSTM     CNN
Train       MSE      0.006    0.0042   0.0068
            RMSE     0.0775   0.0649   0.0825
Test        MSE      0.016    0.0042   0.0068
            RMSE     0.126    0.065    0.0825
            R2       0.87     0.72     0.94
Table 4. Average error obtained with the last cross-validation.

Partition   Metric   DR       LSTM     CNN
Train       MSE      0.0075   0.0025   0.0012
            RMSE     0.0867   0.05     0.035
Test        MSE      0.0085   0.0094   0.0086
            RMSE     0.0922   0.097    0.093
            R2       0.95     0.94     0.96
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
