Ensemble Consensus-based Representation Deep Reinforcement Learning for Hybrid FSO/RF Communication Systems

A hybrid FSO/RF system requires an efficient FSO and RF link switching mechanism to improve system capacity by realizing the complementary benefits of both links. The dynamics of network conditions, such as fog, dust, and sand storms, compound the link switching problem and its control complexity. To address this problem, we initiate the study of deep reinforcement learning (DRL) for link switching in hybrid FSO/RF systems. Specifically, we focus on an actor-critic method called Actor/Critic-FSO/RF and a Deep-Q network (DQN) called DQN-FSO/RF for FSO/RF link switching under atmospheric turbulence. To formulate the problem, we define the state, action, and reward function of a hybrid FSO/RF system. DQN-FSO/RF frequently updates the deployed policy that interacts with the environment in a hybrid FSO/RF system, resulting in a high switching cost. To overcome this, we lift the problem to ensemble consensus-based representation learning for deep reinforcement learning, called DQNEnsemble-FSO/RF. The proposed DQNEnsemble-FSO/RF approach updates the deployed policy using consensus feature representations learned by an ensemble of asynchronous threads. Experimental results corroborate that DQNEnsemble-FSO/RF's consensus-learned feature switching achieves better performance than Actor/Critic-FSO/RF, DQN-FSO/RF, and MyOpic for FSO/RF link switching while keeping the switching cost significantly low.


I. INTRODUCTION
Free-space optical communication (FSO) supports high-data-rate applications with minimal electromagnetic interference [1]. FSO requires a point-to-point link between transmitter and receiver and is sensitive to atmospheric turbulence, such as fog, dust, and sand storms, resulting in degradation of FSO link capacity [2], [3]. To increase the reliability of terrestrial broadband links, a radio frequency (RF) link is integrated with the FSO link to form a hybrid FSO/RF system. This system exploits the complementary nature of the individual FSO and RF links concerning their environmental sensitivity: the FSO link is not very sensitive to rain, while RF links do not suffer signal degradation from fog, sand, and dust storms but are heavily attenuated by rain [4]-[6]. A hybrid FSO/RF system maintains reliable communication through efficient switching between FSO and RF links under varying weather conditions, thereby improving the performance of the system as a whole.
Designing an efficient hybrid FSO/RF system with high link availability in a dynamic environment characterized by fog, dust, or sand is challenging; essentially, decisions regarding switching the link between FSO and RF must be made immediately [7], [8]. Towards efficient FSO and RF link switching, a few works have been reported, such as a switching mechanism based on weak and strong turbulence for a 1-hop FSO/RF system [9], [10], a fuzzy logic-based FSO/RF soft switching mechanism [12], and predictive link switching mechanisms using long short-term memory (LSTM) [15] and AdaBoostClassifier, DecisionTreeClassifier, RandomForestClassifier, and GradientBoostingClassifier [16]. Other methods include a threshold-based link adaptation technique [17] to avoid frequent link switching, FSO link outage-initiated link switching [18], switching for multi-hop FSO/RF systems under various statistical models [19]-[21], and coding schemes for link switching [22], [23]. Although these techniques switch between FSO and RF links using thresholds, coding, or predictive methods, they are not efficient under a time-varying dynamic environment. The dynamics of network conditions compound the problem and the switching control complexity, which is not addressed by the existing FSO/RF link switching techniques for hybrid FSO/RF systems.
Reinforcement learning (RL) enables an agent to learn a policy to maximize the expected sum of rewards. RL coupled with deep learning, known as deep reinforcement learning (DRL), learns difficult policies for complex problems, such as control tasks [24]. RL can efficiently learn the state-action space under dynamic environments unknown to the agent. However, it is not suitable for scenarios with large state spaces, where it has poor generalization capabilities. DQN efficiently solves these problems by using a deep neural network (DNN) to approximate a value function to select an optimal policy. An RL agent can solve an optimization problem using on-policy or off-policy learning. On-policy RL results in high sampling complexity, as the policy acting on the environment and the one being trained are the same, thereby requiring new samples for each training iteration. In contrast, off-policy RL, such as DQN [25], exploits past experience rather than relying only on new samples. Off-policy RL coupled with neural networks for function approximation often leads to unstable behavior for continuous action spaces, which are common in control/switching problems. Actor-critic DRL methods address this instability by combining off-policy methods, such as DQN, with policy-based methods [26].
Recent years have witnessed the application of artificial intelligence, such as DRL, to various problems, such as efficient resource management and optimization [27]-[32], fog/edge computing and caching [33], [34], and dynamic channel assignment [35]. In [36], the authors introduced a heterogeneous hybrid visible light communication/RF system for industrial networks. The system targets various quality of service (QoS) requirements, such as reliability, low-latency communication, and higher data rates. The work also proposed a deep post-decision state-based experience replay method to support energy-efficient resource management. The learning algorithm learns an optimal policy with accelerated learning speed and efficiency.
To the best of our knowledge, no prior work considers DRL for FSO and RF link switching or control problems in hybrid FSO/RF systems. This work proposes DRL-based frameworks, namely actor-critic and Deep-Q network (DQN), for link switching in a hybrid FSO/RF system. Due to frequently changing weather conditions, such as fog, dust, and sand, integrating an FSO/RF system with efficient link switching is indeed challenging. The key motivation for this DRL-based framework is to support dynamic and multi-factor decision making [37], [38].
DQN has gained significant attention due to a tremendous improvement in performance with the help of deep neural networks to extract features. Recently, significant optimizations to DQN have been proposed, such as double DQN [39] to reduce overestimation of action values, a prioritized experience replay buffer [40], distributional DQN to model the distribution of action values, and the dueling DQN architecture [41] that composes action values from state values and advantage values.
To enable DQN efficiency for link switching and control decisions, it is desirable to limit the number of times a DQN agent changes the deployed/target policy during training. For large-scale hybrid FSO/RF systems, updating a policy requires reconsidering the complete environment. This requirement motivates the design of DQN agents with low switching costs. The agent proposed in this work aims to reduce the number of times the deployed/target policy that interacts with the environment changes. In contrast to standard DQN, our work adds consensus-based feature selection criteria to DQN to reduce switching costs. The key contributions of this work are summarized as follows:
• We conduct the first systematic study of modern deep reinforcement learning algorithms for hybrid FSO/RF systems. We implement two well-known DRL algorithms: 1) actor-critic, based on actor and critic neural networks, and 2) deep Q-network, which consists of two deep neural networks to approximate a Q-function. We investigate these DRL methods to find the best policy for FSO and RF switching in contrast to MyOpic.
• An actor-critic DRL-based method called Actor/Critic-FSO/RF solves the link switching optimization problem by considering its non-convex and combinatorial characteristics. The objective is to investigate the optimal long-term reward/utility of FSO and RF link switching for a hybrid FSO/RF system.
The remainder of this article unfolds as follows. Section II discusses recent works relevant to link switching techniques in hybrid FSO/RF systems. Section III describes the system model and problem formulation. Deep reinforcement learning methods, i.e., Actor/Critic-FSO/RF and DQN-FSO/RF, for FSO/RF link switching are presented in Section IV. Section V formulates the policy switching problem for the hybrid FSO/RF system.
The proposed DRL method to solve the policy switching problem, called DQNEnsemble-FSO/RF, is presented in Section VI. Performance evaluation, including MyOpic policy switching, the evaluation setup, results, and analysis of all the DRL methods, is given in Section VIII. Finally, Section IX concludes the paper with possible future directions.

II. RELATED WORK
Hybrid FSO/RF systems have been widely discussed in the literature to improve the reliability of communication. The authors in [42] proposed a hybrid FSO/RF system to act as a capacity backhaul supporting high traffic for 5G and beyond. In this work, FSO acts as the primary backhaul and RF as the secondary. The RF system is activated through a one-bit feedback signal by the receiver, but the system does not consider the real-time channel conditions closer to the transmitter.
A link switching technique called adaptive combining, which uses a signal-to-noise ratio (SNR) threshold to keep the FSO link active, is proposed in [43]. It activates the RF link if the perceived SNR of the FSO link drops below the switching threshold, enabling diversity reception for simultaneous data transmission over both links combined through maximal ratio combining (MRC). The system switches back to the FSO link upon acceptable link quality. The combining technique based on MRC, however, is subject to a performance-complexity tradeoff [44].
Works in [9]-[11] proposed FSO/RF link switching mechanisms based on atmospheric turbulence. The switching mechanism keeps one link active depending upon atmospheric conditions. The method in [9], [10] evaluated an FSO/RF switching mechanism according to weak and strong turbulence for a 1-hop FSO/RF hybrid system. Abid et al. in [12] proposed a fuzzy logic-based hybrid FSO/RF soft switching mechanism to improve the reliability and capacity of FSO under atmospheric conditions, such as sand/dust. The system consists of fuzzy-inference-based FSO and RF subsystems controlled using a fuzzy inference switching mechanism. The system improves the performance of the hybrid system in terms of bit error rate (BER), SNR, and system capacity. However, fuzzy logic inference depends on human expertise and requires extensive validation and verification.
In [14], Renat et al. proposed a hard switching method based on received signal strength indicator (RSSI) predictions. The method aims to increase the availability of the optical link using an RF link under atmospheric turbulence. The authors considered machine learning models, such as random forest, gradient-boosting regression, and decision trees, for RSSI prediction to increase the availability of FSO/RF systems. Although these models can accurately determine the predicted RSSI value, they are prone to over-fitting, thereby compromising their reliability. Further, decision trees are susceptible to small changes in data, resulting in different outputs.
A work in [15] proposed a predictive link switching mechanism for hybrid FSO/RF systems to conserve energy. The proposed method kept the FSO link continuously active and sampled the FSO signal periodically to collect a dataset, which was used to train an LSTM model. The work correlated the number of FSO signal power samples in the RF transmission with the prediction error using a predefined error threshold to improve energy efficiency. This work, however, is preliminary and does not consider the dynamics of a hybrid FSO/RF communication system, which makes modeling the environment to optimize network performance challenging.
Another work in [16] proposed a hard FSO/RF switching mechanism by predicting the RSSI value. The authors analyzed the effects of the atmospheric channel on the quality of the optical signal. Although this analysis studied both soft and hard switching between FSO and RF, primary consideration was given to hard switching. The work evaluated various machine learning classifiers, such as AdaBoostClassifier, DecisionTreeClassifier, RandomForestClassifier, and GradientBoostingClassifier, for RSSI prediction to enable efficient hard FSO/RF switching. Similar to [15], the work is just an evaluation study of single/ensemble machine learning classifiers for hard switching between FSO and RF links. The contribution is limited and pays no attention to modeling the dynamics of the atmospheric channel and its effect on soft switching.
According to the above investigation, it is evident that there are only preliminary studies, including ensemble/single machine learning, for link switching in hybrid FSO/RF systems. Only a few works harness the potential of deep learning, such as LSTM, in the field of hybrid FSO/RF systems. To the best of our knowledge, no work has investigated the performance of deep reinforcement learning for hybrid FSO/RF systems with a focus on link switching. Considering the practicability of deep reinforcement learning in radio resource allocation and management [48], in this work we enable consensus-based DRL for FSO/RF link switching under various weather turbulence conditions.

III. SYSTEM MODEL
In this section, we present the link switching problem, in which a hybrid FSO/RF system consisting of FSO and RF links switches between the links and learns the link states. Below, we describe the system model in detail.

A. Link Switching Pattern
The hybrid system is equipped with RF and FSO links, each with two possible states: Tx_ready and Tx_switch. If a link is in the Tx_ready state, it can successfully transmit data. On the other hand, a Tx_switch state indicates an imminent switch; otherwise, transmission can fail. We assume that the states of these links switch dynamically based on atmospheric conditions, such as dust. The FSO link interacts with dust particles under Mie scattering, which occurs when the diameter of the dust particles is comparable to the wavelength of the scattered light. The scattering coefficient is calculated using the distance, link visibility, and wavelength of the transmitted beam. The relationship between the visibility range and dust is given in (1).
The threshold value of the attenuation level is selected based on the visibility and is calculated according to (1) and (3) [12]. In (1), V represents visibility in km, and C is the concentration of dust (g/m^3). The relationship between dust particle concentration and the scattering coefficient is given in (2) [12].
In (2), τ_s is the scattering transmission, λ denotes the wavelength of the signal, q is a constant, and R is the propagation range.
In [12], the authors investigated the attenuation of the signal due to dust on FSO links using (1). According to this work, the presence of atmospheric particles affects the measured visibility, which correlates directly with the attenuation. The atmospheric visibility V and the beam wavelength λ are used to calculate the specific attenuation γ as given in (3). The coefficient q in (3) is calculated as given in (4).
In Equation (4), V is measured in km and λ in µm. We have selected attenuation thresholds in the range of 100 dB/km to 120 dB/km based on the work in [13]. The analysis of results in [13] shows that over 70% of the time, the attenuation of FSO links approaches 120 dB/km according to the Kruse and Kim models, thereby requiring the link to be switched to RF. On the other hand, attenuation below 100 dB/km is considered too weak to impair optical communication; therefore, the FSO link is used, and the reward used to train the DRL agent is incremented for attenuation values below 100 dB/km. At any time slot t, the FSO/RF links can be in the Tx_ready and Tx_switch states based on acceptable SNR values, as discussed in [12]. The FSO link is considered Tx_ready if the atmospheric observation/attenuation level, denoted γ, is less than 100 dB/km. If γ ≥ 120 dB/km, the link is switched to the RF link due to attenuation of the FSO link. We assume that the link states switch dynamically between Tx_ready and Tx_switch according to the attenuation levels. We model the switching patterns of the link states as a Markov chain M.
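The visibility-to-attenuation mapping and the 100-120 dB/km switching band above can be sketched as follows. The Kim-model form used for (3)-(4) is the standard one from the literature, and the function names, the intermediate "marginal" label for the hysteresis band, and the default wavelength are illustrative assumptions:

```python
def specific_attenuation(visibility_km: float, wavelength_nm: float = 1550.0) -> float:
    """Kim-model specific attenuation gamma in dB/km (standard form of (3)-(4))."""
    v = visibility_km
    # Kim-model size-distribution coefficient q as a function of visibility.
    if v > 50:
        q = 1.6
    elif v > 6:
        q = 1.3
    elif v > 1:
        q = 0.16 * v + 0.34
    elif v > 0.5:
        q = v - 0.5
    else:
        q = 0.0
    return (3.91 / v) * (wavelength_nm / 550.0) ** (-q)


def link_state(gamma: float) -> str:
    """Map attenuation to the FSO link state using the paper's 100-120 dB/km thresholds."""
    if gamma < 100.0:
        return "Tx_ready"       # FSO usable, reward incremented
    if gamma >= 120.0:
        return "Tx_switch"      # switch to the RF link
    return "Tx_marginal"        # band between the two thresholds (assumed label)


state = link_state(specific_attenuation(0.03))  # dense dust: very low visibility
```

With a visibility of 0.03 km the Kim model gives roughly 130 dB/km, which falls in the switch-to-RF region described above.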
At any instant of time t, the link state is represented as given in (5). We assume that the link states only change at the start of each time slot and remain the same within the time slot. The probability that a link state changes from the current state to a different state in M is p, and (1 − p) that it stays the same.
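A minimal sketch of the two-state Markov chain M described above, with a hypothetical flip probability p and the state names from Section III-A:

```python
import random


def simulate_link_states(p: float, slots: int, seed: int = 0) -> list:
    """Simulate the Markov chain M: at each slot boundary the link state
    flips between Tx_ready and Tx_switch with probability p, else stays."""
    rng = random.Random(seed)
    states = ["Tx_ready"]                      # assumed initial state
    for _ in range(slots - 1):
        cur = states[-1]
        if rng.random() < p:
            states.append("Tx_switch" if cur == "Tx_ready" else "Tx_ready")
        else:
            states.append(cur)
    return states


trace = simulate_link_states(p=0.2, slots=10)
```

Setting p = 0 freezes the link in its initial state, while p = 1 makes it alternate every slot, matching the transition rule above.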

B. Transceiver's Observations
The model assumes that the link switching pattern is unknown. The transceiver learns link switching patterns from the observation of attenuation levels γ, based on weather dynamics using (1) and (3) and on acceptable SNR values as discussed in [13]. The observation of the system in time slot t is denoted O^link_{i,t}, where i denotes the FSO or RF link selected for time slot t, t = 1, 2, . . . , M, as given in (6).
The index i represents the FSO and RF links, and φ_{i,t} is the indicator defined as follows. According to the above equations, if link i is not selected for transmission or its state is not known, the observation of that link is zero.

C. Action Space
The transceiver can select the FSO or RF link in each time slot. We consider a discrete action space A = {link_FSO, link_RF}, corresponding to selecting the FSO or RF link. Hence, in each time slot, the system selects an action from the action space A, i.e., accesses the corresponding RF or FSO link, and the condition of the selected link is revealed.

D. Link Switching Problem
In this section, we formulate the link switching problem using γ as discussed in Section III. This work considers link switching pattern learning using actor-critic, DQN, and the novel ensemble consensus DRL approach to make link selection decisions. The agent makes link access decisions based on γ and updates the policy accordingly.
Let us define the reward r_{i,t} obtained if link i, i.e., FSO or RF, is selected in time slot t, as given in (8) below.
Since the hybrid system aims to select the FSO or RF link based on γ (in dB/km) to ensure successful transmissions, the system aims to extract a policy, i.e., a mapping from the observations O^link to the action space A, that maximizes the long-term expected reward R of link access given in (9).
The reward R for a finite number of time slots T is expressed in (10), where φ_{i,t} is the indicator function defined in (7). The problem can be formulated as (11), where k represents the number of links (FSO and RF) that the system can select among in each time slot; accordingly, R ∈ [−k, k].
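The indicator-weighted return in (10) can be sketched as below; the ±1 per-slot reward values are an assumption for illustration, since the exact form of (8) is not reproduced here:

```python
def episode_return(rewards, selected, T):
    """Accumulated reward R over T slots, per (10): only the link selected
    in slot t (indicator phi_{i,t} = 1 from (7)) contributes r_{i,t}."""
    R = 0.0
    for t in range(T):
        for i in ("FSO", "RF"):
            phi = 1 if selected[t] == i else 0   # indicator phi_{i,t}
            R += phi * rewards[t].get(i, 0.0)
    return R


# Assumed rewards: +1 for a successful transmission on the chosen link, -1 for a failed one.
r = episode_return(
    rewards=[{"FSO": 1.0}, {"RF": 1.0}, {"FSO": -1.0}],
    selected=["FSO", "RF", "FSO"],
    T=3,
)
```

Because at most k links contribute per slot and each per-slot reward lies in [-1, 1] under this assumption, the per-slot return stays within [-k, k], consistent with the bound stated above.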

IV. DEEP REINFORCEMENT LEARNING-BASED AGENT FOR HYBRID FSO/RF SYSTEM

This section presents the framework for the proposed hybrid FSO/RF system based on a deep reinforcement learning agent. We consider various DRL approaches, including DQN and actor-critic, to switch between RF and FSO links. Further, we consider the performance comparison of the actor-critic agent with the MyOpic policy under complex and dynamic weather conditions. Finally, we propose a consensus ensemble representation learning DQN approach for low switching costs under time-varying and dynamic weather conditions. To the best of our knowledge, this is the first study and implementation of DQN and actor-critic, and the first proposal of ensemble consensus representation learning DQN, for link switching in hybrid FSO/RF systems.

A. Deep Reinforcement Learning Agent
FSO/RF link state: As discussed earlier, the FSO/RF link state is time-varying and can be modeled by a Markov decision process (MDP). The agent uses its observation space as input to the DRL framework and selects the FSO or RF link in each iteration. The agent updates the reward based on the state of the selected FSO or RF link. The FSO/RF system learns the best policy from previous experiences in the form of the observation space O^link_{i,t} for link i, given in (6), for time slot t, which is updated for time slot t + 1; MAX denotes the maximum number of iterations to observe the FSO/RF link state.
Action space: The agent evaluates different actions from the action space A using the observation space O^link_{i,t} and selects the one with the highest reward. In the context of FSO/RF switching, an action means accessing the RF or FSO link for data transmission in time slot t.
Reward: Once the agent selects an action from A, it observes an instantaneous reward r_{i,t}, given in (8), based on γ. The basic objective of the reward is to solve the optimization problem defined in (11).
We also define the reward received by the agent in time slot t, in addition to (10), for the actor-critic DRL agent as follows, where φ_{i,t} is the link selection indicator given in (7) and r_{i,t} is as in Equation (8).

B. Actor/Critic-FSO/RF DRL Algorithm
It can be observed from Figure 2 that the actor-critic framework is based on actor and critic neural networks. As shown in Algorithm 1, the actor neural network is initialized with parameters θ and the critic with parameters µ. The actor neural network maps an observation O^link at time slot t to an action a from A using the optimal policy π*, as given in (13).

In (13), A is discrete, and the normalized probability of each action is calculated using the Softmax function at the output layer of the actor network. Here, π_θ(O^link) is the mapping policy, which is a function of the link observations O^link parametrized by θ. The actor neural network, therefore, can be represented as (14).
The critic neural network is based on a value function V(O^link). The critic receives feedback from the actor network, which records V(O^link) by executing the action in the environment with varying weather conditions at time slot t. The actor feedback consists of r_t and the observation O^link_{t+1} for the next time slot t + 1. The critic uses this value to calculate the temporal difference (TD) error as given in (15), where γ ∈ (0, 1).
The critic network uses the optimal value function Value* to minimize the least-squares temporal difference (LSTD), as given in (16).
The actor uses the TD error given in (15) to compute the policy gradient as given below in (17).
In (17), π_{θ_t}(O^link_t, a_t) represents the score of the action, i.e., the selected FSO/RF link under the current optimal policy π*. Given this, the parameters of the actor neural network can be updated using ∆θ_t ← α∇_{θ_t} log π_{θ_t}(O^link_t, a_t) with a learning rate α ∈ (0, 1). The gradient for the actor network is computed using (18).
The critic network collects the Max most recent observations of the FSO and RF links, denoted O^link_t, at the beginning of time slot t. The actor network chooses link i based on the optimal policy π*. Once the selected link is used for transmission, the observed reward is recorded. The critic network computes the TD error using the reward, the current observation O^link_t, and the next observation O^link_{t+1}. The computed error updates the gradients of both the critic and actor neural networks.
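The TD-error and actor-update steps in (15), (17), and (18) reduce to a few lines. This is a numerical sketch with hypothetical reward, value, and gradient inputs, not the full neural-network implementation:

```python
import numpy as np


def td_error(r_t, v_next, v_cur, gamma=0.9):
    """TD error from (15): delta = r_t + gamma * V(O_{t+1}) - V(O_t)."""
    return r_t + gamma * v_next - v_cur


def actor_step(theta, grad_log_pi, delta, alpha=0.01):
    """Policy-gradient update per (17)-(18): theta += alpha * delta * grad log pi(a | O)."""
    return theta + alpha * delta * grad_log_pi


# Hypothetical critic values and score-function gradient for one time slot.
delta = td_error(r_t=1.0, v_next=0.5, v_cur=0.2)             # 1 + 0.9*0.5 - 0.2
theta = actor_step(np.zeros(3), np.array([0.1, -0.2, 0.4]), delta)
```

A positive delta reinforces the selected link (the update moves θ along the score direction); a negative delta suppresses it, which is the mechanism the paragraph above describes.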
Algorithm 1 presents the steps of the Actor/Critic-FSO/RF DRL agent. Lines 1 and 2 of the algorithm initialize two neural networks: actor and critic. The critic network critic_FSO/RF is parametrized with µ and the actor network actor_FSO/RF with θ. The initial observations O^link_{i,t}, where i is the FSO or RF link, are initialized to 0 for each time slot t. Lines 6 to 8 select the FSO/RF link for time slot t as dictated by the policy π under θ, receive the reward based on the link state, and update the observation for the next time slot. In line 10, based on the observation O^link_{t+1} for time slot t + 1, the critic network computes the TD error. Line 11 uses an optimal value function to minimize the error. In line 12, the actor network chooses the RF or FSO link based on the optimal policy π*.

C. Deep-Q Network for Hybrid FSO/RF System (DQN-FSO/RF)
This section illustrates link switching using DQN for the hybrid FSO/RF system. 1) Q-learning: A Q-learning-based hybrid FSO/RF system has the potential to learn switching patterns between FSO and RF links directly from γ, as discussed in Section III. This makes it an ideal solution for weather dynamics, such as dust storms. Q-learning for the hybrid FSO/RF system aims to find an optimal policy for efficient FSO/RF switching that maximizes the accumulated reward R defined in (10). Q-learning learns from actions outside the current policy, which is called off-policy learning. The Q-value, denoted Q(s, a), represents the cumulative reward of an agent being in state s and performing action a, as given in (19).
Equation (20) computes the expected future Q-value as the difference between the old Q-value and the discounted future value with a one-step look-ahead.
2) DQN-FSO/RF: Q-learning can find an optimal policy if it can estimate the Q-function for each state-action pair using (19). This becomes computationally expensive for large state and action spaces. DQN [37] uses a deep neural network, called the Q-network, to approximate the Q-function, i.e., Q(s, a) = r(s, a) + β max_{a'} Q(s', a'). The agent selects an action a_t using Q(s_t, a) and makes a transition to the next state s_{t+1} with reward r_{t+1}. This is represented as a transition/experience tuple Exp_t = (s_t, a_t, r_{t+1}, s_{t+1}), which is saved to a replay memory buffer D.
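Before the function approximation, the underlying off-policy update of (19)-(20) can be sketched in tabular form; the state and action names follow Section III, and the step size alpha is an assumed hyperparameter:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, beta=0.9):
    """One tabular Q-learning step, the off-policy update behind (19)-(20):
    Q(s, a) += alpha * (r + beta * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + beta * best_next - old)
    return Q


A = ("link_FSO", "link_RF")
Q = {}
q_update(Q, "Tx_ready", "link_FSO", 1.0, "Tx_ready", A)
```

Repeating the update moves Q("Tx_ready", "link_FSO") toward the discounted return of keeping the FSO link while the weather stays favorable; DQN replaces this table with the Q-network once the state space grows.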
DQN is based on states, actions, rewards, and the Q-Network Q(s, a; θ). At each time slot t for link i, the Q-Network uses O link i,t using (6) to select action link F SO or link RF from the action space A as illustrated in Figure 3.
The objective of the Q-Network is to maximize the reward as given in (10).
A DQN agent consists of two deep neural networks to approximate the Q-function, as illustrated in Figure 3. One acts as the action-value function approximator Q(s, a; θ_i) and the second as the target action-value approximator Q̂(s, a; θ⁻_i), where θ_i and θ⁻_i represent the weights of the neural networks at iteration i. The weights θ_i of the first network are updated using a mini-batch of random samples from the replay memory buffer D at each learning iteration i.
The weights θ_i are updated using stochastic gradient descent (SGD) and the backpropagation algorithm, minimizing the mean-square error (MSE) as the loss function. Referring to (20), the loss is calculated as (21), where θ represents the weights of the Q-network.
In (21), θ⁻ represents the parameters/weights of the target neural network, which are replaced by the parameters θ of the training Q-network every K time steps, as can be seen in Figure 3. The deep Q-network uses mini-batch data from the replay buffer D to train the Q-network. Instead of using an ε-greedy approach, the experience replay component exploits stochastic prioritization to generate the probability of actions, which helps the neural network converge. The steps of DQN-FSO/RF are summarized in Algorithm 2.
Algorithm 2 shows the steps of the proposed DQN-FSO/RF agent. Lines 1-5 are basic initializations for the DQN-FSO/RF agent, including the replay buffer size, mini-batch size, Xavier weights, the action-value network, and the target network. Lines 6-11 show that for T time slots, each Agent_FSO/RF observes the state of the FSO or RF link. The agent selects the best action, link_FSO or link_RF, to maximize the Q-value of the action-value network. Lines 13-14 record the reward obtained by executing the selected action and update the state for time slot t + 1. Lines 15 to 20 record the tuple stored in the experience replay buffer. For a given time slot t, the tuple includes the current state s, the selected action, i.e., link_FSO or link_RF, the observed reward r, and the state for the next time slot t + 1. Line 18 samples a mini-batch Batch_mini from the replay buffer D, which line 19 uses to compute the loss function via SGD with respect to the action-value network weights θ. Line 20 updates the target network weights θ⁻ to the action-value network weights θ every F steps.
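The replay-memory bookkeeping in lines 15-18 of Algorithm 2 can be sketched as follows; the transition contents are placeholders, since a real agent would obtain them by stepping the FSO/RF environment:

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal experience replay D for Exp_t = (s_t, a_t, r_{t+1}, s_{t+1})."""

    def __init__(self, capacity=10_000):
        # Bounded deque: oldest transitions are evicted once capacity is reached.
        self.buf = deque(maxlen=capacity)

    def store(self, exp):
        self.buf.append(exp)

    def sample(self, batch_size):
        # Uniform mini-batch, as in line 18 of Algorithm 2.
        return random.sample(list(self.buf), min(batch_size, len(self.buf)))


buffer = ReplayBuffer()
for t in range(100):
    # Placeholder transition (state, action, reward, next state).
    buffer.store((t % 2, t % 2, 1.0, (t + 1) % 2))
batch = buffer.sample(32)
```

The sampled `batch` is what the loss in (21) would be computed over; prioritized variants, as mentioned above, replace the uniform `sample` with a weighted draw.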

V. DQN-FSO/RF POLICY SWITCHING COST PROBLEM FORMULATION
This section focuses on the policy switching cost, denoted link_switch^{RF/FSO}, which is used to optimize the DQN-FSO/RF agent. The cost represents the frequency of changes of the deployed policy π in the action-value network, i.e., Q(s, a) in Algorithm 2, over T episodes, as given in (22). The objective of the DQN-FSO/RF agent is to learn an optimal policy π* with a small link_switch^{RF/FSO} cost.
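Counting the switching cost in (22) amounts to counting deployed-policy changes across episodes; a minimal sketch with hypothetical policy labels:

```python
def switching_cost(deployed_policies):
    """Switching cost per (22): number of episodes in which the deployed
    policy pi differs from the one deployed in the previous episode."""
    return sum(
        1 for prev, cur in zip(deployed_policies, deployed_policies[1:])
        if prev != cur
    )


# Hypothetical per-episode deployed policies over T = 5 episodes.
cost = switching_cost(["pi0", "pi0", "pi1", "pi1", "pi2"])
```

An agent that redeploys every F steps pays this cost often; the consensus criterion of the next section aims to keep it small by redeploying only when the learned features disagree.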

VI. ENSEMBLE CONSENSUS REPRESENTATION DEEP REINFORCEMENT LEARNING DQNENSEMBLE-FSO/RF FOR FSO/RF LINK SWITCHING
Inspired by representation learning [45], we adopt the concept that DQN learns to extract informative features of the environment states using the consensus of an ensemble of threads. The proposed criterion switches the deployed policy according to the consensus of features. In the proposed ensemble consensus approach to DQN-FSO/RF, M asynchronous threads sample batches of data from the replay buffer and then extract features of all states to train both the action-value network and the target Q-network.
For a state s, the features extracted by thread m from the target and action-value networks are denoted f_m(target, s) and f_m(action-value, s), respectively. The similarity score between f_m(target, s) and f_m(action-value, s) for each thread m on state s is defined in (23). The average similarity score over a batch of states B for each thread m is given in (24). Equation (25) computes the consensus score of the features using (24). Lines 1-4 in Algorithm 3 initialize the learning rate α, replay buffer size, mini-batch size, and number of asynchronous threads. Lines 6-14 present the steps to compute the average similarity score over the M asynchronous threads. The average similarity score in lines 11-12 is calculated using the features extracted from both the action-value and target networks for each state s, using the mini-batch sampled from the replay buffer D in lines 9-11. Line 15 calculates the ensemble consensus feature score consensusSim() using the average similarity score avgSim(). In contrast to the fixed F steps of DQN-FSO/RF, lines 16-18 update the target network weights θ⁻ to the action-value network weights θ whenever consensusSim() ≤ α.
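A sketch of the similarity pipeline in (23)-(25): cosine similarity for (23) and the mean across threads for (25) are plausible choices for illustration, not necessarily the paper's exact forms, and the per-thread scores below are hypothetical:

```python
import numpy as np


def cosine_sim(a, b):
    """Similarity score between target and action-value features, per (23)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def avg_sim(target_feats, value_feats):
    """Average similarity over a batch of states B for one thread, per (24)."""
    return float(np.mean([cosine_sim(t, v) for t, v in zip(target_feats, value_feats)]))


def consensus_sim(per_thread_avgs):
    """Consensus score across the M threads, per (25); mean aggregation assumed."""
    return float(np.mean(per_thread_avgs))


# Sync target <- action-value weights only when consensus drops to alpha or below.
alpha = 0.8
threads = [0.95, 0.7, 0.6]       # hypothetical avgSim() values for M = 3 threads
should_sync = consensus_sim(threads) <= alpha
```

Here the consensus is 0.75 ≤ α, so the deployed policy would be updated; with agreeing features the sync is skipped, which is exactly how the criterion reduces the switching cost relative to the fixed-interval update.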

VII. TIME COMPLEXITY OF DQNENSEMBLE-FSO/RF
Let us assume that the DQN consists of K layers, with an input layer of size H and the other layers of size m. The time required to find the switching threshold for L links is O(H log L). The time complexity of each Agent_FSO/RF to switch among L links for T iterations is given as follows. Let M denote the number of asynchronous threads in DQNEnsemble-FSO/RF, and let F be the time required for feature extraction and feature-similarity computation between the action-value and target networks. The time complexity of DQNEnsemble-FSO/RF for S states is O(S log F). The training time complexity of each Agent_FSO/RF using DQNEnsemble-FSO/RF with M threads is given below.

A. MyOpic Policy for FSO/RF Link Switching
The MyOpic policy only accumulates the immediate reward obtained from transceiver switching, without considering the future. The MyOpic agent always selects the transceiver that maximizes the immediate expected reward.
Let the state space of the Markov chain be defined by Tx_FSO,ready/Tx_RF,ready and Tx_RF,switch/Tx_FSO,switch, binary vectors representing the state of the FSO and RF links: a component is 1 when γ < 100 dB/km or γ ≥ 120 dB/km, and 0 otherwise, as used in (28).
$\sum_{i} \omega_{s_i} \mathbb{1}(s_{ik}(t) = 1)$ (28)

In (28), $\mathbb{1}$ is an indicator function and $\omega_{s_i}$ is the conditional probability of the hybrid FSO/RF system being in state $s_i$ given past decisions/observations. The MyOpic policy follows a simple round-robin FSO and RF switching procedure and is not optimal [47].
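A minimal sketch of the MyOpic selection rule implied by (28): the agent weighs each state's immediate reward by the belief ω and picks the transceiver with the highest immediate expected reward. The belief and reward values below are hypothetical:

```python
import numpy as np

def myopic_action(belief, immediate_reward):
    """MyOpic: choose the transceiver maximizing the immediate expected reward.

    belief: omega[i] = conditional probability of the system being in state s_i
            given past decisions/observations.
    immediate_reward: reward[a][i] = reward of action a (0 = FSO, 1 = RF)
            when the system is in state s_i.
    No lookahead is performed: future rewards are ignored by design.
    """
    expected = immediate_reward @ belief   # E[r(a)] = sum_i omega_i * r(a, s_i)
    return int(np.argmax(expected))
```

For example, with belief [0.7, 0.3] over two states and a reward matrix that pays 1 only when the chosen link matches the state, MyOpic greedily selects the FSO transceiver (action 0).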

B. Evaluation Setup
We have implemented the proposed DQNEnsemble-FSO/RF, DQN-FSO/RF, and Actor/Critic-FSO/RF in TensorFlow, with the DRL environment built on the OpenAI Gym framework 1 . The code for these DRL agents is available in a GitHub repository 2 . For each iteration, the DRL agents are trained using 600 to 2000 episodes. The DQN-FSO/RF and DQNEnsemble-FSO/RF agents are created using Keras functions. The neural network has 5 layers: one input layer, three hidden layers, and one output layer. The three hidden layers consist of 300, 200, and 100 neurons, respectively, use ReLU activations, and feed a linear output layer. For both the DQNEnsemble-FSO/RF and DQN-FSO/RF agents, actions are selected using the Boltzmann policy [46]. Our evaluations use the Adam optimizer to minimize the loss function given in (21). Other parameters, including the mini-batch size, learning rate, discount factor σ, and experience replay buffer size, are given in Table I. Figure 4 shows that the loss of all agents is high during the initial episodes. This is mainly attributed to the agents not yet having acquired sufficient information about the FSO/RF environment. However, owing to the deep neural networks, the loss drops rapidly and, for all the DRL agents, stabilizes or converges at around 100 episodes, indicating the capability of these agents to adapt to the FSO/RF environment dynamics. The results show that for all three DRL agents, the loss decreases to almost zero as the number of episodes increases. It is evident from Figure 4 that the DQNEnsemble-FSO/RF agent shows a lower loss than DQN-FSO/RF, demonstrating the effectiveness of DQNEnsemble-FSO/RF for FSO and RF switching. In contrast, the Actor/Critic-FSO/RF agent has the highest loss. This trend demonstrates that all the DRL agents converge within a reasonable number of training episodes.
Actor/Critic-FSO/RF converges faster than DQNEnsemble-FSO/RF and DQN-FSO/RF. Figure 5 presents the average reward observed by all the DRL agents over a varying number of training episodes. For all the DRL agents, increasing the number of episodes leads to higher rewards and, ultimately, convergence. The figure shows that the reward for all the DRL agents is low during the first 50 training episodes. For DQNEnsemble-FSO/RF and DQN-FSO/RF, the average reward is highest after 100 episodes.
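The Boltzmann action selection used by both DQN agents can be sketched as follows; the temperature value tau is an assumption, not a parameter reported in Table I:

```python
import numpy as np

def boltzmann_action(q_values, tau=1.0, rng=None):
    """Sample an action with probability proportional to exp(Q / tau).

    Higher tau -> more exploration; tau -> 0 approaches greedy selection.
    The temperature value is an assumption, not taken from the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    q = np.asarray(q_values, dtype=float)
    logits = (q - q.max()) / tau            # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(q), p=probs))
```

With a low temperature and a clearly dominant Q-value, the sampled action is effectively the greedy one, while higher temperatures keep the agent exploring the FSO/RF action space.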

C. Results and Analysis
The Actor/Critic-FSO/RF agent achieves its maximum average reward after 200 episodes, indicating successful switching between the RF and FSO links. This increase in average reward with the number of episodes indicates the convergence of the DRL agents after a reasonable number of training episodes. Although Actor/Critic-FSO/RF achieves the highest possible reward between 100 and 200 episodes, instability in training causes some variance of its average reward over 300 to 600 episodes. It can be observed that DQNEnsemble-FSO/RF converges to a higher average reward than DQN-FSO/RF and Actor/Critic-FSO/RF. Both DQN-FSO/RF and Actor/Critic-FSO/RF achieve higher rewards with fewer training episodes than DQNEnsemble-FSO/RF, indicating faster learning. Further, in contrast to the high variance of the average reward over multiple episodes for Actor/Critic-FSO/RF and DQN-FSO/RF, DQNEnsemble-FSO/RF demonstrates stable training and therefore lower average-reward variance over a larger number of episodes. Overall, the average reward results show that DQNEnsemble-FSO/RF is superior to DQN-FSO/RF and Actor/Critic-FSO/RF, indicating the effectiveness of DQNEnsemble-FSO/RF learning due to a decrease in the overestimation error of the Q-value for FSO and RF switching. Figure 6 shows the mean reward of DQNEnsemble-FSO/RF after interacting with the environment for 250 to 2000 episodes. DQNEnsemble-FSO/RF converges after approximately 1500 episodes. In the early episodes, the reward is low due to limited learning; as the number of training episodes increases, DQNEnsemble-FSO/RF gradually improves and the reward increases. The average reward per episode improves significantly after 1000 episodes. After about 1500 training episodes, the reward flattens out smoothly, indicating DQNEnsemble-FSO/RF's ability to switch successfully between the FSO and RF links.
The loss of both the actor and critic decreases during the training process, which indicates that both the actor and critic reduce the error due to overestimation, helping the Q-learning. The loss curves show that the actor loss decreases dramatically at the beginning of training, after which the reduction gradually becomes unstable. On the contrary, the critic loss is lower from the beginning and stays stable as training continues. As a consequence, the action-value of the critic neural network achieves a higher reward. This, however, does not guarantee an optimal learned policy, as Actor/Critic-FSO/RF may overfit. For DQNEnsemble-FSO/RF, DQN-FSO/RF, and MyOpic, we evaluate the switching cost for 6500, 8500, and 10,000 episodes. During the training process, the target network policy is synchronized with the action-value policy using the consensus of the features of 10 threads. As shown in Figure 8, MyOpic with known P, i.e., 0.5, switches its policy in a round-robin fashion and has the highest switching cost for all the episodes. MyOpic cannot exploit the correlation among the FSO link, the RF link, and the environment for policy switching. DQN-FSO/RF shows a significantly lower switching cost than MyOpic, since the DQN-FSO/RF agent can learn the FSO/RF system dynamics, including the correlation between the FSO link, the RF link, and the environment, i.e., atmospheric conditions. The learned policy switching improves the accumulated reward of DQN-FSO/RF, thereby improving the FSO/RF system performance. In contrast to MyOpic and DQN-FSO/RF, DQNEnsemble-FSO/RF drastically reduces the number of policy switches, representing a low switching cost suitable for hybrid FSO/RF systems operating under stable environments, such as calm atmospheric conditions. DQNEnsemble-FSO/RF's consensus criterion for policy switching achieves better performance with minimal switching cost for 6500, 8500, and 10,000 episodes. The number of action-value policy switches triggered by the consensus criterion decreases as the number of episodes increases.
This results in a significant switching-cost reduction compared to MyOpic and DQN-FSO/RF, and the approach remains more robust than both. Figure 9 investigates the first transition of the DQNEnsemble-FSO/RF execution with a varying number of samples. It can be observed from the figure that DQNEnsemble-FSO/RF's mean error converges quickly, between 0 and 40 steps. The plot considers the error over the initial 100 steps to calculate the mean error. Figure 10 plots the error of DQNEnsemble-FSO/RF for the last transition of a sample captured during the agent's execution. The error value of this transition is used to calculate the means for analyzing DQNEnsemble-FSO/RF's learning performance. Similarly, Figure 11 and Figure 12 show the first and last transitions of the DQN-FSO/RF agent, respectively. These samples represent the smoothest transitions during the agent's execution, demonstrating faster convergence. Figure 13 and Figure 14 plot the behaviour of Actor/Critic-FSO/RF for the first and last transitions, respectively. It is evident from these figures that Actor/Critic-FSO/RF does not converge, in contrast to DQNEnsemble-FSO/RF and DQN-FSO/RF. Further, for the first transition, its mean error remains significantly higher than that of DQNEnsemble-FSO/RF and DQN-FSO/RF, resulting in slow learning. The last transition, however, shows a significantly lower mean error, comparable to DQNEnsemble-FSO/RF and DQN-FSO/RF. Figure 15 demonstrates the stability of DQNEnsemble-FSO/RF, DQN-FSO/RF, and Actor/Critic-FSO/RF. Stability here represents how frequently an agent deviates from the median error for the first and last transitions. As discussed earlier, the last transition of each agent is used to calculate the median value, as it indicates the agent's best performance.
It can be observed from the figure that DQNEnsemble-FSO/RF deviates from the median significantly less often than DQN-FSO/RF and Actor/Critic-FSO/RF for the first transition, demonstrating DQNEnsemble-FSO/RF's stable learning. Its deviation count from the median for the last transition is comparable to the other agents. Overall, DQNEnsemble-FSO/RF exhibits better stability for both the first and last transitions, attributed to its consensus representation learning for updating the deployed policy. The Actor/Critic-FSO/RF agent demonstrates the least stability among all the agents and therefore the slowest learning performance.

D. RSSI versus Visibility Analysis of DQNEnsemble-FSO/RF
This section presents a performance analysis of the DQNEnsemble-FSO/RF agent in the FSO/RF environment for visibility ranges of 5 km to 30 km. Figure 16 shows the mean absolute error (MAE) of the received signal strength indicator (RSSI) for various prediction steps. The results are taken over time intervals of 5 minutes, so each prediction step corresponds to 5 minutes. The figure shows that increasing the number of prediction steps increases the error. This is mainly attributed to the higher weights associated with RSSI values collected earlier in time, which affect the accuracy of the predicted results. A reduction in the visibility of the FSO/RF environment causes fluctuations in the RSSI monitored by the DQNEnsemble-FSO/RF agent, affecting the prediction accuracy.
It can be seen from Figure 16 that for 5-minute RSSI observations by DQNEnsemble-FSO/RF, the MAE values at 1.5 km and 2.5 km are 0.254 dBm and 0.341 dBm, respectively. Similarly, when the predictions are taken over 25 minutes, the values at 1.5 km and 2.5 km increase to 0.423 dBm and 0.48 dBm, respectively. From these results, the MAE rates for 1.5 km and 2.5 km are 83.1% and 86.1%, respectively. It is evident that the increase in the prediction error rate is lower, and remains at an acceptable level, for a smaller number of prediction steps. The randomness in the observed RSSI due to visibility fluctuations is not a linear function, which significantly impacts DQNEnsemble-FSO/RF's learning.
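The MAE metric behind Figure 16 is simply the mean of the absolute prediction errors over a window of observations; a minimal sketch with hypothetical RSSI values:

```python
import numpy as np

def mae_dbm(predicted_rssi, true_rssi):
    """Mean absolute error (dBm) between predicted and observed RSSI values."""
    pred = np.asarray(predicted_rssi, dtype=float)
    true = np.asarray(true_rssi, dtype=float)
    return float(np.mean(np.abs(pred - true)))
```

For instance, predictions of -40 dBm and -42 dBm against observations of -41 dBm each give an MAE of 1.0 dBm; the values reported in Figure 16 are computed this way per prediction step and visibility range.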
To understand the probabilistic distribution of the absolute error of DQNEnsemble-FSO/RF under visibility values of 5 km to 30 km, we plot the cumulative distribution function (CDF) of the absolute error (AE) in Figure 17. AE is computed by comparing the RSSI predicted by DQNEnsemble-FSO/RF with the true RSSI. The figure shows that AE values lower than 0.6 account for 50% of the errors at 5 km. In contrast, at larger visibility ranges, AE values lower than 0.6 contribute more than 70%, and the percentage increases to above 90% for AE values lower than 0.5. Finally, AE values lower than 0.6 account for a maximum of 70%. These results show that the prediction error of DQNEnsemble-FSO/RF increases with the visibility range, while the agent still maintains considerable prediction accuracy.
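The CDF percentages quoted above correspond to the empirical fraction of AE samples at or below a given error level; a minimal sketch with hypothetical AE samples:

```python
import numpy as np

def ae_cdf_fraction(abs_errors, threshold):
    """Empirical CDF of the absolute error evaluated at `threshold`:
    the fraction of AE samples at or below that error value."""
    errs = np.asarray(abs_errors, dtype=float)
    return float(np.mean(errs <= threshold))
```

For example, if half of the AE samples fall at or below 0.6 dBm, `ae_cdf_fraction` returns 0.5, matching the 50% reading taken from the CDF curve at the 5 km visibility range.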

IX. CONCLUSION AND FUTURE WORKS
To overcome the challenges of unknown weather dynamics, such as fog, dust, and sand storms, this work has applied ensemble consensus-based representation deep reinforcement learning to FSO/RF link switching in hybrid FSO/RF systems. The link switching optimization problem has been formulated to maximize the long-term utility of the FSO and RF links as a whole while maximizing the link availability between the transmitter and receiver. Considering the non-convex and combinatorial characteristics of this optimization problem, we have applied actor-critic and DQN methods to a hybrid FSO/RF system under dynamic weather conditions. Compared with the actor-critic, the DQN algorithm achieves faster convergence. Further, to reduce the frequent switching of the deployed policy of DQN, we have proposed DQNEnsemble-FSO/RF, which updates the deployed policy using consensus feature representations learned by an ensemble of asynchronous threads, significantly reducing the switching cost. The results also demonstrate the efficiency of DQNEnsemble-FSO/RF in predicting RSSI for FSO/RF link switching with minimal absolute error over various visibility ranges. We believe this work is the first step toward applying DRL to the link switching problem with consideration of low switching cost for hybrid FSO/RF systems. One interesting direction is to design a deep Q-network algorithm with provable guarantees and generalization that can account for the switching cost over a larger state space than the one considered in this work.