Altered resting-state network connectivity patterns for predicting attentional function in deaf individuals: An EEG study

Multiple aspects of brain development are influenced by early sensory loss such as deafness. Despite growing evidence of changes in attentional functions in prelingually, profoundly deaf individuals, the brain mechanisms underlying these attentional changes remain unclear. This study investigated the relationships between differential attention and resting-state brain network differences in deaf individuals from the perspective of brain network connectivity. We recruited 36 deaf individuals and 34 healthy controls (HC). We recorded each participant's resting-state electroencephalogram (EEG) and event-related potential (ERP) data from the Attention Network Test (ANT). The coherence (COH) method and graph theory were used to build brain networks and analyze network connectivity. First, ERPs in the task state were analyzed. Then, we correlated the topological properties of network functional connectivity with the ERPs. The results revealed a significant correlation between resting-state frontal-occipital connectivity in the alpha band and the alerting N1 amplitude. Specifically, clustering coefficients and global and local efficiency correlated negatively with alerting N1 amplitude, whereas characteristic path length correlated positively with it. In addition, deaf individuals exhibited weaker frontal-occipital connections than the HC group. In executive control, the deaf group had longer reaction times and larger P3 amplitudes. However, the orienting function did not differ significantly from the HC group. Finally, the alerting N1 amplitude in the ANT task for deaf individuals was predicted using a multiple linear regression model based on resting-state EEG network properties. Our results suggest that deafness affects the performance of alerting and executive control, while orienting functions develop similarly to those of hearing individuals.
Furthermore, weakened frontal-occipital connections in the deaf brain are a fundamental cause of altered alerting functions in the deaf. These results reveal important effects of brain networks on attentional function from the perspective of brain connections and provide potential physiological biomarkers for predicting attention.


Introduction
Attention is an essential human ability. It is defined as the orienting and focusing of mental activity or consciousness on a specific object (Ocasio, 2011; Reynolds and Heeger, 2009). The combined effect of multiple attentional functions and the dynamic change of brain activity during attention in deaf individuals have received little consideration. However, considering the complexity of the changes in brain regions required during attention, differences in brain function and structure in deaf individuals affect multiple processes of attentional function (Bonna et al., 2021; Dell Ducas et al., 2021; Pénicaud et al., 2013; Rosemann and Thiel, 2018; Schmidt et al., 2013). Accordingly, to understand the development of attentional abilities, it is necessary to understand the changes in the brain's neural networks during attention in deaf individuals. Therefore, the present study aimed to investigate the neural mechanisms underlying different attentional functions in deaf individuals and to determine the possible correlation between their brain network connectivity and attentional functions.
Attention is a complex cognitive process. Posner and Petersen proposed an attention network model based on cognitive neuropsychological research to describe the workings of the human attention system. This model divides the attentional system into three subcomponents: alerting, orienting, and executive control (Fan and Posner, 2004; Posner and Petersen, 1990; Wang and Fan, 2007). Alerting is the attainment or maintenance of a state in which sensitivity to upcoming information increases (Fernandez-Duque and Posner, 1997); orienting is the shifting of attention to the stimulus to be selected or attended to (Posner, 1980); and executive control is the monitoring and resolution of conflicts between expectations, stimuli, and responses (Fan and Posner, 2004; Fan et al., 2002; Wang and Fan, 2007). This representation of attention provides a basis for the simultaneous measurement of multiple attention functions.
Several studies have used the Attention Network Test (ANT) to examine how attentional networks are affected by hearing loss, but their conclusions have been inconsistent. Studies of deaf adults showed that deaf and hearing individuals had comparable alerting and orienting abilities, suggesting parallel levels of attention across groups; at the executive control level, however, deaf participants showed larger flanker interference effects than hearing participants (Dye et al., 2007). Moreover, a study on the development of attention networks in deaf children found different results: auditory deprivation can impair the development of the alerting network and enhance two elementary operations of orienting, moving and engaging, while the developmental trajectory of the executive control network was neither deficient nor enhanced but comparable to that observed in hearing children (Daza and Phillips-Silver, 2013). Deafness thus does affect the development of attention networks. In contrast, our previous study found that children with hearing loss had significantly poorer alerting and executive control functions than hearing controls, with no significant differences in orienting functions (Tongao et al., 2019). These discrepancies may be attributable to the variety of deaf sample characteristics, the visual features of the target stimuli, and target eccentricity, resulting in heterogeneous empirical findings across tasks (Bavelier et al., 2006; Dye and Bavelier, 2013; Hoemann, 1978). Furthermore, the underlying neural mechanisms of altered attention in deaf individuals remain unclear. Therefore, the major aim of the current study was to select a matched group of deaf and hearing individuals and apply cognitive neuroscience techniques to complement behavioral assessments.
We could then determine whether significant between-group brain differences are found in the presence or absence of serious behavioral deficits and whether neural correlates are more closely connected with behavioral results.
Previous research has revealed the neural mechanisms in the brains of deaf individuals during attentional tasks and demonstrated the crucial roles of the temporal cortex (Cardin et al., 2013; Lomber et al., 2010; Merabet and Pascual-Leone, 2010), prefrontal cortex (Campbell and Sharma, 2020; Kang et al., 2004; Scott et al., 2014), and occipital cortex (Bottari et al., 2011; Codina et al., 2011; De Schonen et al., 2018; Vachon et al., 2013) in attention. Auditory brain regions are activated during visual attentional processing in deaf individuals. This activation is termed cross-modal plasticity, which allows deaf individuals to outperform normal-hearing individuals on some visual attention tasks (Cardin et al., 2013; Ding et al., 2015; Hartmann and Weisz, 2019; Kok et al., 2014; Lomber et al., 2010; Merabet and Pascual-Leone, 2010; Vachon et al., 2013). During the attention process, changes also occur in the visual cortex (especially in the right hemisphere) of deaf individuals. The increase in visual region volume may result from visual compensation in the absence of auditory input (Bavelier et al., 2006; Dewey and Hartley, 2015; Mayberry et al., 2011; Pénicaud et al., 2013; Sharma and Mitchell, 2013) as well as from early exposure to sign language and changes in multisensory integration networks (Bavelier et al., 2001; Mayberry et al., 2011; Neville and Lawson, 1987). In addition, the frontal cortex plays a role in connecting the temporal and occipital cortices when attention is engaged, and in deaf individuals the frontal cortex modulates the cross-modal visual reorganization of the temporal cortex (Campbell and Sharma, 2020; Glick and Sharma, 2020; Rosemann and Thiel, 2018).
Although these studies have expanded our understanding of the neural mechanisms of attention in deaf individuals, they have mainly focused on the activation and deactivation of the brain during task conditions, ignoring functional interactions between different regions of the brain network that may play a crucial role in cognitive processes. In neuroscience, there is a growing consensus that functions are an emergent property of the interactions between brain regions. Brain connections are not just about signaling between individual brain regions; behavior and cognition also emerge from the interactions between cortical areas (Thiebaut de Schotten and Forkel, 2022). Thus, brain activity for a specific function involves the comprehensive efforts of multiple brain regions. Connections can amplify or attenuate brain signals and determine the structure and function of the cerebral cortex. Specifically, at the group (Smith et al., 2009) and individual (Tavor et al., 2016) levels, there are similarities between the synchronous communication between brain regions at rest and the activity of brain regions during a task. The brain at rest is still working; it is characterized by a particular network mode that incorporates distinct regions (Kounios et al., 2008; Zhang et al., 2014). Theoretically, resting-state brain networks may describe the intrinsic allocation of brain resources and are crucial for predicting individual task performance (Northoff et al., 2010; Tian et al., 2017). The potential relationship between the resting-state brain network and brain cognition has been intensively studied in recent years (Brokaw et al., 2016; Falahpour et al., 2018; Romeo et al., 2021). For example, it has been shown that linguistic creativity is highly correlated with high temporal variability of resting-network connections among brain regions, including the lateral prefrontal cortex, the parahippocampal gyrus, and the precuneus (Sun et al., 2019).
The relationship between P3 and resting-state network topology was studied by Li et al. (2015), who showed that larger P3 amplitudes were related to a more efficient resting-state brain network. Zhang et al. (2015) showed that an efficient resting-state electroencephalogram (EEG) network facilitates performance in motor imagery tasks. These studies of resting-state brain networks provided new insight into the underlying mechanisms of brain function and cognition in hearing individuals. Whether resting-state brain networks differ between deaf and hearing individuals, and whether such differences influence the three functions of the attention network, remains unclear. No electrophysiological study has yet linked cognitive differences during attention processing in deaf individuals with differential resting-state brain network connections.
In this study, we aimed to investigate the characteristics of alerting, orienting, and executive control in deaf individuals and to reveal the underlying mechanism of individual differences in attention performance in the deaf. We recruited congenitally deaf participants and age-matched hearing controls. We recorded resting-state EEG data and corresponding event-related potential (ERP) data from an ANT. We first used brain network analysis to study brain network connectivity in deaf individuals. Then, we analyzed ERPs for the three attention functions. Finally, we correlated the resting-state brain network with attention performance and modeled it accordingly to predict the performance of deaf individuals during attention. Our results will support a better understanding of the neural mechanisms underlying attention and the potential contribution of spontaneous brain activity to it, and provide new insights into the underlying neurological mechanisms of attention processing in deaf individuals.

Participants
Sample size estimation was performed using G*Power v.3.1 (Faul et al., 2009). According to the analysis (f = 0.25, α = 0.05, power (1 − β) = 0.95, repeated-measures ANOVA with within-subjects factors), a total sample of 28 subjects was required to detect a reliable effect. The current study recruited 39 deaf and 40 hearing participants, so the sample size was sufficient. Each participant was right-handed, had normal or corrected-to-normal vision, and was without neurological or psychiatric diseases. Deafness was caused by either genetic (hereditary) or pregnancy-related factors (maternal disease or drug side effects). All prelingually deaf individuals had binaural hearing loss of more than 90 dB (frequency range of 125 to 8000 Hz) and did not use hearing aids (currently or in the past). According to a self-assessment questionnaire, all deaf participants were fluent in Chinese sign language (CSL). Moreover, none of the hearing individuals reported hearing loss or knowledge of CSL. Data from three deaf and six hearing individuals were excluded from further analysis due to significant eye movement artifacts, head motion, and muscle artifacts that contaminated more than fifty percent of their trials. After exclusion, the deaf group included 36 individuals (18 females; age range 18-23 years; Mage = 20.8 ± 1.10), while the hearing group included 34 individuals (17 females; age range 18-23 years; Mage = 20.6 ± 1.5). The groups did not differ in age, sex, intelligence, or years of education (see Table 1 for details). Each participant provided written informed consent and was paid for their participation. The local ethics committee of Tibet University authorized the present research, which was conducted in accordance with the Declaration of Helsinki.

Experimental procedures
The study used the ANT designed by Fan et al. (2002) and Posner and Petersen (1990). Throughout the experiment, the center of the screen displayed a fixation cross. The cue stimuli appeared either above or below the fixation cross (spatial cue), in the center (central cue), or not at all (no cue) (Fig. 1). Five horizontal arrows or lines appeared above or below the fixation cross as the target stimulus. The average visual angle of each small arrow was 0.58°, and adjacent arrows were separated by at least 0.06°. The overall visual angle subtended by the stimuli (one central arrow and four flanker arrows) was 3.27°. The five arrows were shown either 1.06° above or below the fixation cross. Participants were required to identify the direction of the center arrow by pressing the left or right button, regardless of whether the flanking arrows were congruent or incongruent (executive attention). The test consisted of six blocks, each comprising 108 trials. The whole study took about 30 min, and the test was prepared using E-Prime 2.0 (Version 2.0, Psychology Software Tools, Inc., Pittsburgh, PA). We calculated the effects of the three attention networks for reaction time (RT) and the associated ERP components. To ensure consistency, all dependent variables isolating the effects of alerting, orienting, and executive attention were computed with the same formulas: alerting = no cue - double cue; orienting = center cue - spatial cue; executive control = incongruent target - congruent target.
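The subtraction logic above can be sketched in a few lines. The condition-mean RT values below are illustrative placeholders, not data from the study:

```python
# Hypothetical mean reaction times (ms) per ANT condition; the condition
# names follow the cue/target design described above.
mean_rt = {
    "no_cue": 620.0,
    "double_cue": 585.0,
    "center_cue": 600.0,
    "spatial_cue": 560.0,
    "congruent": 540.0,
    "incongruent": 645.0,
}

def ant_network_effects(rt):
    """Compute the three ANT network effects from condition-mean RTs
    (Fan et al., 2002); larger values indicate a larger network effect."""
    return {
        "alerting": rt["no_cue"] - rt["double_cue"],
        "orienting": rt["center_cue"] - rt["spatial_cue"],
        "executive": rt["incongruent"] - rt["congruent"],
    }

effects = ant_network_effects(mean_rt)
print(effects)  # {'alerting': 35.0, 'orienting': 40.0, 'executive': 105.0}
```

The same subtractions are applied to the ERP components (e.g., N1 amplitudes per cue condition) to obtain network effects at the electrophysiological level.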

EEG data recording
The EEG was recorded using the NeuroScan Curry 7 system with 64 Ag/AgCl electrodes placed according to the extended international 10-20 system. The EEG data were referenced online to the right mastoid (M2), with a ground electrode on the medial frontal aspect. Horizontal electrooculography (HEOG) was recorded with electrodes placed laterally to both eyes, and vertical electrooculography (VEOG) with electrodes placed above and below the left eye. Continuous EEG was amplified and recorded with an online filtering bandwidth of 0.05-100 Hz and a sampling frequency of 500 Hz. The impedance of all electrodes was kept below 5 kΩ. At the beginning of the experiment, participants were asked to sit comfortably in a chair with eyes closed and to stay awake while resting EEG was recorded for 4 min; afterward, the ANT task was performed.

EEG data analysis
The EEG recordings comprised both task-related EEG data and resting-state data. The task-related EEG data were used to reliably estimate the N1 and P3 amplitudes, whereas the resting-state EEG data were used to build the related brain network. Thus, the EEG data analysis covered the N1/P3 waveforms and the resting-state EEG data. The analysis was conducted using MATLAB 2016a (MathWorks Inc.), and the procedure is represented in Fig. 2. The sections that follow detail the data processing.

Time-domain analysis of task EEG data
The EEG data were preprocessed using MATLAB R2016a (MathWorks, Inc.) and the EEGLAB toolbox (Delorme and Makeig, 2004). Signals were bandpass filtered between 0.1 and 40 Hz. The average of the left and right mastoids was used to re-reference the EEG data. The critical epochs for the stimulus-locked analyses ranged from −500 to 1000 ms relative to stimulus onset, with −500 to 0 ms serving as the baseline. In each epoch, segments containing non-physiological artifacts were rejected by careful visual evaluation. Subsequently, independent component analysis (ICA) was carried out to remove eye blink, movement, and heartbeat artifacts. In addition, signals exceeding a 75 μV amplitude threshold were removed. The cue and stimulus condition data were then averaged. The mean amplitudes of the posterior target N1 were analyzed from 80 to 150 ms and from 150 to 250 ms after target presentation; these components were averaged over parietal (P3, Pz, P4) and occipital (O1, Oz, O2) electrodes. The mean P3 amplitude was examined 300-700 ms after target presentation and was calculated at the FCz, Cz, CPz, and Pz sites; this region of interest was selected based on previous studies (Neuhaus et al., 2010; Williams et al., 2016).
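As a rough illustration of the measurement step (baseline correction over −500 to 0 ms, trial averaging, and mean amplitude within the 80-150 ms N1 window), here is a minimal NumPy sketch on simulated single-channel epochs; the signal shape and values are invented for demonstration and do not reproduce the study's pipeline.

```python
import numpy as np

fs = 500                              # sampling rate (Hz), as in the recording setup
t = np.arange(-0.5, 1.0, 1 / fs)     # epoch from -500 to 1000 ms

# Simulated epochs (trials x samples); a negative Gaussian deflection
# around 120 ms stands in for the target N1 (illustrative only).
rng = np.random.default_rng(0)
n_trials = 40
epochs = rng.normal(0.0, 1.0, (n_trials, t.size))
epochs -= 3.0 * np.exp(-((t - 0.12) ** 2) / (2 * 0.02 ** 2))  # add the "N1"

def mean_amplitude(epochs, t, baseline=(-0.5, 0.0), window=(0.08, 0.15)):
    """Baseline-correct each trial, average across trials, and return the
    mean amplitude inside the measurement window (same units, e.g. uV)."""
    bl = (t >= baseline[0]) & (t < baseline[1])
    win = (t >= window[0]) & (t <= window[1])
    corrected = epochs - epochs[:, bl].mean(axis=1, keepdims=True)
    erp = corrected.mean(axis=0)      # trial-averaged ERP waveform
    return erp[win].mean()

n1 = mean_amplitude(epochs, t)
print(f"mean N1 amplitude in 80-150 ms: {n1:.2f}")
```

In practice the same window-mean is taken on the electrode-averaged ERP for each region of interest before the ANOVA.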

Resting-state EEG network analysis
To reduce the effect of volume conduction, 21 typical electrodes (FP1, FPz, FP2, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2, Oz) were selected from the 10-20 system to construct the brain network (Qin et al., 2010; Xu et al., 2014). The resting-state EEG signals were band-pass filtered in the range of 0.1-40 Hz, average-referenced (Park et al., 2016), and continuously divided into 10-s segments (Fraschini et al., 2016). These segments were then visually inspected, and those with eye or head motion artifacts were removed. In this study, a frequency-specific coherence (COH) method was used to define the functional connectivity strength between two network nodes. COH can effectively capture the coupling of EEG signals in the relevant frequency domain and assess the relationship between the network and cognition, such as motor imagery and attention. In the current study, we used COH to measure functional connectivity by quantifying the synchronization at each frequency point between each pair of electrodes (Zhang et al., 2015). The EEG brain network was then constructed from the 10-s segments in five conventional EEG bands: delta (0.1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (30-40 Hz) (Dasdemir et al., 2017; Mumtaz et al., 2017). The resting-state network topological properties of the weighted network were further calculated using graph theory. Four network properties-clustering coefficient (CC), global efficiency (Ge), local efficiency (Le), and characteristic path length (CPL)-describe the network's capacity for information processing. CC and Le evaluate the potential for functional segregation across lobes, reflecting the local information processing capability of the brain networks. Ge and CPL evaluate the potential for functional integration between brain regions (Rubinov and Sporns, 2010).
For this study, these four weighted network properties were calculated with the Brain Connectivity Toolbox (BCT, www.nitrc.org/projects/bct/).
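A minimal sketch of this connectivity pipeline, assuming SciPy is available: magnitude-squared coherence is averaged within the alpha band for every electrode pair to form a weighted adjacency matrix, and the two integration metrics (CPL and Ge) are then computed by mapping each connection weight w to a distance 1/w, following Rubinov and Sporns (2010). The six-channel signals are simulated; the study itself used 21 electrodes and the BCT.

```python
import numpy as np
from scipy.signal import coherence
from scipy.sparse.csgraph import shortest_path

fs = 500
rng = np.random.default_rng(1)
n_ch, n_samp = 6, fs * 10            # 6 illustrative channels, one 10-s segment
common = rng.normal(size=n_samp)     # shared source induces coherence
eeg = rng.normal(size=(n_ch, n_samp)) + 0.8 * common

def coh_adjacency(eeg, fs, band=(8, 13)):
    """Weighted adjacency matrix: mean magnitude-squared coherence within
    a frequency band (alpha by default) for every electrode pair."""
    n = eeg.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
            sel = (f >= band[0]) & (f <= band[1])
            adj[i, j] = adj[j, i] = cxy[sel].mean()
    return adj

def integration_metrics(adj):
    """Characteristic path length (CPL) and global efficiency (Ge) of a
    weighted network, with weight w mapped to distance 1/w."""
    with np.errstate(divide="ignore"):
        dist = np.where(adj > 0, 1.0 / adj, np.inf)
    np.fill_diagonal(dist, 0.0)
    d = shortest_path(dist, method="D")   # Dijkstra over all node pairs
    off = ~np.eye(adj.shape[0], dtype=bool)
    cpl = d[off].mean()                   # mean shortest path length
    ge = (1.0 / d[off]).mean()            # mean inverse shortest path length
    return cpl, ge

adj = coh_adjacency(eeg, fs)
cpl, ge = integration_metrics(adj)
```

The segregation metrics (CC and Le) follow analogous weighted-graph formulas; in practice all four are obtained directly from the BCT.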

Multiple linear prediction analysis
In the present study, a multiple linear regression model was created with the four network properties (CC, Ge, Le, and CPL) as predictor variables: predicted N1 amplitude = β0 + β1·CC + β2·Ge + β3·Le + β4·CPL. The leave-one-out cross-validation method was used to predict individual attentional performance (Tian et al., 2017; Xu et al., 2010; Zhang et al., 2016). For n samples, in each leave-one-out fold, n − 1 samples were used for training and the remaining sample was used for testing. The regression coefficients were estimated on the n − 1 training samples to formulate a prediction model, which was then used to predict the attentional performance of the individual in the test set. This process was repeated n times until every sample had served once as the test set. To quantitatively assess prediction performance, we obtained the correlation coefficient between actual and predicted attentional performance using Pearson's correlation analysis, and we measured prediction error using the root mean square error (RMSE).
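The leave-one-out procedure can be sketched as follows, assuming ordinary least squares for the regression; the network-property values and regression weights below are synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 36                                    # e.g., one sample per deaf participant
X = rng.normal(size=(n, 4))               # columns stand in for CC, Ge, Le, CPL
beta_true = np.array([-0.5, -0.4, -0.4, 0.6])
y = X @ beta_true + rng.normal(0, 0.3, n)  # synthetic "N1 amplitude"

def loocv_predict(X, y):
    """Leave-one-out cross-validated predictions from a multiple linear
    regression (OLS with an intercept): each sample is predicted by a
    model fit on the other n-1 samples."""
    n = len(y)
    preds = np.empty(n)
    for k in range(n):
        train = np.arange(n) != k
        A = np.column_stack([np.ones(train.sum()), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[k] = np.concatenate([[1.0], X[k]]) @ coef
    return preds

pred = loocv_predict(X, y)
r = np.corrcoef(y, pred)[0, 1]            # Pearson r, actual vs. predicted
rmse = np.sqrt(np.mean((y - pred) ** 2))  # prediction error
```

Because each test sample is excluded from its own training fold, r and RMSE estimate out-of-sample prediction performance rather than in-sample fit.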

Statistical analysis
For behavioral data, independent-samples t-tests were performed on the attention network effect values of the deaf and hearing groups. For EEG data, a mixed-model analysis of variance (ANOVA) was used to analyze the amplitudes of N1 and P3. Second, independent-samples t-tests were performed on the network connections and properties (CC, CPL, Ge, Le). The potential relationship between the resting-state EEG brain network and task-state ERPs was then explored using Pearson correlation analysis. Finally, the performance of deaf individuals on attentional tasks was predicted using a multiple linear regression model based on resting-state EEG network properties. When sphericity was violated, p-values were corrected by the Greenhouse-Geisser method, and multiple comparisons were corrected by the Bonferroni method. The significance level was p < 0.05, and ηp² represented the effect size: 0.01 for a small effect, 0.06 for a medium effect, and 0.14 for a large effect. In addition, the correlation analyses and independent-samples t-tests on the resting-state network connectivity results were corrected by FDR (false discovery rate).

Behavioral results

The attention network effects differed significantly [F(2,67) = 4.7, p < 0.005, ηp² = 0.11]. The mean alerting effects of the deaf group were smaller than those of the hearing group [F(1,68) = 0.08, p < 0.005]. Moreover, the executive control scores of the hearing group were substantially smaller than those of the deaf group [F(1,68) = 1.1, p < 0.005]. Notably, there was no significant difference in orienting between the two groups. Fig. 3 illustrates the attention network effects for the two groups (Figs. 4 and 5).

ERP results
Alerting. For the normalized target N1 amplitude, we conducted a 2 (group: deaf, hearing) × 2 (brain region: parietal, occipital) × 3 (hemisphere: left, middle, right) mixed ANOVA on the mean N1 amplitude. Results showed a significant main effect of brain region [F(1,69) = 44.86, p < 0.01, ηp² = 0.088], with the occipital region showing a more negative N1 amplitude than the parietal region; the hearing group (0.514 ± 0.069 μV) showed a more negative N1 amplitude than the deaf group (0.982 ± 0.067 μV). The main effect of hemisphere was not significant [F(2,68) = 5.44, p > 0.05, ηp² = 0.057]. Moreover, the interaction between group and hemisphere was significant [F(2,68) = 21.38, p < 0.01, ηp² = 0.09]; in the left hemisphere, the N1 amplitude was more negative in the hearing group (0.393 ± 0.07 μV) than in the deaf group (1.09 ± 0.07 μV). The interaction between group and brain region was not significant [F(2,68) = 3.94, p > 0.05].

Orienting. For the normalized target N1 amplitude, we conducted the same 2 (group) × 2 (brain region) × 3 (hemisphere) mixed ANOVA on the mean N1 amplitude. Results showed a significant main effect of brain region [F(1,69) = 6.01, p < 0.05, ηp² = 0.051], with the parietal region (1.11 ± 0.062 μV) showing a more negative N1 amplitude than the occipital region (1.406 ± 0.104 μV).

Executive. For the normalized target P3 amplitude, we conducted a 2 (group: deaf, hearing) × 2 (brain region: centro-parietal, fronto-central) mixed ANOVA on the mean P3 amplitude. Results showed a significant main effect of brain region [F(1,69) = 76.8, p < 0.001, ηp² = 0.072], with the centro-parietal region (0.545 ± 0.082 μV) showing a more positive P3 amplitude than the fronto-central region (0.131 ± 0.096 μV).
The main effect of group was significant [F(1,69) = 10.71, p < 0.05, ηp² = 0.026]; the deaf group (0.619 ± 0.121 μV) showed a more positive P3 amplitude than the hearing group (0.057 ± 0.121 μV). Other interactions were not significant.

Resting-state network results

Fig. 6(a) illustrates the differences in network connectivity across the five frequency bands between the deaf and hearing groups. Independent-samples t-tests revealed that, in the alpha band, deaf participants showed weaker resting-state frontal-occipital connections than hearing participants (p < 0.05, FDR corrected). In the other four frequency bands, there were no significant resting-state network differences between deaf and hearing individuals (p > 0.05). Potential group differences in network topology properties were explored by two-sample t-tests. As shown in Fig. 6(b), the CC, Ge, and Le of the deaf group were significantly smaller than those of the hearing group in the alpha band, whereas the CPL of the deaf group was significantly higher.

Relationships between resting-state network topology properties and alerting N1 amplitude
As shown in Fig. 7(a), we explored the correlation between alpha-band network topology and alerting N1 amplitude. We found significant correlations between frontal-occipital linkages and alerting N1 amplitude (p < 0.05, FDR corrected). We then analyzed the potential alpha-band relationships between alerting N1 amplitude and the resting-state network properties (CC, CPL, Ge, and Le). As shown in Fig. 7(b), in the alpha band, CC (r = −0.356, p = 0.044), Ge (r = −0.352, p = 0.045), and Le (r = −0.35, p = 0.045) were significantly negatively correlated with alerting N1 amplitude, while CPL was positively correlated with it (r = 0.358, p = 0.043).

Multivariate linear prediction analysis
The resting-state network properties were significantly correlated with the attentional index. Therefore, the network properties (CC, CPL, Ge, and Le) may be used as features to predict the individual attentional index in the ANT task. The relationship between actual and predicted N1 amplitude is shown in Fig. 8, where the X- and Y-axes represent the predicted and actual N1 amplitudes, respectively. The corresponding Pearson's correlation coefficient was r = 0.40 (p = 0.01), and the RMSE was 13.32%.

Discussion
In this study, we explored the characteristics of three different components of the attention network in deaf individuals. Furthermore, we investigated the relationship between network features and attentional function from the perspective of brain network connectivity, aiming to better understand the underlying neural mechanisms of attentional deficits in deaf individuals. Based on resting-state functional connectivity and ERP techniques, we evaluated the attentional abilities of deaf individuals at three levels (behavioral, ERP, and brain network). We further correlated the topological properties of functional network connectivity with ERPs. The results revealed a substantial correlation between the alerting N1 amplitude and resting-state frontal-occipital connectivity in the alpha band. Specifically, clustering coefficients and global and local efficiency correlated negatively with alerting N1 amplitude, whereas characteristic path length correlated positively with it. The deaf individuals exhibited weaker frontal-occipital connections than the HC group. In executive control, the deaf group had longer reaction times and larger P3 amplitudes, while the orienting function did not differ significantly from the HC group. Finally, the alerting N1 amplitude in the ANT task for deaf individuals was predicted using a multiple linear regression model based on resting-state EEG network properties. These results suggest that deafness affects the performance of alerting and executive control, whereas orienting function develops similarly to that of hearing individuals, and that weakened frontal-occipital connections in the deaf brain are a fundamental cause of altered alerting function in the deaf. This is the first study to investigate the tripartite relationship among differences in visual ERPs, deafness, and resting-state EEG network properties.
We found that the alerting function was substantially lower in the deaf group than in the hearing group, showing that the deaf group had difficulty retaining the alert state when not warned of the new target and were disproportionately slow in no-cue trials. Our behavioral results contradicted those of previous studies. For example, Dye et al. (2007) found that deaf and hearing individuals displayed comparable alerting abilities. Our study had a larger number of participants and controlled for factors such as intelligence and education level in both groups; these sample differences may account for the different results. In terms of executive control ability, the results showed significantly higher executive control scores for the deaf group than the hearing group, indicating that the deaf group needed more time to resolve conflicts, which is similar to studies of attention networks among deaf college students (Dye et al., 2007). In addition, deaf individuals have been found to perform poorly in experiments featuring executive control tasks. Figueras et al. (2008) found that deaf individuals took longer than hearing individuals in a day/night Stroop task; these individuals required longer and greater effort to suppress previously dominant responses. The investigators also used the Tower of Hanoi test, finding that hearing children performed better in executive function than deaf children with cochlear implants (Figueras et al., 2008). Consistent with previous studies, the present study suggested that deaf individuals perform poorly on executive function tasks compared to hearing individuals, indicating that deaf individuals have deficits in their executive control network.
In terms of orienting ability, no significant differences between the two groups of participants were observed, indicating that the two groups did not differ in their ability to select specific information from a large amount of external input. This finding is consistent with the results of a previous study by Parasnis and Samar on the orienting ability of deaf participants. They found that deaf individuals received the same information from valid cues as hearing individuals; thus, no significant differences were exhibited in the attentional orienting ability of the deaf and the hearing (Parasnis and Samar, 1985). However, deaf individuals received less information from invalid cues. Parasnis and Samar suggested that this reflects deaf individuals' increased ability to shift attention away from the location of invalid cues. Bosworth and Dobkins (2002) likewise found that deaf individuals benefited less from valid cues than hearing individuals during a motion discrimination study using both valid and invalid cues.
Moreover, we found that the neurophysiological marker associated with attentional function (N1) corroborated the behavioral alerting performance of the deaf group. The alerting N1 amplitude was lower in the hearing than in the deaf group. N1 represents an early component of visual attention and is thought to index the direction of attention (Natale et al., 2006) or the integration of feature selection and working memory encoding (Zanto et al., 2011). The N1 component behaves oppositely under voluntary and involuntary attention: validly cued stimuli elicit larger amplitudes under voluntary attention but reduced contralateral N1 amplitudes under involuntary attention (Fu et al., 2001). The results of this study suggest that the hearing group allocated more attentional resources to possible incoming stimuli to maintain a heightened state of readiness to perceive the outside world. In addition, we found that in the occipital region, the alerting N1 amplitude was lower in the deaf than in the hearing. Our results are consistent with previous work: as observed in the majority of prior neuroimaging studies (Bell et al., 2019; Bottari et al., 2011; Cardin et al., 2020; De Schonen et al., 2018; Stroh et al., 2022), activation in the occipital region was more robust in the deaf than in the hearing when visual stimuli were presented. Because the auditory cortex of the deaf individual cannot function properly due to deprivation, deaf individuals rely primarily on vision to receive external information, an adaptation that leads to cross-modal reorganization of the visual cortex (Bell et al., 2019; Cardin et al., 2020; Codina et al., 2011; Ding et al., 2015; Vachon et al., 2013).
We also found that the P3 amplitude in the prefrontal region was significantly greater in deaf individuals than in hearing individuals, which we do not attribute to increased executive function. Rather, we believe that the executive control function of the deaf participants was overactivated during the conflict-resolution task. This result aligns with the behavioral outcomes: deaf individuals spent more time resolving conflict and devoted more attentional resources to the conflict-resolution task, yet their behavioral performance remained poorer, suggesting that their executive function was less efficient than that of hearing individuals. These results are consistent with the "cortical inefficiency" hypothesis (Krompinger and Simons, 2011; Wagner et al., 2006), which interprets such physiological patterns as an "inefficiency" of the brain regions responsible for conflict processing in deaf individuals. Significantly larger P300 amplitudes have also been found in visual P3 studies of deaf individuals relative to hearing individuals, which has been explained by more pronounced multisensory interaction in hearing than in deaf individuals (Hauthal et al., 2014; Yukhymenko et al., 2019).
Finally, we found that in the alpha band, the deaf group had weaker long-range frontal-occipital connections; significantly smaller clustering coefficients, global efficiency, and local efficiency; and significantly larger characteristic path length. These topological network properties were significantly correlated with the alerting N1 amplitude. When the resting-state EEG is recorded with eyes closed, it is characterized mainly by alpha oscillations. The alpha band has been linked to many cognitive features, including working memory capacity (Richard Clark et al., 2004), information processing speed (Klimesch, 1996), and inhibition (Klimesch, 2012), as well as to attentional arousal and cognitive preparedness (Angelakis et al., 2004). Overall, brain activity in the alpha band may preconfigure the attentional and cognitive resources critical for subsequent tasks, thereby indexing the brain's potential ability to process information efficiently during the task. Thus, the diminished alpha-band activation in the deaf compared to the hearing may reflect insufficient attentional and cognitive resources allocated to the subsequent vigilance task. Moreover, the attentional alerting system is associated with frontal and parietal regions, mainly the thalamus, frontal lobes, and areas of the parietal lobes (Fan et al., 2005; Paus et al., 1997; Raz and Buhle, 2006). In ANT tasks, alerting-related activation typically begins in the occipital regions associated with visual cognition and then extends to the frontoparietal regions and thalamus associated with the alerting response (Dilks et al., 2013; Fan et al., 2005; Neuhaus et al., 2010).
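For readers less familiar with these measures, the pipeline from resting-state EEG to the four graph-theoretic properties can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual preprocessing: the channel count, window length, and the proportional binarization threshold are arbitrary assumptions made only for the example.

```python
import numpy as np
import networkx as nx
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250                      # sampling rate (Hz), assumed
n_ch, n_samp = 8, fs * 10     # 8 channels, 10 s of synthetic "resting EEG"
eeg = rng.standard_normal((n_ch, n_samp))

# Pairwise alpha-band (8-13 Hz) magnitude-squared coherence as edge weights
W = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        f, coh = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
        alpha = (f >= 8) & (f <= 13)
        W[i, j] = W[j, i] = coh[alpha].mean()

# Binarize at a proportional threshold (70th percentile, arbitrary),
# then compute the four network properties discussed above
thresh = np.percentile(W[np.triu_indices(n_ch, 1)], 70)
G = nx.from_numpy_array((W > thresh).astype(int))
C = nx.average_clustering(G)        # clustering coefficient
Ge = nx.global_efficiency(G)        # global efficiency
Le = nx.local_efficiency(G)         # local efficiency
# characteristic path length is computed on the largest connected component
giant = G.subgraph(max(nx.connected_components(G), key=len))
Lp = nx.average_shortest_path_length(giant)
```

Smaller C, Ge, and Le and larger Lp, as observed in the deaf group, jointly describe a sparser, less integrated network in which information must traverse longer paths.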
In deaf individuals, changes in the visual cortex (especially in the right hemisphere) lead to visual compensation via increased visual-region volume in the absence of auditory input; further changes arise from early exposure to sign language and from multisensory integration networks (Cardin et al., 2013; Dewey and Hartley, 2015; Ding et al., 2015; Hartmann and Weisz, 2019; Lomber et al., 2010; Merabet and Pascual-Leone, 2010; Pénicaud et al., 2013; Vachon et al., 2013). Moreover, the frontal cortex connects with the temporal and occipital cortices when attention is engaged (Campbell and Sharma, 2020; Glick and Sharma, 2020; Rosemann and Thiel, 2018). The frontal-occipital connection is weaker in deaf individuals, and this connection is significantly correlated with the alerting N1 amplitude; this weakened connection makes the alerting process in deaf individuals differ from that in hearing individuals.
In addition, we found that deterioration in vigilance was accompanied by decreases in the clustering coefficient (C), global efficiency (Ge), and local efficiency (Le) and by increases in the characteristic path length (L), and that changes in resting-state network connectivity patterns predicted attentional function in deaf individuals. Theoretically, decreases in C, Ge, and Le and increases in L indicate reduced efficiency of information processing in the brain. Several studies have likewise found reduced resting-state functional connectivity between auditory cortical regions and the attention-control network; these findings showed that deaf individuals exhibited increased inattention, distractibility, and impulsivity; weaker sustained attention; and reduced selective attention and cognitive control (Conway et al., 2009; Horn et al., 2005; Mitchell and Quittner, 1996; Quittner et al., 1994). In cochlear implant recipients, the same reduction in global network efficiency in the alpha band has been found. These findings further suggest that brain connectivity affects the development of alerting. Such changes in the structure and function of different brain regions allow many interconnected regions to participate in multisensory integration and visuospatial attention (Scott et al., 2014), giving deaf individuals different developmental trajectories across attentional functions. Overall, brain connectivity patterns contribute to the diagnosis of neurodevelopmental deficits (Astle et al., 2019). Consideration of brain connectivity thus appears to reconcile brain-lesion studies with functional neuroimaging in healthy volunteers and to provide a more comprehensive biological explanation of clinical manifestations in terms of the disintegration of brain processes. Moreover, brain-disconnection results usually achieve higher explanatory power than the localization of brain regions alone (Pacella et al., 2019).
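The predictive step described above amounts to a multiple linear regression of the alerting N1 amplitude on the four resting-state network properties. A minimal sketch on synthetic data follows; the per-subject values, coefficient magnitudes, and noise level are invented for illustration and only the signs (negative for C, Ge, Le; positive for L) mirror the correlations reported in this study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36  # number of (simulated) deaf participants

# Hypothetical per-subject resting-state network properties
X = np.column_stack([
    rng.uniform(0.2, 0.6, n),   # clustering coefficient C
    rng.uniform(0.3, 0.7, n),   # global efficiency Ge
    rng.uniform(0.3, 0.7, n),   # local efficiency Le
    rng.uniform(1.5, 3.0, n),   # characteristic path length L
])

# Synthetic "alerting N1 amplitude" with the reported sign pattern plus noise
y = -2.0*X[:, 0] - 1.5*X[:, 1] - 1.0*X[:, 2] + 0.8*X[:, 3] + rng.normal(0, 0.1, n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit (coefficient of determination)
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

In this sketch `beta[1:]` recovers the sign pattern of the simulated effects; in practice the fitted model's predictive value would be assessed with cross-validation rather than in-sample R².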
Therefore, translating measures of brain connectivity into standard operating procedures for advanced personalized neuroscience (Satterthwaite et al., 2018) would be beneficial: such procedures could focus on recovery, support the prediction of symptom recovery, and provide new targets for therapy.
There are three main limitations to this study. First, the current study used an undirected network analysis, so it could not demonstrate the direction of information flow between relevant brain regions or a causal relationship between the N1/P3 components and resting-state networks. In the future, we will use causal analyses, such as partial directed coherence, Granger causality, or dynamic causal models, to construct directed networks. Second, the study was cross-sectional rather than longitudinal. Future research should incorporate longitudinal tracking experiments and richer modeling and analysis strategies to obtain more convincing results. Finally, the present results should be interpreted with caution, because they are likely influenced by uncontrollable factors such as sign language use, degree of auditory impairment, and individual differences. Future studies should include additional control groups of deaf speakers and hearing signers to more carefully disentangle the effects of sensory deprivation on attention.

Conclusion
In conclusion, the present study used the Attention Network Test and brain network analysis to investigate the effect of deafness on attention. The results revealed that deafness affects the performance of alerting and executive control, whereas orienting functions in the deaf develop similarly to those of hearing individuals, and that weakened frontal-occipital connections in the deaf brain are a fundamental cause of altered alerting function. The inefficient resting-state brain leads deaf individuals to devote more attentional resources to possible incoming stimuli, allowing them to maintain a highly sensitive state, ready to perceive the external world. The increased conflict effect is critical in showing that individuals with hearing loss might recruit additional compensatory cognitive resources because of their "cortical inefficiency." In general, these findings extend our understanding of attention from the perspective of resting-state brain networks and may provide a potential physiological biomarker to predict attentional responses. The results further illustrate that resting-state network connectivity analysis may be an effective approach for revealing the underlying biological mechanisms of cognitive function.

Declaration of Competing Interest
The authors declare no competing interests.