Laterality and hemispheric specialization of self-face recognition

Inspired by the pioneering work of Eran Zaidel, beginning in the early 1970s, on the role of the two cerebral hemispheres of the human brain in self-related cognition, we review research on self-face recognition from a laterality perspective. The self-face is an important proxy of the self, and self-face recognition has been used as an indicator of self-awareness more broadly. Over the last half century, behavioral and neurological data, along with over two decades of neuroimaging evidence, have accumulated on this topic, generally converging on a right-hemisphere dominance for self-face recognition. In this review, we briefly revisit the pioneering roots of this work by Sperry, Zaidel and Zaidel, and focus on the important body of neuroimaging literature on self-face recognition it has inspired. We conclude with a brief discussion of current models of self-related processing and future directions for research in this area.


Introduction
Many cognitive functions have been shown to draw more on one cerebral hemisphere than the other. These instances of neural specialization are referred to as "hemispheric dominance" or "functional lateralization" (Bradshaw and Nettleton, 1981; Benowitz et al., 1983; Sperry, 1982; Zaidel, 1983a). Although it is not known exactly when, sometime in the 3rd century BC the Greeks had already drawn conclusions about hemispheric specialization that have been documented and translated to mean something akin to: 'There are accordingly two brains in the head. The one gives us our intellect, the other provides the faculty of perception. That is to say: the brain on the right side is the one that perceives, whereas the left brain is the one which understands' (Lokhorst, 1996). Much later, evidence-based conclusions led to theories on the functional specialization of the cerebral hemispheres in the human brain, one of the most famous examples being that of Paul Broca (Broca, 1861, 1863), who linked language production to the left hemisphere based on neurological evidence from patients with brain damage (Dronkers et al., 2007). Although hemispheric specialization is one of the oldest theories put forward in cognitive neuroscience, many questions regarding the lateralization of cognitive functions remain to be answered.
Invertebrates (Frasnelli et al., 2012) and vertebrates (Denenberg et al., 1978) show behavioral and brain asymmetries, suggesting that lateralization arose early in evolutionary history (Hopkins et al., 2015; Hopkins and Cantalupo, 2008). Lateralization implies a functional asymmetry whereby dominance for different functions is established in opposite hemispheres. Lateralization has been presumed to confer certain advantages (Rogers, 2000; Levy, 1977). Some of the advantages attributed to hemispheric functional specialization include increased neural efficiency (for example, by avoiding redundancy), enhanced parallel processing capacity, and the prevention of conflicts between duplicate control systems (Vallortigara, 2006; Gerrits, Verhelst, and Vingerhoets, 2020).
The systematic study of hemispheric functional differences was pioneered by Roger Sperry and colleagues, including Eran Zaidel, who was mentored by Sperry in the 1970s at Caltech. These experiments were part of an exhaustive research effort that included studies in "split-brain" patients - individuals who underwent surgical separation of the two cerebral hemispheres by midline section of forebrain and midbrain commissures to ameliorate otherwise intractable epilepsy (Sperry, 1975; Glickstein and Sperry, 1960; Sperry, 1967).
Studies investigating the lateralization of verbal and spatial functions in split-brain patients were limited because tachistoscopic presentations rarely exceeded 200 ms, preventing participants from scanning the stimuli and thus restricting stimuli to rather simple types of input. During his postdoctoral work, Eran Zaidel invented the Z-lens (Zaidel, 1975), one of his most original methodological contributions to the field of brain research. The Z-lens is a contact lens that stabilizes retinal images, allowing active exploration of a visual stimulus with eye movements while stimulating only half of the visual field during visual exploration. This way, only the hemisphere contralateral to the stimulated visual field receives the visual information (Iaccino, 2014). This novel technique allowed Zaidel to compare the commissurotomized hemispheres on a variety of more complex verbal tasks, enabling the discovery of previously unknown linguistic capabilities of the isolated right hemisphere (Zaidel, 1978; Zaidel, 1983b), as well as exploration of self-awareness in the right 'minor' hemisphere for the first time (Sperry et al., 1979). Findings from this program of experimentation aiming to understand the lateralization of cognitive functions revolutionized neuroscience research by identifying unique capabilities of each hemisphere, as well as elucidating some of the major functions of the corpus callosum in interhemispheric transfer of information and hand-eye coordination, earning Sperry a share of the Nobel Prize in 1981 (Pearce, 2019). Sperry described this work in his Nobel Lecture: 'Accordingly we undertook to test the right hemisphere more specifically for the presence of self recognition and related forms of self and social awareness.
With perception of pictorial stimuli confined to one hemisphere by the scleral contact lens occluder developed by Eran Zaidel, the subject merely had to point to select items in a multiple choice array in answer to various kinds of leading questions regarding his or her knowledge and feelings concerning the content of the pictures. Subject's responses included also differential emotional expressions, thumbs-up, thumbs-down evaluations, exclamations, replies to 20-question type prompting and spontaneous remarks relevant to the emotional aspects of affect-laden stimuli. The results revealed that the disconnected right hemisphere readily recognizes and identifies him or herself among a choice array of portrait photos, and in doing so, generates appropriate emotional reactions and displays a good sense of humor requiring subtle social evaluations. Similar findings were obtained for pictures of the immediate family, relatives, acquaintances, pets, personal belongings, familiar scenes and also political, historical and religious figures, as well as television and screen personalities. The relatively inaccessible inner world of the nonspeaking hemisphere was thus found to be surprisingly well developed. The general level of performance on these tests was in good accord with that obtained from the left hemisphere of the same subject or in free vision. Results to date suggest the presence of a normal and well developed sense of self and personal relations along with a surprising knowledgeability in general' (Sperry, 1982). Zaidel's early work uniquely combined theoretical motivation for self-face recognition studies with novel approaches for assessing lateralization. Other researchers were also beginning to report evidence of self-recognition in left and right hemispheres of split-brain patients during this time using skin resistance changes as an index of arousal, corroborating the existence of conscious awareness in the right "mute" hemisphere (Preilowski, 1977). 
Thus began the inquiry into laterality of self-recognition in the human brain.

Self-face recognition
Self-awareness is a complex aspect of human cognition that continues to be the topic of active interdisciplinary research (de Vignemont, 2018; Eurich, 2018; Arzy, Molnar-Szakacs, and Blanke, 2008; Molnar-Szakacs and Arzy, 2009; Uddin, 2012; Molnar-Szakacs and Uddin, 2013). The self is a multifaceted concept, including autobiographical memory and self-knowledge in the psychological domain, continuity across time in the temporal domain, and awareness of one's body in the physical domain (Gillihan and Farah, 2005). Indeed, the awareness of one's own body is in itself a conceptually complex topic, because the experience is inherently multimodal, subjective, and global. Dieguez and Lopez note that 'the body is the source of its own perception, a subject and an object at the same time' (Dieguez and Lopez, 2017). The 'bodily self' arises from the dynamic integration of bodily and environmental visual, tactile, proprioceptive, vestibular, auditory, olfactory, visceral and motor information (Blanke, 2012).
In the realm of physical aspects of self-awareness involving bodily awareness, most studies have focused on the behavioral and neural correlates of self-face recognition. Visual self-face recognition directly contributes to awareness of the bodily self as distinct from others (Brooks-Gunn and Lewis, 1984; Gallup, Platek, and Spaulding, 2014). Our face represents 'the visual signature of the self, instantly communicating a dynamic synthesis of gender, race, age, emotion, and mood that signals our intentions and performs our identity and persona' (Molnar-Szakacs et al., 2021). This bodily sense of the self, along with the ability for kinesthetic-visual matching, has been argued to form the basis for explicit mirror self-face recognition (Mitchell, 1997). Indeed, one of the most widely used tests of self-awareness involves self-face recognition.
Mirror self-recognition is considered to be the benchmark of objective self-awareness, and 'the mirror test' developed by Gordon Gallup has been widely used in a variety of animals as a non-verbal test of visual self-recognition (Gallup, 1970). In this test, the animal is marked with a dye whilst unaware, and their ability to recognize themselves in a mirror is then observed. If the animal touches or investigates the mark, it is taken as an indication that the animal perceives the reflected image as an image of itself. Gallup claims that 'The unique feature of mirror-image stimulation is that the identity of the observer and his reflection in a mirror are necessarily one and the same. The capacity to correctly infer the identity of the reflection must, therefore, presuppose an already existent identity on the part of the organism making this inference. Without an identity of your own it would be impossible to recognize yourself.' He further explains that mirror experience is not necessary for developing a sense of identity, but is merely a 'means of mapping what the chimpanzee already knows […] chimpanzees may already have a self-concept, and a mirror may merely represent a means of objectifying its existence' (Gallup, 1977). Additional comparative studies providing evidence of mirror self-recognition in chimpanzees, as well as its neural substrates, have since been conducted (Hecht et al., 2017; Hopkins et al., 2019; Povinelli et al., 1993; de Veer et al., 2003). This body of comparative work highlights the importance of understanding the neural basis of self-recognition in other species, as such research can enrich our understanding of the human capacity for self-awareness.
Self-awareness as evidenced by self-face recognition develops around the age of two in humans (Cebioglu and Broesch, 2021; Lewis, Brooks-Gunn, and Jaskir, 1985), and a version of the mirror test where the dye is replaced by rouge - thus called 'the rouge test' - has been commonly used in children (Amsterdam, 1972). In his early work on self-recognition, Eran Zaidel, along with Roger Sperry and Dahlia Zaidel, examined two split-brain patients and reported that both hemispheres were capable of self-recognition (Sperry et al., 1979). In a later study, the senior author of the current review worked with Eran Zaidel and colleagues to assess the ability of the disconnected cerebral hemispheres to recognize images of the self. In this study, a split-brain patient was tested using morphed self-other face images presented to one visual hemifield, and thus to one hemisphere at a time, while making "self/other" judgments. The performance of the right and left hemispheres of this patient, as assessed by a signal detection method, did not differ significantly, and both hemispheres performed significantly above chance at correctly identifying images predominantly containing self-elements. The right and left hemispheres of this patient independently and equally possessed the ability to self-recognize, but only the right hemisphere could successfully recognize familiar others (Uddin et al., 2005b; Uddin, 2011a). These data provide evidence for a modular concept of self-recognition whereby, under certain conditions (i.e., hemispheric disconnection), the capacity for self-recognition of both hemispheres is revealed.
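The signal detection logic behind such "self/other" judgments can be illustrated with a brief sketch. The counts below are hypothetical, not data from Uddin et al. (2005b); the sketch simply shows how a sensitivity index (d') is computed for each hemisphere from hit and false-alarm rates, treating a "self" response to a predominantly-self morph as a hit and a "self" response to a predominantly-other morph as a false alarm.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical trial counts per hemifield presentation (illustrative only).
trials = {
    "left_hemisphere":  {"hits": 42, "self_trials": 50, "fas": 8,  "other_trials": 50},
    "right_hemisphere": {"hits": 44, "self_trials": 50, "fas": 10, "other_trials": 50},
}

for hemi, t in trials.items():
    hr = t["hits"] / t["self_trials"]       # hit rate
    far = t["fas"] / t["other_trials"]      # false-alarm rate
    print(f"{hemi}: d' = {d_prime(hr, far):.2f}")
```

A d' near zero would indicate chance performance; comparable, clearly positive d' values for the two hemifields is the pattern consistent with the finding that both disconnected hemispheres can self-recognize.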
These studies examining self-recognition in the split-brain reveal some of Eran Zaidel's broader views on the cognitive abilities of the two hemispheres of the brain. He believed that they were independently capable of most cognitive tasks, but in some cases approached them differently. He considered that division of labor between the hemispheres might allow them to work in parallel and thus be more efficient at problem solving. He often spoke of different "perceptual styles" of the left and right hemispheres, with the left involved in more analytic processing and the right more in synthetic processing (Zaidel, 1978).
Research in adults has found that when the two hemispheres are connected in neurologically intact brains, the right hemisphere appears to dominate during self-face recognition (see Janowska et al., 2021 for review). Using a behavioral paradigm in which subjects viewed a morphed-face movie sequence and pressed a button when they judged that it became "more self than not self", it was found that when participants responded with their left hand, they were more likely to identify self-to-famous morphed images as self. This self-bias was not seen when subjects responded with their right hand, leading the authors to conclude that the right hemisphere, which controls the left hand, is specialized for processing images of the self (Keenan et al., 1999, 2000). Multiple additional lines of evidence suggest that if task processing is strongly lateralized to a particular hemisphere, responses made by the contralateral hand will show a temporal advantage (Tormos et al., 1997; Berlucchi, Aglioti, and Tassinari, 1997; Hodges et al., 1997). Keenan and colleagues also reported that using the left hand, and thus the right hemisphere of the brain, a split-brain patient was more accurate at recognizing images morphed with the self-face (Keenan et al., 2003). It is worth noting that Eran Zaidel was a strong believer that the best way to assess hemispheric specialization for a task was with tachistoscopic presentation to limit visual input (rather than response hand bias).
In addition to behavioral evidence, many previous studies have highlighted the importance of the right cerebral hemisphere in self-face recognition based on clinical evidence. For instance, some patients with right hemisphere damage are unable to identify their own faces when they are reflected in a mirror (Feinberg and Shapiro, 1989; Spangenberg, Wagner, and Bachman, 1998; Breen, Caine, and Coltheart, 2001; Yun et al., 2014). In a pioneering study, Keenan and colleagues undertook an examination of patients who had each hemisphere anesthetized during a Wada test procedure. When performing the task of differentiating their face from a famous face morph, the patients' ability to self-recognize was disrupted only when the right hemisphere and not the left hemisphere was anesthetized. That is, self-attribution was lowered when individuals viewed self-other morph faces while the right hemisphere was disrupted, providing direct evidence of the role of the right hemisphere in self-face recognition (Keenan et al., 2001b). In an accompanying experiment with 10 participants (not undergoing Wada testing), transcranial magnetic stimulation (TMS) was delivered to the motor cortex of the right or left hemisphere during viewing of self-famous or familiar-famous morphed faces. The amplitude of the resulting motor-evoked potentials (MEPs) was significantly greater for the right hemisphere than for the left hemisphere during presentation of self-morphs, and greater for self-morphs than familiar morphs, providing corroborating evidence that the right hemisphere seems to be preferentially involved in self-face recognition (Keenan et al., 2001a).
Although the link between self-face recognition and self-awareness remains a topic of debate, self-face recognition tests show significant reliability, are simple to use, do not require language, and have gained a large literature base. For these reasons, self-face recognition may be considered one of the most valid measures available for testing and gauging self-awareness (Kramer et al., 2020). The concept of self-awareness as examined through self-face recognition is now commonly used in developmental (Bertenthal and Fischer, 1978; Filippetti and Tsakiris, 2018), clinical (Inoue et al., 2015; Irani et al., 2006; Quevedo et al., 2018), comparative (Gallup, 1977; Hirata, Fuwa, and Myowa, 2017; Reiss and Marino, 2001) and cognitive neuroscience (Sugiura et al., 2005; Uddin et al., 2005a; Kaplan et al., 2008; Platek et al., 2006) studies.
With Eran Zaidel and colleagues, we performed a functional magnetic resonance imaging (fMRI) study of self-face recognition using self-face stimuli morphed with the face of a familiar other. We observed a pattern of signal increases in the right inferior frontal gyrus and inferior parietal cortex that tracked with the amount of self-face presented in the morphed stimuli. In other words, the greater the amount of "self" present in the stimulus, the greater the activation in the fronto-parietal regions of the right hemisphere (Uddin et al., 2005a). These brain regions overlap with the human mirror neuron system (MNS), which maps the actions of others onto one's own motor repertoire via a simulation mechanism (Cattaneo and Rizzolatti, 2009). A large body of research has since accumulated showing that the observation of others' physical bodies can elicit corresponding patterns of activation in one's own MNS (Molnar-Szakacs and Uddin, 2012, 2013; Gamez-Djokic et al., 2015), suggesting that neural representations of our own and others' bodies can overlap. While the frontoparietal brain regions we describe activate during both self- and other-perception, they show increasing levels of activity as subjects perceive more self in the stimuli, thus distinguishing self from other. Similar findings have since been published (Sugiura et al., 2005; Platek et al., 2006; Kaplan et al., 2008) supporting the role of the MNS in physical self-recognition.
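The parametric relationship described above (more "self" in the stimulus, more activation) amounts to fitting a linear trend in regional signal across morph levels. The sketch below shows that logic on invented numbers; the morph levels, signal values, and single-ROI simplification are illustrative assumptions, not values from Uddin et al. (2005a).

```python
import numpy as np

# Hypothetical morph levels (% self in the stimulus) and mean BOLD signal
# change in a right inferior frontoparietal ROI (arbitrary units).
percent_self = np.array([0, 20, 40, 60, 80, 100], dtype=float)
roi_signal = np.array([0.05, 0.12, 0.21, 0.30, 0.41, 0.52])

# Fit the parametric effect: signal ~ slope * (% self) + intercept.
slope, intercept = np.polyfit(percent_self, roi_signal, deg=1)

# Correlation between morph level and signal quantifies how tightly
# activation tracks self-content.
r = np.corrcoef(percent_self, roi_signal)[0, 1]

print(f"slope = {slope:.4f} signal units per % self, r = {r:.3f}")
```

A reliably positive slope in right fronto-parietal regions, absent or weaker on the left, is the kind of result the laterality claim rests on.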
A large number of additional neuroimaging studies using various approaches, including fMRI, electroencephalography (EEG), positron emission tomography (PET), and TMS, have reported a unique pattern of right hemispheric brain activation during self-face recognition that contrasts with the pattern observed during the perception of both familiar and unfamiliar others' faces (Ninomiya et al., 1998; Keenan et al., 2001a, b; Sugiura et al., 2000; Apps et al., 2012; Devue et al., 2007; Morita et al., 2008; Uddin et al., 2006; Taylor et al., 2009). Taken together, this large body of work across various techniques and paradigms has shown that the brain network supporting self-face recognition involves the right inferior frontoparietal cortices and bilateral regions of the inferior occipitotemporal cortices, irrespective of familiarity with the others' faces presented as controls (Platek et al., 2008; Sugiura et al., 2008; Morita et al., 2018; Hu et al., 2016).
Our team continued this line of investigation with Eran Zaidel by examining more specifically the role of the inferior parietal cortex in self-recognition using repetitive TMS (rTMS). We presented morphed images of self and stranger faces to participants and examined self-recognition after stimulation of the left and right inferior parietal cortex region we had previously identified as being activated during self-face recognition using fMRI (Uddin et al., 2005a). Twenty minutes of rTMS over the right inferior parietal cortex, which temporarily altered activation in the area, significantly impaired subjects' performance on the self-face recognition task, leading them to identify an image containing mostly 'other' as 'self'. Left hemisphere stimulation did not have a significant effect on performance (Uddin et al., 2006). In line with our findings, it has been shown that applying rTMS to the right temporo-parietal junction (TPJ) has a similar impact on self-face recognition (Heinisch et al., 2011, 2012). Similarly, Zeugin and colleagues found that TMS to the right TPJ disrupted mental rotation of the self-face compared with familiar faces (Zeugin et al., 2020).
A recent study has also found that disruption of the prefrontal cortex (PFC) in the right hemisphere has more impact in individuals with subclinical grandiose narcissism (Kramer et al., 2020). Individuals high in narcissistic traits appear to have increased self-focus (Fan et al., 2011). Indeed, the researchers found that as narcissistic traits increase, TMS delivered to the right PFC causes a greater disruption of the recognition of one's own face, providing further evidence of the role of right prefrontal cortices in self-related cognition.
Considering neurodiversity in research on human cognition is essential to understanding the full spectrum of brain functioning. Many reports have highlighted the significance of altered self- and other-representations in autism (Kanner, 1943; Rogers and Pennington, 1991; Molnar-Szakacs and Uddin, 2016; Uddin, 2011). In fact, the term "autism" itself is derived from the Greek word "autos", meaning "self". Autism is a lifelong developmental spectrum disorder characterized by persistent deficits in social communication and interaction as well as restricted, repetitive behaviors and interests of varying combinations and severity (American Psychiatric Association, 2013). Enhanced self-focus has also been discussed as a characteristic of autism (Kanner, 1943; Baron-Cohen, 2005). Studying the neural mechanisms subserving self-representations in autism can complement findings in neurotypical individuals. Uddin, Zaidel and colleagues used event-related fMRI to investigate brain responses to morphed self-other images in children with autism spectrum disorder (ASD) and neurotypical children. The children viewed randomly presented digital morphs between their own face and a gender-matched other face and made "self/other" judgments. Both groups of children activated a right premotor/prefrontal system when identifying images containing a greater percentage of the self-face. However, while neurotypical children showed activation of this system during both self- and other-processing, children with ASD only recruited this system while viewing images containing mostly their own face. These neuroimaging findings provide additional support for the dominant role of the right hemisphere in self-face recognition. They are also consistent with clinical observations of higher levels of self-focused behavior in autism, as children with ASD showed decreased neural response to viewing faces of others compared to viewing faces of themselves. However, findings are mixed.
Some studies have reported that self-bias - the tendency to preferentially process self-relevant information - is smaller or absent in autism (Nijhof et al., 2020). Further studies are needed to determine whether and how different aspects of self-processing are related in neurotypical and autistic individuals.
Another developmental neurological condition that is highly relevant to the study of self-face recognition is congenital prosopagnosia, characterized by an impairment in face recognition, including one's own face. Individuals with congenital prosopagnosia appear to show typical activation of the ventral occipital visual areas during face recognition, but fail to activate the more anterior 'extended face network', including the anterior temporal lobe and frontal cortices (Behrmann and Avidan, 2005). It has been proposed that the impairment in congenital prosopagnosia may arise from alterations to white matter fiber tracts connecting posterior visual areas to the anterior nodes of the face processing network (Avidan and Behrmann, 2009). Interestingly, it has been found that congenital prosopagnosics do show a consistent self-face and self-body advantage, although significantly lower than that of controls. Congenital prosopagnosics also show a right perceptual bias - that is, optimal performance when presented with the right side of their face (Malaspina et al., 2016, 2019). Taken together, studies of neurodiverse individuals such as those diagnosed with autism or congenital prosopagnosia produce findings in line with the evidence reviewed thus far for both right hemisphere specialization and unique neural processes devoted to bodily self-perception.

Contributions of laterality findings to cognitive models of self-face recognition
It has been shown that face stimuli can capture attention even when presented subliminally (Eimer and Kiss, 2007; Sato and Kawahara, 2015). However, the self-face activates neural networks beyond those engaged by face perception in general, and even by familiar faces. Self-related information is prioritized by our cognitive system, which manifests in faster and more accurate responses in tasks involving perception of one's own face (Keyes and Dlugokencka, 2014; Tacikowski et al., 2011; Ma and Han, 2010; Miyakoshi et al., 2010). In one study, subliminal presentation of the self-face elicited greater activation of the ventral tegmental area, a center of the dopamine reward pathway, than presentation of others' faces. Subliminal presentations of others' faces induced activation in the amygdala, which generally responds to unfamiliar information (Ota and Nakano, 2021). However, a question regarding the mechanism underlying the prioritization of the self-face remains: is this effect really self-specific, or is it due to the familiarity or salience of the self-face? To address this question, Alzueta and colleagues used EEG to record the brain activity of healthy participants while they discriminated between their own face, a friend's face and a stranger's face. Results showed that the N170 component, reflecting the first stage of the encoding of facial information, was not sensitive to the self-face. In contrast, the subsequent P200 component distinguished between the self-face and the other faces, followed by the N250, whose amplitude increased as a function of face familiarity. These data suggest that there is distinct processing of the self-face that arises at an intermediate stage (~200 ms), as indicated by a lower P200 amplitude (Alzueta et al., 2019).
In a follow-up experiment, time-frequency analysis revealed a greater and sustained decrease in alpha-beta power during self-face processing in comparison with other faces, either familiar or unknown, suggesting that perceiving one's own face could trigger a particular attentional mechanism that modulates the activity of cortical regions dedicated to facial perception (Alzueta et al., 2020). It was found that the alpha rhythm desynchronization was generated in the vicinity of brain areas specialized in face processing, around the intersection between the posterior fusiform gyrus and the inferior/middle occipital gyri, and was strongly lateralized to the right hemisphere. The beta source was more broadly distributed over the occipito-temporal cortex, from primary visual to face-related areas, spanning both hemispheres. The authors interpret the results to indicate that one's own face might have the potential to hijack attention through a top-down attentional control mechanism. Importantly, these studies suggest that this effect is specific to the self-face and cannot be explained in terms of familiarity.
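The ERP component measures these studies rely on (N170, P200, N250) are typically quantified as the mean amplitude of the waveform within a post-stimulus time window. The sketch below illustrates that computation on a synthetic waveform; the waveform shape, window boundaries, and amplitudes are invented for illustration and are not the analysis pipeline of Alzueta et al.

```python
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # time axis in seconds, stimulus at 0

# Synthetic single-channel ERP: a negative deflection near 170 ms (N170-like)
# plus a positive one near 200 ms (P200-like); amplitudes in microvolts.
erp = (-2.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.015 ** 2))
       + 1.5 * np.exp(-((t - 0.20) ** 2) / (2 * 0.020 ** 2)))

def mean_amplitude(signal, times, start_s, end_s):
    """Mean amplitude of `signal` within the [start_s, end_s] window."""
    mask = (times >= start_s) & (times <= end_s)
    return float(signal[mask].mean())

for name, (lo, hi) in {"N170": (0.15, 0.19), "P200": (0.19, 0.26)}.items():
    print(f"{name} window mean: {mean_amplitude(erp, t, lo, hi):+.2f} uV")
```

In a real experiment these window means would be computed per condition (self, friend, stranger) and compared statistically; a self-specific effect at ~200 ms corresponds to a condition difference in the P200 window but not the N170 window.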
Recent theoretical models have proposed that self-face processing is built on the interaction between bottom-up and top-down attentional control (Humphreys and Sui, 2016; Sui and Gu, 2017). The role of the two hemispheres in achieving this interplay between bottom-up and top-down attentional mechanisms was described by Eran Zaidel in the Public Broadcasting Service television series Closer to Truth: 'For important computations, one side will compute while the other side will monitor, maybe compute another way. So one way to describe the division of labor between the two sides is to say that one side is top down, and one side is bottom up. So maybe the left side computes things bottom up from the perception to the cognition. The right side goes top down.
[…] So when we perceive a forest, do we see the forest or do we see the trees first, then those trees make up the forest […]? Turns out we do both. And there is the hypothesis that the left hemisphere is the bottom up and the right hemisphere is top down and by having both working in parallel, we can have the benefit of both processes happening and then comparing the results and of course, then they may agree or disagree and sometimes interfere, and that leads to all kinds of interesting interactions' (How Brain Scientists Think about Consciousness, 2020). In line with this proposal, Bola and colleagues have posited that self-face processing might be driven by bottom-up mechanisms at early stages; then, once we have recognized our own face by activating its memory representation, top-down control mechanisms would come into play, allocating greater and sustained attentional resources to keep the self-face representation in an active state (Wójcik et al., 2019). They investigated this hypothesis, asking whether the underlying mechanism is early, automatic, and unconscious, or later, conscious, and controlled. Using a dot-probe experiment with a backward masking procedure, they were able to isolate the early and preconscious processing stages characteristic of the self-prioritization effect. To address the question of familiarity, they investigated whether a face that becomes visually familiar through repeated presentations captures attention in a similar manner as the self-face. Analysis of the N2pc ERP component revealed that the self-face image automatically captures attention, both when processed consciously and unconsciously. In contrast, the visually familiar face did not attract attention in either the conscious or the unconscious condition. Based on these findings, they were able to conclude that the self-prioritization mechanism is early and automatic and is not triggered by mere visual familiarity (Bola et al., 2021).

Implications for consciousness research: split consciousness?
Throughout his career, Eran Zaidel enjoyed speculating on the question of how consciousness arises. He believed that split-brain research could give insight into the question of whether and to what extent separate consciousness could exist in each disconnected hemisphere. He often said that the separate hemispheres contained separate sensory systems, separate language systems, and separate systems for consciousness. He believed in the duality of consciousness, which is normally hidden in the intact brain but can be revealed when the hemispheres are surgically separated. Roger Sperry noted that studies of split-brain patients demonstrate that these individuals exhibit evidence of remarkably intact cognitive abilities and conscious experiences despite lacking all major connections between the two cerebral hemispheres. He writes: 'the general behavior and conversation during the course of a casual social encounter without special tests typically reveals nothing to suggest that these people are not essentially the same persons that they were before the surgery with the same inner selves and personalities'. He notes, however, that 'despite the outwardly seeming normality … and the apparent unity and coherence of the behavior and personality of these individuals, controlled lateralized testing for the function of each hemisphere independently indicates that in reality these people live with two largely separate left and right domains of inner conscious awareness' and that ' … each surgical disconnected hemisphere appears to have a mind of its own, each capable of controlling the behavior of the body but each cut off from, and oblivious of, conscious events in the partner hemisphere' (Sperry, 1984).
The question of the unity of consciousness can also be examined in hemispherectomy patients who have undergone surgical removal of an entire hemisphere. One of the most interesting aspects of the hemispherectomy phenomenon is that patients can still report intact conscious experiences despite loss of an entire hemisphere (Werth, 2006). One influential theory of consciousness, the global neuronal workspace hypothesis, was first introduced by Dehaene and colleagues (Dehaene et al., 1998). This theory posits that conscious access results from recurrent processing that amplifies and sustains a neural representation, allowing information to be globally accessed. According to this hypothesis, conscious processing relies on recurrent loops between distributed processors in the brain (Mashour et al., 2020). A theory postulating that whole-brain dynamics underlie conscious experience might struggle to account for the phenomenon of preserved consciousness in hemispherectomy and split-brain patients (Uddin, 2020).
Further speculation based on hemispherectomy and split-brain research is that the intact consciousness exhibited by these patients may be due to the fact that the two cerebral hemispheres were independently capable of supporting consciousness to begin with (that is, prior to surgery). The study of these unique patients thus provides an opportunity for empirical investigation of questions surrounding the unity of consciousness.

Conclusion & future directions
Research on the role of the hemispheres in self-representation, initiated by Eran Zaidel under the mentorship of Roger Sperry and made possible at the time by the Z-lens he invented, has since blossomed into a fruitful and active area of research in psychology, neuroscience, neurology, and philosophy. Questions regarding the brain bases of self-awareness and self-representation in the two hemispheres continue to fascinate researchers as well as the general public. A promising next step toward understanding the neural underpinnings of how we recognize and conceptualize ourselves is to investigate self-related modulation of top-down and bottom-up processes, especially in the context of predictive processing models (Moutoussis et al., 2014; Seth, 2013; Apps and Tsakiris, 2014). The predictive processing framework postulates that the brain operates as a prediction machine based on the interplay between precision-weighted prediction errors and the hierarchical structure of the generative models instantiating predictions. Moreover, the predictive processing account provides a theoretical grounding for the proposal that self-representation acts as an integrative hub or 'integrative glue' (Sui and Humphreys, 2015). Future work should continue to investigate how attentiveness to self-relevant inputs arises, and further probe how specific hubs in the brain may act as gatekeepers during this process (Molnar-Szakacs and Uddin, 2022). As Sui and Rotshtein (2019) have put it, '… self-related information acts as a global modulator of attentional processing'.
In fMRI research, hemispheric dominance is typically measured by a laterality (or asymmetry) index that quantifies the relative left- and right-hemisphere contributions to functional activation patterns. Methodological issues in the assessment of laterality indices continue to emerge (Seghier, 2008). Tools from network neuroscience are increasingly used to quantify inter- and intra-hemispheric communication (Kliemann et al., 2019; Gee et al., 2011; Stark et al., 2008). Those of us who worked with Eran Zaidel know that, in his inexhaustible enthusiasm, he would have welcomed the development of novel methods and new approaches for examining his favorite topic: the lateralization of function in the two cerebral hemispheres.
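As a concrete illustration, the standard laterality index mentioned above, LI = (L − R) / (L + R), can be sketched in a few lines of Python. This is a minimal sketch of the general formula only; the function name and the voxel counts in the usage example are hypothetical and not drawn from any cited study.

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """Standard laterality index: LI = (L - R) / (L + R).

    Inputs are per-hemisphere activation measures (e.g. counts of
    suprathreshold voxels within homologous regions of interest).
    LI ranges from +1 (fully left-lateralized) to -1 (fully
    right-lateralized); values near 0 indicate bilateral activation.
    """
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left_activation - right_activation) / total


# Hypothetical voxel counts for a self-face > other-face contrast:
li = laterality_index(left_activation=120, right_activation=380)
print(round(li, 2))  # a negative LI indicates right-hemisphere dominance
```

Under this sign convention, the right-hemisphere dominance for self-face recognition reported in the literature would correspond to negative LI values; studies differ in how they threshold activation and in the cutoff (often |LI| > 0.2) used to classify a subject as lateralized.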

Data availability
No data was used for the research described in the article.