4 Embodied Medicine: What Human-Computer Confluence Can Offer to Health Care

In this chapter we claim that the structuration, augmentation and replacement of bodily self-consciousness is at the heart of research in Human Computer Confluence, requiring knowledge from both the cognitive/social sciences and advanced technologies. Moreover, the chapter suggests that this research has a huge societal potential: different technological tools can be used to modify the characteristics of bodily self-consciousness with the specific goal of improving the person's level of well-being. After discussing the characteristics of bodily self-consciousness, the chapter suggests that different disorders – including PTSD, eating disorders, depression, chronic pain, phantom limb pain, autism, schizophrenia, Parkinson's and Alzheimer's – may be related to an abnormal interaction between perceptual and schematic contents in memory (both encoding and retrieval) and/or prospection, altering one or more layers of the subject's bodily self-experience. A critical feature of the resulting abnormal representations is that, in most situations, they are not accessible to consciousness and cannot be directly altered. When this happens, the only working approach is to create or strengthen alternative ones. The chapter discusses the possible use of technology for a direct modification of bodily self-consciousness and presents three different strategies: the structuration of bodily self-consciousness through the focus and reorganization of its contents (Mindful Embodiment); the augmentation of bodily self-consciousness to achieve enhanced and extended experiences (Augmented Embodiment); and the replacement of bodily self-consciousness with a synthetic one (Synthetic Embodiment).


Introduction
The term "Human Computer Confluence" (HCC) refers to (Ferscha, 2013): "An invisible, implicit, embodied or even implanted interaction between humans and system components… Researchers strive to broker a unified and seamless interactive framework that dynamically melds interaction across a range of modalities and devices, from interactive rooms and large display walls to near body interaction, wearable devices, in-body implants and direct neural input and stimulation" (p. 5).
The work of different European research initiatives is currently shaping human computer confluence. In particular, these initiatives are exploring how the emerging symbiotic relation between humans and information and communication technologies – including virtual and augmented realities – can be based on radically new forms of sensing, perception, interaction and understanding (Ferscha, 2013). Specifically, the EU funded HC2 CSA (www.hcsquared.eu) identified the following key research areas in human computer confluence:
- HCC DATA: new methods to stimulate and use human sensory perception and cognition to interpret massive volumes of data in real time;
- HCC TRANSIT: new methods and concepts towards unobtrusive mixed or virtual reality environments;
- HCC SENSE: new forms of perception and action in virtual worlds.
On one side, different studies have demonstrated the possibility of using technology – in particular virtual reality (VR) – to develop virtual bodies (avatars) able to induce bodily self-consciousness (see Riva & Waterworth, 2014; Riva, Waterworth, & Murray, 2014, and the chapter by Herbelin and colleagues in this book for an in-depth description of this possibility).
On the other side, bodily self-consciousness plays a central role in structuring cognition and the self. First, according to the influential Somatic Marker Hypothesis, modifications in bodily arousal influence cognitive processes themselves. As underlined by Damasio (1996): "Somatic markers" – biasing signals from the body – "influence the processes of response to stimuli, at multiple levels of operation, some of which occur overtly (consciously, 'in mind') and some of which occur covertly (non-consciously, in a non-minded manner)" (p. 1413).
Furthermore, it is through the development of their own bodily self-consciousness that subjects define their boundaries within a spatial and social space (Bermúdez, Marcel, & Eilan, 1995; Brugger, Lenggenhager, & Giummarra, 2013; Riva, 2014). According to Damasio (1999), the autobiographical self emerges only when – to quote the book's title – self comes to mind, so that in key brain regions the encoded experiences of the past intersect with the representational maps of whole-body sensory experience. Starting from the above concepts, the chapter claims that the structuration, augmentation and replacement of bodily self-consciousness is at the heart of research in Human Computer Confluence, requiring knowledge from both the cognitive/social sciences and advanced technologies (see Figure 4.1). Furthermore, the chapter aims at bridging technological development with bodily self-consciousness research by suggesting that technologies can be developed and used to modify the characteristics of bodily self-consciousness with the specific goal of improving the person's level of well-being.

Bodily Self-Consciousness
As underlined by Olaf Blanke (2012) in his recent paper for Nature Reviews Neuroscience: "Human adults experience a 'real me' that 'resides' in 'my' body and is the subject (or 'I') of experience and thought. This aspect of self-consciousness, namely the feeling that conscious experiences are bound to the self and are experiences of a unitary entity ('I'), is often considered to be one of the most astonishing features of the human mind" (p. 556).
The increasing interest of cognitive science, social and clinical psychology in the study of the experience of the body is providing a better picture of the process (Blanke, 2012; Gallagher, 2005; Slaughter & Brownell, 2012; Tsakiris, Longo, & Haggard, 2010). First, even though bodily self-consciousness is apparently experienced by the subject as a unitary experience, neuroimaging and neurological data suggest that it includes different experiential layers that are integrated in a coherent experience (Blanke, 2012; Crossley, 2001; Pfeiffer et al., 2013; Shilling, 2012; Vogeley & Fink, 2003). In general, we become aware of our bodies through exteroceptive signals arising on the body (e.g., touch) or outside it (e.g., vision), and through interoceptive (e.g., heart rate) and proprioceptive (e.g., skeletal striated muscles and joints) signals arising from within the body (Durlik, Cardini, & Tsakiris, 2014; Garfinkel & Critchley, 2013).
Second, these studies support the idea that body representations play a central role in structuring cognition and the self. For this reason, the experience of the body is strictly connected to processes like cognitive development and autobiographical memory.
But what is the role of bodily self-consciousness? We use the "feelings" from the body to sense both our physical condition and our emotional state. These feelings range from proprioceptive and exteroceptive bodily changes that may be visible to an external observer (e.g., posture, touch, facial expressions) to proprioceptive and interoceptive changes that may not be visible to an external observer (e.g., endocrine release, heart rate, muscle contractions) (Bechara & Damasio, 2005).
As suggested by Craig (2002, 2003), all feelings from the body are represented in a hierarchical homeostatic system that maintains the integrity of the body. Moreover, a re-mapping of this representation can be used to judge and predict the effects of emotionally relevant stimuli on the body, with the aim of making rational decisions that affect survival and quality of life (Craig, 2010; Damasio, 1994). According to Damasio (1994), this "collective representation of the body constitutes the basis for a 'concept' of self" (p. 239) that exists as "momentary activations of topographically organized representations" (p. 240). This view is shared by different authors. For example, for Craig (2003) "the subjective image of the 'material me' is formed on the basis of the sense of the homeostatic condition of each individual's body" (p. 503).

Bodily Self-Consciousness: its Role and Development
From this brief analysis it is clear that bodily self-consciousness is not a simple phenomenon. First, it includes different processes and neural structures that work together, integrating different sources of input (see Figure 4.2). Melzack (1999, 2013) defined this widely distributed neural network, including loops between the limbic system and cortex as well as between the thalamus and cortex, as the "body-self neuromatrix" (Melzack, 2005): "The neuromatrix, distributed throughout many areas of the brain, comprises a widespread network of neurons which generates patterns, processes information that flows through it, and ultimately produces the pattern that is felt as a whole body. The stream of neurosignature output with constantly varying patterns riding on the main signature pattern produces the feelings of the whole body with constantly changing qualities" (p. 87).
Moreover, the characteristics of bodily self-consciousness evolve over time, following the ontogenetic development of the subject. Specifically, Riva (2014) suggested that we expand our bodily self-consciousness over time by progressively including new experiences – minimal selfhood, self-location, agency, body ownership, third-person perspective, and body satisfaction – based on different and more sophisticated bodily representations (Figure 4.3). First, perception and action extend the minimal selfhood (based on the body schema) existing at birth through two more online representations: "the spatial body," produced by the integration in an egocentric frame of afferent sensory information (retinal, somaesthetic, proprioceptive, vestibular, and auditory), and "the active body," produced by the integration of "the spatial body" with efferent information relating to the movement of the body in space. From an experiential viewpoint, "the spatial body" allows the experience of where "I" am in space and that "I" perceive the world from there, while "the active body" provides the self with the sense of agency, the sense that we control our own bodily actions.
Then, through the maturation of the underlying neural networks and the progressive increase of mutual social exchanges, the embodied self is extended by further representations: "the personal body," integrating the different body locations in a whole body representation; "the objectified body," the integration of the objectified public representations of the personal body; the "body image," integrating the objectified representation of the personal body with the ideal societal body.From an experiential viewpoint these representations produce new bodily experiences: "the personal body" allows the "whole-body ownership", the unitary experience of owning a whole body (I); "the objectified body" allows the "objectified self", the experience of being exposed and visible to others within an intersubjective space (Me); the "body image" allows the "body satisfaction/dissatisfaction", the level of satisfaction the subject feels about the body in comparison to societal standards.
A first important characteristic (Galati, Pelle, Berthoz, & Committeri, 2010) of the different body representations described above is that they are both schematic (allocentric) and perceptual (egocentric). The role of the egocentric representations is "pragmatic" (Jeannerod & Jacob, 2005): the representation of an object using egocentric coordinates is required for reaching and grasping. Instead, the role of allocentric representations is "semantic" (Jeannerod & Jacob, 2005): the representation of an object using allocentric coordinates is required for the visual awareness of its size, shape, and orientation.
Another important characteristic of the different body representations involved is that each of them is characterized by a specific disorder (Riva, 2014) that significantly alters the experience of the body (Figure 4.4):
- Phantom limb (body schema), the experience of a virtual limb, perceived to occupy body space and/or producing pain;
- Unilateral hemi-neglect (spatial body), the experience of not attending to the contralesional side of the world/body;
- Alien hand syndrome (active body), the experience of having an alien hand acting autonomously;
- Autoscopic phenomena (personal body), the experience of seeing a second own body in extrapersonal space;
- Xenomelia (objectified body), the non-acceptance of one's own extremities and the resulting desire for elective limb amputation or paralysis;
- Body dysmorphia (body image), the experience of having a problem with a specific part of the body.
The features of these disorders suggest a third important characteristic of bodily representations (Melzack, 2005): they are usually produced and modulated by sensory inputs, but they can act and produce qualitatively rich bodily experiences even in the absence of any input signal. Why? According to Melzack (2005, 2013), the different inputs received by the body-self neuromatrix are converted into neurosignatures, patterns of brain cell activity (synaptic connections) and chemical releases that serve as the common repository of information about the different contents of our bodily experience in the brain. These neurosignatures, and not the original sensory inputs, are then projected into the "sentient neural hub" (brain areas in the central core of the brainstem) to be converted into a continually changing stream of awareness. In simple words, our experience of the body is not produced directly by sensory inputs, but is mediated by neurosignatures that are influenced both by cognitive and affective inputs (see Figure 4.2). Moreover, given their representational content – they are a memory of a specific bodily experience (Salt, 2002) – neurosignatures can produce an experience even without a specific sensory input, or a long time after it appeared (Melzack, 2005): "When we reach for an apple, the visual input has clearly been synthesized by a neuromatrix so that it has three-dimensional shape, color, and meaning as an edible, desirable object, all of which are produced by the brain and are not in the object 'out there'" (p. 88).

First-Person Spatial Images: the Common Code of Bodily Self-Consciousness
A key feature of the body-self neuromatrix is its ability to process, integrate and generate a wide range of different inputs and patterns: sensory experiences, thoughts, feelings, attitudes, beliefs, memories and imagination. But how does it work? For a long time the brain sciences considered action, perception, and interpretation as separate activities. Recently, however, the brain sciences have started to describe cognitive processes as embodied (J. Prinz, 2006). In this view, perception, execution, and imagination share a common spatial coding in the brain (Hommel, Müsseler, Aschersleben, & Prinz, 2001): "Cognitive representations of events (i.e., of any to-be-perceived or to-be-generated incident in the distal environment) subserve not only representational functions (e.g., for perception, imagery, memory, reasoning, etc.) but action-related functions as well (e.g., for action planning and initiation)" (p. 850).
Within this general view, the part most relevant for our discussion is the one related to common coding (Common Coding Theory): actions are coded in terms of the perceivable effects they should generate. More specifically, when an effect is intended, the movement that produces this effect as perceptual input is automatically activated, because actions and their effects are stored in a common representational domain. As underlined by Prinz (1997): "Under conditions where stimuli share some features with planned actions, these stimuli tend, by virtue of similarity, either to induce those actions or interfere with them, depending on the structure of the task at hand. This implies that there are certain products of perception on the one hand and certain antecedents of action on the other that share a common representational domain" (p. 152).
The Common Coding Theory may be considered a variation of the Ideomotor Principle introduced by William James (1890). According to James, imagining an action creates a tendency towards its execution, if no antagonistic mental images are simultaneously present: "Every representation of a movement awakens in some degree the actual movement which is its object; and awakens it in a maximum degree whenever it is not kept from doing so by an antagonistic representation present simultaneously in the mind" (p. 526). Prinz (1997) suggests that the role of mental images is instead taken by the distal perceptual events that an action should generate. When the activation of a common code exceeds a certain threshold, the corresponding motor codes are automatically triggered.
Further, the Common Coding Theory extends this approach to the domains of event perception, action perception, and imitation. The underlying process is the following (Knoblich & Flach, 2003): first, common event representations become activated by the perceptual input; then, there is an automatic activation of the spatial codes attached to these event representations; finally, the activation of the spatial codes results in a prediction of the action's outcome in terms of expected perceptual events on the common coding level. Giudice and colleagues (2013) recently demonstrated that the processing of spatial representations in working memory is not influenced by their source. It is even possible to combine long-term memory data with perceptual images within an active spatial representation without influencing judgments of spatial relations.
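As a purely didactic illustration of the threshold mechanism described above, consider the following toy computational sketch (all class names, parameters and numeric values are hypothetical and not part of the theory): a single shared code links a perceivable effect to a motor program, and both perception of a similar stimulus and the intention to produce the effect raise the activation of that same code.

```python
# Toy sketch of the Common Coding Theory threshold mechanism (illustrative
# only; names and values are invented for this example).
THRESHOLD = 1.0  # hypothetical activation threshold


class CommonCode:
    """A shared representation linking a perceivable effect to a motor code."""

    def __init__(self, effect, motor_code):
        self.effect = effect
        self.motor_code = motor_code
        self.activation = 0.0

    def perceive(self, stimulus, similarity):
        # A stimulus similar to the coded effect raises activation by similarity.
        self.activation += similarity

    def intend(self, strength):
        # Intending the effect raises activation of the SAME shared code.
        self.activation += strength

    def triggered_action(self):
        # The motor code fires automatically once activation crosses threshold.
        return self.motor_code if self.activation >= THRESHOLD else None


# The code links the sight of an apple in the hand to the grasping movement.
code = CommonCode(effect="apple-in-hand", motor_code="grasp")
code.perceive(stimulus="apple", similarity=0.6)  # perception alone: below threshold
assert code.triggered_action() is None
code.intend(strength=0.5)                        # intention adds to the same code
assert code.triggered_action() == "grasp"        # threshold crossed: action induced
```

The point of the sketch is only that perception and intention feed one representational domain, so either source (or their sum) can push the code past the threshold that releases the action.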
This vision fits well with the Convergence Zone Theory proposed by Damasio (1989).This theory has two main claims.
First, when a physical entity is experienced, it activates feature detectors in the relevant sensory-motor areas. During visual processing of an apple, for example, some neurons fire for edges and planar surfaces, whereas others fire for color, configural properties, and movement. Similar patterns of activation in feature maps in other modalities represent how the entity might sound and feel, and also the actions performed on it. Second, when a pattern becomes active in a feature system, clusters of conjunctive neurons (convergence zones) in association areas capture the pattern for later cognitive use. Damasio assumes the existence of different convergence zones at multiple hierarchical levels, ranging from posterior to anterior in the brain. At a lower level, convergence zones near the visual system capture patterns there, whereas convergence zones near the auditory system capture patterns there. Further downstream, higher-level association areas in more anterior regions, such as the temporal and frontal lobes, conjoin patterns of activation across modalities.
In fact, a critical feature of convergence zones underlined by Simmons and Barsalou is modality-specific re-enactment (Barsalou, 2003; Simmons & Barsalou, 2003): once a convergence zone captures a feature pattern, the zone can later activate the pattern in the absence of bottom-up stimulation. In particular, the conjunctive neurons play the important role of reactivating patterns (re-enactment) in feature maps during imagery, conceptual processing, and other cognitive tasks. For instance, when retrieving the memory of an apple, conjunctive neurons partially reactivate the visual state active during its earlier perception. Similarly, when retrieving an action performed on the apple, conjunctive neurons partially reactivate the motor state that produced it.
According to this view, a fully functional conceptual system can be built on re-enactment mechanisms: first, modality-specific sensorimotor areas become activated by the perceptual input (an apple), producing patterns of activation in feature maps; then, clusters of conjunctive neurons (convergence zones) identify and capture the patterns (the apple is red, has a graspable size, etc.); later, the convergence zone fires to partially reactivate the earlier sensory representation (I want to take a different apple); finally, this representation reactivates a pattern of activation in feature maps similar, but not identical, to the original one (re-enactment), allowing the subject to predict the action's outcome.
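The four steps above can be rendered as a minimal toy pipeline (a speculative sketch only: the feature names, the stored-pattern dictionary and the "noise" parameter are invented for illustration and do not come from the theory):

```python
# Toy sketch of the re-enactment cycle: feature activation -> convergence-zone
# capture -> later reactivation without bottom-up input (illustrative only).
import random


def perceive(entity):
    """Step 1: perceptual input produces a pattern in modality feature maps."""
    patterns = {"apple": {"color": "red", "shape": "round", "size": "graspable"}}
    return patterns[entity]


def capture(feature_pattern):
    """Step 2: a convergence zone stores the conjunction of active features."""
    return dict(feature_pattern)  # conjunctive neurons capture the pattern


def reenact(convergence_zone, noise=0.0):
    """Steps 3-4: retrieval partially reactivates the feature maps, producing
    a similar but not necessarily identical pattern (re-enactment)."""
    pattern = dict(convergence_zone)
    if noise and random.random() < noise:
        pattern["size"] = "uncertain"  # re-enactment is partial, not a copy
    return pattern


seen = perceive("apple")        # bottom-up activation in feature maps
zone = capture(seen)            # stored for later cognitive use
recalled = reenact(zone)        # reactivated with no sensory input present
assert recalled["color"] == "red"  # the memory predicts perceptual features
```

The sketch makes the chapter's point concrete: after capture, the stored conjunction alone suffices to regenerate a usable (if imperfect) version of the original perceptual state.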
The final outcome of this vision is the idea of a spatial-temporal framework of virtual objects directly present to the subject: an inner world simulation in the brain. As described by Barsalou (2002): "In representing a concept, it is as if people were being there with one of its instances. Rather than representing a concept in a detached, isolated manner, people construct a multimodal simulation of themselves interacting with an instance of the concept. To represent the concept they prepare for situated action with one of its instances" (p. 9).
In this view our body, too, can be considered the result of a multimodal simulation. As Margaret Wilson (2006) clearly underlined: "The human perceptual system incorporates an emulator… that is isomorphic to the human body. While it is possible that such an emulator is hard-wired into the perceptual system or learned in purely perceptual terms, an equally plausible hypothesis is that the emulator draws on body-schematic knowledge derived from the observer's representation of his own body" (p. 221).
Different clinical and experimental studies have shown that multimodal information about the body is coded in an egocentric spatial frame of reference (Fotopoulou et al., 2011; Loomis et al., 2013), suggesting that our bodily experience is built up using a first-person spatial code. However, differently from other physical objects, our body is experienced both as object (third person) – we perceive our body as a physical object in the external world – and as subject (first person) – we experience our body through different neural representations that are not related to its physical appearance (Legrand, 2010; Moseley & Brugger, 2009). Put differently, simulating the body is more complex than simulating an apple, because it requires the integration/translation of both its first-person (egocentric) and third-person (allocentric) characteristics (Riva, 2014) into a coherent representation. Knoblich and Flach (2001) clearly explained this point: "First-person and third-person information cannot be distinguished on a common-coding level. This is because the activation of a common code can result either from one's own intention to produce a certain action effect (first person) or from observing somebody else producing the same effect (third person). Hence, there ought to be cognitive structures that, in addition, keep first- and third-person information apart" (p. 468).
But how do they interact? As suggested by Byrne and colleagues (Byrne, Becker, & Burgess, 2007): "Long-term spatial memory is modeled as attractor dynamics within medial temporal allocentric representations, and short-term memory is modeled as egocentric parietal representations driven by perception, retrieval, and imagery and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, which are mediated by posterior parietal and retrosplenial areas and the use of head direction representations in Papez's circuit" (p. 340). Seidler and colleagues (2012) demonstrated the role played by working memory in two different types of motor skill learning – sensorimotor adaptation and motor sequence learning – confirming a critical involvement of this memory in the above interaction process. In general, the interaction between egocentric perception and allocentric data happens through the episodic buffer of working memory and involves all three of its components (Baddeley, 2012; Wen, Ishikawa, & Sato, 2013): verbal, spatial, and visuo-tactile.
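The egocentric/allocentric translation invoked above can be illustrated with a toy two-dimensional coordinate transform (an analogy only: the neural translation is vastly richer, but the underlying geometry is of this kind). An egocentric representation encodes a landmark relative to the observer's position and head direction; the allocentric one encodes it in a world frame.

```python
# Toy geometric analogue of egocentric <-> allocentric translation
# (illustrative only; function names are invented for this example).
import math


def ego_to_allo(observer_xy, heading_rad, ego_xy):
    """Translate body-centred coordinates into world-centred coordinates."""
    ex, ey = ego_xy
    # Rotate by the head direction, then shift by the observer's position.
    ax = observer_xy[0] + ex * math.cos(heading_rad) - ey * math.sin(heading_rad)
    ay = observer_xy[1] + ex * math.sin(heading_rad) + ey * math.cos(heading_rad)
    return (ax, ay)


def allo_to_ego(observer_xy, heading_rad, allo_xy):
    """Inverse translation: world-centred back into body-centred coordinates."""
    dx = allo_xy[0] - observer_xy[0]
    dy = allo_xy[1] - observer_xy[1]
    ex = dx * math.cos(-heading_rad) - dy * math.sin(-heading_rad)
    ey = dx * math.sin(-heading_rad) + dy * math.cos(-heading_rad)
    return (ex, ey)


# A landmark 2 m straight ahead while facing "north" (heading pi/2):
allo = ego_to_allo(observer_xy=(0, 0), heading_rad=math.pi / 2, ego_xy=(2, 0))
ego = allo_to_ego(observer_xy=(0, 0), heading_rad=math.pi / 2, allo_xy=allo)
assert abs(ego[0] - 2) < 1e-9 and abs(ego[1]) < 1e-9  # round-trip is consistent
```

The round-trip consistency check mirrors what the Byrne et al. model requires of the brain: encoding and retrieval must be able to pass a representation through the translation in both directions, using the current head direction, without distorting the spatial relations.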

The Impact of Altered Body Self-Consciousness on Health Care
The concept of the neuromatrix was originally introduced by Melzack (2005) to explain the pain experience: "The neuromatrix theory of pain proposes that the neurosignature for pain experience is determined by the synaptic architecture of the neuromatrix, which is produced by genetic and sensory influences… In short, the neuromatrix, as a result of homeostasis-regulation patterns that have failed, may produce neural 'distress' patterns that contribute to the total neuromatrix pattern… Each contribution to the neuromatrix output pattern may not by itself produce pain, but both outputs together may do so" (p. 90).
In his view, pain is a multidimensional experience produced by multiple influences. Moreover, this experience is rooted in our bodily self-consciousness and mediated by a neurological representational system (neurosignatures produced by patterns of synaptic connections) that is the result of cognitive, affective and sensory inputs (see Figure 4.2). In particular, Melzack (2005) suggests that pain is the result of an abnormal representation of some bodily state: "Action-neuromodules attempt to move the body and send out abnormal patterns that are felt as shooting pain. The origins of these pains, then, lie in the brain… This suggests that an abnormal, partially genetically determined mechanism fails to turn off the stress response to viral, psychological, or other types of threat to the body-self" (pp. 91-92).
The current version of the "body-self neuromatrix" theory (Melzack, 2005) has a high explanatory power, and it is able to account for a great deal of the variance found in the different pain-related disorders. However, as this model has gained explanatory power, it has lost part of its predictive power: it is so general that it cannot be easily falsified. Moreover, as we have seen previously, current advances in neuroscience suggest an important role played by the representational format of neurosignatures, which is barely addressed by the theory.
The focus on the representational format is instead the main feature of the "dual representation theory of posttraumatic stress disorder (PTSD)" (Brewin, 2014; Brewin, Dalgleish, & Joseph, 1996), used to explain the visual intrusions experienced by PTSD patients (Brewin, Gregory, Lipton, & Burgess, 2010): sensory experiences of short duration that are extremely vivid, detailed, and highly distressing in content. According to this theory, during the traumatic event patients encode two different memory representations:
- Sensory-bound representations (S-reps): including sensory details and affective/emotional states;
- Contextual representations (C-reps): abstract structural descriptions, including the spatial and personal context of the person experiencing the event.
As explained by Brewin and Burgess (2014): "In healthy memory the S-rep and C-rep are tightly associated, such that an S-rep is generally retrieved via the associated C-rep. Access to C-reps is under voluntary control but may also occur involuntarily. According to the theory, direct involuntary activation and reexperiencing of S-reps occurs when the S-rep is very strongly encoded, due to the extreme affective salience of the traumatic event, and the C-rep is either encoded weakly or without the usual tight association to the S-rep. This might be due to stress-induced down-regulation of the hippocampal memory system and/or due to a dissociative response to the traumatic event" (p. 217).
This description has many similarities with the one used by Melzack to explain pain. In both cases, an abnormal process fails to turn off the stress response to a threat to the body-self. However, the added value of this theory is provided by the focus on the representational format: the problem is produced by the lack of coherence between two memories of the same event coded in different representational formats.
As we have seen before, the experience of the body is coded in two different formats – schematic (allocentric) and perceptual (egocentric) – that fit well with the descriptions provided by Brewin and colleagues (Brewin et al., 2010): one system encodes stimulus information from an egocentric viewpoint, in a form similar to how it was originally experienced, allowing its storage in and retrieval (involuntary only) from episodic memory; the second system uses a set of abstract codes to translate the stimulus information into an allocentric format, allowing its storage in and retrieval (both involuntary and voluntary) from autobiographical memory.
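The retrieval logic of the dual representation theory can be sketched as a toy model (a speculative rendering only: the class, thresholds and numeric strengths are invented for illustration and are not the authors' formalization). Normally an S-rep is reached via its associated C-rep under voluntary control; when the S-rep is very strongly encoded and the C-rep link is weak, a matching sensory cue can trigger the S-rep directly, producing an involuntary intrusion.

```python
# Toy sketch of dual-representation retrieval (illustrative only;
# names and numeric values are hypothetical).
class TraumaMemory:
    def __init__(self, s_strength, c_association):
        self.s_strength = s_strength        # encoding strength of the S-rep
        self.c_association = c_association  # tightness of the S-rep/C-rep link

    def voluntary_recall(self):
        """Retrieval via the C-rep: possible only if the link is intact."""
        return "contextualised memory" if self.c_association > 0.5 else None

    def sensory_cue(self, cue_match):
        """A matching cue can activate a strong S-rep directly (involuntarily)
        when the contextual link is too weak to gate retrieval."""
        if cue_match * self.s_strength > 0.5 and self.c_association <= 0.5:
            return "intrusive flashback"
        return None


healthy = TraumaMemory(s_strength=0.6, c_association=0.9)
assert healthy.voluntary_recall() == "contextualised memory"
assert healthy.sensory_cue(cue_match=0.9) is None   # C-rep gates retrieval

ptsd = TraumaMemory(s_strength=0.95, c_association=0.2)  # weak C-rep link
assert ptsd.voluntary_recall() is None
assert ptsd.sensory_cue(cue_match=0.9) == "intrusive flashback"
```

The contrast between the two instances is the theory's core prediction: the same cue is harmless when the contextual route is intact and intrusive when it is not.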
A recent experiment using direct recording of human neural activity from neurosurgical patients playing a virtual reality memory game provided first, but important, evidence that in normal subjects these different representations are integrated in situated conceptualizations and retrieved using multimodal stimuli (from perception to language). In their study, Miller and colleagues (2013) found that place-responsive cells (schematic) active during navigation were reactivated during the subsequent recall of navigation-related objects using language.
In the study, subjects were asked to find their way around a virtual world, delivering specific objects (e.g., a zucchini) to certain addresses in that world (e.g., a bakery). At the same time, the researchers recorded the activity in the hippocampus corresponding to specific groups of place cells that fired selectively when the subject was in certain parts of the game map. Using these brain recordings, the researchers were able to develop a neural map in the subject's hippocampus that corresponded to the city's layout.
Next, the subjects were asked to verbally recall as many of the objects they had delivered as possible, in any order. Using the collected neural maps, the researchers were able to cross-reference each participant's spatial memories as he/she accessed his/her episodic memories of the delivered items (e.g., the zucchini). The researchers found that when a subject named an item that had been delivered to a store in a specific region of the map, the place cell neurons associated with that region reactivated before and during vocalization.
This important experimental result suggests that schematic and perceptual representations are integrated in situated conceptualizations that allow us to represent and interpret the situation we are experiencing (Barsalou, 2013). Once these situated conceptualizations are assembled, they are stored in memory. When cued later, they reinstate themselves through simulation within the respective modalities, producing pattern completion inferences (Barsalou, 2003).
In this way, perceptual information stored in episodic memory may be retrieved by corresponding emotional states or by sensory cues, and modulated by the activation of the associated schematic information (Brewin et al., 2010). Alternatively, schematic information may be retrieved through language and activate associated perceptual information that provides additional sensory and emotional aspects to retrieval (Brewin et al., 2010).
Interestingly, the interaction between perceptual and schematic information happens both in memory and in prospection (Zheng, Luo, & Yu, 2014). As explained by Brewin and colleagues (2010): "As part of the process of deliberately simulating possible future outcomes, some individuals will construct images (e.g., of worst possible scenarios) based on information in C-memory and generate images in the precuneus. These images may be influenced by related information held in C-memory, or in addition the images may be altered by the involuntary retrieval of related material from S-memory. Novel images may also arise spontaneously in S-memory through processes of association… As a result, internally generated images may come to behave much like intrusive memories, being automatically triggered and accompanied by a strong sense of reliving" (p. 222).
In other words, as predicted by the neuromatrix theory, subjects may, in order to avoid stress, create new integrated conceptualizations producing simulations and pattern completion inferences that are indistinguishable from the ones closely reflecting actual experience. Considerable evidence suggests that the etiology of different disorders – including PTSD, eating disorders, depression, chronic pain, phantom limb pain, autism, schizophrenia, Parkinson's and Alzheimer's – may be related to these processes. Specifically, these disorders may be produced by an abnormal interaction between perceptual and schematic contents in memory (both encoding and retrieval) and/or prospection (Brewin et al., 2010; Melzack, 2005; Riva, 2014; Riva, Gaudio, & Dakanalis, 2013; Serino & Riva, 2013, 2014; Zheng et al., 2014) that produces its effect on one or more layers of the subjects' bodily self-experience.

Self-Consciousness
Recently, Riva and colleagues underlined that one of the fundamental objectives for human computer confluence in the coming decade will be to create technologies that contribute to the enhancement of happiness and psychological well-being (Botella et al., 2012; Riva, Banos, Botella, Wiederhold, & Gaggioli, 2012; Wiederhold & Riva, 2012). In particular, these authors suggested that it is possible to use technology to manipulate the quality of personal experience, with the goal of increasing wellness and generating strengths and resilience in individuals, organizations and society (Positive Technology). But what is personal experience?
According to Merriam Webster's Collegiate Dictionary (http://www.merriam-webster.com/dictionary/experience), it is possible to define experience both as "a: direct observation of or participation in events as a basis of knowledge" (subjective experience) and "b: the fact or state of having been affected by or gained knowledge through direct observation or participation" (personal experience).
However, there is a critical difference between them (Riva, 2012). If subjective experience is the experience of being an intentional subject, personal experience is the experience affecting a particular subject. This difference suggests that, independently of the subjectivity/intentionality of any individual, it is possible to alter the features of our personal experience from outside.
Following the previous broad discussion, which clearly identified bodily self-consciousness as the core of our personal experience, we can suggest that bodily self-consciousness may become the dependent variable that researchers manipulate using technologies to improve health and wellness. But why and how?
In the previous paragraph we suggested that many disorders may be produced by an abnormal interaction between perceptual and schematic contents in memory and/or prospection. A critical feature of the resulting abnormal representations is that, in most situations, they are not accessible to consciousness and cannot be directly altered. If this happens, the only working approach is to create or strengthen alternative ones (Brewin, 2006). This approach is commonly used by different cognitive-behavioral techniques. For example, competitive memory training - COMET (Korrelboom, de Jong, Huijbrechts, & Daansen, 2009) - has been used to improve low self-esteem in individuals with eating disorders. This approach encourages individuals to retrieve and attend to positive autobiographical memories that are incompatible with low self-esteem by using self-esteem promoting imagery, self-verbalizations, facial and bodily expressions, and music.
Another, potentially more powerful, approach is the use of technology for a direct modification of bodily self-consciousness. In general, it is possible to modify our bodily self-consciousness in three different ways (Riva et al., 2012; Riva & Mantovani, 2014; Waterworth & Waterworth, 2014):
- By structuring bodily self-consciousness through the focus and reorganization of its contents (Mindful Embodiment).
- By augmenting bodily self-consciousness to achieve enhanced and extended experiences (Augmented Embodiment).
- By replacing bodily self-consciousness with a synthetic one (Synthetic Embodiment).
The first approach - mindful embodiment - aims at helping the modification of our bodily experience by facilitating the availability of its contents in working memory. As we have seen previously, the availability of a spatial image in working memory, whether from perception, language or long-term memory, allows its updating (Loomis et al., 2013). Following this, different techniques - from Vipassanā meditation to mindfulness - use focused attention to become aware of different bodily states (Pagnini, Di Credico, et al., 2014; Pagnini, Phillips, & Langer, 2014), in order to facilitate their reorganization and their replacement by competing contents. Even if this approach does not necessarily require technological support, technological tools can improve its effectiveness. For example, Chittaro and Vianello (2014) recently demonstrated that a mobile application could be beneficial in helping users practice mindfulness. The use of the app obtained better results than two traditional, well-known mindfulness techniques in terms of achieved decentering, level of difficulty and degree of pleasantness.
Another technologically enhanced approach to mindful embodiment is biofeedback: the use of visual or acoustic feedback to represent physiological parameters, such as heart rate or skin temperature, in order to allow their voluntary control (Repetto et al., 2009; Repetto & Riva, 2011).
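The core of any biofeedback system is a loop that reads a physiological parameter and maps it onto a perceivable signal. The sketch below illustrates this idea only; the sensor is simulated, and the mapping range and thresholds are illustrative assumptions, not values from the literature cited above.

```python
# Minimal biofeedback-loop sketch: map a physiological signal (heart rate)
# onto an acoustic parameter (tone pitch). The sensor is simulated here;
# a real system would read from an actual physiological sensor.

def read_heart_rate(t):
    """Simulated heart-rate sensor in beats per minute (placeholder signal)."""
    return 70 + 10 * (t % 3)

def heart_rate_to_pitch(bpm, low=220.0, high=880.0, bpm_min=50, bpm_max=120):
    """Linearly map heart rate onto a tone frequency in Hz.

    A higher heart rate produces a higher pitch, so the user can 'hear'
    changes in the parameter and attempt to control it voluntarily.
    The range 220-880 Hz is an illustrative choice.
    """
    bpm = max(bpm_min, min(bpm_max, bpm))  # clamp to the expected range
    fraction = (bpm - bpm_min) / (bpm_max - bpm_min)
    return low + fraction * (high - low)

if __name__ == "__main__":
    for t in range(5):
        bpm = read_heart_rate(t)
        print(f"t={t}s  heart rate={bpm:.0f} bpm  tone={heart_rate_to_pitch(bpm):.1f} Hz")
```

In an actual application the tone would be synthesized continuously, closing the loop between the body and the auditory display.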
The second approach - augmented embodiment - aims at enhancing bodily self-consciousness by altering/extending its boundaries. As noted by Waterworth and Waterworth (2014), it is possible to achieve this goal through two different methods. The first, which the authors define as "altered embodiment", is achieved by mapping the contents of a sensory channel to a different one. An example of this approach is the "vOICe system" (http://www.seeingwithsound.com), which converts video camera images into sound to enable the blind to navigate the world (and other information) by hearing instead of seeing. The second, which the authors define as "extended embodiment", is achieved through the active use of tools. Riva and Mantovani identified two different types of tool-mediated actions (Riva & Mantovani, 2012, 2014), first-order and second-order:
- First-order mediated action: the body is used to control a proximal tool (an artifact present and manipulable in the peripersonal space) to exert an action upon an external object. An example is the videogame player using a joystick (proximal tool) to move an avatar (distal tool) to pick up a sword (external object).
- Second-order mediated action: the body is used to control a proximal tool that controls a different, distal one (a tool present and visible in the extrapersonal space, either real or virtual) to exert an action upon an external object. An example is the crane operator using a lever (proximal tool) to move a mechanical boom (distal tool) to lift materials (external objects).
In their view, which is in agreement with and supported by different scientific and research data (Balakrishnan & Shyam Sundar, 2011; Blanke, 2012; Clark, 2008; Herkenhoff Carijó, de Almeida, & Kastrup, in press; Jacobs, Bussel, Combeaud, & Roby-Brami, 2009; Slater, Spanlang, Sanchez-Vives, & Blanke, 2010), these two mediated actions have different effects on our spatial and bodily experience (see Figure 4.1):
- Incorporation: the proximal tool extends the peripersonal space of the subject;
- Telepresence: the user experiences a second peripersonal space centered on the distal tool.
A successfully learned first-order mediated action produces incorporation: the proximal tool extends the peripersonal space of the acting subject. In other words, the acquisition of a motor skill related to the use of a proximal tool extends the body model we use to define the near and far space. From a neuropsychological viewpoint, the tool is incorporated in the near space, which is extended to the end point of the tool. From a phenomenological viewpoint, instead, we are now present in the tool and we can use it as intuitively as we use our hands and our fingers.
A successfully learned second-order mediated action also produces incarnation: a second body representation centered on the distal tool. In fact, second-order mediated actions are based on the simultaneous handling of two different body models - one centered on the real body (based on proprioceptive data) and a second centered on the distal tool (based on visual data) - that are weighted in a way that minimizes the uncertainty during the mediated action. In other words, this second peripersonal space centered on the distal tool competes with the one centered on the body to drive action and experience. Specifically, when the distal-centered peripersonal space becomes the prevalent one, it also shifts the extrapersonal space to the one surrounding the distal tool. From an experiential viewpoint the outcome is simple (Riva, Waterworth, Waterworth, & Mantovani, 2011; Waterworth, Waterworth, Mantovani, & Riva, 2010, 2012): the subject experiences presence in the distal environment (telepresence).
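The uncertainty-minimizing weighting of the two body models can be illustrated with the standard minimum-variance cue-combination rule from the multisensory integration literature. This is a sketch of that general principle, not the authors' own model: each estimate is weighted by its inverse variance, so the less uncertain cue dominates.

```python
# Sketch of minimum-variance (inverse-variance weighted) combination of two
# body-position estimates: one proprioceptive (real body), one visual
# (distal tool). Illustrative only; numbers are made up.

def combine(est_proprio, var_proprio, est_visual, var_visual):
    """Weight each estimate by its inverse variance (its 'precision').

    When the visual estimate is much more reliable (lower variance),
    the combined estimate is drawn toward the distal tool: the
    distal-centered body model becomes the prevalent one.
    """
    w_p = 1.0 / var_proprio
    w_v = 1.0 / var_visual
    combined = (w_p * est_proprio + w_v * est_visual) / (w_p + w_v)
    combined_var = 1.0 / (w_p + w_v)  # lower than either input variance
    return combined, combined_var

# Example: a precise visual cue (variance 1.0) dominates a noisy
# proprioceptive cue (variance 4.0), pulling the estimate toward the tool.
pos, var = combine(est_proprio=0.0, var_proprio=4.0,
                   est_visual=10.0, var_visual=1.0)
```

Note that the combined variance is always lower than either input variance, which is why relying on both body models reduces uncertainty during the mediated action.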
A final approach -synthetic embodiment -aims at replacing bodily self-consciousness with a synthetic one (incarnation).As also discussed in the chapter by Herbelin and colleagues in this volume, it is possible to use a specific technologyvirtual reality (VR) -to reach this goal.But what is VR?
The basis for the VR idea is that a computer can synthesize a three-dimensional (3D) graphical environment from numerical data. Using visual, aural or haptic devices, the human operator can experience the environment as if it were a part of the world. This computer-generated world may be a model of a real-world object, such as a house; an abstract world that does not exist in a real sense but is understood by humans, such as a chemical molecule or a representation of a set of data; or a completely imaginary science-fiction world.
Usually VR is described as a particular collection of technological hardware. However, given the properties of distal tools described before, we can describe VR as an "embodied technology" for its ability to modify the feeling of presence (Riva, 2009; Riva et al., 2014): the human operator can experience the synthetic environment as if it were "his/her surrounding world" (telepresence) or can experience the synthetic avatar (the user's virtual representation) as if it were "his/her own body" (synthetic embodiment).
Different authors have shown that it is possible to use VR both to induce an illusory perception of a fake limb (Slater, Perez-Marcos, Ehrsson, & Sanchez-Vives, 2009) or a fake hand (Perez-Marcos, Slater, & Sanchez-Vives, 2009) as part of our own body, and to produce an out-of-body experience (Lenggenhager et al., 2007), by altering the normal association between touch and its visual correlate. It is even possible to generate a body transfer illusion (Slater et al., 2009): Slater and colleagues substituted the experience of male subjects' own bodies with a life-sized virtual female body. To achieve synthetic embodiment - the user experiences a synthetic new body - a spatio-temporal correspondence between the multisensory signals and sensory feedback experienced by the user and the visual data related to the distal tool is required. For example, users are embodied in an avatar if the movements of the avatar are temporally synchronized with their own movements and there is synchronous visuotactile stimulation of their own body and the avatar's body.
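The temporal-synchrony requirement can be made concrete with a small sketch: compare the timestamps of the user's tracked movements with those of the avatar's rendered movements. The 100 ms tolerance below is a hypothetical threshold chosen for illustration, not a value from the studies cited above.

```python
# Sketch of a visuomotor synchrony check for embodiment experiments:
# each avatar movement should follow the matching user movement within
# a short tolerance window. The 0.1 s tolerance is an assumed value.

def is_synchronous(user_times, avatar_times, tolerance=0.1):
    """Return True if every avatar movement follows the matching user
    movement within `tolerance` seconds (temporal synchrony condition)."""
    if len(user_times) != len(avatar_times):
        return False
    return all(0.0 <= a - u <= tolerance
               for u, a in zip(user_times, avatar_times))

# Synchronous condition: the avatar lags each user movement by ~30 ms,
# which is compatible with embodiment.
sync = is_synchronous([0.0, 1.0, 2.0], [0.03, 1.03, 2.03])

# Asynchronous condition: a 500 ms delay, of the kind used as a control
# condition in embodiment studies, breaks the temporal correspondence.
delayed = is_synchronous([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

Embodiment studies typically manipulate exactly this variable, contrasting synchronous stimulation with a delayed (asynchronous) control condition.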

Conclusions
In this chapter we claimed that the structuration, augmentation and replacement of bodily self-consciousness is at the heart of the research in Human Computer Confluence, requiring knowledge from both cognitive/social sciences and advanced technologies. Moreover, the chapter suggested that this research has a huge societal potential: it is possible to use different technological tools to modify the characteristics of bodily self-consciousness with the specific goal of improving the person's level of well-being.
In the first part of the chapter we explored the characteristics of bodily self-consciousness.
- Even though bodily self-consciousness is apparently experienced by the subject as a unitary experience, neuroimaging and neurological data suggest that it includes different experiential layers that are integrated in a coherent experience. First, in the chapter we suggested the existence of six different experiential layers - minimal selfhood, self-location, agency, whole-body ownership, objectified self, and body satisfaction - evolving over time by integrating six different representations of the body, each characterized by specific pathologies: body schema (phantom limb), spatial body (unilateral hemi-neglect), active body (alien hand syndrome), personal body (autoscopic phenomena), objectified body (xenomelia) and body image (body dysmorphia). Second, we do not experience these layers separately, except in some neurological disorders; third, there is a natural, intermodal communication between them; and fourth, changes in one component can educate and inform other ones (Gallagher, 2005).
- The different pathologies involving the different bodily representations suggest a critical role of the brain in the experience of the body: our experience of the body is not direct, but mediated by neurosignatures that are influenced by both cognitive and affective inputs and can produce an experience even without a specific sensory input, or some time after it appeared (Melzack, 2005).
- To interact with each other, neurosignatures share a common representational code (Loomis et al., 2013): spatial images. The main features of spatial images are: a) they are relatively short-lived and, as such, reside within working memory; b) they are experienced as entities external to the head and body; c) they can be based on inputs from the three spatial senses, from language, and from long-term memory; d) they represent space in egocentric coordinates. This suggests that dysfunctions in bodily self-consciousness may arise from two possible sources: a failure in the conversion/update between these representations, or errors in the perception of spatial relations by a given perceptual/cognitive system (Bryant, 1997).
In the second part of the chapter we discussed how different mental health disorders may be produced by an abnormal interaction between perceptual and schematic contents in memory (both encoding and retrieval) and/or prospection. A critical feature of the resulting abnormal representations is that they are not always accessible to consciousness. So, it is possible to counter them only by creating or strengthening alternative ones (Brewin, 2006).
A possible strategy to achieve this goal is using technology. As discussed in the chapter, it is possible to modify our bodily self-consciousness in three different ways (Riva et al., 2012; Riva & Mantovani, 2014; Waterworth & Waterworth, 2014):
- By structuring bodily self-consciousness through the focus and reorganization of its contents (Mindful Embodiment). In this approach individuals use focused attention to become aware of different bodily states. Particularly relevant is biofeedback, a technology that uses visual or acoustic feedback to represent physiological parameters such as heart rate or skin temperature.
- By augmenting bodily self-consciousness to achieve enhanced and extended experiences (Augmented Embodiment). In this approach it is possible to use technology either to map the contents of a sensory channel to a different one, or to extend the boundaries of the body through the incorporation of the tool or the incarnation of the subject in a virtual space (telepresence).
- By replacing bodily self-consciousness with a synthetic one (Synthetic Embodiment). In this approach it is possible to use virtual reality to create a synthetic avatar (the user's virtual representation) experienced by the user as if it were "his/her own body". To achieve synthetic embodiment, a spatio-temporal correspondence between the multisensory signals and sensory feedback experienced by the user, and the visual data related to the avatar, is required.
In conclusion, the contents of this chapter constitute a sound foundation and rationale for future research aimed at the definition and development of technologies and procedures for the structuration, augmentation and replacement of bodily self-consciousness. In particular, the chapter provides the preliminary evidence required to justify future research to identify the most effective technological interventions and the optimal amount of technological support needed for improving our level of well-being.

Figure 4.1: The role of bodily self-consciousness in Human Computer Confluence.