Non-Facial and Non-Verbal Affective Expression for Appearance-Constrained Robots Used in Victim Management*

Abstract Non-facial and non-verbal methods of affective expression are essential for social interaction in appearance-constrained robots such as those used in search and rescue, law enforcement, and military applications. This research identified five main methods of non-facial and non-verbal affective expression (body movements, postures, orientation, color, and sound). Based on an extensive review of literature, prescriptive design recommendations were developed for the appropriate non-facial and non-verbal affective expression methods for three proximity zones of interest (intimate, personal, and social). These design recommendations serve as guidelines to retroactively add affective expression through software with minimal or no physical modification to a robot. A large-scale, complex human-robot interaction study was conducted to validate these design recommendations using 128 participants and four methods of evaluation. The study was conducted in a high-fidelity, confined-space simulated disaster site with all robot interactions performed in the dark. Statistically significant results indicated that participants felt the robots that exhibited affective expressions were more calming, friendly, and attentive, which improved the social human-robot interactions.


Introduction
As the world of technology evolves, it is inevitable that robots will become a part of our daily lives. Therefore, it is imperative that roboticists determine how these robots will interact socially with humans. There has been a strong focus on building better and more reliable robots, developing more complex behaviors, and improving the intelligence capabilities of these robots; however, there also needs to be a concerted effort put forth to determine the impact of these developments on the humans who will either be operating or interacting with these robots. Human-Robot Interaction (HRI) and social robotics are emerging research areas designed to address these concerns.
A primary focus of research associated with social robotics is the use of facial expressions and/or animal mimicry to convey affect to the humans who will interact with these robots [1−6]. There are two main factors related to affective expression identified in the psychology literature and supported by observations from computer science and social robotics: (a) presentation methods for affective expression, and (b) the importance and impact of proxemics, the relative spatial distance between agents in social interactions [7]. These factors provide a key understanding into the development of a multi-modal system of affective expression, which is essential for robots that are appearance-constrained. In general, researchers in the fields of psychology, computer science, and social robotics have focused most of their attention on facial expression as the primary method to express affect and have neglected other important methods of presentation such as body movement, posture, orientation, color, and sound [2,8,9].

* This material is based upon work supported by the National Science Foundation under Grant # 0937060 to the Computing Research Association for the CIFellows Project, a National Science Foundation Graduate Research Fellowship Award Number DGE-0135733, ARL Number W911NF-06-2-0041, IEEE Robotics and Automation Society Graduate Fellowship, and a Microsoft HRI grant.
† E-mail: cindy.bethel@yale.edu
‡ E-mail: murphy@cse.tamu.edu
Appearance-constrained robots are designed to be functional and serve a particular purpose or role. They are not engineered to have the ability to exhibit facial expressions or make eye contact. Appearance constraints stem primarily from the limitations of the application. Mobility is a major limitation; for example, uneven terrain may drive the use of tracks instead of using anthropomorphic legs. Power and platform size are two other limitations; robots such as those for operation in highly confined spaces may not have enough space or on-board power to add facial features. Extra effectors may interfere with the mission (e.g., snag on wires or overhangs) or decrease reliability (e.g., dust breaking the effectors). The environment itself may limit affect; for example, low or unconstrained lighting may prevent the viewing of avatars on screens [7]. These limitations pose significant challenges as to how these appearance-constrained robots will support social human-robot interaction [7,10].
Appearance-constrained robots are used in different applications, such as search and rescue, law enforcement, military, and assistive technologies. Because of the types of uses for these robots, they often interact with humans, and the ability to interact socially would therefore allow the robots to complete their tasks in an improved manner. In search and rescue applications, the ability to interact socially with victims through the use of non-facial and non-verbal affective expression will reduce stress levels, keep victims calm until assistance can arrive, and prevent a condition known as shock [11]. In the case of military and law enforcement applications, the robots tend to attract the attention of bystanders, which could impede the operations of the robot. In those cases, the robot might need to exhibit an aggressive behavior to keep people at a distance so that it can be utilized to accomplish the required task. This article presents research associated with the use of non-facial and non-verbal affective expression based on inter-agent distance as a method of social interaction in appearance-constrained robots. The application domain used for this investigation was robot-assisted search and rescue; however, the results will be applicable to other domains such as military, law enforcement, education, service, entertainment, and assistive applications. It is expected that the results from this research will benefit not only the HRI community but also the robotics community as a whole. The article begins with a discussion of related work on presentation methods and proxemics in Section 2. A set of prescriptive design recommendations based on non-facial and non-verbal methods of affective expression as it relates to inter-agent distance or proxemics is presented in Section 3. Section 4 provides details of a study conducted to validate the prescriptive design recommendations. The results of the study are presented in Section 5. Section 6 discusses factors that may have impacted the results, and Section 7 presents the conclusions derived from this research.

Related Work
Non-facial and non-verbal presentation methods of affective expression can be separated into five different categories: body movements, posture, orientation, color, and sound [7,8]. Argyle and Bartneck describe some affective expressions using body movements: depression − slow and hesitating movements, and elation − fast, expansive, and emphatic movements [8,40]. Researchers such as Buck, Bull, Fast, Spiegel, and Machotka discuss that body movements and posture can reveal more about the actual affective state of an individual than facial expressions or even verbal communication [9,12,13,43].
Research conducted by Fast suggests that body movement and posture are humans' most primitive and basic methods of conveying affect [13]. Buck argues that further investigation into the influence of bodily feedback through the use of body movement, posture, and orientation is necessary to understand affective responses [12]. Argyle, a social psychologist, performed a study in which the results indicated that in some cases body movements better reflected a participant's true emotional state even when they contradicted the participant's facial expressions and verbal communication of that state. Robot orientation and approach toward a person with whom it is interacting is indicative of its perceived attentiveness and caring for that person [3,14−17]. Several studies have shown that orientation of the robot toward the person with whom it is interacting helps with developing trust for the robot [14,15,16,17]. Argyle discusses the use of color to produce an affective response; for example, blue elicits pleasant, calm, secure, and tender responses [8].
There have been a few robot implementations that utilized some form of color to exhibit affective expression [18,19]. There has been very limited work associated with the mapping of affective expression or emotions to particular colors. This may be an effective presentation method for affective expression; however, there are challenges related to personal preferences, and interpretation of the associated emotion may be influenced by differences in cultural background (e.g., in some cultures white signifies death/grief and in others this is signified by black). Norman discusses that vocal patterns and tones can express affect, and that affect can be perceived in both familiar and unfamiliar music based on acoustic cues, transcending cultural boundaries for affective expression [21]. The use of music, sound, and tones can present challenges because personal preferences and cultural background can strongly influence this presentation method.

The underlying theory of proxemics research is that the spatial distance between humans has a significant impact on the quality and comfort level of interactions; this has been predicted to extend to humans' interactions with robots [5,14,17,22,23,41]. Although there have been some differences in the division of spatial distances or proxemics for use in inter-agent interactions, the consensus appears to be four primary zones (refer to Fig. 1): intimate, personal, social, and public [5,8,24,42]. Argyle's categories appear more relevant to the study of human-robot interaction. Details of his four proximity zones are as follows:

· Intimate (from contact to 0.46 meters) − This zone is for individuals who are involved in an intimate relationship; they can touch, smell, feel body heat, and talk in a whisper, but cannot see the other person very well.
· Personal (0.46 − 1.22 meters) − This zone is a distance at which discomfort can be felt if that space is penetrated by someone with whom the individual is unfamiliar; each person can be clearly seen, and they can touch each other by reaching.

Studies have indicated that individuals are most comfortable interacting with robots in the social distance zone [5,24]. It is important to determine the best method of affective expression to use in each of the social distance zones to ensure the comfort level of the individual with the robot in social interactions. Although considerable research has been devoted to the study of proxemics, minimal research has focused on proxemics as it relates to affective expressions.
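As a concrete illustration, the proximity zones above can be expressed as a simple distance classifier. This is a minimal Python sketch, not code from the original work; the 0.46 m and 1.22 m boundaries come from the zone definitions above, while the 3.0 m outer edge of the social zone is an assumption based on the three-meter interaction range the paper treats as most critical.

```python
def proximity_zone(distance_m: float) -> str:
    """Classify an inter-agent distance into a proximity zone.

    Boundaries of 0.46 m and 1.22 m are taken from the zone
    definitions in the text; the 3.0 m outer edge of the social
    zone is an assumption, not a value given by Argyle.
    """
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m <= 0.46:
        return "intimate"
    if distance_m <= 1.22:
        return "personal"
    if distance_m <= 3.0:
        return "social"
    return "public"

# Example: a robot pausing 0.9 m from a victim is in the personal zone.
print(proximity_zone(0.9))  # → personal
```

A robot controller could call such a classifier on each range reading to decide which expression behaviors are appropriate at the current distance.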

Prescriptive Design Recommendations
Appearance-constrained robots are capable of non-facial and non-verbal affective expression for social engagement and interaction with humans through the use of software and little or no physical modification to the robot. A set of prescriptive design recommendations was developed as part of this research to provide guidelines for implementing non-facial and non-verbal affect in these appearance-constrained robots. The original prescriptive design recommendations were developed based on the synthesis of an extensive review of literature associated with social robotics, psychology, and animal behavior [7]. Based on the literature, it was determined that there are five methods of non-facial and non-verbal affective expression: body movements, postures, orientation, color and/or illuminated color, and sound [7,8,25]. The recommendations for utilizing these five methods of expression are based on the distance between the agents, or proxemics. For most interactions, a distance of three meters or less is the most critical during inter-agent interactions, and this is especially true in urban search and rescue confined-space scenarios. Therefore, the design recommendations are based on proximity zones that fall within this inter-agent distance. These prescriptive design recommendations provide a mechanism to implement affective behaviors on mobile robot platforms through the use of software and with little or no physical modification to the robot. The design recommendations went through several revisions based on a series of evaluations and studies conducted as part of this research [7,10,25,26]. The final version of the prescriptive design recommendations is presented in Fig. 2.
There are three methods of non-facial and non-verbal affective expression on which researchers should focus with regard to appropriateness of use: body movements, postures, and sound. If a robot is interacting with a person in the intimate zone, body movements and postures should be used with caution. The body movements should be limited, controlled, and slow because large, fast, and/or erratic movements can be intimidating and frightening to the person interacting with the robot. Another concern in the intimate zone is that, depending on the size of the robot, it may not be fully visible and therefore the movements may not be fully visible to the person during the interactions. In the personal zone, body movements need to be small to medium and controlled, whereas in the social zone the movements should be large or exaggerated to be visible to the person. In the case of postures, in the intimate zone it is important that postures remain minimal and lower than the face of the person with whom the robot interacts. If the displayed posture is higher than the person's face, then a looming effect can occur and the person will be uncomfortable and possibly intimidated. These are not desirable outcomes for a social human-robot interaction. Postures are visible in both the personal and social zones and can be useful; the issue of looming is not a factor in these two proximity zones. The use of sound can also pose some challenges. Sound is very dependent on background and environmental noise levels. If the robot is operated in a quiet environment such as a museum, then the use of sound as a method of non-facial and non-verbal affective expression will be a useful tool; however, if the robot is operating in a noisy environment such as found in urban search and rescue, then sound may not be very effective in the personal and social zones due to the noise created by the equipment, the structures, and responders.
It is likely that sound would be effective in the intimate zone in urban search and rescue applications to provide comfort to the victim.
Orientation should always be used to indicate attentiveness and caring during social human-robot interactions. Bruce et al. showed in their study that orientation was a very important factor in establishing a connection between the robot and the person with whom it was interacting [15]. Orientation is visible and applicable in all of the proximity zones, and it is a useful behavior for social human-robot interaction. Based on the review of literature, color is expected to be useful in all three of the proximity zones when displayed as illuminated color. However, studies need to be conducted to validate the use of color for non-facial and non-verbal affective expression based on proximity zones. Illuminated color is visible in all three proximity zones of interest, but further studies need to be conducted to determine the effectiveness of interpretation of the affective expression based on the color displayed during social human-robot interactions.
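The recommendations discussed above can be distilled into a lookup keyed by proximity zone and expression method. This is a hypothetical Python sketch of the prose in this section (the authoritative version is Fig. 2); the dictionary keys and guidance strings are illustrative, not an API from the paper.

```python
# Illustrative encoding of the prescriptive design recommendations;
# keys and guidance strings are paraphrased from the prose, not Fig. 2.
RECOMMENDATIONS = {
    ("intimate", "body_movements"): "limited, controlled, slow",
    ("personal", "body_movements"): "small to medium, controlled",
    ("social",   "body_movements"): "large or exaggerated",
    ("intimate", "postures"):       "minimal, kept below the person's face",
    ("personal", "postures"):       "visible and useful; looming not a factor",
    ("social",   "postures"):       "visible and useful; looming not a factor",
    ("intimate", "orientation"):    "always oriented toward the person",
    ("personal", "orientation"):    "always oriented toward the person",
    ("social",   "orientation"):    "always oriented toward the person",
    ("intimate", "sound"):          "useful, e.g., to comfort a trapped victim",
    ("personal", "sound"):          "depends on background noise level",
    ("social",   "sound"):          "depends on background noise level",
}

def recommend(zone: str, method: str) -> str:
    """Look up the recommended use of an expression method in a zone."""
    return RECOMMENDATIONS[(zone, method)]

print(recommend("social", "body_movements"))  # → large or exaggerated
```

Encoding the recommendations as data rather than branching logic makes it straightforward to revise individual entries as the guidelines evolve.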

Experimental Validation
A study was conducted to validate some of the prescriptive design recommendations discussed in Section 3. The study focused on validating the recommendations for body movements, postures, and orientation. An attempt was also made to validate the use of illuminated color; however, the results were inconclusive, so further research studies need to be conducted to fully validate the recommendations for color by proximity zone. This section discusses all the phases of the study, including the hypotheses, study design, study participants, robots and operating modes, site design, methods of measurement, and study tasks.

Hypotheses
This study was designed to evaluate how participants acting as "victims" of a natural disaster would interact with appearance-constrained search and rescue robots in a high-fidelity simulated disaster site. There were two hypotheses evaluated for this study:

1. "Victims" will feel calmer when interacting with a robot that expresses appropriate social affect for its proximity.

2. "Victims" will view the experience as more positive when interacting with a robot that expresses appropriate social affect for its proximity.

Study Design
The study conducted was a 2 × 2 mixed-model design. For the within-participants factor (robot), all participants interacted with two robots (the Inuktun Extreme-VGTV and the iRobot Packbot Scout; see Fig. 3), and for the between-participants factor (operating mode), both robots were operated in one of two operating modes (standard or emotive). This design was utilized because it was important for participants not to discover the purpose of the study, and it allowed a clear comparison to be made between the responses to the robots operated in the standard mode versus the emotive mode. Participants were told that the study was designed to evaluate the behaviors of two urban search and rescue robots; therefore the participants believed that they were to perform a comparison of the two robots. The robot order of presentation was counterbalanced, and the operating modes were counterbalanced among participants and balanced for age and gender.
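The assignment logic of a counterbalanced 2 × 2 mixed-model design like this one can be sketched in a few lines. This is a hypothetical illustration, not the study's actual procedure; the round-robin assignment below ignores the age and gender balancing described above.

```python
from itertools import product

# Hypothetical counterbalancing sketch: robot presentation order
# (within-participants) crossed with operating mode (between-participants)
# gives four condition cells; 128 participants are assigned round-robin.
orders = [("Inuktun Extreme-VGTV", "iRobot Packbot Scout"),
          ("iRobot Packbot Scout", "Inuktun Extreme-VGTV")]
modes = ["standard", "emotive"]

cells = list(product(orders, modes))  # 4 condition cells
assignment = {p: cells[p % len(cells)] for p in range(128)}

# Each cell receives 128 / 4 = 32 participants.
counts = {cell: sum(1 for c in assignment.values() if c == cell)
          for cell in cells}
print(counts)
```

In the actual study, cells would additionally be balanced for age and gender rather than filled purely in rotation.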

Participants
A statistical power analysis was performed to determine the appropriate number of participants required for the study (refer to [27] for detailed instructions on performing a power analysis). Based on having two groups, 80% power, a medium effect size of 0.25, and α = 0.05, it was determined that 128 or more participants would be required for the study.
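For readers who want to reproduce the sample-size calculation, a standard normal-approximation version can be written with only the Python standard library. Note that the stated medium effect size of 0.25 is Cohen's f, which for a two-group comparison corresponds to Cohen's d = 2f = 0.5; the approximation below slightly underestimates the exact t-based answer of 64 per group (128 total).

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size for a two-group comparison.

    Uses n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2,
    a common textbook approximation to the exact t-based calculation.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Medium effect size f = 0.25 implies d = 0.5 for two groups.
n = n_per_group(d=0.5)
print(n, 2 * n)  # → 63 126 (exact t-based value: 64 per group, 128 total)
```

The small gap between 126 and 128 reflects the normal approximation; exact calculations (e.g., Cohen's tables or power-analysis software) give 64 per group, matching the 128 participants recruited.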
A total of 128 participants completed the study: 79 females and 49 males, with an age range of 18−62 (Mean = 22.84, Std Dev = 9.13). The participants had various educational backgrounds and ethnicities and were recruited from within and outside of the University of South Florida, where the study was conducted. Four of the participants had experience as search and rescue responders in either military or civilian settings.
Participants received a raffle ticket for a chance to win one of two gift cards. Additionally, some participants received course credit as part of a course requirement for participating in the study. The participants were not required to complete the study to be included in the raffle ticket drawing or to receive the course credit.

Robots and Operating Modes
The robots utilized in this study had been used in actual robot-assisted urban search and rescue responses and training exercises. The two robots used were the iRobot Packbot Scout and the Inuktun Extreme-VGTV (refer to Fig. 3). These robots were selected because they are appearance-constrained and polymorphic (shape-shifting).

Differences Between the Robots
The primary differences between the two robots are presented in Fig. 4. The Packbot Scout is almost twice the size of the Inuktun Extreme-VGTV robot. The Packbot is significantly heavier and moves at a faster speed than the Inuktun. Another visible difference is that the Packbot in the standard mode only has infrared lighting and no visible lighting compared to the Inuktun, which has two small but bright halogen lights that people often interpret as eyes, giving the Inuktun a somewhat anthropomorphic appearance. The Packbot has rubber tracks, making its movements quieter in comparison to the hard plastic tracks on the Inuktun, which participants reported as noisy.

Programmed Robot Behaviors and Medical Assessment Path
The robot behaviors were pre-programmed so that each participant would have similar experiences. The operator interface allowed for limited movement corrections to account for dead-reckoning problems with the effectors and motors. The robots were programmed to move in a sequential path through the simulated disaster site. The path was based on observations of a medical assessment conducted as part of a search and rescue training exercise [28,29]. The programmed path allowed the robots to traverse all three proximity zones of interest (refer to Fig. 5). In both the standard and emotive modes of operation, the robots followed this programmed pathway for all interactions with the participants. The numbered circles in Fig. 5 represent key points of medical assessment of the face and body (locations 3, 5, and 7), locations to survey surrounding structures (locations 1, 2, and 8), and stand-off distances to monitor victims without being intrusive (locations 4 and 6).
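A pre-programmed path of this kind is naturally represented as an ordered waypoint list. The sketch below is an illustrative encoding, not code from the study; only the location numbers and their roles come from the text, and the traversal order is assumed to follow the numbering.

```python
# Illustrative encoding of the medical-assessment path in Fig. 5.
# Location numbers and roles are from the text; the data structure
# and sequential ordering are assumptions for illustration.
PATH = [
    (1, "survey surrounding structures"),
    (2, "survey surrounding structures"),
    (3, "medical assessment of face/body"),
    (4, "stand-off monitoring of victim"),
    (5, "medical assessment of face/body"),
    (6, "stand-off monitoring of victim"),
    (7, "medical assessment of face/body"),
    (8, "survey surrounding structures"),
]

def waypoints_for(role: str) -> list[int]:
    """Return the location numbers assigned to a given role, in order."""
    return [loc for loc, r in PATH if r == role]

print(waypoints_for("medical assessment of face/body"))  # → [3, 5, 7]
```

Keeping the path as data lets both operating modes replay the identical route while varying only the movement style at each waypoint.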

Standard Operating Mode Behaviors
There were two operating modes used in this study, the standard mode and the emotive mode. The behaviors for the standard mode were developed based on observations of robot operators tele-operating the robots in training exercises. When the robot operators identified a potential victim, they would become excited and rapidly and erratically drive the robots directly to the victim's head and face region to make contact. The robots would be raised to full height to observe the victim and then would turn away from the victims to survey the surrounding structures. To emulate these behaviors for the study, the robots operated in the standard mode had fast, erratic, and not well-controlled body movements. When the robots operated in the standard mode were located in the intimate zone (contact − 0.46 m), they would move directly to the face of the participant to observe the person, exhibited quick and erratic movements, and the posture was fully raised, which created a looming effect over the participant. Following the initial contact, the robot would turn away from the participant and move into the personal zone (0.46 − 1.22 m), then survey the surrounding structures and the body of the participant. The robots would move in and out of the social zone with fast, large, and erratic movements.

Emotive Operating Mode Behaviors
The behaviors for the robots operated in the emotive mode were developed based on the prescriptive design recommendations in addition to consultations with subject matter experts from Disney. In the emotive mode, the robots also entered and exited from the social zone with the body movements and postures large and exaggerated; however, the body movements were slower and more controlled than observed in the standard mode. As the robots approached the participant, they would exhibit cautious and interested behaviors through a creeping movement and would slowly rise, similar to a dog or squirrel investigating something unknown. The robots would slowly approach the participant and then slightly rise again to observe the participant and surrounding structures, ensuring that the height did not exceed the level of the face of the participant to eliminate the looming effect. The movements in the intimate zone were very limited and controlled so as not to frighten or startle the participant. After initial contact with the participant, the robots in the emotive mode would slowly back away from the participant, showing concern and attentiveness, and throughout the interaction would remain oriented toward the participant at all times. In the personal zone, the robots operated in the emotive mode exhibited small to medium, controlled movements. The robots operated in the emotive mode exhibited behaviors that followed the prescriptive design recommendations for body movements, postures, and orientation as described in Section 3.
The under-carriages of both robots operated in the emotive mode were illuminated with light blue illuminated tape, requiring minimal modification to the robots (see Fig. 6). This was utilized to produce a calming effect, and light blue was selected based on an analysis of color found in the literature [8]. Though the light blue lighting effect was not mentioned by the participants as a factor in their evaluations during the course of the study, subject matter experts interacting with the robots in a pilot study indicated that the lighting effect was useful to illuminate the size and shape of the robot as it approached. Further research needs to be conducted to determine the actual impact of different illuminated colors based on proximity zone.

Site Design
The site used for the study was located in a research lab in the Department of Computer Science and Engineering at the University of South Florida. An indoor location was necessary to control temperature and sound because psychophysiology measurements were utilized in the study. Rubble was hauled from a demolished building on campus and placed to create a high-fidelity simulated disaster site (refer to Fig. 7). A confined-space wooden box was created with a vinyl cover to provide participants with the sense of being trapped in a collapsed building. The site was large enough to encompass all three proximity zones of interest, and there were features in the site to mark each of the proximity zones. The entire site was surrounded by black curtains to prevent participants from viewing the site prior to their interactions. The robots entered the site from behind the black curtains through a wooden chute, so that the robots would not be visible until the actual interaction began. All interactions occurred in the dark to simulate as closely as possible the conditions found in a collapsed building. Temperatures were kept constant around 65 °F for accurate physiological measures.

Methods of Measurement
The study utilized four methods of measurement: self-assessments, psychophysiology, video observation, and structured audio-recorded interviews. It is important to use three or more methods of measurement to establish convergent validity in the study results [27].

Self-Assessments
For this study, there were a total of twelve different self-assessment tools. The following details the assessments utilized:

2. Health Information − This assessment requested basic health information as it related to medications (prescription and over-the-counter), food intake, smoking, and caffeine use. This questionnaire was developed by the researcher for the purposes of this study to determine health status that might influence the physiological data.

3. Physical Activity Questionnaire [30] − This assessment measured the participants' physical activity levels. This questionnaire was utilized to determine the fitness level of participants that might influence their physiological data.

4. [31] − This assessment asked multiple questions on the quality and amount of sleep for participants over the previous month. This data was collected to determine if sleep patterns might influence participants' physiological data.

5. Perceived Stress Scale (PSS) [32] − This assessment measured the participants' levels of stress over the previous month. This data was collected to determine if stress was a factor influencing participants' physiological data.

6. Single Item Social Support (SIMSS) [33] − This consisted of one question: How many people do you have near that you can readily count on for real help in times of trouble or difficulty, such as to watch over children or pets, give rides to the hospital or store, or help if you are sick? It had a scale of 0, 1, 2−5, 6−9, 10 or more.

7. State-Trait Anxiety Inventory (STAI) [34] − Only the state portion of this standardized psychological assessment was used. This assessment allowed participants to respond to questions regarding how they felt prior to interacting with the robots. This provided a baseline of their current feelings and state. This assessment was given after each robot interaction, with the final assessment compared to the pre-interaction version to make sure participants returned to their previous state once experiments were concluded. This was checked for ethical reasons.

8. Positive and Negative Affect Schedule (PANAS) [35] − This is a standardized psychological assessment that allowed participants to rate their feelings and emotions on a five-point Likert scale. This instrument identified the strength of feelings and emotions prior to and following any robot interactions.

9. Self-Assessment Manikin (SAM) [36] − This assessment evaluated participants' experiences with the robots on a nine-point Likert scale for valence, arousal, and dominance. Approximately 25% of the participants were confused by the questions associated with the dominance dimension of this assessment tool; therefore only the valence and arousal dimensions were actually analyzed.
10. Robot Assessment − This assessment was developed to determine participants' feelings regarding four different aspects of the robot they experienced. The assessment evaluated the appearance, sounds, movements, and speed of the robot. Each criterion was evaluated on a nine-point Likert scale for valence, arousal, and dominance. Numerous participants verbally expressed confusion and questioned the wording associated with the dominance dimension (approximately 25% of participants); therefore this dimension was not utilized. This questionnaire was developed for the purposes of this study as a distractor so that participants would not infer the true purpose of the study.
11. Observations of the Robot [37] − This assessment was modified from the original version to correspond with this human-robot study. Several questions were removed because the original assessment was geared toward a humanoid robot and they were not applicable to this human-robot study.
12. Overall Study Assessment [37] − This assessment was developed to assess participants' thoughts regarding the study as a whole. It was administered at the end of the study prior to debriefing participants on the purpose of the study.
The state version of the State-Trait Anxiety Inventory (STAI) and the Positive and Negative Affect Schedule (PANAS) were administered before the robot interactions and then again after each robot interaction to determine any possible changes in the participants' level of anxiety and to monitor their state of mind throughout the study for ethical purposes. Significant results will be presented in Section 5 for the valence and arousal dimensions of the Self-Assessment Manikin (SAM) and the Observations of the Robot assessments. Analyses of the data from the other assessments are ongoing and will not be presented as part of this article.

Psychophysiology Measures
For this study there were four psychophysiological measures utilized: electrocardiography (ECG or EKG), respiration (thoracic and abdominal), blood volume pulse (BVP), and skin conductance levels (SCL) (for more information on these measures and conducting psychophysiological studies refer to [38,39]). Psychophysiological measures can be useful to measure stress and arousal levels. One major issue with these measures is that researchers tend to attribute too much meaning to these tangible signals. These measures can be useful because they are based on the autonomic nervous system and the results cannot be influenced or modified by the participant.

Structured Interview
After both interactions were completed, participants were interviewed individually to determine their feelings regarding their interactions with both robots. The structured interviews were audio-recorded after appropriate audio-video consent was obtained. Analyses of these qualitative measures are ongoing and will not be presented as part of this article.

Video Observations
Infrared night vision cameras were utilized because the robot interactions occurred in the dark. Images were obtained from four camera perspectives: robot view, face view, participant view, and overhead view (refer to Fig. 8). The analysis of the video observations is ongoing and results will not be reported as part of this article.

Study Tasks
The study was divided into three sections: (1) pre-interaction tasks, (2) interaction tasks, and (3) post-interaction tasks. The entire study required approximately 90 minutes per participant to conduct with data collection occurring over a period of five weeks.

Pre-Interaction Tasks
The pre-interaction tasks consisted of having each participant read and sign the informed and audio-video consent forms as required by the Institutional Review Board, an oversight body at most universities in the United States that ensures the safety of human participants involved in studies. Participants were read instructions regarding what they could expect while participating in the study. They were provided opportunities to ask questions and could decide to quit at any point during the study. Following the consent and instruction process, each participant completed self-assessments 1–8 described in subsubsection 4.6.1, subsection 4.6.
After completing the self-assessments, the psychophysiology sensors and control unit were attached to the participant, who was then requested to rest on the right side on a cot located outside of the experiment site so that baseline recordings could be obtained and all the equipment verified to be working properly. While the baseline recordings were obtained, participants listened to quiet instrumental music to establish a resting baseline measurement from all the sensors. After these measurements were obtained, the participants were taken into the simulated disaster site and placed into the confined-space wooden box (see Fig. 7). Another set of baseline psychophysiology measurements was taken prior to any robot interactions to determine whether the participants were experiencing any physiological effects from being in the confined space.

Interaction Tasks
After baseline measurements were completed, the lights in the study site were turned off and the first of two robots emerged from the wooden chute. Both robots were operated in either the standard mode or the emotive mode during the interactions with participants. Psychophysiology and video recordings were obtained during the interactions. Following each interaction, participants were requested to complete assessments 7–11 described in subsubsection 4.6.1, subsection 4.6. This provided a mental task to distract participants from thinking about the interactions and to reduce practice effects from the within-participants factors of the study. After completing the assessments, participants were requested to return to the prone position on their right side, and another set of baseline psychophysiology measures was recorded to ensure the participants had returned to their prior baseline levels before the next robot interaction. This process was repeated with the second robot, followed by completion of another set of assessments 7–11.

Post-Interaction Tasks
Following the completion of the second set of assessments 7–11, the participants were removed from the confined-space wooden box and all the psychophysiology equipment was removed. The participants were then requested to return to the pre-interaction area of the study site for the structured interview. After completing the interview, participants were debriefed on the actual purpose of the study and were requested to complete the follow-up assessment 12 described in subsubsection 4.6.1, subsection 4.6. Following the completion of this assessment, the participants were free to leave the study site and were requested not to share any information associated with their experiences in the study in order to protect the integrity of the research.

Results
The results from this study indicate that robots operated in the emotive mode were perceived as more calming, friendly, and attentive than robots operated in the standard mode. A statistically significant three-way interaction that depended on the order of presentation of the robots was obtained for the valence (positive vs. negative) dimension of the Self-Assessment Manikin (SAM).

Results Related to Arousal
Arousal is a measure of calm compared to excitement. In this study arousal was measured using the Self-Assessment Manikin (SAM) assessment and through psychophysiology measures. A doubly multivariate analysis of variance (MANOVA) was conducted for the arousal dimension of the SAM assessment. The results indicate a statistically significant main effect (α = 0.05) of operating mode on arousal, F(1, 123) = 12.05, p = .001 (refer to Fig. 9). Participants that viewed both robots operated in the emotive mode reported feeling calmer than participants that viewed both robots operated in the standard mode. The results from the psychophysiology data indicated a statistically significant two-way interaction (α = 0.05) of operating mode and robot order on respiration rates, F(1, 122) = 5.76, p = .018 (refer to Fig. 10). When participants viewed the Inuktun Extreme-VGTV first followed by the iRobot Packbot Scout, participants that interacted with both robots in the emotive mode had lower respiration rates than participants that interacted with both robots in the standard mode. Lower respiration rates typically indicate lower arousal levels, with the person feeling calmer. However, when the iRobot Packbot Scout was viewed first followed by the Inuktun Extreme-VGTV, participants exhibited no statistically significant differences in respiration rates during the interactions. The order in which the robots were presented impacted most of the psychophysiology results, and a significant habituation effect was observed after the initial interactions. These findings will be discussed further in the Discussion Section 6. The results from the SAM assessment support the first hypothesis presented in Section 4: participants reported feeling calmer when they interacted with robots that, following the prescriptive design recommendations, expressed social affect appropriate to their proximity to the humans with whom they were interacting.
The respiration rate results partially supported this hypothesis as well, though more research should be conducted to validate these findings.
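To make the reported mode × order interaction concrete, the sketch below computes the interaction F statistic for a balanced two-way between-participants design by hand. The respiration-rate numbers, cell sizes, and effect pattern are hypothetical, chosen only to mimic the shape of the reported result (an emotive-mode benefit when the Inuktun is seen first); this is not the study's doubly multivariate analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 32  # hypothetical participants per cell of a balanced 2x2 design

# Hypothetical respiration rates (breaths/min): the emotive mode lowers
# respiration only when the Inuktun is viewed first.
cells = {
    ("emotive", "inuktun_first"): rng.normal(14.0, 2.0, n),
    ("standard", "inuktun_first"): rng.normal(16.5, 2.0, n),
    ("emotive", "packbot_first"): rng.normal(16.0, 2.0, n),
    ("standard", "packbot_first"): rng.normal(16.0, 2.0, n),
}
data = np.stack([cells[m, o]
                 for m in ("emotive", "standard")
                 for o in ("inuktun_first", "packbot_first")]).reshape(2, 2, n)

grand = data.mean()
mode_means = data.mean(axis=(1, 2), keepdims=True)    # shape (2, 1, 1)
order_means = data.mean(axis=(0, 2), keepdims=True)   # shape (1, 2, 1)
cell_means = data.mean(axis=2, keepdims=True)         # shape (2, 2, 1)

# Balanced two-way ANOVA sums of squares for the interaction term.
ss_inter = n * ((cell_means - mode_means - order_means + grand) ** 2).sum()
ss_within = ((data - cell_means) ** 2).sum()
df_within = data.size - 4                             # 128 - 4 cells = 124
F_inter = (ss_inter / 1) / (ss_within / df_within)
print(f"interaction F(1, {df_within}) = {F_inter:.2f}")
```

With 32 participants per cell this yields the same F(1, 124)-style test structure as the paper's between-participants factors, which is why an order-dependent mode effect shows up as an interaction rather than a main effect.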

Results Related to Valence
Valence is a measure of how positively or negatively participants responded to their interactions with both robots. The Self-Assessment Manikin (SAM) assessment was used to determine how positive or negative participants reported feeling about their interactions with the robots. A MANOVA was conducted for the valence dimension of the SAM assessment. The results indicated a statistically significant three-way interaction (α = 0.05) of operating mode, robot order, and robot type on valence, F(1, 123) = 4.50, p = .036 (refer to Fig. 11). Overall, participants were more positive toward the Inuktun than toward the Packbot. However, this difference was more pronounced for participants who saw the Inuktun Extreme-VGTV first followed by the iRobot Packbot Scout in the emotive mode. Further, when participants viewed the Inuktun Extreme-VGTV first followed by the iRobot Packbot Scout, in either operating mode, the overall valence scores for both robots were lower. This difference was most pronounced for the iRobot Packbot Scout when it was seen in the emotive mode. There was no significant difference in the valence responses based on the operating mode of the robots alone. The results from the SAM assessment for valence were complex; however, they partially support the second hypothesis presented in Section 4. Further research needs to be conducted on how positively humans view interactions with robots that express affect appropriate to their proximity to the humans with whom they are interacting.

Other Results
There were two statistically significant results related to the Observation of the Robot assessment. There was a statistically significant main effect (α = 0.05) of operating mode on friendliness (unfriendly versus friendly), F(1, 124) = 5.631, p = .019 (refer to Fig. 12). Participants that interacted with both robots operated in the emotive mode perceived the robots as more friendly than participants that interacted with both robots in the standard mode.
There was also a statistically significant main effect (α = 0.05) of operating mode on the question "How much did the robot look at you?", answered in percentages (0%–100% in 10% increments), F(1, 124) = 6.491, p = .012 (refer to Fig. 13). Participants exposed to both robots operated in the emotive mode reported that the robots spent more time looking at them than participants that viewed both robots in the standard mode. These results indicate that participants that interacted with both robots operated in the emotive mode perceived the robots as exhibiting a higher level of attentiveness than participants that interacted with both robots operated in the standard mode.
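Because operating mode was a two-level between-participants factor, each of these main effects has the structure of a two-group F test. The sketch below reproduces that structure on hypothetical friendliness ratings; the group sizes, means, and spread are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical 7-point friendliness ratings (1 = unfriendly,
# 7 = friendly) for the two between-participants groups.
rng = np.random.default_rng(3)
emotive = rng.normal(5.1, 1.3, 64).clip(1, 7)
standard = rng.normal(4.4, 1.3, 64).clip(1, 7)

# With two groups, a one-way ANOVA yields the F(1, N - 2) test form
# used above for the operating-mode main effects.
F, p = f_oneway(emotive, standard)
print(f"F(1, {emotive.size + standard.size - 2}) = {F:.2f}, p = {p:.3f}")
```

For two groups this F test is equivalent to the square of an independent-samples t test, which is why the numerator degrees of freedom are always 1.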

Discussion
The results from this research revealed several trends and insights associated with non-facial and non-verbal social human-robot interaction. Humans reported feeling calmer when interacting with robots that were programmed to operate in an emotive mode (i.e., the robots exhibited behaviors consistent with the prescriptive design recommendations), with some support evident in lower respiration rates for certain orderings of the robots. Participants tended to calibrate their responses to the robots based on their first robot encounter, and they perceived the robots operated in the emotive mode as more friendly and attentive. Robots are expected to play increasingly significant roles in our daily lives, so it is important to gain a better understanding of how humans will respond to the robots with which they interact.
As evidenced by the results, it is apparent that non-facial and non-verbal affective expression does impact social human-robot interaction, both in the urban search and rescue application and in other applications where it is important to keep humans calm during social human-robot interaction. This was observed strongly in the arousal responses and to some degree in the valence responses of participants in the research study. The perception of friendliness and attentiveness of the robots operated in the emotive mode is also an important factor in social human-robot interactions.

Discussion Related to Arousal Results
The results for arousal from the SAM assessment indicate that movements, posture, orientation, and possibly illuminated color can make a difference in social human-robot interaction when attempting to elicit a calming response from humans toward the robots with which they interact. The most significant findings for psychophysiology were discovered in the respiration rate. The respiration rate results indicate that participants responded more calmly to the Inuktun Extreme-VGTV in the emotive mode than in the standard mode when the Inuktun was viewed first. There was no significant difference in the respiration rates when the Packbot was viewed first, regardless of operating mode. Participants appeared to exhibit different physiological response patterns across operating modes only for the Inuktun, not the Packbot. Further studies need to be conducted to determine if this is a consistent pattern. One purpose of this study was to determine if there is a better way to operate robots used in urban search and rescue (US&R) operations to keep victims calmer until assistance arrives to extricate them from the disaster site. The results support the conclusion that appropriate robot movements, posture, orientation, and possibly illuminated color keep victims calmer until help can arrive, in comparison to how robots have typically been operated in US&R training exercises.

Discussion Related to Valence Results
The most significant result associated with valence is the three-way interaction between robot type, robot order, and operating mode. In all cases, the Inuktun was viewed more positively than the Packbot regardless of operating mode (standard or emotive) and robot order (Inuktun first or Packbot first); however, the difference between the robots was more notable when they operated in the emotive mode and the Inuktun was viewed first, followed by the Packbot. An explanation might be that participants found the interactions with the Inuktun in the emotive mode positive, but when they subsequently interacted with the Packbot in the emotive mode, the Packbot's larger size and its lack of perceived eyes (the halogen lights on the camera face of the Inuktun) when operated in the dark may have come across as noticeably different, impacting participants' responses. This difference may have been more apparent in the emotive mode because both robots moved more slowly and at similar speeds, making it easier to distinguish the differences between the robots. In the standard mode, both robots moved quickly and their movements were erratic in nature, which may have distracted participants from paying attention to the actual differences between the robots. The results from the three-way interaction for valence indicate that all three factors, operating mode, robot order, and robot type, have significant effects on how positively or negatively humans feel about their interactions with the particular robots used in this study; these effects may translate to other types of robots, though more research would need to be conducted to make a definitive determination. These factors appear to be inter-related and should be considered when developing robot systems and evaluating social human-robot interactions.

Discussion of Factors Related to the Psychophysiology Results
The results from the psychophysiology measures were not highly indicative of the participants' arousal levels. Psychophysiology measures levels of arousal and can only detect arousal when it is genuinely present; because such bodily responses cannot be simulated or consciously manipulated by participants, they are one of the justifications for using psychophysiology measurements in research studies. The problem in this study was that many participants appeared to feel "too" safe in the simulated disaster site during the robot interactions. Although the site was made to look as realistic as possible, safety measures were put in place to make sure participants were not harmed in case of a problem with the robots or if rubble should become dislodged.
Another possible factor that may have influenced the physiological responses of the participants was having them placed in the prone position. In typical psychophysiological studies, participants are placed in a seated position and usually are given complicated mental tasks to perform. There are no data on the effects of placing participants in the prone position in psychophysiological studies; therefore it is difficult to know if this is an appropriate placement for obtaining accurate psychophysiological measurements. Additionally, participants were not given any mental tasks to perform and were in an observational mode during the robot interactions when the recordings were obtained. The impact of participant placement and mental activity on physiological responses requires further exploration.

Conclusions
As part of this research, a set of prescriptive recommendations was developed for determining which method of non-facial and non-verbal affective expression is appropriate to use based on the interagent distance, or proximity, between the robot and the human with whom it is interacting. These prescriptive recommendations form a de facto toolbox that provides one mechanism for roboticists to add affect retroactively to a robot through software changes, reducing or eliminating the need for physical modifications or for designing a new robot. These methods of affective expression can be used in appearance-constrained, non-anthropomorphic, and anthropomorphic robots to add affective expression through body movements, postures, orientation, color, and/or sound. Further investigative studies should be conducted to confirm the recommendations for color and sound. The results from this study do imply that the way a robot moves, the postures it displays, and its orientation can make a significant difference in how humans respond to the robot with which they are interacting, specifically in the urban search and rescue domain and in other domains requiring a calming response from humans interacting with robots, such as assistive robotic applications. Although more research should be conducted, the results presented in this article provide preliminary validation of the body movements, postures, and orientation guidelines presented in the prescriptive design recommendations. Participants reported that they felt the robots were more friendly and attentive when operated in the emotive mode than did the participants that viewed both robots operated in the standard mode. These results indicate that humans are more likely to feel comfortable with robots that follow the prescriptive design recommendations, as exhibited by the robots operated in the emotive mode.
The use of non-facial and non-verbal affective expression can be beneficial in establishing social human-robot interactions for appearance-constrained robots, especially robots used in robot-assisted search and rescue or in interactions that require a calming response from the humans that interact with robots. Based on the results of this study, it is clear that if an appearance-constrained robot is expected to exhibit caution and interest in the person with whom it is interacting, slow, controlled, and minimal movements should be displayed, especially in the intimate zone. The postures exhibited in the intimate zone (contact-0.46 m) should be kept lower than the person's face to avoid looming effects during the interaction. If the robot is expected to show interest, attentiveness, and caring to the person with whom it is interacting, then it should remain oriented toward the person during the interactions regardless of the proximity zone. It is less clear whether the illuminated light blue lighting effect calmed participants or how effective it was as a method of affective expression. The colored neon light was visible to participants; however, it is not clear whether it elicited a calming response. Further research needs to be conducted on the impact of color, and more specifically illuminated color, on social human-robot interactions. The emotive body movements, postures, and orientation developed as part of this research are now being incorporated into robot-assisted urban search and rescue protocols for use in victim management. The use of emotive behaviors produces a calming effect and lowers arousal responses, and it is expected to reduce the impact and incidence of shock observed in victims of natural and man-made disasters.