Article

How Does Exposure to Changing Opinions or Reaffirmation Opinions Influence the Thoughts of Observers and Their Trust in Robot Discussions?

1 Interaction Science Laboratories, ATR, Seika-cho 619-0237, Japan
2 Department of Information Systems Design, Doshisha University, Kyotanabe 610-0321, Japan
3 Faculty of Culture and Information Science, Doshisha University, Kyotanabe 610-0321, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 585; https://doi.org/10.3390/app13010585
Submission received: 21 October 2022 / Revised: 8 December 2022 / Accepted: 30 December 2022 / Published: 31 December 2022
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Featured Application

Investigating the effects of observing discussions between social robots. A potential application is the design of social robot behaviors in scenarios where robots provide information via conversations.

Abstract

This study investigated how exposure to changing or reaffirming opinions in robot conversations influences observers' impressions and their trust in media. Even when conversational contents include the same information, their order, the speakers' positive/negative attitudes, and the discussion style change the impressions they convey. We conducted a web survey using video stimuli, in which two robots discussed Japan's first state-of-emergency response to the COVID-19 pandemic. We prepared two patterns of opinion changes to a different side (positive–negative and negative–positive) and two patterns of opinion reaffirmation (positive–positive and negative–negative) with identical information contents; we only modified their order. The experimental results showed that exposure to opinion changes to the positive side (i.e., negative–positive) or positive opinion reaffirmation (positive–positive) effectively provided positive and fair impressions. Exposure to opinions that shifted to the negative side (i.e., positive–negative) also effectively provided negative and fair impressions, although negative opinion reaffirmation (negative–negative) led to significantly less trust in media.

1. Introduction

Robots played an important role in preventing infection during the COVID-19 pandemic, for example in disinfection [1,2], delivery [3,4], and therapy [5,6]. As interaction opportunities between robots and people continue to grow, so does the importance of understanding the social impact of human–robot interactions. For example, a previous study reported that social robots are a medium for providing information to users in public environments [7]. Such information-providing roles would be one essential application for social robots. Recent studies argued that multiple robots can induce behavior changes in humans [8,9]. Another study reported that observing conversations between two robots attracts people's attention more than explanations from just one robot [10]. From a related perspective, social praise from multiple agents improves motor skills more effectively than praise from a single agent [11]. Based on these findings, using multiple robots as a form of media, e.g., conversations and discussions among robots, is a promising way to provide information during COVID-19 in the context of preventing infections, similar to remote-meeting applications.
For this purpose, investigating how robot conversations influence people's thinking is important for designing their contents in advance. Previous studies reported that the sequence of information greatly influences people, a finding known as the primacy/recency effect [12,13]. The primacy effect suggests that people are more strongly influenced by information presented at the start than by information presented in the middle or at the end [12]. The recency effect is the opposite phenomenon: people are more influenced by information presented at the conclusion of an interaction than by information presented at its beginning [13]. These effects have been broadly investigated [14,15,16,17,18,19,20,21,22], and researchers in human–computer interaction recently investigated them in cases where the information providers are not humans [23,24,25,26]. The results suggest that primacy/recency effects exist in human–computer interaction contexts, and the recency effects are sometimes larger than the primacy effects [25,26].
In addition to the recency effects, we believe that another factor influences people's thinking in a discussion context: opinion changes and reaffirmations. Suppose a speaker changes her opinion during a discussion with another speaker who holds a different view. Observers might be more influenced by this change than by a simple reaffirmation of identical opinions. Moreover, even when the amount of information is the same, showing opinion changes may elicit more trust than merely reaffirming identical opinions. When robots provide information during discussions, clarifying the effects of opinion changes and reaffirmations, from the viewpoint of both impression changes and trust, provides important knowledge for using robots as a medium and designing their contents.
In this study, we investigated how exposure to opinion changes and reaffirmations in robot conversations influenced observer impressions. We conducted a web survey using video stimuli (Figure 1), where two robots discussed Japan’s first state of emergency during the pandemic. We compared participant attitudes toward these topics before/after watching video stimuli and their perceived media trust [27] in the videos. We prepared two patterns of opinion changes to a different side (positive–negative and negative–positive) and two patterns of opinion reaffirmation (positive–positive and negative–negative) with identical information contents, and only modified their order.
The paper’s organization is as follows: Section 2 describes the materials and methods, Section 3 presents the experimental results, Section 4 discusses them, and Section 5 concludes.

2. Materials and Methods

All the procedures were approved by the Advanced Telecommunication Research Institute review board (20-501-3). We employed a one-factor between-subjects design in which each participant was randomly assigned to one of four opinion conditions: positive–negative, negative–positive, positive–positive, or negative–negative.

2.1. Visual Stimulus and Conditions

We recorded the videos with two small humanoid robots. We used Sota, developed by VSTONE Inc., which has three degrees of freedom (DOFs) in its head, one DOF in each shoulder, one DOF in each elbow, and one in its base. It is 28 cm tall and weighs 763 g. The robots' positions were identical in all the videos. The left-side robot started the conversations and eventually reaffirmed or agreed with the right-side robot. Therefore, in the positive–positive and negative–negative conditions, the left- and right-side robots had identical attitudes toward the conversational topic, and the left-side robot eventually reaffirmed the right-side robot's opinion. In the positive–negative and negative–positive conditions, the two robots had different attitudes toward the conversational topic, although the left-side robot eventually agreed with the right-side robot. Each video's resolution was 1280 × 720 pixels at 29.97 fps. During the discussion, the robots faced each other and maintained slight idling movements of their arms and heads. To indicate which robot was speaking, a light-emitting diode (LED) on its mouth blinked based on voice volume. The conversation scripts for all the conditions are reproduced below:

2.1.1. Two-Robot Discussion: Positive–Negative

Left: In mid-April, the government declared a state of emergency throughout Japan. I think that decision helped control the spread of the coronavirus.
Right: Yes, I know. However, that announcement came out too late. The number of infected people in Japan is very low compared to other countries around the world, although that number is still increasing. And of course, the death toll is too high.
Left: That’s true. But due to the government’s declaration, people stayed home as much as possible, so I think we controlled the virus’ spread to some extent.
Right: However, the economy has suffered so much that some companies have gone bankrupt.
Left: Yes, that’s true. I used to think that the government’s declaration was good, but now I think you’re right; it was bad.

2.1.2. Two-Robot Discussion: Negative–Positive

Left: In mid-April, the government declared a state of emergency. But I think it came too late.
Right: I agree. However, thanks to that announcement, we controlled the spread of the coronavirus to some extent. The number of infected people is still increasing, and the death toll is too high, although the number of infected people in Japan remains low compared to other countries.
Left: That’s true. But I think the economy has suffered so much that some companies have gone bankrupt.
Right: However, due to the government’s declaration, people stayed home as much as possible, so I think we controlled the spread of the virus to some extent.
Left: That’s true. I used to think that the government’s declaration was bad, but now I think you’re right; it was good.

2.1.3. Two-Robot Discussion: Positive–Positive

Left: In mid-April, the government declared a state of emergency. That decision came too late, but I think it helped control the spread of the coronavirus.
Right: That’s right. The number of infected people is still increasing, and the death toll is also too high. However, the number of infected people in Japan remains low compared to other countries.
Left: That’s true. And the economy has suffered so badly that some companies have gone bankrupt.
Right: However, due to the government’s declaration, people stayed home as much as possible, so I think we controlled the spread of the virus.
Left: That’s true. I used to think the declaration was good, and you’re right. I still think it was good.

2.1.4. Two-Robot Discussion: Negative–Negative

Left: In mid-April, the government declared a state of emergency. That decision helped control the spread of coronavirus, but I think it came too late.
Right: That’s right. The number of infected people in Japan is very low compared to other countries. However, that number is still increasing, and the death toll remains too high.
Left: That’s true. Due to the government’s declaration, people stayed home as much as possible, so the spread of the virus was controlled.
Right: However, the economy has suffered so badly that some companies have gone bankrupt.
Left: That’s true. I used to think that the declaration was bad, and you’re right, I still think it was bad.

2.2. Measurement

We asked the following questionnaire item to investigate our participants’ perceived feelings about Japan’s first state of emergency before and after watching the video and calculated the differences: “Please choose the answer that best describes your impression of Japan’s first declaration of a state of emergency in April 2020.” This survey was conducted during February and March 2021, in the period of Japan’s second officially announced state of emergency (six months after its first emergency declaration). At this time, the advantages and disadvantages of the first state of emergency were contentious, and its impact on infection prevention and economic activities was unknown. Because both positive and negative information about it remained valid, we believed that grappling with a current issue was crucial for examining how people’s opinions change depending on the presentation of information.
We used a one-to-seven response format, where one is the most negative and seven is the most positive. We also measured their impressions with a media trust scale consisting of the following five items, modified from the descriptions in previous work [27], where Cronbach’s alpha was 0.837:
- The robots fairly covered the first state of emergency.
- The robots unbiasedly covered the first state of emergency.
- The robots told the whole story when covering the first state of emergency.
- The robots accurately covered the first state of emergency.
- The robots separated facts from opinions when covering the first state of emergency.
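
As a minimal sketch of how a scale's internal consistency is quantified, the following computes Cronbach's alpha (the ratio-of-variances formula); the six respondents' ratings below are hypothetical, invented for illustration, and are not the study's data.

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Hypothetical ratings only; the study reported alpha = 0.837 for its five items.

def variance(xs):
    """Sample variance (ddof = 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(ratings):
    """ratings: list of respondents, each a list of k item scores."""
    k = len(ratings[0])
    items = list(zip(*ratings))          # transpose to per-item columns
    totals = [sum(r) for r in ratings]   # each respondent's total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical ratings from six respondents on five trust items
ratings = [
    [5, 5, 4, 5, 4],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
    [5, 4, 5, 5, 5],
    [1, 2, 1, 2, 1],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.969
```

A high alpha here simply reflects that the invented respondents answer all items consistently; the study's 0.837 indicates good internal consistency for its real data.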

2.3. Participants

The experiment was conducted using the participant pools of a Japanese survey company. A total of 408 people (239 females, 167 males, 2 declined to specify; average age was 39.00) participated in our experiment. The screening process winnowed that number to 244 valid participants (144 females, 100 males; average age of 39.11). We did not measure the participants’ social status.

2.4. Procedure

First, the participants read explanations of the experiment and how to evaluate each video; we then verified that they could clearly hear the video’s audio. Next, they answered questions about their perceptions of Japan’s first state of emergency, observed a video of the assigned condition, and again described their impressions of Japan’s first state of emergency as well as their trust in media. Finally, they answered dummy questions that determined how carefully they watched the video and verified the quality of their answers, because past research reported the need for screening participants in web surveys [28,29].

3. Results

3.1. Questionnaire Results about the Difference in Perceived Impressions

Figure 2 shows the difference in perceived impressions before/after watching the video stimuli. A plus value denotes positive changes after watching the videos. We conducted a one-factor analysis of variance (ANOVA) for the opinion factor and identified a significant main effect (F(3, 240) = 6.554, p < 0.001, partial η2 = 0.076). Multiple comparisons with the Bonferroni method identified the following results: negative–positive > positive–negative (p = 0.008), positive–positive > positive–negative (p < 0.001), and positive–positive > negative–negative (p < 0.001).
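
The analysis pipeline can be sketched in a few lines. The following is a minimal illustration of a one-factor between-subjects ANOVA and the Bonferroni idea behind the post-hoc comparisons; the attitude-difference scores are small hypothetical values invented for illustration (the study itself had 244 participants).

```python
# One-factor between-subjects ANOVA by hand. For a one-factor design,
# partial eta^2 = SS_between / (SS_between + SS_within).

def one_way_anova(groups):
    """Return (F, partial eta squared) for a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    group_means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, ss_between / (ss_between + ss_within)

# Hypothetical attitude-difference scores, five observers per condition
groups = {
    "positive-negative": [-2, -1, -1, 0, -2],
    "negative-positive": [1, 2, 0, 1, 1],
    "positive-positive": [2, 1, 1, 2, 1],
    "negative-negative": [0, -1, 0, -1, 0],
}
f, eta = one_way_anova(list(groups.values()))
print(f"F = {f:.2f}, partial eta^2 = {eta:.3f}")  # F = 16.30, partial eta^2 = 0.753

# Bonferroni correction: 4 conditions give 6 pairwise comparisons, so each
# raw p-value is multiplied by 6 (capped at 1.0) before testing at 0.05.
def bonferroni(p_raw, n_comparisons=6):
    return min(p_raw * n_comparisons, 1.0)
```

The raw pairwise p-values themselves come from t-tests between group pairs (statistical packages compute these); only the multiplicative adjustment is shown here.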

3.2. Questionnaire Results about Perceived Media Trust

Figure 3 shows the perceived media trust. We also conducted a one-factor ANOVA for the opinion factor and identified a significant main effect (F(3, 240) = 8.620, p < 0.001, partial η2 = 0.097). Multiple comparisons with the Bonferroni method identified the following results: positive–negative > negative–negative (p = 0.020), negative–positive > negative–negative (p < 0.001), and positive–positive > negative–negative (p < 0.001).

3.3. How Many People Changed Their Opinions?

Table 1 shows the number of people who did or did not change their opinions. A chi-square test revealed significant differences among the conditions: χ2(6) = 26.481, p < 0.01, φ = 0.233. Residual analysis revealed that in the positive–negative condition, significantly fewer people changed their opinion from a negative to a positive view, and significantly more changed in the opposite direction. In the positive–positive condition, significantly more people changed their negative opinions to positive views, and significantly fewer changed their positive opinions to negative ones.
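
The reported statistic can be reproduced directly from Table 1's counts with no external libraries; the sketch below also computes adjusted standardized residuals, the usual basis for this kind of residual analysis (variable names are ours). Note that the reported effect size (φ = 0.233) matches Cramér's V for this 4 × 3 table.

```python
import math

# Observed counts from Table 1 (rows: conditions; columns: no change,
# positively changed, negatively changed).
observed = [
    [28, 5, 23],   # positive-negative
    [43, 12, 10],  # negative-positive
    [42, 19, 5],   # positive-positive
    [31, 9, 17],   # negative-negative
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected counts under independence and the chi-square statistic
expected = [[r * c / n for c in col_totals] for r in row_totals]
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))
dof = (len(observed) - 1) * (len(observed[0]) - 1)
cramers_v = math.sqrt(chi2 / (n * min(len(observed) - 1, len(observed[0]) - 1)))

# Adjusted standardized residuals; |residual| > 1.96 flags a cell that
# deviates significantly from independence (two-tailed, alpha = 0.05)
adj_residuals = [
    [(observed[i][j] - expected[i][j])
     / math.sqrt(expected[i][j]
                 * (1 - row_totals[i] / n) * (1 - col_totals[j] / n))
     for j in range(len(col_totals))]
    for i in range(len(row_totals))
]

print(f"chi2({dof}) = {chi2:.3f}, Cramer's V = {cramers_v:.3f}")
# prints: chi2(6) = 26.481, Cramer's V = 0.233
```

The residual for the "negatively changed" cell of the positive–negative condition and the "positively changed" cell of the positive–positive condition both exceed 1.96, matching the residual analysis described above.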

3.4. Summary

Our experimental results showed that exposure to a changing or a reaffirming opinion in robot conversations influenced the impressions of our participants and their trust in media, although each condition used identical information contents (in different sequences). Exposure to opinion changes to the positive side (i.e., negative–positive) or positive opinion reaffirmation (positive–positive) effectively provided positive and fair impressions. Exposure to opinions that shifted to the negative side (i.e., positive–negative) effectively provided negative and fair impressions, whereas negative opinion reaffirmation (negative–negative) resulted in significantly lower perceived media trust.

4. Discussion

4.1. Design Implications

A key implication of our research is that people’s impressions and perceived trust in media are changed by exposure to different consensus-building processes, e.g., changing or reaffirming opinions. In today’s society, we receive information from a variety of media sources, including social media. Although obtaining information from diverse perspectives is critical to prevent mental rigidity, our experimental results suggest that observers’ opinions can be shifted to the opposite side, while the content is still perceived as fair, by showing consensus-building from both sides of an argument. Our experimental results also provide evidence that the recency effect is larger than the primacy effect, similar to previous studies [25,26]. Regardless of whether opinions changed or were reaffirmed in the robot conversations, participants’ impressions became positive when the robots’ opinion was positive at the end, and negative when it was negative at the end.
These results suggest different viewpoints for media content designers and consumers. For the former, content designed to provide negative impressions of a specific topic will gain more perceived trust from its consumers if it includes a speaker who changes her opinion from positive to negative; in other words, an argument that only affirms negative opinions lowers media trust. Note that these design guidelines must not be exploited for deceitful purposes, e.g., intentionally creating negative impressions of specific topics, as with fake news. For the latter, to resist such deliberate impression manipulation, it is useful to understand that people’s impressions are easily swayed by the order of information as well as by observing opinion changes.

4.2. Gender Effects

In the analysis of the experiment results above, we did not investigate whether participants’ genders influenced opinion changes and perceived media trust. As an additional analysis, we conducted a two-factor ANOVA with the opinion and gender factors. However, there were no significant main effects of gender on either the perceived impressions (p = 0.549) or the perceived media trust (p = 0.444), and no significant interaction effects (perceived impressions: p = 0.258; perceived media trust: p = 0.504). Therefore, at least in our experiment, the results did not reveal any gender effects in the context of opinion changes.

4.3. Limitations and Future Works

This study suffers from several limitations, including the use of a specific robot (i.e., Sota) and the concentration on a specific conversational topic (i.e., Japan’s state of emergency). Our future work will investigate the influence of recency effects as well as opinion changes with different conversational topics, such as more general or personalized conversations.
In addition, using different entities (e.g., different robots or human speakers) would make for interesting future work. We did not conduct the experiment with human speakers in this study to avoid their various characteristics and effects, such as appearance, age, perceived authority, and so on. If we conducted an experiment with human speakers, the power of opinion changes might be stronger than with robot speakers, but the trends across conditions would likely be the same because people regard social robots as “social others” with social influences similar to those of human beings [30,31,32].
From another perspective, trust relationships may also influence opinion changes. In human–robot interaction studies, building trust relationships between robots and people is an active research topic [33,34,35,36]. Moreover, past studies reported how trusted robots are effective in the context of persuasion [37,38,39]. Therefore, people may change their opinions to a greater extent when they observe the opinions of robots they trust change.
Since we only conducted our experiment with Japanese participants, generality and cultural differences must also be considered. Although the primacy/recency effects have been investigated in depth worldwide, cultural differences have received less focus. A few studies reported that Americans showed a stronger primacy effect than Asians [40,41]; therefore, conducting our study in different countries might illuminate cultural differences. The relationship between showing opinion changes and perceived trust across cultures also remains unknown.

5. Conclusions

In this study, we described how observing changing opinions or reaffirmation of opinions in robot conversations influences the observer’s impressions and trust toward conversational topics. For this purpose, we conducted a web survey with 408 participants. The experimental results with 244 valid participants showed that people’s impressions and their perceived trust in media are significantly changed by exposure to different consensus-building processes, although each condition used identical information contents. These results also provided additional evidence that the recency effect is larger than the primacy effect in the context of conversational robot media.
These results indicate which conversational strategies are more effective for providing specific impressions in trusted ways. To provide positive impressions, showing both opinion changes to the positive side (i.e., negative–positive) and positive opinion reaffirmation (positive–positive) is effective; about one-fifth to one-quarter of participants changed their opinion positively. On the other hand, for providing negative impressions, showing opinion changes to the negative side (i.e., positive–negative; about 40% of participants changed their opinion negatively) is more effective than negative opinion reaffirmation (negative–negative) in the context of media trust.
These results can contribute to enhancing content design for information from various kinds of media sources, such as TV and social networking services. Additionally, this knowledge is useful for studying impression changes induced by media senders, as it provides an understanding of people’s perception biases.

Author Contributions

Conceptualization, M.S.; data curation, H.I., M.S.; formal analysis, M.S.; funding acquisition, M.S.; investigation, M.K., T.I., M.S.; methodology, H.I., M.K., T.I., M.S.; project administration, M.S.; resources, K.S.; supervision, M.K., T.I., M.S.; validation, M.K., T.I., M.S.; visualization, M.S.; writing: original draft preparation, M.S.; writing: review and editing, H.I., M.K., T.I., K.S., M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was supported in part by JST CREST Grant Number JPMJCR18A1, Japan, and by JSPS KAKENHI Grant Numbers JP20K19897 and JP22H03895.

Institutional Review Board Statement

The study was approved by the ethics committee at the Advanced Telecommunication Research Institute (ATR) (20-501-3).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study can be obtained on request from the corresponding author.

Acknowledgments

We thank the participants of our experiments.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. McGinn, C.; Scott, R.; Donnelly, N.; Roberts, K.L.; Bogue, M.; Kiernan, C.; Beckett, M. Exploring the Applicability of Robot-Assisted UV Disinfection in Radiology. Front. Robot. AI 2021, 7, 590306.
  2. Zhao, Y.-L.; Huang, H.-P.; Chen, T.-L.; Chiang, P.-C.; Chen, Y.-H.; Yeh, J.-H.; Huang, C.-H.; Lin, J.-F.; Weng, W.-T. A Smart Sterilization Robot System With Chlorine Dioxide for Spray Disinfection. IEEE Sensors J. 2021, 21, 22047–22057.
  3. Hooks, J.; Ahn, M.S.; Yu, J.; Zhang, X.; Zhu, T.; Chae, H.; Hong, D. ALPHRED: A Multi-Modal Operations Quadruped Robot for Package Delivery Applications. IEEE Robot. Autom. Lett. 2020, 5, 5409–5416.
  4. Grasse, L.; Boutros, S.J.; Tata, M.S. Speech Interaction to Control a Hands-Free Delivery Robot for High-Risk Health Care Scenarios. Front. Robot. AI 2021, 8, 612750.
  5. Ruiz-del-Solar, J.; Salazar, M.; Vargas-Araya, V.; Campodonico, U.; Marticorena, N.; Pais, G.; Salas, R.; Alfessi, P.; Rojas, V.C.; Urrutia, J. Mental and Emotional Health Care for COVID-19 Patients: Employing Pudu, a Telepresence Robot. IEEE Robot. Autom. Mag. 2021, 28, 82–89.
  6. Akiyoshi, T.; Nakanishi, J.; Ishiguro, H.; Sumioka, H.; Shiomi, M. A Robot That Encourages Self-Disclosure to Reduce Anger Mood. IEEE Robot. Autom. Lett. 2021, 6, 7925–7932.
  7. Mubin, O.; Ahmad, M.I.; Kaur, S.; Shi, W.; Khan, A. Social robots in public spaces: A meta-review. Int. Conf. Soc. Robot. 2018, 11357, 213–220.
  8. Vollmer, A.-L.; Read, R.; Trippas, D.; Belpaeme, T. Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Sci. Robot. 2018, 3, 7111.
  9. Qin, X.; Chen, C.; Yam, K.C.; Cao, L.; Li, W.; Guan, J.; Zhao, P.; Dong, X.; Lin, Y. Adults still can’t resist: A social robot can induce normative conformity. Comput. Hum. Behav. 2021, 127, 107041.
  10. Sakamoto, D.; Hayashi, K.; Kanda, T.; Shiomi, M.; Koizumi, S.; Ishiguro, H.; Ogasawara, T.; Hagita, N. Humanoid Robots as a Broadcasting Communication Medium in Open Public Spaces. Int. J. Soc. Robot. 2009, 1, 157–169.
  11. Shiomi, M.; Okumura, S.; Kimoto, M.; Iio, T.; Shimohara, K. Two is better than one: Social rewards from two agents enhance offline improvements in motor skills more than single agent. PLoS ONE 2020, 15, e0240622.
  12. Asch, S.E. Forming impressions of personality. J. Abnorm. Soc. Psychol. 1946, 41, 258.
  13. Crano, W.D. Primacy versus Recency in Retention of Information and Opinion Change. J. Soc. Psychol. 1977, 101, 87–96.
  14. Mayo, C.W.; Crockett, W.H. Cognitive complexity and primacy-recency effects in impression formation. J. Abnorm. Soc. Psychol. 1964, 68, 335–338.
  15. Wilson, W.; Insko, C. Recency effects in face-to-face interaction. J. Personal. Soc. Psychol. 1968, 9, 21.
  16. Steiner, D.D.; Rain, J.S. Immediate and delayed primacy and recency effects in performance evaluation. J. Appl. Psychol. 1989, 74, 163.
  17. Demaree, H.A.; Shenal, B.V.; Everhart, D.E.; Robinson, J.L. Primacy and Recency Effects Found Using Affective Word Lists. Cogn. Behav. Neurol. 2004, 17, 102–108.
  18. Stephane, M.; Ince, N.F.; Kuskowski, M.; Leuthold, A.; Tewfik, A.H.; Nelson, K.; McClannahan, K.; Fletcher, C.R.; Tadipatri, V.A. Neural oscillations associated with the primacy and recency effects of verbal working memory. Neurosci. Lett. 2010, 473, 172–177.
  19. Forgas, J.P. Can negative affect eliminate the power of first impressions? Affective influences on primacy and recency effects in impression formation. J. Exp. Soc. Psychol. 2010, 47, 425–429.
  20. Sullivan, J. The Primacy Effect in Impression Formation: Some Replications and Extensions. Soc. Psychol. Pers. Sci. 2018, 10, 432–439.
  21. Wiedenroth, A.; Wessels, N.M.; Leising, D. There Is No Primacy Effect in Interpersonal Perception: A Series of Pre-registered Analyses Using Judgments of Actual Behavior. Soc. Psychol. Personal. Sci. 2021, 12, 1437–1445.
  22. Brunel, F.F.; Nelson, M.R. Message order effects and gender differences in advertising persuasion. J. Advert. Res. 2003, 43, 330–341.
  23. Murphy, J.; Hofacker, C.F.; Mizerski, R. Primacy and Recency Effects on Clicking Behavior. J. Comput. Commun. 2006, 11, 522–535.
  24. Kim, H.; Fesenmaier, D.R. Persuasive Design of Destination Web Sites: An Analysis of First Impression. J. Travel Res. 2008, 47, 3–13.
  25. Schüler, A.; Scheiter, K.; Gerjets, P. Is spoken text always better? Investigating the modality and redundancy effect with longer text presentation. Comput. Hum. Behav. 2013, 29, 1590–1601.
  26. Yuasa, M. Do You Forgive Past Mistakes of Animated Agents? A Study of Instances of Assistance by Animated Agents. J. Adv. Comput. Intell. Intell. Inform. 2020, 24, 404–412.
  27. Strömbäck, J.; Tsfati, Y.; Boomgaarden, H.; Damstra, A.; Lindgren, E.; Vliegenthart, R.; Lindholm, T. News media trust and its impact on media use: Toward a framework for future research. Ann. Int. Commun. Assoc. 2020, 44, 139–156.
  28. Downs, J.S.; Holbrook, M.B.; Sheng, S.; Cranor, L.F. Are your participants gaming the system? Screening mechanical turk workers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 2399–2402.
  29. Oppenheimer, D.M.; Meyvis, T.; Davidenko, N. Instructional manipulation checks: Detecting satisficing to increase statistical power. J. Exp. Soc. Psychol. 2009, 45, 867–872.
  30. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; CSLI Publications and Cambridge: New York, NY, USA, 1996.
  31. Kahn, P.H., Jr.; Kanda, T.; Ishiguro, H.; Freier, N.G.; Severson, R.L.; Gill, B.T.; Ruckert, J.H.; Shen, S. “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Dev. Psychol. 2012, 48, 303–314.
  32. Hayashi, K.; Shiomi, M.; Kanda, T.; Hagita, N. Are Robots Appropriate for Troublesome and Communicative Tasks in a City Environment? IEEE Trans. Auton. Ment. Dev. 2011, 4, 150–160.
  33. Pinxteren, M.M.V.; Wetzels, R.W.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in humanoid robots: Implications for services marketing. J. Serv. Mark. 2019, 33, 507–518.
  34. Natarajan, M.; Gombolay, M. Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 23–26 March 2020; pp. 33–42.
  35. Song, Y.; Luximon, Y. The face of trust: The effect of robot face ratio on consumer preference. Comput. Hum. Behav. 2020, 116, 106620.
  36. Okuoka, K.; Enami, K.; Kimoto, M.; Imai, M. Multi-device trust transfer: Can trust be transferred among multiple devices? Front. Psychol. 2022, 13, 920844.
  37. Siegel, M.; Breazeal, C.; Norton, M.I. Persuasive robotics: The influence of robot gender on human behavior. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–14 October 2009; pp. 2563–2568.
  38. Ham, J.; Midden, C.J.H. A Persuasive Robot to Stimulate Energy Conservation: The Influence of Positive and Negative Social Feedback and Task Similarity on Energy-Consumption Behavior. Int. J. Soc. Robot. 2013, 6, 163–171.
  39. Salomons, N.; Linden, M.v.d.; Sebo, S.S.; Scassellati, B. Humans Conform to Robots: Disambiguating Trust, Truth, and Conformity. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 187–195.
  40. Ji, L.-J.; Peng, K.; Nisbett, R.E. Culture, control, and perception of relationships in the environment. J. Personal. Soc. Psychol. 2000, 78, 943.
  41. Noguchi, K.; Kamada, A.; Shrira, I. Cultural differences in the primacy effect for person perception. Int. J. Psychol. 2013, 49, 208–210.
Figure 1. Opinion changes and reaffirmation of visual stimuli.
Figure 2. Average and S.E. of attitude differences before/after watching videos. * p < 0.05.
Figure 3. Average and S.E. of media trust. * p < 0.05.
Table 1. Number of people who changed their opinions.

Condition            No Change   Positively Changed   Negatively Changed
Positive–negative    28          5                    23
Negative–positive    43          12                   10
Positive–positive    42          19                   5
Negative–negative    31          9                    17

Share and Cite

Itahara, H.; Kimoto, M.; Iio, T.; Shimohara, K.; Shiomi, M. How Does Exposure to Changing Opinions or Reaffirmation Opinions Influence the Thoughts of Observers and Their Trust in Robot Discussions? Appl. Sci. 2023, 13, 585. https://doi.org/10.3390/app13010585