Abstract
This chapter focuses on some of the behavioral and communication strategies that a robot can employ in adversarial environments. So far in this book, we have looked at how the robot can be interpretable to the human in the loop, either through its behavior or through explicit communication. However, in the real world, not all of the robot's interactions may be purely cooperative. The robot may come across adversarial entities while completing its tasks in the environment. In such cases, the robot may have secondary objectives, such as privacy preservation and minimizing information leakage, in addition to its primary objective of achieving the task. Further, in adversarial settings it is essential not only to minimize information leakage but also to ensure that the process of minimizing it is itself secure, since an adversarial observer may use diagnosis to infer the robot's internal information and then use that information to interfere with the robot's objectives.
Copyright information
© 2022 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Sreedharan, S., Kulkarni, A., Kambhampati, S. (2022). Obfuscatory Behavior and Deceptive Communication. In: Explainable Human-AI Interaction. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-03767-2_9
DOI: https://doi.org/10.1007/978-3-031-03767-2_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-03757-3
Online ISBN: 978-3-031-03767-2
eBook Packages: Synthesis Collection of Technology