How Private Is Your RL Policy? An Inverse RL Based Analysis Framework

Authors

  • Kritika Prakash, Machine Learning Laboratory, International Institute of Information Technology, Hyderabad
  • Fiza Husain, Machine Learning Laboratory, International Institute of Information Technology, Hyderabad
  • Praveen Paruchuri, Machine Learning Laboratory, International Institute of Information Technology, Hyderabad
  • Sujit Gujar, Machine Learning Laboratory, International Institute of Information Technology, Hyderabad

DOI:

https://doi.org/10.1609/aaai.v36i7.20772

Keywords:

Machine Learning (ML)

Abstract

Reinforcement Learning (RL) enables agents to learn how to perform various tasks from scratch. In sensitive domains such as autonomous driving and recommendation systems, an optimal RL policy could cause a privacy breach if the policy memorizes any part of the private reward. We study existing differentially-private RL policies derived from various RL algorithms, including Value Iteration, Deep Q-Networks, and Vanilla Proximal Policy Optimization. We propose a new Privacy-Aware Inverse RL analysis framework (PRIL) that performs reward reconstruction as an adversarial attack on the private policies an agent may deploy. In this reward reconstruction attack, we seek to recover the original reward from a privacy-preserving policy using an Inverse RL algorithm. If the agent uses a tightly private policy, an adversary must do poorly at reconstructing the original reward function. Using this framework, we empirically test the effectiveness of the privacy guarantees offered by the private algorithms on instances of the FrozenLake domain of varying complexity. We quantify the extent to which each private policy protects the reward function by measuring distances between the original and reconstructed rewards, and from this analysis we infer a gap between the standard of privacy currently offered and the standard needed to protect reward functions in RL.
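The attack pipeline the abstract describes can be illustrated with a toy sketch. Note this is not the paper's actual method (which applies deep IRL to FrozenLake under differentially-private training); everything below is a hypothetical, simplified stand-in: a chain MDP instead of FrozenLake, randomized-response action flipping instead of a true differential-privacy mechanism, and brute-force candidate search instead of a real IRL algorithm. It only conveys the shape of the loop: train a policy on a private reward, perturb it, attempt reconstruction, and measure the reward distance.

```python
import random

N, GAMMA, ITERS = 7, 0.9, 200   # chain length, discount, value-iteration sweeps
LEFT, RIGHT = 0, 1

def step(s, a):
    """Deterministic chain dynamics: move left or right, clipped at the ends."""
    return max(0, s - 1) if a == LEFT else min(N - 1, s + 1)

def optimal_policy(R):
    """Tabular value iteration on the chain MDP; returns the greedy policy."""
    V = [0.0] * N
    for _ in range(ITERS):
        V = [R[s] + GAMMA * max(V[step(s, LEFT)], V[step(s, RIGHT)])
             for s in range(N)]
    return [RIGHT if V[step(s, RIGHT)] > V[step(s, LEFT)] else LEFT
            for s in range(N)]

def noisy_policy(pi, flip_prob, rng):
    """Crude stand-in for a 'private' policy: flip each action w.p. flip_prob
    (randomized response), NOT an actual differential-privacy mechanism."""
    return [1 - a if rng.random() < flip_prob else a for a in pi]

def reconstruct_reward(pi_obs):
    """Minimal IRL-by-search: among single-goal candidate rewards, return the
    one whose optimal policy agrees most with the observed policy."""
    best_g, best_score = 0, -1
    for g in range(N):
        cand = [1.0 if s == g else 0.0 for s in range(N)]
        score = sum(p == q for p, q in zip(optimal_policy(cand), pi_obs))
        if score > best_score:
            best_g, best_score = g, score
    return [1.0 if s == best_g else 0.0 for s in range(N)]

if __name__ == "__main__":
    R_true = [0.0] * N
    R_true[N - 1] = 1.0            # private reward: goal at the right end
    pi = optimal_policy(R_true)
    rng = random.Random(0)
    for flip in (0.0, 0.3):
        R_hat = reconstruct_reward(noisy_policy(pi, flip, rng))
        l1 = sum(abs(a - b) for a, b in zip(R_true, R_hat))
        print(f"flip={flip}: L1(R_true, R_hat) = {l1}")
```

With no noise the search recovers the reward exactly (L1 distance 0); as the flip probability grows, reconstruction degrades, which mirrors the paper's use of reward distance as a measure of how well a private policy protects the reward function.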

Published

2022-06-28

How to Cite

Prakash, K., Husain, F., Paruchuri, P., & Gujar, S. (2022). How Private Is Your RL Policy? An Inverse RL Based Analysis Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 8009-8016. https://doi.org/10.1609/aaai.v36i7.20772

Section

AAAI Technical Track on Machine Learning II