Semantic-Aware Transformation-Invariant RoI Align

Authors

  • Guo-Ye Yang, BNRist, Department of Computer Science and Technology, Tsinghua University
  • George Kiyohiro Nakayama, Stanford University
  • Zi-Kai Xiao, BNRist, Department of Computer Science and Technology, Tsinghua University
  • Tai-Jiang Mu, BNRist, Department of Computer Science and Technology, Tsinghua University
  • Xiaolei Huang, College of Information Sciences and Technology, Pennsylvania State University
  • Shi-Min Hu, BNRist, Department of Computer Science and Technology, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i6.28469

Keywords:

CV: Object Detection & Categorization, CV: Representation Learning for Vision

Abstract

Great progress has been made in learning-based object detection over the last decade. Two-stage detectors often achieve higher detection accuracy than one-stage detectors, owing to their use of region of interest (RoI) feature extractors, which extract transformation-invariant RoI features for different RoI proposals and thus make bounding-box refinement and object-category prediction more robust and accurate. However, previous RoI feature extractors can only extract invariant features under a limited set of transformations. In this paper, we propose a novel RoI feature extractor, termed Semantic RoI Align (SRA), which is capable of extracting invariant RoI features under a variety of transformations for two-stage detectors. Specifically, we propose a semantic attention module that adaptively determines different sampling areas by leveraging the global and local semantic relationships within the RoI. We also propose a Dynamic Feature Sampler, which dynamically samples features based on the RoI aspect ratio to improve the efficiency of SRA, and a new position embedding, i.e., Area Embedding, which provides more accurate position information for SRA through an improved sampling-area representation. Experiments show that our model significantly outperforms baseline models with only slight computational overhead. In addition, it shows excellent generalization ability and can be used to improve performance with various state-of-the-art backbones and detection methods. The code is available at https://github.com/cxjyxxme/SemanticRoIAlign.
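
To make the components named in the abstract more concrete, below is a minimal, illustrative sketch, not the authors' released implementation (see the repository linked above for that). It follows the general idea the abstract describes: features are sampled inside each RoI on a grid whose shape adapts to the RoI aspect ratio (cf. the Dynamic Feature Sampler), each sample is given an embedding of its sampling area (cf. Area Embedding), and an attention module weights the samples by their semantic relationship to a global RoI descriptor (cf. the semantic attention module). All class names, shapes, and hyper-parameters here are assumptions chosen for illustration.

```python
# Illustrative sketch only; names, shapes, and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticRoIAlignSketch(nn.Module):
    def __init__(self, channels=256, out_size=7):
        super().__init__()
        self.out_size = out_size
        # Embeds each sampling area's normalized (cx, cy, w, h) within the RoI.
        self.area_embed = nn.Linear(4, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads=8, batch_first=True)

    def _grid(self, aspect_ratio):
        # "Dynamic" sampling: more columns for wide RoIs, more rows for tall ones,
        # keeping roughly out_size**2 samples in total (an assumption).
        n = self.out_size
        gw = max(1, min(2 * n, round(n * aspect_ratio ** 0.5)))
        gh = max(1, round(n * n / gw))
        return gh, gw

    def forward(self, feat, rois):
        """feat: (1, C, H, W) feature map; rois: (R, 4) boxes in [0, 1] coordinates."""
        pooled = []
        for x1, y1, x2, y2 in rois:
            gh, gw = self._grid(float((x2 - x1) / (y2 - y1 + 1e-6)))
            ys = torch.linspace(float(y1), float(y2), gh, device=feat.device)
            xs = torch.linspace(float(x1), float(x2), gw, device=feat.device)
            gy, gx = torch.meshgrid(ys, xs, indexing="ij")
            grid = torch.stack([gx, gy], dim=-1).view(1, gh, gw, 2) * 2 - 1
            pts = F.grid_sample(feat, grid, align_corners=True)   # (1, C, gh, gw)
            pts = pts.flatten(2).transpose(1, 2)                  # (1, gh*gw, C)
            # Area embedding: each sample's normalized center and cell size.
            cx = (gx.flatten() - x1) / (x2 - x1 + 1e-6)
            cy = (gy.flatten() - y1) / (y2 - y1 + 1e-6)
            areas = torch.stack(
                [cx, cy, torch.full_like(cx, 1.0 / gw), torch.full_like(cy, 1.0 / gh)],
                dim=-1,
            )
            tokens = pts + self.area_embed(areas).unsqueeze(0)
            # A global RoI descriptor queries the samples; attention decides which
            # areas matter, approximating a semantics-driven pooling of the RoI.
            query = tokens.mean(dim=1, keepdim=True)              # (1, 1, C)
            out, _ = self.attn(query, tokens, tokens)
            pooled.append(out.squeeze(0).squeeze(0))
        return torch.stack(pooled)                                # (R, C)


if __name__ == "__main__":
    feat = torch.randn(1, 256, 50, 50)
    rois = torch.tensor([[0.1, 0.2, 0.6, 0.4], [0.3, 0.1, 0.5, 0.9]])
    print(SemanticRoIAlignSketch()(feat, rois).shape)  # torch.Size([2, 256])
```

Because the pooled feature is an attention-weighted sum over sampled points rather than a fixed grid readout, it is less sensitive to how the object is positioned or deformed inside the RoI, which is the intuition behind the transformation invariance claimed in the paper.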

Published

2024-03-24

How to Cite

Yang, G.-Y., Nakayama, G. K., Xiao, Z.-K., Mu, T.-J., Huang, X., & Hu, S.-M. (2024). Semantic-Aware Transformation-Invariant RoI Align. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6486-6493. https://doi.org/10.1609/aaai.v38i6.28469

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V