FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection

Authors

  • Chanho Lee, KAIST
  • Jinsu Son, KAIST
  • Hyounguk Shon, KAIST
  • Yunho Jeon, Hanbat University
  • Junmo Kim, KAIST

DOI:

https://doi.org/10.1609/aaai.v38i4.28069

Keywords:

CV: Object Detection & Categorization

Abstract

Rotation-equivariance is an essential yet challenging property in oriented object detection. While general object detectors naturally leverage robustness to spatial shifts due to the translation-equivariance of conventional CNNs, achieving rotation-equivariance remains an elusive goal. Current detectors deploy various alignment techniques to derive rotation-invariant features, but they still rely on high-capacity models and heavy data augmentation with all possible rotations. In this paper, we introduce a Fully Rotation-Equivariant Oriented Object Detector (FRED), whose entire process, from the image to the bounding-box prediction, is strictly equivariant. Specifically, we decouple the invariant task (object classification) from the equivariant task (object localization) to achieve end-to-end equivariance. We represent the bounding box as a set of rotation-equivariant vectors to implement rotation-equivariant localization. Moreover, we use these rotation-equivariant vectors as offsets in the deformable convolution, thereby enhancing the existing advantages of spatial adaptation. Leveraging full rotation-equivariance, FRED demonstrates higher robustness to image-level rotation than existing methods. Furthermore, our experiments show that FRED is one step closer to non-axis-aligned learning. Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms them by 1.5 mAP on DOTA-v1.5, all while reducing the model parameters to 16%.
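
As a rough illustration of the idea described in the abstract, the sketch below shows how per-point 2-D box vectors predicted by a detection head could double as sampling offsets for a deformable convolution. This is a minimal, hypothetical sketch, not the authors' implementation: the module name, channel sizes, and the `box_vector_head` layer are assumptions; only the reuse of predicted vectors as `torchvision.ops.deform_conv2d` offsets mirrors what the abstract states.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class EquivariantOffsetHead(nn.Module):
    """Hypothetical sketch: box vectors reused as deformable-conv offsets."""

    def __init__(self, in_ch: int, num_points: int = 9):
        super().__init__()
        # Predict a 2-D vector (dx, dy) per sampling point; under an input
        # rotation these vectors are expected to rotate accordingly.
        self.box_vector_head = nn.Conv2d(in_ch, 2 * num_points, kernel_size=3, padding=1)
        # Plain 3x3 kernel for the deformable convolution (9 sampling points).
        self.weight = nn.Parameter(torch.randn(in_ch, in_ch, 3, 3) * 0.01)

    def forward(self, feat: torch.Tensor):
        # (B, 2*K, H, W): rotation-equivariant vectors describing the box.
        offsets = self.box_vector_head(feat)
        # Reuse the same vectors as deformable-conv offsets so feature
        # sampling adapts to the object's orientation.
        out = deform_conv2d(feat, offsets, self.weight, padding=1)
        return out, offsets


# Usage example on a dummy feature map.
head = EquivariantOffsetHead(in_ch=64)
features = torch.randn(1, 64, 32, 32)
aligned_feat, box_vectors = head(features)
```

In this reading, the same tensor serves both localization (box vectors) and feature alignment (offsets), which is one way the decoupling of the invariant and equivariant tasks could be realized.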

Published

2024-03-24

How to Cite

Lee, C., Son, J., Shon, H., Jeon, Y., & Kim, J. (2024). FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 2883-2891. https://doi.org/10.1609/aaai.v38i4.28069

Issue

Vol. 38 No. 4 (2024)

Section

AAAI Technical Track on Computer Vision III