Pointerformer: Deep Reinforced Multi-Pointer Transformer for the Traveling Salesman Problem

Authors

  • Yan Jin, Huazhong University of Science and Technology
  • Yuandong Ding, Huazhong University of Science and Technology
  • Xuanhao Pan, Huazhong University of Science and Technology
  • Kun He, Huazhong University of Science and Technology
  • Li Zhao, Microsoft Research
  • Tao Qin, Microsoft Research Asia
  • Lei Song, Microsoft Research
  • Jiang Bian, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v37i7.25982

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

The Traveling Salesman Problem (TSP), a classic routing optimization problem originally arising in transportation and logistics, has become a critical task in broader domains such as manufacturing and biology. Recently, Deep Reinforcement Learning (DRL) has been increasingly employed to solve TSP due to its high inference efficiency. Nevertheless, most existing end-to-end DRL algorithms only perform well on small TSP instances and can hardly generalize to large scale, because memory consumption and computation time soar drastically as the problem size grows. In this paper, we propose a novel end-to-end DRL approach, referred to as Pointerformer, based on a multi-pointer Transformer. In particular, Pointerformer adopts both a reversible residual network in the encoder and a multi-pointer network in the decoder to effectively contain the memory consumption of the encoder-decoder architecture. To further improve the quality of TSP solutions, Pointerformer employs a feature augmentation method that exploits the symmetries of TSP at both training and inference stages, as well as an enhanced context embedding approach that includes more comprehensive context information in the query. Extensive experiments on a randomly generated benchmark and a public benchmark show that, while achieving results comparable to state-of-the-art DRL approaches on most small-scale TSP instances, Pointerformer also generalizes well to large-scale TSPs.
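
To make the symmetry-based feature augmentation mentioned in the abstract concrete, below is a minimal sketch, assuming the widely used 8-fold coordinate augmentation for Euclidean TSP (reflections and axis swaps of the unit square leave tour lengths unchanged). The function name `augment_xy_x8` and the PyTorch framing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): 8-fold coordinate augmentation
# exploiting TSP symmetries. A tour's length is invariant under reflections
# and axis swaps of the city coordinates, so each instance can be presented
# to the policy in 8 equivalent views.
import torch

def augment_xy_x8(coords: torch.Tensor) -> torch.Tensor:
    """coords: (batch, n_cities, 2) in [0, 1]^2 -> (8 * batch, n_cities, 2)."""
    x, y = coords[..., 0:1], coords[..., 1:2]
    views = [
        (x, y), (1 - x, y), (x, 1 - y), (1 - x, 1 - y),      # reflections
        (y, x), (1 - y, x), (y, 1 - x), (1 - y, 1 - x),      # axis swap + reflections
    ]
    return torch.cat([torch.cat(v, dim=-1) for v in views], dim=0)

# Usage: evaluate the policy on all 8 views of each instance and keep the
# shortest tour per original instance.
coords = torch.rand(16, 100, 2)        # 16 random TSP-100 instances
augmented = augment_xy_x8(coords)      # shape: (128, 100, 2)
```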

Published

2023-06-26

How to Cite

Jin, Y., Ding, Y., Pan, X., He, K., Zhao, L., Qin, T., Song, L., & Bian, J. (2023). Pointerformer: Deep Reinforced Multi-Pointer Transformer for the Traveling Salesman Problem. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8132-8140. https://doi.org/10.1609/aaai.v37i7.25982

Section

AAAI Technical Track on Machine Learning II