Multivariate Time Series Retrieval with Binary Coding from Transformer

  • Conference paper
Neural Information Processing (ICONIP 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1791)

Abstract

Deep learning to binary coding improves multivariate time series retrieval by learning representations and binary codes end-to-end from training data. However, existing deep retrieval methods, e.g., encoder-decoder models based on recurrent or convolutional neural networks, fail to capture the latent dependencies between pairs of variables in multivariate time series, which results in a substantial loss of retrieval quality. Furthermore, supervised deep learning to binary coding often fails to meet practical requirements due to the lack of labeled multivariate time series datasets. To address these issues, this paper presents Unsupervised Transformer-based Binary Coding Networks (UTBCNs), a novel architecture for deep learning to binary coding that consists of four key components: a Transformer-based encoder (T-encoder), a Transformer-based decoder (T-decoder), an adversarial loss, and a hashing loss. We employ the Transformer encoder-decoder to encode temporal dependencies and inter-sensor correlations within multivariate time series using attention mechanisms alone. Meanwhile, to enhance the generalization capability of the deep network, we add an adversarial loss based upon the improved Wasserstein GAN (WGAN-GP) for real multivariate time series segments. To further improve the quality of the binary codes, a hashing loss based upon a convolutional encoder (C-encoder) is designed for the output of the T-encoder. Extensive empirical experiments demonstrate that the proposed UTBCNs generate high-quality binary codes and outperform state-of-the-art binary coding retrieval models on the Twitter dataset, the Air Quality dataset, and the Quick Access Recorder (QAR) dataset.
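To make the architecture described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of an attention-only encoder-decoder with a hashing head that relaxes binary codes via tanh. All module choices, dimensions, and the quantization-style penalty are illustrative assumptions and not the paper's implementation; in particular, the WGAN-GP adversarial loss and the C-encoder-based hashing loss are omitted.

```python
# Hypothetical sketch of a Transformer encoder-decoder with a hashing head,
# loosely following the abstract's description of UTBCNs. All names,
# dimensions, and losses below are assumptions for illustration only.
import torch
import torch.nn as nn


class UTBCNSketch(nn.Module):
    def __init__(self, n_sensors: int, d_model: int = 64, code_bits: int = 32):
        super().__init__()
        # Project each time step (one value per sensor) into the model dimension.
        self.input_proj = nn.Linear(n_sensors, d_model)
        # T-encoder / T-decoder: temporal and inter-sensor dependencies are
        # modelled with attention only, as the abstract states.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.t_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.t_decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.output_proj = nn.Linear(d_model, n_sensors)
        # Hashing head: maps the pooled encoding to a relaxed (continuous) code.
        self.hash_head = nn.Linear(d_model, code_bits)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, n_sensors) multivariate time series segment
        h = self.input_proj(x)
        enc = self.t_encoder(h)                          # latent representation
        rec = self.output_proj(self.t_decoder(h, enc))   # reconstructed segment
        relaxed = torch.tanh(self.hash_head(enc.mean(dim=1)))  # values in (-1, 1)
        binary_code = torch.sign(relaxed)                # thresholded {-1, +1} code
        return rec, relaxed, binary_code


def sketch_loss(x, rec, relaxed):
    # Reconstruction loss plus a simple quantization penalty pushing the
    # relaxed code toward +/-1 (a stand-in for a hashing loss; the paper's
    # adversarial and C-encoder-based losses are not reproduced here).
    recon = nn.functional.mse_loss(rec, x)
    quant = (relaxed.abs() - 1.0).pow(2).mean()
    return recon + 0.1 * quant
```

At retrieval time, query and database segments would be compared by Hamming distance between their binary codes, which is the usual motivation for learning compact codes of this kind.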

Supported by the National Natural Science Foundation of China under Grant U2033209.

Notes

  1. https://archive.ics.uci.edu/ml/datasets/Beijing+Multi-Site+Air-Quality+Data.

  2. https://archive.ics.uci.edu/ml/datasets/Buzz+in+social+media+.

Author information

Corresponding author

Correspondence to Weidong Yang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tan, Z., Zhao, M., Wang, Y., Yang, W. (2023). Multivariate Time Series Retrieval with Binary Coding from Transformer. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_33

  • DOI: https://doi.org/10.1007/978-981-99-1639-9_33

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-1638-2

  • Online ISBN: 978-981-99-1639-9

  • eBook Packages: Computer Science, Computer Science (R0)
