Continuous Depth Recurrent Neural Differential Equations

  • Conference paper
Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14170)


Abstract

Recurrent neural networks (RNNs) have driven many advances in sequence labeling tasks and in modeling sequence data more generally. However, their effectiveness is limited when the observations in a sequence are irregularly sampled, i.e., arrive at irregular time intervals. To address this, continuous-time variants of RNNs based on neural ordinary differential equations (NODEs) were introduced. They learn a better representation of the data through a continuous transformation of the hidden states over time, taking into account the time intervals between observations. However, they remain limited in capability, as they apply discrete transformations with a fixed, discrete number of layers (depth) to each input in the sequence to produce the output observation. We address this limitation by proposing RNNs based on differential equations that model continuous transformations over both depth and time to predict an output for a given input in the sequence. Specifically, we propose continuous depth recurrent neural differential equations (CDR-NDE), which generalize RNN models by continuously evolving the hidden states in both the temporal and depth dimensions. CDR-NDE considers two separate differential equations, one over each of these dimensions, and models the evolution in the temporal and depth directions alternately. We also propose the CDR-NDE-heat model based on partial differential equations, which treats the computation of hidden states as solving a heat equation over time. We demonstrate the effectiveness of the proposed models by comparing against state-of-the-art RNN models on real-world sequence labeling problems.
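To make the alternating time and depth evolution concrete, below is a minimal sketch of how such a model could be realized with an off-the-shelf NODE solver (torchdiffeq). It illustrates the idea only and is not the authors' implementation (their code is linked under Notes below): the class names, the tanh parameterizations, and the depth interval [0, 1] are assumptions made for this example.

```python
# Hedged sketch, not the authors' implementation: one plausible way to
# realize the alternating time/depth evolution described in the abstract,
# using the torchdiffeq ODE solver. All names and parameterizations here
# are illustrative assumptions.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class TimeODEFunc(nn.Module):
    """dh/dt: evolves the hidden state between observation times."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())

    def forward(self, t, h):
        return self.net(h)


class DepthODEFunc(nn.Module):
    """dh/ds: evolves the hidden state over a continuous depth s,
    conditioned on the current input x (held fixed during the solve)."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + hidden_dim, hidden_dim), nn.Tanh()
        )
        self.x = None  # set before each depth solve

    def forward(self, s, h):
        return self.net(torch.cat([self.x, h], dim=-1))


class CDRNDECell(nn.Module):
    """Alternates a time solve (to the next observation time) with a
    depth solve (over s in [0, 1]), one step per observation."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.time_ode = TimeODEFunc(hidden_dim)
        self.depth_ode = DepthODEFunc(input_dim, hidden_dim)

    def forward(self, h, x, t0, t1):
        # Evolve h in time from t0 to t1: irregular gaps are handled by
        # integrating over the actual interval length.
        h = odeint(self.time_ode, h, torch.tensor([t0, t1]))[-1]
        # Then evolve h over the (assumed) continuous depth interval [0, 1].
        self.depth_ode.x = x
        return odeint(self.depth_ode, h, torch.tensor([0.0, 1.0]))[-1]


# Illustrative usage on an irregularly sampled sequence:
# cell = CDRNDECell(input_dim=3, hidden_dim=16)
# h = torch.zeros(1, 16)
# for x_t, t0, t1 in zip(xs, times[:-1], times[1:]):
#     h = cell(h, x_t, float(t0), float(t1))
```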


Notes

  1. Code is available at https://github.com/srinivas-quan/CDR-NDE.


Acknowledgements

This work has been partly supported by funding received from the Department of Science and Technology (DST), Government of India, through the ICPS program (DST/ICPS/2018).

Author information


Corresponding author

Correspondence to P. K. Srijith.


Ethics declarations

Ethical Statement

We propose novel and flexible techniques to model irregular time series data. The performance of the proposed models is evaluated on publicly available datasets. The method can be applied to irregular time series data arising in several domains, such as social networks. We do not find any ethical issues with the proposed approach or the datasets used in the experiments.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Anumasa, S., Gunapati, G., Srijith, P.K. (2023). Continuous Depth Recurrent Neural Differential Equations. In: Koutra, D., Plant, C., Gomez Rodriguez, M., Baralis, E., Bonchi, F. (eds) Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science, vol. 14170. Springer, Cham. https://doi.org/10.1007/978-3-031-43415-0_14

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-43415-0_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43414-3

  • Online ISBN: 978-3-031-43415-0

  • eBook Packages: Computer Science (R0)
