Abstract
Various topological techniques and tools have been applied to neural networks to study network complexity, explainability, and performance. A fundamental assumption in this line of research is the existence of a global (Euclidean) coordinate system upon which the topological layer is constructed. Despite promising results, such topologization methods have yet to be widely adopted because parametrizing a topological layer takes considerable time and lacks a theoretical foundation, leading to suboptimal performance and a lack of explainability. This paper proposes a learnable topological layer for neural networks that does not require a Euclidean space; instead, the construction relies on a general metric space, specifically a Hilbert space equipped with an inner product. As a result, the parametrization of the proposed topological layer is free of user-specified hyperparameters, eliminating the costly parametrization stage and the associated risk of suboptimal networks. Experimental results on three popular data sets demonstrate the effectiveness of the proposed approach.
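The paper's layer itself is not reproduced here. As a rough, assumption-laden sketch of the two ingredients the abstract alludes to — persistence pairs extracted from a filtration, and an inner product between persistence diagrams taken in a Hilbert space rather than a fixed Euclidean coordinate system — consider the following minimal example. The function names (`sublevel_persistence_1d`, `pwg_inner_product`) are hypothetical, and the kernel is the standard persistence-weighted Gaussian kernel, not the paper's construction.

```python
import numpy as np

def sublevel_persistence_1d(f):
    """0-dimensional persistence pairs (birth, death) of a 1-D sequence
    under the sublevel-set filtration, via a union-find sweep."""
    order = np.argsort(f)          # add vertices in increasing function value
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):           # merge with already-born neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component dies at f[i]
                    elder, younger = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    pairs.append((birth[younger], f[i]))
                    parent[younger] = elder
    for r in {find(i) for i in parent}:    # surviving components never die
        pairs.append((birth[r], np.inf))
    return pairs

def pwg_inner_product(D1, D2, sigma=1.0):
    """<Phi(D1), Phi(D2)> in the RKHS of a persistence-weighted Gaussian
    kernel; each pair is weighted by its (finite) persistence."""
    s = 0.0
    for b1, d1 in D1:
        if not np.isfinite(d1):
            continue
        for b2, d2 in D2:
            if not np.isfinite(d2):
                continue
            w = (d1 - b1) * (d2 - b2)
            s += w * np.exp(-((b1 - b2) ** 2 + (d1 - d2) ** 2) / (2 * sigma ** 2))
    return s
```

In a learnable layer, features such as these inner products would be computed on filtrations of the network's activations and back-propagated through; the sketch above only shows why an inner-product structure suffices and no global coordinate system is needed.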
Notes
- 1. Technically, \(\lambda \) could be an element of any field \(\mathbb {F}\). We restrict our discussion to the real numbers \(\mathbb {R}\) (which also form a field) in the context of neural network applications.
- 2. Mathematically speaking, this would be the exterior derivative of the vector field \(\mathcal {H}_{(L_0, M_0)}[\mathcal {F}]\). We avoid such terminology to prevent unnecessary confusion.
- 3. This preprocessing stage could be parallelized, but we do not discuss that in this paper.
Acknowledgement
Results presented in this paper were partly obtained using the Chameleon testbed supported by the National Science Foundation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Shen, G., Zhao, D. (2024). Towards Nonparametric Topological Layers in Neural Networks. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science(), vol 14647. Springer, Singapore. https://doi.org/10.1007/978-981-97-2259-4_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2261-7
Online ISBN: 978-981-97-2259-4