Abstract
Quantum optimal control problems are typically solved by gradient-based algorithms such as GRAPE, whose memory requirements grow exponentially with the number of qubits and linearly with the number of time steps. These requirements are a barrier to simulating large models or long time spans. We present a nonstandard automatic differentiation technique that computes the gradients needed by GRAPE by exploiting the fact that the inverse of a unitary matrix is its conjugate transpose: intermediate states are recomputed during the backward sweep rather than stored during the forward sweep. Our approach significantly reduces the memory requirements of GRAPE at the cost of a reasonable amount of recomputation. We present benchmark results based on an implementation in JAX.
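The core idea can be sketched in plain NumPy (a hypothetical illustration, not the authors' JAX implementation): because each time-step propagator U is unitary, the state before a step can be recovered as U†ψ, so the backward (adjoint) sweep needs no stored trajectory. The function name, the infidelity cost 1 − |⟨target|ψ⟩|², and the Wirtinger-style gradients below are assumptions chosen for illustration.

```python
import numpy as np

def propagate_and_grad(unitaries, psi0, target):
    """Forward-propagate psi0 through a list of unitaries and return the
    infidelity loss plus gradients w.r.t. each unitary's entries, storing
    only O(1) states instead of the full trajectory."""
    # Forward sweep: apply each U_k, keeping only the final state.
    psi = psi0
    for U in unitaries:
        psi = U @ psi
    overlap = np.vdot(target, psi)           # <target|psi_N>
    loss = 1.0 - np.abs(overlap) ** 2        # infidelity in [0, 1]
    # Adjoint of the loss w.r.t. psi_N^* (Wirtinger derivative).
    lam = -overlap * target
    # Backward sweep: recover each intermediate state via U^{-1} = U^dagger
    # instead of reading it from a stored checkpoint.
    grads = []
    for U in reversed(unitaries):
        psi_prev = U.conj().T @ psi                  # recompute previous state
        grads.append(np.outer(lam, psi_prev.conj())) # dL/dU^* for this step
        lam = U.conj().T @ lam                       # propagate adjoint back
        psi = psi_prev
    grads.reverse()
    return loss, grads
```

Note that the recomputation step is exact only because the propagators are unitary; for general matrices one would need stored checkpoints, which is precisely the memory cost this technique avoids.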
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing and Applied Mathematics programs, under contract DE-AC02-06CH11357, and by the National Science Foundation Mathematical Sciences Graduate Internship. We gratefully acknowledge the computing resources provided on Bebop and Swing, high-performance computing clusters operated by the Laboratory Computing Resource Center at Argonne National Laboratory.
Copyright information
© 2022 Thomas Propson, Marcelo Bongarti and UChicago Argonne, LLC, Operator of Argonne National Laboratory, under exclusive license to Springer Nature Switzerland AG, part of Springer Nature
Cite this paper
Narayanan, S.H.K., Propson, T., Bongarti, M., Hückelheim, J., Hovland, P. (2022). Reducing Memory Requirements of Quantum Optimal Control. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds) Computational Science – ICCS 2022. ICCS 2022. Lecture Notes in Computer Science, vol 13353. Springer, Cham. https://doi.org/10.1007/978-3-031-08760-8_11
Print ISBN: 978-3-031-08759-2
Online ISBN: 978-3-031-08760-8