A numerical damped oscillator approach to constrained Schr\"{o}dinger equations

This article explains and illustrates the use of a set of coupled dynamical equations, second order in a fictitious time, which converge to solutions of stationary Schr\"{o}dinger equations with additional constraints. We include three qualitatively different numerical examples: the radial Schr\"{o}dinger equation for the hydrogen atom; the two-dimensional harmonic oscillator with degenerate excited states; and finally a non-linear Schr\"{o}dinger equation for rotating states. The presented method is intuitive, with analogies in classical mechanics for damped oscillators, and easy to implement, either in one's own code or with software for dynamical systems. Hence, we find it suitable for introduction in a continuation course in quantum mechanics, or generally in applied mathematics courses that contain computational parts.


I. INTRODUCTION
In this article we describe the idea of solving stationary Schrödinger equations (SE) as energy minimization problems with constraints, by using a second order damped dynamical system. We discuss how to numerically solve the problems in a stable and efficient way.
For the formulas of this article to be easily recognised by, and directly applicable for, students of different courses, we write out most formulas explicitly in an infinite-dimensional representation. For example, we use integrals instead of scalar products (or Dirac notation).
However, if you write your own code 1 instead of using high-level solvers for the differential equations, you need to formulate integrals as finite sums and derivatives as, e.g., finite differences; the linear Schrödinger equation can then be formulated as a matrix equation Hu = Eu, with H a matrix, u a (column) eigenvector, and E an eigenvalue of H. We provide enough details, including references, for all the numerical results presented here to be reproducible. From a pedagogical point of view, the method has the advantage of being able to solve a large set of problems with the same main idea and high-level software, as well as incorporating several important concepts from classical mechanics. We hope the readers will expand the theory and applications in different directions from the examples presented here.
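As an illustration of the discretization described above (our own sketch, not the article's code), a three-point finite-difference Laplacian turns the 1D stationary SE into the matrix eigenvalue problem Hu = Eu; as an assumed example potential we take V(x) = x²/2 in dimensionless units:

```python
import numpy as np

# Sketch (our own illustration, not the article's code): discretize the 1D
# stationary SE so that H u = E u becomes a matrix eigenvalue problem.
# Example potential V(x) = x^2/2 in dimensionless units (hbar = M = 1).
n, L = 400, 10.0                       # grid points and domain length
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

lap = (np.diag(np.ones(n - 1), -1)     # central finite differences,
       - 2.0 * np.eye(n)               # Dirichlet boundary conditions
       + np.diag(np.ones(n - 1), 1)) / dx**2

H = -0.5 * lap + np.diag(0.5 * x**2)   # H = -(1/2) d^2/dx^2 + V(x)
E, U = np.linalg.eigh(H)               # eigenvalues E and eigenvectors u
print(E[:3])                           # close to the exact 0.5, 1.5, 2.5
```

The same matrix H can then be reused by any of the dynamical-system solvers discussed below.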

II. THE METHOD
In order to introduce the idea of the method, let us first consider a basic example in classical mechanics. The harmonic oscillator is, according to Newton's second law, described by

Mü + ku = 0, k > 0. (1)

Here u = u(t) is the distance from the equilibrium for a mass M on which a force −ku is acting. The dot denotes the time derivative. For example, the mass could be attached to a spring, with k being the spring constant, see Fig. 1. If we in addition assume that there is some friction force proportional to the velocity u̇, e.g., between the mass and the surface on which it is sliding, we get a damped second order system

Mü + ηu̇ + ku = 0, (2)

where η > 0 is a damping parameter due to the friction. It is clear from this example that one of the parameters M, η, k can be scaled out, so we usually set M = 1 in the following.
The solutions of Eq. (2) in the non-critical case (η ≠ 2√k) are given by

u(t) = C₁ exp(ξ₁t) + C₂ exp(ξ₂t), (3)

where the C_i are determined by the initial conditions u(0) and u̇(0), and ξ_{1,2} = −η/2 ± √((η/2)² − k). It is easy to see from Eq. (3) that u tends to zero for large times, which is the equilibrium position of the mass and hence the stationary solution of Eq. (2). The value η = 2√k is the smallest value of the damping parameter for which the ξ_{1,2} are real and is referred to as critical damping for Eq. (2), for which u(t) = (C₁ + C₂t) exp(−√k t). For smaller or larger values of η, the oscillations are referred to as under- or over-damped, respectively. Critical damping is known to return the system to its equilibrium, i.e., to the stationary solution, in the fastest way. In multimode discretized systems the above argument can be generalized in order to obtain an optimal value of the damping parameter to be used in a numerical calculation 2.
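The effect of the damping parameter is easy to check numerically. The following sketch (our own illustration; the parameter values are arbitrary) integrates Eq. (2) with M = 1 using the symplectic Euler scheme discussed later, and confirms that critical damping leaves the smallest residual |u| after a fixed time:

```python
import numpy as np

def evolve(eta, k=1.0, dt=0.01, steps=2000, u0=1.0, v0=0.0):
    """Integrate u'' + eta*u' + k*u = 0 (M = 1) with symplectic Euler."""
    u, v = u0, v0
    for _ in range(steps):
        v += dt * (-k * u - eta * v)   # velocity update first ...
        u += dt * v                    # ... then position
    return abs(u)

k = 1.0
etas = (0.5, 2.0 * np.sqrt(k), 8.0)    # under-, critically, over-damped
residuals = {eta: evolve(eta, k) for eta in etas}
# the critically damped run (eta = 2*sqrt(k)) ends closest to equilibrium
```

The under-damped run still oscillates with slowly decaying amplitude, while the over-damped run creeps back along its slow exponential mode; only the critical value combines both decay rates optimally.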
Now we note that u = 0 is also the solution of the (trivial) minimization problem min_u ku²/2. In fact, the convex functional V(u) = ku²/2 is the mechanical potential corresponding to the force F = −ku, which is conservative since F = −dV/du. Of course, in this case it is easier to directly solve ku = 0, or min_u ku²/2, for u than to integrate the differential equation (2). However, this simple idea can be extended to solve more challenging problems, where η takes the role of a parameter that can be tuned for optimal numerical properties, as we explain in this article.
We here briefly recall that the stationary (i.e., time-independent) SE for a particle with mass M and a time-independent potential V(r, t) = V(r) follows from the time-dependent SE

iℏ ∂Ψ/∂t = ĤΨ, Ĥ = −(ℏ²/2M)∇² + V(r). (4)

Given a time and space separating ansatz for the wavefunction, Ψ(r, t) = ϕ(r) exp(−iEt/ℏ), we then have from Eq. (4)

Ĥϕ = Eϕ. (5)

Alternatively, the SE can be viewed as the Euler-Lagrange equation 3 corresponding to the minimization of the energy, which if we start from Eq. (5) is the following functional of u:

E(u) = ∫ ū Ĥu dx / ∫ ū u dx, (6)

where the bar from now on denotes complex conjugation.
If the normalization of the wave function is considered as a constraint, then the denominator of Eq. (6) is unity and we can write for the groundstate of Eq. (5)

min_u ∫ ū Ĥu dx, s.t. ∫ ū u dx − 1 = 0, (7)

where s.t. is an abbreviation for subject to. We later also give examples with more complicated constraints.
The main idea now for solving Eq. (7), already introduced above, is to utilize the fact that the solution of Eq. (7) is also a stationary solution, say u*(r), of the second order damped dynamical system

ü + ηu̇ = −δE/δū, (8)

where we in the following reserve the dot notation for derivatives with respect to a fictitious time τ. The term δE/δū above is a functional derivative of the energy, such that u(τ) → u* will satisfy the Euler-Lagrange equation δE/δū = 0 when τ → ∞.
Then, we can apply an explicit symplectic method, such as symplectic Euler or Störmer-Verlet 5, which gives us an iterative map in the numerical approximations (u_ν, v_ν), ν = 1, 2, 3, ..., with a step ∆τ_ν in fictitious time and a damping η_ν. The choice of the parameters ∆τ_ν and η_ν can be aimed at optimizing the performance of the numerical method, which generally is a non-trivial task, but for linear differential equations, such as the Schrödinger equation, analytic results exist 2,6. For simplicity we here keep all parameters constant through the iterations, i.e., independent of the step ν.
The approach of finding the solution to Eq. (7) by solving Eq. (8) with a symplectic method has been named the dynamical functional particle method (DFPM) 7. We would like to emphasize that it is the combination of the second order damped dynamical system with an efficient (fast, stable, accurate) symplectic solver that makes DFPM a very powerful method. Although the idea of solving minimization problems using dynamical systems with different damping strategies goes back a long way, see Refs. 8,9, it has not previously been presented for the Schrödinger equation with constraints using symplectic solvers for second order systems.
According to our practical experience, many common (non-symplectic) integration methods with optimal or non-optimal damping parameters give reasonably fast convergence of Eq. (8) to the stationary solutions. So unless numerical performance is critical, a variety of software packages 1 for dynamical systems can be used in practical implementations.
Let us finally comment on one specific, closely related approach for solving Eq. (7) that has been studied extensively 10,11. The steepest descent method,

u̇ = −α δE/δū, (10)

is often called the imaginary time method when applied to the SE with α = 1 (i.e., the change t → −iτ in Eq. (4)). It might seem that Eq. (10) should serve better than Eq. (8), since the exponential decrease towards the stationary solution in Eq. (10) can be made arbitrarily large by choosing α large enough.
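For concreteness, a minimal sketch of the imaginary time iteration of Eq. (10) (our own test case: forward Euler with renormalization after each step, applied to a finite-difference 1D harmonic oscillator; all parameter values are assumptions of this sketch):

```python
import numpy as np

# Sketch (our own test case, not from the article): imaginary time evolution
# u' = -alpha * H u, discretized with forward Euler and renormalized each step,
# for a finite-difference 1D harmonic oscillator (hbar = M = omega = 1).
n, L = 160, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

u = np.exp(-(x - 1.0) ** 2)            # rough initial guess
u /= np.sqrt(dx * u @ u)
alpha_dt = 0.004                       # explicit step: needs alpha_dt < 2/lambda_max
for _ in range(20000):
    u -= alpha_dt * (H @ u)            # one explicit steepest descent step
    u /= np.sqrt(dx * u @ u)           # restore the normalization constraint
E = dx * u @ (H @ u)                   # approaches the ground state energy 0.5
```

Note the stability restriction on the explicit step: it is precisely this kind of limitation that makes the first order flow (10) less attractive than the second order system (8), as discussed next.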
However, as proved rigorously for linear problems 2, and supported by numerical evidence for some non-linear examples 12, going to a second order differential equation in fictitious time is superior when one takes into account the stability and accuracy of the numerical solver. DFPM has been shown to have remarkably faster convergence to the stationary solution than any numerical method applied to Eq. (10), see Ref. 2.
It is the purpose of this article to explain this new method through a few qualitatively different examples for the stationary SE. In addition it will be extended to a corresponding method to treat constraints.

B. Damped oscillator approach for global constraints
Common methods for calculating excited states are so-called shooting methods 13 (although these are restricted to systems in one spatial dimension or with potentials obeying separation of variables) and methods based on diagonalization 14.
DFPM is readily extended to more general constrained problems, as needed, e.g., for finding normalized and excited states of the SE in general settings. Consider a convex minimization problem for E(u) with smooth global constraint functionals G_j(u) = 0, i.e.,

min_u E(u), s.t. G_j(u) = 0, j = 1, 2, ..., w. (11)

The problem in Eq. (11) has a unique solution u* if δG_j/δū is surjective at u*, and thus it fulfills a so-called Karush-Kuhn-Tucker condition. The corresponding dynamical system for constraints can be formulated using an extended constrained energy functional (often called a Lagrange functional), where the µ_j are Lagrange multipliers. The dynamical system for solving Eq. (11) is then given by

ü + ηu̇ = −δE/δū + Σ_j µ_j δG_j/δū, (12)

with µ_j(τ) chosen such that u(τ) tends to u* during the evolution, see examples in the next section. For more details on existence and uniqueness of solutions to constrained problems, see Ref. 15 and references therein.
In order to solve Eq. (11), one can choose the µ_j(τ) such that u(τ) always remains on the constraint set, e.g., by projection methods, or, as we do here, such that u(τ) approaches the constraint set (in an oscillatory manner) as τ grows. Projection is generally costly, but there are important exceptions, such as eigenvalue problems with only normalization constraints.
For our damped approach, we introduce an additional dynamical system, analogous to Eqs. (2) and (8), for each constraint G_j in Eq. (11), according to

G̈_j + ηĠ_j + k_j G_j = 0, k_j > 0. (13)

Then G_j(u(τ)) tends to zero exponentially fast, and the equations (13) can be used to derive expressions for the Lagrange multipliers µ_j(τ) in Eq. (12), which for some problems are explicit.
This method was introduced in Ref. 6 for solving matrix eigenvalue problems, where it was shown that u(τ) converges asymptotically to the eigenvectors. It was also shown that the choice of the k_j in Eq. (13) does not change the local convergence rate as long as the k_j lie within a rather large range determined by the eigenvalues of the operator δE/δū; see Ref. 6 for details.
In this article we always keep k_j = k for all j for simplicity, although using the freedom in choosing different k_j can further improve the numerical performance of the method. Under these assumptions, the local convergence rate of the corresponding symplectic Euler method with the optimal parameters 2 will be the same as for the projection approach. However, while the two approaches have the same local behaviour, it is generally not known a priori which of them is faster for a specific problem. A known general disadvantage of projection is that large changes in the Lagrange multipliers require small timesteps.

A. Normalization constraint
Taking the first and second order derivatives of the normalization constraint

G₁(u) = ∫ ū u dx − 1 = 0 (14)

with respect to τ gives

Ġ₁ = 2 Re ∫ ū u̇ dx, G̈₁ = 2 Re ∫ ū ü dx + 2 ∫ |u̇|² dx. (15)

Inserting the expressions from Eq. (15) into the left hand side of the general differential equation for constraints (13) then gives after simplifications

G̈₁ + ηĠ₁ = 2 ∫ |u̇|² dx + 2 Re ∫ ū {ü + ηu̇} dx. (16)

If we now use the DFPM equation (12) with δE/δū − µ₁ δG₁/δū = Ĥu − µ₁u for the stationary SE, we can identify the limit of the Lagrange multiplier as being equal to the energy, lim_{τ→∞} µ₁(τ) = E, in this case; compare with Eq. (5). Inserting −Ĥu + µ₁u into the curly brackets of Eq. (16) gives after simplifications

G̈₁ + ηĠ₁ = 2 ∫ |u̇|² dx − 2E(τ) + 2µ₁N, (17)

with E(τ) ≡ ∫ ū Ĥu dx and N(τ) ≡ ∫ ū u dx. Finally, combining Eq. (17) with Eq. (13), we can solve for the Lagrange multiplier µ₁(τ),

µ₁ = ( E(τ) − ∫ |u̇|² dx − k₁(N − 1)/2 ) / N. (18)

We see in Eq. (18) that µ₁ → E, since |u̇| → 0 and N → 1 as τ → ∞.
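The update of µ₁ from Eq. (18) combines naturally with a symplectic Euler step for Eq. (12). A minimal sketch (our own illustrative test case, a finite-difference 1D harmonic oscillator; the parameter values are ad hoc, not the optimized ones of Ref. 2):

```python
import numpy as np

# DFPM sketch with the normalization constraint, Eq. (18), applied to a
# finite-difference 1D harmonic oscillator (our own test case; parameter
# values eta, k1, dt are ad hoc assumptions of this sketch).
n, L = 160, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

u = np.exp(-(x - 1.0) ** 2)               # rough initial guess
u /= np.sqrt(dx * u @ u)
v = np.zeros(n)                           # "velocity" du/dtau
dt, eta, k1 = 0.05, 2.0, 1.0
for _ in range(4000):
    N = dx * u @ u                        # norm integral
    E = dx * u @ (H @ u)                  # energy integral E(tau)
    mu = (E - dx * v @ v - 0.5 * k1 * (N - 1.0)) / N   # Eq. (18)
    v += dt * (-(H @ u) + mu * u - eta * v)   # symplectic Euler, velocity first
    u += dt * v
# mu converges to the (discrete) ground state energy, here 0.5
```

All integrals are approximated by Riemann sums (∫ f dx ≈ dx Σ f), so the discrete µ₁ update is term-by-term the same as Eq. (18).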

B. Normalization constraint and one orthogonalization constraint
Introducing, in addition to G₁ above, the orthogonalization constraint

G₂(u) = ∫ ū₀ u dx = 0 (19)

means that the solution u should be orthogonal to a known normalized function u₀. This u₀ can be defined analytically, which can be helpful when testing software 1, but more often u₀ is a numerically obtained solution. For example, in the case of a convex 1D problem, u₀ is the solution with the lowest eigenvalue E (groundstate), and u is the solution with the second lowest eigenvalue E (first excited state). As seen in the 2D example of Sec. IV B, the situation can be more complicated in higher dimensions, where eigenvalues can be degenerate, meaning that several different solutions can have the same eigenvalue.
Proceeding as in Sec. III A, the differential equations (13) for the two constraints G₁ and G₂ lead to a linear system of equations for the two Lagrange multipliers µ₁ and µ₂. Using Cramer's rule on the linear system of Eq. (23) gives the explicit expressions (24) that can be used to calculate the first excited state in various problems. We give two such specific examples in the next section.
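As a sketch of how the resulting 2×2 system can be used in practice (our own illustrative test case, a finite-difference 1D harmonic oscillator; we call np.linalg.solve, which for a 2×2 system is equivalent to Cramer's rule, and for brevity obtain u₀ by direct diagonalization instead of a preceding DFPM run):

```python
import numpy as np

# Sketch (our own test case): first excited state from DFPM with one
# normalization and one orthogonalization constraint. Here u0 comes from
# diagonalization; in the article's workflow it would come from DFPM itself.
n, L = 160, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

w_, U = np.linalg.eigh(H)
u0 = U[:, 0] / np.sqrt(dx)                # groundstate with dx*u0@u0 = 1

u = x * np.exp(-x ** 2)                   # rough initial guess
u /= np.sqrt(dx * u @ u)
v = np.zeros(n)
dt, eta, k = 0.05, 2.0, 1.0
for _ in range(4000):
    N, G2 = dx * u @ u, dx * u0 @ u
    E = dx * u @ (H @ u)
    # 2x2 linear system for (mu1, mu2); np.linalg.solve ~ Cramer's rule here
    A = np.array([[N, G2], [G2, 1.0]])
    b = np.array([E - dx * v @ v - 0.5 * k * (N - 1.0),
                  dx * u0 @ (H @ u) - k * G2])
    mu1, mu2 = np.linalg.solve(A, b)
    v += dt * (-(H @ u) + mu1 * u + mu2 * u0 - eta * v)
    u += dt * v
# mu1 converges to the first excited energy, here 1.5
```

The first row of the system is the two-constraint generalization of Eq. (18); the second row follows from differentiating Eq. (19) twice and inserting the dynamical system, exactly as in Sec. III A.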

IV. NUMERICAL EXAMPLES
In this section we show numerical results using the symplectic Euler method 5 for three examples and compare the presented DFPM method against analytic formulas. The first example is the radial SE for the hydrogen atom.

A. The hydrogen atom

The function u is in this example the radial part of the three-dimensional spatial wavefunction from Eq. (4), multiplied by the radius r = |r|, i.e., ϕ(r) = ϕ(r, θ, φ) = (u(r)/r) Y_l^m(θ, φ). We write the dimensionless (i.e., with ℏ = M_e = a₀ = 1) radial SE of the hydrogen atom as

Ĥu = −(1/2) d²u/dr² + ( l(l + 1)/(2r²) − 1/r ) u = Eu,

where the expression within the large parenthesis above is referred to as the effective radial potential. For comparison, the analytic energies E_n = −1/(2n²) depend only on the principal quantum number n. Since there are (2l + 1) possible values of the quantum number m for the so-called spherical harmonics Y_l^m, the three-dimensional wavefunctions of hydrogen have an n² degeneracy.
For the numerical approach, the solutions u_{n,l*} with n = 1, 2, ..., n* − 1 should be orthogonal to the unknown u_{n*,l*}. Since one needs access to u_{n,l*}, n = 1, 2, ..., n* − 1, to obtain u_{n*,l*}, we solve Eq. (12) in consecutive order. This means first solving with only a normalization constraint, see Sec. III A, to obtain u_{1,l*}, then with one additional orthogonality constraint, see Sec. III B, to obtain u_{2,l*}, and then with several orthogonality constraints, see Appendix A, to obtain the solutions u_{n*,l*} with n* > 2.
The two Lagrange multipliers needed in Eq. (28) to calculate the first excited state, i.e., for n* = 2, can be obtained from Eq. (24). For the case with several multipliers (n* > 2) in Eq. (28), they can conveniently be obtained from, e.g., numerical solutions of Eq. (A6).
The six stationary numerical solutions to Eq. (28) with lowest energies are plotted in Fig. 2, while the corresponding energies are illustrated in Fig. 3.
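As a cross-check of the radial spectrum (our own finite-difference diagonalization, not the article's DFPM computation; the grid parameters are arbitrary choices of this sketch), the discretized radial Hamiltonian reproduces E_n = −1/(2n²):

```python
import numpy as np

# Cross-check (our own, by direct diagonalization rather than DFPM): the
# finite-difference radial Hamiltonian of hydrogen for l = 0 reproduces
# the analytic spectrum E_n = -1/(2 n^2) in dimensionless units.
n, R = 1500, 90.0
r = np.linspace(R / n, R, n)           # uniform radial grid, u(0) = u(R) = 0
dr = r[1] - r[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dr**2
l = 0
H = -0.5 * lap + np.diag(l * (l + 1) / (2.0 * r**2) - 1.0 / r)
E = np.linalg.eigvalsh(H)
print(E[:3])                           # close to -1/2, -1/8, -1/18
```

The same matrix H supplies the operator Ĥu needed by the constrained DFPM iterations described above.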

B. Two-dimensional harmonic oscillator
In this example we calculate the well known wave functions u(x, y) and energies E of the dimensionless (i.e., with ℏ = M = ω = 1) Schrödinger equation with an isotropic two-dimensional harmonic potential on a 2D Cartesian grid,

Ĥu = −(1/2)(∂²u/∂x² + ∂²u/∂y²) + ( (x² + y²)/2 ) u = Eu. (29)

That is, for the groundstate we solve the following optimization problem:

min_u ∫ ū Ĥu dr, s.t. ∫ ū u dr − 1 = 0. (31)

We show the first six numerical solutions to Eq. (31) in Fig. 4. For comparison we note that equation (29) possesses the explicit solutions 16

E_(n_x,n_y) = n_x + n_y + 1, u_(n_x,n_y)(x, y) = ( 1/√(2^(n_x+n_y) n_x! n_y! π) ) H_(n_x)(x) H_(n_y)(y) exp(−(x² + y²)/2), (32)

where the H_n denote the Hermite polynomials 17, and the two quantum numbers can take the values n_x, n_y = 0, 1, 2, 3, ... .
In contrast to the radial SE for the hydrogen atom, the SE (29) does not depend on any of the quantum numbers n_x, n_y, and different solutions u_(n_x,n_y) have degenerate energies E_(n_x,n_y) as long as n_x + n_y is constant, see Eq. (32).
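The degeneracy pattern can be verified on a 2D Cartesian grid by building the discrete Hamiltonian as a Kronecker sum of 1D finite-difference operators (our own illustration; the grid parameters are arbitrary):

```python
import numpy as np

# Sketch (our own illustration): the 2D harmonic oscillator Hamiltonian on a
# Cartesian grid as a Kronecker sum of 1D operators, H = Hx + Hy. The lowest
# levels come out near 1, 2, 2, 3, 3, 3, showing the degeneracy pattern.
n, L = 48, 11.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H1 = -0.5 * lap + np.diag(0.5 * x**2)          # 1D Hamiltonian
I = np.eye(n)
H2 = np.kron(H1, I) + np.kron(I, H1)           # 2D Hamiltonian on the grid
E2 = np.linalg.eigvalsh(H2)[:6]                # approx. [1, 2, 2, 3, 3, 3]
```

Because the discrete operator inherits the x ↔ y symmetry exactly, the pairs such as (n_x, n_y) = (1, 0) and (0, 1) remain degenerate to machine precision even on the grid.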
In the left plot of Fig. 5 we show the numerical convergence for the energies. In the right plot of Fig. 5 we show the numerical convergence for the constraints.

C. The non-linear Schrödinger equation under rotation
The non-linear Schrödinger equation (NLSE) is commonly used to model many interacting bosonic particles via a mean-field approximation 18. We have developed a DFPM formulation with damped constraints for a dimensionless non-linear Schrödinger equation in u = u(x) on a ring geometry −π ≤ x ≤ π (with radius R = ℏ = 2M = 1), using periodic boundary conditions u(−π) = u(π). The aim is to minimize the total energy subject to one constraint for normalization and one constraint for the angular momentum. We note that this problem can be solved analytically and refer to Appendix B of Ref. 12, with the two Lagrange multipliers given in Eq. (B10). In Eqs. (35) and (36), µ is the so-called chemical potential, which is not equal to E for the NLSE, and Ω is the angular velocity of the rotation. The quantities N, E_kin, ℓ, b₁, b₂ in Eq. (36), which depend on the fictitious time τ, are defined in Appendix B.
In Fig. 6 we have plotted the resulting so-called Yrast curve (main figure), with the density and phase of the corresponding complex wave function u for a particularly interesting state.

We believe this presentation of DFPM may be helpful for students and researchers who want to solve globally constrained equations in general. More specifically, it can be used for numerically solving different kinds of Schrödinger equations, attaining (degenerate) excited states and energies.

ACKNOWLEDGMENTS
We acknowledge valuable comments from Patrik Sandin. We also thank the three "French musketeers" Julien Régnier, Nico Gaudy, and Alexandre Clercq for valuable discussions about DFPM during their internships at Örebro University.

Appendix A: Several orthogonalization constraints

In the numerical examples in Sections IV A and IV B, both a normalization constraint and several orthogonalization constraints are treated simultaneously. We here sketch how an arbitrary number of orthogonalization constraints can be handled.
We generalize Eqs. (14) and (19) to a vector of constraints containing w orthogonalization constraints and one normalization constraint,

G_j(u) = ∫ ū_j u dx = 0, j = 1, ..., w, G_{w+1}(u) = ∫ ū u dx − 1 = 0, (A1)

for which we have from Eq. (13)

G̈_j + ηĠ_j + kG_j = 0, j = 1, ..., w + 1, (A2)

and, by differentiating Eq. (A1) with respect to τ,

G̈_j + ηĠ_j = ∫ ū_j (ü + ηu̇) dx, j = 1, ..., w, G̈_{w+1} + ηĠ_{w+1} = 2 ∫ |u̇|² dx + 2 Re ∫ ū (ü + ηu̇) dx. (A3)

From Eq. (12) we now have

ü + ηu̇ = −Ĥu + Σ_{i=1}^{w} µ_i u_i + µ_{w+1} u, (A4)

such that the left hand side of Eq. (A3) is

G̈_j + ηĠ_j = −∫ ū_j Ĥu dx + Σ_{i=1}^{w} µ_i ∫ ū_j u_i dx + µ_{w+1} G_j, G̈_{w+1} + ηĠ_{w+1} = 2 ∫ |u̇|² dx − 2E + 2 Σ_{i=1}^{w} µ_i Re(G_i) + 2µ_{w+1} N. (A5)

Now since ∫ ū_i u_j dx = δ_ij, we can write Eq. (A3) on matrix form with the Lagrange multipliers as the unknowns,

µ_j + G_j µ_{w+1} = ∫ ū_j Ĥu dx − kG_j, j = 1, ..., w, Σ_{j=1}^{w} Re(G_j) µ_j + N µ_{w+1} = E − ∫ |u̇|² dx − k(N − 1)/2, (A6)

where Re(G_j) = ∫ (ū_j u + ū u_j) dx/2 and E = ∫ ū Ĥu dx.
We note that the system (A6) can be solved very efficiently by sparse block Gaussian elimination with a computational cost proportional to w.
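A sketch of this elimination (our own generic illustration with made-up numbers): the system has "arrowhead" form, an identity block for the orthogonalization rows bordered by one dense normalization row and column, so the identity block can be eliminated first in O(w) operations.

```python
import numpy as np

# Sketch (our own illustration with made-up numbers): systems of the
# arrowhead form  [[I, g], [h^T, N]] @ [m, m0] = [b, c]  can be solved in
# O(w) by eliminating the identity block, instead of a dense factorization.
def arrowhead_solve(g, h, N, b, c):
    # first block row gives m = b - g*m0; insert into the last row
    m0 = (c - h @ b) / (N - h @ g)
    return b - g * m0, m0

w = 5
g = np.linspace(0.1, 0.5, w)
h = np.full(w, 0.2)
b = np.arange(1.0, w + 1.0)
N, c = 3.0, 1.0
m, m0 = arrowhead_solve(g, h, N, b, c)

# compare against a dense solve of the full (w+1) x (w+1) system
A = np.block([[np.eye(w), g[:, None]], [h[None, :], np.array([[N]])]])
ref = np.linalg.solve(A, np.append(b, c))
```

The dense solve costs O(w³), so the structured elimination is what keeps the per-step cost of the multi-constraint DFPM iteration proportional to w.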

Appendix B: Normalization and angular momentum constraints
The two constraints we used for the NLSE are defined in Eq. (34). Taking the first and second order derivatives of G₁ and G₂ with respect to τ gives Eqs. (B1) and (B2). Inserting the expressions from Eqs. (B1) and (B2) into the left hand side of the general differential equation for constraints (13) then gives, after simplifications, Eqs. (B3) and (B4). The use of Eq. (35) for the curly brackets above gives (with η, γ, µ and Ω real, and by using integration by parts on some terms) Eqs. (B5) and (B6). Comparing the above equations with Eq. (13) and inserting the constraints (34), with the three real functions N(τ), ℓ(τ) and E_kin(τ) = −∫_{−π}^{π} ū ∂²u/∂x² dx being the norm, angular momentum, and kinetic energy, respectively, we obtain Eq. (B8) from Eq. (B5) and Eq. (B9) from Eq. (B6). The second to last term in the left hand side of Eq. (B9) disappears, since ∫ ∂(ūu)/∂x ūu dx = ∫ ∂/∂x (ūu)² dx/2 = ( |u(π)|⁴ − |u(−π)|⁴ )/2 = 0 due to the periodic boundary conditions. Hence, Eqs. (B8) and (B9) lead us to the linear system (B10) for the Lagrange multipliers.

See the supplementary material for a Matlab program DFPM_1D_HO.m that can calculate excited states of a particle in a 1D harmonic oscillator potential, and an XML input file DFPM_1D_HO_1st_excited_state.xmds for the free software www.xmds.org that can be used to solve the same problem for the first excited state.