Ab Initio Molecular Dynamics on the Electronic Boltzmann Equilibrium Distribution

We prove that, for a combined system of classical and quantum particles, it is possible to write a dynamics for the classical particles that incorporates in a natural way the Boltzmann equilibrium population for the quantum subsystem. In addition, this molecular dynamics does not need to assume that the electrons immediately follow the nuclear motion (in contrast to any adiabatic approach), and it does not present problems in the presence of crossing points between different potential energy surfaces (conical intersections or spin-crossings). A practical application of this molecular dynamics to study the effect of temperature on molecular systems presenting (nearly) degenerate states, such as the avoided crossing in the ring-closure process of ozone, is presented.


Introduction
The possibility of dividing a system of particles into a quantum and a classical subsystem is of wide interest in several areas of physics and chemistry. This division is typically, but not exclusively, done by considering the electrons to be quantum and the nuclei to be classical (this is the choice that we make in this work, although the observation of quantum effects for protons is nowadays a matter of considerable debate [1]). When the gap is large (i.e. when the lowest electronic excitation energy is much greater than the highest thermally available nuclear vibrational frequency, which depends on the temperature), the division is properly handled by making use of the standard Born-Oppenheimer (BO) separation. Ab initio molecular dynamics (AIMD) [2] can then be performed for the system of classical nuclei on the ground-state BO surface (gsBOMD), and this type of dynamics has been applied countless times in the last couple of decades, either directly or by making use of equivalent but more efficient schemes such as the Car-Parrinello (CP) technique [3] or other alternatives [4, 5]. These dynamics can be used to compute equilibrium averages by assuming ergodicity and computing time averages over a number of trajectories. Note that when molecular dynamics (MD) is used to sample the equilibrium distribution, all dynamics are physically equivalent insofar as they produce the correct distribution, and only efficiency criteria may favour one dynamics over another.
If the temperature is of the order of the electronic gap or larger, we cannot assume any longer that the quantum particles are constantly in the ground state. Indeed, in many physical, chemical or biological processes the dynamical effects arising from the presence of low-lying electronic excited states have to be taken into account. For instance, in situations where the hydrogen bond is weak, different states come close to each other and non-adiabatic proton transfer transitions become rather likely at normal temperature [6]. In these circumstances, the computation of ensemble averages cannot be based on a model that assumes the nuclei moving on the ground state BO surface.
If transitions to higher-energy levels must be allowed, a different type of dynamics must be performed. In the realm of first principles MD calculations, two approaches come to mind: Ehrenfest dynamics and surface hopping [7]. Their suitability for the calculation of equilibrium properties is, however, still a subject of study ([8, 11, 12]; [13] and references therein). Also, in density-functional theory (DFT), one could perform MD at T = 0, working with partial occupation numbers to account for the electronic excitations [14], ideally making use of a temperature-dependent exchange and correlation functional [15]. This scheme is however tied to DFT, and is hindered by the difficulty of realistically approximating this functional.
In this paper, we first emphasize that the distribution that is often regarded in the context of quantum-classical systems as the 'correct equilibrium distribution' ($\rho_W^{\mathrm{eq}\,(0)}$ in equation (12) below), despite being commonly obtained through heuristic arguments, is only a zeroth-order approximation (in the quantum-classical mass ratio $\sqrt{m/M}$) to the canonical equilibrium density matrix associated with a rigorous quantum-classical formulation, as shown by Kapral and Ciccotti [9] and Nielsen et al [10]. Therefore, there is a priori no reason to ask any mixed quantum-classical theory, such as, e.g. Ehrenfest dynamics, to yield exactly the Boltzmann equilibrium distribution $\rho_W^{\mathrm{eq}\,(0)}$, contrary to what is often required in the literature [8], [11]-[13].
However, although $\rho_W^{\mathrm{eq}\,(0)}$ is only an approximation to the correct quantum-classical equilibrium ensemble, it is acceptable when $\sqrt{m/M}$ is small, and the results are then reliable. Hence, $\rho_W^{\mathrm{eq}\,(0)}$ will also be the target equilibrium distribution in this work. Therefore, in what follows, we will write a system of dynamical equations for the classical particles such that the equilibrium distribution in the space of classical variables is in fact given by equation (12). This is also a goal of surface hopping methods, although it is not fully achieved, since these methods do not exactly yield this distribution. We will do this by deriving a temperature-dependent effective potential for the classical variables, which differs from the potential energy surfaces (PES) that emerge from the BO equations. It is straightforward, however, to write an equation that gives the expression for the effective potential in terms of these PES. Note that our approach will be based on the assumption that the full system of electrons and nuclei is in thermal equilibrium at a given temperature, and not on the assumption that electrons immediately follow the nuclear motion (i.e. the adiabatic approximation).
As an example of the interest in going beyond the PES we mention the issue of quantum effects in proton transfer [16]. It is a matter of current debate to what extent protons behave 'quantum-like' in biomolecular systems (e.g. whether there is any trace of superposition, tunnelling or entanglement in their behaviour). Recently, McKenzie and coworkers [1] have carefully examined the issue and concluded that 'tunneling well below the barrier only occurs for temperatures less than a temperature T 0 which is determined by the curvature of the PES at the top of the barrier'. In consequence, the correct determination of this curvature is of paramount importance.
As we will see, the curvature predicted by our temperature-dependent effective potential is smaller than the one corresponding to the ground state PES, in the cases in which the quantum excited surfaces approach, at the barrier top, the ground state one. Therefore, T 0 would be smaller than that corresponding to the ground state PES (see equation (8) in [1]) and hence the conclusion in this reference that 'quantum tunneling does not play a significant role in hydrogen transfer in enzymes' is reinforced by our results.

Method
We start our discussion with a system of classical particles, which we divide into two subgroups: one of them, of mass m, characterized by the conjugate variables (x, p), and the other one, of mass M, characterized by the conjugate variables (X, P). If we had only one degree of freedom for each subgroup (i.e. only one particle in one dimension), the Hamiltonian would read

$$ H_{\mathrm{total}}(x,p,X,P) = \frac{p^2}{2m} + \frac{P^2}{2M} + V(x,X). \qquad (1) $$

The generalization of the following to more particles or degrees of freedom is straightforward.
In the canonical ensemble, the average of any observable O(x, p, X, P) is computed as

$$ \langle O \rangle = \frac{1}{Z} \int \mathrm{d}x\, \mathrm{d}p\, \mathrm{d}X\, \mathrm{d}P\; O(x,p,X,P)\, e^{-\beta H_{\mathrm{total}}(x,p,X,P)}, \qquad (2) $$

where $Z = \int \mathrm{d}x\, \mathrm{d}p\, \mathrm{d}X\, \mathrm{d}P\; e^{-\beta H_{\mathrm{total}}(x,p,X,P)}$ is the partition function. However, if we think of an observable A(X, P) that depends only on the variables of one of the particles (let us call it the heavy particle, assuming that $M \gg m$), these two equations can be exactly rewritten as

$$ \langle A \rangle = \frac{1}{Z} \int \mathrm{d}X\, \mathrm{d}P\; A(X,P)\, e^{-\beta H_{\mathrm{eff}}(X,P;\beta)}, \qquad (3) $$

with $Z = \int \mathrm{d}X\, \mathrm{d}P\; e^{-\beta H_{\mathrm{eff}}(X,P;\beta)}$. Here, we have defined a temperature-dependent effective Hamiltonian:

$$ H_{\mathrm{eff}}(X,P;\beta) := -\frac{1}{\beta} \log \int \mathrm{d}x\, \mathrm{d}p\; e^{-\beta H_{\mathrm{total}}(x,p,X,P)}. \qquad (4) $$

Therefore, the equilibrium properties of the subsystem formed by the heavy particle can be described in a closed manner, with an effective Hamiltonian in which the coordinates of the light particle have been integrated out.
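The classical integration-out step of equation (4) can be verified numerically. The following sketch (where the coupled harmonic potential and the value of β are illustrative assumptions, not taken from the text) computes the canonical average of $A(X) = X^2$ both with the full potential and with the effective potential; the Gaussian momentum integrals factor out and cancel in the ratio, so only the coordinates appear:

```python
import numpy as np

# Numerical sketch of equation (4): integrating out the light particle classically.
# The coupled harmonic potential V(x, X) and beta are illustrative assumptions.
beta = 2.0
x = np.linspace(-8.0, 8.0, 1201)
X = np.linspace(-8.0, 8.0, 1201)
dx = x[1] - x[0]
xx, XX = np.meshgrid(x, X, indexing='ij')
V = 0.5 * xx**2 + 0.5 * XX**2 + 0.3 * xx * XX

# Full canonical average of A(X) = X^2 over both coordinates.
w_full = np.exp(-beta * V)
avg_full = (w_full * XX**2).sum() / w_full.sum()

# Effective potential V_eff(X; beta) = -(1/beta) log ∫ dx exp(-beta V(x, X)),
# then the same average using only the heavy coordinate.
V_eff = -np.log(np.exp(-beta * V).sum(axis=0) * dx) / beta
w_eff = np.exp(-beta * V_eff)
avg_eff = (w_eff * X**2).sum() / w_eff.sum()

print(avg_full, avg_eff)  # the two averages coincide
```

The two numbers agree to floating-point accuracy, because the reduced description is exact for classical observables of the heavy particle alone.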
If we now consider the two particles to be quantum, the system will be characterized by the canonical operators $(\hat{x}, \hat{p}, \hat{X}, \hat{P})$, related by the commutation relations $[\hat{X}, \hat{P}] = i\hbar$, $[\hat{x}, \hat{p}] = i\hbar$, and by a total Hamiltonian $\hat{H}_{\mathrm{total}}(\hat{x},\hat{p},\hat{X},\hat{P})$ whose expression is analogous to equation (1), except that we now have operators instead of classical variables.
The key object, as far as equilibrium properties are concerned, is the equilibrium density matrix, defined as

$$ \hat{\rho}_{\mathrm{eq}} := \frac{1}{Z} e^{-\beta \hat{H}_{\mathrm{total}}}, \qquad Z = \mathrm{Tr}\, e^{-\beta \hat{H}_{\mathrm{total}}}, \qquad (5) $$

which allows us to compute equilibrium averages as

$$ \langle \hat{O} \rangle = \mathrm{Tr}\, [\hat{O}\, \hat{\rho}_{\mathrm{eq}}]. \qquad (6) $$

Like in the classical case discussed before, for observables depending only on the operators $\hat{X}, \hat{P}$, the effective Hamiltonian can be defined analogously to equation (4) and reads

$$ \hat{H}_{\mathrm{eff}}(\hat{X},\hat{P};\beta) := -\frac{1}{\beta} \log \mathrm{Tr}_q\, e^{-\beta \hat{H}_{\mathrm{total}}(\hat{x},\hat{p},\hat{X},\hat{P})}. \qquad (7) $$

Note that the integrals in equation (4) are now replaced by the trace operation over the Hilbert space of the quantum (light) particle, denoted by $\mathrm{Tr}_q$.

We are interested, however, in an intermediate case: the heavy particle is classical, whereas the light particle is quantum mechanical. For this purpose, it is most suitable to follow the authors of [9, 10] and start from the partial Wigner representation [17] of the full quantum mechanical density matrix $\hat{\rho}(t)$,

$$ \hat{\rho}_W(X,P,t) = \frac{1}{2\pi\hbar} \int \mathrm{d}z\; e^{iPz/\hbar}\, \langle X - z/2 |\, \hat{\rho}(t)\, | X + z/2 \rangle, \qquad (8) $$

which is an operator only in the light-particle Hilbert space, depending on the two real numbers (X, P). The classical limit can then be taken in a very straightforward manner by considering the evolution equation for $\hat{\rho}_W$ and retaining only terms up to linear order in $\sqrt{m/M} = \mu$. The result is the following [9]:

$$ \partial_t \hat{\rho}_W(t) = -\frac{i}{\hbar}\, [\hat{H}_{W,\mathrm{total}}, \hat{\rho}_W(t)] + \frac{1}{2} \left( \{\hat{H}_{W,\mathrm{total}}, \hat{\rho}_W(t)\} - \{\hat{\rho}_W(t), \hat{H}_{W,\mathrm{total}}\} \right). \qquad (9) $$

In this equation, $\{\cdot,\cdot\}$ are the usual Poisson brackets. The Hamiltonian $\hat{H}_{W,\mathrm{total}}(X,P)$ is the partial Wigner transform of the total Hamiltonian for the full quantum system, obtained by replacing the $(\hat{X}, \hat{P})$ operators by real numbers:

$$ \hat{H}_{W,\mathrm{total}}(\hat{x},\hat{p},X,P) = \frac{P^2}{2M} + \frac{\hat{p}^2}{2m} + \hat{V}(\hat{x},X). \qquad (10) $$

The equilibrium density matrix in the partial Wigner representation at the classical limit for the heavy particle, denoted by $\hat{\rho}_W^{\mathrm{eq}}$, should be stationary with respect to equation (9). If we use this property and expand $\hat{\rho}_W^{\mathrm{eq}}$ in powers of $\mu$,

$$ \hat{\rho}_W^{\mathrm{eq}} = \sum_{n=0}^{\infty} \mu^n\, \hat{\rho}_W^{\mathrm{eq}\,(n)}, \qquad (11) $$

it can then be proved [10] that the zeroth-order term is given by

$$ \hat{\rho}_W^{\mathrm{eq}\,(0)}(X,P) = \frac{1}{Z}\, e^{-\beta \hat{H}_{W,\mathrm{total}}(\hat{x},\hat{p},X,P)}, \qquad (12) $$

with $Z = \mathrm{Tr}_q \left[ \int \mathrm{d}X\, \mathrm{d}P\; e^{-\beta \hat{H}_{W,\mathrm{total}}(\hat{x},\hat{p},X,P)} \right]$. Note that this object corresponds, at fixed classical variables (X, P), to the equilibrium density matrix for the electronic states. However, it is only an approximation to the true quantum-classical equilibrium density matrix, since it is not a stationary solution to the quantum-classical Liouvillian given in equation (9). The error made, i.e.
the rate of change of the distribution as it evolves in time, is given by the action of the right-hand side of equation (9) on $\hat{\rho}_W^{\mathrm{eq}\,(0)}$. It can be seen that it grows with β (quadratic dependence at small β) and that it is proportional to the velocity of the classical particle, P/M. Therefore, the error becomes smaller as the temperature grows. With these facts in mind, we will consider equation (12) a reasonable approximation and use it, for example, to compute averages of observables:

$$ \langle \hat{O} \rangle = \mathrm{Tr}_q \int \mathrm{d}X\, \mathrm{d}P\; \hat{O}(\hat{x},\hat{p},X,P)\, \hat{\rho}_W^{\mathrm{eq}\,(0)}(X,P). \qquad (13) $$

Analogously to the preceding paragraphs (cf. equations (4) and (7)), if we are interested in observables that depend only on the heavy, classical particle, we can define the temperature-dependent effective Hamiltonian in the form

$$ H_{\mathrm{eff}}(X,P;\beta) := -\frac{1}{\beta} \log \mathrm{Tr}_q\, e^{-\beta \hat{H}_{W,\mathrm{total}}(\hat{x},\hat{p},X,P)}. \qquad (14) $$
It is then easy to verify that the partition function can be written as

$$ Z = \int \mathrm{d}X\, \mathrm{d}P\; e^{-\beta H_{\mathrm{eff}}(X,P;\beta)}, \qquad (15) $$

and the classical observables can be computed as

$$ \langle A \rangle = \frac{1}{Z} \int \mathrm{d}X\, \mathrm{d}P\; A(X,P)\, e^{-\beta H_{\mathrm{eff}}(X,P;\beta)}. \qquad (16) $$

Hence, the quantum subsystem has been 'integrated out' and no longer appears explicitly in the equations (of course, it has not disappeared, being hidden in the definition of the effective Hamiltonian). In this way, the more complicated quantum-classical calculations have been reduced to a simpler classical dynamics with an appropriate effective Hamiltonian, which produces the same equilibrium averages of classical observables (equation (16)) as the ones we would obtain using equation (12), and hence incorporates the quantum back-reaction on the evolution of the classical variables.

In the case of a molecular system, the total (partially Wigner-transformed) Hamiltonian reads

$$ \hat{H}_{\mathrm{total}}(R,P) = T_n(P) + \hat{H}_e(R), \qquad (17) $$

where R denotes collectively all nuclear coordinates, P all nuclear momenta, $T_n(P)$ is the total nuclear kinetic energy, and $\hat{H}_e(R)$ is the electronic Hamiltonian, which includes the electronic kinetic term and all the interactions. The effective Hamiltonian, defined in equation (14) in general, is in this case given by

$$ H_{\mathrm{eff}}(R,P;\beta) = T_n(P) - \frac{1}{\beta} \log \mathrm{Tr}_q\, e^{-\beta \hat{H}_e(R)} =: T_n(P) + V_{\mathrm{eff}}(R;\beta), \qquad (18) $$

where the last equality is a definition of the effective potential. Now, making use of the adiabatic basis, defined as the set of all eigenvectors of the electronic Hamiltonian $\hat{H}_e$,

$$ \hat{H}_e(R)\, |\Phi_n(R)\rangle = E_n(R)\, |\Phi_n(R)\rangle, $$

we can rewrite $V_{\mathrm{eff}}(R;\beta)$ as

$$ V_{\mathrm{eff}}(R;\beta) = E_0(R) - \frac{1}{\beta} \log \sum_n e^{-\beta E_{n0}(R)}, \qquad (19) $$

where $E_{n0}(R) = E_n(R) - E_0(R)$. This equation permits us to see explicitly how the ground-state energy $E_0$ differs from $V_{\mathrm{eff}}$, and in consequence how an MD based on $V_{\mathrm{eff}}$ is going to differ from a gsBOMD. In particular, note that $V_{\mathrm{eff}}(R;\beta) \le E_0(R)$. Also, note that to the extent that nuclei do not display quantum behaviour near conical intersections or spin crossings [18], nothing prevents us from using this equation also in those cases.
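Equation (19) is straightforward to evaluate once the adiabatic energies are known. The sketch below (the two model surfaces, the avoided-crossing parameters and the temperature are illustrative assumptions, not the ozone data of the next section) builds the effective potential from a stack of PES and checks the bound $V_{\mathrm{eff}}(R;\beta) \le E_0(R)$:

```python
import numpy as np

# Sketch of equation (19): V_eff(R; beta) = E_0(R) - (1/beta) log sum_n exp(-beta E_n0(R)).
# The two model surfaces below are illustrative assumptions, not ozone data.
kB = 3.1668e-6                      # Boltzmann constant in hartree/K
R = np.linspace(-1.0, 1.0, 401)
E0 = 0.010 - 0.010 * R**2           # ground-state PES with a barrier top at R = 0
E1 = E0 + 0.002 + 0.020 * R**2      # excited state; the gap is minimal at the top

def v_eff(surfaces, T):
    """Temperature-dependent effective potential from a list of adiabatic PES."""
    beta = 1.0 / (kB * T)
    E = np.vstack(surfaces)
    En0 = E - E[0]                  # excitation energies E_n(R) - E_0(R) >= 0
    return E[0] - np.log(np.exp(-beta * En0).sum(axis=0)) / beta

V = v_eff([E0, E1], T=1000.0)
print(bool(np.all(V <= E0)))        # True: V_eff(R; beta) <= E_0(R) everywhere
```

The n = 0 term contributes 1 to the sum, so the logarithm is non-negative and the effective potential always lies at or below the ground-state surface, with the largest lowering where the gap is smallest.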
The definition of the classical effective Hamiltonian for the nuclear coordinates in equation (18) allows us now to use any of the well-established techniques available for computing canonical equilibrium averages in a classical system, given in this case by the convenient expression (16). For example, we could use (classical) Monte Carlo methods or, if we want to perform MD simulations, we could propagate the stochastic Langevin dynamics associated with the Hamiltonian (18):

$$ \dot{R}_i(t) = \frac{P_i(t)}{M_i}, \qquad \dot{P}_i(t) = -\frac{\partial V_{\mathrm{eff}}}{\partial R_i}(R(t);\beta) - \gamma \frac{P_i(t)}{M_i} + \epsilon_i(t), \qquad (20) $$

where $\epsilon(t)$ is a vector of stochastic fluctuations obeying $\langle \epsilon_i(t) \rangle = 0$ and $\langle \epsilon_i(t_1)\, \epsilon_j(t_2) \rangle = 2\gamma k_B T\, \delta_{ij}\, \delta(t_1 - t_2)$, which relates the dissipation strength γ and the temperature T to the fluctuations (fluctuation-dissipation theorem).
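A minimal integrator for such Langevin dynamics can be sketched as follows (an Euler-Maruyama discretization; the harmonic effective potential and all parameter values are illustrative assumptions). The noise variance per step, $2\gamma k_B T\, \Delta t$, is the discrete counterpart of the fluctuation-dissipation relation above, and the sampled kinetic average approaches the equipartition value $M k_B T$:

```python
import numpy as np

# Euler-Maruyama sketch of the Langevin dynamics (20) on a harmonic effective
# potential; the potential and all parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
M, kT, gamma, dt = 1.0, 1.0, 1.0, 1.0e-3
nsteps = 400_000
noise = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(nsteps)

R, P = 0.0, 0.0
P2_sum = 0.0
for n in range(nsteps):
    R += dt * P / M                          # dR/dt = P/M
    # dP/dt = -dV_eff/dR - gamma*P/M + noise, with V_eff = R^2/2 here
    P += dt * (-R - gamma * P / M) + noise[n]
    P2_sum += P * P

print(P2_sum / nsteps)  # long-time average of P^2, close to M*kT (equipartition)
```

In a production on-the-fly simulation the force line would call the electronic-structure code for the gradient of $V_{\mathrm{eff}}$ instead of the analytic model force used here.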
Indeed, it is well known that these dynamics are equivalent to the Fokker-Planck equation for the probability density W(R, P) in the classical phase space [19]:

$$ \partial_t W(R,P,t) = \{H_{\mathrm{eff}}, W\} + \gamma \sum_i \frac{\partial}{\partial P_i} \left[ \frac{P_i}{M_i}\, W + k_B T\, \frac{\partial W}{\partial P_i} \right], \qquad (21) $$

where $\{\cdot,\cdot\}$ denotes the classical Poisson bracket. Any solution to equation (21) approaches at infinite time a distribution $W_{\mathrm{eq}}(R,P)$ such that $\partial_t W_{\mathrm{eq}}(R,P) = 0$. This stationary solution is unique and is equal to the Gibbs distribution, $W_{\mathrm{eq}}(R,P) = Z^{-1} e^{-\beta H_{\mathrm{eff}}(R,P;\beta)}$ [19]. Thus, the long-time solutions of equation (21), and hence those of equation (20), reproduce the canonical averages in equation (16). This property, which is also satisfied by other dynamics such as the one proposed by Nosé [20, 21] if the $H_{\mathrm{eff}}$ of equation (18) is used, comes out in a very natural way from the present formalism, whereas it remains unclear whether it holds for other ab initio MD candidates for going beyond gsBOMD [8], [11]-[13].

Applications
The most obvious applications of our temperature-dependent effective potential and its associated dynamics are the processes of proton transfer and thermal isomerization whenever low-lying electronic excited states have to be taken into account. These processes are ubiquitous in chemistry and biochemistry [6, 24, 25]. In particular, ab initio MD of intramolecular proton transfer around conical intersections and excited states is a topic of current interest [26] with a long tradition [27].
As an example of our approach, we show in the following the difference between our temperature-dependent effective potential and the gsBOPES for the avoided crossing between the open and closed forms of the ozone molecule. We focus on the ring closure of this system, as depicted in figure 1 (inset). Using the CASSCF (complete active space self-consistent field) method [22], we have computed the PES corresponding to the ground and first excited singlet states (1¹A₁ and 2¹A₁), and the effective potential along the relevant reaction coordinate, the ring-closure angle φ, using only those states, as they are the only relevant ones in this case. At about φ = φ₀ = 83.4°, there is a barrier between two possible minima in the gsPES of this molecule. The 2¹A₁ electronic state approaches the ground-state PES at this point, and in consequence one might expect that the effective potential, and its derivatives at the cusp, will differ considerably from the gsPES values as the temperature goes up. This can be seen in figure 1.
The situation shown in this figure is very general. In fact, considering for simplicity only the two lowest states, one can prove from equation (19) that if the first and second derivatives of $E_{10}$ at the barrier top $\phi_0$ verify

$$ \frac{\partial^2 E_{10}}{\partial \phi^2}(\phi_0) > \beta \left[ \frac{\partial E_{10}}{\partial \phi}(\phi_0) \right]^2, \qquad (22) $$

then

$$ \frac{\partial^2 V_{\mathrm{eff}}}{\partial \phi^2}(\phi_0;\beta) > \frac{\partial^2 E_0}{\partial \phi^2}(\phi_0), \qquad (23) $$

Figure 1. PES corresponding to the 1¹A₁ (E₀) and 2¹A₁ (E₁) states and the effective potential at the indicated temperatures, for the ozone molecule. The reaction coordinate is the molecular angle.
i.e. since both second derivatives are negative at the barrier top, the curvature predicted by our $V_{\mathrm{eff}}$ is smaller in magnitude than the gsPES curvature. Note that, at avoided crossings, $\partial E_{10}/\partial\phi(\phi_0)$ is approximately zero, and therefore the above condition will be verified in the most interesting cases.
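The curvature reduction can be illustrated directly from equation (19) for a two-state model (the functional forms and numerical values below are illustrative assumptions, not the ozone CASSCF data): with the gap minimal at the barrier top and its slope vanishing there, the second derivative of $V_{\mathrm{eff}}$ at the top is smaller in magnitude than that of the gsPES:

```python
import numpy as np

# Finite-difference check of the curvature reduction at an avoided crossing,
# using a two-state model (all functional forms and numbers are illustrative).
beta = 200.0
phi = np.linspace(-0.5, 0.5, 2001)
E0 = -0.05 * phi**2               # gsPES near the barrier top phi_0 = 0
E10 = 0.005 + 0.10 * phi**2       # gap E_1 - E_0, minimal at the barrier top

V_eff = E0 - np.log(1.0 + np.exp(-beta * E10)) / beta   # equation (19), two states

def d2(f, i, h):
    """Central second difference at index i."""
    return (f[i + 1] - 2.0 * f[i] + f[i - 1]) / h**2

i0 = len(phi) // 2                # barrier-top index (phi = 0)
h = phi[1] - phi[0]
print(abs(d2(V_eff, i0, h)) < abs(d2(E0, i0, h)))  # True: curvature is reduced
```

Analytically, at a point where $\partial E_{10}/\partial\phi = 0$ the difference of curvatures is $\sigma\, \partial^2 E_{10}/\partial\phi^2$ with $\sigma = (e^{\beta E_{10}} + 1)^{-1} > 0$, so a gap with positive curvature at the top always softens the barrier.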
Here, we have computed the effective PES along a given reaction coordinate. Once this surface is at our disposal, it is a simple task to perform nuclear dynamics on top of it. As we mentioned in the previous section, this nuclear dynamics can be performed in two ways. First, one could propagate ensembles with the Fokker-Planck equation (21), inserting the pre-computed effective potential and using any of the well-known partial differential equation solvers. Alternatively, one can propagate individual trajectories with the Langevin equations (20), using any of the standard stochastic propagators described in the literature.
However, in many situations it will not be advantageous to pre-compute the effective PES (due to the large dimensionality, etc), and instead the propagation should be performed 'on the fly', i.e. computing only those points of the surface that are necessary, as the trajectory visits them.
Note that the computational cost of evaluating the temperature-dependent effective potential is necessarily larger than that of computing the gsBOPES. A number of excited-state surfaces must be computed, which can be done, for example, with the CASSCF technique that we have employed here, or with time-dependent DFT. If we wish to perform 'on the fly' MD simulations, we will also need the gradients of those surfaces, which can be obtained with linear response theory, similarly to how it is done for the ground-state surface.

Conclusions
In this paper, we have introduced a temperature-dependent effective potential and the associated constant-T dynamics that produce in a natural way the Boltzmann equilibrium population of the quantum subsystem and its corresponding back-reaction, something pursued in recent years by many researchers [8], [11]-[13]. We justify our only assumption, the equilibrium distribution prescribed by equation (12), using the formulation of Kapral, Ciccotti and co-workers [9, 10]. Our approach is particularly useful in the case of a conical intersection or spin-crossing [18], and does not assume that the electrons or quantum variables immediately follow the nuclear motion, in contrast to any adiabatic approach. The fact that, when using our effective potential, the height of a barrier in the PES and its curvature near the top decrease at avoided crossings makes our work relevant in the context of transition-state theory [23]. In particular, this is important if one wants to adequately discriminate possible quantum-like behaviour of nuclei from simple classical effects due to the direct influence of temperature on the potential between two metastable states, for example in biological systems [1, 16].