Eliminating the Second-Order Time Dependence from the Time Dependent Schrödinger Equation Using Recursive Fourier Transforms

A strategy is developed for writing the time-dependent Schrödinger Equation (TDSE), and more generally the Dyson series, as a convolution equation using recursive Fourier transforms, thereby decoupling the second-order integral from the first without using the time-ordering operator. The energy distribution is calculated for a number of standard perturbation-theory examples at first and second order. Possible applications include the characterization of photonic spectra for bosonic sampling and four-wave mixing in quantum computation, and the Bardeen tunneling amplitude in quantum mechanics.


Introduction
The Time-Dependent Schrödinger Equation (TDSE), although unsolvable in exact terms, is often approached through various perturbative methodologies, such as those pioneered by Rayleigh-Schrödinger [1], Dirac [2], Dyson [3], Lippmann-Schwinger [4], WKB [5], Feynman [6][7][8], and others [9]. Under a weak time-dependent perturbation, the TDSE solution can be written in terms of the known eigenvectors of the time-independent Schrödinger Equation (TISE), which results in a Dyson series: an infinite recursion of coupled time integrals. In field theory, it is standard to use the time-ordering operator to decouple the integrals.
Using the observation that these coupled integrals can be written as repeated Fourier transforms and their inverse, along with appropriate phase factors, a method is developed for decoupling the integrals at second order.
Many methods exist to integrate the Schrödinger Equation [5,[10][11][12][13][14]. In particular, some methods use the Fourier transform explicitly, but usually as a trick to simplify calculations, while in others, such as split-step, multi-slice, or Fourier-space filtration [15][16][17], the Fourier transform plays a more fundamental role relating to the Fourier dual spaces.
In this study, similar to split-step, we break the Hamiltonian into kinetic and potential energy portions and then employ a Recursive Fourier Transform (RFT) technique in a novel way to decouple the second-order integral from the first, bypassing the need to invoke time ordering. Thus, we can represent the TDSE, plausibly to any order, as a convolution equation by invoking the convolution theorem. This presentation aims to offer an efficient second-order analytical solution to the TDSE, while also emphasizing the method's utility as a first principle rather than a calculational trick [18].
The procedure will be applied here to basic examples, such as Gaussian potentials, but it is general to any potential whose Fourier transform exists. One may also find useful application in studying quantum Zeno dynamics, for instance in suppressing second-order photon exchange relative to first order for the purpose of generating entanglement [56].
The procedure may also be useful in quantum error correction techniques by characterizing spectral properties of noise sources leading to the decoherence of qubits.
A detailed breakdown of the paper is as follows: an introduction to the first-order RFT technique through familiar use cases (Section 2.1), a new second-order RFT decoupling technique (Section 2.2), and experimental and theoretical applications (Section 3). Appendices with supplemental sections on basic definitions and concepts (Appendices A and B), mathematical properties (Appendix C), numerical accuracy of the method (Appendix D), and interpretation of the results (Appendix E) are also included.

Methods
The TDSE will now be evaluated to first and second order using the recursive Fourier transform (RFT) method. We use as a starting point the standard formulation of the TDSE, for instance as in [57].
The matrix element for the transition between the initial state |ω_i⟩ and the final state |ω_f⟩ is given by the Dyson series (Equation (1)), where the potential V_I is written in the interaction picture. The potential has both an operator component and a continuous time dependence. We will focus our discussion on the latter.

Formulating the First-Order Time Dependent Schrödinger Equation through Recursive Fourier Transforms (RFT)
The first-order term in Equation (1) has the well-known approximation [58] for ⟨ω_f|ψ(T)⟩^(1) (Equation (2)), where the tilde (~) indicates the Fourier transform, c_i(0) is the amplitude of state |ω_i⟩ at t = 0, and V_fi = ⟨ω_f|V|ω_i⟩ is the matrix element of the potential that connects the initial and final states; it will be insignificant to the current discussion. In the second line, a standard approximation was made by allowing the time interval to become infinite in both directions ("asymptotic time"), so that Equation (2) becomes the Fourier transform of the potential. This relationship to the Fourier transform becomes further intriguing when we write Equation (2) in a suggestive way (Equation (3)). Clearly this expression involves a transformation (though not formally a Fourier-type transform) from a frequency representation to a time representation, and a subsequent transformation back to a frequency representation.
Based on this intuition, we modify Equation (2) so that it does not assume asymptotic time, by inserting an indicator function (or mask) that is non-zero only within the specified time range, 0 → T (Equation (4)). Everything on the right-hand side is written in the time domain over parameter t_1, but by writing each factor as the inverse Fourier transform of its Fourier transform, we can write the integral over t_1 as a transform to ω (Equation (6)), where sinc(x) ≡ sin(x)/x, and the symbol F makes explicit that the transform converts an expression in t_1 to an expression in ω.
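The masking-and-convolution relation above can be sanity-checked numerically. The sketch below is a minimal illustration under assumed parameters (a Gaussian perturbation with arbitrary τ and T, not taken from the paper's examples): it confirms that the finite-window integral ∫₀ᵀ V(t)e^{iωt}dt equals (1/2π)(Ṽ * R̃)(ω), where R̃(ω) = T e^{iωT/2} sinc(ωT/2) is the transform of the rect mask.

```python
import numpy as np

# Check: masking V(t) to [0, T] in the time domain equals convolving its
# full-line transform with the rect mask's transform.
# The Gaussian V and the values of tau, T are assumptions for illustration.
tau, T = 1.0, 4.0

def trapz(y, x):                      # simple trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def V(t):
    return np.exp(-t**2 / (2 * tau**2))

def V_tilde(w):                       # analytic FT, convention ∫ V e^{iωt} dt
    return tau * np.sqrt(2 * np.pi) * np.exp(-w**2 * tau**2 / 2)

def R(w):                             # FT of rect on [0,T]: T e^{iωT/2} sinc(ωT/2)
    small = np.abs(w) < 1e-12
    ws = np.where(small, 1.0, w)
    return np.where(small, T, (np.exp(1j * ws * T) - 1) / (1j * ws))

t = np.linspace(0, T, 4001)
def direct(w):                        # ∫_0^T V(t) e^{iωt} dt by quadrature
    return trapz(V(t) * np.exp(1j * w * t), t)

wp = np.linspace(-30, 30, 60001)      # frequency grid for the convolution
dw = wp[1] - wp[0]
def via_convolution(w):               # (1/2π) (Ṽ * R)(ω)
    return np.sum(V_tilde(wp) * R(w - wp)) * dw / (2 * np.pi)

for w in (0.0, 1.0, 2.5):
    assert abs(direct(w) - via_convolution(w)) < 1e-3
```

The agreement at several sample frequencies illustrates that the rect mask converts the asymptotic-time Fourier transform into a sinc convolution, exactly as in the text.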
Note that the outer operation in Equation (6) is not technically a Fourier transform to ω but rather a projection onto a single exp(iω_f t_1) basis state. To accomplish this, we computed a Fourier transform to switch to a continuous energy basis and then evaluated the result at a discrete energy ω = ω_f. Conceptually, this is important because ω is a dummy convolution variable that is replaced by the measurable energy ω_f.
By applying the convolution theorem to Equation (6) and inserting into Equation (4), we obtain an expression for the first-order transition amplitude (Equation (7)). This is the first main result: the first-order spectral response to the time-dependent perturbation V. Comparing with Equation (2), we observe variations in the frequency domain with a "spatial frequency" 4π/T, where T is the duration of the measurement (see Figure 1b). Two standard cases will be considered to illustrate the validity of the result above.

Example: Gaussian-Kicked Harmonic Oscillator
A simple system to consider is the harmonic oscillator "kicked" by a small Gaussian pulse, where τ is the characteristic time of the Gaussian.For instance, this can represent a cold atom in an optical trap [33,34].
To find the transition amplitude from the i to the f state using Equation (2), the asymptotic time approximation results in an expression (Equation (9)) in which Ω is a normalization constant owing to the fixed natural frequency of the oscillator, and ω parameterizes the frequency response of the Gaussian perturbation [58]. Using instead Equation (7), the same transition can be written in terms of the measurement interval T and the characteristic width τ of the Gaussian (Equation (10)). Equation (10) is valid under the usual conditions necessary for a perturbation series (weak interaction), but unlike Equation (9), it does not make the asymptotic time approximation.
The effect of the convolution is to add a small ripple to the Gaussian (see Figure 1b). This ripple is ignored in the standard approach (Equation (9)), where the limits of integration are arbitrarily set to infinity. The convolution with sinc(ωT/2) in Equation (10) produces the variations in the frequency domain with "spatial frequency" 4π/T.

Example: Fermi's Golden Rule
It will next be verified that Equation (7) reduces to the well-known Fermi Golden Rule. A typical example involves calculating the transition probability for an electron in a stationary atom absorbing a photon and transitioning from a bound state to a continuum of states. The Hamiltonian (Equation (11)) contains the time-dependent perturbation V(t), while Ĥ_0 and V_0 are independent of time.
The standard integral in Equation (2) results in the i → f transition amplitude of Equation (12). We dropped the first term, as is customary, in favor of the second term, which dominates around the resonant frequency ω_fi ≈ ω_d [59]. Equation (7) obtains the same result. Computing the transform and dropping the + term for the same reason as above gives the standard result (up to constant factors).
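The sinc structure underlying the Golden Rule can be checked directly. The sketch below (illustrative detuning values, constant prefactors dropped; not the paper's code) verifies that for detuning Δ = ω_fi − ω_d the finite-time amplitude is ∫₀ᵀ e^{iΔt}dt = T e^{iΔT/2} sinc(ΔT/2), and that the on-resonance transition probability grows as T², the sinc² peak that becomes the δ-function of the Golden Rule.

```python
import numpy as np

# Finite-time resonance amplitude behind the Golden Rule. Values illustrative.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

T = 50.0
t = np.linspace(0, T, 20001)

def amp(delta):                     # ∫_0^T e^{iΔt} dt by quadrature
    return trapz(np.exp(1j * delta * t), t)

for d in (0.3, 1.7):
    # closed form T e^{iΔT/2} sinc(ΔT/2); note np.sinc(x) = sin(πx)/(πx)
    closed = T * np.sinc(d * T / (2 * np.pi)) * np.exp(1j * d * T / 2)
    assert abs(amp(d) - closed) < 1e-3

# On resonance the transition probability grows as T² (the sinc² peak).
assert abs(abs(amp(0.0))**2 - T**2) < 1e-6
```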

Decoupling the Second-Order TDSE through RFT
Having examined the familiar first-order result using recursive Fourier transform methods, we now derive our second main result: an expression for the second-order term in the TDSE expansion.
The integrals for the second-order amplitudes are more complicated because the upper limit of integration for the nested integral is the integration parameter of the outer integral, t_1. Starting from Equation (1) and inserting a discrete basis of equally spaced states, the second-order transition amplitude takes the form of Equation (14). The integrals are therefore coupled, and the method in Section 2.1 must be modified. This is a Dyson series, which Dyson decoupled by introducing the time-ordering operator; that approach is used widely in quantum field theory [3].
Here, the integrals will be decoupled in a new way in the following four steps. We assume that the spectrum of energy eigenstates ω_k is discrete. For simplicity, we consider only the case in which they are equally spaced, that is, a simple harmonic oscillator. Then, we can write ω_k = kω^(0) for integers k (Equation (15)).
Step 1. Apply the convolution theorem to the nested integral. The limits of integration of the nested integral are extended to infinity using a rectangular mask, as in Equation (4) (Equation (16)). Because we truncated the signal using a rect(t) mask, this step is exact. The integrals are still coupled via t_1, but the coupling now parameterizes the width of the mask rather than the integration domain.
By following the steps in Equation (4), we can write each factor in the integrand of the second line of Equation (16) in the frequency domain and apply the convolution theorem (Equation (18)). In evaluating the Fourier transforms, we have transformed bases from the original parameter of integration, t_2, to ω and then to ω_k, an intermediate basis of energy states. Note that the expression inside the parentheses on the last line of Equation (18) is a continuous distribution in a dummy parameter ω, evaluated at a specific value ω = ω_k after performing the convolution.
The nested integral is now a convolution in ω-space, but the sinc function's width depends on t_1, which is coupled to the outer integral. How do we compute a convolution of a signal whose shape changes as t_1 is integrated over?
Step 2. Discretize the integral over t_1 as a Riemann sum and move it inside the sums over k and i. It is easier to handle Equation (18) by writing the integral over t_1 as a Riemann sum of step size ∆T and rearranging the sums (Equation (19)). (Switching the order of a sum and an integral in an infinite series can have unpredictable effects on convergence in general, but for our purposes we only examine the second-order expansion; this poses the same limitation on validity as other perturbative approaches such as Feynman diagrams.) Because the second line is a distribution in the frequency domain evaluated at a specific point, it is simply a c-number for each term in the Riemann sum.
Step 3. Allow the variation over time to vary the width of the distribution sinc(ωn∆T). Here is the central insight that decouples the integrals. For each step n in the Riemann sum (the coupling variable), we identify the expression in the second row of Equation (19), in the continuous limit n∆T → t_1, as having the form of an "impulse response" in the time domain (Equation (20)). Equation (20) is the impulse response of the system to a perturbation of duration t_1, the nested integration variable. The intermediate frequency ω_k is defined in Equation (15).
Step 4. Apply the convolution theorem to the outer integral. Now we can change the Riemann sum back to an integral over t_1. Crucially, h_ki, which is an explicit distribution in ω-space, appears inside an integral over time t_1; we can therefore interpret it as a function of time rather than frequency. We now repeat the earlier technique of extending the integration domain to ±∞ and inserting a rectangular function of width T (Equation (21)). The effect of the outer integral over t_1 on the nested integral is to vary the width t_1 of the impulse response across the duration of the measurement window from 0 → T, and to sample it at ω_k to generate h_ki[t_1] (see Figure 2). Because of Step 3, everything inside the t_1 integral in Equation (21) can be treated as a distribution in t_1, and the convolution theorem can be used again. The t_1 integral becomes a Fourier transform by explicitly writing each factor in the ω-domain (Equation (22)), where H̃_ki(ω) ≡ F{h_ki[t_1]} is the Fourier transform of the impulse response, also called the amplitude transfer function (see Figure 3).
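The four steps above can be exercised numerically. The sketch below (an illustration only: the Gaussian pulse, window T, and the frequencies ω_ki and ω_fk are all assumed values, and matrix-element prefactors are dropped) computes one term of the second-order amplitude two ways: by direct nested quadrature, and by evaluating the inner integral as a convolution with a sinc kernel of width t₁ sampled at ω_ki, i.e. the impulse response h_ki[t₁] of Step 3, followed by the outer integral.

```python
import numpy as np

# One term of the second-order amplitude, computed two ways.
# All parameters (Gaussian pulse, ω_ki, ω_fk, T) are illustrative assumptions.
tau, T = 0.7, 6.0
w_ki, w_fk = 2.0, 1.5

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def V(t):
    return np.exp(-(t - 3.0)**2 / (2 * tau**2))   # pulse centred in the window

def V_tilde(w):                                   # analytic FT of V (∫ V e^{iωt} dt)
    return tau * np.sqrt(2 * np.pi) * np.exp(-w**2 * tau**2 / 2) * np.exp(3j * w)

def R(w, width):                                  # FT of rect mask on [0, width]
    small = np.abs(w) < 1e-12
    ws = np.where(small, 1.0, w)
    return np.where(small, width, (np.exp(1j * ws * width) - 1) / (1j * ws))

t2 = np.linspace(0, T, 2401)
t1 = t2[::2]                                      # t1 grid lies on the t2 grid

# Route 1: direct nested quadrature (integrals coupled through t1).
inner = np.array([trapz(V(t2[t2 <= a]) * np.exp(1j * w_ki * t2[t2 <= a]),
                        t2[t2 <= a]) for a in t1])
direct = trapz(V(t1) * np.exp(1j * w_fk * t1) * inner, t1)

# Route 2 (Steps 1-4): the inner integral becomes a convolution with a sinc
# kernel of width t1, sampled at ω_ki -- the impulse response h_ki[t1].
wp = np.linspace(-40.0, 40.0, 40001)
dw = wp[1] - wp[0]
Vt = V_tilde(wp)
h = np.array([np.sum(Vt * R(w_ki - wp, a)) * dw / (2 * np.pi) for a in t1])
decoupled = trapz(V(t1) * np.exp(1j * w_fk * t1) * h, t1)

assert abs(direct - decoupled) < 2e-3
```

Both routes agree, illustrating that the t₁-dependent sinc width carries exactly the information of the nested upper limit.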
The final ω-domain expression for the second-order transition amplitude from i → f is given by Equation (23), with the transfer function defined in Equation (24) and additional shorthand introduced for notational simplicity. This is the second main result, expressing the second-order contribution to the transition amplitude owing to an arbitrary time-limited perturbation of a system with evenly spaced energy eigenkets |ω_k⟩.
A detailed analysis of Equations (23) and (24) has been placed in Appendix C. Figure 4 compares the first- and second-order transition amplitudes, relative to the initial state ω_i, calculated with the RFT method introduced here. The second-order calculation computes paths through intermediate energies at ±20ω^(0). For the second-order calculation, the central peak is reduced, the wings are amplified, and the minima are increased. Only the range ±10ω^(0) is shown, but the contributions from terms outside of this range have a significant effect on the accuracy of the result. The figure is not drawn to scale: the second-order contribution is in reality reduced by a factor of h relative to first order.

Example: Second-Order Harmonic Perturbation Golden Rule
To illustrate the results of Section 2.2, we compare the recursive Fourier transform method with the standard second-order approach [60,61].
Consider the ramped-up oscillating potential of Equation (25), where ϵ is a small positive constant that ensures the potential ramps up gradually from t = −∞, and ω_d is the driving frequency of the potential. This can model, for instance, light interacting with an atom [57].
Via the traditional application of the TDSE at second order, we integrate the potential twice (Equation (1)) to obtain Equation (26). This can be interpreted as an amplitude for a particular transition through an intermediate state k, summed over all such possible paths (see [60]). Taking the time derivative of the squared amplitude in the small-ϵ limit results in an expression for the transition rate (Equation (27)), where the δ-function comes from the small-ϵ limit (Equation (28)). This is the second-order version of Fermi's Golden Rule.
A similar expression can be obtained using recursive Fourier transforms. Starting with Equations (23) and (24), the second-order transition amplitude is given by Equations (29) and (30). In Equation (29), the convolution is over ω_f, while in Equation (30), the convolution is over ω′. For ϵ > 0, the Fourier transform of the potential (Equation (25)) is given by Equation (31), which, in the limit ϵ → 0, becomes Ṽ(ω′) = δ(ω′ − ω_d) (see Equation (28)). Then, the transfer function in Equations (A1) and (A3) becomes Equation (32). This creates a series of descending harmonic spikes as in Figure A4, but in this case centered on ω_d. This is precisely the content of the summation terms in Equation (26), but the new calculation exposes the hidden structure in the harmonics, as described in Appendix C.
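The ϵ → 0 behavior can be illustrated numerically. The sketch below (window evaluated at t = 0 for simplicity; ϵ and ω_d are assumed values) checks that the ramped drive e^{ϵt}e^{−iω_d t} transforms to 1/(ϵ + i(ω − ω_d)), a complex Lorentzian peaked at ω_d whose real part carries a fixed area π for every ϵ, so that the ϵ → 0 limit concentrates onto a spike at ω_d, up to normalization.

```python
import numpy as np

# FT of the ramped drive V(t) = e^{ϵt} e^{-iω_d t}, t ≤ 0 for simplicity.
# ϵ and ω_d are illustrative assumptions.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

eps, w_d = 0.05, 3.0
t = np.linspace(-400.0, 0.0, 2_000_001)

def V_tilde(w):                       # ∫_{-∞}^{0} e^{ϵt} e^{-iω_d t} e^{iωt} dt
    return trapz(np.exp(eps * t) * np.exp(1j * (w - w_d) * t), t)

# Closed form: 1/(ϵ + i(ω − ω_d)), a complex Lorentzian peaked at ω = ω_d.
for w in (w_d, w_d + 0.5, w_d - 2.0):
    assert abs(V_tilde(w) - 1 / (eps + 1j * (w - w_d))) < 1e-3

# The real part ϵ/(ϵ² + (ω − ω_d)²) has area π independent of ϵ, so the
# ϵ → 0 limit concentrates onto a delta spike at ω_d (up to normalization).
w = np.linspace(w_d - 50, w_d + 50, 1_000_001)
area = trapz(eps / (eps**2 + (w - w_d)**2), w)
assert abs(area - np.pi) < 1e-2
```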
Inserting Equations (31) and (32) into Equation (29) yields Equation (33). In the limit of large t, the sinc function and its prefactor approach a delta function, δ(ω_f − ω_i − 2ω_d). In the limit of small ϵ, the factor exp(2ϵt) → 1.
The time dependence of Equation (33) is the same as in Equation (27), only made more explicit by expressing the result in terms of frequency rather than in terms of energy. The time derivative (of the amplitude, in this case) is constant, as in Equation (27).
Thus, the new method given in Equation (29) and the traditional method given in Equation (27) give similar results for the measurable transition probability under the given conditions.

Discussion
The recursive Fourier transform method for decoupling the Dyson series has application in both experiment and theory.A few possibilities are discussed.

Bosonic Sampling and Quantum Computation
Bosonic sampling [23] with indistinguishable photons represents a computational task that is believed to be intractable for classical computers; performing it at scale would therefore demonstrate so-called quantum supremacy.
Tamma and Laibacher explain that, "for a given interferometric network, the interference of all the possible multi-photon detection amplitudes . . . depends only on the pairwise overlap of the spectral distributions of the single photons" [62]. They emphasize extracting quantum information from the "spectral correlation landscapes" of photons [24]. Characterizing the frequency spectrum of a single photon is therefore an essential task.
Various physical properties are related to the spectra of the photons. Further elaborating, Tamma and Laibacher assert that their results reveal the "ability to zoom into the structure of the frequency correlations to directly probe the spectral properties of the full N-photon input state . . ." [24], where "single-photon states" are characterized by a spectral distribution c(ω) [24].
The indistinguishability of photon pairs, time delays between photons, generation of ultrashort photons, and the probability of detection in a multiphoton experiment can all be related to the spectral distribution.
The recursive Fourier transform approach we explored in this study allows us to calculate the spectral distributions c(ω) of photons with greater precision and efficiency, potentially leading to improvements in the above areas of research.

Quantum Field Theory
Dyson decoupled the nested integrals in the higher-order TDSE (Equation (1)) by introducing a time-ordering operator that places all operators in order of increasing time from right to left. The TDSE can then be written as a complex exponential (Equation (34)), where V_I is expressed in the Interaction Picture of Dirac. From this method, the usual field-theory techniques for calculating field correlation functions are typically derived.
In this study, we accomplished decoupling in a novel way, with no appeal to time ordering.This may be a more efficient method for directly calculating higher-order correlation functions or Feynman amplitudes by using convolution.It also removes the asymptotic time assumption, because the limited time intervals are computed exactly, without approximating the integration domain to be infinite.
If the recursive Fourier transform method allows for efficient calculation of higher order terms, one may be able to relax the constraint for small perturbations and allow for a broader range of potential strengths, moving out of the regime of weak coupling forces.

Bardeen Tunneling
Bardeen investigated electron tunneling at a voltage-biased junction between conductive components. Following [37], the tunneling potential between the tip and sample of an electron microscope is described as V(t) ∝ exp(ηt/h), and after making several assumptions (for instance, keeping only first-order terms and assuming a small tunneling current), the solution to the TDSE is given by Equation (35), where M_if is the matrix element of the Hamiltonian perturbation. The energies ω_f and ω_i correspond to the sample and tip of the electron microscope, respectively. Electrons come in with energy ω_i, and in a junction biased at voltage V_0, ω_i → ω_i + eV_0.
Next one typically estimates the tunneling current as the time rate of change of the tunneling probability, in the limit that η → 0.
Bardeen's formulation is valid under certain conditions [38].This result is useful because it describes tunneling in terms of time rather than space.
A related calculation can be executed using Equation (7), identifying Ṽ = δ(E), since the potential is constant in time, and hω_i = E_i + eV_0, since the initial state includes an offset due to the bias potential.
In the first case, we find that the tunneling current is non-zero when ω_f = ω_i + eV_0. In the second case, we find that the transition amplitude is non-zero under the same condition, which has the same physical meaning.
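This resonance condition can be illustrated with a short numerical sketch (illustrative units with ħ = 1, all values assumptions): for a potential constant in time, the finite-time first-order amplitude depends only on the phase e^{i(ω_f − ω_i − eV_0)t} and peaks exactly at ω_f = ω_i + eV_0.

```python
import numpy as np

# First-order amplitude vs. outgoing energy for a time-constant potential.
# Units with ħ = 1; T, eV0, ω_i are illustrative assumptions.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

T, eV0, w_i = 20.0, 1.0, 0.0
t = np.linspace(0, T, 20001)

def amp(w_f):
    return abs(trapz(np.exp(1j * (w_f - w_i - eV0) * t), t))

w = np.linspace(-2.0, 4.0, 601)
A = np.array([amp(v) for v in w])

# The amplitude peaks exactly at the resonance condition ω_f = ω_i + eV0.
assert abs(w[np.argmax(A)] - (w_i + eV0)) < 1e-6
```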
The recursive Fourier transform approach efficiently determines the probability amplitude for each outgoing energy mode. This method could permit a more detailed description of the energy transitions across the tunneling barrier, including for arbitrarily short timescales. For instance, in tunneling across a voltage-biased barrier, the energy profile of Equation (7) correlates with the excess kinetic-energy profile of an electron ensemble after crossing the barrier. The ensemble's velocity profile could then be measured.

Example: 2nd Order Bardeen Tunneling
Although a second-order expression is not typically attempted with Bardeen's approach, the recursive Fourier transform approach to the general TDSE allows us to guess at a second-order result for Bardeen tunneling.
First, we determine the transfer function for second-order Bardeen tunneling. Again using Ṽ = δ(E) and hω_i = E_i + eV_0, Equation (24) yields Equation (36); note that ω′ is the convolution parameter. As usual, the indices i, f, and k correspond to the initial, final, and intermediate energy states, respectively. Performing the Fourier transform results in Equation (37). Following the steps in Appendix C.1 and using Equation (23), the second-order Bardeen tunneling amplitude is given by Equation (38), where V_ab denotes the matrix elements of the potential operator, and the convolution variable is ω_f. Equation (38) incorporates two series of complex sinc functions. The first series is centered at ω_f = eV_0/h. The distributions in the second series are centered at ω_f = ω_k, descending in amplitude away from eV_0/h. See Figure A4.
Equation (38) thus describes the distribution of kinetic energies of the tunneled electrons. This result is significant as a transient effect at small times only. It illustrates the usefulness of the RFT method for extending existing methods of calculation.

Joint Spectral Amplitude Function
Recent experiments in quantum optics [63][64][65][66][67][68] rely on the generation of entangled photons characterized by a joint spectral amplitude function (JSA), F(ω_s, ω_i), as in Equation (39). To be concrete, consider four-wave mixing with signal, idler, and pump photons of the form e^{ik(ω_p)z − iω_p t}. In this process, two pump photons at frequency ω_0 annihilate to generate two outgoing photons (signal and idler) at frequencies ω_0 ± ∆ for some frequency detuning value ∆; this is a statement of energy conservation. The photons are assumed to be in a nonlinear, dispersive medium with wave vector k(ω) (the nonlinearity is contained in the expression with γ, but is not important for this derivation). At first order, the Schrödinger equation gives Equation (40), where F(ω_s, ω_i) is the JSA.
The wave-vector mismatch (due to dispersion) is calculated from the Taylor expansion of the wave vectors, and the JSA can then be written as an integral over the pump detuning. For Gaussian pump photons, the integral can be evaluated in closed form to obtain an expression (to second order) for the JSA [63], where in the last step the radical is set to unity using the fiber approximation |k″| ≪ 1/(σ_p² L). Alternately, this expression can be derived using the recursive Fourier transform process developed in this paper. We rewrite the integrals in Equation (40) in terms of Ω ≡ ω_p − ω_0, carrying the phase factors e^{i(ω_s+ω_i)t} e^{−iω_0 t} e^{−iγPz}. Using the methods of Section 2, performing the Fourier transform over z first results in a factor δ(k + Ω²k″/2), and using the convolution theorem, we arrive at the final convolved expression. The domain of convolution is specified with a subscript, and the distributions are evaluated at the given expressions of the signal and idler frequencies.
The newly-derived expression allows for short-time calculations using easily computable routines.In place of a radical in the denominator, the higher order effects of the pump are incorporated into the convolution operations.
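One ingredient of the closed-form Gaussian JSA is the integral over the product of the two Gaussian pump amplitudes, which yields a Gaussian in the summed detuning. A quick numerical check of this convolution identity (σ and the detuning S are illustrative values, with σ playing the role of the pump spectral width):

```python
import numpy as np

# ∫ e^{-Ω²σ²} e^{-(S-Ω)²σ²} dΩ = sqrt(π/(2σ²)) e^{-S²σ²/2}
# σ ~ pump spectral width; S ~ summed detuning. Values are assumptions.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

sigma, S = 1.3, 0.7
Om = np.linspace(-20, 20, 200001)
num = trapz(np.exp(-Om**2 * sigma**2) * np.exp(-(S - Om)**2 * sigma**2), Om)
closed = np.sqrt(np.pi / (2 * sigma**2)) * np.exp(-S**2 * sigma**2 / 2)
assert abs(num - closed) < 1e-9
```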

Quantum Zeno Dynamics
The quantum Zeno effect would be a potentially fruitful experimental case for verifying the utility of the new method for calculating second-order amplitudes. This effect can be used, for instance, to generate entanglement by suppressing second-order terms relative to first-order terms in the Schrödinger Equation [56]. Nodurft et al. use Fermi's Golden Rule to demonstrate their scheme, utilizing a photonic waveguide coupled with a series of perturbing atoms to eliminate outcomes in which both photons appear at the same output port. This effectively entangles the photons by making the wave function inseparable. They write, "this imbalance between single and two photon absorption rates facilitates the Zeno effect by suppressing terms where both a left and right photon are found, but not when a single photon is found." Performing this calculation to higher precision using the RFT method presented here may provide more control and efficiency in creating entangled states. It would be an interesting test case for verifying that this method (see Sections 2.1 and 2.2) does indeed improve upon Fermi's Golden Rule.
Other relevant potential applications of this calculation applied to the Zeno effect include error correction in quantum computers [69] and construction of quantum gates [70].

Solitons and Non-Linear Cases
The method provided here can be directly applied to any dynamical case in which the potential function has a Fourier transform. Its generality may make it particularly helpful for nonlinear versions of the Schrödinger equation. As an example, Bayindir et al. study q-deformed Rosen-Morse potentials to analyze the time evolution of solitons [71]. They utilize a spectral method to determine the stationary states of the system and the Runge-Kutta (RK) method to evolve the stationary states in time.
Non-linear terms are hard to handle because they involve convolution in the frequency domain, due to the convolution theorem.One typically avoids this by iterating them in the spatial domain, whereas the linear terms are iterated in the frequency domain.In the method presented here, the convolutions in the frequency domain are taken into account explicitly, rendering the spatial iteration of the nonlinear terms unnecessary.It seems that this method could be used as an alternative to RK to compute time evolution, readily handling linear and nonlinear terms.
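For contrast with the RFT approach, the split-step scheme referred to above alternates between domains: the nonlinear factor is iterated in the spatial domain and the linear (kinetic) factor in the frequency domain. A minimal sketch for the cubic nonlinear Schrödinger equation i∂ψ/∂t = −½∂²ψ/∂x² + g|ψ|²ψ follows (the grid, nonlinearity g, and Gaussian initial state are assumptions for illustration).

```python
import numpy as np

# Minimal Strang split-step Fourier integrator for the cubic NLSE.
# Grid, g, dt, and the Gaussian initial state are illustrative assumptions.
N, L, dt, g = 256, 40.0, 1e-3, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi = np.exp(-x**2)                                   # initial wave packet

def step(psi):
    psi = psi * np.exp(-1j * g * np.abs(psi)**2 * dt / 2)             # nonlinear: x-domain
    psi = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(psi))  # linear: k-domain
    return psi * np.exp(-1j * g * np.abs(psi)**2 * dt / 2)            # nonlinear: x-domain

for _ in range(100):
    psi = step(psi)

# Every factor is unitary, so the norm ∫|ψ|² dx is conserved.
norm = np.sum(np.abs(psi)**2) * (L / N)
assert abs(norm - np.sqrt(np.pi / 2)) < 1e-6
```

Each half-step is a pure phase in its own domain, which is why the scheme conserves the norm to machine precision; the RFT method instead keeps the frequency-domain convolutions explicit.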
It should be noted that the RK method results in a time domain wave function, whereas the methods here result in a frequency domain wave function.These can be easily compared using forward or inverse Fourier transforms.

Master Equations
When considering open quantum systems, the Lindblad equation can be used to account for dissipative or decoherence effects from the system to the environment. These effects are accomplished by a nonlinear term added to the Schrödinger equation, requiring careful treatment. There is still, however, a time integration that must be performed, which is often done using the Runge-Kutta method. The alternate method proposed here can likely be applied to that integration process, providing some utility in these cases. A careful analysis of these benefits has not been undertaken here.

Other Applications
The method proposed here, based upon Fourier transforms, has a quite general form and might be used in other scenarios. The TDSE describes the diffraction of the wave function around a small temporal perturbation. One might consider application in the spatial domain, rederiving the usual single-slit diffraction formula, and then extending this to a second-order calculation. Furthermore, in the spatial domain, one might apply the RFT technique to a tunneling barrier, for instance in a scanning tunneling microscope or in alpha decay.

Summary
In this work, we devised a novel technique that decouples the nested integrals in the Dyson series for the time-dependent Schrödinger Equation (TDSE) using recursive Fourier transforms (RFT). This provides an approach that is particularly suited for computation on both classical and quantum computers.
This method shares similarities with existing multi-slice or split-operator techniques, but it is used to refine the accuracy of wavefunction spectra rather than to propagate a wavefunction over time. The RFT approach computes the temporal diffraction of a wavefunction under a perturbing force of finite duration. It can be used, for instance, in the characterization of single photons in cases where indistinguishability is important.
The decoupling of the integrals at second order is achieved by shifting to the frequency domain to obtain a nested sinc function, then interpreting the nested sinc as a function of time while also swapping the order of operators to perform the outer time integral before the sum over energy. This varies the width of the sinc function in the frequency domain, which can be sampled at a given frequency to extract an amplitude in the frequency domain. This allows the TDSE to be expressed as a sum of (non-nested) convolutions. We anticipate that this procedure can be iterated to higher orders.
Funding: This research received no external funding.

Appendix C. Detailed Analysis
We will now examine Equations (23) and (24) in more detail.

Appendix C.1. Evaluating the Transfer Function
The form of the transfer function H̃_ki can be evaluated first for the case of a negligible potential, Ṽ(ω) = δ(ω) (infinitely wide in the time domain). Because convolution with a δ-function is the identity operation, we can evaluate h_ki[t] explicitly and take its Fourier transform (Equation (A1)). In other words, the transfer function is composed of a series of discrete impulses spaced at integer multiples of ω^(0) (because ω_ki = kω^(0)). See Figure 3.
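This spike structure is easy to reproduce numerically. For a negligible potential the impulse response reduces to h_ki[t₁] = ∫₀^{t₁} e^{iω_ki t₂} dt₂ = (e^{iω_ki t₁} − 1)/(iω_ki), whose discrete Fourier transform is two spikes, one at ω = ω_ki and one at ω = 0 (the window T, grid size N, and ω_ki — chosen on the FFT grid — are assumptions):

```python
import numpy as np

# For Ṽ = δ, the impulse response is h_ki[t1] = (e^{iω_ki t1} − 1)/(iω_ki);
# its transform is two spikes, at ω = ω_ki and ω = 0. Parameters assumed.
w_ki = 2.0 * np.pi          # intermediate transition frequency, on the FFT grid
T, N = 64.0, 4096
t = np.arange(N) * (T / N)

h = (np.exp(1j * w_ki * t) - 1.0) / (1j * w_ki)

H = np.fft.fft(h)
freqs = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

# The two dominant bins of the transfer function sit at ω = 0 and ω = ω_ki.
top2 = np.argsort(np.abs(H))[-2:]
assert sorted(round(abs(freqs[k]), 6) for k in top2) == [0.0, round(w_ki, 6)]
```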
Next, H̃_ki is convolved with the other factors in Equation (24). As shown in Figures A2 and A3, each term in the sum over k contributes complex impulses at ω_k and ω_i.
The special case k = i must be handled separately. Here, the desirable properties of the sinc function at the origin are required, and Equation (A1) reduces to the Fourier transform of a constant (Equation (A3)). This contributes a purely real amplitude at the origin, as shown in the middle graph of Figure A2.
Summing all H̃_ki contributions over the range k = ±25, it can be seen in Figure A4 that for k ≠ 0, H̃_ki contributes a real portion at ω_k = 0 which is amplified as more frequencies are included (k → ±∞), whereas the real portion remains small and finite at every other ω_k, vanishing when the distribution is normalized (top). Conversely, the imaginary portions cancel at ω_k = 0 but are significant everywhere else, decaying inversely with respect to |k| (middle).
An analogy can be drawn to the frequency-domain decomposition of sound signals. In the second-order calculation, the probability amplitude signal is deflected into a series of higher harmonics. Similarly, an acoustic musical instrument generates sound through the combination of a pluck (impulse) and a resonant cavity that amplifies higher harmonics (impulse response). This is similar to the relationship between H̃_ki in Equation (23) and the rest of that equation. (Figure A2 caption: (Top) The first-order impulse response in Equation (23), corresponding to the outer integral over t_1 in Equation (18). (2nd/3rd rows) The transfer function H̃_ki(ω) in Equation (24) captures information from the nested t_2 integral as a series of spikes, δ(ω − 2ω^(0)) + δ(ω) (left) and δ(ω − 3ω^(0)) + δ(ω) (right); see Equation (A1). (4th row) The real (solid) and imaginary (dashed) parts of the convolution in Equation (23). (Bottom) The combined result from k = 2 and k = 3.)

Appendix C.2. Stepping Through the Algorithm for the Transfer Function
A comparison was made between the direct integration of Equation (1) and the convolution approach of Equations (23) and (24) using MATLAB. The program begins by generating a nested impulse response, Equation (20). This has the form sinc * Ṽ, with a width set by t_1, as shown in Figure 3 (far left).
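This first step can be sketched as follows (a Python/NumPy stand-in for the MATLAB program; the Gaussian shape and width for Ṽ and the grid are assumptions for illustration only):

```python
import numpy as np

# Sketch of the first step: the nested impulse response has the form
# sinc * Vtilde. A narrow Gaussian stands in for Vtilde(omega) (assumed).
omega = np.linspace(-20.0, 20.0, 4001)
d_omega = omega[1] - omega[0]
t1 = 5.0

# Outer sinc of the impulse response; np.sinc(x) = sin(pi x)/(pi x),
# so this is t1 * sin(omega t1 / 2) / (omega t1 / 2), width ~ 1/t1.
outer_sinc = t1 * np.sinc(omega * t1 / (2.0 * np.pi))

# Assumed Gaussian potential spectrum, normalized to unit area.
V = np.exp(-omega**2 / (2.0 * 0.5**2))
V /= V.sum() * d_omega

# Nested impulse response: sinc convolved with Vtilde (in omega).
resp = np.convolve(outer_sinc, V, mode="same") * d_omega

print(abs(omega[np.argmax(np.abs(resp))]))   # peak remains at omega = 0
```

Because the Gaussian has unit area, convolution smooths the sinc without shifting its central peak, matching the far-left panel of Figure 3.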
This impulse response (in ω) is then varied over time across the integration limits 0 < t_1 < T, generating a sequence of sinc graphs of varying widths. The sample value at the vertical line ω_k for each graph is stored in a new array h_ki[t] (Figure 3, middle). For a given ω_k, these samples oscillate with a frequency profile that depends on the physics of the experiment (such as the properties of the potential and the duration of the measurement window).
The Fourier transform of h_ki[t] is H̃_ki(ω) (Figure 3, right panel). It is a series of impulses representing each intermediate contribution to the second-order amplitude (see Figures A2 and A4). If the perturbation is negligible, we can write Ṽ(ω) = δ(ω); in this case, H̃_ki is composed of δ-function impulses at ω_k and ω_i. If the potential is strong, other harmonics appear in this graph (see Figure A5, bottom right).
Finally, in Equation (23), H̃_ki is convolved with a phase-shifted sinc impulse response, so that a copy of the impulse response is placed wherever H̃_ki has a spike, as shown in Figure A3. This is performed for every possible intermediate state ω_k, and the amplitude plots for each are summed. Each code loop over ω_k contributes an impulse response centered at ω_k and another centered at ω_i (Figure A2).
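A minimal numerical cross-check of this step (with assumed normalization, Ṽ = δ(ω), and ω_i = 0; an illustration, not the paper's code): the outer t_1 integral can be done either by direct quadrature or by placing one copy of the phase-shifted sinc window at each impulse of H̃_ki, and the two routes agree.

```python
import numpy as np

# Cross-check (assumed normalization, Vtilde = delta, omega_i = 0):
# direct quadrature of the nested integral versus the convolution
# picture, in which a copy of the window W sits at each impulse of H_ki.
omega_k, T = 3.0, 20.0   # assumed intermediate frequency and window

def W(u, T):
    # Phase-shifted sinc window: int_0^T exp(i u t) dt
    u = np.asarray(u, dtype=float)
    safe = np.where(np.abs(u) < 1e-12, 1.0, u)
    return np.where(np.abs(u) < 1e-12, T + 0j,
                    (np.exp(1j * safe * T) - 1.0) / (1j * safe))

omega = np.linspace(-10.0, 10.0, 201)

# (a) direct quadrature over t1 (trapezoid rule)
t1 = np.linspace(0.0, T, 8001)
dt = t1[1] - t1[0]
h = (np.exp(1j * omega_k * t1) - 1.0) / (1j * omega_k)  # inner integral
integrand = np.exp(1j * np.outer(omega, t1)) * h
c2_direct = (integrand.sum(axis=1)
             - 0.5 * (integrand[:, 0] + integrand[:, -1])) * dt

# (b) convolution picture: one window copy per impulse of H_ki,
# centered at omega = -omega_k and at omega = 0
c2_conv = (W(omega + omega_k, T) - W(omega, T)) / (1j * omega_k)

print(np.max(np.abs(c2_direct - c2_conv)))   # agreement to quadrature error
```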
After looping over all 2k_max + 1 intermediate distributions, the impulses centered on ω_i reinforce 2k_max + 1 times, whereas the second-order signal at each ω_k appears only once. The result is a strong central peak with decaying wings (Figure 4). When the potential is stronger, h_ki[t_1] does not line up on its endpoints, and spectral artifacts occur at integer multiples of ω^(1).
For reasons that are not fully understood by the author, the interpretation of the second-order results is clear only when ω^(1) = ω^(0), which is known as cyclotron resonance. This appears to be related to the interpretation of Equations (23) and (24) as a signal-reconstruction problem using sinc interpolation; this is the only case considered in this study. Figure A7a,b demonstrate the distinct behavior of direct integration versus convolution with respect to the number of intermediate states k_max that are summed over. The convolution method converges faster than direct integration as more intermediate terms are included, because it relies heavily on non-local surrounding states. This is not surprising given the similarity of Equations (23) and (24) to sinc-interpolation signal reconstruction (see Appendix E.2).
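The sinc-interpolation reconstruction invoked here is the standard Whittaker-Shannon formula; a generic illustration (a textbook DSP fact, not the paper's equations) shows how samples combine non-locally through shifted sincs:

```python
import numpy as np

# Whittaker-Shannon reconstruction: a bandlimited signal is rebuilt from
# its Nyquist-rate samples by summing shifted sinc functions. Every
# reconstructed point draws on many non-local samples, as in the text.
dt = 0.5                                    # sample spacing; Nyquist = pi/dt rad/s
f = lambda t: np.cos(2.0 * t) + 0.5 * np.sin(1.0 * t)   # bandlimited (2 < pi/dt)
n = np.arange(-400, 401)                    # finite truncation of the sample train

t = np.linspace(-5.0, 5.0, 101)
recon = sum(f(k * dt) * np.sinc((t - k * dt) / dt) for k in n)

print(np.max(np.abs(recon - f(t))))         # small truncation error
```

Truncating the sample train leaves a residual that decays only inversely with the number of retained samples, which parallels the observed sensitivity of the convolution method to distant intermediate states.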

Figure 1. In the first-order TDSE, Equation (7), a Gaussian potential's tails are truncated by the measurement. (a) The integration windows for the TDSE are marked as vertical lines. The tails of the Gaussian are excluded, leading to ringing in the frequency domain (not shown). (b) A Gaussian potential in

Figure 2. The width of the sinc function in the impulse response depends on t_1. As t_1 increases from top to bottom in the figures, we take samples at ω = ω_k (defined in Equation (15), shown as a diamond on the vertical line in the figure; here k = −2). The sample values trace out another sinc function.

Figure 3. We interpret Equation (20) as a function of t_1 instead of ω. (Left) Nested sinc distribution (second line in Equation (18)), varied in width over t_1 while repeatedly sampled at ω_k = −4ω^(0) (vertical line, with a diamond marking the sample value). As the width of the sinc decreases, its height grows linearly with t_1, so the height of the sample oscillations is constant. (Middle) The samples h_ki[t_1] oscillate as t_1 is varied. (Right) The Fourier transform H̃_ki(ω′) = F{h_ki[t_1]} is a series of spikes representing second-order impulses. These δ-functions are convolved in Equation (23) to place a copy of the outer sinc at each impulse. Note that for illustrative purposes the potential was ignored, i.e., chosen such that Ṽ = δ(ω).

Figure 4. Comparison of the first- and second-order transition amplitudes, relative to the initial state ω_i, calculated with the RFT method introduced here. The second-order calculation computes paths through intermediate energies at ±20ω^(0). For the second-order calculation, the central peak is reduced, the wings are amplified, and the minima are raised. Only the range ±10ω^(0) is shown, but contributions from terms outside this range have a significant effect on the accuracy of the result. Not drawn to scale: the second-order contribution is in reality reduced by a factor of h relative to first order.

Figure A4. H̃_ki(ω) for −25 < k < 25. The contribution to H̃_ki(ω) at ω = 0 is non-zero for every k ≠ 0, growing without bound as more momenta are included (k → ∞); see Equation (A3). There is also a small, constant, real contribution at each ω_k, which vanishes when the distribution is normalized (top). The imaginary portions cancel at ω_k = 0 and build up everywhere else, decaying inversely with |k| (middle); see Equation (A1). The absolute magnitude of H̃_ki is a harmonic series of impulses (bottom). These impulses are convolved with the first-order impulse response, Equation (7).

Figure A5. Bandlimited signal: Because the duration of the time integration of the TDSE is less than the full signal (5% shown here, top left), h_ki[t] is smaller than Ṽ by that factor, and the resolution is decreased (lower left; the distinct spikes are only resolvable because the horizontal scale is expanded). In the upper right, h_ki[t] is padded with copies of itself to compensate for the limited bandwidth of the signal. This ensures that its Fourier transform has the desired high resolution (lower right). Shown here is the k = +2 term of Equation (24). A moderate-strength potential was used, resulting in harmonics at nearby states (see the small spikes at k = +1, +3 and other integers in the bottom right).