Tkwant: a software package for time-dependent quantum transport

Tkwant is a Python package for the simulation of quantum nanoelectronics devices to which external time-dependent perturbations are applied. Tkwant is an extension of the kwant package (https://kwant-project.org/) and can handle the same types of systems: discrete tight-binding-like models that consist of an arbitrary central region connected to semi-infinite electrodes. The problem is genuinely many-body even at the mean field level and is treated within the non-equilibrium Keldysh formalism. Examples of Tkwant applications include the propagation of plasmons generated by voltage pulses, propagation of excitations in the quantum Hall regime, spectroscopy of Majorana fermions in semiconducting nanowires, current-induced skyrmion motion in spintronic devices, multiple Andreev reflection, Floquet topological insulators, thermoelectric effects, and more. The code has been designed to be easy to use and modular. Tkwant is free software distributed under a BSD license and can be found at https://tkwant.kwant-project.org/.


I. INTRODUCTION
The field of quantum nanoelectronics, which connects coherent nano- or microscale devices at sub-Kelvin temperatures to macroscopic electronic measuring apparatus, began in the early eighties and lies at the root of emerging solid-state-based quantum technologies. A plethora of new physical effects have been discovered, including conductance quantization, electronic interferometry (Aharonov-Bohm effect, 1 Mach-Zehnder interferometers 2 ), interaction effects (Coulomb blockade, 3 Kondo effect in quantum dots 4,5 ), hybrid normal-superconducting effects (Andreev reflection 6 ), Klein tunneling in graphene, 7,8 sub-Poissonian quantum noise, 9 and many more. Numerical simulations, featuring increasingly close connections to experiment, play an important role in the study of these phenomena.
A recent and growing trend in the field is to revisit quantum nanoelectronics at increasingly higher frequencies in the GHz to THz range, where one can probe the internal dynamics of a system. While such high-frequency nanoelectronics is still mostly under development, many important milestones have already been reached, including the design of coherent single-electron sources and their tomography, [10][11][12][13] the study of the propagation of excitations produced by voltage pulses at zero magnetic field 14 and in the quantum Hall regime, 15 the measurement of photo-assisted shot noise, 16 and more. Many phenomena involving superconductors (e.g. multiple Andreev reflection) are intrinsically time-dependent due to the appearance of the AC Josephson effect 17-19 when a superconducting junction is DC-biased. 20 The recent developments in the manipulation of (semiconducting or superconducting) quantum bits also involve time-resolved dynamics in the GHz range. [21][22][23][24][25] There exists, in short, a growing number of experiments that address time-dependent phenomena.
On the other hand, the theory of time-dependent quantum transport is rather mature. It involves several formalisms that use either non-equilibrium Green's functions 26 or scattering approaches 27 , both developed either for periodic perturbations (Floquet formalism) or directly in the time domain. In contrast, numerical simulations, which play an increasingly important role in DC quantum transport, have received limited attention in the time domain. This is due, in part, to the fact that until recently such simulations were quite computationally intensive, making their application to relevant phenomena rather difficult. Recent algorithmic progress, however, makes direct time-dependent simulations of relevant quantum devices computationally affordable on a small computing cluster or even on a desktop computer.
This article presents tkwant, a software library that implements state-of-the-art algorithms for the simulation of time-dependent quantum transport. 28 Tkwant (Time-dependent kwant) is an extension of the kwant 29 Python library for DC quantum transport. Tkwant can simulate a wide variety of models for different materials (semiconductors, graphene, topological materials, superconductors, metals, magnets, etc.), different geometries (Hall bars, rings, wires, etc.) in arbitrary dimension (1D, 2D, 3D, . . . ), in the presence of arbitrary perturbations (voltage pulses, polarized light, static or dynamical disorder, time-dependent electrostatic gates, etc.), and with an arbitrary number of connected electrodes. Tkwant has been designed to be easy to learn, to use, and to extend. It is the hope of its authors that it will be useful for many new projects outside of its original range of applications.
The article is organized as follows: Sec. II introduces tkwant through a simple concrete example: the propagation of a voltage pulse inside an electronic Fabry-Perot cavity. Sec. III provides a brief presentation of the main theoretical objects of time-dependent quantum transport. The different numerical algorithms used in tkwant are discussed in Sec. IV. Sec. V discusses how the structure of the code is organized to handle one-body and many-body problems. Sec. VI illustrates various aspects of tkwant with a full-scale application: the propagation of a voltage pulse sent from an electrostatic gate deposited on top of a graphene quantum billiard. Summary and concluding remarks are given in Sec. VII. Additional technical details on the band structure analysis and on the calculation of boundary conditions in electrodes are given, respectively, in Appendix A and B. The source code of the actual simulations that were used to generate the figures of this article is provided as supplementary material.

II. TKWANT IN A NUTSHELL
This section features a rapid tour of tkwant. We start by formulating the type of problems that tkwant can solve. Then, we present a simple, yet nontrivial, example calculation for the propagation of an abrupt voltage rise in a one-dimensional Fabry-Perot interferometer. The complete source code for this example is discussed, in order to illustrate the close relation between the short Python code that one writes and the mathematical model that one wants to simulate. Finally, we review various existing applications of tkwant.

A. Problem formulation
Tkwant can handle general discrete quadratic Hamiltonians of the generic form

\hat H(t) = \sum_{i,j} H_{ij}(t)\, \hat c^\dagger_i \hat c_j,

where the time-dependent matrix H_ij(t) is defined by the user and \hat c^\dagger_i (\hat c_i) is the fermionic creation (annihilation) operator on site i. A site i may label not only lattice positions, but might also refer to other degrees of freedom, such as spin or orbital numbers. Tkwant inherits from kwant a comprehensive set of tools for building the Hamiltonian \hat H(t) for devices of arbitrary shapes and dimensions on any lattice (graphene, cubic, amorphous, combinations of those, etc.). Note that even though we consider non-interacting problems, we have defined the above system in terms of a second-quantized Hamiltonian. Indeed, in contrast to DC transport where one can essentially solve the one-body quantum problem at (or close to) the Fermi level, here the time-dependent perturbation makes the handling of the Pauli principle nontrivial, even in the non-interacting limit.
Although kwant may be used for systems with a finite number of sites, its most common usage is for infinite systems. These systems consist of a finite central region called the scattering region (s, with N_s sites) connected to several infinite electrodes called leads (l). The leads are semi-infinite and invariant by translation. Such a Hamiltonian takes the form

\hat H(t) = \hat H_s(t) + \sum_l \hat H_l + \sum_l \hat H_{sl}(t),

where the different terms correspond, respectively, to the scattering region (s), to the leads (l), and to the coupling between the scattering region and the infinite leads (sl).
A sketch of such a system is shown in Fig. 1. We refer to such infinite systems as open systems. Note that they are different from another class of systems, also referred to as open, that are described by a Lindblad equation and that can be addressed, for example, with the software package qutip. 30 The Hamiltonian \hat H_s(t) for the scattering region is a general quadratic Hamiltonian restricted to the N_s sites of the central region. Only the finite scattering region \hat H_s(t) and the coupling to the leads \hat H_{sl}(t) contain time-dependent perturbations. The leads are time-independent with one exception: they may be shifted by a global potential H^l_{ii}(t) = V_l(t) that is identical on all the sites of a lead. Indeed, in this case a simple gauge transformation allows one to restore a time-independent lead at the cost of adding a global time-dependent phase to the coupling Hamiltonian:

H^{sl}_{in}(t) \to e^{-i\phi_l(t)}\, H^{sl}_{in}(t), \qquad \phi_l(t) = \int_{t_0}^{t} V_l(u)\, du. \qquad (4)

In tkwant, we focus on leads that are invariant by translation: they consist of unit cells a that are repeated to form a semi-infinite quasi-one-dimensional system. Each unit cell contains N sites labeled by indices n, m; a site i in the lead is described by the pair i = (a, n). The Hamiltonian of a lead l is fully characterized by two N x N matrices, H^l_0 (intra-cell) and V^l (inter-cell),

\hat H_l = \sum_{a} \sum_{n,m} H^{l}_{0,nm}\, \hat c^\dagger_{a,n} \hat c_{a,m} + \left( V^{l}_{nm}\, \hat c^\dagger_{a,n} \hat c_{a-1,m} + \text{h.c.} \right). \qquad (5)

The leads are also considered to be in (possibly different) thermal equilibrium, characterized by a Fermi function f_l(E) with a time-independent chemical potential \mu_l and temperature T_l. The coupling between the scattering region and a lead is an arbitrary quadratic Hamiltonian between the scattering region and the first unit cell of the lead,

\hat H_{sl}(t) = \sum_{i \in s} \sum_{n} H^{sl}_{in}(t)\, \hat c^\dagger_i \hat c_{0,n} + \text{h.c.}

Except for the fact that some matrix elements are time-dependent, the systems considered in tkwant are identical to those in kwant.
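To make the structure of the lead Hamiltonian of Eq. (5) concrete, here is a small self-contained sketch (plain NumPy, not tkwant code) that computes the band structure E_n(k) of a lead directly from the two matrices H^l_0 and V^l. It is purely illustrative; kwant itself provides this functionality through its band-structure tools.

```python
import numpy as np

def lead_bands(h0, v, ks):
    """Bloch band structure E_n(k) of a translationally invariant lead.

    The lead is characterized by the two N x N matrices of Eq. (5):
    h0 (intra-cell Hamiltonian) and v (hopping from cell a to cell a-1).
    For a plane wave psi_a = e^{i k a} phi, the Bloch Hamiltonian is
    H(k) = h0 + v e^{-ik} + v^dagger e^{+ik}.
    """
    bands = []
    for k in ks:
        hk = h0 + v * np.exp(-1j * k) + v.conj().T * np.exp(1j * k)
        bands.append(np.linalg.eigvalsh(hk))  # sorted real eigenvalues
    return np.array(bands)                    # shape (len(ks), N)

# Example: a two-leg ladder lead (N = 2 sites per unit cell),
# hopping -1 along the legs and -1 on the rung.
h0 = np.array([[0.0, -1.0], [-1.0, 0.0]])
v = -np.eye(2)
ks = np.linspace(-np.pi, np.pi, 201)
bands = lead_bands(h0, v, ks)
# The two bands are E(k) = -2 cos(k) -/+ 1 (bonding/antibonding rung states).
```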
The general problem that tkwant addresses is the time evolution of observables such as densities or currents after the system is subject to a time-dependent perturbation for t > t_0. The system is initially in a stationary state for t < t_0 (in or out of equilibrium). Tkwant computes expectation values such as

\langle \hat A(t) \rangle = \mathrm{Tr}\left[ \hat\rho(t)\, \hat A \right],

where \hat\rho(t) is the non-equilibrium density matrix of the system. No assumption of adiabaticity or otherwise is made in the calculation, and higher-order observables 31 (such as quantum noise) can also be obtained.

B. Diving into tkwant with a simple example
Below we discuss a numerical experiment for a simple yet nontrivial system. We consider an infinite onedimensional chain with nearest-neighbor hoppings. Two potential barriers, A and B, are placed in the system to form a Fabry-Perot cavity. A sketch of the system is shown in the top panel of Fig. 2. At t = 0, the electric potential of the left electrode is suddenly raised from zero to a finite value V b and we want to study the transient regime of the current I(t) before it eventually reaches its stationary value. The lower panel of Fig. 2 shows the result: the current increases over several plateaus that correspond to the different trajectories through the cavity (direct transmission, reflection at B followed by reflection at A then transmission, etc.). The inset shows an interesting phenomenon: on each plateau, there are small oscillations of the current at a frequency eV b /h. We refer to Ref. [32] for a detailed discussion of the physics of this system.
The Hamiltonian for this system reads

\hat H(t) = \sum_{i} \epsilon_i\, \hat c^\dagger_i \hat c_i - \sum_{i \neq 0} \left( \hat c^\dagger_{i+1} \hat c_i + \text{h.c.} \right) - \left( e^{-i\phi(t)}\, \hat c^\dagger_1 \hat c_0 + \text{h.c.} \right), \qquad (8)

FIG. 2. Top panel: schematic of the system, a one-dimensional chain with potential barriers on sites A and B that transform the system into a Fabry-Perot cavity. At t = 0 one quickly raises the voltage V(t) of the left lead (which induces a phase φ(t) in the hopping shown in red) from 0 to V_b. A similar system has been studied in Ref. [32]. Lower panel: result of the simulation, the current I(t) measured on the right of the two barriers A and B. This plot can be obtained by running the Python code given in the code listing 1. Inset: detail of the main figure showing periodic oscillations of the current.
where \epsilon_i is a static onsite potential that defines the cavity. The Fermi level is fixed at E_F = −1 and the temperature at zero. The time-dependent ramp-up voltage V(t) of Eq. (9) is applied to the left electrode (i ≤ 0). The voltage ramp amounts to adding an extra phase φ(t) to the hopping from the electrode to the central system, see Eq. (4). The current I(t) is measured on a hopping to the right of a site i_0 that is chosen in the right part of the central region, outside of the Fabry-Perot cavity. To simulate the system described above with tkwant, it is sufficient to write the short Python program that can be found below in Listing 1. Such Python scripts replace the traditional input files of standalone numerical simulation programs while providing more flexibility for defining the system, analyzing the results, and combining tkwant with other packages. Tkwant defines objects that represent high-level concepts closely matching the mathematical objects of the underlying formalism. All these objects have a documented application programming interface (API) and are exposed to the user in order to provide as much flexibility as possible. Understanding this script requires basic familiarity with the Python language. The first function make_fabry_perot_system() defines a kwant system that implements Eq. (8). The scattering region contains 80 sites with barriers \epsilon_4 = \epsilon_{75} = 5.19615 (sites A and B, respectively). An additional gate voltage (here −0.0956), applied to all the sites inside the cavity (4 < i < 75), allows one to tune the cavity in or out of resonance. The leads possess an additional translational symmetry so that they are entirely described by a single unit cell (here a single site) and its connection to neighboring unit cells. We refer to the kwant 29 documentation for a description of how to define systems.
The other function phi(time) implements Eq. (4) with V (t) of Eq. (9). Here, the integral of Eq. (4) has been calculated analytically, but it can also be calculated numerically for more complex functions V (t).
The main program body calls make_fabry_perot_system() to create the kwant system and "finalizes" it. This operation takes the high-level "builder" of kwant and constructs a low-level object better suited for numerical calculations. Note that from the perspective of the kwant package, the parameter time is just another parameter of the kwant Hamiltonian as could be e.g. a magnetic field or a spin-orbit strength. However, tkwant will recognize a parameter with the name time as the time variable.
The next stage is to define the observables that will be calculated during the simulation. To this end, an instance of kwant.operator.Current is created. This stage is necessary because the internal state of a tkwant simulation can become very large and therefore cannot be recorded for every time step. Most observables must therefore be computed on the fly and, as such, be declared before the beginning of the simulation.
The actual tkwant code starts with tkwant.manybody.lead_occupation(), where one defines a chemical potential of E_F = −1 for all leads. The temperature is zero by default. The creation of a tkwant.manybody.State instance initializes the time-dependent many-body state. This many-body state is evolved according to the many-body Schrödinger equation using the state.evolve() method.
The state.evaluate() method is used for the on-the-fly calculation of the current. Note that the function state.evaluate() returns either scalars or regular Python (NumPy 33 ) arrays so that any post-processing or plotting of the data can be performed with standard Python tools.
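Listing 1 itself is provided with the article; since it is not reproduced in this section, the following sketch illustrates the overall shape of such a script using the tkwant functions named above. The helper make_fabry_perot_system(), the parameter values, and the ramp shape assumed inside phi() are illustrative placeholders, not the actual content of Listing 1 or of Eq. (9).

```python
import math

def phi(time, vb=0.5, tau=10.0):
    """Phase accumulated on the lead-system hopping, Eq. (4):
    phi(t) = integral of V(u) du from 0 to t.

    ASSUMED ramp shape (illustrative, not the V(t) of Eq. (9)):
    V(t) = vb * sin^2(pi t / (2 tau)) for 0 <= t < tau, V(t) = vb afterwards.
    The integral is carried out analytically.
    """
    if time <= 0.0:
        return 0.0
    if time < tau:
        return vb * (time / 2.0
                     - tau / (2.0 * math.pi) * math.sin(math.pi * time / tau))
    return vb * tau / 2.0 + vb * (time - tau)

def main():
    # kwant/tkwant are imported here so that phi() above stays standalone.
    import kwant
    import tkwant

    # Hypothetical helper: builds the chain of Eq. (8) with kwant.Builder(),
    # barriers on sites A and B, two 1D leads, and the lead-system hopping
    # multiplied by exp(-1j * phi(time)) (gauge transformation of Sec. II A).
    syst = make_fabry_perot_system().finalized()  # noqa: F821 (hypothetical)

    # Observables are declared before the simulation (on-the-fly evaluation).
    current_operator = kwant.operator.Current(syst)

    # Chemical potential E_F = -1 in all leads; temperature zero by default.
    occupation = tkwant.manybody.lead_occupation(chemical_potential=-1)

    # Initialize the many-body state and evolve it in time.
    state = tkwant.manybody.State(syst, tmax=200, occupations=occupation)
    for time in range(200):
        state.evolve(time)
        current = state.evaluate(current_operator)
        print(time, current)

# Calling main() runs the simulation and requires kwant and tkwant installed.
```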
The apparent simplicity of the above script hides a lot of technicalities and default values for certain parameters. Tkwant can be used in this default "automatic" mode which is sufficient for many purposes. However, the user can also claim control of all the defaults and other technical aspects as will be explained in the rest of this article.

C. Other examples: a review of tkwant applications
At the time of writing, tkwant has already been used for various applications. We review them briefly below in order to illustrate some of the problems that can be studied within the tkwant framework. We also review the articles that cover various related technical aspects (algorithm and formalism).
The first tkwant article 34 contains a detailed description of the theoretical framework and in particular shows the equivalence between the non-equilibrium Green's function formalism and the scattering wavefunction formalism which is actually used by tkwant. For computational purposes, we indeed find that the usage of scattering wavefunctions allows one to obtain multiple orders of magnitude of speed-up compared to Green's-function-based approaches.
Ref. [35] contains a simplified presentation of the formalism as well as an application to flying qubits in two-dimensional gases. Ref. [36] extends the study of flying qubits to realistic models.
Ref. [37] improves the algorithms of Ref. [34] to achieve a computational time linear in t (total simulation time) and N_s (number of sites in the scattering region). Tkwant currently implements this "source-sink" algorithm. This article also discusses the propagation of voltage pulses through Josephson junctions as well as the current-voltage characteristic in the presence of multiple Andreev reflection.
Other studies featuring superconductors include a method for performing the spectroscopy of Majorana modes in semiconducting nanowires 38 and a mean-field technique to describe the role of the electromagnetic environment of Josephson junctions within a microscopic model 39 .
Refs. [32 and 40] discuss the propagation of voltage pulses through Mach-Zehnder and Fabry-Perot electronic interferometers. Ref. [41] studies how the propagation of voltage pulses in the quantum Hall regime could be manipulated in real time. Ref. [42] illustrates how an effective (Floquet) topological insulator could be stabilized with a periodic time-dependent perturbation such as circularly polarized light. Ref. [31] provides the necessary formalism and technicalities to calculate quantum fluctuations (such as current noise) with tkwant. The formalism is illustrated with the calculation of the noise associated with Lorentzian pulses (the so-called Levitons 43 ). Ref. [44] studies the current generated by a moving skyrmion in a magnetic material. Ref. [45] studies time-dependent (electronic) heat transport and thermoelectric effects. Ref. [46] studies the propagation of plasmons in 1D or quasi-1D geometries and makes contact with the theory of Luttinger liquids.
All the examples discussed above can be simulated with the current version of tkwant with the exception of Refs. [39 and 46] which require an extension that is still at prototype level. A few dozen lines of code typically separate one application from another.

III. FUNDAMENTALS OF TIME-DEPENDENT QUANTUM TRANSPORT FORMALISM
In this section, we provide the minimum level of formalism needed to define the mathematical objects that are calculated in a tkwant simulation. The most popular formalism for time-dependent quantum transport uses the Keldysh formalism of non-equilibrium Green's functions (NEGF). Starting from the general formalism, 47-49 its application to quantum transport in mesoscopic systems can be found in Refs. [26, 50-52]. The alternative, yet fully equivalent, approach that tkwant uses employs scattering wavefunctions. This natural extension of kwant's stationary scattering wavefunctions is highly advantageous from a computational perspective. The formalism is explained in detail in Ref. [34].

A. Definition of Keldysh Green's functions
The two central objects of NEGF are, respectively, the retarded (R) and lesser (<) Green's functions,

G^R_{ij}(t, t') = -i\, \theta(t - t')\, \langle \{ \hat c_i(t), \hat c^\dagger_j(t') \} \rangle,
G^<_{ij}(t, t') = i\, \langle \hat c^\dagger_j(t')\, \hat c_i(t) \rangle,

where \hat c^\dagger_i(t) and \hat c_i(t) refer to the previously introduced fermionic operators in the Heisenberg picture, and the average \langle \dots \rangle = \mathrm{Tr}[\dots \hat\rho] is taken with respect to a non-equilibrium density matrix that supposes that each lead remains at its own thermodynamic equilibrium, while the system is time-independent (in a stationary state) for t < t_0 before the time-dependent perturbations are turned on for t > t_0. For quadratic models of the form of Eq. (2), calculating these Green's functions is a two-step procedure, 34,52 where one first calculates G^R (solving the quantum mechanical problem) and then G^< (filling the states according to a non-equilibrium statistical distribution). Calculating the average of a physical observable \hat A = \sum_{ij} A_{ij}\, \hat c^\dagger_i \hat c_j can be done simply from the knowledge of G^< at equal times:

\langle \hat A(t) \rangle = -i \sum_{ij} A_{ij}\, G^<_{ji}(t, t).

As it turns out, the calculation of the retarded and lesser Green's functions can be bypassed entirely. Doing so is computationally advantageous, in particular when only equal-time quantities are needed, but also for calculations of quantum noise. 31

B. Toy model of wavefunction formalism: finite system

To motivate the scattering wavefunction approach, let us first discuss a simpler situation where the system contains a finite number of sites, a finite number of particles, and is initially at equilibrium at zero temperature. For t < t_0, the system is described by a time-independent Hamiltonian \hat H_0. For t > t_0, we write (without loss of generality) the full Hamiltonian as the sum of \hat H_0 with whatever time-dependent perturbation \hat W(t) has been added,

\hat H(t) = \hat H_0 + \hat W(t).

Diagonalizing \hat H_0 provides the initial wavefunctions \psi_\alpha, where the index α takes discrete values. The stationary Schrödinger equation is

\hat H_0\, \psi_\alpha = E_\alpha\, \psi_\alpha, \qquad (15)

from which one can build the Slater determinant that forms the many-body state of the system at t < t_0:

|\Psi(t < t_0)\rangle = \prod_{E_\alpha \le E_F} \hat d^\dagger_\alpha\, |0\rangle,

with the operators \hat d^\dagger_\alpha defined as

\hat d^\dagger_\alpha = \sum_i \psi_\alpha(i)\, \hat c^\dagger_i,

and E_F the Fermi level.
Solving the many-body time-dependent problem for t > t_0, when one switches on the perturbation, is straightforward. It amounts to following the evolution of each of the wavefunctions according to

i \partial_t \psi_\alpha(t, i) = \sum_j H_{ij}(t)\, \psi_\alpha(t, j).

One obtains

|\Psi(t)\rangle = \prod_{E_\alpha \le E_F} \hat d^\dagger_\alpha(t)\, |0\rangle, \qquad \hat d^\dagger_\alpha(t) = \sum_i \psi_\alpha(t, i)\, \hat c^\dagger_i.

The unitary evolution of the wavefunctions ψ_α(t, i) preserves the initial orthonormalization of the stationary states ψ_α, so that one has

\sum_i \psi^*_\alpha(t, i)\, \psi_\beta(t, i) = \delta_{\alpha\beta},

from which the fermionic anticommutation relations of the \hat d^\dagger_\alpha(t) operators follow. Calculating a physical observable is again straightforward and amounts to calculating the observable for each filled state:

\langle \hat A(t) \rangle = \sum_{E_\alpha \le E_F} \sum_{ij} \psi^*_\alpha(t, i)\, A_{ij}\, \psi_\alpha(t, j).

Let us emphasize that the sum over occupied states is crucial here, as it enforces the Pauli principle.
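The toy model above can be turned into a complete numerical experiment in a few lines. The following self-contained NumPy sketch (a 6-site chain with a sudden static onsite perturbation; all parameters are illustrative, and this is not tkwant code) fills the states below E_F, evolves each of them, and verifies that the unitary evolution conserves the total particle number.

```python
import numpy as np

n, e_f = 6, 0.0
# H0: a 6-site chain with nearest-neighbor hopping -1.
h0 = -(np.eye(n, k=1) + np.eye(n, k=-1))
energies, psi0 = np.linalg.eigh(h0)       # stationary states, Eq. (15)
occ = energies <= e_f                     # orbitals in the Slater determinant

# For t > t0 a perturbation W is switched on: here a static onsite shift on
# site 0, so the exact one-body propagator is U(t) = exp(-i (H0 + W) t).
w = np.zeros((n, n))
w[0, 0] = 0.5
evals, evecs = np.linalg.eigh(h0 + w)

def density(t):
    """n_i(t): sum over occupied states alpha of |psi_alpha(t, i)|^2."""
    u = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi_t = u @ psi0[:, occ]              # evolve every filled one-body state
    return np.sum(np.abs(psi_t) ** 2, axis=1)

# Unitarity preserves both the orthonormalization and the particle number.
n_total_0 = density(0.0).sum()
n_total_5 = density(5.0).sum()
```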

C. Scattering wavefunction formalism: infinite system
The scattering wavefunction formalism generalizes the previous subsection to the case of infinite systems that consist of a finite scattering region connected to several leads that remain at their respective thermodynamic equilibrium. The theory is exact for arbitrary time-dependent perturbations (no adiabatic assumption is necessary). We partition the Hamiltonian as

\hat H(t) = \hat H_0 + \hat W(t). \qquad (23)

The scattering wavefunctions ψ_αE at t < t_0 are now labeled by the energy, a continuous variable E (the system being infinite, the energy can take any value inside the bandwidth), and a discrete index α that labels all the conducting channels at energy E, such that

\hat H_0\, \psi_{\alpha E} = E\, \psi_{\alpha E}. \qquad (24)

Note that despite the apparent resemblance of Eq. (24) with Eq. (15), they are of very different nature. While Eq. (15) is simply the solution of the eigenvalue problem for a finite matrix, Eq. (24) covers an infinite system with a continuous spectrum. The ψ_αE are obtained from wavefunction matching between the incoming and outgoing modes in the leads.
Very conveniently, the ψ_αE are direct outputs of the kwant solver. The scattering wavefunctions of kwant are normalized such that they correspond to a unit particle current per channel and per energy (i.e. before wave matching, the lead plane waves are normalized to carry unit incoming and outgoing current, which guarantees the unitarity of the scattering matrix). For t > t_0, one needs to follow the dynamics of these wavefunctions:

i \partial_t \psi_{\alpha E}(t, i) = \sum_j H_{ij}(t)\, \psi_{\alpha E}(t, j). \qquad (25)

Unitarity of the time evolution implies

\sum_i \psi^*_{\alpha E}(t, i)\, \psi_{\beta E'}(t, i) = 2\pi\, \delta_{\alpha\beta}\, \delta(E - E').

The observables are then calculated with

\langle \hat A(t) \rangle = \sum_\alpha \int \frac{dE}{2\pi}\, f_\alpha(E) \sum_{ij} \psi^*_{\alpha E}(t, i)\, A_{ij}\, \psi_{\alpha E}(t, j), \qquad (27)

where f_\alpha(E) is the Fermi function of the lead to which channel α belongs. In particular, the number n_i(t) of electrons on site i reads

n_i(t) = \sum_\alpha \int \frac{dE}{2\pi}\, f_\alpha(E)\, |\psi_{\alpha E}(t, i)|^2,

while the particle current I_ij(t) from site i to site j reads

I_{ij}(t) = -2 \sum_\alpha \int \frac{dE}{2\pi}\, f_\alpha(E)\, \mathrm{Im}\left[ \psi^*_{\alpha E}(t, i)\, H_{ij}(t)\, \psi_{\alpha E}(t, j) \right],

with the usual continuity equation

\partial_t n_i(t) + \sum_j I_{ij}(t) = 0.

The above equations suppose that the entire spectrum consists of the continuum of scattering states. It is also possible that some discrete set of bound states ψ_b with energy E_b is present [53-57] with evanescent contributions in the leads. In that case, the formula needs to be modified to account for those: 58

\langle \hat A(t) \rangle = \sum_\alpha \int \frac{dE}{2\pi}\, f_\alpha(E) \sum_{ij} \psi^*_{\alpha E}(t, i)\, A_{ij}\, \psi_{\alpha E}(t, j) + \sum_b f(E_b) \sum_{ij} \psi^*_b(t, i)\, A_{ij}\, \psi_b(t, j),

where the Fermi function f(E_b) refers to the central region.
As announced above, the scattering wavefunction formalism is equivalent to the more standard Keldysh approach. In particular, the retarded and lesser Green's functions can be computed from the scattering wavefunctions through simple integrals: 34

G^<_{ij}(t, t') = i \sum_\alpha \int \frac{dE}{2\pi}\, f_\alpha(E)\, \psi_{\alpha E}(t, i)\, \psi^*_{\alpha E}(t', j),
G^R_{ij}(t, t') = -i\, \theta(t - t') \sum_\alpha \int \frac{dE}{2\pi}\, \psi_{\alpha E}(t, i)\, \psi^*_{\alpha E}(t', j).

It is also possible to compute the scattering wavefunctions from the knowledge of the retarded Green's function. 34 Equations (23), (24), (25) and (27) form the closed set that tkwant solves.

IV. NUMERICAL APPROACH
This section describes the set of algorithms used in tkwant to solve the closed set of time-dependent equations (23), (24), (25) and (27).

A. Overview of the different subproblems
Tkwant consists of algorithms for the following subproblems:
(1) Definition of the model of Eq. (23). This is done with kwant, to which we refer for further information.
(2) Calculation of the initial scattering states of Eq. (24). This is also performed using the kwant library.
(3) Integration of the time-dependent Schrödinger equation in an infinite system for each of these states according to Eq. (25). This subproblem is solved using a mapping onto an effective non-hermitian finite problem, which we refer to as the "source-sink" algorithm. This finite problem is then integrated using standard schemes for differential equations.
(4) Calculation of the observables. This amounts to accurately estimating the integral (27) over the energy using an appropriate quadrature rule. This step is critical in ensuring a proper treatment of the Pauli principle. 34
(5) Band structure analysis. The calculation of the integral of subproblem (4) is actually performed in momentum k, not in energy E. A preliminary step consists in analyzing the band structure of each lead in order to perform the associated change of variable. This subproblem is solved using the package kwantSpectrum, 59 which we also introduce in this article.

B. Solving subproblem (3): integration of the time-dependent Schrödinger equation for an infinite system
In this section, we discuss how equation (25) is integrated. Since the wave functions ψ αE (t, i) are non-zero throughout the infinite system, a direct integration is not possible and one must first map the problem onto a finite problem. This is done in two steps using the "source" and "sink" algorithm developed in Ref. [37]. The resulting differential equations are then integrated using standard integration schemes.

Source algorithm
The source algorithm is a simple change of variables where one writes

\psi_{\alpha E}(t, i) = e^{-iEt} \left[ \psi^{st}_{\alpha E}(i) + \bar\psi_{\alpha E}(t, i) \right],

where \psi^{st}_{\alpha E} is the stationary scattering state of Eq. (24). The new wavefunction \bar\psi_{\alpha E}(t, i) encodes the deviation of the total wavefunction with respect to the stationary one. Inserting the above definition into Eq. (25), one arrives at

i \partial_t \bar\psi_{\alpha E}(t, i) = \sum_j \left[ H_{ij}(t) - E\, \delta_{ij} \right] \bar\psi_{\alpha E}(t, j) + S_{\alpha E}(t, i), \qquad S_{\alpha E}(t, i) = \sum_j W_{ij}(t)\, \psi^{st}_{\alpha E}(j).

In other words, \bar\psi_{\alpha E}(t, i) follows a Schrödinger equation with an additional source term S_{\alpha E}(t, i) that can be calculated from the stationary scattering state. In return, the initial value of the wavefunction is zero everywhere. Since the source term is localized inside the scattering region, only a finite region of the system needs to be considered. The phase shift e^{-iEt} in the definition of \bar\psi_{\alpha E}(t, i) is unimportant; it simply absorbs the fastest time dependence, which allows one to use significantly larger integration steps in the numerical integration.
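The change of variables is easy to verify numerically. The sketch below (plain NumPy on a small closed chain, so that no sink is needed; all parameters are illustrative, and this is not tkwant's implementation) integrates both the original Schrödinger equation and its source form with the same Runge-Kutta scheme and checks that they agree.

```python
import numpy as np

n = 8
h0 = -(np.eye(n, k=1) + np.eye(n, k=-1))
energies, vecs = np.linalg.eigh(h0)
e, psi_st = energies[0], vecs[:, 0]       # one stationary state of H0

def w_matrix(t):
    """Time-dependent perturbation W(t): an oscillating potential on site 0."""
    w = np.zeros((n, n))
    w[0, 0] = 0.3 * np.sin(2.0 * t)
    return w

def rk4_step(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, t_max = 0.005, 2.0
ts = np.arange(0.0, t_max, dt)

# (a) direct integration of i d/dt psi = H(t) psi, starting from psi_st
psi = psi_st.astype(complex)
for t in ts:
    psi = rk4_step(lambda t, y: -1j * ((h0 + w_matrix(t)) @ y), psi, t, dt)

# (b) source form: i d/dt psi_bar = (H(t) - E) psi_bar + W(t) psi_st,
#     starting from psi_bar = 0
psi_bar = np.zeros(n, dtype=complex)
for t in ts:
    psi_bar = rk4_step(
        lambda t, y: -1j * ((h0 + w_matrix(t) - e * np.eye(n)) @ y
                            + w_matrix(t) @ psi_st),
        psi_bar, t, dt)

# The two solutions must satisfy psi(t) = e^{-iEt} (psi_st + psi_bar(t)).
err = np.max(np.abs(psi - np.exp(-1j * e * t_max) * (psi_st + psi_bar)))
```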

Sink algorithm
For large simulation times, the wavefunction \bar\psi_{\alpha E}(t, i) penetrates deeply into the leads, so that a large finite system must be considered. One can indeed consider a finite chunk of lead of length v_max t / 2 (v_max: maximum speed in the system; the factor 2 accounts for the duration of both forward and backward propagation in the lead, since the wavepackets get reflected at the lead boundary) to guarantee that no spurious reflection at the end of the finite lead alters the results. It is important to note that even though one considers a finite system for the time-dependent propagation, the stationary wavefunctions \psi^{st}_{\alpha E}(i) are still computed for an infinite system, hence the results correspond to an exact solution of the infinite problem (within a given accuracy). The corresponding algorithm has an overall computational cost that asymptotically scales as t^2, although in many situations the cost is still dominated by the finite scattering region.
The "sink" algorithm developed in Ref. [37] allows one to overcome this t^2 scaling and reach a computational cost proportional to t. Since the leads are invariant by translation, the propagation inside the leads is ballistic: once a wave packet enters a lead, it never comes back to the scattering region and can be ignored. To take advantage of this fact, one can introduce a "sink" in a lead: a purely imaginary potential -i\Sigma(i) (with \Sigma(i) \ge 0) that absorbs any wavefunction that penetrates into the lead. As a result, the dynamics becomes non-hermitian,

i \partial_t \bar\psi_{\alpha E}(t, i) = \sum_j \left[ H_{ij}(t) - E\, \delta_{ij} - i\Sigma(i)\, \delta_{ij} \right] \bar\psi_{\alpha E}(t, j) + S_{\alpha E}(t, i). \qquad (36)

The design of the absorbing term \Sigma(i) must be done with care in order to preserve the original dynamics: the imaginary potential must be increased very smoothly inside the leads, as any abrupt variation of \Sigma(i) creates spurious back-scattering that sends parts of the wavepacket back into the scattering region and spoils the simulation. The concrete procedure used to design the absorbing potential is described in detail in Appendix B. Eq. (36) is the actual equation that is integrated in tkwant.
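The effect of a smooth absorbing potential can be illustrated on a simple chain (plain NumPy/SciPy; the polynomial profile and strength below are illustrative choices, not tkwant's actual Appendix B prescription): a wave packet launched toward the smoothly rising imaginary potential is absorbed, and the norm decays monotonically without noticeable back-reflection.

```python
import numpy as np
from scipy.linalg import expm

n, n_abs = 120, 40                 # chain length; absorbing cells at the end
h = -(np.eye(n, k=1) + np.eye(n, k=-1)).astype(complex)

# Smooth imaginary "sink": zero in the central region, rising polynomially
# inside the lead so as to avoid spurious back-scattering.
sigma = np.zeros(n)
ramp = np.arange(1, n_abs + 1) / n_abs
sigma[n - n_abs:] = 1.5 * ramp**4
h_eff = h - 1j * np.diag(sigma)    # non-hermitian effective Hamiltonian

# Right-moving Gaussian wave packet (momentum pi/2, group velocity 2)
# launched toward the absorbing region.
sites = np.arange(n)
psi = np.exp(-(sites - 30.0) ** 2 / 50.0 + 1j * (np.pi / 2) * sites)
psi /= np.linalg.norm(psi)

# Exact propagator of the (here time-independent) non-hermitian Hamiltonian.
u = expm(-1j * h_eff)              # one time unit per application
norms = [np.linalg.norm(psi) ** 2]
for _ in range(120):
    psi = u @ psi
    norms.append(np.linalg.norm(psi) ** 2)
# The norm stays close to 1 until the packet reaches the sink, then decays
# monotonically toward 0: the packet is absorbed instead of being reflected.
```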

C. Subproblem (4): Calculation of the physical observables
In this section, we discuss how tkwant solves the joint problem of performing the summation over conducting channels (α) and the integration over energy.
FIG. 3. Dispersion spectrum E_n(k) in the first Brillouin zone for lead 0 of the Mach-Zehnder interferometer from Ref. [32]. At zero temperature, only the lowest band with n = 0 (blue) has energies below the Fermi energy E_F that will contribute to the many-body state. The contributing energies with positive velocity v_0(k) ≥ 0 are highlighted in red. The energy and momentum boundaries of the contributing area, E_min = E_0(k_min) and E_F ≡ E_max = E_0(k_max), are needed to calculate many-body expectation values with Eqs. (27) and (38).
To understand the strategy for performing the integration and the summation of subproblem (4), it is very illuminating to look at the dispersion relation E_n(k) of the different leads. Kwant provides direct access to this dispersion relation. The package kwantSpectrum builds on this basic facility to provide a detailed analysis of the E_n(k) curves.
A typical example of a dispersion relation is shown in Fig. 3. This example corresponds to a quasi-one-dimensional lead in the presence of a perpendicular magnetic field. The low-energy bands correspond to the first Landau levels and are therefore very flat. Performing the summation and the integration over energy amounts to integrating from the bottom of the band to the Fermi level E_F (or up to E_F plus a few times the temperature at finite T) and keeping the contributions arising from "open" channels. An open channel corresponds to a value of k for which E_n(k) = E for some n and the corresponding velocity is positive. For example, in Fig. 3 there is a single open channel at E = E_F, two at E = +2.5 and none at E = −4. While such a direct integration over energy is possible, it suffers from serious difficulties. Indeed, close to the bottom of a band, the integrand, which contains the density of states, diverges. For a simple quadratic band opening E ∼ k^2, this results in a 1/\sqrt{E} integrable singularity. For the example of Fig. 3, the bottom of the band is extremely flat (Landau level) and the associated density of states approaches a Dirac function. This is extremely ill-adapted to quadrature methods. An example of the integrand in energy is shown in the top panels of Fig. 4 with a zoom on the right. The very sharp peak associated with the Landau level is very hard to resolve numerically.
In order to avoid these divergences and, more generally, to obtain smooth integrands, it is much more favorable to perform the integral in k-space. 35 To do so, one starts by analyzing the band structure E_n(k) in order to extract the intervals of integration [k_min,α , k_max,α]. Specific algorithms have been developed to perform this analysis (finding the bottom and top of the bands where v_n(k) = 0, ensuring continuity of the bands at band crossings, etc.). They correspond to subproblem (5) and are described in Appendix A. In our example of Fig. 3, there is a single interval [k_min,0 , k_max,0] (in red), but more intervals would appear as one increases the Fermi energy. Performing the integration in k introduces a Jacobian |dE_n/dk| = v_n that absorbs the divergences of the integrand in energy. The resulting formula for the calculation of an observable reads

\langle \hat A(t) \rangle = \sum_\alpha \int_{k_{min,\alpha}}^{k_{max,\alpha}} \frac{dk}{2\pi}\, v_\alpha(k)\, f_\alpha(E_\alpha(k)) \sum_{ij} \psi^*_{\alpha E_\alpha(k)}(t, i)\, A_{ij}\, \psi_{\alpha E_\alpha(k)}(t, j). \qquad (38)

An example of the corresponding integrand in k-space is shown in the lower panels of Fig. 4. These integrands are perfectly smooth, in contrast to their counterparts in E-space shown in the upper panels.
The last step, once all the momentum intervals are at hand, is to evaluate the corresponding integrals using quadrature rules. Tkwant uses two kinds of quadrature rules, with either a fixed number of points (Gauss-Legendre rules) or an adaptive number of points (Gauss-Kronrod rules 60,61 ). Both quadratures have the additional advantage that the integrand is not evaluated at the boundaries of the interval, where band openings lead to ill-defined behavior of the integrand (1/√E singularities for the integration in the energy domain).
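As an illustration with generic SciPy (not the tkwant internals), SciPy's QUADPACK-based quad routine implements exactly this kind of adaptive Gauss-Kronrod scheme, and it copes with an integrable 1/√E endpoint singularity precisely because no quadrature node falls on the interval boundary:

```python
import numpy as np
from scipy import integrate

# Adaptive Gauss-Kronrod (QUADPACK) integration of 0.5/sqrt(E) on [0, 1].
# The exact value is 1; the endpoint singularity at E = 0 is never sampled.
val, err = integrate.quad(lambda E: 0.5 / np.sqrt(E), 0.0, 1.0)
print(val)

# Fixed-order Gauss-Legendre nodes also lie strictly inside (-1, 1).
nodes, _ = np.polynomial.legendre.leggauss(20)
print(nodes.min() > -1.0 and nodes.max() < 1.0)   # True
```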
FIG. 4. Integrand I_α of the many-body observable. Upper panels: I_α(E) in the energy representation of Eq. (32). The divergence at the lower band gap E_min causes numerical inaccuracies, better visible in the zoom on the right. Lower panels: I_α(k) in the momentum representation of Eq. (38) at two different timesteps. I_α(k) is a smooth function everywhere inside the integration region. These integrands correspond to the electronic density in the Mach-Zehnder interferometer from Ref. [32]: contribution of lead 0, band n = 0, summed over all the sites of the scattering region. Integration bounds correspond to Fig. 3.

V. SOFTWARE ARCHITECTURE AND MAIN CONCEPTS
In this section, we describe how tkwant is organized. tkwant implements several concepts that provide a clean separation between the different subproblems and allow the package to be easily modified or extended. For instance, although tkwant's main focus is time-dependent nanoelectronic problems, it can also be used for simpler problems such as the propagation of a single-particle wave packet in an infinite or even finite system.
Tkwant has separate APIs for one-body problems and many-body problems. For each of these, it proposes a low-level interface that exposes all the mathematical objects used in the algorithms, and a high-level interface that provides additional functionality as well as heuristics to propose robust values of the simulation parameters (such as the imaginary potential or the number of quadrature points in the calculation of the integrals). The low-level API of both one-body and many-body problems has been designed to be compatible with, but independent from, kwant, while the high-level interface relies on kwant more heavily.

A. Solving one-body problems
To illustrate the one-body solvers, let us consider the simple problem of the propagation of a wavepacket in one dimension. This means we want to integrate the Schrödinger equation i ∂_t ψ(t, x) = −∂_x² ψ(t, x) with some initial condition, for example a Gaussian wave packet. The first step for such a simulation is to discretize the spatial variable x. This can be done automatically 62 or manually by approximating the ∂_x² operator with a three-point rule on an equidistant grid x_i = ia, where a is the discretization lattice constant. One arrives at a tight-binding model of the form of Eq. (25a) with the Hamiltonian matrix H_ij = (2δ_ij − δ_i,j+1 − δ_i,j−1)/a².
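This discretization can be checked in a few lines of plain NumPy (a sketch in units where hbar = 1 and a = 1, not the tkwant discretization code itself): on a periodic ring, plane waves are eigenstates of the tridiagonal matrix with the cosine dispersion E(k) = 2 − 2 cos k used below.

```python
import numpy as np

N, a = 200, 1.0
# Three-point discretization of -d^2/dx^2: onsite 2/a^2, hopping -1/a^2
H = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / a**2

# Close the chain into a ring so that plane waves are exact eigenstates
H_ring = H.copy()
H_ring[0, -1] = H_ring[-1, 0] = -1.0 / a**2

k = 2 * np.pi * 5 / N                    # an allowed momentum on the ring
psi = np.exp(1j * k * np.arange(N))      # plane wave e^{ikj}
E = 2.0 - 2.0 * np.cos(k)                # tight-binding dispersion (a = 1)
print(np.allclose(H_ring @ psi, E * psi))   # True
```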

Finite systems
The dynamics of the probability density |ψ(t, x)|² for a finite system of N_s = 400 sites is shown in the left panel of Fig. 5. The initial condition is a Gaussian wave packet centered at j_0 = 100 with momentum k = π/6. As the dispersion relation of the infinite chain is E(k) = 2 − 2 cos k, the wavepacket has an initial group velocity v(k) = ∂_k E = 1 (in units of lattice spacing a per time unit) towards the right of the system. As the system is finite, the wave packet is reflected at the boundaries and displays a ping-pong-like dynamics while at the same time it spreads.
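The dynamics just described can be reproduced with a few lines of NumPy/SciPy (a sketch using the parameters quoted above and a brute-force matrix exponential; tkwant's solvers, used in the listings below, are far more efficient and general):

```python
import numpy as np
from scipy.linalg import expm

N, j0, sigma, k0 = 400, 100, 10.0, np.pi / 6
H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# Gaussian wave packet centered at j0 with mean momentum k0
j = np.arange(N)
psi = np.exp(-(j - j0)**2 / (2 * sigma**2)) * np.exp(1j * k0 * j)
psi /= np.linalg.norm(psi)

U = expm(-1j * H)               # exact propagator for one time unit
for _ in range(50):             # evolve to t = 50
    psi = U @ psi

print(np.linalg.norm(psi))          # unitary evolution: norm stays ~1
print(np.sum(j * np.abs(psi)**2))   # center has moved right (v = 2 sin k0 = 1)
```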
Listing 2 uses the tkwant low-level interface, namely the class onebody.WaveFunction, to obtain the data of the left panel of Fig. 5.

1 from tkwant import onebody
2 import numpy as np
3 import scipy
4 import matplotlib.pyplot as plt

Listing 3 performs the same task as Listing 2 but uses kwant 29 for the construction of the Hamiltonian matrix and for the calculation of the density. For such a simple example, using kwant is superfluous. However, in more complex situations (time-dependent systems of various shapes, with different lattices or topologies, etc.) it becomes very handy. The method evaluate() calculates the expectation value of an operator. The class onebody.WaveFunction automatically interprets any argument named time as the time argument and attributes the corresponding Hamiltonian elements to the W(t) matrix.
1 from tkwant import onebody
2 import kwant
3 import numpy as np
4 import matplotlib.pyplot as plt

Listing 3. This Python code is similar to Listing 2 except that the Hamiltonian matrix and the density operator are defined using kwant 29 . The object syst is the kwant object that represents the finite system. It contains the Hamiltonian matrix and can be used by tkwant's one-body solver onebody.WaveFunction. Running this script takes only a few seconds on a desktop computer.

Infinite systems
The dynamics of the probability density |ψ(t, x)|² for an infinite system is shown in the right panel of Fig. 5. In contrast to the previous example, the wavepacket is not reflected at the boundary of the system but continues its propagation indefinitely. The finite system here only corresponds to the window that we are monitoring; the physical system is strictly infinite and translationally invariant.
The corresponding code is shown in Listing 4. The differences from Listing 3 are highlighted in blue. The chain is extended to positive and negative infinity by attaching semi-infinite leads to the kwant system. The Hamiltonian matrix of the infinite (or open) system has a block structure similar to that of Eq. (2). Note that we have to provide special boundary conditions (imaginary potential, cf. Sec. B) to the tkwant solver (line 41) to deal with infinite systems.
1 from tkwant import onebody, leads
2 import kwant
3 import numpy as np
4 import matplotlib.pyplot as plt

Listing 4. Python code to calculate the time evolution of the probability density |ψ(t, i)|² for an infinite one-dimensional chain (right panel of Fig. 5). The object syst is the kwant system that represents the infinite system. It has leads on both sides of the finite scattering region that extend the chain to ±∞. Additional boundary conditions must be provided to the onebody.WaveFunction solver for a system with leads. New lines of code (in blue) and comments (in gray) are highlighted to show the difference from Listing 3. Running this script takes only a few seconds on a desktop computer.

Infinite systems with initial scattering states
In tkwant, special support exists for the simulation of infinite systems whose initial state is a scattering state of the system. The scattering states are obtained from the numerical solution of Eq. (24), and this step is conveniently performed with kwant. For the one-dimensional chain, the scattering states have the simple form of plane waves e^{ikj}. In the presence of a time-dependent perturbation, scattering states immediately become more complex, as a reflected wave must be added in the left lead while the wave on the right gets multiplied by a transmission amplitude and the wave in the scattering region loses its plane-wave structure. Scattering-state initial conditions are somewhat special in two ways. First, scattering states are eigenstates of Ĥ_0, hence a time-dependent perturbation is needed to observe a nontrivial time evolution. Second, these initial conditions are defined everywhere in the infinite system (hence the appearance of the source terms in the Schrödinger equation, see Section IV B), as opposed to just inside the scattering region as is the case for a simple wave packet.
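The structure just described (incident plus reflected wave on the left, transmitted wave on the right) can be made concrete on a toy model: a hedged NumPy sketch with a hypothetical on-site impurity of strength eps at site 0, solving the two matching conditions and checking unitarity. This is an illustration of the scattering-state structure, not a tkwant computation.

```python
import numpy as np

# Scattering state of a 1d chain with one on-site impurity eps at site 0:
#   psi_j = e^{ikj} + r e^{-ikj}  (j <= 0),   psi_j = t e^{ikj}  (j >= 0).
k, eps = np.pi / 3, 0.7

# Two conditions on (r, t): continuity at site 0,
#   (1 + r) - t = 0,
# and the tight-binding equation at site 0, which reduces to
#   -e^{ik} r + (eps + e^{-ik}) t = e^{-ik}.
A = np.array([[1.0, -1.0],
              [-np.exp(1j * k), eps + np.exp(-1j * k)]])
b = np.array([-1.0, np.exp(-1j * k)])
r, t = np.linalg.solve(A, b)

print(abs(r)**2 + abs(t)**2)   # current conservation: ~1.0
```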
The high-level class onebody.ScatteringStates handles the calculation of these initial conditions and of the associated source terms, and provides robust automatic heuristics for setting up proper boundary conditions in the leads (imaginary potential). Listing 5 shows an example of the API of onebody.ScatteringStates. An instance of onebody.ScatteringStates is an iterable object that returns onebody.WaveFunction objects upon iteration.

Listing 5. Python code snippet to set up a one-body solver for the time-dependent Schrödinger equation for an infinite system that starts in an initial scattering state. To use this snippet, one should replace lines 38-45 in Listing 4 by the above lines. syst is a kwant system with leads, energy is the energy of the state, and lead=0 refers to the left lead. Note that the boundary conditions are built automatically on the fly.

B. Solving the many-body problem
Let us now turn to the many-body solver of tkwant. Solving the many-body Schrödinger equation with tkwant requires several steps as described in Sec. IV. Tkwant provides two interfaces for solving the many-body problem.
The first, class manybody.WaveFunction, provides a low-level interface for the problem. Its main task is to handle the evolution of multiple scattering states (in parallel on multi-core computers) and perform the integration over energy using a static number of scattering states. When using manybody.WaveFunction, the different preprocessing steps must be handled manually. They consist of
• the calculation of the dispersion relation E_n(k) for all leads,
• the analysis of E_n(k) to obtain the k-intervals for the integration,
• the calculation of the imaginary potential in the leads,
• and the calculation of the initial scattering states at t = 0.
The other class, manybody.State, provides a high-level interface that offers additional functionality: it uses heuristics to automatically handle the preprocessing steps, and it implements an adaptive integration scheme that allows one to refine the integration by adding new points on the fly. Note that in what follows, we concentrate on the treatment of the energy/momentum integration over the continuum part of the spectrum. Bound states, if present, must also be accounted for. We refer to the tkwant documentation for a description of the corresponding API. 28

Low-level API
Listing 6 showcases the usage of the low-level interface, supposing that a kwant system syst has already been constructed. Line 8 calculates and analyzes the dispersion relations of the different leads. Line 12 sets up the Fermi functions of the different electrodes. Line 13 calculates the maximum energy E_max of the energy integration (the energy above which the Fermi functions are effectively zero). Line 16 sets up an imaginary potential in the leads adapted to their actual spectrum. Lines 19-21 set up the "quadrature intervals" that will be used for the integration. A quadrature interval is an interval in k to which a quadrature rule (here Gauss-Legendre) is associated, along with the order with which this rule will be used (here 20, meaning that 20 points will be used per interval). The function split_intervals allows one to split one interval into several subintervals in order to obtain a higher accuracy of the integration. Line 24 sets up the different "tasks", i.e. the different one-body problems that must be integrated. Line 25 calculates the initial condition for each task. All this information is gathered (line 28) by the manybody.WaveFunction instance that is in charge of evolving the different one-body problems and performing the integration. Note that at this level, the integration is performed on a fixed number of predefined points.
The core routines of manybody.WaveFunction handle the different tasks in parallel using the Message Passing Interface (MPI) 64 framework. As the problem is embarrassingly parallel, it easily scales to thousands of cores. In addition to saving computing time, the distribution of tasks in a parallel execution also lowers the memory footprint per core, so that tkwant simulations are usually not limited by the amount of memory available. The time-resolved simulation of a system whose static kwant simulation runs on a single core typically requires around one hundred cores or more if comparable computation times are desired.

1 from tkwant import leads, manybody as mb
2 import kwantspectrum
3 from functools import partial as part
4
5 dens_op = kwant.operator.Density(syst)
6
7 # Calculate the spectrum E(k) for all leads.
8 spectra = kwantspectrum.spectra(syst.leads)
9
10 # Estimate the cutoff energies from T, \mu and f(E).
11 # All states are effectively empty above emax.
12 occupations = mb.lead_occupation(chemical_potential=0, temperature=0)
13 emin, emax = mb.calc_energy_cutoffs(occupations)
14
15 # Define boundary conditions.
16 bdr = leads.automatic_boundary(spectra, tmax, emin=emin, emax=emax)
17
18 # Calculate the k intervals for the quadrature.
19 intvl_type = part(mb.Interval, order=20, quadrature='gausslegendre')
20 intervals = mb.calc_intervals(spectra, occupations, intvl_type)
21 intervals = mb.split_intervals(intervals, number_subintervals=10)
22
23 # Calculate all one-body scattering states at time t = 0.
24 tasks = mb.calc_tasks(intervals, spectra, occupations)
25 psi_init = mb.calc_initial_state(syst, tasks, bdr)

Listing 6. Python code snippet to build up the many-body wavefunction manually. The different steps reflect the numerical algorithm, and the comments follow the preprocessing steps described in Sec. V B. Note that the number of interval splits (number_subintervals=10 in the example) is highly system-dependent.
The low-level interface has been designed to be very modular so that it can be adapted or extended to new situations easily. The convergence of the integral of Eq. (38) must be checked manually. Increasing its accuracy is possible by using quadrature rules of higher order and by splitting the initial intervals (such as the one shown in Fig. 3) into subintervals. We have found empirically that using 10-20 points per subinterval is usually optimal, while using higher orders often brings little benefit. The number of subintervals must then be increased until the result converges. However, this number depends very much on the particular system under study. The main advantage of the high-level interface described below is adaptive refinement of the integral.
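The manual convergence procedure can be mimicked with a plain NumPy sketch (a hypothetical smooth oscillatory integrand standing in for I_α(k); not tkwant's API): double the number of subintervals of a fixed-order Gauss-Legendre rule until the result stops changing.

```python
import numpy as np

def composite_gauss(f, a, b, n_sub, order=20):
    # Fixed-order Gauss-Legendre rule applied on n_sub equal subintervals
    x, w = np.polynomial.legendre.leggauss(order)
    edges = np.linspace(a, b, n_sub + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        y = 0.5 * (hi - lo) * x + 0.5 * (lo + hi)
        total += 0.5 * (hi - lo) * np.sum(w * f(y))
    return total

f = lambda k: np.cos(20.0 * k) ** 2   # smooth but oscillatory test integrand
exact = np.pi / 2.0                   # exact integral of cos^2(20 k) on [0, pi]

n_sub, prev = 1, composite_gauss(f, 0.0, np.pi, 1)
for _ in range(10):
    n_sub *= 2
    cur = composite_gauss(f, 0.0, np.pi, n_sub)
    if abs(cur - prev) < 1e-10:       # result stopped changing: converged
        break
    prev = cur

print(n_sub, abs(cur - exact))        # converges after a few splittings
```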

High-level API
The class manybody.State forms the high-level interface for the many-body problem. It takes care of all the preprocessing steps automatically, so that setting up a simulation becomes as simple as

1 from tkwant import manybody as mb

Listing 7. Python code snippet to compute a many-body wavefunction in an automatic way. This code should give results similar to the code of Listing 6, but an additional adaptive quadrature helps to ensure the numerical accuracy.
While being slightly less flexible than the low-level approach, it is more convenient and sufficient in most cases.
The main additional facility provided by manybody.State is the ability to dynamically adapt the number of points used to perform the energy/momentum integral. The function refine_interval() on line 11 of Listing 7 estimates the error in the integration and then proceeds to split the integration interval into subintervals if necessary. Line 13 shows the corresponding estimate of the integration error using the state.estimate_error() method. A global adaptive strategy, based on Quadpack's algorithm 60 is used for the refinement cycle and the error estimate.
The adaptive calculation of the integral is a non-trivial and computationally intensive problem. Indeed, here the integrand depends on time. The regions in k-space that dominate the integral at a given time might be different from the regions that dominate at a later time. Furthermore, anytime the algorithm decides that more points are necessary in a certain part of k-space to achieve a given accuracy, these new points must be evolved all the way from t = 0 to the current time of the simulation. From a computational perspective this is suboptimal as it interferes with parallelization (computing cores must wait until the new tasks "catch up"). To minimize this effect, we found empirically that it is best to perform the refinement early in the simulation with a slightly smaller error tolerance than the ultimately targeted one.

C. Overall architecture and code design
The design of Tkwant is centered around the four classes that have already been introduced above. They implement, respectively, the one-body/many-body states of the system at low/high level of abstraction. Functions exist to help with the various pre-calculations that arise at the beginning of a simulation. Fig. 6 shows the relation between the main tkwant classes.
The four solver classes provide at least two methods: an evolve() method to evolve the wavefunction(s) forward in time and an evaluate() method to calculate expectation values of an operator. Extending the functionality of the solvers can be achieved by providing classes with a similar interface ("duck typing"). We find this approach preferable to inheritance mechanisms.
Additional methods, such as adaptive refinement, are present in the "high-level" classes, which are more specialized. The public attributes follow a similar logic. While all solver classes have at least one time attribute, which holds the current time of the state, additional attributes such as the lead or mode index are already a specialization to a specific use case. The overall data flow diagram of the high-level solver manybody.State is shown in Fig. 7 with the various steps of preprocessing, evolution, and on-the-fly refinement of the integral. Array-valued numerical data, especially in performance-critical parts, are usually represented in the form of NumPy 33 arrays within tkwant. For more complex and heterogeneous data, such as the sequence of quadrature intervals, tkwant uses flat lists of data classes. By data class, we mean a class without methods, which is only used to store data as attributes. This is practical because the data is easily readable by humans and can be manipulated without having to care about side effects from stateful objects.
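A minimal sketch of this pattern using Python's dataclasses module (the field names below are hypothetical illustrations, not tkwant's actual Interval attributes):

```python
from dataclasses import dataclass

# "Data class" pattern as described in the text: attributes only, no methods,
# instances stored in flat lists. Field names are hypothetical.
@dataclass
class Interval:
    kmin: float
    kmax: float
    lead: int
    band: int
    order: int = 20

intervals = [Interval(0.0, 1.2, lead=0, band=0),
             Interval(0.4, 2.0, lead=1, band=0, order=10)]

# Plain data is easy to inspect and manipulate without hidden state.
total_points = sum(iv.order for iv in intervals)
print(total_points)   # 30
```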

VI. A REAL-LIFE APPLICATION: PULSE PROPAGATION IN A GRAPHENE QUANTUM BILLIARD
We end this article with a real-life example of tkwant usage. The device is a small graphene sample of chaotic shape, connected to a semi-infinite graphene ribbon. An electrostatic gate deposited on top of the system is pulsed and one follows the associated ripple of density that propagates inside the sample.
Snapshots of the electron density ⟨c†_i(t)c_i(t)⟩ are shown in Fig. 8 along with a sketch of the system (leftmost panel). One first observes a clear ballistic propagation of the ripple, followed by a more complex speckle-like interference pattern as the waves get reflected by the boundaries of the billiard. Eventually, at very long times, the ripple leaves the sample entirely through the semi-infinite ribbon.
The typical workflow of a tkwant project starts with an analysis of the static properties, such as the dispersion relation of the leads or the (energy-dependent) conductance matrix of the system. This static analysis allows one to estimate and tune the relevant timescales of the system and can be done, for example, with kwant. Here, we skip this part for brevity and focus on the time-dependent simulations.
The complete Python script to perform this numerical simulation and to plot the result is given in Listing 8. The structure of the script is quite similar to the first example in Listing 1, and most of the lines are again related to the construction of the system with kwant. The code in Listing 8 can optionally be run in parallel on several cores to speed up the computation. Tkwant is parallelized with MPI 64 . A few additional lines (related to the user-defined function am_master()) are needed to redirect all output to the master MPI process "rank zero", which is responsible for plotting the data.

1 import tkwant
2 import kwant
3 import numpy as np
4 import functools as ft

15 def circle(pos, x0, y0, r):
16     x, y = pos
17     return (x - x0)**2 + (y - y0)**2 < r**2
18
19 def electrode_shape(pos):
20     x, y = pos
21     upper_arc = circle(pos, -2.7, 4.8, 6.8)
22     return (-4 < x < -2) and (5 < y < 15) and upper_arc
23
24 def lead_shape(site):
25     x, y = site.pos

Listing 8. Python code to simulate the electron density of the graphene dot after perturbation with a pulse. Running the code generates the density snapshots shown in Fig. 8. Note that the code can be run in parallel using MPI. Running the code on 48 cores (AMD Opteron 6176 at 2.3 GHz) takes about 2 hours.

VII. CONCLUSION AND OUTLOOK
Recent years have seen a radical shift in the way the scientific community approaches numerical simulations. First, open-source software, a necessary condition for an efficient distribution of both old and novel algorithms, has become increasingly popular. Second, the monolithic approach to scientific programming is progressively yielding to versatile libraries, often in high-level languages such as Python, that facilitate extensions and the combination of different packages. Scientific projects involving computer simulations are increasingly expected to promote transparency and reproducibility by publishing the code that was used to produce the data.
The authors of this work also subscribe to an approach that could be described as "computer-assisted theory", where algorithms follow closely the theoretical approach that one would use in an analytical calculation. In particular, tkwant exposes all the mathematical objects of the relevant theory (e.g. Green's functions, wave functions, dispersion relations, etc.) and explicitly solves a given mathematical problem. The application to specific physical problems is left to the end user. This is in contrast to the "numerical experiments" approach, where the modeling and the associated stream of approximations are often partly implicit.
In this article, we have presented the package tkwant for time-dependent quantum transport. The design of tkwant itself aims to make the entry cost for new users as low as possible. Exhaustive documentation is available online, including a tutorial, additional examples, and complete reference documentation. 28 The authors hope that tkwant will be used with success by many research groups.
Extensions to Tkwant exist that are not yet included in the official release. One of them extends the non-interacting model to a time-dependent mean-field model, which already goes beyond the random phase approximation. This extension has been used in Ref. [46] to describe how charge excitations get renormalized into plasmons in the presence of electron-electron interactions (Luttinger liquids). It is used in Ref. [39] to study the effect of an electromagnetic environment on the properties of superconducting Josephson junctions. More extensions could be envisioned, such as the inclusion of Lindblad-like terms in the dynamics or a treatment of correlations beyond mean field using e.g. the novel quantum quasi-Monte Carlo technique 65,66 . It would also be very interesting to combine tkwant with a proper treatment of electrostatics such as the one performed in Ref. [67].

APPENDIX A: Calculation and analysis of the lead dispersion relation

In kwant and tkwant, the leads are semi-infinite systems that are invariant by translation. They are described by a unit cell containing N sites. This unit cell is repeated up to infinity. A lead is characterized by two N × N matrices: the Hamiltonian matrix inside a unit cell, H_0, and the hopping matrix V that connects one unit cell to the next. These two matrices can be retrieved directly with kwant 29 . In this appendix, we discuss the underlying principles of a small package, kwantSpectrum 59 , that calculates and analyzes the lead dispersion relation.

Problem formulation
Introducing the matrix H(k) = H_0 + V e^{ik} + V† e^{−ik}, the dispersion relation of the lead is simply given by diagonalizing H(k). While diagonalizing such a matrix for a set of values of k is straightforward numerically, such a direct approach has an important drawback. The problem is best shown on a simple example. The left panel of Fig. 9 shows a plot of the dispersion relations of a simple three-band model. While the three bands are smooth functions of k, the numerical diagonalization treats each value of k independently. Energies for a given value of k are typically returned ordered from smallest to largest, so that the smooth bands are only known up to a permutation. This is apparent from the wrong coloring of the bands in the left panel of Fig. 9.
Quadrature techniques for integration rely, however, on smooth integrands. The task of kwantSpectrum is to perform a "smooth dispersion relation reconstruction", i.e. for each value of k, to find the permutation that goes from the left panel of Fig. 9 to its middle panel.
KwantSpectrum returns a precise interpolant of the smooth bands that can be used to analyze the dispersion relations and define the proper integration intervals in k-space. The resulting plot is shown in the middle panel of Fig. 9.
The k-integration in tkwant is performed on bands and values of k that satisfy E_α(k) ≤ E_F and positive velocity ∂E_α(k)/∂k ≥ 0. Calculating the corresponding intervals of integration (shown in bold in the right panel of Fig. 9) requires the knowledge of various special points. The interpolation of kwantSpectrum provides direct access to these special points: the maximum and minimum of each band, the inflection points (where the velocity is maximal), and the solutions of E_α(k) = E_F; see the right panel of Fig. 9. Another application of kwantSpectrum is the unfolding of the spectrum from the first Brillouin zone to a larger zone in k-space.
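This kind of special-point analysis can be sketched with generic SciPy interpolants (a toy single band E(k) = 2 − 2 cos k; kwantSpectrum's own interpolation is described in the rest of this appendix): band extrema are roots of the interpolated velocity, and Fermi points are roots of E(k) − E_F.

```python
import numpy as np
from scipy.interpolate import CubicSpline

E_F = 1.0
k = np.linspace(-np.pi, np.pi, 201)
band = CubicSpline(k, 2.0 - 2.0 * np.cos(k))   # interpolant of the band E(k)

# Band extrema: zeros of the interpolated velocity dE/dk
extrema = band.derivative().roots(extrapolate=False)
# Fermi points: solutions of E(k) = E_F
crossings = CubicSpline(k, band(k) - E_F).roots(extrapolate=False)

print(extrema)     # includes k ~ 0 (the band minimum) and the zone boundaries
print(crossings)   # close to +-pi/3, since 2 - 2 cos(pi/3) = 1 = E_F
```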
The rest of this appendix briefly describes the kwantSpectrum API and then proceeds to describe the algorithm used for the smooth dispersion relation reconstruction.

KwantSpectrum package
Listing 9 shows the code used to generate the right panel of Fig. 9. Lines 6-14 define a lead using kwant. Kwant automatically handles the translational symmetry, i.e. it automatically constructs the two matrices H_0 and V that are needed for the calculation. The actual computation of the spectrum (matching algorithm and interpolation) is performed in line 18. The function kwantspectrum.spectrum() computes the interpolant of the different bands. It returns an object that provides various methods for calculating the intervals of integration and the special points that are used in the rest of the script.

Overview of the reconstruction algorithm
Let us start by describing the building blocks of the algorithm we use for reconstructing the smooth dispersion relations. They are as follows:

a. Matching
Consider an interval [k_l, k_r].
• First, we calculate the dispersion relation E_α,l = E_α(k_l) (E_α,r = E_α(k_r)) at momentum k_l (k_r) by diagonalizing Eq. (A2). We also obtain (as explained below) the velocities v_α,l = ∂E_α(k_l)/∂k and v_α,r = ∂E_α(k_r)/∂k at the same points.
• Second, we construct a cost matrix M_αβ that measures how likely band β at point k_r is to be assigned to band α at point k_l. The underlying idea for the construction of the cost matrix is straightforward: given E_α,l and its derivative v_α,l, we make a linear extrapolation of the band α to point k_r. The resulting value E_α,l + (k_r − k_l)v_α,l is compared to the values of the different bands β at k_r.
A possible choice for the cost matrix is therefore M_αβ = [E_α,l + (k_r − k_l)v_α,l − E_β,r]². The actual form of M_αβ that we use is more robust, as it also takes advantage of our knowledge of v_α,r and is fully symmetric. The detailed form of the cost matrix will be given below in Eq. (A8). For a perfect match of α and β, the corresponding element M_αβ vanishes.
• Third, once the cost matrix has been constructed, we are back to a standard "linear assignment problem": one must find the permutation P of the index β that brings the vanishing elements of the cost matrix onto the diagonal, i.e. we look for the permutation P that minimizes Σ_α M_α,P(α). For this problem, we use the "Hungarian method" 68 as implemented in the SciPy 69 package.
• Once the matching has been done, we construct a cubic interpolation E^lr_α(k) of the different smooth bands inside [k_l, k_r]. The function E^lr_α(k) is a polynomial of degree three that satisfies E^lr_α(k_l) = E_α,l, E^lr_α(k_r) = E_P(α),r, ∂E^lr_α(k_l)/∂k = v_α,l and ∂E^lr_α(k_r)/∂k = v_P(α),r. The precise form of the interpolant is given below in Eq. (A10).
• An important part of the algorithm is the evaluation of the quality of the interpolant and of the validity of the matching. We introduce the error δ of the interpolant. To estimate δ, we first split the interval [k_l, k_r] in two and perform the matching and interpolation on the two subintervals [k_l, k_c] and [k_c, k_r], where k_c = (k_l + k_r)/2. δ measures the difference between the interpolant E^lr_α(k) and the two subinterpolants E^lc_α(k) and E^cr_α(k). Its precise definition is given below in Eq. (A17).
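The matching step above can be illustrated with SciPy's implementation of the Hungarian method (two hypothetical linearly crossing bands, using the simplified cost matrix rather than the symmetric form of Eq. (A8)):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two toy bands that cross inside [k_l, k_r]: E_a(k) = 2k and E_b(k) = 1 - 2k.
k_l, k_r = 0.0, 0.5
E_l = np.array([0.0, 1.0])    # band energies at k_l (bands a, b)
v_l = np.array([2.0, -2.0])   # band velocities at k_l
E_r = np.array([0.0, 1.0])    # eigenvalues at k_r, sorted: [E_b(k_r), E_a(k_r)]

# Simplified cost matrix: squared mismatch of the linear extrapolation to k_r
extrap = E_l + (k_r - k_l) * v_l
M = (extrap[:, None] - E_r[None, :]) ** 2

row, perm = linear_sum_assignment(M)   # Hungarian method
print(perm)   # [1 0]: band a continues as the upper eigenvalue, b as the lower
```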
c. Overall adaptive algorithm.
The overall algorithm works as follows. We start with k_l = −π and k_r = +π and apply the matching algorithm and interpolation on the interval [−π, +π]. The interval is then split in two for the evaluation of the error δ. If δ is smaller than a preset tolerance level ε, the algorithm stops. Otherwise, the same procedure is applied to the two subintervals [−π, 0] and [0, +π]. One proceeds recursively by dividing each subinterval for which the error δ lies above the tolerance threshold ε. When δ < ε for each interval, the recursive splitting stops. Note that, through the quality of the interpolant, the tolerance ε also controls the validity of the matching. Indeed, the cubic interpolant does not converge if the underlying function has discontinuous derivatives.
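The recursive bisection can be sketched generically with SciPy (a toy cosine band and a hypothetical tolerance; the actual kwantSpectrum error measure is the integral criterion of Eq. (A17), here replaced by a simple maximum deviation):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def hermite(f, df, a, b):
    # Endpoint-matched cubic interpolant of f on [a, b]
    return CubicHermiteSpline([a, b], [f(a), f(b)], [df(a), df(b)])

def refine(f, df, a, b, tol):
    """Bisect [a, b] recursively until the coarse interpolant agrees
    with the two half-interval interpolants to within tol."""
    c = 0.5 * (a + b)
    coarse = hermite(f, df, a, b)
    left, right = hermite(f, df, a, c), hermite(f, df, c, b)
    kl, kr = np.linspace(a, c, 11), np.linspace(c, b, 11)
    delta = max(np.max(np.abs(coarse(kl) - left(kl))),
                np.max(np.abs(coarse(kr) - right(kr))))
    if delta < tol:
        return [(a, b)]
    return refine(f, df, a, c, tol) + refine(f, df, c, b, tol)

E = lambda k: 2.0 - 2.0 * np.cos(k)
v = lambda k: 2.0 * np.sin(k)   # dE/dk, known exactly for this toy band
intervals = refine(E, v, -np.pi, np.pi, 1e-6)

print(len(intervals))   # many subintervals, finer where the band curves most
```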
To understand the role of the tolerance parameter ε, let us consider an extreme (yet perfectly physical) scenario where two bands almost cross, but there is a small avoided crossing ∆ ≪ 1 between the two bands. The Hamiltonian reads H(k) = (k − k_0)σ_z + ∆σ_x (with σ_x and σ_z Pauli matrices), so that the two bands are E_±(k) = ±√((k − k_0)² + ∆²). An example of the matching algorithm for this model is shown in Fig. 10 for two values of the tolerance ε. When ε > ∆, the algorithm ignores the small avoided crossing (left panel). When ε < ∆, the algorithm is sensitive to the avoided crossing and labels the bands accordingly (right panel).

Cost matrix
To evaluate the cost matrix in an interval [k_l, k_r], we use two different linear extrapolations of the spectrum, starting from the left and right points respectively. The linear extrapolation from the left is E^l_α(k) = E_α,l + (k − k_l)v_α,l, while the extrapolation from the right is E^r_β(k) = E_β,r + (k − k_r)v_β,r. Using these two approximations, the cost matrix is simply defined as the average over the interval of the square of the difference between the two approximations, M_αβ = (k_r − k_l)^{-1} ∫ dk [E^l_α(k) − E^r_β(k)]². Performing the integration, one arrives at the closed form of Eq. (A8).

Cubic Interpolation
We use a piecewise cubic Hermite interpolation in each of the intervals [k_l, k_r]. Piecewise cubic Hermite interpolation has the advantage that the function and the first derivative of the interpolant are exact on the boundaries k_l and k_r. Moreover, these interpolations provide a local error estimate in each interval (in contrast to a global one, as would be the case for splines).
The interpolant takes the form of Eq. (A10). To estimate the error of the interpolation E^lr_α(k) in the interval [k_l, k_r], we construct the two interpolants E^lc_α(k) and E^cr_α(k), with k_c = (k_l + k_r)/2, and compute the average of the square of the differences between the two interpolants (the factor 105 in the definition is there purely for convenience). To perform each of these integrals, let us remark that they amount, respectively, to calculating the variance of a cubic interpolant with zero value and derivative on the left (right), while the right (left) values of the interpolant are given. Performing the integrals, we arrive at the closed form of Eq. (A17). It is important to notice that this error, which consists of a weighted sum of the deviations of the value and of the derivative at the middle point k_c, is much more robust than an estimate that would include only one of these two quantities. Error estimates of this kind have been used in the context of quadrature rules 70,71 .
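Such an endpoint-matched cubic can be built directly with SciPy's CubicHermiteSpline (a sketch on a toy cosine band, not the kwantSpectrum internals): the interpolant reproduces the band energy and velocity exactly at both ends and deviates only in the interior.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

k_l, k_r = 0.2, 0.9
E = lambda k: 2.0 - 2.0 * np.cos(k)   # toy band
v = lambda k: 2.0 * np.sin(k)         # its velocity dE/dk

# Cubic Hermite interpolant matching E and dE/dk at both endpoints
interp = CubicHermiteSpline([k_l, k_r], [E(k_l), E(k_r)], [v(k_l), v(k_r)])

k = np.linspace(k_l, k_r, 5)
print(np.max(np.abs(interp(k) - E(k))))   # small interior interpolation error
```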

Derivatives of the energy spectrum
We end this appendix by summarizing the basic results of perturbation theory that we use to calculate the first and second derivatives of E_α(k) with respect to k. Although only the first derivative is used in the matching algorithm (attempts to use the second derivative have been found to be less robust), the second derivative will be used in the next appendix for the calculation of the effective mass needed for setting the imaginary potential. Introducing the eigenvectors |α(k)⟩ of H(k), with H(k)|α(k)⟩ = E_α(k)|α(k)⟩, first-order perturbation theory (the Hellmann-Feynman theorem) gives

∂E_α/∂k = ⟨α(k)| ∂H/∂k |α(k)⟩,

while second-order perturbation theory gives

∂²E_α/∂k² = ⟨α(k)| ∂²H/∂k² |α(k)⟩ + 2 Σ_{β≠α} |⟨β(k)| ∂H/∂k |α(k)⟩|² / (E_α(k) − E_β(k)).

APPENDIX B: Heuristic for setting the absorbing imaginary potential

Problem formulation
Since we consider leads that are invariant by translation, any wave packet that enters a lead will propagate ballistically towards infinity and therefore never come back to the scattering region. In tkwant, we use an imaginary potential Σ_l(a) inside the lead l to absorb these wave packets so that they do not create spurious signals in the simulations. This imaginary potential depends on the cell a inside the lead. The addition of the imaginary potential amounts to the change

Ĥ_l → Ĥ_l − i Σ_l(a)

in the lead Hamiltonian, where the term −i Σ_l(a) acts on-site in cell a. The corresponding "sink" algorithm was discussed in Ref. [37]. The present discussion expands on the original algorithm and adds simple heuristics for the choice of the Σ_l(a) function.
The choice of Σ_l(a) is an optimization problem where one seeks to minimize the amount of signal reflected into the scattering region. Two effects work in opposite directions: on the one hand, one wants a large imaginary potential so that the waves get absorbed before they reach the end of the system; on the other hand, any abrupt increase of Σ_l(a) creates backscattering that sends spurious waves back into the scattering region. Hence, one aims to construct an imaginary potential that rises very smoothly to avoid these reflections.
In practice, the spurious reflection due to the variation of Σ_l(a) is dominated by the long-wavelength part of the spectrum. Indeed, when the wavelength λ = 2π/k of the wave is large, any variation of Σ_l(a) looks abrupt. On the other hand, the corresponding waves are typically very slow. Hence, if a spurious reflection is created, it typically takes a long time to reach the scattering region. The strategy used in tkwant is to split the lead region into two sub-regions: the absorbing zone, where the imaginary potential is applied, and a buffer zone; see Fig. 11 for a sketch. The sizes of the respective sub-regions are optimized to guarantee, for a given level of precision, that the spurious reflections do not have time to reach the scattering region within the duration of the simulation. This is a very conservative "safe" mode of tkwant. Experienced users can use less stringent conditions but need to check the accuracy of the results manually.

FIG. 11. Sketch of the imaginary potential used in the leads. A finite portion of the lead is included in the time-dependent simulation (the initial calculation of the scattering states is done with infinite leads). This finite portion is split into a buffer zone (blue) and an absorbing zone (yellow) where the imaginary potential is slowly raised.
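The effect of the buffer/absorbing-zone construction can be illustrated on a toy 1D chain. This is our own minimal model, not tkwant's internals: a right-moving wave packet crosses the buffer zone and is absorbed by a smoothly rising imaginary potential, so that almost no weight returns to the scattering region.

```python
import numpy as np
from scipy.linalg import expm

# Toy 1D chain (hopping -1): scattering region, buffer zone, and an
# absorbing zone with a polynomial imaginary potential (n+1) A x^n.
N_s, N_b, N_sig, n, A = 30, 30, 60, 6, 0.3
N = N_s + N_b + N_sig
H = np.zeros((N, N), complex)
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0
sigma = np.zeros(N)
x = np.arange(1, N_sig + 1) / N_sig
sigma[N_s + N_b:] = (n + 1) * A * x ** n       # smoothly rising potential
Heff = H - 1j * np.diag(sigma)                 # non-Hermitian "sink" Hamiltonian

# Gaussian wave packet moving to the right with momentum pi/2 (velocity 2)
j = np.arange(N)
psi = np.exp(-(j - 10.0) ** 2 / 20.0) * np.exp(1j * np.pi / 2 * j)
psi /= np.linalg.norm(psi)

U = expm(-1j * Heff * 0.5)                     # one time step, dt = 0.5
for _ in range(400):                           # total time t = 200
    psi = U @ psi
norm_left = np.linalg.norm(psi[:N_s])          # weight back in the scattering region
```

After a time long enough for the packet to bounce off the far end of a hard-wall chain, essentially all of the norm has been absorbed, and the weight remaining in the scattering region stays negligible.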
Following Ref. [37], we use a polynomial shape for the imaginary potential,

Σ(x) = (n + 1) A x^n. (B2)

The lengths of the buffer zone and of the absorbing zone are denoted by L_b and L_Σ unit cells, respectively.

To define the spurious reflection, we consider a fictitious scattering problem: an infinite buffer region terminated on one side by the absorbing zone. For a given channel α with momentum k, the presence of the imaginary potential creates a reflection amplitude r_{βαE} into channel β, so that the scattering states propagating in the buffer zone are superpositions of the incoming wave and the reflected waves. The spurious reflection r_{βαE} can be calculated numerically with kwant or estimated analytically. We define the total spurious reflection r as the total reflection probability Σ_β |r_{βαE}|², maximized over the incident channels α and energies E. Only the channels α that have a large enough velocity v_α > L_b/t_max (where t_max is the maximum time of the simulation) are taken into account in the calculation of r. Indeed, slower channels may contribute to the reflection, but due to the presence of the buffer region, the reflected wave will not have time to reach the scattering region and spoil the results.

Given a targeted accuracy r_max, the problem reduces to optimizing the parameters A, n, L_b and L_Σ such that r ≤ r_max while L = L_b + L_Σ is as small as possible. A trivial possibility is to have no imaginary potential at all and to choose L_b large enough that even the fastest channels cannot reach the scattering region. Although non-optimal, this boundary condition is implemented in tkwant and referred to as the "simple boundary condition". 37 tkwant implements a heuristic algorithm that, although non-optimal in general, considerably improves on the simple boundary condition in certain cases. We stress again that tkwant provides a "safe" algorithm that seeks a given precision whatever the dynamics in the scattering region. For a given time-dependent problem, the error will usually be much smaller than r_max. For large simulations where the computing time is critical, a manual control of the imaginary potential may be significantly more efficient.
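The fictitious scattering problem can be solved in a few lines for a toy single-channel chain (hopping −1, on-site energy 0, hard wall behind the absorbing zone). This stands in for the kwant computation mentioned above and returns the spurious reflection probability |r|².

```python
import numpy as np

def reflection(k, A, n, L_sigma, L_buffer):
    """|r|^2 for a plane wave of momentum k entering the buffer zone of a
    1D chain terminated by an absorbing zone and a hard wall."""
    E = -2 * np.cos(k)                    # band of the clean chain
    M = L_buffer + L_sigma
    sigma = np.zeros(M)
    x = np.arange(1, L_sigma + 1) / L_sigma
    sigma[L_buffer:] = (n + 1) * A * x ** n
    # backward recursion from the hard wall (psi_M = 0, psi_{M-1} = 1):
    # psi_{j-1} = -(E + i sigma_j) psi_j - psi_{j+1}
    psi_next, psi = 0.0 + 0j, 1.0 + 0j
    for j in range(M - 1, -1, -1):
        psi_prev = -(E + 1j * sigma[j]) * psi - psi_next
        psi_next, psi = psi, psi_prev
    psi_m1, psi_0 = psi, psi_next         # psi_{-1} and psi_0 in the clean region
    # match to a e^{ikj} + b e^{-ikj} in the clean region; r = b/a
    a = (psi_0 * np.exp(1j * k) - psi_m1) / (np.exp(1j * k) - np.exp(-1j * k))
    b = psi_0 - a
    return abs(b / a) ** 2
```

Without an imaginary potential the hard wall reflects everything (|r|² = 1); with a smoothly rising potential of sufficient length the reflection drops by many orders of magnitude.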

Heuristic for optimization
In our heuristic, we consider only two extreme values of (α, k): the fastest modes, which quickly go through the buffer region but are absorbed efficiently by the imaginary potential (with very little reflection), and the slowest modes, which take a long time to cross the buffer region but create significantly more reflection. KwantSpectrum provides the necessary tools for finding the maximum velocity v_fast in the leads (fast modes) as well as the points of maximum curvature γ_slow = ∂²E_α/∂k², where the velocity vanishes (slow modes).
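A crude numerical stand-in for these two quantities (this is an illustration only, not KwantSpectrum's algorithm): scan a cosine band on a fine grid and extract the maximal velocity and the curvature at a band extremum.

```python
import numpy as np

# Cosine band E(k) = -2 cos k on the Brillouin zone [-pi, pi]
k = np.linspace(-np.pi, np.pi, 20001)
E = -2 * np.cos(k)
v = np.gradient(E, k)              # dE/dk
curv = np.gradient(v, k)           # d^2 E / dk^2
v_fast = np.max(np.abs(v))         # fast modes: maximal velocity in the band
i0 = np.argmin(np.abs(k))          # band bottom at k = 0, where the velocity vanishes
gamma_slow = curv[i0]              # slow modes: curvature at the extremum
```

For this band the exact values are v_fast = 2 (at k = ±π/2) and γ_slow = 2 (at k = 0), which the finite-difference scan reproduces to grid accuracy.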
To estimate the reflection r_Σ of a given mode with dispersion relation

E_α(k) ≈ E_0 + γ q²/2,

where q = k − k_0 is the momentum counted from the bottom/top of the band, we use the analytical expression Eq. (34) derived in Ref. [37]. Note the presence of a typo in Eq. (34) of Ref. [37]: the correct form has a factor (n − 1)! instead of (n − 1) in the second term and reads

r_Σ = e^{−Aq/ε} + A n(n+1)(n−1)! / [2^{n+2} ε q^n L_Σ^{n+1}], (B6)

where the first and second terms respectively describe the absorption by the imaginary potential and the reflection when it is not perfectly adiabatic.
Optimization of A. We first choose the optimal value A* that minimizes Eq. (B6), i.e. that satisfies ∂_A r_Σ(A*) = 0. The value of A* strongly depends on q and γ. We optimize A* with respect to the fastest mode. Indeed, the first term of Eq. (B6) scales as e^{−2A/v} while the second scales as A/v^{n+1}. Hence fast modes are limited by the first term while slow modes are limited by the second one. Since the first term is exponential, it is computationally cheap to make it negligible for all modes; making the second term small enough is a matter of increasing L_Σ. We arrive at

A* = −[ε(q_fast)/q_fast] log[ n(n+1)(n−1)! / (2 (2 q_fast L_Σ)^{n+1}) ]. (B7)
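Equations (B6) and (B7) can be checked directly. The sketch below evaluates r_Σ and A* for illustrative values of q, ε, n and L_Σ (all chosen by us) and verifies that A* indeed minimizes r_Σ.

```python
import numpy as np
from math import factorial

def r_sigma(A, q, eps, n, L):
    """Eq. (B6): absorption term plus non-adiabatic reflection term."""
    return (np.exp(-A * q / eps)
            + A * n * (n + 1) * factorial(n - 1)
            / (2 ** (n + 2) * eps * q ** n * L ** (n + 1)))

def A_star(q, eps, n, L):
    """Eq. (B7): strength that minimizes r_sigma at fixed q, eps, n, L."""
    return -(eps / q) * np.log(n * (n + 1) * factorial(n - 1)
                               / (2 * (2 * q * L) ** (n + 1)))

# illustrative parameters (not taken from the paper)
q, eps, n, L = 1.0, 1.0, 6, 100
A = A_star(q, eps, n, L)
```

Setting ∂_A r_Σ = 0 balances the exponential absorption term against the linear reflection term; perturbing A away from A* in either direction increases r_Σ.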
Optimization of L_Σ/L_b. The second optimization is to find the best way of splitting the total length L into L = L_Σ + L_b for a given A*. Introducing the splitting fraction x = L_Σ/L, the second term of Eq. (B6) is dominated by the slowest modes that can go through the buffer layer. The corresponding momentum q_slow satisfies 2L_b = γ_slow q_slow t_max. Substituting q_slow and L_Σ = xL into the second term of Eq. (B6) gives r_Σ as an explicit function of x and L. Optimizing with respect to x, ∂_x r_Σ = 0, leads to the optimum splitting fraction x*, independently of the value of γ_slow.
Our overall algorithm for setting the values of L_b, L_Σ and A reads as follows:
1. We start with an initial value L_0 = v_fast t_max/2 that corresponds to the "simple boundary condition" with no imaginary potential. This choice of length is guaranteed to induce no spurious reflection.
2. We split the current length into L_Σ = x* L_0 and L_b = (1 − x*) L_0 and compute the corresponding optimal strength A* from Eq. (B7).
3. We use Eq. (B11) to find the value L* that satisfies r_Σ < r_max.
4. If the new value L* < L_0, it is computationally advantageous to use L* instead of L_0 in the simulations. We update L_0 → L* and go back to step 2 to see if L can be further decreased. If L* > L_0, we terminate the optimization and keep L_0 as our value of L.
Note that we did not perform a systematic optimization over the order n of the polynomial, but we have found empirically that n = 6 is a good compromise.

Illustration
To illustrate the procedure, we apply the optimization algorithm to a real-world system with a complex energy dispersion, as shown in Fig. 12. This example is difficult due to the presence of tiny gaps at the avoided crossings (high curvature, i.e. very low effective mass), which can lead to a large spurious reflection. Fig. 13 compares our analytical estimate of r to an exact numerical calculation performed with kwant. We observe a deviation from the analytical estimate for high values of r, but the estimate is rather accurate for small r. Since it is in the latter parameter range that the estimate is actually needed, it is quite reliable. See, e.g., Fig. 3 of Ref. [37] for a more detailed study.

Computational complexity
The overall computational complexity (CPU time of a simulation) of tkwant scales as (N_s + N L) t_max, where N_s is the number of sites in the scattering region, N the number of sites per lead unit cell and L the number of lead cells included in the simulation; L itself also scales with t_max. For the "simple boundary condition", L ∝ t_max, such that the overall complexity is ∝ t_max for large scattering regions/short simulation times but ∝ t²_max for small scattering regions/long simulation times.
The heuristic algorithm described in this appendix has a complexity L ∝ t^{x*}_max [as can be seen from Eq. (B9) neglecting logarithmic corrections], which translates into a more favorable overall complexity ∝ t^{1+x*}_max ≈ t^{1.5}_max for large simulation times. The crossover between the short- and large-time behavior is illustrated in Fig. 14.

The scaling t^{1.5}_max corresponds to a "safe" usage of tkwant that does not make any assumptions about the actual dynamics taking place in the scattering region or additional symmetries in the leads. In most cases, it is possible to obtain the optimum overall scaling ∝ t_max by taking advantage of the structure of the leads. For instance, if the lead is in the quantum Hall regime, inducing back reflection with the imaginary potential involves back scattering a chiral edge state from one edge of the lead to the other. As this process is exponentially suppressed with the width of the lead, extremely accurate results can be obtained with an absorbing zone that contains only a handful of sites. Another example is graphene: since the imaginary potential does not break the symmetry between A and B sites, it conserves the corresponding pseudo-spin and hence does not induce back scattering in the region close to the Dirac points. Last, in many practical situations, the time-dependent perturbation is actually slow and small with respect to ℏ/E_F and E_F respectively. It follows that only the modes close to E_F will actually play a role in the simulation. Experienced users can manually set the imaginary potential Σ_l(a) and check the accuracy of the results by monitoring how they converge with L_Σ and/or L_b.

FIG. 14. Lead length L as a function of t_max. Up to around t_max ≈ 10³, the "simple boundary conditions" with only buffer cells (linear scaling L ∼ t_max, orange dash-dotted line) are preferred. For larger t_max, the combination of buffer and absorbing cells is more effective. Blue dotted line: theoretical scaling t^{x*}_max with x* = 8/15. Same parameters as in Fig. 12: r_max = 10^{−5}, n = 6.
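The two complexity regimes can be made concrete with a back-of-the-envelope comparison; all prefactors and system sizes below are invented for illustration only.

```python
import numpy as np

# Hypothetical sizes: N_s sites in the scattering region, N sites per lead cell.
N_s, N = 10_000, 50
t_max = np.logspace(1, 5, 9)            # range of simulation times

L_simple = 1.0 * t_max                  # "simple boundary condition": L ~ t_max
L_heur = 20.0 * t_max ** (8 / 15)       # heuristic: L ~ t_max^{x*}, x* = 8/15

# Overall CPU cost ~ (N_s + N L) t_max for both strategies
cost_simple = (N_s + N * L_simple) * t_max
cost_heur = (N_s + N * L_heur) * t_max
```

With these (made-up) prefactors the simple boundary condition wins at short times, while the t^{1+8/15} scaling of the heuristic takes over at long times, reproducing the crossover shown in Fig. 14 qualitatively.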