K$\omega$ -- Open-source library for the shifted Krylov subspace method

We develop K$\omega$, an open-source linear algebra library for the shifted Krylov subspace methods. The methods solve a set of shifted linear equations for a given matrix and vector simultaneously, with a leading-order operational cost equal to that for a single equation. The shift invariance of the Krylov subspace is the mathematical foundation of the shifted Krylov subspace methods. Applications in materials science are presented to demonstrate the advantages of the algorithm over standard Krylov subspace methods such as the Lanczos method. We introduce benchmark calculations of (i) an excited (optical) spectrum and (ii) intermediate eigenvalues by a contour integral on the complex plane. In combination with the quantum lattice solver H$\Phi$, K$\omega$ can realize parallel computation of excitation spectra and intermediate eigenvalues for various quantum lattice models.


Introduction
The response of physical observables to an external field or a perturbation is an essential probe in the experimental and theoretical studies of quantum many-body systems. Theoretically, such a quantity can be formulated as a selected element of the Green's function,

G_ab(z) = a† G(z) b, (1)

with given states (or vectors) a and b, where z is the complex energy parameter and the Green's function G(z) is defined as the inverse of the shifted Hamiltonian H,

G(z) = (zI − H)^{−1}. (2)

In computational quantum physics, the Hamiltonian H is usually represented by a real-symmetric or Hermitian M × M sparse matrix. A typical application of the Green's function formalism is calculating spectra, in which the elements a† G(z_k) b are calculated at sampling points z_k located near the real axis (z_k ≡ ω_k + iδ) with a tiny imaginary part δ, or where the elements a† G(z_k) b (a ≠ b) are calculated at sampling points z_k for the numerical contour integral on the complex plane. The dimension M is large, e.g., M ≥ 10^9, and efficient numerical methods are essential to treat such large matrices.
Formally, the Green's function (2) can be decomposed as

G(z) = Σ_j y_j y_j† / (z − λ_j), (3)

where λ_j and y_j are the j-th eigenvalue and the corresponding eigenvector of the matrix H, respectively, i.e.,

H y_j = λ_j y_j. (4)

In practice, however, the numerical evaluation of all the eigenpairs is too expensive [the operational cost is O(M^3) and the memory cost is O(M^2)] to treat large-scale matrices (M ≥ 10^9). In computational quantum physics, a Lanczos-based algorithm combined with the continued fraction technique has been proposed and widely used to calculate the Green's function [1,2]. The most expensive operation of the Lanczos-based algorithm is the matrix-vector product, with an operational cost of O(M^2). Moreover, the sparseness of the Hamiltonian matrix reduces this cost to O(m), where m is the number of nonzero elements in the matrix. However, the calculation of the Green's function is restricted to diagonal components, and it is not straightforward to evaluate the convergence of the obtained results.
In the present paper, we describe the numerical library Kω (https://github.com/issp-center-dev/Komega), which solves the linear equations defined by

(z_k I − H) x^(k) = b (k = 0, 1, 2, . . . , N_eq − 1), (5)

instead of Eq. (4). Here, b is a given vector, {z_k} are given scalars, I is the identity matrix, and N_eq is the number of linear equations. Eq. (5) is called a set of shifted linear equations, since the matrices on the left-hand side differ only by the 'shift' term z_k I. In general, the scalar z is a complex number and the matrix zI − H is non-Hermitian. The solution vector

x^(k) = (z_k I − H)^{−1} b (6)

satisfies a† G(z_k) b = a† x^(k). An iterative algorithm is adopted in Kω, based on the property that the result of the n-th iteration, regardless of z_k, can be spanned in the Krylov subspace. The operational cost is usually dominated by the matrix-vector multiplication procedure for a vector v (v ⇒ Hv).
However, no open-source numerical library/solver of the shifted Krylov subspace methods has yet been developed to our knowledge.
In this paper, we first briefly review the algorithms implemented in Kω in section 2. Next, basic information such as installation and usage is introduced in section 3. Then, some examples using Kω as a library or as standalone software are illustrated in section 4. Finally, we summarize the paper in section 5.

Overview
In this section, we briefly review the shifted Krylov subspace methods implemented in Kω. The shifted linear equations of Eq. (5) can be rewritten as

(A + σI) x^σ = b, (7)

with A = z_0 I − H and σ ≡ z_k − z_0 for k = 0, 1, 2, . . . , N_eq − 1.
Here, the suffix k in Eq. (5) is dropped for simplicity. Hereafter, Eq. (7) with σ = 0 is called the seed equation, while the other equations with σ ≠ 0 are called the shifted equations. The accuracy of the approximate solution vector x^σ_n can be checked by monitoring the residual vector

r^σ_n = b − (A + σI) x^σ_n, (8)

where n is the number of iteration steps. Table 1 shows the four solver methods available in Kω. The methods are classified according to the type of the matrix A + σI. Users need to select an appropriate method depending on whether the matrix H is real-symmetric or Hermitian and whether the shift values {z_k} are complex or real [25].
The mathematical foundation of Kω is an iterative algorithm, and the numerical solution of Eq. (7) at the n-th iteration is obtained in the Krylov subspace defined as

K_n(A, b) = span{b, Ab, A^2 b, . . . , A^{n−1} b}. (9)

The common mathematical foundation of the shifted Krylov subspace methods is the shift invariance property of the Krylov subspace, K_n(A, b) = K_n(A + σI, b), and the collinear residual theorem [3], which is explained later in this section. This section explains the shifted conjugate gradient (CG) method in Table 1 as an example. The other methods, the shifted conjugate orthogonal CG (COCG) method and the shifted BiCG method, are explained in Appendix A and Appendix B, respectively.
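The shift invariance property K_n(A, b) = K_n(A + σI, b) can be checked numerically. The following sketch (a NumPy illustration, not part of Kω) builds orthonormal bases of the Krylov subspaces of a random symmetric matrix and of an arbitrarily shifted copy, and verifies that the two subspaces coincide through their principal angles:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 30, 5
A = rng.standard_normal((M, M))
A = (A + A.T) / 2              # real-symmetric test matrix
b = rng.standard_normal(M)
sigma = 1.7                    # an arbitrary shift

def krylov_onb(A, b, n):
    """Orthonormal basis of K_n(A, b) via QR of [b, Ab, ..., A^{n-1} b]."""
    V = np.empty((len(b), n))
    v = b.copy()
    for j in range(n):
        V[:, j] = v
        v = A @ v
    Q, _ = np.linalg.qr(V)
    return Q

Q_seed = krylov_onb(A, b, n)
Q_shift = krylov_onb(A + sigma * np.eye(M), b, n)

# Cosines of the principal angles between the two subspaces are all 1,
# i.e., K_n(A, b) and K_n(A + sigma*I, b) are the same subspace.
cosines = np.linalg.svd(Q_seed.T @ Q_shift, compute_uv=False)
assert np.allclose(cosines, 1.0, atol=1e-8)
```

This is the reason a single Krylov basis, generated from the seed matrix alone, can serve every shifted equation at once.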

Shifted conjugate gradient method
The shifted CG method is based on the CG method [26], which is used for solving Ax = b when the matrix A is real-symmetric (or Hermitian) and positive definite.

Seed equation
The seed equation [Eq. (7) with σ = 0] is denoted as Ax = b, and the solution vector at the n-th iterative step is denoted as x_n, where x_0 is set to x_0 = 0. The residual vector at the n-th step is denoted as r_n = b − A x_n. The three vectors x_n, r_n, and the search direction vector p_n are updated by the recurrences

ρ_n = r_n† r_n, (11)
α_n = ρ_n / (p_n† A p_n), (12)
x_{n+1} = x_n + α_n p_n, (13)
r_{n+1} = r_n − α_n A p_n, (14)
β_n = ρ_{n+1} / ρ_n, (15)
p_{n+1} = r_{n+1} + β_n p_n, (16)

with the initialization p_0 = r_0 = b. Kω uses the three-term recurrence formula [7]

r_{n+1} = [1 + (β_{n−1}/α_{n−1}) α_n] r_n − α_n A r_n − (β_{n−1}/α_{n−1}) α_n r_{n−1}, (17)

which is obtained by eliminating p_n from Eqs. (14) and (16). By taking the inner product between r_n and Eq. (17), we also obtain

α_n = ρ_n / [r_n† A r_n − (β_{n−1}/α_{n−1}) ρ_n]. (18)
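The recurrences above are those of the standard CG method. They can be checked with a short self-contained sketch (NumPy, with an arbitrary symmetric positive definite test matrix; an illustration, not the Kω implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 50
B = rng.standard_normal((M, M))
A = B @ B.T + M * np.eye(M)           # symmetric positive definite test matrix
b = rng.standard_normal(M)

# CG recurrences with x_0 = 0 and p_0 = r_0 = b
x = np.zeros(M)
r = b.copy()
p = b.copy()
rho = r @ r                           # rho_n = r_n^T r_n
for n in range(200):
    Ap = A @ p
    alpha = rho / (p @ Ap)            # alpha_n = rho_n / (p_n^T A p_n)
    x = x + alpha * p                 # x_{n+1} = x_n + alpha_n p_n
    r = r - alpha * Ap                # r_{n+1} = r_n - alpha_n A p_n
    rho_new = r @ r
    if np.sqrt(rho_new) < 1e-12 * np.linalg.norm(b):
        break
    beta = rho_new / rho              # beta_n = rho_{n+1} / rho_n
    p = r + beta * p                  # p_{n+1} = r_{n+1} + beta_n p_n
    rho = rho_new

assert np.linalg.norm(A @ x - b) < 1e-10 * np.linalg.norm(b)
```

The single matrix-vector product A @ p per iteration is the dominant cost; everything else is vector arithmetic.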
It is crucial that the n-th residual r_n belongs to the (n + 1)-th Krylov subspace K_{n+1}(A, b) and is orthogonal to the n-th Krylov subspace K_n(A, b),

r_n ∈ K_{n+1}(A, b) and r_n ⊥ K_n(A, b). (10)

These mean that r_n belongs to the one-dimensional orthogonal complement of K_n(A, b) within K_{n+1}(A, b), which we denote by K⊥_n(A, b).

Shifted equation and collinear residuals
When the CG method is applied to a shifted equation [(A + σI) x = b, σ ≠ 0] with x^σ_0 = 0 and r^σ_0 = b, the residual vector r^σ_n belongs to the same one-dimensional subspace K⊥_n(A, b) because of Eq. (10). Consequently, the residual r^σ_n is parallel to that of the seed equation (r_n),

r^σ_n = r_n / π^σ_n, (20)

where π^σ_n is a constant [3] called the collinearity factor. The collinearity factor π^σ_n satisfies the recurrence equation

π^σ_{n+1} = [1 + α_n σ + (β_{n−1}/α_{n−1}) α_n] π^σ_n − (β_{n−1}/α_{n−1}) α_n π^σ_{n−1}, (21)

with the initialization π^σ_0 = π^σ_{−1} = 1, because of Eq. (17) and Eq. (20). By substituting π^σ_n r^σ_n for r_n in Eq. (17), the recurrence equations for the shifted equations are derived as

α^σ_n = (π^σ_n / π^σ_{n+1}) α_n, (22)
β^σ_n = (π^σ_n / π^σ_{n+1})^2 β_n, (23)
x^σ_{n+1} = x^σ_n + α^σ_n p^σ_n, (24)
p^σ_{n+1} = (1/π^σ_{n+1}) r_{n+1} + β^σ_n p^σ_n, (25)

with x^σ_0 = 0 and p^σ_0 = b. It should be noted that the recurrences of Eqs. (21), (22), (23), (24), and (25) require no expensive matrix-vector multiplication. Table 2 compares the operational costs of the conventional Krylov method and the shifted Krylov methods [4]. The first row of Table 2 represents the conventional Krylov method, which solves the N_eq linear equations independently in each Krylov subspace. Here M and M_NZ are the dimension and the average number of nonzero elements per column of the matrix A, respectively (M_NZ ≤ M). The second row represents the cost of the shifted Krylov method for the case where all the elements of the solution vectors x^σ are calculated. We find that the operational cost for the sparse matrix-vector product (SpMV) is drastically reduced (M M_NZ N_eq → M M_NZ), since the explicit SpMV appears only in the seed equation (σ = 0). The third row represents the cost of the shifted Krylov subspace method for the case where we do not need all the elements of the solution x^σ, but only its projection y^σ = P x^σ, where P is an M_left × M projection matrix.
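The collinearity recurrences can be assembled into a complete shifted CG solver in a few dozen lines. The sketch below (NumPy; an illustration under the conventions of this section, not the Kω source) runs the seed CG iteration, updates all shifted solutions using only the seed matrix-vector product, and verifies them against the shifted matrices directly:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 60
B = rng.standard_normal((M, M))
A = B @ B.T + M * np.eye(M)          # SPD seed matrix
b = rng.standard_normal(M)
shifts = [0.5, 2.0, 10.0]            # sigma > 0 keeps A + sigma*I positive definite

# Seed CG variables (x_0 = 0, p_0 = r_0 = b)
x = np.zeros(M); r = b.copy(); p = b.copy()
rho = r @ r
alpha_old, beta_old = 1.0, 0.0

# Shifted-system variables: collinearity factors, solutions, search directions
pi_old = {s: 1.0 for s in shifts}    # pi^sigma_{n-1}
pi = {s: 1.0 for s in shifts}        # pi^sigma_n
xs = {s: np.zeros(M) for s in shifts}
ps = {s: b.copy() for s in shifts}

for n in range(300):
    Ap = A @ p                       # the only matrix-vector product per step
    alpha = rho / (p @ Ap)
    for s in shifts:
        # Collinearity-factor recurrence: no extra SpMV for the shifted systems
        pi_new = (1 + alpha * s) * pi[s] \
                 + (alpha * beta_old / alpha_old) * (pi[s] - pi_old[s])
        xs[s] += (alpha * pi[s] / pi_new) * ps[s]     # uses alpha^sigma_n
        pi_old[s], pi[s] = pi[s], pi_new
    x += alpha * p
    r = r - alpha * Ap
    rho_new = r @ r
    beta = rho_new / rho
    for s in shifts:
        beta_s = (pi_old[s] / pi[s]) ** 2 * beta      # beta^sigma_n
        ps[s] = r / pi[s] + beta_s * ps[s]
    p = r + beta * p
    alpha_old, beta_old, rho = alpha, beta, rho_new
    if np.sqrt(rho) < 1e-13 * np.linalg.norm(b):
        break

for s in shifts:
    assert np.linalg.norm((A + s * np.eye(M)) @ xs[s] - b) \
           < 1e-6 * np.linalg.norm(b)
```

All three shifted solutions converge from the single seed iteration, which is exactly the cost reduction summarized in Table 2.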

Cost and projection
In the third case, we can replace the recurrence equations, Eqs. (24) and (25), by

y^σ_{n+1} = y^σ_n + α^σ_n u^σ_n, (26)
u^σ_{n+1} = (1/π^σ_{n+1}) P r_{n+1} + β^σ_n u^σ_n, (27)

with y^σ_0 = 0 and u^σ_0 = Pb. By this replacement, the number of scalar-vector products (SV) is reduced from 3M N_eq to 3M + 3M_left (N_eq − 1). For example, the calculation of an element of the Green's function by Eq. (1) is a case with M_left = 1. In typical applications, both N_eq and M_left are much smaller than M (N_eq, M_left ≪ M), and thus the operational costs in the third row of Table 2 reduce to those in the first row with N_eq = 1. In other words, the operational cost in the third case is reduced, typically, to that for solving a single linear equation.
Kω implements the third case in Table 2, i.e., the shifted Krylov subspace method with projection. For the seed equation, the residual vector r_n is updated by Eq. (17), and the coefficients ρ_n, α_n, and β_n are updated by Eqs. (11), (18), and (15), respectively. For the shifted equations, the projected solution vector y^σ_n and the projected search direction vector u^σ_n are updated by Eqs. (26) and (27), respectively, and the coefficients π^σ_n, α^σ_n, and β^σ_n are updated by Eqs. (21), (22), and (23), respectively. The solution vectors x^σ_n for the shifted equations can be obtained by setting P = I (or M_left = M), but users should accept the additional memory cost, as explained below.
The present implementation of Kω offers a great advantage not only in the operational cost but also in the memory cost, because an M-dimensional vector v requires a large memory in typical applications. In the present implementation, only three M-dimensional vectors, the residual vectors for the seed equation (r_{n+1}, r_n, and r_{n−1}), are stored in the memory, with a memory cost of O(M). The residual vectors for the shifted equations, such as r^σ_n, can be obtained by Eq. (20) with a negligible memory cost of O(N_eq). To store the other M-dimensional vectors for the shifted equations, such as x^σ_n and p^σ_n, the required memory size would be O(M N_eq), which can be huge.

Seed switching
A mathematical technique called seed switching [7] is adopted in Kω for efficient convergence, because the convergence speed of the CG method can differ among the shift points {σ_k}. The residual vectors for several shifted equations (r^σ_n, σ ≠ 0) can sometimes remain large, while that for the seed equation has already become negligible [4]. In this case, we can switch the seed equation to A = A + σ_seed I, where σ_seed is the shift that gives the largest residual |r^{σ_seed}_n|, or in other words, the smallest collinearity factor |π^{σ_seed}_n|. The residual vector for the new seed equation is obtained from Eq. (20) as r^{σ_seed}_n = (1/π^{σ_seed}_n) r_n. It is noted that the present implementation style does not require the solution and search direction vectors for the new seed equation (x^{σ_seed}_n, p^{σ_seed}_n).
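The selection rule itself is a one-liner. The fragment below (with made-up collinearity factors, purely for illustration; not the output of a real Kω run) picks the new seed and recovers its residual norm without any extra matrix-vector product:

```python
import numpy as np

# Seed switching: among the shift points, the new seed is the one with the
# smallest collinearity factor |pi^sigma_n|, i.e., the largest shifted
# residual |r^sigma_n| = |r_n| / |pi^sigma_n|. The numbers below are
# illustrative only.
r_norm = 1.0e-8                              # seed residual norm |r_n|
pi = {0.5: 4.0e3, 2.0: 7.5e1, 10.0: 2.0e-2}  # collinearity factors pi^sigma_n
sigma_seed = min(pi, key=lambda s: abs(pi[s]))
# Residual norm of the new seed equation, via Eq.-(20)-type collinearity
r_seed_norm = r_norm / abs(pi[sigma_seed])
assert sigma_seed == 10.0
assert np.isclose(r_seed_norm, 5.0e-7)
```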

Comparison with Lanczos method
Here, we briefly describe the merits of the shifted Krylov method for calculating G_ab(z), compared with the traditional approach based on the Lanczos algorithm [1,2]. In the Lanczos-based algorithm, the Krylov subspace K_n(A, b) is generated by the Lanczos-type recurrence formula, and the diagonal component G_bb is given in a continued fraction form. In contrast, using the shifted Krylov method, both the diagonal components G_bb and the off-diagonal components G_ab can be calculated directly and simultaneously, with the same order of operational cost as the Lanczos-based method. In addition, the accuracy of the obtained results can be evaluated by monitoring the residual vector of Eq. (8).

Usage of Kω
In this section, the usage of Kω is introduced. We first introduce how to install Kω. Kω provides libraries and a standalone program ShiftK.out. In Sec. 3.2, the procedures for using Kω are schematically shown.

Installation
The stable version of Kω is distributed on the release page under the GNU Lesser General Public License (LGPL) version 3. To build Kω, a Fortran compiler and a BLAS library are required. The following is an example of compiling Kω:

$ ./configure --prefix=install_dir
$ make
$ make install

Here, install_dir indicates the full path of the directory where the library is installed.

Schematic flow of Kω usage
In this subsection, the usage of Kω as a library or a standalone program is explained. In Fig. 2, the schematic flow of the library usage (a) and the corresponding flowchart in the standalone program (b) are shown.

Library

(i) Preparation of a routine for matrix-vector multiplication
Kω provides a reverse communication interface for the matrix-vector multiplication routine (v ⇒ Hv). The interface requires the preparation of a routine that performs the matrix-vector multiplication, to be called in Kω. This interface allows extremely large matrices to be handled, since the matrix elements can be generated on the fly in the matrix-vector multiplication and need not be stored in memory.

(ii) Selection of an appropriate solver
Kω provides four kinds of numerical solvers. An appropriate solver should be selected depending on whether the Hamiltonian H and the frequency z are complex or real, as shown in Table 1. Note that a complex matrix H must be Hermitian and a real matrix H must be symmetric. For efficient calculation, the seed switching function [7] is implemented in all methods.

(iii) Calculation
The calculation is performed in the following steps: (a) Initialization using the *_init functions to set and initialize internal variables in the library. For restart calculations, the initial values of the coefficients and the vectors should be input at this step. (b) Iterative updates using the *_update functions, which are called alternately with the matrix-vector product routine in the loop to update the solution; *_update also performs the seed switching. At each step, the 2-norm of the residual vector at each shift point can be obtained using the *_getresidual functions. (c) Finalization using the *_finalize functions to release the memory of the arrays stored in the library. Here, * indicates the name of the solver: komega_cg_r, komega_cg_c, komega_cocg, or komega_bicg.
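The init/update/getresidual/finalize flow can be illustrated with a toy reverse-communication solver. The class below is a hypothetical Python stand-in (the real komega_* routines are Fortran and their signatures differ); it only demonstrates the calling pattern, in which the caller owns the matrix-vector product:

```python
import numpy as np

class ToyCG:
    """Reverse-communication CG: an illustrative stand-in for the komega_*
    calling sequence, not the actual Kw API."""

    def init(self, b):                        # role of *_init
        self.x = np.zeros_like(b)
        self.r = b.copy()
        self.p = b.copy()
        self.rho = self.r @ self.r

    def update(self, Ap):                     # role of *_update; caller passes A @ p
        alpha = self.rho / (self.p @ Ap)
        self.x += alpha * self.p
        self.r -= alpha * Ap
        rho_new = self.r @ self.r
        self.p = self.r + (rho_new / self.rho) * self.p
        self.rho = rho_new

    def getresidual(self):                    # role of *_getresidual
        return np.sqrt(self.rho)

    def finalize(self):                       # role of *_finalize
        self.p = self.r = None                # release work arrays

rng = np.random.default_rng(3)
M = 40
B = rng.standard_normal((M, M))
A = B @ B.T + M * np.eye(M)                   # SPD test matrix
b = rng.standard_normal(M)

solver = ToyCG()
solver.init(b)
for _ in range(500):
    if solver.getresidual() <= 1e-10 * np.linalg.norm(b):
        break
    solver.update(A @ solver.p)               # caller-side matrix-vector product
x = solver.x
solver.finalize()
assert np.linalg.norm(A @ x - b) < 1e-8 * np.linalg.norm(b)
```

Because the solver never touches the matrix, the caller is free to apply H matrix-free and in parallel, which is the point of the reverse communication design.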

Standalone program
To show detailed usage of Kω, an example standalone program ShiftK.out is also provided. ShiftK.out computes a diagonal element of the Green's function,

G_aa(z) = a† (zI − H)^{−1} a. (29)

Here, a is expanded as a = Σ_i a_i n_i, where n_i is the i-th basis vector of the Hilbert space. In the following, the usage of the software is explained.

(i) Preparation of an input file
The input parameters for ShiftK.out are categorized into four sections: cg, dyn, filename, and ham.
The cg section sets the numerical conditions for the CG (or COCG, BiCG) method. maxloops is the maximum number of iterations, and convfactor is the threshold value for the convergence criterion of the residual norm, i.e., max_σ |r^σ_n| < 10^{−convfactor}.
The dyn section specifies the parameters for computing the spectrum. By setting the parameters omegamin, omegamax, and nomega, the target frequencies are given as ω_i = omegamin + i × (omegamax − omegamin)/nomega (i = 0, . . . , nomega − 1). In ShiftK.out, the calculation mode is chosen from normal, recalc, and restart by specifying the parameter calctype. normal is the mode for computing with the Krylov subspace from scratch. recalc is the mode for computing with the Krylov subspace generated in a previous calculation [see (iv) for details]. restart is the mode for restarting the calculation from the previous run. In the computation of the Green's function, the shifted COCG or shifted BiCG method is automatically selected when H is real or complex, respectively.
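The frequency grid can be reproduced in a few lines (parameter values taken from the example input file in the next subsection; note that the step divides by nomega, so omegamax itself is not included in the grid):

```python
# Frequency grid of the dyn section:
#   omega_i = omegamin + i * (omegamax - omegamin) / nomega,  i = 0..nomega-1
omegamin, omegamax, nomega = -2.0 + 0.1j, 1.0 + 0.1j, 100
omegas = [omegamin + i * (omegamax - omegamin) / nomega for i in range(nomega)]
assert len(omegas) == nomega
assert omegas[0] == omegamin
# The last point is one step short of omegamax
assert abs(omegas[-1] - (omegamax - (omegamax - omegamin) / nomega)) < 1e-12
```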

(a) Input of the matrix of the Hamiltonian
The filename section specifies the names of the input files for the matrix of the Hamiltonian (inham) and the initial excited vector (invec). The file format of both files is the Matrix Market format [27]. If invec is not specified, a random vector is used as the initial vector. An example input file for reading the Hamiltonian matrix and the excited vector is as follows:

&filename
inham = "Ham.dat"
invec = "Excited.dat"
/
&cg
maxloops = 100
convfactor = 6
/
&dyn
calctype = "normal"
nomega = 100
omegamin = (-2d0, 0.1d0)
omegamax = ( 1d0, 0.1d0)
/

(b) Generation of the matrix of the Hamiltonian
For trial use, ShiftK.out can also generate the matrix of the Hamiltonian internally. In this mode, the ham section is used instead of inham in the filename section. In the ham section, model parameters are specified to generate the Hamiltonian matrix for a one-dimensional spin chain model,

H = Σ_i [Jx S^x_i S^x_{i+1} + Jy S^y_i S^y_{i+1} + Jz S^z_i S^z_{i+1} + Dz (S^x_i S^y_{i+1} − S^y_i S^x_{i+1})],

where S^j_i is a spin-1/2 operator at site i with component j = x, y, z. In the ham section, the following parameters are specified: the total number of sites of the spin chain nsite and the strengths of the spin-spin interactions Jx, Jy, Jz, and Dz. An example input file using the internal generation mode is as follows:

&filename
/
&ham
Jx = 1d0
Jy = 1d0
Jz = 1d0
Dz = 1d0
/
&cg
maxloops = 100
convfactor = 6
/
&dyn
calctype = "normal"
nomega = 100
outrestart = .TRUE.
/

(ii) Run
After preparing the input file, the executable ShiftK.out is run in the terminal as follows:

$ ShiftK.out namelist.def

Here, namelist.def is the name of the input file. The residual values at each step are output to the residual.dat file in the working directory.

(iii) Results
After running ShiftK.out, the output directory is automatically generated. In this directory, dynamical Green's functions, the residual vector, and the coefficients are output to dynamicalG.dat, ResVec.dat0, and TriDiagComp.dat, respectively.
(iv) Recalculation for additional data (optional)

The standalone program ShiftK.out provides, as an optional function, a recalculation function for additional data, as explained below. After the successful completion of ShiftK.out, users might want to obtain the solution at additional shift points σ that were not included in the completed calculation. In such cases, users can calculate the solutions at these points for the shifted equations with negligible operational costs, because the coefficients {α_n, β_n} and the projected residual vectors {P r_n} are saved in the files TriDiagComp.dat and ResVec.dat, respectively, and the recurrence relations for the shifted equations, Eqs. (21), (22), (23), (26), and (27), can be solved without any expensive matrix-vector multiplications.

Applications in material science
In this section, we present several numerical results of Kω applied to quantum lattice models, to demonstrate typical applications in computational quantum physics. The quantum lattice model plays a crucial role for quantum many-body systems and is described by a large sparse matrix H. The applied studies in this section use the quantum lattice solver package HΦ [28]. HΦ was developed as an exact diagonalization solver and can treat a wide range of quantum lattice models, such as the Hubbard model, the Kondo model, and the Heisenberg model. Kω can be called from HΦ and the users of HΦ can use the shifted Krylov subspace solvers. Here, three typical examples are explained.

Calculation of spectra using ShiftK.out
The first example is a spectral calculation of Eq. (29) by the standalone program ShiftK.out explained in Sec. 3.2.2. A typical application for a quantum lattice model is the calculation of a dynamical correlation function, since it is often used to analyze low-energy structures. The dynamical correlation function is defined as

G_{AB}(ω, η) = ⟨φ_0| A† [(ω − iη)I − H]^{−1} B |φ_0⟩,

where φ_0 is the ground-state vector, A and B are the matrices that generate the excited states, ω represents the frequency, and η represents the smearing factor. For example, by taking A = B = S^z, we can calculate the dynamical spin structure factors, which can be directly measured by neutron scattering experiments.
The key part in the calculation of the dynamical correlation functions is solving the linear equation defined as

[(ω − iη)I − H] x = B φ_0.

As an example of a calculation of dynamical Green's functions by ShiftK.out, we present the calculation of the dynamical spin structure factors of a one-dimensional Heisenberg chain, whose Hamiltonian is defined as

H = J Σ_{j=0}^{L−1} S_j · S_{j+1}, (33)

where S_j represents the spin-1/2 operator at site j, and we take a magnetic interaction of J = 1 and a system size of L = 12. The matrix data file of the Hamiltonian H is generated by HΦ in the Matrix Market format. The dynamical spin structure factors are defined as

S(q, ω, η) = (1/π) Im ⟨φ_0| S^z(q)† [(ω − iη)I − H]^{−1} S^z(q) |φ_0⟩,

where S^z(q) = Σ_{j=0}^{L−1} e^{iqj} S^z_j and q is the wave vector. The input file for ShiftK.out is given as follows (E_0 = 0, −5.5 ≤ ω ≤ 0.0, η = 0.02):

&filename
inham = "Ham.dat"
invec = "excitedvec.dat"
/
&cg
maxloops = 1000
convfactor = 6
/
&dyn
calctype = "normal"
nomega = 1000
omegamin = (-5.5, -0.02d0)
omegamax = ( 0.0, -0.02d0)
/

Here, Ham.dat and excitedvec.dat are the input files for the Hamiltonian matrix H and the excited vector S^z(q)φ_0. These files can be obtained using the quantum lattice solver HΦ. The details are shown in Appendix C.
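The same quantity can be cross-checked by brute force on a small chain. The sketch below (NumPy, L = 4 for speed; an illustration, not the HΦ/Kω workflow) builds the Heisenberg Hamiltonian, forms the excited vector S^z(q)φ_0, evaluates the structure factor by solving the shifted equations directly, and compares against the Lehmann (spectral) representation. The +iη convention with a minus sign used here is mathematically equivalent to the −iη convention above:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, j, L):
    """Operator acting as `op` on site j of an L-site chain."""
    mats = [I2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L, J = 4, 1.0
H = sum(J * (site_op(s, j, L) @ site_op(s, (j + 1) % L, L))
        for j in range(L) for s in (sx, sy, sz))

# Ground state and excited vector S^z(q)|phi0> at q = pi
evals, evecs = np.linalg.eigh(H)
E0, phi0 = evals[0], evecs[:, 0]
q = np.pi
Szq = sum(np.exp(1j * q * j) * site_op(sz, j, L) for j in range(L))
bvec = Szq @ phi0

# S(q, w) = -(1/pi) Im <b| [(w + E0 + i*eta) I - H]^{-1} |b>, by direct solves
eta = 0.02
dim = 2 ** L
for w in np.linspace(0.0, 3.0, 4):
    z = w + E0 + 1j * eta
    xvec = np.linalg.solve(z * np.eye(dim) - H, bvec)
    s_direct = -np.imag(bvec.conj() @ xvec) / np.pi
    # Cross-check against the Lehmann (spectral) representation
    amp2 = np.abs(evecs.conj().T @ bvec) ** 2
    s_spec = (amp2 * eta / ((w + E0 - evals) ** 2 + eta ** 2)).sum() / np.pi
    assert abs(s_direct - s_spec) < 1e-8
```

In a real application the dense solve is replaced by the shifted Krylov solver, which handles all frequency points from a single Krylov sequence.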
In Fig. 3, we show the numerical result for S(q = π, ω, η). Here, we shift ω by the ground-state energy E_0 = −5.387. The frequencies where S(q = π, ω, η) has sharp peaks correspond to the excitation energies induced by S^z(q = π). The convergence of the residuals can be checked in the standard output.

Calculation of internal eigenpairs
The second example is the calculation of internal eigenvalues by a contour-integral method called the Sakurai-Sugiura (SS) method [29]. The SS method is an efficient method for obtaining eigenpairs (eigenvalues and eigenvectors) in a specified eigenvalue range. Although an SS-method software package has already been developed [30], this example illustrates how to use Kω in a standalone program. In the SS method, the projection matrix onto the n-th eigenvector,

P_n = y_n y_n†,

plays a key role. By multiplying a vector φ = Σ_n a_n y_n by P_n, we can extract the component of y_n as P_n φ = a_n y_n.
On the other hand, the sum of the projection matrices over the eigenvalues enclosed by a contour Γ can be expressed as an integral in the complex plane,

P_Γ = (1/2πi) ∮_Γ (zI − H)^{−1} dz = Σ_{λ_n ∈ Γ} y_n y_n†,

where Γ represents a closed contour in the complex plane.
Using P_Γ, we can extract only the eigenvectors whose eigenvalues lie within Γ as

P_Γ φ_0 = Σ_{λ_n ∈ Γ} a_n y_n.

In this process, we can use Kω to obtain the vectors x(z) = (zI − H)^{−1} φ_0. The k-th moment vector can be expressed as

s_k = (1/2πi) ∮_Γ (z − z_0)^k (zI − H)^{−1} φ_0 dz,

where z_0 is an arbitrary complex number. Thus, once we obtain x(z) = (zI − H)^{−1} φ_0, we can calculate the subspace defined as

span{s_0, s_1, . . . , s_{N_k−1}},

where N_k represents the number of moments. Taking other source vectors φ_l (l = 1, . . . , N_l − 1), we can extend the subspace with the moment vectors s_{k,l}, which are arranged as the columns of a matrix S. By performing a singular value decomposition of S,

S = U Σ V†,

the number of non-zero singular values in Σ gives the number of independent vectors (M_nz) among the N_k × N_l vectors s_{k,l}. Using only the left singular vectors with non-zero singular values, we construct the matrix Ũ. Using Ũ, we project the original Hamiltonian onto the small matrix (of dimension M_nz × M_nz) whose eigenvalues lie within Γ,

H̃ = Ũ† H Ũ.

By diagonalizing H̃, we can obtain all the eigenvalues and eigenvectors that lie within Γ.
As an example, we apply the SS method to the 12-site Heisenberg chain model defined in Eq. (33). For simplicity, we obtain the eigenvalues around the ground state (the lowest eigenvalue). The matrix data file in the Matrix Market format (Ham.dat) is generated using HΦ as in Appendix C. Using Ham.dat, we can perform the SS method using Kω [31]. To perform the integration on the complex plane, we use points on a circle,

z_j = γ + ρ e^{2πi(j+1/2)/N_z} (j = 0, 1, . . . , N_z − 1),

where we take γ = −5, ρ = 0.8, and N_z = 100 as an example.
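The whole SS procedure fits in a short script for a small test matrix. The sketch below (NumPy; an illustration of the method, with dense direct solves standing in for the shifted Krylov solver) integrates the resolvent over a circular contour, builds the moment vectors, and recovers the eigenvalues inside the contour via the SVD projection:

```python
import numpy as np

rng = np.random.default_rng(4)
# Small real-symmetric test matrix with a known spectrum; three eigenvalues
# (-5.0, -4.9, -4.5) lie inside the contour chosen below.
evals_true = np.array([-5.0, -4.9, -4.5, -3.0, 0.0, 2.0, 4.0, 7.0])
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
H = Q @ np.diag(evals_true) @ Q.T

# Circular contour z_j = gamma + rho * exp(2*pi*i*(j + 1/2) / Nz)
gamma, rho, Nz = -4.7, 0.5, 64
zs = gamma + rho * np.exp(2j * np.pi * (np.arange(Nz) + 0.5) / Nz)

# Moment vectors s_{k,l}: trapezoidal quadrature of the contour integral of
# ((z - gamma)/rho)^k (zI - H)^{-1} phi_l, with weight (z_j - gamma)/Nz
Nk, Nl = 4, 2
cols = []
for l in range(Nl):
    phi = rng.standard_normal(8)
    xz = [np.linalg.solve(z * np.eye(8) - H, phi) for z in zs]
    for k in range(Nk):
        cols.append(sum(((z - gamma) / rho) ** k * (z - gamma) / Nz * x
                        for z, x in zip(zs, xz)))
S = np.array(cols).T          # columns span the target eigenspace

# Keep the left singular vectors with non-negligible singular values
U, sv, _ = np.linalg.svd(S, full_matrices=False)
Ut = U[:, sv > 1e-8 * sv[0]]
Ht = Ut.conj().T @ H @ Ut     # small projected Hamiltonian
found = np.sort(np.linalg.eigvals(Ht).real)
assert np.allclose(found, [-5.0, -4.9, -4.5], atol=1e-6)
```

In a large-scale calculation, the dense solves at the quadrature points z_j are exactly the shifted linear equations that Kω solves from a single Krylov sequence.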
In Table 3, we show the result of the SS method for the L = 12 Heisenberg chain, whose Hilbert dimension is 924. To see the convergence of the SS method, we show the obtained eigenvalues for several different conditions. By taking about 50 different bases, we can obtain the correct eigenvalues including degeneracies.

Calculation of optical conductivity using HΦ
The third example is the calculation of a spectrum or dynamical correlation function using HΦ, demonstrating that using Kω in HΦ can be a powerful tool for various problems. As an example, we show the optical conductivity of the extended Hubbard model, which is a typical model for strongly correlated electron systems. We note that the optical conductivity is often used for examining the metallic or insulating behavior of correlated electron systems. The optical conductivity can be calculated from the current-current correlation I(ω, η), which is defined as

I(ω, η) = ⟨φ_0| J_x† [(ω + iη + E_0)I − H]^{−1} J_x |φ_0⟩, with J_x = it Σ_{i,σ} (c†_{i+e_x,σ} c_{iσ} − c†_{iσ} c_{i+e_x,σ}),

where e_x is the unit translational vector in the x direction. The ground-state vector φ_0 is calculated by an exact diagonalization solver built into HΦ. From the current-current correlation, the regular part (i.e., without the Drude part at ω = 0) of the optical conductivity is obtained as

σ^{reg}(ω) = −Im I(ω, η) / (N_s ω),

where N_s is the number of sites.
To directly compare the optical conductivity with previous studies [32], we calculate the optical conductivity of the extended Hubbard model defined as

H = −t Σ_{⟨i,j⟩,σ} (c†_{iσ} c_{jσ} + h.c.) + U Σ_i n_{i↑} n_{i↓} + Σ_{⟨i,j⟩} V_{ij} n_i n_j,

where c_{iσ} and c†_{iσ} denote the annihilation and creation operators of an electron at site i with spin σ, n_{iσ} = c†_{iσ} c_{iσ} represents the number operator of an electron at site i with spin σ, and n_i = n_{i↑} + n_{i↓}. The intersite Coulomb interaction V_{ij} takes the value V or V′ depending on the bond between sites i and j. We note that this model at quarter filling is an effective model for organic conductors. We perform a calculation for an N_s = 4 × 4 system with t = 1, U = 4, V = 3, and V′ = 5.
In Fig. 4, we show the result of the optical conductivity in the extended Hubbard model. This result is consistent with a previous study [32]. We also show the residuals of I(ω, η) in Fig. 5. By calculating the residuals, we can check whether the obtained dynamical correlation functions are well converged. As mentioned above, this is one of the main advantages of the shifted Krylov method. Furthermore, by examining the ω dependence of the residuals we can also identify bottlenecks for the convergence.
It is noteworthy that the matrix size becomes huge in calculations of realistic models. In such cases, the matrix-vector product dominates the numerical cost of the shifted Krylov method. To address this problem, we designed the Kω library so that the matrix-vector product is not directly included in the library functions, as explained in Sec. 3.2.1. If users can prepare a matrix-vector product function with high parallelization efficiency, the calculation time is greatly reduced. In fact, the calculation of dynamical Green's functions by HΦ shows high parallelization efficiency, since the matrix-vector product is well parallelized [28]. From this point of view, the library is considered to be suitable for large-scale calculations. For the extended Hubbard model, we find that the residual for the high-energy part remains large; this indicates that more than 10^4 iterations are necessary to obtain a well-converged correlation function when the threshold value is set to 10^{−12}.

Summary
We developed the numerical library Kω for solving shifted linear equations with the shifted Krylov subspace methods. The present paper details applications for quantum many-body problems that appear in condensed matter physics. As a demonstration, numerical results including dynamical Green's functions, optical conductivity, and eigenvalues through the SS method are shown. The method is also applicable to other computational physics areas, and hence Kω could be a useful numerical library in computational physics.

Acknowledgements
TH wishes to thank S. Yamamoto for helpful advice regarding the shifted Krylov method. KY and TM wish to thank M. Naka and H. Seo for fruitful discussions about the calculation of the optical conductivity. This work was supported in part by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan as a social and scientific priority issue (Creation of new functional devices and high-performance materials to support next-generation industries; CDMSI) to be tackled by using the post-K computer. This work was also supported in part by the KAKENHI funds (19H04125, 17H02828). We would also like to acknowledge support from the "Project for advancement of software usability in materials science" by the Institute for Solid State Physics, University of Tokyo, for the development of Kω ver. 1.0. This work was also supported by the Building of Consortia for the Development of Human Resources in Science and Technology from the MEXT of Japan. We also wish to thank the Supercomputer Center of the Institute for Solid State Physics, University of Tokyo, for the use of their numerical resources.

Appendix C.1. Output of the Hamiltonian matrix

The Hamiltonian matrix of the Heisenberg chain [Eq. (33)] can be output by HΦ with the following input file:

L = 12
model = "Spin"
lattice = "chain"
method = "FullDiag"
J = 1.0
2Sz = 0
HamIO = "out"

Using this input file, we can obtain the output file of the Hamiltonian matrix in the Matrix Market format (zvo_Ham.dat, renamed as Ham.dat in Sec. 4).

Appendix C.2. Output the excited vector
To obtain the excited vector S^z(q)φ_0 by HΦ, we first calculate the ground state φ_0 and then calculate the excited vector by multiplying φ_0 by S^z(q). The ground state φ_0 can be obtained with the following input file:

L = 12
model = "Spin"
lattice = "chain"
method = "CG"
J = 1.0
2Sz = 0
EigenVecIO = "Out"

The input file for calculating the excited state S^z(q)φ_0 is given as follows:

L = 12
model = "Spin"
lattice = "chain"
method = "CG"
J = 1.0
2Sz = 0
CalcSpec = "Normal"
SpectrumQL = 0.5
EigenVecIO = "In"
OutputExcitedVec = "Out"

Here, the wave vector is specified using SpectrumQL. In the above case, S^z(q = π)φ_0 is obtained. For details, see the HΦ user's manual. After the calculations are finished, the output file of the excited vector zvo_excitedvec_rank_0.dat (renamed as excitedvec.dat in Sec. 4) is obtained.