5.1. Lanczos method
5.1.1. Details of Lanczos method
Some parts of this section are based on the manual of TITPACK [1] and the textbook published by M. Sugihara and K. Murota [2] (these references are written in Japanese).
In the Lanczos method, by successively applying the Hamiltonian to an initial vector, we obtain accurate eigenvalues near the maximum and minimum eigenvalues, together with the associated eigenvectors. Because the Lanczos method can be performed with only two vectors, each with the dimension of the total Hilbert space [3] , it is frequently used for the diagonalization of large matrices. As explained in detail below, one additional vector is necessary for obtaining the eigenvector.
The principle of the Lanczos method is based on the power method. In the power method, by successively applying the Hamiltonian \(\hat{\mathcal H }\) to an arbitrary vector \(\boldsymbol{x}_{0}\), we generate \(\hat{\mathcal H }^{n}\boldsymbol{x}_{0}\). The resulting space \(\mathcal{K}_{n+1}(\hat{\mathcal H },\boldsymbol{x}_{0})=\{\boldsymbol{x}_{0},\hat{\mathcal H }^{1}\boldsymbol{x}_{0},\dots,\hat{\mathcal H }^{n}\boldsymbol{x}_{0}\}\) is called the Krylov subspace. The initial vector is represented as a superposition of the eigenvectors \(\boldsymbol{e}_{i}\) (with eigenvalues \(E_{i}\)) of \(\hat{\mathcal H }\) as

\[\boldsymbol{x}_{0}=\sum_{i}a_{i}\boldsymbol{e}_{i}.\]
Here, \(E_{0}\) denotes the eigenvalue with the maximum absolute value. We note that all the eigenvalues are real numbers because the Hamiltonian is Hermitian. By applying \(\hat{\mathcal H }^{n}\) to the initial vector, we obtain the relation

\[\hat{\mathcal H }^{n}\boldsymbol{x}_{0}=E_{0}^{n}\left[a_{0}\boldsymbol{e}_{0}+\sum_{i\neq 0}a_{i}\left(\frac{E_{i}}{E_{0}}\right)^{n}\boldsymbol{e}_{i}\right].\]
This relation indicates that the eigenvector associated with \(E_{0}\) becomes dominant for sufficiently large \(n\). In the Lanczos method, we obtain the eigenvalues and eigenvectors by performing an appropriate transformation within the generated Krylov subspace.
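As a concrete illustration, the power method can be sketched in a few lines of Python. The small matrix below is an arbitrary stand-in for \(\hat{\mathcal H }\), not a physical Hamiltonian:

```python
import numpy as np

# A small symmetric matrix standing in for the Hamiltonian
# (illustrative values, not a physical model).
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Power method: repeatedly apply H and normalize.  The eigenvector
# whose eigenvalue has the largest absolute value grows fastest,
# so x converges to it.
x = np.array([1.0, 0.0, 0.0])      # arbitrary initial vector x_0
for _ in range(200):
    x = H @ x
    x /= np.linalg.norm(x)

# The Rayleigh quotient approximates the dominant eigenvalue E_0.
e0 = x @ H @ x
```

The normalization at every step only prevents overflow; it does not change the direction the iteration converges to.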
In the Lanczos method, we successively generate an orthonormal basis \(\boldsymbol{v}_{0},\dots,\boldsymbol{v}_{n-1}\) of the Krylov subspace \(\mathcal{K}_{n}(\hat{\mathcal H },\boldsymbol{x}_{0})\). We define the initial conditions as \(\boldsymbol{v}_{0} =\boldsymbol{x}_{0}/|\boldsymbol{x}_{0}|\), \(\beta_{0}=0\), and \(\boldsymbol{v}_{-1}=0\). From these initial conditions, the orthonormal basis is generated by the recursion

\[\begin{aligned}
\alpha_{k} &= \boldsymbol{v}_{k}^{\dagger}\hat{\mathcal H }\boldsymbol{v}_{k},\\
\boldsymbol{w} &= \hat{\mathcal H }\boldsymbol{v}_{k}-\alpha_{k}\boldsymbol{v}_{k}-\beta_{k}\boldsymbol{v}_{k-1},\\
\beta_{k+1} &= |\boldsymbol{w}|,\qquad
\boldsymbol{v}_{k+1} = \boldsymbol{w}/\beta_{k+1}.
\end{aligned}\]

From these definitions, it is obvious that \(\alpha_{k}\) and \(\beta_{k}\) are real numbers.
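The generation of the orthonormal basis described above can be sketched as follows (a minimal Python illustration for the real-symmetric case; the function name and its arguments are illustrative, not part of \({\mathcal H}\Phi\)):

```python
import numpy as np

def lanczos(H, v0, m):
    """Generate the orthonormal Lanczos basis V and the tridiagonal
    coefficients alpha, beta from the Krylov subspace K_m(H, v0)
    (real-symmetric sketch; the complex Hermitian case additionally
    needs conjugation in the inner products)."""
    n = H.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)              # beta[0] corresponds to beta_0 = 0
    V[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(m):
        w = H @ V[:, k]
        alpha[k] = V[:, k] @ w      # alpha_k = v_k^T H v_k
        w -= alpha[k] * V[:, k]
        if k > 0:
            w -= beta[k] * V[:, k - 1]
        if k + 1 < m:
            beta[k + 1] = np.linalg.norm(w)
            V[:, k + 1] = w / beta[k + 1]
    return V, alpha, beta
```

In exact arithmetic the three-term recursion keeps all \(\boldsymbol{v}_{k}\) mutually orthogonal; in floating-point arithmetic orthogonality slowly degrades over many iterations, which is a known property of the plain Lanczos method.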
In the subspace spanned by this orthonormal basis, the Hamiltonian is transformed as

\[T_{n}=V_{n}^{\dagger}\hat{\mathcal H }V_{n}.\]

Here, \(V_{n}\) is the matrix whose column vectors are \(\boldsymbol{v}_{i}\ (i=0,1,\dots,n-1)\), and \(T_{n}\) is a tridiagonal matrix whose diagonal elements are \(\alpha_{i}\) and whose subdiagonal elements are \(\beta_{i}\) (note that \(V_{n}^{\dagger}V_{n}=I\), where \(I\) is the identity matrix). It is known that the eigenvalues of \(\hat{\mathcal H }\) are well approximated by the eigenvalues of \(T_{n}\) for sufficiently large \(n\). The original eigenvectors of \(\hat{\mathcal H }\) are obtained as \(\boldsymbol{e}_{i}=V_{n}\tilde{\boldsymbol{e}}_{i}\), where \(\tilde{\boldsymbol{e}}_{i}\) denotes the eigenvectors of \(T_{n}\). Thus, once \(V_{n}\) is available, we can obtain the eigenvectors of \(\hat{\mathcal H }\) from the Lanczos method. In actual calculations, however, it is difficult to keep \(V_{n}\) in memory, because its dimension is large [dimension of \(V_{n}\) = (dimension of the total Hilbert space) \(\times\) (number of Lanczos iterations)]. Therefore, after obtaining the eigenvalues, we perform the same Lanczos calculation again to obtain the eigenvectors: in the first Lanczos calculation, we keep only \(\tilde{\boldsymbol{e}}_{i}\), whose dimension is small [4] , and in the second calculation, we regenerate the basis vectors \(\boldsymbol{v}_{k}\) and accumulate \(\boldsymbol{e}_{i}=V_{n}\tilde{\boldsymbol{e}}_{i}\) on the fly.
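The two-pass procedure described above can be sketched as follows (an illustrative Python version for the real-symmetric case; function and variable names are assumptions for the sketch, not \({\mathcal H}\Phi\)'s actual implementation):

```python
import numpy as np

def lanczos_eigvec(H, x0, m):
    """Two-pass Lanczos for the lowest eigenpair (real-symmetric
    sketch).  The first pass keeps only the scalars alpha, beta;
    the second pass regenerates the basis vectors v_k one by one and
    accumulates e = sum_k (e_tilde)_k v_k, so the full matrix V_n is
    never stored."""
    def step(vk, vkm1, bk):
        # One Lanczos step: returns alpha_k, beta_{k+1}, v_{k+1}.
        w = H @ vk
        a = vk @ w
        w = w - a * vk - bk * vkm1
        b = np.linalg.norm(w)
        return a, b, w / b if b > 1e-12 else w

    n = H.shape[0]
    v0 = x0 / np.linalg.norm(x0)

    # First pass: collect the tridiagonal coefficients only.
    alpha, beta = [], [0.0]
    vk, vkm1 = v0.copy(), np.zeros(n)
    for _ in range(m):
        a, b, vnext = step(vk, vkm1, beta[-1])
        alpha.append(a)
        beta.append(b)
        vk, vkm1 = vnext, vk

    # Diagonalize the small m-by-m tridiagonal matrix T_m.
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    theta, S = np.linalg.eigh(T)
    et = S[:, 0]                    # e_tilde for the lowest Ritz value

    # Second pass: rebuild each v_k and accumulate e = V_m e_tilde.
    e = np.zeros(n)
    vk, vkm1 = v0.copy(), np.zeros(n)
    for k in range(m):
        e += et[k] * vk
        _, _, vnext = step(vk, vkm1, beta[k])
        vk, vkm1 = vnext, vk
    return theta[0], e / np.linalg.norm(e)
```

The second pass reproduces exactly the same vectors as the first, because the recursion is deterministic; this is why only the small vector \(\tilde{\boldsymbol{e}}_{i}\) needs to be kept between the two passes.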
In the Lanczos method, we obtain accurate eigenvalues near the maximum and minimum eigenvalues within a few hundred or a few thousand Lanczos iterations. This number of iterations is much smaller than the dimension of the total Hilbert space.
We note that the errors of the maximum and minimum eigenvalues are shown to decrease exponentially as a function of the number of Lanczos iterations (for details, see Ref. [2] ).
5.1.2. Inverse iteration method
From an approximate eigenvalue \(E_{n}\), by successively applying \((\hat{\mathcal H }-E_{n})^{-1}\) to an initial vector \(\boldsymbol{y}_{0}\), we can obtain an accurate eigenvector for \(E_{n}\).
From \((\hat{\mathcal H }-E_{n})^{-1}\boldsymbol{y}_{0}\), we obtain the linear simultaneous equations

\[(\hat{\mathcal H }-E_{n})\boldsymbol{y}_{k}=\boldsymbol{y}_{k-1}.\]
By solving these equations with the conjugate gradient (CG) method, we obtain the eigenvector. From the obtained eigenvector, we can calculate the energy and correlation functions. We note that four additional vectors are necessary to perform the CG method; for a large system size, it may be impossible to allocate memory for these additional vectors.
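A minimal sketch of inverse iteration in Python follows; \({\mathcal H}\Phi\) solves the linear equations with the CG method, whereas this sketch uses a dense direct solve only to stay short, and the matrix is an arbitrary stand-in for the Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random symmetric matrix standing in for the Hamiltonian.
n = 40
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
exact = np.linalg.eigvalsh(H)

# An approximate eigenvalue, e.g. from a Lanczos run; here the exact
# ground-state energy is perturbed to mimic that situation.
E = exact[0] + 1e-3

# Inverse iteration: solve (H - E) y_k = y_{k-1} and normalize.
# (H - E) strongly amplifies the eigenvector whose eigenvalue is
# closest to the shift E.
y = rng.standard_normal(n)
y /= np.linalg.norm(y)
for _ in range(5):
    y = np.linalg.solve(H - E * np.eye(n), y)
    y /= np.linalg.norm(y)

e0 = y @ H @ y        # Rayleigh quotient of the refined vector
```

Each application of \((\hat{\mathcal H }-E_{n})^{-1}\) multiplies the component of the eigenvector nearest \(E_{n}\) by a factor \(1/|E_{n}-E_{i}|\), which is why a good eigenvalue estimate gives very fast convergence of the eigenvector.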
5.1.3. Details of implementation
Initial vector
For the Lanczos method, an initial vector is specified with initial_iv \((\equiv r_s)\) defined in an input file for Standard mode or a ModPara file for Expert mode. The type of the initial vector, real or complex, is selected by InitialVecType in a ModPara file.
For the canonical ensemble and initial_iv \(\geq 0\), the position of the nonzero component of the initial vector is given by

(5.6)
\[(N_{\rm dim}/2 + r_s ) \% N_{\rm dim},\]

where \(N_{\rm dim}\) is the dimension of the total Hilbert space and \(N_{\rm dim}/2\) is added to avoid selecting a special Hilbert space for the default value initial_iv \(=1\). When the type of the initial vector is real, the coefficient of this component is \(1\); when it is complex, the coefficient is \((1+i)/\sqrt{2}\).

For the grand canonical ensemble or initial_iv \(<0\), the initial vector is generated randomly, i.e., the coefficients of all its components are given by random numbers. The seed is calculated as

(5.7)
\[123432+|r_s|,\]

where \(r_s\) is the number given in the input file and \(n_{\rm run}\) is the number of runs; the maximum value of \(n_{\rm run}\) is defined by NumAve in an input file for Standard mode or a ModPara file for Expert mode. Random numbers are generated by using the SIMD-oriented Fast Mersenne Twister (dSFMT) [5] .
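The selection rules above can be summarized in a short sketch (illustrative Python; the function name is hypothetical and NumPy's random generator stands in for dSFMT):

```python
import numpy as np

def initial_vector(ndim, rs, complex_type=True):
    """Sketch of the initial-vector rules described above
    (`initial_vector` and its arguments are illustrative names,
    not HPhi's actual identifiers)."""
    v = np.zeros(ndim, dtype=complex)
    if rs >= 0:
        # Canonical ensemble: a single component of the Hilbert space
        # is set, offset by ndim/2 to avoid a special state for rs = 1.
        idx = (ndim // 2 + rs) % ndim
        v[idx] = (1 + 1j) / np.sqrt(2) if complex_type else 1.0
    else:
        # Grand canonical ensemble or rs < 0: every component random.
        # NumPy's generator replaces dSFMT in this sketch.
        rng = np.random.default_rng(123432 + abs(rs))
        v = rng.standard_normal(ndim) + 1j * rng.standard_normal(ndim)
        v /= np.linalg.norm(v)
    return v
```

For example, with \(N_{\rm dim}=10\) and \(r_s=1\), the nonzero component sits at index \((10/2+1)\,\%\,10 = 6\).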
Convergence condition
In \({\mathcal H}\Phi\), we use dsyev (a LAPACK routine) for the diagonalization of \(T_{n}\). We use the energy of the first excited state of \(T_{n}\) as the criterion of convergence. In the standard setting, after five Lanczos steps, we diagonalize \(T_{n}\) every two Lanczos steps. If the energy of the first excited state coincides with the previous one within the specified accuracy, the Lanczos iteration finishes.
The accuracy of the convergence can be specified by LanczosEps (in a ModPara file for Expert mode).
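The convergence procedure described above can be sketched as follows (illustrative Python for the real-symmetric case; parameter names are assumptions for the sketch, and NumPy's dense eigensolver stands in for dsyev):

```python
import numpy as np

def lanczos_until_converged(H, v0, eps=1e-6, start=5, stride=2, mmax=100):
    """Run Lanczos steps; from step `start` on, diagonalize T_m every
    `stride` steps and stop once the first-excited-state energy of
    T_m changes by less than `eps` between two successive checks
    (parameter names are illustrative)."""
    n = H.shape[0]
    alpha, beta = [], [0.0]
    vk, vkm1 = v0 / np.linalg.norm(v0), np.zeros(n)
    e1_old = np.inf
    ev = None
    for m in range(1, mmax + 1):
        # One Lanczos step: extend the tridiagonal coefficients.
        w = H @ vk
        a = vk @ w
        w = w - a * vk - beta[-1] * vkm1
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        vk, vkm1 = w / b, vk
        # Convergence check on the small tridiagonal matrix T_m.
        if m >= start and (m - start) % stride == 0:
            T = (np.diag(alpha) + np.diag(beta[1:m], 1)
                 + np.diag(beta[1:m], -1))
            ev = np.linalg.eigvalsh(T)
            if abs(ev[1] - e1_old) < eps:
                return ev, m
            e1_old = ev[1]
    return ev, mmax
```

Diagonalizing \(T_{m}\) is cheap because its dimension equals the number of Lanczos steps taken so far, not the dimension of the Hilbert space, so checking every two steps costs little.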
After obtaining the eigenvalue, we again perform the Lanczos iteration to obtain the eigenvector. From the eigenvector \(|n\rangle\), we calculate the energy \(E_{n}=\langle n|\hat{\mathcal H }|n\rangle\) and the variance \(\Delta=\langle n|\hat{\mathcal H }^{2}|n\rangle -(\langle n|\hat{\mathcal H }|n\rangle)^2\). If \(E_{n}\) coincides with the eigenvalue obtained by the Lanczos iteration and \(\Delta\) is smaller than the specified value, we finish the diagonalization.
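The energy and variance check can be illustrated as follows (Python sketch; an exact eigenvector of a random stand-in matrix plays the role of the Lanczos result):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
H = (A + A.T) / 2

# Take an (exact) eigenvector as a stand-in for the Lanczos result.
w, U = np.linalg.eigh(H)
v = U[:, 0]

# Energy and variance of the candidate eigenvector: for a true
# eigenvector the variance <H^2> - <H>^2 vanishes.
Hv = H @ v
E = v @ Hv
delta = Hv @ Hv - E**2
```

For an approximate eigenvector, \(\Delta\) measures how far the state is from a true eigenstate, which is why it serves as the final quality check.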
If the accuracy of the Lanczos method is not sufficient, we perform the CG method to obtain the eigenvector. In the standard setting, the eigenvector obtained by the Lanczos method is used as the initial vector of the CG method; this frequently accelerates the convergence.