Determinant-free fermionic wave function using feed-forward neural networks

We propose a general framework for finding the ground state of many-body fermionic systems by using feed-forward neural networks. The anticommutation relation for fermions is usually implemented in a variational wave function by the Slater determinant (or Pfaffian), which is a computational bottleneck because of the numerical cost of $O(N^3)$ for $N$ particles. We bypass this bottleneck by explicitly calculating the sign changes associated with particle exchanges in real space and using fully connected neural networks to optimize the remaining part of the wave function. This reduces the computational cost to $O(N^2)$ or less. We show that the accuracy of the approximation can be improved by optimizing the "variance" of the energy simultaneously with the energy itself. We also find that a reweighting method in Monte Carlo sampling can stabilize the calculation. These improvements can be applied to other approaches based on variational Monte Carlo methods. Moreover, we show that the accuracy can be further improved by using the symmetry of the system, the representative states, and an additional neural network implementing a generalized Gutzwiller-Jastrow factor. We demonstrate the efficiency of the method by applying it to a two-dimensional Hubbard model.

While such studies using NNs are successful for bosonic systems [18,19], they have encountered some difficulties for fermionic systems and frustrated spin systems due to the complicated sign structures of the wave functions [20]. For instance, it was reported that the many-variable variational Monte Carlo (mVMC) calculation can achieve higher accuracy than NN-based ones for highly frustrated spin systems [21]. In addition, in most cases the sign structure of the wave function is implemented by the Slater determinant (or Pfaffian) and the remaining part of the wave function is approximated by NNs [5,12,14], but it is computationally expensive to extend such approaches to large system sizes, since the numerical cost of calculating the Slater determinant is O(N^3) for N-particle systems. Most recently, methods which do not rely on the Slater determinant have been developed [7,16].
In this paper, we propose a general framework to approximate fermionic many-body wave functions by NNs without using the Slater determinant. In our method, the sign changes of the wave function are explicitly calculated for each particle exchange in real space, while the remaining part is approximated by the NNs. This reduces the numerical cost from O(N^3) to O(N^2) or less. In addition, we successfully improve and stabilize the calculations by introducing the following developments. (i) We optimize not only the energy but also the "variance" of the energy simultaneously. (ii) We use a reweighting technique with a modified Monte Carlo (MC) weight to stabilize the calculations of the local energy. (iii) We take into account only the "representative" states among the symmetry-related ones, which allows us to use flexible NN architectures; we employ the FCNN in the present study. (iv) We incorporate not only the number of particles on each degree of freedom but also their products as the inputs to the NN. (v) We prepare an additional NN that is a generalization of the Gutzwiller-Jastrow (GJ) factor. We apply the above framework to the Hubbard model on a two-dimensional square lattice, and demonstrate that the numerical accuracy can be better than the mVMC result for a modest number of electrons. This paper is organized as follows. In Sec. II, we introduce the method. After describing the overall framework in Sec. II A, we introduce each component of the framework one by one from Sec. II B to Sec. II F. We also discuss the numerical cost in Sec. II G. In Sec. III, we present the benchmark results for the Hubbard model. After describing the setup of the calculations in Sec. III A, we demonstrate the efficiency of our technique by changing the parameters in the model and method in Secs. III B-III G. Finally, we give the concluding remarks in Sec. IV.

A. Overall framework
In this paper, we present our framework for discrete lattice systems, while the method is generic and can be straightforwardly extended to systems in continuous space, such as an electron gas and molecules. First, we denote the state of N fermionic particles on a lattice as

|x⟩ = ĉ†_{r_1σ_1} ĉ†_{r_2σ_2} ⋯ ĉ†_{r_Nσ_N} |0⟩,  (1)

where r_i and σ_i denote the spatial coordinate and the internal degrees of freedom, such as spin, of the ith particle, respectively. In Eq. (1), {(r_i, σ_i)} are arranged in ascending order of an integer I(r, σ) ∈ [0, N_tot) which labels the one-particle states (r, σ); N_tot = N_site N_int, where N_site and N_int are the numbers of the lattice sites and the internal degrees of freedom, respectively. For a given Hamiltonian H, the local energy for each state |x⟩ is defined as

E^loc_θ(x) = ⟨x|H|ψ_θ⟩ / ⟨x|ψ_θ⟩ = Σ_{x'} ⟨x|H|x'⟩ ψ_θ(x') / ψ_θ(x),  (2)

where |ψ_θ⟩ is an arbitrary wave function. Then, the expectation value of the total energy can be written as

E_θ = Σ_x p_θ(x) E^loc_θ(x),  (3)

where p_θ(x) is the probability distribution of the state |x⟩ given by

p_θ(x) = |ψ_θ(x)|² / Σ_{x'} |ψ_θ(x')|²,  (4)

with

ψ_θ(x) = ⟨x|ψ_θ⟩.  (5)

From the variational principle, the relation E_θ ≥ E_GS always holds, where E_GS is the true ground-state energy.
In addition, we define the "variance" of the energy by

V_θ = Σ_x p_θ(x) V^loc_θ(x),  (6)

with

V^loc_θ(x) = |E^loc_θ(x) − E_θ|²,  (7)

which satisfies the relation V_θ ≥ 0; the equality holds for the true ground state, where all the local energies are equal to E_GS. In this framework, we approximate the wave function ψ_θ(x) in Eq. (5) by using NNs with the parameters θ, and optimize it so as to minimize not only the energy E_θ but also V_θ simultaneously; namely, we minimize the loss function defined as

L_θ = (1 − λ)E_θ + λV_θ,  (8)

where λ is a hyperparameter that we tune during the optimization process (0 ≤ λ < 1). The overall framework of our method is shown in Fig. 1. First, we generate many states {|x⟩} by using the Markov chain MC (MCMC) sampling (Sec. II B). Next, we generate the states {|x'⟩} for {|x⟩}, satisfying ⟨x|H|x'⟩ ≠ 0, which are necessary to calculate the local energy E^loc_θ(x) in Eq. (2) (Sec. II C). Then, we reduce them to the representative states {|x_rep⟩} and {|x'_rep⟩} through the symmetry operations allowed in the given system, and at the same time, we calculate the sign changes {s_x} and {s_x'} caused by the particle exchanges under the symmetry operations (Sec. II D). In the next step, we calculate {ψ_θ(x_rep)} and {ψ_θ(x'_rep)} by using the NNs composed of two FCNNs with different roles (Sec. II E). Then, we calculate E_θ and V_θ by using ψ_θ(x) = ψ_θ(x_rep)s_x and ψ_θ(x') = ψ_θ(x'_rep)s_x'. Finally, we compute the derivatives of the loss function with respect to θ, ∇_θL_θ, through backpropagation, and update the parameters θ (Sec. II F). The updated NNs are used to generate new states {|x⟩} in the next loop. This sequence is repeated until the energy converges.
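As a minimal sketch of how the sampled local energies combine into the energy, the "variance", and the loss, assuming samples drawn from p_θ(x) and a convex combination of the two objectives (the helper name is hypothetical, not the authors' code):

```python
import numpy as np

def energy_variance_loss(e_loc, lam):
    """Estimate E_theta, V_theta, and the loss (1 - lam) * E + lam * V
    from local energies of samples drawn according to p_theta(x)."""
    e_loc = np.asarray(e_loc, dtype=complex)
    energy = e_loc.mean().real                            # E_theta = <E_loc>
    variance = np.mean(np.abs(e_loc - e_loc.mean())**2)   # V_theta = <|E_loc - E_theta|^2>
    loss = (1.0 - lam) * energy + lam * variance
    return energy, variance, loss

# identical local energies (an eigenstate) give zero variance:
e, v, l = energy_variance_loss([-2.0, -2.0, -2.0], lam=0.2)
```

For an eigenstate, every sample has the same local energy, so the variance term vanishes and the loss reduces to the energy term alone.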

B. Markov chain Monte Carlo sampling and reweighting
In the conventional variational MC method, the states {|x⟩} are generated by the MCMC sampling with |ψ_θ(x)|² as the weight, and the energy is calculated as

E_θ ≈ (1/N_MCS) Σ_{x∈samples} E^loc_θ(x),  (9)

where N_MCS is the number of the MCMC samples. In this case, however, when a sample has a small value of |ψ_θ(x)|, E^loc_θ(x) in Eq. (2) becomes large, which can make the calculation unstable. This indeed occurs especially in fermionic systems, where ψ_θ(x) changes its sign across zero during the optimization process. To circumvent this instability, we perform the MCMC sampling by using |ψ_θ(x)|^μ (0 < μ ≤ 1) as the weight and adopt the reweighting method; namely, we calculate the energy as

E_θ ≈ Σ_{x∈samples} |ψ_θ(x)|^{2−μ} E^loc_θ(x) / Σ_{x∈samples} |ψ_θ(x)|^{2−μ}.  (10)

This trick stabilizes the calculation, since the reweighting factor |ψ_θ(x)|^{2−μ} suppresses the divergence of E^loc_θ(x) for small |ψ_θ(x)|. We employ the Metropolis algorithm for the MCMC sampling.
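The reweighted estimator can be sketched as follows (a hypothetical helper for illustration): when the samples are drawn with weight |ψ_θ(x)|^μ, each sample carries the correction factor |ψ_θ(x)|^{2−μ}, so that the estimator still targets the |ψ|²-weighted average.

```python
import numpy as np

def reweighted_energy(psi, e_loc, mu=1.0):
    """Reweighted estimate of E_theta from samples x drawn with weight |psi(x)|**mu.
    The per-sample factor |psi(x)|**(2 - mu) restores the |psi|^2-weighted average;
    mu = 2 reduces to the conventional sample mean."""
    w = np.abs(np.asarray(psi))**(2.0 - mu)
    e_loc = np.asarray(e_loc, dtype=complex)
    return (np.sum(w * e_loc) / np.sum(w)).real
```

Note that the product w * e_loc stays finite even when |ψ_θ(x)| is small, which is the source of the stabilization described above.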

C. Generation of {|x'⟩}
The calculation of E^loc_θ(x) requires the states {|x'⟩} for which the matrix elements ⟨x|H|x'⟩ are nonzero, as

E^loc_θ(x) = Σ_{x'} ⟨x|H|x'⟩ ψ_θ(x') / ψ_θ(x).  (11)

We can generate {|x'⟩} by simply operating H on |x⟩. In these operations, however, the particle order in |x'⟩ generally violates the ordering in Eq. (1). In such cases, we need to rearrange the order to follow the representation in Eq. (1); the numerical cost is O(N log N). At the same time, the overall sign is changed by (−1)^P, where P is the number of particle exchanges in the rearrangement of |x'⟩. Note that when |x'⟩ is generated by a single-particle hopping, the numerical cost is reduced to O(log N). In this case, P is given by the change in the position, in the ordered list, of the particle associated with the hopping.
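The sign (−1)^P can be accumulated while restoring the ordering. The sketch below uses an insertion sort, whose adjacent swaps map one-to-one onto fermionic transpositions; this is an O(N²) illustration, whereas the O(N log N) cost quoted above corresponds to counting inversions with, e.g., a merge sort.

```python
def reorder_sign(labels):
    """Sort one-particle labels I(r, sigma) into ascending order and return
    (sorted_labels, sign), where sign = (-1)**P and P counts the adjacent
    swaps, i.e., the fermionic transpositions needed for the reordering."""
    labels = list(labels)
    sign = 1
    for i in range(1, len(labels)):
        j = i
        while j > 0 and labels[j - 1] > labels[j]:
            labels[j - 1], labels[j] = labels[j], labels[j - 1]  # one transposition
            sign = -sign
            j -= 1
    return labels, sign
```

For example, moving one particle past two others picks up (−1)² = +1, while a single exchange gives −1.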

D. Symmetry operation and representative state
The wave functions for the states connected by the symmetries of the system, such as the point-group and translational symmetries, are identical up to phase factors. Therefore, it is sufficient to take into account only one of such equivalent states in the actual calculations. In order to distinguish the equivalent states, we introduce the integer

m_x = Σ_{(r,σ)} n_{rσ} 2^{N_tot − 1 − I(r,σ)},  (12)

where n_{rσ} = ⟨x|n̂_{rσ}|x⟩ with n̂_{rσ} = ĉ†_{rσ}ĉ_{rσ}; ĉ†_{rσ} (ĉ_{rσ}) is the creation (annihilation) operator of the fermion with r and σ. We call the state with the largest value of m_x the "representative state" among the symmetry-related ones, and denote it as |x_rep⟩. For each |x⟩, we retain the sign change s_x from |x⟩ to |x_rep⟩, which consists of the product of the sign changes caused by the particle exchanges and the character of the symmetry operation.
In the above calculations for the representative states, the symmetry operations in each point group are implemented in a separate manner. For example, in the case of C 4v symmetry, all the symmetry operations can be constructed by combinations of the fourfold rotation C 4 and the mirror operations σ v ; hence, we implement the algorithms for these two separately. The sign change is calculated by multiplying the characters of the symmetry operations according to the irreducible representation of the point group. For example, for the irreducible representation B 2 in C 4v , the sign is obtained by multiplying −1 (+1) for each operation of C 4 (σ v ). On the other hand, for the translational symmetry operations, it is sufficient to take into account the state in which one of the particles occupies the state with I(r, σ) = 0, since we take into account the representative state with the largest m x in the calculations.
The bottleneck of the present computation lies in the calculation of m_x for all the states generated by the symmetry operations, whose total computational cost is O(N^2). In the simulations for finite-size systems, the irreducible representation of the ground state depends on the parameters, such as the system size and the number of particles. Therefore, we perform the calculations independently for all the irreducible representations that do not have degeneracy, and select the ground state by comparing the energies among them. Note that our framework is also applicable to systems without any symmetry, such as low-symmetry molecules and disordered systems.
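A sketch of the representative-state search restricted to translations, assuming m_x is the integer whose binary digits are the occupations ordered by I(r, σ); the internal degrees of freedom and the sign bookkeeping s_x are omitted for brevity:

```python
import numpy as np

def representative(occ):
    """Among all translations of an occupation pattern occ[x, y] on an
    Lx x Ly lattice with PBC, return the shift whose flattened bit pattern
    encodes the largest integer (our stand-in for m_x) and that integer."""
    Lx, Ly = occ.shape
    best, best_key = None, -1
    for dx in range(Lx):
        for dy in range(Ly):
            shifted = np.roll(np.roll(occ, dx, axis=0), dy, axis=1)
            key = int("".join(map(str, shifted.flatten())), 2)  # binary encoding
            if key > best_key:
                best_key, best = key, shifted
    return best, best_key
```

Point-group operations would be layered on top of the translations in the same way, multiplying the corresponding characters into s_x.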
E. Neural network architecture

Figure 2 shows the architecture of the NNs in this study. We use two FCNNs to represent the wave function. The use of FCNNs brings at least two advantages: (i) they can be tuned irrespective of the lattice structure of the system, unlike CNNs and GCNs, and (ii) they can capture spatially long-range correlations within reasonable computational time and memory consumption. We tried other architectures, such as ResNet [22] and the self-attention network [23], but did not find any improvement. Below, we describe the details of each FCNN used in the present study.

FIG. 2. Architecture of the NNs, composed of two FCNNs with different roles. One is the main NN, which outputs two real numbers corresponding to the real and imaginary parts of the wave function from the input of the particle numbers at each site, n_{rσ}, and their products n_{rσ}n_{r'σ'}. The other is the GJ-type NN, an extension of the GJ factor, which outputs a real number g from the input of the correlation factors for the particle numbers in Eq. (13).
One of the FCNNs, which we call the main NN, is used to represent the real and imaginary parts of the wave function. As the input, we use a one-dimensional vector [{n_{rσ}}, {n_{rσ}n_{r'σ'}}], which contains the particle number at each site, n_{rσ}, and the products of the particle numbers, n_{rσ}n_{r'σ'}. Since the dimensions of {n_{rσ}} and {n_{rσ}n_{r'σ'}} are N_tot and N_tot(N_tot − 1)/2, respectively, the total dimension of the input vector is N_tot(N_tot + 1)/2.
In order to further take into account the correlation effects, we prepare the other FCNN, which we call the GJ-type NN. The input is the correlation factor for the number of particles defined by

ñ_{δσσ'} = Σ_r n_{rσ} n_{r+δ,σ'},  (13)

where 0 ≤ ñ_{δσσ'} ≤ N, and the output is a real number g, which is incorporated into the wave function as the factor e^{−ReLU(g)}, as shown in Fig. 2. This NN can be regarded as an extension of the GJ factor [24,25]. A unique feature of our NN architecture is that not only the correlation effect but also the sign structure of the fermionic wave function is represented by the NN. In contrast, most previous studies have used NNs to incorporate the correlation effect, with the use of the Slater determinant or the Pfaffian to implement the fermionic anticommutation relations. Thus, our framework would be more flexible in optimizing the sign structure of the wave function than those based on the Slater determinant or the Pfaffian. Another feature is that not only the particle numbers but also their products, such as n_{rσ}n_{r'σ'} and ñ_{δσσ'}, are incorporated into the input, which improves the optimization. One can regard this as "pre-processing" in the context of machine learning. It is easily implemented by employing FCNNs, where the dimension of the input can be tuned flexibly, in contrast to CNNs and GCNs.
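The two inputs can be assembled as in the sketch below (hypothetical helper names; for brevity the correlation factor is illustrated on a one-dimensional ring rather than the square lattice of the benchmarks):

```python
import numpy as np
from itertools import combinations

def main_nn_input(n):
    """Build the main-NN input [ {n_i}, {n_i * n_j} ] from the occupation
    vector n of length N_tot; the output dimension is N_tot*(N_tot + 1)/2."""
    n = np.asarray(n)
    pairs = [n[i] * n[j] for i, j in combinations(range(len(n)), 2)]
    return np.concatenate([n, pairs])

def correlation_factor(occ_up, occ_dn):
    """Correlation factors sum_r n_{r,s} n_{r+delta,s'} on a 1D ring,
    serving as the GJ-type-NN input (a sketch of the paper's Eq. (13))."""
    out = {}
    L = len(occ_up)
    for s1, a in (("up", occ_up), ("dn", occ_dn)):
        for s2, b in (("up", occ_up), ("dn", occ_dn)):
            for delta in range(L):
                out[(delta, s1, s2)] = sum(a[r] * b[(r + delta) % L] for r in range(L))
    return out
```

For three orbitals, the main-NN input has 3 + 3 = 6 entries, matching N_tot(N_tot + 1)/2.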

F. Update θ
The parameters θ are updated so as to minimize the loss function L_θ in Eq. (8). This is done by calculating the θ derivative of L_θ by using

∇_θ⟨O⟩_θ = 2 Re[⟨O^loc_θ(x) ∇_θ ln ψ*_θ(x)⟩ − ⟨O⟩_θ ⟨∇_θ ln ψ*_θ(x)⟩],  (14)

where O denotes E or V; see Appendix A for the derivation. The derivatives in each NN layer are computed by backpropagation. For the update, we choose an appropriate algorithm, such as the stochastic gradient descent and Adam [26], depending on the system and the layer architecture of the NN.
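For O = E, the gradient estimator takes the standard VMC form sketched below, assuming per-sample arrays of local energies and log-derivatives ∂ ln ψ_θ/∂θ_p (the paper's precise expression, covering the variance term as well, is derived in its Appendix A):

```python
import numpy as np

def grad_energy(e_loc, dlogpsi):
    """Stochastic gradient of E_theta over N samples:
    grad E = 2 Re[ <E_loc * conj(dlogpsi)> - <E_loc> <conj(dlogpsi)> ],
    where dlogpsi[k, p] = d ln psi_theta(x_k) / d theta_p."""
    e_loc = np.asarray(e_loc, dtype=complex)
    o = np.conj(np.asarray(dlogpsi, dtype=complex))
    return 2.0 * np.real(np.mean(e_loc[:, None] * o, axis=0)
                         - e_loc.mean() * o.mean(axis=0))
```

As a sanity check, an eigenstate (constant local energy over all samples) gives a vanishing gradient, consistent with the variational stationarity of the ground state.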

G. Computational cost
Let us discuss the computational cost of calculating ψ_θ(x') from |x'⟩, which corresponds to the calculation of the Slater determinant (or Pfaffian) in the previous studies. As mentioned in Sec. II D, the bottleneck for preparing |x'⟩ is in the transformation from |x'⟩ to |x'_rep⟩, which is O(N^2). Meanwhile, the preparations of {n_{rσ}n_{r'σ'}} and ñ_{δσσ'} in Sec. II E both require a cost of O(N_tot^2). Thus, the computational cost in our present scheme is O(N^2). This is smaller than that for the Slater determinant, O(N^3). Note, however, that the overall cost of the total optimization depends on the numbers of MCMC samples and NN parameters necessary for each system.
It is possible to further reduce the computational cost in the present method. As mentioned in Sec. II D, the symmetry operations are not mandatory. One may also omit {n_{rσ}n_{r'σ'}} from the input of the main NN. In addition, the GJ-type NN can be omitted. With these simplifications, the bottleneck remains in the generation of |x'⟩, which is O(N log N). As mentioned in Sec. II C, this is further reduced to O(log N) when |x'⟩ is generated only by single-particle hoppings. In this case, the bottleneck is in the calculation of {n_{rσ}} in Sec. II E, which is O(N).

III. RESULTS
In this section, we present the benchmark results by applying our method to the Hubbard model on a square lattice. The Hamiltonian reads

H = −t Σ_{⟨r,r'⟩,σ} (ĉ†_{rσ}ĉ_{r'σ} + H.c.) + U Σ_r n̂_{r↑}n̂_{r↓},  (15)

where ⟨r, r'⟩ denotes the nearest-neighbor pairs, and σ = ↑ or ↓ (N_int = 2). We set t = 1 and U = 4 throughout the following calculations. After describing the computational details in Sec. III A, we first present the benchmark for the system with N_site = 4 × 4 and N = 10 in Secs. III B-III F, in comparison with the results obtained by the exact diagonalization (ED). Then, we show the results for N_site = 6 × 6 and N = 10 in Sec. III G, in comparison with the mVMC result. In both cases, we impose the periodic boundary conditions, where the noninteracting ground states are in the closed-shell configurations. We find that the point-group symmetry of the lowest-energy state is always A_1, as discussed in Sec. III E, and hence, we show the results for A_1 in the other sections.
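The diagonal (onsite Coulomb) part of the local energy and the bond list entering the hopping term can be sketched as follows (hypothetical helpers; the off-diagonal part additionally requires the connected states and signs of Sec. II C):

```python
import numpy as np

def square_lattice_bonds(L):
    """Nearest-neighbor bonds (unordered site pairs) of an L x L square lattice with PBC."""
    bonds = []
    for x in range(L):
        for y in range(L):
            r = x * L + y
            bonds.append((r, ((x + 1) % L) * L + y))  # bond to the +x neighbor
            bonds.append((r, x * L + (y + 1) % L))    # bond to the +y neighbor
    return bonds

def coulomb_energy(n_up, n_dn, U=4.0):
    """Diagonal contribution to the local energy: U times the number of doubly occupied sites."""
    return U * int(np.sum(np.asarray(n_up) * np.asarray(n_dn)))
```

For L = 4 this yields the 2 × 16 = 32 nearest-neighbor bonds of the N_site = 4 × 4 cluster used in the benchmarks.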

A. Computational details
We perform the MCMC updates 10^7-10^9 times, and take a sample every 10^3-10^4 steps, namely, N_MCS = 10^4-10^5, except for Sec. III F. In the main (GJ-type) NN, we prepare five (four) layers of the FCNN; the first four (three) layers have 400 neurons, while the last one has 40. In both NNs, we employ the parametric ReLU [27], except for the last two layers (last layer) of the main (GJ-type) NN, where we use the hyperbolic tangent. We use Adam for the optimization [26], in which we take the hyperparameters as β_1 = 0.9 and β_2 = 0.99. For the learning rates η_main and η_GJ used in the main and GJ-type NNs, respectively, we set the initial values at 10^−3-10^−4 and gradually decrease them to 10^−4-10^−5 during the optimization processes; the actual schedules are shown for each case in the following sections. We note that the optimization process depends sensitively on the values of β_1, β_2, η_main, and η_GJ. Specifically, it is crucial to take η_GJ much smaller than η_main to avoid a metastable state where only the onsite Coulomb energy tends to be optimized while the kinetic energy does not. The calculations are performed on GPUs with parallelization up to 32 cores.
B. Efficiency of the reweighting method

First, we demonstrate that the reweighting method introduced in Sec. II B can stabilize and accelerate the optimization. Figure 3 shows the optimization processes of the relative error of the energy defined by

∆E = (E_θ − E_GS)/|E_GS|,  (16)

when we take μ = 2 and μ = 1 in Eq. (10). Here E_GS is the ground-state energy obtained by the ED calculation. We set λ = 0 in Eq. (8). In the case of μ = 2, which corresponds to the conventional MCMC sampling with the use of Eq. (9), there appear some spikes, indicated by the arrows in Fig. 3, suggesting numerical instabilities where the local energy becomes large due to small absolute values of the wave function. Such behaviors are not observed for the reweighting method with μ = 1. By this stabilization, the energy for μ = 1 is always lower than that for μ = 2, indicating that the reweighting method accelerates the optimization. We also performed the calculations with μ = 0.5 and μ = 0.25, but did not find any further improvement over the case of μ = 1. The lower panel of Fig. 3 shows the schedules of the learning rates, where we reduce η_main and η_GJ in a stepwise manner during the optimization. We find that this is an efficient way to lower the energy. We note that the optimization is no longer accelerated if we take values of η_main and η_GJ smaller than the smallest ones in Fig. 3, due to the slowing down of the optimization process. Similar behaviors were reported in a previous study [11].
C. Two types of the neural networks

Next, we investigate how the two NNs introduced in Sec. II E work in the actual optimization. Figure 4 shows the optimization processes with and without the input of {n_{rσ}n_{r'σ'}} in the main NN, in addition to {n_{rσ}}. In both cases with and without the use of the GJ-type NN, we find that the additional input accelerates the optimization. This is surprising, since {n_{rσ}n_{r'σ'}} have no extra information about the states beyond {n_{rσ}}. The results in Fig. 4 also indicate that the use of the GJ-type NN in addition to the main NN accelerates the optimization.

D. Effect of the energy variance
Then, we investigate the effect of the inclusion of the energy variance, V_θ in Eq. (6), in the loss function in Eq. (8). The results are shown in Fig. 5, in comparison with those obtained without V_θ throughout the optimization. Here we vary λ in a stepwise manner, as shown in the lower panel of Fig. 5(b). We find that the introduction of λ = 0.2 at the 2000th step rapidly lowers the energy, and at the same time, the energy variance is largely reduced. The increase of λ to 0.5 at the 3000th step disturbs the optimization temporarily, but the energy is further lowered in the later steps. We note that a further increase of λ makes the optimization unstable. In addition, a nonzero λ from the beginning of the optimization may trap the system in a metastable state where only V_θ takes a small value. These results indicate that an appropriate inclusion of the energy variance in the loss function accelerates the optimization process. In order to understand the reason for this improvement, we study the behavior of the energy and the variance around the true ground state. We define the linear combination of the wave function given by the NN, ψ_θ(x), and that of the ground state, ψ_GS(x) = ⟨x|ψ_GS⟩, as

ψ̃_θ(x) = ξψ_GS(x) + (1 − ξ)ψ_θ(x).  (17)

Using {|x⟩} and θ at the 1000th step of the data in Fig. 5, we compute the averages of the energy and the variance for Eq. (17) while varying ξ by

⟨Õ⟩_θ = Σ_x w̃(x) Õ^loc_θ(x) / Σ_x w̃(x),  w̃(x) = |ψ̃_θ(x)|² / |ψ_θ(x)|^μ,  (18)

where the sums run over the fixed samples, Õ denotes Ẽ or Ṽ, and Õ^loc_θ(x) is calculated by replacing ψ_θ(x) with ψ̃_θ(x) in Eqs. (2) and (7).
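The contrast between the two quantities can be reproduced in a toy model: with exact (full-basis) sums over a small random Hamiltonian, the energy of the interpolated state never dips below E_GS and the variance vanishes only at ξ = 1, so a sub-E_GS minimum can only be a finite-sampling artifact. This is a sketch under those assumptions, not the paper's calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
A = rng.normal(size=(D, D))
H = (A + A.T) / 2                               # small random symmetric "Hamiltonian"
w, v = np.linalg.eigh(H)
E_GS, psi_GS = w[0], v[:, 0]                    # exact ground state
psi_theta = psi_GS + 0.3 * rng.normal(size=D)   # an imperfect trial state

def stats(psi):
    """Exact full-basis energy and variance of the local energies (H psi)(x) / psi(x)."""
    p = psi**2 / np.sum(psi**2)
    e_loc = (H @ psi) / psi
    E = np.sum(p * e_loc)
    V = np.sum(p * (e_loc - E)**2)
    return E, V

for xi in (0.0, 0.5, 1.0):
    psi = xi * psi_GS + (1 - xi) * psi_theta    # interpolation toward the ground state
    E, V = stats(psi)
    # with exact sums, E >= E_GS for every xi, and V = 0 only at xi = 1
```

With stochastic sampling, the energy estimate acquires fluctuations that can cross E_GS, while the variance estimate stays nonnegative sample by sample.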
Figure 6(a) shows ⟨Ẽ⟩_θ as a function of ξ. Although ⟨Ẽ⟩_θ = E_GS at ξ = 1, we find that ⟨Ẽ⟩_θ takes its minimum slightly below E_GS at ξ ≈ 0.89. While this result appears to violate the variational principle, it is presumably due to the finite number of samples. This suggests that the update of θ may fail when only the energy is used in the loss function.
On the other hand, ⟨Ṽ⟩_θ takes its minimum value of ⟨Ṽ⟩_θ = 0 exactly at ξ = 1 even for a finite number of samples, as shown in Fig. 6(b). This is because Ṽ^loc_θ(x) is nonnegative for every sample and all the local energies coincide only for the true ground state.

TABLE I. Errors in the total energy ∆E, the kinetic energy ∆E_K, and the onsite Coulomb energy ∆E_U for different irreducible representations (irrep). The parameters and calculation conditions are the same as those in Fig. 5 with the introduction of nonzero λ.

E. Irreducible representation
As discussed at the end of Sec. II D, we need to choose a proper irreducible representation for the finite-size calculations. We list the error of the energy, ∆E, for different irreducible representations in Table I, obtained after the optimization under the same conditions as in Fig. 5 with the introduction of nonzero λ. The irreducible representation E, which has twofold degeneracy, is excluded from the calculations. We find that the energy for A_1 is significantly lower and close to E_GS compared to the other three. In addition, in Table I, we list the errors in the kinetic energy ∆E_K and the onsite Coulomb energy ∆E_U, corresponding to the first and second terms in Eq. (15), respectively. Except for A_1, the kinetic energy remains high, while the onsite Coulomb energy becomes lower than the value in the true ground state. This suggests that an inappropriate choice of the irreducible representation fails to optimize the kinetic energy, presumably due to a wrong sign structure of the wave function.
F. Numbers of the NN parameters and the MC samples

Figure 7(a) shows ∆E as a function of the number of the NN parameters, N_θ, given by

N_θ = Σ_{l=1}^{L_main−1} D^main_l D^main_{l+1} + Σ_{l=1}^{L_GJ−1} D^GJ_l D^GJ_{l+1},  (19)

where L_main (L_GJ) is the number of layers and D^main_l (D^GJ_l) is the number of neurons on the layer l in the main (GJ-type) NN. Here we fix N_MCS at 16000. The calculation conditions are the same as those with nonzero λ in Sec. III D. ∆E decreases with increasing N_θ and appears to approach zero for N_θ → ∞. This is consistent with the universal approximation theorem: our framework using the NNs can approximate the true ground state as precisely as desired by increasing the number of parameters, provided that the optimization is performed with sufficiently many samples.
We also plot ∆E while changing the number of MC samples, N_MCS, in Fig. 7(b), fixing N_θ at 2070120. We find that ∆E monotonically decreases with increasing N_MCS. This result indicates that not only the number of NN parameters but also the number of MC samples is relevant to the approximation. Contrary to the N_θ dependence, however, ∆E appears to converge to a nonzero value for N_MCS → ∞, suggesting that the capability of the approximation is limited more strongly by N_θ than by N_MCS.
G. Benchmark for N_site = 6 × 6

Finally, we apply our method to the case of N_site = 6 × 6 and N = 10. The result is shown in Fig. 8. Since the ED is not feasible in this case, we compare the result with that obtained by the mVMC calculations [28]. We find that the energy obtained by our method decreases to near the mVMC value; on closer inspection, our method achieves a slightly lower energy than the mVMC, as shown in the inset. Since the mVMC was shown to provide comparable or better results than existing NN-based methods [5,13,21], our result indicates that the capability of our method is comparable to or better than those of other NN-based approaches.

IV. CONCLUDING REMARKS
In this paper, we have developed a framework to approximate fermionic many-body wave functions by NNs without using the Slater determinant. In our framework, the sign change of the wave function associated with the fermionic anticommutation relation is explicitly calculated by defining the order of particles in real space, and the remaining part of the wave function is optimized by the FCNNs. We applied our method to the Hubbard model on the square lattice, and showed that it achieves a lower energy than the mVMC calculation. The numerical cost of our framework is O(N)-O(N^2), depending on the details of the computational tricks and the inputs to the NNs employed. This is an advantage over the other NN methods based on the Slater determinant (or Pfaffian), whose numerical cost is O(N^3). Thus far, our framework has been successful for a modest number of particles, and its application on a larger scale is left for future study; it would be crucial to develop more efficient optimization techniques for a larger number of parameters.
In our framework, we have implemented several numerical tricks to stabilize and accelerate the optimization process. First, we employed the reweighting method with an MC weight given by a power smaller than two of the absolute value of the wave function, to avoid numerical instabilities caused by accidentally small weights appearing in the optimization process. Second, we limited ourselves to calculating only the representative states, which reduces the effective Hilbert space and enforces the symmetry of the system on the wave function. Lastly, we included the energy variance in the loss function, in addition to the energy. We believe that all these tricks would be effective for a wide class of fermionic systems and applicable to the optimization methods in conventional VMC studies.
Furthermore, we implemented two FCNNs, the main NN and the GJ-type NN, in the NN architecture. The main NN represents both the sign and the amplitude of the wave function; we incorporated not only the particle numbers but also their products into its input. Meanwhile, the GJ-type NN is a generalization of the GJ factor to incorporate correlation effects. We showed that both of them contribute to the improvement of the optimization. For further improvement, imposing the symmetry of the system on the NNs would give better results [29]. In addition, it would be interesting to incorporate various NN architectures developed in the machine learning community.