Application of Grassmann phase space theory to Cooper pair model

This paper concerns the application of Grassmann phase space theory (GSPT) to treat the dynamical evolution of systems of identical fermions, such as ultracold gases of fermionic atoms. Phase space theory, which originated in quantum optics, is of increasing importance since it overcomes certain issues associated with other theoretical methods, such as Green's functions, variational methods, quantum Monte Carlo methods, etc. In phase space theory quantum states are represented by quasi-probability distribution functions of phase space variables associated with canonical system operators, such as annihilation and creation operators. Evolution is described via a Fokker-Planck equation for the distribution function, which is equivalent to Ito stochastic equations for (time dependent) stochastic phase space variables. Quantum correlation functions given as averages of products of phase space variables over the quasi-probability distributions then become stochastic averages of products of stochastic phase space variables. In GSPT the phase space variables are Grassmann numbers, and as computer representation of Grassmann numbers is difficult, Grassmann phase space methods were long regarded as computationally inaccessible. However, previous work using the un-normalised B distribution shows that computer representation of Grassmann variables is unnecessary. Stochastic averages of products for quantum correlation functions at later times are related linearly to stochastic averages at earlier times via stochastic matrices that only involve c-numbers. Thus, GSPT calculations of quantum correlation functions now only involve c-number computations.
This paper presents the first correct numerical calculation of a quantum correlation function for a fermionic system using stochastic methods based on Grassmann phase space theory, namely the time dependence of the coherence between two Cooper pair states in a four-mode fermion system, where the short and finite time solutions can be compared to known exact results. Good agreement between the stochastic and exact results is found, showing that GSPT is a valid approach for treating fermionic systems. The treatment of time evolution involves a novel use of the eigenvalues and biorthogonal column eigenvectors of a stochastically determined c-number matrix M and its transpose. Other topics of interest in ultracold Fermi gases to which GSPT could be applied are highlighted, such as the strong interaction regime of the BEC/BCS crossover achieved using magnetically tuned Feshbach resonance techniques.

1. Introduction

1.1. Theoretical methods for non-relativistic many-body systems

Quantum dynamics is one of the most fundamental problems in modern physics, since time evolution is the basis for any theoretical prediction, yet many-body complexity makes this an extremely challenging task in quantum systems. New theoretical methods are always needed, and quantitative experiments with well-understood interactions are vitally important to enable the testing of predictions. Also, for systems in thermodynamic equilibrium, evolution associated with changes to external variables such as temperature can also be thought of as a type of quantum dynamics. We will briefly review some of the recent developments relevant to ultracold atoms, since these provide an exceptionally simple and well-understood physical environment within which quantitative tests of dynamical theoretical predictions can be performed.
There is a wide range of theoretical treatments of the behaviour of many-body systems in non-relativistic quantum systems, which are found in condensed matter physics, nuclear physics, quantum optics and more recently in the physics of degenerate bosonic and fermionic atomic gases at low temperatures and density. These include not only the more traditional Green function [1,2], variational [3,4], path integrals [5], mean field theory [6] and stochastic Schrödinger equation-Quantum Monte Carlo [7] approaches, but also phase space methods [8][9][10][11][12]. In most modern treatments, where the many-body systems involve identical particles, a second quantisation approach [13,14] is used.
There are well known issues regarding the theoretical methods described above. Path integral and Monte Carlo methods, for instance, are useful for bosons at thermal equilibrium. However, for quantum dynamics, and for fermions in particular, there are phase and sign problems which severely limit their applicability. Also, the simulations involved are often restricted to small sizes which are difficult to extrapolate to the true thermodynamic limit [15][16][17]. Standard perturbation theory is applicable to certain problems, but for others involving large interaction strengths the expansions are of infinite order and generally do not converge, owing to the lack of a small expansion parameter. Using the mean field theory approach, one has to account for quantum fluctuations in atom numbers or limited coherence times and lengths [18]. Variational methods require an assumption regarding a trial function, the choice of which depends on trial and error or experience. For Green functions, which have a long history in condensed matter problems, having been applied to calculate properties of systems such as superfluids and superconductors [1,19], a correct choice has to be made regarding the Feynman diagrams relevant to the problem.
Phase-space representations, on the other hand, are of increasing importance as a viable and successful means to study exponentially complex quantum many-body systems from first principles [8][9][10][11][12]. These were invented to describe lasers, but have been adapted to treat atoms instead of photons. Consequently, the behaviour of bosonic photons and atoms is often treated using phase space methods in both quantum optics and cold atom physics. Here, mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. Phase space methods have also been introduced for treating fermionic systems [20], but differing choices have been made for the phase space variables. Corney and Drummond [21], for example, introduce c-number variables associated with pairs of fermion annihilation and creation operators. However, the anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators [20]. In spite of the seminal work by Cahill and Glauber [20] (and a few other treatments [12,[22][23][24][25][26][27]), the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics.
The present paper is the first correct numerical application of Grassmann phase space theory (GSPT) to calculating a quantum correlation function (QCF) for a fermionic system (see section 1.2 for details). The QCF involved is the coherence between two Cooper pair states in a four mode fermionic system, showing the development of this coherence over finite times. This system has the advantage that the evolution is known via standard matrix mechanics, so although the GSPT method is not expected to reveal any new physics in this case, it can be compared to a known exact result. New physics will be revealed when GSPT is applied to systems not yet completely understood, such as the BEC/BCS crossover in cold Fermi gases. As will be shown below (see sections 3.2, 3.3 for details), the evolution of the fermion QCF involves the stochastic calculation of a c-number matrix M, and a novel method for determining the evolution is presented, based on using the numerical eigenvalues and biorthogonal column eigenvectors of M and its transpose. Furthermore, the present paper clearly demonstrates that numerical calculations based on GSPT can be carried out using only c-numbers, and without the need to represent Grassmann variables themselves on the computer, an issue previously thought to restrict GSPT to purely analytic applications (see section 1.2 for details).
Unlike variational methods, phase space methods do not require assuming a trial form for the quantum state, and whereas Green function methods involve selecting which class of Feynman diagrams is important in the process and which are to be discarded, phase space methods do not depend on making such selections. They ultimately involve calculations with representative sets of stochastic trajectories that sample the distribution function throughout the phase space, and their main limitation is a numerical constraint on the numbers of trajectories that can be stored on a computer.

1.2. Phase space theory - Grassmann phase space variables
Thermal evolution based on a Matsubara equation [28] was treated using a Grassmann phase space theory by Plimak, Collett and Olsen [22] for a 1D system of spin-1/2 fermions with zero-range interactions, and numerical results were presented for number correlations between pairs of fermions for the various combinations of momenta $\pm k$ and spins $\pm$. These authors used an un-normalized B distribution function based on fermion Bargmann coherent states [20], for which a Fokker-Planck equation (FPE) was obtained where the drift vector depended only linearly on the Grassmann phase space variables, a feature the authors recognised as being vital for numerical work. Plimak et al also introduced stochastic Grassmann variables, with those at a later time (or inverse temperature, for thermal evolution) being related linearly to those at an earlier time by a stochastic c-number transformation matrix $\beta^{-1}$. However, rather than introducing Ito stochastic equations (Ito SE) for the stochastic variables themselves, they considered an Ito SE for the transformation matrix $\beta$. Their fundamental equation was an ansatz for determining the B distribution function at later times from that at an initial time, via substituting stochastic Grassmann variables for the original non-stochastic phase space variables, multiplying by the determinant ($\det \beta$) of the transformation matrix, and then taking a stochastic average of the resultant product.
The Grassmann phase space theory used in the present paper was developed by Dalton et al on a different basis and is set out in [12,24,26,27]. Details are set out here in section 3. Extensive accounts of the underlying Grassmann algebra and calculus may be found in [12,20,22,25]. Essentially, the phase space method also involves representing the quantum density operator for the system by a Grassmann (un-normalized B) distribution function [12,22,24,25,26,27] in a phase space where the phase space variables replacing the fermion annihilation and creation operators are Grassmann variables. Quantum correlation functions (QCF) can be related to experimentally measurable quantities, and theoretically to Grassmann phase space integrals involving the distribution function with the fermion operators being replaced by phase space variables. Evolution equations (over time or temperature) for the density operator lead to Fokker-Planck equations (FPE) for the distribution function via the application of correspondence rules. However, unlike in [22], the FPE are then replaced by Ito stochastic equations for the stochastic Grassmann phase space variables themselves, which are derived from the FPE. The QCF are now given by stochastic averages of products of these stochastic variables. The stochastic averages of products at a later time can be shown to be related linearly to such stochastic averages at an earlier time via matrices that only involve c-numbers. Even though these matrices involve stochastic quantities such as Wiener increments, their non-dependence on Grassmann variables enables computations to be carried out without having to represent Grassmann variables on the computer. The initial stochastic averages of products are obtained from the initial density operator. A comparison of the present approach with that in [22] is provided in the work of Polyakov [25], which confirmed the present formalism.
The utility of the theory can first be tested on some fermion systems that have been treated previously by other methods. In this paper, we will numerically calculate the coherence between two Cooper pair states in a simple four mode fermion system as a stochastic average [12,24,27]. The short time and finite time solutions for this coherence are known from analytic methods, so comparisons can be made with exact results. Another test of the theory would involve a re-determination of the quantum correlation functions for interacting spin-1/2 fermions which were previously calculated by Plimak et al [22], using a Grassmann phase space approach involving a different treatment of evolution. Although based on a different Ito stochastic equation, the numerical calculations of Plimak et al [22] nevertheless showed that a Grassmann phase space theory could be used to calculate quantum correlation functions for a field-like situation involving a continuous range of momentum values, implying that similar calculations could be carried out on topics such as the BEC/BCS crossover.

1.3. Plan of paper
As the main potential application of Grassmann phase space theory will be to treat cold quantum gases of fermionic atoms, we provide in section 2 a brief overview of current topics of interest in this area. In section 3 we review the various features of Grassmann phase space theory, starting with its key theoretical expressions, followed by how it can be used in numerical calculations. In section 4 we describe the Cooper pair model for a system of two spin-1/2 fermions with four modes, comparing the theoretical results obtained for the model using analytical methods with the numerical results obtained for the QCFs based on GSPT stochastic calculations. Section 5 summarises the paper. Various details are set out in Appendices, available as online Supplementary Material.

2. Cold quantum gases
Research in the field of ultracold atomic gases has been a major activity since the 1990s, when Bose-Einstein condensation was achieved for bosonic atoms [29][30][31][32]. Non-interacting untrapped bosonic atoms at zero temperature form a BEC, with a macroscopic occupancy of the lowest single particle energy state, which is possible due to the absence of the Pauli exclusion principle. Since the 2000s, ultracold gases of fermionic atoms have also been prepared, manifesting different effects. Non-interacting untrapped fermionic atoms at zero temperature form a Fermi gas, with each energy state only being occupied by two atoms with different spins due to the Pauli exclusion principle. Consequently, energy states are filled up to the Fermi energy $E_F$, whose associated wave number $k_F$ is proportional to the inverse of the average separation between the atoms. In both cases, the single particle states are plane waves with momentum $\hbar k$.
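As a concrete numerical aside (an illustration not taken from the paper; the density and atomic mass values below are assumed, typical of a 6Li experiment), the following sketch computes the Fermi wave number $k_F = (3\pi^2 n)^{1/3}$ and Fermi energy $E_F = \hbar^2 k_F^2/(2m)$ for an ideal two-component Fermi gas of total density n:

```python
import math

def fermi_properties(n, m):
    """Ideal two-component Fermi gas of total number density n (m^-3)
    and atomic mass m (kg): returns (k_F, E_F)."""
    hbar = 1.054571817e-34  # J s
    k_F = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)
    E_F = hbar**2 * k_F**2 / (2.0 * m)
    return k_F, E_F

# Assumed values: n ~ 1e18 m^-3 and the 6Li atomic mass ~ 9.988e-27 kg
k_F, E_F = fermi_properties(1e18, 9.988e-27)
# k_F is of order 1e6 m^-1 for these parameters
```

With these parameters $E_F/k_B$ comes out in the sub-microkelvin range, which is why Fermi degeneracy requires the coldest temperatures achievable in the laboratory.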
Ultracold quantum gases have opened up new horizons in many-body physics, from novel quantum states of matter to quantum computing applications. They provide a unique table-top paradigm for exploring the properties of quantum many-body systems in nature, from the thermodynamics of high-temperature superconductors to the hydrodynamics of quark-gluon plasmas (QGPs) in quantum chromodynamics (QCD). These gases are mainly made of alkali metal atoms, but more recently other atoms as well as diatomic molecules have been used. They can be fermionic or bosonic, with a wide variety of internal hyperfine spin structures. Their interactions can be made strong or weak, and either attractive or repulsive. They are contained in a variety of magnetic and optical traps in one, two and three dimensions, including optical lattices.
For bosonic gases, there is a wide range of interesting topics, which include Bose-Einstein condensation (BEC) and superfluidity (flow without dissipation below a critical velocity $v_c$). The BEC paradigm, first developed for non-interacting bosons and later generalized to take into account repulsive interactions, describes bosonic fluids like 4He or ultracold Bose gases like 87Rb. Interactions are described in terms of a two-body scattering length [33]. The condensate is a macroscopic occupation of a single quantum state that occurs below a transition temperature $T_c$ which, even in strongly interacting Bose systems like 4He, is of the same order of magnitude as the quantum degeneracy temperature at which the inter-particle spacing becomes of the order of the thermal de Broglie wavelength [34,35].
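The degeneracy-temperature statement above can be checked with a back-of-envelope calculation (illustrative only; the 87Rb mass and density values are assumed): setting the inter-particle spacing $n^{-1/3}$ equal to the thermal de Broglie wavelength $\lambda_{dB} = h/\sqrt{2\pi m k_B T}$ and solving for T gives

```python
import math

h = 6.62607015e-34    # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K

def degeneracy_temperature(n, m):
    """Temperature at which n^(-1/3) = h / sqrt(2*pi*m*k_B*T),
    for number density n (m^-3) and atomic mass m (kg)."""
    return h**2 * n ** (2.0 / 3.0) / (2.0 * math.pi * m * k_B)

# Assumed values: 87Rb mass ~ 1.443e-25 kg, typical BEC density n ~ 1e20 m^-3
T_deg = degeneracy_temperature(1e20, 1.443e-25)
```

giving a degeneracy temperature in the sub-microkelvin range for typical BEC densities, consistent with the transition temperatures observed experimentally.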
For fermionic gases, not only do effects such as fermionic BCS superfluidity (based on large Cooper pairs of two atoms with opposite spins and momenta, and described by the BCS theory of Bardeen, Cooper and Schrieffer [36]) occur, but BEC superfluidity (based on tightly bound molecules of two fermionic atoms with opposite spins) can also be observed. It was proposed as early as 1950 by Fritz London (see [37]) that fermionic superfluidity involves a pair condensate in momentum space, in contrast with a BEC of tightly bound pairs in real space, and BCS theory then emphasised the different nature of the BCS and BEC types of superfluidity. The BCS paradigm, first developed for metallic superconductors, describes a pairing instability arising from a weak attractive interaction in a highly degenerate system of fermions. The formation of pairs, and their condensation, both occur at the same $T_c$, which is orders of magnitude smaller than the Fermi energy $E_F$. However, it was later realised [38][39][40][41] that the BCS theory provides a good qualitative description of both the BEC and BCS regimes as the two body scattering length is changed from being negative (the BCS regime) to being positive (the BEC regime). Experimentally, such changes in the two body scattering length can be achieved using magnetically tuned Feshbach resonance techniques [42][43][44][45]. Studies involving Feshbach resonances have led to ground-breaking observations, including the condensation of molecules, and to additional intensive research relevant to the crossover physics from a molecular Bose-Einstein condensate (BEC) to atomic Cooper pairs in the BCS state (the BEC/BCS crossover) [17,41,46,47]. The calculation of the phase transition temperature $T_c$ between the superfluid phases as a function of $1/(k_F a)$ is also of interest.
Near the crossover, the scattering length becomes very large, and this corresponds to the so-called unitary regime where strong correlations occur, and for which BCS theory involving mean field equations is no longer adequate. Figure 1 shows the phase diagram associated with a Feshbach resonance, the variables being the inverse of the two fermion scattering length and the temperature.
Various BEC/BCS crossover studies ([30,48], see figure 24 therein) have shown that there is a smooth change in the size of the Cooper pairs through the Feshbach resonance. However, it is expected that there is little correlation between different Cooper pairs well away from the unitary regime. On the BEC side there should be little relationship between the nearby positions of the pair of fermions in one tightly bound molecular Cooper pair, and the positions of the pair of fermions in another. On the BCS side, there should be little relationship between the related $k, -k$ momenta of the pair of fermions in one Cooper pair and the related $l, -l$ momenta of the pair of fermions in another. However, in the crossover regime, the positions (or momenta) of the four fermions in any two Cooper pairs should be highly correlated. Figure 2 illustrates this effect.
A study of the BCS/BEC crossover regime, including the strongly correlated unitary regime, would be a worthwhile application for Grassmann phase space theory. The application would employ Fokker-Planck equations and the related Ito stochastic equations, either based on starting with a Matsubara equation [28], which describes the temperature evolution of the system from an initial high temperature where the atomic gas behaves classically, or starting with a Liouville-von Neumann equation, which describes time evolution. Numerical methods would be used at first to study the two particle quantum correlation function, which has the form [49]

$G^{(2)}(\mathbf{r}_1, \mathbf{r}_2) = \langle \hat{\Psi}_\uparrow^\dagger(\mathbf{r}_1)\, \hat{\Psi}_\downarrow^\dagger(\mathbf{r}_2)\, \hat{\Psi}_\downarrow(\mathbf{r}_2)\, \hat{\Psi}_\uparrow(\mathbf{r}_1) \rangle.$   (2.1)

In BCS theory there is a smooth evolution across the BEC/BCS crossover in the size of the Cooper pair, from the situation for a tightly bound molecule to that for a loosely bound Cooper pair, with no dramatic change at resonance (see Ketterle [30]). It is expected that the Grassmann phase space method would give similar results. However, if higher order quantum correlations were studied, we might expect to see strong interaction effects, such as inter-pair coherence lengths differing from intra-pair coherence lengths, which are attributed to fluctuation correlations not included in the BCS mean field theory [48,[50][51][52]. Our hypothesis is that there is little correlation between different Cooper pairs well away from the unitary regime, whereas in the crossover regime the positions of the four fermions in any two Cooper pairs should be highly correlated. As is well known [49], the two particle QCF in equation (2.1) can be found using Bragg spectroscopy. However, the four particle QCF describing one Cooper pair at $\mathbf{r}_1, \mathbf{r}_2$ and another at $\mathbf{r}_3, \mathbf{r}_4$ is

$G^{(4)}(\mathbf{r}_1, \mathbf{r}_2; \mathbf{r}_3, \mathbf{r}_4) = \langle \hat{\Psi}_\uparrow^\dagger(\mathbf{r}_1)\, \hat{\Psi}_\downarrow^\dagger(\mathbf{r}_2)\, \hat{\Psi}_\uparrow^\dagger(\mathbf{r}_3)\, \hat{\Psi}_\downarrow^\dagger(\mathbf{r}_4)\, \hat{\Psi}_\downarrow(\mathbf{r}_4)\, \hat{\Psi}_\uparrow(\mathbf{r}_3)\, \hat{\Psi}_\downarrow(\mathbf{r}_2)\, \hat{\Psi}_\uparrow(\mathbf{r}_1) \rangle.$   (2.2)

This cannot be found using standard Bragg spectroscopy, suggesting that a new form of Bragg spectroscopy will be needed to fully study the strong interaction regime. On the theory side, the calculation of such QCF would be an important application of Grassmann phase space theory.
More recently, quantized vortices in a rotating Fermi gas have provided a direct signature for the presence of superfluidity in a strongly interacting Fermi gas, as they are a direct consequence of the existence of a macroscopic wave-function that describes the superfluid. It is beyond the scope of this paper to describe these topics, but references such as [29,30,48,50] provide a useful overview of the area.

Figure: The BEC-BCS crossover. By tuning the interaction strength between the two fermionic spin states, one can smoothly cross over from a regime of tightly bound molecules to a regime of long-range Cooper pairs, whose characteristic size is much larger than the interparticle spacing. In between these two extremes, one encounters an intermediate regime where the pair size is comparable to the interparticle spacing. Reprinted from Ketterle [30].
Interacting fermions appear in a wide range of settings. The Grassmann phase space theory will be focused on applications to the specific topic of strongly interacting Fermi gases [30,51], which provide a well-controlled and flexible environment to study many-body phenomena in strongly correlated systems. Generically, strong interactions give rise to strong correlations. A strongly correlated system cannot be described by working perturbatively from non-interacting particles or quasiparticles. In the case of electrons in condensed matter systems, theories constructed from single-particle properties, such as the Hartree-Fock approximation, cannot describe the problem at hand. In the case of fluids, kinetic theories based on quasiparticle degrees of freedom, in particular the Boltzmann equation, fail [31,53]. Theoretically, strongly interacting Fermi systems represent a challenging scenario to treat, as the large scattering length means there is no small parameter to describe the interactions. Models based on simple perturbation theory are therefore no longer adequate to describe certain system parameters. Experiments can provide useful information, both revealing properties of these systems and establishing benchmarks for appraising different approximate theoretical approaches. For example, recent experiments on ultracold Fermi gases have provided an unprecedented opportunity to test universality in the laboratory, which in principle allows the interior properties of hot dense neutron stars to be investigated on Earth [16].
3. Grassmann phase space theory

3.1. Summary of GSPT features

As explained in section 1, in our work we follow the approach of Plimak, Collett and Olsen [22], in that we also base our work on the un-normalized B distribution, but with Grassmann number phase space variables $(g_p, g_p^+)$ associated with each mode. Quantum correlation functions (QCF), Fock state populations and coherences are given by Grassmann phase space integrals over the B distribution function, with the mode operators being replaced by Grassmann phase space variables. Fokker-Planck equations for the distribution function are obtained, and these involve Grassmann derivatives rather than c-number derivatives. The phase space variables are then replaced by stochastic Grassmann variables. The QCF etc (as in the boson case) are now given by stochastic averages of products of the stochastic phase variables. However, unlike in [22], Ito stochastic equations for the stochastic Grassmann variables are derived from the Fokker-Planck equation. An approach for bosons described by Gardiner [54] is followed, where we equate the time derivative of the Grassmann phase space average of an arbitrary function of the phase space variables to the time derivative of the stochastic average of the same function when the Grassmann phase space variables are replaced by stochastic Grassmann variables. The time dependence of the phase space average is determined from the FPE for the distribution function and the time dependence of the stochastic average is determined from the Ito SE for the stochastic Grassmann variables, and these time dependences are required to be the same. This establishes the relationship between the deterministic and noise terms in the Ito SE and the drift and diffusion terms in the FPE. It is a different approach to that based on the ansatz of Plimak et al, and our Grassmann Ito SE for the stochastic Grassmann variables are not equivalent to the c-number Ito SE for the transformation matrix obtained in [22].
A subsequent paper by Polyakov [25] was in agreement with our formalism.
We then show how the Ito SE for the stochastic Grassmann variables can be applied in numerical calculations of the stochastic averages of products of these quantities needed for determining QCF, etc. Essentially, the stochastic average of a product of Grassmann stochastic phase variables at the end of a small time interval is related via a linear transformation to the set of stochastic averages of all the products of Grassmann stochastic phase variables (of the same order) at the start of the time interval. The key result is showing that the linear transformation matrix M relating the stochastic averages at the end of a time interval to those at its beginning involves only c-numbers, such as stochastic Wiener increments and quantities from the Ito SE for the Grassmann stochastic phase variables. By dividing a finite time interval into small time intervals, the stochastic average of a product of Grassmann stochastic phase variables at the end of the finite time interval can be obtained in steps from the stochastic averages of products at the initial time. The numerical method for this process involves calculating the eigenvalues and eigenvectors of the linear transformation matrix M and its transpose M^T for a suitably short time interval. Finally, the stochastic averages of products of Grassmann stochastic phase variables at the initial time are obtained from the initial density operator via the expressions for QCF, Fock state populations and coherences, where the relevant phase space integrals are related to initial stochastic averages of products of Grassmann stochastic phase variables.
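The stepping procedure just described can be illustrated with a c-number toy model. In the sketch below (purely illustrative: the matrix M is a random perturbation of the identity standing in for the stochastically determined short-time transformation matrix, F0 is a vector standing in for the initial stochastic averages, and non-degenerate eigenvalues are assumed), the eigenvalues and biorthogonal column eigenvectors of M and M^T are used to advance the averages over many short intervals at once:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Hypothetical short-time c-number transformation matrix (stands in for
# the stochastically determined matrix M of the text).
M = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))

evals, R = np.linalg.eig(M)    # eigenvalues and right (column) eigenvectors of M
_, L = np.linalg.eig(M.T)      # column eigenvectors of M^T = left eigenvectors of M

# Pair each left eigenvector with its right partner (largest overlap), then
# rescale so that L[:, k]^T R[:, k] = 1 (biorthogonality, assuming the
# eigenvalues are non-degenerate).
order = [int(np.argmax(np.abs(L.T @ R[:, k]))) for k in range(dim)]
L = L[:, order]
L = L / np.einsum('ij,ij->j', L, R)

F0 = np.ones(dim)              # initial stochastic averages (illustrative)
n_steps = 100

# Spectral form of repeated stepping: M^n F0 = sum_k lambda_k^n (L_k . F0) R_k
Fn = (R * evals**n_steps) @ (L.T @ F0)
```

Applying M repeatedly over many short steps is numerically equivalent to raising its eigenvalues to the corresponding power in this spectral form, which is the role the biorthogonal eigenvectors of M and M^T play in the method described above.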
In order to treat problems involving large particle numbers, it is often convenient to consider field annihilation and creation operators rather than those for separate modes. A phase space theory based on fields can be constructed for fermions, as is the case for bosons. The density operator is represented by a Grassmann distribution functional involving Grassmann fields associated with the field operators. The QCF etc are now given via Grassmann functional integrals. The distribution functional satisfies a functional Fokker-Planck equation (FFPE) involving Grassmann functional derivatives. Ito stochastic field equations (Ito SFE) can be obtained which are equivalent to the FFPE. The detailed development of Grassmann phase space field theory is covered in [12,26,27], but for reasons of space will not be outlined here.
In the next subsections we outline the separate mode treatment of Grassmann phase space theory.
The B distribution function $B(g, g^+)$ in equation (3.5) is unique, and is an even Grassmann function of the phase space variables [12,22,24]. The un-normalized distribution function $B(g, g^+)$ is related to the normalized distribution function $P(g, g^+)$ by a simple relation given in [12,24,26]. The distribution function $P(g, g^+)$ is normalized to unity and could also be used to determine quantum correlation functions [12], though this would require directly solving the Fokker-Planck equations. However, in numerical calculations it is convenient to consider un-normalized forms of the distribution functions, as these turn out to result in simpler Fokker-Planck equations, and lead to Ito equations linear in the Grassmann variables, which can be treated numerically.
3.2.3. QCF, Fock state populations, coherences as phase space integrals

One set of important quantities that can be determined is that of the populations of, and coherences between, the fermion Fock states. We consider two Fock states, each with p occupied modes, given by products of the corresponding mode creation operators acting on the vacuum state. Quantum correlation functions etc are given by Grassmann phase space integrals (see [12], pp 140-143).
For the Fock states $|F\{l\}\rangle$ and $|F\{m\}\rangle$, the populations and coherences are also given by phase space integrals.

Hamiltonian for cold quantum gases
The Hamiltonian for a fermion system involves one and two particle interactions, and may be written in terms of one particle and two particle operators $\hat{h}(i)$ and $\hat{v}(i,j)$, where the sums are over the N identical particles and the expressions are invariant under any permutation of these identical particles. In terms of annihilation and creation operators, the Hamiltonian given in first quantization by equation (3.12) can be written in second quantization form. The evolution of the quantum state, allowing for both Hamiltonian dynamics and Markovian relaxation due to coupling to an external reservoir, is described by a master equation of Lindblad form [12,24]

$\frac{\partial \hat{\rho}}{\partial t} = \frac{1}{i\hbar}[\hat{H}, \hat{\rho}] + \sum_{a,b} \Gamma_{ab} \left( \hat{S}_a \hat{\rho}\, \hat{S}_b^\dagger - \frac{1}{2} \left( \hat{S}_b^\dagger \hat{S}_a \hat{\rho} + \hat{\rho}\, \hat{S}_b^\dagger \hat{S}_a \right) \right),$

where, for pairs of fermion modes denoted $a \equiv (i, j)$ and $b \equiv (k, l)$, the $\hat{S}_a$ are transition operators and the $\Gamma_{ab}$ are relaxation coefficients, whose symmetry features are set out in [12,24].
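For a small number of modes, second-quantized fermion operators can be represented as explicit matrices, which is how exact matrix-mechanics results for few-mode models can be obtained. The sketch below (an illustration, not the paper's code; the Jordan-Wigner construction is one standard choice) builds annihilation operator matrices for n fermion modes so that the anticommutation relations can be checked numerically:

```python
import numpy as np

def jw_annihilation(n):
    """Jordan-Wigner matrices for fermion annihilation operators c_1..c_n,
    acting on the 2^n-dimensional Fock space (mode occupancy basis)."""
    I = np.eye(2)
    Z = np.diag([1.0, -1.0])          # phase string factor
    a = np.array([[0.0, 1.0],
                  [0.0, 0.0]])        # single-mode annihilation: a|1> = |0>
    ops = []
    for j in range(n):
        factors = [Z] * j + [a] + [I] * (n - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

c = jw_annihilation(4)                # four modes, as in a four-mode model
dim = 2 ** 4

# Check {c_i, c_j^dag} = delta_ij * I and {c_i, c_j} = 0
for i in range(4):
    for j in range(4):
        acomm = c[i] @ c[j].conj().T + c[j].conj().T @ c[i]
        assert np.allclose(acomm, (i == j) * np.eye(dim))
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0)
```

Any quadratic or quartic Hamiltonian in these operators is then an ordinary 16x16 matrix, so exact evolution follows from standard linear algebra.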

Fokker-Planck equation-drift vector A and diffusion matrix D
Correspondence rules for replacing terms in the evolution equation for the density operator by equivalent terms in the Fokker-Planck equation for the B distribution function can be obtained using equation (3.3). These are set out as equation (45) in [24]. The Fokker-Planck equation for the B distribution function determines the dynamics in phase space, and can be written in terms of right Grassmann derivatives acting on the drift vector and diffusion matrix contributions.

Linearity of drift vector, bilinearity of diffusion matrix
The drift vector A is an odd Grassmann function, linearly dependent on the Grassmann variables. This key linearity feature depends on using the B distribution function and is vital for numerical work. The diffusion matrix D is even and is bilinearly dependent on the Grassmann variables. Unlike for bosons, it is an anti-symmetric matrix. For the master equation (3.16), the drift vector is given in terms of the Hamiltonian matrix elements and relaxation coefficients through the submatrices $C_i$ and $C_i^+$ of the drift vector A. In these submatrices $i, j = 1, 2, \ldots, n$, and the c-number quantities $\Gamma_{kikj}$, $\nu_{ijkl}$ etc are defined in equations (3.14)-(3.17).

The stochastic and phase space averages
The Ito stochastic equations provide an equivalent determination of the phase space dynamics. As described in the Introduction, phase space variables g_p are replaced by time-dependent stochastic Grassmann variables g̃_p(t).
The ith member of the stochastic ensemble is written g̃^(i)_p(t). For an arbitrary function F(g) of the Grassmann phase space variables, the phase space average ⟨F(g)⟩ is an integral over the quasi-probability distribution, and the stochastic average of F(g̃(t)) after the replacement by stochastic variables is the mean over the ensemble of trajectories. The stochastic equations for the g̃_p(t) are determined from the Fokker-Planck equation for the distribution function by requiring the phase space average of an arbitrary function F and the stochastic average of the same function to always coincide (see Gardiner [54]). This enables QCF, Fock state populations and coherences to be given either by a phase space integral involving the distribution function or by a stochastic average involving the stochastic Grassmann phase space variables.
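The fundamental stochastic average, the mean of a function over the m ensemble members, can be sketched numerically. The ensemble here is an ordinary c-number stand-in (the quantities actually averaged in the GSPT numerical scheme are c-numbers such as the Θ matrix elements); the function F and the ensemble size are illustrative choices only.

```python
import numpy as np

# Sketch of the fundamental stochastic average (equation (3.25)):
# bar{F} = (1/m) sum_i F(g^(i)), the mean over m ensemble members.
# The samples are c-number stand-ins for the c-number quantities
# that are actually averaged in the GSPT numerical scheme.

def stochastic_average(samples, F=lambda x: x):
    """Average F over the ensemble of trajectory values."""
    return np.mean([F(s) for s in samples], axis=0)

rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 1.0, size=10_000)

# Stochastic average of F(x) = x**2 estimates the variance (= 1 here).
avg = stochastic_average(ensemble, F=lambda x: x**2)
```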

Ito equations for stochastic variables
The Ito stochastic equations for the stochastic Grassmann variables are of the form

$\frac{d}{dt}\tilde g_p(t) = C_p(\tilde g(t)) + \sum_a B_{pa}(\tilde g(t))\,\Gamma_a(t_+),$

where the deterministic term C_p(g̃(t)) and the noise factor B_{pa}(g̃(t)) are odd Grassmann functions, and are yet to be determined. The Γ_a(t_+) are standard c-number Gaussian-Markov random noise terms with the following stochastic properties: the stochastic average of a single Γ is zero, and the stochastic average of the product of two Γ's is zero if they are different and delta function correlated in the time difference if they are the same,

$\overline{\Gamma_a(t_1)} = 0, \qquad \overline{\Gamma_a(t_1)\,\Gamma_b(t_2)} = \delta_{ab}\,\delta(t_1 - t_2).$

In addition, the stochastic averages of products of odd numbers of Γ are zero, and stochastic averages of products of even numbers of Γ are sums of products of stochastic averages of pairs of Γ. At this stage we just list the Γ_a via a = 1, 2, …, i(n), where the total number i(n) is expected to depend on the number of modes n. It will turn out that i(n) = 2n².
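The Gaussian moment structure quoted above (odd products average to zero, even products reduce to sums of pair products) can be checked numerically for a single Gaussian variable, for which the pairing rule gives the familiar result E[x⁴] = 3(E[x²])². The sample size is an arbitrary choice.

```python
import numpy as np

# Numerical check of the Gaussian-Markov moment structure for one
# noise variable: odd moments vanish, and the fourth moment
# factorises into sums of products of pair averages (Wick pairing),
# E[x^4] = 3 * (E[x^2])^2.

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=2_000_000)

m2 = np.mean(x**2)   # pair average
m3 = np.mean(x**3)   # odd moment, should be ~0
m4 = np.mean(x**4)   # should be ~ 3 * m2**2
```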

The uncorrelation property
An additional property is that any function F(g̃(t)) of the stochastic variables at time t and the products of any Γ_a(t_+) at later times t_+ are uncorrelated.

The integral form of the Ito equation: the Wiener increments
Together with the stochastic averaging properties, the uncorrelation property and the stochastic properties of sums and products lead to an expression for the time derivative of the stochastic average of Grassmann functions in the Ito stochastic equations. The Ito stochastic equation for g̃_p(t) can also be written in the integral form

$\tilde g_p(t+\delta t) = \tilde g_p(t) + C_p(\tilde g(t))\,\delta t + \delta\tilde\sigma_p(t_+),$

where $\delta\tilde\sigma_p(t_+) = \sum_a B_{pa}(\tilde g(t))\,\delta\tilde w_a(t_+)$ is a Grassmann stochastic increment, and the Wiener stochastic variable w̃_a and Wiener increment are

$\tilde w_a(t) = \int_{t_0}^{t} dt_1\,\Gamma_a(t_1), \qquad \delta\tilde w_a(t_+) = \int_{t}^{t+\delta t} dt_1\,\Gamma_a(t_1).$

An important result for the stochastic average of the product of two Wiener increments is

$\overline{\delta\tilde w_a(t_+)\,\delta\tilde w_b(t_+)} = \delta_{ab}\,\delta t,$

which can easily be derived using equations (3.35) and (3.29). Further results for stochastic averages of products of Wiener increments can also be obtained: averages of odd numbers of increments are zero, and averages of even numbers are sums of products of pair averages.
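The key pair-average result for Wiener increments can be checked with discretised increments, generated as Gaussian variables of variance δt. The number of noise variables and the ensemble size below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: discretised Wiener increments dw_a, a = 1..4, over an
# ensemble of m members, checking the key result
#   mean(dw_a * dw_b) = delta_ab * dt.

rng = np.random.default_rng(2)
dt = 0.01
n_noise, m = 4, 500_000
dw = rng.normal(0.0, np.sqrt(dt), size=(m, n_noise))

corr = dw.T @ dw / m   # matrix of pair averages mean(dw_a dw_b)
```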

Relation between FPE and Ito equations' quantities
By equating the time derivatives of the two averages in equations (3.25) and (3.26) for an arbitrary function F(g), the following important relationships between A and D in the FPE and C and B occurring in the Ito SE are found.
The detailed derivation is set out in [12]. The deterministic factor C in the Ito SE is easily obtained as the negative of the drift vector A in the FPE, C = -A (the opposite sign to the boson case). As for bosons, the noise factor B is related to the diffusion matrix D in the FPE via D = B B^T. It is obtained via a construction process involving Takagi factorization [55,56]. As there are 2n² columns for K, it follows that the number of Gaussian-Markov or Wiener stochastic variables in the Ito equations is also 2n². This contrasts with the smaller number 2n for the boson case.
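The Takagi construction referred to here factorises a complex symmetric matrix A as A = Q diag(s) Q^T with Q unitary, from which a noise-matrix analogue B with B B^T = A follows. A minimal sketch via the SVD, valid for distinct singular values, is below; the random symmetric matrix is only a stand-in test case, not the paper's actual diffusion-matrix data.

```python
import numpy as np

# Sketch of Takagi factorisation A = Q diag(s) Q^T for a complex
# symmetric matrix A, via the SVD (assumes distinct singular values).
# From it, B = Q diag(sqrt(s)) satisfies B B^T = A, mimicking the
# construction of the noise matrix from the diffusion matrix.

def takagi(A):
    """Return unitary Q and singular values s with A = Q diag(s) Q^T."""
    U, s, Vh = np.linalg.svd(A)
    # For symmetric A, conj(V) = U @ Lam with Lam diagonal unitary.
    lam = np.diag(U.conj().T @ Vh.T)
    Q = U @ np.diag(np.sqrt(lam))
    return Q, s

rng = np.random.default_rng(3)
Mrand = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (Mrand + Mrand.T) / 2            # complex symmetric test matrix
Q, s = takagi(A)
B = Q @ np.diag(np.sqrt(s))          # noise-matrix analogue: B B^T = A
```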
From the linearity of the drift vector elements we can write it in the form (see appendix A, equation (A.7))

$C_p(\tilde g) = \sum_r L_p^{\,r}\,\tilde g_r,$

where L is a 2n×2n matrix of c-numbers, with rows p and columns r listed as 1, 2, …, 2n. Hence from equation (3.40) the stochastic transformation matrix Θ only involves c-numbers, and equation (3.47) shows there is a linear relationship involving a c-number stochastic transformation matrix between the Grassmann stochastic phase space variables at time t and those at time t+δt. If the evolution between t_0 and t_f is divided into small steps, then (equation (3.48))

$\tilde g_p(t_f) = \sum_z \big[\Theta(t_{n-1})\cdots\Theta(t_0)\big]_p^{\,z}\,\tilde g_z(t_0).$

This shows that the stochastic Grassmann variables at the final time depend linearly on the stochastic Grassmann variables at the earlier time via a stochastic transformation matrix that involves only c-numbers. A similar feature applies in Plimak, Collett and Olsen [22]. This linearity feature is only present for the B distribution. For the P distribution the drift vector C_p(g) involves terms that depend cubically on the Grassmann variables. Although the P distribution function is still equivalent to Ito stochastic equations, the third order feature of the drift vector results in the Ito equations not being of use in numerical calculations [22,24]. However, the P distribution function and its equivalent Ito stochastic equations are still useful for formal theory. The number of stochastic c-number Wiener increments involved is 2n², which increases as the square of the number of modes. A similar number of increments applies in the Gaussian phase-space treatment developed by Corney and Drummond [21].
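The composition of the per-step c-number transformations can be sketched numerically: the matrices L and K_a below are random stand-ins for the drift and noise coefficient matrices, and the point illustrated is only that the composite transformation over many steps is itself a single c-number matrix applied to the initial values.

```python
import numpy as np

# Sketch of g(t + dt) = Theta @ g(t), with per-step
# Theta = I + L*dt + sum_a K_a * dw_a, iterated over n steps.
# L and the K_a are random stand-ins; the composite product of the
# Thetas is a single c-number transformation matrix.

rng = np.random.default_rng(4)
dim, n_noise, dt, n_steps = 4, 8, 0.001, 50
L = rng.normal(size=(dim, dim))
K = rng.normal(size=(n_noise, dim, dim))

Thetas = []
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_noise)
    Thetas.append(np.eye(dim) + L * dt + np.einsum('aij,a->ij', K, dw))

Theta_total = np.eye(dim)
for Th in Thetas:
    Theta_total = Th @ Theta_total   # later steps act on the left

g0 = rng.normal(size=dim)            # c-number stand-ins for g(t_0)
g_step = g0.copy()
for Th in Thetas:
    g_step = Th @ g_step             # step-by-step propagation
```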

QCFs, populations and coherences as stochastic averages
The QCF, Fock state populations and coherences are now given by stochastic averages of products of stochastic Grassmann variables instead of phase space integrals (equation (3.50)).

Stochastic evolution of QCF, populations and coherences

By dividing the evolution between t_0 and t_f = t_{n+1} into equal small intervals with t_{i+1} = t_i + δt, i = 0, …, n, then using equation (3.47) in each factor for a product of the stochastic Grassmann variables at time t_i + δt, we can place all the stochastic Grassmann variables for time t_i together in order and finally take the stochastic average of both sides, where the uncorrelation property (equation (3.32)) has been used. This shows that the stochastic averages of products of stochastic Grassmann variables at time t_i + δt are given by sums over stochastic averages of products of the c-number stochastic quantities multiplied into the stochastic averages of products at time t_i. The stochastic averages at the initial time t_0 are determined from the initial density operator at time t_0 using equation (3.51) (see [27], equation (25)).
Thus, numerical calculations for the dynamical and thermal evolution of the QCF, Fock state populations and coherences can be carried out without having to represent the Grassmann variables themselves on a computer.

Equations for QCF in terms of M matrix
We can divide the evolution between t_0 and t_f into small intervals, whereupon the vector of stochastic averages at time t_f is given by successively applying matrix multiplication by M matrices to the vector X̄(t_0) listing the stochastic averages of products of the g̃_z(t_0) at time t_0, with the same numbers of factors as for the g̃_p(t_f). The stochastic averages for X̄(t_0) are determined from the initial density operator at time t_0.
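The M-matrix idea can be sketched for pair products using commuting c-number stand-ins (the Grassmann anticommutation bookkeeping of the paper is deliberately omitted here). Per step, pair products propagate via the Kronecker product Θ⊗Θ, and by the uncorrelation property the stochastic average over the whole evolution factorises into a product of per-step averages M = mean(Θ⊗Θ). The matrices L and K below are arbitrary stand-ins.

```python
import numpy as np

# Sketch (commuting stand-ins, Grassmann signs omitted): pair
# products propagate per step via Theta (x) Theta, and the per-step
# stochastic average M = mean(Theta (x) Theta) can be checked
# against the analytic value  (I+L dt)(x)(I+L dt) + (K (x) K) dt.

rng = np.random.default_rng(5)
dim, dt, m = 2, 0.01, 20_000
L = rng.normal(size=(dim, dim))
K = np.array([[0.0, 1.0], [1.0, 0.0]])     # toy noise coupling matrix

def theta_sample():
    dw = rng.normal(0.0, np.sqrt(dt))
    return np.eye(dim) + L * dt + K * dw

M = np.mean([np.kron(th, th)
             for th in (theta_sample() for _ in range(m))], axis=0)
A = np.eye(dim) + L * dt                    # deterministic part
```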
Hence no computer representation of Grassmann variables is involved, thus enabling numerical calculations for dynamical and thermal evolution. Here, we will treat a simple four mode problem involving two spin 1/2 fermions in free space in order to test Grassmann phase space theory numerically. For this case we can obtain analytic results to compare with. We will consider the dynamical evolution of coherences between two distinct Cooper pair states in the situation where the system is initially in one of these states, and where relaxation processes and external potentials are ignored. A non-numerical initial treatment of how the two-fermion-number correlations develop, owing to coupling between the two distinct Cooper pair states via the short-range interatomic interactions, is set out in [12,24,27].
where the mode functions are box normalized in a volume V = L³.
The mode annihilation operators are denoted â_{kα} for momentum k and spin α.

Hamiltonian
Hamiltonian dynamics will be considered based on equation (3.13), with coupling constant g describing the interaction terms. In first quantization the interaction is a zero-range interaction between fermions of opposite spin, where r denotes space and α denotes spin, with ᾱ denoting the spin opposite to α. The free fermion kinetic energy is the usual sum of single-particle terms. Expressions for h_ij and ν_ijkl are obtained using equations (3.14), (3.15) above. As the spatial mode functions are plane waves normalised in a box of volume V, the periodic boundary conditions based on equation (4.2) result in many terms being zero and many terms being equal. The one-body terms are diagonal and all equal. Several of the two-body terms are zero and the remainder are equal. In terms of the general notation in equations (3.14), (3.15), the non-zero h_ij and ν_ijkl can then be listed. The states |Φ_1⟩, …, |Φ_4⟩ are non-magnetic, having one fermion in a spin + mode and one fermion in a spin − mode. The states |Φ_5⟩ and |Φ_6⟩ are magnetic, with both fermions in either a spin + mode or a spin − mode. These all have the same energy if fermion-fermion interactions are ignored.

Initial condition
We consider the case where the initial state is the pure Cooper pair state |Φ_3⟩, from which a coherence with the other Cooper pair state |Φ_4⟩ should develop. This coherence is the one to be determined, and its presence indicates that coupling between Cooper pair states (each with two fermions with opposite momenta and spins) is taking place. The work of Plimak et al [22] shows that this leads to anomalous number correlations of the form ⟨n̂_a n̂_b⟩ − ⟨n̂_a⟩⟨n̂_b⟩; nonzero correlations for the (k, k) and (−k, −k) cases are unexpected. The anomalous number correlations are due to the fact that the true energy eigenstates are superpositions of the two Cooper pair states. The Ito equations are given in [12] and [24] (see equations (93), (94), and Corrigenda) and will not be repeated here. Because there is no cross-coupling between the g̃_i and the g̃⁺_j in the diffusion matrix, the overall number of Wiener increments involved is 16, which is less than the expected number 2n² = 32. The Wiener increments have been numbered 1, 2, …, 16. From inspection of the Ito equations we can then identify the elements of the Θ matrix of equation (3.47) (refer also to equations (3.44) and (3.45)) that link the stochastic phase space variables between times t and t+δt. This matrix is given in two parts, with Θ⁻ linking the g̃'s and Θ⁺ linking the g̃⁺'s.
A direct evaluation of the stochastic average provides an expression for the short time behaviour of the coherence. There are terms involving the stochastic averages of products of two, three and four Wiener increments. Those involving three are always zero. Those involving four are sums of products of stochastic averages of two Wiener increments. These are zero because the products involve pairs of different Wiener increments, such as δw̃_1 δw̃_6, whose stochastic averages vanish.

Elimination of free time evolution and dimensionless time variables
From the form of the coupling matrices we see that the free evolution for all four modes has the same frequency ω, and as we have just seen, ω does not appear in the result for the short time coherence. To simplify the numerical calculation of the coherence, one can eliminate the free evolution frequency ω from the Ito stochastic equations in [24] (see equations (93), (94), and Corrigenda) by removing the free evolution phase factors from the stochastic variables and introducing the dimensionless time variable T = (g/ħV) t.

As a first test of numerical calculations using Grassmann phase space theory we will evaluate the coherence numerically for a small time interval with δT ≪ 1, based on the expression in equation (4.28). The populations and coherences for the states |f_1⟩, |f_2⟩, |f_5⟩ and |f_6⟩ are specified by other X_k's. A list of all the components of the X vector is given in appendix B. Our focus is on determining the coherence between |f_3⟩ and |f_4⟩, given by X_16. As only the population of |f_3⟩, given by X_15, is initially non-zero, the short-time coupling to the coherence X_16 will be determined by the matrix element M(16,15). Hence we first determine this matrix element, using the same approach as in section 4.
This shows that the population X_15 is coupled to the coherence X_16 to order δT. Elements such as M(36,15) are typical of the remainder, and can be found in appendix C. This means that coherences and populations other than those specified by X_15, X_22 and X_16, X_21 will not be coupled to the initial population X_15, so only the 4×4 submatrix of M specified by these elements is required to determine the evolution of the populations X_15, X_22 and the coherences X_16, X_21. This avoided wasting time determining numerically all 36×36 elements of M.
To show more formally that this is the case for finite time T = n·δT we have (in an obvious notation)

Symmetry of matrix M
When evaluated analytically the matrix M can be shown to be symmetric. Although the quantity Θ̃_{α,β} whose stochastic average gives the matrix element M_{α,β} may involve different Wiener increments δW̃_a from those in Θ̃_{β,α}, the two sets can be seen to be in one-to-one correspondence, so when stochastic averaging occurs the outcome is the same. This leads to the symmetry result M_{α,β} = M_{β,α}.

Numerical results
As indicated above, we have introduced a dimensionless time variable δT given by δT = (g/ħV) δt, and the coherence will now be calculated numerically in the short time regime for various δT. The new Wiener increments δW̃_a are normalized as in equation (3.36), but now with the interval being the dimensionless quantity δT. The stochastic average in equation (3.52) or (4.28) that is involved in calculating the coherence as a function of dimensionless time δT will now be carried out numerically based on the fundamental definition (equation (3.25)) of a stochastic average. The result for short time δT should be proportional to δT (with a factor 1/i). The calculations were carried out using MatLab, which allowed us to run six labs in parallel. The stochastic averaging was carried out over m=1000 trajectories in each lab, and in each trajectory the normalized Wiener increments were obtained using the MatLab command mvnrnd, for which the mean values of the random variables are set to zero and the covariance is set to δT. Two other MatLab commands can be used for the same purpose, namely normrnd and randn, for which the mean values of the random variables are set to zero and the standard deviation is set to √δT. We display a plot of the numerically calculated coherence between states f_3 and f_4 for various choices of δT, in units of 1/i, with m=1000 trajectories in each case. The analytic result from equation (4.29) is also shown. The results show that the numerical calculation of the coherence agrees well with the analytic formula. We then calculate the same coherence for a finite time interval with T ranging from 0 to π, based on equation (3.65) and with the same initial condition. We compare the result with the analytic expression given by equation (4.41). All calculations were carried out using a standard desktop computer with parallel processing facilities. The programming issues are discussed below in appendix D.
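The noise-generation step described here can be sketched in Python, with numpy's multivariate normal generator playing the role of MatLab's mvnrnd: 16 Wiener increments per trajectory, zero mean, covariance δT·I, over m trajectories. The averaged quantities below are just the sample statistics of the increments themselves.

```python
import numpy as np

# Sketch of the noise generation step (numpy analogue of mvnrnd):
# 16 Wiener increments per trajectory, zero mean, covariance dT * I,
# drawn for m = 1000 trajectories, then sample-averaged.

rng = np.random.default_rng(6)
dT, n_inc, m = 0.01, 16, 1000

mean = np.zeros(n_inc)
cov = dT * np.eye(n_inc)
dW = rng.multivariate_normal(mean, cov, size=m)   # shape (m, 16)

sample_mean = dW.mean(axis=0)   # should be near 0
sample_cov = dW.T @ dW / m      # should be near dT * I
```

Equivalently, since the covariance is diagonal, `rng.normal(0.0, np.sqrt(dT), size=(m, n_inc))` gives the same statistics, mirroring the normrnd/randn alternative with standard deviation √δT.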
Here we will only present results. Table 1 shows the stochastically determined matrix elements across the six labs. In this subsection we consider the effect on the accuracy of the numerical results for the short time coherence of varying the ensemble size for the stochastic calculations and of changing the time interval. It was expected that the accuracy would improve as the ensemble size becomes larger, and that for a given ensemble size the accuracy would improve as the time interval gets shorter. This information will become important in determining what ensemble sizes and time intervals will be suitable for numerical applications on more complex fermion systems. Table 7 shows that for this δT range the % error decreases as the ensemble size increases. For ensemble size m=1000, the average % error is approximately 2.8%. The analogous results for a second run of the calculations (Run_2) are given in table C4 in appendix C. The two figures show that for this δT range the error decreases as the ensemble size increases. For ensemble size m=1000, the stochastically calculated coherence C(f_4; f_3) is in agreement with the analytic result, with an average standard error of approximately 4% for the two program runs made. The figures also show that the error increases as δT increases, especially for smaller ensemble sizes. The errors in both Real(C) and Imag(C) are of order 10⁻⁴ at δT ∼ 0.01 with m=1000.

Conclusion about the numerical test
The four mode Cooper pair system has been treated using Grassmann phase space theory methods. The stochastic calculations for the short time behaviour and the finite time behaviour of C(f_4; f_3) have been performed and have validated the approach by agreeing well with the corresponding analytic results for different choices of time interval. However, the work on this simple problem has indicated that the desktop computer (although parallel processing was used) would be inadequate for more complex calculations, and that a supercomputer facility will be needed.

Summary and conclusion
In this paper we have outlined the theoretical methods used in cold atom physics and described phase space theory approaches, including the recently developed Grassmann phase space theory for fermions. A brief overview of the physics of cold atoms was presented, focusing on the BEC/BCS crossover in Fermi gases that is of particular interest for applying Grassmann phase space theory. We have set out the key equations in Grassmann phase space theory and described how numerical calculations in GSPT can be carried out. We then applied GSPT to a simple four mode Cooper pair model to test the validity of the stochastic approach by comparing the numerical results to the analytic results for the short and finite time behaviour of a coherence between two Fock states. In this first correct numerical application of GSPT to a fermion problem, we have found the numerical stochastic calculations based on GSPT and the known analytic results for the four mode Cooper pair model to be in good agreement, indicating that GSPT is a valid approach. Furthermore, we have shown that GSPT can be applied in stochastic calculations without the need to represent Grassmann variables on the computer. Numerical calculations are feasible because Grassmann stochastic variables at later times are related linearly to such variables at earlier times via c-number stochastic quantities. GSPT should be applicable to topics involving larger numbers of modes and fermion numbers, though such applications may require a super-computer with parallel processing capabilities. Large fermion number applications could be based on using the Grassmann field version of GSPT, which has also been developed (see [12, 25-27]).

Appendix A. The matrix L

L is a 2n×2n c-number matrix with rows p and columns r listed as 1, 2, …, 2n. The non-zero elements of L can be identified from the following table, with p, r = 1, …, 2n.
For each row with a given form for g_p, the matrix element L_p^r is given by the stated L^A_ij when g_r is given in the same row, and zero otherwise. For example, with p in the range 1, …, n the matrix element L_p^r is zero if r is in the range n+1, n+2, …, 2n, since C^-_i does not involve any g⁺_j. The matrix L has 2n rows and 2n columns, as the row and column indices are the joint quantities p and r. Thus, the matrix L may be formatted in terms of n×n submatrices, where within each submatrix the rows are listed as i and the columns as j. Again, for each submatrix an element specified by i (row) and j (column) can also be listed for L via p (row) and r (column), where for each submatrix ij uniquely specifies p, r. An example for n=2 illustrates the procedure. The elements of the matrices M^{-+} and M^{+-} are zero. Here * means taking the complex conjugate of c-numbers. In this case the diffusion matrix has a simple form, and using these matrices the expressions for the quantities K_p^{r a} and L_p^r that specify the deterministic and noise terms in the Ito stochastic equations follow. Here the terms include stochastic averages of products of zero, one, two, three and four δW's, but no two δW's are the same, so only the products involving no δW's contribute.
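The block structure described above can be sketched by assembling a 2n×2n matrix from n×n submatrices, assuming (as the text indicates) that rows p = 1..n couple only to columns r = 1..n (the C⁻ block) and rows p = n+1..2n only to columns r = n+1..2n (the C⁺ block). The submatrix entries are random stand-ins for the L^A_ij of the paper.

```python
import numpy as np

# Sketch: assemble the 2n x 2n matrix L from n x n blocks, with the
# g-variables coupling only to g-variables (C^- block) and the
# g+-variables only to g+-variables (C^+ block). Block entries are
# random stand-ins for the actual coefficients.

n = 2
rng = np.random.default_rng(7)
L_minus = rng.normal(size=(n, n))   # stand-in for the C^- block
L_plus = rng.normal(size=(n, n))    # stand-in for the C^+ block

L = np.zeros((2 * n, 2 * n))
L[:n, :n] = L_minus                 # rows p=1..n, columns r=1..n
L[n:, n:] = L_plus                  # rows p=n+1..2n, columns r=n+1..2n
```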
Terms including the stochastic average of products of two, three and four δW's, where no two δW's are the same, all give zero. Tables 1 and C1-C3 show the elements of the 4×4 submatrix of M, for example M(21,21) and M(21,22), across the 6 parallel processing labs. The results were obtained using the MatLab parallel processing program for the 4×4 M matrix. The average matrix element, the average error and the σ error are also shown for each of these matrix elements. In figure C1 note that for the first two plots (for imag(15, i) with i = 15, 16, 22), the curves for the stochastic average points for M(15,15) coincide exactly with those for M(16,16), and those for M(15,21) coincide exactly with those for M(16,22). This is because the same pairs of Wiener increments are involved. In each lab, 1000 choices of the 16 Wiener increments are generated, and the result for each specific element of the 4×4 matrix M is calculated using equation (3.47). These are then averaged to give the stochastic value for the matrix element. For different labs, the 1000 choices of the 16 Wiener increments will be different, hence the 6 independent (and different) results for each specific element of M shown in tables 1 and C1-C3. The average of these over the 6 labs is also quoted in tables 1 and C1-C3, and amounts to determining the stochastic averages for an ensemble size of m = 6×1000 = 6000. It will be noticed that the results from the 6 labs for certain pairs of matrix elements are the same, for example M(15,21) and M(16,22). This is because the most important pairs of Wiener increments are the same for both members of such a pair. For the diagonal elements the noise terms make negligible difference, since the leading terms in these elements are of order 1 (rather than of order δT).
The stochastic averaging process determines each element of the 4×4 matrix to an accuracy of about 4% for the effective ensemble size of 6000 independent calculations. The accuracy is specified by the standard deviation of the results in the 6 labs from the theoretical value for the matrix element, as shown in tables 1 and C1-C3.

C.4. Results for Run 2 for the short time behaviour parallel processing program

Table C4 shows that for this δT range the standard error decreases as the ensemble size increases. For ensemble size m=1000, the average standard error is approximately 5.6%. Table C4 shows the standard error for the numerically calculated imag(C) across the 6 parallel processing labs, for δT ≪ 1. The results were obtained in run 2 of the MatLab parallel processing C(f_4; f_3) program for δT ∈ [0, 0.01] and m=100, 300, 1000.

Appendix D. Parallel processing

Parallel processing generates multiple simultaneous instruction streams. Multiple processors or cores, sharing the memory of a single computer, execute these streams. To make use of this concept, certain commands must be employed within the source code. The process starts by opening a MATLAB pool; this is done using the 'parpool' command, which creates workers (or labs) to do parallel computations. On the i7 processor at SUT there are 6 cores. The parpool command creates 6 labs: a client and 5 normal labs (or workers), one for each core present. The client is the head MATLAB session; it creates workers, distributes work and receives results. The workers/labs are independent, headless MATLAB sessions. They do not share memory, and are created before being used and destroyed after the program execution takes its course.

D.1. The SPMD construct
To make use of multithreaded parallelism, one uses SPMD (single program, multiple data). This construct explicitly and/or automatically divides work and data between workers/labs and communicates between them. Data created before the SPMD block is copied to all workers, whereas data created within the SPMD block is unique to each worker and appears as a composite object on the client. Memory is not shared by the workers. Data can be shared between workers using special data types: composite, distributed and codistributed. The parallel processing toolbox also contains parallel for loop constructs and commands for manipulating tall arrays, where the processing for such arrays is distributed among the available workers with respect to their rows or columns. To test the finite time behaviour and compare the numerical results with the analytical ones, a parallel processing program was written to calculate the stochastic average of C(f_4; f_3) for δT ≪ 1. Initially the program employed the 36×36 matrix M(α, β), which was to be multiplied successively to obtain the stochastic average of the (16,15) element of M^n, where T = n δT. A considerable processing time was noted, as the program was evaluating all 1296 elements of the 36×36 M matrix for each member i of the ensemble. It then displayed the stochastic averages of all the M elements in files (which included the above mentioned 16 elements of rows 15, 16, 21 and 22). Performing the calculation in that manner proved to be extremely taxing in terms of complexity and computer processing time. For example, even with the parallel processing capability utilized, only 10 members of the ensemble were processed per hour. At this rate, it would have taken the program 4 days to process 1000 ensemble members.
Since the elements of rows 15, 16, 21 and 22 are the only ones coupled to the initial state, a simpler parallel processing program was later developed with the aim of reducing the processing time relative to the initial C(f_4; f_3) for δT ≪ 1 program. This modified program concentrated only on the 16 elements of the 4×4 M submatrix, and was used to confirm the analytic values of this submatrix for δT=0.01, and then to calculate the finite time behaviour of the element M(16,15). The process began by calculating the stochastic average values for all the matrix elements of the 4×4 M submatrix across the 6 labs available, and then evaluating the average matrix element for each of the elements. The resulting numerical M was found to be not quite complex symmetric, unlike its analytic counterpart. One then had to resort to biorthogonality to calculate the required finite time behaviour of the coherence C(f_4; f_3) from the final stochastic results for the 4×4 matrix M with δT=0.01. The calculation was quite informative, as its results indicated that the stochastic approach does confirm the analytic result for the finite time behaviour of C(f_4; f_3) given by equation (4.41) (see also equation (3.65)), with T ranging from 0 to π. To calculate the finite time behaviour, the program uses the ensemble size specified; within each lab the main ensemble can be subdivided by taking its subsets, as was done before when using the sequential programs.
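The biorthogonality step can be sketched as follows: for a diagonalisable (generally non-symmetric) matrix M with right eigenvectors x_i and left eigenvectors y_i (the eigenvectors of M^T), one has M^n = Σ_i λ_i^n x_i y_i^T with the normalisation y_i^T x_j = δ_ij. Here the left eigenvectors are read off from the rows of X⁻¹, and the 4×4 matrix is a random stand-in for the stochastically determined one.

```python
import numpy as np

# Sketch of the finite time evolution via eigenvalues and
# biorthogonal eigenvectors of M and M^T:
#   M^n = sum_i lam_i**n * outer(x_i, y_i),  with  y_i^T x_j = delta_ij.
# The columns of inv(X).T are the biorthogonal left eigenvectors.

rng = np.random.default_rng(8)
M = rng.normal(size=(4, 4))         # stand-in for the stochastic M

lam, X = np.linalg.eig(M)           # right eigenvectors as columns
Y = np.linalg.inv(X).T              # biorthogonal left eigenvectors:
                                    # Y.T @ X = I

n = 5
Mn = sum(lam[i] ** n * np.outer(X[:, i], Y[:, i]) for i in range(4))
```

Applying M^n built this way to the initial average vector X̄(t_0) gives the finite-time averages in one step, rather than by repeated multiplication.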
• For m=500 and for one run of the program, the parallel pool processes data and produces results relevant to 6 different C points (one for each lab) depending on 6 different dT values assigned to each lab. In other words, for size m=500 ensemble with its subdivisions, each run of the program involves 6 different calculations which are performed for the various stochastic averages involved within each lab, depending on different sets of randomly generated noise terms and different values of dT . Therefore, for m=500, 6 multiplied by 500=3000 records are processed for one run, 500 records for each lab.

D.1.4. Final remarks
The parallel pool processes 42 records in 30 min (actually 42 × 6 = 252 records, or ensemble members' calculations, across the 6 parallel processing labs available). Consequently, and with a simple calculation (assuming the same number of records is processed in similar times), the program run takes approximately 5.9 hours to reach completion for a size m=500 ensemble, after which data for 6 different C points for that ensemble size and its subdivisions can be gathered. This compares with only one C point per run for that ensemble size with the old sequential program, for which the run took 3.5 hrs to reach completion. Therefore, the data needed for plotting the results for a size m=500 ensemble and its subdivisions can be gathered in just two program runs, rather than in 10 runs. As it turned out, the processing time is not exactly the same for the same number of records, and the program took about 3 hours and 40 minutes to process data for an ensemble of size m=1000 (6000 records are processed in that case). The parallel processing capability could also be used for calculating ensemble averages of the M matrix elements, and this is expected to save processing time. For example, if each of 4 parallel processors calculates ensemble averages of m=100 random generations of the M matrix elements and the outcomes for the processors are then averaged, it should be equivalent to an ensemble average over m=400 random generations, achieved with the processing time of one processor.
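The equivalence claimed in the final remark can be sketched directly: averaging the per-processor means of 4 equal chunks of m=100 samples reproduces the single ensemble average over m=400 samples exactly (equal chunk sizes), which is why distributing the ensemble across processors is statistically equivalent to one large ensemble. The samples are arbitrary stand-ins for M matrix element values.

```python
import numpy as np

# Sketch: the mean of 4 equal-chunk means equals the overall mean of
# all 400 samples, so 4 processors each averaging m=100 generations
# and then averaging their outcomes is equivalent to one m=400
# ensemble average.

rng = np.random.default_rng(9)
samples = rng.normal(size=400)                       # stand-in values

chunk_means = samples.reshape(4, 100).mean(axis=1)   # one per processor
combined = chunk_means.mean()                        # average of outcomes
overall = samples.mean()                             # single m=400 average
```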