Covariant operator bases for continuous variables

Coherent-state representations are a standard tool for dealing with continuous-variable systems, as they allow one to visualize quantum states efficiently in phase space. Here, we work out an alternative basis consisting of monomials in the basic observables, with the crucial property of behaving well under symplectic transformations. This basis is the analogue of the irreducible tensors widely used in the context of SU(2) symmetry. Given the density matrix of a state, the corresponding expansion coefficients in that basis constitute the state multipoles, which describe the state in a canonically covariant form that is both concise and explicit. We use these quantities to assess properties such as quantumness and Gaussianity.


Introduction
The notion of observable plays a central role in quantum physics [1]. The term was first used by Heisenberg [2] (beobachtbare Größe) to refer to quantities involved in physical measurements and thus having an operational meaning. Observables give us information about the state of a physical system, and their values may be predicted by the theory. According to the conventional formulation, observables are represented by self-adjoint operators acting on the Hilbert space associated with the system [3,4].
Given an abstract observable, one has to find its practical implementation. For discrete degrees of freedom, the associated Hilbert space is finite dimensional and the observable is then represented by a matrix whose explicit form depends on the basis. Choosing this basis so that it possesses specific properties can be tricky [5][6][7][8]. In particular, when the system has an intrinsic symmetry, the basis should have the suitable transformation properties under the action of that symmetry. This idea is the rationale behind the construction of irreducible tensorial sets [9], which are crucial for the description of rotationally invariant systems [10] and can be generalized to other invariances [11]. In this way, the quantum state is represented by its associated multipoles, which are precisely the moments of the generators arranged in a manifestly invariant form.
Things get more complicated in the continuous-variable setting, where the Hilbert space is infinite dimensional. The paradigmatic example is that of a single bosonic mode, where the Weyl-Heisenberg group emerges as a hallmark of noncommutativity [12]. As Fock and coherent states are frequently regarded as the most and least quantum states, respectively, they are typically used as bases in quantum optics. Coherent states constitute an overcomplete basis that lies at the heart of the phase-space formulation of quantum theory [13][14][15][16][17][18][19][20][21][22], where observables become c-number functions (the symbols of the operators). This is the most convenient construct for visualizing quantum states and processes for continuous variables (CV).
An alternative approach, well suited to comparing the classical and quantum evolutions of a given system, is to decompose the wave function (or density matrix) into its infinite set of statistical moments. These moments contain the same physical information as the wave function, but they have the advantage of being observable. Their evolution equations give rise to Hamiltonian equations with quantum corrections coming from momentum variables [23][24][25][26][27][28][29]. It thus seems natural to look at the monomials

T̂_{Kq} = â^{†(K+q)} â^{(K−q)},    (1)

with K = 0, 1/2, 1, . . . and q = −K, . . ., +K. Here, we restrict ourselves to a single mode, with bosonic annihilation and creation operators â and â†, respectively. The extension to multiple bosonic modes is direct.
We explore here the properties of this basis and check its explicit covariance under symplectic transformations (i.e., linear canonical transformations), which is not apparent at first sight [30]. Some work along these lines can be found in Ref. [31], but restricted to the symmetric (or Weyl) ordering. Here, we examine the inverses of these monomials for arbitrary operator orderings, so they can be used to directly expand any quantum operator, as required for proper tensorial sets for CV. These operators can then be added to the quantum optician's toolbox and used by anyone working in CV.
When an arbitrary pure or mixed density matrix is expanded in the basis (1), its expansion coefficients are the moments, dubbed state multipoles, which convey complete information. These moments couple parts of the density matrix corresponding to different particle numbers, unlike the 2K-particle density-matrix expansions of quantum statistical mechanics, although some of the q = 0 moments appear in both contexts [32][33][34]. For CV, moments have been considered for studying quantumness [35,36]. Here, we inspect how the multipoles characterize the state. Drawing inspiration from SU(2), we compare states that hide their information in the large-K coefficients with those whose information is mostly contained in the smallest-K multipoles. The result is an intriguing counterplay between the extremal states of the other representations, including Fock states, coherent states, and states with maximal off-diagonal coefficients in the Fock basis.
There are many avenues to explore with the monomial representation. After a brief review of the basic concepts in Sec. 2, we examine the properties of the basis (1) and its inverse in Sec. 3. The corresponding multipoles appear as the expansion coefficients of the density matrix in that basis. The covariance under symplectic transformations tells us how the different parts of a state are interconverted through standard operations. Note that we consider mainly normally ordered polynomials, but everything can be extended to antinormally and symmetrically ordered monomials. In Sec. 4 we introduce the concept of the cumulative multipole distribution and its inverse, find the extremal states for those quantities, and determine in this way which states are the most and least quantum. The direct connections between our multipoles, tomography, quantization, and quasiprobability distributions are elucidated in Sec. 5. Our conclusions are finally summarized in Sec. 6.

Background
We provide here a self-contained background that is familiar to quantum opticians. The reader can find more details in the previously quoted literature [13][14][15][16][17][18][19][20][21][22]. A single bosonic mode has annihilation and creation operators satisfying the commutation relation [â, â†] = 1. These can be used to define the Fock states |n⟩ as excitations of the vacuum |vac⟩, which is annihilated as â|vac⟩ = 0, as well as the canonical coherent states |α⟩. Both families resolve the identity: Σ_n |n⟩⟨n| = 1 and (1/π) ∫ d²α |α⟩⟨α| = 1. The coherent states can also be defined as displaced versions of the vacuum state, |α⟩ = D(α)|vac⟩, via the displacement operators, which take numerous useful forms; e.g., D(α) = exp(αâ† − α*â) = e^{−|α|²/2} e^{αâ†} e^{−α*â}. These obey the composition law D(α)D(β) = e^{(αβ*−α*β)/2} D(α+β) and the trace-orthogonality condition Tr[D(α)D†(β)] = π δ²(α−β). Their matrix elements in the coherent-state basis can be found from the composition law, and in the Fock-state basis they are given by [37] ⟨m|D(α)|n⟩ = √(n!/m!) α^{m−n} e^{−|α|²/2} L_n^{(m−n)}(|α|²) for m ≥ n, where L_n^{(α)}(•) denotes the generalized Laguerre polynomial [38]. Any operator F̂ can be expressed in the Fock basis as well as in the coherent-state basis. Moreover, it is always possible to express the coherent-state representation in a diagonal form. For the particular case of the density operator ρ̂, this yields the Glauber-Sudarshan P function [39,40], ρ̂ = ∫ d²α P(α) |α⟩⟨α|, with the inversion formula given in [41]. The same holds true for any operator F̂ for which ⟨−β|F̂|β⟩ e^{|β|²} is square-integrable.
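The Fock-basis matrix elements of D(α) quoted above can be checked numerically. The following sketch (an illustrative aside, not part of the original derivation; `dim` is an assumed truncation) compares the closed Laguerre form for m ≥ n with a brute-force matrix exponential of the truncated mode operators:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.special import eval_genlaguerre

def displacement_element(m, n, alpha):
    # <m|D(alpha)|n> for m >= n, via the generalized Laguerre polynomial [37]
    x = abs(alpha) ** 2
    return (np.sqrt(factorial(n) / factorial(m)) * alpha ** (m - n)
            * np.exp(-x / 2) * eval_genlaguerre(n, m - n, x))

def displacement_matrix(alpha, dim):
    # Brute force: D(alpha) = expm(alpha a^dag - alpha* a) in a truncated space
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    return expm(alpha * a.conj().T - np.conj(alpha) * a)
```

With a truncation around `dim = 40` and small |α|, the low-order elements of the truncated exponential agree with the closed form essentially to machine precision.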
One identity that often shows up in this realm is an expression for the vacuum in terms of normally ordered polynomials [42]: |vac⟩⟨vac| = :exp(−â†â):. This allows us to express any unit-rank operator from the Fock basis as |m⟩⟨n| = (â†^m/√m!) :exp(−â†â): (â^n/√n!). Since any operator can be written as a linear combination of operators |m⟩⟨n|, this directly guarantees that a normally ordered expression always exists for any operator.

State multipoles
As heralded in the Introduction, the monomials (1) are the components of finite-dimensional tensor operators with respect to the symplectic group Sp(2, R). Their transformation properties are examined in Appendix A. To complete the construction, we have to seek operators T̃_{Kq} satisfying the proper orthonormality conditions to be inverses of the monomials. Using the trace-orthogonality condition of the displacement operators, we can rewrite this condition in terms of integrals over phase space; by inspection, orthonormality is attained when the derivatives of the delta function match perfectly upon integration by parts, which fixes the inverse operators T̃_{Kq}. Interestingly, they appear as moments of the operators introduced in the pioneering work by Agarwal and Wolf [43]. This inversion process can be repeated with otherwise ordered polynomials, and the inverse operators again appear as moments of the other operators introduced therein. In Appendix B we sketch the procedure for the case of symmetric order; our technique is useful for finding the inverse operators for arbitrary operator orderings, whenever they exist. Once they are known, it is easy to expand any operator, such as a density matrix ρ̂, as ρ̂ = Σ_{Kq} ⟨T̃_{Kq}⟩ T̂_{Kq}, where ⟨T̃_{Kq}⟩ = Tr(ρ̂ T̃_{Kq}), following the standard notation for SU(2) [10], will be called the state multipoles. They correspond to moments of the basic variables, properly arranged. Conversely, we can expand operators in the basis of the inverse operators, with ⟨T̂_{Kq}⟩ = Tr(ρ̂ T̂_{Kq}) now being the inverse multipoles. Since the inverse operators inherit the Hermitian-conjugation properties of the monomials, the multipoles and inverse multipoles simply transform as q ↔ −q under complex conjugation. The purity of a state has a simple expression in terms of the multipoles. It is more challenging to express the trace of a state in terms of the multipoles, because the operators T̂_{Kq} are not trace class; however, by formally writing Tr[D(β)] = π δ²(β) exp(−|β|²/2), we can compute

Tr(T̃_{Kq}) = δ_{K0} δ_{q0},    (25)

such that normalization dictates that the inverse multipoles satisfy 1 = Tr(ρ̂) = ⟨T̂_{00}⟩.
Extension of all of these results to the multimode case merely requires taking products of copies of our single-mode results, since operators acting on different bosonic modes commute with each other. Considering the extended monomial operators, the extended inverse operators are obtained as products of our single-mode inverse operators. The extended orthonormality conditions follow immediately, which yields results such as the purity of a multimode state. All such generalizations to the multimode case are similarly straightforward, so we proceed by focusing only on the essential computations for a single mode.
In principle, the complete characterization of a CV state requires the knowledge of infinitely many multipoles. For a Gaussian state, only moments up to K = 1 are needed, as these encode all of the means and covariances of the position and momentum operators, which are the only relevant degrees of freedom of a Gaussian state. This suggests that either the inverse multipoles ⟨T̂_{Kq}⟩ for larger values of K or the corresponding multipoles ⟨T̃_{Kq}⟩ characterize the non-Gaussianity of a state.
In consequence, we have to calculate the multipoles of arbitrary states. Before that, we consider the simplest cases of coherent and Fock states, for which the calculations are straightforward. Starting with coherent states, using (20) and recalling the Rodrigues formula for the generalized Laguerre polynomials [38], we obtain the result plotted in Fig. 1, which shows the magnitudes of these multipole moments versus |α|² for various values of K and q. As we can appreciate, they decrease rapidly with K and large |α|, occasionally vanishing at the values of |α| for which a particular Laguerre polynomial vanishes. The overall structure follows the Laguerre polynomials diminished by the exponentially decaying function exp(−|α|²) and the factorial factor 1/(K − q)!. We see that, even for states with a large amount of energy, the majority of the information may be contained in the lower-order moments.
As for Fock states, we use the matrix elements of the displacement operator ⟨n|D(β)|n⟩ = exp(−|β|²/2) L_n(|β|²). Since these depend only on |β| and not on its phase, the q ≠ 0 terms all vanish, leaving only the q = 0 multipoles. The inverse multipoles are trivial in both cases. Note that the multipoles that vanish for Fock states have n > K, and the inverse multipoles that vanish for Fock states have K > n.
For arbitrary states, we note that, since any state can be expressed in terms of its P function, the multipoles follow by integrating the coherent-state results against P(α).

Figure 2: Parts of the state in the Fock basis coupled to by a particular inverse operator T̃_{Kq}. Each value of q labels the off-diagonal stripe (grouped with a particular colour) of the matrix that affects the value of ⟨T̃_{Kq}⟩. For example, when computing ⟨T̃_{K2}⟩, one uses only components of the density matrix in the q = 2 oval of the state's coefficients when represented in the Fock basis. Each value of K labels the maximal antidiagonal row that contributes to the value of ⟨T̃_{Kq}⟩. This antidiagonal row is characterized by the row and column numbers summing to 2K (the colours get darker as the antidiagonal increases from the top left). For example, when computing ⟨T̃_{3/2,2}⟩ one looks at the intersection of the K = 3/2 and q = 2 ovals and finds no overlap, so there is no contribution from any state, while the multipole with K = 3/2 in the q = −1 oval requires the coefficients of |2⟩⟨1| and |1⟩⟨0| when the density matrix is expanded in the Fock basis, as per the q = −1 oval telling us which components might contribute and the K = 3/2 oval telling us the limit beyond which no elements contribute.
To get more of a handle on these multipoles, especially when P is not a well-behaved function, it is more convenient to have an expression in terms of the matrix elements ϱ_{mn} = ⟨m|ρ̂|n⟩. This can be provided by expressing P(α) in terms of the matrix elements of the state in the Fock basis and derivatives of delta functions. More directly, we can compute the matrix elements of the inverse operators T̃_{Kq} in the Fock basis (for m ≤ n), which show that T̃_{Kq} can only have nonvanishing eigenstates when q = 0. Putting these together for an arbitrary state, we find a simple expression for the inverse operators in the Fock basis, whose orthonormality with the operators T̂_{Kq} can be directly verified. This expression equally serves to furnish a representation of the moments of the displacement operator in the Fock basis.
To understand this result, we plot in Fig. 2 a representation of the nonzero parts of different operators T̃_{Kq} in the Fock basis, which equivalently represents which elements of a density matrix ϱ_{mn} contribute to a given multipole ⟨T̃_{Kq}⟩. The contributing elements all lie on the 2qth diagonal, ranging over the first 2K + 1 antidiagonals. The inverse multipoles ⟨T̂_{Kq}⟩ depend on the −qth diagonal and all of the antidiagonals starting from the 2Kth antidiagonal. This is in contrast with expansions in quantum statistical mechanics, which only retain information about the parts of a state with a fixed number of particles (here q = 0). This picture makes clear a number of properties that will become useful for our purposes.
To conclude, it is common to encounter operators of a generic form f(â, â†). Quite often, it is necessary to find their normally ordered form :f(â, â†):, where : : denotes normal ordering. Such is necessary, for example, in photodetection theory [44]. Although algebraic techniques are available [45], the multipolar expansion that we have developed makes this computation quite tractable. We first compute the trace of :f(â, â†): with the inverse operators. The integral involved is nothing but the Fourier transform (note that βα* − β*α is purely imaginary) of the function f(α, α*) with respect to both of its arguments. If we call F(β, β*) this transform, the moments of F give the multipole moments of :f(â, â†):, denoted F_{Kq}. In other words, the moments of the Fourier transform of f(α, α*) give the expansion coefficients of :f(â, â†): in the T̂_{Kq} basis.
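The simplest instance of the reordering problem (not the Fourier-transform machinery above, just the lowest nontrivial identity) is â â† = â†â + 1, so that :â â†: = â†â differs from â â† by the identity. A quick numerical sanity check in a truncated Fock space (the truncation `dim` is an illustrative choice; the identity necessarily fails in the last Fock level because truncation clips â†):

```python
import numpy as np

dim = 12
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # truncated annihilation operator
ad = a.conj().T

# a a^dag = a^dag a + 1: the elementary reordering identity behind normal ordering
lhs = a @ ad
rhs = ad @ a + np.eye(dim)
```

The two sides agree on the upper-left (dim−1)×(dim−1) block; only the clipped top level deviates.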

Cumulative multipolar distribution
We turn now our attention to the cumulative multipole distribution, which in the context of SU(2) is a good quantifier of quantumness, and inspect whether the cumulative multipoles are good quantifiers of quantumness for CV as well. That is, we form the cumulative multipole distributions

A_M(ρ̂) = Σ_{K=0}^{M} |T̃_K|²,

with M = 0, 1/2, 1, . . ., where |T̃_K|² = Σ_{q=−K}^{K} |⟨T̃_{Kq}⟩|² is the squared Euclidean norm of the Kth multipole. The quantities A_M(ρ̂) can be used to furnish a generalized uncertainty principle for CV [31] and they are a good indicator of quantumness [46,47].
For spin variables, it has been shown that A M (ρ) are maximized to all orders M by SU(2)-coherent states, which are the least quantum states in this context, and vanish for the most quantum states, which are called the Kings of Quantumness, the furthest in some sense from coherent states [48][49][50].
What states maximize and minimize these cumulative quantities for CV? Do these extremal states behave as for SU(2), where they are respectively the least and most quantum states? Can we use the cumulative multipole moments as a proxy for inspecting the quantumness of a state, as can be done in SU(2)? Let us begin by examining a few of the lowest orders.

M = 0: For an arbitrary state, A_0 can be written in terms of the Fock-state coefficients, and it depends only on ϱ_00. It is uniquely maximized by the vacuum state |vac⟩, with ϱ_00 = 1, which is a minimal-energy coherent state and can be considered the least quantum state in this context. The quantity A_0, on the other hand, is minimized by any state with ϱ_00 = 0, which causes A_0 to vanish. This is easily attained by Fock states |n⟩ with n > 0. In this sense, all Fock states other than the vacuum are the most quantum. States become more quantum as they gain more energy and their vacuum component ϱ_00 diminishes in magnitude.

M = 1/2: For K = 1/2, the multipole norm is readily computed. It is minimized by any state with no coherences in the Fock basis (such as, e.g., number states).
On the other hand, it is maximized by states with maximal coherence in the smallest-energy sector of the Fock basis, |ψ_+⟩ = (|0⟩ + e^{iφ}|1⟩)/√2, with φ ∈ R. Together, A_{1/2} is minimized by any state with ϱ_00 = 0, because that forces ϱ_01 to vanish by positivity of the density matrix, and it is still uniquely maximized by the vacuum state, again because of the positivity constraint |ϱ_01|² ≤ ϱ_00(1 − ϱ_00).
M = 1: Now, the K = 1 multipole norm is minimized by all states with ϱ_00 = ϱ_11 = 0, again including Fock states, but now with more than one excitation; it is also minimized by the state |ψ_+⟩ that maximized the K = 1/2 norm. It is again maximized by the vacuum state with ϱ_00 = 1, but it is also maximized by the single-photon state with ϱ_11 = 1. The cumulative distribution is again the more sensible quantity: A_1 is minimized by states with vanishing components in the zero- and single-excitation subspaces, of which the Fock state |2⟩ has the lowest energy, and is uniquely maximized by the vacuum (coherent) state.

M = 3/2: As usual, the K = 3/2 norm is minimized by any Fock state and by any state with no probability in the photon-number sectors up to n = 3, while it is maximized by pure states with maximal coherence among the lowest Fock states. The cumulative A_{3/2} is again uniquely maximized by the vacuum state and minimized by any Fock state and by any state with no probability in the photon-number sectors up to n = 3.
M > 3/2: The consistent conclusion is that the Euclidean norms of the multipoles at different orders K can be maximized by different states, but the cumulative distribution is always maximized by the vacuum state. All of the orders of multipoles and their cumulative distribution vanish for sufficiently large Fock states, cementing Fock states as maximally quantum according to this condition. We as yet have only a circuitous proof that A_M(ρ̂) is uniquely maximized by |vac⟩ for arbitrarily large M: in Appendix C, we provide joint analytical and numerical arguments that this pattern continues for all M, such that the vacuum state may be considered minimally quantum according to this condition.
We can compute this maximal cumulative multipole moment, that of the vacuum, at any order, in terms of a Bessel function [38] and a regularized hypergeometric function [51]. It approaches I_0(2) ≈ 2.27959 in the limit of large M. Moreover, by computing A_∞(|n⟩) = I_0(2)/(n!)², we realize why only |0⟩ and |1⟩ behave so similarly in the large-M limit. Finally, note that the cumulative multipole operators also take an intriguing form in terms of the Poissonian amplitude P_n(α) = exp(−|α|²/2) α^n/√n!. We thus conclude that the multipole moments defined here are a proxy for the "vacuum-stateness" or "Fock-stateness" of a quantum state. Like their SU(2) counterparts, these multipoles can be used to quantify quantumness, with the most quantum states having the lowest cumulative multipole moments and vice versa. Since in many known contexts the vacuum state and Fock states define the limits of the least and most quantum states [46], this implies that measuring the lowest-order multipoles could already suffice for inspecting the quantumness of a quantum state.
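The limiting value I_0(2) ≈ 2.27959 quoted above can be checked directly against the series for the modified Bessel function of the first kind, I_0(2) = Σ_k 1/(k!)²; a two-line verification with SciPy:

```python
import math
from scipy.special import iv  # modified Bessel function I_nu(x)

# I_0(2) = sum_k (2/2)^{2k} / (k!)^2 = sum_k 1/(k!)^2
series = sum(1.0 / math.factorial(k) ** 2 for k in range(25))
bessel = iv(0, 2.0)   # approaches 2.27959...
```

The series converges extremely fast (factorially), so 25 terms already reach machine precision.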

Inverse multipole distribution
An important question arises: how does one measure a state's multipole moments? Homodyne detection provides one immediate answer. By interfering a given state ρ̂ with a coherent state |α⟩ on a balanced beamsplitter and measuring the difference of the photocurrents of detectors placed at both output ports, one collects a signal proportional to x̂(θ) = â e^{−iθ} + â† e^{iθ}, where θ can be varied by changing the phase arg α of the reference beam. Collecting statistics of the quadrature x̂(θ) up to its Kth-order moments for a variety of phases θ allows one to read off the moments ⟨T̂_{Kq}⟩ = ⟨â^{†(K+q)} â^{(K−q)}⟩. This provokes the question: what states maximize and minimize the cumulative multipole moments in the inverse basis?
We start by defining, in analogy to Eq. (39), the cumulative distribution Ā_M built from the inverse multipoles ⟨T̂_{Kq}⟩. This quantity directly depends on the energy of the state, vanishing if and only if the state is the vacuum. As for the maximization, it is clear that coherent states with more energy cause the cumulative sum Ā_M to increase, so we must fix the average energy n̄ = ⟨â†â⟩ when comparing which states maximize and minimize the sum.
The Cauchy-Schwarz bound on each moment is saturated if and only if â^{K+q}|ψ⟩ ∝ â^{K−q}|ψ⟩; that is, â^{2q}|ψ⟩ ∝ |ψ⟩, which, for q ≠ 0, requires coherent states or superpositions of coherent states with particular phase relationships akin to higher-order cat states [52][53][54]. Each of these states provides the same value |⟨T̂_{Kq}⟩|² = |α|^{4K}. Then, since saturating the inequality for all q simultaneously forces all but one superposition coefficient to vanish, only a coherent state maximizes the cumulative sum Ā_M for any fixed energy n̄ = |α|². We already know that |vac⟩ minimizes Ā_M. For a given, fixed n̄ > 0, one can ask what state minimizes the cumulative multipoles. All of the multipoles with q ≠ 0 vanish for Fock states; indeed, they vanish for any state that is unchanged after a rotation by π/2q about the origin in phase space. The q = 0 multipoles, on the other hand, depend only on the diagonal coefficients of the density matrix in the Fock basis, so they can be minimized in parallel.
To minimize a q = 0 multipole moment there are two cases to consider: n̄ < K and n̄ ≥ K. If n̄ < K, the multipole vanishes by simply partitioning all of the probability among the Fock states with fewer than K photons and arranging those states in a convex combination with no coherences in the Fock basis. If n̄ ≥ K is an integer, the sum is ideally minimized by setting ϱ_{n̄n̄} = 1, by convexity properties of the binomial coefficients (they grow by a larger amount when n increases than the amount by which they shrink when n decreases). For noninteger n̄, the minimum is achieved by setting ϱ_{⌊n̄⌋⌊n̄⌋} = ⌈n̄⌉ − n̄ and ϱ_{⌈n̄⌉⌈n̄⌉} = n̄ − ⌊n̄⌋, with no coherences between these two Fock states. Here, ⌈x⌉ is the ceiling function that gives the smallest integer greater than or equal to x, and ⌊x⌋ is the corresponding floor function. Since this minimization does not depend on K, we have thus found the unique state that minimizes Ā_M for all M at arbitrary n̄. It is intriguing that coherent states and Fock states respectively maximize and minimize this sum for integer-valued energies, while a convex combination of the nearest-integer Fock states minimizes this sum for a noninteger energy. These results should be compared against those for the sum A_M, which was uniquely maximized by the vacuum state that minimizes the sums here, and for which the states that made it vanish were Fock states with large energies. Both sums are minimized by some Fock states and both sums are maximized by some coherent states, but the scalings with energy are opposite: smaller energy leads to larger A_M and smaller Ā_M, while larger energy leads to smaller A_M and larger Ā_M; it just so happens that the state with the smallest energy is both a Fock state and a coherent state. As a preview for the next section, we note that states with a finite number of photons 2K are described by only the moments up to K. To directly reconstruct the state, one simply arranges these measured moments with our inverse operators as in Eq. (22); the operators described here are exactly those used to sort measurement information for state reconstruction.

Further applications: quantizers and tomography
We have analyzed moments of a quantum state in the monomial basis to gain intuition as to the impact of each moment on the quantumness of the state.What else can be done with these moments?
One direct application is creating a state's phase-space functions from a set of measurements; this is the province of quantum state tomography. To create any s-ordered quasiprobability distribution from a given state ρ̂, one uses the operator kernel ŵ^{(s)}(α) to find W^{(s)}(α) = Tr[ρ̂ ŵ^{(s)}(α)]. The Wigner function, for example, is found by taking s = 0, while s can range from 1 for the normally ordered quasiprobability distribution to −1 for the antinormally ordered counterpart. Without yet giving a form for the kernels ŵ^{(s)}(α), we note that it would be useful to express them in the monomial basis. Then, a measurement of the moments ⟨T̂_{Kq}⟩_ϱ, where we now make explicit that the expectation values are taken with respect to the state ρ̂, which is achieved via homodyne detection as above, immediately yields the quasiprobability distribution. The missing connection between the tomographic measurement results ⟨T̂_{Kq}⟩_ϱ and the quasiprobability distributions W^{(s)}(α) is the set of expansion coefficients of the kernels, which we can immediately compute from the definitions. The Wigner function, for example, has the kernel ŵ^{(0)}(α), and we have already computed Tr[D(β) T̃_{Kq}] in Eq. (18). This leaves us with one integral to solve. The same calculation can be performed for any other value of s by simply replacing the argument of the exponent in the quantizer. As well, a tomography scheme that directly measures moments with another ordering, such as the symmetrically ordered moments ⟨T̂^W_{Kq}⟩ defined in Appendix B, can again be used to directly construct any quasiprobability distribution after computing the moments of the appropriate inverse operator, such as Tr[ŵ^{(s)}(α) T̃^W_{Kq}]. We list some relevant results along these lines in Eq. (61). The inverse operators and multipole moments are thus intimately connected to quantizers and tomography. It may come as no surprise that they are even more intertwined: a little inspection leads to analogous relations for any s, with the corresponding s-ordered inverse operators. The monomial moments for a coherent state are the expansion coefficients of the (−s)-ordered kernels in the s-ordered inverse-operator basis, and the quasiprobability distributions can now be written accordingly, where T̃^{(0)}_{Kq} = T̃^W_{Kq} and T̃^{(1)}_{Kq} = T̃_{Kq} in the notation from before. The multipole moments are exactly what must be measured to construct a quasiprobability distribution in the polynomial (i.e., α^{*(K+q)} α^{(K−q)}) basis. Different polynomial bases can then be obtained by switching T̂_{Kq} for a different ordering in these expressions.
As a sort of duality, we can likewise compute the expansions in the opposite direction. For example, we have found the quantizer for the Husimi Q function; this is a specific case of the general result following from orthonormality. These lead to symmetrically pleasing results that intertwine the monomials and their inverses. To summarize, a state's quasiprobability distributions can be found either by measuring the monomials and then weighting those measurements by the s-ordered moments of coherent states, or by measuring the s-ordered moments and weighting those measurements by the inverse moments (monomial expectation values) of coherent states. None of the operators T̂_{Kq} or T̃_{Kq} is trace class, with formally infinite trace when q = 0 and vanishing trace when q ≠ 0. Yet, by matching all of these operators together, another connection we can write along these lines is a formal completeness relation. The sum certainly does not converge in any usual sense, as could have been expected for identity operators in infinite dimensions, but the sum of the operators paired with their inverse operators can conclusively be asserted to be proportional to the identity operator. Finally, on the topic of operator ordering, the question frequently arises: how does one actually express some s-ordered polynomial operator in a known basis, such as the normally ordered one? Our construction is the direct solution: the expansion coefficients are given by the trace of the product of the polynomial with our inverse operators T̃_{Kq}, most of which can simply be read off from Eq. (60) [other than s = 1, which typically requires derivatives of delta functions as in Eq. (59)]. These fill a missing gap and can readily be used for further computations.
To put it all together, let us consider a measurement of the s-ordered polynomials. We wish to construct an arbitrary s′-ordered quasiprobability distribution. This is achieved for any s-ordered polynomials that arise from noise-added channels performing quantum-limited amplification and then attenuation on a state by the same factor 2/(1+s) [e.g., â†â → â†â + (1−s)/2; equivalently, the quasiprobability distribution for the operators gets smoothed by a Gaussian kernel for increasing noise].

Concluding remarks
Expanding the density operator in a conveniently chosen operator set has considerable advantages. By explicitly using the algebraic properties of the basis operators, the calculations are often greatly simplified. But the usefulness of the method depends on the choice of the basis operator set. The idea of irreducible tensor operators provides a well-developed and efficient way of exploiting the inherent symmetry of the system. However, the irreducible-tensor machinery was missing for CV, in spite of the importance of these systems in modern quantum science and technology. We have provided a complete account of the use of such bases, which should constitute an invaluable tool for quantum optics.
A Products and transformations of the basis operators

Just because a particular product T̃_{Kq} T̃_{K′q′} with q ≠ q′ is traceless does not mean that it necessarily vanishes. In fact, we can directly compute the product of two such operators to find their structure constants. Each inverse operator T̃_{Kq} serves to decrease the number of photons in a state by 2q, so the product of two inverse operators must be a finite sum of inverse operators whose second index satisfies q′′ = q + q′. In theory, the coefficients f_{K′′} are formally given by Tr(T̃_{Kq} T̃_{K′q′} T̂_{K′′,q+q′}). Inspecting Eq. (36), we find some interesting, immediate results: for example, when q, q′ ≥ 0 and 2q > K′ − q′, all of the structure constants f_{K′′} vanish and we have T̃_{Kq} T̃_{K′q′} = 0. Similar vanishing segments can be found for any combination of the signs of q and q′, which is not readily apparent from multiplications of the displacement operators in Eq. (20).
The nonzero structure constants can be found via iteration, using Fig. 2 as a guide. Taking, for example, q, q′ ≥ 0, we find products of the form (80), for which the nonzero structure constants obey K′′ ≤ K′′_max = min(K + q′, K′ − q). The term with the largest K′′ is the only one containing |K′′_max − q − q′⟩⟨K′′_max + q + q′|, so its structure constant must balance the unique contribution to that term from T̃_{K′′_max, q+q′}. This fixes f_{K′′_max}, where one of the final two factorials in the denominator will simply be 0! = 1. Then, by iteration, one can balance the contribution of T̃_{K′′_max−k, q+q′} in order to find the structure constants f_{K′′_max−k}(K, K′, q, q′).
The structure constants for the monomial operators are already known: one can compute the product T̂_Kq T̂_K′q′ by normal ordering [55]. The inverse operators transform nicely under displacements, and these displaced operators are inverse to the displaced monomials. It is interesting to note that the displaced inverse operators are given by an infinite sum of inverse operators and the displaced monomials by a finite sum of monomials, in contrast to the number of terms |m⟩⟨n| required to expand the original operators in the Fock basis.

B Symmetrically ordered monomials
We briefly consider here the example of the symmetrically ordered monomials T̂^W_Kq. We can write them explicitly in terms of the normally ordered polynomials, where {•}_sym denotes the symmetric (or Weyl) ordering of operators [55]. Starting from an expression for the symmetrically ordered polynomials, we then look for inverse operators; by inspection, we attain orthonormality under a condition which corresponds to Eq. (89) simply differing from the expression (20) for T̂_Kq by removal of the factor exp(−|β|²/2).
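As a concrete instance of symmetric ordering (an elementary textbook example, not taken from the text), the lowest nontrivial monomial reads

```latex
\{\hat a^{\dagger}\hat a\}_{\mathrm{sym}}
  = \tfrac{1}{2}\left(\hat a^{\dagger}\hat a + \hat a\,\hat a^{\dagger}\right)
  = \hat a^{\dagger}\hat a + \tfrac{1}{2},
```

where the last equality uses the canonical commutator [â, â†] = 1.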
We can find the multipoles for specific states; we simply quote the results (e.g., the coherent-state expectation ⟨α| T̂^W_Kq |α⟩). For arbitrary states, we can follow the same procedure as we used for normal order; the final result holds for m ≤ n. Finally, it is direct to check that the tensors T̂^W_Kq are covariant under symplectic transformations [31].

C Vacuum state as maximizing the cumulative multipolar distribution
Here we provide analytical and numerical evidence that the vacuum state uniquely maximizes the cumulative multipolar distribution to arbitrary orders M > 3/2.
First, we note by convexity that the multipole moments are all largest for pure states. We next ask how to maximize a single multipole moment |⟨T̂_Kq⟩|. The phases can be arranged such that ϱ_nm(−1)^n > 0 for all n and m in Eq. (35), while each term is bounded. It is tempting to use a Cauchy–Schwarz inequality to argue that this expression is maximized by states with the relationship ϱ_nn = λ n! for some normalization constant λ. This fails, however, for two related reasons: one cannot simultaneously saturate the inequality |ϱ_nm| ≤ √(ϱ_mm ϱ_nn) for all m and n while retaining a positive density operator ϱ; similarly, the trace of ϱ is bounded, which the Cauchy–Schwarz inequality does not take into consideration. One can outperform this Cauchy–Schwarz bound by concentrating all of the probability in the term with the largest value; |⟨T̂_Kq⟩| is then maximized by any pure state with ϱ_ññ = ϱ_ñ−2q,ñ−2q = 1/2. This condition changes with K and q, so there will always be a competition between which terms |⟨T̂_Kq⟩|² are maximized in the cumulative sum.

The contributions to A_M from the various terms |⟨T̂_Kq⟩|² diminish with increasing K, which can be seen through the following argument. As M increases by 1/2, the number of new terms contributing to the sum increases quadratically: there are 2M + 1 new multipoles to consider, and each multipole is a sum of at most M + 1 terms. From the preceding discussion, each multipole is individually maximized when it is made from only a single term, so the cumulative multipole moment A_M can only increase by the addition of O(M) (competing) terms. In contrast, the magnitudes of the multipole moments decay exponentially with increasing M, due to the factorials in the denominator of Eq. (94), stemming from Eq. (35). One can, therefore, guarantee that a state maximizing A_M for sufficiently large M will continue to maximize A_M for all larger values of M, at least approximately.
We can also inspect the inverse operators directly to understand the maximization properties. The multipoles being summed as an indicator of quantumness, |⟨T̂_Kq⟩|², can be expressed as expectation values of the duplicated operator T̂_Kq ⊗ T̂†_Kq = T̂_Kq ⊗ T̂_K,−q with respect to the duplicated state ρ ⊗ ρ. The vacuum state |0⟩ ⊗ |0⟩ is the only duplicated state that is an eigenstate of all of the duplicated operators for all K and q, albeit with different eigenvalues for each operator. These operators act on Fock states with nonzero matrix elements given by Kronecker products of the stripes found in Fig. 2 (some combinations of K, q, and n cause the proportionality constant to be zero). These can be used to help find the eigenstates and eigenvalues of the summed joint operators Â_M. As mentioned previously, each individual operator T̂_Kq has only null eigenstates, unless q = 0; this can be seen from the striped pattern in Fig. 2. The same is true of the joint operators T̂_Kq ⊗ T̂†_Kq, but not of the summed joint operators Â_M. The latter are represented in the Fock basis by sparse antitriangular matrices, which can be visualized by Kronecker products of pairs of matrices from Fig. 2. The eigenstates and eigenvalues can thus be found directly for any M. For example, the joint Fock state with maximal eigenvalue is the joint vacuum state |0⟩ ⊗ |0⟩.
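The duplicated-operator trick rests on the elementary identity Tr[(T ⊗ T†)(ρ ⊗ ρ)] = |Tr(Tρ)|². A quick numerical check (a sketch, with a random operator T standing in for T̂_Kq and a random density matrix ρ):

```python
import numpy as np

d = 5
rng = np.random.default_rng(1)

# Random density matrix rho (positive, unit trace)
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)

# Arbitrary (non-Hermitian) operator standing in for T_Kq
T = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

lhs = abs(np.trace(T @ rho)) ** 2
rhs = np.trace(np.kron(T, T.conj().T) @ np.kron(rho, rho))
assert np.isclose(lhs, rhs.real) and np.isclose(rhs.imag, 0.0)
```

The identity follows from Tr[(A ⊗ B)(C ⊗ D)] = Tr(AC) Tr(BD) together with Tr(T†ρ) = Tr(Tρ)* for Hermitian ρ.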
The cumulative operators Â_M have positive expectation values when taken with respect to any duplicated state ρ ⊗ ρ. However, Â_M may have negative eigenvalues, because some of its eigenstates may not be of the form ρ ⊗ ρ. For example, the eigenstate whose eigenvalue has the largest magnitude is always found to be the maximally entangled state (|0⟩ ⊗ |1⟩ − |1⟩ ⊗ |0⟩)/√2, with a large, negative eigenvalue. This state is orthogonal to any duplicated state ρ ⊗ ρ because the latter is permutation symmetric, not antisymmetric, so we can readily ignore all contributions to Â_M from this part of its spectrum.
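The orthogonality can be checked directly for pure duplicated states: the antisymmetric vector has identically zero overlap with any |φ⟩ ⊗ |φ⟩. A minimal sketch (in a truncated 3-level space):

```python
import numpy as np

rng = np.random.default_rng(2)

# Antisymmetric vector (|0>|1> - |1>|0>)/sqrt(2) in a truncated basis
d = 3
e = np.eye(d)
psi_minus = (np.kron(e[0], e[1]) - np.kron(e[1], e[0])) / np.sqrt(2)

# Any duplicated pure state |phi> (x) |phi> has zero overlap with it
phi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
phi /= np.linalg.norm(phi)
overlap = np.vdot(psi_minus, np.kron(phi, phi))
assert abs(overlap) < 1e-12
```

The cancellation is exact: the two components of the antisymmetric vector pick up the same product φ₀φ₁ with opposite signs.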
Another entangled state is the eigenstate with the next largest eigenvalue: (|0⟩ ⊗ |2⟩ − c |1⟩ ⊗ |1⟩ + |2⟩ ⊗ |0⟩)/N for some positive constants c and N = √(2 + c²). This eigenstate obeys permutation symmetry, so it will contribute to the multipole moments. The maximum contribution will come from a state of the given form, specifically with p_2 = 1 − p_0 − p_1. Since c > 1, the contribution is uniquely maximized by p_0 = 0 and p_1 = 1, so again we need only consider the joint Fock states in the analysis. The overlap of |1⟩ ⊗ |1⟩ with this eigenstate is c²/N² ≈ 0.621.
The eigenstate with the third largest-magnitude eigenvalue is the joint vacuum state |0⟩ ⊗ |0⟩. The ratio of its eigenvalue to the one with the second largest magnitude approaches ≈ 0.647 > c²/N² as M increases. This is enough to ensure that the joint vacuum state uniquely maximizes the cumulative multipole moments for all M. We stress that these optima have not been found through a numerical optimization, but rather through an exact diagonalization of the operators Â_M, so our analysis is immune to local minima and other hazards of numerical optimization.
How can this be made more rigorous? The eigenvalues and eigenstates can be found exactly for any value of M by diagonalizing the sparse matrix Â_M. By M = 9/2, the largest eigenvalues have already converged to three significant digits and c²/N² to four; by M = 7, they have all converged to six significant digits. The contributions from a new, larger value of K = M strictly reduce the magnitude of each expansion coefficient in the sum of Eq. (36) by a multiplicative factor, ranging from 1/(M + q) for the term with the smallest n, which has appeared the most times in the cumulative multipole, to 1 for the term with the largest n, which has appeared only once previously. There is also the addition of an extra term |M − q⟩⟨M + q|, normalized by the small factor 1/√((M + q)!(M − q)!). Each term gets divided by an increasingly large factor as M increases; the factor that decreases the slowest already starts out with a tiny magnitude due to the normalization factor 1/√((M + q)!(M − q)!). The magnitudes of the expansion coefficients in the cumulative sums thus decrease at least exponentially in Â_M − Â_{M−1/2}, so the largest eigenvalues and eigenstates of Â_M are fixed once they are known for moderate M (see visualization in Fig. 3).
The above demonstrates that the state maximizing the cumulative multipole moments for any value of M must take the form |ψ_opt⟩ = √p_0 |0⟩ + e^{iψ} √p_1 |1⟩ + e^{iϕ} √p_2 |2⟩, with p_0 + p_1 + p_2 = 1, because such a state concentrates maximal probability in the subspace with the largest eigenvalues of Â_M. We can compute the cumulative multipole moments for such a state. The relative phases that maximize the sum satisfy 2ψ − ϕ = π, so we can set e^{iψ} = 1 and e^{iϕ} = −1 without loss of generality. There are now only two constants to optimize over in the sum. All of the terms decay at least exponentially with K, so it is again evident that optimizing the sum for moderate M will approximately optimize the sum for all larger M. Computing the contributions to A_M, we find a result that converges by M = 7 (see Fig. 4), and we have verified that the digits remain unchanged beyond M = 100. This means that the sum will be maximized by either p_0 = 1 or p_1 = 1 (visualization in Fig. 5). We can directly compute A_M(|0⟩) − A_M(|1⟩) = 1/⌊M⌋!², where ⌊x⌋ is the floor function that gives the greatest integer less than or equal to x. This means that the vacuum state is the unique state with the maximal cumulative multipole moment for all M, while its advantage diminishes exponentially with M.
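The shrinking advantage of the vacuum can be made tangible by evaluating the gap formula directly (a trivial numerical sketch of A_M(|0⟩) − A_M(|1⟩) = 1/⌊M⌋!², nothing beyond the expression quoted above):

```python
from math import factorial, floor

def gap(M):
    """Vacuum advantage A_M(|0>) - A_M(|1>) = 1/(floor(M)!)^2."""
    return 1 / factorial(floor(M)) ** 2

# The gap decays super-exponentially with the multipole order M
print([gap(M) for M in (2, 3.5, 5, 7)])
```

By M = 7 the gap is already of order 10⁻⁸, consistent with the uniqueness of the maximum surviving for all M while becoming numerically tiny.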

Figure 4: Coefficients of the cumulative multipole sum for the different weights in the optimal state |ψ_opt⟩. The coefficients rapidly converge for moderate M, with those of p_0² and p_1² rapidly approaching each other.

Figure 5: Cumulative multipole sum for the optimal state |ψ_opt⟩ as a function of the two independent probabilities p_0 and p_1. The multipoles to order M = 100 are included, by which point they have converged to well beyond machine precision. It is clear that the maximum is obtained by assigning all of the probability to either p_0 or p_1, with no probability shared between the two.