Quantum field tomography

We introduce the concept of quantum field tomography, the efficient and reliable reconstruction of unknown quantum fields from data of correlation functions. At the basis of the analysis is the concept of continuous matrix product states, a complete variational class of states capturing states in quantum field theory. We develop a practical method, making use of and extending tools from estimation theory employed in the context of compressed sensing, such as Prony methods and matrix pencils, which allow us to faithfully reconstruct quantum field states from low-order correlation functions. In the absence of a phase reference, we highlight how specific higher order correlation functions can still be predicted. We exemplify the functioning of the approach by reconstructing randomised continuous matrix product states from their correlation data and study the robustness of the reconstruction for different noise models. We also apply the method to data generated by simulations based on continuous matrix product states and using the time-dependent variational principle. The presented approach is expected to open up a new window into experimentally studying continuous quantum systems, such as those encountered in experiments with ultra-cold atoms on atom chips. By virtue of the analogy with the input-output formalism in quantum optics, it also allows for studying open quantum systems.


I. INTRODUCTION
Quantum theory predicts probability distributions of outcomes of anticipated quantum measurements. The actual problem encountered in practice, however, is often not so much concerned with predicting certain outcomes of specific measurement procedures, but rather with reconstructing the unknown quantum state at hand that is compatible with precisely such acquired data of measurement outcomes. This task of reconstructing states based on data - possible under certain conditions of completeness or other reasonable assumptions - is called quantum state tomography. For finite-dimensional quantum systems, this task is feasible and routinely performed in experiments. However, the number of parameters to be determined scales exponentially with the system size: full quantum state tomography is highly inefficient. Still, this is much less of a problem than one might at first be tempted to think. It was one of the major insights in the field in recent years that economical or efficient quantum state tomography is well possible for systems with many degrees of freedom. In fact, for most physically relevant questions, fully unconstrained quantum state tomography may be said to solve the "wrong problem". One is often not interested in arbitrary states, but only in those states that one expects to encounter in practice, which are naturally more restricted.
Notions of tomography based on compressed sensing [9,14] or on the matrix product state formalism [1,5,34] are exactly along those lines of thinking. Using such more sophisticated tools, tomography of quantum systems with many degrees of freedom is well possible. Key to this step is to acknowledge that one has to identify the right model in which to represent the states, e.g., approximately low-rank states or states with clustering correlation functions. In the context of matrix product state tomography, the notion of a model refers to a meaningful variational class of states that provably captures all states exhibiting low entanglement [8,41]. Here, tomography is efficiently possible for any system size, and, by increasing the bond dimension, an arbitrary state can be well approximated. In this way, efficient many-body tomography is possible in the form of matrix product state based tomography. Quite similar to the mindset of compressed sensing, a "sparsity of commonly encountered states" is heavily exploited for the benefit of tomography.
In quantum field theory, the situation of having to deal with a large number of degrees of freedom is aggravated. Here, one obviously has to consider an infinite number of degrees of freedom per unit length. Again, a moment of thought reveals that to think about quantum field tomography in the sense of trying to "fill an infinite table with numbers" is rather ill-guided. This is not the actual problem one aims at solving in any practical context. In fact, naturally, one needs to identify the appropriate model once again, or again, the right "sparsity structure".
In this work, we introduce the concept of quantum field tomography, tomography of continuous systems in quantum field theory, and provide a practical and feasible method for achieving it. We do so by drawing on and further developing ideas from the study of continuous matrix product states [16,35,47], methods for assessing higher order correlation functions in that context [24], as well as machinery from statistical estimation theory, such as Prony analysis [38] and matrix pencil methods [21,22], which are here brought to a new context. In fact, these methods of estimation have not been considered before in the context of quantum state reconstruction and are expected to be interesting in their own right. The basis of the analysis is formed by low-order multi-point correlation functions directly accessible in many common current experiments.
This approach opens up a new window into grasping the physics of continuous quantum systems in equilibrium and non-equilibrium. Instead of having to posit a physical model (e.g., define a Hamiltonian) and checking its plausibility, one can - based on data of correlation functions - reconstruct the quantum field state itself. Such an approach seems particularly appealing when studying one-dimensional continuous bosonic models such as ultra-cold atoms on atom chips [13,27,28]. What is more, if only partial data are available, say, in the absence of a phase reference frame, higher-order correlation functions of the same type can still be predicted. The starting point of the analysis is what is called "Wick's theorem" for matrix product states [24], which is here brought to a new level and transformed into a practical method for reconstructing unknown continuous matrix product states from correlation function data.
This work is structured as follows. In Sec. II, we give a short overview of the concept of continuous matrix product states (cMPS) [16,35,47] as well as of what can be called a "Wick theorem" for this class of states [24], serving as preparation for the following technical sections. In Sec. III, we describe in technical detail how to reconstruct a field state from its low-order correlation functions and give a complete continuous matrix product state description of it. The limitations of this method are investigated in Sec. IV. In Sec. V, we demonstrate the method using simulated data from random cMPS and apply it to the ground state of the Lieb-Liniger model, a prototypical integrable model in quantum field theory [3,30]. The data used here have been generated with a cMPS-based simulation employing the time-dependent variational principle [6,15,17]. The impact of noise in real-world scenarios on the method is investigated here as well. In Sec. VI, we summarise and conclude this work.

II. BACKGROUND
In this work, we are concerned with one-dimensional quantum fields with fast decaying spatial correlations. Analogous to the case of many-body quantum systems, successfully described by the matrix product state (MPS) formalism, there is a variational class of states specially suited to study such systems: the continuous matrix product states (cMPS) [35,47].

A. Continuous matrix product states
In this section, we briefly review the basics of the cMPS formalism. For a review and comprehensive discussion of the computation of correlation functions, see, e.g., Ref. [16].

Basic definitions
A translation invariant cMPS with periodic boundary conditions and one species of bosonic particles is defined as

|ψ_{Q,R}⟩ = Tr_aux [ P exp ( ∫_0^L dx (Q ⊗ 1 + R ⊗ Ψ̂†(x)) ) ] |Ω⟩,   (1)

where the field operators Ψ̂(x), x ∈ [0, L], obey the bosonic commutation relations of the free field, [Ψ̂(x), Ψ̂†(y)] = δ(x − y), |Ω⟩ is the vacuum state vector, and Q, R ∈ C^{d×d} are matrices acting on an auxiliary d-dimensional space, the "virtual space", which constitute the variational parameters of the class. L is the length of the closed physical system, P denotes the path ordering operator, and Tr_aux traces out the auxiliary space. The parametrisation in (1) by Q and R is not unique, i.e., there is an additional gauge freedom: when simultaneously conjugating Q and R with an invertible matrix G [16],

Q̃ = G Q G^{-1},   R̃ = G R G^{-1},

the two resulting state vectors still represent the same state, i.e., all expectation values are invariant under this transformation.

Related physical processes
A useful interpretation of the correlations in a cMPS can be given in terms of a finite dimensional quantum system interacting with a field [35]. The Hamiltonian of the joint system is given by

Ĥ(x) = K ⊗ 1_F + R ⊗ Ψ̂†(x) + R† ⊗ Ψ̂(x),

where 1_F is the identity on the field, K ∈ C^{d×d} is the Hamiltonian of the free evolution of the finite dimensional system, and R ⊗ Ψ̂†(x) is the coupling between the system and the field, with R ∈ C^{d×d}. Starting with the state vector |ϕ_i⟩_A |Ω⟩_F and evolving over x ∈ [0, L], we formally arrive at the evolved joint state. Note that by setting Q = −iK − R†R/2, projecting onto ⟨ϕ_i| ⊗ 1_F to decouple A from F, and summing over a complete orthonormal basis of vectors |ϕ_i⟩, we again obtain Eq.
(1). This shows the interpretation of the cMPS formalism in the sequential preparation picture of MPS [36].
In this picture, we interpret K as the Hamiltonian of a virtual particle living in the auxiliary space that mediates the field interactions. What is more [35], the dynamical behaviour of the auxiliary system A can be modelled by computing the derivative of ρ_A(x) = Tr_F(ρ(x)), where Tr_F means tracing out the physical system F. This yields the ODE

∂_x ρ_A = −i[K, ρ_A] + R ρ_A R† − (1/2){R†R, ρ_A},

which is a master equation in Lindblad form, governing the Markovian evolution of ρ_A, where R plays the role of a dissipative quantum jump (Lindblad) operator. In turn, starting from Q and R, not all such pairs give rise to a valid effective Hamiltonian K; for this, it is required that K is Hermitian. This does not restrict generality in any way, however: the above simply means that Q and R have to satisfy

Q + Q† + R†R = 0,

which can always be achieved with a suitable gauge transformation.
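To make this auxiliary-system picture concrete, the following sketch (all matrices randomly generated, so purely illustrative) builds a generator from a Hermitian K and an arbitrary R via the gauge choice Q = −iK − R†R/2 and checks that the resulting evolution is trace preserving; the vectorised generator constructed here is nothing but the transfer matrix T of the next section, written in a column-stacking convention.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 3

# Illustrative auxiliary-system data: Hermitian K and an arbitrary R.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
K = (A + A.conj().T) / 2
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# Gauge choice Q = -iK - R^dag R / 2 makes the generator trace preserving.
Q = -1j * K - R.conj().T @ R / 2

I = np.eye(d)
# Vectorised Lindblad generator (column-stacking convention):
# d rho/dx = Q rho + rho Q^dag + R rho R^dag
T = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)

# Evolve a pure initial state of the auxiliary system over a distance x = 1.7.
rho0 = np.zeros((d, d), dtype=complex)
rho0[0, 0] = 1.0
rho_x = (expm(1.7 * T) @ rho0.reshape(-1, order='F')).reshape((d, d), order='F')

print(round(np.trace(rho_x).real, 8))  # trace is preserved by the evolution
```

Since the evolution is completely positive and trace preserving, ρ_A(x) remains a valid density matrix for any x.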

B. Correlation functions in cMPS
The mathematical relations between the n-point functions are the starting point for our tomography algorithms, hence we give a brief summary at this point. A quantum field state can be completely characterised by all possible normal-ordered expectation values constructed from Ψ̂(.) and Ψ̂†(.) and their commutation relations. In this work, we focus on density-like correlation functions, i.e., for each position x_k ∈ [0, L], both operators Ψ̂†(x_k) and Ψ̂(x_k) appear within the expectation value. The expectation value ⟨ψ_{Q,R}|Ψ̂†(x_1) ⋯ Ψ̂†(x_n) Ψ̂(x_n) ⋯ Ψ̂(x_1)|ψ_{Q,R}⟩ can be computed as (see, e.g., Ref. [16])

C^(n)(x_1, …, x_n) = Tr[ (R ⊗ R̄) e^{T τ_1} (R ⊗ R̄) e^{T τ_2} ⋯ (R ⊗ R̄) e^{T τ_n} ],   (12)

with the transfer matrix

T = Q ⊗ 1 + 1 ⊗ Q̄ + R ⊗ R̄

and the positive distances τ_j = x_{j+1} − x_j for j = 1, …, n − 1 and τ_n = L − x_n, where the overline denotes complex conjugation. Correlation functions of cMPS are hence given by expressions involving only the auxiliary space. Static properties of a quantum field with one spatial dimension are thereby related to non-equilibrium properties of a zero-dimensional system; in this sense, cMPS have been referred to as "holographic quantum states" [35]. For a normalised cMPS, the eigenvalues of T all have negative or zero real parts. This leads to finite expectation values in the thermodynamic limit L → ∞ and, assuming that T is diagonalisable, which is the case if its spectrum is non-degenerate, the n-point function (12) can be further simplified to a sum of exponentially damped oscillatory terms,

C^(n)(τ_1, …, τ_{n−1}) = Σ_{k_1,…,k_{n−1}=1}^{d²} ρ_{k_1,…,k_{n−1}} e^{λ_{k_1} τ_1} ⋯ e^{λ_{k_{n−1}} τ_{n−1}},   (14)

where

ρ_{k_1,…,k_{n−1}} = M_{1,k_{n−1}} M_{k_{n−1},k_{n−2}} ⋯ M_{k_1,1}.   (15)
The matrix M ∈ C^{d²×d²} is defined as M = X^{-1} (R ⊗ R̄) X, where X is a change-of-basis matrix such that X^{-1} T X is diagonal and compatible with the ordering of the eigenvalues {λ_k}.
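As a numerical consistency check of Eq. (14), the following sketch (random Q and R with the gauge Q = −iK − R†R/2, so that the stationary eigenvalue of T is zero and sorted first; an illustration, not the reconstruction itself) compares the eigen-decomposition form of a 2-point function with its direct transfer-matrix evaluation in a column-stacking convention:

```python
import numpy as np
from scipy.linalg import expm, eig

rng = np.random.default_rng(1)
d = 2
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
K = (A + A.conj().T) / 2
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q = -1j * K - R.conj().T @ R / 2

I = np.eye(d)
T = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)  # transfer matrix
Rho = np.kron(R.conj(), R)                                       # density insertion

lam, X = eig(T)
order = np.argsort(-lam.real)        # stationary eigenvalue (zero) first
lam, X = lam[order], X[:, order]
M = np.linalg.inv(X) @ Rho @ X       # the matrix M in the eigenbasis of T

def C2_sum(tau):
    # sum of exponentially damped terms, cf. Eq. (14) for n = 2
    return sum(M[0, k] * np.exp(lam[k] * tau) * M[k, 0] for k in range(d * d))

vecI = np.eye(d).reshape(-1, order='F')     # left boundary: vectorised identity
rho_ss = X[:, 0] / (vecI @ X[:, 0])         # stationary state, unit trace

def C2_direct(tau):
    return vecI @ Rho @ expm(T * tau) @ Rho @ rho_ss

taus = [0.3, 1.0, 2.5]
print(max(abs(C2_sum(t) - C2_direct(t)) for t in taus) < 1e-8)
```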
A first step towards reconstructing a cMPS is to identify {ρ_{k_1,…,k_{n−1}}} and {λ_k}. That this is in principle possible can be seen by considering the Laplace transform of C^(n), which has the simple form

L^(n)(s_1, …, s_{n−1}) = Σ_{k_1,…,k_{n−1}=1}^{d²} ρ_{k_1,…,k_{n−1}} / ((s_1 − λ_{k_1}) ⋯ (s_{n−1} − λ_{k_{n−1}})).
Each of the d^{2(n−1)} combinations of eigenvalues of T appears as a pole of L^(n) in C^{n−1}, together with the corresponding residue in the numerator. If all eigenvalues are different, i.e., the spectrum of T is non-degenerate, and all residues are nonzero, then all residues are distinguishable as well. Since the Laplace transform itself proved to be infeasible for practical reconstruction algorithms, we will present alternative approaches in the following. Independently of this, we will keep calling the eigenvalues {λ_k} the poles and {ρ_{k_1,…,k_{n−1}}} the residues of the n-point function.
The structure of the correlation functions, with the residues being products of entries of a single matrix, Eq. (15), allows for expressing higher order correlation functions in terms of lower order ones, very much reminiscent of Wick's theorem in quantum field theory [24]. In this sense, we will recover M from the residues. We describe this in detail below.

C. Additional symmetries
In the remainder of this work, we will make use of certain symmetries that cMPS fulfil, which we briefly state here. By construction, for each non-real entry of R ⊗ R̄ and of T, there exists another entry containing its complex conjugate. More precisely, one can show that

Λ_d (R ⊗ R̄) Λ_d = (R ⊗ R̄)‾   and   Λ_d T Λ_d = T̄,

with Λ_d = Σ_{j,k=1}^d E_{j,k} ⊗ E_{k,j} and E_{j,k} = e_j e_k^T, the dyadic product of the canonical column vectors e_j [12, Sec. 2.5]. Hence, if λ is an eigenvalue of T with eigenvector v, then taking the complex conjugate of T v = λ v yields Λ_d T Λ_d v̄ = λ̄ v̄, and since Λ_d² = 1, we find T(Λ_d v̄) = λ̄ (Λ_d v̄), such that the spectrum of T is closed under complex conjugation. This fact also follows from the channel property of cMPS as discussed in Ref. [48].
For the reconstruction algorithms discussed below, it is instrumental to fix an unambiguous ordering of the eigenvalues of the transfer matrix T, which makes its diagonal matrix D, and furthermore the matrix M, unambiguous as well. If we order the eigenvalues in D such that the κ ∈ {1, …, d²} real eigenvalues constitute a block and the remaining d² − κ are arranged in complex conjugate pairs (e.g., ordering by descending real part), then D obeys the symmetry relation Ξ_{d,κ} D̄ Ξ_{d,κ} = D with the permutation matrix

Ξ_{d,κ} = 1_κ ⊕ σ_x ⊕ ⋯ ⊕ σ_x,

where σ_x is the Pauli x matrix, appearing (d² − κ)/2 times. In addition, since X consists of the eigenvectors v of T as column vectors, Λ_d v̄ is the eigenvector corresponding to λ̄ when v corresponds to λ. Moreover, since Ξ_{d,κ} interchanges the columns back, we have Λ_d X̄ Ξ_{d,κ} = X. Using this fact and the definition M = X^{-1} (R ⊗ R̄) X, we obtain the symmetry relation Ξ_{d,κ} M̄ Ξ_{d,κ} = M for the matrix M. This relation connects each entry of M with its complex conjugate and, via Eq. (15), each residue with its complex conjugate. This can be used for a systematic least squares approach to reconstructing the poles and residues, see Sec. III B.
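These symmetry relations are straightforward to verify numerically. The sketch below (random Q and R; the swap operator Λ_d is denoted S) checks the conjugation symmetry of T and that its spectrum is closed under complex conjugation:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
Q = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

I = np.eye(d)
T = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)

# swap operator Lambda_d = sum_{j,k} E_{jk} (x) E_{kj}
S = sum(np.kron(np.outer(I[j], I[k]), np.outer(I[k], I[j]))
        for j in range(d) for k in range(d))

print(np.allclose(S @ T @ S, T.conj()))   # Lambda_d T Lambda_d equals conj(T)

spec = np.linalg.eigvals(T)
closed = all(np.min(np.abs(spec - s.conjugate())) < 1e-8 for s in spec)
print(closed)                             # spectrum closed under conjugation
```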

III. STATE RECONSTRUCTION
Having established the structure of the correlation functions of cMPS, i.e., the structure of the data of our reconstruction problem, it remains to develop an appropriate protocol to extract the information encoded in the data. Given an n-point density-like correlation function of order 3 or higher corresponding to a cMPS |ψ_{Q,R}⟩, we will show that, in most cases, it is in principle possible to reconstruct the parameter matrices Q and R up to an arbitrary gauge and phase, and to reproduce all n-point functions.
We are dealing with a so-called inverse problem, a large class of problems that make "use of the actual results of some measurements of the observable parameters to infer the actual values of the model parameters" [44]. Many inverse problems are ill-conditioned: a small change in the measurements can lead to a huge change in the model parameters. In this section, we will examine the required steps for cMPS reconstruction, see Fig. 1, and the respective main factors that influence their performance on perturbed input data. Each step will be discussed in a separate section. We will see that the first and the last step, in particular, can be severely ill-conditioned.

A. Reconstruction steps
The reconstruction of a generic, translation invariant cMPS in the thermodynamic limit comprises the following steps, which are represented in Fig. 1: 1. The first step in processing the input data is to extract the poles {λ_k} and the residues {ρ_{k_1,…,k_{n−1}}} from an n-point correlation function, n ≥ 3, which is measured and contains additional noise and experimental imperfections.
2. In the second step, the matrix M is reconstructed from the residues, and the matrix D is determined from the poles. This can be achieved using certain invariances in the correlation functions that led to the formulation of Wick's theorem for matrix product states.
FIG. 1. The particular reconstruction steps, starting with the input data, an n-point correlation function of a cMPS |ψ_{Q,R}⟩, and ending with the variational parameter matrices Q and R that fully characterise the state. Alternatively, the state can also be described by K and R. With this knowledge, one can compute other n-point correlation functions and compare them with the input data to obtain evidence for a successful reconstruction.
3. In the final step, the cMPS parametrisation matrices Q and R can be extracted from the matrices M and D by imposing a specific gauge. Additionally, and after another gauge transformation, the Hamiltonian K of the auxiliary system can be computed from the matrices Q and R.
If the aim is only to generate and predict higher order density-like n-point functions, it is in general sufficient to use the matrices D and M from the second step, without any further reconstruction steps. This is much more robust against noise than the full reconstruction. Furthermore, we can leave out some of the poles (together with the corresponding entries in M) that barely contribute to the n-point functions. We follow this approach in accompanying work when analysing experimental data [43].
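As an illustration of this shortcut, the following sketch (again with randomly generated Q and R standing in for a reconstructed state, in a column-stacking convention) predicts a density-like 3-point function from D and M alone, via the matrix-product form (M e^{Dτ_1} M e^{Dτ_2} M)_{1,1}, and cross-checks it against the direct transfer-matrix expression:

```python
import numpy as np
from scipy.linalg import expm, eig

rng = np.random.default_rng(3)
d = 2
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
K = (A + A.conj().T) / 2
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q = -1j * K - R.conj().T @ R / 2

I = np.eye(d)
T = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)
Rho = np.kron(R.conj(), R)

lam, X = eig(T)
order = np.argsort(-lam.real)      # stationary eigenvalue (zero) first
lam, X = lam[order], X[:, order]
M = np.linalg.inv(X) @ Rho @ X

def c3_from_DM(t1, t2):
    # 3-point function generated from the pair (D, M) alone
    return (M @ np.diag(np.exp(lam * t1)) @ M @ np.diag(np.exp(lam * t2)) @ M)[0, 0]

# cross-check against the direct transfer-matrix expression
vecI = np.eye(d).reshape(-1, order='F')
r = X[:, 0] / (vecI @ X[:, 0])
c3_direct = vecI @ Rho @ expm(T * 0.7) @ Rho @ expm(T * 1.3) @ Rho @ r
print(abs(c3_from_DM(0.7, 1.3) - c3_direct) < 1e-8)
```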

B. Reconstructing the poles and residues
When analysing spectra of sampled linear combinations of sinusoidal functions, methods based on integral transforms like the discrete Fourier transform (DFT) seem like a natural choice. In our case, however, we deal with exponentially damped sinusoids with potentially similar frequencies, which results in heavy broadening and overlapping of the corresponding spectral peaks. The damping factors would then have to be determined from the widths of the corresponding peaks, and, in view of experimental data, we cannot assume too many sampled data points; hence, the spectral resolution would be rather low. Only in certain cases are the peaks in the frequency spectrum sufficiently separated to determine the poles directly in a feasible way using integral transforms.
Another class of data fitting methods that may come to mind is based on nonlinear (e.g., least squares) minimisation approaches. Clearly, the number of parameters critically determines the computational effort and the successful applicability of the algorithm. The results, however, can be improved by restricting ourselves to a likely parameter region obtained from a preceding Fourier transform. Taking into account the Λ_d and Ξ_{d,κ} symmetries, and assuming normalised n-point functions, the number of real parameters can be reduced to nd² − 2. Only for unambiguous global minima (which is usually not the case for high damping factors in combination with noise) and for very small bond dimension did we obtain satisfactory results in acceptable time. Least squares approaches for correlation functions with larger n are at best feasible when using Q and R as parameters; otherwise, the number of parameters becomes too large. In spite of these drawbacks, a least squares algorithm could be used as an additional refinement step with initial values from other procedures, like the ones discussed below; nevertheless, the number of parameters remains limiting. On the other hand, if we can assume a small number of parameters and expect a considerable amount of noise, the least squares method can be a robust alternative. For example, for bond dimension d = 2, such a nonlinear least squares approach can be feasibly and successfully used.
Alternative minimisation methods, e.g., simulated annealing, did not lead to considerable improvements. The scaling of the computational effort with the number of parameters can, however, be significantly mitigated using iterative quadratic maximum likelihood (IQML) methods, although the application to correlation functions with n > 2 is not straightforward. Realising the challenges of solving a nonlinear estimation problem, it seems logical to exploit the structure of our particular model of the data to see if there are ways to solve the estimation problem more efficiently. It turns out that for data structures consisting of sums of damped oscillatory terms, it is possible to separate the estimation of poles and residues of the function into two different linear estimation processes. In the following sections, we describe two major approaches to achieve such an estimation.

Prony analysis
This technique is used in digital signal processing, and its roots go back to a method originally established by R. de Prony in 1795 in the context of fluids [38]. The main idea is to first recover the poles independently, by determining the roots of a polynomial computed from the signal (the correlation function), and then to insert the poles into a system of linear equations for the coefficients, which is in principle solvable with the usual linear algebra procedures. Prony's method is a special case of linear prediction [19] and has many further applications, e.g., as the starting point for nearest-neighbour detection of atoms in optical lattices [26,29]. The original method, however, is very sensitive to noise, so for working with experimental data we need several modifications, which we describe below. For further summaries and an introduction to the method, see for instance Refs. [18,31,37].
Prony's method is usually applied to C-valued functions, corresponding to 2-point functions, and for our purposes has to be extended to work with higher order n-point functions, which can easily be done. In our description, we will therefore start with the one-dimensional case with signal function

C^(2)(τ) = Σ_{k=1}^{d²} ρ_k e^{λ_k τ}.

The function is sampled at a finite number of points and is available only for N + 1 points τ_j, with C^(2)(τ_j) := C_j, j = 0, …, N. We thus obtain a system of linear equations relating the samples C_j to the poles and residues.
Once we have identified all poles {λ_k}, we can easily solve this system and are finished with the reconstruction. As we will see, one requirement for Prony's method is to sample the signal at equidistant points τ_j = j · Δτ, j ∈ {0, …, N}. With e^{λ_k Δτ} := μ_k, we arrive at

Σ_{k=1}^{d²} ρ_k μ_k^j = C_j,   j = 0, …, N,   (27)

where the poles are encoded in the (in general, non-square) Vandermonde matrix V = (μ_k^j)_{j=0,…,N; k=1,…,d²}. We must take care not to choose the sampling interval Δτ too large: considering the Nyquist-Shannon sampling theorem [42], the sampling rate should in general be at least twice the highest frequency ω_sup of the signal spectrum, 2π/Δτ > 2ω_sup. Vandermonde matrices are often ill-conditioned; e.g., according to Gautschi [10], there is a lower bound on the norm of the inverse of V (for N = d² and V invertible) that becomes very large if two poles get close to each other. This fact hints at the intrinsic limitations of this reconstruction method.
To determine the poles, we can regard the set {μ_1, …, μ_{d²}} as the roots of a polynomial P_{d²} with real coefficients and degree d² in the variable z. Such a polynomial naturally exists, it is just the product of the linear factors (z − μ_k),

P_{d²}(z) = Π_{k=1}^{d²} (z − μ_k) = Σ_{l=0}^{d²} a_l z^l.   (30)

Note that there are d² values μ_k but d² + 1 coefficients a_l. Our goal is to relate the set of coefficients {a_l} to the set of function values {C_j}. Once we have all a_l, we can compute the roots of the corresponding polynomial (30) and obtain the poles λ_k = ln(μ_k)/Δτ, k = 1, …, d². To this end, we multiply the first line of Eq. (27) by a_0, the second by a_1 and so on, and perform the sum,

Σ_{l=0}^{d²} a_l C_l = Σ_{k=1}^{d²} ρ_k Σ_{l=0}^{d²} a_l μ_k^l = Σ_{k=1}^{d²} ρ_k P_{d²}(μ_k).   (32)

Now, by choice of the a_l, each μ_k is a root of P_{d²}(z) for all k = 1, …, d², so that each sum over l in Eq. (32) vanishes. Accordingly, we see that

Σ_{l=0}^{d²} a_l C_l = 0.   (33)

Since P_{d²} is monic by construction, the coefficient a_{d²} belonging to the highest power is equal to one. Hence, Eq. (33) becomes the recurrence relation

C_{d²} = − Σ_{l=0}^{d²−1} a_l C_l,   (34)

which is a linear equation whose unknowns a_l are the coefficients of the polynomial P_{d²}(z). In order to compute the remaining d² coefficients {a_0, …, a_{d²−1}}, we need at least d² equations. More linearly independent equations are easy to obtain, because the argument in Eq. (32) remains valid if we shift C_l to C_{l+m} for any m ∈ N with d² + m ≤ N:

Σ_{l=0}^{d²} a_l C_{l+m} = 0.   (35)

For d² equations, the largest index that appears is 2d² − 1, and our equation system reads

(C_{j+k})_{j,k=0,…,d²−1} (a_0, …, a_{d²−1})^T = −(C_{d²}, …, C_{2d²−1})^T.   (36)

Therefore, for d² poles we need at least 2d² sampling points {C_0, …, C_{2d²−1}}. The square matrix on the left hand side of Eq. (36), (C_{j+k})_{j,k=0,…,d²−1}, has the form of a Hankel matrix. If it is non-singular, the solution vector (a_0, …, a_{d²−1})^T is unique and can, together with a_{d²} = 1, directly be inserted into (30), which in turn will yield the d² poles in a unique way.
Hence, when reconstructing a function with d² poles and residues, we need precisely 2d² sampling points to solve the Hankel and the Vandermonde systems exactly, provided that both matrices are non-singular. This means that for small bond dimensions and without noise, the necessary resolution of the signal for a complete reconstruction is very low. There are many established criteria for the invertibility [25, § 18] and inversion algorithms [4,45] for Hankel or Toeplitz matrices (Eq. (36) can also be rearranged as a Toeplitz system). They are known to be potentially ill-conditioned, which reflects the inverse nature of the problem; e.g., the spectral condition number of a real positive-definite N × N Hankel matrix is bounded from below by 3 · 2^{N−6} [46]. In practice, recovering the poles is more stable when oversampling the signal and using a higher pole estimate p > d², i.e., working with a larger (not necessarily square) Hankel matrix and a larger solution vector in Eq. (36), and solving the equation system in a least squares sense. This boils down to applying the Moore-Penrose pseudoinverse of the Hankel matrix to the right hand side of Eq. (36) to obtain the coefficients of the polynomial, inserting the computed poles into Eq. (27), and discarding the surplus poles with the smallest associated residues.
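A minimal noiseless sketch of the Prony steps just described (hypothetical poles and residues, minimal sampling with 2p points): build the Hankel system (36), obtain the monic polynomial coefficients, take the roots, and finally solve the Vandermonde system (27) for the residues.

```python
import numpy as np

# Hypothetical noiseless 2-point function: p = 4 damped oscillatory terms.
lam_true = np.array([0.0, -0.3 + 2.1j, -0.3 - 2.1j, -0.9])
rho_true = np.array([1.0, 0.4 - 0.2j, 0.4 + 0.2j, -0.6])
p = len(lam_true)

dt = 0.25                                 # sampling interval Delta tau
N = 2 * p                                 # minimal number of samples
C = np.array([(rho_true * np.exp(lam_true * j * dt)).sum() for j in range(N)])

# Hankel system (36): solve for the coefficients a_0, ..., a_{p-1}
H = np.array([[C[j + k] for k in range(p)] for j in range(p)])
a = np.linalg.solve(H, -C[p:2 * p])

# roots of z^p + a_{p-1} z^{p-1} + ... + a_0 give mu_k = exp(lam_k dt)
mu = np.roots(np.concatenate(([1.0], a[::-1])))
lam_rec = np.log(mu) / dt

# residues from the Vandermonde system (27), in a least squares sense
V = np.vander(mu, N, increasing=True).T   # V[j, k] = mu_k^j
rho_rec = np.linalg.lstsq(V, C, rcond=None)[0]

# every true pole is recovered (matching poles by proximity)
print(all(np.min(np.abs(lam_rec - lt)) < 1e-6 for lt in lam_true))
```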
Note that instead of solving Eq. (36), we can also determine the kernel of (C_{j+k})_{j,k=0,…,p}, whose dimension is greater than or equal to one due to Eq. (33). If the kernel is one-dimensional, which corresponds to the matrix in Eq. (36) being non-singular, we get a unique (up to multiplication by a constant) solution vector (a_0, …, a_{p−1}, a_p)^T. The constant does not pose a problem, because any multiple of (a_0, …, a_{p−1}, a_p)^T yields the same roots of the associated polynomial: Σ_{l=0}^p α a_l z^l = 0 is equivalent to Σ_{l=0}^p a_l z^l = 0 for α ≠ 0. This method has proven to be more robust towards noise in some cases [33] and can be generalised in a very elegant way to higher order correlation functions [39].
Unfortunately, in many cases, Prony's method is highly susceptible to noise in the signal. However, it provides a beautiful framework showing that it is in principle possible to reconstruct poles and residues of a signal. Without noise, both poles and residues can be determined exactly. In the next section, we describe a better algorithm for solving this type of inverse problem, which is more stable for larger bond dimensions and finer sampling rates.

Matrix pencil method
The original matrix pencil method (MPM) was developed by Hua and Sarkar [21,22] and can be directly applied to our problem. As with the Prony algorithm, the poles are determined first and independently from the residues. Although the MPM is related to Prony [40], it is considerably less sensitive to noise [19, Sec. 1.2] and can deal with higher sampling rates in a more stable fashion. Once the poles are identified, the residues are found via a linear equation system in the same way as in Prony's method. Here, we will just describe how to determine the poles. For simplicity, we will begin with the case of reconstructing a 2-point function and generalise to higher order correlation functions in the following section.
A matrix pencil M of degree n ∈ N is a polynomial over C with matrix valued coefficients M_j ∈ C^{d×d},

M(γ) = Σ_{j=0}^n M_j γ^j.

As with the Prony algorithm, we start by forming a Hankel matrix from the experimental data points {C_0, …, C_{N−1}},

C^[1] = (C_{j+k})_{j=0,…,N−P−1; k=0,…,P−1},

with integers N, P such that N − P, P > d². Generally, the larger the number of samples N, the better the estimation of the poles becomes. The optimal value of P regarding noise sensitivity typically lies between N/3 and N/2 [23]. In this method, we make use of the fact that C^[1] can always be decomposed as

C^[1] = V_1 R V_2,

with the Vandermonde matrices V_1 = (μ_k^j)_{j=0,…,N−P−1; k=1,…,d²} and V_2 = (μ_j^k)_{j=1,…,d²; k=0,…,P−1}, and the diagonal matrix R = diag(ρ_1, …, ρ_{d²}), as can easily be verified using Eq. (39). In addition to the Hankel matrix C^[1], we construct a second, shifted Hankel matrix

C^[2] = (C_{j+k+1})_{j=0,…,N−P−1; k=0,…,P−1},

which in turn can be decomposed as C^[2] = V_1 R D_μ V_2 with D_μ = diag(μ_1, …, μ_{d²}), and consider the linear matrix pencil C^[2] − γ C^[1] with γ ∈ C. Since all μ_j are distinct for a non-degenerate spectrum of T, and N − P, P > d², the matrices V_1 and V_2 have rank d², and we can see that C^[1] and C^[2] have rank d² as well. Generically, the matrix pencil C^[2] − γ C^[1] will have the same rank, except for γ = γ_j ∈ {μ_1, …, μ_{d²}}. In that case, the rank is reduced by one, and there exists a nontrivial vector v with

(C^[2] − γ C^[1]) v = 0.   (47)

In this form, the complex number γ can be regarded as a solution of the generalised eigenvalue problem (GEVP) (47). This means that the d² nonzero generalised eigenvalues of Eq. (47) are exactly the exponentiated poles e^{λ_1 Δτ}, …, e^{λ_{d²} Δτ}. Eq. (47) can be solved by a generalised Schur decomposition of the matrix pair {C^[2], C^[1]} or by solving the ordinary eigenvalue problem for C^[1]+ C^[2], with the pseudoinverse C^[1]+ of C^[1] [22]. After having determined the poles this way, they can be inserted into a linear equation system to obtain the corresponding residues, as with Prony's method.
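A minimal sketch of the plain MPM on a noiseless test signal (hypothetical poles and residues): form the two shifted Hankel matrices and read the exponentiated poles off the nonzero eigenvalues of C^[1]+ C^[2].

```python
import numpy as np

# Hypothetical noiseless signal: sum of p damped oscillatory terms.
lam_true = np.array([-0.1 + 1.5j, -0.1 - 1.5j, -0.5 + 0.4j, -0.5 - 0.4j])
rho_true = np.array([0.5, 0.5, 1.0 - 0.3j, 1.0 + 0.3j])
p, dt, N = len(lam_true), 0.2, 40

C = np.array([(rho_true * np.exp(lam_true * j * dt)).sum() for j in range(N)])

P = N // 3                                # pencil parameter, N/3 <= P <= N/2
C1 = np.array([[C[j + k] for k in range(P)] for j in range(N - P)])
C2 = np.array([[C[j + k + 1] for k in range(P)] for j in range(N - P)])

# ordinary eigenvalue problem with the pseudoinverse of C1; the p
# eigenvalues of largest magnitude are the exponentiated poles mu_k
ev = np.linalg.eigvals(np.linalg.pinv(C1) @ C2)
mu = ev[np.argsort(-np.abs(ev))][:p]
lam_rec = np.log(mu) / dt

print(all(np.min(np.abs(lam_rec - lt)) < 1e-6 for lt in lam_true))
```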

Technical improvements
Several improvements can be made to the original MPM approach including features from other reconstruction methods which led to algorithms like Pro-ESPRIT and TLS ESPRIT [23], which we mention for the sake of completeness. Modifications based on structured low rank approximations [2,32] did not lead to significantly better results. Here, we will focus on the so-called state space matrix pencil method, which shows the highest robustness towards noise of all direct MPM descendants [19,23] and is the one we prefer to implement.
In this context, we continue with Eq. (47), but instead of solving it directly, we perform additional noise filtering steps via SVD rank truncations [20]. Performing separate SVD truncations as in the original approach has proven to be less robust than performing a joint SVD on C^[1] and C^[2] ∈ C^{(N−P)×P},

(C^[1], C^[2]) = U Σ W†,

with a unitary matrix U ∈ U(N − P) and Σ ∈ C^{(N−P)×2P} containing the singular values of the concatenated matrix (C^[1], C^[2]) ∈ C^{(N−P)×2P}. Truncating to the d² largest singular values yields filtered factors V^[1]_T, V^[2]_T ∈ C^{P×d²} of C^[1] and C^[2], respectively. The GEVP we want to solve now is

(V^[2]†_T − γ′ V^[1]†_T) v = 0,   (52)

with the filtered eigenvalues γ′ ∈ C. Since V^[1]_T, V^[2]_T ∈ C^{P×d²} and P ≫ d², there is still surplus information we can use to SVD filter Eq. (52) one more time. For higher robustness, we repeat the truncation process, applying it to the concatenated matrix (V^[1]_T, V^[2]_T), which results in an analogous GEVP with the doubly SVD filtered eigenvalues γ″ ∈ C. If there is no noise, then all d² generalised eigenvalues of the doubly filtered pencil are generalised eigenvalues of the pencil {V^[2]†_T, V^[1]†_T}, thus generalised eigenvalues of {C^[2], C^[1]}, and nothing else than the exponentiated poles e^{λ_1 Δτ}, …, e^{λ_{d²} Δτ}. With noise, we can assume that the filtered set of eigenvalues {γ″} provides a better estimate than the unfiltered {γ} [20,23]. Since V^[1]†_T has full rank by construction, everything boils down to solving an ordinary eigenvalue problem with its pseudoinverse. This concludes the description of the state space matrix pencil method, which is our preferred technique for pole reconstruction.
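The joint-SVD filtering can be sketched as follows (a simplified single-truncation variant on a weakly noisy test signal, not the full doubly filtered procedure): the concatenated Hankel matrices are rank-truncated jointly before the pencil is solved.

```python
import numpy as np

rng = np.random.default_rng(4)
lam_true = np.array([-0.1 + 1.5j, -0.1 - 1.5j, -0.6])
rho_true = np.array([0.8, 0.8, -0.5])
p, dt, N = len(lam_true), 0.2, 60

C = np.array([(rho_true * np.exp(lam_true * j * dt)).sum() for j in range(N)])
C = C + 1e-4 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # weak white noise

P = N // 3
C1 = np.array([[C[j + k] for k in range(P)] for j in range(N - P)])
C2 = np.array([[C[j + k + 1] for k in range(P)] for j in range(N - P)])

# joint SVD of the concatenated Hankel matrices, truncated to rank p
U, s, Wh = np.linalg.svd(np.hstack([C1, C2]), full_matrices=False)
W1h, W2h = Wh[:p, :P], Wh[:p, P:]         # filtered factors of C1 and C2
C1f = U[:, :p] * s[:p] @ W1h              # rank-p filtered Hankel matrices
C2f = U[:, :p] * s[:p] @ W2h

ev = np.linalg.eigvals(np.linalg.pinv(C1f) @ C2f)
mu = ev[np.argsort(-np.abs(ev))][:p]
lam_rec = np.log(mu) / dt

print(all(np.min(np.abs(lam_rec - lt)) < 1e-2 for lt in lam_true))
```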

Generalisation to higher dimensions
So far, we have developed the reconstruction techniques for 2-point correlation functions. In this section, we show how to deal with higher order functions and generalise the previous discussion. Additionally, we show how one can improve the signal-to-noise ratio by exploiting redundant information in the higher order correlation functions.
If, for an $n$-point function, we uniformly sample each tensor index with $N$ sampling points, we obtain an $(n-1)$-dimensional array $(C_{l_1,\dots,l_{n-1}})_{l_1,\dots,l_{n-1}=0,\dots,N-1} \in \mathbb{C}^{N^{n-1}}$ with
$$
  C_{l_1,\dots,l_{n-1}} = \sum_{k_1,\dots,k_{n-1}} \rho_{k_1,\dots,k_{n-1}}\, e^{\lambda_{k_1} l_1 \Delta\tau} \cdots e^{\lambda_{k_{n-1}} l_{n-1} \Delta\tau} .
$$
To extract the poles, we carry forward the approach of Zhu and Hua [49, chap. 17.11]. We fix one index $l_j$ of $C_{l_1,\dots,l_{n-1}}$ and sum over the other indices,
$$
  \hat C_{l_j} = \sum_{l_1,\dots,l_{j-1},l_{j+1},\dots,l_{n-1}} C_{l_1,\dots,l_{n-1}} . \qquad (58)
$$
The summing provides averaging and hence increases noise stability. This procedure is only possible because the poles and the sampling interval are the same for each index of the $n$-point function data array. Inserting the definition for $C_{l_1,\dots,l_{n-1}}$ and separating $e^{\lambda_{k_j} l_j \Delta\tau}$ from the summation over $k_j$ yields
$$
  \hat C_{l_j} = \sum_{k_j} \Bigl( \sum_{k_1,\dots,k_{j-1},k_{j+1},\dots,k_{n-1}} \rho_{k_1,\dots,k_{n-1}} \sum_{l_1,\dots,l_{j-1},l_{j+1},\dots,l_{n-1}} \prod_{m \neq j} e^{\lambda_{k_m} l_m \Delta\tau} \Bigr)\, e^{\lambda_{k_j} l_j \Delta\tau} . \qquad (59)
$$
Eq. (58) can be regarded as the components of a 2-point function with the sought-after poles, and the term within the brackets, which only depends on $k_j$, contains its residues. The concrete values of these effective residues do not matter, since in this step we are only interested in the poles. We can average further by summing $\hat C_{l_j}$ over all choices of the fixed index $j$. This still corresponds to a 2-point function with the correct poles.
We can now apply the established matrix pencil, Prony, or least squares methods to the vector $\hat C_{l_j}$ to obtain the poles. Additionally, the averaging results in an effective reduction of the standard deviation of the (white) noise by a factor of $((n-1)N^{n-1})^{-1}$. Regarding the residues, we can reshape the array of the poles into a matrix and obtain the residues as the solution vector of the corresponding linear equation system in the least squares sense.
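A small sketch of this averaging (assuming a toy 3-point array with the same poles shared along both indices; the consistency check at the end uses the fact that any exponential sum in the $z_k$ is annihilated by the linear recurrence with characteristic polynomial $\prod_k (x - z_k)$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-point correlation array with the same poles z_k along every index:
# C[l1, l2] = sum_{k1,k2} rho[k1, k2] * z[k1]**l1 * z[k2]**l2
z = np.exp(np.array([-0.05 + 1.2j, -0.02 - 0.7j, -0.10 + 0.3j]))
rho = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
N = 40
powers = z[None, :] ** np.arange(N)[:, None]       # powers[l, k] = z_k**l
C = np.einsum('lk,mj,kj->lm', powers, powers, rho)

# Fix one index and sum over the other; then average over both choices of
# the fixed index. The result is an effective 2-point signal, same poles.
C_hat = C.sum(axis=1) + C.sum(axis=0)

# Check: p(x) = prod_k (x - z_k) annihilates the averaged signal, i.e.
# C_hat obeys the linear recurrence with coefficients np.poly(z).
p = np.poly(z)
residual = np.max(np.abs(np.convolve(C_hat, p, mode='valid')))
```

The residual vanishes up to floating point error, confirming that the averaged vector is again an exponential sum in the original poles and can be fed to any of the 2-point reconstruction methods.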

C. Extracting M
After having determined the poles and residues of the input correlation function (our first reconstruction step, as discussed in Sec. III A), the next step is to identify the matrix $M$. From $M$ together with $D$, the variational parameter matrices $R$ and $Q$ can be determined.
First, we note that conjugating $M$ with a diagonal matrix whose first entry is equal to one does not change the density-like correlation functions. This observation can be used to require that $M_{1,j} = 1$ for $j = 2, \dots, d^2$, which is possible if the $M_{1,j}$ are nonzero. For $M_{1,1}$ to be equal to one, we need to normalise the $n$-point function accordingly. In particular, we obtain $\rho^{(n)}_{k_1,1,\dots,1} = 1 \cdots 1 \cdot M_{k_1,1} = \rho_{k_1}$. For clarity, in this section we mark the order of the residues with an additional index. We can compute $M_{i,j}$ for any $i,j = 1,\dots,d^2$ and $n \geq 3$ via ratios of residues such as
$$
  M_{i,j} = \rho^{(n)}_{i,j,1,\dots,1} \big/ \rho^{(n)}_{j,1,\dots,1} .
$$
From this equation we can also see that we need $n$ to be larger than two, since a 2-point function can at best provide the first column of $M$. In practice, we may want to reduce noise by averaging over multiple independent prescriptions for $M_{i,j}$, obtained by placing the index pair at different positions of the residue and setting the remaining indices to one. By rearranging the residues, we can also express higher order expectation values in terms of lower order ones: this is the Wick's theorem for matrix product states [24]. At this point, we can check the validity of the reconstructed $M$, since it necessarily must obey the symmetry $\Xi_{d,\kappa} M \Xi_{d,\kappa} = M$ for an accordingly ordered spectrum of $T$.
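As an illustration (a sketch under the assumption that the residues factorise into products of $M$ entries in the normalisation above; the concrete index ordering is our reading of the text, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d2 = 4  # plays the role of d**2

# Hypothetical M with the first row normalised to one, M[0, j] = 1.
M = rng.standard_normal((d2, d2)) + 1j * rng.standard_normal((d2, d2))
M[0, :] = 1.0

# Residues as products of M entries (our reading of the factorisation):
# rho2[k]    = M[k, 0]
# rho3[i, j] = M[0, i] * M[i, j] * M[j, 0] = M[i, j] * M[j, 0]
rho2 = M[:, 0].copy()
rho3 = M * rho2[None, :]

# Entry-wise recovery of M from residue ratios, M[i, j] = rho3[i, j] / rho2[j]
M_rec = rho3 / rho2[None, :]
```

The recovered matrix reproduces $M$ exactly in this noiseless toy setting; with noisy residues one would average over the different index placements as described above.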

D. Extracting R
To obtain a deeper understanding of the dynamics of the cMPS and to get a direct parametrisation of the state, it is necessary to reconstruct the variational parameter matrices $R$ and $Q$. We have that, by definition, $M = X^{-1}(R \otimes \bar R)X$ and $D = \mathrm{diag}(\lambda_j) = X^{-1} T X$ with the change-of-basis matrix $X$ indeterminate. Because of the gauge invariance of $Q$ and $R$, we can determine them only up to conjugation with an invertible matrix and therefore will not need to determine the concrete form of $X$ at all. In this sense, there are no specific $R$ and $Q$ matrices to be reconstructed. Nevertheless, we continue using the terms $R$ and $Q$, thinking, without loss of generality, of matrices that are in a specific, yet arbitrary, gauge. Our strategy to recover the variational parameter matrices is to choose $R$ diagonal, which can be done in almost all cases, and determine $Q$ accordingly. Equivalently, one could also require $Q$ to be diagonal and determine $R$ accordingly, but here we use the former approach. We first diagonalise $M \to Y^{-1} M Y = M_{\mathrm{diag}}$ with the change-of-basis matrix $Y$. Since $M$, as well as its similar matrix $R \otimes \bar R$, has the spectrum $\{r_i \bar r_j\}$ with $i,j = 1,\dots,d$, where $r_1, \dots, r_d$ are the eigenvalues of $R$, the entries of $M_{\mathrm{diag}}$ can be reordered with a permutation matrix $O$ such that the resulting matrix has the form of a Kronecker product of two diagonal matrices, $R_{\mathrm{rec}} \otimes \bar R_{\mathrm{rec}}$. Since $R_{\mathrm{rec}}$ by construction is similar to $R$, we can write it as $R_{\mathrm{rec}} = W^{-1} R W$, where $W$ is the change-of-basis matrix that diagonalises $R$. Diagonalising and reordering $M$ thus yields $R$ in a certain gauge, namely $W^{-1} R W$, and we can identify $R_{\mathrm{rec}}$ with a reconstruction of the matrix $R$. Note that $XYO$ has a Kronecker product structure as well, which will be important for reconstructing $Q$. Rewriting Eq. (66), we have
$$
  (YO)^{-1} M\, (YO) = (XYO)^{-1} (R \otimes \bar R)(XYO) ,
$$
which is equal to $R_{\mathrm{rec}} \otimes \bar R_{\mathrm{rec}}$, and, by definition of $R_{\mathrm{rec}}$ and using a Kronecker product identity, hence equal to $(W \otimes \bar W)^{-1}(R \otimes \bar R)(W \otimes \bar W)$. There is a little subtlety in that, in general, numerical diagonalisation algorithms will not provide $Y$ such that $XYO$ is a Kronecker product, but usually such that each eigenvector, a column of $Y$, is normalised, yielding a matrix $Y_N$. This matrix can also be written as $Y_N = Y D_Y$ with a diagonal matrix $D_Y$. To determine $O$ and extract $R_{\mathrm{rec}}$ from $R_{\mathrm{rec}} \otimes \bar R_{\mathrm{rec}}$, it is important to take into account that multiplying $R$ with an arbitrary complex phase factor $e^{i\varphi}$ does not change $R \otimes \bar R$.
In the same way, $Q \otimes \mathbb{1}_d + \mathbb{1}_d \otimes \bar Q$ is left invariant when adding $i\chi \cdot \mathbb{1}_d$ with $\chi \in \mathbb{R}$ to $Q$. Hence, also the transfer matrix remains unchanged. Clearly, out of density-like correlation functions, $R$ and $Q$ can only be reconstructed up to these factors, since $Q$ and $R$ only appear in these Kronecker product terms.
By fixing $e^{i\varphi}$, one diagonal entry $r_j$ of $R_{\mathrm{rec}}$ can be assumed to be real, and $M_{\mathrm{diag}}$ can be rearranged into a Kronecker product by successively checking whether for an entry $M_{\mathrm{diag},l,l}$ the fraction $|M_{\mathrm{diag},l,l}/r_j|^2$ yields another (real) entry of $M_{\mathrm{diag}}$ (or, in practice with noise, is sufficiently close to one), which must be the case for a Kronecker product matrix with spectrum $\{r_i \bar r_j\}$. After repeating this procedure for all entries of $M_{\mathrm{diag}}$, all eigenvalues $\{r_j\}$ are determined, in a fixed order that determines the order of $R_{\mathrm{rec}}$ and $O$ as well. Now, it remains to determine $Q$, which will be done in the next section.
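A minimal sketch of this pairing check (with $R$ taken diagonal directly, the global phase fixed so that $r_0$ is real and positive, and the spectrum read off the diagonal Kronecker product; all names and tolerances are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# Diagonal R with the global phase fixed so that r_0 is real and positive.
r = rng.standard_normal(d) + 1j * rng.standard_normal(d)
r[0] = abs(r[0])
R = np.diag(r)

# Spectrum of R (x) conj(R) is {r_i * conj(r_j)}; for diagonal factors it
# can be read off the diagonal (this stands in for the diagonalised M).
s = np.diag(np.kron(R, R.conj()))

# Candidate eigenvalues c = s_l / r_0; keep those whose modulus squared
# reappears (as a real entry) in the spectrum, as described in the text.
cands = s / r[0]
rec = np.array([c for c in cands
                if np.min(np.abs(s - abs(c) ** 2)) < 1e-8])
```

By construction every true eigenvalue $r_i$ appears among the accepted candidates; in a noisy setting the tolerance of the membership test would be set by the noise level.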

E. Extracting Q
The second parameter matrix to be reconstructed, $Q$, will in general not be diagonal in the same gauge where $R$ is diagonal. The goal is to find $Q$ in the appropriate gauge. First, we take the matrix $D$, which contains the eigenvalues of $T$, subtract the reconstructed matrix $M$, and see that in principle all the information about $Q$ is stored here:
$$
  D - M = X^{-1} T X - X^{-1}(R \otimes \bar R)X = X^{-1}\bigl(Q \otimes \mathbb{1}_d + \mathbb{1}_d \otimes \bar Q\bigr)X . \qquad (69)
$$
By conjugating this with the matrix $YO$, which is the same change-of-basis matrix that directly led from $M$ to $R_{\mathrm{rec}} \otimes \bar R_{\mathrm{rec}}$, we obtain
$$
  (YO)^{-1}(D - M)(YO) = Q_{\mathrm{rec}} \otimes \mathbb{1}_d + \mathbb{1}_d \otimes \bar Q_{\mathrm{rec}} . \qquad (70)
$$
We obtain in this way $Q_{\mathrm{rec}} := W^{-1} Q W$ in the gauge corresponding to the gauge of $R_{\mathrm{rec}} = W^{-1} R W$, and thus it represents a valid set of parameters that define the state. To extract $Q_{\mathrm{rec}}$ out of Eq. (70), we can, as in the case of $R_{\mathrm{rec}}$, assume one diagonal entry $q_{j,j}$ of $Q_{\mathrm{rec}}$ to be real, which corresponds to subtracting $i\,\mathrm{Im}(q_{j,j}) \cdot \mathbb{1}_d$ from $Q$. In this way, we can read this $q_{j,j}$ from the corresponding diagonal entry $q_{j,j} + \bar q_{j,j} = 2 q_{j,j}$ in Eq. (70), and subsequently the remaining diagonal entries. Because of the structure of Eq. (70) as a Kronecker sum, the off-diagonal entries can be read off without further preparation.
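A small sketch of reading $Q_{\mathrm{rec}}$ off a Kronecker sum (we build $S = Q \otimes \mathbb{1} + \mathbb{1} \otimes \bar Q$ directly as a stand-in for the conjugated $D - M$; the row-major index convention and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

# Hypothetical Q with the gauge fixed so that Q[0, 0] is real.
Q = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q[0, 0] = Q[0, 0].real

# Kronecker sum, playing the role of the conjugated D - M:
# S[(j,j'),(k,k')] = Q[j,k]*delta(j',k') + delta(j,k)*conj(Q)[j',k']
S = np.kron(Q, np.eye(d)) + np.kron(np.eye(d), Q.conj())

Q_rec = np.zeros((d, d), dtype=complex)
q00 = S[0, 0].real / 2                    # S[(0,0),(0,0)] = 2 Re Q[0,0]
for j in range(d):
    # S[(j,0),(j,0)] = Q[j,j] + conj(Q[0,0]) = Q[j,j] + q00
    Q_rec[j, j] = S[j * d, j * d] - q00
for i in range(d):
    for k in range(d):
        if i != k:
            # S[(i,0),(k,0)] = Q[i,k] for i != k
            Q_rec[i, k] = S[i * d, k * d]
```

The diagonal is fixed by the single real entry; the off-diagonal entries come out directly, mirroring the statement that no further preparation is needed for them.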
The fact that $Y$ is only determined up to multiplication with a diagonal matrix $D_Y$, as mentioned in the previous section, does not pose an obstacle for the reconstruction of $Q_{\mathrm{rec}}$. Its gauge needs to be fixed only up to conjugation with a diagonal matrix if $R_{\mathrm{rec}}$ is in a diagonal gauge. Furthermore, it does not matter that also the matrix $M$ is only determined up to conjugation with a diagonal matrix $D_M$, which we used to require that $M_{1,j} = 1$ for $j \geq 2$. Using $D_M^{-1} M D_M$ instead of $M$ in Eq. (69), and $X^{-1} T X$ being diagonal, we merely replace $X$ by $\tilde X = X D_M$ in the conjugations above. The particular structure of $X$ or $\tilde X$ is not needed in the algorithm.
On the other hand, if we normalise the $n$-point function and hence $M$ by multiplying it by a constant, we have to be careful, since $D - cM$, for some $c \in \mathbb{R}$, will in general not result in a matrix similar to $Q \otimes \mathbb{1}_d + \mathbb{1}_d \otimes \bar Q$. Accordingly, we have to renormalise $M \to \tilde M_{1,1} \cdot M$, where the number $\tilde M_{1,1}$ can be read off the corresponding residue. Note that computing eigenvectors, which the matrix $X$ consists of, can be a very unstable (in extreme cases even discontinuous) procedure, especially for higher bond dimensions, when eigenvalues can cluster [11, cor. 7.2.6]. Hence the procedure of determining $Q$ is highly susceptible to noise. To improve noise stability, we can average $Y$ by using the symmetry property $\Xi_{d,\kappa} Y \Lambda_d = Y$, which follows from the symmetries of $M$ and $R_{\mathrm{rec}} \otimes \bar R_{\mathrm{rec}}$, and use the accordingly symmetrised matrix. This concludes the reconstruction of the variational parameter matrices $Q$ and $R$, which is the last step in our reconstruction procedure, Sec. III A. Additionally, it is now possible to construct the Hamiltonian of the auxiliary system $K$ as in Eq. (10) et sqq. and relate the cMPS to a Lindblad master equation. The fact that we can reconstruct $Q$ only up to an additive term $i\chi \cdot \mathbb{1}$ results in $K$ being indeterminate up to an additive term $\chi \cdot \mathbb{1}$. This is reasonable since only the differences in the spectrum of the Hamiltonian are physically relevant, and these are not affected by a global shift by $\chi$.

IV. APPLICABILITY AND LIMITATIONS
The proposed tomography method relies on assumptions. It is hence important to know its limitations and how to check the applicability of the method to given data. The basic assumption is that the correlations in the data are-at least approximately-of the type found in cMPS spatially, or equivalently of the type found in finite dimensional quantum systems whose dynamics are given by a Lindblad equation temporally. It is hence natural to assume that our method is applicable to settings similar to the ground states of gapped local Hamiltonians and for fields which originate from an interaction with finite level systems-think, e.g., of a light beam emitted by an atom trap. In this section, we aim to give a description of ways to gain confidence and check the consistency of the estimates obtained by our reconstruction methods for quantum fields.
Since it is our goal to produce usable estimation tools for experimental applications, it is very important to have a clear understanding of how to determine whether or not a particular reconstruction was successful or even if the cMPS ansatz is applicable to a particular situation. In this context, we can recognize two different scenarios that can occur: 1. the idealised case, where the data actually comes from a cMPS, and 2. a realistic case, in which the data comes from a physical system (not a cMPS, but possibly well approximated by one) and is in general noisy. In the following, we will discuss both in more detail.
In the ideal case, data will be produced by a generic cMPS of unknown bond dimension $d$. From the 2-point correlation function, following the reconstruction methods discussed in Sec. III B 2, we can extract an estimate of $d$ by computing the rank of the (sufficiently sized) ansatz Hankel matrix in Eq. (37). Even if noise is present in the signal, an estimate of the bond dimension can be obtained, because noise-induced singular values are small. Since some of the elements of the matrix $M$ can be zero, some of the residues $\rho$ corresponding to poles $\lambda$ can also be zero, thereby hiding those poles. Correlators with different $n$, on the other hand, can reveal these poles at some point, but not necessarily so. Having found all the poles there are, implying also access to the whole matrix $M$, is indicated by an agreement of the poles of all available $n$-point functions. One should keep in mind, though, that one will never be able to verify this, even in the idealised case, with a finite amount of data, as it is possible to construct a state which agrees with a given cMPS on, e.g., a finite number of $n$-point functions but differs elsewhere. However, a non-increase of the set of poles over a wide range of $n$-point functions is sufficient to build confidence in the correctness of the reconstruction. It is a satisfactory feature of our method that we can quantify the confidence of the reconstruction in this way.
In contrast, a priori information about the number of expected poles, and a guarantee that the number and numerical values of residues and poles will be consistent for all $n$-point functions, is not available in most real-world tomographic settings. In fact, when data comes from an experiment, we expect a description in terms of cMPS to be possible only in an approximate sense. A similar situation is known for discrete MPS in a lattice setting, where an exact description of a state can be found only if its Schmidt rank is finite. However, many states whose Schmidt numbers form a fast decaying sequence allow for an efficient description with discrete MPS. Even if the physical system is well approximated by a cMPS in this sense, in general we expect to have an infinite number of poles to recover. However, only a small number of them will be associated with residues that are big enough to contribute to the correlation functions. The number of relevant residues and poles can be identified by looking for singular values of the Hankel matrix in Eq. (37) greater than an appropriate threshold. The tomographer, hence, has to formulate a hypothesis about the relevance of the observed poles and try to gain confidence in this assumption. The desired situation to observe in practice is that the recovered poles do not change too much (i.e., they are within some threshold, e.g., previously determined by the noise level) independently of the correlation function used to extract them.
In summary, if the set of poles has to be extended time and again over a wide range of correlation functions, the assumption that the state can be described by a cMPS is clearly wrong. In particular, such a situation would tell us that the cMPS ansatz is not a good model for the particular system and data set. Along the lines of the discussion above, what we propose in order to check and gain confidence in the applicability of our methods in practice is the following: use lower order correlation functions to extract a cMPS description of the system, use the reconstructed cMPS to predict higher order functions, and compare them to the available measured ones. This way, we can check the consistency of the reconstruction procedure and the validity of the cMPS ansatz for the field state under investigation.

V. APPLICATIONS
In this section, we show how the formalism developed so far can be applied to real world scenarios. We demonstrate the applicability in two basic settings. First, we generate correlation functions similar to data obtainable in current experimental settings. For this, we use simulated data to study the performance of the reconstruction method in situations in which noise is present. Second, we analyse the applicability of our techniques to the Lieb-Liniger model, which is a well-known and well-investigated model in one-dimensional non-relativistic field theory.

A. Simulations and error analysis
Before typical noise models can be taken into consideration, we ask what kind of problems we are most likely to encounter. As we have seen, given an arbitrary cMPS n-point function with non-degenerate spectrum, its poles and residues can be obtained by matrix pencil or Prony's methods, provided there is sufficient accuracy. We keep in mind that formally it is required that T has a non-degenerate spectrum, which is, however, the case for almost all randomised T . Also, it is possible that M contains elements of value zero, which is, likewise, not to be expected. On the other hand, there are other more practical obstacles related to concrete implementation features of the numerical algorithms discussed above.

Typical problems to be expected
The identification of the poles when determining the matrices M and D is the most critical part of our procedure. More concretely, we face the problem of resolving maxima of the Laplace transform of the correlations in the complex plane. We do not do this directly, but the challenges remain the same.
The problem is to discern poles that lie close to each other and to identify poles that have comparatively small residues. Moreover, we might face large damping factors, which results in a broadening of the peaks in the Fourier spectrum. The required accuracy for the correct identification of poles and residues hence critically depends on the position of the poles {λ j } in the complex plane and the ratio between damping factor Re(λ j ) and frequency Im(λ j ). Not surprisingly, all these issues are aggravated for higher bond dimensions; the n-point functions consist of a larger number of oscillatory components, typically in the vicinity of other poles. Moreover, the reconstruction of the residues will also be affected if the poles are close to each other. This happens because the corresponding linear Vandermonde system of equations becomes more ill-conditioned.
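The effect of clustering poles on the residue reconstruction can be illustrated with the condition number of the corresponding Vandermonde system (a toy setup with our own parameter choices):

```python
import numpy as np

def vandermonde_condition(z, n_samples=50):
    # Columns z_j**k of the linear system mapping residues to signal samples.
    V = np.asarray(z)[None, :] ** np.arange(n_samples)[:, None]
    return np.linalg.cond(V)

dtau = 0.1
# Well-separated versus closely clustered frequencies, equal damping.
z_far = np.exp(np.array([-0.05 + 1.00j, -0.05 + 2.00j]) * dtau)
z_close = np.exp(np.array([-0.05 + 1.00j, -0.05 + 1.05j]) * dtau)
```

As the frequencies approach each other, the Vandermonde columns become nearly collinear over the sampling window and the condition number grows sharply, so small noise in the signal translates into large errors in the recovered residues.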
When reconstructing Q from the matrix M , we face another type of typical problem. Determining R does not lead to significant additional numerical problems since it mainly involves an ordinary diagonalisation procedure, whereas for reconstructing Q we need the eigenvectors of M , which are very susceptible to perturbations of the matrix.
In the following, we want to test the robustness of our method by analysing typical noise cases independently. First, as a preparatory step, we generate typical cMPS. Second, we examine how the reconstruction of the poles is affected by adding noise to the input correlation functions. Third, we survey the reconstructability of R and Q when the input for this reconstruction step, the matrix M , is perturbed. Fourth, we study the influence of the presence of additional fields.

Generating typical cMPS
In this section, we give a recipe to generate correlation functions with structural features on a desired length scale, based on a randomisation-ansatz for the Q and R matrices. This is in principle a non-trivial task, as the length scales and damping of the fluctuations are directly derived from the spectrum of T , which depends non-linearly on the entries of Q and R.
We make the ansatz of generating $Q$ and $R$ as complex Gaussian random matrices, i.e., with real and imaginary parts of the entries independent and identically normally distributed, and renormalise $Q$ such that all eigenvalues of $T$ have real part $\leq 0$. This results in a roughly uniform distribution of the eigenvalues of $T$ within a disc left of the imaginary axis, which is not entirely unexpected when considering Girko's circular law [7] and the Kronecker product structure of $T$. The damping factors of the poles are of the same magnitude as their frequencies or larger, which is not the case if oscillations are actually to be observed, and moreover aggravates the identification of such poles and increases the accuracy requirements.
In a more refined ansatz, we hence consider sampling $K$ and $R$ instead, from the same distribution, which leads to a drastically higher concentration of poles close to the imaginary axis when scaling both matrices with a small number $\eta$; see Fig. 2, where we show a comparison of the distributions of the poles in the complex plane between the naïve and the refined method of randomly sampled cMPS. This scaling of the matrices does not constitute a gauge of the cMPS but rather a transformation to another cMPS, cf. [47]. The matrix $Q$ is mapped to $-\frac{1}{2}\eta^2 R^\dagger R - i\eta K$, see Eq. (10), such that for small $\eta$ the eigenvalues of $Q$ will typically feature a much larger imaginary part than real part, since the spectrum of $K$ is real and the $R^\dagger R$ term enters $Q$ in second order in $\eta$. This carries over to the construction of $T$, where $R \otimes \bar R$ also appears in second order in $\eta$, as opposed to $Q \otimes \mathbb{1} + \mathbb{1} \otimes \bar Q$, which contains first order terms. Overall, for small $\eta$ most damping factors become smaller than the frequencies by several orders of magnitude, a property expected to hold if oscillations are observed. Moreover, a distinct peak structure in the Fourier transform emerges, and the poles and residues of $T$ are sufficiently separated and can be determined even with moderate amounts of noise present.
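A sketch of the refined sampling (a minimal version, assuming Eq. (10) in the form $Q = -\tfrac{1}{2}R^\dagger R - iK$ before scaling; the final check uses that the resulting transfer matrix is of Lindblad form, so all its poles lie in the closed left half plane with one at zero):

```python
import numpy as np

rng = np.random.default_rng(4)
d, eta = 3, 0.1

# Sample a Hermitian K and a Gaussian R, then scale both with a small eta.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
K = eta * (A + A.conj().T) / 2
R = eta * (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
Q = -0.5 * (R.conj().T @ R) - 1j * K      # assumed form of Eq. (10)

one = np.eye(d)
T = np.kron(Q, one) + np.kron(one, Q.conj()) + np.kron(R, R.conj())

lam = np.linalg.eigvals(T)
# Lindblad structure: poles in the closed left half plane, one at zero.
```

For small `eta` the damping (real parts, of order $\eta^2$) is parametrically smaller than the frequencies (imaginary parts, of order $\eta$), which is exactly the regime with a pronounced peak structure described above.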

Effects of noisy correlation functions
Typical experimentally measured signals suffer from inaccurate read-out. We model such noisy situations as additive Gaussian noise and study the effect on the reconstruction procedure by adding noise to correlation functions originating from a cMPS.
In particular, we apply the matrix pencil method to the noisy amputated 2-point function evaluated at 200 points $\tau_k$, for cMPS with elements of $R$, $K$ sampled from $\mathcal{N}(0, 0.01)$. The white noise function $w$ is sampled from $\mathcal{N}(0, \mathrm{mean}(|\hat C^{(2)}|)/\mathrm{SNR})$, where SNR is the signal-to-noise ratio.
In Fig. 3, $p$ is the percentage of pole sets with $\mathrm{mean}_{j=2,\dots,d^2} |(\lambda_j - \tilde\lambda_j)/\lambda_j| < 0.1$ as a function of the signal-to-noise ratio, where $\{\lambda_j\}$ are the original poles and $\{\tilde\lambda_j\}$ the pole estimates. Each point is computed from 5000 runs of our numerical experiment to gather enough statistics. What we observe is that for bond dimension $d = 2$, our reconstruction procedure is robust to reasonable amounts of noise. However, for bond dimension $d = 3$, we see that the robustness is much smaller, which hints at the practical limitations of our reconstruction procedure. The results can, for example, be improved by increasing the sampling rates; however, this can be difficult to achieve in experiments.
Note that in both cases shown in Fig. 3 our procedure behaves as expected from a proper estimator as a function of the SNR: the smaller the noise, the better the reconstruction. In fact, for zero noise, we can in general expect 100% reconstructability, independent of the bond dimension. As already mentioned, for higher order correlation functions, $n > 2$, the reconstructability of the poles does not necessarily deteriorate. In fact, since one can average over all projections that fix all but one $\tau$, a significant part of the noise is effectively averaged out.
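A self-contained toy version of such a noise experiment (our own pole values, a single SVD-filtered pencil stage, and a high SNR; the acceptance threshold follows the 10% criterion used for Fig. 3):

```python
import numpy as np

def mpm_poles(f, n_poles, P, dtau):
    """Single-stage SVD-filtered matrix pencil pole estimator (toy version)."""
    N = len(f)
    C1 = np.array([[f[i + j] for j in range(P)] for i in range(N - P)])
    C2 = np.array([[f[i + j + 1] for j in range(P)] for i in range(N - P)])
    _, _, Wh = np.linalg.svd(np.hstack([C1, C2]), full_matrices=False)
    z = np.linalg.eigvals(Wh[:n_poles, P:] @ np.linalg.pinv(Wh[:n_poles, :P]))
    return np.log(z) / dtau

rng = np.random.default_rng(6)
dtau, snr = 0.05, 1000.0
lam = np.array([-0.1 + 2.0j, -0.3 - 1.0j])
rho = np.array([1.0, 0.5])
k = np.arange(200)
f = (rho[None, :] * np.exp(lam[None, :] * dtau * k[:, None])).sum(axis=1)

# Additive complex white noise scaled by the signal-to-noise ratio.
sigma = np.mean(np.abs(f)) / snr
noise = sigma * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
lam_est = mpm_poles(f + noise, 2, P=60, dtau=dtau)

rel_err = [np.min(np.abs(lam_est - l) / np.abs(l)) for l in lam]
```

Lowering `snr` step by step and repeating over many noise realisations reproduces, qualitatively, the success-rate curves of Fig. 3.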

Reconstructability of Q and R when perturbing M
In this section, we look at the next step in the reconstruction process: recovering the cMPS parametrisation matrices $Q$ and $R$ from an imperfectly recovered matrix $M$. We do so by simulating $M$ and perturbing it directly, rather than using a reconstructed $M$ matrix from noisy correlation functions. We do it this way to have control over the size of the perturbation, and thus to separate these two different stages of the reconstruction problem and investigate their effects separately.
For this purpose, we prepare matrices $R$ and $Q$ with entries sampled from $\mathcal{N}(0,1)$, then calculate $T$ and $M$, and perturb $M$ with an error matrix $\Delta$. The perturbation has to be carefully designed in order to retain the symmetry $M = \Xi_{d,\kappa} M \Xi_{d,\kappa}$ of the unperturbed matrix: starting from a matrix $\Delta_0$ with real and imaginary parts of its entries sampled from $\mathcal{N}(0, 2^{-1/2}\,\mathrm{mean}(|M|))$, $\Delta$ is constructed such that $\Delta = \Xi_{d,\kappa} \Delta \Xi_{d,\kappa}$. Furthermore, since the first row of $M$ is set to one due to normalisation, the first row of $\Delta$ is set to zero. From the matrices $\tilde Q$ and $\tilde R$ reconstructed from $\tilde M = M + \epsilon\Delta$ with scaling parameter $\epsilon \in \mathbb{R}_+$, we build the transfer matrix $\tilde T$ and compare its spectrum with the spectrum of the original $T$. The ratio of samples with a mean deviation of $\sigma(\tilde T)$ from $\sigma(T)$ not larger than 10%, as a function of $\epsilon$, is depicted in Fig. 4 for bond dimensions $d = 2$ (blue) and $d = 3$ (green). As the error grows, the ratio of successfully reconstructed $Q$ and $R$ matrices drops for both bond dimensions. However, the $d = 2$ case is clearly more robust to perturbations. Additionally, we want to point out that any potential deviation of the spectra of $T$ and $\tilde T$ is almost certainly due to the reconstruction of $Q$.

Effects of additional interactions
As discussed earlier in Sec. II A 2, typical correlations under consideration can be seen as originating from processes where a field state is generated by an interaction with a finite dimensional system, and can be described by a Lindblad equation. In the ideal case, where the finite dimensional system interacts only with the field we measure, we obtain correlations which are perfectly described by a cMPS, or equivalently by a Lindblad equation with one Lindblad operator. In the case where the finite dimensional system interacts with other systems or fields, which we might not even know of, the Lindblad equation is altered and supplemented by more Lindblad operators, which correspond to the other systems or fields. In this case, the transfer matrix takes the form [35]
$$
  T = -iK \otimes \mathbb{1} + \mathbb{1} \otimes i\bar K + \sum_j \mathcal{R}_j , \qquad
  \mathcal{R}_j = R_j \otimes \bar R_j - \tfrac{1}{2}\,(R_j^\dagger R_j) \otimes \mathbb{1} - \tfrac{1}{2}\,\mathbb{1} \otimes \overline{R_j^\dagger R_j} ,
$$
where the additional fields are represented by the terms with $j \geq 2$. The two latter summands in $\mathcal{R}_j$ are connected to $Q$ via Eq. (10). The matrix $M$ remains that built from $R_1 \otimes \bar R_1$, because it comes from measuring the field corresponding to it, but now in the diagonal basis of a different $T$ than the one for a single field. In order to analyse the sensitivity of reconstructing the variational parameter matrices, we consider one additional perturbation field; more additional fields within the same order of magnitude yield very similar outcomes. This results in $T = -iK \otimes \mathbb{1} + \mathbb{1} \otimes i\bar K + \mathcal{R}_1 + \mathcal{R}_2$. In this section, we study how well the spectrum of $K$ can be matched depending on the scaling parameter $\epsilon \in \mathbb{R}_+$ with which the additional field is scaled, $R_2 \to \epsilon R_2$. Analogously to the last section, we prepare cMPS by randomly generating $K$, $R_1$, and $R_2$ with elements whose real and imaginary parts are sampled from $\mathcal{N}(0,1)$. We then generate $M$ matrices and from these reconstruct $R_{1,\mathrm{rec}}$ and an effective $Q_{\mathrm{rec}}$, assuming only a single field. From $R_{1,\mathrm{rec}}$ and $Q_{\mathrm{rec}}$ we compute $K_{\mathrm{rec}}$ and compare the differences of its eigenvalues, $\Delta\tilde\kappa_j = \tilde\kappa_{j+1} - \tilde\kappa_j$, with the differences $\Delta\kappa_j = \kappa_{j+1} - \kappa_j$ of the eigenvalues $\kappa_j$ of the actual $K$.
The reconstruction of $K$ is said to be successful if the maximal relative deviation $\max_{j=1,\dots,d-1} |\Delta\tilde\kappa_j - \Delta\kappa_j| / |\Delta\kappa_j|$ stays below a fixed threshold. The reconstruction rate, depending on $\epsilon$ and the bond dimension, is shown in Fig. 5. For $\epsilon \to 0$ (the single field case) all cMPS can be reconstructed. As the size of the additional field approaches the size of the main field, the reconstruction rate drops to zero. The smaller the bond dimension, the more perturbation by additional fields can be tolerated. We conclude that for sufficiently small additional fields, a successful reconstruction is in principle still feasible. Moreover, for $d = 2$, the most robust case, this is true even if the additional fields are merely one order of magnitude smaller than the main field.

B. The Lieb-Liniger model
In this section, we analyse the applicability of the results discussed above to the Lieb-Liniger model [30]. The model describes the dynamics of a one-dimensional system of bosons interacting via a delta-potential. In second quantisation, the Hamiltonian describing such a model is given by
$$
  H = \int_0^L \mathrm{d}x\, \Bigl( \partial_x\Psi^\dagger(x)\, \partial_x\Psi(x) + c\, \Psi^\dagger(x)\Psi^\dagger(x)\Psi(x)\Psi(x) \Bigr) ,
$$
where $x \in [0, L]$ is the position coordinate and $c$ is the interaction strength.
For our application, we use $Q$ and $R$ parametrisations of cMPS approximations, for several bond dimensions, of the ground state of the Lieb-Liniger Hamiltonian for particular values of the interaction strength $c$, using the algorithm and implementation of v. Hase [17]. This algorithm is an adaptation of the time-dependent variational principle for quantum lattices [15] to the continuous case (compare also Ref. [6]). It relates to an imaginary time evolution that exponentially damps all excited components of an initial state vector $|\Psi^{(d)}\rangle$ (a cMPS with bond dimension $d$) with increasing imaginary time and produces the ground state eigenvector of a Hamiltonian $H$ by applying $e^{-iHt}$ with $t \in i\mathbb{R}$ to $|\Psi^{(d)}\rangle$. The convergence of the energy of $e^{-iHt}|\Psi^{(d)}\rangle$ indicates the approach to the cMPS ansatz ground state vector, which we denote by $|\Theta^{(d)}_{Q,R}\rangle$, together with its characterising matrices $Q$ and $R$. Several interesting structural properties of the state in the cMPS representation are revealed, signifying a symmetry in the model: degeneracies and a block structure of the matrix $M$. These features emerge in the integrable Lieb-Liniger case and do not appear in Gaussian-sampled cMPS as described above. These features, which will be discussed in more detail in the following, appear regardless of the bond dimension and interaction strength used. Moreover, they do not depend on the algorithm used to obtain the ground state.

Degeneracies in the eigenvalue structure of M
The topic of this section is to characterise the structure of the spectrum of $M$ by understanding the degeneracy structure of $R$ in the exactly integrable case. In the case at hand, since all two-fold degenerate eigenvalues are equally spread into one of both blocks each, one is able to predict the spectrum of $R$ from $M$ even without reconstructing the second block. In our simulations, it is seen that the eigenvalues of $Q$ and $R$ appear in $\lfloor d/2 \rfloor$ pairs $\{q_j^{[1]}, q_j^{[2]}\}$ and $\{r_j^{[1]}, r_j^{[2]}\}$ with $q_j^{[1]} = \bar q_j^{[2]} + i\chi$ and $r_j^{[1]} = \bar r_j^{[2]} e^{i\varphi}$, respectively, for each pair $j$, where $\chi$ and $\varphi$ are independent of $j$. If $d$ is odd, the two remaining unpaired eigenvalues take the form $q = \hat q + i\chi/2$ and $r = \hat r e^{i\varphi/2}$, respectively, with $\hat q, \hat r \in \mathbb{R}$. We can simplify the structure by performing the transformations $Q \to Q - \tfrac{i\chi}{2}\mathbb{1}$ and $R \to e^{-i\varphi/2} R$, which leave the transfer matrix $T$ and all density-like $n$-point functions invariant. This ensures that the pairs now consist of complex conjugates and the spectra of $Q$ and $R$ are closed under complex conjugation, which we want to require for the further argument.
Since the spectrum of $M$ by construction is the same as that of $R \otimes \bar R$ (up to a normalisation constant), and each $\lambda \in \sigma(R \otimes \bar R)$ can be written as $r_j \bar r_k$ with certain $j, k = 1, \dots, d$, the appearance of (complex conjugate) pairs in the spectrum of $R$ implies twofold degeneracies for the according eigenvalues in the spectrum of $M$ as products of $R$ eigenvalues, in particular $r_j^{[1]} \bar r_k^{[2]} = r_k^{[1]} \bar r_j^{[2]}$ for $j \neq k$. Not all eigenvalues are degenerate: $r_j^{[1]} \bar r_j^{[2]}$ and $r_j^{[2]} \bar r_j^{[1]}$ are complex conjugates, but for $j = k$ there are no other combinations that yield the same values. Assuming that $R$ does not contain any other degeneracies, $M$ will comprise $d$ non-degenerate eigenvalues and $d^2 - d$ eigenvalues that are twofold degenerate each.
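This counting can be checked numerically (a toy $d = 3$ example with a hand-picked spectrum closed under conjugation; reading the spectrum off the diagonal of the Kronecker product keeps the degenerate copies exactly equal in floating point):

```python
import numpy as np

# d = 3 spectrum closed under conjugation: one conjugate pair, one real value.
spec = np.array([0.8 + 0.6j, 0.8 - 0.6j, 1.3 + 0.0j])
d = len(spec)
R = np.diag(spec)

# Spectrum of R (x) conj(R); for diagonal R it sits on the diagonal.
eigs = np.diag(np.kron(R, R.conj()))

# Multiplicity of each eigenvalue of R (x) conj(R).
mult = [int(np.sum(np.abs(eigs - e) < 1e-12)) for e in eigs]
# Expected: d non-degenerate eigenvalues, d**2 - d twofold degenerate ones.
```

For $d = 3$ this yields exactly three non-degenerate eigenvalues and six eigenvalues arranged in twofold degenerate pairs, as stated above.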

Block structure
Another structural observation we can make for the matrix $M$ of the ground state of the Lieb-Liniger Hamiltonian is the fact that it can be transformed into a block diagonal matrix. We do this by simply grouping vanishing and non-vanishing elements in $M$ and interchanging its rows and columns correspondingly, which amounts to a basis permutation. This way, we define the matrix $M' := M_1 \oplus M_2$, where $M_1$ and $M_2$ are block matrices that relate to the non-vanishing and vanishing residues of the cMPS, respectively. The block structure of $M'$ and the fact that the matrix exponential of the diagonal matrix $D$ is diagonal imply a block structure of their products, which carries over to the correlation functions, lets $M_2$ decouple completely, and hence disappear from the reconstruction.
We can see why all the residues corresponding to $M_2$ vanish for every $n$-point function in the following way. Let us assume we have reordered $M$ and formed $M' = M_1 \oplus M_2$ by performing the basis permutations described above, and we consider a pole $\lambda_l$ of the cMPS. For an arbitrary $n$-point function, each residue which contains the index $l$ at least once can be written as
$$
  \rho^{(n)}_{k_1,\dots,k_{n-1}} = M'_{1,k_1} M'_{k_1,k_2} \cdots M'_{k_{j-1},l}\, M'_{l,k_{j+1}} \cdots M'_{k_{n-1},1}
$$
with $k_j = l$ and $j = 2, \dots, n-2$ (the boundary cases $k_1 = l$ and $k_{n-1} = l$ are treated below).
We take $l \in \{\zeta + 1, \dots, d^2\}$, where $\zeta$ is the dimension of $M_1$, i.e., $\lambda_l$ corresponds to a pole associated with $M_2$. In this situation, we note that two things can happen. Either $k_{j+1} \leq \zeta$, and $M'_{l,k_{j+1}} = 0$ since the entry is located in the lower left block of $M'$, which contains just zeros, and thus the residue vanishes. Or $k_{j+1} > \zeta$, and there exists an entry $M'_{k_m,k_{m+1}}$ with $m > j$, $k_m > \zeta$, and $k_{m+1} \leq \zeta$ such that $M'_{k_m,k_{m+1}} = 0$. This has to happen eventually, since the last entry in the residue expression is of the form $M'_{k_{n-1},1}$ and $1 \leq \zeta$. Clearly, the residue vanishes again, and the same holds for the boundary indices $k_1 = l$ or $k_{n-1} = l$.
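A quick numerical check of this vanishing (assuming, as above, that the residues factorise into products of $M$ entries; the block sizes are our own toy choice):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
zeta, dim2 = 2, 2                     # toy block sizes; total dimension 4
D2 = zeta + dim2

# Block diagonal M' = M1 (+) M2, with the first-row normalisation applied
# inside the first block; the off-diagonal blocks are identically zero.
M = np.zeros((D2, D2), dtype=complex)
M[:zeta, :zeta] = (rng.standard_normal((zeta, zeta))
                   + 1j * rng.standard_normal((zeta, zeta)))
M[zeta:, zeta:] = (rng.standard_normal((dim2, dim2))
                   + 1j * rng.standard_normal((dim2, dim2)))
M[0, :zeta] = 1.0

def residue(ks):
    """rho^(n) as a product of M entries along the chain 1, k_1, ..., k_{n-1}, 1."""
    path = [0] + list(ks) + [0]
    out = 1.0 + 0.0j
    for a, b in zip(path[:-1], path[1:]):
        out = out * M[a, b]
    return out

# Every 4-point residue touching the second block vanishes identically.
vanish = [residue(ks) for ks in product(range(D2), repeat=3)
          if any(k >= zeta for k in ks)]
```

Because every index chain starts and ends in the first block, any chain visiting the second block must cross a zero off-diagonal block, so all such residues are exactly zero.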

Reconstruction
Because of the block structure of M̃, we conclude that there is no direct way of obtaining all poles of cMPS approximations of the Lieb-Liniger ground state from an n-point density-like correlation function. In this case, the p-number [24], defined as the minimum order of a p-point function of a cMPS that reveals all poles, is infinite. There is a useful connection between the degeneracies in the spectrum of M and its block structure for the Lieb-Liniger model. It turns out that all non-degenerate eigenvalues are related to entries in the first block, while the degenerate pairs are distributed such that one eigenvalue of each pair is associated with the first block and the other with the second. Since only the first block contributes to any density-like correlation function, all degeneracies are effectively lifted, and hence full reconstruction is possible. Since all eigenvalues of M that appear in the vanishing second block also appear in the visible first block, one can in principle determine the spectrum of R even without full knowledge of M. The same holds for the spectrum of Q, since D − M has the same spectral properties. For reconstructing both R and Q in the corresponding gauge, however, our procedure requires full knowledge of M. Note again, though, that for the full reconstruction of the density-like correlation functions, this full knowledge is not required.
This structure disappears if integrability is broken, and hence in a neighbourhood around the (cMPS approximation of the) Lieb-Liniger ground state. Imaginary time evolution gives us a notion of distance to the limit of the approximation process, as we can, e.g., observe the convergence of matrix entries along imaginary time paths. The block structure and the degeneracies become more pronounced the closer one gets to the limit point. Ultimately, at the limit point of the imaginary time evolution, the degeneracies and the block structure of M will prevent our methods from recovering a full cMPS description of the system in terms of matrices Q and R. On the other hand, for each state along such a path, we can in principle apply our reconstruction method: the closer we get, the better all characteristic parameters can be reconstructed, although the problem also becomes increasingly ill-conditioned. A reconstruction of the n-point functions of arbitrary order remains possible, as it is based on the matrices D and M alone, and determining these quantities is in principle possible. Since the second block does not contribute to any n-point function, the applicability of "Wick's theorem" for (continuous) matrix product states is maintained even in this case, and we can still successfully predict higher-order from lower-order correlation functions.

VI. SUMMARY AND OUTLOOK
In this work, we have introduced the concept of quantum field tomography. In spite of the inherent difficulties of attempting to reconstruct a continuous system, i.e., a system with infinitely many degrees of freedom, we have shown that this task can be accomplished when only a relevant class of naturally occurring states is considered. This is physically well motivated, since one expects naturally appearing states not to be of the most general form but to be restricted to a smaller class of states. This is clearly the case in physical applications in which, for example, matrix product states, a very well known ansatz for one-dimensional many-body systems, have proven to be a very successful model for describing correlations and dynamics. Here, we concentrated on developing tomographic tools for one-dimensional continuous many-body systems or quantum fields.
For this purpose, we employed the continuous generalisation of the MPS variational class of states: the continuous matrix product state formalism. Based on this formalism and the predicted structure of the relevant data, i.e., the correlation functions, we developed a procedure to extract a best-fit cMPS using state-of-the-art statistical estimation tools. In this way, we are able to deliver a working and readily applicable tool to study these types of systems.
Formally, we have used the cMPS formalism to describe the structure of correlation functions that can in principle be measured in experiments. Having identified this basic structure, we defined the tools needed to extract the pertinent information from the data. For this purpose, we use the matrix pencil method as a viable way to determine the variational parameters of the cMPS from a correlation function. We show that one can in principle successfully extract a cMPS description of a system for arbitrary bond dimensions. For noisy signals, however, one is in general limited to lower bond dimension approximations. Generally, this approach is applicable to states with low entanglement, similarly to matrix product states approximating states that satisfy an area law for suitable Rényi entanglement entropies. In the discrete case, the connection between having "low entanglement" and being approximable by a matrix product state of low bond dimension has already been fleshed out fully rigorously [8,41]. In the continuous case, this connection is surely equally plausible, but it awaits a similarly rigorous treatment.
We have also given an in-depth study of the applicability of the reconstruction tools and their robustness for different noise models. Extensive numerical simulations were employed which provide at least empirical confidence of the performance of the reconstruction tools. We find that for the cases studied in this work, our methods are reasonably robust to noise when searching for low bond dimension cMPS estimates.
It is important to note that the methods developed in this work are also readily applicable to the translationally invariant discrete MPS case. Since one in reality deals with discrete (sampled) data even if the system is continuous in nature, all the methods developed here carry over, with minimal modifications, to the discrete case of matrix product states, reflecting a finite lattice spacing. There is also evidence that the approach taken here reveals insight into the structure of the underlying model as such and can detect signatures of integrability.
The novel methods proposed in this work open a new avenue for exploring continuous systems of many particles, both in and out of equilibrium. They constitute a step towards assessing strongly correlated models with a tomographic mindset, without having to make a model of the system in the first place: instead, one asks what state is most compatible with the data found. This is a most healthy mindset, specifically in the context of emergent quantum technologies, where one aims at assessing the state of a quantum system without making too strong assumptions in the first place. In quantum information science, quantum state tomography is already a pillar on which the field rests, a technique routinely applied in most experiments. The present work opens up perspectives for thinking about quantum field tomography of strongly correlated quantum systems, as they feature in dynamical quantum simulators. Specifically in this context, the tools presented here can be used for the partial benchmarking of analog quantum simulators. To fully explore the potential of such an approach for studying many-body systems out of equilibrium constitutes a truly exciting perspective.

VII. ACKNOWLEDGEMENTS
This work was supported by the BMBF (QuOReP), the EU (RAQUEL, COST, SIQS), the FQXi, and the ERC (TAQ). We thank T. J. Osborne and M. Friesdorf for discussions and M. von Hase for providing the code using the time-dependent variational principle.