Lensfree diffractive tomography for the imaging of 3D cell cultures.

New microscopes are needed to help realize the full potential of 3D organoid culture studies. To image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a volume as large as 21.5 mm³ of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup.


INTRODUCTION
The study of in vitro cell populations remains a challenging task when one needs to gather large, quantitative, and systematic data over extended periods of time while preserving the integrity of the living sample. As discussed in Ref. 1, there is a need for a new microscopy technique that is label-free and non-phototoxic, so as to be as 'gentle' as possible with the sample, and 'smart' enough to observe the sample exhaustively at a variety of scales both in space and time. Lens-free video microscopy addresses these needs in the context of 2D cell culture. 2,3 As scientists better understand the benefits of growing organoids in 3D and routinely adopt 3D culture techniques, lens-free imaging must also be adapted to 3D cultures. The new challenge is therefore to extend lens-free microscopy techniques to the acquisition and fully 3D reconstruction of large organoid structures. [4][5][6] The adaptation of lensless microscopy techniques to the imaging of 3D organoid cultures is the scope of the present paper.
We first describe an experimental bench dedicated to lens-free diffractive tomography of 3D biological samples. Next, we present the Fourier diffraction theorem and the three dedicated reconstruction algorithms we developed to retrieve 3D objects. We conclude with 3D reconstructions of a HUVEC cell culture and a RWPE1 prostatic cell culture grown in 3D, comparing the performance of the three proposed reconstruction methods.

MATERIALS AND METHODS

Experimental bench
Unlike 2D lens-free imaging, where a single image suffices to retrieve the 2D sample, the reconstruction of a 3D object from lens-free acquisitions requires multiple viewing angles. For this purpose, we have developed an experimental bench, illustrated in Fig. 1. It is composed of a semi-coherent RGB illumination source * and a CMOS sensor † .
The experimental bench follows the traditional layout of 2D lens-free microscopy (see Fig. 1). The object is placed between a sensor and a semi-coherent illumination. Nonetheless, the illumination is tilted by an angle of θ = 45° and the sensor is slightly offset so that the geometrical projection of the 3D object of interest remains centred regardless of the angle ϕ around the sample. This optimises the field of view, increasing the overall volume that one can reconstruct.

Figure 1. Left-hand side: experimental bench dedicated to lens-free diffractive tomography. Right-hand side: optical scheme of the system. The semi-coherent incident plane wave U_inc is scattered by the 3D sample: each element of the volume behaves like a secondary spherical source, creating a diffracted wave U_dif. The sensor records the intensity of their summation: I_tot = |U_tot|² with U_tot = U_inc + U_dif.

Fourier diffraction theorem
It can be shown 7 that there exists a strong link between the diffracted wave U_dif and the scattering potential f of the 3D object to reconstruct. This is the Fourier diffraction theorem, which states that, at a given plane z = z_s and for an incident plane wave U_inc(r) = e^{i k_0·r} in a medium of refractive index n_0, the 2D Fourier transform of U_dif and the 3D Fourier transform of f are linked by the relation (using the notation of Fig. 2):

Ũ_dif(u, v; z_s) = (i / (2w)) e^{i w z_s} f̂(α, β, γ),

where (u, v) and (α, β, γ) are respectively the coordinates in the Fourier space ‡ of the plane z = z_s and in the Fourier space of the object, which satisfy the following relations:

α = u − k_{0,x}, β = v − k_{0,y}, γ = w − k_{0,z}, with w = √(k_0² − u² − v²) and k_0 = 2π n_0 / λ.

Note that, looking more closely at Fig. 2, this theorem can be used both for simulation purposes (going clockwise on the figure, from a 3D simulated object to the diffracted waves U_dif for the different lighting positions) and for direct reconstruction (going counter-clockwise on the figure, from the diffracted waves recorded by the sensor toward the retrieved object, via a mapping of the Fourier domain onto spherical caps).

‡ The Fourier transform and its inverse are defined for a given function g as ĝ(u) = ∫ g(x) e^{−iux} dx and g(x) = (1/2π) ∫ ĝ(u) e^{iux} du. This definition extends naturally to higher dimensions.
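As an illustration, the frequency mapping of the theorem can be sketched in a few lines of numpy. The wavelength, refractive index, and pixel pitch below are assumed values for the sketch, not the parameters of the setup.

```python
import numpy as np

# Assumed illumination and sampling parameters (illustrative only)
wavelength = 550e-9          # metres
n0 = 1.33                    # refractive index of the medium
k0_norm = 2 * np.pi * n0 / wavelength

# 2D spatial frequencies (u, v) on the sensor plane z = z_s
pitch = 1.67e-6              # assumed sensor pixel pitch
N = 256
freqs = 2 * np.pi * np.fft.fftfreq(N, d=pitch)
u, v = np.meshgrid(freqs, freqs, indexing="ij")

# Propagating components only: w = sqrt(k0^2 - u^2 - v^2) must be real
mask = u**2 + v**2 < k0_norm**2
w = np.sqrt(np.maximum(k0_norm**2 - u**2 - v**2, 0.0))

# Incident wave vector for the 45-degree tilt of the setup
theta = np.deg2rad(45)
k0 = k0_norm * np.array([np.sin(theta), 0.0, np.cos(theta)])

# Fourier diffraction theorem mapping: each sensor frequency (u, v)
# addresses the object-spectrum coordinate (alpha, beta, gamma),
# which lies on a spherical cap shifted by -k0.
alpha = u - k0[0]
beta = v - k0[1]
gamma = w - k0[2]

# Sanity check: (alpha, beta, gamma) + k0 lies on the Ewald sphere of
# radius k0_norm for every propagating frequency.
radius = np.sqrt((alpha + k0[0])**2 + (beta + k0[1])**2 + (gamma + k0[2])**2)
assert np.allclose(radius[mask], k0_norm)
```

Each tilt angle ϕ shifts the cap differently, which is why multiplying the viewing angles fills the object's Fourier domain.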
Note also that this theorem requires knowledge of the diffracted wave U_dif in both amplitude and phase, whereas with our setup only I_tot = |U_tot|² is recorded by the sensor. Hence there is a lack of phase information in the hologram space, as is the case in 2D lensfree microscopy.

Reconstruction methods
The first step of each method is a registration of the data: a region of interest is chosen in the dataset and the frames acquired at the different angles are aligned on this pattern via a least-squares minimisation algorithm, as described in Ref. 6.
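As a minimal illustration of this alignment step, the snippet below performs a brute-force least-squares search over integer shifts; it stands in for the method of Ref. 6, and all names and parameters are hypothetical.

```python
import numpy as np

def align_to_reference(frame, reference, max_shift=10):
    """Find the integer (dy, dx) shift minimising the sum of squared
    differences between the shifted frame and the reference pattern.
    Brute-force search over a small window (illustrative only)."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            ssd = np.sum((shifted - reference) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

# Synthetic check: a circularly shifted copy should be recovered exactly.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(np.roll(ref, -3, axis=0), 5, axis=1)
print(align_to_reference(frame, ref))  # -> (3, -5)
```

In practice sub-pixel accuracy would be needed, but the principle of aligning every angular view to a common region of interest is the same.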
Once the data are aligned on specific holograms recorded at different angles, three different methods were developed to reconstruct 3D samples from the 2D acquisitions.
The first two methods are based on the Fourier diffraction theorem, used to map the Fourier domain f̂ of the 3D object f. Each acquisition with a different illumination gives information on the coefficients of f̂ lying on a spherical cap (Fig. 2 used counter-clockwise). Both methods require an estimation of the diffracted wave U_dif.
Phase ramp In this method (see Ref. 6), the unknown phase on the sensor is estimated as a phase ramp whose characteristics match those of the illumination. This method has the advantage of being fast, allowing large volumes to be reconstructed in a small amount of time. Nevertheless, this remains a strong approximation of the phase, and only a small part of the Fourier domain of the object is accessible: the coefficients lying on the spherical caps. One can therefore expect strong artefacts.
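The approximation amounts to attaching the phase of the tilted incident plane wave to the measured amplitude; a numpy sketch, where all numerical values are assumptions for illustration rather than the parameters of the bench:

```python
import numpy as np

# Assumed illumination: tilted plane wave in the xz-plane (illustrative)
wavelength, n0 = 550e-9, 1.33
k0_norm = 2 * np.pi * n0 / wavelength
theta = np.deg2rad(45)               # illumination tilt of the setup
kx, kz = k0_norm * np.sin(theta), k0_norm * np.cos(theta)

pitch, N = 1.67e-6, 256              # assumed sensor sampling
x = np.arange(N) * pitch
X, Y = np.meshgrid(x, x, indexing="ij")
z_s = 1e-3                           # assumed sample-to-sensor distance

# Measured intensity supplies the amplitude; the phase ramp exp(i k0 . r)
# evaluated on the sensor plane z = z_s supplies the missing phase.
I_tot = np.ones((N, N))              # placeholder for a recorded hologram
U_est = np.sqrt(I_tot) * np.exp(1j * (kx * X + kz * z_s))
```

The estimated field U_est can then be fed to the Fourier-domain mapping of the theorem, one spherical cap per illumination angle.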
Phase retrieval In this method, the unknown phase on the sensor is estimated by an iterative phase retrieval on each 2D image of the dataset: the 3D object is approximated by an average median plane, so that standard phase retrieval algorithms developed in the realm of 2D lens-free imaging can be applied. This method solves one pitfall of the previous one: the phase introduced in the reconstruction is more realistic and can reduce some artefacts. Nevertheless, it does not solve the Fourier mapping limitation: only the same coefficients on the spherical caps are accessible.
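Such a scheme can be sketched as a Gerchberg–Saxton-style loop alternating between the sensor plane and the assumed median object plane; the angular spectrum propagator and all sampling values below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def angular_spectrum(u, dz, wavelength, pitch):
    """Propagate a 2D complex field by dz with the angular spectrum method."""
    N = u.shape[0]
    f = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(u) * H)

def retrieve_phase(intensity, dz, wavelength, pitch, n_iter=50):
    """Alternate between the sensor plane (amplitude constraint from the
    measured intensity) and an assumed median object plane."""
    amp = np.sqrt(intensity)
    u = amp.astype(complex)
    for _ in range(n_iter):
        obj = angular_spectrum(u, -dz, wavelength, pitch)   # back-propagate
        # (an object-plane constraint, e.g. positivity, could be applied here)
        u = angular_spectrum(obj, dz, wavelength, pitch)    # forward
        u = amp * np.exp(1j * np.angle(u))                  # keep phase only
    return u
```

The retrieved complex field at the sensor then replaces the phase-ramp estimate in the Fourier-domain mapping.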
3D inverse problem The last method presented here uses the Fourier diffraction theorem as a direct model for simulating the data, i.e. the recorded intensity of the total wave U_tot (Fig. 2 used clockwise). This model is used in an inverse problem approach to iteratively retrieve the 3D object.
The first advantage of such an approach is that we are able to model the end-to-end non-linear process of data acquisition and to solve the inverse problem without requiring a direct inversion of the model: the data simulated for a given scattering potential f are compared with the recorded data. The second advantage is that one can add a priori information to the reconstruction process, either as constraints on the definition domain C(f) or via a regularisation term μ_r‖f‖_r. In this work, we tested two kinds of regularisation norms: the L1-norm, fostering sparse reconstructions, and the total variation, a constraint on the sparsity of the image gradient, leading to sharp objects. This method also allows the alignment of the data to be improved across the iterations, increasing the overall reconstruction quality: the simulated data can be used as a reference to better align the experimental dataset.

The zoom in the red medallion shows the artefacts of the first method around an isolated single cell: in the xy-plane one can see white and black residues around the branches. These are twin images of the focused object, a well-known phenomenon in classical 2D in-line holography due to the lack of phase information. In the xz/yz-planes, some artefacts along straight lines due to the limited angular coverage are visible. Nonetheless, the object has a similar spatial extension in the three directions.
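The regularised iteration can be illustrated with a proximal-gradient (ISTA) sketch on a generic linear forward model A, using the L1 regulariser; for total variation, the soft-thresholding step would be replaced by a TV proximal operator. The model and all parameters below are toy assumptions, not the paper's actual non-linear forward model.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: promotes sparse solutions."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, mu, n_iter=500):
    """Minimise 0.5*||A f - y||^2 + mu*||f||_1 by proximal gradient descent.
    Here A is a toy linear stand-in for the (linearised) forward model."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ f - y)             # gradient of the data term
        f = soft_threshold(f - step * grad, step * mu)
    return f

# Toy check: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
f_true = np.zeros(40)
f_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
f_hat = ista(A, A @ f_true, mu=0.05)
```

The sparsity prior plays the role of the L1 term above, while the data-fidelity gradient corresponds to comparing the simulated and recorded holograms at each iteration.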

On HUVEC network
As one can expect, the twin-image artefacts are strongly reduced as soon as a 2D phase retrieval is performed. Orthogonal views (not presented here) of the 3D reconstruction performed with the 2D phase retrieval method would nevertheless show that the second type of artefact, due to the limited angular coverage, is still present. These artefacts tend to disappear in the reconstruction obtained with the 3D inverse problem approach.
Looking at the profiles, one can see that the signal-to-noise ratio (SNR), empirically defined here as the ratio of the intensity of the objects to the mean background value, increases between the first two methods thanks to the reduction of the twin-image signal, and that the signal of the isolated cell gains a factor of 10 with the inverse problem approach. On such data, one can wonder whether using a 3D inverse problem is the best solution: indeed, the 2D phase retrieval method appears sufficient to analyse the network structures and is obtained with a faster running code (see Table 1).

Fig. 4 presents similar views of a prostatic cell culture embedded in Matrigel™. These cells tend to create organoids; once stabilized, they start to grow networks. The field of view appears more crowded than in the previous section and the sample presents a 3D spatial extension. The dataset is composed of 3 × 16 acquisitions taken at 16 different angles (Δϕ = 18.8°) in the three available wavelengths of the LED. Similar conclusions can be drawn concerning the artefacts and the increase of the SNR. Furthermore, the 3D inverse problem approach shows here its advantages over the two other methods: the organoids are sharper and well localized, and some are not even visible with the two other methods. This gives credit to this method, despite its long computational time, which can appear prohibitive (see Table 1).
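For reference, the empirical SNR definition used above can be written in a few lines; the object mask would in practice come from a segmentation of the reconstruction, and this helper is purely illustrative.

```python
import numpy as np

def empirical_snr(volume, object_mask):
    """Empirical SNR as defined in the text: mean intensity of the object
    voxels divided by the mean value of the background (all other voxels)."""
    return volume[object_mask].mean() / volume[~object_mask].mean()

# Toy example: one bright voxel over a uniform background of 2.0.
vol = np.full((4, 4, 4), 2.0)
mask = np.zeros_like(vol, dtype=bool)
mask[0, 0, 0] = True
vol[0, 0, 0] = 20.0
print(empirical_snr(vol, mask))  # -> 10.0
```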

CONCLUSION
We presented a novel tool to perform acquisitions of large 3D cell cultures. Based on the in-line holographic principle, it can image unlabelled and unstained samples. To overcome the limitations raised by such a microscope, namely the lack of phase information and the limited angular coverage, we developed three dedicated algorithms.
We showed that these algorithms are able to retrieve the 3D object, but with different qualities in terms of signal-to-noise ratio and computational time. Giving the result in a single pass, the algorithm based on a phase ramp is fast but leads to a signal that can be hard to distinguish from the artefacts and the noise. Providing the best SNR, the algorithm based on the 3D inverse problem approach can nevertheless be extremely time-consuming.
The choice of one algorithm over another will therefore depend on the targeted application. To identify isolated single cells in a 3D volume, which provide a strong signal, the first algorithm can be sufficient. On the other hand, if one aims at reconstructing complex overlapping structures, only the 3D regularised iterative reconstruction can provide a pertinent result.