Epi-mode tomographic quantitative phase imaging in thick scattering samples

Abstract: Quantitative phase imaging (QPI) is an important tool in biomedicine that allows for the microscopic investigation of live cells and other thin, transparent samples. Importantly, this technology yields access to the cellular and sub-cellular structure and activity at nanometer scales without labels or dyes. Despite this unparalleled ability, QPI's restriction to relatively thin samples severely hinders its versatility and overall utility in biomedicine. Here we overcome this significant limitation of QPI to enable the same rich level of quantitative detail in thick scattering samples. We achieve this by first illuminating the sample in an epi-mode configuration and using multiple scattering within the sample (a hindrance to conventional transmission imaging used in QPI) as a source of transmissive illumination from within. Second, we quantify phase via deconvolution by modeling the transfer function of the system based on the ensemble average angular distribution of light illuminating the sample at the focal plane. This technique packages the quantitative, real-time sub-cellular imaging capabilities of QPI into a flexible configuration, opening the door for truly non-invasive, label-free, tomographic quantitative phase imaging of unaltered thick, scattering specimens. Images of controlled scattering phantoms, blood in collection bags, cerebral organoids and freshly excised whole mouse brains are presented to validate the approach.


Introduction
Quantitative phase imaging uses variations in the index of refraction of a sample as a source of endogenous contrast [1], providing label-free information about sub-cellular structures and allowing for the reconstruction of valuable biophysical parameters, such as cell dry mass at femtogram scales, mass transport, and sample thickness and fluctuations at nanometer scales [2,3]. As a result, QPI has become a valuable tool in biology and medicine, with a growing set of applications in fields like oncology, hematology, pathology, immunology, developmental biology, and neuroscience [1,2]. However, QPI has suffered from the need for trans-illumination through relatively thin objects in order to (1) gain access to the forward-scattered field, which carries crucial low spatial frequency information about a sample, and (2) avoid contributions from multiply scattered light or out-of-focus planes. This restriction has severely limited the biological applicability of phase imaging to mostly thinly sliced histological tissue, cultured cells, or other thin transparent samples in vitro [4,5].
Methods for QPI typically involve interfering beams of a coherent source [1], but phase contrast itself can be produced simply with partially coherent asymmetric illumination in a typical wide-field microscope, without interferometry [6]. Images produced from incoherent or partially coherent light sources have the advantage of increased resolution and a lack of noise from speckle or other coherent artifacts [7]. The phase contrast produced from asymmetric illumination can be used to recover quantitative phase with a complete linearized model of the imaging system via a regularized deconvolution with the optical transfer function of the microscope [8]. This type of phase reconstruction method does not suffer from phase wrapping artifacts and has recently been widely adopted for QPI (including 3D tomographic phase reconstruction of thin samples) using partially coherent structured illumination sources such as LED arrays [9] and modified pupils [10,11].
Using principles of asymmetric illumination to recover quantitative phase, we transform transmissive QPI into an epi-mode imaging modality with tomographic capabilities. To achieve this we first use a modified version of an illumination scheme known as oblique back-illumination microscopy (OBM) [12], which produces a virtual light source within a thick sample, via multiple scattering, that emulates a transmission geometry with oblique illumination. By subtracting two images acquired with opposite oblique illumination angles, this strategy effectively removes contributions from out-of-focus planes, and yields high-resolution, tomographic differential phase contrast in thick specimens [12,13]. We then model the multiple scattering process used for illumination to arrive at the ensemble average angular distribution of light approaching the target, and ultimately convert this distribution into a transfer function of the system [14,15]. This allows us to leverage the same regularized deconvolution methods to recover quantitative phase. Our approach, termed quantitative oblique back-illumination microscopy (qOBM), incorporates the trappings of QPI into a versatile epi-configuration, allowing non-invasive, label-free, real-time quantitative imaging in media that are otherwise out of the reach of previous QPI technologies. With qOBM, quantitative phase imaging can now be extended to many more areas of biomedicine.
Here we describe the system and theoretical framework, verify our approach using controlled scattering phantoms, and demonstrate the utility of qOBM by imaging blood in collection bags, cerebral organoids and freshly excised whole mouse brains.

Microscope and differential phase contrast
The system consists of a traditional microscope with epi-illumination emanating from two pairs of optical fibers positioned around the objective housing (Fig. 1(a)). The fibers from each pair are placed diametrically opposite from one another as shown in Fig. 1(b). When light from an LED light source is deployed through one of the fibers, it produces trans-illumination at the focus of the microscope objective by way of multiple scattering (Fig. 1(c)). With an overall oblique illumination at the focal plane, lateral variations in index of refraction redirect light toward or away from the acceptance angles of the objective's numerical aperture, producing phase contrast in the observed intensity. As with OBM [12], qOBM first generates differential phase contrast by subtracting images taken with the diametrically opposed fibers. The asymmetric illumination and the subtraction process produce differential phase contrast and, because out-of-focus contributions in either illumination image are the same, the subtraction rejects out-of-focus content, allowing for tomographic sectioning [12,16]. Because of the way phase images are produced here, the position of the fibers has to be carefully considered. Truly diffuse light emerging from a highly scattering medium is not typically suitable for generating phase contrast, as the source uniformity fails to create the illumination asymmetry necessary to produce a strong difference transfer function. In other words, the level of obliquity of the virtual source produced from scattering in this scenario is small. But with short off-axis source-detector separations, or with an initial oblique incident illumination, the diffuse transport approximation breaks down [17], and light preferentially emerges at a shallower vertical angle and in the direction between the source and the detector (the "shear" direction in the parlance of differential interference contrast) [18].
This preference for forward scattering is the source of the asymmetric illumination that gives rise to phase contrast in the difference image in OBM and hence qOBM.
Conventional OBM achieves asymmetric illumination by positioning the source fibers a small distance off-axis from the detector; this directional preference manifests as small deviations from the classical Lambertian profile [16]. Here we achieve a dramatic increase in the obliquity of light incident on the target by positioning our fibers at a canted vertical angle [19], typically 45 degrees (Fig. 1(a)), 9 mm away from the center of the microscope objective, which improves phase contrast and sensitivity to fine detail [20]. The angle of the incident fibers and their horizontal distance from the detection axis are degrees of freedom to control the effective source distribution, offering flexibility in illumination. Furthermore, the two sets of opposing fiber sources are positioned perpendicularly to one another (Fig. 1(b)) to produce images with orthogonal shear directions, which is necessary to reconstruct a quantitative 2D image of phase [18].
In practice, differential phase contrast is produced by first normalizing each intensity image (Figs. 2(a) and 2(b)) by its overall variance, which removes any variations between the LEDs' output. Then images corresponding to the diametrically opposed sides are subtracted and normalized by their sum (i.e., the background intensity) to produce a differential phase contrast image (Figs. 2(c) and 2(d)) [6,21]:

I_DPC = (I_L − I_R) / (I_L + I_R),    (1)

where I_L and I_R represent the images with left and right illumination, respectively. Conventional, absorption-based contrast can be obtained by taking the sum of the two opposite oblique illumination images [21].
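The normalization and subtraction steps can be sketched in a few lines of numpy. This is a minimal illustration of Eq. (1), not the acquisition pipeline itself; the variance normalization follows the description above.

```python
import numpy as np

def dpc_image(i_left, i_right):
    """Differential phase contrast from two oppositely illuminated frames,
    following Eq. (1): I_DPC = (I_L - I_R) / (I_L + I_R). Each frame is first
    normalized by its overall variance to remove LED output differences."""
    i_l = i_left / i_left.var()
    i_r = i_right / i_right.var()
    return (i_l - i_r) / (i_l + i_r)

def absorption_image(i_left, i_right):
    """Conventional absorption-based contrast: the sum of the two frames."""
    return i_left / i_left.var() + i_right / i_right.var()
```

Swapping the two inputs flips the sign of the DPC image, and identical frames yield zero phase contrast, as expected for a background region.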
In our present configuration the four LEDs (Luxeon SinkPAD-II) are coupled into multimode plastic fibers (Thorlabs FP1000ERT, numerical aperture (NA) 0.5, 1000 µm core diameter), each using an aspheric condenser lens (Thorlabs ACL2520U-A, NA 0.6). These fibers are housed in a custom 3D-printed objective adapter which holds them at the desired incident angle and off-axis source-detector distance. Informed by previous experiments, our system currently uses two sets of colored LEDs, one at 530 nm (green) and another at 630 nm (red), selected to provide the absorption spectral information key to performing blood cell classification [13]. Different wavelengths can be used to optimize for a specific task. Imaging was performed on an inverted microscope (Zeiss Axiovert 200) with a 60x objective (Nikon S Plan ELWD, NA 0.7), at a resolution of 0.6 µm. The LEDs illuminate the sample individually, and light is detected with a digital camera (pco.edge 4.2 LT sCMOS) at 20 Hz. The illumination and camera triggering were coordinated with custom software (National Instruments LabVIEW 2017) and a data acquisition block (National Instruments SCB-68A).

System transfer function
To extract quantitative phase from the collected differential phase contrast image, a transfer function for the imaging system as a whole must be found. While image formation in a microscope can be explained in terms of the propagation of mutual intensity, an equivalent description can be made in the context of angular spectra [22], which is more natural for our illumination scheme. Thus, the measured intensity can be described by the illumination field E(x) (with Fourier pair Ẽ(u)), with a distribution of illumination angles over u, the wavelength-scaled 2-dimensional angular coordinates, multiplied by the object transmittance function o(x). This complex distribution is then filtered by the pupil function of the system, and the squared magnitude is taken to yield the observed intensity. This can be expressed succinctly as

I(r) = |F⁻¹{P(f) F{E(x) o(x)}}|²(r).    (2)

Here F represents a 2D Fourier transform from spatial coordinates into spatial-frequency coordinates, x are the spatial coordinates at the focal plane, P(f) is the pupil function, either 0 or 1, in spatial frequency coordinates f that correspond to physical coordinates at the back focal plane, and r represents spatial coordinates at the camera. The coordinates x and r are in units of distance, and correspond when scaled appropriately for magnification.
If the light source is incoherent, then Eq. (2) can be expressed as (see Appendix for details)

I(r) = ∫ S(u₀) |F⁻¹{P(f) O(f − u₀)}|²(r) du₀,    (3)

where S(u) = |Ẽ(u)|² is the corresponding angular intensity distribution from the scattering medium. In this form, the terms inside the modulus bracket indicate the equivalent intensity of an image formed when illuminated from a coherent source with a single incident angle u₀, and the outer integral indicates that the image formed from partially or fully incoherent illumination is the incoherent sum of images formed from contributing individual coherent plane waves. Further, Eq. (3) demonstrates that the angular distribution of the source illumination intensity around the target gives sufficient information to produce a transfer function. In other words, the extent of spatial coherence of the microscope is fully determined by the breadth of the illumination intensity in angle space [23]. This is beneficial for qOBM, as this quantity is readily available from photon transport simulation.
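The incoherent-sum picture of Eq. (3) can be illustrated numerically: each source angle tilts the illumination, the pupil low-pass filters the resulting field, and the per-angle intensities add. The following is a simplified discrete sketch (unit magnification, frequencies in cycles per field of view), not the full qOBM model.

```python
import numpy as np

def partially_coherent_image(obj, source, pupil):
    """Sketch of Eq. (3): the image under partially coherent illumination is
    the incoherent sum of coherent images, one per illumination angle.
    `obj` is the complex transmittance o(x); `source` maps a frequency shift
    u = (uy, ux), in cycles per field of view, to its intensity weight S(u);
    `pupil` is a binary mask sampled on the numpy FFT frequency grid."""
    n = obj.shape[0]
    y, x = np.mgrid[0:n, 0:n] / n
    image = np.zeros((n, n))
    for (uy, ux), s in source.items():
        tilt = np.exp(2j * np.pi * (ux * x + uy * y))          # oblique plane wave
        field = np.fft.ifft2(pupil * np.fft.fft2(obj * tilt))  # pupil filtering
        image += s * np.abs(field) ** 2                        # intensities add
    return image
```

An on-axis point source with a uniform object reproduces a flat image, while opposed oblique sources applied to a pure phase object produce the left/right intensity asymmetry that Eq. (1) converts into phase contrast.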
To extract the overall system transfer function from Eq. (3), it is necessary to expand the quantity within the modulus brackets using the identity |a|² = a·a*. Substituting the variable m = f − u, and introducing the integration variable n, we transform o(x) to O(m) and O*(n) (as shown in Appendix), and Eq. (3) becomes, in agreement with ref. [24],

I(r) = ∬ C(m, n) O(m) O*(n) e^{i2π(m−n)·r} dm dn,    (4)

where O(m) is the Fourier transform of the target object function o(x), m and n are variables of integration in the spatial frequency space, and C is the transfer function of the microscope for a single image:

C(m, n) = ∫ S(u) P(m + u) P*(n + u) du.    (5)

Here P is the pupil function of the system, and S(u) represents the power spectrum in angular frequency of the illuminating light, or, proportionately, in illumination angle at the object. Equations (4) and (5) recapitulate the four-dimensional transfer function for a partially coherent microscope derived previously in the literature from the Hopkins equation for mutual coherence [15,24]. This demonstrates the well-known result that the angular intensity distribution provides sufficient information to determine the coherence of a system with an incoherent source [23], validating our use of illumination intensity angular spectra as a starting point for transfer function generation.
Equation (4) can be represented as a four-dimensional convolution with complex conjugate terms, a source of nonlinearity. This can be linearized with the assumption of a weak object. A thick sample may itself not be weakly scattering as a whole, but scattering from outside of the focal plane simply contributes to the angular intensity spectrum, so local phase effects outside of this region do not contribute to the image. The scattering of the object within the Rayleigh range is all that contributes to the reconstruction, and this can be said to be weak:

o(x) = e^{−µ(x) + iφ(x)} ≈ 1 − µ(x) + iφ(x),    (6)

where µ(x) and φ(x) are the (real valued) absorption and phase functions of the object. This means that the object function in the spatial frequency space is given by

O(m) = δ(m) − M(m) + iΦ(m),    (7)

where M and Φ are Fourier transforms of µ and φ, respectively. Expanding the product of the objects from Eq. (4) gives

O(m)O*(n) ≈ δ(m)δ(n) − M(m)δ(n) − M(n)δ(m) + i[Φ(m)δ(n) − Φ(n)δ(m)],    (8)

where the cross and squared terms are neglected as they are assumed to be small. The delta functions in Eq. (8) serve to reduce the dimensions of the four-dimensional convolution to two dimensions when substituted in Eq. (4) (see Appendix). From the two-dimensional result, an optical transfer function is derived to convert the DPC image to a phase image. The image that is produced in this way has phase gradient contrast, and the equivalent 2D optical phase transfer function is therefore given by

C_Δ(u) = ∫ [S(u′) − S(ū′)] P(u′) [P(u′ + u) − P(u′ − u)] du′,    (9)

where ū represents the same coordinates inverted in the shear direction, ū = [−u₁, u₂], and the delta functions in Eq. (8) have allowed us to set n = 0 in Eq. (5). This transfer function is real and odd; therefore the point spread function given by its Fourier transform (c_δ(r)) is purely imaginary. Hermitian symmetry ensures that an even source distribution gives rise to images that display absorption information, while an odd source distribution (synthesized here with the subtraction operation in Eq. (1)) gives phase information. Finally, the sum in the denominator of Eq. (1) normalizes the image by C(0, 0), a real-valued constant background term, and the DPC image can now be expressed in the frequency domain as

Ĩ_DPC(u) = i [C_Δ(u) / C(0, 0)] Φ(u),    (10)

which directly yields access to the phase of the object (see Appendix for detailed derivation).
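One plausible discrete reading of the phase transfer function of Eq. (9) can be evaluated with FFTs: the sums over u′ are cross-correlations, and the difference of shifted pupils corresponds to taking the odd part of the result. This is a sketch under the stated assumptions (binary pupil, source and pupil sampled on the numpy FFT frequency grid, shear along one array axis), not the full implementation.

```python
import numpy as np

def freq_flip(a):
    """Evaluate a(-u) on an FFT-ordered frequency grid."""
    return np.roll(a[::-1, ::-1], 1, axis=(0, 1))

def dpc_phase_otf(source, pupil, shear_axis=0):
    """Discrete sketch of Eq. (9):
    C_delta(u) = sum_u' [S(u') - S(u'_bar)] P(u') [P(u'+u) - P(u'-u)],
    with u'_bar the frequency mirrored along the shear axis.  The sum over u'
    is a cross-correlation, computed here with the correlation theorem."""
    s_diff = source - np.roll(np.flip(source, axis=shear_axis), 1, axis=shear_axis)
    f2 = np.fft.fft2
    # term(u) = sum_u' [dS * P](u') P(u' + u)
    term = np.real(np.fft.ifft2(np.conj(f2(s_diff * pupil)) * f2(pupil)))
    # subtracting term(-u) leaves a real, odd transfer function, as required
    return term - freq_flip(term)
```

By construction the output is odd and vanishes for a source that is symmetric in the shear direction, reproducing the statement that only an antisymmetric source component carries phase information.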

Angular distribution of illumination
As was shown in the previous section, in order to quantify phase obtained with oblique back-illumination, it is necessary to establish the source lighting distribution S(u) in spatial frequency space. The radiance of multiply scattered light as a function of position and angle is treated in radiative transport theory [25], to which analytical solutions are only available in certain simplified geometries [26]. We therefore model the propagation of the source illumination photons with a Monte Carlo photon transport program, allowing for the simulation of scattering media of arbitrary size, shape, and scattering properties. Finally, we bin the angles of photons that reach the object plane into a 2D histogram. Monte Carlo photon scattering simulations are performed with MCXLAB, an open-source light transport simulation software suite [27], and processed with MATLAB (Mathworks, 2018b). A model geometry of the microscope at the object was constructed, 50 mm × 50 mm wide, with isotropic layers of 1 mm of air, 1 mm of glass for the slide, and 23 mm of a given scattering medium. Optical properties for tissues and scattering phantoms were obtained from established values in the literature [17,28-31].
These data are then transformed from angle space to spatial frequency space (see Fig. 1(d)), and then up-sampled and smoothed to produce an effective source spatial spectrum (Fig. 1(e)). The transformation from angle space to spatial frequency space involves an obliquity factor to account for the projection from a hemispherical grid to a planar grid: the count in each bin is multiplied by √(1 − u² − v²), which normalizes the bin counts by the unit solid angle subtended by a grid unit in the normalized spatial frequency space and is necessary to produce a physically meaningful intensity distribution.
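The binning and obliquity correction can be sketched as follows. Here `vx` and `vy` stand for the direction cosines of simulated photons crossing the focal plane; the histogram size and final normalization are illustrative choices, not the values used in the experiment.

```python
import numpy as np

def source_spectrum(vx, vy, n_bins=64):
    """Sketch of the angle-to-frequency step: bin Monte Carlo photon exit
    direction cosines on a flat (u, v) grid, then weight each bin by the
    obliquity factor sqrt(1 - u^2 - v^2) so counts become intensity per unit
    solid angle.  vx, vy are arrays of direction cosines (|v| <= 1)."""
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    counts, _, _ = np.histogram2d(vx, vy, bins=[edges, edges])
    centers = 0.5 * (edges[:-1] + edges[1:])
    u, v = np.meshgrid(centers, centers, indexing="ij")
    obliquity = np.sqrt(np.clip(1.0 - u**2 - v**2, 0.0, None))
    s = counts * obliquity            # intensity per unit solid angle
    total = s.sum()
    return s / total if total > 0 else s   # normalized source spectrum S(u)
```

The resulting array plays the role of S(u) in Eqs. (5) and (9) after up-sampling and smoothing.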
It is important to highlight that local variations in optical properties behind and around the focal plane do not significantly alter the ensemble angular distribution of light produced from multiple scattering, given the assumption that the tissue at any one point is weakly scattering. Throughout the scattering medium, such local fluctuations contribute to the aggregate bulk scattering properties of the tissue as a whole and are thus taken into account implicitly. Therefore, beyond the creation of an individual optical transfer function for a given scattering medium, the numerical treatment of any sample is exactly the same. Error may arise from using an inexact model for the scattering medium, in proportion to the degree of mismatch between the geometrical and optical properties of the medium and those used in simulation. For example, if the brain images presented below (see section 3.3) are processed assuming the medium is primarily composed of white matter instead of gray matter, the recovered quantitative phase values show discrepancies of <5%, and qualitative structural differences are imperceptible.

Quantitative phase reconstruction
From the distribution obtained from the photon transport simulation for a single LED source, an optical transfer function can be produced for the differential phase contrast (DPC) image formed by the microscope using Eq. (9) (Fig. 1(f)), which can then be applied to recover the object function (Eq. (6)) with a deconvolution.
The formalism presented above treats only two sources, estimating the phase from a single DPC image which carries information about refractive index variations along one direction only (the direction between the two sources) [18]. Using a single shear direction to recover phase can lead to streak artifacts (Figs. 2(e) and 2(f)) [9]. To produce a fully quantitative phase image, we use the second pair of illumination sources, positioned orthogonally in horizontal angle to the first pair. Then, the image of the ground truth object is reproduced with Tikhonov regularized deconvolution [8,9] according to

φ(r) = F⁻¹{ [Σₖ (λₖ/λ₀) C*_DPC,k(u) Ĩ_DPC,k(u)] / [Σₖ |C_DPC,k(u)|² + α] },    (11)

where C_DPC = iC_Δ/C(0,0), α is a regularization parameter, and the wavelength parameter λₖ allows the phase displacement to be normalized to the corresponding phase of a single wavelength λ₀ (green, λ₀ = 530 nm, was used in this case). Here k = 2, corresponding to the two orthogonal DPC images. The result is a cross-sectional, en-face quantitative phase image of an object embedded in a scattering medium. Combining the contributions from two orthogonal dimensions allows features oriented in any direction to appear with equal contrast, and produces images that are both quantitative and superior in quality to those produced by either of the individual pairs of illuminations on their own (see Fig. 2).
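The Tikhonov inversion of Eq. (11) reduces to element-wise operations in the Fourier domain. Below is a single-wavelength sketch (the λₖ/λ₀ scaling is omitted for simplicity); `dpc_images` and `otfs` are matched lists of DPC images and their phase transfer functions.

```python
import numpy as np

def qphase_tikhonov(dpc_images, otfs, alpha):
    """Tikhonov-regularized phase recovery from orthogonal DPC images, a
    single-wavelength sketch of Eq. (11):
    phi = IFFT[ sum_k C_k* I_k / (sum_k |C_k|^2 + alpha) ]."""
    num = np.zeros_like(otfs[0])          # complex accumulator
    den = np.zeros(otfs[0].shape)         # real accumulator
    for img, c in zip(dpc_images, otfs):
        i_hat = np.fft.fft2(img)
        num += np.conj(c) * i_hat         # matched filter per shear direction
        den += np.abs(c) ** 2
    return np.real(np.fft.ifft2(num / (den + alpha)))
```

With two orthogonal transfer functions, spatial frequencies where one OTF vanishes are recovered through the other, which is why the combined reconstruction is free of the streak artifacts seen with a single shear direction.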
The regularization parameter α can have a significant effect on the phase values in the images, so in order to ensure that an unbiased measure of phase could be obtained, it was necessary to implement an algorithm to arrive at the value automatically. Although it affects both image quality and quantitative value, there is a theoretical optimal choice for α that is large enough to maximize smoothing of the image noise while being small enough to minimize the mismatch in the division operation in Eq. (11) [8].
The regularization parameter was determined with generalized cross validation (GCV) [32], as it requires no prior information about image or noise power. The GCV estimate of α is given in linear algebra formalism by [8]

GCV(α) = ||[I − A(α)] g||² / (tr[I − A(α)])²,    (13)

where A is a convolution matrix representation of the point spread function of the system, g is the collected image data, I is the identity matrix, and A(α) = AA*(AA* + αI)⁻¹. Despite the recondite formalism, this approach has an intuitive explanation. The numerator in Eq. (13) represents the mean square error between the captured phase gradient image and the reconstructed phase gradient image. This reconstructed phase gradient is produced by filtering the reconstructed quantitative phase object image through the system transfer function a second time, as if it were the original object being imaged. This reproduces a second phase gradient image similar to the one originally captured, only distorted slightly by having been processed a second time. The more similar the original captured phase gradient image is to the reprocessed one, the more faithfully the deconvolution procedure inverts the transfer function of the system. Minimizing this difference therefore reduces the error introduced by the regularization parameter. The denominator in Eq. (13) represents the squared sum deviation from unity when the forward transfer function of the system is deconvolved from itself, according to Eq. (11). This term mirrors the error in the numerator by analogy, but it serves to normalize the numerator by the amount of error the regularization parameter induces on the reprocessed transfer function itself. Images captured of different scenes with different ambient illumination intensity may alter the minimum value found with the numerator error term alone. The denominator term, then, ensures that the regularization parameter chosen is invariant with different images processed from the same modality.
Because of this normalization term, it is only necessary to perform this procedure once for a given imaging sample, rather than for each phase image.
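For a convolutional system the GCV score of Eq. (13) can be evaluated cheaply in the Fourier domain, where A is diagonal and I − A(α) has eigenvalues α/(|C(u)|² + α). The following sketch scans a list of candidate α values; the candidate grid is an illustrative choice.

```python
import numpy as np

def gcv_score(alpha, otf, img_hat):
    """GCV score of Eq. (13), specialized to the Fourier domain: with the
    transfer function diagonal in frequency, the eigenvalues of I - A(alpha)
    are alpha / (|C(u)|^2 + alpha).  `img_hat` is the FFT of the image."""
    w = alpha / (np.abs(otf) ** 2 + alpha)        # diagonal of I - A(alpha)
    residual = np.sum(np.abs(w * img_hat) ** 2)   # ||(I - A(alpha)) g||^2
    return residual / np.sum(w) ** 2              # divided by tr(I - A(alpha))^2

def pick_alpha(otf, img, candidates):
    """Return the candidate regularization value minimizing the GCV score."""
    img_hat = np.fft.fft2(img)
    scores = [gcv_score(a, otf, img_hat) for a in candidates]
    return candidates[int(np.argmin(scores))]
```

Because the denominator normalizes out overall intensity scale, the selected α can be reused across images acquired under the same conditions, as noted above.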

Quantitative phase validation
To validate our approach, we first imaged a quartz lithography relief (Fig. 3(a)) with 300, 200, and 100 nm height structures, beneath a 1% intralipid agar phantom to provide a well-defined scattered light source. The scattering medium was poured onto a cover slip that sat on the surface of the lithograph. This configuration provided scattering from the intralipid preparation while keeping the interface between quartz (n=1.54) and air (n=1) on the lithograph target.
Figures 3(a) and 3(b) show the reconstructed quantitative image of the lithograph. The results show good agreement with the expected values, although some artifacts are observed due to the high-frequency content inherent in the sharp artificial edges of this sample (the artifact is not nearly as severe in biological samples). This effect arises from the limited ability of the linear reconstruction method to capture very high-frequency content, in an effect analogous to the well-known problem of halo formation in other phase imaging methods [33-35]. Some work has been done on removing these effects in DPC-based quantitative phase imaging, but it exceeds the scope of this work [36]. Next, we measure the phase sensitivity using the standard deviation of a critically sampled featureless 40 µm × 40 µm region on the lithograph [10], which yields 0.039 radians. Using the formula for optical path displacement, ∆z = λ∆φ/(2π∆n), where ∆z represents the height of the object, λ the wavelength, ∆n the refractive index difference between the object and the surrounding medium, and ∆φ the phase difference, this translates to 1.75 nm of height sensitivity at the air-quartz interface.
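The optical path displacement conversion used above is a one-line computation; the sketch below states the formula generically (the inputs are placeholders, not the experimental values).

```python
import numpy as np

def height_from_phase(delta_phi, wavelength, delta_n):
    """Optical path displacement: dz = (wavelength * dphi) / (2 * pi * dn),
    where delta_n is the refractive index difference across the interface."""
    return wavelength * delta_phi / (2 * np.pi * delta_n)
```

As a sanity check, a full 2π of phase at an index contrast of 1 corresponds to exactly one wavelength of height, and the relation is linear in the measured phase.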
To further validate the system on features with more natural shapes, we imaged 2 µm polystyrene beads (n=1.581) immersed in oil (n=1.455) in a 100 µm well beneath a 1% intralipid agar scattering phantom, as with the lithograph (Fig. 3(c)). To compare the experiment to theory, we also simulate an image and profile of a bead using Eq. (3) (Fig. 3(c) inset and 3(d)). Cross-sections of 20 beads from experimental results were averaged and compared to the simulated reconstruction and the expected (ideal) height profiles (Fig. 3(d)). Discrepancies in the experimental results arise from a mismatch in source distribution (between the Monte Carlo results and those physically produced) or improper normalization. However, error in the reconstruction of the simulated bead phase image is solely attributable to limitations of the weak object approximation. Note that all three profiles are in agreement, indicating that a suitable transfer function has been produced and that the error from the weak phase approximation is minimal. This weak phase error, emerging from higher order terms neglected in Eq. (6), can become considerable for large phase objects, but its effect is predictable and monotonic, and therefore can be accommodated heuristically [10].

Blood banks often require rapid, sterile, non-destructive assessments of blood inside collection or storage units (i.e., bags) to assess, for example, transfusion viability by checking for 'storage lesions' based on red blood cell (RBC) morphology [37], or potency for hematopoietic stem cell therapy based on total nucleated cell count [38]. Thus, as an initial demonstration of qOBM on biological samples, we imaged whole blood from a patient with sickle cell disease inside a clear blood collection bag (Fig. 4(a)).

Blood samples
Human blood was drawn from consenting human donors by venipuncture into heparinized tubes and diluted with phosphate-buffered saline (PBS) to 1% of pure blood concentration. All procedures adhered to approved Institutional Review Board protocols. The blood was transferred into a custom-made translucent PVC bag (InstantSystems) and placed on a glass slide over the objective to flatten the bag and reduce bowing. The objective lens was focused on cells from the blood bag suspension that had settled on the inner surface of the bag, which were illuminated by the multiple scattering of the LED light from the fibers. Imaging from this constant plane provided a consistent cross-section of the cells in the image and reduced motion artifacts. Out-of-focus RBCs are not observed due to the tomographic optical sectioning capabilities of qOBM. Furthermore, the dual wavelength configuration yields spectral information from the absorption images to distinguish white from red blood cells [13]. Figure 4 shows representative results, where the quantitative phase images show remarkable details of gross cellular morphology and sub-cellular features. Here we can clearly see normal biconcave and sickled RBCs, acanthocytes, and white blood cells along with their internal contents, including the nucleus. In addition to extracting feature-rich morphology, qOBM maintains a reliable quantitative phase profile. A cross section of 20 biconcave red blood cells from the displayed image is compared with an ideal standard theoretical healthy cell profile [28] and the numerical simulation thereof (Fig. 4(b) inset). Again, the results show good agreement.

Next, we image whole mouse brains. All animal experimental protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of the Georgia Institute of Technology. The mice (Mus musculus) used were of a C57BL/6 genetic background and were 14 months old.
Whole brains were dissected, briefly rinsed in phosphate buffered saline and then placed directly on a microscope slide for imaging without staining, slicing or other alteration unless otherwise specified in the corresponding image caption.

Brain imaging
All images were collected at 20 Hz, limited by the signal-to-noise ratio of the image given the intensity of scattered illumination light impinging on the object at the focal plane. As the inverse transfer function is produced beforehand for a given scattering medium geometry, quantitative phase images are computed and displayed in real-time, facilitating the task of finding and identifying structures and landmarks. Figure 5 shows the results. Here fine structural details can clearly be observed, such as neural cell soma with resolvable internal cell contents (Fig. 5(e)), smooth muscle cells, blood vessels and nearby glial cells (Fig. 5(c)). From coronal sections, we are able to resolve axons (Fig. 5(f)) as well as cell bodies of neurons and glia (Fig. 5(d)). As expected, the phase values for RBCs in situ in Fig. 5 are lower than those of the RBCs in PBS in Fig. 4, due to the relatively high refractive index environment of blood serum and plasma in the tissue [17,39].
To demonstrate qOBM's optical sectioning capability, we captured 100 images in a z-stack (0.6 µm steps), centered on a blood vessel in the cortex (Figs. 5(a) and 5(b)), and rendered a 3D volume from concatenated 2D images (i.e., without the use of 3D deconvolution). The maximum intensity projection of the 3D volume clearly shows the vessel, red blood cells, and tissue structures. Note that this 3D volume is not meant to represent a map of refractive index, as that would require quantitative z-sectioning information contained in, for example, a 3D optical transfer function. Rather, this is meant to demonstrate the extent of out-of-focus rejection permitted by the inherent optical sectioning capabilities of qOBM.
Brain imaging typically relies on fluorescent markers or dyes to label specific structure or function [40]. However, QPI is emerging as a powerful label-free alternative technique to measure neural cell structure and functional activity in isolated transparent samples [2]. With the demonstrated ability to tomographically measure whole tissue structure with inherent optical sectioning, qOBM marks an important step in bringing QPI into the realm of volumetric brain imaging. Further, with its real-time display and ease of integration into any standard microscope, qOBM may additionally prove valuable as a complement to labeled functional neural imaging techniques.

Finally, we image cerebral organoids. Organoids are cell-derived 3D organ models that mimic the structure and cell diversity of native biological tissue, and have rapidly emerged as powerful in vitro models of diseases that are intractable or impractical to study in humans or animal models [41]. Organoids are typically grown and incubated in enclosed bioreactors, wherein non-invasive live imaging of the growth and organization of developing heterogeneous tissue could facilitate an understanding of, for example, complex developmental diseases [42], such as Autism Spectrum Disorder [43] and microcephaly [41]. Figure 6 shows a series of qOBM images of a 26 day old cerebral organoid [44] taken at various depths with 10 µm increments. Notable features include neuroblasts and immature neurons (Figs. 6(a) and 6(b)), mature neurons with sub-cellular detail (Figs. 6(b) and 6(c)), neural progenitor cells (Fig. 6(d)), and the characteristic "rosette" shape formed by neural progenitor cells that develop and grow radially (Fig. 6(e)). These images show qOBM's unique potential to monitor the structure, growth, and health of organoids without labels.

Discussion and conclusion
We have demonstrated that qOBM provides highly detailed, tomographic, quantitative information of thick biological samples at the nanometer level with a convenient epi-mode geometry. The technology offers real-time imaging capabilities and does not require any scanning optics or expensive lasers, making the system simple, relatively inexpensive, and robust. Other label-free modalities that image in a reflection mode include optical coherence tomography (OCT) [45,46] and confocal reflectance microscopy (CRM) [47,48]. However, these modalities rely on backscattered light for contrast, which requires a strong index of refraction mismatch, resulting in a loss of sensitivity to important low-frequency content of samples. qOBM, on the other hand, has access to the forward-scattered field, mimicking a transmission-mode microscope and thus remaining sensitive to subtle changes in index of refraction [49]. Autofluorescence is also often used as a source of label-free contrast, but fluorescence suffers from photobleaching, phototoxicity, and a lack of information from non-fluorescent components [50]. Nevertheless, qOBM can be combined with autofluorescence (or even fluorescence from exogenous labels) or other laser scanning technologies (including multiphoton methods) as a complementary modality; indeed, OBM has already been integrated with a laser scanning microscope [51]. Finally, recent developments in QPI have enabled imaging of samples thicker than a single cell layer, and even provide 3D capabilities [5]; however, these technologies, unlike qOBM, still require a physical transmission-based system, which severely restricts the types of samples that can be analyzed.
In conclusion, qOBM unlocks the potential of quantitative phase imaging for thick scattering samples in epi-mode, which can have a profound impact across a broad landscape in biomedicine. Here we validated the approach, starting from a theoretical analysis and then imaging controlled samples, and showed three potential applications: blood imaging in collection bags, organoid imaging, and brain imaging. qOBM is a flexible tool that can be applied to many areas of basic and translational research and clinical practice.

Appendix: qOBM image formation
We begin by expanding the illumination intensity incident on the camera, as given in Eq. (2), with some reorganization:

I(r) = ∬ ⟨E(u)E*(u′)⟩ [∫ P(f) O(f − u) e^{i2πf·r} d²f] [∫ P*(f′) O*(f′ − u′) e^{−i2πf′·r} d²f′] d²u d²u′.    (14)

Equation (14) can be simplified by recognizing that the effective source comes from scattering of light from multiple locations in 3D space and is therefore fully incoherent [52]: ⟨E(u)E*(u′)⟩ = |E(u)|² δ(u − u′). With the illuminating power S(u) = |E(u)|², Eq. (14) simplifies to

I(r) = ∫ S(u) |∫ P(f) O(f − u) e^{i2πf·r} d²f|² d²u.    (15)

We expand the integral within the modulus in Eq. (15) by letting m = f − u, rearranging terms, and substituting O(m), the Fourier transform of o(x):

I(r) = ∫ S(u) |∫ P(u + m) O(m) e^{i2πm·r} d²m|² d²u.    (16)

Now using the identity |∫ f(m) dm|² = ∬ f(m) f*(n) dm dn and introducing the variable of integration n, Eq. (16) becomes

I(r) = ∬ [∫ S(u) P(u + m) P*(u + n) d²u] O(m) O*(n) e^{i2π(m − n)·r} d²m d²n.    (17)

This formula matches the result for image formation in a transmission microscope from the principle of the propagation of mutual intensity [24,53,54].
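The modulus-square identity invoked above, |∫ f(m) dm|² = ∬ f(m) f*(n) dm dn, can be checked numerically on a sampled one-dimensional test function. This is a minimal sketch; the Gaussian test function is an arbitrary choice:

```python
import numpy as np

# Sample an arbitrary complex-valued test function f(m) on a 1D grid.
m = np.linspace(-5, 5, 401)
dm = m[1] - m[0]
f = np.exp(-m**2) * np.exp(1j * m)

# Left-hand side: |∫ f(m) dm|^2, with the integral as a Riemann sum.
lhs = np.abs(np.sum(f) * dm) ** 2

# Right-hand side: ∬ f(m) f*(n) dm dn, via an outer product over the grid.
rhs = np.sum(np.outer(f, np.conj(f))).real * dm**2

assert np.isclose(lhs, rhs)
```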
From Eq. (17), we can formulate a four-dimensional optical transfer function for a transmission microscope, which is given by the bracketed integral and represented by Eq. (5),

C(m, n) = ∫ S(u) P(u + m) P*(u + n) d²u.    (18)

Furthermore, condensing Eq. (17) yields Eq. (4), the four-dimensional integral representation of the image, which can be further simplified if we make the substitution q = m − n and rewrite the expression in the Fourier space of the camera plane, q:

Ĩ(q) = ∫ C(n + q, n) O(n + q) O*(n) d²n.    (19)

While highly simplified, the presence of the complex conjugate of the object's transmission function makes the description of the image, in general, non-linear, since

O(n + q) O*(n) = [δ(n + q) + iΦ(n + q) − M(n + q)] [δ(n) − iΦ*(n) − M*(n)].    (20)

However, with the weak object approximation we can ignore the cross terms and square terms in Eq. (20), yielding Eq. (8), which linearizes the description of the image. Substituting Eq. (8) into Eq. (19) and using the sifting property of the delta function, we get

Ĩ(q) = C(0, 0) δ(q) + i[C(q, 0) Φ(q) − C(0, −q) Φ*(−q)] − [C(q, 0) M(q) + C(0, −q) M*(−q)].    (21)

Looking at the C terms in Eq. (21) (c.f. Eq. (5)): C(q, 0) = ∫ S(u) P(u + q) P*(u) d²u and C(0, −q) = ∫ S(u) P(u) P*(u − q) d²u; also note that C(0, 0) = ∫ S(u) P(u) P*(u) d²u represents the DC background term of a single acquired image. These three transfer functions are important, as they will allow us to connect Eq. (21) to the image formed by the difference of two oppositely-illuminated images. Finally, note that when S(u) is even in both dimensions and the pupil function is radially symmetric and real, then C(q, 0) = C(0, −q); and if S(u) is odd in one dimension (and even in the other), C(q, 0) = −C(0, −q).
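The symmetry properties quoted above can be checked numerically by evaluating C(q, 0) = ∫ S(u)P(u + q)P*(u) d²u and C(0, −q) = ∫ S(u)P(u)P*(u − q) d²u on a discrete grid for a real, radially symmetric pupil with an even and an odd source. This is a minimal sketch in which the integrals become sums over pixels; the pupil radius and source shapes are illustrative, not the experimental ones:

```python
import numpy as np

n = 64
u1, u2 = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
pupil = ((u1**2 + u2**2) < (n // 4) ** 2).astype(float)  # real, radially symmetric

def C_q0(S, q):
    """C(q, 0) = sum_u S(u) P(u + q) P*(u) for an integer shift q = (q1, q2)."""
    P_shift = np.roll(pupil, shift=(-q[0], -q[1]), axis=(0, 1))  # P(u + q)
    return np.sum(S * P_shift * pupil)

def C_0mq(S, q):
    """C(0, -q) = sum_u S(u) P(u) P*(u - q)."""
    P_shift = np.roll(pupil, shift=(q[0], q[1]), axis=(0, 1))  # P(u - q)
    return np.sum(S * pupil * P_shift)

S_even = np.exp(-(u1**2 + u2**2) / (2 * (n // 6) ** 2))  # even in both dimensions
S_odd = S_even * np.sign(u1)                             # odd along u1, even along u2

q = (5, 3)
assert np.isclose(C_q0(S_even, q), C_0mq(S_even, q))   # even source: C(q,0) = C(0,-q)
assert np.isclose(C_q0(S_odd, q), -C_0mq(S_odd, q))    # odd source:  C(q,0) = -C(0,-q)
```

The shifts stay small enough that the periodic wraparound of np.roll never moves the pupil support across the grid boundary, so the sums faithfully mimic the continuous integrals.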
By illuminating with two diametrically opposing LEDs, the net 'source' for the difference image in qOBM will be odd in one dimension, while for the summation image it will be even in both dimensions, allowing us to separate amplitude and phase. We can therefore define synthetic transfer functions formed by the respective linear operations,

C_Δ(q) = ∫ [S(u) − S(u′)] P(u + q) P*(u) d²u,    (22)

C_Σ(q) = ∫ [S(u) + S(u′)] P(u + q) P*(u) d²u,    (23)

where u′ represents the coordinates in the back focal plane inverted in the shear direction (e.g. from left illumination to right illumination): u′ = [−u₁, u₂]. The fact that there is no operation available to make the source distribution odd in the dimension perpendicular to the shear direction means that a second orthogonal pair of illuminations is necessary to derive a complete quantitative picture of the phase. Now we find the image equations for the sum and difference images, using Eqs. (21)-(23) and the symmetry arguments for the transfer function with a net even and odd source:

Ĩ_Δ(q) = i C_Δ(q) [Φ(q) + Φ*(−q)] − C_Δ(q) [M(q) − M*(−q)],    (24)

Ĩ_Σ(q) = 2 C(0, 0) δ(q) + i C_Σ(q) [Φ(q) − Φ*(−q)] − C_Σ(q) [M(q) + M*(−q)].    (25)

Next we transform these equations back to the spatial coordinates of the camera, r. To do this, we make use of the fact that φ(r) and µ(r) are real-valued functions and thus obey the Hermitian symmetry relation f(x) ⇔ F(s) = F*(−s) [55], where ⇔ signifies a Fourier pair:

I_Δ(r) = 2i c_δ(r) ∗ φ(r).    (26)

Here c_δ(r) is the effective point spread function of the difference image, given by the inverse FT of Eq. (22). From Eq. (26), it should be noted that, because the transfer function C_Δ(q) is real and odd, c_δ(r) must be imaginary, and the intensity image is therefore real-valued (but not positive in general). For the same reason, F⁻¹{C_Δ(q)δ(q)} = C_Δ(0) = 0; conversely, because C_Σ is real and even, F⁻¹{C_Σ(q)δ(q)} = C_Σ(0) = 2·C(0, 0). Thus the sum image gives

I_Σ(r) = 2 C(0, 0) − 2 c_σ(r) ∗ µ(r),    (27)

where C(0, 0) is again the real-valued background intensity term given by Eq. (4), and c_σ is the sum-image point spread function. The last point we seek to make here is regarding the normalization for a DPC image. The DPC image (Eq. (1)), according to Eqs. (26) and (27), is given by

I_DPC(r) = [i c_δ(r) ∗ φ(r)] / [C(0, 0) − c_σ(r) ∗ µ(r)] ≈ [i c_δ(r) ∗ φ(r)] / C(0, 0).
Here we assumed the object's weak absorption is negligible compared to the DC term. Finally, in order to produce a transfer function C_DPC that gives rise to a real point spread function c_DPC, and that isolates the phase transmission function, we modify Eq. (22) so that

C_DPC(q) = −i · [∫ [S(u) − S(u′)] P(u + q) P*(u) d²u] / [∫ S(u) P(u) P*(u) d²u].
Thus the DPC image can be expressed as I_DPC(r) = c_DPC(r) ∗ φ(r). This formula agrees with the general formula for the optical transfer function for differential phase contrast that has previously been reported [6,7,9]. However, here we have explicitly tied the formula to the source distribution from first principles in a way that we hope is approachable and understandable. Furthermore, we believe that the explicitly stated intermediate steps and simplifying assumptions will facilitate future development in the field.
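As a closing illustration of the DPC normalization discussed in this appendix, the raw operation of Eq. (1), the difference of two oppositely illuminated frames divided by their sum, can be sketched as follows. This is a hypothetical example with synthetic frames; in the weak-object limit the normalization divides out the DC background:

```python
import numpy as np

def dpc_image(I_left: np.ndarray, I_right: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Differential phase contrast image: the difference of two oppositely
    illuminated frames normalized by their sum, which divides out the
    DC background term."""
    return (I_left - I_right) / (I_left + I_right + eps)

# Synthetic frames: a weak phase signal adds to one frame and subtracts
# from the opposite one on top of a common DC background (illustrative only).
rng = np.random.default_rng(1)
background = 100.0
phase_signal = rng.standard_normal((32, 32))
I_left = background + phase_signal
I_right = background - phase_signal
dpc = dpc_image(I_left, I_right)

# For weak signals the result reduces to phase_signal / background.
assert np.allclose(dpc, phase_signal / background, atol=1e-3)
```

The small eps guards against division by zero in dark pixels; in practice the denominator is dominated by the DC background, consistent with the weak-absorption assumption above.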